I am writing code that reads a large text file line by line and finds the lines that start with UNIQUE-ID (there are many of them in the file) and come right before a certain line (in this example, one that starts with 'REACTION-LAYOUT -' and in which the 5th element of the string is OLEANDOMYCIN). The code is the following:
data2 = open('pathways.dat', 'r', errors='ignore')
pathways = data2.readlines()
PWY_ID = []
line_cont = []
L_PRMR = []  # Left primary
car = []
# i is the line number (first element of enumerate),
# while line is the line content (2nd elem of enumerate)
for i, line in enumerate(pathways):
    if 'UNIQUE-ID' in line:
        line_cont = line
        PWY_ID_line = line_cont.rstrip()
        PWY_ID_line = PWY_ID_line.split(' ')
        PWY_ID.append(PWY_ID_line[2])
    elif 'REACTION-LAYOUT -' in line:
        L_PWY = line.rstrip()
        L_PWY = L_PWY.split(' ')
        L_PRMR.append(L_PWY[4])
    elif 'OLEANDOMYCIN' in line:
        car.append(PWY_ID)
print(car)
However, the output is instead all the lines appended to PWY_ID (the output of the first if statement), as if it were ignoring all the rest of the code. Can anybody help?
Edit
Below is a sample of my data (there are like 1000-ish similar "pages" in my textfile):
//
UNIQUE-ID - PWY-741
.
.
.
.
PREDECESSORS - (RXN-663 RXN-662)
REACTION-LAYOUT - (RXN-663 (:LEFT-PRIMARIES CPD-1003) (:DIRECTION :L2R) (:RIGHT-PRIMARIES CPD-1004))
REACTION-LAYOUT - (RXN-662 (:LEFT-PRIMARIES CPD-1002) (:DIRECTION :L2R) (:RIGHT-PRIMARIES CPD-1003))
REACTION-LAYOUT - (RXN-661 (:LEFT-PRIMARIES CPD-1001) (:DIRECTION :L2R) (:RIGHT-PRIMARIES CPD-1002))
REACTION-LIST - RXN-663
REACTION-LIST - RXN-662
REACTION-LIST - RXN-661
SPECIES - TAX-351746
SPECIES - TAX-644631
SPECIES - ORG-6335
SUPER-PATHWAYS - PWY-5266
TAXONOMIC-RANGE - TAX-1224
//
I think it would have been helpful if you'd posted some examples of data. But an approximation to what you're looking for is:
with open('pathways.dat', 'r', errors='ignore') as infile:
    i = infile.read().find(string_to_search)
    infile.seek(i + number_of_chars_to_read)
I hope this piece of code will help you focus your script on this line.
print(car) is printing out the list of all lines added by PWY_ID.append(PWY_ID_line[2]) in the first if, since you are appending the whole PWY_ID list to car when you do car.append(PWY_ID).
So, if you want to print out the list of lines with OLEANDOMYCIN, you might want to just do car.append(line).
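A hedged sketch of that idea, restructured so the pathway ID is remembered and recorded only when the compound shows up (the function name and split index are illustrative, matching lines like UNIQUE-ID - PWY-741):

```python
def pathways_with_compound(lines, compound='OLEANDOMYCIN'):
    """Sketch: return the UNIQUE-ID of each record whose later lines
    mention the given compound. Names here are illustrative."""
    found = []
    current_id = None
    for line in lines:
        if line.startswith('UNIQUE-ID'):
            # "UNIQUE-ID - PWY-741" -> element 2 is the ID itself
            current_id = line.rstrip().split(' ')[2]
        elif compound in line and current_id is not None:
            if current_id not in found:
                found.append(current_id)
    return found

# Usage (sketch):
# with open('pathways.dat', errors='ignore') as data:
#     print(pathways_with_compound(data))
```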
Related
I'm trying to loop through some unstructured text data in Python. The end goal is to structure it in a DataFrame. For now I'm just trying to get the relevant data into an array and understand the line/readline() functionality in Python.
This is what the text looks like:
Title: title of an article
Full text: unfortunately the full text of each article,
is on numerous lines. Each article has a differing number
of lines. In this example, there are three..
Subject: Python
Title: title of another article
Full text: again unfortunately the full text of each article,
is on numerous lines.
Subject: Python
This same format is repeated for lots of text articles in the same file. So far I've figured out how to pull out lines that include certain text. For example, I can loop through it and put all of the article titles in a list like this:
a = "Title:"
titleList = []
sample = 'sample.txt'
with open(sample, encoding="utf8") as unstr:
    for line in unstr:
        if a in line:
            titleList.append(line)
Now I want to do the below:
a = "Title:"
b = "Full text:"
d = "Subject:"
list = []
sample = 'sample.txt'
with open(sample, encoding="utf8") as unstr:
    for line in unstr:
        if a in line:
            list.append(line)
        if b in line:
            # 1. Concatenate this line with each line after it, until I reach
            #    the line that includes "Subject:". Ignore the "Subject:" line,
            #    stop the "Full text:" subloop, add the concatenated full text
            #    to the list array.
            # 2. Continue the for loop within which all of this sits
As a Python beginner, I'm spinning my wheels searching google on this topic. Any pointers would be much appreciated.
If you want to stick with your for-loop, you're probably going to need something like this:
titles = []
texts = []
subjects = []
with open('sample.txt', encoding="utf8") as f:
    inside_fulltext = False
    for line in f:
        if line.startswith("Title:"):
            inside_fulltext = False
            titles.append(line)
        elif line.startswith("Full text:"):
            inside_fulltext = True
            full_text = line
        elif line.startswith("Subject:"):
            inside_fulltext = False
            texts.append(full_text)
            subjects.append(line)
        elif inside_fulltext:
            full_text += line
        else:
            # Possibly throw a format error here?
            pass
(A couple of things: Python is weird about names, and when you write list = [], you're actually overwriting the label for the list class, which can cause you problems later. You should really treat list, set, and so on like keywords - even though Python technically doesn't - just to save yourself the headache. Also, the startswith method is a little more precise here, given your description of the data.)
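A quick demonstration of the shadowing problem:

```python
list = []            # rebinds the module-level name "list", hiding the built-in
try:
    list("abc")      # now fails: the empty list is not callable
except TypeError as exc:
    print(exc)       # 'list' object is not callable
del list             # remove the shadowing name; the built-in is reachable again
print(list("abc"))   # ['a', 'b', 'c']
```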
Alternatively, you could wrap the file object in an iterator (i = iter(f), and then next(i)), but that's going to cause some headaches with catching StopIteration exceptions, though it would let you use a more classic while-loop for the whole thing. For myself, I would stick with the state-machine approach above and just make it sufficiently robust to deal with all your reasonably expected edge-cases.
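For reference, a rough sketch of that iterator variant; passing a default to next() (i.e. next(it, None)) sidesteps the StopIteration handling, at the cost of extra bookkeeping (all names here are illustrative):

```python
def parse_articles(f):
    """Sketch of the iterator-based variant: advance manually with next()
    and consume "Full text:" continuation lines until "Subject:" appears."""
    titles, texts, subjects = [], [], []
    it = iter(f)
    line = next(it, None)
    while line is not None:
        if line.startswith("Title:"):
            titles.append(line)
            line = next(it, None)
        elif line.startswith("Full text:"):
            full_text = line
            line = next(it, None)
            # Gather continuation lines until the next "Subject:" header.
            while line is not None and not line.startswith("Subject:"):
                full_text += line
                line = next(it, None)
            texts.append(full_text)
        elif line.startswith("Subject:"):
            subjects.append(line)
            line = next(it, None)
        else:
            line = next(it, None)
    return titles, texts, subjects
```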
As your goal is to construct a DataFrame, here is a re+numpy+pandas solution:
import re
import pandas as pd
import numpy as np

# read the whole file
with open('sample.txt', encoding="utf8") as f:
    text = f.read()

keys = ['Subject', 'Title', 'Full text']
regex = r'(?:^|\n)(%s): ' % '|'.join(keys)

# split text on keys
chunks = re.split(regex, text)[1:]

# reshape the flat list of records to group key/value pairs belonging to the same article
df = pd.DataFrame([dict(e) for e in np.array(chunks).reshape(-1, len(keys), 2)])
Output:
Title Full text Subject
0 title of an article unfortunately the full text of each article,\nis on numerous lines. Each article has a differing number \nof lines. In this example, there are three.. Python
1 title of another article again unfortunately the full text of each article,\nis on numerous lines. Python
Problem description
There is a set of ~4000 python files with the following struture:
@ScriptInfo(number=3254,
            attibute=some_value,
            title="crawler for my website",
            some_other_key=some_value)
scenario_name = entity.get_script_by_title(title)
The goal
The goal is to get the value of the title from the ScriptInfo decorator (in this case it is "crawler for my website"), but there are a couple of problems:
1) There is no rule for naming a variable that contains the title. That's why it can be title_name, my_title, etc. See example:
@ScriptInfo(number=3254,
            attibute=some_value,
            my_title="crawler for my website",
            some_other_key=some_value)
scenario_name = entity.get_script_by_title(my_title)
2) The @ScriptInfo decorator may have more than two arguments, so grabbing its contents from between the parentheses in order to get the second parameter's value is not an option
My (very naive) solution
But the piece of code that stays unchanged is the scenario_name = entity.get_script_by_title(my_title) line. Taking this into account, I've come up with the solution:
import re

title_variable_re = r"scenario_name\s?=\s?entity\.get_script_by_title\((.*)\)"
with open("python_file.py") as file:
    for line in file:
        if re.match(title_variable_re, line):
            title_variable = re.match(title_variable_re, line).group(1)

title_re = title_variable + r"\s?=\s\"(.*)\"?"
with open("python_file.py") as file:
    for line in file:
        if re.match(title_re, line):
            title_value = re.match(title_re, line).group(1)

print title_value
This snippet of code does the following:
1) Traverses the script file (see the first with open) and gets the name of the variable holding the title value, since it is up to the programmer to choose that name
2) Traverses the script file again (see the second with open) and gets the title's value
The question for the stackoverflow family
Is there a better and more efficient way to get the title's (my_title's, title_name's, etc) value than traversing the script file two times?
If you open the file only once and save all lines into fileContent, add break where appropriate, and reuse the matches to access the captured groups, you obtain something like this (with parentheses after print for 3.x, without for 2.7):
import re

title_value = None
title_variable_re = r"scenario_name\s?=\s?entity\.get_script_by_title\((.*)\)"
with open("scenarioName.txt") as file:
    fileContent = list(file.read().split('\n'))

title_variable = None
for line in fileContent:
    m1 = re.match(title_variable_re, line)
    if m1:
        title_variable = m1.group(1)
        break

title_re = r'\s*' + title_variable + r'\s*=\s*"([^"]*)"[,)]?\s*'
for line in fileContent:
    m2 = re.match(title_re, line)
    if m2:
        title_value = m2.group(1)
        break

print(title_value)
Here is an unsorted list of changes in the regular expressions:
Allow space before the title_variable, that's what the r'\s*' + is for
Allow space around =
Allow comma or closing round paren in the end of the line in title_re, that's what the [,)]? is for
Allow some space in the end of the line
When tested on the following file as input:
@ScriptInfo(number=3254,
            attibute=some_value,
            my_title="crawler for my website",
            some_other_key=some_value)
scenario_name = entity.get_script_by_title(my_title)
it produces the following output:
crawler for my website
I have a file like this (with 10,000+ sequences, 98,000+ lines):
>DILT_0000000001-mRNA-1
MKVVKICSKLRKFIESRKDAVLPEQEEVLADLWAFEGISEFQMERFAKAAQCFQHQYELA
IKANLTEHASRSLENLGRARARLYDYQGALDAWTKRLDYEIKGIDKAWLHHEIGRAYLEL
NQYEEAIDHAATARDVADREADMEWDLNATVLIAQAHFYAGNLEEAKVYFEAAQNAAFRK
GFFKAESVLAEAIAEVDSEIRREEAKQERVYTKHSVLFNEFSQRAVWSEEYSEELHLFPF
AVVMLRCVLARQCTVHLQFRSCYNL
>DILT_0000000101-mRNA-1
MSCRRLSMNPGEALIKESSAPSRENLLKPYFDEDRCKFRHLTAEQFSDIWSHFDLDGVNE
LRFILRVPASQQAGTGLRFFGYISTEVYVHKTVKVSYIGFRKKNNSRALRRWNVNKKCSN
AVQMCGTSQLLAIVGPHTQPLTNKLCHTDYLPLSANFA
>DILT_0001999301-mRNA-1
LEHGIQPDGQMPSDKTIGGGDDSFQTFFSETGAGKHVPRAVMVDLEPTVIGEYLCVLLTS
FILFRLISTNLGPNSQLASRTLLFAADKTTLFRLLGLLPWSLLKIAVQ
>DILT_0001999401-mRNA-1
MAENGEDANMPEEGKEGNTQDQGEHQQDVQSDEPNEADSGYSSAASSDVNSQTIPITVIL
PNREAVNLSFDPNISVSELQERLNGPGITRLNENLFFTYSGKQLDPNKTLLDYKVQKSST
LYVHETPTALPKSAPNAKEEGVVPSNCLIHSGSRMDENRCLKEYQLTQNSVIFVHRPTAN
TAVQNREEKTSSLEVTVTIRETGNQLHLPINPHXXXXTVEMHVAPGVTVGDLNRKIAIKQ
All the lines starting with '>' are IDs; the lines that follow are the sequences belonging to that ID.
I also have a file with the IDs of the sequences I want, like:
DILT_0000000001-mRNA-1
DILT_0000000101-mRNA-1
DILT_0000000201-mRNA-1
DILT_0000000301-mRNA-1
DILT_0000000401-mRNA-1
DILT_0000000501-mRNA-1
DILT_0000000601-mRNA-1
DILT_0000000701-mRNA-1
DILT_0000000801-mRNA-1
DILT_0000000901-mRNA-1
I want to write a script to match the IDs and copy the sequences of those IDs, but I'm just getting the IDs, without the sequences.
seqs = open('WBPS10.protein.fa').readlines()
ids = open('ids.txt').readlines()
for line in ids:
    for record in seqs:
        if line == record[1:]:
            print record
I don't know what to write to get the 'n' lines after the ID, because sometimes it's 2 lines, and other sequences have more, as you can see in my example.
The thing is, I'm trying to do it without using Biopython, which would be a lot easier. I just want to learn other ways.
seqs_by_ids = {}
with open('WBPS10.protein.fa', 'r') as read_file:
    for line in read_file.readlines():
        if line.startswith('>'):
            current_key = line[1:].strip()
            seqs_by_ids[current_key] = ''
        else:
            seqs_by_ids[current_key] += line.strip()

ids = set([line.strip() for line in open('ids.txt').readlines()])
for id in ids:
    if id in seqs_by_ids:
        print(id)
        print('\t{}'.format(seqs_by_ids[id]))
output:
DILT_0000000001-mRNA-1
MKVVKICSKLRKFIESRKDAVLPEQEEVLADLWAFEGISEFQMERFAKAAQCFQHQYELAIKANLTEHASRSLENLGRARARLYDYQGALDAWTKRLDYEIKGIDKAWLHHEIGRAYLELNQYEEAIDHAATARDVADREADMEWDLNATVLIAQAHFYAGNLEEAKVYFEAAQNAAFRKGFFKAESVLAEAIAEVDSEIRREEAKQERVYTKHSVLFNEFSQRAVWSEEYSEELHLFPFAVVMLRCVLARQCTVHLQFRSCYNL
DILT_0000000101-mRNA-1
MSCRRLSMNPGEALIKESSAPSRENLLKPYFDEDRCKFRHLTAEQFSDIWSHFDLDGVNELRFILRVPASQQAGTGLRFFGYISTEVYVHKTVKVSYIGFRKKNNSRALRRWNVNKKCSNAVQMCGTSQLLAIVGPHTQPLTNKLCHTDYLPLSANFA
This should work for you. The if line == record[1:]: statement will not work if there are special characters in the string, e.g. \r\n. Since you are interested in finding the matching IDs only, the code below will work for you.
Code sample
seqs = open('WBPS10.protein.fa').readlines()
ids = open('ids.txt').readlines()
for line in ids:
    for record in seqs:
        if line in record:
            print record
output:
>DILT_0000000001-mRNA-1
>DILT_0000000101-mRNA-1
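If you also want the sequence lines that follow each matched ID, one possible extension of this idea (sketched as a function over the file's lines; the names are illustrative) is to keep collecting lines until the next '>' header:

```python
def matching_records(seq_lines, id_lines):
    """Sketch: collect each wanted header line plus the sequence lines
    that follow it, stopping at the next '>' header."""
    wanted = set(i.strip() for i in id_lines)
    out = []
    keep = False
    for record in seq_lines:
        if record.startswith('>'):
            # Start keeping lines at a wanted header, stop at any other.
            keep = record[1:].strip() in wanted
        if keep:
            out.append(record)
    return out

# Usage (sketch):
# print(''.join(matching_records(open('WBPS10.protein.fa'), open('ids.txt'))))
```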
Complete beginner so I'm sorry if this is obvious!
I have a file which is name | +/- or IG_name | 0 in a long list like so -
S1 +
IG_1 0
S2 -
IG_S3 0
S3 +
S4 -
dnaA +
IG_dnaA 0
Everything which starts with IG_ has a corresponding name. I want to add the + or - to the IG_name. e.g. IG_S3 is + like S3 is.
The information is gene names and strand information, IG = intergenic region. Basically I want to know which strand the intergenic region is on.
What I think I want:
open file
for every line, if the line starts with IG_*
    find the line with *
    print("IG_" and the line it found)
else
    print line
What I have:
import re
import sys

with open(sys.argv[2]) as geneInfo:
    with open(sys.argv[1]) as origin:
        for line in origin:
            if line.startswith("IG_"):
                name = line.split("_")[1]
                nname = name[:-3]
                for newline in geneInfo:
                    if re.match(nname, newline):
                        print("IG_" + newline)
            else:
                print(line)
where origin is the mixed list and geneInfo has only the names not IG_names.
With this code I end up with a list containing only the else statements.
S1 +
S2 -
S3 +
S4 -
dnaA +
My problem is that I don't know what is wrong to search so I can (attempt) to fix it!
Below is some step-by-step annotated code that hopefully does what you want (though instead of using print I have aggregated the results into a list so you can actually make use of it). I'm not quite sure what happened with your existing code (especially how you're processing two files?)
s_dict = {}
ig_list = []

with open('genes.txt', 'r') as infile:  # Simulating reading the file you pass in sys.argv
    for line in infile:
        if line.startswith('IG_'):
            ig_list.append(line.split()[0])  # Collect all our IG values for later
        else:
            s_name, value = line.split()  # Separate out the S value and its operator
            s_dict[s_name] = value.strip()  # Add to dictionary to map S to operator

# Now you can go back through your list of IG values and append the appropriate operator
pulled_together = []
for item in ig_list:
    s_value = item.split('_')[1]
    # The following will look for the operator mapped to the S value. If it is
    # not found, it will instead give you 'Not found'
    corresponding_operator = s_dict.get(s_value, 'Not found')
    pulled_together.append([item, corresponding_operator])

print('List structure')
print(pulled_together)
print('\n')
print('Printout of each item in list')
for item in pulled_together:
    print(item[0] + '\t' + item[1])
nname = name[:-3]
Python's slicing (of lists and strings) is very powerful, but it can be tricky to understand correctly.
When you write [:-3], you take everything except the last three items. The thing is, if you have fewer than three elements, it does not give you an error but an empty result.
I think this is where things go wrong: since there are not many characters per line, it returns an empty string. If you could tell us exactly what you want it to return there, with an example or something, it would help a lot, as I don't really know what you're trying to get with your slicing.
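For instance, with strings shaped like the ones this code slices (the values are hypothetical):

```python
# [:-3] drops the last three characters; on anything shorter than three
# characters it silently yields an empty string rather than an error.
name = "IG_dnaA 0\n".split("_")[1]  # -> "dnaA 0\n"
print(name[:-3])                    # 'dnaA'  (" 0\n" sliced off)
print("S3"[:-3])                    # ''      (too short, no error raised)
```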
Does this do what you want?
from __future__ import print_function
import sys

# Read and store all the gene info lines, keyed by name
gene_info = dict()
with open(sys.argv[2]) as gene_info_file:
    for line in gene_info_file:
        tokens = line.split()
        name = tokens[0].strip()
        gene_info[name] = line

# Read the other file and look up the names
with open(sys.argv[1]) as origin_file:
    for line in origin_file:
        if line.startswith("IG_"):
            name = line.split("_")[1]
            nname = name[:-3].strip()
            if nname in gene_info:
                lookup_line = gene_info[nname]
                print("IG_" + lookup_line)
            else:
                pass  # what do you want to do in this case?
        else:
            print(line)
I wrote a script to read data and plot it into graphs. I have three input files:
wells.csv: a list of observation wells that I want to create graphs for
1201
1202
...
well_summary_table.csv: contains information for each well (e.g. reference elevation, depth to water)
Bore_Name Ref_elev
1201 20
data.csv: contains observation data for each well (e.g. pH, Temp)
RowId Bore_Name Depth pH
1 1201 2 7
Not all wells in wells.csv have data to plot
My script is as follows:
well_name_list = []
new_depth_list = []
pH_list = []

from pylab import *

infile = open("wells.csv", 'r')
for line in infile:
    line = line.strip('\n')
    well = line
    if not well in well_name_list:
        well_name_list.append(well)
infile.close()

for well in well_name_list:
    infile1 = open("well_summary_table.csv", 'r')
    infile2 = open("data.csv", 'r')
    for line in infile1:
        line = line.rstrip()
        if not line.startswith('Bore_Name'):
            words = line.split(',')
            well_name1 = words[0]
            if well_name1 == well:
                ref_elev = words[1]
                for line in infile2:
                    if not line.startswith("RowId"):
                        line = line.strip('\n')
                        words = line.split(',')
                        well_name2 = words[1]
                        if well_name2 == well:
                            depth = words[2]
                            new_depth = float(ref_elev) - float(depth)
                            pH = words[3]
                            new_depth_list.append(float(new_depth))
                            pH_list.append(float(pH))
    fig = plt.figure(figsize=(2, 2.7), facecolor='white')
    plt.axis([0, 8, 0, 60])
    plt.plot(pH_list, new_depth_list, linestyle='', marker='o')
    plt.savefig(well + '.png')
    new_depth_list = []
    pH_list = []
infile1.close()
infile2.close()
It works on more than half of my well list, then it stops without giving me any error message. I don't know what is going on. Can anyone help me with this problem? Sorry if it is an obvious question; I am a newbie.
Many thanks,
@tcaswell spotted a potential issue - you aren't closing infile1 and infile2 after each time you open them, so you'll at the very least have a lot of open file handles floating around, depending on how many wells are in the wells.csv file. In some versions of Python this may cause issues, but it may not be the only problem - it's hard to say without some test data files. There might also be an issue with seeking to the start of the file - going back to the beginning when you move on to the next well. This could cause the behaviour you've been experiencing, but it might also be caused by something else. You should avoid problems like this by using with to manage the scope of your open files.
You should also use a dictionary to marry up the well names with the data, and read all of the data up front before doing your plotting. This will allow you to see exactly how you've constructed your data set and where any issues exist.
I've made a few stylistic suggestions below too. This is obviously incomplete but hopefully you get the idea!
import csv
from pylab import *  # imports should always go before declarations

well_details = {}  # empty dict
with open('wells.csv', 'r') as well_file:
    well_reader = csv.reader(well_file, delimiter=',')
    for row in well_reader:
        well_name = row[0]
        if well_name not in well_details:
            well_details[well_name] = {}  # dict to store pH, depth, ref_elev

with open('well_summary_table.csv', 'r') as elev_file:
    elev_reader = csv.reader(elev_file, delimiter=',')
    for row in elev_reader:
        well_name = row[0]
        if well_name in well_details:
            well_details[well_name]['elev_ref'] = row[1]