I have the following .txt file (a modified bash emboss-dreg report; the original report has seqtable format):
Start End Strand Pattern Sequence
43392 43420 + regex:[T][G][A][TC][C][CTG]\D{15,17}[CA][G][T][AT][AT][CTA] TGATCGCACGCCGAATGGAAACACGTTTT
52037 52064 + regex:[T][G][A][TC][C][CTG]\D{15,17}[CA][G][T][AT][AT][CTA] TGACCCTGCTTGGCGATCCCGGCGTTTC
188334 188360 + regex:[T][G][A][TC][C][CTG]\D{15,17}[CA][G][T][AT][AT][CTA] TGATCGCGCAACTGCAGCGGGAGTTAC
I would like to access the elements under "Sequence" only, to compare them with some variables and delete the whole line if the comparison does not give the desired result (using Levenshtein distance for the comparison).
But I can't even get started .... :(
I am searching for something like the -f option of Linux's cut, to directly get to the right "field" in the line to do my comparison.
I came across re.split:
import re

with open(textFile) as f:
    for line in f:
        cleaned = re.split(r'\t', line)
        print(cleaned)
which results in:
[' Start End Strand Pattern Sequence\n']
['\n']
[' 43392 43420 + regex:[T][G][A][TC][C][CTG]\\D{15,17}[CA][G][T][AT][AT][CTA] TGATCGCACGCCGAATGGAAACACGTTTT\n']
['\n']
[' 52037 52064 + regex:[T][G][A][TC][C][CTG]\\D{15,17}[CA][G][T][AT][AT][CTA] TGACCCTGCTTGGCGATCCCGGCGTTTC\n']
['\n']
[' 188334 188360 + regex:[T][G][A][TC][C][CTG]\\D{15,17}[CA][G][T][AT][AT][CTA] TGATCGCGCAACTGCAGCGGGAGTTAC\n']
['\n']
That is the closest I got to "splitting my lines into elements". I feel like I am totally going the wrong way, but searching Stack Overflow and Google did not turn up anything :(
I have never worked with the seqtable format before, so I tried to deal with it as a .txt file. Maybe there is a better approach for dealing with it?
Python is the main language I am learning; I am not so firm in Bash, but Bash answers for dealing with the issue would be fine for me, too.
I am thankful for any hint/link/help :)
The format itself seems to be using blank lines between rows, and your r'\t' split is not doing anything because, based on what you've pasted, the data is not tab-delimited anyway: the table is padded with a variable number of spaces.
To address both issues, you can read the file, treat the first line as a header (if you need it), then read the rest line by line, strip the trailing/leading whitespace, check whether there is any data left, and if there is, split it further on whitespace to get to your line elements:
with open("your_data", "r") as f:
header = f.readline().split() # read the first line as a header
for line in f: # read the rest of the file line-by-line
line = line.strip() # first clear out the whitespace
if line: # check if there is any content left or is it an empty line
elements = line.split() # split the data on whitespace to get your elements
print(elements[-1]) # print the last element
TGATCGCACGCCGAATGGAAACACGTTTT
TGACCCTGCTTGGCGATCCCGGCGTTTC
TGATCGCGCAACTGCAGCGGGAGTTAC
As a bonus, since you have the header, you can turn it into a map and then use 'proxied' named access to get the element you're looking for so you don't need to worry about the element position:
with open("your_data", "r") as f:
# read the header and turn it into a value:index map
header = {v: i for i, v in enumerate(f.readline().split())}
for line in f: # read the rest of the file line-by-line
line = line.strip() # first clear out the whitespace
if line: # check if there is any content left or is it an empty line
elements = line.split()
print(elements[header["Sequence"]]) # print the Sequence element
You can also use a header map to turn your rows into dict structures for even easier access.
UPDATE: Here's how to create a header map and then use it to build a dict out of your lines:
with open("your_data", "r") as f:
# read the header and turn it into an index:value map
header = {i: v for i, v in enumerate(f.readline().split())}
for line in f: # read the rest of the file line-by-line
line = line.strip() # first clear out the whitespace
if line: # check if there is any content left or is it an empty line
# split the line, iterate over it and use the header map to create a dict
row = {header[i]: v for i, v in enumerate(line.split())}
print(row["Sequence"]) # ... or you can append it to a list for later use
As for how to 'delete' lines that you don't want for some reason, you'll have to create a temporary file, loop through your original file, compare your values, write only the lines you want to keep into the temporary file, and finally replace the original file with the temporary one. Something like:
import shutil
from tempfile import NamedTemporaryFile

SOURCE_FILE = "your_data"  # path to the original file to process

def compare_func(seq):  # a simple comparison function for our sequence
    return not seq.endswith("TC")  # use Levenshtein distance or whatever you want instead

# open a temporary file for writing and our source file for reading
with NamedTemporaryFile(mode="w", delete=False) as t, open(SOURCE_FILE, "r") as f:
    header_line = f.readline()  # read the header
    t.write(header_line)  # write the header immediately to the temporary file
    header = {v: i for i, v in enumerate(header_line.split())}  # create a header map
    last_line = ""  # a var to store the whitespace to keep the same format
    for line in f:  # read the rest of the file line-by-line
        row = line.strip()  # first clear out the whitespace
        if row:  # check if there is any content left or is it an empty line
            elements = row.split()  # split the row into elements
            # now let's call our comparison function
            if compare_func(elements[header["Sequence"]]):  # keep the line if True
                t.write(last_line)  # write down the last whitespace to the temporary file
                t.write(line)  # write down the current line to the temporary file
        else:
            last_line = line  # store the whitespace for later use

shutil.move(t.name, SOURCE_FILE)  # finally, overwrite the source with the temporary file
This will produce the same file sans the second row from your example, since its sequence ends in TC and our compare_func() returns False in that case.
For a bit less complexity, instead of using a temporary file you can load your whole source file into working memory and then just overwrite it, but that works only for files that fit into your working memory, while the above approach works with files as large as your free storage space.
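Since the comparison you ultimately want is Levenshtein-based, compare_func() could be swapped for something like the following sketch; the levenshtein() helper is a plain-Python edit distance, and REFERENCE and MAX_DISTANCE are made-up placeholders you'd replace with your own variables and threshold:
def levenshtein(a, b):
    # classic dynamic-programming edit distance between two strings
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

REFERENCE = "TGATCGCACGCCGAATGGAAACACGTTTT"  # placeholder: one of your comparison variables
MAX_DISTANCE = 5                             # placeholder threshold

def compare_func(seq):
    return levenshtein(seq, REFERENCE) <= MAX_DISTANCE  # keep lines similar to the reference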
I'm new to Python and I have the following csv file (let's call it out.csv):
DATE,TIME,PRICE1,PRICE2
2017-01-15,05:44:27.363000+00:00,0.9987,1.0113
2017-01-15,13:03:46.660000+00:00,0.9987,1.0113
2017-01-15,21:25:07.320000+00:00,0.9987,1.0113
2017-01-15,21:26:46.164000+00:00,0.9987,1.0113
2017-01-16,12:40:11.593000+00:00,,1.0154
2017-01-16,12:40:11.593000+00:00,1.0004,
2017-01-16,12:43:34.696000+00:00,,1.0095
and I want to truncate the second column so the csv looks like:
DATE,TIME,PRICE1,PRICE2
2017-01-15,05:44:27,0.9987,1.0113
2017-01-15,13:03:46,0.9987,1.0113
2017-01-15,21:25:07,0.9987,1.0113
2017-01-15,21:26:46,0.9987,1.0113
2017-01-16,12:40:11,,1.0154
2017-01-16,12:40:11,1.0004,
2017-01-16,12:43:34,,1.0095
This is what I have so far:
import csv

with open('out.csv', 'r+b') as nL, open('outy_3.csv', 'w+b') as nL3:
    new_csv = []
    reader = csv.reader(nL)
    for row in reader:
        time = row[1].split('.')
        new_row = []
        new_row.append(row[0])
        new_row.append(time[0])
        new_row.append(row[2])
        new_row.append(row[3])
        print new_row
        nL3.writelines(new_row)
I can't seem to get a new line in after writing each line to the new csv file.
This definitely doesn't look or feel Pythonic.
Thanks
The missing newlines issue is because the file.writelines() method doesn't automatically add line separators to the elements of the argument passed to it, which it expects to be a sequence of strings. If those elements represent separate lines, then it's your responsibility to ensure each one ends in a newline.
However, your code tries to use it to output only a single line at a time. To fix that you should use file.write() instead, because it expects its argument to be a single string; if you want that string to be a separate line in the file, it must end with a newline or have one manually added to it.
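A quick illustration of the difference, using a throwaway file:
lines = ["a,b,c", "d,e,f"]
with open("demo.txt", "w") as f:
    f.writelines(lines)                          # writes "a,b,cd,e,f" with no separators
    f.write("\n")
    f.writelines(line + "\n" for line in lines)  # now each element ends in a newline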
Below is code that does what you want. It works by changing one of the elements of the list of strings that csv.reader returns in place, then joining them back together into a single string, manually adding a newline to the end (stored in new_row), and writing that to the output file.
import csv

with open('out.csv', 'rb') as nL, open('outy_3.csv', 'wt') as nL3:
    for row in csv.reader(nL):
        time_col = row[1]
        try:
            period_location = time_col.index('.')
            row[1] = time_col[:period_location]  # only keep characters in front of period
        except ValueError:  # no period character found
            pass  # leave row unchanged
        new_row = ','.join(row)
        print(new_row)
        nL3.write(new_row + '\n')
Printed (and file) output:
DATE,TIME,PRICE1,PRICE2
2017-01-15,05:44:27,0.9987,1.0113
2017-01-15,13:03:46,0.9987,1.0113
2017-01-15,21:25:07,0.9987,1.0113
2017-01-15,21:26:46,0.9987,1.0113
2017-01-16,12:40:11,,1.0154
2017-01-16,12:40:11,1.0004,
2017-01-16,12:43:34,,1.0095
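As an aside, if any field could ever contain an embedded comma or quote, it would be safer to let csv.writer produce the output instead of joining by hand; a minimal sketch of the same truncation:
import csv

with open('out.csv', 'rb') as nL, open('outy_3.csv', 'wb') as nL3:
    writer = csv.writer(nL3)
    for row in csv.reader(nL):
        row[1] = row[1].split('.')[0]  # drop the fractional part, if any
        writer.writerow(row)           # csv.writer adds the line terminator itself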
I have a stock file which looks like this:
12334232:seat belt:2.30:12:10:30
14312332:toy card:3.40:52:10:30
12512312:xbox one:5.30:23:10:30
12543243:laptop:1.34:14:10:30
65478263:banana:1.23:23:10:30
27364729:apple:4.23:42:10:30
28912382:orange:1.12:16:10:30
12892829:elephant:6.45:14:10:30
After a certain transaction, I want to replace the items in the fourth column with the numbers in the sixth column if they are below the numbers in the fifth column. How would I replace the items in the fourth column?
Every time I use the following lines of code below, it overwrites the whole file with nothing (deletes everything):
for line in stockfile:
    c = line.split(":")
    print("pass")
    if stock_order[i] == User_list[i][0]:
        stockfile.write(line.replace(current_stocklevel_list[i], reorder_order[i]))
    else:
        i = i + 1
I want the stockfile to look like this after it has replaced the necessary items in the column:
12334232:seat belt:2.30:30:10:30
14312332:toy card:3.40:30:10:30
12512312:xbox one:5.30:30:10:30
12543243:laptop:1.34:30:10:30
65478263:banana:1.23:30:10:30
27364729:apple:4.23:30:10:30
28912382:orange:1.12:30:10:30
12892829:elephant:6.45:30:10:30
If you are opening a file again after some time, you should use "a" (append) mode so that the file doesn't get truncated.
The write pointer will automatically be at the end of the file.
So:
f = open("filename", "a")
f.seek(0) # To start from beginning
But if you want to read and write, then add "+" to the mode and the file won't be truncated either:
f = open("filename", "r+")
Both the read and write pointers will be at the beginning of the file; you'll need to seek to the position where you wish to start writing/reading.
But you are doing it wrong.
A file's content is overwritten when you write to it, not inserted automatically.
Content is only appended if you are in a writable mode and positioned at the end of the file.
So you either need to load the whole file, make the changes you need, and write everything back.
Or you have to write your changes at some point and shift the remaining content, truncating the file if the new content is shorter than before.
The mmap module can help you treat the file as a string. It lets you efficiently shift data and resize the file.
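For example, a same-length in-place replacement with mmap could look like this sketch (the file name and values here are made up):
import mmap

with open("stockfile.txt", "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)      # map the whole file into memory
    start = mm.find(b":23:")           # locate the old value
    if start != -1:
        mm[start:start + 4] = b":30:"  # replacement must be exactly the same length
    mm.flush()
    mm.close()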
But if you really want to change the file in place, you should give the file fixed-length columns. Then, when you want to change a value, you do not need to shift anything back and forth: just find the right row and column, seek there, and write the new value over the old one (making sure to overwrite all of the old value), and that is that.
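A minimal sketch of that fixed-width idea; the record length, offset, and width below are made-up assumptions about the file layout:
RECORD_LEN = 40  # assumed fixed length of every line, newline included
COL_OFFSET = 30  # assumed byte offset of the stock column within a line
COL_WIDTH = 4    # assumed fixed width of the stock column

with open("stockfile.txt", "r+") as f:
    f.seek(2 * RECORD_LEN + COL_OFFSET)  # jump straight to row 2's stock column
    f.write(str(30).rjust(COL_WIDTH))    # overwrite the old value, padded to the full width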
You should try to read in the data first:
with open('inputfile', 'r') as infile:
    data = infile.readlines()
Then you can loop over the data and edit as needed and write it out:
import random

with open('outputfile', 'w') as outfile:
    for line in data:
        c = line.rstrip('\n').split(":")
        if random.randint(1, 3) == 1:
            # update fourth column based on some good reason
            c[3] = str(int(c[3]) + 2)
        outfile.write(':'.join(c) + '\n')
Or you could do it in one go with something like:
import os
import random

with open('inputfile', 'r') as infile, open('outputfile', 'w') as outfile:
    for line in infile:
        c = line.rstrip('\n').split(":")
        if random.randint(1, 3) == 1:
            # update fourth column based on some good reason
            c[3] = str(int(c[3]) + 2)
        outfile.write(':'.join(c) + '\n')
os.rename('outputfile', 'inputfile')
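Applied to the actual rule in your question (replace the fourth column with the sixth when it drops below the fifth), the loop could look like this sketch, assuming the file is small enough to read fully (the file name is a placeholder):
with open('stockfile.txt', 'r') as infile:
    data = infile.readlines()

with open('stockfile.txt', 'w') as outfile:
    for line in data:
        c = line.rstrip('\n').split(':')
        if int(c[3]) < int(c[4]):  # stock level below the reorder threshold?
            c[3] = c[5]            # replace it with the reorder amount
        outfile.write(':'.join(c) + '\n')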
I have a fairly basic question, and I'm wondering what the best solution would be using Python. I have a set of CSV files, and within each file, I have rows of comma separated elements. Importantly, there are two distinct blocks of rows in each CSV file, let's say "Block 1" and "Block 2". Some values overlap between Block 1 and Block 2 (specific item of interest: the name of particular .jpg files), but the order will vary. Here is a shortened version of how the file is organized:
Trial,Image,Type,Reps
1,511.jpg,T,1REP
2,101a.jpg,2,1REP
3,185a.jpg,5,3REP
4,566.jpg,T,3REP
5,560.jpg,T,3REP
Trial,Image,Type,Reps,Keypress
1,101a.jpg,2,1REP,1
2,185a.jpg,5,3REP,0
3,511.jpg,T,1REP,1
4,560.jpg,T,3REP,1
5,566.jpg,T,3REP,0
For some clarification, this is the log file of an experiment where Block 1 is the time when images are studied. "Type" corresponds to the type of picture, and "Reps" corresponds to how many times overall the picture is seen (1 or 3 times), neither of which is central to what I want to achieve. What I would like to do is this: for each row in the first block, match the name of the same .jpg file in the second block. Then I need to append the Block 1 row with "1" or "0" based on whether the corresponding "Keypress" element in Block 2 is "1" or "0". Basically, when tested on the pictures, subjects make a button press of "1" or "0", and I want to back-sort which ones got which press during study. Critically, I need to preserve the order of Block 1 (the studied order of images) with whatever solution I take.
Apologies for how basic this request is...I'm learning.
Your question isn't what I would call basic at all (and has nothing to do with sorting). In fact, doing the processing you want is fairly involved. Essentially each file has to be read twice: first to extract the information needed from the second block, and then again to update the first block. Additionally, reading the file each time is broken down into two sub-steps, since there are two kinds of csv data in each file which must be handled separately in each pass.
Since it's fairly difficult to update a file in-place, an updated version of the file is first written to a separate temporary file which then replaces the original if processing completes without errors.
import csv
import shutil
from tempfile import NamedTemporaryFile

TRIAL = 0
IMAGE = 1
KEYPRESS = 4
filename = 'backsorting.csv'

img_resp_map = {}
# first pass
with open(filename, 'rb') as csvfile:
    reader = csv.reader(csvfile)
    # skip over first block
    next(reader)  # header
    while True:
        row = next(reader)
        if not row[TRIAL].isdigit():  # header of second block?
            break
    # use data in second block to create an image-to-response mapping
    for row in reader:
        img_resp_map[row[IMAGE]] = row[KEYPRESS]

# second pass
with open(filename, 'rb') as csvfile:
    reader = csv.reader(csvfile)
    fields = next(reader)  # get header of first block
    with NamedTemporaryFile('wb', dir='.', delete=False) as tempcsv:
        writer = csv.writer(tempcsv)
        writer.writerow(fields + ['Keypress'])  # new header with added field
        # copy and update rows of first block by appending the new field
        for row in reader:
            if not row[TRIAL].isdigit():  # header of second block?
                break
            writer.writerow(row + [img_resp_map[row[IMAGE]]])
        # copy second block of file unchanged
        writer.writerow(row)  # header (already read)
        writer.writerows(reader)

# NOTE: the following is dangerous since it wipes out the original file
shutil.move(tempcsv.name, filename)  # replace original file with temp one
My test file was named backsorting.csv and initially had this in it:
Trial,Image,Type,Reps
1,511.jpg,T,1REP
2,101a.jpg,2,1REP
3,185a.jpg,5,3REP
4,566.jpg,T,3REP
5,560.jpg,T,3REP
Trial,Image,Type,Reps,Keypress
1,101a.jpg,2,1REP,1
2,185a.jpg,5,3REP,0
3,511.jpg,T,1REP,1
4,560.jpg,T,3REP,1
5,566.jpg,T,3REP,0
After running the script, its contents were changed to this:
Trial,Image,Type,Reps,Keypress
1,511.jpg,T,1REP,1
2,101a.jpg,2,1REP,1
3,185a.jpg,5,3REP,0
4,566.jpg,T,3REP,0
5,560.jpg,T,3REP,1
Trial,Image,Type,Reps,Keypress
1,101a.jpg,2,1REP,1
2,185a.jpg,5,3REP,0
3,511.jpg,T,1REP,1
4,560.jpg,T,3REP,1
5,566.jpg,T,3REP,0
Assuming the csv files are small enough, I would simply use a dictionary {} to map values from each file to each other.
Load up all values from Block 2 first.
import csv

d = {}
with open('some1.csv', 'rb') as f:
    reader = csv.reader(f)
    for row in reader:
        trial, file_name, third, keypress = row  # 1,a.jpg,XYZ,1
        d[file_name] = keypress
Now when iterating over Block 1, retrieve the values you have stored from Block 2, and append them to your data.
with open('some2.csv', 'rb') as f:
    reader = csv.reader(f)
    for row in reader:
        trial, file_name, third, reps = row  # 1,a.jpg,XYZ,1
        lst = [trial, file_name, third, reps, d.get(file_name, -1)]
        # now convert `lst` to csv, and write to file
Note that the second code block uses a value of -1 if a matching filename wasn't found in the stored Block 2 data.
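To finish that loop body, something like this writes the combined rows out while preserving Block 1's order, reusing the d mapping built above (the output file name is made up):
import csv

with open('some2.csv', 'rb') as f, open('combined.csv', 'wb') as out:
    writer = csv.writer(out)
    for row in csv.reader(f):
        trial, file_name, third, reps = row
        writer.writerow([trial, file_name, third, reps, d.get(file_name, -1)])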
I am saving a list to a csv using the writerow function from the csv module. Something went wrong when I opened the final file in MS Office Excel.
Before I encountered this issue, the main problem I was trying to deal with was getting the list saved with one record per row; it was saving every line into a single cell in row 1. I made some small changes, and now this happened. I am certainly very confused as a novice Python guy.
import csv

inputfile = open('small.csv', 'r')
header_list = []
header = inputfile.readline()
header_list.append(header)

input_lines = []
for line in inputfile:
    input_lines.append(line)
inputfile.close()

AA_list = []
for i in range(0, len(input_lines)):
    if (input_lines[i].split(',')[4]) == 'AA':  # column 4 has different names including 'AA'
        AA_list.append(input_lines[i])

full_list = header_list + AA_list

resultFile = open("AA2013.csv", 'w+')
wr = csv.writer(resultFile, delimiter=',')
wr.writerow(full_list)
Thanks!
UPDATE:
The full_list looks like this: ['1,2,3,"MEM",...]
UPDATE2(APR.22nd):
Now I get three cells of data (the header in A1 and the rest in A2 and A3 respectively) in the same column. Apparently, the newline characters are not taking effect for three items in one big list. I think the more specific question now is: how do I save a list of records, each with '\n' behind it, to csv?
Importing the csv module is not enough; you need to use it as well. Right now, you're appending each line as an entire string to your list instead of as a list of fields.
Start with
with open('small.csv', 'rb') as inputfile:
    reader = csv.reader(inputfile, delimiter=",")
    header_list = next(reader)
    input_lines = list(reader)
Now header_list contains all the headers, and input_lines contains a nested list of all the rows, each one split into columns.
I think the rest should be pretty straightforward.
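For instance, finishing it off with the same csv module might look like this sketch, under the same assumptions as above:
# keep only the rows whose fifth column is 'AA'
AA_list = [row for row in input_lines if row[4] == 'AA']

with open('AA2013.csv', 'wb') as resultFile:
    writer = csv.writer(resultFile)
    writer.writerow(header_list)  # one header row
    writer.writerows(AA_list)     # one row per record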
append() adds its argument as a single element at the end of a list. So when you write header_list.append(header), the entire header string is appended as one element of header_list. You should write:
headers = header.split(',')
header_list.append(headers)
This splits the header row on commas, so headers is a list of the header fields, which is then appended to header_list.
The same thing goes for AA_list.append(input_lines[i]).
I figured it out.
The difference between [val], val, and val.split(",") when passed to writerow was:
[val]: a single field containing the entire string, so everything lands in the first column in Excel (the header and each "2013,1,2,..." record in A1, A2, A3 and so on).
val: each character (letter, comma, or space) takes its own cell in Excel.
val.split(","): the commas split the string from [val], and each comma-separated value goes into its own Excel cell.
Here is what I found out: 1. the right way to export the flat list line by line is using the with syntax; 2. split each record when writing the row:
csvwriter.writerow(JD.split())
full_list = header_list + AA_list

with open("AA2013.csv", 'w+') as resultFile:
    wr = csv.writer(resultFile, delimiter=",", lineterminator='\n')
    for val in full_list:
        wr.writerow(val.split(','))
This gives the wanted output.
Please correct any terms or syntax I have used incorrectly! Thanks.