The code snippet below compares two CSV files and merges them. My problem is that the second file is printed on new lines.
import csv
import dateutil.parser

with open('a.csv', 'r') as f1:
    feed = f1.readlines()

with open('b.csv', 'r') as f2:
    for line in f2.readlines()[1:]:
        line = line.split(',')
        ts = dateutil.parser.parse(line[3])
        print(ts)
        for i, log in enumerate(feed):
            ls = log.split(',')
            ts_start = dateutil.parser.parse(ls[0])
            ts_end = dateutil.parser.parse(ls[1])
            if (ts >= ts_start) and (ts < ts_end):
                print(ts, ts_start, ts_end)
                name, tags, mean = line[0], ','.join(line[1:3]), line[-1]
                feed[i] = ','.join([log, name, tags, mean])

with open('c.csv', 'w') as f:
    f.writelines(feed)
file a:
2015-11-04T13:35:18.657Z,2015-11-04T13:47:06.588Z,load,INSERT
2015-11-04T13:47:47.164Z,2015-11-04T14:07:13.230Z,run,READUPDATE
file b:
name,tags,time,mean
memory_value,"type=memory,instance=buffered",2015-11-04T13:35:00Z,
memory_value,"type=memory,instance=buffered",2015-11-04T13:45:00Z,1.32
memory_value,"type=memory,instance=buffered",2015-11-04T14:05:00Z,1.11
Output:
A1,A2,A3,A4,
A5
B1,B2,B3,B4,
B5,
Expected output:
A1,A2,A3,A4,A5
B1,B2,B3,B4,B5
How can I achieve this?
Thanks
The strings in the list returned by readlines include the newline character at the end of each line, so these may inadvertently be included as you do string manipulation on that data. In particular, ','.join([log, name, tags, mean]) will have a newline between log and name, because log ultimately came from f1.readlines().
Try stripping the newlines from each line before doing anything with it.
for i, log in enumerate(feed):
    log = log.strip()
    ls = log.split(',')
It may also be necessary to do line = line.strip().split(',') at the top of the first for loop instead of just line = line.split(','). The output looks OK on my machine without it, but I'm not 100% sure that it exactly matches your desired output.
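Putting both changes together, a minimal sketch of the corrected script might look like this (same file names as the question; writing the newlines back once at the end is an assumption about the desired output):

import dateutil.parser

with open('a.csv', 'r') as f1:
    feed = [line.strip() for line in f1]  # strip newlines up front

with open('b.csv', 'r') as f2:
    for line in f2.readlines()[1:]:
        line = line.strip().split(',')
        ts = dateutil.parser.parse(line[3])
        for i, log in enumerate(feed):
            ls = log.split(',')
            ts_start = dateutil.parser.parse(ls[0])
            ts_end = dateutil.parser.parse(ls[1])
            if ts_start <= ts < ts_end:
                name, tags, mean = line[0], ','.join(line[1:3]), line[-1]
                feed[i] = ','.join([log, name, tags, mean])

with open('c.csv', 'w') as f:
    f.write('\n'.join(feed) + '\n')  # re-add one newline per row at the end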
Depending on which version of Python you are using, you may need to change the 'r' and 'w' to 'rb' and 'wb' in order to read and write the files in binary mode. This should help with the newlines.
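Separately, on Python 3 the csv module's documentation asks for files opened with newline='' instead of binary mode; a minimal round-trip sketch (same file names as the question, shown only to illustrate the open() calls):

import csv

with open('a.csv', newline='') as f:       # newline='' lets csv manage line endings
    rows = list(csv.reader(f))

with open('c.csv', 'w', newline='') as f:  # same for writing
    csv.writer(f).writerows(rows)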
I am trying to remove duplicate lines from a text file and keep facing issues... The output file keeps putting the first two accounts on the same line. Each account should have a different line... Does anyone know why this is happening and how to fix it?
with open('accounts.txt', 'r') as f:
    unique_lines = set(f.readlines())

with open('accounts_No_Dup.txt', 'w') as f:
    f.writelines(unique_lines)
accounts.txt:
#account1
#account2
#account3
#account4
#account5
#account6
#account7
#account5
#account8
#account4
accounts_No_Dup.txt:
#account4#account3
#account4
#account8
#account5
#account7
#account1
#account2
#account6
print(unique_lines)
{'#account4', '#account7\n', '#account3\n', '#account6\n', '#account5\n', '#account8\n', '#account4\n', '#account2\n', '#account1\n'}
The last line in your file is missing a newline (technically a violation of POSIX standards for text files, but so common you have to account for it), so "#account4\n" earlier on is interpreted as unique relative to "#account4" at the end. I'd suggest unconditionally stripping newlines, and adding them back when writing:
with open('accounts.txt', 'r') as f:
    unique_lines = {line.rstrip("\r\n") for line in f}  # Remove newlines for consistent deduplication

with open('accounts_No_Dup.txt', 'w') as f:
    f.writelines(f'{line}\n' for line in unique_lines)  # Add newlines back
By the by, on modern Python (CPython/PyPy 3.6+, 3.7+ for any interpreter), you can preserve order of first appearance by using a dict rather than a set. Just change the read from the file to:
unique_lines = {line.rstrip("\r\n"): None for line in f}
and you'll see each line the first time it appears, in that order, with subsequent duplicates being ignored.
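Putting that together, a possible order-preserving version of the whole script (same file names as the question):

with open('accounts.txt', 'r') as f:
    # dict keys keep insertion order, so each line keeps its first position
    unique_lines = {line.rstrip("\r\n"): None for line in f}

with open('accounts_No_Dup.txt', 'w') as f:
    f.writelines(f'{line}\n' for line in unique_lines)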
Your problem is that set changes the order of your lines, and your last element doesn't end with \n because there is no empty line at the end of your file.
Just add the separator or don't use set.
with open('accounts.txt', 'r') as f:
    unique_lines = set()
    for line in f.readlines():
        if not line.endswith('\n'):
            line += '\n'
        unique_lines.add(line)

with open('accounts_No_Dup.txt', 'w') as f:
    f.writelines(unique_lines)
You can do it easily using pandas' unique method.
The code is as below:
import pandas as pd

data = pd.read_csv('d:\\test.txt', sep="\n", header=None)
df = pd.DataFrame(data[0].unique())
with open('d:\\testnew.txt', 'a') as f:
    f.write(df.to_string(header=False, index=False))
Result: the test file is read and the duplicate lines are removed.
Let's say I have a text file full of nicknames. How can I delete a specific nickname from this file, using Python?
First, open the file and get all your lines from the file. Then reopen the file in write mode and write your lines back, except for the line you want to delete:
with open("yourfile.txt", "r") as f:
lines = f.readlines()
with open("yourfile.txt", "w") as f:
for line in lines:
if line.strip("\n") != "nickname_to_delete":
f.write(line)
You need to strip("\n") the newline character in the comparison because if your file doesn't end with a newline character the very last line won't either.
Solution to this problem with only a single open:
with open("target.txt", "r+") as f:
d = f.readlines()
f.seek(0)
for i in d:
if i != "line you want to remove...":
f.write(i)
f.truncate()
This solution opens the file in r/w mode ("r+") and makes use of seek to reset the f-pointer then truncate to remove everything after the last write.
The best and fastest option, rather than storing everything in a list and re-opening the file to write it, is in my opinion to re-write the file elsewhere.
with open("yourfile.txt", "r") as file_input:
with open("newfile.txt", "w") as output:
for line in file_input:
if line.strip("\n") != "nickname_to_delete":
output.write(line)
That's it! In a single loop you can do the same thing, and it will be much faster.
This is a "fork" from #Lother's answer (which I believe that should be considered the right answer).
For a file like this:
$ cat file.txt
1: october rust
2: november rain
3: december snow
This fork from Lother's solution works fine:
#!/usr/bin/python3.4
with open("file.txt", "r+") as f:
    new_f = f.readlines()
    f.seek(0)
    for line in new_f:
        if "snow" not in line:
            f.write(line)
    f.truncate()
Improvements:
with open, which makes the f.close() call unnecessary
a clearer if condition for checking whether the string is present in the current line
The issue with reading lines in a first pass and making changes (deleting specific lines) in a second pass is that if your files are huge, you will run out of RAM. Instead, a better approach is to read the lines one by one and write them into a separate file, eliminating the ones you don't need. I have run this approach with files as big as 12-50 GB, and the RAM usage remains almost constant; only CPU cycles show processing in progress.
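A minimal sketch of that streaming approach (the file name and the filter condition are hypothetical):

import os

# Stream huge_log.txt line by line, dropping lines that contain "DEBUG"
with open("huge_log.txt", "r") as src, open("huge_log.txt.tmp", "w") as dst:
    for line in src:  # only one line is held in memory at a time
        if "DEBUG" not in line:
            dst.write(line)

os.replace("huge_log.txt.tmp", "huge_log.txt")  # swap the result into place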
I liked the fileinput approach as explained in this answer:
Deleting a line from a text file (python)
Say for example I have a file which has empty lines in it and I want to remove empty lines, here's how I solved it:
import fileinput
import sys

for line_number, line in enumerate(fileinput.input('file1.txt', inplace=1)):
    if len(line) > 1:
        sys.stdout.write(line)
Note: the empty lines in my case had length 1 (just the newline character).
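If the "empty" lines might also contain spaces or tabs, a slightly more robust check could be:

import fileinput
import sys

for line in fileinput.input('file1.txt', inplace=1):
    if line.strip():  # keep only lines with non-whitespace content
        sys.stdout.write(line)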
If you use Linux, you can try the following approach.
Suppose you have a text file named animal.txt:
$ cat animal.txt
dog
pig
cat
monkey
elephant
Delete the line containing dog (here, the first line):
>>> import subprocess
>>> subprocess.call(['sed','-i','/.*dog.*/d','animal.txt'])
then
$ cat animal.txt
pig
cat
monkey
elephant
Probably, you already got a correct answer, but here is mine.
Instead of using a list to collect unfiltered data (which is what the readlines() method does), I use two files. One holds the main data, and the second is used for filtering the data when you delete a specific string. Here is the code:
main_file = open('data_base.txt').read()   # your main database file
filter_file = open('filter_base.txt', 'w')
filter_file.write(main_file)
filter_file.close()

main_file = open('data_base.txt', 'w')
for line in open('filter_base.txt'):
    if 'your data to delete' not in line:  # skip the specific string to remove
        main_file.write(line)              # put every other line back into your db
main_file.close()
Hope you will find this useful! :)
I think if you read the file into a list, you can then iterate over the list to look for the nickname you want to get rid of. You can do it efficiently without creating additional files, but you'll have to write the result back to the source file.
Here's how I might do this:
import os, csv  # and other imports you need
nicknames_to_delete = ['Nick', 'Stephen', 'Mark']
I'm assuming nicknames.csv contains data like:
Nick
Maria
James
Chris
Mario
Stephen
Isabella
Ahmed
Julia
Mark
...
Then load the file into a list:
nicknames = None
with open("nicknames.csv") as sourceFile:
    nicknames = sourceFile.read().splitlines()
Next, iterate over the list to match the names you want to delete:
for nick in nicknames_to_delete:
    try:
        if nick in nicknames:
            nicknames.pop(nicknames.index(nick))
        else:
            print(nick + " is not found in the file")
    except ValueError:
        pass
Lastly, write the result back to file:
with open("nicknames.csv", "a") as nicknamesFile:
nicknamesFile.seek(0)
nicknamesFile.truncate()
nicknamesWriter = csv.writer(nicknamesFile)
for name in nicknames:
nicknamesWriter.writeRow([str(name)])
nicknamesFile.close()
In general, you can't; you have to write the whole file again (at least from the point of change to the end).
In some specific cases you can do better than this -
if all your data elements are the same length and in no specific order, and you know the offset of the one you want to get rid of, you could copy the last item over the one to be deleted and truncate the file before the last item;
or you could just overwrite the data chunk with a 'this is bad data, skip it' value or keep a 'this item has been deleted' flag in your saved data elements such that you can mark it deleted without otherwise modifying the file.
This is probably overkill for short documents (anything under 100 KB?).
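For what it's worth, a sketch of the fixed-length-record trick described above (the record size, file name, and function are hypothetical, and record order is assumed not to matter):

import os

RECORD_SIZE = 32  # hypothetical fixed record length in bytes

def delete_record(path, index):
    with open(path, "r+b") as f:
        f.seek(0, os.SEEK_END)
        last = f.tell() - RECORD_SIZE   # offset of the last record
        f.seek(last)
        tail = f.read(RECORD_SIZE)      # read the last record
        f.seek(index * RECORD_SIZE)
        f.write(tail)                   # copy it over the record being deleted
        f.truncate(last)                # chop the file before the old last record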
I like this approach using fileinput and the 'inplace' option:
import fileinput

for line in fileinput.input(fname, inplace=1):
    line = line.strip()
    if 'UnwantedWord' not in line:
        print(line)
It's a little less wordy than the other answers and is fast enough for most uses.
Save the file's lines in a list, then remove the line you want to delete from the list, and write the remaining lines to a new file:
with open("file_name.txt", "r") as f:
lines = f.readlines()
lines.remove("Line you want to delete\n")
with open("new_file.txt", "w") as new_f:
for line in lines:
new_f.write(line)
Here's another method to remove one or more lines from a file:
src_file = "zzzz.txt"
idx = 0  # line number of the line to remove, starting from 0

f = open(src_file, "r")
contents = f.readlines()
f.close()

contents.pop(idx)  # remove the line item from the list by its index

f = open(src_file, "w")
contents = "".join(contents)
f.write(contents)
f.close()
You can use the re library.
Assuming you are able to load your full txt file into memory, you then define a list of unwanted nicknames and substitute each of them with an empty string "".
# Delete unwanted characters
import re
# Read, then decode for py2 compat.
path_to_file = 'data/nicknames.txt'
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# Define unwanted nicknames and substitute them
unwanted_nickname_list = ['SourDough']
text = re.sub("|".join(unwanted_nickname_list), "", text)
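If a nickname could contain regex metacharacters, it may be safer to escape them first; a possible variant of the last line:

pattern = "|".join(re.escape(name) for name in unwanted_nickname_list)
text = re.sub(pattern, "", text)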
Do you want to remove a specific line from a file? Then use this short and simple snippet: you can easily remove any line that starts with a given sentence or prefix (symbol).
with open("file_name.txt", "r") as f:
lines = f.readlines()
with open("new_file.txt", "w") as new_f:
for line in lines:
if not line.startswith("write any sentence or symbol to remove line"):
new_f.write(line)
To delete a specific line of a file by its line number:
Replace variables filename and line_to_delete with the name of your file and the line number you want to delete.
filename = 'foo.txt'
line_to_delete = 3
initial_line = 1
file_lines = {}

with open(filename) as f:
    content = f.readlines()

for line in content:
    file_lines[initial_line] = line.strip()
    initial_line += 1

f = open(filename, "w")
for line_number, line_content in file_lines.items():
    if line_number != line_to_delete:
        f.write('{}\n'.format(line_content))
f.close()

print('Deleted line: {}'.format(line_to_delete))
Example output:
Deleted line: 3
Take the contents of the file and split it by newline into a list. Then delete the entry at your line number, join the resulting list, and overwrite the file.
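A minimal sketch of that idea (the file name and line index are hypothetical; a list is used rather than a tuple so the entry can be deleted):

line_to_delete = 2  # 0-based index of the line to drop

with open("names.txt", "r") as f:
    lines = f.read().split("\n")

del lines[line_to_delete]  # remove the unwanted entry

with open("names.txt", "w") as f:
    f.write("\n".join(lines))  # join and overwrite the file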
I have a text file in which each ID line starts with > and the next line(s) are a sequence of characters. The line after the sequence of characters is another ID line starting with >, but in some of them, instead of a sequence, I have "Sequence unavailable". The sequence after an ID line can span one or more lines.
like this example:
>ENSG00000173153|ENST00000000442|64073050;64074640|64073208;64074651
AAGCAGCCGGCGGCGCCGCCGAGTGAGGGGACGCGGCGCGGTGGGGCGGCGCGGCCCGAGGAGGCGGCGGAGGAGGGGCCGCCCGCGGCCCCCGGCTCACTCCGGCACTCCGGGCCGCTC
>ENSG00000004139|ENST00000003834
Sequence unavailable
I want to filter out those IDs with “Sequence unavailable”. The output should look like this:
output:
>ENSG00000173153|ENST00000000442|64073050;64074640|64073208;64074651
AAGCAGCCGGCGGCGCCGCCGAGTGAGGGGACGCGGCGCGGTGGGGCGGCGCGGCCCGAGGAGGCGGCGGAGGAGGGGCCGCCCGCGGCCCCCGGCTCACTCCGGCACTCCGGGCCGCTC
Do you know how to do that in Python?
Unlike the other answers, I'd strongly recommend against parsing the FASTA format manually. It's not too hard, but there are pitfalls, and it's completely unnecessary since efficient, well-tested implementations exist:
Use Bio.SeqIO from BioPython; for example:
from Bio import SeqIO

for record in SeqIO.parse(filename, 'fasta'):
    if record.seq != 'Sequenceunavailable':
        SeqIO.write(record, outfile, 'fasta')
Note the missing space in 'Sequenceunavailable': reading the sequences in FASTA format will omit spaces.
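A slightly fuller sketch with concrete (hypothetical) file names:

from Bio import SeqIO

with open('filtered.fasta', 'w') as outfile:
    for record in SeqIO.parse('input.fasta', 'fasta'):
        if str(record.seq) != 'Sequenceunavailable':
            SeqIO.write(record, outfile, 'fasta')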
How about this:
with open(filename, 'r+') as f:
    data = f.read()
    data = data.split('>')
    result = ['>{}'.format(item) for item in data if item and 'Sequence unavailable' not in item]
    f.seek(0)
    for line in result:
        f.write(line)
    f.truncate()  # drop leftover bytes if the filtered text is shorter than the original
def main():
    sequence_file = open('text.txt', 'r')
    filterFile(sequence_file)

def filterFile(SequenceFile):
    outfile = open('outfile', 'w')
    for line in SequenceFile:
        if line.startswith('>'):
            sequence = next(SequenceFile)  # advance the same iterator to the sequence line
            if sequence.startswith('Sequence unavailable'):
                pass  # nothing should happen, I suppose?
            else:
                outfile.write(line + sequence)  # both lines keep their trailing newlines
    outfile.close()

main()
I unfortunately can't test this code right now, but I wrote it off the top of my head! Please test it and let me know what the outcome is so I can adjust the code :-)
I don't know exactly how large these files will get, so just in case I'm doing it without loading the whole file into memory:
with open(filename) as fh:
    with open(filename + '.new', 'w+') as fh_new:
        for idline, geneseq in zip(*[iter(fh)] * 2):
            if geneseq.strip() != 'Sequence unavailable':
                fh_new.write(idline)
                fh_new.write(geneseq)
It works by creating a new file; the zip(*[iter(fh)] * 2) trick reads the file two lines at a time, so idline is the first line of each pair and geneseq the second.
This solution should be relatively cheap in computer power but will create an extra output file.
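If the zip idiom looks opaque, here is a tiny standalone demonstration of the pairing:

lines = ['>id1\n', 'ACGT\n', '>id2\n', 'Sequence unavailable\n']
it = iter(lines)
for idline, geneseq in zip(it, it):  # same effect as zip(*[iter(lines)] * 2)
    print(repr(idline), repr(geneseq))
# '>id1\n' 'ACGT\n'
# '>id2\n' 'Sequence unavailable\n'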
I have a file whose contents I need to transform and write to a new file.
The current contents are as follows:
send from #1373846594 to pool/10.0.68.61#1374451276 estimated size is 7.83G
send from #1374451276 to pool/10.0.68.61#1375056084 estimated size is 10.0G
I need the new file to show:
#1373846594 --> pool/10.0.68.61#1374451276 --> 7.83G
#1374451276 --> pool/10.0.68.61#1375056084 --> 10.0G
I have tried:
with open("file", "r") as drun:
for _,_,snap,_,pool_,_,_,size in zip(*[iter(drun)]*9):
drun.write("{0}\t{1}\t{2}".format(snap,pool,size))
I know I am either way off or just not quite there but I am not sure where to go next with this. Any help would be appreciated.
You want to split your lines using str.split(), and you'll need to write to another file first, then move that back into place; reading and writing to the same file is tricky and should be avoided unless you are working with fixed record sizes.
However, the fileinput module makes in-place file editing easy enough:
import fileinput

for line in fileinput.input(filename, inplace=True):
    components = line.split()
    snap, pool, size = components[2], components[4], components[-1]
    print '\t'.join((snap, pool, size))
The print statement writes to sys.stdout, which fileinput conveniently redirects when inplace=True is set. This means you are writing to the output file (which replaces the original input file), and print adds the needed newline on every loop too.
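For reference, a Python 3 variant of the same approach might look like this (the print() function re-adds the newline):

import fileinput

for line in fileinput.input(filename, inplace=True):
    components = line.split()
    snap, pool, size = components[2], components[4], components[-1]
    print('\t'.join((snap, pool, size)))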
inf = open(file)
outf = open(outfile, 'w')
for line in inf:
    parts = line.split()
    outf.write("{0} --> {1} --> {2}\n".format(parts[2], parts[4], parts[8]))
inf.close()
outf.close()
Perhaps something simple using a regex pattern match:
import re

with open('output_file', 'w') as outFile:
    for line in open('input_file'):
        line = line.split()
        our_patterns = [i for i in line if re.search('^#', i) or
                                           re.search('^pool', i) or
                                           re.search('G$', i)]
        outFile.write(' --> '.join(our_patterns) + '\n')
The pattern matching will extract any parts that begin with # or pool, as well as the final size that ends with G. These parts are then joined with ' --> ' and written to the file. Hope this helps!
SOURCE, DESTINATION, SIZE = 2, 4, 8

with open('file.txt') as drun:
    for line in drun:
        pieces = line.split()
        print(pieces[SOURCE], pieces[DESTINATION], pieces[SIZE], sep=' --> ', file=open('log.txt', 'a'))
I'd like to remove the first column from a file. The file contains 3 columns separated by spaces, and the columns have the following titles: 'X', 'Displacement' and 'Force' (please see the image).
I have come up with the following code, but to my disappointment it doesn't work!
f = open("datafile.txt", 'w')
for line in f:
line = line.split()
del x[0]
f.close()
Any help is much appreciated !
Esan
First of all, you're attempting to read from a file (by iterating through the file contents) that is open for writing. This will give you an IOError.
Second, there is no variable named x in existence (you have not declared/set one in the script). This will generate a NameError.
Thirdly and finally, once you have finished (correctly) reading and editing the columns in your file, you will need to write the data back into the file.
To avoid loading a (potentially large) file into memory all at once, it is probably a good idea to read from one file (line by line) and write to a new file simultaneously.
Something like this might work:
f = open("datafile.txt", "r")
g = open("datafile_fixed.txt", "w")
for line in f:
if line.strip():
g.write("\t".join(line.split()[1:]) + "\n")
f.close()
g.close()
Some reading about Python I/O might be helpful, but something like the following should get you on your feet:
with open("datafile.txt", "r") as fin:
with open("outputfile.txt", "w") as fout:
for line in fin:
line = line.split(' ')
if len(line) == 3:
del line[0]
fout.write(line[0] + ' ' + line[1])
else:
fout.write('\n')
EDIT: fixed to work with blank lines
print '\n'.join([' '.join(l.split()[1:]) for l in file('datafile.txt')])
or, if you want to preserve spaces and you know that the second column always starts at, say, the 12th character:
print ''.join([l[11:] for l in file('datafile.txt')])