delete all rows up to a specific row - python

How can you delete lines in a text document up to a certain line?
I find the line number using the code:
#!/usr/bin/env python
lookup = '00:00:00'
filename = "test.txt"
with open(filename) as text_file:
    for num, line in enumerate(text_file, 1):
        if lookup in line:
            print(num)
print(num) outputs the line number, for example 66.
How do I delete all the lines up to line 66, i.e. up to the line found by the keyword?

As proposed here with a small modification to your case:
Read all lines of the file.
Iterate the lines list until you reach the keyword.
Write all remaining lines.
with open("yourfile.txt", "r") as f:
lines = iter(f.readlines())
with open("yourfile.txt", "w") as f:
for line in lines:
if lookup in line:
f.write(line)
break
for line in lines:
f.write(line)

That's easy.
filename = "test.txt"
lookup = '00:00:00'
with open(filename,'r') as text_file:
lines = text_file.readlines()
res=[]
for i in range(0,len(lines),1):
if lookup in lines[i]:
res=lines[i:]
break
with open(filename,'w') as text_file:
text_file.writelines(res)

Do you know what lines you want to delete?
#!/usr/bin/env python
lookup = '00:00:00'
filename = "test.txt"
with open(filename) as text_file, open('okfile.txt', 'w') as ok:
    lines = text_file.readlines()
    ok.writelines(lines[4:])
This skips the first 4 lines and stores the rest in a different document, in case you want to keep the original.
The with statement takes care of closing both files when you're done with them :)

Providing three alternate solutions. All begin with the same first part - reading:
filename = "test.txt"
lookup = '00:00:00'
with open(filename) as text_file:
    lines = text_file.readlines()
The variations for the second parts are:
Using itertools.dropwhile, which discards items from the iterator until the predicate (condition) returns False (i.e. it discards while the predicate is True). From that point on, it yields all the remaining items without re-checking the predicate:
import itertools

with open(filename, 'w') as text_file:
    text_file.writelines(itertools.dropwhile(lambda line: lookup not in line, lines))
Note that it says not in, so all the lines before lookup is found are discarded.
Bonus: If you wanted to do the opposite - write lines until you find the lookup and then stop, replace itertools.dropwhile with itertools.takewhile.
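A minimal sketch of that opposite behaviour, reusing the same filename, lookup, and lines from the reading step above:
import itertools

# Keep lines only up to (but not including) the first line containing `lookup`
with open(filename, 'w') as text_file:
    text_file.writelines(itertools.takewhile(lambda line: lookup not in line, lines))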
Using a flag-value (found) to determine when to start writing the file:
with open(filename, 'w') as text_file:
    found = False
    for line in lines:
        if not found and lookup in line:  # 2nd expression not checked once `found` is True
            found = True                  # value remains True for all remaining iterations
        if found:
            text_file.write(line)
Similar to #c yj's answer, with some refinements - use enumerate instead of range, and then use the last index (idx) to write the lines from that point on; with no other intermediate variables needed:
for idx, line in enumerate(lines):
    if lookup in line:
        break

with open(filename, 'w') as text_file:
    text_file.writelines(lines[idx:])

Related

How to select only lines that have a unique (not duplicated) field?

How might I remove lines that have duplicates in the first part of the line?
Example:
Input file:
line 1 : Messi , 1
line 2 : Messi , 2
line 3 : CR7 , 2
I want the output file to be:
line 1: CR7 , 2
Just CR7 , 2; I want to delete the lines that have duplicate first fields (e.g., Messi). The file is not sorted.
The deletion depends on the first column: if there is any match for the first column elsewhere in the file, then I want to delete the line.
How to do this in Python? Here is my code so far:
lines_seen = set()  # holds lines already seen
outfile = open(outfilename, "w")
for line in open(infilename, "r"):
    if line not in lines_seen:  # not a duplicate
        outfile.write(line)
        lines_seen.add(line)
outfile.close()
This sample has the large original and the known duplicates.
There are a few ways.
You might want to read How do I find the duplicates in a list and create another list with them?
One answer from that, using your code:
import collections

with open(infilename, 'r') as inp:
    lines = inp.readlines()
output_lines = [line for line, count in collections.Counter(lines).items() if count > 1]
with open(outfilename, "w") as out:
    out.write("\n".join(output_lines))
Now that a sample has been provided, it's a slightly different question. Here is a solution:
import collections
from typing import List

def remove_duplicate_first_columns(lines: List[str]) -> List[str]:
    first_col = [line.split(',')[0] for line in lines]
    dups = [col for col, count in collections.Counter(first_col).items() if count > 1]
    non_dups = [line for line in lines if line.split(',')[0] not in dups]
    return non_dups

with open('input.csv') as inp:
    lines = inp.readlines()
non_dups = remove_duplicate_first_columns(lines)
with open('nondups.csv', 'w') as out:
    print(''.join(non_dups), file=out)
print(f"There were {len(lines) - len(non_dups)} lines removed.")
print("This program is gratified to be of use")
I hope this completely answers your question.
You need to be able to remove something that was added earlier, so you can't directly dump to outfile.write(line). Instead, use an accumulator to keep the data, and only once the input has been fully processed, commit to writing the output.
lines_seen = set()  # holds lines already seen
accumulator = []
with open(infilename, "r") as f:
    for line in f.readlines():
        if line not in lines_seen:  # not a duplicate
            accumulator.append(line)
            lines_seen.add(line)
        else:
            accumulator.remove(line)

outfile = open(outfilename, "w")
outfile.write('\n'.join(accumulator))
outfile.close()
Here is another solution you might want to check out.
lines_seen = set()
outfile = open(outfilename, "w")
with open(infilename, "r") as f:
    lines = f.readlines()
outfile.writelines([line for line in lines if not (line.split(",")[0] in lines_seen or lines_seen.add(line.split(",")[0]))])
outfile.close()
You can get some more info here! How do you remove duplicates from a list whilst preserving order?
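That question uses the order-preserving seen-set idiom that the one-liner above relies on; a minimal standalone sketch (the sample list is illustrative):
seen = set()
items = ['Messi', 'CR7', 'Messi', 'Neymar']
# set.add() returns None (falsy), so the `or` records the item as a side effect
unique = [x for x in items if not (x in seen or seen.add(x))]
print(unique)  # ['Messi', 'CR7', 'Neymar']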

Python: Delete lines from a file except certain criteria

I am trying to delete lines from a file using specific criteria.
The script I have seems to work, but I have to add too many or statements.
Is there a way I can make a variable that holds all the criteria I would like to remove from the files?
Example code
with open("AW.txt", "r+", encoding='utf-8') as f:
new_f = f.readlines()
f.seek(0)
for line in new_f:
if "PPL"not in line.split() or "PPLX"not in line.split() or "PPLC"not in line.split():
f.write(line)
f.truncate()
I was thinking more along these lines, but it fails when I add multiple criteria:
output = []
with open('AW.txt', 'r+', encoding='utf-8') as f:
    lines = f.readlines()
    criteria = 'PPL'
    output = [line for line in lines if criteria not in line]
    f.writelines(output)
Regards
You can use regular expressions here, which will reduce the number of statements and checks in the code. If you have a list of criteria, which can be dynamic, call it crit_list; then the code would look like this:
import re

with open("AW.txt", "r+", encoding='utf-8') as f:
    new_f = f.readlines()
    crit_list = ['PPL', 'PPLC', 'PPLX']  # can use any number of criteria
    obj = re.compile(r'%s' % ('|'.join(crit_list)))
    out_lines = [line for line in new_f if not obj.search(line)]
    f.truncate(0)
    f.seek(0)
    f.writelines(out_lines)
The use of regex makes this look different from what the OP posted. Let me explain the two lines containing the regex.
obj = re.compile(r'%s' % ('|'.join(crit_list)))
This line creates a regex object with the regular expression 'PPL|PPLC|PPLX', which means: match at least one of these strings in the given line. It can be thought of as a substitute for using as many ors in the code as there are criteria.
out_lines = [line for line in new_f if not obj.search(line)]
This statement means: search for the given criteria in the given line and, only if none of them is found, preserve that line.
Hope that clears your doubts.
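A quick, hypothetical check of that alternation (the sample lines are made up):
import re

obj = re.compile('PPL|PPLC|PPLX')
print(bool(obj.search("Paris PPLC France")))    # True  -> this line would be dropped
print(bool(obj.search("Berlin ADM1 Germany")))  # False -> this line would be kept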
import re

with open('AW.txt', 'r+', encoding='utf-8') as f:
    text = f.read()
    # blank out every line that contains one of the criteria
    output = re.sub(r"^.*(Crit1|Crit2|Crit3).*$", "", text, flags=re.MULTILINE)
    f.seek(0)
    f.truncate()
    f.write(output)
This will remove the matching lines (leaving empty lines behind), but it will not print them out in the write statement.
Your question was a little fuzzy, asking for lines to be deleted but then trying to write them out.
Add as many criteria as you want, as in the sketch below.
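A minimal sketch of building that pattern from a list (the criteria names are placeholders):
# Build the alternation dynamically from however many criteria you have
criteria = ["Crit1", "Crit2", "Crit3", "Crit4"]
pattern = r"^.*(" + "|".join(criteria) + r").*$"
print(pattern)  # ^.*(Crit1|Crit2|Crit3|Crit4).*$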
You can compare each list item with each criterion and keep only those items that meet it. Then simply collect all lines which meet all the criteria.
For example, this can be done like this (EDITED CODE):
with open('AW.txt', 'r+') as f:
    lines = f.readlines()
    criterias = ["PPL", "PPLX", "PPLC"]
    conditioned_lines = [[line for criteria in criterias if criteria not in line] for line in lines]
    output = [criteria_lines[0] for criteria_lines in conditioned_lines if len(criteria_lines) == len(criterias)]
    f.truncate(0)
    f.seek(0)
    f.write(''.join(output))

how can i convert surname:name to name:surname? [duplicate]

In Python, calling e.g. temp = open(filename,'r').readlines() results in a list in which each element is a line from the file. However, these strings have a newline character at the end, which I don't want.
How can I get the data without the newlines?
You can read the whole file and split lines using str.splitlines:
temp = file.read().splitlines()
Or you can strip the newline by hand:
temp = [line[:-1] for line in file]
Note: this last solution only works if the file ends with a newline, otherwise the last line will lose a character.
This assumption is true in most cases (especially for files created by text editors, which often do add an ending newline anyway).
If you want to avoid this you can add a newline at the end of the file:
with open(the_file, 'r+') as f:
    content = f.read()
    if not content.endswith('\n'):
        # add missing newline if not already present
        f.write('\n')
        f.flush()
    f.seek(0)
    lines = [line[:-1] for line in f]
Or a simpler alternative is to strip the newline instead:
[line.rstrip('\n') for line in file]
Or even, although pretty unreadable:
[line[:-(line[-1] == '\n') or len(line)+1] for line in file]
This exploits the fact that the return value of or isn't a boolean, but the operand that was evaluated as true or false.
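A tiny illustration of that behaviour:
# `or` returns the first truthy operand, or the last operand if none is truthy
print(0 or 5)      # 5
print(-1 or 10)    # -1
print('' or 'x')   # 'x'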
The readlines method is actually equivalent to:
def readlines(self):
    lines = []
    for line in iter(self.readline, ''):
        lines.append(line)
    return lines

# or equivalently

def readlines(self):
    lines = []
    while True:
        line = self.readline()
        if not line:
            break
        lines.append(line)
    return lines
Since readline() keeps the newline, readlines() keeps it as well.
Note: for symmetry to readlines() the writelines() method does not add ending newlines, so f2.writelines(f.readlines()) produces an exact copy of f in f2.
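A small sketch of that symmetry (the file names are just for illustration):
# Copy f.txt to f2.txt; the result is identical because readlines() keeps
# the newlines and writelines() adds none
with open('f.txt') as f, open('f2.txt', 'w') as f2:
    f2.writelines(f.readlines())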
temp = open(filename,'r').read().split('\n')
Reading the file one row at a time and removing unwanted characters from the end of the string with str.rstrip(chars):
with open(filename, 'r') as fileobj:
    for row in fileobj:
        print(row.rstrip('\n'))
See also str.strip([chars]) and str.lstrip([chars]).
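A small comparison of the three, for reference:
s = '  data\n'
print(repr(s.rstrip('\n')))  # '  data'  -> strips only the trailing newline
print(repr(s.strip()))       # 'data'    -> strips whitespace from both ends
print(repr(s.lstrip()))      # 'data\n'  -> strips leading whitespace only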
I think this is the best option.
temp = [line.strip() for line in file.readlines()]
temp = open(filename,'r').read().splitlines()
My preferred one-liner -- if you don't count from pathlib import Path :)
lines = Path(filename).read_text().splitlines()
It also auto-closes the file, so there's no need for with open()...
Added in Python 3.5.
https://docs.python.org/3/library/pathlib.html#pathlib.Path.read_text
Try this:
u=open("url.txt","r")
url=u.read().replace('\n','')
print(url)
To get rid of trailing end-of-line (\n) characters and of empty list values (''), try:
f = open(path_sample, "r")
lines = [line.rstrip('\n') for line in f.readlines() if line.strip() != '']
You can read the file as a list easily using a list comprehension
with open("foo.txt", 'r') as f:
lst = [row.rstrip('\n') for row in f]
my_file = open("first_file.txt", "r")
for line in my_file.readlines():
    if line[-1:] == "\n":
        print(line[:-1])
    else:
        print(line)
my_file.close()
This script takes lines from a file and saves every line, without the newline and with ,0 appended, to file2.
file = open("temp.txt", "+r")
file2 = open("res.txt", "+w")
for line in file:
file2.writelines(f"{line.splitlines()[0]},0\n")
file2.close()
If you look at line, its value is 'data\n', so we use splitlines() to turn it into a list and [0] to pick just the word data.
import csv

with open(filename) as f:
    csvreader = csv.reader(f)
    for line in csvreader:
        print(line[0])

python: Open file, edit one line, save it as the same file

I want to open a file, search for a specific word, change the word and save the file again. Sounds really easy - but I just can't get it working... I know that I have to overwrite the whole file but only change this one word!
My Code:
from numpy import arange  # assuming numpy's arange for the 0.1 step

f = open('./myfile', 'r')
linelist = f.readlines()
f.close()

for line in linelist:
    i = 0
    if 'word' in line:
        for number in arange(0, 1, 0.1):
            myNumber = 2 - number
            myNumberasString = str(myNumber)
            myChangedLine = line.replace('word', myNumberasString)
            f2 = open('./myfile', 'w')
            f2.write(line)
            f2.close()
            # here I have to do some stuff with these files so there is a reason
            # why everything is in this for loop. And I know that it will
            # overwrite the file every loop and that is good so. I want that :)
If I make it like this, the 'new' myfile file contains only the changed line. But I want the whole file with the changed line... Can anyone help me?
****EDIT*****
I fixed it! I just turned the loops around and now it works perfectly like this:
from numpy import arange  # assuming numpy's arange

f = open('myfile', 'r')
text = f.readlines()
f.close()

i = 0
for number in arange(0, 1, 0.1):
    fw = open('mynewfile', 'w')
    myNumber = 2 - number
    myNumberasString = str(myNumber)
    for line in text:
        if 'word' in line:
            line = line.replace('word', myNumberasString)
        fw.write(line)
    fw.close()
    # do my stuff here where I need all these input files
You just need to write out all the other lines as you go. As I said in my comment, I don't know what you are really trying to do with your replace, but here's a slightly simplified version in which we're just replacing all occurrences of 'word' with 'new':
f = open('./myfile', 'r')
linelist = f.readlines()
f.close()

# Re-open file here
f2 = open('./myfile', 'w')
for line in linelist:
    line = line.replace('word', 'new')
    f2.write(line)
f2.close()
Or using contexts:
with open('./myfile', 'r') as f:
    lines = f.readlines()

with open('./myfile', 'w') as f:
    for line in lines:
        line = line.replace('word', 'new')
        f.write(line)
Use fileinput passing in whatever you want to replace:
import fileinput

for line in fileinput.input("in.txt", inplace=True):
    print(line.replace("whatever", "foo"), end="")
You don't seem to be doing anything special in your loop that cannot be calculated first outside the loop, so create the string you want to replace the word with and pass it to replace.
inplace=True will mean the original file is changed. If you want to verify everything looks ok then remove the inplace=True for the first run and you will actually see the replaced output instead of the lines being written to the file.
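A minimal sketch of that idea (the replacement value is just an assumed example):
import fileinput

# Compute the replacement string once, outside the loop
replacement = str(2 - 0.1)  # assumed example value

for line in fileinput.input("in.txt", inplace=True):
    print(line.replace("word", replacement), end="")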
If you want to write to a temporary file, you can use a NamedTemporaryFile with shutil.move:
from tempfile import NamedTemporaryFile
from shutil import move

with open("in.txt") as f, NamedTemporaryFile("w", dir=".", delete=False) as out:
    for line in f:
        out.write(line.replace("whatever", "foo"))
move(out.name, "in.txt")
One problem you may encounter is replace matching substrings, so if you know the word always appears in the middle of a sentence surrounded by whitespace you could add that; if not, you will need to split and check every word.
from tempfile import NamedTemporaryFile
from shutil import move
from string import punctuation

with open("in.txt") as f, NamedTemporaryFile("w", dir=".", delete=False) as out:
    for line in f:
        out.write(" ".join(word if word.strip(punctuation) != "whatever" else "foo"
                           for word in line.split()) + "\n")
There are three issues with your current code. First, create the f2 file handle before starting the loop, otherwise you'll overwrite the file in each iteration. Second, you are writing an unmodified line in f2.write(line); I guess you meant f2.write(myChangedLine)? Third, you should add an else statement that writes unmodified lines to the file. So:
from numpy import arange  # assuming numpy's arange

f = open('./myfile', 'r')
linelist = f.readlines()
f.close()

f2 = open('./myfile', 'w')
for line in linelist:
    i = 0
    if 'word' in line:
        for number in arange(0, 1, 0.1):
            myNumber = 2 - number
            myNumberasString = str(myNumber)
            myChangedLine = line.replace('word', myNumberasString)
            f2.write(myChangedLine)
    else:
        f2.write(line)
f2.close()

Copy the last three lines of a text file in python?

I'm new to Python, and the way it handles variables and arrays of variables in lists is quite alien to me. I would normally read a text file into a vector, determine its size, and then loop with a for loop to copy the last three elements into a new array/vector.
I don't understand how for loops work in python so I can't do that.
so far I have:
# read text file into line list
numberOfLinesInChat = 3
text_file = open("Output.txt", "r")
lines = text_file.readlines()
text_file.close()

writeLines = []
if len(lines) > numberOfLinesInChat:
    i = 0
    while ((numberOfLinesInChat - i) >= 0):
        writeLine[i] = lines[(len(lines) - (numberOfLinesInChat - i))]
        i += 1

# write what people say to text file
text_file = open("Output.txt", "w")
text_file.write(writeLines)
text_file.close()
To get the last three lines of a file efficiently, use deque:
from collections import deque

with open('somefile') as fin:
    last3 = deque(fin, 3)
This saves reading the whole file into memory to slice off what you didn't actually want.
To reflect your comment - your complete code would be:
from collections import deque

with open('somefile') as fin, open('outputfile', 'w') as fout:
    fout.writelines(deque(fin, 3))
As long as you're ok to hold all of the file lines in memory, you can slice the list of lines to get the last x items. See http://docs.python.org/2/tutorial/introduction.html and search for 'slice notation'.
def get_chat_lines(file_path, num_chat_lines):
    with open(file_path) as src:
        lines = src.readlines()
    return lines[-num_chat_lines:]

>>> lines = get_chat_lines('Output.txt', 3)
>>> print(lines)
['line n-3\n', 'line n-2\n', 'line n-1']
First, to answer your question: my guess is that you had an index error; you should replace the line writeLine[i] with writeLine.append( ). After that, you should also do a loop to write the output:
text_file = open("Output.txt", "w")
for row in writeLine:
    text_file.write(row)
text_file.close()
May I suggest a more Pythonic way to write this? It would be as follows:
with open("Input.txt") as f_in, open("Output.txt", "w") as f_out :
for row in f_in.readlines()[-3:] :
f_out.write(row)
A possible solution:
lines = [l for l in open("Output.txt")]
file = open('Output.txt', 'w')
file.writelines(lines[-3:])
file.close()
This might be a little clearer if you do not know python syntax.
lst_lines = lines.splitlines()
This will create a list containing all the lines in the text file.
Then for the last line you can do:
last = lst_lines[-1]
second_last = lst_lines[-2]
etc. List and string indexes can be reached from the end with a negative index.
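For example (with an illustrative list):
lst_lines = ['line 1', 'line 2', 'line 3', 'line 4']
print(lst_lines[-1])   # 'line 4'
print(lst_lines[-3:])  # ['line 2', 'line 3', 'line 4']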
or you can loop through them and print specific ones using:
start = start line, stop = where to end, step = what to increment by.
for i in range(start, stop - 1, step):
    string = lst_lines[i]
Then just write them to a file, as in the sketch below.
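A minimal sketch of that last step (the file name and slice are illustrative):
# Write the last three lines to a file
with open('Output.txt', 'w') as out:
    for string in lst_lines[-3:]:
        out.write(string + '\n')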
