Removing a line from a file once it is processed - python

I am reading content from a file line by line. Once a line is processed, I clear it out. Here is the code:
import os
lines = open('q0.txt').readlines()
for i, line in enumerate(lines[:]):
    print line
    flag = raw_input()
    print lines[i]
    del lines[i]
open('q0.txt', 'w').writelines(lines)
I am going through a large q0.txt. My intention is that if there is any interruption in between, I should not reprocess previously processed lines again.
In the above code, though I delete lines[i], it still remains in the file. What is wrong?

I expect the above code to throw an IndexError somewhere.
Why? Let us say your script reads a 100-line file. The copy lines[:] will have 100 items in it, so the loop index i runs from 0 up to 99.
Meanwhile, del lines[i] keeps shrinking the original list.
Eventually the for loop reaches indices near the 100th element. If there has been even a single del, the list has fewer than 100 items, so an access like lines[99] fails and throws an IndexError.
Therefore the line open('q0.txt', 'w').writelines(lines) never gets executed once anything has been deleted, and hence the file remains the same.
This is my understanding.
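To make the mechanism concrete, here is the same pattern on a three-line list:

lines = ['a\n', 'b\n', 'c\n']
for i, line in enumerate(lines[:]):  # the copy still has 3 items, so i runs 0, 1, 2
    del lines[i]                     # but the original list keeps shrinking
# i == 0: lines becomes ['b\n', 'c\n']
# i == 1: lines becomes ['b\n']
# i == 2: del lines[2] raises IndexError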

Since raw_input is blocking your code, you might want to split the work into two threads: the main one and one that you create yourself. Since threads run concurrently and in a (more or less) unpredictable order, you will not be able to control exactly on which line the interruption reaches your main loop. Threads are tricky to get right and require a lot of reading, testing and checking why things happen the way they happen...
Also, since you don't mind consuming your lines, you can do what's called a destructive read: load the contents of the file into a lines variable, and keep removing items with pop() until you run out of lines to consume (or the flag has been activated). Check what the pop() method does on a list. Be aware that pop() with no argument returns the last item of the list; if you want the items processed in the original order, use pop(0) or pop() from a previously reversed list.
import threading

interrupt = None

def flag_activator():
    global interrupt
    interrupt = raw_input("(!!) Type yes when you wanna stop\n\n")
    print "Oh gosh! The user input %s" % interrupt

th = threading.Thread(target=flag_activator)
th.start()

fr = open('q0.txt', 'r')
lines = fr.readlines()
fr.close()

while lines and interrupt != 'yes':
    print "I read this line: %s" % lines.pop()

if len(lines) > 0:
    print "Crap! There are still lines"
fw = open('q0.txt', 'w')
fw.writelines(lines)
fw.close()
Now, that code is going to block your terminal until you type yes.
PS: Don't forget to close your opened files (or, if you don't want to call close() explicitly, use the with statement).
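For example, the read at the top of the snippet could be written like this, and the file is then closed automatically:

# Equivalent to the fr = open(...) / fr.readlines() / fr.close() sequence above.
with open('q0.txt', 'r') as fr:
    lines = fr.readlines()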
EDIT (as per the OP's comments clarifying my misunderstanding):
If what you want is to ensure that the file will not contain the already processed line if your script suddenly stops, an inefficient (but straightforward) way to accomplish that is:
Open the file for reading (you need a separate open for reading and for writing, and opening with 'w' truncates the file, so only do that after you have read the lines)
Load all the file's lines into a variable
Process the first line
Remove that line from the list variable
Open the file for writing and write the remaining list back to it
Repeat until no more lines are loaded.
All this opening and closing of files is really, really inefficient, but here it goes:
done = False
while not done:
    with open("q0.txt", 'r') as fr:
        lines = fr.readlines()
    if len(lines) > 0:
        print lines[0]  # This would be your processing
        del lines[0]
        with open("q0.txt", 'w') as fw:
            fw.writelines(lines)
    else:
        done = True

Related

Error in looping through a text file in python

I am trying to loop through a text file and apply some logic but I am not able to loop through the text file. So currently I have a text file that is structured like this:
--- section1 ---
"a","b","c"
"d","e","f"
--- section2 ---
"1","2","3"
"4","5","6"
--- section3 ---
"12","12","12"
"11","11","11"
I am trying to filter out the first line which contains '---' and convert the lines below into JSON until the next '---' line appears in the text document.
However I got this error: "fields1 = next(file).split(',') StopIteration"
with open(fileName,'r') as file:
    for line in file:
        if line.startswith('-') and 'section1' in line:
            while '---' not in next(file):
                fields1 = next(file).split(',')
                for x in range(0,len(fields1)):
                    testarr.append({
                        config.get('test','test'): fields1[x]
                    })
                with open(test_dir,'w') as test_file:
                    json.dump(testarr, test_file)
Any idea why my code is not working or how I can solve the error?
The cause of your error is that you are misusing the file object generator by calling next on it twice as often as you think. Each call to next gets a line and returns it. Therefore, while '---' not in next(file): fields1 = next(file).split(',') gets one line, checks it for ---, then gets another line and tries to parse that one. This means you can skip a line containing --- whenever it comes up in the second next. In that case you reach the end of the file before you find the line you are looking for, and StopIteration is how iterators normally indicate that their input has been exhausted.
There are a couple of other issues you may want to address in your code:
Using next on a generator such as a file when you are already inside a for loop over it can lead to confusing, hard-to-predict behavior. You may be getting away with it this time, but it is not good practice in general. The main reason you are getting away with it, by the way, is probably that you never actually return control to the for loop once the while is triggered, not that files are particularly permissive in this regard.
The inner with that dumps your data to a file is inside your while loop. That means that the file you open with 'w' permissions will get truncated for every iteration of the while (i.e., each line in the file). As the array grows, the output will actually appear fine, but you probably want to move that out of the inner loop.
The simplest solution would be to rewrite the code in two loops: one to find the start of the part you care about, and the other to process it until the end is found.
Something like this:
testarr = []
with open(fileName, 'r') as file:
    for line in file:
        if line.startswith('---') and 'section1' in line:
            break
    for line in file:
        if '---' in line:
            break
        fields1 = line.split(',')
        for item in fields1:
            testarr.append({config.get('test','test'): item})
with open(test_dir,'w') as test_file:
    json.dump(testarr, test_file)
EDIT:
Taking @tripleee's advice, I have removed the regex check for the start line. While a regex gives great precision and flexibility for finding a specific pattern, it is really overkill for this example. I would like to point out that if you are looking for a section other than section1, or if section1 appears after some other lines with dashes, you will absolutely need this two-loop approach. The one-loop solutions in the other answers will not work in a non-trivial case.
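For instance, the two-loop approach can be wrapped in a small helper that takes the section name as a parameter (read_section and its field clean-up are just an illustrative sketch, not code from the question):

def read_section(fileName, section):
    # Illustrative helper: return the comma-separated fields of one named section.
    fields = []
    with open(fileName, 'r') as f:
        for line in f:
            if line.startswith('---') and section in line:
                break  # found the header of the wanted section
        for line in f:
            if '---' in line:
                break  # reached the next section header
            fields.extend(x.strip().strip('"') for x in line.rstrip('\n').split(','))
    return fields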
Looks like you are overcomplicating matters massively. The next calls inside the inner while loop are, I imagine, tripping up the outer for loop, but they are unnecessary anyway. You are already looping over lines; pick the ones you want, then quit when you're done.
with open(fileName,'r') as inputfile:
    for line in inputfile:
        if line.startswith('-') and 'section1' in line:
            continue
        elif line.startswith('-'):
            break
        else:
            testarr.append({config.get('test', 'test'): x
                            for x in line.split(',')})
with open(test_dir,'w') as test_file:
    json.dump(testarr, test_file)
I hope I got the append right, as I wanted to also show you how to map the split fields more elegantly, but I'm not sure I completely understand what your original code did. (I'm guessing you'll want to trim the \n off the end of the line before splitting it, actually. Also, I imagine you want to trim the quotes from around each value. x.strip('"') for x in line.rstrip('\n').split(','))
I also renamed file to inputfile to avoid shadowing the built-in name file.
If you want to write more files, basically, add more states in the loop and move the write snippet back inside the loop. This is essentially a small state machine: with two states, you are either skipping or collecting; to extend it, handle the boundary where you flip back, writing out the collected data and reinitialising the collected lines to an empty list, as in the sketch below.
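A rough sketch of that state machine, writing each section's rows to its own JSON file (the per-section file naming is made up for illustration, and fileName is assumed to be defined as above):

import json

def flush(section, rows):
    # Hypothetical helper: dump one section's collected rows as JSON.
    if section is not None and rows:
        with open(section.strip('- \n') + '.json', 'w') as out:
            json.dump(rows, out)

current = None  # header of the section being collected, or None while skipping
rows = []
with open(fileName, 'r') as inputfile:
    for line in inputfile:
        if line.startswith('-'):   # boundary: flush the previous section, start a new one
            flush(current, rows)
            current, rows = line, []
        elif current is not None:  # collecting state
            rows.append([x.strip('"') for x in line.rstrip('\n').split(',')])
    flush(current, rows)           # flush the final section at end of file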
next() raises a StopIteration exception when the iterator is exhausted. In other words, your code gets to the end of the file, and you call next() again, and there's nothing more for it to return, so it raises that exception.
As for how to solve your problem, I think this might be what you want:
with open(fileName, 'r') as file:
    for line in file:
        if line.startswith('---'):
            if 'section1' in line:
                continue
            else:
                break
        fields1 = line.split(',')
        for x in range(len(fields1)):
            testarr.append({
                config.get('test', 'test'): fields1[x]
            })
with open(test_dir, 'w') as test_file:
    json.dump(testarr, test_file)

Write a program in Python 3.5 that reads a file, then writes a different file with the same text that was in the first one as well as more?

The exact question to this problem is:
Create a file with 20 lines of text and name it “lines.txt”. Write a program to read the file “lines.txt” and write the text to a new file, “numbered_lines.txt”, that will also have line numbers at the beginning of each line.
Example:
Input file: “lines.txt”
Line one
Line two
Expected output file:
1 Line one
2 Line two
I am stuck, and this is what I have so far. I am a true beginner to Python and my instructor does not make things very clear. Critique and help much appreciated.
file_object=open("lines.txt",'r')
for ln in file_object:
    print(ln)
count=1
file_input=open("numbered_lines.txt",'w')
for Line in file_object:
    print(count,' Line',(str))
    count=+1
file_object.close
file_input.close
All I get for output is the .txt file I created stating lines 1-20. I am very stuck and honestly have very little idea about what I am doing. Thank you
You have all the right parts, and you're almost there:
When you do
for ln in file_object:
    print(ln)
you've exhausted the contents of that file, and you won't be able to read them again, like you try to do later on.
Also, print does not write to a file, you want file_input.write(...)
This should fix all of that:
infile = open("lines.txt", 'r')
outfile = open("numbered_lines.txt", 'w')
line_number = 1
for line in infile:
    outfile.write(str(line_number) + " " + line)
    line_number += 1
infile.close()
outfile.close()
However, here is a more pythonic way to do it:
with open("lines.txt") as infile, open("numbered_lines.txt", 'w') as outfile:
for i, line in enumerate(infile, 1):
outfile.write("{} {}".format(i, line))
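Here enumerate(infile, 1) starts the counter at 1 instead of the default 0, so the numbering matches the expected output.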
Good first try, and with that, I can go through your code and explain what you did right (or wrong)
file_object=open("lines.txt",'r')
for ln in file_object:
    print(ln)
This is fine, though generally you want to put a space before and after an assignment (you are assigning the result of open to file_object) and add a space after a comma when separating arguments, so you might want to write it like so:
file_object = open("lines.txt", 'r')
for ln in file_object:
    print(ln)
However, at this point the internal position of file_object has reached the end of the file, so if you wish to reuse the same object, you need to seek back to the beginning, which is position 0. As your assignment only states to write to the file (and not to the screen), the above loop should be omitted from the program (but I get what you want to do, you want to see the contents of the file immediately, though sometimes instructors are pretty strict about what they accept). Moving on:
count=1
file_input=open("numbered_lines.txt",'w')
for Line in file_object:
Looks pretty normal so far, again with minor formatting issues. In Python, we typically name variables in lower case, as capitalized names are generally reserved for class names (if you wish, you may read about them). Now we come to the body of the loop you wrote:
print(count,' Line',(str))
This does not print quite what you want. As ' Line' is enclosed in quotes, it is treated as a string literal, so it is printed literally as text and not evaluated as code. Given that you had assigned Line, you want to take out the quotes. The (str) at the end simply prints the built-in str type object, which is definitely not what you want. Also, you forgot to specify the file you want to print to. By default print writes to the screen, but you want it to go to the numbered_lines.txt file which you had opened and assigned to file_input. We will correct this later.
count=+1
If you space this out, it reads count = +1, i.e. you are assigning positive one to count. I am guessing you wanted to use the += operator to increment it. Remember this on your quizzes/tests.
Finally:
file_object.close
file_input.close
These are meant to be called as functions: you need to invoke them by adding parentheses at the end (with any arguments inside, but as close takes no arguments, there will be nothing inside the parentheses). Putting everything together, the complete corrected code for your program looks like this:
file_object = open("lines.txt", 'r')
count = 1
file_input = open("numbered_lines.txt", 'w')
for line in file_object:
    print(count, line, file=file_input)
    count += 1
file_object.close()
file_input.close()
Run the program. You will notice that there is an extra empty line between every line of text. This is because by default the print function adds a newline character at the end, and the line you got from the file already includes a newline at the end (that's what makes them lines, right?), so we don't need to add our own here. You can change the added ending to an empty string, in which case that line will look like this:
print(count, line, file=file_input, end='')
Naturally, other Python programmers will tell you that there are Pythonic ways, but you are just starting out, don't worry too much about them (although you can definitely pick up on this later and I highly encourage you to!)
The right way to open a file is using a with statement:
with open("lines.txt",'r') as file_object:
... # do something
That way, the context manager introduced by with will close your file at the end of the "do something" block, even if an exception is raised.
Of course, you can close the file yourself if you are not familiar with that. Note that close is a method: to call it you need parentheses:
file_object.close()
See the chapter 7.2. Reading and Writing Files, in the official documentation.
In the first loop you're printing the contents of the input file. This means that the file contents have already been consumed when you get to the second loop. (Plus the assignment didn't ask you to print the file contents.)
In the second loop you're using print() instead of writing to a file. Try file_input.write(str(count) + " " + Line) (And file_input seems like a bad name for a file that you will be writing to.)
count=+1 sets count to +1, i.e. positive one. I think you meant count += 1 instead.
At the end of the program you're calling .close instead of .close(). The parentheses are important!
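Putting those fixes together (and using with so the files are closed for you), a corrected version might look roughly like this:

count = 1
with open("lines.txt", 'r') as source, open("numbered_lines.txt", 'w') as destination:
    for line in source:
        destination.write(str(count) + " " + line)
        count += 1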

Reading large files in a loop

I'm having some trouble dealing with large text files (about 1GB), when I want to read them and use them in while loops.
More specifically: First I start by doing some parsing on the lines of the file, in order to find e.g. all lines that start with "x". In doing so, I add the indices of the found lines to a list (say l). This is the pre-processing part.
Now in a while loop, I'm choosing random indices from l, and want to read its corresponding line (or say 5 lines around it). Thus I need to keep the file in memory once and for all throughout the while loop, as a priori I do not know what lines I end up reading (the line is randomly picked from l).
The problem is, when I open the file before my main loop, the reading gets done successfully during the first run of the loop, but from the second run onward the file seems to have vanished from memory. What I have tried:
The preprocess part:
for i, line in enumerate(filename):
    prep = ''.join(c for c in line if c.isalnum() or c.isspace())
    if 'x' in prep: l.append(i)
Now I have my l list. Loading the file in memory before the main loop:
with open(filename,'r') as f:
    while (some condition):
        random_index = random.sample(range(0,len(l)),1)
        output_file = open("out","w") #I will write here the read line(s)
        for i, line in enumerate(f):
            #(the lines to be read, starting from the given random index)
            if (i >= l[random_index]) and (i < l[random_index+1]):
                out.write(line)
        out.close()
Only during the first run of the loop, things work properly.
Alternatively I also tried:
f = open(filename)
while (some condition):
    random_index = ... #rest is same as above.
Same issue; only the first run works. One thing that did work was putting the f = open(filename) inside the loop, so the file is reopened on every run. But since it is a large file, this is really not a practical solution.
What am I doing wrong here?
How should such readings be done properly?
What am I doing wrong here?
This answer addresses the same problem: you can't read a file twice without rewinding it.
You open the file f outside of the while loop and read it completely by iterating for i, line in enumerate(f): during the first iteration of the while loop. During the second iteration you can't read it again, since it has already been read and the file position is at the end.
How should such readings be done properly?
As noted in the linked answer:
To answer your question directly, once a file has been read with read() you can use seek(0) to return the read cursor to the start of the file (see the docs).
That means that to solve your problem you can add f.seek(0) at the end of the while loop to move the pointer back to the start of the file after each iteration. Doing this you can reread the file from the start again.
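As a minimal illustration of the effect (assuming filename is defined as in the question):

with open(filename, 'r') as f:
    for attempt in range(3):          # stands in for the outer while loop
        count = sum(1 for line in f)  # this pass reads the file to the end
        print(count)                  # prints the same count every time...
        f.seek(0)                     # ...because we rewind before the next pass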

Iterate over the same line while reading lines from a file in python

I am reading each line of a file and performing some operations on it. Sometimes the program throws an error due to some strange behavior in the network (it does SSH to a remote machine). This occurs once in a while. I want to catch this error and perform the same operations again on the same line. To be specific, I want to read the same line again. I am looking for something like this:
with open (file_name) as f:
    for line in f:
        try:
            do this
        except IndexError:
            go back and read the same line again from the file.
As long as you’re within the block of your for loop, you still have access to that line (unless you modified it knowingly, of course). So you don’t actually need to reread it from the file; you still have it in memory.
You could, for example, try to “do this” repeatedly until it succeeds, like this:
for line in f:
    while True:
        try:
            print(line)
            doThis()
        except IndexError:
            # we got an error, so let’s rerun this inner while loop
            pass
        else:
            # if we don’t get an error, abort the inner while loop
            # to get to the next line
            break
You don't need to re-read the line. The line variable is holding your line. What you want to do is retry your operation in case it fails. One way would be to use a function and call it from within itself whenever it fails.
def do(line):
    try:
        pass # your "do this" code here
    except IndexError:
        do(line)

with open (file_name) as f:
    for line in f:
        do(line)
Python does not have a 'repeat' keyword that resets the execution pointer to the beginning of the current iteration. Your best approach is probably to look again at the structure of your code and break down 'do this' into a function that retries until it completes.
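For instance (process_line here is a hypothetical stand-in for your 'do this' code):

def process_with_retry(line, max_attempts=5):
    # Keep retrying the flaky operation on the same line, up to a limit.
    for attempt in range(max_attempts):
        try:
            return process_line(line)  # hypothetical stand-in for "do this"
        except IndexError:
            continue                   # transient failure: try the same line again
    raise RuntimeError("giving up on line: %r" % line)

with open(file_name) as f:
    for line in f:
        process_with_retry(line)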
But if you are really set on emulating a repeat keyword as closely as possible, we can implement this by wrapping the file object in a generator.
Rather than looping directly over the file, define a generator that yields from the file one line at a time, with a repeat option.
def repeating_generator(iterator_in):
    for x in iterator_in:
        repeat = True
        while repeat:
            repeat = yield x
            yield
Your file object can be wrapped with this generator. We pass a flag back into the generator telling it whether to repeat the previous line, or continue to the next one...
with open (file_name) as f:
    r = repeating_generator(f)
    for line in r:
        try:
            #do this
            r.send(False) # Don't repeat
        except IndexError:
            r.send(True) #go back and read the same line again from the file.
Take a look at this question to see what's happening here. I don't think this is the most readable way of doing things, so consider the alternatives first! Note that you will need Python 2.7 or later to be able to use this.

Python truncate lines as they are read

I have an application that reads lines from a file and runs its magic on each line as it is read. Once the line is read and properly processed, I would like to delete the line from the file. A backup of the removed line is already being kept. I would like to do something like
file = open('myfile.txt', 'rw+')
for line in file:
    processLine(line)
    file.truncate(line)
This seems like a simple problem, but I would like to do it right rather than with a whole lot of complicated seek() and tell() calls.
Maybe all I really want to do is remove a particular line from a file.
After spending far too long on this problem, I decided that everyone was probably right and this is just not a good way to do things. It just seemed like such an elegant solution. What I was looking for was something akin to a FIFO that would just let me pop lines out of a file.
Remove all lines after you're done with them:
with open('myfile.txt', 'r+') as file:
    for line in file:
        processLine(line)
    file.truncate(0)
Remove each line independently:
lines = open('myfile.txt').readlines()
for line in lines[::-1]: # process lines in reverse order
    processLine(line)
    del lines[-1] # remove the [last] line
    open('myfile.txt', 'w').writelines(lines)
You can leave only those lines that cause exceptions:
import fileinput, sys

for line in fileinput.input(['myfile.txt'], inplace=1):
    try:
        processLine(line)
    except Exception:
        sys.stdout.write(line) # it prints to 'myfile.txt'
In general, as other people have already said, what you are trying to do is a bad idea.
You can't. It is just not possible with actual text file implementations on current filesystems.
Text files are sequential, because the lines in a text file can be of any length.
Deleting a particular line would mean rewriting the entire file from that point on.
Suppose you have a file with the following lines:
'line1\nline2reallybig\nline3\nlast line'
To delete the second line you'd have to move the third and fourth lines' positions on the disk. The only way would be to store the third and fourth lines somewhere, truncate the file where the second line starts, and rewrite the missing lines.
If you know the size of every line in the text file, you can truncate the file in any position using .truncate(line_size * line_number) but even then you'd have to rewrite everything after the line.
You're better off keeping an index into the file so that you can start where you stopped last, without destroying part of the file. Something like this would work:
try:
    for index, line in enumerate(file):
        processLine(line)
except:
    # Failed, start from this line number next time.
    print(index)
    raise
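To resume on the next run, you could skip the lines that already succeeded, for example with itertools.islice (the saved start value here is hypothetical):

from itertools import islice

start = 42  # hypothetical: the index printed by the interrupted run
with open('myfile.txt') as file:
    for line in islice(file, start, None):  # skip the lines processed last time
        processLine(line)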
Truncating the file as you read it seems a bit extreme. What if your script has a bug that doesn't cause an error? In that case you'll want to restart at the beginning of your file.
How about having your script print the line number it breaks on and having it take a line number as a parameter so you can tell it which line to start processing from?
First of all, calling the operation truncate is probably not the best pick. If I understand the problem correctly, you want to delete everything up to the current position in the file. (I would expect truncate to cut everything from the current position up to the end of the file; this is how the standard Python truncate method works, at least if I Googled correctly.)
Second, I am not sure it is wise to modify the file while iterating over it with the for loop. Wouldn’t it be better to save the number of lines processed and delete them after the main loop has finished, exception or not? The file iterator supports in-place filtering, which means it should be fairly simple to drop the processed lines afterwards.
P.S. I don’t know Python, take this with a grain of salt.
A related post has what seems like a good strategy for this; see
How can I run the first process from a list of processes stored in a file and immediately delete the first line as if the file was a queue and I called "pop"?
I have used it as follows:
import os

tasklist_file = open(tasklist_filename, 'rw')
first_line = tasklist_file.readline()
temp = os.system("sed -i -e '1d' " + tasklist_filename) # remove first line from task file
I'm not sure it works on Windows.
Tried it on a mac and it did do the trick.
This is what I use for file based queues. It returns the first line and rewrites the file with the rest. When it's done it returns None:
def pop_a_text_line(filename):
    with open(filename,'r') as f:
        S = f.readlines()
    if len(S) > 0:
        pop = S[0]
        with open(filename,'w') as f:
            f.writelines(S[1:])
    else:
        pop = None
    return pop
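For example, it can be used as a simple consumer loop (processLine as in the question):

# Keep popping lines off the front of the file until it is empty.
while True:
    line = pop_a_text_line('myfile.txt')
    if line is None:
        break
    processLine(line)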
