Python ValueError

So I keep receiving this error:
ValueError: Mixing iteration and read methods would lose data
And 1) I don't quite understand why I'm receiving it, and 2) people with similar problems seem to be doing things with their code that are much more complex than a beginner (like myself) can adapt to.
The idea of my code is to read a data_file.txt and convert each line into its own individual array.
so far I have this:
array = []  # declaring a list with name 'array'
with open('file.txt', 'r') as input_file:
    for line in input_file:
        line = input_file.readlines()
        array.append(line)
        print('done 1')  # for test purposes
return array
And I keep receiving an error.
"Value error: Mixing iteration and read methods would lose data "message while extracting numbers from a string from a .txt file using python
The above question seemed to be doing something similar, reading items into an array, but his code was skipping lines and using a range to pull in certain parts of the file, which I don't need. All I need is to read in all the lines and have them made into an array.
Python: Mixing files and loops
In this question, once again, something well beyond my level was being asked. From what I understood, he just wanted code that would restart after an error and continue, and the answers were about that part. Once again, not what I'm looking for.

The error is pretty much self-explanatory (once you know what it is about), so here goes.
You start with the loop for line in input_file:. File objects are iterable in Python. They iterate over the lines in the file. This means that for each iteration of the loop, line will contain the next line in your file.
Next you read from the file manually with line = input_file.readlines(). This attempts to read all the remaining lines of the file at once, but you are already iterating over the lines in the for loop.
Files are usually read sequentially, with no going backwards, so what you end up with is a conflict. If you consume lines with a read method like readlines, the loop's iterator would be forced to return some later line on its next pass, since it cannot go back; yet it is promising to return the next line. The error is telling you that the read method knows there is an active iterator and that calling it would interfere with the loop.
If you take out line = input_file.readlines(), the loop will do what you expect it to.
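For reference, here is a minimal sketch of the corrected loop with the conflicting readlines() call removed (keeping the test print from the question):
array = []
with open('file.txt', 'r') as input_file:
    for line in input_file:   # the iterator alone supplies each line
        array.append(line)
        print('done 1')       # for test purposes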

To make an array of the lines of the file, with one line per array element:
with open('file.txt', 'r') as input_file:
    array = input_file.readlines()
return array
since readlines will give you the whole file in one shot. Alternatively,
return list(open('file.txt','r'))
will do the same per the docs.

Related

Python pointers

I was asked to write a program to find the string "error" in a file and print the matched lines in Python.
First I open the file in read mode,
then I use fh.readlines() and store the result in a variable.
After this, I use a for loop to iterate line by line, check for the string "error", and print those lines if found.
I was asked to use pointers in Python, since assigning the file content to a variable takes time when the logfile contains huge output.
I did research on Python pointers but found nothing useful.
Could anyone help me write the above code using pointers instead of storing the whole content in a variable?
There are no pointers in Python. Something pointer-like can be implemented, but it is not worth the effort for your case.
As pointed out in the solution of this link,
Read large text files in Python, line by line without loading it in to memory.
You can use something like:
with open("log.txt") as infile:
for line in infile:
if "error" in line:
print(line.strip()) .
The context manager will close the file automatically, and this reads only one line at a time. When the next line is read, the previous one will be garbage collected unless you have stored a reference to it somewhere else.
You can use a dictionary of key-value pairs. Just load the log file into a dictionary where the keys are words and the values are line numbers. If you then search for the string "error" you get the line numbers where it appears, and you can print those lines accordingly. Since searching a dictionary or hashtable takes constant time O(1), lookup is fast; but building the index takes time up front, and collisions can slow the storing down.
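A rough sketch of that index idea (build_word_index is a hypothetical helper, not from the answer; it trades up-front memory for fast lookups):
from collections import defaultdict

def build_word_index(filename):
    index = defaultdict(list)  # word -> line numbers where it appears
    with open(filename) as f:
        for lineno, line in enumerate(f, start=1):
            for word in line.split():
                index[word].append(lineno)
    return index

# after the one-time build, each lookup is O(1) on average:
# build_word_index("log.txt").get("error", [])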
I used the code below instead of putting the data in a variable and then looping over it.
for line in open('c182573.log', 'r').readlines():
    if 'Executing' in line:
        print(line)
So there is no way to implement pointers or references in Python.
Thanks all
There are no pointers in Python.
Something pointer-like can be implemented, but for your case it's not required.
Try the code below:
with open('test.txt') as f:
    content = f.readlines()
for i in content:
    if "error" in i:
        print(i.strip())
If you want to understand Python variables as pointers, go to this link:
http://scottlobdell.me/2013/08/understanding-python-variables-as-pointers/

Error in looping through a text file in python

I am trying to loop through a text file and apply some logic but I am not able to loop through the text file. So currently I have a text file that is structured like this:
--- section1 ---
"a","b","c"
"d","e","f"
--- section2 ---
"1","2","3"
"4","5","6"
--- section3 ---
"12","12","12"
"11","11","11"
I am trying to filter out the first line, which contains '---', and convert the lines below it into JSON until the next '---' line appears in the text document.
However I got this error on fields1 = next(file).split(','): StopIteration
with open(fileName, 'r') as file:
    for line in file:
        if line.startswith('-') and 'section1' in line:
            while '---' not in next(file):
                fields1 = next(file).split(',')
                for x in range(0, len(fields1)):
                    testarr.append({
                        config.get('test', 'test'): fields1[x]
                    })
                with open(test_dir, 'w') as test_file:
                    json.dump(testarr, test_file)
Any idea why my code is not working or how I can solve the error?
The cause of your error is that you are misusing the file object's generator by calling next on it twice as often as you think. Each call to next gets a line and returns it. Therefore, while '---' not in next(file): fields1 = next(file).split(',') gets a line, checks it for ---, then gets another line and tries to parse it. This means you can skip a line containing --- entirely when it comes up in the second next. In that case you will reach the end of the file before finding the line you are looking for, and StopIteration is how iterators normally indicate that their input has been exhausted.
There are a couple of other issues you may want to address in your code:
Using next on a generator like a file when you are already inside a for loop may cause undefined behavior. You may be getting away with it this time, but it is not good practice in general. The main reason you are getting away with it, by the way, is possibly that you never actually return control to the for loop once the while is triggered, and not that files are particularly permissive in this regard.
The inner with that dumps your data to a file is inside your while loop. That means that the file you open with 'w' permissions will get truncated for every iteration of the while (i.e., each line in the file). As the array grows, the output will actually appear fine, but you probably want to move that out of the inner loop.
The simplest solution would be to rewrite the code in two loops: one to find the start of the part you care about, and the other to process it until the end is found.
Something like this:
testarr = []
with open(fileName, 'r') as file:
    for line in file:
        if line.startswith('---') and 'section1' in line:
            break
    for line in file:
        if '---' in line:
            break
        fields1 = line.split(',')
        for item in fields1:
            testarr.append({config.get('test', 'test'): item})
with open(test_dir, 'w') as test_file:
    json.dump(testarr, test_file)
EDIT:
Taking @tripleee's advice, I have removed the regex check for the start line. While regex gives great precision and flexibility for finding a specific pattern, it is really overkill for this example. I would like to point out that if you are looking for a section other than section1, or if section1 appears after some other lines with dashes, you will absolutely need this two-loop approach. The one-loop solutions in the other answers will not work in a non-trivial case.
Looks like you are overcomplicating matters massively. The next inside the inner while loop I imagine is tripping up the outer for loop, but that's just unnecessary anyway. You are already looping over lines; pick the ones you want, then quit when you're done.
with open(fileName, 'r') as inputfile:
    for line in inputfile:
        if line.startswith('-') and 'section1' in line:
            continue
        elif line.startswith('-'):
            break
        else:
            testarr.append({config.get('test', 'test'): x
                            for x in line.split(',')})
with open(test_dir, 'w') as test_file:
    json.dump(testarr, test_file)
I hope I got the append right, as I wanted to also show you how to map the split fields more elegantly, but I'm not sure I completely understand what your original code did. (I'm guessing you'll want to trim the \n off the end of the line before splitting it, and to trim the quotes from around each value: x.strip('"') for x in line.rstrip('\n').split(','))
I also renamed file to inputfile to avoid shadowing the built-in name file.
If you want to write more files, add more states in the loop and move the write snippet back inside it. I don't particularly want to explain how this is equivalent to a state machine, but it should not be hard to understand: with two states, you are either skipping or collecting; to extend this, add one more state for the boundary when flipping back, where you write out the collected data and reinitialize the collected lines to none. A sketch follows below.
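A rough sketch of that state-machine extension, assuming every section should land in its own JSON file (the section-name parsing and the output naming are guesses, not from the original):
import json

fileName = 'input.txt'  # assumed input, structured like the question's example
sections = {}           # section name -> collected values
current = None          # section being collected, or None before the first header

with open(fileName, 'r') as inputfile:
    for line in inputfile:
        if line.startswith('-'):
            # boundary state: flush the previous section before flipping back
            if current is not None:
                with open(current + '.json', 'w') as out:
                    json.dump(sections[current], out)
            current = line.strip('- \n')  # '--- section1 ---' -> 'section1'
            sections[current] = []
        elif current is not None:
            # collecting state: drop the newline and the quotes around each value
            sections[current].extend(
                x.strip('"') for x in line.rstrip('\n').split(','))

# flush the final section, which has no dashed line after it
if current is not None:
    with open(current + '.json', 'w') as out:
        json.dump(sections[current], out)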
next() raises a StopIteration exception when the iterator is exhausted. In other words, your code gets to the end of the file, and you call next() again, and there's nothing more for it to return, so it raises that exception.
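A tiny illustration of that behaviour with a plain iterator:
it = iter(['--- section1 ---\n'])
next(it)  # returns the only element
next(it)  # raises StopIteration: nothing left to return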
As for how to solve your problem, I think this might be what you want:
with open(fileName, 'r') as file:
    for line in file:
        if line.startswith('---'):
            if 'section1' in line:
                continue
            else:
                break
        fields1 = line.split(',')
        for x in range(len(fields1)):
            testarr.append({
                config.get('test', 'test'): fields1[x]
            })
with open(test_dir, 'w') as test_file:
    json.dump(testarr, test_file)

Reading large files in a loop

I'm having some trouble dealing with large text files (about 1 GB) when I want to read them and use them in while loops.
More specifically: First I start by doing some parsing on the lines of the file, in order to find e.g. all lines that start with "x". In doing so, I add the indices of the found lines to a list (say l). This is the pre-processing part.
Now in a while loop, I'm choosing random indices from l, and want to read the corresponding line (or, say, the 5 lines around it). Thus I need to keep the file available throughout the while loop, as a priori I do not know which lines I will end up reading (the line is randomly picked from l).
The problem is that when I open the file before my main loop, the reading succeeds during the first run of the loop, but from the second run onwards the file seems to have vanished from memory. What I have tried:
The preprocess part:
for i, line in enumerate(filename):
    prep = ''.join(c for c in line if c.isalnum() or c.isspace())
    if 'x' in prep:
        l.append(i)
Now I have my l list. Loading the file before the main loop:
with open(filename, 'r') as f:
    while (some condition):
        random_index = random.sample(range(0, len(l)), 1)
        output_file = open("out", "w")  # I will write the read line(s) here
        for i, line in enumerate(f):
            # (the lines to be read, starting from the given random index)
            if (i >= l[random_index]) and (i < l[random_index + 1]):
                output_file.write(line)
        output_file.close()
Only during the first run of the loop, things work properly.
Alternatively I also tried:
f = open(filename)
while (some condition):
    random_index = ...  # rest is same as above
Same issue; only the first run works. One thing that worked was putting f = open(filename) inside the loop, so the file is reopened on every run. But since it is a large file, this is really not a practical solution.
What am I doing wrong here?
How should such readings be done properly?
What am I doing wrong here?
This answer addresses the same problem: you can't read a file twice.
You open the file f outside of the while loop and read it completely by running for i, line in enumerate(f): during the first iteration of the while loop. During the second iteration you can't read it again, since it has already been read.
How should such readings be done properly?
As noted in the linked answer:
To answer your question directly, once a file has been read, with read() you can use seek(0) to return the read cursor to the start of the file (docs are here).
That means that to solve your problem you can add f.seek(0) at the end of the while loop, moving the pointer back to the start of the file after each iteration. Doing this you can reread the file from the start each time.
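A minimal sketch of the fix (the file name, the loop count, and the 'x' test are placeholders):
with open('data.txt', 'r') as f:
    for attempt in range(3):  # stands in for the while loop
        matches = [i for i, line in enumerate(f) if line.startswith('x')]
        print(attempt, len(matches))  # same result on every pass
        f.seek(0)  # rewind so the next iteration can re-read the file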

iterative variable losing value in nested loop

So I seem to be doing something incredibly dumb and I can't seem to figure it out. I am trying to create script that will search a file for terms defined in another file. This seems pretty basic to me but for some reason the outside loop iteration is empty on the inside loop.
import re
import sys

if __name__ == "__main__":
    searchfile = open(sys.argv[1], "r")
    terms = open(sys.argv[2], "r")
    for line in searchfile:
        for term in terms:
            if re.match(term, line.rstrip()):
                print(line)
If I print line before the term loop it has the information. If I print line inside the term loop, it doesn't. What am I missing?
The issue here is that files are iterators that get exhausted - this means that once they have been iterated over once, they will not restart from the beginning.
You are probably used to lists - iterables that return a new iterator each time you loop over them, from the beginning.
Files are single-use iterables - once you loop over them, they are exhausted.
You can either use list() to construct a list you can iterate over multiple times, or open the file inside the loop, so that it is reopened each time, creating a new iterator from the beginning.
Which option is best will vary depending on the use case. Opening the file and reading from disk will be slower, but making a list will require all the data being held in memory - if your file is extremely large, this may be a problem.
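A quick demonstration of the single-use behaviour (the file name is a placeholder):
f = open('terms.txt')
print(sum(1 for _ in f))  # number of lines in the file
print(sum(1 for _ in f))  # 0 -- the iterator is already exhausted
f.close()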
It's also worth noting that you should use the with statement when opening files in Python.
import re
import sys

with open(sys.argv[1], "r") as searchfile, open(sys.argv[2], "r") as terms:
    terms = list(terms)
    for line in searchfile:
        for term in terms:
            if re.match(term, line.rstrip()):
                print(line)
So what is happening: in the first iteration of the for loop you read the first line of searchfile and compare it with every line in terms, reading the terms file to its end. After that, terms has been read completely, so in every subsequent iteration of the searchfile loop the terms loop isn't executed any more (terms is 'empty').

Python truncate lines as they are read

I have an application that reads lines from a file and runs its magic on each line as it is read. Once the line is read and properly processed, I would like to delete the line from the file. A backup of the removed line is already being kept. I would like to do something like
file = open('myfile.txt', 'rw+')
for line in file:
    processLine(line)
    file.truncate(line)
This seems like a simple problem, but I would like to do it right rather than with a whole lot of complicated seek() and tell() calls.
Maybe all I really want to do is remove a particular line from a file.
After spending far too long on this problem I decided that everyone was probably right and this is just not a good way to do things. It just seemed such an elegant solution. What I was looking for was something akin to a FIFO that would just let me pop lines out of a file.
Remove all lines after you're done with them:
with open('myfile.txt', 'r+') as file:
    for line in file:
        processLine(line)
    file.truncate(0)
Remove each line independently:
lines = open('myfile.txt').readlines()
for line in lines[::-1]:  # process lines in reverse order
    processLine(line)
    del lines[-1]  # remove the [last] line
    open('myfile.txt', 'w').writelines(lines)
You can leave only those lines that cause exceptions:
import fileinput, sys

for line in fileinput.input(['myfile.txt'], inplace=1):
    try:
        processLine(line)
    except Exception:
        sys.stdout.write(line)  # it prints to 'myfile.txt'
In general, as other people have already said, what you are trying to do is a bad idea.
You can't. It is just not possible with actual text file implementations on current filesystems.
Text files are sequential, because the lines in a text file can be of any length.
Deleting a particular line would mean rewriting the entire file from that point on.
Suppose you have a file with the following 4 lines:
'line1\nline2reallybig\nline3\nlast line'
To delete the second line you'd have to move the third and fourth lines' positions on the disk. The only way would be to store the third and fourth lines somewhere, truncate the file at the second line, and rewrite the missing lines.
If you know the size of every line in the text file, you can truncate the file in any position using .truncate(line_size * line_number) but even then you'd have to rewrite everything after the line.
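To make that rewrite concrete, here is a sketch of deleting one line this way (delete_line is a hypothetical helper, not from the answer):
def delete_line(path, lineno):
    # delete the line at index lineno by rewriting the tail of the file
    with open(path, 'r+') as f:
        for _ in range(lineno):
            f.readline()      # skip past the lines we keep
        start = f.tell()      # offset where the doomed line begins
        f.readline()          # skip the line being deleted
        rest = f.read()       # store everything after it
        f.seek(start)
        f.write(rest)         # rewrite the stored lines over the gap
        f.truncate()          # cut off the leftover bytes at the end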
You're better off keeping an index into the file so that you can start where you stopped last, without destroying part of the file. Something like this would work:
try:
    for index, line in enumerate(file):
        processLine(line)
except:
    # Failed, start from this line number next time.
    print(index)
    raise
Truncating the file as you read it seems a bit extreme. What if your script has a bug that doesn't cause an error? In that case you'll want to restart at the beginning of your file.
How about having your script print the line number it breaks on and having it take a line number as a parameter so you can tell it which line to start processing from?
First of all, calling the operation truncate is probably not the best pick. If I understand the problem correctly, you want to delete everything up to the current position in file. (I would expect truncate to cut everything from the current position up to the end of the file. This is how the standard Python truncate method works, at least if I Googled correctly.)
Second, I am not sure it is wise to modify the file while iterating over it using the for loop. Wouldn't it be better to save the number of lines processed and delete them after the main loop has finished, exception or not? The fileinput module supports in-place filtering, which means it should be fairly simple to drop the processed lines afterwards.
P.S. I don’t know Python, take this with a grain of salt.
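A rough sketch of that "delete after the main loop" idea (processLine and the file name are assumptions carried over from the question):
processed = 0
try:
    with open('myfile.txt') as f:
        for line in f:
            processLine(line)  # assumed per-line handler from the question
            processed += 1
finally:
    # exception or not, drop the processed lines in one rewrite
    with open('myfile.txt') as f:
        remaining = f.readlines()[processed:]
    with open('myfile.txt', 'w') as f:
        f.writelines(remaining)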
A related post has what seems a good strategy to do that, see
How can I run the first process from a list of processes stored in a file and immediately delete the first line as if the file was a queue and I called "pop"?
I have used it as follows:
import os

tasklist_file = open(tasklist_filename, 'r')
first_line = tasklist_file.readline()
temp = os.system("sed -i -e '1d' " + tasklist_filename)  # remove first line from task file
I'm not sure it works on Windows.
Tried it on a Mac and it did the trick.
This is what I use for file-based queues. It returns the first line and rewrites the file with the rest. When the file is empty it returns None:
def pop_a_text_line(filename):
    with open(filename, 'r') as f:
        S = f.readlines()
    if len(S) > 0:
        pop = S[0]
        with open(filename, 'w') as f:
            f.writelines(S[1:])
    else:
        pop = None
    return pop
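For example, draining such a file-based queue might look like this (tasks.txt is a placeholder name):
line = pop_a_text_line('tasks.txt')
while line is not None:
    print(line, end='')  # stand-in for real per-line processing
    line = pop_a_text_line('tasks.txt')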
