Read a large text file without reading it into RAM at once - Python

I have a large text file, 2 GB or more. Of course I shouldn't use read().
I think readline() might be the way to go, but I don't know how to stop the loop at the end of the file.
I've tried this:
with open('test', 'r') as f:
    while True:
        try:
            f.readline()
        except:
            break
But when the file reaches the end, the loop doesn't stop and readline() keeps returning empty strings ('').

End of file is signalled by readline returning an empty string. Note that an actual empty line, like every line returned by readline, still ends with the line separator.
with open('test', 'r') as f:
    while True:
        line = f.readline()
        if line == "":
            break
But then again, a file object in Python is already iterable.
with open('test', 'r') as f:
    for line in f:
        print(line.strip())
strip removes whitespace, including the newline, so you don't print double newlines.
And if you don't care about playing it safe, and want the least code possible:
for l in open("text"): print(l.strip())
EDIT: strip removes all kinds of whitespace from both sides. If you actually just want to get rid of trailing newlines, you can use rstrip("\n").
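For example, a quick sketch of the difference (using a hypothetical sample string):
line = "  indented line\n"
print(repr(line.strip()))        # 'indented line' -- leading spaces removed as well
print(repr(line.rstrip("\n")))   # '  indented line' -- only the trailing newline removed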

You could just use a for statement instead of a while statement. You could do something like:
for line in f.readlines():
    print(line)
Might help.

Related

How to detect EOF when reading a file with readline() in Python?

I need to read the file line by line with readline() and cannot easily change that. Roughly it is:
with open(file_name, 'r') as i_file:
    while True:
        line = i_file.readline()
        # I need to check that EOF has not been reached, so that readline() really returned something
The real logic is more involved, so I can't read the file at once with readlines() or write something like for line in i_file:.
Is there a way to check readline() for EOF? Does it throw an exception maybe?
It was very hard to find the answer on the internet because the documentation search redirects to something non-relevant (a tutorial rather than the reference, or GNU readline), and the noise on the internet is mostly about the readlines() function.
The solution should work in Python 3.6+.
From the documentation:
f.readline() reads a single line from the file; a newline character (\n) is left at the end of the string, and is only omitted on the last line of the file if the file doesn’t end in a newline. This makes the return value unambiguous; if f.readline() returns an empty string, the end of the file has been reached, while a blank line is represented by '\n', a string containing only a single newline.
with open(file_name, 'r') as i_file:
    while True:
        line = i_file.readline()
        if not line:
            break
        # do something with line
Based on this, I suggest:
fp = open("input")
while True:
nstr = fp.readline()
if len(nstr) == 0:
break # or raise an exception if you want
# do stuff using nstr
As Barmar mentioned in the comments, readline "returns an empty string at EOF".
Empty strings returned in case of EOF evaluate to False, so this could be a nice use case for the walrus operator:
with open(file_name, 'r') as i_file:
    while line := i_file.readline():
        ...  # do something with line

Checking if a string is in a text file is not working

I am writing in Python 3.6 and am having trouble making my code match strings in a short text document. This is a simple example of the exact logic that is breaking my bigger program:
PATH = "C:\\Users\\JoshLaptop\\PycharmProjects\\practice\\commented.txt"
file = open(PATH, 'r')
words = ['bah', 'dah', 'gah', "fah", 'mah']
print(file.read().splitlines())
if 'bah' not in file.read().splitlines():
print("fail")
with the text document formatted like so:
bah
gah
fah
dah
mah
and it is indeed printing out fail each time I run this. Am I using the incorrect method of reading the data from the text document?
The issue is that you're calling print(file.read().splitlines()), which exhausts the file, so the next call to file.read().splitlines() returns an empty list...
A better way to "grep" your pattern would be to iterate over the file's lines instead of reading it fully. That way, if you find the string early in the file, you save time:
with open(PATH, 'r') as f:
    for line in f:
        if line.rstrip() == "bah":
            break
    else:
        # else is reached when no break is called from the for loop: fail
        print("fail")
The small catch here is not to forget to call line.rstrip(), because iterating over the file yields each line with its line terminator. Also, if there's a trailing space in your file, this code will still match the word (use strip() if you want to match even with leading blanks).
If you want to match a lot of words, consider creating a set of lines:
lines = {line.rstrip() for line in f}
so your membership checks (word in lines) will be a lot faster.
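A minimal sketch of that idea, reusing PATH and words from the question:

with open(PATH, 'r') as f:
    lines = {line.rstrip() for line in f}
missing = [w for w in words if w not in lines]  # each membership test is O(1) on average
if missing:
    print("fail", missing)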
Try this:
PATH = "C:\\Users\\JoshLaptop\\PycharmProjects\\practice\\commented.txt"
file = open(PATH, 'r')
words = file.read().splitlines()
print(words)
if 'bah' not in words:
print("fail")
You can't read the file two times.
When you do print(file.read().splitlines()), the file is read, and the next call to this function will return nothing because you are already at the end of the file.
PATH = "your_file"
file = open(PATH, 'r')
words = ['bah', 'dah', 'gah', "fah", 'mah']
if 'bah' not in (file.read().splitlines()) :
print("fail")
As you can see, the output is not 'fail'. You must call file.read().splitlines() only once in your code, or save its result in a variable; otherwise you get the 'fail' message.
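A minimal sketch of the exhaustion both answers describe (the file name is hypothetical):

file = open("commented.txt", 'r')
first = file.read().splitlines()   # all lines of the file
second = file.read().splitlines()  # [] -- the file position is already at the end
print(first, second)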

Python is adding an extra newline to the output

The input file: a.txt
aaaaaaaaaaaa
bbbbbbbbbbb
cccccccccccc
The python code:
with open("a.txt") as f:
for line in f:
print line
The problem:
aaaaaaaaaaaa

bbbbbbbbbbb

cccccccccccc
As you can see, the output has an extra line between each item.
How to prevent this?
print appends a newline, and the input lines already end with a newline.
A standard solution is to output the input lines verbatim:
import sys
with open("a.txt") as f:
    for line in f:
        sys.stdout.write(line)
PS: For Python 3 (or Python 2 with the print function), abarnert's print(…, end='') solution is the simplest one.
As the other answers explain, each line has a newline; when you print a bare string, print adds another newline at the end. There are two ways around this; everything else is a variation on the same two ideas.
First, you can strip the newlines as you read them:
with open("a.txt") as f:
for line in f:
print line.rstrip()
This will strip any other trailing whitespace, like spaces or tabs, as well as the newline. Usually you don't care about this. If you do, you probably want to use universal newline mode, and strip off the newlines:
with open("a.txt", "rU") as f:
for line in f:
print line.rstrip('\n')
However, if you know the text file will be, say, a Windows-newline file, or a native-to-whichever-platform-I'm-running-on-right-now-newline file, you can strip the appropriate endings explicitly:
with open("a.txt") as f:
for line in f:
print line.rstrip('\r\n')
with open("a.txt") as f:
for line in f:
print line.rstrip(os.linesep)
The other way to do it is to leave the original newline, and just avoid printing an extra one. While you can do this by writing to sys.stdout with sys.stdout.write(line), you can also do it from print itself.
If you just add a comma to the end of the print statement, instead of printing a newline, it adds a "smart space". Exactly what that means is a bit tricky, but the idea is supposed to be that it adds a space when it should, and nothing when it shouldn't. Like most DWIM algorithms, it doesn't always get things right—but in this case, it does:
with open("a.txt") as f:
for line in f:
print line,
Of course we're now assuming that the file's newlines match your terminal's—if you try this with, say, classic Mac files on a Unix terminal, you'll end up with each line printing over the last one. Again, you can get around that by using universal newlines.
Anyway, you can avoid the DWIM magic of smart space by using the print function instead of the print statement. In Python 2.x, you get this by using a __future__ declaration:
from __future__ import print_function
with open("a.txt") as f:
    for line in f:
        print(line, end='')
Or you can use a third-party wrapper library like six, if you prefer.
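For example, six exposes a print_ function; a minimal sketch, assuming it mirrors Python 3's print keyword arguments:

from six import print_

with open("a.txt") as f:
    for line in f:
        print_(line, end='')  # assumed to behave like print(line, end='') in Python 3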
What happens is that each line has a newline at the end, and the print statement in Python also adds a newline. You can strip the newlines:
with open("a.txt") as f:
for line in f:
print line.strip()
You could also try the splitlines() function; it strips the newlines automatically:
f = open('a.txt').read()
for l in f.splitlines():
    print l
It is not adding a newline, but each scanned line from your file has a trailing one.
Try:
with open("a.txt") as f:
    for line in (x.rstrip('\n') for x in f):
        print line

Remove whitespace at the beginning of every string in a file in Python?

How can I remove the whitespace at the beginning of every string in a file with Python?
I have a file myfile.txt with the strings as shown below in it:
_ _ Amazon.inc
Arab emirates
_ Zynga
Anglo-Indian
Those underscores are spaces.
The code must go through each and every line of the file and remove the whitespace at the beginning of each line.
I've tried using lstrip, but it doesn't work across multiple lines; I've tried readlines() too.
Would using a for loop make it better?
All you need to do is read the lines of the file one by one and remove the leading whitespace from each line. After that, you can join the lines again and you'll get back the original text without the whitespace:
with open('myfile.txt') as f:
    line_lst = [line.lstrip() for line in f.readlines()]
    lines = ''.join(line_lst)
    print lines
Assuming that your input data is in infile.txt, and you want to write this file to output.txt, it is easiest to use a list comprehension:
inf = open("infile.txt")
stripped_lines = [l.lstrip() for l in inf.readlines()]
inf.close()
# write the new, stripped lines to a file
outf = open("output.txt", "w")
outf.write("".join(stripped_lines))
outf.close()
To read the lines from myfile.txt and write them to output.txt, use
with open("myfile.txt") as input:
with open("output.txt", "w") as output:
for line in input:
output.write(line.lstrip())
That will make sure that you close the files after you're done with them, and it'll make sure that you only keep a single line in memory at a time.
The above code works in Python 2.5 and later because of the with keyword. For Python 2.4 you can use
input = open("myfile.txt")
output = open("output.txt", "w")
for line in input:
output.write(line.lstrip())
if this is just a small script where the files will be closed automatically at the end. If this is part of a larger program, then you'll want to explicitly close the files like this:
input = open("myfile.txt")
try:
output = open("output.txt", "w")
try:
for line in input:
output.write(line.lstrip())
finally:
output.close()
finally:
input.close()
You say you already tried lstrip and that it didn't work for multiple lines. The "trick" is to run lstrip on each individual line, like I do above. You can try the code out online if you want.
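A minimal sketch of why per-line lstrip matters, using a made-up sample string:

text = "  Amazon.inc\nArab emirates\n Zynga\nAnglo-Indian\n"
print(text.lstrip())  # only the spaces before the very first line are removed
print(''.join(line.lstrip() for line in text.splitlines(True)))  # every line is stripped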

End-line characters from lines read from text file, using Python

When reading lines from a text file using Python, the end-line character often needs to be truncated before processing the text, as in the following example:
f = open("myFile.txt", "r")
for line in f:
line = line[:-1]
# do something with line
Is there an elegant way or idiom for retrieving text lines without the end-line character?
The idiomatic way to do this in Python is to use rstrip('\n'):
for line in open('myfile.txt'):  # opened in text mode; all EOLs are converted to '\n'
    line = line.rstrip('\n')
    process(line)
Each of the other alternatives has a gotcha:
file('...').read().splitlines() has to load the whole file in memory at once.
line = line[:-1] will fail if the last line has no EOL.
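A quick sketch of the second gotcha, assuming the file's last line has no trailing newline:

last = "final line"       # what iteration returns for a last line without a trailing EOL
print(last[:-1])          # 'final lin' -- the last character is lost
print(last.rstrip('\n'))  # 'final line' -- unchanged, so rstrip is safe either way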
Simple. Use splitlines()
L = open("myFile.txt", "r").read().splitlines();
for line in L:
process(line) # this 'line' will not have '\n' character at the end
What's wrong with your code? I find it to be quite elegant and simple. The only problem is that if the file doesn't end in a newline, the last line returned won't have a '\n' as the last character, and therefore doing line = line[:-1] would incorrectly strip off the last character of the line.
The most elegant way to solve this problem would be to define a generator which took the lines of the file and removed the last character from each line only if that character is a newline:
def strip_trailing_newlines(file):
    for line in file:
        if line[-1] == '\n':
            yield line[:-1]
        else:
            yield line

f = open("myFile.txt", "r")
for line in strip_trailing_newlines(f):
    ...  # do something with line
A long time ago, there was dear, clean, old BASIC code that could run on 16 kB core machines, like this:
if (not open(1, "file.txt")) error "Could not open 'file.txt' for reading"
while (not eof(1))
    line input #1 a$
    print a$
wend
close
Now, to read a file line by line, with far better hardware and software (Python), we must reinvent the wheel:
def line_input(file):
    for line in file:
        if line[-1] == '\n':
            yield line[:-1]
        else:
            yield line

f = open("myFile.txt", "r")
for line in line_input(f):
    ...  # do something with line
I am induced to think that something has gone the wrong way somewhere...
What do you think about this approach?
with open(filename) as data:
    datalines = (line.rstrip('\r\n') for line in data)
    for line in datalines:
        ...  # do something awesome
The generator expression avoids loading the whole file into memory, and with ensures the file is closed.
You may also consider using line.rstrip() to remove the whitespace at the end of your lines.
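A one-line variant of the sketch above, under the same assumptions, if trailing spaces and tabs should go too:

datalines = (line.rstrip() for line in data)  # strips all trailing whitespace, not just line endings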
