I have the following file:
abcde
kwakwa
<0x1A>
line3
linllll
Where <0x1A> represents a byte with the hex value of 0x1A. When attempting to read this file in Python as:
for line in open('t.txt'):
    print line,
It only reads the first two lines, and exits the loop.
The solution seems to be to open the file in binary mode ('rb') or universal-newline mode ('rU'). Can you explain this behavior?
0x1A is Ctrl-Z, and DOS historically used that as an end-of-file marker. For example, try using a command prompt and "type"ing your file. It will only display the content up to the Ctrl-Z.
Python uses the Windows CRT function _wfopen, which implements the "Ctrl-Z is EOF" semantics.
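You can reproduce the file from the question and see what binary mode gives you. A minimal sketch in Python 3 syntax (the file name 't.txt' is taken from the question); the truncation itself only happens in text mode on Windows, while binary mode returns every byte on any platform:

```python
# Recreate the file byte-for-byte, including the 0x1A (Ctrl-Z) byte.
with open('t.txt', 'wb') as f:
    f.write(b'abcde\nkwakwa\n\x1aline3\nlinllll\n')

# Binary mode: 0x1A is just another byte, so the whole file comes back.
# On Windows, text mode ('r') would stop reading at the Ctrl-Z.
with open('t.txt', 'rb') as f:
    data = f.read()

assert b'\x1a' in data
assert data.splitlines() == [b'abcde', b'kwakwa', b'\x1aline3', b'linllll']
```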
Ned is of course correct.
If your curiosity runs a little deeper, the root cause is backwards compatibility taken to an extreme. Windows is compatible with DOS, which used Ctrl-Z as an optional end of file marker for text files. What you might not know is that DOS was compatible with CP/M, which was popular on small computers before the PC. CP/M's file system didn't keep track of file sizes down to the byte level, it only kept track by the number of floppy disk sectors. If your file wasn't an exact multiple of 128 bytes, you needed a way to mark the end of the text. This Wikipedia article implies that the selection of Ctrl-Z was based on an even older convention used by DEC.
The code below
fd = open(r"C:\folder1\file.acc", 'r')
fd.seek(12672)
print str(fd.read(1))
print "after", fd.tell()
Is returning after 16257 instead of the expected after 12673
What is going on here? Is there a way the creator of the file can put some sort of protection on the file to mess with my reads? I am only having issues with a range of addresses. The rest of the file reads as expected.
It looks as though you are trying to deal with a file with a simple "stream of bytes at linearly increasing offsets" model, but you are opening it with 'r' rather than 'rb'. Given that the path name starts with C:\ we can also assume that you are running on a Windows system. Text streams on Windows—whether opened in Python, or in various other languages including the C base for CPython—do funny translations where '\n' in Python becomes the two-byte sequence '\r', '\n' within the bytes-as-stored-in-the-file. This makes file offsets behave in a non-linear fashion (though as someone who avoids Windows I would not care to guess at the precise behaviors).
It's therefore important to open the file in 'rb' mode for reading. This becomes even more critical when you use Python 3, which uses Unicode for base strings: opening a stream with mode 'r' produces text, as in strings of type 'str', which are Unicode; but opening it with mode 'rb' produces bytes, as in strings of <class 'bytes'>.
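A minimal Python 3 demonstration of how the mode determines the type you get back (the file name is hypothetical):

```python
with open('demo.txt', 'w') as f:
    f.write('hello\n')

with open('demo.txt', 'r') as f:
    text = f.read()        # decoded text, with newline translation

with open('demo.txt', 'rb') as f:
    raw = f.read()         # raw stored bytes, no translation

assert isinstance(text, str)
assert isinstance(raw, bytes)
```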
Notes on things you did not ask about
You may use r+b for writing if you do not want to truncate an existing file, or wb to create a new file or truncate any existing file. Remember that + means "add the other mode", while w means "truncate existing or create anew for writing", so r+ is read-and-write without truncation, while w+ is write-and-read with truncation. In all cases, including the b means "... and treat as a stream of bytes."
As you can see, there is a missing mode here: how do you open for writing (only) without truncation, yet creating the file if necessary? Python, like C, gives you a third letter option a (which you can also mix with + and b as usual). This opens for writing without truncation, creating a new file only if necessary—but it has the somewhat annoying side effect of forcing all writes to append, which is what the a stands for. This means you cannot open a file for writing without truncation, position into the middle of it, and overwrite just a bit of it. Instead, you must open for read-plus, position into the middle of it, and overwrite just the one bit. But the read-plus mode fails—raises an OSError exception—if the file does not currently exist.
You can open with r+ and if it fails, try again with w or w+, but the flaw here is that the operation is non-atomic: if two or more entities—let's call them Alice and Bob, though often they are just two competing programs—are trying to do this on a single file name, it's possible that Alice sees the file does not exist yet, then pauses a bit; then Bob sees that the file does not exist, creates-and-truncates it, writes contents, and closes it; then Alice resumes, and creates-and-truncates, losing Bob's data. (In practice, two competing entities like this need to cooperate anyway, but to do so reliably, they need some sort of atomic synchronization, and for that you must drop to OS-specific operations. Python 3.3 adds the x character for exclusive, which helps implement atomicity.)
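The Python 3.3+ 'x' mode mentioned above can be sketched like this (file path is hypothetical): the create is atomic at the OS level, so whichever of Alice and Bob gets there second fails instead of truncating the winner's data.

```python
import os
import tempfile

# Hypothetical path in a fresh temporary directory.
path = os.path.join(tempfile.mkdtemp(), 'claim.txt')

with open(path, 'x') as f:     # exclusive create: fails if the file exists
    f.write('Alice was here\n')

try:
    open(path, 'x')            # the second creator loses, nothing is truncated
except FileExistsError:
    print('already claimed')
```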
If you do open a stream for both reading and writing, there is another annoying caveat: any time you wish to "switch directions" you are required to introduce an apparently-pointless seek. ("Any time" is a bit too strong: e.g., after an attempt to read produces end-of-file, you may switch then as well. The set of conditions to remember, however, is somewhat difficult; it's easier to say "seek before changing directions.") This is inherited from the underlying C "standard I/O" implementation. Python could work around it—and I was just now searching to see if Python 3 does, and have not found an answer—but Python 2 did not. The underlying C implementation is also not required to have this flaw, and some, such as mine, do not, but it's safest to assume that it might, and do the apparently-pointless seek.
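The defensive pattern looks like this (file name hypothetical). Python 3's io module does not strictly require the seek, but it is harmless there and necessary on C stdio implementations that have the limitation, so it is a safe habit:

```python
with open('rw.txt', 'w+') as f:
    f.write('AAAA')
    f.seek(0)              # switching from write to read: seek first
    f.read(2)              # now in "read direction"
    f.seek(f.tell())       # the "apparently-pointless" seek before writing
    f.write('BB')          # overwrites bytes 2-3 in place

with open('rw.txt') as f:
    assert f.read() == 'AABB'
```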
I'm having problems with some code that loops through a bunch of .csvs and deletes the final line if there's nothing in it (i.e. files that end with the \n newline character)
My code works successfully on all files except one, which is the largest file in the directory at 11gb. The second largest file is 4.5gb.
The line it fails on is simply:
with open(path_str,"r+") as my_file:
and I get the following message:
IOError: [Errno 22] invalid mode ('r+') or filename: 'F:\\Shapefiles\\ab_premium\\processed_csvs\\a.csv'
The path_str I create using os.path.join to avoid errors, and I tried renaming the file to a.csv just to make sure there wasn't anything odd going on with the filename. This made no difference.
Even more strangely, the file is happy to open in r mode. I.e. the following code works fine:
with open(path_str,"r") as my_file:
I have tried navigating around the file in read mode, and it's happy to read characters at the start, end, and in the middle of the file.
Does anyone know of any limits on the size of file that Python can deal with or why I might be getting this error? I'm on Windows 7 64bit and have 16gb of RAM.
The default I/O stack in Python 2 is layered over CRT FILE streams. On Windows these are built on top of a POSIX emulation API that uses file descriptors (which in turn is layered over the user-mode Windows API, which is layered over the kernel-mode I/O system, which itself is a deeply layered system based on I/O request packets; the hardware is down there somewhere...). In the POSIX layer, opening a file with _O_RDWR | _O_TEXT mode (as in "r+") requires seeking to the end of the file to remove a trailing CTRL+Z, if present. Here's a quote from the CRT's fopen documentation:
Open in text (translated) mode. In this mode, CTRL+Z is interpreted as
an end-of-file character on input. In files opened for reading/writing
with "a+", fopen checks for a CTRL+Z at the end of the file and
removes it, if possible. This is done because using fseek and ftell to
move within a file that ends with a CTRL+Z, may cause fseek to behave
improperly near the end of the file.
The problem here is that the above check calls the 32-bit _lseek (bear in mind that sizeof long is 4 bytes on 64-bit Windows, unlike most other 64-bit platforms), instead of _lseeki64. Obviously this fails for an 11 GB file. Specifically, SetFilePointer fails because it gets called with a NULL value for lpDistanceToMoveHigh. Here's the return value and LastErrorValue for the latter call:
0:000> kc 2
Call Site
KERNELBASE!SetFilePointer
MSVCR90!lseek_nolock
0:000> r rax
rax=00000000ffffffff
0:000> dt _TEB #$teb LastErrorValue
ntdll!_TEB
+0x068 LastErrorValue : 0x57
The error code 0x57 is ERROR_INVALID_PARAMETER. This is referring to lpDistanceToMoveHigh being NULL when trying to seek from the end of a large file.
To work around this problem with CRT FILE streams, I recommend opening the file using io.open instead. This is a backported implementation of Python 3's I/O stack. It always opens files in raw binary mode (_O_BINARY), and it implements its own buffering and text-mode layers on top of the raw layer.
>>> import io, os
>>> f = io.open('a.csv', 'r+')
>>> f
<_io.TextIOWrapper name='a.csv' encoding='cp1252'>
>>> f.buffer
<_io.BufferedRandom name='a.csv'>
>>> f.buffer.raw
<_io.FileIO name='a.csv' mode='rb+'>
>>> f.seek(0, os.SEEK_END)
11811160064L
I'm using OpenCV Python library to extract descriptors and write them to file. Each descriptor is 32 bytes and I only save 80 of them. Meaning that, the final file must be exactly 2560 bytes. But it's 2571 bytes.
I also have another file which had been written using the same Python script (Not on Windows but I guess it was on Linux) and it's exactly 2560 bytes.
Using WinMerge, I tried to compare them and it gave me a warning that the carriage return is different in two files and asked me if I wanted to treat them equally. If I say "yes", then both files are identical but if I say "no" then they are different.
I was wondering if there is any way in Python to write binary files which produce identical results on both Windows and Linux?
For reference, this is the relevant part of the script:
f = open("something", "w+")
f.write(descriptors)
f.close()
Yes, there's a way to open a file in binary mode - just add the b character to the mode string passed to open.
f = open("something", "wb+")
If you don't do that in Windows, every linefeed '\n' will be converted to the two-character line ending sequence that is used by Windows, '\r\n'.
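You can verify the byte count directly. A sketch with a stand-in payload (bytes(range(32)) repeated 80 times mimics 80 descriptors of 32 bytes each; it deliberately contains the byte 0x0A, which text mode on Windows would expand to 0x0D 0x0A):

```python
import os

# Stand-in for the 80 x 32-byte descriptor payload: exactly 2560 bytes.
descriptors = bytes(range(32)) * 80

# Binary mode: the bytes written are the bytes stored, on every platform.
with open('something.bin', 'wb') as f:
    f.write(descriptors)

assert os.path.getsize('something.bin') == 2560
```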
I'm parsing a 20Gb file and outputting lines that meet a certain condition to another file, however occasionally python will read in 2 lines at once and concatenate them.
inputFileHandle = open(inputFileName, 'r')
row = 0

for line in inputFileHandle:
    row = row + 1
    if line_meets_condition:
        outputFileHandle.write(line)
    else:
        lstIgnoredRows.append(row)
I've checked the line endings in the source file and they check out as line feeds (ascii char 10). Pulling out the problem rows and parsing them in isolation works as expected. Am I hitting some python limitation here? The position in the file of the first anomaly is around the 4GB mark.
A quick Google search for "python reading files larger than 4gb" yields many results. See here for one such example, and another which takes over from the first.
It's a bug in Python.
Now, the explanation of the bug; it's not easy to reproduce because it depends both on the internal FILE buffer size and the number of chars passed to fread().
In the Microsoft CRT source code, in open.c, there is a block starting with this encouraging comment "This is the hard part. We found a CR at end of buffer. We must peek ahead to see if next char is an LF."
Oddly, there is an almost exact copy of this function in Perl source code:
http://perl5.git.perl.org/perl.git/blob/4342f4d6df6a7dfa22a470aa21e54a5622c009f3:/win32/win32.c#l3668
The problem is in the call to SetFilePointer(), used to step back one position after the lookahead; it will fail because it is unable to return the current position in a 32bit DWORD. [The fix is easy; do you see it?]
At this point, the function thinks that the next read() will return the LF, but it won't because the file pointer was not moved back.
And the work-around: Python 3.x is not affected (raw files are always opened in binary mode, and CRLF translation is done by Python itself); with 2.7, you may use io.open().
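What the io.open() work-around looks like in practice, sketched on a tiny CRLF-terminated sample standing in for the 20 GB file (file name hypothetical). In binary mode there is no CRT text layer, so the buggy CR-lookahead path with its 32-bit seek is never invoked:

```python
import io

# Create a small CRLF-terminated sample file.
with io.open('sample.csv', 'wb') as f:
    f.write(b'a,b\r\n1,2\r\n')

# Binary mode bypasses the CRT text translation entirely.
with io.open('sample.csv', 'rb') as f:
    lines = f.readlines()

assert lines == [b'a,b\r\n', b'1,2\r\n']   # CR LF preserved intact
```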
The 4GB mark is suspiciously near the maximum value that can be stored in a 32-bit register (2**32).
The code you've posted looks fine by itself, so I would suspect a bug in your Python build.
FWIW, the snippet would be a little cleaner if it used enumerate:
inputFileHandle = open(inputFileName, 'r')

for row, line in enumerate(inputFileHandle, 1):  # start at 1 to match the original row numbering
    if line_meets_condition:
        outputFileHandle.write(line)
    else:
        lstIgnoredRows.append(row)
I know that I should open a binary file using "rb" instead of "r" because Windows behaves differently for binary and non-binary files.
But I don't understand what exactly happens if I open a file the wrong way and why this distinction is even necessary. Other operating systems seem to do fine by treating both kinds of files the same.
Well, this is for historical (or, as I like to say, hysterical) reasons. The file open modes are inherited from the C stdio library, and hence we follow it.
In Windows, just as in the Unix clones, the file system itself draws no distinction between text and binary files. No, I mean it! - there are (were) file systems/OSes in which a text file is a completely different beast from an object file, and so on. In some you had to specify the maximum line length in advance, and fixed-size records were used... fossils from the times of 80-column paper punch cards and such. Luckily, not so in Unices, Windows and Mac.
However - all other things equal - Unix, Windows and Mac historically differ in what characters they use in the output stream to mark the end of one line (or, same thing, as the separator between lines). In Unix, \x0A (\n) is used. In Windows, the two-character sequence \x0D\x0A (\r\n) is used; on Mac - just \x0D (\r). Here are some clues on the origin of those two symbols: ASCII code 10 is called Line Feed (LF), and when sent to a teletype it would move the print head down one line (Y++) without changing its horizontal (X) position. Carriage Return (CR) - ASCII 13 - on the other hand, would cause the printing carriage to return to the beginning of the line (X=0) without scrolling down. So when sending output to the printer, both \r and \n had to be sent, so that the carriage would move to the beginning of a new line. Now, when typing on a terminal keyboard, operators naturally expect to press one key, not two, at the end of a line. On the Apple ][, that was the 'Return' key (\r).
At any rate, this is how things settled. C's creators were concerned about portability - much of Unix was written in C, unlike earlier OSes, which were written in assembler. So they did not want to deal with each platform's quirks about text representation, and they added this evil hack to their I/O library: depending on the platform, the input and output to a file will be "patched" on the fly so that the program sees new lines the righteous, Unix way - as '\n' - no matter whether it was '\r\n' from Windows or '\r' from Mac. So the developer need not worry about what OS the program runs on; it can still read and write text files in the native format.
There was a problem, however - not all files are text; there are other formats, and they are very sensitive to replacing one character with another. So they thought: we will call those "binary files" and indicate that to fopen() by including 'b' in the mode - and this will flag the library not to do any behind-the-scenes conversion. And that's how it came to be the way it is :)
So to recap: if a file is opened with 'b' in binary mode, no conversions take place. If it was opened in text mode, then depending on the platform, some conversions of the newline character(s) may occur - towards the Unix point of view. Naturally, on a Unix platform there is no difference between reading/writing a "text" or a "binary" file.
This mode is about conversion of line endings.
When reading in text mode, the platform's native line endings (\r\n on Windows) are converted to Python's Unix-style \n line endings. When writing in text mode, the reverse happens.
In binary mode, no such conversion is done.
Other platforms usually do fine without the conversion, because they store line endings natively as \n. (An exception is Mac OS, which used to use \r in the old days.) Code relying on this, however, is not portable.
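Python 3's newline parameter to open() makes this translation explicit, so you can observe it deterministically on any platform (file name hypothetical):

```python
# Force Windows-style line endings regardless of the host platform.
with open('nl.txt', 'w', newline='\r\n') as f:
    f.write('one\ntwo\n')

# Binary mode shows the bytes as stored on disk.
with open('nl.txt', 'rb') as f:
    assert f.read() == b'one\r\ntwo\r\n'

# Default text mode (universal newlines) translates them back to '\n'.
with open('nl.txt', 'r') as f:
    assert f.read() == 'one\ntwo\n'
```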
In Windows, text mode will convert the newline \n to a carriage return followed by a newline \r\n.
If you read text in binary mode, there are no problems. If you read binary data in text mode, it will likely be corrupted.
Reading is affected too: in text mode on Windows, \r\n is translated to \n and a Ctrl-Z byte is treated as end-of-file. When writing to text files, Windows will automatically mess up your line breaks (it will add \r's before the \n's). That's why you should use "rb" and "wb".