Let's consider a big file (~100 MB) that is line-based (a text file with relatively short lines, ~80 chars).
If I use the built-in open()/file(), the file is loaded lazily, i.e. if I do aFile.readline() only a chunk of the file resides in memory. Does urllib.urlopen() do something similar (with a cache on disk)?
How big is the difference in performance between urllib.urlopen().readline() and file().readline()? Let's say the file is located on localhost and I open it once with urllib.urlopen() and once with file(). How big is the difference in performance/memory consumption when I loop over the file with readline()?
What is the best way to process a file opened via urllib.urlopen()? Is it faster to process it line by line, or should I load a bunch of lines (~50) into a list and then process the list?
open (or file) and urllib.urlopen look like they're more or less doing the same thing there. urllib.urlopen (basically) creates a socket._socketobject and then invokes its makefile method (the contents of that method are included below):
def makefile(self, mode='r', bufsize=-1):
    """makefile([mode[, bufsize]]) -> file object

    Return a regular file object corresponding to the socket.  The mode
    and bufsize arguments are as for the built-in open() function."""
    return _fileobject(self._sock, mode, bufsize)
Does the urllib.urlopen() do something similar (with usage of a cache on disk)?
The operating system does. When you use a networking API such as urllib, the operating system and the network card do the low-level work of splitting data into small packets that are sent over the network, and of receiving incoming packets. Those are stored in a cache, so that the application can abstract away the packet concept and pretend it is sending and receiving continuous streams of data.
How big is the difference in performance between urllib.urlopen().readline() and file().readline()?
It is hard to compare these two. For urllib, this depends on the speed of the network, as well as the speed of the server. Even for local servers, there is some abstraction overhead, so that, usually, it is slower to read from the networking API than from a file directly.
For actual performance comparisons, you will have to write a test script and do the measurement. However, why do you even bother? You cannot replace one with another since they serve different purposes.
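If you do want to measure it, a minimal sketch along these lines would do (the local path and URL are placeholders, and the file would have to be served over HTTP on localhost; note this uses the Python 3 spelling urllib.request.urlopen, where urllib.urlopen is the Python 2 name):

import time
import urllib.request   # urllib.urlopen in Python 2

LOCAL_PATH = "bigfile.txt"                   # hypothetical local copy
URL = "http://localhost:8000/bigfile.txt"    # hypothetical URL served from localhost

def count_lines(fobj):
    start = time.time()
    n = sum(1 for _ in fobj)   # iteration yields one line at a time
    return n, time.time() - start

with open(LOCAL_PATH, "rb") as f:
    print("open():   ", count_lines(f))

with urllib.request.urlopen(URL) as resp:
    print("urlopen():", count_lines(resp))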
What is best way to process a file opened via urllib.urlopen()? Is it faster to process it line by line? Or shall I load bunch of lines(~50) into a list and then process the list?
Since the bottleneck is the network speed, it might be a good idea to process the data as soon as you get it. This way, the operating system can cache more incoming data "in the background".
It makes no sense to cache lines in a list before processing them. Your program will just sit there waiting for enough data to arrive while it could be doing something useful already.
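As a rough sketch of what "process it as it arrives" looks like (handle_line() and the URL are placeholders):

import urllib.request   # urllib.urlopen in Python 2

def handle_line(line):
    pass   # whatever per-line work you need

with urllib.request.urlopen("http://localhost:8000/bigfile.txt") as resp:   # hypothetical URL
    for line in resp:   # one line at a time; the OS keeps buffering incoming data behind it
        handle_line(line)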
Related
In Python, I can open a file with f = open(<filename>, <permissions>). This returns an object f which I can write to using f.write(<some data>).
If, at this point, I access the original file (e.g. with cat from a terminal), it appears empty: Python stored the data I wrote in the object f and not in the actual on-disk file. If I then call f.close(), the data in f is persisted to the on-disk file (and I can access it from other programs).
I assume data is buffered to improve latency. However, what happens if the buffered data grows a lot? Will Python initiate a write? If so, details on the internals (what influences the buffer size? is the disk I/O handled within Python or by another program/thread? is there a chance Python will just hang during the write?) would be much appreciated.
The general subject of I/O buffering has been treated many times (including in questions linked from the comments). But to answer your specific questions:
By default, when writing to a terminal (“the screen”), a newline causes the buffer to be flushed up through that newline. For all files, the buffer is flushed each time it fills. (Large single writes might flush any existing buffer contents and then bypass the buffer.)
The buffer has a fixed size and is allocated before any data is written; Python 3 doesn’t use stdio, so it chooses its own buffer sizes. (A few kB is typical.)
The “disk I/O” (really kernel I/O, which is distinguishable only in certain special circumstances like network/power failure) happens within whatever Python write triggers the flush.
Yes, it can hang, if the file is a pipe to a busy process, a socket over a slow network, a special device, or even a regular file mounted from a remote machine.
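If you want to see or control these knobs yourself, here is a small sketch (assuming Python 3 and its io module defaults; the file names are placeholders):

import io
import os

print(io.DEFAULT_BUFFER_SIZE)           # Python 3's default buffer size (typically 8192 bytes)

f = open("out.txt", "w", buffering=1)   # line buffering (text mode only): flushed at each newline
f.write("visible to other programs after the newline\n")

g = open("out.bin", "wb", buffering=64 * 1024)   # explicit 64 KiB buffer; data sits there until it fills
g.write(b"x" * 10)
g.flush()                # push the buffered bytes into the kernel
os.fsync(g.fileno())     # and ask the kernel to push them to the disk

f.close()
g.close()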
I am trying to implement a "record manager" class in Python 3.x, on Linux and macOS. The class is relatively easy and straightforward; the only "hard" thing I want is to be able to access the same file (where results are saved) from multiple processes.
This seemed pretty easy, conceptually: when saving, acquire an exclusive lock on the file, update your information, save the new information, release the exclusive lock on the file. Easy enough.
I am using fcntl.lockf(file, fcntl.LOCK_EX) to acquire the exclusive lock. The problem is that, looking on the internet, I am finding a lot of different websites saying that this is not reliable, that it won't work on Windows, that support on NFS is shaky, and that things could change between macOS and Linux.
I have accepted that the code won't work on Windows, but I was hoping to make it work on macOS (single machine) and on Linux (on multiple servers with NFS).
The problem is that I can't seem to make this work; after a while of debugging, and after the tests passed on macOS, they failed once I tried them over NFS on Linux (Ubuntu 16.04). The issue is an inconsistency between the information saved by multiple processes - some processes have their modifications missing, which means something went wrong in the locking and saving procedure.
I am sure there is something I am doing wrong, and I suspect it may be related to the issues I read about online. So, what is the proper way to deal with multiple access to the same file that works on macOS and on Linux over NFS?
Edit
This is what a typical method that writes new information to disk looks like:
sf = open(self._save_file_path, 'rb+')
try:
    fcntl.lockf(sf, fcntl.LOCK_EX)  # acquire an exclusive lock - only one writer
    self._raw_update(sf)  # updates the records from file (other processes may have modified it)
    self._saved_records[name] = new_info
    self._raw_save()  # does not check for locks (but does *not* release the lock on self._save_file_path)
finally:
    sf.flush()
    os.fsync(sf.fileno())  # forcing the OS to write to disk
    sf.close()  # release the lock and close
While this is what a typical method that only reads info from disk looks like:
sf = open(self._save_file_path, 'rb')
try:
    fcntl.lockf(sf, fcntl.LOCK_SH)  # acquire a shared lock - multiple readers allowed
    self._raw_update(sf)  # updates the records from file (other processes may have modified it)
    return self._saved_records
finally:
    sf.close()  # release the lock and close
Also, this is what _raw_save looks like:
def _raw_save(self):
    # write to a temp file first to avoid accidental corruption of information.
    # os.replace is guaranteed to be an atomic operation in POSIX
    with open('temp_file', 'wb') as p:
        p.write(self._saved_records)
    os.replace('temp_file', self._save_file_path)  # pretty sure this does not release the lock
Error message
I have written a unit test where I create 100 different processes, 50 that read and 50 that write to the same file. Each process does some random waiting to avoid accessing the files sequentially.
The problem is that some of the records are not kept; at the end there are some 3-4 random records missing, so I only end up with 46-47 records rather than 50.
Edit 2
I have modified the code above so that I acquire the lock not on the file itself but on a separate lock file. This prevents the issue that closing the file would release the lock (as suggested by @janneb), and makes the code work correctly on macOS. The same code fails on Linux over NFS, though.
I don't see how the combination of file locks and os.replace() can make sense. When the file is replaced (that is, the directory entry is replaced), all the existing file locks (probably including file locks waiting for the locking to succeed, I'm not sure of the semantics here) and file descriptors will be against the old file, not the new one. I suspect this is the reason behind the race conditions causing you to lose some of the records in your tests.
os.replace() is a good technique to ensure that a reader doesn't read a partial update. But it doesn't work robustly in the face of multiple updaters (unless losing some of the updates is ok).
Another issue is that fcntl is a really, really stupid API. In particular, the locks are bound to the process, not the file descriptor. Which means that e.g. a close() on ANY file descriptor pointing to the file will release the lock.
One way would be to use a "lock file", e.g. taking advantage of the atomicity of link(). From http://man7.org/linux/man-pages/man2/open.2.html:
Portable programs that want to perform atomic file locking using a lockfile, and need to avoid reliance on NFS support for O_EXCL, can create a unique file on the same filesystem (e.g., incorporating hostname and PID), and use link(2) to make a link to the lockfile. If link(2) returns 0, the lock is successful. Otherwise, use stat(2) on the unique file to check if its link count has increased to 2, in which case the lock is also successful.
If it's OK to read slightly stale data, then you can use this link() dance only for a temp file that you use when updating, and then os.replace() the "main" file that you use for reading (reading can then be lockless). If not, then you need to do the link() trick for the "main" file and forget about shared/exclusive locking; all locks are then exclusive.
Addendum: One tricky thing to deal with when using lock files is what to do when a process dies for whatever reason, and leaves the lock file around. If this is to run unattended, you might want to incorporate some kind of timeout and removal of lock files (e.g. check the stat() timestamps).
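To make this concrete, here is a hedged, untested sketch of that link() dance, including the crude staleness timeout from the addendum (file names, timeouts and the lock-stealing policy are all assumptions, not a vetted recipe):

import errno
import os
import socket
import time

LOCKFILE = "records.lock"   # hypothetical lock-file name, on the same NFS filesystem as the data
STALE_AFTER = 60            # seconds after which an abandoned lock is considered stale

def acquire_lock(timeout=30):
    # unique file incorporating hostname and PID, as the man page suggests
    unique = "%s.%s.%d" % (LOCKFILE, socket.gethostname(), os.getpid())
    open(unique, "w").close()
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            os.link(unique, LOCKFILE)        # atomic, even over NFS
            return unique                    # link(2) returned 0: lock acquired
        except OSError as e:
            if e.errno != errno.EEXIST:
                os.unlink(unique)
                raise
        if os.stat(unique).st_nlink == 2:    # the reply was lost but the link actually succeeded
            return unique
        # crude staleness handling: steal a lock nobody has touched for a long time
        try:
            if time.time() - os.stat(LOCKFILE).st_mtime > STALE_AFTER:
                os.unlink(LOCKFILE)
        except OSError:
            pass
        time.sleep(0.1)
    os.unlink(unique)
    raise TimeoutError("could not lock %s" % LOCKFILE)

def release_lock(unique):
    os.unlink(LOCKFILE)
    os.unlink(unique)

A writer would call unique = acquire_lock() before re-reading and rewriting the records file, and release_lock(unique) in a finally block afterwards; with this scheme every update is exclusive.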
Using randomly named hard links and the link counts on those files as lock files is a common strategy (e.g. this), and arguably better than using lockd. For far more information about the limits of all sorts of locks over NFS, read this: http://0pointer.de/blog/projects/locking.html
You'll also find that this is a long-standing standard problem for MTA software using mbox files over NFS. Probably the best answer there was to use Maildir instead of mbox, but if you look for examples in the source code of something like Postfix, it'll be close to best practice. And if they simply don't solve that problem, that might also be your answer.
NFS is great for file sharing. It sucks as a "transmission" medium.
I've been down the NFS-for-data-transmission road multiple times. In every instance, the solution involved moving away from NFS.
Getting reliable locking is one part of the problem. The other part is the update of the file on the server and expecting the clients to receive that data at some specific point-in-time (such as before they can grab the lock).
NFS isn't designed to be a data transmission solution. There are caches and timing involved. Not to mention paging of the file content and of file metadata (e.g. the atime attribute), and client OSes keeping track of state locally (such as "where" to append the client's data when writing to the end of the file).
For a distributed, synchronized store, I recommend looking at a tool that does just that. Such as Cassandra, or even a general-purpose database.
If I'm reading the use-case correctly, you could also go with a simple server-based solution. Have a server listen for TCP connections, read messages from the connections, and then write each to file, serializing the writes within the server itself. There's some added complexity in having your own protocol (to know where a message starts and stops), but otherwise, it's fairly straight-forward.
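A hedged sketch of that idea, using the stdlib socketserver and a deliberately naive one-record-per-line "protocol" (the path, port and protocol are assumptions for illustration):

import socketserver
import threading

OUTPUT_PATH = "records.log"    # hypothetical output file
write_lock = threading.Lock()  # serializes all appends inside this single server process

class RecordHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:                    # one newline-terminated record at a time
            with write_lock:                       # only one thread appends at a time
                with open(OUTPUT_PATH, "ab") as out:
                    out.write(line)

if __name__ == "__main__":
    server = socketserver.ThreadingTCPServer(("0.0.0.0", 9999), RecordHandler)
    server.serve_forever()

Clients then just connect over TCP and send their records; since the server is the only process touching the file, the NFS locking problem disappears.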
Sorry if the topic was already approached, I didn't find it.
I am trying to read with Python a bunch of large CSV files (>300 MB) that are not located on a local drive.
I am not an expert in programming, but I know that if you copy a file to a local drive first, it should take less time than reading it directly from the remote drive (or am I wrong?).
The thing is that I tested both methods and the computation times are similar.
Am I missing something? Can someone explain / give me a good method to read those files as fast as possible?
For copying to a local drive I am using shutil.copy2.
For reading the file: for line in MyFile.
Thanks a lot for your help,
Christophe
Copying a file is sequentially reading it and saving it in another place.
The performance of the application might vary depending on the data access pattern, the ratio of computation to I/O time, network latency and network bandwidth.
If you execute your script once and read through the file sequentially, it's the same as copying the file, except that you perform computations on it instead of saving it. Even if you process small chunks of data, it probably gets buffered. In fact, in the case of a single execution, if you copy first you actually read the same data twice: once for the copy, and once the copy for the computation.
If you execute your script multiple times, then you need to check what your data throughput is. For example, if you have gigabit ethernet, then 1 GBit/s is 125 MByte/s; if you process data slower than that, you are not limited by the bandwidth.
Network latency comes into play when you send multiple requests for small chunks of data. Upon a read request you send a network request and get the data back in a finite time; this is the latency. If you make a request for one big chunk of data you "pay" the latency cost once; if you ask 1000 times for 1/1000 of the big chunk, you will need to "pay" it 1000 times. However, this is probably abstracted from you by the network file system, and in the case of a sequential read it will get buffered. It would manifest itself in random jumping over the file and reading small chunks of it.
You can check what you are limited by: measure how many bytes you process per second and compare it to the limits of the hardware. If it's close to HDD speed (which in your case I bet it is not), you are bound by I/O - HDD. If it's lower, close to the network bandwidth, you are limited by I/O - network. If it's even lower, you are either bound by the processing speed of the data or by I/O network latency. However, if you were bound by I/O you should see a difference between the two approaches, so if you are seeing the same results, it's computation.
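A quick way to get that number is to count the bytes you actually consume per second while looping over the file (the path is a placeholder):

import time

path = r"\\server\share\data.csv"   # hypothetical network path

start = time.time()
total_bytes = 0
with open(path, "rb") as f:
    for line in f:
        total_bytes += len(line)
        # ... your per-line computation goes here ...
elapsed = time.time() - start
print("%.1f MB/s" % (total_bytes / elapsed / 1e6))

Compare the printed figure with your disk and network limits to see which one you are actually hitting.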
Are there any advantages/disadvantages to reading an entire file in one go rather than reading the bytes as required? So is there any advantage to:
file_handle = open("somefile", rb)
file_contents = file_handle.read()
# do all the things using file_contents
compared to:
file_handle = open("somefile", rb)
part1 = file_handle.read(10)
# do some stuff
part2 = file_handle.read(8)
# do some more stuff etc
Background: I am writing a p-code (bytecode) interpreter in Python and have initially just written a naive implementation that reads bytes from the file as required and performs the necessary actions, etc. A friend I was showing the program to suggested that I should instead read the entire file into memory (a Python list?) and then process it from memory to avoid lots of slow disk reads. The test files are currently less than 1 KB and will probably be at most a few hundred KB, so I would have expected the operating system and disk controller to cache the file, obviating any performance issues caused by repeatedly reading small chunks of it.
Cache aside, you still have system calls. Each read() results in a mode switch into the kernel. You can see this with strace or another tool for looking at system calls.
This might be premature for a 100 KB file though. As always, test your code to know for sure.
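For example, strace -c python your_script.py (the script name is a placeholder) prints a count of read() calls, and a throwaway timeit comparison like the following shows the raw cost of many tiny reads versus one big one (the file name is again a placeholder):

import timeit

setup = 'f = open("program.pcode", "rb")'   # hypothetical bytecode file

byte_at_a_time = "f.seek(0)\nwhile f.read(1): pass"
whole_file = "f.seek(0)\nf.read()"

print("byte at a time:", timeit.timeit(byte_at_a_time, setup=setup, number=10))
print("whole file:    ", timeit.timeit(whole_file, setup=setup, number=10))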
If you want to do any kind of random access then putting it in a list is going to be much faster than seeking from disk. Even if the OS does cache disk access, you are hitting another layer of cache. In any case, you can't be sure how the OS will behave.
Here are 3 cases I can think of that would motivate doing it in-memory:
You might have a jump instruction which you can execute by adding a number to your program counter. Doing that to the index of an array vs seeking a file is a good use case.
You may want to optimise your VM's behaviour, and that may involve reading the file more than once. Scanning a list twice vs reading a file twice will be much quicker.
Depending on the opcodes and the grammar of your language, you may want to look ahead within a 'cycle' to speed up execution. If that ends up doing two seeks, it could degrade performance.
If your file will always be small enough to fit in RAM, then it's probably worth reading it all into memory. Profile it with a real program and see if it's noticeably faster.
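A minimal sketch of the in-memory version, where the program counter is just an index into a bytes object (the opcodes here are made up purely for illustration):

# read the whole program once; jumps become index arithmetic instead of seeks
with open("program.pcode", "rb") as f:    # hypothetical bytecode file
    code = f.read()

pc = 0
while pc < len(code):
    op = code[pc]                 # in Python 3, indexing bytes gives an int
    if op == 0x01:                # hypothetical JUMP: the next byte is the target address
        pc = code[pc + 1]
    elif op == 0x00:              # hypothetical HALT
        break
    else:                         # every other (made-up) opcode just falls through
        pc += 1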
A single call to read() will be faster than multiple calls to read(). The tradeoff is that with a single call you must be able to fit all data in memory at once, whereas with multiple reads you only have to retain a fraction of the total amount of data. For files that are just a few kilobytes or megabytes, the difference won't be noticeable. For files that are several gigs in size, memory becomes more important.
Also, to do a single read means all of the data must be present, whereas multiple reads can be used to process data as it is streaming in from an external source.
If you are looking for performance, I would recommend going through generators. Since your files are small, memory would not be a big concern, but it's still good practice. Still, reading a file from disk multiple times is a definite bottleneck for a scalable solution.
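A small sketch of the generator approach (the record handling is left as a placeholder):

def read_records(path):
    # lines are yielded lazily, so only one line (plus the file object's
    # internal buffer) is in memory at any time
    with open(path, "rb") as f:
        for line in f:
            yield line.rstrip(b"\n")

for record in read_records("somefile"):
    pass   # process one record at a time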
I have a program that wants to be called from the command line many times, but involves reading a large pickle file, so each call can be potentially expensive. Is there any way that I can make cPickle just mmap the file into memory rather than read it in its entirety?
You probably don't even need to do this explicitly as your OS's disk cache will probably do a damn good job already.
Any poor performance might actually be related to the cost of deserialization and not the cost of reading it off the disk. You can test this by creating a temporary ram disk and putting the file there.
And the way to remove the cost of deserialization is to move the loading of the file into a separate Python process and call it like a service. Building a quick-and-dirty REST service in Python is super easy and super useful in these cases.
Take a look at the socket docs for how to do this with a raw socket. The echo server is a good example to start from.
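For what it's worth, here is a hedged sketch of the "load once, serve many calls" idea built on the stdlib socketserver rather than a raw socket (the file name, port, one-key-per-line protocol and dict-shaped pickle are all assumptions):

import pickle
import socketserver

with open("big_data.pickle", "rb") as f:   # hypothetical pickle file, deserialized once at startup
    DATA = pickle.load(f)

class LookupHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:
            key = line.strip().decode()
            value = DATA.get(key, "")               # assumes the pickle holds a dict
            self.wfile.write((str(value) + "\n").encode())

if __name__ == "__main__":
    server = socketserver.TCPServer(("127.0.0.1", 9998), LookupHandler)
    server.serve_forever()

Each command-line invocation then becomes a short client connection instead of a fresh, expensive unpickling.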