Not sure this is the best title for this question but here goes.
Through python/Qt I started multiple processes of an executable. Each process is writing a large file (~20GB) to disk in chunks. I am finding that the first process to start is always the last to finish and continues on much, much longer than the other processes (despite having the same amount of data to write).
Performance monitors show that the process is still using the expected amount of RAM (~1GB), but the disk activity from the process has slowed to a trickle.
Why would this happen? It is as though the first process started somehow gets its disk access 'blocked' by the other processes and then doesn't recover after the other processes have finished...
Would the OS (windows) be causing this? What can I do to alleviate this?
Parallelism (of any kind) only results in a speedup if you actually have the resources to solve the problem faster.
Before thinking of optimizing your program, you should carefully analyze what's causing it to run (subjectively) slow - the bottleneck.
While I know nothing about what sort of bottleneck your program has, the fact that it writes a large quantity of data to disk is a good hint that it may be I/O bound.
When a program is I/O bound, the conventional single-machine parallelization techniques (threading, multiple processes) are worse than useless - they actually hurt performance, especially if you're dealing with a spinning disk. This happens because once you have more than one process accessing the disk at different places, the hard drive head has to seek between those.
The I/O scheduler of your OS can have a great impact on how much slower performance becomes once you have multiple processes accessing I/O, and on how disk access is allotted to the processes. You may consider switching your OS, but only if those multiple processes are needed in the first place.
With that being said, what can you do to get better (I/O) performance?
Get better disks (or a SSD)
Get more disks (one per process)
Get more machines
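If you want to see the seek penalty on your own hardware, a rough sketch along these lines can help (the file count and sizes are arbitrary placeholders, and the OS write cache may mask the difference unless the files are large): it times writing a few files one after another versus writing them from several processes at once.

import time
from multiprocessing import Process

SIZE = 200 * 1024 * 1024          # 200 MB per file; shrink for a quick test
CHUNK = b"x" * (1024 * 1024)      # write in 1 MB chunks

def write_file(path):
    with open(path, "wb") as f:
        for _ in range(SIZE // len(CHUNK)):
            f.write(CHUNK)

def sequential(n):
    for i in range(n):
        write_file("seq_%d.bin" % i)

def parallel(n):
    procs = [Process(target=write_file, args=("par_%d.bin" % i,)) for i in range(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

if __name__ == "__main__":
    for label, fn in (("sequential", sequential), ("parallel", parallel)):
        start = time.perf_counter()
        fn(4)
        print(label, time.perf_counter() - start)

On a spinning disk the parallel version often comes out slower, which is exactly the seek effect described above.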
There are no guarantees as to fairness of I/O scheduling. What you're describing seems rather simple: the I/O scheduler, whether intentionally or not, gives a boost to new processes. Since your disk is tapped out, the order in which the processes finish is not under your control. You're most likely wasting a lot of disk bandwidth on seeks, due to parallel access from multiple processes.
TL;DR: Your expectation is unfounded. When I/O, and specifically the virtual memory system, is saturated, anything can happen. And so it does.
Related
I am running a python program that processes a large dataset. Sometimes, it runs into a MemoryError when the machine runs out of memory.
I would like any MemoryError that is going to occur to happen at the start of execution, not in the middle. That is, the program should fail-fast: if the machine will not have enough memory to run to completion, the program should fail as soon as possible.
Is it possible for Python to pre-allocate space on the heap, i.e. to allocate heap memory at the start of the Python process?
Python uses as much memory as needed, so if your program is running out of memory, it would still run out of it even if there was a way to allocate the memory at the start.
One workaround is to enable swap to increase your total available memory, although performance will be very poor in many scenarios.
The best solution, if possible, is to change the program to process data in chunks instead of loading it entirely.
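For example, a minimal sketch of chunked processing (the file name, chunk size, and process() body are placeholders): read a fixed-size block, process it, and move on, so that only one block is ever held in memory at a time.

CHUNK_SIZE = 64 * 1024 * 1024      # 64 MB per read; tune to taste

def process(chunk):
    pass                           # placeholder for the real calculation

with open("big_dataset.bin", "rb") as f:
    while True:
        chunk = f.read(CHUNK_SIZE)
        if not chunk:
            break
        process(chunk)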
I am writing a script to simultaneously accept many file transfers from many computers on a subnet using sockets (around 40 jpg files total). I want to use multithreading or multiprocessing to make the transfers occur as fast as possible.
I'm wondering if this type of image transfer is limited by the CPU - and therefore I should use multiprocessing - or if multithreading will be just as good here.
I would also be curious as to what types of activities are limited by the CPU and require multiprocessing, and which are better suited for multithreading.
If the following assumptions are true:
Your script is simply receiving data from the network and writing that data to disk (more or less) verbatim, i.e. it isn't doing any expensive processing on the data
Your script is running on a modern CPU with typical modern networking hardware (e.g. gigabit Ethernet or slower)
Your script's download routines are not grossly inefficient (e.g. you are receiving reasonably-sized chunks of data and not just 1 byte at a time or something silly like that)
... then it's unlikely that your download rate will be CPU-limited. More likely the bottleneck will be either network bandwidth or disk I/O bandwidth.
In any case, since AFAICT your use-case is embarrassingly parallel (i.e. the various downloads never have to communicate or interact with each other, they just each do their own thing independently), it's unlikely that using multithreading vs multiprocessing will make much difference in terms of performance. Of course, the only way to be certain is to try it both ways and measure the throughput each way.
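As a rough way to try both, a sketch like the following (receive_one is a hypothetical stand-in for your per-transfer receive-and-save routine) swaps a thread pool for a process pool with one line changed and reports the elapsed time of each:

import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def receive_one(conn_info):
    pass  # placeholder: accept one transfer and write its data to disk

def benchmark(executor_cls, jobs, workers=8):
    start = time.perf_counter()
    with executor_cls(max_workers=workers) as ex:
        list(ex.map(receive_one, jobs))
    return time.perf_counter() - start

if __name__ == "__main__":
    jobs = list(range(40))  # e.g. 40 incoming transfers
    print("threads:  ", benchmark(ThreadPoolExecutor, jobs))
    print("processes:", benchmark(ProcessPoolExecutor, jobs))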
Short answer:
Generally, it really depends on your workload. If you're serious about performance, please provide details: for example, whether you store the images to disk, whether the image sizes exceed 1 GB, and so on.
Note: Generally again, if it's not mission-critical, both ways are acceptable, since we can easily switch between multithread and multiprocess implementations using threading.Thread and multiprocessing.Process.
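For illustration, a minimal sketch of that switch (handle_transfer and the file names are placeholders): the two classes share the same target/args/start/join API, so a single assignment decides which one you get.

import threading
import multiprocessing

def handle_transfer(path):
    print("transferring", path)    # placeholder per-file transfer routine

Worker = threading.Thread          # or: multiprocessing.Process

if __name__ == "__main__":
    workers = [Worker(target=handle_transfer, args=(p,)) for p in ["a.jpg", "b.jpg"]]
    for w in workers:
        w.start()
    for w in workers:
        w.join()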
Some more comments:
It seems that I/O, not the CPU, will be the bottleneck.
For multiprocess vs. multithread, the GIL and/or your implementation may make a performance difference. You can implement both ways and try them, though IMHO it won't differ much. I think async I/O vs. blocking I/O will have a greater impact.
Unless your file transfer is extremely slow - slower than writing the data to disk - multithreading/multiprocessing isn't going to help. By file transfer I mean downloading images and writing them to the local computer with a single HDD.
Using multithreading or multiprocessing when transferring data from several computers with separate disks definitely can improve overall download performance: data on several physical disks can simply be read in parallel. The problem arises when you try to save these images to your local drive.
You have just a single local HDD (if a disk array is not used), and a single HDD, like most hardware devices, can do only one I/O operation at a time. So trying to write several images to disk at the same time won't improve overall performance - it can even hamper it.
Just imagine 40 already-downloaded images being written to a single mechanical HDD, with a single head, at different locations (different physical files), especially if the disk is fragmented. This can even slow down the whole process, because the HDD wastes time moving its magnetic head from one position to another (drives can partially mitigate this by reordering I/O operations to limit head movement).
On the other hand, if you do some CPU-intensive preprocessing with these images and only then save them to disk, multithreading can be really helpful.
As to what's preferred: on modern OSs there is not a significant difference between using multithreading and multiprocessing (spawning multiple processes). OSs like Linux or Windows schedule threads, not processes, based on process and thread priorities. So there is not a big difference between 40 single-threaded processes and a single process containing 40 threads. Using multiple processes normally consumes more memory, because the OS has to allocate some extra memory for every process (not much), but in terms of speed the difference between multithreading and multiprocessing is not significant. There are other important questions to consider when choosing a method: will these downloads share some data, like a common GUI (multithreading is easier to use), and are the files to download so big that 40 transfers could exhaust the virtual address space of a single process (use multiprocessing)?
Generally:
Multithreading - easier to use in a single application because all threads share the virtual address space of a single process and can easily communicate with each other. On the other hand, a single process has a limited amount of virtual address space (less than 4 GB on a 32-bit computer).
Multiprocessing - harder to use in a single application (it needs inter-process communication), but more scalable and more robust (if a file-transfer process crashes, only a single transfer fails), with more virtual address space to use.
I am trying to understand whether my way of using multiprocessing.Pool is efficient. The task that I would like to run in parallel is a script that reads a certain file, does a calculation, and then saves the results to a different file. My code looks something like this:
from multiprocessing import Pool, TimeoutError
import deepdish.io as dd

def savefile(a, b, t, c, g, e, d):
    print(a)
    dd.save(str(a), {'b': b, 't': t, 'c': c, 'g': g, 'e': e, 'd': d})

def run_many_calcs():
    num_processors = 6
    print("Num processors -", num_processors)
    pool = Pool(processes=num_processors)  # start 6 worker processes
    for a in ['a', 'b', 'c', 'd', 'e', 'f', 'g', 't', 'y', 'e', 'r', 'w']:
        pool.apply(savefile, args=(a, 4, 5, 6, 7, 8, 1))
How can I see that immediately after one process is finished in one of the processors it continues to the next file?
When considering performance of any program, you have to work out if the performance is bound by I/O (memory, disk, network, whatever) or Compute (core count, core speed, etc).
If I/O is the bottleneck, there's no point having multiple processes, a faster CPU, etc.
If the computation is taking up all the time, then it is worth investing in multiple processes, etc. "Computation time" is often diagnosed as being the problem, but on closer investigation turns out to be limited by the computer's memory bus speed, not the clock rate of the cores. In such circumstances adding multiple processes can make things worse...
Check
You can check yours by doing some performance profiling of your code (there's bound to be a whole load of profiling tools out there for Python).
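For example, the standard library's cProfile/pstats is enough to get started; this sketch assumes run_many_calcs (from the snippet above) is defined or importable in the script being profiled.

import cProfile
import pstats

cProfile.run("run_many_calcs()", "profile.out")     # profile one full run
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(20)      # show the top 20 hot spots

You can also profile the whole script from the shell with "python -m cProfile -s cumulative yourscript.py".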
My Guess
Most of the time these days it's I/O that's the bottleneck. If you don't want to profile your code, betting on a faster SSD is likely the best initial approach.
Unsolvable Computer Science Problem
The architectural features of modern CPUs (L1, L2, L3 cache, QPI, hyperthreads) are all symptoms of the underlying problem in computer design: cores are too quick for the I/O we can wrap around them.
For example, the time taken to transfer 1 byte from SDRAM to the core is exceedingly slow in comparison to the core speed. One just has to hope that the L3, L2 and L1 cache subsystems have correctly predicted the need for that 1 byte and have already fetched it ahead of time. If not, there's a big delay; that's where hyperthreading can help the overall performance of the computer's other processes (they can nip in and get some work done), but does absolutely nothing for the stalled program.
Data fetched from files or networks is very slow indeed.
File System Caching
In your case it sounds like you have a single input file; that will at least get cached in RAM by the OS (provided it's not too big).
You may be tempted to read it into memory yourself; I wouldn't bother. If it's large you would be allocating a large amount of memory to hold it, and if that's too big for the RAM in the machine the OS will swap some of that RAM out to the virtual memory page file anyway, and you're worse off than before. If it's small enough there's a good chance the OS will cache the whole thing for you anyway, saving you the bother.
Written files are also cached, up to a point. Ultimately there's nothing you can do if "total process time" is taken to mean that all the data is written to disk; you'd be having to wait for the disk to complete writing no matter what you did in memory and what the OS cached.
The OS's filesystem cache might give an initial impression that file writing has completed (the OS will get on with consolidating the data on the actual drive shortly), but successive runs of the same program will get blocked once that write cache is full.
If you do profile your code, be sure to run it for a long time (or repeatedly), to make sure that the measurements made by the profiler reveal the true underlying performance of the computer. If the results show that most of the time is in file.Read() or file.Write()...
I'm reading 1000+ CSVs of ~200 MB each in parallel and saving the modified CSVs afterwards using pandas. This creates many zombie processes that accumulate to 128+ GB of RAM, which devastates performance.
from multiprocessing import Pool, cpu_count

csv_data = []
c = zip(a, b)
process_pool = Pool(cpu_count())
for name_and_index in process_pool.starmap(load_and_process_csv, c):
    csv_data.append(name_and_index)
process_pool.terminate()
process_pool.close()
process_pool.join()
This is my current solution. It doesn't seem to cause a problem until you process more than 80 CSVs or so.
PS: Even when the pool has completed, ~96 GB of RAM is still occupied, and you can see the Python processes occupying RAM but not doing anything nor being destroyed. Moreover, I know with certainty that the function the pool is executing runs to completion.
I hope that's descriptive enough.
Python's multiprocessing module is process-based. So it is natural that you have many processes.
Worse, these processes do not share memory, but communicate through pickling/unpickling. So they are very slow if large data needs to be transferred between processes, which is happening here.
For this case, because the processing is I/O related, you may get better performance using multithreading with the threading module, if I/O is the bottleneck. Threads share memory, but they also 'share' one CPU core, so it's not guaranteed to run faster; you should try it.
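A minimal sketch of the threaded version (the paths and the worker body are placeholders): because threads share the parent process's memory, the per-file results come back without any pickling of large DataFrames.

from concurrent.futures import ThreadPoolExecutor
import pandas as pd

def load_and_process_csv(path):
    # placeholder: read, modify, and re-save one CSV; return only a small summary
    df = pd.read_csv(path)
    df.to_csv(path.replace(".csv", "_out.csv"), index=False)
    return path, len(df)

paths = ["a.csv", "b.csv"]                     # your 1000+ file list goes here

with ThreadPoolExecutor(max_workers=8) as ex:
    csv_data = list(ex.map(load_and_process_csv, paths))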
Update: If multithreading does not help, you don't have many options left, because this case runs exactly into the critical weakness of Python's parallel processing architecture. You may want to try dask (parallel pandas): http://dask.readthedocs.io/en/latest/
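A minimal dask sketch, assuming the input files match a glob pattern and the per-file work can be expressed as dataframe operations (the "value" column is made up for illustration):

import dask.dataframe as dd

df = dd.read_csv("data/*.csv")                 # lazily reads all matching CSVs
df = df[df["value"] > 0]                       # placeholder transformation
df.to_csv("out/part-*.csv", index=False)       # writes one output file per partition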
I have a situation where I'm downloading a lot of files. Right now everything runs on one main Python thread, and downloads as many as 3000 files every few minutes. The problem is that the time it takes to do this is too long. I realize Python has no true multi-threading, but is there a better way of doing this? I was thinking of launching multiple threads since the I/O bound operations should not require access to the global interpreter lock, but perhaps I misunderstand that concept.
Multithreading is just fine for the specific purpose of speeding up I/O on the net (although asynchronous programming would give even greater performance). CPython's multithreading is quite "true" (native OS threads) -- what you're probably thinking of is the GIL, the global interpreter lock that stops different threads from simultaneously running Python code. But all the I/O primitives give up the GIL while they're waiting for system calls to complete, so the GIL is not relevant to I/O performance!
For asynchronous programming, the most powerful framework around is twisted, but it can take a while to get the hang of it if you've never done such programming. It would probably be simpler for you to get extra I/O performance via the use of a pool of threads.
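With today's standard library, a pool of threads is only a few lines with concurrent.futures (the URL list here is a placeholder); since each download spends its time waiting on the network, the GIL doesn't get in the way.

from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlretrieve

urls = ["http://example.com/a.bin", "http://example.com/b.bin"]   # placeholders

def fetch(url):
    filename = url.rsplit("/", 1)[-1]
    urlretrieve(url, filename)      # download one file to the current directory
    return filename

with ThreadPoolExecutor(max_workers=50) as pool:
    for name in pool.map(fetch, urls):
        print("done:", name)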
You could always take a look at multiprocessing.
is there a better way of doing this?
Yes
I was thinking of launching multiple threads since the I/O bound operations
Don't.
At the OS level, all the threads in a process are sharing a limited set of I/O resources.
If you want real speed, spawn as many heavyweight OS processes as your platform will tolerate. The OS is really, really good about balancing I/O workloads among processes. Make the OS sort this out.
Folks will say that spawning 3000 processes is bad, and they're right. You probably only want to spawn a few hundred at a time.
What you really want is the following.
A shared message queue into which the 3000 URIs are queued up.
A few hundred workers which are all reading from the queue.
Each worker gets a URI from the queue and gets the file.
The workers can stay running. When the queue's empty, they'll just sit there, waiting for work.
"every few minutes" you dump the 3000 URI's into the queue to make the workers start working.
This will tie up every resource on your processor, and it's quite trivial. Each worker is only a few lines of code. Loading the queue is a special "manager" that's just a few lines of code, also.
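A rough sketch of that layout (fetch is a hypothetical stand-in for "get the file at this URI", and the worker count and URIs are placeholders): workers block on a shared queue and the manager just dumps batches of work into it.

import multiprocessing as mp

def fetch(uri):
    pass                            # placeholder: download uri and write it to disk

def worker(queue):
    while True:
        uri = queue.get()           # blocks while the queue is empty
        if uri is None:             # sentinel used here to shut a worker down
            break
        fetch(uri)

if __name__ == "__main__":
    queue = mp.Queue()
    workers = [mp.Process(target=worker, args=(queue,)) for _ in range(200)]
    for w in workers:
        w.start()

    # "every few minutes": the manager dumps the next batch of URIs into the queue
    for uri in ["uri-%d" % i for i in range(3000)]:
        queue.put(uri)

    # on shutdown, send one sentinel per worker and wait for them to exit
    for _ in workers:
        queue.put(None)
    for w in workers:
        w.join()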
Gevent is perfect for this.
Gevent's use of greenlets (lightweight coroutines in the same Python process) offers you asynchronous operations without compromising code readability or introducing abstract 'reactor' concepts into your mix.