Getting a Queue.Empty exception from a nonempty multiprocessing.Queue

I'm having the opposite problem of many Python users: my program is using too little CPU. I already got help switching to multiprocessing to utilize all four of my work computer's cores, and I have seen real performance improvement as a result. But the improvement is somewhat unreliable. The CPU usage of my program seems to deteriorate as it continues to run, even with six processes running. After adding some debug messages, I discovered this was because some of the processes I was spawning (which are all supposed to run until completion) were dying prematurely. The main body of the method the processes run is a while True loop, and the only way out is this block:
try:
    f = filequeue.get(False)
except Empty:
    print "Done"
    return
filequeue is populated before the creation of the subprocesses, so it definitely isn't actually empty. All the processes should exit at roughly the same time once it actually is empty. I tried adding a nonzero timeout (0.05) parameter to the Queue.get call, but this didn't fix the problem. Why could I be getting a Queue.Empty exception from a nonempty Queue?

I suggest using filequeue.get(True) instead of filequeue.get(False). This will cause the call to block until more elements are available.
It will, however, block forever after the final element has been processed.
To work around this, the main process could add a special "sentinel" object at the end of each queue. The workers would terminate upon seeing this special object (instead of relying on the emptiness of the queue).
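A minimal sketch of the sentinel approach, assuming None is an acceptable sentinel and one sentinel per worker (the names and work items here are illustrative, not from the question):

import multiprocessing

SENTINEL = None   # any value the workers can recognize as "no more work"

def worker(filequeue):
    while True:
        f = filequeue.get()        # block until an item (or the sentinel) arrives
        if f is SENTINEL:          # the special object marks the end of the queue
            print("Done")
            return
        # ... process f here ...

if __name__ == '__main__':
    filequeue = multiprocessing.Queue()
    for name in ['a.txt', 'b.txt', 'c.txt']:   # hypothetical work items
        filequeue.put(name)

    procs = [multiprocessing.Process(target=worker, args=(filequeue,))
             for _ in range(4)]
    for _ in procs:
        filequeue.put(SENTINEL)    # one sentinel per worker so each one exits
    for p in procs:
        p.start()
    for p in procs:
        p.join()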

I had a similar problem and found out from experimentation, not reading the docs, that even when the queue is non-empty, get(False) can still spuriously throw Empty. In my use case, the workers have to exit when they run out of work in the Queue, so get(True) is a non-option.
My solution was this: I found that if, in the "except Empty:" block, I check that the Queue is indeed empty(), it works; empty() will not return True unless the Queue is really empty.
I was using Python 2.7.
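A minimal sketch of that workaround, folded into a loop like the one in the question (the worker name is illustrative):

from Queue import Empty   # Python 2; on Python 3 it is "from queue import Empty"

def worker(filequeue):
    while True:
        try:
            f = filequeue.get(False)
        except Empty:
            if filequeue.empty():   # only trust the exception if the queue really is empty
                print("Done")
                return
            continue                # spurious Empty; there is still work, so try again
        # ... process f here ...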

Related

Can one terminate a python process which is a worker in a pool?

Each worker runs a long CPU-bound computation. The computation depends on parameters that can change anytime, even while the computation is in progress. Should that happen, the eventual result of the computation will become useless. We do not control the computation code, so we cannot signal it to stop. What can we do?
Nothing: Let the worker complete its task and somehow recognize afterwards that the result is incorrect and must be recomputed. That would mean continuing to use a processor for a useless result, possibly for a long time.
Don't use Pool: Create and join the processes as needed. We can then terminate the useless process and create another one. We can even keep bounds on the number of processes existing simultaneously. Unfortunately, we will not be reusing processes.
Find a way to terminate and replace a Pool worker: Is terminating a Pool worker even possible? Will the Pool create a replacement for the terminated one? If not, is there an external way of creating a new worker in a pool?
Given the strict "can't change computation code" limitation (which prevents checking for invalidation intermittently), your best option is probably #2.
In this case, the downside you mention for #2 ("Unfortunately, we will not be reusing processes.") isn't a huge deal. Reusing processes is an issue when the work done by a process is small relative to the overhead of launching the process. But it sounds like you're talking about processes that run over the course of seconds or longer; the cost of forking a new process (default on most UNIX-likes) is a trivial fraction of that, and spawning a process (default behavior on MacOS and Windows) is typically still measured in small fractions of a second.
For comparison:
Option #1 is wasteful; if you're anywhere close to using up your cores, and invalidation occurs with any frequency at all, you don't want to leave a core chugging on garbage indefinitely.
Option #3, even if it worked, would work only by coincidence, and might break in a new release of Python, since the behavior of killing workers explicitly is not a documented feature.
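A rough sketch of option #2, managing the processes by hand so that a worker whose parameters were invalidated can be terminated and replaced. The compute function and parameter values are placeholders standing in for the real computation:

import multiprocessing
import time

def compute(params, result_queue):
    # stand-in for the long CPU-bound computation we cannot modify
    time.sleep(2)
    result_queue.put(params * 2)

if __name__ == '__main__':
    result_queue = multiprocessing.Queue()
    params = 21
    worker = multiprocessing.Process(target=compute, args=(params, result_queue))
    worker.start()

    # ... the parameters change while the computation is running,
    # so its eventual result is useless ...
    params = 42
    worker.terminate()   # kill the worker whose result we no longer want
    worker.join()

    # replace it with a fresh process using the new parameters
    worker = multiprocessing.Process(target=compute, args=(params, result_queue))
    worker.start()
    result = result_queue.get()   # drain the queue before joining
    worker.join()
    print(result)                 # result computed with the up-to-date parameters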

Multithreaded socket Program - Handling Critical section

I am creating a multi-threaded program in which I want only one thread at a time to enter the critical section, where it creates a socket and sends some data, while all the others wait for that variable to clear.
I tried threading.Event but later realized that set() notifies all the waiting threads, while I only wanted to notify one.
I tried locks (acquire and release). They suited my scenario well, but I learned that contending for a lock held for a long time is expensive. After acquiring the lock, my thread performs many operations and hence ends up holding the lock for a long time.
Now I am trying threading.Condition. I just wanted to know whether acquiring and holding the condition for a long time is bad practice, since it also uses locks.
Can anyone suggest a better approach to my problem?
I would use an additional thread dedicated to sending. Use a Queue where the other threads put their Send-Data. The socket-thread gets items from the queue in a loop and sends them one after the other.
As long as the queue is empty, .get blocks and the send-thread sleeps.
The "producer" threads have no waiting time at all, they just put their data in the queue and continue.
There is no concern about possible deadlock conditions.
To stop the send-thread, put some special item (e.g. None) in the queue.
To enable returning of values, put a tuple (send_data, return_queue) in the send-queue. When a result is ready, return it by putting it in the return_queue.
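A minimal sketch of that design, assuming a simple line-oriented protocol; the host, port, and payload are placeholders:

import socket
import threading
import Queue   # Python 2; on Python 3 this module is called "queue"

def send_thread(send_queue, host, port):
    sock = socket.create_connection((host, port))
    while True:
        item = send_queue.get()             # sleeps while the queue is empty
        if item is None:                    # special item: shut the sender down
            break
        data, return_queue = item           # (send_data, return_queue) tuples
        sock.sendall(data)
        return_queue.put(sock.recv(4096))   # hand the reply back to the producer
    sock.close()

send_queue = Queue.Queue()
sender = threading.Thread(target=send_thread,
                          args=(send_queue, 'example.com', 1234))
sender.start()

# a producer just enqueues its data and, if it needs the reply, waits on its own queue
reply_queue = Queue.Queue()
send_queue.put(('hello\n', reply_queue))
print(reply_queue.get())

send_queue.put(None)   # stop the send-thread once all producers are done
sender.join()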

Multiprocessing-spawned process fails to terminate after call to return

I think there's a fairly high likelihood that this question has been asked before, but I was unable to find an appropriate solution.
I'm using python multiprocessing. Sometimes, all of my child processes terminate and return appropriately. Other times, they don't. This is the way with parallel processes, I suppose. What's odd is that the very last thing the child processes are supposed to do is:
job = q.get()
if job == None:
    q.put(None)
    print 'Got nothing, exiting'
    return
This message is always printed. It appears that it is stuck waiting for something, but I'm not sure what that could be. Everything it's using is declared to be threadsafe by online documentation. I'm not really certain what it could be waiting for, which is making searching for a solution difficult. One potential problem is that the child processes enqueue mutable lists in a 'return' queue, but I haven't found anything that would indicate that that is problematic. Thanks!
Edit: Removing the portion of the child process that enqueues the results in the results queue appears to eliminate the issue. However, I believe that the results queue should be threadsafe, and hence this shouldn't be an issue.

Retrieve exit code of processes launched with multiprocessing.Pool.map

I'm using python multiprocessing module to parallelize some computationally heavy tasks.
The obvious choice is to use a Pool of workers and then use the map method.
However, processes can fail. For instance, they may be silently killed by the oom-killer. Therefore I would like to be able to retrieve the exit code of the processes launched with map.
Additionally, for logging purposes, I would like to be able to know the PID of the process launched to execute each value in the iterable.
If you're using multiprocessing.Pool.map you're generally not interested in the exit code of the sub-processes in the pool, you're interested in what value they returned from their work item. This is because under normal conditions, the processes in a Pool won't exit until you close/join the pool, so there's no exit codes to retrieve until all work is complete, and the Pool is about to be destroyed. Because of this, there is no public API to get the exit codes of those sub-processes.
Now, you're worried about exceptional conditions, where something out-of-band kills one of the sub-processes while it's doing work. If you hit an issue like this, you're probably going to run into some strange behavior. In fact, in my tests where I killed a process in a Pool while it was doing work as part of a map call, map never completed, because the killed process didn't complete. Python did, however, immediately launch a new process to replace the one I killed.
That said, you can get the pid of each process in your pool by accessing the multiprocessing.Process objects inside the pool directly, using the private _pool attribute:
import multiprocessing

pool = multiprocessing.Pool()
for proc in pool._pool:
    print proc.pid
So, one thing you could do is try to detect when a process has died unexpectedly (assuming you don't get stuck in a blocking call as a result). You can do this by examining the list of processes in the pool before and after making a call to map_async:
before = pool._pool[:]  # Make a copy of the list of Process objects in our pool.
result = pool.map_async(func, iterable)  # Use map_async so we don't get stuck.
while not result.ready():  # Wait for the call to complete.
    if any(proc.exitcode for proc in before):  # Abort if one of our original processes is dead.
        print "One of our processes has exited. Something probably went horribly wrong."
        break
    result.wait(timeout=1)
else:  # We'll enter this block if we don't reach `break` above.
    print result.get()  # Actually fetch the result list here.
We have to make a copy of the list because when a process in the Pool dies, Python immediately replaces it with a new process, and removes the dead one from the list.
This worked for me in my tests, but because it's relying on a private attribute of the Pool object (_pool) it's risky to use in production code. I would also suggest that it may be overkill to worry too much about this scenario, since it's very unlikely to occur and complicates the implementation significantly.

How to synchronize python lists?

I have different threads, and after processing they put data in a common list. Is there anything built into Python so that a list or a NumPy array can be accessed by only a single thread at a time? And if not, what is an elegant way of doing it?
According to Thread synchronisation mechanisms in Python, reading a single item from a list and modifying a list in place are guaranteed to be atomic. If this is right (although it seems to be partially contradicted by the very existence of the Queue module), then if your code is all of the form:
try:
    val = mylist.pop()
except IndexError:
    pass   # wait for a while or exit
else:
    pass   # process val
And everything put into mylist is done by .append(), then your code is already threadsafe. If you don't trust that one document on that score, use a Queue.Queue, which does all the synchronisation for you, and has a better API than list for concurrent programs; in particular, it gives you the option of blocking indefinitely, or for a timeout, waiting for .get() to succeed if you don't have anything else the thread could be getting on with in the meantime.
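For the producer/consumer case, a minimal sketch using Queue.Queue (the worker count and timeout are arbitrary):

import threading
import Queue   # Python 2; "queue" on Python 3

work = Queue.Queue()

def consumer():
    while True:
        try:
            val = work.get(timeout=1)   # block up to a second instead of spinning
        except Queue.Empty:
            return                      # nothing arrived in time; give up
        # ... process val here ...
        work.task_done()

for item in range(10):
    work.put(item)

threads = [threading.Thread(target=consumer) for _ in range(4)]
for t in threads:
    t.start()
work.join()   # blocks until task_done() has been called for every item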
For numpy arrays, and in general any case where you need more than a producer/consumer queue, use a Lock or RLock from threading - these implement the context manager protocol, so using them is quite simple:
with mylock:
    pass   # process as necessary
And Python will guarantee that the lock gets released once you fall off the end of the with block, including in tricky cases such as when something you do raises an exception.
Finally, consider whether multiprocessing is a better fit for your application than threading: threads in Python aren't guaranteed to actually run concurrently, and in CPython they only can if they drop into C-level code. multiprocessing gets around that issue, but may have some extra overhead; if you haven't already, you should read the docs to determine which one suits your needs better.
threading provides Lock objects if you need to protect an entire critical section, or the Queue module provides a queue that is threadsafe.
How about the standard library Queue?
