Keyboard Interrupt with python's multiprocessing

I am having issues gracefully handling a KeyboardInterrupt with python's multiprocessing.
(Yes, I know that Ctrl-C should not guarantee a graceful shutdown -- but let's leave that discussion for a different thread.)
Consider the following code, where I am using a multiprocessing.Manager#list(), which is a ListProxy that, as I understood it, handles multi-process access to a list.
When I Ctrl-C out of this -- I get a socket.error: [Errno 2] No such file or directory when trying to access the ListProxy.
I would love for the shared list not to be corrupted upon Ctrl-C. Is this possible?
Note: I want to solve this without using Pools and Queues.
from multiprocessing import Process, Manager
from time import sleep

def f(process_number, shared_array):
    try:
        print "starting thread: ", process_number
        shared_array.append(process_number)
        sleep(3)
        shared_array.append(process_number)
    except KeyboardInterrupt:
        print "Keyboard interrupt in process: ", process_number
    finally:
        print "cleaning up thread", process_number

if __name__ == '__main__':
    processes = []
    manager = Manager()
    shared_array = manager.list()
    for i in xrange(4):
        p = Process(target=f, args=(i, shared_array))
        p.start()
        processes.append(p)
    try:
        for process in processes:
            process.join()
    except KeyboardInterrupt:
        print "Keyboard interrupt in main"
    for item in shared_array:
        # raises "socket.error: [Errno 2] No such file or directory"
        print item
If you run that and then hit Ctrl-C, you get the following:
starting thread: 0
starting thread: 1
starting thread: 3
starting thread: 2
^CKeyboard interrupt in process: 3
Keyboard interrupt in process: 0
cleaning up thread 3
cleaning up thread 0
Keyboard interrupt in process: 1
Keyboard interrupt in process: 2
cleaning up thread 1
cleaning up thread 2
Keyboard interrupt in main
Traceback (most recent call last):
File "multi.py", line 33, in <module>
for item in shared_array:
File "<string>", line 2, in __getitem__
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/managers.py", line 755, in _callmethod
self._connect()
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/managers.py", line 742, in _connect
conn = self._Client(self._token.address, authkey=self._authkey)
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/connection.py", line 169, in Client
c = SocketClient(address)
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/connection.py", line 293, in SocketClient
s.connect(address)
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 2] No such file or directory
(Here is another approach using a multiprocessing.Lock with similar effect ... gist)
Similar Questions:
Catch Keyboard Interrupt to stop Python multiprocessing worker from working on queue
Keyboard Interrupts with python's multiprocessing Pool
Shared variable in python's multiprocessing

multiprocessing.Manager() fires up a child process which is responsible for handling your shared list proxy.
netstat output while running:
unix 2 [ ACC ] STREAM LISTENING 3921657 8457/python
/tmp/pymp-B9dcij/listener-X423Ml
This child process created by multiprocessing.Manager() is catching your SIGINT and exiting, causing anything related to it to be dereferenced, hence your "no such file" error (I also got several other errors depending on when I decided to send SIGINT).
To solve this you may directly declare a SyncManager object (instead of letting Manager() do it for you). This will require you to use the start() method to actually fire up the child process. The start() method takes an initialization function as its first argument (you can override SIGINT handling for the manager there).
Code below, give this a try:
from multiprocessing import Process, Manager
from multiprocessing.managers import BaseManager, SyncManager
from time import sleep
import signal

# handle SIGINT from SyncManager object
def mgr_sig_handler(signal, frame):
    print 'not closing the mgr'

# initializer for SyncManager
def mgr_init():
    signal.signal(signal.SIGINT, mgr_sig_handler)
    #signal.signal(signal.SIGINT, signal.SIG_IGN) # <- OR do this to just ignore the signal
    print 'initialized manager'

def f(process_number, shared_array):
    try:
        print "starting thread: ", process_number
        shared_array.append(process_number)
        sleep(3)
        shared_array.append(process_number)
    except KeyboardInterrupt:
        print "Keyboard interrupt in process: ", process_number
    finally:
        print "cleaning up thread", process_number

if __name__ == '__main__':
    processes = []

    # using SyncManager directly instead of letting Manager() do it for me
    manager = SyncManager()
    manager.start(mgr_init)  # fire up the child manager process
    shared_array = manager.list()

    for i in xrange(4):
        p = Process(target=f, args=(i, shared_array))
        p.start()
        processes.append(p)
    try:
        for process in processes:
            process.join()
    except KeyboardInterrupt:
        print "Keyboard interrupt in main"
    for item in shared_array:
        print item

As I answered on a similar (duplicate) question:
The simplest solution is to start the manager with
manager.start(signal.signal, (signal.SIGINT, signal.SIG_IGN))
instead of manager.start().
And check that the signal module is in your imports (import signal).
This catches and ignores SIGINT (Ctrl-C) in the manager process.
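For context, here is a minimal sketch of how that one-line change slots into the question's setup (only the manager startup differs; everything else in the question's code stays the same):
import signal
from multiprocessing.managers import SyncManager

manager = SyncManager()
# start() runs the given function in the manager's server process before it
# begins serving, so SIGINT (Ctrl-C) is ignored there and the shared list
# survives an interrupt in the other processes.
manager.start(signal.signal, (signal.SIGINT, signal.SIG_IGN))
shared_array = manager.list()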

Related

How to close a multiprocessing queue from a parent (consumer) process

When using a multiprocessing queue to communicate between processes, many articles recommend sending a terminate message to the queue.
However, if a child process is the producer, it may fail unexpectedly, leaving the consumer with no notification that it should stop expecting messages.
However, the parent process can be notified when a child dies.
It seems it should then be possible for it to notify a worker thread in this process to quit and not expect more messages. But how?
multiprocessing.Queue.close()
...doesn't notify consumers (Really? Wait? what!)
def onProcessQuit(): # Notify worker that we are done.
    messageQ.put("TERMINATE")
... doesn't let me wait for pending work to complete.
def onProcessQuit(): # Notify worker that we are done.
    messageQ.put("TERMINATE")
    # messageQ.close()
    messageQ.join_thread() # Wait for worker to complete
... fails because the queue is not yet closed.
def onProcessQuit(): # Notify worker that we are done.
    messageQ.put("TERMINATE")
    messageQ.close()
    messageQ.join_thread() # Wait for worker to complete
... seems like it should work, but fails in the worker with a TypeError exception:
msg = messageQ.get()
File "/usr/lib/python3.7/multiprocessing/queues.py", line 94, in get
res = self._recv_bytes()
File "/usr/lib/python3.7/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/usr/lib/python3.7/multiprocessing/connection.py", line 411, in _recv_bytes
return self._recv(size)
File "/usr/lib/python3.7/multiprocessing/connection.py", line 379, in _recv
chunk = read(handle, remaining)
TypeError: an integer is required (got type NoneType)
while not quit:
    try:
        msg = messageQ.get(block=True, timeout=0.5)
    except Empty:
        continue
... is terrible in that it needlessly trades shutdown latency against CPU load from polling.
Full example
import multiprocessing
import threading

def producer(messageQ):
    messageQ.put("1")
    messageQ.put("2")
    messageQ.put("3")

if __name__ == '__main__':
    messageQ = multiprocessing.Queue()

    def worker():
        try:
            while True:
                msg = messageQ.get()
                print(msg)
                if msg == "TERMINATE": return
                # messageQ.task_done()
        finally:
            print("Worker quit")
            # messageQ.close() # End thread
            # messageQ.join_thread()

    thr = threading.Thread(target=worker,
                           daemon=False) # The work queue is precious.
    thr.start()

    def onProcessQuit(): # Notify worker that we are done.
        messageQ.put("TERMINATE") # Notify worker we are done
        messageQ.close() # No more messages
        messageQ.join_thread() # Wait for worker to complete

    def runProcess():
        proc = multiprocessing.Process(target=producer, args=(messageQ,))
        proc.start()
        proc.join()
        print("runProcess quitting ...")
        onProcessQuit()
        print("runProcess quitting .. OK")

    runProcess()
If you are concerned about the producer process not completing normally, then I am not sure what your question is because your code as is should work except for a few corrections: (1) it is missing an import statement, (2) there is no call to runProcess and (3) your worker thread is incorrectly a daemon thread (as such it may end up terminating before it has had a chance to process all the messages on the queue).
I would also use as a personal preference (and not a correction) None as the special sentinel message instead of TERMINATE and remove some extraneous queue calls that you don't really need (I don't see your explicitly closing the queue accomplishing anything that is necessary).
These are the changes:
def producer(messageQ):
    messageQ.put("1")
    messageQ.put("2")
    messageQ.put("3")

if __name__ == '__main__':
    import multiprocessing
    import threading

    SENTINEL = None

    def worker():
        try:
            while True:
                msg = messageQ.get()
                if msg is SENTINEL:
                    return # No need to print the sentinel
                print(msg)
        finally:
            print("Worker quit")

    def onProcessQuit(): # Notify worker that we are done.
        messageQ.put(SENTINEL) # Notify worker we are done

    def runProcess():
        proc = multiprocessing.Process(target=producer, args=(messageQ,))
        proc.start()
        proc.join()
        print("runProcess quitting ...")
        onProcessQuit()
        print("runProcess quitting .. OK")
        thr.join()

    messageQ = multiprocessing.Queue()
    thr = threading.Thread(target=worker) # The work queue is precious.
    thr.start()
    runProcess()
Prints:
1
2
3
runProcess quitting ...
runProcess quitting .. OK
Worker quit

How to handle abnormal child process termination?

I'm using Python 3.7 and following this documentation. I want to have a process which spawns a child process, waits for it to finish a task, and gets some info back. I use the following code:
from multiprocessing import Process, Queue

if __name__ == '__main__':
    q = Queue()
    p = Process(target=some_func, args=(q,))
    p.start()
    print(q.get())
    p.join()
When the child process finishes correctly there is no problem and it works great, but the problem starts when my child process is terminated before it finishes.
In this case, my application hangs on the wait.
Giving a timeout to q.get() and p.join() does not completely solve the issue, because I want to know immediately that the child process died, not wait for the timeout.
Another problem is that a timeout on q.get() yields an exception, which I prefer to avoid.
Can someone suggest a more elegant way to overcome these issues?
Queue & Signal
One possibility would be registering a signal handler and using it to pass a sentinel value.
On Unix you could handle SIGCHLD in the parent, but that's not an option in your case. According to the signal module:
On Windows, signal() can only be called with SIGABRT, SIGFPE, SIGILL, SIGINT, SIGSEGV, SIGTERM, or SIGBREAK.
Not sure if killing it through Task-Manager will translate into SIGTERM but you can give it a try.
For handling SIGTERM you would need to register the signal handler in the child.
import os
import sys
import time
import signal
from functools import partial
from multiprocessing import Process, Queue

SENTINEL = None

def _sigterm_handler(signum, frame, queue):
    print("received SIGTERM")
    queue.put(SENTINEL)
    sys.exit()

def register_sigterm(queue):
    global _sigterm_handler
    _sigterm_handler = partial(_sigterm_handler, queue=queue)
    signal.signal(signal.SIGTERM, _sigterm_handler)

def some_func(q):
    register_sigterm(q)
    print(os.getpid())
    for i in range(30):
        time.sleep(1)
        q.put(f'msg_{i}')

if __name__ == '__main__':
    q = Queue()
    p = Process(target=some_func, args=(q,))
    p.start()
    for msg in iter(q.get, SENTINEL):
        print(msg)
    p.join()
Example Output:
12273
msg_0
msg_1
msg_2
msg_3
received SIGTERM
Process finished with exit code 0
Queue & Process.is_alive()
Even if this works with Task-Manager, your use-case sounds like you can't exclude force kills, so I think you're better off with an approach which doesn't rely on signals.
You can check in a loop if your process p.is_alive(), call queue.get() with a timeout specified and handle the Empty exceptions:
import os
import time
from queue import Empty
from multiprocessing import Process, Queue

def some_func(q):
    print(os.getpid())
    for i in range(30):
        time.sleep(1)
        q.put(f'msg_{i}')

if __name__ == '__main__':
    q = Queue()
    p = Process(target=some_func, args=(q,))
    p.start()

    while p.is_alive():
        try:
            msg = q.get(timeout=0.1)
        except Empty:
            pass
        else:
            print(msg)

    p.join()
It would also be possible to avoid the exception, but I wouldn't recommend it, because you wouldn't spend your waiting time "on the queue", which decreases responsiveness:
while p.is_alive():
    if not q.empty():
        msg = q.get_nowait()
        print(msg)
    time.sleep(0.1)
Pipe & Process.is_alive()
If you intend to utilize one connection per-child, it would however be possible to use a pipe instead of a queue. It's more performant than a queue
(which is mounted on top of a pipe) and you can use multiprocessing.connection.wait (Python 3.3+) to await readiness of multiple objects at once.
multiprocessing.connection.wait(object_list, timeout=None)
Wait till an object in object_list is ready. Returns the list of those objects in object_list which are ready. If timeout is a float then the call blocks for at most that many seconds. If timeout is None then it will block for an unlimited period. A negative timeout is equivalent to a zero timeout.
For both Unix and Windows, an object can appear in object_list if it is a readable Connection object;
a connected and readable socket.socket object; or
the sentinel attribute of a Process object.
A connection or socket object is ready when there is data available to be read from it, or the other end has been closed.
Unix: wait(object_list, timeout) is almost equivalent to select.select(object_list, [], [], timeout). The difference is that, if select.select() is interrupted by a signal, it can raise OSError with an error number of EINTR, whereas wait() will not.
Windows: An item in object_list must either be an integer handle which is waitable (according to the definition used by the documentation of the Win32 function WaitForMultipleObjects()) or it can be an object with a fileno() method which returns a socket handle or pipe handle. (Note that pipe handles and socket handles are not waitable handles.)
You can use this to await the sentinel attribute of the process and the parental end of the pipe concurrently.
import os
import time
from multiprocessing import Process, Pipe
from multiprocessing.connection import wait

def some_func(conn_write):
    print(os.getpid())
    for i in range(30):
        time.sleep(1)
        conn_write.send(f'msg_{i}')

if __name__ == '__main__':
    conn_read, conn_write = Pipe(duplex=False)
    p = Process(target=some_func, args=(conn_write,))
    p.start()

    while p.is_alive():
        wait([p.sentinel, conn_read]) # block-wait until something gets ready
        if conn_read.poll(): # check if something can be received
            print(conn_read.recv())

    p.join()

Do not print stack-trace using Pool python

I use a Pool to run several commands simultaneously. I would like to not print the stack trace when the user interrupts the script.
Here is my script structure:
def worker(some_element):
    try:
        cmd_res = Popen(SOME_COMMAND, stdout=PIPE, stderr=PIPE).communicate()
    except (KeyboardInterrupt, SystemExit):
        pass
    except Exception, e:
        print str(e)
        return
    # deal with cmd_res...

pool = Pool()
try:
    pool.map(worker, some_list, chunksize=1)
except KeyboardInterrupt:
    pool.terminate()
    print 'bye!'
By calling pool.terminate() when KeyboardInterrupt is raised, I expected the stack trace not to be printed, but it doesn't work; I sometimes get something like:
^CProcess PoolWorker-6:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 102, in worker
task = get()
File "/usr/lib/python2.7/multiprocessing/queues.py", line 374, in get
racquire()
KeyboardInterrupt
Process PoolWorker-1:
Process PoolWorker-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
Traceback (most recent call last):
...
bye!
Do you know how I could hide this?
Thanks.
In your case you don't even need pool processes or threads. And then it gets easier to silence KeyboardInterrupts with try/except.
Pool processes are useful when your Python code does CPU-consuming calculations that can profit from parallelization.
Threads are useful when your Python code does complex blocking I/O that can run in parallel. Here you just want to execute multiple programs in parallel and wait for the results. When you use Pool you create processes that do nothing other than start other processes and wait for them to terminate.
The simplest solution is to create all of the processes in parallel and then to call .communicate() on each of them:
try:
    processes = []
    # Start all processes at once
    for element in some_list:
        processes.append(Popen(SOME_COMMAND, stdout=PIPE, stderr=PIPE))
    # Fetch their results sequentially
    for process in processes:
        cmd_res = process.communicate()
        # Process your result here
except KeyboardInterrupt:
    for process in processes:
        try:
            process.terminate()
        except OSError:
            pass
This works when the output on STDOUT and STDERR isn't too big. Otherwise, when a process other than the one communicate() is currently being called on produces more output than fits in the PIPE buffer (usually around 1-8 kB), it will be suspended by the OS until communicate() is called on that process. In that case you need a more sophisticated solution:
Asynchronous I/O
Since Python 3.4 you can use the asyncio module for single-thread pseudo-multithreading:
import asyncio
from asyncio.subprocess import PIPE

loop = asyncio.get_event_loop()

@asyncio.coroutine
def worker(some_element):
    process = yield from asyncio.create_subprocess_exec(*SOME_COMMAND, stdout=PIPE)
    try:
        cmd_res = yield from process.communicate()
    except KeyboardInterrupt:
        process.terminate()
        return
    try:
        pass # Process your result here
    except KeyboardInterrupt:
        return

# Start all workers
workers = []
for element in some_list:
    w = worker(element)
    workers.append(w)
    asyncio.async(w)

# Run until everything complete
loop.run_until_complete(asyncio.wait(workers))
You should be able to limit the number of concurrent processes using e.g. asyncio.Semaphore if you need to.
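(For illustration, a minimal sketch of that limiting idea using the newer async/await syntax available since Python 3.5; the command list and the limit of 4 are placeholders:)
import asyncio
from asyncio.subprocess import PIPE

async def worker(cmd, sem):
    async with sem:  # at most `limit` subprocesses run at the same time
        process = await asyncio.create_subprocess_exec(*cmd, stdout=PIPE)
        return await process.communicate()

async def main(commands, limit=4):
    sem = asyncio.Semaphore(limit)
    return await asyncio.gather(*(worker(cmd, sem) for cmd in commands))

# results = asyncio.run(main([['echo', 'hi']] * 10))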
When you instantiate Pool, it creates cpu_count() (on my machine, 8) Python processes waiting for your worker(). Note that they don't run it yet; they are waiting for commands. When they are not executing your code, they also don't handle KeyboardInterrupt. You can see what they are doing if you specify Pool(processes=2) and send the interrupt. You can play with the number of processes to work around it, but I don't think you can handle it in all cases.
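(A commonly used mitigation, not from this answer: give the pool an initializer that makes the workers ignore SIGINT, so only the parent reacts to Ctrl-C. A rough sketch against the question's worker and some_list:)
import signal
from multiprocessing import Pool

def init_worker():
    # Pool workers ignore SIGINT; only the parent handles Ctrl-C.
    signal.signal(signal.SIGINT, signal.SIG_IGN)

pool = Pool(initializer=init_worker)
try:
    # .get() with a timeout keeps the parent interruptible while waiting
    # (a plain .map() can block uninterruptibly on Python 2).
    pool.map_async(worker, some_list, chunksize=1).get(timeout=9999)
    pool.close()
except KeyboardInterrupt:
    pool.terminate()
    print 'bye!'
pool.join()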
Personally, I don't recommend using multiprocessing.Pool for the task of launching other processes. It's overkill to launch several Python processes for that. A much more efficient way is to use threads (see threading.Thread, Queue.Queue). But in this case you need to implement the thread pool yourself, which is not so hard.
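(For what it's worth, a minimal sketch of such a hand-rolled thread pool, using the Python 3 module name queue (Queue in Python 2) and a placeholder command list:)
import threading
import queue
from subprocess import Popen, PIPE

def pool_worker(tasks):
    while True:
        cmd = tasks.get()
        if cmd is None:        # sentinel: shut this worker down
            break
        out, err = Popen(cmd, stdout=PIPE, stderr=PIPE).communicate()
        # deal with out/err here

task_queue = queue.Queue()
workers = [threading.Thread(target=pool_worker, args=(task_queue,)) for _ in range(4)]
for w in workers:
    w.start()

try:
    for cmd in [['echo', 'hi']] * 10:  # placeholder commands
        task_queue.put(cmd)
finally:
    for _ in workers:                  # one sentinel per worker
        task_queue.put(None)
    for w in workers:
        w.join()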
Your child process will receive both the KeyboardInterrupt exception and the exception from the terminate().
Because the child process receives the KeyboardInterrupt, a simple join() in the parent -- rather than the terminate() -- should suffice.
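(A quick sketch of that suggestion applied to the question's structure; note that Pool.join() requires close() or terminate() to have been called first:)
pool = Pool()
try:
    pool.map(worker, some_list, chunksize=1)
except KeyboardInterrupt:
    # The workers received the same Ctrl-C; just wait for them to exit
    # instead of calling terminate().
    print 'bye!'
finally:
    pool.close()
    pool.join()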
As y0prst suggested, I used threading.Thread instead of Pool.
Here is a working example, which rasterizes a set of vectors with ImageMagick (I know I can use mogrify for this, it's just an example).
#!/usr/bin/python
from os.path import abspath
from os import listdir
from threading import Thread
from subprocess import Popen, PIPE

RASTERISE_CALL = "magick %s %s"
INPUT_DIR = './tests_in/'

def get_vectors(dir):
    '''Return a list of svg files inside the `dir` directory'''
    return [abspath(dir+f).replace(' ', '\\ ') for f in listdir(dir) if f.endswith('.svg')]

class ImageMagickError(Exception):
    '''Custom error for failed ImageMagick calls'''
    def __init__(self, value): self.value = value
    def __str__(self): return repr(self.value)

class Rasterise(Thread):
    '''Rasterizes a given vector.'''
    def __init__(self, svg):
        self.stdout = None
        self.stderr = None
        Thread.__init__(self)
        self.svg = svg

    def run(self):
        p = Popen((RASTERISE_CALL % (self.svg, self.svg + '.png')).split(), shell=False, stdout=PIPE, stderr=PIPE)
        self.stdout, self.stderr = p.communicate()
        if self.stderr != '':
            raise ImageMagickError, 'can not rasterize ' + self.svg + ': ' + self.stderr

threads = []

def join_threads():
    '''Joins all the threads.'''
    for t in threads:
        try:
            t.join()
        except (KeyboardInterrupt, SystemExit):
            pass

# Rasterizes all the vectors in INPUT_DIR.
for f in get_vectors(INPUT_DIR):
    t = Rasterise(f)
    try:
        print 'rasterize ' + f
        t.start()
    except (KeyboardInterrupt, SystemExit):
        join_threads()
    except ImageMagickError:
        print 'Oops, IM can not rasterize ' + f + '.'
        continue
    threads.append(t)

# wait for all threads to end
join_threads()
print ('Finished!')
Please tell me if you think there is a more Pythonic way to do that, or if it can be optimised; I will edit my answer.

Python multithreading + multiprocessing BrokenPipeError (subprocess not exiting?)

I am getting a BrokenPipeError when threads which employ multiprocessing.JoinableQueue spawn processes. It seems to happen after the program has finished working and tries to exit, because it does everything it is supposed to do. What does it mean? Is there a way to fix this, or is it safe to ignore?
import requests
import multiprocessing
from multiprocessing import JoinableQueue
from queue import Queue
import threading

class ProcessClass(multiprocessing.Process):
    def __init__(self, func, in_queue, out_queue):
        super().__init__()
        self.in_queue = in_queue
        self.out_queue = out_queue
        self.func = func

    def run(self):
        while True:
            arg = self.in_queue.get()
            self.func(arg, self.out_queue)
            self.in_queue.task_done()

class ThreadClass(threading.Thread):
    def __init__(self, func, in_queue, out_queue):
        super().__init__()
        self.in_queue = in_queue
        self.out_queue = out_queue
        self.func = func

    def run(self):
        while True:
            arg = self.in_queue.get()
            self.func(arg, self.out_queue)
            self.in_queue.task_done()

def get_urls(host, out_queue):
    r = requests.get(host)
    out_queue.put(r.text)
    print(r.status_code, host)

def get_title(text, out_queue):
    print(text.strip('\r\n ')[:5])

if __name__ == '__main__':
    def test():
        q1 = JoinableQueue()
        q2 = JoinableQueue()

        for i in range(2):
            t = ThreadClass(get_urls, q1, q2)
            t.daemon = True
            t.setDaemon(True)
            t.start()

        for i in range(2):
            t = ProcessClass(get_title, q2, None)
            t.daemon = True
            t.start()

        for host in ("http://ibm.com", "http://yahoo.com", "http://google.com", "http://amazon.com", "http://apple.com",):
            q1.put(host)

        q1.join()
        q2.join()

    test()
    print('Finished')
Program output:
200 http://ibm.com
<!DOC
200 http://google.com
<!doc
200 http://yahoo.com
<!DOC
200 http://apple.com
<!DOC
200 http://amazon.com
<!DOC
Finished
Exception in thread Thread-2:
Traceback (most recent call last):
File "C:\Python\33\lib\multiprocessing\connection.py", line 313, in _recv_bytes
nread, err = ov.GetOverlappedResult(True)
BrokenPipeError: [WinError 109]
The pipe has been ended
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python\33\lib\threading.py", line 901, in _bootstrap_inner
self.run()
File "D:\Progs\Uspat\uspat\spider\run\threads_test.py", line 31, in run
arg = self.in_queue.get()
File "C:\Python\33\lib\multiprocessing\queues.py", line 94, in get
res = self._recv()
File "C:\Python\33\lib\multiprocessing\connection.py", line 251, in recv
buf = self._recv_bytes()
File "C:\Python\33\lib\multiprocessing\connection.py", line 322, in _recv_bytes
raise EOFError
EOFError
....
(cut same errors for other threads.)
If I switch JoinableQueue to queue.Queue for the multithreading part, everything is fixed, but why?
This is happening because you're leaving the background threads blocking in a multiprocessing.Queue.get call when the main thread exits, but it only happens in certain conditions:
A daemon thread is running and blocking on a multiprocessing.Queue.get when the main thread exits.
A multiprocessing.Process is running.
The multiprocessing context is something other than 'fork'.
The exception is telling you that the other end of the Connection that the multiprocessing.JoinableQueue is listening to when it's inside a get() call sent an EOF. Generally this means the other side of the Connection has closed. It makes sense that this happens during shutdown - Python is cleaning up all objects prior to exiting the interpreter, and part of that clean up involves closing all the open Connection objects. What I haven't been able to figure out yet is why it only (and always) happens if a multiprocessing.Process has been spawned (not forked, which is why it doesn't happen on Linux by default) and is still running. I can even reproduce it if I create a multiprocessing.Process that just sleeps in a while loop. It doesn't take any Queue objects at all. For whatever reason, the presence of a running, spawned child process seems to guarantee the exception will be raised. It might simply cause the order that things are destroyed to be just right for a race condition to occur, but that's a guess.
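(For reference, a minimal sketch of the conditions listed above: the spawn start method, a daemon thread blocked in a multiprocessing queue get(), and a spawned child still running when the main thread exits. Whether the traceback actually appears can depend on platform and shutdown timing:)
import multiprocessing
import threading
import time

def child():
    while True:      # a spawned child process that is still running at exit
        time.sleep(1)

def waiter(q):
    q.get()          # daemon thread left blocked on a multiprocessing queue

if __name__ == '__main__':
    multiprocessing.set_start_method('spawn')
    q = multiprocessing.JoinableQueue()
    threading.Thread(target=waiter, args=(q,), daemon=True).start()
    multiprocessing.Process(target=child, daemon=True).start()
    time.sleep(1)    # main thread exits here with the thread still blocked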
In any case, using a queue.Queue instead of multiprocessing.JoinableQueue is a good way to fix it, since you don't actually need a multiprocessing.Queue there. You could also make sure that the background threads and/or background processes are shut down before the main thread, by sending sentinels to their queues. So, make both run methods check for the sentinel:
def run(self):
    for arg in iter(self.in_queue.get, None): # None is the sentinel
        self.func(arg, self.out_queue)
        self.in_queue.task_done()
    self.in_queue.task_done()
And then send the sentinels when you're done:
threads = []
for i in range(2):
    t = ThreadClass(get_urls, q1, q2)
    t.daemon = True
    t.setDaemon(True)
    t.start()
    threads.append(t)
procs = []
for i in range(2):
    t = ProcessClass(get_title, q2, None)
    t.daemon = True
    t.start()
    procs.append(t)

for host in ("http://ibm.com", "http://yahoo.com", "http://google.com", "http://amazon.com", "http://apple.com",):
    q1.put(host)

q1.join()
# All items have been consumed from input queue, let's start shutting down.
for t in procs:
    q2.put(None)
    t.join()
for t in threads:
    q1.put(None)
    t.join()
q2.join()

BoundedSemaphore hangs in threads on KeyboardInterrupt

If you raise a KeyboardInterrupt while trying to acquire a semaphore, the threads that also try to release the same semaphore object hang indefinitely.
Code:
import threading
import time

def worker(i, sema):
    time.sleep(2)
    print i, "finished"
    sema.release()

sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
    sema.acquire()
    t = threading.Thread(target=worker, args=(x, sema))
    t.start()
    threads.append(t)
Start this up and then ^C as it is running. It will hang and never exit.
0 finished
3 finished
1 finished
2 finished
4 finished
^C5 finished
Traceback (most recent call last):
File "/tmp/proof.py", line 15, in <module>
sema.acquire()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 290, in acquire
self.__cond.wait()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 214, in wait
waiter.acquire()
KeyboardInterrupt
6 finished
7 finished
8 finished
9 finished
How can I get it to let the last few threads die natural deaths and then exit normally? (which it does if you don't try to interrupt it)
You can use the signal module to set a flag that tells the main thread to stop processing:
import threading
import time
import signal
import sys

sigint = False

def sighandler(num, frame):
    global sigint
    sigint = True

def worker(i, sema):
    time.sleep(2)
    print i, "finished"
    sema.release()

signal.signal(signal.SIGINT, sighandler)
sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
    sema.acquire()
    if sigint:
        sys.exit()
    t = threading.Thread(target=worker, args=(x, sema))
    t.start()
    t.join()
    threads.append(t)
In your original code you could also make the threads daemon threads. When you interrupt the script, the daemon threads all die as you expected.
t = ...
t.setDaemon(True)
t.start()
In this case, it looks like you might just want to use a thread pool to control the starting and stopping of your threads. You could use Chris Arndt's threadpool library in a manner something like this:
pool = ThreadPool(5)
try:
    # enqueue 100 worker threads
    pool.wait()
except KeyboardInterrupt, k:
    pool.dismiss(5)
    # the program will exit after all running threads are complete
This is bug #11714, and it has been patched in newer versions of Python.
If you are using an older Python, you could copy the version of Semaphore found in that patch into your project and use it instead of relying on the buggy version in threading.
# importing modules
import threading
import time

# defining our worker, passing a counter and the semaphore to it
def worker(i, sema):
    time.sleep(2)
    print i, "finished"
    # releasing the semaphore increments its value
    sema.release()

# creating the semaphore object
sema = threading.BoundedSemaphore(value=5)
# a list to store the created threads
threads = []
for x in xrange(100):
    try:
        sema.acquire()
        t = threading.Thread(target=worker, args=(x, sema))
        t.start()
        threads.append(t)
    # exit once the user hits CTRL+C
    # or you can make the thread a daemon with t.setDaemon(True)
    except KeyboardInterrupt:
        exit()
