I use a Pool to run several commands simultaneously. I would like to avoid printing the stack trace when the user interrupts the script.
Here is my script structure:
def worker(some_element):
    try:
        cmd_res = Popen(SOME_COMMAND, stdout=PIPE, stderr=PIPE).communicate()
    except (KeyboardInterrupt, SystemExit):
        pass
    except Exception, e:
        print str(e)
        return
    # deal with cmd_res...
pool = Pool()
try:
    pool.map(worker, some_list, chunksize=1)
except KeyboardInterrupt:
    pool.terminate()
    print 'bye!'
By calling pool.terminate() when the KeyboardInterrupt is raised, I expected the stack trace not to be printed, but it doesn't work; I sometimes get something like:
^CProcess PoolWorker-6:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 102, in worker
task = get()
File "/usr/lib/python2.7/multiprocessing/queues.py", line 374, in get
racquire()
KeyboardInterrupt
Process PoolWorker-1:
Process PoolWorker-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
Traceback (most recent call last):
...
bye!
Do you know how I could hide this?
Thanks.
In your case you don't even need pool processes or threads, and then it becomes easier to silence KeyboardInterrupt with a try/except.
Pool processes are useful when your Python code does CPU-bound calculations that can profit from parallelization.
Threads are useful when your Python code does complex blocking I/O that can run in parallel. You just want to execute multiple programs in parallel and wait for the results. When you use Pool, you create processes that do nothing other than start other processes and wait for them to terminate.
The simplest solution is to create all of the processes in parallel and then to call .communicate() on each of them:
try:
    processes = []
    # Start all processes at once
    for element in some_list:
        processes.append(Popen(SOME_COMMAND, stdout=PIPE, stderr=PIPE))
    # Fetch their results sequentially
    for process in processes:
        cmd_res = process.communicate()
        # Process your result here
except KeyboardInterrupt:
    for process in processes:
        try:
            process.terminate()
        except OSError:
            pass
This works as long as the output on stdout and stderr isn't too big. Otherwise, if a process other than the one communicate() is currently waiting on produces more output than fits in the pipe buffer (usually around 1-8 kB), it will be suspended by the OS until communicate() is called on it. In that case you need a more sophisticated solution:
Asynchronous I/O
Since Python 3.4 you can use the asyncio module for single-threaded pseudo-multithreading:
import asyncio
from asyncio.subprocess import PIPE

loop = asyncio.get_event_loop()

@asyncio.coroutine
def worker(some_element):
    process = yield from asyncio.create_subprocess_exec(*SOME_COMMAND, stdout=PIPE)
    try:
        cmd_res = yield from process.communicate()
    except KeyboardInterrupt:
        process.terminate()
        return
    try:
        pass  # Process your result here
    except KeyboardInterrupt:
        return

# Start all workers
workers = []
for element in some_list:
    w = worker(element)
    workers.append(w)
    asyncio.async(w)

# Run until everything complete
loop.run_until_complete(asyncio.wait(workers))
You should be able to limit the number of concurrent processes using e.g. asyncio.Semaphore if you need to.
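If you do, here is a minimal sketch using the same Python 3.4-style coroutine syntax as above (the limit of 4 concurrent subprocesses is an arbitrary assumption):

sem = asyncio.Semaphore(4)   # allow at most 4 subprocesses at a time

@asyncio.coroutine
def worker(some_element):
    with (yield from sem):   # acquired here, released when the block exits
        process = yield from asyncio.create_subprocess_exec(*SOME_COMMAND, stdout=PIPE)
        cmd_res = yield from process.communicate()
        # Process your result here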
When you instantiate Pool, it creates cpu_count() (on my machine, 8) Python processes that wait for your worker(). Note that they don't run it yet; they are waiting for a command. When they aren't executing your code, they also don't handle KeyboardInterrupt. You can see what they are doing if you specify Pool(processes=2) and send the interruption. You can play with the number of processes to work around it, but I don't think you can handle it in all cases.
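A quick way to see those idle workers from an interactive session (a sketch; _pool is an internal attribute, used here purely for inspection, as in the Pool-crash answer further down):

>>> from multiprocessing import Pool, cpu_count
>>> cpu_count()
8
>>> p = Pool()        # immediately starts cpu_count() idle worker processes
>>> len(p._pool)
8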
Personally, I don't recommend using multiprocessing.Pool for the task of launching other processes. It's overkill to launch several Python processes for that. A much more efficient way is to use threads (see threading.Thread, Queue.Queue). But in this case you need to implement the thread pool yourself, which is not that hard.
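A minimal sketch of such a hand-rolled thread pool (assumptions: SOME_COMMAND and some_list as in the question, four worker threads, Python 2 syntax to match the question):

from Queue import Queue
from subprocess import Popen, PIPE
from threading import Thread
from time import sleep

task_queue = Queue()

def thread_worker():
    while True:
        some_element = task_queue.get()
        try:
            cmd_res = Popen(SOME_COMMAND, stdout=PIPE, stderr=PIPE).communicate()
            # deal with cmd_res...
        finally:
            task_queue.task_done()

for _ in range(4):
    t = Thread(target=thread_worker)
    t.daemon = True   # daemon threads are killed when the main thread exits
    t.start()

for some_element in some_list:
    task_queue.put(some_element)

try:
    # Poll unfinished_tasks (what join() waits on) instead of calling
    # task_queue.join(), so Ctrl-C is handled promptly on Python 2
    while task_queue.unfinished_tasks:
        sleep(0.5)
except KeyboardInterrupt:
    print 'bye!'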
Your child process will receive both the KeyboardInterrupt exception and the exception from the terminate().
Because the child process receives the KeyboardInterrupt, a simple join() in the parent -- rather than the terminate() -- should suffice.
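A minimal sketch of that suggestion applied to the question's code (an interpretation rather than the answerer's exact code; note that Pool.join() requires close() or terminate() to have been called first, so close() is added here):

pool = Pool()
try:
    pool.map(worker, some_list, chunksize=1)
except KeyboardInterrupt:
    pool.close()   # stop feeding new tasks to the workers
    pool.join()    # let each worker handle its own KeyboardInterrupt and exit
    print 'bye!'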
As suggested by y0prst, I used threading.Thread instead of Pool.
Here is a working example, which rasterises a set of vectors with ImageMagick (I know I could use mogrify for this; it's just an example).
#!/usr/bin/python
from os.path import abspath
from os import listdir
from threading import Thread
from subprocess import Popen, PIPE

RASTERISE_CALL = "magick %s %s"
INPUT_DIR = './tests_in/'

def get_vectors(dir):
    '''Return a list of svg files inside the `dir` directory'''
    return [abspath(dir+f).replace(' ', '\\ ') for f in listdir(dir) if f.endswith('.svg')]

class ImageMagickError(Exception):
    '''Custom error for failed ImageMagick calls'''
    def __init__(self, value): self.value = value
    def __str__(self): return repr(self.value)

class Rasterise(Thread):
    '''Rasterises a given vector.'''
    def __init__(self, svg):
        self.stdout = None
        self.stderr = None
        Thread.__init__(self)
        self.svg = svg

    def run(self):
        p = Popen((RASTERISE_CALL % (self.svg, self.svg + '.png')).split(), shell=False, stdout=PIPE, stderr=PIPE)
        self.stdout, self.stderr = p.communicate()
        if self.stderr != '':
            raise ImageMagickError('can not rasterise ' + self.svg + ': ' + self.stderr)

threads = []

def join_threads():
    '''Joins all the threads.'''
    for t in threads:
        try:
            t.join()
        except (KeyboardInterrupt, SystemExit):
            pass

# Rasterise all the vectors in INPUT_DIR.
for f in get_vectors(INPUT_DIR):
    t = Rasterise(f)
    try:
        print 'rasterise ' + f
        t.start()
    except (KeyboardInterrupt, SystemExit):
        join_threads()
    except ImageMagickError:
        print 'Oops, IM can not rasterise ' + f + '.'
        continue
    threads.append(t)

# Wait for all threads to end.
join_threads()

print ('Finished!')
Please tell me if you think there is a more Pythonic way to do this, or if it can be optimised; I will edit my answer.
Related
I am getting a BrokenPipeError when threads that use a multiprocessing.JoinableQueue spawn processes. It seems to happen after the program has finished its work and tries to exit, because it does everything it is supposed to do. What does it mean? Is there a way to fix this, or is it safe to ignore?
import requests
import multiprocessing
from multiprocessing import JoinableQueue
from queue import Queue
import threading

class ProcessClass(multiprocessing.Process):
    def __init__(self, func, in_queue, out_queue):
        super().__init__()
        self.in_queue = in_queue
        self.out_queue = out_queue
        self.func = func

    def run(self):
        while True:
            arg = self.in_queue.get()
            self.func(arg, self.out_queue)
            self.in_queue.task_done()

class ThreadClass(threading.Thread):
    def __init__(self, func, in_queue, out_queue):
        super().__init__()
        self.in_queue = in_queue
        self.out_queue = out_queue
        self.func = func

    def run(self):
        while True:
            arg = self.in_queue.get()
            self.func(arg, self.out_queue)
            self.in_queue.task_done()

def get_urls(host, out_queue):
    r = requests.get(host)
    out_queue.put(r.text)
    print(r.status_code, host)

def get_title(text, out_queue):
    print(text.strip('\r\n ')[:5])

if __name__ == '__main__':
    def test():
        q1 = JoinableQueue()
        q2 = JoinableQueue()

        for i in range(2):
            t = ThreadClass(get_urls, q1, q2)
            t.daemon = True
            t.setDaemon(True)
            t.start()

        for i in range(2):
            t = ProcessClass(get_title, q2, None)
            t.daemon = True
            t.start()

        for host in ("http://ibm.com", "http://yahoo.com", "http://google.com", "http://amazon.com", "http://apple.com",):
            q1.put(host)

        q1.join()
        q2.join()

    test()
    print('Finished')
Program output:
200 http://ibm.com
<!DOC
200 http://google.com
<!doc
200 http://yahoo.com
<!DOC
200 http://apple.com
<!DOC
200 http://amazon.com
<!DOC
Finished
Exception in thread Thread-2:
Traceback (most recent call last):
File "C:\Python\33\lib\multiprocessing\connection.py", line 313, in _recv_bytes
nread, err = ov.GetOverlappedResult(True)
BrokenPipeError: [WinError 109]
The pipe has been ended
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python\33\lib\threading.py", line 901, in _bootstrap_inner
self.run()
File "D:\Progs\Uspat\uspat\spider\run\threads_test.py", line 31, in run
arg = self.in_queue.get()
File "C:\Python\33\lib\multiprocessing\queues.py", line 94, in get
res = self._recv()
File "C:\Python\33\lib\multiprocessing\connection.py", line 251, in recv
buf = self._recv_bytes()
File "C:\Python\33\lib\multiprocessing\connection.py", line 322, in _recv_bytes
raise EOFError
EOFError
....
(cut same errors for other threads.)
If I switch the JoinableQueue to a queue.Queue for the multithreading part, everything is fixed, but why?
This is happening because you're leaving the background threads blocking in a multiprocessing.Queue.get call when the main thread exits, but it only happens in certain conditions:
A daemon thread is running and blocking on a multiprocessing.Queue.get when the main thread exits.
A multiprocessing.Process is running.
The multiprocessing context is something other than 'fork'.
The exception is telling you that the other end of the Connection that the multiprocessing.JoinableQueue is listening to (while it's inside a get() call) sent an EOF. Generally this means the other side of the Connection has closed. It makes sense that this happens during shutdown: Python is cleaning up all objects prior to exiting the interpreter, and part of that clean-up involves closing all the open Connection objects. What I haven't been able to figure out yet is why it only (and always) happens if a multiprocessing.Process has been spawned (not forked, which is why it doesn't happen on Linux by default) and is still running. I can even reproduce it if I create a multiprocessing.Process that just sleeps in a while loop; it doesn't take any Queue objects at all. For whatever reason, the presence of a running, spawned child process seems to guarantee the exception will be raised. It might simply cause the order in which things are destroyed to be just right for the race condition to occur, but that's a guess.
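A minimal reproduction sketch based on that description (an illustration under assumptions, not the exact code used; it needs a 'spawn' start method such as the Windows default, and the exact error can vary by platform and Python version):

import multiprocessing
import threading
import time

def sleeper():
    while True:
        time.sleep(1)

def blocked_getter(q):
    q.get()   # blocks forever; this daemon thread is still blocked here at exit

if __name__ == '__main__':
    q = multiprocessing.JoinableQueue()
    t = threading.Thread(target=blocked_getter, args=(q,))
    t.daemon = True
    t.start()
    p = multiprocessing.Process(target=sleeper)
    p.daemon = True
    p.start()
    time.sleep(1)
    # The main thread exits here; during interpreter shutdown the queue's
    # Connection is closed and the blocked get() may report EOFError/BrokenPipeError.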
In any case, using a queue.Queue instead of multiprocessing.JoinableQueue is a good way to fix it, since you don't actually need a multiprocessing.Queue there. You could also make sure that the background threads and/or background processes are shut down before the main thread, by sending sentinels to their queues. So, make both run methods check for the sentinel:
def run(self):
    for arg in iter(self.in_queue.get, None):  # None is the sentinel
        self.func(arg, self.out_queue)
        self.in_queue.task_done()
    self.in_queue.task_done()  # mark the sentinel itself as done
And then send the sentinels when you're done:
threads = []
for i in range(2):
    t = ThreadClass(get_urls, q1, q2)
    t.daemon = True
    t.setDaemon(True)
    t.start()
    threads.append(t)
procs = []
for i in range(2):
    t = ProcessClass(get_title, q2, None)
    t.daemon = True
    t.start()
    procs.append(t)

for host in ("http://ibm.com", "http://yahoo.com", "http://google.com", "http://amazon.com", "http://apple.com",):
    q1.put(host)

q1.join()

# All items have been consumed from the input queue; start shutting down.
for t in procs:
    q2.put(None)
    t.join()

for t in threads:
    q1.put(None)
    t.join()

q2.join()
What happens when a python script opens subprocesses and one process crashes?
https://stackoverflow.com/a/18216437/311901
Will the main process crash?
Will the other subprocesses crash?
Is there a signal or other event that's propagated?
When using multiprocessing.Pool, if one of the subprocesses in the pool crashes, you will not be notified at all, and a new process will immediately be started to take its place:
>>> import multiprocessing
>>> p = multiprocessing.Pool()
>>> p._processes
4
>>> p._pool
[<Process(PoolWorker-1, started daemon)>, <Process(PoolWorker-2, started daemon)>, <Process(PoolWorker-3, started daemon)>, <Process(PoolWorker-4, started daemon)>]
>>> [proc.pid for proc in p._pool]
[30760, 30761, 30762, 30763]
Then in another window:
dan#dantop:~$ kill 30763
Back to the pool:
>>> [proc.pid for proc in p._pool]
[30760, 30761, 30762, 30767] # New pid for the last process
You can continue using the pool as if nothing happened. However, any work item that the killed child process was running at the time it died will not be completed or restarted. If you were running a blocking map or apply call that was relying on that work item to complete, it will likely hang indefinitely. There is a bug filed for this, but the issue was only fixed in concurrent.futures.ProcessPoolExecutor, rather than in multiprocessing.Pool. Starting with Python 3.3, ProcessPoolExecutor will raise a BrokenProcessPool exception if a child process is killed, and disallow any further use of the pool. Sadly, multiprocessing didn't get this enhancement. For now, if you want to guard against a pool call blocking forever due to a sub-process crashing, you have to use ugly workarounds.
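A minimal sketch of that ProcessPoolExecutor behaviour (an illustration under assumptions: Python 3.3+ and a POSIX system where SIGKILL is available; the worker kills itself to simulate a crash):

import os
import signal
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool

def crash(_):
    os.kill(os.getpid(), signal.SIGKILL)   # simulate the worker dying abruptly

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=2) as executor:
        try:
            list(executor.map(crash, range(4)))
        except BrokenProcessPool:
            print('pool is broken; further use of it is disallowed')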
Note: The above only applies to a process in a pool actually crashing, meaning the process completely dies. If a sub-process raises an exception, that exception will be propagated up to the parent process when you try to retrieve the result of the work item:
>>> def f(): raise Exception("Oh no")
...
>>> pool = multiprocessing.Pool()
>>> result = pool.apply_async(f)
>>> result.get()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/multiprocessing/pool.py", line 528, in get
raise self._value
Exception: Oh no
When using a multiprocessing.Process directly, the process object will show that the process has exited with a non-zero exit code if it crashes:
>>> def f(): time.sleep(30)
...
>>> p = multiprocessing.Process(target=f)
>>> p.start()
>>> p.join() # Kill the process while this is blocking, and join immediately ends
>>> p.exitcode
-15
The behavior is similar if an exception is raised:
from multiprocessing import Process

def f(x):
    raise Exception("Oh no")

if __name__ == '__main__':
    p = Process(target=f)
    p.start()
    p.join()
    print(p.exitcode)
    print("done")
Output:
Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python3.2/multiprocessing/process.py", line 267, in _bootstrap
self.run()
File "/usr/lib/python3.2/multiprocessing/process.py", line 116, in run
self._target(*self._args, **self._kwargs)
TypeError: f() takes exactly 1 argument (0 given)
1
done
As you can see, the traceback from the child is printed, but it doesn't affect execution of the main process, which is able to show that the exit code of the child was 1.
I am having issues gracefully handling a KeyboardInterrupt with Python's multiprocessing.
(Yes, I know that Ctrl-C should not guarantee a graceful shutdown, but let's leave that discussion for a different thread.)
Consider the following code, where I am using a multiprocessing.Manager().list(), which is a ListProxy and which I understood handles multi-process access to a list.
When I Ctrl-C out of this, I get a socket.error: [Errno 2] No such file or directory when trying to access the ListProxy.
I would love for the shared list not to be corrupted upon Ctrl-C. Is this possible?
Note: I want to solve this without using Pools and Queues.
from multiprocessing import Process, Manager
from time import sleep

def f(process_number, shared_array):
    try:
        print "starting thread: ", process_number
        shared_array.append(process_number)
        sleep(3)
        shared_array.append(process_number)
    except KeyboardInterrupt:
        print "Keyboard interrupt in process: ", process_number
    finally:
        print "cleaning up thread", process_number

if __name__ == '__main__':
    processes = []
    manager = Manager()
    shared_array = manager.list()
    for i in xrange(4):
        p = Process(target=f, args=(i, shared_array))
        p.start()
        processes.append(p)
    try:
        for process in processes:
            process.join()
    except KeyboardInterrupt:
        print "Keyboard interrupt in main"
    for item in shared_array:
        # raises "socket.error: [Errno 2] No such file or directory"
        print item
If you run that and then hit Ctrl-C, you get the following:
starting thread: 0
starting thread: 1
starting thread: 3
starting thread: 2
^CKeyboard interupt in process: 3
Keyboard interupt in process: 0
cleaning up thread 3
cleaning up thread 0
Keyboard interupt in process: 1
Keyboard interupt in process: 2
cleaning up thread 1
cleaning up thread 2
Keyboard interupt in main
Traceback (most recent call last):
File "multi.py", line 33, in <module>
for item in shared_array:
File "<string>", line 2, in __getitem__
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/managers.py", line 755, in _callmethod
self._connect()
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/managers.py", line 742, in _connect
conn = self._Client(self._token.address, authkey=self._authkey)
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/connection.py", line 169, in Client
c = SocketClient(address)
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/connection.py", line 293, in SocketClient
s.connect(address)
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 2] No such file or directory
(Here is another approach using a multiprocessing.Lock with a similar effect ... gist)
Similar Questions:
Catch Keyboard Interrupt to stop Python multiprocessing worker from working on queue
Keyboard Interrupts with python's multiprocessing Pool
Shared variable in python's multiprocessing
multiprocessing.Manager() fires up a child process which is responsible for handling your shared list proxy.
netstat output while running:
unix 2 [ ACC ] STREAM LISTENING 3921657 8457/python
/tmp/pymp-B9dcij/listener-X423Ml
This child process created by multiprocessing.Manager() is catching your SIGINT and exiting, causing anything related to it to be dereferenced, hence your "no such file" error (I also got several other errors depending on when I decided to send SIGINT).
To solve this, you can declare a SyncManager object directly (instead of letting Manager() do it for you). This requires you to use the start() method to actually fire up the child process. The start() method takes an initialization function as its first argument (you can override SIGINT for the manager there).
Code is below; give it a try:
from multiprocessing import Process, Manager
from multiprocessing.managers import BaseManager, SyncManager
from time import sleep
import signal

# Handle SIGINT from the SyncManager object
def mgr_sig_handler(signal, frame):
    print 'not closing the mgr'

# Initializer for SyncManager
def mgr_init():
    signal.signal(signal.SIGINT, mgr_sig_handler)
    #signal.signal(signal.SIGINT, signal.SIG_IGN) # <- OR do this to just ignore the signal
    print 'initialized manager'

def f(process_number, shared_array):
    try:
        print "starting thread: ", process_number
        shared_array.append(process_number)
        sleep(3)
        shared_array.append(process_number)
    except KeyboardInterrupt:
        print "Keyboard interrupt in process: ", process_number
    finally:
        print "cleaning up thread", process_number

if __name__ == '__main__':
    processes = []

    # Use SyncManager directly instead of letting Manager() do it for me
    manager = SyncManager()
    manager.start(mgr_init)  # fire up the child manager process
    shared_array = manager.list()

    for i in xrange(4):
        p = Process(target=f, args=(i, shared_array))
        p.start()
        processes.append(p)

    try:
        for process in processes:
            process.join()
    except KeyboardInterrupt:
        print "Keyboard interrupt in main"

    for item in shared_array:
        print item
As I answered on a similar (duplicate) question:
The simplest solution is to start the manager with
manager.start(signal.signal, (signal.SIGINT, signal.SIG_IGN))
instead of manager.start().
And check that the signal module is in your imports (import signal).
This catches and ignores SIGINT (Ctrl-C) in the manager process.
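For example, a minimal sketch of that change applied to the earlier example (only the manager setup differs; the rest of the example stays the same):

import signal
from multiprocessing.managers import SyncManager

manager = SyncManager()
manager.start(signal.signal, (signal.SIGINT, signal.SIG_IGN))  # ignore Ctrl-C in the manager process
shared_array = manager.list()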
My understanding is that finally clauses must *always* be executed if the try has been entered.
import random
from multiprocessing import Pool
from time import sleep

def Process(x):
    try:
        print x
        sleep(random.random())
        raise Exception('Exception: ' + x)
    finally:
        print 'Finally: ' + x

Pool(3).map(Process, ['1','2','3'])
The expected output is that, for each x printed on its own by the print x statement, there must be a corresponding occurrence of 'Finally: x'.
Example output:
$ python bug.py
1
2
3
Finally: 2
Traceback (most recent call last):
File "bug.py", line 14, in <module>
Pool(3).map(Process, ['1','2','3'])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 225, in map
return self.map_async(func, iterable, chunksize).get()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 522, in get
raise self._value
Exception: Exception: 2
It seems that an exception terminating one process terminates the parent and sibling processes, even though there is further work required to be done in other processes.
Why am I wrong? Why is this correct? If this is correct, how should one safely clean up resources in multiprocess Python?
Short answer: SIGTERM trumps finally.
Long answer: Turn on logging with mp.log_to_stderr():
import random
import multiprocessing as mp
import time
import logging

logger = mp.log_to_stderr(logging.DEBUG)

def Process(x):
    try:
        logger.info(x)
        time.sleep(random.random())
        raise Exception('Exception: ' + x)
    finally:
        logger.info('Finally: ' + x)

result = mp.Pool(3).map(Process, ['1','2','3'])
The logging output includes:
[DEBUG/MainProcess] terminating workers
Which corresponds to this code in multiprocessing.pool._terminate_pool:
if pool and hasattr(pool[0], 'terminate'):
    debug('terminating workers')
    for p in pool:
        p.terminate()
Each p in pool is a multiprocessing.Process, and calling terminate (at least on non-Windows machines) sends SIGTERM:
from multiprocessing/forking.py:
class Popen(object):
    def terminate(self):
        ...
        try:
            os.kill(self.pid, signal.SIGTERM)
        except OSError, e:
            if self.wait(timeout=0.1) is None:
                raise
So it comes down to what happens when a Python process in a try suite is sent a SIGTERM.
Consider the following example (test.py):
import time
def worker():
try:
time.sleep(100)
finally:
print('enter finally')
time.sleep(2)
print('exit finally')
worker()
If you run it and then send it a SIGTERM, the process ends immediately without entering the finally suite, as evidenced by the lack of output and the lack of delay.
In one terminal:
% test.py
In second terminal:
% pkill -TERM -f "test.py"
Result in first terminal:
Terminated
Compare that with what happens when the process is sent a SIGINT (C-c):
In second terminal:
% pkill -INT -f "test.py"
Result in first terminal:
enter finally
exit finally
Traceback (most recent call last):
File "/home/unutbu/pybin/test.py", line 14, in <module>
worker()
File "/home/unutbu/pybin/test.py", line 8, in worker
time.sleep(100)
KeyboardInterrupt
Conclusion: SIGTERM trumps finally.
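As a follow-up sketch (my own addition, not part of the answer above): if you want the finally suite to run on SIGTERM, one option is to turn the signal into an exception by installing a handler in the process that receives it, e.g. in test.py:

import signal
import time

def raise_system_exit(signum, frame):
    raise SystemExit(1)   # turn SIGTERM into an exception so finally blocks run

signal.signal(signal.SIGTERM, raise_system_exit)

def worker():
    try:
        time.sleep(100)
    finally:
        print('enter finally')
        time.sleep(2)
        print('exit finally')

worker()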
The answer from unutbu definitely explains why you get the behavior you observe. However, it should be emphasized that SIGTERM is sent only because of how multiprocessing.pool._terminate_pool is implemented. If you can avoid using Pool, then you can get the behavior you desire. Here is a borrowed example:
from multiprocessing import Process
from time import sleep
import random

def f(x):
    try:
        sleep(random.random()*10)
        raise Exception
    except:
        print "Caught exception in process:", x
        # Make this last longer than the except clause in main.
        sleep(3)
    finally:
        print "Cleaning up process:", x

if __name__ == '__main__':
    processes = []
    for i in range(4):
        p = Process(target=f, args=(i,))
        p.start()
        processes.append(p)

    try:
        for process in processes:
            process.join()
    except:
        print "Caught exception in main."
    finally:
        print "Cleaning up main."
After sending a SIGINT, example output is:
Caught exception in process: 0
^C
Cleaning up process: 0
Caught exception in main.
Cleaning up main.
Caught exception in process: 1
Caught exception in process: 2
Caught exception in process: 3
Cleaning up process: 1
Cleaning up process: 2
Cleaning up process: 3
Note that the finally clause is run for all processes. If you need shared memory, consider using Queue, Pipe, Manager, or some external store like redis or sqlite3.
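For instance, a minimal sketch of passing results back over a multiprocessing.Queue (an illustration only, not part of the borrowed example; Python 2 syntax to match it):

from multiprocessing import Process, Queue

def f(x, results):
    try:
        results.put(x * x)   # whatever the worker computes
    finally:
        print "Cleaning up process:", x

if __name__ == '__main__':
    results = Queue()
    processes = [Process(target=f, args=(i, results)) for i in range(4)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    for _ in range(4):   # one result per child
        print results.get()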
A finally clause does not swallow the exception: unless you return from it, the original exception propagates once the clause has run. The exception is then re-raised by Pool.map in the parent and kills your entire application. The subprocesses are terminated, and you see no other exceptions.
You can add a return to swallow the exception:
def Process(x):
    try:
        print x
        sleep(random.random())
        raise Exception('Exception: ' + x)
    finally:
        print 'Finally: ' + x
        return
Then you should have None in your map result when an exception occurred.
I am trying to run a simple multi-process application in Python. The main thread spawns 1 to N processes and waits until they are all done processing. The processes each run an infinite loop, so they can potentially run forever without user interruption, so I put in some code to handle a KeyboardInterrupt:
#!/usr/bin/env python
import sys
import time
from multiprocessing import Process

def main():
    # Set up inputs..
    # Spawn processes
    Proc(1).start()
    Proc(2).start()

class Proc(Process):
    def __init__(self, procNum):
        self.id = procNum
        Process.__init__(self)

    def run(self):
        doneWork = False
        while True:
            try:
                # Do work...
                time.sleep(1)
                sys.stdout.write('.')
                if doneWork:
                    print "PROC#" + str(self.id) + " Done."
                    break
            except KeyboardInterrupt:
                print "User aborted."
                sys.exit()

# Main Entry
if __name__ == "__main__":
    main()
The problem is that when using CTRL-C to exit, I get an additional error even though the processes seem to exit immediately:
......User aborted.
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "C:\Python26\lib\atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "C:\Python26\lib\multiprocessing\util.py", line 281, in _exit_function
p.join()
File "C:\Python26\lib\multiprocessing\process.py", line 119, in join
res = self._popen.wait(timeout)
File "C:\Python26\lib\multiprocessing\forking.py", line 259, in wait
res = _subprocess.WaitForSingleObject(int(self._handle), msecs)
KeyboardInterrupt
Error in sys.exitfunc:
Traceback (most recent call last):
File "C:\Python26\lib\atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "C:\Python26\lib\multiprocessing\util.py", line 281, in _exit_function
p.join()
File "C:\Python26\lib\multiprocessing\process.py", line 119, in join
res = self._popen.wait(timeout)
File "C:\Python26\lib\multiprocessing\forking.py", line 259, in wait
res = _subprocess.WaitForSingleObject(int(self._handle), msecs)
KeyboardInterrupt
I am running Python 2.6 on Windows. If there is a better way to handle multiprocessing in Python, please let me know.
This is a very old question, but it seems like the accepted answer does not actually fix the problem.
The main issue is that you need to handle the keyboard interrupt in the parent process as well. In addition, in the while loop you just need to exit the loop; there's no need to call sys.exit().
I've tried to match the example in the original question as closely as possible. The doneWork code does nothing in the example, so I have removed it for clarity.
import sys
import time
from multiprocessing import Process

def main():
    # Set up inputs..
    # Spawn processes
    try:
        processes = [Proc(1), Proc(2)]
        [p.start() for p in processes]
        [p.join() for p in processes]
    except KeyboardInterrupt:
        pass

class Proc(Process):
    def __init__(self, procNum):
        self.id = procNum
        Process.__init__(self)

    def run(self):
        while True:
            try:
                # Do work...
                time.sleep(1)
                sys.stdout.write('.')
            except KeyboardInterrupt:
                print("User aborted.")
                break

# Main Entry
if __name__ == "__main__":
    main()
Rather than just forcing sys.exit(), you want to send a signal to your threads to tell them to stop. Look into using signal handlers and threads in Python.
You could potentially do this by changing your while True: loop to while keep_processing:, where keep_processing is some sort of global variable that gets set on the KeyboardInterrupt exception. I don't think this is good practice, though. A sketch of the idea is below.
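A minimal sketch of that flag-based idea (an illustration only; the keep_processing name comes from the suggestion above, the threading.Event is my addition, and note that a plain flag like this is only shared between threads, not between multiprocessing.Process instances):

import sys
import time
import threading

keep_processing = threading.Event()
keep_processing.set()

def run():
    while keep_processing.is_set():
        # Do work...
        time.sleep(1)
        sys.stdout.write('.')

threads = [threading.Thread(target=run) for _ in range(2)]
for t in threads:
    t.start()

try:
    while any(t.is_alive() for t in threads):
        time.sleep(0.5)
except KeyboardInterrupt:
    keep_processing.clear()   # tell the workers to stop
    for t in threads:
        t.join()
    print("User aborted.")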