Terminating all processes in Multiprocessing Pool - python

I have a script that is essentially an API scraper; it runs perpetually. I added a map_async pool to it and it's glorious, but the pool was hiding some errors, which I learned is pretty common. So I incorporated this wrapper helper function.
helper.py
import functools
import traceback

def trace_unhandled_exceptions(func):
    @functools.wraps(func)
    def wrapped_func(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except:
            print('Exception in ' + func.__name__)
            traceback.print_exc()
    return wrapped_func
My main script looks like
scraper.py
import multiprocessing as mp

from helper import trace_unhandled_exceptions

start_block = 100
end_block = 50000

@trace_unhandled_exceptions
def main(block_num):
    block = blah_blah(block_num)
    return block

if __name__ == "__main__":
    cpus = min(8, mp.cpu_count() - 1 or 1)
    pool = mp.Pool(cpus)
    pool.map_async(main, range(start_block - 20, end_block), chunksize=cpus)
    pool.close()
    pool.join()
This works great; I'm receiving the exception:
Exception in main
Traceback (most recent call last):
.....
How can I get the script to end on an exception? I've tried incorporating os._exit or sys.exit into the helper function, like this:
def trace_unhandled_exceptions(func):
    @functools.wraps(func)
    def wrapped_func(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except:
            print('Exception in ' + func.__name__)
            traceback.print_exc()
            os._exit(1)
    return wrapped_func
But I believe it's only terminating the child process and not the entire script. Any advice?

I don't think you need that trace_unhandled_exceptions decorator to do what you want, at least not if you use pool.apply_async() instead of pool.map_async(), because then you can use the error_callback= option it supports to be notified whenever the target function fails. Note that map_async() also supports something similar, but its callback isn't invoked until the entire iterable has been consumed, so it would not be suitable for what you're wanting to do.
I got the idea for this approach from @Tim Peters' answer to a similar question titled Multiprocessing Pool - how to cancel all running processes if one returns the desired result?
import multiprocessing as mp
import random
import time

START_BLOCK = 100
END_BLOCK = 1000

def blah_blah(block_num):
    if block_num % 10 == 0:
        print(f'Processing block {block_num}')
    time.sleep(random.uniform(.01, .1))
    return block_num

def main(block_num):
    if random.randint(0, 100) == 42:
        print('Raising random exception')
        raise RuntimeError('RANDOM TEST EXCEPTION')
    block = blah_blah(block_num)
    return block

def error_handler(exception):
    print(f'{exception} occurred, terminating pool.')
    pool.terminate()

if __name__ == "__main__":
    processes = min(8, mp.cpu_count() - 1 or 1)
    pool = mp.Pool(processes)
    for i in range(START_BLOCK - 20, END_BLOCK):
        pool.apply_async(main, (i,), error_callback=error_handler)
    pool.close()
    pool.join()
    print('-fini-')

I am not sure what you mean by the pool hiding errors. My experience is that when a worker function (i.e. the target of a Pool method) raises an uncaught exception, it doesn't go unnoticed. Anyway,...
I would suggest that:
You do not use your trace_unhandled_exceptions decorator and instead allow your worker function, main, to raise an exception, and
Instead of using method map_async (why that instead of map?), you use method imap, which lets you iterate over the individual return values, and any exception raised by main, as they become available. As soon as you detect an exception you can call multiprocessing.Pool.terminate() to (1) cancel any tasks that have been submitted but not yet started and (2) terminate any tasks that are running but not yet completed. As an aside, even if you don't call terminate, once an uncaught exception occurs in a submitted task, the processing pool flushes the input task queue.
Once the main process detects the exception, it can, of course, call sys.exit() after cleaning up the pool.
import multiprocessing as mp

start_block = 100
end_block = 50000

def main(block_num):
    if block_num == 1000:
        raise ValueError("I don't like 1000.")
    return block_num * block_num

if __name__ == "__main__":
    cpus = min(8, mp.cpu_count() - 1 or 1)
    pool = mp.Pool(cpus)
    it = pool.imap(main, range(start_block - 20, end_block), chunksize=cpus)
    results = []
    while True:
        try:
            result = next(it)
        except StopIteration:
            break
        except Exception as e:
            print(e)
            # Kill remaining tasks
            pool.terminate()
            break
        else:
            results.append(result)
    pool.close()
    pool.join()
Prints:
I don't like 1000.
Alternatively, you can keep your decorator function, but modify it to return the Exception instance it caught (currently, it implicitly returns None). Then you can modify the while True loop as follows:
while True:
    try:
        result = next(it)
    except StopIteration:
        break
    else:
        if isinstance(result, Exception):
            pool.terminate()
            break
        results.append(result)
Since no actual exception has been raised in the main process, the call to terminate becomes absolutely essential if you want to continue execution without letting the remaining submitted tasks run. Even if you just want to exit immediately, it is still a good idea to terminate and clean up the pool to ensure that nothing hangs when you do call exit.
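For reference, here is a minimal sketch of that modified decorator (my illustration of the idea, not the OP's original helper): it prints the traceback as before but returns the caught exception instance instead of None, so the main process can detect it with isinstance.

import functools
import traceback

def trace_unhandled_exceptions(func):
    @functools.wraps(func)
    def wrapped_func(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            print('Exception in ' + func.__name__)
            traceback.print_exc()
            return e  # returned to the main process instead of being raised
    return wrapped_func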

Related

How to timeout a looped script without stopping the loop [duplicate]

There is a socket-related function call in my code; that function is from another module and thus out of my control. The problem is that it occasionally blocks for hours, which is totally unacceptable. How can I limit the function's execution time from my code? I guess the solution must utilize another thread.
An improvement on @rik.the.vik's answer would be to use the with statement to give the timeout function some syntactic sugar:
import signal
from contextlib import contextmanager

class TimeoutException(Exception): pass

@contextmanager
def time_limit(seconds):
    def signal_handler(signum, frame):
        raise TimeoutException("Timed out!")
    signal.signal(signal.SIGALRM, signal_handler)
    signal.alarm(seconds)
    try:
        yield
    finally:
        signal.alarm(0)

try:
    with time_limit(10):
        long_function_call()
except TimeoutException as e:
    print("Timed out!")
I'm not sure how cross-platform this might be, but using signals and alarm might be a good way of looking at this. With a little work you could make this completely generic as well and usable in any situation.
http://docs.python.org/library/signal.html
So your code is going to look something like this.
import signal

def signal_handler(signum, frame):
    raise Exception("Timed out!")

signal.signal(signal.SIGALRM, signal_handler)
signal.alarm(10)  # Ten seconds
try:
    long_function_call()
except Exception as msg:
    print("Timed out!")
Here's a Linux/OSX way to limit a function's running time. This is in case you don't want to use threads, and want your program to wait until the function ends, or the time limit expires.
from multiprocessing import Process
from time import sleep

def f(time):
    sleep(time)

def run_with_limited_time(func, args, kwargs, time):
    """Runs a function with time limit

    :param func: The function to run
    :param args: The function's args, given as tuple
    :param kwargs: The function's keywords, given as dict
    :param time: The time limit in seconds
    :return: True if the function ended successfully. False if it was terminated.
    """
    p = Process(target=func, args=args, kwargs=kwargs)
    p.start()
    p.join(time)
    if p.is_alive():
        p.terminate()
        return False
    return True

if __name__ == '__main__':
    print(run_with_limited_time(f, (1.5, ), {}, 2.5))  # True
    print(run_with_limited_time(f, (3.5, ), {}, 2.5))  # False
I prefer a context manager approach because it allows the execution of multiple Python statements within a with time_limit block. Because Windows does not have SIGALRM, a more portable and perhaps more straightforward method is to use a Timer:
from contextlib import contextmanager
import threading
import _thread

class TimeoutException(Exception):
    def __init__(self, msg=''):
        self.msg = msg

@contextmanager
def time_limit(seconds, msg=''):
    timer = threading.Timer(seconds, lambda: _thread.interrupt_main())
    timer.start()
    try:
        yield
    except KeyboardInterrupt:
        raise TimeoutException("Timed out for operation {}".format(msg))
    finally:
        # if the action ends in specified time, timer is canceled
        timer.cancel()

import time
# ends after 5 seconds
with time_limit(5, 'sleep'):
    for i in range(10):
        time.sleep(1)

# this will actually end after 10 seconds
with time_limit(5, 'sleep'):
    time.sleep(10)
The key technique here is the use of _thread.interrupt_main to interrupt the main thread from the timer thread. One caveat is that the main thread does not always respond to the KeyboardInterrupt raised by the Timer quickly. For example, time.sleep() calls a system function so a KeyboardInterrupt will be handled after the sleep call.
Here is a simple way of getting the desired effect:
https://pypi.org/project/func-timeout
This saved my life.
And now an example of how it works: let's say you have a huge list of items to be processed and you are iterating your function over those items. However, for some strange reason, your function gets stuck on item n without raising an exception. You need the other items to be processed, the more the better. In this case, you can set a timeout for processing each item:
import time
import func_timeout

def my_function(n):
    """Sleep for n seconds and return n squared."""
    print(f'Processing {n}')
    time.sleep(n)
    return n**2

def main_controller(max_wait_time, all_data):
    """
    Feed my_function with a list of items to process (all_data).
    However, if max_wait_time is exceeded, return the item and a fail info.
    """
    res = []
    for data in all_data:
        try:
            my_square = func_timeout.func_timeout(
                max_wait_time, my_function, args=[data]
            )
            res.append((my_square, 'processed'))
        except func_timeout.FunctionTimedOut:
            print('error')
            res.append((data, 'fail'))
            continue
    return res

timeout_time = 2.1  # my time limit
all_data = range(1, 10)  # the data to be processed
res = main_controller(timeout_time, all_data)
print(res)
Doing this from within a signal handler is dangerous: you might be inside an exception handler at the time the exception is raised, and leave things in a broken state. For example,
def function_with_enforced_timeout():
    f = open_temporary_file()
    try:
        ...
    finally:
        here()
        unlink(f.filename)
If your exception is raised inside here(), the temporary file will never be deleted.
The solution here is for asynchronous exceptions to be postponed until the code is not inside exception-handling code (an except or finally block), but Python doesn't do that.
Note that this won't interrupt anything while executing native code; it'll only interrupt it when the function returns, so this may not help this particular case. (SIGALRM itself might interrupt the call that's blocking--but socket code typically simply retries after an EINTR.)
Doing this with threads is a better idea, since it's more portable than signals. Since you're starting a worker thread and blocking until it finishes, there are none of the usual concurrency worries. Unfortunately, there's no way to deliver an exception asynchronously to another thread in Python (other thread APIs can do this). It'll also have the same issue with sending an exception during an exception handler, and require the same fix.
You don't have to use threads. You can use another process to do the blocking work, for instance, maybe using the subprocess module. If you want to share data structures between different parts of your program then Twisted is a great library for giving yourself control of this, and I'd recommend it if you care about blocking and expect to have this trouble a lot. The bad news with Twisted is you have to rewrite your code to avoid any blocking, and there is a fair learning curve.
You can use threads to avoid blocking, but I'd regard this as a last resort, since it exposes you to a whole world of pain. Read a good book on concurrency before even thinking about using threads in production, e.g. Jean Bacon's "Concurrent Systems". I work with a bunch of people who do really cool high performance stuff with threads, and we don't introduce threads into projects unless we really need them.
The only "safe" way to do this, in any language, is to use a secondary process to do that timeout-thing, otherwise you need to build your code in such a way that it will time out safely by itself, for instance by checking the time elapsed in a loop or similar. If changing the method isn't an option, a thread will not suffice.
Why? Because you're risking leaving things in a bad state when you do. If the thread is simply killed mid-method, locks being held, etc. will just be held, and cannot be released.
So look at the process way, do not look at the thread way.
I would usually prefer using a contextmanager as suggested by @josh-lee.
But in case someone is interested in having this implemented as a decorator, here's an alternative.
Here's how it would look:
import time
from timeout import timeout

class Test(object):
    @timeout(2)
    def test_a(self, foo, bar):
        print(foo)
        time.sleep(1)
        print(bar)
        return 'A Done'

    @timeout(2)
    def test_b(self, foo, bar):
        print(foo)
        time.sleep(3)
        print(bar)
        return 'B Done'

t = Test()
print(t.test_a('python', 'rocks'))
print(t.test_b('timing', 'out'))
And this is the timeout.py module:
import threading

class TimeoutError(Exception):
    pass

class InterruptableThread(threading.Thread):
    def __init__(self, func, *args, **kwargs):
        threading.Thread.__init__(self)
        self._func = func
        self._args = args
        self._kwargs = kwargs
        self._result = None

    def run(self):
        self._result = self._func(*self._args, **self._kwargs)

    @property
    def result(self):
        return self._result

class timeout(object):
    def __init__(self, sec):
        self._sec = sec

    def __call__(self, f):
        def wrapped_f(*args, **kwargs):
            it = InterruptableThread(f, *args, **kwargs)
            it.start()
            it.join(self._sec)
            if not it.is_alive():
                return it.result
            raise TimeoutError('execution expired')
        return wrapped_f
The output:
python
rocks
A Done
timing
Traceback (most recent call last):
...
timeout.TimeoutError: execution expired
out
Notice that even if the TimeoutError is thrown, the decorated method will continue to run in a different thread. If you would also want this thread to be "stopped" see: Is there any way to kill a Thread in Python?
Using a simple decorator
Here's the version I made after studying the above answers. It's pretty straightforward.
import signal
from contextlib import contextmanager
from functools import wraps

class TimeoutException(Exception):  # as defined earlier in this thread
    pass

def function_timeout(seconds: int):
    """Wrapper of Decorator to pass arguments"""
    def decorator(func):
        @contextmanager
        def time_limit(seconds_):
            def signal_handler(signum, frame):  # noqa
                raise TimeoutException(f"Timed out in {seconds_} seconds!")
            signal.signal(signal.SIGALRM, signal_handler)
            signal.alarm(seconds_)
            try:
                yield
            finally:
                signal.alarm(0)

        @wraps(func)
        def wrapper(*args, **kwargs):
            with time_limit(seconds):
                return func(*args, **kwargs)
        return wrapper
    return decorator
How to use?
@function_timeout(seconds=5)
def my_naughty_function():
    while True:
        print("Try to stop me ;-p")
Well of course, don't forget to import the function if it is in a separate file.
Here's a timeout function I think I found via google and it works for me.
From:
http://code.activestate.com/recipes/473878/
def timeout(func, args=(), kwargs={}, timeout_duration=1, default=None):
    '''This function will spawn a thread and run the given function using the args, kwargs and
    return the given default value if the timeout_duration is exceeded
    '''
    import threading

    class InterruptableThread(threading.Thread):
        def __init__(self):
            threading.Thread.__init__(self)
            self.result = default

        def run(self):
            try:
                self.result = func(*args, **kwargs)
            except:
                self.result = default

    it = InterruptableThread()
    it.start()
    it.join(timeout_duration)
    if it.is_alive():
        return it.result
    else:
        return it.result
The method from @user2283347 is tested and working, but we want to get rid of the traceback messages. Using the pass trick from Remove traceback in Python on Ctrl-C, the modified code is:
from contextlib import contextmanager
import threading
import _thread

class TimeoutException(Exception): pass

@contextmanager
def time_limit(seconds):
    timer = threading.Timer(seconds, lambda: _thread.interrupt_main())
    timer.start()
    try:
        yield
    except KeyboardInterrupt:
        pass
    finally:
        # if the action ends in specified time, timer is canceled
        timer.cancel()

def timeout_svm_score(i):
    #from sklearn import svm
    #import numpy as np
    #from IPython.core.display import display
    #%store -r names X Y
    clf = svm.SVC(kernel='linear', C=1).fit(np.nan_to_num(X[[names[i]]]), Y)
    score = clf.score(np.nan_to_num(X[[names[i]]]), Y)
    #scoressvm.append((score, names[i]))
    display((score, names[i]))

%%time
with time_limit(5):
    i = 0
    timeout_svm_score(i)
#Wall time: 14.2 s

%%time
with time_limit(20):
    i = 0
    timeout_svm_score(i)
#(0.04541284403669725, '计划飞行时间')
#Wall time: 16.1 s

%%time
with time_limit(5):
    i = 14
    timeout_svm_score(i)
#Wall time: 5h 43min 41s
We can see that this method may need a very long time to interrupt the calculation: we asked for 5 seconds, but it finished after 5 hours.
This code works on Windows Server Datacenter 2016 with Python 3.7.3; I didn't test it on Unix. After mixing some answers from Google and Stack Overflow, it finally worked for me like this:
from multiprocessing import Process, Lock
import time
import os

def f(lock, id, sleepTime):
    lock.acquire()
    print("I'm P" + str(id) + " Process ID: " + str(os.getpid()))
    lock.release()
    time.sleep(sleepTime)  # sleeps for some time
    print("Process: " + str(id) + " took this much time:" + str(sleepTime))
    time.sleep(sleepTime)
    print("Process: " + str(id) + " took this much time:" + str(sleepTime*2))

if __name__ == '__main__':
    timeout_function = float(9)  # 9 seconds for max function time
    print("Main Process ID: " + str(os.getpid()))
    lock = Lock()
    p1 = Process(target=f, args=(lock, 1, 6,))  # Here you can change from 6 to 3 for instance, so you can watch the behavior
    start = time.time()
    print(type(start))
    p1.start()
    if p1.is_alive():
        print("process running a")
    else:
        print("process not running a")
    while p1.is_alive():
        timeout = time.time()
        if timeout - start > timeout_function:
            p1.terminate()
            print("process terminated")
        print("watching, time passed: " + str(timeout - start))
        time.sleep(1)
    if p1.is_alive():
        print("process running b")
    else:
        print("process not running b")
    p1.join()
    if p1.is_alive():
        print("process running c")
    else:
        print("process not running c")
    end = time.time()
    print("I am the main process, the two processes are done")
    print("Time taken:- " + str(end - start) + " secs")  # MainProcess terminates at approx ~ 5 secs.
    time.sleep(5)  # To see if on Task Manager the child process is really being terminated, and it is
    print("finishing")
The main code is from this link:
Create two child process using python(windows)
Then I used .terminate() to kill the child process. You can see that the function f calls 2 prints, one after 5 seconds and another after 10 seconds. However, with a 7 seconds sleep and the terminate(), it does not show the last print.
It worked for me, hope it helps!

Return from function if execution finished within timeout or make callback otherwise

I have a project in Python 3.5 without any usage of asynchronous features. I have to implement the following logic:
def should_return_in_3_sec(some_serious_job, arguments, finished_callback):
    # Start some_serious_job(*arguments) in a task
    # if it finishes within 3 sec:
    #     return result immediately
    # otherwise return None, but do not terminate task.
    # If the task finishes in 1 minute:
    #     call finished_callback(result)
    # else:
    #     call finished_callback(None)
    pass
The function should_return_in_3_sec() should remain synchronous, but it is up to me to write any new asynchronous code (including some_serious_job()).
What is the most elegant and pythonic way to do it?
Fork off a thread doing the serious job, let it write its result into a queue and then terminate. Read from that queue in your main thread with a timeout of three seconds. If the timeout occurs, start another thread and return None. Let the second thread read from the queue with a timeout of one minute; if that times out as well, call finished_callback(None); otherwise call finished_callback(result).
I sketched it like this:
import threading, queue

def should_return_in_3_sec(some_serious_job, arguments, finished_callback):
    result_queue = queue.Queue(1)

    def do_serious_job_and_deliver_result():
        result = some_serious_job(arguments)
        result_queue.put(result)

    threading.Thread(target=do_serious_job_and_deliver_result).start()
    try:
        result = result_queue.get(timeout=3)
    except queue.Empty:  # timeout?
        def expect_and_handle_late_result():
            try:
                result = result_queue.get(timeout=60)
            except queue.Empty:
                finished_callback(None)
            else:
                finished_callback(result)
        threading.Thread(target=expect_and_handle_late_result).start()
        return None
    else:
        return result
The threading module has some simple timeout options, see Thread.join(timeout) for example.
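For illustration, here is a minimal sketch (mine, not part of the original answer) of using Thread.join(timeout) to stop waiting after three seconds while the worker keeps running; slow_job is a hypothetical stand-in for some_serious_job:

import threading
import time

def slow_job(results):  # hypothetical stand-in for some_serious_job
    time.sleep(10)
    results.append(42)

results = []
worker = threading.Thread(target=slow_job, args=(results,))
worker.start()
worker.join(timeout=3)        # wait at most 3 seconds
if worker.is_alive():         # join() timed out; the thread keeps running
    print("No result yet:", None)
else:
    print("Result:", results[0])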
If you do choose to use asyncio, below is a partial solution to address some of your needs:
import asyncio
import time

async def late_response(task, flag, timeout, callback):
    done, pending = await asyncio.wait([task], timeout=timeout)
    callback(done.pop().result() if done else None)  # will raise an exception if some_serious_job failed
    flag[0] = True  # signal some_serious_job to stop
    return await task

async def launch_job(loop, some_serious_job, arguments, finished_callback,
                     timeout_1=3, timeout_2=5):
    flag = [False]
    task = loop.run_in_executor(None, some_serious_job, flag, *arguments)
    done, pending = await asyncio.wait([task], timeout=timeout_1)
    if done:
        return done.pop().result()  # will raise an exception if some_serious_job failed
    asyncio.ensure_future(
        late_response(task, flag, timeout_2, finished_callback))
    return None

def f(flag, n):
    for i in range(n):
        print("serious", i, flag)
        if flag[0]:
            return "CANCELLED"
        time.sleep(1)
    return "OK"

def finished(result):
    print("FINISHED", result)

loop = asyncio.get_event_loop()
result = loop.run_until_complete(launch_job(loop, f, [1], finished))
print("result:", result)
loop.run_forever()
This will run the job in a separate thread (use loop.set_default_executor(ProcessPoolExecutor()) to run a CPU-intensive task in a process instead). Keep in mind it is bad practice to terminate a process/thread; the code above uses a very simple list to signal the thread to stop (see also threading.Event / multiprocessing.Event).
While implementing your solution, you might discover you want to modify your existing code to use coroutines instead of threads.
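As a side note, here is a minimal sketch (my own illustration, not from the answer) of the same stop-signal idea using threading.Event instead of a plain list:

import threading
import time

stop_event = threading.Event()

def cancellable_job():
    for i in range(10):
        if stop_event.is_set():   # cooperative cancellation point
            return "CANCELLED"
        time.sleep(1)
    return "OK"

worker = threading.Thread(target=cancellable_job)
worker.start()
time.sleep(3)
stop_event.set()   # ask the job to stop at its next check
worker.join()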

How to determine what line an error occurred on while using multiprocessing in Python [duplicate]

It seems that when an exception is raised from a multiprocessing.Pool process, there is no stack trace or any other indication that it has failed. Example:
from multiprocessing import Pool

def go():
    print(1)
    raise Exception()
    print(2)

p = Pool()
p.apply_async(go)
p.close()
p.join()
prints 1 and stops silently. Interestingly, raising a BaseException instead works. Is there any way to make the behavior for all exceptions the same as BaseException?
Maybe I'm missing something, but isn't that what the get method of the Result object returns? See Process Pools.
class multiprocessing.pool.AsyncResult
The class of the result returned by Pool.apply_async() and Pool.map_async().
get([timeout])
Return the result when it arrives. If timeout is not None and the result does not arrive within timeout seconds then multiprocessing.TimeoutError is raised. If the remote call raised an exception then that exception will be reraised by get().
So, slightly modifying your example, one can do
from multiprocessing import Pool

def go():
    print(1)
    raise Exception("foobar")
    print(2)

p = Pool()
x = p.apply_async(go)
x.get()
p.close()
p.join()
Which gives as result
1
Traceback (most recent call last):
File "rob.py", line 10, in <module>
x.get()
File "/usr/lib/python2.6/multiprocessing/pool.py", line 422, in get
raise self._value
Exception: foobar
This is not completely satisfactory, since it does not print the traceback, but is better than nothing.
UPDATE: This bug has been fixed in Python 3.4, courtesy of Richard Oudkerk. See the issue get method of multiprocessing.pool.Async should return full traceback.
I have a reasonable solution for the problem, at least for debugging purposes. I do not currently have a solution that will raise the exception back in the main process. My first thought was to use a decorator, but you can only pickle functions defined at the top level of a module, so that's right out.
Instead, a simple wrapping class and a Pool subclass that uses this for apply_async (and hence apply). I'll leave map_async as an exercise for the reader.
import traceback
from multiprocessing.pool import Pool
import multiprocessing

# Shortcut to multiprocessing's logger
def error(msg, *args):
    return multiprocessing.get_logger().error(msg, *args)

class LogExceptions(object):
    def __init__(self, callable):
        self.__callable = callable

    def __call__(self, *args, **kwargs):
        try:
            result = self.__callable(*args, **kwargs)
        except Exception as e:
            # Here we add some debugging help. If multiprocessing's
            # debugging is on, it will arrange to log the traceback
            error(traceback.format_exc())
            # Re-raise the original exception so the Pool worker can
            # clean up
            raise
        # It was fine, give a normal answer
        return result

class LoggingPool(Pool):
    def apply_async(self, func, args=(), kwds={}, callback=None):
        return Pool.apply_async(self, LogExceptions(func), args, kwds, callback)

def go():
    print(1)
    raise Exception()
    print(2)

multiprocessing.log_to_stderr()
p = LoggingPool(processes=1)

p.apply_async(go)
p.close()
p.join()
This gives me:
1
[ERROR/PoolWorker-1] Traceback (most recent call last):
File "mpdebug.py", line 24, in __call__
result = self.__callable(*args, **kwargs)
File "mpdebug.py", line 44, in go
raise Exception()
Exception
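For completeness, a hedged sketch of the map_async counterpart that the answer leaves as an exercise, following the same wrapping idea (it assumes the LogExceptions class defined above):

from multiprocessing.pool import Pool

class LoggingPool(Pool):
    def apply_async(self, func, args=(), kwds={}, callback=None):
        return Pool.apply_async(self, LogExceptions(func), args, kwds, callback)

    def map_async(self, func, iterable, chunksize=None, callback=None):
        # Same idea: wrap the target so worker-side exceptions get logged
        return Pool.map_async(self, LogExceptions(func), iterable, chunksize, callback)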
The solution with the most votes at the time of writing has a problem:
from multiprocessing import Pool

def go():
    print(1)
    raise Exception("foobar")
    print(2)

p = Pool()
x = p.apply_async(go)
x.get()  ## waiting here for go() to complete...
p.close()
p.join()
As @dfrankow noted, it will wait on x.get(), which ruins the point of running a task asynchronously. So, for better efficiency (in particular if your worker function go takes a long time) I would change it to:
from multiprocessing import Pool

def go(x):
    print(1)
    # task_that_takes_a_long_time()
    raise Exception("Can't go anywhere.")
    print(2)
    return x**2

p = Pool()
results = []
for x in range(1000):
    results.append(p.apply_async(go, [x]))
p.close()

for r in results:
    r.get()
Advantages: the worker function is run asynchronously, so if for example you are running many tasks on several cores, it will be a lot more efficient than the original solution.
Disadvantages: if there is an exception in the worker function, it will only be raised after the pool has completed all the tasks. This may or may not be the desirable behaviour. EDITED according to @colinfang's comment, which fixed this.
Since there are already decent answers for multiprocessing.Pool available, I will provide a solution using a different approach for completeness.
For Python >= 3.2 the following solution seems to be the simplest:
from concurrent.futures import ProcessPoolExecutor, wait

def go():
    print(1)
    raise Exception()
    print(2)

futures = []
with ProcessPoolExecutor() as p:
    for i in range(10):
        futures.append(p.submit(go))

results = [f.result() for f in futures]
Advantages:
very little code
raises an exception in the main process
provides a stack trace
no external dependencies
For more info about the API please check out this
Additionally, if you are submitting a large number of tasks and you would like your main process to fail as soon as one of your tasks fails, you can use the following snippet:
from concurrent.futures import ProcessPoolExecutor, wait, FIRST_EXCEPTION, as_completed
import time

def go():
    print(1)
    time.sleep(0.3)
    raise Exception()
    print(2)

futures = []
with ProcessPoolExecutor(1) as p:
    for i in range(10):
        futures.append(p.submit(go))

    for f in as_completed(futures):
        if f.exception() is not None:
            for f in futures:
                f.cancel()
            break

[f.result() for f in futures]
All of the other answers fail only once all tasks have been executed.
I've had success logging exceptions with this decorator:
import traceback, functools, multiprocessing

def trace_unhandled_exceptions(func):
    @functools.wraps(func)
    def wrapped_func(*args, **kwargs):
        try:
            func(*args, **kwargs)
        except:
            print('Exception in ' + func.__name__)
            traceback.print_exc()
    return wrapped_func
with the code in the question, it's
@trace_unhandled_exceptions
def go():
    print(1)
    raise Exception()
    print(2)

p = multiprocessing.Pool(1)

p.apply_async(go)
p.close()
p.join()
Simply decorate the function you pass to your process pool. The key to this working is @functools.wraps(func); otherwise multiprocessing throws a PicklingError.
The code above gives:
1
Exception in go
Traceback (most recent call last):
File "<stdin>", line 5, in wrapped_func
File "<stdin>", line 4, in go
Exception
Since you have used apply_async, I guess the use case is that you want to run some asynchronous tasks. Using a callback for handling is another option. Please note this option is available only for Python 3.2 and above and is not available on Python 2.7.
from multiprocessing import Pool

def callback(result):
    print('success', result)

def callback_error(result):
    print('error', result)

def go():
    print(1)
    raise Exception()
    print(2)

p = Pool()
p.apply_async(go, callback=callback, error_callback=callback_error)

# You can do other things here
p.close()
p.join()
import logging
from multiprocessing import Pool

def proc_wrapper(func, *args, **kwargs):
    """Print exception because multiprocessing lib doesn't return them right."""
    try:
        return func(*args, **kwargs)
    except Exception as e:
        logging.exception(e)
        raise

def go(x):
    print(x)
    raise Exception("foobar")

p = Pool()
p.apply_async(proc_wrapper, (go, 5))
p.close()
p.join()
I created a module RemoteException.py that shows the full traceback of an exception in a process (Python 2). Download it and add this to your code:
import RemoteException

@RemoteException.showError
def go():
    raise Exception('Error!')

if __name__ == '__main__':
    import multiprocessing
    p = multiprocessing.Pool(processes=1)
    r = p.apply(go)  # full traceback is shown here
I'd try using pdb:
import pdb
import sys

def handler(type, value, tb):
    pdb.pm()

sys.excepthook = handler
