How to catch exceptions thrown by functions executed using multiprocessing.Process() (Python)

How can I catch exceptions from a process that was executed using multiprocessing.Process()?
Consider the following Python script, which executes a simple failFunction() (which immediately throws a runtime error) inside of a child process using multiprocessing.Process():
#!/usr/bin/env python3
import multiprocessing, time

# this function will be executed in a child process asynchronously
def failFunction():
    raise RuntimeError('trust fall, catch me!')

# execute the failFunction() in a child process in the background
process = multiprocessing.Process(
    target = failFunction,
)
process.start()

# <this is where async stuff would happen>
time.sleep(1)

# try (and fail) to catch the exception
try:
    process.join()
except Exception as e:
    print( "This won't catch the exception" )
As you can see from the following execution, wrapping the .join() in a try..except does not actually catch the exception:
user@host:~$ python3 example.py
Process Process-1:
Traceback (most recent call last):
  File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "example.py", line 6, in failFunction
    raise RuntimeError('trust fall, catch me!')
RuntimeError: trust fall, catch me!
user@host:~$
How can I update the above script to actually catch the exception from the function that was executed inside of a child process using multiprocessing.Process()?

This can be achieved by overriding the run() method of the multiprocessing.Process class with a try..except statement and setting up a Pipe() to get and store any raised exceptions from the child process into an instance field named exception:
#!/usr/bin/env python3
import multiprocessing, traceback, time

class Process(multiprocessing.Process):

    def __init__(self, *args, **kwargs):
        multiprocessing.Process.__init__(self, *args, **kwargs)
        self._pconn, self._cconn = multiprocessing.Pipe()
        self._exception = None

    def run(self):
        try:
            multiprocessing.Process.run(self)
            self._cconn.send(None)
        except Exception as e:
            tb = traceback.format_exc()
            self._cconn.send((e, tb))
            #raise e  # You can still raise this exception if you need to

    @property
    def exception(self):
        if self._pconn.poll():
            self._exception = self._pconn.recv()
        return self._exception

# this function will be executed in a child process asynchronously
def failFunction():
    raise RuntimeError('trust fall, catch me!')

# execute the failFunction() in a child process in the background
process = Process(
    target = failFunction,
)
process.start()

# <this is where async stuff would happen>
time.sleep(1)

# catch the child process' exception
try:
    process.join()
    if process.exception:
        # process.exception is an (exception, traceback-string) tuple
        error, tb = process.exception
        raise error
except Exception as e:
    print( "Exception caught!" )
Example execution:
user@host:~$ python3 example.py
Exception caught!
user@host:~$
Solution taken from this answer:
https://stackoverflow.com/a/33599967/1174102

This solution does not require the target function to catch its own exceptions.
It may seem like overkill, but you can use the ProcessPoolExecutor class in the concurrent.futures module to create a process pool of size 1, which is all that is required for your needs. When you submit a "job" to the executor, a Future instance is created representing the state of execution of the process. When you call result() on the Future instance, you block until the process terminates and returns a result (that is, until the target function returns). If the target function throws an exception, you can catch it when you call result():
import concurrent.futures

def failFunction():
    raise RuntimeError('trust fall, catch me!')

def main():
    with concurrent.futures.ProcessPoolExecutor(max_workers=1) as executor:
        future = executor.submit(failFunction)
        try:
            result = future.result()
        except Exception as e:
            print('exception = ', e)
        else:
            print('result = ', result)

if __name__ == '__main__':
    main()
Prints:
exception = trust fall, catch me!
A bonus of using a process pool is that you have a ready-made process already created if you have additional functions you need to invoke in a subprocess.
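For instance, a single pool can serve several submissions, with each Future's exception handled independently; here is a minimal sketch of that idea (might_fail is a hypothetical worker function, not part of the answer above):
import concurrent.futures

def might_fail(x):
    # hypothetical worker: fails for odd inputs
    if x % 2:
        raise ValueError('bad input: %s' % x)
    return x * 10

def main():
    with concurrent.futures.ProcessPoolExecutor(max_workers=1) as executor:
        futures = {executor.submit(might_fail, n): n for n in range(4)}
        for future in concurrent.futures.as_completed(futures):
            n = futures[future]
            try:
                print(n, '->', future.result())
            except Exception as e:
                print(n, 'raised', repr(e))

if __name__ == '__main__':
    main()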

sys.excepthook in multiprocessing.Process ignored?

Suppose we have two files, namely mymanager.py and mysub.py.
mymanager.py
import time
from multiprocessing import Process
import mysub  # the process file

def main():
    xprocess = Process(
        target=mysub.main,
    )
    xprocess.start()
    xprocess.join()
    time.sleep(1)
    print(f"== Done, errorcode is {xprocess.exitcode} ==")

if __name__ == '__main__':
    main()
mysub.py
import sys

def myexception(exc_type, exc_value, exc_traceback):
    print("I want this to be printed!")
    print("Uncaught exception", exc_type, exc_value, exc_traceback)

def main():
    sys.excepthook = myexception  # !!!
    raise ValueError()

if __name__ == "__main__":
    sys.exit()
When executing mymanager.py the resulting output is:
Process Process-1:
Traceback (most recent call last):
  File "c:\program files\python\3.9\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "c:\program files\python\3.9\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\lx\mysub.py", line 11, in main
    raise ValueError()
ValueError
== Done, errorcode is 1 ==
whereas the output I expected would be something like:
I want this to be printed!
Uncaught exception <class 'ValueError'> <traceback object at 0x0000027B6F952780>
which is what I get if I execute main from mysub.py without multiprocessing.Process.
I've checked the underlying CPython (reference) and the problem seems to be that the try/except in the _bootstrap function takes precedence over my child process's sys.excepthook, but from my understanding, shouldn't the excepthook from the child process fire first and then trigger the except from _bootstrap?
I need the child process to handle the exception using the sys.excepthook function.
How can I achieve that?
sys.excepthook is invoked when an exception goes uncaught (bubbling all the way out of the running program). But Process objects run their target function in a special bootstrap function (BaseProcess._bootstrap if it matters to you) that intentionally catches all exceptions, prints information about the failing process plus the traceback, then returns an exit code to the caller (a launcher that varies by start method).
When using the fork start method, the caller of _bootstrap then exits the worker with os._exit(code) (a "hard exit" command which bypasses the normal exception handling system, though since your exception was already caught and handled this hardly matters). When using 'spawn', it uses plain sys.exit over os._exit, but AFAICT the SystemExit exception that sys.exit is implemented in terms of is special cased in the interpreter so it doesn't pass through sys.excepthook when uncaught (presumably because it being implemented via exceptions is considered an implementation detail; when you ask to exit the program it's not the same as dying with an unexpected exception).
Summarizing: No matter the start method, there is no possible way for an exception raised by your code to be "unhandled" (for the purposes of reaching sys.excepthook), because multiprocessing handles all exceptions your function can throw on its own. It's theoretically possible to have an excepthook you set in the worker execute for exceptions raised after your target completes, if the multiprocessing wrapper code itself raises an exception, but only if you do pathological things like replace the definition of os._exit or sys.exit (and it would only report the horrible things that happened because you replaced them; your own exception was already swallowed by that point, so don't do that).
If you really want to do this, the closest you could get would be to explicitly catch exceptions and manually call your handler. A simple wrapper function would allow this for instance:
import sys

def handle_exceptions_with(excepthook, target, /, *args, **kwargs):
    try:
        target(*args, **kwargs)
    except:
        excepthook(*sys.exc_info())
        raise  # Or maybe convert to sys.exit(1) if you don't want multiprocessing to print it again
changing your Process launch to:
xprocess = Process(
    target=handle_exceptions_with,
    args=(mysub.myexception, mysub.main)
)
Or for one-off use, just be lazy and only rewrite mysub.main as:
def main():
    try:
        raise ValueError()
    except:
        myexception(*sys.exc_info())
        raise  # Or maybe convert to sys.exit(1) if you don't want multiprocessing to print it again
and leave everything else untouched. You could still set your handler in sys.excepthook and/or threading.excepthook (to handle cases where a thread launched in the worker process might die with an unhandled exception), but it won't apply to the main thread of the worker process (or more precisely, there's no way for an exception to reach it).
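As a side note on that last point, threading.excepthook set inside the worker does fire for threads the worker itself starts; it just never sees the worker's main thread. A minimal sketch (worker_main is a hypothetical target for multiprocessing.Process; requires Python 3.8+):
import threading

def worker_main():
    # fires for uncaught exceptions in threads started inside this process
    threading.excepthook = lambda args: print("thread hook saw:", args.exc_type.__name__)

    t = threading.Thread(target=lambda: 1 / 0)  # dies with ZeroDivisionError
    t.start()
    t.join()            # the hook prints "thread hook saw: ZeroDivisionError"

    raise ValueError()  # still caught and reported by multiprocessing's bootstrap, never by an excepthook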

Can an exception or error in the worker thread be caught in a function in the main thread in PyQT5? [duplicate]

I'm very new to Python and multithreaded programming in general. Basically, I have a script that will copy files to another location. I would like this to be placed in another thread so I can output .... to indicate that the script is still running.
The problem that I am having is that if the files cannot be copied it will throw an exception. This is OK if running in the main thread; however, having the following code does not work:
try:
    threadClass = TheThread(param1, param2, etc.)
    threadClass.start()   ##### **Exception takes place here**
except:
    print "Caught an exception"
In the thread class itself, I tried to re-throw the exception, but it does not work. I have seen people on here ask similar questions, but they all seem to be doing something more specific than what I am trying to do (and I don't quite understand the solutions offered). I have seen people mention the usage of sys.exc_info(), however I do not know where or how to use it.
Edit: The code for the thread class is below:
import threading
import shutil

class TheThread(threading.Thread):
    def __init__(self, sourceFolder, destFolder):
        threading.Thread.__init__(self)
        self.sourceFolder = sourceFolder
        self.destFolder = destFolder

    def run(self):
        try:
            shutil.copytree(self.sourceFolder, self.destFolder)
        except:
            raise
The problem is that thread_obj.start() returns immediately. The child thread that you spawned executes in its own context, with its own stack. Any exception that occurs there is in the context of the child thread, and it is in its own stack. One way I can think of right now to communicate this information to the parent thread is by using some sort of message passing, so you might look into that.
Try this on for size:
import sys
import threading
import queue

class ExcThread(threading.Thread):
    def __init__(self, bucket):
        threading.Thread.__init__(self)
        self.bucket = bucket

    def run(self):
        try:
            raise Exception('An error occurred here.')
        except Exception:
            self.bucket.put(sys.exc_info())

def main():
    bucket = queue.Queue()
    thread_obj = ExcThread(bucket)
    thread_obj.start()

    while True:
        try:
            exc = bucket.get(block=False)
        except queue.Empty:
            pass
        else:
            exc_type, exc_obj, exc_trace = exc
            # deal with the exception
            print(exc_type, exc_obj)
            print(exc_trace)

        thread_obj.join(0.1)
        if thread_obj.is_alive():
            continue
        else:
            break

if __name__ == '__main__':
    main()
There are a lot of really weirdly complicated answers to this question. Am I oversimplifying this? It seems sufficient for most things to me.
from threading import Thread

class PropagatingThread(Thread):
    def run(self):
        self.exc = None
        try:
            if hasattr(self, '_Thread__target'):
                # Thread uses name mangling prior to Python 3.
                self.ret = self._Thread__target(*self._Thread__args, **self._Thread__kwargs)
            else:
                self.ret = self._target(*self._args, **self._kwargs)
        except BaseException as e:
            self.exc = e

    def join(self, timeout=None):
        super(PropagatingThread, self).join(timeout)
        if self.exc:
            raise self.exc
        return self.ret
If you're certain you'll only ever be running on one or the other version of Python, you could reduce the run() method down to just the mangled version (if you'll only be running on versions of Python before 3), or just the clean version (if you'll only be running on versions of Python starting with 3).
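For instance, on Python 3 only, run() might reduce to something like this sketch (the join() shown above is unchanged):
class PropagatingThread(Thread):
    def run(self):
        # Python 3: _target/_args/_kwargs are not name-mangled
        self.exc = None
        self.ret = None
        try:
            self.ret = self._target(*self._args, **self._kwargs)
        except BaseException as e:
            self.exc = e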
Example usage:
def f(*args, **kwargs):
    print(args)
    print(kwargs)
    raise Exception('I suck at this')

t = PropagatingThread(target=f, args=(5,), kwargs={'hello':'world'})
t.start()
t.join()
And you'll see the exception raised on the other thread when you join.
If you are using six or on Python 3 only, you can improve the stack trace information you get when the exception is re-raised. Instead of only the stack at the point of the join, you can wrap the inner exception in a new outer exception, and get both stack traces with
six.raise_from(RuntimeError('Exception in thread'),self.exc)
or
raise RuntimeError('Exception in thread') from self.exc
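Applied to the PropagatingThread above, join() might then look like this sketch:
def join(self, timeout=None):
    super().join(timeout)
    if self.exc:
        # chains the worker's traceback onto the one raised here at join time
        raise RuntimeError('Exception in thread') from self.exc
    return self.ret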
The concurrent.futures module makes it simple to do work in separate threads (or processes) and handle any resulting exceptions:
import concurrent.futures
import shutil

def copytree_with_dots(src_path, dst_path):
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
        # Execute the copy on a separate thread,
        # creating a future object to track progress.
        future = executor.submit(shutil.copytree, src_path, dst_path)

        while future.running():
            # Print pretty dots here.
            pass

        # Return the value returned by shutil.copytree(), None.
        # Raise any exceptions raised during the copy process.
        return future.result()
concurrent.futures is included with Python 3.2, and is available as the backported futures module for earlier versions.
Although it is not possible to directly catch an exception thrown in a different thread, here's some code that quite transparently obtains something very close to this functionality. Your child thread must subclass the ExThread class instead of threading.Thread, and the parent thread must call the child_thread.join_with_exception() method instead of child_thread.join() when waiting for the thread to finish its job.
Technical details of this implementation: when the child thread throws an exception, it is passed to the parent through a Queue and thrown again in the parent thread. Notice that there's no busy waiting in this approach.
#!/usr/bin/env python
import sys
import threading
import Queue

class ExThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.__status_queue = Queue.Queue()

    def run_with_exception(self):
        """This method should be overriden."""
        raise NotImplementedError

    def run(self):
        """This method should NOT be overriden."""
        try:
            self.run_with_exception()
        except BaseException:
            self.__status_queue.put(sys.exc_info())
        self.__status_queue.put(None)

    def wait_for_exc_info(self):
        return self.__status_queue.get()

    def join_with_exception(self):
        ex_info = self.wait_for_exc_info()
        if ex_info is None:
            return
        else:
            raise ex_info[1]

class MyException(Exception):
    pass

class MyThread(ExThread):
    def __init__(self):
        ExThread.__init__(self)

    def run_with_exception(self):
        thread_name = threading.current_thread().name
        raise MyException("An error in thread '{}'.".format(thread_name))

def main():
    t = MyThread()
    t.start()
    try:
        t.join_with_exception()
    except MyException as ex:
        thread_name = threading.current_thread().name
        print "Caught a MyException in thread '{}': {}".format(thread_name, ex)

if __name__ == '__main__':
    main()
If an exception occurs in a thread, the best way is to re-raise it in the caller thread during join. You can get information about the exception currently being handled using the sys.exc_info() function. This information can simply be stored as a property of the thread object until join is called, at which point it can be re-raised.
Note that a Queue.Queue (as suggested in other answers) is not necessary in this simple case where the thread throws at most 1 exception and completes right after throwing an exception. We avoid race conditions by simply waiting for the thread to complete.
For example, extend ExcThread (below), overriding excRun (instead of run).
Python 2.x:
import threading

class ExcThread(threading.Thread):
    def excRun(self):
        pass

    def run(self):
        self.exc = None
        try:
            # Possibly throws an exception
            self.excRun()
        except:
            import sys
            self.exc = sys.exc_info()
            # Save details of the exception thrown but don't rethrow,
            # just complete the function

    def join(self):
        threading.Thread.join(self)
        if self.exc:
            msg = "Thread '%s' threw an exception: %s" % (self.getName(), self.exc[1])
            new_exc = Exception(msg)
            raise new_exc.__class__, new_exc, self.exc[2]
Python 3.x:
The 3 argument form for raise is gone in Python 3, so change the last line to:
raise new_exc.with_traceback(self.exc[2])
concurrent.futures.as_completed
https://docs.python.org/3.7/library/concurrent.futures.html#concurrent.futures.as_completed
The following solution:
returns to the main thread immediately when an exception is raised
requires no extra user-defined classes, because it does not need:
an explicit Queue
or a try/except wrapped around your worker thread
Source:
#!/usr/bin/env python3
import concurrent.futures
import time

def func_that_raises(do_raise):
    for i in range(3):
        print(i)
        time.sleep(0.1)
    if do_raise:
        raise Exception()
    for i in range(3):
        print(i)
        time.sleep(0.1)

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
    futures = []
    futures.append(executor.submit(func_that_raises, False))
    futures.append(executor.submit(func_that_raises, True))
    for future in concurrent.futures.as_completed(futures):
        print(repr(future.exception()))
Possible output:
0
0
1
1
2
2
0
Exception()
1
2
None
It is unfortunately not possible to kill futures in order to cancel the others when one fails:
concurrent.futures; Python: concurrent.futures How to make it cancelable?
threading: Is there any way to kill a Thread?
C pthreads: Kill Thread in Pthread Library
If you do something like:
for future in concurrent.futures.as_completed(futures):
    if future.exception() is not None:
        raise future.exception()
then the with block handles it, waiting for the second thread to finish before continuing. The following behaves similarly:
for future in concurrent.futures.as_completed(futures):
    future.result()
since future.result() re-raises the exception if one occurred.
If you want to quit the entire Python process, you might get away with os._exit(0), but this likely means you need a refactor.
Custom class with perfect exception semantics
I ended up coding the perfect interface for myself at: The right way to limit maximum number of threads running at once? section "Queue example with error handling". That class aims to be both convenient, and give you total control over submission and result / error handling.
Tested on Python 3.6.7, Ubuntu 18.04.
In Python 3.8, we can use threading.excepthook to hook the uncaught exceptions in all the child threads! For example,
threading.excepthook = thread_exception_handler
Reference: https://stackoverflow.com/a/60002752/5093308
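A minimal sketch of such a hook (the handler name is just the one mentioned above; requires Python 3.8+):
import threading

def thread_exception_handler(args):
    # args carries exc_type, exc_value, exc_traceback and thread
    print(f"Uncaught in {args.thread.name}: {args.exc_value!r}")

threading.excepthook = thread_exception_handler

t = threading.Thread(target=lambda: 1 / 0)
t.start()
t.join()   # prints something like: Uncaught in Thread-1: ZeroDivisionError('division by zero')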
What I'm doing is simply overriding the join and run methods of Thread:
import threading

class RaisingThread(threading.Thread):
    def run(self):
        self._exc = None
        try:
            super().run()
        except Exception as e:
            self._exc = e

    def join(self, timeout=None):
        super().join(timeout=timeout)
        if self._exc:
            raise self._exc
Used as follows:
import time

def foo():
    time.sleep(2)
    print('hi, from foo!')
    raise Exception('exception from foo')

t = RaisingThread(target=foo)
t.start()
try:
    t.join()
except Exception as e:
    print(e)
Result:
hi, from foo!
exception from foo
This was a nasty little problem, and I'd like to throw my solution in. Some other solutions I found (asyncio, for example) looked promising but also presented a bit of a black box. The queue / event loop approach sort of ties you to a certain implementation. The concurrent.futures source code, however, is only around 1000 lines and easy to comprehend. It allowed me to easily solve my problem: create ad-hoc worker threads without much setup, and be able to catch exceptions in the main thread.
My solution uses the concurrent futures API and threading API. It allows you to create a worker which gives you both the thread and the future. That way, you can join the thread to wait for the result:
worker = Worker(test)
thread = worker.start()
thread.join()
print(worker.future.result())
...or you can let the worker just send a callback when done:
worker = Worker(test)
thread = worker.start(lambda x: print('callback', x))
...or you can loop until the event completes:
worker = Worker(test)
thread = worker.start()

while True:
    print("waiting")
    if worker.future.done():
        exc = worker.future.exception()
        print('exception?', exc)
        result = worker.future.result()
        print('result', result)
        break
    time.sleep(0.25)
Here's the code:
from concurrent.futures import Future
import threading
import time

class Worker(object):
    def __init__(self, fn, args=()):
        self.future = Future()
        self._fn = fn
        self._args = args

    def start(self, cb=None):
        self._cb = cb
        self.future.set_running_or_notify_cancel()
        thread = threading.Thread(target=self.run, args=())
        thread.daemon = True  # this will continue thread execution after the main thread runs out of code - you can still ctrl + c or kill the process
        thread.start()
        return thread

    def run(self):
        try:
            self.future.set_result(self._fn(*self._args))
        except BaseException as e:
            self.future.set_exception(e)

        if self._cb:
            self._cb(self.future.result())
...and the test function:
def test(*args):
    print('args are', args)
    time.sleep(2)
    raise Exception('foo')
I know I'm a bit late to the party here, but I was having a very similar problem, except that it involved using tkinter as a GUI, and the mainloop made it impossible to use any of the solutions that depend on .join(). Therefore I adapted the solution given in the EDIT of the original question, but made it more general so it is easier for others to understand.
Here is the new thread class in action:
import threading
import traceback
import logging

class ExceptionThread(threading.Thread):
    def __init__(self, *args, **kwargs):
        threading.Thread.__init__(self, *args, **kwargs)

    def run(self):
        try:
            if self._target:
                self._target(*self._args, **self._kwargs)
        except Exception:
            logging.error(traceback.format_exc())

def test_function_1(input):
    raise IndexError(input)

if __name__ == "__main__":
    input = 'useful'
    t1 = ExceptionThread(target=test_function_1, args=[input])
    t1.start()
Of course you can always have it handle the exception some other way than logging, such as printing it out or outputting it to the console.
This allows you to use the ExceptionThread class exactly like you would the Thread class, without any special modifications.
A similar approach to RickardSjogren's, but without a Queue, sys, or signal listeners: directly execute an exception handler which corresponds to an except block.
#!/usr/bin/env python3
import threading

class ExceptionThread(threading.Thread):

    def __init__(self, callback=None, *args, **kwargs):
        """
        Redirect exceptions of thread to an exception handler.

        :param callback: function to handle occurred exception
        :type callback: function(thread, exception)
        :param args: arguments for threading.Thread()
        :type args: tuple
        :param kwargs: keyword arguments for threading.Thread()
        :type kwargs: dict
        """
        self._callback = callback
        super().__init__(*args, **kwargs)

    def run(self):
        try:
            if self._target:
                self._target(*self._args, **self._kwargs)
        except BaseException as e:
            if self._callback is None:
                raise e
            else:
                self._callback(self, e)
        finally:
            # Avoid a refcycle if the thread is running a function with
            # an argument that has a member that points to the thread.
            del self._target, self._args, self._kwargs, self._callback
Only self._callback and the except block in run() are additions to the normal threading.Thread.
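A brief usage sketch (the on_error handler is hypothetical):
def on_error(thread, exception):
    print(f"{thread.name} failed with {exception!r}")

t = ExceptionThread(callback=on_error, target=lambda: 1 / 0)
t.start()
t.join()   # on_error runs in the child thread before it exits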
I use this version; it's minimal and it works well.
import threading
import traceback

class SafeThread(threading.Thread):
    def __init__(self, *args, **kwargs):
        super(SafeThread, self).__init__(*args, **kwargs)
        self.exception = None

    def run(self) -> None:
        try:
            super(SafeThread, self).run()
        except Exception as ex:
            self.exception = ex
            traceback.print_exc()

    def join(self, *args, **kwargs) -> None:
        super(SafeThread, self).join(*args, **kwargs)
        if self.exception:
            raise self.exception
To use it, simply replace threading.Thread with SafeThread, e.g.:
t = SafeThread(target = some_function, args = (some, args,))
t.start()
# do something else here if you want as the thread runs in the background
t.join()
As a newbie to threading, it took me a long time to understand how to implement Mateusz Kobos's code (above). Here's a clarified version to help understand how to use it.
#!/usr/bin/env python
import sys
import threading
import Queue

class ExThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.__status_queue = Queue.Queue()

    def run_with_exception(self):
        """This method should be overriden."""
        raise NotImplementedError

    def run(self):
        """This method should NOT be overriden."""
        try:
            self.run_with_exception()
        except Exception:
            self.__status_queue.put(sys.exc_info())
        self.__status_queue.put(None)

    def wait_for_exc_info(self):
        return self.__status_queue.get()

    def join_with_exception(self):
        ex_info = self.wait_for_exc_info()
        if ex_info is None:
            return
        else:
            raise ex_info[1]

class MyException(Exception):
    pass

class MyThread(ExThread):
    def __init__(self):
        ExThread.__init__(self)

    # This overrides the "run_with_exception" from class "ExThread"
    # Note, this is where the actual thread to be run lives. The thread
    # to be run could also call a method or be passed in as an object
    def run_with_exception(self):
        # Code will function until the int
        print "sleeping 5 seconds"
        import time
        for i in 1, 2, 3, 4, 5:
            print i
            time.sleep(1)
        # Thread should break here
        int("str")

        # I'm honestly not sure why these appear here? So, I removed them.
        # Perhaps Mateusz can clarify?
        # thread_name = threading.current_thread().name
        # raise MyException("An error in thread '{}'.".format(thread_name))

if __name__ == '__main__':
    # The code lives in MyThread in this example. So creating the MyThread
    # object sets the code to be run (but does not start it yet)
    t = MyThread()
    # This actually starts the thread
    t.start()

    print
    print ("Notice 't.start()' is considered to have completed, although"
           " the countdown continues in its new thread. So your code "
           "can continue into new processing.")

    # Now that the thread is running, the join allows for monitoring of it
    try:
        t.join_with_exception()
    # should be able to replace "Exception" with a specific error (untested)
    except Exception, e:
        print
        print "Exception was caught and control passed back to the main thread"
        print "Do some handling here...or raise a custom exception"
        thread_name = threading.current_thread().name
        e = ("Caught a MyException in thread: '" +
             str(thread_name) +
             "' [" + str(e) + "]")
        raise Exception(e)  # Or custom class of exception, such as MyException
One method I am fond of is based on the observer pattern. I define a signal class which my thread uses to emit exceptions to listeners. It can also be used to return values from threads. Example:
import threading

class Signal:
    def __init__(self):
        self._subscribers = list()

    def emit(self, *args, **kwargs):
        for func in self._subscribers:
            func(*args, **kwargs)

    def connect(self, func):
        self._subscribers.append(func)

    def disconnect(self, func):
        try:
            self._subscribers.remove(func)
        except ValueError:
            raise ValueError('Function {0} not removed from {1}'.format(func, self))

class WorkerThread(threading.Thread):
    def __init__(self, *args, **kwargs):
        super(WorkerThread, self).__init__(*args, **kwargs)
        self.Exception = Signal()
        self.Result = Signal()

    def run(self):
        if self._Thread__target is not None:
            try:
                self._return_value = self._Thread__target(*self._Thread__args, **self._Thread__kwargs)
            except Exception as e:
                self.Exception.emit(e)
            else:
                self.Result.emit(self._return_value)

if __name__ == '__main__':
    import time

    def handle_exception(exc):
        print exc.message

    def handle_result(res):
        print res

    def a():
        time.sleep(1)
        raise IOError('a failed')

    def b():
        time.sleep(2)
        return 'b returns'

    t = WorkerThread(target=a)
    t2 = WorkerThread(target=b)
    t.Exception.connect(handle_exception)
    t2.Result.connect(handle_result)

    t.start()
    t2.start()

    print 'Threads started'

    t.join()
    t2.join()

    print 'Done'
I do not have enough experience of working with threads to claim that this is a completely safe method. But it has worked for me and I like the flexibility.
A simple way of catching a thread's exception and communicating it back to the caller method is to pass a dictionary or a list to the worker method.
Example (passing dictionary to worker method):
import threading

def my_method(throw_me):
    raise Exception(throw_me)

def worker(shared_obj, *args, **kwargs):
    try:
        shared_obj['target'](*args, **kwargs)
    except Exception as err:
        shared_obj['err'] = err

shared_obj = {'err': '', 'target': my_method}
throw_me = "Test"

th = threading.Thread(target=worker, args=(shared_obj, throw_me), kwargs={})
th.start()
th.join()

if shared_obj['err']:
    print(">>%s" % shared_obj['err'])
Wrap Thread with exception storage.
import threading
import sys

class ExcThread(threading.Thread):
    def __init__(self, target, args=None):
        self.args = args if args else []
        self.target = target
        self.exc = None
        threading.Thread.__init__(self)

    def run(self):
        try:
            self.target(*self.args)
            raise Exception('An error occurred here.')
        except Exception:
            self.exc = sys.exc_info()

def main():
    def hello(name):
        print(f"Hello, {name}!")

    thread_obj = ExcThread(target=hello, args=("Jack",))
    thread_obj.start()
    thread_obj.join()

    exc = thread_obj.exc
    if exc:
        exc_type, exc_obj, exc_trace = exc
        print(exc_type, ':', exc_obj, ":", exc_trace)

main()
I like this class:
https://gist.github.com/earonesty/b88d60cb256b71443e42c4f1d949163e
import threading
from typing import Any

class PropagatingThread(threading.Thread):
    """A Threading Class that raises errors it caught, and returns the return value of the target on join."""

    def __init__(self, *args, **kwargs):
        self._target = None
        self._args = ()
        self._kwargs = {}
        super().__init__(*args, **kwargs)
        self.exception = None
        self.return_value = None
        assert self._target

    def run(self):
        """Don't override this if you want the behavior of this class, use target instead."""
        try:
            if self._target:
                self.return_value = self._target(*self._args, **self._kwargs)
        except Exception as e:
            self.exception = e
        finally:
            # see super().run() for why this is necessary
            del self._target, self._args, self._kwargs

    def join(self, timeout=None) -> Any:
        super().join(timeout)
        if self.exception:
            raise self.exception
        return self.return_value
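A usage sketch, assuming a simple failing target (work is hypothetical):
def work():
    raise RuntimeError('boom')

t = PropagatingThread(target=work)
t.start()
try:
    value = t.join()   # re-raises RuntimeError('boom') here
except RuntimeError as e:
    print('caught:', e)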
Using naked excepts is not a good practice because you usually catch more than you bargain for.
I would suggest modifying the except to catch ONLY the exception that you would like to handle. I don't think that raising it has the desired effect, because when you go to instantiate TheThread in the outer try, if it raises an exception, the assignment is never going to happen.
Instead you might want to just alert on it and move on, such as:
def run(self):
    try:
        shutil.copytree(self.sourceFolder, self.destFolder)
    except OSError, err:
        print err
Then when that exception is caught, you can handle it there. Then, when the outer try catches an exception from TheThread, you know it won't be the one you already handled, and that will help you isolate your process flow.
I think the other solutions are somewhat complex if the only thing you want is to actually see the exception somewhere, instead of being oblivious and totally blind.
The solution is to create a custom Thread that takes a logger from the main thread and logs any exceptions.
import logging
import threading

class ThreadWithLoggedException(threading.Thread):
    """
    Similar to Thread but will log exceptions to passed logger.

    Args:
        logger: Logger instance used to log any exception in child thread

    Exception is also reachable via <thread>.exception from the main thread.
    """

    def __init__(self, *args, **kwargs):
        try:
            self.logger = kwargs.pop("logger")
        except KeyError:
            raise Exception("Missing 'logger' in kwargs")
        super().__init__(*args, **kwargs)
        self.exception = None

    def run(self):
        try:
            if self._target is not None:
                self._target(*self._args, **self._kwargs)
        except Exception as exception:
            thread = threading.current_thread()
            self.exception = exception
            self.logger.exception(f"Exception in child thread {thread}: {exception}")
        finally:
            del self._target, self._args, self._kwargs
Example:
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())
def serve():
    raise Exception("Earth exploded.")
th = ThreadWithLoggedException(target=serve, logger=logger)
th.start()
Output in main thread:
Exception in child thread <ThreadWithLoggedException(Thread-1, started 139922384414464)>: Earth exploded.
Traceback (most recent call last):
  File "/core/utils.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/myapp.py", line 105, in serve
    raise Exception("Earth exploded.")
Exception: Earth exploded.
pygolang provides sync.WorkGroup which, in particular, propagates exceptions from spawned worker threads to the main thread. For example:
#!/usr/bin/env python
"""This program demonstrates how with sync.WorkGroup an exception raised in
a spawned thread is propagated into the main thread which spawned the worker."""

from __future__ import print_function
from golang import sync, context

def T1(ctx, *argv):
    print('T1: run ... %r' % (argv,))
    raise RuntimeError('T1: problem')

def T2(ctx):
    print('T2: ran ok')

def main():
    wg = sync.WorkGroup(context.background())
    wg.go(T1, [1,2,3])
    wg.go(T2)

    try:
        wg.wait()
    except Exception as e:
        print('Tmain: caught exception: %r\n' % e)
        # reraising to see full traceback
        raise

if __name__ == '__main__':
    main()
gives the following when run:
T1: run ... ([1, 2, 3],)
T2: ran ok
Tmain: caught exception: RuntimeError('T1: problem',)
Traceback (most recent call last):
  File "./x.py", line 28, in <module>
    main()
  File "./x.py", line 21, in main
    wg.wait()
  File "golang/_sync.pyx", line 198, in golang._sync.PyWorkGroup.wait
    pyerr_reraise(pyerr)
  File "golang/_sync.pyx", line 178, in golang._sync.PyWorkGroup.go.pyrunf
    f(pywg._pyctx, *argv, **kw)
  File "./x.py", line 10, in T1
    raise RuntimeError('T1: problem')
RuntimeError: T1: problem
The original code from the question would be just:
wg = sync.WorkGroup(context.background())

def _(ctx):
    shutil.copytree(sourceFolder, destFolder)
wg.go(_)

# waits for spawned worker to complete and, on error, reraises
# its exception on the main thread.
wg.wait()

Python: multiprocessing.map: If one process raises an exception, why aren't other processes' finally blocks called?

My understanding is that finally clauses must *always* be executed if the try has been entered.
import random
from multiprocessing import Pool
from time import sleep
def Process(x):
try:
print x
sleep(random.random())
raise Exception('Exception: ' + x)
finally:
print 'Finally: ' + x
Pool(3).map(Process, ['1','2','3'])
The expected output is that, for each x printed on its own by line 8, there should be a corresponding 'Finally: x'.
Example output:
$ python bug.py
1
2
3
Finally: 2
Traceback (most recent call last):
File "bug.py", line 14, in <module>
Pool(3).map(Process, ['1','2','3'])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 225, in map
return self.map_async(func, iterable, chunksize).get()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 522, in get
raise self._value
Exception: Exception: 2
It seems that an exception that terminates one process also terminates the parent and sibling processes, even though the other processes still have work to do.
Why am I wrong? Why is this correct? If this is correct, how should one safely clean up resources in multiprocess Python?
Short answer: SIGTERM trumps finally.
Long answer: Turn on logging with mp.log_to_stderr():
import random
import multiprocessing as mp
import time
import logging
logger=mp.log_to_stderr(logging.DEBUG)
def Process(x):
try:
logger.info(x)
time.sleep(random.random())
raise Exception('Exception: ' + x)
finally:
logger.info('Finally: ' + x)
result=mp.Pool(3).map(Process, ['1','2','3'])
The logging output includes:
[DEBUG/MainProcess] terminating workers
Which corresponds to this code in multiprocessing.pool._terminate_pool:
if pool and hasattr(pool[0], 'terminate'):
debug('terminating workers')
for p in pool:
p.terminate()
Each p in pool is a multiprocessing.Process, and calling terminate (at least on non-Windows machines) sends SIGTERM:
from multiprocessing/forking.py:
class Popen(object):
def terminate(self):
...
try:
os.kill(self.pid, signal.SIGTERM)
except OSError, e:
if self.wait(timeout=0.1) is None:
raise
So it comes down to what happens when a Python process in a try suite is sent a SIGTERM.
Consider the following example (test.py):
import time
def worker():
try:
time.sleep(100)
finally:
print('enter finally')
time.sleep(2)
print('exit finally')
worker()
If you run it and then send it a SIGTERM, the process ends immediately without entering the finally suite, as evidenced by the lack of output and of any delay.
In one terminal:
% test.py
In second terminal:
% pkill -TERM -f "test.py"
Result in first terminal:
Terminated
Compare that with what happens when the process is sent a SIGINT (C-c):
In second terminal:
% pkill -INT -f "test.py"
Result in first terminal:
enter finally
exit finally
Traceback (most recent call last):
File "/home/unutbu/pybin/test.py", line 14, in <module>
worker()
File "/home/unutbu/pybin/test.py", line 8, in worker
time.sleep(100)
KeyboardInterrupt
Conclusion: SIGTERM trumps finally.
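If you control the worker code, one workaround (a minimal sketch, not part of the answer above; the handler name is made up) is to install a SIGTERM handler that turns the signal into an exception, so that try/finally unwinding still happens:
import signal
import time

def raise_on_sigterm(signum, frame):
    # turn SIGTERM into an ordinary exception so finally blocks run
    raise SystemExit('terminated')

signal.signal(signal.SIGTERM, raise_on_sigterm)

def worker():
    try:
        time.sleep(100)
    finally:
        print('enter finally')  # now runs on SIGTERM as well

worker()
With a Pool, such a handler would have to be installed inside each worker process, for example via the initializer argument.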
The answer from unutbu definitely explains why you get the behavior you observe. However, it should be emphasized that SIGTERM is sent only because of how multiprocessing.pool._terminate_pool is implemented. If you can avoid using Pool, then you can get the behavior you desire. Here is a borrowed example:
from multiprocessing import Process
from time import sleep
import random
def f(x):
try:
sleep(random.random()*10)
raise Exception
except:
print "Caught exception in process:", x
# Make this last longer than the except clause in main.
sleep(3)
finally:
print "Cleaning up process:", x
if __name__ == '__main__':
processes = []
for i in range(4):
p = Process(target=f, args=(i,))
p.start()
processes.append(p)
try:
for process in processes:
process.join()
except:
print "Caught exception in main."
finally:
print "Cleaning up main."
After sending a SIGINT, example output is:
Caught exception in process: 0
^C
Cleaning up process: 0
Caught exception in main.
Cleaning up main.
Caught exception in process: 1
Caught exception in process: 2
Caught exception in process: 3
Cleaning up process: 1
Cleaning up process: 2
Cleaning up process: 3
Note that the finally clause is run for all processes. If you need shared memory, consider using Queue, Pipe, Manager, or some external store like redis or sqlite3.
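For example, a rough Python 3 sketch of reporting failures back through a multiprocessing.Queue (the names here are illustrative, not from the answer above):
from multiprocessing import Process, Queue

def f(x, reports):
    try:
        raise Exception('Exception: %s' % x)
    except Exception as e:
        reports.put((x, repr(e)))  # report the failure to the parent
    else:
        reports.put((x, None))     # report success
    finally:
        print('Cleaning up process: %s' % x)

if __name__ == '__main__':
    reports = Queue()
    processes = [Process(target=f, args=(i, reports)) for i in range(3)]
    for p in processes:
        p.start()
    outcomes = [reports.get() for _ in processes]  # one report per child
    for p in processes:
        p.join()
    print(outcomes)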
Without a return, the original exception propagates once the finally block completes. The exception is then raised by Pool.map and kills your entire application. The subprocesses are terminated and you see no other exceptions.
You can add a return to swallow the exception:
def Process(x):
try:
print x
sleep(random.random())
raise Exception('Exception: ' + x)
finally:
print 'Finally: ' + x
return
Then you should have None in your map result when an exception occurred.
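If you also want the parent to know which inputs failed, a variation on the same idea (a sketch, not part of this answer; safe_process is a made-up name) is to return the exception object itself instead of None:
import random
from multiprocessing import Pool
from time import sleep

def safe_process(x):
    try:
        sleep(random.random())
        raise Exception('Exception: ' + x)
    except Exception as e:
        return e  # hand the failure back to the parent instead of crashing the pool
    finally:
        print('Finally: ' + x)

if __name__ == '__main__':
    results = Pool(3).map(safe_process, ['1', '2', '3'])
    print([r for r in results if isinstance(r, Exception)])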

Catch a thread's exception in the caller thread?

I'm very new to Python and multithreaded programming in general. Basically, I have a script that will copy files to another location. I would like this to be placed in another thread so I can output .... to indicate that the script is still running.
The problem that I am having is that if the files cannot be copied it will throw an exception. That is fine when running in the main thread; however, the following code does not work:
try:
threadClass = TheThread(param1, param2, etc.)
threadClass.start() ##### **Exception takes place here**
except:
print "Caught an exception"
In the thread class itself, I tried to re-throw the exception, but it does not work. I have seen people on here ask similar questions, but they all seem to be doing something more specific than what I am trying to do (and I don't quite understand the solutions offered). I have seen people mention the usage of sys.exc_info(); however, I do not know where or how to use it.
Edit: The code for the thread class is below:
class TheThread(threading.Thread):
def __init__(self, sourceFolder, destFolder):
threading.Thread.__init__(self)
self.sourceFolder = sourceFolder
self.destFolder = destFolder
def run(self):
try:
shutil.copytree(self.sourceFolder, self.destFolder)
except:
raise
The problem is that thread_obj.start() returns immediately. The child thread that you spawned executes in its own context, with its own stack. Any exception that occurs there is in the context of the child thread, and it is in its own stack. One way I can think of right now to communicate this information to the parent thread is by using some sort of message passing, so you might look into that.
Try this on for size:
import sys
import threading
import queue
class ExcThread(threading.Thread):
def __init__(self, bucket):
threading.Thread.__init__(self)
self.bucket = bucket
def run(self):
try:
raise Exception('An error occured here.')
except Exception:
self.bucket.put(sys.exc_info())
def main():
bucket = queue.Queue()
thread_obj = ExcThread(bucket)
thread_obj.start()
while True:
try:
exc = bucket.get(block=False)
except queue.Empty:
pass
else:
exc_type, exc_obj, exc_trace = exc
# deal with the exception
print(exc_type, exc_obj)
print(exc_trace)
thread_obj.join(0.1)
if thread_obj.is_alive():
continue
else:
break
if __name__ == '__main__':
main()
There are a lot of really weirdly complicated answers to this question. Am I oversimplifying this? It seems sufficient for most things to me.
from threading import Thread
class PropagatingThread(Thread):
def run(self):
self.exc = None
try:
if hasattr(self, '_Thread__target'):
# Thread uses name mangling prior to Python 3.
self.ret = self._Thread__target(*self._Thread__args, **self._Thread__kwargs)
else:
self.ret = self._target(*self._args, **self._kwargs)
except BaseException as e:
self.exc = e
def join(self, timeout=None):
super(PropagatingThread, self).join(timeout)
if self.exc:
raise self.exc
return self.ret
If you're certain you'll only ever be running on one or the other version of Python, you could reduce the run() method down to just the mangled version (if you'll only be running on versions of Python before 3), or just the clean version (if you'll only be running on versions of Python starting with 3).
Example usage:
def f(*args, **kwargs):
print(args)
print(kwargs)
raise Exception('I suck at this')
t = PropagatingThread(target=f, args=(5,), kwargs={'hello':'world'})
t.start()
t.join()
And you'll see the exception raised on the other thread when you join.
If you are using six or are on Python 3 only, you can improve the stack trace information you get when the exception is re-raised. Instead of only the stack at the point of the join, you can wrap the inner exception in a new outer exception, and get both stack traces with
six.raise_from(RuntimeError('Exception in thread'),self.exc)
or
raise RuntimeError('Exception in thread') from self.exc
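Putting the two together, a Python-3-only variant might look like this (a sketch under that assumption, not the exact code above):
from threading import Thread

class PropagatingThread(Thread):
    def run(self):
        self.exc = None
        try:
            self.ret = self._target(*self._args, **self._kwargs)
        except BaseException as e:
            self.exc = e

    def join(self, timeout=None):
        super().join(timeout)
        if self.exc:
            # chain the worker's exception so both tracebacks are shown
            raise RuntimeError('Exception in thread') from self.exc
        return self.ret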
The concurrent.futures module makes it simple to do work in separate threads (or processes) and handle any resulting exceptions:
import concurrent.futures
import shutil
def copytree_with_dots(src_path, dst_path):
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
# Execute the copy on a separate thread,
# creating a future object to track progress.
future = executor.submit(shutil.copytree, src_path, dst_path)
while future.running():
# Print pretty dots here.
pass
# Return the value returned by shutil.copytree(), None.
# Raise any exceptions raised during the copy process.
return future.result()
concurrent.futures is included with Python 3.2, and is available as the backported futures module for earlier versions.
Although it is not possible to directly catch an exception thrown in a different thread, here's some code that quite transparently obtains something very close to this functionality. Your child thread must subclass the ExThread class instead of threading.Thread, and the parent thread must call the child_thread.join_with_exception() method instead of child_thread.join() when waiting for the thread to finish its job.
Technical details of this implementation: when the child thread throws an exception, it is passed to the parent through a Queue and thrown again in the parent thread. Notice that there's no busy waiting in this approach.
#!/usr/bin/env python
import sys
import threading
import Queue
class ExThread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.__status_queue = Queue.Queue()
def run_with_exception(self):
"""This method should be overriden."""
raise NotImplementedError
def run(self):
"""This method should NOT be overriden."""
try:
self.run_with_exception()
except BaseException:
self.__status_queue.put(sys.exc_info())
self.__status_queue.put(None)
def wait_for_exc_info(self):
return self.__status_queue.get()
def join_with_exception(self):
ex_info = self.wait_for_exc_info()
if ex_info is None:
return
else:
raise ex_info[1]
class MyException(Exception):
pass
class MyThread(ExThread):
def __init__(self):
ExThread.__init__(self)
def run_with_exception(self):
thread_name = threading.current_thread().name
raise MyException("An error in thread '{}'.".format(thread_name))
def main():
t = MyThread()
t.start()
try:
t.join_with_exception()
except MyException as ex:
thread_name = threading.current_thread().name
print "Caught a MyException in thread '{}': {}".format(thread_name, ex)
if __name__ == '__main__':
main()
If an exception occurs in a thread, the best way is to re-raise it in the caller thread during join. You can get information about the exception currently being handled using the sys.exc_info() function. This information can simply be stored as a property of the thread object until join is called, at which point it can be re-raised.
Note that a Queue.Queue (as suggested in other answers) is not necessary in this simple case where the thread throws at most 1 exception and completes right after throwing an exception. We avoid race conditions by simply waiting for the thread to complete.
For example, extend ExcThread (below), overriding excRun (instead of run).
Python 2.x:
import threading
class ExcThread(threading.Thread):
def excRun(self):
pass
def run(self):
self.exc = None
try:
# Possibly throws an exception
self.excRun()
except:
import sys
self.exc = sys.exc_info()
# Save details of the exception thrown but don't rethrow,
# just complete the function
def join(self):
threading.Thread.join(self)
if self.exc:
msg = "Thread '%s' threw an exception: %s" % (self.getName(), self.exc[1])
new_exc = Exception(msg)
raise new_exc.__class__, new_exc, self.exc[2]
Python 3.x:
The 3 argument form for raise is gone in Python 3, so change the last line to:
raise new_exc.with_traceback(self.exc[2])
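For reference, the whole Python 3 join() might then read as follows (a sketch based on the code above, using self.name instead of the deprecated getName()):
def join(self, timeout=None):
    threading.Thread.join(self, timeout)
    if self.exc:
        msg = "Thread '%s' threw an exception: %s" % (self.name, self.exc[1])
        new_exc = Exception(msg)
        raise new_exc.with_traceback(self.exc[2])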
concurrent.futures.as_completed
https://docs.python.org/3.7/library/concurrent.futures.html#concurrent.futures.as_completed
The following solution:
returns to the main thread immediately when an exception is raised
requires no extra user-defined classes, because it does not need:
an explicit Queue
a try/except wrapper around your worker function
Source:
#!/usr/bin/env python3
import concurrent.futures
import time
def func_that_raises(do_raise):
for i in range(3):
print(i)
time.sleep(0.1)
if do_raise:
raise Exception()
for i in range(3):
print(i)
time.sleep(0.1)
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
futures = []
futures.append(executor.submit(func_that_raises, False))
futures.append(executor.submit(func_that_raises, True))
for future in concurrent.futures.as_completed(futures):
print(repr(future.exception()))
Possible output:
0
0
1
1
2
2
0
Exception()
1
2
None
Unfortunately, it is not possible to kill futures in order to cancel the others once one fails:
concurrent.futures; Python: concurrent.futures How to make it cancelable?
threading: Is there any way to kill a Thread?
C pthreads: Kill Thread in Pthread Library
If you do something like:
for future in concurrent.futures.as_completed(futures):
if future.exception() is not None:
raise future.exception()
then the with block handles it, and waits for the second thread to finish before continuing. The following behaves similarly:
for future in concurrent.futures.as_completed(futures):
future.result()
since future.result() re-raises the exception if one occurred.
If you want to quit the entire Python process, you might get away with os._exit(0), but this likely means you need a refactor.
Custom class with perfect exception semantics
I ended up coding the perfect interface for myself at: The right way to limit maximum number of threads running at once? section "Queue example with error handling". That class aims to be both convenient and to give you total control over submission and result / error handling.
Tested on Python 3.6.7, Ubuntu 18.04.
In Python 3.8, we can use threading.excepthook to hook uncaught exceptions in all child threads! For example:
threading.excepthook = thread_exception_handler
Reference: https://stackoverflow.com/a/60002752/5093308
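The answer does not show what thread_exception_handler looks like, so here is a minimal sketch of one (the handler body and messages are made up):
import threading

def thread_exception_handler(args):
    # args carries exc_type, exc_value, exc_traceback and thread (Python 3.8+)
    print(f"Uncaught exception in {args.thread.name}: {args.exc_value!r}")

threading.excepthook = thread_exception_handler

def boom():
    raise ValueError('boom')

t = threading.Thread(target=boom)
t.start()
t.join()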
What I'm doing is simply overriding the join and run methods of Thread:
class RaisingThread(threading.Thread):
def run(self):
self._exc = None
try:
super().run()
except Exception as e:
self._exc = e
def join(self, timeout=None):
super().join(timeout=timeout)
if self._exc:
raise self._exc
Used as follows:
def foo():
time.sleep(2)
print('hi, from foo!')
raise Exception('exception from foo')
t = RaisingThread(target=foo)
t.start()
try:
t.join()
except Exception as e:
print(e)
Result:
hi, from foo!
exception from foo
This was a nasty little problem, and I'd like to throw my solution in. Some other solutions I found (asyncio, for example) looked promising but also presented a bit of a black box. The queue / event loop approach sort of ties you to a certain implementation. The concurrent.futures source code, however, is only around 1000 lines, and easy to comprehend. It allowed me to easily solve my problem: create ad-hoc worker threads without much setup, and be able to catch exceptions in the main thread.
My solution uses the concurrent.futures API and the threading API. It allows you to create a worker which gives you both the thread and the future. That way, you can join the thread to wait for the result:
worker = Worker(test)
thread = worker.start()
thread.join()
print(worker.future.result())
...or you can let the worker just send a callback when done:
worker = Worker(test)
thread = worker.start(lambda x: print('callback', x))
...or you can loop until the event completes:
worker = Worker(test)
thread = worker.start()
while True:
print("waiting")
if worker.future.done():
exc = worker.future.exception()
print('exception?', exc)
result = worker.future.result()
print('result', result)
break
time.sleep(0.25)
Here's the code:
from concurrent.futures import Future
import threading
import time
class Worker(object):
def __init__(self, fn, args=()):
self.future = Future()
self._fn = fn
self._args = args
def start(self, cb=None):
self._cb = cb
self.future.set_running_or_notify_cancel()
thread = threading.Thread(target=self.run, args=())
thread.daemon = True # daemon threads let the process exit (or be killed with Ctrl+C) even if this thread is still running
thread.start()
return thread
def run(self):
try:
self.future.set_result(self._fn(*self._args))
except BaseException as e:
self.future.set_exception(e)
if self._cb:
self._cb(self.future.result())
...and the test function:
def test(*args):
print('args are', args)
time.sleep(2)
raise Exception('foo')
I know I'm a bit late to the party here, but I had a very similar problem that involved using tkinter as a GUI, where the mainloop made it impossible to use any of the solutions that depend on .join(). Therefore I adapted the solution given in the EDIT of the original question, but made it more general to make it easier for others to understand.
Here is the new thread class in action:
import threading
import traceback
import logging
class ExceptionThread(threading.Thread):
def __init__(self, *args, **kwargs):
threading.Thread.__init__(self, *args, **kwargs)
def run(self):
try:
if self._target:
self._target(*self._args, **self._kwargs)
except Exception:
logging.error(traceback.format_exc())
def test_function_1(input):
raise IndexError(input)
if __name__ == "__main__":
input = 'useful'
t1 = ExceptionThread(target=test_function_1, args=[input])
t1.start()
Of course you can always have it handle the exception some way other than logging, such as printing it out or sending it to the console.
This allows you to use the ExceptionThread class exactly like you would the Thread class, without any special modifications.
A similar approach to RickardSjogren's, without a Queue, sys, etc., and also without signal listeners: directly execute an exception handler that corresponds to an except block.
#!/usr/bin/env python3
import threading
class ExceptionThread(threading.Thread):
def __init__(self, callback=None, *args, **kwargs):
"""
Redirect exceptions of thread to an exception handler.
:param callback: function to handle the exception that occurred
:type callback: function(thread, exception)
:param args: arguments for threading.Thread()
:type args: tuple
:param kwargs: keyword arguments for threading.Thread()
:type kwargs: dict
"""
self._callback = callback
super().__init__(*args, **kwargs)
def run(self):
try:
if self._target:
self._target(*self._args, **self._kwargs)
except BaseException as e:
if self._callback is None:
raise e
else:
self._callback(self, e)
finally:
# Avoid a refcycle if the thread is running a function with
# an argument that has a member that points to the thread.
del self._target, self._args, self._kwargs, self._callback
Only self._callback and the except block in run() are additions to the normal threading.Thread.
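A short usage sketch for this class (the handler and worker below are made up for illustration):
def handle(thread, exception):
    print('{} failed with: {!r}'.format(thread.name, exception))

def work():
    raise ValueError('something went wrong')

t = ExceptionThread(callback=handle, target=work)
t.start()
t.join()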
I use this version; it's minimal and it works well.
import threading
import traceback
class SafeThread(threading.Thread):
def __init__(self, *args, **kwargs):
super(SafeThread, self).__init__(*args, **kwargs)
self.exception = None
def run(self) -> None:
try:
super(SafeThread, self).run()
except Exception as ex:
self.exception = ex
traceback.print_exc()
def join(self, *args, **kwargs) -> None:
super(SafeThread, self).join(*args, **kwargs)
if self.exception:
raise self.exception
To use it, simply replace threading.Thread with SafeThread, e.g.
t = SafeThread(target = some_function, args = (some, args,))
t.start()
# do something else here if you want as the thread runs in the background
t.join()
As a newbie to threading, it took me a long time to understand how to implement Mateusz Kobos's code (above). Here's a clarified version to help understand how to use it.
#!/usr/bin/env python
import sys
import threading
import Queue
class ExThread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.__status_queue = Queue.Queue()
def run_with_exception(self):
"""This method should be overriden."""
raise NotImplementedError
def run(self):
"""This method should NOT be overriden."""
try:
self.run_with_exception()
except Exception:
self.__status_queue.put(sys.exc_info())
self.__status_queue.put(None)
def wait_for_exc_info(self):
return self.__status_queue.get()
def join_with_exception(self):
ex_info = self.wait_for_exc_info()
if ex_info is None:
return
else:
raise ex_info[1]
class MyException(Exception):
pass
class MyThread(ExThread):
def __init__(self):
ExThread.__init__(self)
# This overrides the "run_with_exception" from class "ExThread"
# Note, this is where the actual thread to be run lives. The thread
# to be run could also call a method or be passed in as an object
def run_with_exception(self):
# Code will run until the int() call below
print "sleeping 5 seconds"
import time
for i in 1, 2, 3, 4, 5:
print i
time.sleep(1)
# Thread should break here
int("str")
# I'm honestly not sure why these appear here? So, I removed them.
# Perhaps Mateusz can clarify?
# thread_name = threading.current_thread().name
# raise MyException("An error in thread '{}'.".format(thread_name))
if __name__ == '__main__':
# The code lives in MyThread in this example. So creating the MyThread
# object set the code to be run (but does not start it yet)
t = MyThread()
# This actually starts the thread
t.start()
print
print ("Notice 't.start()' is considered to have completed, although"
" the countdown continues in its new thread. So you code "
"can tinue into new processing.")
# Now that the thread is running, the join allows for monitoring of it
try:
t.join_with_exception()
# "Exception" should be replaceable with a specific error (untested)
except Exception, e:
print
print "Exceptioon was caught and control passed back to the main thread"
print "Do some handling here...or raise a custom exception "
thread_name = threading.current_thread().name
e = ("Caught a MyException in thread: '" +
str(thread_name) +
"' [" + str(e) + "]")
raise Exception(e) # Or custom class of exception, such as MyException
One method I am fond of is based on the observer pattern. I define a signal class which my thread uses to emit exceptions to listeners. It can also be used to return values from threads. Example:
import threading
class Signal:
def __init__(self):
self._subscribers = list()
def emit(self, *args, **kwargs):
for func in self._subscribers:
func(*args, **kwargs)
def connect(self, func):
self._subscribers.append(func)
def disconnect(self, func):
try:
self._subscribers.remove(func)
except ValueError:
raise ValueError('Function {0} not removed from {1}'.format(func, self))
class WorkerThread(threading.Thread):
def __init__(self, *args, **kwargs):
super(WorkerThread, self).__init__(*args, **kwargs)
self.Exception = Signal()
self.Result = Signal()
def run(self):
if self._Thread__target is not None:
try:
self._return_value = self._Thread__target(*self._Thread__args, **self._Thread__kwargs)
except Exception as e:
self.Exception.emit(e)
else:
self.Result.emit(self._return_value)
if __name__ == '__main__':
import time
def handle_exception(exc):
print exc.message
def handle_result(res):
print res
def a():
time.sleep(1)
raise IOError('a failed')
def b():
time.sleep(2)
return 'b returns'
t = WorkerThread(target=a)
t2 = WorkerThread(target=b)
t.Exception.connect(handle_exception)
t2.Result.connect(handle_result)
t.start()
t2.start()
print 'Threads started'
t.join()
t2.join()
print 'Done'
I do not have enough experience of working with threads to claim that this is a completely safe method. But it has worked for me and I like the flexibility.
A simple way of catching a thread's exception and communicating it back to the caller could be to pass a dictionary or a list to the worker method.
Example (passing dictionary to worker method):
import threading
def my_method(throw_me):
raise Exception(throw_me)
def worker(shared_obj, *args, **kwargs):
try:
shared_obj['target'](*args, **kwargs)
except Exception as err:
shared_obj['err'] = err
shared_obj = {'err':'', 'target': my_method}
throw_me = "Test"
th = threading.Thread(target=worker, args=(shared_obj, throw_me), kwargs={})
th.start()
th.join()
if shared_obj['err']:
print(">>%s" % shared_obj['err'])
Wrap Thread with exception storage.
import threading
import sys
class ExcThread(threading.Thread):
def __init__(self, target, args = None):
self.args = args if args else []
self.target = target
self.exc = None
threading.Thread.__init__(self)
def run(self):
try:
self.target(*self.args)
raise Exception('An error occurred here.')
except Exception:
self.exc=sys.exc_info()
def main():
def hello(name):
print(!"Hello, {name}!")
thread_obj = ExcThread(target=hello, args=("Jack"))
thread_obj.start()
thread_obj.join()
exc = thread_obj.exc
if exc:
exc_type, exc_obj, exc_trace = exc
print(exc_type, ':',exc_obj, ":", exc_trace)
main()
I like this class:
https://gist.github.com/earonesty/b88d60cb256b71443e42c4f1d949163e
import threading
from typing import Any
class PropagatingThread(threading.Thread):
"""A Threading Class that raises errors it caught, and returns the return value of the target on join."""
def __init__(self, *args, **kwargs):
self._target = None
self._args = ()
self._kwargs = {}
super().__init__(*args, **kwargs)
self.exception = None
self.return_value = None
assert self._target
def run(self):
"""Don't override this if you want the behavior of this class, use target instead."""
try:
if self._target:
self.return_value = self._target(*self._args, **self._kwargs)
except Exception as e:
self.exception = e
finally:
# see super().run() for why this is necessary
del self._target, self._args, self._kwargs
def join(self, timeout=None) -> Any:
super().join(timeout)
if self.exception:
raise self.exception
return self.return_value
Using naked excepts is not a good practice because you usually catch more than you bargain for.
I would suggest modifying the except to catch ONLY the exception that you would like to handle. I don't think that raising it has the desired effect, because when you go to instantiate TheThread in the outer try, if it raises an exception, the assignment is never going to happen.
Instead you might want to just alert on it and move on, such as:
def run(self):
try:
shutil.copytree(self.sourceFolder, self.destFolder)
except OSError as err:
print(err)
Then when that exception is caught, you can handle it there. Then when the outer try catches an exception from TheThread, you know it won't be the one you already handled, and will help you isolate your process flow.
I think the other solutions are somewhat complex if the only thing you want is to actually see the exception somewhere, instead of being oblivious and totally blind.
The solution is to create a custom Thread that takes a logger from the main thread and logs any exceptions.
import logging
import threading
class ThreadWithLoggedException(threading.Thread):
"""
Similar to Thread but will log exceptions to passed logger.
Args:
logger: Logger instance used to log any exception in child thread
Exception is also reachable via <thread>.exception from the main thread.
"""
def __init__(self, *args, **kwargs):
try:
self.logger = kwargs.pop("logger")
except KeyError:
raise Exception("Missing 'logger' in kwargs")
super().__init__(*args, **kwargs)
self.exception = None
def run(self):
try:
if self._target is not None:
self._target(*self._args, **self._kwargs)
except Exception as exception:
thread = threading.current_thread()
self.exception = exception
self.logger.exception(f"Exception in child thread {thread}: {exception}")
finally:
del self._target, self._args, self._kwargs
Example:
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())
def serve():
raise Exception("Earth exploded.")
th = ThreadWithLoggedException(target=serve, logger=logger)
th.start()
Output in main thread:
Exception in child thread <ThreadWithLoggedException(Thread-1, started 139922384414464)>: Earth exploded.
Traceback (most recent call last):
File "/core/utils.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/myapp.py", line 105, in serve
raise Exception("Earth exploded.")
Exception: Earth exploded.
pygolang provides sync.WorkGroup which, in particular, propagates exceptions from spawned worker threads to the main thread. For example:
#!/usr/bin/env python
"""This program demostrates how with sync.WorkGroup an exception raised in
spawned thread is propagated into main thread which spawned the worker."""
from __future__ import print_function
from golang import sync, context
def T1(ctx, *argv):
print('T1: run ... %r' % (argv,))
raise RuntimeError('T1: problem')
def T2(ctx):
print('T2: ran ok')
def main():
wg = sync.WorkGroup(context.background())
wg.go(T1, [1,2,3])
wg.go(T2)
try:
wg.wait()
except Exception as e:
print('Tmain: caught exception: %r\n' %e)
# reraising to see full traceback
raise
if __name__ == '__main__':
main()
gives the following when run:
T1: run ... ([1, 2, 3],)
T2: ran ok
Tmain: caught exception: RuntimeError('T1: problem',)
Traceback (most recent call last):
File "./x.py", line 28, in <module>
main()
File "./x.py", line 21, in main
wg.wait()
File "golang/_sync.pyx", line 198, in golang._sync.PyWorkGroup.wait
pyerr_reraise(pyerr)
File "golang/_sync.pyx", line 178, in golang._sync.PyWorkGroup.go.pyrunf
f(pywg._pyctx, *argv, **kw)
File "./x.py", line 10, in T1
raise RuntimeError('T1: problem')
RuntimeError: T1: problem
The original code from the question would be just:
wg = sync.WorkGroup(context.background())
def _(ctx):
shutil.copytree(sourceFolder, destFolder)
wg.go(_)
# waits for spawned worker to complete and, on error, reraises
# its exception on the main thread.
wg.wait()
