Exception thrown on pool.close() while debugging, but not while running - python

I don't think I encountered this problem working on this in Python 2.7, but while debugging in 3.7, Python throws an exception when pool.close() is called. This is the relevant part of the function:
pool = multiprocessing.Pool(6)
iterator = pool.imap_unordered(worker_func, worker_input)
while True:
    try:
        t0, t1 = next(iterator)
    except multiprocessing.TimeoutError:
        continue
    except StopIteration:
        break
    else:
        dbinserts1(t0)
        dbinserts2(t1)
pool.close()
pool.join()
The only change made by 2to3 was rewriting iterator.next() as next(iterator). The function only fails while debugging (in PyCharm), otherwise it runs successfully. This is (probably) the most relevant part of the stack trace:
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/util.py", line 322, in _exit_function
    p.join()
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/process.py", line 138, in join
    assert self._parent_pid == os.getpid(), 'can only join a child process'
AssertionError: can only join a child process

Which PyCharm version are you using? This appears to be fixed in 2019.1.2 by https://youtrack.jetbrains.com/issue/PY-34436

Related

How to throw exception messages inside the multiprocessing code?

For example, I am using multiprocessing pool to process files:
with Pool(5) as pool:
    results = pool.starmap(self.process_file, zip(files, repeat(channel)))
When an exception occurs inside the process_file function, the exception message points at the pool.starmap line, not the actual location inside process_file.
I am using PyCharm to develop and debug. Is there a way to change this behavior? The current error message doesn't give the position where the error actually occurred.
Multiprocessing transfers the errors between processes using the pickle module, but pickle doesn't know how to preserve the tracebacks of exceptions by default.
I found tblib to be a very convenient way to address this shortcoming. Based on this example I suggest you try adding this code to the main module of your code:
from tblib import pickling_support

# all your setup code

pickling_support.install()

if __name__ == "__main__":
    ...  # your pool work
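To see what this buys you, here is a minimal single-process sketch (my own illustration, not from the original answer; the work function is hypothetical). It round-trips an exception and its traceback through pickle, which is exactly the step that loses the traceback without pickling_support.install():
import pickle
import sys
import traceback

from tblib import pickling_support

# Make traceback objects picklable; without this call, pickling
# sys.exc_info() raises TypeError because tracebacks can't be pickled.
pickling_support.install()

def work():
    raise ValueError("boom")  # stands in for a failure inside a worker

try:
    work()
except ValueError:
    payload = pickle.dumps(sys.exc_info())

# In a real pool this payload would cross the process boundary; the
# restored traceback still points at the raise inside work().
etype, value, tb = pickle.loads(payload)
print("".join(traceback.format_exception(etype, value, tb)))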
The exception does carry the original exception info, but PyCharm is not ferreting it out.
Assuming there are no PyCharm configuration options to surface the full exception chain, and not just the outer exception as you are seeing, you need to extract it programmatically.
For good in-program error handling, you probably want to do that anyway. Especially with sub-processes, I very often catch Exception, log it, and re-raise it if I don't consider it handled; if I catch a specific exception I'm expecting and consider handled, I don't re-raise.
Note that it's not only PyCharm showing just the outer exception; I see the same thing with other tools.
The code below will show you the original problem (see "line 7" in the output below) and re-raise. Again, whether to re-raise is context dependent, so this is just an example. The point is that the exception you are seeing carries more data than PyCharm shows you by default.
from itertools import repeat
from multiprocessing import Pool
import traceback

def process(a, b):
    print(a, b)
    raise Exception("not good")

if __name__ == '__main__':
    with Pool(5) as pool:
        try:
            results = pool.starmap(process, zip([1, 2, 3, 4, 5], repeat('A')))
        except Exception as ex:
            print("starmap failure:")
            for error_line in traceback.format_exception(type(ex), ex, ex.__traceback__):
                error_line = error_line.strip()
                if not error_line:
                    continue
                print(f" {error_line}")
            raise  # re-raise if we do not consider this handled
Gives me this output:
starmap failure:
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Users\...\multiprocessing\pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "C:\Users\...\multiprocessing\pool.py", line 51, in starmapstar
return list(itertools.starmap(args[0], args[1]))
File "...\starmap_exception.py", line 7, in process
raise Exception("not good")
Exception: not good
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "...\starmap_exception.py", line 12, in <module>
results = pool.starmap(process, zip([1,2,3,4,5], repeat('A')))
File "C:\Users\...\multiprocessing\pool.py", line 372, in starmap
return self._map_async(func, iterable, starmapstar, chunksize).get()
File "C:\Users\...\multiprocessing\pool.py", line 771, in get
raise self._value
Exception: not good

Python ProcessPoolExecutor cannot handle exception

I am executing a Tornado server with ProcessPoolExecutor to handle multiple requests in parallel.
The problem is that, in one particular case, when an exception is raised in one of the processes it doesn't propagate; instead the process crashes with this error:
concurrent.futures.process._RemoteTraceback:
'''
Traceback (most recent call last):
  File "C:\Users\ActionICT\anaconda3\lib\concurrent\futures\process.py", line 367, in _queue_management_worker
    result_item = result_reader.recv()
  File "C:\Users\ActionICT\anaconda3\lib\multiprocessing\connection.py", line 251, in recv
    return _ForkingPickler.loads(buf.getbuffer())
TypeError: __init__() missing 1 required positional argument: 'is_local'
'''

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\S1\Product\Baseline\PYTHON\lab\controller.py", line 558, in get
    output = exec_future.result()
  File "C:\Users\ActionICT\anaconda3\lib\concurrent\futures\_base.py", line 428, in result
    return self.__get_result()
  File "C:\Users\ActionICT\anaconda3\lib\concurrent\futures\_base.py", line 384, in __get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
I tried it in the debugger and found that the problem occurs while executing this:
def _send_bytes(self, buf):
    ov, err = _winapi.WriteFile(self._handle, buf, overlapped=True)
    try:
        if err == _winapi.ERROR_IO_PENDING:
            waitres = _winapi.WaitForMultipleObjects(
                [ov.event], False, INFINITE)
            assert waitres == WAIT_OBJECT_0
    except:
        ov.cancel()
        raise
    finally:
        nwritten, err = ov.GetOverlappedResult(True)
        assert err == 0
        assert nwritten == len(buf)
This is called when the process tries to propagate the exception to the corresponding Future object.
On the first line, when _winapi.WriteFile is called, everything crashes in the debugger, and I can't understand why. Any idea?
I resolved it with a workaround: I wrapped the function that runs in the separate process in a try/except, copied the old exception into a new exception, and raised that. I don't know why... but it works.
def _execute_tuning(tune_parameters: TuneParameters):
    # function to parallelize, todo to be refactored
    # execute scenario, then write result or error in output
    try:
        config.generate_project_config(
            project_name=tune_parameters.project_name,
            scenario_name=tune_parameters.scenario_name
        )
        config.generate_session_log_config(
            project_name=tune_parameters.project_name,
            scenario_name=tune_parameters.scenario_name
        )
        tree = DecisionTreeGenerator(tune_parameters.project_name, tune_parameters.scenario_name)
        tree.fit(
            # todo refactor
            auto_tune=True if tune_parameters == 'true' else False,
            max_depth=tune_parameters.max_depth,
            columns=tune_parameters.columns,
            min_samples_leaf=tune_parameters.min_samples_per_leaf,
            max_leaf_nodes=tune_parameters.max_leaf_nodes
        )
        kpi = KPICalculator(tune_parameters.project_name, tune_parameters.scenario_name)
        kpi.run(do_optimization_kpi=False)
    except Exception as exc:
        Loggers.application.exception(exc)
        exc_final = Exception(str(exc))
        exc_final.__traceback__ = exc.__traceback__
        raise exc_final
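A plausible explanation for why this works (my inference, not confirmed in the thread): the traceback above fails inside _ForkingPickler.loads with TypeError: __init__() missing 1 required positional argument: 'is_local', i.e. the original exception type has a custom constructor that cannot be rebuilt on the parent side, while a plain Exception unpickles fine. A minimal sketch of that failure mode, with a hypothetical CustomError:
import pickle

class CustomError(Exception):
    def __init__(self, message, is_local):  # extra required argument
        super().__init__(message)           # .args becomes ("boom",)
        self.is_local = is_local

err = CustomError("boom", is_local=True)

# Exceptions unpickle by calling type(exc)(*exc.args); .args here is
# just ("boom",), so the call is missing 'is_local' and raises TypeError.
try:
    pickle.loads(pickle.dumps(err))
except TypeError as exc:
    print(exc)  # __init__() missing 1 required positional argument: 'is_local'

# Wrapping in a plain Exception sidesteps the problem entirely.
pickle.loads(pickle.dumps(Exception(str(err))))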

Inner exception is not being raised using asyncio.gather

Using Python 3.7, I am trying to catch an exception and re-raise it by following an example I found on StackOverflow. While the example does work, it doesn't seem to work for all situations. Below I have two asynchronous Python scripts that try to re-raise exceptions. The first example works, it will print both the inner and outer exception.
import asyncio

class Foo:
    async def throw_exception(self):
        raise Exception("This is the inner exception")

    async def do_the_thing(self):
        try:
            await self.throw_exception()
        except Exception as e:
            raise Exception("This is the outer exception") from e

async def run():
    await Foo().do_the_thing()

def main():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(run())

if __name__ == "__main__":
    main()
Running this will correctly output the following exception stack trace:
$ py test.py
Traceback (most recent call last):
  File "test.py", line 9, in do_the_thing
    await self.throw_exception()
  File "test.py", line 5, in throw_exception
    raise Exception("This is the inner exception")
Exception: This is the inner exception

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "test.py", line 21, in <module>
    main()
  File "test.py", line 18, in main
    loop.run_until_complete(run())
  File "C:\Python37\lib\asyncio\base_events.py", line 584, in run_until_complete
    return future.result()
  File "test.py", line 14, in run
    await Foo().do_the_thing()
  File "test.py", line 11, in do_the_thing
    raise Exception("This is the outer exception") from e
Exception: This is the outer exception
However, in my next Python script, I queue up multiple tasks and want a similar exception stack trace from each of them. Essentially, I expect the above stack trace to be printed 3 times (once for each task in the following script). The only difference between the above and below scripts is the run() function.
import asyncio

class Foo:
    async def throw_exception(self):
        raise Exception("This is the inner exception")

    async def do_the_thing(self):
        try:
            await self.throw_exception()
        except Exception as e:
            raise Exception("This is the outer exception") from e

async def run():
    tasks = []
    foo = Foo()
    tasks.append(asyncio.create_task(foo.do_the_thing()))
    tasks.append(asyncio.create_task(foo.do_the_thing()))
    tasks.append(asyncio.create_task(foo.do_the_thing()))
    results = await asyncio.gather(*tasks, return_exceptions=True)
    for result in results:
        if isinstance(result, Exception):
            print(f"Unexpected exception: {result}")

def main():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(run())

if __name__ == "__main__":
    main()
The above code snippet produces disappointingly short exception messages with no stack traces:
$ py test.py
Unexpected exception: This is the outer exception
Unexpected exception: This is the outer exception
Unexpected exception: This is the outer exception
If I change return_exceptions to False, I get the exception and stack trace printed once, and then execution stops and the remaining two tasks are cancelled; the output is identical to that of the first script. The downside is that I want to continue processing tasks even when exceptions are encountered, and then display all the exceptions at the end once all the tasks have completed.
asyncio.gather will stop at the first exception if you do not pass return_exceptions=True, so your approach is the right one: gather all the results and exceptions first, then display them.
To get the full stack trace that you are missing, you will need to do more than just print the exception. Have a look at the traceback module in the stdlib, which has everything you need for that: https://docs.python.org/3/library/traceback.html
You can also use logging.exception, which does more or less the same.
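For example, here is one way to do it (a sketch reusing the question's Foo class, not the only option): format each gathered exception with traceback.format_exception, which follows __cause__ and so prints both the inner and outer exceptions:
import asyncio
import traceback

class Foo:
    async def throw_exception(self):
        raise Exception("This is the inner exception")

    async def do_the_thing(self):
        try:
            await self.throw_exception()
        except Exception as e:
            raise Exception("This is the outer exception") from e

async def run():
    foo = Foo()
    tasks = [asyncio.create_task(foo.do_the_thing()) for _ in range(3)]
    results = await asyncio.gather(*tasks, return_exceptions=True)
    for result in results:
        if isinstance(result, Exception):
            # format_exception follows __cause__/__context__, so the full
            # chain is printed, stack traces included, for every task.
            print("".join(traceback.format_exception(
                type(result), result, result.__traceback__)))

if __name__ == "__main__":
    asyncio.get_event_loop().run_until_complete(run())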

Cannot access Queue.Empty: "AttributeError: 'function' object has no attribute 'Empty'"

For some reason I can't access the Queue.Empty exception - what am I doing wrong here?
from multiprocessing import Process, Queue

# ...

try:
    action = action_queue.get(False)
    print "Action: " + action
except Queue.Empty:
    pass
The stack trace:
Traceback (most recent call last):
  File "C:\Program Files\Python27\lib\multiprocessing\process.py", line 258, in _bootstrap
    self.run()
  File "C:\Program Files\Python27\lib\multiprocessing\process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Development\populate.py", line 39, in permutate
    except Queue.Empty:
AttributeError: 'function' object has no attribute 'Empty'
The Queue.Empty exception lives in the Queue module, not in the multiprocessing.queues.Queue class. The multiprocessing module actually uses the Empty exception from the Queue module:
from multiprocessing import Queue
from Queue import Empty

q = Queue()
try:
    q.get(False)
except Empty:
    print "Queue was empty"
If you want to be very explicit and verbose, you can do this:
import multiprocessing
import Queue

q = multiprocessing.Queue()
try:
    q.get(False)
except Queue.Empty:
    print "Queue was empty"
Favoring the former approach is probably a better idea, because there is only one name called Queue to worry about and you don't have to wonder whether you are working with the class or the module, as in my second example.
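Note for Python 3 readers (an addition to the original Python 2 answer): the Queue module was renamed to lowercase queue, so the first approach becomes:
from multiprocessing import Queue
from queue import Empty  # the Queue module is lowercase queue in Python 3

q = Queue()
try:
    q.get(False)
except Empty:
    print("Queue was empty")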

multiprocessing Pool hangs when there is an exception in any of the threads

I am new to Python and am trying a multiprocessing.Pool program to process files. It works fine as long as there are no exceptions, but if any of the threads/processes hits an exception, the whole program hangs waiting for it.
snippet of the code:
cp = ConfigParser.ConfigParser()
cp.read(gdbini)
for table in cp.sections():
    jobs.append(table)
#print jobs
poolreturn = pool.map(worker, jobs)
pool.close()
pool.join()
Failure Message:
Traceback (most recent call last):
  File "/opt/cnet-python/default-2.6/lib/python2.6/threading.py", line 525, in __bootstrap_inner
    self.run()
  File "/opt/cnet-python/default-2.6/lib/python2.6/threading.py", line 477, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/opt/cnet-python/default-2.6/lib/python2.6/multiprocessing/pool.py", line 259, in _handle_results
    task = get()
TypeError: ('__init__() takes exactly 3 arguments (2 given)', <class 'ConfigParser.NoOptionError'>, ("No option 'inputfilename' in section: 'section-1'",))
I went ahead and added an exception handler to terminate the process:
try:
    ifile = cp.get(table, 'inputfilename')
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
    usage("One of the parameters was not found for " + table)
    terminate()
but it still waits, and I'm not sure what's missing.
In Python 3.2+ this works as expected. For Python 2, the bug was fixed in r74545 and will be available in Python 2.7.3. In the meantime, you can use the configparser library on PyPI, which is a backport of configparser from 3.2+. Check it out.
I had the same issue. It happens when a worker process raises a user-defined exception that has a custom constructor. Make sure your exception (ConfigParser.NoOptionError in this case) initializes the base exception with exactly two arguments:
class NoOptionError(ValueError):
    def __init__(self, message, *args):
        super(NoOptionError, self).__init__(message, args)
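To see why the constructor signature matters, here is a small sketch (my own illustration; BadError and GoodError are hypothetical names). Unpickling an exception re-calls its type with exc.args, and the pool's result-handler thread dies when that call fails, which is why the program hangs:
import pickle

class BadError(ValueError):
    def __init__(self, message, option):
        super(BadError, self).__init__(message)  # .args drops 'option'
        self.option = option

class GoodError(ValueError):
    def __init__(self, message, *args):
        super(GoodError, self).__init__(message, args)  # .args keeps everything

# Unpickling re-calls type(exc)(*exc.args); BadError's .args is missing
# a required constructor argument, so the re-call raises TypeError.
try:
    pickle.loads(pickle.dumps(BadError("no option", "inputfilename")))
except TypeError as e:
    print("BadError:", e)

# GoodError keeps all constructor arguments in .args, so it round-trips.
print("GoodError:", repr(pickle.loads(pickle.dumps(GoodError("no option", "inputfilename")))))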
