I've got a function f() that makes an API call, and I want to call it multiple times asynchronously. I use the asyncio lib like this:
import asyncio

async def main():
    loop = asyncio.get_event_loop()
    futures = [loop.run_in_executor(None, f) for i in range(10)]
    await asyncio.gather(*futures)
    return futures

result = asyncio.get_event_loop().run_until_complete(main())
The problem is that sometimes f() raises an Exception and I'm not sure how to handle it. The docs say that Futures can contain an Exception, but that's not the case here: the error is raised and the program crashes.
How do I achieve that? I think I could write a wrapper for f() and try/except the Exception, but that seems ugly if the feature is provided by the lib.
Thanks in advance for any help.
The problem is that sometimes f() raises an Exception and I'm not sure how to handle it.
That will depend on what you want to do when an exception occurs. Remember that asyncio.gather() is a convenience API that propagates exceptions by default to avoid blindly continuing in case of error. If you want to proceed when an exception occurs, you have other options:
Pass return_exceptions=True to gather - this will cause gather to return the exception objects along with the other results. Convenient and easy to use, but it mixes exceptions with regular results, which is a bit messy (see the sketch after this list).
Use asyncio.wait() instead of asyncio.gather(). It returns sets of futures, each of which you can test whether it completed by raising or by producing a result.
Wrap f() in your own function that catches the exception as you see fit. You considered and rejected this, but in some cases it's exactly the right approach.
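For illustration, here is a minimal sketch of the first two options, reusing the question's f() and run_in_executor setup:

import asyncio

async def main():
    loop = asyncio.get_event_loop()

    # Option 1: return_exceptions=True mixes exception objects in with the results
    futures = [loop.run_in_executor(None, f) for _ in range(10)]
    results = await asyncio.gather(*futures, return_exceptions=True)
    for r in results:
        if isinstance(r, Exception):
            print("call failed:", r)
        else:
            print("call succeeded:", r)

    # Option 2: asyncio.wait returns done/pending sets you can inspect individually
    futures = [loop.run_in_executor(None, f) for _ in range(10)]
    done, pending = await asyncio.wait(futures)
    for fut in done:
        if fut.exception() is not None:
            print("call failed:", fut.exception())
        else:
            print("call succeeded:", fut.result())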
I have a bunch of functions similar to this structure:
def df():
    try:
        foo = ...  # do some computation
    except Exception:
        foo = ...  # do other computation
    return foo
I was wondering what would be the difference with this other implementation:
def df():
    try:
        foo = ...  # do some computation
    except Exception:
        foo = ...  # do other computation
    finally:
        return foo
What should I use in this case? The finally seems a little redundant to me, and I'm also concerned about execution time, because I have many more functions with this same structure and I don't know whether adding finally would increase the execution time too much.
If you are catching a generic exception like that and not re-raising it to the calling method, then both are functionally the same. The finally block is guaranteed to run after the try/except has been processed, so in those examples it makes no real difference. Typically, finally is used to ensure cleanup such as releasing thread state or closing connections after the try/except block. If those examples are truly representative of your code, then I wouldn't use finally.
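For completeness, a small sketch (with illustrative function names) of one corner case where the two versions differ: if the except block itself raises, a return inside finally swallows that second exception, while the plain version lets it propagate.

def df_plain():
    try:
        foo = 1 / 0            # raises ZeroDivisionError
    except Exception:
        foo = int("oops")      # the fallback itself fails with ValueError
    return foo                 # never reached; the ValueError propagates

def df_finally():
    foo = None
    try:
        foo = 1 / 0
    except Exception:
        foo = int("oops")
    finally:
        return foo             # swallows the ValueError and returns None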
"finally" is executed even if the exception is raised. In your specific case it wouldn't be required.
import sys

def worker(a):
    try:
        return 1 / a
    except ZeroDivisionError:
        return None

def master():
    res = worker(0)
    if not res:
        print(sys.exc_info())
        raise sys.exc_info()[0]
As in the code above, I have a bunch of functions like worker. They already have their own try/except blocks to handle exceptions, and then one master function calls each worker. Right now, sys.exc_info() returns None for all 3 elements; how can I re-raise the exceptions in the master function?
I am using Python 2.7
One update:
I have more than 1000 workers, and some workers have very complex logic and may deal with multiple types of exceptions at the same time. So my question is: can I just re-raise those exceptions from master rather than editing the workers?
In your case, the except clause in worker returns None. Once that happens, there's no getting the exception back. If your master function knows what the return value should be for each failure (for example, that a ZeroDivisionError in worker returns None), you can manually re-raise an exception.
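For example, a minimal sketch of that manual re-raise, assuming None can only mean that worker hit a ZeroDivisionError:

def master():
    res = worker(0)
    if res is None:
        # Assumption: worker only returns None after a ZeroDivisionError,
        # so raise a fresh exception of that type ourselves.
        raise ZeroDivisionError("worker(0) failed")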
If you're not able to edit the worker functions themselves, I don't think there's too much you can do. You might be able to use some of the solutions from this answer, if they work in code as well as on the console.
krflol's code above is kind of like how C handled exceptions - there was a global variable that, whenever an exception happened, was assigned a number which could later be cross-referenced to figure out what the exception was. That is also a possible solution.
If you're willing to edit the worker functions, though, then escalating an exception to the code that called the function is actually really simple:
try:
    # some code
except:
    # some response
    raise
If you use a bare raise at the end of an except block, it will re-raise the same exception it just caught. Alternatively, you can name the exception if you need to print it for debugging, and do the same thing, or even raise a different exception.
except Exception as e:
    # some code
    raise e
What you're trying to do won't work. Once you handle an exception (without re-raising it), the exception, and the accompanying state, is cleared, so there's no way to access it. If you want the exception to stay alive, you have to either not handle it, or keep it alive manually.
This isn't that easy to find in the docs (the underlying implementation details about CPython are a bit easier, but ideally we want to know what Python the language defines), but it's there, buried in the except reference:
… This means the exception must be assigned to a different name to be able to refer to it after the except clause. Exceptions are cleared because with the traceback attached to them, they form a reference cycle with the stack frame, keeping all locals in that frame alive until the next garbage collection occurs.
Before an except clause’s suite is executed, details about the exception are stored in the sys module and can be accessed via sys.exc_info(). sys.exc_info() returns a 3-tuple consisting of the exception class, the exception instance and a traceback object (see section The standard type hierarchy) identifying the point in the program where the exception occurred. sys.exc_info() values are restored to their previous values (before the call) when returning from a function that handled an exception.
Also, this is really the point of exception handlers: when a function handles an exception, to the world outside that function, it looks like no exception happened. This is even more important in Python than in many other languages, because Python uses exceptions so promiscuously—every for loop, every hasattr call, etc. is raising and handling an exception, and you don't want to see them.
So, the simplest way to do this is to just change the workers to not handle the exceptions (or to log and then re-raise them, or whatever), and let exception handling work the way it's meant to.
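For instance, a log-and-re-raise version of worker might look like this (just a sketch, using the standard logging module):

import logging

def worker(a):
    try:
        return 1 / a
    except ZeroDivisionError:
        logging.exception("worker(%r) failed", a)
        raise  # re-raise so the caller still sees the original exception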
There are a few cases where you can't do this. For example, if your actual code is running the workers in background threads, the caller won't see the exception. In that case, you need to pass it back manually. For a simple example, let's change the API of your worker functions to return a value and an exception:
def worker(a):
    try:
        return 1 / a, None
    except ZeroDivisionError as e:
        return None, e

def master():
    res, e = worker(0)
    if e:
        print(e)
        raise e
Obviously you can extend this further to return the whole exc_info triple, or whatever else you want; I'm just keeping this as simple as possible for the example.
If you look inside the covers of things like concurrent.futures, this is how they handle passing exceptions from tasks running on a thread or process pool back to the parent (e.g., when you wait on a Future).
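For comparison, a minimal sketch of what that looks like from the caller's side with concurrent.futures: the pool stores the worker's exception on the Future and re-raises it when you ask for the result.

import concurrent.futures

def worker(a):
    return 1 / a

with concurrent.futures.ThreadPoolExecutor() as pool:
    future = pool.submit(worker, 0)
    try:
        future.result()  # re-raises the worker's ZeroDivisionError in the caller
    except ZeroDivisionError as e:
        print("worker failed:", e)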
If you can't modify the workers, you're basically out of luck. Sure, you could write some horrible code to patch the workers at runtime (by using inspect to get their source and then using ast to parse, transform, and re-compile it, or by diving right down into the bytecode), but this is almost never going to be a good idea for any kind of production code.
Not tested, but I suspect you could do something like this. Depending on the scope of the variable you'd have to change it, but I think you'll get the idea:
try:
    something
except Exception as e:
    variable_to_make_exception = e

# ... later on, use the variable
An example of using this way of handling errors:
errors = {}

try:
    print(foo)
except Exception as e:
    errors['foo'] = e

try:
    print(bar)
except Exception as e:
    errors['bar'] = e

print(errors)
raise errors['foo']
Output:
{'foo': NameError("name 'foo' is not defined",), 'bar': NameError("name 'bar' is not defined",)}
Traceback (most recent call last):
File "<input>", line 13, in <module>
File "<input>", line 3, in <module>
NameError: name 'foo' is not defined
I am stuck trying to write a test case that will check whether the code goes inside of an except block.
My method foo() in case of an exception doesn't throw/raise it, it just logs information.
I have tried to use assertRaises but later I realized that this is not working for me because I am not raising an exception.
The Python docs clearly state:
Test that an exception is raised when callable is called with any positional or keyword arguments that are also passed to assertRaises(). The test passes if exception is raised, is an error if another exception is raised, or fails if no exception is raised.
So, if I have following method:
def foo():
    try:
        ...  # something that will cause an exception
    except AttributeError:
        log.msg("Shit happens")
is it possible to write a test case that will test whether execution goes inside of an except block?
You can't do this the way you want to. Python raises and handles exceptions all over the place—e.g., every single for loop exits by raising and handling a StopIteration. So, an assert that there was an exception somewhere, even if it was handled, would pretty much always pass.
What you can do is mock the logger, like this:
_logs = []

class MockLog:
    def msg(self, message):
        _logs.append(message)

mymodule.log = MockLog()   # foo() calls log.msg(...), so the mock needs a msg method
mymodule.foo()
assertEqual(_logs, ['Shit happens'])
Of course in a real-life project, you probably want to use a mocking library instead of hacking it in by hand like this, but that should demonstrate the idea.
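For example, a sketch with unittest.mock, assuming foo lives in a module named mymodule and logs through a module-level log object:

from unittest import mock
import mymodule   # assumption: the module under test is called mymodule

def test_foo_logs_error():
    with mock.patch.object(mymodule, 'log') as mock_log:
        mymodule.foo()
    # foo() calls log.msg(...), so check the mock's msg attribute
    mock_log.msg.assert_called_once_with('Shit happens')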
You can use assertRaises (https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertRaises) like this:
with self.assertRaises(Exception):
    foo()
There is also a method assertLogs if you want to test logging.
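A sketch of that approach, assuming foo logs through the standard logging module under a logger named 'mymodule' (both are assumptions about the code under test):

import unittest
import mymodule   # assumption: the module under test is called mymodule

class TestFoo(unittest.TestCase):
    def test_foo_logs_on_error(self):
        # assertLogs fails the test if nothing is logged at the given level or above
        with self.assertLogs('mymodule', level='INFO') as cm:
            mymodule.foo()
        self.assertIn('Shit happens', cm.output[0])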
In Python, is there a way to not return to the caller function if a certain event happens in the called function? For example:
def acquire_image(sdkobject):
    ret = sdkobject.PrepareAcquisition()
    error_check(ret)
    ret = sdkobject.StartAcquisition()
    error_check(ret)
error_check is a function that checks the return code to see if the SDK call had an error. If it is an error, then I would like to not go back into acquire_image but instead go to another function that reinitialises the SDK and starts from the beginning again. Is there a Pythonic way of doing this?
Have your error_check function raise an exception (like SDKError) if there is a problem, then run all the commands in a while loop.
class SDKError(Exception):
    pass

# Perhaps define a separate exception for each possible
# error code, and make a dict that maps error codes to the
# appropriate exception.
class SDKType1Error(SDKError):
    pass

class SDKType5Error(SDKError):
    pass

sdk_errors = {
    1: SDKType1Error,
    5: SDKType5Error,
}

# Either return, if there was no error, or raise
# the appropriate exception
def error_check(return_code):
    if return_code == 0:
        return  # No error
    else:
        raise sdk_errors[return_code]

# Example of how to deal with specific SDKError subclasses, or a generic
# catch-all SDKError
def acquire_image(sdkobject):
    while True:
        try:
            # initialize sdk here
            error_check(sdkobject.PrepareAcquisition())
            error_check(sdkobject.StartAcquisition())
        except SDKType1Error:
            pass  # Special handling for this error
        except SDKError:
            pass
        else:
            break
Return the error code and use an if condition in the calling function to check whether the returned value indicates an error; if it does, call the reinitialisation code from there.
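A minimal sketch of that approach, keeping the question's method names (capture and reinitialise_sdk are hypothetical stand-ins for the calling and reset code):

def acquire_image(sdkobject):
    ret = sdkobject.PrepareAcquisition()
    if ret != 0:
        return ret                     # hand the error code back to the caller
    return sdkobject.StartAcquisition()

def capture(sdkobject):
    ret = acquire_image(sdkobject)
    if ret != 0:
        reinitialise_sdk(sdkobject)    # hypothetical helper that resets the SDK
        ret = acquire_image(sdkobject)
    return ret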
Use return for the happy scenario
Returning to the calling function is done with a simple return or return response.
Use it for the typical run of your code, when all goes well.
Throw an exception when something goes wrong
If something goes wrong, call raise Exception(). In many situations, your code does not have to do it explicitly; it throws the exception on its own.
You may even define your own Exception instances and use them to pass more information to the caller about what went wrong.
It took me a while to learn this approach, and it has made my coding much simpler and shorter since then.
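For instance, a sketch of a custom exception that carries extra information back to the caller, reusing the SDK example from the question above (AcquisitionError is a hypothetical name):

class AcquisitionError(Exception):
    # Hypothetical exception type: raised when the SDK reports a non-zero return code
    def __init__(self, return_code):
        super().__init__("acquisition failed with code %d" % return_code)
        self.return_code = return_code

def acquire_image(sdkobject):
    ret = sdkobject.PrepareAcquisition()
    if ret != 0:
        # Fail loudly; the caller decides whether to reinitialise and retry
        raise AcquisitionError(ret)
    return sdkobject.StartAcquisition()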
Do not care about what your calling code will do with it
Let your function do the task or fail if there are problems.
Trying to anticipate the caller's responsibilities inside your function will mess up your code and will not be a complete solution anyway.
Ignore who is calling you
In OOP this is the principle of client anonymity: just serve the request and do not care who is calling.
Things to avoid
Do not attempt to use exceptions as a replacement for returning a value
Sometimes people exploit the fact that an exception can pass some information to the caller. But this is rather an antipattern (though there are always exceptions).
Is there a way of knowing (at coding time) which exceptions to expect when executing Python code?
I end up catching the base Exception class 90% of the time, since I don't know which exception type might be thrown (reading the documentation doesn't always help, since an exception can often be propagated from deep inside a call, and the documentation is often outdated or incorrect).
Is there some kind of tool to check this (like by reading the Python code and libs)?
I guess any solution can only be imprecise because of the lack of static typing rules.
I'm not aware of any tool that checks exceptions, but you could come up with your own tool matching your needs (a good chance to play a little with static analysis).
As a first attempt, you could write a function that builds an AST, finds all Raise nodes, and then tries to figure out common patterns of raising exceptions (e.g., calling a constructor directly).
Let x be the following program:
x = '''\
if f(x):
    raise IOError(errno.ENOENT, 'not found')
else:
    e = g(x)
    raise e
'''
Build the AST using the compiler package:
tree = compiler.parse(x)
Then define a Raise visitor class:
class RaiseVisitor(object):
    def __init__(self):
        self.nodes = []
    def visitRaise(self, n):
        self.nodes.append(n)
And walk the AST collecting Raise nodes:
v = RaiseVisitor()
compiler.walk(tree, v)
>>> print v.nodes
[
    Raise(
        CallFunc(
            Name('IOError'),
            [Getattr(Name('errno'), 'ENOENT'), Const('not found')],
            None, None),
        None, None),
    Raise(Name('e'), None, None),
]
You may continue by resolving symbols using compiler symbol tables, analyzing data dependencies, etc. Or you may just deduce, that CallFunc(Name('IOError'), ...) "should definitely mean raising IOError", which is quite OK for quick practical results :)
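For what it's worth, the same idea works with the modern ast module (the compiler package is Python 2 only); a minimal sketch, reusing the same source string x:

import ast

x = '''\
if f(x):
    raise IOError(errno.ENOENT, 'not found')
else:
    e = g(x)
    raise e
'''

tree = ast.parse(x)
# Collect every Raise node anywhere in the parsed source
raises = [node for node in ast.walk(tree) if isinstance(node, ast.Raise)]
for node in raises:
    print(ast.dump(node.exc))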
You should only catch exceptions that you will handle.
Catching all exceptions by their concrete types is nonsense. You should catch the specific exceptions you can and will handle. For other exceptions, you may write a generic catch that catches "base Exception", logs it (using the str() function) and terminates your program (or does something else that's appropriate in a crashy situation).
If you really are going to handle all exceptions and are sure none of them are fatal (for example, if you're running the code in some kind of sandboxed environment), then your approach of catching the generic BaseException fits your aims.
You might also be interested in the language's exception reference, not just the reference for the library you're using.
If the library reference is really poor and it doesn't re-throw its own exceptions when catching system ones, the only useful approach is to run tests (maybe add them to the test suite, because if something is undocumented, it may change!). Delete a file crucial for your code and check what exception is being thrown. Supply too much data and check what error it yields.
You will have to run tests anyway, since even if a method of getting the exceptions from the source code existed, it wouldn't give you any idea how you should handle any of them. Maybe you should be showing the error message "File needful.txt is not found!" when you catch an IndexError? Only a test can tell.
The correct tool to solve this problem is unittests. If you are having exceptions raised by real code that the unittests do not raise, then you need more unittests.
Consider this
def f(duck):
    try:
        duck.quack()
    except ???:  # could be anything
duck can be any object
Obviously you can have an AttributeError if duck has no quack, or a TypeError if duck has a quack but it is not callable. You have no idea what duck.quack() might raise though, maybe even a DuckError or something.
Now supposing you have code like this
arr[i] = get_something_from_database()
If it raises an IndexError you don't know whether it has come from arr[i] or from deep inside the database function. Usually it doesn't matter so much where the exception occurred; what matters is that something went wrong and what you wanted to happen didn't happen.
A handy technique is to catch and maybe re-raise the exception, like this:
except Exception as e:
    # inspect e, decide what to do
    raise
No one has explained so far why you can't have a full, 100% correct list of exceptions, so I thought it was worth commenting on. One of the reasons is first-class functions. Let's say that you have a function like this:
def apl(f, arg):
    return f(arg)
Now apl can raise any exception that f raises. While there are not many functions like that in the core library, anything that uses list comprehensions with custom filters, map, reduce, etc. is affected.
The documentation and the source analysers are the only "serious" sources of information here. Just keep in mind what they cannot do.
I ran into this when using socket. I wanted to find out all the error conditions I could run into (rather than trying to create errors and figure out what socket does, I just wanted a concise list). Ultimately I ended up grepping "/usr/lib64/python2.4/test/test_socket.py" for "raise":
$ grep raise test_socket.py
Any exceptions raised by the clients during their tests
raise TypeError, "test_func must be a callable function"
raise NotImplementedError, "clientSetUp must be implemented."
def raise_error(*args, **kwargs):
raise socket.error
def raise_herror(*args, **kwargs):
raise socket.herror
def raise_gaierror(*args, **kwargs):
raise socket.gaierror
self.failUnlessRaises(socket.error, raise_error,
self.failUnlessRaises(socket.error, raise_herror,
self.failUnlessRaises(socket.error, raise_gaierror,
raise socket.error
# Check that setting it to an invalid value raises ValueError
# Check that setting it to an invalid type raises TypeError
def raise_timeout(*args, **kwargs):
self.failUnlessRaises(socket.timeout, raise_timeout,
def raise_timeout(*args, **kwargs):
self.failUnlessRaises(socket.timeout, raise_timeout,
Which is a pretty concise list of errors. Now of course this only works on a case-by-case basis and depends on the tests being accurate (which they usually are). Otherwise you need to pretty much catch all exceptions, log them, dissect them, and figure out how to handle them (which with unit testing wouldn't be too difficult).
There are two ways that I found informative. The first: run the code in IPython, which will display the exception type.
n = 2
s = 'me '
n + s
TypeError: unsupported operand type(s) for +: 'int' and 'str'
The second way: settle for catching too much and improve on it over time. Include a try statement in your code and catch with except Exception as err. Print sufficient data to know what exception was thrown. As exceptions are thrown, improve your code by adding more precise except clauses. When you feel that you have caught all relevant exceptions, remove the all-inclusive one. Removing it is a good thing to do anyway, because a catch-all swallows programming errors.
try:
    do_something()  # whatever code you want to protect
except Exception as err:
    print "Some message"
    print err.__class__
    print err
    exit(1)
Normally, you only need to catch exceptions around a few lines of code. You wouldn't want to put your whole main function into a try/except clause. For every few lines you should always know (or be able to easily check) what kind of exception might be raised.
The docs have an exhaustive list of built-in exceptions. Don't try to catch exceptions that you're not expecting; they might be handled/expected in the calling code.
Edit: what might be thrown obviously depends on what you're doing! Accessing a random element of a sequence: IndexError; a random element of a dict: KeyError; etc.
Just try to run those few lines in IDLE and cause an exception. But unittest would be a better solution, naturally.
This is a copy-and-pasted answer I wrote for "How to list all exceptions a function could raise in Python 3?"; I hope that is allowed.
I needed to do something similar and found this post. I decided I would write a little library to help.
Say hello to Deep-AST. It's very early alpha but it is pip installable. It has all of the limitations mentioned in this post and some additional ones, but it's already off to a really good start.
For example, when parsing HTTPConnection.getresponse() from http.client it parses 24489 AST nodes. It finds 181 total raised exceptions (this includes duplicates), and 8 unique exceptions were raised. A working code example.
The biggest flaw is that it currently does not work with a bare raise:
def foo():
    try:
        bar()
    except TypeError:
        raise
But I think this will be easy to solve and I plan on fixing it.
The library can handle more than just figuring out exceptions; what about listing all parent classes? It can handle that too!