When performing async urlfetch calls with a callback from inside a tasklet, it seems that exceptions raised within the callback don't propagate to the wrapping tasklet.
Example code:
from google.appengine.api import urlfetch
from google.appengine.ext import ndb

def cb():
    raise Exception('just a test')

rpc = urlfetch.create_rpc(callback=cb)

@ndb.tasklet
def t():
    try:
        response = yield urlfetch.make_fetch_call(rpc, 'http://...')
    except:
        print 'an error occurred'
    raise ndb.Return(None)

t().get_result()
In the code above, executed by the dev server, the "just a test" exception doesn't get caught inside the tasklet; i.e., instead of the error message being printed to the console, I'm getting the "just a test" exception reported.
If there's a generic urlfetch exception related to the make_fetch_call itself (such as DownloadError in the case of a bad URL), it is handled properly.
Is there a way to catch callback-generated exceptions inside the tasklet in such a situation? Or maybe should this behavior be considered a bug?
Thanks.
I've created a sample project to illustrate the correct way to do this.
While reading the code, you'll find a lot of benefit from reading the comments and also cross-referencing with the docs on tasklets and rpc.make_fetch_call().
Some of the confusing aspects of this were the fact that ndb tasklets actually use exceptions to indicate return values (even on success you should raise ndb.Return(True), making exception handling difficult to tiptoe around), and the fact that exceptions in the callback need to be caught when we call wait() on the future object returned by t(), while exceptions in the URL fetch itself need to be caught inside t() when we do yield rpc.make_fetch_call(). There may be a way to do the latter using rpc.check_success(), but that'll be up to your hacking to figure out.
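To make that split concrete, here is a minimal sketch reusing cb, rpc and t from the question (here get_result() plays the role of wait(); this reflects the behavior described above, not tested code):
fut = t()              # fetch errors (e.g. DownloadError) are caught inside t()
try:
    fut.get_result()   # exceptions raised in cb surface here, on the future
except Exception as e:
    print 'callback raised: %s' % e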
I hope you find the source helpful, and I hope you learned a lesson about avoiding using exceptions to indicate that a generator is done...
Related
How do you best handle multiple levels of methods in a call hierarchy that raise exceptions, so that if it is a fatal error the program will exit (after displaying an error dialog)?
I'm basically coming from Java. There I would simply declare any methods as throws Exception, re-throw it and catch it somewhere at the top level.
However, Python is different. My Python code basically looks like the below.
EDIT: added much simpler code...
Main entry function (plugin.py):
def main(catalog):
    print "Executing main(catalog)... "
    # instantiate generator
    gen = JpaAnnotatedClassGenerator(options)
    # run generator
    try:
        gen.generate_bar()  # doesn't bubble up
    except ValueError as error:
        Utilities.show_error("Error", error.message, "OK", "", "")
        return
    # ... usually do the real work here if no error
JpaAnnotatedClassGenerator class (engine.py):
class JpaAnnotatedClassGenerator:
    def generate_bar(self):
        self.generate_value_error()

    def generate_value_error(self):
        raise ValueError("generate_value_error() raised an error!")
I'd like the exception to be thrown back up through each caller until it reaches the outermost try-except, which displays an error dialog with the exception's message.
QUESTION:
How is this best done in Python? Do I really have to repeat try-except for every method being called?
BTW: I am using Python 2.6.x and I cannot upgrade due to being bound to MySQL Workbench that provides the interpreter (Python 3 is on their upgrade list).
If you don't catch an exception, it bubbles up the call stack until someone does. If no one catches it, the runtime gets it and dies with the exception's error message and a full traceback. IOW, you don't have to explicitly catch and re-raise your exception everywhere - which would actually defeat the whole point of having exceptions. Actually, despite being primarily used for errors / unexpected conditions, exceptions are first and foremost a control-flow tool allowing you to break out of the normal execution flow and pass control (and some information) to any arbitrary place up the call stack.
From this POV your code seems mostly correct (caveat: I didn't bother reading the whole thing, just had a quick look), except (no pun intended) for a couple of points:
First, you should define your own specific exception class(es) instead of using the builtin ValueError (you can inherit from it if it makes sense to you) so you're sure you only catch the exact exceptions you expect (quite a few layers "under" your own code could raise a ValueError that you didn't expect).
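For instance, a minimal sketch of such a dedicated class (the GeneratorError name is hypothetical):
class GeneratorError(ValueError):
    """Raised when JpaAnnotatedClassGenerator fails."""

# in the generator:
raise GeneratorError("generate_value_error() raised an error!")

# in main(), you now catch only your own errors:
try:
    gen.generate_bar()
except GeneratorError as error:
    Utilities.show_error("Error", error.message, "OK", "", "")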
Then, you may (or may not, depending on how your code is used) also want to add a catch-all top-level handler in your main() function so you can properly log (using the logging module) all errors, and possibly free resources, do some cleanup etc. before your process dies.
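A minimal sketch of such a handler (do_the_real_work and cleanup are hypothetical placeholders):
import logging
logger = logging.getLogger(__name__)

def main(catalog):
    try:
        do_the_real_work(catalog)            # hypothetical body of main()
    except Exception:
        logger.exception("unhandled error")  # logs message + full traceback
        cleanup()                            # hypothetical resource cleanup
        raise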
As a side note, you may also want to learn and use proper string formatting, and - at least if performance is an issue - avoid duplicated constant calls like this:
elif AnnotationUtil.is_embeddable_table(table) and AnnotationUtil.is_secondary_table(table):
    # ...
elif AnnotationUtil.is_embeddable_table(table):
    # ...
elif AnnotationUtil.is_secondary_table(table):
    # ...
Given Python's very dynamic nature, neither the compiler nor runtime can safely optimize those repeated calls (the method could have been dynamically redefined between calls), so you have to do it yourself.
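A sketch of doing that hoisting by hand (the handle_* names are placeholders for the original branch bodies):
# call each predicate once and reuse the results
is_embeddable = AnnotationUtil.is_embeddable_table(table)
is_secondary = AnnotationUtil.is_secondary_table(table)
if is_embeddable and is_secondary:
    handle_both(table)
elif is_embeddable:
    handle_embeddable(table)
elif is_secondary:
    handle_secondary(table)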
EDIT:
When trying to catch the error in the main() function, exceptions DON'T bubble up, but when I use this pattern one level deeper, bubbling-up seems to work.
You can easily check that it works correctly with a simple MCVE:
def deeply_nested():
    raise ValueError("foo")

def nested():
    return deeply_nested()

def firstline():
    return nested()

def main():
    try:
        firstline()
    except ValueError as e:
        print("got {}".format(e))
    else:
        print("you will not see me")

if __name__ == "__main__":
    main()
It appears the software that supplies the Python env is somehow treating the main plugin file in a wrong way. Looks like I will have to check with the MySQL Workbench guys.
Uhu... Even embedded, the exception mechanism should still work as expected - at least for the part of the call stack that depends on your main function (can't tell what happens higher up the call stack). But given how MySQL treats errors (what about having your data silently truncated?), I wouldn't be especially surprised if they hacked the runtime to silently pass any error in plugin code xD
It is fine for errors to bubble up
Python's exceptions are unchecked, meaning you have no obligation to declare or handle them. Even if you know that something may raise, only catch the error if you intend to do something with it. It is fine to have exception-transparent layers, which gracefully abort as an exception bubbles through them:
def logged_get(map: dict, key: str):
    result = map[key]  # this may raise, but there is no state to corrupt
    # the following is not meaningful if an exception occurred
    # it is fine for it to be skipped by the exception bubbling up
    print(map, '[%s]' % key, '=>', result)
    return result
In this case, logged_get will simply forward any KeyError (and others) that are raised by the lookup.
If an outer caller knows how to handle the error, it can do so.
So, just call self.create_collection_embeddable_class_stub the way you do.
It is fine for errors to kill the application
Even if nothing handles an error, the interpreter does. You get a stack trace, showing what went wrong and where. Fatal errors of the kind "only happens if there is a bug" can "safely" bubble up to show what went wrong.
In fact, exiting the interpreter and assertions use this mechanism as well.
>>> assert 2 < 1, "This should never happen"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: This should never happen
For many services, you can use this even in deployment - for example, systemd would log that for a Linux system service. Only try to suppress errors for the outside if security is a concern, or if users cannot handle the error.
It is fine to use precise errors
Since exceptions are unchecked, you can use arbitrarily many without overstraining your API. This allows you to use custom errors that signal different levels of problems:
class DBProblem(Exception):
    """Something is wrong about our DB..."""

class DBEntryInconsistent(DBProblem):
    """A single entry is broken"""

class DBInconsistent(DBProblem):
    """The entire DB is foobar!"""
It is generally a good idea not to re-use builtin errors, unless your use case actually matches their meaning. This allows you to handle errors precisely when needed:
try:
    gen.generate_classes(catalog)
except DBEntryInconsistent:
    logger.error("aborting due to corrupted entry")
    sys.exit(1)
except DBInconsistent as err:
    logger.error("aborting due to corrupted DB")
    Utility.inform_db_support(err)
    sys.exit(1)
# do not handle ValueError, KeyError, MemoryError, ...
# they will show up as a stack trace
I am scratching my head about the best practice for getting the traceback into the logfile only once. Please note that, in general, I know how to get the traceback into the log.
Let's assume I have a big program consisting of various modules and functions that are imported, so that it can have quite some depth and the logger is set up properly.
Whenever an exception may occur I do the following:
try:
    do_something()
except MyError as err:
    log.error("The error MyError occurred", exc_info=err)
    raise
Note that the traceback is written to the log via the option exc_info=err.
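As an aside, inside an except block the standard logging module offers a shortcut that does the same thing:
try:
    do_something()
except MyError:
    # log.exception() logs at ERROR level and appends the current traceback
    log.exception("The error MyError occurred")
    raise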
My problem is now that when everything gets a bit more complex and nested, I lose control over how often this traceback is written to the log, and it gets quite messy.
An example of the situation with my current solution for this problem is as follows:
from other_module import other_f

def main():
    try:
        # do something
        val = other_f()
    except (AlreadyLoggedError1, AlreadyLoggedError2, AlreadyLoggedError3):
        # The error was caught within other_f() or deeper and was
        # already logged with traceback info where it occurred.
        # After logging, it was re-raised like in the above example.
        # I do not want to log it again, so it is just re-raised.
        raise
    except BroaderException as err:
        # I cannot expect to have thought of all exceptions,
        # so in case something unexpected happened
        # I want the traceback logged here,
        # since the error has not been logged yet.
        log.error("An unexpected error occurred", exc_info=err)
        raise
The problem with this solution is that I need to keep track of all exceptions that are already logged myself, and the line except (AlreadyLoggedError1, AlreadyLoggedError2, ...) gets arbitrarily long and has to be repeated at every level between main() and the position where the error actually occurred.
So my question is: is there some better (more Pythonic) way of handling this? To be more specific: I want to attach the information that the exception was already logged to the exception itself, so that I do not have to account for it via an extra except block like in my example above.
The solution normally used for larger applications is for the low-level code to not actually do error handling itself if it's just going to be logged, but to put exception logging/handling at the highest level in the code possible, since exceptions will bubble up as far as needed. For example, libraries that send errors to a service like New Relic and Sentry don't need you to instrument each small part of your code that might throw an error, they are set up to just catch any exception and send it to a remote service for aggregation and tracking.
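A minimal sketch of that pattern (run_application is a hypothetical stand-in for your program's entry point):
import logging
log = logging.getLogger(__name__)

def main():
    try:
        run_application()  # lower levels raise freely, without logging
    except Exception:
        log.exception("Unhandled error")  # one place, one traceback in the log
        raise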
I'm analysing an AWS response in python in the following script:
# var definition
conversationName = 'NO NAME'

# in MyClass
if len(resp['FaceMatches']) > 0:
    faceRecognized = resp['FaceMatches'][0]['Face']['ExternalImageId']
    self.logger.info(str(faceRecognized))
    if resp['FaceMatches'][0]['Face']['ExternalImageId'] == self.conversationName:
        self.logger.info("Name is the same")
        return
    else:
        self.logger.info('Name has changed!')
        self.conversationName = faceRecognized.split('_')[0]
        self.pepperTTS.say("Hi " + str(faceRecognized.split('_')[0]) + ". Can I help you with something?")
        return
else:
    self.logger.info("No face rekognized so far.")
    return
The problem is with the second if/else. When I run the program it seems to ignore this if/else completely and neither prints "Name is the same" nor "Name has changed". And it does not show any errors when running the script.
Does anyone see the error or can give some tips to correct the script?
What's most likely happening is that resp['FaceMatches'][0]['Face']['ExternalImageId'] is raising an exception because one of those keys / indexes is wrong, and then the exception is not getting caught and gets swallowed silently - it's unfortunate, but in NAOqi a lot of exceptions get swallowed if no-one catches them (for example, in the callback in an ALMemory subscribe - as you probably have here).
So you should wrap that whole chunk in a big try/except and print whatever exception gets caught.
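For example, a sketch reusing the names from the snippet above:
import traceback

try:
    if len(resp['FaceMatches']) > 0:
        faceRecognized = resp['FaceMatches'][0]['Face']['ExternalImageId']
        # ... rest of the logic above ...
except Exception:
    # surfaces the KeyError / IndexError that NAOqi would otherwise swallow
    self.logger.error(traceback.format_exc())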
This is a common enough situation that I created a helper library (documented here) with a log_exceptions decorator you can put on any function that swallows exceptions (typically: ALMemory event and signal callbacks; anything called with qi.async, anything called from outside your service ...), so your code doesn't get cluttered with try/except all over the place.
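The idea behind such a decorator is roughly this (a sketch, not the library's actual implementation):
import functools
import traceback

def log_exceptions(logger):
    """Log (instead of losing) exceptions raised in swallowed callbacks."""
    def decorator(func):
        @functools.wraps(func)
        def wrapped(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                logger.error(traceback.format_exc())
                raise
        return wrapped
    return decorator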
When using multiprocessing.Pool's apply_async(), what happens when the code breaks? This includes, I think, just exceptions, but there may be other things that make the worker functions fail.
import multiprocessing as mp

pool = mp.Pool(mp.cpu_count())
for f in files:
    pool.apply_async(workerfunct, args=(f,), callback=callbackfunct)
As I understand it right now, the process/worker fails (all other processes continue) and anything past a thrown error is not executed, EVEN if I catch the error with try/except.
As an example, usually I'd catch exceptions, put in a default value and/or print an error message, and the code would continue. If my callback function involves writing to file, that's done with default values.
This answerer wrote a little about it:
I suspect the reason you're not seeing anything happen with your example code is because all of your worker function calls are failing. If a worker function fails, callback will never be executed. The failure won't be reported at all unless you try to fetch the result from the AsyncResult object returned by the call to apply_async. However, since you're not saving any of those objects, you'll never know the failures occurred. If I were you, I'd try using pool.apply while you're testing so that you see errors as soon as they occur.
If you're using Python 3.2+, you can use the error_callback keyword argument to handle exceptions raised in workers.
pool.apply_async(workerfunct, args=(f,), callback=callbackfunct, error_callback=handle_error)
handle_error will be called with the exception object as an argument.
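For example, a minimal sketch of such an error callback (handle_error is just the name used above):
def handle_error(exc):
    # runs in the parent process with the exception raised by the worker
    print("worker failed: %r" % exc)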
If you're not, you have to wrap all your worker functions in a try/except to ensure your callback is executed. (I think you got the impression that this wouldn't work from my answer in that other question, but that's not the case. Sorry!):
def workerfunct(*args):
    try:
        pass  # Stuff
    except Exception as e:
        # Do something here, maybe return e?
        return e
pool.apply_async(workerfunct, args=(f,), callback=callbackfunct)
You could also use a wrapper function if you can't/don't want to change the function you actually want to call:
def wrapper(func, *args):
    try:
        return func(*args)
    except Exception as e:
        return e
pool.apply_async(wrapper, args=(workerfunct, f), callback=callbackfunct)
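With either of these wrap-and-return approaches, the callback receives the exception object as its result, so it should check for that (a sketch):
def callbackfunct(result):
    if isinstance(result, Exception):
        # the worker failed; don't write default values to file
        print("worker error: %r" % result)
        return
    # ... normal handling of a successful result ...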
I'm new to Flask. When writing a view, I wonder whether all errors should be caught. If I do so, most of the view code has to be wrapped with try... except, which I don't think is graceful.
For example:
from flask import Flask, abort

app = Flask(__name__)

@app.route('/')
def index():
    try:
        API.do()
    except:
        abort(503)
Should I code like this? If not, will the service crash (uwsgi + LNMP)?
You only catch what you can handle. The word "handle" means "do something useful with" not merely "print a message and die". The print-and-die is already handled by the exception mechanism and probably does it better than you will.
For example, this is not handling an exception usefully:
denominator = 0
try:
    y = x / denominator
except ZeroDivisionError:
    abort(503)
There is nothing useful you can do, and the abort is redundant as that's what uncaught exceptions will cause to happen anyway. Here is an example of a useful handling:
try:
    config_file = open('private_config')
except IOError:
    config_file = open('default_config_that_should_always_be_there')
but note that if the second open fails, there is nothing useful to do, so the exception will travel up the call stack and possibly halt the program. What you should never do is have a bare except: because it hides information about what failed where. This will result in much head scratching when you get a defect report of "all it said was 503" and you have no idea what went wrong in API.do().
Try / except blocks that can't do any useful handling clutter up the code and visually bury the main flow of execution. Languages without exceptions force you to check every call for an error return if only to generate an error return yourself. Exceptions exist in part to get rid of that code noise.
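If you do want a uniform response for unexpected failures, Flask lets you register one handler at the application level instead of wrapping every view (a sketch, assuming the app object from the question):
@app.errorhandler(Exception)
def handle_unexpected(err):
    # one place to log the traceback; views stay free of try/except noise
    app.logger.exception("unhandled error in view")
    return "Service unavailable", 503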