I have a program with some low-level hardware components, which may fail (not initialized, timeout, comm issues, invalid commands, etc.). They live in a server that receives requests from a web client.
So my idea is to have custom exceptions to capture what may fail in which drive - so that I can in some cases take remediation actions (e.g. try to reset the adapter if it's a comm problem etc.), or bubble up the errors in the cases where I can't do anything low-level, perhaps so that the server can return a generic error message to the webclient.
For instance:
class DriveException(Exception):
    """ Raised when we have a drive-specific problem """
    def __init__(self, message, drive=None, *args):
        self.message = message
        self.drive = drive
        super().__init__(message, drive, *args)
But then that drive may have had a problem because, say, the Ethernet connection didn't respond:
class EthernetCommException(Exception):
    """ Raised when Ethernet calls failed """
In the code, I can ensure my exceptions bubble up this way:
# ... some code ....
try:
    self.init_controllers()  # ethernet cx failed, or key error etc.
except Exception as ex:
    raise DriveException(ex) from ex
# .... more code....
I have a high-level try/except in the server to ensure it keeps responding to requests & doesn't crash in case of a low-level component not responding. That mechanic works fine.
However, I have many different drives. I'd rather avoid putting lots of try/except everywhere in my code. My current idea is to do something like:
def koll_exception(func):
    """ Raises a drive exception if needed """
    @functools.wraps(func)
    def wrapper_exception(*args, **kwargs):
        try:
            value = func(*args, **kwargs)
            return value
        except Exception as ex:
            raise DriveException(ex, drive=DriveEnum.KOLLMORGAN) from ex
    return wrapper_exception
So that I can just do:
@koll_exception
def risky_call_to_kolldrive():
    # doing stuff & raising a drive exception if anything goes wrong
    ...

# then anywhere in the code
foo = risky_call_to_kolldrive()
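Putting the pieces above together, the pattern can be exercised end to end. This is just a self-contained sketch: DriveEnum and the simulated TimeoutError are hypothetical stand-ins for the real drive enum and the real low-level failure.

```python
import enum
import functools

class DriveException(Exception):
    """Raised when we have a drive-specific problem."""
    def __init__(self, message, drive=None, *args):
        self.message = message
        self.drive = drive
        super().__init__(message, drive, *args)

class DriveEnum(enum.Enum):
    # hypothetical stand-in for the real drive enumeration
    KOLLMORGAN = "kollmorgan"

def koll_exception(func):
    """Re-raise any error as a DriveException tagged with the drive."""
    @functools.wraps(func)
    def wrapper_exception(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as ex:
            raise DriveException(ex, drive=DriveEnum.KOLLMORGAN) from ex
    return wrapper_exception

@koll_exception
def risky_call_to_kolldrive():
    raise TimeoutError("drive did not respond")  # simulated low-level failure
```

Because of `raise ... from ex`, the original low-level error stays attached as `__cause__`, so the high-level handler can still inspect (and log) what actually went wrong.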
My prototype seems to work fine with the decorator. However, I've searched a bit about using this approach to try/except and was somewhat surprised not to find much about it. Is there a good reason people don't do this that I'm not seeing? Other than that they usually just wrap everything in a high-level try/catch and don't bother much more with it?
I'm writing some unit tests for a RabbitMQ service I've developed and I'm trying to catch an AMQP exception when the exchange doesn't exist.
The method prototype for the RabbitMQ class I've written is publish_message_str(self, exchange_name: str, message: dict) -> None.
I am trying to unit test several things, and the reason for this question is one of those tests giving an exception I am unable to "expect".
# service: RabbitMQ is defined in a function-scoped fixture
def test_publish_unexistent_exchange(service: RabbitMQ) -> None:
    service.publish_message_str("exchangenotexists", {})
When the code is run, it raises the expected exception:
E amqp.exceptions.NotFound: Basic.publish: (404) NOT_FOUND - no exchange 'exchangenotexists' in vhost '/'
In a regular situation, you could do something like this:
def test_publish_unexistent_exchange(service: RabbitMQ) -> None:
    with raises(NotFound):
        service.publish_message_str("exchangenotexists", {})
This however also gives the exception and pytest seems to ignore the raises call.
I've also tried to catch a general exception with a try-except.
def test_publish_unexistent_exchange(service: RabbitMQ) -> None:
    try:
        service.publish_message_str("exchangenotexists", {})
    except Exception:
        pass
This doesn't work either. The next thing I've tried is reading the AMQP exceptions code, but unfortunately at this point I'm clueless.
If you have faced this situation and/or know how to solve it, I'd deeply appreciate it.
Thank you so much.
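For what it's worth, both patterns above do catch exceptions that are raised synchronously in the calling thread, as this stand-alone sketch shows (NotFound here is a local stand-in, not the real amqp class). That neither pattern works in the actual test suggests the real error may be surfacing outside the test's own call stack, e.g. during fixture teardown or on the connection's heartbeat thread.

```python
class NotFound(Exception):
    """Local stand-in for amqp.exceptions.NotFound, for illustration only."""

def publish_to_missing_exchange():
    # simulates the failing publish raising synchronously in this thread
    raise NotFound("no exchange 'exchangenotexists' in vhost '/'")

caught = None
try:
    publish_to_missing_exchange()
except NotFound as exc:
    caught = exc  # a synchronous raise is always catchable here
```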
How do you best handle multiple levels of methods in a call hierarchy that raise exceptions, so that if it is a fatal error the program will exit (after displaying an error dialog)?
I'm basically coming from Java. There I would simply declare any methods as throws Exception, re-throw it and catch it somewhere at the top level.
However, Python is different. My Python code basically looks like the below.
EDIT: added much simpler code...
Main entry function (plugin.py):
def main(catalog):
    print "Executing main(catalog)... "
    # instantiate generator
    gen = JpaAnnotatedClassGenerator(options)
    # run generator
    try:
        gen.generate_bar()  # doesn't bubble up
    except ValueError as error:
        Utilities.show_error("Error", error.message, "OK", "", "")
        return
    # ... usually do the real work here if no error
JpaAnnotatedClassGenerator class (engine.py):
class JpaAnnotatedClassGenerator:
    def generate_bar(self):
        self.generate_value_error()

    def generate_value_error(self):
        raise ValueError("generate_value_error() raised an error!")
I'd like to raise an exception that is re-thrown up through each caller until it reaches the outermost try-except, which displays an error dialog with the exception's message.
QUESTION:
How is this best done in Python? Do I really have to repeat try-except for every method being called?
BTW: I am using Python 2.6.x and I cannot upgrade due to being bound to MySQL Workbench that provides the interpreter (Python 3 is on their upgrade list).
If you don't catch an exception, it bubbles up the call stack until someone does. If no one catches it, the runtime gets it and dies with the exception's error message and a full traceback. IOW, you don't have to explicitly catch and re-raise your exception everywhere - which would actually defeat the whole point of having exceptions. Actually, despite being primarily used for errors / unexpected conditions, exceptions are first and foremost a control flow tool, allowing you to break out of the normal execution flow and pass control (and some information) to any arbitrary place up in the call stack.
From this POV your code seems mostly correct (caveat: I didn't bother reading the whole thing, just had a quick look), except (no pun intended) for a couple of points:
First, you should define your own specific exception class(es) instead of using the builtin ValueError (you can inherit from it if it makes sense to you) so you're sure you only catch the exact exceptions you expect (quite a few layers "under" your own code could raise a ValueError that you didn't expect).
Then, you may (or may not, depending on how your code is used) also want to add a catch-all top-level handler in your main() function, so you can properly log all errors (using the logging module) and possibly free resources, do some cleanup, etc. before your process dies.
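Such a top-level handler might look like the sketch below; do_the_real_work is a hypothetical stand-in for whatever main() actually runs, and logging.exception records both the message and the full traceback.

```python
import logging
import sys

logger = logging.getLogger(__name__)

def do_the_real_work(catalog):
    # hypothetical stand-in for the generator run
    raise RuntimeError("something low-level blew up")

def main(catalog):
    try:
        do_the_real_work(catalog)
    except Exception:
        # logs the message *and* the full traceback
        logger.exception("unhandled error in main()")
        # free resources / do cleanup here before dying
        sys.exit(1)
```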
As a side note, you may also want to learn and use proper string formatting, and - at least if performance is an issue - avoid duplicated constant calls like this:
elif AnnotationUtil.is_embeddable_table(table) and AnnotationUtil.is_secondary_table(table):
    # ...
elif AnnotationUtil.is_embeddable_table(table):
    # ...
elif AnnotationUtil.is_secondary_table(table):
    # ...
Given Python's very dynamic nature, neither the compiler nor the runtime can safely optimize those repeated calls (the method could have been dynamically redefined between calls), so you have to do it yourself.
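Concretely, the repeated lookups can be hoisted into local variables evaluated once. The AnnotationUtil stub below is only there to make the sketch runnable; the shape of the rewrite is what matters.

```python
class AnnotationUtil:
    # minimal stub standing in for the OP's real class
    @staticmethod
    def is_embeddable_table(table):
        return table.get("embeddable", False)

    @staticmethod
    def is_secondary_table(table):
        return table.get("secondary", False)

def classify(table):
    # evaluate each predicate once, instead of once per elif branch
    is_embeddable = AnnotationUtil.is_embeddable_table(table)
    is_secondary = AnnotationUtil.is_secondary_table(table)
    if is_embeddable and is_secondary:
        return "embeddable+secondary"
    elif is_embeddable:
        return "embeddable"
    elif is_secondary:
        return "secondary"
    return "plain"
```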
EDIT:
When trying to catch the error in the main() function, exceptions DON'T bubble up, but when I use this pattern one level deeper, bubbling-up seems to work.
You can easily check that it works correctly with a simple MCVE:
def deeply_nested():
    raise ValueError("foo")

def nested():
    return deeply_nested()

def firstline():
    return nested()

def main():
    try:
        firstline()
    except ValueError as e:
        print("got {}".format(e))
    else:
        print("you will not see me")

if __name__ == "__main__":
    main()
It appears the software that supplies the Python env is somehow treating the main plugin file in a wrong way. Looks like I will have to check with the MySQL Workbench guys.
Uhu... Even embedded, the exception mechanism should still work as expected - at least for the part of the call stack below your main function (can't tell what happens further up). But given how MySQL treats errors (what about having your data silently truncated?), I wouldn't be especially surprised if they hacked the runtime to silently swallow any error in plugin code xD
It is fine for errors to bubble up
Python's exceptions are unchecked, meaning you have no obligation to declare or handle them. Even if you know that something may raise, only catch the error if you intend to do something with it. It is fine to have exception-transparent layers, which gracefully abort as an exception bubbles through them:
def logged_get(map: dict, key: str):
    result = map[key]  # this may raise, but there is no state to corrupt
    # the following is not meaningful if an exception occurred
    # it is fine for it to be skipped by the exception bubbling up
    print(map, '[%s]' % key, '=>', result)
    return result
In this case, logged_get will simply forward any KeyError (and others) that are raised by the lookup.
If an outer caller knows how to handle the error, it can do so.
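For example, an outer caller might recover with a default value by catching exactly the error it expects (logged_get is repeated here so the sketch runs on its own; get_port and the "port" setting are made up for illustration):

```python
def logged_get(map: dict, key: str):
    result = map[key]  # this may raise KeyError
    print(map, '[%s]' % key, '=>', result)
    return result

def get_port(config: dict) -> int:
    try:
        return logged_get(config, "port")
    except KeyError:
        return 8080  # fall back to a default when the setting is absent
```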
So, just call self.create_collection_embeddable_class_stub the way you do.
It is fine for errors to kill the application
Even if nothing handles an error, the interpreter does. You get a stack trace, showing what went wrong and where. Fatal errors of the kind "only happens if there is a bug" can "safely" bubble up to show what went wrong.
In fact, exiting the interpreter and assertions use this mechanism as well.
>>> assert 2 < 1, "This should never happen"
Traceback (most recent call last):
File "<string>", line 1, in <module>
AssertionError: This should never happen
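sys.exit works the same way: it simply raises SystemExit, which an outer layer can intercept like any other exception before the interpreter actually terminates.

```python
import sys

try:
    sys.exit(2)
except SystemExit as exc:
    # SystemExit carries the requested exit status; nothing has terminated yet
    status = exc.code
```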
For many services, you can use this even in deployment - for example, systemd would log that for a Linux system service. Only try to suppress errors for the outside if security is a concern, or if users cannot handle the error.
It is fine to use precise errors
Since exceptions are unchecked, you can use arbitrarily many without overstraining your API. This allows you to use custom errors that signal different levels of problems:
class DBProblem(Exception):
    """Something is wrong about our DB..."""

class DBEntryInconsistent(DBProblem):
    """A single entry is broken"""

class DBInconsistent(DBProblem):
    """The entire DB is foobar!"""
It is generally a good idea not to re-use builtin errors, unless your use-case actually matches their meaning. This allows you to handle errors precisely when needed:
try:
    gen.generate_classes(catalog)
except DBEntryInconsistent:
    logger.error("aborting due to corrupted entry")
    sys.exit(1)
except DBInconsistent as err:
    logger.error("aborting due to corrupted DB")
    Utility.inform_db_support(err)
    sys.exit(1)
# do not handle ValueError, KeyError, MemoryError, ...
# they will show up as a stack trace
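Because the custom errors form a hierarchy, callers can also be less precise and catch the common base class, which matches any of its subclasses. A small self-contained sketch (the handle helper is made up for illustration):

```python
class DBProblem(Exception):
    """Something is wrong about our DB..."""

class DBEntryInconsistent(DBProblem):
    """A single entry is broken"""

class DBInconsistent(DBProblem):
    """The entire DB is foobar!"""

def handle(error: DBProblem) -> str:
    try:
        raise error
    except DBEntryInconsistent:
        return "skip the bad entry"
    except DBProblem:
        # DBInconsistent (or any other subclass) lands here
        return "abort: DB-level problem"
```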
When performing async urlfetch calls with a callback and inside a tasklet, it seems that exceptions raised from within the callback don't propagate to the wrapping tasklet.
Example code:
def cb():
    raise Exception, 'just a test'

rpc = urlfetch.create_rpc(callback=cb)

@ndb.tasklet
def t():
    try:
        response = yield urlfetch.make_fetch_call(rpc, 'http://...')
    except:
        print 'an error occurred'
    raise ndb.Return

t().get_result()
In the code above, executed by the dev server, the "just a test" exception doesn't get caught inside the tasklet; i.e., instead of the error message being output to the console, I'm getting the "just a test" exception reported.
If there's a generic urlfetch exception related to the make_fetch_call (such as DownloadError in case of a bad URL), it's being handled properly.
Is there a way to catch callback-generated exceptions inside the tasklet in such a situation? Or maybe should this behavior be considered a bug?
Thanks.
I've created a sample project to illustrate the correct way to do this.
While reading the code, you'll find a lot of benefit from reading the comments and also cross-referencing with the docs on tasklets and rpc.make_fetch_call().
Some of the confusing aspects were that ndb tasklets actually use exceptions to indicate return values (even on success you should raise ndb.Return(True), which makes exception handling difficult to tiptoe around), and that exceptions in the callback need to be caught when we call wait() on the rpc future object returned by t(), while exceptions in the URL fetch need to be caught inside t() itself when we do yield rpc.make_fetch_call(). There may be a way to do the latter using rpc.check_success(), but that'll be up to your hacking to figure out.
I hope you find the source helpful, and I hope you learned a lesson about avoiding using exceptions to indicate that a generator is done...
Some time ago I wrote a piece of code, a Flask route to log out users from a web application I was working on, that looked like this:
@app.route('/logout')
@login_required
def logout():
    # let's get the user cookie, and if it exists, delete it
    cookie = request.cookies.get('app_login')
    response = make_response(redirect(url_for('login')))
    if cookie:
        riak_bucket = riak_connect('sessions')
        riak_bucket.get(cookie).delete()
        response.delete_cookie('app_login', None)
        return response
    return response
It did its job, and was certainly working, but now I am getting into making the app more robust by adding proper error handling, something that I haven't done before on a large scale anywhere in my code. So I stumbled on this route function and started writing its new version, when I realised I don't know how to do it 'the right way'. Here is what I came up with:
@app.route('/logout')
@login_required
def logout():
    # why don't we call variables after what they are in specifics?
    login_redirect = make_response(redirect(url_for('login')))
    try:
        cookie = request.cookies.get('app_login')
    except:
        return login_redirect
    # if we are here, the above try/except went well, right?
    try:
        # perhaps sessions_bucket should really be bucket_object?
        # is it valid to chain try statements like that, or should they be
        # tried separately one by one?
        sessions_bucket = riak_connect('sessions')
        sessions_bucket.get(cookie).delete()
        login_redirect.delete_cookie('app_login', None)
    except:
        return login_redirect
    # return redirect by default, just because it seems more secure
    return login_redirect
It also does its job, but still doesn't look 'right' to me. So, the questions, to all of you with more experience writing really Pythonic code (given that I would love the code to handle all errors nicely, be readable to others, and do its job fast and well, in this particular case but also across a rather large codebase), are:
how are you calling your variables, extra specific or general: sessions_bucket vs riak_bucket vs bucket_object?
how do you handle errors, by usage of try/except one after another, or by nesting one try/except in another, or in any other way?
is it ok to do more than one thing in one try/except, or not?
and perhaps anything else, that comes to your mind to the above code examples
Thanks in advance!
I don't know the exact riak Python API, so I don't know what exceptions are thrown. On the other hand, how should the web app behave on the different error conditions? Does the user have to be informed?
Variable names: I prefer generic. If you change the implementation (e.g. Session store), you don't have to change the variable names.
Exceptions: Depends on the desired behavior. If you want to recover from errors, try/except one after another. (Generally, linear code is simpler.) If you don't recover from errors, I find one bigger try clause with several exception clauses very acceptable.
For me it's ok to do several things in one try/except. If there are too many try/except clauses, the code gets less readable.
More things: logging. logging.exception will log the traceback so you can know where exactly the error appeared.
A suggestion:
import logging

log = logging.getLogger(__name__)

@app.route('/logout')
@login_required
def logout():
    login_redirect = make_response(redirect(url_for('login')))
    try:
        sessionid = request.cookies.get('app_login', None)
    except AttributeError:
        sessionid = None
        log.error("Improperly configured")
    if sessionid:
        try:
            session_store = riak_connect('sessions')
            session = session_store.get(sessionid)
            if session:
                session.delete()
                login_redirect.delete_cookie('app_login', None)
        except IOError:  # what errors appear when connect fails?
            log.exception("during logout")
    return login_redirect
I'm new to Flask. When writing a view, I wonder whether all errors should be caught. If I do so, most view code has to be wrapped in try... except, which I don't think is graceful.
For example:
@app.route('/')
def index():
    try:
        API.do()
    except:
        abort(503)
Should I code like this? If not, will the service crash (uwsgi + LNMP)?
You only catch what you can handle. The word "handle" means "do something useful with" not merely "print a message and die". The print-and-die is already handled by the exception mechanism and probably does it better than you will.
For example, this is not handling an exception usefully:
denominator = 0
try:
    y = x / denominator
except ZeroDivisionError:
    abort(503)
There is nothing useful you can do, and the abort is redundant, as that's what uncaught exceptions will cause to happen anyway. Here is an example of useful handling:
try:
    config_file = open('private_config')
except IOError:
    config_file = open('default_config_that_should_always_be_there')
but note that if the second open fails, there is nothing useful to do, so the exception will travel up the call stack and possibly halt the program. What you should never do is use a bare except:, because it hides information about what faulted and where. That will result in much head scratching when you get a defect report of "all it said was 503" and you have no idea what went wrong in API.do().
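If you do decide to abort, at least capture what happened first. A small sketch: api_do is a hypothetical stand-in for API.do() failing somewhere deep inside, and traceback.format_exc() preserves exactly the information a bare except would otherwise discard.

```python
import traceback

def api_do():
    # hypothetical stand-in for API.do() failing somewhere deep inside
    return {"x": 1}["missing"]

try:
    api_do()
except Exception:
    details = traceback.format_exc()  # keep the full story before aborting
    # log details somewhere, then abort(503)
```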
Try / except blocks that can't do any useful handling clutter up the code and visually bury the main flow of execution. Languages without exceptions force you to check every call for an error return if only to generate an error return yourself. Exceptions exist in part to get rid of that code noise.