I'm analysing an AWS response in Python in the following script:
# var definition
conversationName = 'NO NAME'

# in the MyClass
if len(resp['FaceMatches']) > 0:
    faceRecognized = resp['FaceMatches'][0]['Face']['ExternalImageId']
    self.logger.info(str(faceRecognized))
    if resp['FaceMatches'][0]['Face']['ExternalImageId'] == self.conversationName:
        self.logger.info("Name is the same")
        return
    else:
        self.logger.info('Name has changed!')
        self.conversationName = faceRecognized.split('_')[0]
        self.pepperTTS.say("Hi " + str(faceRecognized.split('_')[0]) + ". Can I help you with something?")
        return
else:
    self.logger.info("No face rekognized so far.")
    return
The problem is with the second if/else. When I run the program it seems to ignore this if/else completely and prints neither "Name is the same" nor "Name has changed!". It also does not show any errors when running the script.
Does anyone see the error, or can you give some tips to correct the script?
What's most likely happening is that resp['FaceMatches'][0]['Face']['ExternalImageId'] is raising an exception because one of those keys or indexes is wrong, and the exception is not getting caught and is swallowed silently. It's unfortunate, but in NAOqi a lot of exceptions get swallowed if no one catches them (for example, in the callback of an ALMemory subscribe, as you probably have here).
So you should wrap that whole chunk in a big try/except and print whatever exception gets caught.
This is a common enough situation that I created a helper library (documented here) with a log_exceptions decorator you can put on any function whose exceptions would otherwise be swallowed (typically: ALMemory event and signal callbacks, anything called with qi.async, anything called from outside your service, ...), so your code doesn't get cluttered with try/except all over the place.
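Roughly, such a decorator could look like the sketch below. This is not the actual library's code, just an illustration of the idea; the decorator name and the use of traceback here are my own choices:
import functools
import traceback

def log_exceptions(func):
    """Log any exception raised by func before letting it propagate.
    Illustrative sketch only, not the real helper library."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            # inside a NAOqi service you would typically use self.logger instead
            traceback.print_exc()
            raise
    return wrapper

# usage: decorate the ALMemory callback so nothing fails silently
# @log_exceptions
# def on_face_detected(self, value): ...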
Related
How do you best handle multiple levels of methods in a call hierarchy that raise exceptions, so that if it is a fatal error the program will exit (after displaying an error dialog)?
I'm basically coming from Java. There I would simply declare any methods as throws Exception, re-throw it and catch it somewhere at the top level.
However, Python is different. My Python code basically looks like the below.
EDIT: added much simpler code...
Main entry function (plugin.py):
def main(catalog):
    print "Executing main(catalog)... "

    # instantiate generator
    gen = JpaAnnotatedClassGenerator(options)

    # run generator
    try:
        gen.generate_bar()  # doesn't bubble up
    except ValueError as error:
        Utilities.show_error("Error", error.message, "OK", "", "")
        return

    # ... usually do the real work here if no error
JpaAnnotatedClassGenerator class (engine.py):
class JpaAnnotatedClassGenerator:
    def generate_bar(self):
        self.generate_value_error()

    def generate_value_error(self):
        raise ValueError("generate_value_error() raised an error!")
I'd like the exception to be thrown back up the call chain until it reaches the outermost try/except, which displays an error dialog with the exception's message.
QUESTION:
How is this best done in Python? Do I really have to repeat try-except for every method being called?
BTW: I am using Python 2.6.x and I cannot upgrade due to being bound to MySQL Workbench that provides the interpreter (Python 3 is on their upgrade list).
If you don't catch an exception, it bubbles up the call stack until someone does. If no one catches it, the runtime will get it and die with the exception's error message and a full traceback. IOW, you don't have to explicitly catch and re-raise your exception everywhere - that would actually defeat the whole point of having exceptions. Actually, despite being primarily used for errors / unexpected conditions, exceptions are first and foremost a control-flow tool allowing you to break out of the normal execution flow and pass control (and some information) to any arbitrary place up the call stack.
From this POV your code seems mostly correct (caveat: I didn't bother reading the whole thing, just had a quick look), except (no pun intended) for a couple of points:
First, you should define your own specific exception class(es) instead of using the builtin ValueError (you can inherit from it if it makes sense to you), so you're sure you only catch the exact exceptions you expect (quite a few layers "under" your own code could raise a ValueError that you didn't expect).
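For example (GenerationError is just an illustrative name, not something from your code):
class GenerationError(ValueError):
    """Raised when the generator cannot produce its output.
    Illustrative name only; pick whatever fits your domain."""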
Then, you may (or may not, depending on how your code is used) also want to add a catch-all top-level handler in your main() function so you can properly log (using the logging module) all errors, and eventually free resources, do some cleanup etc. before your process dies.
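A rough sketch of what that could look like, reusing the names from your question and the illustrative GenerationError above (the logging setup and the exact cleanup are assumptions, not your actual code):
import logging

logger = logging.getLogger(__name__)

def main(catalog):
    try:
        gen = JpaAnnotatedClassGenerator(options)
        gen.generate_bar()
    except GenerationError as error:
        # expected failure: show it to the user and stop this run
        Utilities.show_error("Error", str(error), "OK", "", "")
    except Exception:
        # anything else is a bug: log the full traceback, clean up, re-raise
        logger.exception("unexpected error in main()")
        raise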
As a side note, you may also want to learn and use proper string formatting, and - at least if performance is an issue - avoid duplicated constant calls like this:
elif AnnotationUtil.is_embeddable_table(table) and AnnotationUtil.is_secondary_table(table):
    # ...
elif AnnotationUtil.is_embeddable_table(table):
    # ...
elif AnnotationUtil.is_secondary_table(table):
    # ...
Given Python's very dynamic nature, neither the compiler nor the runtime can safely optimize away those repeated calls (the method could have been dynamically redefined between calls), so you have to do it yourself.
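One way to do that, sketched against the snippet above, is to call each predicate once and branch on the cached results:
# call each predicate once and reuse the results
is_embeddable = AnnotationUtil.is_embeddable_table(table)
is_secondary = AnnotationUtil.is_secondary_table(table)

if is_embeddable and is_secondary:
    pass  # ...
elif is_embeddable:
    pass  # ...
elif is_secondary:
    pass  # ...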
EDIT:
When trying to catch the error in the main() function, exceptions DON'T bubble up, but when I use this pattern one level deeper, bubbling-up seems to work.
You can easily check that it works correctly with a simple MCVE:
def deeply_nested():
    raise ValueError("foo")

def nested():
    return deeply_nested()

def firstline():
    return nested()

def main():
    try:
        firstline()
    except ValueError as e:
        print("got {}".format(e))
    else:
        print("you will not see me")

if __name__ == "__main__":
    main()
It appears the software that supplies the Python env is somehow treating the main plugin file in a wrong way. Looks like I will have to check with the MySQL Workbench guys.
Uhu... Even embedded, the exception mechanism should still work as expected - at least for the part of the call stack that depends on your main function (can't tell what happens higher up the call stack). But given how MySQL treats errors (what about having your data silently truncated?), I wouldn't be especially surprised if they hacked the runtime to silently pass any error in plugin code xD
It is fine for errors to bubble up
Python's exceptions are unchecked, meaning you have no obligation to declare or handle them. Even if you know that something may raise, only catch the error if you intend to do something with it. It is fine to have exception-transparent layers, which gracefully abort as an exception bubbles through them:
def logged_get(map: dict, key: str):
    result = map[key]  # this may raise, but there is no state to corrupt
    # the following is not meaningful if an exception occurred
    # it is fine for it to be skipped by the exception bubbling up
    print(map, '[%s]' % key, '=>', result)
    return result
In this case, logged_get will simply forward any KeyError (and others) that are raised by the lookup.
If an outer caller knows how to handle the error, it can do so.
So, just call self.create_collection_embeddable_class_stub the way you do.
It is fine for errors to kill the application
Even if nothing handles an error, the interpreter does. You get a stack trace, showing what went wrong and where. Fatal errors of the kind "only happens if there is a bug" can "safely" bubble up to show what went wrong.
In fact, exiting the interpreter and assertions use this mechanism as well.
>>> assert 2 < 1, "This should never happen"
Traceback (most recent call last):
File "<string>", line 1, in <module>
AssertionError: This should never happen
For many services, you can use this even in deployment - for example, systemd would log that for a Linux system service. Only try to suppress errors for the outside if security is a concern, or if users cannot handle the error.
It is fine to use precise errors
Since exceptions are unchecked, you can use arbitrarily many without overstraining your API. This allows you to use custom errors that signal different levels of problems:
class DBProblem(Exception):
    """Something is wrong about our DB..."""

class DBEntryInconsistent(DBProblem):
    """A single entry is broken"""

class DBInconsistent(DBProblem):
    """The entire DB is foobar!"""
It is generally a good idea not to re-use builtin errors, unless your use case actually matches their meaning. This allows you to handle errors precisely if needed:
try:
    gen.generate_classes(catalog)
except DBEntryInconsistent:
    logger.error("aborting due to corrupted entry")
    sys.exit(1)
except DBInconsistent as err:
    logger.error("aborting due to corrupted DB")
    Utility.inform_db_support(err)
    sys.exit(1)
# do not handle ValueError, KeyError, MemoryError, ...
# they will show up as a stack trace
I'm writing a custom backup script in Python. Sometimes the mkdir function, the print function or some other function fails for diverse reasons. Such exceptions stop the whole script and halt the backup in the middle, which is very frustrating. So far, I've managed these problems by adding try: ... except: ... statements and properly handling the exceptions. However, one day some other statement or function might raise an exception as well, for some other reason that hasn't been triggered yet.
Is there a way to tell a script to proceed anyway? An equivalent of wrapping every single statement of the code in a try: ... except: pass clause? A log would be better of course.
I've noticed that when programming with GUI toolkits like Tkinter, the application keeps running even if exceptions are raised. Is it possible to accomplish this type of thing with the console?
There actually is a module that is supposed to do exactly that: https://github.com/ajalt/fuckitpy.
Although it was obviously written as a joke. I cannot imagine a situation where doing something like that is a good idea. God, I can't believe I'm even suggesting that as a solution.
What you should do instead is identify what lines of code can produce what kinds of errors, and handle those errors properly. There are only so many places where errors can actually happen - mostly while interfacing with outside systems, including the filesystem, the network, user input etc. And remember that actually failing is often better than continuing to "work" while messing up your data, files and so on. Exceptions are there for a reason; they are not a result of Guido's malice.
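For instance, for the mkdir failures mentioned in the question, a narrow handler could tolerate only the "directory already exists" case and let everything else stop the backup (backup_dir is a placeholder name, not from the question):
import errno
import os

try:
    os.mkdir(backup_dir)  # backup_dir is a placeholder
except OSError as exc:
    if exc.errno != errno.EEXIST:
        raise  # any other failure is a real problem: let it stop the backup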
Python has no way of doing that, and for good reasons.
It seems you're confused about what it means to write "robust" software: a robust program is not a program that is hard to kill and will keep running no matter what, but a program that handles edge cases properly. Keeping running is NOT enough... keeping running while doing sensible things is the key point.
Unfortunately there's no way to do reasonable things automatically, and you have to think on a case-by-case basis about how to handle each error.
Beware that a program with a lot of catch blocks is rarely a good program. Exceptions are meant to be raised in a lot of places and caught almost nowhere.
Note also that every catch block is potentially a source of bugs... for example:
try:
    print my_dict[foo()]
except KeyError:
    ...
cannot distinguish whether the KeyError comes from accessing a non-existing key in my_dict or instead escaped from foo(). The two cases should rarely be handled the same way...
Better is to write:
key = foo()
if key in my_dict:
    print my_dict[key]
else:
    ...
so that only the expected case of a missing key in my_dict is handled, while an unexpected KeyError will stop the program (stopping a program when you're not sure what it's doing is the only reasonable thing to do).
Python has BaseException as the base class of all exception classes. You can catch and ignore this base class, and that will cover all exceptions:
try:
    pass  # ... your code here ...
except BaseException as exp:
    print "A General Exception Occurred"
try:
    pass  # code segment
except:
    pass
The bare except: with pass will ignore all exceptions.
Normally, this one should catch everything:
try:
    ...
except:
    pass
The only problem is that you don't get the exception object with this syntax, but that was not asked for in this case.
You can add a general except block like @Kanwar Saad proposed. The question is: can you continue with your program in a valid state after the exception has been raised?
From the Zen of Python:
Errors should never pass silently.
Unless explicitly silenced.
Trying to catch all the exceptions you know about is, in my opinion, the best way to go here. If you cannot explicitly catch an exception, you should not try to work around it. You (and your users) should know what exactly went wrong, otherwise your code might become a nightmare to debug.
If you are worried about losing backup data maybe you could do something like this:
def save_unfinished_backup():
    # try to find a graceful exit without losing any data
    ...

try:
    ...  # some code
except OSError:
    ...  # handle OS errors
except Exception:
    save_unfinished_backup()
    raise
This way you get both: a chance to fend off data loss, and the exact error so you can debug it.
I hope this helps!
On a funny note: you could also use the fuckit module, which silences ALL errors, including syntax errors. Do not, ever, use this in production code though.
This should work perfectly. It will not print "foo", but you will reach the print("bar") without a crash.
import fuckit

with fuckit:
    prnt("foo")
print("bar")
New answer for new Gen...
Python now ships with contextlib.suppress(); this tells the interpreter to suppress the listed exceptions while running the with block.
It can easily be imported and used as below:
from contextlib import suppress

with suppress(ValueError):
    int('this wont catch')
print('yea')
The above will work: it won't raise the ValueError from converting an invalid int string to int, and execution continues at the print.
It's cleaner than third-party libraries.
Happy Hacking
I have a server that does "some stuff" in a section and I have a "with gevent.Timeout(5)" around that. I have some checks going on in another greenlet and through those I noticed that one of the greenlets doing that "some stuff" had been running for 45 minutes. I eventually had to restart the program to kill it (I know of other ways of killing it, but that's not the problem).
I'm monkey patching using gevent.monkey.patch_all() as well. The "some stuff" part does involve network connections and I am guessing something got stuck in one of those places. I don't understand why the timeout exception was not raised. Does anyone have any idea why the gevent.Timeout exception might not have been raised?
Whenever I've used gevent.Timeout, I've also used it as a context manager but with the second argument False. This way the context manager suppresses any exception and just leaves the chunk of code. You can follow up by checking if, say, the block set a value successfully:
result = None
with gevent.Timeout(5, False):
    ...  # something that stalls; on success it should set result

if result is None:
    ...  # take care of business: the block timed out
This has worked for me very reliably. It seems that the default for the second argument, exception, to gevent.Timeout is None -- have you tried replacing it with your own exception type? Or even Exception?
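A sketch of that suggestion; the exception class name and the stalling call are placeholders, not code from the question:
import gevent

class StallTimeout(Exception):
    """Placeholder exception class for this sketch."""

try:
    with gevent.Timeout(5, StallTimeout("network section stalled")):
        do_network_stuff()  # placeholder for the "some stuff" section
except StallTimeout as exc:
    print("gave up after 5 seconds: %s" % exc)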
I'm new to Flask. When writing a view, I wonder whether all errors should be caught. If I do so, most of the view code has to be wrapped with try ... except, which I don't think is graceful.
For example:
@app.route('/')
def index():
    try:
        API.do()
    except:
        abort(503)
Should I code like this? If not, will the service crash (uwsgi + lnmp)?
You only catch what you can handle. The word "handle" means "do something useful with" not merely "print a message and die". The print-and-die is already handled by the exception mechanism and probably does it better than you will.
For example, this is not handling an exception usefully:
denominator = 0
try:
y = x / denominator
except ZeroDivisionError:
abort(503)
There is nothing useful you can do, and the abort is redundant as that's what uncaught exceptions will cause to happen anyway. Here is an example of a useful handling:
try:
    config_file = open('private_config')
except IOError:
    config_file = open('default_config_that_should_always_be_there')
but note that if the second open fails, there is nothing useful to do, so the exception will travel up the call stack and possibly halt the program. What you should never do is have a bare except:, because it hides information about what faulted where. This will result in much head scratching when you get a defect report of "all it said was 503" and you have no idea what went wrong in API.do().
Try / except blocks that can't do any useful handling clutter up the code and visually bury the main flow of execution. Languages without exceptions force you to check every call for an error return if only to generate an error return yourself. Exceptions exist in part to get rid of that code noise.
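Applied to the view from the question, a sketch of useful handling might catch only a specific, expected error and let everything else propagate. Here API.do() is the call from the question, and APIUnavailableError is a hypothetical exception name invented for this sketch:
import logging

from flask import Flask, abort

app = Flask(__name__)
logger = logging.getLogger(__name__)

class APIUnavailableError(Exception):
    """Hypothetical: the specific error API.do() is known to raise."""

@app.route('/')
def index():
    try:
        result = API.do()  # the call from the question
    except APIUnavailableError:
        # an expected, actionable failure: log it and return 503
        logger.exception("backend unavailable")
        abort(503)
    # any other exception is a bug: let it propagate and show up in the logs
    return result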
I know I can use the code below to ignore a certain exception, but how can I make the code go back to where the exception was raised and keep executing? Say the exception 'Exception' is raised in do_something1: how can I ignore it, finish do_something1 and proceed to do_something2? My code just goes to the finally block after executing pass in the except block. Please advise, thanks.
try:
    do_something1
    do_something2
    do_something3
    do_something4
except Exception:
    pass
finally:
    clean_up
EDIT:
Thanks for the reply. Now I know the correct way to do it. But here's another question: can I just ignore a specific exception (say, if I know the error number)? Is the code below possible?
try:
    do_something1
except Exception.strerror == 10001:
    pass

try:
    do_something2
except Exception.strerror == 10002:
    pass
finally:
    clean_up

do_something3
do_something4
There's no direct way for the code to go back inside the try/except block. If, however, you're looking to execute these different independent actions and keep executing when one fails (without copy/pasting the try/except block), you're going to have to write something like this:
actions = (
    do_something1, do_something2,  # ...
)

for action in actions:
    try:
        action()
    except Exception, error:
        pass
Update: the way to ignore specific exceptions is to catch the type of exception that you want, test it to see if you want to ignore it, and re-raise it if you don't.
try:
    do_something1
except TheExceptionTypeThatICanHandleError, e:
    if e.strerror != 10001:
        raise
finally:
    clean_up
Note also that each try statement needs its own finally clause if you want it to have one. It won't 'attach itself' to the previous try statement. A raise statement with nothing else is the correct way to re-raise the last exception. Don't let anybody tell you otherwise.
What you want are continuations, which Python doesn't natively provide. Beyond that, the answer to your question depends on exactly what you want to do. If you want do_something1 to continue regardless of exceptions, then it would have to catch the exceptions and ignore them itself.
If you just want do_something2 to happen regardless of whether do_something1 completes, you need a separate try statement for each one.
try:
    do_something1()
except:
    pass

try:
    do_something2()
except:
    pass
etc. If you can provide a more detailed example of what it is that you want to do, then there is a good chance that myself or someone smarter than myself can either help you or (more likely) talk you out of it and suggest a more reasonable alternative.
This is pretty much missing the point of exceptions.
If the first statement has thrown an exception, the system is in an indeterminate state and you have to treat the following statement as unsafe to run.
If you know which statements might fail, and how they might fail, then you can use exception handling to specifically clean up the problems which might occur with a particular block of statements before moving on to the next section.
So, the only real answer is to handle exceptions around each set of statements that you want to treat as atomic.
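A sketch of what that grouping might look like, using the placeholder step names from the question (the logging calls are my own addition):
import logging

logging.basicConfig()

# steps 1 and 2 form one atomic unit: if step 1 fails, step 2 is skipped
try:
    do_something1()
    do_something2()
except Exception:
    logging.exception("steps 1-2 failed, moving on")

# step 3 is independent and runs regardless of what happened above
try:
    do_something3()
except Exception:
    logging.exception("step 3 failed")

clean_up()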
You could have all of the do_something's in a list and iterate through them like this, so it's not so wordy. You can use lambda functions to wrap the calls that require arguments:
work = [lambda: dosomething1(args), dosomething2, lambda: dosomething3(*kw, **kwargs)]

for each in work:
    try:
        each()
    except:
        pass

cleanup()
Exceptions are usually raised when a task cannot be completed in the manner intended by the code. Exceptions should be handled, not ignored. The whole idea of an exception is that the program cannot continue in the normal execution flow without producing abnormal results.
What if you write code to open a file and read it, and the file does not exist?
It is much better to raise an exception. You cannot read a file that doesn't exist. What you can do is handle the exception and let the user know that no such file exists. What advantage would there be in continuing to read a file that could not be opened at all?
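A minimal illustration of that kind of handling (the file name is a placeholder; on Python 2 you would catch IOError instead of FileNotFoundError):
try:
    with open('backup_list.txt') as f:  # placeholder file name
        data = f.read()
except FileNotFoundError:
    print("No such file: backup_list.txt - nothing to back up")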
In fact, the answers above provided by Aaron work on the principle of handling your exceptions.
I posted this recently as an answer to another question. Here you have a function that returns a function that ignores ("traps") specified exceptions when calling any function. Then you invoke the desired function indirectly through the "trap."
def maketrap(*exceptions):
    def trap(func, *args, **kwargs):
        try:
            return func(*args, **kwargs)
        except exceptions:
            return None
    return trap
# create a trap that ignores all exceptions
trapall = maketrap(Exception)
# create a trap that ignores two exceptions
trapkeyattrerr = maketrap(KeyError, AttributeError)
# Now call some functions, ignoring specific exceptions
trapall(dosomething1, arg1, arg2)
trapkeyattrerr(dosomething2, arg1, arg2, arg3)
In general I'm with those who say that ignoring exceptions is a bad idea, but if you do it, you should be as specific as possible as to which exceptions you think your code can tolerate.
Python 3.4 added contextlib.suppress(), a context manager that takes one or more exception types and suppresses them within the context:
import contextlib
import pathlib

with contextlib.suppress(IOError):
    print('inside')
    print(pathlib.Path('myfile').read_text())  # Boom
    print('inside end')
print('outside')
Note that, just as with regular try/except, an exception within the context causes the rest of the context to be skipped. So, if an exception happens in the line commented with Boom, the output will be:
inside
outside
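If, as in the earlier backup-script question, you want later steps to run even when an earlier one fails, one suppress block per independent step does that (the file names here are placeholders):
import contextlib
import pathlib

# each independent step gets its own suppress block,
# so a failure in one does not skip the others
with contextlib.suppress(OSError):
    print(pathlib.Path('myfile').read_text())

with contextlib.suppress(OSError):
    print(pathlib.Path('otherfile').read_text())

print('done')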