Python3: purge exception chain?

I'm trying to raise an exception within an except: block but the interpreter tries to be helpful and prints stack traces 'by force'. Is it possible to avoid this?
A little bit of background information:
I'm toying with urwid, a TUI library for python. The user interface is started by calling urwid.MainLoop.run() and ended by raising urwid.ExitMainLoop(). So far this works fine but what happens when another exception is raised? E.g. when I'm catching KeyboardInterrupt (the urwid MainLoop does not), I do some cleanup and want to end the user interface - by raising the appropriate exception. But this results in a screen full of stack traces.
A little research showed that Python 3 remembers chained exceptions and that one can explicitly raise an exception with a 'cause': raise B() from A(). I learned a few ways to change or append data regarding the raised exceptions, but I found no way to 'disable' this feature. I'd like to avoid the printing of stack traces and lines like The above exception was the direct cause of... and just raise the interface-ending exception within an except: block like I would outside of one.
Is this possible or am I doing something fundamentally wrong?
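For context, here is a minimal example (not my real code) of the chained output I mean:

try:
    {}['missing']
except KeyError:
    # when this RuntimeError is not caught, the printed traceback shows both
    # exceptions, joined by 'During handling of the above exception, another
    # exception occurred'; with 'raise ... from exc' the joining line reads
    # 'The above exception was the direct cause of the following exception'
    raise RuntimeError('lookup failed')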
Edit:
Here's an example resembling my current architecture, resulting in the same problem:
#!/usr/bin/env python3
import time

class Exit_Main_Loop(Exception):
    pass

# UI main loop
def main_loop():
    try:
        while True:
            time.sleep(0.1)
    except Exit_Main_Loop as e:
        print('Exit_Main_Loop')
        # do some UI-related clean up

# my main script
try:
    main_loop()
except KeyboardInterrupt as e:
    print('KeyboardInterrupt')
    # do some clean up
    raise Exit_Main_Loop()  # signal the UI to terminate
Unfortunately I can't change main_loop to except KeyboardInterrupt as well. Is there a pattern to solve this?

I still don't quite understand your explanation, but from the code:
try:
    main_loop()
except KeyboardInterrupt as e:
    print('KeyboardInterrupt')
    # do some clean up
    raise Exit_Main_Loop()  # signal the UI to terminate
There is no way that main_loop could ever see the Exit_Main_Loop() exception. By the time you get to the KeyboardInterrupt handler, main_loop is guaranteed to have already finished (in this case, because of an unhandled KeyboardInterrupt), so its exception handler is no longer active.
So, what happens is that you raise a new exception that nobody catches. And when an exception gets to the top of your code without being handled, Python handles it automatically by printing a traceback and quitting.
If you want to convert one type of exception into another so main_loop can handle it, you have to do that somewhere inside the try block.
You say:
Unfortunately I can't change main_loop to except KeyboardInterrupt as well.
If that's true, there's no real answer to your problem… but I'm not sure there's a problem in the first place, other than the one you created. Just remove the Exit_Main_Loop() from your code, and isn't it already doing what you wanted? If you're just trying to prevent Python from printing a traceback and exiting, this will take care of it for you.
If there really is a problem—e.g., the main_loop code has some cleanup code that you need to get executed no matter what, and it's not getting executed because it doesn't handle KeyboardInterrupt—there are two ways you could work around this.
First, as the signal docs explain:
The signal.signal() function allows to define custom handlers to be executed when a signal is received. A small number of default handlers are installed: … SIGINT is translated into a KeyboardInterrupt exception.
So, all you have to do is replace the default handler with a different one:
import signal

def handle_sigint(signum, frame):
    raise ExitMainLoop()

signal.signal(signal.SIGINT, handle_sigint)
Just do this before you start main_loop, and you should be fine. Keep in mind that there are some limitations with threaded programs, and with Windows, but if none of those limitations apply, you're golden; a ctrl-C will trigger an ExitMainLoop exception instead of a KeyboardInterrupt, so the main loop will handle it. (You may want to also add an except ExitMainLoop: block in your wrapper code, in case there's an exception outside of main_loop. However, you could easily write a contextmanager that sets and restores the signal around the call to main_loop, so there isn't any outside code that could possibly raise it.)
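For what it's worth, here is a minimal sketch of such a context manager (the ExitMainLoop name follows the snippet above; this is an illustration, not code from urwid):

import signal
from contextlib import contextmanager

@contextmanager
def sigint_raises(exc_type):
    # install a SIGINT handler that raises exc_type, and restore the previous
    # handler when the block exits (normally or via an exception)
    def handle_sigint(signum, frame):
        raise exc_type()
    old_handler = signal.signal(signal.SIGINT, handle_sigint)
    try:
        yield
    finally:
        signal.signal(signal.SIGINT, old_handler)

# usage (hypothetical):
# with sigint_raises(ExitMainLoop):
#     main_loop()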
Alternatively, even if you can't edit the main_loop source code, you can always monkeypatch it at runtime. Without knowing what the code looks like, it's impossible to explain exactly how to do this, but there's almost always a way to do it.

Related

How to catch when *any* error occurs in a whole script? [duplicate]

How do you best handle multiple levels of methods in a call hierarchy that raise exceptions, so that if it is a fatal error the program will exit (after displaying an error dialog)?
I'm basically coming from Java. There I would simply declare any methods as throws Exception, re-throw it and catch it somewhere at the top level.
However, Python is different. My Python code basically looks like the below.
EDIT: added much simpler code...
Main entry function (plugin.py):
def main(catalog):
    print "Executing main(catalog)... "
    # instantiate generator
    gen = JpaAnnotatedClassGenerator(options)
    # run generator
    try:
        gen.generate_bar()  # doesn't bubble up
    except ValueError as error:
        Utilities.show_error("Error", error.message, "OK", "", "")
        return
    # ... usually do the real work here if no error
JpaAnnotatedClassGenerator class (engine.py):
class JpaAnnotatedClassGenerator:
    def generate_bar(self):
        self.generate_value_error()

    def generate_value_error(self):
        raise ValueError("generate_value_error() raised an error!")
I'd like the exception to be thrown back up through the callers until it reaches the outermost try-except, which displays an error dialog with the exception's message.
QUESTION:
How is this best done in Python? Do I really have to repeat try-except for every method being called?
BTW: I am using Python 2.6.x and I cannot upgrade due to being bound to MySQL Workbench that provides the interpreter (Python 3 is on their upgrade list).
If you don't catch an exception, it bubbles up the call stack until someone does. If no one catches it, the runtime will get it and die with the exception error message and a full traceback. IOW, you don't have to explicitly catch and reraise your exception everywhere - which would actually defeat the whole point of having exceptions. Actually, despite being primarily used for errors / unexpected conditions, exceptions are first and foremost a control flow tool allowing you to break out of the normal execution flow and pass control (and some information) to any arbitrary place up in the call stack.
From this POV your code seems mostly correct (caveat: I didn't bother reading the whole thing, just had a quick look), except (no pun intended) for a couple of points:
First, you should define your own specific exception class(es) instead of using the builtin ValueError (you can inherit from it if it makes sense to you) so you're sure you only catch the exact exceptions you expect (quite a few layers "under" your own code could raise a ValueError that you didn't expect).
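For instance, a minimal sketch of such a class (the name is just an illustration, not something from your code); inheriting from ValueError keeps any existing ValueError handlers working:

class GenerationError(ValueError):
    """Raised when JPA class generation fails."""

# in the generator, raise your own error instead of a bare ValueError:
#     raise GenerationError("generate_value_error() raised an error!")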
Then, you may (or may not, depending on how your code is used) also want to add a catch-all top-level handler in your main() function so you can properly log (using the logging module) all errors and possibly free resources, do some cleanup etc. before your process dies.
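A minimal sketch of what such a handler could look like (logger setup, do_real_work and cleanup are placeholders, not names from your code):

import logging

logger = logging.getLogger(__name__)

def main(catalog):
    try:
        do_real_work(catalog)  # whatever the plugin actually does
    except Exception:
        logger.exception("unhandled error in plugin, aborting")
        raise  # or swallow it, depending on how the plugin is hosted
    finally:
        cleanup()  # release resources no matter what happened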
As a side note, you may also want to learn and use proper string formatting, and - at least if performance is an issue - avoid duplicate constant calls like this:
elif AnnotationUtil.is_embeddable_table(table) and AnnotationUtil.is_secondary_table(table):
    # ...
elif AnnotationUtil.is_embeddable_table(table):
    # ...
elif AnnotationUtil.is_secondary_table(table):
    # ...
Given Python's very dynamic nature, neither the compiler nor runtime can safely optimize those repeated calls (the method could have been dynamically redefined between calls), so you have to do it yourself.
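For example, a sketch of hoisting those repeated calls into locals so each predicate is evaluated only once (AnnotationUtil and table come from your snippet above):

is_embeddable = AnnotationUtil.is_embeddable_table(table)
is_secondary = AnnotationUtil.is_secondary_table(table)

if is_embeddable and is_secondary:
    # ...
    pass
elif is_embeddable:
    # ...
    pass
elif is_secondary:
    # ...
    pass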
EDIT:
When trying to catch the error in the main() function, exceptions DON'T bubble up, but when I use this pattern one level deeper, bubbling-up seems to work.
You can easily check that it works correctly with a simple MCVE:
def deeply_nested():
    raise ValueError("foo")

def nested():
    return deeply_nested()

def firstline():
    return nested()

def main():
    try:
        firstline()
    except ValueError as e:
        print("got {}".format(e))
    else:
        print("you will not see me")

if __name__ == "__main__":
    main()
It appears the software that supplies the Python env is somehow treating the main plugin file in a wrong way. Looks like I will have to check with the MySQL Workbench guys.
Uhu... Even embedded, the exception mechanism should still work as expected - at least for the part of the call stack that depends on your main function (I can't tell what happens higher up in the call stack). But given how MySQL treats errors (what about having your data silently truncated?), I wouldn't be especially surprised if they hacked the runtime to silently swallow any error in plugin code xD
It is fine for errors to bubble up
Python's exceptions are unchecked, meaning you have no obligation to declare or handle them. Even if you know that something may raise, only catch the error if you intend to do something with it. It is fine to have exception-transparent layers, which gracefully abort as an exception bubbles through them:
def logged_get(map: dict, key: str):
    result = map[key]  # this may raise, but there is no state to corrupt
    # the following is not meaningful if an exception occurred
    # it is fine for it to be skipped by the exception bubbling up
    print(map, '[%s]' % key, '=>', result)
    return result
In this case, logged_get will simply forward any KeyError (and others) that are raised by the lookup.
If an outer caller knows how to handle the error, it can do so.
So, just call self.create_collection_embeddable_class_stub the way you do.
It is fine for errors to kill the application
Even if nothing handles an error, the interpreter does. You get a stack trace, showing what went wrong and where. Fatal errors of the kind "only happens if there is a bug" can "safely" bubble up to show what went wrong.
In fact, exiting the interpreter and assertions use this mechanism as well.
>>> assert 2 < 1, "This should never happen"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: This should never happen
For many services, you can use this even in deployment - for example, systemd would log that for a Linux system service. Only try to suppress errors for the outside if security is a concern, or if users cannot handle the error.
It is fine to use precise errors
Since exceptions are unchecked, you can use arbitrarily many without overstraining your API. This allows you to use custom errors that signal different levels of problems:
class DBProblem(Exception):
    """Something is wrong about our DB..."""

class DBEntryInconsistent(DBProblem):
    """A single entry is broken"""

class DBInconsistent(DBProblem):
    """The entire DB is foobar!"""
It is generally a good idea not to re-use builtin errors, unless your use-case actually matches their meaning. This allows you to handle errors precisely if needed:
try:
    gen.generate_classes(catalog)
except DBEntryInconsistent:
    logger.error("aborting due to corrupted entry")
    sys.exit(1)
except DBInconsistent as err:
    logger.error("aborting due to corrupted DB")
    Utility.inform_db_support(err)
    sys.exit(1)
# do not handle ValueError, KeyError, MemoryError, ...
# they will show up as a stack trace

Killing cv2 read process after N time

I'm desperate.
My code reads every nframe-th frame from videos; sometimes the code just stops for no reason, with no error.
So I decided to somehow raise an error.
The thing is, the code does raise an error, but then it ignores it for some reason and just carries on as normal.
I've provided a code block below in which exactly the same method works.
handler:
def handler(signum, frame):
    print("error")  # This is printed
    raise Exception('time out')  # I guess this is getting raised
Code part I want to wrap:
for i in range(0, int(frame_count), nframe):  # basically loads every nframe-th frame from the video
    try:
        frame = video.set(1, i)
        signal.signal(signal.SIGALRM, handler)
        signal.alarm(1)  # At this point, the handler did raise the error, but it did not kill this try block.
        _n, frame = video.read()  # This line sometimes blocks for an infinite amount of time, and I want to wrap it
    except Exception as e:
        print('test')  # Code does not get here, yet the handler does raise an exception
        raise e
    # Here I need to return False, or raise an error, but the code just does not get here.
An example where exactly the same method will work:
import signal
import time

def handler(signum, frame):
    raise Exception('time out')

def function():
    try:
        signal.signal(signal.SIGALRM, handler)
        signal.alarm(5)  # 5 seconds till raise
        time.sleep(10)  # does not get here; an Exception is raised after 5 seconds
    except Exception as e:
        raise e  # This will indeed work
My guess is that the read() call is blocked somewhere inside C code. The signal handler runs, puts an exception into the Python interpreter somewhere, but the exception isn't handled until the Python interpreter regains control. This is a limitation documented in the signal module:
A long-running calculation implemented purely in C (such as regular expression matching on a large body of text) may run uninterrupted for an arbitrary amount of time, regardless of any signals received. The Python signal handlers will be called when the calculation finishes.
One possible workaround is to read frames on a separate process using the multiprocessing module, and return them to the main process using a multiprocessing.Queue (from which you can get with a timeout). However, there will be extra overhead in sending the frames between processes.
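A rough sketch of that approach (the function names, the use of cv2.CAP_PROP_POS_FRAMES for seeking, and the timeout value are my assumptions, not code from your question):

import multiprocessing as mp
import queue
import cv2

def _read_frame(path, index, out):
    # runs in a child process, so a hang here cannot block the main process
    video = cv2.VideoCapture(path)
    if not video.isOpened():
        out.put(None)
        return
    video.set(cv2.CAP_PROP_POS_FRAMES, index)
    ok, frame = video.read()
    out.put(frame if ok else None)

def read_frame_with_timeout(path, index, timeout=5):
    out = mp.Queue()
    proc = mp.Process(target=_read_frame, args=(path, index, out))
    proc.start()
    try:
        return out.get(timeout=timeout)  # waits at most `timeout` seconds
    except queue.Empty:
        return None  # the read did not finish in time
    finally:
        proc.terminate()  # kill the worker even if it is stuck in C code
        proc.join()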
Another approach might be to try and avoid the root of the problem. OpenCV has different video backends (V4L, GStreamer, ffmpeg, ...); one of them might work where another doesn't. Using the second argument to the VideoCapture constructor, you can indicate a preference for which backend to use:
cv.VideoCapture(..., cv.CAP_FFMPEG)
See the documentation for the full list of backends. Depending on your platform and OpenCV build, not all of them will be available.
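For example, a sketch of probing backends in order of preference until one manages to open the file (the path and the ordering are assumptions):

import cv2

def open_video(path):
    for backend in (cv2.CAP_FFMPEG, cv2.CAP_GSTREAMER, cv2.CAP_ANY):
        video = cv2.VideoCapture(path, backend)
        if video.isOpened():
            return video
        video.release()
    raise RuntimeError("no backend could open %s" % path)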

Unable to KeyboardInterrupt while loop

I have a python script that runs in an infinite loop. In the loop, multiprocessing and async functions are used, so normally I catch KeyboardInterrupt to properly kill all the processes.
Using similar code, somehow in one of the loops I am unable to catch the KeyboardInterrupt; the loop just keeps going.
The logic goes like this:
try:
    while True:
        do stuff
except (KeyboardInterrupt, SystemExit):
    exit cleanly
Normally I would suspect a blanket try ... except somewhere in the child functions, but I went over the whole code base, and while there is a lot of error catching, everything is specific.
Is there a way to trace errors and somehow figure out where the KeyboardInterrupt is caught?
Thank you
Edit after some debugging...
So I disabled the code part by part until I cornered the bug:
Somewhere in the code I was calling a method that was missing self and was not marked as @staticmethod.
Changing that fixed my issue.
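For illustration, a minimal sketch of that kind of bug (the names are made up, not my actual code); calling the method on an instance raises a TypeError at runtime:

class Poller:
    def poll():  # note: no 'self' and no @staticmethod
        return True

p = Poller()
p.poll()  # TypeError: poll() takes 0 positional arguments but 1 was given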
This works for me.
try:
    while True:
        print(1)
except KeyboardInterrupt:
    raise
EDIT:
Now that I've actually read your question: I won't be able to tell you why it's not raising an error without seeing the rest or more of the code.

Python Ignore Exception and Go Back to Where I Was

I know how to use the code below to ignore a certain exception, but how do I let the code go back to where the exception was raised and keep executing? Say the exception is raised in do_something1: how do I make the code ignore it, finish do_something1, and go on to do_something2? My code just goes to the finally block after the except block runs. Please advise, thanks.
try:
    do_something1
    do_something2
    do_something3
    do_something4
except Exception:
    pass
finally:
    clean_up
EDIT:
Thanks for the reply. Now I know what the correct way to do it is. But here's another question: can I just ignore a specific exception (say, if I know the error number)? Is the code below possible?
try:
    do_something1
except Exception.strerror == 10001:
    pass

try:
    do_something2
except Exception.strerror == 10002:
    pass
finally:
    clean_up

do_something3
do_something4
There's no direct way for the code to go back inside the try-except block. If, however, you're looking at trying to execute these different independent actions and keep executing when one fails (without copy/pasting the try/except block), you're going to have to write something like this:
actions = (
    do_something1, do_something2,  # ...
)

for action in actions:
    try:
        action()
    except Exception as error:
        pass
Update: the way to ignore specific exceptions is to catch the type of exception that you want, test it to see if you want to ignore it, and re-raise it if you don't.
try:
    do_something1
except TheExceptionTypeThatICanHandleError as e:
    if e.strerror != 10001:
        raise
finally:
    clean_up
Note also that each try statement needs its own finally clause if you want it to have one. It won't 'attach itself' to the previous try statement. A raise statement with nothing else is the correct way to re-raise the last exception. Don't let anybody tell you otherwise.
What you want are continuations, which Python doesn't natively provide. Beyond that, the answer to your question depends on exactly what you want to do. If you want do_something1 to continue regardless of exceptions, then it would have to catch the exceptions and ignore them itself.
If you just want do_something2 to happen regardless of whether do_something1 completes, you need a separate try statement for each one.
try:
    do_something1()
except:
    pass

try:
    do_something2()
except:
    pass
etc. If you can provide a more detailed example of what it is that you want to do, then there is a good chance that myself or someone smarter than myself can either help you or (more likely) talk you out of it and suggest a more reasonable alternative.
This is pretty much missing the point of exceptions.
If the first statement has thrown an exception, the system is in an indeterminate state and you have to treat the following statement as unsafe to run.
If you know which statements might fail, and how they might fail, then you can use exception handling to specifically clean up the problems which might occur with a particular block of statements before moving on to the next section.
So, the only real answer is to handle exceptions around each set of statements that you want to treat as atomic.
You could have all of the do_something's in a list and iterate through them like this, so it's not so wordy. You can use lambda functions instead if you require arguments for the working functions:
work = [lambda: dosomething1(args), dosomething2, lambda: dosomething3(*kw, **kwargs)]

for each in work:
    try:
        each()
    except:
        pass

cleanup()
Exceptions are usually raised when a task cannot be completed in the manner intended by the code. Exceptions should be handled and not ignored. The whole idea of an exception is that the program cannot continue in the normal execution flow without abnormal results.
What if you write code to open a file and read it? What if this file does not exist?
It is much better to raise an exception. You cannot read a file where none exists. What you can do is handle the exception and let the user know that no such file exists. What advantage would be gained from continuing to read the file when it could not be opened at all?
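For example, a small sketch of handling a missing file and telling the user, instead of silently ignoring the error (the filename is just an illustration):

try:
    with open('settings.txt') as f:
        contents = f.read()
except IOError:
    print('settings.txt does not exist, using defaults')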
In fact, the answers above provided by Aaron work on the principle of handling your exceptions.
I posted this recently as an answer to another question. Here you have a function that returns a function that ignores ("traps") specified exceptions when calling any function. Then you invoke the desired function indirectly through the "trap."
def maketrap(*exceptions):
    def trap(func, *args, **kwargs):
        try:
            return func(*args, **kwargs)
        except exceptions:
            return None
    return trap

# create a trap that ignores all exceptions
trapall = maketrap(Exception)

# create a trap that ignores two exceptions
trapkeyattrerr = maketrap(KeyError, AttributeError)

# Now call some functions, ignoring specific exceptions
trapall(dosomething1, arg1, arg2)
trapkeyattrerr(dosomething2, arg1, arg2, arg3)
In general I'm with those who say that ignoring exceptions is a bad idea, but if you do it, you should be as specific as possible as to which exceptions you think your code can tolerate.
Python 3.4 added contextlib.suppress(), a context manager that takes any number of exceptions and suppresses them within the context:
import contextlib
import pathlib

with contextlib.suppress(IOError):
    print('inside')
    print(pathlib.Path('myfile').read_text())  # Boom
    print('inside end')
print('outside')
Note that, just as with regular try/except, an exception within the context causes the rest of the context to be skipped. So, if an exception happens in the line commented with Boom, the output will be:
inside
outside

Is there a way to prevent a SystemExit exception raised from sys.exit() from being caught?

The docs say that calling sys.exit() raises a SystemExit exception which can be caught in outer levels. I have a situation in which I want to definitively and unquestionably exit from inside a test case; however, the unittest module catches SystemExit and prevents the exit. This is normally great, but the specific situation I am trying to handle is one where our test framework has detected that it is configured to point to a non-test database. In this case I want to exit and prevent any further tests from being run. Of course, since unittest traps the SystemExit and continues happily on its way, it is thwarting me.
The only option I have thought of so far is using ctypes or something similar to call exit(3) directly but this seems like a pretty fugly hack for something that should be really simple.
You can call os._exit() to directly exit, without throwing an exception:
import os
os._exit(1)
This bypasses all of the python shutdown logic, such as the atexit module, and will not run through the exception handling logic that you're trying to avoid in this situation. The argument is the exit code that will be returned by the process.
As Jerub said, os._exit(1) is your answer. But, considering it bypasses all cleanup procedures, including finally: blocks, closing files, etc, it should really be avoided at all costs. So may I present a safer(-ish) way of using it?
If your problem is SystemExit being caught at outer levels (i.e., unittest), then be the outer level yourself! Wrap your main code in a try/except block, catch SystemExit, and call os._exit() there, and only there! This way you may call sys.exit normally anywhere in the code, let it bubble out to the top level, gracefully closing all files and running all cleanups, and then calling os._exit.
You can even choose which exits are the "emergency" ones. The code below is an example of such approach:
import sys, os

EMERGENCY = 255  # can be any number actually

try:
    # wrap your whole code here ...
    # ... some code
    if x: sys.exit()
    # ... some more code
    if y: sys.exit(EMERGENCY)  # use only for emergency exits
    ...  # yes, this is valid python!

    # Might instead wrap all code in a function
    # It's a common pattern to exit with main's return value, if any
    sys.exit(main())

except SystemExit as e:
    if e.code != EMERGENCY:
        raise  # normal exit, let unittest catch it at the outer level
    else:
        os._exit(EMERGENCY)  # try to stop *that*!
As for e.code that some readers were unaware of, it is documented, as well as the attributes of all built-in exceptions.
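A quick demo of that attribute (just an illustration, not part of the approach above):

import sys

try:
    sys.exit(3)
except SystemExit as e:
    print(e.code)  # prints 3; e.code is None when sys.exit() is called with no argument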
You can also use quit, see example below:
while True:
    print('Type exit to exit.')
    response = input()
    if response == 'exit':
        quit(0)
    print('You typed ' + response + '.')
