My understanding is that, normally, when an error occurs, the exception propagates up through all the calling functions and is then displayed in the console. However, some packages do their own error handling; GUI-related packages in particular often don't show errors at all and just continue execution.
How can we override such behaviour in general? When I write GUI functions, I would like to see the errors! I found a post that explains how to do this for Tkinter. How can it be done in Matplotlib?
Example code:
import matplotlib.pyplot as plt

def onclick(event):
    print(event.x, event.y)
    raise ValueError('SomeError')  # this error is thrown but isn't displayed

fig = plt.figure(5)
fig.clf()
try:  # if figure was open before, try to disconnect the button
    fig.canvas.mpl_disconnect(cid_button)
except:
    pass
cid_button = fig.canvas.mpl_connect('button_press_event', onclick)
Indeed, when the Python interpreter encounters an exception that is never caught, it prints a so-called traceback to stderr before exiting. However, GUI packages usually catch and swallow all exceptions in order to prevent the Python interpreter from exiting. You would like to display that traceback somewhere, but in the case of a GUI application you have to decide where to show it. The standard library has a module that helps you work with such tracebacks, aptly named traceback.
You then have to catch the exception before the GUI toolkit does. I do not know a general way to insert a callback error handler, but you can manually add error handling to each of your callbacks. The best way to do this is to write a function decorator that you then apply to your callbacks.
import traceback, functools

def print_errors_to_stdout(fun):
    @functools.wraps(fun)
    def wrapper(*args, **kw):
        try:
            return fun(*args, **kw)
        except Exception:
            traceback.print_exc()
            raise
    return wrapper

@print_errors_to_stdout
def onclick(event):
    print(event.x, event.y)
    raise ValueError('SomeError')
The decorator print_errors_to_stdout takes a function and returns a new function that embeds the call to the original function in a try ... except block and, in case of an exception, prints the traceback with the help of traceback.print_exc(). (The wrapper itself is decorated with functools.wraps so that the generated wrapper function, among other things, keeps the docstring of the original function.) If you would like to show the traceback somewhere else, traceback.format_exc() gives you a string that you can then show/store somewhere. The decorator also re-raises the exception, so that the GUI toolkit still gets a chance to take its own actions, typically just swallowing the exception.
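If, for example, you wanted the traceback in a file rather than on the console, a minimal variant of the decorator could look like this (the decorator name and log file are my own illustration, not part of the original answer):
import traceback, functools

def log_errors_to_file(logfile):  # hypothetical name, for illustration only
    def decorator(fun):
        @functools.wraps(fun)
        def wrapper(*args, **kw):
            try:
                return fun(*args, **kw)
            except Exception:
                # format_exc() returns the traceback as a string
                with open(logfile, 'a') as f:
                    f.write(traceback.format_exc())
                raise
        return wrapper
    return decorator

@log_errors_to_file('callback_errors.log')
def onclick(event):
    print(event.x, event.y)
    raise ValueError('SomeError')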
Related
I have made a program in Python that I converted into a .exe using auto-py-to-exe, and I'm wondering how to stop the .exe from closing itself after an error (Python exception). The .exe closes itself at the speed of light and you can't read the error.
Does someone know how to make it not close itself?
Using input() doesn't work if the exception happens in a pip-installed library.
Thanks
You can use a try-except based system (which you have to build so that it catches every exception in your code), and to print the exception you can use the traceback module, like so:
import sys, traceback

try:
    ## code with error ##
except Exception:
    print("Exception in user code:")
    print("-" * 60)
    traceback.print_exc(file=sys.stdout)
    print("-" * 60)
or you could use a simpler form like:
try:
    ## code with error ##
except Exception as e:
    print(e, file=open('crash log.txt', 'a'))
which only prints the exception's message (like "file not found"), not the full traceback.
I should also point out that the finally keyword exists for the purpose of executing code whether an exception arose or not:
try:
    ## code with error ##
except:  # optional
    ## code to execute in case of exception ##
finally:
    ## code executed either way ##
Something you could do on top of that is logging everything your program does with
print(status_of_program, file=open('log.txt','a'))
This is useful to see exactly at what point the program crashed and, more generally, to follow the program in action step by step.
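If you prefer not to reopen the file on every call, a small sketch using the standard logging module (my own suggestion, not part of the original answer) does the same job and adds timestamps:
import logging

# open log.txt once and prepend a timestamp to every line
logging.basicConfig(filename='log.txt', level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s')

logging.info('program started')
try:
    pass  # ## code with error ##
except Exception:
    logging.exception('program crashed')  # message plus full traceback
    raise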
One thing you should do, though, is properly test the program while it is still in .py form; if it works there, you can reasonably assume the error comes from the packaging step itself, and consult the documentation or try exporting simpler programs to narrow down the difference (and therefore the error).
I'd suggest consulting
https://docs.python.org/3/library/exceptions.html
to learn about the exception types and the try-except construct.
If input() doesn't work, try reading from sys.stdin directly (for example sys.stdin.readline()).
I gather that this is a console app. If it's a GUI app, you can use a similar approach but the details will be different.
The approach I'd use is to set sys.excepthook to be your own function that prints the exception, then waits for input. You can call the existing exception hook to actually do the printing.
import sys, traceback as tb

oldhook = sys.excepthook

def waitexcepthook(type, exception, traceback):
    oldhook(type, exception, traceback)
    input()

sys.excepthook = waitexcepthook
How do you best handle multiple levels of methods in a call hierarchy that raise exceptions, so that if it is a fatal error the program will exit (after displaying an error dialog)?
I'm basically coming from Java. There I would simply declare any methods as throws Exception, re-throw it and catch it somewhere at the top level.
However, Python is different. My Python code basically looks like the below.
EDIT: added much simpler code...
Main entry function (plugin.py):
def main(catalog):
    print "Executing main(catalog)... "
    # instantiate generator
    gen = JpaAnnotatedClassGenerator(options)
    # run generator
    try:
        gen.generate_bar()  # doesn't bubble up
    except ValueError as error:
        Utilities.show_error("Error", error.message, "OK", "", "")
        return
    # ... usually do the real work here if no error
JpaAnnotatedClassGenerator class (engine.py):
class JpaAnnotatedClassGenerator:
    def generate_bar(self):
        self.generate_value_error()

    def generate_value_error(self):
        raise ValueError("generate_value_error() raised an error!")
I'd like to return to the caller with an exception that is re-thrown to that caller's caller, and so on, until it reaches the outermost try-except, which displays an error dialog with the exception's message.
QUESTION:
How is this best done in Python? Do I really have to repeat try-except for every method being called?
BTW: I am using Python 2.6.x and I cannot upgrade because I am bound to MySQL Workbench, which provides the interpreter (Python 3 is on their upgrade list).
If you don't catch an exception, it bubbles up the call stack until someone does. If no one catches it, the runtime will get it and die with the exception error message and a full traceback. IOW, you don't have to explicitly catch and re-raise your exception everywhere, which would actually defeat the whole point of having exceptions. Actually, despite being primarily used for errors / unexpected conditions, exceptions are first and foremost a control flow tool that lets you break out of the normal execution flow and pass control (and some information) to any arbitrary place up in the call stack.
From this POV your code seems mostly correct (caveat: I didn't bother reading the whole thing, just had a quick look), except (no pun intended) for a couple of points:
First, you should define your own specific exception class(es) instead of using the builtin ValueError (you can inherit from it if it makes sense to you) so you're sure you only catch the exact exceptions you expect (quite a few layers "under" your own code could raise a ValueError that you didn't expect).
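For illustration, a minimal sketch of such exception classes (the names are made up, not from your code):
class GeneratorError(Exception):
    """Base class for all errors raised by the generator."""

class InvalidTableError(GeneratorError):
    """Raised when a table cannot be processed."""

# in engine.py
raise InvalidTableError("table has no primary key")

# in plugin.py
try:
    gen.generate_bar()
except GeneratorError as error:
    Utilities.show_error("Error", str(error), "OK", "", "")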
Then, you may (or may not, depending on how your code is used) also want to add a catch-all top-level handler in your main() function so you can properly log all errors (using the logging module), free resources, do some cleanup, etc. before your process dies.
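A possible sketch of such a top-level handler, assuming the logging module is configured elsewhere (the helper name run_generator is hypothetical):
import logging, sys

def main(catalog):
    try:
        run_generator(catalog)  # hypothetical helper doing the real work
    except Exception:
        logging.exception("unhandled error in plugin")  # message plus traceback
        # free resources / clean up here
        sys.exit(1)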
As a side note, you may also want to learn and use proper string formatting and, at least if performance is an issue, avoid duplicate constant calls like this:
elif AnnotationUtil.is_embeddable_table(table) and AnnotationUtil.is_secondary_table(table):
    # ...
elif AnnotationUtil.is_embeddable_table(table):
    # ...
elif AnnotationUtil.is_secondary_table(table):
    # ...
Given Python's very dynamic nature, neither the compiler nor the runtime can safely optimize those repeated calls (the method could have been dynamically redefined between calls), so you have to do it yourself.
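For example, you could compute each result once and reuse it (a small sketch, variable names are my own):
# call each predicate once and reuse the results
is_embeddable = AnnotationUtil.is_embeddable_table(table)
is_secondary = AnnotationUtil.is_secondary_table(table)

if is_embeddable and is_secondary:
    pass  # ...
elif is_embeddable:
    pass  # ...
elif is_secondary:
    pass  # ...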
EDIT:
When trying to catch the error in the main() function, exceptions DON'T bubble up, but when I use this pattern one level deeper, bubbling-up seems to work.
You can easily check that it works correctly with a simple MCVE:
def deeply_nested():
    raise ValueError("foo")

def nested():
    return deeply_nested()

def firstline():
    return nested()

def main():
    try:
        firstline()
    except ValueError as e:
        print("got {}".format(e))
    else:
        print("you will not see me")

if __name__ == "__main__":
    main()
It appears the software that supplies the Python env is somehow treating the main plugin file in a wrong way. Looks like I will have to check with the MySQL Workbench guys.
Uhu... Even embedded, the exception mechanism should still work as expected - at least for the part of the call stack that depends on your main function (can't tell what happens higher up the call stack). But given how MySQL treats errors (what about having your data silently truncated?), I wouldn't be especially surprised if they hacked the runtime to silently pass any error in plugin code xD
It is fine for errors to bubble up
Python's exceptions are unchecked, meaning you have no obligation to declare or handle them. Even if you know that something may raise, only catch the error if you intend to do something with it. It is fine to have exception-transparent layers, which gracefully abort as an exception bubbles through them:
def logged_get(map: dict, key: str):
    result = map[key]  # this may raise, but there is no state to corrupt
    # the following is not meaningful if an exception occurred
    # it is fine for it to be skipped by the exception bubbling up
    print(map, '[%s]' % key, '=>', result)
    return result
In this case, logged_get will simply forward any KeyError (and others) that are raised by the lookup.
If an outer caller knows how to handle the error, it can do so.
So, just call self.create_collection_embeddable_class_stub the way you do.
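For illustration, an outer caller that does know how to handle the error could look like this (the fallback dict is a made-up example):
def get_with_fallback(map: dict, defaults: dict, key: str):
    try:
        return logged_get(map, key)
    except KeyError:
        # the lookup failed; fall back to a default value instead
        return defaults[key]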
It is fine for errors to kill the application
Even if nothing handles an error, the interpreter does. You get a stack trace, showing what went wrong and where. Fatal errors of the kind "only happens if there is a bug" can "safely" bubble up to show what went wrong.
In fact, exiting the interpreter and assertions use this mechanism as well.
>>> assert 2 < 1, "This should never happen"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: This should never happen
For many services, you can use this even in deployment - for example, systemd would log that for a Linux system service. Only try to suppress errors for the outside if security is a concern, or if users cannot handle the error.
It is fine to use precise errors
Since exceptions are unchecked, you can use arbitrarily many without overstraining your API. This allows you to use custom errors that signal different levels of problems:
class DBProblem(Exception):
    """Something is wrong about our DB..."""

class DBEntryInconsistent(DBProblem):
    """A single entry is broken"""

class DBInconsistent(DBProblem):
    """The entire DB is foobar!"""
It is generally a good idea not to re-use builtin errors, unless your use-case actually matches their meaning. This allows you to handle errors precisely if needed:
try:
    gen.generate_classes(catalog)
except DBEntryInconsistent:
    logger.error("aborting due to corrupted entry")
    sys.exit(1)
except DBInconsistent as err:
    logger.error("aborting due to corrupted DB")
    Utility.inform_db_support(err)
    sys.exit(1)
# do not handle ValueError, KeyError, MemoryError, ...
# they will show up as a stack trace
I went through "Logging uncaught exceptions in Python" and I have tried this:
import web
import sys

app = web.application((
    '/test', 'test'), globals())

def test_func(e_type, value, traceback):
    print "Handled exception here.."

class test:
    def GET(self):
        a = 1/0

if __name__ == "__main__":
    sys.excepthook = test_func
    app.run()
Here, you can easily see that if a GET /test request comes in, I am deliberately raising a ZeroDivisionError. As I have overridden sys.excepthook, I expect the method test_func to execute on ZeroDivisionError.
However, this piece of code is not working as expected. I have observed that when I try to override excepthook in standalone code (not in a web app), it works fine: the new (overridden) method is called properly.
Any idea why the behaviour is different?
Using web.py, one way of catching exceptions yourself would be to add a custom processor:
...
def error_processor(handler):
    try:
        return handler()
    except:
        # do something reasonable to handle the exception
        return 'something happened...'

app.add_processor(error_processor)
app.run()
...
Otherwise web.py will catch the exception and show a default error message instead.
The Python docs say:
sys.excepthook(type, value, traceback)
This function prints out a given traceback and exception to sys.stderr.
So sys.excepthook is called when an uncaught exception needs to be printed to the terminal. Now, web.py itself handles exceptions: if you don't assign sys.excepthook, web.py captures the exception and shows it in the browser. And since web.py is already handling the exceptions, setting sys.excepthook makes no difference.
If web.py didn't handle the exception itself, then it would be uncaught and sys.excepthook would be called.
If you really want to handle the exception yourself in this code, try searching the web.py docs and source code to figure out how web.py handles exceptions, and customize that.
The Python docs also say:
sys.__excepthook__
These objects contain the original values of displayhook and excepthook at the start of the program. They are saved so that displayhook and excepthook can be restored in case they happen to get replaced with broken objects.
So my guess is that web.py is using that, which is why your custom handler is not called.
I'm trying to raise an exception within an except: block but the interpreter tries to be helpful and prints stack traces 'by force'. Is it possible to avoid this?
A little bit of background information:
I'm toying with urwid, a TUI library for python. The user interface is started by calling urwid.MainLoop.run() and ended by raising urwid.ExitMainLoop(). So far this works fine but what happens when another exception is raised? E.g. when I'm catching KeyboardInterrupt (the urwid MainLoop does not), I do some cleanup and want to end the user interface - by raising the appropriate exception. But this results in a screen full of stack traces.
A little research showed that Python 3 remembers chained exceptions, and one can explicitly raise an exception with a 'cause': raise B() from A(). I learned a few ways to change or append data regarding the raised exceptions, but I found no way to 'disable' this feature. I'd like to avoid the printing of stack traces and lines like "The above exception was the direct cause of..." and just raise the interface-ending exception within an except: block like I would outside of one.
Is this possible or am I doing something fundamentally wrong?
Edit:
Here's an example resembling my current architecture, resulting in the same problem:
#!/usr/bin/env python3
import time

class Exit_Main_Loop(Exception):
    pass

# UI main loop
def main_loop():
    try:
        while True:
            time.sleep(0.1)
    except Exit_Main_Loop as e:
        print('Exit_Main_Loop')
        # do some UI-related clean up

# my main script
try:
    main_loop()
except KeyboardInterrupt as e:
    print('KeyboardInterrupt')
    # do some clean up
    raise Exit_Main_Loop()  # signal the UI to terminate
Unfortunately I can't change main_loop to except KeyboardInterrupt as well. Is there a pattern to solve this?
I still don't quite understand your explanation, but from the code:
try:
    main_loop()
except KeyboardInterrupt as e:
    print('KeyboardInterrupt')
    # do some clean up
    raise Exit_Main_Loop()  # signal the UI to terminate
There is no way that main_loop could ever see the Exit_Main_Loop() exception. By the time you get to the KeyboardInterrupt handler, main_loop is guaranteed to have already finished (in this case, because of an unhandled KeyboardInterrupt), so its exception handler is no longer active.
So, what happens is that you raise a new exception that nobody catches. And when an exception gets to the top of your code without being handled, Python handles it automatically by printing a traceback and quitting.
If you want to convert one type of exception into another so main_loop can handle it, you have to do that somewhere inside the try block.
You say:
Unfortunately I can't change main_loop to except KeyboardInterrupt as well.
If that's true, there's no real answer to your problem… but I'm not sure there's a problem in the first place, other than the one you created. Just remove the Exit_Main_Loop() from your code, and isn't it already doing what you wanted? If you're just trying to prevent Python from printing a traceback and exiting, this will take care of it for you.
If there really is a problem—e.g., the main_loop code has some cleanup code that you need to get executed no matter what, and it's not getting executed because it doesn't handle KeyboardInterrupt—there are two ways you could work around this.
First, as the signal docs explain:
The signal.signal() function allows to define custom handlers to be executed when a signal is received. A small number of default handlers are installed: … SIGINT is translated into a KeyboardInterrupt exception.
So, all you have to do is replace the default handler with a different one:
import signal

def handle_sigint(signum, frame):
    raise ExitMainLoop()

signal.signal(signal.SIGINT, handle_sigint)
Just do this before you start main_loop, and you should be fine. Keep in mind that there are some limitations with threaded programs, and with Windows, but if none of those limitations apply, you're golden; a ctrl-C will trigger an ExitMainLoop exception instead of a KeyboardInterrupt, so the main loop will handle it. (You may want to also add an except ExitMainLoop: block in your wrapper code, in case there's an exception outside of main_loop. However, you could easily write a contextmanager that sets and restores the signal around the call to main_loop, so there isn't any outside code that could possibly raise it.)
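Such a context manager could look roughly like this (a sketch of my own, not from the original answer):
import signal
from contextlib import contextmanager

@contextmanager
def sigint_raises(exc_type):
    """Temporarily translate SIGINT into exc_type instead of KeyboardInterrupt."""
    def handler(signum, frame):
        raise exc_type()
    old_handler = signal.signal(signal.SIGINT, handler)
    try:
        yield
    finally:
        signal.signal(signal.SIGINT, old_handler)  # restore the previous handler

# usage:
# with sigint_raises(ExitMainLoop):
#     main_loop()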
Alternatively, even if you can't edit the main_loop source code, you can always monkeypatch it at runtime. Without knowing what the code looks like, it's impossible to explain exactly how to do this, but there's almost always a way to do it.
I'm using a PyQt4 user interface. I've redirected stderr to a log file for easy debugging and trouble-shooting, but now I need to display error messages to the user when an error occurs.
My issue is that I need to catch an exception when it happens and let the user know that it happened, but still let the traceback propagate to stderr (i.e. the log file).
If I do something like this:
def updateResults(self):
    try:
        # code that updates the results
    except:
        # display error message box
This will catch the exception but not propagate it to the error log.
Is there some way to show the user the message and then continue to propagate the error?
Would this work?
except Exception as e:
    # display error message box
    raise e
Is there a better way to accomplish my goal?
I think you are thinking about this in the wrong way. You shouldn't be re-raising the error simply to log it further down the line. The canonical way of doing this in Python is to use the logging module. Adapted from the docs:
import logging

LOG_FILENAME = '/tmp/logging_example.out'
logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG,)

...

try:
    # code
except:
    logging.debug('Something bad happened', exc_info=True)
    # display message box
    # raise (if necessary)
This gives a far more flexible logging system than relying on errors printed to sys.stderr. You may not need to re-raise the exception if you can recover from it in some way.
Exactly, but you can just
raise
which will re-raise the currently handled exception.
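Put together, a minimal sketch of that pattern (the message-box call is only a placeholder, and self.refresh() is a hypothetical name):
def updateResults(self):
    try:
        self.refresh()  # hypothetical code that updates the results
    except Exception:
        # display error message box here
        raise  # bare raise: re-raises the same exception so it still reaches the log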
Some additional information:
(With PyQt4) you will also need to rebind sys.excepthook to your own function to catch all uncaught exceptions. Otherwise PyQt will just print them to the console, which may not be what you need...
import sys

def excepthook(exc_type, exc_val, tracebackobj):
    # do something useful with the uncaught exception
    ...

def main():
    # rebind excepthook
    sys.excepthook = excepthook
    ...
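A minimal sketch of what "something useful" could be here, assuming you want the traceback both in the redirected stderr log and in a message box (the exact widget call is just one option):
import sys, traceback
from PyQt4 import QtGui

def excepthook(exc_type, exc_val, tracebackobj):
    # format the full traceback as a single string
    msg = ''.join(traceback.format_exception(exc_type, exc_val, tracebackobj))
    sys.stderr.write(msg)  # still ends up in the redirected log file
    QtGui.QMessageBox.critical(None, "Unexpected error", msg)

def main():
    app = QtGui.QApplication(sys.argv)
    sys.excepthook = excepthook  # rebind before starting the event loop
    # ... build and show the main window here ...
    sys.exit(app.exec_())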