Here is my code:
import sys

class App(object):
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, tb):
        print sys.exc_info()

app = App()

with app:
    try:
        1/0
    except:
        print 'Caught you'
        # sys.exc_clear()
And another example using the Flask app context:
from flask import Flask

app = Flask(__name__)

@app.teardown_appcontext
def teardown(exception):
    print exception

with app.app_context():
    try:
        1 / 0
    except:
        pass
It's weird that if I don't call sys.exc_clear() when I handle the exception, then when exiting the app context, sys.exc_info() still returns the exception info.
Is there any way to avoid this?
In my project, which is based on Flask, I roll back the transaction when there is an exception. Even though I handled the exception, the app context can still get it using sys.exc_info(), as the code below shows:
# AppContext.pop method
def pop(self, exc=None):
    """Pops the app context."""
    self._refcnt -= 1
    if self._refcnt <= 0:
        if exc is None:
            exc = sys.exc_info()[1]
        self.app.do_teardown_appcontext(exc)
    rv = _app_ctx_stack.pop()
    assert rv is self, 'Popped wrong app context. (%r instead of %r)' \
        % (rv, self)
    appcontext_popped.send(self.app)
I cannot ask everyone on my team to call sys.exc_clear() after handling every single exception.
What should I do to avoid this situation?
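For context, here is a rough sketch of the kind of teardown handler the question is describing (hypothetical; it assumes a Flask-SQLAlchemy db.session, which the question does not actually show):

@app.teardown_appcontext
def rollback_on_error(exc):
    # exc is whatever Flask hands to the teardown functions; because of the
    # behaviour described below, it can be an exception that was already handled.
    if exc is not None:
        db.session.rollback()
    db.session.remove()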
This is really a bug in how the AppContext handles your case.
The exception is automatically cleared the moment the current frame exits (so outside the with statement). If you called another function from within the with statement frame and handled the exception there, the sys.exc_info() information would be cleared again when that other function exits.
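A minimal sketch of that behaviour (illustrative names, Python 2 semantics to match the question's code): when the exception is handled inside a helper function, the exception state is restored when that helper's frame exits, so __exit__ sees (None, None, None):

import sys

class Probe(object):
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, tb):
        print sys.exc_info()

def handle_it():
    try:
        1 / 0
    except ZeroDivisionError:
        pass  # handled here; the exception state goes away when this frame returns

with Probe():
    handle_it()
# prints (None, None, None), unlike the original example where the exception
# was handled directly inside the with block's own frame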
The AppContext.__exit__() method is correctly notified that you handled the exception and passes the exception value on to the AppContext.pop() method.
As such the Flask AppContext.pop() method should use a different sentinel value to detect that no exception was passed in; it could detect that None was passed in not as a default but as an actual value, indicating no exceptions were raised or were properly handled.
I've filed an issue with the project requesting that this is implemented, with accompanying pull request. This was merged and will be part of a future release of Flask.
You could use a monkeypatch to backport this to the current Flask version:
from flask import app, ctx
import sys

if ctx.AppContext.pop.__defaults__ == (None,):
    # unpatched
    _sentinel = object()

    def do_teardown_appcontext(self, exc=_sentinel):
        if exc is _sentinel:
            exc = sys.exc_info()[1]
        for func in reversed(self.teardown_appcontext_funcs):
            func(exc)
        app.appcontext_tearing_down.send(self, exc=exc)

    app.Flask.do_teardown_appcontext = do_teardown_appcontext

    def pop(self, exc=_sentinel):
        """Pops the app context."""
        self._refcnt -= 1
        if self._refcnt <= 0:
            if exc is _sentinel:
                exc = sys.exc_info()[1]
            self.app.do_teardown_appcontext(exc)
        rv = ctx._app_ctx_stack.pop()
        assert rv is self, 'Popped wrong app context. (%r instead of %r)' \
            % (rv, self)
        ctx.appcontext_popped.send(self.app)

    ctx.AppContext.pop = pop
Related
How do you cause uncaught exceptions to output via the logging module rather than to stderr?
I realize the best way to do this would be:
try:
raise Exception, 'Throwing a boring exception'
except Exception, e:
logging.exception(e)
But my situation is such that it would be really nice if logging.exception(...) were invoked automatically whenever an exception isn't caught.
Here's a complete small example that also includes a few other tricks:
import sys
import logging

logger = logging.getLogger(__name__)
handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handler)

def handle_exception(exc_type, exc_value, exc_traceback):
    if issubclass(exc_type, KeyboardInterrupt):
        sys.__excepthook__(exc_type, exc_value, exc_traceback)
        return
    logger.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback))

sys.excepthook = handle_exception

if __name__ == "__main__":
    raise RuntimeError("Test unhandled")
Ignore KeyboardInterrupt so a console python program can exit with Ctrl + C.
Rely entirely on python's logging module for formatting the exception.
Use a custom logger with an example handler. This one changes the unhandled exception to go to stdout rather than stderr, but you could add all sorts of handlers in this same style to the logger object.
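For example (a hypothetical extension, not part of the original answer), a file handler can be attached to the same logger so uncaught exceptions also end up in a log file:

# Hypothetical addition: also write uncaught exceptions to a file.
file_handler = logging.FileHandler("uncaught.log")
file_handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(file_handler)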
As Ned pointed out, sys.excepthook is invoked every time an exception is raised and uncaught. The practical implication of this is that in your code you can override the default behavior of sys.excepthook to do whatever you want (including using logging.exception).
As a straw man example:
import sys

def foo(exctype, value, tb):
    print('My Error Information')
    print('Type:', exctype)
    print('Value:', value)
    print('Traceback:', tb)
Override sys.excepthook:
>>> sys.excepthook = foo
Commit obvious syntax error (leave out the colon) and get back custom error information:
>>> def bar(a, b)
My Error Information
Type: <type 'exceptions.SyntaxError'>
Value: invalid syntax (<stdin>, line 1)
Traceback: None
For more information about sys.excepthook, read the docs.
Why not:
import sys
import logging
import traceback

def log_except_hook(*exc_info):
    text = "".join(traceback.format_exception(*exc_info))
    logging.error("Unhandled exception: %s", text)

sys.excepthook = log_except_hook

None()
Here is the output with sys.excepthook as seen above:
$ python tb.py
ERROR:root:Unhandled exception: Traceback (most recent call last):
  File "tb.py", line 11, in <module>
    None()
TypeError: 'NoneType' object is not callable
Here is the output with the sys.excepthook commented out:
$ python tb.py
Traceback (most recent call last):
  File "tb.py", line 11, in <module>
    None()
TypeError: 'NoneType' object is not callable
The only difference is that the former has ERROR:root:Unhandled exception: at the beginning of the first line.
The method sys.excepthook will be invoked if an exception is uncaught: http://docs.python.org/library/sys.html#sys.excepthook
When an exception is raised and uncaught, the interpreter calls sys.excepthook with three arguments, the exception class, exception instance, and a traceback object. In an interactive session this happens just before control is returned to the prompt; in a Python program this happens just before the program exits. The handling of such top-level exceptions can be customized by assigning another three-argument function to sys.excepthook.
To build on Jacinda's answer, but using a logger object:
def catchException(logger, typ, value, traceback):
    logger.critical("My Error Information")
    logger.critical("Type: %s" % typ)
    logger.critical("Value: %s" % value)
    logger.critical("Traceback: %s" % traceback)

# Use a partially applied function
func = lambda typ, value, traceback: catchException(logger, typ, value, traceback)
sys.excepthook = func
In my case (using Python 3), when using @Jacinda's answer, the content of the traceback was not printed. Instead, it just prints the object itself: <traceback object at 0x7f90299b7b90>.
Instead I do:
import sys
import logging
import traceback

def custom_excepthook(exc_type, exc_value, exc_traceback):
    # Do not print exception when user cancels the program
    if issubclass(exc_type, KeyboardInterrupt):
        sys.__excepthook__(exc_type, exc_value, exc_traceback)
        return

    logging.error("An uncaught exception occurred:")
    logging.error("Type: %s", exc_type)
    logging.error("Value: %s", exc_value)

    if exc_traceback:
        format_exception = traceback.format_tb(exc_traceback)
        for line in format_exception:
            logging.error(repr(line))

sys.excepthook = custom_excepthook
Wrap your app entry call in a try...except block so you'll be able to catch and log (and perhaps re-raise) all uncaught exceptions. E.g. instead of:
if __name__ == '__main__':
    main()
Do this:
if __name__ == '__main__':
    try:
        main()
    except Exception as e:
        logger.exception(e)
        raise
Although @gnu_lorien's answer gave me a good starting point, my program crashes on the first exception.
I came up with a customised (and/or improved) solution, which silently logs exceptions from functions that are decorated with @handle_error.
import logging

__author__ = 'ahmed'
logging.basicConfig(filename='error.log', level=logging.DEBUG)

def handle_exception(exc_type, exc_value, exc_traceback):
    import sys
    if issubclass(exc_type, KeyboardInterrupt):
        sys.__excepthook__(exc_type, exc_value, exc_traceback)
        return
    logging.critical(exc_value.message, exc_info=(exc_type, exc_value, exc_traceback))

def handle_error(func):
    import sys

    def __inner(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception, e:
            exc_type, exc_value, exc_tb = sys.exc_info()
            handle_exception(exc_type, exc_value, exc_tb)
            print(e.message)
    return __inner

@handle_error
def main():
    raise RuntimeError("RuntimeError")

if __name__ == "__main__":
    for _ in xrange(1, 20):
        main()
To answer the question from Mr.Zeus discussed in the comment section of the accepted answer, I use this to log uncaught exceptions in an interactive console (tested with PyCharm 2018-2019). I found that sys.excepthook does not work in a Python shell, so I looked deeper and found that I could use sys.exc_info instead. However, sys.exc_info takes no arguments, unlike sys.excepthook, which takes three arguments.
Here, I use both sys.excepthook and sys.exc_info to log exceptions both in an interactive console and in a script, with a wrapper function. To attach a hook function to both of them, it has two different code paths depending on whether arguments are given or not.
Here's the code:
def log_exception(exctype, value, traceback):
    logger.error("Uncaught exception occurred!",
                 exc_info=(exctype, value, traceback))

def attach_hook(hook_func, run_func):
    def inner(*args, **kwargs):
        if not (args or kwargs):
            # This condition is for sys.exc_info
            local_args = run_func()
            hook_func(*local_args)
        else:
            # This condition is for sys.excepthook
            hook_func(*args, **kwargs)
        return run_func(*args, **kwargs)
    return inner

sys.exc_info = attach_hook(log_exception, sys.exc_info)
sys.excepthook = attach_hook(log_exception, sys.excepthook)
The logging setup can be found in gnu_lorien's answer.
Maybe you could do something at the top of a module that redirects stderr to a file, and then log that file at the bottom:
sock = open('error.log', 'w')
sys.stderr = sock
doSomething()  # makes errors and they will log to error.log
logging.exception(open('error.log', 'r').read())
I have a script executing several independent functions in turn. I would like to collect the errors/exceptions happening along the way, in order to send an email with a summary of the errors.
What is the best way to raise these errors/exceptions and collect them, while allowing the script to complete and go through all the steps? They are independent, so it does not matter if one crashes. The remaining ones can still run.
def step_1():
    # Code that can raise errors/exceptions
    ...

def step_2():
    # Code that can raise errors/exceptions
    ...

def step_3():
    # Code that can raise errors/exceptions
    ...

def main():
    step_1()
    step_2()
    step_3()
    send_email_with_collected_errors()

if __name__ == '__main__':
    main()
Should I wrap each step in a try..except block in the main() function? Should I use a decorator on each step function, in addition to an error collector?
You could wrap each function call in try/except; this is usually better for small, simple scripts.
def step_1():
    # Code that can raise errors/exceptions
    ...

def step_2():
    # Code that can raise errors/exceptions
    ...

def step_3():
    # Code that can raise errors/exceptions
    ...

def main():
    try:
        step_1_result = step_1()
        log.info('Result of step_1 was {}'.format(step_1_result))
    except Exception as e:
        log.error('Exception raised. {}'.format(e))
        step_1_result = e

    try:
        step_2_result = step_2()
        log.info('Result of step_2 was {}'.format(step_2_result))
    except Exception as e:
        log.error('Exception raised. {}'.format(e))
        step_2_result = e

    try:
        step_3_result = step_3()
        log.info('Result of step_3 was {}'.format(step_3_result))
    except Exception as e:
        log.error('Exception raised. {}'.format(e))
        step_3_result = e

    send_email_with_collected_errors(
        step_1_result,
        step_2_result,
        step_3_result
    )

if __name__ == '__main__':
    main()
For something more elaborate you could use a decorator that'd construct a list of errors/exceptions caught. For example
class ErrorIgnore(object):
    def __init__(self, errors, errorreturn=None, errorcall=None):
        self.errors = errors
        self.errorreturn = errorreturn
        self.errorcall = errorcall

    def __call__(self, function):
        def returnfunction(*args, **kwargs):
            try:
                return function(*args, **kwargs)
            except Exception as E:
                if type(E) not in self.errors:
                    raise E
                if self.errorcall is not None:
                    self.errorcall(E, *args, **kwargs)
                return self.errorreturn
        return returnfunction
Then you could use it like this:
exceptions = []

def errorcall(E, *args):
    print 'Exception raised {}'.format(E)
    exceptions.append(E)

@ErrorIgnore(errors=[ZeroDivisionError, ValueError], errorreturn=None, errorcall=errorcall)
def step_1():
    # Code that can raise errors/exceptions
    ...

def main():
    step_1()
    step_2()
    step_3()
    send_email_with_collected_errors(exceptions)

if __name__ == '__main__':
    main()
Use simple try/except statements and do logging for the exceptions; that would be the standard way to collect all your errors.
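A minimal sketch of that approach (it reuses the step functions and the mailing helper from the question; the logging configuration here is only illustrative):

import logging

logging.basicConfig(filename='steps.log', level=logging.ERROR)

collected = []
for step in (step_1, step_2, step_3):  # functions assumed from the question
    try:
        step()
    except Exception as e:
        logging.exception('%s failed', step.__name__)
        collected.append(e)

if collected:
    send_email_with_collected_errors(collected)  # helper assumed from the question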
There are two options:
Use a decorator in which you catch all exceptions and save them somewhere.
Add try/except everywhere.
Using a decorator might be much better and cleaner, and the code will be easier to maintain.
How to store the errors? That's your decision. You can add them to a list, or create a logging class that receives exceptions and lets you retrieve them after everything is done. It depends on your project and the size of the code.
Simple logging class:
class LoggingClass(object):
def __init__(self):
self.exceptions = []
def add_exception(self, exception):
self.exceptions.append(exception)
def get_all(self):
return self.exceptions
Create an instance of the class in your script, catch exceptions in the decorator and add them to the instance (a global variable might also be OK), as sketched below.
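A minimal sketch of what such a decorator could look like (the decorator name, the collector instance and the failing example function are illustrative, not prescribed by the answer):

import functools

collector = LoggingClass()

def collect_errors(func):
    # Hypothetical decorator: run the function, store any exception on the
    # collector instead of letting it propagate, and keep going.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            collector.add_exception(e)
            return None
    return wrapper

@collect_errors
def step_1():
    1 / 0  # example failure

step_1()
print(collector.get_all())  # the ZeroDivisionError is stored rather than raised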
My Celery task raises a custom exception NonTransientProcessingError, which is then caught by AsyncResult.get(). Tasks.py:
class NonTransientProcessingError(Exception):
    pass

@shared_task()
def throw_exception():
    raise NonTransientProcessingError('Error raised by POC model for test purposes')
In the Python console:
from my_app.tasks import *
r = throw_exception.apply_async()
try:
r.get()
except NonTransientProcessingError as e:
print('caught NonTrans in type specific except clause')
But my custom exception is my_app.tasks.NonTransientProcessingError, whereas the exception raised by AsyncResult.get() is celery.backends.base.NonTransientProcessingError, so my except clause fails.
Traceback (most recent call last):
  File "<input>", line 4, in <module>
  File "/...venv/lib/python3.5/site-packages/celery/result.py", line 175, in get
    raise meta['result']
celery.backends.base.NonTransientProcessingError: Error raised by POC model for test purposes
If I catch the exception within the task, it works fine. It is only when the exception is raised to the .get() call that it is renamed.
How can I raise a custom exception and catch it correctly?
I have confirmed that the same happens when I define a Task class and raise the custom exception in its on_failure method. The following does work:
try:
    r.get()
except Exception as e:
    if type(e).__name__ == 'NonTransientProcessingError':
        print('specific exception identified')
    else:
        print('caught generic but not identified')
Outputs:
specific exception identified
But this can't be the best way of doing this? Ideally I'd like to catch exception superclasses for categories of behaviour.
I'm using Django 1.8.6, Python 3.5 and Celery 3.1.18, with a Redis 3.1.18, Python redis lib 2.10.3 backend.
import celery
from celery import shared_task

class NonTransientProcessingError(Exception):
    pass

class CeleryTask(celery.Task):
    def on_failure(self, exc, task_id, args, kwargs, einfo):
        if isinstance(exc, NonTransientProcessingError):
            """
            deal with NonTransientProcessingError
            """
            pass

    def run(self, *args, **kwargs):
        pass

@shared_task(base=CeleryTask)
def add(x, y):
    raise NonTransientProcessingError
Use a base Task with on_failure callback to catch custom exception.
It might be a bit late, but I have a solution for this issue. I'm not sure if it is the best one, but it solved my problem at least.
I had the same issue: I wanted to catch the exception produced in the Celery task, but the result was of the class celery.backends.base.CustomException. The solution is in the following form:
import celery, time
from celery.result import AsyncResult
from celery.states import FAILURE

def reraise_celery_exception(info):
    exec("raise {class_name}('{message}')".format(class_name=info.__class__.__name__, message=info.__str__()))

class CustomException(Exception):
    pass

@celery.task(name="test")
def test():
    time.sleep(10)
    raise CustomException("Exception is raised!")

def run_test():
    task = test.delay()
    return task.id

def get_result(id):
    task = AsyncResult(id)
    if task.state == FAILURE:
        reraise_celery_exception(task.info)
In this case you prevent your program from raising celery.backends.base.CustomException, and force it to raise the right exception.
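A rough usage sketch under the same assumptions (the worker must have finished before get_result() is called, and CustomException must be resolvable in the module where the re-raise happens):

task_id = run_test()
# ... wait for the worker to finish processing ...
try:
    get_result(task_id)
except CustomException as e:
    print("Caught the original exception type:", e)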
I have the following chunk of code
class APITests(unittest.TestCase):
    def setUp(self):
        app.config['TESTING'] = True
        self.app = app.test_client()
        app.config['SECRET_KEY'] = 'kjhk'

    def test_exn(self, query):
        query.all.side_effect = ValueError('test')
        rv = self.app.get('/exn/')
        assert rv.status_code == 400
I want to check the return code of self.app.get('/exn/'). However, I notice that query.all() propagates the exception to the test case as opposed to trapping it and returning an error code.
How do I check the return codes for invalid inputs when exceptions are thrown in Flask?
You would have to do the following to trap the exception.
class APITests(unittest.TestCase):
    def setUp(self):
        app.config['TESTING'] = True
        self.app = app.test_client()
        app.config['SECRET_KEY'] = 'kjhk'

    def test_exn(self, query):
        query.all.side_effect = ValueError('test')
        with self.assertRaises(ValueError):
            rv = self.app.get('/exn/')
            assert rv.status_code == 400
For the assertRaises to work, the function call must be contained within some sort of wrapper that catches the exception.
If it is just run like you did, it executes on its own, the exception is not caught, and the test case becomes an error instead.
By wrapping the function call in a with statement, the exception is passed back to assertRaises in the clause, and assertRaises can then do the comparison on the exception.
To read up more on this, check this link
You can assertRaise the exception from werkzeug.
Example:
from werkzeug import exceptions

def test_bad_request(self):
    with self.app as c:
        with self.assertRaises(exceptions.BadRequest):
            c.get('/exn/', follow_redirects=True)