How to ignore certain Python errors from Sentry capture

I have Sentry configured to capture all errors from a Django+Celery application. It works OK, but an obnoxious case arises when I have to restart my Celery workers, PostgreSQL database, or messaging server, which causes thousands of "database/messaging server cannot be reached" errors of various kinds. This pollutes the Sentry reports and sometimes even exceeds my event quota.
Their docs mention an "ignore_exceptions" parameter, but it belongs to their old deprecated client, which I'm not using and which isn't recommended for new projects. How would you do this in the new API?

It took me some source-diving to actually find it, but the option in the new SDK is "ignore_errors". It takes an iterable where each element can be either a string or a type (like the old interface).
I hesitate to link to it because it's an internal method that could change at any time, but here is a snapshot of it at the time of writing.
As an example (reimplementing Markus's answer):
import sentry_sdk
sentry_sdk.init(ignore_errors=[IgnoredErrorFoo, IgnoredErrorBar])
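For the scenario in the original question (database and broker outages during restarts), a minimal sketch might look like the following; the exact exception classes worth ignoring depend on your stack, so the ones below are assumptions rather than a definitive list:
import sentry_sdk
from django.db import OperationalError  # typically raised when PostgreSQL is unreachable
from kombu.exceptions import OperationalError as BrokerOperationalError  # typically raised when the broker is unreachable

sentry_sdk.init(
    dsn="SENTRY_DSN",
    ignore_errors=[OperationalError, BrokerOperationalError],
)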

You can use before_send to filter errors by arbitrary criteria. Since it's unclear what you actually want to filter by, here's an example that filters by type. However, you can extend it with custom logic to e.g. match by exception message.
import sentry_sdk

def before_send(event, hint):
    if 'exc_info' in hint:
        exc_type, exc_value, tb = hint['exc_info']
        if isinstance(exc_value, (IgnoredErrorFoo, IgnoredErrorBar)):
            return None
    return event

sentry_sdk.init(before_send=before_send)

To ignore all errors of a given type, there are two ways:
Use before_send
import sentry_sdk
from rest_framework.exceptions import ValidationError

def before_send(event, hint):
    if 'exc_info' in hint:
        exc_type, exc_value, tb = hint['exc_info']
        if isinstance(exc_value, (KeyError, ValidationError)):
            return None
    return event

sentry_sdk.init(
    dsn='SENTRY_DSN',
    before_send=before_send
)
Use ignore_errors
import sentry_sdk
from rest_framework.exceptions import ValidationError

sentry_sdk.init(
    dsn='SENTRY_DSN',
    ignore_errors=[
        KeyError,
        ValidationError,
    ]  # all events whose exception is a KeyError or a ValidationError will be ignored
)
To ignore a specific error event (for example ValidationError('my error message')), match on the message in a custom before_send:
def before_send(event, hint):
    if 'exc_info' in hint:
        exc_type, exc_value, tb = hint['exc_info']
        if exc_value.args[0] in ['my error message', 'my error message 2', ...]:
            return None
    return event
This is documented in the sentry-python documentation at: https://docs.sentry.io/platforms/python/guides/django/configuration/filtering/
Note: the hint parameter has three cases; you need to know which case applies to your error.
https://docs.sentry.io/platforms/python/guides/django/configuration/filtering/hints/
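As a rough sketch of what that distinction can look like in practice (the 'exc_info' and 'log_record' keys follow the hints page linked above, so verify them against your SDK version; 'my error message' is just a placeholder):
import sentry_sdk

def before_send(event, hint):
    if 'exc_info' in hint:
        # event originated from a captured exception
        exc_type, exc_value, tb = hint['exc_info']
        if 'my error message' in str(exc_value):
            return None
    elif 'log_record' in hint:
        # event originated from the logging integration
        if 'my error message' in hint['log_record'].getMessage():
            return None
    return event

sentry_sdk.init(dsn='SENTRY_DSN', before_send=before_send)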

Related

why is pylint complaining about missing Exception.message?

I usually declare a base exception for my modules which does nothing; from that one I derive custom errors that could have additional custom data: AFAIK this is the Right Way™ to use exceptions in Python.
I'm also used to build a human readable message from that custom info and pass it along, so I can refer to that message in error handlers. This is an example:
# this code is meant to be compatible with Python-2.7.x

class MycoolmoduleException(Exception):
    '''base Mycoolmodule Exception'''

class TooManyFoo(MycoolmoduleException):
    '''got too many Foo things'''
    def __init__(self, foo_num):
        self.foo_num = foo_num
        msg = "someone passed me %d Foos" % foo_num
        super(TooManyFoo, self).__init__(msg)

# .... somewhere else ....
try:
    do_something()
except Exception as exc:
    tell_user(exc.message)

# real world example using Click
@click.command()
@click.pass_context
def foo(ctx):
    '''do something'''
    try:
        pass  # ... try really hard to do something useful ...
    except MycoolmoduleException as exc:
        click.echo(exc.message, err=True)
        ctx.exit(-1)
Now, when I run that code through pylint-2.3.1 it complains about my use of MycoolmoduleException.message:
coolmodule.py:458:19: E1101: Instance of 'MycoolmoduleException' has no 'message' member (no-member)
That kind of code always worked for me (both in Python2 and Python3) and hasattr(exc, 'message') in the same code returns True, so why is pylint complaining? And/or: how could that code be improved?
(NB: the same happens if I try to catch the built-in Exception instead of my own MycoolmoduleException)

Adding assertion functionality to python logging module?

I use assertions a lot in my code, and I would like to log any assertion errors that I have. After googling the problem, I didn't find a convenient solution.
So what I came up with was adding a method to the logging.Logger class.
import logging

def assertion(self, bool_condition, message):
    try:
        assert bool_condition, message
    except AssertionError:
        self.exception(message)
        raise

logging.Logger.assertion = assertion

""" apply log config """
log = logging.getLogger(__name__)
log.assertion(1 == 2, 'Assertion failed.')
It seems to do the job but I was wondering if it is a good practice to do so.

Is there a way to handle exceptions automatically with Python Click?

Click's exception handling documentation mentions that certain kinds of exceptions such as Abort, EOFError and KeyboardInterrupt are automatically handled gracefully by the framework.
For the application I'm writing, there are a lot of points from which exceptions could be generated. Terminating the application is the right step, but printing the stack trace isn't. I could always manually do this:
@cli.command()
def somecommand():
    try:
        ...
    except Exception as e:
        click.echo(e)
However, is there a way to have Click handle all exceptions automatically?
In our CLI, all commands are grouped under a single command group. This allowed us to implement some behavior that needed to be executed for each command. One part of that is the exception handling.
Our entry point looks something like this:
@click.group()
@click.pass_context
def entry_point(ctx):
    ctx.obj = {"example": "This could be the configuration"}
We use it to run global code, e.g. configure the context, but you can also define an empty method that does nothing. Other commands can be added to this command group either by using the @entry_point.command() decorator or entry_point.add_command(cmd).
For the exception handling, we wrap the entry_point in another method that handles the exceptions:
def safe_entry_point():
    try:
        entry_point()
    except Exception as e:
        click.echo(e)
In setup.py, we configure the entry point for the CLI and point it to the wrapper:
entry_points={
    'console_scripts': [
        'cli = my.package:safe_entry_point'
    ]
}
The commands of the CLI can be executed through its command group: e.g. cli command.
There might be more elegant solutions out there, but this is how we solved it. While it introduces a command group as the highest-level element in your CLI, it allows us to handle all exceptions in a single place without duplicating the error handling in each and every command.
If you only want to handle exceptions for certain CLI commands, you could use another decorator.
Here's an example:
import click
from functools import wraps, partial

class NumberTooLarge(Exception):
    pass

def catch_exception(func=None, *, handle):
    if not func:
        return partial(catch_exception, handle=handle)

    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except handle as e:
            raise click.ClickException(e)
    return wrapper

@click.command()
@click.option("--count", default=1, help="Number of greetings.")
@catch_exception(handle=(NumberTooLarge, ValueError))
def hello(count):
    """Simple program that greets NAME for a total of COUNT times."""
    if count > 100:
        raise NumberTooLarge('count cannot be greater than 100')
    if count < 0:
        raise ValueError('count too small')
    click.echo('Great choice!')

if __name__ == "__main__":
    hello()

How can I determine if any errors were logged during a Python program's execution?

I have a python script which calls log.error() and log.exception() in several places. These exceptions are caught so that the script can continue to run, however, I would like to be able to determine if log.error() and/or log.exception() were ever called so I can exit the script with an error code by calling sys.exit(1). A naive implementation using an "error" variable is included below. It seems to me there must be a better way.
error = False
try:
    ...
except:
    log.exception("Something bad occurred.")
    error = True

if error:
    sys.exit(1)
I had the same issue as the original poster: I wanted to exit my Python script with an error code if any messages of error or greater severity were logged. For my application, it's desirable for execution to continue as long as no unhandled exceptions are raised. However, continuous integrations builds should fail if any errors are logged.
I found the errorhandler python package, which does just what we need. See the GitHub, PyPI page, and docs.
Below is the code I used:
import logging
import sys

import errorhandler

# Track if a message gets logged with severity of error or greater
error_handler = errorhandler.ErrorHandler()

# Also log to stderr
stream_handler = logging.StreamHandler(stream=sys.stderr)
logger = logging.getLogger()
logger.setLevel(logging.INFO)  # Set whatever logging level for stderr
logger.addHandler(stream_handler)

# Do your program here

if error_handler.fired:
    logger.critical('Failure: exiting with code 1 due to logged errors')
    raise SystemExit(1)
You can check logger._cache. It is a dictionary whose keys correspond to the numeric values of the logged error levels. So to check whether an error was logged you could do:
if 40 in logger._cache and logger._cache[40]:
    sys.exit(1)
I think that your solution is not the best option. Logging is one aspect of your script, returning an error code depending on the control flow is another. Perhaps using exceptions would be a better option.
But if you want to track the calls to the log object, you can wrap it in a small wrapper class. A simple example follows (without inheritance or dynamic attribute access):
class LogWrapper:
    def __init__(self, log):
        self.log = log
        self.error = False

    def exception(self, message):
        self.error = True
        self.log.exception(message)
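A short usage sketch of the wrapper above, tied back to the original script (it assumes you mirror error() the same way as exception() if you also log plain errors):
import logging
import sys

log = LogWrapper(logging.getLogger(__name__))

try:
    ...
except Exception:
    log.exception("Something bad occurred.")

if log.error:
    sys.exit(1)
Note that the error flag attribute would clash with a wrapped error() method of the same name, so in practice a different flag name (e.g. had_error) is a safer design choice.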
Whenever logger._cache is not a solution (for example, when other packages or modules log on their own and their records never end up in logger._cache), you can build a ContextFilter that records the worst log level seen so far:
class ContextFilterWorstLevel(logging.Filter):
    def __init__(self):
        self.worst_level = logging.INFO

    def filter(self, record):
        if record.levelno > self.worst_level:
            self.worst_level = record.levelno
        return True

# Create a logger object and add the filter
logger = logging.getLogger()
logger.addFilter(ContextFilterWorstLevel())

# Check the worst log level called later
for filter in logger.filters:
    if isinstance(filter, ContextFilterWorstLevel):
        print(filter.worst_level)
You can employ a counter. If you want to track individual exceptions, create a dictionary with the exception as the key and the integer counter as the value.
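A minimal sketch of that idea as a logging.Handler (the class name is illustrative; it keys counts by exception type when one is attached to the record, otherwise by level name):
import logging
import sys
from collections import Counter

class ExceptionCounterHandler(logging.Handler):
    """Counts ERROR-and-above records, keyed by exception type when available."""
    def __init__(self, level=logging.ERROR):
        super().__init__(level)
        self.counts = Counter()

    def emit(self, record):
        # record.exc_info is set when the record came from logger.exception()
        # or from a call that passed exc_info=True
        key = record.exc_info[0].__name__ if record.exc_info else record.levelname
        self.counts[key] += 1

counter = ExceptionCounterHandler()
logging.getLogger().addHandler(counter)

# ... run the program, logging errors along the way ...

if counter.counts:
    sys.exit(1)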

Log exception with traceback in Python

How can I log my Python exceptions?
try:
    do_something()
except:
    # How can I log my exception here, complete with its traceback?
Use logging.exception from within the except: handler/block to log the current exception along with the trace information, prepended with a message.
import logging

LOG_FILENAME = '/tmp/logging_example.out'
logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)

logging.debug('This message should go to the log file')

try:
    run_my_stuff()
except:
    logging.exception('Got exception on main handler')
    raise
Now looking at the log file, /tmp/logging_example.out:
DEBUG:root:This message should go to the log file
ERROR:root:Got exception on main handler
Traceback (most recent call last):
File "/tmp/teste.py", line 9, in <module>
run_my_stuff()
NameError: name 'run_my_stuff' is not defined
Using the exc_info option may be better, since it lets you keep the warning or error level of the message:
try:
    ...  # code in here
except Exception as e:
    logging.error(e, exc_info=True)
My job recently tasked me with logging all the tracebacks/exceptions from our application. I tried numerous techniques that others had posted online, such as the one above, but settled on a different approach: overriding traceback.print_exception.
I have a write-up at http://www.bbarrows.com/ that would be much easier to read, but I'll paste it in here as well.
When tasked with logging all the exceptions that our software might encounter in the wild I tried a number of different techniques to log our python exception tracebacks. At first I thought that the python system exception hook, sys.excepthook would be the perfect place to insert the logging code. I was trying something similar to:
import traceback
import StringIO
import logging
import os, sys

def my_excepthook(excType, excValue, traceback, logger=logger):
    logger.error("Logging an uncaught exception",
                 exc_info=(excType, excValue, traceback))

sys.excepthook = my_excepthook
This worked for the main thread, but I soon found that my sys.excepthook would not exist across any new threads my process started. This is a huge issue because almost everything happens in threads in this project.
After googling and reading plenty of documentation the most helpful information I found was from the Python Issue tracker.
The first post on the thread shows a working example of the sys.excepthook NOT persisting across threads (as shown below). Apparently this is expected behavior.
import sys, threading

def log_exception(*args):
    print 'got exception %s' % (args,)

sys.excepthook = log_exception

def foo():
    a = 1 / 0

threading.Thread(target=foo).start()
The messages on this Python Issue thread really result in 2 suggested hacks. Either subclass Thread and wrap the run method in our own try except block in order to catch and log exceptions or monkey patch threading.Thread.run to run in your own try except block and log the exceptions.
The first method of subclassing Thread seems to me to be less elegant, as you would have to import and use your custom Thread class EVERYWHERE you wanted a logging thread. This ended up being a hassle because I had to search our entire code base and replace all normal Threads with this custom Thread. However, it was clear what this Thread was doing, and it would be easier for someone to diagnose and debug if something went wrong with the custom logging code. A custom logging thread might look like this:
class TracebackLoggingThread(threading.Thread):
    def run(self):
        try:
            super(TracebackLoggingThread, self).run()
        except (KeyboardInterrupt, SystemExit):
            raise
        except Exception, e:
            logger = logging.getLogger('')
            logger.exception("Logging an uncaught exception")
The second method of monkey patching threading.Thread.run is nice because I could just run it once right after __main__ and instrument my logging code in all exceptions. Monkey patching can be annoying to debug though as it changes the expected functionality of something. The suggested patch from the Python Issue tracker was:
def installThreadExcepthook():
    """
    Workaround for sys.excepthook thread bug
    From
    http://spyced.blogspot.com/2007/06/workaround-for-sysexcepthook-bug.html
    (https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1230540&group_id=5470).
    Call once from __main__ before creating any threads.
    If using psyco, call psyco.cannotcompile(threading.Thread.run)
    since this replaces a new-style class method.
    """
    init_old = threading.Thread.__init__
    def init(self, *args, **kwargs):
        init_old(self, *args, **kwargs)
        run_old = self.run
        def run_with_except_hook(*args, **kw):
            try:
                run_old(*args, **kw)
            except (KeyboardInterrupt, SystemExit):
                raise
            except:
                sys.excepthook(*sys.exc_info())
        self.run = run_with_except_hook
    threading.Thread.__init__ = init
It was not until I started testing my exception logging I realized that I was going about it all wrong.
To test I had placed a
raise Exception("Test")
somewhere in my code. However, wrapping the method that called this method was a try/except block that printed out the traceback and swallowed the exception. This was very frustrating because I saw the traceback being printed to STDOUT but not being logged. It was then I decided that a much easier method of logging the tracebacks was to monkey-patch the method that all Python code uses to print the tracebacks themselves, traceback.print_exception.
I ended up with something similar to the following:
def add_custom_print_exception():
    old_print_exception = traceback.print_exception
    def custom_print_exception(etype, value, tb, limit=None, file=None):
        tb_output = StringIO.StringIO()
        traceback.print_tb(tb, limit, tb_output)
        logger = logging.getLogger('customLogger')
        logger.error(tb_output.getvalue())
        tb_output.close()
        old_print_exception(etype, value, tb, limit, file)
    traceback.print_exception = custom_print_exception
This code writes the traceback to a string buffer and logs it at ERROR level. I have a custom logging handler set up on the 'customLogger' logger which takes the ERROR-level logs and sends them home for analysis.
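The post doesn't show that handler; a minimal Python 3 sketch of the idea (the endpoint URL and JSON payload are assumptions, and real code would want batching, retries, and authentication) might be:
import json
import logging
import urllib.request

class PhoneHomeHandler(logging.Handler):
    """Sends ERROR-and-above records to a (hypothetical) collection endpoint."""
    def __init__(self, url):
        super().__init__(level=logging.ERROR)
        self.url = url

    def emit(self, record):
        try:
            payload = json.dumps({"logger": record.name,
                                  "message": self.format(record)}).encode()
            urllib.request.urlopen(self.url, data=payload, timeout=5)
        except Exception:
            self.handleError(record)  # never let logging itself crash the app

logging.getLogger('customLogger').addHandler(PhoneHomeHandler("https://logs.example.invalid/ingest"))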
You can log all uncaught exceptions on the main thread by assigning a handler to sys.excepthook, perhaps using the exc_info parameter of Python's logging functions:
import sys
import logging

logging.basicConfig(filename='/tmp/foobar.log')

def exception_hook(exc_type, exc_value, exc_traceback):
    logging.error(
        "Uncaught exception",
        exc_info=(exc_type, exc_value, exc_traceback)
    )

sys.excepthook = exception_hook

raise Exception('Boom')
If your program uses threads, however, then note that threads created using threading.Thread will not trigger sys.excepthook when an uncaught exception occurs inside them, as noted in Issue 1230540 on Python's issue tracker. Some hacks have been suggested there to work around this limitation, like monkey-patching Thread.__init__ to overwrite self.run with an alternative run method that wraps the original in a try block and calls sys.excepthook from inside the except block. Alternatively, you could just manually wrap the entry point for each of your threads in try/except yourself.
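A sketch of that last, manual option (the wrapper and worker names are illustrative, not from the original answer); note that Python 3.8+ also added threading.excepthook, which covers this case directly:
import logging
import threading

def logged(target):
    """Wrap a thread's target so uncaught exceptions are logged before the thread dies."""
    def wrapper(*args, **kwargs):
        try:
            target(*args, **kwargs)
        except Exception:
            logging.exception("Uncaught exception in thread %s",
                              threading.current_thread().name)
            raise
    return wrapper

def worker():
    raise ValueError("boom")

threading.Thread(target=logged(worker)).start()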
You can get the traceback using a logger, at any level (DEBUG, INFO, ...). Note that using logging.exception, the level is ERROR.
# test_app.py
import sys
import logging

logging.basicConfig(level="DEBUG")

def do_something():
    raise ValueError(":(")

try:
    do_something()
except Exception:
    logging.debug("Something went wrong", exc_info=sys.exc_info())
DEBUG:root:Something went wrong
Traceback (most recent call last):
File "test_app.py", line 10, in <module>
do_something()
File "test_app.py", line 7, in do_something
raise ValueError(":(")
ValueError: :(
EDIT:
This works too (using python 3.6)
logging.debug("Something went wrong", exc_info=True)
What I was looking for:
import sys
import traceback
exc_type, exc_value, exc_traceback = sys.exc_info()
traceback_in_var = traceback.format_tb(exc_traceback)
See:
https://docs.python.org/3/library/traceback.html
Uncaught exception messages go to STDERR, so instead of implementing your logging in Python itself you could send STDERR to a file using whatever shell you're using to run your Python script. In a Bash script, you can do this with output redirection, as described in the BASH guide.
Examples
Append errors to file, other output to the terminal:
./test.py 2>> mylog.log
Overwrite file with interleaved STDOUT and STDERR output:
./test.py &> mylog.log
Here is a version that uses sys.excepthook
import logging
import sys
import traceback

logger = logging.getLogger()

def handle_excepthook(type, message, stack):
    logger.error(f'An unhandled exception occurred: {message}. Traceback: {traceback.format_tb(stack)}')

sys.excepthook = handle_excepthook
This is how I do it.
try:
    do_something()
except:
    # How can I log my exception here, complete with its traceback?
    import traceback
    print(traceback.format_exc())  # this will print the complete traceback to stdout
Maybe not as stylish, but easier:
#!/bin/bash
log="/var/log/yourlog"
/path/to/your/script.py 2>&1 | (while read; do echo "$REPLY" >> $log; done)
To build on the other answers here: a way that works well for capturing the traceback in logs is to call traceback.format_exc() and then split the resulting string into lines, so each line is captured in the generated log file:
import logging
import sys
import traceback

try:
    ...
except Exception as ex:
    # could be done differently, just showing you can split it apart to capture everything individually
    ex_t = type(ex).__name__
    err = str(ex)
    err_msg = f'[{ex_t}] - {err}'
    logging.error(err_msg)

    # go through the traceback lines and add each one to the log individually as an error
    for l in traceback.format_exc().splitlines():
        logging.error(l)
Here's a simple example taken from the Python 2.6 documentation:
import logging

LOG_FILENAME = '/tmp/logging_example.out'
logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)

logging.debug('This message should go to the log file')
