Adding assertion functionality to python logging module?

I use assertions a lot in my code, and I would like to log any assertion errors that I have. After googling the problem, I didn't find a convenient solution.
So what I came up with is adding a method to the logging.Logger class:
import logging

def assertion(self, bool_condition, message):
    try:
        assert bool_condition, message
    except AssertionError:
        self.exception(message)
        raise

logging.Logger.assertion = assertion
""" apply log config """
log = logging.getLogger(__name__)
log.assertion(1 == 2, 'Assertion failed.')
It seems to do the job, but I was wondering whether this is good practice.

Related

How to assert logging error which is followed by a sys.exit()

I am using Python's logging module to generate an ERROR message in specific cases, followed by a sys.exit().
if platform is None:
    logging.error(f'No platform provided!')
    sys.exit()
# Continue do other stuff
Now I am using pytest to unit test this specific error message. However, the sys.exit() statement causes pytest to report an error because of the SystemExit, even though the error message itself passes the test.
And mocking sys.exit means the rest of the code ('Continue do other stuff') still runs, which then causes other problems.
I have tried the following:
LOGGER = logging.getLogger(__name__)
platform = None
data.set_platform(platform, logging=LOGGER)
assert "No platform provided!" in caplog.text
This question is similar: How to assert both UserWarning and SystemExit in pytest, but it raises the error in a different way.
How do I make pytest ignore the SystemExit?
Here is one approach.
In your test module you could write the following test, where your_module is the name of the module where your actual code is defined, and function() is the function that is doing the logging and calling sys.exit().
import logging

import pytest

from your_module import function

def test_function(caplog):
    with pytest.raises(SystemExit):
        function()

    log_record = caplog.records[0]
    assert log_record.levelno == logging.ERROR
    assert log_record.message == "No platform provided!"
    assert log_record.lineno == 8  # Replace with the line no. at which the log call actually occurs in the main code.
(If you want to shorten this a little bit, you can use record_tuples instead of records.)
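For instance, assuming the logger in your_module is created with logging.getLogger(__name__), so that its name is "your_module" (an assumption about your code), the same check with record_tuples would be:
def test_function(caplog):
    with pytest.raises(SystemExit):
        function()

    # record_tuples holds (logger_name, level, message) triples
    assert caplog.record_tuples == [
        ("your_module", logging.ERROR, "No platform provided!"),
    ]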
EDIT: Using caplog instead of mocking the log module.

How to ignore certain Python errors from Sentry capture

I have Sentry configured to capture all errors from a Django+Celery application. It works ok, but I'm finding an obnoxious use case is when I have to restart my Celery workers, PostgreSQL database or messaging server, which causes thousands of various kinds of "database/messaging server cannot be reached" errors. This pollutes the Sentry reports, and sometimes even exceeds my event quota.
Their docs mention an "ignore_exceptions" parameter, but it's in their old deprecated client, which I'm not using and which isn't recommended for new projects. How would you do this in the new API?
It took me some source-diving to actually find it, but the option in the new SDK is "ignore_errors". It takes an iterable where each element can be either a string or a type (like the old interface).
I hesitate to link to it because it's an internal option that could change at any time, but here is a snapshot of it at the time of writing.
As an example (reimplementing Markus's answer):
import sentry_sdk
sentry_sdk.init(ignore_errors=[IgnoredErrorFoo, IgnoredErrorBar])
You can use before_send to filter errors by arbitrary criteria. Since it's unclear what you actually want to filter by, here's an example that filters by type. However, you can extend it with custom logic to e.g. match by exception message.
import sentry_sdk

def before_send(event, hint):
    if 'exc_info' in hint:
        exc_type, exc_value, tb = hint['exc_info']
        if isinstance(exc_value, (IgnoredErrorFoo, IgnoredErrorBar)):
            return None
    return event

sentry_sdk.init(before_send=before_send)
To ignore all related errors, there are two ways:
Use before_send
import sentry_sdk
from rest_framework.exceptions import ValidationError

def before_send(event, hint):
    if 'exc_info' in hint:
        exc_type, exc_value, tb = hint['exc_info']
        if isinstance(exc_value, (KeyError, ValidationError)):
            return None
    return event

sentry_sdk.init(
    dsn='SENTRY_DSN',
    before_send=before_send
)
Use ignore_errors
import sentry_sdk
from rest_framework.exceptions import ValidationError

sentry_sdk.init(
    dsn='SENTRY_DSN',
    ignore_errors=[
        KeyError,
        ValidationError,
    ]  # All events of these types (KeyError, ValidationError) will be ignored
)
To ignore only a specific event error, e.g. ValidationError('my error message'), use a custom before_send:
def before_send(event, hint):
    if 'exc_info' in hint:
        exc_type, exc_value, tb = hint['exc_info']
        if exc_value.args[0] in ['my error message', 'my error message 2', ...]:
            return None
    return event
This is documented in the sentry-python documentation at: https://docs.sentry.io/platforms/python/guides/django/configuration/filtering/
Note: the hint parameter covers three cases; you need to know which case your error will fall under.
https://docs.sentry.io/platforms/python/guides/django/configuration/filtering/hints/
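For instance, events created by the logging integration carry a log_record hint rather than exc_info, so a sketch that also filters logged errors by message (the message strings here are placeholders) could look like:
def before_send(event, hint):
    # Events that originate from logging calls carry a LogRecord
    if 'log_record' in hint:
        if 'my error message' in hint['log_record'].getMessage():
            return None
    return event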

Python logging module: how to save log to a file if (and only if) assertion test fails?

I am looking for an elegant and Pythonic solution to make tests save a log to a file, though only in case of test failure. I would like to keep things simple and stick with Python's built-in logging module.
My current solution is to use a wrapper around the assertion of every test:
import datetime
import inspect
import logging
import unittest

class superTestCase(unittest.TestCase):
    ...

    def assertWithLogging(self, assertion, assertion_arguments, expected_response, actual_response, *args):
        try:
            assertion(*assertion_arguments)
        except AssertionError as ae:
            test_name = inspect.stack()[1][3]
            current_date_time = datetime.datetime.now().strftime("%Y.%m.%d %H-%M-%S")
            logging.basicConfig(filename='tests/{}-{}-Failure.log'.format(current_date_time, test_name),
                                filemode='a',
                                format='%(message)s',
                                level=logging.DEBUG
                                )
            logger = logging.getLogger('FailureLogger')
            logger.debug('{} has failed'.format(test_name))
            logger.debug('Expected response(s):')
            logger.debug(expected_response)
            logger.debug('Actual response:')
            logger.debug(actual_response)
            for arg in args:
                logger.debug('Additional logging info:')
                logger.debug(arg)
            raise ae

    def testSomething(self):
        ...
        self.assertWithLogging(self.assertEqual,
                               [expected_response, actual_response],
                               expected_response,
                               actual_response,
                               some_other_variable
                               )
Though it works as I expect it to, this solution seems clumsy and not very Pythonic to me.
What would be (Is there) a more elegant way to achieve the same result?
What are the downsides of current approach?
You can use the various logging mechanisms, wherein you set the level of the messages you want to capture.
The call below will log only error messages:
Logger.error(msg, *args, **kwargs)
This logs msg with level logging.ERROR on this logger. The arguments are interpreted as for debug().
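As a minimal sketch of that idea, assuming the failures should go to a file named failures.log (the filename is a placeholder), you can attach a FileHandler whose level is set to ERROR, so that only failure messages reach the file:
import logging

logger = logging.getLogger('FailureLogger')
logger.setLevel(logging.DEBUG)

handler = logging.FileHandler('failures.log')
handler.setLevel(logging.ERROR)  # only ERROR and above are written to the file
logger.addHandler(handler)

logger.debug('not written to failures.log')
logger.error('written to failures.log')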

How do I test if a certain log message is logged in a Django test case?

I want to ensure that a certain condition in my code causes a log message to be written to the django log. How would I do this with the Django unit testing framework?
Is there a place where I can check logged messages, similarly to how I can check sent emails? My unit test extends django.test.TestCase.
Use the mock module to mock the logging module or the logger object. When you've done that, check the arguments with which the logging function is called.
For example, if your code looks like this:
import logging
logger = logging.getLogger('my_logger')
logger.error("Your log message here")
it would look like:
from unittest.mock import patch  # For Python 2.x use: from mock import patch

@patch('this.is.my.module.logger')
def test_check_logging_message(self, mock_logger):
    # ... call the code that performs the logging ...
    mock_logger.error.assert_called_with("Your log message here")
You can also use assertLogs from django.test.TestCase
When your code is
import logging

logger = logging.getLogger('my_logger')

def code_that_throws_error_log():
    logger.error("Your log message here")
This is the test code.
with self.assertLogs(logger='my_logger', level='ERROR') as cm:
    code_that_throws_error_log()

self.assertIn(
    "ERROR:my_logger:Your log message here",
    cm.output
)
This lets you avoid patching just for logs.
The common way of mocking out the logger object (see the splendid chap Simeon Visser's answer) is slightly tricky in that it requires the test to mock out the logging in all the places it's done. This is awkward if the logging comes from more than one module, or is in code you don't own. If the module the logging comes from changes name, it will break your tests.
The splendid 'testfixtures' package includes tools to add a logging handler which captures all generated log messages, no matter where they come from. The captured messages can later be interrogated by the test. In its simplest form:
Assuming code-under-test, which logs:
import logging
logger = logging.getLogger()
logger.info('a message')
logger.error('an error')
A test for this would be:
from testfixtures import LogCapture

with LogCapture() as l:
    call_code_under_test()

l.check(
    ('root', 'INFO', 'a message'),
    ('root', 'ERROR', 'an error'),
)
The word 'root' indicates the logging was sent via a logger created using logging.getLogger() (i.e. with no args.) If you pass an arg to getLogger (__name__ is conventional), that arg will be used in place of 'root'.
The test does not care what module created the logging. It could be a sub-module called by our code-under-test, including 3rd party code.
The test asserts about the actual log message that was generated, as opposed to the mocking technique, which asserts about the args that were passed. These will differ if the logging.info call uses '%s' format strings with additional arguments that you don't expand yourself, i.e. logging.info('total=%s', len(items)) rather than logging.info('total=%s' % len(items)). You should prefer the former: it's no extra work, and it allows log aggregation services such as Sentry to work properly, since they can see that "total=12" and "total=43" are two instances of the same log message. That is also why pylint warns about the latter form of the logging.info call.
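To illustrate the difference (items here is a placeholder variable):
import logging

items = [1, 2, 3]

# Preferred: lazy %s formatting. A mocked logger sees the raw call
# args ('total=%s', 3), while LogCapture sees the rendered 'total=3'.
logging.info('total=%s', len(items))

# Discouraged: eager formatting. The message is pre-rendered, so every
# value produces a distinct message string; pylint flags this form.
logging.info('total=%s' % len(items))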
LogCapture includes facilities for log filtering and the like. Its parent 'testfixtures' package, written by Chris Withers, another splendid chap, includes many other useful testing tools. Documentation is here: http://pythonhosted.org/testfixtures/logging.html
Django has a nice context manager function called patch_logger.
from django.test.utils import patch_logger
then in your test case:
with patch_logger('logger_name', 'error') as cm:
    # ... call the code that performs the logging ...
    self.assertIn("Error message", cm)
where:
logger_name is the logger name (duh)
error is the log level
cm is the list of all log messages
More details:
https://github.com/django/django/blob/2.1/django/test/utils.py#L638
It should work the same for Django < 2.0, independently of the Python version (as long as it's supported by Django).
If you're using test classes, you can use the following solution:
import logging

from django.test import TestCase

class MyTest(TestCase):
    @classmethod
    def setUpClass(cls):
        super(MyTest, cls).setUpClass()
        cls.logging_error = logging.error
        logging.error = cls._error_log

    @classmethod
    def tearDownClass(cls):
        super(MyTest, cls).tearDownClass()
        logging.error = cls.logging_error

    @classmethod
    def _error_log(cls, msg):
        cls.logger = msg

    def test_logger(self):
        self.assertIn('Message', self.logger)
This replaces the error function of the logging module with a custom method, for test purposes only, and puts the logged message into the cls.logger variable, which is available in every test case via self.logger. At the end it reverts the change by restoring the original error function of the logging module.

How can I determine if any errors were logged during a python program's execution?

I have a python script which calls log.error() and log.exception() in several places. These exceptions are caught so that the script can continue to run, however, I would like to be able to determine if log.error() and/or log.exception() were ever called so I can exit the script with an error code by calling sys.exit(1). A naive implementation using an "error" variable is included below. It seems to me there must be a better way.
error = False
try:
    ...
except:
    log.exception("Something bad occurred.")
    error = True

if error:
    sys.exit(1)
I had the same issue as the original poster: I wanted to exit my Python script with an error code if any messages of error or greater severity were logged. For my application, it's desirable for execution to continue as long as no unhandled exceptions are raised. However, continuous integrations builds should fail if any errors are logged.
I found the errorhandler python package, which does just what we need. See the GitHub, PyPI page, and docs.
Below is the code I used:
import logging
import sys

import errorhandler

# Track whether any message gets logged with severity of error or greater
error_handler = errorhandler.ErrorHandler()

# Also log to stderr
stream_handler = logging.StreamHandler(stream=sys.stderr)
logger = logging.getLogger()
logger.setLevel(logging.INFO)  # Set whatever logging level you want for stderr
logger.addHandler(stream_handler)

# Do your program here

if error_handler.fired:
    logger.critical('Failure: exiting with code 1 due to logged errors')
    raise SystemExit(1)
You can check logger._cache. It is a dictionary whose keys correspond to the numeric values of the levels that have been logged. So to check whether an error was logged you could do:
if 40 in logger._cache and logger._cache[40]:  # 40 is the numeric value of logging.ERROR
    sys.exit(1)
I think that your solution is not the best option. Logging is one aspect of your script, returning an error code depending on the control flow is another. Perhaps using exceptions would be a better option.
But if you want to track calls to the log object, you can wrap it using the decorator pattern. A simple example follows (without inheritance or dynamic attribute access):
class LogWrapper:
    def __init__(self, log):
        self.log = log
        self.error = False

    def exception(self, message):
        self.error = True
        self.log.exception(message)
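Usage would then look something like this (a sketch; the try block stands in for your actual code):
import logging
import sys

log = LogWrapper(logging.getLogger(__name__))
try:
    ...
except Exception:
    log.exception("Something bad occurred.")

if log.error:
    sys.exit(1)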
Whenever logger._cache is not a solution (e.g. when other packages or modules do their own logging, which won't end up in logger._cache), you can build a ContextFilter that records the worst log level called:
class ContextFilterWorstLevel(logging.Filter):
    def __init__(self):
        self.worst_level = logging.INFO

    def filter(self, record):
        if record.levelno > self.worst_level:
            self.worst_level = record.levelno
        return True

# Create a logger object and add the filter
logger = logging.getLogger()
logger.addFilter(ContextFilterWorstLevel())

# Check the worst log level called later
for filter in logger.filters:
    if isinstance(filter, ContextFilterWorstLevel):
        print(filter.worst_level)
You can employ a counter. If you want to track individual exceptions, create a dictionary with the exception as the key and the integer counter as the value.
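A minimal sketch of the counter idea as a logging filter (the class name and the ERROR threshold are my own choices, not from the original answer):
import logging
import sys
from collections import Counter

class ErrorCounterFilter(logging.Filter):
    # Counts how often each distinct error is logged.
    def __init__(self):
        super().__init__()
        self.counts = Counter()

    def filter(self, record):
        if record.levelno >= logging.ERROR:
            # Key by exception type when available, otherwise by message
            if record.exc_info and record.exc_info[0] is not None:
                key = record.exc_info[0].__name__
            else:
                key = record.getMessage()
            self.counts[key] += 1
        return True  # never suppress the record

counter = ErrorCounterFilter()
logging.getLogger().addFilter(counter)

# ... run the program ...

if counter.counts:
    print(dict(counter.counts))  # e.g. {'ValueError': 2}
    sys.exit(1)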
