I would like to use rich.logging.RichHandler from the rich library to handle all captured logs in pytest.
Say I have two files,
# library.py
import logging

logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())

def func():
    x = {"value": 5}
    logger.info(x)
# test_library.py
from library import func

def test_func():
    func()
    assert False
Running pytest shows the log message as expected, but I want it formatted by rich so I tried to put the following into conftest.py:
import logging

import pytest
from rich.logging import RichHandler

@pytest.hookimpl
def pytest_configure(config: pytest.Config):
    logger = logging.getLogger()
    logger.addHandler(RichHandler())
which results in the following output:
Under "captured stdout call" the log message appears as formatted by the RichHandler but below that it appears a second time under "captured log call" which is not what I want. Instead the message below "captured log call" should be formatted by the RichHandler and should not appear twice.
This does not perfectly answer your question, but it may help you get close to the desired result.
Disable live logs: log_cli = 0 in pytest.ini
Disable capture: the -s flag (equivalent to --capture=no)
Disable showing captured logs: --show-capture=no (this should already be disabled when capture is turned off, but ...)
So by disabling live logs and running pytest -s --show-capture=no, you should get rid of the duplicated logs and get only rich-formatted logs.
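If you prefer to keep everything in the config file, a minimal sketch of the same settings (just combining the option values mentioned above) could look like this:
# pytest.ini -- a minimal sketch combining the options above
[pytest]
log_cli = 0
addopts = -s --show-capture=no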
I have a simple Python script that leads to a pandas SettingWithCopyWarning:
import logging

import pandas as pd

def method():
    logging.info("info")
    logging.warning("warning1")
    logging.warning("warning2")
    df = pd.DataFrame({"1": [1, 0], "2": [3, 4]})
    df[df["1"] == 1]["2"] = 100

if __name__ == "__main__":
    method()
When I run the script, I get what I expect
WARNING:root:warning1
WARNING:root:warning2
main.py:11: SettingWithCopyWarning: ...
Now I write a pytest unit test for it:
from src.main import method

def test_main():
    method()
and activate the logging in my pytest.ini
[pytest]
log_cli = true
log_cli_level = DEBUG
-------------------------------- live log call ---------------------------------
INFO     root:main.py:7 info
WARNING  root:main.py:8 warning1
WARNING  root:main.py:9 warning2
PASSED                                                                   [100%]
========================= 1 passed, 1 warning in 0.27s =========================
Process finished with exit code 0
The SettingWithCopyWarning is counted while my logging warnings are not. Why is that? How do I control that? Via a configuration in the pytest.ini?
Even worse: the SettingWithCopyWarning is not printed. I want to see it and perhaps even test for it. How can I see warnings that are generated by dependent packages? Via a configuration in the pytest.ini?
Thank you!
All messages shown by pytest's live-log feature come from the standard logging facility.
Warnings raised through the warnings module are captured separately by pytest by default and reported in the warnings summary after the live log messages.
To log warnings raised via the warnings.warn function as soon as they are emitted, you need to tell pytest not to capture them.
In your pytest.ini, add
[pytest]
log_cli = true
log_level = DEBUG
log_cli_level = DEBUG
addopts=--disable-warnings
Then in your tests/conftest.py, write a hook to capture warnings in the tests using the standard logging facility.
import logging

def pytest_runtest_call(item):
    # Route warnings.warn() messages to the standard "py.warnings" logger
    # so they appear in the live log output.
    logging.captureWarnings(True)
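To illustrate the effect, here is a small sketch (the test name and message are made up): with captureWarnings(True) in place, a warning raised like this is routed to the py.warnings logger and therefore shows up in the live log output rather than in the warnings summary.
import warnings

def test_emits_warning():
    warnings.warn("this API is deprecated", DeprecationWarning)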
We've just switched from nose to pytest and there doesn't seem to be an option to suppress third party logging. In nose config, we had the following line:
logging-filter=-matplotlib,-chardet.charsetprober,-PIL,-fiona.env,-fiona._env
Some of those logs are very chatty, especially matplotlib's, and we don't want to see that output, just output from our own loggers.
I can't find an equivalent setting in pytest though. Is it possible? Am I missing something? Thanks.
The way I do it is by creating a list of logger names for which logs have to be disabled in conftest.py.
For example, if I want to disable a logger called app, then I can write a conftest.py as below:
import logging

disable_loggers = ['app']

def pytest_configure():
    for logger_name in disable_loggers:
        logger = logging.getLogger(logger_name)
        logger.disabled = True
And then run my test:
import logging

def test_brake():
    logger = logging.getLogger("app")
    logger.error("Hello there")
    assert True
collecting ... collected 1 item
test_car.py::test_brake PASSED                                           [100%]
============================== 1 passed in 0.01s ===============================
Then Hello there does not appear, because the logger named app was disabled in conftest.py.
However, if I change my logger name in the test to app2 and run the test again:
import logging

def test_brake():
    logger = logging.getLogger("app2")
    logger.error("Hello there")
    assert True
collecting ... collected 1 item

test_car.py::test_brake
-------------------------------- live log call ---------------------------------
ERROR    app2:test_car.py:5 Hello there
PASSED                                                                   [100%]

============================== 1 passed in 0.01s ===============================
As you can see, Hello there does appear, because the logger named app2 is not disabled.
Conclusion
Basically, you could do the same, but just add your undesired logger names to conftest.py as below:
import logging

disable_loggers = ['matplotlib', 'chardet.charsetprober', <add more yourself>]

def pytest_configure():
    for logger_name in disable_loggers:
        logger = logging.getLogger(logger_name)
        logger.disabled = True
Apart from the ability to tune logging levels or to show no log output at all, which I'm sure you've read about in the docs, the only way that comes to mind is to configure your logging in general.
Assuming that all of those packages use the standard library logging facilities, you have various options of configuring what gets logged. Please take a look at the advanced tutorial for a good overview of your options.
If you don't want to configure logging for your application in general but only during testing, you might do so using the pytest_configure or pytest_sessionstart hooks which you might place in a conftest.py at the root of your test file hierarchy.
Then I see three options:
The brute force way is to use the default behaviour of fileConfig or dictConfig to disable all existing loggers. In your conftest.py:
import logging.config

def pytest_sessionstart():
    # disable_existing_loggers defaults to True; with no loggers listed in
    # the config, every logger that already exists gets disabled.
    logging.config.dictConfig({
        'version': 1,
        'disable_existing_loggers': True,
    })
The more subtle approach is to change the level of individual loggers or disable them. As an example:
import logging.config

def pytest_sessionstart():
    logging.config.dictConfig({
        'version': 1,
        'disable_existing_loggers': False,
        'loggers': {
            # Add any noisy loggers here with a higher log level.
            'matplotlib': {'level': 'ERROR'},
        },
    })
Lastly, you can use the pytest_addoption hook to add a command line option similar to the one you mention. Again, at the root of your test hierarchy put the following in a conftest.py:
import logging

def pytest_addoption(parser):
    parser.addoption(
        "--logging-filter",
        help="Provide a comma-separated list of logger names that will be "
        "disabled."
    )

def pytest_sessionstart(session):
    logging_filter = session.config.getoption("--logging-filter")
    if logging_filter:
        for logger_name in logging_filter.split(","):
            # Use `logger_name.strip()[1:]` if you want the `-name` CLI syntax.
            logger = logging.getLogger(logger_name.strip())
            logger.disabled = True
You can then call pytest in the following way:
pytest --logging-filter matplotlib,chardet,...
The default approach of pytest is to hide all logs but provide the caplog fixture to inspect log output in your test cases. This is quite powerful if you are looking for specific log lines. So the question is also: why do you need to see those logs at all in your test suite?
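For completeness, a minimal sketch of caplog usage (the logger name "app" and the message are only placeholders):
import logging

def test_emits_expected_message(caplog):
    with caplog.at_level(logging.INFO):
        logging.getLogger("app").info("something happened")
    assert "something happened" in caplog.text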
Adding a log filter to conftest.py looks like it might be useful, and I'll come back to that at some point. For now, we've just gone for silencing the logs in the application itself; we don't see them at any point when the app is running, not just during testing.
import logging

# Hide verbose third-party logs
for log_name in ('matplotlib', 'fiona.env', 'fiona._env', 'PIL', 'chardet.charsetprober'):
    other_log = logging.getLogger(log_name)
    other_log.setLevel(logging.WARNING)
I have the impression (but do not find the documentation for it) that unittest sets the logging level to WARNING for all loggers. I would like to:
be able to specify the logging level for all loggers, from the command line (when running the tests) or from the test module itself
avoid unittest messing around with the application logging level: when running the tests I want to have the same logging output (same levels) as when running the application
How can I achieve this?
I don't believe unittest itself does anything to logging, unless you use a _CapturingHandler class which it defines. This simple program demonstrates:
import logging
import unittest

logger = logging.getLogger(__name__)

class MyTestCase(unittest.TestCase):
    def test_something(self):
        logger.debug('logged from test_something')

if __name__ == '__main__':
    # DEBUG for demonstration purposes, but you could set the level from
    # cmdline args to whatever you like
    logging.basicConfig(level=logging.DEBUG, format='%(name)s %(levelname)s %(message)s')
    unittest.main()
When run, it prints
__main__ DEBUG logged from test_something
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
showing that it is logging events at DEBUG level, as expected. So the problem is likely to be related to something else, e.g. the code under test, or some other test runner which changes the logging configuration or redirects sys.stdout and sys.stderr. You will probably need to provide more information about your test environment, or better yet a minimal program that demonstrates the problem (as my example above shows that unittest by itself doesn't cause the problem you're describing).
See the example below for logging in Python. You can also change the log level using the setLevel method.
import os
import logging
logging.basicConfig()
logger = logging.getLogger(__name__)
# Change logging level here.
logger.setLevel(os.environ.get('LOG_LEVEL', logging.INFO))
logger.info('For INFO message')
logger.debug('For DEBUG message')
logger.warning('For WARNING message')
logger.error('For ERROR message')
logger.critical('For CRITICAL message')
This is in addition to @Vinay's answer above. It does not answer the original question. I wanted to include command line options for modifying the log level. The intent was to get detailed logging only when I pass a certain parameter from the command line. This is how I solved it:
import sys
import unittest
import logging

from histogram import Histogram

class TestHistogram(unittest.TestCase):
    def test_case2(self):
        h = Histogram([2, 1, 2])
        self.assertEqual(h.calculateMaxAreaON(), 3)

if __name__ == '__main__':
    argv = len(sys.argv) > 1 and sys.argv[1]
    loglevel = logging.INFO if argv == '-v' else logging.WARNING
    logging.basicConfig(level=loglevel)
    unittest.main()
The intent is to get more verbose logging. I know it does not answer the question, but I'll leave it here in case someone comes looking for a similar requirement such as this.
this worked for me:
logging.basicConfig(level=logging.DEBUG)
And if I wanted a specific format:
logging.basicConfig(
    level=logging.DEBUG,
    datefmt="%H:%M:%S",
    format="%(asctime)s.%(msecs)03d [%(levelname)-5s] %(message)s",
)
Programmatically:
Put this line of code in each test function (defined in your class) for which you want to set the logging level:
logging.getLogger().setLevel(logging.INFO)
Ex. class:
import unittest
import logging

class ExampleTest(unittest.TestCase):
    def test_method(self):
        logging.getLogger().setLevel(logging.INFO)
        ...
Command Line:
This example just shows how to do it in a normal script; it is not specific to unittest. It captures the log level via the command line, using argparse for arguments:
import logging
import argparse
...

def parse_args():
    parser = argparse.ArgumentParser(description='...')
    parser.add_argument('-v', '--verbose', help='enable verbose logging', action='store_const', dest="loglevel", const=logging.INFO, default=logging.WARNING)
    ...

def main():
    args = parse_args()
    logging.getLogger().setLevel(args.loglevel)
I want to find out how logging should be organised given that I write many scripts and modules that should feature similar logging. I want to be able to set the logging appearance and the logging level from the script and I want this to propagate the appearance and level to my modules and only my modules.
An example script could be something like the following:
import logging
import technicolor
import example_2_module

def main():
    verbose = True
    global log
    log = logging.getLogger(__name__)
    logging.root.addHandler(technicolor.ColorisingStreamHandler())
    # logging level
    if verbose:
        logging.root.setLevel(logging.DEBUG)
    else:
        logging.root.setLevel(logging.INFO)
    log.info("example INFO message in main")
    log.debug("example DEBUG message in main")
    example_2_module.function1()

if __name__ == '__main__':
    main()
An example module could be something like the following:
import logging

log = logging.getLogger(__name__)

def function1():
    print("printout of function 1")
    log.info("example INFO message in module")
    log.debug("example DEBUG message in module")
You can see that the module contains only minimal infrastructure to pick up the logging appearance and level set in the script. This has worked fine, but I've encountered a problem: other modules also use logging. This can result in output being printed twice, and in very detailed debug logging from modules that are not my own.
How should I code this such that the logging appearance/level is set from the script but then used only by my modules?
You need to set the propagate attribute to False so that the log message does not propagate to ancestor loggers. Here is the documentation for Logger.propagate -- it defaults to True. So just:
import logging
log = logging.getLogger(__name__)
log.propagate = False
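Applied to the problem above, the same trick can also be pointed at a chatty third-party logger so its records never reach the handler you attach to the root logger; "some_noisy_package" is just an illustrative name:
import logging

# Records from this package's loggers will no longer bubble up to the root handler.
logging.getLogger("some_noisy_package").propagate = False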
I want to ensure that a certain condition in my code causes a log message to be written to the django log. How would I do this with the Django unit testing framework?
Is there a place where I can check logged messages, similarly to how I can check sent emails? My unit test extends django.test.TestCase.
Use the mock module to mock the logging module or the logger object. When you've done that, check the arguments with which the logging function was called.
For example, if your code looks like this:
import logging
logger = logging.getLogger('my_logger')
logger.error("Your log message here")
the test would look like this:
from unittest.mock import patch  # For Python 2.x use: from mock import patch

@patch('this.is.my.module.logger')
def test_check_logging_message(self, mock_logger):
    # ... call the code under test that does the logging ...
    mock_logger.error.assert_called_with("Your log message here")
You can also use assertLogs, which django.test.TestCase inherits from unittest.TestCase.
When your code is
import logging

logger = logging.getLogger('my_logger')

def code_that_throws_error_log():
    logger.error("Your log message here")
This is the test code.
with self.assertLogs(logger='my_logger', level='ERROR') as cm:
    code_that_throws_error_log()
self.assertIn(
    "ERROR:my_logger:Your log message here",
    cm.output
)
This lets you avoid patching just for logs.
The common way of mocking out the logger object (see the splendid chap Simeon Visser's answer) is slightly tricky: it requires the test to mock out the logging everywhere it's done. This is awkward if the logging comes from more than one module, or is in code you don't own. If the module the logging comes from changes name, it will break your tests.
The splendid 'testfixtures' package includes tools to add a logging handler which captures all generated log messages, no matter where they come from. The captured messages can later be interrogated by the test. In its simplest form:
Assuming code-under-test, which logs:
import logging
logger = logging.getLogger()
logger.info('a message')
logger.error('an error')
A test for this would be:
from testfixtures import LogCapture

with LogCapture() as l:
    call_code_under_test()

l.check(
    ('root', 'INFO', 'a message'),
    ('root', 'ERROR', 'an error'),
)
The word 'root' indicates the logging was sent via a logger created using logging.getLogger() (i.e. with no args.) If you pass an arg to getLogger (__name__ is conventional), that arg will be used in place of 'root'.
The test does not care what module created the logging. It could be a sub-module called by our code-under-test, including 3rd party code.
The test asserts about the actual log message that was generated, as opposed to the technique of mocking, which asserts about the args that were passed. These differ whenever a logging.info call uses '%s' format strings with additional arguments that you don't expand yourself, e.g. logging.info('total=%s', len(items)) instead of logging.info('total=%s' % len(items)). The former is what you should use: it's no extra work, and it allows hypothetical future log aggregation services such as 'Sentry' to work properly, since they can see that "total=12" and "total=43" are two instances of the same log message. That's also the reason why pylint warns about the latter form of logging.info call.
LogCapture includes facilities for log filtering and the like. Its parent 'testfixtures' package, written by Chris Withers, another splendid chap, includes many other useful testing tools. Documentation is here: http://pythonhosted.org/testfixtures/logging.html
Django has a nice context manager function called patch_logger.
from django.test.utils import patch_logger
then in your test case:
with patch_logger('logger_name', 'error') as cm:
    self.assertIn("Error message", cm)
where:
logger_name is the logger name (duh)
error is the log level
cm is the list of all log messages
More details:
https://github.com/django/django/blob/2.1/django/test/utils.py#L638
It should work the same for django < 2.0, independently of python version (as long as it's supported by dj)
If you're using test classes, you can use following solution:
import logging
from django.test import TestCase

class MyTest(TestCase):
    @classmethod
    def setUpClass(cls):
        super(MyTest, cls).setUpClass()
        cls.logging_error = logging.error
        logging.error = cls._error_log

    @classmethod
    def tearDownClass(cls):
        super(MyTest, cls).tearDownClass()
        logging.error = cls.logging_error

    @classmethod
    def _error_log(cls, msg):
        cls.logger = msg

    def test_logger(self):
        self.assertIn('Message', self.logger)
This method replaces the error function of the logging module with a custom method, only for test purposes, and puts the logged message into the cls.logger variable, which is available in every test case as self.logger. At the end it reverts the change by putting the original error function back on the logging module.