How to suppress third party logs in pytest - python

We've just switched from nose to pytest and there doesn't seem to be an option to suppress third party logging. In nose config, we had the following line:
logging-filter=-matplotlib,-chardet.charsetprober,-PIL,-fiona.env,-fiona._env
Some of those logs are very chatty, especially matplotlib's, and we don't want to see their output, just the output from our own logs.
I can't find an equivalent setting in pytest though. Is it possible? Am I missing something? Thanks.

The way I do it is by creating, in conftest.py, a list of logger names whose logs should be disabled.
For example, if I want to disable a logger called app, then I can write a conftest.py as below:
import logging

disable_loggers = ['app']

def pytest_configure():
    for logger_name in disable_loggers:
        logger = logging.getLogger(logger_name)
        logger.disabled = True
And then run my test:
import logging

def test_brake():
    logger = logging.getLogger("app")
    logger.error("Hello there")
    assert True

collecting ... collected 1 item

test_car.py::test_brake PASSED                                           [100%]

============================== 1 passed in 0.01s ===============================
Then Hello there does not appear, because the logger named app was disabled in conftest.py.
However, if I change my logger name in the test to app2 and run the test again:
import logging

def test_brake():
    logger = logging.getLogger("app2")
    logger.error("Hello there")
    assert True

collecting ... collected 1 item

test_car.py::test_brake
-------------------------------- live log call ---------------------------------
ERROR    app2:test_car.py:5 Hello there
PASSED                                                                   [100%]

============================== 1 passed in 0.01s ===============================
As you can see, Hello there appears, because the logger named app2 is not disabled.
Conclusion
Basically, you can do the same; just add your undesired logger names to conftest.py as below:
import logging

disable_loggers = ['matplotlib', 'chardet.charsetprober']  # add more yourself

def pytest_configure():
    for logger_name in disable_loggers:
        logger = logging.getLogger(logger_name)
        logger.disabled = True

Apart from the ability to tune logging levels or to suppress log output entirely, which I'm sure you've read about in the docs, the only way that comes to mind is to configure your logging in general.
Assuming that all of those packages use the standard library logging facilities, you have various options of configuring what gets logged. Please take a look at the advanced tutorial for a good overview of your options.
If you don't want to configure logging for your application in general but only during testing, you can do so in the pytest_configure or pytest_sessionstart hooks, placed in a conftest.py at the root of your test file hierarchy.
Then I see three options:
The brute-force way is to use the default behaviour of fileConfig or dictConfig, which disables all existing loggers. In your conftest.py:
import logging.config

def pytest_sessionstart():
    # 'disable_existing_loggers' defaults to True, so it is spelled out here
    # only for clarity; the 'version' key, however, is mandatory.
    logging.config.dictConfig({
        'version': 1,
        'disable_existing_loggers': True,
    })
The more subtle approach is to change the level of individual loggers or disable them. As an example:
import logging.config

def pytest_sessionstart():
    logging.config.dictConfig({
        'version': 1,
        'disable_existing_loggers': False,
        'loggers': {
            # Add any noisy loggers here with a higher loglevel.
            'matplotlib': {'level': 'ERROR'},
        },
    })
Lastly, you can use the pytest_addoption hook to add a command line option similar to the one you mention. Again, at the root of your test hierarchy put the following in a conftest.py:
import logging

def pytest_addoption(parser):
    parser.addoption(
        "--logging-filter",
        help="Provide a comma-separated list of logger names that will be "
             "disabled."
    )

def pytest_sessionstart(session):
    option = session.config.getoption("--logging-filter")
    if option:
        for logger_name in option.split(","):
            # Use `logger_name.strip()[1:]` if you want the `-name` CLI syntax.
            logger = logging.getLogger(logger_name.strip())
            logger.disabled = True
You can then call pytest in the following way:
pytest --logging-filter matplotlib,chardet,...
By default, pytest hides all logs and provides the caplog fixture to inspect log output in your test cases. This is quite powerful if you are looking for specific log lines, so the question is also whether you need to see those logs at all in your test suite.
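For illustration, a minimal sketch of the caplog approach (the logger name and the message are made up to simulate a chatty third-party library):

import logging

def test_noisy_library(caplog):
    noisy = logging.getLogger("matplotlib")  # stand-in for a chatty third-party logger
    with caplog.at_level(logging.DEBUG, logger="matplotlib"):
        noisy.debug("findfont: scoring fonts...")  # simulated third-party chatter
    # The message is captured rather than printed; assert on it instead.
    assert "findfont" in caplog.text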

Adding a log filter to conftest.py looks like it might be useful and I'll come back to that at some point in the future. For now though, we've just gone for silencing the logs in the application. We don't see them at any point when the app is running, not just during testing.
# Hide verbose third-party logs
for log_name in ('matplotlib', 'fiona.env', 'fiona._env', 'PIL', 'chardet.charsetprober'):
    other_log = logging.getLogger(log_name)
    other_log.setLevel(logging.WARNING)
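For the record, a filter-based variant might look like the sketch below (ThirdPartyFilter is hypothetical; note the filter is attached to the handlers, because a filter placed on a parent logger is bypassed by records emitted on child loggers such as matplotlib.font_manager):

import logging

class ThirdPartyFilter(logging.Filter):
    """Reject records from the listed loggers and their children."""
    NOISY = ('matplotlib', 'fiona.env', 'fiona._env', 'PIL', 'chardet.charsetprober')

    def filter(self, record):
        return not record.name.startswith(self.NOISY)

# Attach the filter to every root handler so that records propagated
# from child loggers are filtered as well.
for handler in logging.getLogger().handlers:
    handler.addFilter(ThirdPartyFilter())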

Related

Pytest logging ignores dependency warnings

I have a simple Python script that leads to a pandas SettingWithCopyWarning:
import logging
import pandas as pd

def method():
    logging.info("info")
    logging.warning("warning1")
    logging.warning("warning2")
    df = pd.DataFrame({"1": [1, 0], "2": [3, 4]})
    df[df["1"] == 1]["2"] = 100

if __name__ == "__main__":
    method()
When I run the script, I get what I expect
WARNING:root:warning1
WARNING:root:warning2
main.py:11: SettingWithCopyWarning: ...
Now I write a pytest unit test for it:
from src.main import method

def test_main():
    method()
and activate the logging in my pytest.ini
[pytest]
log_cli = true
log_cli_level = DEBUG
========================= 1 passed, 1 warning in 0.27s =========================
Process finished with exit code 0
-------------------------------- live log call ---------------------------------
INFO     root:main.py:7 info
WARNING  root:main.py:8 warning1
WARNING  root:main.py:9 warning2
PASSED                                                                   [100%]
The SettingWithCopyWarning is counted, while my logging warnings are not. Why is that? How do I control that? Via a configuration in pytest.ini?
Even worse: the SettingWithCopyWarning is not printed. I want to see it, and perhaps even test on it. How can I see warnings that are generated by dependent packages? Via a configuration in pytest.ini?
Thank you!
The pytest live-logging feature only shows messages emitted through the standard logging facility.
Warnings raised via the warnings module are captured separately by pytest by default and reported in the warnings summary after the live log messages.
To log warnings raised through the warnings.warn function immediately when they are emitted, you need to tell pytest not to capture them.
In your pytest.ini, add
[pytest]
log_cli = true
log_level = DEBUG
log_cli_level = DEBUG
addopts=--disable-warnings
Then, in your tests/conftest.py, write a hook that routes warnings through the standard logging facility:
import logging

def pytest_runtest_call(item):
    logging.captureWarnings(True)
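With that in place, warnings raised through the warnings module are routed to the standard py.warnings logger and show up in the live log output. A minimal sketch of a test demonstrating this (the test itself is made up):

import warnings

def test_emits_dependency_warning():
    # Because of logging.captureWarnings(True), this warning is redirected
    # to the 'py.warnings' logger and appears in the live log output.
    warnings.warn("something is deprecated", DeprecationWarning)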

Adding system or utility live logs for pytest

I'm building a pytest fixture that starts some long process before any tests are executed.
I would like to have utility logs reporting on that process while it's taking place, after it's been initiated by pytest.
The fixture looks something like this:
import logging
import pytest

logger = logging.getLogger(__name__)

@pytest.fixture
def some_long_process():
    logger.info("Process started")
    logger.info("Process ongoing..")
    yield
    logger.info("Process ended")
The problem is that pytest automatically captures all output during test runs, and although you can enable live logging, I wish I could enable it specifically for my logger.
Is there a way to do that? Or alternatively, a way to add system logs to pytest?
Basically you just need to enable live logs.
It can be done directly at the command line by passing --log-cli-level:
pytest --log-cli-level=INFO test_log_feature.py
# ----- live log setup -------------------------------------------
# INFO test_log_feature:test_log_feature.py:9 Process started
# INFO test_log_feature:test_log_feature.py:10 Process ongoing..
# PASSED [100%]
# ----- live log teardown ----------------------------------------
# INFO test_log_feature:test_log_feature.py:12 Process ended
You can do the same in a pytest.ini configuration file.
[pytest]
log_cli = true
log_cli_level = INFO
Update
I want live logging solely for my logger, is that possible?
I think in this case you have to play with loggers: define separate loggers according to your needs, and set log_cli_level to the lowest level you want to see. Here is an example to illustrate this solution.
import logging
import pytest

fixture_logger = logging.getLogger("fixture")
logger = logging.getLogger(__name__)
# can also be set by configuration
logger.setLevel(logging.WARNING)

@pytest.fixture()
def some_long_process(caplog):
    fixture_logger.info("Process started")
    fixture_logger.info("Process ongoing..")
    yield
    fixture_logger.info("Process ended")

def test_long_process(some_long_process, caplog, request):
    logger.info("Log I don't want to see")
Which gives:
pytest --log-cli-level=INFO test_log_feature.py
# --- live log setup ---
# INFO fixture:test_log_feature.py:10 Process started
# INFO fixture:test_log_feature.py:11 Process ongoing..
# PASSED [100%]
# ---- live log teardown ---
# INFO fixture:test_log_feature.py:13 Process ended

pytest - specify log level from the CLI command running the tests

My team and I are using pytest + Jenkins to automate our product testing. We have been using the standard logging lib of Python to get proper log messages during testing, before and after each test, etc. We have multiple layers of logging: we log ERROR, WARNING, INFO and DEBUG, and the default level for our logger is INFO. We create the logger object in the primary setup of the tests and pass it down to each object created, so all our logs go to the same logger.
So far, when developing a new feature or test, we work in DEBUG mode locally and change it back to INFO when submitting new code to our SVN. I am trying to add an option to change the logging level from the CLI, but I haven't found anything easy to implement. I've considered using fixtures, but from what I understand those are only for the tests themselves, not for the setup/teardown phases, and the log is created regardless of the tests.
Any hack or idea on how to add a pytest CLI option to change the logging level?
Try --log-cli-level=INFO
like:
pytest -vv -s --log-cli-level=INFO --log-cli-format="%(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)" --log-cli-date-format="%Y-%m-%d %H:%M:%S" ./test_file.py
This is now built into pytest. Just add '--log-level=' to the command line when running your test. For example:
pytest --log-level=INFO
Documentation updates can be found here: https://docs.pytest.org/en/latest/logging.html
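If you'd rather not pass the flag on every run, the same level can be set persistently via pytest's log_level ini option:

[pytest]
log_level = INFO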
Pytest 3.5 introduced a parameter
pytest --log-cli-level=INFO
For versions prior to 3.5 you can combine pytest command line options with the standard way of setting the Python log level from the command line to do the following:
Add the following to conftest.py:
import pytest
import logging

def pytest_addoption(parser):
    parser.addoption(
        "--log", action="store", default="WARNING", help="set logging level"
    )

@pytest.fixture
def logger():
    loglevel = pytest.config.getoption("--log")
    logger = logging.getLogger(__name__)
    numeric_level = getattr(logging, loglevel.upper(), None)
    if not isinstance(numeric_level, int):
        raise ValueError('Invalid log level: %s' % loglevel)
    logger.setLevel(numeric_level)
    return logger
and then request the logger fixture in your tests
def test_bla(logger):
    assert True
    logger.info("True is True")
Then run pytest like
py.test --log INFO
to set the log level to INFO.

Can I determine if an application is being run normally vs via a unittest in Python?

I've inherited an application that desperately needs some unit testing. The problem I have is that the application has a log set up that logs to both the console and a file like so:
import logging
from logging.handlers import RotatingFileHandler

def setup_logging(file_name, file_level=logging.INFO, console_level=logging.INFO):
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    # Create console handler
    console_log = logging.StreamHandler()
    console_log.setLevel(console_level)
    formatter = logging.Formatter('%(asctime)s - %(levelname)-s - %(name)-s - %(message)s')
    console_log.setFormatter(formatter)
    logger.addHandler(console_log)
    # Log file handler
    file_log = RotatingFileHandler('logs/%s.log' % file_name, 'a', MAX_LOG_SIZE, MAX_LOGS_SAVE, encoding='UTF-8')
    file_log.setLevel(file_level)
    formatter = logging.Formatter('%(asctime)s - %(levelname)-s - %(name)-s - %(message)s')
    file_log.setFormatter(formatter)
    logger.addHandler(file_log)
    return logger
The logging in the application is fairly extensive too. When I run a unit test and it triggers a log message, my unit test output is messed up:
>python tests.py
2014-10-23 09:47:28,857 - INFO - funct_1 - args =>
.2014-10-23 09:47:28,871 - INFO - funct_1 - args => name=unicode<Andy>
.
----------------------------------------------------------------------
Ran 2 tests in 0.040s
OK
Is there a way to determine if an application is being run via unittest so that I can remove the console log events?
This is a flask application, but I don't think that will matter for the end result here.
At a guess.
myapp/main.py
testing = __name__ != "__main__"
myapp/logging.py
from myapp.main import testing

if testing:
    ...
Or you could try:
myapp/testing.py
import myapp.main
import __main__
testing = vars(myapp.main) is not vars(__main__)
del myapp
del __main__
# import myapp.testing rather than myapp.main
Though what you really want is a way to tell your logging module to produce different loggers depending on how it has been set up. One way of doing that is to initially have the logging module produce little or no logging, and then have the first thing main does be to set up the logging module with the appropriate level of logging, before any loggers have been produced.
eg.
myapp/main.py
# start of file
import myapp.logging
myapp.logging.setlevel(myapp.logging.DEBUG)
# rest of imports
# rest of main
Alternatively you could have logging on at a high level by default and turn it off for testing. One way of doing this is creating a parent package inside your test source directory. Put all your test modules inside the parent package and then create an __init__ module. This init module will be run before any of your test modules are even loaded. Thus, it can turn off logging before any of your test modules even exist in memory.
eg.
testsrc/parentpackage/__init__.py
import myapp.logging
myapp.logging.setlevel(myapp.logging.ERROR)
Since this is a Flask application, you should change which configuration is loaded at application startup; see the Flask documentation on configuration handling. There are generally a few ways to handle this. Once you've got that all set up, you can check app.config['TESTING'] or do any one of a million things.
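As a sketch of that idea (a hypothetical variation on the question's setup_logging, assuming it is handed the Flask app object):

import logging

def setup_logging(app, console_level=logging.INFO):
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    # Only attach the console handler when not running under test.
    if not app.config.get('TESTING'):
        console_log = logging.StreamHandler()
        console_log.setLevel(console_level)
        console_log.setFormatter(logging.Formatter(
            '%(asctime)s - %(levelname)-s - %(name)-s - %(message)s'))
        logger.addHandler(console_log)
    return logger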
I would set the logging to 'logging.ERROR' for both file and console logging when you run the unittests.

How do I test if a certain log message is logged in a Django test case?

I want to ensure that a certain condition in my code causes a log message to be written to the Django log. How would I do this with the Django unit testing framework?
Is there a place where I can check logged messages, similarly to how I can check sent emails? My unit test extends django.test.TestCase.
Use the mock module to mock out the logging module or the logger object. When you've done that, check the arguments with which the logging function is called.
For example, if your code looks like this:
import logging
logger = logging.getLogger('my_logger')
logger.error("Your log message here")
the test would look like this:
from unittest.mock import patch  # For Python 2.x use: from mock import patch

@patch('this.is.my.module.logger')
def test_check_logging_message(self, mock_logger):
    # ... call the code under test here, then:
    mock_logger.error.assert_called_with("Your log message here")
You can also use assertLogs from django.test.TestCase
When your code is
import logging

logger = logging.getLogger('my_logger')

def code_that_throws_error_log():
    logger.error("Your log message here")
this is the test code:
with self.assertLogs(logger='my_logger', level='ERROR') as cm:
    code_that_throws_error_log()
self.assertIn(
    "ERROR:my_logger:Your log message here",
    cm.output
)
This lets you avoid patching just for logs.
The common way of mocking out the logger object (see the splendid chap Simeon Visser's answer) is slightly tricky in that it requires the test to mock out the logging in all the places it's done. This is awkward if the logging comes from more than one module, or is in code you don't own. If the module the logging comes from changes name, it will break your tests.
The splendid 'testfixtures' package includes tools to add a logging handler which captures all generated log messages, no matter where they come from. The captured messages can later be interrogated by the test. In its simplest form:
Assuming code-under-test, which logs:
import logging
logger = logging.getLogger()
logger.info('a message')
logger.error('an error')
A test for this would be:
from testfixtures import LogCapture
with LogCapture() as l:
    call_code_under_test()

l.check(
    ('root', 'INFO', 'a message'),
    ('root', 'ERROR', 'an error'),
)
The word 'root' indicates the logging was sent via a logger created using logging.getLogger() (i.e. with no args.) If you pass an arg to getLogger (__name__ is conventional), that arg will be used in place of 'root'.
The test does not care what module created the logging. It could be a sub-module called by our code-under-test, including 3rd party code.
The test asserts about the actual log message that was generated, as opposed to the technique of mocking, which asserts about the args that were passed. These will differ if the logging.info call uses '%s' format strings with additional arguments that you don't expand yourself, e.g. logging.info('total=%s', len(items)) instead of logging.info('total=%s' % len(items)). The former is what you should use: it's no extra work, and it allows hypothetical future log aggregation services such as Sentry to work properly, since they can see that "total=12" and "total=43" are two instances of the same log message. (That is also why pylint warns about the latter form of the logging.info call.)
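To make that difference concrete, a small hypothetical illustration:

import logging
from unittest import mock
from testfixtures import LogCapture

logger = logging.getLogger()

with mock.patch.object(logger, 'info') as mock_info:
    logger.info('total=%s', 12)
    # The mock sees the unexpanded template plus its args...
    mock_info.assert_called_with('total=%s', 12)

with LogCapture() as capture:
    logger.info('total=%s', 12)
    # ...whereas LogCapture checks the rendered message.
    capture.check(('root', 'INFO', 'total=12'))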
LogCapture includes facilities for log filtering and the like. Its parent 'testfixtures' package, written by Chris Withers, another splendid chap, includes many other useful testing tools. Documentation is here: http://pythonhosted.org/testfixtures/logging.html
Django has a nice context manager function called patch_logger.
from django.test.utils import patch_logger
then in your test case:
with patch_logger('logger_name', 'error') as cm:
    self.assertIn("Error message", cm)
where:
logger_name is the logger name (duh)
error is the log level
cm is the list of all log messages
More details:
https://github.com/django/django/blob/2.1/django/test/utils.py#L638
It should work the same for django < 2.0, independently of python version (as long as it's supported by dj)
If you're using test classes, you can use following solution:
import logging
from django.test import TestCase

class MyTest(TestCase):
    @classmethod
    def setUpClass(cls):
        super(MyTest, cls).setUpClass()
        cls.logging_error = logging.error
        logging.error = cls._error_log

    @classmethod
    def tearDownClass(cls):
        super(MyTest, cls).tearDownClass()
        logging.error = cls.logging_error

    @classmethod
    def _error_log(cls, msg):
        cls.logger = msg

    def test_logger(self):
        self.assertIn('Message', self.logger)
This method replaces the error function of the logging module with a custom method, for test purposes only, and stores the logged message in cls.logger, which is available in every test case as self.logger. At the end it reverts the change by restoring the original error function of the logging module.
