I have set logging.captureWarnings(True) in an application, and would like to test if warnings are logged correctly. I'm having difficulties understanding some of the behavior I'm seeing where tests are influencing each other in ways that I don't quite get.
Here is an example test suite which reproduces the behavior I'm seeing:
test_warning_logs.py
import warnings
import logging
def test_a(caplog):
logging.captureWarnings(True)
logging.basicConfig()
warnings.warn("foo")
assert "foo" in caplog.text
def test_b(caplog):
logging.captureWarnings(True)
logging.basicConfig()
warnings.warn("foo")
assert "foo" in caplog.text
Both tests are identical. When run in isolation (pytest test_warning_logs.py -k test_a, pytest test_warning_logs.py -k test_b), they each pass. When both of them are executed in the same run (pytest test_warning_logs.py), only the first one will pass:
============== test session starts ========================
platform linux -- Python 3.10.2, pytest-7.2.1, pluggy-1.0.0
rootdir: /home/me
plugins: mock-3.10.0, dependency-0.5.1
collected 2 items
test_warning_logs.py .F [100%]
==================== FAILURES =============================
_____________________ test_b ______________________________
caplog = <_pytest.logging.LogCaptureFixture object at 0x7f8041857c40>
def test_b(caplog):
logging.captureWarnings(True)
logging.basicConfig()
warnings.warn("foo")
> assert "foo" in caplog.text
E AssertionError: assert 'foo' in ''
E + where '' = <_pytest.logging.LogCaptureFixture object at 0x7f8041857c40>.text
[...]
Additional Information
First I thought that logging.captureWarnings and logging.basicConfig aren't idempotent and that running them more than once was the issue, but test_b still fails if you remove both calls from it.
My current assumption is that it's a pytest issue, because when the code is executed without pytest, both warnings are logged:
# add this block to the bottom of test_warning_logs.py
if __name__ == '__main__':
from unittest.mock import MagicMock
test_a(MagicMock(text="foo"))
test_b(MagicMock(text="foo"))
$ python test_warning_logs.py
WARNING:py.warnings:/home/me/test_warning_logs.py:9: UserWarning: foo
warnings.warn("foo")
WARNING:py.warnings:/home/me/test_warning_logs.py:17: UserWarning: foo
warnings.warn("foo")
If logging the warnings through the logging module instead of the warnings module is an option for you, this issue goes away.
import logging
def test_a(caplog):
logging.captureWarnings(True)
logging.basicConfig()
logging.warning("foo")
assert "foo" in caplog.text
def test_b(caplog):
logging.captureWarnings(True)
logging.basicConfig()
# logging.warn is deprecated; use logging.warning() instead
logging.warn("foo")
assert "foo" in caplog.text
I don't know why it doesn't work with the warnings module. According to its documentation:
Repetitions of a particular warning for the same source location are
typically suppressed.
So I assumed this is what is happening here, but even calling warnings.resetwarnings() does not change the behavior.
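If the goal is only to verify that the warning is raised, rather than that it ends up in the log, pytest's own pytest.warns sidesteps the interaction with logging.captureWarnings entirely. A minimal sketch:
import warnings
import pytest
def test_b():
    # pytest.warns asserts that the warning is emitted, independent of any
    # logging configuration, so test ordering no longer matters.
    with pytest.warns(UserWarning, match="foo"):
        warnings.warn("foo")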
Related
I have a simple python script that leads to a pandas SettingWithCopyWarning:
import logging
import pandas as pd
def method():
logging.info("info")
logging.warning("warning1")
logging.warning("warning2")
df = pd.DataFrame({"1": [1, 0], "2": [3, 4]})
df[df["1"] == 1]["2"] = 100
if __name__ == "__main__":
method()
When I run the script, I get what I expect:
WARNING:root:warning1
WARNING:root:warning2
main.py:11: SettingWithCopyWarning: ...
Now I write a pytest unit test for it:
from src.main import method
def test_main():
method()
and activate logging in my pytest.ini:
[pytest]
log_cli = true
log_cli_level = DEBUG
-------------------------------- live log call ---------------------------------
INFO root:main.py:7 info
WARNING root:main.py:8 warning1
WARNING root:main.py:9 warning2
PASSED                                                                   [100%]
========================= 1 passed, 1 warning in 0.27s =========================
Process finished with exit code 0
The SettingWithCopyWarning is counted, while my logging warnings are not. Why is that? How do I control it? Via a configuration in the pytest.ini?
Even worse: the SettingWithCopyWarning is not printed. I want to see it, and perhaps even test for it. How can I see warnings that are generated by dependent packages? Via a configuration in the pytest.ini?
Thank you!
Everything shown by the pytest live-log feature goes through the standard logging facility.
Warnings emitted through the warnings facility are captured separately by pytest by default and reported in the warnings summary after the live log messages.
To log warnings raised through the warnings.warn function the moment they are emitted, you need to tell pytest not to capture them.
In your pytest.ini, add
[pytest]
log_cli = true
log_level = DEBUG
log_cli_level = DEBUG
addopts=--disable-warnings
Then in your tests/conftest.py, write a hook to capture warnings in the tests using the standard logging facility.
import logging
def pytest_runtest_call(item):
logging.captureWarnings(True)
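As a quick sanity check of that setup (the test name and message below are made up): with captureWarnings active, a warning raised inside a test is routed through the py.warnings logger and should now appear in the live log output instead of the warnings summary.
import warnings
def test_warning_shows_in_live_log():
    # Routed through the 'py.warnings' logger by logging.captureWarnings(True),
    # which the pytest_runtest_call hook above enables for every test.
    warnings.warn("legacy code path used", DeprecationWarning)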
We've just switched from nose to pytest and there doesn't seem to be an option to suppress third party logging. In nose config, we had the following line:
logging-filter=-matplotlib,-chardet.charsetprober,-PIL,-fiona.env,-fiona._env
Some of those logs are very chatty, especially matplotlib's, and we don't want to see their output, just output from our own logs.
I can't find an equivalent setting in pytest though. Is it possible? Am I missing something? Thanks.
The way I do it is to create, in conftest.py, a list of the logger names that should be disabled.
For example, if I want to disable a logger called app, then I can write a conftest.py as below:
import logging
disable_loggers = ['app']
def pytest_configure():
for logger_name in disable_loggers:
logger = logging.getLogger(logger_name)
logger.disabled = True
And then run my test:
import logging
def test_brake():
logger = logging.getLogger("app")
logger.error("Hello there")
assert True
collecting ... collected 1 item
test_car.py::test_brake PASSED                                           [100%]
============================== 1 passed in 0.01s ===============================
Hello there does not appear in the output, because the logger named app was disabled in conftest.py.
However, if I change my logger name in the test to app2 and run the test again:
import logging
def test_brake():
logger = logging.getLogger("app2")
logger.error("Hello there")
assert True
collecting ... collected 1 item
test_car.py::test_brake
-------------------------------- live log call ---------------------------------
ERROR    app2:test_car.py:5 Hello there
PASSED                                                                   [100%]
============================== 1 passed in 0.01s ===============================
As you can see, Hello there shows up, because the logger named app2 is not disabled.
Conclusion
Basically, you could do the same, but just add your undesired logger names to conftest.py as below:
import logging
disable_loggers = ['matplotlib', 'chardet.charsetprober']  # add more as needed
def pytest_configure():
for logger_name in disable_loggers:
logger = logging.getLogger(logger_name)
logger.disabled = True
Apart from the ability to tune logging levels or suppress log output entirely, which I'm sure you've read about in the docs, the only way that comes to mind is to configure your logging in general.
Assuming that all of those packages use the standard library logging facilities, you have various options of configuring what gets logged. Please take a look at the advanced tutorial for a good overview of your options.
If you don't want to configure logging for your application in general but only during testing, you can do so using the pytest_configure or pytest_sessionstart hooks, placed in a conftest.py at the root of your test file hierarchy.
I see three options:
The brute-force way is to use the default behaviour of fileConfig or dictConfig and disable all existing loggers. In your conftest.py:
import logging.config
def pytest_sessionstart():
    # 'disable_existing_loggers' already defaults to True; note that
    # dictConfig always requires the 'version' key.
    logging.config.dictConfig({'version': 1, 'disable_existing_loggers': True})
The more subtle approach is to change the level of individual loggers or disable them. As an example:
import logging.config
def pytest_sessionstart():
    logging.config.dictConfig({
        'version': 1,
        'disable_existing_loggers': False,
        'loggers': {
            # Add any noisy loggers here with a higher log level.
            'matplotlib': {'level': 'ERROR'},
        },
    })
Lastly, you can use the pytest_addoption hook to add a command line option similar to the one you mention. Again, at the root of your test hierarchy put the following in a conftest.py:
import logging
def pytest_addoption(parser):
    parser.addoption(
        "--logging-filter",
        default="",
        help="Provide a comma-separated list of logger names that will be "
             "disabled.",
    )
def pytest_sessionstart(session):
    names = session.config.getoption("--logging-filter")
    for logger_name in filter(None, (name.strip() for name in names.split(","))):
        # Use `logger_name[1:]` here if you want to keep nose's `-name` syntax.
        logging.getLogger(logger_name).disabled = True
You can then call pytest in the following way:
pytest --logging-filter matplotlib,chardet,...
The default approach by pytest is to hide all logs but provide the caplog fixture to inspect log output in your test cases. This is quite powerful if you are looking for specific log lines. So the question is also why you need to see those logs at all in your test suite?
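For reference, a minimal sketch of that caplog approach (the logger name "app" is only illustrative):
import logging
def test_emits_expected_log(caplog):
    # Capture INFO and above for the duration of the block, then assert
    # on the combined log text.
    with caplog.at_level(logging.INFO):
        logging.getLogger("app").info("job finished")
    assert "job finished" in caplog.text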
Adding a log filter to conftest.py looks like it might be useful, and I'll come back to that at some point. For now, though, we've just gone for silencing the logs in the application itself, so we never see them while the app is running, not just during testing.
import logging
# Hide verbose third-party logs
for log_name in ('matplotlib', 'fiona.env', 'fiona._env', 'PIL', 'chardet.charsetprober'):
    other_log = logging.getLogger(log_name)
    other_log.setLevel(logging.WARNING)
I'm trying to use pytest to test whether my function is logging the expected text, as addressed in this question (the pyunit equivalent would be assertLogs). Following the pytest logging documentation, I am passing the caplog fixture to the tester. The documentation states:
Lastly all the logs sent to the logger during the test run are made available on the fixture in the form of both the logging.LogRecord instances and the final log text.
The module I'm testing is:
import logging
logger = logging.getLogger(__name__)
def foo():
logger.info("Quinoa")
The tester is:
def test_foo(caplog):
from mwe16 import foo
foo()
assert "Quinoa" in caplog.text
I would expect this test to pass. However, running the test with pytest test_mwe16.py shows a test failure due to caplog.text being empty:
============================= test session starts ==============================
platform linux -- Python 3.7.3, pytest-5.3.0, py-1.8.0, pluggy-0.13.0
rootdir: /tmp
plugins: mock-1.12.1, cov-2.8.1
collected 1 item
test_mwe16.py F [100%]
=================================== FAILURES ===================================
___________________________________ test_foo ___________________________________
caplog = <_pytest.logging.LogCaptureFixture object at 0x7fa86853e8d0>
def test_foo(caplog):
from mwe16 import foo
foo()
> assert "Quinoa" in caplog.text
E AssertionError: assert 'Quinoa' in ''
E + where '' = <_pytest.logging.LogCaptureFixture object at 0x7fa86853e8d0>.text
test_mwe16.py:4: AssertionError
============================== 1 failed in 0.06s ===============================
Why is caplog.text empty despite foo() sending text to a logger? How do I use pytest such that caplog.text does capture the logged text, or otherwise verify that the text is being logged?
The documentation is unclear here. From trial and error, and notwithstanding the "all the logs sent to the logger during the test run are made available" text, caplog only captures records at or above a certain level (WARNING and above by default, since that is the root logger's default level). To actually capture all logs, one needs to set the level for captured log messages using caplog.set_level or the caplog.at_level context manager, so that the test module becomes:
import logging
def test_foo(caplog):
from mwe16 import foo
with caplog.at_level(logging.DEBUG):
foo()
assert "Quinoa" in caplog.text
Now, the test passes:
============================= test session starts ==============================
platform linux -- Python 3.7.3, pytest-5.3.0, py-1.8.0, pluggy-0.13.0
rootdir: /tmp
plugins: mock-1.12.1, cov-2.8.1
collected 1 item
test_mwe16.py . [100%]
============================== 1 passed in 0.04s ===============================
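The same can be done with caplog.set_level, which pytest restores automatically at the end of the test:
import logging
def test_foo(caplog):
    from mwe16 import foo
    caplog.set_level(logging.DEBUG)  # capture everything from DEBUG upwards
    foo()
    assert "Quinoa" in caplog.text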
In the logger setup, set logger.propagate = True.
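caplog's capturing handler sits on the root logger, so a logger configured with propagate = False never reaches it. A minimal sketch of re-enabling propagation, using the mwe16 module from the question above:
# mwe16.py
import logging
logger = logging.getLogger(__name__)
logger.propagate = True  # let records bubble up to the root logger, where caplog listens
def foo():
    logger.info("Quinoa")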
This is a limitation of pytest, cf. the feature request. I've resorted to creating a fixture based on _pytest.logging.LogCaptureFixture with a context manager, and using it instead of caplog.
Some code:
import logging
from contextlib import contextmanager

import pytest
from _pytest.logging import LogCaptureHandler, _remove_ansi_escape_sequences

class CatchLogFixture:
    """Fixture to capture logs regardless of the propagate flag. See
    https://github.com/pytest-dev/pytest/issues/3697 for details.
    """

    @property
    def text(self) -> str:
        return _remove_ansi_escape_sequences(self.handler.stream.getvalue())

    @contextmanager
    def catch_logs(self, level: int, logger: logging.Logger):
        """Set the level for capturing of logs. After the end of the 'with'
        statement, the level is restored to its original value.
        """
        self.handler = LogCaptureHandler()
        orig_level = logger.level
        logger.setLevel(level)
        logger.addHandler(self.handler)
        try:
            yield self
        finally:
            logger.setLevel(orig_level)
            logger.removeHandler(self.handler)

@pytest.fixture
def capture_log():
    return CatchLogFixture().catch_logs
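A usage sketch (the logger name and message are illustrative; it assumes the capture_log fixture above is defined in conftest.py):
import logging
# A logger that does not propagate, which plain caplog cannot capture.
app_logger = logging.getLogger("my_app")
app_logger.propagate = False
def test_captures_without_propagation(capture_log):
    with capture_log(logging.INFO, app_logger) as captured:
        app_logger.info("Quinoa")
    assert "Quinoa" in captured.text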
I am trying to check the exit code of a script I'm writing in Python 3 on a Mac (10.14.4). When I run the test it doesn't fail, which I think is wrong, but I can't see what I've got wrong.
The test file looks like this:
import pytest
import os
import sys
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import my_script
class TestMyScript():
def test_exit(self):
with pytest.raises(SystemExit) as pytest_wrapped_e:
my_script.main()
assert pytest_wrapped_e.type == SystemExit
def test_exit_code(self):
with pytest.raises(SystemExit) as pytest_wrapped_e:
my_script.main()
self.assertEqual(pytest_wrapped_e.exception.code, 42)
My script looks like:
#!/usr/bin/env python3
import sys
def main():
print('Hello World!')
sys.exit(0)
if __name__ == '__main__':
main()
The output I get is:
$ py.test -v
============================= test session starts ==============================
platform darwin -- Python 3.7.3, pytest-3.10.1, py-1.8.0, pluggy-0.9.0 -- /usr/local/opt/python/bin/python3.7
cachedir: .pytest_cache
rootdir: /Users/robertpostill/software/gateway, inifile:
plugins: shutil-1.6.0
collected 2 items
test/test_git_refresh.py::TestGitRefresh::test_exit PASSED [ 50%]
test/test_git_refresh.py::TestGitRefresh::test_exit_code PASSED [100%]
=========================== 2 passed in 0.02 seconds ===========================
$
I would expect the second test (test_exit_code) to fail, as the exit call is getting a code of 0, not 42. But for some reason, the assert is happy whatever value I put in the sys.exit call.
Good question: that's because your assert on the exit code is never reached. When exit() is called, the program is done (at least within the with block); it turns off the lights, packs up its bags, and goes home. No further statements run. To see this, add asserts before and after the call to main:
def test_exit_code(self):
    with pytest.raises(SystemExit) as pytest_wrapped_e:
        assert 0 == 1  # This will make the test fail
        my_script.main()
        assert 0 == 1  # This is never reached, because main() exits
        assert pytest_wrapped_e.value.code == 42  # Neither is this
A test passes if no asserts fail and nothing breaks, so in your case both tests passed because the assert was never hit.
To fix this, pull your assert out of the with statement:
def test_exit_code(self):
    with pytest.raises(SystemExit) as pytest_wrapped_e:
        my_script.main()
    assert pytest_wrapped_e.value.code == 42
Note that this also fixes the pytest syntax: pytest's ExceptionInfo object exposes the raised exception as .value (not .exception), and plain assert replaces unittest's self.assertEqual, which doesn't exist on a class that isn't a unittest.TestCase.
See: Testing sys.exit() with pytest
I am trying to write a test, using pytest, that would check that a specific function is writing out a warning to the log when needed. For example:
In module.py:
import logging
LOGGER = logging.getLogger(__name__)
def run_function():
if something_bad_happens:
LOGGER.warning('Something bad happened!')
In test_module.py:
import logging
from module import run_function
LOGGER = logging.getLogger(__name__)
def test_func():
LOGGER.info('Testing now.')
run_function()
~ somehow get the stdout/log of run_function() ~
assert 'Something bad happened!' in output
I have seen that you can supposedly get the log or the stdout/stderr with pytest by passing capsys or caplog as an argument to the test, and then using either capsys.readouterr() or caplog.records to access the output.
However, when I try those methods, I only see "Testing now.", and not "Something bad happened!". It seems like the logging output that is happening within the call to run_function() is not accessible from test_func()?
The same thing happens if I try a more direct method, such as sys.stdout.getvalue(), which is confusing, because run_function() is writing to the terminal, so I would think that output would be accessible from stdout...?
Basically, does anyone know how I can access that 'Something bad happened!' from within test_func()?
I don't know why this didn't work when I tried it before, but this solution works for me now:
In test_module.py:
import logging
from module import run_function
LOGGER = logging.getLogger(__name__)
def test_func(caplog):
LOGGER.info('Testing now.')
run_function()
assert 'Something bad happened!' in caplog.text
test_module.py should look like this:
import logging
from module import run_function
LOGGER = logging.getLogger(__name__)
def test_func(caplog):
with caplog.at_level(logging.WARNING):
run_function()
assert 'Something bad happened!' in caplog.text
or, alternatively:
import logging
from module import run_function
LOGGER = logging.getLogger(__name__)
def test_func(caplog):
caplog.set_level(logging.WARNING)
run_function()
assert 'Something bad happened!' in caplog.text
Documentation for pytest capture logging is here
In your logging setup, check that propagate is set to True; otherwise the caplog handler cannot see the log message.
I also want to add to this thread for anybody coming across this in the future. You may need to use
@pytest.fixture(autouse=True)
as a decorator on your setup fixture so that it runs for every test alongside the caplog fixture.
I had the same issue. I just explicitly passed the name of the module instead of __name__ inside the test function, and set the propagate attribute to True.
Note: "module" should be the directory in which you have the scripts to be tested.
def test_func():
LOGGER = logging.getLogger("module")
LOGGER.propagate = True
run_function()
~ somehow get the stdout/log of run_function() ~
assert 'Something bad happened!' in output
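For completeness, a self-contained version of that idea using caplog (assuming, as in the question, that the code under test lives in a package named module):
import logging
from module import run_function
def test_func(caplog):
    logger = logging.getLogger("module")
    logger.propagate = True  # make sure records reach caplog's handler
    with caplog.at_level(logging.WARNING, logger="module"):
        run_function()
    assert 'Something bad happened!' in caplog.text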