I'm building a pytest fixture that starts some long process before any tests are executed.
I would like utility logs reporting on that process while it is running, after being initiated by pytest.
The fixture looks something like this:
import pytest
import logging

logger = logging.getLogger(__name__)

@pytest.fixture
def some_long_process():
    logger.info("Process started")
    logger.info("Process ongoing..")
    yield
    logger.info("Process ended")
The problem is that pytest automatically captures all output during test runs, and although you can enable live logging, I wish I could enable live logging specifically for my logger.
Is there a way to do that? Or alternatively, a way to add system logs to pytest?
Basically you just need to enable live logs.
It can be done directly at the command line by passing --log-cli-level:
pytest --log-cli-level=INFO test_log_feature.py
# ----- live log setup -------------------------------------------
# INFO test_log_feature:test_log_feature.py:9 Process started
# INFO test_log_feature:test_log_feature.py:10 Process ongoing..
# PASSED [100%]
# ----- live log teardown ----------------------------------------
# INFO test_log_feature:test_log_feature.py:12 Process ended
You can do the same in a pytest.ini configuration file.
[pytest]
log_cli=true
log_cli_level=INFO
Update
I want live logging solely for my logger, is that possible?
I think in this case you have to play with loggers: define separate loggers according to your needs, and set --log-cli-level to the lowest level you want to see. Here is an example to illustrate this solution.
import logging
import pytest
fixture_logger = logging.getLogger("fixture")
logger = logging.getLogger(__name__)
# can also be set by configuration
logger.setLevel(logging.WARNING)
@pytest.fixture()
def some_long_process(caplog):
    fixture_logger.info("Process started")
    fixture_logger.info("Process ongoing..")
    yield
    fixture_logger.info("Process ended")

def test_long_process(some_long_process, caplog, request):
    logger.info("Log I don't want to see")
This gives:
pytest --log-cli-level=INFO test_log_feature.py
# --- live log setup ---
# INFO fixture:test_log_feature.py:10 Process started
# INFO fixture:test_log_feature.py:11 Process ongoing..
# PASSED [100%]
# ---- live log teardown ---
# INFO fixture:test_log_feature.py:13 Process ended
Related
I have a simple Python script that leads to a pandas SettingWithCopyWarning:
import logging
import pandas as pd
def method():
    logging.info("info")
    logging.warning("warning1")
    logging.warning("warning2")
    df = pd.DataFrame({"1": [1, 0], "2": [3, 4]})
    df[df["1"] == 1]["2"] = 100

if __name__ == "__main__":
    method()
When I run the script, I get what I expect
WARNING:root:warning1
WARNING:root:warning2
main.py:11: SettingWithCopyWarning: ...
Now I write a pytest unit test for it:
from src.main import method
def test_main():
    method()
and activate the logging in my pytest.ini
[pytest]
log_cli = true
log_cli_level = DEBUG
========================= 1 passed, 1 warning in 0.27s =========================
Process finished with exit code 0
-------------------------------- live log call ---------------------------------
INFO root:main.py:7 info
WARNING root:main.py:8 warning1
WARNING root:main.py:9 warning2
PASSED [100%]
The SettingWithCopyWarning is counted while my logging warnings are not. Why is that? How do I control that? Via a configuration in the pytest.ini?
Even worse: the SettingWithCopyWarning is not printed. I want to see it and perhaps even test on it. How can I see warnings that are generated by dependent packages? Via a configuration in the pytest.ini?
Thank you!
All log warnings logged using the pytest live logs feature are done with the standard logging facility.
Warnings done with the warnings facility are captured separately in pytest by default and logged in the warnings summary after the live log messages.
To log warnings issued through the warnings.warn function as soon as they are emitted, you need to tell pytest not to capture them.
In your pytest.ini, add
[pytest]
log_cli = true
log_level = DEBUG
log_cli_level = DEBUG
addopts = --disable-warnings
Then in your tests/conftest.py, write a hook to capture warnings in the tests using the standard logging facility.
import logging
def pytest_runtest_call(item):
    logging.captureWarnings(True)
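To see what that hook does, here is a standalone sketch, independent of pytest, of how logging.captureWarnings() reroutes warnings.warn() through the standard "py.warnings" logger (the in-memory handler is only there to make the result observable):

```python
import logging
import warnings

# Collect log records so we can inspect where the warning ended up.
records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record)

logging.captureWarnings(True)  # reroute warnings.warn() through logging
logging.getLogger("py.warnings").addHandler(ListHandler())

warnings.warn("something deprecated")

print(records[0].name)  # py.warnings
```

Once captured this way, the warning is an ordinary log record, so pytest's live log machinery can display it like any other log message.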
I would like to use rich.logging.RichHandler from the rich library to handle all captured logs in pytest.
Say I have two files,
# library.py
import logging
logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())
def func():
    x = {"value": 5}
    logger.info(x)
# test_library.py
from library import func
def test_func():
    func()
    assert False
Running pytest shows the log message as expected, but I want it formatted by rich so I tried to put the following into conftest.py:
import logging
import pytest
from rich.logging import RichHandler
@pytest.hookimpl
def pytest_configure(config: pytest.Config):
    logger = logging.getLogger()
    logger.addHandler(RichHandler())
which results in the following output:
Under "captured stdout call" the log message appears as formatted by the RichHandler but below that it appears a second time under "captured log call" which is not what I want. Instead the message below "captured log call" should be formatted by the RichHandler and should not appear twice.
This does not perfectly answer your question, but it may help you get close.
Disable live logs: log_cli = 0 in pytest.ini
Disable capture: -s flag equivalent to --capture=no
Disable showing captured logs: --show-capture=no (should be disabled when capture is turned off, but ...)
So by disabling live logs and by running pytest -s --show-capture=no you should get rid of the duplicated logs and get only rich-formatted logs.
We've just switched from nose to pytest and there doesn't seem to be an option to suppress third party logging. In nose config, we had the following line:
logging-filter=-matplotlib,-chardet.charsetprober,-PIL,-fiona.env,-fiona._env
Some of those logs are very chatty, especially matplotlib and we don't want to see the output, just output from our logs.
I can't find an equivalent setting in pytest though. Is it possible? Am I missing something? Thanks.
The way I do it is by creating a list of logger names for which logs have to be disabled in conftest.py.
For example, if I want to disable a logger called app, then I can write a conftest.py as below:
import logging
disable_loggers = ['app']
def pytest_configure():
    for logger_name in disable_loggers:
        logger = logging.getLogger(logger_name)
        logger.disabled = True
And then run my test:
import logging
def test_brake():
    logger = logging.getLogger("app")
    logger.error("Hello there")
    assert True
collecting ... collected 1 item
test_car.py::test_brake PASSED
[100%]
============================== 1 passed in 0.01s ===============================
Then, Hello there is not shown because the logger named app was disabled in conftest.py.
However, if I change my logger name in the test to app2 and run the test again:
import logging
def test_brake():
    logger = logging.getLogger("app2")
    logger.error("Hello there")
    assert True
collecting ... collected 1 item
test_car.py::test_brake
-------------------------------- live log call ---------------------------------
ERROR    app2:test_car.py:5 Hello there
PASSED                                                                   [100%]
============================== 1 passed in 0.01s ===============================
As you can see, Hello there is shown because a logger named app2 is not disabled.
Conclusion
Basically, you could do the same, but just add your undesired logger names to conftest.py as below:
import logging
disable_loggers = ['matplotlib', 'chardet.charsetprober', <add more yourself>]
def pytest_configure():
    for logger_name in disable_loggers:
        logger = logging.getLogger(logger_name)
        logger.disabled = True
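Outside of pytest, the effect of the disabled flag can be checked with plain logging; a minimal sketch (matplotlib here is only a logger name, the package itself is not needed):

```python
import logging

captured = []

class ListHandler(logging.Handler):
    def emit(self, record):
        captured.append(record.getMessage())

root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(ListHandler())

logging.getLogger("matplotlib").disabled = True

logging.getLogger("matplotlib").error("noisy")  # dropped: logger is disabled
logging.getLogger("app").error("important")     # still propagates to root

print(captured)  # ['important']
```

Note that disabled silences the logger entirely, regardless of level, which is exactly what you want for chatty third-party loggers.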
Apart from the ability to tune logging levels or hide log output entirely, which I'm sure you've read about in the docs, the only way that comes to mind is to configure your logging in general.
Assuming that all of those packages use the standard library logging facilities, you have various options of configuring what gets logged. Please take a look at the advanced tutorial for a good overview of your options.
If you don't want to configure logging for your application in general but only during testing, you might do so using the pytest_configure or pytest_sessionstart hooks which you might place in a conftest.py at the root of your test file hierarchy.
Then I see three options:
The brute force way is to use the default behaviour of fileConfig or dictConfig to disable all existing loggers. In your conftest.py:
import logging.config

def pytest_sessionstart():
    # 'disable_existing_loggers' defaults to True, so listing no loggers
    # is enough to silence everything configured before this call.
    # (Note: dictConfig requires the 'version' key.)
    logging.config.dictConfig({
        'version': 1,
        'disable_existing_loggers': True,
    })
The more subtle approach is to change the level of individual loggers or disable them. As an example:
import logging.config

def pytest_sessionstart():
    logging.config.dictConfig({
        'version': 1,
        'disable_existing_loggers': False,
        'loggers': {
            # Add any noisy loggers here with a higher loglevel.
            'matplotlib': {'level': 'ERROR'},
        },
    })
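You can verify the effect of such a configuration directly; matplotlib is again just an example logger name:

```python
import logging
import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'loggers': {
        # Any noisy logger can be raised to a higher level here.
        'matplotlib': {'level': 'ERROR'},
    },
})

noisy = logging.getLogger('matplotlib')
print(noisy.isEnabledFor(logging.DEBUG))  # False: DEBUG is below ERROR
print(noisy.isEnabledFor(logging.ERROR))  # True
```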
Lastly, you can use the pytest_addoption hook to add a command line option similar to the one you mention. Again, at the root of your test hierarchy put the following in a conftest.py:
import logging

def pytest_addoption(parser):
    parser.addoption(
        "--logging-filter",
        default="",
        help="Provide a comma-separated list of logger names that will be "
             "disabled."
    )

def pytest_sessionstart(session):
    for logger_name in session.config.getoption("--logging-filter").split(","):
        # Use logger_name.strip()[1:] if you want the -name CLI syntax.
        if not logger_name.strip():
            continue
        logger = logging.getLogger(logger_name.strip())
        logger.disabled = True
You can then call pytest in the following way:
pytest --logging-filter matplotlib,chardet,...
The default approach of pytest is to hide all logs but provide the caplog fixture to inspect log output in your test cases. This is quite powerful if you are looking for specific log lines. So the question is also: why do you need to see those logs at all in your test suite?
Adding a log filter to conftest.py looks like it might be useful and I'll come back to that at some point in the future. For now though, we've just gone for silencing the logs in the application. We don't see them at any point when the app is running, not just during testing.
# Hide verbose third-party logs
for log_name in ('matplotlib', 'fiona.env', 'fiona._env', 'PIL', 'chardet.charsetprober'):
    other_log = logging.getLogger(log_name)
    other_log.setLevel(logging.WARNING)
My team and I are using pytest + Jenkins to automate our product testing. We have been using the standard logging lib of Python to get proper log messages during testing, before and after each test, etc. We have multiple layers of logging: we log out ERROR, WARNING, INFO and DEBUG, and the default level for our logger is INFO. We create the logger object in the primary setup of the tests and pass it down to each object created, so all our logs go to the same logger.
So far, when developing a new feature or test, we work in DEBUG mode locally and change it back to INFO when submitting new code to our SVN. I am trying to add an option to change the logging level from the CLI, but I haven't found anything easy to implement. I've considered using fixtures, but from what I understand those are only for the tests themselves, not for the setup/teardown phases, and the log is created regardless of the tests.
Any hack or idea on how to add a pytest CLI option to change the logging level?
Try --log-cli-level=INFO
like:
pytest -vv -s --log-cli-level=INFO --log-cli-format="%(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)" --log-cli-date-format="%Y-%m-%d %H:%M:%S" ./test_file.py
This is now built into pytest. Just add '--log-level=' to the command line when running your test. For example:
pytest --log-level=INFO
Documentation updates can be found here: https://docs.pytest.org/en/latest/logging.html
Pytest 3.5 introduced a parameter
pytest --log-cli-level=INFO
For versions prior to 3.5 you can combine pytest commandline options and the python loglevel from commandline example to do the following:
Add the following to conftest.py:
import pytest
import logging
def pytest_addoption(parser):
    parser.addoption(
        "--log", action="store", default="WARNING", help="set logging level"
    )

@pytest.fixture
def logger():
    loglevel = pytest.config.getoption("--log")
    logger = logging.getLogger(__name__)
    numeric_level = getattr(logging, loglevel.upper(), None)
    if not isinstance(numeric_level, int):
        raise ValueError('Invalid log level: %s' % loglevel)
    logger.setLevel(numeric_level)
    return logger
and then request the logger fixture in your tests
def test_bla(logger):
    assert True
    logger.info("True is True")
Then run pytest like
py.test --log INFO
to set the log level to INFO.
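The level-name lookup inside that fixture can be exercised on its own; a minimal sketch of the same getattr pattern (parse_level is a name made up for illustration):

```python
import logging

def parse_level(name):
    # logging exposes DEBUG, INFO, WARNING, ... as integer constants,
    # so getattr() maps a level name to its numeric value.
    numeric_level = getattr(logging, name.upper(), None)
    if not isinstance(numeric_level, int):
        raise ValueError('Invalid log level: %s' % name)
    return numeric_level

print(parse_level("info"))   # 20
print(parse_level("DEBUG"))  # 10
```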
I'm new to Python logging and I can easily see how it is preferable to the home-brew solution I have come up with.
One question I can't seem to find an answer to: how do I squelch logging messages on a per-method/function basis?
My hypothetical module contains a single function. As I develop, the log calls are a great help:
logging.basicConfig(level=logging.DEBUG,
                    format=('%(levelname)s: %(funcName)s(): %(message)s'))
log = logging.getLogger()

def my_func1():
    # ... stuff ...
    log.debug("Here's an interesting value: %r" % some_value)
    log.info("Going great here!")
    # ... more stuff ...
As I wrap up my work on 'my_func1' and start work on a second function, 'my_func2', the logging messages from 'my_func1' start going from "helpful" to "clutter".
Is there single-line magic statement, such as 'logging.disabled_in_this_func()' that I can add to the top of 'my_func1' to disable all the logging calls within 'my_func1', but still leave logging calls in all other functions/methods unchanged?
Thanks
linux, Python 2.7.1
The trick is to create multiple loggers.
There are several aspects to this.
First. Don't use logging.basicConfig() at the beginning of a module. Use it only inside the if __name__ == "__main__" block:
if __name__ == "__main__":
    logging.basicConfig(...)
    main()
    logging.shutdown()
Second. Never get the "root" logger, except to set global preferences.
Third. Get individual named loggers for things which might be enabled or disabled.
log = logging.getLogger(__name__)
func1_log = logging.getLogger("{0}.{1}".format(__name__, "my_func1"))
Now you can set logging levels on each named logger.
log.setLevel( logging.INFO )
func1_log.setLevel( logging.ERROR )
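A runnable sketch of the effect (the module name "mymodule" is hypothetical): the child logger's stricter level filters its messages before they ever reach the parent's handler:

```python
import logging

captured = []

class ListHandler(logging.Handler):
    def emit(self, record):
        captured.append(record.getMessage())

log = logging.getLogger("mymodule")                 # module-level logger
func1_log = logging.getLogger("mymodule.my_func1")  # per-function child logger

log.setLevel(logging.INFO)
log.addHandler(ListHandler())
func1_log.setLevel(logging.ERROR)

log.info("module message")         # passes: module logger allows INFO
func1_log.info("my_func1 detail")  # dropped: child logger requires ERROR

print(captured)  # ['module message']
```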
You could create a decorator that would temporarily suspend logging, ala:
from functools import wraps
def suspendlogging(func):
    @wraps(func)
    def inner(*args, **kwargs):
        previousloglevel = log.getEffectiveLevel()
        try:
            return func(*args, **kwargs)
        finally:
            log.setLevel(previousloglevel)
    return inner

@suspendlogging
def my_func1(): ...
Caveat: that would also suspend logging for any function called from my_func1 so be careful how you use it.
You could use a decorator:
import logging
import functools
def disable_logging(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.disable(logging.DEBUG)
        result = func(*args, **kwargs)
        logging.disable(logging.NOTSET)
        return result
    return wrapper

@disable_logging
def my_func1(...):
It took me some time to learn how to implement the sub-loggers as suggested by S.Lott.
Given how tough it was to figure out how to setup logging when I was starting out, I figure it's long past time to share what I learned since then.
Please keep in mind that this isn't the only way to set up loggers/sub-loggers, nor is it the best. It's just the way I use to get the job done to fit my needs. I hope this is helpful to someone. Please feel free to comment/share/critique.
Let's assume we have a simple library we like to use. From the main program, we'd like to be able to control the logging messages we get from the library. Of course we're considerate library creators, so we configure our library to make this easy.
First, the main program:
# some_prog.py
import os
import sys
# Be sure to give Vinay Sajip thanks for his creation of the logging module
# and tireless efforts to answer our dumb questions about it. Thanks Vinay!
import logging
# This module will make understanding how Python logging works so much easier.
# Also great for debugging why your logging setup isn't working.
# Be sure to give its creator Brandon Rhodes some love. Thanks Brandon!
import logging_tree
# Example library
import some_lib
# Directory, name of current module
current_path, modulename = os.path.split(os.path.abspath(__file__))
modulename = modulename.split('.')[0] # Drop the '.py'
# Set up a module-local logger
# In this case, the logger will be named 'some_prog'
log = logging.getLogger(modulename)
# Add a Handler. The Handler tells the logger *where* to send the logging
# messages. We'll set up a simple handler that send the log messages
# to standard output (stdout)
stdout_handler = logging.StreamHandler(stream=sys.stdout)
log.addHandler(stdout_handler)
def some_local_func():
    log.info("Info: some_local_func()")
    log.debug("Debug: some_local_func()")
if __name__ == "__main__":
    # Our main program, here's where we tie together/enable the logging infra
    # we've added everywhere else.

    # Use logging_tree.printout() to see what the default log levels
    # are on our loggers. Make logging_tree.printout() calls at any place in
    # the code to see how the loggers are configured at any time.
    #
    # logging_tree.printout()

    print("# Logging level set to default (i.e. 'WARNING').")
    some_local_func()
    some_lib.some_lib_func()
    some_lib.some_special_func()

    # We have a reference to our local logger, so we can set/change its logging
    # level directly. Let's set it to INFO:
    log.setLevel(logging.INFO)
    print("# Local logging set to 'INFO'.")
    some_local_func()
    some_lib.some_lib_func()
    some_lib.some_special_func()

    # Next, set the local logging level to DEBUG:
    log.setLevel(logging.DEBUG)
    print("# Local logging set to 'DEBUG'.")
    some_local_func()
    some_lib.some_lib_func()
    some_lib.some_special_func()

    # Set the library's logging level to DEBUG. We don't necessarily
    # have a reference to the library's logger, but we can use
    # logging_tree.printout() to see the name and then call logging.getLogger()
    # to create a local reference. Alternately, we could dig through the
    # library code.
    lib_logger_ref = logging.getLogger("some_lib")
    lib_logger_ref.setLevel(logging.DEBUG)

    # The library logger's default handler, NullHandler(), won't output anything.
    # We'll need to add a handler so we can see the output -- in this case we'll
    # also send it to stdout.
    lib_log_handler = logging.StreamHandler(stream=sys.stdout)
    lib_logger_ref.addHandler(lib_log_handler)
    lib_logger_ref.setLevel(logging.DEBUG)

    print("# Logging level set to DEBUG in both local program and library.")
    some_local_func()
    some_lib.some_lib_func()
    some_lib.some_special_func()

    print("# ACK! Setting the library's logging level to DEBUG output")
    print("# all debug messages from the library. (Use logging_tree.printout()")
    print("# To see why.)")
    print("# Let's change the library's logging level to INFO and")
    print("# only some_special_func()'s level to DEBUG so we only see")
    print("# debug message from some_special_func()")

    # Raise the logging level of the library and lower the logging level
    # of 'some_special_func()' so we see only some_special_func()'s
    # debugging-level messages.
    # Since it is a sub-logger of the library's main logger, we don't need
    # to create another handler; it will use the handler that belongs
    # to the library's main logger.
    lib_logger_ref.setLevel(logging.INFO)
    special_func_sub_logger_ref = logging.getLogger('some_lib.some_special_func')
    special_func_sub_logger_ref.setLevel(logging.DEBUG)

    print("# Logging level set to DEBUG in local program, INFO in library and")
    print("# DEBUG in some_lib.some_special_func()")
    some_local_func()
    some_lib.some_lib_func()
    some_lib.some_special_func()
Next, our library:
# some_lib.py
import os
import logging
# Directory, name of current module
current_path, modulename = os.path.split(os.path.abspath(__file__))
modulename = modulename.split('.')[0] # Drop the '.py'
# Set up a module-local logger. In this case the logger will be
# named 'some_lib'
log = logging.getLogger(modulename)
# In libraries, always default to NullHandler so you don't get
# "No handler for X" messages.
# Let your library callers set up handlers and set logging levels
# in their main program so the main program can decide what level
# of messages they want to see from your library.
log.addHandler(logging.NullHandler())
def some_lib_func():
    log.info("Info: some_lib.some_lib_func()")
    log.debug("Debug: some_lib.some_lib_func()")

def some_special_func():
    """
    This func is special (not really). It just has a function/method-local
    logger in addition to the library/module-level logger.
    This allows us to create/control logging messages down to the
    function/method level.
    """
    # Our function/method-local logger
    func_log = logging.getLogger('%s.some_special_func' % modulename)

    # Using the module-level logger
    log.info("Info: some_special_func()")

    # Using the function/method-level logger, which can be controlled separately
    # from both the library-level logger and the main program's logger.
    func_log.debug("Debug: some_special_func(): This message can be controlled at the function/method level.")
Now let's run the program, along with the commentary track:
# Logging level set to default (i.e. 'WARNING').
Notice there's no output at the default level since we haven't generated any WARNING-level messages.
# Local logging set to 'INFO'.
Info: some_local_func()
The library's handlers default to NullHandler(), so we see only output from the main program. This is good.
# Local logging set to 'DEBUG'.
Info: some_local_func()
Debug: some_local_func()
Main program logger set to DEBUG. We still see no output from the library. This is good.
# Logging level set to DEBUG in both local program and library.
Info: some_local_func()
Debug: some_local_func()
Info: some_lib.some_lib_func()
Debug: some_lib.some_lib_func()
Info: some_special_func()
Debug: some_special_func(): This message can be controlled at the function/method level.
Oops.
# ACK! Setting the library's logging level to DEBUG output
# all debug messages from the library. (Use logging_tree.printout()
# To see why.)
# Let's change the library's logging level to INFO and
# only some_special_func()'s level to DEBUG so we only see
# debug message from some_special_func()
# Logging level set to DEBUG in local program, INFO in library and
# DEBUG in some_lib.some_special_func()
Info: some_local_func()
Debug: some_local_func()
Info: some_lib.some_lib_func()
Info: some_special_func()
Debug: some_special_func(): This message can be controlled at the function/method level.
It's also possible to get debug messages only from some_special_func(). Use logging_tree.printout() to figure out which logging levels to tweak to make that happen!
This combines @KirkStrauser's answer with @unutbu's. Kirk's has try/finally but doesn't disable, and unutbu's disables without try/finally. Just putting it here for posterity:
from functools import wraps
import logging
def suspend_logging(func):
    @wraps(func)
    def inner(*args, **kwargs):
        logging.disable(logging.FATAL)
        try:
            return func(*args, **kwargs)
        finally:
            logging.disable(logging.NOTSET)
    return inner
Example usage:
from logging import getLogger
logger = getLogger()
@suspend_logging
def example():
    logger.info("inside the function")
logger.info("before")
example()
logger.info("after")
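A quick standalone check of the combined decorator's behavior (the in-memory handler is only there to make the result observable):

```python
import logging
from functools import wraps

captured = []

class ListHandler(logging.Handler):
    def emit(self, record):
        captured.append(record.getMessage())

logger = logging.getLogger("combined-demo")
logger.setLevel(logging.INFO)
logger.addHandler(ListHandler())

def suspend_logging(func):
    @wraps(func)
    def inner(*args, **kwargs):
        logging.disable(logging.FATAL)  # silence everything up to FATAL
        try:
            return func(*args, **kwargs)
        finally:
            logging.disable(logging.NOTSET)
    return inner

@suspend_logging
def example():
    logger.info("inside the function")

logger.info("before")
example()
logger.info("after")

print(captured)  # ['before', 'after']
```

The message logged inside the decorated function is swallowed, while logging before and after works normally, and the try/finally guarantees restoration even if the function raises.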
If you are using logbook silencing logs is very simple. Just use a NullHandler - https://logbook.readthedocs.io/en/stable/api/handlers.html
>>> logger.warn('TEST')
[12:28:17.298198] WARNING: TEST
>>> from logbook import NullHandler
>>> with NullHandler():
...     logger.warn('TEST')