I am trying to unit-test an algorithm that uses the logging library.
I have a fixture that creates a logger.
In my 1st test case, I do not use this fixture and use a print statement to log to stdout. This test case passes.
In my 2nd test case, I use this fixture, but not as documented in the pytest docs: I just call the associated function in my test to get the logger, then use it to log to stdout. This test case passes.
In my 3rd test case, I use this fixture as documented in the pytest docs: the fixture is passed as an argument to the test function, and I use the logger to log to stdout. This test case fails! It does not find anything in stdout, yet the error message says that my log is in the captured stdout call.
What am I doing wrong?
import pytest
import logging
import sys

@pytest.fixture()
def logger():
    logger = logging.getLogger('Some.Logger')
    logger.setLevel(logging.INFO)
    stdout = logging.StreamHandler(sys.stdout)
    logger.addHandler(stdout)
    return logger

def test_print(capsys):
    print('Bouyaka!')
    stdout, stderr = capsys.readouterr()
    assert 'Bouyaka!' in stdout
    # passes

def test_logger_without_fixture(capsys):
    logger().info('Bouyaka!')
    stdout, stderr = capsys.readouterr()
    assert 'Bouyaka!' in stdout
    # passes

def test_logger_with_fixture(logger, capsys):
    logger.info('Bouyaka!')
    stdout, stderr = capsys.readouterr()
    assert 'Bouyaka!' in stdout
    # fails with this error:
    # >       assert 'Bouyaka!' in stdout
    # E       assert 'Bouyaka!' in ''
    #
    # tests/test_logging.py:21: AssertionError
    # ---- Captured stdout call ----
    # Bouyaka!
There is no change if I reorder the test cases by the way.
Thanks a lot for your ideas!
Reversing logger, capsys, making logger request the capsys fixture, and using capfd do not change anything.
I tried the pytest-catchlog plugin and it works fine!
import pytest
import logging

@pytest.fixture()
def logger():
    logger = logging.getLogger('Some.Logger')
    logger.setLevel(logging.INFO)
    return logger

def test_logger_with_fixture(logger, caplog):
    logger.info('Bouyaka!')
    assert 'Bouyaka!' in caplog.text
    # passes!
In my original tests, I logged to stdout and stderr and captured them.
This is an even better solution, as I do not need this tweak to check that my logs work fine.
Well, now I just need to rework all my tests to use caplog, but this is my own business ;)
The only thing left, now that I have a better solution, is to understand what is wrong in my original test case def test_logger_with_fixture(logger, capsys).
As of pytest 3.3, the ability to capture log messages was added to pytest core. It is exposed through the caplog fixture:
def test_baz(caplog):
    func_under_test()
    for record in caplog.records:
        assert record.levelname != 'CRITICAL'
    assert 'wally' not in caplog.text
More information can be found in the documentation.
I'm guessing the logger gets created (via the fixture) before the capsys fixture is set up.
Some ideas:
Use the pytest-catchlog plugin
Maybe reverse logger, capsys
Make logger request the capsys fixture (see the sketch after this list)
Use capfd, which does lower-level capturing without altering sys
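A minimal sketch of what "make logger request the capsys fixture" could look like; the OP reports it did not change the outcome in their setup, so treat it as an illustration of the idea rather than a confirmed fix:

import logging
import sys

import pytest

@pytest.fixture()
def logger(capsys):
    # Requesting capsys forces it to be set up before the handler looks up sys.stdout.
    logger = logging.getLogger('Some.Logger')
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler(sys.stdout)
    logger.addHandler(handler)
    yield logger
    # Remove the handler so it does not leak into other tests.
    logger.removeHandler(handler)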
Recently I have been writing a Python logging extension, and I want to add some tests to verify that my extension works as expected.
However, I don't know how to capture the complete log output and compare it with my expected result in unittest/pytest.
Simplified sample:
# app.py
import logging

def create_logger():
    formatter = logging.Formatter(fmt='%(name)s-%(levelname)s-%(message)s')
    hdlr = logging.StreamHandler()
    hdlr.setFormatter(formatter)
    logger = logging.getLogger(__name__)
    logger.setLevel('DEBUG')
    logger.addHandler(hdlr)
    return logger

app_logger = create_logger()
Here are my tests
Attempt 1: unittest
from app import app_logger
import unittest

class TestApp(unittest.TestCase):
    def test_logger(self):
        with self.assertLogs('', 'DEBUG') as cm:
            app_logger.debug('hello')
            # or some other way to capture the log output.
        self.assertEqual('app-DEBUG-hello', cm.output)
expected behaviour:
cm.output = 'app-DEBUG-hello'
actual behaviour
cm.output = ['DEBUG:app:hello']
Attempt 2: pytest caplog
from app import app_logger
import pytest

def test_logger(caplog):
    app_logger.debug('hello')
    assert caplog.text == 'app-DEBUG-hello'
expected behaviour:
caplog.text = 'app-DEBUG-hello'
actual behaviour
caplog.text = 'test_logger.py 6 DEBUG hello'
Attempt 3: pytest capsys
from app import app_logger
import pytest

def test_logger(capsys):
    app_logger.debug('hello')
    out, err = capsys.readouterr()
    assert err
    assert err == 'app-DEBUG-hello'
expected behaviour:
err = 'app-DEBUG-hello'
actual behaviour
err = ''
Considering there will be many tests with different formats, I don't want to check the log format manually. I have no idea how to get the complete log output as I see it on the console and compare it with my expected one in the test cases. Hoping for your help, thanks.
I know this is old, but I'm posting here since it came up in Google for me...
It probably needs cleanup, but it is the first thing that has gotten close for me, so I figured it would be good to share.
Here is a test case mixin I've put together that lets me verify a particular handler is being formatted as expected by copying the formatter:
import io
import logging
import logging.config

from django.conf import settings
from django.test import SimpleTestCase
from django.utils.log import DEFAULT_LOGGING

class SetupLoggingMixin:
    def setUp(self):
        super().setUp()
        logging.config.dictConfig(settings.LOGGING)
        self.stream = io.StringIO()
        self.root_logger = logging.getLogger("")
        self.root_hdlr = logging.StreamHandler(self.stream)
        console_handler = None
        for handler in self.root_logger.handlers:
            if handler.name == 'console':
                console_handler = handler
                break
        if console_handler is None:
            raise RuntimeError('could not find console handler')
        formatter = console_handler.formatter
        self.root_formatter = formatter
        self.root_hdlr.setFormatter(self.root_formatter)
        self.root_logger.addHandler(self.root_hdlr)

    def tearDown(self):
        super().tearDown()
        self.stream.close()
        logging.config.dictConfig(DEFAULT_LOGGING)
And here is an example of how to use it:
class SimpleLogTests(SetupLoggingMixin, SimpleTestCase):
    def test_logged_time(self):
        msg = 'foo'
        self.root_logger.error(msg)
        self.assertEqual(self.stream.getvalue(), 'my-expected-message-formatted-as-expected')
After reading the source code of the unittest library, I've worked out the following bypass. Note, it works by changing a protected member of an imported module, so it may break in future versions.
from unittest.case import _AssertLogsContext
_AssertLogsContext.LOGGING_FORMAT = 'same format as your logger'
After these commands, the logging context opened by self.assertLogs will use the above format. I really don't know why this value is left hard-coded and not configurable.
I did not find an option to read the format of a logger, but if you use logging.config.dictConfig you can use a value from the same dictionary.
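A hedged usage sketch of the bypass above, reusing the application format string from the earlier example; _AssertLogsContext and LOGGING_FORMAT are private unittest internals, so double-check them against your Python version:

import logging
import unittest
from unittest.case import _AssertLogsContext

# Make assertLogs format captured records the same way the application does.
_AssertLogsContext.LOGGING_FORMAT = '%(name)s-%(levelname)s-%(message)s'

class TestApp(unittest.TestCase):
    def test_logger_format(self):
        with self.assertLogs('app', 'DEBUG') as cm:
            logging.getLogger('app').debug('hello')
        self.assertEqual(cm.output, ['app-DEBUG-hello'])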
I know this doesn't completely answer the OP's question but I stumbled upon this post while looking for a neat way to capture logged messages.
Taking what @user319862 did, I've cleaned it up and simplified it.
import unittest
import logging
from io import StringIO

class SetupLogging(unittest.TestCase):
    def setUp(self):
        super().setUp()
        self.stream = StringIO()
        self.root_logger = logging.getLogger("")
        self.root_hdlr = logging.StreamHandler(self.stream)
        self.root_logger.addHandler(self.root_hdlr)

    def tearDown(self):
        super().tearDown()
        self.stream.close()

    def test_log_output(self):
        """ Does the logger produce the correct output? """
        msg = 'foo'
        self.root_logger.error(msg)
        self.assertEqual(self.stream.getvalue(), 'foo\n')

if __name__ == '__main__':
    unittest.main()
I'm new to Python but have some experience with testing/TDD in other languages. I found that the default way of "changing" the formatter is to add a new StreamHandler. But if your logger already has a stream attached (i.e. Azure Functions or TestCase.assertLogs adds one for you), you end up logging twice: once with your format and once with the "default" format.
If the create_logger function in the OP mutated the formatter of the current StreamHandler, instead of adding a new StreamHandler (check whether one exists, create it only if it doesn't, and all that jazz...),
then you could call create_logger after the with self.assertLogs('', 'DEBUG') as cm: line and simply assert on cm.output, and it just works, because you are mutating the formatter of the StreamHandler that assertLogs is adding.
So basically what's happening is that the execution order is not appropriate for the test.
The order of execution in the OP is:
import stuff
Add a stream with your formatter to the logger
Run the test
Add another stream to the logger via self.assertLogs
Assert against the 2nd StreamHandler
When it should be:
import stuff
Add a stream with your formatter to the logger (but this one is irrelevant)
Run the test
Add another stream to the logger via self.assertLogs
Change the formatter of the current stream
Assert against the single, properly formatted StreamHandler
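Here is a rough sketch of that idea, reusing the names from the question's app.py; it illustrates mutating an existing handler's formatter instead of stacking a second handler, and is not a verified recipe for assertLogs:

import logging

def create_logger():
    formatter = logging.Formatter(fmt='%(name)s-%(levelname)s-%(message)s')
    logger = logging.getLogger(__name__)
    logger.setLevel('DEBUG')
    if logger.handlers:
        # Reuse the handler that is already installed and only swap its formatter.
        logger.handlers[0].setFormatter(formatter)
    else:
        hdlr = logging.StreamHandler()
        hdlr.setFormatter(formatter)
        logger.addHandler(hdlr)
    return logger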
I am trying to write a test, using pytest, that would check that a specific function is writing out a warning to the log when needed. For example:
In module.py:
import logging

LOGGER = logging.getLogger(__name__)

def run_function():
    if something_bad_happens:
        LOGGER.warning('Something bad happened!')
In test_module.py:
import logging
from module import run_function

LOGGER = logging.getLogger(__name__)

def test_func():
    LOGGER.info('Testing now.')
    run_function()
    ~ somehow get the stdout/log of run_function() ~
    assert 'Something bad happened!' in output
I have seen that you can supposedly get the log or the stdout/stderr with pytest by passing capsys or caplog as an argument to the test, and then using either capsys.readouterr() or caplog.records to access the output.
However, when I try those methods, I only see "Testing now.", and not "Something bad happened!". It seems like the logging output that is happening within the call to run_function() is not accessible from test_func()?
The same thing happens if I try a more direct method, such as sys.stdout.getvalue(). Which is confusing, because run_function() is writing to the terminal, so I would think that would be accessible from stdout...?
Basically, does anyone know how I can access that 'Something bad happened!' from within test_func()?
I don't know why this didn't work when I tried it before, but this solution works for me now:
In test_module.py:
import logging
from module import run_function

LOGGER = logging.getLogger(__name__)

def test_func(caplog):
    LOGGER.info('Testing now.')
    run_function()
    assert 'Something bad happened!' in caplog.text
test_module.py should look like this:
import logging
from module import run_function

LOGGER = logging.getLogger(__name__)

def test_func(caplog):
    with caplog.at_level(logging.WARNING):
        run_function()
    assert 'Something bad happened!' in caplog.text
or, alternatively:
import logging
from module import run_function

LOGGER = logging.getLogger(__name__)

def test_func(caplog):
    caplog.set_level(logging.WARNING)
    run_function()
    assert 'Something bad happened!' in caplog.text
Documentation for pytest capture logging is here
In your logging setup, check that propagate is set to True; otherwise the caplog handler is not able to see the log message.
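A small illustration of that point, with a made-up logger name (caplog's handler sits on the root logger, so records only reach it by propagating upward):

import logging

def test_propagation(caplog):
    logger = logging.getLogger('my_app')  # hypothetical name
    logger.propagate = True  # with False, nothing below would reach caplog
    with caplog.at_level(logging.INFO):
        logger.info('hello')
    assert 'hello' in caplog.text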
I also want to add to this thread for anybody coming across this in the future. You may need to use
@pytest.fixture(autouse=True)
as a decorator on your test so the test has access to the caplog fixture.
I had the same issue. I just explicitly used the module's name instead of __name__ inside the test function, and set the propagate attribute to True.
Note: module should be the directory in which you have the scripts to be tested.
def test_func():
    LOGGER = logging.getLogger("module")
    LOGGER.propagate = True
    run_function()
    ~ somehow get the stdout/log of run_function() ~
    assert 'Something bad happened!' in output
I am using a pytest fixture to mock up command-line arguments for testing a script. This way the arguments shared by each test function would only need to be declared in one place. I'm also trying to use pytest's capsys to capture output printed by the script. Consider the following silly example.
from __future__ import print_function
import pytest
import othermod
from sys import stdout

@pytest.fixture
def shared_args():
    args = type('', (), {})()
    args.out = stdout
    args.prefix = 'dude:'
    return args

def otherfunction(message, prefix, stream):
    print(prefix, message, file=stream)

def test_dudesweet(shared_args, capsys):
    otherfunction('sweet', shared_args.prefix, shared_args.out)
    out, err = capsys.readouterr()
    assert out == 'dude: sweet\n'
Here, capsys does not capture sys.stdout properly. If I move from sys import stdout and args.out = stdout directly into the test function, things work as expected. But this makes things much messier, as I have to re-declare these statements for each test. Am I doing something wrong? Can I use capsys with fixtures?
The fixture is invoked before the test is run. In your example, the shared_args fixture grabs stdout before otherfunction can write anything to it.
One way to fix your problem is to make your fixture return a function which does what you want. You can scope the fixture according to your use case.
from __future__ import print_function
import pytest
from sys import stdout
import os

@pytest.fixture(scope='function')
def shared_args():
    def args_func():
        args = type('', (), {})()
        args.out = stdout
        args.prefix = 'dude:'
        return args
    return args_func

def otherfunction(message, prefix, stream):
    print(prefix, message, file=stream)

def test_dudesweet(shared_args, capsys):
    prefix, out = shared_args().prefix, shared_args().out
    otherfunction('sweet', prefix, out)
    out, err = capsys.readouterr()
    assert out == 'dude: sweet\n'
You are not using capsys.readouterr() correctly. See the correct usage of capsys here: https://stackoverflow.com/a/26618230/2312300
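For reference, a minimal sketch of typical capsys usage: write first and read afterwards, since readouterr() drains the captured buffers.

def test_output(capsys):
    print('hello')
    captured = capsys.readouterr()
    assert captured.out == 'hello\n'
    assert captured.err == ''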
I have the impression (but cannot find documentation for it) that unittest sets the logging level to WARNING for all loggers. I would like to:
be able to specify the logging level for all loggers, from the command line (when running the tests) or from the test module itself
avoid unittest messing around with the application logging level: when running the tests I want to have the same logging output (same levels) as when running the application
How can I achieve this?
I don't believe unittest itself does anything to logging, unless you use a _CapturingHandler class which it defines. This simple program demonstrates:
import logging
import unittest

logger = logging.getLogger(__name__)

class MyTestCase(unittest.TestCase):
    def test_something(self):
        logger.debug('logged from test_something')

if __name__ == '__main__':
    # DEBUG for demonstration purposes, but you could set the level from
    # cmdline args to whatever you like
    logging.basicConfig(level=logging.DEBUG, format='%(name)s %(levelname)s %(message)s')
    unittest.main()
When run, it prints
__main__ DEBUG logged from test_something
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
showing that it is logging events at DEBUG level, as expected. So the problem is likely to be related to something else, e.g. the code under test, or some other test runner which changes the logging configuration or redirects sys.stdout and sys.stderr. You will probably need to provide more information about your test environment, or better yet a minimal program that demonstrates the problem (as my example above shows that unittest by itself doesn't cause the problem you're describing).
See the example below for logging in Python. You can also change the LOG_LEVEL using the setLevel method.
import os
import logging
logging.basicConfig()
logger = logging.getLogger(__name__)
# Change logging level here.
logger.setLevel(os.environ.get('LOG_LEVEL', logging.INFO))
logger.info('For INFO message')
logger.debug('For DEBUG message')
logger.warning('For WARNING message')
logger.error('For ERROR message')
logger.critical('For CRITICAL message')
This is in addition to @Vinay's answer above. It does not answer the original question. I wanted to include command-line options for modifying the log level. The intent was to get detailed logging only when I pass a certain parameter from the command line. This is how I solved it:
import sys
import unittest
import logging

from histogram import Histogram

class TestHistogram(unittest.TestCase):
    def test_case2(self):
        h = Histogram([2, 1, 2])
        self.assertEqual(h.calculateMaxAreaON(), 3)

if __name__ == '__main__':
    argv = len(sys.argv) > 1 and sys.argv[1]
    loglevel = logging.INFO if argv == '-v' else logging.WARNING
    logging.basicConfig(level=loglevel)
    unittest.main()
The intent is to get more verbose logging. I know it does not answer the question, but I'll leave it here in case someone comes looking for a similar requirement such as this.
this worked for me:
logging.basicConfig(level=logging.DEBUG)
And if I wanted a specific format:
logging.basicConfig(
    level=logging.DEBUG,
    datefmt="%H:%M:%S",
    format="%(asctime)s.%(msecs)03d [%(levelname)-5s] %(message)s",
)
Programmatically:
Put this line of code in each test function in your class where you want to set the logging level:
logging.getLogger().setLevel(logging.INFO)
Ex. class:
import unittest
import logging

class ExampleTest(unittest.TestCase):
    def test_method(self):
        logging.getLogger().setLevel(logging.INFO)
        ...
Command Line:
This example just shows how to do it in a normal script, and is not specific to unittest. It captures the log level via the command line, using argparse for arguments:
import logging
import argparse

...

def parse_args():
    parser = argparse.ArgumentParser(description='...')
    parser.add_argument('-v', '--verbose', help='enable verbose logging',
                        action='store_const', dest="loglevel",
                        const=logging.INFO, default=logging.WARNING)
    ...

def main():
    args = parse_args()
    logging.getLogger().setLevel(args.loglevel)
I want to ensure that a certain condition in my code causes a log message to be written to the django log. How would I do this with the Django unit testing framework?
Is there a place where I can check logged messages, similarly to how I can check sent emails? My unit test extends django.test.TestCase.
Use the mock module to mock the logging module or the logger object. When you've done that, check the arguments with which the logging function is called.
For example, if your code looks like this:
import logging
logger = logging.getLogger('my_logger')
logger.error("Your log message here")
it would look like:
from unittest.mock import patch  # For python 2.x use from mock import patch

@patch('this.is.my.module.logger')
def test_check_logging_message(self, mock_logger):
    # ... call the code under test that performs the logging here ...
    mock_logger.error.assert_called_with("Your log message here")
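A fuller, self-contained sketch of the same approach; the logger name and do_work function are made up for illustration, and in a real project the code under test would live in its own module:

import logging
from unittest import TestCase
from unittest.mock import patch

logger = logging.getLogger('my_logger')

def do_work():
    logger.error("Your log message here")

class LoggingTests(TestCase):
    # Patch the logger in the module where it is used; here that module is this file.
    @patch(f'{__name__}.logger')
    def test_check_logging_message(self, mock_logger):
        do_work()
        mock_logger.error.assert_called_with("Your log message here")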
You can also use assertLogs from django.test.TestCase
When your code is
import logging

logger = logging.getLogger('my_logger')

def code_that_throws_error_log():
    logger.error("Your log message here")
This is the test code.
with self.assertLogs(logger='my_logger', level='ERROR') as cm:
    code_that_throws_error_log()
self.assertIn(
    "ERROR:my_logger:Your log message here",
    cm.output
)
This lets you avoid patching just for logs.
The common way of mocking out the logger object (see the splendid chap Simeon Visser's answer) is slightly tricky in that it requires the test to mock out the logging in all the places it's done. This is awkward if the logging comes from more than one module, or is in code you don't own. If the module the logging comes from changes name, it will break your tests.
The splendid 'testfixtures' package includes tools to add a logging handler which captures all generated log messages, no matter where they come from. The captured messages can later be interrogated by the test. In its simplest form:
Assuming code-under-test, which logs:
import logging
logger = logging.getLogger()
logger.info('a message')
logger.error('an error')
A test for this would be:
from testfixtures import LogCapture

with LogCapture() as l:
    call_code_under_test()

l.check(
    ('root', 'INFO', 'a message'),
    ('root', 'ERROR', 'an error'),
)
The word 'root' indicates the logging was sent via a logger created using logging.getLogger() (i.e. with no args.) If you pass an arg to getLogger (__name__ is conventional), that arg will be used in place of 'root'.
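For instance, with a made-up logger name (a hedged sketch), the first element of each expected tuple becomes that name:

import logging
from testfixtures import LogCapture

logger = logging.getLogger('myapp.orders')  # hypothetical name

def test_named_logger():
    with LogCapture() as capture:
        logger.warning('low stock')
    capture.check(('myapp.orders', 'WARNING', 'low stock'))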
The test does not care what module created the logging. It could be a sub-module called by our code-under-test, including 3rd party code.
The test asserts about the actual log message that was generated, as opposed to the mocking technique, which asserts about the args that were passed. These will differ if the logging.info call uses '%s' format strings with additional arguments that you don't expand yourself, e.g. logging.info('total=%s', len(items)) instead of logging.info('total=%s' % len(items)), which is what you should do anyway. It's no extra work, and it allows hypothetical future log-aggregation services such as Sentry to work properly: they can see that "total=12" and "total=43" are two instances of the same log message. That's the reason why pylint warns about the latter form of the logging.info call.
LogCapture includes facilities for log filtering and the like. Its parent 'testfixtures' package, written by Chris Withers, another splendid chap, includes many other useful testing tools. Documentation is here: http://pythonhosted.org/testfixtures/logging.html
Django has a nice context manager function called patch_logger.
from django.test.utils import patch_logger
then in your test case:
with patch_logger('logger_name', 'error') as cm:
    self.assertIn("Error message", cm)
where:
logger_name is the logger name (duh)
error is the log level
cm is the list of all log messages
More details:
https://github.com/django/django/blob/2.1/django/test/utils.py#L638
It should work the same for Django < 2.0, independently of the Python version (as long as it's supported by Django).
If you're using test classes, you can use following solution:
import logging

from django.test import TestCase

class MyTest(TestCase):
    @classmethod
    def setUpClass(cls):
        super(MyTest, cls).setUpClass()
        cls.logging_error = logging.error
        logging.error = cls._error_log

    @classmethod
    def tearDownClass(cls):
        super(MyTest, cls).tearDownClass()
        logging.error = cls.logging_error

    @classmethod
    def _error_log(cls, msg):
        cls.logger = msg

    def test_logger(self):
        # ... run the code under test that calls logging.error(...) here ...
        self.assertIn('Message', self.logger)
This method replaces the error function of the logging module with your custom method, only for test purposes, and stores the message in the cls.logger variable, which is available in every test case via self.logger. At the end it reverts the change by putting the original error function back on the logging module.