I am writing a Python logging extension, and I want to add some tests to verify that the extension works as expected.
However, I don't know how to capture the complete formatted log output and compare it with my expected result in unittest/pytest.
simplified sample:
# app.py
import logging

def create_logger():
    formatter = logging.Formatter(fmt='%(name)s-%(levelname)s-%(message)s')
    hdlr = logging.StreamHandler()
    hdlr.setFormatter(formatter)
    logger = logging.getLogger(__name__)
    logger.setLevel('DEBUG')
    logger.addHandler(hdlr)
    return logger

app_logger = create_logger()
Here are my tests.
Attempt 1: unittest
from app import app_logger
import unittest

class TestApp(unittest.TestCase):
    def test_logger(self):
        with self.assertLogs('', 'DEBUG') as cm:
            app_logger.debug('hello')
            # or some other way to capture the log output.
        self.assertEqual('app-DEBUG-hello', cm.output)
expected behaviour:
cm.output = 'app-DEBUG-hello'
actual behaviour
cm.output = ['DEBUG:app:hello']
Attempt 2: pytest caplog
from app import app_logger
import pytest

def test_logger(caplog):
    app_logger.debug('hello')
    assert caplog.text == 'app-DEBUG-hello'
expected behaviour:
caplog.text = 'app-DEBUG-hello'
actual behaviour
caplog.text = 'test_logger.py 6 DEBUG hello'
Attempt 3: pytest capsys
from app import app_logger
import pytest

def test_logger(capsys):
    app_logger.debug('hello')
    out, err = capsys.readouterr()
    assert err
    assert err == 'app-DEBUG-hello'
expected behaviour:
err = 'app-DEBUG-hello'
actual behaviour
err = ''
Considering there will be many tests with different formats, I don't want to check the log format manually. I have no idea how to get the complete log output, exactly as I see it on the console, and compare it with my expected result in the test cases. Hoping for your help, thanks.
I know this is old, but I'm posting here since it came up in a Google search for me...
It probably needs cleanup, but it's the first thing that has gotten close for me, so I figured it would be good to share.
Here is a test case mixin I've put together that lets me verify a particular handler is being formatted as expected, by copying the formatter:
import io
import logging
import logging.config

from django.conf import settings
from django.test import SimpleTestCase
from django.utils.log import DEFAULT_LOGGING


class SetupLoggingMixin:
    def setUp(self):
        super().setUp()
        logging.config.dictConfig(settings.LOGGING)
        self.stream = io.StringIO()
        self.root_logger = logging.getLogger("")
        self.root_hdlr = logging.StreamHandler(self.stream)
        console_handler = None
        for handler in self.root_logger.handlers:
            if handler.name == 'console':
                console_handler = handler
                break
        if console_handler is None:
            raise RuntimeError('could not find console handler')
        formatter = console_handler.formatter
        self.root_formatter = formatter
        self.root_hdlr.setFormatter(self.root_formatter)
        self.root_logger.addHandler(self.root_hdlr)

    def tearDown(self):
        super().tearDown()
        self.stream.close()
        logging.config.dictConfig(DEFAULT_LOGGING)
And here is an example of how to use it:
class SimpleLogTests(SetupLoggingMixin, SimpleTestCase):
    def test_logged_time(self):
        msg = 'foo'
        self.root_logger.error(msg)
        self.assertEqual(self.stream.getvalue(), 'my-expected-message-formatted-as-expected')
After reading the source code of the unittest library, I've worked out the following bypass. Note, it works by changing a protected member of an imported module, so it may break in future versions.
from unittest.case import _AssertLogsContext
_AssertLogsContext.LOGGING_FORMAT = 'same format as your logger'
After these commands, the logging context opened by self.assertLogs will use the above format. I really don't know why this value is left hard-coded and not configurable.
I did not find an option to read the format of a logger, but if you use logging.config.dictConfig you can use a value from the same dictionary.
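For example, a minimal sketch of that (the LOGGING dictionary below is illustrative, not from the OP, and _AssertLogsContext is a private class that may change between Python versions):

import logging.config
from unittest.case import _AssertLogsContext

# Illustrative dictConfig-style settings; substitute your own dictionary.
LOGGING = {
    'version': 1,
    'formatters': {
        'console': {'format': '%(name)s-%(levelname)s-%(message)s'},
    },
    'handlers': {
        'console': {'class': 'logging.StreamHandler', 'formatter': 'console'},
    },
    'root': {'handlers': ['console'], 'level': 'DEBUG'},
}

logging.config.dictConfig(LOGGING)

# Reuse the same format string, so assertLogs renders records exactly like
# the console handler does.
_AssertLogsContext.LOGGING_FORMAT = LOGGING['formatters']['console']['format']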
I know this doesn't completely answer the OP's question but I stumbled upon this post while looking for a neat way to capture logged messages.
Taking what @user319862 did, I've cleaned it up and simplified it.
import unittest
import logging
from io import StringIO


class SetupLogging(unittest.TestCase):
    def setUp(self):
        super().setUp()
        self.stream = StringIO()
        self.root_logger = logging.getLogger("")
        self.root_hdlr = logging.StreamHandler(self.stream)
        self.root_logger.addHandler(self.root_hdlr)

    def tearDown(self):
        super().tearDown()
        self.stream.close()

    def test_log_output(self):
        """ Does the logger produce the correct output? """
        msg = 'foo'
        self.root_logger.error(msg)
        self.assertEqual(self.stream.getvalue(), 'foo\n')


if __name__ == '__main__':
    unittest.main()
I'm new to Python but have some experience with testing/TDD in other languages. I found that the default way of "changing" the formatter is to add a new StreamHandler. But if your logger already has a stream attached (for example when using Azure Functions, or TestCase.assertLogs, which adds one for you), you end up logging twice: once with your format and once with the "default" format.
If the OP's create_logger function mutated the formatter of the current StreamHandler instead of adding a new StreamHandler (check whether one exists, create it only if it doesn't, and all that jazz...),
then you could call create_logger after the with self.assertLogs('', 'DEBUG') as cm: line and simply assert on cm.output. It works because you are mutating the formatter of the handler that assertLogs adds (a sketch follows the lists below).
So basically what's happening is that the execution order is not appropriate for the test.
The order of execution in the OP is:
1. import stuff
2. add a StreamHandler with the logger's formatter
3. run the test
4. self.assertLogs adds another StreamHandler
5. assert against the output of that second StreamHandler
When it should be:
1. import stuff
2. add a StreamHandler with the logger's formatter (irrelevant here)
3. run the test
4. self.assertLogs adds another StreamHandler
5. mutate the formatter of the current StreamHandler
6. assert against the single, properly formatted StreamHandler
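Here is a minimal sketch of that ordering, assuming the OP's module is named app (so the logger is 'app') and that assertLogs is pointed at that same logger; the handler-mutating create_logger is my variant, not the OP's original code:

import logging
import unittest


def create_logger():
    # Variant of the OP's factory: re-format existing handlers instead of
    # always adding a new StreamHandler.
    formatter = logging.Formatter(fmt='%(name)s-%(levelname)s-%(message)s')
    logger = logging.getLogger('app')
    logger.setLevel('DEBUG')
    if not logger.handlers:            # only create a handler when none exists
        logger.addHandler(logging.StreamHandler())
    for handler in logger.handlers:    # mutate whatever handlers are attached
        handler.setFormatter(formatter)
    return logger


class TestApp(unittest.TestCase):
    def test_logger(self):
        with self.assertLogs('app', 'DEBUG') as cm:
            # called *inside* the context manager, so the capturing handler
            # installed by assertLogs also receives the custom formatter
            logger = create_logger()
            logger.debug('hello')
        self.assertEqual(cm.output, ['app-DEBUG-hello'])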
Edit: Thanks @eemz for the idea to redesign the structure and use from unittest.mock import patch, but the problem persists.
I just recently stumbled into unittest. I have a program which I normally start like this: python run.py -config /path/to/config.file -y. I wanted to write a simple test in a separate test.py file: execute the script, pass the mentioned arguments, and capture all of its output. I pass a prepared config file which is missing certain things, so run.py will break and log exactly this error using logging.error: "xyz was missing in Config file!" (see the example below). I get a few words from print(), and then the logging instance kicks in and handles things from there on. How do I get its output so I can check it? Feel free to rewrite this, as I'm still learning; please bear with me.
Simplified example:
run.py
import logging

def run(args):
    < args.config = /path/to/config.file >
    cnfg = Config(args.config)
    cnfg.logger.info("Let's start with the rest of the code!")  # This is NOT in 'output' of the unittest
    < code >

if __name__ == "__main__":
    print("Welcome! Starting execution.")  # This is in 'output' of the unittest
    < code to parse arguments 'args' >
    run(args)
Config.py
import logging

class Config:
    def __init__(self):
        print("Creating logging instance, hold on ...")  # This is in 'output' of the unittest
        logger = logging.getLogger(__name__)
        console_handler = logging.StreamHandler()
        logger.addHandler(console_handler)
        logger.info("Logging activated, let's go!")  # This is NOT in 'output' of the unittest
        self.logger = logger
        if xyz not in config:
            self.logger.error("xyz was missing in Config file!")  # This is NOT in 'output' of the unittest
            exit(1)
test.py
import unittest
from io import StringIO
from unittest.mock import patch

from run import run

class TestConfigs(unittest.TestCase):
    def test_xyz(self):
        with patch('sys.stdout', new=StringIO()) as capture:
            with self.assertRaises(SystemExit) as cm:
                run("/p/to/f/missing/xyz/f", "", False, True)
            output = capture.getvalue().strip()
            self.assertEqual(cm.exception.code, 1)
            # Following is working, because the print messages are in output
            self.assertTrue("Welcome! Starting execution." in output)
            # Following is NOT working, because the logging messages are not in output
            self.assertTrue("xyz was missing in Config file!" in output)

if __name__ == "__main__":
    unittest.main()
I would restructure run.py like this:
import logging

def main():
    print("Welcome! Starting execution.")
    # etc etc

if __name__ == "__main__":
    main()
Then you can call the function run.main() in your unit test rather than forking a subprocess.
from io import StringIO
from unittest.mock import patch
import sys
import run

class etc etc
    def test_run etc etc:
        with patch('sys.stdout', new=StringIO()) as capture:
            sys.argv = ['run.py', '-flag', '-flag', '-flag']
            run.main()
            output = capture.getvalue().strip()
            assert output == <whatever you expect it to be>
If you’re new to unit testing then you might not have seen mocks before. Effectively I am replacing stdout with a fake one to capture everything that gets sent there, so that I can pull it out later into the variable output.
In fact, a second patch around sys.argv would be even better, because what I'm doing here, an assignment to the real argv, will actually change it, which will affect subsequent tests in the same file.
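For example, a hedged sketch of that second patch (the flags are placeholders and run.main() is the restructured entry point from above):

import sys
from io import StringIO
from unittest.mock import patch

import run


def test_run():
    fake_argv = ['run.py', '-flag', '-flag', '-flag']   # placeholder flags
    # patch.object restores the original sys.argv when the block exits,
    # so the change cannot leak into other tests
    with patch.object(sys, 'argv', fake_argv), \
            patch('sys.stdout', new=StringIO()) as capture:
        run.main()
    output = capture.getvalue().strip()
    assert 'Welcome! Starting execution.' in output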
I ended up instantiating the logger of the main program with a specific name, so I could get the same logger in test.py and assert that it was called with specific text. I didn't know that I could simply retrieve the logger by calling logging.getLogger("name") with the same name. Simplified example:
test.py
import logging
import unittest
from io import StringIO
from unittest.mock import patch

from run import run

main_logger = logging.getLogger("main_tool")

class TestConfigs(unittest.TestCase):
    def test_xyz(self):
        with patch('sys.stdout', new=StringIO()) as capture, \
                self.assertRaises(SystemExit) as cm, \
                patch.object(main_logger, "info") as mock_log1, \
                patch.object(main_logger, "error") as mock_log2:
            run("/path/to/file/missing/xyz.file")
        output = capture.getvalue().strip()
        self.assertTrue("Creating logging instance, hold on ..." in output)
        mock_log1.assert_called_once_with("Logging activated, let's go!")
        mock_log2.assert_called_once_with("xyz was missing in Config file!")
        self.assertEqual(cm.exception.code, 1)

if __name__ == "__main__":
    unittest.main()
run.py
def run(path: str):
    cnfg = Config(path)
    < code >

if __name__ == "__main__":
    < code to parse arguments 'args' >
    path = args.file_path
    run(path)
Config.py
import logging

class Config:
    def __init__(self, path: str):
        print("Creating logging instance, hold on ...")
        logger = logging.getLogger("main_tool")
        console_handler = logging.StreamHandler()
        logger.addHandler(console_handler)
        logger.info("Logging activated, let's go!")
        self.logger = logger
        # Load file, simplified
        config = load(path)
        if xyz not in config:
            self.logger.error("xyz was missing in Config file!")
            exit(1)
This method seems very complicated, and I only got to this point by reading through a lot of other posts and the docs. Maybe someone knows a better way to achieve this.
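One possibly simpler alternative, sketched under the assumption of the same run/Config modules as above, is to let assertLogs capture the named logger instead of patching its individual methods:

import unittest

from run import run


class TestConfigs(unittest.TestCase):
    def test_xyz(self):
        # assertLogs attaches its own capturing handler to the "main_tool"
        # logger, so there is no need to patch info()/error() one by one
        with self.assertLogs('main_tool', level='INFO') as log_cm, \
                self.assertRaises(SystemExit) as exit_cm:
            run("/path/to/file/missing/xyz.file")
        self.assertEqual(exit_cm.exception.code, 1)
        self.assertIn("ERROR:main_tool:xyz was missing in Config file!", log_cm.output)


if __name__ == "__main__":
    unittest.main()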
Is it possible to inject contextual information into a LoggerAdapter instance (one that already has a set of 'extra' params) and have it persist throughout the lifetime of the object? I have a parameter I want to inject in a particular method, but I also need the other methods that use the instance to see that injected parameter. I tried logger.extra["foo"] = "FOO", but it didn't work. Any help is appreciated.
Here's a working example:
import logging
import logging.handlers
import sys

def main():
    logger = logging.getLogger()
    adapter = logging.LoggerAdapter(logger, extra={'foo': 'bar'})
    sh = logging.StreamHandler()
    sh.setFormatter(logging.Formatter('%(levelname)s %(message)s [%(foo)s]'))
    logger.addHandler(sh)
    adapter.warning('This is a test warning')
    adapter.extra['foo'] = 'baz'
    adapter.warning('This is another test warning')
    logger.removeHandler(sh)
    sh.close()

if __name__ == '__main__':
    sys.exit(main())
When you run it, it prints
WARNING This is a test warning [bar]
WARNING This is another test warning [baz]
So as you can see, the info in the adapter can be updated after instantiation.
I'm using pytest-3.7.1 which has good support for logging, including live logging to stdout during tests. I'm using --log-cli-level=DEBUG to dump all debug-level logging to the console as it happens.
The problem I have is that --log-cli-level=DEBUG turns on debug logging for all modules in my test program, including third-party dependencies, and it floods the log with a lot of uninteresting output.
Python's logging module has the ability to set logging levels per module. This enables selective logging - for example, in a normal Python program I can turn on debugging for just one or two of my own modules, and restrict the log output to just those, or set different log levels for each module. This enables turning off debug-level logging for noisy libraries.
So what I'd like to do is apply the same concept to pytest's logging - i.e. specify a logging level, from the command line, for specific non-root loggers. For example, if I have a module called test_foo.py then I'm looking for a way to set the log level for this module from the command line.
I'm prepared to roll-my-own if necessary (I know how to add custom arguments to pytest), but before I do that I just want to be sure that there isn't already a solution. Is anyone aware of one?
I had the same problem, and found a solution in another answer:
Instead of --log-cli-level=DEBUG, use --log-level DEBUG. It disables all third-party module logs (in my case, I had plenty of matplotlib logs), but still outputs your app logs for each test that fails.
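If you also want per-logger control, one option (a sketch, not a built-in pytest feature) is an autouse fixture in conftest.py that raises the level of known-noisy third-party loggers; the logger names below are placeholders:

# conftest.py
import logging

import pytest

# Placeholder names: list whichever third-party loggers are too chatty.
NOISY_LOGGERS = ("matplotlib", "urllib3")


@pytest.fixture(autouse=True)
def quiet_noisy_loggers():
    # Raise only the listed loggers to WARNING; everything else keeps the
    # level chosen on the command line (e.g. --log-cli-level=DEBUG).
    for name in NOISY_LOGGERS:
        logging.getLogger(name).setLevel(logging.WARNING)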
I got this working by writing a factory class that sets the level of the root logger to logging.INFO and uses the logging level from the command line for all loggers obtained from the factory. If the logging level from the command line is higher than the minimum global log level specified in the class (the MINIMUM_GLOBAL_LOG_LEVEL constant), the global log level isn't changed.
import logging

MODULE_FIELD_WIDTH_IN_CHARS = '20'
LINE_NO_FIELD_WIDTH_IN_CHARS = '3'
LEVEL_NAME_FIELD_WIDTH_IN_CHARS = '8'
MINIMUM_GLOBAL_LOG_LEVEL = logging.INFO


class EasyLogger():
    root_logger = logging.getLogger()
    specified_log_level = root_logger.level
    format_string = '{asctime} '
    format_string += '{module:>' + MODULE_FIELD_WIDTH_IN_CHARS + 's}'
    format_string += '[{lineno:' + LINE_NO_FIELD_WIDTH_IN_CHARS + 'd}]'
    format_string += '[{levelname:^' + LEVEL_NAME_FIELD_WIDTH_IN_CHARS + 's}]: '
    format_string += '{message}'
    level_change_warning_sent = False

    @classmethod
    def get_logger(cls, logger_name):
        if not EasyLogger._logger_has_format(cls.root_logger, cls.format_string):
            EasyLogger._setup_root_logger()
        logger = logging.getLogger(logger_name)
        logger.setLevel(cls.specified_log_level)
        return logger

    @classmethod
    def _setup_root_logger(cls):
        formatter = logging.Formatter(fmt=cls.format_string, style='{')
        if not cls.root_logger.hasHandlers():
            handler = logging.StreamHandler()
            cls.root_logger.addHandler(handler)
        for handler in cls.root_logger.handlers:
            handler.setFormatter(formatter)
        cls.root_logger.setLevel(MINIMUM_GLOBAL_LOG_LEVEL)
        if (cls.specified_log_level < MINIMUM_GLOBAL_LOG_LEVEL and
                cls.level_change_warning_sent is False):
            cls.root_logger.log(
                max(cls.specified_log_level, logging.WARNING),
                "Setting log level for %s class to %s, all others to %s" % (
                    __name__,
                    cls.specified_log_level,
                    MINIMUM_GLOBAL_LOG_LEVEL
                )
            )
            cls.level_change_warning_sent = True

    @staticmethod
    def _logger_has_format(logger, format_string):
        # Compare against the first handler's stored format string.
        for handler in logger.handlers:
            if handler.formatter is None:
                return False
            return handler.formatter._fmt == format_string
        return False
The above class is then used to send logs as you normally would with a logging.Logger object:
from EasyLogger import EasyLogger

class MySuperAwesomeClass():
    def __init__(self):
        self.logger = EasyLogger.get_logger(__name__)

    def foo(self):
        self.logger.debug("debug message")
        self.logger.info("info message")
        self.logger.warning("warning message")
        self.logger.critical("critical message")
        self.logger.error("error message")
Enable/Disable/Modify the log level of any module in Python:
logging.getLogger("module_name").setLevel(logging.log_level)
I have the impression (but cannot find documentation for it) that unittest sets the logging level to WARNING for all loggers. I would like to:
- be able to specify the logging level for all loggers, from the command line (when running the tests) or from the test module itself
- avoid unittest messing with the application logging level: when running the tests I want the same logging output (same levels) as when running the application
How can I achieve this?
I don't believe unittest itself does anything to logging, unless you use a _CapturingHandler class which it defines. This simple program demonstrates:
import logging
import unittest

logger = logging.getLogger(__name__)


class MyTestCase(unittest.TestCase):
    def test_something(self):
        logger.debug('logged from test_something')


if __name__ == '__main__':
    # DEBUG for demonstration purposes, but you could set the level from
    # cmdline args to whatever you like
    logging.basicConfig(level=logging.DEBUG, format='%(name)s %(levelname)s %(message)s')
    unittest.main()
When run, it prints
__main__ DEBUG logged from test_something
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
showing that it is logging events at DEBUG level, as expected. So the problem is likely to be related to something else, e.g. the code under test, or some other test runner which changes the logging configuration or redirects sys.stdout and sys.stderr. You will probably need to provide more information about your test environment, or better yet a minimal program that demonstrates the problem (as my example above shows that unittest by itself doesn't cause the problem you're describing).
See the example below for logging in Python. You can also change the log level using the setLevel method.
import os
import logging
logging.basicConfig()
logger = logging.getLogger(__name__)
# Change logging level here.
logger.setLevel(os.environ.get('LOG_LEVEL', logging.INFO))
logger.info('For INFO message')
logger.debug('For DEBUG message')
logger.warning('For WARNING message')
logger.error('For ERROR message')
logger.critical('For CRITICAL message')
This is in addition to @Vinay's answer above. It does not answer the original question; I wanted to include command-line options for modifying the log level. The intent was to get detailed logging only when I pass a certain parameter from the command line. This is how I solved it:
import sys
import unittest
import logging
from histogram import Histogram

class TestHistogram(unittest.TestCase):
    def test_case2(self):
        h = Histogram([2,1,2])
        self.assertEqual(h.calculateMaxAreaON(), 3)

if __name__ == '__main__':
    argv = len(sys.argv) > 1 and sys.argv[1]
    loglevel = logging.INFO if argv == '-v' else logging.WARNING
    logging.basicConfig(level=loglevel)
    unittest.main()
The intent is to get more verbose logging. I know it does not answer the question, but I'll leave it here in case someone comes looking for a similar requirement such as this.
this worked for me:
logging.basicConfig(level=logging.DEBUG)
And if I wanted a specific format:
logging.basicConfig(
    level=logging.DEBUG,
    datefmt="%H:%M:%S",
    format="%(asctime)s.%(msecs)03d [%(levelname)-5s] %(message)s",
)
Programmatically:
Put this line of code in each test function defined in your class that you want to set the logging level:
logging.getLogger().setLevel(logging.INFO)
Ex. class:
import unittest
import logging

class ExampleTest(unittest.TestCase):
    def test_method(self):
        logging.getLogger().setLevel(logging.INFO)
        ...
Command Line:
This example just shows how to do it in a normal script, not specific to unittest example. Capturing the log level via command line, using argparse for arguments:
import logging
import argparse
...

def parse_args():
    parser = argparse.ArgumentParser(description='...')
    parser.add_argument('-v', '--verbose', help='enable verbose logging',
                        action='store_const', dest="loglevel",
                        const=logging.INFO, default=logging.WARNING)
    ...

def main():
    args = parse_args()
    logging.getLogger().setLevel(args.loglevel)
The following doctest fails:
import logging
logging.basicConfig(level=logging.DEBUG, format='%(message)s')

def say_hello():
    '''
    >>> say_hello()
    Hello!
    '''
    logging.info('Hello!')

if __name__ == '__main__':
    import doctest
    doctest.testmod()
These pages
doctest & logging
use doctest and logging in python program
seem to suggest logging.StreamHandler(sys.stdout) and logger.addHandler(handler), but my attempts in this direction failed. (I am new to Python, if it wasn't obvious.)
Please help me fix the above code so that the test passes.
Update on Jun 4, 2017: To answer 00prometheus' comments: the accepted answer to "use doctest and logging in python program" seemed unnecessarily complicated when I asked this question. And indeed it is, as the accepted answer here gives a simpler solution. In my highly biased opinion, my question is also clearer than the one I linked in the original post.
You need to define a "logger" object. This is usually done after import with:
import sys
import logging
log = logging.getLogger(__name__)
When you want to log a message:
log.info('Hello!')
In the code that gets run like a script you set the basicConfig:
if __name__ == '__main__':
import doctest
logging.basicConfig(level=logging.DEBUG, stream=sys.stdout, format='%(message)s')
doctest.testmod()
Edit:
Ok, you were right. It doesn't work, but I got it to work... BUT DO NOT DO THIS! Just use print statements or return what you actually need to check. As your second link says, this is just a bad idea. You shouldn't be checking logging output (it's for logging). Even the original poster of that second link said they got it to work by switching their logging to print. But here is the evil code that seems to work:
class MyDocTestRunner(doctest.DocTestRunner):
    def run(self, test, compileflags=None, out=None, clear_globs=True):
        if out is None:
            handler = None
        else:
            handler = logging.StreamHandler(self._fakeout)
            out = sys.stdout.write
        logger = logging.getLogger()  # root logger (say)
        if handler:
            logger.addHandler(handler)
        try:
            doctest.DocTestRunner.run(self, test, compileflags, out, clear_globs)
        finally:
            if handler:
                logger.removeHandler(handler)
                handler.close()

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG, format='%(message)s')
    tests = doctest.DocTestFinder().find(say_hello, __name__)
    dt_runner = MyDocTestRunner()
    for t in tests:
        dt_runner.run(t, out=True)
Edit (continued):
My attempts also failed when trying what your second link suggested. This is because doctest internally reassigns sys.stdout to self._fakeout. That's why nothing short of my hack will work: I actually tell the logger to write to this "fakeout".
Edit (answer to comment):
It's not exactly the code from the link. If it were the code from the link, I would say it's not that bad of an option, because it's not doing anything too complex. My code, however, uses a "private" internal instance attribute that shouldn't be used by a normal user. That is why it is evil.
And yes, logging can be used for testing output, but it does not make much sense to do so in a unittest/doctest and is probably why doctest doesn't include functionality like this out of the box. The TextTest stuff you linked to is all about functional or integration testing. Unittests (and doctests) should be testing small individual components. If you have to capture logged output to make sure your unittest/doctest is correct then you should maybe think about separating things out or not doing these checks in a doctest.
I personally only use doctests for simple examples and verifications. Mostly for usage examples since any user can see an inline doctest.
Edit (ok last one):
Same solution, simpler code. This code doesn't require that you create a custom runner. You still have to create the default runner and stuff because you need to access the "_fakeout" attribute. There is no way to use doctest to check logging output without logging to this attribute as a stream.
if __name__ == '__main__':
    dt_runner = doctest.DocTestRunner()
    tests = doctest.DocTestFinder().find(sys.modules[__name__])
    logging.basicConfig(level=logging.DEBUG, format='%(message)s', stream=dt_runner._fakeout)
    for t in tests:
        dt_runner.run(t)
One way to do this is by monkey-patching the logging module (my code; docstring contents from import logging are relevant to your question):
@classmethod
def yield_int(cls, field, text):
    """Parse integer values and yield (field, value)
    >>> test = lambda text: dict(Monster.yield_int('passive', text))
    >>> test(None)
    {}
    >>> test('42')
    {'passive': 42}
    >>> import logging
    >>> old_warning = logging.warning
    >>> warnings = []
    >>> logging.warning = lambda msg: warnings.append(msg)
    >>> test('seven')
    {}
    >>> warnings
    ['yield_int: failed to parse text "seven"']
    >>> logging.warning = old_warning
    """
    if text is None:
        return
    try:
        yield (field, int(text))
    except ValueError:
        logging.warning(f'yield_int: failed to parse text "{text}"')
However, a much cleaner approach uses the unittest module:
>>> from unittest import TestCase
>>> with TestCase.assertLogs(_) as cm:
...     print(test('seven'))
...     print(cm.output)
{}
['WARNING:root:yield_int: failed to parse text "seven"']
Technically you should probably instantiate a TestCase object rather than passing _ to assertLogs as self, since there's no guarantee that this method won't attempt to access the instance properties in the future.
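A minimal sketch of that (in recent Python 3, unittest.TestCase() can be instantiated directly because the default methodName is tolerated):

>>> import logging
>>> import unittest
>>> tc = unittest.TestCase()
>>> with tc.assertLogs('root', level='WARNING') as cm:
...     logging.warning('yield_int: failed to parse text "seven"')
>>> cm.output
['WARNING:root:yield_int: failed to parse text "seven"']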
I use the following technique:
1. Set the logging stream to a StringIO object.
2. Log away...
3. Print the contents of the StringIO object and expect the output, or assert against the contents of the StringIO object.
This should do it.
Here is some example code.
First it just does the whole setup for the logging within the doctest itself, to show how it works.
Then the code shows how the setup can be put into a separate function, setup_doctest_logging, that does the setup and returns a function that prints the log. This keeps the test code more focused and moves the ceremonial part out of the test.
import logging


def func(s):
    """
    >>> import io
    >>> string_io = io.StringIO()
    >>> # Capture the log output to a StringIO object
    >>> # Use force=True to make this configuration stick
    >>> logging.basicConfig(stream=string_io, format='%(message)s', level=logging.INFO, force=True)
    >>> func('hello world')
    >>> # print the contents of the StringIO. I prefer that. Better visibility.
    >>> print(string_io.getvalue(), end='')
    hello world
    >>> # The above needs the end='' because print will otherwise add a new line to the
    >>> # one that is already in the string from logging itself
    >>> # Or you can just expect an extra empty line like this:
    >>> print(string_io.getvalue())
    hello world
    <BLANKLINE>
    >>> func('and again')
    >>> # Or just assert on the contents.
    >>> assert 'and again' in string_io.getvalue()
    """
    logging.info(s)


def setup_doctest_logging(format='%(levelname)s %(message)s', level=logging.WARNING):
    """
    This could be put into a separate module to make the logging setup easier
    """
    import io
    string_io = io.StringIO()
    logging.basicConfig(stream=string_io, format=format, level=level, force=True)

    def log_printer():
        s = string_io.getvalue()
        print(s, end='')
    return log_printer


def other_logging_func(s, e=None):
    """
    >>> print_whole_log = setup_doctest_logging(level=logging.INFO)
    >>> other_logging_func('no error')
    >>> print_whole_log()
    WARNING no error
    >>> other_logging_func('I try hard', 'but I make mistakes')
    >>> print_whole_log()
    WARNING no error
    WARNING I try hard
    ERROR but I make mistakes
    """
    logging.warning(s)
    if e is not None:
        logging.error(e)


if __name__ == '__main__':
    import doctest
    doctest.testmod()
As mentioned by others, the issue is that doctest modifies sys.stdout after basicConfig created a StreamHandler that holds its own copy. One way to deal with this is to create a stream object that dispatches write and flush to sys.stdout. Another is to bypass the issue altogether by creating your own handler:
class PrintHandler(logging.Handler):
    def emit(self, record):
        print(self.format(record))

logging.basicConfig(level=logging.DEBUG, format='%(message)s',
                    handlers=[PrintHandler()])
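And a sketch of the first option, a stream object that forwards to whatever sys.stdout currently is (StdoutProxy is an illustrative name, not a standard class):

import logging
import sys


class StdoutProxy:
    """Write-through stream: always forwards to the *current* sys.stdout."""

    def write(self, data):
        sys.stdout.write(data)

    def flush(self):
        sys.stdout.flush()


logging.basicConfig(level=logging.DEBUG, format='%(message)s',
                    stream=StdoutProxy())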