I wanted to store all the intermediate log messages (warn, info, error) to a string in Python, and report those log messages to the console at the end of program.
I tried to follow the steps outlined in
http://opensourcehacker.com/2011/02/23/temporarily-capturing-python-logging-output-to-a-string-buffer/
but was unsuccessful.
Could somebody tell me a short, clean way to do this?
This is what I've tried for now:
import logging
import logging.handlers

log = logging.getLogger('basic_logger')
log.setLevel(logging.DEBUG)

report = ""
memory_handler = logging.handlers.MemoryHandler(1024*20, logging.ERROR, report)
memory_handler.setLevel(logging.DEBUG)
log.addHandler(memory_handler)

log.info("hello world")
memory_handler.flush()

print("report:", report)
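For reference, MemoryHandler's third argument is a target handler to flush into, not a string, which is why report stays empty in the attempt above. A minimal working sketch (buffering records in memory, then flushing into a StringIO via a StreamHandler) might look like:

```python
import logging
import logging.handlers
from io import StringIO

log_stream = StringIO()
# MemoryHandler buffers records and flushes them to a *target handler*,
# not to a plain string; here the target writes into a StringIO.
target = logging.StreamHandler(log_stream)
memory_handler = logging.handlers.MemoryHandler(capacity=1024 * 20,
                                                flushLevel=logging.ERROR,
                                                target=target)

log = logging.getLogger('basic_logger')
log.setLevel(logging.DEBUG)
log.addHandler(memory_handler)

log.info('hello world')
memory_handler.flush()          # push buffered records to the target
print('report:', log_stream.getvalue())
```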
It can be as simple as logging to a StringIO object:
import logging
try:
    from cStringIO import StringIO  # Python 2
except ImportError:
    from io import StringIO  # Python 3
log_stream = StringIO()
logging.basicConfig(stream=log_stream, level=logging.INFO)
logging.info('hello world')
logging.warning('be careful!')
logging.debug("you won't see this")
logging.error('you will see this')
logging.critical('critical is logged too!')
print(log_stream.getvalue())
Output
INFO:root:hello world
WARNING:root:be careful!
ERROR:root:you will see this
CRITICAL:root:critical is logged too!
If you want to log only those messages at levels WARN, INFO and ERROR you can do it with a filter. LevelFilter below checks each log record's level number (levelno), allowing only those records of the desired level(s):
import logging
try:
    from cStringIO import StringIO  # Python 2
except ImportError:
    from io import StringIO  # Python 3

class LevelFilter(logging.Filter):
    def __init__(self, levels):
        self.levels = levels

    def filter(self, record):
        return record.levelno in self.levels
log_stream = StringIO()
logging.basicConfig(stream=log_stream, level=logging.NOTSET)
logging.getLogger().addFilter(LevelFilter((logging.INFO, logging.WARNING, logging.ERROR)))
logging.info('hello world')
logging.warning('be careful!')
logging.debug("you won't see this")
logging.error('you will see this')
logging.critical('critical is no longer logged!')
print(log_stream.getvalue())
Output
INFO:root:hello world
WARNING:root:be careful!
ERROR:root:you will see this
Note that solutions involving basicConfig set attributes of the root logger, which all other loggers inherit from. This can be unwanted because libraries will also log to it. My use case is a website that calls a data processing module, and I only want to capture that module's logs specifically. This approach also has the advantage of letting existing handlers that log to file and the terminal persist:
import io, logging
from django.http import HttpResponse
def my_view(request):  # a Django view
    log_stream = io.StringIO()
    log_handler = logging.StreamHandler(log_stream)
    logging.getLogger('algorithm.user_output').addHandler(log_handler)
    algorithm()
    return HttpResponse(f'<pre>{log_stream.getvalue()}</pre>')
In algorithm.py:
logger = logging.getLogger(__name__ + '.user_output') # 'algorithm.user_output'
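Putting the two sides together, a minimal self-contained sketch might look like this (the algorithm() body here is a stand-in, and the handler is detached afterwards so repeated calls don't accumulate output):

```python
import io
import logging

def algorithm():
    # Stand-in for the data processing module's work; in the real code
    # this lives in algorithm.py and logs to 'algorithm.user_output'.
    logging.getLogger('algorithm.user_output').warning('step failed')

log_stream = io.StringIO()
log_handler = logging.StreamHandler(log_stream)
logger = logging.getLogger('algorithm.user_output')
logger.addHandler(log_handler)
try:
    algorithm()
finally:
    # Detach so the next request/caller starts with a fresh buffer.
    logger.removeHandler(log_handler)

captured = log_stream.getvalue()
```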
You can also write your own stream class. As https://docs.python.org/2/library/logging.handlers.html says, only write and flush are used for the streaming.
Example:
import logging
class LogStream(object):
    def __init__(self):
        self.logs = ''

    def write(self, msg):
        self.logs += msg

    def flush(self):
        pass

    def __str__(self):
        return self.logs
log_stream = LogStream()
logging.basicConfig(stream=log_stream, level=logging.DEBUG)
log = logging.getLogger('test')
log.debug('debugging something')
log.info('informing user')
print(log_stream)
Outputs:
DEBUG:test:debugging something
INFO:test:informing user
Quick recipe for multiple loggers, using StringIO as storage
Note: this is a customized version of mhawke's answer above.
I needed multiple logs, each doing its own thing; here is a simple script that does that.
import logging
from io import StringIO
from datetime import date
# Formatter
LOG_FORMAT = '| %(asctime)s | %(name)s-%(levelname)s: %(message)s '
FORMATTER = logging.Formatter(LOG_FORMAT)
# ------- MAIN LOGGER
main_handler = logging.StreamHandler()
main_handler.setLevel(logging.WARNING)
main_handler.setFormatter(FORMATTER)
# ------- FILE LOGGER
file_handler = logging.FileHandler(f'log_{date.strftime(date.today(), "%Y-%m-%d")}.log')
file_handler.setLevel(logging.INFO)
file_handler.setFormatter(FORMATTER)
# ------- SECONDARY STREAMER (HOLDS ALL THE LOGS FOR RETRIEVE LATER) LOGGER
streamer = StringIO()
stream_handler = logging.StreamHandler(stream=streamer)
stream_handler.setFormatter(FORMATTER)
# Root Logger
logging.basicConfig(level=10, handlers=[main_handler, file_handler, stream_handler]) # Add handlers to Logger
_logger = logging.getLogger(__name__)
_logger.log(10, "DEBUG MESSAGE")
_logger.log(20, "INFO MESSAGE")
_logger.log(30, "WARNING MESSAGE")
_logger.log(40, "ERROR!")
_logger.log(50, "CRITICAL")
print('==='*15)
print('\nGetting All logs from StringIO')
print(streamer.getvalue())
Clearing the logs from StringIO
In addition, I needed to clear the data and start from zero again. The easiest and best-performing way is simply to create a new StringIO instance and attach it to the StreamHandler instance.
new_streamer = StringIO() # Creating the new instance
stream_handler.setStream(new_streamer) # here we assign it to the logger
_logger.info("New Message")
_logger.info("New Message")
_logger.info("New Message")
print(new_streamer.getvalue()) # New data
Another way is to 'clear' the stream but, per this other Stack Overflow answer by Chris Morgan, it is less performant.
# Python 3
streamer.truncate(0)
streamer.seek(0)
_logger.info("New Message")
_logger.info("New Message")
_logger.info("New Message")
print(streamer.getvalue())
# Python 2
streamer.truncate(0)
_logger.info("New Message")
_logger.info("New Message")
_logger.info("New Message")
print(streamer.getvalue())
Documentation
Logging
StreamHandler
Maybe this example code is enough.
In general, you should post your code so we can see what is going on.
You should also be looking at the actual Python documentation for the logging module while you are following any given tutorial.
https://docs.python.org/2/library/logging.html
The standard Python logging module can log to a file. When you are done logging, you can print the contents of that file to your shell output.
import logging

# Do some logging to a file
fname = 'mylog.log'
logging.basicConfig(filename=fname, level=logging.INFO)
logging.info('Started')
logging.info('Finished')

# Print the output
with open(fname, 'r') as f:
    print(f.read())  # You could also store f.read() in a string
We can use a StringIO object in both Python 2 and Python 3, like this:
Python 3:
import logging
from io import StringIO
log_stream = StringIO()
logging.basicConfig(stream=log_stream, level=logging.INFO)
logging.info('this is info')
logging.warning('this is warning!')
logging.debug('this is debug')
logging.error('this is error')
logging.critical('oh ,this is critical!')
print(log_stream.getvalue())
Similarly in Python 2:
import logging
from cStringIO import StringIO
log_stream = StringIO()
logging.basicConfig(stream=log_stream, level=logging.INFO)
logging.info('this is info')
logging.warning('this is warning!')
logging.debug('this is debug')
logging.error('this is error')
logging.critical('oh ,this is critical!')
print(log_stream.getvalue())
Output:
INFO:root:this is info
WARNING:root:this is warning!
ERROR:root:this is error
CRITICAL:root:oh ,this is critical!
Related
Recently I have been writing a Python logging extension, and I want to add some tests to verify that my extension works as expected.
However, I don't know how to capture the complete log output and compare it with my expected result in unittest/pytest.
simplified sample:
# app.py
import logging
def create_logger():
    formatter = logging.Formatter(fmt='%(name)s-%(levelname)s-%(message)s')
    hdlr = logging.StreamHandler()
    hdlr.setFormatter(formatter)

    logger = logging.getLogger(__name__)
    logger.setLevel('DEBUG')
    logger.addHandler(hdlr)
    return logger
app_logger = create_logger()
Here is my tests
Attempt 1: unittest
from app import app_logger
import unittest
class TestApp(unittest.TestCase):
    def test_logger(self):
        with self.assertLogs('', 'DEBUG') as cm:
            app_logger.debug('hello')
            # or some other way to capture the log output.
        self.assertEqual('app-DEBUG-hello', cm.output)
expected behaviour:
cm.output = 'app-DEBUG-hello'
actual behaviour
cm.output = ['DEBUG:app:hello']
Attempt 2: pytest caplog
from app import app_logger
import pytest
def test_logger(caplog):
    app_logger.debug('hello')
    assert caplog.text == 'app-DEBUG-hello'
expected behaviour:
caplog.text = 'app-DEBUG-hello'
actual behaviour
caplog.text = 'test_logger.py 6 DEBUG hello'
Attempt 3: pytest capsys
from app import app_logger
import pytest
def test_logger(capsys):
    app_logger.debug('hello')
    out, err = capsys.readouterr()
    assert err
    assert err == 'app-DEBUG-hello'
expected behaviour:
err = 'app-DEBUG-hello'
actual behaviour
err = ''
Considering there will be many tests with different formats, I don't want to check the log format manually. I have no idea how to get the complete log as I see it on the console and compare it with my expected output in the test cases. Hoping for your help, thanks.
I know this is old but posting here since it came up in a Google search for me...
Probably needs cleanup, but it is the first thing that has gotten close for me, so I figured it would be good to share.
Here is a test case mixin I've put together that lets me verify a particular handler is being formatted as expected by copying the formatter:
import io
import logging
import logging.config
from django.conf import settings
from django.test import SimpleTestCase
from django.utils.log import DEFAULT_LOGGING
class SetupLoggingMixin:
    def setUp(self):
        super().setUp()
        logging.config.dictConfig(settings.LOGGING)
        self.stream = io.StringIO()
        self.root_logger = logging.getLogger("")
        self.root_hdlr = logging.StreamHandler(self.stream)

        console_handler = None
        for handler in self.root_logger.handlers:
            if handler.name == 'console':
                console_handler = handler
                break
        if console_handler is None:
            raise RuntimeError('could not find console handler')

        formatter = console_handler.formatter
        self.root_formatter = formatter
        self.root_hdlr.setFormatter(self.root_formatter)
        self.root_logger.addHandler(self.root_hdlr)

    def tearDown(self):
        super().tearDown()
        self.stream.close()
        logging.config.dictConfig(DEFAULT_LOGGING)
And here is an example of how to use it:
class SimpleLogTests(SetupLoggingMixin, SimpleTestCase):
    def test_logged_time(self):
        msg = 'foo'
        self.root_logger.error(msg)
        self.assertEqual(self.stream.getvalue(), 'my-expected-message-formatted-as-expected')
After reading the source code of the unittest library, I've worked out the following bypass. Note, it works by changing a protected member of an imported module, so it may break in future versions.
from unittest.case import _AssertLogsContext
_AssertLogsContext.LOGGING_FORMAT = 'same format as your logger'
After these commands, the logging context opened by self.assertLogs will use the above format. I really don't know why this value is left hard-coded and not configurable.
I did not find an option to read the format of a logger, but if you use logging.config.dictConfig you can use a value from the same dictionary.
I know this doesn't completely answer the OP's question but I stumbled upon this post while looking for a neat way to capture logged messages.
Taking what #user319862 did, I've cleaned it and simplified it.
import unittest
import logging
from io import StringIO
class SetupLogging(unittest.TestCase):
    def setUp(self):
        super().setUp()
        self.stream = StringIO()
        self.root_logger = logging.getLogger("")
        self.root_hdlr = logging.StreamHandler(self.stream)
        self.root_logger.addHandler(self.root_hdlr)

    def tearDown(self):
        super().tearDown()
        self.stream.close()

    def test_log_output(self):
        """ Does the logger produce the correct output? """
        msg = 'foo'
        self.root_logger.error(msg)
        self.assertEqual(self.stream.getvalue(), 'foo\n')

if __name__ == '__main__':
    unittest.main()
I'm new to Python but have some experience with testing/TDD in other languages. I found that the default way of "changing" the formatter is to add a new StreamHandler. But if your logger already has a stream attached (e.g. Azure Functions or TestCase.assertLogs adds one for you), you end up logging twice: once with your format and once with the "default" format.
If the OP's create_logger function mutated the formatter of the current StreamHandler, instead of adding a new StreamHandler (check whether one exists, create a new one only if it doesn't, and all that jazz...),
then you could call create_logger after the with self.assertLogs('', 'DEBUG') as cm: and just assert on cm.output. It works because you are mutating the formatter of the StreamHandler that assertLogs is adding.
So basically what's happening is that the execution order is not appropriate for the test.
The order of execution in the OP is:
1. Import stuff
2. Add a StreamHandler with the logger's formatter
3. Run the test
4. Add another StreamHandler via self.assertLogs
5. Assert against the second StreamHandler
when it should be:
1. Import stuff
2. Add a StreamHandler with the logger's formatter (irrelevant here)
3. Run the test
4. Add another StreamHandler via self.assertLogs
5. Change the current StreamHandler's formatter
6. Assert against the single, properly formatted StreamHandler
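A sketch of that ordering (names are illustrative): create_logger restyles an already-attached handler instead of stacking a new one, so calling it inside the assertLogs block restyles the capturing handler that assertLogs installed:

```python
import logging
import unittest

FORMATTER = logging.Formatter('%(name)s-%(levelname)s-%(message)s')

def create_logger(name):
    logger = logging.getLogger(name)
    logger.setLevel('DEBUG')
    if logger.handlers:
        # Mutate the formatter of whatever handler is already attached
        # (here: the capturing handler that assertLogs installs).
        for handler in logger.handlers:
            handler.setFormatter(FORMATTER)
    else:
        handler = logging.StreamHandler()
        handler.setFormatter(FORMATTER)
        logger.addHandler(handler)
    return logger

class TestApp(unittest.TestCase):
    def test_logger(self):
        with self.assertLogs('app', 'DEBUG') as cm:
            log = create_logger('app')   # restyles the capturing handler
            log.debug('hello')
        self.assertEqual(cm.output, ['app-DEBUG-hello'])
```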
Given the code below, I am trying to get log statements working, but I am not able to. The documentation tells me that I do not have to set a level.
When a logger is created, the level is set to NOTSET (which causes all
messages to be processed when the logger is the root logger, or
delegation to the parent when the logger is a non-root logger).
But it did not work without one, so I tried setting the level to DEBUG. Still no luck.
"""
Experimental Port Forwarding
"""
import logging

def main(config):
    """ entry point"""
    log = logging.getLogger(__name__)
    log.setLevel(logging.DEBUG)

    log.debug("opening config file...")
    config_file = open(config, 'r')
    log.debug("config found!")
The logger you are getting doesn't have any handlers. You can check this by doing print(log.handlers) and seeing the output is an empty list ([]).
The simplest way to use the logging library is something like this, where you call logging.basicConfig to set everything up, as shown in the logging module basic tutorial:
"""
Experimental Port Forwarding
"""
import logging
logging.basicConfig(level=logging.DEBUG)

def main(config):
    """ entry point"""
    logging.debug("opening config file...")
    config_file = open(config, 'r')
    logging.debug("config found!")

main('test.conf')
This works for me from outside and inside IPython.
If you want to avoid basicConfig for some reason you need to register a handler manually, like this:
import logging

def main(config):
    """ entry point"""
    log = logging.getLogger(__name__)
    log.setLevel(logging.DEBUG)

    # Minimal change: add a StreamHandler so messages reach the console
    log.addHandler(logging.StreamHandler())

    log.debug("opening config file...")
    config_file = open(config, 'r')
    log.debug("config found!")
By default the logger writes to STDERR stream which usually prints to the console itself.
Basically you can change the log file path by setting:
logging.basicConfig(filename="YourFileName.log")
log = logging.getLogger(__name__)
I had a script with logging capabilities, and it stopped working (the logging, not the script). I wrote a small example to illustrate the problem:
import logging
from os import remove
from os.path import exists
def setup_logger(logger_name, log_file, level=logging.WARNING):
    # Erase log if it already exists
    if exists(log_file):
        remove(log_file)

    # Configure log file
    l = logging.getLogger(logger_name)
    formatter = logging.Formatter('%(message)s')
    fileHandler = logging.FileHandler(log_file, mode='w')
    fileHandler.setFormatter(formatter)
    streamHandler = logging.StreamHandler()
    streamHandler.setFormatter(formatter)

    l.setLevel(level)
    l.addHandler(fileHandler)
    l.addHandler(streamHandler)

if __name__ == '__main__':
    setup_logger('log_pl', '/home/myuser/test.log')
    log_pl = logging.getLogger('log_pl')
    log_pl.info('TEST')
    log_pl.debug('TEST')
At the end of the script, the file test.log is created, but it is empty.
What am I missing?
Your setup_logger function specifies a (default) level of WARNING
def setup_logger(logger_name, log_file, level=logging.WARNING):
...and you later log two events at a lower level than WARNING, which are ignored, as they should be:
log_pl.info('TEST')
log_pl.debug('TEST')
If you change your code that calls your setup_logger function to:
if __name__ == '__main__':
    setup_logger('log_pl', '/home/myuser/test.log', logging.DEBUG)
...I'd expect that it works as you'd like.
See the simple example in the Logging HOWTO page.
I'm seeing extra logging messages after I've imported a module I need to use. I'm trying to work out the correct way to stop this happening. The following code shows the issue best:
import os
import logging
import flickrapi
class someObject:
    def __init__(self):
        self.value = 1
        logger = logging.getLogger(__name__)
        print logger.handlers
        logger.info("value = " + str(self.value))

def main():
    # Set up logging
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    formatter = logging.Formatter('[%(asctime)-15s] %(name)-8s %(levelname)-6s %(message)s')

    fh = logging.FileHandler(os.path.splitext(os.path.basename(__file__))[0] + ".log")
    fh.setLevel(logging.DEBUG)
    fh.setFormatter(formatter)
    logger.addHandler(fh)

    ch = logging.StreamHandler()
    ch.setLevel(logging.INFO)
    ch.setFormatter(formatter)
    logger.addHandler(ch)

    logger.debug("Debug message")
    logger.info("Info message")
    thingy = someObject()

if __name__ == "__main__":
    main()
With the flickrapi import I see the following output:
DEBUG:__main__:Debug message
[2013-05-03 12:10:47,755] __main__ INFO Info message
INFO:__main__:Info message
[<logging.FileHandler instance at 0x1676dd0>, <logging.StreamHandler instance at 0x1676ea8>]
[2013-05-03 12:10:47,755] __main__ INFO value = 1
INFO:__main__:value = 1
With the flickrapi import removed I see the correct output:
[2013-05-03 12:10:47,755] __main__ INFO Info message
[<logging.FileHandler instance at 0x1676dd0>, <logging.StreamHandler instance at 0x1676ea8>]
[2013-05-03 12:10:47,755] __main__ INFO value = 1
This is my first time of using logging and it's got me a little stumped. I've read the documentation a couple of times but I think I'm missing something in my understanding.
Looking at logging.Logger.manager.loggerDict, there are other loggers but each of their .handlers is empty. The __main__ logger only has the two handlers I've added so where do these messages come from?
Any pointers as to how I can solve this would be much appreciated as I've hit a wall.
Thanks
This is a bug in the flickrapi library you are using. It is calling logging.basicConfig() in its __init__.py, which is the wrong thing to do for a library, since it adds a StreamHandler (defaulting to stderr) to the root logger.
You should probably open a bug report with the author. There is a HOWTO in the python logging docs on how libraries should configure logging.
To work around this issue until the bug is fixed you should be able to do the following:
# at the top of your module before doing anything else
import flickrapi
import logging
try:
    logging.root.handlers.pop()
except IndexError:
    # once the bug is fixed in the library, the handlers list will be
    # empty, so we need to catch this error
    pass
I have a Python script that makes use of 'Print' for printing to stdout. I've recently added logging via Python Logger and would like to make it so these print statements go to logger if logging is enabled. I do not want to modify or remove these print statements.
I can log by doing 'log.info("some info msg")'. I want to be able to do something like this:
if logging_enabled:
    sys.stdout = log.info
print("test")
If logging is enabled, "test" should be logged as if I did log.info("test"). If logging isn't enabled, "test" should just be printed to the screen.
Is this possible? I know I can direct stdout to a file in a similar manner (see: redirect prints to log file)
You have two options:
Open a logfile and replace sys.stdout with it, not a function:
log = open("myprog.log", "a")
sys.stdout = log
>>> print("Hello")
>>> # nothing is printed because it goes to the log file instead.
Replace print with your log function:
# If you're using python 2.x, uncomment the next line
#from __future__ import print_function
print = log.info
>>> print("Hello!")
>>> # nothing is printed because log.info is called instead of print
Of course, you can both print to the standard output and append to a log file, like this:
# Uncomment the line below for python 2.x
#from __future__ import print_function
import logging
logging.basicConfig(level=logging.INFO, format='%(message)s')
logger = logging.getLogger()
logger.addHandler(logging.FileHandler('test.log', 'a'))
print = logger.info
print('yo!')
One more method is to wrap the logger in an object that translates calls to write into calls to the logger's log method.
Ferry Boender does just this, provided under the GPL license in a post on his website. The code below is based on his but solves two issues with the original:
1. The class doesn't implement the flush method, which is called when the program exits.
2. The class doesn't buffer writes on newlines, as io.TextIOWrapper objects are supposed to, which results in newlines at odd points.
import logging
import sys
class StreamToLogger(object):
    """
    Fake file-like stream object that redirects writes to a logger instance.
    """
    def __init__(self, logger, log_level=logging.INFO):
        self.logger = logger
        self.log_level = log_level
        self.linebuf = ''

    def write(self, buf):
        temp_linebuf = self.linebuf + buf
        self.linebuf = ''
        for line in temp_linebuf.splitlines(True):
            # From the io.TextIOWrapper docs:
            #   On output, if newline is None, any '\n' characters written
            #   are translated to the system default line separator.
            # By default sys.stdout.write() expects '\n' newlines and then
            # translates them, so this is still cross-platform.
            if line[-1] == '\n':
                self.logger.log(self.log_level, line.rstrip())
            else:
                self.linebuf += line

    def flush(self):
        if self.linebuf != '':
            self.logger.log(self.log_level, self.linebuf.rstrip())
        self.linebuf = ''

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s:%(levelname)s:%(name)s:%(message)s',
    filename="out.log",
    filemode='a'
)

stdout_logger = logging.getLogger('STDOUT')
sl = StreamToLogger(stdout_logger, logging.INFO)
sys.stdout = sl

stderr_logger = logging.getLogger('STDERR')
sl = StreamToLogger(stderr_logger, logging.ERROR)
sys.stderr = sl
This allows you to easily route all output to a logger of your choice. If needed, you can save sys.stdout and/or sys.stderr as mentioned by others in this thread before replacing it if you need to restore it later.
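A minimal save-and-restore sketch (using a plain file as the stand-in replacement stream; any file-like object, such as a StreamToLogger instance, works the same way):

```python
import sys

original_stdout = sys.stdout              # keep a reference before swapping
sys.stdout = open('captured.txt', 'w')    # any file-like object works here
try:
    print('goes to the replacement stream')
finally:
    sys.stdout.close()
    sys.stdout = original_stdout          # restore the real console stream

print('back on the console')
```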
A much simpler option,
import logging, sys
logging.basicConfig(filename='path/to/logfile', level=logging.DEBUG)
logger = logging.getLogger()
sys.stderr.write = logger.error
sys.stdout.write = logger.info
Once you've defined your logger, use this to make print redirect to the logger, even when print is called with multiple parameters:
print = lambda *tup : logger.info(str(" ".join([str(x) for x in tup])))
You really should do it the other way around: adjust your logging configuration to use print statements or something else, depending on the settings. Do not overwrite print behaviour, because some settings that may be introduced in the future (e.g. by you or by someone else using your module) may actually output to stdout, and then you will have problems.
There is a handler that is supposed to redirect your log messages to the proper stream (a file, stdout, or anything else file-like). It is called StreamHandler, and it is bundled with the logging module.
So basically, in my opinion, you should do what you stated you don't want to do: replace the print statements with actual logging.
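A sketch of that recommendation (logger name and format are placeholders): the call sites only ever use the logger, and the handler alone decides where output goes:

```python
import logging
import sys

logger = logging.getLogger('myapp')        # placeholder name
logger.setLevel(logging.INFO)

# StreamHandler accepts any file-like stream: sys.stdout here, but a file
# object or io.StringIO can be swapped in without touching the call sites.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
logger.addHandler(handler)

logger.info('goes to stdout via the handler')
```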
The snippet below works perfectly inside my PySpark code. In case someone needs it for understanding:
import os
import sys
import logging
import logging.handlers
log = logging.getLogger(__name__)
handler = logging.FileHandler("spam.log")
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler.setFormatter(formatter)
log.addHandler(handler)
sys.stderr.write = log.error
sys.stdout.write = log.info
(Every error will be logged to "spam.log" in the same directory; nothing will appear on the console/stdout.)
(Likewise, every info message will go to "spam.log"; nothing on the console/stdout.)
To print error/info output both to the file and to the console, remove the two sys.stdout/sys.stderr assignments above.
Happy Coding
Cheers!!!