Overriding info, warn, and error logging of a third-party library - Python

I am new to Python (my background is .NET) and working on a Python application that is a wrapper around a third-party library. The third-party Python library uses standard logging. I need to intercept these logging calls and store them. The code looks something like this:
Third-party main — myApp.py:
# Standard Library
import logging
from options import (info, warn)
from process import (processIt)
# Module-level logger
log = logging.getLogger(__name__)
log.propagate = False
formatter = logging.Formatter("[%(name)s] [%(levelname)-7s] [%(asctime)s] %(message)s")
# Console Handler for Elevator messages
ch = logging.StreamHandler()
ch.setFormatter(formatter)
log.addHandler(ch)
def runIt():
    info("Running it.", 1)
    processIt()
    info("Running it.", 2)
Third-party logging setup — options.py:
# Standard Library
import logging
formatter = logging.Formatter("[%(name)s] [%(ecode)d] [%(levelname)-7s] [%(asctime)s] %(message)s")
# Console Handler for Elevator messages
ch = logging.StreamHandler()
ch.setFormatter(formatter)
# Module-level logger
log = logging.getLogger(__name__)
log.level = logging.INFO
# temporary? hack to prevent multiple loggers from printing messages
log.propagate = False
log.addHandler(ch)
def info(fmt, ecode, *args):
    log.info(fmt, *args, extra={'ecode': ecode})

def warn(fmt, ecode, *args):
    log.warning(fmt, *args, extra={'ecode': ecode})

def init():
    info("Initialized options", 100)
Third-party process — process.py:
from options import (info, warn)
def processIt():
    info("Inside Process", 10)
This is the client — client.py:
import options
import myApp
info_msg = []
warn_msg = []
def info(fmt, ecode, *args):
    info_msg.append({ecode: fmt.format(*args)})

def warn(fmt, ecode, *args):
    warn_msg.append({ecode: fmt.format(*args)})

options.warn = warn
options.info = info

def runApp():
    print("Start")
    options.init()
    myApp.runIt()
    print("End")
    print(info_msg)
    print(warn_msg)

runApp()
Here is the output:
Start
[options] [1] [INFO ] [2022-06-09 09:28:46,380] Running it.
[options] [10] [INFO ] [2022-06-09 09:28:46,380] Inside Process
[options] [2] [INFO ] [2022-06-09 09:28:46,380] Running it.
End
[{100: 'Initialized options'}]
[]
You can see that only the logging call made in init() got overridden; everything else still went through the original logger.

Explanation and quick-and-dirty solution
First of all, your solution is a bit of a hack, since you're modifying a third-party module at runtime. This might work, but it depends on the situation. The reason it does not work in this case is that myApp.py contains from options import (info, warn). In general, the from ... import ... style is not recommended, since it creates additional name bindings. In this example, a name called info is created in the myApp module.
When you overwrite the options.info function, this has no effect on myApp.info, since that binding was already created and still references the original function.
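To illustrate the binding behaviour, here is a small self-contained sketch (it fakes the options module with types.ModuleType, so nothing here is the actual third-party API):

import types

# Stand-in for the third-party options module.
options = types.ModuleType("options")
options.info = lambda msg: print("original:", msg)

# What `from options import info` effectively does:
info = options.info  # copies the *current* reference into this namespace

# The client's monkey-patch rebinds the module attribute only.
options.info = lambda msg: print("patched:", msg)

options.info("via options.info")     # -> patched: via options.info
info("via the from-import binding")  # -> original: via the from-import binding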
The quick-and-dirty fix is to override the functions in the options module before you import the other third-party modules:
import options
# import myApp # NOT YET
info_msg = []
warn_msg = []
def info(fmt, ecode, *args):
    info_msg.append({ecode: fmt.format(*args)})

def warn(fmt, ecode, *args):
    warn_msg.append({ecode: fmt.format(*args)})

options.warn = warn
options.info = info

import myApp  # Now it's okay

def runApp():
    ...
I tested the above code with your current example code, and it gives the following output:
Start
End
[{100: 'Initialized options'}, {1: 'Running it.'}, {10: 'Inside Process'}, {2: 'Running it.'}]
[]
Note, however, that this solution might not work if the actual third-party code is bigger and does its imports in yet another way than demonstrated here. Also note that the code above breaks a number of coding style guides (which you may or may not care about).
Solution using the logging module
Instead of "hacking" the thirdparty code, you can also achieve a similar result using the logging module.
The logging module is written to support these type of things, and it seems like the third-party library follows the best practices regarding logging. This means you could just create your own logging handler to deal with logging from the third-party library.
To capture the log messages in the way you demonstrated, you'll need to write a custom logging Handler Object. The following code demonstrates how this works.
import logging
import thirdparty.options # See note below
import thirdparty.myApp # See note below
class MyLogHandler(logging.Handler):
    def __init__(self):
        super().__init__()
        self.info_msg = []
        self.warn_msg = []

    def emit(self, record: logging.LogRecord):
        ecode = record.__dict__.get('ecode')
        if record.levelno == logging.INFO:
            self.info_msg.append({ecode: record.msg.format(*record.args)})
        elif record.levelno == logging.WARNING:
            self.warn_msg.append({ecode: record.msg.format(*record.args)})

def runApp():
    handler = MyLogHandler()
    logging.getLogger('thirdparty.options').addHandler(handler)  # See note below
    print("Start")
    thirdparty.options.init()  # See note below
    thirdparty.myApp.runIt()   # See note below
    print("End")
    print(handler.info_msg)
    print(handler.warn_msg)

runApp()
NOTE: I named the third-party package "thirdparty" in the above code. Your actual library probably has a different name, in which case you'll need to adjust the names in the code accordingly!
The output of above script is:
Start
[thirdparty.options] [100] [INFO ] [2022-06-10 18:06:21,979] Initialized options
[thirdparty.options] [1] [INFO ] [2022-06-10 18:06:21,979] Running it.
[thirdparty.options] [10] [INFO ] [2022-06-10 18:06:21,980] Inside Process
[thirdparty.options] [2] [INFO ] [2022-06-10 18:06:21,980] Running it.
End
[{100: 'Initialized options'}, {1: 'Running it.'}, {10: 'Inside Process'}, {2: 'Running it.'}]
[]
Note that this solution does not replace the original logging, but adds a log handler, so the original logging still works as well. (If you don't want that, you can remove the original log handler.)
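If you don't want the library's own console output, a minimal sketch of that removal (assuming the same 'thirdparty.options' logger name as above):

import logging

lib_logger = logging.getLogger('thirdparty.options')
# Detach the library's own handlers so only the custom MyLogHandler receives records.
for h in list(lib_logger.handlers):
    lib_logger.removeHandler(h)
lib_logger.addHandler(MyLogHandler())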
References:
Library documentation of Handler Object.
Python Logging HOWTO

Try doing the import thirdparty.setup inside a function, because otherwise you are going to get an import error.

Related

python logging - different level for specific function

I'm trying to reduce the amount of logging that the napalm library sends to syslog, but also allow info logs to be sent from other parts of the code. I set up logging.basicConfig to be INFO, but then I'd like the napalm function to be WARNING and above.
So I have code like this:
from napalm import get_network_driver
import logging
import getpass
logging.basicConfig(
    filename="/var/log/myscripts/script.log", level=logging.INFO, format="%(asctime)s %(message)s")

def napalm(device):
    logging.getLogger().setLevel(logging.WARNING)
    username = getpass.getuser()
    driver = get_network_driver("junos")
    router = driver(str(device), username, "", password="", timeout=120)
    router.open()
    return router
router = napalm('myrouter')
config = "hostname foobar"
router.load_merge_candidate(config=config)
show = router.compare_config()
logging.info(show)
The issue is that the logging.info output never makes it to the log file. If I do logging.warning(show) it does, but I'd like this to be info. The reason I want the function to be WARNING is that it generates so much other logging at the info level that it is just noise. So I'm trying to cut down on that.
A nice trick from the book Effective Python. See if it helps your situation.
import logging
from contextlib import contextmanager

logger = logging.getLogger()

def napalm(count):
    for x in range(count):
        logger.info('useless log line')

@contextmanager
def debug_logging(level):
    logger = logging.getLogger()
    old_level = logger.getEffectiveLevel()
    logger.setLevel(level)
    try:
        yield
    finally:
        logger.setLevel(old_level)

napalm(5)
with debug_logging(logging.WARNING):
    napalm(5)
napalm(5)
By calling logging.getLogger() without a parameter you are currently retrieving the root logger, and overriding the level for it also affects all other loggers.
You should instead retrieve the library's logger and override level only for that specific one.
The napalm library executes the following in its __init__.py:
logger = logging.getLogger("napalm")
i.e. the library logger's name is "napalm".
You should thus be able to override the level of that specific logger by putting the following line in your script:
logging.getLogger("napalm").setLevel(logging.WARNING)
Generic example:
import logging
logging.basicConfig(level=logging.DEBUG, format="%(levelname)s: %(message)s")
A = logging.getLogger("A")
B = logging.getLogger("B")
A.debug("#1 from A gets printed")
B.debug("#1 from B gets printed")
logging.getLogger("A").setLevel(logging.CRITICAL)
A.debug("#2 from A doesn't get printed") # because we've increased the level
B.debug("#2 from B gets printed")
Output:
DEBUG: #1 from A gets printed
DEBUG: #1 from B gets printed
DEBUG: #2 from B gets printed
Edit:
Since this didn't work well for you, it's probably because there are various other loggers in this library:
$ grep -R 'getLogger' .
napalm/junos/junos.py:log = logging.getLogger(__file__)
napalm/base/helpers.py:logger = logging.getLogger(__name__)
napalm/base/clitools/cl_napalm.py:logger = logging.getLogger("napalm")
napalm/base/clitools/cl_napalm_validate.py:logger = logging.getLogger("cl_napalm_validate.py")
napalm/base/clitools/cl_napalm_test.py:logger = logging.getLogger("cl_napalm_test.py")
napalm/base/clitools/cl_napalm_configure.py:logger = logging.getLogger("cl-napalm-config.py")
napalm/__init__.py:logger = logging.getLogger("napalm")
napalm/pyIOSXR/iosxr.py:logger = logging.getLogger(__name__)
napalm/iosxr/iosxr.py:logger = logging.getLogger(__name__)
I would then resort to something like this (related: How to list all existing loggers using the python logging module):
# increase the level for all registered loggers
# (loggerDict may also contain logging.PlaceHolder entries, which must be skipped)
for name, logger in logging.root.manager.loggerDict.items():
    if isinstance(logger, logging.Logger):
        logger.setLevel(logging.WARNING)
or perhaps it will suffice if you just print the various loggers, identify the most noisy ones, and silence them one by one.
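A minimal sketch of that inspection step, reading names straight from the logger registry:

import logging

for name in sorted(logging.root.manager.loggerDict):
    candidate = logging.root.manager.loggerDict[name]
    if isinstance(candidate, logging.Logger):  # skip PlaceHolder entries
        print(name, logging.getLevelName(candidate.getEffectiveLevel()))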

How to verify a Python log format in unittest?

Recently I have been writing a Python logging extension, and I want to add some tests to verify that my extension works as expected.
However, I don't know how to capture the complete log output and compare it with my expected result in unittest/pytest.
Simplified sample:
# app.py
import logging
def create_logger():
    formatter = logging.Formatter(fmt='%(name)s-%(levelname)s-%(message)s')
    hdlr = logging.StreamHandler()
    hdlr.setFormatter(formatter)
    logger = logging.getLogger(__name__)
    logger.setLevel('DEBUG')
    logger.addHandler(hdlr)
    return logger

app_logger = create_logger()
Here are my tests:
Attempt 1: unittest
from app import app_logger
import unittest
class TestApp(unittest.TestCase):
    def test_logger(self):
        with self.assertLogs('', 'DEBUG') as cm:
            app_logger.debug('hello')
        # or some other way to capture the log output.
        self.assertEqual('app-DEBUG-hello', cm.output)
expected behaviour:
cm.output = 'app-DEBUG-hello'
actual behaviour:
cm.output = ['DEBUG:app:hello']
Attempt 2: pytest caplog
from app import app_logger
import pytest
def test_logger(caplog):
    app_logger.debug('hello')
    assert caplog.text == 'app-DEBUG-hello'
expected behaviour:
caplog.text = 'app-DEBUG-hello'
actual behaviour:
caplog.text = 'test_logger.py 6 DEBUG hello'
Attempt 3: pytest capsys
from app import app_logger
import pytest
def test_logger(capsys):
    app_logger.debug('hello')
    out, err = capsys.readouterr()
    assert err
    assert err == 'app-DEBUG-hello'
expected behaviour:
err = 'app-DEBUG-hello'
actual behaviour:
err = ''
Considering there will be many tests with different formats, I don't want to check the log format manually. I have no idea how to capture the complete log output as I see it on the console and compare it with my expected result in the test cases. Hoping for your help, thanks.
I know this is old, but posting here since it came up in Google for me...
Probably needs cleanup, but it is the first thing that has gotten close for me, so I figured it would be good to share.
Here is a test case mixin I've put together that lets me verify that a particular handler is being formatted as expected by copying the formatter:
import io
import logging
import logging.config

from django.conf import settings
from django.test import SimpleTestCase
from django.utils.log import DEFAULT_LOGGING

class SetupLoggingMixin:
    def setUp(self):
        super().setUp()
        logging.config.dictConfig(settings.LOGGING)
        self.stream = io.StringIO()
        self.root_logger = logging.getLogger("")
        self.root_hdlr = logging.StreamHandler(self.stream)
        console_handler = None
        for handler in self.root_logger.handlers:
            if handler.name == 'console':
                console_handler = handler
                break
        if console_handler is None:
            raise RuntimeError('could not find console handler')
        formatter = console_handler.formatter
        self.root_formatter = formatter
        self.root_hdlr.setFormatter(self.root_formatter)
        self.root_logger.addHandler(self.root_hdlr)

    def tearDown(self):
        super().tearDown()
        self.stream.close()
        logging.config.dictConfig(DEFAULT_LOGGING)
And here is an example of how to use it:
class SimpleLogTests(SetupLoggingMixin, SimpleTestCase):
    def test_logged_time(self):
        msg = 'foo'
        self.root_logger.error(msg)
        self.assertEqual(self.stream.getvalue(), 'my-expected-message-formatted-as-expected')
After reading the source code of the unittest library, I've worked out the following bypass. Note that it works by changing a protected member of an imported module, so it may break in future versions.
from unittest.case import _AssertLogsContext
_AssertLogsContext.LOGGING_FORMAT = 'same format as your logger'
After these commands, the logging context opened by self.assertLogs will use the above format. I really don't know why this value is left hard-coded and not configurable.
I did not find an option to read the format of a logger, but if you use logging.config.dictConfig you can take the value from the same dictionary.
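A rough sketch of that idea; the config dict and its 'console' formatter name are invented for this example:

import logging.config
from unittest.case import _AssertLogsContext  # protected, may break in future versions

LOGGING = {
    'version': 1,
    'formatters': {'console': {'format': '%(name)s-%(levelname)s-%(message)s'}},
}
logging.config.dictConfig(LOGGING)

# Reuse the same format string for assertLogs instead of hard-coding it twice.
_AssertLogsContext.LOGGING_FORMAT = LOGGING['formatters']['console']['format']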
I know this doesn't completely answer the OP's question, but I stumbled upon this post while looking for a neat way to capture logged messages.
Taking what @user319862 did, I've cleaned and simplified it.
import unittest
import logging
from io import StringIO
class SetupLogging(unittest.TestCase):
    def setUp(self):
        super().setUp()
        self.stream = StringIO()
        self.root_logger = logging.getLogger("")
        self.root_hdlr = logging.StreamHandler(self.stream)
        self.root_logger.addHandler(self.root_hdlr)

    def tearDown(self):
        super().tearDown()
        self.stream.close()

    def test_log_output(self):
        """ Does the logger produce the correct output? """
        msg = 'foo'
        self.root_logger.error(msg)
        self.assertEqual(self.stream.getvalue(), 'foo\n')

if __name__ == '__main__':
    unittest.main()
I'm new to Python but have some experience with testing/TDD in other languages. I found that the default way of "changing" the formatter is to add a new StreamHandler. But if your logger already has a stream handler (e.g. Azure Functions or TestCase.assertLogs adds one for you), you end up logging twice: once with your format and once with the "default" format.
If the OP's create_logger function mutated the formatter of the current StreamHandler instead of adding a new StreamHandler (check whether one exists, create a new one only if it doesn't, and all that jazz...),
then you could call create_logger after the `with self.assertLogs('', 'DEBUG') as cm:` line and just assert on cm.output, and it would work because you would be mutating the Formatter of the StreamHandler that assertLogs is adding.
So basically what's happening is that the execution order is not appropriate for the test.
The order of execution in the OP is:
import stuff
add a StreamHandler with the logger's formatter
run the test
self.assertLogs adds another StreamHandler to the logger
assert against the second StreamHandler's output
When it should be:
import stuff
add a StreamHandler with the logger's formatter (irrelevant at this point)
run the test
self.assertLogs adds another StreamHandler to the logger
mutate the formatter of the current StreamHandler
assert against the single, properly formatted StreamHandler (see the sketch below)
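A rough sketch of that create_logger variant, based on the app.py from the question (the handler-reuse logic is my own illustration, not the OP's code):

import logging

def create_logger():
    formatter = logging.Formatter(fmt='%(name)s-%(levelname)s-%(message)s')
    logger = logging.getLogger(__name__)
    logger.setLevel('DEBUG')
    if logger.handlers:
        # Mutate the formatter of the handler(s) already attached
        # (e.g. the one added by assertLogs) instead of stacking a second one.
        for hdlr in logger.handlers:
            hdlr.setFormatter(formatter)
    else:
        hdlr = logging.StreamHandler()
        hdlr.setFormatter(formatter)
        logger.addHandler(hdlr)
    return logger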

pytest: selective log levels on a per-module basis

I'm using pytest-3.7.1 which has good support for logging, including live logging to stdout during tests. I'm using --log-cli-level=DEBUG to dump all debug-level logging to the console as it happens.
The problem I have is that --log-cli-level=DEBUG turns on debug logging for all modules in my test program, including third-party dependencies, and it floods the log with a lot of uninteresting output.
Python's logging module has the ability to set logging levels per module. This enables selective logging - for example, in a normal Python program I can turn on debugging for just one or two of my own modules, and restrict the log output to just those, or set different log levels for each module. This enables turning off debug-level logging for noisy libraries.
So what I'd like to do is apply the same concept to pytest's logging - i.e. specify a logging level, from the command line, for specific non-root loggers. For example, if I have a module called test_foo.py then I'm looking for a way to set the log level for this module from the command line.
I'm prepared to roll my own if necessary (I know how to add custom arguments to pytest), but before I do that I just want to be sure that there isn't already a solution. Is anyone aware of one?
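For reference, the roll-your-own route could look roughly like the following conftest.py sketch; the --logger-level option name and its NAME=LEVEL syntax are invented for this example:

# conftest.py
import logging

def pytest_addoption(parser):
    parser.addoption(
        "--logger-level", action="append", default=[],
        help="NAME=LEVEL pairs, e.g. --logger-level test_foo=DEBUG")

def pytest_configure(config):
    # Apply each NAME=LEVEL pair to the named (non-root) logger.
    for spec in config.getoption("--logger-level"):
        name, _, level = spec.partition("=")
        logging.getLogger(name).setLevel(level.upper())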
I had the same problem, and found a solution in another answer:
Instead of --log-cli-level=DEBUG, use --log-level DEBUG. It disables all third-party module logs (in my case, I had plenty of matplotlib logs), but still outputs your app logs for each test that fails.
I got this working by writing a factory class that sets the level of the root logger to logging.INFO and applies the logging level from the command line to all loggers obtained from the factory. If the logging level from the command line is higher than the minimum global log level specified in the class (via the constant MINIMUM_GLOBAL_LOG_LEVEL), the global log level isn't changed.
import logging

MODULE_FIELD_WIDTH_IN_CHARS = '20'
LINE_NO_FIELD_WIDTH_IN_CHARS = '3'
LEVEL_NAME_FIELD_WIDTH_IN_CHARS = '8'
MINIMUM_GLOBAL_LOG_LEVEL = logging.INFO

class EasyLogger():
    root_logger = logging.getLogger()
    specified_log_level = root_logger.level
    format_string = '{asctime} '
    format_string += '{module:>' + MODULE_FIELD_WIDTH_IN_CHARS + 's}'
    format_string += '[{lineno:' + LINE_NO_FIELD_WIDTH_IN_CHARS + 'd}]'
    format_string += '[{levelname:^' + LEVEL_NAME_FIELD_WIDTH_IN_CHARS + 's}]: '
    format_string += '{message}'
    level_change_warning_sent = False

    @classmethod
    def get_logger(cls, logger_name):
        if not EasyLogger._logger_has_format(cls.root_logger, cls.format_string):
            EasyLogger._setup_root_logger()
        logger = logging.getLogger(logger_name)
        logger.setLevel(cls.specified_log_level)
        return logger

    @classmethod
    def _setup_root_logger(cls):
        formatter = logging.Formatter(fmt=cls.format_string, style='{')
        if not cls.root_logger.hasHandlers():
            handler = logging.StreamHandler()
            cls.root_logger.addHandler(handler)
        for handler in cls.root_logger.handlers:
            handler.setFormatter(formatter)
        cls.root_logger.setLevel(MINIMUM_GLOBAL_LOG_LEVEL)
        if (cls.specified_log_level < MINIMUM_GLOBAL_LOG_LEVEL and
                cls.level_change_warning_sent is False):
            cls.root_logger.log(
                max(cls.specified_log_level, logging.WARNING),
                "Setting log level for %s class to %s, all others to %s" % (
                    __name__,
                    cls.specified_log_level,
                    MINIMUM_GLOBAL_LOG_LEVEL
                )
            )
            cls.level_change_warning_sent = True

    @staticmethod
    def _logger_has_format(logger, format_string):
        # Compares the private Formatter._fmt attribute, since the logging
        # module does not expose the format string publicly.
        for handler in logger.handlers:
            formatter = handler.formatter
            return formatter is not None and formatter._fmt == format_string
        return False
The above class is then used to send logs just as you would with a logging.Logger object, as follows:
from EasyLogger import EasyLogger

class MySuperAwesomeClass():
    def __init__(self):
        self.logger = EasyLogger.get_logger(__name__)

    def foo(self):
        self.logger.debug("debug message")
        self.logger.info("info message")
        self.logger.warning("warning message")
        self.logger.critical("critical message")
        self.logger.error("error message")
Enable/Disable/Modify the log level of any module in Python:
logging.getLogger("module_name").setLevel(logging.log_level)

How to turn `logging` warnings into errors?

A library that I use emits warnings and errors through the logging module (logging.Logger's warn() and error() methods). I would like to implement an option to turn the warnings into errors (i.e., fail on warnings).
Is there an easy way to achieve this?
From looking at the documentation, I cannot see a ready-made solution. I assume it is possible by adding a custom Handler object, but I am not sure how to do it "right". Any pointers?
@hoefling's answer is close, but I would change it like so:
class LevelRaiser(logging.Filter):
    def filter(self, record):
        if record.levelno == logging.WARNING:
            record.levelno = logging.ERROR
            record.levelname = logging.getLevelName(logging.ERROR)
        return True

def configure_library_logging():
    library_root_logger = logging.getLogger(library.__name__)
    library_root_logger.addFilter(LevelRaiser())
The reason is that filters are used to change LogRecord attributes and filter stuff out, whereas handlers are used to do I/O. What you're trying to do here isn't I/O, and so doesn't really belong in a handler.
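For completeness, a minimal self-contained run of the filter variant (the 'library' logger name is just an example stand-in):

import logging

class LevelRaiser(logging.Filter):
    def filter(self, record):
        if record.levelno == logging.WARNING:
            record.levelno = logging.ERROR
            record.levelname = logging.getLevelName(logging.ERROR)
        return True

logging.basicConfig(level=logging.INFO)
lib_logger = logging.getLogger('library')
lib_logger.addFilter(LevelRaiser())
lib_logger.warning('library stuff')  # printed as ERROR:library:library stuff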
Update: I like the proposal Vinay made in this answer; injecting a custom Filter instead of a Handler is a much cleaner way. Please check it out!
You are on the right track with implementing your own Handler. This is pretty easy to implement. I would do it like this: write a handler that edits the LogRecord in place, and attach one handler instance to the library's root logger. Example:
# library.py
import logging

_LOGGER = logging.getLogger(__name__)

def library_stuff():
    _LOGGER.warning('library stuff')
This is a script that uses the library:
import logging
import library
class LevelRaiser(logging.Handler):
    def emit(self, record: logging.LogRecord):
        if record.levelno == logging.WARNING:
            record.levelno = logging.ERROR
            record.levelname = logging.getLevelName(logging.ERROR)

def configure_library_logging():
    library_root_logger = logging.getLogger(library.__name__)
    library_root_logger.addHandler(LevelRaiser())

if __name__ == '__main__':
    # do some example global logging config
    logging.basicConfig(level=logging.INFO)
    # additional configuration for the library logging
    configure_library_logging()
    # play with different loggers
    our_logger = logging.getLogger(__name__)
    root_logger = logging.getLogger()
    root_logger.warning('spam')
    our_logger.warning('eggs')
    library.library_stuff()
    root_logger.warning('foo')
    our_logger.warning('bar')
    library.library_stuff()
Run the script:
WARNING:root:spam
WARNING:__main__:eggs
ERROR:library:library stuff
WARNING:root:foo
WARNING:__main__:bar
ERROR:library:library stuff
Note that the warning level is elevated to error level only for the library's log messages; all the rest remains unchanged.
You can assign logging.warn to logging.error before calling methods from your library:
import logging
warn_log_original = logging.warn
logging.warn = logging.error
library_call()
logging.warn = warn_log_original
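A slightly safer sketch of the same swap, wrapped in a context manager so the original function is restored even if the call raises (the library_call stub stands in for your library):

import logging
from contextlib import contextmanager

@contextmanager
def warnings_as_errors():
    # Rebinds the module-level logging.warn alias; Logger methods are unaffected.
    original = logging.warn
    logging.warn = logging.error
    try:
        yield
    finally:
        logging.warn = original

logging.basicConfig()

def library_call():  # stand-in for the library call above
    logging.warn("something looks off")

with warnings_as_errors():
    library_call()  # emitted as ERROR:root:something looks off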

Disable logging per method/function?

I'm new to Python logging, and I can easily see how it is preferable to the home-brew solution I have come up with.
One question I can't seem to find an answer to: how do I squelch logging messages on a per-method/function basis?
My hypothetical module contains a single function. As I develop, the log calls are a great help:
logging.basicConfig(level=logging.DEBUG,
                    format=('%(levelname)s: %(funcName)s(): %(message)s'))
log = logging.getLogger()

def my_func1():
    # ...stuff...
    log.debug("Here's an interesting value: %r" % some_value)
    log.info("Going great here!")
    # ...more stuff...
As I wrap up my work on my_func1 and start work on a second function, my_func2, the logging messages from my_func1 start going from "helpful" to "clutter".
Is there a single-line magic statement, such as logging.disabled_in_this_func(), that I can add to the top of my_func1 to disable all the logging calls within my_func1, but still leave logging calls in all other functions/methods unchanged?
Thanks
linux, Python 2.7.1
The trick is to create multiple loggers.
There are several aspects to this.
First. Don't use logging.basicConfig() at the beginning of a module. Use it only inside the main-import switch
if __name__ == "__main__":
logging.basicConfig(...)
main()
logging.shutdown()
Second. Never get the "root" logger, except to set global preferences.
Third. Get individual named loggers for things which might be enabled or disabled.
log = logging.getLogger(__name__)
func1_log = logging.getLogger("{0}.{1}".format(__name__, "my_func1"))
Now you can set logging levels on each named logger.
log.setLevel(logging.INFO)
func1_log.setLevel(logging.ERROR)
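Inside the function you would then log through its own logger; a minimal sketch:

def my_func1():
    func1_log.debug("This can be silenced independently of the module logger.")
    func1_log.info("So can this.")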
You could create a decorator that would temporarily suspend logging, ala:
from functools import wraps

def suspendlogging(func):
    @wraps(func)
    def inner(*args, **kwargs):
        previousloglevel = log.getEffectiveLevel()
        try:
            return func(*args, **kwargs)
        finally:
            log.setLevel(previousloglevel)
    return inner

@suspendlogging
def my_func1(): ...
Caveat: that would also suspend logging for any function called from my_func1, so be careful how you use it.
You could use a decorator:
import logging
import functools

def disable_logging(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.disable(logging.DEBUG)
        result = func(*args, **kwargs)
        logging.disable(logging.NOTSET)
        return result
    return wrapper

@disable_logging
def my_func1(...):
It took me some time to learn how to implement the sub-loggers suggested by S.Lott.
Given how tough it was to figure out how to set up logging when I was starting out, I figure it's long past time to share what I learned since then.
Please keep in mind that this isn't the only way to set up loggers/sub-loggers, nor is it the best. It's just the way I use to get the job done to fit my needs. I hope this is helpful to someone. Please feel free to comment/share/critique.
Let's assume we have a simple library we like to use. From the main program, we'd like to be able to control the logging messages we get from the library. Of course we're considerate library creators, so we configure our library to make this easy.
First, the main program:
# some_prog.py

import os
import sys

# Be sure to give Vinay Sajip thanks for his creation of the logging module
# and tireless efforts to answer our dumb questions about it. Thanks Vinay!
import logging

# This module will make understanding how Python logging works so much easier.
# Also great for debugging why your logging setup isn't working.
# Be sure to give its creator Brandon Rhodes some love. Thanks Brandon!
import logging_tree

# Example library
import some_lib

# Directory, name of current module
current_path, modulename = os.path.split(os.path.abspath(__file__))
modulename = modulename.split('.')[0]  # Drop the '.py'

# Set up a module-local logger
# In this case, the logger will be named 'some_prog'
log = logging.getLogger(modulename)

# Add a Handler. The Handler tells the logger *where* to send the logging
# messages. We'll set up a simple handler that sends the log messages
# to standard output (stdout)
stdout_handler = logging.StreamHandler(stream=sys.stdout)
log.addHandler(stdout_handler)

def some_local_func():
    log.info("Info: some_local_func()")
    log.debug("Debug: some_local_func()")

if __name__ == "__main__":
    # Our main program, here's where we tie together/enable the logging infra
    # we've added everywhere else.

    # Use logging_tree.printout() to see what the default log levels
    # are on our loggers. Make logging_tree.printout() calls at any place in
    # the code to see how the loggers are configured at any time.
    #
    # logging_tree.printout()

    print("# Logging level set to default (i.e. 'WARNING').")
    some_local_func()
    some_lib.some_lib_func()
    some_lib.some_special_func()

    # We have a reference to our local logger, so we can set/change its logging
    # level directly. Let's set it to INFO:
    log.setLevel(logging.INFO)
    print("# Local logging set to 'INFO'.")
    some_local_func()
    some_lib.some_lib_func()
    some_lib.some_special_func()

    # Next, set the local logging level to DEBUG:
    log.setLevel(logging.DEBUG)
    print("# Local logging set to 'DEBUG'.")
    some_local_func()
    some_lib.some_lib_func()
    some_lib.some_special_func()

    # Set the library's logging level to DEBUG. We don't necessarily
    # have a reference to the library's logger, but we can use
    # logging_tree.printout() to see the name and then call logging.getLogger()
    # to create a local reference. Alternately, we could dig through the
    # library code.
    lib_logger_ref = logging.getLogger("some_lib")
    lib_logger_ref.setLevel(logging.DEBUG)

    # The library logger's default handler, NullHandler(), won't output anything.
    # We'll need to add a handler so we can see the output -- in this case we'll
    # also send it to stdout.
    lib_log_handler = logging.StreamHandler(stream=sys.stdout)
    lib_logger_ref.addHandler(lib_log_handler)
    lib_logger_ref.setLevel(logging.DEBUG)

    print("# Logging level set to DEBUG in both local program and library.")
    some_local_func()
    some_lib.some_lib_func()
    some_lib.some_special_func()

    print("# ACK! Setting the library's logging level to DEBUG output")
    print("# all debug messages from the library. (Use logging_tree.printout()")
    print("# To see why.)")
    print("# Let's change the library's logging level to INFO and")
    print("# only some_special_func()'s level to DEBUG so we only see")
    print("# debug message from some_special_func()")

    # Raise the logging level of the library and lower the logging level
    # of 'some_special_func()' so we see only some_special_func()'s
    # debugging-level messages.
    # Since it is a sub-logger of the library's main logger, we don't need
    # to create another handler; it will use the handler that belongs
    # to the library's main logger.
    lib_logger_ref.setLevel(logging.INFO)
    special_func_sub_logger_ref = logging.getLogger('some_lib.some_special_func')
    special_func_sub_logger_ref.setLevel(logging.DEBUG)

    print("# Logging level set to DEBUG in local program, INFO in library and")
    print("# DEBUG in some_lib.some_special_func()")
    some_local_func()
    some_lib.some_lib_func()
    some_lib.some_special_func()
Next, our library:
# some_lib.py

import os
import logging

# Directory, name of current module
current_path, modulename = os.path.split(os.path.abspath(__file__))
modulename = modulename.split('.')[0]  # Drop the '.py'

# Set up a module-local logger. In this case the logger will be
# named 'some_lib'
log = logging.getLogger(modulename)

# In libraries, always default to NullHandler so you don't get
# "No handler for X" messages.
# Let your library callers set up handlers and set logging levels
# in their main program so the main program can decide what level
# of messages they want to see from your library.
log.addHandler(logging.NullHandler())

def some_lib_func():
    log.info("Info: some_lib.some_lib_func()")
    log.debug("Debug: some_lib.some_lib_func()")

def some_special_func():
    """
    This func is special (not really). It just has a function/method-local
    logger in addition to the library/module-level logger.
    This allows us to create/control logging messages down to the
    function/method level.
    """
    # Our function/method-local logger
    func_log = logging.getLogger('%s.some_special_func' % modulename)

    # Using the module-level logger
    log.info("Info: some_special_func()")

    # Using the function/method-level logger, which can be controlled separately
    # from both the library-level logger and the main program's logger.
    func_log.debug("Debug: some_special_func(): This message can be controlled at the function/method level.")
Now let's run the program, along with the commentary track:
# Logging level set to default (i.e. 'WARNING').
Notice there's no output at the default level since we haven't generated any WARNING-level messages.
# Local logging set to 'INFO'.
Info: some_local_func()
The library's handlers default to NullHandler(), so we see only output from the main program. This is good.
# Local logging set to 'DEBUG'.
Info: some_local_func()
Debug: some_local_func()
Main program logger set to DEBUG. We still see no output from the library. This is good.
# Logging level set to DEBUG in both local program and library.
Info: some_local_func()
Debug: some_local_func()
Info: some_lib.some_lib_func()
Debug: some_lib.some_lib_func()
Info: some_special_func()
Debug: some_special_func(): This message can be controlled at the function/method level.
Oops.
# ACK! Setting the library's logging level to DEBUG output
# all debug messages from the library. (Use logging_tree.printout()
# To see why.)
# Let's change the library's logging level to INFO and
# only some_special_func()'s level to DEBUG so we only see
# debug message from some_special_func()
# Logging level set to DEBUG in local program, INFO in library and
# DEBUG in some_lib.some_special_func()
Info: some_local_func()
Debug: some_local_func()
Info: some_lib.some_lib_func()
Info: some_special_func()
Debug: some_special_func(): This message can be controlled at the function/method level.
It's also possible to get debug messages only from some_special_func(). Use logging_tree.printout() to figure out which logging levels to tweak to make that happen!
This combines @KirkStrauser's answer with @unutbu's. Kirk's has try/finally but doesn't disable, and unutbu's disables without try/finally. Just putting it here for posterity:
from functools import wraps
import logging

def suspend_logging(func):
    @wraps(func)
    def inner(*args, **kwargs):
        logging.disable(logging.FATAL)
        try:
            return func(*args, **kwargs)
        finally:
            logging.disable(logging.NOTSET)
    return inner
Example usage:
from logging import getLogger

logger = getLogger()

@suspend_logging
def example():
    logger.info("inside the function")

logger.info("before")
example()
logger.info("after")
If you are using logbook, silencing logs is very simple. Just use a NullHandler: https://logbook.readthedocs.io/en/stable/api/handlers.html
>>> logger.warn('TEST')
[12:28:17.298198] WARNING: TEST
>>> from logbook import NullHandler
>>> with NullHandler():
...     logger.warn('TEST')
