I'm new to Python logging and I can easily see how it is preferable to the home-brew solution I have come up with.
One question I can't seem to find an answer to: how do I squelch logging messages on a per-method/function basis?
My hypothetical module contains a single function. As I develop, the log calls are a great help:
logging.basicConfig(level=logging.DEBUG,
                    format='%(levelname)s: %(funcName)s(): %(message)s')
log = logging.getLogger()

def my_func1():
    # ...stuff...
    log.debug("Here's an interesting value: %r", some_value)
    log.info("Going great here!")
    # ...more stuff...
As I wrap up my work on 'my_func1' and start work on a second function, 'my_func2', the logging messages from 'my_func1' start going from "helpful" to "clutter".
Is there a single-line magic statement, such as 'logging.disabled_in_this_func()', that I can add to the top of 'my_func1' to disable all the logging calls within 'my_func1', but still leave logging calls in all other functions/methods unchanged?
Thanks
linux, Python 2.7.1
The trick is to create multiple loggers.
There are several aspects to this.
First. Don't use logging.basicConfig() at the beginning of a module. Use it only inside the if __name__ == "__main__" block:
if __name__ == "__main__":
    logging.basicConfig(...)
    main()
    logging.shutdown()
Second. Never get the "root" logger, except to set global preferences.
Third. Get individual named loggers for things which might be enabled or disabled.
log = logging.getLogger(__name__)
func1_log = logging.getLogger("{0}.{1}".format(__name__, "my_func1"))
Now you can set logging levels on each named logger.
log.setLevel(logging.INFO)
func1_log.setLevel(logging.ERROR)
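For illustration, here's a minimal sketch putting those pieces together (the function bodies and some_value are hypothetical):

import logging

log = logging.getLogger(__name__)
func1_log = logging.getLogger("{0}.{1}".format(__name__, "my_func1"))

def my_func1():
    some_value = 42  # hypothetical
    # Filtered out once func1_log's level is raised to ERROR:
    func1_log.debug("Here's an interesting value: %r", some_value)

def my_func2():
    # Still emitted: the module logger stays at INFO.
    log.info("Going great here!")

if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG,
                        format='%(levelname)s: %(funcName)s(): %(message)s')
    log.setLevel(logging.INFO)
    func1_log.setLevel(logging.ERROR)
    my_func1()
    my_func2()
    logging.shutdown()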
You could create a decorator that would temporarily suspend logging, à la:
from functools import wraps

def suspendlogging(func):
    @wraps(func)
    def inner(*args, **kwargs):
        previousloglevel = log.getEffectiveLevel()
        # (As written this only saves and restores the level; a log.setLevel(...)
        # call would be needed here to actually raise it -- see the combined
        # version in a later answer.)
        try:
            return func(*args, **kwargs)
        finally:
            log.setLevel(previousloglevel)
    return inner
@suspendlogging
def my_func1(): ...
Caveat: that would also suspend logging for any function called from my_func1 so be careful how you use it.
You could use a decorator:
import logging
import functools

def disable_logging(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.disable(logging.DEBUG)   # turn off DEBUG and below...
        result = func(*args, **kwargs)
        logging.disable(logging.NOTSET)  # ...then re-enable everything
        return result
    return wrapper
@disable_logging
def my_func1():
    ...
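A quick sanity check of that decorator (a hypothetical sketch, assuming the definitions above and a root logger configured at DEBUG):

import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s: %(message)s")

@disable_logging
def quiet_func():
    logging.debug("suppressed while quiet_func runs")

def noisy_func():
    logging.debug("still visible")

quiet_func()   # prints nothing: DEBUG (and below) is disabled inside
noisy_func()   # prints "DEBUG: still visible"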
It took me some time to learn how to implement the sub-loggers as suggested by S.Lott.
Given how tough it was to figure out how to setup logging when I was starting out, I figure it's long past time to share what I learned since then.
Please keep in mind that this isn't the only way to set up loggers/sub-loggers, nor is it the best. It's just the way I use to get the job done to fit my needs. I hope this is helpful to someone. Please feel free to comment/share/critique.
Let's assume we have a simple library we like to use. From the main program, we'd like to be able to control the logging messages we get from the library. Of course we're considerate library creators, so we configure our library to make this easy.
First, the main program:
# some_prog.py
import os
import sys
# Be sure to give Vinay Sajip thanks for his creation of the logging module
# and tireless efforts to answer our dumb questions about it. Thanks Vinay!
import logging
# This module will make understanding how Python logging works so much easier.
# Also great for debugging why your logging setup isn't working.
# Be sure to give its creator Brandon Rhodes some love. Thanks Brandon!
import logging_tree
# Example library
import some_lib
# Directory, name of current module
current_path, modulename = os.path.split(os.path.abspath(__file__))
modulename = modulename.split('.')[0] # Drop the '.py'
# Set up a module-local logger
# In this case, the logger will be named 'some_prog'
log = logging.getLogger(modulename)
# Add a Handler. The Handler tells the logger *where* to send the logging
# messages. We'll set up a simple handler that send the log messages
# to standard output (stdout)
stdout_handler = logging.StreamHandler(stream=sys.stdout)
log.addHandler(stdout_handler)
def some_local_func():
    log.info("Info: some_local_func()")
    log.debug("Debug: some_local_func()")
if __name__ == "__main__":
    # Our main program, here's where we tie together/enable the logging infra
    # we've added everywhere else.

    # Use logging_tree.printout() to see what the default log levels
    # are on our loggers. Make logging_tree.printout() calls at any place in
    # the code to see how the loggers are configured at any time.
    #
    # logging_tree.printout()

    print("# Logging level set to default (i.e. 'WARNING').")
    some_local_func()
    some_lib.some_lib_func()
    some_lib.some_special_func()

    # We have a reference to our local logger, so we can set/change its logging
    # level directly. Let's set it to INFO:
    log.setLevel(logging.INFO)
    print("# Local logging set to 'INFO'.")
    some_local_func()
    some_lib.some_lib_func()
    some_lib.some_special_func()

    # Next, set the local logging level to DEBUG:
    log.setLevel(logging.DEBUG)
    print("# Local logging set to 'DEBUG'.")
    some_local_func()
    some_lib.some_lib_func()
    some_lib.some_special_func()

    # Set the library's logging level to DEBUG. We don't necessarily
    # have a reference to the library's logger, but we can use
    # logging_tree.printout() to see the name and then call logging.getLogger()
    # to create a local reference. Alternately, we could dig through the
    # library code.
    lib_logger_ref = logging.getLogger("some_lib")
    lib_logger_ref.setLevel(logging.DEBUG)

    # The library logger's default handler, NullHandler(), won't output anything.
    # We'll need to add a handler so we can see the output -- in this case we'll
    # also send it to stdout.
    lib_log_handler = logging.StreamHandler(stream=sys.stdout)
    lib_logger_ref.addHandler(lib_log_handler)
    lib_logger_ref.setLevel(logging.DEBUG)

    print("# Logging level set to DEBUG in both local program and library.")
    some_local_func()
    some_lib.some_lib_func()
    some_lib.some_special_func()

    print("# ACK! Setting the library's logging level to DEBUG outputs")
    print("# all debug messages from the library. (Use logging_tree.printout()")
    print("# to see why.)")
    print("# Let's change the library's logging level to INFO and")
    print("# only some_special_func()'s level to DEBUG so we only see")
    print("# debug messages from some_special_func()")

    # Raise the logging level of the library and lower the logging level
    # of 'some_special_func()' so we see only some_special_func()'s
    # debugging-level messages.
    # Since it is a sub-logger of the library's main logger, we don't need
    # to create another handler; it will use the handler that belongs
    # to the library's main logger.
    lib_logger_ref.setLevel(logging.INFO)
    special_func_sub_logger_ref = logging.getLogger('some_lib.some_special_func')
    special_func_sub_logger_ref.setLevel(logging.DEBUG)

    print("# Logging level set to DEBUG in local program, INFO in library and")
    print("# DEBUG in some_lib.some_special_func()")
    some_local_func()
    some_lib.some_lib_func()
    some_lib.some_special_func()
Next, our library:
# some_lib.py
import os
import logging
# Directory, name of current module
current_path, modulename = os.path.split(os.path.abspath(__file__))
modulename = modulename.split('.')[0] # Drop the '.py'
# Set up a module-local logger. In this case the logger will be
# named 'some_lib'
log = logging.getLogger(modulename)
# In libraries, always default to NullHandler so you don't get
# "No handler for X" messages.
# Let your library callers set up handlers and set logging levels
# in their main program so the main program can decide what level
# of messages they want to see from your library.
log.addHandler(logging.NullHandler())
def some_lib_func():
    log.info("Info: some_lib.some_lib_func()")
    log.debug("Debug: some_lib.some_lib_func()")

def some_special_func():
    """
    This func is special (not really). It just has a function/method-local
    logger in addition to the library/module-level logger.
    This allows us to create/control logging messages down to the
    function/method level.
    """
    # Our function/method-local logger
    func_log = logging.getLogger('%s.some_special_func' % modulename)

    # Using the module-level logger
    log.info("Info: some_special_func()")

    # Using the function/method-level logger, which can be controlled separately
    # from both the library-level logger and the main program's logger.
    func_log.debug("Debug: some_special_func(): This message can be controlled at the function/method level.")
Now let's run the program, along with the commentary track:
# Logging level set to default (i.e. 'WARNING').
Notice there's no output at the default level since we haven't generated any WARNING-level messages.
# Local logging set to 'INFO'.
Info: some_local_func()
The library's handlers default to NullHandler(), so we see only output from the main program. This is good.
# Local logging set to 'DEBUG'.
Info: some_local_func()
Debug: some_local_func()
Main program logger set to DEBUG. We still see no output from the library. This is good.
# Logging level set to DEBUG in both local program and library.
Info: some_local_func()
Debug: some_local_func()
Info: some_lib.some_lib_func()
Debug: some_lib.some_lib_func()
Info: some_special_func()
Debug: some_special_func(): This message can be controlled at the function/method level.
Oops.
# ACK! Setting the library's logging level to DEBUG outputs
# all debug messages from the library. (Use logging_tree.printout()
# to see why.)
# Let's change the library's logging level to INFO and
# only some_special_func()'s level to DEBUG so we only see
# debug messages from some_special_func()
# Logging level set to DEBUG in local program, INFO in library and
# DEBUG in some_lib.some_special_func()
Info: some_local_func()
Debug: some_local_func()
Info: some_lib.some_lib_func()
Info: some_special_func()
Debug: some_special_func(): This message can be controlled at the function/method level.
It's also possible to get debug messages only from some_special_func(). Use logging_tree.printout() to figure out which logging levels to tweak to make that happen!
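For instance, one way to do it with the loggers from this example (a sketch, not the only approach): a level is checked at the logger a message is issued on, while propagation to ancestor handlers ignores the ancestors' levels, so raising the other loggers' levels silences everything except the function-local messages.

log.setLevel(logging.CRITICAL)             # silence the main program's own messages
lib_logger_ref.setLevel(logging.CRITICAL)  # silence some_lib's module-level messages
special_func_sub_logger_ref.setLevel(logging.DEBUG)
# some_special_func()'s debug messages still reach the handler attached to
# the 'some_lib' logger, because propagation ignores ancestor logger levels.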
This combines @KirkStrauser's answer with @unutbu's. Kirk's has try/finally but doesn't disable, and unutbu's disables without try/finally. Just putting it here for posterity:
from functools import wraps
import logging

def suspend_logging(func):
    @wraps(func)
    def inner(*args, **kwargs):
        logging.disable(logging.FATAL)
        try:
            return func(*args, **kwargs)
        finally:
            logging.disable(logging.NOTSET)
    return inner
Example usage:
from logging import getLogger

logger = getLogger()

@suspend_logging
def example():
    logger.info("inside the function")

logger.info("before")
example()
logger.info("after")
If you are using logbook, silencing logs is very simple. Just use a NullHandler - https://logbook.readthedocs.io/en/stable/api/handlers.html
>>> logger.warn('TEST')
[12:28:17.298198] WARNING: TEST
>>> from logbook import NullHandler
>>> with NullHandler():
... logger.warn('TEST')
Related: how can I redirect logging within a context to a file, while also redirecting logging from other modules (3rd party, e.g. requests, numpy, etc.) if they are called in that scope?
The use case is that we want to integrate algorithms from another team. We need to output the algorithm's logs to an additional file so we can give it to them for debugging purposes.
For example:
import some_func_3rd_party

some_func_3rd_party()  # logs will only be written to predefined handlers
logger.info("write only to predefined handlers")

with log2file("somefile.txt"):
    logger.info("write to file and predefined handlers")
    some_func_3rd_party()  # logs will be written to predefined handlers and to file

logger.info("write only to predefined handlers")
At the moment, I don't see a way of achieving what you want, without accessing and modifying the root logger.
If you wanted a more targeted approach, you would need to know how the 3rd party library's logger is configured.
Please have a look at the answers to "Log all requests from the python-requests module" to better understand what I mean.
Here is one approach that might work in your particular case:
import contextlib
import logging
import requests
import sys

@contextlib.contextmanager
def special_logger(app_logger, log_file, log_level=logging.DEBUG):
    # Get a copy of the handlers added to app_logger (a copy, so the appends
    # below don't also mutate app_logger's own handler list).
    handlers = list(app_logger.handlers)

    # Add handlers specific to this context.
    handlers.append(logging.FileHandler(filename=log_file))
    handlers.append(logging.StreamHandler(stream=sys.stderr))

    # Get the root logger, set the logging level,
    # and add all the handlers above.
    root_logger = logging.getLogger()
    root_logger_level = root_logger.level
    root_logger.setLevel(log_level)
    for handler in handlers:
        root_logger.addHandler(handler)

    # Yield the modified root logger.
    yield root_logger

    # Clean up handlers.
    for handler in handlers:
        root_logger.removeHandler(handler)

    # Reset log level to what it was.
    root_logger.setLevel(root_logger_level)
# Get logger for this module.
app_logger = logging.getLogger('my_app')

# Add a handler logging to stdout.
sh = logging.StreamHandler(stream=sys.stdout)
app_logger.addHandler(sh)
app_logger.setLevel(logging.DEBUG)

app_logger.info("Logs go only to stdout.")
# 'requests' logs go to stdout only but won't be emitted in this case
# because the root logger level is not set to DEBUG.
requests.get("http://www.google.com")

# Use the new context with the modified root logger.
with special_logger(app_logger, 'my_app.log') as spec_logger:
    # The output will appear twice in the console because there is
    # one handler logging to stdout, and one to stderr.
    # This is for demonstration purposes only.
    spec_logger.info("Logs go to stdout, stderr, and file.")
    # 'requests' logs go to stdout, stderr, and file.
    requests.get("http://www.google.com")

app_logger.info("Logs go only to stdout.")
# 'requests' logs go to stdout only but won't be emitted in this case
# because the root logger level is not set to DEBUG.
requests.get("http://www.google.com")
I'm trying to reduce the amount of logging that the napalm library sends to syslog, but also allow INFO-level logs to be sent from other parts of the code. I set up logging.basicConfig at INFO, but then I'd like the napalm function to log at WARNING and above.
So I have code like this:
from napalm import get_network_driver
import logging
import getpass

logging.basicConfig(
    filename="/var/log/myscripts/script.log", level=logging.INFO, format="%(asctime)s %(message)s")

def napalm(device):
    logging.getLogger().setLevel(logging.WARNING)
    username = getpass.getuser()
    driver = get_network_driver("junos")
    router = driver(str(device), username, "", password="", timeout=120)
    router.open()
    return router

router = napalm('myrouter')
config = "hostname foobar"
router.load_merge_candidate(config=config)
show = router.compare_config()
logging.info(show)
The issue is that the logging.info output never makes it to the log file. If I do logging.warning(show) it does, but I'd like this to stay at info. The reason I want the function at WARNING is that it generates so much other logging at the info level that it is just noise, so I'm trying to cut down on that.
A nice trick from the book Effective Python. See if it helps your situation.
import logging
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger()

def napalm(count):
    for x in range(count):
        logger.info('useless log line')

@contextmanager
def debug_logging(level):
    logger = logging.getLogger()
    old_level = logger.getEffectiveLevel()
    logger.setLevel(level)
    try:
        yield
    finally:
        logger.setLevel(old_level)

napalm(5)                             # logs 5 lines
with debug_logging(logging.WARNING):
    napalm(5)                         # suppressed while the level is raised
napalm(5)                             # logs 5 lines again
By calling logging.getLogger() without a parameter you are currently retrieving the root logger, and overriding the level for it also affects all other loggers.
You should instead retrieve the library's logger and override level only for that specific one.
The napalm library executes the following in its __init__.py:
logger = logging.getLogger("napalm")
i.e. the library logger's name is "napalm".
You should thus be able to override the level of that specific logger by putting the following line in your script:
logging.getLogger("napalm").setLevel(logging.WARNING)
Generic example:
import logging
logging.basicConfig(level=logging.DEBUG, format="%(levelname)s: %(message)s")
A = logging.getLogger("A")
B = logging.getLogger("B")
A.debug("#1 from A gets printed")
B.debug("#1 from B gets printed")
logging.getLogger("A").setLevel(logging.CRITICAL)
A.debug("#2 from A doesn't get printed") # because we've increased the level
B.debug("#2 from B gets printed")
Output:
DEBUG: #1 from A gets printed
DEBUG: #1 from B gets printed
DEBUG: #2 from B gets printed
Edit:
Since this didn't work well for you, it's probably because there are a number of other loggers in this library:
$ grep -R 'getLogger' .
napalm/junos/junos.py:log = logging.getLogger(__file__)
napalm/base/helpers.py:logger = logging.getLogger(__name__)
napalm/base/clitools/cl_napalm.py:logger = logging.getLogger("napalm")
napalm/base/clitools/cl_napalm_validate.py:logger = logging.getLogger("cl_napalm_validate.py")
napalm/base/clitools/cl_napalm_test.py:logger = logging.getLogger("cl_napalm_test.py")
napalm/base/clitools/cl_napalm_configure.py:logger = logging.getLogger("cl-napalm-config.py")
napalm/__init__.py:logger = logging.getLogger("napalm")
napalm/pyIOSXR/iosxr.py:logger = logging.getLogger(__name__)
napalm/iosxr/iosxr.py:logger = logging.getLogger(__name__)
I would then resort to something like this (related: How to list all existing loggers using python.logging module):
# increase the level for all existing loggers
for name in logging.root.manager.loggerDict:
    # use getLogger() because loggerDict may contain PlaceHolder objects,
    # which don't have a setLevel() method
    logging.getLogger(name).setLevel(logging.WARNING)
or perhaps it will suffice if you just print the various loggers, identify the most noisy ones, and silence them one by one.
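To see which loggers exist at runtime before deciding what to silence, a quick diagnostic sketch:

import logging

# Print every logger name the logging module knows about; noisy ones
# can then be raised to WARNING individually.
for name in sorted(logging.root.manager.loggerDict):
    print(name)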
Overview
I want to use httpimport to load a logging module that is common to several scripts. httpimport generates log messages of its own, which I do not know how to silence.
In other cases such as this one, I would have used
logging.getLogger('httpimport').setLevel(logging.ERROR)
but it did not work.
Details
The following code is a stub of the "common logging code" mentioned above:
# toconsole.py
import logging
import os
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s %(message)s')
handler_console = logging.StreamHandler()
level = logging.DEBUG if 'DEV' in os.environ else logging.INFO
handler_console.setLevel(level)
handler_console.setFormatter(formatter)
log.addHandler(handler_console)
# disable httpimport logging except for errors+
logging.getLogger('httpimport').setLevel(logging.ERROR)
A simple usage such as
import httpimport
httpimport.INSECURE = True
with httpimport.remote_repo(['githublogging'], 'http://localhost:8000/'):
    from toconsole import log

log.info('yay!')
gives the following output
[!] Using non HTTPS URLs ('http://localhost:8000//') can be a security hazard!
2019-08-25 13:56:48,671 yay!
yay!
The second (bare) yay! must be coming from httpimport, namely from its logging setup.
How can I disable the logging for such a module, or better - raise its level so that only errors+ are logged?
Note: this question was initially asked in the Issues section of the GitHub repository for httpimport, but the author did not know how to fix it either.
Author of httpimport here.
I totally forgot I was using the basicConfig logger thing.
It is fixed in master right now (0.7.2) and will be included in the next PyPI release:
https://github.com/operatorequals/httpimport/commit/ff2896c8f666c3f16b0f27716c732d68be018ef7
The reason why this is happening is that when you do import httpimport, it performs the initial configuration of the logging machinery with logging.basicConfig at import time. This means the root logger already has a StreamHandler attached to it. Because of this, and the fact that all loggers inherit from the root logger, when you do log.info('yay') it not only uses your Handler and Formatter, it also propagates all the way to the root logger, which also emits the message.
Remember that whoever calls basicConfig first when an application starts sets up the default configuration for the root logger, which, in turn, is inherited by all loggers, unless otherwise specified.
If you have a complex logging configuration, you need to ensure you apply it before you do any third-party imports that might call basicConfig. basicConfig is idempotent, meaning the first call seals the deal and subsequent calls have no effect.
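A minimal sketch of that first-call-wins behavior:

import logging

logging.basicConfig(format="first: %(message)s")
logging.basicConfig(format="second: %(message)s")  # no effect: root already has a handler

logging.warning("hello")  # prints "first: hello"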
Solutions
You could do log.propagate = False and you will see that the 2nd yay will not show.
You could attach the Formatter directly to the already existent root Handler by doing something like this (without adding another Handler yourself)
root = logging.getLogger('')
formatter = logging.Formatter('%(asctime)s %(message)s')
root_handler = root.handlers[0]
root_handler.setFormatter(formatter)
You could make a basicConfig call when you initialize your application (with your initial Formatters, Handlers, etc.), which elegantly attaches everything to the root logger. Then you would only do something like logger = logging.getLogger(__name__) and logger.info('some message'), and it would work the way you'd expect, because the message propagates all the way to the root logger, which already has your configuration.
You could remove the initial Handler that's present on the root logger by doing something like
root = logging.getLogger('')
root.handlers = []
... and many more solutions, but you get the idea.
Also do note that logging.getLogger('httpimport').setLevel(logging.ERROR) works perfectly fine. No messages below logging.ERROR are logged by that logger; it's just that the problem wasn't there.
If, however, you want to completely disable a logger, you can just do logger.disabled = True (note, again, that the problem wasn't with the httpimport logger, as mentioned above).
One example demonstrated
Change your toconsole.py to this and you won't see the second yay.
import logging
import os
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
root_logger = logging.getLogger('')
root_handler = root_logger.handlers[0]
formatter = logging.Formatter('%(asctime)s %(message)s')
root_handler.setFormatter(formatter)
# or you could just keep your old code and just add log.propagate = False
# or any of the above solutions and it would work
logging.getLogger('httpimport').setLevel(logging.ERROR)
I have a Python project with multiple modules that use logging. I perform initialization (reading the log configuration file, creating the root logger, enabling/disabling logging) in every module before logging any messages. Is it possible to perform this initialization only once, in one place (like in one class, maybe called Log), so that the same settings are reused by logging all over the project?
I am looking for a proper solution where the configuration file is read only once and a logger is configured only once, in a class constructor, or perhaps in an initializer (__init__.py). I don't want to do this on the client side (in __main__). I want to do this configuration only once, in a separate class, and call this class from other modules when logging is required.
Set it up using the @singleton pattern:
# log.py
import logging.config

import yaml
from singleton_decorator import singleton

@singleton
class Log:
    def __init__(self):
        configFile = 'path_to_my_log_config_file/logging.yaml'
        with open(configFile) as f:
            config_dict = yaml.safe_load(f)
        logging.config.dictConfig(config_dict)
        self.logger = logging.getLogger('root')

    def info(self, message):
        self.logger.info(message)
# module1.py
from log import Log

myLog = Log()
myLog.info('Message logged successfully')

# module2.py
from log import Log

myLog = Log()  # config read only once; only one object is created
myLog.info('Message logged successfully')
From the documentation,
Note that Loggers should NEVER be instantiated directly, but always through the module-level function logging.getLogger(name). Multiple calls to getLogger() with the same name will always return a reference to the same Logger object.
You can initialize and configure logging in your main entry point. See Logging from multiple modules in this Howto (Python 2.7).
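A minimal sketch of that layout (module names are hypothetical):

# mylib.py
import logging
log = logging.getLogger(__name__)  # no configuration here, just a named logger

def do_work():
    log.info("configured once in the entry point, used everywhere")

# main.py
import logging
import mylib

logging.basicConfig(level=logging.INFO, format="%(name)s: %(message)s")
mylib.do_work()  # prints "mylib: configured once in the entry point, used everywhere"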
I had the same problem, and since I don't have any classes or anything, I solved it by just using a global variable:
utils.py:
import logging

existing_loggers = {}

def get_logger(name='my_logger', level=logging.INFO):
    if name in existing_loggers:
        return existing_loggers[name]
    # Do the rest of the initialization: handlers, formatters, etc.
    logger = logging.getLogger(name)
    logger.setLevel(level)
    existing_loggers[name] = logger
    return logger
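Usage from any module then looks like this (a sketch; repeated calls return the cached instance):

from utils import get_logger

log = get_logger()
log.info("hello")
assert get_logger() is log  # the same object comes back every time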
I want to find out how logging should be organised given that I write many scripts and modules that should feature similar logging. I want to be able to set the logging appearance and the logging level from the script and I want this to propagate the appearance and level to my modules and only my modules.
An example script could be something like the following:
import logging
import technicolor
import example_2_module

def main():
    verbose = True
    global log
    log = logging.getLogger(__name__)
    logging.root.addHandler(technicolor.ColorisingStreamHandler())
    # logging level
    if verbose:
        logging.root.setLevel(logging.DEBUG)
    else:
        logging.root.setLevel(logging.INFO)
    log.info("example INFO message in main")
    log.debug("example DEBUG message in main")
    example_2_module.function1()

if __name__ == '__main__':
    main()
An example module could be something like the following:
import logging
log = logging.getLogger(__name__)

def function1():
    print("printout of function 1")
    log.info("example INFO message in module")
    log.debug("example DEBUG message in module")
You can see that in the module there is minimal infrastructure written to import the logging appearance and level set in the script. This has worked fine, but I've encountered a problem: other modules that also use logging. This can result in output being printed twice, and very detailed debug logging from modules that are not my own.
How should I code this such that the logging appearance/level is set from the script but then used only by my modules?
You need to set the propagate attribute to False so that the log message does not propagate to ancestor loggers. Here is the documentation for Logger.propagate -- it defaults to True. So just:
import logging
log = logging.getLogger(__name__)
log.propagate = False