What's the worst level log that I just logged? - python

I've added logs to a Python 2 application using the logging module.
Now I want to add a closing statement at the end, dependent on the worst thing logged.
If the worst thing logged had the INFO level or lower, print "SUCCESS!"
If the worst thing logged had the WARNING level, write "SUCCESS!, with warnings. Please check the logs"
If the worst thing logged had the ERROR level, write "FAILURE".
Is there a way to get this information from the logger? Some built-in method I'm missing, like logging.getWorseLevelLogSoFar?
My current plan is to replace all log calls (logging.info et al) with calls to wrapper functions in a class that also keeps track of that information.
I also considered somehow releasing the log file, reading and parsing it, then appending to it. This seems worse than my current plan.
Are there other options? This doesn't seem like a unique problem.
I'm using the root logger and would prefer to continue using it, but can change to a named logger if that's necessary for the solution.

As you said yourself, I think writing a wrapper function would be the neatest and fastest approach. The catch is that you need a global variable if you're not working within a class:
worst_log_lvl = logging.NOTSET

def write_log(logger, lvl, msg):
    global worst_log_lvl
    logger.log(lvl, msg)
    if lvl > worst_log_lvl:
        worst_log_lvl = lvl
Alternatively, make worst_log_lvl a member of a custom class that emulates the signature of logging.Logger and is used in place of the actual logger:
class CustomLoggerWrapper(object):
    def __init__(self):
        # setup of your custom logger
        self.worst_log_lvl = logging.NOTSET

    def debug(self):
        pass

    # repeat for other functions like info() etc.
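For completeness, here is a minimal sketch of what that wrapper idea might look like once it delegates to a real logger. The class and method names are illustrative, and the delegate list is abridged:

```python
import logging

class TrackingLoggerWrapper(object):
    """Delegates to a real logger while recording the worst level logged."""

    def __init__(self, logger=None):
        self.logger = logger if logger is not None else logging.getLogger()
        self.worst_log_lvl = logging.NOTSET

    def log(self, lvl, msg, *args, **kwargs):
        # Track the worst level before delegating to the wrapped logger.
        if lvl > self.worst_log_lvl:
            self.worst_log_lvl = lvl
        self.logger.log(lvl, msg, *args, **kwargs)

    # One thin delegate per level; repeat for debug() and critical() too.
    def info(self, msg, *args, **kwargs):
        self.log(logging.INFO, msg, *args, **kwargs)

    def warning(self, msg, *args, **kwargs):
        self.log(logging.WARNING, msg, *args, **kwargs)

    def error(self, msg, *args, **kwargs):
        self.log(logging.ERROR, msg, *args, **kwargs)
```

At the end of the run, worst_log_lvl can be compared against logging.WARNING and logging.ERROR to decide which closing statement to print.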

As you're only using the root logger, you could attach a filter to it which keeps track of the level:
import argparse
import logging
import random

LEVELS = ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']

class LevelTrackingFilter(logging.Filter):
    def __init__(self):
        self.level = logging.NOTSET

    def filter(self, record):
        self.level = max(self.level, record.levelno)
        return True

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('maxlevel', metavar='MAXLEVEL', default='WARNING',
                        choices=LEVELS, nargs='?',
                        help='Set maximum level to log')
    options = parser.parse_args()
    maxlevel = getattr(logging, options.maxlevel)

    logger = logging.getLogger()
    logger.addHandler(logging.NullHandler())  # needs Python 2.7
    filt = LevelTrackingFilter()
    logger.addFilter(filt)

    for i in range(100):
        level = getattr(logging, random.choice(LEVELS))
        if level > maxlevel:
            continue
        logger.log(level, 'message')

    if filt.level <= logging.INFO:
        print('SUCCESS!')
    elif filt.level == logging.WARNING:
        print('SUCCESS, with warnings. Please check the logs.')
    else:
        print('FAILURE')

if __name__ == '__main__':
    main()

There's a "good" way to get this done automatically by using context filters.
TL;DR I've built a package that has the following contextfilter baked in. You can install it with pip install ofunctions.logger_utils then use it with:
from ofunctions import logger_utils
logger = logger_utils.logger_get_logger(log_file='somepath', console=True)
logger.error("Oh no!")
logger.info("Anyway...")
# Now get the worst called loglevel (result is equivalent to logging.ERROR level in this case)
worst_level = logger_utils.get_worst_logger_level(logger)
Here's the long solution which explains what happens under the hood:
Let's build a context filter class that can be injected into logging:
import logging
import sys

class ContextFilterWorstLevel(logging.Filter):
    """
    This class records the worst loglevel that was called by the logger.
    Allows to change default logging output or record events.
    """

    def __init__(self):
        self._worst_level = logging.INFO
        if sys.version_info[0] < 3:
            super(ContextFilterWorstLevel, self).__init__()
        else:
            super().__init__()

    @property
    def worst_level(self):
        """
        Returns the worst log level called.
        """
        return self._worst_level

    @worst_level.setter
    def worst_level(self, value):
        # type: (int) -> None
        if isinstance(value, int):
            self._worst_level = value

    def filter(self, record):
        # type: (logging.LogRecord) -> bool
        """
        A filter can change the default log output.
        This one simply records the worst log level called.
        """
        # Examples
        # record.msg = f'{record.msg}'.encode('ascii', errors='backslashreplace')
        # When using this filter, something can be added to logging.Formatter like '%(something)s'
        # record.something = 'value'
        if record.levelno > self.worst_level:
            self.worst_level = record.levelno
        return True
Now inject this filter into your logger instance:
logger = logging.getLogger()
logger.addFilter(ContextFilterWorstLevel())
logger.warning("One does not simply inject a filter into logging")
Now we can iterate over the attached filters and extract the worst called loglevel like this:
for flt in logger.filters:
    if isinstance(flt, ContextFilterWorstLevel):
        print(flt.worst_level)

Related

python custom log level naming convention [duplicate]

I'd like to have loglevel TRACE (5) for my application, as I don't think that debug() is sufficient. Additionally log(5, msg) isn't what I want. How can I add a custom loglevel to a Python logger?
I've a mylogger.py with the following content:
import logging

@property
def log(obj):
    myLogger = logging.getLogger(obj.__class__.__name__)
    return myLogger
In my code I use it in the following way:
class ExampleClass(object):
    from mylogger import log

    def __init__(self):
        '''The constructor with the logger'''
        self.log.debug("Init runs")
Now I'd like to call self.log.trace("foo bar")
Edit (Dec 8th 2016): I changed the accepted answer to pfa's which is, IMHO, an excellent solution based on the very good proposal from Eric S.
To people reading in 2022 and beyond: you should probably check out the currently next-highest-rated answer here: https://stackoverflow.com/a/35804945/1691778
My original answer is below.
--
@Eric S.
Eric S.'s answer is excellent, but I learned by experimentation that this will always cause messages logged at the new level to be printed, regardless of what the log level is set to. So if you define a new level with number 9 and then call setLevel(50), the lower-level messages will erroneously be printed.
To prevent that from happening, you need another line inside the "debugv" function to check if the logging level in question is actually enabled.
Fixed example that checks if the logging level is enabled:
import logging

DEBUG_LEVELV_NUM = 9
logging.addLevelName(DEBUG_LEVELV_NUM, "DEBUGV")

def debugv(self, message, *args, **kws):
    if self.isEnabledFor(DEBUG_LEVELV_NUM):
        # Yes, logger takes its '*args' as 'args'.
        self._log(DEBUG_LEVELV_NUM, message, args, **kws)

logging.Logger.debugv = debugv
If you look at the code for class Logger in logging.__init__.py for Python 2.7, this is what all the standard log functions do (.critical, .debug, etc.).
I apparently can't post replies to others' answers for lack of reputation... hopefully Eric will update his post if he sees this. =)
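To see the fix in action, here is a small self-contained check; the list-collecting handler and logger name are just for this demonstration. With the isEnabledFor guard in place, a DEBUGV message is dropped while the level is raised to CRITICAL and emitted once the level is lowered again:

```python
import logging

DEBUG_LEVELV_NUM = 9
logging.addLevelName(DEBUG_LEVELV_NUM, "DEBUGV")

def debugv(self, message, *args, **kws):
    # The isEnabledFor guard is what prevents DEBUGV records from leaking
    # through when the logger's level is set above 9.
    if self.isEnabledFor(DEBUG_LEVELV_NUM):
        self._log(DEBUG_LEVELV_NUM, message, args, **kws)

logging.Logger.debugv = debugv

# Collect emitted messages so the effect of setLevel() is visible.
class ListHandler(logging.Handler):
    def __init__(self):
        logging.Handler.__init__(self)
        self.messages = []
    def emit(self, record):
        self.messages.append(record.getMessage())

logger = logging.getLogger("debugv_demo")
handler = ListHandler()
logger.addHandler(handler)

logger.setLevel(logging.CRITICAL)
logger.debugv("suppressed")      # guard is False, record never created
logger.setLevel(DEBUG_LEVELV_NUM)
logger.debugv("emitted")

print(handler.messages)          # ['emitted']
```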
Combining all of the existing answers with a bunch of usage experience, I think that I have come up with a list of all the things that need to be done to ensure completely seamless usage of the new level. The steps below assume that you are adding a new level TRACE with value logging.DEBUG - 5 == 5:
logging.addLevelName(logging.DEBUG - 5, 'TRACE') needs to be invoked to get the new level registered internally so that it can be referenced by name.
The new level needs to be added as an attribute to logging itself for consistency: logging.TRACE = logging.DEBUG - 5.
A method called trace needs to be added to the logging module. It should behave just like debug, info, etc.
A method called trace needs to be added to the currently configured logger class. Since this is not 100% guaranteed to be logging.Logger, use logging.getLoggerClass() instead.
All the steps are illustrated in the method below:
def addLoggingLevel(levelName, levelNum, methodName=None):
    """
    Comprehensively adds a new logging level to the `logging` module and the
    currently configured logging class.

    `levelName` becomes an attribute of the `logging` module with the value
    `levelNum`. `methodName` becomes a convenience method for both `logging`
    itself and the class returned by `logging.getLoggerClass()` (usually just
    `logging.Logger`). If `methodName` is not specified, `levelName.lower()` is
    used.

    To avoid accidental clobberings of existing attributes, this method will
    raise an `AttributeError` if the level name is already an attribute of the
    `logging` module or if the method name is already present.

    Example
    -------
    >>> addLoggingLevel('TRACE', logging.DEBUG - 5)
    >>> logging.getLogger(__name__).setLevel("TRACE")
    >>> logging.getLogger(__name__).trace('that worked')
    >>> logging.trace('so did this')
    >>> logging.TRACE
    5

    """
    if not methodName:
        methodName = levelName.lower()

    if hasattr(logging, levelName):
        raise AttributeError('{} already defined in logging module'.format(levelName))
    if hasattr(logging, methodName):
        raise AttributeError('{} already defined in logging module'.format(methodName))
    if hasattr(logging.getLoggerClass(), methodName):
        raise AttributeError('{} already defined in logger class'.format(methodName))

    # This method was inspired by the answers to Stack Overflow post
    # http://stackoverflow.com/q/2183233/2988730, especially
    # http://stackoverflow.com/a/13638084/2988730
    def logForLevel(self, message, *args, **kwargs):
        if self.isEnabledFor(levelNum):
            self._log(levelNum, message, args, **kwargs)
    def logToRoot(message, *args, **kwargs):
        logging.log(levelNum, message, *args, **kwargs)

    logging.addLevelName(levelNum, levelName)
    setattr(logging, levelName, levelNum)
    setattr(logging.getLoggerClass(), methodName, logForLevel)
    setattr(logging, methodName, logToRoot)
You can find an even more detailed implementation in the utility library I maintain, haggis. The function haggis.logs.add_logging_level is a more production-ready implementation of this answer.
I took the "avoid seeing lambda" answer and had to modify where log_at_my_log_level was being added. I too saw the problem that Paul did: "I don't think this works. Don't you need logger as the first arg in log_at_my_log_level?" This worked for me:
import logging

DEBUG_LEVELV_NUM = 9
logging.addLevelName(DEBUG_LEVELV_NUM, "DEBUGV")

def debugv(self, message, *args, **kws):
    # Yes, logger takes its '*args' as 'args'.
    self._log(DEBUG_LEVELV_NUM, message, args, **kws)

logging.Logger.debugv = debugv
This question is rather old, but I just dealt with the same topic and found a way similar to those already mentioned, which appears a little cleaner to me. This was tested on 3.4, so I'm not sure whether the methods used exist in older versions:
from logging import getLoggerClass, addLevelName, setLoggerClass, NOTSET

VERBOSE = 5

class MyLogger(getLoggerClass()):
    def __init__(self, name, level=NOTSET):
        super().__init__(name, level)
        addLevelName(VERBOSE, "VERBOSE")

    def verbose(self, msg, *args, **kwargs):
        if self.isEnabledFor(VERBOSE):
            self._log(VERBOSE, msg, args, **kwargs)

setLoggerClass(MyLogger)
While we already have plenty of correct answers, the following is in my opinion more pythonic:
import logging
from functools import partial, partialmethod
logging.TRACE = 5
logging.addLevelName(logging.TRACE, 'TRACE')
logging.Logger.trace = partialmethod(logging.Logger.log, logging.TRACE)
logging.trace = partial(logging.log, logging.TRACE)
If you want to use mypy on your code, it is recommended to add # type: ignore to suppress warnings from adding the attribute.
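A quick usage sketch of the partialmethod approach; the snippet above is repeated so this example is self-contained, and the logger name is arbitrary:

```python
import logging
from functools import partial, partialmethod

logging.TRACE = 5
logging.addLevelName(logging.TRACE, 'TRACE')
# partialmethod is a descriptor, so it binds like a normal method even
# when assigned onto the class after creation.
logging.Logger.trace = partialmethod(logging.Logger.log, logging.TRACE)
logging.trace = partial(logging.log, logging.TRACE)

logger = logging.getLogger('trace_partial_demo')
logger.setLevel(logging.TRACE)
logger.trace('fine-grained message %s', 'with args')  # routed through Logger.log
```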
Who started the bad practice of using internal methods (self._log), and why is every answer based on that?! The pythonic solution would be to use self.log instead, so you don't have to mess with any internal stuff:
import logging

SUBDEBUG = 5
logging.addLevelName(SUBDEBUG, 'SUBDEBUG')

def subdebug(self, message, *args, **kws):
    self.log(SUBDEBUG, message, *args, **kws)
logging.Logger.subdebug = subdebug

logging.basicConfig()
l = logging.getLogger()
l.setLevel(SUBDEBUG)
l.subdebug('test')
l.setLevel(logging.DEBUG)
l.subdebug('test')
I think you'll have to subclass the Logger class and add a method called trace which basically calls Logger.log with a level lower than DEBUG. I haven't tried this but this is what the docs indicate.
I find it easier to create a new attribute for the logger object that passes the log() function. I think the logger module provides the addLevelName() and the log() for this very reason. Thus no subclasses or new method needed.
import logging

@property
def log(obj):
    logging.addLevelName(5, 'TRACE')
    myLogger = logging.getLogger(obj.__class__.__name__)
    setattr(myLogger, 'trace', lambda *args: myLogger.log(5, *args))
    return myLogger
now
mylogger.trace('This is a trace message')
should work as expected.
Tips for creating a custom logger:
Do not use _log, use log (you don't have to check isEnabledFor)
The logging module should be the one creating instances of the custom logger, since it does some magic in getLogger, so you will need to set the class via setLoggerClass
You do not need to define __init__ for the logger class if you are not storing anything
# Lower than DEBUG, which is 10
TRACE = 5

class MyLogger(logging.Logger):
    def trace(self, msg, *args, **kwargs):
        self.log(TRACE, msg, *args, **kwargs)
When calling this logger use setLoggerClass(MyLogger) to make this the default logger from getLogger
logging.setLoggerClass(MyLogger)
log = logging.getLogger(__name__)
# ...
log.trace("something specific")
You will need to call setFormatter, setHandler, and setLevel(TRACE) on the handler and on the logger itself to actually see this low-level trace.
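Putting those tips together, a minimal end-to-end sketch might look like this; the format string and logger name are illustrative:

```python
import logging

TRACE = 5  # lower than DEBUG (10)
logging.addLevelName(TRACE, "TRACE")

class MyLogger(logging.Logger):
    def trace(self, msg, *args, **kwargs):
        # Use the public log() method, no isEnabledFor check needed.
        self.log(TRACE, msg, *args, **kwargs)

# Register the class before the logger is created by getLogger.
logging.setLoggerClass(MyLogger)

log = logging.getLogger("trace_setup_demo")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))
handler.setLevel(TRACE)  # the handler must admit records below DEBUG
log.addHandler(handler)
log.setLevel(TRACE)      # and so must the logger itself

log.trace("something specific")
```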
This worked for me:
import logging

logging.basicConfig(
    format=' %(levelname)-8.8s %(funcName)s: %(message)s',
)
logging.NOTE = 32  # positive yet important
logging.addLevelName(logging.NOTE, 'NOTE')       # new level
logging.addLevelName(logging.CRITICAL, 'FATAL')  # rename existing

log = logging.getLogger(__name__)
log.note = lambda msg, *args: log._log(logging.NOTE, msg, args)
log.note('school\'s out for summer! %s', 'dude')
log.fatal('file not found.')
The lambda/funcName issue is fixed with logger._log as @marqueed pointed out. I think using lambda looks a bit cleaner, but the drawback is that it can't take keyword arguments. I've never used that myself, so no biggie.
NOTE setup: school's out for summer! dude
FATAL setup: file not found.
As alternative to adding an extra method to the Logger class I would recommend using the Logger.log(level, msg) method.
import logging

TRACE = 5
logging.addLevelName(TRACE, 'TRACE')

FORMAT = '%(levelname)s:%(name)s:%(lineno)d:%(message)s'
logging.basicConfig(format=FORMAT)

l = logging.getLogger()
l.setLevel(TRACE)
l.log(TRACE, 'trace message')
l.setLevel(logging.DEBUG)
l.log(TRACE, 'disabled trace message')
In my experience, this is the full solution to the OP's problem... to avoid seeing "lambda" as the function in which the message is emitted, go deeper:
MY_LEVEL_NUM = 25
logging.addLevelName(MY_LEVEL_NUM, "MY_LEVEL_NAME")

def log_at_my_log_level(self, message, *args, **kws):
    # Yes, logger takes its '*args' as 'args'.
    self._log(MY_LEVEL_NUM, message, args, **kws)
logger.log_at_my_log_level = log_at_my_log_level
I've never tried working with a standalone logger class, but I think the basic idea is the same (use _log).
Addition to Mad Physicist's example, to get the file name and line number correct:
def logToRoot(message, *args, **kwargs):
    if logging.root.isEnabledFor(levelNum):
        logging.root._log(levelNum, message, args, **kwargs)
Based on the pinned answer, I wrote a little method which automatically creates new logging levels:
def set_custom_logging_levels(config={}):
    """
    Assign custom levels for logging
        config: a dict, like
        {
            'EVENT_NAME': EVENT_LEVEL_NUM,
        }
    EVENT_LEVEL_NUM can't be one the logging module already has:
        logging.DEBUG    = 10
        logging.INFO     = 20
        logging.WARNING  = 30
        logging.ERROR    = 40
        logging.CRITICAL = 50
    """
    assert isinstance(config, dict), "Configuration must be a dict"

    def get_level_func(level_name, level_num):
        def _blank(self, message, *args, **kws):
            if self.isEnabledFor(level_num):
                # Yes, logger takes its '*args' as 'args'.
                self._log(level_num, message, args, **kws)
        _blank.__name__ = level_name.lower()
        return _blank

    for level_name, level_num in config.items():
        logging.addLevelName(level_num, level_name.upper())
        setattr(logging.Logger, level_name.lower(), get_level_func(level_name, level_num))
A config may look something like this:
new_log_levels = {
    # level_num is in the logging.INFO range, that's why it's 21, 22, etc.
    "FOO": 21,
    "BAR": 22,
}
Someone might wanna do root-level custom logging and avoid the usage of logging.getLogger(''):
import logging
from datetime import datetime

c_now = datetime.now()
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] :: %(message)s",
    handlers=[
        logging.StreamHandler(),
        logging.FileHandler("../logs/log_file_{}-{}-{}-{}.log".format(
            c_now.year, c_now.month, c_now.day, c_now.hour))
    ]
)
DEBUG_LEVELV_NUM = 99
logging.addLevelName(DEBUG_LEVELV_NUM, "CUSTOM")

def custom_level(message, *args, **kws):
    logging.Logger._log(logging.root, DEBUG_LEVELV_NUM, message, args, **kws)
logging.custom_level = custom_level

# --- --- --- ---
logging.custom_level("Waka")
I'm confused; with python 3.5, at least, it just works:
import logging

TRACE = 5
"""more detail than debug"""

logging.basicConfig()
logging.addLevelName(TRACE, "TRACE")
logger = logging.getLogger('')
logger.debug("n")
logger.setLevel(logging.DEBUG)
logger.debug("y1")
logger.log(TRACE, "n")
logger.setLevel(TRACE)
logger.log(TRACE, "y2")
output:
DEBUG:root:y1
TRACE:root:y2
Following up on the top-rated answers by Eric S. and Mad Physicist:
Fixed example that checks if the logging level is enabled:
import logging

DEBUG_LEVELV_NUM = 9
logging.addLevelName(DEBUG_LEVELV_NUM, "DEBUGV")

def debugv(self, message, *args, **kws):
    if self.isEnabledFor(DEBUG_LEVELV_NUM):
        # Yes, logger takes its '*args' as 'args'.
        self._log(DEBUG_LEVELV_NUM, message, args, **kws)

logging.Logger.debugv = debugv
This code snippet
adds a new log level "DEBUGV" and assigns it the number 9
defines a debugv method, which logs a message with level "DEBUGV" unless the log level is set to a value higher than 9 (e.g. log level "DEBUG")
monkey-patches the logging.Logger class, so that you can call logger.debugv
The suggested implementation worked well for me, but
code completion doesn't recognise the logger.debugv method
pyright, which is part of our CI pipeline, fails because it cannot trace the debugv member method of the Logger class (see https://github.com/microsoft/pylance-release/issues/2335 about dynamically adding member methods)
I ended up using inheritance as was suggested in Noufal Ibrahim's answer in this thread:
I think you'll have to subclass the Logger class and add a method called trace which basically calls Logger.log with a level lower than DEBUG. I haven't tried this but this is what the docs indicate.
Implementing Noufal Ibrahim's suggestion worked and pyright is happy:
import logging

# add log level DEBUGV
DEBUGV = 9  # slightly lower than DEBUG (10)
logging.addLevelName(DEBUGV, "DEBUGV")

class MyLogger(logging.Logger):
    """Inherit from standard Logger and add level DEBUGV."""
    def debugv(self, msg, *args, **kwargs):
        """Log 'msg % args' with severity 'DEBUGV'."""
        if self.isEnabledFor(DEBUGV):
            self._log(DEBUGV, msg, args, **kwargs)

logging.setLoggerClass(MyLogger)
Next you can initialise an instance of the extended logger using the logger-manager:
logger = logging.getLogger("whatever_logger_name")
Edit: In order to make pyright recognise the debugv method, you may need to cast the logger returned by logging.getLogger, which would look like this:
import logging
from typing import cast
logger = cast(MyLogger, logging.getLogger("whatever_logger_name"))
In case anyone wants an automated way to add a new logging level to the logging module (or a copy of it) dynamically, I have created this function, expanding @pfa's answer:
def add_level(log_name, custom_log_module=None, log_num=None,
              log_call=None,
              lower_than=None, higher_than=None, same_as=None,
              verbose=True):
    '''
    Function to dynamically add a new log level to a given custom logging module.
    <custom_log_module>: the logging module. If not provided, then a copy of
        <logging> module is used
    <log_name>: the logging level name
    <log_num>: the logging level num. If not provided, then the function checks
        <lower_than>, <higher_than> and <same_as>, in the order mentioned.
        One of those three parameters must hold a string of an already existent
        logging level name.
    In case a level is overwritten and <verbose> is True, then a message in WARNING
        level of the custom logging module is established.
    '''
    if custom_log_module is None:
        import imp
        custom_log_module = imp.load_module('custom_log_module',
                                            *imp.find_module('logging'))
    log_name = log_name.upper()

    def cust_log(par, message, *args, **kws):
        # Yes, logger takes its '*args' as 'args'.
        if par.isEnabledFor(log_num):
            par._log(log_num, message, args, **kws)

    available_level_nums = [key for key in custom_log_module._levelNames
                            if isinstance(key, int)]
    available_levels = {key: custom_log_module._levelNames[key]
                        for key in custom_log_module._levelNames
                        if isinstance(key, str)}
    if log_num is None:
        try:
            if lower_than is not None:
                log_num = available_levels[lower_than] - 1
            elif higher_than is not None:
                log_num = available_levels[higher_than] + 1
            elif same_as is not None:
                log_num = available_levels[same_as]
            else:
                raise Exception('Information about the '
                                'log_num should be provided')
        except KeyError:
            raise Exception('Non existent logging level name')
    if log_num in available_level_nums and verbose:
        custom_log_module.warn('Changing ' +
                               custom_log_module._levelNames[log_num] +
                               ' to ' + log_name)
    custom_log_module.addLevelName(log_num, log_name)

    if log_call is None:
        log_call = log_name.lower()

    setattr(custom_log_module.Logger, log_call, cust_log)
    return custom_log_module

Most basic logging handler without subclassing

While subclassing logging.Handler, I can make a custom handler by doing something like:
import requests
import logging

class RequestsHandler(logging.Handler):
    def emit(self, record):
        res = requests.get('http://google.com')
        print(res, record)

handler = RequestsHandler()
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.warning('ok!')
# <Response [200]> <LogRecord: __main__, 30, <stdin>, 1, "ok!">
What would be the simplest RequestsHandler (i.e., what methods would it need?) if it were just a base class without subclassing logging.Handler?
In general, you can find out which attributes of a class are being accessed externally by overriding the __getattribute__ method with a wrapper function that adds the name of the attribute being accessed to a set if the caller's class is not the same as the current class:
import logging
import sys

class MyHandler(logging.Handler):
    def emit(self, record):
        pass

def show_attribute(self, name):
    caller_locals = sys._getframe(1).f_locals
    if ('self' not in caller_locals or
            object.__getattribute__(caller_locals['self'], '__class__') is not
            object.__getattribute__(self, '__class__')):
        attributes.add(name)
    return original_getattribute(self, name)

attributes = set()
original_getattribute = MyHandler.__getattribute__
MyHandler.__getattribute__ = show_attribute
so that:
handler = MyHandler()
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.warning('ok!')
print(attributes)
outputs:
{'handle', 'level'}
Demo: https://repl.it/#blhsing/UtterSoupyCollaborativesoftware
As you can see from the result above, handle and level are the only attributes needed for a basic logging handler. In other words, @jirassimok is correct in that handle is the only method of the Handler class that is called externally, but one also needs to implement the level attribute, since it is directly accessed in the Logger.callHandlers method:
if record.levelno >= hdlr.level:
where the level attribute has to be an integer, and should be 0 if records of all logging levels are to be handled.
A minimal implementation of a Handler class should therefore be something like:
class MyHandler:
    def __init__(self):
        self.level = 0

    def handle(self, record):
        print(record.msg)
so that:
handler = MyHandler()
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.warning('ok!')
outputs:
ok!
Looking at the source for Logger.log leads me to Logger.callHandlers, which calls only handle on the handlers. So that might be the minimum you need if you're injecting the fake handler directly into a logger instance.
If you want to really guarantee compatibility with the rest of the logging module, the only thing you can do is go through the module's source to figure out how it works. The documentation is a good starting place, but that doesn't get into the internals much at all.
If you're just trying to write a dummy handler for a small use case, you could probably get away with skipping a lot of steps; try something, see where it fails, and build on that.
Otherwise, you won't have much choice but to dive into the source code (though trying things and seeing what breaks can also be a good way to find places to start reading).
A quick glance at the class' source tells me that the only gotchas in the class are related to the module's internal management of its objects; Handler.__init__ puts the handler into a global handler list, which the module could use in any number of places. But beyond that, the class is quite straightforward; it shouldn't be too hard to read.

pytest: selective log levels on a per-module basis

I'm using pytest-3.7.1 which has good support for logging, including live logging to stdout during tests. I'm using --log-cli-level=DEBUG to dump all debug-level logging to the console as it happens.
The problem I have is that --log-cli-level=DEBUG turns on debug logging for all modules in my test program, including third-party dependencies, and it floods the log with a lot of uninteresting output.
Python's logging module has the ability to set logging levels per module. This enables selective logging - for example, in a normal Python program I can turn on debugging for just one or two of my own modules, and restrict the log output to just those, or set different log levels for each module. This enables turning off debug-level logging for noisy libraries.
So what I'd like to do is apply the same concept to pytest's logging - i.e. specify a logging level, from the command line, for specific non-root loggers. For example, if I have a module called test_foo.py then I'm looking for a way to set the log level for this module from the command line.
I'm prepared to roll-my-own if necessary (I know how to add custom arguments to pytest), but before I do that I just want to be sure that there isn't already a solution. Is anyone aware of one?
I had the same problem, and found a solution in another answer:
Instead of --log-cli-level=DEBUG, use --log-level DEBUG. It disables all third-party module logs (in my case, I had plenty of matplotlib logs), but still outputs your app logs for each test that fails.
I got this working by writing a factory class and using it to set the level of the root logger to logging.INFO, while using the logging level from the command line for all the loggers obtained from the factory. If the logging level from the command line is higher than the minimum global log level you specify in the class (using the constant MINIMUM_GLOBAL_LOG_LEVEL), the global log level isn't changed.
import logging

MODULE_FIELD_WIDTH_IN_CHARS = '20'
LINE_NO_FIELD_WIDTH_IN_CHARS = '3'
LEVEL_NAME_FIELD_WIDTH_IN_CHARS = '8'
MINIMUM_GLOBAL_LOG_LEVEL = logging.INFO

class EasyLogger():
    root_logger = logging.getLogger()
    specified_log_level = root_logger.level
    format_string = '{asctime} '
    format_string += '{module:>' + MODULE_FIELD_WIDTH_IN_CHARS + 's}'
    format_string += '[{lineno:' + LINE_NO_FIELD_WIDTH_IN_CHARS + 'd}]'
    format_string += '[{levelname:^' + LEVEL_NAME_FIELD_WIDTH_IN_CHARS + 's}]: '
    format_string += '{message}'
    level_change_warning_sent = False

    @classmethod
    def get_logger(cls, logger_name):
        if not EasyLogger._logger_has_format(cls.root_logger, cls.format_string):
            EasyLogger._setup_root_logger()
        logger = logging.getLogger(logger_name)
        logger.setLevel(cls.specified_log_level)
        return logger

    @classmethod
    def _setup_root_logger(cls):
        formatter = logging.Formatter(fmt=cls.format_string, style='{')
        if not cls.root_logger.hasHandlers():
            handler = logging.StreamHandler()
            cls.root_logger.addHandler(handler)
        for handler in cls.root_logger.handlers:
            handler.setFormatter(formatter)
        cls.root_logger.setLevel(MINIMUM_GLOBAL_LOG_LEVEL)
        if (cls.specified_log_level < MINIMUM_GLOBAL_LOG_LEVEL and
                cls.level_change_warning_sent is False):
            cls.root_logger.log(
                max(cls.specified_log_level, logging.WARNING),
                "Setting log level for %s class to %s, all others to %s" % (
                    __name__,
                    cls.specified_log_level,
                    MINIMUM_GLOBAL_LOG_LEVEL
                )
            )
            cls.level_change_warning_sent = True

    @staticmethod
    def _logger_has_format(logger, format_string):
        for handler in logger.handlers:
            # Compare against the handler's Formatter format string.
            return (handler.formatter is not None and
                    handler.formatter._fmt == format_string)
        return False
The above class is then used to send logs normally, as you would with a logging.Logger object, as follows:
from EasyLogger import EasyLogger

class MySuperAwesomeClass():
    def __init__(self):
        self.logger = EasyLogger.get_logger(__name__)

    def foo(self):
        self.logger.debug("debug message")
        self.logger.info("info message")
        self.logger.warning("warning message")
        self.logger.critical("critical message")
        self.logger.error("error message")
Enable/Disable/Modify the log level of any module in Python:
logging.getLogger("module_name").setLevel(logging.log_level)
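If you do end up rolling your own, a conftest.py sketch along these lines could wire that one-liner to the command line. The --module-log-level option name and the NAME=LEVEL syntax are invented for this example, not a built-in pytest feature:

```python
# conftest.py
import logging

def pytest_addoption(parser):
    # e.g.: pytest --module-log-level test_foo=DEBUG --module-log-level noisy_lib=WARNING
    parser.addoption(
        "--module-log-level", action="append", default=[],
        help="NAME=LEVEL pairs for per-module log levels")

def pytest_configure(config):
    for spec in config.getoption("--module-log-level"):
        name, _, level = spec.partition("=")
        # Logger.setLevel accepts level names like "DEBUG" as strings.
        logging.getLogger(name).setLevel(level.upper())
```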

How to turn `logging` warnings into errors?

A library that I use emits warnings and errors through the logging module (logging.Logger's warn() and error() methods). I would like to implement an option to turn the warnings into errors (i.e., fail on warnings).
Is there an easy way to achieve this?
From looking at the documentation, I cannot see a ready-made solution. I assume it is possible by adding a custom Handler object, but I am not sure how to do it "right". Any pointers?
@hoefling's answer is close, but I would change it like so:
class LevelRaiser(logging.Filter):
    def filter(self, record):
        if record.levelno == logging.WARNING:
            record.levelno = logging.ERROR
            record.levelname = logging.getLevelName(logging.ERROR)
        return True

def configure_library_logging():
    library_root_logger = logging.getLogger(library.__name__)
    library_root_logger.addFilter(LevelRaiser())
The reason is that filters are used to change LogRecord attributes and filter stuff out, whereas handlers are used to do I/O. What you're trying to do here isn't I/O, and so doesn't really belong in a handler.
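For a self-contained picture of the filter variant, here is the same idea with a stand-in logger name ("somelib" is illustrative, in place of library.__name__):

```python
import logging

class LevelRaiser(logging.Filter):
    def filter(self, record):
        # Rewrite WARNING records to ERROR before they reach any handler.
        if record.levelno == logging.WARNING:
            record.levelno = logging.ERROR
            record.levelname = logging.getLevelName(logging.ERROR)
        return True

lib_logger = logging.getLogger("somelib")
lib_logger.addFilter(LevelRaiser())

# Any WARNING logged through this logger now carries ERROR level.
lib_logger.warning("upgraded to an error")
```

One caveat worth knowing: logger-level filters are not inherited by child loggers, so if the library logs through per-submodule loggers (getLogger(__name__)), the filter should be attached to a handler instead, where it also sees propagated records.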
Update: I like the proposal of Vinay made in this answer, injecting a custom Filter instead of a Handler is a much cleaner way. Please check it out!
You are on the right track with implementing your own Handler. This is pretty easy to implement. I would do it like this: write a handler that edits the LogRecord in place and attach one handler instance to the library's root logger. Example:
# library.py
import logging

_LOGGER = logging.getLogger(__name__)

def library_stuff():
    _LOGGER.warning('library stuff')
This is a script that uses the library:
import logging
import library

class LevelRaiser(logging.Handler):
    def emit(self, record: logging.LogRecord):
        if record.levelno == logging.WARNING:
            record.levelno = logging.ERROR
            record.levelname = logging.getLevelName(logging.ERROR)

def configure_library_logging():
    library_root_logger = logging.getLogger(library.__name__)
    library_root_logger.addHandler(LevelRaiser())

if __name__ == '__main__':
    # do some example global logging config
    logging.basicConfig(level=logging.INFO)
    # additional configuration for the library logging
    configure_library_logging()
    # play with different loggers
    our_logger = logging.getLogger(__name__)
    root_logger = logging.getLogger()
    root_logger.warning('spam')
    our_logger.warning('eggs')
    library.library_stuff()
    root_logger.warning('foo')
    our_logger.warning('bar')
    library.library_stuff()
Run the script:
WARNING:root:spam
WARNING:__main__:eggs
ERROR:library:library stuff
WARNING:root:foo
WARNING:__main__:bar
ERROR:library:library stuff
Note that the WARNING level is elevated to ERROR only for the library's log records; everything else remains unchanged.
You can rebind logging.warn to logging.error before calling methods from your library:
import logging
warn_log_original = logging.warn
logging.warn = logging.error
library_call()
logging.warn = warn_log_original
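One caveat with rebinding: if the library call raises, the original function is never restored. Wrapping the swap in a context manager keeps it scoped even on exceptions. A minimal sketch (the `library_call` function is a stand-in for the real library entry point):

```python
import logging
from contextlib import contextmanager

@contextmanager
def warnings_as_errors():
    """Temporarily alias logging.warn to logging.error; restore on exit."""
    original = logging.warn
    logging.warn = logging.error
    try:
        yield
    finally:
        logging.warn = original

# Stand-in for the real library function.
def library_call():
    logging.warn('this comes out at ERROR level')

with warnings_as_errors():
    library_call()
```

Note this only affects code that calls the module-level logging.warn, not code that uses its own named logger.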

using Python logger class to generate multiple logs for different log levels

I looked through the tutorials for the Python logging module here and didn't see anything that would let me make multiple logs of different levels for the same output. In the end I would like to have three logs:
<timestamp>_DEBUG.log (debug level)
<timestamp>_INFO.log (info level)
<timestamp>_ERROR.log (error level)
Is there a way to, in one script, generate multiple log files for the same input?
<-------------UPDATE #1-------------------------->
So in implementing @Robert's suggestion, I now have a small issue, probably due to not fully understanding what is being done in his code.
Here is my code in scriptRun.py
import os
import logging

logger = logging.getLogger("exceptionsLogger")
debugLogFileHandler = logging.FileHandler("Debug.log")
errorLogFileHandler = logging.FileHandler("Error.Log")
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
errorLogFileHandler.setFormatter(formatter)
debugLogFileHandler.setFormatter(formatter)
logger.addHandler(debugLogFileHandler)
logger.addHandler(errorLogFileHandler)

class LevelFilter(logging.Filter):
    def __init__(self, level):
        self.level = level
    def filter(self, record):
        return record.levelno == self.level

debugLogFileHandler.addFilter(LevelFilter(logging.DEBUG))
errorLogFileHandler.addFilter(LevelFilter(logging.ERROR))

directory = []
# raw string so the backslashes are not treated as escapes
for dirpath, dirnames, filenames in os.walk(r"path\to\scripts"):
    for filename in [f for f in filenames if f.endswith(".py")]:
        directory.append(os.path.join(dirpath, filename))

for entry in directory:
    execfile(entry)
    for lists in x:
        if lists[0] == 2:
            logger.error(lists[1] + " " + lists[2])
        elif lists[0] == 1:
            logger.debug(lists[1] + " " + lists[2])
an example of what this is running is:
import sys

def script2Test2():
    print y  # deliberately raises NameError

def script2Test3():
    mundo = "hungry"

global x
x = []
theTests = (script2Test2, script2Test3)
for test in theTests:
    try:
        test()
        x.append([1, test.__name__, " OK"])
    except:
        error = str(sys.exc_info()[1])
        x.append([2, test.__name__, error])
Now to my issue: running scriptRun.py does not throw any errors, and Error.log and Debug.log are created, but only Error.log is populated with entries.
any idea why?
<------------------------Update #2----------------------->
So I realized that nothing is being logged that is "lower" than WARNING. Even if I remove the filters and call debugLogFileHandler.setLevel(logging.DEBUG), it does not seem to matter. If I set the actual log call to logger.warning or higher, it will print to the logs. Of course, once I uncomment debugLogFileHandler.addFilter(LevelFilter(logging.DEBUG)), I get no log activity in Debug.log. I'm tempted to just make my own log level, but that seems like a really bad idea, in case anyone/anything else uses this code.
<-------------------------Final UPDATE--------------------->
Well, I was stupid and forgot to set the logger itself to log DEBUG-level events. Since by default the logging module doesn't log anything below WARNING, it wasn't logging any of the debug information I sent it.
Final thanks and shoutout to @Robert for the filter.
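The one-line fix from the final update, spelled out: a logger's own level defaults to the root's effective level (WARNING), so DEBUG and INFO records are dropped before any handler or filter ever sees them unless the logger is opened up:

```python
import logging

logger = logging.getLogger("exceptionsLogger")
# Without this, the logger discards DEBUG/INFO records before
# they reach any handler, regardless of handler levels or filters.
logger.setLevel(logging.DEBUG)
```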
Create multiple Handlers, each for one output file (INFO.log, DEBUG.log etc.).
Add a filter to each handler that only allows the specific level.
For example:
import logging
# Set up loggers and handlers.
# ...
class LevelFilter(logging.Filter):
    def __init__(self, level):
        self.level = level
    def filter(self, record):
        return record.levelno == self.level

debugLogFileHandler.addFilter(LevelFilter(logging.DEBUG))
infoLogFileHandler.addFilter(LevelFilter(logging.INFO))
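Putting the pieces together for the three timestamped files the question asks for; the logger name and filename pattern here are illustrative, not prescribed:

```python
import logging
import time

class LevelFilter(logging.Filter):
    """Pass only records of exactly one level."""
    def __init__(self, level):
        logging.Filter.__init__(self)
        self.level = level
    def filter(self, record):
        return record.levelno == self.level

stamp = time.strftime('%Y%m%d-%H%M%S')
logger = logging.getLogger('multilog')
logger.setLevel(logging.DEBUG)   # let all records reach the handlers
logger.propagate = False         # keep records out of the root logger
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')

for name, level in (('DEBUG', logging.DEBUG),
                    ('INFO', logging.INFO),
                    ('ERROR', logging.ERROR)):
    handler = logging.FileHandler('%s_%s.log' % (stamp, name))
    handler.setFormatter(formatter)
    handler.addFilter(LevelFilter(level))
    logger.addHandler(handler)

logger.debug('goes to the DEBUG file only')
logger.info('goes to the INFO file only')
logger.error('goes to the ERROR file only')
```

Each handler's filter admits exactly one level, so every record lands in exactly one of the three files.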
