How to write custom logging methods for custom logging levels - Python

Hi
I would like to extend my logger (taken by logging.getLogger("rrcheck")) with my own methods like:
def warnpfx(...):
What is the best way to do this?
My original wish is to have a root logger that writes everything to a file, plus a named logger ("rrcheck") that writes to stdout; the latter should also have some extra methods and levels. I need it to prepend some messages with a "! PFXWRN" prefix (but only those going to stdout) and to leave other messages unchanged. I would also like to set the logging level separately for the root and for the named logger.
This is my code:
import logging

class CheloExtendedLogger(logging.Logger):
    """
    Custom logger class with additional levels and methods
    """
    WARNPFX = logging.WARNING + 1

    def __init__(self, name):
        logging.Logger.__init__(self, name, logging.DEBUG)
        logging.addLevelName(self.WARNPFX, 'WARNING')
        console = logging.StreamHandler()
        console.setLevel(logging.DEBUG)
        # create formatter and add it to the handlers
        formatter = logging.Formatter("%(asctime)s [%(funcName)s: %(filename)s,%(lineno)d] %(message)s")
        console.setFormatter(formatter)
        # add the handlers to logger
        self.addHandler(console)

    def warnpfx(self, msg, *args, **kw):
        self.log(self.WARNPFX, "! PFXWRN %s" % msg, *args, **kw)
logging.setLoggerClass(CheloExtendedLogger)
rrclogger = logging.getLogger("rrcheck")
rrclogger.setLevel(logging.INFO)

def test():
    rrclogger.debug("DEBUG message")
    rrclogger.info("INFO message")
    rrclogger.warnpfx("warning with prefix")

test()
And this is the output (the function name and line number are wrong: warnpfx instead of test):
2011-02-10 14:36:51,482 [test: log4.py,35] INFO message
2011-02-10 14:36:51,497 [warnpfx: log4.py,26] ! PFXWRN warning with prefix
Maybe my own-logger approach is not the best one?
Which direction would you propose to go (own logger, own handler, own formatter, etc.)?
How should I proceed if I want yet another logger?
Unfortunately, logging offers no way to register a custom logger class for a given name, so that getLogger(name) would return the required one...
Regards,
Zbigniew

If you check the Python sources, you'll see that the culprit is the Logger.findCaller method, which walks the call stack and searches for the first line that is not in the logging.py file. Because of this, your custom call to self.log in CheloExtendedLogger.warnpfx records the wrong line.
Unfortunately, the code in logging.py is not very modular, so the fix is rather ugly: you have to redefine the findCaller method in your subclass so that it takes into account both the logging.py file and the file in which your logger resides (note that there shouldn't be any code other than the logger in your file, or again the results will be inaccurate). This requires a one-line change in the method body:
class CheloExtendedLogger(logging.Logger):
    [...]
    def findCaller(self):
        """
        Find the stack frame of the caller so that we can note the source
        file name, line number and function name.
        """
        f = logging.currentframe().f_back
        rv = "(unknown file)", 0, "(unknown function)"
        while hasattr(f, "f_code"):
            co = f.f_code
            filename = os.path.normcase(co.co_filename)
            if filename in (_srcfile, logging._srcfile):  # This line is modified.
                f = f.f_back
                continue
            rv = (filename, f.f_lineno, co.co_name)
            break
        return rv
For this to work, you need to define your own _srcfile variable in your file. Again, logging.py doesn't use a function, but rather puts all the code on the module level, so you have to copy-paste again:
import os
import sys

if hasattr(sys, 'frozen'):  # support for py2exe
    _srcfile = "logging%s__init__%s" % (os.sep, __file__[-4:])
elif __file__[-4:].lower() in ['.pyc', '.pyo']:
    _srcfile = __file__[:-4] + '.py'
else:
    _srcfile = __file__
_srcfile = os.path.normcase(_srcfile)
Well, if you don't care about compiled versions, the last two lines will suffice.
Now, your code works as expected:
2011-02-10 16:41:48,108 [test: lg.py,16] INFO message
2011-02-10 16:41:48,171 [test: lg.py,17] ! PFXWRN warning with prefix
As for multiple logger classes, if you don't mind a dependency between a logger's name and its class, you could make a subclass of logging.Logger that delegates its calls to an appropriate logger class based on its name. There are probably other, more elegant possibilities, but I can't think of any right now. A sketch of this idea follows.
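For example, here is one hedged sketch of that delegation idea: a dispatching class registered via setLoggerClass that picks the concrete class from a name-to-class mapping at creation time. The mapping, the class names, and the __new__ trick are assumptions for illustration, not a standard logging feature:

import logging

class RrcheckLogger(logging.Logger):
    # Same idea as CheloExtendedLogger above.
    def warnpfx(self, msg, *args, **kw):
        self.log(logging.WARNING + 1, "! PFXWRN %s" % msg, *args, **kw)

_CLASS_BY_NAME = {"rrcheck": RrcheckLogger}

class DispatchingLogger(logging.Logger):
    """Delegates instantiation to a class chosen by logger name."""
    def __new__(cls, name, level=logging.NOTSET):
        real_cls = _CLASS_BY_NAME.get(name.split(".")[0], logging.Logger)
        obj = object.__new__(real_cls)
        # __init__ is not invoked automatically when __new__ returns an
        # instance of an unrelated class, so call it by hand.
        real_cls.__init__(obj, name, level)
        return obj

logging.setLoggerClass(DispatchingLogger)
rrc = logging.getLogger("rrcheck")   # a RrcheckLogger
other = logging.getLogger("webapp")  # a plain logging.Logger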

Related

python custom log level naming convention [duplicate]

I'd like to have loglevel TRACE (5) for my application, as I don't think that debug() is sufficient. Additionally log(5, msg) isn't what I want. How can I add a custom loglevel to a Python logger?
I have a mylogger.py with the following content:
import logging

@property
def log(obj):
    myLogger = logging.getLogger(obj.__class__.__name__)
    return myLogger
In my code I use it in the following way:
class ExampleClass(object):
    from mylogger import log

    def __init__(self):
        '''The constructor with the logger'''
        self.log.debug("Init runs")
Now I'd like to call self.log.trace("foo bar")
Edit (Dec 8th 2016): I changed the accepted answer to pfa's which is, IMHO, an excellent solution based on the very good proposal from Eric S.
To people reading in 2022 and beyond: you should probably check out the currently next-highest-rated answer here: https://stackoverflow.com/a/35804945/1691778
My original answer is below.
--
@Eric S.
Eric S.'s answer is excellent, but I learned by experimentation that this will always cause messages logged at the new level to be printed, regardless of what the log level is set to. So if you make a new level with number 9 and then call setLevel(50), the lower-level messages will erroneously be printed.
To prevent that from happening, you need another line inside the "debugv" function to check whether the logging level in question is actually enabled.
Fixed example that checks if the logging level is enabled:
import logging

DEBUG_LEVELV_NUM = 9
logging.addLevelName(DEBUG_LEVELV_NUM, "DEBUGV")

def debugv(self, message, *args, **kws):
    if self.isEnabledFor(DEBUG_LEVELV_NUM):
        # Yes, logger takes its '*args' as 'args'.
        self._log(DEBUG_LEVELV_NUM, message, args, **kws)

logging.Logger.debugv = debugv
If you look at the code for class Logger in logging/__init__.py for Python 2.7, this is what all the standard log functions do (.critical, .debug, etc.).
I apparently can't post replies to others' answers for lack of reputation... hopefully Eric will update his post if he sees this. =)
Combining all of the existing answers with a bunch of usage experience, I think that I have come up with a list of all the things that need to be done to ensure completely seamless usage of the new level. The steps below assume that you are adding a new level TRACE with value logging.DEBUG - 5 == 5:
logging.addLevelName(logging.DEBUG - 5, 'TRACE') needs to be invoked to get the new level registered internally so that it can be referenced by name.
The new level needs to be added as an attribute to logging itself for consistency: logging.TRACE = logging.DEBUG - 5.
A method called trace needs to be added to the logging module. It should behave just like debug, info, etc.
A method called trace needs to be added to the currently configured logger class. Since this is not 100% guaranteed to be logging.Logger, use logging.getLoggerClass() instead.
All the steps are illustrated in the method below:
import logging

def addLoggingLevel(levelName, levelNum, methodName=None):
    """
    Comprehensively adds a new logging level to the `logging` module and the
    currently configured logging class.

    `levelName` becomes an attribute of the `logging` module with the value
    `levelNum`. `methodName` becomes a convenience method for both `logging`
    itself and the class returned by `logging.getLoggerClass()` (usually just
    `logging.Logger`). If `methodName` is not specified, `levelName.lower()` is
    used.

    To avoid accidental clobberings of existing attributes, this method will
    raise an `AttributeError` if the level name is already an attribute of the
    `logging` module or if the method name is already present.

    Example
    -------
    >>> addLoggingLevel('TRACE', logging.DEBUG - 5)
    >>> logging.getLogger(__name__).setLevel("TRACE")
    >>> logging.getLogger(__name__).trace('that worked')
    >>> logging.trace('so did this')
    >>> logging.TRACE
    5

    """
    if not methodName:
        methodName = levelName.lower()

    if hasattr(logging, levelName):
        raise AttributeError('{} already defined in logging module'.format(levelName))
    if hasattr(logging, methodName):
        raise AttributeError('{} already defined in logging module'.format(methodName))
    if hasattr(logging.getLoggerClass(), methodName):
        raise AttributeError('{} already defined in logger class'.format(methodName))

    # This method was inspired by the answers to Stack Overflow post
    # http://stackoverflow.com/q/2183233/2988730, especially
    # http://stackoverflow.com/a/13638084/2988730
    def logForLevel(self, message, *args, **kwargs):
        if self.isEnabledFor(levelNum):
            self._log(levelNum, message, args, **kwargs)
    def logToRoot(message, *args, **kwargs):
        logging.log(levelNum, message, *args, **kwargs)

    logging.addLevelName(levelNum, levelName)
    setattr(logging, levelName, levelNum)
    setattr(logging.getLoggerClass(), methodName, logForLevel)
    setattr(logging, methodName, logToRoot)
You can find an even more detailed implementation in the utility library I maintain, haggis. The function haggis.logs.add_logging_level is a more production-ready implementation of this answer.
I took the avoid-seeing-"lambda" answer and had to modify where log_at_my_log_level was being added. I also saw the problem that Paul did: I don't think this works. Don't you need logger as the first arg in log_at_my_log_level? This worked for me:
import logging

DEBUG_LEVELV_NUM = 9
logging.addLevelName(DEBUG_LEVELV_NUM, "DEBUGV")

def debugv(self, message, *args, **kws):
    # Yes, logger takes its '*args' as 'args'.
    self._log(DEBUG_LEVELV_NUM, message, args, **kws)

logging.Logger.debugv = debugv
This question is rather old, but I just dealt with the same topic and found a way similar to those already mentioned which appears a little cleaner to me. This was tested on 3.4, so I'm not sure whether the methods used exist in older versions:
from logging import getLoggerClass, addLevelName, setLoggerClass, NOTSET

VERBOSE = 5

class MyLogger(getLoggerClass()):
    def __init__(self, name, level=NOTSET):
        super().__init__(name, level)
        addLevelName(VERBOSE, "VERBOSE")

    def verbose(self, msg, *args, **kwargs):
        if self.isEnabledFor(VERBOSE):
            self._log(VERBOSE, msg, args, **kwargs)

setLoggerClass(MyLogger)
While we already have plenty of correct answers, the following is in my opinion more pythonic:
import logging
from functools import partial, partialmethod
logging.TRACE = 5
logging.addLevelName(logging.TRACE, 'TRACE')
logging.Logger.trace = partialmethod(logging.Logger.log, logging.TRACE)
logging.trace = partial(logging.log, logging.TRACE)
If you want to use mypy on your code, it is recommended to add # type: ignore to suppress the warnings from adding the attributes.
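For instance, the two assignments above might carry the suppression comments like this (the exact mypy error code is an assumption):

logging.Logger.trace = partialmethod(logging.Logger.log, logging.TRACE)  # type: ignore[attr-defined]
logging.trace = partial(logging.log, logging.TRACE)  # type: ignore[attr-defined]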
Who started the bad practice of using internal methods (self._log) and why is each answer based on that?! The pythonic solution would be to use self.log instead so you don't have to mess with any internal stuff:
import logging
SUBDEBUG = 5
logging.addLevelName(SUBDEBUG, 'SUBDEBUG')
def subdebug(self, message, *args, **kws):
    self.log(SUBDEBUG, message, *args, **kws)

logging.Logger.subdebug = subdebug
logging.basicConfig()
l = logging.getLogger()
l.setLevel(SUBDEBUG)
l.subdebug('test')
l.setLevel(logging.DEBUG)
l.subdebug('test')
I think you'll have to subclass the Logger class and add a method called trace which basically calls Logger.log with a level lower than DEBUG. I haven't tried this but this is what the docs indicate.
I find it easier to create a new attribute on the logger object that wraps the log() function. I think the logging module provides addLevelName() and log() for this very reason. Thus no subclasses or new methods are needed.
import logging

@property
def log(obj):
    logging.addLevelName(5, 'TRACE')
    myLogger = logging.getLogger(obj.__class__.__name__)
    setattr(myLogger, 'trace', lambda *args: myLogger.log(5, *args))
    return myLogger
Now
mylogger.trace('This is a trace message')
should work as expected.
Tips for creating a custom logger:
Do not use _log, use log (you don't have to check isEnabledFor)
The logging module should be the one creating instances of the custom logger, since it does some magic in getLogger, so you will need to set the class via setLoggerClass
You do not need to define __init__ for the logger class if you are not storing anything
# Lower than debug, which is 10
TRACE = 5

class MyLogger(logging.Logger):
    def trace(self, msg, *args, **kwargs):
        self.log(TRACE, msg, *args, **kwargs)
When calling this logger, use setLoggerClass(MyLogger) to make this the default logger returned by getLogger:
logging.setLoggerClass(MyLogger)
log = logging.getLogger(__name__)
# ...
log.trace("something specific")
You will need to call setFormatter, addHandler, and setLevel(TRACE) on the handler and on the logger itself to actually see this low-level trace.
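A minimal configuration sketch for the class above (the handler choice, format string, and addLevelName call are assumptions; addLevelName is needed so %(levelname)s prints TRACE instead of "Level 5"):

import logging
import sys

logging.addLevelName(TRACE, "TRACE")  # assumes TRACE = 5 from above
logging.setLoggerClass(MyLogger)

log = logging.getLogger(__name__)
handler = logging.StreamHandler(sys.stderr)
handler.setLevel(TRACE)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))
log.addHandler(handler)
log.setLevel(TRACE)

log.trace("now actually visible")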
This worked for me:
import logging

logging.basicConfig(
    format=' %(levelname)-8.8s %(funcName)s: %(message)s',
)
logging.NOTE = 32  # positive yet important
logging.addLevelName(logging.NOTE, 'NOTE')       # new level
logging.addLevelName(logging.CRITICAL, 'FATAL')  # rename existing

log = logging.getLogger(__name__)
log.note = lambda msg, *args: log._log(logging.NOTE, msg, args)
log.note('school\'s out for summer! %s', 'dude')
log.fatal('file not found.')
The lambda/funcName issue is fixed with logger._log, as @marqueed pointed out. I think using lambda looks a bit cleaner, but the drawback is that it can't take keyword arguments. I've never used that myself, so no biggie.
NOTE setup: school's out for summer! dude
FATAL setup: file not found.
As an alternative to adding an extra method to the Logger class, I would recommend using the Logger.log(level, msg) method.
import logging
TRACE = 5
logging.addLevelName(TRACE, 'TRACE')
FORMAT = '%(levelname)s:%(name)s:%(lineno)d:%(message)s'
logging.basicConfig(format=FORMAT)
l = logging.getLogger()
l.setLevel(TRACE)
l.log(TRACE, 'trace message')
l.setLevel(logging.DEBUG)
l.log(TRACE, 'disabled trace message')
In my experience, this is the full solution to the OP's problem: to avoid seeing "lambda" as the function in which the message is emitted, go deeper:
MY_LEVEL_NUM = 25
logging.addLevelName(MY_LEVEL_NUM, "MY_LEVEL_NAME")

def log_at_my_log_level(self, message, *args, **kws):
    # Yes, logger takes its '*args' as 'args'.
    self._log(MY_LEVEL_NUM, message, args, **kws)

logger.log_at_my_log_level = log_at_my_log_level
I've never tried working with a standalone logger class, but I think the basic idea is the same (use _log).
Addition to Mad Physicist's example, to get the file name and line number correct:
def logToRoot(message, *args, **kwargs):
    if logging.root.isEnabledFor(levelNum):
        logging.root._log(levelNum, message, args, **kwargs)
Based on the pinned answer, I wrote a little method which automatically creates new logging levels:
import logging

def set_custom_logging_levels(config={}):
    """
    Assign custom levels for logging.

    config: a dict like
        {
            'EVENT_NAME': EVENT_LEVEL_NUM,
        }

    EVENT_LEVEL_NUM must not collide with the levels the logging module
    already has:
        logging.DEBUG    = 10
        logging.INFO     = 20
        logging.WARNING  = 30
        logging.ERROR    = 40
        logging.CRITICAL = 50
    """
    assert isinstance(config, dict), "Configuration must be a dict"

    def get_level_func(level_name, level_num):
        def _blank(self, message, *args, **kws):
            if self.isEnabledFor(level_num):
                # Yes, logger takes its '*args' as 'args'.
                self._log(level_num, message, args, **kws)
        _blank.__name__ = level_name.lower()
        return _blank

    for level_name, level_num in config.items():
        logging.addLevelName(level_num, level_name.upper())
        setattr(logging.Logger, level_name.lower(), get_level_func(level_name, level_num))
config may look something like this:
new_log_levels = {
    # level_num is in the logging.INFO range, that's why it's 21, 22, etc.
    "FOO": 21,
    "BAR": 22,
}
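A hypothetical usage sketch of the helper above (the basicConfig call is an assumption; without it the root logger stays at WARNING and the low custom levels are filtered out):

set_custom_logging_levels(new_log_levels)
logging.basicConfig(level=21)

log = logging.getLogger(__name__)
log.foo("a FOO-level message")  # emitted with levelname FOO (21)
log.bar("a BAR-level message")  # emitted with levelname BAR (22)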
Someone might want to do root-level custom logging and avoid the use of logging.getLogger(''):
import logging
from datetime import datetime

c_now = datetime.now()
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] :: %(message)s",
    handlers=[
        logging.StreamHandler(),
        logging.FileHandler("../logs/log_file_{}-{}-{}-{}.log".format(
            c_now.year, c_now.month, c_now.day, c_now.hour))
    ]
)

DEBUG_LEVELV_NUM = 99
logging.addLevelName(DEBUG_LEVELV_NUM, "CUSTOM")

def custom_level(message, *args, **kws):
    logging.Logger._log(logging.root, DEBUG_LEVELV_NUM, message, args, **kws)

logging.custom_level = custom_level

# --- --- --- ---
logging.custom_level("Waka")
I'm confused; with python 3.5, at least, it just works:
import logging
TRACE = 5
"""more detail than debug"""
logging.basicConfig()
logging.addLevelName(TRACE,"TRACE")
logger = logging.getLogger('')
logger.debug("n")
logger.setLevel(logging.DEBUG)
logger.debug("y1")
logger.log(TRACE,"n")
logger.setLevel(TRACE)
logger.log(TRACE,"y2")
output:
DEBUG:root:y1
TRACE:root:y2
Following up on the top-rated answers by Eric S. and Mad Physicist:
Fixed example that checks if the logging level is enabled:
import logging

DEBUG_LEVELV_NUM = 9
logging.addLevelName(DEBUG_LEVELV_NUM, "DEBUGV")

def debugv(self, message, *args, **kws):
    if self.isEnabledFor(DEBUG_LEVELV_NUM):
        # Yes, logger takes its '*args' as 'args'.
        self._log(DEBUG_LEVELV_NUM, message, args, **kws)

logging.Logger.debugv = debugv
This code snippet
adds a new log level "DEBUGV" and assigns it the number 9
defines a debugv method, which logs a message with level "DEBUGV" unless the log level is set to a value higher than 9 (e.g. log level "DEBUG")
monkey-patches the logging.Logger class, so that you can call logger.debugv
The suggested implementation worked well for me, but
code completion doesn't recognise the logger.debugv-method
pyright, which is part of our CI-pipeline, fails, because it cannot trace the debugv-member method of the Logger-class (see https://github.com/microsoft/pylance-release/issues/2335 about dynamically adding member-methods)
I ended up using inheritance as was suggested in Noufal Ibrahim's answer in this thread:
I think you'll have to subclass the Logger class and add a method called trace which basically calls Logger.log with a level lower than DEBUG. I haven't tried this but this is what the docs indicate.
Implementing Noufal Ibrahim's suggestion worked and pyright is happy:
import logging

# add log-level DEBUGV
DEBUGV = 9  # slightly lower than DEBUG (10)
logging.addLevelName(DEBUGV, "DEBUGV")

class MyLogger(logging.Logger):
    """Inherit from standard Logger and add level DEBUGV."""
    def debugv(self, msg, *args, **kwargs):
        """Log 'msg % args' with severity 'DEBUGV'."""
        if self.isEnabledFor(DEBUGV):
            self._log(DEBUGV, msg, args, **kwargs)

logging.setLoggerClass(MyLogger)
Next you can initialise an instance of the extended logger using the logger-manager:
logger = logging.getLogger("whatever_logger_name")
Edit: In order to make pyright recognise the debugv method, you may need to cast the logger returned by logging.getLogger, which would look like this:
import logging
from typing import cast
logger = cast(MyLogger, logging.getLogger("whatever_logger_name"))
In case anyone wants an automated way to add a new logging level to the logging module (or a copy of it) dynamically, I have created this function, expanding @pfa's answer:
import logging

def add_level(log_name, custom_log_module=None, log_num=None,
              log_call=None,
              lower_than=None, higher_than=None, same_as=None,
              verbose=True):
    '''
    Function to dynamically add a new log level to a given custom logging module.
    <custom_log_module>: the logging module. If not provided, then a copy of
        the <logging> module is used
    <log_name>: the logging level name
    <log_num>: the logging level num. If not provided, then the function checks
        <lower_than>, <higher_than> and <same_as>, in the order mentioned.
        One of those three parameters must hold a string of an already existent
        logging level name.
    In case a level is overwritten and <verbose> is True, a message at WARNING
    level of the custom logging module is emitted.
    '''
    if custom_log_module is None:
        import imp  # deprecated in Python 3; kept from the original answer
        custom_log_module = imp.load_module('custom_log_module',
                                            *imp.find_module('logging'))
    log_name = log_name.upper()

    def cust_log(par, message, *args, **kws):
        # Yes, logger takes its '*args' as 'args'.
        if par.isEnabledFor(log_num):
            par._log(log_num, message, args, **kws)

    # Note: _levelNames is a Python 2 internal; Python 3 splits it into
    # _levelToName and _nameToLevel.
    available_level_nums = [key for key in custom_log_module._levelNames
                            if isinstance(key, int)]
    available_levels = {key: custom_log_module._levelNames[key]
                        for key in custom_log_module._levelNames
                        if isinstance(key, str)}
    if log_num is None:
        try:
            if lower_than is not None:
                log_num = available_levels[lower_than] - 1
            elif higher_than is not None:
                log_num = available_levels[higher_than] + 1
            elif same_as is not None:
                log_num = available_levels[same_as]
            else:
                raise Exception('Information about the '
                                'log_num should be provided')
        except KeyError:
            raise Exception('Non existent logging level name')
    if log_num in available_level_nums and verbose:
        custom_log_module.warn('Changing ' +
                               custom_log_module._levelNames[log_num] +
                               ' to ' + log_name)
    custom_log_module.addLevelName(log_num, log_name)

    if log_call is None:
        log_call = log_name.lower()

    setattr(custom_log_module.Logger, log_call, cust_log)
    return custom_log_module
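A hypothetical usage sketch (note that the function relies on the Python 2 internal _levelNames, so this assumes Python 2):

custom_logging = add_level('NOTICE', higher_than='INFO')  # NOTICE becomes 21
custom_logging.basicConfig(level=21)
log = custom_logging.getLogger(__name__)
log.notice('a NOTICE-level message')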

Most basic logging handler without subclassing

While subclassing logging.Handler, I can make a custom handler by doing something like:
import requests
import logging
class RequestsHandler(logging.Handler):
    def emit(self, record):
        res = requests.get('http://google.com')
        print(res, record)
handler = RequestsHandler()
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.warning('ok!')
# <Response [200]> <LogRecord: __main__, 30, <stdin>, 1, "ok!">
What would be the simplest RequestsHandler (i.e., what methods would it need?) if it were just a base class without subclassing logging.Handler?
In general, you can find out which attributes of a class are accessed externally by overriding the __getattribute__ method with a wrapper function that adds the name of the attribute being accessed to a set if the caller's class is not the same as the current class:
import logging
import sys

class MyHandler(logging.Handler):
    def emit(self, record):
        pass

def show_attribute(self, name):
    caller_locals = sys._getframe(1).f_locals
    if ('self' not in caller_locals or
            object.__getattribute__(caller_locals['self'], '__class__') is not
            object.__getattribute__(self, '__class__')):
        attributes.add(name)
    return original_getattribute(self, name)

attributes = set()
original_getattribute = MyHandler.__getattribute__
MyHandler.__getattribute__ = show_attribute
so that:
handler = MyHandler()
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.warning('ok!')
print(attributes)
outputs:
{'handle', 'level'}
Demo: https://repl.it/#blhsing/UtterSoupyCollaborativesoftware
As you can see from the result above, handle and level are the only attributes needed for a basic logging handler. In other words, @jirassimok is correct in that handle is the only method of the Handler class that is called externally, but one also needs to implement the level attribute, since it is directly accessed in the Logger.callHandlers method:
if record.levelno >= hdlr.level:
where the level attribute has to be an integer, and should be 0 if records of all logging levels are to be handled.
A minimal implementation of a Handler class should therefore be something like:
class MyHandler:
    def __init__(self):
        self.level = 0

    def handle(self, record):
        print(record.msg)
so that:
handler = MyHandler()
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.warning('ok!')
outputs:
ok!
Looking at the source for Logger.log leads me to Logger.callHandlers, which calls only handle on the handlers. So that might be the minimum you need if you're injecting the fake handler directly into a logger instance.
If you want to really guarantee compatibility with the rest of the logging module, the only thing you can do is go through the module's source to figure out how it works. The documentation is a good starting place, but that doesn't get into the internals much at all.
If you're just trying to write a dummy handler for a small use case, you could probably get away with skipping a lot of steps; try something, see where it fails, and build on that.
Otherwise, you won't have much choice but to dive into the source code (though trying things and seeing what breaks can also be a good way to find places to start reading).
A quick glance at the class' source tells me that the only gotchas in the class are related to the module's internal management of its objects; Handler.__init__ puts the handler into a global handler list, which the module could use in any number of places. But beyond that, the class is quite straightforward; it shouldn't be too hard to read.

How to extend the logging.Logger Class?

I would like to start with a basic logging class that inherits from Python's logging.Logger class. However, I am not sure about how I should be constructing my class so that I can establish the basics needed for customising the inherited logger.
This is what I have so far in my logger.py file:
import sys
import logging
from logging import DEBUG, INFO, ERROR
class MyLogger(object):
    def __init__(self, name, format="%(asctime)s | %(levelname)s | %(message)s", level=INFO):
        # Initial construct.
        self.format = format
        self.level = level
        self.name = name

        # Logger configuration.
        self.console_formatter = logging.Formatter(self.format)
        self.console_logger = logging.StreamHandler(sys.stdout)
        self.console_logger.setFormatter(self.console_formatter)

        # Complete logging config.
        self.logger = logging.getLogger("myApp")
        self.logger.setLevel(self.level)
        self.logger.addHandler(self.console_logger)

    def info(self, msg, extra=None):
        self.logger.info(msg, extra=extra)

    def error(self, msg, extra=None):
        self.logger.error(msg, extra=extra)

    def debug(self, msg, extra=None):
        self.logger.debug(msg, extra=extra)

    def warn(self, msg, extra=None):
        self.logger.warn(msg, extra=extra)
This is the main myApp.py:
import entity
from core import MyLogger

my_logger = MyLogger("myApp")

def cmd():
    my_logger.info("Hello from %s!" % ("__CMD"))

entity.third_party()
entity.another_function()
cmd()
And this is the entity.py module:
# Local modules
from core import MyLogger

# Global modules
import logging
from logging import DEBUG, INFO, ERROR, CRITICAL

my_logger = MyLogger("myApp.entity", level=DEBUG)

def third_party():
    my_logger.info("Initial message from: %s!" % ("__THIRD_PARTY"))

def another_function():
    my_logger.warn("Message from: %s" % ("__ANOTHER_FUNCTION"))
When I run the main app, I get this:
2016-09-14 12:40:50,445 | INFO | Initial message from: __THIRD_PARTY!
2016-09-14 12:40:50,445 | INFO | Initial message from: __THIRD_PARTY!
2016-09-14 12:40:50,445 | WARNING | Message from: __ANOTHER_FUNCTION
2016-09-14 12:40:50,445 | WARNING | Message from: __ANOTHER_FUNCTION
2016-09-14 12:40:50,445 | INFO | Hello from __CMD!
2016-09-14 12:40:50,445 | INFO | Hello from __CMD!
Everything is printed twice; I have probably failed to set up the logger class properly.
Update 1
Let me clarify my goals.
(1) I would like to encapsulate the main logging functionality in one single location so I can do this:
from mylogger import MyLogger
my_logger = MyLogger("myApp")
my_logger.info("Hello from %s!" % ("__CMD"))
(2) I am planning to use CustomFormatter and CustomAdapter classes. This bit does not require a custom logging class; those can be plugged in straight away.
(3) I probably do not need to go very deep in terms of customisation of the underlying logger class (records etc.), intercepting logger.info, loggin.debug etc. should be enough.
So referring back to this Python recipe that has been circulated many times on these forums:
I am trying to find the sweet spot between having a logger class, yet still being able to use the built-in facilities like assigning Formatters and Adapters etc., so everything stays compatible with the logging module.
class OurLogger(logging.getLoggerClass()):
    def makeRecord(self, name, level, fn, lno, msg, args, exc_info, func=None, extra=None):
        # Don't pass all makeRecord args to OurLogRecord bc it doesn't expect "extra"
        rv = OurLogRecord(name, level, fn, lno, msg, args, exc_info, func)
        # Handle the new extra parameter.
        # This if block was copied from Logger.makeRecord
        if extra:
            for key in extra:
                if (key in ["message", "asctime"]) or (key in rv.__dict__):
                    raise KeyError("Attempt to overwrite %r in LogRecord" % key)
                rv.__dict__[key] = extra[key]
        return rv
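The recipe presumes an OurLogRecord class that is not shown; a minimal hypothetical sketch (the injected fields are assumptions) could look like this:

import logging
import os

class OurLogRecord(logging.LogRecord):
    def __init__(self, name, level, fn, lno, msg, args, exc_info, func=None):
        logging.LogRecord.__init__(self, name, level, fn, lno, msg, args,
                                   exc_info, func)
        # Extra fields that a formatter can reference, e.g. %(username)s.
        self.username = os.environ.get("USER", "unknown")
        self.funcname = func or "(unknown)"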
Update 2
I have created a repo with a simple python app demonstrating a possible solution. However, I am keen to improve this.
xlog_example
This example effectively demonstrates the technique of overriding the logging.Logger class and the logging.LogRecord class through inheritance.
Two external items are mixed into the log stream: funcname and username without using any Formatters or Adapters.
At this stage, I believe that the research I have done so far, and the example provided with the intention of wrapping up the solution, are sufficient to serve as an answer to my question. In general, there are many approaches that may be utilised to wrap a logging solution. This particular question aimed to focus on a solution that utilises logging.Logger class inheritance, so that the internal mechanics can be altered while the rest of the functionality is kept as it is, since it is going to be provided by the original logging.Logger class.
Having said that, class inheritance techniques should be used with great care. Many of the facilities provided by the logging module are already sufficient to maintain and run a stable logging workflow. Inheriting from the logging.Logger class is probably good when the goal is some kind of a fundamental change to the way the log data is processed and exported.
To summarise this, I see that there are two approaches for wrapping logging functionality:
1) The Traditional Logging:
This is simply working with the provided logging methods and functions, but wrap them in a module so that some of the generic repetitive tasks are organised in one place. In this way, things like log files, log levels, managing custom Filters, Adapters etc. will be easy.
I am not sure if a class approach can be utilised in this scenario (and I am not talking about the superclass approach, which is the topic of the second item), as it seems that things get complicated when the logging calls are wrapped inside a class. I would like to hear about this issue, and I will definitely prepare a question that explores this aspect.
2) The Logger Inheritance:
This approach is based on inheriting from the original logging.Logger class and adding to the existing methods or entirely hijacking them by modifying the internal behaviour. The mechanics are based on the following bit of code:
# Register our logger.
logging.setLoggerClass(OurLogger)
my_logger = logging.getLogger("main")
From here on, we are relying on our own Logger, yet we are still able to benefit from all of the other logging facilities:
# We still need a logging handler.
ch = logging.StreamHandler()
my_logger.addHandler(ch)
# Configure a formatter.
formatter = logging.Formatter('LOGGER:%(name)12s - %(levelname)7s - <%(filename)s:%(username)s:%(funcname)s> %(message)s')
ch.setFormatter(formatter)
# Example main message.
my_logger.setLevel(DEBUG)
my_logger.warn("Hi mom!")
This example is crucial as it demonstrates the injection of two data bits username and funcname without using custom Adapters or Formatters.
Please see the xlog.py repo for more information regarding this solution. This is an example that I have prepared based on other questions and bits of code from other sources.
This line
self.logger = logging.getLogger("myApp")
always retrieves a reference to the same logger, so you are adding an additional handler to it every time you instantiate MyLogger. The following would fix your current instance, since you call MyLogger with a different argument both times.
self.logger = logging.getLogger(name)
but note that you will still have the same problem if you pass the same name argument more than once.
What your class needs to do is keep track of which loggers it has already configured.
class MyLogger(object):
    loggers = set()

    def __init__(self, name, format="%(asctime)s | %(levelname)s | %(message)s", level=INFO):
        # Initial construct.
        self.format = format
        self.level = level
        self.name = name

        # Logger configuration.
        self.console_formatter = logging.Formatter(self.format)
        self.console_logger = logging.StreamHandler(sys.stdout)
        self.console_logger.setFormatter(self.console_formatter)

        # Complete logging config.
        self.logger = logging.getLogger(name)
        if name not in self.loggers:
            self.loggers.add(name)
            self.logger.setLevel(self.level)
            self.logger.addHandler(self.console_logger)
This doesn't allow you to re-configure a logger at all, but I leave it as an exercise to figure out how to do that properly.
The key thing to note, though, is that you can't have two separately configured loggers with the same name.
Of course, the fact that logging.getLogger always returns a reference to the same object for a given name means that your class is working at odds with the logging module. Just configure your loggers once at program start-up, then get references as necessary with getLogger.
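A minimal sketch of that configure-once pattern (module layout and names are assumptions):

import logging
import sys

def configure_logging():
    # Called exactly once at program start-up.
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(
        logging.Formatter("%(asctime)s | %(levelname)s | %(message)s"))
    app_logger = logging.getLogger("myApp")
    app_logger.setLevel(logging.INFO)
    app_logger.addHandler(handler)

configure_logging()

# Anywhere else in the code base: just get references, add no handlers.
log = logging.getLogger("myApp.entity")
log.info("printed exactly once")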

Writing Python output to either screen or filename

I am writing some code that will output a log to either the screen, or a file, but not both.
I thought the easiest way to do this would be to write a class:
import sys

class WriteLog:
    "write to screen or to file"
    def __init__(self, stdout, filename):
        self.stdout = stdout
        self.logfile = open(filename, 'a')

    def write(self, text):
        self.stdout.write(text)
        self.logfile.write(text)

    def close(self):
        self.stdout.close()
        self.logfile.close()
And then call it something like this:
output = WriteLog(sys.stdout, 'log.txt')
However, I'm not sure how to allow for switching between the two, i.e. there should be an option within the class that will set WriteLog to either use stdout, or filename. Once that option has been set I just use WriteLog without any need for if statements etc.
Any ideas? Most of the solutions I see online are trying to output to both simultaneously.
Thanks.
Maybe something like this? It uses the symbolic name 'stdout' or 'stderr' in the constructor, or a real filename. The usage of if is limited to the constructor. By the way, I think you're trying to prematurely optimize (which is the root of all evil): you're trying to save time on if statements while in real life the program will spend much more time in file I/O, making the potential waste on your if statements negligible.
import sys

class WriteLog:
    def __init__(self, output):
        self.output = output
        if output == 'stdout':
            self.logfile = sys.stdout
        elif output == 'stderr':
            self.logfile = sys.stderr
        else:
            self.logfile = open(output, 'a')

    def write(self, text):
        self.logfile.write(text)

    def close(self):
        if self.output != 'stdout' and self.output != 'stderr':
            self.logfile.close()

    def __del__(self):
        self.close()

if __name__ == '__main__':
    a = WriteLog('stdout')
    a.write('This goes to stdout\n')
    b = WriteLog('stderr')
    b.write('This goes to stderr\n')
    c = WriteLog('/tmp/logfile')
    c.write('This goes to /tmp/logfile\n')
I'm not an expert in it, but try the logging library: you can have a logger with two handlers, one for a file and one for a stream, and then add/remove handlers dynamically.
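A minimal sketch of that idea (the logger name, file name, and the switching helper are assumptions):

import logging
import sys

logger = logging.getLogger("switchable")
logger.setLevel(logging.INFO)
console = logging.StreamHandler(sys.stdout)
file_handler = logging.FileHandler("log.txt")

def log_to(destination):
    # Swap handlers so output goes to exactly one destination.
    for h in (console, file_handler):
        logger.removeHandler(h)
    logger.addHandler(console if destination == "stdout" else file_handler)

log_to("stdout")
logger.info("goes to the screen")
log_to("file")
logger.info("goes to log.txt")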
I like the suggestion about using the logging library. But if you want to hack out something yourself, maybe passing in the file handle is worth considering.
import sys

class WriteLog:
    "write to screen or to file"
    def __init__(self, output):
        self.output = output

    def write(self, text):
        self.output.write(text)

    def close(self):
        self.output.close()

logger = WriteLog(open('c:/temp/log.txt', 'a'))
logger.write("I write to the log file.\n")
logger.close()

sysout = WriteLog(sys.stdout)
sysout.write("I write to the screen.\n")
You can utilize the logging library to do something similar to this. The following function will set up a logging object at the INFO level.
import logging

def setup_logging(file_name, log_to_file=False, log_to_console=False):
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)

    # Create console handler
    if log_to_console:
        console_log = logging.StreamHandler()
        console_log.setLevel(logging.INFO)
        formatter = logging.Formatter('%(asctime)s - %(levelname)-8s - %(name)-12s - %(message)s')
        console_log.setFormatter(formatter)
        logger.addHandler(console_log)

    # Log file
    if log_to_file:
        file_log = logging.FileHandler('%s.log' % (file_name), 'a', encoding='UTF-8')
        file_log.setLevel(logging.INFO)
        formatter = logging.Formatter('%(asctime)s - %(levelname)-8s - %(name)-12s - %(message)s')
        file_log.setFormatter(formatter)
        logger.addHandler(file_log)

    return logger
You pass it the name of your log and where you wish to log (to the file, the console, or both). Then you can utilize this function in your code block like this:
logger = setup_logging("mylog", log_to_file=True, log_to_console=False)
logger.info('Message')
logger.info('Message')
This example will log to a file named mylog.log (in the current directory) and have output like this:
2014-11-05 17:20:29,933 - INFO - root - Message
This function has areas for improvement (if you wish to add more functionality). Right now it logs to both the console and file at log level INFO on the .setLevel(logging.INFO) lines. This could be set dynamically if you wish.
Additionally, as it is now, you can easily add standard logging lines (logger.debug('Message'), logger.critical('DANGER!')) without modifying a class. In these examples, the debug messages won't print (because the level is set to INFO) and the critical ones will.
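For instance, the fixed handler level could be turned into a parameter (a sketch; the level parameter and its default are assumptions):

import logging

def setup_logging(file_name, log_to_file=False, log_to_console=False,
                  level=logging.INFO):
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s - %(levelname)-8s - %(name)-12s - %(message)s')
    if log_to_console:
        console_log = logging.StreamHandler()
        console_log.setLevel(level)  # now configurable per call
        console_log.setFormatter(formatter)
        logger.addHandler(console_log)
    if log_to_file:
        file_log = logging.FileHandler('%s.log' % (file_name), 'a', encoding='UTF-8')
        file_log.setLevel(level)  # now configurable per call
        file_log.setFormatter(formatter)
        logger.addHandler(file_log)
    return logger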

logging in continuous loop

What would be a good way to create logs (with the Python logging module) inside a constantly running loop, without producing a large amount of useless log files?
An example would be a loop that constantly lists a folder and does some action when it sees a file of a specific type.
I want to log that no files were found, or that files were found but of a wrong type, without logging the same line constantly for each folder check, as it might run many times a second.
Create a Handler subclass that adds whatever other functionality you need. Store either the last message, or all the previously logged messages that you don't want to emit again:
import logging
import sys

def make_filetype_aware_handler(handler_class):
    class DontRepeatFiletypeHandler(handler_class):
        def __init__(self, *args, **kwds):
            super().__init__(*args, **kwds)
            self.previous_types = set()

        def emit(self, record):
            if record.file_type not in self.previous_types:
                self.previous_types.add(record.file_type)
                super().emit(record)
    return DontRepeatFiletypeHandler

FiletypeStreamHandler = make_filetype_aware_handler(logging.StreamHandler)

logger = logging.getLogger()
logger.addHandler(FiletypeStreamHandler(sys.stderr))

# The handler reads record.file_type, so it must be passed via 'extra'.
logger.debug('Found file of type %s', 'x-type/zomg',
             extra={'file_type': 'x-type/zomg'})
import logging

logger = logging.getLogger('test')

# logging to a file
hdlr = logging.FileHandler('test.log')
formatter = logging.Formatter('%(asctime)s %(filename)s %(lineno)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)
Then in the loop, you have to check for the file type and whether the file is present or not.
Then add:
logger.debug('File Type : %s ' % file_type)
also
if file_present:
    logger.debug('File Present : %s ' % file_present)
Log the less important events with a lower precedence, like DEBUG. See setLevel and SysLogHandler.
At development time set the level to DEBUG, and as your application matures, set it to more reasonable values like INFO or ERROR.
Your app should do something about the errors, like removing files of the wrong type and/or creating missing files, or moving the wrongly configured directories out of the job polling to a quarantine location, so your log will not be flooded.
My understanding is that you are trying to limit logging the same message over and over again.
If this is your issue, I would create a set of the file_types you have already logged. However, you need to be careful: if this is going to run forever, the set will grow without bound and you will eventually crash.
# The built-in set replaces the long-deprecated sets.Set.
logged = set()

while yourCondition:
    file_type = get_next_file_type()
    needToLog = ...  # determine if you need to log this thing
    if needToLog and (file_type not in logged):
        logger.info("BAH! " + file_type)
        logged.add(file_type)
A bit of a hack, but a much easier approach is "misusing" functools.lru_cache.
from functools import lru_cache
from logging import Logger, getLogger

# Keep track of 10 different messages and then warn again
@lru_cache(10)
def warn_once(logger: Logger, msg: str):
    logger.warning(msg)
You can increase the 10 to suppress more if required, or set it to None to suppress everything duplicated.
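A short usage sketch (the logger name and messages are assumptions):

log = getLogger("folder-watcher")
for _ in range(5):
    warn_once(log, "no files found")  # emitted only on the first call
warn_once(log, "wrong file type")     # a new message, emitted once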
