Custom exception default logging - Python

I've built custom exceptions that accept parameters and format their own message from constants. They also print to stdout so the user understands the issue.
For instance:
defs.py:
    PATH_NOT_FOUND_ERROR = 'Cannot find path "{}"'
exceptions.py:
    class PathNotFound(BaseCustomException):
        """Specified path was not found."""
        def __init__(self, path):
            msg = PATH_NOT_FOUND_ERROR.format(path)
            print(msg)
            super(PathNotFound, self).__init__(msg)
some_module.py:
    raise PathNotFound(some_invalid_path)
I also want to log the exceptions as they are raised. The simplest way would be:
    logger.debug('path {} not found'.format(some_invalid_path))
    raise PathNotFound(some_invalid_path)
But doing this all across the code seems redundant, and it especially makes the constants pointless, because if I decide to change the wording I need to change the logger wording too.
I've tried moving the logger call into the exception class, but that makes me lose the relevant LogRecord properties such as name, module, filename, lineno, etc. This approach also loses exc_info.
Is there a way to log the exception and keep the metadata without logging before every raise?

If anyone's interested, here's a working solution.
The idea was to find the raiser's frame and extract the relevant information from there.
I also had to override Logger.makeRecord to let me override internal LogRecord attributes.
Set up logging:
import logging
import six

class MyLogger(logging.Logger):
    """Custom Logger."""

    def makeRecord(self, name, level, fn, lno, msg, args, exc_info, func=None, extra=None, sinfo=None):
        """Override the default record factory to allow overriding internal attributes."""
        if six.PY2:
            rv = logging.LogRecord(name, level, fn, lno, msg, args, exc_info, func)
        else:
            rv = logging.LogRecord(name, level, fn, lno, msg, args, exc_info, func, sinfo)
        if extra is not None:
            for key in extra:
                # Deliberately skip the stdlib guard that would raise:
                # if (key in ["message", "asctime"]) or (key in rv.__dict__):
                #     raise KeyError("Attempt to overwrite %r in LogRecord" % key)
                rv.__dict__[key] = extra[key]
        return rv

logging.setLoggerClass(MyLogger)
logger = logging.getLogger(__name__)
Custom exception base class:
import os
import sys
import traceback

class BaseCustomException(Exception):
    """Base exception that logs itself with the raiser's metadata."""

    def __init__(self, msg):
        """Log msg with the metadata of the frame that raised it."""
        try:
            raise ZeroDivisionError
        except ZeroDivisionError:
            # Find the traceback frame that raised this exception:
            # two frames up skips this __init__ and the subclass __init__.
            exception_frame = sys.exc_info()[2].tb_frame.f_back.f_back
            exception_stack = traceback.extract_stack(exception_frame, limit=1)[0]
            filename, lineno, funcName, tb_msg = exception_stack
            extra = {'filename': os.path.basename(filename), 'lineno': lineno, 'funcName': funcName}
            logger.debug(msg, extra=extra)
            traceback.print_stack(exception_frame)
        super(BaseCustomException, self).__init__(msg)
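On Python 3 the same effect can be had without the ZeroDivisionError trick by walking frames with sys._getframe. A self-contained sketch; the extra keys caller_file/caller_line/caller_func and the example path are my own illustrative names, chosen so they don't collide with reserved LogRecord attributes and therefore need no makeRecord override:

```python
import logging
import os
import sys
import traceback

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

PATH_NOT_FOUND_ERROR = 'Cannot find path "{}"'

class BaseCustomException(Exception):
    """Log the raiser's location when constructed."""
    def __init__(self, msg):
        # Two frames up: past this __init__ and past the subclass __init__.
        frame = sys._getframe(2)
        summary = traceback.extract_stack(frame, limit=1)[0]
        extra = {'caller_file': os.path.basename(summary.filename),
                 'caller_line': summary.lineno,
                 'caller_func': summary.name}
        logger.debug(msg, extra=extra)
        super().__init__(msg)

class PathNotFound(BaseCustomException):
    """Specified path was not found."""
    def __init__(self, path):
        super().__init__(PATH_NOT_FOUND_ERROR.format(path))

try:
    raise PathNotFound('/no/such/dir')
except PathNotFound as exc:
    print(exc)  # Cannot find path "/no/such/dir"
```

The trade-off of non-reserved keys is that a formatter must reference %(caller_file)s instead of %(filename)s.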

Related

Raising exception during logging

In Python 3 is there a way to raise exception during logging? I mean, can I somehow configure the standard logging module to raise an exception when logging.error(e) is done?
You don't normally log an error by calling logging.error(e). You normally call
    logging.error('my message', exc_info=True)
or
    logging.error('my message', exc_info=e)  # where e is the exception object
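For instance, called inside an except block, exc_info=True picks up the active exception and appends its traceback to the log output:

```python
import logging

logging.basicConfig(level=logging.ERROR)

try:
    1 / 0
except ZeroDivisionError:
    # The traceback of the active exception is appended automatically.
    logging.error('division failed', exc_info=True)
    # logging.exception('division failed') is shorthand for the same call.
```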
That being said, you can extend the Logger class to do this:

from logging import Logger, setLoggerClass
from sys import exc_info

class ErrLogger(Logger):
    def error(self, msg, *args, **kwargs):
        super().error(msg, *args, **kwargs)
        err = kwargs.get('exc_info')
        if not isinstance(err, Exception):
            _, err, _ = exc_info()
        if err is None:
            err = SomeDefaultError(msg)  # You decide what to raise
        raise err
Now register the new class using the setLoggerClass function:
setLoggerClass(ErrLogger)
Setting a custom logger class with a custom error is likely a more reliable method than overriding the module-level error, which only operates on the root logger.
The method shown here will raise errors in the following order:
1. An error passed through the exc_info keyword argument.
2. The error currently in the process of being handled.
3. The SomeDefaultError that you designate.
If you want to forego this process, replace everything after super().error(msg, *args, **kwargs) with just raise SomeDefaultError(...).
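Putting it together, a minimal end-to-end sketch; SomeDefaultError is a stand-in name, as in the answer above:

```python
import logging
from sys import exc_info

class SomeDefaultError(Exception):
    """Stand-in for whatever you decide to raise."""

class ErrLogger(logging.Logger):
    def error(self, msg, *args, **kwargs):
        super().error(msg, *args, **kwargs)   # log first
        err = kwargs.get('exc_info')
        if not isinstance(err, Exception):
            _, err, _ = exc_info()            # fall back to the active exception
        if err is None:
            err = SomeDefaultError(msg)       # last resort
        raise err

logging.setLoggerClass(ErrLogger)
log = logging.getLogger('raising')

try:
    log.error('boom')  # no active exception, so SomeDefaultError is raised
except SomeDefaultError as e:
    print(type(e).__name__)  # SomeDefaultError
```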
Yes, you can replace the method with your own:

import logging

def except_only(msg, *args, **kwargs):
    raise Exception(msg)

logging.error = except_only  # do not use () here!
logging.error("Log an error")
Problem: you need to keep a reference to the original method if you want logging plus an exception.

import logging

old_logging_error = logging.error  # do not use () here!

def new_logging_error(msg, *args, **kwargs):
    old_logging_error(msg, *args, **kwargs)
    raise Exception(msg)

logging.error = new_logging_error  # do not use () here!
logging.error("Log an error")

Override Python Lambda's default logging format

I am trying to override the default logging format in a Python Lambda function. I followed this blog post, and it seems very straightforward; in a nutshell, do this inside the Lambda function:

LOGGER = logging.getLogger()
HANDLER = LOGGER.handlers[0]
HANDLER.setFormatter(logging.Formatter("[%(asctime)s] %(levelname)s:%(name)s:%(message)s", "%Y-%m-%d %H:%M:%S"))

But when executing it, my LOGGER.handlers has no handlers in it, so the override fails.
I have tried adding a new StreamHandler with its own formatter:

LOGGER = Logger.factory(name)
sh = logging.StreamHandler()
sh.setFormatter(JSONFormatter())
LOGGER.add_handler(sh)

That works, but I get two of the same log lines for every line I try to log: one from my custom StreamHandler and its formatter, and one default Lambda-formatted line. So it seems that in the end I have two handlers.
My question is: where should I override the logging handler's format, and when does Lambda add its own handler?
At the start of the run, I would change the default Python formatter:

import logging
logging.Formatter.format = my_format_function
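The answer leaves my_format_function undefined; here is one possible body (an assumption, not from the original), which bypasses the handler's configured format string entirely:

```python
import logging

def my_format_function(self, record):
    # Replacement for logging.Formatter.format: ignore the configured
    # format string and emit a fixed layout instead.
    return '[{}] {}: {}'.format(record.levelname, record.name, record.getMessage())

logging.Formatter.format = my_format_function

logging.basicConfig(level=logging.INFO)
logging.getLogger('demo').info('hello %s', 'world')  # stderr: [INFO] demo: hello world
```

Note this drops the exception text a real Formatter.format would append when exc_info is set, so a production version should handle record.exc_info too.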
I used the same method to override the makeRecord function to include more metadata:

import json
import logging

original_makeRecord = logging.Logger.makeRecord

def myMakeRecord(self, name, level, fn, lno, msg, args, exc_info, func=None, extra=None, sinfo=None):
    """
    The normal Python record factory does not provide all the fields, so this
    enriches the message and formats it for logz.io.
    """
    if extra is None:
        extra = {}
    extra.update({'message': msg,
                  'logger': name,
                  'line_number': lno,
                  'path_name': fn})
    if func is not None:
        extra.update({'func_name': func})
    msg = json.dumps(extra)
    extra = None
    return original_makeRecord(self, name, level, fn, lno, msg, args, exc_info, func=func, extra=extra)

logging.Logger.makeRecord = myMakeRecord

Python Logging - Override args/msg in "record" to avoid changing to mutable object data

Is there a way to implement a Python logging filter with a redaction class in such a way that any call to log a message with arguments (e.g. LOGGER.debug('my message, %s, %s', data1, data2)) uses a copy of the msg and args instead of the passed mutable object? Our implementation means that the logging redaction class ends up changing the mutable data, which then gets stored/sent with the redaction string. I know we could do this on the call to the logger, for example LOGGER.debug('my message, %s, %s', copy.deepcopy(data1), copy.deepcopy(data2)), but I was hoping there was a way to override a filter function responsible for the "record" in the logging setup definition?
You have to override the makeRecord function. By the time filters are applied, the record has already been created.
import logging
import copy

class CopyLogger(logging.Logger):
    def makeRecord(self, name, level, fn, lno, msg, args, exc_info, func=None, extra=None, sinfo=None):
        args = copy.deepcopy(args)  # <- this is the line you care about
        return super().makeRecord(name, level, fn, lno, msg, args, exc_info, func, extra, sinfo)

logging.setLoggerClass(CopyLogger)
log = logging.getLogger('mylogger')

data1 = 'hello'
data2 = 'world'
log.warning('my message, %s, %s', data1, data2)
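To see why the deepcopy matters, pair CopyLogger with a redaction filter that mutates record.args in place (the Redactor filter here is a hypothetical sketch): the filter scrubs the copied args while the caller's dict survives.

```python
import copy
import logging

class CopyLogger(logging.Logger):
    def makeRecord(self, name, level, fn, lno, msg, args, exc_info,
                   func=None, extra=None, sinfo=None):
        args = copy.deepcopy(args)  # detach record.args from the caller's data
        return super().makeRecord(name, level, fn, lno, msg, args,
                                  exc_info, func, extra, sinfo)

class Redactor(logging.Filter):
    """Hypothetical redaction filter that mutates dict args in place."""
    def filter(self, record):
        for arg in record.args or ():
            if isinstance(arg, dict) and 'password' in arg:
                arg['password'] = '***'
        return True

logging.setLoggerClass(CopyLogger)
log = logging.getLogger('redacted')
log.addFilter(Redactor())

creds = {'user': 'bob', 'password': 'hunter2'}
log.warning('login by %s: %s', 'bob', creds)  # logged message shows '***'
print(creds['password'])  # hunter2 - the caller's dict is untouched
```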

python logging.Logger: overriding makeRecord

I have a formatter that expects a special attribute in the record, "user_id", that is not always there (sometimes I add it to records using a special logging.Filter).
I tried to override the makeRecord method of logging.Logger like so:
import logging

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)-15s user_id=%(user_id)s %(filename)s:%(lineno)-15s: %(message)s')

class OneTestLogger(logging.Logger):
    def makeRecord(self, name, level, fn, lno, msg, args, exc_info, func=None, extra=None):
        rv = logging.Logger.makeRecord(self, name, level, fn, lno,
                                       msg, args, exc_info,
                                       func, extra)
        rv.__dict__.setdefault('user_id', 'master')
        return rv

if __name__ == '__main__':
    logger = OneTestLogger('main')
    print logger
    logger.info('Starting test')
But that doesn't seem to work and I keep getting:
<main.MyLogger instance at 0x7f31a6a5b638>
No handlers could be found for logger "main"
What am I doing wrong?
Thanks.
Following the guideline provided in the Logging Cookbook, just the first part; I did not implement a Filter (which does not appear in the quote below either).
This has usually meant that if you need to do anything special with a LogRecord, you’ve had to do one of the following.
Create your own Logger subclass, which overrides Logger.makeRecord(), and set it using setLoggerClass() before any loggers that you care about are instantiated.
I have simplified your example just to add the 'hostname':
import logging
from socket import gethostname

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s - %(hostname)s - %(message)s')

class NewLogger(logging.Logger):
    def makeRecord(self, *args, **kwargs):
        rv = super(NewLogger, self).makeRecord(*args, **kwargs)
        # Updating the rv value of the original makeRecord.
        # The idea is the same as a decorator: intercept the value
        # returned by the original makeRecord and extend it with what I need.
        rv.__dict__['hostname'] = gethostname()
        # Out of curiosity, check what is in this dictionary:
        # print(rv.__dict__)
        return rv

logging.setLoggerClass(NewLogger)
logger = logging.getLogger(__name__)
logger.info('Hello World!')
Note that this code was tested on Python 2.7.

Alter Python logger stack level

Very often when writing framework code, I would prefer the caller's line number and file name to be logged. For example, if I detect improper use of a framework-level API call, I would like to log that not as an in-framework error but as a "caller error".
This only comes into play when writing low-level libraries and systems that use introspection.
Is there any way to get the logger to log "one level up"? Can I create a custom LogRecord, then modify it and use it within the installed loggers somehow? Trying to figure this out.
For example something like this:

def set(self, x):
    if x not in self._cols:
        log.error("Invalid attribute for set", stack_level=-1)
This is easy to find on Google now since it was added in Python 3.8, and it was mentioned in the other answer, but here is a fuller explanation. You were close with the stack_level argument, but it should be stacklevel and the value should be 2. For example you can do:
import logging

def fun():
    logging.basicConfig(format='line %(lineno)s : %(message)s')
    log = logging.getLogger(__name__)
    log.error('Something is wrong', stacklevel=2)

if __name__ == '__main__':
    fun()
This will output:
line 9 : Something is wrong
If the stacklevel was set to the default of 1, it would output:
line 6 : Something is wrong
From the documentation: https://docs.python.org/3/library/logging.html#logging.Logger.debug
The third optional keyword argument is stacklevel, which defaults to 1. If greater than 1, the corresponding number of stack frames are skipped when computing the line number and function name set in the LogRecord created for the logging event. This can be used in logging helpers so that the function name, filename and line number recorded are not the information for the helper function/method, but rather its caller. The name of this parameter mirrors the equivalent one in the warnings module.
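This is exactly the "logging helper" case from the docs: a framework-level check helper can charge the message to its caller (require is an illustrative name, not a stdlib function):

```python
import logging

logging.basicConfig(format='%(funcName)s:%(lineno)d %(message)s')
log = logging.getLogger(__name__)

def require(condition, message):
    """Framework helper: report the caller's location, not this line."""
    if not condition:
        log.error(message, stacklevel=2)  # skip this helper's frame (3.8+)

def client_code():
    require(False, 'invalid attribute for set')

client_code()  # logs with client_code's funcName and line number
```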
OK, I figured this out for Python prior to 3.8:
First, you need to use inspect to get the frame. Then pass the caller's info through the extra= parameter of the log call. But you also have to override makeRecord to bypass the guards that otherwise prevent overriding filename and lineno from extra.
import inspect
import logging
import os

def myMakeRecord(self, name, level, fn, lno, msg, args, exc_info, func, extra, sinfo):
    rv = logging.LogRecord(name, level, fn, lno, msg, args, exc_info, func, sinfo)
    if extra is not None:
        rv.__dict__.update(extra)
    return rv

def mylog(logger, msg, level, *args):
    logging.Logger.makeRecord = myMakeRecord
    frame = inspect.currentframe()
    caller = frame.f_back
    override = {
        "lineno": caller.f_lineno,
        "filename": os.path.basename(caller.f_code.co_filename),
    }
    logger.log(level, msg, extra=override, *args)
Oddly, I couldn't get this to work when extra and sinfo had default values of None (like they do in the original definition). Maybe myMakeRecord should use *args, but that would require grabbing extra = args[9], which is odd, but maybe less bad (more likely to be future-proof).
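For reference, the *args variant mused about above could look like this; with self included, extra is indeed args[9] in CPython 3's positional call from Logger._log (a sketch, not the original answer's code):

```python
import inspect
import logging
import os

def myMakeRecord(*args):
    # Positional layout when patched onto Logger (CPython 3's _log):
    # (self, name, level, fn, lno, msg, margs, exc_info, func, extra, sinfo)
    self, name, level, fn, lno, msg, margs, exc_info, func, extra, sinfo = args
    rv = logging.LogRecord(name, level, fn, lno, msg, margs, exc_info, func, sinfo)
    if extra is not None:
        rv.__dict__.update(extra)  # no guard against reserved keys
    return rv

logging.Logger.makeRecord = myMakeRecord

def mylog(logger, msg, level, *margs):
    caller = inspect.currentframe().f_back
    override = {
        'lineno': caller.f_lineno,
        'filename': os.path.basename(caller.f_code.co_filename),
    }
    logger.log(level, msg, *margs, extra=override)
```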
I did the same as Erik, but with some changes:
import logging
import inspect
from os import path

loggers_dict = {}
base_logger = logging.getLogger()  # any logger works; only findCaller() is used

def myMakeRecord(self, name, level, fn, lno, msg, args, exc_info, func=None, extra=None):
    if extra and 'pathname' in extra:
        fn = extra.pop('pathname')
    rv = logging.LogRecord(name, level, fn, lno, msg, args, exc_info, func)
    if extra is not None:
        rv.__dict__.update(extra)
    return rv

logging.Logger.makeRecord = myMakeRecord

def getLogger():
    fn, lno, func = base_logger.findCaller()
    extras = {'pathname': fn, 'lineno': lno, 'funcName': func}
    fnn = path.normcase(fn)
    caller_name = inspect.modulesbyfile.get(fnn, inspect.getmodule(None, fn).__name__)
    if caller_name not in loggers_dict:
        loggers_dict[caller_name] = logging.getLogger(caller_name)
    return loggers_dict[caller_name], extras

def myLogDebug(*msg):
    log, extras = getLogger()
    if len(msg) == 1:
        log.debug(msg[0], extra=extras)
    else:
        log.debug(' '.join(map(str, msg)), extra=extras)
The main thing here is legacy: myLogDebug calls are all over the code, so it would be messy to change everything. The other problem was the Python 2.7 version; it would be nice if I could use the stacklevel param from this thread.
Then I modified some stuff to get the right caller stack frame level, without changing anything in the original method.
EDIT - 1
caller_name = inspect.modulesbyfile.get(fnn, inspect.getmodule(None, fn).__name__)
This part is a little hack, just to avoid running inspect's getmodule all the time. There is a dict (an internal cache, modulesbyfile) which gives direct access to module names after the first getmodule call.
Sometimes debugging and tracking the source helps; this cache is not documented.
