Raising exception during logging - python

In Python 3, is there a way to raise an exception during logging? I mean, can I somehow configure the standard logging module to raise an exception when logging.error(e) is called?

You don't normally log an error by calling logging.error(e). You normally call

logging.error('my message', exc_info=True)

or

logging.error('my message', exc_info=e)  # where e is the exception object
That being said, you can extend the Logger class to do this:
from logging import Logger, setLoggerClass
from sys import exc_info

class ErrLogger(Logger):
    def error(self, msg, *args, **kwargs):
        super().error(msg, *args, **kwargs)
        err = kwargs.get('exc_info')
        if not isinstance(err, Exception):
            _, err, _ = exc_info()
        if err is None:
            err = SomeDefaultError(msg)  # You decide what to raise
        raise err
Now register the new class using the setLoggerClass function:
setLoggerClass(ErrLogger)
Setting a custom logger with a custom error is likely a more reliable method than overriding the module level error, which only operates on the root logger.
The method shown here will raise errors in the following order:

1. An error passed through the exc_info keyword argument.
2. The error currently in the process of being handled.
3. The SomeDefaultError that you designate.

If you want to forgo this process, replace everything after super().error(msg, *args, **kwargs) with just raise SomeDefaultError(...).
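A minimal, self-contained sketch of the approach (RuntimeError stands in here for the SomeDefaultError placeholder, which you would define yourself):

```python
import logging

class ErrLogger(logging.Logger):
    """Logger whose error() also raises an exception."""

    def error(self, msg, *args, **kwargs):
        super().error(msg, *args, **kwargs)
        err = kwargs.get('exc_info')
        if not isinstance(err, Exception):
            err = RuntimeError(msg)  # stand-in for SomeDefaultError
        raise err

logging.setLoggerClass(ErrLogger)
logger = logging.getLogger('raising_demo')

try:
    logger.error('something went wrong')
except RuntimeError as exc:
    print(type(exc).__name__, exc)  # RuntimeError something went wrong
```

Because setLoggerClass only affects loggers created afterwards, register the class before the first getLogger call for the names you care about.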

Yes, you can replace the method with your own function.

import logging

def except_only(msg, *args, **kwargs):
    raise Exception(msg)

logging.error = except_only  # do not use () here!
logging.error("Log an error")
Problem: you need to remember the original method if you want logging + an exception.
import logging

old_logging_error = logging.error  # do not use () here!

def new_logging_error(msg, *args, **kwargs):
    old_logging_error(msg, *args, **kwargs)
    raise Exception(msg)

logging.error = new_logging_error  # do not use () here!
logging.error("Log an error")
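Put together, a self-contained version of that patch. Note the caveat from the first answer: this only affects the module-level logging.error helper (which operates on the root logger), not error() called on logger objects obtained via getLogger:

```python
import logging

_original_error = logging.error  # keep a reference, no () here

def raising_error(msg, *args, **kwargs):
    _original_error(msg, *args, **kwargs)
    raise Exception(msg)

logging.error = raising_error

try:
    logging.error("Log an error")  # logs, then raises
except Exception as exc:
    print("caught:", exc)          # caught: Log an error

# Calls through a named logger bypass the patch entirely:
logging.getLogger("app").error("logged, nothing raised")
```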


Logging an exception if and only if it was not caught later by try and except

I have a class in which I log errors and raise them. However, I have various functions that expect that error.
Now, even if these functions catch the error properly, it is still logged. This leads to confusing log files in which multiple conflicting entries can be seen. For example:
import logging

logging.basicConfig(filename="./log")
logger = logging.getLogger()
logger.setLevel(logging.INFO)

class Foo:
    def __init__(self):
        pass

    def foo_error(self):
        logger.error("You have done something very stupid!")
        raise RuntimeError("You have done something very stupid!")

    def foo_except(self):
        try:
            self.foo_error()
        except RuntimeError as error:
            logger.info("It was not so stupid after all!")

Foo = Foo()
Foo.foo_except()
Here, both messages show up in "./log". Preferably, I would like to suppress the first error log message if it is caught later on.
I have not seen an answer anywhere else. Maybe the way I am doing things suggests bad design. Any ideas?
You cannot really ask Python whether an exception will be caught later on. So your only choice is to log only after you know whether the exception was caught or not.
One possible solution (though I'm not sure if this will work in your context):
import logging

logging.basicConfig(filename="./log")
logger = logging.getLogger()

class Foo:
    def __init__(self):
        pass

    def foo_error(self):
        # logger.error("You have done something very stupid!")
        raise RuntimeError("You have done something very stupid!")

    def foo_except(self):
        try:
            self.foo_error()
        except RuntimeError as error:
            logger.warning("It was not so stupid after all!")

try:
    Foo = Foo()
    Foo.foo_except()
    Foo.foo_error()
except Exception as exc:
    if isinstance(exc, RuntimeError):
        logger.error("%s", exc)
    raise
After some more thinking and several failed attempts, I arrived at the following answer.
Firstly, as @gelonida mentioned:
You cannot really ask Python whether an exception will be caught later on.
This implies that a log entry accompanying a raised exception has to be written immediately, because if we deferred it and the exception was never caught, the entry would be missing from the file.
So instead of trying to control which log message gets written to file, we should implement a way to delete voided log messages from the file.
import logging

logging.basicConfig(filename="./log")
logger = logging.getLogger()
logger.setLevel(logging.INFO)

class Foo:
    def __init__(self):
        pass

    def foo_error(self):
        logger.error("You have done something very stupid!")
        raise RuntimeError("You have done something very stupid!")

    def foo_except(self):
        try:
            self.foo_error()
        except RuntimeError as error:
            logger.info("It was not so stupid after all!")

Foo = Foo()
Foo.foo_except()
Following that logic, we should replace the line logger.info("It was not so stupid after all!") in the original example above with a function that deletes the last committed log message and logs the correct one instead!
One way to achieve this is to extend the logging machinery with two components: a log record history and a FileHandler which supports deletion of log records. Let's start with the log record history.
class RecordHistory:
    def __init__(self):
        self._record_history = []

    def write(self, record):
        self._record_history.append(record)

    def flush(self):
        pass

    def get(self):
        return self._record_history[-1]

    def pop(self):
        return self._record_history.pop()
This is basically a data container which implements the write and flush methods alongside some other conveniences. The write and flush methods are required by logging.StreamHandler; for more information, see the logging.handlers documentation.
Next, we modify the existing logging.FileHandler to support the revoke method. This method allows us to delete a specific log record.
import re

class RevokableFileHandler(logging.FileHandler):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def revoke(self, record):
        with open(self.baseFilename, mode="r+") as log:
            # re.escape so the record text is matched literally
            substitute = re.sub(re.escape(record), "", log.read(), count=1)
            log.seek(0)
            log.write(substitute)
            log.truncate()
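In isolation, the handler can be exercised like this (using re.escape, a small hardening assumption, so that record text containing regex metacharacters is matched literally):

```python
import logging
import os
import re
import tempfile

class RevokableFileHandler(logging.FileHandler):
    """FileHandler whose records can later be removed from the file."""

    def revoke(self, record):
        with open(self.baseFilename, mode="r+") as log:
            # count=1 removes only the first occurrence of the record text
            substitute = re.sub(re.escape(record), "", log.read(), count=1)
            log.seek(0)
            log.write(substitute)
            log.truncate()

path = os.path.join(tempfile.mkdtemp(), "log")
logger = logging.getLogger("revoke_demo")
logger.setLevel(logging.INFO)
handler = RevokableFileHandler(path)
logger.addHandler(handler)

logger.error("to be revoked")
logger.error("kept")
handler.revoke("to be revoked\n")  # records end with the "\n" terminator

with open(path) as log:
    print(log.read())  # -> kept
```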
Finally, we modify the Logger class. Note, however, that we cannot inherit from logging.Logger directly, as stated here. Additionally, we add a logging.StreamHandler which pushes log records to our RecordHistory object. We also implement an addRevokableHandler method which registers all handlers that support revoking records.
import logging

class Logger(logging.getLoggerClass()):
    def __init__(self, name):
        super().__init__(name)
        self.revokable_handlers = []
        self.record_history = RecordHistory()
        stream_handler = logging.StreamHandler(stream=self.record_history)
        stream_handler.setLevel(logging.INFO)
        self.addHandler(stream_handler)

    def addRevokableHandler(self, handler):
        self.revokable_handlers.append(handler)
        super().addHandler(handler)

    def pop_and_log(self, level, msg):
        record = self.record_history.pop()
        for handler in self.revokable_handlers:
            handler.revoke(record)
        self.log(level, msg)
This leads to the following solution in the original code:
logging.setLoggerClass(Logger)
logger = logging.getLogger("root")
logger.setLevel(logging.INFO)

file_handler = RevokableFileHandler("./log")
file_handler.setLevel(logging.INFO)
logger.addRevokableHandler(file_handler)

class Foo:
    def __init__(self):
        pass

    def foo_error(self):
        logger.error("You have done something very stupid!")
        raise RuntimeError("You have done something very stupid!")

    def foo_except(self):
        try:
            self.foo_error()
        except RuntimeError as error:
            logger.pop_and_log(logging.INFO, "It was not so stupid after all!")

Foo = Foo()
Foo.foo_except()
Hopefully this lengthy answer can be of use to someone. Though it is still not clear to me if logging errors and info messages in such a way is considered bad code design.

CTypes, raising python-style exception from wrapper

In patching the __rshift__ operator of primitive python types with a callable, the patching utilises a wrapper:
def _patch_rshift(py_class, func):
    assert isinstance(func, FunctionType)
    py_type_name = 'tp_as_number'
    py_type_method = 'nb_rshift'
    py_obj = PyTypeObject.from_address(id(py_class))
    type_ptr = getattr(py_obj, py_type_name)
    if not type_ptr:
        tp_as_obj = PyNumberMethods()
        FUNC_INDIRECTION_REFS[(py_class, '__rshift__')] = tp_as_obj
        tp_as_new_ptr = ctypes.cast(ctypes.addressof(tp_as_obj),
                                    ctypes.POINTER(PyNumberMethods))
        setattr(py_obj, py_type_name, tp_as_new_ptr)
    type_head = type_ptr[0]
    c_func_t = binary_func_p

    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except BaseException as be:
            # Need to raise a python style error here:
            wrapper.exc_info = sys.exc_info()
            return False

    c_func = c_func_t(wrapper)
    C_FUNC_CALLBACK_REFS[(py_class, '__rshift__')] = c_func
    setattr(type_head, py_type_method, c_func)
The challenge now is, once an exception is caught inside wrapper, to raise it just like any normal Python exception.
Raising like:

@wraps(func)
def wrapper(*args, **kwargs):
    try:
        return func(*args, **kwargs)
    except BaseException as be:
        raise

or not catching at all:

@wraps(func)
def wrapper(*args, **kwargs):
    return func(*args, **kwargs)
Yields:
Windows fatal exception: access violation
Process finished with exit code -1073741819 (0xC0000005)
The desired behavior is to simply re-raise the caught exception, but with a python-style output, if possible.
Related answers rely on the platform being Windows-centered to achieve the desired outcome, which is not suitable, do not preserve the original exception, or do not achieve the desired python exception-style behaviour:
Get error message from ctypes windll
Ctypes catching exception
UPDATE:
After some more digging, it would appear that raising anywhere in this method triggers a segfault. None the wiser on how to solve it.
Not a solution, but a workaround:
Manually collecting the error information (using traceback and inspect), printing to sys.stderr, and then returning a pointer to a CTypes error singleton (as done in forbiddenfruit) prevents the segfault from occurring.
The output would then be:
Custom error code
Python error code as one would expect from the un-patched primitive
(For brevity, some of the methods used here are not included as they do not add any value to the solution, I opted for a pytest style error format.)
def wrapper(*args, **kwargs):
    try:
        return func(*args, **kwargs)
    except BaseException as be:
        # Traceback will not help us here, assemble a custom error message:
        from custom_inspect_utils import get_calling_expression_recursively
        import sys
        calling_file, calling_expression = get_calling_expression_recursively()
        indentation = " " * (len(calling_expression) - len(calling_expression.lstrip()))
        print(f"\n{calling_file}\n>{calling_expression}E{indentation}{type(be).__name__}: {be}", file=sys.stderr)
        print(f"\n\tAs this package patches primitive types, the python RTE also raised:", file=sys.stderr)
        return ErrorSingleton
For example, after patching list to implement __rshift__, the expression [1,2,3] >> wrong_target, where one expects a CustomException, will now first output
source_file.py:330 (method_name)
> [1,2,3] >> wrong_target
E CustomException: Message.
followed by a TypeError:
TypeError: unsupported operand type(s) for >>: 'list' and 'WrongType'

Custom exception default logging

I've built custom exceptions that accept parameters and format their own messages from constants. They also print to stdout so the user understands the issue.
For instance:
defs.py:

PATH_NOT_FOUND_ERROR = 'Cannot find path "{}"'

exceptions.py:

class PathNotFound(BaseCustomException):
    """Specified path was not found."""

    def __init__(self, path):
        msg = PATH_NOT_FOUND_ERROR.format(path)
        print(msg)
        super(PathNotFound, self).__init__(msg)

some_module.py:

raise PathNotFound(some_invalid_path)
I also want to log the exceptions as they are thrown; the simplest way would be:

logger.debug('path {} not found'.format(some_invalid_path))
raise PathNotFound(some_invalid_path)
But doing this all across the code seems redundant, and in particular it makes the constants pointless, because if I decide to change the wording I need to change the logger wording too.
I've tried things like moving the logger call into the exception class, but that makes me lose the relevant LogRecord properties like name, module, filename, lineno, etc. This approach also loses exc_info.
Is there a way to log the exception and keep the metadata without logging before every raise?
If anyone's interested, here's a working solution.
The idea was to find the raiser's frame and extract the relevant information from there.
I also had to override Logger.makeRecord to allow overriding internal LogRecord attributes.
Set up logging
import logging
import six

class MyLogger(logging.Logger):
    """Custom Logger."""

    def makeRecord(self, name, level, fn, lno, msg, args, exc_info,
                   func=None, extra=None, sinfo=None):
        """Override default makeRecord to allow overriding internal attributes."""
        if six.PY2:
            rv = logging.LogRecord(name, level, fn, lno, msg, args, exc_info, func)
        else:
            rv = logging.LogRecord(name, level, fn, lno, msg, args, exc_info, func, sinfo)
        if extra is not None:
            for key in extra:
                # if (key in ["message", "asctime"]) or (key in rv.__dict__):
                #     raise KeyError("Attempt to overwrite %r in LogRecord" % key)
                rv.__dict__[key] = extra[key]
        return rv

logging.setLoggerClass(MyLogger)
logger = logging.getLogger(__name__)
Custom Exception Handler
import os
import sys
import traceback

class BaseCustomException(Exception):
    """Base exception that logs itself from the raiser's frame."""

    def __init__(self, msg):
        try:
            raise ZeroDivisionError
        except ZeroDivisionError:
            # Find the traceback frame that raised this exception
            exception_frame = sys.exc_info()[2].tb_frame.f_back.f_back
            exception_stack = traceback.extract_stack(exception_frame, limit=1)[0]
            filename, lineno, funcName, tb_msg = exception_stack
            extra = {'filename': os.path.basename(filename), 'lineno': lineno, 'funcName': funcName}
            logger.debug(msg, extra=extra)
            traceback.print_stack(exception_frame)
        super(BaseCustomException, self).__init__(msg)
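For what it's worth, on Python 3.8+ the frame-walking can likely be avoided: logging methods accept a stacklevel argument that shifts the recorded filename/lineno/funcName up the call stack. A sketch of the same idea (the handler and function names here are illustrative):

```python
import logging

logger = logging.getLogger("excdemo")
logger.setLevel(logging.DEBUG)

records = []

class ListHandler(logging.Handler):
    """Collects records so we can inspect their metadata."""
    def emit(self, record):
        records.append(record)

logger.addHandler(ListHandler())

class BaseCustomException(Exception):
    """Logs its own message, attributing the record to the raise site."""

    def __init__(self, msg):
        # stacklevel=2 makes the LogRecord point at the frame that
        # constructed the exception, not at this __init__ (Python 3.8+).
        logger.debug(msg, stacklevel=2)
        super().__init__(msg)

def do_work():
    raise BaseCustomException('Cannot find path "/tmp/missing"')

try:
    do_work()
except BaseCustomException:
    pass

print(records[0].funcName)  # do_work
```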

How can you catch a custom exception from Celery worker, or stop it being prefixed with `celery.backends.base`?

My Celery task raises a custom exception NonTransientProcessingError, which should then be caught around AsyncResult.get(). tasks.py:
class NonTransientProcessingError(Exception):
    pass

@shared_task()
def throw_exception():
    raise NonTransientProcessingError('Error raised by POC model for test purposes')
In the Python console:
from my_app.tasks import *

r = throw_exception.apply_async()
try:
    r.get()
except NonTransientProcessingError as e:
    print('caught NonTrans in type specific except clause')
But my custom exception is my_app.tasks.NonTransientProcessingError, whereas the exception raised by AsyncResult.get() is celery.backends.base.NonTransientProcessingError, so my except clause fails.
Traceback (most recent call last):
File "<input>", line 4, in <module>
File "/...venv/lib/python3.5/site-packages/celery/result.py", line 175, in get
raise meta['result']
celery.backends.base.NonTransientProcessingError: Error raised by POC model for test purposes
If I catch the exception within the task, it works fine. It is only when the exception is raised to the .get() call that it is renamed.
How can I raise a custom exception and catch it correctly?
I have confirmed that the same happens when I define a Task class and raise the custom exception in its on_failure method. The following does work:
try:
    r.get()
except Exception as e:
    if type(e).__name__ == 'NonTransientProcessingError':
        print('specific exception identified')
    else:
        print('caught generic but not identified')
Outputs:
specific exception identified
But this can't be the best way of doing this? Ideally I'd like to catch exception superclasses for categories of behaviour.
I'm using Django 1.8.6, Python 3.5 and Celery 3.1.18, with a Redis 3.1.18, Python redis lib 2.10.3 backend.
import celery
from celery import shared_task

class NonTransientProcessingError(Exception):
    pass

class CeleryTask(celery.Task):
    def on_failure(self, exc, task_id, args, kwargs, einfo):
        if isinstance(exc, NonTransientProcessingError):
            # deal with NonTransientProcessingError
            pass

    def run(self, *args, **kwargs):
        pass

@shared_task(base=CeleryTask)
def add(x, y):
    raise NonTransientProcessingError
Use a base Task with an on_failure callback to catch the custom exception.
It might be a bit late, but I have a solution for this issue. Not sure if it is the best one, but it solved my problem at least.
I had the same issue: I wanted to catch the exception produced in the celery task, but the result was of the class celery.backends.base.CustomException. The solution is of the following form:
import celery, time
from celery.result import AsyncResult
from celery.states import FAILURE

def reraise_celery_exception(info):
    exec("raise {class_name}('{message}')".format(class_name=info.__class__.__name__, message=info.__str__()))

class CustomException(Exception):
    pass

@celery.task(name="test")
def test():
    time.sleep(10)
    raise CustomException("Exception is raised!")

def run_test():
    task = test.delay()
    return task.id

def get_result(id):
    task = AsyncResult(id)
    if task.state == FAILURE:
        reraise_celery_exception(task.info)
In this case you prevent your program from raising celery.backends.base.CustomException, and force it to raise the right exception.
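A variant of that workaround which avoids exec (a sketch, under the assumption that the local exception classes you care about are known up front and can be looked up by name):

```python
class CustomException(Exception):
    pass

# Local classes we expect tasks to raise; extend as needed.
KNOWN_EXCEPTIONS = {exc.__name__: exc for exc in (CustomException, ValueError)}

def reraise_as_local(info):
    """Re-raise a backend-wrapped exception (e.g. an instance of
    celery.backends.base.CustomException) as the matching local class."""
    exc_class = KNOWN_EXCEPTIONS.get(type(info).__name__, Exception)
    raise exc_class(str(info))

# Simulate the backend's foreign class of the same name:
Foreign = type("CustomException", (Exception,), {})
try:
    reraise_as_local(Foreign("Exception is raised!"))
except CustomException as exc:
    print("caught local:", exc)  # caught local: Exception is raised!
```

Looking the class up in an explicit registry keeps arbitrary strings out of exec and falls back to a plain Exception for names you did not anticipate.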

Assign exception handler to methods of class

I have a class with multiple functions. These functions will handle similar kinds of exceptions. Can I have a handler function and assign it to these functions?
In the end, I would like the functions themselves to contain no exception handling; on an exception, control should go to this handler function.
class Foo:
    def a(self):
        try:
            some_code
        except Exception1 as ex:
            # log error
            # mark file for error
            # other housekeeping
            return some_status, ex.error
        except Exception2 as ex:
            # log error
            # mark file for error
            # other housekeeping
            return some_status, ex.error
Similarly, other functions would have the same kinds of exceptions. I want to have all this exception handling in a separate method, so that the functions simply hand over control to the exception handler function.
I can think of calling every function from a wrapper handler function, but that looks very weird to me:
class Foo:
    def process_func(self, func, *args, **kwargs):
        try:
            func(*args, **kwargs)
        except Exception1 as ex:
            # log error
            # mark file for error
            # other housekeeping
            return some_status, ex.error
        except Exception2 as ex:
            # log error
            # mark file for error
            # other housekeeping
            return some_status, ex.error

    def a(self, *args, **kwargs):
        some_code
Is there a better way for doing this?
You can define a function decorator:

def process_func(func):
    def wrapped_func(*args, **kwargs):
        try:
            func(*args, **kwargs)
        except ...
    return wrapped_func

And use it as:

@process_func
def func(...):
    ...

So that func(...) is equivalent to process_func(func)(...), and errors are handled inside wrapped_func.
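A fleshed-out sketch of that decorator (ValueError/KeyError and the returned status tuples stand in for the Exception1/Exception2 and housekeeping from the question):

```python
import functools
import logging

logger = logging.getLogger(__name__)

def process_func(func):
    """Route exceptions from the wrapped method to one shared handler."""
    @functools.wraps(func)
    def wrapped_func(*args, **kwargs):
        try:
            return "ok", func(*args, **kwargs)
        except (ValueError, KeyError) as ex:  # your Exception1 / Exception2
            logger.error("%s failed: %s", func.__name__, ex)
            # mark file for error / other housekeeping would go here
            return "error", str(ex)
    return wrapped_func

class Foo:
    @process_func
    def a(self, n):
        if n < 0:
            raise ValueError("negative input")
        return n * 2

foo = Foo()
print(foo.a(3))   # ('ok', 6)
print(foo.a(-1))  # ('error', 'negative input')
```

functools.wraps preserves the wrapped method's name and docstring, which keeps the log messages and introspection readable.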
