I'm writing a logging module in Python that reports every exception raised at runtime to a server, so in every function I have to write:
def a_func():
    try:
        # stuff here
        pass
    except:
        Logger.writeError(self.__class__.__name__, inspect.stack()[1][3],
                          tracer(self, vars()))
As you can see, I'm using the vars() function to get the variables in scope when the exception occurred. I read about decorators and decided to use them:
def flog(func):
    def flog_wrapper(*args, **kwargs):
        try:
            func(*args, **kwargs)
        except Exception as e:
            print "At flog:", e
            #self.myLogger.writeError(self.__class__.__name__, inspect.stack()[1][3], tracer(self, vars()))
    return flog_wrapper
The problem is that I don't have access to the original function's (func's) variables (vars()) here. Is there a way to access them in the decorator?
You don't need to use vars(). The traceback of an exception has everything you need:
import sys
import inspect

def flog(func):
    def flog_wrapper(self, *args, **kwargs):
        try:
            return func(self, *args, **kwargs)
        except Exception:
            exc_type, exc_value, tb = sys.exc_info()
            print "At flog:", exc_value
            # tb.tb_frame is this wrapper's frame; tb.tb_next.tb_frame is
            # the frame of func, where the exception was actually raised.
            locals = tb.tb_next.tb_frame.f_locals
            self.myLogger.writeError(type(self).__name__, inspect.stack()[1][3], tracer(self, locals))
            del tb
    return flog_wrapper
The traceback contains a chained series of execution frames; each frame has a reference to the locals used in that frame.
You do very much want to clean up the reference to the traceback; because the traceback includes the wrapper function frame, you have a circular reference and that is best broken early.
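For a self-contained illustration, here is a Python 3 sketch of the same idea (note the print/except syntax differs from the Python 2 snippets above); the last_locals attribute is just an illustrative place to stash the captured variables:

```python
import sys

def flog(func):
    # Log the decorated function's local variables if it raises.
    def flog_wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            tb = sys.exc_info()[2]
            # tb.tb_frame is this wrapper; tb.tb_next.tb_frame is func's
            # frame, where the exception was actually raised.
            flog_wrapper.last_locals = dict(tb.tb_next.tb_frame.f_locals)
            print("At flog:", exc, "locals:", flog_wrapper.last_locals)
            del tb  # break the frame/traceback reference cycle
            raise
    flog_wrapper.last_locals = None
    return flog_wrapper

@flog
def divide(a, b):
    return a / b  # ZeroDivisionError when b == 0

try:
    divide(1, 0)
except ZeroDivisionError:
    pass
# divide.last_locals now holds {'a': 1, 'b': 0}
```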
Related
In patching the __rshift__ operator of primitive python types with a callable, the patching utilises a wrapper:
def _patch_rshift(py_class, func):
    assert isinstance(func, FunctionType)
    py_type_name = 'tp_as_number'
    py_type_method = 'nb_rshift'
    py_obj = PyTypeObject.from_address(id(py_class))
    type_ptr = getattr(py_obj, py_type_name)
    if not type_ptr:
        tp_as_obj = PyNumberMethods()
        FUNC_INDIRECTION_REFS[(py_class, '__rshift__')] = tp_as_obj
        tp_as_new_ptr = ctypes.cast(ctypes.addressof(tp_as_obj),
                                    ctypes.POINTER(PyNumberMethods))
        setattr(py_obj, py_type_name, tp_as_new_ptr)
    type_head = type_ptr[0]
    c_func_t = binary_func_p

    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except BaseException as be:
            # Need to raise a python style error here:
            wrapper.exc_info = sys.exc_info()
            return False

    c_func = c_func_t(wrapper)
    C_FUNC_CALLBACK_REFS[(py_class, '__rshift__')] = c_func
    setattr(type_head, py_type_method, c_func)
The challenge now is, once an exception is caught inside wrapper, to raise it just like any normal Python exception.
Raising like:
@wraps(func)
def wrapper(*args, **kwargs):
    try:
        return func(*args, **kwargs)
    except BaseException as be:
        raise
or not catching at all:
@wraps(func)
def wrapper(*args, **kwargs):
    return func(*args, **kwargs)
Yields:
Windows fatal exception: access violation
Process finished with exit code -1073741819 (0xC0000005)
The desired behavior is simply to re-raise the caught exception, but with Python-style output if possible.
Related answers either rely on the platform being Windows to achieve the desired outcome (which is not suitable), do not preserve the original exception, or do not achieve the desired Python exception-style behaviour:
Get error message from ctypes windll
Ctypes catching exception
UPDATE:
After some more digging, it would appear that raising anywhere in this method triggers a segfault. I am none the wiser on how to solve it.
Not a solution, but a workaround:
Manually collecting the error information (using traceback and inspect), printing it to sys.stderr, and then returning a pointer to a ctypes error singleton (as done in forbiddenfruit) prevents the segfault from occurring.
The output is then:
1. the custom error code
2. the Python error one would expect from the un-patched primitive
(For brevity, some of the methods used here are not included as they do not add value to the solution; I opted for a pytest-style error format.)
def wrapper(*args, **kwargs):
    try:
        return func(*args, **kwargs)
    except BaseException as be:
        # Traceback will not help us here, assemble a custom error message:
        from custom_inspect_utils import get_calling_expression_recursively
        import sys
        calling_file, calling_expression = get_calling_expression_recursively()
        indentation = " " * (len(calling_expression) - len(calling_expression.lstrip()))
        print(f"\n{calling_file}\n>{calling_expression}E{indentation}{type(be).__name__}: {be}", file=sys.stderr)
        print(f"\n\tAs this package patches primitive types, the python RTE also raised:", file=sys.stderr)
        return ErrorSingleton
e.g. after patching list to implement __rshift__, the expression [1,2,3] >> wrong_target, where one expects a CustomException, will now first output:
source_file.py:330 (method_name)
> [1,2,3] >> wrong_target
E CustomException: Message.
followed by a TypeError:
TypeError: unsupported operand type(s) for >>: 'list' and 'WrongType'
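Stepping back from the ctypes specifics: an exception must never propagate through a C callback boundary, which is why the original code stashes sys.exc_info() on the wrapper. A pure-Python sketch of that stash-and-re-raise pattern (make_safe_callback and reraise_if_failed are hypothetical helper names; sentinel stands in for whatever error value the C side expects):

```python
import sys

def make_safe_callback(func, sentinel=None):
    # Wrap func so no exception escapes into C; stash it on the wrapper instead.
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except BaseException:
            wrapper.exc_info = sys.exc_info()
            return sentinel
    wrapper.exc_info = None
    return wrapper

def reraise_if_failed(wrapper):
    # Call after control returns from C to surface a stashed exception.
    if wrapper.exc_info is not None:
        exc = wrapper.exc_info[1]
        wrapper.exc_info = None
        raise exc

# Usage sketch: the C layer would invoke `cb`; we re-raise afterwards.
cb = make_safe_callback(lambda x: 1 // x, sentinel=-1)
result = cb(0)            # returns -1 instead of letting the exception escape
# reraise_if_failed(cb)   # would raise the stashed ZeroDivisionError
```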
I have a class with multiple functions. These functions handle similar kinds of exceptions. Can I have a handler function and assign it to these functions?
In the end, I would like the functions to contain no exception handling themselves; on an exception, control should go to this handler function.
class Foo:
    def a():
        try:
            some_code
        except Exception1 as ex:
            # log error
            # mark file for error
            # other housekeeping
            return some_status, ex.error
        except Exception2 as ex:
            # log error
            # mark file for error
            # other housekeeping
            return some_status, ex.error
Similarly, the other functions raise the same kinds of exceptions. I want all this exception handling in a separate method; the functions should just hand control over to the exception handler.
I can think of calling every function through a wrapper handler function, but that looks very weird to me.
class Foo:
    def process_func(func, *args, **kwargs):
        try:
            func(*args, **kwargs)
        except Exception1 as ex:
            # log error
            # mark file for error
            # other housekeeping
            return some_status, ex.error
        except Exception2 as ex:
            # log error
            # mark file for error
            # other housekeeping
            return some_status, ex.error
def a(*args, **kwargs):
some_code
Is there a better way for doing this?
You can define a function decorator:
def process_func(func):
    def wrapped_func(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except ...
    return wrapped_func
And use as:
@process_func
def func(...):
    ...
So that func(...) is equivalent to process_func(func)(...), and errors are handled inside wrapped_func.
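A runnable sketch of that pattern, using hypothetical exception types and the question's (status, error) return convention:

```python
def process_func(func):
    # Route exceptions from func into one shared handler.
    def wrapped_func(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except FileNotFoundError as ex:
            # log error / mark file for error / other housekeeping
            return "failed", str(ex)
        except ValueError as ex:
            return "invalid", str(ex)
    return wrapped_func

@process_func
def parse_number(text):
    return "ok", int(text)  # raises ValueError on bad input

print(parse_number("42"))    # -> ('ok', 42)
print(parse_number("oops"))  # -> ('invalid', "invalid literal for int() ...")
```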
I have a general purpose function that sends info about exceptions to an application log.
I use the exception_handler function from within methods in classes. The app-log handler that is passed into and called by exception_handler creates a JSON string, which is what actually gets sent to the logfile. This all works fine.
import sys
import traceback

def exception_handler(log, terminate=False):
    exc_type, exc_value, exc_tb = sys.exc_info()
    filename, line_num, func_name, text = traceback.extract_tb(exc_tb)[-1]
    log.error('{0} Thrown from module: {1} in {2} at line: {3} ({4})'.format(
        exc_value, filename, func_name, line_num, text))
    del (filename, line_num, func_name, text)
    if terminate:
        sys.exit()
I use it as follows: (a hyper-simplified example)
from utils import exception_handler

class Demo1(object):
    def __init__(self):
        self.log = {a class that implements the application log}

    def demo(self, name):
        try:
            print(name)
        except Exception:
            exception_handler(self.log, True)
I would like to alter exception_handler for use as a decorator for a large number of methods, i.e.:
@handle_exceptions
def func1(self, name):
    {some code that gets wrapped in a try / except by the decorator}
I've looked at a number of articles about decorators, but I haven't yet figured out how to implement what I want to do. I need to pass a reference to the active log object and also pass 0 or more arguments to the wrapped function. I'd be happy to convert exception_handler to a method in a class if that makes things easier.
Such a decorator would simply be:
def handle_exceptions(f):
    def wrapper(*args, **kw):
        try:
            return f(*args, **kw)
        except Exception:
            self = args[0]
            exception_handler(self.log, True)
    return wrapper
This decorator simply calls the wrapped function inside a try suite.
This can be applied to methods only, as it assumes the first argument is self.
Thanks to Martijn for pointing me in the right direction.
I couldn't get his suggested solution to work but after a little more searching based on his example the following works fine:
def handle_exceptions(fn):
    from functools import wraps

    @wraps(fn)
    def wrapper(self, *args, **kw):
        try:
            return fn(self, *args, **kw)
        except Exception:
            exception_handler(self.log)
    return wrapper
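Putting the pieces together, a self-contained Python 3 sketch; ListLogger is a stand-in for the real application-log class:

```python
import sys
import traceback
from functools import wraps

def exception_handler(log, terminate=False):
    exc_type, exc_value, exc_tb = sys.exc_info()
    filename, line_num, func_name, text = traceback.extract_tb(exc_tb)[-1]
    log.error('{0} Thrown from module: {1} in {2} at line: {3} ({4})'.format(
        exc_value, filename, func_name, line_num, text))
    if terminate:
        sys.exit()

def handle_exceptions(fn):
    @wraps(fn)
    def wrapper(self, *args, **kw):
        try:
            return fn(self, *args, **kw)
        except Exception:
            exception_handler(self.log)
    return wrapper

class ListLogger:
    # Stand-in for the application log: collects messages in a list.
    def __init__(self):
        self.messages = []
    def error(self, msg):
        self.messages.append(msg)

class Demo1:
    def __init__(self):
        self.log = ListLogger()

    @handle_exceptions
    def demo(self, name):
        raise ValueError("bad name: %s" % name)

d = Demo1()
d.demo("x")  # the exception is logged, not propagated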
In the sample code below, I intend to get the stack frames for the callees of the decorated function. Suppose the decorated function, power (below), calls the pwr function and there is an exception in it; I would like to get the stack frame (to print function arguments) for pwr. For the functions exposed in the API, arguments and responses are printed, but for the functions internal to the module, i.e. those the API functions call, I would also like to get those stack frames.
import inspect

def api(func):
    def decor(*args, **kwargs):
        try:
            print "Request %s %s %s" % (func.__name__, args, kwargs)
            response = func(*args, **kwargs)
            print "response %s", response
            return response
        except Exception, e:
            print "exception in %s", func.__name__
            for frame in inspect.stack():
                print frame[3]
            raise e
    return decor

@api
def power(a, b):
    return pwr(a, b)

def pwr(a, b):
    ...
    ...
When I run the code, during the exception I get stack frames from decor and up, but not func or below. Can anyone suggest a solution?
If you want to look at the context in which the exception occurred, you need to look at the third value (the "traceback object") returned from sys.exc_info(). The traceback module has some useful functions for handling these objects: you might be able to make use of traceback.print_tb.
For example:
>>> import sys, traceback
>>> try: raise Exception()
... except: traceback.print_tb(sys.exc_info()[2])
...
File "<stdin>", line 1, in <module>
You can't access the frame after the function has returned or raised an exception, since the frame simply no longer exists. If you really want to access a function's frames for some debugging/profiling purpose and can't modify the function, consider using sys.setprofile or sys.settrace. The callback will be passed the frame as an argument.
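To reach the frames below decor, walk the traceback chain instead of calling inspect.stack(). A Python 3 sketch (failure_frames is just an illustrative attribute for holding what was captured):

```python
import sys

def api(func):
    def decor(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            tb = sys.exc_info()[2]
            frames = []
            # tb.tb_frame is decor itself; tb_next descends into the callees.
            while tb is not None:
                frames.append((tb.tb_frame.f_code.co_name,
                               dict(tb.tb_frame.f_locals)))
                tb = tb.tb_next
            decor.failure_frames = frames
            raise
    decor.failure_frames = None
    return decor

@api
def power(a, b):
    return pwr(a, b)

def pwr(a, b):
    raise RuntimeError("boom with a=%r b=%r" % (a, b))

try:
    power(2, 3)
except RuntimeError:
    pass
# power.failure_frames now holds ('decor', ...), ('power', ...), ('pwr', ...)
```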
Is it possible to ensure the __exit__() method is called even if there is an exception in __enter__()?
>>> class TstContx(object):
...     def __enter__(self):
...         raise Exception('Oops in __enter__')
...
...     def __exit__(self, e_typ, e_val, trcbak):
...         print "This isn't running"
...
>>> with TstContx():
...     pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in __enter__
Exception: Oops in __enter__
>>>
Edit
This is as close as I could get...
class TstContx(object):
    def __enter__(self):
        try:
            pass  # __enter__ code
        except Exception as e:
            self.init_exc = e
        return self

    def __exit__(self, e_typ, e_val, trcbak):
        if all((e_typ, e_val, trcbak)):
            raise e_typ, e_val, trcbak
        # __exit__ code

with TstContx() as tc:
    if hasattr(tc, 'init_exc'): raise tc.init_exc
    # code in context
In hindsight, a context manager might not have been the best design decision.
Like this:
import sys

class Context(object):
    def __enter__(self):
        try:
            raise Exception("Oops in __enter__")
        except:
            # Swallow exception if __exit__ returns a True value
            if self.__exit__(*sys.exc_info()):
                pass
            else:
                raise

    def __exit__(self, e_typ, e_val, trcbak):
        print "Now it's running"

with Context():
    pass
To let the program continue on its merry way without executing the context block you need to inspect the context object inside the context block and only do the important stuff if __enter__ succeeded.
import sys

class Context(object):
    def __init__(self):
        self.enter_ok = True

    def __enter__(self):
        try:
            raise Exception("Oops in __enter__")
        except:
            if self.__exit__(*sys.exc_info()):
                self.enter_ok = False
            else:
                raise
        return self

    def __exit__(self, e_typ, e_val, trcbak):
        print "Now this runs twice"
        return True

with Context() as c:
    if c.enter_ok:
        print "Only runs if enter succeeded"
print "Execution continues"
As far as I can determine, you can't skip the with-block entirely. And note that this context now swallows all exceptions in it. If you wish not to swallow exceptions if __enter__ succeeds, check self.enter_ok in __exit__ and return False if it's True.
No. If there is the chance that an exception could occur in __enter__() then you will need to catch it yourself and call a helper function that contains the cleanup code.
I suggest you follow RAII (resource acquisition is initialization) and do the potentially failing allocation in the constructor of your context manager. Then your __enter__ can simply return self, which should never raise an exception. If your constructor fails, the exception is raised before the with context is even entered.
class Foo:
    def __init__(self):
        print("init")
        raise Exception("booh")

    def __enter__(self):
        print("enter")
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        print("exit")
        return False

with Foo() as f:
    print("within with")
Output:
init
Traceback (most recent call last):
File "<input>", line 1, in <module>
...
raise Exception("booh")
Exception: booh
Edit:
Unfortunately this approach still allows the user to create "dangling" resources that won't be cleaned up if they do something like:

foo = Foo()              # this allocates the resource without a with context.
raise ValueError("bla")  # foo.__exit__() will never be called.
I am quite curious whether this could be worked around by modifying the __new__ implementation of the class, or by some other Python magic that forbids object instantiation without a with context.
You could use contextlib.ExitStack (not tested):
with ExitStack() as stack:
    ctx = TstContx()
    stack.push(ctx)  # ensure __exit__ is called
    with ctx:
        stack.pop_all()  # __enter__ succeeded, don't call the __exit__ callback
Or an example from the docs:
stack = ExitStack()
try:
    x = stack.enter_context(cm)
except Exception:
    ...  # handle __enter__ exception
else:
    with stack:
        ...  # handle normal case
See contextlib2 on Python <3.3.
If inheritance or complex subroutines are not required, you can use a shorter way:

from contextlib import contextmanager

@contextmanager
def test_cm():
    try:
        # dangerous code
        yield
    except Exception, err:
        pass  # do something
class MyContext:
    def __enter__(self):
        try:
            pass
            # exception-raising code
        except Exception as e:
            self.__exit__(e)

    def __exit__(self, *args):
        # clean up code ...
        if args[0]:
            raise

I've done it like this. It calls __exit__() with the error as the argument. If args[0] contains an error, it re-raises the exception after executing the clean-up code.
The docs contain an example that uses contextlib.ExitStack for ensuring the cleanup:
As noted in the documentation of ExitStack.push(), this method can be useful in cleaning up an already allocated resource if later steps in the __enter__() implementation fail.
So you would use ExitStack() as a wrapping context manager around the TstContx() context manager:
from contextlib import ExitStack

with ExitStack() as stack:
    ctx = TstContx()
    stack.push(ctx)  # Leaving `stack` now ensures that `ctx.__exit__` gets called.
    with ctx:
        stack.pop_all()  # Since `ctx.__enter__` didn't raise, it can handle the cleanup itself.
        ...  # Here goes the body of the actual context manager.
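A runnable Python 3 version of this pattern, with an illustrative Flaky context manager whose __enter__ raises:

```python
from contextlib import ExitStack

class Flaky:
    # Context manager whose __enter__ fails; __exit__ records the cleanup.
    def __init__(self, fail):
        self.fail = fail
        self.cleaned_up = False

    def __enter__(self):
        if self.fail:
            raise RuntimeError("Oops in __enter__")
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.cleaned_up = True
        return True  # swallow the exception, just for this demo

ctx = Flaky(fail=True)
body_ran = False
with ExitStack() as stack:
    stack.push(ctx)      # registers ctx.__exit__ before __enter__ runs
    with ctx:
        stack.pop_all()  # only reached if __enter__ succeeded
        body_ran = True
# ctx.cleaned_up is now True even though __enter__ raised; body_ran stays False
```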