I'm trying to replicate in Python an error-checking pattern that I often use when programming in C. I have a function check as follows:
def check(exceptions, msg, handler):
    def wrapped(func, *args, **kwargs):
        try:
            return func(*args, **kwargs)
        except exceptions as err:
            log_err(msg)
            # Do something with handler
    return wrapped
By calling check with appropriate arguments and then calling the result with a function and its arguments, it's possible to reduce a try-except statement to as little as two lines of code without really sacrificing clarity (in my opinion, anyway).
For example
def caller():
    try:
        files = listdir(directory)
    except OSError as err:
        log_err(msg)
        return []
    # Do something with files
becomes
def caller():
    c = check(OSError, msg, handler)
    files = c(listdir, directory)  # caller may return [] here
    # Do something with files
The issue is that in order for this transformation to be transparent to the rest of the program it's necessary for handler to execute exactly as if it were written in the scope of the caller of wrapped. (handler need not be a function object. I'm after an effect, not a method.)
In C I would just use macros and expand everything inline (since that's where I would be writing the code anyway), but Python doesn't have macros. Is it possible to achieve this effect in some other way?
There is no way to write Python code to create an object c so that, when you call c, the function that called it returns. You can only return from a function by literally typing a return statement directly in that function's body (or falling off the end).
You could easily make it so that your "checked" function simply returns the default value. The caller can then use it as normal, but it can't make the caller itself return. You could also write a decorator for caller that catches your specified errors and returns [] instead, but this would catch all OSErrors raised anywhere in caller, not just ones raised by calling a particular function (e.g., listdir). For instance:
def check(exceptions, msg, handler):
    def deco(func):
        def wrapped(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except exceptions as err:
                print("Logged error:", msg)
                return handler(err)
        return wrapped
    return deco

def handler(err):
    return []

@check(ZeroDivisionError, "divide by zero", handler)
def func(x):
    1/0
>>> func(1)
Logged error: divide by zero
[]
The situation you describe seems somewhat unusual. In most cases where it would be worth it to factor out the handling into a "handler" function, that handler function either couldn't know what value it wants the caller to return, or it could know what to return just based on the error, without needing to know what particular line raised the error.
For instance, in your example, you apparently have a function caller that might raise an OSError at many different points. If you only have one place where you need to catch OSError and return [], just write one try/except and it's no big deal. If you want to catch any OSError in the function and return [], decorate it as shown above. What you describe would seem to only be useful in cases where you want to catch more-than-one-but-not-all possible OSErrors raised in caller, and yet in all those cases you want to return the same particular value, which seems rather unusual.
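For instance, decorating the whole function with the check and handler defined above might look like this (a sketch; the message and path are illustrative):

import os

@check(OSError, "could not list directory", handler)
def caller():
    files = os.listdir('/does/not/exist')
    # Do something with files
    return files

print(caller())  # prints "Logged error: could not list directory", then []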
Forgive me if I don't quite understand, but can't you just put the handler inside the calling function?
def caller():
    def handler():
        print("Handler was called")
    # ...
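A fuller sketch of that idea, assuming the check from the question is completed so that wrapped calls handler(err) and returns its result on failure: because the handler is a closure, it can read the caller's local state.

import os

def caller():
    fallback = []

    def handler(err):
        # runs with access to the caller's locals
        print("Handler was called:", err)
        return fallback

    c = check(OSError, "listing failed", handler)
    files = c(os.listdir, '/does/not/exist')
    # Do something with files
    return files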
A better way...
Or, if you want to simplify the way you call it, you can use with statements to achieve the desired effect. This will be a lot easier, I think, and it's a lot less of a "hack":
class LogOSError(object):
    def __init__(self, loc):
        self.loc = loc

    def __enter__(self):
        return None

    def __exit__(self, exc_type, exc_value, traceback):
        if isinstance(exc_value, OSError):
            print("{}: OSError: {}".format(self.loc, exc_value))
            return True
You can use it like this:
with LogOSError('example_func'):
    os.unlink('/does/not/exist')
And the output is:
>>> with LogOSError('some_location'):
...     os.unlink('/does/not/exist')
...
some_location: OSError: [Errno 2] No such file or directory: '/does/not/exist'
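One possible generalization (my sketch, not part of the original answer) is to parameterize the exception types, so the same manager works for more than OSError:

class LogErrors(object):
    def __init__(self, loc, exceptions=(OSError,)):
        self.loc = loc
        self.exceptions = exceptions

    def __enter__(self):
        return None

    def __exit__(self, exc_type, exc_value, traceback):
        if isinstance(exc_value, self.exceptions):
            print("{}: {}: {}".format(self.loc, exc_type.__name__, exc_value))
            return True  # suppress only the listed exception types

Usage is the same, e.g. with LogErrors('some_location', (OSError, ValueError)): ...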
Related
How can I hack into Python's return statement to make it do the extra work of sending some inspection data to a logging module I am writing?
Up until now, I have been tacking a call to a function named owl() before every return statement:
owl()
return (something)
I am hoping I can temporarily sneak the work into the return statement. Then I can turn this return tweak off and on and perhaps also gain some access to see what it is returning.
Wrap the function body in a try/finally block:
def func(...):
    try:
        ...
    finally:
        owl()
The finally: code is always executed when leaving the block, so it will be executed whenever the function returns.
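A minimal, self-contained demonstration (owl here is just a stand-in print):

def owl():
    print('owl!')

def func(x):
    try:
        return x * 2    # the return value is computed first...
    finally:
        owl()           # ...then the finally block runs on the way out

print(func(21))         # prints 'owl!' and then 42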
To avoid having to write this every time, you can use a decorator.
def with_owl(func):
    def inner(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        finally:
            owl()
    return inner

@with_owl
def func(...):
    ...
Alternatively, define a context manager.
from contextlib import contextmanager

@contextmanager
def with_owl():
    try:
        yield
    finally:
        owl()
Then
def foo():
    with with_owl():
        ...
owl() will be called just before you exit the with statement for any reason, whether by reaching the end of the body or by leaving early due to a return statement or an exception. (The context manager can explicitly check whether an exception has been raised, if you only want to call owl() on a "normal" return.)
Basically, everything before the yield statement in with_owl executes immediately upon entry to the with statement, and just before you leave the with statement you return to with_owl and execute what comes after the yield.
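For instance, a sketch of the "only on normal return" variant mentioned above, assuming the owl() from the question:

from contextlib import contextmanager

@contextmanager
def with_owl_on_success():
    try:
        yield
    except BaseException:
        raise        # an exception escaped the with block: skip owl()
    else:
        owl()        # the block finished normally (including via return)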
I would like to implement a way to globally handle exceptions raised in a class throughout my application, rather than handling them at each call site, because that causes a lot of duplicated code.
For example's sake say I have a class like so:
class handler():
    def throws_value_error(self):
        raise ValueError

    def throws_not_implemented_error(self):
        raise NotImplementedError

    # ... Imagine there are hundreds of these
In this handler class I'd like all ValueError and NotImplementedError handled the same way - a message saying [ERROR]: ... where ... is the details from the particular instance.
Instead of reproducing a TON of:
try:
    #...
except Exception as e:
    # Catch the instances of the exception and handle them
    # in more-or-less the same way here
I'd like to have a single method implementable on the class that takes care of this in general:
def exception_watcher(self):
    # Big if...elif...else for handling these types of exceptions
Then, this method would be invoked whenever any exception is raised inside the class.
My motivation is to use this to implement a generic sort of controller interface where a controller method could throw a subclass of an exception (for example, BadRequestException, which is subclassed from Error400Exception). From there, the handler could dispatch a method to properly configure a response with an error code tailored to the type of exception, and additionally return a useful reply to the consumer.
Obviously this is doable without this dispatcher - but this way certainly saves a ton of duplicated code (and therefore bugs).
I've done a lot of digging around the internet and I haven't found anything. Admittedly I am borrowing this pattern from the Spring Boot Framework. I will also accept this being non-pythonic as an answer (as long as you can tell me how to do it right!)
Any help would be appreciated - thank you!
Modifying functions without altering their core functionality is exactly what decorators are for. Take a look at this example and try running it:
def catch_exception(func):
    def wrapped(self, *args, **kwargs):
        try:
            return func(self, *args, **kwargs)
        except ValueError:
            self.value_error_handler()
        except NotImplementedError:
            self.ni_error_handler()
    return wrapped

class Handler:
    def value_error_handler(self):
        print("found value error")

    def ni_error_handler(self):
        print("found not implemented error")

    @catch_exception
    def value_error_raiser(self):
        raise ValueError

    @catch_exception
    def ni_error_raiser(self):
        raise NotImplementedError

h = Handler()
h.value_error_raiser()
h.ni_error_raiser()
Because of the @catch_exception decorator, the decorated methods do not raise the errors shown when called; instead, wrapped's try/except block catches them and calls the respective handler method, printing a message.
While this is not exactly what you asked for, I believe this solution is more manageable overall. You can both choose exactly what methods should raise exceptions regularly and which should be handled by catch_exception (by deciding which methods are decorated), and add future behaviour for other exception types by writing an extra except block inside wrapped and adding the respective method to the Handler class.
Note that this is not a perfectly "clean" solution, since wrapped assumes that self is an instance of Handler (it calls several of Handler's methods) even though it is a function defined outside the Handler class... But hey, that's Python's duck typing for you :-P
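If the set of handled exceptions grows, one way to keep this manageable (my sketch, using the handler-method names from the answer above; the HANDLERS table is my invention) is table-driven dispatch instead of a chain of except blocks:

HANDLERS = {
    ValueError: 'value_error_handler',
    NotImplementedError: 'ni_error_handler',
}

def catch_exception(func):
    def wrapped(self, *args, **kwargs):
        try:
            return func(self, *args, **kwargs)
        except tuple(HANDLERS) as err:
            # walk the MRO so exception subclasses dispatch to the
            # nearest registered handler
            for cls in type(err).__mro__:
                if cls in HANDLERS:
                    return getattr(self, HANDLERS[cls])()
            raise  # unreachable: except already filtered to HANDLERS
    return wrapped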
I am trying to understand Python decorators.
I devised a simple example where I want the decorator function to be a custom log that just prints an error if, for instance, I try to sum_ an int and a str:
def log(fun):
    try:
        return fun(*args)
    except:
        print('error!')

@log
def sum_(a,b):
    return a+b
This returns "error" already simply when I define the function. I suspect there are multiple wrong things in what I did... I tried to look into the other questions about that topic, but I find them all to intricate to understand how such a simple example should be drafted ,esp how to pass the arguments from the original function.
All help and pointers appreciated
That's because you're not forwarding the args from the function to your decorator, and the catch-all except catches the resulting NameError for args; this is one of the reasons to always specify the exception class.
Here's a modified version of your code with the try-catch removed and the function arguments correctly forwarded:
def log(fun):
    def wrapper(*args):
        print('in decorator!')
        return fun(*args)
    return wrapper

@log
def sum_(a,b):
    return a+b

print(sum_(1,2))
The reason you're getting an error is simply because args is undefined in your decorator. This isn't anything special about decorators, just a regular NameError. For this reason you probably want to restrict your exception clause to just TypeErrors so that you're not silencing other errors. A full implementation would be
import functools

def log(fun):
    @functools.wraps(fun)
    def inner(*args):
        try:
            return fun(*args)
        except TypeError:
            print('error!')
    return inner

@log
def sum_(a, b):
    return a + b
It's also a good idea to decorate your inner functions with the functools.wraps decorator, which transfers the name and docstring from your original function to your decorated one.
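A quick check of what functools.wraps buys you here:

print(sum_.__name__)   # 'sum_' rather than 'inner', thanks to functools.wraps
print(sum_(1, 2))      # 3
print(sum_(1, 'a'))    # prints 'error!', then None (inner returns nothing)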
The log decorator, in this case, does not return a function but a value. This may stem from an assumption that the decorator function itself replaces the original function, when in fact it is called to create a replacement function.
A fix which may represent the intention:
def log(fun):
    def my_func(*args):
        try:
            return fun(*args)
        except:
            print('error!')
    return my_func
In this case, my_func is the actual function which is called for sum_(1, 2), and internally, it calls the original function (the original sum_) which the decorator received as an argument.
A trivial example that illustrates the order of the actions:
def my_decorator(fun):
    print('This will be printed first, during module load')
    def my_wrapper(*args):
        print('This will be printed during call, before the original func')
        return fun(*args)
    return my_wrapper

@my_decorator
def func():
    print('This will be printed in the original func')
Is it possible for a callee to force its caller to return in python?
If so, is this a good approach? Doesn't it violate the "Explicit is better than implicit" line of the Zen of Python?
Example:
import inspect

class C(object):
    def callee(self):
        print('in callee')
        caller_name = inspect.stack()[1][3]
        caller = getattr(self, caller_name)
        # force caller to return
        # so that "in caller after callee" never gets printed
        caller.return() # ???

    def caller(self):
        print('in caller before callee')
        self.callee()
        print('in caller after callee')

c = C()
c.caller()
print('resume')
print 'resume'
Output:
in caller before callee
in callee
resume
Finally, thanks to @Andrew Jaffe's suggestion on context managers, I resolved it with a simple decorator.
# In my real code this is not a global variable
REPORT_ERRORS = True

def error_decorator(func):
    """
    Returns an Event instance with the result of the
    decorated function, or the caught exception.
    Or re-raises that exception.
    """
    def wrap():
        error = None
        user = None
        try:
            user = func()
        except Exception as e:
            error = e
        if REPORT_ERRORS:
            return Event(user, error)
        if error is not None:
            raise error
        return user
    return wrap

@error_decorator
def login():
    response = fetch_some_service()
    if response.errors:
        # flow ends here
        raise BadResponseError
    user = parse_response(response)
    return user
What's wrong with returning a value from the callee, to be read by the caller, which then behaves accordingly?
instead of
caller.return() # ???
write
return False
and in the caller:

def caller(self):
    print('in caller before callee')
    rc = self.callee()
    if not rc:
        return
    print('in caller after callee')
And of course you can raise an exception in the callee and catch it in the caller and behave accordingly, or simply let it fall through.
(This duplicates mgilson's suggestion.)
Reasons I would argue for a return-value-based check:
Explicit is better than implicit.
The callee should not control the caller's behavior; that is bad programming practice. Instead, callers should change their behavior based on the callee's result.
In a sense you can do this with exceptions... just raise an exception at the end of the callee and don't handle it in the caller. But why exactly do you want to do this? It seems like there's probably a better way to do whatever you're attempting...
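For instance, a sketch of that approach using the names from the question (AbortCaller is a made-up exception for illustration):

class AbortCaller(Exception):
    """Raised by the callee to skip the rest of the caller."""

def callee():
    print('in callee')
    raise AbortCaller

def caller():
    print('in caller before callee')
    callee()
    print('in caller after callee')  # never reached

try:
    caller()
except AbortCaller:
    pass
print('resume')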
As far as creating a jump in the callee, it looks like that is impossible. From the Data Model section on Frame objects (emphasis is mine)
f_lineno is the current line number of the frame — writing to this from within a trace function jumps to the given line (only for the bottom-most frame). A debugger can implement a Jump command (aka Set Next Statement) by writing to f_lineno.
I want to catch an exception, but only if it comes from the very next level of logic.
The intent is to handle errors caused by the act of calling the function with the wrong number of arguments, without masking errors generated by the function implementation.
How can I implement the wrong_arguments function below?
Example:
try:
    return myfunc(*args)
except TypeError as error:
    # possibly wrong number of arguments;
    # we know how to proceed if the error occurred when calling myfunc(),
    # but we shouldn't interfere with errors in the implementation of myfunc
    if wrong_arguments(error, myfunc):
        return fixit()
    else:
        raise
Addendum:
There are several solutions that work nicely in the simple case, but none of the current answers will work in the real-world case of decorated functions.
Consider that these are possible values of myfunc above:
def decorator(func):
    "The most trivial (and common) decorator"
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)

def myfunc1(a, b, c='ok'):
    return (a, b, c)

myfunc2 = decorator(myfunc1)
myfunc3 = decorator(myfunc2)
Even the conservative look-before-you-leap method (inspecting the function argument spec) fails here, since most decorators will have an argspec of *args, **kwargs regardless of the decorated function. Exception inspection also seems unreliable, since myfunc.__name__ will be simply "wrapper" for most decorators, regardless of the core function's name.
Is there any good solution if the function may or may not have decorators?
You can do:
import sys

try:
    myfunc()
except IndexError:
    trace = sys.exc_info()[2]
    if trace.tb_next.tb_next is None:
        pass
    else:
        raise
Although it is kinda ugly and would seem to violate encapsulation.
Stylistically, wanting to catch having passed too many arguments seems strange. I suspect a more general rethink of what you are doing may resolve the problem, but without more details I can't be sure.
EDIT
Possible approach: check whether the function you are calling has the arguments *args, **kwargs. If it does, assume it's a decorator and adjust the code above to check whether the exception was one further layer in. If not, check as above.
Still, I think you need to rethink your solution.
I am not a fan of doing magic this way. I suspect you have an underlying design problem rather.
--original answer and code which was too unspecific to the problem removed--
Edit after understanding specific problem:
from inspect import getargspec

def can_call_effectively(f, args):
    (fargs, varargs, _kw, df) = getattr(f, 'effective_argspec',
                                        getargspec(f))
    fargslen = len(fargs)
    argslen = len(args)
    minargslen = fargslen - len(df or ())  # defaults may be None
    return (varargs and argslen >= minargslen) or minargslen <= argslen <= fargslen

if can_call_effectively(myfunc, args):
    myfunc(*args)
else:
    fixit()
All your decorators, or at least those you want to be transparent in regard to
calling via the above code, need to set 'effective_argspec' on the returned callable.
Very explicit, no magic. To achieve this, you could decorate your decorators with the appropriate code...
Edit: more code, the decorator for transparent decorators.
def transparent_decorator(decorator):
    def wrapper(f):
        wrapped = decorator(f)
        wrapped.__doc__ = f.__doc__
        wrapped.effective_argspec = getattr(f, 'effective_argspec', getargspec(f))
        return wrapped
    return wrapper
Use this on your decorator:
@transparent_decorator
def decorator(func):
    "The most trivial (and common) decorator"
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper  # line missing in example above
Now if you create myfunc1 - myfunc3 as above, they work exactly as expected.
Ugh, unfortunately not really. Your best bet is to introspect the error object that is returned and see whether myfunc and the number of arguments are mentioned.
So you'd do something like:
except TypeError as err:
    if err.has_some_property or 'myfunc' in str(err):
        return fixit()
    raise
You can do it with something like:
>>> def f(x, y, z):
...     print(f(0))
...
>>> try:
...     f(0)
... except TypeError as e:
...     print(e.__traceback__.tb_next is None)
...
True
>>> try:
...     f(0, 1, 2)
... except TypeError as e:
...     print(e.__traceback__.tb_next is None)
...
False
But a better way would be to count the number of arguments the function expects and compare it with the number of arguments passed:
len(inspect.getargspec(f).args) != len(args)
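Wrapped in a helper, that check might look like the following rough sketch. Note it ignores default values and *args, and getargspec is deprecated in Python 3 (see the Signature-based answer below):

import inspect

def wrong_arguments(func, args):
    # naive: compares positional-argument counts only
    return len(inspect.getargspec(func).args) != len(args)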
You can retrieve the traceback and look at its length. Try:
import traceback as tb
import sys

def a():
    1/0

def b():
    a()

def c():
    b()

try:
    a()
except:
    print(len(tb.extract_tb(sys.exc_info()[2])))

try:
    b()
except:
    print(len(tb.extract_tb(sys.exc_info()[2])))

try:
    c()
except:
    print(len(tb.extract_tb(sys.exc_info()[2])))
This prints
2
3
4
Well-written wrappers will preserve the function name, signature, etc, of the functions they wrap; however, if you have to support wrappers that don't, or if you have situations where you want to catch an error in a wrapper (not just the final wrapped function), then there is no general solution that will work.
I know this is an old post, but I stumbled on this question and later on a better answer. The answer depends on a newer Python 3 feature, Signature objects:
With that feature you can write:
import inspect

sig = inspect.signature(myfunc)
try:
    sig.bind(*args)
except TypeError:
    return fixit()
else:
    return myfunc(*args)
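A point worth adding (my note, not part of the original answer): functools.wraps sets __wrapped__ on the wrapper, and inspect.signature follows that attribute, so this approach also sees through well-behaved decorators:

import functools
import inspect

def decorator(func):
    @functools.wraps(func)          # sets wrapper.__wrapped__ = func
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@decorator
def myfunc1(a, b, c='ok'):
    return (a, b, c)

# signature() unwraps __wrapped__, so we see the real parameters,
# not (*args, **kwargs)
print(inspect.signature(myfunc1))   # -> (a, b, c='ok')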
Seems to me that what you're trying to do is exactly the problem exceptions are supposed to solve: an exception will be caught somewhere up the call stack, so there's no need to propagate errors upwards by hand.
Instead, it sounds like you are trying to do error handling the C (non-exception) way, where the return value of a function indicates either no error (typically 0) or an error (a non-zero value). So I'd try just writing your function to return a value, and have the caller check for that return value.
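A small sketch of that style, reusing the listdir example from the top of the thread (the path is illustrative):

import os

def list_dir_or_none(path):
    """Return the directory listing, or None on failure (C-style 'error code')."""
    try:
        return os.listdir(path)
    except OSError:
        return None

def caller(path):
    files = list_dir_or_none(path)
    if files is None:      # check the "return code" and bail out early
        return []
    return sorted(files)

print(caller('/does/not/exist'))  # -> []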