I want to catch an exception, but only if it comes from the very next level of logic.
The intent is to handle errors caused by the act of calling the function with the wrong number of arguments, without masking errors generated by the function implementation.
How can I implement the wrong_arguments function below?
Example:
try:
    return myfunc(*args)
except TypeError as error:
    # possibly wrong number of arguments:
    # we know how to proceed if the error occurred when calling myfunc(),
    # but we shouldn't interfere with errors in the implementation of myfunc
    if wrong_arguments(error, myfunc):
        return fixit()
    else:
        raise
Addendum:
There are several solutions that work nicely in the simple case, but none of the current answers will work in the real-world case of decorated functions.
Consider that these are possible values of myfunc above:
def decorator(func):
    "The most trivial (and common) decorator"
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

def myfunc1(a, b, c='ok'):
    return (a, b, c)

myfunc2 = decorator(myfunc1)
myfunc3 = decorator(myfunc2)
Even the conservative look-before-you-leap method (inspecting the function's argument spec) fails here, since most decorators have an argspec of *args, **kwargs regardless of the decorated function. Exception inspection also seems unreliable, since myfunc.__name__ will simply be "wrapper" for most decorators, regardless of the core function's name.
Is there any good solution if the function may or may not have decorators?
You can do:
import sys

try:
    myfunc()
except IndexError:
    trace = sys.exc_info()[2]
    if trace.tb_next.tb_next is None:
        pass
    else:
        raise
Although it is kinda ugly and would seem to violate encapsulation.
Stylistically, wanting to catch passing the wrong number of arguments seems strange. I suspect that a more general rethink of what you are doing may resolve the problem, but without more details I can't be sure.
EDIT
Possible approach: check whether the function you are calling has the argspec *args, **kwargs. If it does, assume it's a decorator and adjust the code above to check whether the exception occurred one layer further in. If not, check as above.
Still, I think you need to rethink your solution.
I am not a fan of doing magic this way. I suspect you have an underlying design problem instead.
--original answer and code which was too unspecific to the problem removed--
Edit after understanding specific problem:
from inspect import getargspec

def can_call_effectively(f, args):
    fargs, varargs, _kw, df = getattr(f, 'effective_argspec',
                                      getargspec(f))
    fargslen = len(fargs)
    argslen = len(args)
    minargslen = fargslen - len(df or ())
    return (varargs and argslen >= minargslen) or minargslen <= argslen <= fargslen

if can_call_effectively(myfunc, args):
    myfunc(*args)
else:
    fixit()
All your decorators, or at least those you want to be transparent with regard to calling via the above code, need to set 'effective_argspec' on the returned callable. Very explicit, no magic. To achieve this, you can decorate your decorators with the appropriate code...
Edit: more code, the decorator for transparent decorators.
def transparent_decorator(decorator):
    def wrapper(f):
        wrapped = decorator(f)
        wrapped.__doc__ = f.__doc__
        wrapped.effective_argspec = getattr(f, 'effective_argspec',
                                            getargspec(f))
        return wrapped
    return wrapper
Use this on your decorator:
@transparent_decorator
def decorator(func):
    "The most trivial (and common) decorator"
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper
Now if you create myfunc1 through myfunc3 as above, they work exactly as expected.
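To illustrate, a quick demo, assuming the definitions of transparent_decorator and can_call_effectively above are in scope:

@transparent_decorator
def decorator(func):
    "The most trivial (and common) decorator"
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

def myfunc1(a, b, c='ok'):
    return (a, b, c)

myfunc2 = decorator(myfunc1)
myfunc3 = decorator(myfunc2)

print(can_call_effectively(myfunc3, (1, 2)))        # True: c has a default
print(can_call_effectively(myfunc3, (1, 2, 3, 4)))  # False: too many arguments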
Ugh, unfortunately not really. Your best bet is to introspect the error object that is returned and see whether myfunc and the number of arguments are mentioned.
So you'd do something like:
except TypeError as err:
    # fragile: relies on the error message mentioning myfunc by name
    if 'myfunc' in str(err):
        return fixit()
    raise
You can do it with something like:
>>> def f(x, y, z):
...     print(f(0))    # the body itself calls f with the wrong arity
>>> try:
...     f(0)
... except TypeError as e:
...     print(e.__traceback__.tb_next is None)
True
>>> try:
...     f(0, 1, 2)
... except TypeError as e:
...     print(e.__traceback__.tb_next is None)
False
But a better way would be to count the number of arguments the function expects and compare that with the number of arguments passed:
len(inspect.getargspec(f).args) != len(args)
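For instance, a rough wrong_arguments sketch along those lines; it only compares positional counts, ignoring defaults and *args/**kwargs (getfullargspec is used here since getargspec is deprecated on Python 3):

import inspect

def wrong_arguments(func, args):
    # Sketch only: flag a mismatch between the positional parameters
    # declared by func and the number of arguments supplied.
    spec = inspect.getfullargspec(func)
    return len(spec.args) != len(args)

def f(x, y, z):
    return (x, y, z)

print(wrong_arguments(f, (0,)))        # True
print(wrong_arguments(f, (0, 1, 2)))   # False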
You can retrieve the traceback and look at its length. Try:
import traceback as tb
import sys

def a():
    1/0

def b():
    a()

def c():
    b()

try:
    a()
except:
    print(len(tb.extract_tb(sys.exc_info()[2])))

try:
    b()
except:
    print(len(tb.extract_tb(sys.exc_info()[2])))

try:
    c()
except:
    print(len(tb.extract_tb(sys.exc_info()[2])))
This prints
2
3
4
Well-written wrappers will preserve the name, signature, etc., of the functions they wrap; however, if you have to support wrappers that don't, or if you have situations where you want to catch an error in a wrapper (not just in the final wrapped function), then there is no general solution that will work.
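For illustration, a minimal sketch of a well-behaved wrapper: functools.wraps copies the metadata that introspection relies on, including a __wrapped__ attribute that inspect.signature() follows.

import functools
import inspect

def decorator(func):
    @functools.wraps(func)   # copies __name__ and __doc__, sets __wrapped__
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@decorator
def myfunc1(a, b, c='ok'):
    return (a, b, c)

print(myfunc1.__name__)            # myfunc1, not wrapper
print(inspect.signature(myfunc1))  # (a, b, c='ok'), via __wrapped__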
I know this is an old post, but I stumbled on this question and later on a better answer. That answer depends on a feature new in Python 3, Signature objects.
With that feature you can write:
import inspect

sig = inspect.signature(myfunc)
try:
    sig.bind(*args)
except TypeError:
    return fixit()
else:
    return myfunc(*args)
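A self-contained sketch of the same idea; fixit here is a hypothetical recovery function standing in for the one in the question:

import inspect

def myfunc(a, b, c='ok'):
    return (a, b, c)

def fixit():
    return 'fixed'

def call_checked(func, *args):
    try:
        inspect.signature(func).bind(*args)  # raises TypeError on a bad argument list
    except TypeError:
        return fixit()  # func was never entered, so nothing inside it is masked
    return func(*args)

print(call_checked(myfunc, 1, 2))  # (1, 2, 'ok')
print(call_checked(myfunc, 1))     # 'fixed'

Note that sig.bind() validates the argument list without calling the function at all, and because inspect.signature() follows the __wrapped__ attribute set by functools.wraps, this also works through well-behaved decorators.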
Seems to me what you're trying to do is exactly the problem that exceptions are supposed to solve, i.e. an exception will be caught somewhere up the call stack, so that there's no need to propagate errors upwards manually.
Instead, it sounds like you are trying to do error handling the C (non-exception) way, where the return value of a function indicates either no error (typically 0) or an error (a non-zero value). So I'd try just writing your function to return a value, and have the caller check that return value.
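A minimal sketch of that C-style approach, with made-up names: failure is signalled through the return value, and the caller checks it.

def parse_port(text):
    # Signal failure via the return value instead of letting the exception escape.
    try:
        return int(text)
    except ValueError:
        return None

port = parse_port('80a80')
if port is None:
    port = 8080   # the caller decides how to recover
print(port)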
Related
I find myself frequently running into this sort of problem. I have a function like
def compute(input):
    result = two_hour_computation(input)
    result = post_processing(result)
    return result
and post_processing(result) fails. Now the obvious thing to do is to change the function to
import pickle

def compute(input):
    result = two_hour_computation(input)
    pickle.dump(result, open('intermediate_result.pickle', 'wb'))
    result = post_processing(result)
    return result
but I don't usually remember to write all my functions that way. What I wish I had was a decorator like:
@return_intermediate_results_if_something_goes_wrong
def compute(input):
    result = two_hour_computation(input)
    result = post_processing(result)
    return result
Does something like that exist? I can't find it on Google.
The "outside" of a function has no access to the state of local variables inside the function at runtime whatsoever. So this cannot be solved with a decorator.
In any case, I would argue that the responsibility for catching errors and saving valuable intermediary results should be done explicitly by the programmer. If you "forget" to do that, it must have not been that important to you.
That being said, situations like "do X in case either A, B, or C raises an exception" are a typical use case for context managers. You can write your own context manager that acts as a bucket for your intermediary result (in place of a variable) and performs some save action in case an exception exits it.
Something like this:
from __future__ import annotations

from types import TracebackType
from typing import Generic, Optional, TypeVar

T = TypeVar("T")

class Saver(Generic[T]):
    def __init__(self, initial_value: Optional[T] = None) -> None:
        self._value = initial_value

    def __enter__(self) -> Saver[T]:
        return self

    def __exit__(
        self,
        exc_type: Optional[type[BaseException]],
        exc_val: Optional[BaseException],
        exc_tb: Optional[TracebackType],
    ) -> None:
        if exc_type is not None:
            self.save()

    def save(self) -> None:
        print(f"saved {self.value}!")

    @property
    def value(self) -> T:
        if self._value is None:
            raise RuntimeError
        return self._value

    @value.setter
    def value(self, value: T) -> None:
        self._value = value
Obviously, instead of print(f"saved {self.value}!") inside save you would do something like this:
with open('intermediate_result.pickle', 'wb') as f:
    pickle.dump(self.value, f)
Now all you need to remember is to wrap those actions in a with-statement and assign intermediary results to the value property of your context manager. To demonstrate:
def x_times_2(x: float) -> float:
    return x * 2

def one_over_x_minus_2(x: float) -> float:
    return 1 / (x - 2)

def main() -> None:
    with Saver(1.) as s:
        s.value = x_times_2(s.value)
        s.value = one_over_x_minus_2(s.value)
        print(s.value)

if __name__ == "__main__":
    main()
The output:
saved 2.0!
Traceback (most recent call last):
[...]
return 1 / (x - 2)
~~^~~~~~~~~
ZeroDivisionError: float division by zero
As you can see, the intermediary computed value 2.0 was "saved", even though the next function raised an exception.
It is worth noting that in this example, the context manager calls save only if an exception was encountered, not if the context is exited "peacefully". If you wanted, you could make this unconditional of course.
This may not be as convenient as just slapping a decorator onto a function, but it gets the job done. And IMO the fact that you still have to consciously wrap your important actions in this context is a good thing, because it teaches you to pay special attention to these things.
This is the typical approach of implementing things like database transactions in Python by the way (e.g. in SQLAlchemy).
PS
To be fair, I should probably qualify my initial statement a bit. You could of course just use non-local state in your function, even though that is generally discouraged for good reason. In super simple terms, if in your example result was a global variable (and you stated global result inside the function), this could in fact be solved by a decorator. But I would not recommend that approach because global state is an anti-pattern. (And it would still require you to remember to use whatever global variable you designated for that job every time.)
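Purely for illustration, a sketch of that discouraged global-state variant (all names here are made up):

import pickle

result = None  # module-level state the decorated function writes into

def save_result_on_error(func):
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            # Save whatever the function managed to store globally so far.
            with open('intermediate_result.pickle', 'wb') as f:
                pickle.dump(result, f)
            raise
    return wrapper

@save_result_on_error
def compute(x):
    global result
    result = x * 2      # stands in for two_hour_computation
    return 1 / (x - 1)  # stands in for post_processing; fails for x == 1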
My objective is to raise SystemExit and log the error when my program encounter an unexpected behavior.
I was doing something like:
logger.error('Unexpected behaviour')
raise SystemExit
In order to avoid the repetition in my code I tried to write a decorator to raise SystemExit at each logger.error call:
error = logger.error

def error_from_logger(msg):
    ''' Decorator for logger.error to kill the program at the call '''
    error(msg)
    raise SystemExit

logger.error = error_from_logger
del error_from_logger
So my question is: is my decorator pythonic? And if not, what is the best pythonic way to write it? (I saw people use @something but I don't understand its usage.)
Thanks!
As has been mentioned in the comments, what you have done isn't quite decorating. This would be decorating:
import logging

def call_then_exit(func):
    def called_and_exited(*args, **kwargs):
        func(*args, **kwargs)
        raise SystemExit
    return called_and_exited

logger = logging.getLogger()
logger.error = call_then_exit(logger.error)  # this is the decoration
logger.error("some error has happened")      # prints the message and exits
@decorator is just syntactic sugar which you use when declaring a function. This isn't much use to you if you are using a function/method declared elsewhere.

@call_then_exit  # this is the decoration
def say_hi():
    print('hello')

say_hi()                # prints 'hello' and exits
print('did we exit?')   # we never reach this
Is my decorator pythonic?
Arguably it is not, because monkey-patching is ugly and it adds unexpected behaviour. To be more explicit, you could make a log_error_and_exit() function, or register your own logging class with logging.setLoggerClass(OurLogger) and maybe add a .fatal_error() method. However, I think your solution is OK as-is.
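A sketch of the setLoggerClass() variant; the class and method names are made up, and setLoggerClass() has to be called before the logger is first requested:

import logging

class KillerLogger(logging.Logger):
    def fatal_error(self, msg, *args, **kwargs):
        # Log at ERROR level, then terminate the program explicitly.
        self.error(msg, *args, **kwargs)
        raise SystemExit(1)

logging.setLoggerClass(KillerLogger)
logger = logging.getLogger('myapp')

logger.fatal_error('Unexpected behaviour')  # logs, then raises SystemExit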
I am trying to understand python decorators.
I devised this simple example where I want the decorator function to be a custom log that just prints 'error!' if, for instance, I try to sum_ an int and a str:
def log(fun):
    try:
        return fun(*args)
    except:
        print('error!')

@log
def sum_(a, b):
    return a + b
This prints "error!" as soon as I define the function. I suspect there are multiple wrong things in what I did... I tried to look into the other questions on this topic, but I find them all too intricate to understand how such a simple example should be drafted, especially how to pass the arguments from the original function.
All help and pointers appreciated
That's because you're not forwarding the args from the decorated function into your decorator, and the catch-all except catches the NameError for the undefined args; this is one of the reasons to always specify the exception class.
Here's a modified version of your code with the try-catch removed and the function arguments correctly forwarded:
def log(fun):
    def wrapper(*args):
        print('in decorator!')
        return fun(*args)
    return wrapper

@log
def sum_(a, b):
    return a + b

print(sum_(1, 2))
The reason you're getting an error is simply because args is undefined in your decorator. This isn't anything special about decorators; it's just a regular NameError. For this reason you probably want to restrict your except clause to just TypeError, so that you're not silencing other errors. A full implementation would be:
import functools

def log(fun):
    @functools.wraps(fun)
    def inner(*args):
        try:
            return fun(*args)
        except TypeError:
            print('error!')
    return inner

@log
def sum_(a, b):
    return a + b
It's also a good idea to decorate your inner functions with the functools.wraps decorator, which transfers the name and docstring from the original function to the decorated one.
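With the version above:

>>> sum_.__name__   # preserved by functools.wraps
'sum_'
>>> sum_(1, 2)
3
>>> sum_(1, 'a')    # the TypeError from int + str is caught by the wrapper
error!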
The log decorator, in this case, does not return a function but a value. This may point to an assumption that the decorator function replaces the original function, where in fact it is called to create a replacement function.
A fix which may represent the intention:
def log(fun):
    def my_func(*args):
        try:
            return fun(*args)
        except:
            print('error!')
    return my_func
In this case, my_func is the actual function which is called for sum_(1, 2), and internally, it calls the original function (the original sum_) which the decorator received as an argument.
A trivial example that illustrates the order of the actions:
def my_decorator(fun):
    print('This will be printed first, during module load')
    def my_wrapper(*args):
        print('This will be printed during call, before the original func')
        return fun(*args)
    return my_wrapper
@my_decorator
def func():
    print('This will be printed in the original func')
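Importing this module and then calling func() prints:

This will be printed first, during module load
This will be printed during call, before the original func
This will be printed in the original func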
I'm trying to replicate an error-checking pattern that I often use when programming in C, in Python. I have a function check as follows:
def check(exceptions, msg, handler):
    def wrapped(func, *args, **kwargs):
        try:
            return func(*args, **kwargs)
        except exceptions as err:
            log_err(msg)
            # Do something with handler
    return wrapped
By calling check with appropriate arguments and then calling the result with a function and its arguments, it's possible to reduce a try-except statement to as little as two lines of code without really sacrificing clarity (in my opinion, anyway).
For example
def caller():
    try:
        files = listdir(directory)
    except OSError as err:
        log_err(msg)
        return []
    # Do something with files
becomes
def caller():
    c = check(OSError, msg, handler)
    files = c(listdir, directory)  # caller may return [] here
    # Do something with files
The issue is that in order for this transformation to be transparent to the rest of the program it's necessary for handler to execute exactly as if it were written in the scope of the caller of wrapped. (handler need not be a function object. I'm after an effect, not a method.)
In C I would just use macros and expand everything inline (since that's where I would be writing the code anyway), but Python doesn't have macros. Is it possible to achieve this effect in some other way?
There is no way to write Python code to create an object c so that, when you call c, the function that called it returns. You can only return from a function by literally typing a return statement directly in that function's body (or falling off the end).
You could easily make it so that your "checked" function simply returns the default value. The caller can then use it as normal, but it can't make the caller itself return. You could also write a decorator for caller that catches your specified errors and returns [] instead, but this would catch all OSErrors raised anywhere in caller, not just ones raised by calling a particular function (e.g., listdir). For instance:
def check(exceptions, msg, handler):
    def deco(func):
        def wrapped(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except exceptions as err:
                print("Logged error:", msg)
                return handler(err)
        return wrapped
    return deco

def handler(err):
    return []

@check(ZeroDivisionError, "divide by zero", handler)
def func(x):
    1/0

>>> func(1)
Logged error: divide by zero
[]
The situation you describe seems somewhat unusual. In most cases where it would be worth it to factor out the handling into a "handler" function, that handler function either couldn't know what value it wants the caller to return, or it could know what to return just based on the error, without needing to know what particular line raised the error.
For instance, in your example, you apparently have a function caller that might raise an OSError at many different points. If you only have one place where you need to catch OSError and return [], just write one try/except and it's no big deal. If you want to catch any OSError in the function and return [], decorate it as shown above. What you describe would seem to only be useful in cases where you want to catch more-than-one-but-not-all possible OSErrors raised in caller, and yet in all those cases you want to return the same particular value, which seems rather unusual.
Forgive me if I don't quite understand, but can't you just put the handler inside the calling function?
def caller():
    def handler():
        print("Handler was called")
    # ...
A better way...
Or, if you want to simplify the way you call it, you can use with statements to achieve the desired effect. This will be a lot easier, I think, and it's a lot less of a "hack":
class LogOSError(object):
    def __init__(self, loc):
        self.loc = loc

    def __enter__(self):
        return None

    def __exit__(self, exc_type, exc_value, traceback):
        if isinstance(exc_value, OSError):
            print("{}: OSError: {}".format(self.loc, exc_value))
            return True
You can use it like this:
import os

with LogOSError('example_func'):
    os.unlink('/does/not/exist')
And the output is:
>>> with LogOSError('some_location'):
... os.unlink('/does/not/exist')
...
some_location: OSError: [Errno 2] No such file or directory: '/does/not/exist'
Is it possible for a callee to force its caller to return in python?
If so, is this a good approach? Doesn't it violate the Explicit is better than implicit. sentence of the Zen of Python?
Example:
import inspect

class C(object):
    def callee(self):
        print('in callee')
        caller_name = inspect.stack()[1][3]
        caller = getattr(self, caller_name)
        # force caller to return
        # so that "in caller after callee" never gets printed
        caller.return()  # ???

    def caller(self):
        print('in caller before callee')
        self.callee()
        print('in caller after callee')

c = C()
c.caller()
print('resume')
Output:
in caller before callee
in callee
resume
Finally, thanks to @Andrew Jaffe's suggestion on context managers, I resolved it with a simple decorator.
# In my real code this is not a global variable
REPORT_ERRORS = True

def error_decorator(func):
    """
    Returns an Event instance with the result of the
    decorated function, or the caught exception.
    Or reraises that exception.
    """
    def wrap():
        error = None
        user = None
        try:
            user = func()
        except Exception as e:
            error = e
            if not REPORT_ERRORS:
                raise
        return Event(user, error)
    return wrap
@error_decorator
def login():
    response = fetch_some_service()
    if response.errors:
        # flow ends here
        raise BadResponseError
    user = parse_response(response)
    return user
What's wrong with returning a value from the callee, to be read by the caller, which then behaves accordingly?
Instead of

caller.return()  # ???

write

return False

and in the caller:

def caller(self):
    print('in caller before callee')
    rc = self.callee()
    if not rc:
        return
    print('in caller after callee')
And of course you can raise an exception in the callee and catch it in the caller and behave accordingly, or simply let it fall through (this duplicates mgilson's answer).
Reasons I would argue for the return-value-based check:
Explicit is better than implicit.
The callee should not control the caller's behavior; that's bad programming practice. Instead, the caller should change its behavior based on the callee's result.
In a sense you can do this with exceptions... just raise an exception at the end of callee and don't handle it in caller... Why exactly do you want to do this? It seems like there's probably a better way to do whatever you're attempting...
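For what it's worth, a sketch of that exception-based control flow; StopCaller is a made-up exception name:

class StopCaller(Exception):
    pass

def callee():
    print('in callee')
    raise StopCaller  # unwinds through caller, which doesn't handle it

def caller():
    print('in caller before callee')
    callee()
    print('in caller after callee')  # never reached

try:
    caller()
except StopCaller:
    pass
print('resume')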
As far as creating a jump in the callee, it looks like that is impossible. From the Data Model section on Frame objects (emphasis is mine)
f_lineno is the current line number of the frame — writing to this from within a trace function jumps to the given line (only for the bottom-most frame). A debugger can implement a Jump command (aka Set Next Statement) by writing to f_lineno.
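For completeness, a sketch of a trace function that performs such a jump. This relies on CPython-specific behaviour and only works from within a 'line' trace event, so treat it as illustrative:

import sys

def demo():
    print('one')
    print('two')    # the tracer below jumps over this line
    print('three')

FIRST = demo.__code__.co_firstlineno

def tracer(frame, event, arg):
    # Writing f_lineno is only permitted from inside a trace function.
    if event == 'line' and frame.f_lineno == FIRST + 2:
        frame.f_lineno = FIRST + 3   # skip the print('two') line
    return tracer

sys.settrace(tracer)
demo()   # prints 'one' then 'three'
sys.settrace(None)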