I have code that tries a block of statements and, if an error happens, retries up to a max count.
The structure is reused many times.
It looks something like this:
import logging
import time

a = 10
b = 20
c = 30
result = None
max_loop = 3
interval = 1
for i in range(max_loop):
    try:
        # these are generic statements
        statement_1(a)
        statement_2(b)
        # ...
        result = statement_3(c)
        break
    except Exception as e:
        logging.error(f'retry {i}')
        logging.error(e)
        time.sleep(interval)
else:
    raise Exception(f'Failed after {max_loop} retries')
Is there a way to create a wrapper/decorator/context manager, etc. of the for: try: ... except: raise structure so I can reuse it? Something similar to an inline block or anonymous function in other languages?
I cannot create a function, because the try: block can contain any statements and use any variables. Ideally, the structure should be able to take an arbitrary code block.
example:
repeat_try_and_raise:
    statement_1(a)
    statement_2(b)
    # ...
    result = statement_3(c)

# ...

repeat_try_and_raise:
    statement_4(a)
    statement_5(b)
    # ...
    statement_6(c)
EDIT
Reasons I don't want to use a function to wrap the statements:

- I am lazy and don't want to create a function for a single use every time I reuse this workflow.
- The function would have its own scope, which would make accessing the variables in my code less straightforward.
Here's an idea with a context manager:
a = 10
result = None
max_loop = 2
interval = 0.1

for attempt in repeat_try_and_raise(max_loop, interval):
    with attempt:
        a += 1
        result = 0/0 if a < 13 else 42
print(f'Success with {result=}')
Output for max_loop = 2:
ERROR:root:retry 0
ERROR:root:division by zero
ERROR:root:retry 1
ERROR:root:division by zero
Traceback (most recent call last):
  File ".code.tio", line 25, in <module>
    for attempt in repeat_try_and_raise(max_loop, interval):
  File ".code.tio", line 18, in repeat_try_and_raise
    raise Exception(f'Failed after {max_loop} retries')
Exception: Failed after 2 retries
Output for max_loop = 3:
ERROR:root:retry 0
ERROR:root:division by zero
ERROR:root:retry 1
ERROR:root:division by zero
Success with result=42
Full code (Try it online!):
import logging, time

def repeat_try_and_raise(max_loop, interval):
    class Attempt:
        def __enter__(self):
            pass
        def __exit__(self, exc_type, exc_value, traceback):
            self.e = exc_value
            return True  # suppress the exception; the generator handles it
    for i in range(max_loop):
        attempt = Attempt()
        yield attempt
        if attempt.e is None:
            return
        logging.error(f'retry {i}')
        logging.error(attempt.e)
        time.sleep(interval)
    raise Exception(f'Failed after {max_loop} retries')

a = 10
result = None
max_loop = 3
interval = 0.1

for attempt in repeat_try_and_raise(max_loop, interval):
    with attempt:
        a += 1
        result = 0/0 if a < 13 else 42
print(f'Success with {result=}')
Instead, you may find that just packing all of 'em into another function is what you're after:
import logging

LOGGER = logging.getLogger(__name__)
MAX_ATTEMPTS = 3

def function_collection(arg1, arg2, arg3):
    fn1(arg1)
    fn2(arg2)
    fn3(arg3)

def wrapper(target, target_kwargs, max_attempts=MAX_ATTEMPTS):
    assert max_attempts >= 2
    for attempt in range(max_attempts):
        try:
            return target(**target_kwargs)
        except SubClassException:  # the retryable error type you expect
            continue
        except Exception as ex:
            LOGGER.warning(f"something unexpected went wrong: {repr(ex)}")
    raise OSError(f"no run of target succeeded after {max_attempts} attempts")

wrapper(function_collection, {"arg1": 10, "arg2": 20, "arg3": 30})
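If the objection is mainly about scope, there is a middle ground worth knowing (a hedged sketch, not from the question or the answers above; retrying is an illustrative name): a decorator can execute the decorated function immediately, so the def behaves like a one-shot, nearly anonymous block, and the name it defines ends up bound to the block's result.

import logging, time

def retrying(max_loop=3, interval=1):
    def run(block):
        # run the freshly defined block right away, retrying on failure
        for i in range(max_loop):
            try:
                return block()
            except Exception as e:
                logging.error(f'retry {i}')
                logging.error(e)
                time.sleep(interval)
        raise Exception(f'Failed after {max_loop} retries')
    return run

a = 10

@retrying(max_loop=3, interval=0.1)
def result():
    # executes immediately; afterwards `result` is 42, not a function
    global a
    a += 1
    return 0/0 if a < 13 else 42

print(result)  # 42

The global (or nonlocal, inside a function) declaration is the price for writing back to enclosing variables; plain reads need nothing special.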
It's not clear why you need to "inject" arbitrary code statements into the try block. If you have procedures that are made up of arbitrary statements, wrap the procedures up in a function definition, and decorate the function:
import time
import logging

def retry(*, max_retries: int, sleep_interval: int):
    def retry_wrapper(f):
        def wrapped(*args, **kwargs):
            for i in range(max_retries):
                try:
                    logging.info(f"Trying function {f.__name__}")
                    return f(*args, **kwargs)
                except Exception as error:
                    logging.info(f"retry {i}")
                    logging.error(error)
                    time.sleep(sleep_interval)
            raise Exception(f"Failed after {max_retries} tries")
        return wrapped
    return retry_wrapper
@retry(max_retries=6, sleep_interval=2)
def procedure_1(*args, **kwargs):
    # statement 1
    # statement 2
    # statement 3
    pass

@retry(max_retries=3, sleep_interval=1)
def procedure_2(*args, **kwargs):
    # statement 4
    # statement 5
    # statement 6
    pass
If you trust the source of these arbitrary code statements, you could pass them as strings and use eval() but again, I cannot imagine a scenario where wrapping your procedures into a function isn't more appropriate.
Here's one way you might do this, using the same decorator as above:
@retry(max_retries=3, sleep_interval=1)
def arbitrary_code_runner(*statements):
    for statement in statements:
        eval(statement)
Output:
In [5]: arbitrary_code_runner("print('hello world')", "print(sum(x**2 for x in range(10)))", "print('I am arbitrary code, I am potentially dangerous')")
Trying function arbitrary_code_runner
hello world
285
I am arbitrary code, I am potentially dangerous
The problem with this approach is that you cannot save the results of each statement. If your code statements are mutators, then this is not a problem, but if any of your arbitrary statements rely on the results of other statements, you'll have to nest the function calls (like how I used print(sum(...)) above).
Another technique would be to use anonymous and/or named functions to store your arbitrary statements, and then run each statement one at a time:
@retry(max_retries=3, sleep_interval=1)
def arbitrary_function_runner(*funcs_and_args_and_kwargs):
    for func, args, kwargs in funcs_and_args_and_kwargs:
        print(f"\n  Function {func.__name__} called")
        print(f"    args: {', '.join(map(str, args))}")
        print(f"    kwargs: {', '.join(f'{key}: {value}' for key, value in kwargs.items())}")
        print(f"    result: {func(*args, **kwargs)}")
Which you could then call with an arbitrary number of 3-tuples of (function, args tuple, kwargs dict):
def some_named_function(*args, **kwargs):
    return "some named function's results"

arbitrary_function_runner((lambda *args, **kwargs: "".join(kwargs[arg] for arg in args), ("a", "b", "c"), {"a": "A", "b": "B", "c": "C"}),
                          (lambda x: x**2, (3,), {}),
                          (some_named_function, (1, 2, 3), {"kwarg1": 1, "kwarg_2": 2}))
Output:
Trying function arbitrary_function_runner

  Function <lambda> called
    args: a, b, c
    kwargs: a: A, b: B, c: C
    result: ABC

  Function <lambda> called
    args: 3
    kwargs:
    result: 9

  Function some_named_function called
    args: 1, 2, 3
    kwargs: kwarg1: 1, kwarg_2: 2
    result: some named function's results
In some circumstances, I want to print debug-style output like this:
# module test.py
def f():
    a = 5
    b = 8
    debug(a, b)  # line 18
I want the debug function to print the following:
debug info at test.py: 18
function f
a = 5
b = 8
I am thinking it should be possible by using the inspect module to locate the stack frame, then finding the appropriate line, looking up the source code in that line, and getting the names of the arguments from there. The function name can be obtained by moving one stack frame up. (The values of the arguments are easy to obtain: they are passed directly to the function debug.)
Am I on the right track? Is there any recipe I can refer to?
You could do something along the following lines:
import inspect

def debug(**kwargs):
    st = inspect.stack()[1]
    print('%s:%d %s()' % (st[1], st[2], st[3]))
    for k, v in kwargs.items():
        print('%s = %s' % (k, v))

def f():
    a = 5
    b = 8
    debug(a=a, b=b)  # line 12

f()
This prints out:
test.py:12 f()
a = 5
b = 8
You're generally doing it right, though it would be easier to use AOP for these kinds of tasks. Basically, instead of calling debug every time with every variable, you could just decorate the code with aspects which do certain things upon certain events, like printing the passed variables and the function's name upon entering the function.
Please refer to this site and this old SO post for more info.
Yeah, you are on the right track. You may want to look at inspect.getargspec, which returns a named tuple of args, varargs, keywords, and defaults passed to the function.

import inspect

def f():
    a = 5
    b = 8
    debug(a, b)

def debug(a, b):
    print(inspect.getargspec(debug))

f()
This is really tricky. Let me try to give a more complete answer, reusing this code and the hint about getargspec in Senthil's answer, which got me triggered somehow. Btw, getargspec is deprecated as of Python 3.0 and getfullargspec should be used instead.
This works for me on a Python 3.1.2 both with explicitly calling the debug function and with using a decorator:
# from: https://stackoverflow.com/a/4493322/923794
def getfunc(func=None, uplevel=0):
    """Return tuple of information about a function

    Goes up in the call stack to uplevel+1 and returns information
    about the function found.

    The tuple contains
      name of function, function object, its frame object,
      filename and line number"""
    from inspect import currentframe, getouterframes, getframeinfo
    #for (level, frame) in enumerate(getouterframes(currentframe())):
    #    print(str(level) + ' frame: ' + str(frame))
    caller = getouterframes(currentframe())[1+uplevel]
    # caller is tuple of:
    #   frame object, filename, line number, function
    #   name, a list of lines of context, and index within the context
    func_name = caller[3]
    frame = caller[0]
    from pprint import pprint
    if func:
        func_name = func.__name__
    else:
        func = frame.f_locals.get(func_name, frame.f_globals.get(func_name))
    return (func_name, func, frame, caller[1], caller[2])

def debug_prt_func_args(f=None):
    """Print function name and arguments with their values"""
    from inspect import getargvalues, getfullargspec
    (func_name, func, frame, file, line) = getfunc(func=f, uplevel=1)
    argspec = getfullargspec(func)
    #print(argspec)
    argvals = getargvalues(frame)
    print("debug info at " + file + ': ' + str(line))
    print(func_name + ':' + str(argvals))  # reformat to pretty print arg values here
    return func_name

def df_dbg_prt_func_args(f):
    """Decorator: df_dbg_prt_func_args - Prints function name and arguments"""
    def wrapped(*args, **kwargs):
        debug_prt_func_args(f)
        return f(*args, **kwargs)
    return wrapped
Usage:
@df_dbg_prt_func_args
def leaf_decor(*args, **kwargs):
    """Leaf level, simple function"""
    print("in leaf")

def leaf_explicit(*args, **kwargs):
    """Leaf level, simple function"""
    debug_prt_func_args()
    print("in leaf")

def complex():
    """A complex function"""
    print("start complex")
    leaf_decor(3,4)
    print("middle complex")
    leaf_explicit(12,45)
    print("end complex")

complex()
and prints:
start complex
debug info at debug.py: 54
leaf_decor:ArgInfo(args=[], varargs='args', keywords='kwargs', locals={'args': (3, 4), 'f': <function leaf_decor at 0x2aaaac048d98>, 'kwargs': {}})
in leaf
middle complex
debug info at debug.py: 67
leaf_explicit:ArgInfo(args=[], varargs='args', keywords='kwargs', locals={'args': (12, 45), 'kwargs': {}})
in leaf
end complex
The decorator cheats a bit: since in wrapped we get the same arguments as the function itself, it doesn't matter that we find and report the ArgSpec of wrapped in getfunc and debug_prt_func_args. This code could be beautified a bit, but it works alright for the simple debug test cases I used.
Another trick you can do: if you uncomment the for-loop in getfunc, you can see that inspect can give you the "context", which really is the line of source code where a function got called. This code is obviously not showing the content of any variable given to your function, but sometimes it already helps to know the variable name used one level above your called function.
As you can see, with the decorator you don't have to change the code inside the function.
Probably you'll want to pretty-print the args. I've left the raw print (and also a commented-out print statement) in the function so it's easier to play around with.
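Building on that context trick, here is a hedged sketch that recovers the bare argument names from a call like debug(a, b), which is what the question originally asked for. It assumes a simple, single-line call site and is fragile by design:

import inspect

def debug(*values):
    caller = inspect.stack()[1]
    # code_context is None when no source is available (e.g. in a REPL)
    line = caller.code_context[0]
    # take the text between 'debug(' and the closing parenthesis
    inside = line.split('debug(', 1)[1].rsplit(')', 1)[0]
    names = [name.strip() for name in inside.split(',')]
    print(f'debug info at {caller.filename}: {caller.lineno}')
    print(f'function {caller.function}')
    for name, value in zip(names, values):
        print(f'{name} = {value}')

def f():
    a = 5
    b = 8
    debug(a, b)

f()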
In debugging a Python script, I'd really like to know the entire call stack for my entire program. An ideal situation would be if there were a command-line flag for python that would cause Python to print all function names as they are called (I checked man Python2.7, but didn't find anything of this sort).
Because of the number of functions in this script, I'd prefer not to add a print statement to the beginning of each function and/or class, if possible.
An intermediate solution would be to use PyDev's debugger, place a couple breakpoints and check the call stack for given points in my program, so I'll use this approach for the time being.
I'd still prefer to see a complete list of all functions called throughout the life of the program, if such a method exists.
You can do this with a trace function (props to Spacedman for improving the original version of this to trace returns and use some nice indenting):
def tracefunc(frame, event, arg, indent=[0]):
    if event == "call":
        indent[0] += 2
        print("-" * indent[0] + "> call function", frame.f_code.co_name)
    elif event == "return":
        print("<" + "-" * indent[0], "exit function", frame.f_code.co_name)
        indent[0] -= 2
    return tracefunc

import sys
sys.setprofile(tracefunc)
main()  # or whatever kicks off your script
Note that a function's code object usually has the same name as the associated function, but not always, since functions can be created dynamically. Unfortunately, Python doesn't track the function objects on the stack (I've sometimes fantasized about submitting a patch for this). Still, this is certainly "good enough" in most cases.
If this becomes an issue, you could extract the "real" function name from the source code—Python does track the filename and line number—or ask the garbage collector to find out which function object refers to the code object. There could be more than one function sharing the code object, but any of their names might be good enough.
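For the record, the garbage-collector route is short; a hedged sketch (functions_for_code is an illustrative helper, not a standard API):

import gc

def functions_for_code(code_obj):
    # every function object references its __code__, so the GC
    # can enumerate the candidate function objects for us
    return [obj for obj in gc.get_referrers(code_obj)
            if getattr(obj, '__code__', None) is code_obj]

def f():
    pass

print(functions_for_code(f.__code__))  # [<function f at 0x...>]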
Coming back to revisit this four years later, it behooves me to mention that in Python 2.6 and later, you can get better performance by using sys.setprofile() rather than sys.settrace(). The same trace function can be used; it's just that the profile function is called only when a function is entered or exited, so what's inside the function executes at full speed.
Another good tool to be aware of is the trace module. There are three options for showing function names.
Example foo.py:
def foo():
    bar()

def bar():
    print("in bar!")

foo()
Using -l/--listfuncs to list functions:
$ python -m trace --listfuncs foo.py
in bar!
functions called:
filename: /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/trace.py, modulename: trace, funcname: _unsettrace
filename: foo.py, modulename: foo, funcname: <module>
filename: foo.py, modulename: foo, funcname: bar
filename: foo.py, modulename: foo, funcname: foo
Using -t/--trace to list lines as they are executed.
$ python -m trace --trace foo.py
--- modulename: foo, funcname: <module>
foo.py(1): def foo():
foo.py(4): def bar():
foo.py(7): foo()
--- modulename: foo, funcname: foo
foo.py(2): bar()
--- modulename: foo, funcname: bar
foo.py(5): print("in bar!")
in bar!
Using -T/--trackcalls to list what calls what
$ python -m trace --trackcalls foo.py
in bar!
calling relationships:

*** /usr/lib/python3.8/trace.py ***
  --> foo.py
    trace.Trace.runctx -> foo.<module>

*** foo.py ***
    foo.<module> -> foo.foo
    foo.foo -> foo.bar
I took kindall's answer and built on it. I made the following module:
"""traceit.py
Traces the call stack.
Usage:
import sys
import traceit
sys.setprofile(traceit.traceit)
"""
import sys
WHITE_LIST = {'trade'} # Look for these words in the file path.
EXCLUSIONS = {'<'} # Ignore <listcomp>, etc. in the function name.
def tracefunc(frame, event, arg):
if event == "call":
tracefunc.stack_level += 1
unique_id = frame.f_code.co_filename+str(frame.f_lineno)
if unique_id in tracefunc.memorized:
return
# Part of filename MUST be in white list.
if any(x in frame.f_code.co_filename for x in WHITE_LIST) \
and \
not any(x in frame.f_code.co_name for x in EXCLUSIONS):
if 'self' in frame.f_locals:
class_name = frame.f_locals['self'].__class__.__name__
func_name = class_name + '.' + frame.f_code.co_name
else:
func_name = frame.f_code.co_name
func_name = '{name:->{indent}s}()'.format(
indent=tracefunc.stack_level*2, name=func_name)
txt = '{: <40} # {}, {}'.format(
func_name, frame.f_code.co_filename, frame.f_lineno)
print(txt)
tracefunc.memorized.add(unique_id)
elif event == "return":
tracefunc.stack_level -= 1
tracefunc.memorized = set()
tracefunc.stack_level = 0
Sample usage
import sys
import traceit
sys.setprofile(traceit.tracefunc)
Sample output:
API.getFills() # C:\Python37-32\lib\site-packages\helpers\trade\tws3.py, 331
API._get_req_id() # C:\Python37-32\lib\site-packages\helpers\trade\tws3.py, 1053
API._wait_till_done() # C:\Python37-32\lib\site-packages\helpers\trade\tws3.py, 1026
---API.execDetails() # C:\Python37-32\lib\site-packages\helpers\trade\tws3.py, 1187
-------Fill.__init__() # C:\Python37-32\lib\site-packages\helpers\trade\mdb.py, 256
--------Price.__init__() # C:\Python37-32\lib\site-packages\helpers\trade\mdb.py, 237
-deserialize_order_ref() # C:\Python37-32\lib\site-packages\helpers\trade\mdb.py, 644
--------------------Port() # C:\Python37-32\lib\site-packages\helpers\trade\mdb.py, 647
API.commissionReport() # C:\Python37-32\lib\site-packages\helpers\trade\tws3.py, 1118
Features:
Ignores Python language internal functions.
Ignores repeated function calls (optional).
Uses sys.setprofile() instead of sys.settrace() for speed.
There are a few options. If a debugger isn't enough, you can set a trace function using sys.settrace(). This function will essentially be called on every line of Python code executed, but it is easy to identify the function calls -- see the linked documentation.
You might also be interested in the trace module, though it doesn't do exactly what you asked for. Be sure to look into the --trackcalls option.
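A minimal sketch of the sys.settrace() route, reporting only 'call' events (main() stands in for whatever kicks off your script):

import sys

def call_logger(frame, event, arg):
    if event == 'call':
        code = frame.f_code
        print('call: %s (%s:%d)' % (code.co_name, code.co_filename, frame.f_lineno))
    return None  # no local trace function, so per-line events are skipped

sys.settrace(call_logger)
main()  # or whatever kicks off your script
sys.settrace(None)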
import traceback

def foo():
    traceback.print_stack()

def bar():
    foo()

def car():
    bar()

car()
File "<string>", line 1, in <module>
File "C:\Python27\lib\idlelib\run.py", line 97, in main
ret = method(*args, **kwargs)
File "C:\Python27\lib\idlelib\run.py", line 298, in runcode
exec code in self.locals
File "<pyshell#494>", line 1, in <module>
File "<pyshell#493>", line 2, in car
File "<pyshell#490>", line 2, in bar
File "<pyshell#486>", line 2, in foo
traceback
The hunter tool does exactly this, and more. For example, given:
test.py:
def foo(x):
    print(f'foo({x})')

def bar(x):
    foo(x)

bar('abc')
The output looks like:
$ PYTHONHUNTER='module="__main__"' python test.py
test.py:1 call => <module>()
test.py:1 line def foo(x):
test.py:4 line def bar(x):
test.py:7 line bar('abc')
test.py:4 call => bar(x='abc')
test.py:5 line foo(x)
test.py:1 call => foo(x='abc')
test.py:2 line print(f'foo({x})')
foo(abc)
test.py:2 return <= foo: None
test.py:5 return <= bar: None
test.py:7 return <= <module>: None
It also provides a pretty flexible query syntax that allows specifying module, file/lineno, function, etc which helps because the default output (which includes standard library function calls) can be pretty big.
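The same filtering works from inside the program, too. A hedged sketch based on hunter's documented trace()/stop() API (check the project docs for the exact filter keywords; main() stands in for your entry point):

import hunter

hunter.trace(module='__main__')  # only report events from the main script
main()                           # or whatever kicks off your script
hunter.stop()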
You could use settrace, as outlined here: Tracing python code. Use the version near the end of the page. I stick the code of that page into my code to see exactly what lines are executed when my code is running. You can also filter so that you only see the names of functions called.
You can also use a decorator for specific functions you want to trace (with their arguments):
import sys
from functools import wraps

class TraceCalls(object):
    """ Use as a decorator on functions that should be traced. Several
        functions can be decorated - they will all be indented according
        to their call depth.
    """
    def __init__(self, stream=sys.stdout, indent_step=2, show_ret=False):
        self.stream = stream
        self.indent_step = indent_step
        self.show_ret = show_ret

        # This is a class attribute since we want to share the indentation
        # level between different traced functions, in case they call
        # each other.
        TraceCalls.cur_indent = 0

    def __call__(self, fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            indent = ' ' * TraceCalls.cur_indent
            argstr = ', '.join(
                [repr(a) for a in args] +
                ["%s=%s" % (a, repr(b)) for a, b in kwargs.items()])
            self.stream.write('%s%s(%s)\n' % (indent, fn.__name__, argstr))

            TraceCalls.cur_indent += self.indent_step
            ret = fn(*args, **kwargs)
            TraceCalls.cur_indent -= self.indent_step

            if self.show_ret:
                self.stream.write('%s--> %s\n' % (indent, ret))
            return ret
        return wrapper
Just import this file and add a @TraceCalls() before the function/method you want to trace.
A variation on kindall's answer, returning just the called functions in a package.
def tracefunc(frame, event, arg, indent=[0]):
    package_name = __name__.split('.')[0]
    if event == "call" and (package_name in str(frame)):
        indent[0] += 2
        print("-" * indent[0] + "> call function", frame.f_code.co_name)
    return tracefunc

import sys
sys.settrace(tracefunc)
E.g., in a package called Dog, this should only show you called functions that were defined in the Dog package.
Suppose I have a function like f(a, b, c=None). The aim is to call the function like f(*args, **kwargs), and then construct a new set of args and kwargs such that:
If the function had default values, I should be able to acquire their values. For example, if I call it like f(1, 2), I should be able to get the tuple (1, 2, None) and/or the dictionary {'c': None}.
If the value of any of the arguments was modified inside the function, get the new value. For example, if I call it like f(1, 100000, 3) and the function does if b > 500: b = 5, modifying the local variable, I should be able to get the tuple (1, 5, 3).
The aim here is to create a decorator that finishes the job of a function. The original function acts as a preamble setting up the data for the actual execution, and the decorator finishes the job.
Edit: I'm adding an example of what I'm trying to do. It's a module for making proxies for other classes.
class Spam(object):
    """A fictional class that we'll make a proxy for"""
    def eggs(self, start, stop, step):
        """A fictional method"""
        return range(start, stop, step)

class ProxyForSpam(clsproxy.Proxy):
    proxy_for = Spam

    @clsproxy.signature_preamble
    def eggs(self, start, stop, step=1):
        start = max(0, start)
        stop = min(100, stop)
And then, we'll have that:
ProxyForSpam().eggs(-10, 200) -> Spam().eggs(0, 100, 1)
ProxyForSpam().eggs(3, 4) -> Spam().eggs(3, 4, 1)
There are two recipes available here, one which requires an external library and another that uses only the standard library. They don't quite do what you want, in that they actually modify the function being executed to obtain its locals() rather than obtaining the locals() after execution, which is impossible, since the local stack no longer exists after the function finishes execution.
Another option is to see what debuggers, such as WinPDB or even the pdb module do. I suspect they use the inspect module (possibly along with others), to get the frame inside which a function is executing and retrieve locals() that way.
EDIT: After reading some code in the standard library, the file you want to look at is probably bdb.py, which should be wherever the rest of your Python standard library is. Specifically, look at set_trace() and related functions. This will give you an idea of how the Python debugger breaks into the class. You might even be able to use it directly. To get the frame to pass to set_trace() look at the inspect module.
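To make the debugger idea concrete, here is a hedged sketch (call_and_capture_locals is an illustrative name) that uses sys.settrace() to copy a function's locals at its 'return' event -- the last moment they still exist:

import sys

def call_and_capture_locals(func, *args, **kwargs):
    captured = {}
    def tracer(frame, event, arg):
        if event == 'call' and frame.f_code is func.__code__:
            def local_tracer(frame, event, arg):
                if event == 'return':
                    captured.update(frame.f_locals)
            return local_tracer  # trace only the frame we care about
    old = sys.gettrace()
    sys.settrace(tracer)
    try:
        result = func(*args, **kwargs)
    finally:
        sys.settrace(old)
    return result, captured

def f(a, b, c=None):
    if b > 500:
        b = 5
    return a + b

result, final_locals = call_and_capture_locals(f, 1, 100000)
print(final_locals)  # {'a': 1, 'b': 5, 'c': None}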
I've stumbled upon this very need today and wanted to share my solution.
import sys
def call_function_get_frame(func, *args, **kwargs):
    """
    Calls the function *func* with the specified arguments and keyword
    arguments and snatches its local frame before it actually executes.
    """
    frame = None
    trace = sys.gettrace()
    def snatch_locals(_frame, name, arg):
        nonlocal frame
        if frame is None and name == 'call':
            frame = _frame
            sys.settrace(trace)
        return trace
    sys.settrace(snatch_locals)
    try:
        result = func(*args, **kwargs)
    finally:
        sys.settrace(trace)
    return frame, result
The idea is to use sys.settrace() to catch the frame of the next 'call' event. Tested on CPython 3.6.
Example usage
import types

def namespace_decorator(func):
    frame, result = call_function_get_frame(func)
    try:
        module = types.ModuleType(func.__name__)
        module.__dict__.update(frame.f_locals)
        return module
    finally:
        del frame

@namespace_decorator
def mynamespace():
    eggs = 'spam'
    class Bar:
        def hello(self):
            print("Hello, World!")

assert mynamespace.eggs == 'spam'
mynamespace.Bar().hello()
I don't see how you could do this non-intrusively -- after the function is done executing, it doesn't exist any more -- there's no way you can reach inside something that doesn't exist.
If you can control the functions that are being used, you can do an intrusive approach like
def fn(x, y, z, vars):
    '''
    vars is an empty dict that we use to pass things back to the caller
    '''
    x += 1
    y -= 1
    z *= 2
    vars.update(locals())

>>> updated = {}
>>> fn(1, 2, 3, updated)
>>> print(updated)
{'y': 1, 'x': 2, 'z': 6, 'vars': {...}}
>>>
...or you can just require that those functions return locals() -- as @Thomas K asks above, what are you really trying to do here?
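For completeness, the return-locals() convention from that last sentence is trivial but worth seeing once:

def fn(x, y, z):
    x += 1
    y -= 1
    z *= 2
    return locals()

updated = fn(1, 2, 3)
print(updated)  # {'x': 2, 'y': 1, 'z': 6}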
Witchcraft below, read on at your OWN risk(!)
I have no clue what you want to do with this; it's possible, but it's an awful hack...
Anyways, I HAVE WARNED YOU(!), be lucky if such things don't work in your favorite language...
from inspect import getargspec, ismethod
import inspect

def main():
    @get_modified_values
    def foo(a, f, b):
        print a, f, b
        a = 10
        if a == 2:
            return a
        f = 'Hello World'
        b = 1223

    e = 1
    c = 2
    foo(e, 1000, b = c)

# intercept a function and retrieve the modified values
def get_modified_values(target):
    def wrapper(*args, **kwargs):
        # get the applied args
        kargs = getcallargs(target, *args, **kwargs)

        # get the source code
        src = inspect.getsource(target)
        lines = src.split('\n')

        # oh noes string patching of the function
        unindent = len(lines[0]) - len(lines[0].lstrip())
        indent = lines[0][:len(lines[0]) - len(lines[0].lstrip())]

        lines[0] = ''
        lines[1] = indent + 'def _temp(_args, ' + lines[1].split('(')[1]
        setter = []
        for k in kargs.keys():
            setter.append('_args["%s"] = %s' % (k, k))

        i = 0
        while i < len(lines):
            indent = lines[i][:len(lines[i]) - len(lines[i].lstrip())]

            if lines[i].find('return ') != -1 or lines[i].find('return\n') != -1:
                for e in setter:
                    lines.insert(i, indent + e)
                i += len(setter)
            elif i == len(lines) - 2:
                for e in setter:
                    lines.insert(i + 1, indent + e)
                break
            i += 1

        for i in range(0, len(lines)):
            lines[i] = lines[i][unindent:]

        data = '\n'.join(lines) + "\n"

        # setup variables
        frame = inspect.currentframe()
        loc = inspect.getouterframes(frame)[1][0].f_locals
        glob = inspect.getouterframes(frame)[1][0].f_globals
        loc['_temp'] = None

        # compile patched function and call it
        func = compile(data, '<witchstuff>', 'exec')
        eval(func, glob, loc)
        loc['_temp'](kargs, *args, **kwargs)

        # there you go....
        print kargs
        # >> {'a': 10, 'b': 1223, 'f': 'Hello World'}

    return wrapper

# from python 2.7 inspect module
def getcallargs(func, *positional, **named):
    """Get the mapping of arguments to values.

    A dict is returned, with keys the function argument names (including the
    names of the * and ** arguments, if any), and values the respective bound
    values from 'positional' and 'named'."""
    args, varargs, varkw, defaults = getargspec(func)
    f_name = func.__name__
    arg2value = {}

    # The following closures are basically because of tuple parameter unpacking.
    assigned_tuple_params = []
    def assign(arg, value):
        if isinstance(arg, str):
            arg2value[arg] = value
        else:
            assigned_tuple_params.append(arg)
            value = iter(value)
            for i, subarg in enumerate(arg):
                try:
                    subvalue = next(value)
                except StopIteration:
                    raise ValueError('need more than %d %s to unpack' %
                                     (i, 'values' if i > 1 else 'value'))
                assign(subarg, subvalue)
            try:
                next(value)
            except StopIteration:
                pass
            else:
                raise ValueError('too many values to unpack')
    def is_assigned(arg):
        if isinstance(arg, str):
            return arg in arg2value
        return arg in assigned_tuple_params
    if ismethod(func) and func.im_self is not None:
        # implicit 'self' (or 'cls' for classmethods) argument
        positional = (func.im_self,) + positional
    num_pos = len(positional)
    num_total = num_pos + len(named)
    num_args = len(args)
    num_defaults = len(defaults) if defaults else 0
    for arg, value in zip(args, positional):
        assign(arg, value)
    if varargs:
        if num_pos > num_args:
            assign(varargs, positional[-(num_pos-num_args):])
        else:
            assign(varargs, ())
    elif 0 < num_args < num_pos:
        raise TypeError('%s() takes %s %d %s (%d given)' % (
            f_name, 'at most' if defaults else 'exactly', num_args,
            'arguments' if num_args > 1 else 'argument', num_total))
    elif num_args == 0 and num_total:
        raise TypeError('%s() takes no arguments (%d given)' %
                        (f_name, num_total))
    for arg in args:
        if isinstance(arg, str) and arg in named:
            if is_assigned(arg):
                raise TypeError("%s() got multiple values for keyword "
                                "argument '%s'" % (f_name, arg))
            else:
                assign(arg, named.pop(arg))
    if defaults:    # fill in any missing values with the defaults
        for arg, value in zip(args[-num_defaults:], defaults):
            if not is_assigned(arg):
                assign(arg, value)
    if varkw:
        assign(varkw, named)
    elif named:
        unexpected = next(iter(named))
        if isinstance(unexpected, unicode):
            unexpected = unexpected.encode(sys.getdefaultencoding(), 'replace')
        raise TypeError("%s() got an unexpected keyword argument '%s'" %
                        (f_name, unexpected))
    unassigned = num_args - len([arg for arg in args if is_assigned(arg)])
    if unassigned:
        num_required = num_args - num_defaults
        raise TypeError('%s() takes %s %d %s (%d given)' % (
            f_name, 'at least' if defaults else 'exactly', num_required,
            'arguments' if num_required > 1 else 'argument', num_total))
    return arg2value

main()
Output:
1 1000 2
{'a': 10, 'b': 1223, 'f': 'Hello World'}
There you go... I'm not responsible for any small children that get eaten by demons or the like (or if it breaks on complicated functions).
PS: The inspect module is the pure EVIL.
Since you are trying to manipulate variables in one function and do some job based on those variables in another function, the cleanest way to do it is to have these variables be an object's attributes.
It could be a dictionary - one defined inside the decorator - so that access to it inside the decorated function would be as a "nonlocal" variable. That cleans up the default-parameter passing of this dictionary that @bgporter proposed:
def eggs(self, a, b, c=None):
    # nonlocal parms  ## uncomment in Python 3
    parms["a"] = a
    ...
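One caveat before a working sketch: a plain nonlocal cannot actually see a name defined inside the decorator, since the decorated function's enclosing scope is fixed where the function is written. This hedged variant (with_parms and parms are illustrative names, not from the answer above) shares the dict through the function's globals instead:

def with_parms(func):
    parms = {}
    func.__globals__['parms'] = parms  # crude: inject the shared dict as a global
    def wrapper(*args, **kwargs):
        func(*args, **kwargs)  # the preamble fills parms
        return parms           # a real decorator would finish the job with it
    return wrapper

@with_parms
def eggs(a, b, c=None):
    parms["a"] = max(0, a)
    parms["b"] = min(100, b)
    parms["c"] = 1 if c is None else c

print(eggs(-10, 200))  # {'a': 0, 'b': 100, 'c': 1}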
To be even more clean, you probably should have all these parameters as attributes of the instance (self) - so that no "magical" variable has to be used inside the decorated function.
As for doing it "magically", without having the parameters set as attributes of a certain object explicitly, nor having the decorated function return the parameters themselves (which is also an option) - that is, to have it work transparently with any decorated function - I can't think of a way that does not involve manipulating the bytecode of the function itself.
If you can think of a way to make the wrapped function raise an exception at return time, you could trap the exception and check the execution trace.
If it is so important to do it automatically that you consider altering the function bytecode an option, feel free to ask me further.
I am writing a small app that has to perform some 'sanity checks' before entering execution. (E.g. of a sanity check: test whether a certain path is readable / writable / exists.)
The code:
import logging
import os
import shutil
import sys

from paths import PATH

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger('sf.core.sanity')

def sanity_access(path, mode):
    ret = os.access(path, mode)
    logfunc = log.debug if ret else log.warning
    loginfo = (os.access.__name__, path, mode, ret)
    logfunc('%s(\'%s\', %s)==%s' % loginfo)
    return ret

def sanity_check(bool_func, true_func, false_func):
    ret = bool_func()
    (logfunc, execfunc) = (log.debug, true_func) if ret else \
                          (log.warning, false_func)
    logfunc('exec: %s', execfunc.__name__)
    execfunc()

def sanity_checks():
    sanity_check(lambda: sanity_access(PATH['userhome'], os.F_OK), \
                 lambda: None, sys.exit)
My question is related to the sanity_check function.
This function takes 3 parameters (bool_func, true_func, false_func). If bool_func (which is the test function, returning a boolean value) succeeds, true_func gets executed; otherwise, false_func gets executed.
1) lambda: None is a little lame, because, for example, if sanity_access returns True, lambda: None gets executed, and the output printed will be:
DEBUG:sf.core.sanity:access('/home/nomemory', 0)==True
DEBUG:sf.core.sanity:exec: <lambda>
So it won't be very clear in the logs what function got executed; the log will only contain <lambda>. Is there a default function that does nothing and can be passed as a parameter? Is there a way to return the name of the first function that is executed inside a lambda?
Or a way not to log that "exec" if 'nothing' is sent as a parameter?
What's the None / do-nothing equivalent for functions?
sanity_check(lambda: sanity_access(PATH['userhome'], os.F_OK), \
             <do nothing, but show something more useful than <lambda>>, sys.exit)
An additional question: why does lambda: pass not work, while lambda: None does?
What's with all the lambdas that serve no purpose? Well, maybe optional arguments will help you a bit:
def sanity_check(test, name='undefined', ontrue=None, onfalse=None):
    if test:
        log.debug(name)
        if ontrue is not None:
            ontrue()
    else:
        log.warn(name)
        if onfalse is not None:
            onfalse()

def sanity_checks():
    sanity_check(sanity_access(PATH['userhome'], os.F_OK), 'test home',
                 onfalse=sys.exit)
But you are really overcomplicating things.
update
I would normally delete this post because THC4k saw through all the complexity and rewrote your function correctly. However in a different context, the K combinator trick might come in handy, so I'll leave it up.
There is no builtin that does what you want, AFAIK. I believe that you want the K combinator (the link came up on another question), which can be encoded as
def K_combinator(x, name):
    def f():
        return x
    f.__name__ = name
    return f

none_function = K_combinator(None, 'none_function')
print(none_function())
Of course, if this is just a one-off, then you could just do
def none_function():
    return None
But then you don't get to say "K combinator". Another advantage of the 'K_combinator' approach is that you can pass it to functions, for example,
foo(call_back1, K_combinator(None, 'name_for_logging'))
As for your second question: only expressions are allowed in a lambda; pass is a statement. Hence, lambda: pass fails.
You can slightly simplify your call to sanity check by removing the lambda around the first argument.
def sanity_check(b, true_func, false_func):
    if b:
        logfunc = log.debug
        execfunc = true_func
    else:
        logfunc = log.warning
        execfunc = false_func
    logfunc('exec: %s', execfunc.__name__)
    execfunc()

def sanity_checks():
    sanity_check(sanity_access(PATH['userhome'], os.F_OK),
                 K_combinator(None, 'none_func'), sys.exit)
This is more readable (largely from expanding the ternary operator into an if). The bool_func wasn't doing anything, because sanity_check wasn't adding any arguments to the call; might as well just call it instead of wrapping it in a lambda.
You might want to rethink this.
class SanityCheck( object ):
    def __call__( self ):
        if self.check():
            logger.debug(...)
            self.ok()
        else:
            logger.warning(...)
            self.not_ok()
    def check( self ):
        return True
    def ok( self ):
        pass
    def not_ok( self ):
        sys.exit(1)

class PathSanityCheck(SanityCheck):
    path = "/path/to/resource"
    def check( self ):
        return os.access( self.path, os.F_OK )

class AnotherPathSanityCheck(SanityCheck):
    path = "/another/path"

def startup():
    checks = ( PathSanityCheck(), AnotherPathSanityCheck() )
    for c in checks:
        c()
Callable objects can simplify your life.
>>> import dis
>>> f = lambda: None
>>> dis.dis(f)
  1           0 LOAD_CONST               0 (None)
              3 RETURN_VALUE
>>> g = lambda: Pass
>>>
>>>
>>> dis.dis(g)
  1           0 LOAD_GLOBAL              0 (Pass)
              3 RETURN_VALUE
>>> g = lambda: pass
  File "<stdin>", line 1
    g = lambda: pass
                   ^
SyntaxError: invalid syntax
Actually, what you want is a function which does nothing, but has a __name__ which is useful to the log. The lambda function is doing exactly what you want, but execfunc.__name__ is giving "<lambda>". Try one of these:
def nothing_func():
    return

def ThisAppearsInTheLog():
    return
You can also put your own attributes on functions:
def log_nothing():
    return

log_nothing.log_info = "nothing interesting"
Then change execfunc.__name__ to getattr(execfunc,'log_info', '')
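Wired into the question's sanity_check, that suggestion would look something like this (a sketch; the fallback to __name__ is my addition):

import logging
import sys

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger('sf.core.sanity')

def sanity_check(bool_func, true_func, false_func):
    ret = bool_func()
    (logfunc, execfunc) = (log.debug, true_func) if ret else \
                          (log.warning, false_func)
    # prefer a human-readable label, fall back to the function's own name
    logfunc('exec: %s', getattr(execfunc, 'log_info', execfunc.__name__))
    execfunc()

def log_nothing():
    return
log_nothing.log_info = "nothing interesting"

sanity_check(lambda: True, log_nothing, sys.exit)  # logs "exec: nothing interesting"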