Trace specific functions in Python to capture high-level execution flow

Python offers tracing through its trace module, and there are custom solutions like this. But these approaches capture low-level execution inside nearly every library you use, which isn't very useful for anything other than deep-dive debugging.
It would be nice to have something that captures only the highest-level functions laid out in your pipeline. For example, if I had:
def funct1():
    res = funct2()
    print(res)

def funct2():
    factor = 3
    res = funct3(factor)
    return res

def funct3(factor):
    res = 1 + 100*factor
    return res
...and called:
funct1()
...it would be nice to capture:
function order:
- funct1
- funct2
- funct3
I have looked at:
trace
tracefunc
sys.settrace
trace.py
I am happy to manually mark the functions inside the scripts, like we do with docstrings. Is there a way to add "hooks" to functions and then track them as they get called?

You can always use a decorator to track which functions are called. Here is an example that allows you to keep track of what nesting level the function is called at:
class Tracker:
    level = 0

    def __init__(self, indent=2):
        self.indent = indent

    def __call__(self, fn):
        def wrapper(*args, **kwargs):
            print(' ' * (self.indent * self.level) + '-' + fn.__name__)
            self.level += 1
            out = fn(*args, **kwargs)
            self.level -= 1
            return out
        return wrapper
track = Tracker()
@track
def funct1():
    res = funct2()
    print(res)

@track
def funct2():
    factor = 3
    res = funct3(factor)
    return res

@track
def funct3(factor):
    res = 1 + 100*factor
    return res
It uses the class variable level to keep track of how deeply nested the current call is, and simply prints out the function name with an indent proportional to the nesting level. So calling funct1 gives:
funct1()
# prints:
-funct1
  -funct2
    -funct3
# returns:
301
Depending on how you want to save the output, you can route it through the logging module instead of print.
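For example, a minimal sketch (the file and logger names here are arbitrary choices): configure logging once, then swap the print call inside wrapper for a logger call:

import logging

logging.basicConfig(filename='trace.log', level=logging.DEBUG)
logger = logging.getLogger('tracker')

# inside Tracker.__call__'s wrapper, instead of print:
# logger.debug(' ' * (self.indent * self.level) + '-' + fn.__name__)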


Python Error With Decorator: 'NoneType' object is not callable

Apologies this is a very broad question.
The code below is a fragment of something found on the web. The key thing I am interested in is the line beginning @protected - I am wondering what this does and how it does it? It appears to be checking that a valid user is logged in prior to executing the do_upload_ajax function. That looks like a really effective way to do user authentication. I don't understand the mechanics of this @ syntax though - can someone steer me in the right direction and explain how this would be implemented in the real world? Python 3 answers please, thanks.
@bottle.route('/ajaxupload', method='POST')
@protected(check_valid_user)
def do_upload_ajax():
    data = bottle.request.files.get('data')
    if data.file:
        size = 0
Take a good look at this enormous answer/novel. It's one of the best explanations I've come across.
The shortest explanation that I can give is that decorators wrap your function in another function that returns a function.
This code, for example:
@decorate
def foo(a):
    print(a)

would be equivalent to this code if you remove the decorator syntax:

def bar(a):
    print(a)

foo = decorate(bar)
Decorators sometimes take parameters, which are passed to the dynamically generated functions to alter their output.
Another term you should read up on is closure, as that is the concept that allows decorators to work.
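For instance, here is a minimal closure (the names are illustrative): the inner function keeps access to a variable from the enclosing scope even after the outer function has returned, which is exactly how a decorator's wrapper remembers the function it wraps:

def make_adder(n):
    def add(x):
        return x + n  # 'n' is captured from the enclosing scope
    return add

add5 = make_adder(5)
print(add5(10))  # 15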
A decorator is a function that takes a function as its only parameter and returns a function. This is helpful for "wrapping" functionality with the same code over and over again.
We use @func_name to specify a decorator to be applied to another function.
The following example adds a welcome message to the string returned by fun(). It takes fun() as a parameter and returns addWelcome():
def decorate_message(fun):
    # Nested function
    def addWelcome(site_name):
        return "Welcome to " + fun(site_name)
    # Decorator returns a function
    return addWelcome

@decorate_message
def site(site_name):
    return site_name

print(site("StackOverflow"))

Out[0]: "Welcome to StackOverflow"
Decorators can also be useful to attach data (or add attributes) to functions.
A decorator function to attach data to func
def attach_data(func):
    func.data = 3
    return func

@attach_data
def add(x, y):
    return x + y

print(add(2, 3))
# 5
print(add.data)
# 3
The decorator syntax:
@protected(check_valid_user)
def do_upload_ajax():
    "..."
is equivalent to
def do_upload_ajax():
    "..."

do_upload_ajax = protected(check_valid_user)(do_upload_ajax)
but without the need to repeat the same name three times. There is nothing more to it.
For example, here's a possible implementation of protected():
import functools

def protected(check):
    def decorator(func):  # it is called with a function to be decorated
        @functools.wraps(func)  # preserve original name, docstring, etc.
        def wrapper(*args, **kwargs):
            check(bottle.request)  # raise an exception if the check fails
            return func(*args, **kwargs)  # call the original function
        return wrapper  # this will be assigned to the decorated name
    return decorator
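For completeness, a check function passed to protected() might look like the following. This is a hypothetical sketch (the question doesn't show the real check_valid_user); it assumes the login state is carried in a cookie:

def check_valid_user(request):
    # hypothetical check: treat a missing 'user' cookie as "not logged in"
    if not request.get_cookie('user'):
        raise bottle.HTTPError(401, 'Not logged in')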
A decorator is a function that takes another function and extends the behavior of the latter without explicitly modifying it. Python allows "nested" functions, i.e. a function within another function. Python also allows you to return functions from other functions.
Let us say your original function was called orig_func().
def orig_func():  # definition
    print("Wheee!")

orig_func()  # calling
Run this file and orig_func() gets called, printing "Wheee!".
Now, let us say we want to modify this function to do something before calling it and also something after.
So, we can do it like this, either by option 1 or by option 2.
--------option 1----------
def orig_func():
    print("Wheee!")

print("do something before")
orig_func()
print("do something after")
Note that we have not modified the orig_func. Instead, we have made changes outside this function.
But maybe we want to make changes in such a way that when orig_func is called, we are able to do something before and after calling the function. So, this is what we do.
--------option 2----------
def orig_func():
    print("do something before")
    print("Wheee!")
    print("do something after")

orig_func()
We have achieved our purpose. But at what cost? We had to modify the code of orig_func. This may not always be possible, especially when someone else has written the function. Yet we want that when this function is called, it is modified in such a way that something can be done before and/or after it. The decorator helps us do this without modifying the code of orig_func. We create a decorator and can keep the same name as before, so that when our function is called, it is transparently modified. We go through the following steps.
a. Define the decorator. In the decorator,
1) write code to do something before orig_func, if you want to.
2) call the orig_func, to do its job.
3) write code to do something after orig_func, if you want to.
b. Create the decorator
c. Call the decorator.
Here is how we do it.
=============================================================
#-------- orig_func already given ----------
def orig_func():
    print("Wheee!")

#------ write decorator ------------
def my_decorator(some_function):
    def my_wrapper():
        print("do something before")  # do something before, if you want to
        some_function()
        print("do something after")  # do something after, if you want to
    return my_wrapper

#------ create decorator and call orig func --------
orig_func = my_decorator(orig_func)  # create decorator, modify functioning
orig_func()  # call modified orig_func
===============================================================
Note that orig_func has now been modified through the decorator. So when you call orig_func(), it will run my_wrapper, which performs the three steps already outlined.
Thus you have modified the functioning of orig_func without modifying its code; that is the purpose of the decorator.
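The same rebinding can be written with Python's decorator syntax, which is just shorthand for the manual assignment above:

@my_decorator
def orig_func():
    print("Wheee!")

orig_func()  # runs my_wrapper, which calls the original function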
A decorator is just a function that takes another function as an argument.
Simple Example:
def get_function_name_dec(func):
    def wrapper(*arg):
        function_returns = func(*arg)  # what our function returns
        return func.__name__ + ": " + function_returns
    return wrapper

@get_function_name_dec
def hello_world():
    return "Hi"
print(hello_world())
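Running this prints:

hello_world: Hi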
I'm going to use code to answer this.
What I need: to modify math.sin()'s definition so it always adds 1 to the sine of a value.
Problem: I do not have math.sin()'s code.
Solution: decorators.
import math

def decorator_function(sin_function_to_modify):
    def sin_function_modified(value):
        # You can do something BEFORE the math.sin == sin_function_to_modify call
        value_from_sin_function = sin_function_to_modify(value)
        # You can do something AFTER the math.sin == sin_function_to_modify call
        new_value = value_from_sin_function + 1
        return new_value
    return sin_function_modified

math.sin = decorator_function(math.sin)
print(math.sin(90))
Return of math.sin(90) before applying the decorator: 0.8939966636005579
Return of math.sin(90) after applying the decorator: 1.8939966636005579
A decorator is a function which takes another function as an argument to change its result or to give it some effect.
For example, with the code below:
# 4 + 6 = 10
def sum(num1, num2):
    return num1 + num2

result = sum(4, 6)
print(result)
We can get the result below:
10
Next, we created minus_2() to subtract 2 from the result of sum() as shown below:
# (4 + 6) - 2 = 8
def minus_2(func):  # Here
    def core(*args, **kwargs):
        result = func(*args, **kwargs)
        return result - 2
    return core

def sum(num1, num2):
    return num1 + num2

f1 = minus_2(sum)
result = f1(4, 6)
print(result)
In short:
# ...
result = minus_2(sum)(4, 6)
print(result)
Then, we can get the result below:
8
Now, we can use minus_2() as a decorator with sum() as shown below:
# (4 + 6) - 2 = 8
def minus_2(func):
    def core(*args, **kwargs):
        result = func(*args, **kwargs)
        return result - 2
    return core

@minus_2  # Here
def sum(num1, num2):
    return num1 + num2

result = sum(4, 6)
print(result)
Then, we can get the same result below:
8
Next, we created times_10() to multiply the result of minus_2() by 10 as shown below:
# ((4 + 6) - 2) x 10 = 80
def minus_2(func):
    def core(*args, **kwargs):
        result = func(*args, **kwargs)
        return result - 2
    return core

def times_10(func):  # Here
    def core(*args, **kwargs):
        result = func(*args, **kwargs)
        return result * 10
    return core

def sum(num1, num2):
    return num1 + num2

f1 = minus_2(sum)
f2 = times_10(f1)
result = f2(4, 6)
print(result)
In short:
# ...
result = times_10(minus_2(sum))(4, 6)
print(result)
Then, we can get the result below:
80
Now, we can use times_10() as a decorator on sum(), placed above @minus_2, as shown below:
# ((4 + 6) - 2) x 10 = 80
def minus_2(func):
    def core(*args, **kwargs):
        result = func(*args, **kwargs)
        return result - 2
    return core

def times_10(func):
    def core(*args, **kwargs):
        result = func(*args, **kwargs)
        return result * 10
    return core

@times_10  # Here
@minus_2
def sum(num1, num2):
    return num1 + num2

result = sum(4, 6)
print(result)
Then, we can get the same result below:
80
Don't forget that if a function has multiple decorators as above, they are applied from the bottom to the top, as shown below:
# ((4 + 6) - 2) x 10 = 80
@times_10  # applied 2nd.
@minus_2   # applied 1st.
def sum(num1, num2):
    return num1 + num2
Then, we can get the same result below as we've already seen it in the above example:
80
So, if we change the order of them as shown below:
# ((4 + 6) * 10) - 2 = 98
@minus_2   # applied 2nd.
@times_10  # applied 1st.
def sum(num1, num2):
    return num1 + num2
Then, we can get the different result below:
98
Lastly, we created the code below in Django to run test() in a transaction with @tran:
# "views.py"
from django.db import transaction
from django.http import HttpResponse
def tran(func): # Here
def core(request, *args, **kwargs):
with transaction.atomic():
return func(request, *args, **kwargs)
return core
#tran # Here
def test(request):
person = Person.objects.all()
print(person)
return HttpResponse("Test")

Python Runtime Profiler?

Most Python profilers are made for Python programs or scripts. In my case I'm working with a Python plugin for a third-party app (Blender 3D), so the profiling needs to be sampled in real time while the user is interacting with the plugin.
I'm currently trying an injection strategy, which consists of procedurally searching through all plugin modules and injecting a profiler wrapper into every single function.
See below for what my current profiler looks like.
I'm wondering if there are other profilers out there that can be used for run-time scenarios such as plugins.
class ModuleProfiler:

    # profiler is running?
    allow = False  # must be True in order to start the profiler
    activated = False  # read-only indication that the profiler has been activated

    # please define your plugin main module here
    plugin_main_module = "MyBlenderPlugin"

    # function call registry
    registry = {}

    # ignore parameters, typically ui functions/modules
    ignore_fcts = [
        "draw",
        "foo",
    ]
    ignore_module = [
        "interface_drawing",
    ]

    event_prints = True  # print all events?

    @classmethod
    def print_registry(cls):
        """print all registered benchmarks"""
        # generate averages
        for k, v in cls.registry.copy().items():
            cls.registry[k]["averagetime"] = v["runtime"] / v["calls"]
        print("")
        print("PROFILER: PRINTING OUTCOME")
        sorted_registry = dict(sorted(cls.registry.items(), key=lambda item: item[1]["runtime"], reverse=False))
        for k, v in sorted_registry.items():
            print("\n", k, ":")
            for a, val in v.items():
                print(" " * 6, a, ":", val)
        return None

    @classmethod
    def update_registry(cls, fct, exec_time=0):
        """update internal benchmark with new data"""
        key = f"{fct.__module__}.{fct.__name__}"
        r = cls.registry.get(key)
        if (r is None):
            cls.registry[key] = {}
            cls.registry[key]["calls"] = 0
            cls.registry[key]["runtime"] = 0
            r = cls.registry[key]
        r["calls"] += 1
        r["runtime"] += exec_time
        return None

    @classmethod
    def profile_wrap(cls, fct):
        """wrap any function with our benchmark & call-counter"""
        # ignore some functions?
        if (fct.__name__ in cls.ignore_fcts):
            return fct
        import functools
        import time

        @functools.wraps(fct)
        def inner(*args, **kwargs):
            t = time.time()
            r = fct(*args, **kwargs)
            cls.update_registry(fct, exec_time=time.time() - t)
            if cls.event_prints:
                print(f"PROFILER : {fct.__module__}.{fct.__name__} : {time.time() - t}")
            return r

        return inner

    @classmethod
    def start(cls):
        """inject the wrapper into every function of every sub-module of our plugin;
        used for benchmark or debugging purposes only"""
        if (not cls.allow):
            return None
        cls.activated = True
        import types
        import sys

        def is_function(obj):
            """check if the given object is a function"""
            return isinstance(obj, types.FunctionType)

        print("")
        # for all modules in sys.modules
        for mod_k, mod in sys.modules.copy().items():
            # separate module component names
            mod_list = mod_k.split('.')
            # filter out what isn't ours
            if (mod_list[0] != cls.plugin_main_module):
                continue
            # ignore some modules?
            if any([m in cls.ignore_module for m in mod_list]):
                continue
            print("PROFILER_SEARCH : ", mod_k)
            # for each object found in the module
            for ele_k, ele in mod.__dict__.items():
                # if it does not have a name, skip
                if (not hasattr(ele, "__name__")):
                    continue
                # we have a global function
                elif is_function(ele):
                    print(f"  INJECT LOCAL_FUNCTION: {mod_k}.{ele_k}")
                    mod.__dict__[ele_k] = cls.profile_wrap(ele)
                # then we have a homebrewed class? search for class functions
                # the class function implementation is not flawless, need to investigate issue(s)
                elif repr(ele).startswith(f"<class '{cls.plugin_main_module}."):
                    for class_k, class_e in ele.__dict__.items():
                        if is_function(class_e):
                            print(f"  INJECT CLASS_FUNCTION: {mod_k}.{ele_k}.{class_k}")
                            setattr(mod.__dict__[ele_k], class_k, cls.profile_wrap(class_e))  # class.__dict__ is a mapping proxy, need to assign this way
                continue
        print("")
        return None

ModuleProfiler.allow = True
ModuleProfiler.plugin_main_module = "MyModule"
ModuleProfiler.start()
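For what it's worth, the standard library's cProfile can also be driven programmatically at run time (enabled and disabled around the interactive period), which may be an alternative to hand-rolled injection. A minimal sketch:

import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()
# ... plugin code runs while the user interacts ...
profiler.disable()
pstats.Stats(profiler).sort_stats('cumulative').print_stats(10)  # top 10 by cumulative time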

decompose "with" statements to various functions

I wrote a generic framework that helps me benchmark critical sections of code.
Here is an explanation of the framework, and at the end is the problem I am facing, with a few ideas I have for solutions.
Basically, I am looking for more elegant solutions.
Suppose I have a function that does this (in pseudo code):
#Pseudo Code - Don't expect it to run
def foo():
    do_begin()
    do_critical()
    some_value = do_end()
    return some_value
I want to run the "do_critical" section many times in a loop and measure the time, but still get the return value.
So I wrote a BenchMarker class whose API is something like this:
#Pseudo Code - Don't expect it to run
bm = BenchMarker(first=do_begin, critical=do_critical, end=do_end)
bm.start_benchmarking()
returned_value = bm.returned_value
benchmark_result = bm.time
This BenchMarker internally performs the following:
#Pseudo Code - Don't expect it to run
class BenchMarker:
    def __init__(self):
        .....
    def start_benchmarking(self):
        first()
        t0 = take_time
        for i in range(n_loops):
            critical()
        t1 = take_time
        self.time = (t1 - t0) / n_loops
        value = end()
        self.returned_value = value
It's important to mention that I'm also able to pass context between the first, critical, and end functions, but I omitted it for simplicity as it is not the gist of my question.
This framework is working like a charm until the following use case:
I have the following code
#Pseudo Code - Don't expect it to run
def bar():
    do_begin()
    with some_context_manager() as ctx:
        do_critical()
    some_value = do_end()
    return some_value
Now, after this long introduction (sorry ...), I am getting to the real question.
I don't want to run the "with statement" in the time-measuring loop, but the critical code needs the context manager.
So what I basically want is equivalent to the following decomposition of bar:
first -> do_begin() + "what happens in the with before the with body"
critical -> do_critical()
end -> "what happens after the with body" + do_end()
Two solutions I thought of (but I don't like):
Solution 1
Mimic what with does under the hood:
At the end of first(), create the context manager object and run its __enter__() method.
At the start of end(), call the context manager's __exit__() method.
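A minimal sketch of that idea, reusing the pseudocode names from the question (it assumes the context manager needs no special exception handling; contextlib.ExitStack from the standard library wraps this same pattern more robustly):

def first():
    do_begin()
    ctx = some_context_manager()
    ctx.__enter__()  # what 'with' does on entry
    return ctx

def end(ctx):
    ctx.__exit__(None, None, None)  # what 'with' does on a normal (no-exception) exit
    return do_end()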
Solution 2
Framework Enhancement to handle CM
Add a "context work mode" (a flag, whatever ...) to the framework, with which the start_benchmarking flow will look like this:
#Pseudo Code - Don't expect it to run
def start_benchmarking(self):
    first()  # including instantiating the context manager
    ctx = get_the_context_manager_created_in_first()
    with ctx ...:
        t0 = take_time
        for i in range(n_loops):
            critical()
        t1 = take_time
        self.time = (t1 - t0) / n_loops
    value = end()
    self.returned_value = value
Any other, more elegant, solutions?
This is way over-complicated, and I cannot quite figure out why you'd actually want to do this, but assuming that you have reasons, just create a function that does your timing for you:
import time

def run_func_n_times(n_times, func, *args, **kwargs):
    start = time.time()
    for _ in range(n_times):
        res = func(*args, **kwargs)
    return res, (time.time() - start) / n_times
No need for a class, just a simple function:
def example():
    do_begin()
    print('look, i am here')
    with ctx() as blah:
        res, timed = run_func_n_times(27, f, foo, bar)
    do_end()
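One small refinement worth considering: for benchmarking, time.perf_counter() is generally preferable to time.time(), since it uses the highest-resolution clock available and is not affected by system clock adjustments:

start = time.perf_counter()
# ... code under test ...
elapsed = time.perf_counter() - start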

Getting Python's nosetests results in a tearDown() method

I want to be able to get the result of a particular test method and output it inside the teardown method, while using the nose test runner.
There is a very good example here.
But unfortunately, running nosetests example.py does not work, since nose doesn't seem to like the fact that the run method in the superclass is being overridden:
AttributeError: 'ResultProxy' object has no attribute 'wasSuccessful'
Caveat: the following doesn't actually access the test during the tearDown, but it does access each result.
You might want to write a nose plugin (see the API documentation here). The method that you are probably interested in is afterTest(), which is run... after the test. :) Though, depending on your exact application, handleError()/handleFailure() or finalize() might actually be more useful.
Here is an example plugin that accesses the result of a test immediately after it is executed.
from nose.plugins import Plugin
import logging

log = logging.getLogger('nose.plugins.testnamer')

class ReportResults(Plugin):
    def __init__(self, *args, **kwargs):
        super(ReportResults, self).__init__(*args, **kwargs)
        self.passes = 0
        self.failures = 0

    def afterTest(self, test):
        if test.passed:
            self.passes += 1
        else:
            self.failures += 1

    def finalize(self, result):
        print("%d successes, %d failures" % (self.passes, self.failures))
This trivial example merely reports the number of passes and failures (like the link you included), but I'm sure you can extend it to do something more interesting (here's another fun idea). To use this, make sure that it is installed in Nose (or load it into a custom runner), and then activate it with --with-reportresults.
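For instance, one way to load the plugin into a custom runner (a sketch; ReportResults as defined above, the module name is hypothetical):

import nose
from myplugins import ReportResults  # hypothetical module holding the plugin

nose.main(addplugins=[ReportResults()])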
If you are OK with adding some boilerplate code to the tests, something like the following might work.
In MyTest1, tearDown is called at the end of each test, and the value of self.result has been set to a tuple containing the method name and a dictionary (but you could set that to whatever you like). The inspect module is used to get the method name, so tearDown knows which test just ran.
In MyTest2, all the results are saved in a dictionary (results), which you can do with what you like in the tearDownClass method.
import inspect
import unittest

class MyTest1(unittest.TestCase):
    result = None

    def tearDown(self):
        print("tearDown:", self.result)

    def test_aaa(self):
        frame = inspect.currentframe()
        name = inspect.getframeinfo(frame).function
        del frame
        self.result = (name, None)
        x = 1 + 1
        self.assertEqual(x, 2)
        self.result = (name, dict(x=x))

    def test_bbb(self):
        frame = inspect.currentframe()
        name = inspect.getframeinfo(frame).function
        del frame
        self.result = (name, None)
        # Intentional fail.
        x = -1
        self.assertEqual(x, 0)
        self.result = (name, dict(x=x))

class MyTest2(unittest.TestCase):
    results = {}

    @classmethod
    def tearDownClass(cls):
        print("tearDownClass:", cls.results)

    def test_aaa(self):
        frame = inspect.currentframe()
        name = inspect.getframeinfo(frame).function
        del frame
        self.results[name] = None
        x = 1 + 1
        self.assertEqual(x, 2)
        self.results[name] = dict(x=x)

    def test_bbb(self):
        frame = inspect.currentframe()
        name = inspect.getframeinfo(frame).function
        del frame
        self.results[name] = None
        x = -1
        self.assertEqual(x, 0)
        self.results[name] = dict(x=x)

if __name__ == '__main__':
    unittest.main()

Printing names of variables passed to a function

In some circumstances, I want to print debug-style output like this:
# module test.py
def f():
    a = 5
    b = 8
    debug(a, b)  # line 18
I want the debug function to print the following:
debug info at test.py: 18
function f
a = 5
b = 8
I am thinking it should be possible by using the inspect module to locate the stack frame, then finding the appropriate line, looking up the source code in that line, and getting the names of the arguments from there. The function name can be obtained by moving one stack frame up. (The values of the arguments are easy to obtain: they are passed directly to the function debug.)
Am I on the right track? Is there any recipe I can refer to?
You could do something along the following lines:
import inspect

def debug(**kwargs):
    st = inspect.stack()[1]
    print('%s:%d %s()' % (st[1], st[2], st[3]))
    for k, v in kwargs.items():
        print('%s = %s' % (k, v))

def f():
    a = 5
    b = 8
    debug(a=a, b=b)  # line 12

f()
This prints out:
test.py:12 f()
a = 5
b = 8
You're generally doing it right, though it would be easier to use AOP for these kinds of tasks. Basically, instead of calling "debug" every time with every variable, you could just decorate the code with aspects which do certain things upon certain events, like printing the passed variables and the function's name upon entering it.
Please refer to this site and this old SO post for more info.
Yeah, you are on the right track. You may want to look at inspect.getargspec, which returns a named tuple of the args, varargs, keywords, and defaults passed to the function.
import inspect

def f():
    a = 5
    b = 8
    debug(a, b)

def debug(a, b):
    print(inspect.getargspec(debug))

f()
This is really tricky. Let me try and give a more complete answer reusing this code, and the hint about getargspec in Senthil's answer, which got me triggered somehow. Btw, getargspec is deprecated since Python 3.0 and getfullargspec should be used instead.
This works for me on Python 3.1.2, both with explicitly calling the debug function and with using a decorator:
# from: https://stackoverflow.com/a/4493322/923794
def getfunc(func=None, uplevel=0):
    """Return tuple of information about a function

    Goes up in the call stack to uplevel+1 and returns information
    about the function found.

    The tuple contains:
    name of function, function object, its frame object,
    filename and line number"""
    from inspect import currentframe, getouterframes, getframeinfo
    #for (level, frame) in enumerate(getouterframes(currentframe())):
    #    print(str(level) + ' frame: ' + str(frame))
    caller = getouterframes(currentframe())[1 + uplevel]
    # caller is tuple of:
    #   frame object, filename, line number, function
    #   name, a list of lines of context, and index within the context
    func_name = caller[3]
    frame = caller[0]
    from pprint import pprint
    if func:
        func_name = func.__name__
    else:
        func = frame.f_locals.get(func_name, frame.f_globals.get(func_name))
    return (func_name, func, frame, caller[1], caller[2])

def debug_prt_func_args(f=None):
    """Print function name and arguments with their values"""
    from inspect import getargvalues, getfullargspec
    (func_name, func, frame, file, line) = getfunc(func=f, uplevel=1)
    argspec = getfullargspec(func)
    #print(argspec)
    argvals = getargvalues(frame)
    print("debug info at " + file + ': ' + str(line))
    print(func_name + ':' + str(argvals))  # reformat to pretty print arg values here
    return func_name

def df_dbg_prt_func_args(f):
    """Decorator: df_dbg_prt_func_args - Prints function name and arguments"""
    def wrapped(*args, **kwargs):
        debug_prt_func_args(f)
        return f(*args, **kwargs)
    return wrapped
Usage:
@df_dbg_prt_func_args
def leaf_decor(*args, **kwargs):
    """Leaf level, simple function"""
    print("in leaf")

def leaf_explicit(*args, **kwargs):
    """Leaf level, simple function"""
    debug_prt_func_args()
    print("in leaf")

def complex():
    """A complex function"""
    print("start complex")
    leaf_decor(3, 4)
    print("middle complex")
    leaf_explicit(12, 45)
    print("end complex")

complex()
and prints:
start complex
debug info at debug.py: 54
leaf_decor:ArgInfo(args=[], varargs='args', keywords='kwargs', locals={'args': (3, 4), 'f': <function leaf_decor at 0x2aaaac048d98>, 'kwargs': {}})
in leaf
middle complex
debug info at debug.py: 67
leaf_explicit:ArgInfo(args=[], varargs='args', keywords='kwargs', locals={'args': (12, 45), 'kwargs': {}})
in leaf
end complex
The decorator cheats a bit: since in wrapped we get the same arguments as the function itself, it doesn't matter that we find and report the ArgSpec of wrapped in getfunc and debug_prt_func_args. This code could be beautified a bit, but it works alright for the simple debug test cases I used.
Another trick you can do: if you uncomment the for loop in getfunc, you can see that inspect can give you the "context", which really is the line of source code where a function got called. This code is obviously not showing the content of any variable given to your function, but sometimes it already helps to know the variable name used one level above your called function.
As you can see, with the decorator you don't have to change the code inside the function.
Probably you'll want to pretty print the args. I've left the raw print (and also a commented out print statement) in the function so it's easier to play around with.
