In debugging a Python script, I'd really like to know the entire call stack for my entire program. An ideal situation would be if there were a command-line flag for Python that would cause it to print all function names as they are called (I checked the python2.7 man page, but didn't find anything of this sort).
Because of the number of functions in this script, I'd prefer not to add a print statement to the beginning of each function and/or class, if possible.
An intermediate solution would be to use PyDev's debugger, place a couple breakpoints and check the call stack for given points in my program, so I'll use this approach for the time being.
I'd still prefer to see a complete list of all functions called throughout the life of the program, if such a method exists.
You can do this with a trace function (props to Spacedman for improving the original version of this to trace returns and use some nice indenting):
def tracefunc(frame, event, arg, indent=[0]):
    if event == "call":
        indent[0] += 2
        print("-" * indent[0] + "> call function", frame.f_code.co_name)
    elif event == "return":
        print("<" + "-" * indent[0], "exit function", frame.f_code.co_name)
        indent[0] -= 2
    return tracefunc

import sys
sys.setprofile(tracefunc)

main()  # or whatever kicks off your script
Note that a function's code object usually has the same name as the associated function, but not always, since functions can be created dynamically. Unfortunately, Python doesn't track the function objects on the stack (I've sometimes fantasized about submitting a patch for this). Still, this is certainly "good enough" in most cases.
If this becomes an issue, you could extract the "real" function name from the source code (Python does track the filename and line number), or ask the garbage collector to find out which function object refers to the code object. There could be more than one function sharing the code object, but any of their names might be good enough.
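For example, a rough sketch of that garbage-collector approach (the helper name here is mine, not part of the answer):

import gc
import types

def functions_for_code(code):
    # Find every function object whose __code__ is this code object.
    return [obj for obj in gc.get_referrers(code)
            if isinstance(obj, types.FunctionType) and obj.__code__ is code]

def example():
    pass

print(functions_for_code(example.__code__))   # [<function example at 0x...>]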
Coming back to revisit this four years later, it behooves me to mention that in Python 2.6 and later, you can get better performance by using sys.setprofile() rather than sys.settrace(). The same trace function can be used; it's just that the profile function is called only when a function is entered or exited, so what's inside the function executes at full speed.
Another good tool to be aware of is the trace module. It offers three options for showing function names.
Example foo.py:
def foo():
    bar()

def bar():
    print("in bar!")

foo()
Using -l/--listfuncs to list functions:
$ python -m trace --listfuncs foo.py
in bar!
functions called:
filename: /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/trace.py, modulename: trace, funcname: _unsettrace
filename: foo.py, modulename: foo, funcname: <module>
filename: foo.py, modulename: foo, funcname: bar
filename: foo.py, modulename: foo, funcname: foo
Using -t/--trace to list lines as they are executed.
$ python -m trace --trace foo.py
--- modulename: foo, funcname: <module>
foo.py(1): def foo():
foo.py(4): def bar():
foo.py(7): foo()
--- modulename: foo, funcname: foo
foo.py(2): bar()
--- modulename: foo, funcname: bar
foo.py(5): print("in bar!")
in bar!
Using -T/--trackcalls to list what calls what:
$ python -m trace --trackcalls foo.py
in bar!
calling relationships:
*** /usr/lib/python3.8/trace.py ***
--> foo.py
trace.Trace.runctx -> foo.<module>
*** foo.py ***
foo.<module> -> foo.foo
foo.foo -> foo.bar
I took kindall's answer and built on it. I made the following module:
"""traceit.py
Traces the call stack.
Usage:
import sys
import traceit
sys.setprofile(traceit.traceit)
"""
import sys

WHITE_LIST = {'trade'}  # Look for these words in the file path.
EXCLUSIONS = {'<'}      # Ignore <listcomp>, etc. in the function name.


def tracefunc(frame, event, arg):
    if event == "call":
        tracefunc.stack_level += 1

        unique_id = frame.f_code.co_filename + str(frame.f_lineno)
        if unique_id in tracefunc.memorized:
            return

        # Part of the filename MUST be in the white list.
        if any(x in frame.f_code.co_filename for x in WHITE_LIST) \
                and not any(x in frame.f_code.co_name for x in EXCLUSIONS):

            if 'self' in frame.f_locals:
                class_name = frame.f_locals['self'].__class__.__name__
                func_name = class_name + '.' + frame.f_code.co_name
            else:
                func_name = frame.f_code.co_name

            func_name = '{name:->{indent}s}()'.format(
                indent=tracefunc.stack_level * 2, name=func_name)
            txt = '{: <40} # {}, {}'.format(
                func_name, frame.f_code.co_filename, frame.f_lineno)
            print(txt)

            tracefunc.memorized.add(unique_id)

    elif event == "return":
        tracefunc.stack_level -= 1


tracefunc.memorized = set()
tracefunc.stack_level = 0
Sample usage:
import sys
import traceit

sys.setprofile(traceit.tracefunc)
Sample output:
API.getFills() # C:\Python37-32\lib\site-packages\helpers\trade\tws3.py, 331
API._get_req_id() # C:\Python37-32\lib\site-packages\helpers\trade\tws3.py, 1053
API._wait_till_done() # C:\Python37-32\lib\site-packages\helpers\trade\tws3.py, 1026
---API.execDetails() # C:\Python37-32\lib\site-packages\helpers\trade\tws3.py, 1187
-------Fill.__init__() # C:\Python37-32\lib\site-packages\helpers\trade\mdb.py, 256
--------Price.__init__() # C:\Python37-32\lib\site-packages\helpers\trade\mdb.py, 237
-deserialize_order_ref() # C:\Python37-32\lib\site-packages\helpers\trade\mdb.py, 644
--------------------Port() # C:\Python37-32\lib\site-packages\helpers\trade\mdb.py, 647
API.commissionReport() # C:\Python37-32\lib\site-packages\helpers\trade\tws3.py, 1118
Features:
Ignores Python language internal functions.
Ignores repeated function calls (optional).
Uses sys.setprofile() instead of sys.settrace() for speed.
There are a few options. If a debugger isn't enough, you can set a trace function using sys.settrace(). This function will essentially be called on every line of Python code executed, but it is easy to identify the function calls -- see the linked documentation.
You might also be interested in the trace module, though it doesn't do exactly what you asked for. Be sure to look into the --trackcalls option.
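If you'd rather drive the trace module from code instead of the command line, a minimal sketch might look like this (main() stands in for whatever starts your program):

import sys
import trace

tracer = trace.Trace(
    ignoredirs=[sys.prefix, sys.exec_prefix],  # skip the standard library
    trace=0,        # don't print every executed line
    countfuncs=1,   # just record which functions were called
)
tracer.run('main()')                 # main() must be defined in __main__
tracer.results().write_results()     # prints the "functions called:" report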
import traceback

def foo():
    traceback.print_stack()

def bar():
    foo()

def car():
    bar()

car()
File "<string>", line 1, in <module>
File "C:\Python27\lib\idlelib\run.py", line 97, in main
ret = method(*args, **kwargs)
File "C:\Python27\lib\idlelib\run.py", line 298, in runcode
exec code in self.locals
File "<pyshell#494>", line 1, in <module>
File "<pyshell#493>", line 2, in car
File "<pyshell#490>", line 2, in bar
File "<pyshell#486>", line 2, in foo
The hunter tool does exactly this, and more. For example, given:
test.py:
def foo(x):
    print(f'foo({x})')

def bar(x):
    foo(x)

bar('abc')
The output looks like:
$ PYTHONHUNTER='module="__main__"' python test.py
test.py:1 call => <module>()
test.py:1 line def foo(x):
test.py:4 line def bar(x):
test.py:7 line bar('abc')
test.py:4 call => bar(x='abc')
test.py:5 line foo(x)
test.py:1 call => foo(x='abc')
test.py:2 line print(f'foo({x})')
foo(abc)
test.py:2 return <= foo: None
test.py:5 return <= bar: None
test.py:7 return <= <module>: None
It also provides a pretty flexible query syntax that allows specifying module, file/lineno, function, etc., which helps because the default output (which includes standard library function calls) can be pretty big.
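hunter can also be turned on from code instead of the environment variable; a minimal sketch based on its trace()/stop() API (treat the exact filter keyword as an assumption, and main() as a stand-in for your entry point):

import hunter

hunter.trace(module="__main__")   # only report events from your own module
main()                            # whatever kicks off your script
hunter.stop()                     # detach the tracer when done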
You could use settrace, as outlined here: Tracing python code. Use the version near the end of the page. I stick the code of that page into my code to see exactly what lines are executed when my code is running. You can also filter so that you only see the names of functions called.
You can also use a decorator for specific functions you want to trace (with their arguments):
import sys
from functools import wraps

class TraceCalls(object):
    """ Use as a decorator on functions that should be traced. Several
        functions can be decorated - they will all be indented according
        to their call depth.
    """
    def __init__(self, stream=sys.stdout, indent_step=2, show_ret=False):
        self.stream = stream
        self.indent_step = indent_step
        self.show_ret = show_ret

        # This is a class attribute since we want to share the indentation
        # level between different traced functions, in case they call
        # each other.
        TraceCalls.cur_indent = 0

    def __call__(self, fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            indent = ' ' * TraceCalls.cur_indent
            argstr = ', '.join(
                [repr(a) for a in args] +
                ["%s=%s" % (a, repr(b)) for a, b in kwargs.items()])
            self.stream.write('%s%s(%s)\n' % (indent, fn.__name__, argstr))

            TraceCalls.cur_indent += self.indent_step
            ret = fn(*args, **kwargs)
            TraceCalls.cur_indent -= self.indent_step

            if self.show_ret:
                self.stream.write('%s--> %s\n' % (indent, ret))
            return ret
        return wrapper
Just import this file and add a @TraceCalls() before the function/method you want to trace.
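A small illustrative example (functions invented for the demonstration, not from the original answer):

@TraceCalls(show_ret=True)
def add(a, b):
    return a + b

@TraceCalls(show_ret=True)
def add3(a, b, c):
    return add(a, add(b, c))

add3(1, 2, 3)
# Output:
# add3(1, 2, 3)
#   add(2, 3)
#   --> 5
#   add(1, 5)
#   --> 6
# --> 6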
A variation on kindall's answer, returning just the functions called in a given package.
def tracefunc(frame, event, arg, indent=[0]):
    package_name = __name__.split('.')[0]
    if event == "call" and (package_name in str(frame)):
        indent[0] += 2
        print("-" * indent[0] + "> call function", frame.f_code.co_name)
    return tracefunc

import sys
sys.settrace(tracefunc)
For example, in a package called Dog, this should only show you calls to functions that were defined in the Dog package.
Related
a.py
import d
d.funcme('blah')
d.py
import sys
import Errors

def argcheck(in_=(), out=(type(None),)):
    def _argcheck(function):
        # do something here
        def __argcheck(*args, **kw):
            print '+++++++++ checking types before calling the func'
            # do something here
            res = function(*args, **kw)
            return res
        return __argcheck
    return _argcheck

@argcheck((str))   # <-----
def funcme(name):
    try:
        f = sys._getframe(1)
    except ValueError, err:
        raise Errors.UserError(err)  # stack too deep

    filename, lineno = f.f_globals['__name__'], f.f_lineno
    print filename, lineno
OUTPUT without the argcheck decorator (comment out the @argcheck((str)) line):
$ python a.py
__main__ 3
OUTPUT with argcheck decorator:
$ python a.py
+++++++++ checking types before calling the func
defines 9
Questions:
What's the decorator doing so that it's changing the values for _getframe?
How can I preserve the information so it captures the original information, i.e. __main__ 3 and not defines 9?
The problem is that your funcme() function is assuming it has been called directly rather than indirectly through something else — such as a decorator. This could be fixed by changing its calling sequence and adding an additional depth keyword argument with a default value, which will be passed on to sys._getframe(). With this scaffolding in place, the decorator can then override the default value. The following will print the same thing whether or not the decorator has been applied:
import sys
import Errors

def argcheck(in_=(), out=(type(None),)):
    def _argcheck(function):
        # do something here
        def __argcheck(*args, **kw):
            print '+++++++++ checking types before calling the func'
            # do something here
            res = function(*args, depth=2, **kw)  # override default depth
            return res
        return __argcheck
    return _argcheck

@argcheck((str))
def funcme(name, depth=1):  # added keyword arg with default value
    try:
        f = sys._getframe(depth)  # explicitly pass stack depth wanted
    except ValueError, err:
        raise Errors.UserError(err)  # stack too deep

    filename, lineno = f.f_globals['__name__'], f.f_lineno
    print filename, lineno
A decorator is basically syntactic sugar. This:
@argcheck((str))
def funcme(name):
is the same as this:
funcme = argcheck(str)(funcme)
Now you can see why decorators change the call stack.
I am not sure how to work around this in arbitrary cases, but if you know in advance something about the decorators, you could perhaps compensate for it in your code. You might also look into functools.wraps; perhaps that would provide some clues that might help.
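For what it's worth, functools.wraps only copies metadata such as __name__ and __doc__ onto the wrapper; it does not remove the extra frame the wrapper adds, so it won't by itself change what sys._getframe(1) sees. A small sketch of what it does give you:

import functools

def argcheck(func):
    @functools.wraps(func)
    def wrapper(*args, **kw):
        return func(*args, **kw)
    return wrapper

@argcheck
def funcme(name):
    """Original docstring."""

print(funcme.__name__)   # 'funcme', not 'wrapper'
print(funcme.__doc__)    # 'Original docstring.'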
I'm doing code generation and I end up with a string of source that looks like this:
Source
import sys
import operator

def add(a, b):
    return operator.add(a, b)

def mul(a, b):
    return operator.mul(a, b)

def saveDiv(a, b):
    if b == 0:
        return 0
    else:
        return a / b

def subtract(a, b):
    return operator.sub(a, b)

def main(y, x, z):
    y = int(y)
    print y
    x = int(x)
    print x
    z = int(z)
    print z
    ind = lambda y, x, z: mul(saveDiv(x, add(z, z)), 1)
    return ind(y, x, z)

print main(**sys.argv)
Execution
I'm executing the code using exec() and piping its output through stdoutIO().
Working
args = {'x': "1", 'y': "1", 'z': "1"}
source = getSource()
sys.argv = args
with stdoutIO() as s:
    exec source
s.getvalue
Not Working
class Coder():
    def start(self):
        args = {'x': "1", 'y': "1", 'z': "1"}
        source = getSource()
        sys.argv = args
        with stdoutIO() as s:
            exec source
        return s.getvalue

print "out:", Coder().start()
And the stdoutIO() is implemented like this:
class Proxy(object):
    def __init__(self, stdout, stringio):
        self._stdout = stdout
        self._stringio = stringio

    def __getattr__(self, name):
        if name in ('_stdout', '_stringio', 'write'):
            object.__getattribute__(self, name)
        else:
            return getattr(self._stringio, name)

    def write(self, data):
        self._stdout.write(data)
        self._stringio.write(data)

@contextlib.contextmanager
def stdoutIO(stdout=None):
    old = sys.stdout
    if stdout is None:
        stdout = StringIO.StringIO()
    sys.stdout = Proxy(sys.stdout, stdout)
    yield sys.stdout
    sys.stdout = old
Problem
If I execute the execution code outside of the class, everything works; however, when I run it inside a class, it breaks with this error. How can I fix or avoid this problem?
File "<string>", line 29, in <module>
File "<string>", line 27, in main
File "<string>", line 26, in <lambda>
NameError: global name 'add' is not defined
Thanks
When you run exec expression, it executes the code contained in expression in the current scope (see here). Apparently, inside a class, the functions in your expression are dropping out of scope before main is run. I honestly have no idea why (it seems to me like it should work), but maybe someone can add a complete explanation in a comment.
Anyway, if you explicitly provide a scope for the expression to be evaluated in (which is good practice anyway, so that you don't pollute your namespace), it works fine inside the class.
So, replace the line:
exec source
with
exec source in {}
and you should be right!
Here we provide an empty dictionary as the globals() and locals() dictionaries during the evaluation of your expression. You can keep this dictionary if you want, or let it be garbage collected immediately as I have demonstrated in my code. This is all explained in the exec documentation in the link above.
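As a minimal illustration of the idea (hypothetical generated source, not the asker's code), giving exec its own namespace keeps the generated helper functions and the code that calls them in one scope:

source = """
def add(a, b):
    return a + b

print(add(1, 2))
"""

namespace = {}
exec(source, namespace)   # prints 3; 'add' is defined in namespace, not in our scope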
In some circumstances, I want to print debug-style output like this:
# module test.py
def f():
    a = 5
    b = 8
    debug(a, b)  # line 18
I want the debug function to print the following:
debug info at test.py: 18
function f
a = 5
b = 8
I am thinking it should be possible by using the inspect module to locate the stack frame, then finding the appropriate line, looking up the source code in that line, and getting the names of the arguments from there. The function name can be obtained by moving one stack frame up. (The values of the arguments are easy to obtain: they are passed directly to the function debug.)
Am I on the right track? Is there any recipe I can refer to?
You could do something along the following lines:
import inspect

def debug(**kwargs):
    st = inspect.stack()[1]
    print '%s:%d %s()' % (st[1], st[2], st[3])
    for k, v in kwargs.items():
        print '%s = %s' % (k, v)

def f():
    a = 5
    b = 8
    debug(a=a, b=b)  # line 12

f()
This prints out:
test.py:12 f()
a = 5
b = 8
You're generally doing it right, though it would be easier to use AOP for these kinds of tasks. Basically, instead of calling "debug" every time with every variable, you could just decorate the code with aspects which do certain things upon certain events, such as printing the passed variables and the function's name upon entering it.
Please refer to this site and this old SO post for more info.
Yeah, you are on the right track. You may want to look at inspect.getargspec, which returns a named tuple of args, varargs, keywords, and defaults passed to the function.
import inspect

def f():
    a = 5
    b = 8
    debug(a, b)

def debug(a, b):
    print inspect.getargspec(debug)

f()
This is really tricky. Let me try and give a more complete answer reusing this code, and the hint about getargspec in Senthil's answer, which somehow got me on the right track. By the way, getargspec is deprecated since Python 3.0 and getfullargspec should be used instead.
This works for me on a Python 3.1.2 both with explicitly calling the debug function and with using a decorator:
# from: https://stackoverflow.com/a/4493322/923794
def getfunc(func=None, uplevel=0):
    """Return tuple of information about a function

    Goes up the call stack to uplevel+1 and returns information
    about the function found.

    The tuple contains
      name of function, function object, its frame object,
      filename and line number"""
    from inspect import currentframe, getouterframes, getframeinfo
    #for (level, frame) in enumerate(getouterframes(currentframe())):
    #    print(str(level) + ' frame: ' + str(frame))
    caller = getouterframes(currentframe())[1 + uplevel]
    # caller is tuple of:
    #   frame object, filename, line number, function
    #   name, a list of lines of context, and index within the context
    func_name = caller[3]
    frame = caller[0]
    from pprint import pprint
    if func:
        func_name = func.__name__
    else:
        func = frame.f_locals.get(func_name, frame.f_globals.get(func_name))
    return (func_name, func, frame, caller[1], caller[2])

def debug_prt_func_args(f=None):
    """Print function name and arguments with their values"""
    from inspect import getargvalues, getfullargspec
    (func_name, func, frame, file, line) = getfunc(func=f, uplevel=1)
    argspec = getfullargspec(func)
    #print(argspec)
    argvals = getargvalues(frame)
    print("debug info at " + file + ': ' + str(line))
    print(func_name + ':' + str(argvals))  ## reformat to pretty print arg values here
    return func_name

def df_dbg_prt_func_args(f):
    """Decorator: df_dbg_prt_func_args - Prints function name and arguments
    """
    def wrapped(*args, **kwargs):
        debug_prt_func_args(f)
        return f(*args, **kwargs)
    return wrapped
Usage:
@df_dbg_prt_func_args
def leaf_decor(*args, **kwargs):
    """Leaf level, simple function"""
    print("in leaf")

def leaf_explicit(*args, **kwargs):
    """Leaf level, simple function"""
    debug_prt_func_args()
    print("in leaf")

def complex():
    """A complex function"""
    print("start complex")
    leaf_decor(3, 4)
    print("middle complex")
    leaf_explicit(12, 45)
    print("end complex")

complex()
and prints:
start complex
debug info at debug.py: 54
leaf_decor:ArgInfo(args=[], varargs='args', keywords='kwargs', locals={'args': (3, 4), 'f': <function leaf_decor at 0x2aaaac048d98>, 'kwargs': {}})
in leaf
middle complex
debug info at debug.py: 67
leaf_explicit:ArgInfo(args=[], varargs='args', keywords='kwargs', locals={'args': (12, 45), 'kwargs': {}})
in leaf
end complex
The decorator cheats a bit: Since in wrapped we get the same arguments as the function itself it doesn't matter that we find and report the ArgSpec of wrapped in getfunc and debug_prt_func_args. This code could be beautified a bit, but it works alright now for the simple debug testcases I used.
Another trick you can do: If you uncomment the for-loop in getfunc you can see that inspect can give you the "context" which really is the line of source code where a function got called. This code is obviously not showing the content of any variable given to your function, but sometimes it already helps to know the variable name used one level above your called function.
As you can see, with the decorator you don't have to change the code inside the function.
Probably you'll want to pretty print the args. I've left the raw print (and also a commented out print statement) in the function so it's easier to play around with.
I would like to use a decorator on a function that I will subsequently pass to a multiprocessing pool. However, the code fails with "PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed". I don't quite see why it fails here. I feel certain that it's something simple, but I can't find it. Below is a minimal "working" example. I thought that using the functools function would be enough to let this work.
If I comment out the function decoration, it works without an issue. What is it about multiprocessing that I'm misunderstanding here? Is there any way to make this work?
Edit: After adding both a callable class decorator and a function decorator, it turns out that the function decorator works as expected. The callable class decorator continues to fail. What is it about the callable class version that keeps it from being pickled?
import random
import multiprocessing
import functools

class my_decorator_class(object):
    def __init__(self, target):
        self.target = target
        try:
            functools.update_wrapper(self, target)
        except:
            pass

    def __call__(self, elements):
        f = []
        for element in elements:
            f.append(self.target([element])[0])
        return f

def my_decorator_function(target):
    @functools.wraps(target)
    def inner(elements):
        f = []
        for element in elements:
            f.append(target([element])[0])
        return f
    return inner

@my_decorator_function
def my_func(elements):
    f = []
    for element in elements:
        f.append(sum(element))
    return f

if __name__ == '__main__':
    elements = [[random.randint(0, 9) for _ in range(5)] for _ in range(10)]
    pool = multiprocessing.Pool(processes=4)
    results = [pool.apply_async(my_func, ([e],)) for e in elements]
    pool.close()
    f = [r.get()[0] for r in results]
    print(f)
The problem is that pickle needs to have some way to reassemble everything that you pickle. See here for a list of what can be pickled:
http://docs.python.org/library/pickle.html#what-can-be-pickled-and-unpickled
When pickling my_func, the following components need to be pickled:
An instance of my_decorator_class, called my_func.
This is fine. Pickle will store the name of the class and pickle its __dict__ contents. When unpickling, it uses the name to find the class, then creates an instance and fills in the __dict__ contents. However, the __dict__ contents present a problem...
The original, undecorated my_func that's stored in my_func.target.
This isn't so good. It's a function at the top level, and normally these can be pickled. Pickle will store the name of the function. The problem, however, is that the name "my_func" is no longer bound to the undecorated function; it's bound to the decorated function. This means that pickle won't be able to look up the undecorated function to recreate the object. Sadly, pickle doesn't have any way to know that the object it's trying to pickle can always be found under the name __main__.my_func.
You can change it like this and it will work:
import random
import multiprocessing
import functools

class my_decorator(object):
    def __init__(self, target):
        self.target = target
        try:
            functools.update_wrapper(self, target)
        except:
            pass

    def __call__(self, candidates, args):
        f = []
        for candidate in candidates:
            f.append(self.target([candidate], args)[0])
        return f

def old_my_func(candidates, args):
    f = []
    for c in candidates:
        f.append(sum(c))
    return f

my_func = my_decorator(old_my_func)

if __name__ == '__main__':
    candidates = [[random.randint(0, 9) for _ in range(5)] for _ in range(10)]
    pool = multiprocessing.Pool(processes=4)
    results = [pool.apply_async(my_func, ([c], {})) for c in candidates]
    pool.close()
    f = [r.get()[0] for r in results]
    print(f)
You have observed that the decorator function works when the class does not. I believe this is because functools.wraps modifies the decorated function so that it has the name and other properties of the function it wraps. As far as the pickle module can tell, it is indistinguishable from a normal top-level function, so it pickles it by storing its name. Upon unpickling, the name is bound to the decorated function so everything works out.
I also had some problem using decorators in multiprocessing. I'm not sure if it's the same problem as yours:
My code looked like this:
from multiprocessing import Pool

def decorate_func(f):
    def _decorate_func(*args, **kwargs):
        print "I'm decorating"
        return f(*args, **kwargs)
    return _decorate_func

@decorate_func
def actual_func(x):
    return x ** 2

my_swimming_pool = Pool()
result = my_swimming_pool.apply_async(actual_func, (2,))
print result.get()
and when I run the code I get this:
Traceback (most recent call last):
  File "test.py", line 15, in <module>
    print result.get()
  File "somedirectory_too_lengthy_to_put_here/lib/python2.7/multiprocessing/pool.py", line 572, in get
    raise self._value
cPickle.PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
I fixed it by defining a new function that wraps the function in the decorator function, instead of using the decorator syntax:
from multiprocessing import Pool

def decorate_func(f):
    def _decorate_func(*args, **kwargs):
        print "I'm decorating"
        return f(*args, **kwargs)
    return _decorate_func

def actual_func(x):
    return x ** 2

def wrapped_func(*args, **kwargs):
    return decorate_func(actual_func)(*args, **kwargs)

my_swimming_pool = Pool()
result = my_swimming_pool.apply_async(wrapped_func, (2,))
print result.get()
The code ran perfectly and I got:
I'm decorating
4
I'm not very experienced at Python, but this solution solved the problem for me.
If you want the decorators too badly (like me), you can also use exec() on the function's source string to circumvent the pickling problem mentioned above.
I wanted to be able to pass all the arguments to an original function and then use them successively. The following is my code for it.
At first, I made a make_functext() function to convert the target function object to a string. For that, I used the getsource() function from the inspect module (see the documentation here and note that it can't retrieve source code from compiled code etc.). Here it is:
from inspect import getsource

def make_functext(func):
    ft = '\n'.join(getsource(func).split('\n')[1:])  # Removing the decorator, of course
    ft = ft.replace(func.__name__, 'func')           # Making function callable with 'func'
    ft = ft.replace('#§ ', '').replace('#§', '')     # For using commented code starting with '#§'
    ft = ft.strip()                                  # In case the function code was indented
    return ft
It is used in the following _worker() function that will be the target of the processes:
def _worker(functext, args):
    scope = {}               # This is needed to keep executed definitions
    exec(functext, scope)
    scope['func'](args)      # Using func from scope
And finally, here's my decorator:
from multiprocessing import Process

def parallel(num_processes, **kwargs):
    def parallel_decorator(func, num_processes=num_processes):
        functext = make_functext(func)
        print('This is the parallelized function:\n', functext)

        def function_wrapper(funcargs, num_processes=num_processes):
            workers = []
            print('Launching processes...')
            for k in range(num_processes):
                p = Process(target=_worker, args=(functext, funcargs[k]))  # use args here
                p.start()
                workers.append(p)
        return function_wrapper
    return parallel_decorator
The code can finally be used by defining a function like this:
@parallel(4)
def hello(args):
    #§ from time import sleep  # use '#§' to avoid unnecessary (re)imports in main program
    name, seconds = tuple(args)  # unpack args-list here
    sleep(seconds)
    print('Hi', name)
... which can now be called like this:
hello([['Marty', 0.5],
       ['Catherine', 0.9],
       ['Tyler', 0.7],
       ['Pavel', 0.3]])
... which outputs:
This is the parallelized function:
def func(args):
    from time import sleep
    name, seconds = tuple(args)
    sleep(seconds)
    print('Hi', name)
Launching processes...
Hi Pavel
Hi Marty
Hi Tyler
Hi Catherine
Thanks for reading, this is my very first post. If you find any mistakes or bad practices, feel free to leave a comment. I know that these string conversions are quite dirty, though...
If you use this code for your decorator:
import multiprocessing
from types import MethodType

DEFAULT_POOL = []


def run_parallel(_func=None, *, name: str = None, context_pool: list = DEFAULT_POOL):

    class RunParallel:
        def __init__(self, func):
            self.func = func

        def __call__(self, *args, **kwargs):
            process = multiprocessing.Process(target=self.func, name=name, args=args, kwargs=kwargs)
            context_pool.append(process)
            process.start()

        def __get__(self, instance, owner):
            return self if instance is None else MethodType(self, instance)

    if _func is None:
        return RunParallel
    else:
        return RunParallel(_func)


def wait_context(context_pool: list = DEFAULT_POOL, kill_others_if_one_fails: bool = False):
    finished = []
    for process in context_pool:
        process.join()
        finished.append(process)
        if kill_others_if_one_fails and process.exitcode != 0:
            break

    if kill_others_if_one_fails:
        # kill unfinished processes
        for process in context_pool:
            if process not in finished:
                process.kill()

        # wait for every process to be dead
        for process in context_pool:
            process.join()
Then you can use it like this, in these 4 examples:
@run_parallel
def m1(a, b="b"):
    print(f"m1 -- {a=} {b=}")

@run_parallel(name="mym2", context_pool=DEFAULT_POOL)
def m2(d, cc="cc"):
    print(f"m2 -- {d} {cc=}")
    a = 1 / 0

class M:
    @run_parallel
    def c3(self, k, n="n"):
        print(f"c3 -- {k=} {n=}")

    @run_parallel(name="Mc4", context_pool=DEFAULT_POOL)
    def c4(self, x, y="y"):
        print(f"c4 -- {x=} {y=}")

if __name__ == "__main__":
    m1(11)
    m2(22)
    M().c3(33)
    M().c4(44)

    wait_context(kill_others_if_one_fails=True)
The output will be:
m1 -- a=11 b='b'
m2 -- 22 cc='cc'
c3 -- k=33 n='n'
(followed by the exception raised in method m2)
I feel like I should know this, but I haven't been able to figure it out...
I want to get the name of a method--which happens to be an integration test--from inside it so it can print out some diagnostic text. I can, of course, just hard-code the method's name in the string, but I'd like to make the test a little more DRY if possible.
This seems to be the simplest way, using the inspect module:
import inspect

def somefunc(a, b, c):
    print "My name is: %s" % inspect.stack()[0][3]
You could generalise this with:
def funcname():
    return inspect.stack()[1][3]

def somefunc(a, b, c):
    print "My name is: %s" % funcname()
Credit to Stefaan Lippens, found via Google.
The answers involving introspection via inspect and the like are reasonable. But there may be another option, depending on your situation:
If your integration test is written with the unittest module, then you could use self.id() within your TestCase.
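For instance, a minimal sketch (the test case names here are hypothetical):

import unittest

class IntegrationTest(unittest.TestCase):
    def test_roundtrip(self):
        # self.id() -> e.g. '__main__.IntegrationTest.test_roundtrip'
        print("running %s" % self.id())
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()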
This decorator makes the name of the method available inside the function by passing it as a keyword argument.
from functools import wraps

def pass_func_name(func):
    "Name of decorated function will be passed as keyword arg _func_name"
    @wraps(func)
    def _pass_name(*args, **kwds):
        kwds['_func_name'] = func.func_name
        return func(*args, **kwds)
    return _pass_name
You would use it this way:
@pass_func_name
def sum(a, b, _func_name):
    print "running function %s" % _func_name
    return a + b

print sum(2, 4)
But maybe you'd want to write what you want directly inside the decorator itself; the code above is then an example of how to get the function name inside a decorator. If you give more details about what you want to do with the name inside the function, maybe I can suggest something else.
# file "foo.py"
import sys
import os

def LINE(back=0):
    return sys._getframe(back + 1).f_lineno

def FILE(back=0):
    return sys._getframe(back + 1).f_code.co_filename

def FUNC(back=0):
    return sys._getframe(back + 1).f_code.co_name

def WHERE(back=0):
    frame = sys._getframe(back + 1)
    return "%s/%s %s()" % (os.path.basename(frame.f_code.co_filename),
                           frame.f_lineno, frame.f_code.co_name)

def testit():
    print "Here in %s, file %s, line %s" % (FUNC(), FILE(), LINE())
    print "WHERE says '%s'" % WHERE()

testit()
Output:
$ python foo.py
Here in testit, file foo.py, line 17
WHERE says 'foo.py/18 testit()'
Use "back = 1" to find info regarding two levels back down the stack, etc.
I think the traceback module might have what you're looking for. In particular, the extract_stack function looks like it will do the job.
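A small sketch of that idea (Python 3, where extract_stack() returns FrameSummary objects with a .name attribute):

import traceback

def whoami():
    # [-1] is whoami's own frame; [-2] is the caller's frame
    return traceback.extract_stack()[-2].name

def my_test():
    print(whoami())   # prints 'my_test'

my_test()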
To elaborate on @mhawke's answer:
Rather than
def funcname():
    return inspect.stack()[1][3]
You can use
def funcname():
    frame = inspect.currentframe().f_back
    return inspect.getframeinfo(frame).function
Which, on my machine, is about 5x faster than the original version according to timeit.
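If you want to check that on your own machine, a rough timing sketch (exact numbers will of course vary):

import timeit

setup = """
import inspect

def funcname_stack():
    return inspect.stack()[1][3]

def funcname_frame():
    frame = inspect.currentframe().f_back
    return inspect.getframeinfo(frame).function

def caller_stack():
    return funcname_stack()

def caller_frame():
    return funcname_frame()
"""

print(timeit.timeit("caller_stack()", setup=setup, number=1000))   # slower
print(timeit.timeit("caller_frame()", setup=setup, number=1000))   # faster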