I'm doing code generation and I end up with a string of source that looks like this:
Source
import sys
import operator

def add(a, b):
    return operator.add(a, b)

def mul(a, b):
    return operator.mul(a, b)

def saveDiv(a, b):
    if b == 0:
        return 0
    else:
        return a / b

def subtract(a, b):
    return operator.sub(a, b)

def main(y, x, z):
    y = int(y)
    print y
    x = int(x)
    print x
    z = int(z)
    print z
    ind = lambda y, x, z: mul(saveDiv(x, add(z, z)), 1)
    return ind(y, x, z)

print main(**sys.argv)
Execution
I execute the code using exec() and capture its output through stdoutIO():
Working
args = {'x': "1", 'y': "1", 'z': "1"}
source = getSource()
sys.argv = args
with stdoutIO() as s:
    exec source
s.getvalue()
Not Working
class Coder():
    def start(self):
        args = {'x': "1", 'y': "1", 'z': "1"}
        source = getSource()
        sys.argv = args
        with stdoutIO() as s:
            exec source
        return s.getvalue()

print "out:", Coder().start()
And the stdoutIO() is implemented like this:
import sys
import contextlib
import StringIO

class Proxy(object):
    def __init__(self, stdout, stringio):
        self._stdout = stdout
        self._stringio = stringio
    def __getattr__(self, name):
        if name in ('_stdout', '_stringio', 'write'):
            return object.__getattribute__(self, name)
        else:
            return getattr(self._stringio, name)
    def write(self, data):
        self._stdout.write(data)
        self._stringio.write(data)

@contextlib.contextmanager
def stdoutIO(stdout=None):
    old = sys.stdout
    if stdout is None:
        stdout = StringIO.StringIO()
    sys.stdout = Proxy(sys.stdout, stdout)
    yield sys.stdout
    sys.stdout = old
Problem
If I execute the execution code outside of the class, everything works; however, when I run it inside a class it breaks with this error. How can I fix or avoid this problem?
File "<string>", line 29, in <module>
File "<string>", line 27, in main
File "<string>", line 26, in <lambda>
NameError: global name 'add' is not defined
Thanks
When you run exec expression, it executes the code contained in expression in the current scope (see here). Inside a method, that means the functions your expression defines land in the method's local namespace, while main looks its helpers up in the globals namespace, where add was never defined; at module level the global and local namespaces are the same dictionary, which is why the same code works there.
Anyway, if you explicitly provide a scope for the expression to be evaluated in (which is good practice anyway, so that you don't pollute your namespace), it works fine inside the class.
So, replace the line:
exec source
with
exec source in {}
and you should be right!
Here we provide an empty dictionary as the globals() and locals() dictionaries during the evaluation of your expression. You can keep this dictionary if you want, or let it be garbage collected immediately as I have demonstrated in my code. This is all explained in the exec documentation in the link above.
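To make this concrete, here is a minimal, self-contained sketch (Python 2, with a hard-coded source string standing in for your getSource()/stdoutIO() helpers) showing that the single dictionary passed to exec is shared by the defs and the later lookups, even inside a method:
SOURCE = """
def add(a, b):
    return a + b

def main(x):
    return add(x, x)

result = main(21)
"""

class Coder(object):
    def start(self):
        ns = {}              # one dict serves as both globals and locals
        exec SOURCE in ns    # the defs and main's lookups now share ns
        return ns['result']  # anything the generated code defined is here

print "out:", Coder().start()   # prints: out: 42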
Related
I have a text file that contains Python function like this:
a.txt
def func():
    var = 5
    return var
And then I read this file in a Python script:
b.py
python_file = open("a.txt").read()
Now I want to assign the a.txt file's function to a variable without worrying about the function name and execute it. I tried something like this:
python_file = open("a.txt").read()
b = exec(python_file)
b()
But it didn't work, I tried execfile as well.
After you've executed the string, you can call func directly, as it has been added to your current namespace:
>>> exec("""def func():
        var = 5
        return var""")
>>> func()
5
Per its documentation, exec doesn't actually return anything, so there's no point assigning e.g. foo = exec(...).
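For example (in Python 3, where exec is an ordinary function rather than a statement):
>>> result = exec("x = 1")   # exec runs the code purely for its side effects
>>> print(result)
None
>>> x                        # ...but the name it defined is now visible
1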
To see what names are locally defined in the code being executed, pass an empty dictionary to exec as the locals parameter:
>>> ns = {}
>>> exec("""def func():
        var = 5
        return var""", globals(), ns)
>>> ns
{'func': <function func at 0x0315F540>}
You can then assign the function and call it as you normally would:
>>> b, = ns.values() # this will only work if only one name was defined
>>> b()
5
Before offering my solution, I highly warn against doing this unless you know for sure there is no malicious code in a.txt.
My solution uses the execfile function to load the text file and return the first object it defines (which could be a variable or a function):
def load_function(filename):
    """ Assume that filename contains only 1 function """
    global_var = dict()
    execfile(filename, global_var)
    del global_var['__builtins__']
    return next(global_var.itervalues())

# Use it
myfunction = load_function('a.txt')
print myfunction()
Update
To be a little more careful, modify the return line as follows so that it skips plain variables (it cannot skip class declarations, however).
return next(f for f in global_var.itervalues() if callable(f))
Update 2
Thank you johnsharpe for pointing out that there is no execfile in Python 3. Here is a modified solution which uses exec instead. This time, the function should be found in the "local" scope.
def load_function(filename):
    """ Assume that filename contains only 1 function """
    with open(filename) as f:
        file_contents = f.read()
    global_var = dict()
    local_var = dict()
    exec(file_contents, global_var, local_var)
    return next(f for f in local_var.values() if callable(f))

# Use it
myfunction = load_function('a.txt')
print(myfunction())
I am trying to follow a very simple multiprocessing example:
import multiprocessing as mp

def cube(x):
    return x**3

pool = mp.Pool(processes=2)
results = [pool.apply_async(cube, args=x) for x in range(1,7)]
However, on my windows machine, I am not able to get the result (on ubuntu 12.04LTS it runs perfectly).
If I inspect results, I see the following:
[<multiprocessing.pool.ApplyResult object at 0x01FF0910>,
<multiprocessing.pool.ApplyResult object at 0x01FF0950>,
<multiprocessing.pool.ApplyResult object at 0x01FF0990>,
<multiprocessing.pool.ApplyResult object at 0x01FF09D0>,
<multiprocessing.pool.ApplyResult object at 0x01FF0A10>,
<multiprocessing.pool.ApplyResult object at 0x01FF0A50>]
If I run results[0].ready() I always get False.
If I run results[0].get() the python interpreter freezes, waiting to get the result that never comes.
The example is as simple as it gets, so I am thinking this is a low level bug relating to the OS (I am on Windows 7). But perhaps someone else has a better idea?
There are a couple of mistakes here. First, you must declare the Pool inside an if __name__ == "__main__": guard when running on Windows. Second, you have to pass the args keyword argument a sequence, even if you're only passing one argument. So putting that together:
import multiprocessing as mp

def cube(x):
    return x**3

if __name__ == "__main__":
    pool = mp.Pool(processes=2)
    results = [pool.apply_async(cube, args=(x,)) for x in range(1,7)]
    print([result.get() for result in results])
Output:
[1, 8, 27, 64, 125, 216]
Edit:
Oh, as moarningsun mentions, multiprocessing does not work well in the interactive interpreter:
Note
Functionality within this package requires that the __main__ module be
importable by the children. This is covered in Programming guidelines
however it is worth pointing out here. This means that some examples,
such as the multiprocessing.Pool examples will not work in the
interactive interpreter.
So you'll need to actually execute the code as a script to test it properly.
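If you do want to experiment from an interactive session (or from an IDE that runs your code in one), a commonly used workaround is to move the worker function into its own importable module; a minimal sketch, assuming a hypothetical file named workers.py next to your script:
# workers.py  (hypothetical helper module)
def cube(x):
    return x**3

# main script, or pasted into the interactive session
import multiprocessing as mp
from workers import cube   # imported by name, so child processes can find it

if __name__ == "__main__":
    pool = mp.Pool(processes=2)
    results = [pool.apply_async(cube, args=(x,)) for x in range(1, 7)]
    print([result.get() for result in results])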
I was running Python 3 with the Spyder IDE in Anaconda (Windows), so this trick didn't work for me. I tried a lot but couldn't make any difference. The reason for my problem is the same one listed by dano in his note. But after a long day of searching I found a solution that helped me run the same code on my Windows machine. This website helped me get the solution:
http://python.6.x6.nabble.com/Multiprocessing-Pool-woes-td5047050.html
Since I was using Python 3, I changed the program a little, like this:
from types import FunctionType
import marshal

def _applicable(*args, **kwargs):
    name = kwargs['__pw_name']
    code = marshal.loads(kwargs['__pw_code'])
    gbls = globals()  # gbls = marshal.loads(kwargs['__pw_gbls'])
    defs = marshal.loads(kwargs['__pw_defs'])
    clsr = marshal.loads(kwargs['__pw_clsr'])
    fdct = marshal.loads(kwargs['__pw_fdct'])
    func = FunctionType(code, gbls, name, defs, clsr)
    func.fdct = fdct
    del kwargs['__pw_name']
    del kwargs['__pw_code']
    del kwargs['__pw_defs']
    del kwargs['__pw_clsr']
    del kwargs['__pw_fdct']
    return func(*args, **kwargs)

def make_applicable(f, *args, **kwargs):
    if not isinstance(f, FunctionType): raise ValueError('argument must be a function')
    kwargs['__pw_name'] = f.__name__  # edited
    kwargs['__pw_code'] = marshal.dumps(f.__code__)  # edited
    kwargs['__pw_defs'] = marshal.dumps(f.__defaults__)  # edited
    kwargs['__pw_clsr'] = marshal.dumps(f.__closure__)  # edited
    kwargs['__pw_fdct'] = marshal.dumps(f.__dict__)  # edited
    return _applicable, args, kwargs

def _mappable(x):
    x, name, code, defs, clsr, fdct = x
    code = marshal.loads(code)
    gbls = globals()  # gbls = marshal.loads(gbls)
    defs = marshal.loads(defs)
    clsr = marshal.loads(clsr)
    fdct = marshal.loads(fdct)
    func = FunctionType(code, gbls, name, defs, clsr)
    func.fdct = fdct
    return func(x)

def make_mappable(f, iterable):
    if not isinstance(f, FunctionType): raise ValueError('argument must be a function')
    name = f.__name__  # edited
    code = marshal.dumps(f.__code__)  # edited
    defs = marshal.dumps(f.__defaults__)  # edited
    clsr = marshal.dumps(f.__closure__)  # edited
    fdct = marshal.dumps(f.__dict__)  # edited
    return _mappable, ((i, name, code, defs, clsr, fdct) for i in iterable)
After adding these functions, the problem code above also changes a little, like this:
from multiprocessing import Pool
from poolable import make_applicable, make_mappable

def cube(x):
    return x**3

if __name__ == "__main__":
    pool = Pool(processes=2)
    results = [pool.apply_async(*make_applicable(cube, x)) for x in range(1, 7)]
    print([result.get(timeout=10) for result in results])
And I got this output:
[1, 8, 27, 64, 125, 216]
I think this post may be useful for some Windows users.
Let's say I have a class like so:
class Shell:
    def cat(self, file):
        try:
            with open(file, 'r') as f:
                print f.read()
        except IOError:
            raise IOError('invalid file location: {}'.format(file))

    def echo(self, message):
        print message

    def ls(self, path):
        print os.listdir(path)
In a javascript context, you might be able to do something like "Class"[method_name](), depending on how things were structured. I am looking for something similar in python to make this a "simulated operating system". EG:
import os

def runShell(user_name):
    user_input = None
    shell = Shell()
    while user_input != 'exit' and user_input != 'quit':
        user_input = raw_input('$' + user_name + ': ')
        ...
now, the idea is they can type in something like this...
$crow: cat ../my_text
... and behind the scenes, we get this:
shell.cat('../my_text')
Similarly, I would like to be able to print all method definitions that exist within that class when they type help. EG:
$crow: help\n
> cat (file)
> echo (message)
> ls (path)
is such a thing achievable in python?
You can use the built-in function vars to expose all the members of an object. That's maybe the simplest way to list those for your users. If you're only planning to print to stdout, you could also just call help(shell), which will print your class members along with docstrings and so on. help is really only intended for the interactive interpreter, though, so you'd likely be better off writing your own help-outputter using vars and the __doc__ attribute that's magically added to objects with docstrings. For example:
class Shell(object):
    def m(self):
        '''Docstring of C#m.'''
        return 1

    def t(self, a):
        '''Docstring of C#t'''
        return 2

for name, obj in dict(vars(Shell)).items():
    if not name.startswith('__'):  # filter builtins
        print(name, '::', obj.__doc__)
To pick out and execute a particular method of your object, you can use getattr, which grabs an attribute (if it exists) from an object, by name. For example, to select and run a simple function with no arguments:
fname = raw_input()
if hasattr(shell, fname):
    func = getattr(shell, fname)
    result = func()
else:
    print('That function is not defined.')
Of course you could first tokenize the user input to pass arguments to your function as needed, like for your cat example:
user_input = raw_input().split()  # tokenize
fname, *args = user_input  # this use of *args syntax is not available prior to Py3
if hasattr(shell, fname):
    func = getattr(shell, fname)
    result = func(*args)  # the *args syntax here is available back to at least 2.6
else:
    print('That function is not defined.')
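Putting those pieces together, here is a rough sketch of the runShell loop from the question (Python 2 style to match the raw_input calls; the command names, the help output format, and the whitespace-only argument splitting are my own choices):
import os

def runShell(user_name):
    shell = Shell()
    while True:
        tokens = raw_input('$' + user_name + ': ').split()
        if not tokens:
            continue
        fname, args = tokens[0], tokens[1:]
        if fname in ('exit', 'quit'):
            break
        elif fname == 'help':
            # list the public methods of Shell with their docstrings
            for name, obj in sorted(vars(Shell).items()):
                if not name.startswith('__'):
                    print name, '::', (obj.__doc__ or '')
        elif hasattr(shell, fname):
            getattr(shell, fname)(*args)
        else:
            print 'That command is not defined.'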
In some circumstances, I want to print debug-style output like this:
# module test.py
def f():
    a = 5
    b = 8
    debug(a, b)   # line 18
I want the debug function to print the following:
debug info at test.py: 18
function f
a = 5
b = 8
I am thinking it should be possible by using the inspect module to locate the stack frame, then finding the appropriate line, looking up the source code in that line, and getting the names of the arguments from there. The function name can be obtained by moving one stack frame up. (The values of the arguments are easy to obtain: they are passed directly to the function debug.)
Am I on the right track? Is there any recipe I can refer to?
You could do something along the following lines:
import inspect

def debug(**kwargs):
    st = inspect.stack()[1]
    print '%s:%d %s()' % (st[1], st[2], st[3])
    for k, v in kwargs.items():
        print '%s = %s' % (k, v)

def f():
    a = 5
    b = 8
    debug(a=a, b=b)  # line 12

f()
This prints out:
test.py:12 f()
a = 5
b = 8
You're generally doing it right, though it would be easier to use AOP for this kind of task. Basically, instead of calling debug every time with every variable, you could just decorate the code with aspects which do certain things upon certain events, for example printing a function's name and the variables passed to it upon entering it.
Please refer to this site and this old SO post for more info.
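For illustration, a minimal sketch of such an aspect written as a plain decorator (not a full AOP framework; the decorator name is my own):
import functools
import inspect

def log_args(func):
    """Print the decorated function's name and the arguments it was called with."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # map every positional and keyword argument to its parameter name
        bound = inspect.getcallargs(func, *args, **kwargs)
        print '%s(%s)' % (func.__name__,
                          ', '.join('%s=%r' % kv for kv in sorted(bound.items())))
        return func(*args, **kwargs)
    return wrapper

@log_args
def f(a, b):
    return a + b

f(5, 8)   # prints: f(a=5, b=8)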
Yeah, you are on the right track. You may want to look at inspect.getargspec, which returns a named tuple of the args, varargs, keywords and defaults passed to the function.
import inspect

def f():
    a = 5
    b = 8
    debug(a, b)

def debug(a, b):
    print inspect.getargspec(debug)

f()
This is really tricky. Let me try and give a more complete answer reusing this code, and the hint about getargspec in Senthil's answer, which got me started. By the way, getargspec is deprecated since Python 3.0 and getfullargspec should be used instead.
This works for me on a Python 3.1.2 both with explicitly calling the debug function and with using a decorator:
# from: https://stackoverflow.com/a/4493322/923794
def getfunc(func=None, uplevel=0):
    """Return a tuple of information about a function

    Goes up in the call stack to uplevel+1 and returns information
    about the function found.

    The tuple contains:
    name of function, function object, its frame object,
    filename and line number"""
    from inspect import currentframe, getouterframes, getframeinfo
    #for (level, frame) in enumerate(getouterframes(currentframe())):
    #    print(str(level) + ' frame: ' + str(frame))
    caller = getouterframes(currentframe())[1+uplevel]
    # caller is a tuple of:
    #   frame object, filename, line number, function
    #   name, a list of lines of context, and index within the context
    func_name = caller[3]
    frame = caller[0]
    from pprint import pprint
    if func:
        func_name = func.__name__
    else:
        func = frame.f_locals.get(func_name, frame.f_globals.get(func_name))
    return (func_name, func, frame, caller[1], caller[2])

def debug_prt_func_args(f=None):
    """Print the function name and its arguments with their values"""
    from inspect import getargvalues, getfullargspec
    (func_name, func, frame, file, line) = getfunc(func=f, uplevel=1)
    argspec = getfullargspec(func)
    #print(argspec)
    argvals = getargvalues(frame)
    print("debug info at " + file + ': ' + str(line))
    print(func_name + ':' + str(argvals))  ## reformat to pretty print arg values here
    return func_name

def df_dbg_prt_func_args(f):
    """Decorator: df_dbg_prt_func_args - Prints the function name and arguments"""
    def wrapped(*args, **kwargs):
        debug_prt_func_args(f)
        return f(*args, **kwargs)
    return wrapped
Usage:
@df_dbg_prt_func_args
def leaf_decor(*args, **kwargs):
    """Leaf level, simple function"""
    print("in leaf")

def leaf_explicit(*args, **kwargs):
    """Leaf level, simple function"""
    debug_prt_func_args()
    print("in leaf")

def complex():
    """A complex function"""
    print("start complex")
    leaf_decor(3, 4)
    print("middle complex")
    leaf_explicit(12, 45)
    print("end complex")

complex()
and prints:
start complex
debug info at debug.py: 54
leaf_decor:ArgInfo(args=[], varargs='args', keywords='kwargs', locals={'args': (3, 4), 'f': <function leaf_decor at 0x2aaaac048d98>, 'kwargs': {}})
in leaf
middle complex
debug info at debug.py: 67
leaf_explicit:ArgInfo(args=[], varargs='args', keywords='kwargs', locals={'args': (12, 45), 'kwargs': {}})
in leaf
end complex
The decorator cheats a bit: Since in wrapped we get the same arguments as the function itself it doesn't matter that we find and report the ArgSpec of wrapped in getfunc and debug_prt_func_args. This code could be beautified a bit, but it works alright now for the simple debug testcases I used.
Another trick you can do: If you uncomment the for-loop in getfunc you can see that inspect can give you the "context" which really is the line of source code where a function got called. This code is obviously not showing the content of any variable given to your function, but sometimes it already helps to know the variable name used one level above your called function.
As you can see, with the decorator you don't have to change the code inside the function.
Probably you'll want to pretty print the args. I've left the raw print (and also a commented out print statement) in the function so it's easier to play around with.
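If you only care about the caller's argument values, one option (a small variation of mine, not part of the code above) is to pretty-print just the locals field of the ArgInfo tuple:
from pprint import pformat

def fmt_argvals(argvals):
    """Pretty-print only the locals from an inspect.ArgInfo tuple."""
    return pformat(argvals.locals)

# in debug_prt_func_args, instead of the raw print:
#     print(func_name + ':\n' + fmt_argvals(argvals))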
I am writing a small app that has to perform some 'sanity checks' before entering execution (e.g. of a sanity check: test whether a certain path is readable / writable / exists).
The code:
import logging
import os
import shutil
import sys
from paths import PATH
logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger('sf.core.sanity')
def sanity_access(path, mode):
    ret = os.access(path, mode)
    logfunc = log.debug if ret else log.warning
    loginfo = (os.access.__name__, path, mode, ret)
    logfunc('%s(\'%s\', %s)==%s' % loginfo)
    return ret

def sanity_check(bool_func, true_func, false_func):
    ret = bool_func()
    (logfunc, execfunc) = (log.debug, true_func) if ret else \
                          (log.warning, false_func)
    logfunc('exec: %s', execfunc.__name__)
    execfunc()

def sanity_checks():
    sanity_check(lambda: sanity_access(PATH['userhome'], os.F_OK), \
                 lambda: None, sys.exit)
My question is related to the sanity_check function.
This function takes 3 parameters (bool_func, true_func, false_func). If bool_func (which is the test function, returning a boolean value) returns True, true_func gets executed; otherwise false_func gets executed.
1) lambda: None is a little lame, because for example if sanity_access returns True, lambda: None gets executed, and the output printed will be:
DEBUG:sf.core.sanity:access('/home/nomemory', 0)==True
DEBUG:sf.core.sanity:exec: <lambda>
So it won't be very clear in the logs what function got executed. The log will only contain <lambda>. Is there a default function that does nothing and can be passed as a parameter? Is there a way to return the name of the first function that is being executed inside a lambda?
Or a way not to log that "exec" if 'nothing' is sent as a parameter?
What's the none / do-nothing equivalent for functions ?
sanity_check(lambda: sanity_access(PATH['userhome'], os.F_OK), \
<do nothing, but show something more useful than <lambda>>, sys.exit)
Additional question: why does lambda: pass not work, while lambda: None does?
What's with all the lambdas that serve no purpose? Well, maybe optional arguments will help you a bit:
def sanity_check( test, name='undefined', ontrue=None, onfalse=None ):
    if test:
        log.debug(name)
        if ontrue is not None:
            ontrue()
    else:
        log.warn( name )
        if onfalse is not None:
            onfalse()

def sanity_checks():
    sanity_check(sanity_access(PATH['userhome'], os.F_OK), 'test home',
                 onfalse=sys.exit)
But you are really overcomplicating things.
update
I would normally delete this post because THC4k saw through all the complexity and rewrote your function correctly. However in a different context, the K combinator trick might come in handy, so I'll leave it up.
There is no builtin that does what you want, AFAIK. I believe that you want the K combinator (the link came up on another question), which can be encoded as
def K_combinator(x, name):
    def f():
        return x
    f.__name__ = name
    return f

none_function = K_combinator(None, 'none_function')
print none_function()
of course if this is just a one off then you could just do
def none_function():
    return None
But then you don't get to say "K combinator". Another advantage of the 'K_combinator' approach is that you can pass it to functions, for example,
foo(call_back1, K_combinator(None, 'name_for_logging'))
As for your second question: only expressions are allowed in a lambda. pass is a statement. Hence, lambda: pass fails.
You can slightly simplify your call to sanity check by removing the lambda around the first argument.
def sanity_check(b, true_func, false_func):
    if b:
        logfunc = log.debug
        execfunc = true_func
    else:
        logfunc = log.warning
        execfunc = false_func
    logfunc('exec: %s', execfunc.__name__)
    execfunc()

def sanity_checks():
    sanity_check(sanity_access(PATH['userhome'], os.F_OK),
                 K_combinator(None, 'none_func'), sys.exit)
This is more readable (largely from expanding the ternary operator into an if). The bool_func wasn't doing anything, because sanity_check wasn't adding any arguments to the call, so you might as well pass the result of the call instead of wrapping it in a lambda.
You might want to rethink this.
class SanityCheck( object ):
    def __call__( self ):
        if self.check():
            logger.debug(...)
            self.ok()
        else:
            logger.warning(...)
            self.not_ok()
    def check( self ):
        return True
    def ok( self ):
        pass
    def not_ok( self ):
        sys.exit(1)

class PathSanityCheck(SanityCheck):
    path = "/path/to/resource"
    def check( self ):
        return os.access( self.path, os.F_OK )

class AnotherPathSanityCheck(SanityCheck):
    path = "/another/path"

def startup():
    checks = ( PathSanityCheck(), AnotherPathSanityCheck() )
    for c in checks:
        c()
Callable objects can simplify your life.
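One side benefit (my observation, not part of the code above): because each check is an instance of a named class, the log line can show the class name instead of <lambda>, for example:
class PathSanityCheck(object):
    def __call__(self):
        return True

check = PathSanityCheck()
print 'exec: %s' % type(check).__name__   # prints: exec: PathSanityCheck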
>>> import dis
>>> f = lambda: None
>>> dis.dis(f)
  1           0 LOAD_CONST               0 (None)
              3 RETURN_VALUE
>>> g = lambda: Pass
>>>
>>>
>>> dis.dis(g)
  1           0 LOAD_GLOBAL              0 (Pass)
              3 RETURN_VALUE
>>> g = lambda: pass
  File "<stdin>", line 1
    g = lambda: pass
                   ^
SyntaxError: invalid syntax
Actually, what you want is a function which does nothing, but has a __name__ which is useful to the log. The lambda function is doing exactly what you want, but execfunc.__name__ is giving "<lambda>". Try one of these:
def nothing_func():
    return

def ThisAppearsInTheLog():
    return
You can also put your own attributes on functions:
def log_nothing():
    return

log_nothing.log_info = "nothing interesting"
Then change execfunc.__name__ to getattr(execfunc,'log_info', '')
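In context, that last change might look like this (a sketch reusing the log setup and sanity_check from your question):
import logging
import sys

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger('sf.core.sanity')

def log_nothing():
    return
log_nothing.log_info = "nothing interesting"

def sanity_check(bool_func, true_func, false_func):
    ret = bool_func()
    (logfunc, execfunc) = (log.debug, true_func) if ret else \
                          (log.warning, false_func)
    # use the log_info attribute when present, instead of __name__
    logfunc('exec: %s', getattr(execfunc, 'log_info', ''))
    execfunc()

sanity_check(lambda: True, log_nothing, sys.exit)
# logs: DEBUG:sf.core.sanity:exec: nothing interesting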