Python: use dictionary keys as function names

I would like to be able to use dictionary keys as function names, but I'm not sure if it's possible. As a quick example, instead of class().dothis(dictkey, otherstuff), I'd like to have the option of class().dictkey(otherstuff). Here's a non-working code example to give an idea of what I was thinking of.
class testclass:
    def __init__(self):
        self.dict = {'stuff': 'value', 'stuff2': 'value2'}
        #I know this part won't work, but it gives the general idea of what I'd like to do
        for key, value in self.dict.iteritems():
            def key():
                #do stuff
                return value

>>> testclass().stuff()
'value'
Obviously each key would need to be checked to make sure it doesn't override anything important, but other than that, I'd appreciate a bit of help if it's possible to get this working.
Basically, my script stores other scripts in the headers of the Maya scene file, so you may call a command and it'll execute the matching script. It stores the scripts in text format in a dictionary, with a wrapper-like layer around it so you can pass args and kwargs without much trouble, and because you can only enter and execute the scripts yourself, there's virtually no danger of anything malicious unless you do it to yourself.
The dictionary is pickled and base64 encoded, as it all needs to be in string format for the header, so each time the function is called it decodes the dictionary so you can edit or read it; ideally I'd need the functions built each time it is called.
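For context, the storage round-trip is roughly this (a simplified Python 2 style sketch, not my actual Maya code; the names are illustrative):
import base64
import pickle

def encode_scripts(scripts):
    # {name: script text} -> plain string safe to store in the scene header
    return base64.b64encode(pickle.dumps(scripts))

def decode_scripts(header_string):
    # header string -> {name: script text}
    return pickle.loads(base64.b64decode(header_string))

header = encode_scripts({'MyScript': 'print 5'})
print(decode_scripts(header))   # {'MyScript': 'print 5'}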
A couple of examples from the run function:
Execute a simple line of code
>>> SceneScript().add("MyScript", "print 5")
>>> SceneScript().run("MyScript")
5
Execute a function with a return
>>> SceneScript().add("MyScript", "def test(x): return x*5")
>>> SceneScript().run("MyScript", "test(10)", "test('c')")
[50, 'ccccc']
Pass a variable to a function command
>>> SceneScript().run("MyScript", 'test(a+b)', a=10, b=-50)
[-200]
Execute a function without a return
>>> SceneScript().add("MyScript", "def test(x): print x*5")
>>> SceneScript().run("MyScript", "test(10)", "test('c')")
50
ccccc
[None, None]
Pass a variable
>>> SceneScript().add("MyScript", "print x")
>>> SceneScript().run("MyScript", x=20)
20
So as this question is asking, in terms of the above code, I'd like to have something like SceneScript().MyScript( "test(10)" ), just to make it easier to use.

The only "correct" way I can think of to do this looks like this:
class SomeClass(object):
    def __init__(self, *args, **kwargs):
        funcs = {'funcname': 'returnvalue', ...}
        for func, ret_val in funcs.iteritems():
            setattr(self, func, self.make_function(ret_val))

    @staticmethod
    def make_function(return_value):
        def wrapped_function(*args, **kwargs):
            # do some stuff
            return return_value
        return wrapped_function
This should allow you to do:
>>> foo = SomeClass()
>>> foo.funcname()
'returnvalue'
Of course the question of why you'd want to do something like this remains, as yet, unanswered :)
EDIT per updated question:
The problem lies in the fact that you cannot safely assign the method to the function signature. I'm not sure how SceneScript().add works currently, but that's essentially going to have to tie into this somehow or another.
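As a rough illustration only (the real SceneScript internals are assumed here, not known), the setattr approach could hook into the question's example like this:
class SceneScript(object):
    def __init__(self):
        # stand-in for decoding the header; assumed, not the actual implementation
        self.scripts = {'MyScript': 'def test(x): return x*5'}
        for name in self.scripts:
            if not hasattr(self, name):  # don't shadow existing attributes
                setattr(self, name, self._make_runner(name))

    def run(self, name, *commands, **variables):
        # stand-in for the real run(); shows only how the generated methods delegate
        return (name, commands, variables)

    def _make_runner(self, name):
        def runner(*args, **kwargs):
            return self.run(name, *args, **kwargs)
        return runner

print(SceneScript().MyScript("test(10)"))  # same as SceneScript().run("MyScript", "test(10)")
The key point is that each generated method only captures the script name and delegates to run, so nothing has to be rebuilt beyond re-decoding the dictionary.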

Are you looking for a way to call a function residing inside the current module through a string with its name? If so,
def stuff(arg):
    return 5

d = {"stuff": "value", "stuff2": "value2"}
print globals()["stuff"](d["stuff"])
will print 5.

I would look into partial functions using functools.partial, in conjunction with __getattribute__:
import functools

class Foo:
    def __init__(self):
        self.a = 5
        self.b = 6

    def funca(self, x):
        print(self.a + x)

    def funcb(self, x):
        self.a += x
        self.funca(x)

mydict = {'funca': 1, 'funcb': 2}
foo = Foo()
for funcname, param in mydict.items():
    print('foo before:', foo.a, foo.b)
    print('calling', funcname)
    functools.partial(foo.__getattribute__(funcname), param)()
    print('foo after:', foo.a, foo.b)
Output:
foo before: 5 6
calling funca
6
foo after: 5 6
foo before: 5 6
calling funcb
9
foo after: 7 6


Is there a programmatic way to print the definition of a dynamically created function in python? [duplicate]

Suppose I have a Python function as defined below:
def foo(arg1,arg2):
    #do something with args
    a = arg1 + arg2
    return a
I can get the name of the function using foo.func_name. How can I programmatically get its source code, as I typed above?
If the function is from a source file available on the filesystem, then inspect.getsource(foo) might be of help:
If foo is defined as:
def foo(arg1,arg2):
    #do something with args
    a = arg1 + arg2
    return a
Then:
import inspect
lines = inspect.getsource(foo)
print(lines)
Returns:
def foo(arg1,arg2):
    #do something with args
    a = arg1 + arg2
    return a
But I believe that if the function is compiled from a string, stream or imported from a compiled file, then you cannot retrieve its source code.
The inspect module has methods for retrieving source code from python objects. Seemingly it only works if the source is located in a file though. If you had that I guess you wouldn't need to get the source from the object.
The following tests inspect.getsource(foo) using Python 3.6:
import inspect
def foo(arg1,arg2):
    #do something with args
    a = arg1 + arg2
    return a
source_foo = inspect.getsource(foo) # foo is normal function
print(source_foo)
source_max = inspect.getsource(max) # max is a built-in function
print(source_max)
This first prints:
def foo(arg1,arg2):
    #do something with args
    a = arg1 + arg2
    return a
Then fails on inspect.getsource(max) with the following error:
TypeError: <built-in function max> is not a module, class, method, function, traceback, frame, or code object
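If you need to handle both cases in one helper, a small guard around the call avoids the exception (a minimal sketch for Python 3, where getsource raises TypeError for built-ins and OSError when the source file cannot be found):
import inspect

def try_getsource(obj):
    # Return the source if inspect can find it, otherwise None.
    try:
        return inspect.getsource(obj)
    except (TypeError, OSError):  # built-ins, or no retrievable source file
        return None

print(try_getsource(try_getsource) is not None)  # True when run from a file
print(try_getsource(max))                        # None: built-in, no Python source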
Just use foo?? or ??foo.
If you are using IPython, then you need to type foo?? or ??foo to see the complete source code. To see only the docstring in the function, use foo? or ?foo. This works in Jupyter notebook as well.
In [19]: foo??
Signature: foo(arg1, arg2)
Source:
def foo(arg1,arg2):
    #do something with args
    a = arg1 + arg2
    return a
File:      ~/Desktop/<ipython-input-18-3174e3126506>
Type:      function
dis is your friend if the source code is not available:
>>> import dis
>>> def foo(arg1,arg2):
...     #do something with args
...     a = arg1 + arg2
...     return a
...
>>> dis.dis(foo)
  3           0 LOAD_FAST                0 (arg1)
              3 LOAD_FAST                1 (arg2)
              6 BINARY_ADD
              7 STORE_FAST               2 (a)
  4          10 LOAD_FAST                2 (a)
             13 RETURN_VALUE
While I'd generally agree that inspect is a good answer, I'd disagree that you can't get the source code of objects defined in the interpreter. If you use dill.source.getsource from dill, you can get the source of functions and lambdas, even if they are defined interactively.
It can also get the code for bound or unbound class methods and functions defined inside curries... however, you might not be able to compile that code without the enclosing object's code.
>>> from dill.source import getsource
>>>
>>> def add(x,y):
...     return x+y
...
>>> squared = lambda x:x**2
>>>
>>> print getsource(add)
def add(x,y):
    return x+y
>>> print getsource(squared)
squared = lambda x:x**2
>>>
>>> class Foo(object):
...     def bar(self, x):
...         return x*x+x
...
>>> f = Foo()
>>>
>>> print getsource(f.bar)
def bar(self, x):
    return x*x+x
>>>
To expand on runeh's answer:
>>> def foo(a):
...     x = 2
...     return x + a
>>> import inspect
>>> inspect.getsource(foo)
u'def foo(a):\n    x = 2\n    return x + a\n'
>>> print inspect.getsource(foo)
def foo(a):
    x = 2
    return x + a
EDIT: As pointed out by @0sh, this example works using IPython but not plain Python. It should be fine in both, however, when importing code from source files.
Since this post is marked as the duplicate of this other post, I answer here for the "lambda" case, although the OP is not about lambdas.
So, for lambda functions that are not defined in their own lines: in addition to marko.ristin's answer, you may wish to use mini-lambda or use SymPy as suggested in this answer.
mini-lambda is lighter and supports any kind of operation, but works only for a single variable
SymPy is heavier but much more equipped with mathematical/calculus operations. In particular it can simplify your expressions. It also supports several variables in the same expression.
Here is how you can do it using mini-lambda:
from mini_lambda import x, is_mini_lambda_expr
import inspect

def get_source_code_str(f):
    if is_mini_lambda_expr(f):
        return f.to_string()
    else:
        return inspect.getsource(f)

# test it
def foo(arg1, arg2):
    # do something with args
    a = arg1 + arg2
    return a

print(get_source_code_str(foo))
print(get_source_code_str(x ** 2))
It correctly yields
def foo(arg1, arg2):
    # do something with args
    a = arg1 + arg2
    return a

x ** 2
See mini-lambda documentation for details. I'm the author by the way ;)
You can use the inspect module to get the full source code: use its getsource() method. For example:
import inspect
def get_my_code():
    x = "abcd"
    return x
print(inspect.getsource(get_my_code))
You can check out more options at the link below.
retrieve your python code
To summarize:
import inspect
print( "".join(inspect.getsourcelines(foo)[0]))
Please mind that the accepted answers work only if the lambda is given on a separate line. If you pass it in as an argument to a function and would like to retrieve the code of the lambda as object, the problem gets a bit tricky since inspect will give you the whole line.
For example, consider a file test.py:
import inspect

def main():
    x, f = 3, lambda a: a + 1
    print(inspect.getsource(f))

if __name__ == "__main__":
    main()
Executing it gives you (mind the indentation!):
    x, f = 3, lambda a: a + 1
To retrieve the source code of the lambda, your best bet, in my opinion, is to re-parse the whole source file (by using f.__code__.co_filename) and match the lambda AST node by the line number and its context.
We had to do precisely that in our design-by-contract library icontract since we had to parse the lambda functions we pass in as arguments to decorators. It is too much code to paste here, so have a look at the implementation of this function.
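A rough sketch of that re-parse idea using only the standard ast module (simplified compared to the icontract implementation; it assumes Python 3.8+ for ast.get_source_segment and does not disambiguate several lambdas on one line):
import ast

def lambda_source(f):
    # Re-parse the defining file and cut out the first Lambda node
    # that starts on the same line as the function object.
    filename = f.__code__.co_filename
    lineno = f.__code__.co_firstlineno
    with open(filename) as handle:
        source = handle.read()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Lambda) and node.lineno == lineno:
            return ast.get_source_segment(source, node)
    return None
For the test.py example above, lambda_source(f) should then return just lambda a: a + 1 rather than the whole assignment line.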
If you're strictly defining the function yourself and it's a relatively short definition, a solution without dependencies would be to define the function in a string and assign the eval() of the expression to your function.
E.g.
funcstring = 'lambda x: x> 5'
func = eval(funcstring)
then optionally to attach the original code to the function:
func.source = funcstring
Rafał Dowgird's answer states:
I believe that if the function is compiled from a string, stream or imported from a compiled file, then you cannot retrieve its source code.
However, it is possible to retrieve the source code of a function compiled from a string, provided that the compiling code also added an entry to the linecache.cache dict:
import linecache
import inspect

script = '''
def add_nums(a, b):
    return a + b
'''

bytecode = compile(script, 'unique_filename', 'exec')
tmp = {}
eval(bytecode, {}, tmp)
add_nums = tmp["add_nums"]

linecache.cache['unique_filename'] = (
    len(script),
    None,
    script.splitlines(True),
    'unique_filename',
)

print(inspect.getsource(add_nums))
# prints:
# """
# def add_nums(a, b):
#     return a + b
# """
This is how the attrs library creates various methods for classes automatically, given a set of attributes that the class expects to be initialized with. See their source code here. As the source explains, this is a feature primarily intended to enable debuggers such as PDB to step through the code.
I believe that variable names aren't stored in pyc/pyd/pyo files, so you can not retrieve the exact code lines if you don't have source files.

automatic wrapper that adds an output to a function

[I am using python 2.7]
I wanted to make a little wrapper function that adds one output to a function. Something like:
def add_output(fct, value):
    return lambda *args, **kargs: (fct(*args,**kargs), value)
Example of use:
def f(a): return a+1
g = add_output(f,42)
print g(12) # print: (13,42)
This is the expected result, but it does not work if the function given to add_output returns more than one output (nor if it returns no output). In those cases, the wrapped function returns two outputs: one containing all the outputs of the initial function (or None if it returns nothing), and one with the added output:
def f1(a): return a,a+1
def f2(a): pass
g1 = add_output(f1,42)
g2 = add_output(f2,42)
print g1(12) # print: ((12,13),42) instead of (12,13,42)
print g2(12) # print: (None,42) instead of 42
I can see this is related to the impossibility of distinguishing between a single output of type tuple and several outputs. But it is disappointing not to be able to do something so simple in a dynamic language like Python...
Does anyone have an idea on a way to achieve this automatically and nicely enough, or am I in a dead-end ?
Note:
In case this changes anything, my real purpose is wrapping class (instance) methods to look like functions (for workflow stuff). However, it is required to add self to the output (in case its contents are changed):
class C(object):
    def f(self): return 'foo','bar'

def wrap(method):
    return lambda self, *args, **kargs: (self, method(self,*args,**kargs))

f = wrap(C.f)
c = C()
f(c) # returns (c,('foo','bar')) instead of (c,'foo','bar')
I am working with Python 2.7, so I want a solution for this version, or else I'll abandon the idea. I am still interested (and maybe future readers are too) in comments about this issue for Python 3, though.
Your add_output() function is what is called a decorator in Python. Regardless, you can use one of the collections module's ABCs (Abstract Base Classes) to distinguish between different results from the function being wrapped. For example:
import collections
def add_output(fct, value):
    def wrapped(*args, **kwargs):
        result = fct(*args, **kwargs)
        if isinstance(result, collections.Sequence):
            return tuple(result) + (value,)
        elif result is None:
            return value
        else:  # non-None and non-sequence
            return (result, value)
    return wrapped
def f1(a): return a,a+1
def f2(a): pass
g1 = add_output(f1, 42)
g2 = add_output(f2, 42)
print g1(12) # -> (12,13,42)
print g2(12) # -> 42
Depending on what sort of functions you plan on decorating, you might need to use the collections.Iterable ABC instead of, or in addition to, collections.Sequence.
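For example, if the decorated functions might return generators or other non-sequence iterables, the test could be broadened like this (a Python 2.7 sketch of the same idea; excluding strings is my own assumption, to avoid splitting a string result into characters):
import collections

def add_output(fct, value):
    def wrapped(*args, **kwargs):
        result = fct(*args, **kwargs)
        # any iterable except a string counts as "multiple outputs"
        if isinstance(result, collections.Iterable) and not isinstance(result, basestring):
            return tuple(result) + (value,)
        elif result is None:
            return value
        return (result, value)
    return wrapped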

Testing for side-effects in python

I want to check that my function has no side-effects, or only side-effects affecting precise variables. Is there a function to check that it actually has no side-effects (or side-effects on only certain variables)?
If not, how can I go about writing my own as follows:
My idea would be something like this: initialise, call the function under test, and then call the final method:
class test_side_effects(parents_scope, exclude_variables=[]):
    def __init__():
        for variable_name, variable_initial in parents_scope.items():
            if variable_name not in exclude_variables:
                setattr(self, "test_"+variable_name, variable_initial)

    def final(self, final_parents_scope):
        for variable_name, variable_final in final_parents_scope.items():
            if variable_name[:5] is "test_" and variable_name not in exclude_variables:
                assert getattr(self, "test_"+variable_name) is variable_final, "Unexpected side effect of %s from %s to %s" % (variable_name, variable_initial, variable_final)

#here parents_scope should be inputted as dict(globals(),**locals())
I'm unsure if this is precisely the dictionary I want...
Finally, should I be doing this? If not, why not?
I'm not familiar with the nested function testing library that you might be writing a test with, but it seems like you should really be using classes here (i.e. TestCase in many frameworks).
If your question, then, relates to getting the parent variables in your TestCase, you could get the __dict__ (it wasn't clear to me what "parent" variables you were referring to).
UPDATE: @hayden posted a gist to show the use of parent variables:
def f():
    a = 2
    b = 1
    def g():
        #a = 3
        b = 2
        c = 1
        print dict(globals(), **locals()) #prints a=1, but we want a=2 (from f)
    g()

a = 1
f()
If this is converted to a dictionary, then the problem is solvable with:
class f(object):  # could be unittest TestCase
    def setUp(self, a=2, b=1):
        self.a = a
        self.b = b

    def g(self):
        #a = 3
        b = 2
        c = 1
        full_scope = globals().copy()
        full_scope.update(self.__dict__)
        full_scope.update(locals())
        full_scope.pop('full_scope')
        print full_scope # print a = 1

my_test = f()
my_test.setUp(a=1)
my_test.g()
You are right to look for a tool which has already implemented this. I am hopeful that somebody else will have an already implemented solution.
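As a starting point while you look, here is a minimal snapshot-and-compare sketch of the same idea (the names are mine, not from an existing library; it only detects names that were added or rebound, not in-place mutation of existing objects):
def check_side_effects(func, scope, exclude=(), *args, **kwargs):
    # Snapshot the scope (e.g. globals()), call the function, then flag
    # every name that was added or rebound and is not explicitly excluded.
    before = dict(scope)
    result = func(*args, **kwargs)
    changed = [name for name, value in scope.items()
               if name not in exclude
               and (name not in before or before[name] is not value)]
    assert not changed, "Unexpected side effects on: %s" % ", ".join(changed)
    return result
Catching mutations of existing mutable objects (e.g. appending to a list) would need a deep comparison instead of this identity check.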

How to implement a submethod in a Python-class?

I apologize if I didn't express myself clearly. What I want to do is this:
class someClass(object):
    def aMethod(self, argument):
        return some_data #for example a list or a more complex datastructure

    def aMethod_max(self, argument):
        var = self.aMethod(argument)
        #do something with var
        return altered_var
or I could do:
    def aMethod(self, argument):
        self.someVar = some_data
        return some_data #for example a list or a more complex datastructure

    def aMethod_max(self, argument):
        if not hasattr(self, 'someVar'):
            self.aMethod(argument)
        #do something with self.someVar
        return altered_var
But I considered this too complicated and hoped for a more elegant solution. I hope that it's clear now, what I want to accomplish.
Therefore I fantasized about something like the following.
class someClass(object):
    def someMethod(self):
        #doSomething
        return result

    def subMethod(self):
        #doSomething with the result of someMethod

Foo = someClass()
Foo.someMethod.subMethod()
or if someMethod has an argument something like
Foo.someMethod(argument).subMethod()
How would I do something like this in python?
EDIT: or like this?
def subMethod(self):
    var = self.someMethod()
    return doSomething(var)
Let's compare the existing solutions already given in your question (e.g. the ones you call "complicated" and "inelegant") with your proposed alternative.
The existing solutions mean you will be able to write:
foo.subMethod() # foo.someMethod() is called internally
but your proposed alternative means you have to write:
foo.someMethod().subMethod()
which is obviously worse.
On the other hand, if subMethod has to be able to modify the result of any method, rather than just someMethod, then the existing solutions would mean you have to write:
foo.subMethod(foo.anyMethod())
with the only disadvantage here being that you have to type foo twice, as opposed to once.
Conclusion: on the whole, the existing solutions are less complicated and inelegant than your proposed alternative - so stick with the existing solutions.
You can do method chaining when the result of someMethod is an instance of someClass.
Simple example:
>>> class someClass:
...     def someMethod(self):
...         return self
...     def subMethod(self):
...         return self.__class__
...
>>> x=someClass()
>>> x
<__main__.someClass instance at 0x2aaaaab30d40>
>>> x.someMethod().subMethod()
<class __main__.someClass at 0x2aaaaab31050>
Not sure if I'm understanding it right, but perhaps you mean this:
Foo.subMethod(Foo.someMethod())
This passes the result of someMethod() to subMethod(). You'd have to change your current definition of subMethod() to accept the result of someMethod().
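For example, something along these lines (a sketch; the method bodies are made up just to show the shape):
class someClass(object):
    def someMethod(self, argument):
        return [argument, argument * 2]   # example data

    def subMethod(self, result):
        # works on the result of someMethod (or of any other method)
        return max(result)

foo = someClass()
print(foo.subMethod(foo.someMethod(10)))  # 20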
You can achieve something similar using decorators:
def on_result(f):
    def decorated(self,other,*args,**kwargs):
        result = getattr(self,other)(*args,**kwargs)
        return f(result)
    return decorated
Usage:
class someClass(object):
    def someMethod(self,x,y):
        #doSomething
        result = [1,2,3,x,y] # example
        return result

    @on_result
    def subMethod(self):
        #doSomething with the result of someMethod
        print self # example

Foo = someClass()
Foo.subMethod("someMethod",4,5)
Output:
[1, 2, 3, 4, 5]
As you see, the first argument is the name of the method to be chained, and the remaining ones will be passed to it, no matter what its signature is.
EDIT: on second thought, this is rather pointless, since you could always use
Foo.submethod(Foo.someMethod(4,5))...
Maybe I didn't understand what you're trying to achieve. Does the subMethod have to be linked to a specific method only? Or maybe it's the syntactic form
a.b().c()
that's important to you? (in that case, see kojiro's answer)
From the feedback so far, I understand that subMethod will link only to someMethod, right? Maybe you can achieve this combining a decorator with a closure:
def on_result(other):
    def decorator(f):
        def decorated(self,*a1,**k1):
            def closure(*a2,**k2):
                return f(self,getattr(self,other)(*a1,**k1),*a2,**k2)
            return closure
        return decorated
    return decorator
class someClass(object):
    def someMethod(self,a,b):
        return [a,2*b]

    @on_result('someMethod')
    def subMethod(self,result,c,d):
        result.extend([3*c,4*d])
        return result

Foo = someClass()
print Foo.subMethod(1,2)(3,4) # prints [1,4,9,16]
The decorator is kinda "ugly", but once written its usage is quite elegant IMHO (plus, there are no constraints on the signature of either method).
Note: I'm using Python 2.5 and this is the only way I know of writing decorators that take arguments. There's probably a better way, but I'm too lazy to look it up right now...

Simple python oo issue

Have a look at this simple example. I don't quite understand why o1 prints "Hello Alex" twice. I would think that, because of the default, self.a is always reset to an empty list. Could someone explain the rationale here? Thank you so much.
class A(object):
    def __init__(self, a=[]):
        self.a = a
o = A()
o.a.append('Hello')
o.a.append('Alex')
print ' '.join(o.a)
# >> prints Hello Alex
o1 = A()
o1.a.append('Hello')
o1.a.append('Alex')
print ' '.join(o1.a)
# >> prints Hello Alex Hello Alex
Read this Pitfall about mutable default function arguments:
http://www.ferg.org/projects/python_gotchas.html
In short, when you define
def __init__(self,a=[])
the default list referenced by self.a is created only once, at definition time, not at run time. So each time you call o.a.append or o1.a.append, you are modifying the same list.
The typical way to fix this is to say:
class A(object):
    def __init__(self, a=None):
        self.a = [] if a is None else a
By moving self.a=[] into the body of the __init__ function, a new empty list is created at run-time (each time __init__ is called), not at definition-time.
Default arguments in Python, like:
def blah(a="default value")
are evaluated once and re-used in every call, so when you modify a you are modifying the single shared default object across all calls. A possible solution is to do:
def blah(a=None):
    if a is None:
        a = []
You can read more about this issue on: http://www.ferg.org/projects/python_gotchas.html#contents_item_6
Basically, never use mutable objects like lists or dictionaries as a default value for an argument.
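The same idiom applies to dictionary defaults (a small sketch):
def blah(options=None):
    # create a fresh dict on every call instead of sharing a single default
    if options is None:
        options = {}
    options['seen'] = True
    return options

print(blah())  # {'seen': True}
print(blah())  # {'seen': True} -- a new dict each call, nothing accumulates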
