Procedurally created decorator functions - python

My goal is to create a function that will procedurally generate a series of other functions within a class from serialized data.
This is easy enough using __dict__, but...
I would like each function to be initialized with the @property decorator (or a similar custom decorator) so that I can access these functions like attributes.
Basically, I would like to do something like the following:
class myClass(object):

    def __init__(self):
        self.makeFuncs(['edgar', 'allan', 'poe'])

    def makeFuncs(self, data):
        for item in data:
            self.__dict__[item] = '[%s] <--- is_the_data' % item

myInstance = myClass()
print myInstance.poe
# '[poe] <--- is_the_data'
Got any ideas?

You can dynamically add properties, but a property must be added to the class object, not to the instance.
Here's an example:
def make_prop(cls, p):
    def f(self):
        print 'IN %s' % p
        return '[%s]' % p
    return f

class myClass(object):
    pass

# add the properties
for p in ('edgar', 'allan', 'poe'):
    setattr(myClass, p, property(make_prop(myClass, p)))

y = myClass()
print y.allan
print y.poe
Prints:
IN allan
[allan]
IN poe
[poe]
Also, it is essential to use make_prop to create the function object instead of defining it directly inside the for loop, because of the way Python closures bind variables: every function defined in the loop would otherwise see only the final value of p. I.e. this won't work as expected:
# add the properties
for p in ('edgar', 'allan', 'poe'):
    def f(self):
        print 'IN %s' % p
        return '[%s]' % p
    setattr(myClass, p, property(f))
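An alternative to the make_prop factory, if you prefer to keep everything inside the loop, is the default-argument trick, which binds the current value of p at the moment each function is defined. This is just a sketch of the same idea, not part of the original answer:

# add the properties, binding p via a default argument
for p in ('edgar', 'allan', 'poe'):
    def f(self, p=p):
        print 'IN %s' % p
        return '[%s]' % p
    setattr(myClass, p, property(f))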

Here is the answer I came to for procedurally adding properties to a custom shader class in Maya.
Thx @shx2!
import maya.cmds as mc
import sushi.maya.node.dependNode as dep

class Shader(dep.DependNode):
    def __init__(self, *args, **kwargs):
        super(Shader, self).__init__(*args, **kwargs)
        makeProps(self.__class__, ['color', 'transparency', 'ambientColor', 'incandescence', 'diffuse', 'translucence', 'translucenceDepth', 'translucenceFocus'])

def createShaderProperties(attrName):
    def getterProp(self):
        return mc.getAttr('%s.%s' % (self.name, attrName))[0]
    def setterProp(self, value):
        mc.setAttr('%s.%s' % (self.name, attrName), *value, type='double3')
    return (getterProp, setterProp)

def makeProps(cls, data):
    for dat in data:
        getterProp, setterProp = createShaderProperties(dat)
        setattr(cls, dat, property(getterProp))
        setattr(cls, dat, property.setter(cls.__dict__[dat], setterProp))
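A side note on the last two setattr calls: since both accessors are already in hand, the same property could be installed in a single step, which avoids looking the freshly-set descriptor back up through cls.__dict__. This is only an equivalent sketch of makeProps, not a change the original answer requires:

def makeProps(cls, data):
    for dat in data:
        getterProp, setterProp = createShaderProperties(dat)
        # one property object carrying both the getter and the setter
        setattr(cls, dat, property(getterProp, setterProp))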

Your current idea won't work because property objects need to be in the class in order to work (they are descriptors). Since your list of functions is specific to each instance, that won't be possible.
However, you can make the general idea work using __getattr__. Here's an implementation that I think does what you want given a dictionary mapping from names to functions:
class MyClass(object):
    def __init__(self, func_dict):
        self.func_dict = func_dict

    def __getattr__(self, name):
        if name in self.func_dict:
            return self.func_dict[name]()  # call the function
        raise AttributeError("{} object has no attribute named {}"
                             .format(self.__class__.__name__, name))  # or raise an error
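A quick usage sketch of that class; the function dict here is made up purely for illustration:

funcs = {name: (lambda n=name: '[%s] <--- is_the_data' % n)
         for name in ('edgar', 'allan', 'poe')}

myInstance = MyClass(funcs)
print myInstance.poe   # '[poe] <--- is_the_data'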

Related

Creating methods on the fly for a class instance

The following program is unable to create a member function for a class instance:
class MyClass(object):
    def __init__(self, name=""):
        self.name = name

    def read_name(self):
        return self.name

# First argument should be a ref to the instance
def callback(fcn, arg):
    fcn.name = arg

# Create an instance of the class
a = MyClass("Blue")

# Let's add new member functions
setattr(a, 'callback1', callback)
setattr(a, 'callback2', callback)

print a.read_name()
print a.callback1("purple")  #! FAILS
print a.callback2("cyan")    #! FAILS
What is the right way to create class member functions automatically?
I want to create 'N' callback functions; they will all modify some common/uncommon class data (a shared dict).
EDIT 1
I wish to collect information from 'N' separate/parallel threads by passing callback functions. I do not know beforehand how many callback functions I need, thus I want to create them on the fly.
EDIT 2
I have a dictionary (d) where I am storing the information of different processes. The dictionary is accessed within the callback. But because the same callback function is called from different threads, the dictionary data gets garbled. As a quick fix, I thought of creating separate callbacks.
If you know what you're doing, you'd want to try
import types
setattr(a, 'callback1', types.MethodType(callback, a, MyClass))
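With that in place the grafted attribute behaves like a bound method of a, so, continuing the asker's example, something along these lines should work (a sketch, not tested against the original code):

a.callback1("purple")
print a.read_name()   # purple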
In short: when grafting a method, assign it to the class, not to the instance.
Here's an elucidating example.
class A(object):
    """As trivial as a class can get."""
    def foo(self):
        return self.bar(1) + self.baz()

# Rework everything!
def new_bar(self, x):
    return 'I got %r' % x

def new_baz(self):
    return ' and I\'m okay!'

A.bar = new_bar
A.baz = new_baz
print A().foo()
Now let's graft a method onto an instance.
a = A()
# An instance attribute is a bound method;
# when we replace it with a function, we lose access to self.
a.bar = lambda x: x * 100
A.baz = lambda self: 42
assert a.foo() == 142
# We can do better, though.
from types import MethodType
a2 = A()
a2.foo = MethodType(lambda self: 'I know myself, my class is %s' % self.__class__.__name__, a2)
print a2.foo()
Note how you don't need setattr to set an attribute, even an unknown attribute. You may remember that you don't use setattr in __init__ either.
You can't add a class method to an instance; you have to add it to the class:
setattr(MyClass, 'callback1', callback)
But it's still a terrible idea. Why would you want this functionality?
Edit: keep your callbacks in a container instead:
class MyClass(object):
    def __init__(self, name=""):
        self.name = name
        self.callbacks = []

    def callback(self, idx, arg):
        self.callbacks[idx](self, arg)

# First argument should be a ref to the instance
def callback(fcn, arg):
    fcn.name = arg

# Create an instance of the class
a = MyClass("Blue")

# Let's add new member functions
a.callbacks.append(callback)
a.callbacks.append(callback)

print a.name
a.callback(0, "purple")
print a.name
a.callback(1, "cyan")
print a.name
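On EDIT 2 specifically: instead of grafting N methods onto the instance, a small closure factory can hand each thread its own callback bound to its own key, so no two callbacks write to the same slot. This is only a sketch of that idea; the result dict and keys are invented for illustration:

def make_callback(store, key):
    def callback(arg):
        store[key] = arg   # each callback writes only to its own slot
    return callback

results = {}
callbacks = [make_callback(results, i) for i in range(5)]
callbacks[2]("data from thread 2")
print results   # {2: 'data from thread 2'}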

Dynamically generate method from string?

I have a dict of different types for which I want to add a simple getter based on the name of the actual parameter.
For example, for three storage parameters, let's say:
self.storage = {'total':100,'used':88,'free':1}
I am now looking for a way (if possible) to generate such functions on the fly with some meta-programming magic.
Instead of
class spaceObj(object):
    def getSize(self, what='total'):
        return self.storage[what]
or hard coding
@property
def getSizeTotal(self):
    return self.storage['total']
but
class spaceObj(object):
    # manipulating the object's index and magic
    @property
    def getSize:
        return ???
so that calling mySpaceObj.getSizeFree would be derived, with getSize only defined once in the object and the related functions derived from it by manipulating the object's function list.
Is something like that possible?
While it is certainly possible to serve an unknown attribute from a class as if it were a property, this is not a very Pythonic approach (the __getattr__ magic method used this way is rather Rubyist):
class spaceObj(object):
    storage = None

    def __init__(self):  # this is for testing only
        self.storage = {'total': 100, 'used': 88, 'free': 1}

    def __getattr__(self, item):
        if item[:7] == 'getSize':  # check if an undefined attribute starts with this
            return self.getSize(item[7:])

    def getSize(self, what='total'):
        return self.storage[what.lower()]

print (spaceObj().getSizeTotal)  # 100
You can put the values into the object as properties:
class SpaceObj(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

storage = {'total': 100, 'used': 88, 'free': 1}
o = SpaceObj(**storage)
print o.total
or
o = SpaceObj(total=100, used=88, free=1)
print o.total
or using __getattr__:
class SpaceObj(object):
    def __init__(self, **kwargs):
        self.storage = kwargs

    def __getattr__(self, name):
        return self.storage[name]

o = SpaceObj(total=100, used=88, free=1)
print o.total
The latter approach takes a bit more code, but it's safer: if the class has a method foo and someone creates an instance with SpaceObj(foo=1), the method will be shadowed by the first approach.
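A quick sketch of that failure mode with the __dict__.update version (the method name foo is made up for illustration):

class SpaceObj(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
    def foo(self):
        return 'real method'

o = SpaceObj(foo=1)
print o.foo   # 1 -- the instance attribute shadows the method

With the __getattr__ version, o.foo would still resolve to the method, because __getattr__ is only consulted when normal attribute lookup fails.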
>>> import new
>>> funcstr = "def wat(): print \"wat\";return;"
>>> funcbin = compile(funcstr,'','exec')
>>> ns = {}
>>> exec funcbin in ns
>>> watfunction = new.function(ns["wat"].func_code,globals(),"wat")
>>> globals()["wat"]=watfunction
>>> wat()
wat
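For reference, the deprecated new module's new.function is essentially types.FunctionType, so the same trick can be written without new; a rough equivalent sketch:

>>> import types
>>> funcstr = "def wat(): print \"wat\";return;"
>>> ns = {}
>>> exec compile(funcstr, '<string>', 'exec') in ns
>>> wat = types.FunctionType(ns["wat"].func_code, globals(), "wat")
>>> wat()
wat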

Class properties and __setattr__

In a Python class, when I use __setattr__, it takes precedence over properties defined in the class (or in any base class). Consider the following code:
class Test(object):
    def get_x(self):
        x = self._x
        print "getting x: %s" % x
        return x

    def set_x(self, val):
        print "setting x: %s" % val
        self._x = val

    x = property(get_x, set_x)

    def __getattr__(self, a):
        print "getting attr %s" % a
        return -1

    def __setattr__(self, a, v):
        print "setting attr %s" % a
When I create the class and try to set x, __setattr__ is called instead of set_x:
>>> test = Test()
>>> test.x = 2
setting attr x
>>> print test.x
getting attr _x
getting x: -1
-1
What I want to achieve is that the actual code in __setattr__ is called only when there is no relevant property, i.e. test.x = 2 should call set_x. I know that I can achieve this easily by manually checking whether a is "x" in __setattr__, however that would make for poor design. Is there a more clever way to ensure the proper behavior in __setattr__ for every property defined in the class and all the base classes?
The search order that Python uses for attributes goes like this:
1. __getattribute__ and __setattr__
2. Data descriptors, like property
3. Instance variables from the object's __dict__ (when setting an attribute, the search ends here)
4. Non-data descriptors (like methods) and other class variables
5. __getattr__
Since __setattr__ is first in line, if you have one you need to make it smart, unless you want it to handle all attribute setting for your class. It can be smart in either of two ways: make it handle a specific set of attributes only, or make it handle all but some set of attributes. For the ones you don't want it to handle, call super().__setattr__.
For your example class, handling "all attributes except 'x'" is probably easiest:
def __setattr__(self, name, value):
    if name == "x":
        super(Test, self).__setattr__(name, value)
    else:
        print "setting attr %s" % name
This is not a bullet-proof solution, but, as you suggested, you can check whether a property is being set by looking up the property object among the class's attributes (using getattr on the class object):
class Test(object):
    def get_x(self):
        x = self._x
        print "getting x: %s" % x
        return x

    def set_x(self, val):
        print "setting x: %s" % val
        self._x = val

    x = property(get_x, set_x)

    @property  # no fset
    def y(self):
        print "getting y: 99"
        return 99

    def __getattr__(self, a):
        print "getting attr %s" % a
        return -1

    def __setattr__(self, a, v):
        propobj = getattr(self.__class__, a, None)
        if isinstance(propobj, property):
            print "setting attr %s using property's fset" % a
            if propobj.fset is None:
                raise AttributeError("can't set attribute")
            propobj.fset(self, v)
        else:
            print "setting attr %s" % a
            super(Test, self).__setattr__(a, v)

test = Test()
test.x = 2
print test.x
# test.y = 88  # raises AttributeError: can't set attribute
print test.y
test.z = 3
print test.z
EDIT: replaced self.__dict__[a] = v with super(Test, self).__setattr__(a, v), as seen in @Blckknght's answer.
AFAIK, there is no clean way to do this. The problem arises from the asymmetry between __getattr__ and __setattr__: the former is called only if normal attribute lookup fails, but the latter is called unconditionally. Since there is no general way for __setattr__ to fail, I don't know of a way this behavior could be changed.
Ultimately, I believe the only way to get the behavior you want is to fold the set_ action of your properties into your __setattr__ function -- and if you're doing that, you might as well fold the behavior of the getters into __getattr__ so it's all maintainable in one place.
I have encountered this problem several times; my preferred solution is the following:
For each property [property], define get_[property] and set_[property] functions as you do, but without using decorators
Modify __getattr__ and __setattr__ so that they check for the presence of these functions and use them if they are available.
Here's a minimal example:
class SmartSetter(object):
    def __init__(self, name):
        self.set_name(name)

    def get_name(self):
        # return the name property
        return self._name

    def set_name(self, value):
        # set the name property
        self._name = value

    def __getattr__(self, key):
        # first condition is to avoid recursive calling of this function by
        # __setattr__ when setting a previously undefined class attribute.
        if not key.startswith('get_') and hasattr(self, 'get_' + key):
            return getattr(self, 'get_' + key)()
        # implement your __getattr__ magic here...
        raise AttributeError("no such attribute: %s" % key)

    def __setattr__(self, key, value):
        try:
            return getattr(self, 'set_' + key)(value)
        except AttributeError:
            # implement your __setattr__ magic here...
            return super(SmartSetter, self).__setattr__(key, value)

if __name__ == '__main__':
    smart_setter = SmartSetter("Bob the Builder")
    print smart_setter.name
    # Will print "Bob the Builder"
    smart_setter.name = "Spongebob Squarepants"
    print smart_setter.name
    # Will print "Spongebob Squarepants"
The advantage of this method is that it preserves the normal behavior for all attributes except those that have "getter" and "setter" methods, and it doesn't require any modification of the __getattr__ and __setattr__ functions when you add or remove properties.

Mapping obj.method({argument:value}) to obj.argument(value)

I don't know if this will make sense, but...
I'm trying to dynamically assign methods to an object.
#translate this
object.key(value)
#into this
object.method({key:value})
To be more specific in my example, I have an object (which I didn't write), lets call it motor, which has some generic methods set, status and a few others. Some take a dictionary as an argument and some take a list. To change the motor's speed, and see the result, I use:
motor.set({'move_at':10})
print motor.status('velocity')
The motor object, then formats this request into a JSON-RPC string, and sends it to an IO daemon. The python motor object doesn't care what the arguments are, it just handles JSON formatting and sockets. The strings move_at and velocity are just two of what might be hundreds of valid arguments.
What I'd like to do is the following instead:
motor.move_at(10)
print motor.velocity()
I'd like to do it in a generic way since I have so many different arguments I can pass. What I don't want to do is this:
# create a new function for every possible argument
def move_at(self, x):
    return self.set({'move_at': x})

def velocity(self):
    return self.status('velocity')

# and a hundred more...
I did some searching on this which suggested the solution lies with lambdas and meta programming, two subjects I haven't been able to get my head around.
UPDATE:
Based on the code from user470379 I've come up with the following...
# This is what I have now....
class Motor(object):
    def set(self, a_dict):
        print "Setting a value", a_dict

    def status(self, a_list):
        print "requesting the status of", a_list
        return 10

# Now to extend it....
class MyMotor(Motor):
    def __getattr__(self, name):
        def special_fn(*value):
            # What we return depends on how many arguments there are.
            if len(value) == 0:
                return self.status((name))
            if len(value) == 1:
                return self.set({name: value[0]})
        return special_fn

    def __setattr__(self, attr, value):  # This is based on some other answers
        self.set({attr: value})

x = MyMotor()
x.move_at = 20   # Uses __setattr__
x.move_at(10)    # May remove this style from __getattr__ to simplify code.
print x.velocity()
output:
Setting a value {'move_at': 20}
Setting a value {'move_at': 10}
10
Thank you to everyone who helped!
What about creating your own __getattr__ for the class that returns a function created on the fly? IIRC, there are some tricky cases to watch out for between __getattr__ and __getattribute__ that I don't recall off the top of my head; I'm sure someone will post a comment to remind me:
def __getattr__(self, name):
    def set_fn(value):          # self is captured from the enclosing __getattr__
        return self.set({name: value})
    return set_fn
Then what should happen is that calling an attribute that doesn't exist (ie: move_at) will call the __getattr__ function and create a new function that will be returned (set_fn above). The name variable of that function will be bound to the name parameter passed into __getattr__ ("move_at" in this case). Then that new function will be called with the arguments you passed (10 in this case).
Edit
A more concise version using lambdas (untested):
def __getattr__(self, name):
    return lambda value: self.set({name: value})
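Wired into the asker's Motor class from the update above, the lambda version would look roughly like this; only the setter side is sketched, and the status/getter side would follow the same pattern:

class MyMotor(Motor):
    def __getattr__(self, name):
        # any unknown attribute becomes a setter for that key
        return lambda value: self.set({name: value})

m = MyMotor()
m.move_at(10)   # equivalent to m.set({'move_at': 10})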
There are a lot of different potential answers to this, but many of them will probably involve subclassing the object and/or writing or overriding the __getattr__ function.
Essentially, the __getattr__ function is called whenever python can't find an attribute in the usual way.
Assuming you can subclass your object, here's a simple example of what you might do (it's a bit clumsy but it's a start):
class foo(object):
    def __init__(self):
        print "initting " + repr(self)
        self.a = 5

    def meth(self):
        print self.a

class newfoo(foo):
    def __init__(self):
        super(newfoo, self).__init__()
        def meth2():                        # Or, use a lambda: ...
            print "meth2: " + str(self.a)   # but you don't have to
        self.methdict = {"meth2": meth2}

    def __getattr__(self, name):
        return self.methdict[name]

f = foo()
g = newfoo()
f.meth()
g.meth()
g.meth2()
Output:
initting <__main__.foo object at 0xb7701e4c>
initting <__main__.newfoo object at 0xb7701e8c>
5
5
meth2: 5
You seem to have certain "properties" of your object that can be set by
obj.set({"name": value})
and queried by
obj.status("name")
A common way to go in Python is to map this behaviour to what looks like simple attribute access. So we write
obj.name = value
to set the property, and we simply use
obj.name
to query it. This can easily be implemented using the __getattr__() and __setattr__() special methods:
class MyMotor(Motor):
    def __init__(self, *args, **kw):
        self._init_flag = True
        Motor.__init__(self, *args, **kw)
        self._init_flag = False

    def __getattr__(self, name):
        return self.status(name)

    def __setattr__(self, name, value):
        if self._init_flag or hasattr(self, name):
            return Motor.__setattr__(self, name, value)
        return self.set({name: value})
Note that this code disallows the dynamic creation of new "real" attributes of Motor instances after the initialisation. If this is needed, corresponding exceptions could be added to the __setattr__() implementation.
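For instance, one such exception could be a whitelist of attribute names that should still be stored normally; the names in _REAL_ATTRS below are purely hypothetical, and the rest of the class follows the code above:

class MyMotor(Motor):
    _REAL_ATTRS = ('label', 'owner')   # hypothetical names we still want as plain attributes

    def __init__(self, *args, **kw):
        self._init_flag = True
        Motor.__init__(self, *args, **kw)
        self._init_flag = False

    def __getattr__(self, name):
        return self.status(name)

    def __setattr__(self, name, value):
        if self._init_flag or name in self._REAL_ATTRS or hasattr(self, name):
            return Motor.__setattr__(self, name, value)
        return self.set({name: value})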
Instead of setting with function-call syntax, consider using assignment (with =). Similarly, just use attribute syntax to get a value, instead of function-call syntax. Then you can use __getattr__ and __setattr__:
class OtherType(object):  # this is the one you didn't write
    # dummy implementations for the example:
    def set(self, D):
        print "setting", D

    def status(self, key):
        return "<value of %s>" % key

class Blah(object):
    def __init__(self, parent):
        object.__setattr__(self, "_parent", parent)

    def __getattr__(self, attr):
        return self._parent.status(attr)

    def __setattr__(self, attr, value):
        self._parent.set({attr: value})

obj = Blah(OtherType())
obj.velocity = 42     # prints setting {'velocity': 42}
print obj.velocity    # prints <value of velocity>

Dynamic/runtime method creation (code generation) in Python

I need to generate code for a method at runtime. It's important to be able to run arbitrary code and have a docstring.
I came up with a solution combining exec and setattr, here's a dummy example:
class Viking(object):
    def __init__(self):
        code = '''
def dynamo(self, arg):
    """ dynamo's a dynamic method!
    """
    self.weight += 1
    return arg * self.weight
'''
        self.weight = 50
        d = {}
        exec code.strip() in d
        setattr(self.__class__, 'dynamo', d['dynamo'])

if __name__ == "__main__":
    v = Viking()
    print v.dynamo(10)
    print v.dynamo(10)
    print v.dynamo.__doc__
Is there a better / safer / more idiomatic way of achieving the same result?
Based on Theran's code, but extending it to methods on classes:
class Dynamo(object):
    pass

def add_dynamo(cls, i):
    def innerdynamo(self):
        print "in dynamo %d" % i
    innerdynamo.__doc__ = "docstring for dynamo%d" % i
    innerdynamo.__name__ = "dynamo%d" % i
    setattr(cls, innerdynamo.__name__, innerdynamo)

for i in range(2):
    add_dynamo(Dynamo, i)

d = Dynamo()
d.dynamo0()
d.dynamo1()
Which should print:
in dynamo 0
in dynamo 1
Function docstrings and names are mutable properties. You can do anything you want in the inner function, or even have multiple versions of the inner function that makedynamo() chooses between. No need to build any code out of strings.
Here's a snippet out of the interpreter:
>>> def makedynamo(i):
...     def innerdynamo():
...         print "in dynamo %d" % i
...     innerdynamo.__doc__ = "docstring for dynamo%d" % i
...     innerdynamo.__name__ = "dynamo%d" % i
...     return innerdynamo
>>> dynamo10 = makedynamo(10)
>>> help(dynamo10)
Help on function dynamo10 in module __main__:
dynamo10()
docstring for dynamo10
Python will let you declare a function in a function, so you don't have to do the exec trickery.
def __init__(self):
    def dynamo(self, arg):
        """ dynamo's a dynamic method!
        """
        self.weight += 1
        return arg * self.weight
    self.weight = 50
    setattr(self.__class__, 'dynamo', dynamo)
If you want to have several versions of the function, you can put all of this in a loop and vary what you name them in the setattr function:
def __init__(self):
    for i in range(0, 10):
        def dynamo(self, arg, i=i):
            """ dynamo's a dynamic method!
            """
            self.weight += i
            return arg * self.weight
        setattr(self.__class__, 'dynamo_' + str(i), dynamo)
    self.weight = 50
(I know this isn't great code, but it gets the point across). As far as setting the docstring, I know that's possible but I'd have to look it up in the documentation.
Edit: You can set the docstring via dynamo.__doc__, so you could do something like this in your loop body:
dynamo.__doc__ = "Adds %s to the weight" % i
Another Edit: With help from @eliben and @bobince, the closure problem should be solved.
class Dynamo(object):
    def __init__(self):
        pass

    @staticmethod
    def init(initData=None):
        if initData is not None:
            dynamo = Dynamo()
            for name, value in initData.items():
                code = '''
def %s(self, *args, **kwargs):
    %s
''' % (name, value)
                result = {}
                exec code.strip() in result
                setattr(dynamo.__class__, name, result[name])
            return dynamo
        return None

service = Dynamo.init({'fnc1': 'pass'})
service.fnc1()
A bit more general solution:
You can call any method on an instance of class Dummy.
The docstring is generated based on the method's name.
The handling of arbitrary input arguments is demonstrated by just returning them.
Code
class Dummy(object):
    def _mirror(self, method, *args, **kwargs):
        """doc _mirror"""
        return args, kwargs

    def __getattr__(self, method):
        "doc __getattr__"
        def tmp(*args, **kwargs):
            """doc tmp"""
            return self._mirror(method, *args, **kwargs)
        tmp.__doc__ = (
            'generated docstring, access by {:}.__doc__'
            .format(method))
        return tmp

d = Dummy()
print(d.test2('asd', level=0), d.test.__doc__)
print(d.whatever_method(7, 99, par=None), d.whatever_method.__doc__)
Output
(('asd',), {'level': 0}) generated docstring, access by test.__doc__
((7, 99), {'par': None}) generated docstring, access by whatever_method.__doc__
Pardon me for my bad English.
I recently needed to generate dynamic functions to bind each menu item to opening a particular frame in wxPython. Here is what I did.
First, I create a list of mappings between menu items and frames.
menus = [(self.menuItemFile, FileFrame), (self.menuItemEdit, EditFrame)]
The first item in each mapping is the menu item and the last item is the frame to be opened. Next, I bind the wx.EVT_MENU event of each menu item to its particular frame.
for menu in menus:
    f = genfunc(self, menu[1])
    self.Bind(wx.EVT_MENU, f, menu[0])
The genfunc function is the dynamic function builder; here is the code:
def genfunc(parent, form):
    def OnClick(event):
        f = form(parent)
        f.Maximize()
        f.Show()
    return OnClick
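If you prefer not to write a factory function, functools.partial can pre-bind the same arguments; this is just a sketch of the equivalent binding, assuming the same menus list and that the loop runs in the same place as above:

import functools

def OnClick(parent, form, event):
    # open and maximize the requested frame
    f = form(parent)
    f.Maximize()
    f.Show()

for menu_item, frame_cls in menus:
    self.Bind(wx.EVT_MENU, functools.partial(OnClick, self, frame_cls), menu_item)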
