On-demand creation of member methods using __getattr__()

Given a class MyClass(object), how can I programmatically define class member methods using some template and the MyClass.__getattr__ mechanism? I'm thinking of something along the lines of
class MyClass(object):
    def __init__(self, defaultmembers, members):
        # defaultmembers is a list
        # members is a dict
        self._defaultmembers = defaultmembers
        self._members = members
        # some magic spell creating a member function factory

    def __getattr__(self, name):
        # some more magic
        ...

def f_MemberB():
    pass

C = MyClass(defaultmembers=["MemberA"], members=dict(MemberB=f_MemberB))
C.MemberA()  # should be a valid statement
C.MemberB()  # should also be a valid statement
C.MemberC()  # should raise an AttributeError
C.MemberA should be a method automatically created from some template mechanism inside the class, and C.MemberB should be the function f_MemberB.

You don't need to redefine __getattr__ (and in fact you generally should never do that). Python is a late binding language. This means you can simply assign values to names in a class at any time (even dynamically at runtime) and they will now exist.
So, in your case, you can simply do:
class MyClass(object):
    def __init__(self, defaultmembers, members):
        # defaultmembers is a list
        # members is a dict
        for func in defaultmembers:
            setattr(self, func.__name__, func)
        for name, func in members.items():
            setattr(self, name, func)
Note that this will not actually bind the function to the instance as a method (it will not receive self as its first argument). If that is what you want, you need to use the MethodType function from the types module, like so:
from types import MethodType

class MyClass(object):
    def __init__(self, defaultmembers, members):
        # defaultmembers is a list
        # members is a dict
        for func in defaultmembers:
            name = func.__name__
            setattr(self, name, func)
            setattr(self, name, MethodType(getattr(self, name), self))
        for name, func in members.items():
            setattr(self, name, func)
            setattr(self, name, MethodType(getattr(self, name), self))
Example:
def def_member(self):
    return 1

def key_member(self):
    return 2

>>> test = MyClass([def_member], {'named_method': key_member})
>>> test.def_member()
1
>>> test.named_method()
2
You can also make the __init__ method slightly less awkward by using *args and **kwargs, so that the example would just be test = MyClass(def_member, named_method=key_member), if you know there won't be any other arguments to the constructor of this class.
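As a rough sketch of that variant (not from the original answer; it assumes the same MethodType binding shown above):

from types import MethodType

class MyClass(object):
    def __init__(self, *defaultmembers, **members):
        # positional arguments: functions registered under their own __name__
        for func in defaultmembers:
            setattr(self, func.__name__, MethodType(func, self))
        # keyword arguments: functions registered under the keyword name
        for name, func in members.items():
            setattr(self, name, MethodType(func, self))

test = MyClass(def_member, named_method=key_member)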
Obviously I've left out the template creation bit for the defaultmembers, since I've used passing a function, rather than simply a name, as the argument. But you should be able to see how you would expand that example to suit your needs, as the template creation part is a bit out of the scope of the original question, which is how to dynamically bind methods to classes.
An important note: this only affects the single instance of MyClass. Please alter the question if you want to affect all instances, though I would think using a mixin class would be better in that case.
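If you did want every instance to see the methods, one hedged sketch is to attach them to the class object itself (the Widget and attach_to_class names here are made up for illustration):

class Widget(object):
    pass

def attach_to_class(cls, members):
    # functions assigned on the class become ordinary methods,
    # visible to every existing and future instance
    for name, func in members.items():
        setattr(cls, name, func)

def f_member(self):
    return "hello"

attach_to_class(Widget, {"member_b": f_member})
print(Widget().member_b())  # -> hello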

Related

Is it possible to share a method reference defined in subclass instances with instances of the parent class?

I have a class called A:
>>> class A:
...     def __init__(self):
...         self.register = {}
...
>>>
class A will be subclassed by class B. class B, however, contains methods that need to be registered as name: function pairs in the register dictionary of class A instances, so that class A can do work using those methods.
Here is an example of what I mean:
>>> class B(A):
...     def foo(self):
...         pass
...     def bar(self):
...         pass
...
>>> b = B()
>>> b.register  # foo and bar were registered
{'key': <foo function>, 'key2': <bar function>}
>>>
Is there an idiomatic way to solve this? Or is something like this not possible, and would it be better to change my code's structure?
Note this is not a duplicate of Auto-register class methods using decorator, because my register is not global; it is an instance variable of a class. This means using a metaclass as shown in the selected answer there would not work.
I think the link to this question you mentioned in your post really can be used to solve your problem, if I understand it correctly.
The information you're trying to register is global information. While you want each instance to have a register containing this global information, all you really need to do is have __init__ copy the global register into the instance register.
If you declare all the classes you need first, and only afterwards worry about instance registers holding references to all the methods declared in the subclasses, you just need to perform the registration when the subclasses themselves are declared. That is easy to do in Python 3.6 (but not 3.5) with the new __init_subclass__ mechanism. In Python 3.5 and earlier it is more easily done with a metaclass.
class FallbackDict(dict):
    def __init__(self, fallback):
        self.fallback = fallback

    def __missing__(self, key):
        value = self.fallback[key]
        self[key] = value
        return value


class A:
    register = {}

    def __init__(self):
        # instance register shadows global register
        # for access via "self."
        self.register = FallbackDict(__class__.register)

    def __init_subclass__(cls):
        for attrname, value in cls.__dict__.items():
            if callable(value):
                __class__.register[attrname] = value
The code here is meant to be simple - the custom dict class will "copy on read" values from the A.register dictionary into the instance dictionary. If you need a more complete dictionary that includes this behavior (for example, one that correctly iterates the keys, values and items of both itself and the fallback dictionary), you'd be better off implementing FallbackDict as a collections.abc.MutableMapping instead (and just use an aggregated plain dictionary to actually keep the data).
You don't need the custom dict at all if you plan to create all your classes that register new methods before creating any instance of "A" - or if newly created classes don't have to update the existing instances - but in that case, you should copy A.register to the instance's "self.register" inside __init__.
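A minimal sketch of that simpler variant (assuming every subclass is declared before any instance of A is created):

class A:
    register = {}

    def __init__(self):
        # plain copy: changes made to A.register after this point
        # will not be seen by instances that already exist
        self.register = dict(A.register)

    def __init_subclass__(cls):
        for attrname, value in cls.__dict__.items():
            if callable(value):
                A.register[attrname] = value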
If you can't change to Python 3.6 you will also need a custom metaclass to trigger the __init_subclass__ method above. Just keep it as is, so that your code is ready to move to Python 3.6 and eliminate the metaclass, and add a metaclass something like:
class MetaA(type):
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        cls.__init_subclass__()


class A(metaclass=MetaA):
    ...

    @classmethod
    def __init_subclass__(cls):
        # as above
        ...

Is there a shorthand to create member variables from function arguments in python?

Suppose I have a constructor in python:
def __init__(self, bBoolFlags_a_Plenty):
    self.bBoolFlags_a_Plenty = bBoolFlags_a_Plenty
    [...]  # one line for each bool flag passed in.
Is there a way to assign function arguments passed to __init__() to member variables/attributes of the class with the same names as the function arguments, without having to hand-write each assignment?
Or something more limited in scope; perhaps a one-liner will do. Something like:
self.* = ( arg_bool1, arg_bool2, arg_bool3, arg_bool4)
Actually, I would prefer the latter, just because I don't want the 'kitchen sink' to be assigned to self.
Thanks.
You can either use kwargs:
def __init__(self, **flags):
    for flag, value in flags.iteritems():
        setattr(self, flag, value)
Or, as @bgporter correctly suggests, use the __dict__ directly (assuming that you don't define the __slots__ attribute):
def __init__(self, **flags):
    self.__dict__.update(flags)
Depending on what number "plenty" actually specifies, it may however be easier to keep them in a separate dict anyway:
def __init__(self, **flags):
    self.flags = flags
I'd favour the latter possibility especially if the class has other, "non-flag" attributes as well.
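A quick usage sketch of that last variant (the Options class name is invented here):

class Options(object):
    def __init__(self, **flags):
        self.flags = flags

opts = Options(verbose=True, dry_run=False)
print(opts.flags['verbose'])            # True
print(opts.flags.get('color', 'none'))  # flags stay together in one dict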
You could also:
def __init__(self, **kwargs):
    for key in kwargs:
        setattr(self, key, kwargs.get(key))

Python, how to keep a non-class method as non-class method?

I am making a set of classes that call functions defined in a different module. To know which function to call, the function is stored as a class attribute (or at least that was what I tried). However, when I try to call it, Python assumes the function is a method of the class and passes self as an argument, which logically causes an error because the function receives too many arguments. How can I avoid the function becoming a method?
The code would be like:
# Module A
def func1(a):
    print a

def func2(a):
    print a, a

# Module B
from A import *

class Parent:
    def func(self):
        self.sonFunc("Hiya!")

class Son1(Parent):
    sonFunc = func1

class Son2(Parent):
    sonFunc = func2

s = Son1()
s.func()
# Should print "Hiya!"
s = Son2()
s.func()
# Should print "Hiya! Hiya!"
Thanks
What you are doing is somewhat of a nonstandard/odd thing, but this should work:
class Son_1(object):
    son_func = staticmethod(func_1)

class Son_2(object):
    son_func = staticmethod(func_2)
Normally, staticmethod is used as a decorator, but since decorators are just syntactic sugar, you can use it this way too.
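For example, a small sketch of how that behaves (func_1 here stands in for a module-level function like the question's func1):

def func_1(a):
    print a

class Son_1(object):
    son_func = staticmethod(func_1)

Son_1().son_func("Hiya!")  # prints "Hiya!" -- no self is passed
Son_1.son_func("Hiya!")    # also works when called on the class itself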
An arguably cleaner but also more advanced way would be with a metaclass:
class HasSonMeta(type):
    def __new__(cls, name, bases, attrs):
        attrs['son_func'] = staticmethod(attrs.pop('__son_func__'))
        return type.__new__(cls, name, bases, attrs)

class Son1(object):
    __metaclass__ = HasSonMeta
    __son_func__ = func_1

class Son2(object):
    __metaclass__ = HasSonMeta
    __son_func__ = func_2
Using this form, you could also define the function directly in the class (though then it gets even more confusing to anyone reading this code):
class Son3(object):
    __metaclass__ = HasSonMeta

    def __son_func__():
        pass
While there could be a very narrow/obscure scenario where this would be an optimal implementation, you would probably be better served by putting your functions in a base class and then referring to (or overriding) them as needed in the children.
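For reference, a rough sketch of that more conventional layout (the names here are invented for illustration):

class SonBase(object):
    @staticmethod
    def son_func(a):
        raise NotImplementedError

    def func(self):
        self.son_func("Hiya!")

class Son1(SonBase):
    @staticmethod
    def son_func(a):
        print a

class Son2(SonBase):
    @staticmethod
    def son_func(a):
        print a, a

Son1().func()  # Hiya!
Son2().func()  # Hiya! Hiya!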

Python: How To copy function parameters into object's fields effortlessly?

Many times I have member functions that copy parameters into the object's fields. For example:
class NouveauRiches(object):
    def __init__(self, car, mansion, jet, bling):
        self.car = car
        self.mansion = mansion
        self.jet = jet
        self.bling = bling
Is there a python language construct that would make the above code less tedious?
One could use *args:
def __init__(self, *args):
    self.car, self.mansion, self.jet, self.bling = args
+: less tedious
-: function signature not revealing enough. need to dive into function code to know how to use function
-: does not raise a TypeError on call with wrong # of parameters (but does raise a ValueError)
Any other ideas? (Whatever your suggestion, make sure the code calling the function stays simple.)
You could do this with a helper method, something like this:
import inspect

def setargs(func):
    f = inspect.currentframe(1)
    argspec = inspect.getargspec(func)
    for arg in argspec.args:
        setattr(f.f_locals["self"], arg, f.f_locals[arg])
Usage:
class Foo(object):
    def __init__(self, bar, baz=4711):
        setargs(self.__init__)
        print self.bar  # Now defined
        print self.baz  # Now defined
This is not pretty, and it should probably only be used when prototyping. Please use explicit assignment if you plan to have others read it.
It could probably be improved not to need to take the function as an argument, but that would require even more ugly hacks and trickery :)
I would go for this; it also lets you override attributes that were already defined:
class D:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
But I personally would just go the long way.
Think of those:
- Explicit is better than implicit.
- Flat is better than nested.
(The Zen of Python)
Try something like
d = dict(locals())
del d['self']
self.__dict__.update(d)
Of course, it returns all local variables, not just function arguments.
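Placed in context, a sketch of that locals() trick inside the question's class:

class NouveauRiches(object):
    def __init__(self, car, mansion, jet, bling):
        # snapshot the arguments before any other locals are created
        d = dict(locals())
        del d['self']
        self.__dict__.update(d)

p = NouveauRiches('car', 'mansion', 'jet', 'bling')
print(p.jet)  # jet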
I am not sure this is such a good idea, but it can be done:
import inspect

class NouveauRiches(object):
    def __init__(self, car, mansion, jet, bling):
        frame = inspect.currentframe()
        arguments = inspect.getargvalues(frame)[0]
        values = inspect.getargvalues(frame)[3]
        for name in arguments:
            if name == 'self':  # skip the instance itself
                continue
            self.__dict__[name] = values[name]
It does not read great either, though I suppose you could put this in a utility method that is reused.
You could try something like this:
class C(object):
    def __init__(self, **kwargs):
        for k in kwargs:
            d = {k: kwargs[k]}
            self.__dict__.update(d)
Or using setattr you can do:
class D(object):
    def __init__(self, **kwargs):
        for k in kwargs:
            setattr(self, k, kwargs[k])
Both can then be called like:
myclass = C(test=1, test2=2)
So you have to use **kwargs, rather than *args.
I sometimes do this for classes that act "bunch-like", that is, they have a bunch of customizable attributes:
class SuperClass(object):
    def __init__(self, **kw):
        for name, value in kw.iteritems():
            if not hasattr(self, name):
                raise TypeError('Unexpected argument: %s' % name)
            setattr(self, name, value)

class SubClass(SuperClass):
    instance_var = None  # default value

class SubClass2(SubClass):
    other_instance_var = True

    @property
    def something_dynamic(self):
        return self._internal_var

    @something_dynamic.setter  # new Python 2.6 feature of properties
    def something_dynamic(self, value):
        assert value is None or isinstance(value, str)
        self._internal_var = value
Then you can call SubClass2(instance_var=[], other_instance_var=False) and it'll work without defining __init__ in either of them. You can use any property as well. Though this allows you to overwrite methods, which you probably wouldn't intend (as they return True for hasattr() just like an instance variable).
If you add any property or other descriptor it will work fine. You can use that to do type checking; unlike type checking in __init__, it'll be applied any time that value is updated. Note you can't use any positional arguments for these unless you override __init__, so sometimes what would be a natural positional argument won't work. formencode.declarative covers this and other issues, probably with a thoroughness I would not suggest you attempt (in retrospect I don't think it's worth it).
Note that any recipe that uses self.__dict__ won't respect properties and descriptors, and if you use those together you'll just get weird and unexpected results. I only recommend using setattr() to set attributes, never self.__dict__.
Also this recipe doesn't give a very helpful signature, while some of the ones that do frame and function introspection do. With some work it is possible to dynamically generate a __doc__ that clarifies the arguments... but again I'm not sure the payoff is worth the addition of more moving parts.
I am a fan of the following
import inspect

def args_to_attrs(otherself):
    frame = inspect.currentframe(1)
    argvalues = inspect.getargvalues(frame)
    for arg in argvalues.args:
        if arg == 'self':
            continue
        value = argvalues.locals[arg]
        setattr(otherself, arg, value)

class MyClass:
    def __init__(self, arga="baf", argb="lek", argc=None):
        args_to_attrs(self)
Arguments to __init__ are explicitly named, so it is clear what attributes are being set. Additionally, it is a little bit streamlined over the currently accepted answer.
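A quick usage sketch of that helper:

m = MyClass(arga="spam")
print m.arga, m.argb, m.argc  # spam lek None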

python decorator to modify variable in current scope

Goal: Make a decorator which can modify the scope that it is used in.
If it worked:
class Blah():  # or perhaps class Blah(ParentClassWhichMakesThisPossible)
    def one(self):
        pass

    @decorated
    def two(self):
        pass

>>> Blah.decorated
["two"]
Why? I essentially want to write classes which can maintain specific dictionaries of methods, so that I can retrieve lists of available methods of different types on a per class basis. errr.....
I want to do this:
class RuleClass(ParentClass):
    @rule
    def blah(self):
        pass

    @rule
    def kapow(self):
        pass

    def shazam(self):
        pass

class OtherRuleClass(ParentClass):
    @rule
    def foo(self):
        pass

    def bar(self):
        pass

>>> RuleClass.rules.keys()
["blah", "kapow"]
>>> OtherRuleClass.rules.keys()
["foo"]
You can do what you want with a class decorator (in Python 2.6) or a metaclass. The class decorator version:
def rule(f):
    f.rule = True
    return f

def getRules(cls):
    cls.rules = {}
    for attr, value in cls.__dict__.iteritems():
        if getattr(value, 'rule', False):
            cls.rules[attr] = value
    return cls

@getRules
class RuleClass:
    @rule
    def foo(self):
        pass
The metaclass version would be:
def rule(f):
    f.rule = True
    return f

class RuleType(type):
    def __init__(self, name, bases, attrs):
        self.rules = {}
        for attr, value in attrs.iteritems():
            if getattr(value, 'rule', False):
                self.rules[attr] = value
        super(RuleType, self).__init__(name, bases, attrs)

class RuleBase(object):
    __metaclass__ = RuleType

class RuleClass(RuleBase):
    @rule
    def foo(self):
        pass
Notice that neither of these does what you ask for (modify the calling namespace), because that is fragile, hard and often impossible. Instead they both post-process the class -- through the class decorator or the metaclass's __init__ method -- by inspecting all the attributes and filling the rules attribute. The difference between the two is that the metaclass solution works in Python 2.5 and earlier (down to 2.2), and that the metaclass is inherited. With the decorator, subclasses each have to apply the decorator individually (if they want to set the rules attribute).
Both solutions do not take inheritance into account -- they don't look at the parent class when looking for methods marked as rules, nor do they look at the parent class rules attribute. It's not hard to extend either to do that, if that's what you want.
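For instance, one way to make the metaclass version inheritance-aware might be to merge the parents' rules before scanning the new class body (a sketch, not from the original answer):

class RuleType(type):
    def __init__(self, name, bases, attrs):
        self.rules = {}
        # inherit any rules collected by base classes first
        for base in bases:
            self.rules.update(getattr(base, 'rules', {}))
        for attr, value in attrs.iteritems():
            if getattr(value, 'rule', False):
                self.rules[attr] = value
        super(RuleType, self).__init__(name, bases, attrs)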
Problem is, at the time the decorated decorator is called, there is no object Blah yet: the class object is built after the class body finishes executing. Simplest is to have decorated stash the info "somewhere else", e.g. a function attribute, then a final pass (a class decorator or metaclass) reaps that info into the dictionary you desire.
Class decorators are simpler, but they don't get inherited (so they wouldn't come from a parent class), while metaclasses are inherited -- so if you insist on inheritance, a metaclass it will have to be. Simplest-first, with a class decorator and the "list" variant you have at the start of your Q rather than the "dict" variant you have later:
import inspect

def classdecorator(aclass):
    decorated = []
    for name, value in inspect.getmembers(aclass, inspect.ismethod):
        if hasattr(value, '_decorated'):
            decorated.append(name)
            # method objects are read-only proxies, so remove the
            # marker from the underlying function object
            del value.__func__._decorated
    aclass.decorated = decorated
    return aclass

def decorated(afun):
    afun._decorated = True
    return afun
now,
@classdecorator
class Blah(object):
    def one(self):
        pass

    @decorated
    def two(self):
        pass
gives you the Blah.decorated list you request in the first part of your Q. Building a dict instead, as you request in the second part of your Q, just means changing decorated.append(name) to decorated[name] = value in the code above, and of course initializing decorated in the class decorator to an empty dict rather than an empty list.
The metaclass variant would use the metaclass's __init__ to perform essentially the same post-processing after the class body is built -- a metaclass's __init__ gets a dict corresponding to the class body as its last argument (but you'll have to support inheritance yourself by appropriately dealing with any base class's analogous dict or list). So the metaclass approach is only "somewhat" more complex in practice than a class decorator, but conceptually it's felt to be much more difficult by most people. I'll give all the details for the metaclass if you need them, but I'd recommend sticking with the simpler class decorator if feasible.
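A minimal sketch of that metaclass variant, reusing the decorated marker function above and without the inheritance handling just discussed (the DecoratedMeta name is made up):

class DecoratedMeta(type):
    def __init__(cls, name, bases, attrs):
        super(DecoratedMeta, cls).__init__(name, bases, attrs)
        # attrs is the dict built from the class body
        cls.decorated = [attr for attr, value in attrs.iteritems()
                         if getattr(value, '_decorated', False)]

class Blah(object):
    __metaclass__ = DecoratedMeta

    def one(self):
        pass

    @decorated
    def two(self):
        pass

print Blah.decorated  # ['two']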
