Dynamically Create Methods for Class with list of values - python

I have a list of values that I want to use for a Builder object implementation that is in the works.
For example:
val_list = ["abc", "def", "ghi"]
What I want to do is dynamically create methods in a class that will allow for these to be callable and retrieved in an instance.
I'm vaguely familiar with doing this with setattr(...), but the next step I'm stuck at is being able to do some processing inside the method. In the example below, if I were to do this with my ever-growing list, it would take a WHOLE BUNCH of code that does literally the same thing. It works for now, but I want this list to be dynamic, as well as the class.
For example:
def abc(self, value):
    self.processing1 = value + "workworkwork"
    return self

def def(self, value):  # (sic - "def" is a reserved word, so this one isn't valid Python as written)
    self.processing1 = value + "workworkwork"
    return self

def ghi(self, value):
    self.processing1 = value + "workworkwork"
    return self

I haven't tried this before, but I wonder if it would work using lambdas
self.my_methods = {}
val_list = []

def new_method(self, method_name):
    self.my_methods[method_name] = "lambda: self.general_method(some_value)"

def general_method(self, value):
    print(value)
Honestly, I'm sure that won't work as written, but hopefully you can see the train of thought in case it's of any interest. Since I can't visualize the overall concept, it's a little tough.
But since the method name seems important, I'm not sure what to do. Perhaps this is an XY-type question, where I'm getting stuck on the how instead of the result?
I would think there has to be a way to make this work:
[Class definition]
...
def method(self, secret_method_name, arg1):
    # do something based on method name if necessary
    # do something based on args
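Before the wrapper approach in the answer below, it's worth noting that the setattr route the question mentions can work directly: generate each method from a factory function so every name gets its own closure. A minimal sketch (Builder and make_method are placeholder names, and "def" is a keyword, so that one is only reachable via getattr):

val_list = ["abc", "def", "ghi"]

class Builder:
    pass

def make_method(name):
    # factory function, so each generated method closes over its own name
    def method(self, value):
        self.processing1 = value + "workworkwork"
        return self
    method.__name__ = name
    return method

for name in val_list:
    setattr(Builder, name, make_method(name))

b = Builder()
b.abc("x").ghi("y")      # normal calls work for valid identifiers
getattr(b, "def")("z")   # "def" is a keyword, so it needs getattr
print(b.processing1)     # zworkworkwork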

You can't call a non-existent method on an object without wrapping it first, e.g.:
# A legacy class
class dummy:
    def foo(self):
        return "I'm a dummy!"

obj = dummy()
obj.a("1")  # You can't
You can do it using a wrapper class; here's just an idea of how you can get it done:
# Creates an object you can append methods to
def buildable(cls):
    class fake:
        # This method will receive the class of the object to build
        def __init__(self, cls):
            self.cls = cls

        # This will simulate a constructor of the underlying class.
        # Return the fake class so we can call methods on it
        def __call__(self, *args, **kwargs):
            self.obj = self.cls(*args, **kwargs)
            return self

        # Will be called whenever a property (existing or non-existing)
        # is accessed on an instance of the fake class
        def __getattr__(self, attr):
            # If the underlying object has the called attribute,
            # just return this attribute
            if hasattr(self.obj, attr):
                return getattr(self.obj, attr)

            # Call the respective function in globals with the provided
            # arguments and return the fake class so we can add more methods
            def wrapper(*args, **kwargs):
                globals()[attr](self.obj, *args, **kwargs)
                return self
            return wrapper

    return fake(cls)
So, how does this work?
Decorate your legacy class:
@buildable
class dummy:
    def foo(self):
        return "I'm a dummy!"
Create the build methods that'll modify dummy:
def a(self, some):
    self.a = some + 'a'

def b(self, some):
    self.b = some + 'b'

def c(self, some):
    self.c = some + 'c'
Modify it:
obj = dummy()
obj.a("1").b("2").c("3")
See the brand new attributes (and the old ones too!):
print(obj.a) # 1a
print(obj.b) # 2b
print(obj.c) # 3c
print(obj.foo()) # I'm a dummy!
Note that this has some important drawbacks, such as:
Calling a non-existing attribute on dummy will not raise AttributeError:
print(obj.nini) # <function buildable.<locals>.fake.__getattr__.<locals>.wrapper at 0x7f4794e663a0>
You can't do it with multiple objects:
obj1 = dummy()
obj1.a("1").b("2")
print(obj1.a) # 1a
print(obj1.b) # 2b
obj2 = dummy()
obj2.c("3")
print(obj2.c) # 3c
print(obj1.a) # <function buildable.<locals>.fake.__getattr__.<locals>.wrapper at 0x7f524ae16280>
print(obj1.b) # <function buildable.<locals>.fake.__getattr__.<locals>.wrapper at 0x7f524ae16280>
The type of obj will not be dummy:
print(type(obj)) # <class '__main__.buildable.<locals>.fake'>
print(type(obj.obj)) # <class '__main__.dummy'>
You can't call a build method with the same name as an already existing method:
def foo(self, bar):
    self.foo = 'foo' + bar

obj.foo("bar")
print(obj.foo())
# raises TypeError: foo() takes 1 positional argument but 2 were given
You can't do it with built-in classes:
list = buildable(list)
obj = list()
obj.a("4").b("5").c("6")
# raises AttributeError: 'list' object has no attribute 'a'
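For what it's worth, the multiple-objects drawback above could probably be avoided by making the decorator return the fake class itself, so each construction call creates a fresh wrapper - an untested variation on the same idea:

def buildable(cls):
    class fake:
        def __init__(self, *args, **kwargs):
            # each call to the decorated name builds its own wrapped object
            self.obj = cls(*args, **kwargs)

        def __getattr__(self, attr):
            if hasattr(self.obj, attr):
                return getattr(self.obj, attr)

            def wrapper(*args, **kwargs):
                globals()[attr](self.obj, *args, **kwargs)
                return self
            return wrapper

    return fake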

Related

How can I return self and another variable in a python class method while method chaining?

I understand what I am asking here is probably not the best code design, but the reason for me asking is strictly academic. I am trying to understand how to make this concept work.
Typically, I will return self from a class method so that the following methods can be chained together. My understanding is by returning self, I am simply returning an instance of the class, for the following methods to work on.
But in this case, I am trying to figure out how to return both self and another value from the method. The idea is if I do not want to chain, or I do not call any class attributes, I want to retrieve the data from the method being called.
Consider this example:
class Test(object):
    def __init__(self):
        self.hold = None

    def methoda(self):
        self.hold = 'lol'
        return self, 'lol'

    def newmethod(self):
        self.hold = self.hold * 2
        return self, 2

t = Test()
t.methoda().newmethod()
print(t.hold)
In this case, I will get an AttributeError: 'tuple' object has no attribute 'newmethod', which is to be expected because methoda returns a tuple, and a tuple does not have any method or attribute called newmethod.
My question is not about unpacking multiple returns, but more about how I can continue to chain methods when the preceding methods return multiple values. I also understand that I can control a method's return with an argument to it, but that is not what I am trying to do.
As mentioned previously, I do realize this is probably a bad question, and I am happy to delete the post if the question doesn't make any sense.
Following the suggestion by @JohnColeman, you can return a special tuple with attribute lookup delegated to your object if it is not a normal tuple attribute. That way it acts like a normal tuple except when you are chaining methods.
You can implement this as follows:
class ChainResult(tuple):
    def __new__(cls, *args):
        return super(ChainResult, cls).__new__(cls, args)

    def __getattribute__(self, name):
        try:
            return getattr(super(), name)
        except AttributeError:
            return getattr(super().__getitem__(0), name)
class Test(object):
    def __init__(self):
        self.hold = None

    def methoda(self):
        self.hold = 'lol'
        return ChainResult(self, 'lol')

    def newmethod(self):
        self.hold = self.hold * 2
        return ChainResult(self, 2)
Testing:
>>> t = Test()
>>> t.methoda().newmethod()
>>> print(t.hold)
lollol
The returned result does indeed act as a tuple:
>>> t, res = t.methoda().newmethod()
>>> print(res)
2
>>> print(isinstance(t.methoda().newmethod(), tuple))
True
You could imagine all sorts of semantics with this, such as forwarding the returned values to the next method in the chain using a closure:
class ChainResult(tuple):
    def __new__(cls, *args):
        return super(ChainResult, cls).__new__(cls, args)

    def __getattribute__(self, name):
        try:
            return getattr(super(), name)
        except AttributeError:
            attr = getattr(super().__getitem__(0), name)
            if callable(attr):
                chain_results = super().__getitem__(slice(1, None))
                return lambda *args, **kw: attr(*(chain_results + args), **kw)
            else:
                return attr
For example,
class Test:
    ...
    def methodb(self, *args):
        print(*args)
would produce
>>> t = Test()
>>> t.methoda().methodb('catz')
lol catz
It would be nice if you could make ChainResult invisible. You can almost do it by initializing the tuple base class with the normal results and saving your object in a separate attribute used only for chaining. Then use a class decorator that wraps every method with ChainResult(self, self.method(*args, **kw)). It will work okay for methods that return a tuple, but a single value return will act like a length-1 tuple, so you will need something like obj.method()[0] or result, = obj.method() to work with it. I played a bit with delegating to tuple for a multiple return, or to the value itself for a single return; maybe it could be made to work, but it introduces so many ambiguities that I doubt it could work well.
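As a rough sketch of that decorator idea (ChainResult2 and chained are hypothetical names here, and the single-value caveat just described still applies):

import functools

class ChainResult2(tuple):
    # Variation: the tuple holds only the real results; the object used
    # for chaining lives in a separate attribute.
    def __new__(cls, obj, *results):
        self = super().__new__(cls, results)
        self._chain_obj = obj
        return self

    def __getattr__(self, name):
        # only called when tuple lookup fails: fall back to the wrapped
        # object so chaining still works
        return getattr(self._chain_obj, name)

def chained(cls):
    # wrap every public method so it returns ChainResult2(self, result)
    for name, fn in list(vars(cls).items()):
        if callable(fn) and not name.startswith('_'):
            def make_wrapper(fn):
                @functools.wraps(fn)
                def wrapper(self, *args, **kw):
                    return ChainResult2(self, fn(self, *args, **kw))
                return wrapper
            setattr(cls, name, make_wrapper(fn))
    return cls

@chained
class Test:
    def methoda(self):
        self.hold = 'lol'
        return 'lol'

t = Test()
result, = t.methoda().methoda()  # a single return acts like a length-1 tuple
print(result)  # lol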

Class with a registry of methods based on decorators

I have a class that has several methods which each have certain properties (in the sense of quality). I'd like these methods to be available in a list inside the class so they can be executed at once. Note that the properties can be interchangeable so this can't be solved by using further classes that would inherit from the original one. In an ideal world it would look something like this:
class MyClass:
    def __init__(self):
        self.red_rules = set()
        self.blue_rules = set()
        self.hard_rules = set()
        self.soft_rules = set()

    @red
    def rule_one(self):
        return 1

    @blue
    @hard
    def rule_two(self):
        return 2

    @hard
    def rule_three(self):
        return 3

    @blue
    @soft
    def rule_four(self):
        return 4
When the class is instantiated, it should be easy to simply execute all red and soft rules by combining the sets and executing the methods. The decorators for this are tricky though since a regular registering decorator can fill out a global object but not the class attribute:
def red(fn):
    red_rules.add(fn)
    return fn
How do I go about implementing something like this?
You can subclass set and give it a decorator method:
class MySet(set):
    def register(self, method):
        self.add(method)
        return method

class MyClass:
    red_rules = MySet()
    blue_rules = MySet()
    hard_rules = MySet()
    soft_rules = MySet()

    @red_rules.register
    def rule_one(self):
        return 1

    @blue_rules.register
    @hard_rules.register
    def rule_two(self):
        return 2

    @hard_rules.register
    def rule_three(self):
        return 3

    @blue_rules.register
    @soft_rules.register
    def rule_four(self):
        return 4
Or if you find using the .register method ugly, you can always define the __call__ method to use the set itself as a decorator:
class MySet(set):
    def __call__(self, method):
        """Use the set as a decorator to add elements to it."""
        self.add(method)
        return method

class MyClass:
    red_rules = MySet()
    ...

    @red_rules
    def rule_one(self):
        return 1
    ...
This looks better, but it's less explicit, so for other collaborators (or future yourself) it might be harder to grasp what's happening here.
To call the stored functions, you can just loop over the set you want and pass in the instance as the self argument:
my_instance = MyClass()
for rule in MyClass.red_rules:
    rule(my_instance)
You can also create a utility function to do this for you; for example, you can create a MySet.invoke() method:
class MySet(set):
    ...
    def invoke(self, obj):
        for rule in self:
            rule(obj)
And now just call:
MyClass.red_rules.invoke(my_instance)
Or you could have MyClass handle this instead:
class MyClass:
    ...
    def invoke_rules(self, rules):
        for rule in rules:
            rule(self)
And then call this on an instance of MyClass:
my_instance.invoke_rules(MyClass.red_rules)
Decorators are applied when the function is defined; in a class that's when the class is defined. At this point in time there are no instances yet!
You have three options:
Register your decorators at the class level. This is not as clean as it may sound; you either have to explicitly pass additional objects to your decorators (red_rules = set(), then @red(red_rules) so the decorator factory can then add the function to the right location), or you have to use some kind of class initialiser to pick up specially marked functions; you could do this with a base class that defines the __init_subclass__ class method, at which point you can iterate over the namespace and find those markers (attributes set by the decorators).
Have your __init__ method (or a __new__ method) loop over all the methods on the class and look for special attributes the decorators have put there.
The decorator would only need to add a _rule_name or similar attribute to decorated methods, and {getattr(self, name) for name in dir(self) if getattr(getattr(self, name), '_rule_name', None) == rule_name} would pick up any method that has the right rule name defined in _rule_name (a minimal sketch of this option is shown below).
Make your decorators produce new descriptor objects; descriptors have their __set_name__() method called when the class object is created. This gives you access to the class, and thus you can add attributes to that class.
Note that __init_subclass__ and __set_name__ require Python 3.6 or newer; you'd have to resort to a metaclass to achieve similar functionality in earlier versions.
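For illustration, option 2 might look like this minimal sketch (a single-tag version; a method carrying several tags would need, say, a set of names instead of one _rule_name string):

def red(fn):
    fn._rule_name = 'red'  # mark the function; collected at instance creation
    return fn

class MyClass:
    def __init__(self):
        # scan the class for marked methods and collect their bound forms
        self.red_rules = {
            getattr(self, name) for name in dir(self)
            if getattr(getattr(self, name), '_rule_name', None) == 'red'
        }

    @red
    def rule_one(self):
        return 1

obj = MyClass()
print([rule() for rule in obj.red_rules])  # [1]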
Also note that when you register functions at the class level, you then need to explicitly bind them with function.__get__(self, type(self)) to turn them into methods, or you can explicitly pass in self when calling them. You could automate this by making a dedicated class to hold the rule sets, and make this class a descriptor too:
import types
from collections.abc import MutableSet

class RulesSet(MutableSet):
    def __init__(self, values=(), rules=None, instance=None, owner=None):
        self._rules = rules or set()  # can be a shared set!
        self._instance = instance
        self._owner = owner
        self |= values

    def __repr__(self):
        bound = ''
        if self._owner is not None:
            bound = f', instance={self._instance!r}, owner={self._owner!r}'
        rules = ', '.join([repr(v) for v in iter(self)])
        return f'{type(self).__name__}({{{rules}}}{bound})'

    def __contains__(self, ob):
        try:
            if ob.__self__ is self._instance or ob.__self__ is self._owner:
                # test for the unbound function instead when both are bound;
                # this requires staticmethod and classmethod to be unwrapped!
                ob = ob.__func__
                return any(ob is getattr(f, '__func__', f) for f in self._rules)
        except AttributeError:
            # not a method-like object
            pass
        return ob in self._rules

    def __iter__(self):
        if self._owner is not None:
            return (f.__get__(self._instance, self._owner) for f in self._rules)
        return iter(self._rules)

    def __len__(self):
        return len(self._rules)

    def add(self, ob):
        while isinstance(ob, Rule):
            # remove any rule wrappers
            ob = ob._function
        assert isinstance(ob, (types.FunctionType, classmethod, staticmethod))
        self._rules.add(ob)

    def discard(self, ob):
        self._rules.discard(ob)

    def __get__(self, instance, owner):
        # share the set with a new, bound instance.
        return type(self)(rules=self._rules, instance=instance, owner=owner)

class Rule:
    @classmethod
    def make_decorator(cls, rulename):
        ruleset_name = f'{rulename}_rules'
        def decorator(f):
            return cls(f, ruleset_name)
        decorator.__name__ = rulename
        return decorator

    def __init__(self, function, ruleset_name):
        self._function = function
        self._ruleset_name = ruleset_name

    def __get__(self, *args):
        # this is mostly here just to make Python call __set_name__
        return self._function.__get__(*args)

    def __set_name__(self, owner, name):
        # register, then replace the name with the original function
        # to avoid being a performance bottleneck
        ruleset = getattr(owner, self._ruleset_name, None)
        if ruleset is None:
            ruleset = RulesSet()
            setattr(owner, self._ruleset_name, ruleset)
        ruleset.add(self)
        # transfer control to any further rule objects
        if isinstance(self._function, Rule):
            self._function.__set_name__(owner, name)
        else:
            setattr(owner, name, self._function)

red = Rule.make_decorator('red')
blue = Rule.make_decorator('blue')
hard = Rule.make_decorator('hard')
soft = Rule.make_decorator('soft')
Then just use:
class MyClass:
    @red
    def rule_one(self):
        return 1

    @blue
    @hard
    def rule_two(self):
        return 2

    @hard
    def rule_three(self):
        return 3

    @blue
    @soft
    def rule_four(self):
        return 4
and you can access self.red_rules, etc. as a set with bound methods:
>>> inst = MyClass()
>>> inst.red_rules
RulesSet({<bound method MyClass.rule_one of <__main__.MyClass object at 0x106fe7550>>}, instance=<__main__.MyClass object at 0x106fe7550>, owner=<class '__main__.MyClass'>)
>>> inst.blue_rules
RulesSet({<bound method MyClass.rule_two of <__main__.MyClass object at 0x106fe7550>>, <bound method MyClass.rule_four of <__main__.MyClass object at 0x106fe7550>>}, instance=<__main__.MyClass object at 0x106fe7550>, owner=<class '__main__.MyClass'>)
>>> inst.hard_rules
RulesSet({<bound method MyClass.rule_three of <__main__.MyClass object at 0x106fe7550>>, <bound method MyClass.rule_two of <__main__.MyClass object at 0x106fe7550>>}, instance=<__main__.MyClass object at 0x106fe7550>, owner=<class '__main__.MyClass'>)
>>> inst.soft_rules
RulesSet({<bound method MyClass.rule_four of <__main__.MyClass object at 0x106fe7550>>}, instance=<__main__.MyClass object at 0x106fe7550>, owner=<class '__main__.MyClass'>)
>>> for rule in inst.hard_rules:
...     rule()
...
2
3
The same rules are accessible on the class; normal functions remain unbound:
>>> MyClass.blue_rules
RulesSet({<function MyClass.rule_two at 0x107077a60>, <function MyClass.rule_four at 0x107077b70>}, instance=None, owner=<class '__main__.MyClass'>)
>>> next(iter(MyClass.blue_rules))
<function MyClass.rule_two at 0x107077a60>
Containment testing works as expected:
>>> inst.rule_two in inst.hard_rules
True
>>> inst.rule_two in inst.soft_rules
False
>>> MyClass.rule_two in MyClass.hard_rules
True
>>> MyClass.rule_two in inst.hard_rules
True
You can use these rules to register classmethod and staticmethod objects too:
>>> class Foo:
...     @hard
...     @classmethod
...     def rule_class(cls):
...         return f'rule_class of {cls!r}'
...
>>> Foo.hard_rules
RulesSet({<bound method Foo.rule_class of <class '__main__.Foo'>>}, instance=None, owner=<class '__main__.Foo'>)
>>> next(iter(Foo.hard_rules))()
"rule_class of <class '__main__.Foo'>"
>>> Foo.rule_class in Foo.hard_rules
True

Pythonic way to import and use data as object attributes

I am working with data that is used as variables after they are imported. I would like to then use the variables in an object as attributes.
So far I have accomplished this by writing an ImportData class, which is then composed into another class, Obj, that uses it for other calculations. Another solution I have used is to inherit from the ImportData class. I have an example below:
defining the data class
class ImportData:
    def __init__(self, path):
        # open file and assign to some variables
        # such as:
        self.slope = 1
        self.intercept = -1
solution 1: use composition
class Obj:
    def __init__(self, data_object):
        self.data = data_object

    def func(self, x):
        return self.data.slope*x + self.data.intercept

data_object = ImportData('<path>')
obj = Obj(data_object)

# get the slope and intercept
print('slope =', obj.data.slope, ' intercept =', obj.data.intercept)

# use the function
print('f(2) =', obj.func(2))
solution 2: use inheritance
class Obj(ImportData):
    def __init__(self, path):
        super().__init__(path)

    def func(self, x):
        return self.slope*x + self.intercept

obj = Obj('<path>')

# get the slope and intercept
print('slope =', obj.slope, ' intercept =', obj.intercept)

# use the function
print('f(2) =', obj.func(2))
I don't like the composition solution because I have to type an extra "data" every time I need to access an attribute, but I'm not sure inheritance is the right way to go either.
Am I out in left field, or is there a better solution?
Your sense that the chained attribute access in the composed solution is a code smell is correct: data is an implementation detail of Obj and should be hidden from Obj's clients, so that if the implementation of the ImportData class changes, you only have to change Obj and not every class that calls obj.data.
We can hide Obj.data by giving Obj a __getattr__ method to control how its attributes are accessed.
>>> class ImportData:
...     def __init__(self, path):
...         self.slope = 1
...         self.intercept = -1
...
>>> data = ImportData('<path>')
>>> class Obj:
...     def __init__(self, data_object):
...         self.data = data_object
...     def func(self, x):
...         return self.slope*x + self.intercept
...     def __getattr__(self, name):
...         try:
...             return getattr(self.data, name)
...         except AttributeError:
...             raise AttributeError('{} object has no attribute {}.'.format(self.__class__.__name__, name))
>>> o = Obj(data)
>>> o.func(2)
1
>>> o.slope
1
>>> o.intercept
-1
>>>
Normally, if Python fails to find an attribute of an object - for example obj.slope - it will raise an AttributeError. However, if the object has a __getattr__ method, Python will call __getattr__ instead of raising an exception.
In the above code, Obj.__getattr__ looks for the attribute on data if it doesn't exist on Obj, so Obj's clients can use obj.slope instead of obj.data.slope. The re-raising of AttributeError is done so that the error message refers to Obj rather than ImportData.

Is it possible to create a non-existent method when it is called in Python?

If I have this class:
class MyClass(object):
    pass

And then I do this:

instance = MyClass()
instance.new_method()

I get an AttributeError exception, but I want to create this method dynamically and return a specific value. Is it possible?
First, Python checks whether an attribute with that name exists; if it does, it will be used. There's no clean way to detect in advance whether the attribute will be called or not.
Here's the tricky way to achieve what you want:
class Dispatcher(object):
    def __init__(self, caller, name):
        self.name = name
        self.caller = caller

    def __call__(self, *a, **ka):
        print('Call on Dispatcher registered!',
              'Will create method on',
              self.caller.__class__.__name__,
              'now.')
        setattr(self.caller, self.name, self.mock)
        return getattr(self.caller, self.name)(*a, **ka)

    @classmethod
    def mock(cls, *a, **ka):
        return 'Some default value for newly created methods.'

class MyClass(object):
    def __getattr__(self, attr):
        return Dispatcher(self, attr)

instance = MyClass()
print(instance.new_method, '\n')
print(instance.new_method(), '\n')
print(instance.new_method(), '\n')
print(instance.other_method)
Output:
<__main__.Dispatcher object at 0x0000000002C07DD8>
Call on Dispatcher registered! Will create method on MyClass now.
Some default value for newly created methods.
Some default value for newly created methods.
<__main__.Dispatcher object at 0x0000000002C07DD8>
Although this solution is comprehensive, it will return a new instance of Dispatcher every time you try to access a non-existent attribute.
If the Dispatcher instance is called (e.g. Dispatcher(self, attr)()), it will set mock as a new method named attr on the object that was passed as the first argument to the constructor.
Yes, you can do it as:
class MyClass(object):
    pass

def some_method(cls):  # classmethod passes the class as the first argument
    pass

name = 'new_method'
setattr(MyClass, name, classmethod(some_method))
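A quick sanity check of the idea above (some_method has no body, so the call just returns None):

instance = MyClass()
print(instance.new_method)    # a method bound to the class, thanks to classmethod()
print(instance.new_method())  # None, since some_method does nothing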
It is possible.
>>> class MyClass(object):
...     pass
...
>>> instance = MyClass()
>>> def new_method(cls, x):
...     print x
...
>>> MyClass.new_method = new_method
>>> instance.new_method(45)
45
Note that new_method has cls as its first parameter; it receives the instance, which is passed implicitly when the function is called as an instance method.

Python: how to implement __getattr__()?

My class has a dict, for example:
class MyClass(object):
    def __init__(self):
        self.data = {'a': 'v1', 'b': 'v2'}
Then I want to use the dict's key with MyClass instance to access the dict, for example:
ob = MyClass()
v = ob.a  # Here I expect ob.a to return 'v1'
I know this should be implemented by __getattr__, but I'm new to Python and don't know exactly how to implement it.
class MyClass(object):
    def __init__(self):
        self.data = {'a': 'v1', 'b': 'v2'}

    def __getattr__(self, attr):
        return self.data[attr]
>>> ob = MyClass()
>>> v = ob.a
>>> v
'v1'
Be careful when implementing __setattr__ though, you will need to make a few modifications:
class MyClass(object):
    def __init__(self):
        # Prevents infinite recursion when assigning self.data = {...}:
        # now that we have __setattr__, that assignment would call it,
        # and its self.data[k] lookup would trigger __getattr__ (data is
        # not in the instance dictionary yet), which would try self.data
        # again, and so on... so we set data manually via the parent.
        super(MyClass, self).__setattr__('data', {'a': 'v1', 'b': 'v2'})

    def __setattr__(self, k, v):
        self.data[k] = v

    def __getattr__(self, k):
        # we don't need a special call to super here because __getattr__ is
        # only called when an attribute is NOT found in the instance's dictionary
        try:
            return self.data[k]
        except KeyError:
            raise AttributeError
>>> ob = MyClass()
>>> ob.c = 1
>>> ob.c
1
If you don't need to set attributes, just use a namedtuple, e.g.:
>>> from collections import namedtuple
>>> MyClass = namedtuple("MyClass", ["a", "b"])
>>> ob = MyClass(a=1, b=2)
>>> ob.a
1
If you want the default arguments you can just write a wrapper class around it:
class MyClass(namedtuple("MyClass", ["a", "b"])):
    def __new__(cls, a="v1", b="v2"):
        return super(MyClass, cls).__new__(cls, a, b)
or maybe it looks nicer as a function:
def MyClass(a="v1", b="v2", cls=namedtuple("MyClass", ["a", "b"])):
    return cls(a, b)
>>> ob = MyClass()
>>> ob.a
'v1'
Late to the party, but found two really good resources that explain this better (IMHO).
As explained here, you should use self.__dict__ to access fields from within __getattr__, in order to avoid infinite recursion. The example provided is:
def __getattr__(self, attrName):
    if not self.__dict__.has_key(attrName):
        value = self.fetchAttr(attrName)  # computes the value
        self.__dict__[attrName] = value
    return self.__dict__[attrName]
Note: in the second line (above), a more Pythonic way would be (has_key apparently was even removed in Python 3):
if attrName not in self.__dict__:
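For reference, a Python 3 take on the same memoizing pattern might look like this (fetch_attr is a stand-in for whatever computes the value):

class Memoized:
    def fetch_attr(self, name):
        return name.upper()  # stand-in for an expensive computation

    def __getattr__(self, name):
        # __getattr__ only runs when normal lookup fails, so caching the
        # result in __dict__ avoids recursion and recomputation alike
        value = self.fetch_attr(name)
        self.__dict__[name] = value
        return value

m = Memoized()
print(m.spam)  # SPAM (computed once, then a plain attribute)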
The other resource explains that the __getattr__ is invoked only when the attribute is not found in the object, and that hasattr always returns True if there is an implementation for __getattr__. It provides the following example, to demonstrate:
class Test(object):
    def __init__(self):
        self.a = 'a'
        self.b = 'b'

    def __getattr__(self, name):
        return 123456

t = Test()
print 'object variables: %r' % t.__dict__.keys()
#=> object variables: ['a', 'b']
print t.a
#=> a
print t.b
#=> b
print t.c
#=> 123456
print getattr(t, 'd')
#=> 123456
print hasattr(t, 'x')
#=> True
class A(object):
    def __init__(self):
        self.data = {'a': 'v1', 'b': 'v2'}

    def __getattr__(self, attr):
        try:
            return self.data[attr]
        except Exception:
            return "not found"

>>> a = A()
>>> print a.a
v1
>>> print a.c
not found
I like to use this approach; I took it from somewhere, but I don't remember where.
class A(dict):
    def __init__(self, *a, **k):
        super(A, self).__init__(*a, **k)
        self.__dict__ = self
This makes the __dict__ of the object the same as itself, so that attribute and item access map to the same dict:
a = A()
a['a'] = 2
a.b = 5
print a.a, a['b'] # prints 2 5
I figured out an extension to @glglgl's answer that handles nested dictionaries and dictionaries inside lists that are in the original dictionary:
class d(dict):
    def __init__(self, *a, **k):
        super(d, self).__init__(*a, **k)
        self.__dict__ = self
        for k in self.__dict__:
            if isinstance(self.__dict__[k], dict):
                self.__dict__[k] = d(self.__dict__[k])
            elif isinstance(self.__dict__[k], list):
                for i in range(len(self.__dict__[k])):
                    if isinstance(self.__dict__[k][i], dict):
                        self.__dict__[k][i] = d(self.__dict__[k][i])
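For example (assuming key names that don't collide with dict's own method names, such as items or keys):

data = d({'x': 1, 'nested': {'y': 2}, 'records': [{'z': 3}]})
print(data.nested.y)      # 2
print(data.records[0].z)  # 3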
A simple approach to solving your __getattr__()/__setattr__() infinite recursion woes
Implementing one or the other of these magic methods can usually be easy. But when overriding them both, it becomes trickier. This post's examples apply mostly to this more difficult case.
When implementing both these magic methods, it's not uncommon to get stuck figuring out a strategy to get around recursion in the __init__() constructor of classes. This is because variables need to be initialized for the object, but every attempt to read or write those variables goes through __get/set/attr__(), which could hit more unset variables, incurring more futile recursive calls.
Up front, a key point to remember is that __getattr__() only gets called by the runtime if the attribute can't be found on the object already. The trouble is to get attributes defined without tripping these functions recursively.
Another point is __setattr__() will get called no matter what. That's an important distinction between the two functions, which is why implementing both attribute methods can be tricky.
This is one basic pattern that solves the problem.
class AnObjectProxy:
    _initialized = False  # *Class* variable 'constant'.

    def __init__(self):
        self._any_var = "Able to access instance vars like usual."
        self._initialized = True  # *instance* variable.

    def __getattr__(self, item):
        if self._initialized:
            pass  # Provide the caller attributes in whatever ways interest you.
        else:
            try:
                return self.__dict__[item]  # Transparent access to instance vars.
            except KeyError:
                raise AttributeError(item)

    def __setattr__(self, key, value):
        if self._initialized:
            pass  # Provide caller ways to set attributes in whatever ways.
        else:
            self.__dict__[key] = value  # Transparent access.
While the class is initializing and creating its instance vars, the code in both attribute functions permits access to the object's attributes via the __dict__ dictionary transparently - your code in __init__() can create and access instance attributes normally. When the attribute methods are called, they only access self.__dict__, which is already defined, thus avoiding recursive calls.
In the case of self._any_var, once it's assigned, __get/set/attr__() won't be called to find it again.
Stripped of extra code, these are the two pieces that are most important.
...     def __getattr__(self, item):
...         try:
...             return self.__dict__[item]
...         except KeyError:
...             raise AttributeError(item)
...
...     def __setattr__(self, key, value):
...         self.__dict__[key] = value
Solutions can build around these lines accessing the __dict__ dictionary. To implement an object proxy, two modes were implemented: initialization and post-initialization in the code before this - a more detailed example of the same is below.
There are other examples in answers that may have differing levels of effectiveness in dealing with all aspects of recursion. One effective approach is accessing __dict__ directly in __init__() and other places that need early access to instance vars. This works but can be a little verbose. For instance,
self.__dict__['_any_var'] = "Setting..."
would work in __init__().
My posts tend to get a little long-winded... everything after this point is just extra. You should already have the idea from the examples above.
A drawback to some other approaches can be seen with debuggers in IDEs. They can be overzealous in their use of introspection and produce warning and error-recovery messages as you're stepping through code. You can see this happening even with solutions that work fine standalone. When I say all aspects of recursion, this is what I'm talking about.
The examples in this post only use a single class variable to support 2-modes of operation, which is very maintainable.
But please NOTE: the proxy class required two modes of operation to set up and proxy for an internal object. You don't have to have two modes of operation.
You could simply incorporate the code to access the __dict__ as in these examples in whatever ways suit you.
If your requirements don't include two modes of operation, you may not need to declare any class variables at all. Just take the basic pattern and customize it.
Here's a closer to real-world (but by no means complete) example of a 2-mode proxy that follows the pattern:
>>> class AnObjectProxy:
...     _initialized = False  # This class var is important. It is always False.
...                           # The instances will override this with their own,
...                           # set to True.
...     def __init__(self, obj):
...         # Because __getattr__ and __setattr__ access __dict__, we can
...         # initialize instance vars without infinite recursion, and
...         # refer to them normally.
...         self._obj = obj
...         self._foo = 123
...         self._bar = 567
...
...         # This instance var overrides the class var.
...         self._initialized = True
...
...     def __setattr__(self, key, value):
...         if self._initialized:
...             setattr(self._obj, key, value)  # Proxying call to wrapped obj.
...         else:
...             # this block facilitates setting vars in __init__().
...             self.__dict__[key] = value
...
...     def __getattr__(self, item):
...         if self._initialized:
...             attr = getattr(self._obj, item)  # Proxying.
...             return attr
...         else:
...             try:
...                 # this block facilitates getting vars in __init__().
...                 return self.__dict__[item]
...             except KeyError:
...                 raise AttributeError(item)
...
...     def __call__(self, *args, **kwargs):
...         return self._obj(*args, **kwargs)
...
...     def __dir__(self):
...         return dir(self._obj) + list(self.__dict__.keys())
The 2-mode proxy only needs a bit of "bootstrapping" to access vars in its own scope at initialization before any of its vars are set. After initialization, the proxy has no reason to create more vars for itself, so it will fare fine by deferring all attribute calls to its wrapped object.
Any attribute the proxy itself owns will still be accessible to itself and other callers since the magic attribute functions only get called if an attribute can't be found immediately on the object.
Hopefully this approach can be of benefit to anyone who appreciates a direct approach to resolving their __get/set/attr__() __init__() frustrations.
You can initialize your class dictionary through the constructor:
def __init__(self, **data):
And call it as follows:
f = MyClass(**{'a': 'v1', 'b': 'v2'})
All of the instance attributes that are accessed (read) in __setattr__ need to be declared using the parent (super) method, only once:
super().__setattr__('NewVarName1', InitialValue)
Or
super().__setattr__('data', dict())
Thereafter, they can be accessed or assigned to in the usual manner:
self.data = data
And instance attributes not accessed in __setattr__ can be declared in the usual manner:
self.x = 1
The overridden __setattr__ method must now call the parent method inside itself, for new variables to be declared:
super().__setattr__(key,value)
A complete class would look as follows:
class MyClass(object):
    def __init__(self, **data):
        # The variable self.data is used by method __setattr__
        # inside this class, so we will need to declare it
        # using the parent __setattr__ method:
        super().__setattr__('data', dict())
        self.data = data

        # These declarations will jump to
        # super().__setattr__(key, value)
        # inside method __setattr__ of this class:
        self.x = 1
        self.y = 2

    def __getattr__(self, name):
        # This callback will never be called for instance variables
        # that have been declared before being accessed.
        if name in self.data:
            # Return a valid dictionary item:
            return self.data[name]
        else:
            # So when an instance variable is being accessed, and
            # it has not been declared before, nor is it contained
            # in the dictionary 'data', an AttributeError needs to
            # be raised.
            raise AttributeError

    def __setattr__(self, key, value):
        if key in self.data:
            # Assign valid dictionary items here:
            self.data[key] = value
        else:
            # Assign anything else as an instance attribute:
            super().__setattr__(key, value)
Test:
f = MyClass(**{'a': 'v1', 'b': 'v2'})
print("f.a = ", f.a)
print("f.b = ", f.b)
print("f.data = ", f.data)
f.a = 'c'
f.d = 'e'
print("f.a = ", f.a)
print("f.b = ", f.b)
print("f.data = ", f.data)
print("f.d = ", f.d)
print("f.x = ", f.x)
print("f.y = ", f.y)
# Should raise AttributeError
print("f.g = ", f.g)
Output:
f.a = v1
f.b = v2
f.data = {'a': 'v1', 'b': 'v2'}
f.a = c
f.b = v2
f.data = {'a': 'c', 'b': 'v2'}
f.d = e
f.x = 1
f.y = 2
Traceback (most recent call last):
File "MyClass.py", line 49, in <module>
print("f.g = ", f.g)
File "MyClass.py", line 25, in __getattr__
raise AttributeError
AttributeError
I think this implementation is cooler:
class MyClass(object):
    def __init__(self):
        self.data = {'a': 'v1', 'b': 'v2'}

    def __getattr__(self, key):
        return self.data.get(key, None)
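One caveat worth checking with this version: missing keys come back as None instead of raising, so hasattr() will always report True:

ob = MyClass()
print(ob.a)        # v1
print(ob.missing)  # None - no AttributeError is raised, so hasattr(ob, 'missing') is True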
