I have this:
class MySession:
    def __init__(self, session):
        session['my-data'] = {}  # my data is here
        self._session = session

    def __getattr__(self, name):
        return self._session['my-data'][name]

    def __setattr__(self, name, value):
        my_data = self._session['my-data']
        my_data[name] = value
        self._session['my-data'] = my_data
obj = MySession({})
obj.x = 3
Basically I want to encapsulate access to the session (sub-)dictionary with object attribute access. But I cannot do it, since this causes infinite recursion, I guess because doing this:
self._session = session
calls __setattr__, which in turn calls __getattr__, which in turn calls __getattr__ again, and so on.
How can I pre-initialize some (normal) attributes in a class that implements __getattr__ / __setattr__?
The __getattr__ method is only called for attributes that don't exist in the normal attribute dictionary. __setattr__, however, is called unconditionally (its mirror is really __getattribute__ rather than __getattr__). If you can get your _session attribute set up properly in __init__, you won't need to worry about anything in the other methods.
To add an attribute without running into any recursion, use super(MySession, self).__setattr__ to call the version of the method you inherited from object (you should always inherit from object in Python 2, to make your class a new-style class; in Python 3 it's the default). You could also call object.__setattr__ directly, but using super is better if you ever end up using multiple inheritance.
class MySession(object):
    def __init__(self, session):
        session['my-data'] = {}
        super(MySession, self).__setattr__("_session", session)  # avoid our __setattr__

    def __getattr__(self, name):
        return self._session['my-data'][name]  # this doesn't recurse if _session exists

    def __setattr__(self, name, value):
        my_data = self._session['my-data']
        my_data[name] = value
        self._session['my-data'] = my_data
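For completeness, here is how the fixed class behaves (a minimal check based on the code above; the plain dict stands in for a real session):

obj = MySession({})
obj.x = 3
print(obj.x)           # 3 -- fetched from the 'my-data' sub-dictionary by __getattr__
print(obj._session)    # {'my-data': {'x': 3}} -- _session itself lives in the instance dict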
You could initialize first and change the setters/getters later on:
def __init__(self, session):
    session['my-data'] = {}
    self._session = session
    self.__setattr__ = self._setattr
    self.__getattr__ = self._getattr
assuming self._setattr and self._getattr are implemented, of course :)
Related
I'm working with a Redis database, and I'd like to integrate my prototype code into my baseline as seamlessly as possible. To that end, I'm trying to bury most of the inner workings of the interface between the Python client and the Redis server in a few base classes that I will subclass throughout my production code.
I'm wondering whether the assignment operator in Python (=) is a callable, and whether it is possible to modify the callable's pre and post behavior, particularly the post behavior, so that object.save() is invoked automatically and the Redis cache is updated behind the scenes without having to call it explicitly.
For example,
# using the redis-om module
from redis_om import JsonModel
kwargs = {'attr1': 'someval1', 'attr2': 'someval2'}
jsonModel = JsonModel(**kwargs)
# as soon as assignment completes, redis database
# has the new value updated without needing to
# call jsonModel.save()
jsonModel.attr1 = 'newvalue'
You can do this with a proxy class, through the __getattr__ and __setattr__ methods:
class ProxySaver:
    def __init__(self, model):
        self.__dict__['_model'] = model   # write into __dict__ directly to bypass __setattr__

    def __getattr__(self, attr):
        return getattr(self._model, attr)

    def __setattr__(self, attr, value):
        setattr(self._model, attr, value)
        self._model.save()
p = ProxySaver(jsonModel)
print(p.attr1)
p.attr1 = 'test'
But if the attributes have complex types (list, dict, objects, ...), assignments to nested objects will not be detected and the save call will be skipped (p.attr1.append('test'), p.attr1[0] = 'test2').
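A minimal workaround sketch for that caveat, assuming the wrapped model exposes a save() method (as redis_om's JsonModel does); tags is a hypothetical list-valued attribute used only for illustration:

p = ProxySaver(jsonModel)
p.attr1 = 'test'        # plain assignment: __setattr__ fires and saves automatically
p.tags = ['a', 'b']     # hypothetical list field: still goes through __setattr__
p.tags.append('c')      # in-place mutation: __setattr__ never fires
p.save()                # so persist explicitly after nested changes (delegated via __getattr__)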
Often in Python it is helpful to make use of duck typing. For instance, imagine I have an object spam whose prompt attribute controls the prompt text in my application. Normally, I would say something like:
spam.prompt = "fixed"
for a fixed prompt. However, a dynamic prompt can also be achieved: while I can't change the spam class to use a function as the prompt, the underlying spam object calls str() on the attribute, so thanks to duck typing I can create a dynamic prompt like so:
class MyPrompt:
    def __str__(self):
        return eggs.get_user_name() + ">"
spam.prompt = MyPrompt()
This principle can be extended to make any attribute dynamic, for instance:
class MyEnabled:
    def __bool__(self):
        return eggs.is_logged_in()
spam.enabled = MyEnabled()
Sometimes though, it would be more succinct to have this inline, i.e.
spam.prompt = lambda: eggs.get_user_name() + ">"
spam.enabled = eggs.is_logged_in
These of course don't work, because neither the __str__ of the lambda nor the __bool__ of the function returns the actual value of calling it.
I feel like a solution for this should be simple, am I missing something, or do I need to wrap my function in a class every time?
What you want are computed attributes. Python's support for computed attributes is the descriptor protocol, which has a generic implementation as the builtin property type.
Now the trick is that, as documented (cf. the link above), descriptors only work when they are class attributes. Your code snippet is incomplete as it doesn't contain the definition of the spam object, but I assume it's a class instance, so you cannot just do spam.something = property(...), as the descriptor protocol wouldn't then be invoked on the property().
The solution here is the good old "strategy" design pattern: use properties (or custom descriptors, but if you only have a couple of such attributes the builtin property will work just fine) that delegate to a "strategy" function:
def default_prompt_strategy(obj):
    return "fixed"

def default_enabled_strategy(obj):
    return False

class Spam(object):
    def __init__(self, prompt_strategy=default_prompt_strategy, enabled_strategy=default_enabled_strategy):
        self.prompt = prompt_strategy
        self.enabled = enabled_strategy

    @property
    def prompt(self):
        return self._prompt_strategy(self)

    @prompt.setter
    def prompt(self, value):
        if not callable(value):
            raise TypeError("PromptStrategy must be a callable")
        self._prompt_strategy = value

    @property
    def enabled(self):
        return self._enabled_strategy(self)

    @enabled.setter
    def enabled(self, value):
        if not callable(value):
            raise TypeError("EnabledStrategy must be a callable")
        self._enabled_strategy = value

class Eggs(object):
    def is_logged_in(self):
        return True

    def get_user_name(self):
        return "DeadParrot"

eggs = Eggs()
spam = Spam(enabled_strategy=lambda obj: eggs.is_logged_in())
spam.prompt = lambda obj: "{}>".format(eggs.get_user_name())
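For reference, here is what the strategy-backed attributes look like at runtime, following directly from the classes above:

print(spam.prompt)     # 'DeadParrot>' -- recomputed by the prompt strategy on each access
print(spam.enabled)    # True -- delegated to eggs.is_logged_in()
# spam.enabled = 3     # would raise TypeError, since the setter only accepts callables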
Let's say I have two classes:
class Container():
    def __init__(self, name):
        self.name = name

class Data():
    def __init__(self):
        self._containers = []

    def add_container(self, name):
        self._containers.append(name)
        setattr(self, name, Container(name))
Now let's say
myData = Data()
myData.add_container('contA')
Now, if I do del myData.contA it of course doesn't remove name from myData._containers.
So how would I write a destructor in Container so it deletes the attribute but also removes name from the _containers list?
You seem to be used to a language with deterministic object destruction and dedicated methods for performing that destruction. Python doesn't work that way. Python has no destructors, and even if it had destructors, there is no guarantee that del myData.contA would render the Container object eligible for destruction, let alone actually destroy it.
Probably the simplest way is to just define a remove_container paralleling your add_container:
def remove_container(self, name):
    self._containers.remove(name)
    delattr(self, name)
If you really want the syntax for this operation to be del myData.contA, then hook into attribute deletion, by implementing a __delattr__ on Data:
def __delattr__(self, name):
    self._containers.remove(name)
    super().__delattr__(name)
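With that hook in place, deleting the attribute keeps _containers in sync (a minimal check, assuming __delattr__ has been added to the Data class above):

myData = Data()
myData.add_container('contA')
del myData.contA
print(myData._containers)          # []
print(hasattr(myData, 'contA'))    # False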
You want to override the __delattr__ special method: https://docs.python.org/3/reference/datamodel.html#object.__delattr__
class Data:
    [...]

    def __delattr__(self, name):
        super().__delattr__(name)
        # find and remove the Container from _containers
I'm trying to write a library that will register an arbitrary list of service calls from multiple service endpoints to a container. I intend to implement the service calls in classes, written one per service. Is there a way to maintain the boundedness of the methods from the service classes when registering them to the container (so they will still have access to the instance data of their owning object instance), or must I register the whole object and then write some sort of pass-through in the container class with __getattr__ or some such to access the methods within instance context?
container:
class ServiceCalls(object):
    def __init__(self):
        self._service_calls = {}

    def register_call(self, name, call):
        if name not in self._service_calls:
            self._service_calls[name] = call

    def __getattr__(self, name):
        if name in self._service_calls:
            return self._service_calls[name]
services:
class FooSvc(object):
    def __init__(self, endpoint):
        self.endpoint = endpoint

    def fooize(self, *args, **kwargs):
        # call fooize service call with args/kwargs utilizing self.endpoint
        pass

    def fooify(self, *args, **kwargs):
        # call fooify service call with args/kwargs utilizing self.endpoint
        pass

class BarSvc(object):
    def __init__(self, endpoint):
        self.endpoint = endpoint

    def barize(self, *args, **kwargs):
        # call barize service call with args/kwargs utilizing self.endpoint
        pass

    def barify(self, *args, **kwargs):
        # call barify service call with args/kwargs utilizing self.endpoint
        pass
implementation code:
foosvc = FooSvc('fooendpoint')
barsvc = BarSvc('barendpoint')
calls = ServiceCalls()
calls.register_call('fooize', foosvc.fooize)
calls.register_call('fooify', foosvc.fooify)
calls.register_call('barize', barsvc.barize)
calls.register_call('barify', barsvc.barify)
calls.fooize(args)
I think this answers your question:
In [2]: f = 1 .__add__
In [3]: f(3)
Out[3]: 4
You won't need the staticmethod wrapper when adding these functions to classes, because they effectively behave as static methods already (the instance is already bound into them).
What you are trying to do will work fine, as you can see by running your own code. :)
The object foosvc.fooize is called a "bound method" in Python, and it contains both a reference to foosvc and a reference to the function FooSvc.fooize. If you call the bound method, the reference to self will be passed implicitly as the first parameter.
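You can inspect the pieces a bound method carries (a small check based on the FooSvc class from the question):

foosvc = FooSvc('fooendpoint')
bound = foosvc.fooize
print(bound.__self__ is foosvc)            # True -- the instance travels with the method
print(bound.__func__ is FooSvc.fooize)     # True on Python 3, where FooSvc.fooize is a plain function
bound('arg', key='value')                  # exactly equivalent to foosvc.fooize('arg', key='value')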
On a side note, __getattr__() shouldn't silently return None for invalid attribute names. Better use this:
def __getattr__(self, name):
    try:
        return self._service_calls[name]
    except KeyError:
        raise AttributeError(name)
I don't understand the use case for this -- it seems to me that the easy, simple, idiomatic way to accomplish this is to just pass in an object.
But: program to the interface, not the implementation. Only assume that the object has the method you need -- don't touch the internals or any other methods.
I've started to use the python descriptor protocol more extensively in the code I've been writing. Typically, the default python lookup magic is what I want to happen, but sometimes I'm finding I want to get the descriptor object itself instead the results of its __get__ method. Wanting to know the type of the descriptor, or access state stored in the descriptor, or somesuch thing.
I wrote the code below to walk the namespaces in what I believe is the correct ordering, and return the attribute raw regardless of whether it is a descriptor or not. I'm surprised though that I can't find a built-in function or something in the standard library to do this -- I figure it has to be there and I just haven't noticed it or googled for the right search term.
Is there functionality somewhere in the python distribution that already does this (or something similar)?
Thanks!
from inspect import isdatadescriptor

def namespaces(obj):
    obj_dict = None
    if hasattr(obj, '__dict__'):
        obj_dict = object.__getattribute__(obj, '__dict__')
    obj_class = type(obj)
    return obj_dict, [t.__dict__ for t in obj_class.__mro__]

def getattr_raw(obj, name):
    # get an attribute in the same resolution order one would normally,
    # but do not call __get__ on the attribute even if it has one
    obj_dict, class_dicts = namespaces(obj)
    # look for a data descriptor in the class hierarchy; it takes priority over
    # the obj's dict if it exists
    for d in class_dicts:
        if name in d and isdatadescriptor(d[name]):
            return d[name]
    # look for the attribute in the object's dictionary
    if obj_dict and name in obj_dict:
        return obj_dict[name]
    # look for the attribute anywhere in the class hierarchy
    for d in class_dicts:
        if name in d:
            return d[name]
    raise AttributeError
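To show the difference, here is a quick check with a hypothetical Typed data descriptor (Typed and the field attribute are made up for illustration), using the getattr_raw function above:

class Typed(object):
    """A toy data descriptor that remembers what kind of value it should hold."""
    def __init__(self, kind):
        self.kind = kind
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__.get('_value')
    def __set__(self, obj, value):
        obj.__dict__['_value'] = value

class C(object):
    field = Typed(int)

c = C()
c.field = 3
print(c.field)                        # 3 -- the descriptor's __get__ ran as usual
print(getattr_raw(c, 'field'))        # the Typed descriptor object itself
print(getattr_raw(c, 'field').kind)   # <class 'int'> -- state stored on the descriptor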
Edit Wed, Oct 28, 2009.
Denis's answer gave me a convention to use in my descriptor classes to get the descriptor objects themselves. But, I had an entire class hierarchy of descriptor classes, and I didn't want to begin every __get__ function with a boilerplate
def __get__(self, instance, instance_type):
    if instance is None:
        return self
    ...
To avoid this, I made the root of the descriptor class tree inherit from the following:
def decorate_get(original_get):
    def decorated_get(self, instance, instance_type):
        if instance is None:
            return self
        return original_get(self, instance, instance_type)
    return decorated_get

class InstanceOnlyDescriptor(object):
    """All __get__ functions are automatically wrapped with a decorator which
    causes them to only be applied to instances. If __get__ is called on a
    class, the decorator returns the descriptor itself, and the decorated
    __get__ is not called.
    """
    class __metaclass__(type):
        def __new__(cls, name, bases, attrs):
            if '__get__' in attrs:
                attrs['__get__'] = decorate_get(attrs['__get__'])
            return type.__new__(cls, name, bases, attrs)
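A minimal usage sketch (Python 2, matching the __metaclass__ style above; Doubled and C are made-up names):

class Doubled(InstanceOnlyDescriptor):
    def __get__(self, instance, instance_type):
        return instance.base * 2   # no 'if instance is None' boilerplate needed

class C(object):
    base = 21
    doubled = Doubled()

c = C()
print(c.doubled)    # 42 -- the wrapped __get__ runs for instance access
print(C.doubled)    # the Doubled descriptor itself, thanks to the decorator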
Most descriptors do their job when accessed as an instance attribute only, so it's convenient to return the descriptor itself when it's accessed on the class:
class FixedValueProperty(object):
    def __init__(self, value):
        self.value = value

    def __get__(self, inst, cls):
        if inst is None:
            return self
        return self.value
This allows you to get the descriptor itself:
>>> class C(object):
...     prop = FixedValueProperty('abc')
...
>>> o = C()
>>> o.prop
'abc'
>>> C.prop
<__main__.FixedValueProperty object at 0xb7eb290c>
>>> C.prop.value
'abc'
>>> type(o).prop.value
'abc'
Note that this works for (most?) built-in descriptors too:
>>> class C(object):
...     @property
...     def prop(self):
...         return 'abc'
...
>>> C.prop
<property object at 0xb7eb0b6c>
>>> C.prop.fget
<function prop at 0xb7ea36f4>
Accessing the descriptor can be useful when you need to extend it in a subclass, but there is a better way to do this.
The inspect library provides a function to retrieve an attribute without any descriptor magic: inspect.getattr_static.
Documentation: https://docs.python.org/3/library/inspect.html#fetching-attributes-statically
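For example, a quick check (a minimal sketch; C and prop are made-up names):

import inspect

class C:
    @property
    def prop(self):
        return 'abc'

c = C()
print(c.prop)                              # 'abc' -- normal access invokes the descriptor
print(inspect.getattr_static(c, 'prop'))   # <property object at 0x...> -- the raw descriptor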
(This is an old question, but I keep coming across it when trying to remember how to do this, so I'm posting this answer so I can find it again!)
The above method
class FixedValueProperty(object):
    def __init__(self, value):
        self.value = value

    def __get__(self, inst, cls):
        if inst is None:
            return self
        return self.value
is a great approach whenever you control the code of the property, but there are some cases, such as when the property is part of a library controlled by someone else, where another approach is useful. This alternative approach can also be useful in other situations, such as implementing object mapping, walking a namespace as described in the question, or other specialised libraries.
Consider a class with a simple property:
class ClassWithProp:
    @property
    def value(self):
        return 3
>>> test = ClassWithProp()
>>> test.value
3
>>> test.__class__.__dict__['value']
<property object at 0x00000216A39D0778>
When accessed from the containing object's class dict, the 'descriptor magic' is bypassed. Note also that if we assign the property to a new class variable, it behaves just like the original, with 'descriptor magic', but if assigned to an instance variable, the property behaves like any normal object and also bypasses the 'descriptor magic'.
>>> test.__class__.classvar = test.__class__.__dict__['value']
>>> test.classvar
3
>>> test.instvar = test.__class__.__dict__['value']
>>> test.instvar
<property object at 0x00000216A39D0778>
Let's say we want to get the descriptor for obj.prop where type(obj) is C.
C.prop usually works because the descriptor usually returns itself when accessed via C (i.e., bound to C). But C.prop may trigger a descriptor in its metaclass. If prop were not present in obj, obj.prop would raise AttributeError while C.prop might not. So it's better to use inspect.getattr_static(obj, 'prop').
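Here is a small example of the metaclass pitfall mentioned above (Meta, C, and prop are made-up names for illustration):

import inspect

class Meta(type):
    @property
    def prop(cls):
        return 'from the metaclass'

class C(metaclass=Meta):
    pass

obj = C()
print(C.prop)   # 'from the metaclass' -- the metaclass property fires on class access

try:
    inspect.getattr_static(obj, 'prop')
except AttributeError:
    print("obj really has no 'prop'")   # getattr_static is not fooled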
If you are not satisfied with that, here's a CPython-specific method (from _PyObject_GenericGetAttrWithDict in Objects/object.c):
import ctypes, _ctypes

_PyType_Lookup = ctypes.pythonapi._PyType_Lookup
_PyType_Lookup.argtypes = (ctypes.py_object, ctypes.py_object)
_PyType_Lookup.restype = ctypes.c_void_p

def type_lookup(ty, name):
    """Look for a name through the MRO of a type."""
    if not isinstance(ty, type):
        raise TypeError('ty must be a type')
    result = _PyType_Lookup(ty, name)
    if result is None:
        raise AttributeError(name)
    return _ctypes.PyObj_FromPtr(result)
type_lookup(type(obj), 'prop') returns the descriptor the same way CPython looks it up for obj.prop, assuming obj is an ordinary object (not a class, for example).
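A quick demonstration of the helper above (CPython only; C and prop are made-up names):

class C(object):
    @property
    def prop(self):
        return 'abc'

obj = C()
print(type_lookup(type(obj), 'prop'))             # <property object at 0x...> -- no __get__ call
print(type_lookup(type(obj), 'prop').fget(obj))   # 'abc' -- invoke the getter manually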