Dynamic configuration object based on __setattr__ and __getattr__ - python

I tried to create a dynamic object to validate my config on the fly and present the result as an object. I tried to achieve this with the following class:
class SubConfig(object):
    def __init__(self, config, key_types):
        self.__config = config
        self.__values = {}
        self.__key_types = key_types

    def __getattr__(self, item):
        if item in self.__key_types:
            return self.__values[item] or None
        else:
            raise ValueError("No such item to get from config")

    def __setattr__(self, item, value):
        if self.__config._blocked:
            raise ValueError("Can't change values after service has started")
        if item in self.__key_types:
            if type(value) in self.__key_types[item]:
                self.__values[item] = value
            else:
                raise ValueError("Can't assign a value of a different type than declared!")
        else:
            raise ValueError("No such item to set in config")
SubConfig is a wrapper for a section of the config file. The config has a switch that disables changing values after the program has started (you can change values only during initialization).
The problem is that when I set any value, it gets into an infinite loop in __getattr__. From what I've read, __getattr__ shouldn't behave like that (normal attribute lookup is tried first, and __getattr__ is only called when it fails). I was comparing my code with available examples but I can't figure it out.
I noticed that all the problems are generated by my constructor.

The problem is that your constructor, in initialising the object, calls __setattr__, which then calls __getattr__ because the double-underscore private members aren't initialised yet.
There are two ways I can think of to work around this:
One option is to call down to object.__setattr__, thereby avoiding your __setattr__, or equivalently to use super(SubConfig, self).__setattr__(...) in __init__. You could also set values in self.__dict__ directly. A problem here is that because you're using double underscores you'd have to mangle the attribute names manually (so '__config' becomes '_SubConfig__config'):
def __init__(self, config, key_types):
    super(SubConfig, self).__setattr__('_SubConfig__config', config)
    super(SubConfig, self).__setattr__('_SubConfig__values', {})
    super(SubConfig, self).__setattr__('_SubConfig__key_types', key_types)
An alternative is to have __setattr__ detect and pass through access to attribute names that begin with _, i.e.:
if item.startswith('_'):
    return super(SubConfig, self).__setattr__(item, value)
This is more Pythonic in that if someone has a good reason to access your object's internals, you have no reason to try to stop them.
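Putting the pass-through variant together, a sketch of how the full class might look (the minimal Config stand-in with a _blocked flag is my addition, since the question doesn't show the Config class; I also raise AttributeError rather than ValueError in __getattr__, which is what Python expects for missing attributes):

```python
class Config:
    # hypothetical stand-in for the question's config object
    _blocked = False


class SubConfig(object):
    def __init__(self, config, key_types):
        # these assignments hit __setattr__, but the underscore
        # pass-through below lets them through untouched
        self.__config = config
        self.__values = {}
        self.__key_types = key_types

    def __getattr__(self, item):
        # only called when normal lookup fails, so internals never loop
        if item in self.__key_types:
            return self.__values.get(item)
        raise AttributeError("No such item to get from config: %r" % item)

    def __setattr__(self, item, value):
        if item.startswith('_'):
            # internal (name-mangled) attributes: store normally
            return super(SubConfig, self).__setattr__(item, value)
        if self.__config._blocked:
            raise ValueError("Can't change values after service has started")
        if item not in self.__key_types:
            raise ValueError("No such item to set in config")
        if type(value) not in self.__key_types[item]:
            raise ValueError("Can't assign a value of a different type than declared!")
        self.__values[item] = value


cfg = SubConfig(Config(), {'port': (int,)})
cfg.port = 8080
assert cfg.port == 8080
```

Because the three assignments in __init__ are mangled to names like _SubConfig__config, they all start with an underscore and take the pass-through branch, so construction no longer recurses.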

Cf. ecatmur's answer for the root cause - and remember that __setattr__ is not symmetrical to __getattr__ - it is unconditionally called on each and every attempt to bind an object's attribute. Overriding __setattr__ is tricky and should not be done if you don't clearly understand the pros and cons.
Now for a simple practical solution to your use case: rewrite your initializer to avoid triggering setattr calls:
class SubConfig(object):
    def __init__(self, config, key_types):
        self.__dict__.update(
            _SubConfig__config=config,
            _SubConfig__values={},
            _SubConfig__key_types=key_types
        )
Note that I renamed your attributes to emulate the name-mangling that happens when using the double leading underscores naming scheme.

Related

How can I edit __setattr__ to raise an error when an attempt is made to store a new attribute on a class object, but still allow some use

I'm trying to figure out a way, using __setattr__, to raise an error in my class when it is called or used, but at the same time allow for its use inside the class in places like __init__. Is there any way I can either write specific exceptions under __setattr__ to allow certain instances to work, or a general way so that anything inside the class will work?
Here is the idea that I have been working with:
class Test:
    def __init__(self, test_list):
        self.p = test_list

    def __setattr__(self, key, value):
        if key == 'p':
            pass
        else:
            raise AssertionError('Object can not add any new attributes.')
I do not know what I could do in place of the pass, as I would like __setattr__ to behave in the original manner under that condition. Is there any way I can make exceptions to allow this to work?
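One way to fill in that pass (a sketch, following the same object.__setattr__ bypass used in the answers elsewhere on this page) is to delegate allowed names to the default storage mechanism:

```python
class Test:
    def __init__(self, test_list):
        self.p = test_list  # goes through __setattr__ below

    def __setattr__(self, key, value):
        if key == 'p':
            # allowed name: fall back to the default attribute storage
            object.__setattr__(self, key, value)
        else:
            raise AssertionError('Object can not add any new attributes.')


t = Test([1, 2, 3])
assert t.p == [1, 2, 3]
```

Any assignment to a name other than 'p', from inside or outside the class, still raises the AssertionError.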

Composing descriptors in Python

Background
In Python, a descriptor is an object that defines any of __get__, __set__ or __delete__, and sometimes also __set_name__. The most common use of descriptors in Python is probably property(fget, fset, fdel, doc). The property descriptor calls the given getter, setter, and deleter when the respective descriptor methods are called.
It's interesting to note that functions are also descriptors: they define __get__ which when called, returns a bound method.
Descriptors are used to modify what happens when an object's attributes are accessed. Examples are restricting access, logging attribute access, or even dynamic lookup from a database.
Problem
My question is: how do I design descriptors that are composable?
For example:
Say I have a Restricted descriptor (that only allows setting and getting when a condition of some sort is met), and an AccessLog descriptor (that logs every time the property is "set" or "get"). Can I design those so that I can compose their functionality when using them?
Say my example usage looks like this:
class ExampleClient:
    # use them combined, preferably in any order
    # (and there could be a special way to combine them,
    # although functional composition makes the most sense)
    foo: Restricted(AccessLog())
    bar: AccessLog(Restricted())

    # and also use them separately
    qux: Restricted()
    quo: AccessLog()
I'm looking for a way to make this into a re-usable pattern, so I can make any descriptor composable. Any advice on how to do this in a pythonic manner? I'm going to experiment with a few ideas myself, and see what works, but I was wondering if this has been tried already, and if there is sort of a "best practice" or common method for this sort of thing...
You can probably make that work. The tricky part might be figuring out what the default behavior should be for your descriptors if they don't have a "child" descriptor to delegate to. Maybe you want to default to behaving like a normal instance variable?
class DelegateDescriptor:
    def __init__(self, child=None):
        self.child = child
        self.name = None

    def __set_name__(self, owner, name):
        self.name = name
        if self.child is not None:
            try:
                self.child.__set_name__(owner, name)
            except AttributeError:
                pass

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        if self.child is not None:
            return self.child.__get__(instance, owner)
        try:
            return instance.__dict__[self.name]  # default behavior: look up the value
        except KeyError:
            raise AttributeError

    def __set__(self, instance, value):
        if self.child is not None:
            self.child.__set__(instance, value)
        else:
            instance.__dict__[self.name] = value  # default behavior: store the value

    def __delete__(self, instance):
        if self.child is not None:
            self.child.__delete__(instance)
        else:
            try:
                del instance.__dict__[self.name]  # default behavior: remove the value
            except KeyError:
                raise AttributeError
Now, this descriptor doesn't actually do anything other than store a value or delegate to another descriptor. Your actual Restricted and AccessLog descriptors might be able to use it as a base class, however, and add their own logic on top. The error checking is also very basic; you will probably want to do a better job of raising the right kinds of exceptions with appropriate error messages for every use case before using this in production.
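A sketch of how composition on top of that delegate pattern might look. The base class is restated minimally here so the example runs on its own, and I've substituted a simple Positive check for "Restricted" since the question doesn't define its condition. Note the descriptors are assigned, not annotated: annotations alone (foo: Restricted(...)) don't actually install a descriptor on the class.

```python
class DelegateDescriptor:
    """Minimal restatement of the delegate base class above."""

    def __init__(self, child=None):
        self.child = child
        self.name = None

    def __set_name__(self, owner, name):
        self.name = name
        if self.child is not None:
            self.child.__set_name__(owner, name)

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        if self.child is not None:
            return self.child.__get__(instance, owner)
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        if self.child is not None:
            self.child.__set__(instance, value)
        else:
            instance.__dict__[self.name] = value


class AccessLog(DelegateDescriptor):
    """Logs every get/set, then delegates down the chain."""

    def __init__(self, child=None):
        super().__init__(child)
        self.log = []

    def __get__(self, instance, owner=None):
        if instance is not None:
            self.log.append(('get', self.name))
        return super().__get__(instance, owner)

    def __set__(self, instance, value):
        self.log.append(('set', self.name, value))
        super().__set__(instance, value)


class Positive(DelegateDescriptor):
    """Stand-in for 'Restricted': rejects negative values."""

    def __set__(self, instance, value):
        if value < 0:
            raise ValueError('must be non-negative')
        super().__set__(instance, value)


class Client:
    foo = AccessLog(Positive())  # composed: logging wraps the restriction
    qux = AccessLog()            # used alone

c = Client()
c.foo = 5
assert c.foo == 5
```

Each descriptor does its own work and then passes the call down; the innermost descriptor with no child falls back to plain instance-dict storage.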

Is there a way to get class constructor arguments by self inspection?

I have a set of classes that I want to serialize to/from both JSON and a mongodb database. The most efficient way I see to do that is to write methods to serialize to dicts, then use built-in methods to get to/from storage. (Q1: Is this conclusion actually valid, or is there a better way?)
So, to export an instance of my class to a dict, I can use self.__dict__. In my case these classes get nested, so it has to be recursive, fine. Now I want to read it back...but now I'm stuck if my class has a non-trivial constructor. Consider:
class MyClass(object):
    def __init__(self, name, value=None):
        self.name = name
        self._value = value

    @property
    def value(self):
        return self._value

a = MyClass('spam', 42)
d = a.__dict__  # has {'name': 'spam', '_value': 42}

# now how to unserialize?
b = MyClass(**d)  # nope, because '_value' is not a valid argument
c = MyClass(); c.__dict__.update(d)  # nope, can't construct an 'empty' MyClass
I don't want to write a constructor that ignores unknown parameters, because I hate wasting hours trying to figure out why a class is ignoring some parameter, only to find that there was a typo in the name. And I don't want to remove the required parameters, because that may cause problems elsewhere.
So how do I get around this mess I've made for myself?
If there's a way to bypass the class's constructor and create an empty object, that might work, but if there is any useful work done in __init__, I lose that (e.g. type/range checking of parameters).
In my case, these classes don't change after construction (they define lots of useful methods, and have some caching). So if I could extract just the constructor arguments from the dict, I'd be doing good. Is there a way to do that that doesn't involve repeating all the constructor arguments??
I don't know if it's just a trivial example or not, but I can't see the value of the value property (no pun intended) if you're directly assigning the value argument from __init__, I mean. In that case just using a simple attribute will solve your problem.
But if for some reason you really need a property, just strip the underscore from the key:
class MyClass(object):
    def __init__(self, name, value=None):
        self.name = name
        self._value = value

    @property
    def value(self):
        return self._value

a = MyClass('spam', 42)
d = {k.strip('_'): v for k, v in a.__dict__.items()}
b = MyClass(**d)
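If you want to be defensive about keys that don't correspond to constructor arguments at all, one option (a sketch; the from_dict helper is my own name, not a standard API) is to introspect the constructor's signature and keep only matching names:

```python
import inspect


class MyClass(object):
    def __init__(self, name, value=None):
        self.name = name
        self._value = value

    @property
    def value(self):
        return self._value


def from_dict(cls, d):
    """Rebuild an instance from a __dict__ snapshot, mapping
    leading-underscore storage names back to constructor args."""
    params = inspect.signature(cls.__init__).parameters
    kwargs = {k.lstrip('_'): v for k, v in d.items()}
    # drop anything the constructor doesn't accept
    kwargs = {k: v for k, v in kwargs.items() if k in params}
    return cls(**kwargs)


a = MyClass('spam', 42)
b = from_dict(MyClass, a.__dict__)
assert b.name == 'spam' and b.value == 42
```

This keeps the constructor strict for normal use (typos still fail loudly) while tolerating the renamed keys during deserialization.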

`object.__setattr__(self, ..., ...)` instead of `setattr(self, ..., ...)`?

Following is the __init__ method of the Local class from the werkzeug library:
def __init__(self):
    object.__setattr__(self, '__storage__', {})
    object.__setattr__(self, '__ident_func__', get_ident)
I don't understand two things about this code:
1. Why did they write
   object.__setattr__(self, '__storage__', {})
   instead of simply
   setattr(self, '__storage__', {})
2. Why did they even use __setattr__ if they could simply write
   self.__storage__ = {}
This ensures that the default Python definition of __setattr__ is used. It's generally used if the class has overridden __setattr__ to perform non-standard behaviour, but you still wish to access the original __setattr__ behaviour.
In the case of werkzeug, if you look at the Local class you'll see __setattr__ is defined like this:
def __setattr__(self, name, value):
    ident = self.__ident_func__()
    storage = self.__storage__
    try:
        storage[ident][name] = value
    except KeyError:
        storage[ident] = {name: value}
Instead of setting attributes in the dictionary of the object, it sets them in the __storage__ dictionary that was initialized previously. In order to set the __storage__ attribute at all (so that it may be accessed like self.__storage__ later), the original definition of __setattr__ from object must be used, which is why the awkward notation is used in the constructor.
They want to explicitly use the base object.__setattr__ implementation instead of a possibly overridden instance method elsewhere in the inheritance chain. Local implements its own __setattr__, so this avoids that.
Because the same class defines __setattr__ and this needs to bypass it, since the first line of that method calls self.__ident_func__(), which wouldn't work yet during construction.
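The pattern can be shown in a stripped-down form (this is an illustration, not werkzeug's actual code; the per-thread ident machinery is left out):

```python
class Local:
    def __init__(self):
        # must bypass our own __setattr__, which needs 'storage'
        # to exist before it can do anything
        object.__setattr__(self, 'storage', {})

    def __setattr__(self, name, value):
        # writing self.storage = ... here would recurse forever
        self.storage[name] = value

    def __getattr__(self, name):
        # only called when normal lookup fails
        try:
            return self.storage[name]
        except KeyError:
            raise AttributeError(name)


l = Local()
l.x = 1
assert l.x == 1
assert 'x' not in l.__dict__   # stored in the storage dict instead
assert l.storage == {'x': 1}
```

Every attribute assignment after construction is redirected into the storage dict; only the one object.__setattr__ call in __init__ touches the instance __dict__ directly.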

Python: Can a class forbid clients setting new attributes?

I just spent too long on a bug like the following:
>>> class Odp():
...     def __init__(self):
...         self.foo = "bar"
...
>>> o = Odp()
>>> o.raw_foo = 3  # oops - meant o.foo
I have a class with an attribute. I was trying to set it, and wondering why it had no effect. Then, I went back to the original class definition, and saw that the attribute was named something slightly different. Thus, I was creating/setting a new attribute instead of the one meant to.
First off, isn't this exactly the type of error that statically-typed languages are supposed to prevent? In this case, what is the advantage of dynamic typing?
Secondly, is there a way I could have forbidden this when defining Odp, and thus saved myself the trouble?
You can implement a __setattr__ method for the purpose -- that's much more robust than the __slots__ which is often misused for the purpose (for example, __slots__ is automatically "lost" when the class is inherited from, while __setattr__ survives unless explicitly overridden).
def __setattr__(self, name, value):
    if hasattr(self, name):
        object.__setattr__(self, name, value)
    else:
        raise TypeError('Cannot set name %r on object of type %s' % (
            name, self.__class__.__name__))
You'll have to make sure the hasattr succeeds for the names you do want to be able to set, for example by setting the attributes at a class level or by using object.__setattr__ in your __init__ method rather than direct attribute assignment. (To forbid setting attributes on a class rather than its instances you'll have to define a custom metaclass with a similar special method).
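Putting that answer together, a sketch using the class-level-default option so that hasattr succeeds for the names you do want (the class-level foo = None line is the addition that makes it work):

```python
class Odp:
    foo = None  # declared at class level, so hasattr(self, 'foo') is True

    def __init__(self):
        self.foo = "bar"

    def __setattr__(self, name, value):
        if hasattr(self, name):
            object.__setattr__(self, name, value)
        else:
            raise TypeError('Cannot set name %r on object of type %s' % (
                name, self.__class__.__name__))


o = Odp()
o.foo = "baz"      # allowed: the name already exists
try:
    o.raw_foo = 3  # the typo from the question is now caught
except TypeError:
    pass
```

The misspelled assignment now fails immediately with a TypeError naming the offending attribute, instead of silently creating a new one.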
