I'm aware that attribute getters and setters are considered "unpythonic", and that the pythonic way to do things is to simply use a normal attribute, adding the property decorator later if you need to trigger some functionality when an attribute is accessed or set.
e.g. What's the pythonic way to use getters and setters?
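For concreteness, the migration I have in mind looks roughly like this (Temperature is a made-up example):

# Before: a plain attribute, no ceremony
class Temperature(object):
    def __init__(self):
        self.celsius = 0

# After: same public interface, but setting the attribute now triggers validation
class Temperature(object):
    def __init__(self):
        self.celsius = 0  # goes through the setter below

    @property
    def celsius(self):
        return self._celsius

    @celsius.setter
    def celsius(self, value):
        if value < -273.15:
            raise ValueError("below absolute zero")
        self._celsius = value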
But how does this apply when the value of an attribute is a list, for example?
class AnimalShelter(object):
    def __init__(self):
        self.dogs = []
        self.cats = []

class Cat(object):
    pass

class Dog(object):
    pass
Say that initially, the interface works like this:
# Create a new animal shelter
woodgreen = AnimalShelter()
# Add some animals to the shelter
dog1 = Dog()
woodgreen.dogs.append(dog1)
This would seem to be in line with the "pythonic" idea of just using straightforward attributes rather than creating getters, setters, mutators etc. I could have created an addDog method instead; while that isn't strictly speaking a setter (it mutates the value of an attribute rather than setting an attribute), it still seems setter-like compared to the solution above.
But then, say that later on you need to trigger some functionality when dogs are added. You can't fall back on the property decorator, since adding a dog doesn't set an attribute on the object; it retrieves the list that is the value of that attribute and mutates that list.
What would be the "pythonic" way of dealing with such a situation?
What's unpythonic is useless getters and setters, since Python has strong support for computed attributes. This doesn't mean you shouldn't properly encapsulate your implementation.
In your example above, the way your AnimalShelter class handles its "owned" animals is an implementation detail and should not be exposed, so it's totally pythonic to use protected attributes and expose a relevant set of public methods / properties:
class AnimalShelter(object):
    def __init__(self):
        self._dogs = []
        self._cats = []

    def add_dog(self, dog):
        if dog not in self._dogs:
            self._dogs.append(dog)

    def get_dogs(self):
        return self._dogs[:]  # return a shallow copy

    # etc.
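If you'd rather keep attribute access for reading, a read-only property works too; here's a possible sketch (returning a tuple so callers can't mutate the shelter's internal list):

class AnimalShelter(object):
    def __init__(self):
        self._dogs = []

    def add_dog(self, dog):
        if dog not in self._dogs:
            self._dogs.append(dog)

    @property
    def dogs(self):
        # an immutable snapshot; mutation has to go through add_dog
        return tuple(self._dogs)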
I want to create a configuration class with a cascading feature. What do I mean by this? Let's say we have a configuration class like this:
class BaseConfig(metaclass=ConfigMeta, ...):
    def __getattr__(self, name):
        return 'default values provided by the metaclass'
class Config(BaseConfig):
    class Embedding(BaseConfig, size=200):
        class WordEmbedding(Embedding):
            size = 300
When I use this in code, I will access the configuration as follows:
def function(Config, blah, blah):
    word_embedding_size = Config.Embedding.Word.size
    char_embedding_size = Config.Embedding.Char.size
The last line accesses 'Char', which does not exist in the Embedding class. That should invoke __getattr__(), which should return 200 in this case. I am not familiar enough with metaclasses to make a good judgement, but I guess I need to define the __new__() of the metaclass.
Does this approach make sense, or is there a better way to do it?
EDIT:
class Config(BaseConfig):
    class Embedding(BaseConfig, size=200):
        class WordEmbedding(Embedding):
            size = 300

    class Log(BaseConfig, level=logging.DEBUG):
        class PREPROCESS(Log):
            level = logging.INFO

# When I use:
log = logging.getLogger(level=Config.Log.Model.level)  # level should be INFO
This is a bit confusing. I am not sure this would be the best notation for declaring configurations with default parameters - it seems verbose. But yes, given the flexibility of metaclasses and magic methods in Python, it is possible for something like this to hold all the flexibility you need.
Just for the sake of it, I'd like to say that using nested classes as namespaces, like you are doing, is probably the only useful thing for nested classes. It is common to see people who misunderstand Python OO trying to make use of them.
So - for your problem, you need a __getattr__ method on the final class that can fetch default values for attributes. These attributes in turn are declared as keywords to nested classes - which can also have the same metaclass. Otherwise, the hierarchy of nested classes just works for fetching nested attributes with Python's dot notation.
Moreover, for each class in a nested set, one can pass in keyword parameters that are to be used as defaults if the next level of nested classes is not defined. In the given example, accessing Config.Embedding.Char.size with a non-existing Char should return the default size. Note that a __getattr__ in Embedding can return a fake Char object - but that object is the one that has to yield a size attribute. So our __getattr__ has to yield an object that itself has a proper __getattr__.
However, I will suggest a change to your requirements - instead of passing in the default values as keyword parameters, use a reserved name - like _default - inside which you can put your default attributes. That way, you can provide deeply nested default subtrees as well, not just scalar values, and the implementation can possibly be simpler.
Actually - a lot simpler. By using keywords to the class as you propose, you'd actually need a metaclass that stores those default parameters in a data structure (it would be possible in either __new__ or __init__, though). But by just using nested classes all the way, with a reserved name, a custom __getattr__ on the metaclass will work. It is called for missing class attributes on the configuration classes themselves, and all it has to do, if a requested attribute does not exist, is try to retrieve the _default class I mentioned.
Thus, you can work with something like:
class ConfigMeta(type):
    def __getattr__(cls, attr):
        return cls._default

class Base(metaclass=ConfigMeta):
    pass

class Config(Base):
    class Embed(Base):
        class _default(Base):
            size = 200

        class Word(Base):
            size = 300

assert Config.Embed.Char.size == 200
assert Config.Embed.Word.size == 300
Btw - just last year I was working on a project with configurations like this, with default values, but using a dictionary syntax - that is why I mentioned I am not sure nested classes would be a nice design. But since all the functionality can be provided by a metaclass with three lines of code, I guess this beats anything else.
Also, that is why I think being able to nest whole default subtrees can be useful for what you want - I've been there.
You can use a metaclass to set the attribute:
class ConfigMeta(type):
    def __new__(mt, clsn, bases, attrs):
        try:
            _ = attrs['size']
        except KeyError:
            attrs['size'] = 300
        return super().__new__(mt, clsn, bases, attrs)
Now if a class does not define the size attribute, it will be set to 300 (change this to meet your needs).
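For instance, with the ConfigMeta above (the class names below are made up for illustration):

class Embedding(metaclass=ConfigMeta):
    pass  # no size defined here

class WordEmbedding(metaclass=ConfigMeta):
    size = 500  # an explicit value is left untouched

print(Embedding.size)      # 300, injected by ConfigMeta.__new__
print(WordEmbedding.size)  # 500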
I have a simple Python class:
class Car:
    def __init__(self):
        self.dirty = False
        self.owner = 'Alice'
        self.wheels = []

    def __setattr__(self, name, value):
        # set the flag via the base implementation to avoid recursing
        # back into this method, then perform the actual write
        super(Car, self).__setattr__('dirty', True)
        super(Car, self).__setattr__(name, value)
After some experimenting, I see __setattr__ is called only when setting owner or wheels:
car_instance.owner = 'Bob'
car_instance.wheels = []
It does not get called when appending to wheels:
car_instance.wheels.append(wheel_instance)
This does not surprise me, and I understand why it is not being called.
I am just wondering how I would get it to be called for the 3 scenarios I listed:
car_instance.owner = 'Bob' # SCENARIO 1
car_instance.wheels = [] # SCENARIO 2
car_instance.wheels.append(wheel_instance) # SCENARIO 3
I've experimented a bit with the different descriptors, but no luck. I ultimately just want to set dirty = True when a class member is modified (set, reset, mutated, appended to, etc.).
You cannot do this using only descriptors. Full stop.
You have to provide a custom list class that does what you want. This is not too difficult if your custom list inherits from collections.abc.MutableSequence: you only need to implement __getitem__, __setitem__, __delitem__, __len__, and insert - the others are filled in by the abstract base class MutableSequence.
Use a normal list as backing storage and implement the methods using that.
Note that the index argument to __setitem__, __getitem__, and __delitem__ can be a slice, which is trickier to implement than you'd expect. I recommend a tight test suite.
Once you have your list class, you use it as the type for your class' attributes (you can control the type using @property or custom descriptors, preventing the user from assigning any other type).
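A minimal sketch of such a list, assuming the owner exposes a dirty attribute as in the question (DirtyList and _mark_dirty are made-up names):

import collections.abc

class DirtyList(collections.abc.MutableSequence):
    def __init__(self, owner, initial=()):
        self._owner = owner
        self._items = list(initial)  # a plain list as backing storage

    def _mark_dirty(self):
        # bypass the owner's __setattr__, if it has one, to avoid recursion
        object.__setattr__(self._owner, 'dirty', True)

    def __getitem__(self, index):
        return self._items[index]

    def __setitem__(self, index, value):
        self._items[index] = value
        self._mark_dirty()

    def __delitem__(self, index):
        del self._items[index]
        self._mark_dirty()

    def __len__(self):
        return len(self._items)

    def insert(self, index, value):
        self._items.insert(index, value)
        self._mark_dirty()

MutableSequence builds append, extend, += and friends on top of insert and __setitem__, so car_instance.wheels.append(wheel_instance) (SCENARIO 3) ends up marking the car dirty.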
Concretely, I have a user-defined class of the form:
class Foo(object):
    def __init__(self, bar):
        self.bar = bar

    def bind(self):
        val = self.bar
        do_something(val)
I need to:
1) be able to call on the class (not an instance of the class) to recover all the self.xxx attributes defined within the class.
For an instance of a class, this can be done by doing a f = Foo('') and then f.__dict__. Is there a way of doing it for a class, and not an instance? If yes, how? I would expect Foo.__dict__ to return {'bar': None} but it doesn't work this way.
2) be able to access all the self.xxx parameters called from a particular function of a class. For instance I would like to do Foo.bind.__selfparams__ and receive ['bar'] in return. Is there a way of doing this?
This is something that is quite hard to do in a dynamic language, assuming I understand correctly what you're trying to do. Essentially this means going over all the instances in existence for the class and then collecting all the attributes set on those instances. While not infeasible, I would question the practicality of such an approach, both from a design and a performance point of view.
More specifically, you're talking of "all the self.xxx attributes defined within the class" - but these things are not defined at all, at least not in a single place - they "evolve" as more and more instances of the class are brought to life. Now, I'm not saying all your instances are setting different attributes, but they might, and in order to have a reliable generic solution, you'd literally have to keep track of anything the instances might have done to themselves. So unless you have a static analysis approach in mind, I don't see a clean and efficient way of achieving it (and even static analysis is generally of no help in a dynamic language).
A trivial example to prove my point:
class Foo(object):
    def __init__(self):
        # statically analysable
        self.bla = 3
        # still analysable, but more difficult
        if SOME_CONSTANT > 123:
            self.x = 123
        else:
            self.y = 321

    def do_something(self):
        import random
        setattr(self, "attr%s" % random.randint(1, 100),
                "hello, world of dynamic languages!")
foo = Foo()
foo2 = Foo()
# only `bla`, `x`, and `y` attrs in existence so far

foo2.do_something()
# now there's an attribute with a random name out there;
# in order to detect it, we'd have to get all instances of Foo in
# existence at the moment, and individually inspect every attribute on them
And even if you were to iterate over all instances in existence, you'd only be getting a snapshot of what you're interested in, not all possible attributes.
This is not possible. The class doesn't have those attributes, just functions that set them. Ergo, there is nothing to retrieve and this is impossible.
This is only possible with deep AST inspection. Foo.bind.func_code would normally have the attributes you want under co_freevars, but you're looking up the attributes on self, so they are not free variables. You would have to decompile the bytecode from func_code.co_code to an AST and then walk said AST.
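If the source code is importable, though, you can skip the bytecode and walk the source AST directly. A best-effort sketch (self_attrs is a made-up helper; it misses setattr() calls and anything dynamic):

import ast
import inspect
import textwrap

def self_attrs(func):
    # parse the function's source and collect every `self.<name>` access
    tree = ast.parse(textwrap.dedent(inspect.getsource(func)))
    names = set()
    for node in ast.walk(tree):
        if (isinstance(node, ast.Attribute)
                and isinstance(node.value, ast.Name)
                and node.value.id == 'self'):
            names.add(node.attr)
    return sorted(names)

print(self_attrs(Foo.bind))      # ['bar']
print(self_attrs(Foo.__init__))  # ['bar']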
This is a bad idea. Whatever you're doing, find a different way of doing it.
To do this, you need some way to find all the instances of your class. One way is to have the class itself keep track of its instances. Unfortunately, keeping an ordinary reference to every instance means that those instances can never be garbage-collected. Fortunately, Python has weakref, which lets you refer to an object without that reference counting for Python's memory management, so the instances can be garbage-collected as usual.
A good place to update the list of instances is in your __init__() method. You could also do it in __new__() if you find the separation of concerns a little cleaner.
import weakref

class Foo(object):
    _instances = []

    def __init__(self, value):
        self.value = value
        cls = type(self)
        # the second argument to weakref.ref removes the dead reference
        # from the list when the instance is garbage-collected
        cls._instances.append(weakref.ref(self, cls._instances.remove))

    @classmethod
    def iterinstances(cls):
        "Returns an iterator over all instances of the class."
        return (ref() for ref in cls._instances)

    @classmethod
    def iterattrs(cls, attr, default=None):
        "Returns an iterator over a named attribute of all instances of the class."
        return (getattr(ref(), attr, default) for ref in cls._instances)
Now you can do this:
f1, f2, f3 = Foo(1), Foo(2), Foo(3)
for v in Foo.iterattrs("value"):
    print(v, end=" ")  # prints 1 2 3
I am, for the record, with those who think this is generally a bad idea and/or not really what you want. In particular, instances may live longer than you expect depending on where you pass them and what that code does with them, so you may not always have the instances you think you have. (Some of this may even happen implicitly.) It is generally better to be explicit about this: rather than having the various instances of your class be stored in random variables all over your code (and libraries), have their primary repository be a list or other container, and access them from there. Then you can easily iterate over them and get whatever attributes you want. However, there may be use cases for something like this and it's possible to code it up, so I did.
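In code, that recommendation is simply (registry is an arbitrary name):

# an explicit container as the primary repository of instances
registry = []

registry.append(Foo(1))
registry.append(Foo(2))

# no metaclass or weakref machinery needed to enumerate them
values = [foo.value for foo in registry]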
I have a class sysprops in which I'd like to have a number of constants. However, I'd like to pull the values for those constants from the database, so I'd like some sort of hook that runs any time one of these class constants is accessed (something like the __getattribute__ method for instance variables).
class sysprops(object):
    SOME_CONSTANT = 'SOME_VALUE'

# this expression should not return 'SOME_VALUE' but instead
# a dynamic value pulled from the database:
sysprops.SOME_CONSTANT
Although I think it is a very bad idea to do this, it is possible:
class GetAttributeMetaClass(type):
    def __getattribute__(cls, key):
        print('Getting attribute', key)
        # delegate to type so the lookup still returns the value
        return type.__getattribute__(cls, key)

class sysprops(metaclass=GetAttributeMetaClass):
    pass
While the other two answers describe valid methods, I like to take the route of least magic. You can do something similar to the metaclass approach without actually using one - simply by using a decorator.
def instancer(cls):
    return cls()

@instancer
class SysProps(object):
    def __getattribute__(self, key):
        return key  # dummy
This will create an instance of SysProps and assign it back to the SysProps name, effectively shadowing the actual class definition and leaving a single constant instance in its place.
Since decorators are more common in Python, I find this way easier to grasp for other people who have to read your code.
sysprops.SOME_CONSTANT can be the return value of a function if SOME_CONSTANT were a property defined on type(sysprops).
In other words, what you are talking about is commonly done if sysprops were an instance instead of a class.
But here is the kicker -- classes are instances of metaclasses. So everything you know about controlling the behavior of instances through the use of classes applies equally well to controlling the behavior of classes through the use of metaclasses.
Usually the metaclass is type, but you are free to define other metaclasses by subclassing type. If you place a property SOME_CONSTANT in the metaclass, then the instance of that metaclass, e.g. sysprops will have the desired behavior when Python evaluates sysprops.SOME_CONSTANT.
class MetaSysProps(type):
    @property
    def SOME_CONSTANT(cls):
        return 'SOME_VALUE'

class SysProps(metaclass=MetaSysProps):
    pass

print(SysProps.SOME_CONSTANT)
yields
SOME_VALUE
I have the following problem. I have a model class in MVC and it has a special purpose: in certain cases it should be able to override itself. Is this kind of behavior possible?
class Text(Document):
    a = StringField()
    b = StringField()

    def save(self):
        if 1 == Text.object(a=self.a).count():   # if a similar object exists in the db,
            self = Text.object(a=self.a).first() # get the instance from the db and
                                                 # override the original object
        else:                                    # otherwise use the superclass' save function
            return super(Text, self).save()
There's no trivial way for an object to become another object in Python. Assigning to self won't do this; self is a local variable in the method definition, and assigning to it won't change the existing instance in any way - it only makes the instance inaccessible for the rest of the method.
There are a few ways to approach this problem. The preferred way is to have a method that returns the correct instance.
class Foo(...):
    def get_or_save(self):
        existing = load_from_database(self.bar)
        if existing is not None:
            return existing
        else:
            save_to_database(self)
            return self

new_inst = Foo()
new_inst.bar = "baz"
inst = new_inst.get_or_save()
# stop using new_inst
There is also a hackish way to get an effect similar to your original example. Ordinary Python classes store most of their attributes in a special __dict__ attribute. You can copy that, and it will be as though one instance is replaced by the other. Of course, that only works for perfectly plain Python classes, and may or may not work for classes defined in an ORM or that retain state in more clever ways.
class Foo(...):
    def save(self):
        existing = load_from_database(self.bar)
        if existing is not None:
            self.__dict__ = existing.__dict__
        else:
            save_to_database(self)
Yes it is possible :-)
Seriously, using a conditional call to super as in your example will achieve the result.
However, the style of your example is a little confusing, and changing it may allow you to achieve your overall objectives more easily. (Neither of the following points directly affects your question.)
I would not recommend putting a method in your class called object unless I had no other choice.
The fact that you are passing self.a to Text.object within method Text.save doesn't seem right. It would be cleaner to simply call self.object() and have method object use self.a directly in its code.
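Putting both suggestions together, one possible shape (Text.object is kept from the question; matching is a made-up helper name):

class Text(Document):
    a = StringField()
    b = StringField()

    def matching(self):
        # look up a similar document using our own `a`,
        # so callers don't have to pass it in
        return Text.object(a=self.a).first()

    def save(self):
        existing = self.matching()
        if existing is not None:
            self.__dict__ = existing.__dict__  # the __dict__ swap shown above
        else:
            return super(Text, self).save()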