I'm working with a forked version of the python-docx library, and I'm having an issue with editing the elements list, since it is defined as a property.
# docx.document.Document
@property
def elements(self):
    return self._body.elements
I tried the solution mentioned here, but the error AttributeError: can't set attribute still pops up.
The next thing I tried was adding a setter to the attribute derived from self._body and editing the code:
# docx.blkcntnr.BlockItemContainer
@property
def elements(self):
    """
    A list containing the elements in this container (paragraph and tables), in document order.
    """
    return [element(item, self.part) for item in self._element.getchildren()]
I've tried adding the setter at both levels but ended up again with the error AttributeError: can't set attribute.
The setter I wrote:
@elements.setter
def elements(self, value):
    return value
The implementation I tried:
elements_list = docx__document.elements
elem_list = []
docx__document.elements = elements_list = elem_list
The main problem with that code is that docx__document.elements still contains all the elements that are supposed to have been deleted!
My edit to the library looked like this:
# Inside docx.document.Document
@property
def elements(self):
    return self._body.elements

@elements.setter
def elements(self, value=None):
    self._body.elements = value
    gc.collect()
    return value
The other part:
# Inside docx.blkcntnr.BlockItemContainer
@property
def elements(self):
    """
    A list containing the elements in this container (paragraph and tables), in document order.
    """
    return [element(item, self.part) for item in self._element.getchildren()]

@elements.setter
def elements(self, value):
    """
    A list containing the elements in this container (paragraph and tables), in document order.
    """
    return value
Related question [Update]
If I add a setter for this property:
# docx.document.Document
@property
def elements(self):
    return self._body.elements
Should I also add a setter for this property:
# docx.blkcntnr.BlockItemContainer
@property
def elements(self):
    """
    A list containing the elements in this container (paragraph and tables), in document order.
    """
    return [element(item, self.part) for item in self._element.getchildren()]
Because the value of document.elements actually comes from document._body.elements, am I right?
Any help would be appreciated!
The main "Attribute Error" issue, #Jasmijn already covered... the setter actually needs to set something.
In regards to how to provide a setter for elements:
First we need to figure out where elements comes from:
Document.elements comes from [Document]._body.elements
[Document]._body.elements comes from _Body, which inherits BlockItemContainer.elements
BlockItemContainer.elements builds its elements list dynamically from [BlockItemContainer]._element.getchildren()
[BlockItemContainer]._element is equal to [Document]._element.body
[Document]._element comes from extending ElementProxy, and is the first argument passed to Document's constructor
In a very roundabout way, given the element passed to Document, the document's elements are derived from element.body.getchildren(). (It's a bit tricky tracking down the lookup chain, but that's just what you get when there's a lot of abstraction, or perhaps poor object-oriented design.)
Now to track down what exactly getchildren() does:
Looks like the element passed to Document is from the included oxml package
oxml is itself a wrapper around lxml
Looks like the relevant classes are actually in Cython. As far as I can tell, the _Element class is where getchildren() is ultimately defined (etree.pyx)
getchildren() calls _collectChildren (apihelpers.pxi), which gives you an idea of how the internal element structure is setup
The fact that the root implementation is in Cython is going to complicate things, but I see that the _Element class implements some additional methods you could make use of, in particular clear() and extend().
So a possible implementation (which I've tested and appears to work):
# inside docx.document.Document
@elements.setter
def elements(self, lst):
    cython_el = self._element.body
    cython_el.clear()
    cython_el.extend(lst)
I'll disagree with @Jasmijn here and say you don't need to provide a setter for BlockItemContainer as well, since that's a private class.
You could also expose other _Element methods directly on the Document object if desired, like clear().
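For example, here is a rough usage sketch of the setter above (untested; it assumes the forked library patched with the elements setter, and the file names are hypothetical). Note that clear() also wipes the body's sectPr, so this is only suitable for crude wipe-and-rebuild workflows:
import docx

doc = docx.Document("example.docx")                   # hypothetical input file
paragraph_elements = [p._p for p in doc.paragraphs]   # raw lxml <w:p> elements
doc.elements = paragraph_elements[:3]                 # keep only the first three paragraphs
doc.save("trimmed.docx")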
First a word of warning: if you edit a library, those changes will be wiped away if/when you upgrade the library.
While your question is still not reproducible, I can see now what the problem is.
Setters need to mutate some kind of state to be useful. They don't return values.
# Inside docx.document.Document
@elements.setter
def elements(self, value):
    self._body.elements = value

# Inside docx.blkcntnr.BlockItemContainer
@elements.setter
def elements(self, value):
    # FIXME
Unfortunately, I haven't been able to figure out how to implement docx.blkcntnr.BlockItemContainer.elements.setter in a way that neatly replaces the XML tree.
Since it looks like you want to use it to wipe a document clear, why not simply instantiate a new Document instead, like so?
docx__document = docx.Document()
I’m building a class that extends the list data structure in Python, called a Partitional. I’m adding a few methods that I find myself using frequently when dividing a list into partitions.
The class is initialized with a (nullable) list, which exists as an attribute on the class.
class Partitional(list):
"""Extends the list data type. Adds methods for dividing a list into partition sets
and returning data about those partition sets"""
def __init__(self, source_list: list=[]):
super().__init__()
self.source_list: list = source_list
self.n: int = len(source_list)
...
I want to be able to reliably replace list instances with Partitional instances without violating Liskov substitution. So for list’s methods, I wrote methods on the Partitional class that operate on self.source_list, e.g.
...
def remove(self, matched_item):
self.source_list.remove(matched_item)
self.__init__(self.source_list)
def pop(self, *args):
popped_item = self.source_list.pop(*args)
self.__init__(self.source_list)
return popped_item
def clear(self):
self.source_list.clear()
self.__init__(self.source_list)
...
(the __init__ call is there because the Partitional class builds some internal attributes based on self.source_list when it’s initialized, so these need to be rebuilt if source_list changes.)
And I also want Python’s built-in methods that take a list as an argument to work with a Partitional instance, so I set to work writing method overrides for those as well, e.g.
...
def __len__(self):
return len(self.source_list)
def __iter__(self):
    return iter(self.source_list)
...
The relevant built-in methods are a finite set for any given Python version, but... is there not a simpler way to do this?
My question:
Is there a way to write a class such that, if an instance of that class is used as the argument for a function, the class provides an attribute to the function instead, by default?
That way I’d only need to override this default behaviour for a subset of built-in methods.
So for example, if a use case involving a list instance looks like this:
example_list: list = [1,2,3,4,5]
length = len(example_list)
we substitute a Partitional instance built from the same list:
example_list: list = [1,2,3,4,5]
example_partitional = Partitional(example_list)
length = len(example_partitional)
and what’s “actually” happening is this:
length = len(example_partitional.source_list)
i.e.
length = len([1,2,3,4,5])
Other notes:
In working on this, I’ve realized that there are two broad categories of Liskov substitution violation possible:
Inherent violation, where the structure of the child class will make it incompatible with any use case where the child class is used in place of the parent class, e.g. if you override some fundamental property or structure of the parent.
Context-dependent violation, where, for any given piece of software, so long as you never use the child class in a way that would violate Liskov substitution, you’re fine. E.g. You override a method on the parent class that would change how a built-in function acts when it takes an instance of the class as an argument, but you never use that built-in method with the class instance in your system. Or any system that depends on your system. Or... (you see how relying on this caveat is not foolproof)
What I’m looking to do is come up with a technique that will protect against both categories of violation, without having to worry about use cases and context.
What advantages does the @property notation hold over the classic getter+setter? In which specific cases/situations should a programmer choose to use one over the other?
With properties:
class MyClass(object):
    @property
    def my_attr(self):
        return self._my_attr

    @my_attr.setter
    def my_attr(self, value):
        self._my_attr = value
Without properties:
class MyClass(object):
def get_my_attr(self):
return self._my_attr
def set_my_attr(self, value):
self._my_attr = value
Prefer properties. It's what they're there for.
The reason is that all attributes are public in Python. Starting names with an underscore or two is just a warning that the given attribute is an implementation detail that may not stay the same in future versions of the code. It doesn't prevent you from actually getting or setting that attribute. Therefore, standard attribute access is the normal, Pythonic way of, well, accessing attributes.
The advantage of properties is that they are syntactically identical to attribute access, so you can change from one to another without any changes to client code. You could even have one version of a class that uses properties (say, for code-by-contract or debugging) and one that doesn't for production, without changing the code that uses it. At the same time, you don't have to write getters and setters for everything just in case you might need to better control access later.
In Python you don't use getters or setters or properties just for the fun of it. You first just use attributes and then later, only if needed, eventually migrate to a property without having to change the code using your classes.
There is indeed a lot of code with extension .py that uses getters and setters and inheritance and pointless classes everywhere where e.g. a simple tuple would do, but it's code from people writing in C++ or Java using Python.
That's not Python code.
Using properties lets you begin with normal attribute accesses and then back them up with getters and setters afterwards as necessary.
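As a hedged illustration of that migration (the Circle class here is invented for the example): start with a plain attribute, then later back it with a property, without touching any calling code:
# Version 1: a plain attribute is enough at first.
class Circle(object):
    def __init__(self, radius):
        self.radius = radius

# Version 2: validation is needed later; callers still write c.radius = 2.5.
class Circle(object):
    def __init__(self, radius):
        self.radius = radius      # goes through the property setter below

    @property
    def radius(self):
        return self._radius

    @radius.setter
    def radius(self, value):
        if value <= 0:
            raise ValueError("radius must be positive")
        self._radius = value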
The short answer is: properties wins hands down. Always.
There is sometimes a need for getters and setters, but even then, I would "hide" them from the outside world. There are plenty of ways to do this in Python (getattr, setattr, __getattribute__, etc.), but a very concise and clean one is:
def set_email(self, value):
    if '@' not in value:
        raise Exception("This doesn't look like an email address.")
    self._email = value

def get_email(self):
    return self._email

email = property(get_email, set_email)
Here's a brief article that introduces the topic of getters and setters in Python.
[TL;DR? You can skip to the end for a code example.]
I actually prefer to use a different idiom, which is a little involved for using as a one off, but is nice if you have a more complex use case.
A bit of background first.
Properties are useful in that they allow us to handle both setting and getting values in a programmatic way but still allow attributes to be accessed as attributes. We can turn 'gets' into 'computations' (essentially) and we can turn 'sets' into 'events'. So let's say we have the following class, which I've coded with Java-like getters and setters.
class Example(object):
def __init__(self, x=None, y=None):
self.x = x
self.y = y
def getX(self):
return self.x or self.defaultX()
def getY(self):
return self.y or self.defaultY()
def setX(self, x):
self.x = x
def setY(self, y):
self.y = y
def defaultX(self):
return someDefaultComputationForX()
def defaultY(self):
return someDefaultComputationForY()
You may be wondering why I didn't call defaultX and defaultY in the object's __init__ method. The reason is that for our case I want to assume that the someDefaultComputation methods return values that vary over time, say a timestamp, and whenever x (or y) is not set (where, for the purpose of this example, "not set" means "set to None") I want the value of x's (or y's) default computation.
So this is lame for a number of reasons described above. I'll rewrite it using properties:
class Example(object):
    def __init__(self, x=None, y=None):
        self._x = x
        self._y = y

    @property
    def x(self):
        return self._x or self.defaultX()

    @x.setter
    def x(self, value):
        self._x = value

    @property
    def y(self):
        return self._y or self.defaultY()

    @y.setter
    def y(self, value):
        self._y = value

    # default{XY} as before.
What have we gained? We've gained the ability to refer to these attributes as attributes even though, behind the scenes, we end up running methods.
Of course the real power of properties is that we generally want these methods to do something in addition to just getting and setting values (otherwise there is no point in using properties). I did this in my getter example. We are basically running a function body to pick up a default whenever the value isn't set. This is a very common pattern.
But what are we losing, and what can't we do?
The main annoyance, in my view, is that if you define a getter (as we do here) you also have to define a setter.[1] That's extra noise that clutters the code.
Another annoyance is that we still have to initialize the x and y values in __init__. (Well, of course we could add them using setattr() but that is more extra code.)
Third, unlike in the Java-like example, getters cannot accept other parameters. Now I can hear you saying already, well, if it's taking parameters it's not a getter! In an official sense, that is true. But in a practical sense there is no reason we shouldn't be able to parameterize a named attribute -- like x -- and set its value for some specific parameters.
It'd be nice if we could do something like:
e.x[a,b,c] = 10
e.x[d,e,f] = 20
for example. The closest we can get is to override the assignment to imply some special semantics:
e.x = [a,b,c,10]
e.x = [d,e,f,30]
and of course ensure that our setter knows how to extract the first three values as a key to a dictionary and set its value to a number or something.
But even if we did that, we still couldn't support it with properties, because there is no way to get the value: we can't pass parameters to the getter at all. So we'd have had to return everything, introducing an asymmetry.
The Java-style getter/setter does let us handle this, but we're back to needing getter/setters.
In my mind, what we really want is something that captures the following requirements:
Users define just one method for a given attribute and can indicate there
whether the attribute is read-only or read-write. Properties fail this test
if the attribute is writable.
There is no need for the user to define an extra variable underlying the function, so we don't need the __init__ or setattr in the code. The variable just exists by virtue of the fact that we've created this new-style attribute.
Any default code for the attribute executes in the method body itself.
We can set the attribute as an attribute and reference it as an attribute.
We can parameterize the attribute.
In terms of code, we want a way to write:
def x(self, *args):
return defaultX()
and be able to then do:
print e.x -> The default at time T0
e.x = 1
print e.x -> 1
e.x = None
print e.x -> The default at time T1
and so forth.
We also want a way to do this for the special case of a parameterizable attribute, but still allow the default assign case to work. You'll see how I tackled this below.
Now to the point (yay! the point!). The solution I came up with for this is as follows.
We create a new object to replace the notion of a property. The object is intended to store the value of a variable set to it, but also maintains a handle on code that knows how to calculate a default. Its job is to store the set value or to run the method if that value is not set.
Let's call it an UberProperty.
class UberProperty(object):
def __init__(self, method):
self.method = method
self.value = None
self.isSet = False
def setValue(self, value):
self.value = value
self.isSet = True
def clearValue(self):
self.value = None
self.isSet = False
I assume method here is a class method, value is the value of the UberProperty, and I have added isSet because None may be a real value and this allows us a clean way to declare there really is "no value". Another way is a sentinel of some sort.
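A hedged aside on the sentinel alternative just mentioned (not part of the original design): a unique module-level object can stand in for "no value", which keeps None available as a legitimate value without a separate isSet flag:
_UNSET = object()   # unique sentinel; never equal to any user-supplied value

class UberProperty(object):
    def __init__(self, method):
        self.method = method
        self.value = _UNSET

    def isSet(self):
        return self.value is not _UNSET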
This basically gives us an object that can do what we want, but how do we actually put it on our class? Well, properties use decorators; why can't we? Let's see how it might look (from here on I'm going to stick to using just a single 'attribute', x).
class Example(object):
    @uberProperty
    def x(self):
        return defaultX()
This doesn't actually work yet, of course. We have to implement uberProperty and
make sure it handles both gets and sets.
Let's start with gets.
My first attempt was to simply create a new UberProperty object and return it:
def uberProperty(f):
return UberProperty(f)
I quickly discovered, of course, that this doesn't work: Python never binds the callable to the object, and I need the object in order to call the function. Even creating the decorator in the class doesn't work, as although now we have the class, we still don't have an object to work with.
So we're going to need to be able to do more here. We do know that a method need only be represented the one time, so let's go ahead and keep our decorator, but modify UberProperty to only store the method reference:
class UberProperty(object):
def __init__(self, method):
self.method = method
It is also not callable, so at the moment nothing is working.
How do we complete the picture? Well, what do we end up with when we create the example class using our new decorator:
class Example(object):
    @uberProperty
    def x(self):
        return defaultX()

print Example.x       # <__main__.UberProperty object at 0x10e1fb8d0>
print Example().x     # <__main__.UberProperty object at 0x10e1fb8d0>
in both cases we get back the UberProperty which of course is not a callable, so this isn't of much use.
What we need is some way to dynamically bind the UberProperty instance created by the decorator after the class has been created to an object of the class before that object has been returned to that user for use. Um, yeah, that's an __init__ call, dude.
Let's write up what we want our final result to be first. We're binding an UberProperty to an instance, so an obvious thing to return would be a BoundUberProperty. This is where we'll actually maintain state for the x attribute.
class BoundUberProperty(object):
def __init__(self, obj, uberProperty):
self.obj = obj
self.uberProperty = uberProperty
self.isSet = False
def setValue(self, value):
self.value = value
self.isSet = True
def getValue(self):
return self.value if self.isSet else self.uberProperty.method(self.obj)
def clearValue(self):
del self.value
self.isSet = False
Now we have the representation; how do we get these onto an object? There are a few approaches, but the easiest one to explain just uses the __init__ method to do that mapping. By the time __init__ is called our decorators have run, so we just need to look through the object's attributes and update any whose value is of type UberProperty.
Now, uber-properties are cool and we'll probably want to use them a lot, so it makes sense to just create a base class that does this for all subclasses. I think you know what the base class is going to be called.
class UberObject(object):
def __init__(self):
for k in dir(self):
v = getattr(self, k)
if isinstance(v, UberProperty):
v = BoundUberProperty(self, v)
setattr(self, k, v)
We add this, change our example to inherit from UberObject, and ...
e = Example()
print e.x -> <__main__.BoundUberProperty object at 0x104604c90>
After modifying x to be:
@uberProperty
def x(self):
    return datetime.datetime.now()
We can run a simple test:
print e.x.getValue()
print e.x.getValue()
e.x.setValue(datetime.date(2013, 5, 31))
print e.x.getValue()
e.x.clearValue()
print e.x.getValue()
And we get the output we wanted:
2013-05-31 00:05:13.985813
2013-05-31 00:05:13.986290
2013-05-31
2013-05-31 00:05:13.986310
(Gee, I'm working late.)
Note that I have used getValue, setValue, and clearValue here. This is because I haven't yet linked in the means to have these automatically returned.
But I think this is a good place to stop for now, because I'm getting tired. You can also see that the core functionality we wanted is in place; the rest is window dressing. Important usability window dressing, but that can wait until I have a chance to update the post.
I'll finish up the example in the next posting by addressing these things:
We need to make sure UberObject's __init__ is always called by subclasses.
So we either force it to be called somewhere or we prevent it from being implemented.
We'll see how to do this with a metaclass.
We need to make sure we handle the common case where someone 'aliases'
a function to something else, such as:
class Example(object):
    @uberProperty
    def x(self):
        ...
    y = x
We need e.x to return e.x.getValue() by default.
What we'll actually see is this is one area where the model fails.
It turns out we'll always need to use a function call to get the value.
But we can make it look like a regular function call and avoid having to use e.x.getValue(). (Doing this one is obvious, if you haven't already figured it out.)
We need to support setting e.x directly, as in e.x = <newvalue>. We can do this in the parent class too, but we'll need to update our __init__ code to handle it.
Finally, we'll add parameterized attributes. It should be pretty obvious how we'll do this, too.
Here's the code as it exists up to now:
import datetime
class UberObject(object):
def uberSetter(self, value):
print 'setting'
def uberGetter(self):
return self
def __init__(self):
for k in dir(self):
v = getattr(self, k)
if isinstance(v, UberProperty):
v = BoundUberProperty(self, v)
setattr(self, k, v)
class UberProperty(object):
def __init__(self, method):
self.method = method
class BoundUberProperty(object):
def __init__(self, obj, uberProperty):
self.obj = obj
self.uberProperty = uberProperty
self.isSet = False
def setValue(self, value):
self.value = value
self.isSet = True
def getValue(self):
return self.value if self.isSet else self.uberProperty.method(self.obj)
def clearValue(self):
del self.value
self.isSet = False
def uberProperty(f):
return UberProperty(f)
class Example(UberObject):
    @uberProperty
    def x(self):
        return datetime.datetime.now()
[1] I may be behind on whether this is still the case.
I think both have their place. One issue with using @property is that it is hard to extend the behaviour of getters or setters in subclasses using standard class mechanisms. The problem is that the actual getter/setter functions are hidden in the property.
You can actually get hold of the functions, e.g. with
class C(object):
    _p = 1

    @property
    def p(self):
        return self._p

    @p.setter
    def p(self, val):
        self._p = val
you can access the getter and setter functions as C.p.fget and C.p.fset, but you can't easily use the normal method inheritance (e.g. super) facilities to extend them. After some digging into the intricacies of super, you can indeed use super in this way:
# Using super():
class D(C):
    # Cannot use super(D,D) here to define the property
    # since D is not yet defined in this scope.
    @property
    def p(self):
        return super(D,D).p.fget(self)

    @p.setter
    def p(self, val):
        print 'Implement extra functionality here for D'
        super(D,D).p.fset(self, val)

# Using a direct reference to C
class E(C):
    p = C.p

    @p.setter
    def p(self, val):
        print 'Implement extra functionality here for E'
        C.p.fset(self, val)
Using super() is, however, quite clunky, since the property has to be redefined, and you have to use the slightly counter-intuitive super(cls,cls) mechanism to get an unbound copy of p.
Using properties is to me more intuitive and fits better into most code.
Comparing
o.x = 5
ox = o.x
vs.
o.setX(5)
ox = o.getX()
it is quite obvious to me which is easier to read. Properties also make it much easier to use private variables.
I feel like properties are about letting you get the overhead of writing getters and setters only when you actually need them.
Java programming culture strongly advises never giving direct access to attributes, and instead going through getters and setters, and only those which are actually needed.
It's a bit verbose to always write these obvious pieces of code, and notice that 70% of the time they are never replaced by some non-trivial logic.
In Python, people actually care about that kind of overhead, so you can embrace the following practice:
Do not use getters and setters at first, while they are not needed.
Use @property to implement them later without changing the syntax of the rest of your code.
I would prefer to use neither in most cases. The problem with properties is that they make the class less transparent. Especially, this is an issue if you were to raise an exception from a setter. For example, if you have an Account.email property:
class Account(object):
    @property
    def email(self):
        return self._email

    @email.setter
    def email(self, value):
        if '@' not in value:
            raise ValueError('Invalid email address.')
        self._email = value
then the user of the class does not expect that assigning a value to the property could cause an exception:
a = Account()
a.email = 'badaddress'
--> ValueError: Invalid email address.
As a result, the exception may go unhandled and either propagate too high up the call chain to be handled properly, or result in a very unhelpful traceback being presented to the program's user (which is sadly too common in the world of Python and Java).
I would also avoid using getters and setters:
because defining them for all properties in advance is very time consuming,
makes the amount of code unnecessarily longer, which makes understanding and maintaining the code more difficult,
if you were to define them for properties only as needed, the interface of the class would change, hurting all users of the class
Instead of properties and getters/setters I prefer doing the complex logic in well defined places such as in a validation method:
class Account(object):
    ...
    def validate(self):
        if '@' not in self.email:
            raise ValueError('Invalid email address.')
or a similar Account.save method.
Note that I am not trying to say that there are no cases when properties are useful, only that you may be better off if you can make your classes simple and transparent enough that you don't need them.
I am surprised that nobody has mentioned that properties are bound methods of a descriptor class. Adam Donohue and NeilenMarais get at exactly this idea in their posts: getters and setters are functions and can be used to:
validate
alter data
duck type (coerce type to another type)
This presents a smart way to hide implementation details and code cruft like regular expressions, type casts, try .. except blocks, assertions, or computed values.
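As a hedged illustration (the Customer class and ZIP-code rule are invented for the example), here is a property whose setter validates, coerces, and normalizes incoming data behind a plain attribute assignment:
import re

class Customer(object):
    _ZIP_RE = re.compile(r"^\d{5}$")

    @property
    def zip_code(self):
        return self._zip_code

    @zip_code.setter
    def zip_code(self, value):
        value = str(value).zfill(5)        # coerce ints like 2138 to "02138"
        if not self._ZIP_RE.match(value):
            raise ValueError("not a valid ZIP code: %r" % value)
        self._zip_code = value

customer = Customer()
customer.zip_code = 2138                   # looks like plain attribute access
print(customer.zip_code)                   # "02138"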
In general doing CRUD on an object may often be fairly mundane but consider the example of data that will be persisted to a relational database. ORM's can hide implementation details of particular SQL vernaculars in the methods bound to fget, fset, fdel defined in a property class that will manage the awful if .. elif .. else ladders that are so ugly in OO code -- exposing the simple and elegant self.variable = something and obviate the details for the developer using the ORM.
If one thinks of properties only as some dreary vestige of a Bondage and Discipline language (i.e. Java) they are missing the point of descriptors.
In complex projects I prefer using read-only properties (or getters) with explicit setter function:
class MyClass(object):
    ...
    @property
    def my_attr(self):
        ...

    def set_my_attr(self, value):
        ...
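A minimal sketch of that pattern with the elided bodies filled in for illustration (the attribute name is just the placeholder from above):
class MyClass(object):
    def __init__(self, my_attr=0):
        self._my_attr = my_attr

    @property
    def my_attr(self):
        # read-only: "obj.my_attr = 4." raises AttributeError
        return self._my_attr

    def set_my_attr(self, value):
        # every write goes through an explicit, greppable call site
        self._my_attr = value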
In long-living projects, debugging and refactoring take more time than writing the code itself. There are several downsides to using @property.setter that make debugging even harder:
1) Python allows creating new attributes on an existing object. This makes the following misprint very hard to track:
my_object.my_atttr = 4.
If your object is a complicated algorithm, then you will spend quite some time trying to find out why it doesn't converge (note the extra 't' in the line above).
2) A setter might sometimes evolve into a complicated and slow method (e.g. one hitting a database). It would be quite hard for another developer to figure out why the following function is very slow. They might spend a lot of time profiling the do_something() method, while my_object.my_attr = 4. is actually the cause of the slowdown:
def slow_function(my_object):
my_object.my_attr = 4.
my_object.do_something()
Both @property and traditional getters and setters have their advantages. It depends on your use case.
Advantages of @property
You don't have to change the interface while changing the implementation of data access. When your project is small, you probably want to use direct attribute access to access a class member. For example, let's say you have an object foo of type Foo, which has a member num. Then you can simply get this member with num = foo.num. As your project grows, you may feel like there need to be some checks or debugging on that simple attribute access. Then you can do that with a @property within the class. The data access interface remains the same, so there is no need to modify client code.
Cited from PEP-8:
For simple public data attributes, it is best to expose just the attribute name, without complicated accessor/mutator methods. Keep in mind that Python provides an easy path to future enhancement, should you find that a simple data attribute needs to grow functional behavior. In that case, use properties to hide functional implementation behind simple data attribute access syntax.
Using @property for data access in Python is regarded as Pythonic:
It can strengthen your self-identification as a Python (not Java) programmer.
It can help your job interview if your interviewer thinks Java-style getters and setters are anti-patterns.
Advantages of traditional getters and setters
Traditional getters and setters allow for more complicated data access than simple attribute access. For example, when you are setting a class member, sometimes you need a flag indicating whether you would like to force this operation even if something doesn't look perfect. While it is not obvious how to augment a direct member access like foo.num = num, you can easily augment a traditional setter with an additional force parameter:
class Foo:
    def set_num(self, num, force=False):
        ...
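A hedged sketch of what that force flag might look like in full (the validation rule here is invented for illustration):
class Foo:
    def __init__(self):
        self._num = 0

    def get_num(self):
        return self._num

    def set_num(self, num, force=False):
        if num < 0 and not force:
            raise ValueError("negative values require force=True")
        self._num = num

foo = Foo()
foo.set_num(5)
foo.set_num(-1, force=True)   # deliberately bypass the check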
Traditional getters and setters make it explicit that a class member access is through a method. This means:
What you get as the result may not be the same as what is exactly stored within that class.
Even if the access looks like a simple attribute access, the performance can vary greatly from that.
Unless the users of your class expect a @property hiding behind every attribute access statement, making such things explicit can help minimize their surprise.
As mentioned by @NeilenMarais and in this post, extending traditional getters and setters in subclasses is easier than extending properties.
Traditional getters and setters have been widely used for a long time in different languages. If you have people from different backgrounds on your team, they will look more familiar than @property. Also, as your project grows, if you need to migrate from Python to another language that doesn't have @property, using traditional getters and setters will make the migration smoother.
Caveats
Neither @property nor traditional getters and setters makes the class member private, even if you use a double underscore before its name:
class Foo:
    def __init__(self):
        self.__num = 0

    @property
    def num(self):
        return self.__num

    @num.setter
    def num(self, num):
        self.__num = num

    def get_num(self):
        return self.__num

    def set_num(self, num):
        self.__num = num
foo = Foo()
print(foo.num) # output: 0
print(foo.get_num()) # output: 0
print(foo._Foo__num) # output: 0
Here is an excerpt from "Effective Python: 90 Specific Ways to Write Better Python" (an amazing book; I highly recommend it).
Things to Remember
✦ Define new class interfaces using simple public attributes and avoid defining setter and getter methods.
✦ Use @property to define special behavior when attributes are accessed on your objects, if necessary.
✦ Follow the rule of least surprise and avoid odd side effects in your @property methods.
✦ Ensure that @property methods are fast; for slow or complex work—especially involving I/O or causing side effects—use normal methods instead.
One advanced but common use of @property is transitioning what was once a simple numerical attribute into an on-the-fly calculation. This is extremely helpful because it lets you migrate all existing usage of a class to have new behaviors without requiring any of the call sites to be rewritten (which is especially important if there's calling code that you don't control). @property also provides an important stopgap for improving interfaces over time.
I especially like @property because it lets you make incremental progress toward a better data model over time.
@property is a tool to help you address problems you'll come across in real-world code. Don't overuse it. When you find yourself repeatedly extending @property methods, it's probably time to refactor your class instead of further paving over your code's poor design.
✦ Use @property to give existing instance attributes new functionality.
✦ Make incremental progress toward better data models by using @property.
✦ Consider refactoring a class and all call sites when you find yourself using @property too heavily.
A little example will help clarify my question:
I define two classes: Security and Universe. I would like Universe to behave as a list of Security objects.
Here is my example code:
class Security(object):
def __init__(self, name):
self.name = name
class Universe(object):
def __init__(self, securities):
self.securities = securities
s1 = Security('name1')
s2 = Security('name2')
u = Universe([s1, s2])
I would like my Universe class to be able to use the usual list features such as enumerate(), len(), __getitem__(), etc.:
enumerate(u)
len(u)
u[0]
So I defined my Class as:
class Universe(list, object):
def __init__(self, securities):
super(Universe, self).__init__(iter(securities))
self.securities = securities
It seems to work, but is it the appropriate, Pythonic way to do it?
[EDIT]
The above solution does not work as I wish when I subset the list:
>>> s1 = Security('name1')
>>> s2 = Security('name2')
>>> s3 = Security('name3')
>>> u = Universe([s1, s2, s3])
>>> sub_u = u[0:2]
>>> type(u)
<class '__main__.Universe'>
>>> type(sub_u)
<type 'list'>
I would like my variable sub_u to remain of type Universe.
You don't have to actually be a list to use those features. That's the whole point of duck typing. Anything that defines __getitem__(self, i) automatically handles x[i], for i in x, iter(x), enumerate(x), and various other things. If you also define __len__(self), then len(x), list(x), etc. work too. Or you can define __iter__ instead of __getitem__. Or both. It depends on exactly how list-y you want to be.
The documentation on Python's special methods explains what each one is for, and organizes them pretty nicely.
For example:
class FakeList(object):
def __getitem__(self, i):
return -i
fl = FakeList()
print(fl[20])
for i, e in enumerate(fl):
print(i)
if e < -2: break
No list in sight.
If you actually have a real list and want to represent its data as your own, there are two ways to do that: delegation, and inheritance. Both work, and both are appropriate in different cases.
If your object really is a list plus some extra stuff, use inheritance. If you find yourself stepping on the base class's behavior, you may want to switch to delegation anyway, but at least start with inheritance. This is easy:
class Universe(list): # don't add object also, just list
def __init__(self, securities):
super(Universe, self).__init__(iter(securities))
# don't also store `securities`--you already have `self`!
You may also want to override __new__, which allows you to get the iter(securities) into the list at creation time rather than initialization time, but this doesn't usually matter for a list. (It's more important for immutable types like str.)
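For instance, here is a hedged sketch (not from the original answer) of why __new__ matters for immutable types:
class UpperStr(str):
    def __new__(cls, value):
        # str is immutable, so the value must be fixed here;
        # __init__ could not change it afterwards
        return super(UpperStr, cls).__new__(cls, str(value).upper())

print(UpperStr("hello"))   # HELLO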
If the fact that your object owns a list rather than being one is inherent in its design, use delegation.
The simplest way to delegate is explicitly. Define the exact same methods you'd define to fake being a list, and make them all just forward to the list you own:
class Universe(object):
def __init__(self, securities):
self.securities = list(securities)
def __getitem__(self, index):
        return self.securities[index]  # or self.securities.__getitem__(index) if you prefer
# ... etc.
You can also do delegation through __getattr__:
class Universe(object):
def __init__(self, securities):
self.securities = list(securities)
# no __getitem__, __len__, etc.
def __getattr__(self, name):
if name in ('__getitem__', '__len__',
# and so on
):
return getattr(self.securities, name)
raise AttributeError("'{}' object has no attribute '{}'"
.format(self.__class__.__name__), name)
Note that many of list's methods will return a new list. If you want them to return a new Universe instead, you need to wrap those methods. But keep in mind that some of those methods are binary operators—for example, should a + b return a Universe only if a is one, or only if both are, or if either are?
Also, __getitem__ is a little tricky, because they can return either a list or a single object, and you only want to wrap the former in a Universe. You can do that by checking the return value for isinstance(ret, list), or by checking the index for isinstance(index, slice); which one is appropriate depends on whether you can have lists as element of a Universe, and whether they should be treated as a list or as a Universe when extracted. Plus, if you're using inheritance, in Python 2, you also need to wrap the deprecated __getslice__ and friends, because list does support them (although __getslice__ always returns a sub-list, not an element, so it's pretty easy).
Once you decide those things, the implementations are easy, if a bit tedious. Here are examples for all three versions, using __getitem__ because it's tricky, and the one you asked about in a comment. I'll show a way to use generic helpers for wrapping, even though in this case you may only need it for one method, so it may be overkill.
Inheritance:
class Universe(list): # don't add object also, just list
    @classmethod
    def _wrap_if_needed(cls, value):
        if isinstance(value, list):
            return cls(value)
        else:
            return value

    def __getitem__(self, index):
        ret = super(Universe, self).__getitem__(index)
        return self._wrap_if_needed(ret)
Explicit delegation:
class Universe(object):
# same _wrap_if_needed
def __getitem__(self, index):
ret = self.securities.__getitem__(index)
return self._wrap_if_needed(ret)
Dynamic delegation:
import functools

class Universe(object):
    # same _wrap_if_needed

    @classmethod
    def _wrap_func(cls, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return cls._wrap_if_needed(func(*args, **kwargs))
        return wrapper

    def __getattr__(self, name):
        if name in ('__getitem__',):
            return self._wrap_func(getattr(self.securities, name))
        elif name in ('__len__',
                      # and so on
                      ):
            return getattr(self.securities, name)
        raise AttributeError("'{}' object has no attribute '{}'"
                             .format(self.__class__.__name__, name))
As I said, this may be overkill in this case, especially for the __getattr__ version. If you just want to override one method, like __getitem__, and delegate everything else, you can always define __getitem__ explicitly, and let __getattr__ handle everything else.
If you find yourself doing this kind of wrapping a lot, you can write a function that generates wrapper classes, or a class decorator that lets you write skeleton wrappers and fills in the details, etc. Because the details depend on your use case (all those issues I mentioned above that can go one way or the other), there's no one-size-fits-all library that just magically does what you want, but there are a number of recipes on ActiveState that show more complete details—and there are even a few wrappers in the standard library source.
That is a reasonable way to do it, although you don't need to inherit from both list and object. list alone is enough. Also, if your class is a list, you don't need to store self.securities; it will be stored as the contents of the list.
However, depending on what you want to use your class for, you may find it easier to define a class that stores a list internally (as you were storing self.securities), and then define methods on your class that (sometimes) pass through to the methods of this stored list, instead of inheriting from list. The Python builtin types don't define a rigorous interface in terms of which methods depend on which other ones (e.g., whether append depends on insert), so you can run into confusing behavior if you try to do any nontrivial manipulations of the contents of your list-class.
Edit: As you discovered, any operation that returns a new list falls into this category. If you subclass list without overriding its methods, then when you call methods on your object (explicitly or implicitly), the underlying list methods will be called. These methods are hardcoded to return a plain Python list and do not check the actual class of the object, so they will return a plain Python list.