I have recently started trying to use the newer style of classes in Python (those derived from object). As an exercise to familiarise myself with them I am trying to define a class which has a number of class instances as attributes, with each of these class instances describing a different type of data, e.g. 1d lists, 2d arrays, scalars etc. Essentially I wish to be able to write
some_class.data_type.some_variable
where data_type is a class instance describing a collection of variables. Below is my first attempt at implementing this, using just a profiles_1d instance and rather generic names:
class profiles_1d(object):
    def __init__(self, x, y1=None, y2=None, y3=None):
        self.x = x
        self.y1 = y1
        self.y2 = y2
        self.y3 = y3

class collection(object):
    def __init__(self):
        self._profiles_1d = None

    def get_profiles(self):
        return self._profiles_1d

    def set_profiles(self, x, *args, **kwargs):
        self._profiles_1d = profiles_1d(x, *args, **kwargs)

    def del_profiles(self):
        self._profiles_1d = None

    profiles1d = property(fget=get_profiles, fset=set_profiles, fdel=del_profiles,
                          doc="One dimensional profiles")
Is the above code roughly an appropriate way of tackling this problem? The examples I have seen of using property just set the value of some variable, whereas here I require my set method to initialise an instance of some class. If not, any suggestions of better ways to implement this would be greatly appreciated.
In addition, is the way I am defining my set method ok? Generally the set method, as far as I understand, defines what to do when the user types, in this example,
collection.profiles1d = ...
The only way I can correctly set the attributes of the profiles_1d instance with the above code is to type collection.set_profiles([...], y1=[...], ...), but I think that I shouldn't be directly calling this method. Ideally I would want to type collection.profiles1d = ([...], y1=[...], ...): is this correct/possible?
Finally, I have seen decorators mentioned a lot with respect to the new style of classes, but this is something I know very little about. Is the use of decorators appropriate here? Is this something I should know more about for this problem?
First, it's good you're learning new-style classes. They've got lots of advantages.
The modern way to make properties in Python is:
class Collection(object):
    def __init__(self):
        self._profiles_1d = None

    @property
    def profiles(self):
        """One dimensional profiles"""
        return self._profiles_1d

    @profiles.setter
    def profiles(self, argtuple):
        args, kwargs = argtuple
        self._profiles_1d = profiles_1d(*args, **kwargs)

    @profiles.deleter
    def profiles(self):
        self._profiles_1d = None
then set profiles by doing
collection = Collection()
collection.profiles = (arg1, arg2, arg3), {'kwarg1':val1, 'kwarg2':val2}
Notice that all three methods have the same name.
This is not normally done, though; more commonly you would either have callers pass the attributes to Collection's constructor, or have them create the profiles_1d themselves and then assign it with collection.profiles = myprofiles1d (or pass it to the constructor).
When you want the attribute to manage access to itself instead of the class managing access to the attribute, make the attribute a class with a descriptor. Do this if, unlike in the property example above, you actually want the data stored inside the attribute (instead of in another, faux-private instance variable). It is also useful when you are going to use the same property over and over again: make it a descriptor and you don't need to write the code multiple times or use a base class.
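As an illustration of that idea, here is a minimal sketch (the class names are invented) of a descriptor that stores its data itself, keyed per owning instance, instead of in a faux-private attribute on the owner:

from weakref import WeakKeyDictionary

class ManagedAttribute(object):
    """Reusable descriptor that keeps its data itself, one value per owning instance."""
    def __init__(self, doc=None):
        self._values = WeakKeyDictionary()   # instance -> stored value
        self.__doc__ = doc

    def __get__(self, instance, owner):
        if instance is None:                 # accessed on the class itself
            return self
        return self._values.get(instance)

    def __set__(self, instance, value):
        self._values[instance] = value

    def __delete__(self, instance):
        self._values.pop(instance, None)

class Collection(object):
    profiles = ManagedAttribute(doc="One dimensional profiles")

Every Collection instance then shares the single descriptor object, which hands each instance its own value.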
I actually like the page by @S.Lott -- Building Skills in Python's Attributes, Properties and Descriptors.
When creating propertys (or other descriptors) that need to call other instance methods the naming convention is to prepend an _ to those methods; so your names above would be _get_profiles, _set_profiles, and _del_profiles.
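Applied to the code in the question, that convention would look roughly like this (a sketch; same behaviour, only the helper names change, and the setter takes a single tuple because a property setter only ever receives one value):

class Collection(object):
    def __init__(self):
        self._profiles_1d = None

    def _get_profiles(self):
        return self._profiles_1d

    def _set_profiles(self, value):
        self._profiles_1d = profiles_1d(*value)

    def _del_profiles(self):
        self._profiles_1d = None

    profiles1d = property(fget=_get_profiles, fset=_set_profiles, fdel=_del_profiles,
                          doc="One dimensional profiles")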
In Python 2.6+ each property is also a decorator, so you don't have to create the (otherwise useless) _name methods:
@property
def test(self):
    return self._test

@test.setter
def test(self, newvalue):
    # validate newvalue if necessary
    self._test = newvalue

@test.deleter
def test(self):
    del self._test
It looks like your code is trying to set profiles on the class instead of on instances -- if so, properties on the class won't work, as Collection.profiles would be overridden with a profiles_1d object, clobbering the property... if this is really what you want, you'll have to make a metaclass and put the property there instead.
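For completeness, a hedged sketch of that metaclass route (only needed if the property really must live on the class itself; the names are invented):

class CollectionMeta(type):
    # a property on the metaclass behaves like a class-level property
    @property
    def profiles(cls):
        return cls._profiles_1d

class Collection(object):
    __metaclass__ = CollectionMeta   # Python 2; in Python 3: class Collection(metaclass=CollectionMeta)
    _profiles_1d = None

# Collection.profiles now goes through the metaclass property instead of clobbering it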
Hopefully you are talking about instances, so the class would look like:
class Collection(object): # notice the capital C in Collection
    def __init__(self):
        self._profiles_1d = None

    @property
    def profiles1d(self):
        "One dimensional profiles"
        return self._profiles_1d

    @profiles1d.setter
    def profiles1d(self, value):
        self._profiles_1d = profiles_1d(*value)

    @profiles1d.deleter
    def profiles1d(self):
        del self._profiles_1d
and then you would do something like:
collection = Collection()
collection.profiles1d = x, y1, y2, y3
A couple of things to note: the setter method gets called with only two arguments -- self and the new value (which is why you were having to call set_profiles manually); and when doing an assignment, keyword naming is not an option (that only works in function calls, which an assignment is not). If it makes sense for you, you can get fancy and do something like:
collection.profiles1d = (x, dict(y1=y1, y2=y2, y3=y3))
and then change the setter to:
@profiles1d.setter
def profiles1d(self, value):
    x, y = value
    self._profiles_1d = profiles_1d(x, **y)
which is still fairly readable (although I prefer the x, y1, y2, y3 version myself).
I tried writing a decorator as such (going off memory, excuse any problems in code):
def required(fn):
    def wrapped(self):
        self.required_attributes += [fn.__name__]
        return fn(self)
    return wrapped
and I used this to decorate @property attributes in classes, e.g.:
@property
@required
def some_property(self):
    return self._some_property
...so that I could do something like this:
def validate_required_attributes(instance):
    for attribute in instance.required_attributes:
        if not hasattr(instance, attribute):
            raise ValueError(f"Required attribute {attribute} was not set!")
Now I forgot that this wouldn't work because, in order for required_attributes to be updated with the name of the property, I would have to retrieve the property first. So in essence, when I write __init__ in the class, I can just access self.propertyname to add it... but this solution is not nice at all; I might as well create a list of required attribute names in __init__.
From what I know, the decorator is applied when the class is defined, so I wouldn't be able to modify required_attributes before the wrapped function is defined. Is there another way I can make this work? I just want a nice, elegant solution.
Thanks!
I think the attrs library does what you want. You can define a class like this, where x and y are required and z is optional.
from attr import attrs, attrib

@attrs
class MyClass:
    x = attrib()
    y = attrib()
    z = attrib(default=0)
Testing it out:
>>> instance = MyClass(1, 2)
>>> print(instance)
MyClass(x=1, y=2, z=0)
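Since x and y have no default, leaving one out fails at construction time, which is essentially the "required attribute" check you were after (the exact wording of the error depends on the Python/attrs version):
>>> MyClass(1)
Traceback (most recent call last):
  ...
TypeError: __init__() missing 1 required positional argument: 'y'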
Here's my take at doing it with a class decorator and a method decorator. There's probably a nicer way of doing this using metaclasses (nice being the API not the implementation ;)).
def requiredproperty(f):
    setattr(f, "_required", True)
    return property(f)

def hasrequiredprops(cls):
    props = [x for x in cls.__dict__.items() if isinstance(x[1], property)]
    cls._required_props = {k for k, v in props if v.fget._required}
    return cls

@hasrequiredprops
class A(object):
    def __init__(self):
        self._my_prop = 1

    def validate(self):
        print("required attributes are", ",".join(self._required_props))

    @requiredproperty
    def my_prop(self):
        return self._my_prop
This should make validation work without requiring the caller to touch the property first:
>>> a = A()
>>> a.validate()
required attributes are my_prop
>>> a.my_prop
1
The class decorator is required to make sure the class has the required property names during instantiation. The requiredproperty function is just a way to mark the properties as required.
That being said, I'm not completely sure what you are trying to achieve here. Perhaps validation of the instance attribute values that the property should return?
I have the following problem and I need advice on how best to solve it technically in Python. As I am new to programming I would like some advice.
I will have the following objects, and they should store something. Here is an example:
object 1: cash dividends (they will have the following properties)
exdate (will store a list of dates)
recorddate (will store a list of dates)
paydate (will store a list of dates)
ISIN (will store a list of text)
object 2: stocksplits (they will have the following properties)
stocksplitratio (will be some ratio)
exdate (list of dates)
...
I have tried to solve it like this:
class cashDividends(object):
    def __init__(self, _gross, _net, _ISIN, _paydate, _exdate, _recorddate, _frequency, _type, _announceddate, _currency):
        self.gross = _gross
        self.net = _net
        self.ISIN = _ISIN
        self.paydate = _paydate
        self.exdate = _exdate
        self.recorddate = _recorddate
        self.frequency = _frequency
        self.type = _type
        self.announceddate = _announceddate
        self.currency = _currency
So if I have this I would have to create another class named stocksplits and then define an __init__ function again.
However is there a way where I can have one class like "Corporate Actions" and then have stock splits and cashdividends in there ?
Sure you can! In Python you can pass instances of one class to another class.
Here a simple example:
class A():
    def __init__(self):
        self.x = 0

class B():
    def __init__(self):
        self.x = 1

class Container():
    def __init__(self, objects):
        self.x = [obj.x for obj in objects]

a = A()
b = B()
c = Container([a, b])
c.x
[0, 1]
If I understood correctly, what you want is an object that has, as properties, other objects from classes you created?
class CorporateActions(object):
    def __init__(self, aCashDividend, aStockSplit):
        self.cashDividend = aCashDividend
        self.stockSplit = aStockSplit

myCashDividends = CashDividends(...)  # corresponding parameters here
myStockSplit = StockSplit(...)
myCorporateActions = CorporateActions(myCashDividends, myStockSplit)
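Usage then looks like this (a sketch assuming the CashDividends class from the question and a StockSplit class that defines exdate):

myCorporateActions.cashDividend.ISIN     # the list of ISINs
myCorporateActions.stockSplit.exdate     # the list of ex-dates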
Strictly speaking this answer isn't an answer for the final question. However, it is a way to make your life slightly easier.
Consider creating a sort-of template class (I'm using this term loosely; there's no such thing in Python) that does the __init__ work for you. Like this:
class KwargAttrs():
    def __init__(self, **kwargs):
        kwargs.pop('self', None)  # tolerate being handed **locals() from a subclass __init__
        for k, v in kwargs.items():
            setattr(self, k, v)

    def _update(self, **kwargs):
        kwargs.pop('self', None)
        args_dict = {k: (kwargs[k] if k in kwargs else self.__dict__[k]) for k in self.__dict__}
        self.__dict__.update(args_dict)
This class uses every supplied keyword argument as an object attribute. Use it this way:
class CashDividends(KwargAttrs):
    def __init__(self, gross, net, ISIN, paydate, exdate, recorddate, frequency, type, announceddate, currency):
        # save the namespace before it gets polluted
        super().__init__(**locals())

        # work that might pollute the local namespace goes here

        # OPTIONAL: update the argument values in case they were modified:
        super()._update(**locals())
Using a method like this, you don't have to go through the argument list and assign every single object attribute; it happens automatically.
We bookend everything you need to accomplish in the __init__ method with method calls to the parent class via super(). We do this because locals() returns a dict of every variable in the function's current namespace, so you need to 1.) capture that namespace before any other work pollutes it and 2.) update the namespace in case any work changes the argument values.
The call to _update is optional, but the values of the supplied arguments will not be updated if something is done to them after the call to super().__init__() (that is, unless you change the values using setattr(self, 'argname', value), which is not a bad idea).
You can continue using this class like so:
class StockSplits(KwargAttrs):
    def __init__(self, stocksplitratio, gross, net, ISIN, paydate, exdate, recorddate, frequency, type, announceddate, currency):
        super().__init__(**locals())
As mentioned in the other answers you can create a container for our other classes, but you can even do that using this same template class:
class CorporateActions(KwargAttrs):
    def __init__(self, stock_splits, cash_dividends):
        super().__init__(**locals())

ca = CorporateActions(stock_splits=StockSplits(<arguments>), cash_dividends=CashDividends(<arguments>))
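A small usage sketch (the values below are placeholders, not real data):

divs = CashDividends(gross=1.0, net=0.85, ISIN=['XX0000000000'], paydate=[],
                     exdate=[], recorddate=[], frequency='quarterly',
                     type='cash', announceddate=[], currency='USD')
print(divs.gross, divs.currency)   # every argument became an attribute automatically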
Question
How can you extend a python property?
A subclass can extend a superclass's function by calling it in the overridden version and then operating on the result. Here's an example of what I mean when I say "extending a function":
# Extending a function (a tongue-in-cheek example)
class NormalMath(object):
    def __init__(self, number):
        self.number = number

    def add_pi(self):
        n = self.number
        return n + 3.1415

class NewMath(object):
    def add_pi(self):
        # NewMath doesn't know how NormalMath added pi (and shouldn't need to).
        # It just uses the result.
        n = NormalMath.add_pi(self)
        # In NewMath, fractions are considered too hard for our users.
        # We therefore silently convert them to integers.
        return int(n)
Is there an analogous operation to extending functions, but for functions that use the property decorator?
I want to do some additional calculations immediately after getting an expensive-to-compute attribute. I need to keep the attribute's access lazy, and I don't want the user to have to invoke a special routine to make the calculations; basically, I don't want the user to ever know the calculations were made in the first place. However, the attribute must remain a property, since I've got legacy code I need to support.
Maybe this is a job for decorators? If I'm not mistaken, a decorator is a function that wraps another function, and I'm looking to wrap a property with some more calculations and then present it as a property again, which seems like a similar idea... but I can't quite figure it out.
My Specific Problem
I've got a base class LogFile with an expensive-to-construct attribute .dataframe. I've implemented it as a property (with the property decorator), so it won't actually parse the log file until I ask for the dataframe. So far, it works great. I can construct a bunch (100+) of LogFile objects and use cheaper methods to filter and select only the important ones to parse. And whenever I'm using the same LogFile over and over, I only have to parse it the first time I access the dataframe.
Now I need to write a LogFile subclass, SensorLog, that adds some extra columns to the base class's dataframe attribute, but I can't quite figure out the syntax to call the super class's dataframe construction routines (without knowing anything about their internal workings), then operate on the resulting dataframe, and then cache/return it.
# Base Class - rules for parsing/interacting with data.
class LogFile(object):
    def __init__(self, file_name):
        # file name to find the log file
        self.file_name = file_name
        # non-public variable to cache results of parse()
        self._dataframe = None

    def parse(self):
        with open(self.file_name) as infile:
            ...
            ...
            # Complex rules to interpret the file
            ...
            ...
            self._dataframe = pandas.DataFrame(stuff)

    @property
    def dataframe(self):
        """
        Returns the dataframe; parses file if necessary. This works great!
        """
        if self._dataframe is None:
            self.parse()
        return self._dataframe

    @dataframe.setter
    def dataframe(self, value):
        self._dataframe = value

# Sub class - adds more information to data, but doesn't parse;
# must preserve established .dataframe interface
class SensorLog(LogFile):
    def __init__(self, file_name):
        # Call the super's constructor
        LogFile.__init__(self, file_name)
        # SensorLog doesn't actually know about (and doesn't rely on) the ._dataframe cache, so it overrides it just in case.
        self._dataframe = None

    # THIS IS THE PART I CAN'T FIGURE OUT
    # Here's my best guess, but it doesn't quite work:
    @property
    def dataframe(self):
        # use parent class's getter, invoking the hidden parse function and any other operations LogFile might do.
        self._dataframe = LogFile.dataframe.getter()
        # Add additional calculated columns
        self._dataframe['extra_stuff'] = 'hello world!'
        return self._dataframe

    @dataframe.setter
    def dataframe(self, value):
        self._dataframe = value
Now, when these classes are used in an interactive session, the user should be able to interact with either in the same way.
>>> log = LogFile('data.csv')
>>> print log.dataframe
#### DataFrame with 10 columns goes here ####
>>> sensor = SensorLog('data.csv')
>>> print sensor.dataframe
#### DataFrame with 11 columns goes here ####
I have lots of existing code that takes a LogFile instance, which provides a .dataframe attribute, and does something interesting (mostly plotting). I would LOVE to have SensorLog instances present the same interface so they can use the same code. Is it possible to extend the super-class's dataframe getter to take advantage of existing routines? How? Or am I better off doing this a different way?
Thanks for reading that huge wall of text. You are an internet super hero, dear reader. Got any ideas?
You should be calling the superclass properties, not bypassing them via self._dataframe. Here's a generic example:
class A(object):
    def __init__(self):
        self.__prop = None

    @property
    def prop(self):
        return self.__prop

    @prop.setter
    def prop(self, value):
        self.__prop = value

class B(A):
    def __init__(self):
        super(B, self).__init__()

    @property
    def prop(self):
        value = A.prop.fget(self)
        value['extra'] = 'stuff'
        return value

    @prop.setter
    def prop(self, value):
        A.prop.fset(self, value)
And using it:
b = B()
b.prop = dict((('a', 1), ('b', 2)))
print(b.prop)
Outputs:
{'a': 1, 'b': 2, 'extra': 'stuff'}
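Translated back to the classes in the question, the subclass getter calls the parent property's fget the same way (a sketch that assumes the LogFile class as posted):

class SensorLog(LogFile):
    @property
    def dataframe(self):
        # reuse LogFile's lazy parse/cache machinery, then add the extra column
        df = LogFile.dataframe.fget(self)
        df['extra_stuff'] = 'hello world!'
        return df

    @dataframe.setter
    def dataframe(self, value):
        LogFile.dataframe.fset(self, value)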
I would generally recommend placing side-effects in setters instead of getters, like this:
class A(object):
    def __init__(self):
        self.__prop = None

    @property
    def prop(self):
        return self.__prop

    @prop.setter
    def prop(self, value):
        self.__prop = value

class B(A):
    def __init__(self):
        super(B, self).__init__()

    @property
    def prop(self):
        return A.prop.fget(self)

    @prop.setter
    def prop(self, value):
        value['extra'] = 'stuff'
        A.prop.fset(self, value)
Having costly operations within a getter is also generally to be avoided (such as your parse method).
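If the cost of the getter is the main worry, one hedged alternative on Python 3.8+ is functools.cached_property, which runs the computation once on first access and then stores the result on the instance (the _parse name below is a placeholder, not the asker's original method):

from functools import cached_property

class LogFile(object):
    def __init__(self, file_name):
        self.file_name = file_name

    @cached_property
    def dataframe(self):
        # the expensive parse runs only on the first access; the result is cached on the instance
        return self._parse()

    def _parse(self):
        ...  # placeholder for the real parsing logic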
If I understand correctly what you want to do is call the parent's method from the child instance. The usual way to do that is by using the super built-in.
I've taken your tongue-in-cheek example and modified it to use super in order to show you:
class NormalMath(object):
    def __init__(self, number):
        self.number = number

    def add_pi(self):
        n = self.number
        return n + 3.1415

class NewMath(NormalMath):
    def add_pi(self):
        # this will call NormalMath's add_pi
        normal_maths_pi_plus_num = super(NewMath, self).add_pi()
        return int(normal_maths_pi_plus_num)
In your Log example, instead of calling:
self._dataframe = LogFile.dataframe.getter()
you should call:
self._dataframe = super(SensorLog, self).dataframe
You can read more about super in the Python documentation.
Edit: Even though the example I gave you deals with methods, doing the same with @property attributes shouldn't be a problem.
You have some possibilities to consider:
1/ Inherit from LogFile and override parse in your derived SensorLog class (see the sketch after this list). It should be possible to modify your methods that work on dataframe to work regardless of the number of members that dataframe has - as you are using pandas a lot of it is done for you.
2/ Make sensor an instance of logfile then provide its own parse method.
3/ Generalise parse, and possibly some of your other methods, to use a list of data descriptors and possibly a dictionary of methods/rules either set in your class initialiser or set by a methods.
4/ Look at either making more use of the methods already in pandas, or possibly, extending pandas to provide the missing methods if you and others think that they would be accepted into pandas as useful extensions.
Personally I think that you would find the benefits of options 3 or 4 to be the most powerful.
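For example, option 1 could look roughly like this (a sketch; the extra column is invented):

class SensorLog(LogFile):
    def parse(self):
        # let the base class build self._dataframe, then extend it
        LogFile.parse(self)
        self._dataframe['extra_stuff'] = 'hello world!'

The inherited dataframe property calls self.parse(), so the subclass version runs automatically and the lazy caching behaviour is preserved.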
The problem is that you're missing a self going into the parent class. If your parent is a singleton then a @staticmethod should work.
class X():
    x = 1

    @staticmethod
    def getx():
        return X.x

class Y(X):
    y = 2

    def getyx(self):
        return X.getx() + self.y

wx = Y()
wx.getyx()
3
A descriptor class is as follows:
class Des(object):
    def __get__(self, instance, owner): ...
    def __set__(self, instance, value): ...
    def __delete__(self, instance): ...

class Sub(object):
    attr = Des()

X = Sub()
Question
I don't see the point of the existence of owner, how can I use it?
To make an attr read-only, we shouldn't omit __set__ but define it to catch assignments and raise an exception. So X.attr = 123 will fail, but __set__'s arguments don't contain owner, which means I can still do Sub.attr = 123, right?
See http://docs.python.org/reference/datamodel.html#implementing-descriptors:
owner is always the owner class, while instance is the instance that the attribute was accessed through, or None when the attribute is accessed through the owner
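A tiny sketch makes the difference visible (the names are invented):

class Probe(object):
    def __get__(self, instance, owner):
        return (instance, owner)

class Holder(object):
    attr = Probe()

h = Holder()
print(h.attr)        # (<Holder object ...>, <class Holder>)  - both are filled in
print(Holder.attr)   # (None, <class Holder>)                 - instance is None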
A case where you would use owner would be creating a classproperty:
class _ContentQueryProperty(object):
    def __get__(self, inst, cls):
        return Content.query.filter_by(type=cls.TYPE)
You can experiment with this example:
# the descriptor protocol defines 3 methods:
#   __get__()
#   __set__()
#   __delete__()
# any class implementing any of the above methods is a descriptor,
# as in this class
class Trace(object):
    def __init__(self, name):
        self.name = name

    def __get__(self, obj, objtype):
        print "GET:" + self.name + " = " + str(obj.__dict__[self.name])
        return obj.__dict__[self.name]

    def __set__(self, obj, value):
        obj.__dict__[self.name] = value
        print "SET:" + self.name + " = " + str(obj.__dict__[self.name])

# define the attributes of your class (must derive from object)
# to be references to instances of a descriptor
class Point(object):
    # NOTES:
    # 1. descriptor invoked by dotted attribute access: A.x or a.x
    # 2. descriptor reference must be stored in the class dict, not the instance dict
    # 3. descriptor not invoked by dictionary access: Point.__dict__['x']
    x = Trace("x")
    y = Trace("y")

    def __init__(self, x0, y0):
        self.x = x0
        self.y = y0

    def moveBy(self, dx, dy):
        self.x = self.x + dx  # attribute access does trigger descriptor
        self.y = self.y + dy

# trace all getters and setters
p1 = Point(15, 25)
p1.x = 20
p1.y = 35
result = p1.x
p2 = Point(16, 26)
p2.x = 30
p2.moveBy(1, 1)
I came across this question with similar confusion, and after I answered it for myself it seemed prudent to report my findings here for posterity.
As ThiefMaster already pointed out, the "owner" parameter makes possible constructions like a classproperty. Sometimes, you want classes to have methods masked as non-method attributes, and using the owner parameter allows you to do that with normal descriptors.
But that is only half the question. As for the "read-only" issue, here's what I found:
I first found the answer here: http://martyalchin.com/2007/nov/23/python-descriptors-part-1-of-2/. I did not understand it at first, and it took me about five minutes to wrap my head around it. What finally convinced me was coming up with an example.
Consider the most common descriptor: property. Let's use a trivial example class, with a property count, which is the number of times the variable count has been accessed.
class MyClass(object):
    def __init__(self):
        self._count = 0

    @property
    def count(self):
        tmp = self._count
        self._count += 1
        return tmp

    @count.setter
    def count(self, value):
        raise AttributeError('read-only attribute')

    @count.deleter
    def count(self):
        raise AttributeError('read-only attribute')
As we've already established, the owner parameter of the __get__ function means that when you access the attribute at the class level, the __get__ function intercepts the getattr call. As it happens, the code for property simply returns the property itself when accessed at the class level, but it could do anything (like return some static value).
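You can see that directly in an interactive session (address elided):
>>> MyClass.count
<property object at 0x...>
>>> MyClass().count
0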
Now, imagine what would happen if __set__ and __delete__ worked the same way. The __set__ and __delete__ methods would intercept all setattr and delattr calls at the class level, in addition to the instance level.
As a consequence, this means that the "count" attribute of MyClass is effectively unmodifiable. If you're used to programming in static, compiled languages like Java this doesn't seem very interesting, since you can't modify classes in application code. But in Python, you can. Classes are considered objects, and you can dynamically assign any of their attributes. For example, let's say MyClass is part of a third-party module, and MyClass is almost entirely perfect for our application (let's assume there's other code in there besides the code for count) except that we wished the count method worked a little differently. Instead, we want it to always return 10, for every single instance. We could do the following:
>>> MyClass.count = 10
>>> myinstance = MyClass()
>>> myinstance.count
10
If __set__ intercepted the call to setattr(MyClass, 'count'), then there would be no way to actually change MyClass. Instead, the code for setcount would intercept it and couldn't do anything with it. The only solution would be to edit the source code for MyClass. (I'm not even sure you could overwrite it in a subclass, because I think defining it in a subclass would still invoke the setattr code. But I'm not sure, and since we're already dealing with a counterfactual here, I don't really have a way of testing it.)
Now, you may be saying, "That's exactly what I want! I intentionally did not want my user to reassign attributes of my class!" To that, all I can say is that what you wanted is impossible using naive descriptors, and I would direct you to the reasoning above. Allowing class attributes to be reassigned is much more in line with current Python idioms.
If you really, REALLY want to make a read-only class attribute, I don't think I could tell you how. But if there is a solution, it would probably involve using metaclasses and either creating a property of the metaclass or modifying the metaclass's code for setattr and delattr. But this is Deep Magic, and well beyond the scope of this answer (and my own abilities with Python).
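For the curious, a hedged sketch of what that metaclass trick might look like (the names are invented, and it is not something I would recommend for ordinary code):

class ReadOnlyMeta(type):
    def __setattr__(cls, name, value):
        if name == 'count':
            raise AttributeError("read-only class attribute")
        super(ReadOnlyMeta, cls).__setattr__(name, value)

class Guarded(object):
    __metaclass__ = ReadOnlyMeta   # Python 2; in Python 3: class Guarded(metaclass=ReadOnlyMeta)
    count = 0

# Guarded.count = 10 would now raise AttributeError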
As far as read-only properties are concerned (see discussion above), the following example shows how it's done:
############################################################
#
# descriptors
#
############################################################

# define a class where methods are invoked through properties
class Point(object):
    def getX(self):
        print "getting x"
        return self._x

    def setX(self, value):
        print "setting x"
        self._x = value

    def delX(self):
        print "deleting x"
        del self._x

    x = property(getX, setX, delX)

p = Point()
p.x = 55   # calls setX
a = p.x    # calls getX
del p.x    # calls delX

# using property decorator (read only attributes)
class Foo(object):
    def __init__(self, x0, y0):
        self.__dict__["myX"] = x0
        self.__dict__["myY"] = y0

    @property
    def x(self):
        return self.myX

f = Foo(4, 6)
print f.x
try:
    f.x = 77   # fails: f.x is read-only
except Exception, e:
    print e
The owner is just the class of the instance and is provided for convenience. You can always compute it from instance:
owner = instance.__class__
The __set__ method is supposed to change attributes on an instance. But what if you would like to change an attribute that is shared by all instances and therefore lives in the class, e.g., is a class attribute? This can only be done if you have access to the class, hence the owner argument.
Yes, you can overwrite the property / descriptor if you assign to an attribute through the class. This is by design, as Python is a dynamic language.
Hope that answers the question, although it was asked a long time ago.
I am programming simulations of single neurons and therefore have to handle a lot of parameters. The idea is that I have two classes: one for a SingleParameter and one for a Collection of parameters. I use property() to access the parameter value easily and to make the code more readable. This works perfectly for a single parameter, but I don't know how to implement it for the collection, as I want to name the property in Collection after the SingleParameter. Here is an example:
class SingleParameter(object):
    def __init__(self, name, default_value=0, unit='not specified'):
        self.name = name
        self.default_value = default_value
        self.unit = unit
        self.set(default_value)

    def get(self):
        return self._v

    def set(self, value):
        self._v = value

    v = property(fget=get, fset=set, doc='value of parameter')
par1 = SingleParameter(name='par1', default_value=10, unit='mV')
par2 = SingleParameter(name='par2', default_value=20, unit='mA')

# par1 and par2 I can access perfectly via 'par1.v = ...'
# or get the value with 'par1.v'
class Collection(object):
    def __init__(self):
        self.dict = {}

    def __getitem__(self, name):
        return self.dict[name]  # get the whole object
        # to get the value instead:
        # return self.dict[name].v

    def add(self, parameter):
        self.dict[parameter.name] = parameter
        # now comes the part that I don't know how to implement with property():
        # It should be something like
        # self.__dict__[parameter.name] = property(...) ?
col = Collection()
col.add(par1)
col.add(par2)
col['par1'] # gives the whole object
# Now here is what I would like to get:
# col.par1 -> should result like col['par1'].v
# col.par1 = 5 -> should result like col['par1'].v = 5
Other questions that I put to understand property():
Why do managed attributes just work for class attributes and not for instance attributes in python?
How can I assign a new class attribute via __dict__ in python?
Look at built-in functions getattr and setattr. You'll probably be a lot happier.
Using the same get/set functions for both classes forces you into an ugly hack with the argument list. Very sketchy, this is how I would do it:
In class SingleParameter, define get and set as usual:
def get(self):
    return self._v

def set(self, value):
    self._v = value
In class Collection, you cannot know the information until you create the property, so you define the metaset/metaget function and particularize them only later with a lambda function:
def metaget(self, par):
    return par.v

def metaset(self, value, par):
    par.v = value

def add(self, par):
    self.dict[par.name] = par
    setattr(Collection, par.name,
            property(
                fget=lambda x: Collection.metaget(x, par),
                fset=lambda x, y: Collection.metaset(x, y, par)))
Properties are meant to dynamically evaluate attributes or to make them read-only. What you need is customizing attribute access. __getattr__ and __setattr__ do that really fine, and there's also __getattribute__ if __getattr__ is not enough.
See Python docs on customizing attribute access for details.
Have you looked at the traits package? It seems that you are reinventing the wheel here with your parameter classes. Traits also has additional features that might be useful for your type of application (incidentally, I know a person who happily uses traits in neural simulations).
Now I implemented a solution with set-/getattr:
class Collection(object):
    ...
    def __setattr__(self, name, value):
        if 'dict' in self.__dict__:
            if name in self.dict:
                self[name].v = value
        else:
            self.__dict__[name] = value

    def __getattr__(self, name):
        return self[name].v
There is one thing I don't quite like that much: the attributes are not in the __dict__. And if I had them there as well I would have a copy of the value - which can be dangerous...
Finally I succeeded in implementing the classes with property(). Thanks a lot for the advice. It took me quite a bit to work it out - but I can promise you that this exercise helps you understand Python's OOP better.
I implemented it also with __getattr__ and __setattr__ but still don't know the advantages and disadvantages compared to the property solution. But this seems to be worth another question. The property solution seems to be quite clean.
So here is the code:
class SingleParameter(object):
    def __init__(self, name, default_value=0, unit='not specified'):
        self.name = name
        self.default_value = default_value
        self.unit = unit
        self.set(default_value)

    def get(*args):
        self = args[0]
        print "get(): "
        print args
        return self._v

    def set(*args):
        print "set(): "
        print args
        self = args[0]
        value = args[-1]
        self._v = value

    v = property(fget=get, fset=set, doc='value of parameter')

class Collection(dict):
    # inheriting from dict saves the methods: __getitem__ and __init__
    def add(self, par):
        self[par.name] = par
        # Now here comes the tricky part.
        # (Note: this property calls the get() and set() methods with one
        # more argument than the property of SingleParameter)
        setattr(Collection, par.name,
                property(fget=par.get, fset=par.set))
# Applying the classes:
par1 = SingleParameter(name='par1', default_value=10, unit='mV')
par2 = SingleParameter(name='par2', default_value=20, unit='mA')
col = Collection()
col.add(par1)
col.add(par2)
# Setting parameter values:
par1.v = 13
col.par1 = 14
# Getting parameter values:
par1.v
col.par1
# checking identity:
par1.v is col.par1
# to access the whole object:
col['par1']
As I am new I am not sure how to move on:
how to treat follow-up questions (like this one itself):
get() seems to be called twice - why?
OOP design: property vs. "__getattr__ & __setattr__" - when should I use which?
is it rude to mark my own answer to my own question as accepted?
is it recommended to rename the title in order to put related questions, or questions elaborated with the same example, into the same context?
I have a class that does something similar, but I did the following in the collection object:
setattr(self, par.name, par.v)