Storing a reference to a reference in Python?

Using Python, is there any way to store a reference to a reference, so that I can change what that reference refers to in another context? For example, suppose I have the following class:
class Foo:
    def __init__(self):
        self.standalone = 3
        self.lst = [4, 5, 6]
I would like to create something analogous to the following:
class Reassigner:
    def __init__(self, target):
        self.target = target
    def reassign(self, value):
        # not sure what to do here, but reassigns the reference given by target to value
Such that the following code
f = Foo()
rStandalone = Reassigner(f.standalone) # presumably this syntax might change
rIndex = Reassigner(f.lst[1])
rStandalone.reassign(7)
rIndex.reassign(9)
Would result in f.standalone equal to 7 and f.lst equal to [4, 9, 6].
Essentially, this would be an analogue to a pointer-to-pointer.

In short, it's not possible. At all. The closest equivalent is storing a reference to the object whose member/item you want to reassign, plus the attribute name/index/key, and then use setattr/setitem. However, this yields quite different syntax, and you have to differentiate between the two:
class AttributeReassigner:
    def __init__(self, obj, attr):
        # use your imagination
        self.obj, self.attr = obj, attr
    def reassign(self, val):
        setattr(self.obj, self.attr, val)

class ItemReassigner:
    def __init__(self, obj, key):
        # use your imagination
        self.obj, self.key = obj, key
    def reassign(self, val):
        self.obj[self.key] = val
f = Foo()
rStandalone = AttributeReassigner(f, 'standalone')
rIndex = ItemReassigner(f.lst, 1)
rStandalone.reassign(7)
rIndex.reassign(9)
I've actually used something very similar, but the valid use cases are few and far between.
For globals/module members, you can use either the module object or globals(), depending on whether you're inside the module or outside of it. There is no equivalent for local variables at all -- the result of locals() cannot be used to change locals reliably, it's only useful for inspecting.
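To make the globals/module case concrete, here is a minimal hedged sketch; the helper make_global_reassigner and the counter variable are illustrative, not part of the answer above:
import sys

counter = 0  # a module-level name we want to rebind from elsewhere

def make_global_reassigner(name):
    # grab the module object for the current module; from outside you would
    # do `import mymodule` and then setattr(mymodule, name, value) instead
    module = sys.modules[__name__]
    return lambda value: setattr(module, name, value)

set_counter = make_global_reassigner("counter")
set_counter(42)
print(counter)  # 42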

Simple answer: You can't.
Complicated answer: You can use lambdas. Sort of.
class Reassigner:
    def __init__(self, target):
        self.reassign = target
f = Foo()
rIndex = Reassigner(lambda value: f.lst.__setitem__(1, value))
rStandalone = Reassigner(lambda value: setattr(f, 'standalone', value))
rF = Reassigner(lambda value: locals().__setitem__('f', value))  # note: writing to locals() does not reliably rebind names

If you need to defer assignments, you could use functools.partial (or just a lambda):
from functools import partial
set_standalone = partial(setattr, f, "standalone")
set_item = partial(f.lst.__setitem__, 1)
set_standalone(7)
set_item(9)
If reassign is the only operation, you don't need a new class.
Functions are first-class citizens in Python: you can assign them to a variable, store them in a list, pass them as arguments, etc.
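As a small hedged illustration of treating these deferred setters as plain values, reusing Foo from the question; the pending list is purely for demonstration:
from functools import partial

class Foo:
    def __init__(self):
        self.standalone = 3
        self.lst = [4, 5, 6]

f = Foo()

# each entry is (setter_callable, value_to_assign); nothing runs until we loop
pending = [
    (partial(setattr, f, "standalone"), 7),
    (partial(f.lst.__setitem__, 1), 9),
]

for setter, value in pending:
    setter(value)

print(f.standalone, f.lst)  # 7 [4, 9, 6]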

This would work for the contents of container objects. If you don't mind adding one level of indirection to your variables (which you'd need in the C pointer-to-pointer case anyway), you could:
class Container(object):
    def __init__(self, val):
        self.val = val

class Foo(object):
    def __init__(self):
        self.standalone = Container(3)
        self.lst = [Container(4), Container(5), Container(6)]
And you wouldn't really need the reassigner object at all.
class Reassigner(object):
    def __init__(self, target):
        self.target = target
    def reassign(self, value):
        self.target.val = value
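A brief hedged usage sketch of the Container indirection, continuing with the Foo and Reassigner classes just defined (variable names are illustrative):
f = Foo()

r_standalone = Reassigner(f.standalone)
r_index = Reassigner(f.lst[1])

r_standalone.reassign(7)
r_index.reassign(9)

print(f.standalone.val)        # 7
print([c.val for c in f.lst])  # [4, 9, 6]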

Related

Some doubts about @property in python 3

To keep this from getting too long, I will give a basic, hypothetical example of what I am trying to do.
Suppose the following class:
class foo():
    def __init__(self):
        self.keywords = []
    ## this method returns the entire list
    def get_keywords(self):
        return self.keywords
    def set_keywords(self, value):
        self.keywords.append(value)
But I want to code this in a pythonic way using the @property decorator.
My (wrong) attempt to do this:
class foo:
    def __init__(self):
        self.key = []
    @property
    def key(self):
        return self.__key
    @key.setter
    def key(self, value):
        self.__key.append(value)
So, what is wrong with my attempt?
ps: English is not my native language and I hope my doubt is understandable.
In your original code, self.set_keywords only appends to an existing list; it does not let you initialize the value of keywords to an arbitrary list. This restriction is preserved in your property-based code, which means you cannot assign directly to self.key; you have to initialize the underlying list in __init__ directly.
class foo:
    def __init__(self):
        # self.key = [] is equivalent to `self.__key.append([])`, but
        # self.__key doesn't exist yet. (And would be wrong even if it did.)
        self.__key = []
    @property
    def key(self):
        return self.__key
    @key.setter
    def key(self, value):
        self.__key.append(value)
However, this means an assignment like self.key = 3 doesn't actually perform what most people would expect of an assignment. It doesn't overwrite the old value, it adds to it instead. Use the setter to provide a fixed list, but a different method to add to an existing one.
class foo:
    def __init__(self):
        self.__keys = []
    @property
    def keys(self):
        return self.__keys
    @keys.setter
    def keys(self, values):
        self.__keys = values
    def add_key(self, value):
        self.__keys.append(value)
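A short hedged usage sketch of that split, continuing with the foo class just defined (assignment replaces the list, add_key appends):
f = foo()
f.keys = [1, 2, 3]   # goes through the setter and replaces the underlying list
f.add_key(4)         # appending is now an explicit, separate operation
print(f.keys)        # [1, 2, 3, 4]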
And finally, it's not necessarily more Pythonic to use a property if you don't actually do any sort of extra work or validation in the getter or setter. If all you are doing is wrapping access to an underlying value, just let the value be used directly.
class foo:
    def __init__(self):
        self.keys = []
        self.keys = [1,2,3]
        print(self.keys)
        self.keys.append(4)
        # etc
The nice thing about properties is that if you start by allowing direct access to keys, then nothing about how you use keys changes if you later decide to replace it with a property.
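For example, here is a minimal hedged sketch of that later swap; the list() normalisation in the setter is made-up extra work, just to show where it would go:
class foo:
    def __init__(self):
        self._keys = []

    @property
    def keys(self):
        return self._keys

    @keys.setter
    def keys(self, values):
        # hypothetical extra work added later; callers still just assign
        self._keys = list(values)

f = foo()
f.keys = [1, 2, 3]   # same syntax as when keys was a plain attribute
print(f.keys)        # [1, 2, 3]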
You can give this a try:
class Foo:
    def __init__(self):
        self._key = []
    @property
    def key(self):
        return self._key
    @key.setter
    def key(self, value):
        self._key = value
Here are my two cents:
Rename the class foo to Foo
You can't initialize self.key, as this is the property, so initialize the correct variable in the constructor (i.e. __init__)
Private variables are prefixed with a single underscore _, not two (a double leading underscore __ triggers name mangling and is reserved for Python internals)
I suppose you want my_instance.key = ['spam', 'eggs'] to replace the foo._key value rather than extend it, because this is a "setter", and appending there would be surprising behaviour, or at least behaviour another developer won't expect from that setter.
However, and that's important: as long as you're only doing this, you won't need properties. You can simply initialize self.keys in the constructor and forget about the property and setter function. Later on, when you want to change the behaviour, you can still add the property and setter. That's one reason why we have properties in Python: you won't have to refactor your whole code when "a bit more logic" comes into play.
By the way, if you're really relying on those dict functions for everything, you might also want to inherit your class from the dict class. Depends what you're up to.

Create multiple classes or multiple objects in Python?

I have the following problem and I need advice on how best to solve it technically in Python. As I am new to programming, I would like some advice.
I will have the following objects, and they should store data. Here is an example:
object 1: cash dividends (they will have the following properties)
exdate (will store a list of dates)
recorddate (will store a list of dates)
paydate (will store a list of dates)
ISIN (will store a list of text)
object 2: stock splits (they will have the following properties)
stocksplitratio (will be some ratio)
exdate(list of dates)
...
I have tried to solve it like this:
class cashDividends(object):
    def __init__(self, _gross, _net, _ISIN, _paydate, _exdate, _recorddate, _frequency, _type, _announceddate, _currency):
        self.gross = _gross
        self.net = _net
        self.ISIN = _ISIN
        self.paydate = _paydate
        self.exdate = _exdate
        self.recorddate = _recorddate
        self.frequency = _frequency
        self.type = _type
        self.announceddate = _announceddate
        self.currency = _currency
So if I have this, I would have to create another class named stocksplits and then define an __init__ function again.
However, is there a way I can have one class like "Corporate Actions" and then have stock splits and cash dividends in there?
Sure you can! In Python you can pass objects, including instances of your other classes, to another class.
Here a simple example:
class A():
    def __init__(self):
        self.x = 0

class B():
    def __init__(self):
        self.x = 1

class Container():
    def __init__(self, objects):
        self.x = [obj.x for obj in objects]

a = A()
b = B()
c = Container([a, b])
c.x
[0,1]
If I understood correctly, what you want is an object that has, as properties, other objects from classes you created?
class CorporateActions(object):
    def __init__(self, aCashDividend, aStockSplit):
        self.cashDividend = aCashDividend
        self.stockSplit = aStockSplit

myCashDividends = CashDividends(...) # corresponding parameters here
myStockSplit = StockSplit(...)
myCorporateActions = CorporateActions(myCashDividends, myStockSplit)
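A hedged, self-contained sketch of the resulting access pattern, with cut-down CashDividends/StockSplit classes and made-up values so the snippet runs on its own:
class CashDividends(object):
    def __init__(self, ISIN, paydate):
        self.ISIN = ISIN
        self.paydate = paydate

class StockSplit(object):
    def __init__(self, stocksplitratio):
        self.stocksplitratio = stocksplitratio

class CorporateActions(object):
    def __init__(self, aCashDividend, aStockSplit):
        self.cashDividend = aCashDividend
        self.stockSplit = aStockSplit

ca = CorporateActions(CashDividends("XS0000000000", "2024-01-15"),
                      StockSplit("2:1"))
print(ca.cashDividend.ISIN)            # nested data through plain attribute access
print(ca.stockSplit.stocksplitratio)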
Strictly speaking this answer isn't an answer for the final question. However, it is a way to make your life slightly easier.
Consider creating a sort-of template class (I'm using this term loosely; there's no such thing in Python) that does the __init__ work for you. Like this:
class KwargAttrs():
    def __init__(self, **kwargs):
        kwargs.pop('self', None)  # locals() in a subclass __init__ passes 'self' along; drop it
        for k, v in kwargs.items():
            setattr(self, k, v)

    def _update(self, **kwargs):
        args_dict = {k: (kwargs[k] if k in kwargs else self.__dict__[k]) for k in self.__dict__}
        self.__dict__.update(args_dict)
This class uses every supplied keyword argument as an object attribute. Use it this way:
class CashDividends(KwargAttrs):
    def __init__(self, gross, net, ISIN, paydate, exdate, recorddate, frequency, type, announceddate, currency):
        # save the namespace before it gets polluted
        super().__init__(**locals())

        # work that might pollute local namespace goes here

        # OPTIONAL: update the argument values in case they were modified:
        super()._update(**locals())
Using a method like this, you don't have to go through the argument list and assign every single object attribute; it happens automatically.
We bookend everything you need to accomplish in the __init__ method with method calls to the parent class via super(). We do this because locals() returns a dict of every variable in the function's current namespace, so you need to 1) capture that namespace before any other work pollutes it and 2) update the namespace in case any work changes the argument values.
The call to _update is optional, but the values of the supplied arguments will not be updated if something is done to them after the call to super().__init__() (that is, unless you change the values using setattr(self, 'argname', value), which is not a bad idea).
You can continue using this class like so:
class StockSplits(KwargAttrs):
    def __init__(self, stocksplitratio, gross, net, ISIN, paydate, exdate, recorddate, frequency, type, announceddate, currency):
        super().__init__(**locals())
As mentioned in the other answers, you can create a container for your other classes, but you can even do that using this same template class:
class CorporateActions(KwargAttrs):
    def __init__(self, stock_splits, cash_dividends):
        super().__init__(**locals())

ca = CorporateActions(stock_splits=StockSplits(<arguments>), cash_dividends=CashDividends(<arguments>))
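A quick hedged check of what KwargAttrs buys you, with a cut-down argument list and made-up values so the snippet stands alone:
class KwargAttrs():
    def __init__(self, **kwargs):
        kwargs.pop('self', None)          # locals() passes 'self' along; drop it
        for k, v in kwargs.items():
            setattr(self, k, v)

class CashDividends(KwargAttrs):
    def __init__(self, gross, net, ISIN):
        super().__init__(**locals())      # every argument becomes an attribute

cd = CashDividends(gross=1.25, net=1.0, ISIN="XS0000000000")
print(cd.gross, cd.net, cd.ISIN)          # 1.25 1.0 XS0000000000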

python lazy variables? or, delayed expensive computation

I have a set of arrays that are very large and expensive to compute, and not all will necessarily be needed by my code on any given run. I would like to make their declaration optional, but ideally without having to rewrite my whole code.
Example of how it is now:
x = function_that_generates_huge_array_slowly(0)
y = function_that_generates_huge_array_slowly(1)
Example of what I'd like to do:
x = lambda: function_that_generates_huge_array_slowly(0)
y = lambda: function_that_generates_huge_array_slowly(1)
z = x * 5 # this doesn't work because lambda is a function
# is there something that would make this line behave like
# z = x() * 5?
g = x * 6
While using lambda as above achieves one of the desired effects - computation of the array is delayed until it is needed - if you use the variable "x" more than once, it has to be computed each time. I'd like to compute it only once.
EDIT:
After some additional searching, it looks like it is possible to do what I want (approximately) with "lazy" attributes in a class (e.g. http://code.activestate.com/recipes/131495-lazy-attributes/). I don't suppose there's any way to do something similar without making a separate class?
EDIT2: I'm trying to implement some of the solutions, but I'm running into an issue because I don't understand the difference between:
class sample(object):
    def __init__(self):
        class one(object):
            def __get__(self, obj, type=None):
                print "computing ..."
                obj.one = 1
                return 1
        self.one = one()
and
class sample(object):
    class one(object):
        def __get__(self, obj, type=None):
            print "computing ... "
            obj.one = 1
            return 1
    one = one()
I think some variation on these is what I'm looking for, since the expensive variables are intended to be part of a class.
The first half of your problem (reusing the value) is easily solved:
class LazyWrapper(object):
    def __init__(self, func):
        self.func = func
        self.value = None
    def __call__(self):
        if self.value is None:
            self.value = self.func()
        return self.value
lazy_wrapper = LazyWrapper(lambda: function_that_generates_huge_array_slowly(0))
But you still have to use it as lazy_wrapper() not lazy_wrapper.
If you're going to be accessing some of the variables many times, it may be faster to use:
class LazyWrapper(object):
    def __init__(self, func):
        self.func = func
    def __call__(self):
        try:
            return self.value
        except AttributeError:
            self.value = self.func()
            return self.value
Which will make the first call slower and subsequent uses faster.
Edit: I see you found a similar solution that requires you to use attributes on a class. Either way requires you rewrite every lazy variable access, so just pick whichever you like.
Edit 2: You can also do:
class YourClass(object):
    def __init__(self, func):
        self.func = func

    @property
    def x(self):
        try:
            return self.value
        except AttributeError:
            self.value = self.func()
            return self.value
If you want to access x as an instance attribute. No additional class is needed. If you don't want to change the class signature (by making it require func), you can hard code the function call into the property.
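On Python 3.8+ there is also functools.cached_property, which packages this try/except pattern; a brief hedged sketch with a stand-in slow_function:
from functools import cached_property

def slow_function():
    print("computing ...")
    return [1, 2, 3] * 1000

class YourClass:
    @cached_property
    def x(self):
        # runs on first access, then the result is stored on the instance
        return slow_function()

obj = YourClass()
obj.x   # prints "computing ..." and caches the result
obj.x   # served from the instance's __dict__, no recomputation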
Writing a class is more robust, but optimizing for simplicity (which I think you are asking for), I came up with the following solution:
cache = {}

def expensive_calc(factor):
    print 'calculating...'
    return [1, 2, 3] * factor

def lookup(name):
    return ( cache[name] if name in cache
             else cache.setdefault(name, expensive_calc(2)) )

print 'run one'
print lookup('x') * 2
print 'run two'
print lookup('x') * 2
Python 3.2 and greater implement an LRU algorithm in the functools module to handle simple cases of caching/memoization:
import functools

@functools.lru_cache(maxsize=128) # cache at most 128 items
def f(x):
    print("I'm being called with %r" % x)
    return x + 1

z = f(9) + f(9)**2
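Continuing with f as decorated above (after the z line has run, the body has printed only once), the wrapper also exposes its cache bookkeeping:
print(f.cache_info())  # e.g. CacheInfo(hits=1, misses=1, maxsize=128, currsize=1)
f.cache_clear()        # drop the cached results if the underlying data ever changes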
You can't make a simple name, like x, really evaluate lazily. A name is just an entry in a hash table (e.g. the one that locals() or globals() returns). Unless you patch the access methods of these system tables, you cannot attach execution of your code to simple name resolution.
But you can wrap functions in caching wrappers in different ways.
This is an OO way:
class CachedSlowCalculation(object):
    cache = {} # our results

    def __init__(self, func):
        self.func = func

    def __call__(self, param):
        already_known = self.cache.get(param, None)
        if already_known:
            return already_known
        value = self.func(param)
        self.cache[param] = value
        return value

calc = CachedSlowCalculation(function_that_generates_huge_array_slowly)
z = calc(1) + calc(1)**2 # only calculates things once
This is a classless way:
def cached(func):
    func.__cache = {} # we can attach attrs to objects, functions are objects
    def wrapped(param):
        cache = func.__cache
        already_known = cache.get(param, None)
        if already_known:
            return already_known
        value = func(param)
        cache[param] = value
        return value
    return wrapped

@cached
def f(x):
    print "I'm being called with %r" % x
    return x + 1

z = f(9) + f(9)**2 # see f called only once
In real world you'll add some logic to keep the cache to a reasonable size, possibly using a LRU algorithm.
To me, it seems that the proper solution for your problem is subclassing a dict and using it.
class LazyDict(dict):
    def __init__(self, lazy_variables):
        self.lazy_vars = lazy_variables
    def __getitem__(self, key):
        if key not in self and key in self.lazy_vars:
            self[key] = self.lazy_vars[key]()
        return super().__getitem__(key)

def generate_a():
    print("generate var a lazily..")
    return "<a_large_array>"

# You can add as many variables as you want here
lazy_vars = {'a': generate_a}
lazy = LazyDict(lazy_vars)

# retrieve the variable you need from `lazy`
a = lazy['a']
print("Got a:", a)
And you can actually evaluate a variable lazily if you use exec to run your code. The solution is just using a custom globals.
your_code = "print('inside exec');print(a)"
exec(your_code, lazy)
If you did your_code = open(your_file).read(), you could actually run your code and achieve what you want. But I think the more practical approach would be the former one.

Mapping obj.method({argument:value}) to obj.argument(value)

I don't know if this will make sense, but...
I'm trying to dynamically assign methods to an object.
#translate this
object.key(value)
#into this
object.method({key:value})
To be more specific in my example, I have an object (which I didn't write), let's call it motor, which has some generic methods set, status and a few others. Some take a dictionary as an argument and some take a list. To change the motor's speed, and see the result, I use:
motor.set({'move_at':10})
print motor.status('velocity')
The motor object, then formats this request into a JSON-RPC string, and sends it to an IO daemon. The python motor object doesn't care what the arguments are, it just handles JSON formatting and sockets. The strings move_at and velocity are just two of what might be hundreds of valid arguments.
What I'd like to do is the following instead:
motor.move_at(10)
print motor.velocity()
I'd like to do it in a generic way since I have so many different arguments I can pass. What I don't want to do is this:
# create a new function for every possible argument
def move_at(self, x):
    return self.set({'move_at':x})
def velocity(self):
    return self.status('velocity')
# and a hundred more...
I did some searching on this which suggested the solution lies with lambdas and meta programming, two subjects I haven't been able to get my head around.
UPDATE:
Based on the code from user470379 I've come up with the following...
# This is what I have now....
class Motor(object):
    def set(self, a_dict):
        print "Setting a value", a_dict
    def status(self, a_list):
        print "requesting the status of", a_list
        return 10

# Now to extend it....
class MyMotor(Motor):
    def __getattr__(self, name):
        def special_fn(*value):
            # What we return depends on how many arguments there are.
            if len(value) == 0: return self.status((name))
            if len(value) == 1: return self.set({name:value[0]})
        return special_fn
    def __setattr__(self, attr, value): # This is based on some other answers
        self.set({attr:value})

x = MyMotor()
x.move_at = 20 # Uses __setattr__
x.move_at(10) # May remove this style from __getattr__ to simplify code.
print x.velocity()
output:
Setting a value {'move_at': 20}
Setting a value {'move_at': 10}
10
Thank you to everyone who helped!
What about creating your own __getattr__ for the class that returns a function created on the fly? IIRC, there are some tricky cases to watch out for between __getattr__ and __getattribute__ that I don't recall off the top of my head; I'm sure someone will post a comment to remind me:
def __getattr__(self, name):
    def set_fn(value):
        return self.set({name: value})
    return set_fn
Then what should happen is that calling an attribute that doesn't exist (ie: move_at) will call the __getattr__ function and create a new function that will be returned (set_fn above). The name variable of that function will be bound to the name parameter passed into __getattr__ ("move_at" in this case). Then that new function will be called with the arguments you passed (10 in this case).
Edit
A more concise version using lambdas (untested):
def __getattr__(self, name):
    return lambda value: self.set({name:value})
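A minimal hedged sketch of this wired into a subclass; the Motor stub below just prints what would be sent, standing in for the real JSON-RPC object:
class Motor(object):
    # stub standing in for the real object that formats JSON-RPC requests
    def set(self, a_dict):
        print("Setting", a_dict)

class MyMotor(Motor):
    def __getattr__(self, name):
        # only consulted for names that don't otherwise exist on the instance
        return lambda value: self.set({name: value})

m = MyMotor()
m.move_at(10)   # prints: Setting {'move_at': 10}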
There are a lot of different potential answers to this, but many of them will probably involve subclassing the object and/or writing or overriding the __getattr__ function.
Essentially, the __getattr__ function is called whenever python can't find an attribute in the usual way.
Assuming you can subclass your object, here's a simple example of what you might do (it's a bit clumsy but it's a start):
class foo(object):
    def __init__(self):
        print "initting " + repr(self)
        self.a = 5
    def meth(self):
        print self.a

class newfoo(foo):
    def __init__(self):
        super(newfoo, self).__init__()
        def meth2(): # Or, use a lambda: ...
            print "meth2: " + str(self.a) # but you don't have to
        self.methdict = { "meth2":meth2 }
    def __getattr__(self, name):
        return self.methdict[name]
f = foo()
g = newfoo()
f.meth()
g.meth()
g.meth2()
Output:
initting <__main__.foo object at 0xb7701e4c>
initting <__main__.newfoo object at 0xb7701e8c>
5
5
meth2: 5
You seem to have certain "properties" of your object that can be set by
obj.set({"name": value})
and queried by
obj.status("name")
A common way to go in Python is to map this behaviour to what looks like simple attribute access. So we write
obj.name = value
to set the property, and we simply use
obj.name
to query it. This can easily be implemented using the __getattr__() and __setattr__() special methods:
class MyMotor(Motor):
    def __init__(self, *args, **kw):
        self._init_flag = True
        Motor.__init__(self, *args, **kw)
        self._init_flag = False

    def __getattr__(self, name):
        return self.status(name)

    def __setattr__(self, name, value):
        if self._init_flag or hasattr(self, name):
            return Motor.__setattr__(self, name, value)
        return self.set({name: value})
Note that this code disallows the dynamic creation of new "real" attributes of Motor instances after the initialisation. If this is needed, corresponding exceptions could be added to the __setattr__() implementation.
Instead of setting with function-call syntax, consider using assignment (with =). Similarly, just use attribute syntax to get a value, instead of function-call syntax. Then you can use __getattr__ and __setattr__:
class OtherType(object): # this is the one you didn't write
    # dummy implementations for the example:
    def set(self, D):
        print "setting", D
    def status(self, key):
        return "<value of %s>" % key

class Blah(object):
    def __init__(self, parent):
        object.__setattr__(self, "_parent", parent)
    def __getattr__(self, attr):
        return self._parent.status(attr)
    def __setattr__(self, attr, value):
        self._parent.set({attr: value})

obj = Blah(OtherType())
obj.velocity = 42 # prints setting {'velocity': 42}
print obj.velocity # prints <value of velocity>

How to implement property() with dynamic name (in python)

I am programming a simulation of single neurons. Therefore I have to handle a lot of parameters. The idea is that I have two classes, one for a SingleParameter and one for a Collection of parameters. I use property() to access the parameter value easily and to make the code more readable. This works perfectly for a single parameter, but I don't know how to implement it for the collection, as I want to name the property in Collection after the SingleParameter. Here is an example:
class SingleParameter(object):
    def __init__(self, name, default_value=0, unit='not specified'):
        self.name = name
        self.default_value = default_value
        self.unit = unit
        self.set(default_value)
    def get(self):
        return self._v
    def set(self, value):
        self._v = value
    v = property(fget=get, fset=set, doc='value of parameter')
par1 = SingleParameter(name='par1', default_value=10, unit='mV')
par2 = SingleParameter(name='par2', default_value=20, unit='mA')
# par1 and par2 I can access perfectly via 'p1.v = ...'
# or get its value with 'p1.v'
class Collection(object):
    def __init__(self):
        self.dict = {}
    def __getitem__(self, name):
        return self.dict[name] # get the whole object
        # to get the value instead:
        # return self.dict[name].v
    def add(self, parameter):
        self.dict[parameter.name] = parameter
        # now comes the part that I don't know how to implement with property():
        # It should be something like
        # self.__dict__[parameter.name] = property(...) ?
col = Collection()
col.add(par1)
col.add(par2)
col['par1'] # gives the whole object
# Now here is what I would like to get:
# col.par1 -> should result like col['par1'].v
# col.par1 = 5 -> should result like col['par1'].v = 5
Other questions that I put to understand property():
Why do managed attributes just work for class attributes and not for instance attributes in python?
How can I assign a new class attribute via __dict__ in python?
Look at built-in functions getattr and setattr. You'll probably be a lot happier.
Using the same get/set functions for both classes forces you into an ugly hack with the argument list. Very sketchy, this is how I would do it:
In class SingleParameter, define get and set as usual:
def get(self):
    return self._s
def set(self, value):
    self._s = value
In class Collection, you cannot know the information until you create the property, so you define the metaset/metaget function and particularize them only later with a lambda function:
def metaget(self, par):
    return par.s
def metaset(self, value, par):
    par.s = value
def add(self, par):
    self[par.name] = par
    setattr(Collection, par.name,
            property(
                fget=lambda x: Collection.metaget(x, par),
                fset=lambda x, y: Collection.metaset(x, y, par)))
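A self-contained hedged reconstruction of this approach, assuming Collection subclasses dict so that self[par.name] = par works (the answer does not show the full class); note the generated property lands on the Collection class itself, so all instances share it:
class SingleParameter(object):
    def __init__(self, name, default_value=0):
        self.name = name
        self.set(default_value)
    def get(self):
        return self._s
    def set(self, value):
        self._s = value
    s = property(fget=get, fset=set)

class Collection(dict):
    def metaget(self, par):
        return par.s
    def metaset(self, value, par):
        par.s = value
    def add(self, par):
        self[par.name] = par
        setattr(Collection, par.name,
                property(fget=lambda x: Collection.metaget(x, par),
                         fset=lambda x, y: Collection.metaset(x, y, par)))

par1 = SingleParameter('par1', 10)
col = Collection()
col.add(par1)
print(col.par1)   # 10, via the generated property
col.par1 = 5
print(par1.s)     # 5, the SingleParameter was updated in place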
Properties are meant to dynamically evaluate attributes or to make them read-only. What you need is customizing attribute access. __getattr__ and __setattr__ do that really fine, and there's also __getattribute__ if __getattr__ is not enough.
See Python docs on customizing attribute access for details.
Have you looked at the traits package? It seems that you are reinventing the wheel here with your parameter classes. Traits also have additional features that might be useful for your type of application (incidently I know a person that happily uses traits in neural simulations).
Now I implemented a solution with set-/getattr:
class Collection(object):
    ...
    def __setattr__(self, name, value):
        if 'dict' in self.__dict__:
            if name in self.dict:
                self[name].v = value
        else:
            self.__dict__[name] = value
    def __getattr__(self, name):
        return self[name].v
There is one thing I don't quite like: the attributes are not in the __dict__. And if I had them there as well, I would have a copy of the value, which can be dangerous...
Finally I succeeded in implementing the classes with property(). Thanks a lot for the advice. It took me quite a while to work it out, but I can promise you that this exercise helps you understand Python's OOP better.
I also implemented it with __getattr__ and __setattr__, but I still don't know the advantages and disadvantages compared to the property solution. That seems to be worth another question. The property solution seems to be quite clean.
So here is the code:
class SingleParameter(object):
    def __init__(self, name, default_value=0, unit='not specified'):
        self.name = name
        self.default_value = default_value
        self.unit = unit
        self.set(default_value)
    def get(*args):
        self = args[0]
        print "get(): "
        print args
        return self._v
    def set(*args):
        print "set(): "
        print args
        self = args[0]
        value = args[-1]
        self._v = value
    v = property(fget=get, fset=set, doc='value of parameter')

class Collection(dict):
    # inheriting from dict saves the methods: __getitem__ and __init__
    def add(self, par):
        self[par.name] = par
        # Now here comes the tricky part.
        # (Note: this property calls the get() and set() methods with one
        # more argument than the property of SingleParameter)
        setattr(Collection, par.name,
                property(fget=par.get, fset=par.set))
# Applying the classes:
par1 = SingleParameter(name='par1', default_value=10, unit='mV')
par2 = SingleParameter(name='par2', default_value=20, unit='mA')
col = Collection()
col.add(par1)
col.add(par2)
# Setting parameter values:
par1.v = 13
col.par1 = 14
# Getting parameter values:
par1.v
col.par1
# checking identity:
par1.v is col.par1
# to access the whole object:
col['par1']
As I am new here, I am not sure how to move on:
how to treat follow-up questions (like this one itself):
get() seems to be called twice, why?
OOP design: property vs. "__getattr__ & __setattr__", when should I use which?
is it rude to mark my own answer to my own question as accepted?
is it recommended to rename the title in order to group related questions, or questions elaborated with the same example, into the same context?
Other questions that I put to understand property():
Why do managed attributes just work for class attributes and not for instance attributes in python?
How can I assign a new class attribute via __dict__ in python?
I have a class that does something similar, but I did the following in the collection object:
setattr(self, par.name, par.v)
