Python property for array operations

When using Python properties (getters and setters), the following pattern is usually used:
class MyClass(object):
    ...
    @property
    def my_attr(self):
        ...
    @my_attr.setter
    def my_attr(self, value):
        ...
However, is there any similar approach for appending to / removing from arrays? For example, in a bi-directional relationship between two objects, when removing object A it would be nice to also dereference the relationship to A in object B. I know that SQLAlchemy has implemented a similar feature.
I also know that I can implement methods like
def add_element_to_some_array(self, element):
    self.some_array.append(element)
    element.some_parent(self)
but I would prefer to do it the way "properties" work in Python. Do you know of a way?

To make your class act array-like (or dict-like), you can override __getitem__ and __setitem__.
class HappyArray(object):

    def __getitem__(self, key):
        # We skip the real logic and only demo the effect
        return 'We have an excellent %r for you!' % key

    def __setitem__(self, key, value):
        print('From now on, %r maps to %r' % (key, value))

>>> h = HappyArray()
>>> h[3]
'We have an excellent 3 for you!'
>>> h[3] = 'foo'
From now on, 3 maps to 'foo'
If you want several attributes of your object to exhibit such behavior, you need several array-like objects, one for each attribute, constructed and linked at your master object's creation time.
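For the bi-directional case from the question, one of those per-attribute helpers might look like the sketch below. The RelatedList name and the parent attribute are illustrative assumptions, not part of the original answer:

class RelatedList(object):
    """Array-like helper that keeps a back-reference on each element."""

    def __init__(self, owner):
        self.owner = owner
        self._items = []

    def append(self, element):
        self._items.append(element)
        element.parent = self.owner   # maintain the reverse side

    def remove(self, element):
        self._items.remove(element)
        element.parent = None         # dereference on removal

    def __getitem__(self, index):
        return self._items[index]

    def __len__(self):
        return len(self._items)

class Node(object):
    def __init__(self):
        self.parent = None
        self.children = RelatedList(self)  # linked at creation time

With that, b.children.append(a) sets a.parent to b, and b.children.remove(a) clears it again.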

The getter of the property returns a reference to the array, so you can do array operations on it directly. Like this:
class MyClass(object):
    ...
    @property
    def my_attr(self):
        ...
    @my_attr.setter
    def my_attr(self, value):
        ...

m = MyClass()
m.my_attr.append(0)  # <- array operations like this
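Note that mutating the returned list this way never triggers the setter. If the append itself needs to maintain the relationship, the getter can return a list subclass that hooks the mutating methods - a minimal sketch, with the ObservedList name and the on_change callback being assumptions for illustration:

class ObservedList(list):
    """list subclass that runs a callback after every append/remove."""

    def __init__(self, on_change, *args):
        super(ObservedList, self).__init__(*args)
        self.on_change = on_change

    def append(self, item):
        super(ObservedList, self).append(item)
        self.on_change('append', item)

    def remove(self, item):
        super(ObservedList, self).remove(item)
        self.on_change('remove', item)

>>> log = ObservedList(lambda op, item: print(op, item))
>>> log.append(1)
append 1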

Related

Python simple lazy loading

I'm trying to clean up some logic and remove duplicate values in some code and am looking for a way to introduce some very simple lazy-loading to handle settings variables. Something that would work like this:
FOO = {'foo': 1}
BAR = {'test': FOO['foo'] }
# ...complex logic here which ultimately updates the value of FOO['foo']...
FOO['foo'] = 2
print(BAR['test']) # Outputs 1 but would like to get 2
Update:
My question may not have been clear based on the initial responses. I'm looking to replace the value being set for test in BAR with a lazy-loaded substitute. I know a way I can do this, but it seems unnecessarily complex for what it is; I'm wondering if there's a simpler approach.
Update #2:
Okay, here's a solution that works. Is there any built-in type that can do this out of the box?
FOO = {'foo': 1}
import types

class LazyDict(dict):
    def __getitem__(self, item):
        value = super().__getitem__(item)
        return value if not isinstance(value, types.LambdaType) else value()

BAR = LazyDict({'test': lambda: FOO['foo']})
# ...complex logic here which ultimately updates the value of FOO['foo']...
FOO['foo'] = 2
print(BAR['test']) # Outputs 2
As I stated in the comment above, what you are seeking are some of the facilities of the reactive programming paradigm (not to be confused with the JavaScript library that borrows its name from it).
It is possible to instrument objects in Python to do this - I think the minimal setup here would be a specialized target mapping, plus a special object type you set as the values in it, which fetches the target value on access.
Python can do this in more straightforward ways with direct attribute access (using the dot notation: myinstance.value) than with the key-retrieving notation used in dictionaries (mydata['value']). That is because a class is already a template for a certain group of data, and class attributes can define mechanisms for accessing each instance's attribute values. This is called the "descriptor protocol" and is built into the language model itself.
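For illustration, a minimal descriptor doing the lazy lookup on attribute access might look like this; it is only a sketch, and the LazyAttr name is invented here:

FOO = {'foo': 1}

class LazyAttr:
    """Descriptor that re-reads source[key] on every attribute access."""

    def __init__(self, source, key):
        self.source = source
        self.key = key

    def __get__(self, instance, owner):
        return self.source[self.key]

class Config:
    test = LazyAttr(FOO, 'foo')

c = Config()
FOO['foo'] = 2
print(c.test)  # Outputs 2 - the value is fetched at access time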
Nonetheless, a minimalist Mapping-based version can be implemented like this:
FOO = {'foo': 1}

from collections.abc import MutableMapping

class LazyValue:
    def __init__(self, source, key):
        self.source = source
        self.key = key

    def get(self):
        return self.source[self.key]

    def __repr__(self):
        return f"<LazyValue {self.get()!r}>"

class LazyDict(MutableMapping):
    def __init__(self, *args, **kw):
        self.data = dict(*args, **kw)

    def __getitem__(self, key):
        value = self.data[key]
        if isinstance(value, LazyValue):
            value = value.get()
        return value

    def __setitem__(self, key, value):
        self.data[key] = value

    def __delitem__(self, key):
        del self.data[key]

    def __iter__(self):
        return iter(self.data)

    def __len__(self):
        return len(self.data)

    def __repr__(self):
        return repr({key: value for key, value in self.items()})

BAR = LazyDict({'test': LazyValue(FOO, 'foo')})
# ...complex logic here which ultimately updates the value of FOO['foo']...
FOO['foo'] = 2
print(BAR['test']) # Outputs 2
The reason this much code is needed is that there are several ways to retrieve data from a dictionary or mapping (.values, .items, .get, .setdefault), and simply inheriting from dict and implementing __getitem__ would "leak" the special lazy object through any of the other methods. Going through this MutableMapping approach ensures a single point of reading the value, in the __getitem__ method, and the resulting instance can be used reliably anywhere a mapping is expected.
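The leak is easy to demonstrate with a naive dict subclass like the one in the question's second update (sketched here under an assumed NaiveLazyDict name):

FOO = {'foo': 1}

class NaiveLazyDict(dict):
    def __getitem__(self, item):
        value = super().__getitem__(item)
        return value() if callable(value) else value

BAR = NaiveLazyDict({'test': lambda: FOO['foo']})
print(BAR['test'])         # 1 - __getitem__ resolves the lambda
print(list(BAR.values()))  # [<function <lambda> ...>] - .values() leaks the raw lambda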
However, notice that if you are using normal classes and instances rather than dictionaries, this can be much simpler: you can just use a plain Python property and have a getter that fetches the value. The main factor you should ponder is whether your referenced data keys are fixed and can be hard-coded when writing the source code, or whether they are dynamic and the keys that will work as lazy references are only known at runtime. In the latter case, the custom mapping approach above will usually be better:
FOO = {'foo': 1}

class LazyStuff:
    def __init__(self, source):
        self.source = source

    @property
    def test(self):
        return self.source["foo"]

BAR = LazyStuff(FOO)
FOO["foo"] = 2
print(BAR.test)  # Outputs 2
Note that this way you have to hardcode the keys "foo" and "test" in the class body, but it is just plain code, with no need for the intermediary LazyValue class. Also, if you need this data as a dictionary, you could add an .as_dict method to LazyStuff that collects all the attributes at the moment it is called and yields a snapshot of those values as a dictionary.
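Such an .as_dict method might be sketched like this, under the assumption that the lazy attributes are exactly the properties defined on the class:

class LazyStuff:
    def __init__(self, source):
        self.source = source

    @property
    def test(self):
        return self.source["foo"]

    def as_dict(self):
        # Snapshot every property defined on the class at call time.
        cls = type(self)
        return {name: getattr(self, name)
                for name in dir(cls)
                if isinstance(getattr(cls, name, None), property)}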
You can try using lambdas and calling the value on return. Like this:
FOO = {'foo': 1}
BAR = {'test': lambda: FOO['foo'] }
FOO['foo'] = 2
print(BAR['test']()) # Outputs 2
If you're only one level deep, you may wish to try ChainMap, e.g.:
>>> from collections import ChainMap
>>> defaults = {'foo': 42}
>>> myvalues = {}
>>> result = ChainMap(myvalues, defaults)
>>> result['foo']
42
>>> defaults['foo'] = 99
>>> result['foo']
99

Is it possible to pass variables or object's attributes to a function or method?

For example, consider the following function and class:
def foo(attributes, values):
    for item, value in zip(attributes, values):
        item = value

class bar:
    foo = foo
    def __init__(self, a, b):
        self.a = a
        self.b = b
    def update(self, a, b):
        foo((self.a, self.b), (a, b))
The above function is just meant to update the attributes of any object of class bar. More specifically, foo is a generalized function for updating the attributes of objects of any class, not just bar, where different classes may give their objects different numbers of attributes.
So, is it possible to pass the attributes themselves, not their values, to that generalized function? Or is there any other way to do this: to make a generalized function that updates the attributes of objects of any class type?
Objects in Python are basically dictionaries. You could do something like this:
def foo(obj, attributes, values):
    for item, value in zip(attributes, values):
        obj.__dict__[item] = value

class Bar:
    def __init__(self, a, b):
        self.a = a
        self.b = b

bar = Bar(1, 2)
foo(bar, ["a", "b"], [3, 4])
assert bar.a == 3
assert bar.b == 4
That being said: just because you could does not mean you should. This solution is hacky, confusing, and really not necessary. In nearly all cases it would be better to just set the member variables directly:
bar.a = 3
bar.b = 4
If the fields themselves are dynamic in their naming, it would be best to just use a dict directly.
Edit: @MisterMiyagi is absolutely correct. You should use setattr(obj, item, value) instead of my even worse hack.
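With setattr the helper stays generic without reaching into __dict__ - a sketch of the suggested fix:

def foo(obj, attributes, values):
    # setattr goes through the normal attribute machinery, so it also
    # works with __slots__ and property setters, which raw __dict__
    # writes would either break on or silently bypass.
    for item, value in zip(attributes, values):
        setattr(obj, item, value)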

How to calculate hash of a python class object

In Python 3, I have a class, like below:
class Foo:
    def __init__(self):
        self.x = 3
    def fcn(self, val):
        self.x += val
Then I instantiate objects of that class, like so:
new_obj = Foo()
new_obj2 = Foo()
Now when I hash these objects, I get different hash values. I need them to return the same hash, as they are the same objects (in theory).
Any idea how I can do this?
Thank you to all who answered. You're right that instantiating a new instance of the same class is not actually the same object, as it occupies a different place in memory. What I ended up doing is similar to what @nosklo suggested.
I created a get_hashables method that returns a dictionary with all the properties of the class that constitute a unique class object, like so:
def get_hashables(self):
    return {'data': self.data, 'result': self.result}
Then my main method takes these 'hashable' variables and hashes them to produce the hash itself.
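A __hash__ built on top of that could be sketched as follows; note that a dict itself is unhashable, so its items are flattened into a sorted tuple first (the data/result attributes are taken from the snippet above):

class Foo:
    def __init__(self, data, result):
        self.data = data
        self.result = result

    def get_hashables(self):
        return {'data': self.data, 'result': self.result}

    def __hash__(self):
        # dicts are not hashable; hash a deterministic tuple of the items
        return hash(tuple(sorted(self.get_hashables().items())))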
class Foo:
    def __init__(self):
        self.x = 3
    def fcn(self, val):
        self.x += val
    def __hash__(self):
        return hash(self.x)
This will calculate the hash using self.x; that means the hash will be the same whenever self.x is the same. You can return anything from __hash__, but to prevent consistency bugs you should return the same hash for objects that compare equal. More about that in the docs.
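To keep hashing and equality consistent, __eq__ can be defined over the same field - a minimal sketch of that convention:

class Foo:
    def __init__(self):
        self.x = 3

    def __eq__(self, other):
        return isinstance(other, Foo) and self.x == other.x

    def __hash__(self):
        return hash(self.x)

assert Foo() == Foo()
assert hash(Foo()) == hash(Foo())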
They are not the same object. The expression Foo() invokes the class constructor, which runs Foo.__init__ on a new, unique instance on each call. Your two calls therefore return two independent objects, residing at different memory locations, each containing its own private x attribute.
You might want to read up on Python class and instance theory.

single class python list

How can every set method in Python's list type be overridden in a derived class, such that every item in that list is of a specific class?
Consider
class Index(int):
    pass

class IndexList(list):
    def __init__(self, int_list):
        for el in int_list:
            self.append(Index(el))
    def __setitem__(self, key, val):
        super(IndexList, self).__setitem__(key, Index(val))
    # override append, insert, etc...
Can this be done without directly overriding every single function that adds elements to the list? I expected simply overriding __setitem__ to be enough.
For example, if append is not overridden:
ilist = IndexList([1, 2])
ilist.append(3)
for i in ilist:
    print(isinstance(i, Index))  # True, True, False
You'll have to implement the various methods directly; the underlying C implementation does not call __setitem__ for each and every change, as it is far more efficient to directly manipulate the (dynamically grown) C array.
Take a look at the collections abstract base classes, specifically the MutableSequence ABC, to get an idea of which methods can mutate your list; to maintain your type invariant you'd need to implement insert, append, extend and __iadd__.
Better still, you can use the collections.abc.MutableSequence class as an alternative base class to list; this is a pure-Python implementation that casts many of those methods as calls to a core set of methods. You'd only need to provide implementations for __len__, __getitem__, __setitem__, __delitem__ and insert: the methods named in the Abstract Methods column of the table.
from collections import abc

class IndexList(abc.MutableSequence):
    def __init__(self, int_list):
        self._list = []
        for el in int_list:
            self.append(Index(el))
    def __len__(self): return len(self._list)
    def __getitem__(self, item): return self._list[item]
    def __delitem__(self, item): del self._list[item]
    def __setitem__(self, index, value):
        self._list[index] = Index(value)
    def insert(self, index, value):
        self._list.insert(index, Index(value))
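With that in place, the inherited mixin methods all funnel through insert and __setitem__, so the invariant holds. A quick check, reusing the question's example:

ilist = IndexList([1, 2])
ilist.append(3)       # append is a mixin that calls insert
ilist.extend([4, 5])  # extend is a mixin that calls append
for i in ilist:
    print(isinstance(i, Index))  # True for every element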

Inheritance of from_<type> in Python

Edit: There was some confusion, but I want to ask a general question about object-oriented design in Python.
Consider a class that lets you map data values to counts or frequencies:
class DataMap(dict):
    pass
Now consider a subclass that allows you to construct a histogram from a list of data:
class Histogram(DataMap):
    def __init__(self, list_of_values):
        # 1. Put appropriate super(...) call here if necessary
        # 2. Build the map of values to counts in self
        pass
Now consider a class that lets you make a smoothed probability mass table rather than a Histogram.
class ProbabilityMass(DataMap):
    pass
What is the best way to allow a ProbabilityMass to be constructed from either a Histogram or a list of values?
I "grew up" programming in C++, and in this case I would use an overloaded constructor. In Python I've thought of doing this with:
1. The constructor takes multiple arguments (all but one of these should be None)
2. I define from_Histogram and from_list methods
In the second case (which I believe is better), what is the best way to let the from_list method reuse the shared code from the Histogram constructor? A ProbabilityMass table is nearly identical to a Histogram table, but it is scaled so that the sum of all values is 1.0.
If you have come across a similar problem, please share your expertise!
To start with, if you think you want @staticmethod, you almost always don't. Either the function is not part of the class, in which case it should just be a free function, or it is part of the class but not tied to an instance, in which case it should be a @classmethod. Your named constructor is a good candidate for a @classmethod.
Also note that you should invoke A.__init__ from B via super(), otherwise multiple inheritance can bite you badly.
class A:
    def __init__(self, data):
        self.values_to_counts = {}
        for val in data:
            if val in self.values_to_counts:
                self.values_to_counts[val] += 1
            else:
                self.values_to_counts[val] = 1

    @classmethod
    def from_values_to_counts(cls, values_to_counts):
        self = cls([])
        self.values_to_counts = values_to_counts
        return self

class B(A):
    # parameter needs a default so the inherited classmethod's cls([]) call works
    def __init__(self, data, parameter=None):
        super(B, self).__init__(data)
        self.parameter = parameter

    def print_parameter(self):
        print(self.parameter)
In this case, you don't need a B.from_values_to_counts: it is inherited from A, and since that's how it was called, it will return an instance of B.
If you need to do more complex initialization in B, you can, using super(), which looks very similar to the way it does when you use it with instances. After all, a classmethod really isn't anything more complex than an instance method whose im_self attribute is assigned to the class itself.
class A:
    def __init__(self, data):
        self.values_to_counts = {}
        for val in data:
            if val in self.values_to_counts:
                self.values_to_counts[val] += 1
            else:
                self.values_to_counts[val] = 1

    @classmethod
    def from_values_to_counts(cls, values_to_counts):
        self = cls([])
        self.values_to_counts = values_to_counts
        return self

class B(A):
    def __init__(self, data, parameter=None):
        super(B, self).__init__(data)
        self.parameter = parameter

    def print_parameter(self):
        print(self.parameter)

    @classmethod
    def from_values_to_counts(cls, values_to_counts):
        self = super(B, cls).from_values_to_counts(values_to_counts)
        do_more_initialization(self)
        return self
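Translated back to the question's classes, the pattern might be applied as sketched below; the from_histogram name and the _normalize helper are assumptions about how ProbabilityMass could scale its counts, not code from the answer:

class DataMap(dict):
    pass

class Histogram(DataMap):
    def __init__(self, list_of_values):
        super().__init__()
        for val in list_of_values:
            self[val] = self.get(val, 0) + 1

class ProbabilityMass(DataMap):
    def __init__(self, list_of_values):
        super().__init__(Histogram(list_of_values))
        self._normalize()

    @classmethod
    def from_histogram(cls, histogram):
        self = cls([])
        self.update(histogram)
        self._normalize()
        return self

    def _normalize(self):
        # Scale the counts so they sum to 1.0.
        total = sum(self.values()) or 1
        for key in self:
            self[key] = self[key] / total

pm = ProbabilityMass.from_histogram(Histogram([1, 1, 2, 2]))
print(pm)  # {1: 0.5, 2: 0.5}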
