I created a class that holds several numpy arrays as attributes, and I wrote a __getitem__ method that attempts to return the class with each of those arrays indexed, so that:
MyClass[i].array1 is equivalent to MyClass.array1[i]
I would like __getitem__ to return references to the original arrays, but it is returning copies, so assignment doesn't work:
print(MyClass[i].array1)
returns 0
MyClass[i].array1 = 10
print(MyClass[i].array1)
still returns 0
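I understand that plain attribute assignment only rebinds the attribute on whatever object it is applied to; it never writes into the underlying array. A toy illustration (Holder is just a stand-in, not my actual class):

import numpy as np

class Holder:
    pass

h = Holder()
h.array1 = np.zeros(4)

h.array1[1] = 10   # writes into the existing array, in place
h.array1 = 10      # rebinds the attribute to a new object; the original array is untouched

So assigning through the temporary object returned by my __getitem__ never reaches the original arrays.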
This is the __getitem__ code I'm using:
import copy

def __getitem__(self, indices):
    g = copy.copy(self)  # should be a shallow copy?
    for key, value in g.__dict__.items():
        g.__dict__[key] = value[indices]
    return g
I've also tried:
def __getitem__(self, indices):
    g = MyClass()
    for key, value in self.__dict__.items():
        g.__dict__[key] = value[indices]
    return g
and:
def __getitem__(self, indices):
    g = MyClass()
    g.__dict__ = self.__dict__
    for key, value in g.__dict__.items():
        g.__dict__[key] = value[indices]
    return g
Note that this last version does indeed seem to return references, but not in the way I want. If I index my class with this code, it indexes and truncates the arrays in the original object, so:
g = MyClass[i].array1 overwrites the original array in MyClass, leaving only the element at index i:
print(len(MyClass.array1))
returns 128
print(MyClass[0].array1)
returns 0
now MyClass.array1 is a single float value, obviously not what I wanted.
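I think I can see why: after g.__dict__ = self.__dict__, both objects share a single attribute dictionary, so rebinding an attribute through g rebinds it on the original object as well. A toy illustration (Demo is just a stand-in, not my class):

class Demo:
    pass

a = Demo()
a.x = [1, 2, 3]

b = Demo()
b.__dict__ = a.__dict__   # b now shares a's attribute dictionary
b.x = b.x[0]              # rebinds x in the shared dict
print(a.x)                # 1 -- the original list has been replaced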
I hope this is clear enough, and any help would be appreciated.
I found this but I wasn't quite sure if this applied to my problem.
This seems like a really bad idea, but it also seems like a fun problem so here is my crack at it:
class MyProxy(object):
    def __init__(self, obj, key):
        # Bypass our own __setattr__ so these two names are stored normally
        super(MyProxy, self).__setattr__('obj', obj)
        super(MyProxy, self).__setattr__('key', key)

    def __getattr__(self, name):
        # Read attribute `name` from the wrapped object and index it
        return getattr(self.obj, name).__getitem__(self.key)

    def __setattr__(self, name, value):
        # Write through into element `key` of the wrapped object's attribute
        return getattr(self.obj, name).__setitem__(self.key, value)


class MyClass(object):
    def __init__(self, array_length):
        self.array_length = array_length

    def __getitem__(self, key):
        if key >= self.array_length:
            raise IndexError
        return MyProxy(self, key)
Example:
>>> obj = MyClass(4) # replace 4 with the length of your arrays
>>> obj.array1 = [1, 2, 3, 4]
>>> obj.array2 = [5, 6, 7, 8]
>>> for c in obj:
...     print(c.array1, c.array2)
...
1 5
2 6
3 7
4 8
>>> obj[1].array1
2
>>> obj[1].array1 = 5
>>> obj.array1
[1, 5, 3, 4]
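As a quick check (my own addition, using numpy arrays since that was the original use case), writes really do go through the proxy into the underlying array:

import numpy as np

obj = MyClass(4)
obj.array1 = np.zeros(4)
obj[1].array1 = 10        # MyProxy.__setattr__ performs obj.array1[1] = 10
print(obj.array1)         # element 1 of the original array is now 10.0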
The following is a class that customizes how elements of a two-dimensional array are read and written:
class array:
    def __init__(self, m, n):
        # Build an m x n grid of zeros, stored as a list of row lists
        self._rows = []
        for _ in range(m):
            self._rows.append([0] * n)

    def __getitem__(self, key):
        row, col = key              # key arrives as a (row, col) tuple
        return self._rows[row][col]

    def __setitem__(self, key, value):
        row, col = key
        self._rows[row][col] = value
If you try to unpack a single value into multiple names, you get an error:
a, b = 3 # doesn't work
However, Python supports "multiple assignment": if there are the same number of values on the right as names on the left, each value is assigned to the corresponding name:
a, b = 3, 4 # a = 3, b = 4
Lists and tuples work as well:
a, b = [3, 4]
a, b = (3, 4)
Your code:
def __getitem__(self, key):
row, col = key
return self._rows[row][col]
assumes that key is a tuple, list, or other iterable with exactly two elements, and unpacks it into its components, assigning them to row and col respectively.
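For example (a 3 x 4 grid picked arbitrarily), indexing with a[1, 2] passes the tuple (1, 2) as key, which the methods then unpack:

a = array(3, 4)      # 3 x 4 grid of zeros
a[1, 2] = 7          # __setitem__ receives key = (1, 2)
print(a[1, 2])       # __getitem__ receives key = (1, 2); prints 7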
I have a dictionary:
d = {}
d[3] = 0
d[1] = 4
I tried
mask = d > 1 # TypeError: '>' not supported between instances of 'dict' and 'int'
mask = d.values > 1 # TypeError: '>' not supported between instances of 'builtin_function_or_method' and 'int'
Neither is correct. Is it possible to perform this computation without using a dictionary comprehension?
The desired output would be:
{3: False, 1: True}
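For reference, the dictionary comprehension I'm trying to avoid would be:

mask = {k: v > 1 for k, v in d.items()}   # {3: False, 1: True}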
I feel like what you want is the ability to actually write d < 5 and magically get a new dictionary back (which I don't think is possible with a plain dict). That said, I thought this was a great idea, so I implemented a first version:
"""Here is our strategy for implementing this:
1) Inherit the abstract Mapping which define a
set of rules — interface — we will have to respect
to be considered a legitimate mapping class.
2) We will implement that by delegating all the hard
work to an inner dict().
3) And we will finally add some methods to be able
to use comparison operators.
"""
import collections.abc
import operator
"Here is step 1)"
class MyDict(collections.abc.MutableMapping):
"step 2)"
def __init__(self, *args):
self.content = dict(*args)
# All kinds of delegation to the inner dict:
    def __iter__(self): return iter(self.content)  # iterate over keys, as a Mapping should
def __len__(self): return len(self.content)
def __getitem__(self, key): return self.content[key]
def __setitem__(self, key, value): self.content[key] = value
def __delitem__(self, key): del self.content[key]
def __str__(self): return str(self.content)
"And finally, step 3)"
# Build where function using the given comparison operator
def _where_using(comparison):
def where(self, other):
# MyDict({...}) is optional
# you could just decide to return a plain dict: {...}
            return MyDict({k: comparison(v, other) for k, v in self.items()})
return where
# map all operators to the corresponding "where" method:
__lt__ = _where_using(operator.lt)
__le__ = _where_using(operator.le)
__eq__ = _where_using(operator.eq)
__gt__ = _where_using(operator.gt)
__ge__ = _where_using(operator.ge)
We can use this the way you asked for:
>>> d = MyDict({3:0, 1:4})
>>> print(d)
{3: 0, 1: 4}
>>> print(d > 1)
{3: False, 1: True}
Note that this would also work on other types of (comparable) objects:
>>> d = MyDict({3:"abcd", 1:"abce"})
>>> print(d)
{3: 'abcd', 1: 'abce'}
>>> print(d > "abcd")
{3: False, 1: True}
>>> print(d > "abcc")
{3: True, 1: True}
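And because MyDict is a real MutableMapping (step 1 above), the usual dict-style operations keep working alongside the comparison operators, for example:

>>> d = MyDict({3: 0, 1: 4})
>>> 3 in d
True
>>> list(d)
[3, 1]
>>> d[2] = 7
>>> len(d)
3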
Here's an easy way to get something like d < 5. You just need:
import pandas as pd
res = pd.Series(d) < 4
res.to_dict()  # returns {3: True, 1: False}
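For the exact comparison from the question (d > 1), the same approach gives the desired output:

import pandas as pd
d = {3: 0, 1: 4}
mask = (pd.Series(d) > 1).to_dict()
print(mask)   # {3: False, 1: True}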
I already asked this question but still can't figure out how to implement this.
I have a matrix class:
class Matrix(list):
    def __getitem__(self, item):
        try:
            # plain int or slice index: defer to list
            return list.__getitem__(self, item)
        except TypeError:
            # (row_slice, col_slice) tuple: build the submatrix
            rows, cols = item
            return [row[cols] for row in self[rows]]
It lets me do things like this:
m = Matrix([[i+j for j in [0,1,2,3]] for i in [0,4,8,12]])
print(m[0:2, 0:2])
will print: [[0, 1], [4, 5]]
I also want to be able to add/multiply all submatrix elements by given value, like:
m[0:2, 0:2] += 1
print(m[0:2, 0:2])
should print: [[1, 2], [5, 6]]
I'm trying to implement the __add__ and __setitem__ methods:
def __setitem__(self, key, value):
    print(key, value)

def __add__(self, item):
    print(item)
    for i in range(len(self)):
        for j in range(len(self[0])):
            self[i][j] += item
At least I want to see they print something. But it doesn't happen. I'm trying with such example:
m[1:2, 2:3] = m[1:2, 2:3] + 1
And I get an error: TypeError: can only concatenate list (not "int") to list.
So I'm not even reaching my magic methods; the call fails before that. How can I do this?
You have to return a Matrix object from __getitem__:
class Matrix(list):
    def __getitem__(self, item):
        print("get")
        try:
            return Matrix(list.__getitem__(self, item))
        except TypeError:
            rows, cols = item
            return Matrix([row[cols] for row in self[rows]])

    def __setitem__(self, key, value):
        print(key, value)

    def __add__(self, item):
        print("add called with", item)
        # for i in range(len(self)):
        #     for j in range(len(self[0])):
        #         self[i][j] += item

m = Matrix([[i + j for j in [0, 1, 2, 3]] for i in [0, 4, 8, 12]])
print(m[1:2, 2:3])
m[1:2, 2:3] = m[1:2, 2:3] + 1
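Building on that, here is a rough sketch (my own take, assuming the two-argument key is always a (row_slice, col_slice) pair) of an __add__ and __setitem__ that would make m[0:2, 0:2] = m[0:2, 0:2] + 1 actually update the matrix:

class Matrix(list):
    def __getitem__(self, item):
        try:
            return list.__getitem__(self, item)
        except TypeError:
            rows, cols = item
            return Matrix([row[cols] for row in self[rows]])

    def __add__(self, other):
        # elementwise addition of a scalar, returning a new Matrix
        return Matrix([[x + other for x in row] for row in self])

    def __setitem__(self, key, value):
        try:
            list.__setitem__(self, key, value)
        except TypeError:
            rows, cols = key
            for vi, ri in enumerate(range(*rows.indices(len(self)))):
                target_row = list.__getitem__(self, ri)   # the real row, not a copy
                for vj, ci in enumerate(range(*cols.indices(len(target_row)))):
                    target_row[ci] = value[vi][vj]

m = Matrix([[i + j for j in [0, 1, 2, 3]] for i in [0, 4, 8, 12]])
m[0:2, 0:2] = m[0:2, 0:2] + 1
print(m[0:2, 0:2])   # [[1, 2], [5, 6]]

The key point from the answer above still holds: the two-dimensional branch of __getitem__ must return a Matrix so that + dispatches to Matrix.__add__ rather than to list concatenation.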
I have a numpy array that contains objects:
x = np.array([obj1,obj2,obj3])
Here is the definition of the object:
class obj():
def __init__(self,id):
self.id = id
obj1 = obj(6)
obj2 = obj(4)
obj3 = obj(2)
Instead of accessing the numpy array by the position of the object, I want to access it by the value of its id attribute.
For example:
# x[2] = obj3
# x[4] = obj2
# x[6] = obj1
After doing some research, I learned that I could make a structured array:
x = np.array([(3,2,1)],dtype=[('2', 'i4'),('4', 'i4'), ('6', 'i4')])
# x['2'] --> 3
However, the problem with this is that I want the array to take integers as indices, and structured-array field names must be strings. Furthermore, I don't think structured arrays can hold arbitrary objects.
You should be able to use filter() here, along with a lambda expression:
np.array(list(filter(lambda o: o.id == 2, x)))
Note that filter() returns a list in Python 2 but an iterator in Python 3, so the result is wrapped in list() before building a new np.array from it.
But this does not take care of duplicate keys if you want key-like access to your data: more than one object could have the same id attribute, so you might want to enforce uniqueness of the keys.
If you only want to be able to access subarrays "by index" (e.g. x[2, 4]), with the id as the index, then you could simply create your own container:
import collections

class MyArray(collections.OrderedDict):
    def __init__(self, values):
        # Key each object by its id attribute
        super(MyArray, self).__init__((v.id, v) for v in values)

    def __rawgetitem(self, key):
        return super(MyArray, self).__getitem__(key)

    def __getitem__(self, key):
        if not hasattr(key, '__iter__'):
            # Single id: return the object itself
            return self.__rawgetitem(key)
        # Several ids: return a sub-container with just those objects
        return MyArray(self.__rawgetitem(k) for k in key)

    def __repr__(self):
        # Display entries sorted by id, matching the examples below
        return 'MyArray({{{}}})'.format(
            ', '.join('{}: {}'.format(k, self.__rawgetitem(k)) for k in sorted(self.keys())))
>>> class obj():
... def __init__(self,id):
... self.id = id
... def __repr__ (self):
... return "obj({})".format(self.id)
...
>>> obj1 = obj(6)
>>> obj2 = obj(4)
>>> obj3 = obj(2)
>>> x = MyArray([obj1, obj2, obj3])
>>> x
MyArray({2: obj(2), 4: obj(4), 6: obj(6)})
>>> x[4]
obj(4)
>>> x[2, 4]
MyArray({2: obj(2), 4: obj(4)})
So I have made my own dict-based named-tuple class:
class t(dict):
def __getattr__(self, v):
try:
return self[v]
except KeyError:
raise AttributeError("Key " + str(v) + " does not exist.")
def __init__(self, *args, **kwargs):
for source in args:
for i, j in source.items():
self[i] = j
for i, j in kwargs.items():
self[i] = j
>>> thing = t(cow=10, moo='moooooooo')
>>> thing.cow
10
>>> thing.moo
'moooooooo'
The point of using a dict is so I can use the ** splat operator, and so the whole thing can be serialized to JSON as a dict. The double for loop in __init__ is so I can immediately reconstruct it from the dict after deserialization.
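For instance, this is the round trip I have in mind (my own toy example):

import json

thing = t(cow=10, moo='moooooooo')
s = json.dumps(thing)        # serializes just like a plain dict
thing2 = t(json.loads(s))    # the *args loop rebuilds it from the decoded dict
print(thing2.cow)            # 10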
The only thing I'm missing is the multi-assignment thing you can do with tuples, like
>>> thing = (1, 3, ('blargh', 'mooo'))
>>> a, b, (c, d) = thing
>>> a
1
>>> b
3
>>> c
'blargh'
>>> d
'mooo'
Is there any way I can get this sort of behavior with my own custom data type? Is there a method I can override or a class I can inherit from to get it?
Yes. Implement __iter__().
class unpackable_dict(dict):
def __iter__(self):
return (self[key] for key in sorted(self.keys()))
d = unpackable_dict(a=1, b=2)
a, b = d
The reason you can't normally unpack values from a dict the way you can from a tuple is that iterating over a dict yields its keys, and (before Python 3.7) dicts didn't even have a guaranteed order. I've used a generator expression to get the values out in the sorted order of their keys. You might also consider using an OrderedDict (or relying on insertion order in Python 3.7+) instead, though.
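Applied to your class, it would look like the sketch below (my own adaptation). One caveat: once __iter__ yields values, for k in thing and other key-iteration idioms will yield values instead of keys, which may surprise code that expects normal dict iteration.

class t(dict):
    def __getattr__(self, v):
        try:
            return self[v]
        except KeyError:
            raise AttributeError("Key " + str(v) + " does not exist.")

    def __init__(self, *args, **kwargs):
        for source in args:
            for i, j in source.items():
                self[i] = j
        for i, j in kwargs.items():
            self[i] = j

    def __iter__(self):
        # yield values in key-sorted order so instances unpack like tuples
        return (self[key] for key in sorted(self.keys()))

thing = t(cow=10, moo='moooooooo')
cow, moo = thing   # sorted keys: 'cow' < 'moo', so cow == 10 and moo == 'moooooooo'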