Set class attribute values in a generator - python

I have a list of lists. Each sublist contains objects of a custom class. What I want to do is set a certain attribute of each class object to 0. The simple way to do this would be a double for loop or similar:
for subl in L:
    for myObj in subl:
        myObj.attr = 0
Alternatively, I could use itertools.chain:
for myObj in itertools.chain.from_iterable(L):
    myObj.attr = 0
However, I wonder if I could set everything in one line. Could I perhaps use a generator-like structure to do this? Something along the lines of:
(myObj.attr=0 for subl in L for myObj in subl)
Now that won't really work, and will raise a SyntaxError, but is something even remotely similar possible?

This is an abuse of generator expressions, but:
any(setattr(obj, "attr", 0) for sub in L for obj in sub)
Or, perhaps slightly faster since there's no testing of each object:
from collections import deque
do = deque(maxlen=0).extend
do(setattr(obj, "attr", 0) for sub in L for obj in sub)
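If the inline deque trick looks too magic, the itertools documentation's recipes wrap the same idea in a small consume() helper. A minimal sketch of that approach (the throwaway Obj class is only there to make the example self-contained):
from collections import deque
from itertools import chain

def consume(iterator):
    # Exhaust an iterator, discarding every value: a deque with maxlen=0
    # keeps nothing, so this runs the generator purely for its side effects.
    deque(iterator, maxlen=0)

class Obj:
    pass

L = [[Obj(), Obj()], [Obj()]]
consume(setattr(obj, "attr", 0) for obj in chain.from_iterable(L))
print(all(obj.attr == 0 for obj in chain.from_iterable(L)))  # True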

See this example:
class C:
    def __init__(self):
        self.a = None

    def f(self, para):
        self.a = para

list1 = [C() for e in range(3)]
list2 = [C() for e in range(3)]
list3 = [list1, list2]

[c.f(5) for l in list3 for c in l]

for e in list3:
    for c in e:
        print c.a
Conclusion
You could give the class a setter method (like f in the example above) and call it in a list comprehension:
[myObj.f(0) for subl in L for myObj in subl]
Note the brackets.

Here is a simple solution that popped out in my head.
Using the built-in setattr, your suggestion (itertools.chain.from_iterable), and an abuse of a list comprehension:
import itertools

class Foo():
    def __init__(self):
        self.my_attr = 10

A = Foo()
B = Foo()
C = Foo()
D = Foo()
obj_list = [[A, B], [C, D]]
a = [setattr(obj, "my_attr", 0) for obj in itertools.chain.from_iterable(obj_list)]
Result:
>>> a
[None, None, None, None]
>>> A.my_attr
0
>>> B.my_attr
0
>>> C.my_attr
0
>>> D.my_attr
0
I found setattr to be very useful for cases like this: it's simple, short, and effective.
Hope this helps!

Related

Python: accidentally created a reference but not sure how

I imagine this is one in a very long list of questions from people who have inadvertently created references in Python, but I've got the following situation. I'm using scipy minimize to set the sum of the top row of an array to 5 (as an example).
import scipy.optimize as spo

class problem_test:
    def __init__(self):
        test_array = [[1, 2, 3, 4, 5, 6, 7],
                      [4, 5, 6, 7, 8, 9, 10]]

        def set_top_row_to_five(x, array):
            array[0] = array[0] + x
            return abs(sum(array[0]) - 5)

        adjustment = spo.minimize(set_top_row_to_five, 0, args=(test_array))
        print(test_array)
        print(adjustment.x)

ptest = problem_test()
However, the optimization is altering the original array (test_array):
[array([-2.03, -1.03, -0.03, 0.97, 1.97, 2.97, 3.97]), [4, 5, 6, 7, 8, 9, 10]]
[-0.00000001]
I realize I can solve this using, for example, deepcopy, but I'm keen to learn why this is happening so I don't do the same in future by accident.
Thanks in advance!
Names are references to objects. What you have to watch is whether an object (including one passed in as an argument) is modified in place or whether a new object is created. An example:
>>> l1 = list()
>>> l2 = l1
>>> l2.append(0)  # this modifies the object currently referenced by both l1 and l2
>>> print(l1)
[0]
Whereas:
>>> l1 = list()
>>> l2 = list(l1) # New list object has been created with initial values from l1
>>> l2.append(0)
>>> print(l1)
[]
Or:
>>> l1 = list()
>>> l2 = l1
>>> l2 = [0] # New list object has been created and assigned to l2
>>> l2.append(0)
>>> print(l1)
[]
Similarly, assuming l = [1, 2, 3]:
>>> def f1(list_arg):
...     return list_arg.reverse()
>>> print(f1(l), l)
None [3, 2, 1]
We have just passed through the None returned by the list.reverse method, and reversed l in place. However:
>>> def f2(list_arg):
...     ret_list = list(list_arg)
...     ret_list.reverse()
...     return ret_list
>>> print(f2(l), l)
[3, 2, 1] [1, 2, 3]
The function returns a new reversed list (initialized from l), while l itself remains unchanged. (Note: in this example the built-in reversed or slicing would of course make more sense.)
With nested structures, one must not forget that, for instance:
>>> l = [1, 2, 3]
>>> d1 = {'k': l}
>>> d2 = dict(d1)
>>> d1 is d2
False
>>> d1['k'] is d2['k']
True
Dictionaries d1 and d2 are two different objects, but their 'k' item is a single, shared instance. This is where copy.deepcopy might come in handy.
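For completeness, a quick sketch of what copy.deepcopy does differently here, continuing with d1 from above (d3 is just a new name for the deep copy):
>>> import copy
>>> d3 = copy.deepcopy(d1)
>>> d1['k'] is d3['k']
False
>>> d1['k'].append(4)
>>> d3['k']
[1, 2, 3]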
Care needs to be taken when passing objects around, to make sure they are modified or copied as intended. It can be helpful to return None (or a similar generic value) when making in-place changes, and to return the resulting object when working with a copy, so that the function or method interface itself hints at what is actually going on.
When an immutable object is "modified" (as the name suggests, it cannot really be), a new object is actually created and assigned either to a new name or back to the original one:
>>> s = 'abc'
>>> print('0x{:x} {}'.format(id(s), s))
0x7f4a9dbbfa78 abc
>>> s = s.upper()
>>> print('0x{:x} {}'.format(id(s), s))
0x7f4a9c989490 ABC
Note, though, that even an immutable type can contain a reference to a mutable object. For instance, with l = [1, 2, 3]; t1 = (l,); t2 = t1, one can call t1[0].append(4). The change is also visible in t2[0] (for the same reason as d1['k'] and d2['k'] above), while both tuples themselves remain unmodified.
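Spelled out interactively (a minimal sketch using the same names as in the sentence above):
>>> l = [1, 2, 3]
>>> t1 = (l,)
>>> t2 = t1
>>> t1[0].append(4)   # mutates the list held inside the tuple
>>> t2[0]
[1, 2, 3, 4]
>>> t1 is t2
True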
One extra caveat (a possible gotcha): a mutable default argument value is created once, when the function is defined, so when the function is called without passing an object it behaves like a "static" variable:
>>> def f3(arg_list=[]):
...     arg_list.append('x')
...     print(arg_list)
>>> f3()
['x']
>>> f3()
['x', 'x']
Since this is often not the behavior people expect at first glance, using mutable objects as default argument values is usually better avoided.
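A common way around this, sketched here with a hypothetical f4, is to default to None and create the list inside the function:
>>> def f4(arg_list=None):
...     if arg_list is None:
...         arg_list = []        # a fresh list on every call
...     arg_list.append('x')
...     print(arg_list)
>>> f4()
['x']
>>> f4()
['x']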
Something similar is true for mutable class attributes, where one object is shared between all instances:
>>> class C(object):
...     a = []
...     def m(self):
...         self.a.append('x')  # we actually modify the value of an attribute of class C
...         print(self.a)
>>> c1 = C()
>>> c2 = C()
>>> c1.m()
['x']
>>> c2.m()
['x', 'x']
>>> c1.m()
['x', 'x', 'x']
Note what the behavior is in a similar example with an immutable class attribute:
>>> class C(object):
...     a = 0
...     def m(self):
...         self.a += 1  # we assign a new object to an attribute of self
...         print(self.a)
>>> c1 = C()
>>> c2 = C()
>>> c1.m()
1
>>> c2.m()
1
>>> c1.m()
2
All the fun details can be found in the documentation: https://docs.python.org/3.6/reference/datamodel.html
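Applied back to the question, one possible fix (a sketch, not the only way) is to build the modified top row as a new array inside the objective instead of assigning into the passed-in list, and to pass args as a one-element tuple:
import numpy as np
import scipy.optimize as spo

test_array = [[1, 2, 3, 4, 5, 6, 7],
              [4, 5, 6, 7, 8, 9, 10]]

def set_top_row_to_five(x, array):
    top = np.asarray(array[0], dtype=float) + x   # new array; original row untouched
    return abs(top.sum() - 5)

adjustment = spo.minimize(set_top_row_to_five, 0, args=(test_array,))
print(test_array)      # still the original nested list
print(adjustment.x)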

Creating flat list from single float or list of floats

I wish to iterate over a zip of objects and floats together using a parent class function. Sometimes the subclasses have three objects and three floats in a list; this works fine:
L = [A, B, C]
F = [1, 2, 3]
for f, o in zip(L, F):
    # do stuff
Sometimes the subclass has one object and one float
L = A
F = 1
for f, o in zip(L, F):
    # TypeError
This throws an exception because F is not iterable.
If I try:
F = list(F)
This works for 2 or more floats but throws an exception for a single float (it's still not iterable :)).
If I try:
F = [F]
This solves the 1 float case but now returns a nested list when there are two or more floats!
a = [1, 2]
[a]   # gives [[1, 2]]
Is there a simple builtin way to receive either a single float or list of floats and return a single flat list? Or do I need something like itertools.chain?
Just check if you already have a list, and if not create one with a single element:
f = f if isinstance(f,list) else [f]
For example:
>>> f = 1
>>> f = f if isinstance(f,list) else [f]
>>> f
[1]
>>> f = [1,2]
>>> f = f if isinstance(f,list) else [f]
>>> f
[1, 2]
Note: I'm assuming you're using lists, but you could check against collections.abc.Iterable for a more generic solution.
A further note: I don't know where F/f comes from, but ideally it would have been created this way in the first place rather than fixed up afterwards.
You can try a = a if isinstance(a, collections.abc.Iterable) else [a]. This keeps a as-is if it is already iterable, and otherwise wraps it in a list. You also need not overwrite a; you could assign the result to another variable.
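A hypothetical as_list helper along those lines, which also treats strings as scalars rather than as iterables of characters (a common gotcha with this check):
from collections.abc import Iterable

def as_list(value):
    # Wrap scalars in a list, keep real iterables, but leave strings whole.
    if isinstance(value, Iterable) and not isinstance(value, (str, bytes)):
        return list(value)
    return [value]

print(as_list(1))       # [1]
print(as_list([1, 2]))  # [1, 2]
print(as_list("ab"))    # ['ab']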
You can design your objects so that they have a builtin __iter__ method along with __len__:
class CertainObject:
    def __init__(self):
        self.variables = [23, 4, 2]

    def __len__(self):
        return len(self.variables)

    def __iter__(self):
        for i in self.variables:
            yield i
Case 1 (multiple objects):
L = [CertainObject(), CertainObject(), CertainObject()]
F = [1, 2, 3]
for f, o in zip(L, F):
    pass
Case 2 (single object):
A = CertainObject()
F = 1
for a, b in zip(A, [F] * len(A)):
    print(a, b)
With __iter__, iterating over the instance of the object will not raise an error.
You can keep the case that returns a nested list, then convert it to a NumPy array, use the flatten() method, and convert back to a list if you want it that way:
import numpy as np
a = list(np.array(a).flatten())
This way, a = 1 will turn into [1], a = [1, 2] will turn into [1, 2], and a = [[1, 2]] will turn into [1, 2], so all outputs are lists.
Use extend:
L = []
F = []
L.extend([A])   # wrap a single item in a list; use L.extend(A) if A is already a list
F.extend([B])
for f, o in zip(L, F):
    # do stuff
This way both the single-item case and the multi-item case end up as flat (non-nested) lists.
Example with one item:
L = []
F = []
A = 1
B = 1
L.extend([A])
F.extend([B])
for f, o in zip(L, F):
    print(f, o)
output:
1 1
Example with more items:
L = []
F = []
A = [1, 2, 3]
B = [1, 2, 3]
L.extend(A)
F.extend(B)
for f, o in zip(L, F):
    print(f, o)
output:
1 1
2 2
3 3

Python: Get attribute from a list of objects of the same type in the most efficient way

I was wondering which is the best way to obtain an attribute from a list of objects of the same type. Is there a more efficient solution than a for loop? I tried getattr, but it does not work with lists.
Just to make it clear, let's say I've defined a class Foo with an attribute Bar:
class Foo():
    def __init__(self, Bar):
        self.Bar = Bar

A = Foo(5)
B = Foo(6)
C = [A, B]
Can I do something to obtain the attribute Bar at the same time for the whole list so that the final result would be a list with values [5,6] ?
Thanks for your help
You could use a list comprehension:
C = [obj.Bar for obj in (A, B)]
Or alternatively:
from operator import attrgetter
C = map(attrgetter('Bar'), (A, B))
Note: It's not strictly "at the same time" but looping over the objects one-by-one is as good as it gets. Also note, the objects don't even need to be of the same type - they just need to have an attribute Bar. You can use getattr to return default values if the attribute doesn't exist:
C = [getattr(obj, 'Bar', None) for obj in (A, B)]
Or, if you wanted to discard items that don't have that attribute instead of a default value:
C = [obj.Bar for obj in (A, B) if hasattr(obj, 'Bar')]
C = [getattr(obj, 'Bar') for obj in [A, B]]
You could even use:
>>> for nam in dir():
...     obj = eval(nam)
...     if isinstance(obj, Foo):
...         C.append(getattr(obj, 'Bar'))
...
>>> C
[5, 6]

Intersection of instance data and list

I have a list of instances of a class:
>>> class A:
...     def __init__(self, l=None):
...         self.data = l
...
>>> k = list()
>>> for x in range(5):
...     k.append(A(x))
Now I need to intersect the 'data' field against a given list
>>> m=[0,2]
>>> f=set([r.data for r in k]) & set(m)
>>> f
set([0, 2])
So far so good.
But now I need to get the instances of A whose data is one of the values in the intersection set f.
Is there an easier way to achieve all of this - rather than iterating through instances again?
You can use a list comprehension:
>>> [x for x in k if x.data in f]
[<__main__.A instance at 0x92b1c0c>, <__main__.A instance at 0x92b1c4c>]
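If the intersection set itself is not needed afterwards, a single pass over k is enough. A small sketch reusing k and m from the question (the set is only there to keep membership tests fast):
>>> wanted = set(m)
>>> matches = [x for x in k if x.data in wanted]
>>> [x.data for x in matches]
[0, 2]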
While iterating, you can check if the item is in the m list.
class A:
    def __init__(self, l=None):
        self.data = l

result = []
k = list()
m = [0, 2]
for x in range(5):
    some_a = A(x)
    k.append(some_a)
    if x in m:
        result.append(some_a)

print result
[<__main__.A instance at 0x021CBEB8>, <__main__.A instance at 0x021CBF08>]

Python why would you use [:] over =

I am just learning Python and I am going through the tutorials on https://developers.google.com/edu/python/strings
Under the String Slices section
s[:] is 'Hello' -- omitting both always gives us a copy of the whole
thing (this is the pythonic way to copy a sequence like a string or
list)
Out of curiosity why wouldn't you just use an = operator?
s = 'hello'
bar = s[:]
foo = s
As far as I can tell both bar and foo have the same value.
= makes a reference; using [:] creates a copy. For strings, which are immutable, this doesn't really matter, but for lists etc. it is crucial.
>>> s = 'hello'
>>> t1 = s
>>> t2 = s[:]
>>> print s, t1, t2
hello hello hello
>>> s = 'good bye'
>>> print s, t1, t2
good bye hello hello
but:
>>> li1 = [1,2]
>>> li = [1,2]
>>> li1 = li
>>> li2 = li[:]
>>> print li, li1, li2
[1, 2] [1, 2] [1, 2]
>>> li[0] = 0
>>> print li, li1, li2
[0, 2] [0, 2] [1, 2]
So why use it when dealing with strings? The built-in strings are immutable, but whenever you write a library function expecting a string, a user might give you something that "looks like a string" and "behaves like a string", but is a custom type. This type might be mutable, so it's better to take care of that.
Such a type might look like:
class MutableString(object):
    def __init__(self, s):
        self._characters = [c for c in s]

    def __str__(self):
        return "".join(self._characters)

    def __repr__(self):
        return "MutableString(\"%s\")" % str(self)

    def __getattr__(self, name):
        return str(self).__getattribute__(name)

    def __len__(self):
        return len(self._characters)

    def __getitem__(self, index):
        return self._characters[index]

    def __setitem__(self, index, value):
        self._characters[index] = value

    def __getslice__(self, start, end=-1, stride=1):
        return str(self)[start:end:stride]

if __name__ == "__main__":
    m = MutableString("Hello")
    print m
    print len(m)
    print m.find("o")
    print m.find("x")
    print m.replace("e", "a")  # translate to German ;-)
    print m
    print m[3]
    m[1] = "a"
    print m
    print m[:]

    copy1 = m
    copy2 = m[:]
    print m, copy1, copy2
    m[1] = "X"
    print m, copy1, copy2
Disclaimer: This is just a sample to show how it could work and to motivate the use of [:]. It is untested, incomplete, and probably performs horribly.
They have the same value, but there is a fundamental difference when dealing with mutable objects.
Say foo = [1, 2, 3]. You assign bar = foo, and baz = foo[:]. Now let's say you want to change bar - bar.append(4). You check the value of foo, and...
print foo
# [1, 2, 3, 4]
Now where did that extra 4 come from? It's because bar refers to the very same object as foo, so when you change one you change the other. If you change baz instead - baz.append(5) - nothing happens to the other two, because baz was assigned a copy of foo.
Note however that because strings are immutable, it doesn't matter.
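The paragraph above, spelled out as a runnable sketch:
foo = [1, 2, 3]
bar = foo      # same object: bar is foo -> True
baz = foo[:]   # shallow copy: baz is foo -> False

bar.append(4)  # visible through foo as well, since it is the same list
baz.append(5)  # only baz changes

print(foo)     # [1, 2, 3, 4]
print(bar)     # [1, 2, 3, 4]
print(baz)     # [1, 2, 3, 5]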
If you have a list the result is different:
l = [1,2,3]
l1 = l
l2 = l[:]
l2 is a copy of l (a different object), while l1 is an alias for l, which means that l1[0] = 7 will also modify l, while l2[1] = 7 will not.
While referencing an object and referencing a copy of that object makes no difference for an immutable object like a string, it does matter for mutable objects such as lists.
The same thing with a mutable object:
a = [1, 2, 3, 4]
b = a
c = a[:]
a[0] = -1
print a  # will print [-1, 2, 3, 4]
print b  # will print [-1, 2, 3, 4]
print c  # will print [1, 2, 3, 4]
A visualization on pythontutor of the above example - http://goo.gl/Aswnl.
