I am working with a list of namedtuples. I would like to add a field to each named tuple after it has already been created. It seems I can do that by just referencing it as an attribute (as in namedtuple.attribute = 'foo'), but then it isn't added to the list of fields. Is there any reason why I shouldn't do it this way if I don't do anything with the fields list? Is there a better way to add a field?
>>> from collections import namedtuple
>>> result = namedtuple('Result',['x','y'])
>>> result.x = 5
>>> result.y = 6
>>> (result.x, result.y)
(5, 6)
>>> result.description = 'point'
>>> (result.x, result.y, result.description)
(5, 6, 'point')
>>> result._fields
('x', 'y')
What you do works because namedtuple(...) returns a new class. To actually get a Result object, you instantiate that class. So the correct way is:
Result = namedtuple('Result', ['x', 'y'])
result = Result(5, 6)
And you'll find that adding attributes to these objects does not work. So the reason you shouldn't do it is that it doesn't work; only abusing the class objects works, and I hope I don't need to go into detail about why that is a horrible, horrible idea.
Note that regardless of whether you can add attributes to namedtuples (and even if you list all the attributes you need beforehand), you cannot change a namedtuple object after it's created. Tuples are immutable. So if you need to change objects after creation for any reason, in any way or shape, you can't use namedtuple. You're better off defining a custom class (some of what namedtuple adds for you doesn't even make sense for mutable objects).
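If you do need mutable records, here is a minimal sketch of such a custom class using a dataclass (available since Python 3.7; the names are just illustrative):
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int
    description: str = ''   # an extra, optional field

p = Point(5, 6)
p.description = 'point'     # instances are mutable, unlike namedtuples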
Notice that here you're modifying the type of the named tuples, not instances of that type. In this case, you'd probably want to create a new type with an additional field from the old one:
result = namedtuple('Result', result._fields + ('description',))
e.g.:
>>> result = namedtuple('Result',['x','y'])
>>> result = namedtuple('Result', result._fields + ('description',))
>>> result._fields
('x', 'y', 'description')
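Existing instances of the old type don't migrate automatically; one sketch (the names Old/old are just illustrative) is to unpack the old values and append the new one:
>>> Old = namedtuple('Result', ['x', 'y'])
>>> old = Old(5, 6)
>>> result(*(old + ('point',)))   # concatenate the values, then build the new type
Result(x=5, y=6, description='point')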
You can easily concatenate namedtuples, keeping in mind that they are immutable:
from collections import namedtuple
T1 = namedtuple('T1', 'a,b')
T2 = namedtuple('T2', 'c,d')
t1 = T1(1,2)
t2 = T2(3,4)
def sum_nt_classes(*args):
    # Concatenate the _fields of all the namedtuple classes into a new class
    return namedtuple('_', ' '.join(sum(map(lambda t: t._fields, args), ())))

def sum_nt_instances(*args):
    # Build the combined class, then fill it with the concatenated values
    return sum_nt_classes(*args)(*sum(args, ()))

print(sum_nt_classes(T1, T2)(5, 6, 7, 8))   # _(a=5, b=6, c=7, d=8)
print(sum_nt_instances(t1, t2))             # _(a=1, b=2, c=3, d=4)
You cannot add a new field to a namedtuple after defining it. The only way is to create a new template and create new namedtuple instances.
Analysis
>>> from collections import namedtuple
>>> result = namedtuple('Result',['x','y'])
>>> result
<class '__main__.Result'>
result is not a tuple, but the class which creates tuples.
>>> result.x
<property object at 0x02B942A0>
You create a new tuple like this:
>>> p = result(1, 2)
>>> p
Result(x=1, y=2)
>>> p.x
1
This prints the value of x in p.
>>> p.x = 5
Traceback (most recent call last):
File "<pyshell#10>", line 1, in <module>
p.x = 5
AttributeError: can't set attribute
This raises an AttributeError because tuples are immutable.
>>> result.x = 5
>>> result
<class '__main__.Result'>
>>> result._fields
('x', 'y')
>>> p = result(1, 2)
>>> p
Result(x=1, y=2)
This doesn't change anything.
>>> result.description = 'point'
>>> result
<class '__main__.Result'>
>>> result._fields
('x', 'y')
This doesn't change anything either.
Solution
>>> result = namedtuple('Result', ['x','y'])
>>> p = result(1, 2)
>>> p
Result(x=1, y=2)
>>> # I need one more field
>>> result = namedtuple('Result',['x','y','z'])
>>> p1 = result(1, 2, 3)
>>> p1
Result(x=1, y=2, z=3)
>>> p
Result(x=1, y=2)
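If you want to carry the old values over into the new type, one option (assuming Python 3.5+ argument unpacking) is:
>>> p1 = result(*p, 3)   # reuse p's x and y, add z
>>> p1
Result(x=1, y=2, z=3)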
Related
I am attempting to access an attribute from an object I have created in a separate Python file.
I have tried the following code:
print(self.GENOME[0][0].x)
where self.GENOME[0][0] is the object memory address.
However, I get
AttributeError: 'set' object has no attribute 'x'
agent.py:
import neuron
#Creates an array of custom shape (3,4,3) and assigns unique object
#address
for ii in range(len(_Topology)):
    _Genome[ii] = [{neuron.Neuron()} for i in range(_Topology[ii]+1)]
#Calls object variable
print(self.GENOME[0][0].x)
neuron.py:
class Neuron:
    def __init__(self):
        self.x = 50
_Genome[ii] contains a list that contains a set of at least one Neuron instance. Simplifying, you can make it like this:
>>> a = [{Neuron()} for _ in [1,2]]
>>> a
[{<__main__.Neuron object at 0x0000000002CAFF28>}, {<__main__.Neuron object at 0x0000000002CAFF60>}]
>>> q = [a]
>>> q
[[{<__main__.Neuron object at 0x0000000002CAFF28>}, {<__main__.Neuron object at 0x0000000002CAFF60>}]]
>>>
If you print _Genome, it will look something like that (I'm assuming _Genome is list-like); q above should be analogous to _Genome.
Indexing into it looks like this:
>>> q[0]
[{<__main__.Neuron object at 0x0000000002CAFF28>}, {<__main__.Neuron object at 0x0000000002CAFF60>}]
>>> type(q[0])
<class 'list'>
>>> q[0][0]
{<__main__.Neuron object at 0x0000000002CAFF28>}
>>> type(q[0][0])
<class 'set'>
>>>
Set behaviour is well documented - just like most of Python.
One way to access the contents of a set is with a for loop:
>>> for thing in q[0][0]:
...     print(thing.x)
...
50
>>>
Another way to access the contents of a set is the pop() method, but this removes an arbitrary item from the set. I don't think you really want this: you have no control over which item you get if there is more than one, and the original set ends up with one item fewer.
>>> x = [[{Neuron()},{Neuron}]]
>>> t = x[0][0].pop()
>>> t
<__main__.Neuron object at 0x0000000002C2F2E8>
>>> t.x
50
>>> x
[[set(), {<class '__main__.Neuron'>}]]
>>>
You could also make a list from the set and use indices to access the contents of the list.
>>> q
[[{<__main__.Neuron object at 0x0000000002CAFF28>}, {<__main__.Neuron object at 0x0000000002CAFF60>}]]
>>> z = list(q[0][0])
>>> z
[<__main__.Neuron object at 0x0000000002CAFF28>]
>>> z[0].x
50
>>>
All of that seems overly complicated and you would probably be better off changing the way you contain the Neuron instances. I have no idea if this is feasible for you. Just dispense with the set containing a single instance:
>>> a = [Neuron() for _ in [1,2]]
>>> a
[<__main__.Neuron object at 0x0000000002C2FDD8>, <__main__.Neuron object at 0x0000000002CD00B8>]
>>> q = [a]
>>> q[0][0]
<__main__.Neuron object at 0x0000000002C2FDD8>
>>> type(q[0][0])
<class '__main__.Neuron'>
>>> q[0][0].x
50
>>>
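Applied to agent.py from the question, the change is just dropping the set braces (a sketch that keeps the question's names, which I'm assuming are defined elsewhere):
import neuron

for ii in range(len(_Topology)):
    _Genome[ii] = [neuron.Neuron() for i in range(_Topology[ii] + 1)]

print(_Genome[0][0].x)   # indexes a Neuron directly; prints 50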
Are the following declarations different?
l1=list
l2=list()
When I used the type() function, these were the results:
type(l1)
<class 'type'>
type(l2)
<class 'list'>
l1 is l2
False
This probably shows that l1 and l2 are not the same. Why does l1 belong to class type and not class list?
l1 = list means assigning the list class itself to the variable l1.
l2 = list() means calling list() to create a new list object and assigning that list to the variable l2.
When you call list, you get an instance of that class:
>>> l = list()
>>> l
[]
but when you assign list itself to another variable, that variable becomes just another reference to the same class, and you can use it exactly as you would use list, for example:
>>> a = list
>>> l1 = a()  # same as l1 = list()
>>> l1
[]
>>> a
<class 'list'>
>>> isinstance(l1, a)
True
>>> isinstance(l1, list)
True
I hope this helps you understand.
In Python, nothing gets called unless you ask for it to be called, with parentheses. Even if there are no arguments. Without the parentheses, you're just referring to the function, or method, or class, or whatever as a value. (This is different from some other languages, like Ruby or Perl.)
This may be a little more obvious with functions or methods:
>>> input
<function input>
>>> abs
<function abs>
>>> 'abc'.upper
<function str.upper>
… but it's exactly the same with classes:
>>> list
list
Those are all perfectly good values, which we can even store in a variable:
>>> abcupper = 'abc'.upper
When you call a function or method, you know what that does. When you call a class, that's how you construct an instance of the class.
Either way, you need the parentheses:
>>> abs(-2)
2
>>> list((1, 2, 3))
[1, 2, 3]
… even if there are no arguments:
>>> 'abc'.upper()
'ABC'
>>> list()
[]
… even if you're calling a method you stored in a variable earlier:
>>> abcupper()
'ABC'
I imagine this is one in a very long list of questions from people who have inadvertently created references in Python, but I've got the following situation. I'm using scipy's minimize to set the sum of the top row of an array to 5 (as an example).
import scipy.optimize as spo

class problem_test:
    def __init__(self):
        test_array = [[1,2,3,4,5,6,7],
                      [4,5,6,7,8,9,10]]

        def set_top_row_to_five(x, array):
            array[0] = array[0] + x
            return abs(sum(array[0]) - 5)

        adjustment = spo.minimize(set_top_row_to_five, 0, args=(test_array))
        print(test_array)
        print(adjustment.x)

ptest = problem_test()
However, the optimization is altering the original array (test_array):
[array([-2.03, -1.03, -0.03, 0.97, 1.97, 2.97, 3.97]), [4, 5, 6, 7, 8, 9, 10]]
[-0.00000001]
I realize I can solve this using, for example, deepcopy, but I'm keen to learn why this is happening so I don't do the same in future by accident.
Thanks in advance!
Names are references to objects. What matters is whether the object (including one passed as an argument) is modified in place or whether a new object is created. An example would be:
>>> l1 = list()
>>> l2 = l1
>>> l2.append(0)  # this modifies the object currently referenced by l1 and l2
>>> print(l1)
[0]
Whereas:
>>> l1 = list()
>>> l2 = list(l1) # New list object has been created with initial values from l1
>>> l2.append(0)
>>> print(l1)
[]
Or:
>>> l1 = list()
>>> l2 = l1
>>> l2 = [0] # New list object has been created and assigned to l2
>>> l2.append(0)
>>> print(l1)
[]
Similarly assuming l = [1, 2, 3]:
>>> def f1(list_arg):
...     return list_arg.reverse()
>>> print(f1(l), l)
None [3, 2, 1]
We have just passed through the None returned by the list.reverse method, and reversed l in place. However:
>>> def f2(list_arg):
...     ret_list = list(list_arg)
...     ret_list.reverse()
...     return ret_list
>>> print(f2(l), l)
[3, 2, 1] [1, 2, 3]
The function returns a new reversed list (initialized from l), which remains unchanged. (NOTE: in this example the built-in reversed or slicing would of course make more sense.)
With nested containers, one must not forget that, for instance:
>>> l = [1, 2, 3]
>>> d1 = {'k': l}
>>> d2 = dict(d1)
>>> d1 is d2
False
>>> d1['k'] is d2['k']
True
Dictionaries d1 and d2 are two different objects, but their k item is only one (and shared) instance. This is the case when copy.deepcopy might come in handy.
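Continuing the example, a quick sketch of what deepcopy changes:
>>> import copy
>>> d3 = copy.deepcopy(d1)
>>> d1['k'] is d3['k']
False
>>> d3['k'].append(4)   # does not affect d1['k']
>>> d1['k']
[1, 2, 3]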
Care needs to be taken when passing objects around to make sure they are modified, or a copy is used, as wanted and expected. It might be helpful to return None (or a similar generic value) when making in-place changes and to return the resulting object when working with a copy, so that the function/method interface itself hints at what the intention is and what is actually going on.
When immutable objects are (as the name suggests) being "modified", a new object is actually created and assigned to a new name, or back to the original name/reference:
>>> s = 'abc'
>>> print('0x{:x} {}'.format(id(s), s))
0x7f4a9dbbfa78 abc
>>> s = s.upper()
>>> print('0x{:x} {}'.format(id(s), s))
0x7f4a9c989490 ABC
Note, though, that even an immutable type can contain references to mutable objects. For instance, for l = [1, 2, 3]; t1 = (l,); t2 = t1, one can call t1[0].append(4). This change is also seen in t2[0] (for the same reason as d1['k'] and d2['k'] above), while both tuples themselves remain unmodified.
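Spelled out in the REPL:
>>> l = [1, 2, 3]
>>> t1 = (l,)
>>> t2 = t1
>>> t1[0].append(4)   # mutates the shared list, not the tuples
>>> t2[0]
[1, 2, 3, 4]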
One extra caveat (a possible gotcha): when a default argument value is a mutable object, that default behaves like a "static" variable whenever the function is called without passing a value for it:
>>> def f3(arg_list=[]):
...     arg_list.append('x')
...     print(arg_list)
>>> f3()
['x']
>>> f3()
['x', 'x']
Since this is often not the behavior people assume at first glance, using mutable objects as default argument values is usually best avoided.
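The usual idiom to avoid it is to default to None and create the list inside the function:
>>> def f3(arg_list=None):
...     if arg_list is None:
...         arg_list = []   # a fresh list on every call
...     arg_list.append('x')
...     print(arg_list)
>>> f3()
['x']
>>> f3()
['x']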
The same is true for class attributes, where one object is shared between all instances:
>>> class C(object):
...     a = []
...     def m(self):
...         self.a.append('x')  # we actually modify the value of an attribute of C
...         print(self.a)
>>> c1 = C()
>>> c2 = C()
>>> c1.m()
['x']
>>> c2.m()
['x', 'x']
>>> c1.m()
['x', 'x', 'x']
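If each instance is supposed to get its own list, assign it in __init__ instead:
>>> class C(object):
...     def __init__(self):
...         self.a = []   # one list per instance
...     def m(self):
...         self.a.append('x')
...         print(self.a)
>>> c1 = C()
>>> c2 = C()
>>> c1.m()
['x']
>>> c2.m()
['x']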
Note what the behavior would be with an immutable type as the class attribute in a similar example:
>>> class C(object):
...     a = 0
...     def m(self):
...         self.a += 1  # this assigns a new object to an attribute of self
...         print(self.a)
>>> c1 = C()
>>> c2 = C()
>>> c1.m()
1
>>> c2.m()
1
>>> c1.m()
2
All the fun details can be found in the documentation: https://docs.python.org/3.6/reference/datamodel.html
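Coming back to the question: a minimal sketch of a fix is to copy the array inside the objective, so the optimizer only ever mutates a private copy (assuming the setup from the question):
import copy
import scipy.optimize as spo

test_array = [[1, 2, 3, 4, 5, 6, 7],
              [4, 5, 6, 7, 8, 9, 10]]

def set_top_row_to_five(x, array):
    array = copy.deepcopy(array)   # work on a copy; the caller's list is untouched
    array[0] = array[0] + x        # as in the question; x comes from the optimizer
    return abs(sum(array[0]) - 5)

adjustment = spo.minimize(set_top_row_to_five, 0, args=(test_array,))
print(test_array)   # unchanged
print(adjustment.x)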
I'm trying to figure out in what order PriorityQueue.get() returns values in Python. First I thought that smaller priority values get returned first, but after a few examples it's not like that. This is the example that I have run:
>>> qe = PriorityQueue()
>>> qe.put("Br", 0)
>>> qe.put("Sh", 0.54743812441605)
>>> qe.put("Gl", 1.1008112004388)
>>> qe.get()
'Br'
>>> qe.get()
'Gl'
>>> qe.get()
'Sh'
Why is it returning values in this order?
According to the docs, the signature is put(item, block=True, timeout=None): the first parameter is the item itself, and the second is block, not a priority. The item is what gets compared, so your strings come back in alphabetical order: 'Br', 'Gl', 'Sh'.
A typical pattern for entries is a tuple in the form: (priority_number, data).
So you should pass a tuple to put like this:
>>> q = PriorityQueue()
>>> q.put((10,'ten'))
>>> q.put((1,'one'))
>>> q.put((5,'five'))
>>> q.get()
(1, 'one')
>>> q.get()
(5, 'five')
>>> q.get()
(10, 'ten')
Notice the additional parentheses.
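If priorities can tie and the payloads aren't comparable with each other, the pattern documented for heapq also applies here: add an entry count as a tie-breaker. A sketch:
from itertools import count
from queue import PriorityQueue

q = PriorityQueue()
c = count()                          # monotonically increasing tie-breaker
q.put((5, next(c), {'name': 'a'}))   # dicts aren't orderable, so the count decides ties
q.put((5, next(c), {'name': 'b'}))
print(q.get())                       # (5, 0, {'name': 'a'}) -- FIFO among equal priorities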
How do I replace a Python object everywhere with another object?
I have two classes, SimpleObject and FancyObject. I've created a SimpleObject, and have several references to it. Now I'd like to create a FancyObject, and make all those references point to the new object.
a = SimpleObject()
some_list.append(a)
b = FancyObject()
a = b is not what I want, it just changes what a points to. I read that the following would work, but it doesn't. I get an error "Attribute __dict__ is not writable":
a.__dict__ = b.__dict__
What I want is the equivalent of (pseudo-C):
*a = *b
I know this is hacky, but is there any way to accomplish this?
There's no way. It'd let you mutate immutable objects and cause all sorts of nastiness.
x = 1
y = (x,)
z = {x: 3}
magic_replace(x, [1])
# x is now a list!
# The contents of y have changed, and z now has an unhashable key.
x = 1 + 1
# Is x 2, or [1, 1], or something stranger?
You can put that object in the global namespace of a separate module and then monkey-patch it when you need to.
objstore.py:
replaceable = object()
sample.py:
import objstore

b = object()

def isB():
    return objstore.replaceable is b

if __name__ == '__main__':
    print(isB())  # False
    objstore.replaceable = b
    print(isB())  # True
P.S. Relying on monkey patching is a symptom of bad design.
PyJack has a function replace_all_refs that replaces all references to an object in memory.
An example from the docs:
>>> item = (100, 'one hundred')
>>> data = {item: True, 'itemdata': item}
>>>
>>> class Foobar(object):
... the_item = item
...
>>> def outer(datum):
... def inner():
... return ("Here is the datum:", datum,)
...
... return inner
...
>>> inner = outer(item)
>>>
>>> print item
(100, 'one hundred')
>>> print data
{'itemdata': (100, 'one hundred'), (100, 'one hundred'): True}
>>> print Foobar.the_item
(100, 'one hundred')
>>> print inner()
('Here is the datum:', (100, 'one hundred'))
Calling replace_all_refs:
>>> new = (101, 'one hundred and one')
>>> org_item = pyjack.replace_all_refs(item, new)
>>>
>>> print item
(101, 'one hundred and one')
>>> print data
{'itemdata': (101, 'one hundred and one'), (101, 'one hundred and one'): True}
>>> print Foobar.the_item
(101, 'one hundred and one')
>>> print inner()
('Here is the datum:', (101, 'one hundred and one'))
You have a number of options:
Design this in from the beginning, using the Facade pattern (i.e. every object in your main code is a proxy for something else), or a single mutable container (i.e. every variable holds a list; you can change the contents of the list through any such reference). Advantages are that it works with the execution machinery of the language, and is relatively easily discoverable from the affected code. Downside: more code (a minimal sketch of the proxy approach follows at the end of this list).
Always refer to the same single variable. This is one implementation of the above. Clean, nothing fancy, clear in code. I would recommend this by far.
Use the debug, gc, and introspection features to hunt down every object meeting your criterion and alter the variables while running. The disadvantage is that the value of variables will change during code execution, without it being discoverable from the affected code. Even if the change is atomic (eliminating a class of errors), it can change the type of a variable after code has already run that determined the value to be of a different type, introducing errors which cannot reasonably be anticipated in that code. For example
a = iter(b)     # will blow up if b is not iterable
[x for x in b]  # b was iterable before the change, but between these two lines it became an int
More subtly, code that discriminates between string and non-string sequences (the defining feature of strings being that iterating them yields strings, which are themselves iterable), for example when flattening a structure, may break.
Another answer mentions pyjack which implements option 3. Although it may work, it has all of the problems mentioned. This is likely to be appropriate only in debugging and development.
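A minimal sketch of the proxy idea from the first option (all names here are illustrative, not from any library):
class SimpleObject:
    def who(self):
        return 'SimpleObject'

class FancyObject:
    def who(self):
        return 'FancyObject'

class Proxy:
    # Forwards attribute access to a swappable target object.
    def __init__(self, target):
        self._target = target
    def __getattr__(self, name):
        return getattr(self._target, name)
    def replace(self, new_target):
        self._target = new_target

a = Proxy(SimpleObject())
some_list = [a]
a.replace(FancyObject())     # every holder of the proxy sees the change
print(some_list[0].who())    # FancyObject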
Take advantage of mutable objects such as a list.
a = [SimpleObject()]
some_list.append(a)
b = FancyObject()
a[0] = b
Proof that this works:
class SimpleObject:
    def Who(self):
        print('SimpleObject')

class FancyObject:
    def Who(self):
        print('FancyObject')
>>> a = [SimpleObject()]
>>> a[0].Who()
SimpleObject
>>> some_list = []
>>> some_list.append(a)
>>> some_list[0][0].Who()
SimpleObject
>>> b = FancyObject()
>>> b.Who()
FancyObject
>>> a[0] = b
>>> some_list[0][0].Who()
FancyObject