Are the main ideas of immutability the same across OOP and functional programming, or do, for example, Java and Python have their own versions of immutability? More specifically, do the following hold in all languages?
Mutable Objects: Set, Dict, List
Immutable Objects: Bool, Int, Float, String, Tuple
In Python, two immutable objects with the same value also have the same id: two references, one value.
In Python again, two mutable objects with the same value don't share the same id: two references, two values.
Does this idea of two references binding together for mutable objects hold in all languages? And the reverse as well, that is, that bindings cannot be changed, meaning that references can only change the value they are pointing to?
>>> i = {1, 2, 3}    # set, a mutable object
>>> j = {1, 2, 3}
>>> i is j
False
>>> i = j
>>> j.remove(3)
>>> i is j
True
I'm asking because, for example, in scripting languages objects are passed by reference (in other languages by value, or both, as in C), so doesn't this change the whole notion of immutability?
Any object, even a literal one, needs to use some space in memory.
That memory has to be written by the language runtime, and this is true whether the object is immutable or not: even immutable objects mutate memory when they are created.
Thus an immutable object is one that is either guaranteed not to change at compile time or protected by the runtime while the program runs.
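In Python, for example, that protection happens at run time; a minimal sketch:

t = (1, 2, 3)
try:
    t[0] = 10                 # the runtime rejects mutation of the tuple
except TypeError as exc:
    print(exc)                # 'tuple' object does not support item assignment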
In Python, two immutable objects with the same value also have the same id: two references, one value.
I don't think this is guaranteed at all. E.g.:
x = (1,2,3)
y = (1,2,3)
x is y
# => False
That's what I get when I run it in my REPL. If it's anything like Common Lisp and Java, implementations may be free to reuse memory locations for identical literals, and thus either boolean result would be acceptable.
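For example (CPython-specific; whether these identity checks hold is an implementation detail you should not rely on):

a = 5
b = 5
print(a is b)        # usually True in CPython: small integers are cached

x = (1, 2, 3)
y = (1, 2, 3)
print(x is y)        # may be True or False depending on how the tuples were built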
My understanding of the difference between mutable and immutable in Python is that the former can be changed by indexing. For example, the following list x can be changed by indexing:
x = [1, 2, 3]
x[0] = 10           # works: lists are mutable
y = (1, 2, 3)
y[0] = 10           # raises TypeError: tuples are not mutable
y = x
id(y) == id(x)      # True, since y is now a reference to x
y[0] = 10
print(y)            # [10, 2, 3]
print(x)            # [10, 2, 3]  x is changed as well; y and x are the same object
Every time you create lists, sets, or tuples under different names, they are not the same object in memory even though they contain the same data; each has its own unique id.
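For example, with lists (where distinct objects are guaranteed by the language):

a = [1, 2, 3]
b = [1, 2, 3]
print(a == b)            # True: equal contents
print(a is b)            # False: two distinct objects
print(id(a) == id(b))    # False: each list has its own id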
If I am not mistaken, a is b should return True if a and b point to the same object. With two equal lists it returns False because the lists are two different lists. I thought that immutable objects didn't have this problem, but when I put in:
a = (1, 2, 3)
b = (1, 2, 3)
a is b # returns False
I thought that this should return True since a and b point to an immutable object that has the same value. Why isn't a pointing to the same object as b when I use tuples?
Your a and b do not point to the same object (you create two individual tuples); you can inspect that with id(a) or -- as you did -- with a is b.
a == b
on the other hand will be True.
If you want them to point to the same object you could do
a = b = (1, 2, 3)
Now a is b is True.
None of this has to do with mutability or immutability; it would work the same if you took lists instead of tuples.
You can visualize your code with pythontutor to see what is happening.
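For example, the same identity checks with lists instead of tuples:

a = [1, 2, 3]
b = [1, 2, 3]
print(a is b)    # False: two separate list objects

a = b = [1, 2, 3]
print(a is b)    # True: both names are bound to the same list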
Python does intern some strings and some small integers (e.g. a=0; b=0; a is b yields True) but not all immutables are interned. Furthermore you should not rely on that, rather consider it an implementation detail.
Your code should never rely on the details of what does and doesn't get interned. It is completely implementation dependent and could change from point release to point release.
Interning is designed solely to be an optimization to reduce memory usage, and you are correct that in principle any literal of an immutable value could be interned, but that's only actually done in the most trivial of cases.
One likely reason that tuples aren't interned is that they're not deeply immutable. If you put something mutable inside them (such as ({}, [])), interning them could lead to incorrect behavior, as modifying one could modify the other.
You could of course check whether a tuple contains only immutable objects, but tuples can be nested arbitrarily deeply, so you'd have to do work to verify that a tuple only (transitively) contains immutable objects, and that's really not worth it, because you can always "manually intern" them if you want.
a = 1, 2, 3
b = a
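For strings specifically, CPython also exposes explicit interning through sys.intern; a small sketch (again, an optimization, not something to base program logic on):

import sys

base = 'runtime-built'
t1 = base + ' string'
t2 = base + ' string'
print(t1 is t2)                    # typically False: each concatenation makes a new str

s1 = sys.intern(base + ' string')
s2 = sys.intern(base + ' string')
print(s1 is s2)                    # True: sys.intern returns the one canonical copy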
If two variables' values are identical, then they are said to share the same memory...
So does Python follow a shared-memory concept? ... And if I change one value, will it change the other?
See the Python data model described here:
Types affect almost all aspects of object behavior. Even the importance of object identity is affected in some sense: for immutable types, operations that compute new values may actually return a reference to any existing object with the same type and value, while for mutable objects this is not allowed. E.g., after a = 1; b = 1, a and b may or may not refer to the same object with the value one, depending on the implementation, but after c = []; d = [], c and d are guaranteed to refer to two different, unique, newly created empty lists. (Note that c = d = [] assigns the same object to both c and d.)
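A quick demonstration of the guarantees described in that passage:

a = 1
b = 1
print(a is b)      # may be True or False: sharing of immutables is implementation-dependent

c = []
d = []
print(c is d)      # False, guaranteed: two different, newly created empty lists

c = d = []
print(c is d)      # True: the same list is assigned to both names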
I understand that a namedtuple in Python is immutable and the values of its attributes can't be reassigned directly:
from collections import namedtuple

N = namedtuple("N", ['ind', 'set', 'v'])

def solve():
    items = []
    R = set(range(0, 8))
    for i in range(0, 8):
        items.append(N(i, R, 8))
    items[0].set.remove(1)    # works
    items[0].v += 1           # raises AttributeError
Here the last line, where I am assigning a new value to the attribute 'v', will not work. But removing the element 1 from the set attribute of items[0] works.
Why is that, and would the same be true if the set attribute were of list type?
Immutability does not get conferred on mutable objects inside the tuple. All immutability means is that you can't change which particular objects are stored - i.e., you can't reassign items[0].set. This restriction is the same regardless of the type of that attribute - if it were a list, doing items[0].list = items[0].list + [1,2,3] would fail (you can't reassign it to a new object), but doing items[0].list.extend([1,2,3]) would work.
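A minimal sketch of that distinction, using a hypothetical namedtuple M with a list field:

from collections import namedtuple

M = namedtuple("M", ['ind', 'lst'])
m = M(0, [1, 2, 3])

m.lst.extend([4, 5])      # fine: mutates the list the field points to
print(m.lst)              # [1, 2, 3, 4, 5]

m.lst = [9, 9, 9]         # AttributeError: the field itself cannot be rebound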
Think about it this way: if you change your code to:
new_item = N(i,R,8)
then new_item.set is now an alias for R (Python doesn't copy objects when you reassign them). If tuples conferred immutability to mutable members, what would you expect R.remove(1) to do? Since it is the same set as new_item.set, any changes you make to one will be visible in the other. If the set had become immutable because it has become a member of a tuple, R.remove(1) would suddenly fail. All method calls in Python work or fail depending on the object only, not on the variable - R.remove(1) and new_item.set.remove(1) have to behave the same way.
This also means that:
R = set(range(0,8))
for i in range(0,8):
    items.append(N(i,R,8))
probably has a subtle bug. R never gets reassigned here, and so every namedtuple in items gets the same set. You can confirm this by noticing that items[0].set is items[1].set is True. So, anytime you mutate any of them - or R - the modification would show up everywhere (they're all just different names for the same object).
This is a problem that usually comes up when you do something like
a = [[]] * 3
a[0].append(2)
and a will now be [[2], [2], [2]]. There are two ways around this general problem:
First, be very careful to create a new mutable object when you assign it, unless you do deliberately want an alias. In the nested lists example, the usual solution is to do a = [[] for _ in range(3)]. For your sets in tuples, move the line R = ... to inside the loop, so it gets reassigned to a new set for each namedtuple.
The second way around this is to use immutable types. Make R a frozenset, and the ability to add and remove elements goes away.
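A short sketch of both workarounds, assuming the same namedtuple N as above:

from collections import namedtuple

N = namedtuple("N", ['ind', 'set', 'v'])

# Workaround 1: build a fresh set inside the loop so nothing is shared
items = []
for i in range(8):
    items.append(N(i, set(range(8)), 8))
print(items[0].set is items[1].set)    # False: each namedtuple owns its own set

# Workaround 2: use a frozenset, which cannot be mutated at all
F = frozenset(range(8))
frozen_items = [N(i, F, 8) for i in range(8)]
# frozen_items[0].set.remove(1) would fail: frozenset has no remove method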
You mutate the set, not the tuple. And sets are mutable.
>>> s = set()
>>> t = (s,)
>>> l = [s]
>>> d = {42: s}
>>> t
(set([]),)
>>> l
[set([])]
>>> d
{42: set([])}
>>> s.add('foo')
>>> t
(set(['foo']),)
>>> l
[set(['foo'])]
>>> d
{42: set(['foo'])}
On p.35 of "Python Essential Reference" by David Beazley, he first states:
For immutable data such as strings, the interpreter aggressively shares objects between different parts of the program.
However, later on the same page, he states
For immutable objects such as numbers and strings, this assignment effectively creates a copy.
But isn't this a contradiction? On one hand he is saying that they are shared, but then he says they are copied.
An assignment in Python never ever creates a copy (it is technically possible only if assignment to a class member is redefined, for example by using __setattr__, properties, or descriptors).
So after
a = foo()
b = a
whatever was returned from foo has not been copied; instead you have two variables a and b pointing to the same object, no matter whether the object is immutable or not.
With immutable objects, however, it's hard to tell whether this is the case (because you cannot mutate the object through one variable and check whether the change is visible through the other), so you are free to think that a and b cannot influence each other.
For some immutable objects, Python is also free to reuse old objects instead of creating new ones, so after
a = x + y
b = x + y
where both x and y are numbers (so the sum is a number and is immutable), it may be that both a and b end up pointing to the same object. Note that there is no such guarantee... it may also be that they instead point to different objects with the same value.
The important thing to remember is that Python never ever makes a copy unless specifically instructed to, e.g. with copy or deepcopy. This is very important with mutable objects, to avoid surprises.
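A quick illustration of those explicit copy calls (the nested list is made up for the example):

import copy

original = [[1, 2], [3, 4]]
shallow = copy.copy(original)       # new outer list, same inner lists
deep = copy.deepcopy(original)      # new outer list and new inner lists

original[0].append(99)
print(shallow[0])   # [1, 2, 99]: the inner list is still shared with original
print(deep[0])      # [1, 2]: the deep copy is fully independent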
One common idiom you can see is for example:
class Polygon:
    def __init__(self, pts):
        self.pts = pts[:]
    ...
In this case self.pts = pts[:] is used instead of self.pts = pts to make a copy of the whole array of points, to be sure that the point list will not change unexpectedly if, after creating the object, changes are applied to the list that was passed to the constructor.
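A small sketch of why the defensive copy matters, using the Polygon class above (the point values are made up):

pts = [(0, 0), (1, 0), (1, 1)]
poly = Polygon(pts)

pts.append((0, 1))          # the caller keeps mutating its own list

print(len(poly.pts))        # 3: the Polygon kept its own copy and is unaffected
# With self.pts = pts instead of pts[:], poly.pts would now contain 4 points.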
It effectively creates a copy. It doesn't actually create a copy. The main difference between having two copies and having two names share the same value is that, in the latter case, modifications via one name affect the value of the other name. If the value can't be mutated, this difference disappears, so for immutable objects there is little practical consequence to whether the value is copied or not.
There are some corner cases where you can tell the difference between copies and different objects even for immutable types (e.g., by using the id function or the is operator), but these are not useful for Python builtin immutable types (like strings and numbers).
No, assigning a pre-existing str variable to a new variable name does not create an independent copy of the value in memory.
The existence of unique objects in memory can be checked using the id() function. For example, using the interactive Python prompt, try:
>>> str1 = 'ATG'
>>> str2 = str1
Both str1 and str2 have the same value:
>>> str1
'ATG'
>>> str2
'ATG'
This is because str1 and str2 both point to the same object, evidenced by the fact that they share the same unique object ID:
>>> id(str1)
140439014052080
>>> id(str2)
140439014052080
>>> id(str1) == id(str2)
True
Now suppose you modify str1:
>>> str1 += 'TAG' # same as str1 = str1 + 'TAG'
>>> str1
'ATGTAG'
Because str objects are immutable, the above assignment created a new unique object with its own ID:
>>> id(str1)
140439016777456
>>> id(str1) == id(str2)
False
However, str2 maintains the same ID it had earlier:
>>> id(str2)
140439014052080
Thus, execution of str1 += 'TAG' assigned a brand new str object with its own unique ID to the variable str1, while str2 continues to point to the original str object.
This implies that assigning an existing str variable to another variable name does not create a copy of its value in memory.
Yesterday I asked ("A case of outwardly equal lists of sets behaving differently under Python 2.5 (I think …)") why list W constructed as follows:
r_dim_1_based = range( 1, dim + 1)
set_dim_1_based = set( r_dim_1_based)

def listW_fill_func( val):
    if (val == 0):
        return set_dim_1_based
    else:
        return set( [val])

W = [ listW_fill_func( A[cid])
      for cid in r_ncells ]
didn't behave as I expected. In particular, it did not behave like other lists that showed equality with it (another_list == W --> True).
Is there a utility, trick, builtin, whatever that would have shown these differing internal structures to me? Something that would have produced perhaps a C-like declaration of the objects so that I would have seen at once that I was dealing with pointers in one case (list W) and values in the others?
You're dealing with references in each case (more similar to pointers than to values). You can surely introspect your objects' references to your heart's content -- for example, if you have a list and want to check if any items are identical references,
if len(thelist) != len(set(id(x) for x in thelist)): ...
DO note that we're talking about references here -- so, two identical references to None, or two identical references to the int value 17, would also trigger the same alarm. Of course you can keep introspecting to remove that case, eliminating immutables from the list in a first pass, for example, if you think that multiple references to the same immutable are fine -- e.g.:
immutyps = int, long, float, tuple, frozenset, str, unicode
mutables = [x for x in thelist if not isinstance(x, immutyps)]
if len(mutables) != len(set(id(x) for x in mutables)):
cryhavocandletloosethedogsofwar()
but I would question the return-on-investment of such a deep introspection strategy!
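For reference, a rough Python 3 sketch of the same idea (long and unicode no longer exist, and bytes joins the immutable types; thelist and the final call are placeholders, as above):

immutyps = (int, float, complex, bool, tuple, frozenset, str, bytes)
mutables = [x for x in thelist if not isinstance(x, immutyps)]
if len(mutables) != len(set(id(x) for x in mutables)):
    cryhavocandletloosethedogsofwar()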