Why does an object still work properly without the class - python

I'm a newbie in Python. After reading some chapters of the Python Tutorial (Release 2.7.5), I'm confused about Python scopes and namespaces. This question may be a duplicate, because I don't know what to search for.
I created a class and an instance. Then I deleted the class using del. But the instance still works properly. Why?
>>> class MyClass:          # define a class
...     def greet(self):
...         print 'hello'
...
>>> instan = MyClass()      # create an instance
>>> instan
<__main__.MyClass instance at 0x00BBCDC8>
>>> instan.greet()
hello
>>> dir()
['instan', 'MyClass', '__builtins__', '__doc__', '__name__', '__package__']
>>> del MyClass
>>> dir()
['instan', '__builtins__', '__doc__', '__name__', '__package__']
>>> instan
<__main__.MyClass instance at 0x00BBCDC8>   # MyClass doesn't exist any more!
>>> instan.greet()
hello
I know little about OOP so this question may seem simple. Thanks in advance.

Python is a garbage-collected language. When you do del MyClass, you do not actually delete the class object (classes are objects too); you only remove the name MyClass from the current namespace. A name is just a reference to an object, and any object stays alive as long as something still references it. Since every instance keeps a reference to its own class, the class stays alive as long as at least one instance is alive.
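To make this concrete, here is a small sketch (my own illustration, not part of the original answer) that watches the class object with a weak reference. In CPython, classes participate in reference cycles, so gc.collect() is called to make the collection observable:

import gc
import weakref

class MyClass(object):
    def greet(self):
        print('hello')

instan = MyClass()
watcher = weakref.ref(MyClass)    # a weak reference does not keep the class alive

del MyClass                       # removes only the *name* from this namespace
gc.collect()
print(watcher() is not None)      # True: the instance still references the class object
instan.greet()                    # hello

del instan                        # drop the last reference to the instance
gc.collect()                      # classes sit in reference cycles, so run the collector
print(watcher())                  # None: nothing references the class any more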
One thing to be careful about is when you redefine a class (e.g. on the command line):
In [1]: class C(object):
   ...:     def hello(self):
   ...:         print 'I am an instance of the old class'

In [2]: c = C()

In [3]: c.hello()
I am an instance of the old class

In [4]: class C(object):    # define a new class and point the name C to it
   ...:     def hello(self):
   ...:         print 'I am an instance of the new class'

In [5]: c.hello()           # the old object does not magically become a new one
I am an instance of the old class

In [6]: c = C()             # point c to a new object; the old class and object are now garbage
In [7]: c.hello()
I am an instance of the new class
Any existing instances of the old class will keep the old behaviour, which makes sense given what was said above. The relation between namespaces and objects is a bit particular to Python, but it is not that hard once you get it. A good explanation is given here.

When you delete a variable using del, you delete the name and the reference it holds to the object, not the object itself.
The instance you created still holds its own reference to the class. In general, as long as something still holds a reference to an object (including a class object), it won't be reclaimed by the garbage collector.
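A minimal sketch (my own illustration, not from the answer) of the same idea with a plain list, showing that del removes a name rather than the object:

data = [1, 2, 3]
alias = data      # a second name for the same list object
del data          # deletes the name 'data' and its reference, not the list itself
print(alias)      # [1, 2, 3] -- the object is still alive because 'alias' references it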

Python doesn't store values in variables; it assigns names to objects. The locals() function returns all the names in the current namespace (or, more specifically, the current scope). Let's start up a new interpreter session and see what locals() gives us.
>>> locals()
{'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', '__doc__': None, '__package__': None}
The only names currently in the namespace are the built-in names that Python puts there at start-up. Here's a quick one-liner to show only the names we've assigned:
>>> {k: v for k, v in locals().iteritems() if k[0] != '_'}
{}
That's better. Don't worry about how that one-liner works, let's move on and create a class.
>>> class C(object):
        greeting = "I'm the first class"
When we define a class, its name is placed in the current scope:
>>> {k: v for k, v in locals().iteritems() if k[0] != '_'}
{'C': <class '__main__.C'>}
The <class '__main__.C'> part is Python's representation of the class object we defined. Let's look at the memory address where our class object is stored; we can use the id() function to find out.
>>> id(C)
18968856
In CPython, the number that id() returns is the memory address of the object. If you run these commands yourself you'll see a different number, but for a given object the number doesn't change during a single session.
>>> id(C)
18968856
Now let's create an instance.
>>> c = C()
>>> c.greeting
"I'm the first class"
Now when we look at locals(), we can see both our class object, and our instance object.
>>> {k: v for k, v in locals().iteritems() if k[0] != '_'}
{'C': <class '__main__.C'>, 'c': <__main__.C object at 0x011BDED0>}
Every instance object has a special member __class__ that is a reference to the class object that the instance is an instance of.
>>> c.__class__
<class '__main__.C'>
If we call id() on that variable, we can see it's a reference to the class C we just defined:
>>> id(c.__class__)
18968856
>>> id(c.__class__) == id(C)
True
Now let's delete the name C from our local namespace:
>>> del C
>>> {k: v for k, v in locals().iteritems() if k[0] != '_'}
{'c': <__main__.C object at 0x011BDED0>}
>>> C
Traceback (most recent call last):
File "<pyshell#16>", line 1, in <module>
C
NameError: name 'C' is not defined
That's exactly what we expect. The name C is no longer assigned to anything. However, our instance still has a reference to the class object.
>>> c.__class__
<class '__main__.C'>
>>> id(c.__class__)
18968856
As you can see, the class still exists, you just can't refer to it through the name C in the local namespace.
Let's create a second class with the name C.
>>> class C(object):
        greeting = "I'm the second class"
>>> {k: v for k, v in locals().iteritems() if k[0] != '_'}
{'C': <class '__main__.C'>, 'c': <__main__.C object at 0x011BDED0>}
If we create an instance of the second class, it behaves like you noticed:
>>> c2 = C()
>>> c2.greeting
"I'm the second class"
>>> c.greeting
"I'm the first class"
To see why, let's look at the id of this new class. We can see that the new class object is stored in a different location from our first one.
>>> id(C)
19011568
>>> id(C) == id(c.__class__)
False
This is why the instances can still work properly: both class objects still exist separately, and each instance holds a reference to its own class.

Related

How to override a function when Parent already explicitly `setattr` the same function?

A 'minimal' example I created:
class C:
    def wave(self):
        print("C waves")

class A:
    def __init__(self):
        c = C()
        setattr(self, 'wave', getattr(c, 'wave'))

class B(A):
    def wave(self):
        print("B waves")
>>> a = A()
>>> a.wave()
C waves # as expected
>>> b = B()
>>> b.wave()
C waves # why not 'B waves'?
>>>
In the example, class A explicitly defines its wave method to be class C's wave method, not through an ordinary function definition but via setattr. Class B then inherits from A and tries to override wave with its own method, but that doesn't work. What is going on, and how can I work around it?
I want to keep class A's setattr-style definition if at all possible, please advise.
I've never systematically learned Python, so I suspect I'm missing something about how Python's inheritance and setattr work.
Class A sets the wave() method as its instance attribute in __init__(). This can be seen by inspecting the instance's dict:
>>> b.__dict__
{'wave': <bound method C.wave of <__main__.C object at 0x7ff0b32c63c8>>}
You can get around this by deleting the instance member from b
>>> del b.__dict__['wave']
>>> b.wave()
B waves
With the instance attribute removed, the wave() function is then taken from the class dict:
>>> B.__dict__
mappingproxy({'__module__': '__main__',
'wave': <function __main__.B.wave(self)>,
'__doc__': None})
The thing to note here is that when Python looks up an attribute, instance attributes take precedence over the class attributes (unless a class attribute is a data descriptor, but this is not the case here).
I also wrote a blog post a while back explaining how attribute lookup works in even more detail.
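If you want to keep A's setattr-style definition but still let subclasses override wave, one possible workaround (a sketch of my own, not from the original answer) is to install the delegated method only when no class in the hierarchy defines wave itself:

class C:
    def wave(self):
        print("C waves")

class A:
    def __init__(self):
        c = C()
        # install the delegated method only if no class in the hierarchy defines wave
        if not hasattr(type(self), 'wave'):
            setattr(self, 'wave', getattr(c, 'wave'))

class B(A):
    def wave(self):
        print("B waves")

a = A()
a.wave()    # C waves
b = B()
b.wave()    # B waves, because the instance attribute was never set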

Appending an attribute to a function in python

I am not sure exactly what this means or how it is used, but I came across this behaviour with functions: I am able to dynamically add attributes to a function. Can someone help me understand what this is for and how it happens? It's a function name, after all; how can we add attributes to it dynamically?
def impossible():
    print "Adding a bool"

impossible.enabled = True
# I was able to just do this. And this is a function, how does this add up?
print(impossible.enabled)  # no AttributeError; output: True
In Python, functions are objects. They have all sorts of internal attributes, and they also have a __dict__ attribute; like any other object, anything in __dict__ is an attribute of that object. You can see this if you print __dict__ before and after setting impossible.enabled:
def impossible():
    print "Adding a bool"

print(impossible.__dict__)   # {}
impossible.enabled = True
print(impossible.__dict__)   # {'enabled': True}
This doesn't change the function's behaviour in any way; most of the time __dict__ is empty and holds nothing special for a function.
In fact it can be quite useful: you can use it to store static variables on a function without cluttering the global namespace, as in the sketch below.
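For instance, here is a small sketch (my own illustration, not from the original answer) that uses a function attribute as a cache so repeated calls with the same argument are not recomputed:

def slow_square(n):
    # the cache lives on the function object itself, not in a global variable
    if not hasattr(slow_square, 'cache'):
        slow_square.cache = {}
    if n not in slow_square.cache:
        slow_square.cache[n] = n * n   # stand-in for an expensive computation
    return slow_square.cache[n]

print(slow_square(4))   # computes and caches: 16
print(slow_square(4))   # served from slow_square.cache: 16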
The name impossible points to an object that represents the function. You're setting an attribute on this object, which Python allows on functions as it does on most user-defined objects:
>>> def foo():
... print("test")
...
>>> dir(foo)
['__call__', ..., 'func_closure', 'func_code', 'func_defaults', 'func_dict', 'func_doc', 'func_globals', 'func_name']
>>> foo.a = 'test'
>>> dir(foo)
['__call__', ..., 'a', ...]
Since this object implements __call__, it can be called as a function - you can implement your own class that defines __call__ and it'll also be allowed to be called as a function:
>>> class B:
... def __call__(self, **kw):
... print(kw)
...
>>> b = B()
>>> b()
{}
>>> b(foo='bar')
{'foo': 'bar'}
If you create a new function, you can see that you haven't done anything to the definition of a function (its class), just the object that represents the original function:
>>> def bar():
... print("ost")
...
>>> dir(bar)
['__call__', ..., 'func_closure', ...]
This new object does not have a property named a, since you haven't set one on the object bar points to.
You can, for example, use a function attribute to implement a simple counter, like so:
def count():
try:
count.counter += 1
except AttributeError:
count.counter = 1
return count.counter
Repeatedly calling "count()" yields: 1, 2, 3, ...

Why does setattr fail on a bound method

In the following, setattr succeeds in the first invocation, but fails in the second, with:
AttributeError: 'method' object has no attribute 'i'
Why is this, and is there a way of setting an attribute on a method such that it will only exist on one instance, not for each instance of the class?
class c:
    def m(self):
        print(type(c.m))
        setattr(c.m, 'i', 0)
        print(type(self.m))
        setattr(self.m, 'i', 0)
Python 3.2.2
The short answer: There is no way of adding custom attributes to bound methods.
The long answer follows.
In Python, there are function objects and method objects. When you define a class, the def statement creates a function object that lives within the class' namespace:
>>> class c:
... def m(self):
... pass
...
>>> c.m
<function m at 0x025FAE88>
Function objects have a special __dict__ attribute that can hold user-defined attributes:
>>> c.m.i = 0
>>> c.m.__dict__
{'i': 0}
Method objects are different beasts. They are tiny objects just holding a reference to the corresponding function object (__func__) and one to its host object (__self__):
>>> c().m
<bound method c.m of <__main__.c object at 0x025206D0>>
>>> c().m.__self__
<__main__.c object at 0x02625070>
>>> c().m.__func__
<function m at 0x025FAE88>
>>> c().m.__func__ is c.m
True
Method objects provide a special __getattr__ that forwards attribute access to the function object:
>>> c().m.i
0
This is also true for the __dict__ property:
>>> c().m.__dict__['a'] = 42
>>> c.m.a
42
>>> c().m.__dict__ is c.m.__dict__
True
Setting attributes, however, is not forwarded: method objects have no __dict__ of their own to write into, so there is no way to set arbitrary attributes on the bound method itself.
This is similar to user-defined classes that define __slots__ and no __dict__ slot, where trying to set a non-existing slot raises an AttributeError (see the docs on __slots__ for more information):
>>> class c:
... __slots__ = ('a', 'b')
...
>>> x = c()
>>> x.a = 1
>>> x.b = 2
>>> x.c = 3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'c' object has no attribute 'c'
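As a side note (my own addition, not part of the original answer), you can set the attribute on the underlying function object through __func__; this is equivalent to the __dict__ trick above and, because there is only one function object, the attribute is shared by every instance:

class c:
    def m(self):
        pass

x = c()
try:
    x.m.i = 0                  # setting an attribute on the bound method fails
except AttributeError as exc:
    print(exc)                 # 'method' object has no attribute 'i'

x.m.__func__.i = 0             # set it on the underlying function object instead
print(x.m.i)                   # 0 (read access is forwarded to the function)
print(c().m.i)                 # 0 -- shared by every instance, since there is only one function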
Q: "Is there a way of setting an attribute on a method such that it will only exist on one instance, not for each instance of the class?"
A: Yes:
class c:
    def m(self):
        print(type(c.m))
        setattr(c.m, 'i', 0)
        print(type(self))
        setattr(self, 'i', 0)
The static variable on functions in the post you link to is not useful for methods. It sets an attribute on the function so that this attribute is available the next time the function is called, so you can make a counter or whatnot.
But methods have an object instance associated with them (self). Hence there is no need to set attributes on the method; you can simply set them on the instance instead. That is in fact exactly what the instance is for.
The post you link to shows how to make a function with a static variable. I would say that in Python doing so would be misguided. Instead look at this answer: What is the Python equivalent of static variables inside a function?
That is the clear and easily understandable way to do it in Python: you use a class and make it callable, as in the sketch below. Setting attributes on functions is possible, and there are probably cases where it's a good idea, but in general it will just end up confusing people.
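A minimal sketch of that callable-class approach (my own illustration, not taken from the linked answer):

class Counter:
    """Callable object that keeps its state on the instance instead of on a function."""
    def __init__(self):
        self.count = 0

    def __call__(self):
        self.count += 1
        return self.count

count = Counter()
print(count())   # 1
print(count())   # 2
print(count())   # 3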

replacing the "new" module

I have code which contains the following two lines in it:
instanceMethod = new.instancemethod(testFunc, None, TestCase)
setattr(TestCase, testName, instanceMethod)
How could it be re-written without using the "new" module? I'm sure new-style classes provide some kind of workaround for this, but I am not sure how.
There is a discussion that suggests that in Python 3 this is not required; the same works in Python 2.6:
http://mail.python.org/pipermail/python-list/2009-April/531898.html
See:
>>> class C: pass
...
>>> c=C()
>>> def f(self): pass
...
>>> c.f = f.__get__(c, C)
>>> c.f
<bound method C.f of <__main__.C instance at 0x10042efc8>>
>>> f.__get__(None, C)
<unbound method C.f>
>>>
Reiterating the question for everyone's benefit, including mine:
Is there a replacement in Python 3 for new.instancemethod? That is, given an arbitrary instance (not its class), how can I add a new, appropriately defined function as a method to it?
So the following should suffice:
TestCase.testFunc = testFunc.__get__(None, TestCase)
You can replace "new.instancemethod" by "types.MethodType":
from types import MethodType as instancemethod

class Foo:
    def __init__(self):
        print 'I am', id(self)

def bar(self):
    print 'hi', id(self)

foo = Foo()                    # prints 'I am <instance id>'
mm = instancemethod(bar, foo)  # binds bar to foo; automatically uses foo.__class__
mm()                           # prints 'hi <same instance id>'
foo.mm                         # AttributeError: foo has no attribute holding a ref to mm yet
foo.mm = mm                    # create a ref to the bound method in foo
foo.mm()                       # prints 'hi <same instance id>'
This will do the same:
>>> TestCase.testName = testFunc
Yeah, it's really that simple.
Your line
>>> instanceMethod = new.instancemethod(testFunc, None, TestCase)
is in practice (although not in theory) a no-op. :) You could just as well do
>>> instanceMethod = testFunc
In fact, in Python 3 I'm pretty sure it would be the same in theory as well, but the new module is gone so I can't test it in practice.
To confirm that it hasn't been necessary to use new.instancemethod() at all since Python 2.4, here's an example of how to replace an instance method. It's also not necessary to use descriptors (even though that works).
class Ham(object):
    def spam(self):
        pass

h = Ham()

def fake_spam():
    h._spam = True

h.spam = fake_spam
h.spam()
# h._spam should be True now.
Handy for unit testing.
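If the replacement needs access to self rather than closing over h, one sketch (my own addition, not from the original answer) binds the function to that single instance with types.MethodType:

import types

class Ham(object):
    def spam(self):
        pass

h = Ham()

def fake_spam(self):
    self._spam = True

h.spam = types.MethodType(fake_spam, h)   # bound to this one instance only
h.spam()
print(h._spam)   # True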

Difference between type(obj) and obj.__class__

What is the difference between type(obj) and obj.__class__? Is there ever a possibility of type(obj) is not obj.__class__?
I want to write a function that works generically on the supplied objects, using a default value of 1 in the same type as another parameter. Which variation, #1 or #2 below, is going to do the right thing?
def f(a, b=None):
    if b is None:
        b = type(a)(1)       # #1
        b = a.__class__(1)   # #2
This is an old question, but none of the answers seems to mention it: in the general case, it IS possible for a new-style class to have different values for type(instance) and instance.__class__:
class ClassA(object):
    def display(self):
        print("ClassA")

class ClassB(object):
    __class__ = ClassA

    def display(self):
        print("ClassB")

instance = ClassB()
print(type(instance))
print(instance.__class__)
instance.display()
Output:
<class '__main__.ClassB'>
<class '__main__.ClassA'>
ClassB
The reason is that ClassB overrides the __class__ descriptor, while the internal type field of the object is left unchanged. type(instance) reads that type field directly, so it returns ClassB, whereas instance.__class__ goes through the overriding attribute, which replaces the descriptor Python normally provides (the one that reads the internal type field) and instead returns the hardcoded ClassA.
Old-style classes are the problem, sigh:
>>> class old: pass
...
>>> x=old()
>>> type(x)
<type 'instance'>
>>> x.__class__
<class __main__.old at 0x6a150>
>>>
Not a problem in Python 3 since all classes are new-style now;-).
In Python 2, a class is new-style only if it inherits from another new-style class (including object and the various built-in types such as dict, list, set, ...) or implicitly or explicitly sets __metaclass__ to type.
type(obj) and obj.__class__ do not behave the same for old-style classes:
>>> class a(object):
... pass
...
>>> class b(a):
... pass
...
>>> class c:
... pass
...
>>> ai=a()
>>> bi=b()
>>> ci=c()
>>> type(ai) is ai.__class__
True
>>> type(bi) is bi.__class__
True
>>> type(ci) is ci.__class__
False
There's an interesting edge case with proxy objects (that use weak references):
>>> import weakref
>>> class MyClass:
... x = 42
...
>>> obj = MyClass()
>>> obj_proxy = weakref.proxy(obj)
>>> obj_proxy.x # proxies attribute lookup to the referenced object
42
>>> type(obj_proxy) # returns type of the proxy
weakproxy
>>> obj_proxy.__class__ # returns type of the referenced object
__main__.MyClass
>>> del obj # breaks the proxy's weak reference
>>> type(obj_proxy) # still works
weakproxy
>>> obj_proxy.__class__ # fails
ReferenceError: weakly-referenced object no longer exists
FYI - Django does this.
>>> from django.core.files.storage import default_storage
>>> type(default_storage)
django.core.files.storage.DefaultStorage
>>> default_storage.__class__
django.core.files.storage.FileSystemStorage
As someone with finite cognitive capacity who's just trying to figure out what's going on in order to get work done... it's frustrating.
