Class variables, difference between instance.var and instance.__class__.var - python

I see this was flagged as a duplicate of "What is the difference between class and instance variables?", but I don't believe I'm using any instance variables in my example: neither of my classes has an __init__. I'm editing class variables in two different ways and trying to understand the difference between those two ways, not the difference between a class and an instance variable.
I'm trying to understand the difference between accessing a class variable just with .var and with .__class__.var. I thought it had to do with subclassing, so I wrote the following code.
class Foo:
    foo = 0

class Bar(Foo):
    bar = 1

def print_class_vals(f, b):
    """Prints both instance.var and instance.__class__.var for foo and bar."""
    print("f.foo: {}, {} "
          "b.foo: {}, {} "
          "b.bar: {}, {} "
          "".format(f.foo, f.__class__.foo,
                    b.foo, b.__class__.foo,
                    b.bar, b.__class__.bar))
f = Foo()
b = Bar()
print_class_vals(f, b)
Foo.foo += 1
print_class_vals(f, b)
Bar.foo += 1
print_class_vals(f, b)
Bar.bar += 1
print_class_vals(f, b)
This outputs the following:
f.foo: 0, 0, b.foo: 0, 0, b.bar: 1, 1
f.foo: 1, 1, b.foo: 1, 1, b.bar: 1, 1
f.foo: 1, 1, b.foo: 2, 2, b.bar: 1, 1
f.foo: 1, 1, b.foo: 2, 2, b.bar: 2, 2
I can't seem to find any difference between calling inst.var and inst.__class__.var. How are they different and when should I use one over the other?

While Gabriel Reis's answer explains this particular situation perfectly, there actually is a difference between f.foo and f.__class__.foo even if foo isn't shadowed by an instance attribute.
Compare:
>>> class Foo:
...     foo = 1
...     def bar(self): pass
...     baz = lambda self: None
>>> f = Foo()
>>> f.foo
1
>>> f.__class__.foo
1
>>> f.bar
<bound method Foo.bar of <__main__.Foo object at 0x11948cb00>>
>>> f.__class__.bar
<function __main__.Foo.bar(self)>
>>> f.bar()
>>> f.__class__.bar()
TypeError: bar() missing 1 required positional argument: 'self'
And the same is true for f.baz.
The difference is that, by directly accessing f.__class__.foo, you're making an end-run around the descriptor protocol, which is the thing that makes methods, @property, and similar things work.
If you want the full details, read the linked HOWTO, but the short version is that there's a bit more to it than Gabriel's answer says:
Python looks up a name (attribute) first in the instance namespace/dict. If it doesn't find it there, it looks in the class namespace. If it still doesn't find it there, it walks through the base classes respecting the MRO (method resolution order).
But if it finds it in the class namespace (or any base class), and what it finds is a descriptor (a value with a __get__ method), it does an extra step. The details depend on whether it's a data or non-data descriptor (basically, whether it also has a __set__ method), but the short version is that instead of giving you the value, it calls __get__ on the value and gives you what that value returns. Functions have a __get__ method that returns a bound method; properties have a __get__ method that calls the property get method; etc.
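To make that concrete, here is a minimal sketch (my own illustration, not code from either answer) of a non-data descriptor: ordinary attribute access calls its __get__, while reaching into the class __dict__ hands you the raw descriptor object.

class Ten:
    """A minimal non-data descriptor: it has __get__ but no __set__."""
    def __get__(self, obj, objtype=None):
        return 10

class Box:
    const = Ten()  # descriptor stored as a class attribute

b = Box()
print(b.const)                # 10 -- the lookup found Ten on the class and called its __get__
print(Box.__dict__['const'])  # <__main__.Ten object at 0x...> -- the raw descriptor, no __get__ call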

Python looks up a name (attribute) first in the instance namespace/dict. If it doesn't find it there, it looks in the class namespace. If it still doesn't find it there, it walks through the base classes respecting the MRO (method resolution order).
What you've done there, is to define the class attributes Foo.foo and Bar.bar.
You never modified any instance namespace.
Try this:
class Foo:
    foo = 1

f = Foo()
f.foo = 2
print('f.foo = {!r}'.format(f.foo))
print('f.__class__.foo = {!r}'.format(f.__class__.foo))
And you will be able to understand the difference.
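For reference, running that snippet should print the instance attribute shadowing the class attribute:

f.foo = 2
f.__class__.foo = 1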

Related

@properties and public attribute

I'm following a tutorial on python 3 and there is a simple example I'm struggling with.
class P:
    def __init__(self, x):
        self.x = x

    @property
    def x(self):
        return self.__x

    @x.setter
    def x(self, x):
        if x < 0:
            self.__x = 0
        elif x > 1000:
            self.__x = 1000
        else:
            self.__x = x
Why is the attribute x in __init__ defined as public, but accessed like a private attribute with self.__x in the functions decorated with @property and @x.setter?
This isn't entirely straightforward, because it relies heavily on Python's descriptor protocol (see also the Descriptor HOWTO, which covers property as well). But I'll try to explain it in simple terms.
You have a class that has (besides what is inherited from the implicit superclass object and some automatically included stuff) 2 attributes:
>>> P.__dict__
mappingproxy({'__init__': <function __main__.P.__init__>,
'x': <property at 0x2842664cbd8>})
I removed the automatically added attributes for the sake of this discussion. You can always add or replace attributes as much as you want:
>>> P.y = 1000
>>> P.__dict__
mappingproxy({'__init__': <function __main__.P.__init__>,
'x': <property at 0x2842664cbd8>,
'y': 1000})
But when you create an instance, the instance will have only one attribute, _P__x (the _P is inserted because attribute names starting with __ and not ending in __ are name-mangled):
>>> p = P(10)
>>> p.__dict__
{'_P__x': 10}
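As a small aside (not from the original answer), the mangled name is an ordinary attribute and can be read directly:

>>> p._P__x
10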
You can also add almost any attribute to the instance (only almost, because the descriptor protocol intercepts certain attribute accesses - see below):
>>> p.z = 100
>>> p.__dict__
{'_P__x': 10, 'z': 100}
That's where the descriptor protocol comes into play. When you access an attribute on the instance, Python starts by checking whether the instance itself has that attribute. If it doesn't, Python looks at the class - but through the descriptor protocol. So accessing self.x is roughly equivalent to type(self).x.__get__(self):
>>> p.x
10
>>> type(p).x.__get__(p)
10
Likewise setting the attribute with self.x = 200 will call type(self).x.__set__(self, 200):
>>> p.x = 200
>>> p.x
200
>>> type(p).x.__set__(p, 100)
>>> p.x
100
The @property intercepts, through the descriptor protocol, any access to x on self. So you can't use the name x to store the actual value on the instance, because that access would always go through the @property, @x.setter (and @x.deleter, which you haven't implemented) functions of the class. So you have to use another name to store the value.
It's typically stored under the same name with one leading underscore (which also eases maintainability). It's actually not good practice to use two leading underscores, because that makes it hard to subclass your class and modify the x property without name-mangling the variable name yourself.
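A sketch of what that conventional spelling might look like (my rewrite of the tutorial class, not code from it):

class P:
    def __init__(self, x):
        self.x = x                      # still routed through the setter below

    @property
    def x(self):
        return self._x                  # single underscore: no name mangling

    @x.setter
    def x(self, x):
        self._x = max(0, min(x, 1000))  # clamp to the 0..1000 range, as before

A subclass can now override or extend the x property and still reach self._x without having to reproduce a mangled name.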

Behavioural difference between decorated function and method in Python

I use the following workaround for "Pythonic static variables":
def static_vars(**kwargs):
    """Decorator for functions that sets static variables."""
    def decorate(func):
        for k, v in kwargs.items():
            setattr(func, k, v)
        return func
    return decorate

@static_vars(var=1)
def global_foo():
    _ = global_foo
    print _.var
    _.var += 1

global_foo()  # >>> 1
global_foo()  # >>> 2
It works just as it's supposed to. But when I move such a decorated function inside a class, I get a strange change in behaviour:
class A(object):
    @static_vars(var=1)
    def foo(self):
        bound = self.foo
        unbound = A.foo
        print bound.var  # OK, prints 1 at first call
        bound.var += 1   # AttributeError: 'instancemethod' object has no attribute 'var'

    def check(self):
        bound = self.foo
        unbound = A.foo
        print 'var' in dir(bound)
        print 'var' in dir(unbound)
        print bound.var is unbound.var  # it doesn't make much sense but anyway

a = A()
a.check()  # >>> True
           # >>> True
           # >>> True
a.foo()    # ERROR
I cannot see what causes this behaviour. It seems to me that it has something to do with Python's descriptor protocol and all that bound vs. unbound method stuff. Somehow the foo.var attribute is readable but not writable.
Any help is appreciated.
P.S. I understand that static function variables are essentially class variables and this decorator is unnecessary in the second case but the question is more for understanding the Python under the hood than to get any working solution.
a.foo doesn't return the actual function you defined; it returns a bound method, which wraps the function and has self bound to a.
https://docs.python.org/3/howto/descriptor.html#functions-and-methods
That guide is a little out of date, though: in Python 3 there are no unbound methods, and looking a method up on the class just returns the plain function.
So, to access the attributes on the function, you need to go through A.foo (or a.foo.__func__) instead of a.foo. Plain A.foo works like this only in Python 3; in Python 2, A.foo is an unbound method, so you need A.foo.__func__ (or a.foo.__func__) there.
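As a sketch (adapted to Python 3 syntax, so not the asker's exact code), the write works once it goes through the underlying function object rather than the method wrapper:

def static_vars(**kwargs):
    """Decorator for functions that sets static variables."""
    def decorate(func):
        for k, v in kwargs.items():
            setattr(func, k, v)
        return func
    return decorate

class A:
    @static_vars(var=1)
    def foo(self):
        # type(self).foo is the plain function in Python 3;
        # self.foo.__func__ would also work, in Python 2 as well.
        type(self).foo.var += 1
        print(type(self).foo.var)

a = A()
a.foo()  # 2
a.foo()  # 3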

Is it possible to modify the behavior of len()?

I'm aware that I can create a custom __repr__ or __add__ method (and so on) to modify the behavior of operators and functions. Is there a method to override for len()?
For example:
class Foo:
    def __repr__(self):
        return "A wild Foo Class in its natural habitat."

foo = Foo()
print(foo)       # A wild Foo Class in its natural habitat.
print(repr(foo)) # A wild Foo Class in its natural habitat.
Could this be done for len, with a list? Normally, it would look like this:
foo = []
print(len(foo)) # 0
foo = [1, 2, 3]
print(len(foo)) # 3
What if I want to leave certain types out of the count? Like this:
class Bar(list):
    pass

foo = [Bar(), 1, '']
print(len(foo))  # 3

count = 0
for item in foo:
    if not isinstance(item, Bar):
        count += 1
print(count)  # 2
Is there a way to do this from within a list subclass?
Yes, implement the __len__ method:
def __len__(self):
    return 42
Demo:
>>> class Foo(object):
...     def __len__(self):
...         return 42
...
>>> len(Foo())
42
From the documentation:
Called to implement the built-in function len(). Should return the length of the object, an integer >= 0. Also, an object that doesn’t define a __bool__() method and whose __len__() method returns zero is considered to be false in a Boolean context.
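As a small aside on the last sentence of that quote (my own example, not from the answer): an object with no __bool__ and a zero-returning __len__ is falsy.

class Empty:
    def __len__(self):
        return 0

print(bool(Empty()))  # False -- truthiness falls back to __len__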
For your specific case:
>>> class Bar(list):
... def __len__(self):
... return sum(1 for ob in self if not isinstance(ob, Bar))
...
>>> len(Bar([1, 2, 3]))
3
>>> len(Bar([1, 2, 3, Bar()]))
3
Yes, just as you have already discovered that you can override the behaviour of a repr() call by implementing the __repr__ magic method, you can specify the behaviour of a len() call by implementing (surprise, surprise) the __len__ magic method:
>>> class Thing:
...     def __len__(self):
...         return 123
...
>>> len(Thing())
123
A pedant might mention that you are not modifying the behaviour of len(), you are modifying the behaviour of your class. len just does the same thing it always does, which includes checking for a __len__ attribute on the argument.
Remember: Python is a dynamically typed, duck-typed language.
If it acts like something that has a length:
class MyCollection(object):
    def __len__(self):
        return 1234
Example:
>>> obj = MyCollection()
>>> len(obj)
1234
if it doesn't act like it has a length; KABOOM!
class Foo(object):
    def __repr__(self):
        return "<Foo>"
Example:
>>> try:
...     obj = Foo()
...     len(obj)
... except:
...     raise
...
Traceback (most recent call last):
  File "<stdin>", line 3, in <module>
TypeError: object of type 'Foo' has no len()
From Typing:
Python uses duck typing and has typed objects but untyped variable names. Type constraints are not checked at compile time; rather, operations on an object may fail, signifying that the given object is not of a suitable type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are not well-defined (for example, adding a number to a string) rather than silently attempting to make sense of them.
Example:
>>> x = 1234
>>> s = "1234"
>>> x + s
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'int' and 'str'
You can just add a __len__ method to your class.
class Test:
    def __len__(self):
        return 2

a = Test()
len(a)  # --> 2

Adding a method to a class after its creation in Python

Would it be possible in any way to add a function to an existing instance of a class? (This is most likely only useful in an interactive session, when you want to add a method without re-instantiating.)
Example class:
class A():
    pass
Example method to add (the reference to self is important here):
def newMethod(self):
    self.value = 1
Output:
>>> a = A()
>>> a.newMethod = newMethod  # assigning the plain function does not bind it
>>> a.newMethod()            # so this does not work unfortunately, not enough args
TypeError: newMethod() takes exactly 1 argument (0 given)
>>> a.value                  # so this does not exist
Yes, but you need to manually bind it:
a.newMethod = newMethod.__get__(a, A)
Functions are descriptors and are normally bound to instances when looked up as attributes on the instance; Python then calls the .__get__ method for you to produce the bound method.
Demo:
>>> class A():
...     pass
...
>>> def newMethod(self):
...     self.value = 1
...
>>> a = A()
>>> newMethod
<function newMethod at 0x106484848>
>>> newMethod.__get__(a, A)
<bound method A.newMethod of <__main__.A instance at 0x1082d1560>>
>>> a.newMethod = newMethod.__get__(a, A)
>>> a.newMethod()
>>> a.value
1
Do take into account that adding bound methods on instances does create circular references, which means that these instances can stay around longer waiting for the garbage collector to break the cycle if no longer referenced by anything else.
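An equivalent sketch using types.MethodType (Python 3 syntax), which performs the same binding explicitly; the circular-reference caveat applies here too:

import types

class A:
    pass

def newMethod(self):
    self.value = 1

a = A()
a.newMethod = types.MethodType(newMethod, a)  # bound to this one instance only
a.newMethod()
print(a.value)  # 1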

Is it possible to change an instance's method implementation without changing all other instances of the same class? [duplicate]

This question already has answers here: Override a method at instance level (11 answers). Closed 3 years ago.
I do not know Python very well (never used it before :D), but I can't seem to find anything online. Maybe I just didn't google the right question, but here I go:
I want to change an instance's implementation of a specific method. When I googled for it, I found that you can do it, but it changes the implementation for all other instances of the same class. For example:
def showyImp(self):
    print self.y

class Foo:
    def __init__(self):
        self.x = "x = 25"
        self.y = "y = 4"

    def showx(self):
        print self.x

    def showy(self):
        print "y = woohoo"

class Bar:
    def __init__(self):
        Foo.showy = showyImp
        self.foo = Foo()

    def show(self):
        self.foo.showx()
        self.foo.showy()

if __name__ == '__main__':
    b = Bar()
    b.show()
    f = Foo()
    f.showx()
    f.showy()
This does not work as expected, because the output is the following:
x = 25
y = 4
x = 25
y = 4
And I want it to be:
x = 25
y = 4
x = 25
y = woohoo
I tried to change Bar's init method with this:
def __init__(self):
    self.foo = Foo()
    self.foo.showy = showyImp
But I get the following error message:
showyImp() takes exactly 1 argument (0 given)
So yeah... I tried using setattr(), but it seems like it's the same as self.foo.showy = showyImp.
Any clue? :)
Since Python 2.6, you should use the types module's MethodType class:
from types import MethodType

class A(object):
    def m(self):
        print 'aaa'

a = A()

def new_m(self):
    print 'bbb'

a.m = MethodType(new_m, a)
As another answer pointed out, however, this will not work for 'magic' methods of new-style classes, such as __str__().
(Note: this answer is outdated; the MethodType answer above works with modern Python.)
Everything you wanted to know about Python Attributes and Methods.
Yes, this is an indirect answer, but it demonstrates a number of techniques and explains some of the more intricate details and "magic".
For a "more direct" answer, consider python's new module. In particular, look at the instancemethod function which allows "binding" a method to an instance -- in this case, that would allow you to use "self" in the method.
import new

class Z(object):
    pass

z = Z()

def method(self):
    return self

z.q = new.instancemethod(method, z, None)
z is z.q()  # True
If you ever need to do it for a special method (which, for a new-style class -- which is what you should always be using and the only kind in Python 3 -- is looked up on the class, not the instance), you can just make a per-instance class, e.g....:
self.foo = Foo()
meths = {'__str__': lambda self: 'peekaboo!'}
self.foo.__class__ = type('yFoo', (Foo,), meths)
Edit: I've been asked to clarify the advantages of this approach wrt new.instancemethod...:
>>> class X(object):
...     def __str__(self): return 'baah'
...
>>> x = X()
>>> y = X()
>>> print x, y
baah baah
>>> x.__str__ = new.instancemethod(lambda self: 'boo!', x)
>>> print x, y
baah baah
As you can see, the new.instancemethod is totally useless in this case. OTOH...:
>>> x.__class__ = type('X', (X,), {'__str__': lambda self: 'boo!'})
>>> print x, y
boo! baah
...assigning a new class works great for this case and every other. BTW, as I hope is clear, once you've done this to a given instance you can later add more methods and other class attributes to its x.__class__ and intrinsically affect only that one instance!
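The same idea as a Python 3 sketch (new.instancemethod is gone there, but the per-instance subclass trick is unchanged; the subclass name is arbitrary):

class X:
    def __str__(self):
        return 'baah'

x, y = X(), X()
# Special methods are looked up on the type, so swap in a one-off subclass:
x.__class__ = type('X_boo', (X,), {'__str__': lambda self: 'boo!'})
print(x, y)  # boo! baah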
If you're binding to the instance, you shouldn't include the self argument:
>>> class Foo(object):
...     pass
...
>>> def donothing():
...     pass
...
>>> f = Foo()
>>> f.x = donothing
>>> f.x()
>>>
You do need the self argument if you're binding to a class though:
>>> def class_donothing(self):
...     pass
...
>>> Foo.y = class_donothing
>>> f.y()
>>>
Your example is kind of twisted and complex, and I don't quite see what it has to do with your question. Feel free to clarify if you like.
However, it's pretty easy to do what you're looking to do, assuming I'm reading your question right.
class Foo(object):
    def bar(self):
        print('bar')

def baz():
    print('baz')
In an interpreter ...
>>> f = Foo()
>>> f.bar()
bar
>>> f.bar = baz
>>> f.bar()
baz
>>> g = Foo()
>>> g.bar()
bar
>>> f.bar()
baz
Do Not Do This.
Changing one instance's methods is just wrong.
Here are the rules of OO Design.
Avoid Magic.
If you can't use inheritance, use delegation.
That means that every time you think you need something magic, you should have been writing a "wrapper" or Facade around the object to add the features you want.
Just write a wrapper.
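A minimal sketch of what that delegation advice might look like for the Foo/showy example above (FooWrapper is my own name, and I've used Python 3 print syntax; none of this is from the original answer):

class Foo:
    def __init__(self):
        self.x = "x = 25"
        self.y = "y = 4"

    def showx(self):
        print(self.x)

    def showy(self):
        print("y = woohoo")

class FooWrapper:
    """Wraps one Foo instance and changes showy for that instance only."""
    def __init__(self, foo):
        self._foo = foo

    def showy(self):
        print(self._foo.y)               # the behaviour Bar wanted

    def __getattr__(self, name):
        return getattr(self._foo, name)  # delegate everything else to Foo

w = FooWrapper(Foo())
w.showx()      # x = 25
w.showy()      # y = 4
Foo().showy()  # y = woohoo -- other Foo instances are unaffected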
