Class definition:
class A(object):
    def foo(self):
        print "A"

class B(object):
    def foo(self):
        print "B"

class C(A, B):
    def foo(self):
        print "C"
Output:
>>> super(C)
<super: <class 'C'>, NULL>
>>> super(C).foo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'super' object has no attribute 'foo'
What is the use of super(type) if we can't access attributes of a class?
super(type) is an "unbound" super object. The docs on super discuss that, but don't really elaborate what an "unbound" super object is or does. It is simply a fact of the language that you cannot use them in the manner you are attempting to use them.
This is perhaps what you want:
>>> super(C, C).foo is B.foo
True
That said, what good is an unbound super object? I had to look this up, myself, and found a decent answer here. Note however, that the article's conclusion is that unbound super is a language wart, has no practical use, and should be removed from the language (and having read the article, I agree). The article's explanation on unbound super starts with:
Unbound super objects must be turned into bound objects in order to make them to dispatch properly. That can be done via the descriptor protocol. For instance, I can convert super(C1) in a super object bound to c1 in this way:
>>> c1 = C1()
>>> boundsuper = super(C1).__get__(c1, C1) # this is the same as super(C1, c1)
So, that doesn't seem useful, but the article goes on:
Having established that the unbound syntax does not return unbound methods one might ask what its purpose is. The answer is that super(C) is intended to be used as an attribute in other classes. Then the descriptor magic will automatically convert the unbound syntax in the bound syntax. For instance:
>>> class B(object):
...     a = 1
>>> class C(B):
...     pass
>>> class D(C):
...     sup = super(C)
>>> d = D()
>>> d.sup.a
1
This works since d.sup.a calls super(C).__get__(d,D).a which is turned into super(C, d).a and retrieves B.a.
There is a single use case for the single argument syntax of super that I am aware of, but I think it gives more troubles than advantages. The use case is the implementation of autosuper made by Guido on his essay about new-style classes.
The idea there is to use the unbound super objects as private attributes. For instance, in our example, we could define the private attribute __sup in the class C as the unbound super object super(C):
>>> C._C__sup = super(C)
But do note that the article goes on to describe the problems with this approach (it doesn't quite work correctly, mostly because the MRO depends on the class of the instance you are dealing with, so a given class X's superclass may differ depending on which instance of X you are handed).
To accomplish what you want, you need to call it like this:
>>> myC = C()
>>> super(C,myC).foo()
A
Note the NULL in the repr above: the super object has no instance bound to it. super basically needs both a class and an instance of a related class in order to function:
>>> super(C, C()).foo
<bound method C.foo of <__main__.C object at 0x225dc50>>
See: Understanding Python super() with __init__() methods for more details.
Related
I hope someone can answer this that has a good deep understanding of Python :)
Consider the following code:
>>> class A(object):
...     pass
...
>>> def __repr__(self):
...     return "A"
...
>>> from types import MethodType
>>> a = A()
>>> a
<__main__.A object at 0x00AC6990>
>>> repr(a)
'<__main__.A object at 0x00AC6990>'
>>> setattr(a, "__repr__", MethodType(__repr__, a, a.__class__))
>>> a
<__main__.A object at 0x00AC6990>
>>> repr(a)
'<__main__.A object at 0x00AC6990>'
>>>
Notice how repr(a) does not yield the expected result of "A"?
I want to know why this is the case and if there is a way to achieve this...
In contrast, the following example works (maybe because we're not trying to override a special method?):
>>> class A(object):
...     def foo(self):
...         return "foo"
...
>>> def bar(self):
...     return "bar"
...
>>> from types import MethodType
>>> a = A()
>>> a.foo()
'foo'
>>> setattr(a, "foo", MethodType(bar, a, a.__class__))
>>> a.foo()
'bar'
>>>
Python usually doesn't call the special methods (those with name surrounded by __) on the instance, but only on the class. (Although this is an implementation detail, it's characteristic of CPython, the standard interpreter.) So there's no way to override __repr__() directly on an instance and make it work. Instead, you need to do something like so:
class A(object):
    def __repr__(self):
        return self._repr()
    def _repr(self):
        return object.__repr__(self)
Now you can override __repr__() on an instance by substituting _repr().
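For example, something along these lines should do it (the lambda here is just an illustration; any callable stored on the instance works):

>>> a = A()
>>> a._repr = lambda: "A"
>>> repr(a)
'A'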
As explained in Special Method Lookup:
For custom classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object’s type, not in the object’s instance dictionary … In addition to bypassing any instance attributes in the interest of correctness, implicit special method lookup generally also bypasses the __getattribute__() method even of the object’s metaclass
(The part I've snipped out explains the rationale behind this, if you're interested in that.)
Python doesn't document exactly when an implementation should or shouldn't look up the method on the type; all it documents is, in effect, that implementations may or may not look at the instance for special method lookups, so you shouldn't count on either.
As you can guess from your test results, in the CPython implementation, __repr__ is one of the functions looked up on the type.
Things are slightly different in 2.x, mostly because of the presence of classic classes, but as long as you're only creating new-style classes you can think of them as the same.
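For instance, under Python 2 an old-style (classic) class does honour instance-level special methods; a quick sketch, assuming a 2.x interpreter:

>>> class Old:          # no (object), so this is a classic class in 2.x
...     pass
...
>>> o = Old()
>>> o.__repr__ = lambda: 'patched'
>>> repr(o)
'patched'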
The most common reason people want to do this is to monkey-patch different instances of an object to do different things. You can't do that with special methods, so… what can you do? There's a clean solution, and a hacky solution.
The clean solution is to implement a special method on the class that just calls a regular method on the instance. Then you can monkey patch that regular method on each instance. For example:
class C(object):
    def __repr__(self):
        return getattr(self, '_repr')()
    def _repr(self):
        return 'Boring: {}'.format(object.__repr__(self))

c = C()

def c_repr(self):
    return "It's-a me, c_repr: {}".format(object.__repr__(self))

c._repr = c_repr.__get__(c)
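With that in place, the per-instance method is picked up (the object address shown is only illustrative):

>>> repr(c)
"It's-a me, c_repr: <__main__.C object at 0x7f...>"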
The hacky solution is to build a new subclass on the fly and re-class the object. I suspect anyone who really has a situation where this is a good idea will know how to implement it from that sentence, and anyone who doesn't know how to do so shouldn't be trying, so I'll leave it at that.
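For the curious, a rough sketch of what that re-classing hack could look like (purely illustrative, not a recommendation):

Patched = type('Patched', (C,), {'__repr__': lambda self: 'patched!'})
c.__class__ = Patched   # re-class just this one object
repr(c)                 # now uses the subclass's __repr__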
The reason for this is special methods (__x__()) are defined for the class, not the instance.
This makes sense when you think about __new__() - it would be impossible to call this on an instance as the instance doesn't exist when it's called.
So you can do this on the class as a whole if you want to:
>>> A.__repr__ = __repr__
>>> a
A
Or on an individual instance, as in kindall's answer. (Note there is a lot of similarity here, but I thought my examples added enough to post this as well).
For new-style classes, Python uses a special method lookup that bypasses instances. Here is an excerpt from the source:
/* Internal routines to do a method lookup in the type
   without looking in the instance dictionary
   (so we can't use PyObject_GetAttr) but still binding
   it to the instance. The arguments are the object,
   the method name as a C string, and the address of a
   static variable used to cache the interned Python string.

   Two variants:

   - lookup_maybe() returns NULL without raising an exception
     when the _PyType_Lookup() call fails;

   - lookup_method() always raises an exception upon errors.

   - _PyObject_LookupSpecial() exported for the benefit of other places.
*/
You can either change to an old-style class (don't inherit from object) or you can add dispatcher methods to the class (methods that forward lookups back to the instance). For an example of instance dispatcher methods, see the recipe at http://code.activestate.com/recipes/578091
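The gist of such a dispatcher method is roughly this (a minimal sketch, not the full recipe from that link):

class A(object):
    def __repr__(self):
        # forward the special-method lookup back to the instance, if present
        if '__repr__' in self.__dict__:
            return self.__dict__['__repr__'](self)
        return object.__repr__(self)

a = A()
a.__dict__['__repr__'] = lambda self: 'A'
repr(a)   # now returns 'A'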
TLDR: It is impossible to define proper, unbound methods on instances; this applies to special methods as well. Since bound methods are first-class objects, in certain circumstances the difference is not noticeable.
However, special methods are always looked up as proper, unbound methods by Python when needed.
You can always manually fall back to a special method that uses the more generic attribute access. Attribute access covers both bound methods stored as attributes as well as unbound methods that are bound as needed. This is similar to how __repr__ or other methods would use attributes to define their output.
class A:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        # call attribute to derive __repr__
        return self.__representation__()

    def __representation__(self):
        return f'{self.__class__.__name__}({self.name})'

    def __str__(self):
        # return attribute to derive __str__
        return self.name
Unbound versus Bound Methods
There are two meanings to a method in Python: unbound methods of a class and bound methods of an instance of that class.
An unbound method is a regular function on a class or one of its base classes. It can be defined either during class definition, or added later on.
>>> class Foo:
...     def bar(self): print('bar on', self)
...
>>> Foo.bar
<function __main__.Foo.bar(self)>
An unbound method exists only once on the class - it is the same for all instances.
A bound method is an unbound method which has been bound to a specific instance. This usually means the method was looked up through the instance, which invokes the function's __get__ method.
>>> foo = Foo()
>>> # lookup through instance
>>> foo.bar
<bound method Foo.bar of <__main__.Foo object at 0x10c3b6390>>
>>> # explicit descriptor invocation
>>> type(foo).bar.__get__(foo, type(foo))
<bound method Foo.bar of <__main__.Foo object at 0x10c3b6390>>
As far as Python is concerned, "a method" generally means an unbound method that is bound to its instance as required. When Python needs a special method, it directly invokes the descriptor protocol for the unbound method. In consequence, the method is looked up on the class; an attribute on the instance is ignored.
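A quick check makes that visible (the address is from the session above and only illustrative):

>>> foo.__repr__ = lambda: 'patched'
>>> repr(foo)              # implicit lookup ignores the instance attribute
'<__main__.Foo object at 0x10c3b6390>'
>>> foo.__repr__()         # explicit attribute access still finds it
'patched'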
Bound Methods on Objects
A bound method is created anew every time it is fetched from its instance. The result is a first-class object that has identity, can be stored and passed around, and be called later on.
>>> foo.bar is foo.bar # binding happens on every lookup
False
>>> foo_bar = foo.bar # bound methods can be stored
>>> foo_bar() # stored bound methods can be called later
bar on <__main__.Foo object at 0x10c3b6390>
>>> foo_bar()
bar on <__main__.Foo object at 0x10c3b6390>
Being able to store bound methods means they can also be stored as attributes. Storing a bound method on its bound instance makes it appear similar to an unbound method. But in fact a stored bound method behaves subtly differently, and it can be stored on any object that allows attributes.
>>> foo.qux = foo.bar
>>> foo.qux
<bound method Foo.bar of <__main__.Foo object at 0x10c3b6390>>
>>> foo.qux is foo.qux # binding is not repeated on every lookup!
True
>>> too = Foo()
>>> too.qux = foo.qux # bound methods can be stored on other instances!
>>> too.qux # ...but are still bound to the original instance!
<bound method Foo.bar of <__main__.Foo object at 0x10c3b6390>>
>>> import builtins
>>> builtins.qux = foo.qux # bound methods can be stored...
>>> qux # ... *anywhere* that supports attributes
<bound method Foo.bar of <__main__.Foo object at 0x10c3b6390>>
As far as Python is concerned, bound methods are just regular, callable objects. Just as it has no way of knowing whether too.qux is a method of too, it cannot deduce whether too.__repr__ is a method either.
I found a good description of Python properties at this link:
How does the @property decorator work in Python?
The example below shows how it works, but I found an exception for the class attribute __name__.
I now have a reload function which raises an error.
@property
def foo(self): return self._foo
really means the same thing as
def foo(self): return self._foo
foo = property(foo)
here is my example
class A(object):
    @property
    def __name__(self):
        return 'dd'

a = A()
print(a.__name__)
dd
This works; however, the code below does not work:
class B(object):
    pass

def test(self):
    return 'test'

B.t = property(test)
print(B.t)
<property object at 0x7f71dc5e1180>

B.__name__ = property(test)
Traceback (most recent call last):
  File "<string>", line 23, in <module>
TypeError: can only assign string to B.__name__, not 'property'
Does anyone know what is different about the built-in __name__ attribute? It works if I use the normal property decorator, but not with the second way. I now have a requirement to reload the function when code changes, and this error blocks the reload procedure. Can anyone help? Thanks.
The short answer is: __name__ is deep magic in CPython.
So, first, let's get the technicalities out of the way. To quote what you said
@property
def foo(self): return self._foo
really means the same thing as
def foo(self): return self._foo
foo = property(foo)
This is correct. But it can be a bit misleading. You have this A class
class A(object):
    @property
    def __name__(self):
        return 'dd'
And you claim that it's equivalent to this B class
class B(object):
    pass

def test(self):
    return 'test'

B.__name__ = property(test)
which is not correct. It's actually equivalent to this
def test(self):
    return 'test'

class B(object):
    __name__ = property(test)
which works and does what you expect it to. And you're also correct that, for most names in Python, your B and my B would be the same. What difference does it make whether I'm assigning to a name inside the class or immediately after its declaration? Replace __name__ with ravioli in the above snippets and either will work. So what makes __name__ special?
That's where the magic comes in. When you define a name inside the class, you're working directly on the class' internal dictionary, so
class A:
    foo = 1
    def bar(self):
        return 1
This defines two things on the class A. One happens to be a number and the other happens to be a function (which will likely be called as a bound method). Now we can access these.
A.foo # Returns 1, simple access
A.bar # Returns the function object bar
A().foo # Returns 1
A().bar # Returns a bound method object
When we look up the names directly on A, we simply access the slots like we would on any object. However, when we look them up on A() (an instance of A), a multi-step process happens
1. Look up the name on the instance's __dict__ directly.
2. If that failed, then look up the name on the class' __dict__.
3. If we found it on the class, see if there's a __get__ on the result and call it.
That third step is what allows bound method objects to work, and it's also the mechanism underlying the property decorators in Python.
Let's go through this whole process with a property called ravioli. No magic here.
class A(object):
    @property
    def ravioli(self):
        return 'dd'
When we do A().ravioli, first we see if there's a ravioli on the instance we just made. There isn't, so we check the class' __dict__, and indeed we find a property object at that position. That property object has a __get__, so we call it, and it returns 'dd', so indeed we get the string 'dd'.
>>> A().ravioli
'dd'
Now I would expect that, if I do A.ravioli, we will simply get the property object. Since we're not calling it on an instance, we don't call __get__.
>>> A.ravioli
<property object at 0x7f5bd3690770>
And indeed, we get the property object, as expected.
Now let's do the exact same thing but replace ravioli with __name__.
class A(object):
    @property
    def __name__(self):
        return 'dd'
Great! Now let's make an instance.
>>> A().__name__
'dd'
Sensible, we looked up __name__ on A's __dict__ and found a property, so we called its __get__. Nothing weird.
Now
>>> A.__name__
'A'
Um... what? If we had just found the property on A's __dict__, then we should see that property here, right?
Well, no, not always. See, in the abstract, foo.bar normally looks in foo.__dict__ for a field called bar. But it doesn't do that if the type of foo defines a __getattribute__. If it defines that, then that method is always called instead.
Now, the type of A is type, the type of all Python types. Read that sentence a few times and make sure it makes sense. And if we do a bit of spelunking into the CPython source code, we see that type actually defines __getattribute__ and __setattr__ for the following names:
__name__
__qualname__
__bases__
__module__
__abstractmethods__
__dict__
__doc__
__text_signature__
__annotations__
That explains how __name__ can serve double duty as a property on the class instances and also as an accessible field on the same class. It also explains why you get that highly specialized error message when reassigning to B.__name__: the line
B.__name__ = property(test)
is actually equivalent to
type.__setattr__(B, '__name__', property(test))
which is calling our special-case checker in CPython.
For any other type in Python, in particular for user-defined types, we could get around this with object.__setattr__. Unfortunately,
>>> object.__setattr__(B, '__name__', property(test))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can't apply this __setattr__ to type object
There's a really specific check to make sure we don't do exactly this, and the comment reads
/* Reject calls that jump over intermediate C-level overrides. */
We also can't use metaclasses to override __setattr__ and __getattribute__, because the instance lookup procedure specifically doesn't call those (in the above examples, __getattribute__ was called in every case except the one we care about for property purposes). I even tried subclassing str to trick __setattr__ into accepting our made-up value
class NameProperty(str):
    def __new__(cls, value, **kwargs):
        return str.__new__(cls, value)
    def __init__(self, value, method):
        self.method = method
    def __get__(self, instance, owner):
        return self.method(instance)

B.__name__ = NameProperty(B.__name__, method=test)
This actually passes the __setattr__ check, but it doesn't assign to B.__dict__ (since the __setattr__ still assigns to the actual CPython-level name, not to B.__dict__['__name__']), so the property lookup doesn't work.
So... that's how I reached my conclusion of: __name__ is deep magic in CPython. All of the usual Python metaprogramming techniques have failed, and all of the methods getting called are written deep down in C. My advice to you is: Stop using __name__ for things it's not intended for, or be prepared to write some C code and hack on CPython directly.
class Foo():
    def __init__(self):
        pass
    def makeInstanceAttribute(oops):
        oops.x = 10

f = Foo()
f.makeInstanceAttribute()
print(f.x)
and it prints 10. How does it work? Why does oops have the same effect as self?
Quoting the Python documentation:
Often, the first argument of a method is called self. This is nothing more than a convention: the name self has absolutely no special meaning to Python. Note, however, that by not following the convention your code may be less readable to other Python programmers, and it is also conceivable that a class browser program might be written that relies upon such a convention.
self is just a convention. It can be named any way you want. What is actually passed to the function as the first argument is the object instance. Consider this code:
class A(object):
    def self_test(self):
        print self
    def foo(oops):
        print oops
>>> a = A()
>>> a.self_test()
<__main__.A object at 0x03CB18D0>
>>> a.foo()
<__main__.A object at 0x03CB18D0>
>>>
This question is probably related to yours, you might find it helpful.
Given a class A I can simply add an instancemethod a via
def a(self):
    pass

A.a = a
However, if I try to add another class B's instancemethod b, i.e. A.b = B.b, the attempt at calling A().b() yields a
TypeError: unbound method b() must be called with B instance as first argument (got nothing instead)
(while B().b() does fine). Indeed there is a difference between
A.a -> <unbound method A.a>
A.b -> <unbound method B.b> # should be A.b, not B.b
So,
How to fix this?
Why is it this way? It doesn't seem intuitive, but usually Guido has some good reasons...
Curiously enough, this no longer fails in Python3...
Let's:
class A(object): pass

class B(object):
    def b(self):
        print 'self class: ' + self.__class__.__name__
When you are doing:
A.b = B.b
You are not attaching a function to A, but an unbound method. As a consequence, Python only adds it as a standard attribute and does not convert it into an unbound method of A. The solution is simple: attach the underlying function:
A.b = B.b.__func__
print A.b
# print: <unbound method A.b>
a = A()
a.b()
# print: self class: A
I don't know all the differences between unbound methods and functions (only that the former contains the latter), nor how all of this works internally, so I cannot explain the reason for it. My understanding is that a method object (bound or not) requires more information and functionality than a function, but it needs one in order to execute.
I would agree that automating this (changing the class of an unbound method) could be a good choice, but I can find reasons not to. It is thus surprising that Python 3 differs from Python 2 here. I'd like to find out the reason for this choice.
When you take the reference to a method on a class instance, the method is bound to that class instance.
B().b is equivalent to: lambda *args, **kwargs: b(<B instance>, *args, **kwargs)
I suspect you are getting a similarly (but not identically) wrapped reference when evaluating B.b. However, this is not the behavior I would have expected.
Interestingly:
A.a = lambda s: B.b(s)
A().a()
yields:
TypeError: unbound method b() must be called with B instance as first
argument (got A instance instead)
This suggests that B.b is evaluating to a wrapper for the actual method, and the wrapper is checking that 'self' has the expected type. I don't know, but this is probably about interpreter efficiency.
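In Python 2 you can poke at that wrapper directly (a quick illustration; im_func and im_class are the 2.x attribute names):

>>> B.b
<unbound method B.b>
>>> B.b.im_func          # the plain function underneath
<function b at 0x...>
>>> B.b.im_class         # the class the wrapper type-checks against
<class '__main__.B'>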
It's an interesting question though. I hope someone can chime in with a more definitive answer.
What is the difference between the following class methods?
Is it that one is static and the other is not?
class Test(object):
    def method_one(self):
        print "Called method_one"
    def method_two():
        print "Called method_two"
a_test = Test()
a_test.method_one()
a_test.method_two()
In Python, there is a distinction between bound and unbound methods.
Basically, a call to a member function (like method_one), a bound function
a_test.method_one()
is translated to
Test.method_one(a_test)
i.e. a call to an unbound method. Because of that, a call to your version of method_two will fail with a TypeError
>>> a_test = Test()
>>> a_test.method_two()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: method_two() takes no arguments (1 given)
You can change the behavior of a method using a decorator
class Test(object):
    def method_one(self):
        print "Called method_one"

    @staticmethod
    def method_two():
        print "Called method_two"
The decorator tells the built-in default metaclass type (the class of a class, cf. this question) to not create bound methods for method_two.
Now you can invoke the static method both on an instance and on the class directly:
>>> a_test = Test()
>>> a_test.method_one()
Called method_one
>>> a_test.method_two()
Called method_two
>>> Test.method_two()
Called method_two
Methods in Python are a very, very simple thing once you understand the basics of the descriptor system. Imagine the following class:
class C(object):
    def foo(self):
        pass
Now let's have a look at that class in the shell:
>>> C.foo
<unbound method C.foo>
>>> C.__dict__['foo']
<function foo at 0x17d05b0>
As you can see if you access the foo attribute on the class you get back an unbound method, however inside the class storage (the dict) there is a function. Why's that? The reason for this is that the class of your class implements a __getattribute__ that resolves descriptors. Sounds complex, but is not. C.foo is roughly equivalent to this code in that special case:
>>> C.__dict__['foo'].__get__(None, C)
<unbound method C.foo>
That's because functions have a __get__ method which makes them descriptors. If you have an instance of a class it's nearly the same, except that the instance is passed where None was before:
>>> c = C()
>>> C.__dict__['foo'].__get__(c, C)
<bound method C.foo of <__main__.C object at 0x17bd4d0>>
Now why does Python do that? Because the method object binds the first parameter of a function to the instance of the class. That's where self comes from. Now sometimes you don't want your class to make a function a method, that's where staticmethod comes into play:
class C(object):
    @staticmethod
    def foo():
        pass
The staticmethod decorator wraps your function and implements a dummy __get__ that returns the wrapped function as a plain function and not as a method:
>>> C.__dict__['foo'].__get__(None, C)
<function foo at 0x17d0c30>
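In fact, a simplified staticmethod can be written in a few lines; a rough sketch, not CPython's actual implementation:

class my_staticmethod(object):
    def __init__(self, func):
        self.func = func
    def __get__(self, obj, objtype=None):
        # ignore the instance and the class, hand back the raw function
        return self.func

class C(object):
    @my_staticmethod
    def foo():
        return 42

>>> C.foo()
42
>>> C().foo()
42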
Hope that explains it.
When you call a class member, Python automatically uses a reference to the object as the first parameter. The variable self actually means nothing, it's just a coding convention. You could call it gargaloo if you wanted. That said, the call to method_two would raise a TypeError, because Python is automatically trying to pass a parameter (the reference to its parent object) to a method that was defined as having no parameters.
To actually make it work, you could append this to your class definition:
method_two = staticmethod(method_two)
or you could use the @staticmethod function decorator.
>>> class Class(object):
...     def __init__(self):
...         self.i = 0
...     def instance_method(self):
...         self.i += 1
...         print self.i
...     c = 0
...     @classmethod
...     def class_method(cls):
...         cls.c += 1
...         print cls.c
...     @staticmethod
...     def static_method(s):
...         s += 1
...         print s
...
>>> a = Class()
>>> a.class_method()
1
>>> Class.class_method() # The class shares this value across instances
2
>>> a.instance_method()
1
>>> Class.instance_method() # The class cannot use an instance method
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method instance_method() must be called with Class instance as first argument (got nothing instead)
>>> Class.instance_method(a)
2
>>> b = 0
>>> a.static_method(b)
1
>>> a.static_method(a.c) # Static method does not have direct access to
>>> # class or instance properties.
3
>>> Class.c # a.c above was passed by value and not by reference.
2
>>> a.c
2
>>> a.c = 5 # The connection between the instance
>>> Class.c # and its class is weak as seen here.
2
>>> Class.class_method()
3
>>> a.c
5
method_two won't work because you're defining a member function but not telling it what the function is a member of. If you execute the last line you'll get:
>>> a_test.method_two()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: method_two() takes no arguments (1 given)
If you're defining member functions for a class the first argument must always be 'self'.
An accurate explanation from Armin Ronacher is above; I'm expanding on his answer so that beginners like me understand it well.
The difference between the methods defined in a class, whether static or instance methods (there is yet another type, class methods, which is not discussed here), lies in whether they are somehow bound to the class instance or not; in other words, whether the method receives a reference to the class instance at runtime.
class C:
    a = []
    def foo(self):
        pass

C      # this is the class object
C.a    # is a list object (class property object)
C.foo  # is a function object (class property object)
c = C()
c      # this is the class instance
The __dict__ dictionary property of the class object holds the reference to all the properties and methods of a class object and thus
>>> C.__dict__['foo']
<function foo at 0x17d05b0>
the method foo is accessible as above. An important point to note here is that everything in Python is an object, and so the references in the dictionary above are themselves pointing to other objects. Let me call them Class Property Objects, or CPO for short, within the scope of my answer.
If a CPO is a descriptor, then the Python interpreter calls the __get__() method of the CPO to access the value it contains.
In order to determine if a CPO is a descriptor, the Python interpreter checks whether it implements the descriptor protocol. To implement the descriptor protocol is to implement these 3 methods:
def __get__(self, instance, owner)
def __set__(self, instance, value)
def __delete__(self, instance)
For example:
>>> C.__dict__['foo'].__get__(c, C)
where
self is the CPO (it could be an instance of list, str, function, etc.) and is supplied by the runtime
instance is the instance of the class where this CPO is defined (the object 'c' above) and needs to be explicitly supplied by us
owner is the class where this CPO is defined (the class object 'C' above) and needs to be supplied by us. However, that is only because we are calling __get__ on the CPO directly; when we access it through the instance, we don't need to supply this, since the runtime can supply the instance or its class (polymorphism)
value is the intended value for the CPO and needs to be supplied by us
Not all CPO are descriptors. For example
>>> C.__dict__['foo'].__get__(None, C)
<function C.foo at 0x10a72f510>
>>> C.__dict__['a'].__get__(None, C)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'list' object has no attribute '__get__'
This is because the list class doesn't implement the descriptor protocol.
Thus the argument self in c.foo(self) is required because the call's actual form is C.__dict__['foo'].__get__(c, C) (as explained above, C does not need to be supplied explicitly, since the runtime can work it out from the instance).
And this is also why you get a TypeError if you don't pass that required instance argument.
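For instance (a quick sketch; the message below is the Python 3 wording):

>>> c = C()
>>> C.foo()              # no instance supplied
Traceback (most recent call last):
  ...
TypeError: foo() missing 1 required positional argument: 'self'
>>> C.foo(c)             # passing the instance explicitly works fine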
If you notice, the method is still referenced via the class object C, and the binding with the class instance is achieved by passing a context, in the form of the instance object, into this function.
This is pretty awesome since, if you choose to keep no context or no binding to the instance, all that is needed is a class that wraps the descriptor CPO and overrides its __get__() method to require no context.
This new class is what we call a decorator, and it is applied via @staticmethod:
class C(object):
    @staticmethod
    def foo():
        pass
The absence of context in the newly wrapped CPO foo doesn't throw an error and can be verified as follows:
>>> C.__dict__['foo'].__get__(None, C)
<function foo at 0x17d0c30>
The use case for a static method is more of a namespacing and code-maintainability one (taking it out of a class and making it available throughout the module, etc.).
It may be better to write static methods rather than instance methods whenever possible, unless of course you need to contextualise the methods (to access instance variables, class variables, etc.). One reason is to ease garbage collection by not keeping unwanted references to objects.
That is an error.
First of all, the first line should be like this (be careful with capitals):
class Test(object):
Whenever you call a method of a class, the instance gets passed to it as the first argument (hence the name self), and that is why method_two gives this error:
>>> a.method_two()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: method_two() takes no arguments (1 given)
The second one won't work because when you call it like that, Python internally tries to call it with the a_test instance as the first argument, but your method_two doesn't accept any arguments, so you'll get a runtime error.
If you want the equivalent of a static method you can use a class method.
There's much less need for class methods in Python than for static methods in languages like Java or C#. Most often the best solution is to use a plain function in the module, outside a class definition; those work more efficiently than class methods.
The call to method_two will throw an exception because it does not accept the self parameter that the Python runtime automatically passes to it.
If you want to create a static method in a Python class, decorate it with the staticmethod decorator.
class Test(object):
    @staticmethod
    def method_two():
        print "Called method_two"

Test.method_two()
Please read these docs from Guido, First-Class Everything, which clearly explain how unbound and bound methods are born.
Bound method = instance method
Unbound method = static method.
The definition of method_two is invalid. When you call method_two, you'll get TypeError: method_two() takes 0 positional arguments but 1 was given from the interpreter.
An instance method is a bound function when you call it like a_test.method_two(). It automatically accepts self, which points to an instance of Test, as its first parameter. Through the self parameter, an instance method can freely access attributes on the same object and modify them.
Unbound Methods
Unbound methods are methods that are not bound to any particular class instance yet.
Bound Methods
Bound methods are the ones which are bound to a specific instance of a class.
As documented here, self can refer to different things depending on whether the function is bound, unbound or static.
Take a look at the following example:
class MyClass:
    def some_method(self):
        return self  # For the sake of the example

>>> MyClass().some_method()
<__main__.MyClass object at 0x10e8e43a0>

# This can also be written as:
>>> obj = MyClass()
>>> obj.some_method()
<__main__.MyClass object at 0x10ea12bb0>
# Bound method call:
>>> obj.some_method(10)
TypeError: some_method() takes 1 positional argument but 2 were given
# WHY IT DIDN'T WORK?
# obj.some_method(10) bound call translated as
# MyClass.some_method(obj, 10) unbound method and it takes 2
# arguments now instead of 1
# ----- USING THE UNBOUND METHOD ------
>>> MyClass.some_method(10)
10
Since we did not use the class instance — obj — on the last call, we can kinda say it looks like a static method.
If so, what is the difference between the MyClass.some_method(10) call and a call to a static function decorated with a @staticmethod decorator?
By using the decorator, we explicitly make it clear that the method will be used without creating an instance for it first. Normally one would not expect class member methods to be used without the instance, and accessing them that way can cause errors depending on the structure of the method.
Also, by adding the #staticmethod decorator, we are making it possible to be reached through an object as well.
class MyClass:
    def some_method(self):
        return self

    @staticmethod
    def some_static_method(number):
        return number
>>> MyClass.some_static_method(10) # without an instance
10
>>> MyClass().some_static_method(10) # Calling through an instance
10
You can’t do the above example with the instance methods. You may survive the first one (as we did before) but the second one will be translated into an unbound call MyClass.some_method(obj, 10) which will raise a TypeError since the instance method takes one argument and you unintentionally tried to pass two.
Then, you might say, “if I can call static methods through both an instance and a class, MyClass.some_static_method and MyClass().some_static_method should be the same methods.” Yes!
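A quick check confirms it (a small sketch):

>>> MyClass.some_static_method is MyClass().some_static_method
True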