An object is supposed to be callable when its class defines __call__. A class is supposed to be an object too… yet there seems to be an exception, and that exception is what I'm failing to formally pin down, hence this question.
Let A be a simple class:
class A(object):
    def call(*args):
        return "In `call`"

    def __call__(*args):
        return "In `__call__`"
The first method is deliberately named call, to make the comparison with __call__ explicit.
Let's instantiate it, setting aside for now what the expression A() itself implies:
a = A() # Think of it as `a = magic` and forget about `A()`
Now, what do these evaluate to?
print(A.call())
print(a.call())
print(A())
print(a())
Results in:
>>> In `call`
>>> In `call`
>>> <__main__.A object at 0xNNNNNNNN>
>>> In `__call__`
The output (the third statement not running __call__) does not come as a surprise, but it does surprise me when I remember that everywhere it is said that “Python classes are objects”…
This more explicit form, however, does run __call__:
print(A.__call__())
print(a.__call__())
>>> In `__call__`
>>> In `__call__`
All of this is just to show how, in the end, A() may look strange.
There are exceptions to Python's rules, but the documentation on object.__call__ does not say much about it… no more than this:
3.3.5. Emulating callable objects
object.__call__(self[, args...])
Called when the instance is “called” as a function; […]
But how does Python tell that “it's called as a function”, and decide whether or not to honour the object.__call__ rule?
This could be a matter of type, but even type has object as its base class.
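For example, this is what I mean (outputs abbreviated):

class A(object):
    pass

print(isinstance(A, object))  # True: the class is itself an object
print(type(A))                # <class 'type'>
print(type.__mro__)           # (<class 'type'>, <class 'object'>)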
Where can I learn more (and formally) about it?
By the way, is there any difference here between Python 2 and Python 3?
----- %< ----- edit ----- >% -----
Conclusions and other experiments after one answer and one comment
Update #1
After @Veedrac's answer and @chepner's comment, I came up with this further test, which completes the points from both:
class M(type):
    def __call__(*args):
        return "In `M.__call__`"

class A(object, metaclass=M):
    def call(*args):
        return "In `call`"

    def __call__(*args):
        return "In `A.__call__`"
print(A())
The result is:
>>> In `M.__call__`
So it seems it is the metaclass that drives the “call” operation on a class. If I understand correctly, the same rule applies to class instances as well: what gets invoked is the __call__ of the callee's type.
Update #2
Another relevant test, which shows that it is not an attribute of the object that matters, but an attribute of the type of the object:
class A(object):
    def __call__(*args):
        return "In `A.__call__`"

def call2(*args):
    return "In `call2`"
a = A()
print(a())
As expected, it prints:
>>> In `A.__call__`
Now this:
a.__call__ = call2
print(a())
It prints:
>>> In `A.__call__`
The same as before the attribute was assigned: it does not print In `call2`, it's still In `A.__call__`. That's important to note, and it also explains why it was the metaclass's __call__ that was invoked earlier (keep in mind the metaclass is the type of the class object). The __call__ used when an object is called as a function comes not from the object itself, but from its type.
x(*args, **kwargs) is the same as type(x).__call__(x, *args, **kwargs).
So you have
>>> type(A).__call__(A)
<__main__.A object at 0x7f4d88245b50>
and it all makes sense.
chepner points out in the comments that type(A) == type. This is kind of weird, because type(A)(A) just gives type again! But remember that we're instead using type(A).__call__(A), which is not the same.
So this resolves to type.__call__(A). This is the constructor function for classes, which builds the data-structures and does all the construction magic.
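As a rough sketch (my own simplification, not the actual C source), type.__call__ behaves something like this:

def type_call(cls, *args, **kwargs):
    # roughly what happens when you call a class
    obj = cls.__new__(cls, *args, **kwargs)   # allocate the instance
    if isinstance(obj, cls):                  # __new__ may return something else entirely,
        cls.__init__(obj, *args, **kwargs)    # ...in which case __init__ is skipped
    return obj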
The same is true of most dunder (double underscore) methods, such as __eq__. This is partially an optimisation in those cases.
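For instance, here is the same bypass with __eq__ (a small illustration; the strings are arbitrary):

class C(object):
    def __eq__(self, other):
        return "type's __eq__"

c = C()
c.__eq__ = lambda other: "instance's __eq__"  # a plain instance attribute

print(c == 0)       # type's __eq__     -- the operator looks only at type(c)
print(c.__eq__(0))  # instance's __eq__ -- explicit attribute lookup finds the instance's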
Related
I created some test code, but I can't really understand why it works.
Shouldn't moo be defined before we can use it?
#!/usr/bin/python3

class Test():
    def __init__(self):
        self.printer = None

    def foo(self):
        self.printer = self.moo
        self.printer()

    def moo(self):
        print("Y u printing?")

test = Test()
test.foo()
Output:
$ python test.py
Y u printing?
I know that the rule is define earlier, not higher, but in this case it's neither of those.
There's really nothing to be confused about here.
We have a function that says "when you call foo with a self parameter, look up moo in self's namespace, assign that value to printer in self's namespace, look up printer in self's namespace, and call that value".1
Unless/until you call that function, it doesn't matter whether or not anyone anywhere has an attribute named moo.
When you do call that method, whatever you pass as the self had better have a moo attribute or you're going to get an AttributeError. But this is no different from looking up an attribute on any object. If you write def spam(n): return n.bit_length() as a global function, when you call that function, whatever you pass as the n had better have a bit_length attribute or you're going to get an AttributeError.
So, we're calling it as test.foo(), so we're passing test as self. If you know how attribute lookup works (and there are already plenty of questions and answers on SO about that), you can trace this through. Slightly oversimplified:
Does test.__dict__ have a 'moo'? No.
Does type(test).__dict__ have a 'moo'? Yes. So we're done.
Again, this is the same way we check if 3 has a bit_length() method; there's no extra magic here.
That's really all there is to it.
In particular, notice that test.__dict__ does not have a 'moo'. Methods don't get created at construction time (__new__) any more than they get created at initialization time (__init__). The instance doesn't have any methods in it, because it doesn't have to; they can be looked up on the type.2
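A minimal check of that claim, reusing the Test class from the question:

test = Test()
print('moo' in test.__dict__)        # False: the instance holds only data (here, 'printer')
print('moo' in type(test).__dict__)  # True: the method lives on the class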
Sure, we could get into descriptors, and method resolution order, and object.__getattribute__, and how class and def statements are compiled and executed, and special method lookup to see if there's a custom __getattribute__, and so on, but you don't need any of that to understand this question.
1. If you're confused by this, it's probably because you're thinking in terms of semi-OO languages like C++ and its descendants, where a class has to specify all of its instances' attributes and methods, so the compiler can look at this->moo(), work out that this has a static type of Foo, work out that moo is the third method defined on Foo, and compile it into something like a call through this->vptr[2]. If that's what you're expecting, forget all of it. In Python, methods are just attributes, and attributes are just looked up, by name, on demand.
2. If you're going to ask "then why is a bound method not the same thing as a function?", the answer is descriptors. Briefly: when an attribute is found on the type, Python calls the value's __get__ method, passing it the instance, and function objects' __get__ methods return method objects. So, if you want to refer specifically to bound method objects, then they get created every time a method is looked up. In particular, the bound method object does not exist yet when we call foo; it gets created by looking up self.moo inside foo.
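Footnote 2 can be seen directly (again with the Test class from the question; a fresh bound method is created on every lookup, all wrapping the one function stored on the class):

t = Test()
print(t.moo is t.moo)                          # False: two distinct bound-method objects
print(t.moo.__func__ is Test.__dict__['moo'])  # True: both wrap the same function
print(Test.__dict__['moo'].__get__(t, Test))   # the descriptor call that `t.moo` performs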
While all that @scharette says is likely true (I don't know enough about Python internals to agree with confidence :) ), I'd like to propose an alternative explanation as to why one can instantiate Test and call foo():
The method's body is not executed until you actually call it. It does not matter that foo() contains references to undefined attributes; it will parse fine. As long as you create moo before you call foo, you're OK.
Try entering a truncated Test class in your interpreter:
class Test():
    def __init__(self):
        self.printer = None

    def foo(self):
        self.printer = self.moo
        self.printer()
No moo, so we get this:
>>> test = Test()
>>> test.foo()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 5, in foo
AttributeError: 'Test' object has no attribute 'moo'
Let's add moo to the class now:
>>> def moo(self):
...     print("Y u printing?")
...
>>> Test.moo = moo
>>> test1 = Test()
>>> test1.foo()
Y u printing?
>>>
Alternatively, you can add moo directly to the instance:
>>> def moo():
...     print("Y u printing?")
...
>>> test.moo = moo
>>> test.foo()
Y u printing?
The only difference is that the instance's moo does not take a self (see here for explanation).
class Foo():
    def __init__(self):
        pass

    def makeInstanceAttribute(oops):
        oops.x = 10

f = Foo()
f.makeInstanceAttribute()
print(f.x)
It prints 10. How does this work? Why does oops have the same effect as self would?
Quoting the Python documentation:
Often, the first argument of a method is called self. This is nothing more than a convention: the name self has absolutely no special meaning to Python. Note, however, that by not following the convention your code may be less readable to other Python programmers, and it is also conceivable that a class browser program might be written that relies upon such a convention.
self is just a convention. It can be named anything you want. What is actually passed to the function as the first argument is the object instance. Consider this code:
class A(object):
    def self_test(self):
        print self

    def foo(oops):
        print oops
>>> a = A()
>>> a.self_test()
<__main__.A object at 0x03CB18D0>
>>> a.foo()
<__main__.A object at 0x03CB18D0>
>>>
This question is probably related to yours, you might find it helpful.
Class definition:
class A(object):
    def foo(self):
        print "A"

class B(object):
    def foo(self):
        print "B"

class C(A, B):
    def foo(self):
        print "C"
Output:
>>> super(C)
<super: <class 'C'>, NULL>
>>> super(C).foo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'super' object has no attribute 'foo'
What is the use of super(type) if we can't access attributes of a class?
super(type) is an "unbound" super object. The docs on super discuss that, but don't really elaborate what an "unbound" super object is or does. It is simply a fact of the language that you cannot use them in the manner you are attempting to use them.
This is perhaps what you want:
>>> super(C, C).foo is B.foo
True
That said, what good is an unbound super object? I had to look this up myself, and found a decent answer here. Note, however, that the article's conclusion is that unbound super is a language wart, has no practical use, and should be removed from the language (and having read the article, I agree). The article's explanation of unbound super starts with:
Unbound super objects must be turned into bound objects in order to make them dispatch properly. That can be done via the descriptor protocol. For instance, I can convert super(C1) into a super object bound to c1 in this way:
>>> c1 = C1()
>>> boundsuper = super(C1).__get__(c1, C1) # this is the same as super(C1, c1)
So, that doesn't seem useful, but the article goes on:
Having established that the unbound syntax does not return unbound methods one might ask what its purpose is. The answer is that super(C) is intended to be used as an attribute in other classes. Then the descriptor magic will automatically convert the unbound syntax in the bound syntax. For instance:
>>> class B(object):
...     a = 1
>>> class C(B):
...     pass
>>> class D(C):
...     sup = super(C)
>>> d = D()
>>> d.sup.a
1
This works since d.sup.a calls super(C).__get__(d,D).a which is turned into super(C, d).a and retrieves B.a.
There is a single use case for the single argument syntax of super that I am aware of, but I think it gives more troubles than advantages. The use case is the implementation of autosuper made by Guido on his essay about new-style classes.
The idea there is to use the unbound super objects as private attributes. For instance, in our example, we could define the private attribute __sup in the class C as the unbound super object super(C):
>>> C._C__sup = super(C)
But do note that the article goes on to describe the problems with this (it doesn't quite work correctly, I think mostly because the MRO depends on the class of the instance you are dealing with, so a given class X's superclass may differ depending on which instance of X we are given).
To accomplish what you want, you need to call it like this:
>>> myC = C()
>>> super(C,myC).foo()
A
Note the NULL reference in the repr shown earlier: super basically needs both a class and an instance of a related object in order to function:
>>> super(C, C()).foo
<bound method C.foo of <__main__.C object at 0x225dc50>>
See: Understanding Python super() with __init__() methods for more details.
Python 2.7 docs for weakref module say this:
Not all objects can be weakly referenced; those objects which can
include class instances, functions written in Python (but not in C),
methods (both bound and unbound), ...
And Python 3.3 docs for weakref module say this:
Not all objects can be weakly referenced; those objects which can
include class instances, functions written in Python (but not in C),
instance methods, ...
To me, these indicate that weakrefs to bound methods (in all versions Python 2.7 - 3.3) should be good, and that weakrefs to unbound methods should be good in Python 2.7.
Yet in Python 2.7, creating a weakref to a method (bound or unbound) results in a dead weakref:
>>> def isDead(wr): print 'dead!'
...
>>> class Foo:
...     def bar(self): pass
...
>>> wr=weakref.ref(Foo.bar, isDead)
dead!
>>> wr() is None
True
>>> foo=Foo()
>>> wr=weakref.ref(foo.bar, isDead)
dead!
>>> wr() is None
True
Not what I would have expected based on the docs.
Similarly, in Python 3.3, a weakref to a bound method dies on creation:
>>> wr=weakref.ref(Foo.bar, isDead)
>>> wr() is None
False
>>> foo=Foo()
>>> wr=weakref.ref(foo.bar, isDead)
dead!
>>> wr() is None
True
Again not what I would have expected based on the docs.
Since this wording has been around since 2.7, it's surely not an oversight. Can anyone explain how the statements and the observed behavior are in fact not in contradiction?
Edit/Clarification: In other words, the statement for 3.3 says “instance methods can be weakly referenced”; doesn't this mean it is reasonable to expect that weakref.ref(an instance method)() is not None? And if it is None, then shouldn't “instance methods” not be listed among the types of objects that can be weakly referenced?
Foo.bar produces a new unbound method object every time you access it, due to some gory details about descriptors and how methods happen to be implemented in Python.
The class doesn't own unbound methods; it owns functions. (Check out Foo.__dict__['bar'].) Those functions just happen to have a __get__ which returns an unbound-method object. Since nothing else holds a reference, it vanishes as soon as you're done creating the weakref. (In Python 3, the rather unnecessary extra layer goes away, and an "unbound method" is just the underlying function.)
Bound methods work pretty much the same way: the function's __get__ returns a bound-method object, which is really just partial(function, self). You get a new one every time, so you see the same phenomenon.
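To illustrate that description (a sketch, not how CPython actually spells it), a bound method behaves much like a partial application of the plain function stored on the class:

from functools import partial

class Foo(object):
    def bar(self):
        return 'hi'

foo = Foo()
almost_bound = partial(Foo.__dict__['bar'], foo)  # roughly what foo.bar gives you
print(almost_bound())  # hi
print(foo.bar())       # hi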
You can store a method object and keep a reference to that, of course:
>>> def is_dead(wr): print "blech"
...
>>> class Foo(object):
...     def bar(self): pass
...
>>> method = Foo.bar
>>> wr = weakref.ref(method, is_dead)
>>> 1 + 1
2
>>> method = None
blech
This all seems of dubious use, though :)
Note that if Python didn't spit out a new method instance on every attribute access, that'd mean that classes refer to their methods and methods refer to their classes. Having such cycles for every single method on every single instance in the entire program would make garbage collection way more expensive—and before 2.1, Python didn't even have cycle collection, so they would've stuck around forever.
@Eevee's answer is correct, but there is a subtlety that is important.
The Python docs state that instance methods (py3k) and un/bound methods (py2.4+) can be weakly referenced. You'd expect (naively, as I did) that weakref.ref(foo.bar)() would therefore be non-None, yet it is None, making the weak ref “dead on arrival” (DOA). This led to my question: if the weakref to an instance method is DOA, why do the docs say you can weakly reference a method?
So, as @Eevee showed, you can create a non-dead weak reference to an instance method by creating a strong reference to the method object which you give to weakref:
m = foo.bar # creates a *new* instance method "Foo.bar" and strong refs it
wr = weakref.ref(m)
assert wr() is not None # success
The subtlety (to me, anyways) is that a new instance method object is created every time you use Foo.bar, so even after the above code is run, the following will fail:
wr = weakref.ref(foo.bar)
assert wr() is not None # fails
because foo.bar is a new instance-method object (the Foo instance foo's bar method), different from m, and there is no strong ref to this new instance, so it is immediately gc'd, even though you created a strong reference to one earlier (it is not the same strong ref). To be clear:
>>> d1 = foo.bla # assume bla is a data member
>>> d2 = foo.bla # assume bla is a data member
>>> d1 is d2
True # which is what you expect
>>> m1 = foo.bar # assume bar is an instance method
>>> m2 = foo.bar
>>> m1 is m2
False # !!! counter-intuitive
This takes many people by surprise since no one expects access to an instance member to be creating a new instance of anything. For example, if foo.bla is a data member of foo, then using foo.bla in your code does not create a new instance of the object referenced by foo.bla. Now if bla is a "function", foo.bla does create a new instance of type "instance method" representing the bound function.
Why the weakref docs (since python 2.4!) don't point that out is very strange, but that's a separate issue.
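Worth noting: since Python 3.4 the standard library provides weakref.WeakMethod for exactly this situation; it keeps a weak reference to the instance and re-creates the bound method on demand:

import weakref

class Foo(object):
    def bar(self):
        return 42

foo = Foo()
wm = weakref.WeakMethod(foo.bar)
print(wm()())  # 42: wm() re-creates the bound method while foo is alive
del foo        # on CPython, refcounting collects foo immediately
print(wm())    # None: the instance is gone, so the method is too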
While I see that there's an accepted answer as to why this should be so, for the simple use case where one would like an object that acts as a weakref to a bound method, I believe one might be able to sneak by with an object like the one below. It's kind of a runt compared to some of the 'codier' things out there, but it works.
from weakref import proxy

class WeakMethod(object):
    """A callable object. Takes one argument to init: 'object.method'.
    Once created, call this object -- MyWeakMethod() --
    and pass args/kwargs as you normally would.
    """
    def __init__(self, object_dot_method):
        self.target = proxy(object_dot_method.__self__)
        self.method = proxy(object_dot_method.__func__)
        # Older versions of Python can use 'im_self' and 'im_func'
        # in place of '__self__' and '__func__' respectively.

    def __call__(self, *args, **kwargs):
        """Call the method with args and kwargs as needed."""
        return self.method(self.target, *args, **kwargs)
As an example of its ease of use:
class A(object):
    def __init__(self, name):
        self.name = name

    def foo(self):
        return "My name is {}".format(self.name)
>>> Stick = A("Stick")
>>> WeakFoo = WeakMethod(Stick.foo)
>>> WeakFoo()
'My name is Stick'
>>> Stick.name = "Dave"
>>> WeakFoo()
'My name is Dave'
Note that evil trickery will cause this to blow up, so depending on how you'd prefer it to work this may not be the best solution.
>>> A.foo = lambda self: "My eyes, aww my eyes! {}".format(self.name)
>>> Stick.foo()
'My eyes, aww my eyes! Dave'
>>> WeakFoo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 6, in __call__
ReferenceError: weakly-referenced object no longer exists
>>>
If you were going to be replacing methods on-the-fly you might need to use a getattr(weakref.proxy(object), 'name_of_attribute_as_string') approach instead. getattr is a fairly fast look-up so that isn't the literal worst thing in the world, but depending on what you're doing, YMMV.
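A minimal sketch of that getattr-based variant (the class name and layout here are my own, not from the original answer):

from weakref import proxy

class LateBindingWeakMethod(object):
    """Looks the method up by name on each call, so replacing the
    method on the class (or instance) on-the-fly is picked up."""
    def __init__(self, obj, method_name):
        self.target = proxy(obj)        # weak: does not keep obj alive
        self.method_name = method_name

    def __call__(self, *args, **kwargs):
        return getattr(self.target, self.method_name)(*args, **kwargs)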
Given a class A I can simply add an instancemethod a via
def a(self):
    pass

A.a = a
However, if I try to add another class B's instancemethod b, i.e. A.b = B.b, the attempt at calling A().b() yields a
TypeError: unbound method b() must be called with B instance as first argument (got nothing instead)
(while B().b() does fine). Indeed there is a difference between
A.a -> <unbound method A.a>
A.b -> <unbound method B.b> # should be A.b, not B.b
So,
How to fix this?
Why is it this way? It doesn't seem intuitive, but usually Guido has some good reasons...
Curiously enough, this no longer fails in Python3...
Let's:
class A(object): pass

class B(object):
    def b(self):
        print 'self class: ' + self.__class__.__name__
When you are doing:
A.b = B.b
You are not attaching a function to A, but an unbound method. As a consequence, Python just adds it as a standard attribute and does not rewrap it as an unbound method of A. The solution is simple: attach the underlying function:
A.b = B.b.__func__
print A.b
# print: <unbound method A.b>
a = A()
a.b()
# print: self class: A
I don't know all the differences between unbound methods and functions (only that the former contains the latter), nor how all of that works internally, so I cannot explain the reason for it. My understanding is that a method object (bound or not) requires more information and functionality than a function, but it needs one to execute.
I would agree that automating this (changing the class of an unbound method) could be a good choice, but I can find reasons not to as well. It is thus surprising that Python 3 differs from Python 2; I'd like to find out the reason for this choice.
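For the record, a quick Python 3 check of the behaviour mentioned above (B.b is just a plain function there, so the assignment works unmodified):

class B:
    def b(self):
        print('self class: ' + self.__class__.__name__)

class A:
    pass

print(B.b)  # <function B.b at 0x...>: a plain function, not an unbound method
A.b = B.b
A().b()     # prints: self class: A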
When you take the reference to a method on a class instance, the method is bound to that class instance.
B().b is equivalent to: lambda *args, **kwargs: b(<B instance>, *args, **kwargs)
I suspect you are getting a similarly (but not identically) wrapped reference when evaluating B.b. However, this is not the behavior I would have expected.
Interestingly:
A.a = lambda s: B.b(s)
A().a()
yields:
TypeError: unbound method b() must be called with B instance as first
argument (got A instance instead)
This suggests that B.b is evaluating to a wrapper for the actual method, and the wrapper is checking that 'self' has the expected type. I don't know, but this is probably about interpreter efficiency.
It's an interesting question though. I hope someone can chime in with a more definitive answer.
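One quick way to confirm that suspicion on Python 2 (the wrapper records the class it type-checks self against):

>>> B.b
<unbound method B.b>
>>> B.b.im_class   # the class the wrapper checks `self` against
<class '__main__.B'>
>>> B.b.__func__   # the raw function underneath, with no such check
<function b at 0x...>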