Using `super()` within `__init_subclass__` doesn't find parent's classmethod [duplicate]

I'm trying to access a classmethod of the parent class from within __init_subclass__, but that doesn't seem to work.
Suppose the following example code:
class Foo:
    def __init_subclass__(cls):
        print('init', cls, cls.__mro__)
        super(cls).foo()

    @classmethod
    def foo(cls):
        print('foo')

class Bar(Foo):
    pass
which produces the following exception:
AttributeError: 'super' object has no attribute 'foo'
The cls.__mro__ however shows that Foo is a part of it: (<class '__main__.Bar'>, <class '__main__.Foo'>, <class 'object'>).
So I don't understand why super(cls).foo() doesn't dispatch to Foo.foo. Can someone explain this?

A normal super object (what you normally get from calling super(MyType, self) or super() or super(MyType, myobj)) keeps track of both the type and the object it was created with. Whenever you look up an attribute on the super, it skips over MyType in the method resolution order, but if it finds a method it binds it to that self object.
An unbound super has no self object. So, super(cls) skips over cls in the MRO to find the method foo, and then binds it to… oops, it has nothing to call it on.
So, what things can you call a classmethod on? The class itself, or a subclass of it, or an instance of that class or subclass. So, any of those will work as the second argument to super here, the most obvious one being:
super(cls, cls)
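Applied to the question's example, the fix looks like this (a minimal sketch; note that a plain cls.foo() would also dispatch to Foo.foo here, unless a subclass overrides foo):
class Foo:
    def __init_subclass__(cls):
        print('init', cls, cls.__mro__)
        # super(cls, cls) skips cls in the MRO and binds the classmethod
        # it finds to cls, so Foo.foo receives the subclass.
        super(cls, cls).foo()

    @classmethod
    def foo(cls):
        print('foo')

class Bar(Foo):   # prints 'init <class Bar> (...)' followed by 'foo'
    pass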
This is somewhat similar to the difference between staticmethods (bound staticmethods are actually bound to nothing) and classmethods (bound classmethods are bound to the class instead of an instance), but it's not quite that simple.
If you want to know why an unbound super doesn't work, you have to understand what an unbound super really is. Unfortunately, the only explanation in the docs is:
If the second argument is omitted, the super object returned is unbound.
What does this mean? Well, you can try to work it out from first principles as a parallel to what it means for a method to be unbound (except, of course, that unbound methods aren't a thing in modern Python), or you can read the C source, or the original introduction to 2.2's class-type unification (including a pure-Python super clone).
A super object has a __self__ attribute, just like a method object. And super(cls) is missing its __self__, just like str.split is.[1]
You can't use an unbound super explicitly the way you can with an unbound method (e.g., str.split('123', '2') does the same as '123'.split('2'), but super(cls).foo(cls) doesn't work the same as super(cls, cls).foo()). But you can use them implicitly, the same way you do with unbound methods all the time without normally thinking about it.
If you don't know how methods work, the tl;dr is: when you evaluate myobj.mymeth, Python looks up mymeth, doesn't find it on myobj itself, but does find it on the type, so it checks whether it's a non-data descriptor, and, if so, calls its __get__ method to bind it to myobj.
So, unbound methods[2] are non-data descriptors whose __get__ method returns a bound method. Unbound classmethods are similar, but their __get__ ignores the object and returns a method bound to the class. And so on.
And unbound supers are non-data descriptors whose __get__ method returns a bound super.
Example (credit to wim for coming up with the closest thing to a use for unbound super that I've seen):
class A:
    def f(self): print('A.f')

class B(A):
    def f(self): print('B.f')

b = B()
bs = super(B)
B.bs = bs
b.bs.f()
We created an unbound super bs, stuck it on the type B, and then b.bs is a normal bound super, so b.bs.f is A.f, just like super().f would have been inside a B method.
Why would you want to do that? I'm not sure. I've written all kinds of ridiculously dynamic and reflective code in Python (e.g., for transparent proxies to other interpreters), and I can't remember ever needing an unbound super. But if you ever need it, it's there.
[1] I'm cheating a bit here. First, unbound methods aren't a thing anymore in Python 3—but functions work the same way, so Python uses them where it used to use unbound methods. Second, str.split, being a C builtin, wasn't properly an unbound method even in 2.x—but it acts like one anyway, at least as far as we're concerned here.
[2] Actually plain old functions.


Explicit call to __call__ works and uses __init__

I'm learning overloading in Python 3.x and, to better understand the topic, I wrote the following code that works in 3.x but not in 2.x. I expected the code below to fail since I haven't defined __call__ for class Test. But to my surprise, it works and prints "constructor called".
class Test:
    def __init__(self):
        print("constructor called")

# Test.__getitem__()  # error as expected
Test.__call__()  # this works in 3.x (but not in 2.x) and prints "constructor called"! Why doesn't this give an error in 3.x?
So my question is: how and why exactly does this code work in 3.x but not in 2.x? I want to know the mechanics behind what is going on.
More importantly, why is __init__ being used here when I am calling __call__?
In 3.x:
About attribute lookup, type and object
Every time an attribute is looked up on an object, Python follows a process like this:
1. Is it directly a part of the actual data in the object? If so, use that and stop.
2. Is it directly a part of the object's class? If so, hold onto that for step 4.
3. Otherwise, check the object's class for __getattr__ and __getattribute__ overrides, look through base classes in the MRO, etc. (This is a massive simplification, of course.)
4. If something was found in step 2 or 3, check if it has a __get__. If it does, look that up (yes, that means starting over at step 1 for the attribute named __get__ on that object), call it, and use its return value. Otherwise, use what was found directly.
Functions have a __get__ automatically; it is used to implement method binding. Classes are objects; that's why it's possible to look up attributes in them. That is: the purpose of the class Test: block is to define a data type; the code creates an object named Test which represents the data type that was defined.
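For instance, using the Test class from the question (a quick illustration; the hex addresses are of course arbitrary):
>>> Test.__dict__['__init__']                    # a plain function in the class dict
<function Test.__init__ at 0x...>
>>> t = Test.__new__(Test)                       # an instance, skipping __init__
>>> Test.__dict__['__init__'].__get__(t, Test)   # step 4: the descriptor's __get__ binds it
<bound method Test.__init__ of <__main__.Test object at 0x...>>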
But since the Test class is an object, it must be an instance of some class. That class is called type, and has a built-in implementation.
>>> type(Test)
<class 'type'>
Notice that type(Test) is not a function call. Rather, the name type is pre-defined to refer to a class, which every other class created in user code is (by default) an instance of.
In other words, type is the default metaclass: the class of classes.
>>> type
<class 'type'>
One may ask, what class does type belong to? The answer is surprisingly simple - itself:
>>> type(type) is type
True
Since the above examples call type, we conclude that type is callable. To be callable, it must have a __call__ attribute, and it does:
>>> type.__call__
<slot wrapper '__call__' of 'type' objects>
When type is called with a single argument, it looks up the argument's class (roughly equivalent to accessing the __class__ attribute of the argument). When called with three arguments, it creates a new instance of type, i.e., a new class.
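For example, the three-argument form builds a class directly (the class name 'Spam' here is just an illustration):
>>> Spam = type('Spam', (object,), {'x': 1})   # roughly the same as a `class Spam(object): x = 1` block
>>> Spam.x
1
>>> type(Spam)
<class 'type'>
>>> Spam().x
1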
How does type work?
Because this is digging right at the core of the language (allocating memory for the object), it's not quite possible to implement this in pure Python, at least for the reference C implementation (and I have no idea what sort of magic is going on in PyPy here). But we can approximately model the type class like so:
def _validate_type(obj, required_type, context):
    if not isinstance(obj, required_type):
        good_name = required_type.__name__
        bad_name = type(obj).__name__
        raise TypeError(f'{context} must be {good_name}, not {bad_name}')

class type:
    def __new__(cls, name_or_obj, *args):
        # __new__ implicitly gets passed an instance of the class, but
        # `type` is its own class, so it will be `type` itself.
        if len(args) == 0:  # 1-argument form: report the class of an existing object.
            return name_or_obj.__class__
        # otherwise, 3-argument form: create a new class.
        try:
            bases, attrs = args
        except ValueError:
            raise TypeError('type() takes 1 or 3 arguments')
        _validate_type(name_or_obj, str, 'type.__new__() argument 1')
        _validate_type(bases, tuple, 'type.__new__() argument 2')
        _validate_type(attrs, dict, 'type.__new__() argument 3')
        # This line would not work if we were actually implementing
        # a replacement for `type`, as it would route to `object.__new__(type)`,
        # which is explicitly disallowed. But let's pretend it does...
        result = super().__new__(cls)
        # Now, fill in attributes from the parameters.
        result.__name__ = name_or_obj
        # Assigning to `__bases__` triggers a lot of other internal checks!
        result.__bases__ = bases
        for name, value in attrs.items():
            setattr(result, name, value)
        return result

    del __new__.__get__  # `__new__`s of builtins don't implement this.

    # `__call__`, however, does have a `__get__`.
    def __call__(self, *args):
        return self.__new__(self, *args)
What happens (conceptually) when we call the class (Test())?
1. Test() uses function-call syntax, but it's not a function. To figure out what should happen, we translate the call into Test.__class__.__call__(Test). (We use __class__ directly here, because translating the function call using type - asking type to categorize itself - would end up in endless recursion.)
2. Test.__class__ is type, so this becomes type.__call__(Test).
3. type contains a __call__ directly (type is its own class, remember?), so it's used directly - we don't go through the __get__ descriptor. We call the function, with Test as self, and no other arguments. (We have a function now, so we don't need to translate the function-call syntax again. We could - given a function func, func.__class__.__call__.__get__(func) gives us an instance of an unnamed builtin "method wrapper" type, which does the same thing as func when called. Repeating the loop on the method wrapper creates a separate method wrapper that still does the same thing.)
4. This attempts the call Test.__new__(Test) (since self was bound to Test). Test.__new__ isn't explicitly defined in Test, but since Test is a class, we don't look in Test's class (type), but instead in Test's base (object).
5. object.__new__(Test) exists, and does magical built-in stuff to allocate memory for a new instance of the Test class, make it possible to assign attributes to that instance (even though Test is a subtype of object, which disallows that), and set its __class__ to Test.
Similarly, when we call type, the same logical chain turns type(Test) into type.__class__.__call__(type, Test) into type.__call__(type, Test), which forwards to type.__new__(type, Test). This time, there is a __new__ attribute directly in type, so this doesn't fall back to looking in object. Instead, with name_or_obj being set to Test, we simply return Test.__class__, i.e., type. And with separate name, bases, attrs arguments, type.__new__ instead creates an instance of type.
Finally: what happens when we call Test.__call__() explicitly?
If there's a __call__ defined in the class, it gets used, since it's found directly. This will fail, however, because there aren't enough arguments: the descriptor protocol isn't used since the attribute was found directly, so self isn't bound, and so that argument is missing.
If there isn't a __call__ method defined, then we look in Test's class, i.e., type. There's a __call__ there, so the rest proceeds like steps 3-5 in the previous section.
In Python 3.x, every class is implicitly a child of the builtin class object. And at least in the CPython implementation, the object class has a __call__ method which is defined in its metaclass, type.
That means that Test.__call__() is exactly the same as Test() and will return a new Test object, calling your custom __init__ method.
In Python 2.x, classes are old-style classes by default and do not inherit from object. Because of that, __call__ is not defined. You can get the same behaviour in Python 2.x by using new-style classes, i.e. by inheriting explicitly from object:
# Python 2 new style class
class Test(object):
    ...

Are there more than three types of methods in Python?

I understand there are at least 3 kinds of methods in Python having different first arguments:
instance method - instance, i.e. self
class method - class, i.e. cls
static method - nothing
These classic methods are implemented in the Test class below, along with an unusual method:
class Test():
    def __init__(self):
        pass

    def instance_mthd(self):
        print("Instance method.")

    @classmethod
    def class_mthd(cls):
        print("Class method.")

    @staticmethod
    def static_mthd():
        print("Static method.")

    def unknown_mthd():
        # No decoration --> instance method, but
        # no self (or cls) --> static method, so ... (?)
        print("Unknown method.")
In Python 3, the unknown_mthd can be called safely, yet it raises an error in Python 2:
>>> t = Test()
>>> # Python 3
>>> t.instance_mthd()
>>> Test.class_mthd()
>>> t.static_mthd()
>>> Test.unknown_mthd()
Instance method.
Class method.
Static method.
Unknown method.
>>> # Python 2
>>> Test.unknown_mthd()
TypeError: unbound method unknown_mthd() must be called with Test instance as first argument (got nothing instead)
This error suggests such a method was not intended in Python 2. Perhaps its allowance now is due to the elimination of unbound methods in Python 3 (REF 001). Moreover, unknown_mthd does not accept args, and it can be called from the class like a staticmethod, Test.unknown_mthd(). However, it is not an explicit staticmethod (no decorator).
Questions
Was making a method this way (without args and without explicitly decorating it as a staticmethod) intentional in Python 3's design?
Among the classic method types, what type of method is unknown_mthd?
Why can unknown_mthd be called by the class without passing an argument?
Some preliminary inspection yields inconclusive results:
>>> # Types
>>> print("i", type(t.instance_mthd))
>>> print("c", type(Test.class_mthd))
>>> print("s", type(t.static_mthd))
>>> print("u", type(Test.unknown_mthd))
>>> print()
>>> # __dict__ Types, REF 002
>>> print("i", type(t.__class__.__dict__["instance_mthd"]))
>>> print("c", type(t.__class__.__dict__["class_mthd"]))
>>> print("s", type(t.__class__.__dict__["static_mthd"]))
>>> print("u", type(t.__class__.__dict__["unknown_mthd"]))
>>> print()
i <class 'method'>
c <class 'method'>
s <class 'function'>
u <class 'function'>
i <class 'function'>
c <class 'classmethod'>
s <class 'staticmethod'>
u <class 'function'>
The first set of type inspections suggests unknown_mthd is something similar to a staticmethod. The second suggests it resembles an instance method. I'm not sure what this method is or why it should be used over the classic ones. I would appreciate some advice on how to inspect and understand it better. Thanks.
REF 001: What's New in Python 3: “unbound methods” has been removed
REF 002: How to distinguish an instance method, a class method, a static method or a function in Python 3?
REF 003: What's the point of #staticmethod in Python?
Some background: In Python 2, "regular" instance methods could give rise to two kinds of method objects, depending on whether you accessed them via an instance or the class. If you did inst.meth (where inst is an instance of the class), you got a bound method object, which keeps track of which instance it is attached to, and passes it as self. If you did Class.meth (where Class is the class), you got an unbound method object, which had no fixed value of self, but still did a check to make sure a self of the appropriate class was passed when you called it.
In Python 3, unbound methods were removed. Doing Class.meth now just gives you the "plain" function object, with no argument checking at all.
Was making a method this way intentional in Python 3's design?
If you mean, was removal of unbound methods intentional, the answer is yes. You can see discussion from Guido on the mailing list. Basically it was decided that unbound methods add complexity for little gain.
Among the classic method types, what type of method is unknown_mthd?
It is an instance method, but a broken one. When you access it, a bound method object is created, but since it accepts no arguments, it's unable to accept the self argument and can't be successfully called.
Why can unknown_mthd be called by the class without passing an argument?
In Python 3, unbound methods were removed, so Test.unknown_mthd is just a plain function. No wrapping takes place to handle the self argument, so you can call it as a plain function that accepts no arguments. In Python 2, Test.unknown_mthd is an unbound method object, which has a check that enforces passing a self argument of the appropriate class; since, again, the method accepts no arguments, this check fails.
@BrenBarn did a great job answering your question. This answer, however, adds a plethora of details:
First of all, this change in bound and unbound methods is version-specific, and it doesn't relate to new-style or classic classes:
2.X classic classes by default
>>> class A:
... def meth(self): pass
...
>>> A.meth
<unbound method A.meth>
>>> class A(object):
... def meth(self): pass
...
>>> A.meth
<unbound method A.meth>
3.X new-style classes by default
>>> class A:
... def meth(self): pass
...
>>> A.meth
<function A.meth at 0x7efd07ea0a60>
You've already mentioned this in your question, it doesn't hurt to mention it twice as a reminder.
>>> # Python 2
>>> Test.unknown_mthd()
TypeError: unbound method unknown_mthd() must be called with Test instance as first argument (got nothing instead)
Moreover, unknown_mthd does not accept args, and it can be called from the class like a staticmethod, Test.unknown_mthd(). However, it is not an explicit staticmethod (no decorator)
unknown_mthd doesn't accept args simply because you defined the function without any parameters. Be careful and cautious: static methods, as well as your unknown_mthd method, will not be magically bound to a class when you reference them through the class name (e.g., Test.unknown_mthd). Under Python 3.X, Test.unknown_mthd returns a simple function object, not a method bound to the class.
1 - Was making a method this way (without args and without explicitly decorating it as a staticmethod) intentional in Python 3's design?
I cannot speak for the CPython developers, nor do I claim to represent them, but from my experience as a Python programmer it seems like they wanted to get rid of a bad restriction. Python is extremely dynamic, not a language of restrictions; why would you test the type of objects passed to methods and hence restrict the method to specific instances of classes? Type testing eliminates polymorphism. It seems reasonable to simply return a plain function when a method is fetched through the class, and that function behaves like a static method. You can think of unknown_mthd as a static method under 3.X: as long as you're careful not to fetch it through an instance of Test, you're good to go.
2- Among the classic method types, what type of method is unknown_mthd?
Under 3.X:
>>> from types import *
>>> class Test:
... def unknown_mthd(): pass
...
>>> type(Test.unknown_mthd) is FunctionType
True
It's simply a function in 3.X as you could see. Continuing the previous session under 2.X:
>>> type(Test.__dict__['unknown_mthd']) is FunctionType
True
>>> type(Test.unknown_mthd) is MethodType
True
unknown_mthd is a simple function that lives inside Test.__dict__, the namespace dictionary of Test. So when does it become an instance of MethodType? It becomes an instance of MethodType when you fetch it as an attribute, either from the class itself, which returns an unbound method, or from an instance, which returns a bound method. In 3.X, Test.unknown_mthd is a simple function (an instance of FunctionType), and Test().unknown_mthd is an instance of MethodType that retains the original instance of class Test and adds it as the first argument implicitly on function calls.
3- Why can unknown_mthd be called by the class without passing an argument?
Again, because Test.unknown_mthd is just a simple function under 3.X. Whereas in 2.X, unknown_mthd is not a simple function and must be passed an instance of Test when called.
Are there more than three types of methods in Python?
Yes. There are the three built-in kinds that you mention (instance method, class method, static method), four if you count @property, and anyone can define new method types.
Once you understand the mechanism for doing this, it's easy to explain why unknown_mthd is callable from the class in Python 3.
A new kind of method
Suppose we wanted to create a new type of method, call it optionalselfmethod so that we could do something like this:
class Test(object):
    @optionalselfmethod
    def optionalself_mthd(self, *args):
        print('Optional-Self Method:', self, *args)
The usage is like this:
In [3]: Test.optionalself_mthd(1, 2)
Optional-Self Method: None 1 2
In [4]: x = Test()
In [5]: x.optionalself_mthd(1, 2)
Optional-Self Method: <test.Test object at 0x7fe80049d748> 1 2
In [6]: Test.instance_mthd(1, 2)
Instance method: 1 2
optionalselfmethod works like a normal instance method when called on an instance, but when called on the class, it always receives None for the first parameter. If it were a normal instance method, you would always have to pass an explicit value for the self parameter in order for it to work.
So how does this work? How can you create a new method type like this?
The Descriptor Protocol
When Python looks up a field of an instance, i.e. when you do x.whatever, it checks in several places. It checks the instance's __dict__ of course, but it also checks the __dict__ of the object's class, and base classes thereof. In the instance dict, Python is just looking for the value, so if x.__dict__['whatever'] exists, that's the value. However, in the class dict, Python is looking for an object which implements the Descriptor Protocol.
The Descriptor Protocol is how all three built-in kinds of methods work, it's how @property works, and it's how our special optionalselfmethod will work.
Basically, if the class dict has a value with the correct name[1], Python checks whether it has a __get__ method, and calls it like type(x).whatever.__get__(x, type(x)). The value returned from __get__ is then used as the field value.
So for example, a trivial descriptor which always returns 3:
class GetExample:
    def __get__(self, instance, cls):
        print("__get__", instance, cls)
        return 3

class Test:
    get_test = GetExample()
Usage is like this:
In[22]: x = Test()
In[23]: x.get_test
__get__ <__main__.Test object at 0x7fe8003fc470> <class '__main__.Test'>
Out[23]: 3
Notice that the descriptor is called with both the instance and the class type. It can also be used on the class:
In [29]: Test.get_test
__get__ None <class '__main__.Test'>
Out[29]: 3
When a descriptor is used on a class rather than an instance, the __get__ method gets None for self, but still gets the class argument.
This allows a simple implementation of methods: functions simply implement the descriptor protocol. When you call __get__ on a function, it returns a method bound to the instance. If the instance is None, it returns the original function. You can actually call __get__ yourself to see this:
In [30]: x = object()
In [31]: def test(self, *args):
...: print(f'Not really a method: self<{self}>, args: {args}')
...:
In [32]: test
Out[32]: <function __main__.test>
In [33]: test.__get__(None, object)
Out[33]: <function __main__.test>
In [34]: test.__get__(x, object)
Out[34]: <bound method test of <object object at 0x7fe7ff92d890>>
@classmethod and @staticmethod are similar. These decorators create proxy objects with __get__ methods which provide different binding. A classmethod's __get__ binds the method to the class rather than the instance, and a staticmethod's __get__ doesn't bind to anything, even when called on an instance.
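Those proxies can be modelled in a few lines of plain Python. This is only a rough sketch (my_classmethod and my_staticmethod are made-up names here; the real built-ins are implemented in C and handle more edge cases):
import types

class my_staticmethod:
    def __init__(self, func):
        self.func = func

    def __get__(self, instance, cls=None):
        return self.func  # hand back the raw function: no binding at all

class my_classmethod:
    def __init__(self, func):
        self.func = func

    def __get__(self, instance, cls=None):
        if cls is None:
            cls = type(instance)
        return types.MethodType(self.func, cls)  # bind to the class, never the instance

class Demo:
    @my_classmethod
    def which(cls):
        print('bound to', cls)

    @my_staticmethod
    def plain():
        print('bound to nothing')

Demo.which()     # bound to <class '__main__.Demo'>
Demo().which()   # bound to <class '__main__.Demo'>
Demo.plain()     # bound to nothing
Demo().plain()   # bound to nothing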
The Optional-Self Method Implementation
We can do something similar to create a new method which optionally binds to an instance.
import functools

class optionalselfmethod:
    def __init__(self, function):
        self.function = function
        functools.update_wrapper(self, function)

    def __get__(self, instance, cls):
        return boundoptionalselfmethod(self.function, instance)

class boundoptionalselfmethod:
    def __init__(self, function, instance):
        self.function = function
        self.instance = instance
        functools.update_wrapper(self, function)

    def __call__(self, *args, **kwargs):
        return self.function(self.instance, *args, **kwargs)

    def __repr__(self):
        return f'<bound optionalselfmethod {self.__name__} of {self.instance}>'
When you decorate a function with optionalselfmethod, the function is replaced with our proxy. This proxy saves the original function and supplies a __get__ method which returns a boundoptionalselfmethod. When we create a boundoptionalselfmethod, we tell it both the function to call and the value to pass as self. Finally, calling the boundoptionalselfmethod calls the original function, but with the instance or None inserted as the first argument.
Specific Questions
Was making a method this way (without args and without explicitly decorating it as a staticmethod) intentional in Python 3's design?
I believe this was intentional; however the intent would have been to eliminate unbound methods. In both Python 2 and Python 3, def always creates a function (you can see this by checking a type's __dict__: even though Test.instance_mthd comes back as <unbound method Test.instance_mthd>, Test.__dict__['instance_mthd'] is still <function instance_mthd at 0x...>).
In Python 2, a function's __get__ method always returns an instancemethod, even when accessed through the class. When accessed through an instance, the method is bound to that instance. When accessed through the class, the method is unbound, and includes a mechanism which checks that the first argument is an instance of the correct class.
In Python 3, a function's __get__ method will return the original function unchanged when accessed through the class, and a method when accessed through the instance.
I don't know the exact rationale but I would guess that type-checking of the first argument to a class-level function was deemed unnecessary, maybe even harmful; Python allows duck-typing after all.
Among the classic method types, what type of method is unknown_mthd?
unknown_mthd is a plain function, just like any normal instance method. It only fails when called through the instance because when method.__call__ attempts to call the function unknown_mthd with the bound instance, it doesn't accept enough parameters to receive the instance argument.
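For example, in Python 3 (the exact TypeError wording varies slightly between versions):
>>> Test.unknown_mthd()      # plain function, nothing gets bound, so this works
Unknown method.
>>> Test().unknown_mthd()    # binding inserts the instance, which the function can't accept
TypeError: unknown_mthd() takes 0 positional arguments but 1 was given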
Why can unknown_mthd be called by the class without passing an
argument?
Because it's just a plain function, same as any other function. It just doesn't take enough arguments to work correctly when used as an instance method.
You may note that both classmethod and staticmethod work the same whether they're called through an instance or a class, whereas unknown_mthd will only work correctly when called through the class and will fail when called through an instance.
[1] If a particular name has both a value in the instance dict and a descriptor in the class dict, which one is used depends on what kind of descriptor it is. If the descriptor only defines __get__, the value in the instance dict is used. If the descriptor also defines __set__, then it's a data descriptor and the descriptor always wins. This is why you can assign over top of a method but not a @property: methods only define __get__, so you can put things in the same-named slot in the instance dict, while properties define __set__, so even if they're read-only, you'll never get a value from the instance __dict__, even if you've previously bypassed the property lookup and stuck a value in the dict with e.g. x.__dict__['whatever'] = 3.
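A short demonstration of that difference (the class and attribute names here are purely illustrative):
class C:
    def meth(self):
        return 'from the class'

    @property
    def prop(self):
        return 'from the property'

c = C()
c.__dict__['meth'] = lambda: 'from the instance dict'
print(c.meth())   # 'from the instance dict' -- the non-data descriptor is shadowed
c.__dict__['prop'] = 'from the instance dict'
print(c.prop)     # 'from the property' -- the data descriptor still wins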

Terminology: A user-defined function object attribute?

According to Python 2.7.12 documentation, User-defined methods:
User-defined method objects may be created when getting an attribute
of a class (perhaps via an instance of that class), if that attribute
is a user-defined function object, an unbound user-defined method
object, or a class method object. When the attribute is a
user-defined method object, a new method object is only created if the
class from which it is being retrieved is the same as, or a derived
class of, the class stored in the original method object; otherwise,
the original method object is used as it is.
I know that everything in Python is an object, so a "user-defined method" must be identical to a "user-defined method object". However, I can't understand why there is a "user-defined function object attribute". Say, in the following code:
class Foo(object):
    def meth(self):
        pass
meth is a function defined inside a class body, and thus a method. So why can we have a "user-defined function object attribute"? Aren't all attributes defined inside a class body?
Bonus question: provide some examples illustrating how a user-defined method object is created by getting an attribute of a class. Aren't objects defined in their class definition? (I know methods can be assigned to a class instance, but that's monkey patching.)
I'm asking for help because this part of the documentation is really confusing to me, a programmer who only knows C, since Python is such a magical language that supports both functional programming and object-oriented programming, which I haven't mastered yet. I've done a lot of searching, but still can't figure this out.
When you do
class Foo(object):
    def meth(self):
        pass
you are defining a class Foo with a method meth. However, when this class definition is executed, no method object is created to represent the method. The def statement creates an ordinary function object.
If you then do
Foo.meth
or
Foo().meth
the attribute lookup finds the function object, but the function object is not used as the value of the attribute. Instead, using the descriptor protocol, Python calls the __get__ method of the function object to construct a method object, and that method object is used as the value of the attribute for that lookup. For Foo.meth, the method object is an unbound method object, which behaves like the function you defined, but with an extra type checking of self. For Foo().meth, the method object is a bound method object, which already knows what self is.
This is why Foo().meth() doesn't complain about a missing self argument; you pass 0 arguments to the method object, which then prepends self to the (empty) argument list and passes the arguments on to the underlying function object. If Foo().meth evaluated to the meth function directly, you would have to pass it self explicitly.
In Python 3, Foo.meth doesn't create an unbound method object; the function's __get__ still gets called, but it returns the function directly, since Guido decided unbound method objects weren't useful. Foo().meth still creates a bound method object, though.
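You can observe the Python 3 behaviour directly (a quick sketch; the hex addresses are arbitrary):
>>> Foo.__dict__['meth']               # what the def statement actually created
<function Foo.meth at 0x...>
>>> Foo.meth is Foo.__dict__['meth']   # __get__ hands back the function unchanged
True
>>> Foo().meth                         # __get__ builds a bound method object
<bound method Foo.meth of <__main__.Foo object at 0x...>>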

When does __getattribute__ not get involved in attribute lookup?

Consider the following:
class A(object):
    def __init__(self):
        print 'Hello!'

    def foo(self):
        print 'Foo!'

    def __getattribute__(self, att):
        raise AttributeError()

a = A()  # Works, prints "Hello!"
a.foo()  # throws AttributeError as expected
The implementation of __getattribute__ obviously fails all lookups. My questions:
Why is it still possible to instantiate an object? I would have expected the lookup of the __init__ method itself to fail as well.
What's the list of attributes that are not subject to __getattribute__?
The implementation of __getattribute__ obviously fails all lookups
Let's say it fails for all vanilla lookups.
So how did __getattribute__ itself get called in the first place since it is also an attribute of the class?
An attribute would refer to any name following a dot. So to get an attribute of a class instance, __getattribute__ is summoned unconditionally when you try to access that attribute (through dot reference).
However, magic methods like __init__ are invoked by the language machinery itself, not via a dot reference on the instance, so the overridden __getattribute__ is never consulted for them.
Why is it still possible to instantiate an object?
When you do:
a = A()
The __init__ method gets called behind the scenes, but not via a vanilla lookup. The language handles this. The same applies to other magic methods such as __setattr__, __delattr__, and __getattribute__ itself.
But if you directly called __init__:
a.__init__()
it would raise an AttributeError (not that calling it again makes much sense, since the instance is already initialized).
More subtly, if you tried to access __getattribute__ from your class instance via a dot reference:
a.__getattribute__
it would also raise an AttributeError: the lookup of the name __getattribute__ itself goes through the overridden __getattribute__, which fails.
What's the list of attributes that are not subject to
__getattribute__?
Summarily, __getattribute__ comes into play when you access any attribute via a dot reference. The language's implicit invocation of magic methods does not go through it; only explicit dot access does.
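To make the distinction concrete, here is a small sketch in Python 3 syntax (the same behaviour applies to the 2.x code above):
class A(object):
    def __len__(self):
        return 42

    def __getattribute__(self, att):
        raise AttributeError(att)

a = A()          # still works: construction is handled by the type machinery
print(len(a))    # 42 -- implicit special-method lookup bypasses __getattribute__
a.__len__()      # AttributeError -- explicit dot access goes through __getattribute__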

Python Suite, Package, Module, TestCase and TestSuite differences

Best Guess:
method - def(self, maybeSomeVariables); lines of code which achieve some purpose
Function - same as method but returns something
Class - group of methods/functions
Module - a script, OR one or more classes. Basically a .py file.
Package - a folder which has modules in, and also a __init__.py file in there.
Suite - Just a word that gets thrown around a lot, by convention
TestCase - unittest's equivalent of a function
TestSuite - unittest's equivalent of a Class (or Module?)
My question is: Is this completely correct, and did I miss any hierarchical building blocks from that list?
I feel that you're putting in differences that don't actually exist. There isn't really a hierarchy as such. In python everything is an object. This isn't some abstract notion, but quite fundamental to how you should think about constructs you create when using python. An object is just a bunch of other objects. There is a slight subtlety in whether you're using new-style classes or not, but in the absence of a good reason otherwise, just use and assume new-style classes. Everything below is assuming new-style classes.
If an object is callable, you can call it using the calling syntax of a pair of parentheses, with the arguments inside them: my_callable(arg1, arg2). To be callable, an object needs to implement the __call__ method (or else have the correct field set in its C level type definition).
In python an object has a type associated with it. The type describes how the object was constructed. So, for example, a list object is of type list and a function object is of type function. The types themselves are of type type. You can find the type by using the built-in function type(). A list of all the built-in types can be found in the python documentation. Types are actually callable objects, and are used to create instances of a given type.
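For example (reprs shown in Python 2 form, to match the rest of this answer):
>>> type([])          # the type of a list object
<type 'list'>
>>> type(list)        # types themselves are instances of type
<type 'type'>
>>> list('abc')       # calling a type creates an instance of that type
['a', 'b', 'c']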
Right, now that's established, the nature of a given object is defined by its type. This describes the objects of which it is comprised. Coming back to your questions then:
Firstly, the bunch of objects that make up some object are called the attributes of that object. These attributes can be anything, but they typically consist of methods and some way of storing state (which might be types such as int or list).
A function is an object of type function. Crucially, that means it has the __call__ method as an attribute which makes it a callable (the __call__ method is also an object that itself has the __call__ method. It's __call__ all the way down ;)
A class, in the python world, can be considered as a type, but typically is used to refer to types that are not built-in. These objects are used to create other objects. You can define your own classes with the class keyword, and to create a class which is new-style you must inherit from object (or some other new-style class). When you inherit, you create a type that acquires all the characteristics of the parent type, and then you can overwrite the bits you want to (and you can overwrite any bits you want!). When you instantiate a class (or more generally, a type) by calling it, another object is returned which is created by that class (how the returned object is created can be changed in weird and crazy ways by modifying the class object).
A method is a special type of function that is called using the attribute notation. That is, when it is created, 2 extra attributes are added to the method (remember it's an object!) called im_self and im_func. im_self I will describe in a few sentences. im_func is a function that implements the method. When the method is called, like, for example, foo.my_method(10), this is equivalent to calling foo.my_method.im_func(im_self, 10). This is why, when you define a method, you define it with the extra first argument which you apparently don't seem to use (as self).
When you write a bunch of methods when defining a class, these become unbound methods. When you create an instance of that class, those methods become bound. When you call a bound method, the im_self argument is added for you as the object on which the bound method resides. You can still call the unbound method of the class, but you need to explicitly add the class instance as the first argument:
class Foo(object):
    def bar(self):
        print self
        print self.bar
        print self.bar.im_self  # prints the same as self
We can show what happens when we call the various manifestations of the bar method:
>>> a = Foo()
>>> a.bar()
<__main__.Foo object at 0x179b610>
<bound method Foo.bar of <__main__.Foo object at 0x179b610>>
<__main__.Foo object at 0x179b610>
>>> Foo.bar()
TypeError: unbound method bar() must be called with Foo instance as first argument (got nothing instead)
>>> Foo.bar(a)
<__main__.Foo object at 0x179b610>
<bound method Foo.bar of <__main__.Foo object at 0x179b610>>
<__main__.Foo object at 0x179b610>
Bringing all the above together, we can define a class as follows:
class MyFoo(object):
    a = 10

    def bar(self):
        print self.a
This generates a class with 2 attributes: a (which is an integer of value 10) and bar, which is an unbound method. We can see that MyFoo.a is just 10.
We can create extra attributes at run time, both within the class methods, and outside. Consider the following:
class MyFoo(object):
    a = 10

    def __init__(self):
        self.b = 20

    def bar(self):
        print self.a
        print self.b

    def eep(self):
        print self.c
__init__ is just the method that is called immediately after an object has been created from a class.
>>> foo = MyFoo()
>>> foo.bar()
10
20
>>> foo.eep()
AttributeError: 'MyFoo' object has no attribute 'c'
>>> foo.c = 30
>>> foo.eep()
30
This example shows 2 ways of adding an attribute to a class instance at run time (that is, after the object has been created from its class).
I hope you can see then, that TestCase and TestSuite are just classes that are used to create test objects. There's nothing special about them except that they happen to have some useful features for writing tests. You can subclass and overwrite them to your heart's content!
Regarding your specific point, both methods and functions can return anything they want.
Your description of module, package and suite seems pretty sound. Note that modules are also objects!
