Python Suite, Package, Module, TestCase and TestSuite differences - python

Best Guess:
method - def(self, maybeSomeVariables); lines of code which achieve some purpose
Function - same as method but returns something
Class - group of methods/functions
Module - a script, OR one or more classes. Basically a .py file.
Package - a folder which has modules in, and also a __init__.py file in there.
Suite - Just a word that gets thrown around a lot, by convention
TestCase - unittest's equivalent of a function
TestSuite - unittest's equivalent of a Class (or Module?)
My question is: Is this completely correct, and did I miss any hierarchical building blocks from that list?

I feel that you're putting in differences that don't actually exist. There isn't really a hierarchy as such. In python everything is an object. This isn't some abstract notion, but quite fundamental to how you should think about constructs you create when using python. An object is just a bunch of other objects. There is a slight subtlety in whether you're using new-style classes or not, but in the absence of a good reason otherwise, just use and assume new-style classes. Everything below is assuming new-style classes.
If an object is callable, you can call it using the familiar call syntax: a pair of parentheses with the arguments inside them, e.g. my_callable(arg1, arg2). To be callable, an object needs to implement the __call__ method (or else have the correct field set in its C-level type definition).
In python an object has a type associated with it. The type describes how the object was constructed. So, for example, a list object is of type list and a function object is of type function. The types themselves are of type type. You can find the type by using the built-in function type(). A list of all the built-in types can be found in the python documentation. Types are actually callable objects, and are used to create instances of a given type.
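For example (a quick interactive sketch, shown with Python 2 output to match the rest of this answer):
>>> type([1, 2, 3])
<type 'list'>
>>> type(type([1, 2, 3]))
<type 'type'>
>>> list('abc')   # calling a type creates a new instance of that type
['a', 'b', 'c']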
Right, now that that's established: the nature of a given object is defined by its type, which describes the objects it is made up of. Coming back to your questions then:
Firstly, the bunch of objects that make up some object are called the attributes of that object. These attributes can be anything, but they typically consist of methods and some way of storing state (which might be types such as int or list).
A function is an object of type function. Crucially, that means it has the __call__ method as an attribute, which makes it callable (the __call__ method is itself an object that also has a __call__ method; it's __call__ all the way down ;)
A class, in the python world, can be considered as a type, but typically is used to refer to types that are not built-in. These objects are used to create other objects. You can define your own classes with the class keyword, and to create a class which is new-style you must inherit from object (or some other new-style class). When you inherit, you create a type that acquires all the characteristics of the parent type, and then you can overwrite the bits you want to (and you can overwrite any bits you want!). When you instantiate a class (or more generally, a type) by calling it, another object is returned which is created by that class (how the returned object is created can be changed in weird and crazy ways by modifying the class object).
A method is a special type of function that is called using attribute notation. When it is created, two extra attributes are added to the method (remember, it's an object!) called im_self and im_func. im_self I will describe in a moment; im_func is the function that implements the method. Calling a method, for example foo.my_method(10), is equivalent to calling foo.my_method.im_func(im_self, 10). This is why, when you define a method, you give it an extra first argument (self) that you never seem to pass explicitly.
When you write a bunch of methods while defining a class, these become unbound methods. When you create an instance of that class, those methods become bound. When you call a bound method, the im_self argument is added for you: it is the object on which the bound method lives. You can still call the unbound method on the class, but then you need to pass the class instance explicitly as the first argument:
class Foo(object):
    def bar(self):
        print self
        print self.bar
        print self.bar.im_self  # prints the same as self
We can show what happens when we call the various manifestations of the bar method:
>>> a = Foo()
>>> a.bar()
<__main__.Foo object at 0x179b610>
<bound method Foo.bar of <__main__.Foo object at 0x179b610>>
<__main__.Foo object at 0x179b610>
>>> Foo.bar()
TypeError: unbound method bar() must be called with Foo instance as first argument (got nothing instead)
>>> Foo.bar(a)
<__main__.Foo object at 0x179b610>
<bound method Foo.bar of <__main__.Foo object at 0x179b610>>
<__main__.Foo object at 0x179b610>
Bringing all the above together, we can define a class as follows:
class MyFoo(object):
    a = 10
    def bar(self):
        print self.a
This generates a class with 2 attributes: a (which is an integer of value 10) and bar, which is an unbound method. We can see that MyFoo.a is just 10.
We can create extra attributes at run time, both within the class methods, and outside. Consider the following:
class MyFoo(object):
    a = 10
    def __init__(self):
        self.b = 20
    def bar(self):
        print self.a
        print self.b
    def eep(self):
        print self.c
__init__ is just the method that is called immediately after an object has been created from a class.
>>> foo = MyFoo()
>>> foo.bar()
10
20
>>> foo.eep()
AttributeError: 'MyFoo' object has no attribute 'c'
>>> foo.c = 30
>>> foo.eep()
30
This example shows two ways of adding an attribute to a class instance at run time (that is, after the object has been created from its class).
I hope you can see then, that TestCase and TestSuite are just classes that are used to create test objects. There's nothing special about them except that they happen to have some useful features for writing tests. You can subclass and overwrite them to your heart's content!
Regarding your specific point, both methods and functions can return anything they want.
Your description of module, package and suite seems pretty sound. Note that modules are also objects!
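For example (the extra attribute added to the module below is made up, just to show that modules behave like any other object):
>>> import math
>>> type(math)
<type 'module'>
>>> math.pi             # modules have attributes just like any other object
3.141592653589793
>>> math.spam = 'eggs'  # you can even attach new attributes at run time
>>> math.spam
'eggs'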

Related

Explicit call to __call__ works and uses __init__

I'm learning overloading in Python 3.X and to better understand the topic, I wrote the following code that works in 3.X but not in 2.X. I expected the below code to fail since I've not defined __call__ for class Test. But to my surprise, it works and prints "constructor called". Demo.
class Test:
    def __init__(self):
        print("constructor called")

#Test.__getitem__()  # error as expected
Test.__call__()  # this works in 3.X (but not in 2.X) and prints "constructor called"! WHY DOESN'T THIS GIVE AN ERROR in 3.x?
So my question is: how/why exactly does this code work in 3.x but not in 2.x? I want to know the mechanics behind what is going on.
More importantly, why is __init__ being called here when I am using __call__?
In 3.x:
About attribute lookup, type and object
Every time an attribute is looked up on an object, Python follows a process like this:
1. Is it directly a part of the actual data in the object? If so, use that and stop.
2. Is it directly a part of the object's class? If so, hold onto that for step 4.
3. Otherwise, check the object's class for __getattr__ and __getattribute__ overrides, look through base classes in the MRO, etc. (This is a massive simplification, of course.)
4. If something was found in step 2 or 3, check if it has a __get__. If it does, look that up (yes, that means starting over at step 1 for the attribute named __get__ on that object), call it, and use its return value. Otherwise, use what was returned directly.
Functions have a __get__ automatically; it is used to implement method binding. Classes are objects; that's why it's possible to look up attributes in them. That is: the purpose of the class Test: block is to define a data type; the code creates an object named Test which represents the data type that was defined.
But since the Test class is an object, it must be an instance of some class. That class is called type, and has a built-in implementation.
>>> type(Test)
<class 'type'>
Notice that type(Test) is not a function call. Rather, the name type is pre-defined to refer to a class, which every other class created in user code is (by default) an instance of.
In other words, type is the default metaclass: the class of classes.
>>> type
<class 'type'>
One may ask, what class does type belong to? The answer is surprisingly simple - itself:
>>> type(type) is type
True
Since the above examples call type, we conclude that type is callable. To be callable, it must have a __call__ attribute, and it does:
>>> type.__call__
<slot wrapper '__call__' of 'type' objects>
When type is called with a single argument, it looks up the argument's class (roughly equivalent to accessing the __class__ attribute of the argument). When called with three arguments, it creates a new instance of type, i.e., a new class.
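Both forms can be seen interactively (the class name C below is arbitrary):
>>> type(3)                              # one argument: report the argument's class
<class 'int'>
>>> C = type('C', (object,), {'x': 1})   # three arguments: build a new class
>>> C
<class '__main__.C'>
>>> C().x
1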
How does type work?
Because this is digging right at the core of the language (allocating memory for the object), it's not quite possible to implement this in pure Python, at least for the reference C implementation (and I have no idea what sort of magic is going on in PyPy here). But we can approximately model the type class like so:
def _validate_type(obj, required_type, context):
    if not isinstance(obj, required_type):
        good_name = required_type.__name__
        bad_name = type(obj).__name__
        raise TypeError(f'{context} must be {good_name}, not {bad_name}')

class type:
    def __new__(cls, name_or_obj, *args):
        # __new__ implicitly gets passed an instance of the class, but
        # `type` is its own class, so it will be `type` itself.
        if len(args) == 0:  # 1-argument form: report the type of an existing object.
            return name_or_obj.__class__
        # otherwise, 3-argument form: create a new class.
        try:
            bases, attrs = args
        except ValueError:
            raise TypeError('type() takes 1 or 3 arguments')
        _validate_type(name_or_obj, str, 'type.__new__() argument 1')
        _validate_type(bases, tuple, 'type.__new__() argument 2')
        _validate_type(attrs, dict, 'type.__new__() argument 3')
        # This line would not work if we were actually implementing
        # a replacement for `type`, as it would route to `object.__new__(type)`,
        # which is explicitly disallowed. But let's pretend it does...
        result = super().__new__()
        # Now, fill in attributes from the parameters.
        result.__name__ = name_or_obj
        # Assigning to `__bases__` triggers a lot of other internal checks!
        result.__bases__ = bases
        for name, value in attrs.items():
            setattr(result, name, value)
        return result

    del __new__.__get__  # `__new__`s of builtins don't implement this.

    def __call__(self, *args):
        # this, however, does have a `__get__`.
        return self.__new__(self, *args)
What happens (conceptually) when we call the class (Test())?
Test() uses function-call syntax, but it's not a function. To figure out what should happen, we translate the call into Test.__class__.__call__(Test). (We use __class__ directly here, because translating the function call using type - asking type to categorize itself - would end up in endless recursion.)
Test.__class__ is type, so this becomes type.__call__(Test).
type contains a __call__ directly (type is its own class, remember?), so it's used directly - we don't go through the __get__ descriptor. We call the function, with Test as self, and no other arguments. (We have a function now, so we don't need to translate the function call syntax again. We could - given a function func, func.__class__.__call__.__get__(func) gives us an instance of an unnamed builtin "method wrapper" type, which does the same thing as func when called. Repeating the loop on the method wrapper creates a separate method wrapper that still does the same thing.)
This attempts the call Test.__new__(Test) (since self was bound to Test). Test.__new__ isn't explicitly defined in Test, but since Test is a class, we don't look in Test's class (type), but instead in Test's base (object).
object.__new__(Test) exists, and does magical built-in stuff to allocate memory for a new instance of the Test class, make it possible to assign attributes to that instance (even though Test is a subtype of object, which disallows that), and set its __class__ to Test.
Similarly, when we call type, the same logical chain turns type(Test) into type.__class__.__call__(type, Test) into type.__call__(type, Test), which forwards to type.__new__(type, Test). This time, there is a __new__ attribute directly in type, so this doesn't fall back to looking in object. Instead, with name_or_obj being set to Test, we simply return Test.__class__, i.e., type. And with separate name, bases, attrs arguments, type.__new__ instead creates an instance of type.
Finally: what happens when we call Test.__call__() explicitly?
If there's a __call__ defined in the class, it gets used, since it's found directly. This will fail, however, because there aren't enough arguments: the descriptor protocol isn't used since the attribute was found directly, so self isn't bound, and so that argument is missing.
If there isn't a __call__ method defined, then we look in Test's class, i.e., type. There's a __call__ there, so the rest proceeds like steps 3-5 in the previous section.
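A quick demonstration of both branches (the WithCall and WithoutCall classes are made up for illustration, and the exact error wording varies a little between 3.x versions):
>>> class WithCall:
...     def __call__(self):
...         return 'instance-level call'
...
>>> WithCall.__call__()     # found directly on the class, so self is never bound
Traceback (most recent call last):
  ...
TypeError: __call__() missing 1 required positional argument: 'self'
>>> class WithoutCall:
...     def __init__(self):
...         print('constructor called')
...
>>> WithoutCall.__call__()  # falls back to type.__call__, so this builds an instance
constructor called
<__main__.WithoutCall object at 0x...>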
In Python 3.x, every class is implicitly a child of the builtin class object. And, at least in the CPython implementation, every class picks up a __call__ method from its metaclass, type.
That means that Test.__call__() is exactly the same as Test() and will return a new Test object, calling your custom __init__ method.
In Python 2.x, classes are old-style by default and are not children of object; because of that, __call__ is not defined. You can get the same behaviour in Python 2.x by using new-style classes, i.e. by inheriting explicitly from object:
# Python 2 new style class
class Test(object):
    ...

__get__ of descriptor __class__ of object class doesn't return as expected

I am trying to understand descriptors and the explicit attribute-name lookup order more thoroughly.
I read the descriptor HOWTO, and it states the following:
The details of invocation depend on whether obj is an object or a class:
...
For classes, the machinery is in type.__getattribute__() which transforms B.x into B.__dict__['x'].__get__(None, B)
I tested this on __class__, since it is a data descriptor of object:
In [47]: object.__class__
Out[47]: type
So, it returns type as expected, since the type class creates all classes, including the object class. Based on the descriptor HOWTO, object.__class__ should be turned into object.__dict__['__class__'].__get__(None, object).
However, when I run that, the output is the descriptor itself, not type:
In [48]: object.__dict__['__class__'].__get__(None, object)
Out[48]: <attribute '__class__' of 'object' objects>
I guess it returns the descriptor itself because this __get__ contains some code like:
if instance is None:
    return self
So, I understand the reason for returning the descriptor itself when it is called from the class. The thing that confuses me is the different outputs.
When the HOWTO says B.x is transformed into B.__dict__['x'].__get__(None, B), I expect the outputs to be the same. Why are they different?
The descriptor how-to is a simplification. It glosses over things like metaclasses, and the fact that classes are objects. Classes are objects, and they go through both "object-style" and "class-style" attribute lookup and descriptor handling. (The implementation can be found in type_getattro, if you want to independently verify this.)
A lookup for object.__class__ doesn't just go through object.__mro__; it also looks through type(object).__mro__. Descriptors found in type(object).__mro__ use "object-style" descriptor handling, treating the class as an instance of its metaclass, while descriptors found in object.__mro__ use "class-style" descriptor handling.
When you look up object.__class__, Python searches through type(object).__mro__. Since object is in type(object).__mro__, this search finds object.__dict__['__class__']. Since object.__dict__['__class__'] is a data descriptor (it has a __set__), this takes priority over the search through object.__mro__. Thus, treating object as an instance of object rather than as a class, Python performs
descr.__get__(object, type(object))
instead of
descr.__get__(None, object)
and the __get__ call returns type(object), which is type.
Your manual descr.__get__(None, object) call treats object as a class instead of as an instance of object. Invoked this way, the descriptor returns itself.
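You can check this directly in the interpreter:
>>> descr = object.__dict__['__class__']
>>> descr.__get__(object, type(object))   # "object-style": object treated as an instance
<class 'type'>
>>> descr.__get__(None, object)           # "class-style": the descriptor returns itself
<attribute '__class__' of 'object' objects>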
To demonstrate that __class__ isn't being special-cased here, we can create our own class that's an instance of itself, just like object is:
class DummyMeta(type):
    pass

class SelfMeta(type, metaclass=DummyMeta):
    @property
    def x(self):
        return 3

SelfMeta.__class__ = SelfMeta
print(SelfMeta.x)
print(SelfMeta.__dict__['x'].__get__(None, SelfMeta))
print(SelfMeta.__dict__['x'].__get__(SelfMeta, type(SelfMeta)))
Output:
3
<property object at 0x2aff9f04c5e8>
3
Just like with object.__class__, "object-style" descriptor handling happens here too. (Also, in case you're wondering, properties are data descriptors even if you don't write a setter.)

Explicitly inheriting from 'type' to implement metaclass in python3.x

I was trying to get some intuition about metaclasses in Python. I have tried this on both Python 2.7 and Python 3.5. In Python 3.5 I found that every class we define is of type <class 'type'>, whether we explicitly inherit from type or not. But if it does not inherit from type, we can't use that class as a metaclass for another class.
>>> class foo:
...     pass
...
>>> class Metafoo(type):
...     pass
...
>>> foo
<class '__main__.foo'>
>>> Metafoo
<class '__main__.Metafoo'>
>>> type(foo)
<class 'type'>
>>> type(Metafoo)
<class 'type'>
>>>
>>> class foocls1(metaclass=foo):
...     pass
...
doing the above I get the following error:
Traceback (most recent call last):
File "<pyshell#52>", line 1, in <module>
class foocls1(metaclass=foo):
TypeError: object() takes no parameters
But that is not the case when using Metafoo as the metaclass for the new class:
>>> class foocls3(metaclass=Metafoo):
...     pass
...
>>> foocls3
<class '__main__.foocls3'>
>>> type(foocls3)
<class '__main__.Metafoo'>
Can anyone explain why we need to explicitly inherit from type if we want to use a class as the metaclass for another class?
"type" is the base class for all class objects in Python 3, and in Python 2 post version 2.2. (It is just that on Python 2, you are supposed to inherit from object. Classes that do not explicitly inherit from "object" in Python 2 were called "old style classes", and kept for backward compatibility purposes, but were of little use.)
So, what happens is that inheritance: what are the superclasses of a class, and "metaclass" that is "since in Python a class is an object itself, what is the class of that object" are two different things. The classes you inherit from define the order of attribute (and method) lookup on the instances of your class, and therefore you have a common behavior.
The metaclass is rather "the class your class is built with", and although it can serve other purposes, is most used to modify the construction step of your classes itself. Search around and you will see metaclasses mostly implementing the __new__ and __init__ methods. (Although it is ok to have other methods there, but yu have to know what you are doing)
It happens that to build a class some actions are needed that are not required to build normal objects. These actions are performed on the native-code layer (C in CPython) and are not even reproducible in pure Python code, like populating the class's special method slots- the pointers to the functions that implement the __add__, __eq__ and such methods. The only class that does that in CPython is "type". So any code you want to use to build a class, i.e. as a metaclass, will have to at some point call type.__new__ method. (Just as any thing you want to create a new object in Python will at some point call the code in object.__new__.).
Your error happened not because Python goes checking whether you made a direct or indirect call to type.__new__ ahead of time. The error: "TypeError: object() takes no parameters" is simply due to the fact that __new__ method of the metaclass is passed 3 parameters (name, bases, namespace), while the same method in object is passed no parameters. (Both get te extra "cls", equivalent to self as well, but this is not counted).
You can use any callable as the metaclass, even an ordinary function. It is just that it will have to take the 3 explicit parameters. Whatever this callable returns is used as the class from that point on, but if at some point you don't call type.__new__ (even indirectly), you don't have a valid class to return.
For example, one could create a simple function just to be able to use class bodies as dictionary declarations:
def dictclass(name, bases, namespace):
    return namespace

class mydict(metaclass=dictclass):
    a = 1
    b = 2
    c = 3

mydict["a"]
So, one interesting fact in all of this is that type is its own metaclass (that is hardcoded in the Python implementation). But type itself also inherits from object:
In [21]: type.__class__
Out[21]: type
In [22]: type.__mro__
Out[22]: (type, object)
And just to end this: it is possible to create classes without calling type.__new__ and objects without calling object.__new__, but not ordinarily from pure Python code, as data structures at the C-API level have to be filled for both actions. One could either do a native-code implementation of functions to do it, or hack it with ctypes.

Are there more than three types of methods in Python?

I understand there are at least 3 kinds of methods in Python having different first arguments:
instance method - instance, i.e. self
class method - class, i.e. cls
static method - nothing
These classic methods are implemented in the Test class below, along with an unusual one:
class Test():
    def __init__(self):
        pass

    def instance_mthd(self):
        print("Instance method.")

    @classmethod
    def class_mthd(cls):
        print("Class method.")

    @staticmethod
    def static_mthd():
        print("Static method.")

    def unknown_mthd():
        # No decoration --> instance method, but
        # No self (or cls) --> static method, so ... (?)
        print("Unknown method.")
In Python 3, the unknown_mthd can be called safely, yet it raises an error in Python 2:
>>> t = Test()
>>> # Python 3
>>> t.instance_mthd()
>>> Test.class_mthd()
>>> t.static_mthd()
>>> Test.unknown_mthd()
Instance method.
Class method.
Static method.
Unknown method.
>>> # Python 2
>>> Test.unknown_mthd()
TypeError: unbound method unknown_mthd() must be called with Test instance as first argument (got nothing instead)
This error suggests such a method was not intended in Python 2. Perhaps it is allowed now due to the elimination of unbound methods in Python 3 (REF 001). Moreover, unknown_mthd does not accept args, and it can be called from the class like a staticmethod, Test.unknown_mthd(). However, it is not an explicit staticmethod (no decorator).
Questions
Was making a method this way (without args while not explicitly decorated as staticmethods) intentional in Python 3's design? UPDATED
Among the classic method types, what type of method is unknown_mthd?
Why can unknown_mthd be called by the class without passing an argument?
Some preliminary inspection yields inconclusive results:
>>> # Types
>>> print("i", type(t.instance_mthd))
>>> print("c", type(Test.class_mthd))
>>> print("s", type(t.static_mthd))
>>> print("u", type(Test.unknown_mthd))
>>> print()
>>> # __dict__ Types, REF 002
>>> print("i", type(t.__class__.__dict__["instance_mthd"]))
>>> print("c", type(t.__class__.__dict__["class_mthd"]))
>>> print("s", type(t.__class__.__dict__["static_mthd"]))
>>> print("u", type(t.__class__.__dict__["unknown_mthd"]))
>>> print()
i <class 'method'>
c <class 'method'>
s <class 'function'>
u <class 'function'>
i <class 'function'>
c <class 'classmethod'>
s <class 'staticmethod'>
u <class 'function'>
The first set of type inspections suggests unknown_mthd is something similar to a staticmethod. The second suggests it resembles an instance method. I'm not sure what this method is or why it should be used over the classic ones. I would appreciate some advice on how to inspect and understand it better. Thanks.
REF 001: What's New in Python 3: “unbound methods” has been removed
REF 002: How to distinguish an instance method, a class method, a static method or a function in Python 3?
REF 003: What's the point of @staticmethod in Python?
Some background: In Python 2, "regular" instance methods could give rise to two kinds of method objects, depending on whether you accessed them via an instance or the class. If you did inst.meth (where inst is an instance of the class), you got a bound method object, which keeps track of which instance it is attached to, and passes it as self. If you did Class.meth (where Class is the class), you got an unbound method object, which had no fixed value of self, but still did a check to make sure a self of the appropriate class was passed when you called it.
In Python 3, unbound methods were removed. Doing Class.meth now just gives you the "plain" function object, with no argument checking at all.
Was making a method this way intentional in Python 3's design?
If you mean, was removal of unbound methods intentional, the answer is yes. You can see discussion from Guido on the mailing list. Basically it was decided that unbound methods add complexity for little gain.
Among the classic method types, what type of method is unknown_mthd?
It is an instance method, but a broken one. When you access it, a bound method object is created, but since it accepts no arguments, it's unable to accept the self argument and can't be successfully called.
Why can unknown_mthd be called by the class without passing an argument?
In Python 3, unbound methods were removed, so Test.unkown_mthd is just a plain function. No wrapping takes place to handle the self argument, so you can call it as a plain function that accepts no arguments. In Python 2, Test.unknown_mthd is an unbound method object, which has a check that enforces passing a self argument of the appropriate class; since, again, the method accepts no arguments, this check fails.
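Roughly what this looks like in a Python 3 session, using the Test class from the question (exact error wording varies between 3.x versions):
>>> Test.unknown_mthd          # a plain function, so calling it with no args works
<function Test.unknown_mthd at 0x...>
>>> t = Test()
>>> t.unknown_mthd             # bound method: self would be passed automatically
<bound method Test.unknown_mthd of <__main__.Test object at 0x...>>
>>> t.unknown_mthd()
Traceback (most recent call last):
  ...
TypeError: unknown_mthd() takes 0 positional arguments but 1 was given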
@BrenBarn did a great job answering your question. This answer, however, adds a plethora of details:
First of all, this change in bound and unbound methods is version-specific, and it doesn't relate to new-style versus classic classes:
2.X classic classes by default
>>> class A:
... def meth(self): pass
...
>>> A.meth
<unbound method A.meth>
>>> class A(object):
... def meth(self): pass
...
>>> A.meth
<unbound method A.meth>
3.X new-style classes by default
>>> class A:
... def meth(self): pass
...
>>> A.meth
<function A.meth at 0x7efd07ea0a60>
You've already mentioned this in your question, but it doesn't hurt to repeat it as a reminder.
>>> # Python 2
>>> Test.unknown_mthd()
TypeError: unbound method unknown_mthd() must be called with Test instance as first argument (got nothing instead)
Moreover, unknown_mthd does not accept args, and it can be bound to a class like a staticmethod, Test.unknown_mthd(). However, it is not an explicit staticmethod (no decorator)
unknown_mthd doesn't accept args simply because you defined the function so that it does not take any parameters. Be careful: static methods, as well as your unknown_mthd method, will not be magically bound to the class when you reference them through the class name (e.g., Test.unknown_mthd). Under Python 3.X, Test.unknown_mthd returns a simple function object, not a method bound to the class.
1 - Was making a method this way (without args while not explicitly decorated as staticmethods) intentional in Python 3's design? UPDATED
I cannot speak for the CPython developers, nor do I claim to be their representative, but from my experience as a Python programmer it seems they wanted to get rid of a bad restriction, especially given that Python is extremely dynamic, not a language of restrictions: why would you test the type of objects passed to class methods and hence restrict the method to specific instances of classes? Type testing eliminates polymorphism. It is reasonable to just return a simple function when a method is fetched through the class, and such a function behaves like a static method. You can think of unknown_mthd as a static method under 3.X; so long as you're careful not to fetch it through an instance of Test, you're good to go.
2- Among the classic method types, what type of method is unknown_mthd?
Under 3.X:
>>> from types import *
>>> class Test:
... def unknown_mthd(): pass
...
>>> type(Test.unknown_mthd) is FunctionType
True
It's simply a function in 3.X as you could see. Continuing the previous session under 2.X:
>>> type(Test.__dict__['unknown_mthd']) is FunctionType
True
>>> type(Test.unknown_mthd) is MethodType
True
unknown_mthd is a simple function that lives inside Test.__dict__, the namespace dictionary of Test. So when does it become an instance of MethodType? It becomes an instance of MethodType when you fetch the attribute either from the class itself, which returns an unbound method, or from one of its instances, which returns a bound method. In 3.X, Test.unknown_mthd is a simple function (an instance of FunctionType), while Test().unknown_mthd is an instance of MethodType that retains the original instance of class Test and implicitly adds it as the first argument on calls.
3- Why can unknown_mthd be called by the class without passing an argument?
Again, because Test.unknown_mthd is just a simple function under 3.X, whereas in 2.X unknown_mthd is not a simple function and must be passed an instance of Test when called.
Are there more than three types of methods in Python?
Yes. There are the three built-in kinds that you mention (instance method, class method, static method), four if you count @property, and anyone can define new method types.
Once you understand the mechanism for doing this, it's easy to explain why unknown_mthd is callable from the class in Python 3.
A new kind of method
Suppose we wanted to create a new type of method, call it optionalselfmethod so that we could do something like this:
class Test(object):
    @optionalselfmethod
    def optionalself_mthd(self, *args):
        print('Optional-Self Method:', self, *args)
The usage is like this:
In [3]: Test.optionalself_mthd(1, 2)
Optional-Self Method: None 1 2
In [4]: x = Test()
In [5]: x.optionalself_mthd(1, 2)
Optional-Self Method: <test.Test object at 0x7fe80049d748> 1 2
In [6]: Test.instance_mthd(1, 2)
Instance method: 1 2
optionalselfmethod works like a normal instance method when called on an instance, but when called on the class, it always receives None for the first parameter. If it were a normal instance method, you would always have to pass an explicit value for the self parameter in order for it to work.
So how does this work? How can you create a new method type like this?
The Descriptor Protocol
When Python looks up a field of an instance, i.e. when you do x.whatever, it checks in several places. It checks the instance's __dict__ of course, but it also checks the __dict__ of the object's class, and base classes thereof. In the instance dict, Python is just looking for the value, so if x.__dict__['whatever'] exists, that's the value. However, in the class dict, Python is looking for an object which implements the Descriptor Protocol.
The Descriptor Protocol is how all three built-in kinds of methods work, it's how #property works, and it's how our special optionalselfmethod will work.
Basically, if the class dict has a value with the correct name1, Python checks whether it has a __get__ method and calls it like type(x).whatever.__get__(x, type(x)). Then the value returned from __get__ is used as the field value.
So for example, a trivial descriptor which always returns 3:
class GetExample:
    def __get__(self, instance, cls):
        print("__get__", instance, cls)
        return 3

class Test:
    get_test = GetExample()
Usage is like this:
In[22]: x = Test()
In[23]: x.get_test
__get__ <__main__.Test object at 0x7fe8003fc470> <class '__main__.Test'>
Out[23]: 3
Notice that the descriptor is called with both the instance and the class type. It can also be used on the class:
In [29]: Test.get_test
__get__ None <class '__main__.Test'>
Out[29]: 3
When a descriptor is used on a class rather than an instance, the __get__ method gets None for self, but still gets the class argument.
This allows a simple implementation of methods: functions simply implement the descriptor protocol. When you call __get__ on a function, it returns a bound method of the instance. If the instance is None, it returns the original function. You can actually call __get__ yourself to see this:
In [30]: x = object()
In [31]: def test(self, *args):
...: print(f'Not really a method: self<{self}>, args: {args}')
...:
In [32]: test
Out[32]: <function __main__.test>
In [33]: test.__get__(None, object)
Out[33]: <function __main__.test>
In [34]: test.__get__(x, object)
Out[34]: <bound method test of <object object at 0x7fe7ff92d890>>
@classmethod and @staticmethod are similar. These decorators create proxy objects with __get__ methods which provide different binding. classmethod's __get__ binds the method to the class (not the instance), and staticmethod's __get__ doesn't bind to anything at all, even when called on an instance.
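Here is a rough sketch of such proxies, as simplified stand-ins for the real built-ins (my_staticmethod and my_classmethod are made-up names; the real implementations are in C and handle more edge cases):
class my_staticmethod:
    def __init__(self, func):
        self.func = func
    def __get__(self, instance, cls):
        return self.func                 # no binding at all

class my_classmethod:
    def __init__(self, func):
        self.func = func
    def __get__(self, instance, cls):
        # bind to the class, whether accessed via the class or an instance
        def bound(*args, **kwargs):
            return self.func(cls, *args, **kwargs)
        return bound

class Example:
    @my_staticmethod
    def s():
        return 'static-like'
    @my_classmethod
    def c(cls):
        return cls.__name__

print(Example.s(), Example().s())   # static-like static-like
print(Example.c(), Example().c())   # Example Example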
The Optional-Self Method Implementation
We can do something similar to create a new method which optionally binds to an instance.
import functools

class optionalselfmethod:
    def __init__(self, function):
        self.function = function
        functools.update_wrapper(self, function)

    def __get__(self, instance, cls):
        return boundoptionalselfmethod(self.function, instance)

class boundoptionalselfmethod:
    def __init__(self, function, instance):
        self.function = function
        self.instance = instance
        functools.update_wrapper(self, function)

    def __call__(self, *args, **kwargs):
        return self.function(self.instance, *args, **kwargs)

    def __repr__(self):
        return f'<bound optionalselfmethod {self.__name__} of {self.instance}>'
When you decorate a function with optionalselfmethod, the function is replaced with our proxy. This proxy saves the original function and supplies a __get__ method which returns a boundoptionalselfmethod. When we create a boundoptionalselfmethod, we tell it both the function to call and the value to pass as self. Finally, calling the boundoptionalselfmethod calls the original function, but with the instance or None inserted as the first parameter.
Specific Questions
Was making a method this way (without args while not explicitly
decorated as staticmethods) intentional in Python 3's design? UPDATED
I believe this was intentional; however the intent would have been to eliminate unbound methods. In both Python 2 and Python 3, def always creates a function (you can see this by checking a type's __dict__: even though Test.instance_mthd comes back as <unbound method Test.instance_mthd>, Test.__dict__['instance_mthd'] is still <function instance_mthd at 0x...>).
In Python 2, a function's __get__ method always returns an instancemethod, even when accessed through the class. When accessed through an instance, the method is bound to that instance. When accessed through the class, the method is unbound, and includes a mechanism which checks that the first argument is an instance of the correct class.
In Python 3, function's __get__ method will return the original function unchanged when accessed through the class, and a method when accessed through the instance.
I don't know the exact rationale but I would guess that type-checking of the first argument to a class-level function was deemed unnecessary, maybe even harmful; Python allows duck-typing after all.
Among the classic method types, what type of method is unknown_mthd?
unknown_mthd is a plain function, just like any normal instance method. It only fails when called through an instance, because when method.__call__ attempts to call the underlying function with the bound instance, unknown_mthd doesn't accept enough parameters to receive the instance argument.
Why can unknown_mthd be called by the class without passing an
argument?
Because it's just a plain function, same as any other function. It just doesn't take enough arguments to work correctly when used as an instance method.
You may note that both classmethod and staticmethod work the same whether they're called through an instance or a class, whereas unknown_mthd will only work correctly when called through the class and will fail when called through an instance.
1. If a particular name has both a value in the instance dict and a descriptor in the class dict, which one is used depends on what kind of descriptor it is. If the descriptor only defines __get__, the value in the instance dict is used. If the descriptor also defines __set__, then it's a data descriptor and the descriptor always wins. This is why you can assign over the top of a method but not a @property: methods only define __get__, so you can put things in the same-named slot in the instance dict, while @property objects define __set__, so even if they're read-only you'll never get a value from the instance __dict__, even if you've previously bypassed the property lookup and stuck a value in the dict with e.g. x.__dict__['whatever'] = 3.
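A short illustration of that footnote (the Gadget class and its attributes are made up):
>>> class Gadget:
...     @property
...     def prop(self):
...         return 'from the property'
...     def meth(self):
...         return 'from the method'
...
>>> g = Gadget()
>>> g.__dict__['meth'] = lambda: 'shadowed'   # non-data descriptor: instance dict wins
>>> g.meth()
'shadowed'
>>> g.__dict__['prop'] = 'shadowed'           # data descriptor: the property still wins
>>> g.prop
'from the property'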

How does a python method automatically receive 'self' as the first argument?

Consider this example of a strategy pattern in Python (adapted from the example here). In this case the alternate strategy is a function.
class StrategyExample(object):
    def __init__(self, strategy=None):
        if strategy:
            self.execute = strategy

    def execute(*args):
        # I know that the first argument for a method
        # must be 'self'. This is just for the sake of
        # demonstration
        print locals()

# alternate strategy is a function
def alt_strategy(*args):
    print locals()
Here are the results for the default strategy.
>>> s0 = StrategyExample()
>>> print s0
<__main__.StrategyExample object at 0x100460d90>
>>> s0.execute()
{'args': (<__main__.StrategyExample object at 0x100460d90>,)}
In the above example s0.execute is a method (not a plain vanilla function) and hence the first argument in args, as expected, is self.
Here are the results for the alternate strategy.
>>> s1 = StrategyExample(alt_strategy)
>>> s1.execute()
{'args': ()}
In this case s1.execute is a plain vanilla function and, as expected, does not receive self. Hence args is empty. Wait a minute! How did this happen?
Both the method and the function were called in the same fashion. How does a method automatically get self as the first argument? And when a method is replaced by a plain vanilla function how does it not get the self as the first argument?
The only difference that I was able to find was when I examined the attributes of default strategy and alternate strategy.
>>> print dir(s0.execute)
['__cmp__', '__func__', '__self__', ...]
>>> print dir(s1.execute)
# does not have __self__ attribute
Does the presence of __self__ attribute on s0.execute (the method), but lack of it on s1.execute (the function) somehow account for this difference in behavior? How does this all work internally?
You can read the full explanation here in the python reference, under "User defined methods". A shorter and easier explanation can be found in the python tutorial's description of method objects:
If you still don’t understand how methods work, a look at the implementation can perhaps clarify matters. When an instance attribute is referenced that isn’t a data attribute, its class is searched. If the name denotes a valid class attribute that is a function object, a method object is created by packing (pointers to) the instance object and the function object just found together in an abstract object: this is the method object. When the method object is called with an argument list, a new argument list is constructed from the instance object and the argument list, and the function object is called with this new argument list.
Basically, what happens in your example is this:
a function assigned to a class (as happens when you declare a method inside the class body) is... a method.
When you access that method through the class, eg. StrategyExample.execute you get an "unbound method": it doesn't "know" to which instance it "belongs", so if you want to use that on an instance, you would need to provide the instance as the first argument yourself, eg. StrategyExample.execute(s0)
When you access the method through the instance, eg. self.execute or s0.execute, you get a "bound method": it "knows" which object it "belongs" to, and will get called with the instance as the first argument.
a function that you assign directly to an instance attribute, however, as in self.execute = strategy or even s0.execute = strategy, is... just a plain function (unlike a method, it doesn't pass via the class)
To get your example to work the same in both cases:
either you turn the function into a "real" method: you can do this with types.MethodType:
self.execute = types.MethodType(strategy, self, StrategyExample)
(you more or less tell the class that when execute is asked for this particular instance, it should turn strategy into a bound method)
or - if your strategy doesn't really need access to the instance - you go the other way around and turn the original execute method into a static method (making it a normal function again: it won't get called with the instance as the first argument, so s0.execute() will do exactly the same as StrategyExample.execute()):
@staticmethod
def execute(*args):
    print locals()
You need to assign an unbound method (i.e. with a self parameter) to the class or a bound method to the object.
Via the descriptor mechanism you can make your own bound methods; it's also why it works when you assign the (unbound) function to a class:
my_instance = MyClass()
MyClass.my_method = my_method
When calling my_instance.my_method(), the lookup will not find an entry on my_instance, which is why it will at a later point end up doing this: MyClass.my_method.__get__(my_instance, MyClass) - this is the descriptor protocol. This returns a new method bound to my_instance, which you then call with the () operator after the attribute lookup.
This shares the method among all instances of MyClass, no matter when they were created. However, an instance could have "hidden" the method with its own attribute before you assigned it to the class.
If you only want specific objects to have that method, just create a bound method manually:
my_instance.my_method = my_method.__get__(my_instance, MyClass)
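Putting those fragments together in a self-contained session (MyClass and my_method are the same placeholders as above):
>>> class MyClass(object):
...     pass
...
>>> def my_method(self, x):
...     return (self, x)
...
>>> MyClass.my_method = my_method        # class-level assignment: bound on access
>>> inst = MyClass()
>>> inst.my_method(1)
(<__main__.MyClass object at 0x...>, 1)
>>> loner = MyClass()
>>> loner.my_method = my_method.__get__(loner, MyClass)   # per-instance bound method
>>> loner.my_method(2)
(<__main__.MyClass object at 0x...>, 2)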
For more detail about descriptors (a guide), see here.
The method is a wrapper for the function, and calls the function with the instance as the first argument. Yes, it contains a __self__ attribute (also im_self in Python prior to 3.x) that keeps track of which instance it is attached to. However, adding that attribute to a plain function won't make it a method; you need to add the wrapper. Here is how (although you may want to use MethodType from the types module to get the constructor, rather than using type(some_obj.some_method)).
The function wrapped, by the way, is accessible through the __func__ (or im_func) attribute of the method.
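For example, using the StrategyExample class from the question (Python 2 session):
>>> s0 = StrategyExample()
>>> s0.execute.__self__ is s0
True
>>> s0.execute.__func__
<function execute at 0x...>
>>> s0.execute.__func__(s0)   # what the bound method does for you behind the scenes
{'args': (<__main__.StrategyExample object at 0x...>,)}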
When you do self.execute = strategy you set the attribute to a plain function:
>>> s = StrategyExample()
>>> s.execute
<bound method StrategyExample.execute of <__main__.StrategyExample object at 0x1dbbb50>>
>>> s2 = StrategyExample(alt_strategy)
>>> s2.execute
<function alt_strategy at 0x1dc1848>
A bound method is a callable object that calls a function passing an instance as the first argument in addition to passing through all arguments it was called with.
See: Python: Bind an Unbound Method?
