Hello, I've recently been looking into metaclasses in Python.
I learned that we can attach a metaclass to a class in this way:
class Metaclass(type):
...
class MyClass(metaclass=Metaclass):
...
The first question
I wonder what the principle of the keyword argument (metaclass=...) is in the above example. There seems to be no description of it in the docs.
I know that functions take keyword arguments, but I've never seen this form appear in a class's inheritance list.
The second question
We know that we can create a class using the type() function in this way:
cls = type(class_name, (base_class,), {func_name: func_ref})
Considering the metaclass=... form, how can I pass a metaclass when creating a class using type() instead of the class keyword?
Thanks stackoverflowers!
In general, the keyword arguments used with a class statement are passed to __init_subclass__, to be used however that function sees fit. metaclass, however, is used by the code generator itself to determine what gets called in order to define the class.
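A quick sketch of that first mechanism (the `flavor` keyword here is made up for illustration; `__init_subclass__` requires Python 3.6+):

```python
class Base:
    # extra class-statement keywords (other than metaclass) arrive here
    def __init_subclass__(cls, flavor=None, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.flavor = flavor

class Child(Base, flavor="vanilla"):
    pass

print(Child.flavor)  # vanilla
```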
The metaclass is not an argument to be passed to a call to type; it is the callable that gets called instead of type. Very roughly speaking,
class MyClass(metaclass=Metaclass):
...
is turned into
MyClass = Metaclass('MyClass', (), ...)
type is simply the default metaclass.
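To make that equivalence concrete (and to answer the second question: with an explicit call, you invoke the metaclass itself rather than type), a rough sketch:

```python
# these two definitions produce equivalent classes:
class A(metaclass=type):
    x = 1

B = type("B", (), {"x": 1})
print(A.x, B.x)  # 1 1

# with a custom metaclass, call the metaclass instead of type:
class Meta(type):
    pass

C = Meta("C", (), {})   # same as: class C(metaclass=Meta): pass
print(type(C) is Meta)  # True
```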
(The gory details of how a class statement is executed can be found in Section 3.3.3 of the language documentation.)
Note that a metaclass does not have to be a subclass of type. You can do truly horrific things with metaclasses, like
>>> class A(metaclass=lambda x, y, z: 3): pass
...
>>> type(A)
<class 'int'>
>>> A
3
It doesn't really matter what the metaclass returns, as long as it accepts 3 arguments.
Please don't write actual code like this, though. A class statement should produce something that at least resembles a type.
In some sense, class statements are less magical than other statements. def statements, for example, can't be customized to produce something other than instances of function, and it's far harder to create a function object by calling types.FunctionType explicitly.
Yes, I've looked at this, but it doesn't say anything about the principle of the keyword argument in the class inheritance list; I am still confused.
metaclass=... is not about inheritance.
Everything in Python is an object. So when you define a class MyClass, not only is MyClass() an object but the class MyClass itself is an object.
MyClass() is an instance of MyClass, simple enough.
But if MyClass is also an object, what class is it an instance of?
If you do nothing, the object (!) MyClass is an instance of the class type.
If you do class MyClass(metaclass=Metaclass) then MyClass is an instance of Metaclass.
Consider
class Metaclass(type):
    pass

class MyClass1:
    pass

class MyClass2(metaclass=Metaclass):
    pass
Both MyClass1 and MyClass2 inherit from object and only object.
>>> MyClass1.__mro__
(<class '__main__.MyClass1'>, <class 'object'>)
>>> MyClass2.__mro__
(<class '__main__.MyClass2'>, <class 'object'>)
But their types differ:
>>> type(MyClass1)
<class 'type'>
>>> type(MyClass2)
<class '__main__.Metaclass'>
When would you need this? Usually, you don't. And when you do, you do the kind of metaprogramming where you already know the why and how.
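For what it's worth, here is one common pattern (a hypothetical plugin registry, not something from the question) where a metaclass earns its keep:

```python
class RegistryMeta(type):
    registry = {}

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        # record every class created with this metaclass
        RegistryMeta.registry[name] = cls
        return cls

class PluginA(metaclass=RegistryMeta):
    pass

class PluginB(metaclass=RegistryMeta):
    pass

print(sorted(RegistryMeta.registry))  # ['PluginA', 'PluginB']
```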
I was trying to get some intuition about metaclasses in Python. I have tried this on both Python 2.7 and Python 3.5. In Python 3.5 I found that every class we define is of type <class 'type'>, whether or not it explicitly inherits from type. But if it does not inherit from type, we can't use that class as a metaclass for another class.
>>> class foo:
...     pass
...
>>> class Metafoo(type):
...     pass
...
>>> foo
<class '__main__.foo'>
>>> Metafoo
<class '__main__.Metafoo'>
>>> type(foo)
<class 'type'>
>>> type(Metafoo)
<class 'type'>
>>>
>>> class foocls1(metaclass=foo):
...     pass
...
Doing the above I get the following error:
Traceback (most recent call last):
File "<pyshell#52>", line 1, in <module>
class foocls1(metaclass=foo):
TypeError: object() takes no parameters
But that is not the case when using Metafoo as the metaclass for the new class:
>>> class foocls3(metaclass=Metafoo):
...     pass
...
>>> foocls3
<class '__main__.foocls3'>
>>> type(foocls3)
<class '__main__.Metafoo'>
Can anyone explain why we need to explicitly inherit from type if we want to use a class as the metaclass of another class?
"type" is the base class for all class objects in Python 3, and in Python 2 post version 2.2. (It is just that on Python 2, you are supposed to inherit from object. Classes that do not explicitly inherit from "object" in Python 2 were called "old style classes", and kept for backward compatibility purposes, but were of little use.)
So, what happens is that inheritance - what the superclasses of a class are - and the metaclass - since in Python a class is itself an object, what is the class of that object - are two different things. The classes you inherit from define the order of attribute (and method) lookup on the instances of your class, and therefore give you common behaviour.
The metaclass is rather "the class your class is built with", and although it can serve other purposes, it is most often used to modify the construction step of your classes themselves. Search around and you will see metaclasses mostly implementing the __new__ and __init__ methods. (It is OK to have other methods there too, but you have to know what you are doing.)
It happens that to build a class, some actions are needed that are not required to build normal objects. These actions are performed at the native-code layer (C in CPython) and are not even reproducible in pure Python code, like populating the class's special method slots - the pointers to the functions that implement __add__, __eq__ and such methods. The only class that does that in CPython is type. So any code you want to use to build a class, i.e. as a metaclass, will at some point have to call the type.__new__ method. (Just as anything that creates a new object in Python will at some point call the code in object.__new__.)
Your error happened not because Python checks ahead of time whether you made a direct or indirect call to type.__new__. The error "TypeError: object() takes no parameters" is simply due to the fact that the __new__ method of the metaclass is passed 3 parameters (name, bases, namespace), while the same method on object is passed no parameters. (Both get the extra cls, equivalent to self, as well, but this is not counted.)
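The signature mismatch can be demonstrated directly (a sketch; the exact TypeError wording varies between Python versions):

```python
# type.__new__ accepts the (name, bases, namespace) triple:
C = type.__new__(type, "C", (), {})
print(C.__name__)  # C

# object.__new__ does not, which is what the metaclass=foo error boils down to:
try:
    object.__new__(object, "C", (), {})
except TypeError as exc:
    print("TypeError raised:", exc)
```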
You can use any callable as the metaclass, even an ordinary function. It just has to take the 3 explicit parameters. Whatever this callable returns is used as the class from that point on, but if at some point you don't call type.__new__ (even indirectly), you won't have a valid class to return.
For example, one could create a simple function just in order to be able to use class bodies as dictionary declarations:
def dictclass(name, bases, namespace):
    return namespace

class mydict(metaclass=dictclass):
    a = 1
    b = 2
    c = 3

mydict["a"]
So, one interesting fact in all this is that type is its own metaclass (that is hardcoded in the Python implementation). But type itself also inherits from object:
In [21]: type.__class__
Out[21]: type
In [22]: type.__mro__
Out[22]: (type, object)
And just to end this: it is possible to create classes without calling type.__new__ and objects without calling object.__new__, but not ordinarily from pure Python code, as data structures at the C-API level have to be filled for both actions. One could either do a native-code implementation of functions to do it, or hack it with ctypes.
I'm reading through this blog, which contains:
Since in Python everything is an object, everything is the instance of a class, even classes. Well, type is the class that is instanced to get classes. So remember this: object is the base of every object, type is the class of every type. Sounds puzzling? It is not your fault, don't worry. However, just to strike you with the finishing move, this is what Python is built on.
>>> type(object)
<class 'type'>
>>> type.__bases__
(<class 'object'>,)
I'm having trouble understanding this. Can anyone explain this relationship in a different way to make it clearer?
type(x) is basically the same as x.__class__:
for obj in (object, type, 1, str):
    assert type(obj) is obj.__class__
print("type(obj) and obj.__class__ gave the same results in all test cases")
__bases__ represents the bases that a class is derived from:
class parent:
    pass

class child(parent, int):
    pass

print(child.__bases__)  # prints (<class '__main__.parent'>, <class 'int'>)
However, if you are asking about the odd relationship between object and type:
# are `object` and `type` both instances of the `object` class?
>>> isinstance(object, object) and isinstance(type, object)
True
# are they also instances of the `type` class?
>>> isinstance(object, type) and isinstance(type, type)
True
# `type` is a subclass of `object` and not the other way around (this makes sense)
>>> [issubclass(type, object), issubclass(object, type)]
[True, False]
That is more of a chicken-vs-egg question: which came first?
The answer is PyObject, which is defined in C.
Before either object or type is available to the Python interpreter, their underlying mechanisms are defined in C, and the instance checking is overridden after they are defined. (They act like abstract classes; see PEP 3119.)
You can consider it something like this Python implementation:
# this wouldn't be available in python
class superTYPE(type):
    def __instancecheck__(cls, inst):
        if inst == TYPE:
            return True
        else:
            return NotImplemented  # for this demo

class TYPE(type, metaclass=superTYPE):
    def __instancecheck__(cls, inst):
        if inst in (OBJECT, TYPE):
            return True
        else:
            return NotImplemented  # for this demo

class OBJECT(metaclass=TYPE):
    pass

# these all pass
assert isinstance(TYPE, OBJECT)
assert isinstance(OBJECT, TYPE)
assert isinstance(TYPE, TYPE)
assert isinstance(OBJECT, OBJECT)
actually it may be better represented as:
# this isn't available in python
class superTYPE(type):
    def __instancecheck__(cls, inst):
        if inst in (TYPE, OBJECT):
            return True
        else:
            return NotImplemented  # for this demo

class OBJECT(metaclass=superTYPE):
    pass

class TYPE(OBJECT):
    pass
but again if you want to know exactly how it works you would need to look at the source code written in C.
TL;DR - probably not. But I tried.
This is really weird and feels like turtles all the way down. I've actually not delved into this arena very much before, though it's something that sounded fun and powerful. This explanation was confusing, and so was the rest of the information on that page, but I feel like I have some enlightenment. Whether or not I can explain that clearly, I'm not sure, but I'll have a go.
Let's look at the turtles, first:
>>> isinstance(type, object)
True
>>> isinstance(object, type)
True
Wait, what?
How is object an instance of type, when type is an instance of object? That feels like saying something like:
class Parrot: pass
ex = Parrot()
isinstance(ex, Parrot)
isinstance(Parrot, ex)
Should be True both times. But obviously it's not. Even (as Tadhg McDonald-Jensen pointed out)
>>> isinstance(type, type)
True
This should indicate to you that there is some magic going on behind the scenes. So at this point, let's just completely forget about Python (I know, why would we ever want to do such a horrible thing?)
In general, all computer programs are just 1's and 0's (more accurately, they're just a bunch of logic gates and electrons at >~2.5v and <~2.5v, but 0's and 1's are good enough). Whether you wrote it in assembly, actual machine code, Python, C#, Java, Perl, whatever - they're all just bits.
If you write a class definition, that class is just bits. An instance of that class is just more bits. And a programming language and a compiler and an interpreter is just even more bits.
In the case of Python, it's the python interpreter that gives meaning to the bits that are our Python programs. As an interesting point, a lot of what we typically consider to be Python is actually written in Python (though most of it is C, for us CPython folks, Java for Jython, etc.).
So now we come to this thing we call type and object. As the article points out, they're kind of special. So, we know that we can create a class, and then that class is an object:
>>> class Confusion: pass
...
>>> isinstance(Confusion, object)
True
Which makes sense, if you think about it - you may have created class-level variables:
>>> class Counter:
... count = 0
... def __init__(self):
... Counter.count += 1
... print(self.count)
...
>>> Counter()
1
<__main__.Counter object at 0x7fa03fca4518>
>>> Counter()
2
<__main__.Counter object at 0x7fa03fca4470>
>>> Counter()
3
<__main__.Counter object at 0x7fa03fca4518>
>>> Counter()
4
<__main__.Counter object at 0x7fa03fca4470>
>>> Counter.count
4
>>> Counter.__repr__(Counter)
'<type object at 0x1199738>'
But as this last example shows (and as is mentioned in the post), a class declaration - what you get with class SomeClass: pass - is actually an instance of another class. In particular, it's an instance of the type class. And that instance (which we call a class), when called, will produce an instance of itself:
>>> Counter.__call__()
5
<__main__.Counter object at 0x7fa03fca4518>
So what does all this have to do with the relationship between type and object?
Well, somewhere, python creates a series of bits that is object, and a series of bits that is type, and then wires them together in such a way that
>>> type.__bases__
(<class 'object'>,)
>>> object.__bases__
()
Because I currently don't feel like looking through the source, I'm going to guess that type is created first, that object is produced from it, and that type.__bases__ is then set to (<class 'object'>,). By creating this circular relationship between type and object, it gives the appearance of turtles all the way down, when really the last two turtles are just standing on top of each other.
I don't think there's really a better way to explain what's going on here than how the article describes it - at least in a classical OOP is-a/has-a style of thinking, because it's not actually that sort of thing. Like trying to plot a 3d figure in 2d space, you're going to have problems.
It's just two sets of bits that have some bits inside them that happen to be the address of one another.
Relationships
The relationship between type() and object() is tightly interwoven.
Each is an instance of the other:
>>> isinstance(object, type)
True
>>> isinstance(object, object)
True
>>> isinstance(type, type)
True
>>> isinstance(type, object)
True
With respect to subclassing, a type is a kind of object, but not vice versa:
>>> issubclass(object, type)
False
>>> issubclass(type, object)
True
Mechanics
I think of object() as providing the baseline capabilities for all objects:
>>> dir(object)
['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__']
The only connection to type() is that __class__ is set to type(); otherwise the latter is not used at all:
>>> object.__class__
<class 'type'>
Then type() inherits from object(), but overrides several critical methods for creating classes:
for methname in dir(type):
    if 'type' in repr(getattr(type, methname, '')):
        print(methname)
__call__
__class__
__delattr__
__dict__
__dir__
__doc__
__getattribute__
__init__
__init_subclass__
__instancecheck__
__mro__
__name__
__new__
__or__
__prepare__
__qualname__
__repr__
__ror__
__setattr__
__sizeof__
__subclasscheck__
__subclasses__
__subclasshook__
mro
With these methods and attributes, type can now create classes. Classes aren't really special. They're just instances of object with methods suitable for making subclasses and instances.
You can sum up all the mystifying terminology and relationships in these 2 sentences:
Every class defines methods that act on its instances; the behaviour of classes themselves is defined in the type class.
Behaviour that is common to all objects - the kind of thing you usually just write off as built-in - is defined by object, so all objects share those behaviours (unless a subclass overrides it :)
The rest of this answer will try to go in depth with a few examples using stuff in python that is useful to be aware of, addressing each part of the description in small chunks.
In Python everything is an object
Practically this means operations that are standard across all python objects will be defined in object, things like attribute lookup and getting the size of an object in memory. Concretely (but not as useful) it also implies:
the statement isinstance(x, object) will always give True for any possible value x.
This also means that any class that is defined will be considered a subclass of object. So issubclass(x, object) is always True for any class x.
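Both facts can be checked directly:

```python
class Anything:
    pass

# every value is an instance of object:
print(isinstance(42, object), isinstance("s", object), isinstance(Anything(), object))  # True True True

# every class is itself an object, and a subclass of object:
print(isinstance(Anything, object), issubclass(Anything, object), issubclass(int, object))  # True True True
```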
I want to take a really quick detour to how python converts stuff to strings, this is from the documentation:
object.__repr__(self)
Called by the repr() built-in function to
compute the “official” string representation of an object. If at all
possible, this should look like a valid Python expression. [...] If a class defines __repr__() but not
__str__(), then __repr__() is also used when an “informal” string representation of instances of that class is required.
This is typically used for debugging, so it is important that the
representation is information-rich and unambiguous.
object.__str__(self)
[...] to compute the “informal” or nicely printable
string representation of an object. [...]
The default implementation defined by the built-in type object calls
object.__repr__().
For beginners in Python, knowing that both str and repr exist is really useful: when you print data it uses str, and when you evaluate things on the command line it uses repr, which is most noticeable with strings:
>>> x = "123"
>>> print(x) # prints string as is which since it contains digits is misleading
123
>>> x # shows "official" representation of x with quotes to indicate it is a string
'123'
>>> print(repr(x)) # from a script you can get this behaviour by calling repr()
'123'
Where this relates to our conversation about object is the sentence "The default implementation [for __str__] defined by the built-in type object calls object.__repr__()." That means that if we define a __repr__ method, then when we print the object we get it as well:
class Test:
    def __init__(self, x):
        self.x = x
    def __repr__(self):
        return "Test(x={})".format(repr(self.x))

x = Test("hi")  # make a new object
print(x)        # prints Test(x='hi')

# and the process it goes through is this chain:
assert ( str(x)             # converting to a string
      == Test.__str__(x)    # is the same as calling __str__ on the class
      == object.__str__(x)  # which falls back to the method defined in the superclass
      == repr(x)            # which calls repr(x) (according to those docs)
      == Test.__repr__(x)   # which calls the __repr__ method on the class
)
For the most part we don't care about this much detail - we just care that python can behave reasonably when we print out our data. The only part that really matters is that that default reasonable behaviour is defined inside of object!
type is the class that is instanced to get classes
type is the class of every type
so in the same way that int defines what 1+3 should do or str defines methods for strings, type defines behaviour that is specific to type objects. For example calling a class object (like int("34")) will create a new instance of that class - this behaviour of creating new objects is defined in type.__call__ method. For completion we have the technical implications:
We could say "1 is an int" which in code translates to isinstance(1,int) == True. Similarly we could say "int is a type" which translates to isinstance(int, type) == True.
all classes are considered instances of type. So isinstance(x, type) will be true for all classes. When I say classes I mean things like int, str, bool or the variable created by the class keyword.
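Those two implications, in code:

```python
# "1 is an int" and "int is a type":
assert isinstance(1, int)
assert isinstance(int, type)

# calling a class is routed through type.__call__, so invoking it
# explicitly produces the same instance-creation behaviour:
print(int("34"))                 # 34
print(type.__call__(int, "34"))  # 34
```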
everything is an object - even classes.
This means that the standard behaviour that exists for all objects also applies to class objects. So writing str.join looks up the join method with the same attribute lookup as for every other object, which makes sense. (Right now I'm not calling the method, just accessing the attribute.)
As a more concrete example we can look at str.join which is an arbitrarily selected method on a familiar data type (strings) and its duality with type.mro which is a method on type objects in the same way:
>>> x = "hello"
>>> str.join # unbound method of strings
<method 'join' of 'str' objects>
>>> x.join #bound method of x
<built-in method join of str object at 0x109bf23b0>
>>> hex(id(x)) # the memory address of x as seen above
'0x109bf23b0'
>>> type.mro #unbound method
<method 'mro' of 'type' objects>
>>> int.mro #mro method bound to int
<built-in method mro of type object at 0x106afeca0>
>>> hex(id(int)) # address of int object as seen above
'0x106afeca0'
>>> int.mro() #mro stands for Method Resolution Order, is related to __bases__
[<class 'int'>, <class 'object'>]
So at this point I'd recommend you go back and re-read the 2 statements I made at the beginning of this answer and will hopefully feel more confident in believing it.
P.S. In the same way you can make a subclass of str to create special string objects that have different behaviour, you can create a subclass of type to create special classes that have different behaviour. This is called a meta-class (the class of a class object) and the practical applications of using meta-classes are usually abstract. (pun intended)
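A small sketch of that parallel (both subclasses here, `Shout` and `CountingMeta`, are made-up names for illustration):

```python
# a str subclass makes special strings:
class Shout(str):
    def exclaim(self):
        return self.upper() + "!"

print(Shout("hi").exclaim())  # HI!

# a type subclass (a metaclass) makes special classes; this one
# counts how many instances each of its classes has created:
class CountingMeta(type):
    def __call__(cls, *args, **kwargs):
        cls.instances = getattr(cls, "instances", 0) + 1
        return super().__call__(*args, **kwargs)

class Widget(metaclass=CountingMeta):
    pass

Widget()
Widget()
print(Widget.instances)  # 2
```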
Today I defined a class just for testing what comes next:
class B(object):
    def p(self):
        print("p")
And later I did this:
>>> type(B.__dict__['p'])
<type 'function'>
>>> type(B.p)
<type 'instancemethod'>
So, why? Aren't B.p and B.__dict__['p'] the same object?
My astonishment just increased when a I tried this:
>>> B.__dict__['p']
<function p at 0x3d2bc80>
>>> type(B.__dict__['p'])
<type 'function'>
Ok, so far so good, the type is in both results function, but when I tried:
>>> B.p
<unbound method B.p>
>>> type(B.p)
<type 'instancemethod'>
What?! Why? Unbound method and instancemethod? Are those the same? Why two different names?
Well, it seems Python is full of surprises!
And this is the python I'm using:
Python 2.7.4 (default, Sep 26 2013, 03:20:26)
[GCC 4.7.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
This is a very nice question.
To clear a few things:
When a method is accessed through its class (e.g. B.p), it is said to be unbound.
When the method is accessed through an instance of its class (e.g. B().p), it is said to be bound.
The main difference between bound and unbound is that, when bound, the first argument is implicitly the instance of the calling class. That's why we have to add self as the first argument to every method when we define it. When unbound, you have to explicitly pass the instance of the class on which you want to apply the logic of the method.
For example:
class B(object):
    def foo(self):
        print 'ok'
>>>B().foo()
ok
>>>B.foo()
Exception, missing an argument.
>>>B.foo(B())
ok
This is a basic explanation of bound and unbound. Now regarding the __dict__ weirdness: any object in Python can define __get__ and __set__ methods, which control access to it when it is an attribute of a class. These are called descriptors.
In simpler words, when you access an attribute (property) of a class through its instance or the class itself, Python doesn't return the object directly; instead it calls a __get__ or __set__ method that in turn returns a convenient object to work with.
Functions in Python override this __get__ method. So when you issue B.foo or B().foo it returns an instancemethod type, which is a wrapper around the function type (a wrapper that passes self as the first argument implicitly). When you access the function through the raw class dictionary, there's no call to __get__ because you're not accessing it as a property of a class, hence the return value is the raw function.
There's a lot to say about this topic; I tried to give a very simple answer to such a clever question. You can find definitive information in Guido's blog article The Inside Story on New-Style Classes, highly recommended.
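In Python 3 terms (where unbound methods are gone and dict access still gives you the plain function), the descriptor machinery can be poked at like this:

```python
class B:
    def p(self):
        return "p"

raw = B.__dict__['p']  # dict access: no descriptor protocol, plain function
obj = B()
bound = obj.p          # attribute access: function.__get__ builds a bound method

print(type(raw).__name__)    # function
print(type(bound).__name__)  # method

# invoking the descriptor protocol by hand yields an equivalent bound method:
manual = B.__dict__['p'].__get__(obj, B)
print(manual() == bound() == "p")  # True
```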
UPDATE: About your last example:
>>> B.p
<unbound method B.p>
>>> type(B.p)
<type 'instancemethod'>
Notice that in Python's interpreter, >>> B.p doesn't actually print the type of the object; instead it prints the object's __repr__ method. You can check that by doing >>> print B.p.__repr__() and seeing that it's the same result :)
Python is full of indirection and delegation; that's what makes it so flexible.
Hope this clarifies things a bit.
So, why? Aren't B.p and B.__dict__['p'] the same object?
No. Attribute access is magic; when a user-defined method is accessed by attribute name, either a bound method or an unbound method is returned. (This changes in Python 3: there are no longer unbound methods at all; you get a regular function if you access an instance method through the class object.) Accessing directly through the dict bypasses this magic.
What?! Why? Unbound method and instancemethod? Are those the same? Why two different names?
A user-defined method is of type instancemethod unless it was defined with the @classmethod or @staticmethod decorator. It can either be a bound instancemethod (when accessed as an attribute of an instance) or an unbound method (when accessed as an attribute of the class).
(See the "User-defined methods" section of http://docs.python.org/2/reference/datamodel.html for an explanation.)
So, why? Aren't B.p and B.__dict__['p'] the same object?
They aren't, obviously. Read this: https://wiki.python.org/moin/FromFunctionToMethod for a more in-depth explanation.
Best Guess:
method - def(self, maybeSomeVariables); lines of code which achieve some purpose
Function - same as method but returns something
Class - group of methods/functions
Module - a script, OR one or more classes. Basically a .py file.
Package - a folder which has modules in, and also a __init__.py file in there.
Suite - Just a word that gets thrown around a lot, by convention
TestCase - unittest's equivalent of a function
TestSuite - unittest's equivalent of a Class (or Module?)
My question is: Is this completely correct, and did I miss any hierarchical building blocks from that list?
I feel that you're putting in differences that don't actually exist. There isn't really a hierarchy as such. In python everything is an object. This isn't some abstract notion, but quite fundamental to how you should think about constructs you create when using python. An object is just a bunch of other objects. There is a slight subtlety in whether you're using new-style classes or not, but in the absence of a good reason otherwise, just use and assume new-style classes. Everything below is assuming new-style classes.
If an object is callable, you can call it using the calling syntax of a pair of parentheses, with the arguments inside them: my_callable(arg1, arg2). To be callable, an object needs to implement the __call__ method (or else have the correct field set in its C-level type definition).
In python an object has a type associated with it. The type describes how the object was constructed. So, for example, a list object is of type list and a function object is of type function. The types themselves are of type type. You can find the type by using the built-in function type(). A list of all the built-in types can be found in the python documentation. Types are actually callable objects, and are used to create instances of a given type.
Right, now that's established: the nature of a given object is defined by its type. This describes the objects of which it is composed. Coming back to your questions then:
Firstly, the bunch of objects that make up some object are called the attributes of that object. These attributes can be anything, but they typically consist of methods and some way of storing state (which might be types such as int or list).
A function is an object of type function. Crucially, that means it has the __call__ method as an attribute which makes it a callable (the __call__ method is also an object that itself has the __call__ method. It's __call__ all the way down ;)
A class, in the python world, can be considered as a type, but typically is used to refer to types that are not built-in. These objects are used to create other objects. You can define your own classes with the class keyword, and to create a class which is new-style you must inherit from object (or some other new-style class). When you inherit, you create a type that acquires all the characteristics of the parent type, and then you can overwrite the bits you want to (and you can overwrite any bits you want!). When you instantiate a class (or more generally, a type) by calling it, another object is returned which is created by that class (how the returned object is created can be changed in weird and crazy ways by modifying the class object).
A method is a special type of function that is called using the attribute notation. That is, when it is created, 2 extra attributes are added to the method (remember it's an object!) called im_self and im_func. im_self I will describe in a few sentences. im_func is a function that implements the method. When the method is called, like, for example, foo.my_method(10), this is equivalent to calling foo.my_method.im_func(im_self, 10). This is why, when you define a method, you define it with the extra first argument which you apparently don't seem to use (as self).
When you write a bunch of methods when defining a class, these become unbound methods. When you create an instance of that class, those methods become bound. When you call a bound method, the im_self argument is added for you as the object on which the bound method resides. You can still call the unbound method of the class, but you need to explicitly add the class instance as the first argument:
class Foo(object):
    def bar(self):
        print self
        print self.bar
        print self.bar.im_self  # prints the same as self
We can show what happens when we call the various manifestations of the bar method:
>>> a = Foo()
>>> a.bar()
<__main__.Foo object at 0x179b610>
<bound method Foo.bar of <__main__.Foo object at 0x179b610>>
<__main__.Foo object at 0x179b610>
>>> Foo.bar()
TypeError: unbound method bar() must be called with Foo instance as first argument (got nothing instead)
>>> Foo.bar(a)
<__main__.Foo object at 0x179b610>
<bound method Foo.bar of <__main__.Foo object at 0x179b610>>
<__main__.Foo object at 0x179b610>
Bringing all the above together, we can define a class as follows:
class MyFoo(object):
    a = 10
    def bar(self):
        print self.a
This generates a class with 2 attributes: a (which is an integer of value 10) and bar, which is an unbound method. We can see that MyFoo.a is just 10.
We can create extra attributes at run time, both within the class methods, and outside. Consider the following:
class MyFoo(object):
    a = 10
    def __init__(self):
        self.b = 20
    def bar(self):
        print self.a
        print self.b
    def eep(self):
        print self.c
__init__ is just the method that is called immediately after an object has been created from a class.
>>> foo = MyFoo()
>>> foo.bar()
10
20
>>> foo.eep()
AttributeError: 'MyFoo' object has no attribute 'c'
>>> foo.c = 30
>>> foo.eep()
30
This example shows 2 ways of adding an attribute to a class instance at run time (that is, after the object has been created from its class).
I hope you can see then, that TestCase and TestSuite are just classes that are used to create test objects. There's nothing special about them except that they happen to have some useful features for writing tests. You can subclass and overwrite them to your heart's content!
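A minimal sketch with the standard unittest module, showing there's no special machinery beyond ordinary classes and instances:

```python
import unittest

class MyTests(unittest.TestCase):  # just a class...
    def test_addition(self):       # ...whose test_* methods become the test cases
        self.assertEqual(1 + 1, 2)

# a TestSuite is likewise just an object that groups test cases:
suite = unittest.TestLoader().loadTestsFromTestCase(MyTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun, result.wasSuccessful())  # 1 True
```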
Regarding your specific point, both methods and functions can return anything they want.
Your description of module, package and suite seems pretty sound. Note that modules are also objects!