I'm used to the fact that in Objective-C I have this construct:
- (instancetype)init {
    if (self = [super init]) {
        // init class
    }
    return self;
}
Should Python also call the parent class's implementation for __init__?
class NewClass(SomeOtherClass):
    def __init__(self):
        SomeOtherClass.__init__(self)
        # init class
Is this also true/false for __new__() and __del__()?
Edit: There's a very similar question: Inheritance and Overriding __init__ in Python
If you need something from super's __init__ to be done in addition to what is being done in the current class's __init__, you must call it yourself, since that will not happen automatically. But if you don't need anything from super's __init__, no need to call it. Example:
>>> class C(object):
...     def __init__(self):
...         self.b = 1
...
>>> class D(C):
...     def __init__(self):
...         super().__init__()  # in Python 2 use super(D, self).__init__()
...         self.a = 1
...
>>> class E(C):
...     def __init__(self):
...         self.a = 1
...
>>> d = D()
>>> d.a
1
>>> d.b # This works because of the call to super's init
1
>>> e = E()
>>> e.a
1
>>> e.b # This is going to fail since nothing in E initializes b...
Traceback (most recent call last):
File "<pyshell#70>", line 1, in <module>
e.b # This is going to fail since nothing in E initializes b...
AttributeError: 'E' object has no attribute 'b'
__del__ works the same way (but be wary of relying on __del__ for finalization; consider doing it via the with statement instead).
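For the finalization point, here is a minimal sketch of the with-statement alternative (Resource is a made-up name, not anything from the question):
class Resource:
    def __enter__(self):
        print("acquire")   # e.g. open a file or a connection
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        print("release")   # runs deterministically when the block exits, unlike __del__
        return False       # don't suppress exceptions

with Resource() as r:
    print("using the resource")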
I rarely use __new__. I do all the initialization in __init__.
In Anon's answer:
"If you need something from super's __init__ to be done in addition to what is being done in the current class's __init__ , you must call it yourself, since that will not happen automatically"
It's incredible: he is stating exactly the contrary of the principle of inheritance.
It is not that "something from super's __init__ (...) will not happen automatically"; it WOULD happen automatically, but it doesn't happen because the base class's __init__ is overridden by the definition of the derived class's __init__.
So then, WHY define a derived class's __init__ at all, since it overrides what one is aiming at when resorting to inheritance?
It's because one needs to do something that is NOT done in the base class's __init__, and the only possibility to obtain that is to put its execution in a derived class's __init__ function.
In other words, one needs something in addition to what would have been done automatically by the base class's __init__ if it weren't overridden.
NOT the contrary.
Then, the problem is that the desired instructions present in the base class's __init__ are no longer executed at the moment of instantiation. To offset this, something special is required: calling the base class's __init__ explicitly, in order to KEEP, NOT TO ADD, the initialization performed by the base class's __init__.
That's exactly what is said in the official doc:
An overriding method in a derived class may in fact want to extend rather than simply replace the base class method of the same name. There is a simple way to call the base class method directly: just call BaseClassName.methodname(self, arguments).
http://docs.python.org/tutorial/classes.html#inheritance
That's the whole story:
when the aim is to KEEP the initialization performed by the base class, which is pure inheritance, nothing special is needed; one must just avoid defining an __init__ function in the derived class
when the aim is to REPLACE the initialization performed by the base class, __init__ must be defined in the derived class
when the aim is to ADD to the initialization performed by the base class, a derived-class __init__ must be defined, comprising an explicit call to the base class's __init__ (see the sketch just below)
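Here is a minimal sketch of the three cases (the class names are made up for illustration):
class Base:
    def __init__(self):
        self.x = 1

class Keep(Base):       # KEEP: no __init__ defined, Base.__init__ runs as-is
    pass

class Replace(Base):    # REPLACE: Base.__init__ is overridden and never runs
    def __init__(self):
        self.y = 2

class Extend(Base):     # ADD: override, but call the base __init__ explicitly
    def __init__(self):
        Base.__init__(self)   # or super().__init__()
        self.y = 2

print(vars(Keep()))     # {'x': 1}
print(vars(Replace()))  # {'y': 2}
print(vars(Extend()))   # {'x': 1, 'y': 2}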
What I find astonishing in Anon's post is not only that he states the contrary of inheritance theory, but that five people upvoted it without batting an eye, and that nobody reacted for two years in a thread whose interesting subject must be read relatively often.
In Python, calling the super-class' __init__ is optional. If you call it, it is then also optional whether to use the super identifier, or whether to explicitly name the super class:
object.__init__(self)
In the case of object, calling the super method is not strictly necessary, since the super method is empty. The same goes for __del__.
On the other hand, for __new__, you should indeed call the super method, and use its return as the newly-created object - unless you explicitly want to return something different.
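For instance, a minimal sketch (assuming Python 3; Tagged is a made-up name) of a __new__ that delegates to the super method and uses its return value as the new object:
class Tagged(tuple):
    def __new__(cls, iterable, tag):
        # delegate the actual object creation to the superclass,
        # then use its return value as the newly created instance
        self = super().__new__(cls, iterable)
        self.tag = tag
        return self

t = Tagged([1, 2, 3], tag="demo")
print(t, t.tag)   # (1, 2, 3) demo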
Edit: (after the code change)
There is no way for us to tell you whether or not you need to call your parent's __init__ (or any other function). Inheritance obviously would work without such a call. It all depends on the logic of your code: for example, if all your initialization is done in the parent class, you can just skip the child class's __init__ altogether.
Consider the following example:
>>> class A:
...     def __init__(self, val):
...         self.a = val
...
>>> class B(A):
...     pass
...
>>> class C(A):
...     def __init__(self, val):
...         A.__init__(self, val)
...         self.a += val
...
>>> A(4).a
4
>>> B(5).a
5
>>> C(6).a
12
There's no hard and fast rule. The documentation for a class should indicate whether subclasses should call the superclass method. Sometimes you want to completely replace superclass behaviour, and at other times augment it - i.e. call your own code before and/or after a superclass call.
Update: The same basic logic applies to any method call. Constructors sometimes need special consideration (as they often set up state which determines behaviour) and destructors because they parallel constructors (e.g. in the allocation of resources, e.g. database connections). But the same might apply, say, to the render() method of a widget.
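For instance, a small sketch of the "augment" case with a hypothetical widget (Widget and FancyWidget are made-up names):
class Widget:
    def render(self):
        print("draw the basic widget")

class FancyWidget(Widget):
    def render(self):
        print("draw background")   # own code before
        super().render()           # keep the superclass behaviour
        print("draw border")       # own code after

FancyWidget().render()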
Further update: What's the OPP? Do you mean OOP? No - a subclass often needs to know something about the design of the superclass. Not the internal implementation details - but the basic contract that the superclass has with its clients (using classes). This does not violate OOP principles in any way. That's why protected is a valid concept in OOP in general (though not, of course, in Python).
IMO, you should call it. If your superclass is object, you should not, but in other cases I think it is exceptional not to call it. As already answered by others, it is very convenient if your class doesn't even have to override __init__ itself, for example when it has no (additional) internal state to initialize.
Yes, you should always call the base class's __init__ explicitly as a good coding practice. Forgetting to do this can cause subtle issues or runtime errors. This is true even if __init__ doesn't take any parameters. This is unlike other languages, where the compiler would implicitly call the base class constructor for you. Python doesn't do that!
The main reason for always calling the base class's __init__ is that the base class may typically create member variables and initialize them to defaults. So if you don't call the base class's init, none of that code gets executed and you end up with a base class that has none of its member variables set.
Example:
class Base:
    def __init__(self):
        print('base init')

class Derived1(Base):
    def __init__(self):
        print('derived1 init')

class Derived2(Base):
    def __init__(self):
        super(Derived2, self).__init__()
        print('derived2 init')

print('Creating Derived1...')
d1 = Derived1()
print('Creating Derived2...')
d2 = Derived2()
This prints:
Creating Derived1...
derived1 init
Creating Derived2...
base init
derived2 init
Run this code.
Related
I'm currently learning tkinter from sentdex's tutorial, and it seems to me that I'm writing code that runs __init__ inside its own definition. What does a line like that mean? Is it tkinter's __init__ function?
class seaOfBTCapp(tk.Tk):
    def __init__(self, *args, **kwargs):
        tk.Tk.__init__(self, *args, **kwargs)
It's invoking another class's constructor on itself.
This is a fun quirk of python's object-oriented design. "Instance methods" are really just class methods that take the current instance as an implicit parameter. You can, in fact, call them as class methods and provide the object explicitly:
ex = [1, 2, 3, 4, 5]
# the following are equivalent:
ex.pop(0) # call the method on the instance, passing it implicitly
list.pop(ex, 0) # call the method on the class `list`, passing the instance explicitly
The same behavior is being invoked here. You're taking the __init__ method of the tk.Tk class, and passing self in as the "instance". This is an uncommon, but valid, way of accessing methods in the superclass that have been overridden in your subclass (for example, the constructor).
As in @Barmar's answer, a better solution is using super(), which produces something resembling an instance of the superclass, which you then call __init__ on to get the superclass's implementation of __init__(), passing self implicitly, as you would expect.
What that line does is call your parent class's __init__ method. That's what would have happened if you didn't define your own method, so if you're not doing anything else in your __init__, you should probably just skip it and let the inherited method run normally.
It's also probably better to call super().__init__(*args, **kwargs), rather than naming the parent class explicitly (and needing to pass self by hand). This is particularly the case if you might ever use this class in a situation involving multiple inheritance, where explicitly naming the next class to be called can get the MRO wrong. If you're just starting in programming, don't worry too much about this, multiple inheritance is a pretty advanced topic (though it's easier to get right in Python than in many other languages).
I think this is equivalent to the more modern:
class seaOfBTCapp(tk.Tk):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
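And here is a small, made-up diamond example of why super() is usually preferable to naming the parent explicitly in multiple inheritance - with super(), each __init__ runs exactly once, in MRO order:
class A:
    def __init__(self):
        print("A")

class B(A):
    def __init__(self):
        super().__init__()
        print("B")

class C(A):
    def __init__(self):
        super().__init__()
        print("C")

class D(B, C):
    def __init__(self):
        super().__init__()
        print("D")

D()              # prints A, C, B, D
print(D.mro())   # D, B, C, A, object (in that order)
If B called A.__init__(self) directly instead, C's __init__ would be skipped entirely when constructing D.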
In C++, given a class hierarchy, the most derived class's ctor calls its base class ctor, which then initializes the base part of the object before the derived part is constructed. In Python I want to understand what's going on in a case where I have the requirement that Derived subclasses a given class Base, which takes a callable in its __init__ method and later invokes it. The callable uses some parameters which I pass in the Derived class's __init__, which is where I also define the callable function. My idea then was to pass the Derived instance itself to its Base class after having defined the __call__ operator:
class Derived(Base):
    def __init__(self, a, b):
        def _process(c, d):
            pass  # ... do something with a and b ...
        self.__class__.__call__ = _process
        super(Derived, self).__init__(self)
Is this a pythonic way of dealing with this problem?
What is the exact order of initialization here? Does one need to call super as the first instruction in the __init__ method, or is it ok to do it the way I did?
I am confused about whether it is considered good practice to use super with or without arguments in Python > 3.6.
What is the exact order of initialization here?
Well, very obviously the one you can see in your code - Base.__init__() is only called when you explicitly ask for it (with the super() call). If Base also has parents and everyone in the chain uses super() calls, the parents' initializers will be invoked according to the MRO.
Basically, Python is a "runtime language" - except for the bytecode compilation phase, everything happens at runtime - so there's very little "black magic" going on (and much of it is actually documented and fully exposed for those who want to look under the hood or do some metaprogramming).
Does one need to call super as the first instruction in the __init__ method or is it ok to do it the way I did?
You call the parent's method where you see fit for the concrete use case - you just have to beware of not using instance attributes (directly or - less obvious to spot - indirectly via a method call that depends on those attributes) before they are defined.
I am confused about whether it is considered good practice to use super with or without arguments in Python > 3.6
If you don't need backward compatibility, use super() without params - unless you want to explicitly skip some class in the MRO, but then chances are there's something debatable with your design (but well - sometimes we can't afford to rewrite a whole code base just to avoid one very special corner case, so that's ok too as long as you understand what you're doing and why).
Now with your core question:
class Derived(Base):
    def __init__(self, a, b):
        def _process(c, d):
            pass  # ... do something with a and b ...
        self.__class__.__call__ = _process
        super(Derived, self).__init__(self)
self.__class__.__call__ is a class attribute and is shared by all instances of the class. This means that you either have to make sure you are only ever using one single instance of the class (which doesn't seem to be the goal here) or be ready to get totally random results, since each new instance will overwrite self.__class__.__call__ with its own version.
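Here is a minimal, standalone illustration of that pitfall (Demo is a made-up class, unrelated to your Base):
class Demo:
    def __init__(self, value):
        def _process(instance):
            return value              # closes over *this* constructor's value
        # class attribute: shared by every instance of Demo
        self.__class__.__call__ = _process

d1 = Demo(1)
d2 = Demo(2)          # silently replaces the __call__ installed by d1
print(d1(), d2())     # 2 2  -- d1 now uses d2's closure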
If what you want is to have each instance's __call__ method call its own version of _process(), then there's a much simpler solution - just make _process an instance attribute and call it from __call__:
class Derived(Base):
    def __init__(self, a, b):
        def _process(c, d):
            pass  # ... do something with a and b ...
        self._process = _process
        super(Derived, self).__init__(self)

    def __call__(self, c, d):
        return self._process(c, d)
Or even simpler:
class Derived(Base):
    def __init__(self, a, b):
        super(Derived, self).__init__(self)
        self._a = a
        self._b = b

    def __call__(self, c, d):
        do_something_with(self._a, self._b)
EDIT:
Base requires a callable in its __init__ method.
This would be better if your example snippet was closer to your real use case.
But when I call super().__init__(), the __call__ method of Derived should not have been instantiated yet, or has it?
Now that's a good question... Actually, Python methods are not what you think they are. What you define in a class statement's body using the def statement are still plain functions, as you can see by yourself:
>>> class Foo:
...     def bar(self): pass
...
>>> Foo.bar
<function Foo.bar at 0x...>
"Methods" are only instanciated when an attribute lookup resolves to a class attribute that happens to be a function:
>>> Foo().bar
<bound method Foo.bar of <__main__.Foo object at 0x7f3cef4de908>>
>>> Foo().bar
<bound method Foo.bar of <__main__.Foo object at 0x7f3cef4de940>>
(if you wonder how this happens, it's documented here)
and they actually are just thin wrappers around a function, instance and class (or function and class for classmethods), which delegate the call to the underlying function, injecting the instance (or class) as first argument. In CS terms, a Python method is the partial application of a function to an instance (or class).
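If it helps, here's a small sketch of that "partial application" view, doing by hand the binding that the attribute lookup normally does via the descriptor protocol:
class Foo:
    def bar(self):
        return "bar"

obj = Foo()
print(Foo.bar)                       # a plain function stored on the class
print(obj.bar)                       # a bound method: function + instance
print(obj.bar())                     # 'bar'
print(Foo.bar.__get__(obj, Foo)())   # same binding done explicitly -> 'bar'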
Now, as I mentioned above, Python is a runtime language, and both def and class are executable statements. So by the time you define your Derived class, the class statement creating the Base class object has already been executed (else Base wouldn't exist at all), with all of the class statement's block being executed first (to define the functions and other class attributes).
So "when you call super().__init()__", the __call__ function of Base HAS been instanciated (assuming it's defined in the class statement for Base of course, but that's by far the most common case).
I am writing a class with multiple constructors using @classmethod. Now I would like both the __init__ constructor as well as the classmethod constructor to call some routine of the class to set initial values before doing other stuff.
From __init__ this is usually done with self:
def __init__(self, name="", revision=None):
    self._init_attributes()

def _init_attributes(self):
    self.test = "hello"
From a classmethod constructor, I would call another classmethod instead, because the instance (i.e. self) is not created until I leave the classmethod with return cls(...). Now, I can call my _init_attributes() method as
@classmethod
def from_file(cls, filename=None):
    cls._init_attributes()
    # do other stuff like reading from file
    return cls()
and this actually works (in the sense that I don't get an error and I can actually see the test attribute after executing c = Class.from_file()). However, if I understand things correctly, this will set the attributes on the class level, not on the instance level. Hence, if I initialize an attribute with a mutable object (e.g. a list), then all instances of this class would use the same list, rather than their own instance list. Is this correct? If so, is there a way to initialize "instance" attributes in classmethods, or do I have to write the code in such a way that all the attribute initialisation is done in __init__?
Hmmm. Actually, while writing this: I may have even greater trouble than I thought, because __init__ will be called upon return from the classmethod, won't it? So what would be a proper way to deal with this situation?
Note: Article [1] discusses a somewhat similar problem.
Yes, you're understanding things correctly: cls._init_attributes() will set class attributes, not instance attributes.
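A tiny illustration of that point (Widget is a made-up class name):
class Widget:
    @classmethod
    def _init_attributes(cls):
        cls.items = []        # this creates a *class* attribute, not an instance one

Widget._init_attributes()
a, b = Widget(), Widget()
a.items.append(1)
print(b.items)   # [1] -- both instances share the very same list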
Meanwhile, it's up to your alternate constructor to construct and return an instance. In between constructing it and returning it, that's when you can call _init_attributes(). In other words:
@classmethod
def from_file(cls, filename=None):
    obj = cls()
    obj._init_attributes()
    # do other stuff like reading from file
    return obj
However, you're right that the only obvious way to construct and return an instance is to just call cls(), which will call __init__.
But this is easy to get around: just have the alternate constructors pass some extra argument to __init__ meaning "skip the usual initialization, I'm going to do it later". For example:
def __init__(self, name="", revision=None, _skip_default_init=False):
    # blah blah
    ...

@classmethod
def from_file(cls, filename=""):
    # blah blah setup
    obj = cls(_skip_default_init=True)
    # extra initialization work
    return obj
If you want to make this less visible, you can always take **kwargs and check it inside the method body… but remember, this is Python; you can't prevent people from doing stupid things, all you can do is make it obvious that they're stupid. And the _skip_default_init should be more than enough to handle that.
If you really want to, you can override __new__ as well. Constructing an object doesn't call __init__ unless __new__ returns an instance of cls or some subclass thereof. So, you can give __new__ a flag that tells it to skip over __init__ by munging obj.__class__, then restore the __class__ yourself. This is really hacky, but could conceivably be useful.
A much cleaner solution—but for some reason even less common in Python—is to borrow the "class cluster" idea from Smalltalk/ObjC: Create a private subclass that has a different __init__ that doesn't super (or intentionally skips over its immediate base and supers from there), and then have your alternate constructor in the base class just return an instance of that subclass.
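A rough sketch of that "class cluster" idea, with made-up names (Document / _RawDocument), just to show the shape of it:
class Document:
    def __init__(self, name="", revision=None):
        # the usual, possibly expensive, initialization
        self.name = name
        self.revision = revision

    @classmethod
    def from_file(cls, filename=""):
        obj = _RawDocument()      # private subclass whose __init__ does nothing
        obj.name = filename
        obj.revision = None
        # ... extra initialization work, e.g. reading the file ...
        return obj

class _RawDocument(Document):
    def __init__(self):
        # intentionally does NOT call super().__init__()
        pass

doc = Document.from_file("spec.txt")
print(type(doc).__name__, doc.name)   # _RawDocument spec.txt -- still isinstance of Document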
Alternatively, if the only reason you don't want to call __init__ is so you can do the exact same thing __init__ would have done… why? DRY stands for "don't repeat yourself", not "bend over backward to find ways to force yourself to repeat yourself", right?
I think you can define either __init__ or __new__ in a class, so why are both defined in django.utils.datastructures.py?
My code:
class a(object):
    def __init__(self):
        print 'aaa'
    def __new__(self):
        print 'sss'

a()  # prints 'sss'

class b:
    def __init__(self):
        print 'aaa'
    def __new__(self):
        print 'sss'

b()  # prints 'aaa'
datastructures.py:
class SortedDict(dict):
    """
    A dictionary that keeps its keys in the order in which they're inserted.
    """
    def __new__(cls, *args, **kwargs):
        instance = super(SortedDict, cls).__new__(cls, *args, **kwargs)
        instance.keyOrder = []
        return instance

    def __init__(self, data=None):
        if data is None:
            data = {}
        super(SortedDict, self).__init__(data)
        if isinstance(data, dict):
            self.keyOrder = data.keys()
        else:
            self.keyOrder = []
            for key, value in data:
                if key not in self.keyOrder:
                    self.keyOrder.append(key)
And under what circumstances will SortedDict.__init__ be called?
Thanks.
You can define either or both of __new__ and __init__.
__new__ must return an object -- which can be a new one (typically that task is delegated to type.__new__), an existing one (to implement singletons, "recycle" instances from a pool, and so on), or even one that's not an instance of the class. If __new__ returns an instance of the class (new or existing), __init__ then gets called on it; if __new__ returns an object that's not an instance of the class, then __init__ is not called.
__init__ is passed a class instance as its first item (in the same state __new__ returned it, i.e., typically "empty") and must alter it as needed to make it ready for use (most often by adding attributes).
In general it's best to use __init__ for all it can do -- and __new__, if something is left that __init__ can't do, for that "extra something".
So you'll typically define both if there's something useful you can do in __init__, but not everything you want to happen when the class gets instantiated.
For example, consider a class that subclasses int but also has a foo slot -- and you want it to be instantiated with an initializer for the int and one for the .foo. As int is immutable, that part has to happen in __new__, so pedantically one could code:
>>> class x(int):
...     def __new__(cls, i, foo):
...         self = int.__new__(cls, i)
...         return self
...     def __init__(self, i, foo):
...         self.foo = foo
...     __slots__ = 'foo',
...
>>> a = x(23, 'bah')
>>> print a
23
>>> print a.foo
bah
>>>
In practice, for a case this simple, nobody would mind if you lost the __init__ and just moved the self.foo = foo to __new__. But if initialization is rich and complex enough to be best placed in __init__, this idea is worth keeping in mind.
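As a side note, here is a minimal sketch of the "return an existing instance" case mentioned above (a classic singleton via __new__; Config is a made-up name):
class Config(object):
    _instance = None

    def __new__(cls):
        # hand back the existing instance if there is one,
        # otherwise delegate the actual creation to object.__new__
        if cls._instance is None:
            cls._instance = super(Config, cls).__new__(cls)
        return cls._instance

a = Config()
b = Config()
print(a is b)   # True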
__new__ and __init__ do completely different things. The method __init__ initializes a new instance of a class --- it is a constructor. __new__ is a far more subtle thing --- it can change the arguments and, in fact, the class of the created object. For example, the following code:
class Meters(object):
    def __new__(cls, value):
        return int(value / 3.28083)
If you call Meters(6) you will not actually create an instance of Meters, but an instance of int. You might wonder why this is useful; it is actually crucial to metaclasses, an admittedly obscure (but powerful) feature.
You'll note that in Python 2.x, only classes inheriting from object can take advantage of __new__, as your code above shows.
The use of __new__ you showed in django seems to be an attempt to keep a sane method resolution order on SortedDict objects. I will admit, though, that it is often hard to tell why __new__ is necessary. Standard Python style suggests that it not be used unless necessary (as always, better class design is the tool you turn to first).
My only guess is that in this case, they (the author of this class) want the keyOrder list to exist on the class even before SortedDict.__init__ is called.
Note that SortedDict calls super() in its __init__, this would ordinarily go to dict.__init__, which would probably call __setitem__ and the like to start adding items. SortedDict.__setitem__ expects the .keyOrder property to exist, and therein lies the problem (since .keyOrder isn't normally created until after the call to super().) It's possible this is just an issue with subclassing dict because my normal gut instinct would be to just initialize .keyOrder before the call to super().
The code in __new__ might also be used to allow SortedDict to be subclassed in a diamond inheritance structure where it is possible SortedDict.__init__ is not called before __setitem__ and the like are called for the first time. Django has to contend with various issues in supporting a wide range of Python versions from 2.3 up; it's possible this code is completely unnecessary in some versions and needed in others.
There is a common use for defining both __new__ and __init__: accessing class properties which may be eclipsed by their instance versions without having to do type(self) or self.__class__ (which, in the existence of metaclasses, may not even be the right thing).
For example:
class MyClass(object):
    creation_counter = 0

    def __new__(cls, *args, **kwargs):
        cls.creation_counter += 1
        return super(MyClass, cls).__new__(cls)

    def __init__(self):
        print "I am the %dth myclass to be created!" % self.creation_counter
Finally, __new__ can actually return an instance of a wrapper or a completely different class from what you thought you were instantiating. This is used to provide metaclass-like features without actually needing a metaclass.
In my opinion, there was no need of overriding __new__ in the example you described.
Creation of an instance and actual memory allocation happen in __new__; __init__ is called after __new__ and is meant for the initialization of the instance, serving the job of a constructor in classical OOP terms. So, if all you want to do is initialize variables, you should go for overriding __init__.
The real role of __new__ comes into play when you are using metaclasses. There, if you want to do something like changing attributes or adding attributes that must happen before the creation of the class, you should go for overriding __new__.
Consider a completely hypothetical case where you want to make some attributes of a class private, even though they are not defined as such (I'm not saying one should ever do that).
class PrivateMetaClass(type):
    def __new__(metaclass, classname, bases, attrs):
        private_attributes = ['name', 'age']
        for private_attribute in private_attributes:
            if attrs.get(private_attribute):
                attrs['_' + private_attribute] = attrs[private_attribute]
                attrs.pop(private_attribute)
        return super(PrivateMetaClass, metaclass).__new__(metaclass, classname, bases, attrs)

class Person(object):
    __metaclass__ = PrivateMetaClass

    name = 'Someone'
    age = 19
>>> person = Person()
>>> hasattr(person, 'name')
False
>>> person._name
'Someone'
Again, this is just for instructional purposes; I'm not suggesting one should actually do anything like this.
I have read posts like these:
What is a metaclass in Python?
What are your (concrete) use-cases for metaclasses in Python?
Python's Super is nifty, but you can't use it
But somehow I'm still confused. Many confusions remain, like:
When and why would I have to do something like the following?
# Refer link1
return super(MyType, cls).__new__(cls, name, bases, newattrs)
or
# Refer link2
return super(MetaSingleton, cls).__call__(*args, **kw)
or
# Refer link2
return type(self.__name__ + other.__name__, (self, other), {})
How does super work exactly?
What is class registry and unregistry in link1 and how exactly does it work? (I thought it has something to do with singleton. I may be wrong, being from C background. My coding style is still a mix of functional and OO).
What is the flow of class instantiation (subclass, metaclass, super, type) and method invocation (metaclass->__new__, metaclass->__init__, super->__new__, subclass->__init__ inherited from metaclass) with well-commented working code (though the first link is quite close, it does not talk about the cls keyword, super(..) and the registry)? Preferably an example with multiple inheritance.
P.S.: I made the last part as code because Stack Overflow formatting was converting the text metaclass->__new__
to metaclass->new
OK, you've thrown quite a few concepts into the mix here! I'm going to pull out a few of the specific questions you have.
In general, understanding super, the MRO and metaclasses is made much more complicated because there have been lots of changes in this tricky area over the last few versions of Python.
Python's own documentation is a very good reference, and completely up to date. There is an IBM developerWorks article which is fine as an introduction and takes a more tutorial-based approach, but note that it's five years old, and spends a lot of time talking about the older-style approaches to meta-classes.
super is how you access an object's super-classes. It's more complex than (for example) Java's super keyword, mainly because of multiple inheritance in Python. As Super Considered Harmful explains, using super() can result in you implicitly using a chain of super-classes, the order of which is defined by the Method Resolution Order (MRO).
You can see the MRO for a class easily by invoking mro() on the class (not on an instance). Note that meta-classes are not in an object's super-class hierarchy.
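For example (a small, made-up diamond hierarchy):
class A(object): pass
class B(A): pass
class C(A): pass
class D(B, C): pass

print(D.mro())
# [D, B, C, A, object]  (module prefixes omitted)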
Thomas' description of meta-classes here is excellent:
A metaclass is the class of a class. Like a class defines how an instance of the class behaves, a metaclass defines how a class behaves. A class is an instance of a metaclass.
In the examples you give, here's what's going on:
1. The call to __new__ is being bubbled up to the next thing in the MRO. In this case, super(MyType, cls) would resolve to type; calling type.__new__ lets Python complete its normal instance creation steps.
2. This example is using meta-classes to enforce a singleton. He's overriding __call__ in the metaclass so that whenever a class instance is created, he intercepts that, and can bypass instance creation if there already is one (stored in cls.instance). Note that overriding __new__ in the metaclass won't be good enough, because that's only called when creating the class. Overriding __new__ on the class would work, however.
3. This shows a way to dynamically create a class. Here he's appending the supplied class's name to the created class name, and adding it to the class hierarchy too.
I'm not exactly sure what sort of code example you're looking for, but here's a brief one showing meta-classes, inheritance and method resolution:
print('>>> # Defining classes:')

class MyMeta(type):
    def __new__(cls, name, bases, dct):
        print("meta: creating %s %s" % (name, bases))
        return type.__new__(cls, name, bases, dct)

    def meta_meth(cls):
        print("MyMeta.meta_meth")

    __repr__ = lambda c: c.__name__

class A(metaclass=MyMeta):
    def __init__(self):
        super(A, self).__init__()
        print("A init")

    def meth(self):
        print("A.meth")

class B(metaclass=MyMeta):
    def __init__(self):
        super(B, self).__init__()
        print("B init")

    def meth(self):
        print("B.meth")

class C(A, B, metaclass=MyMeta):
    def __init__(self):
        super(C, self).__init__()
        print("C init")

print('>>> c_obj = C()')
c_obj = C()
print('>>> c_obj.meth()')
c_obj.meth()
print('>>> C.meta_meth()')
C.meta_meth()
print('>>> c_obj.meta_meth()')
c_obj.meta_meth()
Example output (using Python >= 3.6):
>>> # Defining classes:
meta: creating A ()
meta: creating B ()
meta: creating C (A, B)
>>> c_obj = C()
B init
A init
C init
>>> c_obj.meth()
A.meth
>>> C.meta_meth()
MyMeta.meta_meth
>>> c_obj.meta_meth()
Traceback (most recent call last):
File "metatest.py", line 41, in <module>
c_obj.meta_meth()
AttributeError: 'C' object has no attribute 'meta_meth'
Here's the more pragmatic answer.
It rarely matters
"What is a metaclass in Python". Bottom line, type is the metaclass of all classes. You have almost no practical use for this.
class X(object):
    pass

type(X) == type
"What are your (concrete) use cases for metaclasses in Python?". Bottom line. None.
"Python's Super is nifty, but you can't use it". Interesting note, but little practical value. You'll never have a need for resolving complex multiple inheritance networks. It's easy to prevent this problem from arising by using an explicity Strategy design instead of multiple inheritance.
Here's my experience over the last 7 years of Python programming.
A class has 1 or more superclasses forming a simple chain from my class to object.
The concept of "class" is defined by a metaclass named type. I might want to extend the concept of "class", but so far, it's never come up in practice. Not once. type always does the right thing.
Using super works out really well in practice. It allows a subclass to defer to its superclass. It happens to show up in these metaclass examples because they're extending the built-in metaclass, type.
However, in all subclass situations, you'll make use of super to extend a superclass.
Metaclasses
The metaclass issue is this:
Every object has a reference to its type definition, or "class".
A class is, itself, also an object.
Therefore an object of type class has a reference to its type or "class". The "class" of a "class" is a metaclass.
Since a "class" isn't a C++ run-time object, this doesn't happen in C++. It does happen in Java, Smalltalk and Python.
A metaclass defines the behavior of a class object.
90% of your interaction with a class is to ask the class to create a new object.
10% of the time, you'll be using class methods or class variables ("static" in C++ or Java parlance.)
I have found a few use cases for class-level methods. I have almost no use cases for class variables. I've never had a situation to change the way object construction works.