What are the main differences between Python metaclasses and class decorators? Is there something I can do with one but not with the other?
Decorators are much, much simpler and more limited -- and therefore should be preferred whenever the desired effect can be achieved with either a metaclass or a class decorator.
Anything you can do with a class decorator, you can of course do with a custom metaclass (just apply the functionality of the "decorator function", i.e., the one that takes a class object and modifies it, in the course of the metaclass's __new__ or __init__ that make the class object!-).
There are many things you can do in a custom metaclass but not in a decorator (unless the decorator internally generates and applies a custom metaclass, of course -- but that's cheating;-)... and even then, in Python 3, there are things you can only do with a custom metaclass, not after the fact... but that's a pretty advanced sub-niche of your question, so let me give simpler examples).
For example, suppose you want to make a class object X such that print X (or in Python 3 print(X) of course;-) displays peekaboo!. You cannot possibly do that without a custom metaclass, because the metaclass's override of __str__ is the crucial actor here, i.e., you need a def __str__(cls): return "peekaboo!" in the custom metaclass of class X.
The same applies to all magic methods, i.e., to all kinds of operations as applied to the class object itself (as opposed to, ones applied to its instances, which use magic methods as defined in the class -- operations on the class object itself use magic methods as defined in the metaclass).
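For concreteness, here is a minimal sketch of such a metaclass (Python 3 syntax; the class names are illustrative):

class PeekabooMeta(type):
    def __str__(cls):
        # invoked when the class object itself is printed
        return "peekaboo!"

class X(metaclass=PeekabooMeta):
    pass

print(X)  # peekaboo!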
As noted in chapter 21 of the book Fluent Python, one difference is related to inheritance. Please see these two scripts; the Python version is 3.5. The point is that a metaclass affects its children, while a decorator affects only the current class.
The first script uses a class decorator to replace/overwrite the method 'func1'.
def deco4cls(cls):
    cls.func1 = lambda self: 2
    return cls

@deco4cls
class Cls1:
    pass

class Cls1_1(Cls1):
    def func1(self):
        return 3

obj1_1 = Cls1_1()
print(obj1_1.func1())  # 3
The second script uses a metaclass to replace/overwrite the method 'func1'.
class Deco4cls(type):
    def __init__(cls, name, bases, attr_dict):
        # print(cls, name, bases, attr_dict)
        super().__init__(name, bases, attr_dict)
        cls.func1 = lambda self: 2

class Cls2(metaclass=Deco4cls):
    pass

class Cls2_1(Cls2):
    def func1(self):
        return 3

obj2_1 = Cls2_1()
print(obj2_1.func1())  # 2!! the original Cls2_1.func1 is replaced by metaclass
In C++, given a class hierarchy, the most derived class's ctor calls its base class ctor, which then initializes the base part of the object before the derived part is instantiated. In Python I want to understand what's going on in a case where I have the requirement that Derived subclasses a given class Base, which takes a callable in its __init__ method and later invokes it. The callable uses some parameters which I pass in the Derived class's __init__, which is also where I define the callable function. My idea was to pass the Derived instance itself to its Base class after having defined the __call__ operator:
class Derived(Base):
    def __init__(self, a, b):
        def _process(c, d):
            pass  # do something with a and b
        self.__class__.__call__ = _process
        super(Derived, self).__init__(self)
Is this a pythonic way of dealing with this problem?
What is the exact order of initialization here? Does one need to call super as the first instruction in the __init__ method, or is it OK to do it the way I did?
I am confused whether it is considered good practice to use super with or without arguments in python > 3.6
What is the exact order of initialization here?
Well, very obviously the one you can see in your code: Base.__init__() is only called when you explicitly ask for it (with the super() call). If Base also has parents and everyone in the chain uses super() calls, the parents' initializers will be invoked according to the MRO.
Basically, Python is a "runtime language" - except for the bytecode compilation phase, everything happens at runtime - so there's very little "black magic" going on (and much of it is actually documented and fully exposed for those who want to look under the hood or do some metaprogramming).
Does one need to call super as the first instruction in the __init__ method or is it ok to do it the way I did?
You call the parent's method where you see fit for the concrete use case - you just have to beware of not using instance attributes (directly or - less obvious to spot - indirectly via a method call that depends on those attributes) before they are defined.
I am confused whether it is considered good practice to use super with or without arguments in python > 3.6
If you don't need backward compatibility, use super() without params - unless you want to explicitly skip some class in the MRO, but then chances are there's something debatable about your design (but well - sometimes we can't afford to rewrite a whole code base just to avoid one very special corner case, so that's OK too, as long as you understand what you're doing and why).
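A minimal sketch contrasting the two spellings (class names here are illustrative):

class Base:
    def __init__(self):
        print("Base init")

class Child(Base):
    def __init__(self):
        super().__init__()  # Python 3: zero-argument form
        # super(Child, self).__init__()  # equivalent, Python 2 compatible spelling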
Now with your core question:
class Derived(Base):
    def __init__(self, a, b):
        def _process(c, d):
            pass  # do something with a and b
        self.__class__.__call__ = _process
        super(Derived, self).__init__(self)
self.__class__.__call__ is a class attribute and is shared by all instances of the class. This means that you either have to make sure you are only ever using one single instance of the class (which doesn't seem to be the goal here) or are ready to have totally random results, since each new instance will overwrite self.__class__.__call__ with its own version.
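A quick sketch of that clobbering effect (illustrative names, not the OP's real code):

class Shared:
    def __init__(self, tag):
        # every new instance overwrites the SAME class attribute
        self.__class__.process = lambda self: tag

a = Shared("a")
b = Shared("b")     # replaces the version installed by `a`
print(a.process())  # prints "b", not "a"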
If what you want is for each instance's __call__ method to call its own version of _process(), then there's a much simpler solution - just make _process an instance attribute and call it from __call__:
class Derived(Base):
    def __init__(self, a, b):
        def _process(c, d):
            pass  # do something with a and b
        self._process = _process
        super(Derived, self).__init__(self)

    def __call__(self, c, d):
        return self._process(c, d)
Or even simpler:
class Derived(Base):
    def __init__(self, a, b):
        super(Derived, self).__init__(self)
        self._a = a
        self._b = b

    def __call__(self, c, d):
        do_something_with(self._a, self._b)
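To make the idea concrete, here is a runnable sketch assuming a hypothetical Base that stores the callable it receives and invokes it later (the real Base may differ):

class Base:
    def __init__(self, func):
        self._func = func  # Base stores the callable...

    def run(self, c, d):
        return self._func(c, d)  # ...and invokes it later

class Derived(Base):
    def __init__(self, a, b):
        super().__init__(self)  # the instance itself is the callable
        self._a = a
        self._b = b

    def __call__(self, c, d):
        return (self._a + self._b) * (c + d)  # placeholder computation

d = Derived(1, 2)
print(d.run(3, 4))  # 21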
EDIT:
Base requires a callable in its __init__ method.
This would be better if your example snippet was closer to your real use case.
But when I call super().__init__(), the __call__ method of Derived should not have been instantiated yet - or has it?
Now that's a good question... Actually, Python methods are not what you think they are. What you define in a class statement's body using the def statement are still plain functions, as you can see by yourself:
>>> class Foo:
...     def bar(self): pass
...
>>> Foo.bar
<function Foo.bar at 0x...>
"Methods" are only instanciated when an attribute lookup resolves to a class attribute that happens to be a function:
>>> Foo().bar
<bound method Foo.bar of <__main__.Foo object at 0x7f3cef4de908>>
>>> Foo().bar
<bound method Foo.bar of <__main__.Foo object at 0x7f3cef4de940>>
(if you wonder how this happens, it's documented here)
and they actually are just thin wrappers around a function, instance and class (or function and class for classmethods), which delegate the call to the underlying function, injecting the instance (or class) as first argument. In CS terms, a Python method is the partial application of a function to an instance (or class).
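A short sketch of that wrapper, reusing the Foo class from above:

f = Foo()
m = f.bar
print(m.__func__ is Foo.__dict__['bar'])  # True: the plain function underneath
print(m.__self__ is f)                    # True: the instance it was bound to
# the binding itself is just the descriptor protocol at work:
print(Foo.__dict__['bar'].__get__(f, Foo))  # an equivalent bound method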
Now as I mentioned above, Python is a runtime language, and both def and class are executable statements. So by the time you define your Derived class, the class statement creating the Base class object has already been executed (else Base wouldn't exist at all), with the whole class statement body being executed first (to define the functions and other class attributes).
So "when you call super().__init()__", the __call__ function of Base HAS been instanciated (assuming it's defined in the class statement for Base of course, but that's by far the most common case).
Following this answer it seems that a class' metaclass may be changed after the class has been defined by using the following*:
class MyMetaClass(type):
    # Metaclass magic...
    pass

class A(object):
    pass

A = MyMetaClass(A.__name__, A.__bases__, dict(A.__dict__))
Defining a function
def metaclass_wrapper(cls):
    return MyMetaClass(cls.__name__, cls.__bases__, dict(cls.__dict__))
allows me to apply a decorator to a class definition like so,
@metaclass_wrapper
class B(object):
    pass
It seems that the metaclass magic is applied to B, however B has no __metaclass__ attribute. Is the above method a sensible way to apply metaclasses to class definitions, even though I am defining and re-defining a class, or would I be better off simply writing
class B(object):
    __metaclass__ = MyMetaClass
I presume there are some differences between the two methods.
*Note, the original answer in the linked question, MyMetaClass(A.__name__, A.__bases__, A.__dict__), returns a TypeError:
TypeError: type() argument 3 must be a dict, not dict_proxy
It seems that the __dict__ attribute of A (the class definition) has a type dict_proxy, whereas the type of the __dict__ attribute of an instance of A has a type dict. Why is this? Is this a Python 2.x vs. 3.x difference?
Admittedly, I am a bit late to the party. However, I felt this was worth adding.
This is completely doable. That being said, there are plenty of other ways to accomplish the same goal. However, the decoration solution, in particular, allows for delayed evaluation ( obj = dec(obj) ), which using __metaclass__ inside the class does not. In typical decorator style, my solution is below.
There is a tricky thing that you may run into if you just construct the class without changing the dictionary or copying its attributes. Any attributes that the class had previously (before decorating) will appear to be missing. So, it is absolutely essential to copy these over and then tweak them as I have in my solution.
Personally, I like to be able to keep track of how an object was wrapped. So, I added the __wrapped__ attribute, which is not strictly necessary. It also makes it more like functools.wraps in Python 3 for classes. However, it can be helpful with introspection. Also, __metaclass__ is added to act more like the normal metaclass use case.
def metaclass(meta):
    def metaclass_wrapper(cls):
        __name = str(cls.__name__)
        __bases = tuple(cls.__bases__)
        __dict = dict(cls.__dict__)

        for each_slot in __dict.get("__slots__", tuple()):
            __dict.pop(each_slot, None)

        __dict["__metaclass__"] = meta
        __dict["__wrapped__"] = cls

        return meta(__name, __bases, __dict)
    return metaclass_wrapper
For a trivial example, take the following.
class MetaStaticVariablePassed(type):
    def __new__(meta, name, bases, dct):
        dct["passed"] = True
        return super(MetaStaticVariablePassed, meta).__new__(meta, name, bases, dct)

@metaclass(MetaStaticVariablePassed)
class Test(object):
    pass
This yields the nice result...
|1> Test.passed
|.> True
Using the decorator in the less usual, but identical way...
class Test(object):
    pass

Test = metaclass(MetaStaticVariablePassed)(Test)
...yields, as expected, the same nice result.
|1> Test.passed
|.> True
The class has no __metaclass__ attribute set... because you never set it!
Which metaclass to use is normally determined by a name __metaclass__ set in a class block. The __metaclass__ attribute isn't set by the metaclass. So if you invoke a metaclass directly rather than setting __metaclass__ and letting Python figure it out, then no __metaclass__ attribute is set.
In fact, normal classes are all instances of the metaclass type, so if the metaclass always set the __metaclass__ attribute on its instances then every class would have a __metaclass__ attribute (most of them set to type).
I would not use your decorator approach. It obscures the fact that a metaclass is involved (and which one), is still one line of boilerplate, and it's just messy to create a class from the 3 defining features of (name, bases, attributes) only to pull those 3 bits back out from the resulting class, throw the class away, and make a new class from those same 3 bits!
When you do this in Python 2.x:
class A(object):
    __metaclass__ = MyMeta

    def __init__(self):
        pass
You'd get roughly the same result if you'd written this:
attrs = {}
attrs['__metaclass__'] = MyMeta

def __init__(self):
    pass

attrs['__init__'] = __init__

A = attrs.get('__metaclass__', type)('A', (object,), attrs)
In reality calculating the metaclass is more complicated, as there actually has to be a search through all the bases to determine whether there's a metaclass conflict, and if one of the bases doesn't have type as its metaclass and attrs doesn't contain __metaclass__ then the default metaclass is the ancestor's metaclass rather than type. This is one situation where I expect your decorator "solution" will differ from using __metaclass__ directly. I'm not sure exactly what would happen if you used your decorator in a situation where using __metaclass__ would give you a metaclass conflict error, but I wouldn't expect it to be pleasant.
Also, if there are any other metaclasses involved, your method would result in them running first (possibly modifying what the name, bases, and attributes are!) and then pulling those out of the class and using it to create a new class. This could potentially be quite different than what you'd get using __metaclass__.
As for the __dict__ not giving you a real dictionary, that's just an implementation detail; I would guess for performance reasons. I doubt there is any spec that says the __dict__ of a (non-class) instance has to be the same type as the __dict__ of a class (which is also an instance btw; just an instance of a metaclass). The __dict__ attribute of a class is a "dictproxy", which allows you to look up attribute keys as if it were a dict but still isn't a dict. type is picky about the type of its third argument; it wants a real dict, not just a "dict-like" object (shame on it for spoiling duck-typing). It's not a 2.x vs 3.x thing; Python 3 behaves the same way, although it gives you a nicer string representation of the dictproxy. Python 2.4 (which is the oldest 2.x I have readily available) also has dictproxy objects for class __dict__ objects.
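You can verify this difference directly (output shown for Python 3, where the proxy type is called mappingproxy):

class A(object):
    pass

a = A()
print(type(A.__dict__))  # <class 'mappingproxy'> ('dictproxy' in Python 2)
print(type(a.__dict__))  # <class 'dict'>
# type() insists on a real dict as its third argument, so copy first:
A2 = type(A.__name__, A.__bases__, dict(A.__dict__))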
My summary of your question: "I tried a new tricky way to do a thing, and it didn't quite work. Should I use the simple way instead?"
Yes, you should do it the simple way. You haven't said why you're interested in inventing a new way to do it.
It is pretty easy to implement __len__(self) method in Python so that it handles len(inst) calls like this one:
class A(object):
def __len__(self):
return 7
a = A()
len(a) # gives us 7
And there are plenty of alike methods you can define (__eq__, __str__, __repr__ etc.).
I know that Python classes are objects as well.
My question: can I somehow define, for example, __len__ so that the following works:
len(A) # makes sense and gives some predictable result
What you're looking for is called a "metaclass"... just like a is an instance of class A, A is an instance of a class as well, referred to as a metaclass. By default, Python classes are instances of the type class (the only exception is under Python 2, which has some legacy "old style" classes - those that don't inherit from object). You can check this by doing type(A)... it should return type itself (yes, that object has been overloaded a little bit).
Metaclasses are powerful and brain-twisting enough to deserve more than the quick explanation I was about to write... a good starting point would be this stackoverflow question: What is a Metaclass.
For your particular question, for Python 3, the following creates a metaclass which aliases len(A) to invoke a class method on A:
class LengthMetaclass(type):
    def __len__(self):
        return self.clslength()

class A(object, metaclass=LengthMetaclass):
    @classmethod
    def clslength(cls):
        return 7

print(len(A))
(Note: The example above is for Python 3. The syntax is slightly different for Python 2: you would set __metaclass__ = LengthMetaclass inside the class body instead of passing it as a parameter.)
The reason LengthMetaclass.__len__ doesn't affect instances of A is that attribute resolution in Python first checks the instance dict, then walks the class hierarchy [A, object], but it never consults the metaclass. Whereas accessing A.__len__ first consults the instance A itself, then walks its class hierarchy, which consists of [LengthMetaclass, type].
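A quick demonstration of that lookup rule, reusing the classes above:

print(len(A))  # 7 -- len() consults type(A), i.e. LengthMetaclass

a = A()
try:
    len(a)     # instances never consult the metaclass
except TypeError as e:
    print(e)   # object of type 'A' has no len()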
Since a class is an instance of a metaclass, one way is to use a custom metaclass:
>>> Meta = type('Meta', (type,), {'__repr__': lambda cls: 'class A'})
>>> A = Meta('A', (object,), {'__repr__': lambda self: 'instance of class A'})
>>> A
class A
>>> A()
instance of class A
I fail to see how the syntax specifically is important, but if you really want a simple way to implement it, just use the normal __len__(self) that handles len(inst), and in your implementation have it return a class variable that all instances share:
class A:
    my_length = 5

    def __len__(self):
        return self.my_length
and you can later call it like that:
len(A()) #returns 5
Obviously this creates a temporary instance of your class, but length only makes sense for an instance of a class and not really for the concept of a class (a type object).
Editing the metaclass sounds like a very bad idea, and unless you are doing something for school or just to mess around, I really suggest you rethink this idea.
try this:
class Lengthy:
    x = 5

    @classmethod
    def __len__(cls):
        return cls.x
The @classmethod allows you to call it directly on the class, but your __len__ implementation won't be able to depend on any instance variables:
a = Lengthy()
len(a)
I want to have an instance of class registered when the class is defined. Ideally the code below would do the trick.
registry = {}

def register(cls):
    registry[cls.__name__] = cls()  # problem here
    return cls

@register
class MyClass(Base):
    def __init__(self):
        super(MyClass, self).__init__()
Unfortunately, this code generates the error NameError: global name 'MyClass' is not defined.
What's going on is that at the # problem here line I'm trying to instantiate MyClass, but the decorator hasn't returned yet, so the name doesn't exist.
Is there some way around this, using metaclasses or something?
Yes, metaclasses can do this. A metaclass's __new__ method returns the class, so just register that class before returning it.
class MetaClass(type):
    def __new__(cls, clsname, bases, attrs):
        newclass = super(MetaClass, cls).__new__(cls, clsname, bases, attrs)
        register(newclass)  # here is your register function
        return newclass

class MyClass(object):
    __metaclass__ = MetaClass
The previous example works in Python 2.x. In Python 3.x, the definition of MyClass is slightly different (while MetaClass is not shown because it is unchanged - except that super(MetaClass, cls) can become super() if you want):
# Python 3.x
class MyClass(metaclass=MetaClass):
    pass
As of Python 3.6 there is also a new __init_subclass__ method (see PEP 487) that can be used instead of a metaclass (thanks to @matusko for his answer below):
class ParentClass:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        register(cls)

class MyClass(ParentClass):
    pass
[edit: fixed missing cls argument to super().__new__()]
[edit: added Python 3.x example]
[edit: corrected order of args to super(), and improved description of 3.x differences]
[edit: add Python 3.6 __init_subclass__ example]
Since Python 3.6 you don't need metaclasses to solve this.
In Python 3.6, simpler customization of class creation was introduced (PEP 487): an __init_subclass__ hook that initializes all subclasses of a given class.
The proposal includes the following example of subclass registration:
class PluginBase:
    subclasses = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.subclasses.append(cls)
In this example, PluginBase.subclasses will contain a plain list of all subclasses in the entire inheritance tree. One should note that this also works nicely as a mixin class.
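Using it looks like this (plugin names are illustrative):

class Plugin1(PluginBase):
    pass

class Plugin2(PluginBase):
    pass

print(PluginBase.subclasses)  # [<class '__main__.Plugin1'>, <class '__main__.Plugin2'>]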
The problem isn't actually caused by the line you've indicated, but by the super call in the __init__ method. The problem remains if you use a metaclass as suggested by dappawit; the reason the example from that answer works is simply that dappawit has simplified your example by omitting the Base class and therefore the super call. In the following example, neither ClassWithMeta nor DecoratedClass work:
registry = {}

def register(cls):
    registry[cls.__name__] = cls()
    return cls

class MetaClass(type):
    def __new__(cls, clsname, bases, attrs):
        newclass = super(MetaClass, cls).__new__(cls, clsname, bases, attrs)
        register(newclass)  # here is your register function
        return newclass

class Base(object):
    pass

class ClassWithMeta(Base):
    __metaclass__ = MetaClass

    def __init__(self):
        super(ClassWithMeta, self).__init__()

@register
class DecoratedClass(Base):
    def __init__(self):
        super(DecoratedClass, self).__init__()
The problem is the same in both cases; the register function is called (either by the metaclass or directly as a decorator) after the class object is created, but before it has been bound to a name. This is where super gets gnarly (in Python 2.x), because it requires you to refer to the class in the super call, which you can only reasonably do by using the global name and trusting that it will have been bound to that name by the time the super call is invoked. In this case, that trust is misplaced.
I think a metaclass is the wrong solution here. Metaclasses are for making a family of classes that have some custom behaviour in common, exactly as classes are for making a family of instances that have some custom behavior in common. All you're doing is calling a function on a class. You wouldn't define a class to call a function on a string, neither should you define a metaclass to call a function on a class.
So, the problem is a fundamental incompatibility between: (1) using hooks in the class creation process to create instances of the class, and (2) using super.
One way to resolve this is to not use super. super solves a hard problem, but it introduces others (this is one of them). If you're using a complex multiple inheritance scheme, super's problems are better than the problems of not using super, and if you're inheriting from third-party classes that use super then you have to use super. If neither of those conditions are true, then just replacing your super calls with direct base class calls may actually be a reasonable solution.
Another way is to not hook register into class creation. Adding register(MyClass) after each of your class definitions is pretty much equivalent to adding @register before them or __metaclass__ = Registered (or whatever you call the metaclass) in them. A line down at the bottom is much less self-documenting than a nice declaration up the top of the class though, so this doesn't feel great, but again it may actually be a reasonable solution; a sketch follows.
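A sketch of that alternative, reusing the register function and Base class from above:

class MyClass(Base):
    def __init__(self):
        super(MyClass, self).__init__()

register(MyClass)  # by now the name MyClass is bound, so the super call works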
Finally, you can turn to hacks that are unpleasant, but will probably work. The problem is that a name is being looked up in a module's global scope just before it's been bound there. So you could cheat, as follows:
def register(cls):
    name = cls.__name__
    force_bound = False
    if '__init__' in cls.__dict__:
        cls.__init__.func_globals[name] = cls
        force_bound = True
    try:
        registry[name] = cls()
    finally:
        if force_bound:
            del cls.__init__.func_globals[name]
    return cls
Here's how this works:
We first check to see whether __init__ is in cls.__dict__ (as opposed to whether it has an __init__ attribute, which will always be true). If it's inherited its __init__ method from another class, we're probably fine (because the superclass will already be bound to its name in the usual way), and the magic we're about to do doesn't work on object.__init__, so we want to avoid trying that if the class is using a default __init__.
We look up the __init__ method and grab its func_globals dictionary, which is where global lookups (such as to find the class referred to in a super call) will go. This is normally the global dictionary of the module where the __init__ method was originally defined. Such a dictionary is about to have the cls.__name__ inserted into it as soon as register returns, so we just insert it ourselves early.
We finally create an instance and insert it into the registry. This is in a try/finally block to make sure we remove the binding we created whether or not creating an instance throws an exception; this is very unlikely to be necessary (since 99.999% of the time the name is about to be rebound anyway), but it's best to keep weird magic like this as insulated as possible to minimise the chance that someday some other weird magic interacts badly with it.
This version of register will work whether it's invoked as a decorator or by the metaclass (which I still think is not a good use of a metaclass). There are some obscure cases where it will fail though:
I can imagine a weird class that doesn't have an __init__ method but inherits one that calls self.someMethod, where someMethod is overridden in the class being defined and makes a super call. Probably unlikely.
The __init__ method might have been defined in another module originally and then used in the class by doing __init__ = externally_defined_function in the class block. Its func_globals attribute would then refer to that other module, which means our temporary binding would clobber any definition of this class' name in that module (oops). Again, unlikely.
Probably other weird cases I haven't thought of.
You could try to add more hacks to make it a little more robust in these situations, but the nature of Python is both that these kind of hacks are possible and that it's impossible to make them absolutely bullet proof.
The answers here didn't work for me in Python 3, because __metaclass__ didn't work.
Here's my code registering all subclasses of a class at their definition time:
registered_models = set()

class RegisteredModel(type):
    def __new__(cls, clsname, superclasses, attributedict):
        newclass = type.__new__(cls, clsname, superclasses, attributedict)
        # condition to prevent base class registration
        if superclasses:
            registered_models.add(newclass)
        return newclass

class CustomDBModel(metaclass=RegisteredModel):
    pass

class BlogpostModel(CustomDBModel):
    pass

class CommentModel(CustomDBModel):
    pass

# prints out {<class '__main__.BlogpostModel'>, <class '__main__.CommentModel'>}
print(registered_models)
Calling the Base class directly should work (instead of using super()):
def __init__(self):
    Base.__init__(self)
It can also be done with something like this (without a register function):
_registry = {}

class MetaClass(type):
    def __init__(cls, clsname, bases, methods):
        super().__init__(clsname, bases, methods)
        _registry[cls.__name__] = cls

class MyClass1(metaclass=MetaClass): pass
class MyClass2(metaclass=MetaClass): pass

print(_registry)
# {'MyClass1': <class '__main__.MyClass1'>, 'MyClass2': <class '__main__.MyClass2'>}
Additionally, if we need to use a base abstract class (e.g. a Base class), we can do it this way (notice the metaclass inherits from ABCMeta instead of type):
from abc import ABCMeta

_registry = {}

class MetaClass(ABCMeta):
    def __init__(cls, clsname, bases, methods):
        super().__init__(clsname, bases, methods)
        _registry[cls.__name__] = cls

class Base(metaclass=MetaClass): pass
class MyClass1(Base): pass
class MyClass2(Base): pass

print(_registry)
# {'Base': <class '__main__.Base'>, 'MyClass1': <class '__main__.MyClass1'>, 'MyClass2': <class '__main__.MyClass2'>}
I have read posts like these:
What is a metaclass in Python?
What are your (concrete) use-cases for metaclasses in Python?
Python's Super is nifty, but you can't use it
But somehow I got confused. Many confusions like:
When and why would I have to do something like the following?
# Refer link1
return super(MyType, cls).__new__(cls, name, bases, newattrs)
or
# Refer link2
return super(MetaSingleton, cls).__call__(*args, **kw)
or
# Refer link2
return type(self.__name__ + other.__name__, (self, other), {})
How does super work exactly?
What are the class registry and unregistry in link1, and how exactly do they work? (I thought it had something to do with singletons. I may be wrong, coming from a C background. My coding style is still a mix of functional and OO.)
What is the flow of class instantiation (subclass, metaclass, super, type) and method invocation (metaclass->__new__, metaclass->__init__, super->__new__, subclass->__init__ inherited from metaclass) with well-commented working code? (Though the first link is quite close, it does not talk about the cls keyword, super(..) or the registry.) Preferably an example with multiple inheritance.
P.S.: I made the last part as code because Stack Overflow formatting was converting the text metaclass->__new__
to metaclass->new
OK, you've thrown quite a few concepts into the mix here! I'm going to pull out a few of the specific questions you have.
In general, understanding super, the MRO and metaclasses is made much more complicated because there have been lots of changes in this tricky area over the last few versions of Python.
Python's own documentation is a very good reference, and completely up to date. There is an IBM developerWorks article which is fine as an introduction and takes a more tutorial-based approach, but note that it's five years old, and spends a lot of time talking about the older-style approaches to meta-classes.
super is how you access an object's super-classes. It's more complex than (for example) Java's super keyword, mainly because of multiple inheritance in Python. As Super Considered Harmful explains, using super() can result in you implicitly using a chain of super-classes, the order of which is defined by the Method Resolution Order (MRO).
You can see the MRO for a class easily by invoking mro() on the class (not on an instance). Note that meta-classes are not in an object's super-class hierarchy.
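For example (a classic diamond; class names are illustrative):

class A(object): pass
class B(A): pass
class C(A): pass
class D(B, C): pass

print(D.mro())             # [D, B, C, A, object]
print(type(D) in D.mro())  # False: the metaclass (type) is not in the MRO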
Thomas' description of meta-classes here is excellent:
A metaclass is the class of a class. Like a class defines how an instance of the class behaves, a metaclass defines how a class behaves. A class is an instance of a metaclass.
In the examples you give, here's what's going on:

1. The call to __new__ is being bubbled up to the next thing in the MRO. In this case, super(MyType, cls) would resolve to type; calling type.__new__ lets Python complete its normal instance creation steps.

2. This example is using meta-classes to enforce a singleton. He's overriding __call__ in the metaclass so that whenever a class instance is created, he intercepts that, and can bypass instance creation if there already is one (stored in cls.instance). Note that overriding __new__ in the metaclass won't be good enough, because that's only called when creating the class. Overriding __new__ on the class would work, however.

3. This shows a way to dynamically create a class. Here he's appending the supplied class's name to the created class name, and adding it to the class hierarchy too.
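As a sketch of how that third snippet might sit inside a metaclass (placing it in __add__ is my assumption; the linked answer may arrange it differently):

class CombiningMeta(type):
    def __add__(self, other):
        # dynamically create a class named after both operands,
        # inheriting from both of them
        return type(self.__name__ + other.__name__, (self, other), {})

class A(metaclass=CombiningMeta): pass
class B(metaclass=CombiningMeta): pass

AB = A + B          # invokes CombiningMeta.__add__
print(AB.__name__)  # AB
print(AB.__mro__)   # (AB, A, B, object)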
I'm not exactly sure what sort of code example you're looking for, but here's a brief one showing meta-classes, inheritance and method resolution:
print('>>> # Defining classes:')

class MyMeta(type):
    def __new__(cls, name, bases, dct):
        print("meta: creating %s %s" % (name, bases))
        return type.__new__(cls, name, bases, dct)

    def meta_meth(cls):
        print("MyMeta.meta_meth")

    __repr__ = lambda c: c.__name__

class A(metaclass=MyMeta):
    def __init__(self):
        super(A, self).__init__()
        print("A init")

    def meth(self):
        print("A.meth")

class B(metaclass=MyMeta):
    def __init__(self):
        super(B, self).__init__()
        print("B init")

    def meth(self):
        print("B.meth")

class C(A, B, metaclass=MyMeta):
    def __init__(self):
        super(C, self).__init__()
        print("C init")

print('>>> c_obj = C()')
c_obj = C()
print('>>> c_obj.meth()')
c_obj.meth()
print('>>> C.meta_meth()')
C.meta_meth()
print('>>> c_obj.meta_meth()')
c_obj.meta_meth()
Example output (using Python >= 3.6):
>>> # Defining classes:
meta: creating A ()
meta: creating B ()
meta: creating C (A, B)
>>> c_obj = C()
B init
A init
C init
>>> c_obj.meth()
A.meth
>>> C.meta_meth()
MyMeta.meta_meth
>>> c_obj.meta_meth()
Traceback (most recent call last):
File "metatest.py", line 41, in <module>
c_obj.meta_meth()
AttributeError: 'C' object has no attribute 'meta_meth'
Here's the more pragmatic answer.
It rarely matters
"What is a metaclass in Python". Bottom line, type is the metaclass of all classes. You have almost no practical use for this.
class X(object):
pass
type(X) == type
"What are your (concrete) use cases for metaclasses in Python?". Bottom line. None.
"Python's Super is nifty, but you can't use it". Interesting note, but little practical value. You'll never have a need for resolving complex multiple inheritance networks. It's easy to prevent this problem from arising by using an explicity Strategy design instead of multiple inheritance.
Here's my experience over the last 7 years of Python programming.
A class has 1 or more superclasses forming a simple chain from my class to object.
The concept of "class" is defined by a metaclass named type. I might want to extend the concept of "class", but so far, it's never come up in practice. Not once. type always does the right thing.
Using super works out really well in practice. It allows a subclass to defer to its superclass. It happens to show up in these metaclass examples because they're extending the built-in metaclass, type.
However, in all subclass situations, you'll make use of super to extend a superclass.
Metaclasses
The metaclass issue is this:
Every object has a reference to its type definition, or "class".
A class is, itself, also an object.
Therefore an object of type class has a reference to its type or "class". The "class" of a "class" is a metaclass.
Since a "class" isn't a C++ run-time object, this doesn't happen in C++. It does happen in Java, Smalltalk and Python.
A metaclass defines the behavior of a class object.
90% of your interaction with a class is to ask the class to create a new object.
10% of the time, you'll be using class methods or class variables ("static" in C++ or Java parlance.)
I have found a few use cases for class-level methods. I have almost no use cases for class variables. I've never had a situation to change the way object construction works.