Using the following code:
class Meta(type):
    def __new__(mcl, name, bases, nmspc):
        return super(Meta, mcl).__new__(mcl, name, bases, nmspc)

class TestClass(object):
    __metaclass__ = Meta
    def __init__(self):
        pass

t = TestClass(2)  # deliberate error
produces the following:
Traceback (most recent call last):
  File "foo.py", line 12, in <module>
    t = TestClass(2)
TypeError: __init__() takes exactly 1 argument (2 given)
However using __call__ instead of __new__ in the following code:
class Meta(type):
    def __call__(cls, *args, **kwargs):
        instance = super(Meta, cls).__call__(*args, **kwargs)
        # do something
        return instance

class TestClass(object):
    __metaclass__ = Meta
    def __init__(self):
        pass

t = TestClass(2)  # deliberate error
gives me the following traceback:
Traceback (most recent call last):
  File "foo.py", line 14, in <module>
    t = TestClass(2)
  File "foo.py", line 4, in __call__
    instance = super(Meta, cls).__call__(*args, **kwargs)
TypeError: __init__() takes exactly 1 argument (2 given)
Does type also trigger the __init__ of the class from its __call__ or is the behaviour changed when I add the metaclass?
Both __new__ and __call__ are being run by the call to the class constructor. Why is __call__ showing up in the error message but not __new__?
Is there a way of suppressing the lines of the traceback showing the __call__ for the metaclass here? i.e. when the error is in the call to the constructor and not the __call__ code?
Let's see if I can answer your three questions:
Does type also trigger the __init__ of the class from its __call__ or is the behaviour changed when I add the metaclass?
The default behavior of type.__call__ is to create a new object with cls.__new__ (which may be inherited from object.__new__, or call it with super). If the object returned from cls.__new__ is an instance of cls, type.__call__ will then run cls.__init__ on it.
If you define your own __call__ method in a custom metaclass, it can do almost anything. Usually though you'll call type.__call__ at some point (via super) and so the same behavior will happen. This isn't required though. You can return anything from a metaclass's __call__ method.
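To make that concrete, here is a rough Python sketch of what type.__call__ does by default (a simplification; the real implementation is in C and handles some argument-passing edge cases differently):

def fake_type_call(cls, *args, **kwargs):
    # Roughly what happens when you write SomeClass(...)
    instance = cls.__new__(cls, *args, **kwargs)
    if isinstance(instance, cls):
        cls.__init__(instance, *args, **kwargs)
    return instance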
Both __new__ and __call__ are being run by the call to the class constructor. Why is __call__ showing up in the error message but not __new__?
You're misunderstanding what Meta.__new__ is for. The __new__ method in a metaclass is not called when you make an instance of the normal class. It is called when you make an instance of the metaclass, which is the class object itself.
Try running this code, to better understand what is going on:
print("Before Meta")
class Meta(type):
def __new__(meta, name, bases, dct):
print("In Meta.__new__")
return super(Meta, meta).__new__(meta, name, bases, dct)
def __call__(cls, *args, **kwargs):
print("In Meta.__call__")
return super(Meta, cls).__call__(*args, **kwargs)
print("After Meta, before Cls")
class Cls(object):
__metaclass__ = Meta
def __init__(self):
print("In Cls.__init__")
print("After Cls, before obj")
obj = Cls()
print("Bottom of file")
The output you'll get is:
Before Meta
After Meta, before Cls
In Meta.__new__
After Cls, before obj
In Meta.__call__
In Cls.__init__
Bottom of file
Note that Meta.__new__ is called where the regular class Cls is defined, not when the instance of Cls is created. The Cls class object is in fact an instance of Meta, so this makes some sense.
The difference in your exception tracebacks comes from this fact. When the exception occurs, the metaclass's __new__ method has long since finished (since if it didn't, there wouldn't have been a regular class to call at all).
Is there a way of suppressing the lines of the traceback showing the __call__ for the metaclass here? i.e. when the error is in the call to the constructor and not the __call__ code?
Yes and no. It's probably possible, but it's almost certainly a bad idea. Python's stack traces will, by default, show you the full call stack (excluding builtin stuff that's implemented in C rather than Python). That's their purpose. The problem causing an exception in your code is not always going to be in the last call, even in less confusing areas than metaclasses.
Consider this trivial example:
def a(*args):
    b(args)  # note, no * here, so a single tuple will be passed on

def b(*args):
    c(*args)

def c():
    print(x)

a()
In this code, there's an error in the a function, but an exception is only raised when b calls c with the wrong number of arguments.
I suppose if you needed to you could pretty things up a bit by editing the data in the stack trace object somewhere, but if you do that automatically it is likely to make things much more confusing if you ever encounter an actual error in the metaclass code.
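If you really did want to go down that road, one approach (a Python 3 sketch only, not something the answer above recommends) is a custom sys.excepthook that drops frames before printing:

import sys
import traceback

def filtering_excepthook(exc_type, exc_value, tb):
    # Drop frames whose function name is __call__ (crude and purely
    # illustrative - hiding frames usually makes debugging harder).
    frames = [f for f in traceback.extract_tb(tb) if f.name != "__call__"]
    print("Traceback (most recent call last):", file=sys.stderr)
    sys.stderr.write("".join(traceback.format_list(frames)))
    sys.stderr.write("".join(traceback.format_exception_only(exc_type, exc_value)))

sys.excepthook = filtering_excepthook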
In fact, what the interpreter is complaining about is that your __init__ accepts no arguments besides self, yet the call passes one.
You should either give __init__ an extra parameter (e.g. def __init__(self, arg)) and call:
t = TestClass('arg')
or keep the original definition and call the class with no arguments:

class TestClass(object):
    __metaclass__ = Meta
    def __init__(self):
        pass

t = TestClass()
I have searched around for an answer to this question but couldn't find anything. My apologies if this was already asked before.
Of the 3-4 methods I know of for enforcing, from a parent class, that a child class defines a given method (editing the __new__ method of a metaclass, hooking into builtins.__build_class__, using __init_subclass__, or using abc.abstractmethod), I usually end up using __init_subclass__, basically because of its ease of use and because, unlike abc.abstractmethod, the constraint on the child class is checked upon child class definition and not class instantiation. Example:
class Par():
    def __init_subclass__(self, *args, **kwargs):
        must_have = 'foo'
        if must_have not in list(self.__dict__.keys()):
            raise AttributeError(f"Must have {must_have}")

    def __init__(self):
        pass

class Chi(Par):
    def __init__(self):
        super().__init__()
This example code will obviously throw an error, since Chi does not have a foo method. Nevertheless, I kind of just came across the fact that this constraint from the upstream class can be by-passed by using a simple class decorator:
def add_hello_world(Cls):
    class NewCls(object):
        def __init__(self, *args, **kwargs):
            self.instance = Cls(*args, **kwargs)
        def hello_world(self):
            print("hello world")
    return NewCls

@add_hello_world
class Par:
    def __init_subclass__(self, *args, **kwargs):
        must_have = "foo"
        if must_have not in list(self.__dict__.keys()):
            raise AttributeError(f"Must have {must_have}")

    def __init__(self):
        pass

class Chi(Par):
    def __init__(self):
        super().__init__()

c = Chi()
c.hello_world()
The above code runs without a problem. Now, disregarding the fact that the class I have decorated is Par (and, of course, if Par is library code I might not even have access to it as a user code developer), I cannot really explain this behavior. It is obvious to me that one could use a decorator to add a method or functionality to an existing class, but I had never seen an unrelated decorator (just prints hello world, doesn't even mess with class creation) disable a method already present in the class.
Is this an intended Python behavior? Or is this some kind of bug? To be honest, in my understanding, this might present some security concerns.
Does this happen only to the __init_subclass__ data model? Or also to others?
Remember, decorator syntax is just function application:
class Par:
    def __init_subclass__(...):
        ...

Par = add_hello_world(Par)
The class originally bound to Par defined __init_subclass__; the new class defined inside add_hello_world does not, and that's the class that the post-decoration name Par refers to, and the class that you are subclassing.
Incidentally, you can still access the original class Par via the closure of the new class's __init__.
Calling the decorator explicitly:
class Par:
    def __init_subclass__(self, *args, **kwargs):
        must_have = "foo"
        if must_have not in list(self.__dict__.keys()):
            raise AttributeError(f"Must have {must_have}")

    def __init__(self):
        pass

Foo = Par  # Keep this for confirmation
Par = add_hello_world(Par)
we can confirm that the closure keeps a reference to the original class:
>>> Par.__init__.__closure__[0].cell_contents
<class '__main__.Par'>
>>> Par.__init__.__closure__[0].cell_contents is Par
False
>>> Par.__init__.__closure__[0].cell_contents is Foo
True
And if you did try to subclass it, you would get the expected error:
>>> class Bar(Par.__init__.__closure__[0].cell_contents):
...     pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "tmp.py", line 16, in __init_subclass__
    raise AttributeError(f"Must have {must_have}")
AttributeError: Must have foo
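For what it's worth, here is a sketch (not from the original answer) of a decorator that avoids the problem entirely by attaching the method to the original class and returning that same class, so __init_subclass__ stays in place:

def add_hello_world(Cls):
    # Mutate and return the original class instead of hiding it
    # behind a wrapper class; __init_subclass__ keeps working.
    def hello_world(self):
        print("hello world")
    Cls.hello_world = hello_world
    return Cls

With this version, subclassing the decorated Par without defining foo would still raise the AttributeError.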
In the code underneath, I would like that when I type instance_of_A = A(, the name of the expected argument, init_argumentA, is shown rather than *meta_args, **meta_kwargs. Unfortunately, the arguments of the metaclass's __call__ method are shown instead.
class Meta(type):
    def __call__(cls, *meta_args, **meta_kwargs):
        # Something here
        return super().__call__(*meta_args, **meta_kwargs)

class A(metaclass=Meta):
    def __init__(self, init_argumentA):
        # something here
        pass

class B(metaclass=Meta):
    def __init__(self, init_argumentB):
        # something here
        pass
I have searched for a solution and found the questions How to dynamically change signatures of method in subclass?
and Signature-changing decorator: properly documenting additional argument. But neither seems to be exactly what I want. The first uses inspect to change the number of parameters given to a function, but I can't seem to make it work for my case, and I think there has to be a more obvious solution.
The second one isn't completely what I want either, but something along those lines might be a good alternative.
Edit: I am working in Spyder. I want this because I have thousands of classes of the Meta type and each class has different arguments, which is impossible to remember, so the idea is that the user is reminded when the correct arguments show up.
Using the code you provided, you can change the Meta class
class Meta(type):
    def __call__(cls, *meta_args, **meta_kwargs):
        # Something here
        return super().__call__(*meta_args, **meta_kwargs)

class A(metaclass=Meta):
    def __init__(self, x):
        pass
to
import inspect

class Meta(type):
    def __call__(cls, *meta_args, **meta_kwargs):
        # Something here

        # Restore the signature of __init__
        sig = inspect.signature(cls.__init__)
        parameters = tuple(sig.parameters.values())
        cls.__signature__ = sig.replace(parameters=parameters[1:])

        return super().__call__(*meta_args, **meta_kwargs)
Now IPython or some IDE will show you the correct signature.
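You can verify this from plain Python as well; note that because __signature__ is assigned inside __call__, it only exists once the class has been instantiated at least once (which is what the next answer addresses):

import inspect

A(1)                         # triggers Meta.__call__, which assigns A.__signature__
print(inspect.signature(A))  # (x) - inspect.signature uses __signature__ when it is set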
I found that the answer of @johnbaltis was 99% there but not quite what was needed to ensure the signatures were in place.
If we use __init__ rather than __call__ as below we get the desired behaviour
import inspect

class Meta(type):
    def __init__(cls, clsname, bases, attrs):
        # Restore the signature
        sig = inspect.signature(cls.__init__)
        parameters = tuple(sig.parameters.values())
        cls.__signature__ = sig.replace(parameters=parameters[1:])
        return super().__init__(clsname, bases, attrs)

    def __call__(cls, *args, **kwargs):
        instance = super().__call__(*args, **kwargs)
        print(f'Instantiated: {cls.__name__}')
        return instance

class A(metaclass=Meta):
    def __init__(self, x: int, y: str):
        pass
which will correctly give:
In [12]: A?
Init signature: A(x: int, y: str)
Docstring: <no docstring>
Type: Meta
Subclasses:
In [13]: A(0, 'y')
Instantiated: A
OK - even though your reason for wanting this seems misguided, as any "honest" Python inspecting tool should show the __init__ signature, what you ask for requires generating a dynamic metaclass for each class, whose __call__ method has the same signature as the class's own __init__ method.
For faking the __init__ signature on __call__ we can simply use functools.wraps (but you might want to check the answers at https://stackoverflow.com/a/33112180/108205 ).
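As a quick, self-contained illustration of the functools.wraps part (with made-up names):

import functools
import inspect

def target(a, b, c=3):
    pass

@functools.wraps(target)
def wrapper(*args, **kwargs):
    return target(*args, **kwargs)

print(inspect.signature(wrapper))  # (a, b, c=3) - wraps sets __wrapped__, which inspect follows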
And for dynamically creating an extra metaclass, that can be done in the metaclass's __new__ itself, with just some care to avoid infinite recursion on the __new__ method - a threading.Lock can help with that in a more consistent way than a simple global flag.
from functools import wraps
from threading import Lock

creation_locks = {}

class M(type):
    def __new__(metacls, name, bases, namespace):
        lock = creation_locks.setdefault(name, Lock())
        if lock.locked():
            return super().__new__(metacls, name, bases, namespace)
        with lock:
            def __call__(cls, *args, **kwargs):
                return super().__call__(*args, **kwargs)
            new_metacls = type(metacls.__name__ + "_sigfix", (metacls,), {"__call__": __call__})
            cls = new_metacls(name, bases, namespace)
            wraps(cls.__init__)(__call__)
            del creation_locks[name]
            return cls
I initially thought of using a named parameter to the metaclass __new__ to control recursion, but it would then be passed on to the created class's __init_subclass__ method (which would result in an error) - hence the Lock.
Not sure if this helps the author but in my case I needed to change inspect.signature(Klass) to inspect.signature(Klass.__init__) to get signature of class __init__ instead of metaclass __call__.
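A minimal illustration of that tip (with a hypothetical class):

import inspect

class Klass:
    def __init__(self, x: int, y: str):
        pass

# Inspecting __init__ directly always reports its own parameters,
# regardless of what the class (and its metaclass __call__) would show.
print(inspect.signature(Klass.__init__))  # (self, x: int, y: str)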
I was planning to use a metaclass to validate the constructor argument in Python 3, but it seems the __new__ method has no access to the variable val, because the class A() has not been instantiated yet.
So what's the correct way to do it?
class MyMeta(type):
    def __new__(cls, clsname, superclasses, attributedict):
        print("clsname: ", clsname)
        print("superclasses: ", superclasses)
        print("attributedict: ", attributedict)
        return type.__new__(cls, clsname, superclasses, attributedict)

class A(metaclass=MyMeta):
    def __init__(self, val):
        self.val = val

A(123)
... it seems the __new__ method has no access to the variable val, because the class A() has not been instantiated yet.
Exactly.
So what's the correct way to do it?
Not with a metaclass.
Metaclasses are for fiddling with the creation of the class object itself, and what you want to do is related to instances of the class.
Best practice: don't type-check the val at all. Pythonic code is duck-typed. Simply document that you expect a string-like argument, and users who put garbage in get garbage out.
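If you do decide you want the check anyway, the non-metaclass place for it is simply __init__ itself - a minimal sketch:

class A:
    def __init__(self, val):
        if not isinstance(val, str):
            raise TypeError('val must be a string')
        self.val = val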
wim is absolutely correct that this isn't a good use of metaclasses, but it's certainly possible (and easy, too).
Consider how you would create a new instance of your class. You do this:
A(123)
In other words: You create an instance by calling the class. And python allows us to create custom callable objects by defining a __call__ method. So all we have to do is to implement a suitable __call__ method in our metaclass:
class MyMeta(type):
    def __call__(self, val):
        if not isinstance(val, str):
            raise TypeError('val must be a string')
        return super().__call__(val)

class A(metaclass=MyMeta):
    def __init__(self, val):
        self.val = val
And that's it. Simple, right?
>>> A('foo')
<__main__.A object at 0x007886B0>
>>> A(123)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "untitled.py", line 5, in __call__
    raise TypeError('val must be a string')
TypeError: val must be a string
I am trying to get my head around metaclasses but I still can't really get the concept.
As far as I know:
Any class is itself an instance of type "type" - therefore "calling" a class just calls the method __call__ on its class - which happens to be type's __call__. The effect of type.__call__ is exactly this: on code like:
class A:
    pass

b = A()
The sequence of steps I know here is:
1. type.__call__ receives the class A itself as its first parameter.
2. It calls A.__new__ - in pseudocode we could write instance = A.__new__(cls) as what runs.
3. That returns an instance of the A class.
4. Then it calls __init__ on the instance (instance.__init__()).
5. ...and returns that instance (return instance).
But now consider the below code:
class MetaOne(type):
    def __new__(meta, classname, supers, classdict):
        print('In MetaOne.new:', meta, classname, supers, classdict, sep='\n...')
        return type.__new__(meta, classname, supers, classdict)

class Eggs:
    pass

print('making class')

class Spam(Eggs, metaclass=MetaOne):
    data = 1
    def meth(self, arg):
        return self.data + arg

print('making instance')
X = Spam()
print('data:', X.data, X.meth(2))
The output from this script is as follows:
making class
In MetaOne.new:
...<class '__main__.MetaOne'>
...Spam
...(<class '__main__.Eggs'>,)
...{'__qualname__': 'Spam', '__module__': '__main__', 'meth': <function Spam.meth at 0x00000000010C1D08>, 'data': 1}
making instance
data: 1 3
So as per my understanding this is the sequence of steps:
Since Spam is an instance of MetaOne, calling X = Spam() would try to call the __call__ method of the MetaOne class, which is not there.
Since MetaOne inherits from type, it would call the __call__ method of the type class with Spam as the first argument.
After that the call lands in the __new__ method of the MetaOne class, but it should contain Spam as the first param.
Where does the meta argument of MetaOne come into the picture?
Please help me with my understanding.
Since Spam is an instance of MetaOne, calling X = Spam() would try to call the __call__ method of MetaOne class which is not there.
That is the source of your confusion - the __call__ (or __new__ and __init__) of the metaclass is not called when you create ordinary instances of the class.
Also, since there is no __call__ method for MetaOne, the usual inheritance rules apply: the __call__ method on MetaOne's superclass is used (and it is type.__call__)
The metaclass' __new__ and __init__ methods are invoked when the class body itself is executed (as you can see in your example, the "print" in the metaclass' __new__ shows up before the "making instance" text).
When creating an instance of Spam itself, the metaclass methods __new__ and __init__ are not called - the metaclass __call__ is called - and it is that which executes the class's (Spam's) __new__ and __init__. In other words: the metaclass __call__ is responsible for calling the "ordinary" class's __new__ and __init__.
Since MetaOne inherits from type it would call the __call__ method of type class with Spam as the first argument.
And it does, but you made no print statements to "view" this happening:
class MyMeta(type):
    def __new__(metacls, name, bases, namespace):
        print("At meta __new__")
        return super().__new__(metacls, name, bases, namespace)

    def __call__(cls, *args, **kwd):
        print("at meta __call__")
        return super().__call__(*args, **kwd)

class Egg(metaclass=MyMeta):
    def __new__(cls):
        print("at class __new__")
If I paste this at the interactive console, at this point it prints:
At meta __new__
Then, going on the interactive session:
In [4]: fried = Egg()
at meta __call__
at class __new__
And the extra mind-twisting thing is that "type is type's own metaclass": meaning that type's __call__ is also responsible for running the __new__ and __init__ methods of the metaclass itself, when a new (non-meta) class body is executed.
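You can see that self-referential fact directly:

print(type(type) is type)      # True - type is its own metaclass
print(isinstance(type, type))  # True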
Say I've got a metaclass and a class using it:
class Meta(type):
    def __call__(cls, *args):
        print "Meta: __call__ with", args

class ProductClass(object):
    __metaclass__ = Meta
    def __init__(self, *args):
        print "ProductClass: __init__ with", args

p = ProductClass(1)
Output as follows:
Meta: __call__ with (1,)
Question:
Why isn't ProductClass.__init__ triggered...just because of Meta.__call__?
UPDATE:
Now, I add __new__ for ProductClass:
class ProductClass(object):
    __metaclass__ = Meta

    def __new__(cls, *args):
        print "ProductClass: __new__ with", args
        return super(ProductClass, cls).__new__(cls, *args)

    def __init__(self, *args):
        print "ProductClass: __init__ with", args

p = ProductClass(1)
Is it Meta.__call__'s responsibility to call ProductClass's __new__ and __init__?
There is a difference in OOP between extending a method and overriding it. What you just did in your metaclass Meta is called overriding, because you defined your own __call__ method and didn't call the parent's __call__. To get the behavior you want, you have to extend the __call__ method by calling the parent method:
class Meta(type):
    def __call__(cls, *args):
        print "Meta: __call__ with", args
        return super(Meta, cls).__call__(*args)
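With that change, instantiating the class goes through type.__call__ again, so (assuming the original ProductClass without a custom __new__) the output would be something like:
Meta: __call__ with (1,)
ProductClass: __init__ with (1,)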
Yes - it's up to Meta.__call__ to call ProductClass.__init__ (or not, as the case may be).
To quote the documentation:
for example defining a custom __call__() method in the metaclass allows custom behavior when the class is called, e.g. not always creating a new instance.
That page also mentions a scenario where the metaclass's __call__ may return an instance of a different class (i.e. not ProductClass in your example). In this scenario it would clearly be inappropriate to call ProductClass.__init__ automatically.
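To illustrate that last point, here is a small sketch (in the same Python 2 style as the question, with made-up class names) of a metaclass __call__ that returns an instance of a different class, in which case running the called class's __init__ would make no sense:

class Alt(object):
    pass

class FactoryMeta(type):
    def __call__(cls, *args, **kwargs):
        # Never delegate to type.__call__, so neither __new__ nor
        # __init__ of the class being called ever runs.
        return Alt()

class Product(object):
    __metaclass__ = FactoryMeta
    def __init__(self, *args):
        print "Product: __init__ with", args   # never printed

p = Product(1)
print type(p)   # <class '__main__.Alt'>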