I have been wondering this for a while now, and I hope this isn't a stupid question with an obvious answer I'm not realizing: Why can't I just call the __init__ method of super() like super()()? I have to call the method like this instead: super().__init__()
Here is an example that gets a TypeError: 'super' object is not callable error when I run it (specifically on line 6 that comes from line 3 in __init__):
class My_Int(int):
    def __init__(self,value,extr_data=None):
        super()(value)
        self.extr_data=extr_data

x=My_Int(3)
Doesn't super() get the inherited class int making super()(value) the same as int(value)?
Furthermore, why can't I use the len function with super() when inheriting from the class list? Doesn't len() do the same as __len__()?
class My_List(list):
    def some_method1(self):
        print(len(super()))

    def some_method2(self):
        print(super().__len__())

x=My_List((1,2,3,4))
x.some_method2()
x.some_method1()
This example prints 4 and then an error as expected. Here is the output exactly:
4
Traceback (most recent call last):
File "/home/user/test.py", line 11, in <module>
x.some_method1()
File "/home/user/test.py", line 3, in some_method1
print(len(super()))
TypeError: object of type 'super' has no len()
Notice I called some_method2 before calling some_method1 (sorry for the confusion).
Am I missing something obvious here?
P.S. Thanks for all the help!
super() objects can't intercept most special method calls: special methods bypass the instance and are looked up on the type directly, and the developers didn't want to implement every special method on super when many of them won't apply for any given usage. This case gets even weirder: super()() would look up a __call__ method on the super type itself and pass it the super instance.
The developers don't do this because it's ambiguous and not particularly explicit. Does super()() mean invoke the superclass's __init__? Its __call__? What if we're in a __new__ method: do you invoke __new__, __init__, or both? Does this mean all super uses must implicitly know which method they're called in (even more magical than knowing the class they were defined in and the self passed when constructed with zero arguments)?
Rather than deal with all this, and to avoid implementing all the special methods on super just so it can delegate them when they exist on the instance in question, the developers required you to explicitly specify the special method you intend to call.
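For example, here is a minimal sketch of what the explicit spelling looks like for both of the cases in the question (the int case also needs __new__, because int is immutable and its value is fixed at creation time):

class My_Int(int):
    # int is immutable, so the numeric value has to be handled in __new__;
    # __init__ runs too late to influence the int's value.
    def __new__(cls, value, extr_data=None):
        self = super().__new__(cls, value)   # explicitly name the special method
        self.extr_data = extr_data
        return self

class My_List(list):
    def some_method(self):
        # spell out the special method you intend to call
        return super().__len__()

x = My_Int(3, "note")
print(x + 1, x.extr_data)                    # 4 note
print(My_List((1, 2, 3, 4)).some_method())   # 4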
Related
If your question was closed as a duplicate of this, it is because you had a code sample including something along the lines of either:
class Example:
    def __int__(self, parameter):
        self.attribute = parameter
or:
class Example:
    def _init_(self, parameter):
        self.attribute = parameter
When you subsequently attempt to create an instance of the class, an error occurs:
>>> Example("an argument")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Example() takes no arguments
Alternately, instances of the class seem to be missing attributes:
>>> class Example:
...     def __int__(self): # or _init_
...         self.attribute = 'value'
...
>>> Example().attribute
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'Example' object has no attribute 'attribute'
You might also wonder: what do these exception messages mean, and how do they relate to the problem? Why didn't a problem occur earlier, for example, with the class definition itself? How else might the problem manifest? How can I guard against this problem in the future?
This is an artificial canonical duplicate created specifically to head off two of the most common typographical errors in code written by new Python programmers. While questions caused by a typo are normally closed for that reason, there are some useful things to explain in this case, and having a duplicate target allows for closing questions faster. I have tried to design the question to be easy to search for.
This is because the code has a simple typographical error: the method should instead be named __init__ - note the spelling, and note that there are two underscores on each side.
What do the exception messages mean, and how do they relate to the problem?
As one might guess, a TypeError is an Error that has to do with the Type of something. In this case, the meaning is a bit strained: Python also uses this error type for function calls where the arguments (the things you put in between () in order to call a function, class constructor or other "callable") cannot be properly assigned to the parameters (the things you put between () when writing a function using the def syntax).
In the examples where a TypeError occurs, the class constructor for Example does not take arguments. Why? Because it is using the base object constructor, which does not take arguments. That is just following the normal rules of inheritance: there is no __init__ defined locally, so the one from the superclass - in this case, object - is used.
Similarly, an AttributeError is an Error that has to do with the Attributes of something. This is quite straightforward: the instance of Example doesn't have any .attribute attribute, because the constructor (which, again, comes from object due to the typo) did not set one.
Why didn't a problem occur earlier, for example, with the class definition itself?
Because the method with a wrongly typed name is still syntactically valid. Only syntax errors (reported as SyntaxError; yes, it's an exception, and yes, there are valid uses for it in real programs) can be caught before the code runs. Python does not assign any special meaning to methods named _init_ (with one underscore on each side), so it does not care what the parameters are. While __int__ is used for converting instances of the class to integer, and shouldn't have any parameters besides self, it is still syntactically valid.
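For what it's worth, here is a minimal sketch (with a made-up class name) of what a deliberate __int__ is actually for:

class Nickel:
    def __int__(self):
        # __int__ defines what int(obj) returns; it is not an initializer
        return 5

print(int(Nickel()))   # 5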
Your IDE might be able to warn you about an __int__ method that takes suspicious parameters (i.e., anything besides self). However, a) that doesn't completely solve the problem (see below), and b) the IDE might have helped you get it wrong in the first place (by making a bad autocomplete suggestion).
The _init_ typo seems to be much less common nowadays. My guess is that people used to do this after reading example code out of books with poor typesetting.
How else might the problem manifest?
In the case where an instance is successfully created (but not properly initialized), any kind of problem could potentially happen later (depending on why proper initialization was needed). For example:
BOMB_IS_SET = True

class DefusalExpert():
    def __int__(self):
        global BOMB_IS_SET
        BOMB_IS_SET = False

    def congratulate(self):
        global BOMB_IS_SET
        if BOMB_IS_SET:
            raise RuntimeError("everything blew up, gg")
        else:
            print("hooray!")
If you intend for the class to be convertible to integer and also wrote __int__ deliberately, the last one will take precedence:
class LoneliestNumber:
    def __int__(self):
        return 1

    def __int__(self): # was supposed to be __init__
        self.two = "can be as bad"
>>> int(LoneliestNumber())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __int__ returned non-int (type NoneType)
(Note that __int__ will not be used implicitly to convert instances of the class to an index for a list or tuple. That's done by __index__.)
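A minimal sketch of that distinction, with a made-up class name:

class Position:
    def __index__(self):
        # consulted when the object appears as a sequence index or slice bound
        return 2

print(['a', 'b', 'c', 'd'][Position()])   # 'c'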
How might I guard against the problem in the future?
There is no magic bullet. I find it helps a little to have the convention of always putting __init__ (and/or __new__) as the first method in a class, if the class needs one. However, there is no substitute for proofreading, or for training.
While trying to implement some deep magic that I'd rather not get into here (I should be able to figure it out if I get an answer for this), it occurred to me that __new__ doesn't work the same way for classes that define it as for classes that don't. Specifically: when you define __new__ yourself, it will be passed arguments that mirror those of __init__, but the default implementation doesn't accept any. This makes some sense, in that object is a builtin type and doesn't need those arguments for itself.
However, it leads to the following behaviour, which I find quite vexatious:
>>> class example:
...     def __init__(self, x): # a parameter other than `self` is necessary to reproduce
...         pass
...
>>> example(1) # no problem, we can create instances.
<__main__.example object at 0x...>
>>> example.__new__ # it does exist:
<built-in method __new__ of type object at 0x...>
>>> old_new = example.__new__ # let's store it for later, and try something evil:
>>> example.__new__ = 'broken'
>>> example(1) # Okay, of course that will break it...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'str' object is not callable
>>> example.__new__ = old_new # but we CAN'T FIX IT AGAIN
>>> example(1) # the argument isn't accepted any more:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object.__new__() takes exactly one argument (the type to instantiate)
>>> example() # But we can't omit it either due to __init__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __init__() missing 1 required positional argument: 'x'
Okay, but that's just because we still have something explicitly attached to example, so it's shadowing the default, which breaks some descriptor thingy... right? Except not:
>>> del example.__new__ # if we get rid of it, the problem persists
>>> example(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object.__new__() takes exactly one argument (the type to instantiate)
>>> assert example.__new__ is old_new # even though the lookup gives us the same object!
The same thing still happens if we directly add and remove the attribute, without replacing it in between. Simply assigning and removing an attribute breaks the class, apparently irrevocably, and makes it impossible to instantiate. It's as if the class had some hidden attribute that tells it how to call __new__, which has been silently corrupted.
When we instantiate example at the start, how actually does Python find the base __new__ (it apparently finds object.__new__, but is it looking directly in object? Getting there indirectly via type? Something else?), and how does it decide that this __new__ should be called without arguments, even though it would pass an argument if we wrote a __new__ method inside the class? Why does that logic break if we temporarily mess with the class' __new__, even if we restore everything such that there is no observable net change?
The issues you're seeing aren't related to how Python finds __new__ or chooses its arguments. __new__ receives every argument you're passing. The effects you observed come from specific code in object.__new__, combined with a bug in the logic for updating the C-level tp_new slot.
There's nothing special about how Python passes arguments to __new__. What's special is what object.__new__ does with those arguments.
object.__new__ and object.__init__ expect one argument, the class to instantiate for __new__ and the object to initialize for __init__. If they receive any extra arguments, they will either ignore the extra arguments or throw an exception, depending on what methods have been overridden:
If a class overrides exactly one of __new__ or __init__, the non-overridden object method should ignore extra arguments, so people aren't forced to override both.
If a subclass __new__ or __init__ explicitly passes extra arguments to object.__new__ or object.__init__, the object method should raise an exception.
If neither __new__ nor __init__ are overridden, both object methods should throw an exception for extra arguments.
There's a big comment in the source code talking about this.
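Those three cases are visible from pure Python; here is a minimal sketch (class names made up, and the exact error wording varies between Python versions):

class OnlyInit:                 # overrides __init__ only
    def __init__(self, x):
        self.x = x

class Neither:                  # overrides neither __new__ nor __init__
    pass

OnlyInit(1)                     # works: object.__new__ ignores the extra argument

try:
    Neither(1)                  # both are defaults, so extra arguments are rejected
except TypeError as e:
    print(e)                    # e.g. "Neither() takes no arguments"

try:
    object.__init__(OnlyInit(1), 1)   # explicitly passing extras to object.__init__
except TypeError as e:
    print(e)                    # e.g. "object.__init__() takes exactly one argument ..."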
At C level, __new__ and __init__ correspond to tp_new and tp_init function pointer slots in a class's memory layout. Under normal circumstances, if one of these methods is implemented in C, the slot will point directly to the C-level implementation, and a Python method object will be generated wrapping the C function. If the method is implemented in Python, the slot will point to the slot_tp_new function, which searches the MRO for a __new__ method object and calls it. When instantiating an object, Python will invoke __new__ and __init__ by calling the tp_new and tp_init function pointers.
object.__new__ is implemented by the object_new C-level function, and object.__init__ is implemented by object_init. object's tp_new and tp_init slots are set to point to these functions.
object_new and object_init check whether they're overridden by checking a class's tp_new and tp_init slots. If tp_new points to something other than object_new, then __new__ has been overridden, and similar for tp_init and __init__.
static PyObject *
object_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
    if (excess_args(args, kwds)) {
        if (type->tp_new != object_new) {
            PyErr_SetString(PyExc_TypeError,
                            "object.__new__() takes exactly one argument (the type to instantiate)");
            return NULL;
        }
        ...
Now, when you assign or delete __new__, Python has to update the tp_new slot to reflect this. When you assign __new__ on a class, Python sets the class's tp_new slot to the generic slot_tp_new function, which searches for a __new__ method and calls it. When you delete __new__, the class is supposed to re-inherit tp_new from the superclass, but the code has a bug:
else if (Py_TYPE(descr) == &PyCFunction_Type &&
         PyCFunction_GET_FUNCTION(descr) ==
         (PyCFunction)(void(*)(void))tp_new_wrapper &&
         ptr == (void**)&type->tp_new)
{
    /* The __new__ wrapper is not a wrapper descriptor,
       so must be special-cased differently.
       If we don't do this, creating an instance will
       always use slot_tp_new which will look up
       __new__ in the MRO which will call tp_new_wrapper
       which will look through the base classes looking
       for a static base and call its tp_new (usually
       PyType_GenericNew), after performing various
       sanity checks and constructing a new argument
       list.  Cut all that nonsense short -- this speeds
       up instance creation tremendously. */
    specific = (void *)type->tp_new;
    /* XXX I'm not 100% sure that there isn't a hole
       in this reasoning that requires additional
       sanity checks.  I'll buy the first person to
       point out a bug in this reasoning a beer. */
}
In the specific = (void *)type->tp_new; line, type is the wrong type - it's the class whose slot we're trying to update, not the class we're supposed to inherit tp_new from.
When this code finds a __new__ method written in C, instead of updating tp_new to point to the corresponding C function, it sets tp_new to whatever value it already had! It doesn't change tp_new at all!
So initially, your example class has tp_new set to object_new, and object_new ignores extra arguments because it sees that __init__ is overridden and __new__ isn't.
When you set example.__new__ = 'broken', Python sets example's tp_new to slot_tp_new. Nothing you do after that point changes tp_new to anything else, even though del example.__new__ really should have.
When object_new finds that example's tp_new is slot_tp_new instead of object_new, it rejects extra arguments and throws an exception.
The bug manifests in some other ways too. For example,
>>> class Example: pass
...
>>> Example.__new__ = tuple.__new__
>>> Example()
<__main__.Example object at 0x7f9d0a38f400>
Before the __new__ assignment, Example has tp_new set to object_new. When the example does Example.__new__ = tuple.__new__, Python finds that tuple.__new__ is implemented in C, so it fails to update tp_new, leaving it set to object_new. Then, when the example calls Example(), tuple.__new__ should raise an exception, because tuple.__new__ isn't applicable to Example:
>>> tuple.__new__(Example)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: tuple.__new__(Example): Example is not a subtype of tuple
but because tp_new is still set to object_new, object_new gets called instead of tuple.__new__.
The devs have tried to fix the buggy code several times, but each fix was itself buggy and got reverted. The second attempt got closer, but broke multiple inheritance - see the conversation in the bug tracker.
While executing the following code:
class Test():
    def __init__(self):
        self.hi_there()
        self.a = 5

    def hi_there(self):
        print(self.a)

new_object = Test()
new_object.hi_there()
I have received an error:
Traceback (most recent call last):
File "/root/a.py", line 241, in <module>
new_object = Test()
File "/root/a.py", line 233, in __init__
self.hello()
File "/root/a.py", line 238, in hello
print(self.a)
AttributeError: 'Test' object has no attribute 'a'
Why do we need to specify self inside the function when the object is not initialized yet? Being able to call the hi_there() function means the object is already set up, but how can that be if the other variables attached to this instance haven't been initialized yet?
What is the self inside the __init__ function if it's not a "full" object yet?
Clearly this part of the code works:
class Test():
    def __init__(self):
        #self.hi_there()
        self.a = 5
        self.hi_there()

    def hi_there(self):
        print(self.a)

new_object = Test()
new_object.hi_there()
I come from the C++ world, where you have to declare variables before you assign them.
I fully understand the use of self. What I don't understand is what the use of self inside __init__() is if the self object is not fully initialized yet.
There is no magic. By the time __init__ is called, the object has been created and its methods are defined; __init__ is your chance to set the instance attributes and do any other initialization. If you look at the execution in __init__:
def __init__(self):
    self.hi_there()
    self.a = 5

def hi_there(self):
    print(self.a)
the first thing that happens in __init__ is that hi_there is called. The method already exists, so the call works, and we drop into hi_there(), which does print(self.a). But this is the problem: self.a isn't set yet, because that assignment only happens on the second line of __init__, while hi_there was called from the first line. Execution hasn't reached the line where you set self.a = 5, so there is no way a method call issued before that assignment can use self.a. This is why you get the AttributeError.
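If you genuinely need the early call to work, one possible workaround (just a sketch, not something the language requires) is a class-level default, because attribute lookup falls back to the class when the instance doesn't have the attribute yet:

class Test():
    a = None                  # class-level default, visible before __init__ assigns self.a

    def __init__(self):
        self.hi_there()       # now prints None instead of raising AttributeError
        self.a = 5

    def hi_there(self):
        print(self.a)

Test().hi_there()             # prints None (from the early call), then 5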
Actually, the object has already been created by the time __init__ is called. That's why you need self as a parameter: because of the way Python works internally, you don't have access to the object without self. (Bear in mind that it doesn't need to be called self; you can call it anything you want as long as it is a valid name. The instance is always the first parameter of a method, whatever its name is.)
The truth is that __init__ doesn't create the object, it just initializes it. There is a class method called __new__, which is in charge of creating the instance and returning it. That's where the object is created.
Now, when does the object get its a attribute? That happens in __init__, but you do have access to its methods inside __init__. I'm not completely knowledgeable about how the creation of objects works, but methods are already in place by that point (they are looked up on the class rather than stored on the instance). That doesn't happen with attribute values, so they are not available until you set them yourself in __init__.
Basically, Python creates the object, makes its methods available, and then hands you the instance so you can initialize its attributes.
EDIT
Another thing I forgot to mention. Just like you define __init__, you can define __new__ yourself. It's not very common, but you do it when you need to modify the actual object's creation. I've only seen it when defining metaclasses (What are metaclasses in Python?). Another method you can define in that case is __call__, giving you even more control.
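Here is a minimal sketch (made-up class) of how __new__ and __init__ split the work: __new__ creates and returns the instance, and __init__ then fills in its attributes:

class Demo:
    def __new__(cls, *args, **kwargs):
        print("__new__: creating the instance")
        return super().__new__(cls)        # object.__new__ allocates the bare instance

    def __init__(self, value):
        print("__init__: initializing the already-created instance")
        self.value = value

d = Demo(42)
print(d.value)   # 42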
Not sure what you meant here, but I guess the first code sample was meant to call a hello() function rather than the hi_there() function, since that is what the traceback shows.
Someone correct me if I'm wrong, but in Python, defining a class or a function is dynamic. By this I mean that defining a class or a function happens at runtime: these are regular statements that are executed just like any others.
This language feature allows powerful thing such as decorating the behavior of a function to enrich it with extra functionality (see decorators).
So when you create an instance of the Test class, you try to call the hello() function before you have explicitly set the value of a, and at that point the instance is not YET aware of its a attribute. The code has to be read sequentially.
I have two classes
class Something(object):
    def __init__(self):
        self.thing = "thing"

class SomethingElse(Something):
    def __init__(self):
        self.thing = "another"
as you can see, one inherits from another.
When I run super(SomethingElse), no error is thrown. However, when I run super(SomethingElse).__init__(), I expected an unbound function call (unbound to a hypothetical SomethingElse instance), and so I expected __init__() to complain about not receiving an object for its self parameter. Instead I get this error:
TypeError: super() takes at least 1 argument (0 given)
What is the meaning of this message?
EDIT: I often see people hand-wave answer a super question, so please don't answer unless you really know how the super delegate is working here, and know about descriptors and how they are used with super.
EDIT: Alex suggested I update my post with more details. I'm getting something different now in both ways I used it for 3.6 (Anaconda). Not sure what is going on. I don't receive what Alex did, but I get:
class Something(object):
    def __init__(self):
        self.thing = "thing"

class SomethingElse(Something):
    def __init__(self):
        super(SomethingElse).__init__()
The calls (on Anaconda's 3.6):
SomethingElse()
<no problem>
super(SomethingElse).__init__()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: super(): no arguments
super(SomethingElse).__init__(SomethingElse())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: super() argument 1 must be type, not SomethingElse
My understanding of super was, according to https://docs.python.org/3/library/functions.html#super, that super() with just the first argument leaves the super object unbound to an instance, so that if you called __init__() on the super object, you'd need to pass in an instance because __init__() would be unbound as well. However, with super(SomethingElse).__init__(SomethingElse()), 3.6 complains that SomethingElse isn't a type, which it should be, since it inherits from a parent that inherits from object.
On 2.7.13, super(SomethingElse).__init__() gives the original error, TypeError: super() takes at least 1 argument (0 given). For super(SomethingElse).__init__(SomethingElse()) it throws TypeError: super() argument 1 must be type, not SomethingElse.
Calling super with 1 argument produces an "unbound" super object. Those are weird and undocumented and mostly useless, and I won't go into how they were intended to be used, but for the purposes of this answer, we really only need to know one thing about them.
super(SomethingElse).__init__ doesn't go through the usual super proxy logic. You're getting the super instance's own __init__ method, not anything related to SomethingElse.
From there, the rest of the behavior follows. The TypeError: super() takes at least 1 argument (0 given) on Python 2 is because super.__init__ takes at least 1 argument, and you're passing it 0. (You might expect it to say TypeError: super() takes at least 2 arguments (1 given) because it's still getting self - the super object self, not the SomethingElse instance - but due to weird implementation details, methods implemented in C generally don't count self for this kind of error message.)
SomethingElse() succeeds on Python 3 because the super constructor pulls __class__ and self from the usual stack inspection magic.
Calling super(SomethingElse).__init__() manually from outside the class produces RuntimeError: super(): no arguments because super.__init__ tries to do its stack inspection magic and doesn't find __class__ or self.
super(SomethingElse).__init__(SomethingElse()) fails because the first argument to the super constructor is supposed to be a type, not an instance.
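For contrast, a minimal sketch of the two-argument form, which gives the ordinary bound proxy that the original code presumably wanted:

class Something(object):
    def __init__(self):
        self.thing = "thing"

class SomethingElse(Something):
    def __init__(self):
        # bound form: super(SomethingElse, self) (or just super() in Python 3)
        super(SomethingElse, self).__init__()
        self.other = "another"

obj = SomethingElse()
print(obj.thing, obj.other)   # thing another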
I would like to have a function pointer ptr that can point to either:
a function,
the method of an object instance, or
the constructor of the object.
In the last case, the execution of ptr() should instantiate the class.
def function(argument):
    print("Function called with argument: "+str(argument))

class C(object):
    def __init__(self, argument):
        print("C's __init__ method called with argument: "+str(argument))

    def m(self, argument):
        print("C's method 'm' called with argument: "+str(argument))

## works
ptr = function
ptr('A')

## works
instance = C('asdf')
ptr = instance.m
ptr('A')

## fails
constructorPtr = C.__init__
constructorPtr('A')
This produces as output:
Function called with argument: A
C's __init__ method called with argument: asdf
C's method 'm' called with argument: A
Traceback (most recent call last):
  File "tmp.py", line 24, in <module>
    constructorPtr('A')
TypeError: unbound method __init__() must be called with C instance as first argument (got str instance instead)
showing that the first two ptr() calls worked, but the last did not.
The reason this doesn't work is that the __init__ method isn't a constructor, it's an initializer.*
Notice that its first argument is self: that self has to be already constructed before its __init__ method gets called; otherwise, where would it come from?
In other words, it's a normal instance method, just like instance.m is, but you're trying to call it as an unbound method—exactly as if you'd tried to call C.m instead of instance.m.
Python does have a special method for constructors, __new__ (although Python calls this a "creator" to avoid confusion with languages with single-stage construction). This is a static method that takes the class to construct as its first argument and the constructor arguments as its other arguments. The default implementation that you've inherited from object just creates an instance of that class and passes the arguments along to its initializer.** So:
constructor = C.__new__
constructor(C, 'A')
Or, if you prefer:
from functools import partial
constructor = partial(C.__new__, C)
constructor('A')
However, it's incredibly rare that you'll ever want to call __new__ directly, except from a subclass's __new__. Classes themselves are callable, and act as their own constructors—effectively that means that they call the __new__ method with the appropriate arguments, but there are some subtleties (and, in every case where they differ, C() is probably what you want, not C.__new__(C)).
So:
constructor = C
constructor('A')
As user2357112 points out in a comment:
In general, if you want a ptr that does whatever_expression(foo) when you call ptr(foo), you should set ptr = whatever_expression
That's a great, simple rule of thumb, and Python has been carefully designed to make that rule of thumb work whenever possible.
Finally, as a side note, you can assign ptr to anything callable, not just the cases you described:
a function,
a bound method (your instance.m),
a constructor (that is, a class),
an unbound method (e.g., C.m—which you can call just fine, but you'll have to pass instance as the first argument),
a bound classmethod (e.g., both C.cm and instance.cm, if you defined cm as a @classmethod),
an unbound classmethod (harder to construct, and less useful),
a staticmethod (e.g., both C.sm and instance.sm, if you defined sm as a @staticmethod),
various kinds of implementation-specific "builtin" types that simulate functions, methods, and classes,
an instance of any type with a __call__ method.
And in fact, all of these are just special cases of the last one—the type type has a __call__ method, as do types.FunctionType and types.MethodType, and so on.
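As a small illustration of that last case, a minimal sketch of an instance made callable by defining __call__:

class Adder:
    def __init__(self, n):
        self.n = n

    def __call__(self, x):
        # calling the instance invokes __call__ on its type
        return x + self.n

ptr = Adder(10)
print(ptr(5))   # 15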
* If you're familiar with other languages like Smalltalk or Objective-C, you may be thrown off by the fact that Python doesn't look like it has two-stage construction. In ObjC terms, you rarely implement alloc, but you call it all the time: [[MyClass alloc] initWithArgument:a]. In Python, you can pretend that MyClass(a) means the same thing (although really it's more like [MyClass allocWithArgument:a], where allocWithArgument: automatically calls initWithArgument: for you).
** Actually, this isn't quite true; the default implementation just returns an instance of C, and Python automatically calls the __init__ method if isinstance(returnvalue, C).
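A minimal sketch (made-up class) of that rule: if __new__ returns something that isn't an instance of the class, __init__ is skipped entirely:

class Weird:
    def __new__(cls, *args):
        return "not a Weird instance"   # not an instance of cls

    def __init__(self, *args):
        print("never runs")             # skipped: __new__ didn't return a Weird

print(Weird(1))   # not a Weird instance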
I had a hard time finding the answer to this problem online, but I figured it out, so here is the solution.
Instead of pointing constructorPtr at C.__init__, you can just point it at C, like this.
constructorPtr = C
constructorPtr('A')
which produces as output:
C's __init__ method called with argument: A