The __init__() function gets called when an object is created.
Is it OK to call an object's __init__() function again, after it's been created?
instance = cls(p1=1, p2=2)
# some code
instance.__init__(p1=123, p2=234)
# some more code
instance.__init__(p1=23, p2=24)
Why would anyone want to call __init__() on an object that is already created?
Good question. I want to re-initialize the instance's fields.
It's fine to call __init__ more than once on an object, as long as __init__ is coded with the effect you want to obtain (whatever that may be). A typical case where it happens (so you'd better code __init__ appropriately!-) is when your class's __new__ method returns an instance of the class: that does cause __init__ to be called on the returned instance (for what might be the second, or twentieth, time, if you keep "recycling" instances via your __new__!-).
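For illustration, here is a minimal sketch of that recycling situation; the Recycled class and its caching logic are invented for this example:

class Recycled:
    _instance = None

    def __new__(cls, *args, **kwargs):
        # Hand back one cached instance instead of a fresh one.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self, value):
        # Because __new__ returns an instance of cls, Python calls
        # __init__ on it every time, so it must be safe to re-run.
        self.value = value

a = Recycled(1)
b = Recycled(2)
print(a is b, a.value)  # True 2 -- the second __init__ re-initialized the same object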
You can, but it's kind of breaking what __init__ is intended to do. A lot of Python is really just convention, so you might as well follow them and expect __init__ to only be called once. I'd recommend creating a method called init or reset or something which sets the instance variables, using that when you want to reset the instance, and having __init__ just call it. This definitely looks more sane:
x = Pt(1,2)
x.set(3,4)
x.set(5,10)
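The Pt class behind that snippet isn't shown; a plausible sketch of it, with __init__ simply delegating to set(), might look like this:

class Pt:
    def __init__(self, x, y):
        # __init__ just delegates to the real initializer,
        # so resetting later reuses the same code path.
        self.set(x, y)

    def set(self, x, y):
        self.x = x
        self.y = y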
As far as I know, it does not cause any problems (edit: as suggested by the kosher usage of super(...).__init__(...)), but I think having a reset() method and calling it both in __init__() and when you need to reset would be cleaner.
Related
Why is it required to use the self keyword as an argument when calling the parent method from the child method?
Let me give an example,
class Account:
    def __init__(self, filepath):
        self.filepath = filepath
        with open(self.filepath, "r") as file:
            self.balance = int(file.read())

    def withdraw(self, amount):
        self.balance = self.balance - amount
        self.commit()

    def deposit(self, amount):
        self.balance = self.balance + amount
        self.commit()

    def commit(self):
        with open(self.filepath, "w") as file:
            file.write(str(self.balance))

class Checking(Account):
    def __init__(self, filepath):
        Account.__init__(self, filepath)  ######## I'm asking about this line.
Regarding this code,
I understand that self is automatically passed to __init__ when declaring a new object, so,
I expect that when I declare a new object, Python will set self = the declared object, so now the self keyword will be available in the child's __init__ method, and there would be no need to write it manually again like
Account.__init__(self, filepath)  ######## I'm asking about this line.
All instance methods are just function-valued class attributes. If you access the attribute via an instance, some behind-the-scenes "magic" (known as the descriptor protocol) takes care of changing foo.bar() to type(foo).bar(foo). __init__ itself is also just another instance method, albeit one you usually only call explicitly when overriding __init__ in a child.
In your example, you are explicitly invoking the parent class's __init__ method via the class, so you have to pass self explicitly (self.__init__(filepath) would result in infinite recursion).
One way to avoid this is to not refer to the parent class explicitly, but to let a proxy determine the "closest" parent for you.
super().__init__(filepath)
There is some magic here: super with no arguments determines, with some help from the Python implementation, which class it statically occurs in (in this case, Checking) and passes that, along with self, as the implicit arguments to super. In Python 2, you always had to be explicit: super(Checking, self).__init__(filepath). (In Python 3, you can still pass arguments explicitly, because there are some use cases, though rare, for passing arguments other than the current static class and self. Most commonly, super(SomeClass) does not get self as an implicit second argument, and handles class-level proxying.)
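Put back into the Checking class from the question, the two spellings would look roughly like this:

class Checking(Account):
    def __init__(self, filepath):
        super().__init__(filepath)                  # Python 3: class and self are implicit
        # super(Checking, self).__init__(filepath)  # Python 2 style (still valid in Python 3)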
Specifically, the function class defines a __get__ method; if the result of an attribute lookup defines __get__, the return value of that method is returned instead of the attribute value itself. In other words,
foo.bar
becomes
type(foo).__dict__['bar'].__get__(foo, type(foo))
and that return value is an object of type method. Calling a method instance simply causes the original function to be called, with its first argument being the instance that __get__ took as its first argument, and the remaining arguments being whatever else was passed to the method call.
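You can check this equivalence yourself; Foo and bar here are throwaway names for illustration:

class Foo:
    def bar(self, x):
        return (self, x)

foo = Foo()
print(foo.bar(1))                                            # normal call, descriptor magic hidden
print(type(foo).__dict__['bar'].__get__(foo, type(foo))(1))  # the same lookup and binding done by hand
print(Foo.bar(foo, 1))                                       # or skip the binding and pass self explicitly

All three calls return the same tuple.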
Generally speaking, I would tally this one up to the Zen of Python -- specifically, the following statements:
Explicit is better than implicit.
Readability counts.
In the face of ambiguity, refuse the temptation to guess.
... and so on.
It's the mantra of Python -- this, along with many other cases may seem redundant and overly simplistic, but being explicit is one of Python's key "goals." Perhaps another user can give more explicit examples, but in this case, I would say it makes sense to not have arguments be explicitly defined in one call, then vanish -- it might make things unclear when looking at a child function without also looking at its parent.
While executing the following code:
class Test():
    def __init__(self):
        self.hi_there()
        self.a = 5

    def hi_there(self):
        print(self.a)

new_object = Test()
new_object.hi_there()
I have received an error:
Traceback (most recent call last):
  File "/root/a.py", line 241, in <module>
    new_object = Test()
  File "/root/a.py", line 233, in __init__
    self.hello()
  File "/root/a.py", line 238, in hello
    print(self.a)
AttributeError: 'Test' object has no attribute 'a'
Why do we need to specify self inside the function while the object is not initialized yet? Being able to call the hi_there() function means the object is already set up, but how can that be if other variables attached to this instance haven't been initialized yet?
What is the self inside the __init__ function if it's not a "full" object yet?
Clearly this part of code works:
class Test():
    def __init__(self):
        #self.hi_there()
        self.a = 5
        self.hi_there()

    def hi_there(self):
        print(self.a)

new_object = Test()
new_object.hi_there()
I come from C++ world, there you have to declare the variables before you assign them.
I fully understand the use of self. Although I don't understand what the use of self inside __init__() is if the self object is not fully initialized.
There is no magic. By the time __init__ is called, the object has been created and its methods defined, and __init__ is your chance to set all the instance attributes and do all other initialization. If you look at execution in __init__:
def __init__(self):
    self.hi_there()
    self.a = 5

def hi_there(self):
    print(self.a)
the first thing that happens in __init__ is that hi_there is called. The method already exists, so the function call works, and we drop into hi_there(), which does print(self.a). But this is the problem: self.a isn't set yet, since this only happens in the second line of __init__, but we called hi_there from the first line of __init__. Execution hasn't reached the line where you set self.a = 5, so there's no way that the method call self.hi_there() issued before this assignment can use self.a. This is why you get the AttributeError.
Actually, the object has already been created when __init__ is called. That's why you need self as a parameter. And because of the way Python works internally, you don't have access to the object without self. (Bear in mind that it doesn't need to be called self; you can call it anything you want as long as it is a valid name. The instance is always the first parameter of a method, whatever its name is.)
The truth is that __init__ doesn't create the object; it just initializes it. There is a special method called __new__, which is in charge of creating the instance and returning it. That's where the object is created.
Now, when does the object get its a attribute? That's in __init__, but you do have access to its methods inside of __init__. I'm not completely knowledgeable about how the creation of objects works, but methods are already set once you get to that point. That doesn't happen with values, so they are not available until you define them yourself in __init__.
Basically Python creates the object, gives it its methods, and then hands you the instance so you can initialize its attributes.
EDIT
Another thing I forgot to mention. Just like you define __init__, you can define __new__ yourself. It's not very common, but you do it when you need to modify the actual object's creation. I've only seen it when defining metaclasses (What are metaclasses in Python?). Another method you can define in that case is __call__, giving you even more control.
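As a rough sketch (the Logged class is made up for illustration), overriding __new__ lets you watch the creation step happen before __init__:

class Logged:
    def __new__(cls, *args, **kwargs):
        print("__new__: the instance is created here")
        return super().__new__(cls)

    def __init__(self, value):
        print("__init__: the already-created instance is initialized here")
        self.value = value

obj = Logged(42)   # prints the __new__ line first, then the __init__ line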
Not sure what you meant here, but I guess the first code sample should call a hello() function instead of the hi_there() function.
Someone correct me if I'm wrong, but in Python, defining a class or a function is dynamic. By this I mean that defining a class or a function happens at runtime: these are regular statements that are executed just like any others.
This language feature allows powerful thing such as decorating the behavior of a function to enrich it with extra functionality (see decorators).
Therefore, when you create an instance of the Test class, you try to call the hello() function before you have explicitly set the value of a. So the Test instance is not YET aware of its a attribute. It has to be read sequentially.
Is __init__ in python a constructor or a method?
Somewhere it says constructor and somewhere it says method, which is quite confusing.
It is correct to call it a method. It is incorrect, or at best inaccurate, to call it a constructor.
Specifically, it is a magic method. They are also called special methods, "dunders", and a few other names.
This particular method is used to define the initialisation behavior of an object. It is not really similar to a constructor, and it is not even the first method to be called on a new instance.
We use __init__ to set up the state of an already-created instance. It will be automatically called when we use the syntax A() to create an instance of a class A, which is why someone might loosely refer to it as a "constructor". But the responsibility of __init__ is not related to instance construction; the __new__ magic method is really more similar to a constructor in that respect.
In addition to the answer by @wim, it is worth noting that the object has already been created by the time __init__ is called, i.e. __init__ is not a constructor. Furthermore, __init__ methods are optional: you do not have to define one. Finally, an __init__ method is defined first only by convention, i.e. it could be defined after any of the other methods.
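A tiny sketch of both points; the class and names are invented for illustration:

class A:
    def greet(self):            # defined before __init__; the order in the class body doesn't matter
        return "hello, " + self.name

    def __init__(self, name):   # optional: without it, the inherited object.__init__ would be used
        self.name = name

print(A("world").greet())       # hello, world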
I have this simple function:
def f():
    print("heh")
When I am calling f, in reality I am calling its __call__ method. But when I am calling the __call__ method of f, in reality I am calling the __call__ method of the __call__ method of f. And so on and so on.
How far does Python go when calling f()? It clearly must stop somewhere.
I was wondering whether this can go to infinity and it turns out that 100 000 is enough to crash Python.
>>> exec('f'+100000*'.__call__'+'()')
========= RESTART ==========
What's the reason of this crash?
A 'call' on an object causes the interpreter to look for a way to call it. When that is resolved by locating a __call__ method, that method is invoked, and then something real happens. The __call__ method can't just invoke the same mechanism on itself.
In the case of a function object, I believe there is an internal method table which is directly consulted first to see if there's a defined (C language) call handler, and that is invoked. There may also be a __call__ attribute which does the same thing, but I think the engine checks the table first (some of this may have been reworked in Py 3).
The 'C' language call handler for functions is handed a reference to the function object, and a package of parameters. The function object contains a reference to a code object, and another to the proper global namespace. The code object contains a description of what parameters are expected, and all the information needed to actually set up the call on the Python stack.
When you call a method of a class, there's a little binder object with its own call method (containing a pointer to the 'self' and to the actual method).
I guess the main point is that some objects have __call__ methods coded in Python, but for many types the interpreter can go straight to C code after looking in the object's internal type descriptor. Another example is calling a type object, such as str, where the C-language constructor will be invoked.
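You can poke at the layering directly without anything crashing; the exact repr varies between Python versions:

def f():
    print("heh")

f()                    # ordinary call
f.__call__()           # same effect: the method-wrapper forwards to the C-level call handler
f.__call__.__call__()  # and so on; every attribute access builds another wrapper object
print(f.__call__)      # e.g. <method-wrapper '__call__' of function object at 0x...>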
Let's say I have classes Base(object) and Derived(Base). These both implement a function foo, with Derived.foo overriding the version in Base.
However, in one of the methods, say Base.learn_to_foo, I want to call Base.foo instead of the derived version regardless of whether it was overridden. So, I call Base.foo(self) in that method:
class Base(object):
    # ...
    def learn_to_foo(self, x):
        y = Base.foo(self, x)
        # check if we foo'd correctly, do interesting stuff
This approach seems to work and from a domain standpoint, it makes perfect sense, but somehow it smells a bit fishy. Is this the way to go, or should I refactor?
The answer is NOT to use the super() function. The way you are doing it is exactly right, since you don't want the virtual dispatch that would pick up the overriding method in the derived class. Since you seem to want the base class's exact implementation every time, the only way is to take the base class's method (an unbound method in Python 2, a plain function in Python 3) and invoke it with self supplied explicitly as the first argument, where self can be an instance of Base or Derived. From that point forward, Base.foo will be acting on the instance self's data. This is perfectly acceptable and is the way Python deals with non-virtual method invocation. This is one of the nice things that Python allows you to do that Java does not.
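A fuller sketch of that behaviour; the method bodies are stand-ins just to show which implementation gets picked:

class Base(object):
    def foo(self, x):
        return "Base.foo"

    def learn_to_foo(self, x):
        # Explicit class call: always Base's implementation,
        # even when self is a Derived instance.
        return Base.foo(self, x)

class Derived(Base):
    def foo(self, x):
        return "Derived.foo"

d = Derived()
print(d.foo(1))           # Derived.foo -- normal virtual dispatch
print(d.learn_to_foo(1))  # Base.foo    -- pinned to the base implementation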
It is recommended:
def learn_to_foo(self, x):
    super(Derived, self).foo(x)
More information at http://docs.python.org/library/functions.html#super
An alternative is to use the 'super' built-in:
super(Derived, self).foo(x) # Python 2
super().foo(x) # Python 3