I have a Python program as follows:
class a:
    def __init__(self, n):
        self.n = n

    def __del__(self, n):
        print('dest', self.n, n)

def b():
    d = a('d')
    c = a('c')
    d.__del__(8)

b()
Here, I have given __del__() a parameter n just to clear my doubt. Its output:
$ python des.py
dest d 8
Exception ignored in: <function a.__del__ at 0xb799b074>
TypeError: __del__() missing 1 required positional argument: 'n'
Exception ignored in: <function a.__del__ at 0xb799b074>
TypeError: __del__() missing 1 required positional argument: 'n'
In classical programming languages like C++, we can't give parameters to a destructor. To find out whether the same applies to Python, I ran this program. Why does the interpreter allow the parameter n to be given to the destructor? And how can I specify a value for that n? When I call __del__() myself with an argument, it works fine, but without such an explicit call, how can I specify the value for n?
You can define the __del__ method with an argument, as you've shown. And if you call the method yourself, you can pass in a value, just like you can with any other method. But when the interpreter calls __del__ itself, it's not going to pass anything, and there will be an exception raised.
However, because __del__ methods are often called in precarious situations, like when the interpreter is shutting down, Python doesn't stop everything if one raises an exception. Instead, it just prints out that it's ignoring the exception and keeps doing what it was doing already. This is why you see two "Exception ignored" messages, as your d and c objects get cleaned up.
It's unclear to me what you actually want your __del__ method to do with the n value you were passing in. Your example is a trivial case; usually there's nothing useful you can do there. Indeed, it's only rarely a good idea to write a __del__ method for a class at all. There are better ways of doing resource management, like the context manager protocol (which uses __enter__ and __exit__ methods).
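For instance, here is a minimal sketch of the context manager approach (the class name ManagedFile and the file-based resource are illustrative, not from the question):

class ManagedFile:
    """Illustrative resource holder using the context manager protocol."""
    def __init__(self, path):
        self.path = path

    def __enter__(self):
        self.f = open(self.path)  # acquire the resource on entry
        return self.f

    def __exit__(self, exc_type, exc_value, traceback):
        self.f.close()  # cleanup runs deterministically, even if an error occurred
        return False  # do not suppress exceptions

# Usage: cleanup happens when the with-block ends, not whenever the
# garbage collector gets around to the object (assumes 'des.py' exists).
with ManagedFile('des.py') as f:
    print(f.readline())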
You cannot. Predefined dunder methods (methods with leading and trailing double underscores) like __del__ have a fixed signature.
If you define them with a different signature, then when Python calls them through the non-dunder interface (del, len, ...), the number of arguments is wrong and the call fails.
To pass n to __del__, you'll have to store it as an attribute on the object, as shown below.
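For example, a minimal sketch adapting the question's class (the attribute name extra is illustrative):

class a:
    def __init__(self, n):
        self.n = n
        self.extra = None  # holds the value you wanted to pass to __del__

    def __del__(self):  # fixed signature: only self
        print('dest', self.n, self.extra)

d = a('d')
d.extra = 8  # set the "argument" before the object goes away
del d  # in CPython this drops the last reference and prints: dest d 8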
Python objects become candidates for garbage collection when there are no more references to them (CPython primarily uses reference counting), so you usually do not need to write such a destructor.
If you want to add optional arguments to a method, it's common to set them to None or an empty tuple ()
def other_del(self, x=None):
    ...
Related
Why is it required to pass self as an argument when calling a parent method from a child method?
Let me give an example,
class Account:
    def __init__(self, filepath):
        self.filepath = filepath
        with open(self.filepath, "r") as file:
            self.balance = int(file.read())

    def withdraw(self, amount):
        self.balance = self.balance - amount
        self.commit()

    def deposit(self, amount):
        self.balance = self.balance + amount
        self.commit()

    def commit(self):
        with open(self.filepath, "w") as file:
            file.write(str(self.balance))

class Checking(Account):
    def __init__(self, filepath):
        Account.__init__(self, filepath)  ######## I'm asking about this line.
Regarding this code,
I understand that self is automatically passed when a method is called on an object. So I expect that when I declare a new object, Python will set self to that object, and the self keyword will already be available inside the child's __init__ method, with no need to write it manually again, as in:
Account.__init__(self, filepath)  ######## I'm asking about this line.
All instance methods are just function-valued class attributes. If you access the attribute via an instance, some behind-the-scenes "magic" (known as the descriptor protocol) takes care of changing foo.bar() to type(foo).bar(foo). __init__ itself is also just another instance method, albeit one you usually only call explicitly when overriding __init__ in a child.
In your example, you are explicitly invoking the parent class's __init__ method via the class, so you have to pass self explicitly (self.__init__(filepath) would result in infinite recursion).
One way to avoid this is to not refer to the parent class explicitly, but to let a proxy determine the "closest" parent for you.
super().__init__(filepath)
There is some magic here: super with no arguments determines, with some help from the Python implementation, which class it statically occurs in (in this case, Checking) and passes that, along with self, as the implicit arguments to super. In Python 2, you always had to be explicit: super(Checking, self).__init__(filepath). (In Python 3, you can still pass argument explicitly, because there are some use cases, though rare, for passing arguments other than the current static class and self. Most commonly, super(SomeClass) does not get self as an implicit second argument, and handles class-level proxying.)
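Putting both spellings side by side against the question's classes (a sketch; it reuses the Account class from the question, with the Python 2 form shown as a comment):

class Checking(Account):
    def __init__(self, filepath):
        # Python 3: zero-argument super fills in Checking and self for you
        super().__init__(filepath)
        # Python 2 equivalent (also valid in Python 3): be explicit about both
        # super(Checking, self).__init__(filepath)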
Specifically, the function class defines a __get__ method; if the result of an attribute lookup defines __get__, the return value of that method is returned instead of the attribute value itself. In other words,
foo.bar
becomes
type(foo).__dict__['bar'].__get__(foo, type(foo))
and that return value is an object of type method. Calling a method instance simply causes the original function to be called, with its first argument being the instance that __get__ took as its first argument, and its remaining arguments are whatever other arguments were passed to the original method call.
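You can verify this equivalence directly; a small sketch (the names Foo and bar are illustrative):

class Foo:
    def bar(self):
        return 42

foo = Foo()

bound_via_dot = foo.bar  # ordinary attribute access
bound_via_descriptor = type(foo).__dict__['bar'].__get__(foo, type(foo))

print(bound_via_dot())  # 42
print(bound_via_descriptor())  # 42: the same machinery, spelled out by hand
print(bound_via_dot.__func__ is Foo.__dict__['bar'])  # True: both wrap one function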
Generally speaking, I would chalk this one up to the Zen of Python -- specifically, the following statements:
Explicit is better than implicit.
Readability counts.
In the face of ambiguity, refuse the temptation to guess.
... and so on.
It's the mantra of Python -- this, along with many other cases, may seem redundant and overly simplistic, but being explicit is one of Python's key "goals." Perhaps another user can give more explicit examples, but in this case, I would say it makes sense not to have arguments be explicit in one call and then vanish in another -- it might make things unclear when looking at a child function without also looking at its parent.
I am programming in Python 2.7. In other languages, such as Ruby, a method can invoke super without specifying any parameters: one simply writes super, and the superclass's method is invoked with the same parameters as the current method was called with.
Here is an example from the StackOverflow question Super keyword in Ruby:
class Foo
  def baz(str)
    p 'parent with ' + str
  end
end

class Bar < Foo
  def baz(str)
    super
    p 'child with ' + str
  end
end

Bar.new.baz('test') # => 'parent with test' \ 'child with test'
As you can see, the method baz in class Bar invokes super without any parameters, and the method baz in the superclass Foo is implicitly called with the same parameter(s) as baz in the child class Bar.
For a variety of reasons this is very convenient, and while implicit behavior in programming obviously has its drawbacks, it is really useful to be able to write an __init__ method in a child class that does its own work and then invokes __init__ via super without having to know the parameters of __init__ in the superclass.
Please help; any advice would be useful. I have been searching for an answer and have been unable to find one. I have seen many questions and answers on StackOverflow about super, super in __init__, and super in Python, but none about whether there is syntax for calling super that does not require parameters to be specified and that calls it with the same parameters as the given method was called with in the child class.
What I am trying to do is very similar to raise in a try, except, raise construct in Python:
try:
    linux_interaction()
except Exception as error:
    logger.critical(str(error))
    raise
In the above block I catch all exceptions, I log any exception as critical and then I throw it with just a call to raise, no parameters. That calling syntax of raise with no parameters causes the exception that was caught in the variable named error to be thrown and preserves the calling stack/stack trace. If one called raise error instead of just raise, then the stack trace from where the error originated would be lost.
I am trying to do something similar here, just invoke super and get the super method invoked with the same parameters as the given method without needing to know what they are and without needing to specify them -- that would be convenient, right?
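A sketch of the closest Python 2.7 idiom, assuming explicit forwarding with *args and **kwargs is acceptable (there is no exact bare-super equivalent; the class names mirror the Ruby example):

class Foo(object):
    def baz(self, s):
        print('parent with ' + s)

class Bar(Foo):
    def baz(self, *args, **kwargs):
        # forward whatever this method was called with, without naming it
        super(Bar, self).baz(*args, **kwargs)
        print('child with ' + args[0])

Bar().baz('test')  # parent with test / child with test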
Anyone know if this is possible and if so what the syntax in Python 2.7 would be?
I would like to have a function pointer ptr that can point to either:
a function,
the method of an object instance, or
the constructor of the object.
In the latter case, the execution of ptr() should instantiate the class.
def function(argument):
    print("Function called with argument: " + str(argument))

class C(object):
    def __init__(self, argument):
        print("C's __init__ method called with argument: " + str(argument))

    def m(self, argument):
        print("C's method 'm' called with argument: " + str(argument))

## works
ptr = function
ptr('A')

## works
instance = C('asdf')
ptr = instance.m
ptr('A')

## fails
constructorPtr = C.__init__
constructorPtr('A')
This produces as output:
Function called with argument: A
C's __init__ method called with argument: asdf
C's method 'm' called with argument: A
Traceback (most recent call last):
  File "tmp.py", line 24, in <module>
    constructorPtr('A')
TypeError: unbound method __init__() must be called with C instance as first argument (got str instance instead)
showing that the first two ptr() calls worked, but the last did not.
The reason this doesn't work is that the __init__ method isn't a constructor, it's an initializer.*
Notice that its first argument is self; that self has to be already constructed before its __init__ method gets called. Otherwise, where would it come from?
In other words, it's a normal instance method, just like instance.m is, but you're trying to call it as an unbound method—exactly as if you'd tried to call C.m instead of instance.m.
Python does have a special method for constructors, __new__ (although Python calls this a "creator" to avoid confusion with languages with single-stage construction). This is a static method that takes the class to construct as its first argument and the constructor arguments as its other arguments. The default implementation that you've inherited from object just creates an instance of that class and passes the arguments along to its initializer.** So:
constructor = C.__new__
constructor(C, 'A')
Or, if you prefer:
from functools import partial
constructor = partial(C.__new__, C)
constructor('A')
However, it's incredibly rare that you'll ever want to call __new__ directly, except from a subclass's __new__. Classes themselves are callable, and act as their own constructors—effectively that means that they call the __new__ method with the appropriate arguments, but there are some subtleties (and, in every case where they differ, C() is probably what you want, not C.__new__(C)).
So:
constructor = C
constructor('A')
As user2357112 points out in a comment:
In general, if you want a ptr that does whatever_expression(foo) when you call ptr(foo), you should set ptr = whatever_expression
That's a great, simple rule of thumb, and Python has been carefully designed to make that rule of thumb work whenever possible.
Finally, as a side note, you can assign ptr to anything callable, not just the cases you described:
a function,
a bound method (your instance.m),
a constructor (that is, a class),
an unbound method (e.g., C.m—which you can call just fine, but you'll have to pass instance as the first argument),
a bound classmethod (e.g., both C.cm and instance.cm, if you defined cm as a @classmethod),
an unbound classmethod (harder to construct, and less useful),
a staticmethod (e.g., both C.sm and instance.sm, if you defined sm as a @staticmethod),
various kinds of implementation-specific "builtin" types that simulate functions, methods, and classes,
an instance of any type with a __call__ method.
And in fact, all of these are just special cases of the last one—the type type has a __call__ method, as do types.FunctionType and types.MethodType, and so on.
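A compact sketch exercising several of these cases (all names are illustrative):

class C(object):
    def __init__(self, x=None):
        self.x = x
    def m(self, x):
        return ('instance method', x)
    @classmethod
    def cm(cls, x):
        return ('classmethod', x)
    @staticmethod
    def sm(x):
        return ('staticmethod', x)
    def __call__(self, x):
        return ('__call__ instance', x)

instance = C()
for ptr in (instance.m, C.cm, instance.sm, instance):
    print(ptr('A'))  # bound method, classmethod, staticmethod, __call__ instance

ptr = C  # the class itself acts as its own constructor
print(ptr('A').x)  # 'A'
print(C.m(instance, 'A'))  # plain function from the class: pass the instance yourself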
* If you're familiar with other languages like Smalltalk or Objective-C, you may be thrown off by the fact that Python doesn't look like it has two-stage construction. In ObjC terms, you rarely implement alloc, but you call it all the time: [[MyClass alloc] initWithArgument:a]. In Python, you can pretend that MyClass(a) means the same thing (although really it's more like [MyClass allocWithArgument:a], where allocWithArgument: automatically calls initWithArgument: for you).
** Actually, this isn't quite true; the default implementation just returns an instance of C, and Python automatically calls the __init__ method if isinstance(returnvalue, C).
I had a hard time finding the answer to this problem online, but I figured it out, so here is the solution.
Instead of pointing constructorPtr at C.__init__, you can just point it at C, like this.
constructorPtr = C
constructorPtr('A')
which produces as output:
C's __init__ method called with argument: A
Consider the following code:
class A(object):
    def do(self):
        print self.z

class B(A):
    def __init__(self, y):
        self.z = y

b = B(3)
b.do()
Why does this work? When executing b = B(3), attribute z is set. When b.do() is called, Python's MRO finds the do function in class A. But why is it able to access an attribute defined in a subclass?
Is there a use case for this functionality? I would love an example.
It works in a pretty simple way: when a statement is executed that sets an attribute, it is set. When a statement is executed that reads an attribute, it is read. When you write code that reads an attribute, Python does not try to guess whether the attribute will exist when that code is executed; it just waits until the code actually is executed, and if at that time the attribute doesn't exist, then you'll get an exception.
By default, you can always set any attribute on an instance of a user-defined class; classes don't normally define lists of "allowed" attributes that could be set (although you can make that happen too), they just actually set attributes. Of course, you can only read attributes that exist, but again, what matters is whether they exist when you actually try to read them. So it doesn't matter if an attribute exists when you define a function that tries to read it; it only matters when (or if) you actually call that function.
In your example, it doesn't matter that there are two classes, because there is only one instance. Since you only create one instance and call methods on one instance, the self in both methods is the same object. First __init__ is run and it sets the attribute on self. Then do is run and it reads the attribute from the same self. That's all there is to it. It doesn't matter where the attribute is set; once it is set on the instance, it can be accessed from anywhere: code in a superclass, subclass, other class, or not in any class.
Since new attributes can be added to any object at any time, attribute resolution happens at execution time, not compile time. Consider this example which may be a bit more instructive, derived from yours:
class A(object):
    def do(self):
        print(self.z)  # references an attribute which we haven't "declared" in an __init__()

# make a new A
aa = A()

# this next line will raise an error, as you would expect, because aa doesn't have a self.z
aa.do()

# but we can make it work now by simply doing
aa.z = -42
aa.do()

The first call will squawk at you, but the second will print -42 as expected.
Python objects are just dictionaries. :)
When retrieving an attribute from an object (print self.attrname) Python follows these steps:
1. If attrname is a special (i.e. Python-provided) attribute for objectname, return it.
2. Check objectname.__class__.__dict__ for attrname. If it exists and is a data descriptor, return the descriptor result. Search all bases of objectname.__class__ for the same case.
3. Check objectname.__dict__ for attrname, and return it if found. If objectname is a class, search its bases too. If it is a class and a descriptor exists in it or its bases, return the descriptor result.
4. Check objectname.__class__.__dict__ for attrname. If it exists and is a non-data descriptor, return the descriptor result. If it exists and is not a descriptor, just return it. (If it exists and is a data descriptor, we shouldn't be here, because we would have returned at step 2.) Search all bases of objectname.__class__ for the same case.
5. Raise AttributeError.
Source: "Understanding __get__ and __set__ and Python descriptors"
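A small sketch of steps 2 through 4 above: a data descriptor on the class (step 2) shadows the instance dict (step 3), while a plain class attribute (step 4) does not. The names DataDescriptor and Thing are illustrative:

class DataDescriptor(object):
    def __get__(self, obj, objtype=None):
        return 'from data descriptor'
    def __set__(self, obj, value):
        pass  # having __set__ is what makes this a *data* descriptor

class Thing(object):
    d = DataDescriptor()  # found at step 2, before the instance dict
    plain = 'class value'  # found at step 4, after the instance dict

t = Thing()
t.__dict__['d'] = 'instance value'
t.__dict__['plain'] = 'instance value'

print(t.d)      # 'from data descriptor': the class data descriptor wins
print(t.plain)  # 'instance value': the instance dict wins over a plain class attribute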
Since you instantiated a B object, B.__init__ was invoked and added an attribute z. This attribute is now present on the object. It's not some weird overloaded magical shared local variable of B methods that somehow becomes inaccessible to code written elsewhere. There's no such thing. Neither does self become a different object when it's passed to a superclass's method (how would polymorphism work if it did?).
There's also no such thing as a declaration that A objects have no such attribute (try o = A(); o.z = whatever), and neither is the self in do required to be an instance of A [1]. In fact, there are no declarations at all. It's all "go ahead and try it"; that's kind of the definition of a dynamic language (not just dynamic typing).
That object's z attribute is present "everywhere", all the time [2], regardless of the "context" from which it is accessed. It never matters where code is defined for the resolution process, or for several other behaviors [3]. For the same reason, you can access a list's methods despite not writing C code in listobject.c ;-) And no, methods aren't special. They are just objects too (instances of the type function, as it happens) and are involved in exactly the same lookup sequence.
[1] This is a slight lie; in Python 2, A.do would be an "unbound method" object, which in fact throws an error if the first argument isn't an instance of A.
[2] Until it's removed with del or one of its function equivalents (delattr and friends).
[3] Well, there's name mangling, and in theory, code could inspect the stack, and thereby the caller's code object, and thereby the location of its source code.
In addition to bypassing any instance attributes in the interest of correctness, implicit special method lookup generally also bypasses the __getattribute__() method even of the object’s metaclass.
The docs mention special methods such as __hash__, __repr__ and __len__, and I know from experience it also includes __iter__ for Python 2.7.
To quote an answer to a related question:
"Magic __methods__() are treated specially: They are internally assigned to "slots" in the type data structure to speed up their look-up, and they are only looked up in these slots."
In a quest to improve my answer to another question, I need to know: Which methods, specifically, are we talking about?
You can find an answer in the Python 3 documentation for object.__getattribute__, which states:

"Called unconditionally to implement attribute accesses for instances of the class. If the class also defines __getattr__(), the latter will not be called unless __getattribute__() either calls it explicitly or raises an AttributeError. This method should return the (computed) attribute value or raise an AttributeError exception. In order to avoid infinite recursion in this method, its implementation should always call the base class method with the same name to access any attributes it needs, for example, object.__getattribute__(self, name).

Note: This method may still be bypassed when looking up special methods as the result of implicit invocation via language syntax or built-in functions. See Special method lookup."
Also, this page explains exactly how this "machinery" works. Fundamentally, __getattribute__ is called only when you access an attribute with the . (dot) operator (and also by hasattr, as Zagorulkin pointed out).
Note that the page does not specify which special methods are implicitly looked up, so I presume this holds for all of them (which you may find here).
Checked in 2.7.9
Couldn't find any way to bypass the call to __getattribute__, with any of the magical methods that are found on object or type:
# Preparation step: did this from the console
# magics = set(dir(object) + dir(type))
# got 38 names, for each of the names, wrote a.<that_name> to a file
# Ended up with this:
a.__module__
a.__base__
#...
Put this at the beginning of that file, which I renamed into a proper Python module (asdf.py):

global_counter = 0

class Counter(object):
    def __getattribute__(self, name):
        # this will count how many times the method was called
        global global_counter
        global_counter += 1
        return super(Counter, self).__getattribute__(name)

a = Counter()

# after this comes the list of 38 attribute accesses
a.__module__
# ...
a.__repr__
# ...

print global_counter  # you're not gonna like it... it printed 38
Then I also tried to get each of those names with getattr and hasattr: same result; __getattribute__ was called every time.
So if anyone has other ideas... I was too lazy to look inside the C code for this, but I'm sure the answer lies somewhere there.
So either there's something I'm not getting right, or the docs are lying.
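One thing the experiment above did not exercise is implicit invocation, which is where the documented bypass actually applies; dotted access and getattr always go through __getattribute__. A sketch of the difference, using __len__ as the special method (the Counter name mirrors the experiment above):

class Counter(object):
    lookups = 0

    def __getattribute__(self, name):
        Counter.lookups += 1
        return super(Counter, self).__getattribute__(name)

    def __len__(self):
        return 0

a = Counter()

a.__len__()  # explicit dotted access: goes through __getattribute__
after_explicit = Counter.lookups

len(a)  # implicit special method lookup: bypasses it entirely
print(Counter.lookups == after_explicit)  # True: len() never touched __getattribute__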
super().method will also bypass __getattribute__. This atrocious code will run just fine (Python 3.11).
class Base:
    def print(self):
        print("whatever")

    def __getattribute__(self, item):
        raise Exception("Don't access this with a dot!")

class Sub(Base):
    def __init__(self):
        super().print()

a = Sub()  # prints 'whatever': super() bypasses __getattribute__
a.print()  # raises: Exception: Don't access this with a dot!