Python super method and calling alternatives

I see everywhere examples that super-class methods should be called by:
super(SuperClass, instance).method(args)
Is there any disadvantage to doing:
SuperClass.method(instance, args)

Consider the following situation:
class A(object):
    def __init__(self):
        print('Running A.__init__')
        super(A,self).__init__()

class B(A):
    def __init__(self):
        print('Running B.__init__')
        # super(B,self).__init__()
        A.__init__(self)

class C(A):
    def __init__(self):
        print('Running C.__init__')
        super(C,self).__init__()

class D(B,C):
    def __init__(self):
        print('Running D.__init__')
        super(D,self).__init__()

foo=D()
So the classes form a so-called inheritance diamond:
  A
 / \
B   C
 \ /
  D
Running the code yields
Running D.__init__
Running B.__init__
Running A.__init__
That's bad because C's __init__ is skipped. The reason is that B's __init__ calls A's __init__ directly.
The purpose of super is to resolve inheritance diamonds. If you un-comment
# super(B,self).__init__()
and comment-out
A.__init__(self)
the code yields the more desirable result:
Running D.__init__
Running B.__init__
Running C.__init__
Running A.__init__
Now all the __init__ methods get called. Notice that at the time you define B.__init__ you might think that super(B,self).__init__() is the same as calling A.__init__(self), but you'd be wrong. In the above situation, super(B,self).__init__() actually calls C.__init__(self).
Holy smokes, B knows nothing about C, and yet super(B,self) knows to call C's __init__? The reason is because self.__class__.mro() contains C. In other words, self (or in the above, foo) knows about C.
So be careful -- the two are not fungible. They can yield vastly different results.
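You can check where super will go next by printing the MRO yourself (using the D class from the example above):
print(D.mro())
# [<class '__main__.D'>, <class '__main__.B'>, <class '__main__.C'>, <class '__main__.A'>, <class 'object'>]
super(B,self) finds B in that list and returns a proxy for the next entry, C, which is why C's __init__ runs even though B never mentions C.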
Using super has pitfalls. It takes a considerable level of coordination between all the classes in the inheritance diagram. (They must, for example, either have the same call signature for __init__, since any particular __init__ would not know which other __init__ super might call next, or
else use **kwargs.) Furthermore, you must be consistent about using super everywhere. Skip it once (as in the above example) and you defeat the entire purpose of super.
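As a rough sketch of that **kwargs coordination (the class and argument names here are made up for illustration), each __init__ consumes the keyword arguments it knows about and forwards the rest:

class Base(object):
    def __init__(self, **kwargs):
        super(Base, self).__init__()  # kwargs should be empty by this point

class Left(Base):
    def __init__(self, left_opt=None, **kwargs):
        self.left_opt = left_opt
        super(Left, self).__init__(**kwargs)

class Right(Base):
    def __init__(self, right_opt=None, **kwargs):
        self.right_opt = right_opt
        super(Right, self).__init__(**kwargs)

class Bottom(Left, Right):
    def __init__(self, **kwargs):
        super(Bottom, self).__init__(**kwargs)

b = Bottom(left_opt=1, right_opt=2)  # every __init__ in the diamond runs exactly once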
See the link for more pitfalls.
If you have full control over your class hierarchy, or you avoid inheritance diamonds, then there is no need for super.

There's no penalty as-is, though your example is somewhat misguided. In the first example, it should be
super(SubClass, instance).method(args) # Sub, not SuperClass
and that leads me to quote the Python docs:
There are two typical use cases for super. In a class hierarchy with single inheritance, super can be used to refer to parent classes without naming them explicitly, thus making the code more maintainable. This use closely parallels the use of super in other programming languages.
The second use case is to support cooperative multiple inheritance in a dynamic execution environment. This use case is unique to Python and is not found in statically compiled languages or languages that only support single inheritance. This makes it possible to implement “diamond diagrams” where multiple base classes implement the same method. Good design dictates that this method have the same calling signature in every case (because the order of calls is determined at runtime, because that order adapts to changes in the class hierarchy, and because that order can include sibling classes that are unknown prior to runtime).
Basically, by using the first form you don't have to hard-code your parent class in single-inheritance hierarchies, and you simply can't do what you want (efficiently or effectively) with the second form when using multiple inheritance.
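To illustrate the first (single-inheritance) use case with a throwaway example: the parent class is only named in the class statement, so re-basing Child later means touching one line.

class Parent(object):
    def greet(self):
        return 'hello from Parent'

class Child(Parent):
    def greet(self):
        # the base class is not repeated here
        return super(Child, self).greet() + ' (via Child)'

print(Child().greet())  # hello from Parent (via Child)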

Related

Calling a classmethod and calling a method through an object, which way is better?

I am setting up a python class and I have 2 ways to go about it:
Create a class and all methods as class methods. When calling the methods on my main block, it would be cls.methodName()
e.g.
class demoClass():
    @classmethod
    def methodA(cls):
        print('Method A')
Calling from main
demoClass.methodA()
demoClass.methodA()
demoClass.methodA()
Create a class and all methods are object methods and require an instance of the class to call them.
e.g.
class demoClass():
    def methodA(self):
        print('Method A')
Calling from main
demoObj = demoClass()
demoObj.methodA()
demoObj.methodA()
demoObj.methodA()
I want to know which way would be better.
I am more inclined towards using object-level methods, because this class will be used across many parts of the main code for different scenarios, each requiring a different setup, so creating objects specific to each use case would make sense.
My major concern is performance and memory usage.
Between the 2 approaches, which would be better in terms of just performance and memory usage (disregard the use case)?
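One quick way to ground the performance part of the question is to measure both call styles with timeit (the Demo class below is a stand-in for demoClass, not taken from the thread):

import timeit

setup = '''
class Demo(object):
    @classmethod
    def method_cls(cls):
        return 1
    def method_obj(self):
        return 1
obj = Demo()
'''

print(timeit.timeit('Demo.method_cls()', setup=setup))
print(timeit.timeit('obj.method_obj()', setup=setup))
# Both are typically a small fraction of a microsecond per call on CPython; the
# design question (does the method need per-instance state?) matters far more
# than this overhead.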

Why aren't all the base classes constructors being called?

In Python 2.7.10
class OneMixin(object):
    def __init__(self):
        # super(OneMixin, self).__init__()
        print "one mixin"

class TwoMixin(object):
    def __init__(self):
        # super(TwoMixin, self).__init__()
        print "two mixin"

class Account(OneMixin, TwoMixin):
    def __init__(self):
        super(Account, self).__init__()
        print "account"
The Account.mro() is: [<class 'Account'>, <class 'OneMixin'>, <class 'TwoMixin'>, <type 'object'>]
Although every class is listed in the MRO, "two mixin" is not printed.
If I uncomment the super calls in OneMixin and TwoMixin, the MRO is exactly the same, but the "two mixin" IS printed.
Why the difference? I would expect every thing in the MRO to be called.
This is because super is used to delegate calls to either a parent or a sibling class of a type. The Python documentation has the following description of the second use case:
The second use case is to support cooperative multiple inheritance in a dynamic execution environment. This use case is unique to Python and is not found in statically compiled languages or languages that only support single inheritance. This makes it possible to implement “diamond diagrams” where multiple base classes implement the same method. Good design dictates that this method have the same calling signature in every case (because the order of calls is determined at runtime, because that order adapts to changes in the class hierarchy, and because that order can include sibling classes that are unknown prior to runtime).
If you remove the super call from OneMixin, there's nothing delegating the call to the next type in the MRO.
The reason is that you're overriding the __init__ method of the parent class. The method resolution order will be the same, regardless of what is in your __init__ method.
The way super works is that it passes the call along to the next class in the method resolution order. By commenting out that line in OneMixin, you break the chain. super is designed for cooperative inheritance.
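For instance, with the commented-out super calls restored (the same classes as in the question, Python 2 syntax), instantiating Account walks the whole MRO:

class OneMixin(object):
    def __init__(self):
        super(OneMixin, self).__init__()  # next class in Account's MRO: TwoMixin
        print "one mixin"

class TwoMixin(object):
    def __init__(self):
        super(TwoMixin, self).__init__()  # next class in Account's MRO: object
        print "two mixin"

class Account(OneMixin, TwoMixin):
    def __init__(self):
        super(Account, self).__init__()
        print "account"

Account()  # prints "two mixin", then "one mixin", then "account"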
Also, __init__ is not truly a class constructor. This may trip you up if you think of it as you would a constructor in other languages.
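To make that last point concrete (a throwaway example, not from the question): the instance is created by __new__, and __init__ merely initializes an object that already exists.

class Widget(object):
    def __new__(cls, *args, **kwargs):
        print('allocating a Widget')          # runs first: creates the instance
        return super(Widget, cls).__new__(cls)

    def __init__(self, name):
        print('initializing %s' % name)       # runs second: fills in the instance
        self.name = name

Widget('w1')  # allocating a Widget, then initializing w1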

Is __init__ a class method?

I was looking into Python's super method and multiple inheritance. I read something along the lines of: when we use super to call a base method that is implemented in all the base classes, only one class's method will be called, even with a variety of arguments. For example,
class Base1(object):
    def __init__(self, a):
        print "In Base 1"

class Base2(object):
    def __init__(self):
        print "In Base 2"

class Child(Base1, Base2):
    def __init__(self):
        super(Child, self).__init__('Intended for base 1')
        super(Child, self).__init__()  # Intended for base 2
This produces a TypeError on the second super call: super invokes whichever implementation it finds first in the MRO (Base1's here) and raises a TypeError instead of checking the other classes down the road. However, this is much clearer and works fine when we do the following:
class Child(Base1, Base2):
    def __init__(self):
        Base1.__init__(self, 'Intended for base 1')
        Base2.__init__(self)  # Intended for base 2
This leads to two questions:
Is the __init__ method a static method or a class method?
Why use super, which implicitly chooses the method on its own, rather than an explicit call to the method as in the latter example? The explicit version looks a lot cleaner to me. So what is the advantage of using super over the second way (other than not having to write the base class name with the method call)?
super() in the face of multiple inheritance, especially on methods that are present on object, can get a bit tricky. The general rule is that if you use super, then every class in the hierarchy should use super. A good way to handle this for __init__ is to make every method take **kwargs, and always use keyword arguments everywhere. By the time the call to object.__init__ occurs, all arguments should have been popped out!
class Base1(object):
    def __init__(self, a, **kwargs):
        print "In Base 1", a
        super(Base1, self).__init__(**kwargs)  # forward remaining keyword arguments up the MRO

class Base2(object):
    def __init__(self, **kwargs):
        print "In Base 2"
        super(Base2, self).__init__(**kwargs)

class Child(Base1, Base2):
    def __init__(self, **kwargs):
        super(Child, self).__init__(a="Something for Base1", **kwargs)
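With those definitions, instantiating Child runs every __init__ exactly once:

Child()
# In Base 1 Something for Base1
# In Base 2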
See the linked article for way more explanation of how this works and how to make it work for you!
Edit: At the risk of answering two questions, "Why use super at all?"
We have super() for many of the same reasons we have classes and inheritance, as a tool for modularizing and abstracting our code. When operating on an instance of a class, you don't need to know all of the gritty details of how that class was implemented, you only need to know about its methods and attributes, and how you're meant to use that public interface for the class. In particular, you can be confident that changes in the implementation of a class can't cause you problems as a user of its instances.
The same argument holds when deriving new types from base classes. You don't want or need to worry about how those base classes were implemented. Here's a concrete example of how not using super might go wrong. Suppose you've got:
class Foo(object):
    def frob(self):
        print "frobbing as a foo"

class Bar(object):
    def frob(self):
        print "frobbing as a bar"
and you make a subclass:
class FooBar(Foo, Bar):
    def frob(self):
        Foo.frob(self)
        Bar.frob(self)
Everything's fine, but then you realize that when you get down to it,
Foo really is a kind of Bar, so you change it
class Foo(Bar):
    def frob(self):
        print "frobbing as a foo"
        Bar.frob(self)
Which is all fine, except that in your derived class, FooBar.frob() calls Bar.frob() twice.
This is the exact problem super() solves: it protects you from calling superclass implementations more than once (when used as directed...).
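For completeness, here is roughly what the cooperative version looks like: if every class uses super, Bar.frob runs exactly once no matter how the hierarchy later changes (Python 2 print syntax, to match the example above):

class Bar(object):
    def frob(self):
        # Bar is the end of the frob chain, so it does not call super
        print "frobbing as a bar"

class Foo(Bar):
    def frob(self):
        print "frobbing as a foo"
        super(Foo, self).frob()

class FooBar(Foo, Bar):
    def frob(self):
        super(FooBar, self).frob()

FooBar().frob()  # frobbing as a foo, then frobbing as a bar -- each exactly once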
As for your first question, __init__ is neither a staticmethod nor a classmethod; it is an ordinary instance method. (That is, it receives the instance as its first argument.)
As for your second question, if you want to explicitly call multiple base class implementations, then doing it explicitly as you did is indeed the only way. However, you seem to be misunderstanding how super works. When you call super, it does not "know" if you have already called it. Both of your calls to super(Child, self).__init__ call the Base1 implementation, because that is the "nearest parent" (the most immediate superclass of Child).
You would use super if you want to call just this immediate superclass implementation. You would do this if that superclass was also set up to call its superclass, and so on. The way to use super is to have each class call only the next implementation "up" in the class hierarchy, so that the sequence of super calls overall calls everything that needs to be called, in the right order. This type of setup is often called "cooperative inheritance", and you can find various articles about it online, including here and here.

Why doesn't Python's super accept only an instance?

In python 2.x, super accepts the following cases
class super(object)
| super(type) -> unbound super object
| super(type, obj) -> bound super object; requires isinstance(obj, type)
| super(type, type2) -> bound super object; requires issubclass(type2, type)
| Typical use to call a cooperative superclass method:
As far as I can see, super is a class wrapping the type and (optionally) the instance, to resolve the superclass of a class.
I'm rather puzzled by a couple of things:
Why is there no super(instance), with typical usage e.g. super(self).__init__()? Technically, you can obtain the type of an object from the object itself, so the current strategy super(ClassType, self).__init__() looks kind of redundant. I assume there are compatibility issues with old-style classes, or with multiple inheritance, but I'd like to hear your take.
Why, on the other hand, does Python 3 accept super().__init__() (see Understanding Python super() with __init__() methods)? I see a kind of magic in this, violating the "explicit is better than implicit" Zen. self.super().__init__() would have seemed more appropriate to me.
super(ClassType, self).__init__() is not redundant in a cooperative multiple inheritance scheme -- ClassType is not necessarily the type of self, but the class from which you want to do the cooperative call to __init__.
In a class hierarchy where C inherits from B, which inherits from A: in C.__init__ you want to call the superclass's init from C's perspective, which is B.__init__; then in B.__init__ you must pass the class B to super, since you want to resolve the superclasses of B (or rather, the next class after B in C's MRO).
class A(object):
    def __init__(self):
        pass

class B(A):
    def __init__(self):
        super(B, self).__init__()

class C(B):
    def __init__(self):
        super(C, self).__init__()
If you now instantiate c = C(), you can see that the class argument is not redundant -- a hypothetical super(self).__init__() inside B.__init__ would not really work, because it could not tell which class's method it was called from. What you do is manually specify the class in which the method calling super is defined (and this is what Python 3's super solves with a hidden variable pointing to the method's class).
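For comparison, the Python 3 spelling the question asks about relies on exactly that hidden variable (the compiler stores the enclosing class in a __class__ cell), so a bare super() is equivalent to the two-argument form written out by hand:

class A:
    def __init__(self):
        super().__init__()

class B(A):
    def __init__(self):
        super().__init__()  # same as super(B, self).__init__()

B()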
Two links with examples of super and multiple inheritance:
Things to Know About Python Super (1 of 3)
Python's Super is nifty, but you can't use it
I can't provide a specific answer, but have you read the PEPs around super? I did a quick Google search and it came up with PEP 367 and PEP 3135.
http://www.python.org/dev/peps/pep-0367/
http://www.python.org/dev/peps/pep-3135/#numbering-note
Unlike any other language I know of, most of the time you can find the answers to Python's quirks in the PEPs, along with clear rationale and position statements.
Update:
Having read through PEP 3135, the related emails on the Python mailing list, and the language reference, it kind of makes sense why it is the way it is in Python 2 vs Python 3.
http://docs.python.org/library/functions.html?highlight=super#super
I think super was implemented to be explicit/redundant just to be on the safe side and to keep the logic involved as simple as possible (no sugar or deep logic to find the parent). Since super is a builtin function, it has to infer the correct return value from what is provided, without adding more complication to how Python objects are structured.
PEP 3135 changes everything because it presented and won the argument for a DRYer approach to super.

"MetaClass", "__new__", "cls" and "super" - what is the mechanism exactly?

I have read posts like these:
What is a metaclass in Python?
What are your (concrete) use-cases for metaclasses in Python?
Python's Super is nifty, but you can't use it
But somehow I got confused. Many confusions like:
When and why would I have to do something like the following?
# Refer link1
return super(MyType, cls).__new__(cls, name, bases, newattrs)
or
# Refer link2
return super(MetaSingleton, cls).__call__(*args, **kw)
or
# Refer link2
return type(self.__name__ + other.__name__, (self, other), {})
How does super work exactly?
What is class registry and unregistry in link1 and how exactly does it work? (I thought it had something to do with singletons. I may be wrong, being from a C background. My coding style is still a mix of functional and OO.)
What is the flow of class instantiation (subclass, metaclass, super, type) and method invocation (
metaclass->__new__, metaclass->__init__, super->__new__, subclass->__init__ inherited from metaclass
) with well-commented working code (though the first link is quite close, but it does not talk about cls keyword and super(..) and registry). Preferably an example with multiple inheritance.
P.S.: I made the last part as code because Stack Overflow formatting was converting the text metaclass->__new__
to metaclass->new
OK, you've thrown quite a few concepts into the mix here! I'm going to pull out a few of the specific questions you have.
In general, understanding super, the MRO and metaclasses is made much more complicated because there have been lots of changes in this tricky area over the last few versions of Python.
Python's own documentation is a very good reference, and completely up to date. There is an IBM developerWorks article which is fine as an introduction and takes a more tutorial-based approach, but note that it's five years old, and spends a lot of time talking about the older-style approaches to meta-classes.
super is how you access an object's super-classes. It's more complex than (for example) Java's super keyword, mainly because of multiple inheritance in Python. As Super Considered Harmful explains, using super() can result in you implicitly using a chain of super-classes, the order of which is defined by the Method Resolution Order (MRO).
You can see the MRO for a class easily by invoking mro() on the class (not on an instance). Note that meta-classes are not in an object's super-class hierarchy.
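For example (throwaway class names):

class Mixin(object): pass
class Base(object): pass
class Thing(Mixin, Base): pass

print(Thing.mro())
# [<class '__main__.Thing'>, <class '__main__.Mixin'>, <class '__main__.Base'>, <class 'object'>]
# The metaclass (type, in this case) is not part of that list.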
Thomas' description of meta-classes here is excellent:
A metaclass is the class of a class. Like a class defines how an instance of the class behaves, a metaclass defines how a class behaves. A class is an instance of a metaclass.
In the examples you give, here's what's going on:
1. The call to __new__ is being bubbled up to the next thing in the MRO. In this case, super(MyType, cls) would resolve to type; calling type.__new__ lets Python complete its normal instance creation steps.
2. This example is using meta-classes to enforce a singleton. He's overriding __call__ in the metaclass so that whenever a class instance is created, he intercepts that, and can bypass instance creation if there already is one (stored in cls.instance). Note that overriding __new__ in the metaclass won't be good enough, because that's only called when creating the class. Overriding __new__ on the class would work, however. (A sketch of this pattern follows after this list.)
3. This shows a way to dynamically create a class. Here he's appending the supplied class's name to the created class name, and adding it to the class hierarchy too.
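A rough sketch of the singleton pattern from point 2, assuming a metaclass that overrides __call__ (the names are illustrative, Python 3 syntax):

class Singleton(type):
    def __call__(cls, *args, **kwargs):
        # __call__ runs each time the class is called to make an instance,
        # so it is the right place to intercept instance creation
        if not hasattr(cls, 'instance'):
            cls.instance = super(Singleton, cls).__call__(*args, **kwargs)
        return cls.instance

class Config(metaclass=Singleton):
    pass

a = Config()
b = Config()
print(a is b)  # True -- the second call returned the stored instance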
I'm not exactly sure what sort of code example you're looking for, but here's a brief one showing meta-classes, inheritance and method resolution:
print('>>> # Defining classes:')

class MyMeta(type):
    def __new__(cls, name, bases, dct):
        print("meta: creating %s %s" % (name, bases))
        return type.__new__(cls, name, bases, dct)

    def meta_meth(cls):
        print("MyMeta.meta_meth")

    __repr__ = lambda c: c.__name__

class A(metaclass=MyMeta):
    def __init__(self):
        super(A, self).__init__()
        print("A init")

    def meth(self):
        print("A.meth")

class B(metaclass=MyMeta):
    def __init__(self):
        super(B, self).__init__()
        print("B init")

    def meth(self):
        print("B.meth")

class C(A, B, metaclass=MyMeta):
    def __init__(self):
        super(C, self).__init__()
        print("C init")

print('>>> c_obj = C()')
c_obj = C()
print('>>> c_obj.meth()')
c_obj.meth()
print('>>> C.meta_meth()')
C.meta_meth()
print('>>> c_obj.meta_meth()')
c_obj.meta_meth()
Example output (using Python >= 3.6):
>>> # Defining classes:
meta: creating A ()
meta: creating B ()
meta: creating C (A, B)
>>> c_obj = C()
B init
A init
C init
>>> c_obj.meth()
A.meth
>>> C.meta_meth()
MyMeta.meta_meth
>>> c_obj.meta_meth()
Traceback (most recent call last):
File "metatest.py", line 41, in <module>
c_obj.meta_meth()
AttributeError: 'C' object has no attribute 'meta_meth'
Here's the more pragmatic answer.
It rarely matters
"What is a metaclass in Python". Bottom line, type is the metaclass of all classes. You have almost no practical use for this.
class X(object):
    pass

type(X) == type
"What are your (concrete) use cases for metaclasses in Python?". Bottom line. None.
"Python's Super is nifty, but you can't use it". Interesting note, but little practical value. You'll never have a need for resolving complex multiple inheritance networks. It's easy to prevent this problem from arising by using an explicity Strategy design instead of multiple inheritance.
Here's my experience over the last 7 years of Python programming.
A class has 1 or more superclasses forming a simple chain from my class to object.
The concept of "class" is defined by a metaclass named type. I might want to extend the concept of "class", but so far, it's never come up in practice. Not once. type always does the right thing.
Using super works out really well in practice. It allows a subclass to defer to its superclass. It happens to show up in these metaclass examples because they're extending the built-in metaclass, type.
However, in all subclass situations, you'll make use of super to extend a superclass.
Metaclasses
The metaclass issue is this:
Every object has a reference to its type definition, or "class".
A class is, itself, also an object.
Therefore an object of type class has a reference to its type, or "class". The "class" of a "class" is a metaclass.
Since a "class" isn't a C++ run-time object, this doesn't happen in C++. It does happen in Java, Smalltalk and Python.
A metaclass defines the behavior of a class object.
90% of your interaction with a class is to ask the class to create a new object.
10% of the time, you'll be using class methods or class variables ("static" in C++ or Java parlance).
I have found a few use cases for class-level methods. I have almost no use cases for class variables. I've never had a situation where I needed to change the way object construction works.
