Python Multiple Inheritance: Argument passing (**kwargs) and super()

I am trying to understand Python multiple inheritance. I kind of understand MRO, super() and passing arguments in MI, but while I was reading the example below, it confused me.
class Contact:
    all_contacts = []

    def __init__(self, name=None, email=None, **kwargs):
        super().__init__(**kwargs)
        self.name = name
        self.email = email
        self.all_contacts.append(self)

class AddressHolder:
    def __init__(self, street=None, city=None, state=None, code=None, **kwargs):
        super().__init__(**kwargs)
        self.street = street
        self.city = city
        self.state = state
        self.code = code

class Friend(Contact, AddressHolder):
    def __init__(self, phone='', **kwargs):
        super().__init__(**kwargs)
        self.phone = phone
Now what I fail to understand is why super() is used in the Contact and AddressHolder classes. I mean, super() is used when we are inheriting from a parent class, but neither Contact nor AddressHolder inherits from any other class (technically, they inherit from object). This example confuses me about the right use of super().

All (new style) classes have a linearized method resolution order (MRO). Depending on the inheritance tree, actually figuring out the MRO can be a bit mind-bending, but it is deterministic via a relatively simple algorithm. You can also check the MRO through a class's __mro__ attribute.
super gets a delegator to the next class in the MRO. In your example, Friend has the following MRO:
Friend -> Contact -> AddressHolder -> object
If you call super in one of Friend's methods, you'll get a delegator that delegates to Contact's methods. If that method doesn't call super, you'll never call the methods on AddressHolder. In other words, super is responsible for calling only the next method in the MRO, not ALL the remaining methods in the MRO.
(If you call super in one of Friend's methods and Contact doesn't have its own implementation of that method, then super will delegate to AddressHolder, or whichever class has the next implementation for that method in the MRO.)
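A quick way to check both claims (this snippet assumes the Contact/AddressHolder/Friend classes from the question are defined; the argument values are just examples):

print(Friend.__mro__)
# (<class '__main__.Friend'>, <class '__main__.Contact'>,
#  <class '__main__.AddressHolder'>, <class 'object'>)

f = Friend(phone='555-1234', name='Dusty', email='d@example.com', street='Elm St')
print(f.name, f.street, f.phone)  # each __init__ consumed its own kwargs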
This is all well and good since object has a completely functional __init__ method (so long as **kwargs is empty by the time it is reached). Unfortunately, it doesn't work if you are trying to resolve the call chain of some custom method, e.g. foo. In that case, you want to insert a base class that all of the classes (or at least all of the base classes) inherit from. That class will end up at the end of the MRO and can do parameter validation1:
class FooProvider:
    def foo(self, **kwargs):
        assert not kwargs  # Make sure all kwargs have been stripped

class Bar(FooProvider):
    def foo(self, x, **kwargs):
        self.x = x
        super().foo(**kwargs)

class Baz(FooProvider):
    def foo(self, y, **kwargs):
        self.y = y
        super().foo(**kwargs)

class Qux(Bar, Baz):
    def foo(self, z, **kwargs):
        self.z = z
        super().foo(**kwargs)
demo:
>>> q = Qux()
>>> q.foo(x=1, y=2, z=3)
>>> vars(q)
{'z': 3, 'y': 2, 'x': 1}
>>> q.foo(die='invalid')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: foo() missing 1 required positional argument: 'z'
>>>
>>> q.foo(x=1, y=2, z=3, die='invalid')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/google/home/mgilson/sandbox/super_.py", line 18, in foo
    super().foo(**kwargs)
  File "/usr/local/google/home/mgilson/sandbox/super_.py", line 8, in foo
    super().foo(**kwargs)
  File "/usr/local/google/home/mgilson/sandbox/super_.py", line 13, in foo
    super().foo(**kwargs)
  File "/usr/local/google/home/mgilson/sandbox/super_.py", line 3, in foo
    assert not kwargs  # Make sure all kwargs have been stripped
AssertionError
Note that you can still have default arguments with this approach, so you don't lose too much there.
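For example (a hypothetical variant reusing Bar and Baz from the sketch above), a class can make its own parameter optional and still cooperate:

class Quux(Bar, Baz):
    def foo(self, z=0, **kwargs):  # z now has a default
        self.z = z
        super().foo(**kwargs)

q2 = Quux()
q2.foo(x=1, y=2)  # z falls back to 0; x and y are consumed as before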
1 Note that this isn't the only strategy to deal with this problem; there are other approaches you can take when writing mixins, etc., but this is by far the most robust approach.

Related

How to block the generation of superclass with .__new__, while allowing the generation of its subclass?

I have a class 'A', which has subclasses 'B', 'C', and 'D'.
'A' serves only the purpose of categorization and inheritance, so I don't want users to create instances of 'A'.
However, I want to create instances of 'B', 'C', and 'D' as usual.
First, I blocked the generation of the instance of 'A' by overriding A.__new__.
Then, I overrode B.__new__ again, just like below.
class A(object):
    def __new__(cls, *args, **kwargs):
        raise AssertionError('A cannot be generated directly.')

class B(A):
    def __new__(cls, *args, **kwargs):
        return super(cls, B).__new__(cls, *args, **kwargs)
However, with this code, creating a B raises the same AssertionError.
I understand that since A.__new__ is disabled, super(cls, B).__new__ (which is exactly the same as A.__new__) is disabled as well.
Is there any way to generate the instance of subclass, without invoking the __new__ of its superclass?
Edited - Resolved
Instead of invoking A.__new__, I invoked the __new__ method of a higher superclass (which is 'object'). This solution circumvented the problem.
The new code is something like this:
class A(object):
    def __new__(cls, *args, **kwargs):
        raise AssertionError('A cannot be generated directly.')

class B(A):
    def __new__(cls, *args, **kwargs):
        return object.__new__(cls)
Note that it is return object.__new__(cls), not return object.__new__(cls, *args, **kwargs). This is because object.__new__ doesn't accept any arguments except cls, so no extra arguments must be passed.
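A minimal sketch of the same point (Widget is a hypothetical name, not from the question): the override swallows the constructor arguments and leaves them for __init__:

class Widget:
    def __new__(cls, *args, **kwargs):
        # Do NOT forward args/kwargs here: object.__new__ accepts only cls.
        return object.__new__(cls)

    def __init__(self, size):
        self.size = size

w = Widget(3)  # __new__ ignores the extras; __init__ receives size=3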
You're explicitly invoking A.__new__ in the B.__new__ method. This version works fine:
class A:
    def __new__(cls, *args, **kwargs):
        raise AssertionError()

class B(A):
    def __new__(cls, *args, **kwargs):
        pass
Result:
>>> x = A()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in __new__
AssertionError
>>> y = B()
>>> y
>>>
Note that B.__new__ is now pretty much useless: I don't actually get a B instance (y is None, since B.__new__ returns nothing). I'm assuming you have an actual reason to be modifying __new__, in which case you'd put your replacement code where I have pass.
If you don't have a good reason to modify __new__, don't. You almost always want __init__ instead.
>>> class A1:
...     def __init__(self):
...         raise ValueError()
...
>>> class B1(A1):
...     def __init__(self):
...         pass
...
>>> A1()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in __init__
ValueError
>>> B1()
<__main__.B1 object at 0x7fa97b3f26d8>
Note how I get an actual default-constructed B1 object now.
As a general rule, you'll only want to modify __new__ if you're working with immutable types (like str and tuple) or building new metaclasses. You can see more details on what each method is responsible for in the docs.
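A short sketch of the immutable-type case (UpperStr is just an illustrative name): a str's contents are fixed at creation, so the customization has to happen in __new__:

class UpperStr(str):
    def __new__(cls, value):
        # The string's value is fixed here; __init__ would be too late.
        return super().__new__(cls, value.upper())

print(UpperStr('hello'))  # HELLO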

super() does not work together with type()? : super(type, obj): obj must be an instance or subtype of type

In the code, new_type is a class created with members from class X and derived from class A. Any workaround for the TypeError?
class A:
    def __init__(self):
        pass

    def B(self):
        pass

    def C(self):
        pass

class X:
    def __init__(self):
        print(type(self).__bases__)
        super().__init__()

    def B(self):
        self.B()

    def Z(self):
        pass

a = X()
print('ok')
new_type = type("R", (A,), dict(X.__dict__))
some_obj = new_type()
Program output:
(<class 'object'>,)
ok
(<class '__main__.A'>,)
Traceback (most recent call last):
  File "c:\Evobase2005\Main\EvoPro\dc\tests\sandbox.py", line 37, in <module>
    some_obj = new_type()
  File "c:\Evobase2005\Main\EvoPro\dc\tests\sandbox.py", line 27, in __init__
    super().__init__()
TypeError: super(type, obj): obj must be an instance or subtype of type
In the production code, class A does not exist either; it too is created dynamically, because it uses resources from a C++ library for class construction. Hence the twisted code. ;)
EDIT This fails too.
class X:
    def __init__(self):
        print(type(self).__bases__)
        super().__init__()

    def Z(self):
        pass

new_type = type("R", (object,), dict(X.__dict__))
some_obj = new_type()
super() has two forms, a two-argument form and a zero-argument form; quoting the standard library docs:
The two argument form specifies the arguments exactly and makes the appropriate references. The zero argument form only works inside a class definition, as the compiler fills in the necessary details to correctly retrieve the class being defined, as well as accessing the current instance for ordinary methods.
The zero-argument form will not work here: the compiler fills in a hidden __class__ cell that refers to the class in whose body the method was textually defined (X). When X's methods are copied into R, super() still resolves to super(X, self); since R does not inherit from X, self is not an instance of X, hence the TypeError.
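You can see that hidden cell directly; a quick check (assuming a class X whose method uses zero-argument super()):

class X:
    def __init__(self):
        super().__init__()

# The compiler attached a closure cell holding X itself:
print(X.__init__.__closure__[0].cell_contents)  # <class '__main__.X'>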
However, when you use the two-argument form of super(), the code works fine:
class A:
    def __init__(self):
        pass

class X:
    def __init__(self):
        print(type(self).__bases__)
        super(self.__class__, self).__init__()

x = X()
R = type("R", (A,), dict(X.__dict__))
obj = R()
Output:
(<class 'object'>,)
(<class '__main__.A'>,)
You cannot use super(self.__class__, self) more than once in the call hierarchy, though, or you will run into infinite recursion; see this SO answer.
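A minimal sketch of that pitfall (hypothetical Base/Child classes): for an instance of Child, self.__class__ is always Child, no matter which class's method is currently executing, so the base keeps delegating to itself:

class Base:
    def __init__(self):
        # self.__class__ is still Child here, so this is super(Child, self)
        # again, which resolves right back to Base.__init__.
        super(self.__class__, self).__init__()

class Child(Base):
    def __init__(self):
        super(self.__class__, self).__init__()  # super(Child, self) -> Base.__init__

# Child()  # would raise RecursionError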

Python extending with - using super() Python 3 vs Python 2

Originally I wanted to ask this question, but then I found it had already been asked before...
Googling around I found this example of extending configparser. The following works with Python 3:
$ python3
Python 3.2.3rc2 (default, Mar 21 2012, 06:59:51)
[GCC 4.6.3] on linux2
>>> from configparser import SafeConfigParser
>>> class AmritaConfigParser(SafeConfigParser):
...     def __init__(self):
...         super().__init__()
...
>>> cfg = AmritaConfigParser()
But not with Python 2:
>>> class AmritaConfigParser(SafeConfigParser):
...     def __init__(self):
...         super(SafeConfigParser).init()
...
>>> cfg = AmritaConfigParser()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in __init__
TypeError: must be type, not classobj
Then I read a little bit on Python new-style vs. old-style classes (e.g. here).
And now I am wondering whether I can do:
class MyConfigParser(ConfigParser.ConfigParser):
    def Write(self, fp):
        """Override the module's original write function"""
        ....

    def MyWrite(self, fp):
        """Define a new function and inherit all others"""
But shouldn't I call __init__? Is this the Python 2 equivalent:
class AmritaConfigParser(ConfigParser.SafeConfigParser):
    # def __init__(self):
    #     super().__init__()  # Python 3 syntax, or rather, new-style class syntax ...
    #
    # is this the equivalent of the above?
    def __init__(self):
        ConfigParser.SafeConfigParser.__init__(self)
super() (without arguments) was introduced in Python 3 (along with __class__):
super() -> same as super(__class__, self)
so that would be the Python 2 equivalent for new-style classes:
super(CurrentClass, self)
for old-style classes you can always use:
class Classname(OldStyleParent):
    def __init__(self, *args, **kwargs):
        OldStyleParent.__init__(self, *args, **kwargs)
In a single-inheritance case (when you subclass one class only), your new class inherits the methods of the base class. This includes __init__. So if you don't define it in your class, you will get the one from the base.
Things start being complicated if you introduce multiple inheritance (subclassing more than one class at a time). This is because if more than one base class has __init__, your class will inherit the first one only.
In such cases, you should really use super if you can; I'll explain why. But you can't always. The problem is that all your base classes must also use it (and their base classes as well, the whole tree).
If that is the case, then this will also work correctly (in Python 3, but you could rework it for Python 2; it also has super):
class A:
    def __init__(self):
        print('A')
        super().__init__()

class B:
    def __init__(self):
        print('B')
        super().__init__()

class C(A, B):
    pass

C()
# prints:
# A
# B
Notice how both base classes use super even though they don't have their own base classes.
What super does is call the method from the next class in the MRO (method resolution order). The MRO for C is: (C, A, B, object). You can print C.__mro__ to see it.
So, C inherits __init__ from A and super in A.__init__ calls B.__init__ (B follows A in MRO).
So by doing nothing in C, you end up calling both, which is what you want.
Now if you were not using super, you would end up inheriting A.__init__ (as before) but this time there's nothing that would call B.__init__ for you.
class A:
    def __init__(self):
        print('A')

class B:
    def __init__(self):
        print('B')

class C(A, B):
    pass

C()
# prints:
# A
To fix that you have to define C.__init__:
class C(A, B):
    def __init__(self):
        A.__init__(self)
        B.__init__(self)
The problem with that is that in more complicated MI trees, the __init__ methods of some classes may end up being called more than once, whereas super/the MRO guarantee that they're called just once.
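Here is a small sketch of that failure mode (a hypothetical diamond, not from the question): with explicit calls, the shared base's __init__ runs twice:

class Root:
    def __init__(self):
        print('Root')

class Left(Root):
    def __init__(self):
        Root.__init__(self)
        print('Left')

class Right(Root):
    def __init__(self):
        Root.__init__(self)
        print('Right')

class Bottom(Left, Right):
    def __init__(self):
        Left.__init__(self)
        Right.__init__(self)

Bottom()  # prints Root, Left, Root, Right -- Root.__init__ ran twice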
In short, they are equivalent.
Let's take a historical view:
(1) At first, the call looked like this:
class MySubClass(MySuperClass):
    def __init__(self):
        MySuperClass.__init__(self)
(2) To make code more abstract (and more portable), a common way to get the superclass was invented:
super(<class>, <instance>)
And the init function could be:
class MySubClassBetter(MySuperClass):
    def __init__(self):
        super(MySubClassBetter, self).__init__()
However, requiring explicit passing of both the class and the instance breaks the DRY (Don't Repeat Yourself) rule a bit.
(3) In Python 3, it is smarter:
super()
is enough in most cases. You can refer to http://www.python.org/dev/peps/pep-3135/
Just to have a simple and complete example for Python 3, which most people seem to be using now.
class MySuper(object):
    def __init__(self, a):
        self.a = a

class MySub(MySuper):
    def __init__(self, a, b):
        self.b = b
        super().__init__(a)

my_sub = MySub(42, 'chickenman')
print(my_sub.a)
print(my_sub.b)
gives
42
chickenman
Another Python 3 implementation, this one involving abstract classes, with super(). You should remember that
super().__init__(name, 10)
has the same effect as
Person.__init__(self, name, 10)
Remember there's a hidden 'self' in super(), so the same object is passed on to the superclass's init method, and the attributes are added to the object that called it.
Hence super() gets translated to Person, and if you then include the hidden self, you get the code fragment above.
from abc import ABCMeta, abstractmethod

class Person(metaclass=ABCMeta):
    name = ""
    age = 0

    def __init__(self, personName, personAge):
        self.name = personName
        self.age = personAge

    @abstractmethod
    def showName(self):
        pass

    @abstractmethod
    def showAge(self):
        pass

class Man(Person):
    def __init__(self, name, height):
        self.height = height
        # Person.__init__(self, name, 10)
        super().__init__(name, 10)  # same as Person.__init__(self, name, 10)
        # Basically used to call the superclass's init. This is used in case you
        # want to run the subclass's init and then also call the superclass's init.
        # Since there's a hidden self in super()'s parameters, the superclass's
        # attributes are set on the same object that made the super() call.

    def showIdentity(self):
        return self.name, self.age, self.height

    def showName(self):
        pass

    def showAge(self):
        pass

a = Man("piyush", "179")
print(a.showIdentity())

python: super()-like proxy object that starts the MRO search at a specified class

According to the docs, super(cls, obj) returns
a proxy object that delegates method calls to a parent or sibling
class of type cls
I understand why super() offers this functionality, but I need something slightly different: I need to create a proxy object that delegates method calls (and attribute lookups) to class cls itself; and as in super, if cls doesn't implement the method/attribute, my proxy should continue looking in MRO order (of the new class, not the original one). Is there any function I can write that achieves that?
Example:
class X:
    def act(self):
        # ...
        pass

class Y:
    def act(self):
        # ...
        pass

class A(X, Y):
    def act(self):
        # ...
        pass

class B(X, Y):
    def act(self):
        # ...
        pass

class C(A, B):
    def act(self):
        # ...
        pass

c = C()
b = some_magic_function(B, c)
# `b` needs to delegate calls to `act` to B, and look up attribute `s` in B
# I will pass `b` somewhere else, and have no control over it
Of course, I could do b = super(A, c), but that relies on knowing the exact class hierarchy and the fact that B follows A in the MRO. It would silently break if either of these two assumptions changed in the future. (Note that super doesn't make any such assumptions!)
If I just needed to call b.act(), I could use B.act(c). But I am passing b to someone else, and have no idea what they'll do with it. I need to make sure it doesn't betray me and start acting like an instance of class C at some point.
A separate question, the documentation for super() (in Python 3.2) only talks about its method delegation, and does not clarify that attribute lookups for the proxy are also performed the same way. Is it an accidental omission?
EDIT
The updated Delegate approach works in the following example as well:
class A:
    def f(self):
        print('A.f')

    def h(self):
        print('A.h')
        self.f()

class B(A):
    def g(self):
        self.f()
        print('B.g')

    def f(self):
        print('B.f')

    def t(self):
        super().h()

a_true = A()
# instance of A ends up executing A.f
a_true.h()

b = B()
a_proxy = Delegate(A, b)
# *unlike* super(), the updated `Delegate` implementation would call A.f, not B.f
a_proxy.h()
Note that the updated class Delegate is closer to what I want than super() for two reasons:
super() only does its proxying for the first call; subsequent calls happen as normal, since by then the object itself is used, not its proxy.
super() does not allow attribute access.
Thus, my question as asked has a (nearly) perfect answer in Python.
It turns out that, at a higher level, I was trying to do something I shouldn't (see my comments here).
This class should cover the most common cases:
class Delegate:
    def __init__(self, cls, obj):
        self._delegate_cls = cls
        self._delegate_obj = obj

    def __getattr__(self, name):
        x = getattr(self._delegate_cls, name)
        if hasattr(x, "__get__"):
            return x.__get__(self._delegate_obj)
        return x
Use it like this:
b = Delegate(B, c)
(with the names from your example code.)
Restrictions:
You cannot retrieve some special attributes like __class__ etc. from the class you pass in the constructor via this proxy. (This restriction also applies to super.)
This might behave weirdly if the attribute you want to retrieve is some weird kind of descriptor.
Edit: If you want the code in the update to your question to work as desired, you can use the following code:
class Delegate:
    def __init__(self, cls):
        self._delegate_cls = cls

    def __getattr__(self, name):
        x = getattr(self._delegate_cls, name)
        if hasattr(x, "__get__"):
            return x.__get__(self)
        return x
This passes the proxy object as the self parameter to any called method, and it doesn't need the original object at all, so I removed it from the constructor.
If you also want instance attributes to be accessible you can use this version:
class Delegate:
    def __init__(self, cls, obj):
        self._delegate_cls = cls
        self._delegate_obj = obj

    def __getattr__(self, name):
        if name in vars(self._delegate_obj):
            return getattr(self._delegate_obj, name)
        x = getattr(self._delegate_cls, name)
        if hasattr(x, "__get__"):
            return x.__get__(self)
        return x
A separate question, the documentation for super() (in Python 3.2)
only talks about its method delegation, and does not clarify that
attribute lookups for the proxy are also performed the same way. Is it
an accidental omission?
No, this is not accidental. super() does nothing for attribute lookups. The reason is that attributes on an instance are not associated with a particular class, they're just there. Consider the following:
class A:
    def __init__(self):
        self.foo = 'foo set from A'

class B(A):
    def __init__(self):
        super().__init__()
        self.bar = 'bar set from B'

class C(B):
    def method(self):
        self.baz = 'baz set from C'

class D(C):
    def __init__(self):
        super().__init__()
        self.foo = 'foo set from D'
        self.baz = 'baz set from D'

instance = D()
instance.method()
instance.bar = 'not set from a class at all'
Which class "owns" foo, bar, and baz?
If I wanted to view instance as an instance of C, should it have a baz attribute before method is called? How about afterwards?
If I view instance as an instance of A, what value should foo have? Should bar be invisible because it was only added in B, or visible because it was set to a value outside the class?
All of these questions are nonsense in Python. There's no possible way to design a system with the semantics of Python that could give sensible answers to them. __init__ isn't even special in terms of adding attributes to instances of the class; it's just a perfectly ordinary method that happens to be called as part of the instance creation protocol. Any method (or indeed code from another class altogether, or not from any class at all) can create attributes on any instance it has a reference to.
In fact, all of the attributes of instance are stored in the same place:
>>> instance.__dict__
{'baz': 'baz set from C', 'foo': 'foo set from D', 'bar': 'not set from a class at all'}
There's no way to tell which of them were originally set by which class, or were last set by which class, or whatever measure of ownership you want. There's certainly no way to get at "the A.foo being shadowed by D.foo", as you would expect from C++; they're the same attribute, and any writes to it by one class (or from elsewhere) will clobber a value left in it by the other class.
The consequence of this is that super() does not perform attribute lookups the same way it does method lookups; it can't, and neither can any code you write.
In fact, from running some experiments, neither super nor Sven's Delegate actually supports direct attribute retrieval at all!
class A:
    def __init__(self):
        self.spoon = 1
        self.fork = 2

    def foo(self):
        print('A.foo')

class B(A):
    def foo(self):
        print('B.foo')

b = B()
d = Delegate(A, b)
s = super(B, b)
Then both work as expected for methods:
>>> d.foo()
A.foo
>>> s.foo()
A.foo
But:
>>> d.fork
Traceback (most recent call last):
  File "<pyshell#43>", line 1, in <module>
    d.fork
  File "/tmp/foo.py", line 6, in __getattr__
    x = getattr(self._delegate_cls, name)
AttributeError: type object 'A' has no attribute 'fork'
>>> s.spoon
Traceback (most recent call last):
  File "<pyshell#45>", line 1, in <module>
    s.spoon
AttributeError: 'super' object has no attribute 'spoon'
So they both only really work for calling methods on, not for passing to arbitrary third-party code to pretend to be an instance of the class you want to delegate to.
They don't behave the same way in the presence of multiple inheritance, unfortunately. Given:
class Delegate:
    def __init__(self, cls, obj):
        self._delegate_cls = cls
        self._delegate_obj = obj

    def __getattr__(self, name):
        x = getattr(self._delegate_cls, name)
        if hasattr(x, "__get__"):
            return x.__get__(self._delegate_obj)
        return x

class A:
    def foo(self):
        print('A.foo')

class B:
    pass

class C(B, A):
    def foo(self):
        print('C.foo')

c = C()
d = Delegate(B, c)
s = super(C, c)
Then:
>>> d.foo()
Traceback (most recent call last):
  File "<pyshell#50>", line 1, in <module>
    d.foo()
  File "/tmp/foo.py", line 6, in __getattr__
    x = getattr(self._delegate_cls, name)
AttributeError: type object 'B' has no attribute 'foo'
>>> s.foo()
A.foo
That's because Delegate ignores the full MRO of whatever class _delegate_obj is an instance of, using only the MRO of _delegate_cls. super, on the other hand, does what you asked for in the question, but the behaviour seems quite strange: it's not wrapping an instance of C to pretend it's an instance of B, because direct instances of B don't have foo defined.
Here's my attempt:
class MROSkipper:
    def __init__(self, cls, obj):
        self.__cls = cls
        self.__obj = obj

    def __getattr__(self, name):
        mro = self.__obj.__class__.__mro__
        i = mro.index(self.__cls)
        if i == 0:
            # It's at the front anyway, just behave as getattr
            return getattr(self.__obj, name)
        else:
            # Check __dict__ not getattr, otherwise we'd find methods
            # on classes we're trying to skip
            try:
                return self.__obj.__dict__[name]
            except KeyError:
                return getattr(super(mro[i - 1], self.__obj), name)
I rely on the __mro__ attribute of classes to properly figure out where to start from, then I just use super. You could walk the MRO chain from that point yourself, checking class __dict__s for methods, if the weirdness of going back one step to use super is too much.
I've made no attempt to handle unusual attributes: those implemented with descriptors (including properties), or the magic methods looked up behind the scenes by Python, which often start at the class rather than the instance directly. But this behaves as you asked moderately well (with the caveat expounded on ad nauseam in the first part of my post: looking up attributes this way will not give you any different results than looking them up directly on the instance).
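A usage sketch with the X/Y/A/B/C hierarchy from the question (assuming those classes are defined):

c = C()
b = MROSkipper(B, c)
b.act()  # starts the search at B in C's MRO, so B.act runs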

Issue with making object callable in python

I wrote code like this
>>> class a(object):
...     def __init__(self):
...         self.__call__ = lambda x: x
...
>>> b = a()
I expected that objects of class a would be callable, but it turns out they are not.
>>> b()
Traceback (most recent call last):
  File "<pyshell#5>", line 1, in <module>
    b()
TypeError: 'a' object is not callable
>>> callable(b)
False
>>> hasattr(b, '__call__')
True
>>>
I can't understand why.
Please help me.
Special methods are looked up on the type (i.e., the class) of the object being operated on, not on the specific instance. Think about it: otherwise, if a class defines __call__, then when the class itself is called, that __call__ would have to run... what a disaster! But fortunately the special method is instead looked up on the class's type, AKA the metaclass, and all is well. ("Legacy classes" had very irregular behavior in this respect, which is why we're all better off with new-style ones; they're the only ones left in Python 3.)
So if you need "per-instance overriding" of special methods, you have to ensure the instance has its own unique class. That's very easy:
class a(object):
    def __init__(self):
        self.__class__ = type(self.__class__.__name__, (self.__class__,), {})
        self.__class__.__call__ = lambda x: x
and you're there. Of course that would be silly in this case, as every instance ends up with just the same "so-called per-instance" (!) __call__, but it would be useful if you really needed overriding on a per-individual-instance basis.
__call__ needs to be defined on the class, not the instance
class a(object):
    def __init__(self):
        pass

    __call__ = lambda x: x
but most people probably find it more readable to define the method the usual way
class a(object):
    def __init__(self):
        pass

    def __call__(self):
        return self
If you need to have different behaviour for each instance you could do it like this
class a(object):
    def __init__(self):
        self.myfunc = lambda x: x

    def __call__(self):
        return self.myfunc(self)
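A quick usage sketch of that per-instance pattern (values are just examples):

obj = a()
print(obj() is obj)         # True: the default myfunc returns its argument
obj.myfunc = lambda x: 42   # swap in behavior for this instance only
print(obj())                # 42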
What about this?
Define a base class AllowDynamicCall:
class AllowDynamicCall(object):
    def __call__(self, *args, **kwargs):
        return self._callfunc(self, *args, **kwargs)
And then subclass AllowDynamicCall:
class Example(AllowDynamicCall):
    def __init__(self):
        self._callfunc = lambda s: s
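And a short usage sketch (hypothetical values):

e = Example()
print(e() is e)                   # True: _callfunc returns the instance itself
e._callfunc = lambda s, n: n * 2  # per-instance behavior, swapped at runtime
print(e(21))                      # 42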
