I have two (non-trivial) classes like this:
class MyClass(Base):
    def __init__(self, etc, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.etc = etc

    def do_something(self, arg1, arg2):
        # not actually a function call, just to make this as minimal as possible
        output = computation_with_a_bunch_of_code(arg1, arg2)
        return super().do_something(output)
Don't get too attached to the exact format; the super call may be buried in the really long computation (possibly multiple times), but the only difference between the two classes is the superclass being delegated to.
I've tried looking at related questions and none quite did what I wanted. For instance, this recommended approach using multiple inheritance doesn't rely on super calls the way mine does.
While I'd love to just instantiate the base and delegate to it like an ordinary object, like so:
class MyFakeBaseClass:
    def __init__(self, etc, base):
        self.base = base
        self.etc = etc

    def do_something(self, arg1, arg2):
        output = computation_with_a_bunch_of_code(arg1, arg2)
        return self.base.do_something(output)
The library I'm using requires the ultimate base class of whatever I pass in to be one of its base classes (I think it uses metaclass subtyping magic to do things to your overridden methods, because duck typing doesn't generally work). While I could use the API's lowest-level abstract class as my base, I'd have to do a lot of legwork to manually override a ton of methods with self.base.method(), which would end up being about as much boilerplate as the normal code I'm replacing anyway, and a nightmare to read.
My ideal solution would look something like:
class _BaseClass(???):
    def __init__(self, etc, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.etc = etc

    def do_something(self, arguments):
        arguments = computation_with_a_bunch_of_code()
        return super().do_something(arguments)
class RealOne(_BaseClass(SuperOne)): pass
class RealTwo(_BaseClass(SuperTwo)): pass
I've tried typing.Generic, since on the surface it seemed similar to the examples, but given a Generic[T] it doesn't seem to use T as a real, concrete superclass when calling super(). And on further research it appears you can't really use it this way anyway, if nothing else because of when the relevant type values actually get bound.
Currently I've settled on:
def _actually_do_something(base, obj, arg1, arg2):
    output = computation_with_a_bunch_of_code(arg1, arg2)
    return base.do_something(output)

class MyClass(Base):
    # ... etc ...
    def do_something(self, arg1, arg2):
        base = super()
        return _actually_do_something(base, self, arg1, arg2)
Which is fine, but with multiple methods doing this it's a lot of boilerplate and a frustrating amount of indirection, somewhat to me, but definitely to the other people working on this whom I've shown it to. It's not completely unmaintainable, but if a cleaner solution exists I'd like to use it.
It seems to me that the only other solution within my (unfortunate) constraints is metaclasses, but I couldn't find anything that quite did what I wanted: most examples were either trivial (setting an attribute) or didn't show how to use a dynamic base class. I'd welcome a simpler solution even more, but as far as I can tell decorators and the like can't quite do this. PEP 638 (Syntactic Macros) seems like it would be an easy avenue to what I want, but as far as I can tell it isn't implemented yet, and I'm unfortunately tied to environments that don't have a new enough Python anyway (I'm stuck on 3.6, specifically). It's possible the answer is "just do what you're doing now and update to that whenever it comes out," but again, if there's something relatively simple, compact, and nice-looking I'm missing, I'd like to know.
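For what it's worth, the shape of that "ideal solution" can at least be written down with a plain class-factory function: a function that takes the concrete base and returns the shared subclass. This is only a sketch under the assumption that the library tolerates dynamically created intermediate classes; make_base and the method bodies just mirror the pseudocode above, not any real API.
def make_base(Super):
    # Hypothetical factory: builds the shared "middle" class on top of an
    # arbitrary base class passed in at class-definition time.
    class _BaseClass(Super):
        def __init__(self, etc, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.etc = etc

        def do_something(self, arg1, arg2):
            output = computation_with_a_bunch_of_code(arg1, arg2)
            return super().do_something(output)
    return _BaseClass

class RealOne(make_base(SuperOne)): pass
class RealTwo(make_base(SuperTwo)): pass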
Related
I have a particular problem, but I will make the example more general.
I have a Parent class with a mandatory constructor parameter and a few optional ones, each with a default value. Then, I inherit Child from it and add a mandatory parameter, and inherit GrandChild from Child and add another mandatory parameter to the constructor. The result is similar to this:
class Parent():
    def __init__(self, arg1, opt_arg1='opt_arg1_default_val', opt_arg2='opt_arg2_default_val',
                 opt_arg3='opt_arg3_default_val', opt_arg4='opt_arg4_default_val'):
        self.arg1 = arg1
        self.opt_arg1 = opt_arg1
        self.opt_arg2 = opt_arg2
        self.opt_arg3 = opt_arg3
        self.opt_arg4 = opt_arg4

class Child(Parent):
    def __init__(self, arg1, arg2, opt_arg1, opt_arg2, opt_arg3, opt_arg4):
        super().__init__(arg1, opt_arg1, opt_arg2, opt_arg3, opt_arg4)
        self.arg2 = arg2

class GrandChild(Child):
    def __init__(self, arg1, arg2, arg3, opt_arg1, opt_arg2, opt_arg3, opt_arg4):
        super().__init__(arg1, arg2, opt_arg1, opt_arg2, opt_arg3, opt_arg4)
        self.arg3 = arg3
The problem is that this looks rather ugly, especially since, if I want to inherit more classes from Child, I'd have to copy/paste all the arguments into the new class's constructor.
In search for a solution, I found here that I can solve this problem using **kwargs like so:
class Parent():
    def __init__(self, arg1, opt_arg1='opt_arg1_default_val', opt_arg2='opt_arg2_default_val',
                 opt_arg3='opt_arg3_default_val', opt_arg4='opt_arg4_default_val'):
        self.arg1 = arg1
        self.opt_arg1 = opt_arg1
        self.opt_arg2 = opt_arg2
        self.opt_arg3 = opt_arg3
        self.opt_arg4 = opt_arg4

class Child(Parent):
    def __init__(self, arg1, arg2, **kwargs):
        super().__init__(arg1, **kwargs)
        self.arg2 = arg2

class GrandChild(Child):
    def __init__(self, arg1, arg2, arg3, **kwargs):
        super().__init__(arg1, arg2, **kwargs)
        self.arg3 = arg3
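For instance (values made up), optional arguments can still be supplied by keyword and are simply forwarded up to Parent:
gc = GrandChild('a1', 'a2', 'a3', opt_arg2='custom')
print(gc.arg1, gc.arg2, gc.arg3)  # a1 a2 a3
print(gc.opt_arg2)                # custom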
However, I am not sure if this is the right way.
There is also a slight inconvenience while creating objects of these classes. I am using PyCharm to develop, and the IDE has a useful feature that displays a function's or constructor's arguments as you type the call. In the first example it shows all of them, which makes development much easier and can help future developers as well, since they can see what other arguments the function has. In the second example, however, the optional arguments are not shown anymore.
And I do not think it is a good practice to use **kwargs in this case, since one would have to dig deeper into the code up to the Parent class to check what optional arguments it has.
I've also looked into using the Builder pattern, but then all I do is move the argument list from my classes to builder classes, and I have the same problem: builders with lots of arguments that, when inherited, create even more arguments on top of the existing ones. Also, as far as I can see, Builder doesn't really make much sense in Python, considering all class members are public and can be accessed without needing setters and getters.
Any ideas on how to solve this constructor problem?
The basic idea is to write code that generates the __init__ method for you, with all the parameters specified explicitly rather than via *args and/or **kwargs, and without even needing to repeat yourself with all those self.arg1 = arg1 lines.
And, ideally, it can make it easy to add type annotations that PyCharm can use for popup hints and/or static type checking.1
And, while you're at it, why not build a __repr__ that displays the same values? And maybe even an __eq__, and a __hash__, and maybe lexicographical comparison operators, and conversion to and from a dict whose keys match the attributes, for easy JSON persistence, and…
Or, even better, use a library that takes care of that for you.
Python 3.7 comes with such a library, dataclasses. Or you can use a third-party library like attrs, that works with Python 3.4 and (with some limitations) 2.7. Or, for simple cases (where your objects are immutable, and you want them to work like a tuple of their attributes in specified order), you can use namedtuple, which works back to 3.0 and 2.6.
Unfortunately, dataclasses doesn't quite work for your use case. If you just write this:
from dataclasses import dataclass

@dataclass
class Parent:
    arg1: str
    opt_arg1: str = 'opt_arg1_default_val'
    opt_arg2: str = 'opt_arg2_default_val'
    opt_arg3: str = 'opt_arg3_default_val'
    opt_arg4: str = 'opt_arg4_default_val'

@dataclass
class Child(Parent):
    arg2: str
… you'll get an error, because it tries to place the mandatory parameter arg2 after the default-values parameters opt_arg1 through opt_arg4.
dataclasses doesn't have any way to reorder parameters (Child(arg1, arg2, opt_arg1=…)), or to force them to be keyword-only parameters (Child(*, arg1, opt_arg1=…, arg2)). attrs doesn't have that functionality out of the box, but you can add it.
So, it's not quite as trivial as you'd hope, but it's doable.
But if you wanted to write this yourself, how would you create the __init__ function dynamically?
The simplest option is exec.
You've probably heard that exec is dangerous. But it's only dangerous if you're passing in values that came from your user. Here, you're only passing in values that came from your own source code.
It's still ugly, but sometimes it's the best answer anyway. The standard library's namedtuple used to be one giant exec template, and even the current version uses exec for most of the methods, and so does dataclasses.
Also, notice that all of these modules store the set of fields somewhere in a private class attribute, so subclasses can easily read the parent class's fields. If you didn't do that, you could use the inspect module to get the Signature for your base class's (or base classes', for multiple inheritance) initializer and work it out from there. But just using base._fields is obviously a lot simpler (and allows storing extra metadata that doesn't normally go in signatures).
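If you went the introspection route instead, a minimal sketch (the helper name here is made up) could use inspect.signature to split a base initializer's parameters into required and optional ones:
import inspect

def inherited_init_params(cls):
    # Hypothetical helper: inspect the base class's __init__ instead of
    # relying on a _fields class attribute.
    required, optional = [], {}
    for name, param in inspect.signature(cls.__init__).parameters.items():
        if name == 'self' or param.kind in (param.VAR_POSITIONAL, param.VAR_KEYWORD):
            continue
        if param.default is param.empty:
            required.append(name)
        else:
            optional[name] = param.default
    return required, optional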
Here's a dead simple implementation that doesn't handle most of the features of attrs or dataclasses, but does order all mandatory parameters before all optionals.
def makeinit(cls):
    fields = ()
    optfields = {}
    # Walk the MRO so the generated __init__ also covers fields declared on
    # base classes, with base-class fields ordered before the subclass's own.
    for base in cls.mro():
        fields = getattr(base, '_fields', ()) + fields
        optfields = {**getattr(base, '_optfields', {}), **optfields}
    optparams = [f"{name}={val!r}" for name, val in optfields.items()]
    paramstr = ', '.join(['self', *fields, *optparams])
    assignstr = "\n    ".join(f"self.{name} = {name}" for name in [*fields, *optfields])
    # Build the initializer from source and attach it to the class.
    exec(f'def __init__({paramstr}):\n    {assignstr}\ncls.__init__ = __init__')
    return cls
@makeinit
class Parent:
    _fields = ('arg1',)
    _optfields = {'opt_arg1': 'opt_arg1_default_val',
                  'opt_arg2': 'opt_arg2_default_val',
                  'opt_arg3': 'opt_arg3_default_val',
                  'opt_arg4': 'opt_arg4_default_val'}

@makeinit
class Child(Parent):
    _fields = ('arg2',)
Now, you've got exactly the __init__ methods you wanted on Parent and Child, fully inspectable2 (including help), and without having to repeat yourself.
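For example, assuming the makeinit decorator above, the generated initializer behaves and introspects like a hand-written one:
import inspect

c = Child('arg1_val', 'arg2_val', opt_arg2='other')
print(c.arg1, c.arg2, c.opt_arg2)          # arg1_val arg2_val other
print(inspect.signature(Child.__init__))   # (self, arg1, arg2, opt_arg1=..., ...)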
1. I don't use PyCharm, but I know that well before 3.7 came out, their devs were involved in the discussion of @dataclass and were already working on adding explicit support for it to their IDE, so it doesn't even have to evaluate the class definition to get all that information. I don't know if it's available in the current version, but if not, I assume it will be. Meanwhile, @dataclass already just works for me with IPython auto-completion, emacs flycheck, and so on, which is good enough for me. :)
2. … at least at runtime. PyCharm may not be able to figure things out statically well enough to do popup completion.
I have this problem in a fun project where I have to tell a class to execute an operation for a certain time period, with the start and end times specified (along with the interval, of course).
For the sake of argument, consider that class A (the class whose API I call) has to call class B (which calls class C, which in turn calls class D) and class E (which calls classes F and G).
A -> B -> C -> D
A -> E -> F
A -> E -> G
Now classes B, C, and E require the context about the time. I've currently set it up so that class A passes the context to classes B and E, and they in turn pass it around as needed.
I'm trying to figure out the best way to satisfy this requirement without passing context around, and, as much as I hate it, I was considering the Highlander (i.e. Singleton) pattern or the Borg pattern (a variant on Monostate). I just wanted to know what my options were with regard to these patterns.
Option 1
Use a traditional borg e.g.:
class Borg:
    __shared_state = {}

    def __init__(self):
        self.__dict__ = self.__shared_state
I could simply instantiate this class everywhere and have access to the global state that I want.
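For example (class name and values here are purely illustrative), every instance shares the same attribute dictionary, so the time context only has to be set once:
class TimeContext(Borg):
    pass

ctx1 = TimeContext()
ctx1.time = ('2020-01-01', '2020-02-01', 'd')  # made-up values

ctx2 = TimeContext()
print(ctx2.time)  # same tuple: all instances share one __dict__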
Option 2
Another variant on the monostate:
class InheritableBorg:
    __time = (start, end, interval)  # start/end/interval are placeholders

    def __init__(self):
        pass

    @property
    def time(self):
        # read the class-level value so instances share it
        return type(self).__time

    @time.setter
    def time(self, time):
        type(self).__time = time
This in theory would allow me to simply extend by doing:
class X(InheritableBorg):
    pass
and to override the default time I would just do:
class NewInheritableBorg(InheritableBorg):
    __time = (0, 0, 'd')
Then, in theory, I could leverage multiple inheritance and get access to multiple borgs in one go, e.g.:
class X(InheritableBorg1, InheritableBorg2):
    pass
I could even selectively override stuff as well.
Option 3
Use a nested function as a class decorator/wrapper, if possible. However, I could only use this once and would need to pass the function handle around. This is based on a mix of the proxy/delegation idea.
This is not concrete in my head but in essence something like:
def timer(time):
    def class_wrapper(class_to_wrap):
        class Wrapper(class_to_wrap):
            def __init__(self, *a, **kw):
                super().__init__(*a, **kw)

            def get_time(self):
                return time
        return Wrapper
    return class_wrapper

@timer(time)
class A_that_needs_time_information:
    pass
I think that might work... BUT I still need to pass the function handle.
Summary
All of these are possible solutions, and I'm leaning towards the multiple inheritance Borg pattern (though the class wrapper is cool).
The regular Borg pattern has to be instantiated so many times that it seems like too much overhead just to store one set of values.
The Borg mixin is instantiated as many times as the class is instantiated. I don't see how it would be any harder to test.
The wrapper is ultra generic and would be relatively easy to test. Theoretically, since it's a function closure, I should be able to mutate the stored state, but then it essentially becomes a singleton (which seems overly complicated, in which case I might as well just use the regular singleton pattern). Also, passing the function handle around rather defeats the purpose.
Barring these three ways, are there any other ways of doing this? Any better ways (the Borg doesn't seem easily testable, since there's no dependency injection)? Are there any drawbacks that I seem to have missed?
Alternatively I could just stick with what I have now: passing the time around as required. It satisfies the loose-coupling requirement and DI best practices, but it's just so cumbersome. There has to be a better way!
As a general rule of thumb, try to keep things simple. You're considering very complex solutions to a relatively simple problem.
With the information you've given, it seems you could just wrap the request for execution of an operation in an object that also includes the context. Then you'd just be passing around one object between the different classes. For example:
class OperationRequestContext:
    def __init__(self, start_time, end_time):
        self.start_time = start_time
        self.end_time = end_time

class OperationRequest:
    def __init__(self, operation, context):
        self.operation = operation
        self.context = context
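For instance (the operation and times are placeholders), the callers then only hand around a single request object:
context = OperationRequestContext(start_time='2020-01-01', end_time='2020-02-01')
request = OperationRequest(operation='aggregate', context=context)
# A forwards `request` to B and E; they read request.context as needed.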
If there are additional requirements that justify considering more complex solutions, you should specify them.
Why doesn't object.__init__ take *args, **kwargs as arguments? This breaks some simple code in a highly annoying manner without any upsides as far as I can see:
Say we want to make sure that all __init__s of all parent classes are called. As long as every __init__ follows the simple convention of calling super().__init__, this guarantees that the whole hierarchy is run through, and exactly once (without ever having to name the parent explicitly). The problem appears when we pass arguments along:
class Foo:
    def __init__(self, *args, **kwargs):
        print("foo-init")
        super().__init__(*args, **kwargs)  # error if there are arguments!

class Bar:
    def __init__(self, *args, **kwargs):
        print("bar-init")
        super().__init__(*args, **kwargs)

class Baz(Bar, Foo):
    def __init__(self, *args, **kwargs):
        print("baz-init")
        super().__init__(*args, **kwargs)

b1 = Baz()          # works
b2 = Baz("error")   # raises TypeError
What's the reasoning for this, and what's the best general workaround? (It's easily solvable in my specific case, but that relies on additional knowledge of the hierarchy.) The best I can see is to check whether the parent is object and in that case not give it any args, which is horribly ugly.
You can see http://bugs.python.org/issue1683368 for a discussion. Note that someone there actually asked for it to cause an error. Also see the discussion on python-dev.
Anyway, your design is rather odd. Why are you writing every single class to take unspecified *args and **kwargs? In general it's better to have methods accept the arguments they need. Accepting open-ended arguments for everything can lead to all sorts of bugs if someone mistypes a keyword name, for instance. Sometimes it's necessary, but it shouldn't be the default way of doing things.
Raymond Hettinger's super() considered super has some information about how to deal with this. It's in the section "Practical advice".
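A rough sketch of the pattern described there: each cooperating __init__ strips the keyword arguments it knows about and forwards the rest, so that object ultimately receives an empty set (class names here are illustrative):
class Root:
    def __init__(self, **kwargs):
        # By the time the chain reaches the top, every cooperating class
        # should have consumed its own keyword arguments.
        super().__init__(**kwargs)

class Foo(Root):
    def __init__(self, foo=None, **kwargs):
        self.foo = foo
        super().__init__(**kwargs)

class Bar(Root):
    def __init__(self, bar=None, **kwargs):
        self.bar = bar
        super().__init__(**kwargs)

class Baz(Bar, Foo):
    pass

b = Baz(foo=1, bar=2)  # each class picks off its own keyword argument
print(b.foo, b.bar)    # 1 2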
It's often stated that super should be avoided in Python 2. I've found in my use of super in Python 2 that it never acts the way I expect unless I provide all arguments, as in this example:
super(ThisClass, self).some_func(*args, **kwargs)
It seems to me this defeats the purpose of using super(); it's neither more concise nor much better than TheBaseClass.some_func(self, *args, **kwargs). For most purposes, method resolution order is a distant fairy tale.
Other than the fact that 2.7 is the last major release of Python 2, why does super remain broken in Python 2?
How and why has Python 3's super changed? Are there any caveats?
When and why should I use super going forward?
super() is not broken -- it just should not be considered the standard way of calling a method of the base class. This did not change with Python 3.x. The only thing that changed is that you don't need to pass the arguments cls and self explicitly in the standard case, where self is the first parameter of the current function and cls is the class currently being defined.
Regarding your question when to actually use super(), my answer would be: hardly ever. I personally try to avoid the kind of multiple inheritance that would make super() useful.
Edit: An example from real life that I once ran into: I had some classes defining a run() method, some of which had base classes. I used super() to call the inherited constructors -- I did not think it mattered because I was using single inheritance only:
class A(object):
    def __init__(self, i):
        self.i = i

    def run(self, value):
        return self.i * value

class B(A):
    def __init__(self, i, j):
        super(B, self).__init__(i)
        self.j = j

    def run(self, value):
        return super(B, self).run(value) + self.j
Just imagine there were several of these classes, all with individual constructor prototypes, and all with the same interface to run().
Now I wanted to add some additional functionality to all of these classes, say logging. The additional functionality required an additional method to be defined on all these classes, say info(). I did not want to invade the original classes, but rather define a second set of classes inheriting from the original ones, adding the info() method and inheriting from a mix-in providing the actual logging. Now, I could not use super() in the constructor any more, so I used direct calls:
class Logger(object):
    def __init__(self, name):
        self.name = name

    def run_logged(self, value):
        print "Running", self.name, "with info", self.info()
        return self.run(value)

class BLogged(B, Logger):
    def __init__(self, i, j):
        B.__init__(self, i, j)
        Logger.__init__(self, "B")

    def info(self):
        return 42
Here things stop working. The super() call in the base class constructor suddenly calls Logger.__init__(), and BLogged can't do anything about it. There is actually no way to make this work, except for removing the super() call in B itself.
[Another Edit: I don't seem to have made my point, judging from all the comments here and below the other answers. Here is how to make this code work using super():
class A(object):
    def __init__(self, i, **kwargs):
        super(A, self).__init__(**kwargs)
        self.i = i

    def run(self, value):
        return self.i * value

class B(A):
    def __init__(self, j, **kwargs):
        super(B, self).__init__(**kwargs)
        self.j = j

    def run(self, value):
        return super(B, self).run(value) + self.j

class Logger(object):
    def __init__(self, name, **kwargs):
        super(Logger, self).__init__(**kwargs)
        self.name = name

    def run_logged(self, value):
        print "Running", self.name, "with info", self.info()
        return self.run(value)

class BLogged(B, Logger):
    def __init__(self, **kwargs):
        super(BLogged, self).__init__(name="B", **kwargs)

    def info(self):
        return 42

b = BLogged(i=3, j=4)
Compare this with the use of explicit superclass calls. You decide which version you prefer.]
This and similar stories are why I think that super() should not be considered the standard way of calling methods of the base class. It does not mean super() is broken.
super() is not broken, in Python 2 or Python 3.
Let's consider the arguments from the blog post:
It doesn't do what it sounds like it does.
OK, you may agree or disagree on that, it's pretty subjective. What should it have been called then? super() is a replacement for calling the superclass directly, so the name seems fine to me. It does NOT call the superclass directly, because if that was all it did, it would be pointless, as you could do that anyway. OK, admittedly, that may not be obvious, but the cases where you need super() are generally not obvious. If you need it, you are doing some pretty hairy multiple inheritance. It's not going to be obvious. (Or you are doing a simple mixin, in which case it will be pretty obvious and behave as you expect even if you didn't read the docs).
If you can call the superclass directly, that's probably what you'll end up doing. That's the easy and intuitive way of doing it. super() only comes into play when that doesn't work.
It doesn't mesh well with calling the superclass directly.
Yes, because it's designed to solve a problem with doing that. You can call the superclass directly if, and only if, you know exactly what class that is. Which you don't for mixins, for example, or when your class hierarchy is so messed up that you actually are merging two branches (which is the typical example in all examples of using super()).
So as long as every class in your class hierarchy has a well-defined place, calling the superclass directly works. If it doesn't, then that approach does not work, and in that case you must use super() instead. That's the point of super(): it figures out what the "next class" is according to the MRO, without you having to specify it explicitly, because you can't always do that, since you don't always know what it is, for example when using mixins.
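A tiny example of that (Python 3 syntax for brevity): with super(), the mixin doesn't need to know which class comes next, whereas a hard-coded Base.greet(self) inside the mixin would silently skip Concrete:
class Base:
    def greet(self):
        print("Base")

class LoggingMixin(Base):
    def greet(self):
        print("LoggingMixin")
        super().greet()   # "next class in the MRO", not literally Base

class Concrete(Base):
    def greet(self):
        print("Concrete")
        super().greet()

class Combined(LoggingMixin, Concrete):
    pass

Combined().greet()   # prints LoggingMixin, Concrete, Base in that order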
The completely different programming language Dylan, a sort of lisp-thingy, solves this in another way that can't be used in Python because it's very different.
Eh. OK?
super() doesn't call your superclass.
Yeah, you said that.
Don't mix super() and direct calling.
Yeah, you said that too.
So, there are two arguments against it: 1. The name is bad. 2. You have to use it consistently.
That does not translate to it being "broken" or that it should be "avoided".
You seem to imply in your post that
def some_func(self, *args, **kwargs):
    self.__class__.some_func(self, *args, **kwargs)
is not an infinite recursion. It is, and super would be more correct.
Also, yes, you are required to pass all arguments to super(). This is a bit like complaining that max() doesn't work like expected unless you pass it all the numbers you want to check.
In 3.x, however, fewer arguments are needed: you can do super().foo(*args, **kwargs) instead of super(ThisClass, self).foo(*args, **kwargs).
Anyway, I'm unsure as to any situations when super should be avoided. Its behavior is only "weird" when MI is involved, and when MI is involved, super() is basically your only hope for a correct solution. In Single-Inheritance it's just slightly wordier than SuperClass.foo(self, *args, **kwargs), and does nothing different.
I think I agree with Sven that this sort of MI is worth avoiding, but I don't agree that super is worth avoiding. If your class is supposed to be inherited, super offers users of your class hope of getting MI to work, if they're weird in that way, so it makes your class more usable.
Did you read the article that you linked to? It doesn't conclude that super should be avoided, but rather that you should be wary of its caveats when using it. These caveats are summarized by the article, though I would disagree with their suggestions.
The main point of the article is that multiple inheritance can get messy, and super doesn't help as much as the author would want. However doing multiple inheritance without super is often even more complicated.
If you're not doing multiple inheritance, super gives you the advantage that anyone inheriting from your class can add simple mixins and their __init__ would be properly called. Just remember to always call the __init__ of the superclass, even when you're inheriting from object, and to pass all the remaining arguments (*a and **kw) to it. When you're calling other methods from the parent class also use super, but this time use their proper signature that you already know (i.e. ensure that they have the same signature in all classes).
If you're doing multiple inheritance you'd have to dig deeper than that, and probably re-read the same article more carefully to be aware of the caveats. And it's also only with multiple inheritance that you might run into a situation where an explicit call to the parent might be better than super, but without a specific scenario nobody can tell you whether super should be used or not.
The only change in super in Python 3.x is that you don't need to explicitly pass the current class and self to it. This makes super more attractive, because using it would mean no hardcoding of either the parent class or the current class.
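Concretely, assuming some Parent class, the difference is only in spelling; the first form works in both Python 2 and 3, the second in Python 3 only:
class Child(Parent):
    def method(self):
        # Python 2 style: class and instance passed explicitly.
        return super(Child, self).method()

class Child3(Parent):
    def method(self):
        # Python 3 zero-argument form: the compiler fills both in.
        return super().method()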
@Sven Marnach:
The problem with your example is that you mix explicit superclass calls (B.__init__ and Logger.__init__ in BLogged) with super() in B. That won't work. Either you use explicit superclass calls everywhere, or you use super() on all classes involved, including A, I think. Also, in your example I think you could use explicit superclass calls in all classes, i.e. use A.__init__ in class B.
When there is no diamond inheritance I think super() doesn't have much advantage. The problem is, however, that you don't know in advance if you will get into any diamond inheritance in the future so in that case it would be wise to use super() anyway (but then use it consistently). Otherwise you would end up having to change all classes at a later time or run into problems.
I am subclassing an object in order to override a method that I want to add some functionality to. I don't want to completely replace it or add a differently named method, but remain compatible with the superclass's method by just adding an optional argument to it.
Is it possible to work with *args and **kwargs to pass through all arguments to the superclass and still add an optional argument with a default?
I intuitively came up with the following but it doesn't work:
class A(object):
    def foo(self, arg1, arg2, argopt1="bar"):
        print arg1, arg2, argopt1

class B(A):
    def foo(self, *args, argopt2="foo", **kwargs):
        print argopt2
        A.foo(self, *args, **kwargs)

b = B()
b.foo("a", "b", argopt2="foo")
Of course I can get it to work when I explicitly add all the arguments of the method of the superclass:
class B(A):
    def foo(self, arg1, arg2, argopt1="foo", argopt2="bar"):
        print argopt2
        A.foo(self, arg1, arg2, argopt1=argopt1)
What's the right way to do this? Do I have to know and explicitly state all of the overridden method's arguments?
class A(object):
    def foo(self, arg1, arg2, argopt1="bar"):
        print arg1, arg2, argopt1

class B(A):
    def foo(self, *args, **kwargs):
        # pull the extra arg out of kwargs (with a default) so the
        # base class doesn't complain about an unexpected keyword
        argopt2 = kwargs.pop('argopt2', default_for_argopt2)
        print argopt2
        A.foo(self, *args, **kwargs)

b = B()
b.foo("a", "b", argopt2="foo")
What's the right way to do this, do I have to know and explicitly state all of the overridden method's arguments?
If you want to cover all cases (rather than just rely on the caller to always do things your way, e.g., always call you only with the extra argument passed by-name, never by position) you do have to code (or dynamically discover) a lot of knowledge about the signature of the method you're overriding -- hardly surprising: inheritance is a strong form of coupling, and overriding methods is one way that coupling presents itself.
You could dynamically discover the superclass's method arguments via inspect.getargspec, in order to make sure you call it properly... but this introspection technique can get tricky if two classes are trying to do exactly the same thing (once you know your superclass's method accepts *a and/or **kw you can do little more than pass all the relevant arguments upwards and hope, with fingers crossed, that the upstream method chain eventually does proper housecleaning before calling a version that's not quite so tolerant).
Such prices may be worth paying when you're designing a wrapper that's meant to be applied dynamically to callables with a wide variety of signatures (especially since in a decorator setting you can arrange to pay the hefty cost of introspection just once per function you're decorating, not every time the resulting wrapper is called). It seems unlikely to be a worthwhile technique in a case such as yours, where you'd better know what you're subclassing (subclassing is strong coupling: doing it blindly is definitely not advisable!), and so you might as well spell out the arguments explicitly.
Yes, if the superclass's code changes drastically (e.g., by altering method signatures), you'll have to revise the subclass as well -- that's (part of) the price of inheritance. The overall price's hefty enough that the new Go programming language does totally without it -- forcing you to apply the Gang of 4's excellent advice to prefer composition over inheritance. In Python complete abstinence from inheritance would just be impractical, but using it soberly and in moderation (and accepting the price you'll pay in terms of coupling when you do) remains advisable.
When subclassing and overriding methods, one must always decide if using super() is a good idea, and this page is good for that.
I'm not saying that super() should be avoided, as the article's author may be; I'm saying that super() has some very important prerequisites that must be followed if you don't want super() to come back and bite you.