I’ve seen code that declares abstract methods that actually have a non-trivial body.
What is the point of this, since you have to implement it in any concrete class anyway?
Is it just to allow you to do something like this?
def method_a(self):
    super().method_a()
I've used this before in cases where it was possible to have the concrete implementation, but I wanted to force subclass implementers to consider if that implementation is appropriate for them.
One specific example: I was implementing an abstract base class with an abstract factory method so subclasses can define their own __init__ function but have a common interface to create them. It was something like
from abc import ABC, abstractmethod

class Foo(ABC):
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

    @classmethod
    @abstractmethod
    def from_args(cls, a, b, c) -> "Foo":
        return cls(a, b, c)
Subclasses usually only need a subset of the arguments. When testing, it's cumbersome to access the framework which actually uses this factory method, so it's likely someone will forget to implement the from_args factory function since it wouldn't come up in their usual testing. Making it an abstractmethod would make it impossible to initialize the class without first implementing it, and will definitely come up during normal testing.
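For illustration, a hypothetical subclass (not part of the original code) might look like this - it only needs a, so the abstract factory forces it to explicitly decide that the default body is not appropriate and override it:

class Bar(Foo):
    def __init__(self, a):
        # Bar only needs `a`; b and c are fixed for this subclass
        super().__init__(a, b=None, c=None)

    @classmethod
    def from_args(cls, a, b, c) -> "Bar":
        # forced by the abstractmethod: Bar ignores b and c instead of
        # reusing the default `cls(a, b, c)` body
        return cls(a)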
Consider the following example:
class A:
    """
    Simple
    :param a: parameter a explained
    """
    def __init__(self, a):
        self._a = a


class B(A):
    def __init__(self, a, b):
        """
        More complicated
        :param a: parameter a explained. <- repetition here
        :param b: parameter b explained.
        """
        super().__init__(a)  # this call will have to change if `A()` does
        self._b = b


class C(A):
    def __init__(self, *args, c, **kwargs):
        """
        More complicated without repetition of documentation
        :param c: parameter c explained.
        """
        super().__init__(*args, **kwargs)
        self._c = c
What I often find myself doing when sub-classing is what happens for B: repeating the documentation of A (its superclass).
The reason I do this is that if you don't, as in C, IDEs and documentation tools can't resolve the full signature of the constructor and will only provide hints for the new functionality, hiding the documentation of the superclass that still applies, even though the code works as expected.
To avoid having to repeat like in B, what is the best practice in Python to create a sub-class for some other class, adding a parameter to the constructor, without having to repeat the entire documentation of the superclass in the subclass?
In other words: how can I write class B so that parameter hints in IDEs allow users of the class to see that it takes a and b as a parameter, without specifying all the other parameters besides the new b again?
In this case, it doesn't matter all that much of course, although even here, if the signature of A's constructor changes later, the documentation for B will have to be updated as well. But for more complex classes with more parameters, the problem becomes more serious.
Additionally, if the signature of A changes, the call to super().__init__() in B's constructor will have to change as well, while C keeps working as long as there are no conflicts (i.e. A's new constructor parameter is not also called c).
In C++, given a class hierarchy, the most derived class's ctor calls its base class ctor, which initializes the base part of the object before the derived part is constructed. In Python, I want to understand what's going on in a case where I have the requirement that Derived subclasses a given class Base, which takes a callable in its __init__ method and later invokes it. The callable features some parameters which I pass in Derived class's __init__, which is also where I define the callable function. My idea then was to pass the Derived instance itself to its Base class after having defined the __call__ operator:
class Derived(Base):
    def __init__(self, a, b):
        def _process(c, d):
            ...  # do something with a and b
        self.__class__.__call__ = _process
        super(Derived, self).__init__(self)
Is this a pythonic way of dealing with this problem?
What is the exact order of initialization here? Does one needs to call super as a first instruction in the __init__ method or is it ok to do it the way I did?
I am confused whether it is considered good practice to use super with or without arguments in python > 3.6
What is the exact order of initialization here?
Well, very obviously the one you can see in your code - Base.__init__() is only called when you explicitly ask for it (with the super() call). If Base also has parents and everyone in the chain uses super() calls, the parents' initializers will be invoked according to the MRO.
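Here is a minimal sketch (not from the original post) of how cooperative super() calls follow the MRO:

class Root:
    def __init__(self):
        print("Root.__init__")

class Left(Root):
    def __init__(self):
        print("Left.__init__")
        super().__init__()

class Right(Root):
    def __init__(self):
        print("Right.__init__")
        super().__init__()

class Child(Left, Right):
    def __init__(self):
        print("Child.__init__")
        super().__init__()

Child()  # prints Child, Left, Right, Root - exactly the MRO order
print([c.__name__ for c in Child.__mro__])
# ['Child', 'Left', 'Right', 'Root', 'object']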
Basically, Python is a "runtime language" - except for the bytecode compilation phase, everything happens at runtime - so there's very little "black magic" going on (and much of it is actually documented and fully exposed for those who want to look under the hood or do some metaprogramming).
Does one need to call super as the first instruction in the __init__ method, or is it ok to do it the way I did?
You call the parent's method where you see fit for the concrete use case - you just have to be careful not to use instance attributes (directly or - less obvious to spot - indirectly via a method call that depends on those attributes) before they are defined.
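A hypothetical illustration of that pitfall (names are mine, not from the post) - the parent's __init__ calls a method that the subclass overrides, and the override uses an attribute that is only set after the super() call:

class Parent:
    def __init__(self):
        self.setup()

    def setup(self):
        pass

class Sub(Parent):
    def __init__(self):
        super().__init__()   # calls self.setup() ...
        self.value = 42      # ... but self.value is only set afterwards

    def setup(self):
        print(self.value)    # AttributeError when reached via super().__init__()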
I am confused whether it is considered good practice to use super with or without arguments in Python > 3.6
If you don't need backward compatibility, use super() without params - unless you want to explicitly skip some class in the MRO, but then chances are there's something debatable with your design (but well - sometimes we can't afford to rewrite a whole code base just to avoid one very special corner case, so that's ok too as long as you understand what you're doing and why).
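For completeness, a contrived illustration of that "explicitly skip some class in the MRO" case, reusing the Left/Right/Root classes from the sketch above:

class Child2(Left, Right):
    def __init__(self):
        # start the MRO lookup *after* Left, so Left.__init__ is skipped
        super(Left, self).__init__()

Child2()  # prints Right.__init__ then Root.__init__ - Left was bypassed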
Now for your core question:
class Derived(Base):
    def __init__(self, a, b):
        def _process(c, d):
            ...  # do something with a and b
        self.__class__.__call__ = _process
        super(Derived, self).__init__(self)
self.__class__.__call__ is a class attribute and is shared by all instances of the class. This means that you either have to make sure you are only ever using one single instance of the class (which doesn't seem to be the goal here) or be ready to have totally random results, since each new instance will overwrite self.__class__.__call__ with its own version.
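A quick (simplified, assumed) demonstration of why that is a problem:

class Talker:
    def __init__(self, greeting):
        def _process():
            return greeting
        # class attribute: shared by *all* instances, last one created wins
        self.__class__.__call__ = lambda self: _process()

a = Talker("hello")
b = Talker("goodbye")
print(a())  # "goodbye" - a's version was overwritten when b was created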
If what you want is for each instance's __call__ method to call its own version of _process(), then there's a much simpler solution - just make _process an instance attribute and call it from __call__:
class Derived(Base):
    def __init__(self, a, b):
        def _process(c, d):
            ...  # do something with a and b
        self._process = _process
        super(Derived, self).__init__(self)

    def __call__(self, c, d):
        return self._process(c, d)
Or even simpler:
class Derived(Base):
    def __init__(self, a, b):
        super(Derived, self).__init__(self)
        self._a = a
        self._b = b

    def __call__(self, c, d):
        do_something_with(self._a, self._b)
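For context, this assumes Base looks roughly like the following - the original snippet doesn't show it, so this is just a guess at the "takes a callable and invokes it later" contract:

class Base:
    def __init__(self, func):
        self._func = func

    def run(self, c, d):
        # Base invokes the stored callable later; passing `self` from
        # Derived works because Derived defines __call__
        return self._func(c, d)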
EDIT:
Base requires a callable in its __init__ method.
This would be better if your example snippet was closer to your real use case.
But when I call super().__init__(), the __call__ method of Derived should not have been instantiated yet - or has it?
Now that's a good question... Actually, Python methods are not what you think they are. What you define in a class statement's body using the def statement are still plain functions, as you can see for yourself:
>>> class Foo:
...     def bar(self): pass
...
>>> Foo.bar
<function Foo.bar at 0x...>
"Methods" are only instanciated when an attribute lookup resolves to a class attribute that happens to be a function:
>>> Foo().bar
<bound method Foo.bar of <__main__.Foo object at 0x7f3cef4de908>>
>>> Foo().bar
<bound method Foo.bar of <__main__.Foo object at 0x7f3cef4de940>>
(if you wonder how this happens, it's documented here)
and they actually are just thin wrappers around a function, instance and class (or function and class for classmethods), which delegate the call to the underlying function, injecting the instance (or class) as first argument. In CS terms, a Python method is the partial application of a function to an instance (or class).
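You can check that "thin wrapper" claim directly, reusing the Foo class from above:

f = Foo()
bound = f.bar
print(bound.__func__ is Foo.bar)  # True - same plain function underneath
print(bound.__self__ is f)        # True - the instance it is bound to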
Now as I mentioned above, Python is a runtime language, and both def and class are executable statements. So by the time you define your Derived class, the class statement creating the Base class object has already been executed (else Base wouldn't exist at all), with the whole class statement block being executed first (to define the functions and other class attributes).
So "when you call super().__init()__", the __call__ function of Base HAS been instanciated (assuming it's defined in the class statement for Base of course, but that's by far the most common case).
Assuming that I have two classes A and B. In class A, instances of B will be created. Both of their instance methods will use a common variable, which is initialized in class A. I have to pass the common_var through the __init__ function. Think of it: if I have classes A, B, C, D... and common_var1, var2, var3... where all the vars have to be passed from class to class, that's terrible:
class A:
    def __init__(self, variable_part):
        self.common_var = "Fixed part" + variable_part
        self.bList = []

    def add_B(self):
        self.bList += [B(self.common_var)]

    def use_common_var(self):
        do_something(self.common_var)


class B:
    def __init__(self, common_var):
        self.common_var = common_var

    def use_common_var(self):
        do_something(self.common_var)
There's an ugly approach using a global here:
class A:
    def __init__(self, variable_part):
        global common_var
        common_var = "Fixed part" + variable_part

    def use_common_var(self):
        do_something(common_var)


class B:
    def use_common_var(self):
        do_something(common_var)
But I don't think it's a good idea, any better ideas?
Update:
The original question is here:
The common_vars are a series of string prefixes, things like "https://{}:{}/rest/v1.5/".format(host, port), "mytest", etc.
and in class A, I use
"https://127.0.0.1:8080/rest/v1.5/interface01" and "mytest_a"
in class B, I use
"https://127.0.0.1:8080/rest/v1.5/interface02" and "mytest_b"
in class C, I may use
"https://127.0.0.1:8080/rest/v1.5/get?name=123" and "mytest_c"
Things like that. I use common variables just to reuse the 'https://{}:{}/rest/v1.5' and "mytest" parts, and none of these A, B, C classes is in an "is-a" relationship. But the core of the problem is that the common_var is not common from the very start - it is initialized in one of these classes.
Final Update
I compromised. I added a Helper class to reuse the common values:
class Helper:
    @staticmethod
    def setup(url, prefix):
        Helper.url = url
        Helper.prefix = prefix


# A always initiates first
class A:
    def __init__(self, host, port):
        Helper.setup(
            "https://{0}:{1}/rest/v1.5".format(host, port),
            "test"
        )

    def use_common_var(self):
        do_something(Helper.url, Helper.prefix)


class B:
    def use_common_var(self):
        do_something(Helper.url, Helper.prefix)


class C:
    def use_common_var(self):
        do_something(Helper.url, Helper.prefix)
Is this a better way?
If you have four classes that share the same set of four attributes, then you might (or might not) have a use for inheritance - but this really depends on how and for what those attributes are used, and what the real relationship between those classes is. Inheritance is an "is a" relationship (if B inherits from A then B "is a" A too).
Another solution - if you don't really have an "is a" relationship - is to create a class that groups those four attributes and pass an instance of this class where it's needed. Here again, it only makes sense if there is a real semantic relationship between those attributes.
Oh and yes, using globals is usually not the solution (unless those variables are actually pseudo-constants - set once at startup and never changed by anyone).
To make a long story short: there's no one-size-fits-all answer to your question, the appropriate solution depends on your concrete use case.
EDIT:
given your added explanation, it looks like the "cargo class" solution (grouping all your "shared" attributes in the same object) might be what you want. Just beware that it will couple all your classes to this class, which may or may not be a problem (with regard to testing and maintainability). If the forces that drive the evolution of your A / B / C / D classes are mostly the same then you should be good...
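A rough sketch of that idea, with hypothetical names (ApiConfig and base_url are mine, not from the question), reusing the question's do_something placeholder:

class ApiConfig:
    def __init__(self, host, port, prefix):
        self.base_url = "https://{}:{}/rest/v1.5".format(host, port)
        self.prefix = prefix

class A:
    def __init__(self, config):
        self.config = config

    def use_common_var(self):
        do_something(self.config.base_url + "/interface01",
                     self.config.prefix + "_a")

class B:
    def __init__(self, config):
        self.config = config

    def use_common_var(self):
        do_something(self.config.base_url + "/interface02",
                     self.config.prefix + "_b")

config = ApiConfig("127.0.0.1", 8080, "mytest")
a, b = A(config), B(config)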
I want to define a mix-in of a namedtuple and a base class which defines an abstract method:
import abc
import collections


class A(object):
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def do(self):
        print("U Can't Touch This")


B = collections.namedtuple('B', 'x, y')


class C(B, A):
    pass


c = C(x=3, y=4)
print(c)
c.do()
From what I understand reading the docs and other examples I have seen, c.do() should raise an error, as class C does not implement do(). However, when I run it... it works:
B(x=3, y=4)
U Can't Touch This
I must be overlooking something.
When you take a look at the method resolution order of C you see that B comes before A in that list. That means when you instantiate C the __new__ method of B will be called first.
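You can verify that order directly:

print([cls.__name__ for cls in C.__mro__])
# ['C', 'B', 'tuple', 'A', 'object']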
This is the implementation of namedtuple.__new__
def __new__(_cls, {arg_list}):
    'Create new instance of {typename}({arg_list})'
    return _tuple.__new__(_cls, ({arg_list}))
You can see that it does not support cooperative inheritance: it breaks the chain and simply calls tuple's __new__ method directly. That way the check for abstract methods (which lives in object.__new__) is never executed, so it can't complain, and the instantiation does not fail.
I thought inverting the MRO would solve that problem, but it strangely did not. I'm gonna investigate a bit more and update this answer.
If a subclass wants to modify the behaviour of inherited methods through static fields, is it thread safe?
More specifically:
class A(object):
    _m = 0

    def do(self):
        print(self._m)


class B(A):
    _m = 1

    def test(self):
        self.do()


class C(A):
    _m = 2

    def test(self):
        self.do()
Is there a risk that an instance of class B calling do() would behave as class C is supposed to, or vice-versa, in a multithreading environment? I would say yes, but I was wondering if somebody went through actually testing this pattern already.
Note: This is not a question about the pattern itself, which I think should be avoided, but about its consequences, as I found it in reviewing real life code.
First, remember that classes are objects, and static fields (and for that matter, methods) are attributes of said class objects.
So what happens is that self.do() looks up the do method on self and calls do(self). self is whatever object the method is being called on, and that object references one of the classes A, B, or C as its class. So the lookup will find the value of _m in the correct class.
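A quick sanity check (assuming do() prints self._m, as in the snippet above):

B().do()  # prints 1 - _m is found on B
C().do()  # prints 2 - _m is found on C
A().do()  # prints 0 - falls back to A's own _m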
Of course, that requires a correction to your code:
class A(object):
    _m = 0

    def do(self):
        if self._m == 0:
            ...  # behaviour when _m is 0
        elif self._m == 1:
            ...  # behaviour when _m is 1
        # etc.
A bare _m won't work because Python only looks for _m in two places: the local (function) scope or the global scope. It won't look in class scope the way C++ does. So you have to prefix it with self. so the right one gets used. If you wanted to force it to use the _m defined in class A, you would use A._m instead.
P.S. There are times you need this pattern, particularly with metaclasses, which are kinda-sorta Python's analog to C++'s template metaprogramming and functional algorithms.