Consider the following example:
class A:
    """
    Simple
    :param a: parameter a explained
    """
    def __init__(self, a):
        self._a = a
class B(A):
    def __init__(self, a, b):
        """
        More complicated
        :param a: parameter a explained. <- repetition here
        :param b: parameter b explained.
        """
        super().__init__(a)  # this call will have to change if `A()` does
        self._b = b
class C(A):
    def __init__(self, *args, c, **kwargs):
        """
        More complicated without repetition of documentation
        :param c: parameter c explained.
        """
        super().__init__(*args, **kwargs)
        self._c = c
What I often find myself doing when subclassing is what happens in B: repeating the documentation of A (its superclass).
The reason I do this is that if you don't, as in C, IDEs and documentation tools can't resolve the full signature of the constructor and will only provide hints for the new functionality, hiding the documentation of the superclass that still applies, even though the code works as expected.
To avoid the repetition in B, what is the best practice in Python for creating a subclass of some other class, adding a parameter to the constructor, without having to repeat the entire documentation of the superclass in the subclass?
In other words: how can I write class B so that parameter hints in IDEs allow users of the class to see that it takes a and b as parameters, without specifying all the other parameters besides the new b again?
In this case, it doesn't matter all that much of course, although even here, if the signature of A's constructor changes later, the documentation for B will have to be updated as well. But for more complex classes with more parameters, the problem becomes more serious.
Additionally, if the signature of A changes, the call to super().__init__() in B's constructor will have to change as well, while C keeps working as long as there are no conflicts (i.e. A's new constructor parameter is not also called c).
Related
Is there any way to connect two classes (without merging them into one) and thus avoid the repetition under the if a: statement in class Z?
class A:
    def __init__(self, a):
        self.a = a
        self.b = self.a + self.a

class Z:
    def __init__(self, z, a=None):
        self.z = z
        if a:  # this part seems like repetition
            self.a = a.a
            self.b = a.b

a = A('hello')
z = Z('world', a)
assert z.a == a.a  # hello
assert z.b == a.b  # hellohello
I'm wondering if Python has some tools for this. I would prefer to avoid looping over instance variables and using setattr.
Something like inheriting class Z from class A, as in Z(A), or such.
Here's a trivial example of class inheritance that may help you to understand:
class A:
    def __init__(self, a):
        self._a = a
        self._b = self._a + self._a

class Z(A):
    def __init__(self, z, a):
        super().__init__(a)
        self._z = z
clazz = Z('Hello', 'world')
print(clazz._z, clazz._a)
Conceptually, the standard techniques for associating an A instance with a Z instance are:
Using composition (and delegation)
"Composition" simply means that the A instance itself is an attribute of the Z instance. We call this a "has-a" relationship: every Z has an A that's associated with it.
In normal cases, we can simply pass the A instance to the Z constructor, and have it assign an attribute in __init__. Thus:
class A:
    def __init__(self, a):
        self.a = a
        self.b = self.a + self.a

    def action(self):  # added for demonstration purposes.
        pass

class Z:
    def __init__(self, z, a=None):
        self.z = z
        self._a = a  # if not None, this will be an `A` instance
Notice that the attribute holding the A instance is specially named to avoid conflicting with the A class's attribute names. This is to avoid ambiguity (calling it .a makes one wonder whether my_z.a should get the .a attribute from the A instance, or the entire instance), and to mark it as an implementation detail (normally, outside code won't have a good reason to get the entire A instance out of the Z; the entire point of delegation is to make it so that users of Z don't have to worry about A's interface).
One important limitation is that the composition relationship is one-way by nature: self._a = a gives the Z class access to A contents, but not the other way around. (Of course, it's also possible to build the relationship in both directions, but this will require some planning ahead.)
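For instance, here is a minimal sketch of a two-way link; the back-reference attribute name owner is purely illustrative:
class A:
    def __init__(self, a):
        self.a = a
        self.owner = None  # back-reference to the Z that composes this A

class Z:
    def __init__(self, z, a=None):
        self.z = z
        self._a = a
        if a is not None:
            a.owner = self  # now the A instance can reach "its" Z, too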
"Delegation" means that we use some scheme in the code, so that looking something up in a Z instance finds it in the composed A instance when necessary. There are multiple ways to achieve this in Python, at least two of which are worth mentioning:
Explicit delegation per attribute
We define a separate property in the Z class, for each attribute we want to delegate. For example:
# within the `Z` class
@property
def a(self):
    return self._a.a

# The setter can also be omitted to make a read-only attribute;
# alternatively, additional validation logic can be added to the function.
@a.setter
def a(self, value):
    self._a.a = value
For methods, the same property approach would work, but it may be simpler to write a wrapper method that forwards the call:
def action(self):
    return self._a.action()
Delegation via __getattr__
The __getattr__ magic ("dunder") method allows us to provide fallback logic for looking up an attribute in a class, if it isn't found by the normal means. We can use this for the Z class, so that it will try looking within its _a if all else fails. This looks like:
def __getattr__(self, name):
    return getattr(self._a, name)
Here, we use the built-in getattr function to look up the name dynamically within the A instance.
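Putting it together, here is a self-contained sketch of how the fallback behaves (simplified illustrative versions of A and Z):
class A:
    def __init__(self, a):
        self.a = a

class Z:
    def __init__(self, z, a=None):
        self.z = z
        self._a = a

    def __getattr__(self, name):
        # Only consulted when normal lookup on the Z instance fails.
        return getattr(self._a, name)

my_z = Z('hello', A('world'))
print(my_z.z)  # 'hello' - found directly on the Z instance
print(my_z.a)  # 'world' - missing on Z, so delegated to the composed A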
Using inheritance
This means that each Z instance will, conceptually, be a kind of A instance - classes represent types, and inheriting Z from A means that it will be a subtype of A.
We call this an "is-a" relationship: every Z instance is an A instance. More precisely, a Z instance should be usable anywhere that an A instance could be used, but also Z might contain additional data and/or use different implementations.
This approach looks like:
class A:
    def __init__(self, a):
        self.a = a
        self.b = self.a + self.a

    def action(self):  # added for demonstration purposes.
        return f'{self.z.title()}, {self.a}!'

class Z(A):
    def __init__(self, z, a):
        # Use `a` to do the `A`-specific initialization.
        super().__init__(a)
        # Then do `Z`-specific initialization.
        self.z = z
The super function is magic that finds the A.__init__ function, and calls it as a method on the Z instance that's currently being initialized. (That is: self will be the same object for both __init__ calls.)
This is clearly more convenient than the delegation and composition approach. Our Z instance actually has a and b attributes as well as z, and also actually has an action method. Thus, code like my_z.action() will use the method from the A class, and accessing the a and b attributes of a Z instance works - because the Z instance actually directly contains that data.
Note in this example that the code for action now tries to use self.z. This won't work for an A instance constructed directly, but it does work when we construct a Z and call action on it:
>>> A('world').action()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 6, in action
AttributeError: 'A' object has no attribute 'z'
>>> Z('hello', 'world').action()
'Hello, world!'
We say that such an A class, which doesn't properly function on its own, is abstract. (There are more tools we can use to prevent accidentally creating an unusable base A; these are outside the scope of this answer.)
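For instance, here is a minimal sketch of one such tool, the abc module; note it is a generic illustration rather than a drop-in change to the A/Z example above, since it forces subclasses to override the abstract method:
from abc import ABC, abstractmethod

class Base(ABC):
    @abstractmethod
    def action(self):
        ...

class Concrete(Base):
    def action(self):
        return 'implemented'

# Base() raises TypeError: Can't instantiate abstract class Base
print(Concrete().action())  # 'implemented'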
This convenience comes with serious implications for design. It can be hard to reason about deep inheritance structures (where the A also inherits from B, which inherits from C...) and especially about multiple inheritance (Z can inherit from B as well as A). Doing these things requires careful planning and design, and a more detailed understanding of how super works - beyond the scope of this answer.
Inheritance is also less flexible. For example, when the Z instance composes an A instance, it's easy to swap that A instance out later for another one. Inheritance doesn't offer that option.
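For example, with the composition version of Z from earlier, swapping amounts to rebinding one attribute (sketch):
my_z = Z('hello', A('world'))
my_z._a = A('everyone')  # delegation now targets the new A instance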
Using mixins
Essentially, using a mixin means using inheritance (generally, multiple inheritance), even though we conceptually want a "has-a" relationship, because the convenient usage patterns are more important than the time spent designing it all up front. It's a complex but powerful design pattern that essentially lets us build a new class from component parts.
Typically, mixins will be abstract (in the sense described in the previous section). Most examples of mixins also won't contain data attributes, but only methods, because they're generally designed specifically to implement some functionality. (In some programming languages, when using multiple inheritance, only one base class is allowed to contain data. However, this restriction is not necessary and would make no sense in Python, because of how objects are implemented.)
One specific technique common with mixins is that the first base class listed will be an actual "base", while everything else is treated as "just" an abstract mixin. To keep things organized while initializing all the mixins based on the original Z constructor arguments, we use keyword arguments for everything that will be passed to the mixins, and let each mixin use what it needs from the **kwargs.
class Root:
    # We use this to swallow up any arguments that were passed "too far"
    def __init__(self, *args, **kwargs):
        pass

class ZBase(Root):
    def __init__(self, z, **kwargs):
        # A common pattern is to just accept arbitrary keyword arguments
        # that are passed to all the mixins, and let each one sort out
        # what it needs.
        super().__init__(**kwargs)
        self.z = z

class AMixin(Root):
    def __init__(self, **kwargs):
        # This `super()` call is in case more mixins are used.
        super().__init__(**kwargs)
        self.a = kwargs['a']
        self.b = self.a + self.a

    def func(self):  # This time, we'll make it do something
        return f'{self.z.title()}, {self.a}!'

# We combine the base with the mixins by deriving from both.
# Normally there is no reason to add any more logic here.
class Z(ZBase, AMixin): pass
We can use this like:
>>> # we use keyword arguments for all the mixins' arguments
>>> my_z = Z('hello', a='world')
>>> # now the `Z` instance has everything defined in both base and mixin:
>>> my_z.func()
'Hello, world!'
>>> my_z.z
'hello'
>>> my_z.a
'world'
>>> my_z.b
'worldworld'
The code in AMixin can't stand on its own:
>>> AMixin(a='world').func()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 8, in func
AttributeError: 'AMixin' object has no attribute 'z'
but when the Z instance has both ZBase and AMixin as bases, and is used to call func, the z attribute can be found - because now self is a Z instance, which has that attribute.
The super logic here is a bit tricky. The details are beyond the scope of this post, but suffice it to say that with mixin classes that are set up this way, super will forward to the next, sibling base of Z, as long as there is one. It will do this no matter what order the mixins appear in; the Z instance determines the order, and super calls whatever is "next in line". When all the bases have been consulted, next in line is Root, which is just there to intercept the kwargs (since the last mixin doesn't "know" it's last, and passes them on). This is necessary because otherwise, next in line would be object, and object.__init__ raises an exception if there are any arguments.
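One way to see the order super follows is to inspect the MRO of the combined class from above:
>>> [cls.__name__ for cls in Z.__mro__]
['Z', 'ZBase', 'AMixin', 'Root', 'object']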
For more details, see What is a mixin and why is it useful?.
I’ve seen code that declares abstract methods that actually have a non-trivial body.
What is the point of this since you have to implement in any concrete class anyway?
Is it just to allow you to do something like this?
def method_a(self):
    super().method_a()
I've used this before in cases where it was possible to have the concrete implementation, but I wanted to force subclass implementers to consider if that implementation is appropriate for them.
One specific example: I was implementing an abstract base class with an abstract factory method so subclasses can define their own __init__ function but have a common interface to create them. It was something like
from abc import ABC, abstractmethod

class Foo(ABC):
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

    @classmethod
    @abstractmethod
    def from_args(cls, a, b, c) -> "Foo":
        return cls(a, b, c)
Subclasses usually only need a subset of the arguments. When testing, it's cumbersome to access the framework which actually uses this factory method, so it's likely someone will forget to implement the from_args factory function since it wouldn't come up in their usual testing. Making it an abstractmethod would make it impossible to initialize the class without first implementing it, and will definitely come up during normal testing.
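For instance, a hypothetical subclass that only needs a then has to opt in explicitly (a sketch, with illustrative names):
class Bar(Foo):
    def __init__(self, a):
        super().__init__(a, b=None, c=None)

    @classmethod
    def from_args(cls, a, b=None, c=None) -> "Foo":
        # Explicitly decided: only `a` matters for this subclass.
        return cls(a)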
I have the following structure for a class.
class foo(object):
    def __call__(self, param1):
        pass

class bar(object):
    def __call__(self, param1, param2):
        pass
I have many classes of this type, and I am using these callable classes as follows.
classes = [foo(), bar()]
for C in classes:
    res = C(param1)
    # Here I want to put a condition: if the class takes 1 argument,
    # just pass 1 parameter; otherwise pass two.
I have thought of one pattern like this.
class abc():
    def __init__(self):
        self.param1 = 'xyz'
        self.param2 = 'pqr'

    def something(self, classes):  # classes = [foo(), bar()]
        for C in classes:
            if C.__class__.__name__ in ['bar']:
                res = C(self.param1, self.param2)
            else:
                res = C(self.param2)
But in the above solution I have to maintain a list of classes which take two arguments, and as I add more classes to the file this will become messy.
I don't know whether this is the correct (Pythonic) way to do it.
One more idea I have in mind is to check how many arguments the class takes: if it's 2, pass an additional argument, otherwise pass 1 argument. I have looked at this solution: How can I find the number of arguments of a Python function?. But I am not confident enough that this is the best-suited solution to my problem.
A few things about this:
There are only two types of classes in my use case: one with 1 argument and one with 2.
Both classes take the same first argument, so param1 is the same argument I am passing in both cases. In the case of the class with two required parameters, I am passing an additional argument (param2) containing some data.
PS: Any help or new ideas for this problem are appreciated.
UPD: Updated the code.
Basically, you want to use polymorphism on your objects' __call__() method, but you have an issue with your callables' signatures not being the same.
The plain simple answer to this is: you can only use polymorphism on compatible types, which in this case means that your callables MUST have compatible signatures.
Fortunately, there's a quick and easy way to solve this: just modify your method signatures so they accept varargs and kwargs:
class Foo(object):
    def __call__(self, param1, *args, **kw):
        pass

class Bar(object):
    def __call__(self, param1, param2, *args, **kw):
        pass
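With that change, the loop can pass both parameters unconditionally; each callable simply ignores what it doesn't need (a sketch, using the sample values from the question):
classes = [Foo(), Bar()]
for c in classes:
    # Foo swallows the second argument via *args; Bar uses both.
    res = c('xyz', 'pqr')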
For the case where you can't change the callable's signature, there's still a workaround: use a lambda as a proxy:
def func1(y, z):
    pass

def func2(x):
    pass

callables = [func1, lambda y, z: func2(y)]
for c in callables:
    c(42, 1138)
Note that this last example is actually known as the adapter pattern.
Unrelated: this:
if C.__class__.__name__ in ['bar']:
is an inefficient and convoluted way to write:
if C.__class__.__name__ == 'bar':
which is itself an inefficient, convoluted AND brittle way to write:
if type(C) is bar:
which, by itself, is a possible design smell (there are legit use cases for checking the exact type of an object, but most often this is really a design issue).
Consider the following dataclass. I would like to prevent objects from being created using the __init__ method directly.
from __future__ import annotations
from dataclasses import dataclass, field
@dataclass
class C:
    a: int

    @classmethod
    def create_from_f1(cls, a: int) -> C:
        # do something
        return cls(a)

    @classmethod
    def create_from_f2(cls, a: int, b: int) -> C:
        # do something
        return cls(a + b)

    # more constructors follow
c0 = C.create_from_f1(1) # ok
c1 = C() # should raise an exception
c2 = C(1) # should raise an exception
For instance, I would like to force the usage of the additional constructors I define, and raise an exception or a warning if an object is directly created as c = C(..).
What I've tried so far is as follows.
@dataclass
class C:
    a: int = field(init=False)

    @classmethod
    def create_from(cls, a: int) -> C:
        # do something
        c = cls()
        c.a = a
        return c
With init=False in field I prevent a from being a parameter of the generated __init__, so this partially solves the problem, as c = C(1) now raises an exception.
Also, I don't like it as a solution.
Is there a direct way to disable the init method from being called from outside the class?
Since this isn't a standard restriction to impose on instance creation, an extra line or two to help other developers understand what's going on / why this is forbidden is probably worthwhile. Keeping in the spirit of "We are all consenting adults", a hidden parameter to your __init__ may be a nice balance between ease of understanding and ease of implementation:
class Foo:
    @classmethod
    def create_from_f1(cls, a):
        return cls(a, _is_direct=False)

    @classmethod
    def create_from_f2(cls, a, b):
        return cls(a + b, _is_direct=False)

    def __init__(self, a, _is_direct=True):
        # don't initialize me directly
        if _is_direct:
            raise TypeError("create with Foo.create_from_*")
        self.a = a
It's certainly still possible to create an instance without going through create_from_*, but a developer would have to knowingly work around your roadblock to do it.
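Usage then looks like this (sketch):
>>> Foo.create_from_f1(1).a
1
>>> Foo(1)
Traceback (most recent call last):
  ...
TypeError: create with Foo.create_from_*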
Trying to make a constructor private in Python is not a very pythonic thing to do. One of the philosophies of Python is "we're all consenting adults". That is, you don't try to hide the __init__ method, but you do document that a user probably wants to use one of the convenience constructors instead. But if the user thinks they really know what they're doing then they're welcome to try.
You can see this philosophy in action in the standard library with inspect.Signature. The class constructor takes a list of Parameter objects, which are fairly complicated to create. This is not the standard way a user is expected to create a Signature instance. Rather, a function called signature is provided which takes a callable as its argument and does all the legwork of creating the Parameter instances from the various different function types in CPython and marshalling them into the Signature object.
That is, for your own class, do something like:
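Here is a minimal sketch of that standard-library pattern (the greet function is just an illustration):
import inspect

def greet(name, punctuation='!'):
    return f'Hello, {name}{punctuation}'

# The convenience function does the legwork of building the Parameter list:
sig = inspect.signature(greet)
print(sig)  # (name, punctuation='!')

# Building the same Signature by hand is far more verbose:
params = [
    inspect.Parameter('name', inspect.Parameter.POSITIONAL_OR_KEYWORD),
    inspect.Parameter('punctuation', inspect.Parameter.POSITIONAL_OR_KEYWORD,
                      default='!'),
]
print(inspect.Signature(params))  # (name, punctuation='!')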
@dataclass
class C:
    """
    The class C represents blah. Instances of C should be created using the
    C.create_from_<x> family of functions.
    """
    a: int
    b: str
    c: float

    @classmethod
    def create_from_int(cls, x: int):
        return cls(foo(x), bar(x), baz(x))
The __init__ method is not responsible for creating instances of a class. You should override the __new__ method if you want to restrict the instantiation of your class. But if you override the __new__ method, it will affect every form of instantiation as well, which means that your classmethods won't work anymore unless they account for it. Because of that, and since it's generally not Pythonic to delegate instance creation to another function, it's better to do this within the __new__ method. The detailed reasons for that can be found in the docs:
Called to create a new instance of class cls. __new__() is a static method (special-cased so you need not declare it as such) that takes the class of which an instance was requested as its first argument. The remaining arguments are those passed to the object constructor expression (the call to the class). The return value of __new__() should be the new object instance (usually an instance of cls).
Typical implementations create a new instance of the class by invoking the superclass’s __new__() method using super().__new__(cls[, ...]) with appropriate arguments and then modifying the newly-created instance as necessary before returning it.
If __new__() returns an instance of cls, then the new instance’s __init__() method will be invoked like __init__(self[, ...]), where self is the new instance and the remaining arguments are the same as were passed to __new__().
If __new__() does not return an instance of cls, then the new instance’s __init__() method will not be invoked.
__new__() is intended mainly to allow subclasses of immutable types (like int, str, or tuple) to customize instance creation. It is also commonly overridden in custom metaclasses in order to customize class creation.
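A hedged sketch of what that could look like for the dataclass in the question; the _from_factory flag name is purely illustrative:
from dataclasses import dataclass

@dataclass
class C:
    a: int

    def __new__(cls, *args, _from_factory=False):
        # Direct calls like C(1) arrive here with the default flag and fail.
        if not _from_factory:
            raise TypeError("use the C.create_from_* constructors")
        return super().__new__(cls)

    @classmethod
    def create_from_f1(cls, a: int) -> "C":
        obj = cls.__new__(cls, _from_factory=True)
        obj.__init__(a)  # run the dataclass-generated __init__ by hand
        return obj

print(C.create_from_f1(1))  # C(a=1)
# C(1) raises TypeError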
As Dunes' answer explained, this is not something you'd usually want to do. But since it's possible anyway, here is how:
from dataclasses import dataclass

@dataclass
class C:
    a: int

    def __post_init__(self):
        # __init__ will call this method automatically
        raise TypeError("Don't create instances of this class by hand!")

    @classmethod
    def create_from_f1(cls, a: int):
        # disable the post-init check by hand ...
        tmp = cls.__post_init__
        cls.__post_init__ = lambda *args, **kwargs: None
        ret = cls(a)
        # ... and restore it once we are done
        cls.__post_init__ = tmp
        return ret

print(C.create_from_f1(1))  # works
print(C(1))  # raises a descriptive TypeError
I probably don't need to say that this toggling code looks absolutely heinous, and that it also makes it impossible to use __post_init__ for anything else, which is quite unfortunate. But it is one way to answer the question in your post.
I am having some problems using multiple inheritance in Python and can't understand what I am doing wrong.
I have three classes A, B, and C. A and B are defined as follows, but C does not work:
class A(object):
    def __init__(**kwargs):
        .
        .

class B(object):
    def __init__(**kwargs):
        # prepare a dictionary "options" with the options used to call A
        super(B, self).__init__(**options)

    def coolmethod(x):
        # some cool stuff
For A and B I don't have any problems.
I want to create a third class C that inherits both from A and B
so that I can use the coolmethod defined in B, but would like to use the constructor defined in A.
Trying to define class C(A,B) does not work because the MRO is not defined.
But defining class C(B,A) does not allow me to use A.__init__ rather than B.__init__.
How can I solve the issue?
You can call A.__init__() directly instead of using super() in C:
class C(B, A):
    def __init__(self, **kwargs):
        A.__init__(self, **kwargs)
You can use
class A(object):
    def __init__(self, **kwargs):
        super(A, self).__init__(**kwargs)
if you expect multiple inheritance from A and something else. This way, A.__init__ will always be called.
The order is important because of the way method resolution works in Python. If you have C inherit from (A, B), it means that if you invoke a method on C that exists on both A and B, the one on A is selected (it has precedence). If you write super(A, self).method in class A, it means you want to extend the functionality provided by method. Therefore, it would be strange to skip over one such extension if both A and B had such extensions and C inherited from both. That's why when you call C.method, it will execute A.method, which will call B.method when it invokes super(A, self).method. In other terms, it's as if A inherited from B for the purpose of method extension. This is different when C inherits from (B, A).
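A small sketch of that behavior (illustrative classes, not the ones from the question):
class A:
    def method(self):
        return 'A extends ' + super(A, self).method()

class B:
    def method(self):
        return 'B'

class C(A, B):
    pass

print(C().method())  # 'A extends B': A.method runs first, super finds B.method
print([cls.__name__ for cls in C.__mro__])  # ['C', 'A', 'B', 'object']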
Note that __init__'s first argument should always be self, just like for every method.