I have a pretty big class that I want to break down into smaller classes that each handle a single part of the whole, so each child takes care of only one aspect.
Each of these child classes still need to communicate with one another.
For example Data Access creates a dictionary that Plotting Controller needs to have access to.
And then the Plotting Controller needs to update things on the Main GUI Controller. These children have several more inter-communication functions besides.
How do I achieve this?
I've read about metaclasses, Cooperative Multiple Inheritance and the Wonders of Cooperative Multiple Inheritance, but I cannot figure out how to do this.
The closest I've come is the following code:
class A:
    def __init__(self):
        self.myself = 'ClassA'

    def method_ONE_from_class_A(self, caller):
        print(f"I am method ONE from {self.myself} called by {caller}")
        self.method_ONE_from_class_B(self.myself)

    def method_TWO_from_class_A(self, caller):
        print(f"I am method TWO from {self.myself} called by {caller}")
        self.method_TWO_from_class_B(self.myself)

class B:
    def __init__(self):
        self.me = 'ClassB'

    def method_ONE_from_class_B(self, caller):
        print(f"I am method ONE from {self.me} called by {caller}")
        self.method_TWO_from_class_A(self.me)

    def method_TWO_from_class_B(self, caller):
        print(f"I am method TWO from {self.me} called by {caller}")

class C(A, B):
    def __init__(self):
        A.__init__(self)
        B.__init__(self)

    def children_start_talking(self):
        self.method_ONE_from_class_A('Big Poppa')

poppa = C()
poppa.children_start_talking()
which results correctly in:
I am method ONE from ClassA called by Big Poppa
I am method ONE from ClassB called by ClassA
I am method TWO from ClassA called by ClassB
I am method TWO from ClassB called by ClassA
But... even though Class B and Class A correctly call each other's functions at runtime, their declarations aren't actually resolved. Nor do I "see" them (autocompletion) while I'm typing the code, which is both frustrating and worrying that I might be doing something wrong.
Is there a good way to achieve this? Or is it an actually bad idea?
EDIT: Python 3.7 if it makes any difference.
Inheritance
When breaking a class up like this, the individual "partial" classes, which we call "mixins", "see" only what is declared directly on them and on their base classes. In your example, when you write class A, it knows nothing about class B - you, as the author, know that methods from class B will be present, because methods from class A will only ever be called on class C, which inherits from both.
Your programming tools, the IDE included, can't know that. (That you should know better than your programming aid is a side track.) It works when run, but it is a poor design.
If all the methods are to be available directly on a single instance of your final class, they all have to be declared on a common super-class - you can even write the independent subclasses in different files, and then a single subclass that inherits from all of them:
from abc import abstractmethod, ABC

class Base(ABC):
    @abstractmethod
    def method_A_1(self):
        pass

    @abstractmethod
    def method_A_2(self):
        pass

    @abstractmethod
    def method_B_1(self):
        pass

class A(Base):
    def __init__(self, *args, **kwargs):
        # pop consumed named parameters from "kwargs"
        ...
        super().__init__(*args, **kwargs)
        # This call ensures all __init__ methods in the bases are called,
        # because Python linearizes the base classes on multiple inheritance

    def method_A_1(self):
        ...

    def method_A_2(self):
        ...

class B(Base):
    def __init__(self, *args, **kwargs):
        # pop consumed named parameters from "kwargs"
        ...
        super().__init__(*args, **kwargs)
        # This call ensures all __init__ methods in the bases are called,
        # because Python linearizes the base classes on multiple inheritance

    def method_B_1(self):
        ...
    ...

class C(A, B):
    pass
(The "ABC" and "abstractmethod" are a bit of sugar - they will work, but this design would work without any of that - thought their presence help whoever is looking at your code to figure out what is going on, and will raise an earlier runtime error if you per mistake create an instance of one of the incomplete base classes)
Composite
This works, but if your methods are actually for wildly different domains, instead
of multiple inheritance, you should try using the "composite design pattern".
No need for multiple inheritance if it does not arise naturally.
In this case, you instantiate objects of the classes that drive the different domains in the __init__ of the shell class, and pass its own instance to those children, which keep a reference to it (in a self.parent attribute, for example). Chances are your IDE still won't know what you are talking about, but you will have a saner design.
class Parent:
    def __init__(self):
        self.a_domain = A(self)
        self.b_domain = B(self)

class A:
    def __init__(self, parent):
        self.parent = parent
        # no need to call any "super...init", this is called
        # as part of the initialization of the parent class

    def method_A_1(self):
        ...

    def method_A_2(self):
        ...

class B:
    def __init__(self, parent):
        self.parent = parent

    def method_B_1(self):
        # need result from 'A' domain:
        a_value = self.parent.a_domain.method_A_1()
        ...
This example uses only basic language features, but if you decide
to go for it in a complex application, you can refine it - there are
interface patterns that would allow you to swap the classes used
for the different domains in specialized subclasses, and so on (a rough
sketch follows). But typically the pattern above is all you need.
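For illustration only, one minimal way such swapping could look - the BasicPlotting, FancyPlotting and plotting_class names are hypothetical, not part of the answer above:

class BasicPlotting:
    def __init__(self, parent):
        self.parent = parent

    def plot(self):
        print("basic plot")

class FancyPlotting(BasicPlotting):
    def plot(self):
        print("fancy plot with legends")

class Shell:
    # a specialized subclass can swap the class used for the plotting domain
    plotting_class = BasicPlotting

    def __init__(self):
        self.plotting = self.plotting_class(self)

class FancyShell(Shell):
    plotting_class = FancyPlotting

Shell().plotting.plot()       # basic plot
FancyShell().plotting.plot()  # fancy plot with legends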
For classes:
class Base(ABC):
    def __init__(self, param1):
        self.param1 = param1

    @abstractmethod
    def some_method1(self):
        pass

    # @abstractmethod
    # def potentially_shared_method(self):
    #     ????

class Child(Base):
    def __init__(self, param1, param2):
        super().__init__(param1)
        self.param2 = param2

    def some_method1(self):
        self.object1 = some_lib.generate_object1(self.param1, self.param2)

    def potentially_shared_method(self):
        return self.object1.process()
I want to move potentially_shared_method into the abstract class so it can be shared; however, it uses object1, which is initialized in some_method1 and needs to stay there.
If it's only potentially shared, it doesn't belong in the base class. You'd be breaking a few design principles.
What is a child class supposed to do for which the sharing doesn't make sense?
Also, you're introducing some temporal coupling; you can only call potentially_shared_method after some_method1 has been called. That's not ideal because the users of your class might not realize that.
Also, if the method is shared, you probably don't want it to be abstract in your base class; with an abstract method you're really only sharing the signature; but it seems you'll want to share functionality.
Anyway. Here's some options:
Using Python's multiple inheritance, move potentially_shared_method into a SharedMixin class and have those children who share it inherit from Base and from SharedMixin. You can then also move some_method1 into that SharedMixin class because it seems to me that those go together. Or maybe not...
Hide the access to object1 behind a getter. Make the getter have a dummy implementation in the base class and a proper implementation in those child classes who actually create an object1. Then potentially_shared_method can be moved to Base and just refer to the getter.
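A minimal sketch of that second option, assuming a _get_object1 helper (the helper name, and the reuse of the question's some_lib placeholder, are mine):

from abc import ABC, abstractmethod

class Base(ABC):
    def __init__(self, param1):
        self.param1 = param1

    @abstractmethod
    def some_method1(self):
        pass

    def _get_object1(self):
        # dummy implementation; children that actually create object1 override this
        raise NotImplementedError("this subclass does not provide object1")

    def potentially_shared_method(self):
        # the shared logic lives in Base and only talks to the getter
        return self._get_object1().process()

class Child(Base):
    def __init__(self, param1, param2):
        super().__init__(param1)
        self.param2 = param2

    def some_method1(self):
        self.object1 = some_lib.generate_object1(self.param1, self.param2)

    def _get_object1(self):
        return self.object1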
According to the Python docs, super() "is useful for accessing inherited methods that have been overridden in a class."
I understand that super refers to the parent class and it lets you access parent methods. My question is why do people always use super inside the init method of the child class? I have seen it everywhere. For example:
class Person:
    def __init__(self, name):
        self.name = name

class Employee(Person):
    def __init__(self, **kwargs):
        super().__init__(name=kwargs['name'])  # Here super is being used

    def first_letter(self):
        return self.name[0]

e = Employee(name="John")
print(e.first_letter())
I can accomplish the same without super and without even an init method:
class Person:
    def __init__(self, name):
        self.name = name

class Employee(Person):
    def first_letter(self):
        return self.name[0]

e = Employee(name="John")
print(e.first_letter())
Are there drawbacks with the latter code? It looks so much cleaner to me. I don't even have to use the boilerplate **kwargs and kwargs['argument'] syntax.
I am using Python 3.8.
Edit: Here's another Stack Overflow question with code from different people who use super in the child's init method. I don't understand why. My best guess is there's something new in Python 3.8.
The child might want to do something different or more likely additional to what the super class does - in this case the child must have an __init__.
Calling super's init means that you don't have to copy/paste that init (with all the maintenance implications) into the child class, which would otherwise be needed whenever you want some additional code in the child's init.
But note there are complications with using super's init if you use multiple inheritance (e.g. which super gets called), and this needs care. Personally I avoid multiple inheritance and keep inheritance to a minimum anyway - it's easy to get tempted into creating multiple levels of inheritance/class hierarchy, but my experience is that a 'keep it simple' approach is usually much better.
The potential drawback of the latter code is that there is no __init__ method within the Employee class. Since there is none, the __init__ method of the parent class is called. However, as soon as an __init__ method is added to the Employee class (maybe there's some Employee-specific attribute that needs to be initialized, like an id_number), the parent's __init__ is overridden and no longer called (unless super().__init__() is called), and an Employee will not have a name attribute.
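For example, a small sketch of that situation (id_number is just an illustrative attribute):

class Person:
    def __init__(self, name):
        self.name = name

class Employee(Person):
    def __init__(self, name, id_number):
        super().__init__(name)  # without this call, self.name would never be set
        self.id_number = id_number

e = Employee("John", 42)
print(e.name, e.id_number)  # John 42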
The correct way to use super here is for both methods to use super. You cannot assume that Person is the last (or at least, next-to-last, before object) class in the MRO.
class Person:
    def __init__(self, name, **kwargs):
        super().__init__(**kwargs)
        self.name = name

class Employee(Person):
    # Optional, since Employee.__init__ does nothing
    # except pass the exact same arguments "upstream"
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def first_letter(self):
        return self.name[0]
Consider a class definition like
class Bar:
    ...

class Foo(Person, Bar):
    ...
The MRO for Foo looks like [Foo, Person, Bar, object]; the call to super().__init__ inside Person.__init__ would call Bar.__init__, not object.__init__, and Person has no way of knowing if values in **kwargs are meant for Bar, so it must pass them on.
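A quick runnable sketch of that flow (bar_arg is a made-up parameter, and Person is restated from above so the snippet stands alone):

class Person:
    def __init__(self, name, **kwargs):
        super().__init__(**kwargs)
        self.name = name

class Bar:
    def __init__(self, bar_arg, **kwargs):
        super().__init__(**kwargs)
        self.bar_arg = bar_arg

class Foo(Person, Bar):
    pass

f = Foo(name="John", bar_arg=7)
print(f.name, f.bar_arg)  # John 7
# Person.__init__ consumed 'name' and passed 'bar_arg' along to Bar.__init__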
I have a class design where the child classes inheriting from a certain Parent class differ only in some parameters, while the Parent class contains all the methods, which use the parameters provided as class variables on the children. So, in other words, each of my child classes is fully described by its list of parameters plus the inheritance of the Parent class.
So, let's say, I have the following classes:
class Parent:
    def __init__(self, **kwargs):
        for param in self.__class__.parameters:
            setattr(self, param, kwargs.get(param))

    def compare(self, other):
        for param in self.__class__.parameters:
            if getattr(self, param) != getattr(other, param):
                return False
        return True

class ChildA(Parent):
    parameters = ["length", "height", "width"]

    def __init__(self, **kwargs):
        super().__init__(**kwargs)

class ChildB(Parent):
    parameters = ["color", "taste"]

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
My actual classes are a bit different - I have more, and more complex, methods on the Parent class and also different kinds of parameters - but this is a minimal example of the design principle.
Since the Parent class relies on its children to have the class variable parameters, I thought I might want to enforce the existence of that class variable on each child class. I have read that I can achieve this with a metaclass. But I have also read that most developers do not need metaclasses, and that if you are in doubt, you probably don't need them. I have never worked with metaclasses before, so I am in doubt whether I should use one, and by that rule I probably do not need one. On the other hand, the term "metaclass" just sounds like a good match for my structure, since Parent really does look like something that could loosely be called a "metaclass" (not in the technical OOP sense of the term, but in the sense that it fully describes the behaviour of the child classes).
So, I wonder: is there a different (better) class design that reflects my structure? Should I use a metaclass to enforce the existence of parameters, or is there a better way to do so? Or should I just give up on enforcing the existence of the parameters class variable on the child classes in the first place?
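For reference, a rough sketch of the metaclass route being considered might look like this (the names are illustrative; the answer below argues for a simpler mechanism):

class RequireParameters(type):
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        # skip the check for the base class itself
        if bases and not hasattr(cls, "parameters"):
            raise TypeError(f"{name} must define a 'parameters' class attribute")

class Parent(metaclass=RequireParameters):
    ...  # __init__ and compare as above

class ChildA(Parent):
    parameters = ["length", "height", "width"]  # OK

# class Bad(Parent):  # would raise TypeError at class-creation time
#     pass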
If you are using Python 3.6 or above, you can accomplish this with __init_subclass__, which I personally find easier to reason about than a metaclass.
An example of __init_subclass__ based on the usecase described:
class Parent:
    def __init_subclass__(cls):
        if not hasattr(cls, 'parameters'):
            raise TypeError(f'Subclass of {cls} does not have a parameters class attribute')

    def __init__(self, **kwargs):
        for param in self.__class__.parameters:
            setattr(self, param, kwargs.get(param))

    def compare(self, other):
        for param in self.__class__.parameters:
            if getattr(self, param) != getattr(other, param):
                return False
        return True
class GoodChild(Parent):
    parameters = ['length', 'height', 'width']

class BadChild(Parent):
    pass
Which results in raising a TypeError exception when the BadChild class is created (not when it is instantiated):
TypeError: Subclass of <class '__main__.BadChild'> does not have a parameters class attribute
I'm having a minor issue (I hope) with theory and the proper way to deal with a problem. It's easier for me to show an example than to explain, as I can't seem to find the right vocabulary.
class Original_1:
    def __init__(self):
        pass

    def meth1(self):
        pass

    def meth2(self):
        pass

class Original_2(Original_1):
    def __init__(self):
        Original_1.__init__(self)

    def meth3(self):
        pass

class Mixin:
    def __init__(self):
        pass

    def meth4(self):
        ...
        meth1(self)
        meth2(self)

class NewClass_1(Original_1, Mixin):
    def __init__(self):
        Original_1.__init__(self)
        Mixin.__init__(self)

class NewClass_2(Original_2, Mixin):
    def __init__(self):
        Original_2.__init__(self)
        Mixin.__init__(self)
Now the goal is to extend Original_1 or Original_2 with new methods in the Mixin, but I run into some questions if I use meth1(), meth2(), or meth3() in the mixin:
1. I'm not referencing Original_1 or Original_2 in the mixin (at this point it runs, but I don't like it).
2. If I make Mixin a child of Original_1, it breaks. I could make two separate NewClass_X variants, but then I'm duplicating all of that code.
Mixins are used to add functionality (usually methods) to classes by using multiple inheritance.
For example, let's say you want to make a class's __str__ method return everything in uppercase. There are two ways you can do this:
Manually change every single class's __str__ method:
class SomeClass(SomeBase):
    def __str__(self):
        return super(SomeClass, self).__str__().upper()
Create a mixin class that does only this and inherit from it:
class UpperStrMixin(object):
    def __str__(self):
        return super(UpperStrMixin, self).__str__().upper()
class SomeClass(UpperStrMixin, SomeBase):  # mixin listed first so its __str__ takes precedence in the MRO
    ...
In the second example, notice how UpperStrMixin is completely useless as a standalone class. Its only purpose is to be used with multiple inheritance as a base class and to override your class's __str__ method.
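As a quick illustration of how it composes (Greeting is a made-up stand-in for SomeBase; UpperStrMixin is restated so the snippet runs on its own):

class UpperStrMixin(object):
    def __str__(self):
        return super(UpperStrMixin, self).__str__().upper()

class Greeting:
    def __str__(self):
        return "hello there"

class LoudGreeting(UpperStrMixin, Greeting):
    pass

print(LoudGreeting())  # HELLO THERE - the mixin's __str__ wraps Greeting's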
In your particular case, the following will work:
class Mixin:
def __init__(self, option):
...
def meth4(self):
...
self.meth1()
self.meth2()
class NewClass_1(Original_1, Mixin):
def __init__(self, option):
Original_1.__init__(self)
Mixin.__init__(self, option)
...
class NewClass_2(Original_2, Mixin):
def __init__(self, option):
Original_2.__init__(self)
Mixin.__init__(self, option)
...
Even though Mixin.meth1 and Mixin.meth2 aren't defined, this isn't an issue because an instance of Mixin is never created directly and it's only used indirectly through multiple inheritance.
Since Mixin is not a standalone class, you can just write it to assume that the necessary methods exist, and it will find them on self assuming the self in question provides, or derives from another class which provides, meth1 and meth2.
If you want to ensure the methods exist, you can either document it in the Mixin docstring, or for programmatic enforcement, use the abc module to make Mixin an ABC and specify what methods must be defined; if a given class doesn't provide them (directly or via inheritance) then you'll get an error if you attempt to instantiate it (because the class is still abstract until those methods are defined):
from abc import ABCMeta, abstractmethod

class Mixin(metaclass=ABCMeta):
    def __init__(self):
        pass

    @abstractmethod
    def meth1(self): pass

    @abstractmethod
    def meth2(self): pass

    def meth4(self):
        ...
        self.meth1()  # Method call on self will dispatch to the other class's meth1 dynamically
        self.meth2()  # Method call on self will dispatch to the other class's meth2 dynamically
Beyond that, you can simplify your code significantly by using super appropriately, which would remove the need to explicitly call the __init__s for each parent class; they'd be called automatically so long as all classes use super appropriately (note: for safety, in cooperative inheritance like this, you usually accept the current class's recognized arguments plus varargs, passing the varargs you don't recognize up the call chain blindly):
class Original_1:
    def __init__(self, orig1arg, *args, **kwargs):
        self.orig1val = orig1arg  # Use what you know
        super().__init__(*args, **kwargs)  # Pass what you don't

    def meth1(self):
        pass

    def meth2(self):
        pass

class Original_2(Original_1):
    def __init__(self, orig2arg, *args, **kwargs):
        self.orig2val = orig2arg  # Use what you know
        super().__init__(*args, **kwargs)  # Pass what you don't

    def meth3(self):
        pass

class Mixin(metaclass=ABCMeta):
    # If Mixin, or any class in your hierarchy, doesn't need to do anything to
    # be initialized, just omit __init__ entirely, and the super from other
    # classes will skip over it entirely
    def __init__(self, mixinarg, *args, **kwargs):
        self.mixinval = mixinarg  # Use what you know
        super().__init__(*args, **kwargs)  # Pass what you don't

    @abstractmethod
    def meth1(self): pass

    @abstractmethod
    def meth2(self): pass

    def meth4(self):
        ...
        self.meth1()  # Method call on self will dispatch to other class's meth1
        self.meth2()  # Method call on self will dispatch to other class's meth2

class NewClass_1(Original_1, Mixin):
    def __init__(self, newarg1, *args, **kwargs):
        self.newval1 = newarg1  # Use what you know
        super().__init__(*args, **kwargs)  # Pass what you don't

class NewClass_2(Original_2, Mixin):
    def __init__(self, newarg2, *args, **kwargs):
        self.newval2 = newarg2  # Use what you know
        super().__init__(*args, **kwargs)  # Pass what you don't
Note that using super everywhere means you don't need to explicitly call each __init__ for your parents; it automatically linearizes the calls, so for example, in NewClass_2, that single super().__init__ will delegate to the first parent (Original_2), which then delegates to Original_1, which then delegates to Mixin (even though Original_1 knows nothing about Mixin).
In more complicated multiple inheritance (say, you inherit from Mixin through two different parent classes that both inherit from it), using super is the only way to handle it reasonably; super naturally linearizes and deduplicates the parent class tree, so even though two parents derive from it, Mixin.__init__ would still only be called once, preventing subtle errors from initializing Mixin more than once.
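A tiny sketch of that diamond case (the class names here are made up, and the abstract methods are left out to keep it short):

class Mixin:
    def __init__(self, **kwargs):
        print("Mixin.__init__ runs")
        super().__init__(**kwargs)

class ParentA(Mixin):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

class ParentB(Mixin):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

class Child(ParentA, ParentB):
    pass

Child()  # "Mixin.__init__ runs" is printed exactly once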
Note: You didn't specify which version of Python you're using. Metaclasses and super are both better and simpler in Python 3, so I've used Python 3 syntax. For Python 2, you'd need to set the metaclass a different way, and call super providing the current class object and self explicitly, which makes it less nice, but then, Python 2 is generally less nice at this point, so consider writing new code for Python 3?
I am developing a system which has a series of single multilevel inheritance hierarchies. One of the methods (applicable to all the classes) has to do the same thing for most of the classes: pass a list to its parent class.
I know that if one doesn't define a method in one of the inherited classes, its parents' methods are used. But when we use the super method, we need to mention the name of the class being called.
One method I know to achieve this is to redefine the method at every class with class name as argument. Is there any elegant method where I can define it once at the topmost parent, and then override it only when necessary?
The implementation right now looks like this
class a(object):
    def __init__(self):
        self.myL = list()
        print('hello')

class b(a):
    def __init__(self):
        super(b, self).__init__()

    def resolve(self, passVal):
        print(passVal)
        self.myL.append(passVal)
        super(b, self).resolve(passVal + 1)

class c(b):
    def __init__(self):
        super(c, self).__init__()

    def resolve(self, passVal):
        print(passVal)
        self.myL.append(passVal)
        super(c, self).resolve(passVal + 1)
Instead, I'd like to define resolve once in class a and have all the other classes inherit the method from it. I understand a itself will never be able to use it, but redefining the method in every class seems like a lot of unnecessary extra work.