I have a class design where the child classes inheriting from a certain Parent class differ only in some parameters, while the Parent class contains all the methods, which use the parameters provided as class variables on the children. In other words, each of my child classes is fully described by its list of parameters plus the inheritance from the Parent class.
So, let's say, I have the following classes:
class Parent:
    def __init__(self, **kwargs):
        for param in self.__class__.parameters:
            setattr(self, param, kwargs.get(param))

    def compare(self, other):
        for param in self.__class__.parameters:
            if getattr(self, param) != getattr(other, param):
                return False
        return True
class ChildA(Parent):
    parameters = ["length", "height", "width"]

    def __init__(self, **kwargs):
        super().__init__(**kwargs)

class ChildB(Parent):
    parameters = ["color", "taste"]

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
My actual classes are a bit different (I have more, and more complex, methods on the Parent class, and also different kinds of parameters), but this is a minimal example of the design principle.
Since the Parent class relies on its children to have the class variable parameters, I thought I might want to enforce the existence of that class variable on each child class. I have read that I can achieve this by using a metaclass. But I have also read that most developers do not need metaclasses, and that if you are in doubt, you probably don't need them. I have never worked with metaclasses before, so I am in doubt, and by that rule I probably do not need one. But on the other hand, the term "metaclass" just sounds like a good match for my structure, since Parent really looks like something that could well be called a "metaclass" in some sense (not in the way the technical term is used in OOP, but in the sense that it fully describes the behaviour of the child classes).
So, I wonder: Is there a different (better) class design that reflects my structure? Should I use a metaclass to enforce the existence of parameters, or is there a better way to do so? Or should I just give up on enforcing the existence of the parameters class variable on the child classes in the first place?
If you are using Python 3.6 or above, you can accomplish this using __init_subclass__, which I personally find easier to reason about than a metaclass.
An example of __init_subclass__ based on the use case described:
class Parent:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)  # cooperate with other base classes
        if not hasattr(cls, 'parameters'):
            raise TypeError(f'Subclass of {cls} does not have a parameters class attribute')

    def __init__(self, **kwargs):
        for param in self.__class__.parameters:
            setattr(self, param, kwargs.get(param))

    def compare(self, other):
        for param in self.__class__.parameters:
            if getattr(self, param) != getattr(other, param):
                return False
        return True

class GoodChild(Parent):
    parameters = ['length', 'height', 'width']

class BadChild(Parent):
    pass
Which results in raising a TypeError exception when the BadChild class is created (not when it is instantiated):
TypeError: Subclass of <class '__main__.BadChild'> does not have a parameters class attribute
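For completeness, a quick sketch of how a validated subclass behaves once created (using the GoodChild class defined above):

a = GoodChild(length=2, height=3, width=4)
b = GoodChild(length=2, height=3, width=4)
c = GoodChild(length=5, height=3, width=4)

print(a.compare(b))  # True - every listed parameter matches
print(a.compare(c))  # False - 'length' differs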
For the following classes:

from abc import ABC, abstractmethod

class Base(ABC):
    def __init__(self, param1):
        self.param1 = param1

    @abstractmethod
    def some_method1(self):
        pass

    # @abstractmethod
    # def potentially_shared_method(self):
    #     ????

class Child(Base):
    def __init__(self, param1, param2):
        super().__init__(param1)
        self.param2 = param2

    def some_method1(self):
        self.object1 = some_lib.generate_object1(self.param1, self.param2)

    def potentially_shared_method(self):
        return self.object1.process()
I want to move potentially_shared_method into the shared abstract class; however, it uses object1, which is initialized in some_method1 and needs to stay there.
If it's only potentially shared, it doesn't belong in the base class. You'd be breaking a few design principles.
What is a child class supposed to do for which the sharing doesn't make sense?
Also, you're introducing some temporal coupling; you can only call potentially_shared_method after some_method1 has been called. That's not ideal because the users of your class might not realize that.
Also, if the method is shared, you probably don't want it to be abstract in your base class; with an abstract method you're really only sharing the signature; but it seems you'll want to share functionality.
Anyway, here are some options:
Using Python's multiple inheritance, move potentially_shared_method into a SharedMixin class and have those children who share it inherit from Base and from SharedMixin. You can then also move some_method1 into that SharedMixin class because it seems to me that those go together. Or maybe not...
Hide the access to object1 behind a getter. Give the getter a dummy implementation in the base class and a proper implementation in those child classes that actually create an object1. Then potentially_shared_method can be moved to Base and just refer to the getter (a sketch of this option follows below).
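A minimal sketch of that second option, reusing the names some_lib, generate_object1 and process from the question as placeholders (the getter name _get_object1 is my own):

from abc import ABC, abstractmethod

class Base(ABC):
    def __init__(self, param1):
        self.param1 = param1

    @abstractmethod
    def some_method1(self):
        pass

    def _get_object1(self):
        # Dummy implementation; children that create object1 override this.
        raise NotImplementedError("this child does not provide object1")

    def potentially_shared_method(self):
        # Shared behaviour lives in the base; all access goes through the getter.
        return self._get_object1().process()

class Child(Base):
    def __init__(self, param1, param2):
        super().__init__(param1)
        self.param2 = param2

    def some_method1(self):
        self.object1 = some_lib.generate_object1(self.param1, self.param2)

    def _get_object1(self):
        return self.object1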
I'm trying to create a base class with a number of abstract Python properties, in Python 3.7.
I tried it one way (see 'start' below) using the @property, @abstractmethod and @property.setter decorators. This worked, but it doesn't raise an exception if the subclass doesn't implement a setter. That's the point of using @abstractmethod to me, so that's no good.
So I tried doing it another way (see 'end' below) using two @abstractmethod methods and a property(), which is not abstract itself but uses those methods. This approach generates an error when instantiating the subclass:
# {TypeError}Can't instantiate abstract class FirstStep with abstract methods end
I'm clearly implementing the abstract methods, so I don't understand what it means. The 'end' property is not marked abstract, but if I comment it out, it does run (though I don't get my property). I also added the test non-abstract method 'test_elapsed_time' to demonstrate I have the class structure and abstraction right (it works).
Any chance I'm doing something dumb, or is there some special behavior around property() that's causing this?
class ParentTask(Task):
    def get_first_step(self):
        # {TypeError}Can't instantiate abstract class FirstStep with abstract methods end
        return FirstStep(self)

class Step(ABC):
    # __metaclass__ = ABCMeta

    def __init__(self, task):
        self.task = task

    # First approach. Works, but no warnings if the subclass doesn't implement the setter.
    @property
    @abstractmethod
    def start(self):
        pass

    @start.setter
    @abstractmethod
    def start(self, value):
        pass

    # Second approach. This method for 'end' may look slightly messier,
    # but raises errors if not implemented.
    @abstractmethod
    def get_end(self):
        pass

    @abstractmethod
    def set_end(self, value):
        pass

    end = property(get_end, set_end)

    def test_elapsed_time(self):
        return self.get_end() - self.start

class FirstStep(Step):
    @property
    def start(self):
        return self.task.start_dt

    # No warnings if this is commented out.
    @start.setter
    def start(self, value):
        self.task.start_dt = value

    def get_end(self):
        return self.task.end_dt

    def set_end(self, value):
        self.task.end_dt = value
I suspect this is a bug in the interaction of abstract methods and properties.
In your base class, the following things happen, in order:

1. You define an abstract method named start.
2. You create a new property that uses the abstract method from 1) as its getter. The name start now refers to this property, with the only reference to the original function now held by start.fget.
3. Python saves a temporary reference to start.setter, because the name start is about to be bound to yet another object.
4. You define a second abstract method named start.
5. The reference from 3) is given the abstract method from 4) to define a new property that replaces the one currently bound to the name start. This property has as its getter the method from 1) and as its setter the method from 4). Now start refers to this property; start.fget refers to the method from 1); start.fset refers to the method from 4).
At this point, you have a property, whose component functions are abstract methods. The property itself was not decorated as abstract, but the definition of property.__isabstractmethod__ marks it as such because all its component methods are abstract. More importantly, you have the following entries in Step.__abstractmethods__:
start, the property
end, the property
set_end, the setter for end
get_end, the getter for end
Note that the component functions for the start property are missing, because __abstractmethods__ stores names of, not references to, things that need to be overridden. Using property and the resulting property's setter method as decorators repeatedly replaces what the name start refers to.
Now, in your child class, you define a new property named start, shadowing the inherited property, which has no setter and a concrete method as its getter. At this point, it doesn't matter if you provide a setter for this property or not, because as far as the abc machinery is concerned, you have provided everything it asked for:
A concrete method for the name start
Concrete methods for the names get_end and set_end
Implicitly a concrete definition for the name end, because all of the underlying functions for the property end have been provided concrete definitions.
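You can verify this yourself by inspecting __abstractmethods__ (a quick sketch, assuming the Step base class from the question):

print(sorted(Step.__abstractmethods__))
# ['end', 'get_end', 'set_end', 'start']
# 'end' is listed because property.__isabstractmethod__ reports it as
# abstract when its component functions are abstract; the component
# functions of 'start' are absent because no class-level name refers
# to them anymore.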
@chepner answered and explained it well. Based on that, I came up with a way around it that is... well... you decide. Sneaky at best. But it achieves my 3 main goals:
Raises exceptions for unimplemented setters in subclasses
Supports the python property semantics (vs. functions etc)
Avoids the boilerplate of re-declaring every property in every subclass, which still might not have solved #1 anyway.
Just declare the abstract get/set functions in the base class (not the property). Then add a @classmethod initializer to the base class that creates the actual properties using those abstract methods; by that point, they'll be concrete methods on the subclass.
It's a one-liner after the subclass declaration to init the properties. Nothing enforces that call being made, so it's not ironclad. Not a big savings in this example, but I'll have many properties. The end result doesn't look as dirty as I thought it would. I would like to hear comments or warnings about things I'm overlooking.
from abc import abstractmethod, ABC

class ParentTask(object):
    def __init__(self):
        self.first_step = FirstStep(self)
        self.second_step = SecondStep(self)
        print(self.first_step.end)
        print(self.second_step.end)

class Step(ABC):
    def __init__(self, task):
        self.task = task

    @classmethod
    def init_properties(cls):
        cls.end = property(cls.get_end, cls.set_end)

    @abstractmethod
    def get_end(self):
        pass

    @abstractmethod
    def set_end(self, value):
        pass

class FirstStep(Step):
    def get_end(self):
        return 1

    def set_end(self, value):
        self.task.end = value

class SecondStep(Step):
    def get_end(self):
        return 2

    def set_end(self, value):
        self.task.end = value

FirstStep.init_properties()
SecondStep.init_properties()
ParentTask()
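If the fact that nothing enforces the init_properties() call is a concern, one possible variation (my own suggestion, not part of the answer above) is to let the base class wire the properties up automatically whenever a subclass is defined, via __init_subclass__:

from abc import abstractmethod, ABC

class Step(ABC):
    def __init__(self, task):
        self.task = task

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Build the property from the methods as found on each new subclass;
        # for concrete subclasses these are the concrete overrides.
        cls.end = property(cls.get_end, cls.set_end)

    @abstractmethod
    def get_end(self):
        pass

    @abstractmethod
    def set_end(self, value):
        pass

With this, FirstStep and SecondStep need no explicit init_properties() call.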
I have a pretty big class that I want to break down into smaller classes that each handle a single part of the whole. So each child takes care of only one aspect of the whole.
Each of these child classes still need to communicate with one another.
For example Data Access creates a dictionary that Plotting Controller needs to have access to.
And then Plotting Controller needs to update stuff on Main GUI Controller. But these children have various other inter-communication functions.
How do I achieve this?
I've read Metaclasses, Cooperative Multiple Inheritance and Wonders of Cooperative Multiple Inheritance, but I cannot figure out how to do this.
The closest I've come is the following code:
class A:
    def __init__(self):
        self.myself = 'ClassA'

    def method_ONE_from_class_A(self, caller):
        print(f"I am method ONE from {self.myself} called by {caller}")
        self.method_ONE_from_class_B(self.myself)

    def method_TWO_from_class_A(self, caller):
        print(f"I am method TWO from {self.myself} called by {caller}")
        self.method_TWO_from_class_B(self.myself)

class B:
    def __init__(self):
        self.me = 'ClassB'

    def method_ONE_from_class_B(self, caller):
        print(f"I am method ONE from {self.me} called by {caller}")
        self.method_TWO_from_class_A(self.me)

    def method_TWO_from_class_B(self, caller):
        print(f"I am method TWO from {self.me} called by {caller}")

class C(A, B):
    def __init__(self):
        A.__init__(self)
        B.__init__(self)

    def children_start_talking(self):
        self.method_ONE_from_class_A('Big Poppa')

poppa = C()
poppa.children_start_talking()
which results correctly in:
I am method ONE from ClassA called by Big Poppa
I am method ONE from ClassB called by ClassA
I am method TWO from ClassA called by ClassB
I am method TWO from ClassB called by ClassA
But... even though Class B and Class A correctly call the other class's functions, they don't actually find their declaration. Nor do I "see" them when I'm typing the code, which is both frustrating and worrisome, as I might be doing something wrong.
Is there a good way to achieve this? Or is it an actually bad idea?
EDIT: Python 3.7 if it makes any difference.
Inheritance
When breaking up a class hierarchy like this, the individual "partial" classes, which we call "mixins", will "see" only what is declared directly on them and on their base classes. In your example, when writing class A, it does not know anything about class B - you, as the author, can know that methods from class B will be present, because methods from class A will only be called from class C, which inherits from both.
Your programming tools, including the IDE, can't know that. (That you should know better than your programming aid is a side track.) It would work if run, but this is a poor design.
If all methods are to be present directly on a single instance of your final class, all of them have to be "present" in a super-class common to them all - you can even write independent subclasses in different files, and then a single subclass that inherits all of them:
from abc import abstractmethod, ABC

class Base(ABC):
    @abstractmethod
    def method_A_1(self):
        pass

    @abstractmethod
    def method_A_2(self):
        pass

    @abstractmethod
    def method_B_1(self):
        pass

class A(Base):
    def __init__(self, *args, **kwargs):
        # pop consumed named parameters from "kwargs"
        ...
        super().__init__(*args, **kwargs)
        # This call ensures all __init__ in the bases are called,
        # because Python linearizes the base classes on multiple inheritance

    def method_A_1(self):
        ...

    def method_A_2(self):
        ...

class B(Base):
    def __init__(self, *args, **kwargs):
        # pop consumed named parameters from "kwargs"
        ...
        super().__init__(*args, **kwargs)
        # This call ensures all __init__ in the bases are called,
        # because Python linearizes the base classes on multiple inheritance

    def method_B_1(self):
        ...

    ...

class C(A, B):
    pass
(The "ABC" and "abstractmethod" are a bit of sugar - they will work, but this design would work without any of that - thought their presence help whoever is looking at your code to figure out what is going on, and will raise an earlier runtime error if you per mistake create an instance of one of the incomplete base classes)
Composite
This works, but if your methods are actually for wildly different domains, then instead of multiple inheritance, you should try using the "composite" design pattern. There is no need for multiple inheritance if it does not arise naturally.
In this case, you instantiate objects of the classes that drive the different domains in the __init__ of the shell class, and pass the shell's own instance to those children, which will keep a reference to it (in a self.parent attribute, for example). Chances are your IDE still won't know what you are talking about, but you will have a saner design.
class Parent:
    def __init__(self):
        self.a_domain = A(self)
        self.b_domain = B(self)

class A:
    def __init__(self, parent):
        self.parent = parent
        # no need to call any "super...init", this is called
        # as part of the initialization of the parent class

    def method_A_1(self):
        ...

    def method_A_2(self):
        ...

class B:
    def __init__(self, parent):
        self.parent = parent

    def method_B_1(self):
        # need result from 'A' domain:
        a_value = self.parent.a_domain.method_A_1()
        ...
This example uses the basics of the language features, but if you decide to go for it in a complex application, you can sophisticate it - there are interface patterns that could allow you to swap the classes used for the different domains in specialized subclasses, and so on (a small sketch of that idea follows below). But typically the pattern above is what you would need.
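For instance, a minimal sketch of that swapping idea, assuming the A and B classes from the composite example above (the *_domain_class hook names are my own invention, not part of the answer):

class Parent:
    # Subclasses override these attributes to swap in other domain classes.
    a_domain_class = A
    b_domain_class = B

    def __init__(self):
        self.a_domain = self.a_domain_class(self)
        self.b_domain = self.b_domain_class(self)

class FancyA(A):
    def method_A_1(self):
        # specialized behaviour for this variant of the 'A' domain
        ...

class FancyParent(Parent):
    a_domain_class = FancyA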
According to the Python docs, super() "is useful for accessing inherited methods that have been overridden in a class."
I understand that super refers to the parent class and it lets you access parent methods. My question is why do people always use super inside the init method of the child class? I have seen it everywhere. For example:
class Person:
    def __init__(self, name):
        self.name = name

class Employee(Person):
    def __init__(self, **kwargs):
        super().__init__(name=kwargs['name'])  # Here super is being used

    def first_letter(self):
        return self.name[0]

e = Employee(name="John")
print(e.first_letter())
I can accomplish the same without super and without even an init method:
class Person:
    def __init__(self, name):
        self.name = name

class Employee(Person):
    def first_letter(self):
        return self.name[0]

e = Employee(name="John")
print(e.first_letter())
Are there drawbacks with the latter code? It looks so much cleaner to me. I don't even have to use the boilerplate **kwargs and kwargs['argument'] syntax.
I am using Python 3.8.
Edit: Here's another Stack Overflow question which has code from different people who are using super in the child's init method. I don't understand why. My best guess is there's something new in Python 3.8.
The child might want to do something different from, or more likely in addition to, what the superclass does - in this case the child must have an __init__.
Calling super's __init__ means that you don't have to copy/paste (with all the implications for maintenance) that init in the child class, which would otherwise be needed if you wanted some additional code in the child's init.
But note there are complications around using super's init if you use multiple inheritance (e.g. which super gets called), and this needs care. Personally I avoid multiple inheritance and keep inheritance to a minimum anyway - it's easy to get tempted into creating multiple levels of inheritance/class hierarchy, but my experience is that a 'keep it simple' approach is usually much better.
The potential drawback to the latter code is that there is no __init__ method within the Employee class. Since there is none, the __init__ method of the parent class is called. However, as soon as an __init__ method is added to the Employee class (maybe there's some Employee-specific attribute that needs to be initialized, like an id_number), then the __init__ method of the parent class is overridden and not called (unless super().__init__() is called), and then an Employee will not have a name attribute.
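To illustrate that failure mode (a sketch; the id_number attribute is just the hypothetical example mentioned above):

class Person:
    def __init__(self, name):
        self.name = name

class Employee(Person):
    def __init__(self, id_number):
        # Forgot to call super().__init__(name=...),
        # so Person never gets a chance to set self.name.
        self.id_number = id_number

e = Employee(id_number=42)
e.name  # AttributeError: 'Employee' object has no attribute 'name'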
The correct way to use super here is for both methods to use super. You cannot assume that Person is the last (or at least, next-to-last, before object) class in the MRO.
class Person:
    def __init__(self, name, **kwargs):
        super().__init__(**kwargs)
        self.name = name

class Employee(Person):
    # Optional, since Employee.__init__ does nothing
    # except pass the exact same arguments "upstream"
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def first_letter(self):
        return self.name[0]
Consider a class definition like
class Bar:
    ...

class Foo(Person, Bar):
    ...
The MRO for Foo looks like [Foo, Person, Bar, object]; the call to super().__init__ inside Person.__init__ would call Bar.__init__, not object.__init__, and Person has no way of knowing if values in **kwargs are meant for Bar, so it must pass them on.
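To make that concrete, a runnable sketch of the cooperative pattern (the employee_id parameter on Bar is my own illustration):

class Person:
    def __init__(self, name, **kwargs):
        super().__init__(**kwargs)  # pass leftover kwargs along the MRO
        self.name = name

class Bar:
    def __init__(self, employee_id, **kwargs):
        super().__init__(**kwargs)
        self.employee_id = employee_id

class Foo(Person, Bar):
    pass

f = Foo(name="John", employee_id=7)
print(f.name, f.employee_id)  # John 7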
I have a module (db.py) which loads data from different database types (sqlite, mysql, etc.). The module contains a class db_loader and subclasses (sqlite_loader, mysql_loader) which inherit from it.
The type of database being used is in a separate params file.
How does the user get the right object back?
i.e how do I do:
loader = db.loader()
Do I use a method called loader in the db.py module or is there a more elegant way whereby a class can pick its own subclass based on a parameter? Is there a standard way to do this kind of thing?
Sounds like you want the Factory pattern. You define a factory method (either in your module, or perhaps in a common parent class for all the objects it can produce) that you pass the parameter to, and it will return an instance of the correct class. In Python the problem is a bit simpler than some of the details in the Wikipedia article, as your types are dynamic.
class Animal(object):
    @staticmethod
    def get_animal_which_makes_noise(noise):
        if noise == 'meow':
            return Cat()
        elif noise == 'woof':
            return Dog()

class Cat(Animal):
    ...

class Dog(Animal):
    ...
is there a more elegant way whereby a class can pick its own subclass based on a parameter?
You can do this by overriding your base class's __new__ method. This will allow you to simply go loader = db_loader(db_type) and loader will magically be the correct subclass for the database type. This solution is mildly more complicated than the other answers, but IMHO it is surely the most elegant.
In its simplest form:
class Parent:
    def __new__(cls, feature):
        subclass_map = {subclass.feature: subclass for subclass in cls.__subclasses__()}
        subclass = subclass_map[feature]
        instance = super(Parent, subclass).__new__(subclass)
        return instance

class Child1(Parent):
    feature = 1

class Child2(Parent):
    feature = 2

type(Parent(1))  # <class '__main__.Child1'>
type(Parent(2))  # <class '__main__.Child2'>
(Note that as long as __new__ returns an instance of cls, the instance's __init__ method will automatically be called for you.)
This simple version has issues though and would need to be expanded upon and tailored to fit your desired behaviour. Most notably, this is something you'd probably want to address:
Parent(3) # KeyError
Child1(1) # KeyError
So I'd recommend either adding cls to subclass_map or using it as the default, like so: subclass_map.get(feature, cls). If your base class isn't meant to be instantiated -- maybe it even has abstract methods? -- then I'd recommend giving Parent the metaclass abc.ABCMeta.
If you have grandchild classes too, then I'd recommend putting the gathering of subclasses into a recursive class method that follows each lineage to the end, adding all descendants (a sketch of this follows below).
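One possible shape for that recursive gathering, combined with the .get() fallback suggested above (my own sketch, assuming every subclass defines a feature attribute):

class Parent:
    @classmethod
    def _descendants(cls):
        # Recursively yield all subclasses, grandchildren included.
        for subclass in cls.__subclasses__():
            yield subclass
            yield from subclass._descendants()

    def __new__(cls, feature):
        subclass_map = {subclass.feature: subclass for subclass in cls._descendants()}
        # Fall back to cls itself, so Child1(1) no longer raises a KeyError.
        subclass = subclass_map.get(feature, cls)
        return super(Parent, subclass).__new__(subclass)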
This solution is more beautiful than the factory method pattern IMHO. And unlike some of the other answers, it's self-maintaining because the list of subclasses is created dynamically, instead of being kept in a hardcoded mapping. And this will only instantiate subclasses, unlike one of the other answers, which would instantiate anything in the global namespace matching the given parameter.
I'd store the name of the subclass in the params file, and have a factory method that would instantiate the class given its name:
class loader(object):
    @staticmethod
    def get_loader(name):
        return globals()[name]()

class sqlite_loader(loader):
    pass

class mysql_loader(loader):
    pass

print(type(loader.get_loader('sqlite_loader')))
print(type(loader.get_loader('mysql_loader')))
Store the classes in a dict, instantiate the correct one based on your param:
db_loaders = dict(sqlite=sqlite_loader, mysql=mysql_loader)
loader = db_loaders.get(db_type, default_loader)()
where db_type is the parameter you are switching on, and sqlite_loader and mysql_loader are the "loader" classes.