Suppose I have a set of (possibly abstract) base classes which cooperate in a certain way, and I want to subclass them in such a way that the subclasses are aware of their respective cooperating subclasses (e.g. each has the other classes as class attributes).
Literally adding attributes seems really messy for more than a handful of classes.
One way I can think of doing this is to use class properties on the abstract classes which reference a shared dictionary class attribute (the same dictionary for all classes), via a mixin to avoid repeating code in the superclass module. This way, I only need to add one attribute for each subclass (and add a dictionary referencing all the classes in the module); see the code below.
Is there an established design pattern to achieve this sort of thing?
Example:
abstract_module:
from abc import ABC
_module_classes_dict = {}
class _ClassesDictMixin:
_classes_dict = dict()
@classmethod
@property
def _a_class(cls):
return cls._classes_dict['a']
@classmethod
@property
def _b_class(cls):
return cls._classes_dict['b']
@classmethod
@property
def _c_class(cls):
return cls._classes_dict['c']
class AbstractA(ABC):
pass
class AbstractB(_ClassesDictMixin, ABC):
_classes_dict = _module_classes_dict
# # Basic solution without using the dict
# _a_class = AbstractA
class AbstractC(_ClassesDictMixin, ABC):
_classes_dict = _module_classes_dict
# # Basic solution without using the dict
# _a_class = AbstractA
# _b_class = AbstractB
class AbstractD(_ClassesDictMixin, ABC):
_classes_dict = _module_classes_dict
# # Alternative solution without using the dict
# _a_class = AbstractA
# _b_class = AbstractB
# _c_class = AbstractC
_module_classes_dict.update(a=AbstractA, b=AbstractB, c=AbstractC, d=AbstractD)
concrete_module:
from abstract_module import AbstractA, AbstractB, AbstractC, AbstractD
_module_classes_dict = {}
class ConcreteA(AbstractA):
pass
class ConcreteB(AbstractB):
_classes_dict = _module_classes_dict
# # Basic solution without using the dict
# _a_class = ConcreteA
class ConcreteC(AbstractC):
_classes_dict = _module_classes_dict
# # Basic solution without using the dict
# _a_class = ConcreteA
# _b_class = ConcreteB
class ConcreteD(AbstractD):
_classes_dict = _module_classes_dict
# # Basic solution without using the dict
# _a_class = ConcreteA
# _b_class = ConcreteB
# _c_class = ConcreteC
_module_classes_dict.update(a=ConcreteA, b=ConcreteB, c=ConcreteC, d=ConcreteD)
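As a sanity check, here is a condensed, standalone sketch of the shared-dict idea. It swaps the stacked @classmethod/@property for a plain classmethod, since classmethod-property stacking only works on Python 3.9–3.12; everything else follows the structure above (the class names here are illustrative).

```python
_module_classes_dict = {}

class _ClassesDictMixin(object):
    _classes_dict = _module_classes_dict

    @classmethod
    def _a_class(cls):
        # resolve the collaborator lazily, after the dict has been filled in
        return cls._classes_dict['a']

class A(_ClassesDictMixin):
    pass

class B(_ClassesDictMixin):
    pass

# register everything once all classes in the module exist
_module_classes_dict.update(a=A, b=B)

assert B._a_class() is A
```

Because the lookup happens at call time rather than at class-definition time, the registration order at the bottom of the module is the only place that needs to know about every class.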
The issue is maybe not where you think it is.
Literally adding attributes seems really messy for more than a handful of classes.
I would be concerned if one of my classes was dependent on "more than a handful of classes". This is the issue, in my mind, you should try to solve.
Moreover, the mixin solution has a major drawback: ConcreteB knows about ConcreteC and ConcreteD, whereas it should only know about ConcreteA. The dependencies between the classes are blurred. Hard-coding the dependencies, on the contrary, is cleaner because the relationships between classes are explicit.
Hence this seems better than the mixin:
class ConcreteB(AbstractB):
_a_class = ConcreteA
class ConcreteC(AbstractC):
_a_class = ConcreteA
_b_class = ConcreteB
But sometimes hard coding the relations between ConcreteB and ConcreteA is not the best option. What if you want to use ConcreteA2 instead of ConcreteA?
class ConcreteA(AbstractA):
pass
class ConcreteA2(AbstractA):
pass
To make the code more versatile, you can use (as you wrote in a comment) the parameters of __init__:
class ConcreteB(AbstractB):
def __init__(self, a_class):
self._a_class = a_class
class ConcreteC(AbstractC):
def __init__(self, a_class, b_class):
self._a_class = a_class
self._b_class = b_class
But now, you might have an inconsistent set of classes:
b = ConcreteB(ConcreteA)
c = ConcreteC(ConcreteA2, ConcreteB)
This could happen if the codebase grows and the initialization of objects is dispatched across various modules. To avoid this situation, you may use a variant of the Factory Pattern:
class Factory:
    def __init__(self, a_class, b_class, c_class):
        self._a_class = a_class
        self._b_class = b_class
        self._c_class = c_class
    def concreteA(self):
        return self._a_class()
    def concreteB(self):
        return self._b_class(self._a_class)
    def concreteC(self):
        return self._c_class(self._a_class, self._b_class)
Now, you are sure that B and C share the same a_class.
This design helps you to ensure that the dependencies are explicit and consistent.
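To make that guarantee concrete, here is a usage sketch of the factory above (ConcreteA/ConcreteB/ConcreteC are minimal stand-ins, not the question's real classes): one Factory instance becomes the single source of truth for which classes cooperate.

```python
class ConcreteA:
    pass

class ConcreteB:
    def __init__(self, a_class):
        self._a_class = a_class

class ConcreteC:
    def __init__(self, a_class, b_class):
        self._a_class = a_class
        self._b_class = b_class

class Factory:
    def __init__(self, a_class, b_class, c_class):
        self._a_class = a_class
        self._b_class = b_class
        self._c_class = c_class

    def concreteA(self):
        return self._a_class()

    def concreteB(self):
        return self._b_class(self._a_class)

    def concreteC(self):
        return self._c_class(self._a_class, self._b_class)

factory = Factory(ConcreteA, ConcreteB, ConcreteC)
b = factory.concreteB()
c = factory.concreteC()
# Every object built through the same factory agrees on a_class:
assert b._a_class is c._a_class is ConcreteA
```

If you ever need the ConcreteA2 variant, you build a second factory with a different class set; objects from the same factory can never end up with mismatched collaborators.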
Related
I have a Python library which will be used by other people:
class BaseClassA:
    pass

class BaseClassB:
    def func0(self):
        self.class_a_obj = BaseClassA()
BaseClassB creates a BaseClassA object and stores a pointer. This is an issue because I want to allow the user to extend my library classes:
class ExtendClassA(BaseClassA):
And my library should choose the extended class (ExtendClassA) instead of the base class (BaseClassA) in the func0 method.
Above is a very simple example my problem statement. In reality I have 10ish classes where extending/creation happens. I want to avoid the user having to rewrite func0 in an extended BaseClassB to support the new ExtendClassA class they created.
I'm reaching out to the stack overflow community to see what solutions other people have implemented for issues like this. My initial thought is to have a global dict which 'registers' class types/constructors and classes would get the class constructors from the global dict. When a user wants to extend a class they would replace the class in the dict with the new class.
Library code:
lib_class_dict = {}

class BaseClassA:
    pass

class BaseClassB:
    def func0(self):
        self.class_a_obj = lib_class_dict['ClassA']()

lib_class_dict['ClassA'] = BaseClassA
lib_class_dict['ClassB'] = BaseClassB
User code:
class ExtendClassA(BaseClassA):
    pass

lib_class_dict['ClassA'] = ExtendClassA
EDIT: Adding more details regarding the complexities I'm dealing with.
I have scenarios where method calls are buried deep within the library, which makes it hard to pass a class from the user entry point -> function:
(user would call BaseClassB.func0() in below example)
class BaseClassA:
    pass

class BaseClassB:
    def func0(self):
        self.class_c_obj = BaseClassC()

class BaseClassC:
    def __init__(self):
        self.class_d_obj = BaseClassD()

class BaseClassD:
    def __init__(self):
        self.class_a_obj = BaseClassA()
Multiple classes can create one type of object:
class BaseClassA:
    pass

class BaseClassB:
    def func0(self):
        self.class_a_obj = BaseClassA()

class BaseClassC:
    def __init__(self):
        self.class_a_obj = BaseClassA()

class BaseClassD:
    def __init__(self):
        self.class_a_obj = BaseClassA()
For these reasons I'm hoping to have a global or central location all classes can grab the correct class.
Allow them to specify the class to use as an optional parameter to func0
class BaseClassB:
    def func0(self, objclass=BaseClassA):
        self.class_a_obj = objclass()

obj1 = BaseClassB()
obj1.func0()
obj2 = BaseClassB()
obj2.func0(objclass=ExtendClassA)
So, I've tried a PoC that, if I understand correctly, might do the trick. Give it a look.
By the way, whether it works or not, I have a strong feeling this is actually a bad practice in almost all scenarios, as it changes class behavior in an obscure, unexpected way that would be very difficult to debug.
For example, in the PoC below, if you inherit from the same BaseClassA multiple times, only the last subclass defined ends up registered in the class library, which would be a huge pain for the programmer trying to understand what on earth is happening with their code and why.
But of course, there are some use cases where shooting ourselves in the foot is less painful than designing and using a proper architecture :)
So, the first example where we have inheritance (I specified multiple inherited classes, just to show that only the last inherited one would be saved in a library):
#################################
# 1. We define all base classes
class BaseClassA:
def whoami(self):
print(type(self))
def __init_subclass__(cls):
omfg_that_feels_like_a_reeeeally_bad_practise['ClassA'] = cls
print('Class Dict Updated:')
print('New Class A: ' + str(cls))
#################################
# 2. We define a class library
global omfg_that_feels_like_a_reeeeally_bad_practise
omfg_that_feels_like_a_reeeeally_bad_practise = {}
omfg_that_feels_like_a_reeeeally_bad_practise['ClassA'] = BaseClassA
#################################
# 3. We define a first class that refers to our base class (before inheriting from the base class)
class UserClassA:
def __init__(self):
self.class_a_obj = omfg_that_feels_like_a_reeeeally_bad_practise['ClassA']()
#################################
# 4. We inherit from the base class several times
class FirstExtendedClassA(BaseClassA):
pass
class SecondExtendedClassA(BaseClassA):
pass
class SuperExtendedClassA(FirstExtendedClassA):
pass
#################################
# 5. We define a second class that refers to our base class (after inheriting from the base class)
class UserClassB:
def __init__(self):
self.class_a_obj = omfg_that_feels_like_a_reeeeally_bad_practise['ClassA']()
#################################
## 6. Now we instantiate both user classes
insane_class_test = UserClassA()
print(str(insane_class_test.class_a_obj))
### LOOK - A LAST INHERITED CHILD CLASS OBJECT IS USED!
# <__main__.SuperExtendedClassA object at 0x00000DEADBEEF>
insane_class_test = UserClassB()
print(str(insane_class_test.class_a_obj))
### LOOK - A LAST INHERITED CHILD CLASS OBJECT IS USED!
# <__main__.SuperExtendedClassA object at 0x00000DEADBEEF>
And if we remove inheritance, the base class will be used:
#################################
# 1. We define all base classes
class BaseClassA:
def whoami(self):
print(type(self))
def __init_subclass__(cls):
omfg_that_feels_like_a_reeeeally_bad_practise['ClassA'] = cls
print('Class Dict Updated:')
print('New Class A: ' + str(cls))
#################################
# 2. We define a class library
global omfg_that_feels_like_a_reeeeally_bad_practise
omfg_that_feels_like_a_reeeeally_bad_practise = {}
omfg_that_feels_like_a_reeeeally_bad_practise['ClassA'] = BaseClassA
#################################
# 3. We define a first class that refers to our base class
class UserClassA:
def __init__(self):
self.class_a_obj = omfg_that_feels_like_a_reeeeally_bad_practise['ClassA']()
#################################
# 5. We define a second class that refers to our base class
class UserClassB:
def __init__(self):
self.class_a_obj = omfg_that_feels_like_a_reeeeally_bad_practise['ClassA']()
#################################
## 6. Now we instantiate both user classes
insane_class_test = UserClassA()
print(str(insane_class_test.class_a_obj))
### LOOK - A DEFAULT CLASS OBJECT IS USED!
# <__main__.BaseClassA object at 0x00000DEADBEEF>
insane_class_test = UserClassB()
print(str(insane_class_test.class_a_obj))
### LOOK - A DEFAULT CLASS OBJECT IS USED!
# <__main__.BaseClassA object at 0x00000DEADBEEF>
class Tesla_car:
def __init__(self,yourname):
self.name = yourname
print("Hey'%s',I am a bot and I will tell you about....." %self.name)
self.cells = self.batteries()
def material(self,model_no):
self.model = model_no
print("your car",self.model," made from aluminium")
def color(self,color):
self.color = color
print("the color of your car is:'%s'" %self.color)
class batteries:
def __init__(self):
pass
def materials(self):
self.battery_name = "Tesla tabless 4680 cells"
self.chemicals = "Tesla uses Lithium-Nickle-cobalt-magnesium(NMC) mixed in 8:1:1 ratio"
EV_car = Tesla_car('Blah')
EV_car()
Hey everyone, I am trying to use nested classes, but whenever I try to use the inner class by writing self.cells = self.batteries(), it raises an error: "'Tesla_car' object has no attribute 'batteries'".
How do I fix it?
It seems that you're trying to compose objects, but in the wrong way.
Actually, your classes reflect a perfect real-life scenario for composition: cars are equipped (composed) with a set of different objects, batteries included.
When using composition, you'd typically define TeslaCar and Batteries as separate classes, and then assign an instance of Batteries to one of TeslaCar's instance variables. E.g.:
class Batteries:
def __init__(self):
...
class TeslaCar:
def __init__(self):
self.batteries = Batteries()
...
The above code is just a simple skeleton of how composition is implemented, but you can adapt it to your case very easily.
Finally, FYI: avoid nesting classes altogether. It's unpythonic, and you'll find it unnecessary as soon as you dive into simple OOP patterns like composition and inheritance.
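A runnable sketch adapting that skeleton to the question's Tesla example (the attribute names and the battery string are taken from the question; the rest is illustrative):

```python
class Batteries:
    def __init__(self):
        # details the car can query through composition
        self.battery_name = "Tesla tabless 4680 cells"

class TeslaCar:
    def __init__(self, yourname):
        self.name = yourname
        self.cells = Batteries()   # composition: the car *has* batteries

ev_car = TeslaCar('Blah')
print(ev_car.cells.battery_name)   # reach the batteries through the car
```

Because Batteries is a top-level class, `self.cells = Batteries()` works from anywhere, and the "no attribute 'batteries'" error cannot occur.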
Change
self.batteries()
to
Tesla_car.batteries()
Your batteries inner class is wrongly indented: it currently sits inside the color method instead of at the same level as the method.
class TeslaCar:
    def color(...):
        ...
        class Batteries:
            ...
instead, do:
class TeslaCar:
    def color(...):
        ...

    class Batteries:
        ...
I know the title is probably a bit confusing, so let me give you an example. Suppose you have a base class Base which is intended to be subclassed to create more complex objects. But you also have optional functionality that you don't need for every subclass, so you put it in a secondary class OptionalStuffA that is always intended to be subclassed together with the base class. Should you also make that secondary class a subclass of Base?
This is of course only relevant if you have more than one OptionalStuff class and you want to combine them in different ways, because otherwise you don't need to subclass both Base and OptionalStuffA (and just have OptionalStuffA be a subclass of Base so you only need to subclass OptionalStuffA). I understand that it shouldn't make a difference for the MRO if Base is inherited from more than once, but I'm not sure if there are any drawbacks to making all the secondary classes inherit from Base.
Below is an example scenario. I've also thrown in the QObject class as a 'third party' token class whose functionality is necessary for one of the secondary classes to work. Where do I subclass it? The example below shows how I've done it so far, but I doubt this is the way to go.
from PyQt5.QtCore import QObject
class Base:
def __init__(self):
self._basic_stuff = None
def reset(self):
self._basic_stuff = None
class OptionalStuffA:
def __init__(self):
super().__init__()
self._optional_stuff_a = None
def reset(self):
if hasattr(super(), 'reset'):
super().reset()
self._optional_stuff_a = None
def do_stuff_that_only_works_if_my_children_also_inherited_from_Base(self):
self._basic_stuff = not None
class OptionalStuffB:
def __init__(self):
super().__init__()
self._optional_stuff_b = None
def reset(self):
if hasattr(super(), 'reset'):
super().reset()
self._optional_stuff_b = None
def do_stuff_that_only_works_if_my_children_also_inherited_from_QObject(self):
print(self.objectName())
class ClassThatIsActuallyUsed(Base, OptionalStuffA, OptionalStuffB, QObject):
def __init__(self):
super().__init__()
self._unique_stuff = None
def reset(self):
if hasattr(super(), 'reset'):
super().reset()
self._unique_stuff = None
What I gather from your problem is that you want different functions and properties based on different conditions; that sounds like a good reason to use a metaclass.
It all depends on how complex each of your classes is and what you are building; if it is for a library or an API, then a metaclass can work magic if used correctly.
A metaclass is perfect for adding functions and properties to a class based on some condition: you add all your subclass functions into one metaclass and set that metaclass on your main class.
From Where to start
you can read about MetaClass here, or you can watch it here.
After you have better understanding about MetaClass see the source code of Django ModelForm from here and here, but before that take a brief look on how the Django Form works from outside this will give You an idea on how to implement it.
This is how I would implement it.
# You can also inherit from another metaclass, but type has to be at the top
# of the inheritance chain
class meta_class(type):
    # create the class based on conditions
    """
    mcs: the metaclass itself (plays the role self plays in an instance method).
    name: name of the new class (ClassThatIsActuallyUsed).
    bases: bases of the new class (Base,).
    attrs: attributes of the new class (Meta, ...).
    """
    def __new__(mcs, name, bases, attrs):
        meta = attrs.get('Meta')
        if getattr(meta, 'optionA', False):
            attrs['reset'] = resetA
        if getattr(meta, 'optionB', False):
            attrs['reset'] = resetB
        if getattr(meta, 'optionC', False):
            attrs['reset'] = resetC
        if any(base.__name__ == 'QObject' for base in bases):
            attrs['do_stuff_that_only_works_if_my_children_also_inherited_from_QObject'] = functionA
        return type.__new__(mcs, name, bases, attrs)
class Base(metaclass=meta_class): #you can also pass kwargs to metaclass here
#define some common functions here
class Meta:
# Set default values here for the class
optionA = False
optionB = False
optionC = False
class ClassThatIsActuallyUsed(Base):
class Meta:
optionA = True
# optionB is False by default
optionC = True
EDIT: Elaborated on how to implement MetaClass.
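A self-contained, runnable reduction of the idea above. The resetA/resetC helpers are stand-ins I made up (the original assumes they exist elsewhere), and getattr with a default guards against a subclass Meta that omits an option:

```python
def resetA(self):
    return 'reset A'

def resetC(self):
    return 'reset C'

class meta_class(type):
    def __new__(mcs, name, bases, attrs):
        meta = attrs.get('Meta')
        # pick the reset implementation based on the Meta options
        if getattr(meta, 'optionA', False):
            attrs['reset'] = resetA
        if getattr(meta, 'optionC', False):
            attrs['reset'] = resetC
        return type.__new__(mcs, name, bases, attrs)

class Base(metaclass=meta_class):
    class Meta:
        optionA = False
        optionC = False

class ClassThatIsActuallyUsed(Base):
    class Meta:
        optionA = True
        optionC = True

obj = ClassThatIsActuallyUsed()
```

Note that when both options are set, optionC wins because it is checked last; in a real design you would probably want the options to be mutually exclusive or to compose.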
Let me start with another alternative. In the example below the Base.foo method is a plain identity function, but options can override that.
class Base:
def foo(self, x):
return x
class OptionDouble:
def foo(self, x):
x *= 2 # preprocess example
return super().foo(x)
class OptionHex:
def foo(self, x):
result = super().foo(x)
return hex(result) # postprocess example
class Combined(OptionDouble, OptionHex, Base):
pass
b = Base()
print(b.foo(10)) # 10
c = Combined()
print(c.foo(10)) # 2x10 = 20, as hex string: "0x14"
The key is that in the definition of the Combined's bases are Options specified before the Base:
class Combined(OptionDouble, OptionHex, Base):
Read the class names left to right: in this simple case, that is the order in which the foo() implementations are called.
This is called the method resolution order (MRO).
It also defines what exactly super() means in a particular class, and that is important because the Options are written as wrappers around the super() implementation.
If you do it the other way around, it won't work:
class Combined(Base, OptionDouble, OptionHex):
pass
c = Combined()
print(Combined.__mro__)
print(c.foo(10)) # 10, options not effective!
In this case the Base implementation is called first and it directly returns the result.
You could take care of the correct base order manually or you could write a function that checks it. It walks through the MRO list and once it sees the Base it will not allow an Option after it.
class Base:
def __init_subclass__(cls, *args, **kwargs):
super().__init_subclass__(*args, **kwargs)
base_seen = False
for mr in cls.__mro__:
if base_seen:
if issubclass(mr, Option):
raise TypeError( f"The order of {cls.__name__} base classes is incorrect")
elif mr is Base:
base_seen = True
def foo(self, x):
return x
class Option:
pass
class OptionDouble(Option):
...
class OptionHex(Option):
...
Now to answer your comment. I wrote that @wettler's approach could be simplified. I meant something like this:
class Base:
def __init_subclass__(cls, *args, **kwargs):
super().__init_subclass__(*args, **kwargs)
print("options for the class", cls.__name__)
print('A', cls.optionA)
print('B', cls.optionB)
print('C', cls.optionC)
# ... modify the class according to the options ...
bases = cls.__bases__
# ... check if QObject is present in bases ...
# defaults
optionA = False
optionB = False
optionC = False
class ClassThatIsActuallyUsed(Base):
optionA = True
optionC = True
This demo will print:
options for the class ClassThatIsActuallyUsed
A True
B False
C True
I'm trying to create a set of classes where each class has a corresponding "array" version of the class. However, I need both classes to be aware of each other. Here is a working example to demonstrate what I'm trying to do. But this requires duplicating a "to_array" in each class. In my actual example, there are other more complicated methods that would need to be duplicated even though the only difference is "BaseArray", "PointArray", or "LineArray". The BaseArray class would similarly have methods that only differ by "BaseObj", "PointObj", or "LineObj".
# ------------------
# Base object types
# ------------------
class BaseObj(object):
def __init__(self, obj):
self.obj = obj
def to_array(self):
return BaseArray([self])
class Point(BaseObj):
def to_array(self):
return PointArray([self])
class Line(BaseObj):
def to_array(self):
return LineArray([self])
# ------------------
# Array object types
# ------------------
class BaseArray(object):
def __init__(self, items):
self.items = [BaseObj(i) for i in items]
class PointArray(BaseArray):
def __init__(self, items):
self.items = [Point(i) for i in items]
class LineArray(BaseArray):
def __init__(self, items):
self.items = [Line(i) for i in items]
# ------------------
# Testing....
# ------------------
p = Point([1])
print(p)
pa = p.to_array()
print(pa)
print(pa.items)
Here is my attempt, which understandably raises an error. I know why I get a NameError and thus I understand why this doesn't work. I'm showing this to make clear what I'd like to do.
# ------------------
# Base object types
# ------------------
class BaseObj(object):
ArrayClass = BaseArray
def __init__(self, obj):
self.obj = obj
def to_array(self):
# By using the "ArrayClass" class attribute here, I can have a single
# "to_array" function on this base class without needing to
# re-implement this function on each subclass
return self.ArrayClass([self])
# In the actual application, there would be other BaseObj methods that
# would use self.ArrayClass to avoid code duplication
class Point(BaseObj):
ArrayClass = PointArray
class Line(BaseObj):
ArrayClass = LineArray
# ------------------
# Array object types
# ------------------
class BaseArray(object):
BaseType = BaseObj
def __init__(self, items):
self.items = [self.BaseType(i) for i in items]
# In the actual application, there would be other BaseArray methods that
# would use self.BaseType to avoid code duplication
class PointArray(BaseArray):
BaseType = Point
class LineArray(BaseArray):
BaseType = Line
# ------------------
# Testing....
# ------------------
p = Point([1])
print(p)
pa = p.to_array()
print(pa)
print(pa.items)
One potential solution would be to just define "ArrayClass" as None for all of the classes, and then after the "array" versions are defined you could monkey patch the original classes like this:
BaseObj.ArrayClass = BaseArray
Point.ArrayClass = PointArray
Line.ArrayClass = LineArray
This works, but it feels a bit unnatural and I suspect there is a better way to achieve this. In case it matters, my use case will ultimate be a plugin to a program that (sadly) still uses Python 2.7, so I need a solution that uses Python 2.7. Ideally the same solution can work in 2.7 and 3+ though.
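For completeness, here is a minimal runnable check (condensed from the code in the question) that the attribute-assignment "monkey patch" approach really does resolve the forward references; it uses only Python 2.7-compatible constructs:

```python
class BaseObj(object):
    ArrayClass = None  # filled in after the array classes exist

    def __init__(self, obj):
        self.obj = obj

    def to_array(self):
        return self.ArrayClass([self])

class Point(BaseObj):
    pass

class BaseArray(object):
    BaseType = None  # filled in by the patch below

    def __init__(self, items):
        self.items = [self.BaseType(i) for i in items]

class PointArray(BaseArray):
    BaseType = Point

# patch the references once both halves of each pair exist
BaseObj.ArrayClass = BaseArray
Point.ArrayClass = PointArray

pa = Point([1]).to_array()
assert isinstance(pa, PointArray)
assert all(isinstance(i, Point) for i in pa.items)
```

The only real cost is that the pairing lives at module bottom rather than on the classes themselves, which is exactly what the decorator approach below addresses.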
Here is a solution using decorators. I prefer this to the class attribute assignment ("monkey patch" as I called it) since it keeps things a little more self consistent and clear. I'm happy enough with this, but still interested in other ideas...
# ------------------
# Base object types
# ------------------
class BaseObj(object):
ArrayClass = None
def __init__(self, obj):
self.obj = obj
def to_array(self):
# By using the "ArrayClass" class attribute here, I can have a single
# "to_array" function on this base class without needing to
# re-implement this function on each subclass
return self.ArrayClass([self])
# In the actual application, there would be other BaseObj methods that
# would use self.ArrayClass to avoid code duplication
@classmethod
def register_array(cls):
def decorator(subclass):
cls.ArrayClass = subclass
subclass.BaseType = cls
return subclass
return decorator
class Point(BaseObj):
pass
class Line(BaseObj):
pass
# ------------------
# Array object types
# ------------------
class BaseArray(object):
BaseType = None
def __init__(self, items):
self.items = [self.BaseType(i) for i in items]
# In the actual application, there would be other BaseArray methods that
# would use self.BaseType to avoid code duplication
@Point.register_array()
class PointArray(BaseArray):
pass
@Line.register_array()
class LineArray(BaseArray):
pass
# ------------------
# Testing....
# ------------------
p = Point([1])
print(p)
pa = p.to_array()
print(pa)
print(pa.items)
I have a class with hundreds of methods, and I want to create a hierarchy for them that makes it easy to find a method.
For example
class MyClass:
def SpectrumFrequencyStart()...
def SpectrumFrequencyStop()...
def SpectrumFrequencyCenter()...
def SignalAmplitudedBm()...
That I want to call using:
MyClassObject.Spectrum.Frequency.Start()
MyClassObject.Spectrum.Frequency.Stop()
MyClassObject.Signal.Amplitude.dBm()
Consider using a dictionary to map your methods to keys (either hierarchical dictionaries, or simply '.'-separated keys).
Another option, which may be more elegant, is namedtuples. Something like:
from collections import namedtuple
MyClassObject = namedtuple('MyClassObject', ['Spectrum', 'Signal'])
MyClassObject.Spectrum = namedtuple('Spectrum', ['Frequency'])
MyClassObject.Spectrum.Frequency = namedtuple('Frequency', ['Start', 'Stop'])
MyClassObject.Spectrum.Frequency.Start = MyClass.SpectrumFrequencyStart
You can automate this by using introspection and parsing the method names (by camel case, say) to build the namedtuples automatically.
Pay attention to the binding of the methods.
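A hedged sketch of that automation using types.SimpleNamespace instead of namedtuples (simpler, and it sidesteps the binding issue by capturing bound methods of an instance; the camel-case splitting and the add_hierarchy helper are my own illustration, not an established API):

```python
import re
from types import SimpleNamespace

class MyClass:
    def SpectrumFrequencyStart(self):
        return 'start'

    def SpectrumFrequencyStop(self):
        return 'stop'

def add_hierarchy(obj):
    # Split each CamelCase method name into parts and hang the bound methods
    # off nested SimpleNamespace objects: obj.Spectrum.Frequency.Start()
    for name in dir(obj):
        if name.startswith('_'):
            continue
        method = getattr(obj, name)
        if not callable(method):
            continue
        parts = re.findall(r'[A-Z][a-z0-9]*', name)
        node = obj
        for part in parts[:-1]:
            if not hasattr(node, part):
                setattr(node, part, SimpleNamespace())
            node = getattr(node, part)
        setattr(node, parts[-1], method)   # bound method, so self is captured
    return obj

my_obj = add_hierarchy(MyClass())
assert my_obj.Spectrum.Frequency.Start() == 'start'
```

Because the leaves are bound methods of the original instance, calling them through the namespaces behaves exactly like calling them on the object directly.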
This is just a very bad design.
It's clear that Spectrum, Signal, Frequency (and so on) should all be separate classes with far fewer than "hundreds of methods".
I'm not sure whether MyClassObject actually represents something or is effectively just a namespace.
Objects can encapsulate objects of other classes. For example:
class Frequency(object):
def start(self):
pass
def stop(self):
pass
class Spectrum(object):
def __init__(self):
self.frequency = Frequency()
class Amplitude(object):
def dbm(self):
pass
class Signal(object):
def __init__(self):
self.amplitude = Amplitude()
class MyClass(object):
def __init__(self):
self.spectrum = Spectrum()
self.signal = Signal()
my_class_instance = MyClass()
my_class_instance.spectrum.frequency.start()
my_class_instance.spectrum.frequency.stop()
my_class_instance.signal.amplitude.dbm()
There's a code-formatting convention in Python, PEP 8, which I've applied in my example.