Python: Classes that Depend on Each Other

I'm trying to create a set of classes where each class has a corresponding "array" version of the class, and both classes need to be aware of each other. Here is a working example that demonstrates what I'm trying to do, but it requires duplicating a "to_array" method in each class. In my actual application there are other, more complicated methods that would need to be duplicated even though the only difference is "BaseArray", "PointArray", or "LineArray". The BaseArray class would similarly have methods that differ only by "BaseObj", "Point", or "Line".
# ------------------
# Base object types
# ------------------
class BaseObj(object):
    def __init__(self, obj):
        self.obj = obj
    def to_array(self):
        return BaseArray([self])

class Point(BaseObj):
    def to_array(self):
        return PointArray([self])

class Line(BaseObj):
    def to_array(self):
        return LineArray([self])

# ------------------
# Array object types
# ------------------
class BaseArray(object):
    def __init__(self, items):
        self.items = [BaseObj(i) for i in items]

class PointArray(BaseArray):
    def __init__(self, items):
        self.items = [Point(i) for i in items]

class LineArray(BaseArray):
    def __init__(self, items):
        self.items = [Line(i) for i in items]

# ------------------
# Testing....
# ------------------
p = Point([1])
print(p)
pa = p.to_array()
print(pa)
print(pa.items)
Here is my attempt, which understandably raises a NameError. I know why it raises the NameError and thus why this doesn't work; I'm showing it to make clear what I'd like to do.
# ------------------
# Base object types
# ------------------
class BaseObj(object):
    ArrayClass = BaseArray
    def __init__(self, obj):
        self.obj = obj
    def to_array(self):
        # By using the "ArrayClass" class attribute here, I can have a single
        # "to_array" function on this base class without needing to
        # re-implement this function on each subclass
        return self.ArrayClass([self])
    # In the actual application, there would be other BaseObj methods that
    # would use self.ArrayClass to avoid code duplication

class Point(BaseObj):
    ArrayClass = PointArray

class Line(BaseObj):
    ArrayClass = LineArray

# ------------------
# Array object types
# ------------------
class BaseArray(object):
    BaseType = BaseObj
    def __init__(self, items):
        self.items = [self.BaseType(i) for i in items]
    # In the actual application, there would be other BaseArray methods that
    # would use self.BaseType to avoid code duplication

class PointArray(BaseArray):
    BaseType = Point

class LineArray(BaseArray):
    BaseType = Line

# ------------------
# Testing....
# ------------------
p = Point([1])
print(p)
pa = p.to_array()
print(pa)
print(pa.items)
One potential solution would be to define "ArrayClass" as None on all of the classes and then, after the "array" versions are defined, monkey patch the original classes like this:
BaseObj.ArrayClass = BaseArray
Point.ArrayClass = PointArray
Line.ArrayClass = LineArray
This works, but it feels a bit unnatural, and I suspect there is a better way to achieve this. In case it matters, my use case will ultimately be a plugin for a program that (sadly) still uses Python 2.7, so I need a solution that works on Python 2.7. Ideally the same solution would work in both 2.7 and 3+, though.
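One other direction I've sketched out (the _registry dict and _register decorator below are illustrative names of my own, not from the code above): have each class declare only the name of its partner class and resolve it lazily through a shared registry. Class decorators exist since Python 2.6, so this should work in both 2.7 and 3:
# Minimal sketch: name-based lazy lookup instead of monkey patching.
_registry = {}

def _register(cls):
    # Record the class under its own name so it can be looked up later.
    _registry[cls.__name__] = cls
    return cls

class BaseObj(object):
    array_class_name = 'BaseArray'
    def __init__(self, obj):
        self.obj = obj
    def to_array(self):
        # Lazy lookup: by the time to_array() runs, the array classes
        # have been defined and registered.
        return _registry[self.array_class_name]([self])

class Point(BaseObj):
    array_class_name = 'PointArray'

@_register
class PointArray(object):
    def __init__(self, items):
        self.items = [Point(i) for i in items]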

Here is a solution using decorators. I prefer this to the class attribute assignment (the "monkey patch", as I called it) since it keeps things a little more self-consistent and clear. I'm happy enough with this, but I'm still interested in other ideas...
# ------------------
# Base object types
# ------------------
class BaseObj(object):
    ArrayClass = None
    def __init__(self, obj):
        self.obj = obj
    def to_array(self):
        # By using the "ArrayClass" class attribute here, I can have a single
        # "to_array" function on this base class without needing to
        # re-implement this function on each subclass
        return self.ArrayClass([self])
    # In the actual application, there would be other BaseObj methods that
    # would use self.ArrayClass to avoid code duplication
    @classmethod
    def register_array(cls):
        def decorator(subclass):
            cls.ArrayClass = subclass
            subclass.BaseType = cls
            return subclass
        return decorator

class Point(BaseObj):
    pass

class Line(BaseObj):
    pass

# ------------------
# Array object types
# ------------------
class BaseArray(object):
    BaseType = None
    def __init__(self, items):
        self.items = [self.BaseType(i) for i in items]
    # In the actual application, there would be other BaseArray methods that
    # would use self.BaseType to avoid code duplication

@Point.register_array()
class PointArray(BaseArray):
    pass

@Line.register_array()
class LineArray(BaseArray):
    pass

# ------------------
# Testing....
# ------------------
p = Point([1])
print(p)
pa = p.to_array()
print(pa)
print(pa.items)

Related

Monkey patching class functions and properties with an existing instance in Jupyter

When I'm prototyping a new project in Jupyter, I sometimes find that I want to add or remove methods on an existing instance. For example:
class A(object):
    def __init__(self):
        pass  # some time-consuming function
    def keep_this_fxn(self):
        return 'hi'

a = A()

## but now I want to make A -> A_new
class A_new(object):
    def __init__(self, v):
        # some time-consuming function
        self._new_prop = v
    def keep_this_fxn(self):
        return 'hi'
    @property
    def new_prop(self):
        return self._new_prop
    def new_fxn(self):
        return 'hey'
Without having to manually do A.new_fxn = A_new.new_fxn or reinitialize the instance, is it possible to have this change done automatically? Something like:
def update_instance(a, A_new):
    ...  # what's here?

a = update_instance(a, A_new(5))  ## should not be as slow as original initialization!
>>> type(a)  ## keeps the name, preferably!
<A>
>>> a.keep_this_fxn()  ## same as the original
'hi'
>>> a.new_fxn()  ## but with new functions
'hey'
>>> a.new_prop  ## and new properties
5
Related posts don't seem to cover this, especially new properties and new args:
How to update instance of class after class method addition?
Monkey patching class and instance in Python
Here's my current attempt:
def update_class_instance(instance, NewClass, new_method_list):
OrigClass = type(instance).__mro__[0]
for method in new_method_list:
setattr(OrigClass, method, getattr(NewClass, method))
but (a) I still have to specify new_method_list (which I would prefer to be handled automatically if possible), and (b) I have no idea what to do about the new properties and args.
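One possible direction (a sketch of my own, not a verified answer): reassigning the instance's __class__ makes every method and property of the new class available at once, and any attributes the new __init__ would have created can be backfilled afterwards. Note that this deviates from the wish list above in one respect: type(a) then reports A_new, not A, and it takes the attribute values rather than a pre-built A_new instance.
def update_instance(instance, NewClass, **new_attrs):
    # Methods and properties now resolve on NewClass automatically.
    instance.__class__ = NewClass
    # Backfill whatever the new __init__ would have set (here: _new_prop).
    for name, value in new_attrs.items():
        setattr(instance, name, value)
    return instance

a = update_instance(a, A_new, _new_prop=5)
print(a.keep_this_fxn())  # 'hi'
print(a.new_fxn())        # 'hey'
print(a.new_prop)         # 5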

Python: Inner Class

I am trying to create a JSON string from a class, which I defined as follows:
import json
import ast
from datetime import datetime
import pytz
import time

class OuterClass:
    def __init__(self):
        self.Header = None
        self.Body = None
    class Header:
        def __init__(self, ID=None, Name=None):
            self.ID = ID
            self.Name = Name
    class Body:
        def __init__(self, DateTime=None, Display=None):
            self.DateTime = DateTime
            self.Display = Display

def current_time_by_timezone(timezone_input):
    return datetime.now(pytz.timezone(timezone_input))

if __name__ == '__main__':
    response = OuterClass()
    header = response.Header('123', 'Some Name')
    body = response.Body(current_time_by_timezone('US/Central'), 'NOT VALID')
    print(json.dumps(response.__dict__))
I'm getting the error 'TypeError: 'NoneType' object is not callable'. Is it because I'm setting Header and Body to None myself in the OuterClass definition?
The problem with your code is these lines:
self.Header = None
self.Body = None
These create instance variables named Header and Body on every instance of OuterClass, so you can never access the class variables (the nested classes) via an instance, only via OuterClass itself.
It's not very clear what your intention is with this data structure. Defining a class inside another class doesn't do anything special in Python by default (with special effort you could probably create special behavior, like a metaclass that turns the inner classes into descriptors). Generally, though, there's no implied relationship between the classes.
If you want your OuterClass to create instances of the other two classes, you can do that without nesting their definitions. Just put the class definitions at top level and write a method that creates an instance at an appropriate time and does something useful with it (like binding it to an instance variable).
You might want something like:
class Header:
    ...

class Response:
    def __init__(self):
        self.header = None
    def make_header(self, *args):
        self.header = Header(*args)
        return self.header
You could keep the classes nested as long as you don't expect that to mean anything special; just be sure that you don't use the class name as an instance variable name, or you'll shadow the nested class (a capitalization difference, like self.header vs. self.Header, can be enough to avoid that).
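Putting that suggestion together, here is a minimal sketch (class and attribute names are illustrative; the timestamp is a plain string so json.dumps succeeds, since a real datetime would need e.g. default=str or .isoformat()):
import json

class Header:
    def __init__(self, ID=None, Name=None):
        self.ID = ID
        self.Name = Name

class Body:
    def __init__(self, DateTime=None, Display=None):
        self.DateTime = DateTime
        self.Display = Display

class Response:
    def __init__(self):
        self.header = None
        self.body = None

response = Response()
response.header = Header('123', 'Some Name')
response.body = Body('2021-01-01T12:00:00-06:00', 'NOT VALID')
# default=... lets json.dumps descend into the nested objects' __dict__s
print(json.dumps(response.__dict__, default=lambda o: o.__dict__))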

How can you solve this in Python? Automatically supply an argument to a static method when called from a managing class

I have some classes FooA and FooB which are basically a collection of "static" methods. They operate on data; let's say it is a DataItem object:
# Base class with common behavior
class FooBase:
    @classmethod
    def method1(cls, arg, data: DataItem):
        # res = ...
        return res
    @classmethod
    def method2(cls, arg1, arg2, data: DataItem):
        # res = ...  # using method1
        return res

# specialized classes
class FooA(FooBase):
    # define extra methods
    pass

class FooB(FooBase):
    # define extra methods
    pass

# usage 1: as "static methods"
res = FooA.method1(arg, data)
res2 = FooB.method2(arg1, arg2, data)
Now, I'd like to use these classes as attributes of a "managing" class (MyApp) which also has access to a datasource and should implicitly supply DataItems to the static methods of FooA and FooB. Moreover, the datasource supplies a list of DataItem objects.
# usage 2: as part of an "App" class
# here, the "data" argument should be supplied implicitly by MyApp
# also: MyApp contains a list of "data" objects
class MyApp:
    def __init__(self, datasrc):
        self.datasrc = datasrc
    # this could be a generator
    def get_data(self, key) -> List[DataItem]:
        return self.datasrc.get_data(key)
    # FooA, FooB as class / instance level attributes, descriptors, ???

# usage
my_app = MyApp("datasrc")
res_list = my_app.foo_a.method1(arg)  # foo_a is a FooA obj, "data" arg is supplied automatically
# optionally, but not necessarily, call as a static attribute:
res = MyApp.foo_a.method1(arg, data)  # same as FooA.method1(arg, data)
I have tried different things but found no satisfactory solution.
So... I am not sure it can be done in a nice way; I thought about it, and all the approaches have serious drawbacks. One of the problems is that we actually want a method that returns a list or a single item depending on the input parameters, which is bad.
One way could be to store datasrc on FooBase, but that violates the single-responsibility principle (SRP):
class FooBase:
    def __init__(self, datasrc):
        FooBase.datasrc = datasrc
    @classmethod
    def method1(cls, arg, data=None):
        if data is None:
            return [cls.method1(arg, d) for d in cls.datasrc]
        return data
Or use isinstance:
@classmethod
def method1(cls, arg, data):
    if isinstance(data, list):
        return [cls.method1(arg, d) for d in data]
    return data
But it forces us to adjust every method (which could be done with a decorator or a metaclass; a sketch of such a decorator follows).
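For what it's worth, a minimal sketch of that decorator idea (broadcast_data is an illustrative name of mine, not from the post):
import functools

def broadcast_data(method):
    @functools.wraps(method)
    def wrapper(cls, *args, data=None, **kwargs):
        # Broadcast over a list of items, or pass a single item through.
        if isinstance(data, list):
            return [method(cls, *args, data=d, **kwargs) for d in data]
        return method(cls, *args, data=data, **kwargs)
    return classmethod(wrapper)

class FooBase:
    @broadcast_data
    def method1(cls, arg, data=None):
        return data  # placeholder result, as in the snippets above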
Another way could be to use some intermediate layer:
def decorator(datasrc):
    def wrapper(foo):
        def f(*args, **kwargs):
            # We could catch TypeError here to serve the case when data is passed
            return [foo(*args, **kwargs, data=data) for data in datasrc]
        return f
    return wrapper

class FooAdapter:
    def __init__(self, datasrc, foo_cls):
        self.datasrc = datasrc
        methods = [
            getattr(foo_cls, m)
            for m in dir(foo_cls)
            if callable(getattr(foo_cls, m)) and not m.startswith("__")
        ]  # all methods of our Foo class
        for method in methods:
            setattr(self, method.__name__, decorator(datasrc)(method))

class MyApp:
    def __init__(self, datasrc):
        self.datasrc = datasrc
        self.foo_a = FooAdapter(datasrc, FooA)
        self.foo_b = FooAdapter(datasrc, FooB)
But a solution with dynamically added functions breaks IDE support.
The cleanest solution, IMO, could be to have an Enum for the Foo methods and an Enum for the Foo classes; then you could write code in MyApp like:
def get_bulk(self, m: MethodEnum, f: FooEnum, *args):
    return [getattr(enum_to_cls_mapping[f], m)(*args, data=d) for d in self.datasrc]
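A rough, self-contained sketch of that Enum idea (the enum members, the mapping, and the datasource handling are my own illustrative choices, assuming the FooA/FooB classes from the question; note the lookup uses m.value, since getattr needs the method's name as a string):
from enum import Enum

class FooEnum(Enum):
    A = 'FooA'
    B = 'FooB'

class MethodEnum(Enum):
    METHOD1 = 'method1'

enum_to_cls_mapping = {FooEnum.A: FooA, FooEnum.B: FooB}

class MyApp:
    def __init__(self, datasrc):
        self.datasrc = datasrc  # assumed iterable of DataItem objects
    def get_bulk(self, m: MethodEnum, f: FooEnum, *args):
        # Look up the class, then the method by name, and apply it across
        # every item the datasource supplies.
        return [getattr(enum_to_cls_mapping[f], m.value)(*args, data=d)
                for d in self.datasrc]

# usage: res_list = MyApp(datasrc).get_bulk(MethodEnum.METHOD1, FooEnum.A, arg)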

Choosing between base class or extended class - Python

I have a Python library which will be used by other people:
class BaseClassA:
    pass

class BaseClassB:
    def func0(self):
        self.class_a_obj = BaseClassA()
BaseClassB creates a BaseClassA object and stores a pointer. This is an issue because I want to allow the user to extend my library classes:
class ExtendClassA(BaseClassA):
    pass
And my library should choose the extended class (ExtendClassA) instead of the base class (BaseClassA) in the func0 method.
Above is a very simple example of my problem statement. In reality I have 10-ish classes where this extending/creation happens. I want to avoid the user having to rewrite func0 in an extended BaseClassB just to support the new ExtendClassA class they created.
I'm reaching out to the stack overflow community to see what solutions other people have implemented for issues like this. My initial thought is to have a global dict which 'registers' class types/constructors and classes would get the class constructors from the global dict. When a user wants to extend a class they would replace the class in the dict with the new class.
Library code:
lib_class_dict = {}

class BaseClassA:
    pass

class BaseClassB:
    def func0(self):
        self.class_a_obj = lib_class_dict['ClassA']()

lib_class_dict['ClassA'] = BaseClassA
lib_class_dict['ClassB'] = BaseClassB
User code:
class ExtendClassA(BaseClassA):
    pass

lib_class_dict['ClassA'] = ExtendClassA
EDIT: Adding more details regarding the complexities I'm dealing with.
I have scenarios where method calls are buried deep within the library, which makes it hard to pass a class from the user entry point -> function:
(the user would call BaseClassB.func0() in the example below)
class BaseClassA:
    pass

class BaseClassB:
    def func0(self):
        self.class_c_obj = BaseClassC()

class BaseClassC:
    def __init__(self):
        self.class_d_obj = BaseClassD()

class BaseClassD:
    def __init__(self):
        self.class_a_obj = BaseClassA()
Multiple classes can create one type of object:
class BaseClassA:
    pass

class BaseClassB:
    def func0(self):
        self.class_a_obj = BaseClassA()

class BaseClassC:
    def __init__(self):
        self.class_a_obj = BaseClassA()

class BaseClassD:
    def __init__(self):
        self.class_a_obj = BaseClassA()
For these reasons I'm hoping to have a global or central location from which all classes can grab the correct class.
Allow them to specify the class to use as an optional parameter to func0:
class BaseClassB:
    def func0(self, objclass=BaseClassA):
        self.class_a_obj = objclass()

obj1 = BaseClassB()
obj1.func0()
obj2 = BaseClassB()
obj2.func0(objclass=ExtendClassA)
So, I've tried a PoC that, if I understand correctly, might do the trick. Give it a look.
By the way, whether it works or not, I have a strong feeling this is actually a bad practice in almost all scenarios, as it changes class behavior in an obscure, unexpected way that would be very difficult to debug.
For example, in the PoC below, if you inherit from the same BaseClassA multiple times, only the last inheritance is recorded in the class library, which would be a huge pain for a programmer trying to understand what on earth is happening with their code and why.
But of course, there are some use cases where shooting ourselves in the foot is less painful than designing and using a proper architecture :)
So, the first example, where we have inheritance (I specified multiple inherited classes just to show that only the last one gets saved in the library):
#################################
# 1. We define all base classes
class BaseClassA:
    def whoami(self):
        print(type(self))
    def __init_subclass__(cls):
        omfg_that_feels_like_a_reeeeally_bad_practise['ClassA'] = cls
        print('Class Dict Updated:')
        print('New Class A: ' + str(cls))

#################################
# 2. We define a class library
global omfg_that_feels_like_a_reeeeally_bad_practise
omfg_that_feels_like_a_reeeeally_bad_practise = {}
omfg_that_feels_like_a_reeeeally_bad_practise['ClassA'] = BaseClassA

#################################
# 3. We define a first class that refers to our base class (before inheriting from the base class)
class UserClassA:
    def __init__(self):
        self.class_a_obj = omfg_that_feels_like_a_reeeeally_bad_practise['ClassA']()

#################################
# 4. We inherit from the base class several times
class FirstExtendedClassA(BaseClassA):
    pass

class SecondExtendedClassA(BaseClassA):
    pass

class SuperExtendedClassA(FirstExtendedClassA):
    pass

#################################
# 5. We define a second class that refers to our base class (after inheriting from the base class)
class UserClassB:
    def __init__(self):
        self.class_a_obj = omfg_that_feels_like_a_reeeeally_bad_practise['ClassA']()

#################################
## 6. Now we try to refer to both user classes
insane_class_test = UserClassA()
print(str(insane_class_test.class_a_obj))
### LOOK - THE LAST INHERITED CHILD CLASS OBJECT IS USED!
# <__main__.SuperExtendedClassA object at 0x00000DEADBEEF>
insane_class_test = UserClassB()
print(str(insane_class_test.class_a_obj))
### LOOK - THE LAST INHERITED CHILD CLASS OBJECT IS USED!
# <__main__.SuperExtendedClassA object at 0x00000DEADBEEF>
And if we remove inheritance, the base class will be used:
#################################
# 1. We define all base classes
class BaseClassA:
    def whoami(self):
        print(type(self))
    def __init_subclass__(cls):
        omfg_that_feels_like_a_reeeeally_bad_practise['ClassA'] = cls
        print('Class Dict Updated:')
        print('New Class A: ' + str(cls))

#################################
# 2. We define a class library
global omfg_that_feels_like_a_reeeeally_bad_practise
omfg_that_feels_like_a_reeeeally_bad_practise = {}
omfg_that_feels_like_a_reeeeally_bad_practise['ClassA'] = BaseClassA

#################################
# 3. We define a first class that refers to our base class
class UserClassA:
    def __init__(self):
        self.class_a_obj = omfg_that_feels_like_a_reeeeally_bad_practise['ClassA']()

#################################
# 5. We define a second class that refers to our base class
class UserClassB:
    def __init__(self):
        self.class_a_obj = omfg_that_feels_like_a_reeeeally_bad_practise['ClassA']()

#################################
## 6. Now we try to refer to both user classes
insane_class_test = UserClassA()
print(str(insane_class_test.class_a_obj))
### LOOK - THE DEFAULT CLASS OBJECT IS USED!
# <__main__.BaseClassA object at 0x00000DEADBEEF>
insane_class_test = UserClassB()
print(str(insane_class_test.class_a_obj))
### LOOK - THE DEFAULT CLASS OBJECT IS USED!
# <__main__.BaseClassA object at 0x00000DEADBEEF>

How to consistently subclass an ensemble of cooperating classes

Suppose I have a set of (possibly abstract) base classes which cooperate in a certain way, and I want to subclass them in such a way that each subclass is aware of its respective co-operating subclasses (e.g. it has the other classes as class attributes).
Literally adding attributes seems really messy for more than a handful of classes.
One way I can think of doing this is to use class properties on the abstract classes that reference a dictionary class attribute (the same dictionary for all classes), supplied via a mixin to avoid repeating code in the superclass module. This way, I only need to add one attribute for each subclass (plus a dictionary referencing all the classes in the module); see the code below.
Is there an established design pattern to achieve this sort of thing?
Example:
abstract_module:
from abc import ABC

_module_classes_dict = {}

class _ClassesDictMixin:
    _classes_dict = dict()

    @classmethod
    @property
    def _a_class(cls):
        return cls._classes_dict['a']

    @classmethod
    @property
    def _b_class(cls):
        return cls._classes_dict['b']

    @classmethod
    @property
    def _c_class(cls):
        return cls._classes_dict['c']

class AbstractA(ABC):
    pass

class AbstractB(_ClassesDictMixin, ABC):
    _classes_dict = _module_classes_dict
    # # Basic solution without using the dict
    # _a_class = AbstractA

class AbstractC(_ClassesDictMixin, ABC):
    _classes_dict = _module_classes_dict
    # # Basic solution without using the dict
    # _a_class = AbstractA
    # _b_class = AbstractB

class AbstractD(_ClassesDictMixin, ABC):
    _classes_dict = _module_classes_dict
    # # Alternative solution without using the dict
    # _a_class = AbstractA
    # _b_class = AbstractB
    # _c_class = AbstractC

_module_classes_dict.update(a=AbstractA, b=AbstractB, c=AbstractC, d=AbstractD)
concrete_module:
from abstract_module import AbstractA, AbstractB, AbstractC, AbstractD

_module_classes_dict = {}

class ConcreteA(AbstractA):
    pass

class ConcreteB(AbstractB):
    _classes_dict = _module_classes_dict
    # # Basic solution without using the dict
    # _a_class = ConcreteA

class ConcreteC(AbstractC):
    _classes_dict = _module_classes_dict
    # # Basic solution without using the dict
    # _a_class = ConcreteA
    # _b_class = ConcreteB

class ConcreteD(AbstractD):
    _classes_dict = _module_classes_dict
    # # Basic solution without using the dict
    # _a_class = ConcreteA
    # _b_class = ConcreteB
    # _c_class = ConcreteC

_module_classes_dict.update(a=ConcreteA, b=ConcreteB, c=ConcreteC, d=ConcreteD)
The issue is maybe not where you think it is.
Literally adding attributes seems really messy for more than a handful of classes.
I would be concerned if one of my classes was dependent on "more than a handful of classes". This, in my mind, is the issue you should try to solve.
Moreover, the mixin solution has a major drawback: ConcreteB knows about ConcreteC and ConcreteD, whereas it should only know about ConcreteA. The dependencies between the classes are blurred. Hard coding the dependencies, on the contrary, is the cleaner solution, because the relationships between the classes are explicit.
Hence this seems better than the mixin:
class ConcreteB(AbstractB):
    _a_class = ConcreteA

class ConcreteC(AbstractC):
    _a_class = ConcreteA
    _b_class = ConcreteB
But sometimes hard coding the relations between ConcreteB and ConcreteA is not the best option. What if you want to use ConcreteA2 instead of ConcreteA?
class ConcreteA(AbstractA):
    pass

class ConcreteA2(AbstractA):
    pass
To make the code more versatile, you can use (as you wrote in a comment) the parameters of __init__:
class ConcreteB(AbstractB):
    def __init__(self, a_class):
        self._a_class = a_class

class ConcreteC(AbstractC):
    def __init__(self, a_class, b_class):
        self._a_class = a_class
        self._b_class = b_class
But now, you might have an inconsistent set of classes:
b = ConcreteB(ConcreteA)
c = ConcreteC(ConcreteA2, ConcreteB)
This could happen if the codebase grows and the initialization of objects is dispatched across various modules. To avoid this situation, you may use a variant of the Factory Pattern:
class Factory:
    def __init__(self, a_class, b_class, c_class):
        self._a_class = a_class
        self._b_class = b_class
        self._c_class = c_class
    def concreteA(self):
        return self._a_class()
    def concreteB(self):
        return self._b_class(self._a_class)
    def concreteC(self):
        return self._c_class(self._a_class, self._b_class)
Now, you are sure that B and C share the same a_class.
This design helps you to ensure that the dependencies are explicit and consistent.
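For illustration, usage of the factory might look like this (a sketch assuming the concrete classes defined above; swapping the whole ensemble to ConcreteA2 is then a change in exactly one spot):
# One place wires the ensemble together, so every object agrees on its collaborators.
factory = Factory(ConcreteA, ConcreteB, ConcreteC)
b = factory.concreteB()  # a ConcreteB whose _a_class is ConcreteA
c = factory.concreteC()  # shares the same _a_class and _b_class

# Using ConcreteA2 everywhere requires changing only the factory arguments:
factory2 = Factory(ConcreteA2, ConcreteB, ConcreteC)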
