How to avoid redundant calculations for coupled classes with composition - python

I have several classes that are coupled to each other and I would like to have a design that minimizes redundancy in them.
Right now the design is as follows. The classes are
A
C, D
AC, AD: compositions of (A, C) and (A, D)
where all classes are initialized with the same data; each class performs some calculations and saves its own result. More specifically, users create instances of A, AC, and AD, but not of C and D. In other words, to the user, A, AC, and AD are similar things with a uniform interface, whereas C and D are hidden. The logic here is that, for example, the result of C is not useful unless we exclude the result of A from it. Some mock-up code is as follows:
class A(object):
    def __init__(self, data):
        # some work
        ...
    def postproc(self, *args):
        # some real work
        self.result = ...

class AC(object):
    def __init__(self, data):
        self.a = A(data)
        self.c = C(data)
    def postproc(self, *args):
        result_a = self.a.postproc(*args[:3])
        result_c = self.c.postproc(*args[3:])
        self.result = exclude_c_from_a(result_a, result_c)
With the current design, if the user creates instances of A and AC (or AC and AD, or other combinations), the calculation of A is done multiple times. I would like to avoid the redundant calculations; how should the design be changed?
A little more detail on the usage: there will also be multiple instances of A with different initial data, so I cannot use a singleton to guarantee there is only one A instance.

I finally used memoization as the solution.
from functools import wraps

def memoize(func):
    """
    Decorator for memoization. Note the func arguments need to be hashable.
    :type func: a callable
    """
    memo = func.memo = {}
    @wraps(func)
    def wrapper(*args):
        if args not in memo:
            memo[args] = func(*args)
        return memo[args]
    return wrapper
And it can decorate a class definition:
@memoize
class A(object):
    def __init__(self, arg1, arg2):
        ...
such that for the same input arguments, only one instance is created.
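For example, a minimal check of that behaviour (the argument values are arbitrary):
a1 = A(1, 2)
a2 = A(1, 2)  # same arguments: the cached instance is returned, no recomputation
assert a1 is a2
a3 = A(3, 4)  # different arguments: a fresh instance is created
assert a3 is not a1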

how should the design be changed?
Just pass A and C instances to the constructor of AC, so that you don't have to instantiate A and C inside that constructor.
Don't pass data to the constructor of A; instead, pass a result to it. Avoid doing any work in the constructor other than setting up the dependencies.
Create a class that is solely responsible for calculating the result; a sketch of this design follows.
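A minimal sketch of that dependency-injected design (Calculator, compute, and the arithmetic are illustrative stand-ins, not names from the original code):

def exclude_c_from_a(result_a, result_c):
    return result_a - result_c  # stand-in for the real exclusion logic

class Calculator:
    """Solely responsible for the expensive computation."""
    def __init__(self, data):
        self.data = data
    def compute(self):
        return sum(self.data)  # stand-in for the real work

class A:
    def __init__(self, result):
        self.result = result  # the constructor only stores a precomputed result

class AC:
    def __init__(self, a, c):  # dependencies are injected, not created here
        self.a = a
        self.c = c
        self.result = exclude_c_from_a(a.result, c.result)

# the expensive calculation for a given data set runs exactly once,
# no matter how many composite objects share it:
shared_a = A(Calculator([1, 2, 3]).compute())
ac = AC(shared_a, A(Calculator([4, 5]).compute()))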

Related

How to write factory functions for subclasses?

Suppose there is a class A and a factory function make_A
class A():
    ...
def make_A(*args, **kwargs):
    # returns an object of type A
    ...
both defined in some_package.
Suppose also that I want to expand the functionality of A, by subclassing it,
without overriding the constructor:
from some_package import A, make_A

class B(A):
    def extra_method(self, ...):
        # adds extra functionality
What I also need is to write a new factory function make_B for subclass B.
The solution I have found so far is
def make_B(*args, **kwargs):
    """
    same as make_A except that it returns an object of type B
    """
    out = make_A(*args, **kwargs)
    out.__class__ = B
    return out
This seems to work, but I am a bit worried about directly modifying the
__class__ attribute, as it feels to me like a hack. I am also worried about
unexpected side-effects this modification may have. Is this the recommended
solution or is there a "cleaner" pattern to achieve the same result?
I think I finally found something that is not verbose yet still works. For this you need to replace inheritance with composition; this allows B to consume an A object by doing self.a = ....
To mimic the methods of A you can override __getattr__ to delegate those methods (and fields) to self.a.
The next snippet works for me
class A:
    def __init__(self, val):
        self.val = val
    def method(self):
        print(f"A={self.val}")

def make_A():
    return A(42)

class B:
    def __init__(self, *args, consume_A=None, **kwargs):
        if consume_A is None:
            self.a = A(*args, **kwargs)
        else:
            self.a = consume_A
    def __getattr__(self, name):
        return getattr(self.a, name)
    def my_extension(self):
        print(f"B={self.val * 100}")

def make_B(*args, **kwargs):
    return B(consume_A=make_A(*args, **kwargs))

b = make_B()
b.method()        # A=42
b.my_extension()  # B=4200
What makes this approach superior to yours is that modifying __class__ is probably not harmless. On the other hand, __getattr__ and __getattribute__ are the mechanisms specifically provided for customizing attribute lookup on an object.
Make your original factory function more general by accepting a class as a parameter: remember, everything is an object in Python, even classes.
def make(class_type, *args, **kwargs):
    return class_type(*args, **kwargs)

a = make(A)
b = make(B)
Since B has the same parameters as A, you don't need to make an A and then turn it into B: B inherits from A, so it "is an A" and will have the same functionality, plus the extra method that you added.

How to instantiate a subclass type variable from an existing superclass type object in Python

I have a situation where I extend a class with several attributes:
class SuperClass:
    def __init__(self, tediously, many, attributes):
        # assign the attributes like "self.attr = attr"
        ...

class SubClass(SuperClass):
    def __init__(self, id, **kwargs):
        self.id = id
        super().__init__(**kwargs)
And then I want to create instances, but I understand that this leads to a situation where a subclass can only be instantiated like this:
super_instance = SuperClass(tediously, many, attributes)
sub_instance = SubClass(id, tediously=super_instance.tediously,
                        many=super_instance.many, attributes=super_instance.attributes)
My question is if anything prettier / cleaner can be done to instantiate a subclass by copying a superclass instance's attributes, without having to write a piece of sausage code to manually do it (either in the constructor call, or a constructor function's body)... Something like:
utopic_sub_instance = SubClass(id, **super_instance)
Maybe you want some concrete ideas of how to not write so much code?
So one way to do it would be like this:
class A:
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

class B(A):
    def __init__(self, x, a, b, c):
        self.x = x
        super().__init__(a, b, c)

a = A(1, 2, 3)
b = B('x', 1, 2, 3)
# so your problem is that you want to avoid passing 1,2,3 manually, right?
# So as a comment suggests, you should use alternative constructors here.
# Alternative constructors are good because people not very familiar with
# Python could also understand them.
# Alternatively, you could use this syntax, but it is a little dangerous and prone to producing
# bugs in the future that are hard to spot
class BDangerous(A):
    def __init__(self, x, a, b, c):
        self.x = x
        kwargs = dict(locals())
        kwargs.pop('x')
        kwargs.pop('self')
        # This is dangerous because if in the future someone adds a variable in
        # this scope, you need to remember to pop that also.
        # Also, if in the future the super constructor acquires the same parameter
        # that someone else adds as a variable here, you may end up passing an
        # argument unwillingly. That might cause a bug.
        # kwargs.pop(...pop all variable names you don't want to pass)
        super().__init__(**kwargs)
class BSafe(A):
    def __init__(self, x, a, b, c):
        self.x = x
        bad_kwargs = dict(locals())
        # This is safer: you are explicit about which arguments you're passing
        good_kwargs = {}
        for name in 'a,b,c'.split(','):
            good_kwargs[name] = bad_kwargs[name]
        # but really, this solution is not that much better compared to simply
        # passing all parameters explicitly
        super().__init__(**good_kwargs)
Alternatively, let's go a little crazier. We'll use introspection to dynamically build the dict to pass as arguments. I have not included in my example the case where there are keyword-only arguments, defaults, *args or **kwargs
import inspect

class A:
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

class B(A):
    def __init__(self, x, y, z, super_instance):
        spec = inspect.getfullargspec(A.__init__)
        positional_args = []
        super_vars = vars(super_instance)
        for arg_name in spec.args[1:]:  # exclude 'self'
            positional_args.append(super_vars[arg_name])
        # ...but of course, you must have the guarantee that constructor
        # arguments will be set as instance attributes with the same names
        super().__init__(*positional_args)
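A quick check of the introspection-based version above:
a = A(1, 2, 3)
b = B('x', 'y', 'z', a)  # B copies a's constructor arguments via introspection
assert (b.a, b.b, b.c) == (1, 2, 3)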
I finally managed to do it using a combination of an alternate constructor and the __dict__ attribute of super_instance.
class SuperClass:
    def __init__(self, tediously, many, attributes):
        self.tediously = tediously
        self.many = many
        self.attributes = attributes

class SubClass(SuperClass):
    def __init__(self, additional_attribute, tediously, many, attributes):
        self.additional_attribute = additional_attribute
        super().__init__(tediously, many, attributes)

    @classmethod
    def from_super_instance(cls, additional_attribute, super_instance):
        return cls(additional_attribute=additional_attribute, **super_instance.__dict__)

super_instance = SuperClass("tediously", "many", "attributes")
sub_instance = SubClass.from_super_instance("additional_attribute", super_instance)
NOTE: Bear in mind that Python executes statements sequentially, so if you want to override the value of an inherited attribute, put super().__init__() before the other assignment statements in SubClass.__init__.
NOTE 2: pydantic has this very nice feature where its BaseModel class auto-generates an .__init__() method, helps with attribute type validation, and offers a .dict() method for such models (it's basically the same as .__dict__, though).
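A minimal sketch of that pydantic approach (the model names are illustrative; .dict() is the pydantic v1 spelling, renamed model_dump() in v2):

from pydantic import BaseModel

class SuperModel(BaseModel):
    tediously: str
    many: str
    attributes: str

class SubModel(SuperModel):
    additional_attribute: str

sm = SuperModel(tediously="t", many="m", attributes="a")
# .dict() plays the same role as __dict__ in the snippet above
sub = SubModel(additional_attribute="x", **sm.dict())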
Kinda ran into the same question and just figured one could simply do:
class SubClass(SuperClass):
    def __init__(self, additional_attribute, **kwargs):
        self.additional_attribute = additional_attribute
        super().__init__(**kwargs)

super_instance = SuperClass("tediously", "many", "attributes")
sub_instance = SubClass("additional_attribute", **super_instance.__dict__)

Pass Arguments from Function to Class Method

I think this is programming 101, but it's the class I missed:
I have a class where roughly 50 default arguments are passed to __init__. The user can provide different values for those arguments at construction time, or modify the resulting attributes in the normal way.
What I would like to do is create a function, probably outside of that class, that allows the user to create multiple versions of the class and then return useful information. However, each iteration of the class in the function will have different arguments for the constructor.
How best can I allow the user of the function to supply arguments that get passed on to the class constructor?
Here is what I am trying to achieve:
class someClass(object):
    def __init__(self, a=None, b=None, c=None, d=None, e=None):
        self.a = a
        self.b = b
        self.c = c
        self.d = d
        self.e = e

    def some_method(self):
        # do something
        return  # something useful

simulations = {'1': {'a': 3, 'e': 6},
               '2': {'b': 2, 'c': 1}}

def func(simulations=simulations):
    results = []
    for sim in simulations.keys():
        sc = someClass(simulations[sim])  # use the arguments in the dict to pass to the constructor
        results.append(sc.some_method())
    return results
You can use ** to unpack a dictionary into named keywords:
sc = someClass(**simulations[sim])
would provide 3 as a and 6 as e the first time, then 2 as b and 1 as c the second time.
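Putting it together, func becomes (a minimal sketch using the classes above):

def func(simulations=simulations):
    results = []
    for sim_kwargs in simulations.values():
        sc = someClass(**sim_kwargs)  # the dict's keys become keyword arguments
        results.append(sc.some_method())
    return results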

Polymorphism and Overriding in Python

I have two classes: A and B. I would like to build a class C which is able to override some common methods of A and B.
The methods I override should be able to call the methods of the base class.
In practice I would like to collect some statistics about classes A and B while staying transparent to the rest of my code. Now, A and B have some methods in common (implemented in different ways, obviously). I would like a class C which shares the interface of A and B and simultaneously performs some other operations (e.g. measures the run time of some shared methods).
I can give this example:
import time

class A:
    def __init__(self):
        pass
    def common_method(self):
        return "A"

class B:
    def __init__(self):
        pass
    def common_method(self):
        return "B"

class C:
    def __init__(self, my_obj):
        self.my_obj = my_obj
        self.time_avg = 0
        self.alpha = 0.1
    def common_method(self):
        start = time.time()
        ret = self.my_obj.common_method()
        stop = time.time()
        self.time_avg = (1. - self.alpha) * self.time_avg + self.alpha * (stop - start)
        return ret
I hope it is clear from this example that having A and B inherit from C does not work.
However, this method unfortunately requires me to redefine all the methods of classes A and B, which is tedious and dirty!
What is the proper way to implement this situation in Python? And what is this design pattern called (I am almost sure there is one, but I cannot recall it)?
Thanks in advance.
You could solve this with composition instead of inheritance, meaning that a C object will hold either an A object or a B one:
class C:
    def __init__(self, obj):
        self._obj = obj
    def common_method(self):
        return self._obj.common_method()
You can then use it:
>>> ca = C(A())
>>> cb = C(B())
>>> ca.common_method()
'A'
>>> cb.common_method()
'B'
Beware: if you pass an object that does not declare a common_method method, you will get an AttributeError.
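To avoid redefining every common method by hand (the asker's main complaint), the delegation can be made generic with __getattr__, in the spirit of the earlier factory-function answer. A sketch under that assumption (Timed and timed_methods are illustrative names, not from the question); this wrapping style is commonly called the Decorator or Proxy pattern:

import time

class Timed:
    def __init__(self, obj, timed_methods=('common_method',)):
        self._obj = obj
        self._timed = set(timed_methods)
        self.time_avg = 0.0
        self.alpha = 0.1
    def __getattr__(self, name):
        # called only for attributes not found on Timed itself,
        # so every other attribute is delegated to the wrapped object
        attr = getattr(self._obj, name)
        if name not in self._timed or not callable(attr):
            return attr
        def wrapper(*args, **kwargs):
            start = time.time()
            ret = attr(*args, **kwargs)
            stop = time.time()
            self.time_avg = (1. - self.alpha) * self.time_avg + self.alpha * (stop - start)
            return ret
        return wrapper

ca = Timed(A())       # using the A class from the question above
ca.common_method()    # returns 'A' and updates ca.time_avg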

Pythonic way to 'encourage' use of factory-method to instantiate class

I want to prevent mistakes when instantiating a complex class that has lots of rules for instantiating it correctly.
For example, I came up with the following complex class:
import math

sentinel = object()

class Foo(object):
    def __init__(self, a, c, d, g, b=sentinel, e=sentinel, f=sentinel,
                 h=sentinel, i=sentinel):
        # sentinel parameters are only needed in case other parameters have some
        # value, and in some cases should be None; __init__ contains only simple logic
        ...

def create_foo(a, c, d, g):
    # contains the difficult logic to create a Foo instance correctly, e.g.:
    b = int(math.pi * 10**a) / float(10**a)
    if c == "don't care":
        e = None
        f = None
    elif c == 'single':
        e = 3
        f = None
    else:
        e = 6
        f = 10
    if g == "need I say more":
        h = "ni"
        i = "and now for something completely different"
    elif g == "Brian":
        h = "Always look at the bright side of life"
        i = None
    else:
        h = None
        i = "Always look at the bright side of death"
    return Foo(a=a, b=b, c=c, d=d, e=e, f=f, g=g, h=h, i=i)
Since create_foo contains the logic to correctly create a Foo instance, I want to 'encourage'* users to use it.
What is the most Pythonic way to do this?
*Yes, I know I can't force people to use the factory function, hence I want to 'encourage' them. ;-)
TL;DR
The key to solving this is marking the origin of the object in the factory and in the constructor (__new__), and checking the origin in the initializer (__init__). Do a warnings.warn() if the origin was the __new__ method by itself. You mark the origin in the factory by calling __new__ and __init__ separately in the factory and marking the factory origin in between.
Preamble
Since create_foo is the preferred method, it would likely be better to use that as the default. Implement the complex logic in the constructor, and refer to the lighter weight factory in your documentation as an alternative in those cases when the user doesn't want the complex logic. This avoids the problem of having to inform the user they are "doing it wrong".
However, assuming it is best to keep the complex logic out of the default constructor, your problem is twofold:
You need to implement - and detect - multiple methods of creation.
This first part is simple; lots of objects in the standard library have alternate constructors, so this is not unusual at all in the Python world. You could use a factory method as you have created already, but at the end I suggest reasons for including the factory in your class (this is what @classmethod was created for).
Detecting the origin is a little harder, but not too difficult. You simply mark the origin at instantiation and check the origin at initialization. Do this by calling __new__ and __init__ separately.
You want to inform the user what they should be doing, but still allow them to do what they want to do.
This is exactly what the warnings module was created for.
Use warnings
You can issue messages to the user- and still allow them to be in control of the messages- using the warnings module. Notify the user that they might want to use the factory by doing:
import warnings
warnings.warn("It is suggested to use the Foo factory.", UserWarning)
This way, the user can filter the warnings out if they wish.
A warnings warning
Quick sidebar: note that once the above warnings.warn message has been executed, by default it will not come up again until you call warnings.resetwarnings() or restart the Python session.
In other words, unless you change the settings the user will only see the message the first time they make a Foo. This may or may not be what you want.
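For example, a user who wants to see the message every time, or never, can adjust the filter via the standard warnings API:

import warnings
warnings.simplefilter("always", UserWarning)  # repeat the warning on every creation
warnings.simplefilter("ignore", UserWarning)  # or silence it entirely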
Add origin attributes and check origin at initialization
Utilizing warnings.warn requires tracking the method of origin for your foos, and calling the warning if the factory isn't used. You could do it relatively simply as follows.
First, add a __new__ method and do an origin check at the end of the __init__ method:
class Foo(object):
    def __new__(cls, *args, **kwargs):
        inst = super(Foo, cls).__new__(cls)
        # mark the instance origin:
        inst.origin = cls.__new__
        return inst

    def __init__(self, a, c, d, g, b=sentinel, e=sentinel, f=sentinel,
                 h=sentinel, i=sentinel):
        #### simple logic ####
        if self.origin == type(self).__new__:
            warnings.warn("It is suggested to use the {} factory for "
                          "instance creation.".format(type(self).__name__),
                          UserWarning)
Then in the factory, instantiate and initialize the new object separately, setting the origin between the two steps:
def create_foo(a, c, d, g):
    #### complex logic #####
    # (computes b, e, f, h, i as before)
    # instantiate using the constructor directly:
    foo = Foo.__new__(Foo)
    # replace the origin with create_foo:
    foo.origin = create_foo
    # NOW initialize manually:
    foo.__init__(a=a, b=b, c=c, d=d, e=e, f=f, g=g, h=h, i=i)
    return foo
Now you can detect where any Foo is coming from, and a warning will be issued to the user (once by default) if they did not use the factory function:
>>> Foo()
__main__:8: UserWarning: It is suggested to use the Foo factory for instance creation.
Allow for Foo subclassing
One other suggested tweak: I would consider adding your factory function into the class as an alternate constructor, and then (in your user warning) suggesting the user use that constructor (rather than the factory function itself) for instantiation. This would allow Foo's child classes to utilize the factory method and still receive an instance of the child class from the factory, rather than a Foo.
Allowing for classes other than Foo to utilize the factory requires some minor changes to the factory:
def create_foo(foo_cls, a, c, d, g):
    '''Contains Foo creation logic.'''
    #### complex logic #####
    foo = foo_cls.__new__(foo_cls)
    # replace the marked origin with create_foo
    foo.origin = create_foo
    # now initialize
    foo.__init__(a=a, b=b, c=c, d=d, e=e, f=f, g=g, h=h, i=i)
    return foo
Now we'll add the alternate constructor:
class Foo(object):
    #### other methods ####

    @classmethod
    def create(cls, *args, **kwargs):
        inst = create_foo(cls, *args, **kwargs)
        return inst
Now we can:
f = Foo.create(a, c, d, g)
But we can also:
class Bar(Foo):
    pass

b = Bar.create(a, c, d, g)
And we still see this:
>>> Bar()
__main__:8: UserWarning: It is suggested to use the Bar factory for instance creation.
You can strictly restrict the ordering of operations/properties, meaning you prevent the user from accessing some of class Foo's methods before calling the create_foo() method. The same idea is used in the Serializer module.
Sample code:
class Foo(object):
    def __init__(self, a, b=sentinel):
        self.a = a
        self._create_foo_called = False

    def create_foo(self):
        self.b = "complex value"
        self._create_foo_called = True

    def do_something(self):
        if self._create_foo_called:
            pass  # WHATEVER
        else:
            raise AssertionError("Please call create_foo() method")
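A quick check of the ordering restriction above:

foo = Foo(1)
try:
    foo.do_something()  # not allowed yet
except AssertionError as exc:
    print(exc)          # Please call create_foo() method
foo.create_foo()
foo.do_something()      # now allowed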
