I have the following scenario:
I have abstract classes A and B, where A uses B to perform some tasks. Both classes have some "constant" parameters (currently implemented as class attributes) that are meant to be set by the concrete classes extending them, and some of those parameters are shared: they should have the same value in a derived "suite" of classes SubA and SubB.
The problem I face here is with how namespaces are organized in Python. The ideal solution if Python had dynamic scoping would be to declare those parameters as module variables, and then when creating a new suite of extending classes I could just overwrite them in their new module. But (luckily, because for most cases that is safer and more convenient) Python does not work like that.
To put it in a more concrete context (not my actual problem, and of course neither accurate nor realistic), imagine something like an n-body simulator with:
ATTRACTION_CONSTANT = NotImplemented  # could be G or Ke, for example

class NbodyGroup(object):
    def __init__(self):
        self.bodies = []

    def step(self):
        for a in self.bodies:
            for b in self.bodies:
                f = ATTRACTION_CONSTANT * a.var * b.var / distance(a, b)**2
                ...

class Body(object):
    def calculate_field_at_surface(self):
        return ATTRACTION_CONSTANT * self.var / self.r**2
Then another module could implement PlanetarySystem(NbodyGroup) and Planet(Body), setting ATTRACTION_CONSTANT to 6.67384E-11, and yet another module could implement MolecularAggregate(NbodyGroup) and Particle(Body), setting ATTRACTION_CONSTANT to 8.987E9.
In brief: what are good alternatives for emulating module-level global constants that can be "overwritten" in derived modules (modules that implement the abstract classes defined in the first module)?
How about using a mixin? Based on your example, you could define classes PlanetarySystemConstants and MolecularAggregateConstants that hold the ATTRACTION_CONSTANT, and then declare class PlanetarySystem(NbodyGroup, PlanetarySystemConstants) and class MolecularAggregate(NbodyGroup, MolecularAggregateConstants).
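A minimal sketch of that idea (the constant becomes a class attribute supplied by the mixin, and NbodyGroup would then read self.ATTRACTION_CONSTANT instead of the module global):

class PlanetarySystemConstants(object):
    ATTRACTION_CONSTANT = 6.67384E-11  # G

class MolecularAggregateConstants(object):
    ATTRACTION_CONSTANT = 8.987E9      # Ke

class PlanetarySystem(NbodyGroup, PlanetarySystemConstants):
    pass

class MolecularAggregate(NbodyGroup, MolecularAggregateConstants):
    pass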
Here are a few things I could suggest:
Link each body to its group, so that the body accesses the constant from the group when it calculates its force. For example:
class NbodyGroup(object):
    def __init__(self, constant):
        self.bodies = []
        self.constant = constant

    def step(self):
        for a in self.bodies:
            for b in self.bodies:
                f = self.constant * a.var * b.var / distance(a, b)**2
                ...

class Body(object):
    def __init__(self, group):
        self.group = group

    def calculate_field_at_surface(self):
        return self.group.constant * self.var / self.r**2
Pro: this automatically enforces the fact that bodies in the same group exert the same kind of force. Con: semantically, you could argue that a body should exist independently of any group it may be in.
Add a parameter to specify the type of force. This could be a value of an enumeration, for example.
class Force(object):
    def __init__(self, constant):
        self.constant = constant

GRAVITY = Force(6.67e-11)
ELECTRIC = Force(8.99e9)

class NbodyGroup(object):
    def __init__(self, force):
        self.bodies = []
        self.force = force

    def step(self):
        for a in self.bodies:
            for b in self.bodies:
                f = (self.force.constant * a.charge(self.force)
                     * b.charge(self.force) / distance(a, b)**2)
                ...

class Body(object):
    def __init__(self, charges, r):
        # charges = {GRAVITY: mass_value, ELECTRIC: electric_charge_value}
        self.charges = charges
        self.r = r
        ...

    def charge(self, force):
        return self.charges.get(force, 0)

    def calculate_field_at_surface(self, force):
        return force.constant * self.charge(force) / self.r**2
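A hypothetical usage of this design, with rough figures for the Earth (step() would still need a distance() helper, so only the surface field is shown):

earth = Body(charges={GRAVITY: 5.97e24}, r=6.37e6)
print(earth.calculate_field_at_surface(GRAVITY))  # ~9.8, i.e. g in m/s**2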
Conceptually, I would prefer this method because it encapsulates the properties that you typically associate with a given object (and only those) in that object. If speed of execution is an important goal, though, this may not be the best design.
Hopefully you can translate these to your actual application.
You can try giving the base class a custom __new__ (or a metaclass): at creation time, look up the derived class's module by walking the enclosing frames with the inspect module from the standard library, fetch the new constant there if you find one, and patch the class attribute of the derived class. This is non-trivial and somewhat dangerous, but here is an implementation:
in A.py:
import inspect

MY_GLOBAL = 'base module'

class BASE(object):
    def __new__(cls, *args, **kwargs):
        clsObj = super(BASE, cls).__new__(cls, *args, **kwargs)
        # look up MY_GLOBAL in the outermost frame, i.e. the calling module
        clsObj.CLS_GLOBAL = inspect.stack()[-1][0].f_globals['MY_GLOBAL']
        return clsObj
in B.py:
import A

MY_GLOBAL = 'derived'

print(A.BASE().CLS_GLOBAL)  # -> 'derived'
now you can have fun with your own scoping rules ...
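If you want the patching to happen at class-creation time instead (as described above), a metaclass sketch along the same lines (ConstantPatcher is my own name; Python 3 syntax, and just as fragile):

import inspect

class ConstantPatcher(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        # The frame one level up is the module body executing the class statement.
        caller_globals = inspect.stack()[1][0].f_globals
        if 'MY_GLOBAL' in caller_globals:
            cls.CLS_GLOBAL = caller_globals['MY_GLOBAL']
        return cls

class BASE(metaclass=ConstantPatcher):
    pass

# In a derived module, "MY_GLOBAL = 'derived'" followed by
# "class Derived(BASE): pass" would set Derived.CLS_GLOBAL to 'derived'.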
You could use a property for this case, e.g.:
class NbodyGroup(object):
    @property
    def ATTRACTION_CONSTANT(self):
        return None
    ...
    def step(self):
        for a in self.bodies:
            for b in self.bodies:
                f = self.ATTRACTION_CONSTANT * a.var * b.var / distance(a, b)**2

class PlanetarySystem(NbodyGroup):
    @property
    def ATTRACTION_CONSTANT(self):
        return 6.67384E-11
I have seen super().__init__(*args) used to call the super constructor safely (in a way that does not break under diamond inheritance). However, I cannot find a way to call different super constructors with different arguments in this way.
Here is an example illustrating the problem.
from typing import TypeVar, Generic

X = TypeVar("X")
Y = TypeVar("Y")

class Base:
    def __init__(self):
        pass

class Left(Base, Generic[X]):
    def __init__(self, x: X):
        super().__init__()
        self.lft = x

class TopRight(Base, Generic[Y]):
    def __init__(self, y: Y):
        super().__init__()
        self.rgh = y

class BottomRight(TopRight[Y], Generic[Y]):
    def __init__(self, y: Y):
        super().__init__(y + y)

class Root(Left[X], BottomRight[Y], Generic[X, Y]):
    def __init__(self, x: X, y: Y):
        pass  # issue here
        # does not work:
        #   super().__init__(x)
        #   super().__init__(y)
        # calls Base twice:
        #   Left[X].__init__(x)
        #   BottomRight[Y].__init__(y)
How do I call Left.__init__(x) and BottomRight.__init__(y) separately and safely?
The thing is that, to be used cooperatively, the intermediate classes have to accept arguments that are not "aimed" at them and pass those along in their own super() call, in a way that becomes transparent.
You then do not place multiple calls to your ancestor classes: you let the language runtime do that for you.
Your code should be written:
from typing import Generic, TypeVar

X = TypeVar("X")
Y = TypeVar("Y")

class Base:
    def __init__(self):
        pass

class Left(Base, Generic[X]):
    def __init__(self, x: X, **kwargs):
        super().__init__(**kwargs)
        self.lft = x

class TopRight(Base, Generic[Y]):
    def __init__(self, y: Y, **kwargs):
        super().__init__(**kwargs)
        self.rgh = y

class BottomRight(TopRight[Y], Generic[Y]):
    def __init__(self, y: Y, **kwargs):      # <- when this runs, "y" has been extracted from kwargs
        super().__init__(y=y + y, **kwargs)  # <- "x" remains in kwargs, but this class does not have to care about it

class Root(Left[X], BottomRight[Y], Generic[X, Y]):
    def __init__(self, x: X, y: Y):
        super().__init__(x=x, y=y)  # <- will traverse all superclasses, "Generic" being last
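For example, with the definitions above:

r = Root(1, 2)
print(r.lft)  # 1
print(r.rgh)  # 4: BottomRight doubled y before passing it up the chain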
Also, note that depending on your project's aims and final complexity, these type annotations may gain you nothing and instead add complexity to otherwise trivial code. They are not always a win in Python projects, although due to circumstances the tooling (i.e. IDEs) might recommend them.
Also, check this similar answer from a few days ago, where I detail a bit more of Python's method resolution order mechanisms and point to the official documentation on them: In multiple inheritance in Python, init of parent class A and B is done at the same time?
I have a situation where I extend a class with several attributes:
class SuperClass:
    def __init__(self, tediously, many, attributes):
        ...  # assign the attributes like "self.attr = attr"

class SubClass(SuperClass):
    def __init__(self, id, **kwargs):
        self.id = id
        super().__init__(**kwargs)
And then I want to create instances, but I understand that this leads to a situation where a subclass can only be instantiated like this:
super_instance = SuperClass(tediously, many, attributes)
sub_instance = SubClass(id, tediously=super_instance.tediously,
                        many=super_instance.many,
                        attributes=super_instance.attributes)
My question is if anything prettier / cleaner can be done to instantiate a subclass by copying a superclass instance's attributes, without having to write a piece of sausage code to manually do it (either in the constructor call, or a constructor function's body)... Something like:
utopic_sub_instance = SubClass(id, **super_instance)
Maybe you want some concrete ideas of how to avoid writing so much code?
One way to do it would be like this:
class A:
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

class B(A):
    def __init__(self, x, a, b, c):
        self.x = x
        super().__init__(a, b, c)

a = A(1, 2, 3)
b = B('x', 1, 2, 3)
# So your problem is that you want to avoid passing 1, 2, 3 manually, right?
# As a comment suggests, you should use an alternative constructor here
# (see the sketch after this code block). Alternative constructors are good
# because people not very familiar with Python can also understand them.

# Alternatively, you could use this syntax, but it is a little dangerous and
# prone to producing bugs in the future that are hard to spot:
class BDangerous(A):
    def __init__(self, x, a, b, c):
        self.x = x
        kwargs = dict(locals())
        kwargs.pop('x')
        kwargs.pop('self')
        # This is dangerous because if someone adds a variable in this scope
        # in the future, you need to remember to pop that one as well.
        # Also, if the super constructor later acquires a parameter with the
        # same name as a variable someone adds here, you may end up passing
        # an argument unwillingly. That might cause a bug.
        # kwargs.pop(...pop all variable names you don't want to pass)
        super().__init__(**kwargs)
class BSafe(A):
    def __init__(self, x, a, b, c):
        self.x = x
        bad_kwargs = dict(locals())
        # This is safer: you are explicit about which arguments you're passing
        good_kwargs = {}
        for name in 'a,b,c'.split(','):
            good_kwargs[name] = bad_kwargs[name]
        # but really, this solution is not that much better compared to simply
        # passing all parameters explicitly
        super().__init__(**good_kwargs)
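For completeness, the alternative-constructor route referenced above might look roughly like this (from_a is a hypothetical name; it assumes A stores its constructor arguments under the same attribute names):

class B2(A):
    def __init__(self, x, a, b, c):
        self.x = x
        super().__init__(a, b, c)

    @classmethod
    def from_a(cls, x, a_instance):
        # Build a B2 by copying the attributes of an existing A instance.
        return cls(x, a_instance.a, a_instance.b, a_instance.c)

b2 = B2.from_a('x', A(1, 2, 3))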
Alternatively, let's go a little crazier: we'll use introspection to dynamically build the dict to pass as arguments. My example does not cover keyword-only arguments, defaults, *args, or **kwargs.
import inspect

class A:
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

class B(A):
    def __init__(self, x, y, z, super_instance):
        spec = inspect.getfullargspec(A.__init__)
        positional_args = []
        super_vars = vars(super_instance)
        for arg_name in spec.args[1:]:  # to exclude 'self'
            positional_args.append(super_vars[arg_name])
        # ...but of course, you must have the guarantee that constructor
        # arguments are set as instance attributes with the same names
        super().__init__(*positional_args)
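A hypothetical usage of that sketch (B ignores x, y, z here; the point is that a, b, c are copied from the A instance):

sup = A(1, 2, 3)
b = B('x', 'y', 'z', sup)
print(b.a, b.b, b.c)  # 1 2 3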
I managed to finally do it using a combination of an alternative constructor and the __dict__ attribute of super_instance.
class SuperClass:
    def __init__(self, tediously, many, attributes):
        self.tediously = tediously
        self.many = many
        self.attributes = attributes

class SubClass(SuperClass):
    def __init__(self, additional_attribute, tediously, many, attributes):
        self.additional_attribute = additional_attribute
        super().__init__(tediously, many, attributes)

    @classmethod
    def from_super_instance(cls, additional_attribute, super_instance):
        return cls(additional_attribute=additional_attribute, **super_instance.__dict__)

super_instance = SuperClass("tediously", "many", "attributes")
sub_instance = SubClass.from_super_instance("additional_attribute", super_instance)
NOTE: Bear in mind that Python executes statements sequentially, so if you want to override the value of an inherited attribute, put super().__init__() before the other assignment statements in SubClass.__init__.
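For instance, a hypothetical illustration of that ordering:

class SubClass(SuperClass):
    def __init__(self, additional_attribute, tediously, many, attributes):
        super().__init__(tediously, many, attributes)
        self.many = "overridden"  # assigned after super().__init__, so this value wins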
NOTE 2: pydantic has a very nice feature where its BaseModel class auto-generates an .__init__() method, helps with attribute type validation, and offers a .dict() method for such models (it's basically the same as .__dict__, though).
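A minimal sketch of that pydantic approach (hypothetical model names; this uses the pydantic v1 API, where .dict() exists; v2 renamed it to .model_dump()):

from pydantic import BaseModel

class SuperModel(BaseModel):
    tediously: str
    many: str
    attributes: str

class SubModel(SuperModel):
    additional_attribute: str

sup = SuperModel(tediously="a", many="b", attributes="c")
sub = SubModel(additional_attribute="d", **sup.dict())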
Kinda ran into the same question and just figured one could simply do:
class SubClass(SuperClass):
    def __init__(self, additional_attribute, **kwargs):
        self.additional_attribute = additional_attribute
        super().__init__(**kwargs)

super_class = SuperClass("tediously", "many", "attributes")
sub_instance = SubClass("additional_attribute", **super_class.__dict__)
I would like to ask whether there is a way to call an interface method within a class without instantiation, or whether that is altogether bad practice. If so, what is the right way to implement a complex interface method?
Here is a prototype of my code:
import abc

class calculator(abc.ABC):
    @abc.abstractmethod
    def calculate(self):
        "start model"

class subcalculator(calculator):
    def calculate(self):
        return model.attr2 ** 3

    def recalculate(self):
        z = calculate(self)  # NameError: name 'calculate' is not defined
        return z ** 2
However, this reports that calculate() is not defined when subcalculator.recalculate is run, since the bare name calculate does not resolve to the method.
As I am just writing interface classes for my model, I suppose writing an initializer is not a good idea (or is it?). What should I do in such a case?
Edit: following @chepner's answer, I have also figured out a somewhat hackish way to solve this, which I am not sure is good practice:
@classmethod
def recalculate(cls, self):
    z = cls.calculate(self)
    return z ** 2
Also it's worth mentioning the object/model part of the structure:
# In model.py
class model(object):
    def __init__(self, attr1):
        self.attr1 = attr1

class submodel(model):
    def __init__(self, attr1, attr2):
        super().__init__(attr1)
        self.attr2 = attr2
So my hope is to write calculator as an interface class which can interact with model etc.
calculate is a method whether or not you ever create an instance, and has to be referred to as such.
class subcalculator(calculator):
    def calculate(self):
        return model.attr2 ** 3

    def recalculate(self):
        z = subcalculator.calculate(self)
        return z ** 2
Of course, it's better to let the inheritance model determine exactly which calculate method needs to be called:
def recalculate(self):
    z = self.calculate()
    return z ** 2
I have two classes, A and B, and I would like to build a class C that overrides some methods common to A and B.
The overriding methods should be able to call the corresponding method of the base class.
In practice, I would like to collect some statistics about classes A and B while staying transparent to the rest of my code. A and B have some methods in common (implemented differently, obviously). I would like a class C that shares the interface of A and B and simultaneously does some other operations (i.e. measures the run time of some shared methods).
Here is an example:
import time

class A:
    def __init__(self):
        pass

    def common_method(self):
        return "A"

class B:
    def __init__(self):
        pass

    def common_method(self):
        return "B"

class C:
    def __init__(self, my_obj):
        self.my_obj = my_obj
        self.time_avg = 0
        self.alpha = 0.1

    def common_method(self):
        start = time.time()
        ret = self.my_obj.common_method()
        stop = time.time()
        self.time_avg = (1. - self.alpha) * self.time_avg + self.alpha * (stop - start)
        return ret
I hope this example makes clear that having A and B inherit from C does not work here.
However, this approach unfortunately requires me to redefine all the methods of classes A and B, which is tedious and dirty!
What is the proper way to implement this situation in Python? And what is the design pattern called (I am almost sure there is one, but I cannot recall it)?
Thanks in advance.
You could solve this with composition instead of inheritance, meaning that a C object will hold either an A object or a B one:

class C:
    def __init__(self, obj):
        self._obj = obj

    def common_method(self):
        return self._obj.common_method()
You can then use it:
>>> ca = C(A())
>>> cb = C(B())
>>> ca.common_method()
'A'
>>> cb.common_method()
'B'
Beware: if you pass an object that does not declare a common_method method, you will get an AttributeError
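To avoid redefining every method of A and B by hand, one option is generic delegation through __getattr__; a sketch of the same timing idea (TimingProxy is my own name). This wrapping-and-delegating style is essentially the proxy (or decorator) pattern you were trying to recall.

import time

class TimingProxy:
    def __init__(self, obj, alpha=0.1):
        self._obj = obj
        self._alpha = alpha
        self.time_avg = {}  # per-method exponential moving average

    def __getattr__(self, name):
        # Called only for attributes not found on the proxy itself,
        # so _obj, _alpha and time_avg resolve normally.
        attr = getattr(self._obj, name)
        if not callable(attr):
            return attr
        def timed(*args, **kwargs):
            start = time.time()
            result = attr(*args, **kwargs)
            elapsed = time.time() - start
            prev = self.time_avg.get(name, 0.0)
            self.time_avg[name] = (1.0 - self._alpha) * prev + self._alpha * elapsed
            return result
        return timed

ca = TimingProxy(A())
ca.common_method()   # 'A'
print(ca.time_avg)   # {'common_method': ...}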
Say I have the following class in Python 2.7:
class X():
    def __init__(self, alpha=1):
        self.alpha = alpha
        print self.alpha

    def beta(self, gamma=1):
        self.gamma = gamma
        self.omega = self.alpha + self.gamma
        print self.omega
I want to use the class definition to create another class definition with different default arguments, e.g. something like:
Y = f(X, alpha=2, gamma=2)
or
Y = f(X, __init__.alpha=2, beta.gamma=2)
which should be equivalent to:
class Y():
    def __init__(self, alpha=2):
        self.alpha = alpha
        print self.alpha

    def beta(self, gamma=2):
        self.gamma = gamma
        self.omega = self.alpha + self.gamma
        print self.omega
Is it possible to do something like this in Python 2.7 (or 3?)?
(I know you can use the functools.partial to do the equivalent for functions; so I was wondering if there was anything similar for classes)
You can write a function that creates classes for you:
def makeclass(alpha, gamma):
    class C():
        def __init__(self, alpha=alpha):
            self.alpha = alpha
            print self.alpha

        def beta(self, gamma=gamma):
            self.gamma = gamma
            self.omega = self.alpha + self.gamma
            print self.omega
    return C
>>> X = makeclass(1, 1)
>>> Y = makeclass(2, 2)
>>> x = X() # X is the class, x is an instance
1
>>> x.beta()
2
>>> y = Y()
2
>>> y.beta()
4
This can be done, but it's a bit messy.
This code:
class Foo(base1, base2, ...):
    bar = something
    baz = something_else
    def qux(self):
        ...
...is equivalent to this code, though I'm not sure this is correct if Foo is a classic class:
Foo = type('Foo', (base1, base2, ...),
           {'bar': something, 'baz': something_else,
            'qux': lambda self: ...})
So we can create classes on the fly, and attach custom method objects to them.
For your specific case, we need to make these calls (assuming you convert X into a new-style class by inheriting from object). First, we extract methods from X and apply functools.partial:
import functools

new_init = functools.partial(X.__init__.im_func,  # just X.__init__ in 3.x
                             alpha=2)
new_beta = functools.partial(X.beta.im_func,      # just X.beta in 3.x
                             gamma=2)
The im_func attribute grabs the underlying function out of the unbound method object. In Python 3, there's no such thing as an unbound method, so we don't need to use this attribute; X.__init__ is just the function itself.
Next, we create the class:
Y = type('Y', (object,), {'__init__': new_init, 'beta': new_beta})
One caveat: functools.partial objects are not descriptors, so beta stored this way is not bound when looked up on an instance (y.beta() would not receive self). Wrapping the partial in a plain function, or using functools.partialmethod in Python 3.4+, avoids that.
Unfortunately, we don't have any reasonable way of getting a list of the functions we need to redefine here. If we use dir() we'll get a lot of irrelevant special attributes. We could use the class's __dict__, but that won't include any inherited methods. I think we may need to search X.__mro__ for a fully correct listing of methods:
result = {}
for class_ in X.__mro__:
    for name, value in class_.__dict__.items():
        if name not in result:
            result[name] = value
We also need to figure out which arguments to override in each case; inspect.getargspec() is a good starting point for this, but it will not help much if any of the methods are variadic (i.e. if they take *args or **kwargs arguments).
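Putting the pieces together, a rough sketch of a helper (with_defaults is my own name; it assumes Python 3, where functools.partialmethod exists so the results still bind self, and it ignores keyword-only arguments, *args and **kwargs, as noted above):

import functools
import inspect

def with_defaults(cls, name, **overrides):
    # Subclass `cls`, re-binding every method that declares one of the
    # overridden argument names.
    namespace = {}
    seen = set()
    for class_ in cls.__mro__:
        for attr, value in class_.__dict__.items():
            if attr in seen or not inspect.isfunction(value):
                continue
            seen.add(attr)
            arg_names = inspect.getfullargspec(value).args
            relevant = {k: v for k, v in overrides.items() if k in arg_names}
            if relevant:
                # partialmethod is a descriptor, so the result still binds self.
                # Note: like functools.partial, this fixes the value rather than
                # changing the default; passing alpha positionally to Y()
                # afterwards would raise a TypeError.
                namespace[attr] = functools.partialmethod(value, **relevant)
    return type(name, (cls,), namespace)

Y = with_defaults(X, 'Y', alpha=2, gamma=2)  # assumes a Python 3 version of X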