Can we call a method from within a class without instantiation? - python

I would like to ask if there is a way to implement an interface method within a class without instantiation, or even more, whether it is altogether bad practice. If so, what is the right way to implement a complex interface method?
Here is a prototype of my code:
import abc

class calculator(abc.ABC):
    @abc.abstractmethod
    def calculate(self):
        """start model"""

class subcalculator(calculator):
    def calculate(self):
        return model.attr2 ** 3

    def recalculate(self):
        z = calculate(self)  # NameError: name 'calculate' is not defined
        return z ** 2
However, this reports calculate() is not defined when subcalculator.recalculate is run, as nothing is instantiated.
As I am just writing interface classes for my model, I suppose writing an initializer is not a good idea. (Or is it?) What should I do in such a case?
Edit: Following chepner's answer, I have also figured out a somewhat hackish way to solve this problem, which I am not sure is the right practice:
@classmethod
def recalculate(cls, self):
    z = cls.calculate(self)
    return z ** 2
Also it's worth mentioning the object/model part of the structure:
# In model.py
class model(object):
    def __init__(self, attr1):
        self.attr1 = attr1

class submodel(model):
    def __init__(self, attr1, attr2):
        super().__init__(attr1)
        self.attr2 = attr2
So my hope is to write calculator as an interface class which can interact with model etc.

calculate is a method whether or not you ever create an instance, and has to be referred to as such.
class subcalculator(calculator):
    def calculate(self):
        return model.attr2 ** 3

    def recalculate(self):
        z = subcalculator.calculate(self)
        return z ** 2
Of course, it's better to let the inheritance model determine exactly which calculate method needs to be called:
def recalculate(self):
    z = self.calculate()
    return z ** 2
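For illustration, a minimal self-contained sketch of the fixed dispatch (the literal 8 stands in for model.attr2 ** 3, since no model object is wired up here):

class calculator:
    def calculate(self):
        raise NotImplementedError

class subcalculator(calculator):
    def calculate(self):
        return 8  # stand-in for model.attr2 ** 3

    def recalculate(self):
        z = self.calculate()  # dispatched through the instance
        return z ** 2

print(subcalculator().recalculate())  # 64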

Related

How to: safely call super constructors with different arguments

I have seen super().__init__(*args) used to call the super constructor safely (in a way that does not fail under diamond inheritance). However, I cannot find a way to call different super constructors with different arguments in this way.
Here is an example illustrating the problem.
from typing import TypeVar, Generic

X = TypeVar("X")
Y = TypeVar("Y")

class Base:
    def __init__(self):
        pass

class Left(Base, Generic[X]):
    def __init__(self, x: X):
        super().__init__()
        self.lft = x

class TopRight(Base, Generic[Y]):
    def __init__(self, y: Y):
        super().__init__()
        self.rgh = y

class BottomRight(TopRight[Y], Generic[Y]):
    def __init__(self, y: Y):
        super().__init__(y + y)

class Root(Left[X], BottomRight[Y], Generic[X, Y]):
    def __init__(self, x: X, y: Y):
        pass  # issue here
        # does not work:
        # super().__init__(x)
        # super().__init__(y)
        # calls Base twice:
        # Left[X].__init__(x)
        # BottomRight[Y].__init__(y)
How do I call Left.__init__(x) and BottomRight.__init__(y) separately and safely?
The thing is that, to be used in cooperative form, the intermediate classes have to accept the arguments that are not "aimed" at them, and pass those on in their own super call, in a way that becomes transparent.
You then do not place multiple calls to your ancestor classes: you let the language runtime do that for you.
Your code should be written:
from typing import Generic, TypeVar

X = TypeVar("X")
Y = TypeVar("Y")

class Base:
    def __init__(self):
        pass

class Left(Base, Generic[X]):
    def __init__(self, x: X, **kwargs):
        super().__init__(**kwargs)
        self.lft = x

class TopRight(Base, Generic[Y]):
    def __init__(self, y: Y, **kwargs):
        super().__init__(**kwargs)
        self.rgh = y

class BottomRight(TopRight[Y], Generic[Y]):
    def __init__(self, y: Y, **kwargs):  # <- when this runs, "y" is extracted from kwargs
        super().__init__(y=y + y, **kwargs)  # <- "x" remains in kwargs, but this class does not have to care about it

class Root(Left[X], BottomRight[Y], Generic[X, Y]):
    def __init__(self, x: X, y: Y):
        super().__init__(x=x, y=y)  # <- will traverse all superclasses, "Generic" being last
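A quick check of the cooperative version (values are hypothetical; note that BottomRight doubles y before passing it on):

r = Root(1, 2)
print(r.lft)  # 1
print(r.rgh)  # 4 -- BottomRight forwarded y=2 as y + y to TopRight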
Also, note that depending on your project's ends and final complexity, these type annotations may gain you nothing, and instead add complexity to code that is otherwise trivial. They are not always a gain in Python projects, although due to circumstances the tooling (i.e. IDEs) might recommend them.
Also, check this similar answer from a few days ago, where I detail a bit more of Python's method resolution order mechanisms and point to the official documentation on them: In multiple inheritance in Python, init of parent class A and B is done at the same time?

Python3 generic inheritance hierarchy

This question is specific to Python 3. Suppose I have a class hierarchy like this:
class Base:
    def calculate(self):
        return 0

class Derived1(Base):
    def calculate(self):
        ...  # some calculation

class Derived2(Base):
    def calculate(self):
        ...  # some calculation
Now, what I want to do is make a class that defines a generic way to inherit from the Derived classes and then overrides calculate. In other words, something in the spirit of C++ templates: avoid copying the subclass code, specify a generic way of subclassing, and then be able to define the subclasses as one-liners, as shown below:
# pseudocode
class GenericDerived5(someGenericBase):
    def calculate(self):
        return super().calculate() + 5

class GenericDerived6(someGenericBase):
    def calculate(self):
        return super().calculate() + 6

Derived5_1 = GenericDerived5(Derived1)
Derived6_2 = GenericDerived6(Derived2)
(the calculation is not literally like this, just illustrating the combinatorial nature of the inheritance structure)
What would this code look like, and what are the relevant tools from Python 3 that I need? I've heard of metaclasses, but I am not very familiar with them.
class definition inside a factory-function body
The most straightforward way to go there is really straightforward - but can feel a bit awkward:
def derived_5_factory(Base):
    class GenericDerived5(Base):
        def calculate(self):
            return super().calculate() + 5
    return GenericDerived5

def derived_6_factory(Base):
    class GenericDerived6(Base):
        def calculate(self):
            return super().calculate() + 6
    return GenericDerived6

Derived5_1 = derived_5_factory(Derived1)
Derived6_2 = derived_6_factory(Derived2)
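For example, assuming Derived1.calculate returns 1 and Derived2.calculate returns 2 (as in the multiple-inheritance snippet below), the factory-built classes behave like this:

print(Derived5_1().calculate())  # 6 (1 + 5)
print(Derived6_2().calculate())  # 8 (2 + 6)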
The inconvenient part is that your classes that need generic bases have to be defined inside function bodies. That way, Python re-executes the class statement itself with a different Base, taking advantage of the fact that in Python classes are first-class objects.
This code has the inconveniences that (1) the class bodies must live inside functions, and (2) it can be the wrong approach altogether:
Multiple inheritance
If you can afford an extra inheritance level - and that is the only difference for your example - this is the "correct" way to go. Apart from having the former "GenericDerived" classes explicitly in their inheritance chain, they will behave exactly as intended:
class Base:
    def calculate(self):
        return 0

class Derived1(Base):
    def calculate(self):
        return 1

class Derived2(Base):
    def calculate(self):
        return 2

# mix-in bases:
class MixinDerived5(Base):
    def calculate(self):
        return super().calculate() + 5

class MixinDerived6(Base):
    def calculate(self):
        return super().calculate() + 6

Derived5_1 = type("Derived5_1", (MixinDerived5, Derived1), {})
Derived6_2 = type("Derived6_2", (MixinDerived6, Derived2), {})
Here, instead of using the class statement, a dynamic class is created with the type call, passing both the class that needs a dynamic base and that dynamic base in its bases parameter. That is it - Derived5_1 is a fully working Python class with both bases in its inheritance chain.
Note that Python's super() will do exactly what common sense would expect it to do, "rerouting" itself through the extra intermediary "derived" classes before reaching Base. So, this is what I get on the interactive console after pasting the code above:
In [6]: Derived5_1().calculate()
Out[6]: 6
In [7]: Derived6_2().calculate()
Out[7]: 8
A mix-in class, roughly speaking, is a class that isn't intended to be instantiated directly or act as a standalone base class (other than for other, more specialized mix-in classes), but to provide a small subset of functionality that another class can inherit.
In this case, your GenericDerived classes are perfect examples of mix-ins: you aren't creating instances of GenericDerived, but you can inherit from them to add a calculate method to your own class.
class Calculator:
    def calculate(self):
        return 9

class Calculator1(Calculator):
    def calculate(self):
        return super().calculate() + 5

class Calculator2(Calculator):
    def calculate(self):
        return super().calculate() + 10

class Base(Calculator):
    ...
Note that the Base and Calculator hierarchies are independent of each other. Base provides, in addition to whatever else it does, basic calculate functionality. A subclass of Base can use calculate that it inherits from Base (via Calculator), or it can inherit from a subclass of Calculator as well.
class Derived1(Base):
    ...

class Derived2(Base, Calculator1):
    ...

class Derived3(Base, Calculator2):
    ...
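Putting it together, a small usage sketch (the values follow from Calculator.calculate returning 9):

print(Derived1().calculate())  # 9  (inherited via Base -> Calculator)
print(Derived2().calculate())  # 14 (Calculator1 adds 5)
print(Derived3().calculate())  # 19 (Calculator2 adds 10)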

Maintaining staticmethod calls if class is renamed

I have a class with a static method which is called multiple times by other methods. For example:
class A:
    def __init__(self):
        return

    @staticmethod
    def one():
        return 1

    def two(self):
        return 2 * A.one()

    def three(self):
        return 3 * A.one()
Method one is a utility function that belongs inside the class but isn't logically an attribute of the class or the class instance.
If the name of the class were to be changed from A to B, do I have to explicitly change every call to method one from A.one() to B.one()? Is there a better way of doing this?
I pondered this question once upon a time and, while I agree that using a refactoring utility is probably the best way to go, as far as I can tell it is technically possible to achieve this behaviour in two ways:
1. Declare the method a classmethod.
2. Use the __class__ attribute. This leads to rather messy code, and may well be deemed unsafe or inefficient for reasons I am not aware of(?).
class A:
    def __init__(self):
        return

    @staticmethod
    def one():
        return 1

    @classmethod
    def two(cls):
        return 2 * cls.one()

    def three(self):
        return 3 * self.__class__.one()

a = A()
print(a.two())
print(a.three())
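As a hypothetical sanity check, rebinding the class name breaks neither variant, whereas a hard-coded A.one() would fail after the rename:

B = A  # simulate renaming the class
del A
b = B()
print(b.two())    # 2 -- cls.one() resolves through the class object itself
print(b.three())  # 3 -- self.__class__.one() likewise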

Design pattern for interface for different classes

I am seeking the right design pattern to implement an interface in a class setting.
My file structure is as follows:
models: contains different models written as classes, say Model, subModelA, subModelB, subsubModelC, etc.
calculators: contains different calculator tools for each model. Note the calculator will need to import model attributes for computation.
My question is how I should structure my calculator file so as to follow the structure of models.
My original attempt was to write an ABC class for my calculator and then, for each subModel class in model, write a respective subCalculator subclass to implement it. However, this does not seem to fully exploit the prescribed class structure in model.
Some baby example of my attempt:
# in model.py
class model(object):
    def __init__(self, attr1):
        self.attr1 = attr1

class submodel(model):
    def __init__(self, attr1, attr2):
        super().__init__(attr1)
        self.attr2 = attr2

# in calculator.py
import abc
from model import model

class calculator(abc.ABC):
    @staticmethod
    @abc.abstractmethod
    def calculate(model):
        return model.attr1 ** 2

class subcalculator(calculator):
    def calculate(model):
        y = super(subcalculator, subcalculator).calculate(model)
        return y + model.attr2 ** 3
I have surveyed some design pattern catalogs, as listed here, and strategy seems to be the right pattern. But the baby example there does not address my concern, as I would hope to use the class structure of the model file.
I'd hope someone can give me a more full-fledged example for such a case. My thanks in advance.
So you can separate models and calculators like this:
# in model.py
import calculator as cal

class model(object):
    def __init__(self, attr1):
        self.attr1 = attr1

    def calculate(self):
        return cal.defaultCalc(self)

class submodel(model):
    def __init__(self, attr1, attr2):
        super().__init__(attr1)
        self.attr2 = attr2

    def calculate(self):
        return cal.subCalc(self)

# in calculator.py
def defaultCalc(model):
    return model.attr1 ** 2

def subCalc(model):
    return defaultCalc(model) + model.attr2 ** 3
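A short usage sketch of this separation (hypothetical values):

m = model(2)
print(m.calculate())  # 4  (2 ** 2)
s = submodel(2, 3)
print(s.calculate())  # 31 (2 ** 2 + 3 ** 3)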

Modules, classes and namespaces in Python

I have the following scenario:
I have abstract classes A and B, and A uses B to perform some tasks. In both classes, there are some "constant" parameters (right now implemented as class attributes) to be set by concrete classes extending those abstract classes, and some of the parameters are shared (they should have the same value in a derived "suite" of classes SubA and SubB).
The problem I face here is with how namespaces are organized in Python. The ideal solution, if Python had dynamic scoping, would be to declare those parameters as module variables and then, when creating a new suite of extending classes, just overwrite them in their new module. But (luckily, because in most cases that is safer and more convenient) Python does not work like that.
To put it in a more concrete context (not my actual problem, and of course neither accurate nor realistic), imagine something like an n-body simulator with:
ATTRACTION_CONSTANT = NotImplemented  # could be G or Ke, for example

class NbodyGroup(object):
    def __init__(self):
        self.bodies = []

    def step(self):
        for a in self.bodies:
            for b in self.bodies:
                f = ATTRACTION_CONSTANT * a.var * b.var / distance(a, b)**2
                ...

class Body(object):
    def calculate_field_at_surface(self):
        return ATTRACTION_CONSTANT * self.var / self.r**2
Then one module could implement PlanetarySystem(NbodyGroup) and Planet(Body), setting ATTRACTION_CONSTANT to 6.67384E-11, and another module could implement MolecularAggregate(NbodyGroup) and Particle(Body) and set ATTRACTION_CONSTANT to 8.987E9.
In brief: what are good alternatives to emulate global constants at module level that can be "overwritten" in derived modules (modules that implement the abstract classes defined in the first module)?
How about using a mixin? You could define (based on your example) classes PlanetarySystemConstants and MolecularAggregateConstants that hold the ATTRACTION_CONSTANT, and then use class PlanetarySystem(NbodyGroup, PlanetarySystemConstants) and class MolecularAggregate(NbodyGroup, MolecularAggregateConstants) to define those classes.
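A minimal sketch of that mix-in idea (the class names are illustrative, and it assumes the methods read self.ATTRACTION_CONSTANT rather than the module-level name):

class PlanetarySystemConstants:
    ATTRACTION_CONSTANT = 6.67384E-11  # G

class MolecularAggregateConstants:
    ATTRACTION_CONSTANT = 8.987E9  # Ke

class PlanetarySystem(NbodyGroup, PlanetarySystemConstants):
    pass

class MolecularAggregate(NbodyGroup, MolecularAggregateConstants):
    pass

print(PlanetarySystem().ATTRACTION_CONSTANT)     # 6.67384e-11
print(MolecularAggregate().ATTRACTION_CONSTANT)  # 8987000000.0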
Here are a few things I could suggest:
Link each body to its group, so that the body accesses the constant from the group when it calculates its force. For example:
class NbodyGroup(object):
    def __init__(self, constant):
        self.bodies = []
        self.constant = constant

    def step(self):
        for a in self.bodies:
            for b in self.bodies:
                f = self.constant * a.var * b.var / distance(a, b)**2
                ...

class Body(object):
    def __init__(self, group):
        self.group = group

    def calculate_field_at_surface(self):
        return self.group.constant * self.var / self.r**2
Pro: this automatically enforces the fact that bodies in the same group exert the same kind of force. Con: semantically, you could argue that a body should exist independent of any group it may be in.
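A hypothetical wiring of that first design:

solar_system = NbodyGroup(6.67384E-11)  # gravitational constant
earth = Body(solar_system)
solar_system.bodies.append(earth)
print(earth.group.constant)  # 6.67384e-11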
Add a parameter to specify the type of force. This could be a value of an enumeration, for example.
class Force(object):
    def __init__(self, constant):
        self.constant = constant

GRAVITY = Force(6.67e-11)
ELECTRIC = Force(8.99e9)

class NbodyGroup(object):
    def __init__(self, force):
        self.bodies = []
        self.force = force

    def step(self):
        for a in self.bodies:
            for b in self.bodies:
                f = self.force.constant * a.charge(self.force) \
                    * b.charge(self.force) / distance(a, b)**2
                ...

class Body(object):
    def __init__(self, charges, r):
        # charges = {GRAVITY: mass_value, ELECTRIC: electric_charge_value}
        self.charges = charges
        ...

    def charge(self, force):
        return self.charges.get(force, 0)

    def calculate_field_at_surface(self, force):
        return force.constant * self.charge(force) / self.r**2
Conceptually, I would prefer this method because it encapsulates the properties that you typically associate with a given object (and only those) in that object. If speed of execution is an important goal, though, this may not be the best design.
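For concreteness, a hypothetical use of the Force-keyed design (it assumes Body also stores r, which the elided ... in __init__ would handle):

earth = Body({GRAVITY: 5.97e24}, r=6.37e6)  # mass as the GRAVITY "charge"
print(earth.calculate_field_at_surface(GRAVITY))   # ~9.8, surface gravity
print(earth.calculate_field_at_surface(ELECTRIC))  # 0.0, no electric charge recorded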
Hopefully you can translate these to your actual application.
You can try overriding __new__ in a common base class. Then, at instance creation, you can find the calling module by looking at previous frames with the inspect module of the Python standard library, pick up the new constant there if you find one, and patch the class attribute of the derived class.
I won't post an implementation for the moment because it is non-trivial for me, and kind of dangerous.
edit: added implementation
in A.py:
import inspect

MY_GLOBAL = 'base module'

class BASE(object):
    def __new__(cls, *args, **kwargs):
        clsObj = super(BASE, cls).__new__(cls, *args, **kwargs)
        # look up MY_GLOBAL in the globals of the outermost (calling) frame
        clsObj.CLS_GLOBAL = inspect.stack()[-1][0].f_globals['MY_GLOBAL']
        return clsObj

in B.py:
import A

MY_GLOBAL = 'derived'

print(A.BASE().CLS_GLOBAL)  # -> 'derived'
now you can have fun with your own scoping rules ...
You should use a property for this case, e.g.:
class NbodyGroup(object):
    @property
    def ATTRACTION_CONSTANT(self):
        return None

    ...

    def step(self):
        for a in self.bodies:
            for b in self.bodies:
                f = self.ATTRACTION_CONSTANT * a.var * b.var / distance(a, b)**2

class PlanetarySystem(NbodyGroup):
    @property
    def ATTRACTION_CONSTANT(self):
        return 6.67384E-11
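Each derived module then just overrides the property; a hypothetical second subclass for the molecular case:

class MolecularAggregate(NbodyGroup):
    @property
    def ATTRACTION_CONSTANT(self):
        return 8.987E9

print(PlanetarySystem().ATTRACTION_CONSTANT)     # 6.67384e-11
print(MolecularAggregate().ATTRACTION_CONSTANT)  # 8987000000.0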
