Extend python staticmethod in other class - python

I have two classes responsible for some calculations. In the first class I have a static method, calculateStatisticsFeatures, which calculates some statistics and returns a DataFrame. In the second class I would like to do almost the same, but add one more calculation based on new input data from that second class. I found decorators, but somehow I was not able to use them.
Method in first class:
@staticmethod
def calculateStatisticsFeatures(self, inputData) -> pd.DataFrame:
    # some calculations
    features = pd.DataFrame(np.array([[skewn, kurt, entropyVal, meanCalc]]),
                            columns=['skewness', 'kurtosis', 'entropy', 'meanCalc'])
    return features
I was trying to use a decorator like this to extend the first class's method in the second class, but I can't pass the data:

@firstClass.calculateStatisticalFeatures(self.inputData)
def TestDecor(self):
    # new calculation

Is it somehow possible to add these calculations in the second class? Thank you in advance :)

Maybe this is what you want?

>>> import functools
>>>
>>> class A():
...     @staticmethod
...     def test(func):
...         @functools.wraps(func)
...         def wrapper(*args, **kw):
...             # features = pd.DataFrame(np.array([[skewn, kurt, entropyVal, meanCalc]]), columns=['skewness', 'kurtosis','entropy', 'meanCalc'])
...             print(kw.get("inputData"))
...             return func(*args, **kw)
...         return wrapper
...
>>> class B(A):
...     @A.test
...     def testb(self, **kw):
...         print('test')
...
>>>
>>> B().testb(inputData="inputData")
inputData
test
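For the original goal, a decorator may be more machinery than needed: the second class can simply call the first class's static method and extend the returned DataFrame. A minimal sketch, where the statistic values and the extra `extraCalc` column are stand-ins for the real calculations:

```python
import numpy as np
import pandas as pd

class FirstClass:
    @staticmethod
    def calculateStatisticsFeatures(inputData) -> pd.DataFrame:
        # stand-ins for the real skewness/kurtosis/entropy calculations
        skewn, kurt, entropyVal = 0.1, 3.0, 0.5
        meanCalc = float(np.mean(inputData))
        return pd.DataFrame(
            np.array([[skewn, kurt, entropyVal, meanCalc]]),
            columns=['skewness', 'kurtosis', 'entropy', 'meanCalc'])

class SecondClass:
    @staticmethod
    def calculateStatisticsFeatures(inputData) -> pd.DataFrame:
        # reuse the first class's calculation on the same input...
        features = FirstClass.calculateStatisticsFeatures(inputData)
        # ...then add one more (hypothetical) calculation as a new column
        features['extraCalc'] = float(np.std(inputData))
        return features
```

This keeps both methods static and avoids decorators entirely; the second class does not even need to inherit from the first.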

I removed the staticmethod and just made the second class inherit from the first. I was not aware that in Python one class can inherit from another like this.

Related

Refactoring a class method to be under an abstract base class and splitting the logic without changing the base class method signature?

I'm currently working on redesigning a class to be under an abstract base class. The current class has a method func that does some logic for two things, say A and B.
(note that all the code below is very simplified. There's a lot more functionality than what is shown)
class current_class:
    def func(self):
        # does stuff for A
        # does stuff for B
During logic A, it loads a large dataset into a dictionary, say dataset, and dataset.keys() is later used for logic B; other than that, A and B are independent of each other.
I will create an alternate class, say, another_class that is similar to current_class, but this class doesn't need B and only needs A. So something like
class another_class:
    def func(self):
        # does stuff for A
And then both will be under an abstract base class base. Since both inherited classes involve A, I plan on just creating a method in the base class that does A, say func_A. But I'm having trouble figuring out the best way to approach this so that the function signatures conform and dataset isn't reloaded for B.
If another_class also needed the logic for B, I think we could just return dataset.keys() from func_A and use it in func_B, but another_class doesn't need B.
So I don't know if there's a good way to make the methods conform without giving them different signatures.
So in code, I have the following two ideas:

1)

class base:
    @abstractmethod
    def func(self):
        pass

    def func_A(self):
        # does stuff for A and gets the dataset
        return dataset.keys()

class current_class(base):
    def func_B(self, keys):
        # does stuff for B

    def func(self):
        keys = self.func_A()
        self.func_B(keys)

class another_class(base):
    def func(self):
        _ = self.func_A()  # the return is unused...

2)

class base:
    @abstractmethod
    def func(self):
        pass

class current_class(base):
    def func_A(self):
        # does stuff for A and gets the dataset
        return dataset.keys()

    def func_B(self, keys):
        # does stuff for B

    def func(self):
        keys = self.func_A()
        self.func_B(keys)

class another_class(base):
    def func_A(self):
        # does the same stuff as func_A in current_class, but doesn't return anything

    def func(self):
        self.func_A()
I don't like the first design because func_A only needs to return something for one of the subclasses, not for all of them. I also don't like the second design because we have to implement func_A separately in each inherited class even though the methods are identical, except that one needs to return something and the other doesn't.
It's not a big deal to ignore the return value of a function that is primarily called for its side effects. Just define func_A once in the base class and let both child classes use it as appropriate to their needs.
class Base:
    @abstractmethod
    def func(self):
        pass

    def func_A(self):
        # does stuff for A and gets the dataset
        return dataset.keys()

class Child1(Base):
    def func_B(self, keys):
        # does stuff for B

    def func(self):
        keys = self.func_A()
        self.func_B(keys)

class Child2(Base):
    def func(self):
        self.func_A()
If there is more in func_A that isn't necessary for Child2, then it should of course be split up to avoid doing unnecessary work in Child2.func. But simply returning a value is not in any way time- or space-intensive, and should not be a concern.
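A runnable sketch of that recommendation, with the dataset load reduced to a tiny stand-in dictionary and return values added only so the behavior can be checked:

```python
from abc import ABC, abstractmethod

class Base(ABC):
    @abstractmethod
    def func(self):
        ...

    def func_A(self):
        # stand-in for the costly dataset load; returns its keys
        dataset = {'k1': 1, 'k2': 2}
        return dataset.keys()

class Child1(Base):
    def func_B(self, keys):
        # stand-in logic for B, driven by the keys produced by A
        return sorted(keys)

    def func(self):
        keys = self.func_A()
        return self.func_B(keys)

class Child2(Base):
    def func(self):
        self.func_A()  # return value deliberately ignored
```

Both subclasses share one func_A; Child2 simply discards its result.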

How to implement associated types in Python/Mypy? Or, what to do when wanting sub-classes to have subclass arguments?

Consider the (simplified) code below, where I want the Lean* and Isabelle* classes to extend the Base* ones.
class BaseProblem: ...
class BaseStep: ...
class LeanProblem(BaseProblem): ...
class LeanStep(BaseStep): ...
class IsabelleProblem(BaseProblem): ...
class IsabelleStep(BaseStep): ...
class BaseProver:
    def f(self, problem: BaseProblem, step: BaseStep): ...

class LeanProver(BaseProver):
    def f(self, problem: LeanProblem, step: LeanStep): ...

class IsabelleProver(BaseProver):
    def f(self, problem: IsabelleProblem, step: IsabelleStep): ...
However, the f override will have a problem in mypy:

Argument 1 of "f" is incompatible with supertype "BaseProver";
supertype defines the argument type as "BaseProblem"  [override]
I know it can be solved by generics, such as:
TProblem = TypeVar('TProblem', bound=BaseProblem)
TStep = TypeVar('TStep', bound=BaseStep)

class BaseProver(Generic[TProblem, TStep]):
    def f(self, problem: TProblem, step: TStep): ...

class LeanProver(BaseProver[LeanProblem, LeanStep]):
    def f(self, problem: LeanProblem, step: LeanStep): ...
...
However, instead of only "Problem" and "Step", I actually have more such types (say, 10), so the generic approach becomes quite ugly IMHO.
When using Rust, I know associated types can solve this; but I have not found an equivalent in Python.
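There is no direct Python equivalent of Rust's associated types, but for reference, here is the generic workaround from the question as a complete, runnable sketch. The string return values are assumptions added only to make the behavior checkable; mypy accepts the override because the subclass parametrizes the supertype:

```python
from typing import Generic, TypeVar

class BaseProblem: ...
class BaseStep: ...
class LeanProblem(BaseProblem): ...
class LeanStep(BaseStep): ...

TProblem = TypeVar('TProblem', bound=BaseProblem)
TStep = TypeVar('TStep', bound=BaseStep)

class BaseProver(Generic[TProblem, TStep]):
    def f(self, problem: TProblem, step: TStep) -> str:
        # default behavior: report which problem type was handled
        return type(problem).__name__

class LeanProver(BaseProver[LeanProblem, LeanStep]):
    def f(self, problem: LeanProblem, step: LeanStep) -> str:
        # narrowed signature matches BaseProver[LeanProblem, LeanStep],
        # so mypy no longer reports an [override] error
        return 'lean:' + type(problem).__name__
```

At runtime the type parameters are erased; only the static checker cares about them.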

How to use a function in two different classes with different return parameters in Python?

I had the following snippet:
class one(xyz):
    def __init__(self, ...):
        ....
    def myfunction(self, p, q):
        ....
        self.dummy_ = 1
        self.corr_ = 3
        return self

class two(one):
    ....

class three(one):
    ....
Here, I want myfunction() to return only self.corr_ (and not self.dummy_) in class two, but to return both self.dummy_ and self.corr_ in class three.
One way to achieve this is to remove the function from the base class and write it in both subclasses. But is there a way to achieve this without taking myfunction() out of class one?
class one(xyz):
    def __init__(self, ...):
        ....
    def myfunction(self, p, q):
        ....
        self.dummy_ = 1
        self.corr_ = 3
        return self

class two(one):
    def myfunction(self, p, q):
        ....
        return self.corr_

class three(one):
    ....
It's called method overriding. You can override a method that you inherited and thereby give a specific subclass a different implementation of that method.
https://docs.python.org/3.6/tutorial/classes.html
https://en.wikipedia.org/wiki/Method_overriding#Python
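Applied to the question, the override can also reuse the base class logic via super(), so nothing is duplicated. A minimal sketch with stand-in values (xyz and the elided code are dropped here):

```python
class One:
    def myfunction(self, p, q):
        # stand-in for the real calculations
        self.dummy_ = p
        self.corr_ = q
        return self

class Two(One):
    def myfunction(self, p, q):
        # run the base class logic first, then change only what is returned
        super().myfunction(p, q)
        return self.corr_

class Three(One):
    pass  # inherits myfunction unchanged, so it still returns self
```

Two gets just the correlation value; Three gets the whole object with both attributes.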

Python class member lazy initialization

I would like to know the Python way of initializing a class member only when it is accessed, if it is accessed at all.
I tried the code below and it works, but is there something simpler?
class MyClass(object):
    _MY_DATA = None

    @staticmethod
    def _retrieve_my_data():
        my_data = ...  # costly database call
        return my_data

    @classmethod
    def get_my_data(cls):
        if cls._MY_DATA is None:
            cls._MY_DATA = MyClass._retrieve_my_data()
        return cls._MY_DATA
You could use a @property on the metaclass instead:
class MyMetaClass(type):
    @property
    def my_data(cls):
        if getattr(cls, '_MY_DATA', None) is None:
            my_data = ...  # costly database call
            cls._MY_DATA = my_data
        return cls._MY_DATA

class MyClass(metaclass=MyMetaClass):
    # ...
This makes my_data an attribute on the class, so the expensive database call is postponed until you try to access MyClass.my_data. The result of the database call is cached by storing it in MyClass._MY_DATA, the call is only made once for the class.
For Python 2, use class MyClass(object): and add a __metaclass__ = MyMetaClass attribute in the class definition body to attach the metaclass.
Demo:
>>> class MyMetaClass(type):
...     @property
...     def my_data(cls):
...         if getattr(cls, '_MY_DATA', None) is None:
...             print("costly database call executing")
...             my_data = 'bar'
...             cls._MY_DATA = my_data
...         return cls._MY_DATA
...
>>> class MyClass(metaclass=MyMetaClass):
...     pass
...
>>> MyClass.my_data
costly database call executing
'bar'
>>> MyClass.my_data
'bar'
This works because a data descriptor like property is looked up on the parent type of an object; for classes that's type, and type can be extended by using metaclasses.
This answer is for a typical instance attribute/method only, not for a class attribute/classmethod, or staticmethod.
For Python 3.8+, how about using the cached_property decorator? It memoizes.
from functools import cached_property

class MyClass:
    @cached_property
    def my_lazy_attr(self):
        print("Initializing and caching attribute, once per class instance.")
        return 7**7**8
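A quick way to see the per-instance caching (using a small stand-in class with a call counter, since the example above deliberately returns a huge number):

```python
from functools import cached_property

class Probe:
    calls = 0  # counts how many times the property body runs

    @cached_property
    def value(self):
        Probe.calls += 1
        return 42

p = Probe()
assert p.value == 42 and p.value == 42
assert Probe.calls == 1   # computed once, then cached on the instance
Probe().value
assert Probe.calls == 2   # a new instance triggers a fresh computation
```

The cached result lives in the instance's __dict__, so each instance pays the cost at most once.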
For Python 3.2+, how about using both property and lru_cache decorators? The latter memoizes.
from functools import lru_cache

class MyClass:
    @property
    @lru_cache()
    def my_lazy_attr(self):
        print("Initializing and caching attribute, once per class instance.")
        return 7**7**8
Credit: answer by Maxime R.
Another approach to make the code cleaner is to write a wrapper function that does the desired logic:
def memoize(f):
    def wrapped(*args, **kwargs):
        if hasattr(wrapped, '_cached_val'):
            return wrapped._cached_val
        result = f(*args, **kwargs)
        wrapped._cached_val = result
        return result
    return wrapped
You can use it as follows:
@memoize
def expensive_function():
    print("Computing expensive function...")
    import time
    time.sleep(1)
    return 400

print(expensive_function())
print(expensive_function())
print(expensive_function())
Which outputs:
Computing expensive function...
400
400
400
Now your classmethod would look as follows, for example:
class MyClass(object):
    @classmethod
    @memoize
    def retrieve_data(cls):
        print("Computing data")
        import time
        time.sleep(1)  # costly DB call
        my_data = 40
        return my_data

print(MyClass.retrieve_data())
print(MyClass.retrieve_data())
print(MyClass.retrieve_data())
Output:
Computing data
40
40
40
Note that this will cache just one value for any set of arguments to the function, so if you want to compute different values depending on input values, you'll have to make memoize a bit more complicated.
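For instance, one way to make memoize argument-aware is to key a cache dictionary on the positional arguments. This is a sketch, not the answer's original code, and it assumes the arguments are hashable:

```python
def memoize(f):
    cache = {}

    def wrapped(*args):
        # compute each distinct argument tuple only once
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]

    return wrapped

@memoize
def expensive_square(n):
    print("Computing", n)  # visible only on the first call per n
    return n * n
```

Calling expensive_square(3) twice prints once and returns 9 both times, while expensive_square(4) gets its own cache slot.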
Consider the pip-installable Dickens package, available for Python 3.5+. It provides a descriptors package with the relevant cachedproperty and cachedclassproperty decorators, whose usage is shown in the example below. It seems to work as expected.
from descriptors import cachedproperty, classproperty, cachedclassproperty

class MyClass:
    FOO = 'A'

    def __init__(self):
        self.bar = 'B'

    @cachedproperty
    def my_cached_instance_attr(self):
        print('Initializing and caching attribute, once per class instance.')
        return self.bar * 2

    @cachedclassproperty
    def my_cached_class_attr(cls):
        print('Initializing and caching attribute, once per class.')
        return cls.FOO * 3

    @classproperty
    def my_class_property(cls):
        print('Calculating attribute without caching.')
        return cls.FOO + 'C'
Ring gives an lru_cache-like interface but works with any kind of descriptor: https://ring-cache.readthedocs.io/en/latest/quickstart.html#method-classmethod-staticmethod
class Page(object):
    (...)

    @ring.lru()
    @classmethod
    def class_content(cls):
        return cls.base_content

    @ring.lru()
    @staticmethod
    def example_dot_com():
        return requests.get('http://example.com').content
See the link for more details.

How can I give a class as an argument to a Python function

I have two functions in my (first!) Python program that differ only by the class that must be instantiated.
def f(id):
    c = ClassA(id)
    ...
    return ...

def g(id):
    c = ClassB(id)
    ...
    return ...
To avoid repeated code, I would like to write a single function that would somehow accept the class to instantiate as a parameter.
def f(id):
    return f_helper(id, ... ClassA ...)

def g(id):
    return f_helper(id, ... ClassB ...)

def f_helper(id, the_class):
    c = ... the_class ... (id)
    ...
    return ...
I'm pretty sure this is possible, but did not find how...
That works exactly as you have it (minus the ...s):
>>> class foo:
...     pass
...
>>> def make_it(cls):
...     return cls()
...
>>> make_it(foo)
<__main__.foo instance at 0x011D9B48>
This can be modified to take in/pass params to the class constructor as well if you like, but the idea is perfectly fine. Classes are first-class objects in Python.
You pretty much got it right; just drop the dots. Classes are first-class values, so you can refer to them like any other object: f would be return f_helper(id, ClassA) and g would be return f_helper(id, ClassB).
You can pass a callable to a function; a class itself is a callable, the returned object being an instance of said class.
def f_helper(id, the_class):
    c = the_class(id)
    # ...
    return  # ...
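Putting the pieces together, a complete runnable sketch (the two classes and the returned tuple are illustrative stand-ins for the elided logic):

```python
class ClassA:
    def __init__(self, id):
        self.id = id

class ClassB:
    def __init__(self, id):
        self.id = id

def f_helper(id, the_class):
    # the_class is called like any other callable to build an instance
    c = the_class(id)
    return type(c).__name__, c.id

def f(id):
    return f_helper(id, ClassA)

def g(id):
    return f_helper(id, ClassB)
```

f and g now share all their logic; only the class object they hand to the helper differs.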
