I have two classes, A and B. I would like to build a class C that overrides some methods common to A and B.
The overriding methods should still be able to call the corresponding method of the underlying class.
In practice, I would like to collect some statistics about classes A and B while staying transparent to the rest of my code. A and B have some methods in common (implemented differently, of course). I would like a class C that shares the interface of A and B and simultaneously does some extra work (e.g. measuring the run time of some shared methods).
I can make this example:
import time

class A:
    def __init__(self):
        pass

    def common_method(self):
        return "A"

class B:
    def __init__(self):
        pass

    def common_method(self):
        return "B"

class C:
    def __init__(self, my_obj):
        self.my_obj = my_obj
        self.time_avg = 0
        self.alpha = 0.1

    def common_method(self):
        start = time.time()
        ret = self.my_obj.common_method()
        stop = time.time()
        # exponential moving average of the call duration
        self.time_avg = (1. - self.alpha) * self.time_avg + self.alpha * (stop - start)
        return ret
I hope this example makes it clear that having A and B inherit from C does not work.
However, this approach unfortunately requires me to redefine all the methods of classes A and B, which is tedious and dirty!
What is the proper way to implement this in Python? And what is this design pattern called? (I am almost sure there is one, but I cannot recall its name.)
Thanks in advance.
You could solve this with composition instead of inheritance, meaning that a C object will hold either an A object or a B one:
class C:
    def __init__(self, obj):
        self._obj = obj

    def common_method(self):
        return self._obj.common_method()
You can then use it:
>>> ca = C(A())
>>> cb = C(B())
>>> ca.common_method()
'A'
>>> cb.common_method()
'B'
Beware: if you pass an object that does not define a common_method method, you will get an AttributeError.
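What is being described here is essentially the Decorator (or Proxy) design pattern. To avoid redefining every shared method by hand, one common trick is to forward unknown attribute lookups to the wrapped object with __getattr__ and wrap only the methods you want to time. A sketch of that idea; the TimedProxy name and the timed_methods parameter are made up for illustration:

```python
import time

class A:
    def common_method(self):
        return "A"

class TimedProxy:
    """Forwards attribute access to the wrapped object; times selected methods."""
    def __init__(self, obj, timed_methods=("common_method",)):
        self._obj = obj
        self._timed = set(timed_methods)
        self.time_avg = 0.0
        self.alpha = 0.1

    def __getattr__(self, name):
        # only called for attributes NOT found on the proxy itself
        attr = getattr(self._obj, name)
        if name in self._timed and callable(attr):
            def timed(*args, **kwargs):
                start = time.time()
                ret = attr(*args, **kwargs)
                # exponential moving average, as in the question
                self.time_avg = ((1.0 - self.alpha) * self.time_avg
                                 + self.alpha * (time.time() - start))
                return ret
            return timed
        return attr
```

With this, TimedProxy(A()).common_method() still returns 'A', while any attribute not listed in timed_methods is forwarded untouched, so the proxy stays transparent without redefining each method.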
Imagine that I have defined several methods acting on an object, and I have two or more different classes that cannot inherit from the same parent, each holding an instance of that object. I want to automatically add all the methods to the two classes, removing the first argument (the object) and replacing it with the instance owned by the class.
Is there a way to do it?
I am sure my question is not clear, so I will try to give a super-simplified setting. To keep things simple, the object is just a list. I hope that after the example my objective is clear! Thanks in advance for your time.
# I define some methods acting on an object
# (just 2 useless methods acting on a list in this example)
def get_avg(input_list):
    return sum(input_list) / len(input_list)

def multiply_elements(input_list, factor):
    return [i * factor for i in input_list]
Then we have 2 different classes; both hold an instance of our object (the list):
class A:
    list_of_apples = []

    def get_list_of_apples(self):
        return self.list_of_apples

class B:
    """Totally different class from A(pples), also containing a list"""
    list_of_bears = []

    def get_list_of_bears(self):
        return self.list_of_bears
Now, to call a "list" method on the lists owned by A and B instances, I would need to do the following:
b = B()
get_avg(b.get_list_of_bears())
My goal, instead, is to automatically define some wrappers (like the following ones) which would allow me to call list methods directly on instances of A and B. Here is an example for B:
class B:
    """Totally different class from A(pples), but containing a list"""
    list_of_bears = []

    def get_list_of_bears(self):
        return self.list_of_bears

    def get_avg(self):
        return get_avg(self.list_of_bears)

    def multiply_elements(self, factor):
        return multiply_elements(self.list_of_bears, factor)
With the extended class, I can simply do:
b = B()
b.get_avg()
b.multiply_elements(factor=10)
I would like to automatically extend A and B.
I don't know why your classes cannot inherit from a common ancestor, but one solution I can think of is to make the ancestor dynamically:
def make_ancestor():
    class Temp:
        def get_avg(self):
            input_list = getattr(self, self.list_name)
            return sum(input_list) / len(input_list)

        def multiply_elements(self, factor):
            input_list = getattr(self, self.list_name)
            return [i * factor for i in input_list]
    return Temp
class A(make_ancestor()):
    list_of_apples = []
    list_name = 'list_of_apples'

    def get_list_of_apples(self):
        return self.list_of_apples

class B(make_ancestor()):
    list_of_bears = []
    list_name = 'list_of_bears'

    def get_list_of_bears(self):
        return self.list_of_bears
Now, since the parent classes are generated dynamically, your child classes don't inherit from the same parent.
As a test:
print(make_ancestor() == make_ancestor()) # False
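Another option that avoids a synthetic base class entirely, assuming every helper takes the list as its first argument, is a class decorator that attaches wrapped versions of the free functions. The attach_list_methods name and its attr_name parameter are invented for this sketch:

```python
import functools

def attach_list_methods(attr_name, *funcs):
    """Class decorator: for each func(list, ...), add a method func(self, ...)
    that operates on the list stored under attr_name."""
    def decorate(cls):
        for func in funcs:
            # bind func per iteration via a keyword-only default
            def method(self, *args, __func=func, **kwargs):
                return __func(getattr(self, attr_name), *args, **kwargs)
            functools.update_wrapper(method, func)  # keep name and docstring
            setattr(cls, func.__name__, method)
        return cls
    return decorate

def get_avg(input_list):
    return sum(input_list) / len(input_list)

def multiply_elements(input_list, factor):
    return [i * factor for i in input_list]

@attach_list_methods("list_of_bears", get_avg, multiply_elements)
class B:
    def __init__(self):
        self.list_of_bears = [1, 2, 3]
```

After decoration, b.get_avg() and b.multiply_elements(factor=10) work exactly like the hand-written wrappers, and A can be decorated the same way with its own attribute name.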
I am writing a class that needs to behave as if it were an instance of another class, but have additional methods and attributes. I've tried doing different things within __new__ but to no avail. As an example, here is a half-written class and the desired behavior:
class A(object):
    def __new__(cls, a):
        value = 100  # instances of A need to behave like integers
        ...          # bind A methods and attributes to value?
        return value

    def __init__(self, a):
        self.a = a

    def something(self):
        return 20 + self.a
Here is the desired behavior:
a = A(10, 5)
print(a + 10) # 110
print(a * 2) # 200
print(a.b) # 5
print(a.something()) # 25
I know that when __new__ returns an instance of a class different from A, __init__ is not called, and none of A's other methods are bound to value either. Is this sort of thing possible? Am I thinking about this problem the wrong way?
EDIT
Note that the real class doesn't return integer instances; that is just for the purpose of the example.
The reason why (I think) I can't just subclass int in this case is that I need to construct the value when the class is called, and __init__ doesn't return anything. Otherwise, maybe I could do something like:
class A(object):
    def __init__(self, a):
        self.a = a
        ...  # logic constructing `value`
        value = 100  # `value` ends up being an integer
        return value

    def something(self):
        return self.a
In case this is relevant, value is a theano TensorVariable. I would like to add extra methods and attributes to the instance of TensorVariable created for use by other functionality downstream.
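For the int stand-in, at least, subclassing does combine with constructing the value at call time, because __new__ (unlike __init__) returns the instance. Here is a sketch matching the desired behaviour above; the fixed value = 100 is a placeholder for the real construction logic, and whether the same trick applies to a theano TensorVariable depends on that class supporting subclassing:

```python
class A(int):
    """Behaves like an int, but carries extra attributes and methods."""
    def __new__(cls, a, b):
        value = 100  # placeholder for whatever logic constructs the value
        # int.__new__ builds the actual int instance with that value
        obj = super().__new__(cls, value)
        obj.a = a
        obj.b = b
        return obj

    def something(self):
        return 20 + self.b
```

With this, A(10, 5) + 10 == 110, A(10, 5).b == 5, and A(10, 5).something() == 25, matching the desired behaviour in the question.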
I am looking for a way to apply a function to all instances of a class. An example:
class my_class:
    def __init__(self, number):
        self.my_value = number
        self.double = number * 2

    @staticmethod
    def crunch_all():
        # pseudocode starts here
        for instance in my_class:
            instance.new_value = instance.my_value + 1
So calling my_class.crunch_all() should add a new attribute new_value to all existing instances. I am guessing I will have to use @staticmethod to make it a "global" function.
I know I could keep track of the instances by appending something like my_class.instances.append(self) in __init__ and then looping through my_class.instances, but I have had no luck with that so far either. I am also wondering if something more generic exists. Is this even possible?
Register objects with the class at initialisation (i.e. in __init__) and define a class method (i.e. @classmethod) for the class:
class Foo(object):
    objs = []  # registrar

    def __init__(self, num):
        # register the new object with the class
        Foo.objs.append(self)
        self.my_value = num

    @classmethod
    def crunch_all(cls):
        for obj in cls.objs:
            obj.new_value = obj.my_value + 1
example:
>>> a, b = Foo(5), Foo(7)
>>> Foo.crunch_all()
>>> a.new_value
6
>>> b.new_value
8
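One caveat worth knowing about this registrar: Foo.objs keeps a strong reference to every instance, so none of them can ever be garbage-collected. If that matters for your use case, the same idea works with a weakref.WeakSet, which drops entries once the instances die:

```python
import weakref

class Foo(object):
    objs = weakref.WeakSet()  # holds instances without keeping them alive

    def __init__(self, num):
        Foo.objs.add(self)
        self.my_value = num

    @classmethod
    def crunch_all(cls):
        for obj in cls.objs:
            obj.new_value = obj.my_value + 1
```

Usage is identical to the list-based version; the only difference is that instances disappear from Foo.objs when nothing else references them.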
My code contains some objects which are used via Python's "with" statement to ensure that they get safely closed.
Now I want to create a class whose methods can interact with these objects.
For example my code actually looks like this:
with ... as a, ... as b:
    # do something with a and b here
    call_method(a, b)  # pass a and b here
I'd like to put it into a class, so it looks "like" this:
class Something(object):
    def __init__(self):
        with ... as a:
            self.a = a
        with ... as b:
            self.b = b

    def do_something(self):
        # do something with self.a and self.b
        self.call_method(self.a, self.b)

    def call_method(self, a, b):
        # do more with a, b
The objects need to stay "opened" all the time.
I don't know how to achieve this, so how can I do this?
You don't have a 'context' in your class to manage, so don't use with in __init__; the objects would be closed again before __init__ even returns. You'll have to close the files in some other manner.
You can always use try:, finally: if you want the file objects to be closed when there is an exception within the method:
def call_method(self, a, b):
    try:
        ...  # do more with a, b
    finally:
        self.a.close()
        self.b.close()
but it depends heavily on what you want to do with the files and whether you really want them closed at that point.
If your instances themselves should be used in a specific context (e.g. there is a block of code that starts and ends during which your instance should have the file open), then you can make the class a context manager by implementing the context manager special methods.
You alone, as designer of the class API, know how long the files need to stay open; where to close them depends heavily on how the instance is used.
You could make your class itself a context manager:
class Something(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __enter__(self):
        self.a_e = self.a.__enter__()
        self.b_e = self.b.__enter__()
        return self

    def __exit__(self, *exc):
        xb = False
        try:
            xb = self.b.__exit__(*exc)
        finally:
            xa = self.a.__exit__(*exc)
        return xa or xb  # make sure both get exited

    def do_something(self):
        # do something with self.a and self.b
        # - or, if present, with a_e and b_e
        self.call_method(self.a, self.b)

    def call_method(self, a, b):
        ...  # do more with a, b
This is just the raw idea. To make it work properly (for example, if b's __enter__ fails after a's has already succeeded), you must do even more with try: except: finally:.
You can use it then with
with Something(x, y) as sth:
    sth.do_something()
and it gets properly __enter__()ed and __exit__()ed.
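On Python 3.3+ the standard library already provides this plumbing: contextlib.ExitStack can enter both objects and guarantees both get exited, which avoids hand-writing the paired __exit__ logic. A sketch, where the Dummy class is a made-up stand-in for the real managed objects:

```python
from contextlib import ExitStack

class Dummy:
    """Stand-in context manager that just tracks whether it is open."""
    def __enter__(self):
        self.open = True
        return self

    def __exit__(self, *exc):
        self.open = False
        return False

class Something:
    def __init__(self, a, b):
        self.a = a
        self.b = b
        self._stack = ExitStack()

    def __enter__(self):
        # enter both managers; ExitStack records how to unwind them
        self._stack.enter_context(self.a)
        self._stack.enter_context(self.b)
        return self

    def __exit__(self, *exc):
        # exits b then a (reverse order), even if one __exit__ raises
        return self._stack.__exit__(*exc)

    def do_something(self):
        pass  # work with self.a and self.b while both are open
```

ExitStack also handles the partial-failure case for free: if b's __enter__ raises, a is unwound automatically.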
I have the following scenario:
I have abstract classes A and B, and A uses B to perform some tasks. In both classes there are some "constant" parameters (currently implemented as class attributes) to be set by the concrete classes extending them, and some of the parameters are shared (they should have the same value in a derived "suite" of classes SubA and SubB).
The problem I face is with how namespaces are organized in Python. If Python had dynamic scoping, the ideal solution would be to declare those parameters as module variables and then, when creating a new suite of extending classes, simply overwrite them in the new module. But (luckily, because for most cases that is safer and more convenient) Python does not work like that.
To put it in a more concrete context (not my actual problem, and of course neither accurate nor realistic), imagine something like an n-body simulator with:
ATTRACTION_CONSTANT = NotImplemented  # could be G or Ke, for example

class NbodyGroup(object):
    def __init__(self):
        self.bodies = []

    def step(self):
        for a in self.bodies:
            for b in self.bodies:
                f = ATTRACTION_CONSTANT * a.var * b.var / distance(a, b)**2
                ...

class Body(object):
    def calculate_field_at_surface(self):
        return ATTRACTION_CONSTANT * self.var / self.r**2
Then another module could implement PlanetarySystem(NbodyGroup) and Planet(Body), setting ATTRACTION_CONSTANT to 6.67384E-11, and yet another module could implement MolecularAggregate(NbodyGroup) and Particle(Body), setting ATTRACTION_CONSTANT to 8.987E9.
In brief: what are good alternatives to emulate global constants at module level that can be "overwritten" in derived modules (modules that implement the abstract classes defined in the first module)?
How about using a mixin? You could define (based on your example) classes PlanetarySystemConstants and MolecularAggregateConstants that hold the ATTRACTION_CONSTANT, and then use class PlanetarySystem(NbodyGroup, PlanetarySystemConstants) and class MolecularAggregate(NbodyGroup, MolecularAggregateConstants) to define those classes.
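A minimal sketch of that mixin idea; the NbodyGroup stub and its pair_force method are invented stand-ins for the real base class, while the constants come from the question:

```python
class NbodyGroup(object):
    """Minimal stand-in for the real base class."""
    def pair_force(self, va, vb, d):
        # shared code reads the constant through self, so each suite
        # picks up the value supplied by its constants mixin
        return self.ATTRACTION_CONSTANT * va * vb / d**2

class PlanetarySystemConstants(object):
    ATTRACTION_CONSTANT = 6.67384e-11  # G

class MolecularAggregateConstants(object):
    ATTRACTION_CONSTANT = 8.987e9  # Ke

class PlanetarySystem(NbodyGroup, PlanetarySystemConstants):
    pass

class MolecularAggregate(NbodyGroup, MolecularAggregateConstants):
    pass
```

The key change from the module-global version is that the shared methods look the constant up on self instead of in the module namespace, so ordinary attribute resolution replaces the dynamic scoping the question asks for.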
Here are a few things I could suggest:
Link each body to its group, so that the body accesses the constant from the group when it calculates its force. For example:
class NbodyGroup(object):
    def __init__(self, constant):
        self.bodies = []
        self.constant = constant

    def step(self):
        for a in self.bodies:
            for b in self.bodies:
                f = self.constant * a.var * b.var / distance(a, b)**2
                ...

class Body(object):
    def __init__(self, group):
        self.group = group

    def calculate_field_at_surface(self):
        return self.group.constant * self.var / self.r**2
Pro: this automatically enforces the fact that bodies in the same group exert the same kind of force. Con: semantically, you could argue that a body should exist independently of any group it may be in.
Add a parameter to specify the type of force. This could be a value of an enumeration, for example.
class Force(object):
    def __init__(self, constant):
        self.constant = constant

GRAVITY = Force(6.67e-11)
ELECTRIC = Force(8.99e9)

class NbodyGroup(object):
    def __init__(self, force):
        self.bodies = []
        self.force = force

    def step(self):
        for a in self.bodies:
            for b in self.bodies:
                f = (self.force.constant * a.charge(self.force)
                     * b.charge(self.force) / distance(a, b)**2)
                ...

class Body(object):
    def __init__(self, charges, r):
        # charges = {GRAVITY: mass_value, ELECTRIC: electric_charge_value}
        self.charges = charges
        ...

    def charge(self, force):
        return self.charges.get(force, 0)

    def calculate_field_at_surface(self, force):
        return force.constant * self.charge(force) / self.r**2
Conceptually, I would prefer this method because it encapsulates the properties that you typically associate with a given object (and only those) in that object. If speed of execution is an important goal, though, this may not be the best design.
Hopefully you can translate these to your actual application.
You can try overriding __new__ (or writing a metaclass). At instance creation you can find the calling module by looking at earlier frames with the inspect module from the Python standard library, pick up the new constant there if one is defined, and patch the attribute on the derived class.
I won't post an implementation for the moment because it is non-trivial for me, and kind of dangerous.
edit: added implementation
in A.py:
import inspect

MY_GLOBAL = 'base module'

class BASE(object):
    def __new__(cls, *args, **kwargs):
        obj = super(BASE, cls).__new__(cls, *args, **kwargs)
        # read MY_GLOBAL from the module of the outermost caller
        obj.CLS_GLOBAL = inspect.stack()[-1][0].f_globals['MY_GLOBAL']
        return obj
in B.py:
import A

MY_GLOBAL = 'derived'
print(A.BASE().CLS_GLOBAL)
Now you can have fun with your own scoping rules...
You could use a property for this case, e.g.:
class NbodyGroup(object):
    @property
    def ATTRACTION_CONSTANT(self):
        return None
    ...
    def step(self):
        for a in self.bodies:
            for b in self.bodies:
                f = self.ATTRACTION_CONSTANT * a.var * b.var / distance(a, b)**2

class PlanetarySystem(NbodyGroup):
    @property
    def ATTRACTION_CONSTANT(self):
        return 6.67384E-11
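A runnable sketch of this property-override approach; the field method is an invented stand-in for the shared formulas, and raising NotImplementedError in the base class makes a missing override fail loudly:

```python
class NbodyGroup(object):
    @property
    def ATTRACTION_CONSTANT(self):
        raise NotImplementedError  # concrete suites must override

    def field(self, var, r):
        # any shared formula picks up the subclass's constant via self
        return self.ATTRACTION_CONSTANT * var / r**2

class PlanetarySystem(NbodyGroup):
    @property
    def ATTRACTION_CONSTANT(self):
        return 6.67384e-11

class MolecularAggregate(NbodyGroup):
    @property
    def ATTRACTION_CONSTANT(self):
        return 8.987e9
```

Because the constants never change per instance, plain class attributes (ATTRACTION_CONSTANT = 6.67384e-11 in each subclass) would work just as well here; the property form only pays off if the value must be computed.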