super of equal methods in classes with multiple inheritance - python

I have a class that inherits from two others, and both parents define a method called "render". I want to get the return value of each parent's render method. For example:
class A:
    def render(self, value, name):
        return 'render A'

class B:
    def render(self, value, name):
        return 'render B'

class C(B, A):
    def render(self, value, name):
        render_a = ...  # here get the value of A
        render_b = ...  # here get the value of B
        return render_a

You probably should specify the classes explicitly here. You could look at the direct base classes, or walk the entire MRO, but then you would need to decide what happens if another direct (or indirect) superclass is added later. You can thus call it with A.render(self, value, name):
class C(B, A):
    def render(self, value, name):
        render_a = A.render(self, value, name)
        render_b = B.render(self, value, name)
        return render_a
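If you really would rather not hard-code the base classes, here is a sketch that walks the MRO instead (the results dict and the name-based lookup are illustrative choices, not part of the question, and the caveat above about future superclasses still applies):
class A:
    def render(self, value, name):
        return 'render A'

class B:
    def render(self, value, name):
        return 'render B'

class C(B, A):
    def render(self, value, name):
        # call every base class's own render along the MRO; C itself is skipped
        results = {}
        for klass in C.__mro__[1:]:
            if 'render' in klass.__dict__:
                results[klass.__name__] = klass.render(self, value, name)
        return results

print(C().render(1, 'x'))  # {'B': 'render B', 'A': 'render A'}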

Related

Create a variable in a superclass, but don't overwrite inheritor's setting

I'd like to have a superclass that, as part of its interface, provides a variable (or a getter function) and allows (but does not require) an inheritor to set this value. As follows:
class A(object):
    def __init__(self):
        self._value = DEFAULT_VALUE

    def value(self):
        return self._value

class B(A):
    def __init__(self, value):
        self._value = value
        super().__init__()

class C(A):
    def __init__(self):
        # note this class does not care about setting value
        super().__init__()
However, A's __init__ (called via super()) will overwrite any custom value set by B. If I don't set it in the superclass and a subclass does not want to implement it, anything that uses that variable will fail.
How do I create this variable in the superclass without overwriting what a subclass may want to set it to?
As mentioned in the comments, you can simply do your subclass-specific initialization after calling super().__init__:
class B(A):
    def __init__(self, value):
        super().__init__()
        # time to overwrite that default value
        self._value = value
You can also just not call super().__init__ if not necessary:
class B(A):
    def __init__(self, value=None):
        if value is None:
            # fall back to the default
            super().__init__()
        else:
            # otherwise set value from caller
            self._value = value

b = B()
print(b.value())  # prints the default value

b = B("overridden value")
print(b.value())  # prints "overridden value"
Edit: Updated according to Simon Hawe's comment
You could do something like this:
class A(object):
    def __init__(self, value=DEFAULT_VALUE):
        self._value = value

    def value(self):
        return self._value

class B(A):
    def __init__(self, value):
        super().__init__(value)
Now _value will only be the default value if a subclass did not provide a value to the super().__init__() call.
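For illustration, a compact, runnable version of that last pattern (DEFAULT_VALUE is assumed to be a module-level constant; 0 is just a placeholder here):
DEFAULT_VALUE = 0  # placeholder for the question's constant

class A(object):
    def __init__(self, value=DEFAULT_VALUE):
        self._value = value

    def value(self):
        return self._value

class B(A):
    def __init__(self, value):
        super().__init__(value)

class C(A):
    pass  # does not care about setting value, so it keeps the default

print(B(42).value())  # 42
print(C().value())    # 0, i.e. DEFAULT_VALUE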

Wrap all methods of python superclass

Is there a way to wrap all methods of a superclass, if I can't change its code?
As a minimal working example, consider this base class Base, which has many methods that return a new instance of itself, and the descendant class Child:
import math

class Base:
    def __init__(self, val):
        self.val = val

    def newinst_addseven(self):
        return Base(self.val + 7)

    def newinst_timestwo(self):
        return Base(self.val * 2)

    # ...

class Child(Base):
    @property
    def sqrt(self):
        return math.sqrt(self.val)
The issue here is that calling childinstance.newinst_addseven() returns an instance of Base, instead of Child.
Is there a way to wrap the Base class's methods to force a return value of the type Child?
With something like this wrapper:
def force_child_i(result):
"""Turn Base instance into Child instance."""
if type(result) is Base:
return Child(result.val)
return result
def force_child_f(fun):
"""Turn from Base- to Child-instance-returning function."""
def wrapper(*args, **kwargs):
result = fun(*args, **kwargs)
return force_child_i(result)
return wrapper
Many thanks!
PS: What I currently do is look at Base's source code and add the methods to Child directly, which is not very maintainable:
Child.newinst_addseven = force_child_f(Base.newinst_addseven)
Child.newinst_timestwo = force_child_f(Base.newinst_timestwo)
One option is to use a metaclass:
class ChildMeta(type):
    def __new__(cls, name, bases, dct):
        child = super().__new__(cls, name, bases, dct)
        for base in bases:
            for field_name, field in base.__dict__.items():
                if callable(field):
                    setattr(child, field_name, force_child_f(field))
        return child

class Child(Base, metaclass=ChildMeta):
    pass
It will automatically wrap all of Base's methods with your force_child_f decorator.
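As a quick sanity check, a short sketch assuming the Base, force_child_i, force_child_f and ChildMeta definitions above:
c = Child(9)
result = c.newinst_addseven()   # Base.newinst_addseven, wrapped by the metaclass
print(type(result).__name__)    # Child, not Base
print(result.val)               # 16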

How to convert all elements automatically in a list to a given object in Python

I want to create a list subclass that automatically converts every element in it to a given object, no matter whether the element arrives via __init__, append or extend, so that it works both in a for loop and via indexing (__getitem__). Here's a simple example. Which magic methods should I use?
class A():
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return 'Object A with name {}'.format(self.name)

class CustomerList(list):
    def __init__(self, *args):
        super(CustomerList, self).__init__(*args)

c = CustomerList('a')
c.append('b')
c[0]  # Object A with name a
c[1]  # Object A with name b

for ele in c:
    print(ele)
# Object A with name a
# Object A with name b
Are you asking how to override append?
class A():
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return 'Object A with name {}'.format(self.name)

class CustomerList(list):
    def __init__(self, *args):
        super(CustomerList, self).__init__(*args)

    def append(self, letter):
        super(CustomerList, self).append(A(letter))
I guess??? But, as mentioned in the comments, if you want
my_custom_list.extend(["A","B","V"])
my_custom_list[2] = "A"
to work you will need to override
def __setitem__(self, key, value):  # covers a[2] = 'A'
    super(CustomerList, self).__setitem__(key, A(value))

def extend(self, other):
    super(CustomerList, self).extend([A(val) for val in other])
Of course, you probably then need to override both __add__ and __iadd__ at a minimum as well.
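For completeness, a sketch of what those two overrides could look like, following the same pattern (they would sit in the CustomerList body alongside the methods above and assume the same class A):
def __add__(self, other):    # covers c + ['A', 'B']
    return CustomerList(list(self) + [A(val) for val in other])

def __iadd__(self, other):   # covers c += ['A', 'B']
    self.extend(other)       # the extend override above already converts to A
    return self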
I think what you're trying to do is: when you append a new item to the list, it should be an object of class A. What you can do is override the list.append function:
class A():
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return 'Object A with name {}'.format(self.name)

class CustomerList(list):
    def __init__(self, *args):
        super(CustomerList, self).__init__(*args)

    def append(self, arg):
        new_obj = A(arg)
        self.insert(len(self), new_obj)
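Note that none of the snippets above convert elements passed to the constructor (the c = CustomerList('a') case from the question). A self-contained sketch that also converts at construction time, taking a single iterable the way the built-in list does, might look like this:
class A():
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return 'Object A with name {}'.format(self.name)

class CustomerList(list):
    def __init__(self, iterable=()):
        # convert the initial elements as well, not just appended ones
        super(CustomerList, self).__init__(A(item) for item in iterable)

    def append(self, item):
        super(CustomerList, self).append(A(item))

    def extend(self, other):
        super(CustomerList, self).extend(A(item) for item in other)

    def __setitem__(self, key, value):
        super(CustomerList, self).__setitem__(key, A(value))

c = CustomerList('a')  # a string is a one-element iterable here
c.append('b')
print(c[0], c[1])      # Object A with name a Object A with name b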

Instances of a class and its sub-class sharing state in python

How to make one instance of a derived class share attributes and state with another instance of its base class in Python?
class Foo(object):
    def __init__(self, a, b):
        self.value = a

    def method1(self):
        self.value += 1
        return self.value

class Foo_child(Foo):
    def __init__(self, Foo_instance, c, d):
        super().__init__()

A = Foo(30, 40)
B = Foo_child(A, 50, 60)
What I need is some way in which changing B also affects A, and vice versa.
For example, if I call B.method1, then I need A.value to become 31, and vice versa. Is there an obvious Pythonic way to do this?
Your problem describes a containment (has-a) relationship, not an is-a relationship. It may still make sense for Foo_child to inherit from Foo (they provide the same functionality), but in essence you want to delegate handling of value and method1 to Foo_instance.
Re-implement method1 on Foo_child to delegate to Foo_instance; value on Foo_child should be a property that also delegates:
class Foo_child(Foo):
    def __init__(self, Foo_instance, c, d):
        # assign the delegate first: Foo.__init__ assigns self.value,
        # which goes through the delegating property below
        self.Foo_instance = Foo_instance
        super().__init__(c, d)

    @property
    def value(self):
        return self.Foo_instance.value

    @value.setter
    def value(self, value):
        self.Foo_instance.value = value

    def method1(self):
        return self.Foo_instance.method1()
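A short usage sketch, assuming the Foo and Foo_child definitions above (note that Foo.__init__ re-assigns the shared value to c when Foo_child is constructed):
a = Foo(30, 40)
b = Foo_child(a, 50, 60)   # the shared value is now 50

b.method1()
print(a.value, b.value)    # 51 51 -- one underlying state, two views
a.method1()
print(a.value, b.value)    # 52 52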

Can I refactor this simple callback pattern that uses the property decorator?

I'm just getting to grips with decorators in Python and using them to add callbacks to some instance variables using the following simple pattern:
class A(object):
    def __init__(self):
        self._var = 0
        self.var_callbacks = []

    @property
    def var(self):
        return self._var

    @var.setter
    def var(self, x):
        self._var = x
        for f in self.var_callbacks:
            f(x)
The property decorator is a neat way of allowing me to introduce callbacks where necessary without changing the class interface. However, after the third or fourth variable it's making the code a bit repetitive.
Is there a way to refactor this pattern into something along the lines of the following?
class A(object):
    def __init__(self):
        self.var = 0
        enable_callback(self, 'var', 'var_callbacks')
You'll need to set the property on the class (since it is a descriptor), so using an enable_callback call in the initializer is not going to work.
You could use a class decorator to set the properties from a pattern:
def callback_properties(callbacks_attribute, *names):
    def create_callback_property(name):
        def getter(self):
            return getattr(self, '_' + name)

        def setter(self, value):
            setattr(self, '_' + name, value)
            for f in getattr(self, callbacks_attribute):
                f(value)

        return property(getter, setter)

    def add_callback_properties(cls):
        for name in names:
            setattr(cls, name, create_callback_property(name))
        return cls

    return add_callback_properties
Then use that as:
@callback_properties('var_callbacks', 'var1', 'var2')
class A(object):
    # everything else
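A short usage sketch of that decorator (assuming the callback_properties factory above; the class body here, including setting up var_callbacks in __init__, is an illustration):
@callback_properties('var_callbacks', 'var1', 'var2')
class A(object):
    def __init__(self):
        self.var_callbacks = []
        self._var1 = 0   # direct assignment, so no callback fires during init
        self._var2 = 0

a = A()
a.var_callbacks.append(lambda x: print('new value:', x))
a.var1 = 42    # the generated setter runs the callback -> prints "new value: 42"
print(a.var1)  # 42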
Have a look at the Python descriptor protocol. In essence, you can define a class that handles the getting, setting and deleting of a property. So you could define a descriptor that runs your callbacks on setting the attribute.
Descriptors are regular classes and can be parameterized, so you could implement a descriptor that takes the destination variable in its constructor. Something like the following:
class A(object):
    var = CallbackDescriptor('var')
    foo = CallbackDescriptor('foo')
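CallbackDescriptor itself isn't defined in the answer; a minimal sketch of what such a descriptor could look like (the '_<name>' storage key and the '<name>_callbacks' list convention are assumptions for illustration):
class CallbackDescriptor(object):
    """Data descriptor that runs registered callbacks whenever the attribute is set."""

    def __init__(self, name):
        self.name = name   # attribute name, used as the storage key

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__.get('_' + self.name)

    def __set__(self, obj, value):
        obj.__dict__['_' + self.name] = value
        # run any callbacks registered under e.g. obj.var_callbacks
        for f in obj.__dict__.get(self.name + '_callbacks', []):
            f(value)

class A(object):
    var = CallbackDescriptor('var')
    foo = CallbackDescriptor('foo')

    def __init__(self):
        self.var_callbacks = []
        self.var = 0   # goes through __set__; no callbacks registered yet

a = A()
a.var_callbacks.append(lambda x: print('var is now', x))
a.var = 3   # prints: var is now 3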
