I have two classes that share a lot of common stuff except one function f(x).
class A(object):
    def __init__(self):
        ...  # some stuff

    def g(self):
        ...  # some other stuff

    def f(self, x):
        # many lines of computations
        q = ...
        y = ...
        return y

class B(A):
    def f(self, x):
        # same many lines as in A
        q = ...
        y = ...
        # a few extra lines
        z = ...  # z needs both y and q
        return z
In this case, do I have to define f(x) from scratch in class B? Is there some trick to re-use the code in A.f(x)?
One way I can think of is to make q an instance attribute self.q, and then do the following:
def f(self, x):
    y = A.f(self, x)
    # a few extra lines
    z = ...  # using y and self.q
    return z
Or maybe let A.f(x) return both q and y, then call A.f(self, x) in B's definition of f(x).
Are these approaches the standard way to do it? Is there something nicer?
Let's assume you want to organize your code around classes. If that's the case, then I would strongly recommend using super to reference the super class:
class MyA(object):
    def f(self, x):
        print('MyA')
        return x

class MyB(MyA):
    def f(self, x):
        print('MyB')
        print(super(MyB, self).f(x))
This approach allows you to stick with classes and is an idiomatic way of referencing parent classes.
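Applied to the question's A/B example, here is a minimal sketch of the "let A.f return both q and y" option the asker mentioned (the placeholder computations q = x * 2 and y = x + 1 are mine, not from the question):

class A(object):
    def f(self, x):
        # stand-ins for the many lines of computations
        q = x * 2
        y = x + 1
        return q, y

class B(A):
    def f(self, x):
        q, y = super().f(x)  # reuse everything A.f already computed
        # a few extra lines that need both y and q
        z = y + q
        return z

print(A().f(3))  # (6, 4)
print(B().f(3))  # 10

The trade-off is that A.f's existing callers now receive a (q, y) tuple instead of just y.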
If you don't need to organize your code this way, or you have reasons to break things out into functions that are usable by other parts of your code which don't care about these classes, then you can move your f logic into a module-level function.
Here's an example:
def f(x):
    return x

class MyA(object):
    def f(self, x):
        return f(x)

class MyB(MyA):
    def f(self, x):
        y = f(x)
        ...
        return y
Related
Consider the following class hierarchy:
        A
        |
  ______|______
  |     |     |
 Foo   Bar   Baz
The class A defines a __call__ method, which Foo, Bar and Baz implement. The classes also define various other methods I'm interested in. All three classes come from a third-party library, so I do not want to change their implementation. However, in some cases at runtime, I would like to alter the implementation of Foo's, Bar's and Baz's __call__ method. Right now, I do this by defining my own classes, which solely override the __call__ method:
 Foo    Bar    Baz
  |      |      |
 Foo1   Bar1   Baz1
Then, depending on the situation, I instantiate the desired class to a common variable. I'm not satisfied with this solution, as it has the following drawbacks:
The additions I make to __call__ are not necessarily uniquely determined by the class. Bar1 and Baz1 might share the implementation of __call__ (but Bar and Baz differ in other aspects). So, as far as I understand, I need to repeat code.
At runtime, instantiating the desired class requires m × n case distinctions (where m is the number of classes at the level of Foo, Bar, and Baz, and n is the number of classes at the level of Foo1, Bar1, and Baz1). This will grow rapidly as m or n increases. Ideally, I would like the number of case distinctions to be m + n.
I already had a look at https://python-patterns.guide/gang-of-four/composition-over-inheritance/#solution-3-the-decorator-pattern, at Cannot overwrite implementation for __call__ and at __call__ method of type class. However, these solutions do not fit perfectly to my case, as (1) I'm overriding a dunder method (2) of third party classes and (3) still would like to have access to the other class methods of Foo, Bar, and Baz.
Is there an elegant way to achieve what I am looking for?
Edit: The change of __call__ at runtime should only affect the instance object, not the class or all objects of that class at once.
Edit 2 (requested by @enzo):
# -*- coding: utf-8 -*-
from abc import ABCMeta, abstractmethod

class A(metaclass=ABCMeta):
    @abstractmethod
    def __call__(self, x, y):
        """Docstring A"""

    @abstractmethod
    def method1(self, a, b):
        """Docstring method1"""

class Foo(A):
    def __call__(self, x, y):
        return x + y  # indeed, no dependence on `self`

    def method1(self, a, b):
        # but this might depend on `self`
        pass

class Bar(A):
    def __call__(self, x, y):
        return x * y  # indeed, no dependence on `self`

    def method1(self, a, b):
        # but this might depend on `self`
        pass

    def method2(self, c, d):
        # but this might depend on `self`
        pass

class Baz(A):
    def __call__(self, x, y):
        return x ** y  # indeed, no dependence on `self`

    def method1(self, a, b):
        # but this might depend on `self`
        pass

    def method2(self, c, d):
        # but this might depend on `self`
        pass
I don't know if that's what you meant, but based on your sample code you could do something like this:
# Create a decorator to keep the reference to the original __call__
class CustomA(A):
    def __init__(self, obj):
        self.obj = obj

    def __call__(self, x, y):
        return self.obj(x, y)

    def method1(self, a, b):
        return self.obj.method1(a, b)

# Create a decorator that calls the original __call__
class MultiplyA(CustomA):
    def __init__(self, obj):
        super().__init__(obj)

    def __call__(self, x, y):
        result = super().__call__(x, y)
        return result * 10

# Create a decorator that ignores the original __call__
class DivideA(CustomA):
    def __init__(self, obj):
        super().__init__(obj)

    def __call__(self, x, y):
        # You can still access the original object's attributes here
        super().method1(x, y)
        return x / y
Usage:
foo = Foo()
print(foo(1, 2))
# Outputs 3
foo = MultiplyA(foo)
print(foo(1, 2))
# Outputs 30
bar = Bar()
print(bar(2, 3))
# Outputs 6
bar = DivideA(bar)
print(bar(10, 5))
# Outputs 2.0
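If you also want the methods the wrapper does not forward explicitly (Bar.method2, Baz.method2, ...) to stay reachable, one option, which is my addition and not part of the answer above, is a variant of CustomA with a __getattr__ that delegates to the wrapped object:

# Variant of CustomA (my addition): anything the wrapper does not define
# itself is looked up on the wrapped object instead.
class CustomA(A):
    def __init__(self, obj):
        self.obj = obj

    def __call__(self, x, y):
        return self.obj(x, y)

    def method1(self, a, b):
        return self.obj.method1(a, b)

    def __getattr__(self, name):
        # only reached when normal attribute lookup on the wrapper fails
        return getattr(self.obj, name)

With this variant in place of the CustomA above, something like DivideA(Bar()).method2(1, 2) resolves on the wrapped Bar instance.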
You can inject a custom __call__ into it:
class A():
    def __call__(self, *args, **kwargs):
        print("In A.__call__")

    def showCall(self):
        print(self.__call__)

class foo(A):
    pass

class bar(A):
    pass

class baz(A):
    pass

def my_custom_func():
    print("In my custom func")

f = foo()
f()
b = bar()
b()

f.showCall()
b.showCall()

print("##setting call##")
f.__call__ = my_custom_func

print("##running f calls##")
f.showCall()
f()

print("##running b calls##")
b.showCall()
b()
While this does solve your injection problem, other things might come into play where you still need the original __call__ method. In that case you will need to save it before setting the new one.
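A small sketch of that "save before setting" caveat; the variable g and the bookkeeping are mine, not part of the answer above:

g = foo()  # a fresh instance, so nothing has been injected yet

saved = g.__dict__.get('__call__')  # None here: no instance-level __call__ yet
g.__call__ = my_custom_func         # inject the replacement
g.showCall()                        # now reports my_custom_func

# Restore: put the saved value back, or drop the instance attribute so that
# g.__call__ resolves to A.__call__ on the class again.
if saved is None:
    del g.__call__
else:
    g.__call__ = saved
g.showCall()                        # reports the bound A.__call__ again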
I want to build objects dynamically, allowing users to mix the class properties in whichever way they like based on multiple inheritance. This is the expected behaviour. These classes are dataclasses, so there won't be many methods in them, mostly data properties.
class Foo():
    def bar(self, x):
        return x

class FooA(Foo):
    def bar(self, x):
        p = super().bar(x)
        p += __class__.__name__
        return p

class FooB(Foo):
    def bar(self, x):
        p = super().bar(x)
        p += __class__.__name__
        return p

class FooC(FooA, FooB):
    def bar(self, x):
        p = super().bar(x)
        p += __class__.__name__
        return p

f = FooC()
f.bar('S')  # SFooBFooAFooC
However, this code violates the DRY principle in broad daylight, hence I want to avoid defining the bar method completely if there are no special operations in the current class.
Ideally I want something like
@bar_wrapper
class FooA(Foo):
    pass

# OR

class FooA(Foo, metaclass=BarBase):
    pass
Instead of this full implementation
class FooA(Foo):
    def bar(self, x):
        p = super().bar(x)
        p += __class__.__name__
        return p
Essentially, is there a way to extract the middle-layer class information in a multi-level inheritance hierarchy through a decorator or metaclass (the two options that I can think of)? Does anyone have an idea how to do this?
Write a class decorator that adds the bar method to the class:
def bar_wrapper(cls):
    def bar(self, x):
        p = super(cls, self).bar(x)
        p += cls.__name__
        return p
    bar.__module__ = cls.__module__
    bar.__qualname__ = '{}.{}'.format(cls.__qualname__, bar.__name__)
    cls.bar = bar
    return cls

class Foo():
    def bar(self, x):
        return x

@bar_wrapper
class FooA(Foo):
    pass

@bar_wrapper
class FooB(Foo):
    pass

@bar_wrapper
class FooC(FooA, FooB):
    pass

f = FooC()
print(f.bar('S'))  # SFooBFooAFooC
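For completeness, the asker also floated a metaclass. Here is a rough sketch of that route (BarMeta is my name, not from the answer); it generates bar only for classes that don't define their own:

class BarMeta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if 'bar' not in namespace:  # only generate bar where the class doesn't write its own
            def bar(self, x):
                p = super(cls, self).bar(x)
                p += cls.__name__
                return p
            cls.bar = bar
        return cls

class Foo(metaclass=BarMeta):
    def bar(self, x):
        return x

class FooA(Foo):
    pass

class FooB(Foo):
    pass

class FooC(FooA, FooB):
    pass

print(FooC().bar('S'))  # SFooBFooAFooC

The decorator version above is usually simpler; the metaclass only saves you from repeating @bar_wrapper on every subclass.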
NOTE on the question below: I think the 'proper' Pythonic idiom is to create module-level functions, such as foo_math below, and then call them on an instance's value from within the class itself. The bottom piece of code reflects that approach.
I want to define a classmethod which takes two arguments and returns a value. I want the same method to be able to be called on a class instance, with the instance value passed as one of the arguments. Can I do this without defining two distinct methods as I have done here?
class Foo(object):
    def __init__(self, x):
        self.x = x

    @classmethod
    def foo_math(cls, x, y):
        return x + y

    def math(self, y):
        return Foo.foo_math(self.x, y)
What I would like is:
>>> Foo.math(3, 4)
7
>>> f = Foo()
>>> f.x = 3
>>> f.math(4)
7
Short of subtyping int, here is my conclusion to this question:
def foo_math(x, y):
    return x + y

class Foo(object):
    def __init__(self, x):
        self.x = x

    def foo_math(self, y):
        return foo_math(self.x, y)
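For concreteness, usage of that conclusion looks like this (assuming the placeholder bodies above):
>>> foo_math(3, 4)      # the module-level function
7
>>> Foo(3).foo_math(4)  # the method, using the instance's x
7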
I don't recommend doing this, but if you really want to, it's this (thank you to the other guy on Stack Overflow for the first part):
import functools

class staticorinstancemethod(object):
    def __init__(self, func):
        self.func = func

    def __get__(self, instance, owner):
        return functools.partial(self.func, instance)
then, do something like
class F(object):
    @staticorinstancemethod
    def math(instOrNone, v1, v2=None):
        return instOrNone.x + v1 if instOrNone else v1 + v2
but maybe you just want to define the __add__ and __radd__ methods...
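A minimal sketch of that __add__/__radd__ idea, assuming Foo just wraps a number as in the question:

class Foo(object):
    def __init__(self, x):
        self.x = x

    def __add__(self, other):
        return self.x + other

    __radd__ = __add__  # so plain_number + Foo(...) works too

print(Foo(3) + 4)  # 7
print(4 + Foo(3))  # 7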
I don't think you can call a method like this on the class without creating an instance of that class (an ordinary method is not a class method), so things like Foo.math(3, 4) will raise an error.
With this in mind, you should modify your code to something like this (even with that problem solved, there are still some issues with the code):
# A class method would probably go here somewhere.
class Foo(object):
    def __init__(self, x):
        self.x = x

    def foo_math(self, x, y):
        return x + y

    def math(self, y):
        return self.foo_math(self.x, y)
Then you can do:
>>> f = Foo(3)
>>> f.math(4)
7
I often use classmethods instead of the default constructor in Python, for example:
class Data(object):
    def __init__(self, x, y, z):
        self.x = x
        # etc.

    @classmethod
    def from_xml(cls, xml_file):
        x, y, z = import_something_from_xml(xml_file)
        return cls(x, y, z)
This approach works well, but since I often have large classmethod constructors, I want to split them up into smaller functions. My problem with that is that these smaller functions can be seen in the class namespace. Is there any way to avoid this?
You can mark the smaller helper functions as private:
@classmethod
def __import_something_from_xml(cls, data):
    # logic
    return a, b, c
and you would run:
@classmethod
def from_xml(cls, xml_file):
    x, y, z = cls.__import_something_from_xml(xml_file)
    return cls(x, y, z)
Keep in mind this is only a naming convention (plus name mangling): the method can still be accessed from the Data namespace, as Data._Data__import_something_from_xml.
Or you can designate a helper class:
class XMLDataHelper:
    @staticmethod
    def import_something_from_xml(data):
        # logic
        return a, b, c
And the code would look like this
@classmethod
def from_xml(cls, xml_file):
    x, y, z = XMLDataHelper.import_something_from_xml(xml_file)
    return cls(x, y, z)
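Put together, a hedged end-to-end sketch of the helper-class option (the body of import_something_from_xml is a placeholder, not a real XML parser):

class XMLDataHelper:
    @staticmethod
    def import_something_from_xml(data):
        # placeholder logic: pretend the file yields three values
        return 1, 2, 3

class Data(object):
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    @classmethod
    def from_xml(cls, xml_file):
        x, y, z = XMLDataHelper.import_something_from_xml(xml_file)
        return cls(x, y, z)

d = Data.from_xml("data.xml")
print(d.x, d.y, d.z)  # 1 2 3

The Data namespace now only gains from_xml; the helper lives on XMLDataHelper.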
I have a class (named "A") with some instance variables. I want to add the dir() of these variables to the dir() of instances of class A.
For example:
class A(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

class X(object):
    def f_x(self):
        pass

class Y(object):
    def f_y(self):
        pass

x = X(); y = Y()
a = A(x, y)
I want f_x and f_y to appear in
dir(a)
Is there a better way, or a more 'correct' one, than just iterating over X.__dict__ and Y.__dict__ and, for each element, using something like:
setattr(A, str(element), element)
Thanks.
A should really be a subclass of X and Y in this case. (Just be sure to read Michele Simionato's article on super and diamond inheritance before you get too deep into it.)
class X(object):
    def f_x(self):
        pass

class Y(object):
    def f_y(self):
        pass

class A(X, Y):
    def __init__(self, *args, **kwargs):  # splats optional
        # do what you need to here
        pass

dir(A(X(), Y()))  # Ah! Lisp!
However, if you really need things to be magic, then just override __getattr__ on A to look in self.x and self.y before throwing an error. But seriously, don't do this.
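For completeness, a rough sketch of that delegation (again, the answer advises against it), reusing X and Y from the question; the __dir__ override is my addition so that f_x and f_y also show up in dir(a):

class A(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __getattr__(self, name):
        # only reached when normal lookup fails; try the delegates
        for delegate in (self.x, self.y):
            if hasattr(delegate, name):
                return getattr(delegate, name)
        raise AttributeError(name)

    def __dir__(self):
        # merge the delegates' names into the normal dir() listing
        return sorted(set(super().__dir__()) | set(dir(self.x)) | set(dir(self.y)))

a = A(X(), Y())
print('f_x' in dir(a), 'f_y' in dir(a))  # True True
a.f_x()  # delegated to the X instance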
Why don't you simply inherit from both classes?
class B(A, X):
    pass

a = B()
dir(a)