I have a lot of classes implementing some general tasks for different sites:
class AbstractCalculator:
    pass  # ... abstract methods lying here

class Realization1(AbstractCalculator):
    @classmethod
    def calculate_foo(...):
        # ...

    @classmethod
    def calculate_bar(...):
        # ...

class Realization2(AbstractCalculator):
    @classmethod
    def calculate_foo(...):
        # ...

    @classmethod
    def calculate_bar(...):
        # ...
Then I aggregate all those classes into one dictionary.
Now I introduce a new, different API:
class NewAbstractClass:
    # ... introducing new API ...

    @staticmethod
    def adopt(old_class):
        # ... converting AbstractCalculator realizations to NewAbstractClass
And then I use the adopt() method as a decorator to convert all the old realizations to the new API.
But it all feels very strange and complicated. Is there a better way to do this?
UPD @ColinMcGrath:
No, I am definitely asking about something else.
My adopt() decorator works, and I have no problems with how it functions (its body is just not related to my question, so I have not provided it).
I think that hardcoding the decoration of several dozen different classes right in their source code is not the best idea, and I am looking for a canonical solution.
In general, there is no magic way for code to know that one API is equivalent to another.
However, the mechanisms Python provides for procedural generation of classes are metaclasses and class decorators. You could use a metaclass which uses some information you give it to generate another class.
This is a good introduction: http://eli.thegreenplace.net/2011/08/14/python-metaclasses-by-example/
Because of Python's "duck typing", abstract classes are rarely necessary and something of a code smell. You should probably just redesign your code so that the renaming and the abstract classes are unnecessary; in particular, if your code needs to change, it needs to change. I would only do something like this if you are wrapping external code to make several disparate APIs compatible.
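For instance (a hypothetical sketch, not code from the question), any object that provides a calculate_foo() method can be passed wherever the abstract base was expected; no common base class is required:

def run_foo_report(calculator):
    # duck typing: all we require is that the argument has calculate_foo(),
    # not that it inherits from AbstractCalculator
    return calculator.calculate_foo()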
A metaclass-based renamer could look something like this:
class renamer(type):
    def __init__(cls, classname, bases, dct):
        super(renamer, cls).__init__(classname, bases, dct)
        # bind every (new name, original callable) pair as an attribute of the new class
        for newname, original in dct.get('__rename__', ()):
            setattr(cls, newname, original)

class orig(object):
    def foo(*params): pass

class newclass(object):
    __metaclass__ = renamer   # Python 2 syntax; in Python 3 use "class newclass(metaclass=renamer)"
    __rename__ = (('bar', orig.foo),)
Or, you could do a similar thing with a decorator:
def renamer(subs):
    def thedecorator(inclass):
        meta = type(inclass)
        dct = inclass.__dict__.copy()
        dct.update(subs)
        return meta(inclass.__name__, inclass.__bases__, dct)
    return thedecorator
@renamer((('bar', orig.foo),))
class newclass(object): pass
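Either way, newclass picks up a bar attribute referring to orig.foo. Note that under Python 2 orig.foo is an unbound method, so if you want to call the alias on newclass instances you may prefer to store the plain function instead (orig.__dict__['foo'] or orig.foo.__func__).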
Python 3.6
I just found myself programming this type of inheritance structure (below), where a subclass calls methods and attributes of an object that its parent uses.
In my use case I'm placing code in class A that would otherwise be ugly in class B.
Almost like a reverse inheritance call or something, which doesn't seem like a good idea... (PyCharm doesn't seem to like it.)
Can someone please explain what is best practice in this scenario?
Thanks!
class A(object):
    def call_class_c_method(self):
        self.class_c.do_something(self)

class B(A):
    def __init__(self, class_c):
        self.class_c = class_c
        self.begin_task()

    def begin_task(self):
        self.call_class_c_method()

class C(object):
    def do_something(self):
        print("I'm doing something super() useful")

a = A
c = C
b = B(c)
outputs:
I'm doing something super() useful
There is nothing wrong with implementing a small feature in class A and using it as a base class for B. This pattern is known as a mixin in Python. It makes a lot of sense if you want to re-use A or want to compose B from many such optional features.
But make sure your mixin is complete in itself!
The original implementation of class A depends on the derived class to set a member variable, which is a particularly ugly approach. Better to define class_c as a member of A, where it is used:
class A(object):
    def __init__(self, class_c):
        self.class_c = class_c

    def call_class_c_method(self):
        self.class_c.do_something()

class B(A):
    def __init__(self, class_c):
        super().__init__(class_c)
        self.begin_task()

    def begin_task(self):
        self.call_class_c_method()

class C(object):
    def do_something(self):
        print("I'm doing something super() useful")

c = C()
b = B(c)
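Note that call_class_c_method can now call self.class_c.do_something() without passing self explicitly, because class_c is an actual C instance (c = C()) rather than the class object, as it was in the original snippet.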
I find that reducing things to abstract letters in cases like this makes it harder for me to reason about whether the interaction makes sense.
In effect, you're asking whether it is reasonable for a class (A) to depend on a member that conforms to a given interface (C). The answer is that there are cases where it clearly does.
As an example, consider the model-view-controller pattern in web application design.
You might well have something like
class Controller:
    def get(self, request):
        return self.view.render(self, request)
or similar. Then elsewhere you'd have some code that finds the view and populates self.view in the controller. Typical ways of doing that include routing lookups or associating a specific view with each controller. While not Python, the Rails web framework does a lot of this.
When we have specific examples, it's a lot easier to reason about whether the abstractions make sense.
In the above example, the controller interface depends on having access to some instance of the view interface to do its work. The controller instance encapsulates an instance that implements that view interface.
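A minimal sketch of that wiring (the names here are made up for illustration, not taken from any particular framework):

class JsonView:
    def render(self, controller, request):
        # the view knows how to present; the controller decides what to present
        return {"handled_by": type(controller).__name__, "path": request}

class ItemController:
    def __init__(self, view):
        self.view = view          # the view instance is injected here, e.g. by a router

    def get(self, request):
        return self.view.render(self, request)

print(ItemController(JsonView()).get("/items"))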
Here are some things to consider when evaluating such designs:
Can you clearly articulate the boundaries of each interface/class? That is, can you explain what the controller's job is and what the view's job is?
Does your decision to encapsulate an instance agree with those scopes?
Do the interface and class scopes seem reasonable when you think about future extensibility and about minimizing the scope of code changes?
Suppose I have a simple class like this:
class Class1(object):
    def __init__(self, property):
        self.property = property

    def method1(self):
        pass
An instance of Class1 provides a value that can be used in another class:
class Class2(object):
    def __init__(self, instance_of_class1, other_property):
        self.other_property = other_property
        self.instance_of_class1 = instance_of_class1

    def method1(self):
        # A method that uses self.instance_of_class1.property and self.other_property
        pass
This is working. However, I have the feeling that this is not a very common approach and maybe there are alternatives. Having said this, I tried to refactor my classes to pass simpler objects to Class2, but I found that passing the whole instance as an argument actually simplifies the code significantly. In order to use this, I have to do this:
instance_of_class1 = Class1(property=value)
instance_of_class2 = Class2(instance_of_class1, other_property=other_value)
instance_of_class2.method1()
This is very similar to the way some R packages look like. Is there a more "Pythonic" alternative?
There's nothing wrong with doing that, though in this particular example it looks like you could just as easily do
instance_of_class2 = Class2(instance_of_class1.property, other_property=other_value).
But if you find you need to use other properties/methods of Class1 inside of Class2, just go ahead and pass the whole Class1 instance into Class2. This kind of approach is used all the time in Python and OOP in general. Many common design patterns call for a class to take an instance (or several instances) of other classes: Proxy, Facade, Adapter, etc.
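As a rough illustration of that idea (hypothetical classes, not the code above), a wrapper in the Adapter spirit simply keeps the instance it was given and reads whatever it needs from it:

class Sensor:
    def __init__(self, value):
        self.value = value

class SensorReport:
    def __init__(self, sensor, unit):
        # keep the whole Sensor instance instead of copying individual fields out of it
        self.sensor = sensor
        self.unit = unit

    def summary(self):
        return "{} {}".format(self.sensor.value, self.unit)

report = SensorReport(Sensor(42), unit="mm")
print(report.summary())   # -> 42 mm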
Not sure if this is a dupe or not. Here it goes.
I need to write some Python code that looks like:
class TestClass:
    def test_case(self):
        def get_categories(self):
            return ["abc", "bcd"]
        # do the test here
and then have a test engine class that scans all these test classes, loads all the test_case functions, and for each invokes get_categories to find out whether the test belongs to the group of interest for the specific run.
The problem is that get_categories is not seen as an attribute of test_case, and even if I manually assign it
class TestClass:
    def test_case(self):
        def get_categories(self):
            return ["abc", "bcd"]
        # do the test here
        test_case.get_categories = get_categories
this is only going to happen when test_case first runs, too late for me.
The reason why this function can't go on the class (or at least why I want it to also be available at the per-function level) is that a TestClass can have multiple test cases.
Since this is an already existing testing infrastructure, and the categories mechanism works (other than the categories-on-function scenario, which is of lesser importance), a rewrite is not in the plans.
Language tricks dearly appreciated.
Nested functions don't become attributes any more than any other assignment.
I suspect your test infrastructure is doing some severely weird things if this isn't supported (and uses old-style classes!), but you could just do this:
class TestClass:
    def test_case(self):
        # ...
        pass

    def _get_categories(self):
        return [...]

    test_case.get_categories = _get_categories
    del _get_categories
Class bodies are executable code like any other block.
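With that in place, a scanning engine can read the categories up front without ever running the test. A rough sketch (this engine code is hypothetical, and assumes _get_categories returns real category names rather than the placeholder above):

wanted = "abc"
for name in dir(TestClass):
    method = getattr(TestClass, name)
    get_categories = getattr(method, "get_categories", None)
    if get_categories is not None and wanted in get_categories(TestClass()):
        print("test %s is in category %s" % (name, wanted))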
What you need is nested classes. Functions aren't made to do what you are trying to do, so you have to move up a notch. Function attributes are mainly used as markup, whereas classes can have anything you want.
class TestClass(object):
    class TestCase(object):
        @classmethod
        def get_categories(cls):
            return ['abc', 'efg']
Note that I used @classmethod so that you could use it without instantiating TestCase(); modify if you want to do test_case = TestCase().
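For example, with the classmethod variant the scanning engine can read the categories without creating any instances:

print(TestClass.TestCase.get_categories())   # -> ['abc', 'efg']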
I am pondering if I should use inheritance or delegation to implement a kind of wrapper class. My problem is like this: Say I have a class named Python.
class Python:
    def __init__(self):
        ...

    def snake(self):
        """ Make python snake through the forest"""
        ...

    def sleep(self):
        """ Let python sleep """
        ...
... and much more behavior. Now I have existing code which expects an Anaconda, which is almost like a Python, but slightly different: Some members have slightly different names and parameters, other members add new functionality. I really want to reuse the code in Python. Therefore I could do this with inheritance:
class Anaconda(Python):
    def __init__(self):
        Python.__init__(self)

    def wriggle(self):
        """Different name, same thing"""
        Python.snake(self)

    def devourCrocodile(self, croc):
        """ Python can't do this"""
        ...
Of course I can also call Anaconda().sleep(). But here is the problem: There is a PythonFactory which I need to use.
class PythonFactory:
    def makeSpecialPython(self):
        """ Do a lot of complicated work to produce a special python"""
        ...
        return python
I want it to make a Python and then I should be able to convert it to an Anaconda:
myAnaconda = Anaconda(PythonFactory().makeSpecialPython())
In this case, delegation would be the way to go. (I don't know whether this can be done using inheritance):
class Anaconda:
    def __init__(self, python):
        self.python = python

    def wriggle(self):
        self.python.snake()

    def devourCrocodile(self, croc):
        ...
But with delegation, I cannot call Anaconda().sleep().
So, if you're still with me, my questions are:
A) In a case similar to this, where I need to
add some functionality
rename some functionality
use "base class" functionality otherwise
convert "base class" object to "subclass" object
should I use inheritance or delegation? (Or something else?)
B) An elegant solution would be to use delegation plus some special method that forwards all attribute and method accesses which Anaconda does not respond to to its instance of Python.
B) An elegant solution would be to use delegation plus some special method that forwards all attribute and method accesses which Anaconda does not respond to to its instance of Python.
This is simple in Python, just define __getattr__:
class Anaconda:
    def __init__(self, python):
        self.python = python

    def wriggle(self):
        self.python.snake()

    def devourCrocodile(self, croc):
        ...

    def __getattr__(self, name):
        return getattr(self.python, name)
See the Python docs on __getattr__
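With this, anything Anaconda does not define itself falls through to the wrapped Python instance, so both the renamed and the untouched methods work; for example, using the factory from the question:

anaconda = Anaconda(PythonFactory().makeSpecialPython())
anaconda.wriggle()   # defined on Anaconda, forwards to python.snake()
anaconda.sleep()     # not defined on Anaconda, so __getattr__ forwards it to python.sleep()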
I've written a mixin class that's designed to be layered on top of a new-style class, for example via
class MixedClass(MixinClass, BaseClass):
    pass
What's the smoothest way to apply this mixin to an old-style class? It is using a call to super in its __init__ method, so this will presumably (?) have to change, but otherwise I'd like to make as few changes as possible to MixinClass. I should be able to derive a subclass that makes the necessary changes.
I'm considering using a class decorator on top of a class derived from BaseClass, e.g.
@old_style_mix(MixinOldSchoolRemix)
class MixedWithOldStyleClass(OldStyleClass):
    ...
where MixinOldSchoolRemix is derived from MixinClass and just re-implements methods that use super to instead use a class variable that contains the class it is mixed with, in this case OldStyleClass. This class variable would be set by old_style_mix as part of the mixing process.
old_style_mix would just update the class dictionary of e.g. MixedWithOldStyleClass with the contents of the mixin class (e.g. MixinOldSchoolRemix) dictionary.
Is this a reasonable strategy? Is there a better way? It seems like this would be a common problem, given that there are numerous available modules still using old-style classes.
This class variable would be set by old_style_mix as part of the mixing process.
...I assume you mean: "...on the class it's decorating..." as opposed to "on the class that is its argument" (the latter would be a disaster).
old_style_mix would just update the class dictionary of e.g. MixedWithOldStyleClass with the contents of the mixin class (e.g. MixinOldSchoolRemix) dictionary.
No good -- the information that MixinOldSchoolRemix derives from MixinClass, for example, is not in the former's dictionary. So, old_style_mix must take a different strategy: for example, build a new class (which I believe has to be a new-style one, because old-style ones do not accept new-style ones as __bases__) with the appropriate sequence of bases, as well as a suitably tweaked dictionary.
Is this a reasonable strategy?
With the above provisos.
It seems like this would be a common problem, given that there are numerous available modules still using old-style classes.
...but mixins with classes that were never designed to take mixins are definitely not a common design pattern, so the problem isn't common at all (I don't remember seeing it even once in the many years since new-style classes were born, and I was actively consulting, teaching advanced classes, and helping people with Python problems for many of those years, as well as doing a lot of software development myself -- I do tend to have encountered any "reasonably common" problem that people may have with features which have been around long enough!-).
Here's example code for what your class decorator could do (if you prefer to have it in a class decorator rather than directly inline...):
>>> class Mixo(object):
...     def foo(self):
...         print 'Mixo.foo'
...         self.thesuper.foo(self)
...
>>> class Old:
...     def foo(self):
...         print 'Old.foo'
...
>>> class Mixed(Mixo, Old):
...     thesuper = Old
...
>>> m = Mixed()
>>> m.foo()
Mixo.foo
Old.foo
If you want to build Mixed under the assumed name/binding of Mixo in your decorator, you could do it with a call to type, or by setting Mixed.__name__ = cls.__name__ (where cls is the class you're decorating). I think the latter approach is simpler (warning, untested code -- the above interactive shell session is a real one, but I have not tested the following code):
def oldstylemix(mixin):
    def makemix(cls):
        class Mixed(mixin, cls):
            thesuper = cls
        Mixed.__name__ = cls.__name__
        return Mixed
    return makemix