Python inheritance structure

Python 3.6
I just found myself programming this type of inheritance structure (below), where a subclass calls methods and attributes of an object that its parent holds.
In my use case I'm placing code in class A that would otherwise be ugly in class B.
It feels almost like a reverse inheritance call or something, which doesn't seem like a good idea... (PyCharm doesn't seem to like it.)
Can someone please explain what is best practice in this scenario?
Thanks!
class A(object):
    def call_class_c_method(self):
        self.class_c.do_something(self)

class B(A):
    def __init__(self, class_c):
        self.class_c = class_c
        self.begin_task()

    def begin_task(self):
        self.call_class_c_method()

class C(object):
    def do_something(self):
        print("I'm doing something super() useful")

a = A
c = C
b = B(c)
outputs:
I'm doing something super() useful

There is nothing wrong with implementing a small feature in class A and using it as a base class for B. This pattern is known as a mixin in Python. It makes a lot of sense if you want to re-use A or want to compose B from many such optional features.
But make sure your mixin is complete in itself!
The original implementation of class A depends on the derived class to set a member variable. This is a particularly ugly approach. Better to define class_c as a member of A, where it is used:
class A(object):
    def __init__(self, class_c):
        self.class_c = class_c

    def call_class_c_method(self):
        self.class_c.do_something()

class B(A):
    def __init__(self, class_c):
        super().__init__(class_c)
        self.begin_task()

    def begin_task(self):
        self.call_class_c_method()

class C(object):
    def do_something(self):
        print("I'm doing something super() useful")

c = C()
b = B(c)

I find that reducing things to abstract letters in cases like this makes it harder for me to reason about whether the interaction makes sense.
In effect, you're asking whether it is reasonable for a class (A) to depend on a member that conforms to a given interface (C). The answer is that there are cases where it clearly does.
As an example, consider the model-view-controller pattern in web application design.
You might well have something like
class Controller:
    def get(self, request):
        return self.view.render(self, request)
or similar. Then elsewhere you'd have some code that finds the view and populates self.view in the controller. Typical ways of doing that include routing lookups or associating a specific view with each controller. While not Python, the Rails web framework does a lot of this.
When we have specific examples, it's a lot easier to reason about whether the abstractions make sense.
In the above example, the controller interface depends on having access to some instance of the view interface to do its work. The controller instance encapsulates an instance that implements that view interface.
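To make that concrete, here is a minimal runnable sketch of the wiring; the class names and the render signature are hypothetical stand-ins for whatever your framework provides:
class View:
    def render(self, controller, request):
        # A real view would use a template; this one just echoes the path.
        return "<html>" + request["path"] + "</html>"

class Controller:
    def __init__(self, view):
        self.view = view  # in a real framework, routing would populate this

    def get(self, request):
        return self.view.render(self, request)

controller = Controller(View())
print(controller.get({"path": "/home"}))  # <html>/home</html>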
Here are some things to consider when evaluating such designs:
Can you clearly articulate the boundaries of each interface/class? That is, can you explain what the controller's job is and what the view's job is?
Does your decision to encapsulate an instance agree with those scopes?
Do the interface and class scopes seem reasonable when you think about future extensibility and about minimizing the scope of code changes?

Related

Why define constants in a metaclass?

I've recently inherited some code. It has a class called SystemConfig that acts as a grab-bag of constants that are used across the code base. But while a few of the constants are defined directly on that class, a big pile of them are defined as properties of a metaclass of that class. Like this:
class _MetaSystemConfig(type):
    @property
    def CONSTANT_1(cls):
        return "value 1"

    @property
    def CONSTANT_2(cls):
        return "value 2"

    ...

class SystemConfig(metaclass=_MetaSystemConfig):
    CONSTANT_3 = "value 3"
    ...
The class is never instantiated; the values are just used as SystemConfig.CONSTANT_1 and so on.
No-one who is still involved in the project seems to have any idea why it was done this way, except that someone seems to think the guy who did it thought it made unit testing easier.
Can someone explain to me any advantages of doing it this way and why I shouldn't just move all the properties to the SystemConfig class and delete the metaclass?
Edit to add: The metaclass definition doesn't contain anything other than properties.
So I figured out why it was done this way. These properties were defined as properties because a number of them depended on each other - one for a directory, another for a subdirectory of that directory, several for files spread across the directories and so forth.
But @property doesn't work on classmethods. Python 3.9 changed @classmethod so that it could be stacked on top of @property, but this was deprecated again in Python 3.11 and later removed. So, as a workaround, he put the properties in a metaclass (presumably after seeing this question).
However, implementing a property decorator that works on classmethods is not exactly rocket science, so for the good of whoever comes after me and has to figure out what's going on, I've replaced the metaclass properties with class properties on the SystemConfig class. For anyone else who's trying to figure this out, this works as a decorator:
class class_property:
    def __init__(self, _g):
        self._g = _g

    def __get__(self, obj, cls):
        return self._g(cls)
Implementing a setter appears to be much more difficult, as __set__ is not used when assigning to class variables. But I don't need it.
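To illustrate the dependent-constant pattern described above, here is a hypothetical usage sketch (the directory values are made up):
class SystemConfig:
    @class_property
    def BASE_DIR(cls):
        return "/opt/app"

    @class_property
    def LOG_DIR(cls):
        # Depends on BASE_DIR, which is why these need to be properties.
        return cls.BASE_DIR + "/logs"

print(SystemConfig.LOG_DIR)  # /opt/app/logs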
Adding a set of constants to a class can be done with a simple decorator and no properties.
def add_constants(cls):
    cls.CONSTANT_1 = "value 1"
    cls.CONSTANT_2 = "value 2"
    return cls  # a class decorator must return the class

@add_constants
class SystemConfig:
    CONSTANT_3 = "value 3"
I'm not concerned about users shooting themselves in the foot by explicitly assigning a new value to any of the "constants", so I consider jumping through hoops just to add read-only class properties more trouble than it's worth.
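Usage is then plain attribute access; assuming the decorator returns the class as above, this prints the expected values:
print(SystemConfig.CONSTANT_1)  # value 1
print(SystemConfig.CONSTANT_3)  # value 3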
The problem with metaclasses is that they don't compose. If C1 uses metaclass M1 and C2 uses metaclass M2, you can't assume that class C3(C1, C2): ... will work, because the two metaclasses may not be compatible. The more metaclasses you introduce to do things you could have done without a metaclass, the more problems like this can arise. Use metaclasses when you have no other choice, not just because you think it's a cooler alternative to inheritance or decorators.
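A minimal sketch of that failure mode (the names are made up, but the error is exactly what CPython raises):
class M1(type): pass
class M2(type): pass

class C1(metaclass=M1): pass
class C2(metaclass=M2): pass

# TypeError: metaclass conflict: the metaclass of a derived class
# must be a (non-strict) subclass of the metaclasses of all its bases
class C3(C1, C2): pass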

Implement same methods in different classes

I really don't know how to word this problem, so I'll try to explain it with an example.
Let's say I have three GUI classes:
Base Surface class
Detailed Surface Class
Sprite Class
All of them are independent of each other, no inheritance among them.
Now I have a function drag() that makes a surface/sprite draggable, and I want to implement this function as a method on all three of them.
Since it's the exact same code for all implementations, I find it annoying, cumbersome and bad practice to rewrite the code.
The only thing I came up with so far was to make a separate class for it and inherit from that class. But that also doesn't seem to be the way to go.
I'd be very thankful for some advice.
EDIT
Another example with a slightly different setup - I have the following classes:
BaseSurface
Dragable
Resizable
EventHandler
Only the first one is independent, the others depend on the first (must be inherited).
The end user should, without any effort, be able to choose between a plain BaseSurface, one that implements dragging, one that is resizable, one with an event handler, and any combination of these. By "without any effort" I mean the end user should not have to make a custom class, inherit the desired classes, and call the appropriate methods (init, update, ...) that some of those classes share.
So what I could do is make a class for every possible combination, e.g.
"BaseSurfaceDrag", "BaseSurfaceDragResize", ...
which will get messy really quickly. What's a different and better approach to this?
This is exactly the kind of case that you should use a parent class for. In both cases it looks like your parent class (logically) should be something like:
class Drawable(object):
    def drag(self, *args, **kwargs):
        """Drag and drop behavior"""
        # Your code goes here
Then each of your other classes inherits from that:
class BaseSurface(Drawable):
    # stuff

class DetailedSurface(Drawable):
    # stuff

class Sprite(Drawable):
    # stuff
In the second case what you have are interfaces, so you could logically do something like:
class DragInterface(object):
    """Implements a `drag` method"""
    def drag(self):
        """Drag and drop behavior"""
        # Your code goes here

class ResizeInterface(object):
    """Implements a `resize` method"""
    def resize(self):
        """Drag and drop resize"""
        # Code

class EventHandlerInterface(object):
    """Handles events"""
    def handle(self, evt):
        # Code

class MyNewSurface(BaseSurface, DragInterface, ResizeInterface):
    """Draggable, resizeable surface"""
    # Implement here
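For the "without any effort" requirement from the edit, one option (a sketch with hypothetical names, not from the original answer) is a small factory that composes the chosen feature classes at runtime with type(), so the end user never writes the combination class:
class BaseSurface:
    def update(self):
        pass

class DragInterface:
    def drag(self):
        print("dragging")

class ResizeInterface:
    def resize(self):
        print("resizing")

def make_surface(*features):
    # Build a new class whose bases are the chosen features plus the base.
    name = "".join(f.__name__.replace("Interface", "") for f in features) + "Surface"
    return type(name, features + (BaseSurface,), {})

DragResizeSurface = make_surface(DragInterface, ResizeInterface)
surface = DragResizeSurface()
surface.drag()    # dragging
surface.resize()  # resizing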

Passing an instance to __init__. Is this a good idea?

Suppose I have a simple class like this:
class Class1(object):
    def __init__(self, property):
        self.property = property

    def method1(self):
        pass
An instance of Class1 returns a value that can be used in another class:
class Class2(object):
    def __init__(self, instance_of_class1, other_property):
        self.other_property = other_property
        self.instance_of_class1 = instance_of_class1

    def method1(self):
        # A method that uses self.instance_of_class1.property and self.other_property
        pass
This is working. However, I have the feeling that this is not a very common approach and maybe there are alternatives. Having said this, I tried to refactor my classes to pass simpler objects to Class2, but I found that passing the whole instance as an argument actually simplifies the code significantly. To use this, I have to do the following:
instance_of_class1 = Class1(property=value)
instance_of_class2 = Class2(instance_of_class1, other_property=other_value)
instance_of_class2.method1()
This is very similar to the way some R packages work. Is there a more "Pythonic" alternative?
There's nothing wrong with doing that, though in this particular example it looks like you could just as easily do
instance_of_class2 = Class2(instance_of_class1.property, other_property=other_value)
But if you find you need to use other properties/methods of Class1 inside of Class2, just go ahead and pass the whole Class1 instance into Class2. This kind of approach is used all the time in Python and OOP in general. Many common design patterns call for a class to take an instance (or several instances) of other classes: Proxy, Facade, Adapter, etc.
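A small sketch of why keeping the whole instance can pay off (the names here are hypothetical):
class Sensor:
    def __init__(self, reading):
        self.reading = reading

class Logger:
    def __init__(self, sensor):
        self.sensor = sensor  # hold the whole instance, not a copied value

    def log(self):
        # Sees later changes to sensor.reading, which a value copied
        # at __init__ time would miss.
        print("reading:", self.sensor.reading)

sensor = Sensor(1.0)
logger = Logger(sensor)
sensor.reading = 2.0
logger.log()  # reading: 2.0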

What is the correct way to extend a parent class method in modern Python

I frequently do this sort of thing:
class Person(object):
    def greet(self):
        print "Hello"

class Waiter(Person):
    def greet(self):
        Person.greet(self)
        print "Would you like fries with that?"
The line Person.greet(self) doesn't seem right. If I ever change what class Waiter inherits from I'm going to have to track down every one of these and replace them all.
What is the correct way to do this in modern Python? Both 2.x and 3.x; I understand there were changes in this area in 3.
If it matters any I generally stick to single inheritance, but if extra stuff is required to accommodate multiple inheritance correctly it would be good to know about that.
You use super:
Return a proxy object that delegates method calls to a parent or sibling class of type. This is useful for accessing inherited methods that have been overridden in a class. The search order is same as that used by getattr() except that the type itself is skipped.
In other words, a call to super returns a fake object which delegates attribute lookups to classes above you in the inheritance chain. Points to note:
This does not work with old-style classes -- so if you are using Python 2.x, you need to ensure that the top class in your hierarchy inherits from object.
You need to pass your own class and instance to super in Python 2.x. This requirement was waived in 3.x.
This will handle all multiple inheritance correctly. (When you have a multiple inheritance tree in Python, a method resolution order is generated and the lookups go through parent classes in this order; see the example below.)
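You can inspect that order directly; a quick diamond-inheritance example (Python 3 syntax):
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass

print([cls.__name__ for cls in D.__mro__])  # ['D', 'B', 'C', 'A', 'object']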
Take care: there are many places to get confused about multiple inheritance in Python. You might want to read super() Considered Harmful. If you are sure that you are going to stick to a single inheritance tree, and that you are not going to change the names of classes in said tree, you can hardcode the class names as you do above and everything will work fine.
Not sure if you're looking for this, but you can call a parent method without referring to the parent class by name:
super(Waiter, self).greet()
This will call the greet() function in Person.
katrielalex's answer is really the answer to your question, but this wouldn't fit in a comment.
If you plan to go about using super everywhere, and you ever think in terms of multiple inheritance, definitely read the "super() Considered Harmful" link. super() is a great tool, but it takes understanding to use correctly. In my experience, for simple things that don't seem likely to get into complicated diamond inheritance tangles, it's actually easier and less tedious to just call the superclass directly and deal with the renames when you change the name of the base class.
In fact, in Python2 you have to include the current class name, which is usually more likely to change than the base class name. (And in fact sometimes it's very difficult to pass a reference to the current class if you're doing wacky things; at the point when the method is being defined the class isn't bound to any name, and at the point when the super call is executed the original name of the class may not still be bound to the class, such as when you're using a class decorator)
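A contrived sketch of that class-decorator failure mode (hypothetical code; the explicit two-argument super makes it behave the same in Python 2 and 3):
def rename(cls):
    # Returns a new subclass that gets rebound to the original name.
    return type(cls.__name__, (cls,), {})

class Person(object):
    def greet(self):
        print("Hello")

@rename
class Waiter(Person):
    def greet(self):
        # "Waiter" is now bound to the decorator's subclass, so the lookup
        # starts one level too low and this call recurses forever.
        super(Waiter, self).greet()

Waiter().greet()  # infinite recursion (RecursionError in Python 3)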
I'd like to make this more explicit in this answer with an example. It's just like what we do in JavaScript: the short answer is, call the parent method via super, the same way we invoke the parent constructor.
class Person(object):
    def __init__(self, name):
        self.name = name

    def greet(self):
        print(f"Hello, I'm {self.name}")

class Waiter(Person):
    def __init__(self, name):
        super().__init__(name)
        # initiate the parent constructor
        # or super(Waiter, self).__init__(name)

    def greet(self):
        super(Waiter, self).greet()
        print("Would you like fries with that?")

waiter = Waiter("John")
waiter.greet()
# Hello, I'm John
# Would you like fries with that?

How to apply a "mixin" class to an old-style base class

I've written a mixin class that's designed to be layered on top of a new-style class, for example via
class MixedClass(MixinClass, BaseClass):
    pass
What's the smoothest way to apply this mixin to an old-style class? It is using a call to super in its __init__ method, so this will presumably (?) have to change, but otherwise I'd like to make as few changes as possible to MixinClass. I should be able to derive a subclass that makes the necessary changes.
I'm considering using a class decorator on top of a class derived from BaseClass, e.g.
@old_style_mix(MixinOldSchoolRemix)
class MixedWithOldStyleClass(OldStyleClass):
    ...
where MixinOldSchoolRemix is derived from MixinClass and just re-implements methods that use super to instead use a class variable that contains the class it is mixed with, in this case OldStyleClass. This class variable would be set by old_style_mix as part of the mixing process.
old_style_mix would just update the class dictionary of e.g. MixedWithOldStyleClass with the contents of the mixin class (e.g. MixinOldSchoolRemix) dictionary.
Is this a reasonable strategy? Is there a better way? It seems like this would be a common problem, given that there are numerous available modules still using old-style classes.
This class variable would be set by old_style_mix as part of the mixing process.
...I assume you mean: "...on the class it's decorating..." as opposed to "on the class that is its argument" (the latter would be a disaster).
old_style_mix would just update the class dictionary of e.g. MixedWithOldStyleClass with the contents of the mixin class (e.g. MixinOldSchoolRemix) dictionary.
No good -- the information that MixinOldSchoolRemix derives from MixinClass, for example, is not in the former's dictionary. So, old_style_mix must take a different strategy: for example, build a new class (which I believe has to be a new-style one, because old-style ones do not accept new-style ones as __bases__) with the appropriate sequence of bases, as well as a suitably tweaked dictionary.
Is this a reasonable strategy?
With the above provisos.
It seems like this would be a common problem, given that there are numerous available modules still using old-style classes.
...but mixins with classes that were never designed to take mixins are definitely not a common design pattern, so the problem isn't common at all (I don't remember seeing it even once in the many years since new-style classes were born, and I was actively consulting, teaching advanced classes, and helping people with Python problems for many of those years, as well as doing a lot of software development myself -- I do tend to have encountered any "reasonably common" problem that people may have with features which have been around long enough!-).
Here's example code for what your class decorator could do (if you prefer to have it in a class decorator rather than directly inline...):
>>> class Mixo(object):
...     def foo(self):
...         print 'Mixo.foo'
...         self.thesuper.foo(self)
...
>>> class Old:
...     def foo(self):
...         print 'Old.foo'
...
>>> class Mixed(Mixo, Old):
...     thesuper = Old
...
>>> m = Mixed()
>>> m.foo()
Mixo.foo
Old.foo
If you want to build Mixed under the assumed name/binding of Mixo in your decorator, you could do it with a call to type, or by setting Mixed.__name__ = cls.__name__ (where cls is the class you're decorating). I think the latter approach is simpler (warning, untested code -- the above interactive shell session is a real one, but I have not tested the following code):
def oldstylemix(mixin):
    def makemix(cls):
        class Mixed(mixin, cls):
            thesuper = cls
        Mixed.__name__ = cls.__name__
        return Mixed
    return makemix
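Usage would then look something like this (an untested sketch, like the code above, reusing the Mixo and Old classes from the shell session):
@oldstylemix(Mixo)
class MixedOld(Old):
    pass

m = MixedOld()
m.foo()
# Mixo.foo
# Old.foo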
