I am trying to create a reusable application in Python 2.6.
I am developing server-side scripts that listen to GPS tracking devices. The scripts use sockets.
I have a base class that defines the basic methods for handling the data sent by a device.
class MyDevice(object):
    def __init__(self, db):
        self.db = db  # the class that defines methods for connecting to / using the database

    def initialize_db(self):
        ...

    def handle_data(self):
        self.initialize_db()
        ...
        self.process_data()

    def process_data(self):
        ...
        self.categorize_data()

    def categorize_data(self):
        ...
        self.save_data()

    def save_data(self):
        ...
This base class serves many devices, since there are only minor differences between them. So I create a class for each specific device type and make the arrangements specific to that device.
class MyDeviceType1(MyDevice):
    def __init__(self, db):
        super(MyDeviceType1, self).__init__(db)

    def categorize_data(self):
        super(MyDeviceType1, self).categorize_data()
        self.prepopulate_data()
        ...  # Do other operations specific to this device

    def prepopulate_data(self):
        """This is specific to Type1 devices."""
        ...
class MyDeviceType2(MyDevice):
    def __init__(self, db):
        super(MyDeviceType2, self).__init__(db)

    def categorize_data(self):
        super(MyDeviceType2, self).categorize_data()
        self.unpopulate_data()
        ...  # Do other operations specific to this device

    def unpopulate_data(self):
        """This is specific to Type2 devices."""
        ...
And I have socket listeners that listen on specific sockets and call the related class (MyDeviceType1 or MyDeviceType2) like:
conn, address = socket.accept()
...
thread.start_new_thread(MyDeviceType1(db_connector).handle_data, ())
That structure is all fine and useful to me. One device (MyDevice) may have many subtypes (MyDeviceType1, MyDeviceType2) which inherit the base class.
And there is more than one type of base device, so there is OtherDevice with subtypes OtherDeviceType1, etc.
MyDevice and OtherDevice work quite differently, so they are separate base types and the underlying code differs considerably between them.
I also have some add-on functionalities. These functionalities are usable by one or two subtypes of nearly all the device base types.
So I want to prepare a single reusable (pluggable) class that can be inherited by any subtype that needs those functionalities.
class MyAddOn(object):
    def remove_unusable_data(self):
        ...

    def categorize_data(self):
        super ???
        self.remove_unusable_data()
And here is the part where I am stuck. Since this is an independent module, it should not inherit from MyDevice or OtherDevice, etc.; but since not all device subtypes use these functionalities, I cannot make MyDevice inherit from MyAddOn either.
The only logical approach seems to be inheriting the subtype MyDeviceType1 from both MyDevice and MyAddOn:
class MyDeviceType1(MyDevice, MyAddOn):
    def __init__(self, db):
        super(MyDeviceType1, self).__init__(db)

    def categorize_data(self):
        super(MyDeviceType1, self).categorize_data()  # <-- the problem part
        self.prepopulate_data()
        ...  # Do other operations specific to this device

    def prepopulate_data(self):
        """This is specific to Type1 devices."""
super(MyDeviceType1, self).categorize_data() is the problem part. super triggers MyDevice.categorize_data but not MyAddOn.categorize_data.
Is there any way to trigger the MyAddOn methods using the super call, or in such a fashion that I do not need to call that class's method separately? Both MyDevice.categorize_data and MyAddOn.categorize_data should be called.
This is called cooperative multiple inheritance in Python and works just fine.
What you refer to as an "add-on" class is generally called a "mixin".
Just call the super method in your mixin class:
class MyAddOn(object):
    def remove_unusable_data(self):
        ...

    def categorize_data(self):
        super(MyAddOn, self).categorize_data()
        self.remove_unusable_data()
I'd like to note some things:
The method resolution order is left to right
You have to call super
You should be using **kwargs for cooperative inheritance
It seems counterintuitive to call super here, as the parent of MyAddOn does not have an attribute called categorize_data, and you would expect this notation to fail.
This is where the super function comes into play. Some consider this behaviour to be the best thing about Python.
Unlike in C++ or Java, the super function does not necessarily call the class's parent class. In fact, it is impossible to know in advance which function will be called by super, because it is decided at run time based on the method resolution order.
super in Python should really be called "next", because it will call the next method in the method resolution order.
For mixins it is especially important to call super, even if you're inheriting from object.
For further information, I advise watching Raymond Hettinger's excellent talk "Super considered super!" from PyCon 2015.
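To make the cooperation concrete, here is a minimal runnable sketch with method bodies reduced to prints. Note that the mixin is listed first here, so it comes before the base class in the MRO (matching the FeatureMixin example below); this is one way to wire it up, not the only one:

class MyDevice(object):
    def categorize_data(self):
        # end of the chain: object has no categorize_data, so no super call here
        print("MyDevice.categorize_data")

class MyAddOn(object):
    def categorize_data(self):
        # "next in line" according to the MRO; for MyDeviceType1 that is MyDevice
        super(MyAddOn, self).categorize_data()
        self.remove_unusable_data()

    def remove_unusable_data(self):
        print("MyAddOn.remove_unusable_data")

class MyDeviceType1(MyAddOn, MyDevice):
    def categorize_data(self):
        super(MyDeviceType1, self).categorize_data()
        print("MyDeviceType1.categorize_data")

MyDeviceType1().categorize_data()
# prints:
#   MyDevice.categorize_data
#   MyAddOn.remove_unusable_data
#   MyDeviceType1.categorize_data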
It's an excellent pattern to use in Python. Here is a pattern I encounter often when programming structured applications that obey the open-closed principle:
I have this library class which is used in production:
class BaseClassA(object):
    def __init__(self, **kwargs):
        ...  # Do something that's important for many modules
    def ...

class BaseClassB(object):
    def __init__(self, **kwargs):
        ...  # Do something that's important for many modules
    def ...
Now you get a feature request that in a particular case both BaseClassA and BaseClassB should implement feature X.
According to the open-closed principle you shouldn't have to touch existing code to implement the feature, and according to DRY you shouldn't repeat the code.
The solution is to create a FeatureMixin and create empty child classes which inherit from the base class and the mixin:
class FeatureMixin(object):
    def __init__(self, **kwargs):
        ...  # do something specific
        return super(FeatureMixin, self).__init__(**kwargs)

class ExtendedA(FeatureMixin, BaseClassA):
    pass

class ExtendedB(FeatureMixin, BaseClassB):
    pass
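A quick way to convince yourself it works (assuming the ... bodies above are filled in with real code): inspect the MRO and construct one of the extended classes.

print(ExtendedA.__mro__)
# (ExtendedA, FeatureMixin, BaseClassA, object)

obj = ExtendedA()   # FeatureMixin.__init__ runs first and chains on to BaseClassA.__init__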
I have a pretty big class that I want to break down into smaller classes that each handle a single part of the whole. So each child takes care of only one aspect of the whole.
Each of these child classes still needs to communicate with the others.
For example, Data Access creates a dictionary that Plotting Controller needs to have access to.
And then Plotting Controller needs to update stuff on Main GUI Controller. These children have various other inter-communication functions as well.
How do I achieve this?
I've read Metaclasses, Cooperative Multiple Inheritance and Wonders of Cooperative Multiple Inheritance, but I cannot figure out how to do this.
The closest I've come is the following code:
class A:
    def __init__(self):
        self.myself = 'ClassA'

    def method_ONE_from_class_A(self, caller):
        print(f"I am method ONE from {self.myself} called by {caller}")
        self.method_ONE_from_class_B(self.myself)

    def method_TWO_from_class_A(self, caller):
        print(f"I am method TWO from {self.myself} called by {caller}")
        self.method_TWO_from_class_B(self.myself)


class B:
    def __init__(self):
        self.me = 'ClassB'

    def method_ONE_from_class_B(self, caller):
        print(f"I am method ONE from {self.me} called by {caller}")
        self.method_TWO_from_class_A(self.me)

    def method_TWO_from_class_B(self, caller):
        print(f"I am method TWO from {self.me} called by {caller}")


class C(A, B):
    def __init__(self):
        A.__init__(self)
        B.__init__(self)

    def children_start_talking(self):
        self.method_ONE_from_class_A('Big Poppa')


poppa = C()
poppa.children_start_talking()
which results correctly in:
I am method ONE from ClassA called by Big Poppa
I am method ONE from ClassB called by ClassA
I am method TWO from ClassA called by ClassB
I am method TWO from ClassB called by ClassA
But... even though class A and class B correctly call the other child's functions, they don't actually find their declarations. Nor do I "see" them when I'm typing the code, which is both frustrating and worrying, since I might be doing something wrong.
Is there a good way to achieve this? Or is it an actually bad idea?
EDIT: Python 3.7 if it makes any difference.
Inheritance
When breaking a class into a hierarchy like this, the individual "partial" classes, which we call "mixins", will "see" only what is declared directly on them and on their base classes. In your example, when writing class A, it does not know anything about class B. You, as the author, can know that methods from class B will be present, because methods from class A will only be called from class C, which inherits from both.
Your programming tools, including the IDE, can't know that. (That you should know better than your programming aid is a side track.) It would work if run, but this is a poor design.
If all methods are to be present directly on a single instance of your final class, all of them have to be "present" in a superclass common to them all. You can even write independent subclasses in different files, and then a single subclass that inherits from all of them:
from abc import abstractmethod, ABC


class Base(ABC):
    @abstractmethod
    def method_A_1(self):
        pass

    @abstractmethod
    def method_A_2(self):
        pass

    @abstractmethod
    def method_B_1(self):
        pass


class A(Base):
    def __init__(self, *args, **kwargs):
        # pop consumed named parameters from "kwargs"
        ...
        super().__init__(*args, **kwargs)
        # This call ensures all __init__ methods in the bases are called,
        # because Python linearizes the base classes on multiple inheritance

    def method_A_1(self):
        ...

    def method_A_2(self):
        ...


class B(Base):
    def __init__(self, *args, **kwargs):
        # pop consumed named parameters from "kwargs"
        ...
        super().__init__(*args, **kwargs)
        # This call ensures all __init__ methods in the bases are called,
        # because Python linearizes the base classes on multiple inheritance

    def method_B_1(self):
        ...

    ...


class C(A, B):
    pass
(The "ABC" and "abstractmethod" are a bit of sugar - they will work, but this design would work without any of that - thought their presence help whoever is looking at your code to figure out what is going on, and will raise an earlier runtime error if you per mistake create an instance of one of the incomplete base classes)
Composite
This works, but if your methods are actually for wildly different domains, then instead of multiple inheritance you should try using the composite design pattern. There is no need for multiple inheritance if it does not arise naturally.
In this case, you instantiate objects of the classes that drive the different domains in the __init__ of the shell class, and pass its own instance to those children, which will keep a reference to it (in a self.parent attribute, for example). Chances are your IDE still won't know what you are talking about, but you will have a saner design.
class Parent:
    def __init__(self):
        self.a_domain = A(self)
        self.b_domain = B(self)


class A:
    def __init__(self, parent):
        self.parent = parent
        # no need to call any "super().__init__" here; this is called
        # as part of the initialization of the parent class

    def method_A_1(self):
        ...

    def method_A_2(self):
        ...


class B:
    def __init__(self, parent):
        self.parent = parent

    def method_B_1(self):
        # need a result from the 'A' domain:
        a_value = self.parent.a_domain.method_A_1()
        ...
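Usage then looks like this (a sketch, assuming the ... bodies above are filled in with real logic):

parent = Parent()
parent.a_domain.method_A_1()     # work that belongs to the "A" domain
parent.b_domain.method_B_1()     # reaches back through self.parent.a_domain internally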
This example uses only basic language features, but if you decide to go for it in a complex application you can make it more sophisticated. There are interface patterns that could allow you to swap the classes used for the different domains in specialized subclasses, and so on. But typically the pattern above is what you would need.
I'm trying to figure out the best practice regarding Python inheritance principles: when is it a 'bad idea' to change a method's signature in a child class?
Let's suppose we have some base class BaseClient with an already implemented create method (and some abstract ones) that fits well for almost all 'descendants' except one:
class BaseClient(object):
    def __init__(self, connection=None):
        pass

    def create(self, entity_id, data=None):
        pass


class ClientA(BaseClient):
    pass


class ClientB(BaseClient):
    pass
Only the class ClientC needs another implementation of the create method, with a slightly different signature:
class ClientC(BaseClient):
    ...
    def create(self, data):
        pass
So the question is: how do I do this in a more 'Pythonic' way, taking into account best Python practice? Of course we could use *args, **kwargs and other **kwargs-like approaches in the parent (or child) method, but I'm afraid that makes my code less readable (self-documenting).
I'd say: just add the parameter back as a keyword argument with a default value of None. Then raise an error that explains that some of the input data is being lost.
class RedundantInformationError(TypeError):
    """Custom error for a caller passing information that will be ignored."""


class ClientC(BaseClient):
    ...
    def create(self, entity_id=None, data=None):
        if entity_id is not None:
            raise RedundantInformationError("Value for entity_id does nothing")
        pass
This way, whenever a programmer tries to handle child C like the other children, they'll get an error reminding them, which they can easily sidestep with a try/except block.
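In use, that behaves roughly like this (a sketch):

client = ClientC()
client.create(data={'name': 'foo'})                  # fine
client.create(entity_id=42, data={'name': 'foo'})    # raises RedundantInformationError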
The answer to "can I change the signature of child methods?" is yes; nonetheless, it is very bad practice.
A child method overriding the parent's must have the same signature if you want to be SOLID and not violate the Liskov substitution principle (LSP).
The example above:
class BaseClient:
    def create(self, entity_id, data=None):
        pass


class EntityBasedClient(BaseClient):
    def create(self, entity_id, data=None):
        pass


class DataBasedClient(BaseClient):
    def create(self, data):
        pass
This violates the principle and would also raise a linter warning ("Parameters differ from overridden 'create' method").
Also, raising a RedundantInformationError, as proposed by @Sanitiy to keep the signature consistent, still violates the principle, as the parent method would behave differently if used in place of the child method.
Take a look also at:
Python Method overriding, does signature matter?
I am not sure there is a Pythonic way of doing this, as you can just do as you did in the question. Rather, I would say that this is more of an OOP matter than a Pythonic one.
So I assume that there are methods other than create implemented in BaseClient that the other children share (otherwise, there is no point in making ClientC a child of BaseClient). In your case, it looks like ClientC diverges from the rest by requiring a different signature for the create method. Then maybe it is worth considering splitting them?
For example you could have the root BaseClient implement all shared methods except create, and then have two more "base" children, like this:
class EntityBasedClient(BaseClient):
    def create(self, entity_id, data=None):
        pass


class DataBasedClient(BaseClient):
    def create(self, data):
        pass
So now you can inherit without violating any rule:
class ClientA(EntityBasedClient):
    pass


class ClientB(EntityBasedClient):
    pass


class ClientC(DataBasedClient):
    pass
Also, if the create implementations of those two versions are pretty similar, you could avoid code duplication by implementing a more generic private method in BaseClient with the signature _create(self, entity_id=None, data=None), and then calling it with the appropriate arguments from inside EntityBasedClient and DataBasedClient.
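A minimal sketch of that last suggestion (illustrative only; _create holds whatever the two public variants have in common):

class BaseClient(object):
    def __init__(self, connection=None):
        self.connection = connection

    def _create(self, entity_id=None, data=None):
        # shared implementation used by both public variants
        ...


class EntityBasedClient(BaseClient):
    def create(self, entity_id, data=None):
        return self._create(entity_id=entity_id, data=data)


class DataBasedClient(BaseClient):
    def create(self, data):
        return self._create(data=data)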
Python 3.6
I just found myself programming this type of inheritance structure (below), where a subclass calls methods and attributes of an object the parent has.
In my use case I'm placing code in class A that would otherwise be ugly in class B.
It's almost like a reverse inheritance call or something, which doesn't seem like a good idea... (PyCharm doesn't seem to like it.)
Can someone please explain what is best practice in this scenario?
Thanks!
class A(object):
    def call_class_c_method(self):
        self.class_c.do_something(self)


class B(A):
    def __init__(self, class_c):
        self.class_c = class_c
        self.begin_task()

    def begin_task(self):
        self.call_class_c_method()


class C(object):
    def do_something(self):
        print("I'm doing something super() useful")


a = A
c = C
b = B(c)
outputs:
I'm doing something super() useful
There is nothing wrong with implementing a small feature in class A and using it as a base class for B. This pattern is known as a mixin in Python. It makes a lot of sense if you want to reuse A or want to compose B from many such optional features.
But make sure your mixin is complete in itself!
The original implementation of class A depends on the derived class to set a member variable. That is a particularly ugly approach. Better to define class_c as a member of A, where it is used:
class A(object):
    def __init__(self, class_c):
        self.class_c = class_c

    def call_class_c_method(self):
        self.class_c.do_something()


class B(A):
    def __init__(self, class_c):
        super().__init__(class_c)
        self.begin_task()

    def begin_task(self):
        self.call_class_c_method()


class C(object):
    def do_something(self):
        print("I'm doing something super() useful")


c = C()
b = B(c)
I find that reducing things to abstract letters in cases like this makes it harder for me to reason about whether the interaction makes sense.
In effect, you're asking whether it is reasonable for a class (A) to depend on a member that conforms to a given interface (C). The answer is that there are cases where it clearly does.
As an example, consider the model-view-controller pattern in web application design.
You might well have something like
class Controller:
    def get(self, request):
        return self.view.render(self, request)
or similar. Then elsewhere you'd have some code that finds the view and populates self.view in the controller. Typical examples of doing that include routing lookups or having a specific view associated with a controller. While not Python, the Rails web framework does a lot of this.
When we have specific examples, it's a lot easier to reason about whether the abstractions make sense.
In the above example, the controller interface depends on having access to some instance of the view interface to do its work. The controller instance encapsulates an instance that implements that view interface.
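In code, that wiring is often nothing more than constructor injection. A sketch with hypothetical names, just to make the shape concrete:

class TemplateView:
    def render(self, controller, request):
        return "<html>...</html>"          # build a response for the controller


class Controller:
    def __init__(self, view):
        self.view = view                   # the controller encapsulates a view instance

    def get(self, request):
        return self.view.render(self, request)


controller = Controller(TemplateView())    # routing/setup code does this wiring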
Here are some things to consider when evaluating such designs:
Can you clearly articulate the boundaries of each interface/class? That is, can you explain what the controller's job is and what the view's job is?
Does your decision to encapsulate an instance agree with those scopes?
Do the interface and class scopes seem reasonable when you think about future extensibility and about minimizing the scope of code changes?
I am pondering whether I should use inheritance or delegation to implement a kind of wrapper class. My problem is like this: say I have a class named Python.
class Python:
    def __init__(self):
        ...

    def snake(self):
        """Make the python snake through the forest."""
        ...

    def sleep(self):
        """Let the python sleep."""
        ...
... and much more behavior. Now I have existing code which expects an Anaconda, which is almost like a Python but slightly different: some members have slightly different names and parameters, and other members add new functionality. I really want to reuse the code in Python. Therefore I could do this with inheritance:
class Anaconda(Python):
    def __init__(self):
        Python.__init__(self)

    def wriggle(self):
        """Different name, same thing."""
        Python.snake(self)

    def devourCrocodile(self, croc):
        """A Python can't do this."""
        ...
Of course I can also call Anaconda().sleep(). But here is the problem: There is a PythonFactory which I need to use.
class PythonFactory:
    def makeSpecialPython(self):
        """Do a lot of complicated work to produce a special python."""
        ...
        return python
I want it to make a Python and then I should be able to convert it to an Anaconda:
myAnaconda = Anaconda(PythonFactory().makeSpecialPython())
In this case, delegation would be the way to go. (I don't know whether this can be done using inheritance):
class Anaconda:
    def __init__(self, python):
        self.python = python

    def wriggle(self):
        self.python.snake()

    def devourCrocodile(self, croc):
        ...
But with delegation, I cannot call Anaconda().sleep().
So, if you're still with me, my questions are:
A) In a case similar to this, where I need to
add some functionality
rename some functionality
use "base class" functionality otherwise
convert "base class" object to "subclass" object
should I use inheritance or delegation? (Or something else?)
B) An elegant solution would be to use delegation plus some special method that forwards, to its instance of Python, all attribute and method accesses that Anaconda does not respond to.
This is simple in Python, just define __getattr__:
class Anaconda:
    def __init__(self, python):
        self.python = python

    def wriggle(self):
        self.python.snake()

    def devourCrocodile(self, croc):
        ...

    def __getattr__(self, name):
        return getattr(self.python, name)
See the Python docs on __getattr__
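With that in place, anything Anaconda does not define itself falls through to the wrapped Python, so both the renamed and the inherited behaviour are available (a sketch, reusing the factory from the question):

myAnaconda = Anaconda(PythonFactory().makeSpecialPython())
myAnaconda.wriggle()    # Anaconda's own method; delegates to python.snake()
myAnaconda.sleep()      # not defined on Anaconda; __getattr__ forwards it to python.sleep()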
I have several similar classes which will all be initialised by the same code, and thus need to have the same "constructor signature." (Are there really constructors and signatures in the dynamic Python? I digress.)
What is the best way to define a class's __init__ parameters using zope.interface?
I'll paste some code I've used for experimenting with zope.interface to facilitate discussion:
from zope.interface import Interface, Attribute, implements, verify


class ITest(Interface):
    required_attribute = Attribute(
        """A required attribute for classes implementing this interface.""")

    def required_method():
        """A required method for classes implementing this interface."""


class Test(object):
    implements(ITest)

    required_attribute = None

    def required_method(self):
        pass


print verify.verifyObject(ITest, Test())
print verify.verifyClass(ITest, Test)
I can't just define an __init__ function in ITest, because it will be treated specially by the Python interpreter, I think? Whatever the case, it doesn't seem to work. So again, what is the best way to define a "class constructor" using zope.interface?
First of all: there is a big difference between the concepts of providing and implementing an interface.
Basically, classes implement an interface, instances of those classes provide that interface. After all, classes are the blueprints for instances, detailing their implementations.
Now, an interface describes the implementation provided by instances, but the __init__ method is not a part of instances! It is part of the interface directly provided by classes instead (a classmethod in Python terminology). If you were to define an __init__ method in your interface, you are declaring that your instances have (provide) a __init__ method as well (as an instance method).
So interfaces describe what kind of instances you get, not how you get them.
Now, interfaces can be used for more than just describing what functionality an instance provides. You can also use interfaces for any kind of object in Python, including modules and classes. You'll have to use the directlyProvides method to assign an interface to these, as you won't be calling them to create an instance. You can also use the @provider() class decorator, or the classProvides or moduleProvides functions from within a class or module declaration, to get the same results.
What you want in this case is a factory definition; classes are factories that when called, produce an instance, so a factory interface must provide a __call__ method to indicate they are callable. Here is your example set up with a factory interface:
from zope import interface


class ITest(interface.Interface):
    required_attribute = interface.Attribute(
        """A required attribute for classes implementing this interface.""")

    def required_method():
        """A required method for classes implementing this interface."""


class ITestFactory(interface.Interface):
    """Creates objects providing the ITest interface"""

    def __call__(a, b):
        """Takes two parameters"""


@interface.implementer(ITest)
@interface.provider(ITestFactory)
class Test(object):
    def __init__(self, a, b):
        self.required_attribute = a * b

    def required_method(self):
        return self.required_attribute
The zope.component package provides you with a convenience class and interface for factories, adding a getInterfaces method and a title and description to make discovery and introspection a little easier. You can then just subclass the IFactory interface to document your __init__ parameters a little better:
from zope import component


class ITestFactory(component.interfaces.IFactory):
    """Creates objects providing the ITest interface"""

    def __call__(a, b):
        """Takes two parameters"""


testFactory = component.Factory(Test, 'ITest Factory', ITestFactory.__doc__)
interface.directlyProvides(testFactory, ITestFactory)
You could now register that factory as a zope.component utility, for example, allowing other code to find all ITestFactory providers.
I used zope.interface.directlyProvides here to mark the factory instance with your subclassed ITestFactory interface, as zope.component.Factory instances normally only provide the IFactory interface.
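Registration and lookup could then look roughly like this (a sketch; in a real Zope application this is usually done through ZCML rather than directly in code):

from zope import component

component.provideUtility(testFactory, ITestFactory, name='test')

factory = component.getUtility(ITestFactory, name='test')
test_instance = factory(2, 3)    # calls Test(2, 3), which provides ITest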
No, __init__ is not handled differently:
from zope.interface import Interface, Attribute, implements, verify


class ITest(Interface):
    required_attribute = Attribute(
        """A required attribute for classes implementing this interface.""")

    def __init__(a, b):
        """Takes two parameters"""

    def required_method():
        """A required method for classes implementing this interface."""


class Test(object):
    implements(ITest)

    def __init__(self, a, b):
        self.required_attribute = a * b

    def required_method(self):
        return self.required_attribute


print verify.verifyClass(ITest, Test)
print verify.verifyObject(ITest, Test(2, 3))
I'm not 100% sure what you are asking, though. If you want to have the same constructor signature on several classes in Python, the only way to do that is to actually have the same constructor signature on those classes. :-) Whether you do this by subclassing or by having a separate __init__ for each class doesn't matter, as long as they have the same signature.
zope.interface is not about defining methods, but about declaring signatures. You can therefore define an interface that has a specific signature, including on __init__, but this is just saying "this object implements the interface IMyFace". Saying that a class implements an interface will not actually make the class implement the interface; you still need to implement it yourself.
What you are asking does not make much sense. The interface file is supposed to hold the interface description, not any specific implementation to be called from somewhere at any point. What you want is to inherit from a common base class. zope.interface is NOT about inheritance.