Name of Design Pattern: get class from class level - python

Especially in unit tests we use this "design pattern" that I call "get class from class level".
frameworktest.py:

class FrameWorkHttpClient(object):
    ...

class FrameWorkTestCase(unittest.TestCase):
    # Subclasses can control the class which gets used in get_response()
    HttpClient = FrameWorkHttpClient

    def get_response(self, url):
        client = self.HttpClient()
        return client.get(url)
mytest.py:

class MyHttpClient(FrameWorkHttpClient):
    ...

class MyTestCase(FrameWorkTestCase):
    HttpClient = MyHttpClient

    def test_something(self):
        response = self.get_response(url)
        ...
The method get_response() gets the class from self, not by importing it. This way a subclass can override the attribute and use a different HttpClient.
What's the name of this (get class from class level) "design pattern"?
Is this a way of "inversion of control" or "dependency injection"?

Your code is very similar to the Factory Method pattern. The only difference is that your variant uses a factory class attribute instead of a factory method.

I believe this serves the same purpose as simple polymorphism, implemented using Python-specific syntax. Instead of having a virtual method that returns new instances, you store the instance type as "an overridable variable" in a class/subclass.
This can be rewritten as a virtual method (sorry, I am not fluent in Python, so this is just pseudocode):

virtual HttpClient GetClient()
    return new FrameworkHttpClient()

then in the subclass, you change the implementation of the method to return a different type:

override HttpClient GetClient()
    return new MyHttpClient()
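In Python, a rough, hedged rendering of that idea using the classes from the question might look like this (get_client is a name I made up for the factory method):

import unittest

class FrameWorkTestCase(unittest.TestCase):
    def get_client(self):
        # Default factory method: returns the framework's client.
        return FrameWorkHttpClient()

    def get_response(self, url):
        return self.get_client().get(url)

class MyTestCase(FrameWorkTestCase):
    def get_client(self):
        # The subclass overrides the factory method to return a different type.
        return MyHttpClient()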
If you want to call this a pattern, I would say it is similar to Strategy GoF pattern. In your particular case, the algorithm being abstracted away is the creation of the particular HttpClient implementation.
And on second thought: as you stated, this can indeed be looked at as an IoC example.

I'm not exactly a design pattern 'Guru', but to me it looks a bit like the Template Method Pattern. You are defining the 'skeleton' of the get_response method in your base class, and leaving one step (defining which class to use) to the subclasses.
If this can be considered the template pattern, it is an example of inversion of control.

You want to let the subclasses decide which class to instantiate.
This is what the factory method pattern already offers:
Define an interface for creating an object, but let the classes that implement the interface decide which class to instantiate. The Factory method lets a class defer instantiation to subclasses. (GoF)
You're solving the same problem by overriding a class attribute of the parent class.
It works, but your solution has at least two drawbacks (compared to the classic pattern):
You introduce temporal coupling (a design smell): the client must perform the steps in the right order (first set HttpClient, then invoke get_response()).
Your test case is not immutable. Immutable classes are simpler than mutable ones, and in my opinion tests should always be simple.

Related

Using inheritance over composition in a proxy-pattern

I'm just curious why all examples of the proxy pattern written in Python use composition over inheritance. If the proxy class should implement all of the methods of the original class, isn't it easier to inherit the proxy from the original and just override the methods we want to add logic to (caching, logging, etc.), using super().method()?
The relationship between the classes is also respected: the proxy class is a kind of original class.
Example:

import logging

logger = logging.getLogger(__name__)

class Original:
    def some_method(self):
        return "some"

    def another_method(self):
        return "another"

class Proxy(Original):
    def another_method(self):
        res = super().another_method()
        logger.info(res)
        return res
Let's take a look at one of the "classical" diagrams for the proxy pattern (from Wikipedia): a Subject interface, a RealSubject that implements it, and a Proxy that also implements Subject and forwards calls to the RealSubject.
I would argue that the statement "the proxy class should implement all of the methods of the original class" is not true - the proxy class should implement all of the "contract" methods (the Subject interface), and it hides the implementation detail, i.e. the RealSubject, from the user (the RealSubject can also potentially have other public methods not defined by the interface).
One more thing to consider: composition gives the proxy the ability to control the lifetime of the proxied instance, while with inheritance (if we skip the fact that there is no more "proxying" happening) that is not possible for the "proxy" itself.
Also note that there are benefits (and drawbacks) in choosing composition over inheritance in general.
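For contrast, here is a hedged sketch of the composition-based variant, reusing Original and the logger from the snippet above (ComposedProxy is an invented name): because the proxy owns the wrapped instance, it can create it lazily, replace it, or discard it.

class ComposedProxy:
    def __init__(self, wrapped=None):
        # The proxy controls the lifetime of the real subject.
        self._wrapped = wrapped if wrapped is not None else Original()

    def some_method(self):
        return self._wrapped.some_method()

    def another_method(self):
        res = self._wrapped.another_method()
        logger.info(res)
        return res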
Some more useful reading:
Prefer composition over inheritance? SO answer
Differences between Proxy and Decorator Pattern

What is the difference between a mixin class and standard multiple inheritance in Python? [duplicate]

In Programming Python, Mark Lutz mentions the term mixin. I am from a C/C++/C# background and I have not heard the term before. What is a mixin?
Reading between the lines of this example (which I have linked to because it is quite long), I am presuming it is a case of using multiple inheritance to extend a class as opposed to proper subclassing. Is this right?
Why would I want to do that rather than put the new functionality into a subclass? For that matter, why would a mixin/multiple inheritance approach be better than using composition?
What separates a mixin from multiple inheritance? Is it just a matter of semantics?
A mixin is a special kind of multiple inheritance. There are two main situations where mixins are used:
You want to provide a lot of optional features for a class.
You want to use one particular feature in a lot of different classes.
For an example of number one, consider werkzeug's request and response system. I can make a plain old request object by saying:
from werkzeug import BaseRequest

class Request(BaseRequest):
    pass
If I want to add accept header support, I would make that
from werkzeug import BaseRequest, AcceptMixin

class Request(AcceptMixin, BaseRequest):
    pass
If I wanted to make a request object that supports accept headers, etags, authentication, and user agent support, I could do this:
from werkzeug import BaseRequest, AcceptMixin, ETagRequestMixin, UserAgentMixin, AuthenticationMixin

class Request(AcceptMixin, ETagRequestMixin, UserAgentMixin, AuthenticationMixin, BaseRequest):
    pass
The difference is subtle, but in the above examples, the mixin classes weren't made to stand on their own. In more traditional multiple inheritance, the AuthenticationMixin (for example) would probably be something more like Authenticator. That is, the class would probably be designed to stand on its own.
First, you should note that mixins only exist in multiple-inheritance languages. You can't do a mixin in Java or C#.
Basically, a mixin is a stand-alone base type that provides limited functionality and polymorphic resonance for a child class. If you're thinking in C#, think of an interface that you don't have to actually implement because it's already implemented; you just inherit from it and benefit from its functionality.
Mixins are typically narrow in scope and not meant to be extended.
[edit -- as to why:]
I suppose I should address why, since you asked. The big benefit is that you don't have to do it yourself over and over again. In C#, the biggest place where a mixin could benefit might be from the Disposal pattern. Whenever you implement IDisposable, you almost always want to follow the same pattern, but you end up writing and re-writing the same basic code with minor variations. If there were an extendable Disposal mixin, you could save yourself a lot of extra typing.
[edit 2 -- to answer your other questions]
What separates a mixin from multiple inheritance? Is it just a matter of semantics?
Yes. The difference between a mixin and standard multiple inheritance is just a matter of semantics; a class that has multiple inheritance might utilize a mixin as part of that multiple inheritance.
The point of a mixin is to create a type that can be "mixed in" to any other type via inheritance without affecting the inheriting type while still offering some beneficial functionality for that type.
Again, think of an interface that is already implemented.
I personally don't use mixins since I develop primarily in a language that doesn't support them, so I'm having a really difficult time coming up with a decent example that will just supply that "ahah!" moment for you. But I'll try again. I'm going to use an example that's contrived -- most languages already provide the feature in some way or another -- but that will, hopefully, explain how mixins are supposed to be created and used. Here goes:
Suppose you have a type that you want to be able to serialize to and from XML. You want the type to provide a "ToXML" method that returns a string containing an XML fragment with the data values of the type, and a "FromXML" that allows the type to reconstruct its data values from an XML fragment in a string. Again, this is a contrived example, so perhaps you use a file stream, or an XML Writer class from your language's runtime library... whatever. The point is that you want to serialize your object to XML and get a new object back from XML.
The other important point in this example is that you want to do this in a generic way. You don't want to have to implement a "ToXML" and "FromXML" method for every type that you want to serialize, you want some generic means of ensuring that your type will do this and it just works. You want code reuse.
If your language supported it, you could create the XmlSerializable mixin to do your work for you. This type would implement the ToXML and the FromXML methods. It would, using some mechanism that's not important to the example, be capable of gathering all the necessary data from any type that it's mixed in with to build the XML fragment returned by ToXML and it would be equally capable of restoring that data when FromXML is called.
And.. that's it. To use it, you would have any type that needs to be serialized to XML inherit from XmlSerializable. Whenever you needed to serialize or deserialize that type, you would simply call ToXML or FromXML. In fact, since XmlSerializable is a fully-fledged type and polymorphic, you could conceivably build a document serializer that doesn't know anything about your original type, accepting only, say, an array of XmlSerializable types.
Now imagine using this scenario for other things, like creating a mixin that ensures that every class that mixes it in logs every method call, or a mixin that provides transactionality to the type that mixes it in. The list can go on and on.
If you just think of a mixin as a small base type designed to add a small amount of functionality to a type without otherwise affecting that type, then you're golden.
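In Python terms, a toy, hedged sketch of such a mixin might look like the following (XmlSerializableMixin and Person are invented names; a real implementation would have to deal with nesting, types, and escaping):

import xml.etree.ElementTree as ET

class XmlSerializableMixin:
    def to_xml(self):
        # Serialize all instance attributes as child elements.
        root = ET.Element(type(self).__name__)
        for name, value in vars(self).items():
            ET.SubElement(root, name).text = str(value)
        return ET.tostring(root, encoding="unicode")

    @classmethod
    def from_xml(cls, xml_string):
        # Rebuild an instance from the XML fragment, bypassing __init__.
        root = ET.fromstring(xml_string)
        obj = cls.__new__(cls)
        for child in root:
            setattr(obj, child.tag, child.text)
        return obj

class Person(XmlSerializableMixin):
    def __init__(self, name, city):
        self.name = name
        self.city = city

p = Person("Ada", "London")
restored = Person.from_xml(p.to_xml())
assert restored.name == "Ada"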
Hopefully. :)
This answer aims to explain mixins with examples that are:
self-contained: short, with no need to know any libraries to understand the example.
in Python, not in other languages.
It is understandable that there were examples from other languages such as Ruby since the term is much more common in those languages, but this is a Python thread.
It shall also consider the controversial question:
Is multiple inheritance necessary or not to characterize a mixin?
Definitions
I have yet to see a citation from an "authoritative" source clearly saying what a mixin is in Python.
I have seen 2 possible definitions of a mixin (if they are to be considered as different from other similar concepts such as abstract base classes), and people don't entirely agree on which one is correct.
The consensus may vary between different languages.
Definition 1: no multiple inheritance
A mixin is a class such that some method of the class uses a method which is not defined in the class.
Therefore the class is not meant to be instantiated, but rather serve as a base class. Otherwise the instance would have methods that cannot be called without raising an exception.
A constraint which some sources add is that the class may not contain data, only methods, but I don't see why this is necessary. In practice however, many useful mixins don't have any data, and base classes without data are simpler to use.
A classic example is the implementation of all comparison operators from only <= and ==:
class ComparableMixin(object):
    """This class has methods which use `<=` and `==`,
    but this class does NOT implement those methods."""
    def __ne__(self, other):
        return not (self == other)
    def __lt__(self, other):
        return self <= other and (self != other)
    def __gt__(self, other):
        return not self <= other
    def __ge__(self, other):
        return self == other or self > other

class Integer(ComparableMixin):
    def __init__(self, i):
        self.i = i
    def __le__(self, other):
        return self.i <= other.i
    def __eq__(self, other):
        return self.i == other.i

assert Integer(0) < Integer(1)
assert Integer(0) != Integer(1)
assert Integer(1) > Integer(0)
assert Integer(1) >= Integer(1)

# It is possible to instantiate a mixin:
o = ComparableMixin()
# but one of its methods raises an exception:
# o < o
This particular example could have been achieved via the functools.total_ordering() decorator, but the game here was to reinvent the wheel:
import functools

@functools.total_ordering
class Integer(object):
    def __init__(self, i):
        self.i = i
    def __le__(self, other):
        return self.i <= other.i
    def __eq__(self, other):
        return self.i == other.i

assert Integer(0) < Integer(1)
assert Integer(0) != Integer(1)
assert Integer(1) > Integer(0)
assert Integer(1) >= Integer(1)
Definition 2: multiple inheritance
A mixin is a design pattern in which some method of a base class uses a method it does not define, and that method is meant to be implemented by another base class, not by the derived class as in Definition 1.
The term mixin class refers to base classes which are intended to be used in that design pattern (TODO those that use the method, or those that implement it?)
It is not easy to decide if a given class is a mixin or not: the method could be just implemented on the derived class, in which case we're back to Definition 1. You have to consider the author's intentions.
This pattern is interesting because it is possible to recombine functionalities with different choices of base classes:
class HasMethod1(object):
    def method(self):
        return 1

class HasMethod2(object):
    def method(self):
        return 2

class UsesMethod10(object):
    def usesMethod(self):
        return self.method() + 10

class UsesMethod20(object):
    def usesMethod(self):
        return self.method() + 20

class C1_10(HasMethod1, UsesMethod10): pass
class C1_20(HasMethod1, UsesMethod20): pass
class C2_10(HasMethod2, UsesMethod10): pass
class C2_20(HasMethod2, UsesMethod20): pass

assert C1_10().usesMethod() == 11
assert C1_20().usesMethod() == 21
assert C2_10().usesMethod() == 12
assert C2_20().usesMethod() == 22

# Nothing prevents implementing the method
# on the derived class like in Definition 1:
class C3_10(UsesMethod10):
    def method(self):
        return 3

assert C3_10().usesMethod() == 13
Authoritative Python occurrences
The official documentation for collections.abc explicitly uses the term Mixin Methods.
It states that if a class:
implements __next__
inherits from a single class Iterator
then the class gets an __iter__ mixin method for free.
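A minimal, hedged sketch of that documented behaviour (CountDown is an invented example class):

from collections.abc import Iterator

class CountDown(Iterator):
    def __init__(self, start):
        self.current = start

    def __next__(self):
        # Only __next__ is implemented here...
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

# ...yet iteration works, because __iter__ is supplied as a mixin method.
assert list(CountDown(3)) == [3, 2, 1]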
Therefore, at least at this point of the documentation, a mixin does not require multiple inheritance, and is consistent with Definition 1.
The documentation could of course be contradictory at different points, and other important Python libraries might be using the other definition in their documentation.
This page also uses the term Set mixin, which clearly suggests that classes like Set and Iterator can be called Mixin classes.
In other languages
Ruby: clearly does not require multiple inheritance for mixins, as mentioned in major reference books such as Programming Ruby and The Ruby Programming Language.
C++: A virtual method that is set =0 is a pure virtual method.
Definition 1 coincides with the definition of an abstract class (a class that has a pure virtual method).
That class cannot be instantiated.
Definition 2 is possible with virtual inheritance: Multiple Inheritance from two derived classes
I think of them as a disciplined way of using multiple inheritance - because ultimately a mixin is just another Python class that (might) follow the conventions about classes that are called mixins.
My understanding of the conventions that govern something you would call a Mixin are that a Mixin:
adds methods but not instance variables (class constants are OK)
only inherits from object (in Python)
That way it limits the potential complexity of multiple inheritance, and makes it reasonably easy to track the flow of your program by limiting where you have to look (compared to full multiple inheritance). They are similar to Ruby modules.
If I want to add instance variables (with more flexibility than allowed for by single inheritance) then I tend to go for composition.
Having said that, I have seen classes called XYZMixin that do have instance variables.
What separates a mixin from multiple inheritance? Is it just a matter of semantics?
A mixin is a limited form of multiple inheritance. In some languages the mechanism for adding a mixin to a class is slightly different (in terms of syntax) from that of inheritance.
In the context of Python especially, a mixin is a parent class that provides functionality to subclasses but is not intended to be instantiated itself.
What might cause you to say, "that's just multiple inheritance, not really a mixin" is if the class that might be confused for a mixin can actually be instantiated and used - so indeed it is a semantic, and very real, difference.
Example of Multiple Inheritance
This example, from the documentation, is an OrderedCounter:
from collections import Counter, OrderedDict

class OrderedCounter(Counter, OrderedDict):
    'Counter that remembers the order elements are first encountered'

    def __repr__(self):
        return '%s(%r)' % (self.__class__.__name__, OrderedDict(self))

    def __reduce__(self):
        return self.__class__, (OrderedDict(self),)
It subclasses both the Counter and the OrderedDict from the collections module.
Both Counter and OrderedDict are intended to be instantiated and used on their own. However, by subclassing them both, we can have a counter that is ordered and reuses the code in each object.
This is a powerful way to reuse code, but it can also be problematic. If it turns out there's a bug in one of the objects, fixing it without care could create a bug in the subclass.
Example of a Mixin
Mixins are usually promoted as the way to get code reuse without potential coupling issues that cooperative multiple inheritance, like the OrderedCounter, could have. When you use mixins, you use functionality that isn't as tightly coupled to the data.
Unlike the example above, a mixin is not intended to be used on its own. It provides new or different functionality.
For example, the standard library has a couple of mixins in the socketserver library.
Forking and threading versions of each type of server can be created using these mix-in classes. For instance, ThreadingUDPServer is created as follows:

class ThreadingUDPServer(ThreadingMixIn, UDPServer):
    pass

The mix-in class comes first, since it overrides a method defined in UDPServer. Setting the various attributes also changes the behavior of the underlying server mechanism.
In this case, the mixin methods override the methods in the UDPServer object definition to allow for concurrency.
The overridden method appears to be process_request and it also provides another method, process_request_thread. Here it is from the source code:
class ThreadingMixIn:
    """Mix-in class to handle each request in a new thread."""

    # Decides how threads will act upon termination of the
    # main process
    daemon_threads = False

    def process_request_thread(self, request, client_address):
        """Same as in BaseServer but as a thread.

        In addition, exception handling is done here.
        """
        try:
            self.finish_request(request, client_address)
        except Exception:
            self.handle_error(request, client_address)
        finally:
            self.shutdown_request(request)

    def process_request(self, request, client_address):
        """Start a new thread to process the request."""
        t = threading.Thread(target=self.process_request_thread,
                             args=(request, client_address))
        t.daemon = self.daemon_threads
        t.start()
A Contrived Example
This is a mixin that is mostly for demonstration purposes - most objects will evolve beyond the usefulness of this repr:
class SimpleInitReprMixin(object):
    """mixin, don't instantiate - useful for classes instantiable
    by keyword arguments to their __init__ method.
    """
    __slots__ = ()  # allow subclasses to use __slots__ to prevent __dict__

    def __repr__(self):
        kwarg_strings = []
        d = getattr(self, '__dict__', None)
        if d is not None:
            for k, v in d.items():
                kwarg_strings.append('{k}={v}'.format(k=k, v=repr(v)))
        slots = getattr(self, '__slots__', None)
        if slots is not None:
            for k in slots:
                v = getattr(self, k, None)
                kwarg_strings.append('{k}={v}'.format(k=k, v=repr(v)))
        return '{name}({kwargs})'.format(
            name=type(self).__name__,
            kwargs=', '.join(kwarg_strings)
        )
and usage would be:
class Foo(SimpleInitReprMixin):  # add other mixins and/or extend another class here
    __slots__ = 'foo',
    def __init__(self, foo=None):
        self.foo = foo
        super(Foo, self).__init__()
And usage:
>>> f1 = Foo('bar')
>>> f2 = Foo()
>>> f1
Foo(foo='bar')
>>> f2
Foo(foo=None)
I think previous responses defined very well what MixIns are. However, in order to better understand them, it might be useful to compare MixIns with Abstract Classes and Interfaces from the code/implementation perspective:

1. Abstract Class
   Needs to contain one or more abstract methods; it can also contain state (instance variables) and non-abstract methods.
2. Interface
   Contains abstract methods only (no non-abstract methods and no internal state).
3. MixIn
   Like an Interface, it does not contain internal state (instance variables); unlike an Interface, it contains one or more non-abstract methods.

In Python these are just conventions, because all of the above are defined as classes. However, the common feature of Abstract Classes, Interfaces and MixIns is that they should not exist on their own, i.e. should not be instantiated. A sketch contrasting the three follows below.
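A hedged Python sketch of those three conventions (the class names are invented; in Python all three are ordinary classes):

from abc import ABC, abstractmethod

class Serializer(ABC):            # "abstract class": state plus abstract and concrete methods
    def __init__(self, indent=2):
        self.indent = indent      # internal state is allowed

    @abstractmethod
    def serialize(self, obj):
        ...

class Comparable(ABC):            # "interface": abstract methods only, no state
    @abstractmethod
    def compare_to(self, other):
        ...

class ReprMixin:                  # "mixin": concrete methods, no state of its own
    def __repr__(self):
        return "%s(%r)" % (type(self).__name__, vars(self))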
A mixin is a concept in programming in which a class provides functionality but is not meant to be instantiated. The main purpose of mixins is to provide standalone functionality, and ideally a mixin should not inherit from other mixins and should avoid state. In languages such as Ruby there is direct language support for this; in Python there isn't, but you can use multiple inheritance to pull in the functionality a mixin provides.
I watched this video http://www.youtube.com/watch?v=v_uKI2NOLEM to understand the basics of mixins. It is quite useful for a beginner to understand the basics of mixins and how they work and the problems you might face in implementing them.
Wikipedia is still the best: http://en.wikipedia.org/wiki/Mixin
I think there have been some good explanations here but I wanted to provide another perspective.
In Scala, you can do mixins as has been described here but what is very interesting is that the mixins are actually 'fused' together to create a new kind of class to inherit from. In essence, you do not inherit from multiple classes/mixins, but rather, generate a new kind of class with all the properties of the mixin to inherit from. This makes sense since Scala is based on the JVM where multiple-inheritance is not currently supported (as of Java 8). This mixin class type, by the way, is a special type called a Trait in Scala.
It's hinted at in the way a class is defined:
class NewClass extends FirstMixin with SecondMixin with ThirdMixin
...
I'm not sure if the CPython interpreter does the same (mixin class-composition) but I wouldn't be surprised. Also, coming from a C++ background, I would not call an ABC or 'interface' equivalent to a mixin -- it's a similar concept but divergent in use and implementation.
I'd advise against mix-ins in new Python code, if you can find any other way around it (such as composition-instead-of-inheritance, or just monkey-patching methods into your own classes) that isn't much more effort.
In old-style classes you could use mix-ins as a way of grabbing a few methods from another class. But in the new-style world everything, even the mix-in, inherits from object. That means that any use of multiple inheritance naturally introduces MRO issues.
There are ways to make multiple-inheritance MRO work in Python, most notably the super() function, but it means you have to do your whole class hierarchy using super(), and it's considerably more difficult to understand the flow of control.
Perhaps a couple of examples will help.
If you're building a class and you want it to act like a dictionary, you can define all the various special __xxx__ methods necessary. But that's a bit of a pain. As an alternative, you can just define a few, and inherit (in addition to any other inheritance) from UserDict.DictMixin (in Python 3 that mixin is gone; collections.abc.MutableMapping plays the equivalent role). This will have the effect of automatically defining all the rest of the dictionary API.
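A hedged sketch of that approach with collections.abc.MutableMapping (LowerDict is an invented example): implement five special methods and the ABC's mixin methods supply the rest of the dictionary API (get, pop, setdefault, update, ==, and so on).

from collections.abc import MutableMapping

class LowerDict(MutableMapping):
    """Dict-like mapping that lower-cases its string keys."""
    def __init__(self):
        self._data = {}

    def __getitem__(self, key):
        return self._data[key.lower()]

    def __setitem__(self, key, value):
        self._data[key.lower()] = value

    def __delitem__(self, key):
        del self._data[key.lower()]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

d = LowerDict()
d["Key"] = 1
assert d.get("KEY") == 1  # get() comes from the inherited mixin methods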
A second example: the GUI toolkit wxPython allows you to make list controls with multiple columns (like, say, the file display in Windows Explorer). By default, these lists are fairly basic. You can add additional functionality, such as the ability to sort the list by a particular column by clicking on the column header, by inheriting from ListCtrl and adding appropriate mixins.
It's not a Python example, but in the D programming language the term mixin is used to refer to a construct used much the same way: adding a pile of stuff to a class.
In D (which by the way doesn't do MI) this is done by inserting a template (think syntactically aware and safe macros and you will be close) into a scope. This allows for a single line of code in a class, struct, function, module or whatever to expand to any number of declarations.
The OP mentioned never having heard of mixins in C++; perhaps that is because they usually go by the name Curiously Recurring Template Pattern (CRTP) in C++. Also, @Ciro Santilli mentioned that mixins are implemented via abstract base classes in C++. While an abstract base class can be used to implement a mixin, it is overkill, since the run-time functionality of virtual functions can be achieved using templates at compile time, without the overhead of a virtual-table lookup at run time.
The CRTP pattern is described in detail here.
I have converted the Python example from @Ciro Santilli's answer into C++ using a template class below:
#include <iostream>
#include <assert.h>

template <class T>
class ComparableMixin {
public:
    bool operator !=(ComparableMixin &other) {
        return !(*static_cast<T*>(this) == static_cast<T&>(other));
    }
    bool operator <(ComparableMixin &other) {
        return ((*this != other) && (*static_cast<T*>(this) <= static_cast<T&>(other)));
    }
    bool operator >(ComparableMixin &other) {
        return !(*static_cast<T*>(this) <= static_cast<T&>(other));
    }
    bool operator >=(ComparableMixin &other) {
        return ((*static_cast<T*>(this) == static_cast<T&>(other)) || (*this > other));
    }
protected:
    ComparableMixin() {}
};

class Integer: public ComparableMixin<Integer> {
public:
    Integer(int i) {
        this->i = i;
    }
    int i;
    bool operator <=(Integer &other) {
        return (this->i <= other.i);
    }
    bool operator ==(Integer &other) {
        return (this->i == other.i);
    }
};

int main() {
    Integer i(0);
    Integer j(1);
    // ComparableMixin<Integer> c; // this will cause a compilation error because the constructor is protected.
    assert (i < j);
    assert (i != j);
    assert (j > i);
    assert (j >= i);
    return 0;
}
EDIT: Added a protected constructor in ComparableMixin so that it can only be inherited, not instantiated. Updated the example to show how the protected constructor causes a compilation error when an object of ComparableMixin is created.
The concept comes from Steve’s Ice Cream, an ice cream store founded by Steve Herrell in Somerville, Massachusetts, in 1973, where mix-ins (candies, cakes, etc.) were mixed into basic ice cream flavors (vanilla, chocolate, etc.).
Inspired by Steve’s Ice Cream, the designers of the Lisp object system Flavors included the concept in a programming language for the first time, where mix-ins were small helper classes designed for enhancing other classes, and flavors were large standalone classes.
So the main idea is that a mix-in is a reusable extension (’reusable’ as opposed to ‘exclusive’; ‘extension’ as opposed to ‘base’).
The concept is orthogonal to the concepts of single or multiple inheritance and abstract or concrete class. Mix-in classes can be used in single or multiple inheritance and can be abstract or concrete classes. Mix-in classes have incomplete interfaces while abstract classes have incomplete implementations and concrete classes have complete implementations.
Mix-in class names are conventionally suffixed with ‘-MixIn’, ‘-able’, or ‘-ible’ to emphasize their nature, like in the Python standard library with the ThreadingMixIn and ForkingMixIn classes of the socketserver module, and the Hashable, Iterable, Callable, Awaitable, AsyncIterable, and Reversible classes of the collections.abc module.
Here is an example of a mix-in class used for extending the Python built-in list and dict classes with logging capability:
import logging

class LoggingMixIn:
    def __setitem__(self, key, value):
        logging.info('Setting %r to %r', key, value)
        super().__setitem__(key, value)

    def __delitem__(self, key):
        logging.info('Deleting %r', key)
        super().__delitem__(key)

class LoggingList(LoggingMixIn, list):
    pass

class LoggingDict(LoggingMixIn, dict):
    pass
>>> logging.basicConfig(level=logging.INFO)
>>> l = LoggingList([False])
>>> d = LoggingDict({'a': False})
>>> l[0] = True
INFO:root:Setting 0 to True
>>> d['a'] = True
INFO:root:Setting 'a' to True
>>> del l[0]
INFO:root:Deleting 0
>>> del d['a']
INFO:root:Deleting 'a'
A mixin gives a way to add functionality to a class, i.e. you can use the methods defined in a module by including the module inside the desired class. Although Ruby doesn't support multiple inheritance, it provides mixins as an alternative to achieve that.
Here is an example that shows how a multiple-inheritance-like effect is achieved using mixins.
module A              # you create a module
  def a1              # let's have a method 'a1' in it
  end
  def a2              # another method 'a2'
  end
end

module B              # let's say we have another module
  def b1              # a method 'b1'
  end
  def b2              # another method 'b2'
  end
end

class Sample          # we create a class 'Sample'
  include A           # including module 'A' in the class 'Sample' (mixin)
  include B           # including module 'B' as well
  def s1              # class 'Sample' contains a method 's1'
  end
end

samp = Sample.new     # creating an instance object 'samp'

# we can access methods from modules A and B in our class (the power of mixins)
samp.a1               # accessing method 'a1' from module A
samp.a2               # accessing method 'a2' from module A
samp.b1               # accessing method 'b1' from module B
samp.b2               # accessing method 'b2' from module B
samp.s1               # accessing method 's1' inside the class Sample
I just used a Python mixin to implement unit testing for Python milters. Normally, a milter talks to an MTA, making unit testing difficult. The test mixin overrides the methods that talk to the MTA and creates a simulated environment driven by test cases instead.
So, you take an unmodified milter application, like spfmilter, and mix in TestBase, like this:
class TestMilter(TestBase, spfmilter.spfMilter):
    def __init__(self):
        TestBase.__init__(self)
        spfmilter.config = spfmilter.Config()
        spfmilter.config.access_file = 'test/access.db'
        spfmilter.spfMilter.__init__(self)
Then, use TestMilter in the test cases for the milter application:
def testPass(self):
    milter = TestMilter()
    rc = milter.connect('mail.example.com', ip='192.0.2.1')
    self.assertEqual(rc, Milter.CONTINUE)
    rc = milter.feedMsg('test1', sender='good@example.com')
    self.assertEqual(rc, Milter.CONTINUE)
    milter.close()
http://pymilter.cvs.sourceforge.net/viewvc/pymilter/pymilter/Milter/test.py?revision=1.6&view=markup
Maybe an example from Ruby can help:
You can include the mixin Comparable and define one method, <=>(other); the mixin then provides all of these methods:
<(other)
>(other)
==(other)
<=(other)
>=(other)
between?(other)
It does this by invoking <=>(other) and giving back the right result.
"instance <=> other" returns 0 if both objects are equal, less than 0 if instance is bigger than other and more than 0 if other is bigger.
I read that you have a C# background, so a good starting point might be a mixin implementation for .NET.
You might want to check out the CodePlex project at http://remix.codeplex.com/
Watch the lang.net Symposium link to get an overview. There is more documentation still to come on the CodePlex page.
Roughly summarizing all great answers above:
States \ Methods      Concrete Method      Abstract Method
Concrete State        Class                Abstract Class
Abstract State        Mixin                Interface

Methods intended for overriding but not part of the public API

I have a class that implements a strategy. As part of a wider strategy API it has a public interface. In this particular class the main_method applies various conditions and has a helper_... method for each condition. Thus, by subclassing and overriding these helper methods you can change the behaviour of the strategy. This is intended. However, these helper methods are not part of the API that is being implemented/exposed to the client.
It seems to me that these methods should be considered private, as they are not part of the interface that is being implemented, but on the other hand they are intended to be overridden by subclasses. In Java they would be "protected".
What's the pythonic way to deal with this situation? My code is schematically similar to the following:
class BasicFoo(object):
    def __init__(self):
        pass

    def main_method(self, input):
        if condition_1:
            self._helper1(input)
        elif condition_2:
            self._helper2(input)
        else:
            self._helper3(input)

    def _helper1(self, input):
        pass  # do something

    def _helper2(self, input):
        pass  # do something else

    def _helper3(self, input):
        pass  # do something else again

class ModifiedFoo(BasicFoo):
    def __init__(self):
        super(ModifiedFoo, self).__init__()

    def _helper1(self, input):
        pass  # a different behaviour
Just make the methods public and clarify their intended use in the documentation. Take the example from the ast.NodeVisitor class from the standard library:
This class is meant to be subclassed, with the subclass adding visitor methods.

visit(node)
    Visit a node. The default implementation calls the method called self.visit_classname where classname is the name of the node class, or generic_visit() if that method doesn't exist.
Given that in Python you don't have access modifiers, you can only rely on documentation and conventions to clarify the roles of such methods.
In this case your methods are part of the public API (a private attribute/method could be renamed at leisure between releases, but this isn't the case here).
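For illustration, a small hedged example of that convention in use (NameCollector is an invented subclass; visit_Name follows the documented visit_classname naming scheme):

import ast

class NameCollector(ast.NodeVisitor):
    """The subclass adds public visitor methods; visit()/generic_visit() are inherited."""
    def __init__(self):
        self.names = []

    def visit_Name(self, node):
        self.names.append(node.id)
        self.generic_visit(node)

collector = NameCollector()
collector.visit(ast.parse("x = y + z"))
assert collector.names == ["x", "y", "z"]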

pick a subclass based on a parameter

I have a module (db.py) which loads data from different database types (sqlite, mysql, etc.). The module contains a class db_loader and subclasses (sqlite_loader, mysql_loader) which inherit from it.
The type of database being used is stored in a separate params file.
How does the user get the right object back?
i.e how do I do:
loader = db.loader()
Do I use a method called loader in the db.py module or is there a more elegant way whereby a class can pick its own subclass based on a parameter? Is there a standard way to do this kind of thing?
Sounds like you want the Factory Pattern. You define a factory method (either in your module, or perhaps in a common parent class for all the objects it can produce) that you pass the parameter to, and it will return an instance of the correct class. In python the problem is a bit simpler than perhaps some of the details on the wikipedia article as your types are dynamic.
class Animal(object):
    @staticmethod
    def get_animal_which_makes_noise(noise):
        if noise == 'meow':
            return Cat()
        elif noise == 'woof':
            return Dog()

class Cat(Animal):
    ...

class Dog(Animal):
    ...
is there a more elegant way whereby a class can pick its own subclass based on a parameter?
You can do this by overriding your base class's __new__ method. This will allow you to simply go loader = db_loader(db_type) and loader will magically be the correct subclass for the database type. This solution is mildly more complicated than the other answers, but IMHO it is surely the most elegant.
In its simplest form:
class Parent():
    def __new__(cls, feature):
        subclass_map = {subclass.feature: subclass for subclass in cls.__subclasses__()}
        subclass = subclass_map[feature]
        instance = super(Parent, subclass).__new__(subclass)
        return instance

class Child1(Parent):
    feature = 1

class Child2(Parent):
    feature = 2

type(Parent(1))  # <class '__main__.Child1'>
type(Parent(2))  # <class '__main__.Child2'>
(Note that as long as __new__ returns an instance of cls, the instance's __init__ method will automatically be called for you.)
This simple version has issues though and would need to be expanded upon and tailored to fit your desired behaviour. Most notably, this is something you'd probably want to address:
Parent(3) # KeyError
Child1(1) # KeyError
So I'd recommend either adding cls to subclass_map or using it as the default, like so subclass_map.get(feature, cls). If your base class isn't meant to be instantiated -- maybe it even has abstract methods? -- then I'd recommend giving Parent the metaclass abc.ABCMeta.
If you have grandchild classes too, then I'd recommend putting the gathering of subclasses into a recursive class method that follows each lineage to the end, adding all descendants.
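A hedged sketch of that recursive variant, combined with the get() fallback (the helper name _descendants is my own):

class Parent:
    @classmethod
    def _descendants(cls):
        # Walk every lineage so grandchildren are included too.
        for sub in cls.__subclasses__():
            yield sub
            yield from sub._descendants()

    def __new__(cls, feature):
        subclass_map = {sub.feature: sub
                        for sub in cls._descendants() if hasattr(sub, "feature")}
        subclass = subclass_map.get(feature, cls)  # fall back instead of raising KeyError
        return super().__new__(subclass)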
This solution is more beautiful than the factory method pattern IMHO. And unlike some of the other answers, it's self-maintaining because the list of subclasses is created dynamically, instead of being kept in a hardcoded mapping. And this will only instantiate subclasses, unlike one of the other answers, which would instantiate anything in the global namespace matching the given parameter.
I'd store the name of the subclass in the params file, and have a factory method that would instantiate the class given its name:
class loader(object):
    @staticmethod
    def get_loader(name):
        return globals()[name]()

class sqlite_loader(loader): pass
class mysql_loader(loader): pass

print(type(loader.get_loader('sqlite_loader')))
print(type(loader.get_loader('mysql_loader')))
Store the classes in a dict, instantiate the correct one based on your param:
db_loaders = dict(sqlite=sqlite_loader, mysql=mysql_loader)
loader = db_loaders.get(db_type, default_loader)()

where db_type is the parameter you are switching on, and sqlite_loader and mysql_loader are the "loader" classes.
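Tying that back to the question, a hedged end-to-end sketch (the loader classes are stand-ins for the real ones in db.py):

class db_loader:
    pass

class sqlite_loader(db_loader):
    pass

class mysql_loader(db_loader):
    pass

db_loaders = dict(sqlite=sqlite_loader, mysql=mysql_loader)

def loader(db_type):
    # Unknown types fall back to the plain db_loader base class.
    return db_loaders.get(db_type, db_loader)()

assert isinstance(loader("mysql"), mysql_loader)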

Can zope.interface define how a class' __init__ method should look?

I have several similar classes which will all be initialised by the same code, and thus need to have the same "constructor signature." (Are there really constructors and signatures in the dynamic Python? I digress.)
What is the best way to define a class's __init__ parameters using zope.interface?
I'll paste some code I've used for experimenting with zope.interface to facilitate discussion:
from zope.interface import Interface, Attribute, implements, verify

class ITest(Interface):
    required_attribute = Attribute(
        """A required attribute for classes implementing this interface.""")

    def required_method():
        """A required method for classes implementing this interface."""

class Test(object):
    implements(ITest)

    required_attribute = None

    def required_method():
        pass

print verify.verifyObject(ITest, Test())
print verify.verifyClass(ITest, Test)
I can't just define an __init__ function in ITest, because it will be treated specially by the Python interpreter - I think? Whatever the case, it doesn't seem to work. So again, what is the best way to define a "class constructor" using zope.interface?
First of all: there is a big difference between the concepts of providing and implementing an interface.
Basically, classes implement an interface, instances of those classes provide that interface. After all, classes are the blueprints for instances, detailing their implementations.
Now, an interface describes the implementation provided by instances, but the __init__ method is not a part of instances! It is part of the interface directly provided by classes instead (a classmethod in Python terminology). If you were to define an __init__ method in your interface, you are declaring that your instances have (provide) a __init__ method as well (as an instance method).
So interfaces describe what kind of instances you get, not how you get them.
Now, interfaces can be used for more than just describing what functionality an instance provides. You can also use interfaces for any kind of object in Python, including modules and classes. You'll have to use the directlyProvides method to assign an interface to these, as you won't be calling them to create an instance. You can also use the @provider() class decorator, or the classProvides or moduleProvides functions from within a class or module declaration to get the same results.
What you want in this case is a factory definition; classes are factories that when called, produce an instance, so a factory interface must provide a __call__ method to indicate they are callable. Here is your example set up with a factory interface:
from zope import interface

class ITest(interface.Interface):
    required_attribute = interface.Attribute(
        """A required attribute for classes implementing this interface.""")

    def required_method():
        """A required method for classes implementing this interface."""

class ITestFactory(interface.Interface):
    """Creates objects providing the ITest interface"""

    def __call__(a, b):
        """Takes two parameters"""

@interface.implementer(ITest)
@interface.provider(ITestFactory)
class Test(object):
    def __init__(self, a, b):
        self.required_attribute = a * b

    def required_method(self):
        return self.required_attribute
The zope.component package provides you with a convenience class and interface for factories, adding a getInterfaces method and a title and description to make discovery and introspection a little easier. You can then just subclass the IFactory interface to document your __init__ parameters a little better:
from zope import component

class ITestFactory(component.interfaces.IFactory):
    """Creates objects providing the ITest interface"""

    def __call__(a, b):
        """Takes two parameters"""

testFactory = component.Factory(Test, 'ITest Factory', ITestFactory.__doc__)
interface.directlyProvides(testFactory, ITestFactory)
You could now register that factory as a zope.component utility, for example, allowing other code to find all ITestFactory providers.
I used zope.interface.directlyProvides here to mark the factory instance with your subclassed ITestFactory interface, as zope.component.Factory instances normally only provide the IFactory interface.
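For completeness, a hedged sketch of registering and looking up that factory as a utility (this reuses the testFactory, ITestFactory, and ITest objects defined above; treat the exact registration calls as an assumption rather than a recipe):

from zope.component import getGlobalSiteManager, getUtility

gsm = getGlobalSiteManager()
gsm.registerUtility(testFactory, ITestFactory, name='test')

factory = getUtility(ITestFactory, name='test')
instance = factory(2, 3)           # calls Test(2, 3) through the factory
assert ITest.providedBy(instance)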
No, __init__ is not handled differently:
from zope.interface import Interface, Attribute, implements, verify

class ITest(Interface):
    required_attribute = Attribute(
        """A required attribute for classes implementing this interface.""")

    def __init__(a, b):
        """Takes two parameters"""

    def required_method():
        """A required method for classes implementing this interface."""

class Test(object):
    implements(ITest)

    def __init__(self, a, b):
        self.required_attribute = a * b

    def required_method(self):
        return self.required_attribute

print verify.verifyClass(ITest, Test)
print verify.verifyObject(ITest, Test(2, 3))
I'm not 100% sure what you are asking, though. If you want to have the same constructor signature on several classes in Python, the only way to do that is to actually give these classes the same constructor signature. :-) Whether you do this by subclassing or by writing a separate __init__ for each class doesn't matter, as long as they have the same signature.
zope.interface is not about defining methods, but about declaring signatures. You can therefore define an interface that has a specific signature, also for __init__, but this just says "this object implements the interface IMyFace"; declaring that a class implements an interface will not actually make the class implement it. You still need to implement it yourself.
What you are asking does not make much sense. The interface file is supposed to hold the interface description, not any specific implementation to be called from somewhere at some point. What you want is to inherit from a common base class. zope.interface is NOT about inheritance.
