I'm just getting started with Python, and am trying to figure out the Right Way to use classes.
My program currently has two classes, call them Planner and Model. The Planner is model-agnostic, provided that any Model it uses presents a consistent interface. So, it seems like if I want to have several different available models, they should all inherit from something, in order to enforce the consistent interface. Additionally, some of the Model classes will share functionality. For example, a singleAgent Model might simulate one agent, while a doubleAgent Model would simulate two agents, each behaving just like the singleAgent.
So - how should I implement this / what language features do I need?
EDIT: Thanks for the fast responses straightening me out about duck typing! So, it sounds like I would only use inheritance if I wanted to override a subset of another Model's functionality? (And for my doubleAgent, I'd probably just use singleAgents as class members?)
I've taken a look through a few other questions with similar tags, but they seem to be more concerned with syntax rather than design choices. I've also looked at the official Python documentation on classes and not found what I'm looking for. (Possibly because I don't know enough to recognize it.)
In Python you don't generally use quite the same approaches to OOP as in statically typed languages. Specifically, you don't actually need an object to implement a specific interface or derive from an abstract base class and so on. Rather, the object just needs to be able to do the required operations. This is colloquially known as duck typing: if it walks like a duck and talks like a duck then, to all intents and purposes, it is a duck.
So just decide what methods are required for your objects and make sure that they always have them. Should you wish to share implementation between different actors in your system then you can consider class inheritance. But if not then you may as well implement disjoint class hierarchies.
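For instance, here is a minimal sketch of duck typing for your case (the step method and the class names are assumptions about your design, not anything prescribed):

class SingleAgent:
    def step(self):
        return "one agent acted"

class DoubleAgent:
    def __init__(self):
        # reuse SingleAgent by composition rather than inheritance
        self.agents = [SingleAgent(), SingleAgent()]

    def step(self):
        return [agent.step() for agent in self.agents]

class Planner:
    def run(self, model, steps):
        # works with any object that has a step() method -- duck typing
        return [model.step() for _ in range(steps)]

Planner().run(DoubleAgent(), 3)  # the Planner never checks the model's type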
One of Python's strengths (and, as many would argue, weaknesses) is that it does not rely on compile-time type checking to enforce interfaces. This means that it is not required for a set of objects to inherit from a common base class in order to have the same interface - they can still be used interchangeably in any function. This behavior is commonly known as duck typing.
In fact, because Python is dynamically typed, you would be hard pressed to "enforce a consistent interface", as you said. For this reason, things like zope.interface have been created. The main benefit you'll get from classes in your case is code reuse - if all Model types implement some common behavior.
To take this even one step further, if you should have some unrelated object type in a third party library out of your control that you wish to use as a Model, you can even do what is called "monkey patching" or "duck punching" in order to add the code necessary to provide your Model interface!
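A hedged sketch of what that can look like (ThirdPartyThing and the method names are hypothetical):

class ThirdPartyThing:              # stands in for a class you don't control
    def compute(self):
        return 42

def simulate(self):                 # the method your Model interface expects
    return self.compute()

# "duck punching": graft the missing method onto the existing class
ThirdPartyThing.simulate = simulate

ThirdPartyThing().simulate()        # -> 42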
Basically the link you provided on Classes in Python answers all your questions under the section about inheritance.
In your case just define a class called Model and then two subclasses, singleAgent and doubleAgent:
class Model:
    pass

class singleAgent(Model):
    pass

class doubleAgent(Model):
    pass
If you really, really need abstract classes, take a look at abstract base classes (ABCs): http://docs.python.org/library/abc.html
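A minimal sketch of the ABC approach, reusing the Model name from above (the simulate method is an invented example):

from abc import ABC, abstractmethod

class Model(ABC):
    @abstractmethod
    def simulate(self, steps):
        """Advance the model by the given number of steps."""

class singleAgent(Model):
    def simulate(self, steps):
        return [0.0] * steps    # placeholder dynamics

# Model() would raise TypeError here, and so would any subclass
# that forgets to implement simulate().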
There are two schools of thought on how to best extend, enhance, and reuse code in an object-oriented system:
Inheritance: extend the functionality of a class by creating a subclass. Override superclass members in the subclasses to provide new functionality. Make methods abstract/virtual to force subclasses to "fill-in-the-blanks" when the superclass wants a particular interface but is agnostic about its implementation.
Aggregation: create new functionality by taking other classes and combining them into a new class. Attach a common interface to this new class for interoperability with other code.
What are the benefits, costs, and consequences of each? Are there other alternatives?
I see this debate come up on a regular basis, but I don't think it's been asked on Stack Overflow yet (though there is some related discussion). There's also a surprising lack of good Google results for it.
It's not a matter of which is the best, but of when to use what.
In 'normal' cases, a simple question is enough to find out whether we need inheritance or aggregation.
If the new class is more or less like the original class, use inheritance. The new class is now a subclass of the original class.
If the new class must have the original class, use aggregation. The new class now has the original class as a member.
However, there is a big gray area. So we need several other tricks.
If we have used inheritance (or plan to use it) but only use part of the interface, or are forced to override a lot of functionality to keep the correlation logical, then we have a big nasty smell indicating that we should have used aggregation.
If we have used aggregation (or plan to use it) but find out we need to copy almost all of the functionality, then we have a smell that points in the direction of inheritance.
To cut it short: we should use aggregation if part of the interface is not used or has to be changed to avoid an illogical situation. We only need inheritance if we need almost all of the functionality without major changes. And when in doubt, use aggregation.
Another possibility, for the case where we have a class that needs part of the functionality of the original class, is to split the original class into a root class and a subclass, and let the new class inherit from the root class. But take care with this not to create an illogical separation.
Let's add an example, sketched in Python. We have a class Dog with methods eat, walk, bark, and play.
class Dog:
    def eat(self): pass
    def walk(self): pass
    def bark(self): pass
    def play(self): pass
We now need a class Cat, which needs eat, walk, purr, and play. So first we try to derive it from Dog.
class Cat(Dog):
    def purr(self): pass
Looks alright, but wait: this cat can bark (cat lovers will kill me for that), and a barking cat violates the principles of the universe. So we need to override the bark method so that it does nothing.
class Cat(Dog):
    def purr(self): pass
    def bark(self): pass    # overridden to do nothing
OK, this works, but it smells bad. So let's try aggregation:
class Cat:
    def __init__(self):
        self.dog = Dog()    # aggregation: the Cat has a Dog
    def eat(self): self.dog.eat()
    def walk(self): self.dog.walk()
    def play(self): self.dog.play()
    def purr(self): pass
OK, this is nice. This cat does not bark anymore, not even silently. But it still has an internal dog that wants out. So let's try solution number three:
class Pet:
    def eat(self): pass
    def walk(self): pass
    def play(self): pass

class Dog(Pet):
    def bark(self): pass

class Cat(Pet):
    def purr(self): pass
This is much cleaner. No internal dogs, and cats and dogs are at the same level. We can even introduce other pets to extend the model. Unless it is a fish, or something else that does not walk; in that case we again need to refactor. But that is something for another time.
At the beginning of the GoF book (Design Patterns) they state:
Favor object composition over class inheritance.
This is further discussed here
The difference is typically expressed as the difference between "is a" and "has a". Inheritance, the "is a" relationship, is summed up nicely in the Liskov Substitution Principle. Aggregation, the "has a" relationship, is just that - it shows that the aggregating object has one of the aggregated objects.
Further distinctions exist as well - private inheritance in C++ indicates an "is implemented in terms of" relationship, which can also be modeled by aggregating (non-exposed) member objects.
Here's my most common argument:
In any object-oriented system, there are two parts to any class:
Its interface: the "public face" of the object. This is the set of capabilities it announces to the rest of the world. In a lot of languages, this set is defined by the "class" declaration. Usually these are the method signatures of the object, though it varies a bit by language.
Its implementation: the "behind the scenes" work that the object does to satisfy its interface and provide functionality. This is typically the code and member data of the object.
One of the fundamental principles of OOP is that the implementation is encapsulated (i.e., hidden) within the class; the only thing that outsiders should see is the interface.
When a subclass inherits from a superclass, it typically inherits both the implementation and the interface. This, in turn, means that you're forced to accept both as constraints on your class.
With aggregation, you get to choose either implementation or interface, or both -- but you're not forced into either. The functionality of an object is left up to the object itself. It can defer to other objects as it likes, but it's ultimately responsible for itself. In my experience, this leads to a more flexible system: one that's easier to modify.
So, whenever I'm developing object-oriented software, I almost always prefer aggregation over inheritance.
I gave an answer to "Is a" vs "Has a": which one is better?
Basically I agree with other folks: use inheritance only if your derived class truly is the type you're extending, not merely because it contains the same data. Remember that inheritance means the subclass gains the methods as well as the data.
Does it make sense for your derived class to have all the methods of the superclass? Or do you just quietly promise yourself that those methods should be ignored in the derived class? Or do you find yourself overriding methods from the superclass, making them no-ops so no one calls them inadvertently? Or giving hints to your API doc generation tool to omit the method from the doc?
Those are strong clues that aggregation is the better choice in that case.
I see a lot of "is-a vs. has-a; they're conceptually different" responses on this and the related questions.
The one thing I've found in my experience is that trying to determine whether a relationship is "is-a" or "has-a" is bound to fail. Even if you can correctly make that determination for the objects now, changing requirements mean that you'll probably be wrong at some point in the future.
Another thing I've found is that it's very hard to convert from inheritance to aggregation once there's a lot of code written around an inheritance hierarchy. Just switching from a superclass to an interface means changing nearly every subclass in the system.
And, as I mentioned elsewhere in this post, inheritance tends to be less flexible than aggregation.
So, you have a perfect storm of arguments against inheritance whenever you have to choose one or the other:
Your choice will likely be the wrong one at some point
Changing that choice is difficult once you've made it.
Inheritance tends to be a worse choice as it's more constraining.
Thus, I tend to choose aggregation -- even when there appears to be a strong is-a relationship.
The question is normally phrased as Composition vs. Inheritance, and it has been asked here before.
I wanted to make this a comment on the original question, but 300 characters bites [;<).
I think we need to be careful. First, there are more flavors than the two rather specific examples made in the question.
Also, I suggest that it is valuable not to confuse the objective with the instrument. One wants to make sure that the chosen technique or methodology supports achievement of the primary objective, but I don't think out-of-context which-technique-is-best discussion is very useful. It does help to know the pitfalls of the different approaches along with their clear sweet spots.
For example, what are you out to accomplish, what do you have available to start with, and what are the constraints?
Are you creating a component framework, even a special purpose one? Are interfaces separable from implementations in the programming system, or is that accomplished by a practice using a different sort of technology? Can you separate the inheritance structure of interfaces (if any) from the inheritance structure of classes that implement them? Is it important to hide the class structure of an implementation from the code that relies on the interfaces the implementation delivers? Are there multiple implementations to be usable at the same time, or is the variation more over time as a consequence of maintenance and enhancement? This and more needs to be considered before you fixate on a tool or a methodology.
Finally, is it that important to lock distinctions in the abstraction and how you think of it (as in is-a versus has-a) to different features of the OO technology? Perhaps so, if it keeps the conceptual structure consistent and manageable for you and others. But it is wise not to be enslaved by that and the contortions you might end up making. Maybe it is best to stand back a level and not be so rigid (but leave good narration so others can tell what's up). [I look for what makes a particular portion of a program explainable, but sometimes I go for elegance when there is a bigger win. Not always the best idea.]
I'm an interface purist, and I am drawn to the kinds of problems and approaches where interface purism is appropriate, whether building a Java framework or organizing some COM implementations. That doesn't make it appropriate for everything, not even close to everything, even though I swear by it. (I have a couple of projects that appear to provide serious counter-examples against interface purism, so it will be interesting to see how I manage to cope.)
I'll cover the where-these-might-apply part. Here's an example of both, in a game scenario. Suppose there's a game which has different types of soldiers. Each soldier can have a knapsack which can hold different things.
Inheritance here?
There's a marine, a green beret, and a sniper. These are types of soldiers. So, there's a base class Soldier with Marine, GreenBeret, and Sniper as derived classes.
Aggregation here?
The knapsack can contain grenades, guns (different types), a knife, a medikit, etc. A soldier can be equipped with any of these at any given point in time; he can also have a bulletproof vest which acts as armor when attacked, so that his injury decreases by a certain percentage. The Soldier class contains an object of the bulletproof-vest class and of the knapsack class, which holds references to these items.
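A rough Python sketch of that design (all the names are just illustrations of the scenario above, not a fixed API):

class Knapsack:
    def __init__(self, items=()):
        self.items = list(items)        # grenades, guns, knife, medikit, ...

class Soldier:
    def __init__(self, knapsack, vest=None):
        self.knapsack = knapsack        # aggregation: a Soldier has-a Knapsack
        self.vest = vest                # optional bulletproof vest

    def take_hit(self, damage):
        if self.vest is not None:
            damage *= 0.4               # the vest absorbs part of the injury
        return damage

class Marine(Soldier): pass             # inheritance: a Marine is-a Soldier
class GreenBeret(Soldier): pass
class Sniper(Soldier): pass

sniper = Sniper(Knapsack(["rifle", "medikit"]), vest="kevlar")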
I think it's not an either/or debate. It's just that:
is-a (inheritance) relationships occur less often than has-a (composition) relationships.
Inheritance is harder to get right, even when it's appropriate to use it, so due diligence has to be taken: it can break encapsulation, encourage tight coupling by exposing implementation details, and so forth.
Both have their place, but inheritance is riskier.
Although of course it wouldn't make sense to model, say, Shape, Point, and Square classes with "has-a" relationships; here inheritance is due.
People tend to think about inheritance first when trying to design something extensible; that is what's wrong.
"Favour" applies when both candidates qualify: A and B are options and you favour A. The reason is that composition offers more extension/flexibility possibilities than generalization. This flexibility refers mostly to runtime/dynamic flexibility.
The benefit is not immediately visible; to see it you need to wait for the next unexpected change request. So in most cases those who stick to generalization fail compared with those who embrace composition (except in one obvious case, mentioned later). Hence the rule. From a learning point of view, if you can implement dependency injection successfully then you know which one to favour and when. The rule helps you make a decision as well: if you are not sure, select composition.
Summary. Composition: the coupling is reduced by just having some smaller things you plug into something bigger, and the bigger object just calls the smaller object back. Generalization: from an API point of view, defining that a method can be overridden is a stronger commitment than defining that a method can be called (there are very few occasions when generalization wins). And never forget that with composition you are using inheritance too, from an interface instead of a big class.
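A small sketch of that last point, composing against a small interface rather than subclassing a big class (all names are invented, and the injected dependency can be swapped at runtime):

from abc import ABC, abstractmethod

class Notifier(ABC):                    # the small interface we inherit from
    @abstractmethod
    def send(self, message): ...

class EmailNotifier(Notifier):
    def send(self, message):
        print("email:", message)

class OrderService:
    def __init__(self, notifier):       # dependency injection
        self.notifier = notifier

    def place_order(self, item):
        self.notifier.send("ordered " + item)

OrderService(EmailNotifier()).place_order("book")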
Both approaches are used to solve different problems. You don't always need to aggregate over two or more classes when inheriting from one class.
Sometimes you do have to aggregate a single class because that class is sealed or has non-virtual members you need to intercept. In that case you create a proxy layer that obviously isn't valid in terms of inheritance, but so long as the class you are proxying has an interface you can subscribe to, this can work out fairly well.
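In Python terms, such a proxy might look like this sketch (FinalThing stands in for the class you cannot usefully subclass):

class FinalThing:                   # pretend this class is sealed
    def work(self):
        return "result"

class LoggingProxy:
    def __init__(self, wrapped):
        self._wrapped = wrapped

    def work(self):                 # intercept the call we care about
        print("about to call work()")
        return self._wrapped.work()

    def __getattr__(self, name):    # forward everything else untouched
        return getattr(self._wrapped, name)

LoggingProxy(FinalThing()).work()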
I have a bunch of python classes that are responsible for validating files like so:
java file validator
html file validator
css file validator
javascript file validator
All of these classes define validate_file(path) and validate_string(string) functions.
In the client code of my application, I store all of these validators in an array and I iterate through all of them and call the methods on them in exactly the same way.
Is there a purpose in my case of defining a base "validator" class which defines these methods as abstract? How can I benefit from making these individual classes adhere to the base class? Or maybe in other words, what features would be unlocked if I had a base class with all of these validators implementing it?
If you ever write code that needs to process mixed types, some of which might define the API method, but which shouldn't be considered "validators", having a common base class for "true" validators might be helpful for checking types ahead of time and only processing the "true" validators, not just "things that look like validators". That said, the more complex the method name, the less likely you are to end up with this sort of confusion; the odds of a different type implementing a really niche method name by coincidence are pretty small.
That said, that's a really niche use case. Often, as you can see, Python's duck-typing behavior is good enough on its own; if there is no common functionality between the different types, the only meaningful advantages to having the abstract base class are for the developer, not the program itself:
Having a common point of reference for the API consumer makes it clear what methods they can expect to have access to
For actual ABCs with @abstractmethod-decorated methods, it provides a definition-time check for subclasses, making it impossible for maintainers to omit a method by accident (see the sketch below)
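A minimal sketch of such a base class, using the method names from the question (the CSS check is a toy stand-in):

from abc import ABC, abstractmethod

class Validator(ABC):
    @abstractmethod
    def validate_file(self, path): ...

    @abstractmethod
    def validate_string(self, string): ...

class CssValidator(Validator):
    def validate_file(self, path):
        with open(path) as f:
            return self.validate_string(f.read())

    def validate_string(self, string):
        return string.count("{") == string.count("}")   # toy check

# Forgetting either method makes instantiation fail immediately with
# TypeError, instead of failing later with AttributeError mid-iteration.
validators = [CssValidator()]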
Base classes in Python serve to share implementation details and document likenesses. We don't need to use a common base class for things that function similarly though, as we use protocols and duck typing. For these validation functions, we might not have a use for a class at all; Java is designed to force people to put everything inside classes, but if all you have in your class is one static method, the class is just noise. Having a common superclass would enable you to check dynamically for that using isinstance, but then I'd really wonder why you're handling them in a context where you don't know what they are. For a programmer, the common word in the function name is probably enough.
My question is related to this question: Is enforcing an abstract method implementation unpythonic?. I am using abstract classes in Python, but I realize that there is nothing stopping the user from implementing a function that takes an argument totally different from what was intended for the base class. Therefore, I may have to check the type of the argument. Isn't that unpythonic?
How do people use abstract classes in python?
Short answer: yes it is unpythonic. It might still make sense to have an abstract base class to check your implementations for completeness. But the pythonic way to do it is mostly just duck-typing, i.e. assume that the object you work with satisfies the specification you expect. Just state these expectations explicitly in the documentation.
Actually you typically would not do a lot of these safety checks. Even if you would check types, one might still be able to fake that at runtime. It also allows users of your code to be more flexible as to what kind of objects they pass to your code. Having to implement an ABC every single time clutters their code a lot.
A fairly small question: does anyone know about a pre-made suite of Python unit tests that just check if a class conforms to one of the standard Python data structure interfaces (e.g., lists, sets, dictionaries, queues, etc). It's not overly hard to write them, but I'd hate to bother doing so if someone has already done this. It seems like very basic functionality that someone has probably done already.
The use case is that I am using a factory pattern to create data structures due to different restrictions related to platforms. As such, I need to be able to test that the resulting created objects still conform to the standard interfaces on the surface. Also, I should note that by "conform" I mean that the tests should check not just that the interface functions exist, but also check that they work (e.g., can set and retrieve a value in a map, for instance). Python 2.7 tests would be preferred.
First, "the standard Python data structure interfaces" are not lists, sets, dictionaries, queues, etc. Those are specific implementations of the interfaces. (And queue isn't even a data structure in the sense you're thinking of—its salient features are that its operations are atomic, and put and get optionally synchronize on a Condition, and so on.)
Anyway, the interfaces are defined in five different not-quite-compatible ways.
The Built-in Types section of the documentation describes what it means to be an iterator type, a sequence type, etc. However, these are not nearly as rigorous as you'd expect for reference documentation (at least if you're used to, say, C++ or Java).
I'm not aware of any tests for such a thing, so I think you'd have to build them from scratch.
The collections module contains Collections Abstract Base Classes that define the interfaces, and provide a way to register "virtual subclasses" via the abc module. So, you can declare "I am a mapping" by inheriting from collections.Mapping, or calling collections.Mapping.register. But that doesn't actually prove that you are a mapping, just that you're claiming to be. (If you inherit from Mapping, it also acts as a mixin that helps you complete the interface by implementing, e.g., __contains__ on top of __getitem__.)
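For example, a minimal sketch of that mixin behavior (on 2.7 the ABCs live directly in collections; modern Python spells it collections.abc):

from collections.abc import Mapping   # just "collections" on Python 2.7

class FrozenMap(Mapping):
    def __init__(self, data):
        self._data = dict(data)

    # only the three abstract methods are supplied by hand...
    def __getitem__(self, key):
        return self._data[key]

    def __len__(self):
        return len(self._data)

    def __iter__(self):
        return iter(self._data)

fm = FrozenMap({"a": 1})
"a" in fm          # ...__contains__ came free from the Mapping mixin
fm.get("b", 0)     # so did get(), keys(), items(), values(), ...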
If you want to test the ABC meaning, defuz's answer is very close, and with a little more work I think he or someone else can complete it.
The CPython C API defines an Abstract Objects Layer. While this is not actually authoritative for the language, it's obviously intended that the C-API protocols and the language-level interfaces are supposed to match. And, unlike the latter, the former are rigorously defined. And of course the source code from CPython 2.7, and maybe other implementations like PyPy, may help.
There are tests for this that come with CPython, but really, they're for testing that calling PyMapping_GetItem from C properly calls your mymapping.__getitem__ in Python, which is really at a tangent to what you want to test, so I don't think it will help much.
The actual concrete classes have additional interface on top of the protocols that you may want to test, but that's harder to describe. In particular, the way the __new__ and __init__ methods work is often important. Implementing the Mapping protocol means someone can construct an empty Foo instance and add items to it with foo[key] = value, but it doesn't mean someone can construct Foo(key=value), or Foo({key: value}), or Foo([(key, value)]).
And for this case, there are existing tests that come with all of the standard Python implementations. CPython comes with a very extensive test suite that includes things like test_dict.py. PyPy runs all the (Python-level) CPython tests, and some extra ones besides.
You will obviously have to modify these tests to run on an arbitrary class instead of one hardcoded into the tests, and you may also have to modify them to handle whichever definition you pick. Plus, they probably test more than you asked for. You just want to know if a class conforms to the protocol, not whether its methods do the right thing, right? But still, I think they're a good starting point.
Finally, the C API defines a Concrete Objects Layer that, although it's not authoritative, matches the previous definition and is more rigorously defined.
Unfortunately, the tests for this one are definitely not going to be very useful to you, because they're checking things like whether PyDict_Check and PyDict_GetItem work on your class, which they will not for any mapping defined in pure Python.
If you do build something complete for any of these definitions, I would strongly suggest putting it on PyPI, and posting about it to python-list, so you get feedback (and bug reports).
There are abstract base classes in the standard collections module, based on the abc module.
You can inherit your classes from these to make sure they correspond to the standard behavior:
import collections

class MyDict(collections.Mapping):
    ...
Also, you can test an already existing class that does not explicitly inherit from the abstract class:
from collections import Mapping     # collections.abc.Mapping on Python 3

class MyPerfectDict(object):
    pass                            # ... implementation here ...

def is_inherit(cls, abstract):
    # Mix the ABC into the class; instantiation raises TypeError
    # if any abstract method is left unimplemented.
    try:
        class Test(abstract, cls):
            pass
        Test()
    except TypeError:
        return False
    else:
        return True

is_inherit(MyPerfectDict, Mapping)  # False
is_inherit(dict, Mapping)           # True
My first "serious" language was Java, so I have comprehended object-oriented programming in sense that elemental brick of program is a class.
Now I write on VBA and Python. There are module languages and I am feeling persistent discomfort: I don't know how should I decompose program in a modules/classes.
I understand that one module corresponds to one knowledge domain, one module should ba able to test separately...
Should I apprehend module as namespace(c++) only?
I don't do VBA, but in Python, modules are fundamental. As you say, they can be viewed as namespaces, but they are also objects in their own right. They are not classes, however, so you cannot inherit from them (at least not directly).
I find that it's a good rule to keep a module concerned with one domain area. The rule that I use for deciding whether something is a module-level function or a class method is to ask myself if it could meaningfully be used on any objects that satisfy the 'interface' that its arguments take. If so, then I free it from a class hierarchy and make it a module-level function. If its usefulness truly is restricted to a particular class hierarchy, then I make it a method.
If you need it to work on all instances of a class hierarchy and you make it a module-level function, just remember that all the subclasses still need to implement the given interface with the given semantics. This is one of the tradeoffs of stepping away from methods: you can no longer make a slight modification and call super. On the other hand, if subclasses are likely to redefine the interface and its semantics, then maybe that particular class hierarchy isn't a very good abstraction and should be rethought.
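As a sketch of that rule (the shape classes here are invented examples):

# Method: only meaningful inside this particular hierarchy.
class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, r):
        self.r = r

    def area(self):
        return 3.14159 * self.r ** 2

# Module-level function: works on anything whose elements expose area(),
# whether or not they belong to the Shape hierarchy.
def total_area(shapes):
    return sum(s.area() for s in shapes)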
It is a matter of taste. If you use modules, your program will be more procedurally oriented. If you choose classes, it will be more or less object oriented. I have been working with Excel for a couple of months, and personally I choose classes whenever I can, because it is more comfortable to me. If you stop thinking of them as objects and think of them as components, you can use them with elegance. The main reason I prefer classes is that you can have more than one of them: you can't have two instances of a module. This lets me use encapsulation and get better code reuse.
For example, let's assume that you would like to have some kind of logger, to log actions performed by your program during execution. You can write a module for that. It could have, for example, a global variable indicating on which particular sheet logging will be done. But consider the following hypothetical situation: your client wants you to include some fancy report-generation functionality in your program. You are smart, so you figure out that you can reuse your logging code to prepare the reports. But you can't log and report simultaneously with one module. And you can with two instances of a logging component, without any changes to their code.
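In Python terms, the same argument looks like this hedged sketch (the Logger API is invented):

class Logger:
    def __init__(self, sheet):
        self.sheet = sheet          # where this instance writes
        self.lines = []

    def log(self, message):
        self.lines.append((self.sheet, message))

# Two independent instances -- impossible with one module's global state.
action_log = Logger(sheet="Log")
report_log = Logger(sheet="Report")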
Idioms differ between languages, and that is the reason a problem solved in different languages takes different approaches.
"C" is all about procedural decomposition.
The main idiom in Java is "class or object" decomposition. Functions are not absent, but they become part of the exhibited behavior of these classes.
Python provides support for both class-based problem decomposition and procedural decomposition.
All of these use files, packages, or modules as concepts for organizing large pieces of code together. There is nothing that restricts you to one module per knowledge domain.
These are decomposition and organizing techniques and can be applied based on the problem at hand.
If you are comfortable with OO, you should be able to use it very well in Python.
VBA also allows the use of classes. Unfortunately, those classes don't support all the features of a full-fledged object-oriented language. In particular, inheritance is not supported.
But you can work with interfaces, at least up to a certain degree.
I have only used modules as "one module = one singleton". My modules contain "static" or even stateless methods, so in my opinion a VBA module is not a namespace. More often, a bunch of classes and modules together form a "namespace". I often create a new project (DLL, DVB, or something similar) for such a "namespace".