I have a Main class.
There are five strategies to be implemented, from which the customer can choose one at run time.
I am using the strategy pattern here because the objective is the same for all of them but the algorithm differs.
Apart from the common functions (defined in the strategy interface), each strategy class may have some extra functions of its own, needed only when that particular strategy has been chosen at runtime. These extra methods may or may not be required at the user's end.
Also, there are some common functions that I am defining in the Main class; common meaning they could be helpful for any strategy.
class Main:
    def __init__(self, input):
        self.other_work = Extra(input)
        self.strategy = Factory(input)
Question 1:
How to use this class:
a = Main(input)
# if you want to use some extra function
a.other_work.do_this()
# if it is related to a particular strategy
a.strategy.uncommonStrategy1()
The challenges here are:
How can the user know that the extra uncommonStrategy1 function is defined in the Strategy1 class and not in the Extra class?
I could put the uncommonStrategy1 function in the Extra class as well, but it is irrelevant to the other strategies.
The user should not need to know where the functionality is implemented. Use inheritance, Python imports or wrapper functions (as appropriate to the design) to call the specialized functions. Ideally, when triggering the work, the user will not know, or care, which strategy is used. (But the different strategies may need to be initialized with different parameters, etc., so it cannot be completely transparent.)
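A minimal sketch of the wrapper/delegation idea, assuming made-up Strategy1/Strategy2 classes and a simple Factory (none of these names come from your real code): Main forwards unknown attribute lookups to the chosen strategy or to Extra, so the caller never has to know where a method lives.

class Strategy1:
    def common(self):
        # part of the shared strategy interface
        print("strategy 1 common work")

    def uncommonStrategy1(self):
        # extra method that only this strategy offers
        print("strategy 1 specific work")

class Strategy2:
    def common(self):
        print("strategy 2 common work")

def Factory(choice):
    # pick a strategy class at run time from some user input
    return {"s1": Strategy1, "s2": Strategy2}[choice]()

class Extra:
    def do_this(self):
        print("work useful for any strategy")

class Main:
    def __init__(self, choice):
        self.other_work = Extra()
        self.strategy = Factory(choice)

    def __getattr__(self, name):
        # anything Main itself does not define is looked up on the helpers,
        # so the user only ever calls methods on Main
        for delegate in (self.strategy, self.other_work):
            if hasattr(delegate, name):
                return getattr(delegate, name)
        raise AttributeError(name)

a = Main("s1")
a.do_this()            # resolved on Extra
a.common()             # resolved on the chosen strategy
a.uncommonStrategy1()  # only available because Strategy1 was chosen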
Related
I'm writing code for research purposes, in which I search through a bulk of files and rank them according to their relevance. I call the entire process quickSearching, and it is composed of two serial stages: first I search the files and retrieve a list of candidate files, then I score those candidates and rank them.
So a quickSearch is simply a serial combination of a search method and a score method.
I'm planning to implement various searching and scoring methodologies, and I would like to test all possible combinations and evaluate them to see which is the winning combo.
Since the number of combos will grow very fast, it is important to write the code with a good structure and design. I thought about the following designs (I'm writing the code in Python):
A quickSearcher class that will receive pointers to a searcher and scorer functions
A quickSearcher class that will receive a searcher object and a scorer object
A quickSearcher class that will inherit from a searcher class and a scorer class
Since I'm basically an EE engineer, I'm not sure how to select between the options, or whether this is a common problem in CS with a standard design pattern. The design I'm looking for will hopefully:
Be very code-volume efficient, since some of the searching and scoring methods differ only in the value of a parameter or two.
Be very modular and resistant to logical errors.
Be easy to navigate through.
Any other considerations I should take into account?
This is my first design question, so it might be invalid or missing important info; please notify me if it is.
Classes are often overused, especially by programmers coming from languages like Java and C# where they are compulsory. I recommend watching the presentation Stop Writing Classes.
When deciding whether to create a class it is useful to ask yourself the following questions:
1) Will the class need to have multiple methods?
If the class only has a single method (apart from __init__) then you may as well make it a function instead. If it needs to preserve state between calls then use a generator. If it needs to be created in one place with some parameters and then called elsewhere, you can use a closure (a function that returns another function) or functools.partial (see the sketch after this list).
2) Will it need to share state between methods?
If the class does not need to share state between methods then it may be better replaced with either a set of independent functions or smaller classes (or some combination).
If the answer to both questions is yes then go ahead and create a class.
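For what it's worth, here is a tiny illustration of the closure and functools.partial alternatives to a one-method class (the names are invented for the example):

import functools

def make_adder(n):
    # closure: remembers n between calls
    def add(x):
        return x + n
    return add

def add(x, n):
    return x + n

add_five = make_adder(5)                        # closure version
add_five_partial = functools.partial(add, n=5)  # functools.partial version

print(add_five(10))          # 15
print(add_five_partial(10))  # 15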
For your example I think option 1 is the way to go. If the searcher and scorer objects are classes, it sounds like they will only have a single method, probably called something like execute or run. Make them functions instead.
Depending on your use case, quickSearcher itself may be better off as a function or generator as well, so there may be no need for any classes at all.
BTW there is no distinction in Python between a function and a pointer to a function.
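As a sketch of option 1 (with made-up searcher and scorer names), quickSearcher can simply be a function that is handed a searching function and a scoring function:

def substring_search(query, files):
    # toy searcher: keep files whose name contains the query
    return [f for f in files if query in f]

def length_score(query, candidate):
    # toy scorer: prefer names close in length to the query
    return -abs(len(candidate) - len(query))

def quick_search(query, files, searcher, scorer):
    candidates = searcher(query, files)
    return sorted(candidates, key=lambda c: scorer(query, c), reverse=True)

# every searcher/scorer combination is one function call
ranking = quick_search("report", ["report_2020.txt", "report.txt", "notes.txt"],
                       substring_search, length_score)
print(ranking)  # ['report.txt', 'report_2020.txt']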
I've been programming in Python for some time and have picked up some knowledge of Python style, but I still have a problem with how to use classes properly.
When reading object-oriented literature I often find rules like the Single Responsibility Principle, which states
"The Single Responsibility Principle says that a class should have
one, and only one, reason to change"
Reading this, I might think of breaking one class into two, like:
class ComplicatedOperations(object):
    def __init__(self, item):
        pass

    def do(self):
        ...
    ## lots of other functions


class CreateOption(object):
    def __init__(self, simple_list):
        self.simple_list = simple_list

    def to_options(self):
        operated_data = self.transform_data(self.simple_list)
        return self.default_option() + operated_data

    def default_option(self):
        return [('', '')]

    def transform_data(self, simple_list):
        return [self.make_complicated_operations_that_requires_losts_of_manipulation(item)
                for item in simple_list]

    def make_complicated_operations_that_requires_losts_of_manipulation(self, item):
        return ComplicatedOperations(item).do()
This, for me, raises lots of different questions; like:
When should I use class variables, and when should I pass arguments to class functions?
Should the ComplicatedOperations class be a class or just a bunch of functions?
Should the __init__ method be used to calculate the final result? Does that make the class hard to test?
What are the rules for the pythonists?
Edited after answers:
So, reading Augusto theory, I would end up with something like this:
class ComplicatedOperations(object):
    def __init__(self):
        pass

    def do(self, item):
        ...
    ## lots of other functions


def default_option():
    return [('', '')]


def complicate_data(item):
    return ComplicatedOperations().do(item)


def transform_data_to_options(simple_list):
    return default_option() + [complicate_data(item)
                               for item in simple_list]
(Also corrected a small bug with default_option.)
When should I use class variables or pass arguments in class functions
In your example I would pass item into the do method. Also, this applies to programming in any language: give a class only the information it needs (Principle of Least Authority), and pass everything that is not internal to your algorithm via parameters (Dependency Injection). So, if ComplicatedOperations does not need item to initialize itself, do not give it as an __init__ parameter, and if it needs item to do its job, give it as a method parameter.
Should the ComplicatedOperations class be a class or just a bunch of functions
I'd say, it depends. If you're using various kinds of operations, and they share some sort of interface or contract, absolutely. If the operation reflects some concept and all the methods are related to the class, sure. But if they are loose and unrelated, you might just use functions, or think again about the Single Responsibility Principle and split the methods up into other classes.
Should the __init__ method be used to calculate the final result? Does that make the class hard to test?
No, the __init__ method is for initialization; you should do that work in a separate method.
As a side note, because of the lack of context, I did not understand CreateOption's role. If it is only used as shown above, you might as well just remove it ...
I personally think of classes as of concepts. I'd define a Operation class which behaves like an operation, so contains a do() method, and every other method/property that may make it unique.
As mgilson correctly says, if you cannot define and isolate any concept, maybe a simple functional approach would be better.
To answer your questions:
You should use class attributes when a certain property is shared among the instances (in Python, class attributes are evaluated once, when the class body is executed, so different objects see the same value; usually class attributes should be constants). Use instance attributes for object-specific properties that the object's methods can use without having them passed in. This doesn't mean you should put everything in self, just what you consider characterising for your object. Use passed arguments for values that do not belong to your object and may depend on the state of external objects (or on the execution of the program).
As said above, I'd keep one single class Operation and use a list of Operation objects to do your computations.
The __init__ method should just instantiate the object and do whatever processing is needed for the object to behave properly (in other words, make it ready to use).
Just think about the ideas you're trying to model.
A class generally represents a type of object. Class instances are specific objects of that type. A classic example is an Animal class: a cat would be an instance of Animal. Class variables (I assume you mean those that belong to the instance rather than to the class object itself) should be used for attributes of the instance. In this case, for example, colour could be such an attribute, which would be set as cat.colour = "white" or bear.colour = "brown". Arguments should be used where the value could come from some source outside the class. If the Animal class has a sleep method, it might need to know the duration of the sleep and the posture that the animal sleeps in. duration would be an argument of the method, since it has no relation to the animal, but posture would be an attribute since it is determined by the animal.
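Roughly as code, that Animal example might look like this (the names and bodies are illustrative only):

class Animal:
    def __init__(self, colour, posture):
        self.colour = colour    # belongs to the specific animal
        self.posture = posture  # also determined by the animal itself

    def sleep(self, duration):
        # duration comes from outside the animal, so it is an argument
        print(f"sleeping {duration}h, {self.posture}, colour {self.colour}")

cat = Animal("white", "curled up")
bear = Animal("brown", "on its side")
cat.sleep(2)
bear.sleep(8)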
In python, a class is typically used to group together a set of functions and variables which share a state. Continuing with the above example, a specific animal has a state which is shared across its methods and is defined by its attributes. If your class is just a group of functions which don't in any way depend on the state of the class, then they could just as easily be separate functions.
If __init__ is used to calculate the final result (which would have to be stored in an attribute of the class since __init__ cannot return a result), then you might as well use a function. A common pattern, however, is to do a lot of processing in __init__ via several other, sometimes private, methods of the class. The reason for this is that large complicated functions are often easier to test if they are broken down into smaller, distinct tasks, each of which can then be tested individually. However, this is usually only done when a class is needed anyway.
One approach to the whole business is to start out by deciding what functionality you need. When you have a group of functions or variables which all act on or apply to the same object, then it is time to move them into a class. Remember that Object Oriented Programming (OOP) is a design method suited to some tasks, but is not inherently superior to functional programming (in fact, some programmers would argue the opposite!), so there's no need to use classes unless there is actually a need.
Classes are an organizational structure. So, if you are not using them to organize, you are doing it wrong. :)
There are several different things you can use them for organizing:
Bundle data with methods that use said data, defines one spot that the code will interact with this data
Bundle like functions together; this provides an understandable API since 'everyone knows' that all math functions are in the math object
Provide defined communications between methods, sets up a 'conveyor belt' of operations with a defined interface. Each operation is a black box, and can change arbitrarily, so long as it keeps to the standard
Abstract a concept. This can include sub classes, data, methods, so on and so forth all around some central idea like database access. This class then becomes a component you can use in other projects with a minimal amount of retooling
If you don't need to do some organizational thing like the above, then you should go for simplicity and program in a procedural/functional style. Python is about having a toolbox, not a hammer.
I'm just getting started with Python, and am trying to figure out the Right Way to use classes.
My program currently has two classes, call them Planner and Model. The Planner is model-agnostic, given that any Model it uses presents a consistent interface. So, it seems like if I want to have several different available models, they should all inherit from something, in order to enforce the consistent interface. Additionally, some of the Model classes will share functionality. For example, a singleAgent Model might simulate one agent, while a doubleAgent Model would simulate two agents, each behaving just like the singleAgent.
So - how should I implement this / what language features do I need?
EDIT: Thanks for the fast responses straightening me out about duck typing! So, it sounds like I would only use inheritance if I wanted to override a subset of another Model's functionality? (And for my doubleAgent, I'd probably just use singleAgents as class members?)
I've taken a look through a few other questions with similar tags, but they seem to be more concerned with syntax rather than design choices. I've also looked at the official Python documentation on classes and not found what I'm looking for. (Possibly because I don't know enough to recognize it.)
In Python you don't generally use quite the same approaches to OOP as in statically typed languages. Specifically you don't actually need an object to implement a specific interface or derive from an abstract base class and so on. Rather that object just needs to be able to do the required operations. This is colloquially known as duck typing. If it walks like a duck and talks like a duck then, to all intents and purposes it is a duck.
So just decide what methods are required for your objects and make sure that they always have them. Should you wish to share implementation between different actors in your system then you can consider class inheritance. But if not then you may as well implement disjoint class hierarchies.
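A small duck-typing sketch along those lines (all names invented): neither model inherits from a common base class, but both provide the step() method that a hypothetical Planner relies on.

class SingleAgentModel:
    def step(self):
        return "one agent moved"

class DoubleAgentModel:
    def __init__(self):
        # composition: reuse the single-agent behaviour twice
        self.agents = [SingleAgentModel(), SingleAgentModel()]

    def step(self):
        return ", ".join(agent.step() for agent in self.agents)

class Planner:
    def __init__(self, model):
        self.model = model  # any object with a step() method will do

    def plan(self):
        return self.model.step()

print(Planner(SingleAgentModel()).plan())
print(Planner(DoubleAgentModel()).plan())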
One of Python's strengths (and, as many would argue, weaknesses) is that it does not rely on compile-time type checking to enforce interfaces. This means that it is not required for a set of objects to inherit from a common base class in order to have the same interface - they can still be used interchangeably in any function. This behavior is commonly known as duck typing.
In fact, because Python is dynamically typed, you would be hard pressed to "enforce a consistent interface", as you said. For this reason, things like zope.interface have been created. The main benefit you'll get from classes in your case is code reuse - if all Model types implement some common behavior.
To take this even one step further, if you should have some unrelated object type in a third party library out of your control that you wish to use as a Model, you can even do what is called "monkey patching" or "duck punching" in order to add the code necessary to provide your Model interface!
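A rough sketch of that idea, with a made-up third-party class standing in for the library you don't control:

class ThirdPartySimulator:
    # imagine this class lives in someone else's library
    def advance(self):
        return "simulator advanced"

def step(self):
    # adapt the foreign API to the step() interface your code expects
    return self.advance()

ThirdPartySimulator.step = step  # monkey patch / duck punch

sim = ThirdPartySimulator()
print(sim.step())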
Basically the link you provided on Classes in Python answers all your questions under the section about inheritance.
In your case just define a class called Model and then two subclasses, singleAgent and doubleAgent:
class Model:
    pass

class singleAgent(Model):
    pass
If you really, really need abstract classes, take a look at abstract base classes: http://docs.python.org/library/abc.html
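If you do go that route, a minimal sketch with the abc module might look like this (the method name is just a placeholder):

from abc import ABC, abstractmethod

class Model(ABC):
    @abstractmethod
    def step(self):
        ...

class singleAgent(Model):
    def step(self):
        return "one agent moved"

print(singleAgent().step())  # fine
# Model()                    # would raise TypeError: can't instantiate abstract class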
I've performed the "Replace Method with Method Object" refactoring described by Beck.
Now, I have a class with a "run()" method and a bunch of member functions that decompose the computation into smaller units. How do I test those member functions?
My first idea is that my unit tests should basically be copies of the run() method (with different initializations), but with assertions between each call to the member functions to check the state of the computation.
(I'm using Python and the unittest module.)
class Train:
    def __init__(self, options, points):
        self._options = options
        self._points = points
        # other initializations

    def run(self):
        self._setup_mappings_dict()
        self._setup_train_and_estimation_sets()
        if self._options.estimate_method == 'per_class':
            self._setup_priors()
        self._estimate_all_mappings()
        self._save_mappings()

    def _estimate_all_mappings(self):
        # implementation, calls to methods in this class
        ...

    # other method definitions
I definitely have expectations about what the states of the member attributes should be before and after calls to the different methods as part of the implementation of the run() method. Should I be making assertions about these "private" attributes? I don't know how else to unit test these methods.
The other option is that I really shouldn't be testing these.
I'll answer my own question. After a bit of reading and thinking, I believe I shouldn't be unit testing these private methods. I should just test the public interface. If the private methods that do the internal processing are important enough to test independently and are not just coincidences of the current implementation, then perhaps this is a sign that they should be refactored out into a separate class.
I like your answer, but I disagree.
The situation where you would use this design pattern is one where there is a fairly complex operation going on. As a result, being able to verify the individual components of such an operation, I would say, is highly desirable.
You then have the issue of dependencies on other resources (which may or may not be true in this case).
You need to be able to use some form of Inversion of Control in order to inject some form of mock to isolate the class.
Besides, most mocking frameworks will provide you with accessors to get at private members.
There are two principles at play here. The first is that public methods should be the public API you want to expose. In this case, exposing run() is appropriate, whereas exposing estimate_all_mappings() is not, since you don't want anyone else calling that function.
The second is that a single function should only ever do one thing. In this case run() assembles the results of several other complex actions. estimate_all_mappings() is doing one of those complex actions. It, in turn, might be delegating to some other function estimate_map() that does a single estimation that estimate_all_mappings() aggregates.
Therefore, it is correct to have this sort of delegation of responsibilities. Then all that is required is to know how to test a private method.
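As an illustration only, a test for one of those "private" steps could look roughly like this; the Train below is a cut-down stand-in for the real class, and since the underscore prefix is just a convention, the test can call the method directly:

import unittest

class Train:
    def __init__(self, points):
        self._points = points

    def _estimate_all_mappings(self):
        # stand-in body; the real method is far more involved
        return {p: p * 2 for p in self._points}

class TrainTest(unittest.TestCase):
    def test_estimate_all_mappings(self):
        train = Train([1, 2, 3])
        self.assertEqual(train._estimate_all_mappings(), {1: 2, 2: 4, 3: 6})

if __name__ == "__main__":
    unittest.main()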
The only reason to have another class is if there is some subset of the functionality that composes its own behavioral unit. You wouldn't, for instance, create some class B that is only ever called/used by a class A, unless there was some unit of state that is easier to pass around as an object.
OK, I've got two really big classes (> 1k lines each) that I have currently split up into multiple ones, which then get recombined using multiple inheritance. Now I'm wondering if there is any cleaner/better, more Pythonic way of doing this. Completely factoring them out would result in endless amounts of self.otherself.do_something calls, which I don't think is the way it should be done.
To make things clear here's what it currently looks like:
# GUI.py
from gui_events import GUIEvents    # event handlers
from gui_helpers import GUIHelpers  # helper methods that don't directly modify the GUI

class GUI(gtk.Window, GUIEvents, GUIHelpers):
    # general stuff here
    ...
One problem resulting from this is that Pylint complains, giving me trillions of "__init__ not called" / "undefined attribute" / "attribute accessed before definition" warnings.
EDIT:
You may want to take a look at the code, to make yourself a picture about what the whole thing actually is.
http://github.com/BonsaiDen/Atarashii/tree/next/atarashii/usr/share/pyshared/atarashii/
Please note, I'm really trying everything to keep this thing as DRY as possible; I'm using pylint to detect code duplication, and the only thing it complains about is the imports.
If you want to use multiple inheritance to combine everything into one big class (it might make sense to do this), then you can refactor each of the parent classes so that every method and property is either private (starts with '__') or has a short 2-3 character prefix unique to that class. For example, all the methods and properties in your GUIEvents class could start with ge_, and everything in GUIHelpers could start with gh_. By doing this, you'll achieve some of the clarity of using separate sub-class instances (self.ge.doSomething() vs. self.ge_doSomething()) and you'll avoid conflicting member names, which is the main risk when combining such large classes into one.
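Roughly what that prefix convention could look like (the method names are invented, and gtk.Window is left out of the sketch):

class GUIEvents:
    def ge_on_click(self):
        print("handling a click")

class GUIHelpers:
    def gh_format_title(self, text):
        return text.strip().title()

class GUI(GUIEvents, GUIHelpers):
    def run(self):
        self.ge_on_click()                           # prefix shows it comes from GUIEvents
        print(self.gh_format_title("  my window "))  # and this one from GUIHelpers

GUI().run()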
Start by finding classes that model real world concepts that your application needs to work with. Those are natural candidates for classes.
Try to avoid multiple inheritance as much as possible; it's rarely useful and always somewhat confusing. Instead, look to use composition ("HAS-A" relationships) to give rich attributes to your objects, built out of other objects.
Remember to make each method do one small, specific thing; this necessarily entails breaking up methods that do too many things into smaller pieces.
Refactor cases where you find many such methods are duplicating each other's functionality; this is another way to find natural collections of functionality that deserve to be in a distinct class.
I think this is more of a general OO-design problem than a Python problem. Python pretty much gives you all the classic OOP tools, conveniently packaged. You'd have to describe the problem in more detail (e.g. what do the GUIEvents and GUIHelpers classes contain?).
One Python-specific aspect to consider is the following: Python supports multiple programming paradigms, and often the best solution is not OOP. This may be the case here. But again, you'll have to throw in more details to get a meaningful answer.
Your code may be substantially improved by implementing a Model-View-Controller design. Depending on how your GUI and tool are setup, you may also benefit from "widgetizing" portions of your GUI, so that rather than having one giant Model-View-Controller, you have a main Model-View-Controller that manages a bunch of smaller Model-View-Controllers, each for distinct portions of your GUI. This would allow you to break up your tool and GUI into many classes, and you may be able to reuse portions of it, reducing the total amount of code you need to maintain.
While python does support multiple programming paradigms, for GUI tools, the best solution will nearly always be an Object-Oriented design.
One possibility is to assign imported functions to class attributes:
In file a_part_1.py:
def add(self, n):
    self.n += n

def __init__(self, n):
    self.n = n
And in main class file:
import a_part_1

class A:
    __init__ = a_part_1.__init__
    add = a_part_1.add
Or if you don't want to update main file when new methods are added:
class A: pass

import a_part_1

for k, v in a_part_1.__dict__.items():
    if callable(v):
        setattr(A, k, v)
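Either variant should behave the same from the caller's side; a quick sanity check, assuming a_part_1.py is defined as above:

a = A(5)
a.add(3)
print(a.n)  # 8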