I've been looking at some PyQt4 tutorials, and they're good, but I don't quite understand why an object is created with syntax like this:
class Example(QtGui.QMainWindow):
    def __init__(self):
        super(Example, self).__init__()
        self.initUI()

    def initUI(self):
        # code to set up instance variables and show() the window
        ...
What exactly is gained by doing it this way, rather than eliminating the self.initUI() call entirely and putting any code that sets attributes directly in __init__() after the super call?
Sometimes code is separated into functions for readability purposes.
If your object initialization requires three steps, then logically it would make sense to break it into three functions. The names of those functions could describe which portion of the initialization they handle.
Another reason you might see an "init" function called from the true __init__ is if that function restores the object to a fresh state; in that case, you might want to call the "clean" function from outside __init__ in, for example, an object pool.
You've also hinted at a third reason for the reuse in your own question: if a subclass needs to change the order in which portions of the initialization happen (or omit/replace some portions entirely!) this is not possible with a monolithic __init__ but easy if it's split into separate parts.
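For example, here is a minimal sketch of that idea (the window-title and window-flag calls are illustrative, not from the tutorial): a subclass replaces only the UI setup while inheriting __init__ unchanged.

from PyQt4 import QtGui, QtCore

class Example(QtGui.QMainWindow):
    def __init__(self):
        super(Example, self).__init__()
        self.initUI()

    def initUI(self):
        self.setWindowTitle('Example')
        self.show()

class FramelessExample(Example):
    def initUI(self):
        # override only the UI portion; __init__ is inherited as-is
        # (a QApplication event loop is still needed to actually display the window)
        self.setWindowFlags(QtCore.Qt.FramelessWindowHint)
        self.setWindowTitle('Frameless example')
        self.show()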
A fourth and final reason is profiling. If you track function entries/exits and timings, separate functions give you finer granularity in the timing metrics.
Regardless, how you code is up to you - but the approach you question here does have merits.
Perhaps so that initUI can be called again.
There's only one reason I know of, the same as in most languages -- reusability. The core of object-oriented programming is the ability to reuse things, be they classes, methods or variables. By separating different methods/functions out, we can call them again later on. Whether you ever will call them later... that's debatable. I think it comes down to good programming practice.
So I am a little confused about what I have been reading on Object Oriented Programming. I realized that while I was focusing on the rule of each object doing only one thing, I created a class that does not have a changing state.
Basically I am writing a program that does a lot of reading and writing on text files. I thought that none of the objects I have should be dealing with these operations and I should have a fileIO class that does these operations for them. However, I am a little worried that this might be the same thing as having a utility class.
Is having a class whose fields never change (or don't even need to be initialized) the same as having a utility class? Is it bad practice from an OOP perspective? Does it make sense to have a fileIO object? If not, should objects be allowed to read from and write to files themselves?
class fileIO:
    __processFilePath = None
    __trackFilePath = None

    def __init__(self, iProcessFilePath, iTrackFilePath):
        self.__processFilePath = iProcessFilePath
        self.__trackFilePath = iTrackFilePath

    def getProcesses(self):
        ...

    # checks if appString is in file
    def isAppRunning(self, appString):
        ...

    # reads all
    def getAllTrackedLines(self):
        ...

    # appends
    def addNewOnTracked(self, toBeAdded):
        ...

    # overwrites
    def overWriteTrackedLines(self, fullData):
        ...
In this particular instance I am actually initializing the fields in the __init__ method, but for the purposes of my program I don't actually need to, because there are only two files that I read from and write to.
Reading and writing data from a file can be wrapped in a class that handles the state of the data to ensure that the transaction completes. What I mean by this is that the resource needs to be de-allocated properly, preferably in the same transaction, no matter the outcome of the operation. If you consider the allocation and de-allocation of resources as state, then your class is not exactly stateless. In functional programming, a function handling resources would be considered impure because it is not stateless; the state is merely external.
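For instance, here is a minimal sketch of that idea, reusing the question's method names (the bodies are illustrative, not the asker's actual code): each operation opens the file and lets a context manager guarantee it is released, whatever the outcome.

class FileIO(object):
    def __init__(self, trackFilePath):
        self.__trackFilePath = trackFilePath

    def getAllTrackedLines(self):
        # the with-block releases the file handle even if reading fails
        with open(self.__trackFilePath) as f:
            return f.readlines()

    def addNewOnTracked(self, toBeAdded):
        # same guarantee for the append case
        with open(self.__trackFilePath, 'a') as f:
            f.write(toBeAdded + '\n')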
Having a class with no state does not constitute a bad class. It is true that Utility classes are an anti-pattern, but if your class does one small thing, and it does it well, then it is not a problem. The problem comes in when you have a ton of related methods bunched into the class and the code begins to rot. That is what you want to avoid. A class that has a well defined purpose, and only does that thing, will resist rot.
Make sure you write lots of tests around your class as well, as this is key in long term maintainability.
Please let me know if there is anything that I can clarify.
I built a pygame game a few years back. It worked, but wasn't the best coding style and had a lot of classic code smells. I've recently picked it back up and am trying to refactor it with more discipline this time.
One big code smell was that I had a huge class, GameObject, that inherited from pygame.sprite.DirtySprite which had a lot of code related to:
various ways of moving a sprite
various ways of animating a sprite
various ways of exploding a sprite
etc.
The crazier the behaviours I thought up for sprites, the more the code duplication added up and the harder changes became. So I started breaking functionality out into lots of smaller classes and passing them in at object creation:
class GameObject(DirtySprite):
    def __init__(self, initial_position, mover_cls, imager_cls, exploder_cls):
        self.mover = mover_cls(self, initial_position)
        self.imager = imager_cls(self)
        self.exploder = exploder_cls(self)
        ...

spaceship = GameObject(pos, CrazyMover, SpaceshipImager, BasicExploder)
As I factored more and more code out into these helper classes, the code was definitely better, more flexible, and had less duplication. However, for each new type of helper class, the parameter list got longer and longer. Creating sprites became a chore and the code was ugly. So, during another refactor, I created a bunch of really small classes to do the composition:
class GameObjectAbstract(MoverAbstract, ImagerAbstract,
                         ExploderAbstract, DirtySprite):
    def __init__(self, initial_position):
        ...

...

class CrazySpaceship(CrazyMover, SpaceshipImager, BasicExploder, GameObjectAbstract):
    pass  # Many times, all the behavior comes from super classes

...

spaceship = CrazySpaceship(pos)
I like this approach better. Is this a common approach? It seems to have the same benefits of having all the logic broken out in small classes, but creating the objects is much cleaner.
This approach isn't as dynamic, however: I cannot, for example, decide on a new mashup at run-time. But that isn't something I was really doing anyway. While I do create a lot of mashups, it seems OK that they are statically defined using class statements.
Am I missing anything when it comes to future maintainability and reuse? I hear that composition is better than inheritance, but this feels like I'm using inheritance to do composition - so I feel like this is OK.
Is there a different pattern that I should be using?
That is OK, if you can separate the behaviors well enough.
It is just not "composition" at all - it is multiple inheritance, using what we call "mixin classes": a mixin class is, roughly, a class that provides one specific behavior that can be combined with other classes.
If you are using Python's super correctly, that could be the best approach. (If you are managing to create your game objects by basically just defining the class name and the mixin classes it uses, that is actually a very good approach.)
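To illustrate "using super correctly", here is a minimal sketch reusing the question's class names (the bodies and the no-argument constructors are made up): each mixin's __init__ passes control along the MRO instead of hard-coding a base class, so every piece of setup runs exactly once.

class CrazyMover(object):
    def __init__(self, *args, **kwargs):
        super(CrazyMover, self).__init__(*args, **kwargs)
        # movement-specific setup would go here

class SpaceshipImager(object):
    def __init__(self, *args, **kwargs):
        super(SpaceshipImager, self).__init__(*args, **kwargs)
        # image/animation-specific setup would go here

class CrazySpaceship(CrazyMover, SpaceshipImager):
    pass

ship = CrazySpaceship()  # both mixin __init__ methods run, in MRO order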
By the way, if you ever want to create new classes at runtime with this method, it is also possible - just use a call to type to create a new class, instead of a class statement:
class CrazySpaceship(CrazyMover, SpaceshipImager, BasicExploder, GameObjectAbstract):
    pass  # Many times, all the behavior comes from super classes

is just equivalent in Python to:
CrazySpaceShip = type('CrazySpaceShip', (CrazyMover, SpaceshipImager, BasicExploder, GameObjectAbstract), {})
And the tuple you pass as the second parameter can be any sequence built at runtime.
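For example, a small sketch of that point (the stub classes and the behaviours mapping are hypothetical, just so the snippet stands alone): the bases tuple is assembled at run-time before being handed to type.

class GameObjectAbstract(object):
    def __init__(self, initial_position):
        self.initial_position = initial_position

class CrazyMover(object):
    pass

class SpaceshipImager(object):
    pass

class BasicExploder(object):
    pass

behaviours = {
    'crazy_spaceship': (CrazyMover, SpaceshipImager, BasicExploder, GameObjectAbstract),
}
CrazySpaceShip = type('CrazySpaceShip', behaviours['crazy_spaceship'], {})
spaceship = CrazySpaceShip((0, 0))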
I recently learned that Python allows you to change an instance's class like so:
class Robe:
    pass

class Dress:
    pass

r = Robe()
r.__class__ = Dress
I'm trying to figure out whether there is a case where 'transmuting' an object like this can be useful. I've messed around with this in IDLE, and one thing I've noticed is that assigning a different class doesn't call the new class's __init__ method, though this can be done explicitly if needed.
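For example (assuming Dress defined an __init__ of its own; the fabric attribute here is made up):

class Robe:
    pass

class Dress:
    def __init__(self):
        self.fabric = 'silk'   # made-up attribute

r = Robe()
r.__class__ = Dress    # r is now a Dress, but Dress.__init__ was not called
Dress.__init__(r)      # calling it explicitly gives r its Dress state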
Virtually every use case I can think of would be better served by composition, but I'm a coding newb so what do I know. ;)
There is rarely a good reason to do this for unrelated classes, like Robe and Dress in your example. Without a bit of work, it's hard to ensure that the object you get in the end is in a sane state.
However, it can be useful when inheriting from a base class, if you want to use a non-standard factory function or constructor to build the base object. Here's an example:
class Base(object):
    pass

def base_factory():
    return Base()  # in real code, this would probably be something opaque

class Derived(Base):
    def __new__(cls):
        self = base_factory()     # get an instance of Base
        self.__class__ = Derived  # and turn it into an instance of Derived
        return self
In this example, the Derived class's __new__ method wants to construct its object using the base_factory function, which returns an instance of the Base class. Often this sort of factory is in a library somewhere, and you can't know for certain how it's making the object (you can't just call Base() or super(Derived, cls).__new__(cls) yourself to get the same result).
The instance's __class__ attribute is rewritten so that the result of calling Derived.__new__ will be an instance of the Derived class, which ensures that it will have the Derived.__init__ method called on it (if such a method exists).
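A quick sketch of what that buys you (just a check, not part of the original answer): the object comes from base_factory but ends up behaving as a Derived instance.

d = Derived()
print(type(d) is Derived)   # True
print(isinstance(d, Base))  # True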
I remember using this technique ages ago to “upgrade” existing objects after recognizing what kind of data they hold. It was a part of an experimental XMPP client. XMPP uses many short XML messages (“stanzas”) for communication.
When the application received a stanza, it was parsed into a DOM tree. Then the application needed to recognize what kind of stanza it is (a presence stanza, message, automated query etc.). If, for example, it was recognized as a message stanza, the DOM object was “upgraded” to a subclass that provided methods like “get_author”, “get_body” etc.
I could of course just make a new class to represent a parsed message, make a new object of that class and copy the relevant data from the original XML DOM object. There were two benefits of changing the object's class in place, though. Firstly, XMPP is a very extensible standard, and it was useful to still have easy access to the original DOM object in case some other part of the code found something useful there, or while debugging. Secondly, profiling the code told me that creating a new object and explicitly copying data is much slower than just reusing the object that would otherwise be quickly destroyed anyway; the difference was enough to matter in XMPP, which uses many short messages.
I don't think any of these reasons justifies the use of this technique in production code, unless maybe you really need the (not that big) speedup in CPython. It's just a hack which I found useful to make code a bit shorter and faster in the experimental application. Note also that this technique will easily break JIT engines in non-CPython implementations, making the code much slower!
I've been programming in Python for some time and have picked up some knowledge of Python style, but I still have trouble using classes properly.
When reading about object-oriented design I often find rules like the Single Responsibility Principle, which states:
"The Single Responsibility Principle says that a class should have
one, and only one, reason to change"
Reading this, I might think of breaking one class into two, like:
class ComplicatedOperations(object):
    def __init__(self, item):
        pass

    def do(self):
        ...
    ## lots of other functions


class CreateOption(object):
    def __init__(self, simple_list):
        self.simple_list = simple_list

    def to_options(self):
        operated_data = self.transform_data(self.simple_list)
        return self.default_option() + operated_data

    def default_option(self):
        return [('', '')]

    def transform_data(self, simple_list):
        return [self.make_complicated_operations_that_requires_losts_of_manipulation(item)
                for item in simple_list]

    def make_complicated_operations_that_requires_losts_of_manipulation(self, item):
        return ComplicatedOperations(item).do()
This, for me, raises lots of different questions; like:
When should I use class variables or pass arguments in class functions?
Should the ComplicatedOperations class be a class or just a bunch of functions?
Should the __init__ method be used to calculate the final result? Does that make the class hard to test?
What are the rules for the pythonists?
Edited after answers:
So, following Augusto's answer, I would end up with something like this:
class ComplicatedOperations(object):
    def __init__(self):
        pass

    def do(self, item):
        ...
    ## lots of other functions


def default_option():
    return [('', '')]


def complicate_data(item):
    return ComplicatedOperations().do(item)


def transform_data_to_options(simple_list):
    return default_option() + [complicate_data(item)
                               for item in simple_list]
(Also corrected a small bug with default_option.)
When should I use class variables or pass arguments in class functions
In your example I would pass item into the do method. This applies to programming in any language: give a class only the information it needs (Principle of Least Authority), and pass everything that is not internal to your algorithm via parameters (Dependency Injection). So if ComplicatedOperations does not need item to initialize itself, do not give it as an __init__ parameter, and if it needs item to do its job, pass it as a method parameter.
Should the ComplicatedOperations class be a class or just a bunch of functions
I'd say it depends. If you're using various kinds of operations, and they share some sort of interface or contract, absolutely. If the operation reflects some concept and all the methods are related to the class, sure. But if they are loose and unrelated, you might just use functions, or think again about the Single Responsibility Principle and split the methods up into other classes.
Should the __init__ method be used to calculate the final result? Does that make the class hard to test?
No, the __init__ method is for initialization; do that work in a separate method.
As a side note, for lack of context I did not understand what CreateOption's role is. If it is only used as shown above, you might as well just remove it ...
I personally think of classes as concepts. I'd define an Operation class which behaves like an operation, so it contains a do() method and every other method/property that makes it unique.
As mgilson correctly says, if you cannot define and isolate any concept, maybe a simple functional approach would be better.
To answer your questions:
You should use class attributes when a certain property is shared among the instances (in Python, class attributes are evaluated once, when the class is defined, so different objects see the same value; usually class attributes should be constants). Use instance attributes for object-specific properties that the object's methods can use without having them passed in. This doesn't mean you should put everything in self, just what you consider characteristic of your object. Use passed variables for values that do not concern your object and may depend on the state of external objects (or on the execution of the program). A small sketch after these points illustrates the split.
As said above, I'd keep one single class Operation and use a list of Operation objects to do your computations.
The __init__ method should just initialize the object and do whatever processing is needed for its proper behaviour (in other words, make it ready to use).
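Here is the small sketch mentioned above (the Circle class is made up purely to illustrate the three cases):

class Circle(object):
    PI = 3.14159                 # class attribute: a constant shared by every instance

    def __init__(self, radius):
        self.radius = radius     # instance attribute: characterises this particular object

    def area(self, precision=2):
        # precision does not describe the circle, so it stays a plain argument
        return round(self.PI * self.radius ** 2, precision)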
Just think about the ideas you're trying to model.
A class generally represents a type of object, and class instances are specific objects of that type. A classic example is an Animal class: a cat would be an instance of Animal. Class variables (I assume you mean those that belong to the instance rather than to the class object itself) should be used for attributes of the instance. In this case, for example, colour could be such an attribute, which would be set as cat.colour = "white" or bear.colour = "brown". Arguments should be used where the value could come from some source outside the class. If the Animal class has a sleep method, it might need to know the duration of the sleep and the posture that the animal sleeps in. duration would be an argument of the method, since it has no relation to the animal, but posture would be a class variable since it is determined by the animal.
In python, a class is typically used to group together a set of functions and variables which share a state. Continuing with the above example, a specific animal has a state which is shared across its methods and is defined by its attributes. If your class is just a group of functions which don't in any way depend on the state of the class, then they could just as easily be separate functions.
If __init__ is used to calculate the final result (which would have to be stored in an attribute of the class since __init__ cannot return a result), then you might as well use a function. A common pattern, however, is to do a lot of processing in __init__ via several other, sometimes private, methods of the class. The reason for this is that large complicated functions are often easier to test if they are broken down into smaller, distinct tasks, each of which can then be tested individually. However, this is usually only done when a class is needed anyway.
One approach to the whole business is to start out by deciding what functionality you need. When you have a group of functions or variables which all act on or apply to the same object, then it is time to move them into a class. Remember that Object Oriented Programming (OOP) is a design method suited to some tasks, but is not inherently superior to functional programming (in fact, some programmers would argue the opposite!), so there's no need to use classes unless there is actually a need.
Classes are an organizational structure. So, if you are not using them to organize, you are doing it wrong. :)
There are several different things you can use them for organizing:
Bundle data with the methods that use it; this defines one spot where the code interacts with that data.
Bundle similar functions together; this provides an understandable API, since 'everyone knows' that all math functions are in the math object.
Provide defined communication between methods; this sets up a 'conveyor belt' of operations with a defined interface. Each operation is a black box and can change arbitrarily, so long as it keeps to the standard.
Abstract a concept. This can include subclasses, data, methods, and so on and so forth, all around some central idea like database access. This class then becomes a component you can use in other projects with a minimal amount of retooling.
If you don't need to do some organizational thing like the above, then you should go for simplicity and program in a procedural/functional style. Python is about having a toolbox, not a hammer.
I'm writing a Python program with a GUI built with the Tkinter module. I'm using a class to define the GUI because it makes it easier to pass commands to buttons and makes the whole thing a bit easier to understand.
The actual initialization of my GUI takes about 150 lines of code. To make this easier to understand, I've written the __init__ function like so:
def __init__(self, root):
    self.root = root
    self._init_menu()
    self._init_connectbar()
    self._init_usertree()
    self._init_remotetree()
    self._init_bottom()
where _init_menu(), _init_connectbar(), and so on do all the initialization work. This makes my code easier to follow and prevents __init__ from getting too big.
However, this creates scope issues. Since an Entry widget that I define in _init_connectbar() is local to that function and not an attribute of the instance, I can't refer to it from other methods of the class.
I can make these problems go away by doing most of the initialization in __init__, but I'll lose the abstraction I got with my first method.
Should I expand __init__, or find another way to bring the widgets into class scope?
Either store some of those widget references in instance variables, or return them (a minimal set, mind you; you want to reduce coupling) and store them in local variables in __init__ before passing the relevant ones as arguments to your subsequent construction helpers. The latter is cleaner, but requires that things be decoupled enough that you can create an ordering that makes it possible.
Why don't you make the widgets that you need to refer to instance variables? This is what I usually do, and it seems to be quite a common approach.
e.g.
self.some_widget
In my opinion, you should store the widgets as instance variables so that you can refer to them from any method. As in most programming languages, readability decreases when functions get too large, so your approach of splitting up the initialization code is a good idea.
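For example, a minimal sketch of that (the host_entry name and connect() method are made up):

import Tkinter as tk

class Gui(object):
    def __init__(self, root):
        self.root = root
        self._init_connectbar()

    def _init_connectbar(self):
        # stored on self rather than in a local variable ...
        self.host_entry = tk.Entry(self.root)
        self.host_entry.pack()

    def connect(self):
        # ... so any other method can read it later
        return self.host_entry.get()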
When the class itself grows too large for one source file, you can also split up the class using mix-in classes (similar to having partial classes in C#).
For example:
class MainGuiClass(GuiMixin_FunctionalityA, GuiMixin_FunctionalityB):
    def __init__(self):
        GuiMixin_FunctionalityA.__init__(self)
        GuiMixin_FunctionalityB.__init__(self)
This comes in handy when the GUI consists of different functionalities (for instance a configuration tab, an execution tab or whatsoever).
You should look into the builder pattern for this kind of stuff. If your GUI is complex, then there will be some complexity in describing it; whether that is a complex function or a complex description in some file comes down to the same thing. You can just try to make it as readable and maintainable as possible, and in my experience the builder pattern really helps here.
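A rough sketch of what a builder for a GUI like this could look like (the class and method names are hypothetical, not from the answer above): each step creates one part of the window, stores its widgets as attributes, and the steps can be chained.

import Tkinter as tk

class GuiBuilder(object):
    def __init__(self, root):
        self.root = root

    def add_menu(self):
        self.menu = tk.Menu(self.root)
        self.root.config(menu=self.menu)
        return self  # returning self lets the build steps be chained

    def add_connectbar(self):
        self.host_entry = tk.Entry(self.root)
        self.host_entry.pack()
        return self

root = tk.Tk()
gui = GuiBuilder(root).add_menu().add_connectbar()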