Is it proper practice to have all code within classes? I have one class that does all my calculating and whatnot. But I have all the rest of the code (mainly used to call the class) outside of a class. It looks like this.
class bigClass:
    executing here
    functions and whatnot
    blah blah

b = bigClass()
b.bigClassfunction()
My question is whether those last two lines should go in a class of their own? Or do I just leave them to float about not bound to a class.
That's absolutely OK, there's no need to put them in a class. A function could be an option if you need to repeat the code several times.
A class shouldn't be used for things like this. The role of a class, as defined on Wikipedia, is:
In object-oriented programming, a class is a construct that is used to
create instances of itself – referred to as class instances, class
objects, instance objects or simply objects. A class defines
constituent members which enable its instances to have state and
behavior. Data field members (member variables or instance
variables) enable a class instance to maintain state. Other kinds of
members, especially methods, enable the behavior of class instances.
Classes define the type of their instances.
Although you could embed this code in a class, doing so is unnecessary if it only needs to be executed once.
EDIT:
As I now understand it, the confusion is about how to tell Python which code to run first, like you would do in Java with a main method in the ProjectName class. In Python, the code runs top-down: each statement is executed as it is encountered. That's why you cannot reference a class before its definition, for example:
obj = Klass()          # NameError: Klass is not defined yet
class Klass: pass
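For contrast, a minimal sketch with the definition first, which runs without error:

class Klass:
    pass

obj = Klass()   # works: the class already exists when this line runs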
Your question is not especially clear, but you would always put all code related to a class within the class. It makes no design sense to do otherwise.
Some people put their "main" code into a block such as:
if __name__ == '__main__':
    foo()
    bar()
See this thread for more information.
Do not use classes for the sake of having classes, however. It isn't very "Pythonic".
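For illustration only, here is one possible layout (a sketch using the names from the question, not the asker's actual code) that keeps the class and the calling code in the same module:

class bigClass:
    def bigClassfunction(self):
        # the calculations from the question would live here
        print("calculating...")

if __name__ == '__main__':
    b = bigClass()
    b.bigClassfunction()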
Related
I'm quite new to Python, and could not understand what a static method in Python is (for example __new__()) and what it does. Can anyone explain it? Thanks a million
Have you already read this?
https://en.wikipedia.org/wiki/Method_(computer_programming)
Especially this?
https://en.wikipedia.org/wiki/Method_(computer_programming)#Static_methods
Explanation
In OOP you define classes that you later on instantiate. A class is nothing more than a blueprint: Once you instantiate objects from a class your object will follow exactly the blueprint of your class. That means: If you define a field named "abc" in your class you will later on have a field "abc" in your object. If you define a method "foo()" in your class, you will later on have a method "foo()" to be invoked on your object.
Please note that this "on your object" is essential: You always instantiate a class and then you can invoke the method. This is the "normal" way.
A static method is different. While a normal method always requires an instance (on which you can then invoke it), a static method does not. A static method exists independently of your instances (that's why it is named "static"). So a static method is associated with the class definition itself, is therefore always there, and can be invoked on the class itself. It is completely independent from all instances.
That's a static method.
Python's implementation is a bit ... well ... simple. In detail there are deviations from the description above. But that does not make any difference: to stay in line with OOP concepts you should always use methods exactly as described above.
Example
Let's give you an example:
class FooBar:
    def someMethod(self):
        print("abc")
This is a regular (instance) method. You use it like this:
myObj = FooBar()
myObj.someMethod()
If you have ...
myObjB = FooBar()
myObjB.someMethod()
... you have an additional instance, and invoking someMethod() on this second instance is conceptually the invocation of a second someMethod() method defined on that second object. This is because you instantiate objects before use, so every instance follows the blueprint FooBar defines, and every instance therefore receives some kind of copy of someMethod().
(In practice Python optimizes this internally, so there is actually only one piece of code implementing someMethod() in memory, but forget about that for now. To a programmer it appears as if every instance of a class has its own copy of someMethod(), and that is the level of abstraction relevant to us, since it is the "surface" we work on. Deep within the implementation of a programming or scripting language things may be a bit different, but that is not very relevant.)
Let's have a look at a static method:
class FooBar:
    @staticmethod
    def someStaticMethod():
        print("abc")
Such static methods can be invoked like this:
FooBar.someStaticMethod()
As you can see: no instance. You directly invoke this method in the context of the class itself. While regular methods work on the particular instance - they typically modify data within that instance - a static method does not. It could modify static (!) data, but typically it does not anyway.
Consider a static method a special case. It is rarely needed. What you typically want when you write code is not a static method. Only in very specific situations does it make sense to implement one.
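As an illustration of such a situation (my own example, not from the original answer), a static method suits a helper that conceptually belongs to the class but needs no instance data:

class TemperatureSensor:
    def __init__(self, reading_celsius):
        self.reading_celsius = reading_celsius

    @staticmethod
    def celsius_to_fahrenheit(celsius):
        # pure conversion helper: no instance data needed, so it is static
        return celsius * 9.0 / 5 + 32

print(TemperatureSensor.celsius_to_fahrenheit(20))   # invoked on the class itself: 68.0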
The self parameter
Please note that a standard "instance" method must always have self as its first parameter. This is specific to Python. In reality Python will (of course!) store your method only once in memory, even if you instantiate thousands of objects of your class. Consider this an optimization. If you then invoke your method on one of your thousands of instances, the same single piece of code is always called. But for it to distinguish which particular object the method's code should work on, your instance is passed to this internally stored piece of code as the very first argument. This is the self argument. It is a kind of implicit argument and is always needed for regular (instance) methods. (Not for static methods - there you don't need an instance to invoke them.)
As this argument is implicit and always needed, most programming languages hide it from the programmer (and handle it internally - under the hood - in the correct way). It does not really make much sense to expose this special argument anyway.
Unfortunately Python does not follow this principle. Python does not hide this argument which is implicitly required. (Python's incorporation of OOP concepts is a bit ... simple.) Therefore you see self almost everywhere in methods. In your mind you can ignore it, but you need to write it explicitly if you define your own classes. (Which is something you should do in order to structure your programs in a good way.)
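A small sketch (mine, not from the answer) that makes the implicit self visible; the two calls below are equivalent:

class Greeter:
    def greet(self, name):
        # self is the instance the method was looked up on
        print("Hello, " + name)

g = Greeter()
g.greet("world")           # usual call: Python passes g as self implicitly
Greeter.greet(g, "world")  # equivalent call with self passed explicitly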
The static method __new__()
Python is quite special here. While regular programming languages follow a strict, fixed procedure for creating instances of a particular class, Python lets you change this behavior. The behavior is implemented in __new__(). So if you do this ...
myObj = FooBar()
... Python implicitly invokes FooBar.__new__(), which in turn invokes a constructor-like (instance) method named __init__() that you could (!) define in your class (as an instance method), and then returns the fully initialized instance. This instance is then what is stored in myObj in this example.
You could modify this behavior if you want, but that requires a very unusual use case. You will likely never have anything to do with __new__() itself in your entire work with Python. My advice: if you're new to Python, just ignore it.
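Purely for illustration (not something you will normally need), a minimal sketch, assuming Python 3, that makes the two steps visible:

class FooBar:
    def __new__(cls):
        print("__new__ creates the instance")
        return super().__new__(cls)

    def __init__(self):
        print("__init__ initializes the instance")

myObj = FooBar()   # prints the __new__ line first, then the __init__ line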
To preface this, this is a hypothetical, and just a question that popped into my head while I was prototyping some code. Dynamically creating classes has a pretty narrow range of applicable usages.
In Python, I can dynamically define a class by, for example, nesting its definition inside a def[1]:
def NewClass(doc):
    class MyClass(object):
        __doc__ = doc
    return MyClass
What happens when the instance of the class becomes unused? Does its refcount go to zero and is it destroyed just like other objects? Or is it handled a little bit specially because it's a class? Reading the language docs, I didn't see much of anything about class object destruction.
More specifically, if I had code creating these in response to user requests, e.g., so there were thousands or millions being created through the lifetime of a process, would I need to worry about running out of memory because of all the created classes?
[1] e.g., the type() function, and probably various metaclass things or __new__ tricks.
Yes, classes are objects too, and are governed by the same reference counting rules.
If all you do with the return value is create instances, then the only reference to the class is the __class__ attribute on those instances. Once there are no more instances of that specific class, it is no longer referenced and will be deleted.
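A small sketch (mine, assuming CPython) to observe this; class objects sit in reference cycles, so it is the cyclic garbage collector, rather than pure reference counting, that actually frees them:

import gc
import weakref

def NewClass(doc):
    class MyClass(object):
        __doc__ = doc
    return MyClass

cls = NewClass("temporary")
ref = weakref.ref(cls)   # weak reference so we can watch the class die
obj = cls()

del obj        # last instance gone: no more __class__ references
del cls        # our own name for the class gone
gc.collect()   # break the internal cycles of the class object
print(ref() is None)   # True: the class object has been collected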
I have a rather large and involved decorator to debug PyQt signals that I want to dynamically add to a class. Is there a way to add a decorator to a class dynamically?
I might be approaching this problem from the wrong angle, so here is what I want to accomplish.
Goal
I have a decorator that will discover/attach to all pyqt signals in a class and print debug when those signals are emitted.
This decorator is great for debugging a single class' signals. However, there might be a time when I would like to attach to ALL my signals in an application. This could be used to see if I'm emitting signals at unexpected times, etc.
I'd like to dynamically attach this decorator to all my classes that have signals.
Possible solutions/ideas
I've thought through a few possible solutions so far:
Inheritance: This would be easy if all my classes had the same base class (other than Python's built-in object and PyQt's built-in QtCore.QObject). I suppose I could just attach this decorator to my base class and everything would work out as expected. However, this is not the case in this particular application, and I don't want to change all my classes to have the same base class either.
Monkey-patch Python object or QtCore.QObject: I don't know how this would work practically. However, in theory could I change one of these base classes' __init__ to be the new_init I define in my decorator? This seems really dangerous and hackish but maybe it's a good way?
Metaclasses: I don't think metaclasses will work in this scenario because I'd have to dynamically add the __metaclass__ attribute to the classes I want to inject the decorator into. I think this is impossible because to insert this attribute the class must have already been constructed. Thus, whatever metaclass I define won't be called. Is this true?
I tried a few variants of metaclass magic but nothing seemed to work. I feel like using metaclasses might be a way to accomplish what I want, but I can't seem to get it working.
Again, I might be going about this all wrong. Essentially I want to attach the behavior in my decorator referenced above to all classes in my application (maybe even a list of select classes). Also, I could refactor my decorator if necessary. I don't really care if I attach this behavior with a decorator or another mechanism; I just assumed this decorator already accomplishes what I want for a single class, so maybe it would be easy to extend.
Decorators are nothing more than callables that are applied automatically. To apply it manually, replace the class with the return value of the decorator:
import somemodule
somemodule.someclass = debug_signals(somemodule.someclass)
This replaces the somemodule.someclass name with the return value of debug_signals, to which we passed the original somemodule.someclass class.
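To cover "all classes in my application", one hedged sketch (assuming the debug_signals decorator from the question and a hypothetical module name) is to walk a module's classes and rebind each one:

import inspect
import myapp.widgets as widgets   # hypothetical module containing your classes

for name, cls in inspect.getmembers(widgets, inspect.isclass):
    if cls.__module__ == widgets.__name__:          # skip classes imported from elsewhere
        setattr(widgets, name, debug_signals(cls))  # rebind the name to the decorated class

Note that this only affects code that looks the class up through the module; names already bound elsewhere via "from ... import" keep pointing at the undecorated class.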
I've been programming in Python for some time and have picked up some knowledge of Python style, but I still have a problem with how to use classes properly.
When reading about object-oriented design I often find rules like the Single Responsibility Principle, which states:
"The Single Responsibility Principle says that a class should have
one, and only one, reason to change"
Reading this, I might think of breaking one class into two, like:
class ComplicatedOperations(object):
    def __init__(self, item):
        pass

    def do(self):
        ...
    ## lots of other functions


class CreateOption(object):
    def __init__(self, simple_list):
        self.simple_list = simple_list

    def to_options(self):
        operated_data = self.transform_data(self.simple_list)
        return self.default_option() + operated_data

    def default_option(self):
        return [('', '')]

    def transform_data(self, simple_list):
        return [self.make_complicated_operations_that_requires_losts_of_manipulation(item)
                for item in simple_list]

    def make_complicated_operations_that_requires_losts_of_manipulation(self, item):
        return ComplicatedOperations(item).do()
This, for me, raises lots of different questions; like:
When should I use class variables or pass arguments in class functions?
Should the ComplicatedOperations class be a class or just a bunch of functions?
Should the __init__ method be used to calculate the final result? Does that make the class hard to test?
What are the rules for the pythonists?
Edited after answers:
So, following Augusto's advice, I would end up with something like this:
class ComplicatedOperations(object):
    def __init__(self):
        pass

    def do(self, item):
        ...
    ## lots of other functions


def default_option():
    return [('', '')]


def complicate_data(item):
    return ComplicatedOperations().do(item)


def transform_data_to_options(simple_list):
    return default_option() + [complicate_data(item) for item in simple_list]
(Also corrected a small bug with default_option.)
When should I use class variables or pass arguments in class functions
In your example I would pass item into the do method. Also, this applies to programming in any language: give a class only the information it needs (Principle of Least Authority), and pass everything that is not internal to your algorithm via parameters (Dependency Injection). So, if ComplicatedOperations does not need item to initialize itself, do not give it as an init parameter, and if it needs item to do its job, pass it as a parameter.
Should the ComplicatedOperations class be a class or just a bunch of functions
I'd say it depends. If you're using various kinds of operations and they share some sort of interface or contract, absolutely. If the operation reflects some concept and all the methods are related to the class, sure. But if they are loose and unrelated, you might just use functions, or think again about the Single Responsibility Principle and split the methods up into other classes.
Should the init method be used to calculate the final result. Does that makes that class hard to test.
No, the __init__ method is for initialization; you should do that work in a separate method.
As a side note, because of the lack of context, I did not understand what CreateOption's role is. If it is only used as shown above, you might as well just remove it ...
I personally think of classes as concepts. I'd define an Operation class which behaves like an operation, so it contains a do() method and every other method/property that makes it unique.
As mgilson correctly says, if you cannot define and isolate any concept, maybe a simple functional approach would be better.
To answer your questions:
You should use class attributes when a certain property is shared among the instances (in Python class attributes are initialized when the class body is executed, so different objects will see the same value; usually class attributes should be constants). Use instance attributes to give an object specific properties that its methods can use without having them passed in. This doesn't mean you should put everything in self, but just what you consider characterising for your object. Use passed variables for values that do not concern your object and may depend on the state of external objects (or on the execution of the program).
As said above, I'd keep one single class Operation and use a list of Operation objects to do your computations.
The __init__ method would just instantiate the object and do all the processing needed for its proper behaviour (in other words, make it ready to use).
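A rough sketch of that idea (names and bodies are mine, not from the question):

class Operation(object):
    """One operation as a concept: a do() method plus whatever makes it unique."""

    def __init__(self, label):
        self.label = label

    def do(self, item):
        # placeholder work; the real body depends on your domain
        return (self.label, item)


# a list of Operation objects used to run the computations
operations = [Operation("clean"), Operation("format")]
results = [op.do("some item") for op in operations]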
Just think about the ideas you're trying to model.
A class generally represents a type of object, and class instances are specific objects of that type. A classic example is an Animal class: a cat would be an instance of Animal. Class variables (I assume you mean those that belong to the instance rather than to the class object itself) should be used for attributes of the instance. In this case, for example, colour could be such an attribute, which would be set as cat.colour = "white" or bear.colour = "brown". Arguments should be used where the value could come from some source outside the class. If the Animal class has a sleep method, it might need to know the duration of the sleep and the posture the animal sleeps in. duration would be an argument of the method, since it has no relation to the animal, but posture would be a class variable since it is determined by the animal.
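A brief sketch of that Animal example (my code, following the description above):

class Animal(object):
    def __init__(self, colour, posture):
        # attributes of this particular animal
        self.colour = colour
        self.posture = posture

    def sleep(self, duration):
        # duration comes from outside the animal, so it is an argument
        print("Sleeping for %s hours, %s" % (duration, self.posture))

cat = Animal(colour="white", posture="curled up")
cat.sleep(8)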
In python, a class is typically used to group together a set of functions and variables which share a state. Continuing with the above example, a specific animal has a state which is shared across its methods and is defined by its attributes. If your class is just a group of functions which don't in any way depend on the state of the class, then they could just as easily be separate functions.
If __init__ is used to calculate the final result (which would have to be stored in an attribute of the class since __init__ cannot return a result), then you might as well use a function. A common pattern, however, is to do a lot of processing in __init__ via several other, sometimes private, methods of the class. The reason for this is that large complicated functions are often easier to test if they are broken down into smaller, distinct tasks, each of which can then be tested individually. However, this is usually only done when a class is needed anyway.
One approach to the whole business is to start out by deciding what functionality you need. When you have a group of functions or variables which all act on or apply to the same object, then it is time to move them into a class. Remember that Object Oriented Programming (OOP) is a design method suited to some tasks, but is not inherently superior to functional programming (in fact, some programmers would argue the opposite!), so there's no need to use classes unless there is actually a need.
Classes are an organizational structure. So, if you are not using them to organize, you are doing it wrong. :)
There are several different things you can use them for organizing:
Bundle data with methods that use said data, defines one spot that the code will interact with this data
Bundle like functions together; this provides an understandable API, since 'everyone knows' that all math functions are in the math object
Provide defined communications between methods, sets up a 'conveyor belt' of operations with a defined interface. Each operation is a black box, and can change arbitrarily, so long as it keeps to the standard
Abstract a concept. This can include sub classes, data, methods, so on and so forth all around some central idea like database access. This class then becomes a component you can use in other projects with a minimal amount of retooling
If you don't need to do some organizational thing like the above, then you should go for simplicity and program in a procedural/functional style. Python is about having a toolbox, not a hammer.
I have some Python code that creates a Calendar object based on parsed VEvent objects from an iCalendar file.
The calendar object just has a method that adds events as they get parsed.
Now I want to create a factory function that creates a calendar from a file object, path, or URL.
I've been using the iCalendar python module, which implements a factory function as a class method directly on the Class that it returns an instance of:
cal = icalendar.Calendar.from_string(data)
From what little I know about Java, this is a common pattern in Java code, though I seem to find more references to the factory method being on a different class than the class you actually want to instantiate.
The question is: is this also considered Pythonic? Or is it considered more Pythonic to just create a module-level function as the factory?
[Note. Be very cautious about separating "Calendar" a collection of events, and "Event" - a single event on a calendar. In your question, it seems like there could be some confusion.]
There are many variations on the Factory design pattern.
A stand-alone convenience function (e.g., calendarMaker(data))
A separate class (e.g., CalendarParser) which builds your target class (Calendar).
A class-level method (e.g., Calendar.from_string).
These have different purposes. All are Pythonic, the questions are "what do you mean?" and "what's likely to change?" Meaning is everything; change is important.
Convenience functions are Pythonic. Languages like Java can't have free-floating functions; you must wrap a lonely function in a class. Python allows you to have a lonely function without the overhead of a class. A function is relevant when your constructor has no state changes or alternate strategies or any memory of previous actions.
Sometimes folks will define a class and then provide a convenience function that makes an instance of the class, sets the usual parameters for state and strategy and any other configuration, and then calls the single relevant method of the class. This gives you both the statefulness of class plus the flexibility of a stand-alone function.
The class-level method pattern is used, but it has limitations. One, it's forced to rely on class-level variables. Since these can be confusing, a complex constructor as a static method runs into problems when you need to add features (like statefulness or alternative strategies.) Be sure you're never going to expand the static method.
Two, it's more-or-less irrelevant to the rest of the class methods and attributes. This kind of from_string is just one of many alternative encodings for your Calendar objects. You might have a from_xml, from_JSON, from_YAML and on and on. None of this has the least relevance to what a Calendar IS or what it DOES. These methods are all about how a Calendar is encoded for transmission.
What you'll see in the mature Python libraries is that factories are separate from the things they create. Encoding (as strings, XML, JSON, YAML) is subject to a great deal of more-or-less random change. The essential thing, however, rarely changes.
Separate the two concerns. Keep encoding and representation as far away from state and behavior as you can.
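A minimal sketch of that separation (my own illustration; the line-based "parsing" below is only a placeholder for real iCalendar parsing):

class Calendar(object):
    """Holds events; knows nothing about encodings or parsing."""
    def __init__(self):
        self.events = []

    def add_event(self, event):
        self.events.append(event)


def calendar_from_string(data):
    # stand-alone factory: encoding concerns live here, not on Calendar
    cal = Calendar()
    for line in data.splitlines():
        if line.startswith("BEGIN:VEVENT"):
            cal.add_event(line)   # placeholder for a parsed VEvent object
    return cal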
It's Pythonic not to think about esoteric differences in some pattern you read somewhere and now want to use everywhere, like the factory pattern.
Most of the time when you would think of a @staticmethod as a solution, it's probably better to use a module-level function - except when you stuff multiple classes into one module and each has a different implementation of the same interface; then it's better to use a @staticmethod.
Ultimately, whether you create your instances with a @staticmethod or a module function makes little difference.
I'd probably use the initializer (__init__) of a class, because one of the more accepted "patterns" in Python is that the factory for a class is the class initialization itself.
IMHO a module-level method is a cleaner solution. It hides behind the Python module system that gives it a unique namespace prefix, something the "factory pattern" is commonly used for.
The factory pattern has its own strengths and weaknesses. However, choosing one way to create instances usually has little pragmatic effect on your code.
A staticmethod rarely has value, but a classmethod may be useful. It depends on what you want the class and the factory function to actually do.
A factory function in a module would always make an instance of the 'right' type (where 'right' in your case is always the Calendar class, but you might also make it dependent on the contents of what it is creating the instance from).
Use a classmethod if you wish to make it dependent not on the data, but on the class you call it on. A classmethod is like a staticmethod in that you can call it on the class, without an instance, but it receives the class it was called on as its first argument. This allows you to actually create an instance of that class, which may be a subclass of the original class. An example of a classmethod is dict.fromkeys(), which creates a dict from a list of keys and a single value (defaulting to None). Because it's a classmethod, when you subclass dict you get the 'fromkeys' method entirely for free. Here's an example of how one could write dict.fromkeys() oneself:
class dict_with_fromkeys(dict):
    @classmethod
    def fromkeys(cls, keys, value=None):
        self = cls()
        for key in keys:
            self[key] = value
        return self