I have been trying to get my head around class methods for a while now. I know how they work, but I don't understand why you would or wouldn't use them.
For example, I know I can use an instance method like this:
class MyClass():
    def __init__(self):
        self.name = 'Chris'
        self.age = 27

    def who_are_you(self):
        print('Hello {}, you are {} years old'.format(self.name, self.age))

c = MyClass()
c.who_are_you()
I also know that by using a classmethod I can call who_are_you() without creating an instance of my class:
class MyClass():
    name = 'Chris'
    age = 27

    @classmethod
    def who_are_you(cls):
        print('Hello {}, you are {} years old'.format(cls.name, cls.age))

MyClass.who_are_you()
I don't get why you would pick one method over the other.
In your second example, you've hard-coded the name and age into the class. If name and age are indeed properties of the class and not of a specific instance of the class, then using a class method makes sense. However, if your class were something like Human, of which there are many instances with different names and ages, then it wouldn't be possible to use a class method to access the unique names and ages of a specific instance. In that case, you would want to use an instance method.
In general:
If you want to access a property of a class as a whole, and not the property of a specific instance of that class, use a class method.
If you want to access/modify a property associated with a specific instance of the class, then you will want to use an instance method.
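As a minimal sketch of that distinction (the Human class below is hypothetical, just to put per-instance data next to class-level data):

import builtins  # nothing external needed; shown only to emphasise this is plain Python

class Human:
    species = 'Homo sapiens'   # class-level data, shared by every instance

    def __init__(self, name, age):
        self.name = name       # per-instance data
        self.age = age

    def who_are_you(self):
        # instance method: needs the state of one specific Human
        print('Hello {}, you are {} years old'.format(self.name, self.age))

    @classmethod
    def what_are_you(cls):
        # class method: only touches class-level state
        print('We are all {}'.format(cls.species))

alice = Human('Alice', 30)
alice.who_are_you()    # Hello Alice, you are 30 years old
Human.what_are_you()   # We are all Homo sapiens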
Class methods are called when you don't have, or don't need, or can't have, an instance. Sometimes, a class can serve as a singleton when used this way. But probably the most common use of class methods is as a non-standard constructor.
For example, the Python dict class has a non-standard constructor called dict.fromkeys(seq[, value]). Clearly, there can be no instance involved - the whole point is to create an instance. But it's not the standard __init__() constructor, because it takes data in a slightly different format.
There are similar methods on the built-in types: int.from_bytes(), bytes.fromhex(), bytearray.fromhex(), and float.fromhex().
If you think about the C standard library on Unix, the fdopen function is a similar idea - it constructs a file from a descriptor instead of a string path. Python's open() will accept an integer file descriptor instead of a path, so it doesn't need a separate constructor, but the concept is more common than you might suspect.
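As a rough sketch of the alternative-constructor idea applied to the MyClass from the question (the from_string constructor and its 'name,age' format are made up for illustration):

class MyClass:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    @classmethod
    def from_string(cls, text):
        # non-standard constructor: accepts the data in a different
        # format ('name,age') and delegates to the normal __init__
        name, age = text.split(',')
        return cls(name, int(age))

c = MyClass.from_string('Chris,27')
print(c.name, c.age)   # Chris 27

Because the method receives cls rather than a hard-coded class name, it also does the right thing when called on a subclass.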
@classmethod lets the method be called on the class itself, so you can use it without creating a new instance of the class. On the other hand, in the first example you have to create an instance before you can use the method.
Static methods are very useful for controllers in the MVC pattern, while non-static methods are used in models.
More about @classmethod and @staticmethod here: https://stackoverflow.com/a/12179752/5564059
I just can't see why we need to use @staticmethod. Let's start with an example.
class test1:
    def __init__(self, value):
        self.value = value

    @staticmethod
    def static_add_one(value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

a = test1(3)
print(a.new_val)  # >>> 4
class test2:
    def __init__(self, value):
        self.value = value

    def static_add_one(self, value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

b = test2(3)
print(b.new_val)  # >>> 4
In the example above, the method static_add_one in the two classes does not require the instance of the class (self) in its calculation.
The method static_add_one in the class test1 is decorated with @staticmethod and works properly.
At the same time, the method static_add_one in the class test2, which has no @staticmethod decoration, also works properly, using the trick of accepting a self argument but never using it.
So what is the benefit of using @staticmethod? Does it improve performance? Or is it just down to the Zen of Python, which states that "Explicit is better than implicit"?
The reason to use staticmethod is if you have something that could be written as a standalone function (not part of any class), but you want to keep it within the class because it's somehow semantically related to the class. (For instance, it could be a function that doesn't require any information from the class, but whose behavior is specific to the class, so that subclasses might want to override it.) In many cases, it could make just as much sense to write something as a standalone function instead of a staticmethod.
Your example isn't really the same. A key difference is that, even though you don't use self, you still need an instance to call static_add_one --- you can't call it directly on the class with test2.static_add_one(1). So there is a genuine difference in behavior there. The most serious "rival" to a staticmethod isn't a regular method that ignores self, but a standalone function.
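For instance, here is a minimal sketch (the class names are invented) of a staticmethod that needs no instance state but stays in the class so subclasses can override it:

class Formatter:
    @staticmethod
    def clean(value):
        # no instance state needed, but kept in the class so the
        # behaviour belongs to Formatter and can be overridden
        return value.strip()

    def format(self, value):
        # looked up through self, so a subclass's clean() wins
        return '[{}]'.format(self.clean(value))

class UpperFormatter(Formatter):
    @staticmethod
    def clean(value):
        return value.strip().upper()

print(Formatter().format('  hi '))       # [hi]
print(UpperFormatter().format('  hi '))  # [HI]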
Today I suddenly found a benefit of using @staticmethod.
If you create a staticmethod within a class, you don't need to create an instance of the class before using it.
For example,
class File1:
    def __init__(self, path):
        out = self.parse(path)

    def parse(self, path):
        x = ...  # ..parsing work..
        return x

class File2:
    def __init__(self, path):
        out = self.parse(path)

    @staticmethod
    def parse(path):
        x = ...  # ..parsing work..
        return x

if __name__ == '__main__':
    path = 'abc.txt'
    File1.parse(path)  # TypeError: unbound method parse() ....
    File2.parse(path)  # Goal!
Since the method parse is strongly related to the classes File1 and File2, it is more natural to put it inside the class. However, sometimes this parse method may also be used by other classes under some circumstances. If you want to do that with File1, you must create an instance of File1 before calling the method parse. With the staticmethod in the class File2, you can call the method directly with the syntax File2.parse(path).
This makes your work more convenient and natural.
I will add something the other answers didn't mention. It's not only a matter of modularity, of putting something next to other logically related parts. It's also that the method could be non-static at another point of the hierarchy (i.e. in a subclass or superclass) and thus participate in polymorphism (type-based dispatching). So if you put that function outside the class, you will be precluding subclasses from effectively overriding it. Now, say you realize you don't need self in function C.f of class C; you have three options:
1. Put it outside the class. But we just decided against this.
2. Do nothing new: while unused, still keep the self parameter.
3. Declare that you are not using the self parameter, while still letting other C methods call f as self.f, which is required if you wish to keep open the possibility of further overrides of f that do depend on some instance state.
Option 2 demands less conceptual baggage (you already have to know about self and methods-as-bound-functions, because it's the more general case). But you may still prefer to be explicit about self not being used (and the interpreter could even reward you with some optimization, not having to partially apply a function to self). In that case, you pick option 3 and add @staticmethod on top of your function.
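A small side-by-side sketch of options 2 and 3 (the class and method names are made up):

class C:
    # option 2: keep the unused self parameter
    def f_plain(self, x):
        return x + 1

    # option 3: declare that no instance state is used; other methods
    # can still call it through self, so a subclass may later replace
    # it with a real instance method
    @staticmethod
    def f_static(x):
        return x + 1

    def g(self, x):
        return self.f_plain(x) + self.f_static(x)

print(C().g(1))   # 4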
Use @staticmethod for methods that don't need to operate on a specific object, but that you still want located in the scope of the class (as opposed to module scope).
Your example in test2.static_add_one wastes its time passing an unused self parameter, but otherwise works the same as test1.static_add_one. Note that this extraneous parameter can't be optimized away.
One example I can think of is in a Django project I have, where a model class represents a database table, and an object of that class represents a record. There are some functions used by the class that are stand-alone and do not need an object to operate on, for example a function that converts a title into a "slug", which is a representation of the title that follows the character set limits imposed by URL syntax. The function that converts a title to a slug is declared as a staticmethod precisely to strongly associate it with the class that uses it.
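A simplified, framework-free sketch of that idea (this is not the actual project code; the Article class and the make_slug name are made up):

import re

class Article:                # in the real project this is a Django model class
    def __init__(self, title):
        self.title = title
        self.slug = self.make_slug(title)

    @staticmethod
    def make_slug(title):
        # stand-alone logic, kept on the class because it is specific
        # to how Article titles are turned into URL-safe slugs
        slug = re.sub(r'[^a-z0-9]+', '-', title.lower())
        return slug.strip('-')

print(Article('Hello, World!').slug)   # hello-world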
Let's say I have a class (in Python 3.6):
class Testing:
    def __init__(self, stuff):
        self.stuff = stuff

    @staticmethod
    def method_one(number):
        """My staticmethod"""
        return number + 1

    def method_two(self):
        """Other method"""
        number = 10
        # option A
        self.method_one(number)
        # option B
        Testing.method_one(number)
Which is more pythonic here?
option A, using the self var
option B, using the class itself
I tend to think A, but I am unsure and couldn't find a solid answer on the topic.
In the case of static methods I would always prefer notation B over notation A. The main reason for this is that notation B tells me something about the method that I wouldn't have known if you used notation A.
If you use the class name instead of self it is immediately obvious to the reader that they are dealing with a static method. This way the reader knows that the method doesn't use or change the class or instance state. If you use self you have to check the actual method to see if the class or instance state is used.
In fact, using self is not good in static methods, so when you want to call static methods you can use option B, with the class object.
For calling other (non-static) methods, here is the strategy:
If both methods are in the same class, you can use self as you mentioned in option A.
Option B (calling a method inside a class via the class object) can be used when the function you want to call is in another class.
UPDATE:
For calling static methods in the same class, there are two ways other than self:
1. Call from the class: classname.static_method_name()
2. Call from an instance: instance.static_method_name()
These two ways are more pythonic than self, so in your case option B is more pythonic.
For me Option B has the disadvantage that it explicitly uses the class name Testing again. If the class is renamed (or its contents are copy/pasted), this needs to be updated. Using
type(self).method_one(number)
seems like a way to go if one really wants to highlight that this is not an instance method.
Note that this gives you your own (most derived) class. If your class overrides a static method of the parent class with an instance method that has the same name, then the call will fail. Using super().some_static_method() will work, but since super() returns a proxy object, type(super()).some_static_method() does not work.
So practically, using self seems ok if all you care about is accessing your (currently set) method. A class instance also "inherits" the class-methods and static methods of parents.
But if you really want to invoke the static method from Testing regardless of what instance you are currently in, then use Testing.method_one(). It's a bit hard to argue about the most "pythonic way" when the whole class/inheritance design can be questioned also...
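To illustrate, here is a minimal sketch (reusing the Testing name from the question, with __init__ omitted and a hypothetical subclass) of how type(self) avoids repeating the class name and follows the most derived class:

class Testing:
    @staticmethod
    def method_one(number):
        return number + 1

    def method_two(self):
        # no hard-coded class name; also picks up an override
        # if a subclass provides its own method_one
        return type(self).method_one(10)

class SubTesting(Testing):
    @staticmethod
    def method_one(number):
        return number + 100

print(Testing().method_two())      # 11
print(SubTesting().method_two())   # 110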
What I have is a different method for each instance of a class. So what I want is, when I make a new instance of that class, to somehow be able to choose which method this instance will call. I am a complete newbie in Python, and I wonder if I could have a list of methods and have each instance call a specific one from that list. Is that even possible?
I did some searching and found the inspect module (http://docs.python.org/2/library/inspect.html), but I got confused.
In Python, functions are first-class objects, so you can pass a function directly as an argument. If you have defined functions f(x), g(x), h(x), you can create a method on your class:
def set_function(self, external_function):
    self.F = external_function
You can then use object.F(x), as if it had been defined inside the class.
However, objects of the same class having different methods is bad design. If objects of the same class have different behavior, they should probably belong to different classes to begin with. A better approach would be to subclass the original class, define the different functions inside the subclasses, and then instantiate the corresponding objects.
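A minimal sketch of that subclassing approach (the class names are invented):

class Greeter:
    def greet(self):
        raise NotImplementedError

class FormalGreeter(Greeter):
    def greet(self):
        return 'Good evening.'

class CasualGreeter(Greeter):
    def greet(self):
        return 'Hey!'

# choose the behaviour by choosing the class, not by injecting a function
for greeter in (FormalGreeter(), CasualGreeter()):
    print(greeter.greet())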
I have a class sysprops in which I'd like to have a number of constants. However, I'd like to pull the values for those constants from the database, so I'd like some sort of hook any time one of these class constants is accessed (something like the __getattribute__ method for instance attributes).
class sysprops(object):
    SOME_CONSTANT = 'SOME_VALUE'

sysprops.SOME_CONSTANT  # this statement would not return 'SOME_VALUE' but instead a dynamic value pulled from the database
Although I think it is a very bad idea to do this, it is possible:
class GetAttributeMetaClass(type):
    def __getattribute__(self, key):
        print 'Getting attribute', key
        # fall back to the normal lookup (this is where a database query could go)
        return super(GetAttributeMetaClass, self).__getattribute__(key)

class sysprops(object):
    __metaclass__ = GetAttributeMetaClass
While the other two answers have a valid method, I like to take the route of least magic. You can do something similar to the metaclass approach without actually using one, simply by using a decorator.
def instancer(cls):
    return cls()

@instancer
class SysProps(object):
    def __getattribute__(self, key):
        return key  # dummy
This will create an instance of SysProps and then assign it back to the SysProps name, effectively shadowing the actual class definition and leaving a single constant instance.
Since decorators are more common in Python I find this way easier to grasp for other people that have to read your code.
sysprops.SOME_CONSTANT can be the return value of a function if SOME_CONSTANT were a property defined on type(sysprops).
In other words, what you are talking about is commonly done if sysprops were an instance instead of a class.
But here is the kicker -- classes are instances of metaclasses. So everything you know about controlling the behavior of instances through the use of classes applies equally well to controlling the behavior of classes through the use of metaclasses.
Usually the metaclass is type, but you are free to define other metaclasses by subclassing type. If you place a property SOME_CONSTANT in the metaclass, then the instance of that metaclass, e.g. sysprops will have the desired behavior when Python evaluates sysprops.SOME_CONSTANT.
class MetaSysProps(type):
    @property
    def SOME_CONSTANT(cls):
        return 'SOME_VALUE'

class SysProps(object):
    __metaclass__ = MetaSysProps

print(SysProps.SOME_CONSTANT)
yields
SOME_VALUE
I have a module (db.py) which loads data from different database types (sqlite, mysql, etc.). The module contains a class db_loader and subclasses (sqlite_loader, mysql_loader) which inherit from it.
The type of database being used is specified in a separate params file.
How does the user get the right object back?
i.e how do I do:
loader = db.loader()
Do I use a method called loader in the db.py module or is there a more elegant way whereby a class can pick its own subclass based on a parameter? Is there a standard way to do this kind of thing?
Sounds like you want the Factory pattern. You define a factory method (either in your module, or perhaps in a common parent class for all the objects it can produce) that you pass the parameter to, and it will return an instance of the correct class. In Python the problem is a bit simpler than some of the details on the Wikipedia article suggest, since your types are dynamic.
class Animal(object):
    @staticmethod
    def get_animal_which_makes_noise(noise):
        if noise == 'meow':
            return Cat()
        elif noise == 'woof':
            return Dog()

class Cat(Animal):
    ...

class Dog(Animal):
    ...
is there a more elegant way whereby a class can pick its own subclass based on a parameter?
You can do this by overriding your base class's __new__ method. This will allow you to simply go loader = db_loader(db_type) and loader will magically be the correct subclass for the database type. This solution is mildly more complicated than the other answers, but IMHO it is surely the most elegant.
In its simplest form:
class Parent():
    def __new__(cls, feature):
        subclass_map = {subclass.feature: subclass for subclass in cls.__subclasses__()}
        subclass = subclass_map[feature]
        instance = super(Parent, subclass).__new__(subclass)
        return instance

class Child1(Parent):
    feature = 1

class Child2(Parent):
    feature = 2

type(Parent(1))  # <class '__main__.Child1'>
type(Parent(2))  # <class '__main__.Child2'>
(Note that as long as __new__ returns an instance of cls, the instance's __init__ method will automatically be called for you.)
This simple version has issues though and would need to be expanded upon and tailored to fit your desired behaviour. Most notably, this is something you'd probably want to address:
Parent(3) # KeyError
Child1(1) # KeyError
So I'd recommend either adding cls to subclass_map or using it as the default, like so subclass_map.get(feature, cls). If your base class isn't meant to be instantiated -- maybe it even has abstract methods? -- then I'd recommend giving Parent the metaclass abc.ABCMeta.
If you have grandchild classes too, then I'd recommend putting the gathering of subclasses into a recursive class method that follows each lineage to the end, adding all descendants.
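For example, that gathering could look something like this (a sketch only; the _all_subclasses helper name is made up):

class Parent:
    @classmethod
    def _all_subclasses(cls):
        # follow each lineage to the end, collecting every descendant
        found = []
        for subclass in cls.__subclasses__():
            found.append(subclass)
            found.extend(subclass._all_subclasses())
        return found

    def __new__(cls, feature):
        subclass_map = {sub.feature: sub for sub in cls._all_subclasses()}
        subclass = subclass_map.get(feature, cls)   # fall back to the base class
        return super().__new__(subclass)

class Child(Parent):
    feature = 1

class Grandchild(Child):
    feature = 2

print(type(Parent(2)))   # <class '__main__.Grandchild'>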
This solution is more beautiful than the factory method pattern IMHO. And unlike some of the other answers, it's self-maintaining because the list of subclasses is created dynamically, instead of being kept in a hardcoded mapping. And this will only instantiate subclasses, unlike one of the other answers, which would instantiate anything in the global namespace matching the given parameter.
I'd store the name of the subclass in the params file, and have a factory method that would instantiate the class given its name:
class loader(object):
    @staticmethod
    def get_loader(name):
        return globals()[name]()

class sqlite_loader(loader): pass
class mysql_loader(loader): pass

print(type(loader.get_loader('sqlite_loader')))
print(type(loader.get_loader('mysql_loader')))
Store the classes in a dict, instantiate the correct one based on your param:
db_loaders = dict(sqlite=sqlite_loader, mysql=mysql_loader)
loader = db_loaders.get(db_type, default_loader)()
where db_type is the parameter you are switching on, and sqlite_loader and mysql_loader are the "loader" classes.
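Put together, a self-contained sketch of that approach might look like this (the empty loader classes and the params value are placeholders):

class db_loader:
    """Base loader; also serves as the fallback."""

class sqlite_loader(db_loader):
    pass

class mysql_loader(db_loader):
    pass

db_loaders = dict(sqlite=sqlite_loader, mysql=mysql_loader)

db_type = 'sqlite'                             # e.g. read from the params file
loader = db_loaders.get(db_type, db_loader)()  # unknown type -> base db_loader
print(type(loader))                            # <class '__main__.sqlite_loader'>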