In Python, how do you change an instantiated object after a reload?

Let's say you have an object that was instantiated from a class inside a module.
Now, you reload that module.
The next thing you'd like to do is make that reload affect that class.
mymodule.py
---
class ClassChange():
    def run(self):
        print 'one'
myexperiment.py
---
import mymodule
from mymodule import ClassChange # why is this necessary?
myObject = ClassChange()
myObject.run()
>>> one
### later, i changed this file, so that it says print 'two'
reload(mymodule)
# trick to change myObject needed here
myObject.run()
>>> two
Do you have to make a new ClassChange object, copy myObject into that, and delete the old myObject? Or is there a simpler way?
Edit: The run() method seems like a static class style method but that was only for the sake of brevity. I'd like the run() method to operate on data inside the object, so a static module function wouldn't do...

To update all instances of a class, you need to keep track of those instances somewhere -- typically via weak references (a weak-value dict is handiest and most general), so that the tracking doesn't keep otherwise-unneeded instances alive, of course!
You'd normally want to keep such a container in the class object, but, in this case, since you'll be reloading the module, getting hold of the old class object is not trivial; it's simpler to work at module level.
So, let's say that an "upgradable module" needs to define, at its start, a weak value dict (and an auxiliary "next key to use" int) with, say, conventional names:
import weakref

_objs = weakref.WeakValueDictionary()   # maps arbitrary keys to live instances
_nextkey = 0

def _register(obj):
    global _nextkey
    _objs[_nextkey] = obj
    _nextkey += 1
Each class in the module must include, typically in __init__, a call to _register(self) so that new instances get registered.
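For instance, a class in such an upgradable module might look like this (Widget is a hypothetical name, purely for illustration):

class Widget(object):
    def __init__(self, label):
        _register(self)   # track this instance so reload_all (below) can find it
        self.label = label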
Now the "reload function" can get the roster of all instances of all classes in this module by getting a copy of _objs before it reloads the module.
If all that's needed is to change the code, then life is reasonably easy:
def reload_all(amodule):
    objs = getattr(amodule, '_objs', None)
    reload(amodule)
    if not objs: return  # not an upgradable module, or no objects
    newregister = getattr(amodule, '_register', None)
    for obj in objs.values():
        newclass = getattr(amodule, type(obj).__name__)
        obj.__class__ = newclass
        if newregister: newregister(obj)
Alas, one typically does want to give the new class a chance to upgrade an object of the old class to itself more finely, e.g. by a suitable class method. That's not too hard either:
def reload_all(amodule):
    objs = getattr(amodule, '_objs', None)
    reload(amodule)
    if not objs: return  # not an upgradable module, or no objects
    newregister = getattr(amodule, '_register', None)
    for obj in objs.values():
        newclass = getattr(amodule, type(obj).__name__)
        upgrade = getattr(newclass, '_upgrade', None)
        if upgrade:
            upgrade(obj)
        else:
            obj.__class__ = newclass
        if newregister: newregister(obj)
For example, say the new version of class Zap has renamed an attribute from foo to bar. This could be the code of the new Zap:
class Zap(object):
    def __init__(self):
        _register(self)
        self.bar = 23

    @classmethod
    def _upgrade(cls, obj):
        obj.bar = obj.foo
        del obj.foo
        obj.__class__ = cls
This is NOT all -- there's a LOT more to say on the subject -- but, it IS the gist, and the answer is WAY long enough already (and I, exhausted enough;-).

You have to make a new object. There's no way to magically update the existing objects.
Read the reload builtin documentation - it is very clear. Here's the last paragraph:
If a module instantiates instances of a class, reloading the module that defines the class does not affect the method definitions of the instances — they continue to use the old class definition. The same is true for derived classes.
There are other caveats in the documentation, so you really should read it, and consider alternatives. Maybe you want to start a new question with why you want to use reload and ask for other ways of achieving the same thing.
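For example, here is a quick interactive sketch of that caveat, reusing the question's mymodule/ClassChange setup: after a reload, the existing instance still points at the old class object.

import mymodule

obj = mymodule.ClassChange()
old_class = mymodule.ClassChange

reload(mymodule)  # builtin in Python 2; importlib.reload() in Python 3

print(obj.__class__ is mymodule.ClassChange)  # False: obj still uses the old class
print(obj.__class__ is old_class)             # True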

My approach to this is the following:
Look through all imported modules and reload only those with a new .py file (as compared to the existing .pyc file)
For every function and class method that is reloaded, set old_function.__code__ = new_function.__code__.
For every reloaded class, use gc.get_referrers to list instances of the class and set their __class__ attribute to the new version.
Advantages to this approach are:
Usually no need to reload modules in any particular order
Usually only need to reload the modules with changed code and no more
Don't need to modify classes to keep track of their instances
You can read about the technique (and its limitations) here:
http://luke-campagnola.blogspot.com/2010/12/easy-automated-reloading-in-python.html
And you can download the code here:
http://luke.campagnola.me/code/downloads/reload.py
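As a rough, hedged illustration of steps 2 and 3 (not the linked implementation), assuming old_cls/new_cls are the pre- and post-reload class objects and old_func/new_func are a matching pair of functions:

import gc

def patch_function(old_func, new_func):
    # step 2: existing references to old_func now execute the new code
    old_func.__code__ = new_func.__code__

def repoint_instances(old_cls, new_cls):
    # step 3: in CPython, instances hold a reference to their class,
    # so they show up among the referrers of old_cls
    for ref in gc.get_referrers(old_cls):
        if type(ref) is old_cls:
            ref.__class__ = new_cls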

You have to get the new class from the fresh module and assign it back to the instance.
You can trigger this operation every time you use an instance by giving it this mixin:
import sys

class ObjDebug(object):
    def __getattribute__(self, k):
        ga = object.__getattribute__
        sa = object.__setattr__
        cls = ga(self, '__class__')
        modname = cls.__module__
        mod = sys.modules[modname]
        reload(mod)  # re-executes the module in place
        sa(self, '__class__', getattr(mod, cls.__name__))
        return ga(self, k)
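For instance, a sketch of how it might be used with the question's example (assuming ObjDebug is importable from wherever you keep it):

# mymodule.py
from objdebug import ObjDebug   # hypothetical module holding the mixin

class ClassChange(ObjDebug):
    def run(self):
        print 'one'

# interactive session
from mymodule import ClassChange
myObject = ClassChange()
myObject.run()   # prints 'one'
# ... edit mymodule.py so that run() prints 'two' ...
myObject.run()   # the mixin reloads mymodule and swaps __class__, so this prints 'two'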

The following code does what you want, but please don't use it (at least not until you're very sure you're doing the right thing), I'm posting it for explanation purposes only.
mymodule.py:
class ClassChange():
    @classmethod
    def run(cls, instance):
        print 'one', id(instance)
myexperiment.py:
import mymodule
myObject = mymodule.ClassChange()
mymodule.ClassChange.run(myObject)
# change mymodule.py here
reload(mymodule)
mymodule.ClassChange.run(myObject)
When in your code you instantiate myObject, you get an instance of ClassChange. This instance has an instance method called run. The object keeps this instance method (for the reason explained by nosklo) even when reloading, because reloading only reloads the class ClassChange.
In my code above, run is a class method. Class methods are always bound to and operate on the class, not the instance (which is why their first argument is usually called cls, not self). When ClassChange is reloaded, so is this class method.
You can see that I also pass the instance as an argument to work with the correct (same) instance of ClassChange. You can see that because the same object id is printed in both cases.

I'm not sure if this is the best way to do it, or meshes with what you want to do... but this may work for you. If you want to change the behavior of a method, for all objects of a certain type... just use a function variable. For example:
def default_behavior(the_object):
    print "one"

def some_other_behavior(the_object):
    print "two"

class Foo(object):
    # Class variable: a function that has the behavior
    # (takes an instance of a Foo as argument)
    behavior = default_behavior

    def __init__(self):
        print "Foo initialized"

    def method_that_changes_behavior(self):
        Foo.behavior(self)

if __name__ == "__main__":
    foo = Foo()
    foo.method_that_changes_behavior()  # prints "one"
    Foo.behavior = some_other_behavior
    foo.method_that_changes_behavior()  # prints "two"

# OUTPUT
# Foo initialized
# one
# two
You can now have a class that is responsible for reloading modules, and after reloading, setting Foo.behavior to something new. I tried out this code. It works fine :-).
Does this work for you?

There are tricks to make what you want possible.
Someone already mentioned that you can have a class keep a list of its instances and then, upon reload, change the class of each instance to the new one.
However, that is not efficient. A better method is to change the old class so that it becomes the same as the new class.
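One hedged sketch of that idea: copy the reloaded class's attributes onto the old class object, so every existing instance (which still points at the old class object) picks up the new methods:

def update_class_in_place(old_cls, new_cls):
    # skip attributes that can't or shouldn't be overwritten on the old class
    skip = ('__dict__', '__weakref__')
    for name, value in vars(new_cls).items():
        if name not in skip:
            setattr(old_cls, name, value)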

Related

How to decorate a python class and override a method?

I have a class
class A:
    def sample_method():
        ...

I would like to decorate class A's sample_method() and override the contents of sample_method():

class DecoratedA(A):
    def sample_method():
        ...
The setup above resembles inheritance, but I need to keep the preexisting instance of class A when the decorated function is used.
a # preexisting instance of class A
decorated_a = DecoratedA(a)
decorated_a.functionInClassA() #functions in Class A called as usual with preexisting instance
decorated_a.sample_method() #should call the overwritten sample_method() defined in DecoratedA
What is the proper way to go about this?
There isn't a straightforward way to do what you're asking. Generally, after an instance has been created, it's too late to mess with the methods its class defines.
There are two options you have, as far as I see it. Either you create a wrapper or proxy object for your pre-existing instance, or you modify the instance to change its behavior.
A proxy defers most behavior to the object itself, while only adding (or overriding) some limited behavior of its own:
class Proxy:
    def __init__(self, obj):
        self.obj = obj

    def overridden_method(self):  # add your own limited behavior for a few things
        do_stuff()

    def __getattr__(self, name):  # and hand everything else off to the other object
        return getattr(self.obj, name)
__getattr__ isn't perfect here, it can only work for regular methods, not special __dunder__ methods that are often looked up directly in the class itself. If you want your proxy to match all possible behavior, you probably need to add things like __add__ and __getitem__, but that might not be necessary in your specific situation (it depends on what A does).
As for changing the behavior of the existing object, one approach is to write your subclass, and then change the existing object's class to be the subclass. This is a little sketchy, since you won't have ever initialized the object as the new class, but it might work if you're only modifying method behavior.
class ModifiedA(A):
    def overridden_method(self):  # do the override in a normal subclass
        do_stuff()

def modify_obj(obj):          # then change an existing object's type in place!
    obj.__class__ = ModifiedA  # this is not terribly safe, but it can work
You could also consider adding an instance attribute that would shadow the method you want to override, rather than modifying __class__. Writing the function could be a little tricky, since it won't get bound to the object automatically when called (that only happens for functions that are attributes of a class, not attributes of an instance), but you can do the binding yourself (with partial or a lambda) if you need to access self.
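For example, a hedged sketch of that shadowing approach (new_sample_method is a hypothetical replacement for A's method):

from functools import partial

def new_sample_method(self):
    print("overridden for this one instance only")

a = A()                                          # the pre-existing instance
a.sample_method = partial(new_sample_method, a)  # instance attribute shadows the class method
a.sample_method()                                # other A instances are unaffected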
First, why not just define it from the beginning, how you want it, instead of decorating it?
Second, why not decorate the method itself?
To answer the question:
You can reassign it
class A:
    def sample_method(): ...

A.sample_method = DecoratedA.sample_method
but that affects every instance.
Another solution is to reassign the method for just one object.
import functools

a.sample_method = functools.partial(DecoratedA.sample_method, a)
Another solution is to (temporarily) change the type of an existing object.
a = A()
a.__class__ = DecoratedA
a.sample_method()
a.__class__ = A

Possible for a class to look down at subclass constructor?

I need to change a class variable, but I don't know if I can do it at runtime, because I'm using third-party open-source software, and I'm not very experienced with pure Python inheritance (the software I use provides a custom inheritance system) or with Python in general. My idea was to inherit from the class and change the constructor, but then the objects are still initialized with the original software's class name, so they are not initialized with my __init__.
I partially solved it by inheriting from the classes that use the original class name and overriding their methods, so now they use my classes, but I still cannot reach every method because some of them are static.
Full step about what I was trying to do can be found here
Inherit class Worker on Odoo15
I think that I could solve the problem if I can use something, like a decorator or something else, to tell parent classes to 'look down' to the child class constructor when they are initialized. Is there a way to do that?
I will provide an example:
class A(object):
    def __init__(self):
        self.timeout = 0

    def print_class_info(self):
        print(f'Obj A timeout: {self.timeout}')

class B(A):
    def __init__(self):
        super().__init__()
        self.timeout = 10

    def print_class_info(self):
        print(f'Obj B timeout: {self.timeout}')

# Is it possible, somehow, to make obj_A use the init of B class
# even if the call to the class is on class A?
obj_A = A()
obj_B = B()
obj_A.print_class_info()
obj_B.print_class_info()
OUT:
Obj A timeout: 0
Obj B timeout: 10
Of course, the situation is more complex in the real scenario, so I'm not sure if I can simply access object A and set up the class variable. I think I would have to do it at runtime, probably with a server restart, and I'm not even sure how to access the objects at runtime; as I said, I'm not very experienced with pure Python.
Maybe there is an easy way that I just don't see or know: is it possible to use a subclass constructor from a parent class call, basically?
You can assign any attribute to a class, including a method. This is called monkey patching
# save the old init function
A.__oldinit__ = A.__init__
# create a new function that calls the old one
def custom_init(self):
    self.__oldinit__()
    self.timeout = 10
# overwrite the old function
# the actual old function will still exist because
# it's referenced as A.__oldinit__ as well
A.__init__ = custom_init
# optional cleanup
del custom_init
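After the patch, constructing A runs the original __init__ followed by the extra assignment, so for instance:

obj_A = A()
obj_A.print_class_info()   # now prints: Obj A timeout: 10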

Find class in which a method is defined

I want to figure out the type of the class in which a certain method is defined (in essence, the enclosing static scope of the method), from within the method itself, and without specifying it explicitly, e.g.
class SomeClass:
    def do_it(self):
        cls = enclosing_class()  # <-- I need this.
        print(cls)

class DerivedClass(SomeClass):
    pass

obj = DerivedClass()
# I want this to print 'SomeClass'.
obj.do_it()
Is this possible?
If you need this in Python 3.x, please see my other answer—the closure cell __class__ is all you need.
If you need to do this in CPython 2.6-2.7, RickyA's answer is close, but it doesn't work, because it relies on the fact that this method is not overriding any other method of the same name. Try adding a Foo.do_it method in his answer, and it will print out Foo, not SomeClass.
The way to solve that is to find the method whose code object is identical to the current frame's code object:
import inspect

def do_it(self):
    mro = inspect.getmro(self.__class__)
    method_code = inspect.currentframe().f_code
    method_name = method_code.co_name
    for base in reversed(mro):
        try:
            if getattr(base, method_name).func_code is method_code:
                print(base.__name__)
                break
        except AttributeError:
            pass
(Note that the AttributeError could be raised either by base not having something named do_it, or by base having something named do_it that isn't a function, and therefore doesn't have a func_code. But we don't care which; either way, base is not the match we're looking for.)
This may work in other Python 2.6+ implementations. Python does not require frame objects to exist, and if they don't, inspect.currentframe() will return None. And I'm pretty sure it doesn't require code objects to exist either, which means func_code could be None.
Meanwhile, if you want to use this in both 2.7+ and 3.0+, change that func_code to __code__, but that will break compatibility with earlier 2.x.
If you need CPython 2.5 or earlier, you can just replace the inspect calls with the implementation-specific CPython attributes:
import sys

def do_it(self):
    mro = self.__class__.mro()
    method_code = sys._getframe().f_code
    method_name = method_code.co_name
    for base in reversed(mro):
        try:
            if getattr(base, method_name).func_code is method_code:
                print(base.__name__)
                break
        except AttributeError:
            pass
Note that this use of mro() will not work on classic classes; if you really want to handle those (which you really shouldn't want to…), you'll have to write your own mro function that just walks the hierarchy old-school… or just copy it from the 2.6 inspect source.
This will only work in Python 2.x implementations that bend over backward to be CPython-compatible… but that includes at least PyPy. inspect should be more portable, but then if an implementation is going to define frame and code objects with the same attributes as CPython's so it can support all of inspect, there's not much good reason not to make them attributes and provide sys._getframe in the first place…
First, this is almost certainly a bad idea, and not the way you want to solve whatever you're trying to solve but refuse to tell us about…
That being said, there is a very easy way to do it, at least in Python 3.0+. (If you need 2.x, see my other answer.)
Notice that Python 3.x's super pretty much has to be able to do this somehow. How else could super() mean super(THISCLASS, self), where that THISCLASS is exactly what you're asking for?*
Now, there are lots of ways that super could be implemented… but PEP 3135 spells out a specification for how to implement it:
Every function will have a cell named __class__ that contains the class object that the function is defined in.
This isn't part of the Python reference docs, so some other Python 3.x implementation could do it a different way… but at least as of 3.2+, they still have to have __class__ on functions, because Creating the class object explicitly says:
This class object is the one that will be referenced by the zero-argument form of super(). __class__ is an implicit closure reference created by the compiler if any methods in a class body refer to either __class__ or super. This allows the zero argument form of super() to correctly identify the class being defined based on lexical scoping, while the class or instance that was used to make the current call is identified based on the first argument passed to the method.
(And, needless to say, this is exactly how at least CPython 3.0-3.5 and PyPy3 2.0-2.1 implement super anyway.)
In [1]: class C:
   ...:     def f(self):
   ...:         print(__class__)

In [2]: class D(C):
   ...:     pass

In [3]: D().f()
<class '__main__.C'>
Of course this gets the actual class object, not the name of the class, which is apparently what you were after. But that's easy; you just need to decide whether you mean __class__.__name__ or __class__.__qualname__ (in this simple case they're identical) and print that.
* In fact, this was one of the arguments against it: that the only plausible way to do this without changing the language syntax was to add a new closure cell to every function, or to require some horrible frame hacks which may not even be doable in other implementations of Python. You can't just use compiler magic, because there's no way the compiler can tell that some arbitrary expression will evaluate to the super function at runtime…
If you can use @abarnert's method, do it.
Otherwise, you can use some hardcore introspection (for python2.7):
import inspect
# getMethodClass comes from http://stackoverflow.com/a/22898743/2096752

def enclosing_class():
    frame = inspect.currentframe().f_back
    caller_self = frame.f_locals['self']
    caller_method_name = frame.f_code.co_name
    return getMethodClass(caller_self.__class__, caller_method_name)

class SomeClass:
    def do_it(self):
        print(enclosing_class())

class DerivedClass(SomeClass):
    pass

DerivedClass().do_it()  # prints 'SomeClass'
Obviously, this is likely to raise an error if:
called from a regular function / staticmethod / classmethod
the calling function has a different name for self (as aptly pointed out by @abarnert, this can be solved by using frame.f_code.co_varnames[0])
Sorry for writing yet another answer, but here's how to do what you actually want to do, rather than what you asked for:
this is about adding instrumentation to a code base to be able to generate reports of method invocation counts, for the purpose of checking certain approximate runtime invariants (e.g. "the number of times that method ClassA.x() is executed is approximately equal to the number of times that method ClassB.y() is executed in the course of a run of a complicated program).
The way to do that is to make your instrumentation function inject the information statically. After all, it has to know the class and method it's injecting code into.
I will have to instrument many classes by hand, and to prevent mistakes I want to avoid typing the class names everywhere. In essence, it's the same reason why typing super() is preferable to typing super(ClassX, self).
If your instrumentation "function" is "do it manually", the very first thing you want to do is turn it into an actual function instead of doing it manually. Since you obviously only need static injection, using a decorator, either on the class (if you want to instrument every method) or on each method (if you don't), would make this nice and readable. (Or, if you want to instrument every method of every class, you might want to define a metaclass and have your root classes use it, instead of decorating every class.)
For example, here's an easy way to instrument every method of a class:
import collections
import functools
import inspect

_calls = {}

def _make_wrapper(cls, name, method):
    # binds cls/name/method per method, avoiding the late-binding closure pitfall
    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        cls._calls[name] += 1
        return method(*args, **kwargs)
    return wrapper

def inject(cls):
    cls._calls = collections.Counter()
    _calls[cls.__name__] = cls._calls
    for name, method in list(cls.__dict__.items()):
        if inspect.isfunction(method):
            setattr(cls, name, _make_wrapper(cls, name, method))
    return cls
@inject
class A(object):
    def f(self):
        print('A.f here')

@inject
class B(A):
    def f(self):
        print('B.f here')

@inject
class C(B):
    pass

@inject
class D(C):
    def f(self):
        print('D.f here')

d = D()
d.f()
B.f(d)
print(_calls)
The output:
{'A': Counter(),
'C': Counter(),
'B': Counter({'f': 1}),
'D': Counter({'f': 1})}
Exactly what you wanted, right?
You can either do what @mgilson suggested or take another approach.
class SomeClass:
    pass

class DerivedClass(SomeClass):
    pass
This makes SomeClass the base class for DerivedClass.
When you normally try to get __class__.__name__, it will refer to the derived class rather than the parent.
When you call do_it(), it's really passing DerivedClass as self, which is why you are most likely getting DerivedClass printed.
Instead, try this:
class SomeClass:
    pass

class DerivedClass(SomeClass):
    def do_it(self):
        for base in self.__class__.__bases__:
            print base.__name__

obj = DerivedClass()
obj.do_it()  # Prints SomeClass
Edit:
After reading your question a few more times I think I understand what you want.
class SomeClass:
    def do_it(self):
        cls = self.__class__.__bases__[0].__name__
        print cls

class DerivedClass(SomeClass):
    pass

obj = DerivedClass()
obj.do_it()  # prints SomeClass
[Edited]
A somewhat more generic solution:
import inspect

class Foo:
    pass

class SomeClass(Foo):
    def do_it(self):
        mro = inspect.getmro(self.__class__)
        method_name = inspect.currentframe().f_code.co_name
        for base in reversed(mro):
            if hasattr(base, method_name):
                print(base.__name__)
                break

class DerivedClass(SomeClass):
    pass

class DerivedClass2(DerivedClass):
    pass

DerivedClass().do_it()
>> 'SomeClass'
DerivedClass2().do_it()
>> 'SomeClass'
SomeClass().do_it()
>> 'SomeClass'
This fails when another base class further up the hierarchy also has a "do_it" attribute, since finding that attribute is the signal to stop walking the MRO.

Python events and delegates

This is probably a basic question but I am new to programming. I am working with a third party python code and it provides a class with event and event delegates. The syntax for the events and event delegates are follows:
public Delegate Sub RequestEventDelegate (request As MDNPRequest, _
response as MDNPResponseParser)
public Event RequestEvent As MDNPRequest.RequestEventDelegate
I wrote the following code to subscribe to the event, but it is not working. I do not know what I am doing wrong.
Mreq = MDNPRequest()
Mreq.RequestEvent += Mreq.RequestEventDelegate(handleResponseEvent)

def handleResponseEvent(request, response):
    print ' event fired'
I am adding the two lines of code to the end of a function that opens up the communication channel. I also tested adding the two lines of code to a function that send a poll on the communication channel. In the second scenario the event fires and every time I execute the polling function. Does this defeat the purpose of event subscription?
I think that my problem may be due to different functions creating instances of the same class. I would like to consolidate some of the functions into a class using the outline shown below. Method1 creates an instance 'a' of class1 that I would like the other methods in myClass to use. I tried using a class variable which I set to a class1 instance, but this is not working. I reference the class variable using the class name, for example myClass.variable.somemethod from class1, but I get an "Object reference not set to an instance of an object" error. What is the best approach so that all methods in myClass can have access to a? Eventually I would like to call myClass from another module.
from file1 import *
myClass:
    class_variable = class1()  # class1 from file1

    def __init__(self)
        ...

    def Method1(self, argument list):
        # this method instantiates a
        ...
        a = class1()

    def Method2(self):
        ...
        a.class1method1
        ...

    def Method3(self):
        ...
        a.class1method2
        ...
If this is actually your code:
Mreq.RequestEvent += Mreq.RequestEventDelegate(handleResponseEvent)
def handleRequestEvent(request, response):
    print ' event fired'
… handleResponseEvent is not the same thing as handleRequestEvent.
As a side note, you almost never need to create an explicit delegate. It's sometimes a useful optimization, but it's one more thing you can get wrong, and one more thing that can disguise useful debugging information when you do. So it's usually simpler to write the code without it first, and only wrap it as a delegate after it's working, if you find yourself creating a whole lot of them and want to save some memory.
From your later edits, I suspect that you're missing the fundamentals of how classes work in Python. You may want to read through the tutorial chapter, or maybe search for a friendlier/more detailed tutorial.
In particular:
I would like to consolidate some of the functions into a class using the outline shown below. Method1 creates an instance 'a' of a class1 that I would like the other methods in myClass to use. I tried using a class variable which I set to a class1 instance but this is not working.
That's not the way to do it. Class attributes, like your class_variable, are created at class creation time (that is, generally, as soon as you import the module or run the script), not instance creation time. If you want something created when instances of your class are created, you use instance attributes, not class attributes, and you set them in the __init__ method. In your case, you don't want the instance created until Method1 is called on an instance—again, that means you use an instance attribute; you just do it inside Method1 rather than __init__.
Also, class attributes are shared by all instances of the class; with instance attributes, each instance has its own. Think about dogs: each dog has its own tail; there isn't one tail shared by all dogs, so tail is an instance attribute. Often, in simple scripts, you don't notice the difference, because you only happen to ever create one instance of the class. But if you can't figure out the difference practically, think about it conceptually (like the dog example), and if you still can't figure it out, you almost always want an instance attribute.
I reference the class variable using the class name for example myClass.variable.somemethod from class1 but I get "Object reference not set to an instance of an object" error.
Most likely this is because class1 is a COM/interop or .NET class, and you're trying to create and use it before doing any of the relevant setup, which is only happening because you're trying to do it as soon as you import the module/run the script. If so, if you create it when you actually intended to, there won't be a problem.
What is the best approach so that all methods in myClass can have access to a?
Create an instance attribute in Method1, like this:
def Method1(self, argument list):
    # this method instantiates a
    ...
    self.a = class1()
And then use it the same way:
def Method2(self):
    ...
    self.a.class1method1()
    ...
Just doing a = whatever just creates a local variable that goes away at the end of the method. Even if it happens to have the same name as a class attribute, instance attribute, or global, you're still creating a new local variable, not modifying the thing you want to modify. Unlike some other languages, Python requires you to be explicit about what you're trying to overwrite—self.a for an instance attribute, myClass.a for a class attribute, etc.—so you don't do it by accident.
Also, note the parentheses at the end of that last expression. If you want to call a function or method, you need parentheses; otherwise, you're just referencing the method itself as a value.
Eventually I would like to call myClass from another module.
I'm not sure what you mean by "call myClass". When you call a class, that constructs a new instance of the class. You can then call that instance's methods the same way you would any other object's. It doesn't matter what module it was defined in (except that you obviously have to write my_instance = mymodule.MyClass()).
Look at how you use the standard library; it's exactly the same. For example, if you import csv, you can construct a DictWriter by writing my_writer = csv.DictWriter(my_file). And then you call its methods by writing my_writer.writerow(my_row). Once you've constructed it, it doesn't matter what module it came from.
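For instance (a small sketch; the file name and field names here are made up, and DictWriter does require a fieldnames argument):

import csv

with open('people.csv', 'w') as my_file:
    my_writer = csv.DictWriter(my_file, fieldnames=['name', 'age'])
    my_writer.writeheader()
    my_writer.writerow({'name': 'Ada', 'age': 36})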
One more thing:
You've tried to define a class like this:
myClass:
You obviously can't do that; you need the class keyword. But also, in Python 2.x, you always want to give base classes, using object if you don't need anything else. Otherwise, you get an old-style class, which causes all kinds of weird quirks and limitations that you don't want to learn about and have to debug. So:
class myClass(object):

What is the purpose of class methods?

I'm teaching myself Python and my most recent lesson was that Python is not Java, and so I've just spent a while turning all my Class methods into functions.
I now realise that I don't need to use class methods for what I would have done with static methods in Java, but now I'm not sure when I would use them. All the advice I can find about Python class methods is along the lines of "newbies like me should steer clear of them", and the standard documentation is at its most opaque when discussing them.
Does anyone have a good example of using a Class method in Python or at least can someone tell me when Class methods can be sensibly used?
Class methods are for when you need to have methods that aren't specific to any particular instance, but still involve the class in some way. The most interesting thing about them is that they can be overridden by subclasses, something that's simply not possible in Java's static methods or Python's module-level functions.
If you have a class MyClass, and a module-level function that operates on MyClass (factory, dependency injection stub, etc), make it a classmethod. Then it'll be available to subclasses.
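A small sketch of that pattern (the names here are illustrative, not from any particular library):

class MyClass(object):
    def __init__(self, payload):
        self.payload = payload

    @classmethod
    def from_string(cls, text):
        # cls is whichever class the method was called on,
        # so subclasses inherit the factory for free
        return cls(text.strip())

class MySubclass(MyClass):
    pass

print(type(MyClass.from_string("  spam ")))     # MyClass
print(type(MySubclass.from_string("  spam ")))  # MySubclass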
Factory methods (alternative constructors) are indeed a classic example of class methods.
Basically, class methods are suitable anytime you would like to have a method which naturally fits into the namespace of the class, but is not associated with a particular instance of the class.
As an example, in the excellent unipath module:
Current directory
Path.cwd()
    Return the actual current directory; e.g., Path("/tmp/my_temp_dir"). This is a class method.
.chdir()
    Make self the current directory.
As the current directory is process wide, the cwd method has no particular instance with which it should be associated. However, changing the cwd to the directory of a given Path instance should indeed be an instance method.
Hmmm... as Path.cwd() does indeed return a Path instance, I guess it could be considered to be a factory method...
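A rough sketch of the shape of that API (not unipath's actual implementation):

import os

class MyPath(str):
    @classmethod
    def cwd(cls):
        # no particular instance is involved, so a classmethod fits
        return cls(os.getcwd())

    def chdir(self):
        # operates on one specific path, so an instance method fits
        os.chdir(self)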
Think about it this way: normal methods are useful to hide the details of dispatch: you can type myobj.foo() without worrying about whether the foo() method is implemented by the myobj object's class or one of its parent classes. Class methods are exactly analogous to this, but with the class object instead: they let you call MyClass.foo() without having to worry about whether foo() is implemented specially by MyClass because it needed its own specialized version, or whether it is letting its parent class handle the call.
Class methods are essential when you are doing set-up or computation that precedes the creation of an actual instance, because until the instance exists you obviously cannot use the instance as the dispatch point for your method calls. A good example can be viewed in the SQLAlchemy source code; take a look at the dbapi() class method at the following link:
https://github.com/zzzeek/sqlalchemy/blob/ab6946769742602e40fb9ed9dde5f642885d1906/lib/sqlalchemy/dialects/mssql/pymssql.py#L47
You can see that the dbapi() method, which a database backend uses to import the vendor-specific database library it needs on demand, is a class method because it needs to run before instances of a particular database connection start getting created — but it cannot be a simple function or static method, because they want it to be able to call other supporting methods that might similarly need to be written more specifically in subclasses than in their parent class. And if you dispatch to a function or static method, then you "forget" and lose the knowledge about which class is doing the initializing.
I recently wanted a very light-weight logging class that would output varying amounts of output depending on the logging level that could be programmatically set. But I didn't want to instantiate the class every time I wanted to output a debugging message or error or warning. But I also wanted to encapsulate the functioning of this logging facility and make it reusable without the declaration of any globals.
So I used class variables and the @classmethod decorator to achieve this.
With my simple Logging class, I could do the following:
Logger._level = Logger.DEBUG
Then, in my code, if I wanted to spit out a bunch of debugging information, I simply had to code
Logger.debug( "this is some annoying message I only want to see while debugging" )
Errors could be output with
Logger.error( "Wow, something really awful happened." )
In the "production" environment, I can specify
Logger._level = Logger.ERROR
and now, only the error message will be output. The debug message will not be printed.
Here's my class:
class Logger:
    ''' Handles logging of debugging and error messages. '''
    DEBUG = 5
    INFO = 4
    WARN = 3
    ERROR = 2
    FATAL = 1
    _level = DEBUG

    def __init__(self):
        Logger._level = Logger.DEBUG

    @classmethod
    def isLevel(cls, level):
        return cls._level >= level

    @classmethod
    def debug(cls, message):
        if cls.isLevel(Logger.DEBUG):
            print "DEBUG: " + message

    @classmethod
    def info(cls, message):
        if cls.isLevel(Logger.INFO):
            print "INFO : " + message

    @classmethod
    def warn(cls, message):
        if cls.isLevel(Logger.WARN):
            print "WARN : " + message

    @classmethod
    def error(cls, message):
        if cls.isLevel(Logger.ERROR):
            print "ERROR: " + message

    @classmethod
    def fatal(cls, message):
        if cls.isLevel(Logger.FATAL):
            print "FATAL: " + message
And some code that tests it just a bit:
def logAll():
    Logger.debug("This is a Debug message.")
    Logger.info("This is a Info message.")
    Logger.warn("This is a Warn message.")
    Logger.error("This is a Error message.")
    Logger.fatal("This is a Fatal message.")

if __name__ == '__main__':
    print "Should see all DEBUG and higher"
    Logger._level = Logger.DEBUG
    logAll()
    print "Should see all ERROR and higher"
    Logger._level = Logger.ERROR
    logAll()
Alternative constructors are the classic example.
It allows you to write generic class methods that you can use with any compatible class.
For example:
@classmethod
def get_name(cls):
    print cls.name

class C:
    name = "tester"

C.get_name = get_name

# call it:
C.get_name()
If you don't use @classmethod you can do it with the self keyword, but it needs an instance of the class:
def get_name(self):
    print self.name

class C:
    name = "tester"

C.get_name = get_name

# call it:
C().get_name()  # <- note that it's an instance of class C
When a user logs in on my website, a User() object is instantiated from the username and password.
If I need a user object without the user being there to log in (e.g. an admin user might want to delete another user's account, so I need to instantiate that user and call its delete method):
I have class methods to grab the user object.
class User():
    # lots of code
    # ...
    # more code

    @classmethod
    def get_by_username(cls, username):
        return cls.query(cls.username == username).get()

    @classmethod
    def get_by_auth_id(cls, auth_id):
        return cls.query(cls.auth_id == auth_id).get()
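The admin code path can then do something like this (a sketch that assumes such a user exists and that the model defines a delete() method):

user = User.get_by_username('some_username')
user.delete()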
I think the clearest answer is AmanKow's. It boils down to how you want to organize your code. You can write everything as module-level functions, which are wrapped in the namespace of the module, i.e.
module.py (file 1)
------------------
def f1(): pass
def f2(): pass
def f3(): pass

usage.py (file 2)
-----------------
from module import *

f1()
f2()
f3()

def f4(): pass
def f5(): pass

usage1.py (file 3)
------------------
from usage import f4, f5

f4()
f5()
The above procedural code is not well organized; as you can see, after only 3 modules it gets confusing to tell what each function does. You can use long descriptive names for functions (like in Java), but your code still becomes unmanageable very quickly.
The object-oriented way is to break your code down into manageable blocks, i.e. classes and objects, and functions can be associated with object instances or with classes.
With class functions you gain another level of division in your code compared with module-level functions.
So you can group related functions within a class to make them more specific to the task that you assigned to that class. For example, you can create a file utility class:
class FileUtil:
    # staticmethod lets the helpers be called on the class itself
    @staticmethod
    def copy(source, dest): pass
    @staticmethod
    def move(source, dest): pass
    @staticmethod
    def copyDir(source, dest): pass
    @staticmethod
    def moveDir(source, dest): pass

# usage
FileUtil.copy("1.txt", "2.txt")
FileUtil.moveDir("dir1", "dir2")
This way is more flexible and more maintainable: you group related functions together, and it's more obvious what each function does. It also prevents name conflicts; for example, a function named copy may exist in another imported module that you use in your code (say, a network copy), but when you use the full name FileUtil.copy() you remove the ambiguity, and both copy functions can be used side by side.
Honestly? I've never found a use for staticmethod or classmethod. I've yet to see an operation that can't be done using a global function or an instance method.
It would be different if python used private and protected members more like Java does. In Java, I need a static method to be able to access an instance's private members to do stuff. In Python, that's rarely necessary.
Usually, I see people using staticmethods and classmethods when all they really need to do is use python's module-level namespaces better.
I used to work with PHP, and recently I was asking myself, what's going on with this classmethod? The Python manual is very technical and very terse, so it won't help with understanding that feature. I was googling and googling and I found the answer -> http://code.anjanesh.net/2007/12/python-classmethods.html.
If you are too lazy to click it, my explanation is shorter and below. :)
In PHP (maybe not all of you know PHP, but this language is so straightforward that everybody should understand what I'm talking about) we have static variables like this:
class A
{
    static protected $inner_var = null;

    static public function echoInnerVar()
    {
        echo self::$inner_var."\n";
    }

    static public function setInnerVar($v)
    {
        self::$inner_var = $v;
    }
}

class B extends A
{
}

A::setInnerVar(10);
B::setInnerVar(20);

A::echoInnerVar();
B::echoInnerVar();
The output will be in both cases 20.
However, in Python we can add the @classmethod decorator and thus it is possible to have the output 10 and 20 respectively. Example:
class A(object):
    inner_var = 0

    @classmethod
    def setInnerVar(cls, value):
        cls.inner_var = value

    @classmethod
    def echoInnerVar(cls):
        print cls.inner_var

class B(A):
    pass

A.setInnerVar(10)
B.setInnerVar(20)

A.echoInnerVar()
B.echoInnerVar()
Smart, ain't?
Class methods provide a "semantic sugar" (don't know if this term is widely used) - or "semantic convenience".
Example: you got a set of classes representing objects. You might want to have the class method all() or find() to write User.all() or User.find(firstname='Guido'). That could be done using module level functions of course...
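A minimal sketch of that kind of API, using an in-memory list where a real application would query a database (the names are illustrative):

class User(object):
    _registry = []   # stand-in for real storage

    def __init__(self, firstname):
        self.firstname = firstname
        User._registry.append(self)

    @classmethod
    def all(cls):
        return list(cls._registry)

    @classmethod
    def find(cls, firstname=None):
        return [u for u in cls._registry if u.firstname == firstname]

guido = User(firstname='Guido')
print(User.all())                    # every registered user
print(User.find(firstname='Guido'))  # just the matching ones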
If you are not a "programmer by training", this should help:
I think I understood the technical explanations above and elsewhere on the net, but I was always left with the question "Nice, but why do I need it? What is a practical use case?" Now life gave me a good example that clarified everything:
I am using it to control a global variable that is shared among instances of a class instantiated by the multi-threading module. In human language, I am running multiple agents that create examples for deep learning IN PARALLEL (imagine multiple players playing an ATARI game at the same time, each saving the results of their game to one common repository, the SHARED VARIABLE).
I instantiate the players/agents with the following code (in Main/Execution Code):
a3c_workers = [A3C_Worker(self.master_model, self.optimizer, i, self.env_name, self.model_dir) for i in range(multiprocessing.cpu_count())]
This creates as many players as there are processor cores on my computer.
A3C_Worker is the class that defines the agent.
a3c_workers is a list of the instances of that class (i.e. each instance is one player/agent).
Now I want to know how many games have been played across all players/agents, so within the A3C_Worker definition I define a variable to be shared across all instances:
class A3C_Worker(threading.Thread):
    global_shared_total_episodes_across_all_workers = 0
Now, as the workers finish their games, they each increase that count by 1 for each game finished.
At the end of my example generation I was closing the instances, but the shared variable still held the total number of games played, so when I re-ran the program my initial total number of episodes was the previous run's total. But I needed that count to start fresh for each run.
To fix that I specified:
class A3C_Worker(threading.Thread):
    @classmethod
    def reset(cls):
        A3C_Worker.global_shared_total_episodes_across_all_workers = 0
Then in the execution code I just call:
A3C_Worker.reset()
Note that this is a call to the CLASS overall, not to any INSTANCE of it individually; it will thus set my counter to 0 for every new agent I initiate from now on.
Using a usual instance-method definition (def reset(self):) would require us to reset that counter for each instance individually, which would be more computationally demanding and harder to track.
What just hit me, coming from Ruby, is that a so-called class method and a so-called instance method is just a function with semantic meaning applied to its first parameter, which is silently passed when the function is called as a method of an object (i.e. obj.meth()).
Normally that object must be an instance, but the @classmethod decorator changes the rules to pass a class. You can call a class method on an instance (it's just a function); the first argument will be its class.
Because it's just a function, it can only be declared once in any given scope (i.e. class definition). It follows therefore, as a surprise to a Rubyist, that you can't have a class method and an instance method with the same name.
Consider this:
class Foo():
    def foo(x):
        print(x)
You can call foo on an instance
Foo().foo()
<__main__.Foo instance at 0x7f4dd3e3bc20>
But not on a class:
Foo.foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method foo() must be called with Foo instance as first argument (got nothing instead)
Now add #classmethod:
class Foo():
    @classmethod
    def foo(x):
        print(x)
Calling on an instance now passes its class:
Foo().foo()
__main__.Foo
as does calling on a class:
Foo.foo()
__main__.Foo
It's only convention that dictates that we use self for that first argument on an instance method and cls on a class method. I used neither here to illustrate that it's just an argument. In Ruby, self is a keyword.
Contrast with Ruby:
class Foo
  def foo()
    puts "instance method #{self}"
  end

  def self.foo()
    puts "class method #{self}"
  end
end

Foo.foo()
class method Foo

Foo.new.foo()
instance method #<Foo:0x000000020fe018>
The Python class method is just a decorated function and you can use the same techniques to create your own decorators. A decorated method wraps the real method (in the case of @classmethod it passes the additional class argument). The underlying method is still there, hidden but still accessible.
footnote: I wrote this after a name clash between a class and instance method piqued my curiosity. I am far from a Python expert and would like comments if any of this is wrong.
This is an interesting topic. My take on it is that the Python classmethod operates like a singleton rather than a factory (which returns a produced instance of a class). The reason it is a singleton is that there is a common object that is produced (the dictionary), but only once for the class, and it is shared by all instances.
To illustrate this, here is an example. Note that all instances have a reference to the single dictionary. This is not the Factory pattern as I understand it. This is probably very unique to Python.
class M():
    @classmethod
    def m(cls, arg):
        print "arg was", getattr(cls, "arg", None),
        cls.arg = arg
        print "arg is", cls.arg

M.m(1)   # prints arg was None arg is 1
M.m(2)   # prints arg was 1 arg is 2
m1 = M()
m2 = M()
m1.m(3)  # prints arg was 2 arg is 3
m2.m(4)  # prints arg was 3 arg is 4  << this breaks the factory pattern theory.
M.m(5)   # prints arg was 4 arg is 5
I have asked myself the same question a few times. And even though the folks here tried hard to explain it, IMHO the best (and simplest) answer I have found is the description of the class method in the Python documentation.
There is also a reference to the static method. And in case someone already knows instance methods (which I assume), this answer might be the final piece to put it all together...
Further and deeper elaboration on this topic can be found also in the documentation:
The standard type hierarchy (scroll down to Instance methods section)
@classmethod can be useful for easily instantiating objects of that class from outside resources. Consider the following:
import settings

class SomeClass:
    @classmethod
    def from_settings(cls):
        return cls(settings=settings)

    def __init__(self, settings=None):
        if settings is not None:
            self.x = settings['x']
            self.y = settings['y']
Then in another file:
from some_package import SomeClass
inst = SomeClass.from_settings()
Accessing inst.x will give the same value as settings['x'].
A class defines a set of instances, of course, and the methods of a class work on the individual instances. Class methods (and variables) are a place to hang other information that relates to the set of instances as a whole.
For example, if your class defines a set of students, you might want class variables or methods which define things like the set of grades the students can be members of.
You can also use class methods to define tools for working on the entire set. For example, Student.all_of_em() might return all the known students. Obviously, if your set of instances has more structure than just a set, you can provide class methods that know about that structure, e.g. Student.all_of_em(grade='juniors').
Techniques like this tend to lead to storing members of the set of instances in data structures that are rooted in class variables; you then need to take care to avoid frustrating the garbage collector.
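One hedged sketch of that idea, using weak references so the class-level roster doesn't keep dead instances alive:

import weakref

class Student(object):
    _instances = weakref.WeakSet()   # class-level roster of live instances

    def __init__(self, name, grade):
        self.name = name
        self.grade = grade
        Student._instances.add(self)

    @classmethod
    def all_of_em(cls, grade=None):
        return [s for s in cls._instances
                if grade is None or s.grade == grade]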
Classes and Objects concepts are very useful in organizing things. It's true that all the operations that can be done by a method can also be done using a static function.
Just think of a scenario, to build a Students Databases System to maintain student details.
You need to have details about students, teachers and staff. You need to build functions to calculate fees, salaries, marks, etc. Fees and marks apply only to students; salaries apply only to staff and teachers. So if you create separate classes for each type of person, the code will be well organized.
