From a famous example, I learned the difference between method, classmethod and staticmethod in a Python class.
Source:
What is the difference between @staticmethod and @classmethod in Python?
class A(object):
    def foo(self, x):
        print "executing foo(%s,%s)" % (self, x)

    @classmethod
    def class_foo(cls, x):
        print "executing class_foo(%s,%s)" % (cls, x)

    @staticmethod
    def static_foo(x):
        print "executing static_foo(%s)" % x

    # My guesses
    def My_Question(self, x):
        self.foo(x)
        A.class_foo(x)
        A.static_foo(x)

a = A()
Now I am wondering how to call a regular method, a @classmethod, and a @staticmethod from inside the class.
I put my guesses in the My_Question function above; please correct me if I am wrong about any of these.
Yes, your guesses will work. Note that it is also possible/normal to call staticmethods and classmethods outside the class:
class A():
    ...

A.class_foo()
A.static_foo()
Also note that inside regular instance methods, it's customary to call the staticmethods and class methods directly on the instance (self) rather than the class (A):
class A():
    def instance_method(self):
        self.class_foo()
        self.static_foo()
This allows inheritance to work as you might expect: if I create a subclass B from A and call B.instance_method(), my class_foo function will get B instead of A as the cls argument. And if I override static_foo on B to do something slightly different from A.static_foo, the overridden version will be called as well.
Some examples might make this more clear:
class A(object):
    @staticmethod
    def static():
        print("Static, in A")

    @staticmethod
    def staticoverride():
        print("Static, in A, overrideable")

    @classmethod
    def clsmethod(cls):
        print("class, in A", cls)

    @classmethod
    def clsmethodoverrideable(cls):
        print("class, in A, overridable", cls)

    def instance_method(self):
        self.static()
        self.staticoverride()
        self.clsmethod()
        self.clsmethodoverrideable()

class B(A):
    @classmethod
    def clsmethodoverrideable(cls):
        print("class, in B, overridable", cls)

    @staticmethod
    def staticoverride():
        print("Static, in B, overrideable")

a = A()
b = B()

a.instance_method()
b.instance_method()
...
After you've run that, try it again, changing all of the self. references to A. inside instance_method. Rerun and compare. You'll see that all of the references to B are gone (even when you're calling b.instance_method()). This is why you want to use self rather than the class.
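For example, a trimmed-down sketch of that experiment (the same classes as above, but with instance_method hard-coding A. instead of self.):

class A(object):
    @staticmethod
    def staticoverride():
        print("Static, in A, overrideable")

    @classmethod
    def clsmethodoverrideable(cls):
        print("class, in A, overridable", cls)

    def instance_method(self):
        # Hard-coding the class instead of using self:
        A.staticoverride()
        A.clsmethodoverrideable()

class B(A):
    @staticmethod
    def staticoverride():
        print("Static, in B, overrideable")

    @classmethod
    def clsmethodoverrideable(cls):
        print("class, in B, overridable", cls)

b = B()
b.instance_method()
# Static, in A, overrideable
# class, in A, overridable <class '__main__.A'>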
As @wim said, what you have is right. Here's the output when My_Question is called.
>>> a.My_Question("My_Answer=D")
executing foo(<__main__.A object at 0x0000015790FF4668>,My_Answer=D)
executing class_foo(<class '__main__.A'>,My_Answer=D)
executing static_foo(My_Answer=D)
Related
class A:
    def foo(self):
        print("original")

class B(A):
    def foo(self):
        super().foo()
        print("override")

class C(B):
    def foo(self):
        super().foo()
        print("override")

o = C()
Now after defining this object, I want to, access the foo of class A through same object, how do I do that???
If you actually want to bypass all the child class implementations, just name the base class explicitly, e.g. replace:
self.method_name() # Calls own class (or first parent with implementation if own class lacks it)
super().method_name() # Call first parent class with implementation of the method
with:
GrandparentClass.method_name(self) # Explicitly calls specific class's version of the method with self
To be clear, GrandparentClass is a placeholder for the actual name of the top-level class you want to call; it's not a special name/function like super().
Note: If you're doing this, you likely have an XY problem that should probably be solved instead.
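For example, with the A/B/C chain from the question above, a quick sketch of the difference:

class A:
    def foo(self):
        print("original")

class B(A):
    def foo(self):
        super().foo()
        print("override")

class C(B):
    def foo(self):
        super().foo()
        print("override")

o = C()
o.foo()      # original, override, override (the whole super() chain runs)
A.foo(o)     # original (only A's version; B and C are bypassed)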
So now, if I want to access that function with the same name from a sub-subclass of the base class, how do I do it?
super().<method_name>(<params>) is how you call a method "in the parent class" in order to delegate upwards.
class A:
    def foo(self):
        print("original")

class B(A):
    def foo(self):
        super().foo()
        print("override")
Calling B().foo() will print "original" then "override".
I have one weird problem. I have the following code:
class A:
    def f():
        return __class__()

class B(A):
    pass

a = A.f()
b = B.f()
print(a, b)
And output is something like this:
<__main__.A object at 0x01AF2630> <__main__.A object at 0x01B09B70>
So how can I get B instead of second A?
The magic __class__ closure is set for the method context and only really meant for use by super().
For methods you'd want to use self.__class__ instead:
return self.__class__()
or better still, use type(self):
return type(self)()
If you want to be able to call the method on a class, then use the classmethod decorator to be handed a reference to the class object, rather than remain unbound:
@classmethod
def f(cls):
    return cls()
classmethods are always bound to the class they are called on, so for A.f() that'd be A, and for B.f() you get handed B.
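Put together, a sketch of the code from the question using the classmethod approach; the second object now comes out as a B:

class A:
    @classmethod
    def f(cls):
        return cls()

class B(A):
    pass

a = A.f()
b = B.f()
print(a, b)
# <__main__.A object at 0x...> <__main__.B object at 0x...>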
How do I introspect the calling A instance (i.e. that instance's self) from within b.func()?
class A():
    def go(self):
        b = B()
        b.func()

class B():
    def func(self):
        # Introspect to find the calling A instance here
        ...
In general we don't want that func to have access back to the calling instance of A because this breaks encapsulation. Inside of b.func you should have access to any args and kwargs passed, the state/attributes of the instance b (via self here), and any globals hanging around.
If you want to know about a calling object, the valid ways are:
Pass the calling object in as an argument to the function.
Explicitly add a handle to the caller onto the b instance sometime before using func, and then access that handle through self (a sketch of this option follows below).
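A sketch of the second option, where the attribute name owner is just an illustrative choice:

class A:
    def go(self):
        b = B()
        b.owner = self      # explicitly hand b a reference to the calling A instance
        b.func()

class B:
    def func(self):
        print(self.owner)   # the A instance is now ordinary state on b

A().go()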
However, with that disclaimer out of the way, it's still worth knowing that Python's introspection capabilities are powerful enough to access the caller in some cases. In the CPython implementation, here is how you could access the calling A instance without changing your existing function signatures:
class A:
    def go(self):
        b = B()
        b.func()

class B:
    def func(self):
        import inspect
        print inspect.currentframe().f_back.f_locals["self"]

if __name__ == "__main__":
    a = A()
    a.go()
Output:
<__main__.A instance at 0x15bd9e0>
This might be a useful trick to know about for debugging purposes. A similar technique is even used in the stdlib logging module, so that loggers can discover the source file name, line number, and function name without being explicitly passed that context. However, in normal use cases it would not usually be a sensible design decision for B.func to reach into stack frames when it actually needs to use A; it's cleaner and easier to pass along the information you need than to try and "reach back" to a caller.
You pass it to b.func() as an argument.
Do this by refactoring your code to work like
class A():
    def go(self):
        b = B(self)
        b.func()

class B():
    def __init__(self, a):
        self.a = a

    def func(self):
        # Use self.a
        ...
or
class A():
    def go(self):
        b = B()
        b.func(self)

class B():
    def func(self, a):
        # Use a directly here
        ...
I agree with Benjamin: pass it to b.func() as an argument and don't introspect it!
If your life really depends on it, then I think you can deduce the answer from this answer.
Assume you define a class, which has a method which does some complicated processing:
class A(object):
    def my_method(self):
        # Some complicated processing is done here
        return self
And now you want to use that method on some object from another class entirely. Like, you want to do A.my_method(7).
This is what you'd get: TypeError: unbound method my_method() must be called with A instance as first argument (got int instance instead).
Now, is there any possibility to hack things so you could call that method on 7? I'd want to avoid moving the function or rewriting it. (Note that the method's logic does depend on self.)
One note: I know that some people will want to say, "You're doing it wrong! You're abusing Python! You shouldn't do it!" So yes, I know, this is a terrible terrible thing I want to do. I'm asking if someone knows how to do it, not how to preach to me that I shouldn't do it.
Of course I wouldn't recommend doing this in real code, but yes, sure, you can reach inside of classes and use its methods as functions:
class A(object):
    def my_method(self):
        # Some complicated processing is done here
        return 'Hi'

print(A.__dict__['my_method'](7))
# Hi
You can't. The restriction has actually been lifted in Python 3000, but I presume you are not using that.
However, why can't you do something like:
def method_implementation(self, x, y):
    # do whatever
    pass

class A():
    def method(self, x, y):
        return method_implementation(self, x, y)
If you are really in the mood for Python abuse, write a descriptor class that implements the behavior. Something like:
class Hack:
    def __init__(self, fn):
        self.fn = fn

    def __get__(self, obj, cls):
        if obj is None:  # accessed on the class (statically)
            return self.fn
        else:
            def inner(*args, **kwargs):
                return self.fn(obj, *args, **kwargs)
            return inner
Note that this is completely untested, will probably break some corner cases, and is all around evil.
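For what it's worth, a usage sketch, assuming the Hack descriptor above is in scope and with process as a stand-in name for the real method:

def process(self):
    # stand-in for the "complicated processing"
    return self

class A(object):
    my_method = Hack(process)

a = A()
print(A.my_method(7))   # looked up on the class: obj is None, so 7 is passed as self -> prints 7
print(a.my_method())    # looked up on the instance: inner() passes a as self -> prints the A instance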
def some_method(self):
    # Some complicated processing is done here
    return self

class A(object):
    my_method = some_method

a = A()

print some_method
print a.my_method
print A.my_method
print A.my_method.im_func
print A.__dict__['my_method']
prints:
<function some_method at 0x719f0>
<bound method A.some_method of <__main__.A object at 0x757b0>>
<unbound method A.some_method>
<function some_method at 0x719f0>
<function some_method at 0x719f0>
It sounds like you're looking up a method on a class and getting an unbound method. An unbound method expects an object of the appropriate type as the first argument.
If you want to apply the function as a function, you've got to get a handle to the function version of it instead.
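For example, with the A above (Python 2 attribute names, matching the output shown; in Python 3 the attribute is __func__ and plain A.my_method already works):

A.my_method.im_func(7)        # the plain function; 7 plays the role of self
A.__dict__['my_method'](7)    # same function, fetched without triggering the descriptor machinery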
You could just put that method into a superclass of the two objects that need to call it, couldn't you? If it's so critical that you can't copy it, nor can you change it to not use self, that's the only other option I can see.
>>> class A():
...     me = 'i am A'
...
>>> class B():
...     me = 'i am B'
...
>>> def get_name(self):
...     print self.me
...
>>> A.name = get_name
>>> a = A()
>>> a.name()
i am A
>>>
>>> B.name = get_name
>>> b = B()
>>> b.name()
i am B
>>>
Why can't you do this:
class A(object):
    def my_method(self, arg=None):
        if arg is not None:
            # Do some complicated processing with both objects and return something
            pass
        else:
            # Some complicated processing is done here
            return self
In Python, functions are not required to be enclosed in classes. It sounds like what you need is a utility function, so just define it as such:
def my_function(object):
    # Some complicated processing is done here
    return object

my_function(7)
my_function("Seven")
As long as your processing uses methods and attributes available on all objects that you pass to my_function, everything will work fine through the magic of duck typing.
That's what's called a staticmethod:
class A(object):
    @staticmethod
    def my_method(a, b, c):
        return a, b, c
However, in staticmethods you do not get a reference to self.
If you'd like a reference to the class rather than the instance (an instance implies a reference to self), you can use a classmethod:
class A(object):
    classvar = "var"

    @classmethod
    def my_method(cls, a, b, c):
        print cls.classvar
        return a, b, c
But you'll only get access to class variables, not to instance variables (those typically created/defined inside the __init__ constructor).
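A small sketch of that limitation (written in Python 3 syntax; instvar is just an illustrative instance attribute):

class A(object):
    classvar = "var"

    def __init__(self):
        self.instvar = "inst"   # set on the instance, invisible through cls

    @classmethod
    def my_method(cls):
        print(cls.classvar)     # works: class attribute
        print(cls.instvar)      # AttributeError: type object 'A' has no attribute 'instvar'

A.my_method()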
If that's not good enough, then you will need to somehow pass a "bound" method or pass "self" into the method like so:
class A(object):
    def my_method(self):
        # use self and manipulate the object
        ...

inst = A()
A.my_method(inst)
As some people have already said, it's not a bad idea to just inherit one class from the other:
class A(object):
    ... methods ...

class B(A):
    def my_method(self):
        ... use self

newA = B()
Based on this answer about how __new__ and __init__ are supposed to work in Python, I wrote this code to dynamically define and create a new class and object.
class A(object):
    def __new__(cls):
        class C(cls, B):
            pass
        self = C()
        return self

    def foo(self):
        print 'foo'

class B(object):
    def bar(self):
        print 'bar'

a = A()
a.foo()
a.bar()
Basically, because the __new__ of A returns a dynamically created C that inherits from both A and B, it should have a bar attribute.
Why does C not have a bar attribute?
Resolve the infinite recursion:
class A(object):
    def __new__(cls):
        class C(cls, B):
            pass
        self = object.__new__(C)
        return self
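With that one-line change (keeping foo, bar, and B exactly as in the question), the original calls should now behave as intended:

a = A()
a.foo()   # prints: foo
a.bar()   # prints: bar  (a is really an instance of the dynamically created C, which inherits from both A and B)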
(Thanks to balpha for pointing out the actual question.)
Since there is no actual question in the question, I am going to take it literally:
What's wrong with doing it dynamically?
Well, it is practically unreadable, extremely opaque and non-obvious to the user of your code (that includes you in a month :P).
From my experience (quite limited, I must admit; unfortunately I don't have 20 years of programming under my belt), a need for such solutions indicates that the class structure is not well defined, meaning there's almost always a better, more readable, and less arcane way to do such things.
For example, if you really want to define base classes on the fly, you are better off using a factory function that will return appropriate classes according to your needs.
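For instance, a sketch of that factory-function idea (make_with_base is just an illustrative name, and it assumes A and B are ordinary classes defining foo and bar, without the custom __new__):

def make_with_base(extra_base):
    # Build a class combining A with the given extra base, then return an instance of it.
    class Mixed(A, extra_base):
        pass
    return Mixed()

a = make_with_base(B)
a.foo()   # from A
a.bar()   # from B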
Another take on the question:
What's wrong with doing it dynamically?
In your current implementation, it gives me a "maximum recursion depth exceeded" error. That happens because A.__new__ ends up calling itself indefinitely: the inner class C inherits from A (via cls) and from B, so instantiating C invokes A.__new__ again.
10: Inside A.__new__, "cls" is set to <class '__main__.A'>. Inside the constructor you define a class C, which inherits from cls (which is actually A) and another class B. Upon instantiating C, its __new__ is called. Since it doesn't define its own __new__, its base class's __new__ is called. The base class just happens to be A.
20: GOTO 10
If your question is "How can I accomplish this" – this works:
class A(object):
    @classmethod
    def get_with_B(cls):
        class C(B, cls):
            pass
        return C()

    def foo(self):
        print 'foo'

class B(object):
    def bar(self):
        print 'bar'

a = A.get_with_B()
a.foo()
a.bar()
If your question is "Why doesn't it work" – that's because you run into an infinite recursion when you call C(), which leads to A.__new__ being called, which again calls C() etc.