Edit: I'm using Python 3 (some people asked).
I think this is just a syntax question, but I want to be sure I'm not missing anything. Notice the difference in how Foo and Bar are implemented. They appear to achieve the same thing, and the output suggests these are just two ways of doing the same thing. Is that the case?
Code:
class X:
    def some_method(self):
        print("X.some_method called")

class Y:
    def some_method(self):
        print("Y.some_method called")

class Foo(X, Y):
    def some_method(self):
        X().some_method()
        Y().some_method()
        print("Foo.some_method called")

class Bar(X, Y):
    def some_method(self):
        X.some_method(self)
        Y.some_method(self)
        print("Bar.some_method called")

print("=== Fun with Foo ===")
foo_instance = Foo()
foo_instance.some_method()

print("=== Fun with Bar ===")
bar_instance = Bar()
bar_instance.some_method()
Output:
=== Fun with Foo ===
X.some_method called
Y.some_method called
Foo.some_method called
=== Fun with Bar ===
X.some_method called
Y.some_method called
Bar.some_method called
PS - Hopefully it goes without saying, but this is just an abstract example; let's not worry about why I'd want to call some_method on both ancestors. I'm just trying to understand the syntax and mechanics of the language here. Thanks all!
You should be using new-style classes. If this is Python 3, you are; if you are using Python 2, you should inherit from object (or some other new-style class).
The usual way to invoke ancestor methods is using super. Read about it in the standard docs and the many excellent articles on how it operates. Invoking the methods the way you are doing is never recommended because (a) it is fragile in the face of further inheritance, and (b) you increase the maintenance effort by hardcoding references to classes.
Update: Here is an example showing how to use super to achieve this: http://ideone.com/u3si2
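A minimal sketch of that approach, assuming each class cooperates by calling super exactly once (Baz here is a hypothetical stand-in for Foo/Bar):

class X:
    def some_method(self):
        print("X.some_method called")
        super().some_method()  # passes control to the next class in the MRO (here, Y)

class Y:
    def some_method(self):
        print("Y.some_method called")
        # End of the chain: object has no some_method, so no super() call here.

class Baz(X, Y):
    def some_method(self):
        super().some_method()  # visits X, then Y, without naming either
        print("Baz.some_method called")

Baz().some_method()
# X.some_method called
# Y.some_method called
# Baz.some_method called

Note that in this sketch X relies on sitting before Y in a cooperating hierarchy; calling X().some_method() on its own would fail at the super() call.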
Also look at: http://rhettinger.wordpress.com/2011/05/26/super-considered-super/
Update 2: Here's a little library for python 2 that adds a __class__ variable and a no-args super to every method to avoid hardcoding the current name: https://github.com/marcintustin/superfixer
They aren't the same. X() creates an object of class X. When you do X().some_method() you create a new object and then call the method on that object, not on self. X.some_method(self) is what you want, since that calls the inherited method on the same object.
You will see the difference if your method actually does anything to the self object. For instance, if you put self.blah = 8 into your method, then after X.some_method(self) the object you call it on will have the blah attribute set, but after X().some_method() it will not. (Instead, you will have created a new object, set blah on that, and then thrown away that new object without using it, leaving the original object untouched.)
Here is a simple example modifying your code:
class X:
    def some_method(self):
        print("X.some_method called on", self)

class Y:
    def some_method(self):
        print("Y.some_method called on", self)

class Foo(X, Y):
    def some_method(self):
        X().some_method()
        Y().some_method()
        print("Foo.some_method called on", self)

class Bar(X, Y):
    def some_method(self):
        X.some_method(self)
        Y.some_method(self)
        print("Bar.some_method called on", self)

>>> Foo().some_method()
X.some_method called on <__main__.X object at 0x0142f3c8>
Y.some_method called on <__main__.Y object at 0x0142f3c8>
Foo.some_method called on <__main__.Foo object at 0x0142f3a0>
>>> Bar().some_method()
X.some_method called on <__main__.Bar object at 0x0142f3c8>
Y.some_method called on <__main__.Bar object at 0x0142f3c8>
Bar.some_method called on <__main__.Bar object at 0x0142f3c8>
Note that when I use Foo, the objects printed are not the same; one is an X instance, one is a Y instance, and the last is the original Foo instance that I called the method on. When Bar is used, it is the same object in each method call.
(You can also use super in some cases to avoid naming the base classes explicitly; e.g., super(Foo, self).some_method() or in Python 3 just super().some_method(). However, if you need to directly call inherited methods from two base classes, super might not be a good fit. It is generally aimed at cases where each method calls super just once, passing control to the next version of the method in the inheritance chain, which will then pass it along to the next, and so on.)
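To see why a single super() call can't hit both bases directly, note that it reaches only the next class in the MRO. A small illustrative snippet (Demo is a hypothetical name; these X and Y deliberately do not call super themselves):

class X:
    def some_method(self):
        print("X.some_method called")

class Y:
    def some_method(self):
        print("Y.some_method called")

class Demo(X, Y):
    def some_method(self):
        super().some_method()  # resolves to X.some_method only
        print("Demo.some_method called")

print(Demo.__mro__)
# (<class '__main__.Demo'>, <class '__main__.X'>, <class '__main__.Y'>, <class 'object'>)
Demo().some_method()
# X.some_method called
# Demo.some_method called

Y.some_method never runs here, because X does not pass the call along.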
Related
If I have the following class.
class Foo:
    @staticmethod
    def _bar():
        # do something
        print("static method")

    def instancemethod(self):
        Foo._bar()   # method 1
        self._bar()  # method 2
In this case, is method 1 a preferred way of calling staticmethod _bar() or method 2 in the Python world?
Write the code that expresses what you want to do. If you want to call this method right here:
class Foo:
    @staticmethod
    def _bar():  # <-- this one
        # do something
        print("static method")
then specify that particular method:
Foo._bar()
If you want to call whatever self._bar resolves to, meaning you've actually decided that it makes sense to override it and made sure your code still behaves sensibly when that method is overridden, then specify self._bar:
self._bar()
Most likely, this method isn't designed to be overridden, and the code that uses it isn't designed to anticipate overriding it, so you probably want Foo._bar().
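A small sketch of that distinction (Sub and the second _bar are hypothetical, added to show what overriding changes):

class Foo:
    @staticmethod
    def _bar():
        return "Foo._bar"

    def via_class(self):
        return Foo._bar()   # always Foo's _bar, regardless of subclassing

    def via_self(self):
        return self._bar()  # whatever _bar resolves to on this instance

class Sub(Foo):
    @staticmethod
    def _bar():
        return "Sub._bar"

s = Sub()
print(s.via_class())  # Foo._bar
print(s.via_self())   # Sub._bar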
An object is supposed to be callable by defining __call__. A class is supposed to be an object… yet there seems to be an exception here. That exception is what I'm failing to formally pin down, hence this question.
Let A be a simple class:
class A(object):
    def call(*args):
        return "In `call`"

    def __call__(*args):
        return "In `__call__`"
The first method is purposely named call, to make clear that its purpose is comparison with __call__.
Let's instantiate it and forget about the expression it implies:
a = A() # Think of it as `a = magic` and forget about `A()`
Now let's see what these give:
print(A.call())
print(a.call())
print(A())
print(a())
Results in:
In `call`
In `call`
<__main__.A object at 0xNNNNNNNN>
In `__call__`
The output (the third statement not running __call__) does not come as a surprise, but then I think of how everywhere it is said that “Python classes are objects”…
These more explicit forms, however, do run __call__:
print(A.__call__())
print(a.__call__())
In `__call__`
In `__call__`
All of this is just to show how A() may finally look strange.
There are exceptions to Python's rules, but the documentation about object.__call__ does not say a lot about it… no more than this:
3.3.5. Emulating callable objects
object.__call__(self[, args...])
Called when the instance is “called” as a function; […]
But how does Python tell that something is “called as a function”, and decide whether or not to honour the object.__call__ rule?
This could be a matter of type, but even type has object as its base class.
Where can I learn more (and formally) about it?
By the way, is there any difference here between Python 2 and Python 3?
----- %< ----- edit ----- >% -----
Conclusions and other experiments after one answer and one comment
Update #1
After @Veedrac's answer and @chepner's comment, I came up with this other test, which complements both:
class M(type):
    def __call__(*args):
        return "In `M.__call__`"

class A(object, metaclass=M):
    def call(*args):
        return "In `call`"

    def __call__(*args):
        return "In `A.__call__`"

print(A())
The result is:
In `M.__call__`
So it seems it's the metaclass which drives the “call” operation. If I understand correctly, the metaclass matters not only for the class itself, but also for instances of the class.
Update #2
Another relevant test, which shows it is not an attribute of the object that matters, but an attribute of the object's type:
class A(object):
    def __call__(*args):
        return "In `A.__call__`"

def call2(*args):
    return "In `call2`"

a = A()
print(a())
As expected, it prints:
In `A.__call__`
Now this:
a.__call__ = call2
print(a())

It prints:

In `A.__call__`
The same as before the attribute was assigned: it does not print In `call2`, it still prints In `A.__call__`. That's important to note, and it also explains why, earlier, it was the metaclass's __call__ that was invoked (keep in mind the metaclass is the type of the class object). The __call__ used when an object is called as a function comes not from the object itself, but from its type.
x(*args, **kwargs) is the same as type(x).__call__(x, *args, **kwargs).
So you have
>>> type(A).__call__(A)
<__main__.A object at 0x7f4d88245b50>
and it all makes sense.
chepner points out in the comments that type(A) == type. This is kind of weird, because type(A)(A) just gives type again! But remember that we're instead using type(A).__call__(A), which is not the same.
So this resolves to type.__call__(A). This is the constructor function for classes, which builds the data structures and does all the construction magic.
The same is true of most dunder (double underscore) methods, such as __eq__. This is partially an optimisation in those cases.
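The same bypass is easy to demonstrate with another dunder; len() consults type(a).__len__ and ignores an instance attribute. A small sketch:

class A(object):
    def __len__(self):
        return 3

a = A()
a.__len__ = lambda: 99   # shadowing on the instance has no effect on len()
print(len(a))            # 3  -- looked up on type(a)
print(a.__len__())       # 99 -- ordinary attribute access does see it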
In Python 3.x, super() can be called without arguments:
class A(object):
    def x(self):
        print("Hey now")

class B(A):
    def x(self):
        super().x()
>>> B().x()
Hey now
In order to make this work, some compile-time magic is performed, one consequence of which is that the following code (which rebinds super to super_) fails:
super_ = super

class A(object):
    def x(self):
        print("No flipping")

class B(A):
    def x(self):
        super_().x()
>>> B().x()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in x
RuntimeError: super(): __class__ cell not found
Why is super() unable to resolve the superclass at runtime without assistance from the compiler? Are there practical situations in which this behaviour, or the underlying reason for it, could bite an unwary programmer?
... and, as a side question: are there any other examples in Python of functions, methods etc. which can be broken by rebinding them to a different name?
The new magic super() behaviour was added to avoid violating the D.R.Y. (Don't Repeat Yourself) principle, see PEP 3135. Having to explicitly name the class by referencing it as a global is also prone to the same rebinding issues you discovered with super() itself:
class Foo(Bar):
    def baz(self):
        return super(Foo, self).baz() + 42

Spam = Foo
Foo = something_else()

Spam().baz()  # liable to blow up
The same applies to using class decorators where the decorator returns a new object, which rebinds the class name:
@class_decorator_returning_new_class
class Foo(Bar):
    def baz(self):
        # Now `Foo` is a *different class*
        return super(Foo, self).baz() + 42
The magic super() __class__ cell sidesteps these issues nicely by giving you access to the original class object.
The PEP was kicked off by Guido, who initially envisioned super becoming a keyword, and the idea of using a cell to look up the current class was also his. Certainly, the idea to make it a keyword was part of the first draft of the PEP.
However, it was in fact Guido himself who then stepped away from the keyword idea as 'too magical', proposing the current implementation instead. He anticipated that using a different name for super() could be a problem:
My patch uses an intermediate solution: it assumes you need __class__ whenever you use a variable named 'super'. Thus, if you (globally) rename super to supper and use supper but not super, it won't work without arguments (but it will still work if you pass it either __class__ or the actual class object); if you have an unrelated variable named super, things will work but the method will use the slightly slower call path used for cell variables.
So, in the end, it was Guido himself that proclaimed that using a super keyword did not feel right, and that providing a magic __class__ cell was an acceptable compromise.
I agree that the magic, implicit behaviour of the implementation is somewhat surprising, but super() is one of the most mis-applied functions in the language. Just take a look at all the misapplied super(type(self), self) or super(self.__class__, self) invocations found on the Internet; if any of that code was ever called from a derived class you'd end up with an infinite recursion exception. At the very least the simplified super() call, without arguments, avoids that problem.
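A sketch of that recursion trap (Base/Middle/Derived are hypothetical names):

class Base(object):
    def greet(self):
        return "hello"

class Middle(Base):
    def greet(self):
        # Wrong: when called on a Derived instance, type(self) is Derived,
        # so super(Derived, self) resolves to Middle.greet again, forever.
        return super(type(self), self).greet()

class Derived(Middle):
    pass

print(Middle().greet())   # fine: "hello"
Derived().greet()         # RecursionError (RuntimeError on Python 2)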
As for the renamed super_; just reference __class__ in your method as well and it'll work again. The cell is created if you reference either the super or __class__ names in your method:
>>> super_ = super
>>> class A(object):
... def x(self):
... print("No flipping")
...
>>> class B(A):
... def x(self):
... __class__ # just referencing it is enough
... super_().x()
...
>>> B().x()
No flipping
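The compiler-provided cell is visible through ordinary function introspection (continuing from the B defined above; addresses elided):

>>> B.x.__closure__
(<cell at 0x...: type object at 0x...>,)
>>> B.x.__closure__[0].cell_contents
<class '__main__.B'>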
I have a class with many methods. How can I modify my methods so that they can also be accessed directly as functions, without creating an object of that class? Is that possible?
The methods will be "unbound" (meaning, essentially, that they have no self to work with). If the functions do not operate upon self, you can turn them into static methods (which do not take a self first argument) and then assign them to variables to be used like functions.
Like so:
class MyClass(object):
    @staticmethod
    def myfunc():
        return "It works!"

myfunc = MyClass.myfunc
print(myfunc())  # prints "It works!"
Essentially, you need to ask yourself "What data does my method need to (er) function?" Depending on your answer, you can use @staticmethod or @classmethod, or you may find that you do in fact need self, in which case you will need to create an object before trying to use its methods.
That final case would look something like:
myobj = MyClass()
del MyClass # This is a singleton class
myfunc = myobj.myfunc
All of that aside, if you find that all of your methods are actually staticmethods, then it's better style to refactor them out of the class into plain-old functions, which they really are already. You may have learned this "class as namespace" style from Java, but that isn't correct in Python. Python namespaces are represented by modules.
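A sketch of those situations side by side (Widget and its methods are illustrative names):

# Needs no data at all: idiomatically, a plain module-level function.
def plain():
    return "no state needed"

class Widget(object):
    count = 0

    @staticmethod
    def helper():            # needs neither the class nor an instance
        return "stateless"

    @classmethod
    def make_default(cls):   # needs the class, e.g. as a factory
        cls.count += 1
        return cls()

    def describe(self):      # needs instance state, so self is required
        return "one of %d widgets" % Widget.count

w = Widget.make_default()
print(Widget.helper(), w.describe())  # stateless one of 1 widgets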
Unbound Methods
To create an unbound method (i.e., one whose first parameter isn't self), you can decorate the method with the @staticmethod built-in decorator. If decorators aren't making sense yet, check out the Wiki, this simple explanation, decorators as syntactic sugar, and learn how to write a good one.
>>> class foo(object):
...     @staticmethod
...     def bar(blah_text):
...         print("Unbound method `bar` of Class `foo`")
...         return blah_text
...
>>> foobar = foo.bar
>>> foobar("We are the Knights who say 'Ni'!")
Unbound method `bar` of Class `foo`
"We are the Knights who say 'Ni'!"
Bound Methods
These methods are not technically 'bound' yet, but are meant to be bound when called. You just have to take a reference to them and, voilà, you have a reference to that method. Then you just have to pass a valid instance of that class:
>>> class foo:
...     def __init__(self, bar_value='bar'):
...         self.bar_value = bar_value
...     def bar(self, blah_text):
...         return self.bar_value + blah_text
...
>>> bar = foo.bar
>>> bar(foo('We are the Knights who say '), "'Ni'")
"We are the Knights who say 'Ni'"
Edit: As pointed out in the comments, it seems my usage of 'binding' is wrong. Could somebody with knowledge of it edit/correct my post?
You can call the function with the class itself passed as the self argument, if you do not want to lose self:
class something:
    def test(self):
        print('Hello')

something.test(something)
# prints "Hello"
I want to write a decorator that acts differently depending on whether it is applied to a function or to a method.
def some_decorator(func):
    if the_magic_happens_here(func):  # <---- Point of interest
        print('Yay, found a method ^_^ (unbound yet)')
    else:
        print('Meh, just an ordinary function :/')
    return func

class MyClass(object):
    @some_decorator
    def method(self):
        pass

@some_decorator
def function():
    pass
I tried inspect.ismethod(), inspect.ismethoddescriptor() and inspect.isfunction() but no luck. The problem is that a method actually is neither a bound nor an unbound method but an ordinary function as long as it is accessed from within the class body.
What I really want to do is to delay the actions of the decorator to the point the class is actually instantiated because I need the methods to be callable in their instance scope. For this, I want to mark methods with an attribute and later search for these attributes when the .__new__() method of MyClass is called. The classes for which this decorator should work are required to inherit from a class that is under my control. You can use that fact for your solution.
In the case of a normal function the delay is not necessary and the decorator should take action immediately. That is why I want to differentiate these two cases.
I would rely on the convention that functions that will become methods have a first argument named self, and other functions don't. Fragile, but then, there's no really solid way.
So (pseudocode as I have comments in lieu of what you want to do in either case...):
import inspect
import functools

def decorator(f):
    argspec = inspect.getargspec(f)  # inspect.getfullargspec in Python 3
    if argspec.args and argspec.args[0] == 'self':
        # looks like a (future) method...
    else:
        # looks like a "real" function
    @functools.wraps(f)
    def wrapper  # etc etc
One way to make it a bit more solid, as you say all classes involved inherit from a class under your control, is to have that class provide a metaclass (which will also of course be inherited by said classes) which checks things at the end of the class body. Make the wrapped function accessible e.g. by wrapper._f = f and the metaclass's __init__ can check that all wrapped methods did indeed have self as the first argument.
Unfortunately there's no easy way to check that other functions (non-future-methods) being wrapped didn't have such a first argument, since you're not in control of the environment in that case. The decorator might check for "top-level" functions (ones whose def is a top-level statement in their module), via the f_globals (globals dict, i.e., module's dict) and f_name attributes of the function -- if the function's such a global presumably it won't later be assigned as an attribute of the class (thereby becoming a future-method anyway;-) so the self named first arg, if there, can be diagnosed as wrong and warned about (while still treating the function as a real function;-).
One alternative would be to do the decoration in the decorator itself under the hypothesis of a real function, but also make available the original function object as wrapper._f. Then, the metaclass's __init__ can re-do the decoration for all functions in the class body that it sees have been marked this way. This approach is much more solid than the convention-relying one I just sketched, even with the extra checks. Still, something like
class Foo(Bar): ...  # no decorations

@decorator
def f(*a, **k): ...

Foo.f = f  # "a killer"... function becomes method!
would still be problematic -- you could try intercepting this with a __setattr__ in your metaclass (but then other assignments to class attributes after the class statement can become problematic).
The more the user's code has freedom to do funky things (and Python generally leaves the programmer a lot of such freedom), the harder time your "framework-y" code has keeping things under tight control instead, of course;-).
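A minimal sketch of the mark-and-redecorate idea described above (Python 3 spelling; every name here is illustrative, not code from the answer):

import functools

def method_aware(f):
    # Decoration time: wrap as if f were a plain function, but keep the
    # original around so a metaclass can redo the work for class bodies.
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        print('wrapped as a plain function')
        return f(*args, **kwargs)
    wrapper._f = f
    return wrapper

class Meta(type):
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        for attr, value in namespace.items():
            original = getattr(value, '_f', None)
            if original is not None:
                # Class-creation time: we now know it's a method, so re-wrap.
                @functools.wraps(original)
                def method_wrapper(self, *args, _f=original, **kwargs):
                    print('wrapped as a method')
                    return _f(self, *args, **kwargs)
                setattr(cls, attr, method_wrapper)

class Base(object, metaclass=Meta):
    pass

class MyClass(Base):
    @method_aware
    def method(self):
        return 'hi'

print(MyClass().method())  # prints 'wrapped as a method', then 'hi'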
From Python 3.3 onwards by using PEP 3155:
def some_decorator(func):
    if func.__name__ != func.__qualname__:
        print('Yay, found a method ^_^ (unbound yet)')
    else:
        print('Meh, just an ordinary function :/')
    return func
A method x of class A will have a __qualname__ that is A.x while a function x will have a __qualname__ of x.
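A quick check of that in the REPL:

>>> class A:
...     def x(self):
...         pass
...
>>> def x():
...     pass
...
>>> A.x.__qualname__
'A.x'
>>> x.__qualname__
'x'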
Do you need to have the magic happen where you choose which wrapper to return, or can you defer the magic until the function is actually called?
You could always try a parameter to your decorator to indicate which of the two wrappers it should use, like:
from functools import wraps

def some_decorator(clams):
    def _mydecor(func):
        @wraps(func)
        def wrapping(*args, **kwargs):
            ...  # plain-function behaviour
        return wrapping

    def _myclassdecor(func):
        @wraps(func)
        def wrapping(*args, **kwargs):
            ...  # method behaviour
        return wrapping

    return _mydecor if clams else _myclassdecor
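Usage would then look something like this (the flag semantics are whatever you define; here clams=True means "plain function"):

class MyClass(object):
    @some_decorator(False)   # method flavour
    def method(self):
        pass

@some_decorator(True)        # plain-function flavour
def function():
    pass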
The other thing that I might suggest is to create a metaclass and define the __init__ method in the metaclass to look for methods decorated with your decorator and revise them accordingly, like Alex hinted at. Use this metaclass with your base class, and since all the classes that will use the decorator inherit from the base class, they'll also get the metaclass type and use its __init__ as well.
You just need to check whether the function being decorated has an im_func attribute (note: im_func exists only in Python 2). If it does, then it is a method. If it doesn't, then it is a function.
Note that the code sample below does the detection at call time but you can do it at decoration time as well. Just move the hasattr check to the outer decorator generator.
Python 2.6.4 (r264:75706, Dec 7 2009, 18:45:15)
[GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> def deco(f):
...     def _wrapper(*args, **kwargs):
...         if hasattr(f, 'im_func'):
...             print 'method'
...         else:
...             print 'function'
...     return _wrapper
...
>>> deco(lambda x: None)()
function
>>> def f(x):
...     return x + 5
...
>>> deco(f)()
function
>>> class A:
...     def f(self, x):
...         return x + 5
...
>>> a = A()
>>> deco(a.f)()
method
>>> deco(A.f)()
method
>>>
Edit
Oh snap! I got it totally wrong. I really should have read Alex's post more thoroughly.
>>> class B:
...     @deco
...     def f(self, x):
...         return x + 5
...
>>> b = B()
>>> b.f()
function