In Python 2 I could create a class-bound method using:
types.MethodType(func, None, cls)
However, in Python 3 MethodType no longer takes the third parameter.
How can I achieve the same behaviour in Python 3? Preferably in a way that is still valid in Python 2.7.
It seems that in Python 3 you don't need MethodType anymore to create class-bound methods (though you still need it to attach instance-bound methods). Thus, if you're going to do this:
class A:
    def __init__(self):
        pass
# end of class definition

# ...
# whoops, I forgot a method for A
def foo(self, *args):
    pass
You can later attach foo to A using just A.foo = foo. (In Python 2 you had to use A.foo = types.MethodType(foo, None, A) or the like.)
If you want to add foo to just a particular instance of A, you still use MethodType, though (in Python 3) only with two arguments:
a = A()
a.foo = types.MethodType(foo, a)
(In Python 2, you had to use a.foo = types.MethodType(foo, a, A) or the like.)
As far as I can see, if you want a strategy that works with both versions, you'd have to do something like this:
try:
    A.foo = types.MethodType(foo, None, A)
except TypeError:
    # too many arguments to MethodType, we must be in Python 3
    A.foo = foo
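If you prefer an explicit version check over catching TypeError, the same idea can be wrapped in a small helper. This is only a sketch, and attach_method is my own name, not a standard API:

```python
import sys
import types

def attach_method(cls, func, name=None):
    """Attach func to cls as a method on both Python 2 and Python 3."""
    name = name or func.__name__
    if sys.version_info[0] >= 3:
        # In Python 3, a plain function stored on a class already
        # becomes a bound method on attribute access.
        setattr(cls, name, func)
    else:
        setattr(cls, name, types.MethodType(func, None, cls))

class A(object):
    pass

def foo(self, x):
    return (self, x)

attach_method(A, foo)
a = A()
print(a.foo(42))  # self is passed in automatically
```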
Related
Is it possible to do something like this? (This syntax doesn't actually work)
class TestClass(object):
    def method(self):
        print 'one'
        def dynamically_defined_method(self):
            print 'two'

c = TestClass()
c.method()
c.dynamically_defined_method()  # this doesn't work
If it's possible, is it terrible programming practice? What I'm really trying to do is to have one of two variations of the same method be called (both with identical names and signatures), depending on the state of the instance.
Defining the function inside the method doesn't automatically make it visible to the instance; it's just a function scoped to live within the method.
To expose it, you'd be tempted to do:
self.dynamically_defined_method = dynamically_defined_method
Only that doesn't work:
TypeError: dynamically_defined_method() takes exactly 1 argument (0 given)
You have to mark the function as being a method (which we do by using MethodType). So the full code to make that happen looks like this:
from types import MethodType

class TestClass(object):
    def method(self):
        def dynamically_defined_method(self):
            print "two"
        self.dynamically_defined_method = MethodType(dynamically_defined_method, self)

c = TestClass()
c.method()
c.dynamically_defined_method()
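For the stated goal (two same-signature variants chosen by instance state), plain dispatch inside one method is usually cleaner than rebinding methods at run time. A sketch; all names here are illustrative:

```python
class Switchable(object):
    def __init__(self):
        self.fast_mode = False

    def _process_fast(self, data):
        # variant used when fast_mode is on: pass data through
        return data

    def _process_careful(self, data):
        # variant used when fast_mode is off: return a sorted copy
        return sorted(data)

    def process(self, data):
        # dispatch on instance state instead of rebinding methods
        if self.fast_mode:
            return self._process_fast(data)
        return self._process_careful(data)

s = Switchable()
print(s.process([3, 1, 2]))  # careful variant: [1, 2, 3]
s.fast_mode = True
print(s.process([3, 1, 2]))  # fast variant: [3, 1, 2]
```

Callers always invoke process; which implementation runs depends only on the instance's state.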
What are the advantages of using MethodType from the types module? You can use it to add methods to an object. But we can do that easily without it:
def func():
    print 1

class A:
    pass

obj = A()
obj.func = func
It works even if we delete func in the main scope by running del func.
Why would one want to use MethodType? Is it just a convention or a good programming habit?
In fact, the difference between adding methods dynamically at run time and your example is huge:
- in your case, you just attach a function to an object; you can call it, of course, but it is unbound: it has no relation to the object itself (i.e. you cannot use self inside the function)
- when added with MethodType, you create a bound method that behaves like a normal Python method of the object: it takes the object it belongs to as its first argument (conventionally named self), and you can access that object inside the function
This example shows the difference:
def func(obj):
    print 'I am called from', obj

class A:
    pass

a = A()
a.func = func
a.func()
This fails with a TypeError: func() takes exactly 1 argument (0 given),
whereas this code works as expected:
import types
a.func = types.MethodType(func, a) # or types.MethodType(func, a, A) for PY2
a.func()
shows I am called from <__main__.A instance at xxx>.
A common use of types.MethodType is checking whether some object is a method. For example:
>>> import types
>>> class A(object):
... def method(self):
... pass
...
>>> isinstance(A().method, types.MethodType)
True
>>> def nonmethod():
... pass
...
>>> isinstance(nonmethod, types.MethodType)
False
Note that in your example isinstance(obj.func, types.MethodType) returns False. If you had defined a method meth on class A, isinstance(obj.meth, types.MethodType) would return True.
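As an aside, inspect.ismethod performs the same check and often reads better. A small sketch contrasting the three cases:

```python
import inspect
import types

class A(object):
    def method(self):
        pass

def func(self):
    pass

a = A()
a.plain = func                       # plain function attribute, not bound
a.bound = types.MethodType(func, a)  # explicitly bound method

print(inspect.ismethod(a.method))  # True: ordinary bound method
print(inspect.ismethod(a.plain))   # False: just a function attribute
print(inspect.ismethod(a.bound))   # True: MethodType binding counts
```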
Given a class A, I can simply add an instance method a via
def a(self):
    pass

A.a = a
However, if I try to add another class B's instance method b, i.e. A.b = B.b, attempting to call A().b() yields a
TypeError: unbound method b() must be called with B instance as first argument (got nothing instead)
(while B().b() does fine). Indeed there is a difference between
A.a -> <unbound method A.a>
A.b -> <unbound method B.b> # should be A.b, not B.b
So,
How to fix this?
Why is it this way? It doesn't seem intuitive, but usually Guido has some good reasons...
Curiously enough, this no longer fails in Python3...
Let's define:

class A(object):
    pass

class B(object):
    def b(self):
        print 'self class: ' + self.__class__.__name__
When you are doing:
A.b = B.b
You are not attaching a function to A, but an unbound method. Consequently, Python just adds it as a standard attribute and does not re-bind it as a method of A. The solution is simple: attach the underlying function instead:
A.b = B.b.__func__
print A.b
# prints: <unbound method A.b>

a = A()
a.b()
# prints: self class: A
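In Python 3 the extra step isn't needed: B.b already evaluates to a plain function rather than an unbound method, so the assignment works directly. A quick sketch (Python 3 only):

```python
class A(object):
    pass

class B(object):
    def b(self):
        return self.__class__.__name__

# In Python 3, B.b is just a function, so this works without __func__.
A.b = B.b
print(A().b())  # prints: A
```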
I don't know all the differences between unbound methods and functions (only that the former wraps the latter), nor how all of this works internally, so I cannot explain the reason for it. My understanding is that a method object (bound or not) carries more information and functionality than a function, but needs one to execute.
I would agree that automating this (changing the class of an unbound method) could be a good choice, but I can also find reasons not to. It is thus surprising that Python 3 differs from Python 2; I'd like to find out the reason for this choice.
When you take the reference to a method on a class instance, the method is bound to that class instance.
B().b is equivalent to: lambda *args, **kwargs: b(<B instance>, *args, **kwargs)
I suspect you are getting a similarly (but not identically) wrapped reference when evaluating B.b. However, this is not the behavior I would have expected.
Interestingly:
A.a = lambda s: B.b(s)
A().a()
yields:
TypeError: unbound method b() must be called with B instance as first argument (got A instance instead)
This suggests that B.b evaluates to a wrapper around the actual function, and that the wrapper checks that self has the expected type. I don't know for sure, but this is probably about interpreter efficiency.
It's an interesting question though. I hope someone can chime in with a more definitive answer.
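For completeness, the wrapping discussed here is the descriptor protocol at work: attribute access on a function stored in a class triggers its __get__ method, which is what produces the (un)bound method object. A sketch using Python 3 semantics:

```python
class B(object):
    def b(self):
        return "bound to %s" % type(self).__name__

instance = B()

# Attribute access is sugar for invoking the function's __get__:
manually_bound = B.__dict__['b'].__get__(instance, B)
print(manually_bound())              # bound to B
print(manually_bound == instance.b)  # True: same function, same instance
```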
The following Python code executes normally without raising an exception:
class Foo:
    pass

class Foo:
    pass

def bar():
    pass

def bar():
    pass

print(Foo.__module__ + Foo.__name__)
Yet clearly, there are multiple instances of __main__.Foo and __main__.bar. Why does Python not raise an error when it encounters this namespace collision? And since it doesn't raise an error, what exactly is it doing? Is the first class __main__.Foo replaced by the second class __main__.Foo?
In Python everything is an object, an instance of some type. E.g. 1 is an instance of type int, and def foo(): pass creates an object foo which is an instance of type function (the same goes for classes: objects created by a class statement are instances of type type). Given this, there is no difference (at the level of the name-binding mechanism) between
class Foo:
    string = "foo1"

class Foo:
    string = "foo2"
and
a = 1
a = 2
BTW, a class definition may also be performed by calling the type function (yes, there is a type type and a built-in function type):
Foo = type('Foo', (), {'string': 'foo1'})
So classes and functions are not some different kind of data, although special syntax may be used for creating their instances.
See also related Data Model section.
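A quick sketch showing that a class statement and a type(...) call produce the same kind of rebindable object:

```python
# Build a class with the class statement...
class Foo:
    string = "foo1"

OldFoo = Foo  # keep a reference before rebinding the name

# ...then rebind the same name via type(); the first class object
# still exists, but the name Foo now points at the new one.
Foo = type('Foo', (), {'string': 'foo2'})

print(OldFoo.string)  # foo1
print(Foo.string)     # foo2
print(OldFoo is Foo)  # False: two distinct class objects
```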
The Foo class is effectively being re-defined further down the script (script is read by the interpreter from top to bottom).
class Foo:
    string = "foo1"

class Foo:
    string = "foo2"

f = Foo()
print f.string
prints "foo2"
The second definition replaces the first one, as expected if you think of classes as entries in the current namespace's dictionary:
>>> class Foo:
... def test1(self):
... print "test1"
...
>>> Foo
<class __main__.Foo at 0x7fe8c6943650>
>>> class Foo:
... def test2(self):
... print "test2"
...
>>> Foo
<class __main__.Foo at 0x7fe8c6943590>
>>> a = Foo()
>>> a.test1()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: Foo instance has no attribute 'test1'
>>> a.test2()
test2
>>>
Here you can clearly see that the "definition" of Foo changes (Foo points to different class objects in memory), and that the last one prevails.
Conceptually this is just rebinding a name. It's no different from this:
x = 1
x = 2
and I'm sure you would not want that to be an error.
In compiled and some interpreted languages there is a clear separation between declaration, definition and execution. But Python is simpler: there are just statements!
Python EXECUTES your script/program/module as soon as it is invoked. It may help to see def and class as "syntactic sugar"; e.g. class is a convenient wrapper around Foo = type("class-name", (bases), {attributes}).
So python executes:
class Foo #equivalent to: Foo = type("class-name", (bases), {attributes})
class Foo
def bar
def bar
print(Foo.__module__ + Foo.__name__)
which boils down to overwriting the names Foo and bar with the latest "declaration". So this works as intended from Python's point of view, but maybe not as you intended it! ;-)
This is also why developers coming from a different background often misunderstand:
def some_method(default_list=[]):
    ...

default_list is effectively a singleton here: every call to some_method uses the same default_list, because the list object is created only once, when the def statement is executed.
Python doesn't execute the function body at definition time; it only evaluates the signature/head (including the default values) when it reaches the def statement.
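A minimal demonstration of the shared default, together with the conventional None-sentinel fix:

```python
def append_item(item, target=[]):
    # the default list is created once, when def executes
    target.append(item)
    return target

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2]  (same list as the first call!)

def append_item_safe(item, target=None):
    # the conventional fix: use None as a sentinel and build a
    # fresh list inside the body on every call
    if target is None:
        target = []
    target.append(item)
    return target

print(append_item_safe(1))  # [1]
print(append_item_safe(2))  # [2]
```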
I am using PyPy to translate some Python script to C.
Say that I have a python class like this:
class A:
    def __init__(self):
        self.a = 0

    def func(self):
        pass
I notice that A.func is an unbound method rather than a function, so it cannot be translated by PyPy. So I changed the code slightly:
def func(self):
    pass

class A:
    def __init__(self):
        self.a = 0

A.func = func

def target(*args):
    return func, None
Now func seems to be translatable by PyPy. However, when I run translate.py --source test.py, an exception is raised: [translation:ERROR] TypeError: signature mismatch: func() takes exactly 2 arguments (1 given). I suspect it is because I haven't annotated the self argument yet. But self has type A, so how can I annotate a class?
Thank you for reading, and for your answer.
Essentially, PyPy's entry point is a function (usually accepting sys.argv as an argument). Whatever this function calls (creating objects, calling methods) will get annotated. There is no way to annotate a class, since PyPy's compiled code is not exported as an API but as a standalone program.
You might want to for example:
def f():
    a = A()
    a.func()
or even:
a = A()

def f():
    a.func()
in which case a is a prebuilt constant.
Are you wanting a staticmethod or a classmethod?