The following Python code executes normally without raising an exception:
class Foo:
    pass

class Foo:
    pass

def bar():
    pass

def bar():
    pass

print(Foo.__module__ + Foo.__name__)
Yet clearly, there are multiple instances of __main__.Foo and __main__.bar. Why does Python not raise an error when it encounters this namespace collision? And since it doesn't raise an error, what exactly is it doing? Is the first class __main__.Foo replaced by the second class __main__.Foo?
In Python everything is an object, i.e. an instance of some type. E.g. 1 is an instance of type int, and def foo(): pass creates an object foo which is an instance of type function (the same goes for classes: objects created by a class statement are instances of type type). Given this, there is no difference (at the level of the name binding mechanism) between
class Foo:
    string = "foo1"

class Foo:
    string = "foo2"
and
a = 1
a = 2
BTW, a class definition may also be performed by calling the type function (yes, type is both a type and a built-in callable):
Foo = type('Foo', (), {'string': 'foo1'})
So classes and functions are not some different kind of data, although special syntax may be used for creating their instances.
See also related Data Model section.
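As a small sketch of that equivalence (Python 3 here; FooViaType is just an illustrative name, not something from the question):
class Foo:
    string = "foo1"

# the same class built by calling type() directly
FooViaType = type('Foo', (), {'string': 'foo1'})

print(Foo.string)                     # foo1
print(FooViaType.string)              # foo1
print(type(Foo) is type(FooViaType))  # True: both classes are instances of type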
The Foo class is effectively being re-defined further down the script (the script is read by the interpreter from top to bottom).
class Foo:
    string = "foo1"

class Foo:
    string = "foo2"

f = Foo()
print(f.string)
prints "foo2"
The second definition replaces the first one, as expected if you think of classes as entries in the "types dictionary" of the current namespace:
>>> class Foo:
...     def test1(self):
...         print "test1"
...
>>> Foo
<class __main__.Foo at 0x7fe8c6943650>
>>> class Foo:
...     def test2(self):
...         print "test2"
...
>>> Foo
<class __main__.Foo at 0x7fe8c6943590>
>>> a = Foo()
>>> a.test1()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: Foo instance has no attribute 'test1'
>>> a.test2()
test2
>>>
Here you can clearly see that the "definition" of Foo changes (Foo points to different classes in memory), and that it's the last one that prevails.
Conceptually this is just rebinding a name. It's no different from this:
x = 1
x = 2
and I'm sure you would not want that to be an error.
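To make the class case concrete, here is a small sketch (Python 3; the names old_instance and OldFoo are just for illustration) showing that only the name is rebound and that the first class object survives as long as something still references it:
class Foo:
    string = "foo1"

old_instance = Foo()   # created from the first Foo
OldFoo = Foo           # keep an extra reference to the first class object

class Foo:             # rebinds the name Foo to a brand new class object
    string = "foo2"

print(Foo.string)            # foo2
print(old_instance.string)   # foo1 - the old class object is still alive
print(OldFoo is Foo)         # False - two distinct class objects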
In compiled and some interpreted languages there is a clear separation between definition, declaration and execution. But in Python it's simpler: there are just statements!
Python EXECUTES your script/program/module as soon as it is invoked. It may help to see def and class as "syntactic sugar". E.g. class is a convenient wrapper around Foo = type("class-name", (bases), {attributes}).
So Python executes:
class Foo    # equivalent to: Foo = type("class-name", (bases), {attributes})
class Foo
def bar
def bar
print(Foo.__module__ + Foo.__name__)
which boils down to overwriting the names Foo and bar with the latest "declaration". So this just works as intended from a Python point of view - but maybe not as you intended it! ;-)
So it's also a typical error for developers with a different background to misunderstand:
def some_method(default_list=[]):
    ...
default_list is a singleton here. Every call to some_method uses the same default_list, because the list object is created only once, when the def statement is executed. Python doesn't execute the function body at that point; it only evaluates the signature/head, including the default values.
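A short runnable sketch of that pitfall (Python 3 print syntax; the extra value parameter is just for illustration):
def some_method(value, default_list=[]):   # the [] is evaluated once, at def time
    default_list.append(value)
    return default_list

print(some_method(1))   # [1]
print(some_method(2))   # [1, 2] - the same list object is reused across calls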
Related
In Python 2 I could create a class-bound method using:
types.MethodType(func, None, cls)
However, in Python 3 MethodType does not take a third parameter.
How can I achieve the same behaviour in Python 3? Preferably in a way that is still valid in Python 2.7.
It seems that in Python 3 you don't need to use MethodType anymore to create class-bound methods (though you still need it to assign instance-bound methods). Thus if you're going to do this:
class A:
    def __init__(self):
        pass

# end of class definition
# ...
# whoops I forgot a method for A
def foo(self, *args):
    pass
You can later attach foo to A using just A.foo = foo. (In Python 2 you had to use A.foo = types.MethodType(foo, None, A) or such.)
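For instance, continuing the example above (a minimal sketch, assuming Python 3):
A.foo = foo    # in Python 3 a plain function stored on the class works as a method

a = A()
a.foo(1, 2)    # self is passed automatically; equivalent to foo(a, 1, 2)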
If you want to add foo just to a particular instance of A, you would still use MethodType, although (in Python 3) only with two arguments:
a = A()
a.foo = types.MethodType(foo, a)
(In Python 2, you had to use a.foo = types.MethodType(foo, a, A) or such.)
As far as I can see, if you want a strategy that works with both versions, you'd have to do something like this:
try:
    A.foo = types.MethodType(foo, None, A)
except TypeError:
    # too many arguments to MethodType, we must be in Python 3
    A.foo = foo
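An equivalent way to branch (a sketch of my own, not from the answer above) is to test the interpreter version explicitly instead of catching the TypeError:
import sys
import types

if sys.version_info[0] < 3:
    A.foo = types.MethodType(foo, None, A)   # Python 2: build an unbound method
else:
    A.foo = foo                              # Python 3: the plain function is enough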
What are the advantages of using MethodType from the types module? You can use it to add methods to an object. But we can do that easily without it:
def func():
    print 1

class A:
    pass

obj = A()
obj.func = func
It works even if we delete func in the main scope by running del func.
Why would one want to use MethodType? Is it just a convention or a good programming habit?
In fact the difference between adding methods dynamically at run time and
your example is huge:
in your case, you just attach a function to an object; you can call it of course, but it is unbound: it has no relation to the object itself (i.e. you cannot use self inside the function)
when added with MethodType, you create a bound method that behaves like a normal Python method of the object: it has to take the object it belongs to as its first argument (normally called self), and you can access that object inside the function
This example shows the difference:
def func(obj):
    print 'I am called from', obj

class A:
    pass

a = A()
a.func = func
a.func()
This fails with a TypeError: func() takes exactly 1 argument (0 given),
whereas this code works as expected:
import types
a.func = types.MethodType(func, a) # or types.MethodType(func, a, A) for PY2
a.func()
shows I am called from <__main__.A instance at xxx>.
A common use of types.MethodType is checking whether some object is a method. For example:
>>> import types
>>> class A(object):
...     def method(self):
...         pass
...
>>> isinstance(A().method, types.MethodType)
True
>>> def nonmethod():
...     pass
...
>>> isinstance(nonmethod, types.MethodType)
False
Note that in your example isinstance(obj.func, types.MethodType) returns False. Imagine you have defined a method meth in class A. isinstance(obj.meth, types.MethodType) would return True.
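A short sketch of that contrast, reusing names similar to the question's (Python 3 shown; the same checks hold in Python 2 for a new-style class):
import types

def func():
    pass

class A(object):
    def meth(self):
        pass

obj = A()
obj.func = func    # a plain function stored on the instance

print(isinstance(obj.func, types.MethodType))   # False - still just a function
print(isinstance(obj.meth, types.MethodType))   # True - obj.meth is a bound method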
In Python (2 and 3) we can assign attributes to a function:
>>> class A(object):
...     def foo(self):
...         """ This is obviously just an example """
...         return "FOO{}!!".format(self.foo.bar)
...     foo.bar = 123
...
>>> a = A()
>>> a.foo()
'FOO123!!'
And that's cool.
But why cannot we change foo.bar at a later time? For example, in the constructor, like so:
>>> class A(object):
...     def __init__(self, *args, **kwargs):
...         super(A, self).__init__(*args, **kwargs)
...         print(self.foo.bar)
...         self.foo.bar = 456  # KABOOM!
...     def foo(self):
...         """ This is obviously just an example """
...         return "FOO{}!!".format(self.foo.bar)
...     foo.bar = 123
...
>>> a = A()
123
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in __init__
AttributeError: 'instancemethod' object has no attribute 'bar'
Python claims there is no bar even though it printed it fine on just the previous line.
Same error happens if we try to change it directly on the class:
>>> A.foo.bar
123
>>> A.foo.bar = 345 # KABOOM!
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'instancemethod' object has no attribute 'bar'
What's happening here, i.e. why are we seeing this behaviour?
Is there a way to set attributes on a function after class creation?
(I'm aware of multiple alternatives, but I'm explicitly wondering about attributes on methods here, or possibly a broader issue.)
Motivation: Django makes use of the possibility to set attributes on methods, e.g:
class MyModelAdmin(ModelAdmin):
    ...

    def custom_admin_column(self, obj):
        return obj.something()
    custom_admin_column.admin_order_field = 'relation__field__span'
    custom_admin_column.allow_tags = True
Setting foo.bar inside the class body works because foo is the actual foo function. However, when you do
self.foo.bar = 456
self.foo isn't that function. self.foo is an instance method object, created on demand when you access it. You can't set attributes on it for several reasons:
If those attributes are stored on the foo function, then assigning to a.foo.bar has an unexpected effect on b.foo.bar, contrary to all the usual expectations about attribute assignment.
If those attributes are stored on the self.foo instance method object, they won't show up the next time you access self.foo, because you'll get a new instance method object next time.
If those attributes are stored on the self.foo instance method object and you change the rules so self.foo is always the same object, then that massively bloats every object in Python to store a bunch of instance method objects you almost never need.
If those attributes are stored in self.__dict__, what about objects that don't have a __dict__? Also, you'd need to come up with some sort of name mangling rule, or store non-string keys in self.__dict__, both of which have their own problems.
If you want to set attributes on the foo function after the class definition is done, you can do that with A.__dict__['foo'].bar = 456. (I've used A.__dict__ to bypass the issue of whether A.foo is the function or an unbound method object, which depends on your Python version. If A inherits foo, you'll have to either deal with that issue or access the dict of the class it inherits foo from.)
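For example, reusing the first A class from the question (the one without the failing __init__), a minimal sketch:
A.__dict__['foo'].bar = 456   # reach the underlying function object directly
a = A()
print(a.foo())                # FOO456!! - the function attribute really was updated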
I always hear this statement in Python (for topics such as decorators, etc. when you are passing functions, etc.) but have never really seen an elaboration on this.
For example, is it possible to create a class c that has only one abstract method and that is called with a set of opened and closed brackets?
i.e.
class c:
    # abstractmethod
    def method_to_be_called_by():
        ...
so you can have
c(whatever parameters are required)
I could be way off the mark with my understanding here, I was just curious about what people meant by this.
You are looking for the __call__ method. Function objects have that method:
>>> def foo(): pass
...
>>> foo.__call__
<method-wrapper '__call__' of function object at 0x106aafd70>
Not that the Python interpreter loop actually makes use of that method when encountering a Python function object; optimisations in the implementation jump straight to the contained bytecode in most cases.
But you can use that on your own custom class:
class Callable(object):
    def __init__(self, name):
        self.name = name

    def __call__(self, greeting):
        return '{}, {}!'.format(greeting, self.name)
Demo:
>>> class Callable(object):
...     def __init__(self, name):
...         self.name = name
...     def __call__(self, greeting):
...         return '{}, {}!'.format(greeting, self.name)
...
>>> Callable('World')('Hello')
'Hello, World!'
Python creates function objects for you when you use a def statement, or you use a lambda expression:
>>> def foo(): pass
...
>>> foo
<function foo at 0x106aafd70>
>>> lambda: None
<function <lambda> at 0x106d90668>
You can compare this to creating a string or an integer or a list using literal syntax:
listobject = [1, 'two']
The above creates 3 objects without ever calling a type; Python did that all for you based on the syntax used. The same applies to functions.
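To make the three objects visible (a small sketch):
listobject = [1, 'two']

print(type(listobject))      # the list type
print(type(listobject[0]))   # the int type
print(type(listobject[1]))   # the str type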
Creating one yourself can be a little more complex; you need to have a code object and a reference to a global namespace, at the very least:
>>> function_type = type(lambda: None)
>>> function_type
<type 'function'>
>>> function_type(foo.__code__, globals(), 'bar')
<function bar at 0x106d906e0>
Here I created a function object by reusing the function type, taking the code object from the foo function; the function type is not a built-in name but the type really does exist and can be obtained by calling type() on an existing function instance.
I also passed in the global namespace of my interpreter, and a name; the latter is an optional argument; the name is otherwise taken from the code object.
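The result is an ordinary, callable function object; continuing the session above (a small sketch):
>>> bar = function_type(foo.__code__, globals(), 'bar')
>>> bar.__name__
'bar'
>>> bar() is None    # foo's body is just `pass`, so the new function returns None
True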
One simple way to see this is to create a function in the Python interpreter, e.g. def bar(x): return x + 1, and then use dir(bar) to see the various magic attributes, including __class__.
Yes, Python functions are full objects.
For another approach: objects behave like functions if they have a magic __call__() method.
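One way to test for that at run time is the built-in callable(); a small sketch (the Greeter class is just an illustration):
class Greeter(object):
    def __call__(self):
        return 'hello'

print(callable(Greeter()))   # True: it defines __call__
print(callable(42))          # False: plain integers are not callable
print(Greeter()())           # hello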
I get an error when trying to bind a class method to a function. Why?
def foo():
    print "Hello world"

class something(object):
    bar = foo

test = something()
test.bar()
TypeError: foo() takes no arguments (1 given)
Also, if I am unable to modify foo, can I do this adaptation from within the class definition?
A simple way to do it is to wrap the function in a staticmethod inside A:
class A():
    bar = staticmethod(foo)
>>> test = A()
>>> test.bar()
Hello world
A method in a Python class always takes at least one argument, usually called self. This example is taken from the official Python tutorial:
# Function defined outside the class
def f1(self, x, y):
    return min(x, x+y)

class C:
    f = f1

    def g(self):
        return 'hello world'

    h = g
Note that both methods, regardless of whether they are defined outside or inside of the class, take self as an argument.
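For instance, a small sketch using the tutorial classes above:
c = C()
print(c.f(2, 3))   # 2 - f1 is bound as a method, so self is passed automatically
print(c.g())       # hello world
print(c.h())       # hello world - h is just another name for g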
Edit: If you really can't change your foo function, then you can do something like this:
>>> def foo():
... print "Hello World"
...
>>> class something(object):
...     def bar(self): foo()
...
>>> test = something()
>>> test.bar()
Hello World
When you call a method this way, you pass the class instance as the first parameter.
When you call test.bar, what in fact happens is more like bar(test): you pass an argument to the method.
Methods all receive the instance of the class as their first argument, so add a parameter to your function and it will work.
The initial def creates a function object named foo. Since it's outside any class, it's just a function that takes no arguments. The assignment bar = foo just gives the new name bar (accessed as test.bar) to that same function object. The call test.bar(), however, treats bar as an instance method and passes the object test as the first argument (the one that you would normally call self). In Python 3 you could call it through the class as something.bar() and not get the error, since the function retrieved from the class there is just a plain function.
Remember that when you call a method on an instance of a Python class, the instance is passed in as the self argument. You need to account for that in your code:
def foo(self):
    print("Hello world")

class something(object):
    bar = foo

test = something()
test.bar()
You can read all about classes in the Python Documentation
The easiest workaround I can think of to avoid passing in a self is:
def foo():
    print("Hello world")

class something(object):
    bar = [foo]  # hide the plain function in a list so it is not turned into a method

test = something()
test.bar[0]()  # call the plain function; no self is passed