Customize how a Python object is processed as a function argument?

A Python class's __call__ method lets us specify how an instance of the class should behave as a function. Can we do the "opposite", i.e. specify how an instance should behave when it is passed as an argument to an arbitrary other function?
As a simple example, suppose I have a WrappedList class that wraps lists, and when I call a function f on an instance of this class, I want f to be mapped over the wrapped list. For instance:
x = WrappedList([1, 2, 3])
print(x + 1) # should be WrappedList([2, 3, 4])
d = {1: "a", 2: "b", 3:"c"}
print(d[x]) # should be WrappedList(["a", "b", "c"])
Calling the hypothetical __call__ analogue I'm looking for __arg__, we could imagine something like this:
class WrappedList(object):
    def __init__(self, to_wrap):
        self.wrapped = to_wrap

    def __arg__(self, func):
        # map() returns a lazy iterator in Python 3, so materialize it
        return WrappedList(list(map(func, self.wrapped)))
Now, I know that (1) __arg__ doesn't exist in this form, and (2) it's easy to get the behavior in this simple example without any tricks. But is there a way to approximate the behavior I'm looking for in the general case?

You can't do this in general.*
You can do something equivalent for most of the builtin operators (like your + example), and a handful of builtin functions (like abs). They're implemented by calling special methods on the operands, as described in the reference docs.
Of course that means writing a whole bunch of special methods for each of your types—but it wouldn't be too hard to write a base class (or decorator or metaclass, if that doesn't fit your design) that implements all those special methods in one place, by calling the subclass's __arg__ and then doing the default thing:
class ArgyBase:
    def __add__(self, other):
        return self.__arg__() + other

    def __radd__(self, other):
        return other + self.__arg__()

    # ... and so on
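For instance, a subclass then only has to supply __arg__; note that in this base-class design __arg__ takes no arguments and returns the plain value to operate on (WrappedNumber here is a made-up illustration):
class WrappedNumber(ArgyBase):
    def __init__(self, value):
        self.value = value

    def __arg__(self):
        # unwrap to the plain value the operators should act on
        return self.value

print(WrappedNumber(3) + 1)  # 4, via ArgyBase.__add__
print(1 + WrappedNumber(3))  # 4, via ArgyBase.__radd__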
And if you want to extend that to a whole suite of functions that you create yourself, you can give them all special-method protocols similar to the builtin ones, and expand your base class to cover them. Or you can just short-circuit that and use the __arg__ protocol directly in those functions. To avoid lots of repetition, I'd use a decorator for that.
import functools

def argify(func):
    def _arg(arg):
        # unwrap anything that supports the __arg__ protocol;
        # pass everything else through untouched
        try:
            return arg.__arg__()
        except AttributeError:
            return arg
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        args = map(_arg, args)
        kwargs = {kw: _arg(arg) for kw, arg in kwargs.items()}
        return func(*args, **kwargs)
    return wrapper

@argify
def spam(a, b):
    return a + 2 * b
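For instance, assuming a made-up Two class that unwraps to 2 via the __arg__ protocol:
class Two:
    def __arg__(self):
        return 2

print(spam(Two(), 3))      # 8: Two() unwraps to 2, so 2 + 2 * 3
print(spam(a=1, b=Two()))  # 5: keyword arguments are unwrapped too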
And if you really want to, you can go around wrapping other people's functions:
sin = argify(math.sin)
… or even monkeypatching their modules:
requests.get = argify(requests.get)
… or monkeypatching a whole module dynamically a la early versions of gevent, but I'm not going to even show that, because at this point we're getting into don't-do-this-for-multiple-reasons territory.
You mentioned in a comment that you'd like to do this to a bunch of someone else's functions without having to specify them in advance, if possible. Does that mean every function that ever gets constructed in any module you import? Well, you can even do that if you're willing to create an import hook, but that seems like an even worse idea. Explaining how to write an import hook and either AST-patch each function creation node or insert wrappers around the bytecode or the like is way too much to get into here, but if your research abilities exceed your common sense, you can figure it out. :)
As a side note, if I were doing this, I wouldn't call the method __arg__, I'd call it either arg or _arg. Besides being reserved for future use by the language, the dunder-method style implies things that aren't true here (special-method lookup instead of a normal call, you can search for it in the docs, etc.).
* There are languages where you can, such as C++, where a combination of implicit casting and typed variables instead of typed values means you can get a method called on your objects just by giving them an odd type with an implicit conversion operator to the expected type.

Related

Python __init__ second argument [duplicate]

When defining a method on a class in Python, it looks something like this:
class MyClass(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
But in some other languages, such as C#, you have a reference to the object that the method is bound to with the "this" keyword without declaring it as an argument in the method prototype.
Was this an intentional language design decision in Python or are there some implementation details that require the passing of "self" as an argument?
I like to quote Tim Peters' Zen of Python: "Explicit is better than implicit."
In Java and C++, 'this.' can be deduced, except when you have variable names that make it impossible to deduce. So you sometimes need it and sometimes don't.
Python elects to make things like this explicit rather than based on a rule.
Additionally, since nothing is implied or assumed, parts of the implementation are exposed. self.__class__, self.__dict__ and other "internal" structures are available in an obvious way.
It's to minimize the difference between methods and functions. It allows you to easily generate methods in metaclasses, or add methods at runtime to pre-existing classes.
e.g.
>>> class C:
... def foo(self):
... print("Hi!")
...
>>>
>>> def bar(self):
... print("Bork bork bork!")
...
>>>
>>> c = C()
>>> C.bar = bar
>>> c.bar()
Bork bork bork!
>>> c.foo()
Hi!
>>>
It also (as far as I know) makes the implementation of the python runtime easier.
I suggest that one should read Guido van Rossum's blog on this topic - Why explicit self has to stay.
When a method definition is decorated, we don't know whether to automatically give it a 'self' parameter or not: the decorator could turn the function into a static method (which has no 'self'), or a class method (which has a funny kind of self that refers to a class instead of an instance), or it could do something completely different (it's trivial to write a decorator that implements '@classmethod' or '@staticmethod' in pure Python). Without knowing what the decorator does, there's no way to decide whether to endow the method being defined with an implicit 'self' argument or not.
I reject hacks like special-casing '@classmethod' and '@staticmethod'.
Python doesn't force you to use "self". You can give it whatever name you want. You just have to remember that the first argument in a method definition header is a reference to the object.
It also allows you to do this (in short: invoking Outer(3).create_inner_class(4)().weird_sum_with_closure_scope(5) will return 12, but will do so in the craziest of ways):
class Outer(object):
    def __init__(self, outer_num):
        self.outer_num = outer_num

    def create_inner_class(outer_self, inner_arg):
        class Inner(object):
            # outer_self reaches the Outer instance explicitly, while
            # inner_arg is captured from the enclosing function's scope
            def weird_sum_with_closure_scope(inner_self, num):
                return num + outer_self.outer_num + inner_arg
        return Inner
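And indeed:
>>> Outer(3).create_inner_class(4)().weird_sum_with_closure_scope(5)
12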
Of course, this is harder to imagine in languages like Java and C#. By making the self reference explicit, you're free to refer to any object by that self reference. Also, such a way of playing with classes at runtime is harder to do in the more static languages - not that it's necessarily good or bad. It's just that the explicit self allows all this craziness to exist.
Moreover, imagine this: We'd like to customize the behavior of methods (for profiling, or some crazy black magic). This can lead us to think: what if we had a class Method whose behavior we could override or control?
Well here it is:
from functools import partial

class MagicMethod(object):
    """Does black magic when called"""
    def __get__(self, obj, obj_type):
        # This binds the <other> class instance to the <innocent_self> parameter
        # of the method MagicMethod.invoke
        return partial(self.invoke, obj)

    def invoke(magic_self, innocent_self, *args, **kwargs):
        # do black magic here
        ...
        print(magic_self, innocent_self, args, kwargs)

class InnocentClass(object):
    magic_method = MagicMethod()
And now InnocentClass().magic_method() will act as expected. The method will be bound with the innocent_self parameter to the InnocentClass instance, and with magic_self to the MagicMethod instance. Weird huh? It's like having 2 keywords this1 and this2 in languages like Java and C#. Magic like this allows frameworks to do stuff that would otherwise be much more verbose.
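For example:
obj = InnocentClass()
obj.magic_method(1, key="value")
# prints the MagicMethod instance, then obj, then (1,) and {'key': 'value'}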
Again, I don't want to comment on the ethics of this stuff. I just wanted to show things that would be harder to do without an explicit self reference.
I think it has to do with PEP 227:
Names in class scope are not accessible. Names are resolved in the innermost enclosing function scope. If a class definition occurs in a chain of nested scopes, the resolution process skips class definitions. This rule prevents odd interactions between class attributes and local variable access. If a name binding operation occurs in a class definition, it creates an attribute on the resulting class object. To access this variable in a method, or in a function nested within a method, an attribute reference must be used, either via self or via the class name.
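A small illustration of the quoted rule:
class C:
    attr = 42

    def get(self):
        # the bare name 'attr' is not resolved via the class scope here;
        # 'return attr' would raise NameError, so use self.attr
        return self.attr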
I think the real reason, besides "The Zen of Python", is that functions are first-class citizens in Python.
Which essentially makes them objects. Now the fundamental issue is: if your functions are objects as well, then, in the object-oriented paradigm, how would you send messages to objects when the messages themselves are objects?
Looks like a chicken-and-egg problem. To resolve this paradox, the only possible way is to either pass a context of execution to methods or detect it. But since Python can have nested functions, it would be impossible to detect it, as the context of execution would change for inner functions.
This means the only possible solution is to explicitly pass 'self' (the context of execution).
So I believe it is an implementation problem; the Zen came much later.
As explained in self in Python, Demystified:
anything like obj.meth(args) becomes Class.meth(obj, args). The calling process is automatic while the receiving process is not (it's explicit). This is the reason the first parameter of a function in a class must be the object itself.
class Point(object):
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def distance(self):
        """Find distance from origin"""
        return (self.x**2 + self.y**2) ** 0.5
Invocations:
>>> p1 = Point(6,8)
>>> p1.distance()
10.0
__init__() defines three parameters but we just passed two (6 and 8). Similarly, distance() requires one argument but none was passed.
Why is Python not complaining about this argument number mismatch?
Generally, when we call a method with some arguments, the corresponding class function is called by placing the method's object before the first argument. So, anything like obj.meth(args) becomes Class.meth(obj, args). The calling process is automatic while the receiving process is not (it's explicit).
This is the reason the first parameter of a function in a class must be the object itself. Writing this parameter as self is merely a convention. It is not a keyword and has no special meaning in Python. We could use other names (like this) but I strongly suggest you don't: using names other than self is frowned upon by most developers and degrades the readability of the code ("Readability counts").
...
In the first example, self.x is an instance attribute whereas x is a local variable. They are not the same, and they lie in different namespaces.
Self Is Here To Stay
Many have proposed to make self a keyword in Python, like this in C++ and Java. This would eliminate the redundant use of explicit self from the formal parameter list in methods. While this idea seems promising, it's not going to happen. At least not in the near future. The main reason is backward compatibility. Here is a blog from the creator of Python himself explaining why the explicit self has to stay.
The 'self' parameter keeps the current calling object.
class class_name:
    def method_name(self, arg):
        self.var = arg

obj = class_name()
obj.method_name("value")
here, the self argument holds the object obj. Hence, the statement self.var = arg sets obj.var
There is also another very simple answer: according to the Zen of Python, "explicit is better than implicit".

Is it possible to use pattern matching on functions in python? Like, a function that will respond to multiple calls

I am trying to make a class that acts like a list. However, I would like to be able to use all of the built-in list functions, repr() and such. Is there any way to do something like this:
class myList:
    ...
    def __getitem__(self, index):
        # custom getitem function
    def __setitem__(self, index, value):
        # custom setitem function
    def toList(self):
        # returns a list

    # and a catch-all case:
    def __*__(self, *args):
        return self.toList().__*__(*args)
So, my question is: is it possible to make a catch-all case like that in Python, to catch things like __repr__, __str__, __iter__, etc.?
Thanks!
It’s not possible to trap such calls with a single method, in CPython at least. While you can define special methods like __getattr__, even defining the low-level __getattribute__ on a metaclass doesn’t work because the interpreter scans for special method names to build a fast lookup table of the sort used by types defined in C. Anything that doesn’t actually possess such individual methods will not respond to the interpreter’s internal calls (e.g., from repr).
What you can do is dynamically generate (or augment, as below) a class that has whatever special methods you want. The readability of the result might suffer too much to make this approach worthwhile:
def mkfwd(n):
    def fwd(self, *a):
        return getattr(self.toList(), n)(*a)
    return fwd

for m in "repr", "str":
    m = "__%s__" % m
    setattr(myList, m, mkfwd(m))

Allow help() to work on partial function object

I'm trying to make sure running help() at the Python 2.7 REPL displays the __doc__ for a function that was wrapped with functools.partial. Currently running help() on a functools.partial 'function' displays the __doc__ of the functools.partial class, not my wrapped function's __doc__. Is there a way to achieve this?
Consider the following callables:
import functools

def foo(a):
    """My function"""
    pass

partial_foo = functools.partial(foo, 2)
Running help(foo) will result in showing foo.__doc__. However, running help(partial_foo) results in the __doc__ of a Partial object.
My first approach was to use functools.update_wrapper, which correctly replaces the partial object's __doc__ with foo.__doc__. However, this doesn't fix the 'problem', because of how pydoc works.
I've investigated the pydoc code, and the issue seems to be that partial_foo is actually a Partial object not a typical function/callable, see this question for more information on that detail.
By default, pydoc will display the __doc__ of the object's type, not of the instance, if the object it was passed is determined to be a class by inspect.isclass. See the render_doc function for more information about the code itself.
So, in my scenario above pydoc is displaying the help of the type, functools.partial NOT the __doc__ of my functools.partial instance.
Is there any way to alter my call to help(), or the functools.partial instance that's passed to help(), so that it will display the __doc__ of the instance, not the type?
I found a pretty hacky way to do this. I wrote the following function to override the __builtins__.help function:
import functools
import pydoc

def partialhelper(object=None):
    if isinstance(object, functools.partial):
        return pydoc.help(object.func)
    else:
        # Preserve the ability to go into interactive help if user calls
        # help() with no arguments.
        if object is None:
            return pydoc.help()
        else:
            return pydoc.help(object)
Then just replace it in the REPL with:
__builtins__.help = partialhelper
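With that in place, help on the earlier partial now reaches the wrapped function:
>>> help(partial_foo)
Help on function foo in module __main__:

foo(a)
    My function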
This works and doesn't seem to have any major downsides, yet. However, with the above naive implementation there isn't a way to still show the __doc__ of some functools.partial objects. It's all or nothing, but one could probably attach an attribute to the wrapped (original) function to indicate whether or not the original __doc__ should be shown. However, in my scenario I never want to do this.
Note the above does NOT work when using IPython and the embed functionality. This is because IPython directly sets the shell's namespace with references to the 'real' __builtin__, see the code and old mailing list for information on why this is.
So, after some investigation there's another way to hack this into IPython. We must override the site._Helper class, which is used by IPython to explicitly setup the help system. The following code will do just that when called BEFORE IPython.embed:
import site
site._Helper.__call__ = lambda self, *args, **kwargs: partialhelper(*args, **kwargs)
Are there any other downsides I'm missing here?
How about implementing your own?
def partial_foo(*args):
    """ some doc string """
    return foo(*((2,) + args))
Not a perfect answer, but if you really want this, I suspect it's the only way to do it.
You identified the issue - partial functions aren't typical functions, and the dunder variables don't carry over. This applies not just to __doc__, but also __name__, __module__, and more. Not sure if this solution existed when the question was asked, but you can achieve this more elegantly ("elegantly" up to interpretation) by rewriting partial() as a decorator factory. Since decorators (& factories) do not automatically copy over dunder variables, you need to also use @wraps(func):
from functools import wraps

def wrapped_partial(*args, **kwargs):
    def foo(func):
        @wraps(func)
        def bar(*fargs, **fkwargs):
            return func(*args, *fargs, **kwargs, **fkwargs)
        return bar
    return foo
Usage example:
@wrapped_partial(3)
def multiply_triple(x, y=1, z=0):
    """Multiplies three numbers"""
    return x * y * z

# Without decorator syntax: multiply_triple = wrapped_partial(3)(multiply_triple)
With output:
>>> print(multiply_triple())
0
>>> print(multiply_triple(3, z=3))
27
>>> help(multiply_triple)
Help on function multiply_triple in module __main__:

multiply_triple(x, y=1, z=0)
    Multiplies three numbers
Thing that didn't work, but is informative when using multiple decorators
You might think, as I first did, that based upon the stacking syntax of decorators in PEP-318, you could put the wrapping and the partial function definition in separate decorators, e.g.
def partial_func(*args, **kwargs):
    def foo(func):
        def bar(*fargs, **fkwargs):
            return func(*args, *fargs, **kwargs, **fkwargs)
        return bar
    return foo

def wrapped(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

@wrapped
@partial_func(z=3)
def multiply_triple(x, y=1, z=0):
    """Multiplies three numbers"""
    return x * y * z
In these cases (and in reverse order), the decorators are applied one at a time, and @partial_func interrupts wrapping. This means that if you are trying to use any decorator that you want to wrap, you need to rewrite the decorator as a factory whose returned function is itself decorated by @wraps(func). If you are using multiple decorators, they all have to be turned into wrapped factories.
Alternate method to have decorators "wrap"
Since decorators are just functions, you can write a copy_dunder_vars(obj1, obj2) function that returns obj1, but with all the dunder variables copied over from obj2. Call as:
def foo():
    pass

foo = copy_dunder_vars(decorator(foo), foo)
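A minimal sketch of such a helper, assuming the intent is essentially what functools.update_wrapper already does (copying the original's metadata onto the decorated callable):
import functools

def copy_dunder_vars(decorated, original):
    # copy the metadata dunders (__name__, __doc__, __module__, ...)
    # from the original function onto the decorated one
    for attr in functools.WRAPPER_ASSIGNMENTS:
        try:
            setattr(decorated, attr, getattr(original, attr))
        except AttributeError:
            pass
    decorated.__dict__.update(getattr(original, '__dict__', {}))
    return decorated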
This goes against the preferred syntax, but practicality beats purity. I think "not forcing you to rewrite decorators that you're borrowing from elsewhere and leaving largely unchanged" fits into that category. After all that wrapping, don't forget ribbon and a bow ;)

How can I see if a method is a decorator?

Is it possible to inspect a function/method to see whether it can be used as a decorator? In that it follows the usual way decorators wrap other functions and return a callable? Specifically, I'm looking to validate 3rd party code.
By applying a suspected decorator, catching exceptions, and then testing whether the result contains a __call__ method, you could produce a guess as to whether a given callable is a decorator or not. But it will be only a guess, not a guarantee.
Beyond that, I do not believe what you want will be possible in general, due to the dynamically typed nature of the Python language and to the special treatment of built-in functions in the CPython interpreter. It is not possible to programmatically tell whether a callable will accept another callable as an argument, or what type its return value will have. Also, in CPython, for functions implemented in C, you cannot even inspect a callable to see how many arguments it accepts.
The word "decorator" can be taken to mean different things. One way to define it is, a decorator is any callable that accepts a single (callable) argument and returns a callable.
Note that I have not even used the word "function" in this definition; it would actually be incorrect to do so. Indeed, some commonly used decorators have strange properties:
The built-in classmethod and staticmethod decorators return descriptor objects, not functions.
Since language version 2.6 you can decorate classes, not just functions and methods.
Any class containing an __init__(self, somecallable) method and a __call__(self, *args, **kwargs) method can be used as a decorator.
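For instance, a minimal sketch of that last kind of decorator (CallCounter is a made-up example):
class CallCounter:
    def __init__(self, func):
        self.func = func
        self.calls = 0

    def __call__(self, *args, **kwargs):
        self.calls += 1
        return self.func(*args, **kwargs)

@CallCounter
def greet():
    return "hi"

greet()
print(greet.calls)  # 1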
Since there is no standardized decorator in Python, there's no real way of telling if a function is a decorator unless you know something about the decorator you're looking for.
If the decorator is under your control, you can add a mark to indicate it's a decorated function. Otherwise there is no real unified way of doing this. Take this example for instance:
def g():
    pass

def decorator(func):
    return g

@decorator
def f():
    pass
In the above example, at run time, f and g will be identical, and there is no way of telling the two apart.
Any callable with the right number of arguments can be used as a decorator. Remember that
@foo
def bar(...):
    ...
is exactly the same as
def bar(...):
    ...

bar = foo(bar)
Naturally, since foo could return anything, you have no way of checking whether a function has been decorated or not. Although foo could be nice and leave a mark, it has no obligation to do so.
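A sketch of what "leaving a mark" could look like; the __decorated_by__ attribute here is an invented convention, not anything standard:
def marking_decorator(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    wrapper.__decorated_by__ = marking_decorator  # the voluntary "mark"
    return wrapper

@marking_decorator
def f():
    pass

print(hasattr(f, '__decorated_by__'))  # True, but only by convention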
If you are given some Python code and you want to find all the things that are decorators, you can do so by parsing the code into an abstract syntax tree, then walking the tree looking for decorated functions. Here's an example, storing the .ids of the decorators. Obviously, you could store the ast objects if you wanted to.
>>> import ast
>>> class DecoratorFinder(ast.NodeVisitor):
...     def __init__(self, *args, **kwargs):
...         super(DecoratorFinder, self).__init__(*args, **kwargs)
...         self.decorators = set()
...
...     def visit_FunctionDef(self, node):
...         self.decorators.update(dec.id for dec in node.decorator_list)
...         self.generic_visit(node)
...
>>> finder = DecoratorFinder()
>>> x = ast.parse("""
... @dec
... def foo():
...     pass
... """)
>>> finder.visit(x)
>>> finder.decorators
set(['dec'])
No, this is not possible. Maybe instead of checking whether f is a decorator, you should think about why you need to check that?
If you are expecting some specific decorator, you can directly check for that; if you want some specific behavior/methods/attributes, you can check for that.
If you want to check whether some callable f can be used as a decorator, you can test the decorator behavior by passing in some dummy function, but in general that may not work, or may behave differently for different inputs.
Here is such a naive check:
def decorator1(func):
    def _wrapper(*args, **kwargs):
        print("before")
        func(*args, **kwargs)
        print("after")
    return _wrapper

def dummy_func():
    pass

out_func = decorator1(dummy_func)
if callable(out_func) and dummy_func != out_func:
    print("aha decorated!")
I've never done anything like this, but in general Python relies on "duck typing" in situations like this. So you could just try to decorate a dummy function and see if a callable is returned.

