Is this an example of Python function overloading?

I know Python does not allow us to overload functions. However, does it have built-in overloaded functions?
Consider this:
setattr(object_name,'variable', 'value')
setattr(class_name,'method','function')
The first statement dynamically adds attributes to objects at run time, while the second attaches external functions to classes at run time.
The same function does different things based on its arguments. Is this function overload?

The function setattr(foo, 'bar', baz) is always the same as foo.bar = baz, regardless of the type of foo. There is no overloading here.
In Python 3, limited overloading is possible with functools.singledispatch, but setattr is not implemented with that.
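For illustration, here is a minimal Python 3 sketch of functools.singledispatch; the function describe and its return strings are invented for the example:

from functools import singledispatch

@singledispatch
def describe(arg):
    return "something else"  # the fallback for unregistered types

@describe.register(int)
def _(arg):
    return "an integer"

@describe.register(list)
def _(arg):
    return "a list"

describe(42)      # 'an integer'
describe([1, 2])  # 'a list'
describe("hi")    # 'something else'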
A far more interesting example, in my opinion, is type(). type() does two entirely different things depending on how you call it:
If called with a single argument, it returns the type of that argument.
If called with three arguments (of the correct types), it dynamically creates a new class.
Nevertheless, type() is not overloaded. Why not? Because it is implemented as one function that counts how many arguments it got and then decides what to do. In pure Python, this is done with the variadic *args syntax, but type() is implemented in C, so it looks rather different. It's doing the same thing, though.
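A hedged pure-Python sketch of that pattern (not type()'s actual implementation, which lives in C):

def type_like(*args):
    # count the arguments, then decide what to do, just like type()
    if len(args) == 1:
        return args[0].__class__  # one argument: report the type
    elif len(args) == 3:
        name, bases, namespace = args
        return type(name, bases, namespace)  # three arguments: build a class
    raise TypeError("type_like() takes 1 or 3 arguments")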

Python, in some sense, doesn't need function overloading where other languages do. Consider the following example in C:
int add(int x, int y) {
    return x + y;
}
If you wish to extend the notion to types that are not integers, you would need to write another function:
float add(float x, float y) {
    return x + y;
}
In Python, all you need is:
def add(x, y):
    return x + y
It works fine for both, and it isn't considered function overloading. You can also handle different argument types inside one function using isinstance(). The major issue, as pointed out by this question, is the number of arguments. But in your case you pass the same number of arguments, and even so, there are ways around this without function overloading.
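For example, a minimal sketch of type-dependent behaviour inside one function; the string-joining rule is invented for illustration:

def add(x, y):
    # one function, several behaviours, selected with isinstance()
    if isinstance(x, str) and isinstance(y, str):
        return "%s %s" % (x, y)  # invented rule: join strings with a space
    return x + y

add(1, 2)          # 3
add("foo", "bar")  # 'foo bar'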

Overloading methods is tricky in Python. However, you can get a similar effect by passing dicts, lists, or primitive values through keyword arguments.
I have tried something like this for my use cases; it may help people understand how to approximate overloaded methods.
Let's take an example:
a single method that accepts different kinds of arguments, called from different classes.
def add_bullet(sprite=None, start=None, headto=None, speed=None, acceleration=None):
    ...
Pass the arguments from the calling class:
add_bullet(sprite='test', start=True, headto={'lat': 10.6666, 'long': 10.6666}, acceleration=10.6)
or add_bullet(sprite='test', start=True, headto={'lat': 10.6666, 'long': 10.6666}, speed=['10', '20', '30'])
So the same method handles lists, dictionaries, and primitive values, which is as close as this gets to method overloading.
Try it out in your own code.


Why was the mutable default argument's behavior never changed? [duplicate]

I had a very difficult time with understanding the root cause of a problem in an algorithm. Then, by simplifying the functions step by step I found out that evaluation of default arguments in Python doesn't behave as I expected.
The code is as follows:
class Node(object):
    def __init__(self, children=[]):
        self.children = children
The problem is that every instance of the Node class shares the same children attribute if the attribute is not given explicitly:
>>> n0 = Node()
>>> n1 = Node()
>>> id(n1.children)
25000176
>>> id(n0.children)
25000176
I don't understand the logic of this design decision. Why did Python's designers decide that default arguments are to be evaluated at definition time? This seems very counter-intuitive to me.
The alternative would be quite heavyweight -- storing "default argument values" in the function object as "thunks" of code to be executed over and over again every time the function is called without a specified value for that argument -- and would make it much harder to get early binding (binding at def time), which is often what you want. For example, in Python as it exists:
def ack(m, n, _memo={}):
    key = m, n
    if key not in _memo:
        if m == 0: v = n + 1
        elif n == 0: v = ack(m - 1, 1)
        else: v = ack(m - 1, ack(m, n - 1))
        _memo[key] = v
    return _memo[key]
...writing a memoized function like the above is quite an elementary task. Similarly:
for i in range(len(buttons)):
    buttons[i].onclick(lambda i=i: say('button %s', i))
...the simple i=i, relying on the early-binding (definition time) of default arg values, is a trivially simple way to get early binding. So, the current rule is simple, straightforward, and lets you do all you want in a way that's extremely easy to explain and understand: if you want late binding of an expression's value, evaluate that expression in the function body; if you want early binding, evaluate it as the default value of an arg.
The alternative, forcing late binding for both situations, would not offer this flexibility, and would force you to go through hoops (such as wrapping your function into a closure factory) every time you needed early binding, as in the above examples -- yet more heavyweight boilerplate forced on the programmer by this hypothetical design decision (beyond the "invisible" ones of generating and repeatedly evaluating thunks all over the place).
In other words, "There should be one, and preferably only one, obvious way to do it [1]": when you want late binding, there's already a perfectly obvious way to achieve it (since all of the function's code is only executed at call time, obviously everything evaluated there is late-bound); having default-arg evaluation produce early binding gives you an obvious way to achieve early binding as well (a plus!-) rather than giving TWO obvious ways to get late binding and no obvious way to get early binding (a minus!-).
[1]: "Although that way may not be obvious at first unless you're Dutch."
The issue is this.
It's too expensive to evaluate a function as an initializer every time the function is called.
0 is a simple literal. Evaluate it once, use it forever.
int is a function (like list) that would have to be evaluated each time it's required as an initializer.
The construct [] is a literal, like 0; it means "this exact object".
The problem is that some people hope it means list(), as in "evaluate this function for me, please, to get the object that is the initializer".
It would be a crushing burden to add the necessary if statement to do this evaluation all the time. It's better to take all arguments as literals and not do any additional function evaluation as part of trying to do a function evaluation.
Also, more fundamentally, it's technically impossible to implement argument defaults as function evaluations.
Consider, for a moment the recursive horror of this kind of circularity. Let's say that instead of default values being literals, we allow them to be functions which are evaluated each time a parameter's default values are required.
[This would parallel the way collections.defaultdict works.]
def aFunc(a=another_func):
    return a * 2

def another_func(b=aFunc):
    return b * 3
What is the value of another_func()? To get the default for b, it must evaluate aFunc, which requires an eval of another_func. Oops.
Of course, in your situation it is difficult to understand. But you must see that evaluating default args every time would lay a heavy runtime burden on the system.
Also you should know, that in case of container types this problem may occur -- but you could circumvent it by making the thing explicit:
def __init__(self, children=None):
    if children is None:
        children = []
self.children = children
The workaround for this, discussed here (and very solid), is:
class Node(object):
    def __init__(self, children=None):
        self.children = [] if children is None else children
As for why, look for an answer from von Löwis, but it's likely because the function definition creates a code object due to the architecture of Python, and there might not be a facility for working with reference types like this in default arguments.
I thought this was counterintuitive too, until I learned how Python implements default arguments.
A function's an object. At load time, Python creates the function object, evaluates the defaults in the def statement, puts them into a tuple, and adds that tuple as an attribute of the function named func_defaults (renamed __defaults__ in Python 3). Then, when a function is called, if the call doesn't provide a value, Python grabs the default value out of func_defaults.
For instance:
>>> class C():
...     pass
...
>>> def f(x=C()):
...     pass
...
>>> f.func_defaults
(<__main__.C instance at 0x0298D4B8>,)
So all calls to f that don't provide an argument will use the same instance of C, because that's the default value.
As far as why Python does it this way: well, that tuple could contain functions that would get called every time a default argument value was needed. Apart from the immediately obvious problem of performance, you start getting into a universe of special cases, like storing literal values instead of functions for non-mutable types to avoid unnecessary function calls. And of course there are performance implications galore.
The actual behavior is really simple. And there's a trivial workaround, in the case where you want a default value to be produced by a function call at runtime:
def f(x=None):
    if x is None:
        x = g()
This comes from Python's emphasis on syntactic and execution simplicity. A def statement occurs at a certain point during execution. When the Python interpreter reaches that point, it evaluates the code in that line, and then creates a code object from the body of the function, which will be run later, when you call the function.
It's a simple split between function declaration and function body. The declaration is executed when it is reached in the code. The body is executed at call time. Note that the declaration is executed every time it is reached, so you can create multiple functions by looping.
funcs = []
for x in xrange(5):
    def foo(x=x, lst=[]):
        lst.append(x)
        return lst
    funcs.append(foo)
for func in funcs:
    print "1: ", func()
    print "2: ", func()
Five separate functions have been created, with a separate list created each time the function declaration was executed. On each pass through funcs, the same function is executed twice, using the same list each time. This gives the results:
1: [0]
2: [0, 0]
1: [1]
2: [1, 1]
1: [2]
2: [2, 2]
1: [3]
2: [3, 3]
1: [4]
2: [4, 4]
Others have given you the workaround of using param=None, and assigning a list in the body if the value is None, which is fully idiomatic Python. It's a little ugly, but the simplicity is powerful, and the workaround is not too painful.
Edited to add: For more discussion on this, see effbot's article here: http://effbot.org/zone/default-values.htm, and the language reference, here: http://docs.python.org/reference/compound_stmts.html#function
I'll provide a dissenting opinion by addressing the main arguments in the other posts.
Evaluating default arguments when the function is executed would be bad for performance.
I find this hard to believe. If default argument assignments like foo='some_string' really add an unacceptable amount of overhead, I'm sure it would be possible to identify assignments to immutable literals and precompute them.
If you want a default assignment with a mutable object like foo = [], just use foo = None, followed by foo = foo or [] in the function body.
While this may be unproblematic in individual instances, as a design pattern it's not very elegant. It adds boilerplate code and obscures default argument values. Patterns like foo = foo or ... don't work if foo can be an object like a numpy array with undefined truth value. And in situations where None is a meaningful argument value that may be passed intentionally, it can't be used as a sentinel and this workaround becomes really ugly.
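For concreteness, the usual sentinel version of the workaround looks something like this minimal sketch (the names are invented):

_MISSING = object()  # a private sentinel no caller can pass by accident

def process(data, buffer=_MISSING):
    if buffer is _MISSING:
        buffer = []  # a fresh list on every call
    buffer.append(data)
    return buffer

Unlike foo = foo or [], this keeps working when None (or any falsy value) is a meaningful argument, at the cost of yet more boilerplate.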
The current behaviour is useful for mutable default objects that should be shared across function calls.
I would be happy to see evidence to the contrary, but in my experience this use case is much less frequent than mutable objects that should be created anew every time the function is called. To me it also seems like a more advanced use case, whereas accidental default assignments with empty containers are a common gotcha for new Python programmers. Therefore, the principle of least astonishment suggests default argument values should be evaluated when the function is executed.
In addition, it seems to me that there exists an easy workaround for mutable objects that should be shared across function calls: initialise them outside the function.
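A minimal sketch of that workaround (the names are invented): the shared container is created once at module level, so the sharing is explicit instead of hidden in a default argument.

_cache = {}  # created once at import time, shared deliberately across calls

def cached_square(n):
    if n not in _cache:
        _cache[n] = n * n
    return _cache[n]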
So I would argue that this was a bad design decision. My guess is that it was chosen because its implementation is actually simpler and because it has a valid (albeit limited) use case. Unfortunately, I don't think this will ever change, since the core Python developers want to avoid a repeat of the amount of backwards incompatibility that Python 3 introduced.
Python function definitions are just code, like all the other code; they're not "magical" in the way that some languages are. For example, in Java you could refer "now" to something defined "later":
public static void foo() { bar(); }
public static void main(String[] args) { foo(); }
public static void bar() {}
but in Python
def foo(): bar()
foo() # boom! "bar" has no binding yet
def bar(): pass
foo() # ok
So, the default argument is evaluated at the moment that line of code is executed!
Because if they had, then someone would post a question asking why it wasn't the other way around :-p
Suppose now that they had. How would you implement the current behaviour if needed? It's easy to create new objects inside a function, but you cannot "uncreate" them (you can delete them, but it's not the same).

Assign a numeric value that is returned from a function by using exec command

I need to assign a numeric value, which is returned from a function, to a variable name by using the exec() command:
def func1(x, y):
    return x + y

def main(x, y, n):
    x = 3; y = 5; n = 1
    t = exec("func%s(%s,%s)" % (n, x, y))
    return t**2

main(3, 5, 1)
I have many functions like "func1", "func2" and so on... I try to return t = func1(x,y) with the exec("func%s(%s,%s)" % (n,x,y)) statement. However, I cannot assign a value returned from exec().
There is a partly similar question on SE, but it is written for Python3 and is also not applicable to my case.
How can we resolve this problem, or is there a more elegant way to perform such an operation, maybe without the use of exec()?
By the way, as "func%s(%s,%s)" % (n,x,y) is a statement, I used exec. Or should I rather use eval?
It is almost always a really bad idea to get at functions and variables using their names as strings (or bits of their names as integers, as in the example code in the question). One reason why is that eval or exec can do literally anything and you generally want to avoid using code constructs whose behaviour is so hard to predict and reason about.
So here are two ways to get similar results with less pain.
First: Instead of passing around magic index numbers, like the 1 in the code above, pass the actual function which is, itself, a perfectly reasonable value for a variable in Python.
def main(x, y, f):
    return f(x, y)**2

main(3, 5, func1)
(Incidentally, the definition of main in the question throws away the values of x,y,n that are passed in to it. That's probably a mistake.)
Second: Instead of making these mere functions, make them methods on classes, and pass around not the functions but either the classes themselves or instances of the classes.
class Solver:
    def solve(self, x, y): return "eeeek, not implemented"

class Solver1(Solver):
    def solve(self, x, y): return x + y

def main(x, y, obj):
    return obj.solve(x, y)**2

main(3, 5, Solver1())
Which of these is the better approach depends on the details of your application. For instance, if you actually have multiple "parallel" sets of functions -- as well as your func1, func2, etc., there are also otherfunc1, otherfunc2 etc. -- then this is crying out for an object-oriented solution (the second approach above).
In general I would argue for the second approach; it tends to lead to cleaner code as your requirements grow.

Python: emulate C-style pass-by-reference for variables

I have a framework with some C-like language. Now I'm re-writing that framework and the language is being replaced with Python.
I need to find appropriate Python replacement for the following code construction:
SomeFunction(&arg1)
What this does is a C-style pass-by-reference so the variable can be changed inside the function call.
My ideas:
Just return the value, like v = SomeFunction(arg1).
This is not so good, because my generic function can have a lot of arguments, like SomeFunction(1,2,'qqq','vvv',.... and many more),
and I want to give the user the ability to get the value she wants.
Return the collection of all the arguments, whether they have changed or not, like: resulting_list = SomeFunction(1,2,'qqq','vvv',.... and many more); interesting_value = resulting_list[3]
This can be improved by giving names to the values and returning a dictionary: interesting_value = resulting_dict['magic_value1']
It's not good because we have constructions like
DoALotOfStaff([SomeFunction1(1, 2, 3, &arg1, 'qq', val2),
               SomeFunction2(1, &arg2, v1),
               AnotherFunction(),
               ...
              ], flags1, my_var, ...)
And I wouldn't like to burden the user with lists of lists of variables, with names or indexes she (the user) should know. Kind-of references would be very useful here...
Final Response
I compiled all the answers with my own ideas and was able to produce the solution. It works.
Usage
SomeFunction(1,12, get.interesting_value)
AnotherFunction(1, get.the_val, 'qq')
Explanation
Anything prefixed with get. is a kind-of reference, and its value will be filled in by the function. There is no need to define the value beforehand.
Limitation - currently I support only numbers and strings, but these are sufficient for my use case.
Implementation
I wrote a Getter class which overrides __getattribute__ and produces any variable on demand.
All newly created variables have a pointer to their container Getter and support a method set(self, value).
When set() is called, it checks whether the value is an int or a string and creates an object inheriting from int or str accordingly, but with the same set() method added. This new object then replaces our instance in the Getter container.
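A rough sketch of what this might look like (my reconstruction from the description above, simplified: here set() just stores the plain value instead of an int/str subclass):

class Getter(object):
    # produce a kind-of reference for any attribute accessed for the first time
    def __getattribute__(self, name):
        try:
            return object.__getattribute__(self, name)
        except AttributeError:
            ref = Ref(self, name)
            object.__setattr__(self, name, ref)
            return ref

class Ref(object):
    # a placeholder that the called function fills in via set()
    def __init__(self, container, name):
        self.container = container
        self.name = name
    def set(self, value):
        if not isinstance(value, (int, str)):
            raise TypeError("only numbers and strings are supported")
        setattr(self.container, self.name, value)  # replace the placeholder

get = Getter()

def some_function(a, b, out):
    out.set(a + b)  # the function fills the reference

some_function(1, 12, get.interesting_value)
result = get.interesting_value  # now 13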
Thank you everybody. I will mark as "answer" the response which led me on my way, but all of you helped me somehow.
I would say that your best, cleanest bet would be to construct an object containing the values to be passed and/or modified. This single object can be passed in as a single parameter (and will automatically be passed by reference), and its members can be modified to return the new values.
This will simplify the code enormously and you can cope with optional parameters, defaults, etc., cleanly.
>>> class C:
...     def __init__(self):
...         self.a = 1
...         self.b = 2
...
>>> c = C()
>>> def f(o):
...     o.a = 23
...
>>> f(c)
>>> c
<__main__.C instance at 0x7f6952c013f8>
>>> c.a
23
>>>
Note
I am sure that you could extend this idea to have a parameter class that carried immutable and mutable data into your function with fixed member names, plus storing the names of the parameters actually passed, and then on return mapping the mutable values back to the caller's parameter names. This technique could then be wrapped into a decorator.
I have to say that it sounds like a lot of work compared to re-factoring your existing code to a more object oriented design.
This is how Python works already:
def func(arg):
    arg += ['bar']

arg = ['foo']
func(arg)
print arg
Here, the change to arg automatically propagates back to the caller.
For this to work, you have to be careful to modify the arguments in place instead of re-binding them to new objects. Consider the following:
def func(arg):
    arg = arg + ['bar']

arg = ['foo']
func(arg)
print arg
Here, func rebinds arg to refer to a brand new list and the caller's arg remains unchanged.
Python doesn't come with this sort of thing built in. You could make your own class which provides this behavior, but it will only support a slightly more awkward syntax where the caller would construct an instance of that class (equivalent to a pointer in C) before calling your functions. It's probably not worth it. I'd return a "named tuple" (look it up) instead--I'm not sure any of the other ways are really better, and some of them are more complex.
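A minimal sketch of both ideas (Box, Result, and the function names are invented for illustration):

from collections import namedtuple

# a tiny mutable box: the closest Python analogue of a C pointer
class Box(object):
    def __init__(self, value=None):
        self.value = value

def increment(box):
    box.value += 1  # "write through the pointer"

b = Box(41)
increment(b)
b.value  # 42

# or: return everything of interest at once as a named tuple
Result = namedtuple('Result', ['status', 'interesting_value'])

def some_function(x, y):
    return Result(status='ok', interesting_value=x + y)

some_function(1, 12).interesting_value  # 13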
There is a major inconsistency here. The drawbacks you're describing in the proposed solutions relate to such subtle rules of good design that your question becomes invalid. The whole problem lies in the fact that your function violates the Single Responsibility Principle and related guidelines (a function shouldn't have more than 2-3 arguments, etc.). There is really no smart compromise here:
either you accept one of the proposed solutions (i.e. Steve Barnes's answer concerning your own wrappers or John Zwinck's answer concerning usage of named tuples) and refrain from focusing on good design subtleties (as your whole design is bad anyway at the moment)
or you fix the design. Then your current problem will disappear as you won't have the God Objects/Functions (the name of the function in your example - DoALotOfStuff really speaks for itself) to deal with anymore.

Efficient arithmetic special methods in Cython

According to the Cython documentation on arithmetic special methods (operator overloads), because of the way they're implemented, I can't rely on self being the object whose special method is being called.
Evidently, this has two consequences:
I can't specify a static type in the method declaration. For example, if I have a class Foo which can only be multiplied by, say, an int, then I can't have def __mul__(self, int op) without seeing TypeErrors (sometimes).
In order to decide what to do, I have to check the types of the operands, presumably using isinstance() to handle subclasses, which seems farcically expensive in an operator.
Is there any good way to handle this while retaining the convenience of operator syntax? My whole reason for switching my classes to Cython extension types is to improve efficiency, but as they rely heavily on the arithmetic methods, based on the above it seems like I'm actually going to make them worse.
If I understand the docs and my test results correctly, you actually can have a fast __mul__(self, int op) on a Foo, but you can only use it as Foo() * 4, not 4 * Foo(). The latter would require an __rmul__, which is not supported, so it always raises TypeError.
The fact that the second argument is typed int means that Cython does the typecheck for you, so you can be sure that the left argument is really self.
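To make that concrete, here is a hedged pure-Python sketch of the pattern being described; in Cython, typing the second argument as int performs the equivalent check for you (Foo and its field are invented):

class Foo(object):
    def __init__(self, value):
        self.value = value
    def __mul__(self, op):
        if not isinstance(op, int):  # the check "int op" buys you in Cython
            return NotImplemented
        return Foo(self.value * op)

Foo(2) * 4    # works: Foo.__mul__ runs with self bound to Foo(2)
# 4 * Foo(2)  # TypeError, as with the missing __rmul__ in the Cython case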

Python argument passing in object oriented programming

I apologize if this was asked somewhere else, but I do not know how else to formulate this question.
I am a physicist and Python is my first object-oriented language. I love this language for its clean code, and somehow everything works as intended (by me ;).
However I have one problem, maybe it is more of a design choice, but since my object oriented programming is self-taught and very basic I am not sure which way to go.
So the question is: should I mainly pass arguments or manipulate the object data directly? Because, for instance:
class test(object):
    ...
    def dosomething(self, x, y):
        # do something with x, y involving a lot of mathematical manipulations
    def calcit(self):
        k = self.dosomething(self.x[i], self.y[i])
        # do something else with k
produces much cleaner code than not passing x and y but passing i and writing self explicitly every time. What do you prefer, or is this an object-oriented paradigm that I am breaking?
Performance-wise this shouldn't make a difference since the arguments are passed by reference, right?
should I mainly pass arguments or manipulate the object data directly
Think of objects as systems with a state. If data belongs to the state of the object, then it should be packaged in the object as a member. Otherwise, it should be passed to its methods as an argument.
In your example, what you should do depends on whether you want to dosomething on values x and y that are not members of the object. If you don't, then you can have dosomething fetch x and y from self.
Also, keep in mind that if you're not using self inside a method, then it probably shouldn't be a method at all but rather a freestanding function.
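A minimal sketch of that rule of thumb (the names Particle and combine are invented for the example):

class Particle(object):
    def __init__(self, x, y):
        self.x = x  # state: belongs on the object as members
        self.y = y

    def calcit(self):
        return combine(self.x, self.y)  # fetch the state, delegate the maths

def combine(x, y):
    # uses no self at all, so it lives outside the class
    return x**2 + y**2

Particle(3, 4).calcit()  # 25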
performance-wise this shouldn't make a difference since the arguments are passed by reference, right?
I wouldn't worry about performance at this point at all.
The object paradigm is this:
you pack up methods and attributes together and call them an object.
So, if you manipulate precisely one of those attributes, you don't need to pass it as a parameter; you SHOULD use the object's own. If you use anything else, then you have to pass it as a parameter.
And nothing prevents you from copying the values of your object into other variables if it bothers you to write self every time!
Finally, a function that takes x and y as parameters but does not otherwise use the object should not live in your object; make it a freestanding helper function instead, the reason being that there is no reason to pass your object as the first parameter (even if it's implicit) if you do not use it.
And yes, performance-wise it should be pretty similar!
