Suppose a function with a mutable default argument:
def f(l=[]):
    l.append(len(l))
    return l
If I run this:
def f(l=[]):
    l.append(len(l))
    return l
print(f()+["-"]+f()+["-"]+f()) # -> [0, '-', 0, 1, '-', 0, 1, 2]
Or this:
def f(l=[]):
    l.append(len(l))
    return l
print(f()+f()+f()) # -> [0, 1, 0, 1, 0, 1, 2]
Instead of the following one, which would be more logical:
print(f()+f()+f()) # -> [0, 0, 1, 0, 1, 2]
Why?
That's actually pretty interesting!
As we know, the list l in the function definition is initialized only once at the definition of this function, and for all invocations of this function, there will be exactly one copy of this list. Now, the function modifies this list, which means that multiple calls to this function will modify the exact same object multiple times. This is the first important part.
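You can verify that identity directly; here's a minimal sketch, starting from a fresh definition of f:
def f(l=[]):
    l.append(len(l))
    return l

a = f()
b = f()
print(a is b)   # True - both calls returned the very same list object
print(a)        # [0, 1] - the "first" result changed when f ran again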
Now, consider the expression that adds these lists:
f()+f()+f()
According to the laws of operator precedence, this is equivalent to the following:
(f() + f()) + f()
...which is exactly the same as this:
temp1 = f() + f() # (1)
temp2 = temp1 + f() # (2)
This is the second important part.
Addition of lists produces a new object, without modifying any of its arguments. This is the third important part.
Now let's combine what we know together.
In line 1 above, the first call returns [0], as you'd expect. The second call returns [0, 1], as you'd expect. Oh, wait! The function will return the exact same object (not its copy!) over and over again, after modifying it! This means that the object that the first call returned has now changed to become [0, 1] as well! And that's why temp1 == [0, 1] + [0, 1].
The result of addition, however, is a completely new object, so [0, 1, 0, 1] + f() is the same as [0, 1, 0, 1] + [0, 1, 2]. Note that the second list is, again, exactly what you'd expect your function to return. The same thing happens when you add f() + ["-"]: this creates a new list object, so that any other calls to f won't interfere with it.
You can reproduce this by concatenating the results of two function calls:
>>> f() + f()
[0, 1, 0, 1]
>>> f() + f()
[0, 1, 2, 3, 0, 1, 2, 3]
>>> f() + f()
[0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5]
Again, you can do all that because you're concatenating references to the same object.
Here's a way to think about it that might help it make sense:
A function is a data structure. You create one with a def block, much the same way as you create a type with a class block or you create a list with square brackets.
The most interesting part of that data structure is the code that gets run when the function is called, but the default arguments are also part of it! In fact, you can inspect both the code and the default arguments from Python, via attributes on the function:
>>> def foo(a=1): pass
...
>>> dir(foo)
['__annotations__', '__call__', '__class__', '__closure__', '__code__', '__defaults__', ...]
>>> foo.__code__
<code object foo at 0x7f114752a660, file "<stdin>", line 1>
>>> foo.__defaults__
(1,)
(A much nicer interface for this is inspect.signature, but all it does is examine those attributes.)
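For example, a small sketch using the foo defined above:
>>> import inspect
>>> inspect.signature(foo)
<Signature (a=1)>
>>> inspect.signature(foo).parameters['a'].default
1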
So the reason that this modifies the list:
def f(l=[]):
    l.append(len(l))
    return l
is exactly the same reason that this also modifies the list:
f = dict(l=[])
f['l'].append(len(f['l']))
In both cases, you're mutating a list that belongs to some parent structure, so the change will naturally be visible in the parent as well.
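You can watch that parent structure change through the function's own __defaults__ attribute; a minimal sketch, starting from a fresh definition of f:
>>> def f(l=[]):
...     l.append(len(l))
...     return l
...
>>> f.__defaults__
([],)
>>> f()
[0]
>>> f()
[0, 1]
>>> f.__defaults__
([0, 1],)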
Note that this is a design decision that Python specifically made, and it's not inherently necessary in a language. JavaScript recently learned about default arguments, but it treats them as expressions to be re-evaluated anew on each call — essentially, each default argument is its own tiny function. The advantage is that JS doesn't have this gotcha, but the drawback is that you can't meaningfully inspect the defaults the way you can in Python.
Related
So I need to create an overloaded function sum() for adding two numbers; one version should take integer values and the other floats. The input will be only 2 integer numbers for both sum() functions.
How can I distinguish between the first sum() function and the second sum() function then? The first sum() is supposed to be for integer parameters and the second for floating-point ones, but the last-defined function is always the one that gets called, regardless of whether the parameters are integers or floats. I tried different casts but had no success.
I have, for example, these functions, but I cannot understand how to overload them:
def add(a, b):
    return a + b

def add(a: float, b: float):
    return a + b
I cannot use dispatch(), isinstance(), or any other modules.
Based on your comments, I'm guessing you looked at this geeksforgeeks article. If you look closely, it did not actually overload the function with a different number of arguments, because the last-defined function is always the one that remains defined.
At the end of the day, Python does not do function overloading like in C++ or Java. There is always only one definition for a function. You'll have to parse the inputs in order to decide the behavior within the function itself. I'm a little confused why you can't use isinstance but I suppose you can get around it with type.
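A tiny sketch of why that happens: def simply rebinds the name, so only the last definition survives (the bodies here are just illustrative):
def add(a, b):
    return int(a) + int(b)

def add(a: float, b: float):   # rebinds the name 'add'; the first version is gone
    return float(a) + float(b)

print(add(1, 2))   # 3.0 - the float version runs even for int arguments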
If you had a statically typed language, an add function would need a variant for each type of thing you want added. The underlying machine code for adding integers is different from that for floats, and each would need to be its own function.
Python isn't that way. In Python, + is translated into a call to the object's __add__ method, and the object itself decides what it means to add. Take your single sum function and you can add integers, add floats, and even concatenate strings.
>>> def sum(a, b):
... return a + b
...
>>> sum(1, 3)
4
>>> sum(1.1, 3.3)
4.4
>>> sum("foo", "bar")
'foobar'
How about adding million element arrays? It'll do that, too.
>>> import numpy as np
>>> a1 = np.ones((1000,1000), dtype=int)
>>> a2 = np.zeros((1000,1000), dtype=int)
>>> sum(a1, a2)
array([[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1],
...,
[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1]])
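The same mechanism extends to your own classes: define __add__ and the single sum function handles them too. Here's a small sketch with a hypothetical Money class (not from the question):
class Money:
    def __init__(self, cents):
        self.cents = cents
    def __add__(self, other):
        # '+' between two Money objects delegates to this method
        return Money(self.cents + other.cents)
    def __repr__(self):
        return f'Money({self.cents})'

def sum(a, b):
    return a + b

print(sum(Money(150), Money(250)))   # Money(400)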
Python does not support overloading. The best that you can do is remove any type annotations from the arguments and use if-else statements to change the behavior of the function depending on the input types.
For example:
def add(a, b):
    if type(a) == float and type(b) == float:
        pass # add floats
    elif type(a) == int and type(b) == int:
        pass # add ints
    else:
        assert False, 'input error, arguments (a,b) need to both be either a float or an int'
or
class integers():
    def __init__(self) -> None:
        pass

    @classmethod
    def add(cls, a, b):
        return int(a) + int(b)

class floats():
    def __init__(self) -> None:
        pass

    @classmethod
    def add(cls, a, b):
        return float(a) + float(b)
a_float_sum = floats.add(1.0, 2.0)
print(f'{a_float_sum=}')
an_int_sum = integers.add(1, 2)
print(f'{an_int_sum=}')
I am trying to code a recursive function in Python that generates all the lists of numbers < N whose sum equals N.
This is the code I wrote:
def fn(v, n):
    N = 5
    global vvi
    v.append(n)
    if len(v) > N:
        return
    if sum(v) >= 5:
        if sum(v) == 5: vvi.append(v)
    else:
        for i in range(n, N+1):
            fn(v, i)
This is the output I get:
vvi
Out[170]: [[1, 1, 1, 1, 1, 2, 3, 4, 5, 2, 3, 4, 5, 2, 3, 4, 5, 2, 3, 4, 5]]
I tried the same thing in C++ and it worked fine.
What you need to do is just formulate it as a recursive description and implement it. You want to prepend each singleton [j] to every list with sum N-j, unless N-j = 0, in which case you yield only the singleton itself. Translated into Python, this would be:
def glist(listsum, minelm=1):
    for j in range(minelm, listsum+1):
        if listsum-j > 0:
            for l in glist(listsum-j, minelm=j):
                yield [j]+l
        else:
            yield [j]

for l in glist(5):
    print(l)
The solution contains a mechanism that excludes permuted solutions by requiring the lists to be non-decreasing; this is done via the minelm argument, which limits the values allowed in the rest of the list. If you want to include permuted lists, you can disable the minelm mechanism by replacing the recursive call with glist(listsum-j), as sketched below.
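For illustration, that permutation-including variant might look something like this (a small sketch; it yields compositions of the number rather than partitions):
def glist_all_orders(listsum):
    # same idea, but without minelm, so permuted orderings are produced too
    for j in range(1, listsum+1):
        if listsum-j > 0:
            for l in glist_all_orders(listsum-j):
                yield [j]+l
        else:
            yield [j]

print(list(glist_all_orders(3)))   # [[1, 1, 1], [1, 2], [2, 1], [3]]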
As for your code, I don't really follow what you're trying to do. I'm sorry, but your code is not very clear (and that's a problem not only in Python; it matters even more in C).
First of all, it's a bad idea to return the result of a function via a global variable; returning results is what return is for, and in Python you also have yield, which is nice if you want to produce multiple elements as you go. For a recursive function it's even worse to return via a global variable (or even to use one), since you are running many nested invocations of the function but have only one global variable.
Also, you call the function fn and give it arguments named v and n. What does that actually tell you about the function and its arguments? At most that it's a function and that one of the arguments is probably a number. Not very useful if somebody (else) has to read and understand the code.
If you want a more elaborate answer about what's formally wrong with your code, you should probably include a minimal, complete, verifiable example, including the expected output (and perhaps the observed output).
You may want to reconsider the recursive solution and consider a dynamic programming approach:
def fn(N):
    # ways[s] holds every list found so far whose elements sum to s
    ways = {0: [[]]}
    for n in range(1, N+1):
        for i, x in enumerate(range(n, N+1)):
            # every list summing to i extends to a list summing to x = i + n
            for v in ways[i]:
                ways.setdefault(x, []).append(v + [n])
    return ways[N]
>>> fn(5)
[[1, 1, 1, 1, 1], [1, 1, 1, 2], [1, 2, 2], [1, 1, 3], [2, 3], [1, 4], [5]]
>>> fn(3)
[[1, 1, 1], [1, 2], [3]]
Using global variables and relying on side effects on input parameters is generally considered bad practice, and you should look to avoid both.
For example, I want to print out a board for tic tac toe that is initially
board = [[0]*3]*3
I want to use map to apply print() to each row, so that the output is
[0, 0, 0]
[0, 0, 0]
[0, 0, 0]
In Python 3, map returns an iterator instead of a list, so an example of adapting to this I found is
list(map(print, board))
Which gives the correct output. But I don't know what's going on here - can someone explain what is happening when you do
list(iterator)
?
The built-in list constructor is a common way of forcing iterators and generators to iterate fully in Python. When you call map, it only returns a lazy map object instead of actually evaluating the mapping, which is not what the author of your code snippet wanted.
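A quick demonstration of that laziness (a minimal sketch):
>>> board = [[0]*3 for _ in range(3)]
>>> m = map(print, board)
>>> m                # nothing has been printed yet
<map object at 0x...>
>>> list(m)          # forcing iteration runs print on every row
[0, 0, 0]
[0, 0, 0]
[0, 0, 0]
[None, None, None]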
However, using map just to print all the items of an iterable on separate lines is inelegant when you consider all the power that the print function itself holds in Python 3:
>>> board = [[0]*3]*3
>>> board[0] is board[1]
True
>>> "Uh oh, we don't want that!"
"Uh oh, we don't want that!"
>>> board = [[0]*3 for _ in range(3)]
>>> board[0] is board[1]
False
>>> "That's a lot better!"
"That's a lot better!"
>>> print(*board, sep='\n')
[0, 0, 0]
[0, 0, 0]
[0, 0, 0]
Additional Note: In Python 2, where print is a statement and not nearly as powerful, you still have at least two better options than using map:
Use a good old for-loop:
for row in board: print row
Import Python 3's print function from the __future__ module:
from __future__ import print_function
>>> list(map(print, board))
[0, 0, 0]
[0, 0, 0]
[0, 0, 0]
[None, None, None]
When you call list on an iterable, it extracts each element from the iterable. In this case, a side-effect of that is that the three rows are printed. The result of the print operation, though, is None. Thus, for each print performed, a None is added to the list. The last row above, consisting of the three Nones, is the actual list that was returned by list.
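To see where those None values come from, here's a tiny sketch:
>>> print(print([0, 0, 0]))   # print does its printing, then returns None
[0, 0, 0]
None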
Possible Duplicate:
“Least Astonishment” in Python: The Mutable Default Argument
def f(a, L=[]):
    L.append(a)
    return L

print(f(1, [1, 2]))
print(f(1))
print(f(2))
print(f(3))
I wonder why the later calls f(1), f(2), f(3) did not append to the list from the first call f(1, [1, 2]).
I guessed the result would be:
[1, 2, 1]
[1, 2, 1, 1]
[1, 2, 1, 1, 2]
[1, 2, 1, 1, 2, 3]
But the result is not this. I do not know why.
There are two different issues (better called concepts) fused into one problem statement.
The first one is covered by the SO question pointed out by agh. That thread gives a detailed explanation, and it would make no sense to explain it all again; for the purposes of this thread I will just say that functions are first-class objects, and their parameters and default values are bound when the function is defined. So the default parameters act much like static variables of the function (the closest you can get in languages which do not support first-class function objects).
The second issue is which list object the parameter L is bound to. When you pass a list explicitly, that passed-in list is what L is bound to. When the function is called without that argument, L is bound to a different list (the one created as the default), which of course is different from the one you passed in the first call. To make this more obvious, change your function as follows and run the samples again.
>>> def f(a, L=[]):
...     L.append(a)
...     print id(L)
...     return L
>>> print(f(1, [1, 2]))
56512064
[1, 2, 1]
>>> print(f(1))
51251080
[1]
>>> print(f(2))
51251080
[1, 2]
>>> print(f(3))
51251080
[1, 2, 3]
>>>
As you can see, the first call prints a different id for the parameter L than the subsequent calls do. If the lists are different, so is the behavior, and so is where the values end up being appended. Hopefully it makes sense now.
Why do you expect those results, when the function falls back to its own (initially empty) default list whenever you don't pass the second argument?
To get the results you want, you would have to use a closure or a global variable.
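For example, here's a closure-based sketch (not from the original question) that shares one list across calls and would produce the output you expected:
def make_f(L):
    # L is captured by the closure and shared by every call to f
    def f(a):
        L.append(a)
        return L
    return f

f = make_f([1, 2])
print(f(1))   # [1, 2, 1]
print(f(1))   # [1, 2, 1, 1]
print(f(2))   # [1, 2, 1, 1, 2]
print(f(3))   # [1, 2, 1, 1, 2, 3]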
I have a Python function that takes a list as a parameter. If I set the parameter's default value to an empty list like this:
def func(items=[]):
    print items
Pylint would tell me "Dangerous default value [] as argument". So I was wondering what is the best practice here?
Use None as a default value:
def func(items=None):
    if items is None:
        items = []
    print items
The problem with a mutable default argument is that it will be shared between all invocations of the function -- see the "important warning" in the relevant section of the Python tutorial.
I just encountered this for the first time, and my immediate thought is "well, I don't want to mutate the list anyway, so what I really want is to default to an immutable list so Python will give me an error if I accidentally mutate it." An immutable list is just a tuple. So:
def func(items=()):
    print items
Sure, if you pass it to something that really does want a list (eg isinstance(items, list)), then this'll get you in trouble. But that's a code smell anyway.
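For instance, a tiny sketch of what happens if the body does try to mutate that default:
def func(items=()):
    items.append(1)   # tuples have no append, so this fails loudly

func()
# AttributeError: 'tuple' object has no attribute 'append'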
For a mutable object as a default parameter in function and method declarations, the problem is that evaluation and creation happen at exactly the same moment: Python evaluates the function head, including the default values, at the moment the def statement is executed.
Most beginners assume that a new object is created on every call, but that's not correct! ONE object (in your example, a list) is created at the moment of declaration, not on demand when you call the function.
For immutable objects that's not a problem, because even though all calls share the same object, it's immutable and therefore its value remains the same.
By convention, you use the None object as the default to indicate that default initialization should be used; that initialization can then take place in the function body, which is naturally evaluated at call time.
In addition, and to better understand how Python works, here is my little themed snippet:
from functools import wraps

def defaultFactories(func):
    'wraps function to use factories instead of values for defaults in call'
    defaults = func.func_defaults
    @wraps(func)
    def wrapped(*args, **kwargs):
        # build fresh default values from the factories on every call
        func.func_defaults = tuple(default() for default in defaults)
        return func(*args, **kwargs)
    return wrapped

def f1(n, b=[]):
    b.append(n)
    if n == 1: return b
    else: return f1(n-1) + b

@defaultFactories
def f2(n, b=list):
    b.append(n)
    if n == 1: return b
    else: return f2(n-1) + b
>>> f1(6)
[6, 5, 4, 3, 2, 1, 6, 5, 4, 3, 2, 1, 6, 5, 4, 3, 2, 1, 6, 5, 4, 3, 2, 1, 6, 5, 4, 3, 2, 1, 6, 5, 4, 3, 2, 1]
>>> f2(6)
[1, 2, 3, 4, 5, 6]