I have a function with default values. Based on a condition I would like to either use a specified value or the default value. Example:
def f(a=1, b=2, c=3):
    print(a, b, c)
I would like to have the following logic:
if condition:
    f(a=4, b=5, c=6)
else:
    f(a=4, c=6)
But I would like to understand if it's possible to do this using a ternary operator e.g. something like:
f(a=4, b=5 if condition else <default value>, c=6)
Is there a way?
Edit for clarification:
I would like to tell the function to take the default value in case the condition is false without providing the default value in the if-else statement. So in the example above the output should be 4 5 6 if true and 4 2 6 if false.
The only way I can think of would be:
func("arg", "arg2", *(("default",) if condition == True else ()))
As far as I know it's not possible as a one-liner. Named arguments are simply a dict, and the default value is used when the key is not present in that dict.
So you can also write it as
if condition:
    f(**{"a": 4, "b": 5, "c": 6})
else:
    f(**{"a": 4, "c": 6})
The kwargs approach is commonly used to add dynamic arguments:
kwargs = {"a": 4, "c": 6}
if condition:
    kwargs["b"] = 5
f(**kwargs)
But I am afraid there is no single expression which conditionally puts a key into a dict.
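That said, if a single expression at the call site is really wanted, one option (Python 3.5+) is to unpack a conditionally built dict in the call itself; a minimal sketch:
def f(a=1, b=2, c=3):
    print(a, b, c)

condition = False
# Unpack {"b": 5} only when the condition holds; otherwise unpack an empty
# dict, so f falls back to its default b=2. Prints 4 2 6 here, and would
# print 4 5 6 if condition were True.
f(a=4, c=6, **({"b": 5} if condition else {}))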
Related
In most languages, including Python, the truth value of a variable can be used implicitly in conditional expressions, i.e.:
is_selected = True
if is_selected:
    # do something
Is it possible to create functions whose arguments behave similarly?
For example, if I had a sort function named asort that takes the argument ascending
The standard way of calling it would be:
asort(list, ascending=True)
What I'm envisioning is calling the function like:
asort(list, ascending)
Edit: The behavior I'm looking for is a little different than default arguments.
Maybe sorting is a bad example, but let's stay with it: suppose my asort function has a default of False for the ascending argument
def asort(list, ascending=False):
    if ascending:  # no need for ascending=True here
        ...  # sort ascending
    else:
        ...  # sort descending
If I want to sort ascending, I must use asort(alist, ascending=True)
What I'm wondering is if there is a way to call it with the non-default ascending, as asort(alist, ascending).
Yes, you can define a function with default arguments:
def f(a=True, b=False, c=1):
    print(a, b, c)
f() # True False 1
f(a=False) # False False 1
f(a=False, c=2) # False False 2
One addition about defaults: don't use them for mutables:
def danger(k=[]):
    k.append(1)
    print(k)
danger()
danger()
danger()
danger()
danger()
danger()
Output:
[1]
[1, 1]
[1, 1, 1]
[1, 1, 1, 1]
[1, 1, 1, 1, 1]
[1, 1, 1, 1, 1, 1]
Fix:
def danger(k=None):
    if k is None:  # create a fresh list per call instead of sharing one default
        k = []
    k.append(1)
    print(k)
danger()
danger()
danger()
danger()
danger()
danger()
Output:
[1]
[1]
[1]
[1]
[1]
[1]
More:
"Least Astonishment" and the Mutable Default Argument
What is the pythonic way to avoid default parameters that are empty lists?
http://docs.python-guide.org/en/latest/writing/gotchas/
Quote: "A new list is created once when the function is defined, and the same list is used in each successive call.
Python’s default arguments are evaluated once when the function is defined, not each time the function is called (like it is in say, Ruby). This means that if you use a mutable default argument and mutate it, you will and have mutated that object for all future calls to the function as well."
Python has the option of having default arguments in its functions.
Default values indicate that the function argument will take that value if no argument value is passed during the function call. The default value is assigned using the assignment (=) operator, in the form keywordname=value.
So for example
def asort(list, ascending=True):
    ...  # your function body
will make it so that every time you call the function asort, ascending defaults to True. To call it you just need to write:
asort(list)
And if you need to indicate that ascending is False, you need to call it like:
asort(list, ascending=False)
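For the sorting example, a possible asort along these lines (just a sketch; the real body was not shown, and the parameter is renamed to avoid shadowing the built-in list) could delegate to sorted():
def asort(a_list, ascending=True):
    # sorted() returns a new list; reverse=True flips it to descending order
    return sorted(a_list, reverse=not ascending)

print(asort([3, 1, 2]))                   # [1, 2, 3]
print(asort([3, 1, 2], ascending=False))  # [3, 2, 1]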
I'm trying to write a function spread in Python 3.6 (I cannot use any newer release), and, so far, I've got something that looks like this:
d = {"a": 1, "b": 2, "c": 3}
a, b, c = spread(d, ['a', 'b', 'c'])
a
>> 1
b
>> 2
c
>> 3
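For context, a minimal spread along the lines described might look like this (a sketch, not the actual implementation):
def spread(d, keys):
    # Return the values in the order the keys are listed, which is exactly
    # why the left-hand side of the assignment has to match that order.
    return tuple(d[k] for k in keys)

d = {"a": 1, "b": 2, "c": 3}
a, b, c = spread(d, ['a', 'b', 'c'])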
The problem is: there is a kind of duplication, since the order of the names on the left side must match the keys list in the function's second argument for it to make sense. Change the order of the keys list, and variable a will hold a different value than d['a']. I need to keep consistency, either by
a, b, c = spread(d) # how?
or spread(d, ???). I'm not considering initializing a, b, c with None and then passing them in as a list.
Any thoughts or leads on how to approach this? Is it even possible?
Thanks!
No, this isn't really possible. You can't have
a, b, c = spread(d)
and
a, c, b = spread(d)
give the same value to b. This is because the right side of an assignment statement is evaluated first. So spread executes and returns its values before your code knows which order you put them in on the left.
Some googling leads me to believe that by "spread-like syntax for dicts", you're looking for the **dict syntax. See What does ** (double star/asterisk) and * (star/asterisk) do for parameters?
not very pretty, but you can sort of get there doing:
def f1(a, b, c, **_):
    print(a)
    print(b)
    print(c)
d = {"a": 1, "b": 2, "c": 3}
f1(**d)
very different semantics, but posted in the hope it'll inspire something!
as per @phhu's comment, ** in the definition of f1 is a catch-all keyword argument specifier telling Python that all unmatched keyword arguments should be put into a dictionary of the given name, _ in my case. Calling as f1(**d) says to unpack the specified dictionary into the function's parameters.
hence if it was used like:
e = {"a": 1, "b": 2, "c": 3, "extra": 42}
f1(**e)
then inside f1 the _ variable would be set to {"extra": 42}. I'm using _ because this identifier is used across a few languages to indicate a throwaway/placeholder variable name, i.e. something that is not expected to be used later.
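For example, a hypothetical variant of f1 that also prints the catch-all dict:
def f1(a, b, c, **_):
    print(a, b, c)
    print(_)  # whatever keyword arguments were not matched by a, b or c

e = {"a": 1, "b": 2, "c": 3, "extra": 42}
f1(**e)  # prints: 1 2 3  then  {'extra': 42}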
globals().update(d) does what you ask, but...
It works in the global scope only; locals() is not guaranteed to return a writable dictionary.
It impairs debuggability of your code. If one of the variables set this way ends up with an unexpected value, no search will show you that this is the place the variable is being set.
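A minimal demonstration at module scope (with the caveats above):
d = {"a": 1, "b": 2, "c": 3}
globals().update(d)  # injects a, b and c into the module's namespace
print(a, b, c)       # 1 2 3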
You could assign the variables to the result of a values() call:
>>> d = {"a": 1, "b": 2, "c": 3}
>>> a,b,c = d.values()
>>> a
1
>>> b
2
>>> c
3
I don't recommend doing this for versions of Python where dict ordering is not guaranteed, but luckily this should work in 3.6 and above.
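To illustrate the ordering caveat (a small sketch):
d = {"b": 2, "a": 1, "c": 3}  # same keys, different insertion order
a, b, c = d.values()
print(a, b, c)  # 2 1 3 -- the names no longer line up with the keys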
Can anyone explain the difference between unpacking a dictionary with a single asterisk and with a double asterisk? You can mention their difference when used in function parameters, but only if it is relevant here, which I don't think it is. However, there may be some relevance, because they share the same asterisk syntax.
def foo(a, b):
    return a + b

tmp = {1: 2, 3: 4}
foo(*tmp)   # you get 4
foo(**tmp)  # TypeError: keywords must be strings. Why does it bother to check the type of the keyword?
Besides, why is a dictionary key not allowed to be a non-string when passed as function arguments in THIS situation? Are there any exceptions? Why did they design Python this way; is it because the compiler can't deduce the types here, or something?
When dictionaries are iterated like lists, the iteration goes over their keys, for example
for key in tmp:
    print(key)
is the same as
for key in tmp.keys():
    print(key)
in this case, unpacking as *tmp is equivalent to *tmp.keys(), ignoring the values. If you want to use the values you can use *tmp.values().
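A quick way to see the difference:
tmp = {1: 2, 3: 4}
print(*tmp)           # 1 3 -- same as print(*tmp.keys())
print(*tmp.values())  # 2 4
print(*tmp.items())   # (1, 2) (3, 4)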
The double asterisk is used when you define a function with keyword parameters, such as
def foo(a, b):
or
def foo(**kwargs):
Here you can store the parameters in a dictionary and pass it as **tmp. In the first case the keys must be strings matching the names of the parameters defined in the function signature. In the second case you can work with kwargs as a dictionary inside the function.
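For instance (string keys matching the parameter names in the first case, a catch-all **kwargs in the second):
def foo(a, b):
    return a + b

def bar(**kwargs):
    return kwargs  # kwargs is a plain dict inside the function

tmp = {"a": 1, "b": 2}
print(foo(**tmp))  # 3
print(bar(**tmp))  # {'a': 1, 'b': 2}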
def foo(a, b):
    return a + b

tmp = {1: 2, 3: 4}
foo(*tmp)   # you get 4
foo(**tmp)
In this case:
foo(*tmp) means foo(1, 3)
foo(**tmp) means foo(1=2, 3=4), which will raise an error since 1 can't be an argument name. Argument names must be strings and (thanks @Alexander Reynolds for pointing this out) must start with an underscore or an alphabetical character; in other words, an argument name must be a valid Python identifier. This means you can't even do something like this:
def foo(1=2, 3=4):
    <your code>
or
def foo('1'=2, '3'=4):
    <your code>
See python_basic_syntax for more details.
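To illustrate: with string keys that are valid identifiers the keyword unpacking works, while non-string keys raise a TypeError:
def foo(a, b):
    return a + b

print(foo(**{"a": 2, "b": 4}))  # 6

try:
    foo(**{1: 2, 3: 4})
except TypeError as exc:
    print(exc)  # keywords must be strings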
This is argument unpacking in a function call.
>>> def add(a=0, b=0):
...     return a + b
...
>>> d = {'a': 2, 'b': 3}
>>> add(**d)  # corresponds to add(a=2, b=3)
5
For single *,
>>> def add(a=0, b=0):
...     return a + b
...
>>> d = {'a': 2, 'b': 3}
>>> add(*d)  # corresponds to add(a='a', b='b')
ab
Learn more here.
I think the ** double asterisk in a function parameter and in dictionary unpacking can be understood intuitively in this way:
# suppose you have this function
def foo(a, **b):
    print(a)
    for x in b:
        print(x, "...", b[x])

# suppose you call this function in the following form
foo(whatever, m=1, n=2)

# the m=1 syntax actually means assign the parameter by name, like foo(a=whatever, m=1, n=2)
# so you can also do foo(whatever, **{"m": 1, "n": 2})

# the reason for this syntax is that the call actually does
#     **b  is  m=1, n=2   (something like a pattern matching mechanism)
# so b is {"m": 1, "n": 2}; note that "m" and "n" are now in string form

# the function is actually this:
def foo(a, **b):  # b = {"m": 1, "n": 2}
    print(a)
    for x in b:  # for x in b.keys(), thanks to @vlizana's answer
        print(x, "...", b[x])
All the syntax makes sense now. And it is the same for the single asterisk. It is only worth noting that if you use a single asterisk to unpack a dictionary, you are actually trying to unpack it like a list, and only the keys of the dictionary are unpacked.
[https://docs.python.org/3/reference/expressions.html#calls]
A consequence of this is that although the *expression syntax may appear after explicit keyword arguments, it is processed before the keyword arguments (and any **expression arguments – see below). So:
def f(a, b):
    print(a, b)

f(b=1, *(2,))  # prints: 2 1
f(a=1, *(2,))
# Traceback (most recent call last):
#   File "<stdin>", line 1, in <module>
# TypeError: f() got multiple values for keyword argument 'a'
f(1, *(2,))    # prints: 1 2
Let's say I have a method definition like this:
def myMethod(a, b, c, d, e):
Then, I have a variable and a tuple like this:
myVariable = 1
myTuple = (2, 3, 4, 5)
Is there a way I can explode the tuple so that I can pass its members as separate parameters? Something like this (although I know this won't work as the entire tuple is considered the second parameter):
myMethod(myVariable, myTuple)
I'd like to avoid referencing each tuple member individually if possible...
You are looking for the argument unpacking operator *:
myMethod(myVariable, *myTuple)
From the Python documentation:
The reverse situation occurs when the arguments are already in a list or tuple but need to be unpacked for a function call requiring separate positional arguments. For instance, the built-in range() function expects separate start and stop arguments. If they are not available separately, write the function call with the *-operator to unpack the arguments out of a list or tuple:
>>> range(3, 6) # normal call with separate arguments
[3, 4, 5]
>>> args = [3, 6]
>>> range(*args) # call with arguments unpacked from a list
[3, 4, 5]
In the same fashion, dictionaries can deliver keyword arguments with the **-operator:
>>> def parrot(voltage, state='a stiff', action='voom'):
... print "-- This parrot wouldn't", action,
... print "if you put", voltage, "volts through it.",
... print "E's", state, "!"
...
>>> d = {"voltage": "four million", "state": "bleedin' demised", "action": "VOOM"}
>>> parrot(**d)
-- This parrot wouldn't VOOM if you put four million volts through it. E's bleedin' demised !
So, Python functions can return multiple values. It struck me that it would be convenient (though a bit less readable) if the following were possible.
a = [[1, 2], [3, 4]]

def cord():
    return 1, 1

def printa(y, x):
    print(a[y][x])

printa(cord())
...but it's not. I'm aware that you can do the same thing by dumping both return values into temporary variables, but it doesn't seem as elegant. I could also rewrite the last line as "printa(cord()[0], cord()[1])", but that would execute cord() twice.
Is there an elegant, efficient way to do this? Or should I just see that quote about premature optimization and forget about this?
printa(*cord())
The * here is an argument expansion operator... well I forget what it's technically called, but in this context it takes a list or tuple and expands it out so the function sees each list/tuple element as a separate argument.
It's basically the reverse of the * you might use to capture all non-keyword arguments in a function definition:
def fn(*args):
    # args is now a tuple of the non-keyword arguments
    print(args)
fn(1, 2, 3, 4, 5)
prints (1, 2, 3, 4, 5)
fn(*[1, 2, 3, 4, 5])
does the same.
Try this:
>>> def cord():
...     return (1, 1)
...
>>> def printa(y, x):
...     print(a[y][x])
...
>>> a = [[1, 2], [3, 4]]
>>> printa(*cord())
4
The star basically says "use the elements of this collection as positional arguments." You can do the same with a dict for keyword arguments using two stars:
>>> a = {'a': 2, 'b': 3}
>>> def foo(a, b):
...     print(a, b)
...
>>> foo(**a)
2 3
Actually, Python doesn't really return multiple values; it returns one value, which can be several values packed into a tuple. That means you need to "unpack" the returned value in order to get at the multiple values.
A statement like
x,y = cord()
does that, but directly using the return value as you did in
printa(cord())
doesn't; that's why you need to use the asterisk. Perhaps a nice term for it might be "implicit tuple unpacking" or "tuple unpacking without assignment".
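A small sketch of that equivalence:
a = [[1, 2], [3, 4]]

def cord():
    return 1, 1      # packed into the single tuple (1, 1)

result = cord()
print(type(result))  # <class 'tuple'>

x, y = cord()        # explicit tuple unpacking via assignment
print(a[x][y])       # 4, the same as printa(*cord()) above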