Let us say I have a function like this:
def helloWorld(**args):
    for arg in args:
        print(args[arg])
To call this function is easy:
helloWorld(myMsg = 'hello, world')
helloWorld(anotherMessages = 'no, no this is hard')
But the hard part is that I want to name the arguments dynamically, from variables coming from a list or somewhere else. In other words, I'd want myMsg and anotherMessages to come from a list; the part I am clueless about and need help with is how to turn strings into keyword-argument names when calling a function.
list_of_variable_names = ['myMsg','anotherMessages']
for name in list_of_variable_names:
    helloWorld(name = 'ooops, this is not easy, how do I pass a variable name that is stored as a string in a list? No idea! help!')
You can create a dict using the variable as the key and then unpack it while passing it to the function:
list_of_variable_names = ['myMsg','anotherMessages']
for name in list_of_variable_names:
    helloWorld(**{name: '...'})
The ** syntax is dictionary unpacking.
So in your function, args (which is conventionally named kwargs) is a dictionary.
Therefore, you need to pass an unpacked dictionary to it, which is what is done when f(a=1, b=2) is called.
For instance:
kwargs = {'myMsg': "hello, world", 'anotherMessages': "no, no this is hard"}
helloWorld(**kwargs)
Then, you will get kwargs as a dictionary.
def f(**kwargs):
    for k, v in kwargs.items():
        print(k, v)
>>> kwargs = {'a': 1, 'b': 2}
>>> f(**kwargs)
a 1
b 2
If you want to do so, you can call the function once for every name as well, by creating a dictionary on the fly and unpacking it, as Moses suggested.
def f(**kwargs):
    print("call to f")
    for k, v in kwargs.items():
        print(k, v)

>>> for k, v in {'a': 1, 'b': 2}.items():
...     kwargs = {k: v}
...     f(**kwargs)
...
call to f
a 1
call to f
b 2
Related
Let's say we have a function that takes some arguments and a dict that is a superset of values required to invoke the function:
d = {"a": 1, "b": 2, "c": 3}
def foo(a, b):
    print(a, b)
# this won't work
foo(**d)

# this works but is tedious the more values there are
foo(**{k: v for k, v in d.items() if k not in ("c",)})
Is there a more elegant solution?
Instead of doing this:
def foo(a, b):
    print(a, b)
You could do this:
def foo(a, b, **kwargs):
    print(a, b)
Then the function would just ignore all the unneeded arguments.
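With that change, calling it with the full dict from the question is expected to just work; a minimal sketch:

d = {"a": 1, "b": 2, "c": 3}

def foo(a, b, **kwargs):
    print(a, b)

foo(**d)  # c lands in kwargs and is ignored, so this prints: 1 2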
However, in my view, there are some problems with this approach.
Sometimes you don't own the source code of the function. Of course, you can make lots of wrapper functions. If you do that manually, you'll have to change the signatures of your helper functions every time the library authors modify the function signature.
This solution can make some bugs harder to find. Maybe it's better to explicitly state that you're only using some of the elements from the dictionary.
You can still make it easier using the inspect module of the standard library.
a) Make a decorator that makes the function filter its keyword arguments.
For instance:
import inspect

def filter_kwargs(f):
    # getfullargspec replaces inspect.getargspec, which was removed in Python 3.11
    arg_names = inspect.getfullargspec(f).args
    def _f(*args, **kwargs):
        return f(*args, **{k: v for k, v in kwargs.items() if k in arg_names})
    return _f

# Use it like this (note the ** when passing the dict):
filter_kwargs(foo)(**a_dict)
b) You can create a function that transforms the arguments to fit the function:
def fit_for(f, kwargs):
    arg_names = inspect.getfullargspec(f).args
    return {k: v for k, v in kwargs.items() if k in arg_names}

foo(**fit_for(foo, a_dict))
Optionally, you can expand this function to also take into account the *args and **kwargs in the original function.
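For instance, a rough sketch of that extension (this is my own variant, not part of the original answer) could skip the filtering entirely when the target already accepts **kwargs:

import inspect

def fit_for(f, kwargs):
    spec = inspect.getfullargspec(f)
    if spec.varkw is not None:
        # f already accepts arbitrary keyword arguments, so pass everything through
        return dict(kwargs)
    # otherwise keep only the names f can actually accept by keyword
    allowed = set(spec.args) | set(spec.kwonlyargs)
    return {k: v for k, v in kwargs.items() if k in allowed}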
If this is a recurring pattern in your code, you can just pass the dict as a single argument.
Also, as @chepner pointed out in the comments, consider creating a class.
You could use keyword arguments:
def foo(**kwarg):
    print(kwarg['a'], kwarg['b'])

foo(**d)
output:
1 2
Assuming you do not want to modify foo's signature, you can use foo.__code__.co_varnames to get its argument names if you are using CPython (slicing to co_argcount keeps only the parameters, not other local variables):
foo(**{k: d[k] for k in foo.__code__.co_varnames[:foo.__code__.co_argcount]})
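If you'd rather not rely on CPython internals, a similar filter can be built on inspect.signature; this is a sketch of my own, not part of the answer above:

import inspect

def keyword_params(f):
    # Names of parameters that f will accept as keyword arguments
    return {
        name
        for name, p in inspect.signature(f).parameters.items()
        if p.kind in (p.POSITIONAL_OR_KEYWORD, p.KEYWORD_ONLY)
    }

foo(**{k: v for k, v in d.items() if k in keyword_params(foo)})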
You can do something like:
d = {"a": 1, "b": 2, "c": 3}
def foo(a, b, *arg, **kwarg):
print(a, b)
foo(**d)
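Here the extra key c is absorbed by **kwarg (and *arg stays empty), so foo(**d) prints 1 2.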
As the title suggests, I am trying to update a dictionary using the update() method, as in the following code block:
for key, val in my_dict.items():
    new_dict.update(key=val)
If my_dict = {'a': 1, 'b': 2} I would expect the result to be that new_dict = {'a': 1, 'b': 2} (assuming of course that new_dict is already defined). However, when executed, I instead get new_dict = {'key': 2}.
What am I doing wrong?
Keyword arguments always use the fixed identifier as the key. Use keyword expansion instead.
new_dict.update(**{key: val})
Or if new_dict really is a dict, just pass the dict itself.
new_dict.update({key: val})
Here is code approximating the update method, so you can see why it behaves the way it does (it is not the real source code, just an illustration):
def update(self, other_dict={}, **kwargs):
    for k, v in other_dict.items():
        self[k] = v
    for k, v in kwargs.items():
        self[k] = v
So if you call new_dict.update(key=val), kwargs will be equal to {"key": val}.
You need to pass your arguments inside a dictionary (or use keyword expansion, as above) if you want to dynamically set the new keys.
update accepts keyword arguments, another dictionary, or an iterable of key/value pairs. You can just pass your dictionary as the first argument:
new_dict.update(my_dict)
update is designed to work with several keys at once. If you just want to set a single value, you can simply assign it:
new_dict[key] = value
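Applied to the loop from the question, either of these gives the expected result (a small sketch):

my_dict = {'a': 1, 'b': 2}
new_dict = {}

# Option 1: unpack each pair so the string key is used, not the literal name 'key'
for key, val in my_dict.items():
    new_dict.update(**{key: val})

# Option 2: skip the loop entirely
new_dict.update(my_dict)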
How can one define the initial contents of the keyword arguments dictionary? Here's an ugly solution:
def f(**kwargs):
    ps = {'c': 2}
    ps.update(kwargs)
    print(str(ps))
Expected behaviour:
f(a=1, b=2) => {'c': 2, 'a': 1, 'b': 2}
f(a=1, b=2, c=3) => {'c': 3, 'a': 1, 'b': 2}
Yet, I would like to do something a little bit more in the lines of:
def f(**kwargs = {'c': 2}):
    print(str(kwargs))
Or even:
def f(c=2, **kwargs):
    print(str(???))
Ideas?
First to address the issues with your current solution:
def f(**kwargs):
    ps = {'c': 2}
    ps.update(kwargs)
    print(str(ps))
This creates a new dictionary and then has to take the time to update it with all the values from kwargs. If kwargs is large, that can be fairly inefficient, and as you pointed out it is a bit ugly.
Obviously, your second option isn't valid syntax.
As for the third option, an implementation of that was already given by Austin Hastings.
If you are using kwargs rather than named keyword arguments with default values, there's probably a reason (or at least there should be): for example, an interface that defines c while not explicitly requiring a and b might not be desirable, even though the implementation may need a value for c.
A simple implementation would take advantage of dict.setdefault to update the values of kwargs if and only if the key is not already present:
def f(**kwargs):
    kwargs.setdefault('c', 2)
    print(str(kwargs))
Now, as mentioned by the OP in a comment, the list of default values may be quite long; in that case you can have a loop set the defaults:
def f(**kwargs):
    defaults = {
        'c': 2,
        ...
    }
    for k, v in defaults.items():
        kwargs.setdefault(k, v)
    print(str(kwargs))
There are a couple of performance issues here as well. First, the defaults dict literal gets created on every call of the function. This can be improved by moving the defaults outside of the function, like so:
DEFAULTS = {
    'c': 2,
    ...
}

def f(**kwargs):
    for k, v in DEFAULTS.items():
        kwargs.setdefault(k, v)
    print(str(kwargs))
Secondly, in Python 2, dict.items returns a copy of the (key, value) pairs, so dict.iteritems or dict.viewitems lets you iterate over the contents without copying and is therefore more efficient. In Python 3, dict.items is a view, so there's no issue there.
DEFAULTS = {
    'c': 2,
    ...
}

def f(**kwargs):
    for k, v in DEFAULTS.iteritems():  # Python 2 optimization
        kwargs.setdefault(k, v)
    print(str(kwargs))
If efficiency and compatibility are both concerns, you can use the six library for compatibility as follows:
from six import iteritems

DEFAULTS = {
    'c': 2,
    ...
}

def f(**kwargs):
    for k, v in iteritems(DEFAULTS):
        kwargs.setdefault(k, v)
    print(str(kwargs))
Additionally, on every iteration of the for loop, a lookup of the setdefault method of kwargs has to be performed. If you truly have a very large number of default values, a micro-optimization is to assign the method to a variable to avoid the repeated lookup:
from six import iteritems

DEFAULTS = {
    'c': 2,
    ...
}

def f(**kwargs):
    setdefault = kwargs.setdefault
    for k, v in iteritems(DEFAULTS):
        setdefault(k, v)
    print(str(kwargs))
Lastly, if the number of default values is expected to be larger than the number of kwargs, it would likely be more efficient to update the defaults with the kwargs. To do this, you can't use the global DEFAULTS dict, or it would be mutated on every run of the function, so the defaults need to be moved back inside the function. This leaves us with the following:
def f(**kwargs):
    defaults = {
        'c': 2,
        ...
    }
    defaults.update(kwargs)
    print(str(defaults))
Enjoy :D
A variation on your first approach, possible on Python 3.5+, is to define the defaults and expand the provided arguments in a single dict display, which also lets you rebind kwargs on the same line, e.g.:
def f(**kwargs):
    # Start with the defaults, then expand kwargs, which will overwrite the
    # defaults for any keys it has
    kwargs = {'c': 2, **kwargs}
    print(str(kwargs))
Another approach (that won't produce an identical string) is to create a mapping that behaves the same way using collections.ChainMap (3.3+):
from collections import ChainMap

def f(**kwargs):
    # Chain the overrides over the defaults
    # ({'c': 2} could be defined outside f to avoid recreating it each call)
    ps = ChainMap(kwargs, {'c': 2})
    print(str(ps))
Like I said, that won't produce the same string output, but unlike the other solutions, it won't become more and more costly as the number of passed keyword arguments increases (it doesn't have to copy them at all).
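To illustrate the fallback behaviour (my own example values, not from the answer):

from collections import ChainMap

defaults = {'c': 2}
provided = {'a': 1, 'c': 3}
ps = ChainMap(provided, defaults)

print(ps['a'])  # 1, found in the provided kwargs
print(ps['c'])  # 3, a provided value shadows the default
print(dict(ChainMap({}, defaults)))  # {'c': 2}, the default is used when nothing is passed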
Have you tried:
def f(c=2, **kwargs):
    kwargs['c'] = c
    print(kwargs)
Update
Barring that, you can use inspect to access the code object, and get the keyword-only args from that, or even all the args:
import inspect

def f(a, b, c=2, *, d=1, **kwargs):
    code_obj = inspect.currentframe().f_code
    nposargs = code_obj.co_argcount
    nkwargs = code_obj.co_kwonlyargcount
    localvars = locals()
    kwargs.update({k: localvars[k] for k in code_obj.co_varnames[:nposargs + nkwargs]})
    print(kwargs)

g = f
g('a', 'b')
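For reference, running that snippet should print {'a': 'a', 'b': 'b', 'c': 2, 'd': 1}: the positional and keyword-only parameters, defaults included, are copied into kwargs.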
I have, for example, 3 functions, with required arguments (some arguments are shared by the functions, in different order):
def function_one(a, b, c, d, e, f):
    value = a*b/c ...
    return value

def function_two(b, c, e):
    value = b/e ..
    return value

def function_three(f, a, c, d):
    value = a*f ...
    return value
If I have the next dictionary:
argument_dict = {'a':3,'b':3,'c':23,'d':6,'e':1,'f':8}
Is it possible to call the functions in this way?
value_one = function_one(**argument_dict)
value_two = function_two(**argument_dict)
value_three = function_three(**argument_dict)
Not the way you have written those functions, no: they are not expecting the extra arguments, so they will raise a TypeError.
If you define all the functions as also expecting **kwargs, things will work as you want.
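For example, a minimal sketch of that change for function_two (the body is just a placeholder):

def function_two(b, c, e, **kwargs):
    # the extra keys in argument_dict (a, d, f) are absorbed by **kwargs and ignored
    value = b / e
    return value

argument_dict = {'a': 3, 'b': 3, 'c': 23, 'd': 6, 'e': 1, 'f': 8}
value_two = function_two(**argument_dict)  # 3.0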
I assume what you're trying to do is create a function with an undefined number of arguments. You can do this using *args (positional arguments) or **kwargs (keyword arguments, of the form foo='bar'). For example:
For *args:
def f(*args): print(args)
f(1, 2, 3)
(1, 2, 3)
Then for **kwargs:
def f2(**kwargs): print(kwargs)
f2(a=1, b=3)
{'a': 1, 'b': 3}
Let's try a couple more things.
def f(my_dict): print(my_dict['a'])
f(dict(a=1, b=3, c=4))
1
It works! So you could do it that way and complement it with **kwargs if you don't know what else the function could receive.
Of course you could do:
argument_dict = {'a':1, 'b':3, 'c':4}
f(argument_dict)
1
So you don't have to use kwargs and args all the time. It all depends on the level of abstraction of the object you're passing to the function. In your case, you're passing a dictionary, so you can handle it directly.
Is it possible to pass more than one **kwarg parameter to a function in Python?
foo(self, **kwarg) # Want to pass one more keyword argument here
You only need one keyword-arguments parameter; it receives any number of keyword arguments.
def foo(**kwargs):
    return kwargs
>>> foo(bar=1, baz=2)
{'baz': 2, 'bar': 1}
I would write a function to do this for you
def partition_mapping(mapping, keys):
    """Return two dicts. The first one has the key, value pairs for any key from
    the keys argument that is in mapping, and the second one has the remaining
    keys from mapping.
    """
    # This could be modified to take two sequences of keys and map them into two dicts,
    # optionally returning a third dict for the leftovers
    d1 = {}
    d2 = {}
    keys = set(keys)
    for key, value in mapping.items():
        if key in keys:
            d1[key] = value
        else:
            d2[key] = value
    return d1, d2
You can then use it like this
def func(**kwargs):
    kwargs1, kwargs2 = partition_mapping(kwargs, ("arg1", "arg2", "arg3"))
This will split them into two separate dicts. It doesn't make sense for Python to provide this behavior natively, as you would have to manually specify which dict each argument ends up in anyway. Another alternative is to just name them in the function definition:
def func(arg1=None, arg2=None, arg3=None, **kwargs):
    pass  # do stuff
Now you have one dict for the ones you don't specify and regular local variables for the ones you do want to name.
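A quick illustration of what the split from partition_mapping looks like (my own example values):

kwargs1, kwargs2 = partition_mapping(
    {'arg1': 1, 'arg2': 2, 'other': 3},
    ("arg1", "arg2", "arg3"),
)
print(kwargs1)  # {'arg1': 1, 'arg2': 2}
print(kwargs2)  # {'other': 3}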
You cannot. But the keyword-arguments parameter is a dictionary, and when calling the function you can pass as many keyword arguments as you like; they will all be captured in the single **kwarg. Can you explain a scenario where you would need more than one of those **kwarg parameters in the function definition?
>>> def fun(a, **b):
...     print(list(b.keys()))
...     # b is a dict here; you can do whatever you want with it.
...
>>> fun(10, key1=1, key2=2, key3=3)
['key1', 'key2', 'key3']
Maybe this helps. Can you clarify how the keyword arguments are supposed to be divided into the two dicts?
>>> def f(kw1, kw2):
...     print(kw1)
...     print(kw2)
...
>>> f(dict(a=1, b=2), dict(c=3, d=4))
{'a': 1, 'b': 2}
{'c': 3, 'd': 4}