I have several layers of function calls, passing around a common dictionary of keyword arguments:
def func1(**qwargs):
    func2(**qwargs)
    func3(**qwargs)
I would like to supply some default arguments in some of the subsequent function calls, something like this:
def func1(**qwargs):
    func2(arg=qwargs.get("arg", default), **qwargs)
    func3(**qwargs)
The problem with this approach is that if arg is inside qwargs, a TypeError is raised with "got multiple values for keyword argument".
I don't want to set qwargs["arg"] to default, because then func3 would get this argument as well. I could make a copy.copy of qwargs and set "arg" in the copy, but qwargs could contain large data structures that I don't want to copy (or maybe copy.copy wouldn't copy them, only copy.deepcopy would?).
What's the pythonic thing to do here?
Just build and use another dict for the purpose of calling func2, leaving the original alone for the later call to func3:
def func1(**qwargs):
    d = dict(arg=default)
    d.update(qwargs)
    func2(**d)
    func3(**qwargs)
This is if you want a setting for arg in qwargs to override the default. Otherwise (if you want default to override any possible setting for arg in qwargs):
def func1(**qwargs):
    d = dict(qwargs, arg=default)
    func2(**d)
    func3(**qwargs)
since the keyword-argument to dict overrides the value in the positional argument, if any.
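A quick illustration of the two precedence rules, with made-up values:
default = 0
qwargs = {"arg": 5, "other": "x"}

d = dict(arg=default)
d.update(qwargs)
print(d)                       # {'arg': 5, 'other': 'x'} -- the value from qwargs wins

d = dict(qwargs, arg=default)
print(d)                       # {'arg': 0, 'other': 'x'} -- the explicit default wins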
To create a new dict with the same keys and values you can use
newdict = dict(qwargs)
If qwargs doesn't contain very many keys, that's cheap.
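To address the copying worry from the question: dict(qwargs), like copy.copy, is a shallow copy, so the values themselves (including any large data structures) are shared rather than duplicated. A rough illustration:
import copy

big = list(range(10**6))
qwargs = {"data": big, "arg": 1}

shallow = dict(qwargs)           # new dict, same value objects
assert shallow["data"] is big    # the big list is not duplicated

deep = copy.deepcopy(qwargs)     # only deepcopy would copy the list itself
assert deep["data"] is not big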
If possible, you could also rewrite the functions to take a single dict argument instead of separate keyword arguments.
Related
I have a function that updates 4 dictionaries by priority.
When testing, I found out that in some cases not all 4 will be provided.
The function of course fails because I am calling update with a NoneType.
@staticmethod
def create_configuration(layer1, layer2, layer3, layer4):
    configuration = {}
    configuration.update(layer4)
    configuration.update(layer3)
    configuration.update(layer2)
    configuration.update(layer1)
    return configuration
I tried setting default values in the function signature, such as layer3={} and layer3=dict(), but either way, when I run it, the dictionary is still a NoneType.
Is there a more elegant way to do it, rather than looping over the variables and setting them to an empty dict if they are NoneType?
Many options.
The reason your default arguments do nothing is that you do in fact pass an argument for those parameters; it just happens to be None. So the default is never used.
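For instance (a made-up illustration of the difference between omitting an argument and passing None):
def show(layer={}):
    print(layer)

show()        # {}   -- the default is used because the argument was omitted
show(None)    # None -- the default is NOT used; None was passed explicitly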
One option: fix the code that calls this function. If this function expects four dicts and gets None instead of a dict, then something is wrong on the calling side.
But there is nothing special about needing four dicts in this code. It could instead be:
def combines_dicts(*dicts):
    combined = {}
    for d in dicts:
        combined.update(d)
    return combined
Now the calling code could give two, or five, arguments if it had that many dicts.
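For example, a hypothetical call site would simply pass only the dicts it actually has:
config = combines_dicts(layer4, layer2, layer1)   # layer3 just isn't passed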
You could also fix it using if:
if layer4:
    configuration.update(layer4)
Et cetera.
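Putting those if checks together, the whole function might look something like this (a sketch; defaults of None are added so that missing layers can simply be omitted):
def create_configuration(layer1=None, layer2=None, layer3=None, layer4=None):
    configuration = {}
    for layer in (layer4, layer3, layer2, layer1):
        if layer:                       # skip layers that are None (or empty)
            configuration.update(layer)
    return configuration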
Under normal circumstances one calls a function with its default arguments by omitting those arguments. However if I'm generating arguments on the fly, omitting one isn't always easy or elegant. Is there a way to use a function's default argument explicitly? That is, to pass an argument which points back to the default argument.
So something like this except with ~use default~ replaced with something intelligent.
def function(arg='default'):
    print(arg)

arg_list = ['not_default', ~use default~]

for arg in arg_list:
    function(arg=arg)

# output:
# not_default
# default
I don't know if it's even possible, and given the term "default argument", all my searches turn up is beginners' tutorials. If this functionality is not supported, that's OK too; I'd just like to know.
Unfortunately there is no such feature in Python. There are workarounds, but they're not very appealing.
The simple and popular pattern is to move the default into the function body:
def function(arg=None):
    if arg is None:
        arg = 'default'
    ...
Now you can either omit the argument or pass arg=None explicitly to take on the default value.
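With that pattern (and assuming the body goes on to print arg, as in the question's version), the loop from the question can pass None to mean "use the default":
arg_list = ['not_default', None]

for arg in arg_list:
    function(arg=arg)
# prints:
# not_default
# default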
There is no general purpose way to omit an argument; you can specialize to particular functions by explicitly passing the appropriate default value, but otherwise, your only option is to fail to pass the argument.
The closest you could come is to replace your individual values with tuples or dicts that omit the relevant argument, then unpack them at call time. So for your example, you'd change arg_list's definition to:
arg_list = [{'arg': 'not_default'}, {}]
then use it like so:
for arg in arg_list:
    function(**arg)
A slightly uglier approach is to use a sentinel when you don't want to pass the argument, use that in your arg_list, and test for it, e.g.:
USEDEFAULT = object()

arg_list = ['not_default', USEDEFAULT]

for arg in arg_list:
    if arg is USEDEFAULT:
        function()
    else:
        function(arg=arg)
Obviously a bit less clean, but possibly more appropriate for specific use cases.
I am trying to construct a dispatch table the following way:
def run_nn(type=None):
    print type, 'nn'
    return

def run_svm(type=None):
    print type, 'svm'
    return

action = {'nn': run_nn(type=None),
          'svm': run_svm(type=None),}
I want the function to be executed only when called with something like:
action.get('nn',type='foo')
With the expectation that it prints:
foo nn
But instead it breaks with:
TypeError: get() takes no keyword arguments
What's the right way to do it?
Furthermore, the two functions run_nn() and run_svm() are executed while the dictionary is being built, before I ever look anything up. I don't want that. How can I avoid it?
You're calling the functions while building the dictionary. You should instead put the function objects in the dict without calling them. And afterwards, get the appropriate function from the dict and call it with the keyword argument.
What you want is:
action = {'nn': run_nn,
          'svm': run_svm,}
...
action.get('nn')(type='foo')  # get the function object from the dict, then call it
I'd suggest using action['nn'] over action.get('nn'), since you're not specifying any default callable in the get method; get returns None when the key is missing and no default is given. A KeyError is much more intuitive in this scenario than a TypeError: 'NoneType' object is not callable.
On another note, you can drop those return statements as you aren't actually returning anything. Your function will still return without them.
BTW, I have the feeling your function(s) want to change behavior depending on type (although your type is counter-intuitive as it is always a string). In any case, you may have a look at functools.singledispatch. That'll transform your function(s) into a single-dispatch generic function with the possibility to create several overloaded implementations.
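For example, a minimal singledispatch sketch (Python 3.4+; the str/int handlers here are invented purely for illustration):
from functools import singledispatch

@singledispatch
def run(data):
    raise NotImplementedError("unsupported type")

@run.register(str)
def _(data):
    print(data, 'nn')

@run.register(int)
def _(data):
    print(data, 'svm')

run('foo')   # foo nn
run(42)      # 42 svm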
Finally, although type reads nicely as an argument name, it shadows the built-in type, so you will run into problems when you need to use the built-in in your function.
Suppose you have a Python class whose constructor looks something like this:
def __init__(self, fname=None, data=[], imobj=None, height=0, width=0):
and you want to create an instance of it but only provide the fname and imobj inputs. Would the correct way to do this be
thing = Thing(f_name, None, im_obj, None, None)
or is there a preferred way of making this call?
You can just do:
thing = Thing(fname=value1, imobj=value2)
Note that you do not actually need the fname= in this case, since fname is the first parameter (besides self, which is passed implicitly). You could just do:
thing = Thing(value1, imobj=value2)
But I personally think that the first solution is more readable. It makes it clear that we are only changing the values of fname and imobj while leaving every other parameter at its default value. In addition, it keeps people from wondering which parameter value1 will be assigned to.
Also, you almost never want to have a mutable object such as a list be a default argument. It will be shared across all calls of the function. For more information, see "Least Astonishment" and the Mutable Default Argument
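A quick demonstration of that pitfall, with a throwaway function:
def append_to(item, target=[]):    # the same list object is reused on every call
    target.append(item)
    return target

print(append_to(1))   # [1]
print(append_to(2))   # [1, 2] -- not a fresh list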
You can instantiate with:
thing = Thing(f_name, imobj=im_obj)
The other named arguments will keep their default values.
You can also pass a dict to the constructor:
>>> argDict = {"fname": f_name, "imobj": im_obj}
>>> thing = Thing(**argDict)
This will unpack the dict values. See keyword arguments.
Suppose I have a function that I do not control that looks something like the following:
def some_func(foo, bar, bas, baz):
    do_something()
    return some_val
Now I want to call this function passing elements from a dict that contains keys that are identical to the arguments of this function. I could do something like:
some_func(foo=mydict['foo'],
          bar=mydict['bar'],
          bas=mydict['bas'],
          baz=mydict['baz'])
Is there some elegant way I could take advantage of the fact that the keys match the parameter names to do this less verbosely? I know I could pass the whole dict, but let's say I either don't want to or can't change the function to accept a single dict rather than the individual arguments.
Thanks,
Jerry
That's what ** argument unpacking is for:
some_func(**mydict)
See also Unpacking argument lists in the Python tutorial.
As Sven notes, you can pass a dict to a function using ** unpacking. But if the dict contains keys that aren't argument names to the target function, this works only if the function you're calling accepts keyword arguments using the ** notation. If it doesn't, you'll get an error.
If the function you're calling doesn't have a **kwargs parameter, the easiest way to handle it is to add one. But if you can't do that, you have a few approaches:
1) Write a wrapper function:
def wrapper(foo, bar, bas, baz, **kwargs):
    return some_func(foo, bar, bas, baz)

wrapper(**mydict)
2) Write a function to extract just the dict keys you need:
def extract(dikt, keys):
    return dict((k, dikt[k]) for k in keys.split())

some_func(**extract(mydict, "foo bar bas baz"))
3) Write a function that introspects the function you're calling and extracts the keys you need from the dictionary -- basically the same as #2 except you don't have to "repeat yourself" as much.
def call_with_dict(func, dikt):
    func(**dict((k, dikt[k]) for k in
                func.func_code.co_varnames[:func.func_code.co_argcount]))

call_with_dict(some_func, mydict)
Note that this doesn't allow you to omit arguments that have default values in the function signature. You could do some additional introspection to permit that (the length of func.func_defaults determines which of the arguments have default values).
This introspection is written for Python 2.x; Python 3 needs some tweaking (the code object lives on func.__code__ there). For this reason, I prefer one of the first two methods.
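On Python 3, a rough equivalent could use inspect.signature instead of the func_code attributes; this is a sketch under that assumption, not a tested drop-in replacement:
import inspect

def call_with_dict(func, dikt):
    # keep only the dict keys that name parameters of func
    params = inspect.signature(func).parameters
    func(**{k: dikt[k] for k in params if k in dikt})

call_with_dict(some_func, mydict)
Because missing keys are simply skipped here, this variant also lets you omit arguments that have defaults.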
PS -- Thanks to Sven for catching my missing ** in my second approach.