Under normal circumstances one calls a function with its default arguments by omitting those arguments. However if I'm generating arguments on the fly, omitting one isn't always easy or elegant. Is there a way to use a function's default argument explicitly? That is, to pass an argument which points back to the default argument.
So something like this except with ~use default~ replaced with something intelligent.
def function(arg='default'):
    print(arg)

arg_list = ['not_default', ~use default~]

for arg in arg_list:
    function(arg=arg)
# output:
# not_default
# default
I don't know if it's even possible, and given the term "default argument", all my searches turn up is beginners' tutorials. If this functionality is not supported that's OK too, I'd just like to know.
Unfortunately there is no such feature in Python. There are workarounds, but they're not very appealing.
The simple and popular pattern is to move the default into the function body:
def function(arg=None):
    if arg is None:
        arg = 'default'
    ...
Now you can either omit the argument or pass arg=None explicitly to take on the default value.
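Applied to the example from the question, None stands in for the ~use default~ placeholder:

def function(arg=None):
    if arg is None:
        arg = 'default'
    print(arg)

arg_list = ['not_default', None]  # None means "use the default"
for arg in arg_list:
    function(arg=arg)
# output:
# not_default
# default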
There is no general purpose way to omit an argument; you can specialize to particular functions by explicitly passing the appropriate default value, but otherwise, your only option is to fail to pass the argument.
The closest you could come is to replace your individual values with tuples or dicts that omit the relevant argument, then unpack them at call time. So for your example, you'd change arg_list's definition to:
arg_list = [{'arg': 'not_default'}, {}]
then use it like so:
for arg in arg_list:
    function(**arg)
A slightly uglier approach is to use a sentinel when you don't want to pass the argument, use that in your arg_list, and test for it, e.g.:
USEDEFAULT = object()

arg_list = ['not_default', USEDEFAULT]

for arg in arg_list:
    if arg is USEDEFAULT:
        function()
    else:
        function(arg=arg)
Obviously a bit less clean, but possibly more appropriate for specific use cases.
For example, I'd like to do something like: greet(,'hola'), where greet is:
def greet(person='stranger', greeting='hello'):
This would help greatly for testing while writing code.
When calling a function you can use the parameter names to make it even clearer which parameter will take which value. At the same time, if defaults are provided in the function definition, skipping parameters when calling the function does not raise any errors. So, in short, you can just do this:
def greet(person='stranger', greeting='hello'):
    print('{} {}'.format(greeting, person))
    return

greet(greeting='hola')  # same as greet(person='stranger', greeting='hola')
# prints 'hola stranger'
Note that, as I said above, this would not work if, for example, your function definition were like this:
def greet(person, greeting):
    print('{} {}'.format(greeting, person))
    return
In this case, Python would complain that it does not know what to do with person, since no default is supplied.
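For instance (the exact error wording varies between Python versions):

def greet(person, greeting):
    print('{} {}'.format(greeting, person))

greet(greeting='hola')
# TypeError: greet() missing 1 required positional argument: 'person'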
And by the way, the problem you are describing is most likely the very reason defaults are used in the first place.
Without knowing the other parameters, and only knowing that the parameter you want to change is in the second position, you could use the inspect module to get the function signature and its associated default values.
Then make a copy of the default values list and change the one at the index you want:
import inspect

def greet(person='stranger', greeting='hello'):
    print(person, greeting)

# getargspec was removed in Python 3.11; getfullargspec exposes the same
# .args and .defaults attributes used here
argspec = inspect.getfullargspec(greet)
defaults = list(argspec.defaults)
defaults[1] = "hola"  # change the second default parameter
greet(**dict(zip(argspec.args, defaults)))
Assuming that all parameters have default values (otherwise args and defaults have different lengths and the zip pairs the wrong names with the wrong values), that prints:
stranger hola
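For what it's worth, a rough sketch of the same idea using the newer inspect.signature API (the function and values are the ones from the example above):

import inspect

def greet(person='stranger', greeting='hello'):
    print(person, greeting)

sig = inspect.signature(greet)
# collect the declared defaults...
kwargs = {name: p.default for name, p in sig.parameters.items()
          if p.default is not inspect.Parameter.empty}
# ...then override the parameter in the second position by name
second_name = list(sig.parameters)[1]
kwargs[second_name] = 'hola'
greet(**kwargs)  # prints: stranger hola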
Suppose you have a Python class whose constructor looks something like this:
def __init__(self, fname=None, data=[], imobj=None, height=0, width=0):
and you want to create an instance of it but only provide the fname and imobj inputs. Would the correct way to do this be
thing = Thing(f_name, None, im_obj, None, None)
or is there a preferred way of making this call?
You can just do:
thing = Thing(fname=value1, imobj=value2)
Note that you do not actually need the fname= in this case since fname is the first parameter (besides self, which is passed implicitly). You could just do:
thing = Thing(value1, imobj=value2)
But I personally think that the first solution is more readable. It makes it clear that we are only changing the values of fname and imobj while leaving every other parameter at its default value. In addition, it keeps people from wondering which parameter value1 will be assigned to.
Also, you almost never want a mutable object such as a list as a default argument. It will be shared across all calls of the function. For more information, see "Least Astonishment" and the Mutable Default Argument.
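A small illustration of the pitfall and the usual fix (add_item is just a made-up name for this sketch):

def add_item(item, data=[]):  # the same list object is reused on every call
    data.append(item)
    return data

print(add_item(1))  # [1]
print(add_item(2))  # [1, 2]  <- still appending to the first call's list

def add_item_safe(item, data=None):
    if data is None:
        data = []  # a fresh list on every call
    data.append(item)
    return data

print(add_item_safe(1))  # [1]
print(add_item_safe(2))  # [2]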
You can instantiate it with:
thing = Thing(f_name, imobj=im_obj)
The other named arguments will be set to their defaults.
You can also pass a dict to the constructor:
>>> argDict={"fname": f_name, "imobj": im_obj}
>>> thing = Thing(**argDict)
This will unpack the dict values. See keyword arguments.
I'm using python 3.3. Consider this function:
def foo(action, log=False, *args):
    print(action)
    print(log)
    print(args)
    print()
The following call works as expected:
foo("A",True,"C","D","E")
A
True
('C', 'D', 'E')
But this one doesn't.
foo("A",log=True,"C","D","E")
SyntaxError: non-keyword arg after keyword arg
Why is this the case?
Does this somehow introduce ambiguity?
Consider the following:
def foo(bar="baz", bat=False, *args):
    ...
Now if I call
foo(bat=True, "bar")
Where does "bar" go? Either:
bar = "bar", bat = True, args = (), or
bar = "baz", bat = True, args = ("bar",), or even
bar = "baz", bat = "bar", args = ()
and there's no obvious choice (at least between the first two) as to which one it should be. We want bat = True to 'consume' the second argument slot, but it's not clear which order the remaining arguments should be consumed in: treating it as if bat doesn't exist at all and moving everything to the left, or treating it as if bat moved the "cursor" past itself on to the next argument. Or, if we wanted to do something truly strange, we could defend the decision to say that the second argument in the argument tuple always goes with the second positional argument, whether or not other keyword arguments were passed.
Regardless, we're left with something pretty confusing, and someone is going to be surprised which one we picked regardless of which one it is. Python aims to be simple and clean, and it wants to avoid making any language design choices that might be unintuitive. There should be one-- and preferably only one --obvious way to do it.
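Note that in Python 3 you can sidestep the question entirely by making log keyword-only; a sketch of what the signature from the question could look like instead:

def foo(action, *args, log=False):
    print(action)
    print(log)
    print(args)

foo("A", "C", "D", "E", log=True)
# A
# True
# ('C', 'D', 'E')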
The function of keyword arguments is twofold:
To provide an interface to functions that does not rely on the order of the parameters.
To provide a way to reduce ambiguity when passing parameters to a function.
Providing a mixture of keyword and ordered arguments is only a problem when you provide the keyword arguments before the ordered arguments. Why is this?
Two reasons:
It is confusing to read. If you're providing ordered parameters, why would you label some of them and not others?
The algorithm to process the arguments would be needlessly complicated. You can provide keyword args after your 'ordered' arguments. This makes sense because it is clear that everything is ordered up until the point that you employ keywords. However, if you employ keywords between ordered arguments, there is no clear way to determine whether you are still ordering your arguments.
I am creating a module in python that can take multiple arguments. What would be the best way to pass the arguments to the definition of a method?
def abc(arg):
    ...

abc({"host": "10.1.0.100", "protocol": "http"})

def abc(host, protocol):
    ...

abc("10.1.0.100", "http")

def abc(**kwargs):
    ...

abc(host="10.1.0.100", protocol="http")
Or something else?
Edit
I will actually have these arguments (username, password, protocol, host, password2), none of which are required.
def abc(host, protocol):
    ...

abc("10.1.0.100", "http")
abc(host="10.1.0.100", protocol="http")
Whether you call it using positional args or keyword args depends on the number of arguments and on whether you skip any. For your example I don't see much use in calling the function with keyword args.
Now some reasons why the other solutions are bad:
abc({"host" : "10.1.0.100", "protocol" : "http"})
This is a workaround for keyword arguments commonly used in languages which lack real keyword args (usually JavaScript). In those languages it is nice, but in Python it is just wrong since you gain absolutely nothing from it. If you ever want to call the functions with args from a dict, you can always do abc(**{...}) anyway.
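For instance, unpacking a dict into the version with explicit parameters (using the host/protocol values from the question):

def abc(host, protocol):
    print(host, protocol)

config = {"host": "10.1.0.100", "protocol": "http"}
abc(**config)  # equivalent to abc(host="10.1.0.100", protocol="http")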
def abc(**kwargs):
People expect the function to accept lots of - maybe even arbitrary - arguments. Besides that, they don't see what arguments are possible/required and you'd have to write code to require certain arguments on your own.
If all the arguments are known ahead, use an explicit argument list, optionally with default values, like def abc(arg1="hello", arg2="world",...). This will make the code most readable.
When you call the function, you can use either abc("hello", "world") or abc(arg1="hello", arg2="world"). I use the longer form if there are more than 4 or 5 arguments; it's a matter of taste.
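Given the edit (five arguments, none required), a sketch of that explicit style could look like this; the default values here are only placeholders:

def abc(username=None, password=None, protocol="http", host=None, password2=None):
    # every argument is optional; callers pass only what they need
    ...

abc(host="10.1.0.100", username="admin")
abc()  # also valid; everything falls back to its default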
This really depends on the context, the design and the purpose of your method.
If the parameters are defined and compulsory, then the best option is:
def abc(host, protocol):
...
abc('10.1.0.100', 'http')
in case the parameters are defined but optional, then:
def abc(host=None, protocol=None):  # I used None, but put a default value
    ...
You can call it through positional arguments, as in abc('10.1.0.100', 'http'), or by name, as in abc(protocol='http').
And if you don't know in advance what arguments it will receive (for example in string formatting), then the best option is to use **kwargs:
def abc(**kwargs):
    ...
And in this way you must call it using named arguments.
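A minimal sketch of that **kwargs style, reusing the host/protocol example from the question (the fallback values are made up):

def abc(**kwargs):
    host = kwargs.get("host", "localhost")      # fall back if the key is absent
    protocol = kwargs.get("protocol", "http")
    print(host, protocol)

abc(host="10.1.0.100", protocol="http")  # prints: 10.1.0.100 http
abc(protocol="https")                    # prints: localhost https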
I have several layers of function calls, passing around a common dictionary of key word arguments:
def func1(**qwargs):
    func2(**qwargs)
    func3(**qwargs)
I would like to supply some default arguments in some of the subsequent function calls, something like this:
def func1(**qwargs):
    func2(arg=qwargs.get("arg", default), **qwargs)
    func3(**qwargs)
The problem with this approach is that if arg is inside qwargs, a TypeError is raised with "got multiple values for keyword argument".
I don't want to set qwargs["arg"] to default, because then func3 gets this argument without warrant. I could make a copy.copy of the qwargs and set "arg" in the copy, but qwargs could have large data structures in it and I don't want to copy them (maybe copy.copy wouldn't, only copy.deepcopy?).
What's the pythonic thing to do here?
Just build and use another dict for the purpose of calling func2, leaving the original alone for the later call to func3:
def func1(**qwargs):
    d = dict(arg=default)
    d.update(qwargs)
    func2(**d)
    func3(**qwargs)
This is if you want a setting for arg in qwargs to override the default. Otherwise (if you want default to override any possible setting for arg in qwargs):
def func1(**qwargs):
    d = dict(qwargs, arg=default)
    func2(**d)
    func3(**qwargs)
since the keyword-argument to dict overrides the value in the positional argument, if any.
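A quick demonstration of that overriding behaviour (the values are made up):

qwargs = {'arg': 'from_caller', 'other': 1}
default = 'fallback'

d = dict(qwargs, arg=default)
print(d)  # {'arg': 'fallback', 'other': 1} -- the keyword wins over the mapping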
To create a new dict with the same keys and values you can use
newdict = dict(qwargs)
If qwargs doesn't contain very many keys, that's cheap.
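Note that this copy is shallow, so large values are shared by reference rather than duplicated, e.g.:

qwargs = {'arg': 1, 'big': list(range(1000000))}
newdict = dict(qwargs)

print(newdict is qwargs)                # False: it is a new dict
print(newdict['big'] is qwargs['big'])  # True: the big list itself is not copied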
If it's possible, you could rewrite the functions to take their arguments as an actual dict instead of multiple separate arguments.