I would like to know what the best practice is regarding default values for functions.
Let's say I have a function:
def my_function(x, **kwargs):
    kwargs_default = {'boolean_offset': False}
    kwargs_default.update(kwargs)
    if kwargs_default['boolean_offset']:
        x += 100
    return x
It is just a quick example and does not have any other meaning.
my_function(2) will return 2. my_function(2, boolean_offset=True) will return 102.
The point is that I have a variable called boolean_offset that is turned off by default, but may be turned on by the user.
In my real problem I have a function with many input variables. Often not all of these input variables are used, and in most cases users want the default settings. To make the code more readable I would like to use *args and **kwargs. Further, I would like the potentially used variables to have default values, which can be overwritten by the user.
Is the code in my example the best way to do this?
*args and **kwargs do not make the code more readable; on the contrary, they are pure hell.
Before, your editor could show you the function's parameters; now there is just **kwargs, and there is no reliable way to find out which parameters can or must be set.
If you have many parameters, you should either split the function or add a configuration class, which can be leveraged for sanity checks, too.
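For example, a minimal sketch of such a configuration class (OffsetConfig and its fields are made-up names, purely for illustration):
from dataclasses import dataclass

@dataclass
class OffsetConfig:
    # All defaults live in one discoverable place.
    boolean_offset: bool = False
    offset_amount: int = 100

    def __post_init__(self):
        # Sanity checks can live here, too.
        if self.offset_amount < 0:
            raise ValueError("offset_amount must be non-negative")

def my_function(x, config=None):
    config = config if config is not None else OffsetConfig()
    if config.boolean_offset:
        x += config.offset_amount
    return x

print(my_function(2))                                     # 2
print(my_function(2, OffsetConfig(boolean_offset=True)))  # 102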
You can use **kwargs or *args. If you want to use **kwargs, I would add the default value of boolean_offset in the declaration, like this:
def my_function(x, boolean_offset=False, **kwargs):
and I would use **kwargs to pass further arguments that have no default value.
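Putting it together with the example from the question, a minimal sketch of the rewritten function:
def my_function(x, boolean_offset=False, **kwargs):
    # boolean_offset is now visible in the signature, with its default;
    # anything else the caller passes still lands in kwargs.
    if boolean_offset:
        x += 100
    return x

print(my_function(2))                       # 2
print(my_function(2, boolean_offset=True))  # 102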
While I do agree that using *args / **kwargs is not the way to go, if you still wish to do something along these lines you could define a default dict and work with it.
Something like this:
from collections import defaultdict

def my_function(x, **kwargs):
    my_default_dict = defaultdict(lambda: False)
    for key in kwargs.keys():
        my_default_dict[key] = kwargs[key]
This is assuming you want all your default values to be a certain value (False in this case). You can then work with my_default_dict.
In any other case you will have to do it manually (further strengthening the answer above).
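For completeness, a sketch of the full example built on this pattern, reusing the question's boolean_offset logic:
from collections import defaultdict

def my_function(x, **kwargs):
    my_default_dict = defaultdict(lambda: False)  # any missing key reads as False
    my_default_dict.update(kwargs)
    if my_default_dict['boolean_offset']:
        x += 100
    return x

print(my_function(2))                       # 2
print(my_function(2, boolean_offset=True))  # 102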
I have a dict that I want to convert to several different objects. For instance:
Currently Have
kwargs = {'this': 7, 'that': 'butterfly'}
And I want to convert it, maybe using something similar to function argument unpacking, but without actually calling a function, so that I end up with something like:
Desired
**kwargs
print(this) # Will print out `7`
print(that) # Will print out `'butterfly'`
I know that I cannot use the double-star [**] bare like I have shown. But is there something similar I can do?
Edit: More Specifics
Since folks were asking for details and warning that what I am trying to do is dangerous, I will give more context for my need.
I have a property in a class that I want to set using keyword arguments. Something like what is shown here, but unpacking kwargs instead of args.
As a functioning toy example, it looks like this:
class A(object):
    def __init__(self):
        self._idx = None

    @property
    def idx(self):
        return self._idx

    @idx.setter
    def idx(self, kwargs):
        print(kwargs)
        self._idx = {}
        for kw, val in kwargs.items():
            self._idx[kw] = val
This works. But now I am unit testing the function, and in the unit test I would like to be able to provide the specific key-value pairs as normal objects, so that I can effectively use hypothesis and pytest's @mark.parametrize.
In order to do this, it looks like I will need the "bare unpacking" like I described above.
Since these are all actually local variables within a method, I feel like @g.d.d.c's concerns are probably not significant. It sounds like his worries were about me creating new global variables without knowing what they were.
Note: this approach is dangerous. You should not (in most circumstances) muddle with locals() or globals(). However, what you want can be done, kind of:
>>> kwargs = {'this': 7, 'that': 'butterfly'}
>>>
>>> locals().update(kwargs)
>>> this
7
>>> that
'butterfly'
It still calls a function (the update on locals()), but it does give you names in your local namespace.
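One caveat: this only behaves as shown at module level or in the REPL, where locals() is the same dictionary as globals(). Inside a function, CPython does not guarantee that writing to locals() affects the real local variables, so a sketch like the following fails:
def f():
    kwargs = {'this': 7, 'that': 'butterfly'}
    locals().update(kwargs)  # silently has no effect on the function's locals
    return this              # NameError: name 'this' is not defined

f()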
You could do:
this, that = {"this": 7, "that": "butterfly"}.values()
But it is really specific. The question would be why do you have the values in a dict in the first place.
You seem to have them in kwargs; why can't you expand them there?
If you don't know the keys in the dict how can you then use them in the code?
If you know the keys in the dict, why not simply expand it like so:
this, that = kwargs["this"], kwargs["that"]
And my final question: why not use kwargs["this"] directly where you need the variable, especially if you don't need it a lot?
Can you provide us with the real use case so that we can tell you how we would do that?
Under normal circumstances one calls a function with its default arguments by omitting those arguments. However if I'm generating arguments on the fly, omitting one isn't always easy or elegant. Is there a way to use a function's default argument explicitly? That is, to pass an argument which points back to the default argument.
So, something like this, except with ~use default~ replaced with something intelligent:
def function(arg='default'):
    print(arg)

arg_list = ['not_default', ~use default~]

for arg in arg_list:
    function(arg=arg)

# output:
# not_default
# default
I don't know if it's even possible, and given the term "default argument", all my searches come up with are coders' first tutorials. If this functionality is not supported, that's OK too; I'd just like to know.
Unfortunately there is no such feature in Python. There are hackarounds, but they're not very likable.
The simple and popular pattern is to move the default into the function body:
def function(arg=None):
    if arg is None:
        arg = 'default'
    ...
Now you can either omit the argument or pass arg=None explicitly to take on the default value.
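Applied to the loop from the question, with None standing in for ~use default~, the sketch looks like this:
def function(arg=None):
    if arg is None:
        arg = 'default'
    print(arg)

arg_list = ['not_default', None]  # None means "use the default"

for arg in arg_list:
    function(arg=arg)
# output:
# not_default
# default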
There is no general purpose way to omit an argument; you can specialize to particular functions by explicitly passing the appropriate default value, but otherwise, your only option is to fail to pass the argument.
The closest you could come is to replace your individual values with tuples or dicts that omit the relevant argument, then unpack them at call time. So for your example, you'd change arg_list's definition to:
arg_list = [{'arg': 'not_default'}, {}]
then use it like so:
for arg in arg_list:
    function(**arg)
A slightly uglier approach is to use a sentinel when you don't want to pass the argument, use that in your arg_list, and test for it, e.g.:
USEDEFAULT = object()
arg_list = ['not_default', USEDEFAULT]

for arg in arg_list:
    if arg is USEDEFAULT:
        function()
    else:
        function(arg=arg)
Obviously a bit less clean, but possibly more appropriate for specific use cases.
For example, I'd like to do something like greet(,'hola'), where greet is:
def greet(person='stranger', greeting='hello'):
This would help greatly for testing while writing code.
When calling a function, you can use the parameter names to make it even clearer which parameter will receive which value. At the same time, if defaults are provided in the function definition, skipping arguments when calling the function does not raise any errors. So, in short, you can just do this:
def greet(person='stranger', greeting='hello'):
    print('{} {}'.format(greeting, person))
    return

greet(greeting='hola')  # same as greet(person='stranger', greeting='hola')
# prints 'hola stranger'
Note that, as I said above, this would not work if, for example, your function definition was like this:
def greet(person, greeting):
    print('{} {}'.format(greeting, person))
    return
since in this case Python would complain that it does not know what to do with person; no default is supplied.
And by the way, the problem you are describing is most likely the very reason defaults are used in the first place.
Without knowing the other parameters, and knowing only that the parameter you want to change is in the second position, you could use the inspect module to get the function signature and the associated default values.
Then make a copy of the defaults list and change the one at the index you want:
import inspect

def greet(person='stranger', greeting='hello'):
    print(person, greeting)

# inspect.getargspec was removed in Python 3.11; getfullargspec is the equivalent
argspec = inspect.getfullargspec(greet)
defaults = list(argspec.defaults)
defaults[1] = "hola"  # change the second default parameter
greet(**dict(zip(argspec.args, defaults)))
Assuming that all parameters have default values (otherwise the zip of argument names and defaults is misaligned and this fails), that prints:
stranger hola
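As a hedged alternative on Python 3, inspect.signature lets you override a default by name rather than by position, reusing the greet defined above:
import inspect

sig = inspect.signature(greet)
defaults = {name: p.default
            for name, p in sig.parameters.items()
            if p.default is not inspect.Parameter.empty}
defaults['greeting'] = 'hola'  # override by name instead of by index
greet(**defaults)  # prints: stranger hola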
I have seen some code like the following:
params = {
    'username': username,
    'password': password,
    'attended': attended,
    'openid_identifier': openid_identifier,
    'multistage': (stage and True) or None
}
ret = authmethod.login(request, userobj, **params)
login is implemented like this:
def login(self, request, user_obj, **kw):
    username = kw.get('username')
    password = kw.get('password')
So we know that kw is a dictionary, but I don't know what the ** means. Is it something like a pointer in the C language? Is it used to pass the dictionary in as a reference?
Thank you if you can answer me.
Basically this is what it means:
Without using **kw, you would need to list all the input parameters for login in its signature.
Now, when you call the login function, you know the names of login's parameters, but if there are a lot of them it's difficult to always remember their order.
Therefore, you call login by naming each parameter and setting it equal to the value you want to pass. Think of it like this:
Without using **kw, you'd do this:
def say(phrase):
    print(phrase)

say("Hello, World!")
But, by using **kw, you can do this:
def say(**kw):
    phrase = kw.get('say_what')
    print(phrase)

say(**{'say_what': "Hello, World!"})
Now what happens is that ** "unpacks" the dictionary in such a way that it tells say that the input parameter named say_what should take the value "Hello, World!".
The above example is not the best place to use **kw because there is only one input parameter. But if you have a function with a long signature, then it would be unreasonable to expect any programmer to remember exactly what parameters must be passed in what order to this function.
If the programmer were to use **kw, then they can specify a dictionary which maps the parameters' names (as strings) to their values. The function takes care of the rest, and the programmer doesn't have to concern himself with the order in which he passes the parameters to the function.
Hope this helps.
These are called keyword arguments and they're described in lots of places, like the Python manual and in blog posts.
We're considering using Python (IronPython, but I don't think that's relevant) to provide a sort of 'macro' support for another application, which controls a piece of equipment.
We'd like to write fairly simple functions in Python, which take a few arguments - these would be things like times and temperatures and positions. Different functions would take different arguments, and the main application would contain user interface (something like a property grid) which allows the users to provide values for the Python function arguments.
So, for example function1 might take a time and a temperature, and function2 might take a position and a couple of times.
We'd like to be able to dynamically build the user interface from the Python code. Things which are easy to do are to find a list of functions in a module, and (using inspect.getargspec) to get a list of arguments to each function.
However, just a list of argument names is not really enough - ideally we'd like to be able to include some more information about each argument - for instance, it's 'type' (high-level type - time, temperature, etc, not language-level type), and perhaps a 'friendly name' or description.
So, the question is: what are good 'pythonic' ways of attaching this sort of information to a function?
The two possibilities I have thought of are:
Use a strict naming convention for arguments, and then infer stuff about them from their names (fetched using getargspec)
Invent our own docstring meta-language (could be little more than CSV) and use the docstring for our metadata.
Because Python seems pretty popular for building scripting into large apps, I imagine this is a solved problem with some common conventions, but I haven't been able to find them.
Decorators are a good way to add metadata to functions. Add one that takes a list of types to append to a .params property or something:
def takes(*args):
    def _takes(fcn):
        fcn.params = args
        return fcn
    return _takes

@takes("time", "temp", "time")
def do_stuff(start_time, average_temp, stop_time):
    pass
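The UI-building side can then read the metadata straight off the function object and pair it with the argument names, e.g.:
import inspect

names = inspect.getfullargspec(do_stuff).args
print(list(zip(names, do_stuff.params)))
# [('start_time', 'time'), ('average_temp', 'temp'), ('stop_time', 'time')]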
I would use some kind of decorator:
class TypeProtector(object):
    def __init__(self, fun, types):
        self.fun, self.types = fun, types

    def __call__(self, *args, **kwargs):
        # validate args with self.types
        # then run the wrapped function
        return self.fun(*args, **kwargs)

def types(*args):
    def decorator(fun):
        # validate args count against fun's parameter count
        # then return the wrapped function
        return TypeProtector(fun, args)
    return decorator

# Time and Temperature are placeholder type objects for this sketch
@types(Time, Temperature)
def myfunction(foo, bar):
    pass

myfunction('21:21', '32C')
print(myfunction.types)
The 'pythonic' way to do this is function annotations (Python 3).
def DoSomething(critical_temp: "temperature", time: "time"):
    pass
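The annotations can then be read back at runtime to drive the UI, for example via the function's __annotations__ attribute:
print(DoSomething.__annotations__)
# {'critical_temp': 'temperature', 'time': 'time'}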
For Python 2.x, I like to use the docstring:
def my_func(txt):
    """{
        "name": "Justin",
        "age": 15
    }"""
    pass
and it can be automatically assigned to the function object with this snippet:
import json

# iterate over a snapshot, since the loop body creates new global names
for f in list(globals()):
    if not hasattr(globals()[f], '__call__'):
        continue
    try:
        meta = json.loads(globals()[f].__doc__)
    except (TypeError, ValueError):
        continue
    for k, v in meta.items():
        setattr(globals()[f], k, v)
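After running the snippet, the metadata is available as plain attributes on the function:
print(my_func.name, my_func.age)  # Justin 15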