Default Initialization of Starred Variables within the Definition of a Function

It is well-known that in order to give a parameter a default value in a Python function definition, the following syntax is used:
def func(x=0):
    if x == 0:
        print("x is equal to 0")
    else:
        print("x is not equal to 0")
So if the function is called as such:
>>> func()
It results in
'x is equal to 0'
But when a similar technique is used for starred variables, for example,
def func(*x = (0, 0)):
it results in a syntax error. I've tried switching up the syntax by also doing (*x = 0, 0) but the same error is encountered. Is it possible to initialize a starred variable to a default value?

Starred parameters are what lets a function accept an arbitrary number of arguments:
*variables is a tuple collecting all extra positional arguments (this is usually named args)
**variables is a dictionary collecting all extra keyword arguments (this is usually named kwargs)
They are always there, just empty if nothing is provided. You can test whether a value is in the dictionary or tuple, depending on the type of argument, and initialize it.
def arg_test(*args, **kwargs):
    if not args:
        print "* no positional args provided, set default here"
        print args
    else:
        print "* positional args provided"
        print args
    if not kwargs:
        print "* no kwargs provided, set default here"
        print kwargs
    else:
        print "* named arguments provided"
        print kwargs

# no args, no kwargs
print "____ calling with no arguments ___"
arg_test()
# args, no kwargs
print "____ calling with positional arguments ___"
arg_test("a", "b", "c")
# no args, but kwargs
print "____ calling with named arguments ___"
arg_test(a=1, b=2, c=3)

The starred parameter has a value of an empty tuple () by default. It's not possible to change that default because of how starred parameters work: Python assigns the un-starred parameters first, if any arguments are available, and collects the rest into a tuple (a similar collecting behaviour for assignment targets is described in PEP 3132: https://www.python.org/dev/peps/pep-3132/). You can, however, add a check at the beginning of the function to find out whether x is an empty tuple and change it accordingly. Your code would look something like this:
def func(*x):
    if x == ():  # check whether x is an empty tuple
        x = (0, 0)
    if x == 0:
        print("x is equal to 0")
    else:
        print("x is not equal to 0")

Related

reassign values to args of a function

I have a function with about 20-30 parameters:
def function(a, b, c, d, e, ...):
    ...
Those parameters may have any value, including None.
I want to reassign a specific string to each parameter that has the value None before my function does its magic.
But I don't want a huge block of code such as:
if a is None:
    ...
if b is None:
    ...
How can I go through each parameter and reassign its value if the condition is met?
Thanks.
Unless you are doing something pretty exotic, this kind of thing is usually better handled by collecting the variables in a data structure, e.g. a dict like so:
def function(**kwargs):
    default = 42
    for key, val in kwargs.items():
        if val is None:
            kwargs[key] = default
    ...
    print(kwargs)

# Example call
function(a=1, b=None)
You can assign to individual variables using the magic of exec, but it's generally not advised. Furthermore, it's not clear to me how one can successfully use this inside of a function as e.g. exec('a = 42') doesn't actually change the value of the local a variable.
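To illustrate that last point, here is a small demonstration (my own sketch, not part of the original answer) of exec failing to rebind a function local in Python 3:

def demo():
    a = 1
    exec('a = 42')   # the assignment lands in a throwaway copy of the local namespace
    print(a)         # still prints 1

demo()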
If you have that many arguments to the function, you can try using the iterable-unpacking operator * instead of explicitly defining each argument. That way you have more control over the contents of the arguments:
def function(*args):
    args = list(args)
    for x, y in enumerate(args):
        if y is None:
            args[x] = "default_value"
    print(args)
Then call it like this:
function(123, 3, None, "hello")
Output:
[123, 3, 'default_value', 'hello']
I'm not recommending doing this, but if you insist:
def function(a, b, c, d, e, ...):
    myvars = locals()          # get a dictionary of the parameters
    arg_names = myvars.keys()  # get their names
    for k in arg_names:        # replace the ones whose value is None
        if myvars[k] is None:
            myvars[k] = 'None'  # note: in CPython this updates only the dict,
                                # not the original local variables

Check if function can be called with another function's arguments

How can I check if a function can always be called with the same arguments as another function? For example, b can be called with all arguments provided to a.
def a(a, b, c=None):
    pass

def b(a, *args, d=4, **kwargs):
    pass
The reason I want this is that I have a base function:
def f(a, b):
    print('f', a, b)
and a list of callbacks:
def g(b, a):
    print('g', a, b)

def h(*args, **kwargs):
    print('h', args, kwargs)
funcs = [g, h]
and a wrapper function that accepts anything:
def call(*args, **kwargs):
    f(*args, **kwargs)
    for func in funcs:
        func(*args, **kwargs)
Now I want to check if all callbacks will accept the arguments provided to call(), assuming they're valid for f().
For performance reasons, I don't want to check the arguments every time call() is called, but rather check each callback before adding it to the list of callbacks.
For example, those calls are ok:
call(1, 2)
call(a=1, b=3)
But this one should fail because g has arguments in wrong order:
call(1, b=3)
This took a bit of fun research, but I think I've covered the corner cases. A number of them arise from keeping things compatible with Python 2 while new syntax was added.
The most problematic part is the fact that some named (keyword) parameters can be passed in as positional arguments, or can become required depending on the order in which arguments are passed.
For more, see the comments.
The code below ensures that function b can be called using any possible combination of arguments that is valid for function a (the inverse is not implied).
Uncomment the try/except block to get a True/False result instead of an AssertionError.
import inspect

def check_arg_spec(a, b):
    """
    Attributes of the FullArgSpec object:
    sp.args = positional or legacy keyword arguments, with the keyword ones at the back
    sp.varargs = *args
    sp.varkw = **kwargs
    sp.defaults = default values for the legacy keyword arguments (the tail of sp.args)
    sp.kwonlyargs = keyword-only arguments following *args or *; must be passed in by name
    sp.kwonlydefaults = {name: default_val, ...}
    sp.annotations -> currently not in use, except as a standard flag for outside applications
    Consume order:
    (1) positional arguments
    (2) legacy keyword argument = default (can be filled by either a keyword or a positional parameter)
    [
    (3) *args
    [
    (4) keyword-only arguments [=default]
    ]
    ]
    (5) **kwargs
    """
    a_sp = inspect.getfullargspec(a)
    b_sp = inspect.getfullargspec(b)
    kwdfb = b_sp.kwonlydefaults or {}
    kwdfa = a_sp.kwonlydefaults or {}
    kwddefb = b_sp.defaults or []
    kwddefa = a_sp.defaults or []
    # try:
    akwd = a_sp.kwonlyargs
    if len(kwddefa):
        akwd += a_sp.args[-len(kwddefa):]
    bkwd = b_sp.kwonlyargs
    if len(kwddefb):
        bkwd += b_sp.args[-len(kwddefb):]
    # All required arguments of b must have a name match in a's spec.
    for bkey in (set(b_sp.args) ^ set(bkwd)) & set(b_sp.args):
        assert bkey in a_sp.args
    # All required positional arguments of b can be met by a.
    assert (len(a_sp.args) - len(kwddefb)) >= (len(b_sp.args) - len(kwddefb))
    # If a has a *args spec, so must b.
    assert not (a_sp.varargs and b_sp.varargs is None)
    # If a does not take *args, the maximum number of positional args passed to a is
    # len(a_sp.args); b must accept at least that many positional args unless it can consume *args.
    if b_sp.varargs is None:
        # Neither a nor b accepts *args: the total number of positional plus
        # py2-style keyword arguments of b must be at least what a can send its way.
        assert len(a_sp.args) <= len(b_sp.args)
    # Keyword-only arguments of b without defaults are required and must be present in a.
    akws = set(a_sp.kwonlyargs) | set(a_sp.args[-len(kwddefa):])
    for nmreq in (set(b_sp.kwonlyargs) ^ set(kwdfb)) & set(b_sp.kwonlyargs):
        assert nmreq in akws
    # If a and b both accept an arbitrary number of positional arguments, or if b can
    # but a cannot, no more checks are necessary here.
    # If a accepts arbitrary **kwargs, then so must b.
    assert not (a_sp.varkw and b_sp.varkw is None)
    if b_sp.varkw is None:
        # Neither a nor b can consume arbitrary keyword arguments,
        # so b must be able to consume every keyword that a can be called with.
        for akw in akwd:
            assert akw in bkwd
    # If b accepts **kwargs but a does not, there is no need to check further.
    # If both accept **kwargs, there is also no need to check further.
    # return True
    #
    # except AssertionError:
    #
    #     return False
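As a hypothetical usage sketch (the names a, b1 and b2 below are illustrative, not from the question), the assertion-based version can be exercised like this:

def a(x, y, z=None): pass
def b1(x, *args, **kwargs): pass   # can absorb anything a can be called with
def b2(x): pass                    # cannot accept y

check_arg_spec(a, b1)   # passes silently
check_arg_spec(a, b2)   # raises AssertionError (b2 accepts too few positional arguments)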
Not sure what you are really looking for and I'm pretty sure your issue could be solved in a better way, but anyway:
from inspect import getargspec

def foo(a, b, c=None):
    pass

def bar(a, d=4, *args, **kwargs):
    pass

def same_args(func1, func2):
    return list(set(getargspec(func1)[0]).intersection(set(getargspec(func2)[0])))

print same_args(foo, bar)
# => ['a']
same_args just checks the named arguments of func1 and func2 and returns a new list containing only the argument names that appear in both func1 and func2.
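As a Python 3 aside (not part of the original answer), inspect.signature(...).bind(...) gives a lighter-weight way to test whether one concrete set of arguments would be accepted; the accepts helper below is an illustrative name:

import inspect

def accepts(func, *args, **kwargs):
    """Return True if func(*args, **kwargs) would bind without error."""
    try:
        inspect.signature(func).bind(*args, **kwargs)
        return True
    except TypeError:
        return False

def g(b, a):
    pass

print(accepts(g, 1, 2))     # True
print(accepts(g, 1, b=3))   # False: b would receive two values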

syntax error: non-keyword arg after keyword arg

I'm using Python 2.7.13 in spyder.
def test(a, b, c='c', *args):
    for item in args:
        print item
This function definition is valid in Python 2.7, but as soon as I try to pass in args it gives me the non-keyword arg after keyword arg error:
test(1,2,c='c',10,11)
Gives this:
non-keyword arg after keyword arg
But this:
test(1,2,3,4,5)
Is working.
I'm not sure what the issue is here, since putting *args before c='c', like this:
def test(a, b, *args, c='c'):
    for item in args:
        print item
This gives me an error in the function definition.
The above code is just a dummy example; the original code looks like the following:
def export_plot_as_mat(fname, undersamp, undersamp_type, n_comp, b_s, n_iter, fit_alg, transf_alg, alpha_train, alpha_test, export_info=False, *args):
    info = ('undersampling=' + str(undersamp) + ' undersampling_type=' + str(undersamp_type) +
            ' n_comp=' + str(n_comp) + ' batch_size=' + str(b_s) +
            ' n_iter=' + str(n_iter) + ' fit_alg=' + str(fit_alg) +
            ' transform_alg=' + str(transf_alg) + ' alpha_train=' +
            str(alpha_train) + ' alpha_test=' + str(alpha_test))
    d = [(str(args[i]), args[i]) for i in range(len(args))]
    if export_info:
        d.append(('info', info))
    sp.io.savemat(fname + '.mat', d)
I want to have the option to export the parameters used to build the data I'm exporting.
The definition is fine, since c can be specified positionally. The call is the problem; you specified a keyword argument before another positional argument.
Python 2.x doesn't have a way to define keyword-only arguments.
This is an interesting side case of how Python 2.x does argument parsing. Roughly, Python has some logic for how to "match up" the arguments you pass to a function with the arguments that it is expecting, and this logic changed in Python 3.
The easiest way to get what you want is to accept **kwargs instead of specifying c=0, and do the default argument manually.
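A minimal Python 2 sketch of that idea (emulating a keyword-only c by popping it out of **kwargs; the error message text is only illustrative):

def test(a, b, *args, **kwargs):
    c = kwargs.pop('c', 'c')   # manual default for the "keyword-only" parameter
    if kwargs:
        raise TypeError('unexpected keyword arguments: %r' % (kwargs.keys(),))
    print c
    for item in args:
        print item

test(1, 2, 10, 11, c='x')   # extra positionals and the keyword now both work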
A couple of things here.
You can change the c='c' in your call to test() to just 'c'.
You can also put your last two arguments in a list to print them on the same line, or leave them as they are to print them on separate lines.
def test(a, b, c='c', *args):
    for item in args:
        print item

test(1, 2, 'c', [10, 11])
result:
[10, 11]
For the argument c = 'c' you can pass anything you want there and interact with c:
Take this example:
def test(a, b, c='c', *args):
    print c
    for item in args:
        print item

test(1, 2, 'g', 10, 11)
Result:
g
10
11

Function that accepts both expanded arguments and tuple

Is there a Pythonic way to create a function that accepts both separate arguments and a tuple? i.e to achieve something like this:
def f(*args):
    """prints 2 values
    f(1,2)
    1 2
    f( (1,2) )
    1 2"""
    if len(args) == 1:
        if len(args[0]) != 2:
            raise Exception("wrong number of arguments")
        else:
            print args[0][0], args[0][1]
    elif len(args) == 2:
        print args[0], args[1]
    else:
        raise Exception("wrong number of arguments")
First of all I don't know if it is very wise to do so. Say a person calls a function like:
f(*((1,4),(2,5)))
As you can see, the tuple contains two elements. But now for some reason the person calls it with only one element (because, for instance, the generator did not generate two elements):
f(*((1,4),))
Then the user would likely want your function to report this, but now it will simply accept it (which can lead to complicated and unexpected behavior). Okay, printing the elements will of course not do much harm, but in the general case the consequences might be more severe.
Nevertheless, an elegant way to do this is to write a simple decorator that checks whether a single tuple element was passed and, if so, expands it:
def one_tuple(f):
    def g(*args):
        if len(args) == 1 and isinstance(args[0], tuple):
            return f(*args[0])
        else:
            return f(*args)
    return g
And apply it to your f:
@one_tuple
def f(*args):
    if len(args) == 2:
        print args[0], args[1]
    else:
        raise Exception("wrong number of arguments")
The decorator one_tuple thus checks if one tuple is fed, and if so unpacks it for you before passing it to your f function.
As a result f does not have to take the tuple case into account: it will always be fed expanded arguments and handle these (of course the opposite could be done as well).
The advantage of defining a decorator is its reuse: you can apply this decorator to all kinds of functions (and thus make it easier to implement these).
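For instance, a small usage sketch of the decorated f above (expected Python 2 output shown in comments):

f(1, 2)       # prints: 1 2
f((1, 2))     # prints: 1 2 (the single tuple is unpacked by the decorator)
f(1, 2, 3)    # raises Exception("wrong number of arguments")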
The Pythonic way would be to use duck typing. This will only work if you are sure that none of the expanded arguments are going to be iterable.
def f(*args):
    def g(*args):
        # play with guaranteed expanded arguments
        pass
    if len(args) == 1:
        try:
            iter(args[0])
        except TypeError:
            pass
        else:
            return g(*args[0])
    return g(*args)
This implementation offers a slight improvement on @Ev.Kounis's answer for cases where a single non-tuple argument is passed in. It can also easily be turned into the equivalent decorator described by @WillemVanOnsem. Use the decorator version if you have more than one function like this.
I do not agree with the idea myself (even though I like the fact that Python does not require the definition of variable types and thus allows such a thing), but there might be cases where such a thing is needed. So here you go:
def my_f(a, *b):
    def whatever(my_tuple):
        # check tuple for conformity
        # do stuff with the tuple
        print(my_tuple)
        return
    if hasattr(a, '__iter__'):
        whatever(a)
    elif b:
        whatever((a,) + b)
    else:
        raise TypeError('malformed input')
    return
Restructured it a bit, but the logic stays the same: if a is an iterable, consider it to be your tuple; if not, take b into account as well. If a is not an iterable and b is empty, raise a TypeError.
my_f((1, 2, 3)) # (1, 2, 3)
my_f(1, 2, 3) # (1, 2, 3)

What does asterisk * mean in Python? [duplicate]

This question already has answers here:
What does ** (double star/asterisk) and * (star/asterisk) do for parameters?
Does * have a special meaning in Python as it does in C? I saw a function like this in the Python Cookbook:
def get(self, *a, **kw)
Would you please explain it to me or point out where I can find an answer (Google interprets the * as wild card character and thus I cannot find a satisfactory answer).
See Function Definitions in the Language Reference.
If the form *identifier is present, it is initialized to a tuple receiving any excess positional parameters, defaulting to the empty tuple. If the form **identifier is present, it is initialized to a new dictionary receiving any excess keyword arguments, defaulting to a new empty dictionary.
Also, see Function Calls.
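A tiny sketch of those documented defaults:

def f(*a, **kw):
    return a, kw

print(f())   # ((), {}) -- the empty tuple and the new empty dictionary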
Assuming that one knows what positional and keyword arguments are, here are some examples:
Example 1:
# Excess keyword argument (python 2) example:
def foo(a, b, c, **args):
    print "a = %s" % (a,)
    print "b = %s" % (b,)
    print "c = %s" % (c,)
    print args

foo(a="testa", d="excess", c="testc", b="testb", k="another_excess")
As you can see in the above example, we only have parameters a, b, c in the signature of the foo function. Since d and k are not present, they are put into the args dictionary. The output of the program is:
a = testa
b = testb
c = testc
{'k': 'another_excess', 'd': 'excess'}
Example 2:
# Excess positional argument (python 2) example:
def foo(a, b, c, *args):
    print "a = %s" % (a,)
    print "b = %s" % (b,)
    print "c = %s" % (c,)
    print args

foo("testa", "testb", "testc", "excess", "another_excess")
Here, since we're testing positional arguments, the excess ones have to be on the end, and *args packs them into a tuple, so the output of this program is:
a = testa
b = testb
c = testc
('excess', 'another_excess')
You can also unpack a dictionary or a tuple into arguments of a function:
def foo(a, b, c, **args):
    print "a=%s" % (a,)
    print "b=%s" % (b,)
    print "c=%s" % (c,)
    print "args=%s" % (args,)

argdict = dict(a="testa", b="testb", c="testc", excessarg="string")
foo(**argdict)
Prints:
a=testa
b=testb
c=testc
args={'excessarg': 'string'}
And
def foo(a, b, c, *args):
    print "a=%s" % (a,)
    print "b=%s" % (b,)
    print "c=%s" % (c,)
    print "args=%s" % (args,)

argtuple = ("testa", "testb", "testc", "excess")
foo(*argtuple)
Prints:
a=testa
b=testb
c=testc
args=('excess',)
I only have one thing to add that wasn't clear from the other answers (for completeness's sake).
You may also use the stars when calling the function. For example, say you have code like this:
>>> def foo(*args):
... print(args)
...
>>> l = [1,2,3,4,5]
You can pass the list l into foo like so...
>>> foo(*l)
(1, 2, 3, 4, 5)
You can do the same for dictionaries...
>>> def foo(**argd):
... print(argd)
...
>>> d = {'a' : 'b', 'c' : 'd'}
>>> foo(**d)
{'a': 'b', 'c': 'd'}
All of the above answers were perfectly clear and complete, but just for the record I'd like to confirm that the meaning of * and ** in python has absolutely no similarity with the meaning of similar-looking operators in C.
They are called the argument-unpacking and keyword-argument-unpacking operators.
A single star means that the variable 'a' will be a tuple of extra parameters that were supplied to the function. The double star means the variable 'kw' will be a variable-size dictionary of extra parameters that were supplied with keywords.
Although the actual behavior is spec'd out, it still sometimes can be very non-intuitive. Writing some sample functions and calling them with various parameter styles may help you understand what is allowed and what the results are.
def f0(a): ...
def f1(*a): ...
def f2(**a): ...
def f3(*a, **b): ...
etc...
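For example, one such quick experiment might look like this (a sketch, not from the original answer):

def f3(*a, **b):
    print('positional:', a, 'keyword:', b)

f3(1, 2, x=3)             # positional: (1, 2) keyword: {'x': 3}
f3(*[1, 2], **{'x': 3})   # the same call, built by unpacking at the call site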
I find * useful when writing a function that takes another callback function as a parameter:
def some_function(parm1, parm2, callback, *callback_args):
    a = 1
    b = 2
    ...
    callback(a, b, *callback_args)
    ...
That way, callers can pass in arbitrary extra parameters that will be passed through to their callback function. The nice thing is that the callback function can use normal function parameters. That is, it doesn't need to use the * syntax at all. Here's an example:
def my_callback_function(a, b, x, y, z):
    ...

x = 5
y = 6
z = 7
some_function('parm1', 'parm2', my_callback_function, x, y, z)
Of course, closures provide another way of doing the same thing without requiring you to pass x, y, and z through some_function() and into my_callback_function().
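A minimal sketch of that closure alternative (self-contained, with an illustrative some_function that only forwards two values to the callback):

def make_callback(x, y, z):
    # x, y and z are captured by the closure instead of being threaded
    # through some_function's *callback_args
    def callback(a, b):
        print(a, b, x, y, z)
    return callback

def some_function(parm1, parm2, callback):
    a = 1
    b = 2
    callback(a, b)

some_function('parm1', 'parm2', make_callback(5, 6, 7))   # prints: 1 2 5 6 7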
