optional arguments function - python

I am searching for how to use optional arguments in Python.
I have read this question, but it is not clear to me.
Let's say I have a function f that can take one or more arguments to analyse a time series. Am I obliged to specify the number of arguments and set a default value for each argument?
What I aim to do is to be able to write a function this way:
simple function:
def f(x, y):
    return x + y

# f(1, 2) returns 3
What I want is for f(1, 2, 3) to also return 6, and f(7) to return 7.
Is it possible to write it without setting a predefined number of mandatory/optional parameters?
Is it possible to write it without having to set default values of 0?
How to write this function?
It's a simple example with numbers, but the function I need to write compares a set of successive objects. After the comparison is done, the data set will feed a neural network.
Thanks for reading.
EDIT:
The objects I am feeding my function are tuples like this: (float, float, float, bool, string)

You can put *args in your function signature to accept an arbitrary number of (non-keyword) arguments. Inside the function, args is a tuple, so you can iterate over it like any Python tuple/list/iterable, i.e.:
def f(*args):
    the_sum = 0
    for arg in args:
        the_sum += arg
    return the_sum

print(f(1, 2, 3, 4))

def f(*args):
    """
    >>> f(1, 2)
    3
    >>> f(7)
    7
    >>> f(1, 2, 3)
    6
    >>> f(1, 2, 3, 4, 5, 6)
    21
    """
    return sum(args)
If you need to do something more complicated than sum, you could just iterate over args like this:
def f(*args):
    r = 0
    for arg in args:
        r += arg
    return r
See this question for more information on *args and **kwargs.
Also see this section of the Python tutorial: Arbitrary Argument Lists

You can use the following syntax:
def f(*args):
    return sum(args)
The * before args tells it to "swallow up" all positional arguments, making args a tuple. You can also mix this form with standard arguments, as long as *args goes last. For example:
def g(a, b, *args):
    return a * b * sum(args)
The first example uses the built-in sum function to total up the arguments. sum takes a sequence and adds it up for you:
>>> sum([1,3,5])
9
>>> sum(range(100))
4950
The name args is not mandatory, but it is used by convention, so it's best to stick with it. There is also **kwargs for arbitrary keyword arguments.
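As a quick illustration of **kwargs (a sketch; the function name is made up for this example): the double star collects any keyword arguments the caller passes into an ordinary dict.

```python
def describe(**kwargs):
    # kwargs arrives as a plain dict of whatever keyword arguments were passed
    return ", ".join(f"{key}={value}" for key, value in kwargs.items())

print(describe(color="red", size=42))  # color=red, size=42
```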

Related

Python equivalent of R's ... for matching function arguments

Within R, you can use a special argument ... when defining functions, which is described here:
This argument will match any arguments not otherwise matched, and can be easily passed on to other functions. This is useful if you want to collect arguments to call another function, but you don’t want to prespecify their possible names.
This means that if I write a function a that uses another function b within it, I don't need to include all of the possible arguments of b as arguments to a in order to use them.
In Python, is there anything equivalent?
There's no single construct, but a combination of * and ** parameters does the same, I believe.
def g(x, y, z):
    return x + y + z

def f(x, *args, **kwargs):
    print(x)
    print(args)
    print(kwargs)
    print(g(*args, **kwargs))

f(9, 1, 2, z=3)
produces
9
(1, 2)
{'z': 3}
6
as output.

What is the point of using *args when a list of arguments can be used?

Would passing in a list or dictionary of variables be more concise than passing in *args in Python methods?
For example,
def function(a, *argv):
    print('First variable:', a)
    for k in argv:
        print('Additional variable:', k)
is the same as
def function(a, list):
    print('First variable:', a)
    for k in list:
        print('Additional variable:', k)
except that a list is passed as the second argument. What I think using *args would often do is cause additional bugs in the program, because the argument list only needs to be longer than the number of mandatory arguments. Would anyone please explain situations where *args would be really helpful? Thanks
The first function accepts:
function('hello', 'I', 'am', 'a', 'function')
The second one won't. For the second you'd need:
function('hello', ['I', 'am', 'a', 'function'])
In principle, the first one is used when your function can take an arbitrary number of parameters (think: print), while the second one specifies that there is always a second parameter, which is an iterable (not necessarily a list, despite the parameter name).
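To illustrate that last point, the second form accepts any iterable as its second argument, not just a list (a hypothetical example reusing the question's function):

```python
def function(a, items):
    print('First variable:', a)
    for k in items:
        print('Additional variable:', k)

function('hello', ('I', 'am', 'a', 'tuple'))    # a tuple works
function('hello', (w.upper() for w in ['hi']))  # so does a generator
```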
Passing *args is useful when you have to extract only some (or none) of the arguments in the outer function and then pass the rest on to an inner function without knowing the details, e.g.:
def inner_func(x, y, z, *args, **kwargs):
    # do something with x, y, and z
    return output

def outer_func(a, b, *args, **kwargs):
    # do something with a and b
    # pass the remaining arguments to the inner function
    # without caring about the details
    output = inner_func(*args, **kwargs)
    # do something with output
    return
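A minimal runnable version of that forwarding pattern (the function names and the arithmetic are made up purely for illustration):

```python
def scale(x, y, factor=1):
    # the inner function does the real work
    return (x + y) * factor

def report(label, *args, **kwargs):
    # the outer function uses only `label` and forwards everything else untouched
    result = scale(*args, **kwargs)
    return f"{label}: {result}"

print(report("total", 2, 3, factor=10))  # total: 50
```

Note that `report` never has to name `x`, `y`, or `factor`; adding a parameter to `scale` would require no change to `report`.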
That is a fair question: why is *args (or **kwargs) needed when a list (or dict) could do the same task? The key reason is that the caller of a function may not know the number of arguments beforehand. I'll try to explain this with reference to the particular scenario you have shared.
Let's suppose that we have the function below, which finds the sum of all integers passed in. (I'm skipping the sum builtin for demonstration purposes, please bear with me.)
def do_add(*int_args):
    total = 0
    for num in int_args:
        total += num
    return total
And you want to call this with an unknown number of arguments, an unknown number of times.
If instead you had to send a list argument, the do_add function might look like this:
def do_add(int_list):
    total = 0
    for num in int_list:
        total += num
    return total

l1 = [1, 2, 3, 4, ... n]  # memory loaded with n int objects
do_add(l1)
l2 = [10, 20, ... n]
do_add(l2)
Firstly, you are loading memory with an additional list object created just for the sake of the function call. Secondly, if you have to add more items to the list, you need to call another list method such as append or extend.
But if you follow the *args approach, you avoid creating an extra list and focus only on the function call. If you need to add more arguments, you can just add another argument separated by a comma rather than calling append or extend.
Assume you call this function n times with 1000 arguments each: that results in n new 1000-element list objects being created. With the variable-arguments approach, you can just call it directly.
do_add(1, 2, 3) # call 1
do_add(10.0, 20.0, 30.0) # call 2
...
do_add(x, y, z, ..., m) # call n

python multiple output variables as input

def function_1(a, b, c, d):
    print('{}{}{}{}'.format(a, b, c, d))
    return

def function_2():
    t = y = u = i = 5
    return t, y, u, i

function_1(function_2())
I expected that Python would execute function_2 first and return each of t, y, u and i as inputs to function_1, but instead I get:
TypeError: function_1() missing 3 required positional arguments: 'b', 'c', and 'd'
I understand that either the output of function_2 is in a single object, or it is treating function_2 as an input function instead of executing it.
How do I change my code to execute as expected? (each of the output variables from function_2 treated as an input variable to function_1)
You need a splat operator.
function_1(*function_2())
function_2() returns a tuple, (5, 5, 5, 5). To pass that as a set of four parameters (rather than as one four-element tuple in a single parameter), you use the splat operator *.
Closely related is this question
This might be a slightly awkward way of doing it, but you can actually pass the function's return value as a single parameter and then unpack it!
def function_1(func):
    print('{}{}{}{}'.format(*func))
    return

def function_2():
    t = y = u = i = 5
    return t, y, u, i

function_1(function_2())
output:
5555

What is the fastest way to create a function if its creation depends upon some other input parameters?

I have an API that (unfortunately) supports too many ways for a user to give inputs. I have to create a function depending on the type of the inputs. Once the function has been created (let's call it foo), it is run many times (around 10^7 times).
The user first gives 3 inputs, A, B and C, that tell us about the types of the inputs given to our foo function. A can have three possible values, B can have four, and C can have five, giving me 60 possible permutations.
Example: if A is a list, then the first parameter for foo will be of type list, and similarly for B and C.
Now I cannot make these checks inside the function foo, for obvious reasons. Hence, I create a function create_foo that checks for all the possible combinations and returns a unique function foo. The drawback of this approach is that I have to write the foo function 60 times, once for each permutation.
An example of this approach -
def create_foo(A, B, C):
    if A == 'a1' and B == 'b1' and C == 'c1':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
            pass
        return foo
    if A == 'a2' and B == 'b1' and C == 'c1':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
            pass
        return foo
    if A == 'a3' and B == 'b1' and C == 'c1':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
            pass
        return foo
    .
    .
    . (60 times)
    .
    if A == 'a3' and B == 'b4' and C == 'c5':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
            pass
        return foo
The foo function parses the parameters differently every time, but then performs the same task after parsing.
Now I call the function
f = create_foo(A='a2', B='b3', C='c4')
Now foo is stored in f, and f is called many times. f is very time-efficient, but the problem with this approach is the messy code that comes from writing the foo function 60 times.
Is there a cleaner way to do this? I cannot compromise on performance, hence the new method must not take more time to evaluate than this one.
Currying lambda functions to handle all this takes more time than the above method because of the extra function calls; I cannot afford that.
A, B and C are not used by the function itself; they are only used for parsing values, and they do not change after the creation of foo.
For example, if A is of type list, no change is required, but if it is a dict, a function must be called to parse the dict into a list. A, B and C only tell us about the types of the parameters in *args.
It's not 100% clear what you want, but I'll assume A, B, and C are the same as the *args to foo.
Here are some thoughts toward possible solutions:
(1) I wouldn't take it for granted that your 60 if statements necessarily make an impact. If the rest of your function is at all computationally demanding, you might not even notice the proportional increase in running time from 60 if statements.
(2) You could parse/sanity-check the arguments once (slowly/inefficiently) and then pass the sanity-checked versions to a single function foo that runs 10^7 times.
(3) You could write a single function foo that handles every case, but only let it parse/sanity check arguments if a certain optional keyword is provided:
def foo(A, B, C, check=False):
    if check:
        pass  # TODO: implement sanity-checking on A, B and C here
    pass  # TODO: perform computationally-intensive stuff here
Call foo(..., check=True) once. Then call it without check=True 10^7 times.
(4) If you want to pass around multiple versions of the same callable function, each version configured with different argument values pre-filled, that's what functools.partial is for. This may be what you want, rather than duplicating the same code 60 times.
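As an illustration of point (4), functools.partial pre-fills arguments without duplicating any code (a sketch with a made-up function):

```python
from functools import partial

def power(base, exponent):
    return base ** exponent

# each partial is a new callable with `exponent` already filled in
square = partial(power, exponent=2)
cube = partial(power, exponent=3)

print(square(5))  # 25
print(cube(2))    # 8
```

Calling the partial adds one extra function call, so it is worth benchmarking against the hand-written version if the 10^7-call budget is tight.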
Whoa whoa whoa. You're saying you're duplicating code SIXTY TIMES in your function? No, that will not do. Here's what you do instead.
def coerce_args(f):
    def wrapped(a, b, c):
        if isinstance(a, list):
            pass
        elif isinstance(a, dict):
            a = list(a.items())  # turn it into a list
        # etc. for each argument
        return f(a, b, c)
    return wrapped

@coerce_args
def foo(a, b, c):
    """foo will handle a, b, c in a CONCRETE form"""
    # do stuff
Essentially you're building one decorator that's going to change a, b, and c into a known format for foo to handle the business logic on. You've already built an API where it's acceptable to call this in a number of different ways (which is bad to begin with), so you need to support it. Doing so means internally treating it as the same way every time, and providing a helper to coerce that.
Given your vague explanation, I will make some assumptions and work with that.
I will assume that A, B and C are mutually independent and that they are parsed to the same type. Given that, I offer this:
def _foo(*argv, parser_A=None, parser_B=None, parser_C=None):
    if parser_A is not None:
        pass  # parse your A arguments here
    if parser_B is not None:
        pass  # parse your B arguments here
    if parser_C is not None:
        pass  # parse your C arguments here
    # do something

def foo_maker(A, B, C):
    parser_A = None
    parser_B = None
    parser_C = None
    if A == "list":  # or isinstance(A, list)
        pass
    elif A == "dict":  # or isinstance(A, dict)
        # put your parser here, for example:
        parser_A = lambda x: x.items()
    ...
    # the same with B and C
    return lambda *argv: _foo(*argv, parser_A=parser_A, parser_B=parser_B, parser_C=parser_C)
A simple working example:
def _foo(*argv, parser_A=None, parser_B=None, parser_C=None):
    if parser_A is not None:
        argv = parser_A(argv)
    if parser_B is not None:
        argv = parser_B(argv)
    if parser_C is not None:
        argv = parser_C(argv)
    print(argv)

def foo_maker(A, B, C):
    pa = None
    pb = None
    pc = None
    if A == 1:
        pa = lambda x: (23, 32) + x
    if B == 2:
        pb = lambda x: list(map(str, x))
    if C == 3:
        pc = lambda x: set(x)
    return lambda *x: _foo(*x, parser_A=pa, parser_B=pb, parser_C=pc)
test
>>> f1=foo_maker(1,4,3)
>>> f2=foo_maker(1,2,3)
>>> f1(1,2,3,5,8)
{32, 1, 2, 3, 5, 8, 23}
>>> f2(1,2,3,5,8)
{'8', '23', '2', '3', '5', '1', '32'}
>>> f3=foo_maker(0,0,3)
>>> f3(1,2,3,5,8)
{8, 1, 2, 3, 5}
>>> f4=foo_maker(0,0,0)
>>> f4(1,2,3,5,8)
(1, 2, 3, 5, 8)
>>> f5=foo_maker(1,0,0)
>>> f5(1,2,3,5,8)
(23, 32, 1, 2, 3, 5, 8)
Because you don't specify what actually happens in each individual foo, there is no way to generalize how the foos are created. But there is a way to speed up the dispatch.
def create_foo(A, B, C, d={}):
    if len(d) == 0:
        def _111():
            def foo(*args):
                # calls some functions that parse the given parameters
                # does something
                pass
            return foo
        d[('a1', 'b1', 'c1')] = _111
        def _211():
            def foo(*args):
                # calls some functions that parse the given parameters
                # does something
                pass
            return foo
        d[('a2', 'b1', 'c1')] = _211
        ....
        def _345():
            def foo(*args):
                # calls some functions that parse the given parameters
                # does something
                pass
            return foo
        d[('a3', 'b4', 'c5')] = _345
    return d[(A, B, C)]()
The _XXX functions will not be evaluated each time you call create_foo; each foo is only created when requested at run time. Also, the default value of d is set when create_foo is defined (only once). So after the first run, the code inside the if statement will not be entered any more, and the dictionary will have all the functions ready to initialize and return your foos.
Edit: if the behavior in each foo is the same (as the edit to the question seems to suggest), then rather than passing the types of foo's parameters in A, B and C, maybe it's better to pass in conversion functions? Then the whole thing becomes:
def create_foo(L1, L2, L3):
    def foo(a, b, c):
        a = L1(a)
        b = L2(b)
        c = L3(c)
        # does something
    return foo
If you want to (for example) convert all 3 parameters to sets from lists, then you can call it as:
f = create_foo(set, set, set)
If you want foo to add 1 to 1st parameter, subtract 5 from second and multiply 3rd by 4, you'd say
f = create_foo(lambda x: x + 1, lambda x: x - 5, lambda x: x * 4)
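A runnable sketch of that idea; since the real "does something" isn't specified, this placeholder body simply returns the converted values so the effect of the conversions is visible:

```python
def create_foo(L1, L2, L3):
    def foo(a, b, c):
        # apply the per-parameter conversions chosen at creation time,
        # then "do something" -- here we just return the converted values
        return L1(a), L2(b), L3(c)
    return foo

f = create_foo(set, str, lambda x: x * 4)
print(f([1, 1, 2], 10, 3))  # ({1, 2}, '10', 12)
```

The 60-way dispatch disappears because each conversion is chosen independently per parameter.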

Most pythonic way to write a function to either pass in arguments or a tuple of arguments

What is a most pythonic way to write a function to either pass in arguments or a tuple/list of arguments?
For example, a function add could either take in an argument of add(1, 2) or add((1, 2)) and both output 3.
What I have so far: (it works, but does not look nice)
def add(*args):
    if len(args) == 1:
        return args[0][0] + args[0][1]
    if len(args) == 2:
        return args[0] + args[1]
    else:
        print("error: add takes in one or two arguments")
What I don't like about it is:
I have to print the error about passing in one or two arguments
The args[0][0] looks very unreadable
This way, it is hard to tell what the arguments passed in represent (they don't have names)
I don't know if this is the most "Pythonic" way, but it will do what you want:
def add(a, b=None):
    return a + b if b is not None else sum(a)
If your function takes a specific number of arguments, then the most Pythonic way to do this is not to do it. Rather, if the user has a tuple with the arguments, you make them unpack it when they call the function. E.g.:
def add(a, b):
    return a + b
Then the caller can do
add(1,2)
or
t = (1, 2)
add(*t)
The only time you want to accept either a sequence of params or individual params is when you can have any arbitrary (non-zero) number of arguments (e.g. the max and min builtin functions) in which case you'd just use *args
If you can only take a finite number of arguments, it makes more sense to ask for those specifically. If you can accept an arbitrary number of arguments, then the *args paradigm works well if you loop through it. Mixing and matching those two aren't very elegant.
def add(*args):
    total = 0
    for i in args:
        total += i
    return total
>>> add(1, 2, 3)
6
(I know we could just use sum() there, but I'm trying to make it look a bit more general)
In the spirit of Python duck typing, if you see one argument, assume it's something that expands to two arguments. If there are two, assume they are two things to add together. If the input violates your rules, raise an exception, as Python would on a bad function call.
def add(*args):
    if len(args) == 1:
        args = args[0]
    if len(args) != 2:
        raise TypeError("add takes 2 arguments or a tuple of 2 arguments")
    return args[0] + args[1]
A decorator would be best suited for this job.
from functools import wraps

def tupled_arguments(f):
    @wraps(f)  # keeps name, docstring etc. of f
    def accepts_tuple(tup, *args):
        if not args:  # only one argument given
            return f(*tup)
        return f(tup, *args)
    return accepts_tuple

@tupled_arguments
def add(a, b):
    return a + b
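With the decorator applied, both calling conventions work; the decorator is restated here so the example runs on its own:

```python
from functools import wraps

def tupled_arguments(f):
    @wraps(f)  # keeps the name, docstring etc. of f
    def accepts_tuple(tup, *args):
        if not args:  # a single argument: treat it as the tuple of arguments
            return f(*tup)
        return f(tup, *args)
    return accepts_tuple

@tupled_arguments
def add(a, b):
    return a + b

print(add(1, 2))    # 3
print(add((1, 2)))  # 3
```

A mismatched call such as add((1, 2, 3)) raises a TypeError from f itself, which addresses the asker's complaint about printing an error message by hand.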
