I can assign a sequence like this in python:
a,b,c="ABC"
But I am unable to pass this sequence to a function as a parameter, i.e.:
def function2(a, b, c):
    print a
    print b
    print c

function2("ABC")
The call above raises an error.
Can anyone tell me the difference between assignment and argument passing in Python?
The compiler sees a comma-separated list on the LHS and emits bytecode to iterate over the RHS for you. With the function call it sees a single value and so sends it as a single argument. You need to tell it to split the sequence explicitly:
>>> function2(*"ABC")
A
B
C
The function you created takes 3 parameters: a, b, and c. But when you call it, you provide only one argument.
To correctly call your function, you would need to do something like
function2("A","B","C")
and it would print
A
B
C
Since no one's mentioned this yet: a, b, c = "ABC" assigns a = "A", b = "B" and c = "C" because a string also functions as an iterable of sorts; that is, you can loop over it, check for characters in it, and so on, just like with the elements of a list.
It's similar to the code
a, b = 1, 2
Python interprets that as
(a, b) = (1, 2)
and the compiler detects 2 variables on the left and so makes 2 assignments. When you unpack a length-3 string into 3 variables, it works the same way:
(a, b, c) = ("A", "B", "C")
However, as the others have said, in function argument passing, function2(a, b, c) must receive 3 arguments, and "ABC" is only one object: a string, so the call fails because only 1 argument is supplied for 3 parameters. To get the same behavior as a, b, c = ("A", "B", "C"), you unpack it with a *, that is:
function2(*"ABC") # is the same as function2("A", "B", "C")
The * unpacking works for any iterable: lists, strings, dicts (their keys), etc.
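For example, assuming the three-parameter function2 from the question is already defined, each of these calls spreads one iterable across the three parameters (the dict line only passes its keys):
function2(*[1, 2, 3])                 # unpacks a list
function2(*("A", "B", "C"))           # unpacks a tuple
function2(*"ABC")                     # unpacks a string
function2(*{"a": 1, "b": 2, "c": 3})  # unpacks the dict's keys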
Functions can have optional parameters, specified as parameters with default argument values. Consider the function
def function2(a, b=None, c=None):
    print a
    print b
    print c
Which of the following should function2("ABC") output?
ABC
None
None
or
A
B
C
Python opts for the first choice: rather than examining whether some of the parameters have default values, it treats an iterable value as a single value for argument passing, and the special syntax
function2(*"ABC")
provides the second.
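To see both behaviours side by side with the function2 defined above:
>>> function2("ABC")
ABC
None
None
>>> function2(*"ABC")
A
B
C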
Related
Is it possible for one function to take as an argument the name of one of the optional arguments of a second function, and then call the second function with that optional argument set to the value of some variable in the first function?
Code (that obviously doesn't work)
def foo(a=1, b=2, c=3):
    d = a + b + c
    return d

def bar(variable):
    z = get_user_input()
    e = foo(variable=z)
    return e
print(bar(a))
The desired result is for bar to call foo(a=z) and print whatever z+2+3 is. Of course Python doesn't know what (a) is here. My guess is that I can somehow reference the list of arguments of foo(), but I am stumped as to how to do that.
Maybe try the code snippet below
def foo(a=1, b=2, c=3):
    d = a + b + c
    return d

def bar(variable: str):
    z = int(input())
    e = foo(**{variable: z})
    return e

# The variable name should be passed as a string
print(bar("a"))
The ** unpacks a dict into keyword arguments, in this case a. Be careful though: passing a wrong variable name (anything other than a, b or c) will raise an error.
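For example, calling foo directly with a dict shows both the working case and the failure (a small sketch; the numbers are arbitrary):
>>> foo(**{"a": 10})
15
>>> foo(**{"x": 10})
Traceback (most recent call last):
  ...
TypeError: foo() got an unexpected keyword argument 'x'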
I was wondering about variable assignments and why this is allowed:
a = 1, 2
a = b = 1
but this is not allowed:
a, b = 1
What is the logic behind this?
Thank you
I'm going to assume you might be familiar with a language like C/C++, which is statically typed. This means that the type of a variable must be declared when initialising it (e.g. you'd write int a;).
In C/C++, the syntax you are attempting is valid, as in int a, b = 1; (for example), because we're initialising two variables, a and b, as integers and assigning the value 1 to the second one.
However, Python is a dynamically typed language - the type of the variable does not need to be declared. Thus, when we do a, b = 1, we're actually using a feature Python has which is called "unpacking". Python is trying to unpack 1 to the variables a and b - but this is not possible since 1 is just a single piece of data - it's not a list or a tuple or whatever.
Because Python is dynamically typed, we cannot just declare a variable without giving it a value (like we do in C with int a;). When you write a, b = 1, Python tries to iterate through 1 and assign its contents to the variables a and b. Hence the error TypeError: 'int' object is not iterable.
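A minimal interactive illustration (the exact error wording depends on the Python version):
>>> a, b = 1
Traceback (most recent call last):
  ...
TypeError: 'int' object is not iterable
>>> a, b = 1, 2
>>> a, b
(1, 2)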
The left and right side are not symmetric. In
a = 1, 2
Python packs the right-hand side values: the two comma-separated values create a tuple, so this is equivalent to a = (1, 2)
With
a, b = 1
Python tries to do unpacking. It needs to iterate over the right-hand side to get a value for a and another for b, but the value 1 is not iterable, so this fails with TypeError: 'int' object is not iterable.
You should write something like a, b = 1, 2.
In the first case Python makes a a tuple of 1 and 2:
>>> a = 1, 2
>>> a
(1, 2)
But with a, b = 1 you want to give values to both a and b, so there must be two values for them, yet you're only providing one, i.e. 1. If you have an iterable of length 2, it works:
>>> a, b = [6, 7]
>>> a
6
>>> b
7
In Python, when you write two numbers/strings separated by a comma, the interpreter treats them as a tuple.
a = 1, 2
In the line above you are creating a tuple object called a.
a, b = 1
In the line above the left-hand side uses tuple syntax, so a tuple value is expected on the right. That is why
a, b = 1, 1
works.
I have an API that supports (unfortunately) too many ways for a user to give inputs. I have to create a function depending upon the type of inputs. Once the function has been created (let's call it foo), it is run many times (around 10^7 times).
The user first gives 3 inputs, A, B and C, that tell us the types of the inputs given to our foo function. A can have three possible values, B four and C five, giving me 60 possible combinations.
Example: if A is a list, then the first parameter for foo will be of type list, and similarly for B and C.
Now I cannot make these checks inside the function foo for obvious reasons. Hence, I create a function create_foo that checks for all the possible combinations and returns a unique function foo. But the drawback of this approach is that I have to write the foo function 60 times, once per combination.
An example of this approach -
def create_foo(A, B, C):
    if A == 'a1' and B == 'b1' and C == 'c1':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
        return foo
    if A == 'a2' and B == 'b1' and C == 'c1':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
        return foo
    if A == 'a3' and B == 'b1' and C == 'c1':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
        return foo
    .
    .
    .
    . (60 times)
    .
    .
    .
    if A == 'a3' and B == 'b4' and C == 'c5':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
        return foo
The foo function parses the parameters differently every time, but then it performs the same task after parsing.
Now I call the function
f = create_foo(A='a2',B='b3',C='c4')
Now foo is stored in f, and f is called many times. f is very time efficient, but the problem with this approach is the messy code, which involves writing the foo function 60 times.
Is there a cleaner way to do this? I cannot compromise on performance, so the new method must not take more time to evaluate than this one.
Currying lambda functions to handle all this takes more time than the above method because of the extra function calls, which I cannot afford.
A, B and C are not used by the function itself; they are only used to decide how to parse the values, and they do not change after the creation of foo.
For example, if A is of type list, no change is required, but if it is a dict, foo needs to call a function that converts the dict to a list. A, B and C only tell us about the types of the parameters in *args.
It's not 100% clear what you want, but I'll assume A, B, and C are the same as the *args to foo.
Here are some thoughts toward possible solutions:
(1) I wouldn't take it for granted that your 60 if statements necessarily make an impact. If the rest of your function is at all computationally demanding, you might not even notice the proportional increase in running time from 60 if statements.
(2) You could parse/sanity-check the arguments once (slowly/inefficiently) and then pass the sanity-checked versions to a single function foo that runs 10^7 times.
(3) You could write a single function foo that handles every case, but only let it parse/sanity check arguments if a certain optional keyword is provided:
def foo(A, B, C, check=False):
    if check:
        pass  # TODO: implement sanity-checking on A, B and C here
    pass  # TODO: perform computationally-intensive stuff here
Call foo(..., check=True) once. Then call it without check=True 10^7 times.
(4) If you want to pass around multiple versions of the same callable function, each version configured with different argument values pre-filled, that's what functools.partial is for. This may be what you want, rather than duplicating the same code 60 times.
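A minimal sketch of option (4), assuming a hypothetical foo whose first argument is pure configuration:
import functools

def foo(parser, *args):
    # 'parser' is configuration; only *args changes between calls
    return parser(args)

f = functools.partial(foo, list)  # configure once
print(f(1, 2, 3))                 # [1, 2, 3] -- the pre-filled parser is reused on every call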
Whoa whoa whoa. You're saying you're duplicating code SIXTY TIMES in your function? No, that will not do. Here's what you do instead.
def coerce_args(f):
    def wrapped(a, b, c):
        if isinstance(a, list):
            pass
        elif isinstance(a, dict):
            a = list(a.items())  # turn it into a list
        # etc. for each argument
        return f(a, b, c)
    return wrapped

@coerce_args
def foo(a, b, c):
    """foo will handle a, b, c in a CONCRETE form"""
    # do stuff
Essentially you're building one decorator that's going to change a, b, and c into a known format for foo to handle the business logic on. You've already built an API where it's acceptable to call this in a number of different ways (which is bad to begin with), so you need to support it. Doing so means internally treating every call the same way, and providing a helper that coerces the arguments into that form.
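With the decorator above in place, a caller can keep passing either form while foo only ever sees the concrete one; a quick usage sketch (the values are arbitrary):
@coerce_args
def foo(a, b, c):
    """foo always sees a as a list here"""
    return a, b, c

print(foo([1, 2], "x", "y"))    # ([1, 2], 'x', 'y')
print(foo({"k": 1}, "x", "y"))  # ([('k', 1)], 'x', 'y')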
Given your vague explanation I will make some assumptions and work with that.
I will assume that A, B and C are mutually independent, and that they are each parsed to the same type; then I give you this:
def _foo(*argv, parser_A=None, parser_B=None, parser_C=None):
    if parser_A is not None:
        # parse your A arguments here
    if parser_B is not None:
        # parse your B arguments here
    if parser_C is not None:
        # parse your C arguments here
    # do something

def foo_maker(A, B, C):
    parser_A = None
    parser_B = None
    parser_C = None
    if A == "list":  # or isinstance(A, list)
        pass
    elif A == "dict":  # or isinstance(A, dict)
        # put your parser here, for example:
        parser_A = lambda x: x.items()
    ...
    # the same with B and C
    return lambda *argv: _foo(*argv, parser_A=parser_A, parser_B=parser_B, parser_C=parser_C)
A simple working example:
def _foo(*argv, parser_A=None, parser_B=None, parser_C=None):
    if parser_A is not None:
        argv = parser_A(argv)
    if parser_B is not None:
        argv = parser_B(argv)
    if parser_C is not None:
        argv = parser_C(argv)
    print(argv)

def foo_maker(A, B, C):
    pa = None
    pb = None
    pc = None
    if A == 1:
        pa = lambda x: (23, 32) + x
    if B == 2:
        pb = lambda x: list(map(str, x))
    if C == 3:
        pc = lambda x: set(x)
    return lambda *x: _foo(*x, parser_A=pa, parser_B=pb, parser_C=pc)
Test:
>>> f1=foo_maker(1,4,3)
>>> f2=foo_maker(1,2,3)
>>> f1(1,2,3,5,8)
{32, 1, 2, 3, 5, 8, 23}
>>> f2(1,2,3,5,8)
{'8', '23', '2', '3', '5', '1', '32'}
>>> f3=foo_maker(0,0,3)
>>> f3(1,2,3,5,8)
{8, 1, 2, 3, 5}
>>> f4=foo_maker(0,0,0)
>>> f4(1,2,3,5,8)
(1, 2, 3, 5, 8)
>>> f5=foo_maker(1,0,0)
>>> f5(1,2,3,5,8)
(23, 32, 1, 2, 3, 5, 8)
>>>
Because you don't specify what actually happens in each individual foo, there is no way to generalize how the foos are created. But there is a way to speed up picking the right one.
def create_foo(A, B, C, d={}):
    if len(d) == 0:
        def _111():
            def foo(*args):
                # calls some functions that parse the given parameters
                # does something
            return foo
        d[('a1', 'b1', 'c1')] = _111
        def _211():
            def foo(*args):
                # calls some functions that parse the given parameters
                # does something
            return foo
        d[('a2', 'b1', 'c1')] = _211
        ....
        def _345():
            def foo(*args):
                # calls some functions that parse the given parameters
                # does something
            return foo
        d[('a3', 'b4', 'c5')] = _345
    return d[(A, B, C)]()
The bodies of the _XXX functions are not executed every time you call create_foo; each foo is only built when its factory is looked up and called. Also, the default value of d is created when create_foo is defined (only once). So after the first call, the code inside the if statement is not entered anymore, and the dictionary already holds all the factory functions, ready to build and return your foos.
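The trick relies on a mutable default argument being evaluated once, at definition time, and then shared between calls, so it can act as a cache. A tiny standalone illustration of just that mechanism (the names here are made up):
def cached(key, d={}):
    if len(d) == 0:
        # this block only runs on the first call; afterwards d is already populated
        d['x'] = lambda: 'X'
        d['y'] = lambda: 'Y'
    return d[key]()

print(cached('x'))  # first call fills the dict, then returns 'X'
print(cached('y'))  # the same dict is reused, returns 'Y'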
Edit: if the behavior in each foo is the same (as the edit of the question seems to suggest), then rather than passing the types of foo's parameters in A, B and C, maybe it's better to pass in conversion functions? Then the whole thing becomes:
def create_foo(L1, L2, L3):
    def foo(a, b, c):
        a = L1(a)
        b = L2(b)
        c = L3(c)
        # does something
    return foo
If you want to (for example) convert all 3 parameters to sets from lists, then you can call it as:
f = create_foo(set,set,set)
If you want foo to add 1 to the 1st parameter, subtract 5 from the 2nd, and multiply the 3rd by 4, you'd say:
f = create_foo(lambda x:x+1, lambda x:x-5, lambda x:x*4)
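Spelled out with a trivial body so it actually runs (the tuple return here is just a stand-in for the real work):
def create_foo(L1, L2, L3):
    def foo(a, b, c):
        a = L1(a)
        b = L2(b)
        c = L3(c)
        return a, b, c  # stand-in for the real work
    return foo

f = create_foo(set, set, set)
print(f([1, 1], [2], [3]))  # three sets: {1}, {2}, {3}

g = create_foo(lambda x: x + 1, lambda x: x - 5, lambda x: x * 4)
print(g(1, 2, 3))           # (2, -3, 12)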
I'm looking to strip the whitespace from each of the arguments of a function that takes a bunch of required strings, and I don't want to use **kwargs, which defeats the purpose of required arguments.
def func(a, b, c):
    for argument, argument_value in sorted(list(locals().items())):
        print(argument, ':', argument_value)
        argument_value = ' '.join(argument_value.split())
        print(argument, ':', argument_value)
    print('a is now:', a)

func(a=' a test 1 ', b=' b test 2 ', c='c test 3')
Output
a : a test 1
a : a test 1
b : b test 2
b : b test 2
c : c test 3
c : c test 3
a is now: a test 1
Desired output for the original 'a' argument:
a is now : a test 1
Being a newb, I cobbled this together and then read the Python documentation, which clearly states:
locals()
Update and return a dictionary representing the current local symbol table. Free variables are returned by locals() when it is
called in function blocks, but not in class blocks.
Note
The contents of this dictionary should not be modified; changes may not affect the values of local and free variables used by the
interpreter.
What is the right way to do what I'm attempting here?
You can use a decorator to do that kind of task.
The idea is to mask the real function behind a decorator that will take generic arguments, do modifications "on them" (actually create new variables containing the modifications) and pass the modified arguments to the real function.
def strip_blanks(f):
    def decorated_func(*args, **kwargs):
        # Strip blanks from non-keyword arguments
        new_args = [" ".join(arg.split()) for arg in args]
        # Strip blanks from keyword arguments
        new_kwargs = {key: " ".join(arg.split()) for key, arg in kwargs.items()}
        # Pass the modified arguments to the decorated function
        # and forward its result in case it is needed
        return f(*new_args, **new_kwargs)
    return decorated_func

@strip_blanks
def func(a, b, c):
    for i in a, b, c:
        print(i)
Then you'd get
>>> func(a = " foo bar", b = "baz boz", c = "biz buz ")
foo bar
baz boz
biz buz
>>> func(" foo bar", "baz boz", "biz buz ")
foo bar
baz boz
biz buz
>>> func(a = " foo bar", b = "baz boz", c = "biz buz ", d = " ha ha")
Traceback (most recent call last):
File "<pyshell#40>", line 1, in <module>
func(a = " foo bar", b = "baz boz", c = "biz buz ", d = " ha ha")
File "<pyshell#35>", line 5, in decorated_func
f(*new_args, **new_kwargs)
TypeError: func() got an unexpected keyword argument 'd'
>>>
I would start by changing your definition to def func(**kwargs). This takes whatever keyword arguments you provide and adds them to a dictionary. For example:
def func(**kwargs):
    for key in kwargs:
        print key, kwargs[key]
>>> func(a='hello', b='goodbye')
a hello
b goodbye
>>> func()
>>>
As you can see, it works with no arguments as well (nothing to print). From there, have a look at the string method strip.
EDIT:
You're giving some pretty arbitrary restrictions. So, what you want is:
...a specific number of arguments with specific names.
...to loop over the arguments.
...perform some work on each one.
The fastest way to do what you want is with locals(), I think. I'm guessing what you're balking at is the bit about the contents of the dictionary not being modified. This isn't a concern here, as you're looping over a list of tuples representing the keys and values from the locals dictionary. When you do for argument, argument_value in ____ you are unpacking the tuples and assigning one value to each of those names. When you then do argument_value = 'blahblah' you are assigning a new string to argument_value. Strings are immutable, so you can't change the value "in place". You aren't changing the value in the dictionary, as you haven't assigned anything to the dictionary's key.
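In other words, if you want the names a, b and c themselves to end up pointing at the cleaned strings, the direct fix is to rebind them explicitly rather than go through the locals dictionary; a minimal sketch:
def func(a, b, c):
    # rebind each parameter to its whitespace-normalised version
    a, b, c = (' '.join(v.split()) for v in (a, b, c))
    print('a is now:', a)

func(a=' a   test  1 ', b=' b test 2 ', c='c test 3')
# a is now: a test 1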
I'm trying to do the following:
def f(a=(b, c, d)):
    pass
But the interpreter complains that b is not defined when I do this. Is there a "Pythonic" way of getting the intended result? I know that I could do something like the following:
def f((b, c, d)):
    a = (b, c, d)
    pass
But I'd rather a solution that didn't require me to repeat myself. Any ideas?
Edit for clarification: What I am trying to do is have a function that can be called as follows:
f( (1,2,3) )
Then, within the body of the function, the following names are assigned:
a = (1,2,3)
b = 1
c = 2
d = 3
There is no way to do precisely what you want. Moreover, tuple unpacking in formal function arguments is going the way of the dodo in Python 3. The suggested replacement is to change
def f((a, b, c)):
    ...
to
def f(a_b_c):
    a, b, c = a_b_c
    ...
(That's the style of new argument name the 2to3 conversion script would generate; obviously you can use whatever sort of name you want.)
In your case, the simplest thing to do would be this:
def f(a):
    b, c, d = a
    ...
That has the minimum repetition.
Instead of having the function accept a tuple, why not have the caller unpack the tuple?
>>> def f(a, b, c):
        print a, b, c
>>> t = (1, 2, 3)
>>> f(*t)
1 2 3
Otherwise, consider using a named tuple (collections.namedtuple).
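A small sketch of that last suggestion (the Point name is just illustrative); it keeps the single tuple argument but gives named access to its parts:
from collections import namedtuple

Point = namedtuple('Point', ['b', 'c', 'd'])

def f(a):
    a = Point(*a)        # a.b, a.c and a.d are now available; a is still a tuple
    print a.b, a.c, a.d

f((1, 2, 3))             # prints: 1 2 3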