Context
I use CherryPy to serve a simple webpage that shows different content based on the URL parameters. Specifically, it takes the sum of the parameters and shows a different message based on that.
In CherryPy, webpages can be defined as functions, and URL parameters are passed as arguments to that function.
As explained in this tutorial, URL parameters are passed as strings, so to calculate the sum I want to convert each argument to a float. I will have many URL parameters, so converting them one by one seems cumbersome.
How can I type-convert (a large number of) arguments in place?
What I've tried
Dumb
The "dumb" approach would be to simply take each argument and re-assign it as a float:
def dumb(a="0", b="0", c="0", d="0", e="0", f="0", g="0"):
    a = float(a)
    b = float(b)
    c = float(c)
    d = float(d)
    e = float(e)
    f = float(f)
    g = float(g)
    return print(sum([a, b, c, d, e, f, g]))
It's readable, but rather repetitive and not very "pythonic".
Loop over locals()
Another option I found is to assign the locals to a dictionary, then loop over it and fetch the values from the dict.
def looping_dict(a="0", b="0", c="0", d="0", e="0", f="0", g="0"):
    args = locals()
    for key in args:
        if key in ["a", "b", "c", "d", "e", "f", "g"]:
            args[key] = float(args[key])
    return print(sum([args["a"], args["b"], args["c"], args["d"], args["e"], args["f"], args["g"]]))
This is a bit annoying, as I have to reference the dictionary every time: a simple reference to d becomes args["d"]. It doesn't help code readability either.
This is only documented in the changelog, but since 2016, cherrypy >= 6.2.0 has shipped a @cherrypy.tools.params() tool doing exactly what you want (provided that you use a Python 3 version supporting type annotations):
import cherrypy

@cherrypy.tools.params()
def your_http_handler(
    a: float = 0, b: float = 0,
    c: float = 0, d: float = 0,
    e: float = 0, f: float = 0,
    g: float = 0,
):
    return str(a + b + c + d + e + f + g)
The PR that added it is PR #1442 — you can explore the usage by looking at the tests there.
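For completeness, here is a minimal sketch of how such a handler might be mounted as an application (the Root class and total method names are mine; it assumes cherrypy >= 6.2.0):

import cherrypy

class Root:
    @cherrypy.expose
    @cherrypy.tools.params()
    def total(self, a: float = 0, b: float = 0, c: float = 0):
        # The params tool converts the query-string values to floats
        # based on the annotations before the handler body runs.
        return str(a + b + c)

if __name__ == '__main__':
    cherrypy.quickstart(Root())  # GET /total?a=1&b=2.5 should return "3.5"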
If your Python is old for some reason, you could do:
import cherrypy

def your_http_handler(**kwargs):
    # Validate that the right query args are present in the HTTP request:
    if kwargs.keys() ^ {'a', 'b', 'c', 'd', 'e', 'f', 'g'}:
        raise cherrypy.HTTPError(400, message='Got invalid args!')

    numbers = (float(num) for num in kwargs.values())  # a generator expression won't raise conversion errors here
    try:
        return str(sum(numbers))  # this actually runs the `float()` conversions, so we need to catch a `ValueError`
    except ValueError as val_err:
        raise cherrypy.HTTPError(
            400,
            message='All args should be valid numbers: {exc!s}'.format(exc=val_err),
        )
P.S. In your initial post you use return print(...), which is wrong: print() always returns None, so you'd be sending "None" back to the HTTP client, while the argument of print(arg) would just be printed to the terminal where you run the server.
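In other words, return the string itself; a corrected version of the original handler could look like:

def dumb(a="0", b="0", c="0", d="0", e="0", f="0", g="0"):
    total = sum(float(v) for v in (a, b, c, d, e, f, g))
    return str(total)  # return the string instead of `return print(total)`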
Here's a @convert decorator I've used before for something similar (originally inspired by https://stackoverflow.com/a/28268292/4597523):
import functools
import inspect

def convert(*to_convert, to):
    def actual_convert(fn):
        arg_names = inspect.signature(fn).parameters.keys()

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            args_converted = [to(arg) if arg_name in to_convert else arg
                              for arg, arg_name in zip(args, arg_names)]
            kwargs_converted = {kw_name: to(val) if kw_name in to_convert else val
                                for kw_name, val in kwargs.items()}
            return fn(*args_converted, **kwargs_converted)
        return wrapper
    return actual_convert
@convert('a', 'c', 'd', to=str)
def f(a, b, c=5, *, d, e=0):
    return a, b, c, d, e

print(f(1, 2, d=7))
# Output: ('1', 2, 5, '7', 0)
# Passed params `a` and `d` got changed to `str`,
# however `c` used the default value without conversion
It uses inspect.signature to get the positional arg names. I am not sure how CherryPy passes the params or how it gets the names, but this might be a solid start. Using functools.wraps is important: it makes sure the original function's signature is preserved, which seems to matter for CherryPy.
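As a sketch of how it might serve the original question (the handler name index is mine, and float defaults are used on purpose, since the decorator leaves defaults unconverted):

@convert('a', 'b', 'c', 'd', 'e', 'f', 'g', to=float)
def index(a=0.0, b=0.0, c=0.0, d=0.0, e=0.0, f=0.0, g=0.0):
    return str(sum([a, b, c, d, e, f, g]))

print(index("1", "2.5"))  # the strings are converted, so this prints 3.5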
Is there a Pythonic way to create a function that accepts both separate arguments and a tuple? I.e., to achieve something like this:
def f(*args):
    """prints 2 values

    f(1, 2)
    1 2
    f((1, 2))
    1 2
    """
    if len(args) == 1:
        if len(args[0]) != 2:
            raise Exception("wrong number of arguments")
        else:
            print(args[0][0], args[0][1])
    elif len(args) == 2:
        print(args[0], args[1])
    else:
        raise Exception("wrong number of arguments")
First of all, I don't know whether it is very wise to do so. Say a person calls the function like:
f(*((1,4),(2,5)))
As you can see, the tuple contains two elements. But now, for some reason, the person calls it with only one element (because, for instance, the generator did not generate two elements):
f(*((1,4),))
Then the user would likely want your function to report this, but now it will simply accept it (which can lead to complicated and unexpected behavior). Okay, printing the elements will of course not do much harm, but in the general case the consequences might be more severe.
Nevertheless, an elegant way to do this is to write a simple decorator that first checks whether a single tuple element is fed, and if so expands it:
def one_tuple(f):
    def g(*args):
        if len(args) == 1 and isinstance(args[0], tuple):
            return f(*args[0])
        else:
            return f(*args)
    return g
And apply it to your f:
@one_tuple
def f(*args):
    if len(args) == 2:
        print(args[0], args[1])
    else:
        raise Exception("wrong number of arguments")
The decorator one_tuple thus checks if one tuple is fed, and if so unpacks it for you before passing it to your f function.
As a result f does not have to take the tuple case into account: it will always be fed expanded arguments and handle these (of course the opposite could be done as well).
The advantage of defining a decorator is its reuse: you can apply this decorator to all kinds of functions (and thus make it easier to implement these).
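For example, the same decorator could be reused for a two-argument add (an illustrative function, not from the question):

@one_tuple
def add(*args):
    if len(args) != 2:
        raise Exception("wrong number of arguments")
    return args[0] + args[1]

print(add(1, 2))    # 3
print(add((1, 2)))  # 3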
The Pythonic way would be to use duck typing. This will only work if you are sure that none of the expanded arguments are going to be iterable.
def f(*args):
    def g(*args):
        # play with guaranteed expanded arguments
        pass
    if len(args) == 1:
        try:
            iter(args[0])
        except TypeError:
            pass
        else:
            return g(*args[0])
    return g(*args)
This implementation offers a slight improvement on @Ev.Kounis's answer for cases where a single non-tuple argument is passed in. It can also easily be turned into the equivalent decorator described by @WillemVanOnsem. Use the decorator version if you have more than one function like this.
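A sketch of that decorator form, under the same assumption that none of the expanded arguments is itself iterable (the name expand_single_iterable is mine):

def expand_single_iterable(f):
    def wrapper(*args):
        if len(args) == 1:
            try:
                iter(args[0])
            except TypeError:
                pass  # a single non-iterable argument: fall through unchanged
            else:
                return f(*args[0])  # a single iterable: unpack it
        return f(*args)
    return wrapper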
I do not agree with the idea myself (even though I like the fact that Python does not require the declaration of variable types and thus allows such a thing), but there might be cases where such a thing is needed. So here you go:
def my_f(a, *b):
    def whatever(my_tuple):
        # check tuple for conformity
        # do stuff with the tuple
        print(my_tuple)
        return
    if hasattr(a, '__iter__'):
        whatever(a)
    elif b:
        whatever((a,) + b)
    else:
        raise TypeError('malformed input')
    return
Restructured it a bit, but the logic stays the same: if a is an iterable, consider it to be your tuple; if not, take b into account as well. If a is not an iterable and b is empty, raise a TypeError.
my_f((1, 2, 3)) # (1, 2, 3)
my_f(1, 2, 3) # (1, 2, 3)
I have an API that supports (unfortunately) too many ways for a user to give inputs. I have to create a function depending upon the type of the inputs. Once the function has been created (let's call it foo), it is run many times (around 10^7 times).
The user first gives 3 inputs, A, B and C, that tell us about the types of input given to our foo function. A can have three possible values, B can have four and C can have five, giving me 60 possible combinations.
Example: if A is a list, then the first parameter for foo will be of type list, and similarly for B and C.
Now I cannot make these checks inside the function foo, for obvious reasons. Hence, I create a function create_foo that checks for all the possible combinations and returns a unique function foo. The drawback of this approach is that I have to write the foo function 60 times, once for each combination.
An example of this approach:
def create_foo(A, B, C):
    if A == 'a1' and B == 'b1' and C == 'c1':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
            pass
        return foo
    if A == 'a2' and B == 'b1' and C == 'c1':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
            pass
        return foo
    if A == 'a3' and B == 'b1' and C == 'c1':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
            pass
        return foo
    # ...
    # (60 times)
    # ...
    if A == 'a3' and B == 'b4' and C == 'c5':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
            pass
        return foo
The foo function parses the parameters differently every time, but then it performs the same task after parsing.
Now I call the function
f = create_foo(A='a2',B='b3',C='c4')
Now foo is stored in f, and f is called many times. f is very time efficient, but the problem with this approach is the messy code that comes from writing the foo function 60 times.
Is there a cleaner way to do this? I cannot compromise on performance, so the new method must not take more time to evaluate than this method.
Currying lambda functions to handle all this takes more time than the above method because of the extra function calls. I cannot afford it.
A, B and C are not used by the function itself; they are only used for parsing values, and they do not change after the creation of foo.
For example, if A is of type list, no change is required, but if it's a dict, foo needs to call a function that parses the dict to a list. A, B and C only tell us about the types of the parameters in *args.
It's not 100% clear what you want, but I'll assume A, B, and C are the same as the *args to foo.
Here are some thoughts toward possible solutions:
(1) I wouldn't take it for granted that your 60 if statements necessarily make an impact. If the rest of your function is at all computationally demanding, you might not even notice the proportional increase in running time from 60 if statements.
(2) You could parse/sanity-check the arguments once (slowly/inefficiently) and then pass the sanity-checked versions to a single function foo that runs 10^7 times.
(3) You could write a single function foo that handles every case, but only let it parse/sanity check arguments if a certain optional keyword is provided:
def foo(A, B, C, check=False):
    if check:
        pass  # TODO: implement sanity-checking on A, B and C here
    pass  # TODO: perform computationally-intensive stuff here
Call foo(..., check=True) once, then call it without check=True 10^7 times.
(4) If you want to pass around multiple versions of the same callable function, each version configured with different argument values pre-filled, that's what functools.partial is for. This may be what you want, rather than duplicating the same code 60 times.
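A sketch of the functools.partial idea (the function and argument names here are illustrative):

from functools import partial

def foo(parse_mode, data):
    # parse `data` according to `parse_mode`, then do the heavy work
    return data if parse_mode == 'list' else list(data.items())

foo_for_dicts = partial(foo, 'dict')  # configuration argument pre-filled once
print(foo_for_dicts({'k': 1}))        # [('k', 1)]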
Whoa whoa whoa. You're saying you're duplicating code SIXTY TIMES in your function? No, that will not do. Here's what you do instead.
def coerce_args(f):
    def wrapped(a, b, c):
        if isinstance(a, list):
            pass
        elif isinstance(a, dict):
            a = list(a.items())  # turn it into a list
        # etc. for each argument
        return f(a, b, c)
    return wrapped

@coerce_args
def foo(a, b, c):
    """foo will handle a, b, c in a CONCRETE form"""
    # do stuff
Essentially you're building one decorator that's going to change a, b, and c into a known format for foo to handle the business logic on. You've already built an API where it's acceptable to call this in a number of different ways (which is bad to begin with), so you need to support it. Doing so means internally treating it as the same way every time, and providing a helper to coerce that.
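Both call styles then reach foo with a in the same concrete form, e.g. (continuing the sketch above):

foo([1, 2], 'b', 'c')    # `a` is already a list: passed through unchanged
foo({'k': 1}, 'b', 'c')  # `a` is a dict: coerced to [('k', 1)] first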
Given your vague explanation, I will make some assumptions and work with those.
I will assume that A, B and C are mutually independent, and that they are parsed to the same type. Then I give you this:
def _foo(*argv, parser_A=None, parser_B=None, parser_C=None):
    if parser_A is not None:
        pass  # parse your A arguments here
    if parser_B is not None:
        pass  # parse your B arguments here
    if parser_C is not None:
        pass  # parse your C arguments here
    # do something

def foo_maker(A, B, C):
    parser_A = None
    parser_B = None
    parser_C = None
    if A == "list":  # or isinstance(A, list)
        pass
    elif A == "dict":  # or isinstance(A, dict)
        # put your parser here, for example:
        parser_A = lambda x: x.items()
    ...
    # the same with B and C
    return lambda *argv: _foo(*argv, parser_A=parser_A, parser_B=parser_B, parser_C=parser_C)
simple working example
def _foo(*argv, parser_A=None, parser_B=None, parser_C=None):
    if parser_A is not None:
        argv = parser_A(argv)
    if parser_B is not None:
        argv = parser_B(argv)
    if parser_C is not None:
        argv = parser_C(argv)
    print(argv)

def foo_maker(A, B, C):
    pa = None
    pb = None
    pc = None
    if A == 1:
        pa = lambda x: (23, 32) + x
    if B == 2:
        pb = lambda x: list(map(str, x))
    if C == 3:
        pc = lambda x: set(x)
    return lambda *x: _foo(*x, parser_A=pa, parser_B=pb, parser_C=pc)
test
>>> f1=foo_maker(1,4,3)
>>> f2=foo_maker(1,2,3)
>>> f1(1,2,3,5,8)
{32, 1, 2, 3, 5, 8, 23}
>>> f2(1,2,3,5,8)
{'8', '23', '2', '3', '5', '1', '32'}
>>> f3=foo_maker(0,0,3)
>>> f3(1,2,3,5,8)
{8, 1, 2, 3, 5}
>>> f4=foo_maker(0,0,0)
>>> f4(1,2,3,5,8)
(1, 2, 3, 5, 8)
>>> f5=foo_maker(1,0,0)
>>> f5(1,2,3,5,8)
(23, 32, 1, 2, 3, 5, 8)
Because you don't specify what actually happens in each individual foo, there is no way to generalize how the foo's are created. But there is a way to speed up the demarshaling.
def create_foo(A, B, C, d={}):
    if len(d) == 0:
        def _111():
            def foo(*args):
                # calls some functions that parse the given parameters
                # does something
                pass
            return foo
        d[('a1', 'b1', 'c1')] = _111

        def _211():
            def foo(*args):
                # calls some functions that parse the given parameters
                # does something
                pass
            return foo
        d[('a2', 'b1', 'c1')] = _211

        # ...

        def _345():
            def foo(*args):
                # calls some functions that parse the given parameters
                # does something
                pass
            return foo
        d[('a3', 'b4', 'c5')] = _345
    return d[(A, B, C)]()
The _XXX factory functions are defined only once, on the first call to create_foo, and a given foo is created only when its factory is invoked at the end. The default value of d is evaluated when create_foo is defined (only once), so after the first call the code inside the if statement is never entered again, and the dictionary already holds all the factories, ready to create and return your foo's.
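The same trick in isolation (a toy sketch): the default dict is created exactly once, when the def statement runs, so it persists across calls:

def cached_square(n, d={}):  # `d` is evaluated once, at definition time
    if n not in d:
        d[n] = n * n  # computed only on the first call for a given n
    return d[n]

print(cached_square(4))  # computes and caches 16
print(cached_square(4))  # returns the cached 16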
Edit: if the behavior in each foo is the same (as the edit of the question seems to suggest), then rather than passing the types of foo's parameters in A, B and C, maybe it's better to pass in conversion functions? Then the whole thing becomes:
def create_foo(L1, L2, L3):
    def foo(a, b, c):
        a = L1(a)
        b = L2(b)
        c = L3(c)
        # does something
    return foo
If you want to (for example) convert all 3 parameters to sets from lists, then you can call it as:
f = create_foo(set,set,set)
If you want foo to add 1 to the 1st parameter, subtract 5 from the 2nd and multiply the 3rd by 4, you'd say
f = create_foo(lambda x:x+1, lambda x:x-5, lambda x:x*4)
What is the most pythonic way to write a function that accepts either separate arguments or a tuple/list of arguments?
For example, a function add could be called either as add(1, 2) or as add((1, 2)), and both output 3.
What I have so far: (it works, but does not look nice)
def add(*args):
    if len(args) == 1:
        return args[0][0] + args[0][1]
    if len(args) == 2:
        return args[0] + args[1]
    else:
        print("error: add takes in one or two arguments")
What I don't like about it is:
I have to print the error about passing in one or two arguments
The args[0][0] looks very unreadable
This way, it is hard to tell what the arguments passed in represent (they don't have names)
I don't know if this is the most "pythonic" way, but it will do what you want:
def add(a, b=None):
    return a + b if b is not None else sum(a)
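Both call styles then work:

print(add(1, 2))    # 3
print(add((1, 2)))  # 3: the single tuple falls through to sum(a)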
If your function takes a specific number of arguments, then the most pythonic way to do this is to not do it. Rather if the user has a tuple with the arguments, you make them unpack them when they call the function. E.g.
def add(a, b):
    return a + b
Then the caller can do
add(1,2)
or
t = (1,2)
add(*t)
The only time you want to accept either a sequence of params or individual params is when you can have any arbitrary (non-zero) number of arguments (e.g. the max and min builtin functions), in which case you'd just use *args.
If you can only take a finite number of arguments, it makes more sense to ask for those specifically. If you can accept an arbitrary number of arguments, then the *args paradigm works well if you loop through it. Mixing and matching those two aren't very elegant.
def add(*args):
    total = 0
    for i in args:
        total += i
    return total
>>> add(1, 2, 3)
6
(I know we could just use sum() there, but I'm trying to make it look a bit more general)
In the spirit of Python duck typing, if you see 1 argument, assume it's something that expands to 2 arguments. If there are 2, assume they are two things that add together. If the input violates your rules, raise an exception, like Python would do on a bad function call.
def add(*args):
    if len(args) == 1:
        args = args[0]
    if len(args) != 2:
        raise TypeError("add takes 2 arguments or a tuple of 2 arguments")
    return args[0] + args[1]
A decorator would be best suited for this job.
from functools import wraps

def tupled_arguments(f):
    @wraps(f)  # keeps name, docstring etc. of f
    def accepts_tuple(tup, *args):
        if not args:  # only one argument given
            return f(*tup)
        return f(tup, *args)
    return accepts_tuple
@tupled_arguments
def add(a, b):
    return a + b
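Usage then looks like:

print(add(1, 2))    # 3: two positional arguments
print(add((1, 2)))  # 3: a single tuple, unpacked by the decorator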
TL;DR: Looking for idioms and patterns to unpack positional and keyword arguments into an ordered sequence of positional arguments, based on a simple specification, e.g. a list of names. The idea seems similar to scanf-like parsing.
I'm wrapping functions of a Python module, called someapi.
Functions of someapi only expect positional arguments, which are plain numbers in most cases.
I'd like to give callers flexibility in how they pass arguments to my wrappers.
Here are examples of the wrappers invocations I'd like to allow:
# foo calls someapi.foo()
foo(1, 2, 3, 4)
foo(1, 2, 3, 4, 5)       # but forward only the first 4 to someapi.foo
foo([1, 2, 3, 4])
foo([1, 2, 3, 4, 5, 6])  # but forward only the first 4 to someapi.foo
foo({'x': 1, 'y': 2, 'z': 3, 'r': 4})
foo(x=1, y=2, z=3, r=4)
foo(a=0, b=0, x=1, y=2, z=3, r=4)  # but forward only x, y, z, r to someapi.foo
I don't see any need to support convoluted case of mixed positional and keyword arguments:
foo(3, 4, x=1, y=2)
Here is my first stab at implementing such arguments handling for the foo wrapper calling someapi.foo:
def foo(*args, **kwargs):
    # BEGIN arguments un/re-packing
    a = None
    kwa = None
    if len(args) > 1:
        # foo(1, 2, 3, 4)
        a = args
    elif len(args) == 1:
        if isinstance(args[0], (list, tuple)) and len(args[0]) > 1:
            # foo([1, 2, 3, 4])
            a = args[0]
        if isinstance(args[0], dict):
            # foo({'x':1, 'y':2, 'z':3, 'r':4})
            kwa = args[0]
    else:
        # foo(x=1, y=2, z=3, r=4)
        kwa = kwargs
    if a:
        (x, y, z, r) = a[:4]  # ignore any extra positional values
    elif kwa:
        (x, y, z, r) = (kwa['x'], kwa['y'], kwa['z'], kwa['r'])
    else:
        raise ValueError("invalid arguments")
    # END arguments un/re-packing

    # make call forwarding unpacked arguments
    someapi.foo(x, y, z, r)
It does the job as expected, as far as I can tell, but there are two issues:
Can I do it better, in a more Pythonic fashion?
I have dozen(s) of someapi functions to wrap, so how do I avoid copying and adjusting the whole block between the BEGIN/END marks in every wrapper?
I don't know the answer to question 1 yet.
Here, however, is my attempt to address issue 2.
So, I defined a generic handler for arguments based on the simple specification of names.
The names specify a couple of things, depending on the actual wrapper invocation:
How many arguments to unpack from *args? (see len(names) test below)
What keyword arguments are expected in **kwargs? (see generator expression returning tuple below)
Here is new version:
def unpack_args(names, *args, **kwargs):
    a = None
    kwa = None
    if len(args) >= len(names):
        # foo(1, 2, 3, 4...)
        a = args
    elif len(args) == 1:
        if isinstance(args[0], (list, tuple)) and len(args[0]) >= len(names):
            # foo([1, 2, 3, 4...])
            a = args[0]
        if isinstance(args[0], dict):
            # foo({'x':1, 'y':2, 'z':3, 'r':4...})
            kwa = args[0]
    else:
        # foo(x=1, y=2, z=3, r=4)
        kwa = kwargs
    if a:
        return a[:len(names)]  # ignore any extra positional values
    elif kwa:
        if all(name in kwa.keys() for name in names):
            return (kwa[n] for n in names)
        else:
            raise ValueError("missing keys:",
                             [name for name in names if name not in kwa.keys()])
    else:
        raise ValueError("invalid arguments")
This allows me to implement the wrapper functions in the following way:
def bar(*args, **kwargs):
    # arguments un/re-packing according to the given names
    zargs = unpack_args(('a', 'b', 'c', 'd', 'e', 'f'), *args, **kwargs)
    # make call forwarding unpacked arguments
    someapi.bar(*zargs)
I think I have achieved all the advantages over the foo version above that I was looking for:
Enable callers with the requested flexibility.
Compact form, cut down on copy-and-paste.
Flexible protocol for positional arguments: bar can be called with 7, 8 or more positional arguments, or a long list of numbers, but only the first 6 are taken into account. For example, it allows iterating over a long list of numbers (e.g. think of geometry coordinates):
# meaw expects 2 numbers
n = [1, 2, 3, 4, 5, 6, 7, 8]
for i in range(0, len(n), 2):
    meaw(n[i:i+2])
Flexible protocol for keyword arguments: more keywords may be specified than are actually used, or the dictionary can have more items than are used.
Getting back to question 1 above, can I do better and make it more Pythonic?
Also, I'd like to ask for a review of my solution: do you see any bugs? Have I overlooked anything? How can I improve it?
Python is a very powerful language that allows you to manipulate code in any way you want, but understanding what you're doing is hard. For this you can use the inspect module. Here is an example of how to wrap a function of someapi. I'll only consider positional arguments in this example; you can intuit how to extend it further. You can do it like this:
import inspect
import someapi

def foo(*args):
    # getfullargspec is the modern spelling of the older getargspec
    argspec = inspect.getfullargspec(someapi.foo)
    if len(args) > len(argspec.args):
        args = args[:len(argspec.args)]
    return someapi.foo(*args)
This will detect whether too many arguments were given to foo and, if so, get rid of the excess arguments. On the other hand, if there are too few arguments, it will just do nothing and let someapi.foo handle the errors.
Now to make it more pythonic. The ideal way to wrap many functions using the same template is to use decorator syntax (familiarity with this subject is assumed; if you want to learn more, see the docs at http://www.python.org/doc). Although since decorator syntax is mostly used on functions that are in development rather than wrapping another API, we'll make a decorator but just use it as a factory (the factory pattern) for our API. To make this factory we'll make use of the functools module (so the wrapped function looks as it should). So we can turn our example into:
import inspect
import functools
import someapi

def my_wrapper_maker(func):
    @functools.wraps(func)
    def wrapper(*args):
        argspec = inspect.getfullargspec(func)
        if len(args) > len(argspec.args):
            args = args[:len(argspec.args)]
        return func(*args)
    return wrapper

foo = my_wrapper_maker(someapi.foo)
Finally, if someapi has a relatively large API that could change between versions (or we just want to make our source file more modular so it can wrap any API) then we can automate the application of my_wrapper_maker to everything exported by the module someapi. We'll do this like so:
__all__ = ['my_wrapper_maker']

# Add the entire API of someapi to our program.
for func in someapi.__all__:
    # Only add in bindings for functions.
    if callable(getattr(someapi, func)):
        globals()[func] = my_wrapper_maker(getattr(someapi, func))
        __all__.append(func)
This is probably the most pythonic way to implement this: it makes full use of Python's meta-programming resources and allows the programmer to use this API anywhere without depending on a specific someapi.
Note: whether this is the most idiomatic way to do this is really a matter of opinion. I personally believe it follows the philosophy set out in "The Zen of Python" quite well, so to me it is very idiomatic.