I've often been frustrated by the lack of flexibility in Python's iterable unpacking.
Take the following example:
a, b = range(2)
Works fine. a contains 0 and b contains 1, just as expected. Now let's try this:
a, b = range(1)
Now, we get a ValueError:
ValueError: not enough values to unpack (expected 2, got 1)
Not ideal, when the desired result was 0 in a, and None in b.
There are a number of hacks to get around this. The most elegant I've seen is this:
a, *b = function_with_variable_number_of_return_values()
b = b[0] if b else None
Not pretty, and could be confusing to Python newcomers.
So what's the most Pythonic way to do this? Store the return value in a variable and use an if block? The *varname hack? Something else?
As mentioned in the comments, the best way to do this is simply to have your function return a constant number of values; if your use case is actually more complicated (like argument parsing), use a library for it.
However, your question explicitly asked for a Pythonic way of handling functions that return a variable number of values, and I believe it can be cleanly accomplished with decorators. They're not super common, and most people tend to use them more than create them, so a down-to-earth tutorial on creating decorators is worth reading to learn more about them.
Below is a decorated function that does what you're looking for. The function returns a variable number of values, and the result is padded up to a certain length to better accommodate iterable unpacking.
def variable_return(max_values, default=None):
    # This decorator is somewhat more complicated because the decorator
    # itself needs to take arguments.
    def decorator(f):
        def wrapper(*args, **kwargs):
            actual_values = f(*args, **kwargs)
            try:
                # This will fail if `actual_values` is a single value,
                # such as a single integer or just `None`.
                actual_values = list(actual_values)
            except TypeError:
                actual_values = [actual_values]
            extra = [default] * (max_values - len(actual_values))
            actual_values.extend(extra)
            return actual_values
        return wrapper
    return decorator
@variable_return(max_values=3)
# This would be a function that actually does something.
# It should not return more values than `max_values`.
def ret_n(n):
    return list(range(n))
a, b, c = ret_n(1)
print(a, b, c)
a, b, c = ret_n(2)
print(a, b, c)
a, b, c = ret_n(3)
print(a, b, c)
Which outputs what you're looking for:
0 None None
0 1 None
0 1 2
The decorator basically takes the decorated function and returns its output along with enough extra values to pad it up to max_values. The caller can then assume that the function always returns exactly max_values values and can use fancy unpacking like normal.
Here's an alternative version of the decorator solution by @supersam654, using iterators rather than lists for efficiency:
def variable_return(max_values, default=None):
    def decorator(f):
        def wrapper(*args, **kwargs):
            actual_values = f(*args, **kwargs)
            try:
                for count, value in enumerate(actual_values, 1):
                    yield value
            except TypeError:
                count = 1
                yield actual_values
            yield from [default] * (max_values - count)
        return wrapper
    return decorator
It's used in the same way:
@variable_return(3)
def ret_n(n):
    return tuple(range(n))
a, b, c = ret_n(2)
This could also be used with non-user-defined functions like so:
a, b, c = variable_return(3)(range)(2)
The shortest version known to me (thanks to @KellyBundy in the comments below):
a, b, c, d, e, *_ = *my_list_or_iterable, *[None]*5
Obviously, it's possible to use a default value other than None if necessary.
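For instance (a small sketch with an illustrative one-element list), padding with 0 instead of None:
my_list = [7]
a, b, c, *_ = *my_list, *[0]*3
print(a, b, c)  # 7 0 0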
There is also a nice feature in Python 3.10 (structural pattern matching) which comes in handy here when we know the possible numbers of arguments upfront, like when unpacking sys.argv.
Previous method:
import sys
_, x, y, z, *_ = *sys.argv, *[None]*3
New method:
import sys
match sys.argv[1:]:  # slice needed to drop the first value of sys.argv
    case [x]:
        print(f'x={x}')
    case [x, y]:
        print(f'x={x}, y={y}')
    case [x, y, z]:
        print(f'x={x}, y={y}, z={z}')
    case _:
        print('No arguments')
The easiest way to illustrate my question is through an example. Let's say I want to write a function function(a=None, b=None, c=None). This function will only take two arguments at a time, and the third one will be left blank. For any two arguments given to the function, it will return the missing one such that a+b=c. So, for example, function(a=5, c=15) would return 10, and function(a=5, b=10) would return 15. Now, for the sake of argument, let's say that a could not be written as a function of b and c, or it's simply too complicated to find a closed-form solution (this is clearly not the case here, because to find a I could simply say a = c-b). Anyway, if I were to write such a function, I'd do something like this:
# import a root finder
from scipy.optimize import newton

def function(a=None, b=None, c=None):
    # find the missing parameter:
    vars = [a, b, c]
    comp = vars.index(None)
    if comp == 0:
        def aux_fun(a):
            return (a + b - c)
    elif comp == 1:
        def aux_fun(b):
            return (a + b - c)
    else:
        def aux_fun(c):
            return (a + b - c)
    return newton(aux_fun, 0)
I have not found a solution to this other than writing 3 different functions and calling the correct one in newton. This works for this small example, but if I have a bigger problem, say with 100 variables, writing 100 functions is not pretty.
My question is: is there any way that I only have to write aux_fun once and change its parameter based on the missing parameter of function?
Thanks a lot for your answers!
I haven't fully grasped what you are trying to do, but I don't think you quite understand how newton works.
optimize.newton is going to call your function with
func(x, a, b, c ...)
where x is a scalar or an array the size of the initial value x0. The other arguments are passed via the args tuple. This pattern of passing an iteration variable plus args to the function is widely used in these scipy.optimize functions. I've answered a number of questions regarding these arguments.
newton(func, x0, args=(a,b,c))
These aren't keyword arguments.
Read and experiment with the examples in the docs. People often mess up the args tuple. And then explore a few small examples of your own before seriously trying to do this 'renaming'.
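For illustration, here is a minimal sketch of that pattern (my addition; the residual func and the constants b and c are placeholders, not from the question):
from scipy.optimize import newton

def func(x, b, c):
    # residual whose root we want: x + b - c == 0
    return x + b - c

b, c = 10.0, 15.0
root = newton(func, 0.0, args=(b, c))  # extra arguments go in the args tuple
print(root)  # approximately 5.0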
edit
This might work - I haven't tested it:
def function(a=None, b=None, c=None):
    # find the missing parameter:
    vars = [a, b, c]
    comp = vars.index(None)
    def aux_fun(x):
        vars[comp] = x
        a, b, c = vars
        return (a + b - c)
    return newton(aux_fun, 0)
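A quick sanity check of that sketch (my addition, reusing the scipy import from the question):
print(function(a=5, c=15))  # expected: approximately 10.0
print(function(a=5, b=10))  # expected: approximately 15.0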
What you want is impossible. Given a generic function f(a, b, c, d) = 0, there is no way to generically turn it into a set of functions a = f1(b, c, d), b = f2(a, c, d), etc. You are entering the realm of symbolic computation. You would need your code to understand trigonometry, algebra, calculus, exponentiation and logarithms.
Updating based on comments below.
So you want something like:
def aux_func(x):
    vars[comp] = x
    return f(*vars)
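To make that fragment concrete, here is a self-contained sketch (the residual f and the wrapper solve_missing are my own illustrative names, not from the answer):
from scipy.optimize import newton

def f(a, b, c):
    # generic residual: zero when the relationship a + b = c holds
    return a + b - c

def solve_missing(a=None, b=None, c=None):
    vars = [a, b, c]
    comp = vars.index(None)      # position of the unknown
    def aux_func(x):
        vars[comp] = x           # substitute the trial value
        return f(*vars)
    return newton(aux_func, 0)

print(solve_missing(a=5, c=15))  # approximately 10.0
print(solve_missing(a=5, b=10))  # approximately 15.0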
I have a function that returns a variable number of values, and I know you can do this by using a tuple. To assign these values you can then do something like a, b = func(..). However, if there is only one value returned, you have to do a, = func(..) [notice the ,] rather than a = func(..). To achieve the latter you can include a test to see whether there is one value to be returned or more (see the example below), but I wonder if there is an easier or less verbose way to do this.
def foo(*args):
    returnvalues = []
    for arg in args:
        arg += 100
        returnvalues.append(arg)
    if len(returnvalues) == 1:
        return returnvalues[0]
    else:
        return tuple(returnvalues)

def baz(*args):
    returnvalues = []
    for arg in args:
        arg += 100
        returnvalues.append(arg)
    return tuple(returnvalues)
a = foo(10)
b, c = foo(20, 30)
print(f'a={a}, b={b}, c={c}')
a, = baz(10)
b, c = baz(20, 30)
print(f'a={a}, b={b}, c={c}')
# output of both print statements:
a=110, b=120, c=130
a=110, b=120, c=130
I believe you are referring to "tuple unpacking", also known as destructuring assignment. The word "tuple" is a bit of a misnomer, as you can use any iterable / iterator, so returning a list is fine.
def f():
    return [1]

(a,) = f()
b, = f()
You can also use list syntax on the left-hand side; there is no difference in the bytecode that is generated. It does make unpacking a single item look less like a syntax error (compared with b) and is slightly less verbose than a.
[c] = f()
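As a quick check (my addition, a sketch using the dis module), both single-item forms compile to the same unpacking bytecode:
import dis

# both forms disassemble to the same UNPACK_SEQUENCE instruction;
# only the surface syntax differs
dis.dis("(a,) = f()")
dis.dis("[a] = f()")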
I would avoid returning the value itself rather than a list in the special case where only one argument is passed. The reason is that it makes the code harder to use in a generic manner: any caller of the function needs to know how many arguments it is passing, or check the return value (which is clumsy). For example:
result = f()
if isinstance(result, (list, tuple)):
    smallest = min(result)
else:
    smallest = result

# as opposed to this, when you always return a list / tuple:
smallest = min(f())
You can assign the return value of such a function to a single variable, so that you can use it as a list or tuple:
a = baz(10, 20, 30)
print(', '.join(map(str, a))) # this outputs 110, 120, 130
I have a function with one optional argument, like this:
def funA(x, a, b=1):
    return a+b*x
I want to write a new function that calls funA and also has an optional argument, but if no argument is passed, I want to keep the default in funA.
I was thinking something like this:
def funB(x, a, b=None):
    if b:
        return funA(x, a, b)
    else:
        return funA(x, a)
Is there a more pythonic way of doing this?
I would replace if b with if b is not None, so that if you pass b=0 (or any other "falsy" value) as an argument to funB, it will be passed on to funA.
Apart from that it seems pretty pythonic to me: clear and explicit. (albeit maybe a bit useless, depending on what you're trying to do!)
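Applying that suggestion, a small sketch of the adjusted funB (nothing else changed):
def funB(x, a, b=None):
    # only fall back to funA's default when b was genuinely omitted,
    # so falsy values such as b=0 are still forwarded
    if b is not None:
        return funA(x, a, b)
    return funA(x, a)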
A slightly more cryptic way that relies on calling funB with the correct keyword arguments (e.g. funB(3, 2, b=4)):
def funB(x, a, **kwargs):
    return funA(x, a, **kwargs)
def funA(x, a, b=1):
    return a+b*x

def funB(x, a, b=1):
    return funA(x, a, b)
Make the default value b=1 in funB() as well, and then always pass it through to funA().
The way you did it is fine. Another way is for funB to have the same defaults as funA, so you can pass the same parameters right through. E.g., if you do def funB(x, a, b=1), then you can always call return funA(x, a, b) just like that.
For simple cases, the above will work fine. For more complex cases, you may want to use *args and **kwargs. Specifically, you can pass in all your keyword arguments as a dictionary (conventionally called kwargs). In this case, each function would set its own independent defaults, and you would just pass the whole dictionary through:
def funA(x, a, **kwargs):
    b = kwargs.get("b", 1)
    return a+b*x

def funB(x, a, **kwargs):
    return funA(x, a, **kwargs)
If kwargs is empty when passed to funB (b is not specified), it will be set to the default in funA by the statement b = kwargs.get("b", 1). If b is specified, it will be passed through as-is. Note that in funB, you can access b with its own, independent default value and still get the behavior you are looking for.
While this may seem like overkill for your example, extracting a couple of arguments at the beginning of a function is not a big deal if the function is complex enough. It also gives you a lot more flexibility (such as avoiding many of the common gotchas).
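One classic gotcha that this pattern sidesteps (my example, with illustrative names append_bad and append_good, not from the answer) is the mutable default argument:
def append_bad(item, target=[]):
    # the default list is created once, at definition time, and shared
    # across every call that omits `target`
    target.append(item)
    return target

def append_good(item, **kwargs):
    # the default expression inside the body is evaluated on every call,
    # so each call that omits `target` gets a fresh list
    target = kwargs.get("target", [])
    target.append(item)
    return target

print(append_bad(1), append_bad(2))    # [1, 2] [1, 2]  (shared state)
print(append_good(1), append_good(2))  # [1] [2]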
Using inspect.getargspec (deprecated, and removed in Python 3.11), you can get the default values (the fourth item of the returned tuple is defaults):
import inspect

def funA(x, a, b=1):
    return a + b * x

# inspect.getargspec(funA) =>
# ArgSpec(args=['x', 'a', 'b'], varargs=None, keywords=None, defaults=(1,))

def funcB(x, a, b=inspect.getargspec(funA)[3][0]):
    return funA(x, a, b)
OR (in Python 2.7+)
def funcB(x, a, b=inspect.getargspec(funA).defaults[0]):
    return funA(x, a, b)
In Python 3.5+, it's recommended to use inspect.signature instead:
def funcB(x, a, b=inspect.signature(funA).parameters['b'].default):
    return funA(x, a, b)
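A quick check of the inspect.signature variant (my addition): funcB picks up funA's default for b, so calls with and without b agree.
assert inspect.signature(funA).parameters['b'].default == 1
assert funcB(2, 3) == funA(2, 3)          # both use b=1
assert funcB(2, 3, b=5) == funA(2, 3, 5)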
Using FunctionType from types, you can just take a function and create a new one, specifying the defaults at runtime. You can put all this in a decorator so that at the point where you write your code it keeps things tidy, while still giving the reader a clue about what you are trying to accomplish. It also allows the exact same call signature for funB as funA: all arguments can be positional, or all arguments can be keywords, or any valid mix thereof, and any arguments with default values are optional. It should play nice with positional arguments (*args) and keyword arguments (**kwargs) too.
import inspect
from types import FunctionType

def copy_defaults(source_function):
    def decorator(destination_function):
        """Creates a wrapper for the destination function with the exact same
        signature as source_function (including defaults)."""
        # check that the signatures match
        src_sig = inspect.signature(source_function)
        dst_sig = inspect.signature(destination_function)
        if list(src_sig.parameters) != list(dst_sig.parameters):
            raise ValueError("src func and dst func do not have matching "
                             "parameter names / order")
        return FunctionType(
            destination_function.__code__,
            destination_function.__globals__,
            destination_function.__name__,
            source_function.__defaults__,  # use defaults from src
            destination_function.__closure__
        )
    return decorator

def funA(x, a, b=1):
    return a+b*x

@copy_defaults(funA)
def funB(x, a, b):
    """this is fun B"""
    return funA(x, a, b)

assert funA(1, 2) == funB(1, 2)
assert funB.__name__ == "funB"
assert funB.__doc__ == "this is fun B"
You can also use:
def funA(x, a, b=1):
    return a+b*x

def funB(x, a, b=None):
    return funA(*filter(lambda o: o is not None, [x, a, b]))
A version which will not fail if x or a is None:
def funB(x, a, b=None):
    return funA(*([x, a] + list(filter(lambda o: o is not None, [b]))))
I would like to perform a calculation using Python where the current value of the equation (i) is based on the previous value of the equation (i-1). This is really easy to do in a spreadsheet, but I would rather learn to code it.
I have noticed that there is plenty of information on finding the previous value from a list, but I don't have a list; I need to create it! My equation is shown below.
h=(2*b)-h[i-1]
Can anyone tell me a method to do this?
I tried this sort of thing, but it will not work: when I try to do the equation, I am calling a value I haven't created yet, and if I set h=0 then I get an error that I am out of index range.
i = 1
for i in range(1, len(b)):
    h = []
    h = (2*b) - h[i-1]
    x += 1
h = [b[0]]
for val in b[1:]:
    h.append(2 * val - h[-1])  # As you add to h, you keep up with its tail
For a large b list (brr, a one-letter identifier), to avoid creating a large slice:
from itertools import islice  # For a big list it will keep the code less wasteful

for val in islice(b, 1, None):
    h.append(2 * val - h[-1])
As pointed out by @pad, you simply need to handle the base case of receiving the first sample.
However, your equation makes no use of i other than to retrieve the previous result. It's looking more like a running filter than something which needs to maintain a list of past values (with an array which might never stop growing).
If that is the case, and you only ever want the most recent value, then you might want to go with a generator instead.
def gen():
    def eqn(b):
        eqn.h = 2*b - eqn.h
        return eqn.h
    eqn.h = 0
    return eqn
And then use it thus:
>>> f = gen()
>>> f(2)
4
>>> f(3)
2
>>> f(2)
2
>>>
The same effect could be achieved with a true generator using yield and send.
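Here is a sketch of that idea (my addition, not part of the original answer), driving a true generator with send:
def running_eqn(h=0):
    # prime with next(), then push each new b in with send()
    # and receive the updated h back
    while True:
        b = yield h
        h = 2*b - h

g = running_eqn()
next(g)           # prime the generator; yields the initial h (0)
print(g.send(2))  # 4
print(g.send(3))  # 2
print(g.send(2))  # 2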
First off, do you need all the intermediate values? That is, do you want a list h from 0 to i, or do you just want h[i]?
If you just need the i-th value you could us recursion:
def get_h(i):
    if i > 0:
        return (2*b) - get_h(i-1)
    else:
        return h_0
But be aware that this will not work for large i, as it will exceed the maximum recursion depth (thanks for pointing this out, kdopen). In that case a simple for-loop or a generator is better.
Even better is to use a (mathematically) closed form of the equation (for your example that is possible, it might not be in other cases):
def get_h(i):
    if i % 2 == 0:
        return h_0
    else:
        return (2*b) - h_0
In both cases h_0 is the initial value that you start out with.
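A quick check (my addition, with illustrative values for b and h_0) that the recursive and closed-form versions agree for small i:
b, h_0 = 10, 3

def get_h_rec(i):
    return (2*b) - get_h_rec(i-1) if i > 0 else h_0

def get_h_closed(i):
    return h_0 if i % 2 == 0 else (2*b) - h_0

assert all(get_h_rec(i) == get_h_closed(i) for i in range(20))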
h = []
for i in range(len(b)):
    if i > 0:
        h.append(2*b - h[i-1])
    else:
        pass  # handle i=0 case here
You are successively applying a function (the equation) to the result of a previous application of that function; the process needs a seed to start it. Your result looks like this: [seed, f(seed), f(f(seed)), f(f(f(seed))), ...]. This concept is function composition. You can create a generalized function that will do this for any sequence of functions; in Python, functions are first-class objects and can be passed around just like any other object. If you need to preserve the intermediate results, use a generator.
def composition(functions, x):
    """Yields f(x), f(f(x)), f(f(f(x))), ...
    for each f in functions.
    functions is an iterable of callables taking one argument.
    """
    for f in functions:
        x = f(x)
        yield x
Your specs require a seed and a constant,
seed = 0
b = 10
The equation/function,
def f(x, b=b):
    return 2*b - x
f is applied b times.
functions = [f]*b
Usage
print(list(composition(functions, seed)))
If the intermediate results are not needed composition can be redefined as
def composition(functions, x):
    """Returns the final value h(...g(f(x))...)
    after applying each function in functions in turn.
    functions is an iterable of callables taking one argument.
    """
    for f in functions:
        x = f(x)
    return x
print(composition(functions, seed))
Or more generally, with no limitations on call signature:
from functools import reduce  # reduce lives in functools in Python 3

def compose(funcs):
    '''Return a callable composed of successive application of functions.
    funcs is an iterable producing callables;
    for [f, g, h] returns f(g(h(*args, **kwargs))).
    '''
    def outer(f, g):
        def inner(*args, **kwargs):
            return f(g(*args, **kwargs))
        return inner
    return reduce(outer, funcs)
def plus2(x):
    return x + 2

def times2(x):
    return x * 2

def mod16(x):
    return x % 16

funcs = (mod16, plus2, times2)
eq = compose(funcs)  # mod16(plus2(times2(x)))
print(eq(15))
While the process definition appears to be recursive, I resisted the temptation so I could stay out of maximum recursion depth hades.
I got curious, searched SO for function composition and, of course, there are numerous relevant Q&As.
Consider the following function, which does not work in Python, but I will use to explain what I need to do.
def exampleFunction(a, b, c=a):
    ...function body...
That is, I want to assign to variable c the same value that variable a would take, unless an alternative value is specified. The above code does not work in Python. Is there a way to do this?
def example(a, b, c=None):
    if c is None:
        c = a
    ...
The default value for the keyword argument can't be another parameter; default expressions are evaluated once, when the function is defined, not at call time. This idiom is commonly used to pass arguments to a main function:
def main(argv=None):
    if argv is None:
        argv = sys.argv
If None could be a valid value, the solution is to either use *args/**kwargs magic as in carl's answer, or use a sentinel object. Libraries that do this include attrs and Marshmallow, and in my opinion it's much cleaner and likely faster.
missing = object()

def example(a, b, c=missing):
    if c is missing:
        c = a
    ...
The only way for c is missing to be true is for c to be exactly that dummy object you created there.
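To illustrate the difference (my addition): with the sentinel, an explicit None survives, whereas the c=None version above would silently replace it with a.
missing = object()

def example(a, b, c=missing):
    if c is missing:
        c = a
    return (a, b, c)  # return added just for demonstration

print(example(1, 2))        # (1, 2, 1)    c defaults to a
print(example(1, 2, None))  # (1, 2, None) None is kept as a real value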
This general pattern is probably the best and most readable:
def exampleFunction(a, b, c=None):
    if c is None:
        c = a
    ...
You have to be careful that None is not a valid state for c.
If you want to support 'None' values, you can do something like this:
def example(a, b, *args, **kwargs):
    if 'c' in kwargs:
        c = kwargs['c']
    elif len(args) > 0:
        c = args[0]
    else:
        c = a
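For illustration (my addition, assuming the function goes on to return (a, b, c)):
def example(a, b, *args, **kwargs):
    if 'c' in kwargs:
        c = kwargs['c']
    elif len(args) > 0:
        c = args[0]
    else:
        c = a
    return (a, b, c)  # assumed return, just for demonstration

print(example(1, 2))          # (1, 2, 1)     c falls back to a
print(example(1, 2, None))    # (1, 2, None)  positional None is respected
print(example(1, 2, c=None))  # (1, 2, None)  keyword None is respected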
One approach is something like:
def foo(a, b, c=None):
    c = a if c is None else c
    # do something