Partially unpack parameters in Python

I know in Python we can unpack parameters from a tuple or list:
def add(x, y, z):
    return x + y + z

xyz = (1, 2, 3)
s = add(*xyz)
But what is the proper way to accomplish something like this:
xy = (1,2)
s = add(*xy, 3)
SyntaxError: only named arguments may follow *expression
I can do this:
s = add(*xy + (3,))
but that looks ugly and is hard to read, and if I have a few more variables in there it would get very messy.
So, is there a cleaner way to deal with such a situation?

If you name your arguments, you can then proceed as you like:
>>> def foo(x=None, y=None, z=None):
...     return x + y + z
...
>>> s = foo(*(1, 2), z=3)
>>> s
6
Now if you do it like this, you can't override arguments that the unpacking already filled: foo(*(1, 2), y=3) will not work, since y would receive multiple values; but you can move the keyword argument around as you like, e.g. foo(z=3, *(1, 2)).
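A quick demonstration of that caveat:
>>> foo(*(1, 2), y=3)
Traceback (most recent call last):
  ...
TypeError: foo() got multiple values for argument 'y'
>>> foo(z=3, *(1, 2))
6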

I don't know that this is much cleaner, but since we're talking about partials...
from functools import partial
sum_ = partial(add, *xy)(3)

A PEP for extended unpacking was proposed back in 2007: PEP 3132 (http://www.python.org/dev/peps/pep-3132), which covers assignment targets like a, *b = seq. The same generalization for function calls was specified later in PEP 448; it did not make it into Python 3.4, but it was accepted and landed in Python 3.5.
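For what it's worth, on Python 3.5 or later the original attempt simply works:
def add(x, y, z):
    return x + y + z

xy = (1, 2)
s = add(*xy, 3)  # valid since Python 3.5 (PEP 448): x=1, y=2, z=3
print(s)         # 6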

sum = add(3, *xy)
Hope this will do.
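Note that reordering changes which parameter receives which value, so this is only safe when the function doesn't care about the order, as with addition. Using the question's add:
xy = (1, 2)
print(add(3, *xy))  # x=3, y=1, z=2, not x=1, y=2, z=3 -> still 6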
The general form of a function signature in Python is:
def method1(arg1, arg2, *args, **kwargs):
    ...  # your code

Related

How to ignore unpacked parts of a tuple as argument of a lambda?

In Python, by convention, the underscore (_) is often used to throw away parts of an unpacked tuple, like so
>>> tup = (1,2,3)
>>> meaningfulVariableName,_,_ = tup
>>> meaningfulVariableName
1
I'm trying to do the same for a tuple argument of a lambda. It seems unfair that it can only be done with 2-tuples...
>>> map(lambda (meaningfulVariableName,_): meaningfulVariableName*2, [(1,10), (2,20), (3,30)]) # This is fine
[2, 4, 6]
>>> map(lambda (meaningfulVariableName,_,_): meaningfulVariableName*2, [(1,10,100), (2,20,200), (3,30,300)]) # But I need this!
SyntaxError: duplicate argument '_' in function definition (<pyshell#24>, line 1)
Any ideas why, and what the best way to achieve this is?
As noted in the comments, just use starred arguments to sweep any remaining arguments into _:
lambda x, *_: x*2
If you were using these in a map call: Python does not map each item of a tuple to a different parameter, but itertools.starmap does exactly that:
from itertools import starmap
result = list(starmap(lambda x, *_: x, [(0, 1, 2)]))  # [0]
But there is no starmap equivalent for the key parameter of sort or sorted.
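For sort and sorted, the usual workaround is plain indexing, since the key function receives each element whole:
data = [(3, 30, 300), (1, 10, 100), (2, 20, 200)]
print(sorted(data, key=lambda t: t[0]))
# [(1, 10, 100), (2, 20, 200), (3, 30, 300)]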
If you won't be using the arguments in the middle of the tuple, just number them:
lambda x, _1, _2, _3, w: x*2 + w
If you get a complaint from some linter about the parameters not being used: the purpose of the linter is to suggest more readable code. My personal preference is not to let that get in the way of practicality, and if this happens, I just turn the linter off for that line of code, without a second thought.
Otherwise, you will really have to do the "beautiful" thing: just use good sense about whether it is to please you and your team, or solely to please the linter. In this case, that means writing a full-fledged function and pretending to consume the unused arguments.
def my_otherwise_lambda(x, unused_1, unused_2, w):
    """My make-linter-happy docstring here"""
    unused_1, unused_2  # "use" the unused variables
    return 2 * x + w
Short of having a problem with the linter, if the purpose is to keep the lambda parameters readable, then a full-fledged function is the recommended approach anyway. lambda came really close to being stripped from the language in version 3.0, in the name of readability.
And last, but not least, if the semantics of the values in your tuples are that meaningful, maybe you should consider using a class to hold them. That way you could just pass instances of that class to the lambda function and refer to the values by their respective names.
A namedtuple is one that would work well:
from collections import namedtuple

vector = namedtuple("vector", "x y z")
mydata = [(1, 10, 100), (2, 20, 200), (3, 30, 300)]
mydata = [vector(*v) for v in mydata]
sorted_data = sorted(mydata, key=lambda v: v.x * 2)
Tuples are immutable in Python so you won't be able to "throw away" (modify) the extraneous values.
Additionally, since you don't care about what those values are, there is absolutely no need to assign them to variables.
What I would do is simply index the tuple at the position you are interested in, like so:
>>> list(map(lambda x: x[0] * 2, [(1,10,100), (2,20,200), (3,30,300)]))
[2, 4, 6]
No need for *args or dummy variables.
You are often better off using list comprehensions rather than lambdas:
some_list = [(1, 10, 100), (2, 20, 200), (3, 30, 300)]
processed_list = [2 * x for x, dummy1, dummy2 in some_list]
If you really insist, you could use _ instead of dummy1 and dummy2 here. However, I recommend against this, since I've frequently seen this causing confusion. People often think _ is some kind of special syntax (which it is e.g. in Haskell and Rust), while it is just some unusual variable name without any special properties. This confusion is completely avoidable by using names like dummy1. Moreover, _ clashes with the common gettext alias, and it also does have a special meaning in the interactive interpreter, so overall I prefer using dummy to avoid all the confusion.

Pythonic way to re-apply a function to its own output n times?

Assume there are some useful transformation functions, for example random_spelling_error, that we would like to apply n times.
My temporary solution looks like this:
def reapply(n, fn, arg):
    for i in range(n):
        arg = fn(arg)
    return arg

reapply(3, random_spelling_error, "This is not a test!")
Is there a built-in or otherwise better way to do this?
It need not handle variable-length args or keyword args, but it could. The function will be called at scale, but the values of n will be low, and the size of the argument and return value will be small.
We could call this reduce, but that name was of course taken for a function that can do this and much more, and it was removed from the built-ins in Python 3 (it now lives in functools). Here is Guido's argument:
So in my mind, the applicability of reduce() is pretty much limited to
associative operators, and in all other cases it's better to write out
the accumulation loop explicitly.
reduce is still available in Python 3 via the functools module. I don't really know that it's any more Pythonic, but here's how you could achieve it in one line:
from functools import reduce

def reapply(n, fn, arg):
    return reduce(lambda x, _: fn(x), range(n), arg)
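For example, with a stand-in transformation (since random_spelling_error isn't defined here):
def shout(s):
    return s + "!"

print(reapply(3, shout, "test"))  # test!!!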
Get rid of the custom function completely; you're trying to compress two readable lines into one confusing function call. Which one do you think is easier to read and understand? Your way:
foo = reapply(3, random_spelling_error, foo)
Or a simple for loop that's one more line:
for _ in range(3):
    foo = random_spelling_error(foo)
Update: According to your comment
Let's assume that there are many transformation functions I may want to apply.
Why not try something like this:
modifiers = (random_spelling_error, another_function, apply_this_too)

for modifier in modifiers:
    for _ in range(3):
        foo = modifier(foo)
Or if you need a different number of repeats for different functions, try creating a list of tuples:
modifiers = [
    (random_spelling_error, 5),
    (another_function, 3),
    ...
]

for modifier, count in modifiers:
    for _ in range(count):
        foo = modifier(foo)
Some like recursion; it's not always obviously "better":
def reapply(n, fn, arg):
    if n:
        arg = reapply(n - 1, fn, fn(arg))
    return arg

reapply(1, lambda x: x**2, 2)
# 4
reapply(2, lambda x: x**2, 2)
# 16

How to change a default parameter programmatically? [duplicate]

In Python, is it possible to redefine the default parameters of a function at runtime?
I defined a function with 3 parameters here:
def multiplyNumbers(x, y, z):
    return x*y*z

print(multiplyNumbers(x=2, y=3, z=3))
Next, I tried (unsuccessfully) to set the default parameter value for y, and then I tried calling the function without the parameter y:
multiplyNumbers.y = 2;
print(multiplyNumbers(x=3, z=3))
But the following error was produced, since the default value of y was not set correctly:
TypeError: multiplyNumbers() missing 1 required positional argument: 'y'
Is it possible to redefine the default parameters of a function at runtime, as I'm attempting to do here?
Just use functools.partial:
import functools
multiplyNumbers = functools.partial(multiplyNumbers, y=42)
One problem here: you will not be able to call it as multiplyNumbers(5, 7, 9); to override the preset value you must pass it by keyword, as in y=7.
If you need to remove the preset argument, I see two ways:
Store the original function somewhere:
oldF = f
f = functools.partial(f, y=42)
# work with the changed f
f = oldF  # restore
Or use partial.func:
f = f.func  # go back to the previous version
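A runnable sketch of both steps, using the question's multiplyNumbers:
import functools

def multiplyNumbers(x, y, z):
    return x * y * z

f = functools.partial(multiplyNumbers, y=42)
print(f(x=2, z=3))  # 252; y now effectively defaults to 42

f = f.func          # partial.func is the original, unwrapped function
print(f(2, 3, 4))   # 24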
Technically, it is possible to do what you ask… but it's not a good idea. RiaD's answer is the Pythonic way to do this.
In Python 3:
>>> def f(x=1, y=2, z=3):
...     print(x, y, z)
...
>>> f()
1 2 3
>>> f.__defaults__ = (4, 5, 6)
>>> f()
4 5 6
As with everything else that's under the covers and hard to find in the docs, the inspect module chart is the best place to look for function attributes.
The details are slightly different in Python 2, but the idea is the same. (Just change the pulldown at the top left of the docs page from 3.3 to 2.7.)
If you're wondering how Python knows which defaults go with which arguments when all it has is a tuple: it counts backward from the end of the parameter list (anything after *, *args, or a bare * goes into the __kwdefaults__ dict instead). f.__defaults__ = (4, 5) will set the defaults for y and z to 4 and 5, leaving x with no default. That works because you can't have non-defaulted parameters after defaulted parameters.
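A short demonstration of both rules (defaults count from the end; keyword-only defaults live in __kwdefaults__):
>>> def g(a, b=2, c=3, *, d=4):
...     return (a, b, c, d)
...
>>> g.__defaults__ = (20, 30)   # counted from the end: applies to b and c
>>> g.__kwdefaults__["d"] = 40  # keyword-only default lives here instead
>>> g(1)
(1, 20, 30, 40)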
There are some cases where this won't work, but even then, you can immutably copy it to a new function with different defaults:
>>> import types
>>> f2 = types.FunctionType(f.__code__, f.__globals__, f.__name__,
...                         (4, 5, 6), f.__closure__)
Here, the types module documentation doesn't really explain anything, but help(types.FunctionType) in the interactive interpreter shows the params you need.
The only case you can't handle is a builtin function. But they generally don't have actual defaults anyway; instead, they fake something similar in the C API.
Yes, you can accomplish this by modifying the function's __defaults__ tuple. That attribute holds the default values for the function's defaulted arguments. For example, to make pandas.read_csv always use sep='\t', you could do:
import inspect
import pandas as pd

spec = inspect.getfullargspec(pd.read_csv)
default_arg_values = list(pd.read_csv.__defaults__)
# __defaults__ lines up with the tail of the argument list, so offset
# the index by the number of arguments that have no default:
offset = len(spec.args) - len(default_arg_values)
default_arg_values[spec.args.index("sep") - offset] = '\t'
pd.read_csv.__defaults__ = tuple(default_arg_values)
Use func_defaults, as in:
def myfun(a=3):
    return a

myfun.func_defaults = (4,)
b = myfun()
assert b == 4
Check the docs for func_defaults here. (In Python 3 the attribute is spelled __defaults__.)
UPDATE: looking at RiaD's response, I think I was too literal with mine. I don't know the context you're asking this question from, but in general (and following the Zen of Python) I believe working with partial application is a better option than redefining a function's default arguments.

Is there any operator.unpack in Python?

Is there any built-in version for this
def unpack(f, a):
    return f(**a)  # or ``return f(*a)``
Why isn't unpack considered to be an operator and located in operator.*?
I'm trying to do something similar to this (but of course want a general solution to the same type of problem):
from functools import partial, reduce
from operator import add
data = [{'tag':'p','inner':'Word'},{'tag':'img','inner':'lower'}]
renderer = partial(unpack, "<{tag}>{inner}</{tag}>".format)
print(reduce(add, map(renderer, data)))
all without using lambdas or comprehensions.
That is not the way to go about this. How about
print(''.join('<{tag}>{inner}</{tag}>'.format(**d) for d in data))
Same behavior in a much more Pythonic style.
Edit: Since you seem opposed to using any of the nice features of Python, how about this:
def tag_format(x):
    return '<{tag}>{inner}</{tag}>'.format(tag=x['tag'], inner=x['inner'])

results = []
for d in data:
    results.append(tag_format(d))

print(''.join(results))
I don't know of an operator that does what you want, but you don't really need it to avoid lambdas or comprehensions:
from functools import reduce
from operator import add
data = [{'tag':'p','inner':'Word'},{'tag':'img','inner':'lower'}]
print(reduce(add, map("<{0[tag]}>{0[inner]}</{0[tag]}>".format, data)))
Seems like it would be possible to generalize something like this if you wanted.
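For instance, a minimal sketch of one such generalization (the helper name render_items is hypothetical): any template written in {0[key]} style can be mapped over a sequence of dicts directly, since str.format indexes into its positional argument:
from functools import reduce
from operator import add

def render_items(template, items):
    # Concatenate `template` rendered against each mapping in `items`.
    return reduce(add, map(template.format, items))

data = [{'tag': 'p', 'inner': 'Word'}, {'tag': 'img', 'inner': 'lower'}]
print(render_items("<{0[tag]}>{0[inner]}</{0[tag]}>", data))
# <p>Word</p><img>lower</img>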

Is it a good idea to have a syntax sugar to function composition in Python?

Some time ago I looked over the Haskell docs and found its function composition operator really nice. So I've implemented this tiny decorator:
from functools import partial

class _compfunc(partial):
    def __lshift__(self, y):
        f = lambda *args, **kwargs: self.func(y(*args, **kwargs))
        return _compfunc(f)

    def __rshift__(self, y):
        f = lambda *args, **kwargs: y(self.func(*args, **kwargs))
        return _compfunc(f)

def composable(f):
    return _compfunc(f)

@composable
def f1(x):
    return x * 2

@composable
def f2(x):
    return x + 3

@composable
def f3(x):
    return (-1) * x

@composable
def f4(a):
    return a + [0]
print((f1 >> f2 >> f3)(3))  # -9
print((f4 >> f1)([1, 2]))   # [1, 2, 0, 1, 2, 0]
print((f4 << f1)([1, 2]))   # [1, 2, 1, 2, 0]
The problem:
without language support we can't use this syntax on builtin functions or lambdas like this:
((lambda x: x + 3) >> abs)(2)
The question:
is it useful? Is it worth discussing on the python-ideas mailing list?
IMHO: no, it's not. While I like Haskell, this just doesn't seem to fit in Python. Instead of (f1 >> f2 >> f3) you can do compose(f1, f2, f3) and that solves your problem -- you can use it with any callable without any overloading, decorating or changing the core (IIRC somebody already proposed functools.compose at least once; I can't find it right now).
Besides, the language definition is frozen right now, so they will probably reject that kind of change anyway -- see PEP 3003.
Function composition isn't a super-common operation in Python, especially not in a way that a composition operator is clearly needed. If something was added, I am not certain I like the choice of << and >> for Python, which are not as obvious to me as they seem to be to you.
I suspect a lot of people would be more comfortable with a function compose, the order of which is not problematic: compose(f, g)(x) would mean f(g(x)), the same order as ∘ in math and . in Haskell. Python tries to avoid using punctuation when English words will do, especially when the special characters don't have widely-known meaning. (Exceptions are made for things that seem too useful to pass up, such as @ for decorators (with much hesitation) and * and ** for function arguments.)
If you do choose to send this to python-ideas, you'll probably win a lot more people if you can find some instances in the stdlib or popular Python libraries that function composition could have made code more clear, easy to write, maintainable, or efficient.
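For reference, a minimal sketch of such a compose helper, following the math order described above (compose(f, g)(x) == f(g(x))):
from functools import reduce

def compose(*funcs):
    # compose(f, g, h)(x) == f(g(h(x))): the rightmost function runs first
    return reduce(lambda f, g: lambda *a, **kw: f(g(*a, **kw)), funcs)

print(compose(abs, lambda x: x + 3)(-5))  # abs(-5 + 3) == 2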
You can do it with reduce, although the order of calls is left-to-right only:
from functools import reduce  # a built-in in Python 2

def f1(a):
    return a + 1

def f2(a):
    return a + 10

def f3(a):
    return a + 100

def call(a, f):
    return f(a)

reduce(call, (f1, f2, f3), 5)
# 5 -> f1 -> f2 -> f3 -> 116

reduce(call, ((lambda x: x + 3), abs), 2)
# 5
I don't have enough experience with Python to have a view on whether a language change would be worthwhile. But I wanted to describe the options available with the current language.
To avoid creating unexpected behavior, functional composition should ideally follow the standard math (or Haskell) order of operations, i.e., f ∘ g ∘ h should mean apply h, then g, then f.
If you want to use an existing operator in Python, say <<, as you mention you'd have a problem with lambdas and built-ins. You can make your life easier by defining the reflected version __rlshift__ in addition to __lshift__. With that, lambda/built-ins adjacent to composable objects would be taken care of. When you do have two adjacent lambda/built-ins, you'll need to explicitly convert (just one of) them with composable, as @si14 suggested. Note I really mean __rlshift__, not __rshift__; in fact, I would advise against using __rshift__ at all, since the order change is confusing despite the directional hint provided by the shape of the operator.
But there's another approach that you may want to consider. Ferdinand Jamitzky has a great recipe for defining pseudo infix operators in Python that work even on built-ins. With this, you can write f |o| g for function composition, which actually looks very reasonable.
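A minimal sketch of that recipe applied to composition (the Infix class and the o name are illustrative, not from a library):
class Infix:
    # Jamitzky-style pseudo-infix operator: use as f |op| g
    def __init__(self, op):
        self.op = op
    def __ror__(self, left):   # handles `left |op`
        return Infix(lambda right: self.op(left, right))
    def __or__(self, right):   # handles `(left |op)| right`
        return self.op(right)

# composition: (f |o| g)(x) == f(g(x)); works on lambdas and built-ins
o = Infix(lambda f, g: lambda x: f(g(x)))

print(((lambda x: x + 3) |o| abs)(-2))  # abs first, then +3 -> 5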
