Looking for a fold-like idiom - python

So my friend presented a problem for me to solve, and I'm currently writing a solution in functional-style Python. The problem itself isn't my question; I'm looking for a possible idiom that I can't find at the moment.
What I need is a fold, but instead of using the same function for every one of its applications, it would do a map-like exhaustion of another list containing functions. For example, given this code:
nums = [1, 2, 3]
funcs = [add, sub]
special_foldl(nums, funcs)
the function (special_foldl) would fold the number list down with ((1 + 2) - 3). Is there a function/idiom that elegantly does this, or should I just roll my own?

There is no such function in the Python standard library. You'll have to roll your own, perhaps something like this:
import operator
import functools
nums = [1, 2, 3]
funcs = iter([operator.add, operator.sub])
def special_foldl(nums, funcs):
    # each step of the fold pulls the next function from the funcs iterator
    return functools.reduce(lambda x, y: next(funcs)(x, y), nums)
print(special_foldl(nums, funcs))
# 0
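If you'd rather not make the caller pass an iterator (the version above consumes funcs as a side effect), a small self-contained variant could look like the sketch below; the names are just for illustration:
import functools
import operator

def special_foldl(nums, funcs):
    # take the next function from funcs for each combining step of the fold
    funcs_iter = iter(funcs)
    return functools.reduce(lambda acc, x: next(funcs_iter)(acc, x), nums)

print(special_foldl([1, 2, 3], [operator.add, operator.sub]))  # ((1 + 2) - 3) == 0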

Related

Python equivalent to Ruby Array.each method

In Python what is equivalent to Ruby's Array.each method? Does Python have a nice and short closure/lambda syntax for it?
[1,2,3].each do |x|
puts x
end
Does Python have a nice and short closure/lambda syntax for it?
Yes, but you don't want it in this case.
The closest equivalent to that Ruby code is:
new_values = map(print, [1, 2, 3])
That looks pretty nice when you already have a function lying around, like print (although note that in Python 3, map is lazy, so nothing is printed until you iterate over new_values). When you just have some arbitrary expression and you want to use it in map, you need to create a function out of it with a def or a lambda, like this:
new_values = map(lambda x: print(x), [1, 2, 3])
That's the ugliness you apparently want to avoid. And Python has a nice way to avoid it: comprehensions:
new_values = [print(x) for x in values]
However, in this case, you're just trying to execute some statement for each value, not accumulate the new values for each value. So, while this will work (you'll get back a list of None values), it's definitely not idiomatic.
In this case, the right thing to do is to write it explicitly—no closures, no functions, no comprehensions, just a loop:
for x in values:
    print(x)
The most idiomatic:
for x in [1, 2, 3]:
    print(x)
You can use numpy for vectorized arithmetic over an array:
>>> import numpy as np
>>> a = np.array([1, 2, 3])
>>> a * 3
array([3, 6, 9])
You can easily define a lambda that can be used over each element of an array:
>>> array_lambda = np.vectorize(lambda x: x * x)
>>> array_lambda([1, 2, 3])
array([1, 4, 9])
But as others have said, if you want to just print each, use a loop.
There are also libraries that wrap objects to expose all the usual functional programming stuff.
PyDash http://pydash.readthedocs.org/en/latest/
underscore.py (search GitHub for underscore.py)
E.g. pydash allows you to do things like this:
>>> from __future__ import print_function  # only needed on Python 2
>>> from pydash import py_
>>> x = py_([1,2,3,4]).map(lambda x: x*2).each(print).value()
2
4
6
8
>>> x
[2, 4, 6, 8]
(Just always remember to "trigger" execution and/or to un-wrap the wrapped values with .value() at the end!)
Without the need for an assignment:
list(print(_) for _ in [1, 2, 3])
or just
[print(_) for _ in [1, 2, 3]]

zipWith analogue in Python?

What is the analogue of Haskell's zipWith function in Python?
zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
map()
map(operator.add, [1, 2, 3], [3, 2, 1])
Although a list comprehension with zip() is usually used:
[x + y for (x, y) in zip([1, 2, 3], [3, 2, 1])]
You can write your own, if you wish, but in Python we mostly do
list_c = [f(a, b) for (a, b) in zip(list_a, list_b)]
as Python is not inherently functional. It just happens to support a few convenience idioms.
You can use map:
>>> x = [1,2,3,4]
>>> y = [4,3,2,1]
>>> list(map(lambda a, b: a**b, x, y))
[1, 8, 9, 4]
A lazy zipWith with itertools:
import itertools

def zip_with(f, *coll):
    # zip is already lazy in Python 3 (on Python 2, use itertools.izip)
    return itertools.starmap(f, zip(*coll))
This version generalizes the behaviour of zipWith with any number of iterables.
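For example (using the Python 3 version above; the result is an iterator, so list() forces it):
import operator

print(list(zip_with(operator.add, [1, 2], [10, 20])))                     # [11, 22]
print(list(zip_with(lambda a, b, c: a * b + c, [1, 2], [3, 4], [5, 6])))  # [8, 14]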
As others have mentioned, map and zip can help you replicate the functionality of Haskell's zipWith. Generally, you can apply either a predefined binary operator or some binary function to two lists. An example of replacing a Haskell zipWith with Python's map/zip:
Input: zipWith (+) [1,2,3] [3,2,1]
Output: [4,4,4]
>>> import operator
>>> list(map(operator.add, [1, 2, 3], [3, 2, 1]))
[4, 4, 4]
>>> [operator.add(x, y) for x, y in zip([1, 2, 3], [3, 2, 1])]
[4, 4, 4]
There are other variations of zipWith, namely zipWith3, zipWith4, ..., zipWith7. To replicate those functionalities you just pass more iterables; on Python 2 you may want itertools.imap and itertools.izip instead of map and zip to keep things lazy.
>>> [x for x in map(lambda x, y, z: x**2 + y**2 - z**2, [1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12])]
[-55, -60, -63, -64]
>>> [x**2 + y**2 - z**2 for x, y, z in zip([1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12])]
[-55, -60, -63, -64]
As you can see, you can operate on any number of lists and still use the same procedure.
I know this is an old question, but ...
It's already been said that the typical python way would be something like
results = [f(a, b) for a, b in zip(list1, list2)]
and so seeing a line like that in your code, most pythonistas will understand just fine.
There's also already been a purely lazy example shown (I think):
import itertools
def zipWith(f, *args):
    # itertools.izip was needed on Python 2; zip is already lazy in Python 3
    return itertools.starmap(f, zip(*args))
but note that starmap returns an iterator, so you won't be able to index into the result or iterate over it more than once.
If you're not particularly concerned with laziness and/or need to index or loop through your new list multiple times, this is probably as general purpose as you could get:
def zipWith(func, *lists):
    return [func(*args) for args in zip(*lists)]
Not that you couldn't do this with the lazy version, but if you've already built up your list of lists, you can call that function like so:
results = zipWith(func, *lists)
or just like normal like:
results = zipWith(func, list1, list2)
Somehow, that function call just looks simpler and easier to grok than the list comprehension version.
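For a concrete run of the eager version (operator.mul and operator.add are just example functions here):
import operator

def zipWith(func, *lists):
    # same definition as above
    return [func(*args) for args in zip(*lists)]

print(zipWith(operator.mul, [1, 2, 3], [4, 5, 6]))  # [4, 10, 18]
print(zipWith(operator.add, [1, 2, 3], [3, 2, 1]))  # [4, 4, 4]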
Looking at that, this looks strangely reminiscent of another helper function I often write:
def transpose(matrix):
    return zip(*matrix)
which could then be written like:
def transpose(matrix):
    return zipWith(lambda *x: x, *matrix)
Not really a better version, but I always find it interesting how when writing generic functions in a functional style, I often find myself going, "Oh. That's just a more general form of a function I've already written before."
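For what it's worth, a quick sanity check of the transpose helper (Python 3, where zip returns an iterator, hence the list() around the result):
def transpose(matrix):
    return list(zip(*matrix))

print(transpose([[1, 2, 3], [4, 5, 6]]))  # [(1, 4), (2, 5), (3, 6)]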

Does Python have NO need for the Y-Combinator?

After an hour of trying to understand the Y-Combinator... I finally got it, mostly, but then I realized that the same thing can be achieved without it... although I'm not sure I fully understand its purpose.
E.g. factorials with the Y-Combinator:
print (lambda h: (lambda f:f(f))(lambda f: h(lambda n: f(f)(n))))(lambda g: lambda n: n and n * g(n-1) or 1)(input())
Factorials by having a reference to the function in another lambda:
print (lambda f,m:f(f,m))((lambda g,n: n and n * g(g,n-1) or 1),input())
Can anybody please tell me if there is a purpose for the Y-Combinator in Python?
The purpose of the Y combinator is to demonstrate how to write an arbitrary recursive function using only anonymous functions. But almost every language ever invented allows named functions! In other words, it is mainly of academic interest. Of course, you can define factorials much more "naturally" in Python:
def fac(n):
    return n * fac(n-1) if n else 1
The only languages in which the Y combinator is actually useful in practice are the "Turing tarpit" languages, like Unlambda. Not even Lisp/Scheme users will typically use the Y combinator when writing real programs.
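That said, the one-liner from the question becomes easier to follow if its pieces are given names; here is a sketch in Python 3, using a conditional expression in place of the and/or trick:
def Y(h):
    # fixed-point combinator for one-argument functions
    return (lambda f: f(f))(lambda f: h(lambda n: f(f)(n)))

fact = Y(lambda rec: lambda n: n * rec(n - 1) if n else 1)
print(fact(5))  # 120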
Python is not based on the lambda calculus; when you put the question this way, it does not make much sense. The lambda expression is simply a practical feature for creating an anonymous function in place:
>>> list( map(lambda x: x**2, [1, 2, 3, 4, 5]) )
[1, 4, 9, 16, 25]
# the same as:
>>> def sq(x):
...     return x**2
...
>>> list( map(sq, [1, 2, 3, 4, 5]) )
[1, 4, 9, 16, 25]
It is named this way because it was borrowed from functional languages, but it is not for computing with combinatory logic.

LISP cons in python

Is there an equivalent of cons in Python? (any version above 2.5)
If so, is it built in? Or do I need easy_install to get a module?
WARNING AHEAD: The material below may not be practical!
Actually, cons need not be a primitive in Lisp; you can build it with λ.
See Use of lambda for cons/car/cdr definition in SICP for details. In Python, it is translated to:
def cons(x, y):
    return lambda pair: pair(x, y)

def car(pair):
    return pair(lambda p, q: p)

def cdr(pair):
    return pair(lambda p, q: q)
Now, car(cons("a", "b")) should give you 'a'.
How is that? Prefix Scheme :)
Obviously, you can start building lists using cdr recursion. You can define nil to be the empty pair in Python.
def nil(): return ()
Note that in Python you would normally bind a name with =, but since that binding can later be reassigned, I'd rather define a constant function.
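A tiny demonstration, reusing the cons/car/cdr and nil definitions above:
lst = cons(1, cons(2, cons(3, nil)))  # a three-element "list"
print(car(lst))            # 1
print(car(cdr(lst)))       # 2
print(car(cdr(cdr(lst))))  # 3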
Of course, this is not Pythonic but Lispy, not so practical yet elegant.
Exercise: Implement the List Library http://srfi.schemers.org/srfi-1/srfi-1.html of Scheme in Python. Just kidding :)
In Python, it's more typical to use the array-based list class than Lisp-style linked lists. But it's not too hard to convert between them:
def cons(seq):
    result = None
    for item in reversed(seq):
        result = (item, result)
    return result

def iter_cons(seq):
    while seq is not None:
        car, cdr = seq
        yield car
        seq = cdr
>>> cons([1, 2, 3, 4, 5, 6])
(1, (2, (3, (4, (5, (6, None))))))
>>> iter_cons(_)
<generator object iter_cons at 0x00000000024D7090>
>>> list(_)
[1, 2, 3, 4, 5, 6]
Note that Python's lists are implemented as vectors, not as linked lists. You could do lst.insert(0, val), but that operation is O(n).
If you want a data structure that behaves more like a linked list, try collections.deque.
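For example, collections.deque supports O(1) appends and pops at both ends:
from collections import deque

lst = deque([2, 3, 4])
lst.appendleft(1)     # O(1), unlike list.insert(0, 1) which is O(n)
print(lst)            # deque([1, 2, 3, 4])
print(lst.popleft())  # 1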
In Python 3, you can use the splat operator * to do this concisely by writing [x, *xs]. For example:
>>> x = 1
>>> xs = [1, 2, 3]
>>> [x, *xs]
[1, 1, 2, 3]
If you prefer to define it as a function, that is easy too:
def cons(x, xs):
    return [x, *xs]
You can quite trivially define a class that behaves much like cons:
class Cons(object):
    def __init__(self, car, cdr):
        self.car = car
        self.cdr = cdr
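For illustration, a three-element list built with this class looks like:
lst = Cons(1, Cons(2, Cons(3, None)))  # None terminates the list
print(lst.car)           # 1
print(lst.cdr.car)       # 2
print(lst.cdr.cdr.car)   # 3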
However, this is a very 'heavyweight' way to build basic data structures, which Python is not optimised for, so I would expect the results to be much more CPU- and memory-intensive than doing something similar in Lisp.
No. cons is an implementation detail of Lisp-like languages; it doesn't exist in any meaningful sense in Python.

Get a subset of a generator

I have a generator function and want to get the first ten items from it; my first attempt was:
my_generator()[:10]
This doesn't work because generators aren't subscriptable, as the error tells me. Right now I have worked around that with:
list(my_generator())[:10]
This works since it converts the generator to a list; however, it's inefficient and defeats the point of having a generator. Is there some built-in, Pythonic equivalent of [:10] for generators?
import itertools
itertools.islice(my_generator(), 10)
itertools has a number of utilities for working with iterators. islice takes start, stop, and step arguments to slice an iterator just as you would slice a list.
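For example, with itertools.count as an endless stand-in generator (islice itself is lazy, so wrap it in list() when you want the actual values):
from itertools import count, islice

print(list(islice(count(), 10)))        # first ten items: [0, 1, ..., 9]
print(list(islice(count(), 2, 10, 2)))  # start, stop, step: [2, 4, 6, 8]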
To clarify the above comments:
from itertools import islice

def fib_gen():
    # infinite Fibonacci generator: 1, 1, 2, 3, 5, ...
    a, b = 1, 1
    while True:
        yield a
        a, b = b, a + b

assert [1, 1, 2, 3, 5] == list(islice(fib_gen(), 5))
