Is there an equivalent of cons in Python? (any version above 2.5)
If so, is it built in? Or do I need easy_install to get a module?
WARNING AHEAD: The material below may not be practical!
Actually, cons need not be a primitive in Lisp; you can build it with λ.
See Use of lambda for cons/car/cdr definition in SICP for details. Translated into Python:
def cons(x, y):
    return lambda pair: pair(x, y)

def car(pair):
    return pair(lambda p, q: p)

def cdr(pair):
    return pair(lambda p, q: q)
Now, car(cons("a", "b")) should give you 'a'.
How is that for Scheme in Python? :)
Obviously, you can start building lists using cdr recursion. You can define nil to be the empty pair in Python:
def nil(): return ()
Note that you would normally bind a variable with = in Python, but since such a binding can later be reassigned, I'd rather define nil as a constant function.
Of course, this is not Pythonic but Lispy, not so practical yet elegant.
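To make the cdr recursion concrete, here is a self-contained sketch; `to_python_list` is just an illustrative helper name, not anything standard:

```python
def cons(x, y):
    return lambda pair: pair(x, y)

def car(pair):
    return pair(lambda p, q: p)

def cdr(pair):
    return pair(lambda p, q: q)

def nil():
    return ()

# Build the Lisp-style list (1 2 3) out of nested cons cells,
# terminated by nil.
lst = cons(1, cons(2, cons(3, nil)))

def to_python_list(pair):
    # Walk the cdr chain until we hit the nil sentinel.
    items = []
    while pair is not nil:
        items.append(car(pair))
        pair = cdr(pair)
    return items

print(to_python_list(lst))  # [1, 2, 3]
```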
Exercise: Implement the List Library http://srfi.schemers.org/srfi-1/srfi-1.html of Scheme in Python. Just kidding :)
In Python, it's more typical to use the array-based list class than Lisp-style linked lists. But it's not too hard to convert between them:
def cons(seq):
    result = None
    for item in reversed(seq):
        result = (item, result)
    return result

def iter_cons(seq):
    while seq is not None:
        car, cdr = seq
        yield car
        seq = cdr
>>> cons([1, 2, 3, 4, 5, 6])
(1, (2, (3, (4, (5, (6, None))))))
>>> iter_cons(_)
<generator object iter_cons at 0x00000000024D7090>
>>> list(_)
[1, 2, 3, 4, 5, 6]
Note that Python's lists are implemented as vectors, not as linked lists. You could do lst.insert(0, val), but that operation is O(n).
If you want a data structure that behaves more like a linked list, try using collections.deque, which supports O(1) appends and pops at both ends.
In Python 3, you can use the splat operator * to do this concisely by writing [x, *xs]. For example:
>>> x = 1
>>> xs = [1, 2, 3]
>>> [x, *xs]
[1, 1, 2, 3]
If you prefer to define it as a function, that is easy too:
def cons(x, xs):
    return [x, *xs]
You can quite trivially define a class that behaves much like cons:
class Cons(object):
    def __init__(self, car, cdr):
        self.car = car
        self.cdr = cdr
However, this would be a very 'heavyweight' way to build basic data structures, something Python is not optimised for, so I would expect it to be much more CPU- and memory-intensive than doing the same thing in Lisp.
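A quick sketch of how such a class might be used to build and walk a chain, with None standing in for the empty list:

```python
class Cons(object):
    def __init__(self, car, cdr):
        self.car = car
        self.cdr = cdr

# Build the chain 1 -> 2 -> 3, with None as the end marker.
lst = Cons(1, Cons(2, Cons(3, None)))

# Walk the chain, collecting each car.
values = []
node = lst
while node is not None:
    values.append(node.car)
    node = node.cdr
print(values)  # [1, 2, 3]
```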
No. cons is an implementation detail of Lisp-like languages; it doesn't exist in any meaningful sense in Python.
Is there a way in Python to add agnostically to a collection?
Given the prevalence of duck typing I was surprised that the method to add to a list is append(x) but the method to add to a set is add(x).
I'm writing a family of utility functions that need to build up collections and would ideally like them not to care what type is accumulating the result. It should at least work for list and set - and ideally for other targets, as long as they know what method to implement. Essentially, the duck type here is 'thing to which items can be added'.
In practice, these utility functions will either be passed the target object to add the results to, or - more commonly - a function that generates new instances of the target type when needed.
For example:
def collate(xs, n, f_make=lambda: list()):
    if n < 1:
        raise ValueError('n < 1')
    col = f_make()
    for x in xs:
        if len(col) == n:
            yield col
            col = f_make()
        col.append(x)  # append() okay for list but not for set
    yield col
>>> list(collate(range(6), 3))
[[0, 1, 2], [3, 4, 5]]
>>> list(collate(range(6), 4))
[[0, 1, 2, 3], [4, 5]]
>>> # desired result here: [{0, 1, 2, 3}, {4, 5}]
>>> list(collate(range(6), 4, f_make=lambda: set()))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/paul/proj/mbrain/src/fossil/fn.py", line 42, in collate
col.append(x)
AttributeError: 'set' object has no attribute 'append'
Here collate() is just a simple example. I expect there's already a way to achieve this 'collation' in Python. That's not the real question here.
I'm currently using Python 3.8.5.
(This is an edited answer. Feel free to look at history for my old answer, but it was not relevant to the question)
The pythonic way is to use the standard library here. Rather than manipulating lists, you can use some of the built-in functions that work on iterables more generally. As in, itertools.
This function is a rough guideline for using itertools. It doesn't handle the case where f_make() isn't empty, and it's a bit dense; if you're not used to Python you probably won't find it easy to read. Still, it's technically one of the more Pythonic ways to do this: it does a lot in a fairly small number of lines, which is sort of the point. I'm not certain I'd recommend using it, and I'm sure someone else could find a "more Pythonic" approach.
import itertools

def collate(xs, n, f_make=list):
    result = f_make()
    for _, val in itertools.groupby(
            enumerate(xs),
            lambda i: (len(result) + i[0]) // n):
        yield list(itertools.chain(result, (v[1] for v in val)))
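Exercising the groupby version against the examples from the question (redefined here so the snippet runs on its own):

```python
import itertools

def collate(xs, n, f_make=list):
    result = f_make()
    # Group consecutive indices into buckets of size n.
    for _, val in itertools.groupby(
            enumerate(xs),
            lambda i: (len(result) + i[0]) // n):
        yield list(itertools.chain(result, (v[1] for v in val)))

print(list(collate(range(6), 3)))  # [[0, 1, 2], [3, 4, 5]]
print(list(collate(range(7), 3)))  # [[0, 1, 2], [3, 4, 5], [6]]
```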
Edit 2
Your question has been edited so I'll address the clear points in there now:
If you want a duck-typed way to add to an iterable, you should probably create your own data structure. That way, you can handle all iterables, and not just sets and lists. If you pass a generator in, or something that's been sorted or a map result, you probably want to be able to handle those too, right?
Here's an example of a wrapper for that kind of thing:
import itertools

class Appender:
    def __init__(self, iterable):
        # Materialise once so counting the items doesn't exhaust a generator.
        items = list(iterable)
        self.length = len(items)
        self.iterable = iter(items)
    def append(self, new_item):
        self.length += 1
        # chain() expects iterables, so wrap the single new item in a list.
        self.iterable = itertools.chain(self.iterable, [new_item])
    def __iter__(self):
        return self.iterable
    def __len__(self):
        return self.length
Note that you could further extend this into a full MutableSequence, but I don't think that's strictly necessary for your use case, where you just need the length. If you don't care about iterables, then I'd advise you to change your question title to remove "or other receivers".
Also note that this doesn't handle sets like sets (obviously). I'm of the belief that it should be up to the caller to manage the output of a function. I personally feel that it's perfectly acceptable to require that a function caller only pass in a MutableSequence, and responsibility of casting it to a set should be separate. This leads to clearer and more concise functions that require less logic. If you expect a set and/or dict is going to be a common acceptance method, it's likely worth handling that separately. As was mentioned in comments to your question, these are fundamentally different data types (particularly sets which are not ordered and thus can't really be collated without first being sorted into a non-set anyway).
Returning to this later, I found a better solution using @functools.singledispatch, which is also user-extensible to additional types.
import functools
from collections.abc import MutableSequence, MutableSet

@functools.singledispatch
def append(xs, v):
    raise ValueError('append() not supported for ' + str(type(xs)))

@append.register
def _(xs: MutableSequence, v):
    xs.append(v)

@append.register
def _(xs: MutableSet, v):
    xs.add(v)
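A quick usage sketch of the dispatching append, redefined here so it runs on its own (the ABCs come from collections.abc; annotation-based register requires Python 3.7+):

```python
import functools
from collections.abc import MutableSequence, MutableSet

@functools.singledispatch
def append(xs, v):
    raise ValueError('append() not supported for ' + str(type(xs)))

@append.register
def _(xs: MutableSequence, v):
    xs.append(v)

@append.register
def _(xs: MutableSet, v):
    xs.add(v)

# The dispatcher picks the right mutator for each container type.
lst = [1, 2]
append(lst, 3)
print(lst)        # [1, 2, 3]

s = {1, 2}
append(s, 3)
print(sorted(s))  # [1, 2, 3]
```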
Here's the solution I ended up with...
from collections.abc import MutableSequence, MutableSet

def appender(xs):
    if isinstance(xs, MutableSequence):
        f = xs.append
    elif isinstance(xs, MutableSet):
        f = xs.add
    # Could probably do better validation here...
    elif hasattr(xs, 'append'):
        f = getattr(xs, 'append')
    else:
        raise ValueError('Don\'t know how to append to ' + str(type(xs)))
    return f

def collate(xs, n, f_make=lambda: list()):
    if n < 1:
        raise ValueError('n < 1')
    col = f_make()
    app = appender(col)
    for x in xs:
        if len(col) == n:
            yield col
            col = f_make()
            app = appender(col)
        app(x)
    if col:
        yield col
>>> list(collate(range(6), 4, set))
[{0, 1, 2, 3}, {4, 5}]
>>> list(collate(range(6), 4, list))
[[0, 1, 2, 3], [4, 5]]
(I previously added this to the question - and it was removed. So I'm now adding it as an answer.)
Additionally, just to clarify the intended behaviour:
>>> list(collate(range(6), 2, list))
[[0, 1], [2, 3], [4, 5]]
>>> list(collate(range(6), 1, set))
[{0}, {1}, {2}, {3}, {4}, {5}]
Suppose in Mathematica I define the following function:
f[list_] := Map[Prime[Sow[#]] &, list];
which outputs a list of prime numbers, such that if the input list has n at position i, then the output list will contain the nth prime number at position i. For example,
In[2]:= f[{1, 3, 4}]
Out[2]= {2, 5, 7}
Now, suppose that for some reason (debugging, etc.) I want to check which values are being fed into the Prime function. Because of the Sow command in the function, I can do
In[3] := Reap[f[{1, 3, 4}]]
Out[3] := {{2, 5, 7}, {{1, 3, 4}}}
For more details on Sow/Reap, see the Wolfram Documentation. My question is: is there a natural Python equivalent of Mathematica's Sow and Reap functionality? In particular, is there a way to do this kind of thing without explicitly returning extra values from the Python function in question, writing a second function that is almost the same but returns something extra, or using global variables?
I came up with two ways to implement a rudimentary version of something like this, each with its own limitations. Here's the first version:
farm = []

def sower(func):
    def wrapped(*args, **kw):
        farm.append([])
        return func(*args, **kw)
    return wrapped

def sow(val):
    farm[-1].append(val)
    return val

def reap(val):
    return val, farm.pop()
You can use it like this (based on one of the examples from the Mathematica doc page):
>>> @sower
... def someSum():
... return sum(sow(x**2 + 1) if (x**2 + 1) % 2 == 0 else x**2 + 1 for x in xrange(1, 11))
>>> someSum()
395
>>> reap(someSum())
(395, [2, 10, 26, 50, 82])
This has a number of limitations:
Any function that wants to use sow has to be decorated with the sower decorator. This means you can't use sow inside inline expressions like list comprehensions the way the Mathematica examples do. You might be able to hack around this by inspecting the call stack, but it could get ugly.
Any values that are sown but not reaped get stored in the "farm" forever, so the farm will get bigger and bigger over time.
It doesn't have the "tag" abilities shown in the docs, although that wouldn't be too hard to add.
Writing this made me think of a simpler implementation with slightly different tradeoffs:
farm = []

def sow(val):
    if farm:
        farm[-1].append(val)
    return val

def reap(expr):
    farm.append([])
    val = expr()
    return val, farm.pop()
This one you can use like this, which is somewhat more similar to the Mathematica version:
>>> reap(lambda: sum(sow(x**2 + 1) if (x**2 + 1) % 2 == 0 else x**2 + 1 for x in xrange(1, 11)))
(395, [2, 10, 26, 50, 82])
This one doesn't require the decorator, and it cleans up reaped values, but it takes a no-argument function as its argument, which requires you to wrap your sowing expression in a function (here done with lambda). Also, this means that all sown values in any function called by the reaped expression will be inserted into the same list, which could result in weird ordering; I can't tell from the Mathematica docs if that's what Mathematica does or what.
Unfortunately, as far as I know, there's no simple or idiomatic equivalent in Python of "sow" and "reap". However, you might be able to fake it using a combination of generators and decorators like so:
def sow(func):
    class wrapper(object):
        def __call__(self, *args, **kwargs):
            output = list(func(*args, **kwargs))
            return output[-1]
        def reap(self, *args, **kwargs):
            output = list(func(*args, **kwargs))
            final = output[-1]
            intermediate = output[0:-1]
            return [final, intermediate]
    return wrapper()
@sow
def f(seq, mul):
    yield seq
    yield mul
    yield [a * mul for a in seq]
print f([1, 2, 3], 4) # [4, 8, 12]
print f.reap([1, 2, 3], 4) # [[4, 8, 12], [[1, 2, 3], 4]]
However, compared to Mathematica, this method has a few limitations. First, the function has to be rewritten so it uses yield instead of return, turning it into a generator. The last value to be yielded would then be the final output.
It also doesn't have the same "exception"-like property that the docs describe. The @sow decorator simply returns a class instance that fakes looking like a function and adds an extra reap method.
An alternate solution might be to try using macropy. Since it directly manipulates the AST and bytecode of Python, you may be able to hack together direct support for something more in line with what you're looking for. The tracing macro looks vaguely similar in intent to what you want.
So my friend presented a problem for me to solve, and I'm currently writing a solution in functional-style Python. The problem itself isn't my question; I'm looking for a possible idiom that I can't find at the moment.
What I need is a fold, but instead of using the same function for every one of its applications, it would do a map-like exhaustion of another list containing functions. For example, given this code:
nums = [1, 2, 3]
funcs = [add, sub]
special_foldl(nums, funcs)
the function (special_foldl) would fold the number list down with ((1 + 2) - 3). Is there a function/idiom that elegantly does this, or should I just roll my own?
There is no such function in the Python standard library. You'll have to roll your own, perhaps something like this:
import operator
import functools

nums = [1, 2, 3]
funcs = iter([operator.add, operator.sub])

def special_foldl(nums, funcs):
    return functools.reduce(lambda x, y: next(funcs)(x, y), nums)

print(special_foldl(nums, funcs))
# 0
What is the analogue of Haskell's zipWith function in Python?
zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
map()
map(operator.add, [1, 2, 3], [3, 2, 1])
(On Python 3, map() returns a lazy iterator, so wrap the result in list() if you need a list.)
Although a LC with zip() is usually used.
[x + y for (x, y) in zip([1, 2, 3], [3, 2, 1])]
You can create yours, if you wish, but in Python we mostly do
list_c = [ f(a,b) for (a,b) in zip(list_a,list_b) ]
as Python is not inherently functional. It just happens to support a few convenience idioms.
You can use map:
>>> x = [1,2,3,4]
>>> y = [4,3,2,1]
>>> map(lambda a, b: a**b, x, y)  # Python 2; on Python 3, use list(map(...))
[1, 8, 9, 4]
A lazy zipWith with itertools:
import itertools
def zip_with(f, *coll):
    return itertools.starmap(f, itertools.izip(*coll))
This version generalizes the behaviour of zipWith to any number of iterables.
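On Python 3, where itertools.izip no longer exists (the built-in zip is already lazy), the same idea might look like:

```python
import itertools

def zip_with(f, *coll):
    # zip is lazy in Python 3, so the whole pipeline stays lazy.
    return itertools.starmap(f, zip(*coll))

print(list(zip_with(lambda a, b: a + b, [1, 2, 3], [3, 2, 1])))  # [4, 4, 4]
```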
Generally, as others have mentioned, map and zip can help you replicate the functionality of zipWith as in Haskell.
You can either apply a defined binary operator or some binary function on two lists. An example of replacing a Haskell zipWith with Python's map/zip:
Input: zipWith (+) [1,2,3] [3,2,1]
Output: [4,4,4]
>>> map(operator.add,[1,2,3],[4,3,2])
[5, 5, 5]
>>> [operator.add(x,y) for x,y in zip([1,2,3],[4,3,2])]
[5, 5, 5]
There are other variations of zipWith, namely zipWith3, zipWith4 ... zipWith7. To replicate these lazily, you may want to use izip and imap instead of zip and map (those are the Python 2 itertools names; in Python 3, zip and map are already lazy).
>>> [x for x in itertools.imap(lambda x,y,z: x**2+y**2-z**2, [1,2,3,4], [5,6,7,8], [9,10,11,12])]
[-55, -60, -63, -64]
>>> [x**2+y**2-z**2 for x,y,z in itertools.izip([1,2,3,4], [5,6,7,8], [9,10,11,12])]
[-55, -60, -63, -64]
As you can see, you can operate on any number of lists you desire and still use the same procedure.
I know this is an old question, but ...
It's already been said that the typical python way would be something like
results = [f(a, b) for a, b in zip(list1, list2)]
and so seeing a line like that in your code, most pythonistas will understand just fine.
There's also already been a (I think) purely lazy example shown:
import itertools
def zipWith(f, *args):
    return itertools.starmap(f, itertools.izip(*args))
but I believe that starmap returns an iterator, so you won't be able to index into the result or iterate over it multiple times.
If you're not particularly concerned with laziness and/or need to index or loop through your new list multiple times, this is probably as general purpose as you could get:
def zipWith(func, *lists):
    return [func(*args) for args in zip(*lists)]
Not that you couldn't do it with the lazy version, but you could also call that function like so if you've already built up your list of lists.
results = zipWith(func, *lists)
or just like normal like:
results = zipWith(func, list1, list2)
Somehow, that function call just looks simpler and easier to grok than the list comprehension version.
Looking at that, this looks strangely reminiscent of another helper function I often write:
def transpose(matrix):
    return zip(*matrix)
which could then be written like:
def transpose(matrix):
    return zipWith(lambda *x: x, *matrix)
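For example, on Python 3 (where zip returns an iterator, so wrapping in list makes the rows visible):

```python
def transpose(matrix):
    # list() so the Python 3 zip iterator is materialised.
    return list(zip(*matrix))

print(transpose([[1, 2, 3], [4, 5, 6]]))  # [(1, 4), (2, 5), (3, 6)]
```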
Not really a better version, but I always find it interesting how when writing generic functions in a functional style, I often find myself going, "Oh. That's just a more general form of a function I've already written before."
After an hour of trying to understand the Y combinator, I finally got it, mostly. But then I realized that the same thing can be achieved without it, although I'm not sure I fully understand its purpose.
E.g. factorials with the Y combinator:
print (lambda h: (lambda f:f(f))(lambda f: h(lambda n: f(f)(n))))(lambda g: lambda n: n and n * g(n-1) or 1)(input())
Factorials by having a reference to the function in another lambda:
print (lambda f,m:f(f,m))((lambda g,n: n and n * g(g,n-1) or 1),input())
Can anybody please tell me if there is a purpose for the Y combinator in Python?
The purpose of the Y combinator is to demonstrate how to write an arbitrary recursive function using only anonymous functions. But almost every language ever invented allows named functions! In other words, it is mainly of academic interest. Of course, you can define factorials much more "naturally" in Python:
def fac(n):
    return n * fac(n-1) if n else 1
The only languages in which the Y combinator is actually useful in practice are the "Turing tarpit" languages, like Unlambda. Not even Lisp/Scheme users will typically use the Y combinator when writing real programs.
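For reference, the combinator one-liner from the question can be unpacked into named pieces (same logic, just the strict, eta-expanded form of Y):

```python
# Y combinator: turns a "recursive step" h into a recursive function,
# using only anonymous-function machinery.
Y = lambda h: (lambda f: f(f))(lambda f: h(lambda n: f(f)(n)))

# The factorial "step": given a function g for the recursive call,
# produce one more layer of factorial.
step = lambda g: lambda n: n and n * g(n - 1) or 1

fac = Y(step)
print(fac(5))  # 120
```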
Python is not based on the lambda calculus, so when you put the question this way it does not make much sense. The lambda expression is simply a practical feature for creating an anonymous function in place:
>>> list( map(lambda x: x**2, [1, 2, 3, 4, 5]) )
[1, 4, 9, 16, 25]
# the same as:
>>> def sq(x):
... return x**2
...
>>> list( map(sq, [1, 2, 3, 4, 5]) )
[1, 4, 9, 16, 25]
It is named this way because it was borrowed from functional languages, but it is not for computing with combinatory logic.