Concatenating list results from multiple functions - Python

So, basically I've got a few functions that return tuples. Essentially of the form:
def function():
    return (thing, other_thing)
I want to be able to add several of these functions together in a straightforward way, like this:
def use_results(*args):
    """
    Each arg is a function like the one above
    """
    results = [test() for test in args]
    things = magic_function(results)
    other_things = magic_function(results)
Basically I have the data structure:
[([item_1, item_1], [item_2, item_2]), ([item_3, item_3], [item_4, item_4])]
and I want to turn it into:
[[item_1, item_1, item_3, item_3], [item_2, item_2, item_4, item_4]]
It seems like there's probably a nice pythonic way of doing this with a combination of zip and *, but it's not quite coming to me.

Oh, I feel kind of silly. I found an answer quickly after posting the question. I'm going to still keep this up in case there's a better solution though:
>>> import operator
>>> results = [([1,1], [2,2]), ([3,3], [4,4])]
>>> map(operator.add, *results)
[[1, 1, 3, 3], [2, 2, 4, 4]]
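(One note in case you are on Python 3: there map returns an iterator, so wrap it in list() to get the same result.)
>>> list(map(operator.add, *results))
[[1, 1, 3, 3], [2, 2, 4, 4]]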

Without importing any module, just using built-ins:
>>> results = [([1,1], [2,2]), ([3,3], [4,4])]
>>> [x+y for x,y in zip(*results)]
[[1, 1, 3, 3], [2, 2, 4, 4]]
Or this way:
>>> map(lambda s,t:s+t, *results)
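Putting either of these back into the original use_results skeleton, one sketch (assuming each function returns a 2-tuple of lists, as in the question) would be:
def use_results(*args):
    # call each test function, then concatenate the first elements
    # together and the second elements together
    results = [test() for test in args]
    things, other_things = [a + b for a, b in zip(*results)]
    return things, other_things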

Related

Recursive function in Python: create a function that generates all lists of numbers whose sum is N

I am trying to write a recursive function in Python that generates all the lists of numbers < N whose sum equals N.
This is the code I wrote:
def fn(v,n):
    N=5
    global vvi
    v.append(n) ;
    if(len(v)>N):
        return
    if(sum(v)>=5):
        if(sum(v)==5): vvi.append(v)
    else:
        for i in range(n,N+1):
            fn(v,i)
This is the output I get:
vvi
Out[170]: [[1, 1, 1, 1, 1, 2, 3, 4, 5, 2, 3, 4, 5, 2, 3, 4, 5, 2, 3, 4, 5]]
I tried the same thing in C++ and it worked fine.
What you need to do is just formulate the problem as a recursive description and implement it. You want to prepend each singleton [j] to each of the lists with sum N-j, unless N-j = 0, in which case you would also include the singleton itself. Translated into Python, this would be:
def glist(listsum, minelm=1):
    for j in range(minelm, listsum+1):
        if listsum-j > 0:
            for l in glist(listsum-j, minelm=j):
                yield [j]+l
        else:
            yield [j]

for l in glist(5):
    print(l)
The solution contains a mechanism that excludes permuted solutions by requiring the lists to be non-decreasing; this is done via the minelm argument, which limits the values in the rest of the list. If you want to include permuted lists, you could disable the minelm mechanism by replacing the recursive call with glist(listsum-j), as sketched below.
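A sketch of that permutation-including variant (renamed glist_ordered here purely for illustration) would be:
def glist_ordered(listsum):
    # same as glist above, but without the minelm lower bound,
    # so [1, 2] and [2, 1] are both generated
    for j in range(1, listsum + 1):
        if listsum - j > 0:
            for l in glist_ordered(listsum - j):
                yield [j] + l
        else:
            yield [j]
so that list(glist_ordered(3)) gives [[1, 1, 1], [1, 2], [2, 1], [3]].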
As for your code, I don't really follow what you're trying to do. I'm sorry, but it is not very clear (and that's not a problem only in Python; it's even more of one in C).
First of all, it's a bad idea to return the result from a function via a global variable; returning a result is what return is for, and in Python you also have yield, which is nice if you want to return multiple elements as you go. For a recursive function it's even worse to return via a global variable (or even to use one), since you are running many nested invocations of the function but have only one global variable.
Also, you call the function fn and give it arguments v and n. What does that actually tell you about the function and its arguments? At most that it's a function and that one of the arguments is probably a number. Not very useful if somebody (else) is to read and understand the code.
If you want a more elaborate answer about what's formally wrong with your code, you should probably include a minimal, complete, verifiable example, including the expected output (and perhaps the observed output).
You may want to set aside the recursive solution and consider a dynamic programming approach instead:
def fn(N):
    ways = {0:[[]]}
    for n in range(1, N+1):
        for i, x in enumerate(range(n, N+1)):
            for v in ways[i]:
                ways.setdefault(x, []).append(v+[n])
    return ways[N]
>>> fn(5)
[[1, 1, 1, 1, 1], [1, 1, 1, 2], [1, 2, 2], [1, 1, 3], [2, 3], [1, 4], [5]]
>>> fn(3)
[[1, 1, 1], [1, 2], [3]]
Using global variables and side effects on input parameters is generally considered bad practice, and you should look to avoid them.

Returning a list of list elements

I need help writing a function that takes a single list and returns a list of lists, where all equal elements of the original list are grouped together in their own sublist.
I know that I'll have to iterate through the original list and, for each value, either append it to an existing sublist (if that value has already been seen) or create a new sublist and add it to the final list.
An example would be:
Input: [1, 2, 2, 2, 3, 1, 1, 3]
Output: [[1,1,1], [2,2,2], [3,3]]
I'd do this in two steps:
>>> import collections
>>> inputs = [1, 2, 2, 2, 3, 1, 1, 3]
>>> counts = collections.Counter(inputs)
>>> counts
Counter({1: 3, 2: 3, 3: 2})
>>> outputs = [[key] * count for key, count in counts.items()]
>>> outputs
[[1, 1, 1], [2, 2, 2], [3, 3]]
(The fact that these happen to be in sorted numerical order, and also in the order of first appearance, is just a coincidence here. Counters, like normal dictionaries, store their keys in arbitrary order, and you should assume that [[3, 3], [1, 1, 1], [2, 2, 2]] would be just as possible a result. If that's not acceptable, you need a bit more work.)
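For example, if you want the groups in sorted key order, one small sketch would be to sort the items before building the output:
>>> outputs = [[key] * count for key, count in sorted(counts.items())]
>>> outputs
[[1, 1, 1], [2, 2, 2], [3, 3]]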
So, how does it work?
The first step creates a Counter, which is just a special subclass of dict made for counting occurrences of each key. One of the many nifty things about it is that you can just pass it any iterable (like a list) and it will count up how many times each element appears. It's a trivial one-liner, it's obvious and readable once you know how Counter works, and it's even about as efficient as anything could possibly be.*
But that isn't the output format you wanted. How do we get that? Well, we have to get back from 1: 3 (meaning "3 copies of 1") to [1, 1, 1]. You can write that as [key] * count.** And the rest is just a bog-standard list comprehension.
If you look at the docs for the collections module, they start with a link to the source. Many modules in the stdlib are like this, because they're meant to serve as source code for learning from as well as usable code. So, you should be able to figure out how the Counter constructor works. (It's basically just calling that _count_elements function.) Since that's the only part of Counter you're actually using beyond a basic dict, you could just write that part yourself. (But really, once you've understood how it works, there's no good reason not to use it, right?)
* For each element, it's just doing a hash table lookup (and insert if needed) and a += 1. And in CPython, it all happens in reasonably-optimized C.
** Note that we don't have to worry about whether to use [key] * count vs. [key for _ in range(count)] here, because the values have to be immutable, or at least of an "equality is as good as identity" type, or they wouldn't be usable as keys.
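If you're curious roughly what that counting step looks like without Counter, a minimal hand-rolled sketch (not the actual stdlib code) might be:
counts = {}
for item in inputs:
    # one hash lookup and increment per element
    counts[item] = counts.get(item, 0) + 1
outputs = [[key] * count for key, count in counts.items()]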
The most time-efficient approach would be to use a dictionary:
collector = {}
for elem in inputlist:
    collector.setdefault(elem, []).append(elem)
output = collector.values()
The other, more costly option is to sort, then group using itertools.groupby():
from itertools import groupby
output = [list(g) for k, g in groupby(sorted(inputlist))]
Demo:
>>> inputlist = [1, 2, 2, 2, 3, 1, 1, 3]
>>> collector = {}
>>> for elem in inputlist:
...     collector.setdefault(elem, []).append(elem)
...
>>> collector.values()
[[1, 1, 1], [2, 2, 2], [3, 3]]
>>> from itertools import groupby
>>> [list(g) for k, g in groupby(sorted(inputlist))]
[[1, 1, 1], [2, 2, 2], [3, 3]]
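One caveat, assuming Python 3: there dict.values() returns a view rather than a list, so you would materialize it explicitly:
output = list(collector.values())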
What about this, as you said you wanted a function:
def makeList(user_list):
    user_list.sort()
    x = user_list[0]
    output = [[]]
    for i in user_list:
        if i == x:
            output[-1].append(i)
        else:
            output.append([i])
            x = i
    return output
>>> print makeList([1, 2, 2, 2, 3, 1, 1, 3])
[[1, 1, 1], [2, 2, 2], [3, 3]]

Python equivalent to Ruby Array.each method

In Python what is equivalent to Ruby's Array.each method? Does Python have a nice and short closure/lambda syntax for it?
[1,2,3].each do |x|
  puts x
end
Does Python have a nice and short closure/lambda syntax for it?
Yes, but you don't want it in this case.
The closest equivalent to that Ruby code is:
new_values = map(print, [1, 2, 3])
That looks pretty nice when you already have a function lying around, like print. When you just have some arbitrary expression and you want to use it in map, you need to create a function out of it with a def or a lambda, like this:
new_values = map(lambda x: print(x), [1, 2, 3])
That's the ugliness you apparently want to avoid. And Python has a nice way to avoid it: comprehensions:
new_values = [print(x) for x in values]
However, in this case, you're just trying to execute some statement for each value, not accumulate the new values for each value. So, while this will work (you'll get back a list of None values), it's definitely not idiomatic.
In this case, the right thing to do is to write it explicitly—no closures, no functions, no comprehensions, just a loop:
for x in values:
    print(x)
The most idiomatic:
for x in [1,2,3]:
    print x
You can use numpy for vectorized arithmetic over an array:
>>> import numpy as np
>>> a = np.array([1, 2, 3])
>>> a * 3
array([3, 6, 9])
You can easily define a lambda that can be used over each element of an array:
>>> array_lambda=np.vectorize(lambda x: x * x)
>>> array_lambda([1, 2, 3])
array([1, 4, 9])
But as others have said, if you want to just print each, use a loop.
There are also libraries that wrap objects to expose all the usual functional programming stuff.
PyDash http://pydash.readthedocs.org/en/latest/
underscore.py (search GitHub for underscore.py)
E.g. pydash allows you to do things like this:
>>> from pydash import py_
>>> from __future__ import print_function
>>> x = py_([1,2,3,4]).map(lambda x: x*2).each(print).value()
2
4
6
8
>>> x
[2, 4, 6, 8]
(Just always remember to "trigger" execution and/or to un-wrap the wrapped values with .value() at the end!)
Without the need for an assignment:
list(print(_) for _ in [1, 2, 3])
or just
[print(_) for _ in [1, 2, 3]]

Redis: How to parse a list result

I am storing a list in Redis like this:
redis.lpush('foo', [1,2,3,4,5,6,7,8,9])
And then I get the list back like this:
redis.lrange('foo', 0, -1)
and I get something like this:
[b'[1, 2, 3, 4, 5, 6, 7, 8, 9]']
How can I convert this to actual Python list?
Also, I don't see anything defined in RESPONSE_CALLBACKS that can help? Am I missing something?
A possible solution (which in my opinion sucks) can be:
result = redis.lrange('foo',0, -1)[0].decode()
result = result.strip('[]')
result = result.split(', ')
# lastly, if you know all your items in the list are integers
result = [int(x) for x in result]
UPDATE
Ok, so I got the solution.
Actually, the lpush function expects all the list items to be passed as separate arguments, NOT as a single list. The function signature from the redis-py source makes this clear...
def lpush(self, name, *values):
    "Push ``values`` onto the head of the list ``name``"
    return self.execute_command('LPUSH', name, *values)
What I am doing above is sending a single list as one argument, which is then sent to Redis as a SINGLE item.
I should be unpacking the list instead as suggested in the answer:
redis.lpush('foo', *[1,2,3,4,5,6,7,8,9])
which returns the result I expect...
redis.lrange('foo', 0, -1)
[b'9', b'8', b'7', b'6', b'5', b'4', b'3', b'2', b'1']
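If you then want plain ints back rather than byte strings, a small follow-up sketch would be:
>>> [int(x) for x in redis.lrange('foo', 0, -1)]
[9, 8, 7, 6, 5, 4, 3, 2, 1]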
I think you're bumping into semantics which are similar to the distinction between list.append() and list.extend(). I know that this works for me:
myredis.lpush('foo', *[1,2,3,4])
... note the * ("splat", i.e. argument-unpacking) operator prefixing the list!
Another way: you can use RedisWorks library.
pip install redisworks
>>> from redisworks import Root
>>> root = Root()
>>> root.foo = [1,2,3,4,5,6,7,8,9] # saves it to Redis as a list
...
>>> print(root.foo) # loads it from Redis later
It converts Python types to Redis types and vice versa. So even if you had a nested list, it would still work:
>>> root.sides = [10, [1, 2]] # saves it as list in Redis.
>>> print(root.sides) # loads it from Redis
[10, [1, 2]]
>>> type(root.sides[1])
<class 'list'>
Disclaimer: I wrote the library. Here is the code: https://github.com/seperman/redisworks
import json

# The stored value is a single byte string that happens to be valid JSON,
# so it can be parsed straight back into a Python list.
r = [b'[1, 2, 3, 4, 5, 6, 7, 8, 9]']
rstr = r[0]
res_list = json.loads(rstr)

zipWith analogue in Python?

What is the analogue of Haskell's zipWith function in Python?
zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
map()
map(operator.add, [1, 2, 3], [3, 2, 1])
Although a list comprehension with zip() is usually used.
[x + y for (x, y) in zip([1, 2, 3], [3, 2, 1])]
You can create your own, if you wish, but in Python we mostly do
list_c = [ f(a,b) for (a,b) in zip(list_a,list_b) ]
as Python is not inherently functional. It just happens to support a few convenience idioms.
You can use map:
>>> x = [1,2,3,4]
>>> y = [4,3,2,1]
>>> map(lambda a, b: a**b, x, y)
[1, 8, 9, 4]
A lazy zipWith with itertools:
import itertools

def zip_with(f, *coll):
    return itertools.starmap(f, itertools.izip(*coll))
This version generalizes the behaviour of zipWith to any number of iterables.
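For example (using itertools.izip as in the Python 2 definition above; on Python 3 you would use the built-in zip instead):
>>> import operator
>>> list(zip_with(operator.mul, [1, 2, 3], [3, 2, 1]))
[3, 4, 3]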
As others have mentioned, map and zip can help you replicate the functionality of Haskell's zipWith.
Generally, you can apply either a predefined binary operator or some binary function to two lists. An example of replacing a Haskell zipWith with Python's map/zip:
Input: zipWith (+) [1,2,3] [3,2,1]
Output: [4,4,4]
>>> map(operator.add,[1,2,3],[4,3,2])
[5, 5, 5]
>>> [operator.add(x,y) for x,y in zip([1,2,3],[4,3,2])]
[5, 5, 5]
There are other variations of zipWith, namely zipWith3, zipWith4, ..., zipWith7. To replicate these, you may want to use izip and imap instead of zip and map.
>>> [x for x in itertools.imap(lambda x,y,z:x**2+y**2-z**2,[1,2,3,4],[5,6,7,8],[9,10,11,12])]
[-55, -60, -63, -64]
>>> [x**2+y**2-z**2 for x,y,z in itertools.izip([1,2,3,4],[5,6,7,8],[9,10,11,12])]
[-55, -60, -63, -64]
As you can see, you can operate on as many lists as you like and still use the same procedure.
I know this is an old question, but ...
It's already been said that the typical python way would be something like
results = [f(a, b) for a, b in zip(list1, list2)]
and so seeing a line like that in your code, most pythonistas will understand just fine.
There's also already been a (I think) purely lazy example shown:
import itertools

def zipWith(f, *args):
    return itertools.starmap(f, itertools.izip(*args))
but I believe that starmap returns an iterator, so you won't be able to index into, or iterate more than once over, what that function returns.
If you're not particularly concerned with laziness and/or need to index or loop through your new list multiple times, this is probably as general purpose as you could get:
def zipWith(func, *lists):
    return [func(*args) for args in zip(*lists)]
Not that you couldn't do it with the lazy version, but you could also call that function like so if you've already built up your list of lists.
results = zipWith(func, *lists)
or just like normal like:
results = zipWith(func, list1, list2)
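For instance, with a concrete function:
>>> import operator
>>> zipWith(operator.add, [1, 2, 3], [3, 2, 1])
[4, 4, 4]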
Somehow, that function call just looks simpler and easier to grok than the list comprehension version.
Looking at that, this looks strangely reminiscent of another helper function I often write:
def transpose(matrix):
    return zip(*matrix)
which could then be written like:
def transpose(matrix):
    return zipWith(lambda *x: x, *matrix)
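For example (on Python 2, where zip returns a list):
>>> transpose([[1, 2, 3], [4, 5, 6]])
[(1, 4), (2, 5), (3, 6)]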
Not really a better version, but I always find it interesting how when writing generic functions in a functional style, I often find myself going, "Oh. That's just a more general form of a function I've already written before."
