How do I make a for loop or a list comprehension so that every iteration gives me two elements?
l = [1,2,3,4,5,6]
for i,k in ???:
    print str(i), '+', str(k), '=', str(i+k)
Output:
1+2=3
3+4=7
5+6=11
You need a pairwise() (or grouped()) implementation.
def pairwise(iterable):
    "s -> (s0, s1), (s2, s3), (s4, s5), ..."
    a = iter(iterable)
    return zip(a, a)

for x, y in pairwise(l):
    print("%d + %d = %d" % (x, y, x + y))
Or, more generally:
def grouped(iterable, n):
    "s -> (s0,s1,s2,...sn-1), (sn,sn+1,sn+2,...s2n-1), (s2n,s2n+1,s2n+2,...s3n-1), ..."
    return zip(*[iter(iterable)]*n)

for x, y in grouped(l, 2):
    print("%d + %d = %d" % (x, y, x + y))
In Python 2, import izip from itertools as a replacement for Python 3's built-in zip() function.
All credit to martineau for his answer to my question. I have found this to be very efficient, as it iterates only once over the list and does not create any unnecessary lists in the process.
N.B.: This should not be confused with the pairwise recipe in Python's own itertools documentation, which yields s -> (s0, s1), (s1, s2), (s2, s3), ..., as pointed out by @lazyr in the comments.
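To make that distinction concrete, here is a minimal sketch contrasting the two behaviours (the overlapping variant is implemented with the itertools-docs tee recipe so it runs on any Python 3):

```python
from itertools import tee

def pairwise_overlapping(iterable):
    # the itertools-docs recipe: s -> (s0, s1), (s1, s2), ...
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)

def grouped(iterable, n):
    # non-overlapping groups of n, as in the answer above
    return zip(*[iter(iterable)] * n)

l = [1, 2, 3, 4, 5, 6]
print(list(grouped(l, 2)))            # [(1, 2), (3, 4), (5, 6)]
print(list(pairwise_overlapping(l)))  # [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]
```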
Little addition for those who would like to do type checking with mypy on Python 3:
from typing import Iterable, Tuple, TypeVar

T = TypeVar("T")

def grouped(iterable: Iterable[T], n=2) -> Iterable[Tuple[T, ...]]:
    """s -> (s0,s1,s2,...sn-1), (sn,sn+1,sn+2,...s2n-1), ..."""
    return zip(*[iter(iterable)] * n)
Well, you need tuples of 2 elements, so:
data = [1,2,3,4,5,6]
for i,k in zip(data[0::2], data[1::2]):
    print str(i), '+', str(k), '=', str(i+k)
Where:
data[0::2] creates the sub-collection of elements whose index satisfies (index % 2 == 0)
zip(x,y) creates a collection of tuples pairing the same-index elements of the x and y collections.
>>> l = [1,2,3,4,5,6]
>>> zip(l,l[1:])
[(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]
>>> zip(l,l[1:])[::2]
[(1, 2), (3, 4), (5, 6)]
>>> [a+b for a,b in zip(l,l[1:])[::2]]
[3, 7, 11]
>>> ["%d + %d = %d" % (a,b,a+b) for a,b in zip(l,l[1:])[::2]]
['1 + 2 = 3', '3 + 4 = 7', '5 + 6 = 11']
(This is Python 2; in Python 3, zip() returns an iterator, so wrap it in list() before slicing.)
A simple solution.
l = [1, 2, 3, 4, 5, 6]
for i in range(0, len(l), 2):
    print str(l[i]), '+', str(l[i + 1]), '=', str(l[i] + l[i + 1])
While all the answers using zip are correct, I find that implementing the functionality yourself leads to more readable code:
def pairwise(it):
    it = iter(it)
    while True:
        try:
            yield next(it), next(it)
        except StopIteration:
            # no more elements in the iterator
            return
The it = iter(it) part ensures that it is actually an iterator, not just an iterable. If it already is an iterator, this line is a no-op.
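A quick check of that claim, as a standalone sketch:

```python
l = [1, 2, 3]
it = iter(l)
# calling iter() on an iterator returns the same object (a no-op)
assert iter(it) is it
# calling iter() on a plain iterable returns a fresh iterator each time
assert iter(l) is not iter(l)
```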
Usage:
for a, b in pairwise([0, 1, 2, 3, 4, 5]):
    print(a + b)
I hope this is an even more elegant way of doing it.
a = [1,2,3,4,5,6]
zip(a[::2], a[1::2])
[(1, 2), (3, 4), (5, 6)]
In case you're interested in the performance, I did a small benchmark (using my library simple_benchmark) to compare the performance of the solutions and I included a function from one of my packages: iteration_utilities.grouper
from iteration_utilities import grouper
import matplotlib as mpl
from simple_benchmark import BenchmarkBuilder

bench = BenchmarkBuilder()

@bench.add_function()
def Johnsyweb(l):
    def pairwise(iterable):
        "s -> (s0, s1), (s2, s3), (s4, s5), ..."
        a = iter(iterable)
        return zip(a, a)
    for x, y in pairwise(l):
        pass

@bench.add_function()
def Margus(data):
    for i, k in zip(data[0::2], data[1::2]):
        pass

@bench.add_function()
def pyanon(l):
    list(zip(l,l[1:]))[::2]

@bench.add_function()
def taskinoor(l):
    for i in range(0, len(l), 2):
        l[i], l[i+1]

@bench.add_function()
def mic_e(it):
    def pairwise(it):
        it = iter(it)
        while True:
            try:
                yield next(it), next(it)
            except StopIteration:
                return
    for a, b in pairwise(it):
        pass

@bench.add_function()
def MSeifert(it):
    for item1, item2 in grouper(it, 2):
        pass

bench.use_random_lists_as_arguments(sizes=[2**i for i in range(1, 20)])
benchmark_result = bench.run()

mpl.rcParams['figure.figsize'] = (8, 10)
benchmark_result.plot_both(relative_to=MSeifert)
So if you want the fastest solution without external dependencies, you should probably just use the approach given by Johnsyweb (at the time of writing it's the most upvoted and accepted answer).
If you don't mind the additional dependency then the grouper from iteration_utilities will probably be a bit faster.
Additional thoughts
Some of the approaches have restrictions that haven't been discussed here.
For example, a few solutions only work for sequences (that is, lists, strings, etc.): the Margus/pyanon/taskinoor solutions, which use indexing. Other solutions work on any iterable (sequences as well as generators and iterators): the Johnsyweb/mic_e/my solutions.
Johnsyweb also provided a solution that works for sizes other than 2, while the other answers don't (okay, iteration_utilities.grouper also allows setting the number of elements to "group").
Then there is also the question of what should happen if the list has an odd number of elements. Should the remaining item be discarded? Should the list be padded to an even size? Should the remaining item be returned as a single? The other answers don't address this point directly; however, if I haven't overlooked anything, they all follow the approach that the remaining item should be discarded (except for taskinoor's answer, which will actually raise an exception).
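With only the standard library, two of those remainder policies look roughly like this (a sketch: zip truncates, zip_longest pads; the "as single" behaviour needs a helper):

```python
from itertools import zip_longest

l = [1, 2, 3]

it = iter(l)
print(list(zip(it, it)))          # truncates the odd element: [(1, 2)]

it = iter(l)
print(list(zip_longest(it, it)))  # pads with None: [(1, 2), (3, None)]
```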
With grouper you can decide what you want to do:
>>> from iteration_utilities import grouper
>>> list(grouper([1, 2, 3], 2)) # as single
[(1, 2), (3,)]
>>> list(grouper([1, 2, 3], 2, truncate=True)) # ignored
[(1, 2)]
>>> list(grouper([1, 2, 3], 2, fillvalue=None)) # padded
[(1, 2), (3, None)]
Use the zip and iter functions together:
I find this solution using iter to be quite elegant:
it = iter(l)
list(zip(it, it))
# [(1, 2), (3, 4), (5, 6)]
Which I found in the Python 3 zip documentation.
it = iter(l)
print(*(f'{u} + {v} = {u+v}' for u, v in zip(it, it)), sep='\n')
# 1 + 2 = 3
# 3 + 4 = 7
# 5 + 6 = 11
To generalise to N elements at a time:
N = 2
list(zip(*([iter(l)] * N)))
# [(1, 2), (3, 4), (5, 6)]
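The same trick with N = 3, to show that nothing but the repetition count changes:

```python
l = [1, 2, 3, 4, 5, 6]
N = 3
# the single shared iterator is consumed N times per output tuple
print(list(zip(*([iter(l)] * N))))  # [(1, 2, 3), (4, 5, 6)]
```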
for (i, k) in zip(l[::2], l[1::2]):
    print i, "+", k, "=", i+k
zip(x, y) returns tuples pairing the same-index elements of its arguments.
l[::2] returns the 1st, the 3rd, the 5th, etc. elements of the list: the first colon indicates that the slice starts at the beginning because there's no number before it, and the second colon is only needed if you want a 'step in the slice' (in this case 2).
l[1::2] does the same thing but starts at the second element of the list, so it returns the 2nd, the 4th, the 6th, etc. elements of the original list.
With unpacking:
l = [1,2,3,4,5,6]
while l:
    i, k, *l = l
    print(f'{i}+{k}={i+k}')
Note: this will consume l, leaving it empty afterward.
There are many ways to do that. For example:
lst = [1,2,3,4,5,6]

[(lst[i], lst[i+1]) for i,_ in enumerate(lst[:-1])]
# [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]

list(zip(*[iter(lst)]*2))
# [(1, 2), (3, 4), (5, 6)]
You can use the more_itertools package.
import more_itertools

lst = range(1, 7)
for i, j in more_itertools.chunked(lst, 2):
    print(f'{i} + {j} = {i+j}')
For anyone it might help, here is a solution to a similar problem but with overlapping pairs (instead of mutually exclusive pairs).
From the Python itertools documentation:
from itertools import izip, tee

def pairwise(iterable):
    "s -> (s0,s1), (s1,s2), (s2, s3), ..."
    a, b = tee(iterable)
    next(b, None)
    return izip(a, b)
Or, more generally:
from itertools import izip, tee

def groupwise(iterable, n=2):
    "s -> (s0,s1,...,sn-1), (s1,s2,...,sn), (s2,s3,...,sn+1), ..."
    t = tee(iterable, n)
    for i in range(1, n):
        for j in range(0, i):
            next(t[i], None)
    return izip(*t)
The title of this question is misleading; you seem to be looking for consecutive pairs, but if you want to iterate over the set of all possible pairs then this will work:
for i, v in enumerate(items[:-1]):
    for u in items[i+1:]:
        print(v, u)  # or whatever you want to do with each pair
A simplistic approach:
[(a[i],a[i+1]) for i in range(0,len(a),2)]
This is useful if your array is a and you want to iterate over it in pairs.
To iterate over triplets or more, just change the step argument of range, for example:
[(a[i],a[i+1],a[i+2]) for i in range(0,len(a),3)]
(you have to deal with excess values if your array length and the step do not fit)
The polished Python3 solution is given in one of the itertools recipes:
import itertools

def grouper(iterable, n, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
    args = [iter(iterable)] * n
    return itertools.zip_longest(*args, fillvalue=fillvalue)
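A short usage sketch of that recipe (repeating the definition so the example is self-contained):

```python
from itertools import zip_longest

def grouper(iterable, n, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

print(list(grouper('ABCDEFG', 3, 'x')))
# [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]
```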
Another attempt at a cleaner solution:
def grouped(itr, n=2):
    itr = iter(itr)
    end = object()
    while True:
        vals = tuple(next(itr, end) for _ in range(n))
        if vals[-1] is end:
            return
        yield vals
For more customization options:
from collections.abc import Sized

def grouped(itr, n=2, /, truncate=True, fillvalue=None, strict=False, nofill=False):
    if strict:
        if isinstance(itr, Sized):
            if len(itr) % n != 0:
                raise ValueError(f"{len(itr)=} is not divisible by {n=}")
    itr = iter(itr)
    end = object()
    while True:
        vals = tuple(next(itr, end) for _ in range(n))
        if vals[-1] is end:
            if vals[0] is end:
                return
            if strict:
                raise ValueError("found extra stuff in iterable")
            if nofill:
                yield tuple(v for v in vals if v is not end)
                return
            if truncate:
                return
            yield tuple(v if v is not end else fillvalue for v in vals)
            return
        yield vals
Thought that this is a good place to share my generalization of this for n > 2, which is just a sliding window over an iterable:
import itertools

def sliding_window(iterable, n):
    its = [itertools.islice(it, i, None)
           for i, it in enumerate(itertools.tee(iterable, n))]
    return zip(*its)  # itertools.izip in Python 2
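For example (sketched with Python 3's zip; the original answer targets Python 2's izip):

```python
import itertools

def sliding_window(iterable, n):
    # offset the i-th tee copy by i positions, then zip the copies together
    its = [itertools.islice(it, i, None)
           for i, it in enumerate(itertools.tee(iterable, n))]
    return zip(*its)

print(list(sliding_window([1, 2, 3, 4, 5], 3)))
# [(1, 2, 3), (2, 3, 4), (3, 4, 5)]
```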
I needed to divide a list by a number, and fixed it like this.
l = [1,2,3,4,5,6]

def divideByN(data, n):
    return [data[i*n : (i+1)*n] for i in range(len(data)//n)]

>>> print(divideByN(l,2))
[[1, 2], [3, 4], [5, 6]]
>>> print(divideByN(l,3))
[[1, 2, 3], [4, 5, 6]]
Using typing so you can verify the data with the mypy static analysis tool:
from typing import Iterator, Tuple, TypeVar

T_ = TypeVar('T_')
Pairs_Iter = Iterator[Tuple[T_, T_]]

def legs(iterable: Iterator[T_]) -> Pairs_Iter:
    begin = next(iterable)
    for end in iterable:
        yield begin, end
        begin = end
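Note that legs yields overlapping pairs, not the disjoint pairs the question asked for; a quick sketch of its behaviour:

```python
from typing import Iterator, Tuple, TypeVar

T_ = TypeVar('T_')

def legs(iterable: Iterator[T_]) -> Iterator[Tuple[T_, T_]]:
    begin = next(iterable)
    for end in iterable:
        yield begin, end
        begin = end

print(list(legs(iter([1, 2, 3, 4]))))
# [(1, 2), (2, 3), (3, 4)]
```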
Here we can have an alt_elem method which can fit in your for loop.
def alt_elem(seq, index=2):
    for i, elem in enumerate(seq, start=1):
        if not i % index:
            yield tuple(seq[i-index:i])

a = range(10)
for index in [2, 3, 4]:
    print("With index: {0}".format(index))
    for i in alt_elem(a, index):
        print(i)
Output:
With index: 2
(0, 1)
(2, 3)
(4, 5)
(6, 7)
(8, 9)
With index: 3
(0, 1, 2)
(3, 4, 5)
(6, 7, 8)
With index: 4
(0, 1, 2, 3)
(4, 5, 6, 7)
Note: The above solution might not be efficient, considering the operations performed in the function.
This is a simple solution, which uses the range function to pick alternate elements from a list of elements.
Note: This is only valid for an even-length list.
a_list = [1, 2, 3, 4, 5, 6]
empty_list = []

for i in range(0, len(a_list), 2):
    empty_list.append(a_list[i] + a_list[i + 1])

print(empty_list)
# [3, 7, 11]
I want to write a Rem(a, b) which returns a new tuple that is like a, with the first appearance of element b removed. For example,
Rem((0, 1, 9, 1, 4), 1) will return (0, 9, 1, 4).
I am only allowed to use higher-order functions such as lambda, filter, map, and reduce.
I am thinking about using filter, but this will delete all of the matching elements:
def myRem(T, E):
    return tuple(filter(lambda x: (x!=E), T))
With myRem((0, 1, 9, 1, 4), 1) I will have (0, 9, 4).
The following works (warning: hacky code):
def myRem(T, E):
    return tuple(map(lambda y: y[1], filter(lambda x: x[0] != T.index(E), enumerate(T))))
But I would never recommend doing this unless the requirements are rigid.
Trick with a temporary list:
def removeFirst(t, v):
    tmp_lst = [v]
    return tuple(filter(lambda x: (x != v or (not tmp_lst or v != tmp_lst.pop(0))), t))

print(removeFirst((0, 1, 9, 1, 4), 1))
tmp_lst.pop(0) - will be called only once (thus excluding the 1st occurrence of the crucial value v)
not tmp_lst - all remaining/potential occurrences will be included due to this condition
The output:
(0, 9, 1, 4)
For fun, using itertools, you can sorta use mostly higher-order functions...
>>> from itertools import *
>>> data = (0, 1, 9, 1, 4)
>>> not1 = (1).__ne__
>>> tuple(chain(takewhile(not1, data), islice(dropwhile(not1, data), 1, None)))
(0, 9, 1, 4)
BTW, here are some timings comparing different approaches for dropping a particular index in a tuple:
>>> timeit.timeit("t[:i] + t[i+1:]", "t = tuple(range(100000)); i=50000", number=10000)
10.42419078599778
>>> timeit.timeit("(*t[:i], *t[i+1:])", "t = tuple(range(100000)); i=50000", number=10000)
20.06185237201862
>>> timeit.timeit("(*islice(t,None, i), *islice(t, i+1, None))", "t = tuple(range(100000)); i=50000; from itertools import islice", number=10000)
>>> timeit.timeit("tuple(chain(islice(t,None, i), islice(t, i+1, None)))", "t = tuple(range(100000)); i=50000; from itertools import islice, chain", number=10000)
19.71128663700074
>>> timeit.timeit("it = iter(t); tuple(chain(islice(it,None, i), islice(it, 1, None)))", "t = tuple(range(100000)); i=50000; from itertools import islice, chain", number=10000)
17.6895881179953
Looks like it is hard to beat the straightforward: t[:i] + t[i+1:], which is not surprising.
Note, this one is shockingly less performant:
>>> timeit.timeit("tuple(j for i, j in enumerate(t) if i != idx)", "t = tuple(range(100000)); idx=50000", number=10000)
111.66658291200292
Which makes me think all these solutions using takewhile, filter and lambda will suffer pretty badly...
Although:
>>> timeit.timeit("not1 = (i).__ne__; tuple(chain(takewhile(not1, t), islice(dropwhile(not1, t), 1, None)))", "t = tuple(range(100000)); i=50000; from itertools import chain, takewhile,dropwhile, islice", number=10000)
62.22159145199112
Almost twice as fast as the generator expression, which goes to show that generator overhead can be quite large. However, takewhile and dropwhile are implemented in C, albeit this implementation has redundancy (takewhile and dropwhile both traverse the prefix before the removed element, so that region is scanned twice).
Another interesting observation: if we simply substitute a list comprehension for the generator expression, it is significantly faster, despite the fact that the list comprehension + tuple call iterates over the result twice, compared to only once with the generator expression:
>>> timeit.timeit("tuple([j for i, j in enumerate(t) if i != idx])", "t = tuple(range(100000)); idx=50000", number=10000)
82.59887028901721
Goes to show how steep the generator-expression price can be...
Here is a solution that only uses lambda, filter(), map(), reduce() and tuple().
from functools import reduce  # reduce is a built-in in Python 2

def myRem(T, E):
    # map the tuple into a list of tuples (value, indicator)
    M = map(lambda x: [(x, 1)] if x == E else [(x, 0)], T)
    # make the indicator 0 once the first instance of E is found
    # think of this as a boolean mask of items to remove
    # here the second reduce can be changed to the sum function
    R = reduce(
        lambda x, y: x + (y if reduce(lambda a, b: a+b, map(lambda z: z[1], x)) < 1
                          else [(y[0][0], 0)]),
        M
    )
    # filter the reduced output based on the indicator
    F = filter(lambda x: x[1] == 0, R)
    # map the output back to the desired format
    O = map(lambda x: x[0], F)
    return tuple(O)
Explanation
A good way to understand what's going on is to print the outputs of the intermediate steps.
Step 1: First Map
For each value in the tuple, we return a tuple with the value and a flag to indicate if it's the value to remove. These tuples are encapsulated in a list because it makes combining easier in the next step.
# original example
T = (0, 1, 9, 1, 4)
E = 1
M = map(lambda x: [(x, 1)] if x == E else [(x,0)], T)
print(M)
#[[(0, 0)], [(1, 1)], [(9, 0)], [(1, 1)], [(4, 0)]]
Step 2: Reduce
This returns a list of tuples in a similar structure to the contents of M, but the flag variable is set to 1 for the first instance of E, and 0 for all subsequent instances. This is achieved by calculating the sum of the indicator up to that point (implemented as another reduce()).
R = reduce(
    lambda x, y: x + (y if reduce(lambda a, b: a+b, map(lambda z: z[1], x)) < 1
                      else [(y[0][0], 0)]),
    M
)
print(R)
#[(0, 0), (1, 1), (9, 0), (1, 0), (4, 0)]
Now the output is in the form of (value, to_be_removed).
Step 3: Filter
Filter out the value to be removed.
F = filter(lambda x: x[1]==0, R)
print(F)
#[(0, 0), (9, 0), (1, 0), (4, 0)]
Step 4: Second map and conversion to tuple
Extract the value from the filtered list, and convert it to a tuple.
O = map(lambda x: x[0], F)
print(tuple(O))
#(0, 9, 1, 4)
This violates your requirement of "only using higher-order functions", but since it's not clear why this is a requirement, I include the below solution anyway.
def myRem(tup, n):
    idx = tup.index(n)
    return tuple(j for i, j in enumerate(tup) if i != idx)

myRem((0, 1, 9, 1, 4), 1)
# (0, 9, 1, 4)
Here is a numpy solution (still not using higher-order functions):
import numpy as np

def myRem(tup, n):
    tup_arr = np.array(tup)
    return tuple(np.delete(tup_arr, np.min(np.nonzero(tup_arr == n)[0])))

myRem((0, 1, 9, 1, 4), 1)
# (0, 9, 1, 4)
What's a good way to define a function partial_k(f, k, args) that takes an arbitrary function f as input (f takes n positional arguments), a value k, and a list of n-1 values, and returns a new function that freezes all the arguments of f except the k-th argument?
For example:
def f(a, b, c):
    return (a, b, c)
assert partial_k(f, 2, [0, 1])(10) == (0, 1, 10) # lambda x: (0, 1, x)
assert partial_k(f, 1, [0, 1])(10) == (0, 10, 1) # lambda x: (0, x, 1)
I could only find some very verbose ways of doing that.
You can use a wrapper function and pass the arguments before and after kth item using slicing to the original function f:
def partial_k(f, k, seq):
    seq = tuple(seq)  # to handle any iterable
    def wrapper(x):
        return f(*(seq[:k] + (x,) + seq[k:]))
    return wrapper

print(partial_k(f, 2, [0, 1])(10))
print(partial_k(f, 1, [0, 1])(10))
Output:
(0, 1, 10)
(0, 10, 1)
For Python 3.5+:
def partial_k(f, k, seq):
    def wrapper(x):
        return f(*seq[:k], x, *seq[k:])
    return wrapper
You could probably use things from the functools package to simplify further, but basically:
def make_partial(f, args, k):
    def func(x):
        new_args = args[:k] + [x] + args[k:]
        return f(*new_args)
    return func
I'm not able to understand the following code segment:
>>> lot = ((1, 2), (3, 4), (5,))
>>> reduce(lambda t1, t2: t1 + t2, lot)
(1, 2, 3, 4, 5)
How does the reduce function produce a tuple of (1,2,3,4,5) ?
It's easier if you break out the lambda into a function, so it's clearer what's going on:
>>> def do_and_print(t1, t2):
        print 't1 is', t1
        print 't2 is', t2
        return t1+t2

>>> reduce(do_and_print, ((1,2), (3,4), (5,)))
t1 is (1, 2)
t2 is (3, 4)
t1 is (1, 2, 3, 4)
t2 is (5,)
(1, 2, 3, 4, 5)
reduce() applies a function sequentially, chaining the elements of a sequence:
reduce(f, [a,b,c,d], s)
is the same as
f(f(f(f(s, a), b), c), d)
and so on. In your case the f() is a lambda function (lambda t1, t2: t1 + t2) which just adds up its two arguments, so you end up with
(((s + a) + b) + c) + d
and because the parenthesizing on adding sequences doesn't make any difference, this is
s + a + b + c + d
or with your actual values
(1, 2) + (3, 4) + (5,)
If s is not given, reduce simply starts with the first element of the sequence instead; but usually the neutral element is used for s, so in your case () would have been correct:
reduce(lambda t1, t2: t1 + t2, lot, ())
But without it, you only run into trouble if lot has no elements (TypeError: reduce() of empty sequence with no initial value).
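The equivalence above can be checked directly with functools.reduce (reduce is a built-in in Python 2):

```python
from functools import reduce

f = lambda t1, t2: t1 + t2
lot = ((1, 2), (3, 4), (5,))

# reduce(f, lot) is f(f((1, 2), (3, 4)), (5,))
assert reduce(f, lot) == f(f((1, 2), (3, 4)), (5,)) == (1, 2, 3, 4, 5)

# with the neutral element as initial value, the empty sequence is safe
assert reduce(f, (), ()) == ()
```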
reduce(...)
reduce(function, sequence[, initial]) -> value
Apply a function of two arguments cumulatively to the items of a sequence,
from left to right, so as to reduce the sequence to a single value.
For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates
((((1+2)+3)+4)+5). If initial is present, it is placed before the items
of the sequence in the calculation, and serves as a default when the
sequence is empty.
Let's trace the reduce:
result = (1,2) + (3,4)
result = result + (5,)
Notice that your reduction concatenates tuples.
reduce takes a function and an iterable as arguments. The function must accept two arguments.
reduce iterates through the iterable: first it sends the first two values to the function, then it sends the result of that together with the next value, and so on.
So in your case, it takes the first and the second item in the tuple, (1,2) and (3,4) and sends them to the lambda function. That function adds them together. The result is sent to the lambda function again, together with the third item. Since there are no more items in the tuple, the result is returned.