I have a list like
A = [(1, 2, 3), (3, 4, 5), (3, 5, 7)]
and I want to turn it into
A = [[123], [345], [357]]
Is there any way to do this?
The list of tuples comes from a permutation function, so maybe you can recommend a change to that code instead:
import itertools

def converter(N):
    y = list(str(N))
    t = [int(x) for x in y]
    f = list(itertools.permutations(t))
    return f

r = converter(345)
print(r)
You can swizzle that up like so:
Code:
[[int(''.join(str(i) for i in x))] for x in a]
This converts each integer digit to a str, joins the digits, and then converts the result back to an integer.
Test Code:
a = [(1, 2, 3), (3, 4, 5), (3, 5, 7)]
print([[int(''.join(str(i) for i in x))] for x in a])
Results:
[[123], [345], [357]]
For fun (and to demonstrate a radically different approach):
>>> [[sum(i * 10**(len(t) - k - 1) for k, i in enumerate(t))] for t in A]
[[123], [345], [357]]
Use map with a list-comprehension:
[[int(''.join(map(str, x)))] for x in A]
# [[123], [345], [357]]
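For completeness, the same join-and-convert trick can be folded into the question's converter function itself; a sketch (keeping the function name from the question):

```python
import itertools

def converter(N):
    # digits of N, as ints
    t = [int(x) for x in str(N)]
    # join each permutation of the digits back into a single int,
    # wrapped in a one-element list as in the desired output
    return [[int(''.join(str(d) for d in p))]
            for p in itertools.permutations(t)]

print(converter(345))
# [[345], [354], [435], [453], [534], [543]]
```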
I want to find all possible ways to map a series of (contiguous) integers M = {0,1,2,...,m} to another series of integers N = {0,1,2,...,n} where m > n, subject to the constraint that only contiguous integers in M map to the same integer in N.
The following piece of python code comes close (start corresponds to the first element in M, stop-1 corresponds to the last element in M, and nbins corresponds to |N|):
import itertools
def find_bins(start, stop, nbins):
    if nbins > 1:
        return list(list(itertools.product([range(start, ii)], find_bins(ii, stop, nbins-1)))
                    for ii in range(start+1, stop-nbins+2))
    else:
        return [range(start, stop)]
E.g.,
In [20]: find_bins(start=0, stop=5, nbins=3)
Out[20]:
[[([0], [([1], [2, 3, 4])]),
([0], [([1, 2], [3, 4])]),
([0], [([1, 2, 3], [4])])],
[([0, 1], [([2], [3, 4])]),
([0, 1], [([2, 3], [4])])],
[([0, 1, 2], [([3], [4])])]]
However, as you can see, the output is nested, and for the life of me I can't find a way to properly amend the code without breaking it.
The desired output would look like this:
In [20]: find_bins(start=0, stop=5, nbins=3)
Out[20]:
[[(0), (1), (2, 3, 4)],
[(0), (1, 2), (3, 4)],
[(0), (1, 2, 3), (4)],
[(0, 1), (2), (3, 4)],
[(0, 1), (2, 3), (4)],
[(0, 1, 2), (3), (4)]]
I suggest a different approach: a partitioning into n non-empty bins is uniquely determined by the n-1 distinct indices marking the boundaries between the bins, where the first marker is after the first element, and the final marker before the last element. itertools.combinations() can be used directly to generate all such index tuples, and then it's just a matter of using them as slice indices. Like so:
def find_nbins(start, stop, nbins):
    from itertools import combinations
    base = range(start, stop)
    nbase = len(base)
    for ixs in combinations(range(1, stop - start), nbins - 1):
        yield [tuple(base[lo: hi])
               for lo, hi in zip((0,) + ixs, ixs + (nbase,))]
Then, e.g.,
for x in find_nbins(0, 5, 3):
    print(x)
displays:
[(0,), (1,), (2, 3, 4)]
[(0,), (1, 2), (3, 4)]
[(0,), (1, 2, 3), (4,)]
[(0, 1), (2,), (3, 4)]
[(0, 1), (2, 3), (4,)]
[(0, 1, 2), (3,), (4,)]
EDIT: Making it into 2 problems
Just noting that there's a more general underlying problem here: generating the ways to break an arbitrary sequence into n non-empty bins. Then the specific question here is applying that to the sequence range(start, stop). I believe viewing it that way makes the code easier to understand, so here it is:
def gbins(seq, nbins):
    from itertools import combinations
    base = tuple(seq)
    nbase = len(base)
    for ixs in combinations(range(1, nbase), nbins - 1):
        yield [base[lo: hi]
               for lo, hi in zip((0,) + ixs, ixs + (nbase,))]

def find_nbins(start, stop, nbins):
    return gbins(range(start, stop), nbins)
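For example, gbins applies just as well to a non-range sequence. A standalone sketch (re-stating gbins from above so it runs on its own):

```python
from itertools import combinations

def gbins(seq, nbins):
    # all ways to break seq into nbins non-empty contiguous bins
    base = tuple(seq)
    nbase = len(base)
    for ixs in combinations(range(1, nbase), nbins - 1):
        yield [base[lo:hi]
               for lo, hi in zip((0,) + ixs, ixs + (nbase,))]

for part in gbins("abcd", 2):
    print(part)
# [('a',), ('b', 'c', 'd')]
# [('a', 'b'), ('c', 'd')]
# [('a', 'b', 'c'), ('d',)]
```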
This does what I want; I will gladly accept simpler, more elegant solutions:
import itertools

def _split(start, stop, nbins):
    if nbins > 1:
        out = []
        for ii in range(start+1, stop-nbins+2):
            iterator = itertools.product([range(start, ii)], _split(ii, stop, nbins-1))
            for item in iterator:
                out.append(item)
        return out
    else:
        return [range(start, stop)]

def _unpack(nested):
    unpacked = []
    if isinstance(nested, (list, tuple)):
        for item in nested:
            if isinstance(item, tuple):
                for subitem in item:
                    unpacked.extend(_unpack(subitem))
            elif isinstance(item, list):
                unpacked.append([_unpack(subitem) for subitem in item])
            elif isinstance(item, int):
                unpacked.append([item])
        return unpacked
    else:  # integer
        return nested

def find_nbins(start, stop, nbins):
    nested = _split(start, stop, nbins)
    unpacked = [_unpack(item) for item in nested]
    return unpacked
Assuming two data sets are in order and that they contain pairwise matches, what is an efficient way to discover the pairs? There can be noise in either list.
From sets A,B the set C will consist of pairs (A[X1],B[Y1]),(A[X2],B[Y2]),...,(A[Xn],B[Yn]) such that X1 < X2 < ... < Xn and Y1 < Y2 < ... < Yn.
The problem can be demonstrated with the simplified Python block below, where the specifics of how a successful pair is validated are irrelevant.
Because the validation condition is irrelevant, the condition return_pairs(A, B, validate) == return_pairs(B, A, validate) is not required to hold; the data in A and B need not be the same, only that a validation function exists for (A[x], B[y]).
A = [0,0,0,1,2,0,3,4,0,5,6,0,7,0,0,8,0,0,9]
B = [1,2,0,0,0,0,0,3,0,0,4,0,5,6,0,0,7,0,0,8,0,9]
B1 = [1,2,0,0,0,0,0,3,0,0,4,0,5,6,0,0,7,7,7,0,0,8,0,9]
def validate(a, b):
    return a and b and a == b

def return_pairs(A, B, validation):
    ret = []
    x, y = 0, 0
    # Do loops and index changes...
    if validation(A[x], B[y]):
        ret.append((A[x], B[y]))
    return ret
assert list(zip(range(1, 10), range(1, 10))) == return_pairs(A, B, validate)
assert list(zip(range(1, 10), range(1, 10))) == return_pairs(A, B1, validate)
Instead of iterating over each list in two nested loops, you can first remove the noise according to your own criteria, then create a third list from the filtered elements and run each item of it (a newly formed tuple) through your validation. This is assuming I understood the question correctly, which I think I didn't really:
Demo
A = [0,0,0,1,2,0,3,4,0,5,6,0,7,0,0,8,0,0,9]
B = [1,2,0,0,0,0,0,3,0,0,4,0,5,6,0,0,7,0,0,8,0,9]
def clean(oldList):
    newList = []
    for item in oldList:
        if 0 < item and (not newList or item > newList[-1]):
            newList.append(item)
    return newList

def validate(C):
    for item in C:
        if item[0] != item[1]:
            return False
    return True
C = zip(clean(A),clean(B))
#clean(A):[1, 2, 3, 4, 5, 6, 7, 8, 9]
#clean(B):[1, 2, 3, 4, 5, 6, 7, 8, 9]
#list(C):[(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6), (7, 7), (8, 8), (9, 9)]
#validate(C): True
A solution, O(n²):
def return_pairs(A, B, validation):
    ret = []
    used_x, used_y = -1, -1
    for x, _x in enumerate(A):
        for y, _y in enumerate(B):
            if x <= used_x or y <= used_y:
                continue
            if validation(A[x], B[y]):
                used_x, used_y = x, y
                ret.append((A[x], B[y]))
    return ret
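As a quick sanity check, here is the greedy scan run against the data from the question (re-stated so the snippet is self-contained); the expected result is the list of matching pairs (1,1) through (9,9):

```python
def validate(a, b):
    return a and b and a == b

def return_pairs(A, B, validation):
    # greedy O(n^2) scan: once a pair (x, y) is used, only later
    # indices in both lists are considered
    ret = []
    used_x, used_y = -1, -1
    for x, _x in enumerate(A):
        for y, _y in enumerate(B):
            if x <= used_x or y <= used_y:
                continue
            if validation(A[x], B[y]):
                used_x, used_y = x, y
                ret.append((A[x], B[y]))
    return ret

A = [0,0,0,1,2,0,3,4,0,5,6,0,7,0,0,8,0,0,9]
B = [1,2,0,0,0,0,0,3,0,0,4,0,5,6,0,0,7,0,0,8,0,9]
B1 = [1,2,0,0,0,0,0,3,0,0,4,0,5,6,0,0,7,7,7,0,0,8,0,9]

assert return_pairs(A, B, validate) == [(i, i) for i in range(1, 10)]
assert return_pairs(A, B1, validate) == [(i, i) for i in range(1, 10)]
```

Note that the extra noise 7s in B1 are skipped because, after the pair (7, 7) is recorded, only later indices in both lists are considered.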
Say I have a list of valid X = [1, 2, 3, 4, 5] and a list of valid Y = [1, 2, 3, 4, 5].
I need to generate all combinations of every element in X and every element in Y (in this case, 25) and get those combinations in random order.
This in itself would be simple, but there is an additional requirement: In this random order, there cannot be a repetition of the same x in succession. For example, this is okay:
[1, 3]
[2, 5]
[1, 2]
...
[1, 4]
This is not:
[1, 3]
[1, 2] <== the "1" cannot repeat, because there was already one before
[2, 5]
...
[1, 4]
Now, the least efficient idea would be to simply re-shuffle the full set until there are no repetitions. My approach was a bit different: repeatedly create a shuffled variant of X and a list of all Y * X, then pick a random next one from that. So far, I've come up with this:
import random
output = []
num_x = 5
num_y = 5
all_ys = list(xrange(1, num_y + 1)) * num_x

while True:
    # end if no more are available
    if len(output) == num_x * num_y:
        break
    xs = list(xrange(1, num_x + 1))
    while len(xs):
        next_x = random.choice(xs)
        next_y = random.choice(all_ys)
        if [next_x, next_y] not in output:
            xs.remove(next_x)
            all_ys.remove(next_y)
            output.append([next_x, next_y])

print(sorted(output))
But I'm sure this can be done even more efficiently or in a more succinct way?
Also, my solution first goes through all X values before continuing with the full set again, which is not perfectly random. I can live with that for my particular application case.
A simple solution to ensure an average O(N*M) complexity:
import random

def pseudorandom(M, N):
    l = [(x+1, y+1) for x in range(N) for y in range(M)]
    random.shuffle(l)
    for i in range(M*N-1):
        for j in range(i+1, M*N):  # find a compatible ...
            if l[i][0] != l[j][0]:
                l[i+1], l[j] = l[j], l[i+1]
                break
        else:  # or insert otherwise.
            while True:
                l[i], l[i-1] = l[i-1], l[i]
                i -= 1
                if l[i][0] != l[i-1][0]:
                    break
    return l
Some tests:
In [354]: print(pseudorandom(5,5))
[(2, 2), (3, 1), (5, 1), (1, 1), (3, 2), (1, 2), (3, 5), (1, 5), (5, 4),\
(1, 3), (5, 2), (3, 4), (5, 3), (4, 5), (5, 5), (1, 4), (2, 5), (4, 4), (2, 4),\
(4, 2), (2, 1), (4, 3), (2, 3), (4, 1), (3, 3)]
In [355]: %timeit pseudorandom(100,100)
10 loops, best of 3: 41.3 ms per loop
Here is my solution. Each tuple is chosen from among those whose x value differs from the previously selected tuple's. But I've noticed that you have to prepare a final trick for the case where only "bad" tuples (all sharing the same x) are left to place at the end.
import random

num_x = 5
num_y = 5
all_ys = range(1, num_y+1)*num_x
all_xs = sorted(range(1, num_x+1)*num_y)
output = []
last_x = -1
for i in range(0, num_x*num_y):
    # get list of possible tuples to place
    all_ind = range(0, len(all_xs))
    all_ind_ok = [k for k in all_ind if all_xs[k] != last_x]
    ind = random.choice(all_ind_ok)
    last_x = all_xs[ind]
    output.append([all_xs.pop(ind), all_ys.pop(ind)])
    if all_xs.count(last_x) == len(all_xs):  # if only last_x tuples remain,
        break
if len(all_xs) > 0:  # if there are still tuples they are randomly placed
    nb_to_place = len(all_xs)
    while len(all_xs) > 0:
        place = random.randint(0, len(output)-1)
        if output[place][0] == last_x:
            continue
        if place > 0:
            if output[place-1][0] == last_x:
                continue
        output.insert(place, [all_xs.pop(), all_ys.pop()])
print output
Here's a solution using NumPy:
import numpy as np

def generate_pairs(xs, ys):
    n = len(xs)
    m = len(ys)
    indices = np.arange(n)
    array = np.tile(ys, (n, 1))
    [np.random.shuffle(array[i]) for i in range(n)]
    counts = np.full_like(xs, m)
    i = -1
    for _ in range(n * m):
        weights = np.array(counts, dtype=float)
        if i != -1:
            weights[i] = 0
        weights /= np.sum(weights)
        i = np.random.choice(indices, p=weights)
        counts[i] -= 1
        pair = xs[i], array[i, counts[i]]
        yield pair
Here's a Jupyter notebook that explains how it works
Inside the loop, we have to copy the weights, add them up, and choose a random index using the weights. These are all linear in n. So the overall complexity to generate all pairs is O(n^2 m)
But the runtime is deterministic and overhead is low. And I'm fairly sure it generates all legal sequences with equal probability.
An interesting question! Here is my solution. It has the following properties:
If there is no valid solution it should detect this and let you know
The iteration is guaranteed to terminate so it should never get stuck in an infinite loop
Any possible solution is reachable with nonzero probability
I do not know the distribution of the output over all possible solutions, but I think it should be uniform because there is no obvious asymmetry inherent in the algorithm. I would be surprised and pleased to be shown otherwise, though!
import random

def random_without_repeats(xs, ys):
    pairs = [[x, y] for x in xs for y in ys]
    output = [[object()], [object()]]
    seen = set()
    while pairs:
        # choose a random pair from the ones left
        indices = list(set(xrange(len(pairs))) - seen)
        try:
            index = random.choice(indices)
        except IndexError:
            raise Exception('No valid solution exists!')
        # the first element of our randomly chosen pair
        x = pairs[index][0]
        # search for a valid place in output where we slot it in
        for i in xrange(len(output) - 1):
            left, right = output[i], output[i+1]
            if x != left[0] and x != right[0]:
                output.insert(i+1, pairs.pop(index))
                seen = set()
                break
        else:
            # make sure we don't randomly choose a bad pair like that again
            seen |= {i for i in indices if pairs[i][0] == x}
    # trim off the sentinels
    output = output[1:-1]
    assert len(output) == len(xs) * len(ys)
    assert not any(L == R for L, R in zip(output[:-1], output[1:]))
    return output

nx, ny = 5, 5   # OP example
# nx, ny = 2, 10  # output must alternate in 1st index
# nx, ny = 4, 13  # shuffle 'deck of cards' with no repeating suit
# nx, ny = 1, 5   # should raise 'No valid solution exists!' exception

xs = range(1, nx+1)
ys = range(1, ny+1)

for pair in random_without_repeats(xs, ys):
    print pair
This should do what you want.
rando will never generate the same X twice in a row, but I realized it is possible (though it seems unlikely, in that I never noticed it happen in the ten or so runs without the extra check) that, because duplicate pairs may be discarded, it could still land on a previous X. Oh! But I think I figured it out... will update my answer in a moment.
import itertools
import random

X = [1, 2, 3, 4, 5]
Y = [1, 2, 3, 4, 5]

def rando(choice_one, choice_two):
    last_x = random.choice(choice_one)
    while True:
        yield last_x, random.choice(choice_two)
        possible_x = choice_one[:]
        possible_x.remove(last_x)
        last_x = random.choice(possible_x)

all_pairs = set(itertools.product(X, Y))
result = []
r = rando(X, Y)
while set(result) != all_pairs:
    pair = next(r)
    if pair not in result:
        if result and result[-1][0] == pair[0]:
            continue
        result.append(pair)

import pprint
pprint.pprint(result)
For completeness, I guess I will throw in the super-naive "just keep shuffling till you get one" solution. It's not guaranteed even to terminate, but if it does, it will have a good degree of randomness; and you did say one of the desired qualities was succinctness, and this sure is succinct:
import itertools
import random

x = range(5)  # this is a list in Python 2
y = range(5)

all_pairs = list(itertools.product(x, y))
s = list(all_pairs)  # make a working copy
while any(s[i][0] == s[i + 1][0] for i in range(len(s) - 1)):
    random.shuffle(s)

print s
As was commented, for small values of x and y (especially y!), this is actually a reasonably quick solution. Your example of 5 for each completes in an average time of "right away". The deck of cards example (4 and 13) can take much longer, because it will usually require hundreds of thousands of shuffles. (And again, is not guaranteed to terminate at all.)
Distribute the x values (5 times each value) evenly across your output:
import random

def random_combo_without_x_repeats(xvals, yvals):
    # produce all valid combinations, but group by `x` and shuffle the `y`s
    grouped = [[x, random.sample(yvals, len(yvals))] for x in xvals]
    last_x = object()  # sentinel not equal to anything
    while grouped[0][1]:  # still `y`s left
        for _ in range(len(xvals)):
            # shuffle the `x`s, but skip any ordering that would
            # produce consecutive `x`s.
            random.shuffle(grouped)
            if grouped[0][0] != last_x:
                break
        else:
            # we tried to reshuffle N times, but ended up with the same `x` value
            # in the first position each time. This is pretty unlikely, but
            # if this happens we bail out and just reverse the order. That is
            # more than good enough.
            grouped = grouped[::-1]
        # yield a set of (x, y) pairs for each unique x
        # (pick one y from the pre-shuffled group per x)
        for x, ys in grouped:
            yield x, ys.pop()
            last_x = x
This shuffles the y values per x first, then yields one x, y combination for each x. The order in which the xs are yielded is re-shuffled on each iteration, and that is where the restriction is tested.
This is random, but you'll get all numbers between 1 and 5 in the x position before you'll see the same number again:
>>> list(random_combo_without_x_repeats(range(1, 6), range(1, 6)))
[(2, 1), (3, 2), (1, 5), (5, 1), (4, 1),
(2, 4), (3, 1), (4, 3), (5, 5), (1, 4),
(5, 2), (1, 1), (3, 3), (4, 4), (2, 5),
(3, 5), (2, 3), (4, 2), (1, 2), (5, 4),
(2, 2), (3, 4), (1, 3), (4, 5), (5, 3)]
(I manually grouped that into sets of 5). Overall, this makes for a pretty good random shuffling of a fixed input set with your restriction.
It is efficient too; because there is only a 1-in-N chance that you have to re-shuffle the x order, you should see only about one re-shuffle on average during a full run of the algorithm. The whole algorithm therefore stays within O(N*M) bounds, pretty much ideal for something that produces N times M elements of output. Because we limit the reshuffling to at most N attempts before falling back to a simple reverse, we avoid the (extremely unlikely) possibility of endlessly reshuffling.
The only drawback then is that it has to create N copies of the M y values up front.
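A quick property check of the generator above (re-stated here so the snippet runs standalone): a full run should yield all N*M distinct pairs with no two consecutive x values equal.

```python
import random

def random_combo_without_x_repeats(xvals, yvals):
    # group by x, shuffle ys per group, then repeatedly shuffle the group
    # order, avoiding a leading x equal to the previous pass's last x
    grouped = [[x, random.sample(yvals, len(yvals))] for x in xvals]
    last_x = object()  # sentinel not equal to anything
    while grouped[0][1]:
        for _ in range(len(xvals)):
            random.shuffle(grouped)
            if grouped[0][0] != last_x:
                break
        else:
            # reversal puts the offending x group last instead of first
            grouped = grouped[::-1]
        for x, ys in grouped:
            yield x, ys.pop()
            last_x = x

pairs = list(random_combo_without_x_repeats(range(1, 6), range(1, 6)))
assert len(pairs) == 25
assert len(set(pairs)) == 25                                # all distinct combinations
assert all(a[0] != b[0] for a, b in zip(pairs, pairs[1:]))  # no repeated x in a row
```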
Here is an evolutionary algorithm approach. It first evolves a list in which the elements of X are each repeated len(Y) times and then it randomly fills in each element of Y len(X) times. The resulting orders seem fairly random:
import random

# the following fitness function measures
# the number of times in which
# consecutive elements in a list
# are equal
def numRepeats(x):
    n = len(x)
    if n < 2: return 0
    repeats = 0
    for i in range(n-1):
        if x[i] == x[i+1]: repeats += 1
    return repeats

def mutate(xs):
    # swaps random pairs of elements
    # returns a new list
    # one of the two indices is chosen so that
    # it is in a repeated pair
    # and the swapped element is different
    n = len(xs)
    repeats = [i for i in range(n) if (i > 0 and xs[i] == xs[i-1]) or (i < n-1 and xs[i] == xs[i+1])]
    i = random.choice(repeats)
    j = random.randint(0, n-1)
    while xs[j] == xs[i]: j = random.randint(0, n-1)
    ys = xs[:]
    ys[i], ys[j] = ys[j], ys[i]
    return ys

def evolveShuffle(xs, popSize=100, numGens=100):
    # tries to evolve a shuffle of xs so that consecutive
    # elements are different
    # takes the best 10% of each generation and mutates each 9
    # times. Stops when a perfect solution is found
    # popSize assumed to be a multiple of 10
    population = []
    for i in range(popSize):
        deck = xs[:]
        random.shuffle(deck)
        fitness = numRepeats(deck)
        if fitness == 0: return deck
        population.append((fitness, deck))
    for i in range(numGens):
        population.sort(key=(lambda p: p[0]))
        newPop = []
        for i in range(popSize//10):
            fit, deck = population[i]
            newPop.append((fit, deck))
            for j in range(9):
                newDeck = mutate(deck)
                fitness = numRepeats(newDeck)
                if fitness == 0: return newDeck
                newPop.append((fitness, newDeck))
        population = newPop
    # if you get here:
    return []  # no special shuffle found

# the following function takes a list x
# with n distinct elements (n>1) and an integer k
# and returns a random list of length nk
# where consecutive elements are not the same
def specialShuffle(x, k):
    n = len(x)
    if n == 2:
        if random.random() < 0.5:
            a, b = x
        else:
            b, a = x
        return [a, b]*k
    else:
        deck = x*k
        return evolveShuffle(deck)

def randOrder(x, y):
    xs = specialShuffle(x, len(y))
    d = {}
    for i in x:
        ys = y[:]
        random.shuffle(ys)
        d[i] = iter(ys)
    pairs = []
    for i in xs:
        pairs.append((i, next(d[i])))
    return pairs
for example:
>>> randOrder([1,2,3,4,5],[1,2,3,4,5])
[(1, 4), (3, 1), (4, 5), (2, 2), (4, 3), (5, 3), (2, 1), (3, 3), (1, 1), (5, 2), (1, 3), (2, 5), (1, 5), (3, 5), (5, 5), (4, 4), (2, 3), (3, 2), (5, 4), (2, 4), (4, 2), (1, 2), (5, 1), (4, 1), (3, 4)]
As len(X) and len(Y) gets larger this has more difficulty finding a solution (and is designed to return the empty list in that eventuality), in which case the parameters popSize and numGens could be increased. As is, it is able to find 20x20 solutions very rapidly. It takes about a minute when X and Y are of size 100 but even then is able to find a solution (in the times that I have run it).
Interesting restriction! I probably overthought this, solving a more general problem: shuffling an arbitrary list of sequences such that (if possible) no two adjacent sequences share a first item.
from itertools import product
from random import choice, randrange, shuffle

def combine(*sequences):
    return playlist(product(*sequences))

def playlist(sequence):
    r'''Shuffle a set of sequences, avoiding repeated first elements.
    '''#"""#'''
    result = list(sequence)
    length = len(result)
    if length < 2:
        # No rearrangement is possible.
        return result
    def swap(a, b):
        if a != b:
            result[a], result[b] = result[b], result[a]
    swap(0, randrange(length))
    for n in range(1, length):
        previous = result[n-1][0]
        choices = [x for x in range(n, length) if result[x][0] != previous]
        if not choices:
            # Trapped in a corner: Too many of the same item are left.
            # Backtrack as far as necessary to interleave other items.
            minor = 0
            major = length - n
            while n > 0:
                n -= 1
                if result[n][0] == previous:
                    major += 1
                else:
                    minor += 1
                if minor == major - 1:
                    if n == 0 or result[n-1][0] != previous:
                        break
            else:
                # The requirement can't be fulfilled,
                # because there are too many of a single item.
                shuffle(result)
                break
            # Interleave the majority item with the other items.
            major = [item for item in result[n:] if item[0] == previous]
            minor = [item for item in result[n:] if item[0] != previous]
            shuffle(major)
            shuffle(minor)
            result[n] = major.pop(0)
            n += 1
            while n < length:
                result[n] = minor.pop(0)
                n += 1
                result[n] = major.pop(0)
                n += 1
            break
        swap(n, choice(choices))
    return result
This starts out simple, but when it discovers that it can't find an item with a different first element, it figures out how far back it needs to go to interleave that element with something else. Therefore, the main loop traverses the array at most three times (once backwards), but usually just once. Granted, each iteration of the first forward pass checks each remaining item in the array, and the array itself contains every pair, so the overall run time is O((NM)**2).
For your specific problem:
>>> X = Y = [1, 2, 3, 4, 5]
>>> combine(X, Y)
[(3, 5), (1, 1), (4, 4), (1, 2), (3, 4),
(2, 3), (5, 4), (1, 5), (2, 4), (5, 5),
(4, 1), (2, 2), (1, 4), (4, 2), (5, 2),
(2, 1), (3, 3), (2, 5), (3, 2), (1, 3),
(4, 3), (5, 3), (4, 5), (5, 1), (3, 1)]
By the way, this compares x values by equality, not by position in the X array, which may make a difference if the array can contain duplicates. In fact, duplicate values might trigger the fallback case of shuffling all pairs together if more than half of the X values are the same.
Python's list type has an index() method that takes one parameter and returns the index of the first item in the list matching the parameter. For instance:
>>> some_list = ["apple", "pear", "banana", "grape"]
>>> some_list.index("pear")
1
>>> some_list.index("grape")
3
Is there a graceful (idiomatic) way to extend this to lists of complex objects, like tuples? Ideally, I'd like to be able to do something like this:
>>> tuple_list = [("pineapple", 5), ("cherry", 7), ("kumquat", 3), ("plum", 11)]
>>> some_list.getIndexOfTuple(1, 7)
1
>>> some_list.getIndexOfTuple(0, "kumquat")
2
getIndexOfTuple() is just a hypothetical method that accepts a sub-index and a value, and then returns the index of the list item with the given value at that sub-index.
Is there some way to achieve that general result, using list comprehensions or lambdas or something "in-line" like that? I think I could write my own class and method, but I don't want to reinvent the wheel if Python already has a way to do it.
How about this?
>>> tuple_list = [("pineapple", 5), ("cherry", 7), ("kumquat", 3), ("plum", 11)]
>>> [x for x, y in enumerate(tuple_list) if y[1] == 7]
[1]
>>> [x for x, y in enumerate(tuple_list) if y[0] == 'kumquat']
[2]
As pointed out in the comments, this would get all matches. To just get the first one, you can do:
>>> [y[0] for y in tuple_list].index('kumquat')
2
There is a good discussion in the comments about the speed differences between all the solutions posted. I may be a little biased, but I would personally stick to a one-liner, as the speed we're talking about is pretty insignificant compared with creating functions and importing modules for this problem. But if you are planning to do this for a very large number of elements, you might want to look at the other answers provided, as they are faster than what I provided.
Those list comprehensions are messy after a while.
I like this Pythonic approach:
from operator import itemgetter

tuple_list = [("pineapple", 5), ("cherry", 7), ("kumquat", 3), ("plum", 11)]

def collect(l, index):
    return map(itemgetter(index), l)

# And now you can write this:
collect(tuple_list, 0).index("cherry")  # = 1
collect(tuple_list, 1).index(3)         # = 2
If you need your code to be all super performant:
# Stops iterating through the list as soon as it finds the value
def getIndexOfTuple(l, index, value):
    for pos, t in enumerate(l):
        if t[index] == value:
            return pos
    # Matches behavior of list.index
    raise ValueError("list.index(x): x not in list")

getIndexOfTuple(tuple_list, 0, "cherry")  # = 1
One possibility is to use the itemgetter function from the operator module:
import operator
f = operator.itemgetter(0)
print map(f, tuple_list).index("cherry") # yields 1
The call to itemgetter returns a function that will do the equivalent of foo[0] for anything passed to it. Using map, you then apply that function to each tuple, extracting the info into a new list, on which you then call index as normal.
map(f, tuple_list)
is equivalent to:
[f(tuple_list[0]), f(tuple_list[1]), ...etc]
which in turn is equivalent to:
[tuple_list[0][0], tuple_list[1][0], tuple_list[2][0]]
which gives:
["pineapple", "cherry", ...etc]
You can do this with a list comprehension and index()
tuple_list = [("pineapple", 5), ("cherry", 7), ("kumquat", 3), ("plum", 11)]
[x[0] for x in tuple_list].index("kumquat")
2
[x[1] for x in tuple_list].index(7)
1
Inspired by this question, I found this quite elegant:
>>> tuple_list = [("pineapple", 5), ("cherry", 7), ("kumquat", 3), ("plum", 11)]
>>> next(i for i, t in enumerate(tuple_list) if t[1] == 7)
1
>>> next(i for i, t in enumerate(tuple_list) if t[0] == "kumquat")
2
I would place this as a comment to Triptych, but I can't comment yet due to lack of reputation:
Using enumerate to match sub-indices in a list of tuples.
e.g.
li = [(1,2,3,4), (11,22,33,44), (111,222,333,444), ('a','b','c','d'),
      ('aa','bb','cc','dd'), ('aaa','bbb','ccc','ddd')]

# want pos of item having [22,44] in positions 1 and 3:
def getIndexOfTupleWithIndices(li, indices, vals):
    # if index is a tuple of subindices to match against:
    for pos, k in enumerate(li):
        match = True
        for i in indices:
            if k[i] != vals[i]:
                match = False
                break
        if match:
            return pos
    # Matches behavior of list.index
    raise ValueError("list.index(x): x not in list")

idx = [1, 3]
vals = [22, 44]
print getIndexOfTupleWithIndices(li, idx, vals)  # = 1

idx = [0, 1]
vals = ['a', 'b']
print getIndexOfTupleWithIndices(li, idx, vals)  # = 3

idx = [2, 1]
vals = ['cc', 'bb']
print getIndexOfTupleWithIndices(li, idx, vals)  # = 4
OK, there was a mistake in how vals is indexed; the correction is:
def getIndex(li, indices, vals):
    for pos, k in enumerate(li):
        match = True
        for i in indices:
            if k[i] != vals[indices.index(i)]:
                match = False
                break
        if match:
            return pos
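With that indexing fix, the earlier examples behave as intended; a standalone check (using the li list and the getIndex name from above):

```python
li = [(1, 2, 3, 4), (11, 22, 33, 44), (111, 222, 333, 444),
      ('a', 'b', 'c', 'd'), ('aa', 'bb', 'cc', 'dd'), ('aaa', 'bbb', 'ccc', 'ddd')]

def getIndex(li, indices, vals):
    # vals[indices.index(i)] pairs each sub-index with its expected value,
    # so indices and vals no longer need to line up positionally with k
    for pos, k in enumerate(li):
        match = True
        for i in indices:
            if k[i] != vals[indices.index(i)]:
                match = False
                break
        if match:
            return pos

assert getIndex(li, [1, 3], [22, 44]) == 1
assert getIndex(li, [0, 1], ['a', 'b']) == 3
assert getIndex(li, [2, 1], ['cc', 'bb']) == 4
```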
z = list(zip(*tuple_list))
z[1][z[0].index('kumquat')]  # 3
tuple_list = [("pineapple", 5), ("cherry", 7), ("kumquat", 3), ("plum", 11)]

def eachtuple(tupple, pos1, val):
    for e in tupple:
        if e == val:
            return True

for e in tuple_list:
    if eachtuple(e, 1, 7) is True:
        print tuple_list.index(e)

for e in tuple_list:
    if eachtuple(e, 0, "kumquat") is True:
        print tuple_list.index(e)
Python's list.index(x) returns the index of the first occurrence of x in the list. So we can pass the objects returned by a list comprehension to get their index.
>>> tuple_list = [("pineapple", 5), ("cherry", 7), ("kumquat", 3), ("plum", 11)]
>>> [tuple_list.index(t) for t in tuple_list if t[1] == 7]
[1]
>>> [tuple_list.index(t) for t in tuple_list if t[0] == 'kumquat']
[2]
With the same line, we can also get the list of index in case there are multiple matched elements.
>>> tuple_list = [("pineapple", 5), ("cherry", 7), ("kumquat", 3), ("plum", 11), ("banana", 7)]
>>> [tuple_list.index(t) for t in tuple_list if t[1] == 7]
[1, 4]
I guess the following is not the best way to do it (speed and elegance concerns), but, well, it could help:
from collections import OrderedDict as od

t = [('pineapple', 5), ('cherry', 7), ('kumquat', 3), ('plum', 11)]

list(od(t).keys()).index('kumquat')
# 2
list(od(t).values()).index(7)
# 1

# bonus:
od(t)['kumquat']
# 3
A list of 2-tuples can be converted to an ordered dict directly; the data structures are effectively the same, so we can use dict methods on the fly.
This is also possible using lambda expressions:
l = [('rana', 1, 1), ('pato', 1, 1), ('perro', 1, 1)]
map(lambda x:x[0], l).index("pato") # returns 1
Edit to add examples:
l=[['rana', 1, 1], ['pato', 2, 1], ['perro', 1, 1], ['pato', 2, 2], ['pato', 2, 2]]
extract all items by condition:
filter(lambda x:x[0]=="pato", l) #[['pato', 2, 1], ['pato', 2, 2], ['pato', 2, 2]]
extract all items by condition with index:
>>> filter(lambda x:x[1][0]=="pato", enumerate(l))
[(1, ['pato', 2, 1]), (3, ['pato', 2, 2]), (4, ['pato', 2, 2])]
>>> map(lambda x:x[1],_)
[['pato', 2, 1], ['pato', 2, 2], ['pato', 2, 2]]
Note: The _ variable only works in the interactive interpreter. More generally, one must explicitly assign _, i.e. _=filter(lambda x:x[1][0]=="pato", enumerate(l)).
I came up with a quick and dirty approach using max and lambda.
>>> tuple_list = [("pineapple", 5), ("cherry", 7), ("kumquat", 3), ("plum", 11)]
>>> target = 7
>>> max(range(len(tuple_list)), key=lambda i: tuple_list[i][1] == target)
1
There is a caveat though that if the list does not contain the target, the returned index will be 0, which could be misleading.
>>> target = -1
>>> max(range(len(tuple_list)), key=lambda i: tuple_list[i][1] == target)
0
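One way to avoid that misleading 0 is next() with a default; a quick sketch (the index_of helper name is made up for illustration):

```python
tuple_list = [("pineapple", 5), ("cherry", 7), ("kumquat", 3), ("plum", 11)]

def index_of(tlist, sub, value, default=None):
    # returns the first index whose tuple holds `value` at position `sub`,
    # or `default` when there is no match
    return next((i for i, t in enumerate(tlist) if t[sub] == value), default)

print(index_of(tuple_list, 1, 7))   # 1
print(index_of(tuple_list, 1, -1))  # None
```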