Itertools Combinations No Repeats: Where rgb is equivalent to rbg etc - python

I'm trying to use itertools.combinations to return unique combinations. I've searched through several similar questions but have not been able to find an answer.
An example:
>>> import itertools
>>> e = ['r','g','b','g']
>>> list(itertools.combinations(e,3))
[('r', 'g', 'b'), ('r', 'g', 'g'), ('r', 'b', 'g'), ('g', 'b', 'g')]
For my purposes, (r,g,b) is identical to (r,b,g) and so I would want to return only (rgb),(rgg) and (gbg).
This is just an illustrative example and I would want to ignore all such 'duplicates'. The list e could contain up to 5 elements. Each individual element would be either r, g or b. Always looking for combinations of 3 elements from e.
To be concrete, the following are the only combinations I wish to call 'valid': (rrr), (ggg), (bbb), (rgb).
So perhaps the question boils down to how to treat any variation of (rgb) as equal to (rgb) and therefore ignore it.
Can I use itertools to achieve this, or do I need to write my own code to drop the 'duplicates'? If there's no itertools solution, I can easily check whether each combination is a variation of (rgb), but that feels a bit 'un-pythonic'.

You can use a set to discard duplicates.
In your case the character counts are what identify duplicates, so you could use collections.Counter. To store these in a set you need to convert them to frozensets, though (because a Counter isn't hashable):
>>> import itertools
>>> from collections import Counter
>>> e = ['r','g','b','g']
>>> result = []
>>> seen = set()
>>> for comb in itertools.combinations(e, 3):
...     cnts = frozenset(Counter(comb).items())
...     if cnts not in seen:
...         seen.add(cnts)
...         result.append(comb)
...
>>> result
[('r', 'g', 'b'), ('r', 'g', 'g'), ('g', 'b', 'g')]
If you want to convert them to strings use:
result.append(''.join(comb)) # instead of result.append(comb)
and it will give:
['rgb', 'rgg', 'gbg']
The approach is a variation of the unique_everseen recipe (itertools module documentation) - so it's probably "quite pythonic".
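For reference, here is a sketch of what that recipe looks like with a key function (my adaptation of the documented recipe, not code from the answer above):
from collections import Counter
from itertools import combinations

def unique_everseen(iterable, key=None):
    # simplified version of the itertools-recipes helper
    seen = set()
    for element in iterable:
        k = element if key is None else key(element)
        if k not in seen:
            seen.add(k)
            yield element

e = ['r', 'g', 'b', 'g']
result = list(unique_everseen(combinations(e, 3),
                              key=lambda c: frozenset(Counter(c).items())))
# [('r', 'g', 'b'), ('r', 'g', 'g'), ('g', 'b', 'g')]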

According to your definition of "valid outputs", you can directly build them like this:
from collections import Counter
# Your distinct values
values = ['r', 'g', 'b']
e = ['r','g','b','g', 'g']
count = Counter(e)
# Counter({'g': 3, 'r': 1, 'b': 1})
# If x appears at least 3 times, 'xxx' is a valid combination
combinations = [x * 3 for x in values if count[x] >= 3]
# If all values appear at least once, 'rgb' is a valid combination
if all(count[x] >= 1 for x in values):
    combinations.append('rgb')
print(combinations)
#['ggg', 'rgb']
This will be more efficient than creating all possible combinations and filtering the valid ones afterwards.

It is not completely clear what you want to return. It depends on what comes first when iterating. For example if gbr is found first, then rgb will be discarded as a duplicate:
import itertools
e = ['r','g','b','g']
s = set(e)
v = [s] * len(s)
solns = []
for c in itertools.product(*v):
    in_order = sorted(c)
    if in_order not in solns:
        solns.append(in_order)
print solns
This would give you:
[['r', 'r', 'r'], ['b', 'r', 'r'], ['g', 'r', 'r'], ['b', 'b', 'r'], ['b', 'g', 'r'], ['g', 'g', 'r'], ['b', 'b', 'b'], ['b', 'b', 'g'], ['b', 'g', 'g'], ['g', 'g', 'g']]
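Note that in_order not in solns is a linear scan per candidate; a variation (my sketch, not part of the answer above) keeps a set of canonical tuples for O(1) membership tests:
import itertools

e = ['r', 'g', 'b', 'g']
seen = set()
solns = []
for c in itertools.product(set(e), repeat=3):
    key = tuple(sorted(c))  # canonical, order-insensitive form
    if key not in seen:
        seen.add(key)
        solns.append(list(key))
# solns holds the same 10 canonical combinations
# (their order depends on set iteration order)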

Related

How to return a shuffled list considering mutually exclusive items?

Say I have a list of options and I want to pick a certain number randomly.
In my case, say the options are in a list ['a', 'b', 'c', 'd', 'e'] and I want my script to return 3 elements.
However, there is also the case of two options that cannot appear at the same time. That is, if option 'a' is picked randomly, then option 'b' cannot be picked. And the same applies the other way round.
So valid outputs are: ['a', 'c', 'd'] or ['c', 'd', 'b'], while things like ['a', 'b', 'c'] would not because they contain both 'a' and 'b'.
To fulfil these requirements, I am fetching 3 options plus one extra to compensate for a possible discard. Then I keep a set() holding the mutually exclusive pair, remove items from it as they are picked, and check whether both elements have been picked:
import random
mutually_exclusive = set({'a', 'b'})
options = ['a', 'b', 'c', 'd', 'e']
num_options_to_return = 3
shuffled_options = random.sample(options, num_options_to_return + 1)
elements_returned = 0
for item in shuffled_options:
    if elements_returned >= num_options_to_return:
        break
    if item in mutually_exclusive:
        mutually_exclusive.remove(item)
        if not mutually_exclusive:
            # if both elements have appeared, the set is now empty,
            # so we cannot return the current value
            continue
    print(item)
    elements_returned += 1
However, I may be overcoding and Python may have better ways to handle these requirements. Going through random's documentation I couldn't find ways to do this out of the box. Is there a better solution than my current one?
One way to do this is to use itertools.combinations to create all of the possible results, filter out the invalid ones, and make a random.choice from those:
>>> from itertools import combinations
>>> from random import choice
>>> def is_valid(t):
...     return 'a' not in t or 'b' not in t
...
>>> choice([
...     t
...     for t in combinations('abcde', 3)
...     if is_valid(t)
... ])
('c', 'd', 'e')
Maybe a bit naive, but you could generate samples until your condition is met:
import random
options = ['a', 'b', 'c', 'd', 'e']
num_options_to_return = 3
mutually_exclusive = set({'a', 'b'})
while True:
    shuffled_options = random.sample(options, num_options_to_return)
    # valid as long as the sample doesn't contain *both* exclusive items
    if not mutually_exclusive.issubset(shuffled_options):
        break
print(shuffled_options)
You can restructure your options.
import random
options = [('a', 'b'), 'c', 'd', 'e']
n_options = 3
selected_option = random.sample(options, n_options)
result = [item if not isinstance(item, tuple) else random.choice(item)
          for item in selected_option]
print(result)
I would implement it with sets:
import random
mutually_exclusive = {'a', 'b'}
options = ['a', 'b', 'c', 'd', 'e']
num_options_to_return = 3
while True:
    s = random.sample(options, num_options_to_return)
    print('Sample is', s)
    if not mutually_exclusive.issubset(s):
        break
    print('Discard!')
print('Final sample:', s)
Prints (for example):
Sample is ['a', 'b', 'd']
Discard!
Sample is ['b', 'a', 'd']
Discard!
Sample is ['e', 'a', 'c']
Final sample: ['e', 'a', 'c']
I created the function below and I think it's worth sharing too ;-)
import random

def random_picker(options, n, mutually_exclusives=None):
    if mutually_exclusives is None:
        return random.sample(options, n)
    elif any(len(pair) != 2 for pair in mutually_exclusives):
        raise ValueError('Length of pairs of mutually_exclusives iterable must be 2')
    res = []
    while len(res) < n:
        item_index = random.randint(0, len(options) - 1)
        item = options[item_index]
        # skip this pick if its exclusive partner is already in the result
        if any(item == exc and pair[-(i - 1)] in res
               for pair in mutually_exclusives
               for i, exc in enumerate(pair)):
            continue
        res.append(options.pop(item_index))
    return res
Where:
options is the list of available options to pick from.
n is the number of items you want to be picked from options
mutually_exclusives is an iterable containing tuples pairs of mutually exclusive items
You can use it as follows:
>>> random_picker(['a', 'b', 'c', 'd', 'e'], 3)
['c', 'e', 'a']
>>> random_picker(['a', 'b', 'c', 'd', 'e'], 3, [('a', 'b')])
['d', 'b', 'e']
>>> random_picker(['a', 'b', 'c', 'd', 'e'], 3, [('a', 'b'), ('a', 'c')])
['e', 'd', 'a']
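One thing to watch out for (my note, not part of the answer): random_picker pops from options, so it mutates the list you pass in. Pass a copy if the original must stay intact:
options = ['a', 'b', 'c', 'd', 'e']
picked = random_picker(list(options), 3, [('a', 'b')])
# options is still ['a', 'b', 'c', 'd', 'e']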
import random
l = [['a','b'], ['c'], ['d'], ['e']]
x = [random.choice(i) for i in random.sample(l,3)]
Here l is a two-dimensional list, where the first level reflects an AND relation and the second level an OR relation.

What's the most efficient way of identifying repeated pattern in array of objects using Python

I have two arrays of objects:
a = ['a', 'b', 'c', 'd', 'e', 'f', 'e', 'f']
b = ['a', 'b', 'd', 'f', 'e', 'f']
I would like to identify the repeated patterns of more than one object and their occurrences like
['a', 'b']: 2
['e', 'f']: 3
['f', 'e', 'f']: 2
The first sequence, ['a', 'b'], appears once in a and once in b, for a total count of 2. The second, ['e', 'f'], appears twice in a and once in b, for a total of 3. The third, ['f', 'e', 'f'], appears once in a and once in b, for a total of 2.
Is there a good way to do this in Python?
Also the universe of objects is limited. Was wondering if there's an efficient solution that utilizes hash table?
If the approach only needs to handle two lists, the following should work. I am not sure it is the most efficient solution, though.
A nice description of finding n-grams is given in this blog post.
The approach fixes a minimum length and generates every contiguous subsequence (n-gram) of each list, up to the full length of that list.
We then combine the sequences from both lists and build a counter mapping every sequence to its count.
Finally we return a dictionary of all the sequences that occur more than once.
def find_repeating(list_a, list_b):
    min_len = 2

    def find_ngrams(input_list, n):
        return zip(*[input_list[i:] for i in range(n)])

    seq_list_a = []
    for seq_len in range(min_len, len(list_a) + 1):
        seq_list_a += [val for val in find_ngrams(list_a, seq_len)]

    seq_list_b = []
    for seq_len in range(min_len, len(list_b) + 1):
        seq_list_b += [val for val in find_ngrams(list_b, seq_len)]

    all_sequences = seq_list_a + seq_list_b
    counter = {}
    for seq in all_sequences:
        counter[seq] = counter.get(seq, 0) + 1

    filtered_counter = {k: v for k, v in counter.items() if v > 1}
    return filtered_counter
Do let me know if you are unsure about anything.
>>> list_a = ['a', 'b', 'c', 'd', 'e', 'f', 'e', 'f']
>>> list_b = ['a', 'b', 'd', 'f', 'e', 'f']
>>> print find_repeating(list_a, list_b)
{('f', 'e'): 2, ('e', 'f'): 3, ('f', 'e', 'f'): 2, ('a', 'b'): 2}
When you mentioned that you were looking for an efficient solution, my first thought was of the approaches to solving the longest common subsequence problem. But in your case, we actually do need to enumerate all common subsequences so that we can count them, so a dynamic programming solution will not do. Here's my solution. It's certainly shorter than SSSINISTER's solution (mostly because I use the collections.Counter class).
#!/usr/bin/env python3
def find_repeating(sequence_a, sequence_b, min_len=2):
    from collections import Counter
    # Find all subsequences
    subseq_a = [tuple(sequence_a[start:stop])
                for start in range(len(sequence_a) - min_len + 1)
                for stop in range(start + min_len, len(sequence_a) + 1)]
    subseq_b = [tuple(sequence_b[start:stop])
                for start in range(len(sequence_b) - min_len + 1)
                for stop in range(start + min_len, len(sequence_b) + 1)]
    # Find common subsequences
    common = set(tup for tup in subseq_a if tup in subseq_b)
    # Count common subsequences
    return Counter(tup for tup in (subseq_a + subseq_b) if tup in common)
Resulting in ...
>>> list_a = ['a', 'b', 'c', 'd', 'e', 'f', 'e', 'f']
>>> list_b = ['a', 'b', 'd', 'f', 'e', 'f']
>>> print(find_repeating(list_a, list_b))
Counter({('e', 'f'): 3, ('f', 'e'): 2, ('a', 'b'): 2, ('f', 'e', 'f'): 2})
The advantage to using collections.Counter is that not only do you not need to produce the actual code to iterate and count, you get access to all of the dict methods as well as a few specialized methods for using those counts.
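For example (my addition, not part of the answer), most_common makes it easy to pull out the most frequent pattern:
>>> counts = find_repeating(list_a, list_b)
>>> counts.most_common(1)
[(('e', 'f'), 3)]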

Splitting a Python list into a list of overlapping chunks

This question is similar to Slicing a list into a list of sub-lists, but in my case I want to include the last element of each previous sub-list as the first element of the next sub-list. I also have to take into account that the last sub-list always has to have at least two elements.
For example:
list_ = ['a','b','c','d','e','f','g','h']
The result for a size 3 sub-list:
resultant_list = [['a','b','c'],['c','d','e'],['e','f','g'],['g','h']]
The list comprehension in the answer you linked is easily adapted to support overlapping chunks by simply shortening the "step" parameter passed to the range:
>>> list_ = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
>>> n = 3 # group size
>>> m = 1 # overlap size
>>> [list_[i:i+n] for i in range(0, len(list_), n-m)]
[['a', 'b', 'c'], ['c', 'd', 'e'], ['e', 'f', 'g'], ['g', 'h']]
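One caveat (my note, for the overlap-1 case in the question): certain lengths leave the final slice with a single element, e.g. a 9-item list with n=3, m=1, which would violate the at-least-two-elements requirement. Since that lone element already ends the previous chunk, it can simply be dropped:
list_ = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i']
chunks = [list_[i:i+n] for i in range(0, len(list_), n-m)]
if len(chunks) > 1 and len(chunks[-1]) < 2:
    del chunks[-1]  # its single element is already the tail of the previous chunk
# [['a', 'b', 'c'], ['c', 'd', 'e'], ['e', 'f', 'g'], ['g', 'h', 'i']]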
Other visitors to this question might not have the luxury of working with an input list (sliceable, known length, finite). Here is a generator-based solution that can work with arbitrary iterables:
from collections import deque

def chunks(iterable, chunk_size=3, overlap=0):
    # we'll use a deque to hold the values because it automatically
    # discards any extraneous elements if it grows too large
    if chunk_size < 1:
        raise Exception("chunk size too small")
    if overlap >= chunk_size:
        raise Exception("overlap too large")
    queue = deque(maxlen=chunk_size)
    it = iter(iterable)
    i = 0
    try:
        # start by filling the queue with the first group
        for i in range(chunk_size):
            queue.append(next(it))
        while True:
            yield tuple(queue)
            # after yielding a chunk, get enough elements for the next chunk
            for i in range(chunk_size - overlap):
                queue.append(next(it))
    except StopIteration:
        # if the iterator is exhausted, yield any remaining elements
        i += overlap
        if i > 0:
            yield tuple(queue)[-i:]
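For example, on the question's input (my usage note; the generator yields tuples rather than lists):
>>> list(chunks(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'], chunk_size=3, overlap=1))
[('a', 'b', 'c'), ('c', 'd', 'e'), ('e', 'f', 'g'), ('g', 'h')]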
Note: I've since released this implementation in wimpy.util.chunks. If you don't mind adding the dependency, you can pip install wimpy and use from wimpy import chunks rather than copy-pasting the code.
more_itertools has a windowing tool for overlapping iterables.
Given
import more_itertools as mit
iterable = list("abcdefgh")
iterable
# ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
Code
windows = list(mit.windowed(iterable, n=3, step=2))
windows
# [('a', 'b', 'c'), ('c', 'd', 'e'), ('e', 'f', 'g'), ('g', 'h', None)]
If required, you can drop the None fillvalue by filtering the windows:
[list(filter(None, w)) for w in windows]
# [['a', 'b', 'c'], ['c', 'd', 'e'], ['e', 'f', 'g'], ['g', 'h']]
See also more_itertools docs for details on more_itertools.windowed
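One caveat (my note): filter(None, w) drops every falsy value, so it would also remove legitimate items such as 0 or ''. If your data may contain falsy values, compare against the fillvalue explicitly:
[[x for x in w if x is not None] for w in windows]
# [['a', 'b', 'c'], ['c', 'd', 'e'], ['e', 'f', 'g'], ['g', 'h']]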
Here's what I came up with:
l = [1, 2, 3, 4, 5, 6]
x = zip(l[:-1], l[1:])
for i in x:
    print(i)
(1, 2)
(2, 3)
(3, 4)
(4, 5)
(5, 6)
zip accepts any number of iterables; there is also zip_longest.

Memory efficient padding a list

I have a list
a = ['a', 'b', 'c']
of given length and I want to insert a certain element 'x' after every item to get
ax = ['a', 'x', 'b', 'x', 'c', 'x']
Since the elements are of large size, I don't want to do a lot of pops or sublists.
Any ideas?
Since the list is large, the best way is to go with a generator, like this:
def interleave(my_list, filler):
    for item in my_list:
        yield item
        yield filler

print list(interleave(['a', 'b', 'c'], 'x'))
# ['a', 'x', 'b', 'x', 'c', 'x']
Or you can return a chained iterator like this
from itertools import chain, izip, repeat
def interleave(my_list, filler):
return chain.from_iterable(izip(my_list, repeat(filler)))
repeat(filler) returns an iterator which gives filler infinite number of times.
izip(my_list, repeat(filler)) returns an iterator, which picks one value at a time from both my_list and repeat(filler). So, the output of list(izip(my_list, repeat(filler))) would look like this
[('a', 'x'), ('b', 'x'), ('c', 'x')]
Now, all we have to do is flatten the data. So, we chain the result of izip, with chain.from_iterable, which gives one value at a time from the iterables.
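For example (my usage note), it produces the same flattened result as the generator version above:
>>> list(interleave(['a', 'b', 'c'], 'x'))
['a', 'x', 'b', 'x', 'c', 'x']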
Have you considered itertools izip?
izip('ABCD', 'xy') --> Ax By
izip_longest can be used with a zero-length sequence and a fillvalue, combined via chain.from_iterable, as follows:
import itertools
list(itertools.chain.from_iterable(itertools.izip_longest('ABCD', '', fillvalue='x')))
# ['A', 'x', 'B', 'x', 'C', 'x', 'D', 'x']
I tend to use list comprehension for such things.
a = ['a', 'b', 'c']
ax = [a[i//2] if i % 2 == 0 else 'x' for i in range(2*len(a))]
print ax
['a', 'x', 'b', 'x', 'c', 'x']
You can generate your list with a nested list comprehension:
a = ['a', 'b', 'c']
ax = [c for y in a for c in (y, 'x')]
If you don't really need ax to be a list, you can make a generator like this:
ax = (c for y in a for c in (y, 'x'))
for item in ax:
    pass  # do something ...

Ordered Sets Python 2.7

I have a list that I'm attempting to remove duplicate items from. I'm using Python 2.7.1, so I can simply use the set() function. However, this reorders my list, which for my particular case is unacceptable.
Below is a function I wrote to do this. I'm wondering if there's a better/faster way; any comments on it would be appreciated.
def ordered_set(list_):
    newlist = []
    lastitem = None
    for item in list_:
        if item != lastitem:
            newlist.append(item)
            lastitem = item
    return newlist
The above function assumes that none of the items is None and that the items are in order (i.e., ['a', 'a', 'a', 'b', 'b', 'c', 'd']).
Given ['a', 'a', 'a', 'b', 'b', 'c', 'd'], it returns ['a', 'b', 'c', 'd'].
Another very fast method with set:
def remove_duplicates(lst):
    dset = set()
    # relies on the fact that dset.add() always returns None
    return [item for item in lst
            if item not in dset and not dset.add(item)]
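For example (my usage note), unlike the consecutive-only function in the question, this also works when duplicates are not adjacent:
>>> remove_duplicates(['a', 'a', 'b', 'a', 'c', 'b'])
['a', 'b', 'c']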
Use an OrderedDict:
from collections import OrderedDict
l = ['a', 'a', 'a', 'b', 'b', 'c', 'd']
d = OrderedDict()
for x in l:
    d[x] = True

# prints a b c d
for x in d:
    print x,
print
Assuming the input sequence is unordered, here's an O(N) solution (both in space and time).
It produces a sequence with duplicates removed, while leaving unique items in the same relative order as they appeared in the input sequence.
>>> def remove_dups_stable(s):
...     seen = set()
...     for i in s:
...         if i not in seen:
...             yield i
...             seen.add(i)
...
>>> list(remove_dups_stable(['q', 'w', 'e', 'r', 'q', 'w', 'y', 'u', 'i', 't', 'e', 'p', 't', 'y', 'e']))
['q', 'w', 'e', 'r', 'y', 'u', 'i', 't', 'p']
I know this has already been answered, but here's a one-liner (plus import):
from collections import OrderedDict
def dedupe(_list):
    return OrderedDict((item, None) for item in _list).keys()
>>> dedupe(['q', 'w', 'e', 'r', 'q', 'w', 'y', 'u', 'i', 't', 'e', 'p', 't', 'y', 'e'])
['q', 'w', 'e', 'r', 'y', 'u', 'i', 't', 'p']
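One caveat (my note): on Python 3, keys() returns a view rather than a list, so wrap the call in list() if you need a real list:
def dedupe(_list):
    return list(OrderedDict((item, None) for item in _list).keys())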
I think this is perfectly OK. You get O(n) performance which is the best you could hope for.
If the list were unordered, then you'd need a helper set to contain the items you've already visited, but in your case that's not necessary.
If your list isn't sorted, then your question doesn't make sense.
For example, [1, 2, 1] could become [1, 2] or [2, 1].
If your list is large, you may want to write your result back into the same list using a slice assignment to save on memory:
>>> x=['a', 'a', 'a', 'b', 'b', 'c', 'd']
>>> x[:]=[x[i] for i in range(len(x)) if i==0 or x[i]!=x[i-1]]
>>> x
['a', 'b', 'c', 'd']
for inline deleting see Remove items from a list while iterating or Remove items from a list while iterating without using extra memory in Python
One trick you can use: if you know x is sorted and you know x[i] == x[i+j], then you don't need to check anything between x[i] and x[i+j] (and if you don't need to delete these j values, you can just copy the values you want into a new list).
So while you can't beat n operations if everything in the list is unique (i.e. len(set(x)) == len(x)), there is probably an algorithm that needs n comparisons in the worst case but only n/2 in the best case (or fewer than n/2 if you somehow know in advance that len(x)/len(set(x)) > 2 because of the data you've generated).
The optimal algorithm would probably use binary search to find the maximum j for each minimum i, in a divide-and-conquer approach. Initial divisions would probably have length len(x)/approximated(len(set(x))). Hopefully it could be carried out so that, even if len(x) == len(set(x)), it still uses only n operations.
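A minimal sketch of that skip-ahead idea for a sorted list (my sketch, not from the thread; unique_sorted is a hypothetical name), using binary search to jump past each run of equal values:
from bisect import bisect_right

def unique_sorted(x):
    # assumes x is sorted; jumps past each run of equal values
    out = []
    i = 0
    while i < len(x):
        out.append(x[i])
        i = bisect_right(x, x[i], i)  # first index past the run of x[i]
    return out

# unique_sorted(['a', 'a', 'a', 'b', 'b', 'c', 'd']) -> ['a', 'b', 'c', 'd']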
There is a unique_everseen solution described in the itertools documentation:
http://docs.python.org/2/library/itertools.html
from itertools import ifilterfalse

def unique_everseen(iterable, key=None):
    "List unique elements, preserving order. Remember all elements ever seen."
    # unique_everseen('AAAABBBCCDAABBB') --> A B C D
    # unique_everseen('ABBCcAD', str.lower) --> A B C D
    seen = set()
    seen_add = seen.add
    if key is None:
        for element in ifilterfalse(seen.__contains__, iterable):
            seen_add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen_add(k)
                yield element
Looks OK to me. If you really want to use sets, do something like this:
def ordered_set(_list):
    result = set()
    lastitem = None
    for item in _list:
        if item != lastitem:
            result.add(item)
            lastitem = item
    return sorted(tuple(result))
I don't know what performance you will get; you should test it. It's probably much the same because of the method-call overhead!
If you really are paranoid, just like me, read here:
http://wiki.python.org/moin/HowTo/Sorting/
http://wiki.python.org/moin/PythonSpeed/PerformanceTips
Just remembered this (it contains the answer):
http://www.peterbe.com/plog/uniqifiers-benchmark
