I am currently working on a problem involving Markov chains, where the input is given as a list of strings. This input has to be transformed into a Markov chain. I have already spent a couple of hours on this.
My idea: As you can see below, I have used Counter from collections to count all transitions, which worked. Now I am trying to count all the tuples where A or B is the first element; this gives me all possible transitions out of A.
Then I'll count the individual transitions like (A, B).
Then I want to use these counts to create a matrix with all the transition probabilities.
from collections import Counter

def markov(seq):
    states = Counter(seq).keys()
    liste = []
    print(states)
    a = zip(seq[:-1], seq[1:])
    print(list(a))

print(markov(["A","A","B","B","A","B","A","A","A"]))
So far I can't get the counting of the tuples to work.
Any help or new ideas on how to solve this would be appreciated.
To count the tuples, you can create another Counter:
b = Counter()
for word_pair in a:
    b[word_pair] += 1
b will keep the count of each pair.
To create the matrix, you can use numpy.
import numpy as np

c = np.array([[b[(i, j)] for j in states] for i in states], dtype=float)
I will leave the task of normalizing each row to sum to 1 as an exercise.
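For completeness, one possible way to do that normalization (a sketch, assuming c was built as above and every state has at least one outgoing transition, so no row sums to zero):

c = c / c.sum(axis=1, keepdims=True)  # divide each row by its row total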
I didn't get exactly what you wanted, but here is what I think you meant:
from collections import Counter

def count_occurence(seq):
    counted_states = []
    transition_dict = {}
    for tup in seq:
        if tup not in counted_states:
            transition_dict[tup] = seq.count(tup)
            counted_states.append(tup)
    print(transition_dict)
    # {('A', 'A'): 3, ('A', 'B'): 2, ('B', 'B'): 1, ('B', 'A'): 2}
def markov(seq):
    states = Counter(seq).keys()
    print(states)
    # dict_keys(['A', 'B'])
    a = list(zip(seq[:-1], seq[1:]))
    print(a)
    # [('A', 'A'), ('A', 'B'), ('B', 'B'), ('B', 'A'), ('A', 'B'), ('B', 'A'),
    #  ('A', 'A'), ('A', 'A')]
    return a

seq = markov(["A","A","B","B","A","B","A","A","A"])
count_occurence(seq)
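From those counts it is only a small step to the probability matrix the question asks for. The following is a sketch of that step (the function name transition_matrix and the use of dict.fromkeys to fix the state order are my own choices, not taken from the answer above):

from collections import Counter

def transition_matrix(seq):
    # count consecutive pairs, then normalize each row so it sums to 1
    states = list(dict.fromkeys(seq))              # unique states, in order of appearance
    pair_counts = Counter(zip(seq[:-1], seq[1:]))
    matrix = []
    for s in states:
        row_total = sum(pair_counts[(s, t)] for t in states)
        matrix.append([pair_counts[(s, t)] / row_total if row_total else 0.0
                       for t in states])
    return states, matrix

states, matrix = transition_matrix(["A","A","B","B","A","B","A","A","A"])
# states -> ['A', 'B']
# matrix -> [[0.6, 0.4], [0.6666666666666666, 0.3333333333333333]]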
I have a list of pairs (tuples), for simplification something like this:
L = [("A","B"), ("B","C"), ("C","D"), ("E","F"), ("G","H"), ("H","I"), ("G","I"), ("G","J")]
Using Python, I want to efficiently split this list into:
L1 = [("A","B"), ("B","C"), ("C","D")]
L2 = [("E","F")]
L3 = [("G","H"), ("G","I"), ("G","J"), ("H","I")]
How can I efficiently split the list into groups of pairs, where within each group every pair shares at least one item with some other pair in that group? As stated in one of the answers, this is really a network problem: the goal is to efficiently split the network into disconnected (isolated) parts.
The types (lists, tuples, sets) may be changed to achieve higher efficiency.
This is more like a network problem, so we can use networkx:
import itertools
import networkx as nx

G = nx.from_edgelist(L)
l = list(nx.connected_components(G))
# after that we create the mapping dict, to get a unique id for each node
mapdict = {z: x for x, y in enumerate(l) for z in y}
# then append the id back to the original data for groupby
newlist = [x + (mapdict[x[0]],) for x in L]
# using groupby, gather pairs with the same id into one sublist
newlist = sorted(newlist, key=lambda x: x[2])
yourlist = [list(y) for x, y in itertools.groupby(newlist, key=lambda x: x[2])]
yourlist
[[('A', 'B', 0), ('B', 'C', 0), ('C', 'D', 0)], [('E', 'F', 1)], [('G', 'H', 2), ('H', 'I', 2), ('G', 'I', 2), ('G', 'J', 2)]]
Then to match your output format:
L1,L2,L3=[[y[:2]for y in x] for x in yourlist]
L1
[('A', 'B'), ('B', 'C'), ('C', 'D')]
L2
[('E', 'F')]
L3
[('G', 'H'), ('H', 'I'), ('G', 'I'), ('G', 'J')]
Initialise a list of groups as empty
Let (a, b) be the next pair
Collect all groups that contain a pair sharing an element with (a, b)
Remove them all, join them, add (a, b), and insert as a new group
Repeat till done
That'd be something like this:
import itertools, functools

def partition(pred, iterable):
    t1, t2 = itertools.tee(iterable)
    return itertools.filterfalse(pred, t1), filter(pred, t2)

groups = []
for a, b in L:
    unrelated, related = partition(lambda group: any(aa == a or bb == b or aa == b or bb == a for aa, bb in group), groups)
    groups = [*unrelated, sum(related, [(a, b)])]
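For reference, running this on the sample L from the question should end with groups holding the three components; within each group the pairs come out in reverse insertion order:

print(groups)
# [[('C', 'D'), ('B', 'C'), ('A', 'B')],
#  [('E', 'F')],
#  [('G', 'J'), ('G', 'I'), ('H', 'I'), ('G', 'H')]]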
An efficient and Pythonic approach is to convert the list of tuples into a set of frozensets and treat it as a pool of candidates. In an outer while loop, start a new group; in an inner loop, keep expanding the group by adding the first candidate set that intersects it (or any candidate, if the group is still empty), removing that candidate from the pool each time. When no intersecting candidate remains, go back to the outer loop to start a new group:
pool = set(map(frozenset, L))
groups = []
while pool:
    group = set()
    groups.append([])
    while True:
        for candidate in pool:
            if not group or group & candidate:
                group |= candidate
                groups[-1].append(tuple(candidate))
                pool.remove(candidate)
                break
        else:
            break
Given your sample input, groups will become:
[[('A', 'B'), ('C', 'B'), ('C', 'D')],
[('G', 'H'), ('H', 'I'), ('G', 'J'), ('G', 'I')],
[('E', 'F')]]
Keep in mind that sets are unordered in Python, which is why the order of the above output doesn't match your expected output, but for your purpose the order should not matter.
You can use the following code:
l = [("A","B"), ("B","C"), ("C","D"), ("E","F"), ("G","H"), ("H","I"), ("G","I"), ("G","J")]
result = []
if len(l) > 1:
tmp = [l[0]]
for i in range(1,len(l)):
if l[i][0] == l[i-1][1] or l[i][1] == l[i-1][0] or l[i][1] == l[i-1][1] or l[i][0] == l[i-1][0]:
tmp.append(l[i])
else:
result.append(tmp)
tmp = [l[i]]
result.append(tmp)
else:
result = l
for elem in result:
print(elem)
output:
[('A', 'B'), ('B', 'C'), ('C', 'D')]
[('E', 'F')]
[('G', 'H'), ('H', 'I'), ('G', 'I'), ('G', 'J')]
Note: this code is based on the assumption that your initial list is already sorted (i.e. connected pairs are adjacent). If that is not the case it will not work, as it makes only a single pass over the whole list to create the groups (complexity O(n)).
Explanations:
result will store your groups
if len(l) > 1: if you have only one element in your list, or an empty list, there is no need to do any processing; you already have the answer
You make a single pass over the list and compare the 4 possible equalities between the tuple at position i and the one at position i-1
tmp is used to build the groups; as long as the condition is met, you keep adding tuples to tmp
when the condition is not met, you append tmp (the group built so far) to result, reinitialize tmp with the current tuple, and continue
You can use a while loop and start iterating from the first member of L (using a for loop inside). Check the whole list for pairs that share a member (either of the two items), append them to a list L1 and pop them from the original list L. The while loop then runs again (until L is empty), and the inner for loop builds the next list L2, and so on. You can try this; a sketch of the idea is below. (I will provide code if this doesn't help.)
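A minimal sketch of that idea (the names split_into_groups, remaining, and elements are mine, not from the answer):

def split_into_groups(pairs):
    # Grow one group at a time by repeatedly sweeping the remaining
    # pairs for anything that shares an element with the group.
    remaining = list(pairs)
    result = []
    while remaining:                      # outer loop: one group per iteration
        group = [remaining.pop(0)]        # seed the group with the first remaining pair
        elements = set(group[0])
        changed = True
        while changed:                    # keep sweeping until a pass adds nothing
            changed = False
            for pair in remaining[:]:     # iterate over a copy so we can remove safely
                if elements & set(pair):  # shares at least one item with the group
                    group.append(pair)
                    elements |= set(pair)
                    remaining.remove(pair)
                    changed = True
        result.append(group)
    return result

L = [("A","B"), ("B","C"), ("C","D"), ("E","F"), ("G","H"), ("H","I"), ("G","I"), ("G","J")]
split_into_groups(L)
# [[('A', 'B'), ('B', 'C'), ('C', 'D')], [('E', 'F')],
#  [('G', 'H'), ('H', 'I'), ('G', 'I'), ('G', 'J')]]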
I am using the following code to dedup and count a given list:
def my_dedup_count(l):
    l.append(None)
    new_l = []
    current_x = l[0]
    current_count = 1
    for x in l[1:]:
        if x == current_x:
            current_count += 1
        else:
            new_l.append((current_x, current_count))
            current_x = x
            current_count = 1
    return new_l
With my testing code:
my_test_list = ['a','a','b','b','b','c','c','d']
my_dedup_count(my_test_list)
result is:
[('a', 2), ('b', 3), ('c', 2), ('d', 1)]
The code works and the output is correct. However, I feel my code is quite lengthy and I am wondering whether anyone could suggest a more elegant way to write it. Thanks!
Yes, don't re-invent the wheel. Use the standard library instead; you want to use the collections.Counter() class here:
from collections import Counter

def my_dedup_count(l):
    return Counter(l).items()
You may want to just return the counter itself and use all functionality it provides (such as giving you a key-count list sorted by counts).
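For instance (a small illustration; most_common() is part of the standard Counter API, and on recent Python versions ties keep their order of first appearance):

from collections import Counter

counts = Counter(['a', 'a', 'b', 'b', 'b', 'c', 'c', 'd'])
counts.most_common()   # [('b', 3), ('a', 2), ('c', 2), ('d', 1)]
counts['b']            # 3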
If you expected only consecutive runs to be counted (so ['a', 'b', 'a'] results in [('a', 1), ('b', 1), ('a', 1)]), then use itertools.groupby():
from itertools import groupby

def my_dedup_count(l):
    return [(k, sum(1 for _ in g)) for k, g in groupby(l)]
I wrote two shorter versions of what you accomplished.
This first option ignores ordering, and all like values in the list will be deduplicated.
from collections import defaultdict

def my_dedup_count(test_list):
    foo = defaultdict(int)
    for el in test_list:
        foo[el] += 1
    return foo.items()

my_test_list = ['a','a','b','b','b','c','c','d', 'a', 'a', 'd']
my_dedup_count(my_test_list)
# [('a', 4), ('c', 2), ('b', 3), ('d', 2)]
This second option respects order and only deduplicates consecutive duplicate values.
def my_dedup_count(my_test_list):
    output = []
    succession = 1
    for idx, el in enumerate(my_test_list):
        if idx + 1 < len(my_test_list) and el == my_test_list[idx+1]:
            succession += 1
        else:
            output.append((el, succession))
            succession = 1
    return output

my_test_list = ['a','a','b','b','b','c','c','d', 'a', 'a', 'd']
my_dedup_count(my_test_list)
# [('a', 2), ('b', 3), ('c', 2), ('d', 1), ('a', 2), ('d', 1)]
I want to know if there is a way to speed up the function shown here. I know this does not look very Pythonic...
def MakePairs(inputlist):
    '''
    #param inputlist: [[["a","b","c"],["d","e","f"]],[["g","h","i"],["j","k","l"]],...]
    #return returnlist: [[["a","d"],["b","e"],["c","f"]],[["g","j"],["h","k"],["i","l"]],...]
    '''
    returnlist = []
    for Pair in xrange(len(inputlist)):
        dummy2 = []
        for item in xrange(len(inputlist[Pair][0])):
            dummy = [inputlist[Pair][0][item], inputlist[Pair][1][item]]
            dummy2.append(dummy)
        returnlist.append(dummy2)
    return returnlist
Edit: The pairs in the returnlist have to be lists.
Thanks in advance!!!
Looks like a job for zip():
>>> l = [[["a","b","c"],["d","e","f"]],[["g","h","i"],["j","k","l"]]]
>>> [zip(*item) for item in l]
[[('a', 'd'), ('b', 'e'), ('c', 'f')], [('g', 'j'), ('h', 'k'), ('i', 'l')]]
So, your function will be:
def MakePairs(inputlist):
    return [zip(*item) for item in inputlist]
Also, consider using itertools.izip() instead of zip().
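If, per the edit, the inner pairs must be lists rather than tuples (and on Python 3, where zip() returns an iterator), a small variation along these lines should work (a sketch, not the original answer's code):

def MakePairs(inputlist):
    # same zip-based idea, but materialise each pair as a list
    return [[list(pair) for pair in zip(*item)] for item in inputlist]

l = [[["a","b","c"],["d","e","f"]], [["g","h","i"],["j","k","l"]]]
MakePairs(l)
# [[['a', 'd'], ['b', 'e'], ['c', 'f']], [['g', 'j'], ['h', 'k'], ['i', 'l']]]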
What is the most efficient way of finding a tuple in a list based on, say, the second element of that tuple, and moving that tuple to the top of the list?
Something of the form:
LL=[('a','a'),('a','b'),('a','c'),('a','d')]
LL.insert(0,LL.pop(LL.index( ... )))
where I would like something in index() that would give me the position of the tuple that has 'c' as its second element.
Is there a classic Python one-line approach to do that?
>>> LL.insert(0,LL.pop([x for x, y in enumerate(LL) if y[1] == 'c'][0]))
>>> LL
[('a', 'c'), ('a', 'a'), ('a', 'b'), ('a', 'd')]
To find the position you can do:
positions = [i for i, tup in enumerate(LL) if tup[1] == 'c']
You can now take the index of the desired element, pop it and push to the beginning of the list
pos = positions[0]
LL.insert(0, LL.pop(pos))
But you can also sort your list using the item in the tuple as key:
sorted(LL, key=lambda tup: tup[1] == 'c', reverse=True)
if you don't care about the order of the other elements.
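For example (sorted() returns a new list; since Python's sort is stable even with reverse=True, the remaining elements actually keep their relative order here):

LL = [('a','a'), ('a','b'), ('a','c'), ('a','d')]
sorted(LL, key=lambda tup: tup[1] == 'c', reverse=True)
# [('a', 'c'), ('a', 'a'), ('a', 'b'), ('a', 'd')]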
Two lines; however, the one-line solutions are all inefficient:
>>> LL=[('a','a'),('a','b'),('a','c'),('a','d')]
>>> i = next((i for i, (x, y) in enumerate(LL) if y == 'c'), 0) # 0 default index
>>> LL[0], LL[i] = LL[i], LL[0]
>>> LL
[('a', 'c'), ('a', 'b'), ('a', 'a'), ('a', 'd')]
This does nothing if no matching element is found:
>>> LL=[('a','a'),('a','b'),('a','c'),('a','d')]
>>> i = next((i for i, (x, y) in enumerate(LL) if y == 'e'), 0) # 0 default index
>>> LL[0], LL[i] = LL[i], LL[0]
>>> LL
[('a', 'a'), ('a', 'b'), ('a', 'c'), ('a', 'd')]
The problem with terse, Pythonic, 'fancy schmancy' solutions is that the code might not be easily maintained and/or reused in other closely related contexts.
It seems best to just use 'boilerplate' code to do the search, and then continue with the application-specific requirements.
So here is an example of easy-to-understand search code that can easily be plugged in when these questions come up, including those situations where we need to know whether the key was found.
def searchTupleList(list_of_tuples, coord_value, coord_index):
    for i in range(0, len(list_of_tuples)):
        if list_of_tuples[i][coord_index] == coord_value:
            return i   # matching index in the list
    return -1          # not found
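For example, plugging it into the original move-to-front task (a small usage sketch with the LL from the question):

LL = [('a', 'a'), ('a', 'b'), ('a', 'c'), ('a', 'd')]
i = searchTupleList(LL, 'c', 1)   # look for 'c' in the second position
if i != -1:                       # only move the tuple if it was actually found
    LL.insert(0, LL.pop(i))
print(LL)   # [('a', 'c'), ('a', 'a'), ('a', 'b'), ('a', 'd')]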