I have two lists:
lookup_list = [1,2,3]
my_list = [1,2,3,4,5,2,1,2,2,1,2,3,4,5,1,3,2,3,1]
I want to count how many times lookup_list appears in my_list with the following logic:
The order should be 1 -> 2 -> 3
In my_list, the lookup_list items don't have to be next to each other: 1,4,2,1,5,3 should produce a match, since a 2 comes after a 1 and a 3 comes after a 2.
The matches based on this logic:
1st match: [1,2,3,4,5,2,1,2,2,1,2,3,4,5,1,3,2,3,1]
2nd match: [1,2,3,4,5,2,1,2,2,1,2,3,4,5,1,3,2,3,1]
3rd match: [1,2,3,4,5,2,1,2,2,1,2,3,4,5,1,3,2,3,1]
4th match: [1,2,3,4,5,2,1,2,2,1,2,3,4,5,1,3,2,3,1]
The lookup_list is dynamic; it could be defined as [1,2] or [1,2,3,4], etc. How can I solve it? All the answers I've found are about finding matches where 1,2,3 appear next to each other in an ordered way, like this one: Find matching sequence of items in a list
I can find the count of consecutive sequences with the code below, but it doesn't count the non-consecutive sequences:
from collections import Counter
from nltk import ngrams

lookup_list = [1, 2, 3]
my_list = [1, 2, 3, 4, 5, 2, 1, 2, 2, 1, 2, 3, 4, 5, 1, 3, 2, 3, 1]

all_counts = Counter(ngrams(my_list, len(lookup_list)))
counts = {k: all_counts[k] for k in [tuple(lookup_list)]}
counts
>>> {(1, 2, 3): 2}
I tried using pandas rolling window functions but they don't have a custom reset option.
def find_all_sequences(source, sequence):
    def find_sequence(source, sequence, index, used):
        for i in sequence:
            while True:
                index = source.index(i, index + 1)
                if index not in used:
                    break
            yield index

    first, *rest = sequence
    index = -1
    used = set()
    while True:
        try:
            index = source.index(first, index + 1)
            indexes = index, *find_sequence(source, rest, index, used)
        except ValueError:
            break
        else:
            used.update(indexes)
            yield indexes
Usage:
lookup_list = [1,2,3]
my_list = [1,2,3,4,5,2,1,2,2,1,2,3,4,5,1,3,2,3,1]
print(*find_all_sequences(my_list, lookup_list), sep="\n")
Output:
(0, 1, 2)
(6, 7, 11)
(9, 10, 15)
(14, 16, 17)
The generator function find_all_sequences() yields tuples with the indexes of the sequence matches. The outer loop runs until a list.index() call raises ValueError. The inner generator function find_sequence() yields the index of each remaining sequence item, skipping indexes that were already used by previous matches.
According to this benchmark, my method is about 60% faster than the one from Andrej Kesely's answer.
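If only the number of matches is needed rather than the index tuples, the generator can simply be exhausted; a minimal usage sketch for find_all_sequences() as defined above:

lookup_list = [1, 2, 3]
my_list = [1, 2, 3, 4, 5, 2, 1, 2, 2, 1, 2, 3, 4, 5, 1, 3, 2, 3, 1]

# Count how many index tuples the generator yields.
print(sum(1 for _ in find_all_sequences(my_list, lookup_list)))  # 4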
The function find_matches() returns the indices at which the matches of lookup_list occur:
def find_matches(lookup_list, lst):
    buckets = []  # partially matched sequences, stored as lists of indices

    def _find_bucket(i, v):
        # Try to extend an existing partial match with value v at index i.
        for b in buckets:
            if lst[b[-1]] == lookup_list[len(b) - 1] and v == lookup_list[len(b)]:
                b.append(i)
                if len(b) == len(lookup_list):
                    buckets.remove(b)
                    return b
                break
        else:
            # v didn't extend any bucket; it may start a new one.
            if v == lookup_list[0]:
                buckets.append([i])

    rv = []
    for i, v in enumerate(lst):
        b = _find_bucket(i, v)
        if b:
            rv.append(b)
    return rv
lookup_list = [1, 2, 3]
my_list = [1, 2, 3, 4, 5, 2, 1, 2, 2, 1, 2, 3, 4, 5, 1, 3, 2, 3, 1]
print(find_matches(lookup_list, my_list))
Prints:
[[0, 1, 2], [6, 7, 11], [9, 10, 15], [14, 16, 17]]
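Since find_matches() returns a list of index lists, the count asked for in the question is simply its length:

print(len(find_matches(lookup_list, my_list)))  # 4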
Here is a recursive solution:
lookup_list = [1, 2, 3]
my_list = [1, 2, 3, 4, 5, 2, 1, 2, 2, 1, 2, 3, 4, 5, 1, 3, 2, 3, 1]

def find(my_list, continue_from_index):
    if continue_from_index > (len(my_list) - 1):
        return 0

    last_found_index = 0
    found_indices = []
    first_occurring_index = 0
    found = False

    for l in lookup_list:
        for m_index in range(continue_from_index, len(my_list)):
            # Compare values with == ("is" only works by accident for small ints)
            if my_list[m_index] == l and m_index >= last_found_index:
                if not found:
                    found = True
                    first_occurring_index = m_index
                last_found_index = m_index
                found_indices.append(m_index)
                break

    if len(found_indices) == len(lookup_list):
        return find(my_list, first_occurring_index + 1) + 1
    return 0

print(find(my_list, 0))
my_list = [5, 6, 3, 8, 2, 1, 7, 1]
lookup_list = [8, 2, 7]

counter = 0
result = False
for i in my_list:
    if i in lookup_list:
        counter += 1
    if counter == len(lookup_list):
        result = True
print(result)
I have a list of caller_address elements. For each of these addresses I can get a caller_function, a function containing that caller_address. In a single function there may be more than 1 address.
So if I have a list of caller_address elements:
caller_addresses = [1, 2, 3, 4, 5, 6, 7, 8]
For each of them I can get a function:
caller_functions = [getFunctionContaining(addr) for addr in caller_addresses]
print(caller_functions)
# prints(example): ['func1', 'func1', 'func2', 'func2', 'func2', 'func2', 'func3', 'func3']
As a result, I need to get a dict where the keys are the functions and the values are lists of the addresses those functions contain. In my example it must be:
{'func1': [1, 2], 'func2': [3, 4, 5, 6], 'func3': [7, 8]}
# Means 'func1' contains addresses 1 and 2, 'func2' contains 3, 4, 5 and 6, ...
It would be great if there was a function like:
result = to_dict(lambda addr: getFunctionContaining(addr), caller_addresses)
to get the same result.
Here the first argument is the function used to compute the keys and the second argument is the list of values. Is there such a function in the Python standard library?
I could implement it with a for loop and dict[getFunctionContaining(addr)].append(addr), but I'm looking for a more Pythonic way to do this.
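For context, the loop-based version I have in mind looks roughly like this (just a sketch; getFunctionContaining is assumed to be available in my environment):

from collections import defaultdict

def to_dict(key, values):
    # Group each value under key(value).
    grouped = defaultdict(list)
    for v in values:
        grouped[key(v)].append(v)
    return dict(grouped)

# result = to_dict(getFunctionContaining, caller_addresses)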
Thanks!
Found a solution using itertools.groupby.
This solution is also faster than a solution using a loop.
import itertools
import time

def f(v):
    if v < 5:
        return 1
    if v < 7:
        return 2
    return 3

def to_dict(key, list_):
    out = {}
    for el in list_:
        out.setdefault(key(el), []).append(el)
    return out

def to_dict2(key, list_):
    return {k: list(v) for k, v in itertools.groupby(list_, key)}

lst = [1, 2, 3, 4, 5, 6, 7, 8] * 10**4
COUNT = 1000

def timeit(to_dict_f):
    elapsed_sum = 0
    for _ in range(COUNT):
        elapsed_sum -= time.time()
        to_dict_f(f, lst)
        elapsed_sum += time.time()
    return elapsed_sum / COUNT

print('Average time: ', timeit(to_dict), timeit(to_dict2))
Results:
Average time: 0.014930561065673828 0.01346096110343933
On average, to_dict2 (itertools.groupby) takes less time than to_dict (the plain loop).
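One caveat worth noting: itertools.groupby only groups consecutive runs of equal keys, so to_dict2 gives the same result as to_dict only when the input is already sorted or grouped by the key (as the caller_functions in the question are). A small sketch of the difference on input that is not grouped, reusing f, to_dict and to_dict2 from above:

data = [1, 6, 2, 7, 3]      # keys under f: 1, 2, 1, 3, 1 -- not grouped

print(to_dict(f, data))     # {1: [1, 2, 3], 2: [6], 3: [7]}
print(to_dict2(f, data))    # {1: [3], 2: [6], 3: [7]} -- later runs of key 1 overwrite earlier ones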
I want to group consecutive values if they are duplicated, and each value should belong to just one group; see my examples below:
Note: the results are the indices of the values in test_list
test_list = ["1","2","1","2","1","1","5325235","2","62623","1","1"]
--->results = [[[0, 1], [2, 3]],
[[4, 5], [9, 10]]]
test_list = ["1","2","1","1","2","1","5325235","2","62623","1","2","1","236","2388","626236437","1","2","1","236","2388"]
--->results = [[[9, 10, 11, 12, 13], [15, 16, 17, 18, 19]],
[[0, 1, 2], [3, 4, 5]]]
I built a recursive function:
from collections import Counter

def group_duplicate_continuous_value(list_label_group):
    # To detect which consecutive values are duplicated, subtract each value from the next one
    list_flag_grouping = [str(int(j.split("_")[0]) - int(i.split("_")[0])) + f"_{j}_{i}"
                          for i, j in zip(list_label_group, list_label_group[1:])]
    # Find the duplicated values in list_flag_grouping
    counter_elements = Counter(list_flag_grouping)
    list_have_duplicate = [k for k, v in counter_elements.items() if v > 1]
    if len(list_have_duplicate) > 0:
        list_final_index = group_duplicate_continuous_value(list_flag_grouping)
        # Use the indices to recover the exact values
        for k, v in list_final_index.items():
            temp_list = [v[i] + [v[i][-1] + 1] for i in range(0, len(v))]
            list_final_index[k] = temp_list
        check_first_cursive = list_label_group[0].split("_")
        # If there are several groups of duplicated consecutive values with different lengths,
        # the block below is needed to return the exact results
        if len(check_first_cursive) > 1:
            list_temp_index = find_index_duplicate(list_label_group)
            list_duplicate_index = list_final_index.values()
            list_duplicate_index = [val for sublist in list_duplicate_index for val1 in sublist for val in val1]
            for k, v in list_temp_index.items():
                list_index_v = [val for sublist in v for val in sublist]
                if not any(x in list_index_v for x in list_duplicate_index):
                    list_final_index[k] = v
        return list_final_index
    else:
        if len(list_label_group) > 0:
            check_first_cursive = list_label_group[0].split("_")
            if len(check_first_cursive) > 1:
                list_final_index = find_index_duplicate(list_label_group)
                return list_final_index
        list_final_index = None
        return list_final_index
Support function:
from collections import defaultdict

def find_index_duplicate(list_data):
    dups = defaultdict(list)
    for i, e in enumerate(list_data):
        dups[e].append([i])
    new_dict = {key: val for key, val in dups.items() if len(val) > 1}
    return new_dict
But when I run it with test_list = [5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,1,2,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,1,2,5,5,5], it is very slow and runs out of memory (~6 GB). I know the cause is the deep recursion in group_duplicate_continuous_value, but I don't know how to fix it.
You can create a dict of lists, where every item from the original list is a key in the dict, and every key is mapped to the list of its indices in the original list. For instance, your list ["1","3","5","5","7","1","3","5"] would result in the dict {"1": [0, 5], "3": [1, 6], "5": [2, 3, 7], "7": [4]}.
Creating a dict of lists this way is very idiomatic in Python, and fast too: it can be done by iterating over the list just once.
def build_dict(l):
    d = {}
    for i, x in enumerate(l):
        d.setdefault(x, []).append(i)
    return d
l = ["1","3","5","5","7","1","3","5"]
d = build_dict(l)
print(d)
# {'1': [0, 5], '3': [1, 6], '5': [2, 3, 7], '7': [4]}
Then you can iterate on the dict to build two lists of indices:
def build_index_results(l):
    d = build_dict(l)
    idx1, idx2 = [], []
    for v in d.values():
        if len(v) > 1:
            idx1.append(v[0])
            idx2.append(v[1])
    return idx1, idx2
print(build_index_results(l))
# ([0, 1, 2], [5, 6, 3])
Or using zip:
from operator import itemgetter
def build_index_results(l):
    d = build_dict(l)
    return list(zip(*map(itemgetter(0, 1), (v for v in d.values() if len(v) > 1))))
print(build_index_results(l))
# [(0, 1, 2), (5, 6, 3)]
I can't resist showcasing more_itertools.map_reduce for this:
from more_itertools import map_reduce
from operator import itemgetter
def build_index_results(l):
    d = map_reduce(enumerate(l),
                   keyfunc=itemgetter(1),
                   valuefunc=itemgetter(0),
                   reducefunc=lambda v: v[:2] if len(v) > 1 else None)
    return list(zip(*filter(None, d.values())))
print(build_index_results(l))
# [(0, 1, 2), (5, 6, 3)]
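Note that more_itertools is a third-party package; install it with pip install more-itertools.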
I have tried this code:
dictt = {'a': {'d': [4, 5, 6]}, 'b': 2, 'c': 3, 'd': {'e': [7, 8, 9]}}

def dic(a):
    lst = []
    for i in a.values():
        if type(i) is dict:
            lst.append(dic(i))
            #lst.append(i)
        else:
            lst.append(i)
    return lst
Output:
[[[4, 5, 6]], 2, 3, [[7, 8, 9]]]
Expected output:
[4,5,6,2,3,7,8,9]
You need to use list.extend to add the multiple values to the list (rather than appending the list itself), and also handle the list type:
def dic(a):
    lst = []
    for i in a.values():
        if isinstance(i, dict):
            lst.extend(dic(i))
        elif isinstance(i, list):
            lst.extend(i)
        else:
            lst.append(i)
    return lst
dictt = {'a': {'d': [4, 5, 6]}, 'b': 2, 'c': 3, 'd': {'e': [7, 8, 9]}}
x = dic(dictt)
print(x) # [4, 5, 6, 2, 3, 7, 8, 9]
I think this is the fastest way
from collections.abc import Iterable

def flatten(items):
    """Yield items from any nested iterable; see Reference."""
    for x in items:
        if isinstance(x, Iterable) and not isinstance(x, (str, bytes)):
            yield from flatten(x)
        else:
            yield x

f = lambda v: [f(x) for x in v.values()] if isinstance(v, dict) else v

list(flatten(f(dictt)))
# [4, 5, 6, 2, 3, 7, 8, 9]
flatten function from: How to make a flat list out of a list of lists
Try this:
dictis = {
    'a': {
        'd': [4, 5, 6]
    },
    'b': 2,
    'c': 3,
    'd': {
        'e': [7, 8, 9]
    }
}

def value(obj):
    for value in obj.values():
        if isinstance(value, dict):
            for value_list in value.values():
                for value in value_list:
                    yield value
        else:
            yield value

listis = [x for x in value(dictis)]
print(listis)
I have a list that looks something like this:
lst_A = [32,12,32,55,12,90,32,75]
I want to replace the numbers with their rank. I am using this function to do this:
def obtain_rank(lstC):
    sort_data = [(x, i) for i, x in enumerate(lstC)]
    sort_data = sorted(sort_data, reverse=True)
    result = [0] * len(lstC)
    for i, (_, idx) in enumerate(sort_data, 1):
        result[idx] = i
    return result
I am getting the following output while I use this:
[6, 8, 5, 3, 7, 1, 4, 2]
But what I want from this is:
[4, 7, 5, 3, 8, 1, 6, 2]
How can I go about this?
Try this:
import pandas as pd

def obtain_rank(a):
    s = pd.Series(a)
    return [int(x) for x in s.rank(method='first', ascending=False)]

# obtain_rank(lst_A) -> [4, 7, 5, 3, 8, 1, 6, 2]
You could use 2 loops:
l = [32, 12, 32, 55, 12, 90, 32, 75]
d = list(enumerate(sorted(l, reverse=True), start=1))

res = []
for i in range(len(l)):
    for j in range(len(d)):
        if d[j][1] == l[i]:
            res.append(d[j][0])
            del d[j]
            break

print(res)
# [4, 7, 5, 3, 8, 1, 6, 2]
Here you go. In case you are not already aware, please read https://docs.python.org/3.7/library/collections.html to understand defaultdict and deque.
from collections import defaultdict, deque

def obtain_rank(listC):
    sorted_list = sorted(listC, reverse=True)
    d = defaultdict(deque)  # deques are efficient at appending/popping elements at both ends
    for i, ele in enumerate(sorted_list):
        d[ele].append(i + 1)
    result = []
    for ele in listC:
        result.append(d[ele].popleft())  # repeated numbers keep their lowest remaining rank at the front, hence popleft
    return result
Update: Without using defaultdict and deque
def obtain_rank(listC):
    sorted_list = sorted(listC, reverse=True)
    d = {}
    for i, ele in enumerate(sorted_list):
        d[ele] = d.get(ele, []) + [i + 1]  # as suggested by Joshua Nixon
    result = []
    for ele in listC:
        result.append(d[ele][0])
        del d[ele][0]
    return result
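For completeness, either version can be checked against the list from the question (a small usage sketch, assuming lst_A as defined there):

lst_A = [32, 12, 32, 55, 12, 90, 32, 75]
print(obtain_rank(lst_A))  # [4, 7, 5, 3, 8, 1, 6, 2]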