I have a group of dictionaries of 2 patterns like {"Id": 1, "title":"example"} and {"Id": 1, "location":"city"}. I want to combine these 2 together to get {"Id": 1, "title":"example", "location":"city"}, for all the dictionaries with Ids that match. In this case the group is of 200 items of 100 titles and 100 locations all with Ids from 0-99. I want to return a list of 100 combined dictionaries.
Maybe something like the following:
def ResultHandler(extractedResult: list):
    jsonObj = {}
    jsonList = []
    for result in extractedResult:
        for key, val in result.items():
            # this works if it's hardcoded val to a number...
            if key == "Id" and val == 1:
                jsonObj.update(result)
                jsonList.append(jsonObj)
    return jsonList
Group the dicts by Id, then merge each group.
from collections import defaultdict
def merge_dicts(dicts):
    grouped = defaultdict(list)
    for d in dicts:
        grouped[d['Id']].append(d)  # the question's dicts use the key "Id"
    merged = []
    for ds in grouped.values():
        m = {}
        for d in ds:
            m |= d  # on Python < 3.9: m = {**m, **d}
        merged.append(m)
    return merged
A more functional (but slightly less efficient) approach:
from itertools import groupby
from functools import reduce
from operator import itemgetter
new_data = []
for _, g in groupby(sorted(data, key=itemgetter("Id")), key=itemgetter("Id")):
    new_data.append(reduce(lambda d1, d2: {**d1, **d2}, g))
This function uses a nested loop. The outer loop iterates over the input list of dictionaries; for each one, it checks whether its id already appears among the results collected so far. If it does not, the dictionary is appended to the result list. If it does, the existing result entry is updated with the contents of the current dictionary.
lst = [
    {"id": 1, "fname": "John"},
    {"id": 2, "name": "Bob"},
    {"id": 1, "lname": "Mary"},
]
def combine_dicts(lst):
    res = []
    for d in lst:
        if d.get("id") not in [x.get("id") for x in res]:
            res.append(d)
        else:
            for r in res:
                if r.get("id") == d.get("id"):
                    r.update(d)
    return res
print(combine_dicts(lst))
# output: [{'id': 1, 'fname': 'John', 'lname': 'Mary'}, {'id': 2, 'name': 'Bob'}]
The following code should work:
def resultHandler(extractedResult):
    jsonList = []
    for i in range(len(extractedResult) // 2):
        jsonList.append({"Id": i})
    for i in range(len(extractedResult)):
        for j in range(len(jsonList)):
            if jsonList[j]["Id"] == extractedResult[i]["Id"]:
                if "title" in extractedResult[i]:
                    jsonList[j]["title"] = extractedResult[i]["title"]
                else:
                    jsonList[j]["location"] = extractedResult[i]["location"]
    return jsonList
extractedResult = [{"Id": 0, "title":"example1"}, {"Id": 1, "title":"example2"}, {"Id": 0, "location":"example3"}, {"Id": 1, "location":"example4"}]
jsonList = resultHandler(extractedResult)
print(jsonList)
Output:
[{'Id': 0, 'title': 'example1', 'location': 'example3'}, {'Id': 1, 'title': 'example2', 'location': 'example4'}]
This code works by first filling up jsonList with Id values from 0 to half the length of extractedResult (i.e., the number of distinct IDs).
Then, for every dictionary in extractedResult, we find the dictionary in jsonList with the matching Id. If that extractedResult dictionary contains the key "title", we copy its value into the matching jsonList dictionary; the same applies to "location".
I hope this helps answer your question! Please let me know if you need any further clarification or details :)
This code will solve your problem in linear time, i.e., O(n), where n is the length of your list. It only considers Ids that have both a title and a location, and ignores the rest.
from collections import Counter
data = [{"Id": 1, "title":"example1"},
{"Id": 2, "title":"example2"},
{"Id": 3, "title":"example3"},
{"Id": 4, "title":"example4"},
{"Id": 1, "location":"city1"},
{"Id": 2, "location":"city2"},
{"Id": 4, "location":"city4"},
{"Id": 5, "location":"city5"}]
paired_ids = set(key for key, val in Counter(item["Id"] for item in data).items() if val == 2)  # O(n)
def combine_dict(data):
    result = {key: [] for key in paired_ids}  # O(m), m: number of paired ids (m <= n/2)
    for item in data:  # O(n)
        items = list(item.items())
        # relies on each dict having exactly two keys, with "Id" first
        id_, tl, val = items[0][1], items[1][0], items[1][1]
        if id_ in paired_ids:  # O(1), as paired_ids is a set, lookup takes O(1)
            result[id_].append({tl: val})
    return [{"Id": id_, "title": lst[0]["title"], "location": lst[1]["location"]}
            for id_, lst in result.items()]  # O(n)
print(*combine_dict(data), sep="\n")
Output:
{'Id': 1, 'title': 'example1', 'location': 'city1'}
{'Id': 2, 'title': 'example2', 'location': 'city2'}
{'Id': 4, 'title': 'example4', 'location': 'city4'}
I have a dictionary "A":
A = {
"Industry1": 1,
"Industry2": 1,
"Industry3": 1,
"Customer1": 1,
"Customer2": 1,
"LocalShop1": 1,
"LocalShop2": 1,
}
I want to group by key names and create new dictionaries for each "category", the names should be generated automatically.
Expected Output:
Industry = {
"Industry1": 1,
"Industry2": 1,
"Industry3": 1,
}
Customer = {
"Customer1": 1,
"Customer2": 1,
}
LocalShop = {
"LocalShop1": 1,
"LocalShop2": 1,
}
Can you guys give me a hint to achieve this output, please?
Assuming your keys are in (KEYNAME)(NUM) form, you can do the following:
import re
from collections import defaultdict
from pprint import pprint
A = {
"Industry1": 1,
"Industry2": 1,
"Industry3": 1,
"Customer1": 1,
"Customer2": 1,
"LocalShop1": 1,
"LocalShop2": 1,
}
key_pattern = re.compile(r"[a-zA-Z]+")
result = defaultdict(dict)
for k, v in A.items():
    key = key_pattern.search(k).group()
    result[key][k] = v
pprint(dict(result))
output:
{'Customer': {'Customer1': 1, 'Customer2': 1},
'Industry': {'Industry1': 1, 'Industry2': 1, 'Industry3': 1},
'LocalShop': {'LocalShop1': 1, 'LocalShop2': 1}}
I created a dictionary of dictionaries instead of having individual variables for each dictionary. It's easier to manage and it doesn't pollute the global namespace.
Basically, you iterate through the key/value pairs and, with the r"[a-zA-Z]+" pattern, you grab the part of each key without the number. That is what's used as the key in the outer dictionary.
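You can check the pattern's behavior on a single key in isolation (a quick sanity check, not part of the answer's code):

```python
import re

key_pattern = re.compile(r"[a-zA-Z]+")
# search() matches the leading run of letters; the trailing digits are left out.
print(key_pattern.search("LocalShop2").group())  # LocalShop
print(key_pattern.search("Industry3").group())   # Industry
```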
For the purposes of this answer I've assumed that every key in the dictionary A has either the word "Industry", "Customer", or "Shop" in it. This allows us to detect what category each entry needs to be in by checking if each key contains a certain substring (i.e. "Industry"). If this assumption doesn't hold for your specific circumstances, you'll have to find a different way to write the if/elif statements in the solutions below that fits your situation better.
Here's one way to do it. You make a new dictionary for each category and check if "Industry", "Customer", or "Shop" is in each key.
industries = {}
customers = {}
shops = {}
for key, value in A.items():
    if "Industry" in key:
        industries[key] = value
    elif "Customer" in key:
        customers[key] = value
    elif "Shop" in key:
        shops[key] = value
Another, cleaner approach is a nested dictionary that stores all of your categories, with each category having its own dictionary inside the main one. This helps if you need to add more categories in the future: you only have to add them in one place (the dictionary definition) and the code automatically adjusts.
categories = {
"Industry": {},
"Customer": {},
"Shop": {},
}
for key, value in A.items():
    for category_name, category_dict in categories.items():
        if category_name in key:
            category_dict[key] = value
If you can't detect the category from the string of an entry, then you may have to store that categorical information in the key or the value of each entry in A, so that you can detect the category when trying to filter everything.
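One possible shape for that, sketched with hypothetical (category, value) tuples as the entry values (the key and category names here are made up for illustration):

```python
# Hypothetical input where each entry stores its category explicitly.
A2 = {
    "entry1": ("Industry", 1),
    "entry2": ("Industry", 1),
    "entry3": ("Customer", 1),
}

categories = {}
for key, (category, value) in A2.items():
    # setdefault creates the inner dict the first time a category is seen.
    categories.setdefault(category, {})[key] = value

print(categories)
# {'Industry': {'entry1': 1, 'entry2': 1}, 'Customer': {'entry3': 1}}
```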
You can use itertools.groupby with a key that only extracts the word without the number. I wouldn't recommend making variables out of them; that doesn't scale if there are more than 3 keys. Just put them in a new dictionary or in a list.
A = {'Industry1': 1,
'Industry2': 1,
'Industry3': 1,
'Customer1': 1,
'Customer2': 1,
'LocalShop1': 1,
'LocalShop2': 1}
import itertools
import re

grouped = [dict(val) for k, val in itertools.groupby(A.items(), lambda x: re.match(r'(.+)\d+', x[0]).group(1))]
Output grouped:
[
{'Industry1': 1, 'Industry2': 1, 'Industry3': 1},
{'Customer1': 1, 'Customer2': 1},
{'LocalShop1': 1, 'LocalShop2': 1}
]
If you are sure, that there are exactly 3 elements in that list and you really want them as variables, you can do it with tuple unpacking:
Industry, Customer, LocalShop = [dict(val) for k, val in itertools.groupby(A.items(), lambda x: re.match(r'(.+)\d+', x[0]).group(1))]
I think I would save the results in a new dictionary with the grouped key as new key and the list as value:
grouped_dict = {k: dict(val) for k, val in itertools.groupby(A.items(), lambda x: re.match(r'(.+)\d+', x[0]).group(1))}
Output grouped_dict:
{'Industry': {'Industry1': 1, 'Industry2': 1, 'Industry3': 1},
'Customer': {'Customer1': 1, 'Customer2': 1},
'LocalShop': {'LocalShop1': 1, 'LocalShop2': 1}}
Assuming a list of dictionaries
l=[
{"id":1, "score":80, "remarks":"B" },
{"id":2, "score":80, "remarks":"A" },
{"id":1, "score":80, "remarks":"C" },
{"id":3, "score":80, "remarks":"B" },
{"id":1, "score":80, "remarks":"F" },
]
I would like to find the indexes of the first unique value given a key. So, given the list above, I am expecting a result of
using_id = [0,1,3]
using_score = [0]
using_remarks = [0,1,2,4]
What makes it hard for me is that the list holds dictionaries; if it were numbers I could just use this line
indexes = [l.index(x) for x in sorted(set(l))]
using set() on a list of dictionary throws an error TypeError: unhashable type: 'dict'.
The constraints are: must only use modules that ship with Python 3.10, and the code should scale, as the length of the list will reach into the hundreds; as few lines as possible is a bonus too :)
Of course there is the brute force method, but can this be made more efficient, or use fewer lines of code?
unique_items = []
unique_index = []
for index, item in enumerate(l):
    if item["remarks"] not in unique_items:
        unique_items.append(item["remarks"])
        unique_index.append(index)
print(unique_items)
print(unique_index)
You could refigure your data into a dict of lists, then you can use code similar to what you would use for a list of values:
dd = { k : [d[k] for d in l] for k in l[0] }
indexes = { k : sorted(dd[k].index(x) for x in set(dd[k])) for k in dd }
Output:
{'id': [0, 1, 3], 'score': [0], 'remarks': [0, 1, 2, 4]}
Since dict keys are inherently unique, you can use a dict to keep track of the first index of each unique value by storing the values as keys of a dict and setting the index as the default value of a key with dict.setdefault:
for key in 'id', 'score', 'remarks':
    unique = {}
    for i, d in enumerate(l):
        unique.setdefault(d[key], i)
    print(key, list(unique.values()))
This outputs:
id [0, 1, 3]
score [0]
remarks [0, 1, 2, 4]
Demo: https://replit.com/#blhsing/PalatableAnxiousMetric
With functools.reduce:
l=[
{"id":1, "score":80, "remarks":"B" },
{"id":2, "score":80, "remarks":"A" },
{"id":1, "score":80, "remarks":"C" },
{"id":3, "score":80, "remarks":"B" },
{"id":1, "score":80, "remarks":"F" },
]
from functools import reduce
result = {}
reduce(lambda x, y: result.update({y[1]['remarks']:y[0]}) \
if y[1]['remarks'] not in result else None, \
enumerate(l), result)
result
# {'B': 0, 'A': 1, 'C': 2, 'F': 4}
unique_items = list(result.keys())
unique_items
# ['B', 'A', 'C', 'F']
unique_index = list(result.values())
unique_index
# [0, 1, 2, 4]
Explanation: at each step, the lambda function records in result the index (in l) of the first occurrence of each remarks value; later occurrences of the same value are skipped.
The dictionary structure for the result makes sense since you're extracting unique values and they can therefore be seen as keys.
I've tried other solutions but still have no luck. My problem is that I have a list of dictionaries in which I have to check whether there are any duplicate values for a key (the name of the person):
Sample list:
[{"id": 1,"name": "jack","count": 7},{"id": 12,"name": "jack","count": 5}]
If there are duplicate names, It should add the value in the key count, and the result should be:
[{"id": 1,"name": "jack","count": 12}]
Edited: IDs don't matter; I just need at least one id to appear.
A detailed solution could be that:
new = {}
for d in data:
    name = d["name"]
    if name in new:
        new[name]["count"] += d["count"]
    else:
        new[name] = dict(d)
result = list(new.values())
NB: this could be simplified with a comprehension and the get method, but I think this version is the most readable.
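For reference, here is one way that get-based simplification could look (my reading of the suggestion, not the answerer's exact code):

```python
data = [{"id": 1, "name": "jack", "count": 7},
        {"id": 12, "name": "jack", "count": 5}]

new = {}
for d in data:
    # Reuse the stored dict if the name was seen before; otherwise start
    # from a copy of d with count zeroed, so the += below adds it back in.
    merged = new.get(d["name"], {**d, "count": 0})
    merged["count"] += d["count"]
    new[d["name"]] = merged

result = list(new.values())
print(result)  # [{'id': 1, 'name': 'jack', 'count': 12}]
```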
As the id field is not so important, I will create a dict with the name as key and a list of the matching items as value
from collections import defaultdict
a = [{"id": 1,"name": "jack","count": 7},{"id": 12,"name": "jack","count": 5}]
d = defaultdict(list)
Iterate over the list and map key and values
for i in a:
    d[i['name']].append(i)  # key is 'name'
At this point d will look like this
{'jack': [{'count': 7, 'id': 1, 'name': 'jack'},{'count': 5, 'id': 12,
'name': 'jack'}]}
Now, if the length of a list is > 1, we have to iterate over that list, sum the counts, and update the entry
for k, v in d.items():
    if len(v) > 1:
        temp = v[0]
        for t in v[1:]:
            temp['count'] = temp['count'] + t['count']  # update the count
        d[k] = temp
print(list(d.values()))  # [{'count': 12, 'id': 1, 'name': 'jack'}]
In order to handle the case when count is missing like
[{"id": 1,"name": "jack"},{"id": 12,"name": "jack","count": 5}]
replace above count update logic with
temp['count']=temp.get('count',0)+t.get('count',0)
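Putting it together, the full run with the get-based update then tolerates a record that has no count key (a sketch of the combined logic):

```python
from collections import defaultdict

a = [{"id": 1, "name": "jack"}, {"id": 12, "name": "jack", "count": 5}]

d = defaultdict(list)
for i in a:
    d[i["name"]].append(i)

for k, v in d.items():
    temp = v[0]
    for t in v[1:]:
        # get(..., 0) supplies 0 when a record has no "count" key
        temp["count"] = temp.get("count", 0) + t.get("count", 0)
    d[k] = temp

print(list(d.values()))  # [{'id': 1, 'name': 'jack', 'count': 5}]
```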
I have a list of strings, from which I have to construct a dict. So, for example, I have:
foo.bar:10
foo.hello.world:30
xyz.abc:40
pqr:100
This is represented as a dict:
{
"foo": {
"bar": 10,
"hello": {
"world": 30
}
},
"xyz": {
"abc": 40
},
"pqr": 100
}
This question is based on the same premise, but the answers discuss hardcoded depths such as:
mydict = ...
mydict['foo']['bar'] = 30
Since the dot-separated strings on the left may be of any depth, I can't figure out a way to build the dict. How should I parse the dot-separated string and build the dict?
Building upon the solution in the links, you could
iterate over each line
for each line, extract a list of keys, and its value
recurse into a dictionary with each key using setdefault
assign the value at the bottom
lines = \
'''
foo.bar:10
foo.hello.world:30
xyz.abc:40
pqr:100
'''.strip().splitlines()  # strip() drops the leading/trailing blank lines
d = {}
for l in lines:
    k, v = l.split(':')
    *f, l = k.split('.')
    t = d
    for k in f:
        t = t.setdefault(k, {})
    t[l] = int(v)  # don't perform a conversion if your values aren't numeric
print(d)
{
"pqr": 100,
"foo": {
"bar": 10,
"hello": {
"world": 30
}
},
"xyz": {
"abc": 40
}
}
Recursive setdefault traversal learned from here.
Breaking down each step -
Split on :, extract the key-list string and the value
k, v = l.split(':')
Split the key-string on . to get a list of keys. I take the opportunity to partition the keys as well, so I have a separate reference to the last key that will be the key to v.
*f, l = k.split('.')
*f is the catch-all assignment, and f is a list of any number of values (possibly 0 values, if there's only one key in the key-string!)
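That unpacking step can be checked on its own:

```python
# Starred assignment: f collects everything before the last element.
*f, last = "foo.hello.world".split(".")
print(f, last)   # ['foo', 'hello'] world

# With a single key there is nothing before the last element, so f is empty.
*f, last = "pqr".split(".")
print(f, last)   # [] pqr
```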
For each key k in the key list f, recurse down into the "tree" using setdefault. This is similar to recursively traversing a linked list.
for k in f:
    t = t.setdefault(k, {})
At the end, the last key value pair comes from l and v.
t[l] = v
What's wrong with incrementally building it?
mydict = {}
mydict["foo"] = {}
mydict["foo"]["bar"] = 10
mydict["foo"]["hello"] = {}
mydict["foo"]["hello"]["world"] = 30
mydict["xyz"] = {}
mydict["xyz"]["abc"] = 40
mydict["pqr"] = 100
# ...
pprint.pprint(mydict)  # {'foo': {'bar': 10, 'hello': {'world': 30}}, 'pqr': 100, 'xyz': {'abc': 40}}
Including the parsing, you could use something like this:
import pprint
inp = """foo.bar:10
foo.hello.world:30
xyz.abc:40
pqr:100
"""
mydict = {}
for line in inp.splitlines():
    s, v = line.split(':')
    parts = s.split(".")
    d = mydict
    for i in parts[:-1]:
        if i not in d:
            d[i] = {}
        d = d[i]
    d[parts[-1]] = v
pprint.pprint(mydict)  # {'foo': {'bar': '10', 'hello': {'world': '30'}}, 'pqr': '100', 'xyz': {'abc': '40'}}
One key point to consider in your case is that a parent key's value must become either a nested dictionary or a leaf value.
x = """
foo.bar:10
foo.hello.world:30
xyz.abc:40
pqr.a:100
"""
tree = {}
for item in x.split():
    t = tree
    for part in item.split('.'):
        keyval = part.split(":")
        if len(keyval) > 1:
            # leaf: the text after the colon is the value
            t = t.setdefault(keyval[0], keyval[1])
        else:
            t = t.setdefault(part, {})
import pprint
pprint.pprint(tree)
Result:
{'foo': {'bar': '10', 'hello': {'world': '30'}},
'pqr': {'a': '100'},
'xyz': {'abc': '40'}}
It would have been simpler if my nested objects were dictionaries, but these are lists of dictionaries.
Example:
all_objs1 = [{
'a': 1,
'b': [{'ba': 2, 'bb': 3}, {'ba': 21, 'bb': 31}],
'c': 4
}, {
'a': 11,
'b': [{'ba': 22, 'bb': 33, 'bc': [{'h': 1, 'e': 2}]}],
'c': 44
}]
I expect output in following format:
[
{'a': 1, 'b.ba': 2, 'b.bb': 3, 'c': 4},
{'a': 1, 'b.ba': 21, 'b.bb': 31, 'c': 4},
{'a': 11, 'b.ba': 22, 'b.bb': 33, 'bc.h': 1, 'bc.e': 2, 'c': 44},
]
Basically, number of flattened objects generated will be equal to (obj * depth)
With my current code:
def flatten(obj, flattened_obj, last_key=''):
    for k, v in obj.items():  # iteritems() on Python 2
        if not isinstance(v, list):
            flattened_obj.update({last_key + k: v})
        else:
            last_key += k + '.'
            for nest_obj in v:
                flatten(nest_obj, flattened_obj, last_key)
            last_key = remove_last_key(last_key)

def remove_last_key(key_path):
    second_dot = key_path[:-1].rfind('.')
    if second_dot > 0:
        return key_path[:second_dot+1]
    return key_path
Output:
[
{'a': 1, 'b.bb': 31, 'c': 4, 'b.ba': 21},
{'a': 11, 'b.bc.e': 2, 'c': 44, 'b.bc.h': 1, 'b.bb': 33, 'b.ba': 22}
]
I am able to flatten the object (not accurate though), but I am not able to create a new object at each nested object.
I can not use pandas library as my app is deployed on app engine.
code.py:
from itertools import product
from pprint import pprint as pp
all_objs = [{
"a": 1,
"b": [{"ba": 2, "bb": 3}, {"ba": 21, "bb": 31}],
"c": 4,
#"d": [{"da": 2}, {"da": 5}],
}, {
"a": 11,
"b": [{"ba": 22, "bb": 33, "bc": [{"h": 1, "e": 2}]}],
"c": 44,
}]
def flatten_dict(obj, parent_key=None):
    base_dict = dict()
    complex_items = list()
    very_complex_items = list()
    for key, val in obj.items():
        new_key = ".".join((parent_key, key)) if parent_key is not None else key
        if isinstance(val, list):
            if len(val) > 1:
                very_complex_items.append((key, val))
            else:
                complex_items.append((key, val))
        else:
            base_dict[new_key] = val
    if not complex_items and not very_complex_items:
        return [base_dict]
    base_dicts = list()
    partial_dicts = list()
    for key, val in complex_items:
        partial_dicts.append(flatten_dict(val[0], parent_key=new_key))
    for product_tuple in product(*tuple(partial_dicts)):
        new_base_dict = base_dict.copy()
        for new_dict in product_tuple:
            new_base_dict.update(new_dict)
        base_dicts.append(new_base_dict)
    if not very_complex_items:
        return base_dicts
    ret = list()
    very_complex_keys = [item[0] for item in very_complex_items]
    very_complex_vals = tuple([item[1] for item in very_complex_items])
    for product_tuple in product(*very_complex_vals):
        for base_dict in base_dicts:
            new_dict = base_dict.copy()
            new_items = zip(very_complex_keys, product_tuple)
            for key, val in new_items:
                new_key = ".".join((parent_key, key)) if parent_key is not None else key
                new_dict.update(flatten_dict(val, parent_key=new_key)[0])
            ret.append(new_dict)
    return ret

def main():
    flatten = list()
    for obj in all_objs:
        flatten.extend(flatten_dict(obj))
    pp(flatten)

if __name__ == "__main__":
    main()
Notes:
As expected, recursion is used
It's general; it also works for the case that I mentioned in my 2nd comment (one input dict having more than one key whose value is a list with more than one element), which can be tested by uncommenting the "d" key in all_objs. Also, theoretically it should support any depth
flatten_dict: takes an input dictionary and outputs a list of dictionaries (as the input dictionary might yield more than one output dictionary):
Every key having a "simple" (not list) value goes into the output dictionary (or dictionaries) unchanged
At this point, a base output dictionary is complete (if the input dictionary generates more than one output dictionary, all will share the base dictionary's keys/values; if it only generates one output dictionary, that one is the base)
Next, the keys with "problematic" values (those that may generate more than one output dictionary), if any, are processed:
Keys having a list with a single element ("problematic"): each might generate more than one output dictionary:
Each of the values will be flattened (which might yield more than one output dictionary); the corresponding key is used in the process
Then, the cartesian product is computed over all the flattened dictionary lists (for the current input, there will only be one list with one element)
Now, each product item needs to be in a distinct output dictionary, so the base dictionary is duplicated and updated with the keys/values of every element in the product item (for the current input, there is only one element per product item)
The new dictionary is appended to a list
At this point a list of base dictionaries (there might be just one) is complete; if no values consisting of lists with more than one element are present, this is the return list, otherwise everything below has to be done for each base dictionary in the list
Keys having a list with more elements ("very problematic"): each will generate more than one output dictionary:
First, the cartesian product is computed over all such values (lists with more than one element). In the current case, since there is only one such list, each product item will only contain an element from that list
Then, for each product item element, its key is established based on the lists' order (for the current input, each product item contains only one element, and there is only one key)
Again, each product item needs to be in a distinct output dictionary, so the base dictionary is duplicated and updated with the keys/values of the flattened product item
The new dictionary is appended to the output dictionaries list
Works with Python 3 and Python 2
Might be slow (especially for big input objects), as performance was not the goal. Also, since it was built bottom-up (adding functionality as new cases were handled), it is pretty twisted (RO: "întortocheat", i.e. convoluted :) ); there might be a simpler implementation that I missed.
Output:
c:\Work\Dev\StackOverflow\q046341856>c:\Work\Dev\VEnvs\py35x64_test\Scripts\python.exe code.py
[{'a': 1, 'b.ba': 2, 'b.bb': 3, 'c': 4},
{'a': 1, 'b.ba': 21, 'b.bb': 31, 'c': 4},
{'a': 11, 'b.ba': 22, 'b.bb': 33, 'b.bc.e': 2, 'b.bc.h': 1, 'c': 44}]
#EDIT0:
Made it more general (although it's not visible for the current input): values containing only one element can yield more than one output dictionary (when flattened); addressed that case (before, I was only considering the 1st output dictionary, simply ignoring the rest)
Corrected a logical error that was masked by tuple unpacking combined with the cartesian product: the `if not complex_items ...` part
#EDIT1:
Modified the code to match a requirement change: the key in the flattened dictionary must have the full nesting path in the input dictionary
Use this code to get your desired output; it generates the output via a recursive call.
import json
from copy import deepcopy
def flatten(final_list, all_obj, temp_dct, last_key):
    for dct in all_obj:
        deep_temp_dct = deepcopy(temp_dct)
        for k, v in dct.items():
            if isinstance(v, list):
                final_list, deep_temp_dct = flatten(final_list, v, deep_temp_dct, k)
            else:
                prefix = ""
                if last_key:
                    prefix = last_key + "."
                key = prefix + k
                deep_temp_dct[key] = v
        if deep_temp_dct not in final_list:
            final_list.append(deep_temp_dct)
    return final_list, deep_temp_dct

final_list, _ = flatten([], all_objs1, {}, "")
print(json.dumps(final_list, indent=4))
let me know if it works for you.