I have three lists that I would like to make into keys for an empty triple nested dictionary using dictionary comprehension. "Facility" is the outer key, "Products" is the middle key, and "Customer" is the inner key. How can I do this? Thanks in advance
Here are my lists that need to be made into keys:
Facility = ['Fac-1', 'Fac-2','Fac-3','Fac-4','Fac-5']
Products = ['45906','48402','80591','102795','107275','128067','129522',]
Customer = ["Apple","Samsung","Huawei","Nokia","Motorolla"]
If by "empty dict" you mean empty lists as the innermost values, you could do that with a dict comprehension:
>>> d = {f: {p: {c: [] for c in Customer} for p in Products} for f in Facility}
Then you will get the data structure you described:
>>> d
{'Fac-1': {'45906': {'Apple': [], ...}, ...},
 'Fac-2': {'45906': {'Apple': [], ...}, ...},
...}
>>> d.keys()
dict_keys(['Fac-1', 'Fac-2', 'Fac-3', 'Fac-4', 'Fac-5'])
>>> d['Fac-1'].keys()
dict_keys(['45906', '48402', '80591', '102795', '107275', '128067', '129522'])
>>> d['Fac-1']['45906'].keys()
dict_keys(['Apple', 'Samsung', 'Huawei', 'Nokia', 'Motorolla'])
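For example, a quick sketch of filling it in (the value 120 here is just illustrative, not from the question):
>>> d['Fac-1']['45906']['Apple'].append(120)
>>> d['Fac-1']['45906']['Apple']
[120]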
I want to add the key 'Name' to a list of dictionaries, in whichever dictionaries 'Name' doesn't already exist.
For example,
[dict(item, **{'Name': 'apple'}) for item in d_list]
will update the value of the key 'Name' even if the key already exists, and
[dict(item, **{'Name': 'apple'}) for item in d_list if 'Name' not in item]
returns an empty list.
You need to handle two different cases: when the filtered list is empty and when it's not.
It's not possible to handle both use cases in a single list comprehension, because when nothing passes the filter you always get back an empty list. It is like doing for i in my_list: if the list is empty, the code inside the for block won't be executed.
I would tackle it with a single loop. I find it more readable.
>>> default = {"Name": "apple"}
>>> miss_map = {"Data": "text"}
>>> exist_map = {"Name": "pie"}
>>>
>>> list_dict = [miss_map, exist_map]
>>> for d in list_dict:
...     if "Name" not in d.keys():
...         d.update(default)
...
>>> list_dict
[{'Data': 'text', 'Name': 'apple'}, {'Name': 'pie'}]
>>>
You can then move it to its own function and pass it the list of dicts.
In one line of code (note that this reorders d_list, putting the newly updated dictionaries first):
d_list = [{**d, "Name": "apple"} for d in d_list if "Name" not in d] + [d for d in d_list if "Name" in d]
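Assuming d_list is the [miss_map, exist_map] list from the earlier answer, that should give roughly:
[{'Data': 'text', 'Name': 'apple'}, {'Name': 'pie'}]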
Based on Abdul Aziz's comment, I could do it in one line using
[item.setdefault("Name", 'apple') for item in d_list]
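Note that setdefault mutates the dicts in place; the list the comprehension builds is just a side effect. A quick sketch with sample data (my own example, not from the question):
>>> d_list = [{"Data": "text"}, {"Name": "pie"}]
>>> [item.setdefault("Name", 'apple') for item in d_list]
['apple', 'pie']
>>> d_list
[{'Data': 'text', 'Name': 'apple'}, {'Name': 'pie'}]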
I have a list of drugs that I want to compare to a dictionary, where the dictionary keys are drug codes and the dictionary values are lists of drugs. I'd like to only retain the drugs within the dictionary that correspond to the list of drugs.
Example list:
l = ['sodium', 'nitrogen', 'phosphorus']
And dictionary:
d = {'A02A4': ['sodium', 'nitrogen', 'carbon']}
I would want my final dictionary to look like:
{'A02A4': ['nitrogen', 'sodium']}
with the values that are not present in the list removed, doing this for all key-value pairs in the dictionary.
You could use a dictionary comprehension and sets to keep only the values that intersect with the list:
l = ['sodium', 'nitrogen', 'phosphorus']
d = {'A02A4': ['sodium', 'nitrogen', 'carbon']}
{i: list(set(v) & set(l)) for i,v in d.items()}
{'A02A4': ['nitrogen', 'sodium']}
Or equivalently, using intersection:
{i: list(set(v).intersection(l)) for i,v in d.items()}
{'A02A4': ['nitrogen', 'sodium']}
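Note that going through set() discards the original order (and any duplicates) in the value lists; if order matters, a rough order-preserving variant would be:
keep = set(l)
{i: [x for x in v if x in keep] for i, v in d.items()}
{'A02A4': ['sodium', 'nitrogen']}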
I have a solution to this question, but I'm curious if there might be a better way.
I have a dict like this:
dict1 = {
    'key1': [['value1','value2','value3'], ['value4','value5','value6']],
    'key2': [['value7','value8','value9'], ['value10','value11','value12']],
    'key3': [['value13','value14','value15'], ['value16','value17','value18']]}
I want to convert this into a nested list, inserting the keys into the new sublists like this:
nestedlist = [
    ['value1','value2','key1','value3'], ['value4','value5','key1','value6'],
    ['value7','value8','key2','value9'], ['value10','value11','key2','value12'],
    ['value13','value14','key3','value15'], ['value16','value17','key3','value18']]
I solve this the following way:
keys = [*dict1]
newlist = []
for item in keys:
    for item2 in dict1[item]:
        item2.insert(2, item)
        newlist.append(item2)
So, how can I improve this piece of code?
Here is one way via a list comprehension:
res = [[w[0], w[1], k, w[2]] for k, v in dict1.items() for w in v]
# [['value1', 'value2', 'key1', 'value3'],
# ['value4', 'value5', 'key1', 'value6'],
# ['value7', 'value8', 'key2', 'value9'],
# ['value10', 'value11', 'key2', 'value12'],
# ['value13', 'value14', 'key3', 'value15'],
# ['value16', 'value17', 'key3', 'value18']]
I would've done it pretty similarly. The only differences are shown here:
result = []
for k, l in dict1.items():
    for ll in l:
        ll.insert(2, k)
        result.append(ll)
No need to do list unpacking or [] indexing on dict1: items() gives you each key and value of the dict as (key, value) pairs.
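One thing worth noting (my own observation, not part of either answer): both loop versions call insert on the sublists of dict1 itself, so they mutate the original dictionary as a side effect, while the comprehension above builds new sublists. A rough sketch of the loop form that leaves dict1 untouched:
result = []
for k, v in dict1.items():
    for sub in v:
        # build a copy of the sublist with the key spliced in at position 2
        result.append(sub[:2] + [k] + sub[2:])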
Say I have a dictionary with many items that have the same values; for example:
dict = {'hello':'a', 'goodbye':'z', 'bonjour':'a', 'au revoir':'z', 'how are you':'m'}
How would I split the dictionary into dictionaries (in this case, three dictionaries) with the same values? In the example, I want to end up with this:
dict1 = {'hello':'a', 'bonjour':'a'}
dict2 = {'goodbye':'z', 'au revoir':'z'}
dict3 = {'how are you':'m'}
You can use itertools.groupby to collect items by their common values (sorting by value first, since groupby only groups consecutive items), then create dict objects for each group within a list comprehension. Here the input dictionary is named d rather than dict, to avoid shadowing the built-in.
>>> from itertools import groupby
>>> import operator
>>> by_value = operator.itemgetter(1)
>>> [dict(g) for k, g in groupby(sorted(d.items(), key = by_value), by_value)]
[{'hello': 'a', 'bonjour': 'a'},
{'how are you': 'm'},
{'goodbye': 'z', 'au revoir': 'z'}]
Another way without importing any modules is as follows:
def split_dict(d):
    unique_vals = list(set(d.values()))
    split_dicts = []
    for i in range(len(unique_vals)):
        unique_dict = {}
        for key in d:
            if d[key] == unique_vals[i]:
                unique_dict[key] = d[key]
        split_dicts.append(unique_dict)
    return split_dicts
For each unique value in the input dictionary, we create a dictionary and add the key-value pairs from the input dictionary whose value equals that unique value. We then append each dictionary to a list, which is finally returned.
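A single-pass alternative (my own sketch, not from either answer) groups by value with collections.defaultdict instead of rescanning the whole dict once per unique value:
from collections import defaultdict

def split_dict(d):
    groups = defaultdict(dict)  # one bucket of key-value pairs per distinct value
    for key, value in d.items():
        groups[value][key] = value
    return list(groups.values())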
I have a list with a set number of dictionaries inside, which I have to compare to one other dictionary.
They have the following form (there is no specific form or pattern for keys and values, these are randomly chosen examples):
list1 = [
    {'X1': 'Q587', 'X2': 'Q67G7', ...},
    {'AB1': 'P5K7', 'CB2': 'P678', ...},
    {'B1': 'P6H78', 'C2': 'BAA5', ...}]
dict1 = {
    'X1': set(['B00001', 'B00020', 'B00010']),
    'AB1': set(['B00001', 'B00007', 'B00003']),
    'C2': set(['B00001', 'B00002', 'B00003']), ...
}
What I want now is a new dictionary whose keys are the values of the dictionaries in list1 and whose values are the values of dict1, but only for keys that the compared dictionaries have in common.
I have done this in the following way:
nDicts = len(list1)
resultDict = {}
for key in range(0, nDicts):
    for x in list1[key].keys():
        if x in dict1.keys():
            resultDict.update({list1[key][x]: dict1[x]})
print resultDict
The desired output should be of the form:
resultDict = {
    'Q587': set(['B00001', 'B00020', 'B00010']),
    'P5K7': set(['B00001', 'B00007', 'B00003']),
    'BAA5': set(['B00001', 'B00002', 'B00003']), ...
}
This works but since the amount of data is so high this takes forever.
Is there a better way to do this?
EDIT: I have changed the input values a little; the only ones that matter are the keys that intersect between the dictionaries within list1 and those within dict1.
The keys method in Python 2.x makes a list with a copy of all of the keys, and you're doing this not only for each dict in list1 (probably not a big deal, but it's hard to know for sure without knowing your data), but also doing it for dict1 over and over again.
On top of that, doing an in test on a list takes a long time, because it has to check each value in the list until it finds a match, but doing an in test on a dictionary is nearly instant, because it just has to look up the hash value.
Both keys() calls are actually completely unnecessary: iterating a dict gives you its keys (in an unspecified order, but the same is true for calling keys()), and an in check on a dict searches the same keys you'd get from keys(). So just removing them does the same thing, but simpler, faster, and with less memory used. So:
for key in range(0, nDicts):
    for x in list1[key]:
        if x in dict1:
            resultDict.update({list1[key][x]: dict1[x]})
print resultDict
There are also ways you can simplify this that probably won't help performance that much, but are still worth doing.
You can iterate directly over list1 instead of building a huge list of all the indices and iterating that.
for list1_dict in list1:
    for x in list1_dict:
        if x in dict1:
            resultDict.update({list1_dict[x]: dict1[x]})
print resultDict
And you can get the keys and values in a single step:
for list1_dict in list1:
    for k, v in list1_dict.iteritems():
        if k in dict1:
            resultDict.update({v: dict1[k]})
print resultDict
Also, if you expect most of the values to be found, it will take about twice as long to first check for the value and then look it up as it would to just try to look it up and handle failure. (This is not true if most of the values will not be found, however.) So:
for list1_dict in list1:
    for k, v in list1_dict.iteritems():
        try:
            resultDict.update({v: dict1[k]})
        except KeyError:
            pass
print resultDict
You can simplify and optimize your operation with set intersections; as of Python 2.7 dictionaries can represent keys as sets using the dict.viewkeys() method, or dict.keys() in Python 3:
resultDict = {}
for d in list1:
    for sharedkey in d.viewkeys() & dict1:
        resultDict[d[sharedkey]] = dict1[sharedkey]
This can be turned into a dict comprehension even:
resultDict = {d[sharedkey]: dict1[sharedkey]
              for d in list1 for sharedkey in d.viewkeys() & dict1}
I am assuming here you wanted one resulting dictionary, not a new dictionary per shared key.
Demo on your sample input:
>>> list1 = [
... {'X1': 'AAA1', 'X2': 'BAA5'},
... {'AB1': 'AAA1', 'CB2': 'BAA5'},
... {'B1': 'AAA1', 'C2': 'BAA5'},
... ]
>>> dict1 = {
... 'X1': set(['B00001', 'B00002', 'B00003']),
... 'AB1': set(['B00001', 'B00002', 'B00003']),
... }
>>> {d[sharedkey]: dict1[sharedkey]
... for d in list1 for sharedkey in d.viewkeys() & dict1}
{'AAA1': set(['B00001', 'B00002', 'B00003'])}
Note that both X1 and AB1 are shared with dictionaries in list1, but in both cases the resulting key is AAA1. Only one of these wins (the last match), but since both values in dict1 are exactly the same anyway, it makes no difference in this case.
If you wanted separate dictionaries per dictionary in list1, simply move the for d in list1: loop out:
for d in list1:
    resultDict = {d[sharedkey]: dict1[sharedkey] for sharedkey in d.viewkeys() & dict1}
    if resultDict:  # can be empty
        print resultDict
If you really wanted one dictionary per shared key, move another loop out:
for d in list1:
    for sharedkey in d.viewkeys() & dict1:
        resultDict = {d[sharedkey]: dict1[sharedkey]}
        print resultDict
#!/usr/bin/env python
list1 = [
    {'X1': 'AAA1', 'X2': 'BAA5'},
    {'AB1': 'AAA1', 'CB2': 'BAA5'},
    {'B1': 'AAA1', 'C2': 'BAA5'}
]
dict1 = {
    'X1': set(['B00001', 'B00002', 'B00003']),
    'AB1': set(['B00001', 'B00002', 'B00003'])
}
# generator of (key, value) pair iterators, one per dict in list1
g = (k.iteritems() for k in list1)
# keep only the keys shared with dict1, mapping list1's value to dict1's value
ite = ((b, dict1[a]) for i in g for a, b in i if a in dict1)
d = dict(ite)
print d
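Run against this sample data, that should print something like:
{'AAA1': set(['B00001', 'B00002', 'B00003'])}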