I am trying to sort a nested dictionary in Python.
Right now I have a dictionary of dictionaries. I was able to sort the outer keys using sorted() on the list before I started building the dictionary, but I am unable to get the inner keys sorted at the same time.
I've been trying to mess with the sorted API but am still having problems with it.
Right now I have:
myDict = {'A': {'Key3': 4, 'Key2': 3, 'Key4': 2, 'Key1': 1},
          'B': {'Key4': 1, 'Key3': 2, 'Key2': 3, 'Key1': 4},
          'C': {'Key1': 4, 'Key2': 2, 'Key4': 1, 'Key3': 3}}
But I would like:
myDict = {'A': {'Key1': 1, 'Key2': 3, 'Key3': 4, 'Key4': 2},
          'B': {'Key1': 4, 'Key2': 3, 'Key3': 2, 'Key4': 1},
          'C': {'Key1': 4, 'Key2': 2, 'Key3': 3, 'Key4': 1}}
I appreciate the help!
>>> from collections import OrderedDict
>>> def sortedDict(items):
...     return OrderedDict(sorted(items))
>>> myDict = {'A': {'Key3':4,'Key2':3,'Key4':2,'Key1':1},
... 'B': {'Key4':1,'Key3':2,'Key2':3,'Key1':4},
... 'C': {'Key1':4,'Key2':2,'Key4':1,'Key3':3}}
>>> sortedDict((key, sortedDict(value.items())) for key, value in myDict.items())
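Since regular dicts keep insertion order on Python 3.7+, the same idea also works without OrderedDict. A minimal sketch, assuming Python 3.7 or newer (the names here are just illustrative):

myDict = {'A': {'Key3': 4, 'Key2': 3, 'Key4': 2, 'Key1': 1},
          'B': {'Key4': 1, 'Key3': 2, 'Key2': 3, 'Key1': 4},
          'C': {'Key1': 4, 'Key2': 2, 'Key4': 1, 'Key3': 3}}

# Sort the outer items, and rebuild each inner dict from its sorted items.
sorted_nested = {outer: dict(sorted(inner.items()))
                 for outer, inner in sorted(myDict.items())}
print(sorted_nested)
# {'A': {'Key1': 1, 'Key2': 3, 'Key3': 4, 'Key4': 2},
#  'B': {'Key1': 4, 'Key2': 3, 'Key3': 2, 'Key4': 1},
#  'C': {'Key1': 4, 'Key2': 2, 'Key3': 3, 'Key4': 1}}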
Related
How can I turn a list of dicts like [{'a':1}, {'b':2}, {'c':1}, {'d':2}], into a single dict like {'a':1, 'b':2, 'c':1, 'd':2}?
Answers here will overwrite keys that match between two of the input dicts, because a dict cannot have duplicate keys. If you want to collect multiple values from matching keys, see How to merge dicts, collecting values from matching keys?.
This works for dictionaries of any length:
>>> result = {}
>>> for d in L:
... result.update(d)
...
>>> result
{'a': 1, 'c': 1, 'b': 2, 'd': 2}
As a comprehension:
# Python >= 2.7
{k: v for d in L for k, v in d.items()}
# Python < 2.7
dict(pair for d in L for pair in d.items())
On Python 3.3+, there is a ChainMap collection:
>>> from collections import ChainMap
>>> a = [{'a':1},{'b':2},{'c':1},{'d':2}]
>>> dict(ChainMap(*a))
{'b': 2, 'c': 1, 'a': 1, 'd': 2}
Also see:
What is the purpose of collections.ChainMap?
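One detail worth keeping in mind: ChainMap lookups search the mappings from left to right, so for duplicate keys the first dict wins, which is the opposite of the update/comprehension approaches above where the last dict wins. A small sketch to illustrate:

from collections import ChainMap

dicts = [{'x': 1}, {'x': 2}]
print(dict(ChainMap(*dicts)))                        # {'x': 1} -- first dict wins
print({k: v for d in dicts for k, v in d.items()})   # {'x': 2} -- last dict wins
print(dict(ChainMap(*reversed(dicts))))              # {'x': 2} -- reverse to match update()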
A small improvement on @dietbuddha's answer, using dictionary unpacking from PEP 448. For me it's more readable this way, and it is faster as well:
from functools import reduce
result_dict = reduce(lambda a, b: {**a, **b}, list_of_dicts)
But keep in mind that this only works on Python 3.5+.
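On Python 3.9+, the dict union operator from PEP 584 gives a similar one-liner. A small sketch, assuming Python 3.9 or newer:

from functools import reduce
from operator import or_

list_of_dicts = [{'a': 1}, {'b': 2}, {'c': 1}, {'d': 2}]
merged = reduce(or_, list_of_dicts, {})   # equivalent to {} | d1 | d2 | ...
print(merged)                             # {'a': 1, 'b': 2, 'c': 1, 'd': 2}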
This is similar to @delnan's answer, but it offers the option to modify the k/v (key/value) items, and I believe it is more readable:
new_dict = {k:v for list_item in list_of_dicts for (k,v) in list_item.items()}
For instance, you can transform keys/values as you go; here the keys have their spaces replaced with underscores:
new_dict = {str(k).replace(" ","_"):v for list_item in list_of_dicts for (k,v) in list_item.items()}
This unpacks each (k, v) tuple from the dictionary's .items() view after pulling each dict object out of the list.
For flat dictionaries you can do this:
from functools import reduce
reduce(lambda a, b: dict(a, **b), list_of_dicts)
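Note that dict(a, **b) routes the keys of b through keyword arguments, so on Python 3 it only works while every key in b is a string. A quick sketch of the failure mode:

from functools import reduce

print(reduce(lambda a, b: dict(a, **b), [{'a': 1}, {'b': 2}]))   # {'a': 1, 'b': 2}

try:
    reduce(lambda a, b: dict(a, **b), [{'a': 1}, {2: 'two'}])
except TypeError as exc:
    print(exc)   # non-string keys are rejected as keyword arguments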
You can use the join function from the funcy library:
from funcy import join
join(list_of_dicts)
>>> L=[{'a': 1}, {'b': 2}, {'c': 1}, {'d': 2}]
>>> dict(i.items()[0] for i in L)
{'a': 1, 'c': 1, 'b': 2, 'd': 2}
Note: the order of 'b' and 'c' doesn't match your output because this is Python 2, where dicts are unordered (in Python 3, .items() is not indexable, so you would use next(iter(i.items())) instead).
If the dicts can have more than one key/value pair:
>>> dict(j for i in L for j in i.items())
If you don't need the singleton dicts anymore:
>>> L = [{'a':1}, {'b':2}, {'c':1}, {'d':2}]
>>> dict(map(dict.popitem, L))
{'a': 1, 'b': 2, 'c': 1, 'd': 2}
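To make the "don't need them anymore" part concrete: dict.popitem removes the pair it returns, so each singleton dict ends up empty. A minimal illustration:

L = [{'a': 1}, {'b': 2}]
merged = dict(map(dict.popitem, L))
print(merged)   # {'a': 1, 'b': 2}
print(L)        # [{}, {}] -- the input dicts have been emptied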
dict1.update( dict2 )
This is asymmetrical because you need to choose what to do with duplicate keys; in this case, dict2 will overwrite dict1. Exchange them for the other way.
EDIT: Ah, sorry, I didn't see that you wanted a single expression.
It is possible to do this in a single expression:
>>> from itertools import chain
>>> dict( chain( *map( dict.items, theDicts ) ) )
{'a': 1, 'c': 1, 'b': 2, 'd': 2}
No credit to me for this last!
However, I'd argue that it might be more Pythonic (explicit > implicit, flat > nested) to do this with a simple for loop. YMMV.
This way worked for me:
object = [{'a':1}, {'b':2}, {'c':1}, {'d':2}]
object = {k: v for dct in object for k, v in dct.items()}
Printing object gives:
{'a': 1, 'b': 2, 'c': 1, 'd': 2}
Thanks, Axes.
>>> dictlist = [{'a':1},{'b':2},{'c':1},{'d':2, 'e':3}]
>>> dict(kv for d in dictlist for kv in d.iteritems())
{'a': 1, 'c': 1, 'b': 2, 'e': 3, 'd': 2}
>>>
Note that I added a second key/value pair to the last dictionary to show it works with multiple entries. (This is Python 2 code; on Python 3, use d.items() instead of d.iteritems().)
Also, keys from dicts later in the list will overwrite the same key from an earlier dict.
I am trying to combine a dictionary, a list that contains dictionaries, and an empty dictionary.
dict1 = {'a': 1, 'b': 2}
list1 = [{'c': 3}, {'d':4}]
emptydict = {}
emptylist = []
I am trying to merge them into a final dictionary that looks like this:
final = {'a': 1, 'b': 2, 'c': 3, 'd':4}
Code:
final = {**dict1, **list1[0], **list1[1], **emptydict, **emptylist}
Here I don't know the length of list1; can anyone suggest a better way than this?
dict.update updates an existing dictionary. Unfortunately it’s a mutating method that does not return a value. Therefore adapting it to functools.reduce requires a wrapper.
Due to this, I’d be tempted to use a good old loop:
final = dict(dict1)
for d in list1:
    final.update(d)
But for completeness, here’s a way using functools.reduce:
import functools
def dict_update(d, v):
    d.update(v)
    return d
final = functools.reduce(dict_update, list1, dict1.copy())
If you are using Python 3.3+, you can use ChainMap to merge the list of dicts into a single mapping, and then use the ** operator to merge dict1 with ChainMap(*list1):
from collections import ChainMap
final = {**dict1, **ChainMap(*list1)}
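As a quick sanity check with the data from the question (just a sketch reusing the names defined above):

from collections import ChainMap

dict1 = {'a': 1, 'b': 2}
list1 = [{'c': 3}, {'d': 4}]

final = {**dict1, **ChainMap(*list1)}
print(final == {'a': 1, 'b': 2, 'c': 3, 'd': 4})   # True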
I have a dictionary in which one key is a NaN, and I want to delete the key and its corresponding value. I have tried this:
from math import isnan
clean_dict = filter(lambda k: not isnan(k), dict_HMDB_ID)
but clean_dict is not a dictionary. When I try to output it from Python, I just get ''.
filter doesn't return a dictionary. It returns a list in Python 2 and a filter object in Python 3.
You can use dict.pop:
d = {'a': 1, 'b': 2}
print(d)
# {'a': 1, 'b': 2}
d.pop('b')
print(d)
# {'a': 1}
And in your specific case,
dict_HMDB_ID.pop(float('NaN'))
For the sake of completeness, it could also be done with a dictionary comprehension, although there is little point in iterating over everything since keys are unique anyway:
clean_dict = {k: v for k, v in dict_HMDB_ID.items() if not math.isnan(k)}
If you insist on using filter here (you really shouldn't), you will need to:
- pass it dict_HMDB_ID.items() so it will keep the original values,
- provide a custom function, because it will now operate on (key, value) tuples, and
- transform the returned filter object (it will contain an iterator with (key, value) tuples) back to a dictionary:
import math
dict_HMDB_ID = {1: 'a', float('Nan'): 'b'}
clean_dict = dict(filter(lambda tup: not math.isnan(tup[0]), dict_HMDB_ID.items()))
print(clean_dict)
# {1: 'a'}
I should probably mention that the first approach (.pop) directly modifies dict_HMDB_ID, while the other two create a new dictionary. If you wish to use .pop but leave dict_HMDB_ID as it is, you can first copy it with dict:
d = {'a': 1, 'b': 2}
new_d = dict(d)
new_d.pop('b')
print(d)
# {'a': 1, 'b': 2}
print(new_d)
# {'a': 1}
You could do:
from math import nan
dict_HMDB_ID.pop(nan)
clean_dict = dict_HMDB_ID
Or, the other way around, if you want to preserve dict_HMDB_ID:
from math import nan
clean_dict = dict(dict_HMDB_ID)
clean_dict.pop(nan)
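One caveat with popping a NaN key directly: NaN compares unequal to itself, so a freshly built float('nan') (or math.nan, if the stored key isn't that exact object) may not match the key in the dict, and pop can raise KeyError. A safer sketch is to locate the actual key object with math.isnan and pop that:

import math

d = {1: 'a', float('nan'): 'b'}

# d.pop(float('nan')) would raise KeyError here: the new NaN object is
# neither identical nor equal to the stored key.
nan_keys = [k for k in d if isinstance(k, float) and math.isnan(k)]
for k in nan_keys:
    d.pop(k)

print(d)   # {1: 'a'}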
Here is my dictionary:
d = {'a': 100, 'b': 200, 'c': 300, 'd': 350}
I can find the 2 keys with the biggest values and put them in a list:
sorted(d, key=d.get, reverse=True)[:2]
But what should I do to put the top 2 keys and their values into another dictionary instead of a list?
Thanks.
1. Sort dict.items based on values.
2. Slice the sorted list.
3. Pass the sliced list to dict().
4. Pass the dict returned by dict() to the update method of the dict you want to modify.
Demo:
>>> d = {'a': 100, 'b': 200, 'c': 300, 'd': 350}
>>> dic = {}
>>> dic.update(dict(sorted(d.items(), key=lambda x:x[1], reverse=True)[:2]))
>>> dic
{'c': 300, 'd': 350}
Using operator.itemgetter:
from operator import itemgetter
dic = {}
dic.update(dict(sorted(d.items(), key=itemgetter(1), reverse=True)[:2]))
If the dictionary is huge, then heapq.nlargest will be more efficient than sorted:
>>> import heapq
>>> dic = {}
>>> dic.update({k:d[k] for k in heapq.nlargest(2, d, key=d.get)})
>>> dic
{'c': 300, 'd': 350}
dict(sorted(d.items(), key=lambda x:x[1], reverse=True)[:2])
Update:
Thanks for the upvote.
I saw that no one had answered this question, so I decided to answer it, but by the time I submitted my answer it already ranked third.
I am glad to see the very useful form lambda x: x[1] showed up in the earliest answer. I strongly recommend this form: simple yet powerful at describing the idea.
I also wonder about the downvote on the lambda form: what's wrong with lambda? Any better idea to share with us?