Return copy of dictionary excluding specified keys - python

I want to make a function that returns a copy of a dictionary excluding keys specified in a list.
Considering this dictionary:
my_dict = {
    "keyA": 1,
    "keyB": 2,
    "keyC": 3
}
A call to without_keys(my_dict, ['keyB', 'keyC']) should return:
{
    "keyA": 1
}
I would like to do this in a one-line with a neat dictionary comprehension but I'm having trouble. My attempt is this:
def without_keys(d, keys):
    return {k: d[f] if k not in keys for f in d}
which is invalid syntax. How can I do this?

You were close, try the snippet below:
>>> my_dict = {
... "keyA": 1,
... "keyB": 2,
... "keyC": 3
... }
>>> invalid = {"keyA", "keyB"}
>>> def without_keys(d, keys):
...     return {x: d[x] for x in d if x not in keys}
...
>>> without_keys(my_dict, invalid)
{'keyC': 3}
Basically, the if x not in keys filter goes at the end of the dict comprehension in the above case.

In your dictionary comprehension you should be iterating over your dictionary's items (your k is never defined anywhere; only f is bound by the loop). Example -
return {k:v for k,v in d.items() if k not in keys}

This should work for you.
def without_keys(d, keys):
    return {k: v for k, v in d.items() if k not in keys}

Even shorter: Python 3 lets you 'subtract' a list from a dict_keys view, since key views support set operations.
def without_keys(d, keys):
    return {k: d[k] for k in d.keys() - keys}
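As a quick aside (a hedged sketch of my own, not part of the original answer): the subtraction works because dict key views support the standard set operations, and the other operand can be any iterable:
d = {"keyA": 1, "keyB": 2, "keyC": 3}
print(d.keys() - ['keyB', 'keyC'])          # {'keyA'}   (difference with a plain list)
print(sorted(d.keys() & {'keyA', 'keyB'}))  # ['keyA', 'keyB']   (intersection)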

For those who don't like dict comprehensions, this is my version:
def without_keys(d, *keys):
    return dict(filter(lambda key_value: key_value[0] not in keys, d.items()))
Usage:
>>> d={1:3, 5:7, 9:11, 13:15}
>>> without_keys(d, 1, 5, 9)
{13: 15}
>>> without_keys(d, 13)
{1: 3, 5: 7, 9: 11}
>>> without_keys(d, *[5, 7])
{1: 3, 13: 15, 9: 11}

Your one-liner:
my_dict = {"keyA": 1, "keyB": 2, "keyC": 3}
(lambda keyB, keyC, **kw: kw)(**my_dict)
which returns {'keyA': 1}.
Not very Pythonic or dynamic, but hacky and short.
It uses the dict unpacking (destructuring assignment) of function arguments.
See also
https://stackoverflow.com/a/53851069/11769765.

You could use this generalized solution for nested dictionaries:
def copy_dict(data, strip_values=False, remove_keys=[]):
    if type(data) is dict:
        out = {}
        for key, value in data.items():
            if key not in remove_keys:
                out[key] = copy_dict(value, strip_values=strip_values, remove_keys=remove_keys)
        return out
    else:
        return [] if strip_values else data
This recursive solution works for nested dictionaries and removes keys not required from the entire nested structure. It also gives you the ability to return the nest with only keys and no values.
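A quick illustration (my own example, not from the original answer) of how copy_dict behaves on a small nested dict:
data = {"keyA": 1, "nested": {"keyB": 2, "keyC": 3}}
print(copy_dict(data, remove_keys=["keyB"]))
# {'keyA': 1, 'nested': {'keyC': 3}}
print(copy_dict(data, strip_values=True))
# {'keyA': [], 'nested': {'keyB': [], 'keyC': []}}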

Related

Python3 dictionary: remove duplicate values in alphabetical order

Let's say I have the following dictionary:
full_dic = {
    'aa': 1,
    'ac': 1,
    'ab': 1,
    'ba': 2,
    ...
}
I normally use standard dictionary comprehension to remove dupes like:
t = {val : key for (key, val) in full_dic.items()}
cleaned_dic = {val : key for (key, val) in t.items()}
Calling print(cleaned_dic) outputs {'ab': 1,'ba': 2, ...}
With this code, the key that remains seems to always be the final one in the list, but I'm not sure that's even guaranteed as dictionaries are unordered. Instead, I'd like to find a way to ensure that the key I keep is the first alphabetically.
So, regardless of the 'order' the dictionary is in, I want the output to be:
>> {'aa': 1,'ba': 2, ...}
Where 'aa' comes first alphabetically.
I ran some timer tests on 3 answers below and got the following (dictionary was created with random key/value pairs):
dict length: 10
# of loops: 100000
HoliSimo (OrderedDict): 0.0000098405 seconds
Ricardo: 0.0000115448 seconds
Mark (itertools.groupby): 0.0000111745 seconds
dict length: 1000000
# of loops: 10
HoliSimo (OrderedDict): 6.1724137300 seconds
Ricardo: 3.3102091300 seconds
Mark (itertools.groupby): 6.1338266200 seconds
We can see that for smaller dictionary sizes using OrderedDict is fastest, but for large dictionary sizes Ricardo's answer below is roughly twice as fast.
t = {val : key for (key, val) in dict(sorted(full_dic.items(), key=lambda x: x[0].lower(), reverse=True)).items()}
cleaned_dic = {val : key for (key, val) in t.items()}
dict(sorted(cleaned_dic.items(), key=lambda x: x[0].lower()))
>>> {'aa': 1, 'ba': 2}
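For anyone wondering why the reverse sort does the trick, here is a short trace on sample data (my own sketch; it assumes Python 3.7+ insertion-ordered dicts): because the items are written into t in reverse alphabetical order, the alphabetically first key is written last and therefore wins.
full_dic = {'aa': 1, 'ac': 1, 'ab': 1, 'ba': 2}
t = {val: key for (key, val) in
     dict(sorted(full_dic.items(), key=lambda x: x[0].lower(), reverse=True)).items()}
# t == {2: 'ba', 1: 'aa'}
cleaned_dic = {val: key for (key, val) in t.items()}
# cleaned_dic == {'ba': 2, 'aa': 1}
print(dict(sorted(cleaned_dic.items(), key=lambda x: x[0].lower())))
# {'aa': 1, 'ba': 2}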
Seems like you can do this with a single sort and itertools.groupby. First sort the items by value, then key. Pass this to groupby and take the first item of each group to pass to the dict constructor:
from itertools import groupby
full_dic = {
    'aa': 1,
    'ac': 1,
    'xx': 2,
    'ab': 1,
    'ba': 2,
}
groups = groupby(sorted(full_dic.items(), key=lambda p: (p[1], p[0])), key=lambda x: x[1])
dict(next(g) for k, g in groups)
# {'aa': 1, 'ba': 2}
You should use the OrderedDict class.
import collections
full_dic = {
    'aa': 1,
    'ac': 1,
    'ab': 1
}
od = collections.OrderedDict(sorted(full_dic.items()))
This way you can be sure to have a sorted dictionary (original code: Stack Overflow).
And then:
result = {}
for key, value in od.items():
    if value not in result.values():
        result[key] = value
I'm not sure if it will speed up the computation but you can try:
inverted_dict = {}
for k, v in od.items():
    if inverted_dict.get(v) is None:
        inverted_dict[v] = k
res = {v: k for k, v in inverted_dict.items()}

Nested dictionary into a dictionary in Python [duplicate]

Suppose you have a dictionary like:
{'a': 1,
 'c': {'a': 2,
       'b': {'x': 5,
             'y': 10}},
 'd': [1, 2, 3]}
How would you go about flattening that into something like:
{'a': 1,
 'c_a': 2,
 'c_b_x': 5,
 'c_b_y': 10,
 'd': [1, 2, 3]}
Basically the same way you would flatten a nested list: you just have to do the extra work of iterating the dict by key/value, creating new keys for your new dictionary, and building the dictionary at the final step.
import collections

def flatten(d, parent_key='', sep='_'):
    items = []
    for k, v in d.items():
        new_key = parent_key + sep + k if parent_key else k
        if isinstance(v, collections.MutableMapping):
            items.extend(flatten(v, new_key, sep=sep).items())
        else:
            items.append((new_key, v))
    return dict(items)
>>> flatten({'a': 1, 'c': {'a': 2, 'b': {'x': 5, 'y' : 10}}, 'd': [1, 2, 3]})
{'a': 1, 'c_a': 2, 'c_b_x': 5, 'd': [1, 2, 3], 'c_b_y': 10}
For Python >= 3.3, change the import to from collections.abc import MutableMapping to avoid a deprecation warning and change collections.MutableMapping to just MutableMapping.
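For convenience, here is the same function with that change applied (a sketch for Python 3.3+; behaviour is otherwise unchanged):
from collections.abc import MutableMapping

def flatten(d, parent_key='', sep='_'):
    items = []
    for k, v in d.items():
        new_key = parent_key + sep + k if parent_key else k
        if isinstance(v, MutableMapping):
            items.extend(flatten(v, new_key, sep=sep).items())
        else:
            items.append((new_key, v))
    return dict(items)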
Or, if you are already using pandas, you can do it with json_normalize() like so:
import pandas as pd
d = {'a': 1,
     'c': {'a': 2, 'b': {'x': 5, 'y': 10}},
     'd': [1, 2, 3]}
df = pd.json_normalize(d, sep='_')
print(df.to_dict(orient='records')[0])
Output:
{'a': 1, 'c_a': 2, 'c_b_x': 5, 'c_b_y': 10, 'd': [1, 2, 3]}
There are two big considerations that the original poster needs to consider:
Are there keyspace clobbering issues? For example, {'a_b':{'c':1}, 'a':{'b_c':2}} would result in {'a_b_c':???}. The below solution evades the problem by returning an iterable of pairs.
If performance is an issue, does the key-reducer function (which I hereby refer to as 'join') require access to the entire key-path, or can it just do O(1) work at every node in the tree? If you want to be able to say joinedKey = '_'.join(*keys), that will cost you O(N^2) running time. However if you're willing to say nextKey = previousKey+'_'+thisKey, that gets you O(N) time. The solution below lets you do both (since you could merely concatenate all the keys, then postprocess them).
(Performance is not likely an issue, but I'll elaborate on the second point in case anyone else cares: In implementing this, there are numerous dangerous choices. If you do this recursively and yield and re-yield, or anything equivalent which touches nodes more than once (which is quite easy to accidentally do), you are doing potentially O(N^2) work rather than O(N). This is because maybe you are calculating a key a then a_1 then a_1_i..., and then calculating a then a_1 then a_1_ii..., but really you shouldn't have to calculate a_1 again. Even if you aren't recalculating it, re-yielding it (a 'level-by-level' approach) is just as bad. A good example is to think about the performance on {1:{1:{1:{1:...(N times)...{1:SOME_LARGE_DICTIONARY_OF_SIZE_N}...}}}})
Below is a function I wrote flattenDict(d, join=..., lift=...) which can be adapted to many purposes and can do what you want. Sadly it is fairly hard to make a lazy version of this function without incurring the above performance penalties (many python builtins like chain.from_iterable aren't actually efficient, which I only realized after extensive testing of three different versions of this code before settling on this one).
from collections.abc import Mapping  # on Python < 3.3 this was: from collections import Mapping
from itertools import chain
from operator import add

_FLAG_FIRST = object()

def flattenDict(d, join=add, lift=lambda x: (x,)):
    results = []
    def visit(subdict, results, partialKey):
        for k, v in subdict.items():
            newKey = lift(k) if partialKey == _FLAG_FIRST else join(partialKey, lift(k))
            if isinstance(v, Mapping):
                visit(v, results, newKey)
            else:
                results.append((newKey, v))
    visit(d, results, _FLAG_FIRST)
    return results
To better understand what's going on, below is a diagram for those unfamiliar with reduce(left), otherwise known as "fold left". Sometimes it is drawn with an initial value in place of k0 (not part of the list, passed into the function). Here, J is our join function. We preprocess each kn with lift(k).
[k0, k1, ..., kN].foldleft(J)
                 /    \
               ...     kN
               /
J(J(J(k0, k1), k2), k3)
        /   \
       /     k3
      /
 J(J(k0, k1), k2)
      /   \
     /     \
 J(k0, k1)  k2
    /  \
   /    \
  k0     k1
This is in fact the same as functools.reduce, but where our function does this to all key-paths of the tree.
>>> reduce(lambda a,b:(a,b), range(5))
((((0, 1), 2), 3), 4)
Demonstration (which I'd otherwise put in docstring):
>>> testData = {
...     'a': 1,
...     'b': 2,
...     'c': {
...         'aa': 11,
...         'bb': 22,
...         'cc': {
...             'aaa': 111
...         }
...     }
... }
>>> from pprint import pprint as pp
>>> pp(dict( flattenDict(testData) ))
{('a',): 1,
('b',): 2,
('c', 'aa'): 11,
('c', 'bb'): 22,
('c', 'cc', 'aaa'): 111}
>>> pp(dict( flattenDict(testData, join=lambda a,b:a+'_'+b, lift=lambda x:x) ))
{'a': 1, 'b': 2, 'c_aa': 11, 'c_bb': 22, 'c_cc_aaa': 111}
>>> pp(dict( (v,k) for k,v in flattenDict(testData, lift=hash, join=lambda a,b:hash((a,b))) ))
{1: 12416037344,
2: 12544037731,
11: 5470935132935744593,
22: 4885734186131977315,
111: 3461911260025554326}
Performance:
from functools import reduce

def makeEvilDict(n):
    return reduce(lambda acc, x: {x: acc}, [{i: 0 for i in range(n)}] + list(range(n)))

import timeit

def time(runnable):
    t0 = timeit.default_timer()
    _ = runnable()
    t1 = timeit.default_timer()
    print('took {:.2f} seconds'.format(t1 - t0))
>>> pp(makeEvilDict(8))
{7: {6: {5: {4: {3: {2: {1: {0: {0: 0,
1: 0,
2: 0,
3: 0,
4: 0,
5: 0,
6: 0,
7: 0}}}}}}}}}
import sys
sys.setrecursionlimit(1000000)
forget = lambda a,b:''
>>> time(lambda: dict(flattenDict(makeEvilDict(10000), join=forget)) )
took 0.10 seconds
>>> time(lambda: dict(flattenDict(makeEvilDict(100000), join=forget)) )
[1] 12569 segmentation fault python
... sigh, don't think that one is my fault...
[unimportant historical note due to moderation issues]
Regarding the alleged duplicate of Flatten a dictionary of dictionaries (2 levels deep) of lists
That question's solution can be implemented in terms of this one by doing sorted( sum(flatten(...),[]) ). The reverse is not possible: while it is true that the values of flatten(...) can be recovered from the alleged duplicate by mapping a higher-order accumulator, one cannot recover the keys. (edit: Also it turns out that the alleged duplicate owner's question is completely different, in that it only deals with dictionaries exactly 2-level deep, though one of the answers on that page gives a general solution.)
If you're using pandas, there is a function hidden in pandas.io.json._normalize [1] called nested_to_record which does exactly this.
from pandas.io.json._normalize import nested_to_record
flat = nested_to_record(my_dict, sep='_')
[1] In pandas versions 0.24.x and older, use pandas.io.json.normalize (without the underscore).
Here is a kind of a "functional", "one-liner" implementation. It is recursive, and based on a conditional expression and a dict comprehension.
def flatten_dict(dd, separator='_', prefix=''):
    return {prefix + separator + k if prefix else k: v
            for kk, vv in dd.items()
            for k, v in flatten_dict(vv, separator, kk).items()
            } if isinstance(dd, dict) else {prefix: dd}
Test:
In [2]: flatten_dict({'abc':123, 'hgf':{'gh':432, 'yu':433}, 'gfd':902, 'xzxzxz':{"432":{'0b0b0b':231}, "43234":1321}}, '.')
Out[2]:
{'abc': 123,
'gfd': 902,
'hgf.gh': 432,
'hgf.yu': 433,
'xzxzxz.432.0b0b0b': 231,
'xzxzxz.43234': 1321}
Not exactly what the OP asked, but lots of folks are coming here looking for ways to flatten real-world nested JSON data which can have nested key-value json objects and arrays and json objects inside the arrays and so on. JSON doesn't include tuples, so we don't have to fret over those.
I found an implementation of the list-inclusion comment by @roneo on the answer posted by @Imran:
https://github.com/ScriptSmith/socialreaper/blob/master/socialreaper/tools.py#L8
from collections.abc import MutableMapping  # collections.MutableMapping on Python < 3.3

def flatten(dictionary, parent_key=False, separator='.'):
    """
    Turn a nested dictionary into a flattened dictionary
    :param dictionary: The dictionary to flatten
    :param parent_key: The string to prepend to dictionary's keys
    :param separator: The string used to separate flattened keys
    :return: A flattened dictionary
    """
    items = []
    for key, value in dictionary.items():
        new_key = str(parent_key) + separator + key if parent_key else key
        if isinstance(value, MutableMapping):
            items.extend(flatten(value, new_key, separator).items())
        elif isinstance(value, list):
            for k, v in enumerate(value):
                items.extend(flatten({str(k): v}, new_key).items())
        else:
            items.append((new_key, value))
    return dict(items)
Test it:
flatten({'a': 1, 'c': {'a': 2, 'b': {'x': 5, 'y' : 10}}, 'd': [1, 2, 3] })
>> {'a': 1, 'c.a': 2, 'c.b.x': 5, 'c.b.y': 10, 'd.0': 1, 'd.1': 2, 'd.2': 3}
And that does the job I need done: I throw any complicated JSON at this and it flattens it out for me.
All credits to https://github.com/ScriptSmith .
Code:
test = {'a': 1, 'c': {'a': 2, 'b': {'x': 5, 'y': 10}}, 'd': [1, 2, 3]}

def parse_dict(init, lkey=''):
    ret = {}
    for rkey, val in init.items():
        key = lkey + rkey
        if isinstance(val, dict):
            ret.update(parse_dict(val, key + '_'))
        else:
            ret[key] = val
    return ret

print(parse_dict(test, ''))
Results:
$ python test.py
{'a': 1, 'c_a': 2, 'c_b_x': 5, 'd': [1, 2, 3], 'c_b_y': 10}
I am using python3.2, update for your version of python.
This is not restricted to dictionaries: it works for every mapping type that implements .items(). Further, it is faster, since it avoids an if condition. Nevertheless credits go to Imran:
def flatten(d, parent_key=''):
    items = []
    for k, v in d.items():
        try:
            items.extend(flatten(v, '%s%s_' % (parent_key, k)).items())
        except AttributeError:
            items.append(('%s%s' % (parent_key, k), v))
    return dict(items)
How about a functional and performant solution in Python3.5?
from functools import reduce

def _reducer(items, key, val, pref):
    if isinstance(val, dict):
        return {**items, **flatten(val, pref + key)}
    else:
        return {**items, pref + key: val}

def flatten(d, pref=''):
    return(reduce(
        lambda new_d, kv: _reducer(new_d, *kv, pref),
        d.items(),
        {}
    ))
This is even more performant:
def flatten(d, pref=''):
    return(reduce(
        lambda new_d, kv:
            isinstance(kv[1], dict) and
            {**new_d, **flatten(kv[1], pref + kv[0])} or
            {**new_d, pref + kv[0]: kv[1]},
        d.items(),
        {}
    ))
In use:
my_obj = {'a': 1, 'c': {'a': 2, 'b': {'x': 5, 'y': 10}}, 'd': [1, 2, 3]}
print(flatten(my_obj))
# {'d': [1, 2, 3], 'cby': 10, 'cbx': 5, 'ca': 2, 'a': 1}
If you are a fan of pythonic oneliners:
import pandas as pd

my_dict = {'a': 1, 'c': {'a': 2, 'b': {'x': 5, 'y': 10}}, 'd': [1, 2, 3]}
list(pd.json_normalize(my_dict).T.to_dict().values())[0]
returns:
{'a': 1, 'c.a': 2, 'c.b.x': 5, 'c.b.y': 10, 'd': [1, 2, 3]}
You can leave the [0] from the end, if you have a list of dictionaries and not just a single dictionary.
My Python 3.3 Solution using generators:
def flattenit(pyobj, keystring=''):
    if type(pyobj) is dict:
        if (type(pyobj) is dict):
            keystring = keystring + "_" if keystring else keystring
            for k in pyobj:
                yield from flattenit(pyobj[k], keystring + k)
        elif (type(pyobj) is list):
            # unreachable: the outer check already guarantees a dict, so lists
            # fall through to the final else below and are kept whole as values
            for lelm in pyobj:
                yield from flattenit(lelm, keystring)
    else:
        yield keystring, pyobj

my_obj = {'a': 1, 'c': {'a': 2, 'b': {'x': 5, 'y': 10}}, 'd': [1, 2, 3]}

# your flattened dictionary object
flattened = {k: v for k, v in flattenit(my_obj)}
print(flattened)
# result: {'c_b_y': 10, 'd': [1, 2, 3], 'c_a': 2, 'a': 1, 'c_b_x': 5}
Utilizing recursion, keeping it simple and human readable:
def flatten_dict(dictionary, accumulator=None, parent_key=None, separator="."):
    if accumulator is None:
        accumulator = {}
    for k, v in dictionary.items():
        k = f"{parent_key}{separator}{k}" if parent_key else k
        if isinstance(v, dict):
            flatten_dict(dictionary=v, accumulator=accumulator, parent_key=k, separator=separator)
            continue
        accumulator[k] = v
    return accumulator
Call is simple:
new_dict = flatten_dict(dictionary)
or
new_dict = flatten_dict(dictionary, separator="_")
if we want to change the default separator.
A little breakdown:
When the function is first called, it is called only passing the dictionary we want to flatten. The accumulator parameter is here to support recursion, which we see later. So, we instantiate accumulator to an empty dictionary where we will put all of the nested values from the original dictionary.
if accumulator is None:
    accumulator = {}
As we iterate over the dictionary's values, we construct a key for every value. The parent_key argument will be None for the first call, while for every nested dictionary, it will contain the key pointing to it, so we prepend that key.
k = f"{parent_key}{separator}{k}" if parent_key else k
In case the value v the key k is pointing to is a dictionary, the function calls itself, passing the nested dictionary, the accumulator (which is passed by reference, so all changes done to it are done on the same instance) and the key k so that we can construct the concatenated key. Notice the continue statement. We want to skip the next line, outside of the if block, so that the nested dictionary doesn't end up in the accumulator under key k.
if isinstance(v, dict):
    flatten_dict(dictionary=v, accumulator=accumulator, parent_key=k, separator=separator)
    continue
So, what do we do in case the value v is not a dictionary? Just put it unchanged inside the accumulator.
accumulator[k] = v
Once we're done we just return the accumulator, leaving the original dictionary argument untouched.
NOTE
This will work only with dictionaries whose keys are strings. It will also run with other hashable keys that have a usable string representation, but may yield unwanted results, since keys are converted to strings when they are joined.
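For example (my own illustration of that caveat), non-string keys are silently converted to strings as soon as they pick up a parent prefix:
print(flatten_dict({1: 'a', 2: {3: 'b'}}))
# {1: 'a', '2.3': 'b'}  -- the nested key becomes the string '2.3',
#                          while the top-level key 1 stays an int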
Simple function to flatten nested dictionaries. For Python 3, replace .iteritems() with .items()
def flatten_dict(init_dict):
    res_dict = {}
    if type(init_dict) is not dict:
        return res_dict
    for k, v in init_dict.iteritems():
        if type(v) == dict:
            res_dict.update(flatten_dict(v))
        else:
            res_dict[k] = v
    return res_dict
The idea/requirement was:
Get flat dictionaries with no keeping parent keys.
Example of usage:
dd = {'a': 3,
      'b': {'c': 4, 'd': 5},
      'e': {'f':
               {'g': 1, 'h': 2}
           },
      'i': 9,
      }
flatten_dict(dd)
>> {'a': 3, 'c': 4, 'd': 5, 'g': 1, 'h': 2, 'i': 9}
Keeping parent keys is simple as well.
I was thinking of a subclass of UserDict to automagically flatten the keys.
from collections import UserDict

class FlatDict(UserDict):
    def __init__(self, *args, separator='.', **kwargs):
        self.separator = separator
        super().__init__(*args, **kwargs)

    def __setitem__(self, key, value):
        if isinstance(value, dict):
            for k1, v1 in FlatDict(value, separator=self.separator).items():
                super().__setitem__(f"{key}{self.separator}{k1}", v1)
        else:
            super().__setitem__(key, value)
The advantage is that keys can be added on the fly, or via standard dict instantiation, without surprise:
>>> fd = FlatDict(
... {
... 'person': {
... 'sexe': 'male',
... 'name': {
... 'first': 'jacques',
... 'last': 'dupond'
... }
... }
... }
... )
>>> fd
{'person.sexe': 'male', 'person.name.first': 'jacques', 'person.name.last': 'dupond'}
>>> fd['person'] = {'name': {'nickname': 'Bob'}}
>>> fd
{'person.sexe': 'male', 'person.name.first': 'jacques', 'person.name.last': 'dupond', 'person.name.nickname': 'Bob'}
>>> fd['person.name'] = {'civility': 'Dr'}
>>> fd
{'person.sexe': 'male', 'person.name.first': 'jacques', 'person.name.last': 'dupond', 'person.name.nickname': 'Bob', 'person.name.civility': 'Dr'}
This is similar to both imran's and ralu's answer. It does not use a generator, but instead employs recursion with a closure:
def flatten_dict(d, separator='_'):
    final = {}

    def _flatten_dict(obj, parent_keys=[]):
        for k, v in obj.iteritems():
            if isinstance(v, dict):
                _flatten_dict(v, parent_keys + [k])
            else:
                key = separator.join(parent_keys + [k])
                final[key] = v

    _flatten_dict(d)
    return final
>>> print flatten_dict({'a': 1, 'c': {'a': 2, 'b': {'x': 5, 'y' : 10}}, 'd': [1, 2, 3]})
{'a': 1, 'c_a': 2, 'c_b_x': 5, 'd': [1, 2, 3], 'c_b_y': 10}
The answers above work really well. Just thought I'd add the unflatten function that I wrote:
def unflatten(d):
    ud = {}
    for k, v in d.items():
        context = ud
        for sub_key in k.split('_')[:-1]:
            if sub_key not in context:
                context[sub_key] = {}
            context = context[sub_key]
        context[k.split('_')[-1]] = v
    return ud
Note: This doesn't account for '_' already present in keys, much like the flatten counterparts.
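A small round-trip sketch (my own example) with the flatten/unflatten pair, assuming the default '_' separator:
nested = {'a': 1, 'c': {'a': 2, 'b': {'x': 5}}}
flat = {'a': 1, 'c_a': 2, 'c_b_x': 5}
print(unflatten(flat))
# {'a': 1, 'c': {'a': 2, 'b': {'x': 5}}}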
Davoud's solution is very nice but doesn't give satisfactory results when the nested dict also contains lists of dicts; his code can be adapted for that case, though:
def flatten_dict(d):
    items = []
    for k, v in d.items():
        try:
            if (type(v) == type([])):
                for l in v:
                    items.extend(flatten_dict(l).items())
            else:
                items.extend(flatten_dict(v).items())
        except AttributeError:
            items.append((k, v))
    return dict(items)
def flatten(unflattened_dict, separator='_'):
    flattened_dict = {}
    for k, v in unflattened_dict.items():
        if isinstance(v, dict):
            sub_flattened_dict = flatten(v, separator)
            for k2, v2 in sub_flattened_dict.items():
                flattened_dict[k + separator + k2] = v2
        else:
            flattened_dict[k] = v
    return flattened_dict
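A quick check of the function above (my own example):
print(flatten({'a': 1, 'c': {'a': 2, 'b': {'x': 5, 'y': 10}}, 'd': [1, 2, 3]}))
# {'a': 1, 'c_a': 2, 'c_b_x': 5, 'c_b_y': 10, 'd': [1, 2, 3]}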
I actually wrote a package called cherrypicker recently to deal with this exact sort of thing since I had to do it so often!
I think the following code would give you exactly what you're after:
from cherrypicker import CherryPicker
dct = {
    'a': 1,
    'c': {
        'a': 2,
        'b': {
            'x': 5,
            'y': 10
        }
    },
    'd': [1, 2, 3]
}
picker = CherryPicker(dct)
picker.flatten().get()
You can install the package with:
pip install cherrypicker
...and there's more docs and guidance at https://cherrypicker.readthedocs.io.
Other methods may be faster, but the priority of this package is to make such tasks easy. If you do have a large list of objects to flatten though, you can also tell CherryPicker to use parallel processing to speed things up.
Here's a solution using a stack. No recursion.
def flatten_nested_dict(nested):
    stack = list(nested.items())
    ans = {}
    while stack:
        key, val = stack.pop()
        if isinstance(val, dict):
            for sub_key, sub_val in val.items():
                stack.append((f"{key}_{sub_key}", sub_val))
        else:
            ans[key] = val
    return ans
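Example usage (my own; note that because the stack is LIFO, the key order differs from the input order):
print(flatten_nested_dict({'a': 1, 'c': {'a': 2, 'b': {'x': 5}}}))
# {'c_b_x': 5, 'c_a': 2, 'a': 1}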
Using generators:
def flat_dic_helper(prepand, d):
    if len(prepand) > 0:
        prepand = prepand + "_"
    for k in d:
        i = d[k]
        if isinstance(i, dict):
            r = flat_dic_helper(prepand + k, i)
            for j in r:
                yield j
        else:
            yield (prepand + k, i)

def flat_dic(d):
    return dict(flat_dic_helper("", d))

d = {'a': 1, 'c': {'a': 2, 'b': {'x': 5, 'y': 10}}, 'd': [1, 2, 3]}
print(flat_dic(d))
>> {'a': 1, 'c_a': 2, 'c_b_x': 5, 'd': [1, 2, 3], 'c_b_y': 10}
Here's an algorithm for elegant, in-place replacement. Tested with Python 2.7 and Python 3.5. Using the dot character as a separator.
def flatten_json(json):
    if type(json) == dict:
        for k, v in list(json.items()):
            if type(v) == dict:
                flatten_json(v)
                json.pop(k)
                for k2, v2 in v.items():
                    json[k + "." + k2] = v2
Example:
d = {'a': {'b': 'c'}}
flatten_json(d)
print(d)
unflatten_json(d)
print(d)
Output:
{'a.b': 'c'}
{'a': {'b': 'c'}}
I published this code here along with the matching unflatten_json function.
If you want to flatten a nested dictionary and collect a list of all unique keys, then here is a solution:
def flat_dict_return_unique_key(data, unique_keys=set()):
    # note: the default set is shared across calls; pass a fresh set()
    # explicitly if you call this more than once
    if isinstance(data, dict):
        [unique_keys.add(i) for i in data.keys()]
        for each_v in data.values():
            if isinstance(each_v, dict):
                flat_dict_return_unique_key(each_v, unique_keys)
    return list(set(unique_keys))
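Example usage (my own; passing a fresh set explicitly avoids the shared-default caveat noted in the code):
sample = {'a': 1, 'c': {'a': 2, 'b': {'x': 5}}}
print(sorted(flat_dict_return_unique_key(sample, set())))
# ['a', 'b', 'c', 'x']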
I always prefer to access dict objects via .items(), so for flattening dicts I use the following recursive generator flat_items(d). If you would like to have a dict again, simply wrap it like this: flat = dict(flat_items(d))
def flat_items(d, key_separator='.'):
    """
    Flattens the dictionary containing other dictionaries like here: https://stackoverflow.com/questions/6027558/flatten-nested-python-dictionaries-compressing-keys

    >>> example = {'a': 1, 'c': {'a': 2, 'b': {'x': 5, 'y': 10}}, 'd': [1, 2, 3]}
    >>> flat = dict(flat_items(example, key_separator='_'))
    >>> assert flat['c_b_y'] == 10
    """
    for k, v in d.items():
        if type(v) is dict:
            for k1, v1 in flat_items(v, key_separator=key_separator):
                yield key_separator.join((k, k1)), v1
        else:
            yield k, v
def flatten_nested_dict(_dict, _str=''):
    '''
    recursive function to flatten a nested dictionary json
    '''
    ret_dict = {}
    for k, v in _dict.items():
        if isinstance(v, dict):
            ret_dict.update(flatten_nested_dict(v, _str='_'.join([_str, k]).strip('_')))
        elif isinstance(v, list):
            for index, item in enumerate(v):
                if isinstance(item, dict):
                    ret_dict.update(flatten_nested_dict(item, _str='_'.join([_str, k, str(index)]).strip('_')))
                else:
                    ret_dict['_'.join([_str, k, str(index)]).strip('_')] = item
        else:
            ret_dict['_'.join([_str, k]).strip('_')] = v
    return ret_dict
Using dict.popitem() in straightforward nested-list-like recursion:
def flatten(d):
    # note: popitem() removes entries as it goes, so this empties the input dict
    if d == {}:
        return d
    else:
        k, v = d.popitem()
        if (dict != type(v)):
            return {k: v, **flatten(d)}
        else:
            flat_kv = flatten(v)
            for k1 in list(flat_kv.keys()):
                flat_kv[k + '_' + k1] = flat_kv[k1]
                del flat_kv[k1]
            return {**flat_kv, **flatten(d)}
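Example (my own): because popitem() consumes entries, the input dict is empty afterwards.
src = {'a': 1, 'c': {'a': 2, 'b': {'x': 5}}}
print(flatten(src))  # {'c_b_x': 5, 'c_a': 2, 'a': 1}  (key order depends on popitem)
print(src)           # {}  -- the original has been emptied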
If you do not mind recursive functions, here is a solution. I have also taken the liberty of including an exclusion parameter in case there are one or more values you wish to keep as-is.
Code:
def flatten_dict(dictionary, exclude=[], delimiter='_'):
    flat_dict = dict()
    for key, value in dictionary.items():
        if isinstance(value, dict) and key not in exclude:
            flatten_value_dict = flatten_dict(value, exclude, delimiter)
            for k, v in flatten_value_dict.items():
                flat_dict[f"{key}{delimiter}{k}"] = v
        else:
            flat_dict[key] = value
    return flat_dict
Usage:
d = {'a':1, 'b':[1, 2], 'c':3, 'd':{'a':4, 'b':{'a':7, 'b':8}, 'c':6}, 'e':{'a':1,'b':2}}
flat_d = flatten_dict(dictionary=d, exclude=['e'], delimiter='.')
print(flat_d)
Output:
{'a': 1, 'b': [1, 2], 'c': 3, 'd.a': 4, 'd.b.a': 7, 'd.b.b': 8, 'd.c': 6, 'e': {'a': 1, 'b': 2}}
A variation of this Flatten nested dictionaries, compressing keys answer, with max_level and a custom reducer.
from functools import reduce

def flatten(d, max_level=None, reducer='tuple'):
    if reducer == 'tuple':
        reducer_seed = tuple()
        reducer_func = lambda x, y: (*x, y)
    else:
        raise ValueError(f'Unknown reducer: {reducer}')

    def impl(d, pref, level):
        return reduce(
            lambda new_d, kv:
                (max_level is None or level < max_level)
                and isinstance(kv[1], dict)
                and {**new_d, **impl(kv[1], reducer_func(pref, kv[0]), level + 1)}
                or {**new_d, reducer_func(pref, kv[0]): kv[1]},
            d.items(),
            {}
        )

    return impl(d, reducer_seed, 0)
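Example usage (my own sketch) with the default tuple reducer and a max_level cap:
d = {'a': 1, 'c': {'a': 2, 'b': {'x': 5}}}
print(flatten(d))
# {('a',): 1, ('c', 'a'): 2, ('c', 'b', 'x'): 5}
print(flatten(d, max_level=1))
# {('a',): 1, ('c', 'a'): 2, ('c', 'b'): {'x': 5}}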
I tried some of the solutions on this page - though not all - but those I tried failed to handle the nested list of dict.
Consider a dict like this:
d = {
    'owner': {
        'name': {'first_name': 'Steven', 'last_name': 'Smith'},
        'lottery_nums': [1, 2, 3, 'four', '11', None],
        'address': {},
        'tuple': (1, 2, 'three'),
        'tuple_with_dict': (1, 2, 'three', {'is_valid': False}),
        'set': {1, 2, 3, 4, 'five'},
        'children': [
            {'name': {'first_name': 'Jessica',
                      'last_name': 'Smith', },
             'children': []
             },
            {'name': {'first_name': 'George',
                      'last_name': 'Smith'},
             'children': []
             }
        ]
    }
}
Here's my makeshift solution:
def flatten_dict(input_node: dict, key_: str = '', output_dict: dict = {}):
    # caution: the default output_dict is shared between top-level calls
    if isinstance(input_node, dict):
        for key, val in input_node.items():
            new_key = f"{key_}.{key}" if key_ else f"{key}"
            flatten_dict(val, new_key, output_dict)
    elif isinstance(input_node, list):
        for idx, item in enumerate(input_node):
            flatten_dict(item, f"{key_}.{idx}", output_dict)
    else:
        output_dict[key_] = input_node
    return output_dict
which produces:
{
owner.name.first_name: Steven,
owner.name.last_name: Smith,
owner.lottery_nums.0: 1,
owner.lottery_nums.1: 2,
owner.lottery_nums.2: 3,
owner.lottery_nums.3: four,
owner.lottery_nums.4: 11,
owner.lottery_nums.5: None,
owner.tuple: (1, 2, 'three'),
owner.tuple_with_dict: (1, 2, 'three', {'is_valid': False}),
owner.set: {1, 2, 3, 4, 'five'},
owner.children.0.name.first_name: Jessica,
owner.children.0.name.last_name: Smith,
owner.children.1.name.first_name: George,
owner.children.1.name.last_name: Smith,
}
A makeshift solution and it's not perfect.
NOTE:
it doesn't keep empty dicts such as the address: {} k/v pair.
it won't flatten dicts in nested tuples - though that would be easy to add, since Python tuples act similar to lists; a sketch follows below.
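As a hedged sketch of that last point (my own adaptation, not part of the original answer): expanding tuples like lists is a one-line change to the branch test, and making the accumulator default per-call avoids shared state between top-level calls.
def flatten_dict_with_tuples(input_node, key_='', output_dict=None):
    # same approach as above, but tuples are expanded like lists
    if output_dict is None:
        output_dict = {}
    if isinstance(input_node, dict):
        for key, val in input_node.items():
            new_key = f"{key_}.{key}" if key_ else f"{key}"
            flatten_dict_with_tuples(val, new_key, output_dict)
    elif isinstance(input_node, (list, tuple)):
        for idx, item in enumerate(input_node):
            flatten_dict_with_tuples(item, f"{key_}.{idx}", output_dict)
    else:
        output_dict[key_] = input_node
    return output_dict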
You can use recursion in order to flatten your dictionary.
import collections

def flatten(
    nested_dict,
    seperator='.',
    name=None,
):
    flatten_dict = {}
    if not nested_dict:
        return flatten_dict

    if isinstance(
        nested_dict,
        collections.abc.MutableMapping,
    ):
        for key, value in nested_dict.items():
            if name is not None:
                flatten_dict.update(
                    flatten(
                        nested_dict=value,
                        seperator=seperator,
                        name=f'{name}{seperator}{key}',
                    ),
                )
            else:
                flatten_dict.update(
                    flatten(
                        nested_dict=value,
                        seperator=seperator,
                        name=key,
                    ),
                )
    else:
        flatten_dict[name] = nested_dict

    return flatten_dict


if __name__ == '__main__':
    nested_dict = {
        1: 'a',
        2: {
            3: 'c',
            4: {
                5: 'e',
            },
            6: [1, 2, 3, 4, 5, ],
        },
    }
    print(
        flatten(
            nested_dict=nested_dict,
        ),
    )
Output:
{
"1":"a",
"2.3":"c",
"2.4.5":"e",
"2.6":[1, 2, 3, 4, 5]
}

copy part of one dict to a new dict based on a list of keys

Sample:
d = {
    "test": 1,
    "sample": 2,
    "example": 3,
    "product": 4,
    "software": 5,
    "demo": 6,
}
filter_keys = ["test", "sample", "example", "demo"]
I want to create a new dict that contains only those items from the first dict whose keys appear in the list. In other words, I want:
d2 = {
    "test": 1,
    "sample": 2,
    "example": 3,
    "demo": 6,
}
I could do it with a loop:
d2 = {}
for k in d.keys():
    if (k in filter_keys):
        d2[k] = d[k]
But this seems awfully "un-Pythonic". I'm also guessing that if you had a huge dict, say 5,000 items or so, the constant adding of new items to the new dict would be slow compared to a more direct way.
Also, you'd want to be able to handle errors. If the list contains something that's not a key in the dict, it should just be ignored. Or maybe it gets added to the new dict but with a value of None.
Is there a better way to accomplish this?
A straight-forward way to do this is with the "dictionary comprehension":
filtered_dict = {key: value for key, value in d.items() if key in filter_keys}
Note that when the condition appears at the end of the comprehension, it filters which items the loop produces. Depending on whether the number of keys in the dictionary is greater than the number of keys you want to filter on, this revision could be more efficient:
filtered_dict = {key: d[key] for key in filter_keys if key in d}
Checking for membership in the dictionary (key in d) is significantly faster than checking for membership in the filter key list (key in filter_keys). But which ends up faster depends on the size of the filter key list (and, to a lesser extent, the size of the dictionary).
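One further option (my own sketch, using the d and filter_keys names from the question): convert the key list to a set once, so the membership test is O(1) on average no matter which collection you iterate over.
filter_key_set = set(filter_keys)
filtered_dict = {key: value for key, value in d.items() if key in filter_key_set}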
A relatively Pythonic way to do it without a dictionary comprehension is with the dict constructor:
filtered_dict = dict([(key, value) for key, value in d.items() if key in filter_keys])
Note that this is essentially equivalent to the dictionary comprehension, but may be clearer if you aren't familiar with dictionary comprehension syntax.
Dictionary comprehension is one way to do it:
new_d = {k: v for k, v in d.items() if k in l}
Demo:
>>> d = {
... "test": 1,
... "sample": 2,
... "example": 3,
... "product": 4,
... "software": 5,
... "demo": 6,
... }
>>>
>>> l = ["test","sample","example","demo"]
>>> new_d = {k: v for k, v in d.items() if k in l}
>>> new_d
{'sample': 2, 'demo': 6, 'test': 1, 'example': 3}
For optimal performance, you should iterate over the keys in the list and check if they are in the dict rather than the other way around:
d2 = {}
for k in list_of_keys:
    if k in d:
        d2[k] = d[k]
The benefit here is that the dict.__contains__ (in) on a dict is O(1) whereas for the list it's O(N). For big lists, that's a HUGE benefit (O(N) algorithm vs. O(N^2)).
We can be a little more succinct by expressing the above loop with an equivalent dict-comprehension:
d2 = {k: d[k] for k in list_of_keys if k in d}
This will be likely be marginally faster than the loop, but probably not enough to ever worry about. That said, most python programmers would prefer this version as it is more succinct and very common.
As per your last part of the question:
Or maybe it gets added to the new dict but with a value of None.
l = ["test","sample","example","demo","badkey"]
d = {
    "test": 1,
    "sample": 2,
    "example": 3,
    "product": 4,
    "software": 5,
    "demo": 6,
}
print {k: d.get(k) for k in l}
{'test': 1, 'sample': 2, 'badkey': None, 'example': 3, 'demo': 6}
You can pass a default return value to dict.get; it is None by default, but you could set it to d.get(k, "No_match") etc., or whatever value you want.
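For example (my own sketch, reusing the l and d names above; output order shown for Python 3):
print({k: d.get(k, "No_match") for k in l})
# {'test': 1, 'sample': 2, 'example': 3, 'demo': 6, 'badkey': 'No_match'}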

Dictionary comprehension for swapping keys/values in a dict with multiple equal values

def invert_dict(d):
    inv = dict()
    for key in d:
        val = d[key]
        if val not in inv:
            inv[val] = [key]
        else:
            inv[val].append(key)
    return inv
This is an example from the Think Python book: a function for inverting (swapping) keys and values in a dictionary. New values (former keys) are stored as lists, so if there were multiple dictionary values (bound to different keys) that were equal before inverting, this function simply appends them to the list of former keys.
Example:
somedict = {'one': 1, 'two': 2, 'doubletwo': 2, 'three': 3}
invert_dict(somedict) ---> {1: ['one'], 2: ['doubletwo', 'two'], 3: ['three']}
My question is: can the same be done with a dictionary comprehension? This function creates an empty dict inv = dict(), which is then checked later in the function with if/else for the presence of values. A dict comprehension, in this case, would have to check itself. Is that possible, and what would the syntax look like?
General dict comprehension syntax for swapping values is:
{value:key for key, value in somedict.items()}
but if I want to add an 'if' clause, what should it look like? if value not in (what)?
Thanks.
I don't think it's possible with simple dict comprehension without using other functions.
Following code uses itertools.groupby to group keys that have same values.
>>> import itertools
>>> {k: [x[1] for x in grp]
...  for k, grp in itertools.groupby(
...      sorted((v, k) for k, v in somedict.iteritems()),
...      key=lambda x: x[0])
... }
{1: ['one'], 2: ['doubletwo', 'two'], 3: ['three']}
You can use a set comprehension side effect:
somedict = {'one': 1, 'two': 2, 'doubletwo': 2, 'three': 3}
invert_dict={}
{invert_dict.setdefault(v, []).append(k) for k, v in somedict.items()}
print invert_dict
# {1: ['one'], 2: ['doubletwo', 'two'], 3: ['three']}
Here is a good answer:
fts = {1:1,2:1,3:2,4:1}
new_dict = {dest: [k for k, v in fts.items() if v == dest] for dest in set(fts.values())}
Reference: Head First Python ,2nd Edition, Page(502)
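Running it (my own check; note the inner list comprehension rescans fts.items() once per distinct value, which is fine for small dicts):
fts = {1: 1, 2: 1, 3: 2, 4: 1}
print({dest: [k for k, v in fts.items() if v == dest] for dest in set(fts.values())})
# {1: [1, 2, 4], 2: [3]}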

How do I exchange keys with values in a dictionary? [duplicate]

This question already has answers here:
Reverse / invert a dictionary mapping
(32 answers)
Closed 10 months ago.
I receive a dictionary as input, and would like to return a dictionary whose keys will be the input's values and whose values will be the corresponding input keys. Values are unique.
For example, say my input is:
a = dict()
a['one']=1
a['two']=2
I would like my output to be:
{1: 'one', 2: 'two'}
To clarify I would like my result to be the equivalent of the following:
res = dict()
res[1] = 'one'
res[2] = 'two'
Any neat Pythonic way to achieve this?
Python 2:
res = dict((v,k) for k,v in a.iteritems())
Python 3 (thanks to #erik):
res = dict((v,k) for k,v in a.items())
new_dict = dict(zip(my_dict.values(), my_dict.keys()))
From Python 2.7 on, including 3.0+, there's an arguably shorter, more readable version:
>>> my_dict = {'x':1, 'y':2, 'z':3}
>>> {v: k for k, v in my_dict.items()}
{1: 'x', 2: 'y', 3: 'z'}
You can make use of dict comprehensions:
Python 3
res = {v: k for k, v in a.items()}
Python 2
res = {v: k for k, v in a.iteritems()}
Edited: For Python 3, use a.items() instead of a.iteritems(). Discussions about the differences between them can be found in iteritems in Python on SO.
In [1]: my_dict = {'x':1, 'y':2, 'z':3}
Python 3
In [2]: dict((value, key) for key, value in my_dict.items())
Out[2]: {1: 'x', 2: 'y', 3: 'z'}
Python 2
In [2]: dict((value, key) for key, value in my_dict.iteritems())
Out[2]: {1: 'x', 2: 'y', 3: 'z'}
The current leading answer assumes values are unique, which is not always the case. What if values are not unique? You will lose information!
For example:
d = {'a':3, 'b': 2, 'c': 2}
{v:k for k,v in d.iteritems()}
returns {2: 'b', 3: 'a'}.
The information about 'c' was completely ignored.
Ideally it should be something like {2: ['b','c'], 3: ['a']}. This is what the implementation below does.
Python 2.x
def reverse_non_unique_mapping(d):
    dinv = {}
    for k, v in d.iteritems():
        if v in dinv:
            dinv[v].append(k)
        else:
            dinv[v] = [k]
    return dinv
Python 3.x
def reverse_non_unique_mapping(d):
    dinv = {}
    for k, v in d.items():
        if v in dinv:
            dinv[v].append(k)
        else:
            dinv[v] = [k]
    return dinv
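Example run (my own, using the d from above):
d = {'a': 3, 'b': 2, 'c': 2}
print(reverse_non_unique_mapping(d))
# {3: ['a'], 2: ['b', 'c']}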
You could try:
Python 3
d={'one':1,'two':2}
d2=dict((value,key) for key,value in d.items())
d2
{'two': 2, 'one': 1}
Python 2
d={'one':1,'two':2}
d2=dict((value,key) for key,value in d.iteritems())
d2
{'two': 2, 'one': 1}
Beware that you cannot 'reverse' a dictionary if
More than one key shares the same value. For example {'one':1,'two':1}. The new dictionary can only have one item with key 1.
One or more of the values is unhashable. For example {'one':[1]}. [1] is a valid value but not a valid key.
See this thread on the python mailing list for a discussion on the subject.
res = dict(zip(a.values(), a.keys()))
new_dict = dict( (my_dict[k], k) for k in my_dict)
or even better, but requires Python 2.7 or newer:
new_dict = { my_dict[k]: k for k in my_dict}
Another way to expand on Ilya Prokin's response is to actually use the reversed function.
dict(map(reversed, my_dict.items()))
In essence, your dictionary is iterated through (using .items()) where each item is a key/value pair, and those items are swapped with the reversed function. When this is passed to the dict constructor, it turns them into value/key pairs which is what you want.
A suggested improvement to Javier's answer:
dict(zip(d.values(),d))
Instead of d.keys() you can write just d, because iterating over a dictionary yields its keys.
Example of this behavior:
>>> d = {'a': 1, 'b': 2}
>>> for k in d:
...     print(k)
...
a
b
Can be done easily with dictionary comprehension:
{d[i]:i for i in d}
dict(map(lambda x: x[::-1], YourDict.items()))
.items() returns an iterable of (key, value) tuples. map() goes through the elements and applies lambda x: x[::-1] to each element (a tuple) to reverse it, so each tuple becomes (value, key) in the new sequence spit out of map. Finally, dict() makes a dict from it, which is what you want.
Hanan's answer is the correct one as it covers the more general case (the other answers are kind of misleading for someone unaware of the duplicate-values situation). An improvement to Hanan's answer is using setdefault:
mydict = {1: 'a', 2: 'a', 3: 'b'}
result = {}
for i in mydict:
    result.setdefault(mydict[i], []).append(i)
print(result)
>>> result = {'a': [1, 2], 'b': [3]}
Using a loop:
newdict = {}  # will contain the reversed key:value pairs
for key, value in zip(my_dict.keys(), my_dict.values()):
    # operations on key/value can also be performed here
    newdict[value] = key
If you're using Python3, it's slightly different:
res = dict((v,k) for k,v in a.items())
Adding an in-place solution:
>>> d = {1: 'one', 2: 'two', 3: 'three', 4: 'four'}
>>> for k in list(d.keys()):
... d[d.pop(k)] = k
...
>>> d
{'two': 2, 'one': 1, 'four': 4, 'three': 3}
In Python3, it is critical that you use list(d.keys()) because dict.keys returns a view of the keys. If you are using Python2, d.keys() is enough.
I find this version the most comprehensible one:
a = {1: 'one', 2: 'two'}
swapped_a = {value : key for key, value in a.items()}
print(swapped_a)
output :
{'one': 1, 'two': 2}
An alternative that is not quite as readable (in my opinion) as some of the other answers:
new_dict = dict(zip(*list(zip(*old_dict.items()))[::-1]))
where list(zip(*old_dict.items()))[::-1] gives a list of 2 tuples, old_dict's values and keys, respectively.
