I have to update a nested JSON object.
If I knew the specifics of which items were to be updated, I could do:
json_object['basket']['items']['apple'] = 'new value'
However, my list of elements to target is dynamic.
> basket.items.apple = 'green'
> name = 'my shopping'
> basket.cost = '15.43'
I could do this by looping through elements.
Find 'basket' > then find 'items' > then find 'apple' > set value
Find 'name' > set value
However, I was hoping there was a way to just reference the element directly/dynamically,
i.e. from a string 'basket.cost', build the expression:
json_object['basket']['cost']
P.S. It has to cope with lists of dictionaries too!
Any guidance appreciated :)
Once you have the string "basket.cost", you can split it on "." and it's pretty easy to drill down into json_object['basket']['cost'] using a loop. Functionally, there is no difference between doing this and doing it "directly": you are still getting the 'basket' key first, and then getting the 'cost' key from the value of json_object['basket'].
def get_element(d, path):
    # This function can take the string "basket.cost", or the list ["basket", "cost"]
    if isinstance(path, str):
        path = path.split(".")
    for p in path:
        d = d[p]
    return d

def set_element(d, path, value):
    path = path.split(".")
    dict_to_set = get_element(d, path[:-1])
    key_to_set = path[-1]
    dict_to_set[key_to_set] = value
set_element(json_object, "basket.items.apple", 100)
Now, this assumes all elements of your path already exist, so let's say you create a dictionary that looks like so:
json_object = {"basket": {"items": dict()}}
set_element(json_object, "basket.items.apple", 100)
set_element(json_object, "basket.cost", 10)
print(json_object)
# Output: {'basket': {'items': {'apple': 100}, 'cost': 10}}
print(get_element(json_object, "basket.cost"))
# Output: 10
If you try to access an element that doesn't already exist, you get a KeyError:
get_element(json_object, "basket.date")
# KeyError: 'date'
This also happens if you try to set a value in an element that doesn't exist:
set_element(json_object, "basket.date.day", 1)
# KeyError: 'date'
If we want to allow your function to create the dictionaries when they don't exist, we can modify the get_element function to account for this situation and add the key:
def get_element(d, path, create_missing=False):
    # This function can take the string "basket.cost", or an iterable containing the elements "basket" and "cost"
    if isinstance(path, str):
        path = path.split(".")
    for p in path:
        if create_missing and p not in d:
            d[p] = dict()
        d = d[p]
    return d

def set_element(d, path, value, create_missing=True):
    path = path.split(".")
    dict_to_set = get_element(d, path[:-1], create_missing)
    key_to_set = path[-1]
    dict_to_set[key_to_set] = value
set_element(json_object, "basket.date.day", 1)
print(json_object)
# Output: {'basket': {'items': {'apple': 100}, 'cost': 10, 'date': {'day': 1}}}
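Since the question also asks about lists of dictionaries, here is one possible extension of get_element (my own variation, not part of the answer above): whenever the current container is a list, interpret the path segment as an integer index.
def get_element(d, path, create_missing=False):
    if isinstance(path, str):
        path = path.split(".")
    for p in path:
        if isinstance(d, list):
            d = d[int(p)]  # treat the segment as a list index
        else:
            if create_missing and p not in d:
                d[p] = dict()
            d = d[p]
    return d

json_object = {"basket": {"items": [{"name": "apple"}]}}
print(get_element(json_object, "basket.items.0.name"))
# Output: apple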
If using a third-party package is an option, you can try python-box. It comes with many options and utilities, including loading from JSON and YAML files. The implementation is optimized for speed using Cython.
from box import Box
test_data = {
    "basket": {
        "products": [
            {"name": "apple", "colour": "green"}
        ],
    }
}
a = Box(test_data)
a.basket.cost = 12.3
a.basket.products[0].colour = "pink"
a.basket.products.append({"name": "pineapple", "taste": "sweet"})
print(a.basket.products[1].taste)
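As a side note (my addition, so verify against the python-box documentation): newer versions of Box can also be constructed with box_dots=True, which accepts dotted string paths directly, much like the question asks for:
b = Box(test_data, box_dots=True)  # assumes a Box version that supports box_dots
b["basket.cost"] = 12.3            # dotted string path, as in the question
print(b["basket.cost"])            # 12.3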
You can get exactly what you want by overloading some Python magic methods: __getattr__ and __setattr__. I'll show an example of the API to whet the appetite and then the full code:
test_data = {'basket': {'items': [{'name': 'apple', 'colour': 'green'},
                                  {'name': 'pineapple', 'taste': 'sweet'},
                                  ],
                        'cost': 12.3,
                        },
             'name': 'Other'}
o = wrap(test_data) # This wraps with the correct class, depending if it is a dict or a list
print(o.name) # Prints 'Other'
print(o.basket.items) # Prints the list of items
print(o.basket.cost) # Prints 12.3
o.basket.cost = 10.0 # Changes the cost
assert o.basket.cost == 10.0
assert len(o) == 2
assert len(o.basket.items) == 2
o.basket.items.append({'name': 'orange'})
o.basket.items[2].colour = 'yellow' # It works with lists!
assert o.basket.items[2].name == 'orange'
assert o.basket.items[2].colour == 'yellow'
# You can get a part of it and it holds a reference to the original
b = o.basket
b.type = 'groceries'
assert o.basket.type == 'groceries'
# It is also possible to create a separate wrapped part and then join:
employees = wrap({})
employees.Clara.id = 101
employees.Clara.age = 23
employees.Lucia.id = 102
employees.Lucia.age = 29
o.employees = employees
The implementation is based on special wrapper classes, one for dicts, another for lists. They all inherit from a base class. Note that we need super().__setattr__ instead of a plain self._data assignment because we will override the __getattr__ and __setattr__ methods to look for the data inside _data; plain assignment would recurse infinitely when trying to define _data.
from collections.abc import MutableSequence

class BaseWrapper:
    __slots__ = ('_data',)

    def __init__(self, data):
        super().__setattr__('_data', data)

    def __repr__(self):
        return f'{self.__class__.__name__}({repr(self._data)})'
The wrapper for dictionaries is the most interesting: it uses __getattr__ to look for a key in the wrapped dictionary. This allows for a very natural API: if o is a wrapped dictionary, o.entry will give the same result as o['entry']. Most of the code should be self-explanatory, there are only two tricks: the first is that __getattr__ checks if the output is a dict or list and wraps it. This allows for chaining of calls like o.basket.cost. The downside is that a new wrapper is created every call. The second trick is when setting an attribute: it checks if what is being set is a wrapped instance and un-wraps it. Thus, wrapped dictionaries can be combined and the underlying dictionary is always "clean".
class MappingWrapper(BaseWrapper):
    """Wraps a dictionary and provides the keys of the dictionary as class members.
    Creates new keys when they do not exist."""

    def __getattr__(self, name):
        # Note: these two lines allow automatic creation of attributes, e.g. in an
        # object 'obj' that doesn't have an attribute 'car', the following is possible:
        # >> o.car.colour = 'blue'
        # and all the missing levels will be created automatically
        if name not in self._data and not name.startswith('_'):
            self._data[name] = {}
        return wrap(self._data[name])

    def __setattr__(self, name, value):
        self._data[name] = unwrap(value)

    # Implements standard dictionary access
    def __getitem__(self, name):
        return wrap(self._data[name])

    def __setitem__(self, name, value):
        self._data[name] = unwrap(value)

    def __delitem__(self, name):
        del self._data[name]

    def __len__(self):
        return len(self._data)
The list wrapper is simpler, no need to mess around with attribute access. The only special care we have to take is to wrap and unwrap the list elements when one is requested/set. Note that, just like with the dictionary wrapper, the same wrap and unwrap functions are used (in __getitem__/__setitem__/insert).
class ListWrapper(BaseWrapper, MutableSequence):
    """Wraps a list. Essentially, provides wrapping of the elements of the list."""

    def __getitem__(self, idx):
        return wrap(self._data[idx])

    def __setitem__(self, idx, value):
        self._data[idx] = unwrap(value)

    def __delitem__(self, idx):
        del self._data[idx]

    def __len__(self):
        return len(self._data)

    def insert(self, index, obj):
        self._data.insert(index, unwrap(obj))
Finally, the definition of wrap, which just selects the correct wrapper based on the type of the input, and unwrap, which extracts the raw data:
def wrap(obj):
    if isinstance(obj, dict):
        return MappingWrapper(obj)
    if isinstance(obj, list):
        return ListWrapper(obj)
    return obj

def unwrap(obj):
    if isinstance(obj, BaseWrapper):
        return obj._data
    return obj
The full code can be found in this gist.
An important caveat: to keep the implementation simple, wrapper objects are created on every access, so using this method inside large loops may cause performance issues (by my measurements, this style of access is between 12 and 30 times slower).
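For the curious, here is roughly how such a measurement could be reproduced with timeit, assuming the wrap function from this answer is in scope (absolute numbers will vary by machine):
import timeit

raw = {'basket': {'cost': 12.3}}
wrapped = wrap(raw)

t_raw = timeit.timeit(lambda: raw['basket']['cost'], number=100000)
t_wrapped = timeit.timeit(lambda: wrapped.basket.cost, number=100000)
print(t_wrapped / t_raw)  # slowdown factor of the wrapper-based access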
I'm going to assume that you already know how to handle the errors that will probably come up with this kind of nested collection access, so I won't focus on them in my approach.
I would split this in two parts:
Traversing a nested collection according to a list of keys for each level
Getting a list of keys out of a string
The first one is quite trivial: as you said, simply looping through the keys gets you to the collection element in question. A simple implementation could look something like this:
def get_nested(collection, key):
    for part in key:
        collection = collection[part]
    return collection

def set_nested(collection, key, value):
    for part in key[:-1]:
        collection = collection[part]
    collection[key[-1]] = value
Here the key is expected to be some iterable of keys, such as a tuple or list.
Of course that means there is an expectation that your string representing a path along the collection is already parsed. We can get to that next.
This step would also be very trivial, since one could simply use expression.split("."). However, since you also want to be able to index nested lists along with dicts, it gets a little more complicated.
There is a tradeoff to be made here. One could simply say: "any time one of the items in expression.split(".") can be parsed as an int, we do just that, and assume it was meant as an index into a list". However, that isn't necessarily the case: there is nothing preventing you from using a number in string form as a key in a dict. If you think that is never going to happen for you, you can just call it like this:
set_nested(
    collection,
    [int(part) if part.isdigit() else part for part in expression.split(".")],
    "target value",
)
(or, of course, wrap it in another function, as sketched below).
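For instance, such a wrapper (hypothetical name, building on the list-based set_nested above) could look like:
def set_nested_str(collection, expression, value):
    # Convenience wrapper: parse the dotted string, then delegate.
    parts = [int(p) if p.isdigit() else p for p in expression.split(".")]
    set_nested(collection, parts, value)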
However, if the possibility of digit keys in dicts is important for you, there is another solution:
Whenever traversing the nested collection downward, we check whether the collection we are currently looking at is a list. Only if it is a list do we actually try to parse the path part as an int.
This would be the respective set_nested and get_nested functions for that:
def get_nested(collection, key: str):
    for part in key.split("."):
        if isinstance(collection, list):
            part = int(part)
        collection = collection[part]
    return collection

def set_nested(collection, key: str, val):
    key = key.split(".")
    for i, part in enumerate(key):
        if isinstance(collection, list):
            part = int(part)
        if i == len(key) - 1:
            collection[part] = val
        else:
            collection = collection[part]
I believe that's the simplest solution to your problem, though of course it's important to keep in mind:
There is no error handling in this code, and indexing on dynamic paths is a topic where you are bound to run into errors. Depending on where and how you want to handle those it's going to be easy or very tedious.
There is no checking for setting values in dicts that don't exist yet, nor for expanding arrays to a specific size, but since you didn't mention those as a requirement I'm presuming it's not an issue. It might be for others reading this.
This is tricky and I would discourage it unless necessary, as it is an easy thing to design and implement badly.
First: it's easy to split on path separator and follow the object tree to the desired key.
But after a while questions start to appear. E.g.: what separator do we split on?
A slash? It can appear in a JSON dictionary key... A dot? Same.
We'll need to either restrict the legal / handled paths or implement some kind of escaping mechanism.
How do we handle empty strings?
Another goal: handle lists... OK. So how do we interpret the path a.0? Is it ['a'][0] or ['a']['0']?
It seems that we'll have to complicate the language or drop the requirement.
So, in general -- I'd avoid it. That said, here's a quick implementation whose design choices may or may not satisfy you:
there's basic backslash escaping of path separator
empty string is allowed as a key
lists are not handled due to ambiguity
def deep_set(root: dict, path: str, value):
    segments = [*iter_segments(path, '.')]
    for k in segments[:-1]:
        root = root[k]
    root[segments[-1]] = value

def iter_segments(path: str, separator: str = '.'):
    segment = ''
    path_iter = iter(path)
    while True:
        c = next(path_iter, '')
        if c in (separator, ''):
            yield segment
            segment = ''
            if c == '':
                break
            continue
        elif c == '\\':
            # basic escaping: a backslash makes the next character literal
            c = next(path_iter, '')
        segment += c
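A quick usage sketch, assuming the two functions above (note the escaped dot addressing a key that contains a literal dot):
d = {'a': {'b.c': {}}}
deep_set(d, 'a.b\\.c.key', 1)
print(d)  # {'a': {'b.c': {'key': 1}}}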
The addition of collections.defaultdict in Python 2.5 greatly reduced the need for dict's setdefault method. This question is for our collective education:
What is setdefault still useful for, today in Python 2.6/2.7?
What popular use cases of setdefault were superseded with collections.defaultdict?
You could say defaultdict is useful for settings defaults before filling the dict and setdefault is useful for setting defaults while or after filling the dict.
Probably the most common use case: Grouping items (in unsorted data, else use itertools.groupby)
# really verbose
new = {}
for (key, value) in data:
    if key in new:
        new[key].append(value)
    else:
        new[key] = [value]

# easy with setdefault
new = {}
for (key, value) in data:
    group = new.setdefault(key, [])  # key might exist already
    group.append(value)

# even simpler with defaultdict
from collections import defaultdict
new = defaultdict(list)
for (key, value) in data:
    new[key].append(value)  # all keys have a default already
Sometimes you want to make sure that specific keys exist after creating a dict. defaultdict doesn't work here, because it only creates keys on explicit access. Say you use something HTTP-ish with many headers -- some are optional, but you want defaults for them:
headers = parse_headers(msg)  # parse the message, get a dict

# now add all the optional headers
for headername, defaultvalue in optional_headers:
    headers.setdefault(headername, defaultvalue)
I commonly use setdefault for keyword argument dicts, such as in this function:
def notify(self, level, *pargs, **kwargs):
    kwargs.setdefault("persist", level >= DANGER)
    self.__defcon.set(level, **kwargs)
    try:
        kwargs.setdefault("name", self.client.player_entity().name)
    except pytibia.PlayerEntityNotFound:
        pass
    return _notify(level, *pargs, **kwargs)
It's great for tweaking arguments in wrappers around functions that take keyword arguments.
defaultdict is great when the default value is static, like a new list, but not so much if it's dynamic.
For example, I need a dictionary to map strings to unique ints. defaultdict(int) will always use 0 for the default value, which doesn't give unique values.
Instead, I used a regular dict:
nextID = intGen()
myDict = {}
for myStr in stream_of_strings():  # pseudocode: unpredictable, possibly already-seen strings
    strID = myDict.setdefault(myStr, nextID())
Note that dict.get(key, nextID()) is insufficient because I need to be able to refer to these values later as well.
intGen is a tiny class I build that automatically increments an int and returns its value:
class intGen:
def __init__(self):
self.i = 0
def __call__(self):
self.i += 1
return self.i
If someone has a way to do this with defaultdict I'd love to see it.
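For what it's worth (my suggestion, not part of the original answer): since default_factory is invoked only for missing keys, a stateful callable such as itertools.count does exactly this, and unlike the setdefault version it does not burn an ID on keys that already exist:
from collections import defaultdict
from itertools import count

my_dict = defaultdict(count(1).__next__)
my_dict['a']
my_dict['b']
my_dict['a']
print(dict(my_dict))  # {'a': 1, 'b': 2}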
As most answers state, setdefault or defaultdict will let you set a default value when a key doesn't exist. However, I would like to point out a small caveat with regard to the use cases of setdefault: when the Python interpreter executes setdefault, it will always evaluate the second argument to the function, even if the key exists in the dictionary. For example:
In: d = {1:5, 2:6}
In: d
Out: {1: 5, 2: 6}
In: d.setdefault(2, 0)
Out: 6
In: d.setdefault(2, print('test'))
test
Out: 6
As you can see, print was also executed even though 2 already existed in the dictionary. This becomes particularly important if you are planning to use setdefault for an optimization like memoization: if you add a recursive function call as the second argument, you won't get any performance benefit, because Python will always evaluate the call.
Since memoization was mentioned: a better alternative is the functools.lru_cache decorator if you are considering enhancing a function with memoization; lru_cache handles the caching requirements of a recursive function better.
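A minimal illustration of that suggestion:
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # returns instantly thanks to the cache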
I use setdefault() when I want a default value in an OrderedDict. There isn't a standard Python collection that does both, but there are ways to implement such a collection.
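One way to implement such a collection (a common recipe, sketched here from memory, so treat it as a starting point) is to combine OrderedDict with __missing__:
from collections import OrderedDict

class OrderedDefaultDict(OrderedDict):
    def __init__(self, default_factory=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.default_factory = default_factory

    def __missing__(self, key):
        if self.default_factory is None:
            raise KeyError(key)
        self[key] = value = self.default_factory()
        return value

odd = OrderedDefaultDict(list)
odd['b'].append(1)
odd['a'].append(2)
print(list(odd.items()))  # [('b', [1]), ('a', [2])]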
As Muhammad said, there are situations in which you only sometimes wish to set a default value. A great example of this is a data structure which is first populated, then queried.
Consider a trie. When adding a word, if a subnode is needed but not present, it must be created to extend the trie. When querying for the presence of a word, a missing subnode indicates that the word is not present and it should not be created.
A defaultdict cannot do this. Instead, a regular dict with the get and setdefault methods must be used.
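A minimal sketch of that pattern (the names and the '$' end-of-word marker are my own):
def add_word(trie, word):
    node = trie
    for ch in word:
        node = node.setdefault(ch, {})  # create missing subnodes while adding
    node['$'] = True                    # mark the end of a complete word

def has_word(trie, word):
    node = trie
    for ch in word:
        node = node.get(ch)  # never create subnodes while querying
        if node is None:
            return False
    return '$' in node

trie = {}
add_word(trie, 'cat')
print(has_word(trie, 'cat'), has_word(trie, 'car'))  # True False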
Theoretically speaking, setdefault would still be handy if you sometimes want to set a default and sometimes not. In real life, I haven't come across such a use case.
However, an interesting use case comes up from the standard library (Python 2.6, _threading_local.py):
>>> mydata = local()
>>> mydata.number = 42
>>> mydata.__dict__
{'number': 42}
>>> mydata.__dict__.setdefault('widgets', [])
[]
>>> mydata.widgets
[]
I would say that using __dict__.setdefault is a pretty useful case.
Edit: As it happens, this is the only example in the standard library, and it is in a comment. So maybe it is not enough of a case to justify the existence of setdefault. Still, here is an explanation:
Objects store their attributes in the __dict__ attribute. As it happens, the __dict__ attribute is writeable at any time after the object's creation. It is also a dictionary, not a defaultdict. It is not sensible for objects in the general case to have __dict__ as a defaultdict, because that would make every object appear to have every legal identifier as an attribute. So I can't foresee any change to Python objects getting rid of __dict__.setdefault, apart from deleting it altogether if it was deemed not useful.
I rewrote the accepted answer to make it easier for newbies.
# break it down and understand it intuitively
new = {}
for (key, value) in data:
    if key not in new:
        new[key] = []  # this is the core of setdefault: new.setdefault(key, [])
    new[key].append(value)

# easy with setdefault
new = {}
for (key, value) in data:
    group = new.setdefault(key, [])  # returns new[key], creating it as [] if missing
    group.append(value)

# even simpler with defaultdict
from collections import defaultdict
new = defaultdict(list)
for (key, value) in data:
    new[key].append(value)  # all keys have a default value of empty list []
Additionally, I categorized the methods for reference:
dict_methods_11 = {
    'views': ['keys', 'values', 'items'],
    'add': ['update', 'setdefault'],
    'remove': ['pop', 'popitem', 'clear'],
    'retrieve': ['get'],
    'copy': ['copy', 'fromkeys'],
}
One drawback of defaultdict compared to dict.setdefault is that a defaultdict object creates a new item every time a missing key is looked up, even when the lookup happens only incidentally (e.g. inside a comparison). Also, the defaultdict class is generally far less common than the dict class, and in my experience it is more difficult to serialize.
P.S. IMO, functions/methods not meant to mutate an object should not mutate an object.
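To make the first point concrete, here is a small demonstration of the lookup side effect:
from collections import defaultdict

dd = defaultdict(list)
dd['x'] == []  # merely comparing the missing key's value...
print(dd)      # ...has inserted it: defaultdict(<class 'list'>, {'x': []})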
Here are some examples of setdefault to show its usefulness:
"""
d = {}
# To add a key->value pair, do the following:
d.setdefault(key, []).append(value)
# To retrieve a list of the values for a key
list_of_values = d[key]
# To remove a key->value pair is still easy, if
# you don't mind leaving empty lists behind when
# the last value for a given key is removed:
d[key].remove(value)
# Despite the empty lists, it's still possible to
# test for the existance of values easily:
if d.has_key(key) and d[key]:
pass # d has some values for key
# Note: Each value can exist multiple times!
"""
e = {}
print e
e.setdefault('Cars', []).append('Toyota')
print e
e.setdefault('Motorcycles', []).append('Yamaha')
print e
e.setdefault('Airplanes', []).append('Boeing')
print e
e.setdefault('Cars', []).append('Honda')
print e
e.setdefault('Cars', []).append('BMW')
print e
e.setdefault('Cars', []).append('Toyota')
print e
# NOTE: now e['Cars'] == ['Toyota', 'Honda', 'BMW', 'Toyota']
e['Cars'].remove('Toyota')
print e
# NOTE: it's still true that ('Toyota' in e['Cars'])
I use setdefault frequently when, get this, setting a default (!!!) in a dictionary; somewhat commonly in the os.environ dictionary:
# Set the venv dir if it isn't already overridden:
os.environ.setdefault('VENV_DIR', '/my/default/path')
Less succinctly, this looks like this:
# Set the venv dir if it isn't already overridden:
if 'VENV_DIR' not in os.environ:
    os.environ['VENV_DIR'] = '/my/default/path'
It's worth noting that you can also use the resulting variable:
venv_dir = os.environ.setdefault('VENV_DIR', '/my/default/path')
But that's less necessary than it was before defaultdicts existed.
Another use case that I don't think was mentioned above.
Sometimes you keep a cache dict of objects by their id, where the primary instance is in the cache, and you want to set the cache entry when it is missing.
return self.objects_by_id.setdefault(obj.id, obj)
That's useful when you always want to keep a single instance per distinct id, no matter how you obtain an obj each time. For example, when object attributes get updated in memory and saving to storage is deferred.
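A small sketch of that pattern (the Obj class here is hypothetical, purely for illustration):
class Obj:
    def __init__(self, obj_id, payload):
        self.id = obj_id
        self.payload = payload

objects_by_id = {}

def canonical(obj):
    # The first instance seen for an id wins; later ones are discarded.
    return objects_by_id.setdefault(obj.id, obj)

a = canonical(Obj(1, 'first'))
b = canonical(Obj(1, 'second'))
assert a is b and a.payload == 'first'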
One very important use-case I just stumbled across: dict.setdefault() is great for multi-threaded code when you only want a single canonical object (as opposed to multiple objects that happen to be equal).
For example, the (Int)Flag Enum in Python 3.6.0 has a bug: if multiple threads are competing for a composite (Int)Flag member, there may end up being more than one:
from enum import IntFlag, auto
import threading

class TestFlag(IntFlag):
    one = auto()
    two = auto()
    three = auto()
    four = auto()
    five = auto()
    six = auto()
    seven = auto()
    eight = auto()

    def __eq__(self, other):
        return self is other

    def __hash__(self):
        return hash(self.value)

seen = set()

class cycle_enum(threading.Thread):
    def run(self):
        for i in range(256):
            seen.add(TestFlag(i))

threads = []
for i in range(8):
    threads.append(cycle_enum())

for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(seen))
# 272 (should be 256)
The solution is to use setdefault() as the last step of saving the computed composite member -- if another has already been saved then it is used instead of the new one, guaranteeing unique Enum members.
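The general shape of that fix, demonstrated outside of enum (a standalone sketch, not the actual Enum patch):
import threading

canonical = {}
seen = set()

def get_canonical(key):
    candidate = object()  # each thread builds its own candidate...
    return canonical.setdefault(key, candidate)  # ...but only the first stored one wins

def worker():
    seen.add(id(get_canonical('composite')))

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert len(seen) == 1  # every thread received the same canonical object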
In addition to what has been suggested, setdefault might be useful in situations where you don't want to modify a value that has already been set. For example, when you have duplicate numbers and you want to treat them as one group: if you encounter a repeated key that has already been set, you keep the first encountered value instead of updating it, as if you iterate/update each repeated key only once.
Here's a code example of recording the index for the keys/elements of a sorted list:
nums = [2, 2, 2, 2, 2]
d = {}
for idx, num in enumerate(sorted(nums)):
    # Plain assignment would be updated with the index of the last repeated key:
    # d[num] = idx  # Result (sorted_indices): [4, 4, 4, 4, 4]
    # With setdefault, repeated keys don't update the entry;
    # only the first encountered key's index is set:
    d.setdefault(num, idx)  # Result (sorted_indices): [0, 0, 0, 0, 0]
sorted_indices = [d[i] for i in nums]
[Edit] Very wrong! The setdefault call would always trigger long_computation, Python being eager.
Expanding on Tuttle's answer. For me the best use case is cache mechanism. Instead of:
if x not in memo:
    memo[x] = long_computation(x)
return memo[x]
which consumes 3 lines and 2 or 3 lookups, I would happily write:
return memo.setdefault(x, long_computation(x))
I like the answer given here:
http://stupidpythonideas.blogspot.com/2013/08/defaultdict-vs-setdefault.html
In short, the decision (in non-performance-critical apps) should be made on the basis of how you want to handle lookup of empty keys downstream (viz. KeyError versus default value).
The different use case for setdefault() is when you don't want to overwrite the value of an already set key. defaultdict overwrites, while setdefault() does not. For nested dictionaries it is more often the case that you want to set a default only if the key is not set yet, because you don't want to remove the present sub dictionary. This is when you use setdefault().
Example with defaultdict:
>>> from collections import defaultdict
>>> foo = defaultdict()
>>> foo['a'] = 4
>>> foo['a'] = 2
>>> print(foo)
defaultdict(None, {'a': 2})
setdefault doesn't overwrite:
>>> bar = dict()
>>> bar.setdefault('a', 4)
4
>>> bar.setdefault('a', 2)
4
>>> print(bar)
{'a': 4}
Another use case for setdefault in CPython is that it is atomic in all cases, whereas defaultdict will not be atomic if you use a default value created from a lambda.
import threading

cache = {}

def get_user_roles(user_id):
    if user_id in cache:
        return cache[user_id]['roles']

    cache.setdefault(user_id, {'lock': threading.Lock()})
    with cache[user_id]['lock']:
        roles = query_roles_from_database(user_id)
        cache[user_id]['roles'] = roles
        return roles
If two threads execute cache.setdefault at the same time, only one of them will be able to create the default value.
If instead you used a defaultdict:
cache = defaultdict(lambda: {'lock': threading.Lock()})
This would result in a race condition. In my example above, the first thread could create a default lock, and the second thread could create another default lock, and then each thread could lock its own default lock, instead of the desired outcome of each thread attempting to lock a single lock.
Conceptually, setdefault basically behaves like this (defaultdict also behaves like this if you use an empty list, empty dict, int, or another default value that is not user Python code like a lambda):
gil = threading.Lock()

def setdefault(d, key, value_func):
    with gil:
        if key in d:
            return d[key]
        value = value_func()
        d[key] = value
        return value
Conceptually, defaultdict basically behaves like this when handling a missing key (only when using Python code like a lambda -- this is not true if you use an empty list):
gil = threading.Lock()

def getitem_missing(d, key, value_func):
    with gil:
        if key in d:
            return d[key]
    value = value_func()  # runs outside the lock: two threads can both get here
    with gil:
        d[key] = value
        return value
I need to parse a JSON file which, unfortunately for me, does not follow the prototype. I have two issues with the data, but I've already found a workaround for the second, so I'll just mention it at the end; maybe someone can help there as well.
So I need to parse entries like this:
"Test":{
"entry":{
"Type":"Something"
},
"entry":{
"Type":"Something_Else"
}
}, ...
The default JSON parser updates the dictionary and therefore keeps only the last entry. I HAVE to somehow store the other one as well, and I have no idea how to do this. I also HAVE to store the keys in the several dictionaries in the same order they appear in the file; that's why I am using an OrderedDict. It works fine, so if there is any way to extend this to handle the duplicate entries I'd be grateful.
My second issue is that this very same JSON file contains entries like this:
"Test":{
{
"Type":"Something"
}
}
The json.load() function raises an exception when it reaches that line in the file. The only way I could work around this was to manually remove the inner brackets myself.
Thanks in advance
You can use JSONDecoder.object_pairs_hook to customize how JSONDecoder decodes objects. This hook function will be passed a list of (key, value) pairs that you usually do some processing on, and then turn into a dict.
However, since Python dictionaries don't allow for duplicate keys (and you simply can't change that), you can return the pairs unchanged in the hook and get a nested list of (key, value) pairs when you decode your JSON:
from json import JSONDecoder

def parse_object_pairs(pairs):
    return pairs

data = """
{"foo": {"baz": 42}, "foo": 7}
"""

decoder = JSONDecoder(object_pairs_hook=parse_object_pairs)
obj = decoder.decode(data)
print obj
Output:
[(u'foo', [(u'baz', 42)]), (u'foo', 7)]
How you use this data structure is up to you. As stated above, Python dictionaries won't allow for duplicate keys, and there's no way around that. How would you even do a lookup based on a key? dct[key] would be ambiguous.
So you can either implement your own logic to handle a lookup the way you expect it to work, or implement some sort of collision avoidance to make keys unique if they're not, and then create a dictionary from your nested list.
Edit: Since you said you would like to modify the duplicate key to make it unique, here's how you'd do that:
from collections import OrderedDict
from json import JSONDecoder

def make_unique(key, dct):
    counter = 0
    unique_key = key
    while unique_key in dct:
        counter += 1
        unique_key = '{}_{}'.format(key, counter)
    return unique_key

def parse_object_pairs(pairs):
    dct = OrderedDict()
    for key, value in pairs:
        if key in dct:
            key = make_unique(key, dct)
        dct[key] = value
    return dct

data = """
{"foo": {"baz": 42, "baz": 77}, "foo": 7, "foo": 23}
"""

decoder = JSONDecoder(object_pairs_hook=parse_object_pairs)
obj = decoder.decode(data)
print obj
Output:
OrderedDict([(u'foo', OrderedDict([(u'baz', 42), ('baz_1', 77)])), ('foo_1', 7), ('foo_2', 23)])
The make_unique function is responsible for returning a collision-free key. In this example it just suffixes the key with _n where n is an incremental counter - just adapt it to your needs.
Because the object_pairs_hook receives the pairs exactly in the order they appear in the JSON document, it's also possible to preserve that order by using an OrderedDict, I included that as well.
Thanks a lot @Lukas Graf, I got it working as well by implementing my own version of the hook function:
import collections

def dict_raise_on_duplicates(ordered_pairs):
    count = 0
    d = collections.OrderedDict()
    for k, v in ordered_pairs:
        if k in d:
            d[k + '_dupl_' + str(count)] = v
            count += 1
        else:
            d[k] = v
    return d
The only thing remaining is to automatically get rid of the double brackets and I am done :D Thanks again
If you would prefer to convert those duplicated keys into an array, instead of having separate copies, this could do the work:
def dict_raise_on_duplicates(ordered_pairs):
    """Convert duplicate keys to JSON array."""
    d = {}
    for k, v in ordered_pairs:
        if k in d:
            if type(d[k]) is list:
                d[k].append(v)
            else:
                d[k] = [d[k], v]
        else:
            d[k] = v
    return d
And then you just use:
data = json.loads(yourString, object_pairs_hook=dict_raise_on_duplicates)
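For example (my own quick check of the behaviour):
import json

raw = '{"a": 1, "a": 2, "b": 3}'
print(json.loads(raw, object_pairs_hook=dict_raise_on_duplicates))
# {'a': [1, 2], 'b': 3}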
Is it possible to retrieve items from a Python dictionary in the order that they were inserted?
The standard Python dict does this by default if you're using CPython 3.6+ (or Python 3.7+ for any other implementation of Python).
On older versions of Python you can use collections.OrderedDict.
As of Python 3.7, the standard dict preserves insertion order. From the docs:
Changed in version 3.7: Dictionary order is guaranteed to be insertion order. This behavior was implementation detail of CPython from 3.6.
So, you should be able to iterate over the dictionary normally or use popitem().
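For example, on Python 3.7+:
d = {}
d['first'] = 1
d['second'] = 2
d['third'] = 3

print(list(d.items()))  # [('first', 1), ('second', 2), ('third', 3)]
print(d.popitem())      # ('third', 3) -- popitem() removes entries in LIFO order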
Use OrderedDict(), available since version 2.7
Just a matter of curiosity:
from collections import OrderedDict
a = {}
b = OrderedDict()
c = OrderedDict()
a['key1'] = 'value1'
a['key2'] = 'value2'
b['key1'] = 'value1'
b['key2'] = 'value2'
c['key2'] = 'value2'
c['key1'] = 'value1'
print a == b # True
print a == c # True
print b == c # False
The other answers are correct; it's not possible with a plain dict, but you could write this yourself. However, in case you're unsure how to actually implement something like this, here's a complete and working implementation that subclasses dict, which I've just written and tested. (Note that the order of values passed to the constructor is undefined but will come before values passed later, and you could always just not allow ordered dicts to be initialized with values.)
class ordered_dict(dict):
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)
        self._order = self.keys()

    def __setitem__(self, key, value):
        dict.__setitem__(self, key, value)
        if key in self._order:
            self._order.remove(key)
        self._order.append(key)

    def __delitem__(self, key):
        dict.__delitem__(self, key)
        self._order.remove(key)

    def order(self):
        return self._order[:]

    def ordered_items(self):
        return [(key, self[key]) for key in self._order]
od = ordered_dict()
od["hello"] = "world"
od["goodbye"] = "cruel world"
print od.order() # prints ['hello', 'goodbye']
del od["hello"]
od["monty"] = "python"
print od.order() # prints ['goodbye', 'monty']
od["hello"] = "kitty"
print od.order() # prints ['goodbye', 'monty', 'hello']
print od.ordered_items()
# prints [('goodbye','cruel world'), ('monty','python'), ('hello','kitty')]
You can't do this with the base dict class -- it's ordered by hash. You could build your own dictionary that is really a list of key/value pairs or some such, which would be ordered.
Or, just make the key a tuple with time.time() as the first field in the tuple.
Then you can retrieve the keys with dictname.keys(), sort, and voila!
Gerry
I've used StableDict before with good success.
http://pypi.python.org/pypi/StableDict/0.2
Or use any of the implementations for the PEP-372 described here, like the odict module from the pythonutils.
I successfully used the pocoo.org implementation, it is as easy as replacing your
my_dict={}
my_dict["foo"]="bar"
with
my_dict=odict.odict()
my_dict["foo"]="bar"
and it requires just this one file.
It's not possible unless you store the keys in a separate list for referencing later.
If you don't need the dict functionality, and only need to return tuples in the order you've inserted them, wouldn't a queue work better?
What you can do is insert the values with a key representing the order inputted, and then call sorted() on the items.
>>> obj = {}
>>> obj[1] = 'Bob'
>>> obj[2] = 'Sally'
>>> obj[3] = 'Joe'
>>> for k, v in sorted(obj.items()):
... print v
...
Bob
Sally
Joe
>>>