Getting value from nested dictionary in Python - python

I am looking for a nice and efficient way to get particular values out of a dictionary.
{
    "online_apply": {
        "value": "true"
    },
    "interview_format": {
        "value": "In-store"
    },
    "interview_time": {
        "value": "4PM-5PM"
    }
}
I am trying to transform the dictionary above to:
{
    "online_apply": "true",
    "interview_format": "In-store",
    "interview_time": "4PM-5PM"
}
Thank you for your help

You can use a dict comprehension:
{k: v['value'] for k, v in d.items()}
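Putting it together with the data from the question (the name `d` is just a placeholder for your dictionary):

```python
d = {
    "online_apply": {"value": "true"},
    "interview_format": {"value": "In-store"},
    "interview_time": {"value": "4PM-5PM"},
}

# Keep each top-level key, but pull the nested "value" entry up one level.
flat = {k: v["value"] for k, v in d.items()}
print(flat)
# {'online_apply': 'true', 'interview_format': 'In-store', 'interview_time': '4PM-5PM'}
```

Note this assumes every nested dict actually has a `"value"` key; if some may not, use `v.get("value")` instead.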

Related

How to find a particular JSON value by key using List Comprehension and Pydash

I am trying to find a particular JSON value by key using list comprehension and Pydash. I know there are multiple ways to do that, but I specifically want to get it done using list comprehension and Pydash.
I tried the code snippet below, which kind of works for dict iteration but not for list iteration.
import pydash as py_

data = {
    "P1": "ss",
    "P2": {
        "P1": "cccc"
    },
    "P3": [
        {
            "P1": "aaa"
        }
    ]
}
def findall(v, k):
    if type(v) is list:
        [findall(i, k) for i in v]
    a = [py_.get(v, k)] + [findall(py_.get(v, k1), k) for k1 in v if type(v) == type({})]
    return a

refs_ = findall(data, 'P1')
refs_d = py_.compact(py_.chain(refs_).
                     flatten_deep().
                     value())
print(refs_d)
I am trying to get the values for all P1 keys. The output should be ["ss", "cccc", "aaa"].
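The problem in the snippet above is that the recursive result for lists is computed but discarded. One way to get the desired output is a plain recursive walk over both dicts and lists; this is a sketch without Pydash (the helper name `findall_values` is mine, not from any library):

```python
def findall_values(obj, key):
    """Recursively collect every scalar value stored under `key` in nested dicts/lists."""
    found = []
    if isinstance(obj, dict):
        for k, v in obj.items():
            if k == key and not isinstance(v, (dict, list)):
                found.append(v)
            found.extend(findall_values(v, key))  # also descend into nested containers
    elif isinstance(obj, list):
        for item in obj:
            found.extend(findall_values(item, key))
    return found

data = {"P1": "ss", "P2": {"P1": "cccc"}, "P3": [{"P1": "aaa"}]}
print(findall_values(data, "P1"))  # ['ss', 'cccc', 'aaa']
```

The key difference from the original is that the recursive results for list elements are collected with `extend` instead of being thrown away.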

Pythonic way to map between two dicts with one having nested keys

I have dicts of two types representing the same data. These are consumed by two different channels, hence their keys are different.
For example:
Type A
{
    "key1": "value1",
    "key2": "value2",
    "nestedKey1": {
        "key3": "value3",
        "key4": "value4"
    }
}
Type B
{
    "equiKey1": "value1",
    "equiKey2": "value2",
    "equinestedKey1.key3": "value3",
    "equinestedKey1.key4": "value4"
}
I want to map data from Type B to Type A.
Currently I am creating it as below:
{
    "key1": typeBObj.get("equiKey1"),
    .....
}
Is there a better and faster way to do that in Python?
First, you need a dictionary mapping keys in B to keys (or rather lists of keys) in A. (If the keys follow the pattern from your question, or a similar pattern, this dict might also be generated.)
B_to_A = {
    "equiKey1": ["key1"],
    "equiKey2": ["key2"],
    "equinestedKey1.key3": ["nestedKey1", "key3"],
    "equinestedKey1.key4": ["nestedKey1", "key4"]
}
Then you can define a function for translating those keys.
def map_B_to_A(d):
    res = {}
    for key, val in d.items():
        r = res
        *head, last = B_to_A[key]
        for k in head:
            r = r.setdefault(k, {})
        r[last] = val
    return res
print(map_B_to_A(B) == A) # True
Or a bit shorter, but probably less clear, using functools.reduce:
from functools import reduce

def map_B_to_A(d):
    res = {}
    for key, val in d.items():
        *head, last = B_to_A[key]
        reduce(lambda r, k: r.setdefault(k, {}), head, res)[last] = val
    return res
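As a self-contained check, running the translation on the Type B data from the question rebuilds the nested Type A structure:

```python
B_to_A = {
    "equiKey1": ["key1"],
    "equiKey2": ["key2"],
    "equinestedKey1.key3": ["nestedKey1", "key3"],
    "equinestedKey1.key4": ["nestedKey1", "key4"],
}

def map_B_to_A(d):
    res = {}
    for key, val in d.items():
        r = res
        *head, last = B_to_A[key]
        for k in head:
            r = r.setdefault(k, {})  # create/reuse the nested dict at each level
        r[last] = val
    return res

B = {
    "equiKey1": "value1",
    "equiKey2": "value2",
    "equinestedKey1.key3": "value3",
    "equinestedKey1.key4": "value4",
}
A = map_B_to_A(B)
print(A)
# {'key1': 'value1', 'key2': 'value2', 'nestedKey1': {'key3': 'value3', 'key4': 'value4'}}
```

Note the inner loop must advance `r` with `r.setdefault(...)`, not `res.setdefault(...)`; otherwise paths deeper than one level of nesting would be written to the wrong dict.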

Create unique dictionaries for two keys in a list of dictionaries

I am getting a list of dictionaries in the following format from an API, e.g.:
xlist = [
    { "id": 1, "day": 2, "name": "abc", ... },
    { "id": 1, "day": 3, "name": "abc", ... },
    { "id": 1, "day": 2, "name": "xyz", ... },
    { "id": 1, "day": 3, "name": "xyz", ... },
]
So, to store the data and optimize the DB queries, I have to convert them to the format below.
What is an efficient (or other) way to generate the following structure?
unique_xlist = [
    { "id": 1, "day": 2, "name": ["abc", "xyz"], ... },
    { "id": 1, "day": 3, "name": ["abc", "xyz"], ... },
]
What I am doing:
names = list(set(v['name'] for v in xlist))  # get unique names
templist = [{k: (names if k == 'name' else v)
             for k, v in obj.items()} for obj in xlist]  # append unique names
unique_xlist = {v['day']: v for v in templist}.values()  # find unique dicts
I don't think this is very efficient; I am using 3 loops just to find the unique dicts by day.
You could use itertools.groupby:
from itertools import groupby

xlist.sort(key=lambda x: (x["id"], x["day"], x["name"]))  # or use sorted()
unique_xlist = []
for k, g in groupby(xlist, lambda x: (x["id"], x["day"])):
    unique_xlist.append({"id": k[0], "day": k[1], "name": [i["name"] for i in g]})
Simply use the values that make an item unique as keys to a dictionary:
grouped = {}
for x in xlist:
    key = (x['id'], x['day'])
    try:
        grouped[key]['name'].append(x['name'])
    except KeyError:
        grouped[key] = x
        grouped[key]['name'] = [x['name']]
You can turn this back into a list afterwards if necessary.
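As a runnable sketch of the grouping idea (using the field values from the question's example, with the elided extra fields omitted, and a copy so the input list is not mutated):

```python
xlist = [
    {"id": 1, "day": 2, "name": "abc"},
    {"id": 1, "day": 3, "name": "abc"},
    {"id": 1, "day": 2, "name": "xyz"},
    {"id": 1, "day": 3, "name": "xyz"},
]

grouped = {}
for x in xlist:
    key = (x["id"], x["day"])          # the fields that make an item unique
    if key in grouped:
        grouped[key]["name"].append(x["name"])
    else:
        grouped[key] = dict(x)          # shallow copy so the input stays untouched
        grouped[key]["name"] = [x["name"]]

unique_xlist = list(grouped.values())   # listify the result
print(unique_xlist)
# [{'id': 1, 'day': 2, 'name': ['abc', 'xyz']}, {'id': 1, 'day': 3, 'name': ['abc', 'xyz']}]
```

This makes a single pass over the input, so it avoids the three separate loops in the question.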

Using dictionary comprehension, need more than 1 value to unpack

Consider you have JSON structured like below:
{
    "valueA": "2",
    "valueB": [
        {
            "key1": "value1"
        },
        {
            "key2": "value2"
        },
        {
            "key3": "value3"
        }
    ]
}
and when doing something like:
dict_new = {key:value for (key,value) in dict['valueB'] if key == 'key2'}
I get:
ValueError: need more than 1 value to unpack
Why and how to fix it?
dict['valueB'] is a list of dictionaries. You need another layer of nesting for your code to work, and since you are looking for one key, you need to produce a list here (keys must be unique in a dictionary):
values = [value for d in dict['valueB'] for key, value in d.items() if key == 'key2']
If you tried to make a dictionary of key2: value pairs, you will only have the last pair left, as the previous values have been replaced by virtue of having been associated with the same key.
Better still, just grab that one key, no need to loop over all items if you just wanted that one key:
values = [d['key2'] for d in dict['valueB'] if 'key2' in d]
This filters on the list of dictionaries in the dict['valueB'] list; if 'key2' is a key in that nested dictionary, we extract it.
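For example (the outer dictionary is bound to `data` here rather than `dict`, to avoid shadowing the built-in):

```python
data = {
    "valueA": "2",
    "valueB": [
        {"key1": "value1"},
        {"key2": "value2"},
        {"key3": "value3"},
    ],
}

# Pull 'key2' from whichever nested dicts contain it.
values = [d["key2"] for d in data["valueB"] if "key2" in d]
print(values)  # ['value2']
```

Shadowing `dict` (as in the question) is also worth avoiding, since it makes the built-in type unusable later in the same scope.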

Python dictionary comprehension with nested for

Having trouble turning these loops into a dictionary comprehension - it might be impossible.
The general idea is that I have a dictionary of excludes that looks like this:
excludes = {
    "thing1": ["name", "address"],
    "thing2": ["username"]
}
Then I have a larger dictionary that I want to "clean" using the exclusions:
original_dict = {
    "thing1": {"name": "John", "address": "123 Anywhere Drive", "occupation": "teacher"},
    "thing2": {"username": "bearsfan123", "joined_date": "01/01/2015"},
    "thing3": {"pet_name": "Spot"}
}
If I run the following:
for k, v in original_dict.iteritems():
    if k in excludes.keys():
        for key in excludes[k]:
            del v[key]
I'm left with:
original_dict = {
    "thing1": {"occupation": "teacher"},
    "thing2": {"joined_date": "01/01/2015"},
    "thing3": {"pet_name": "Spot"}
}
This is perfect, but I'm not sure if I can better represent this as a dictionary comprehension - simply adding the keys I want rather than deleting the ones I don't.
I've gotten down to the second for, but am not sure how to represent that within:
new_dict = {k: v for (k, v) in original_dict.iteritems()}
{k: {sub_k: val for sub_k, val in v.iteritems()
     if sub_k not in excludes.get(k, {})}
 for k, v in original_dict.iteritems()}
Note the need for excludes.get(k, {}).
After pasting in your data and running it in IPython:
In [154]: {k: {sub_k: val for sub_k, val in v.iteritems()
     ...:      if sub_k not in excludes.get(k, {})}
     ...:  for k, v in original_dict.iteritems()}
Out[154]:
{'thing1': {'occupation': 'teacher'},
 'thing2': {'joined_date': '01/01/2015'},
 'thing3': {'pet_name': 'Spot'}}
I'd personally argue that the for-loop approach is more readable and less surprising for code readers across a range of experience levels.
A slight variation of the for-loop approach that doesn't require side effects with del and uses an inner dict comprehension:
new_dict = {}
for k, v in original_dict.iteritems():
    k_excludes = excludes.get(k, {})
    new_dict[k] = {sub_k: sub_v for sub_k, sub_v in v.iteritems()
                   if sub_k not in k_excludes}
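The answers above use Python 2's iteritems(); on Python 3 the same comprehension looks like this (a self-contained sketch using the question's data):

```python
excludes = {
    "thing1": ["name", "address"],
    "thing2": ["username"],
}
original_dict = {
    "thing1": {"name": "John", "address": "123 Anywhere Drive", "occupation": "teacher"},
    "thing2": {"username": "bearsfan123", "joined_date": "01/01/2015"},
    "thing3": {"pet_name": "Spot"},
}

# Build a new dict, keeping only the sub-keys not excluded for each top-level key.
new_dict = {k: {sub_k: sub_v for sub_k, sub_v in v.items()
                if sub_k not in excludes.get(k, ())}
            for k, v in original_dict.items()}
print(new_dict)
# {'thing1': {'occupation': 'teacher'},
#  'thing2': {'joined_date': '01/01/2015'},
#  'thing3': {'pet_name': 'Spot'}}
```

Unlike the del-based loop, this leaves original_dict untouched, which is usually the safer default.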
