Manipulating an API response into a list of object key/value pairs - Python

Having trouble manipulating this data for my front end.
Here is my API response, which is a list of dictionaries:
[
    {
        "name": "bob",
        "age": 22,
        "gender": "male"
    },
    {
        "name": "zack",
        "age": 43,
        "gender": "male"
    }
]
Here is the desired output, another list of dictionaries:
{id: 0, column: "age", bob: 22, zack: 43},
{id: 1, column: "gender", bob: male, zack: male}
So if another name was returned, it would simply be added as a key and grab the corresponding value for age/gender etc..
Here is the output I'm currently getting:
{id: 0, column: "age", bob: 22},
{id: 1, column: "gender", bob: male},
{id: 0, column: "age", zack: 43},
{id: 1, column: "gender", zack: male}
So for each list of dicts, I'm selecting the columns, using the name as a type of identifier, and assigning the corresponding value for a particular column.
I'm having trouble adding each person's name (the key in this case) to the same list of dicts with the corresponding value (age, gender, etc.). Here is the code I currently have.
my_list = []
counter = 0
for d in data:
    for k, v in d.items():
        dict = {}
        length = len(d.keys())
        if counter == length:
            counter = 0
        value = d['name']
        dict['id'] = counter
        dict['column'] = k
        dict[value] = v
        my_list.append(dict)
        counter += 1
Can someone point me in the right direction?

Use:
data = [{
    "name": "bob",
    "age": 22,
    "gender": "male"
},
{
    "name": "zack",
    "age": 43,
    "gender": "male"
}]

keys = ["age", "gender"]
lookup = {key: {"id": i, "column": key} for i, key in enumerate(keys)}

for d in data:
    name = d.pop("name")
    for key, value in d.items():
        lookup[key][name] = value

res = list(lookup.values())
print(res)
Output
[{'id': 0, 'column': 'age', 'bob': 22, 'zack': 43}, {'id': 1, 'column': 'gender', 'bob': 'male', 'zack': 'male'}]
Or an alternative that does not alter the original dictionaries:
keys = ["age", "gender"]
lookup = {key: {"id": i, "column": key} for i, key in enumerate(keys)}

for d in data:
    name = d["name"]
    for key in (d.keys() - {"name"}):
        lookup[key][name] = d[key]

res = list(lookup.values())
print(res)
Output
[{'id': 0, 'column': 'age', 'bob': 22, 'zack': 43}, {'id': 1, 'column': 'gender', 'bob': 'male', 'zack': 'male'}]
UPDATE
If the keys are not known beforehand, you could do:
lookup = {}
for d in data:
    name = d["name"]
    for key in (d.keys() - {"name"}):
        if key not in lookup:
            lookup[key] = {"id": len(lookup), "column": key}
        lookup[key][name] = d[key]

res = list(lookup.values())
print(res)
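One caveat: d.keys() - {"name"} is a set, so the order in which the id values get assigned is not guaranteed. If a stable order matters, here is a small sketch (not from the original answer) that iterates each dict directly and relies on insertion order:
lookup = {}
for d in data:
    name = d["name"]
    for key, value in d.items():
        if key == "name":
            continue  # skip the identifier column
        if key not in lookup:
            lookup[key] = {"id": len(lookup), "column": key}
        lookup[key][name] = value

res = list(lookup.values())
print(res)
# [{'id': 0, 'column': 'age', 'bob': 22, 'zack': 43},
#  {'id': 1, 'column': 'gender', 'bob': 'male', 'zack': 'male'}]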


List generated from For loop with .append() of another JSON data

myjson = [
    {"GROUP": "A",
     "TYPE": "I",
     "VALUE1": 25,
     "VALUE2": 26,
     "REMARK": "None"},
    {"GROUP": "B",
     "TYPE": "II",
     "VALUE1": 33,
     "VALUE2": 22,
     "REMARK": "None"}
]
Expected output
[{'GROUP': 'A', 'TYPE': 'I'}, {'GROUP': 'B', 'TYPE': 'II'}]
My approach
For each item in myjson, select the keys GROUP and TYPE to form a dictionary, and append these two dictionaries to another list, myjson2.
temp = {}
myjson2 = []
for listitem in myjson:
    for key, item in listitem.items():
        if key == "GROUP" or key == "TYPE":
            temp[key] = item
    print(temp)
    myjson2.append(temp)
The print(temp) above gives:
{'GROUP': 'A', 'TYPE': 'I'}
{'GROUP': 'B', 'TYPE': 'II'}
I wonder why I get the following result after the .append() calls:
print(myjson2)
[{'GROUP': 'B', 'TYPE': 'II'}, {'GROUP': 'B', 'TYPE': 'II'}]
Why it's not working: you need to append a new dictionary every time instead of overwriting the old one with new values. Just move that temp = {} inside the first for loop:
myjson = [
    {
        "GROUP": "A",
        "TYPE": "I",
        "VALUE1": 25,
        "VALUE2": 26,
        "REMARK": "None"
    },
    {
        "GROUP": "B",
        "TYPE": "II",
        "VALUE1": 33,
        "VALUE2": 22,
        "REMARK": "None"
    }
]

myjson2 = []
for listitem in myjson:
    temp = {}
    for key, item in listitem.items():
        if key == "GROUP" or key == "TYPE":
            temp[key] = item
    myjson2.append(temp)
print(myjson2)
One-liner, with a list comprehension:
myjson2 = [{key:val for key, val in dic.items() if key in ['GROUP', 'TYPE']} for dic in myjson]
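For the sample myjson, this one-liner likewise produces [{'GROUP': 'A', 'TYPE': 'I'}, {'GROUP': 'B', 'TYPE': 'II'}].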
Your issue is caused by temp being defined once, outside the loops.
Move the definition inside the first for loop so it is reset for each item in the list:
myjson2 = []
for listitem in myjson:
    temp = {}
    for key, item in listitem.items():
        if key == "GROUP" or key == "TYPE":
            temp[key] = item
    print(temp)
    myjson2.append(temp)
print(myjson2)
{'GROUP': 'A', 'TYPE': 'I'}
{'GROUP': 'B', 'TYPE': 'II'}
[{'GROUP': 'A', 'TYPE': 'I'}, {'GROUP': 'B', 'TYPE': 'II'}]
It's because temp keeps a reference to the same dictionary object: what you do inside for key, item in listitem.items() essentially changes the values of that one dictionary, and since the last value of listitem is {'GROUP': 'B', 'TYPE': 'II'}, temp ends up with those key-value pairs. You can fix this simply by pointing temp at a new object in every iteration:
for listitem in myjson:
    temp = {}
    for key, item in listitem.items():
        if key == "GROUP" or key == "TYPE":
            temp[key] = item
    print(temp)
    myjson2.append(temp)

Can we have dictionaries as elements of a list, and if so, how do we call out a specific value of a specific dictionary? [duplicate]

Assume I have this:
[
    {"name": "Tom", "age": 10},
    {"name": "Mark", "age": 5},
    {"name": "Pam", "age": 7}
]
and by searching for "Pam" as the name, I want to retrieve the related dictionary: {name: "Pam", age: 7}.
How do I achieve this?
You can use a generator expression:
>>> dicts = [
... { "name": "Tom", "age": 10 },
... { "name": "Mark", "age": 5 },
... { "name": "Pam", "age": 7 },
... { "name": "Dick", "age": 12 }
... ]
>>> next(item for item in dicts if item["name"] == "Pam")
{'age': 7, 'name': 'Pam'}
If you need to handle the item not being there, then you can do what user Matt suggested in his comment and provide a default using a slightly different API:
next((item for item in dicts if item["name"] == "Pam"), None)
And to find the index of the item, rather than the item itself, you can enumerate() the list:
next((i for i, item in enumerate(dicts) if item["name"] == "Pam"), None)
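With the sample dicts above, the index version returns 2, since "Pam" is the third entry.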
This looks to me like the most Pythonic way:
people = [
    {'name': "Tom", 'age': 10},
    {'name': "Mark", 'age': 5},
    {'name': "Pam", 'age': 7}
]

filter(lambda person: person['name'] == 'Pam', people)
result (returned as a list in Python 2):
[{'age': 7, 'name': 'Pam'}]
Note: In Python 3, a filter object is returned. So the Python 3 solution would be:
list(filter(lambda person: person['name'] == 'Pam', people))
@Frédéric Hamidi's answer is great. In Python 3.x the syntax for .next() changed slightly, hence a slight modification:
>>> dicts = [
...     { "name": "Tom", "age": 10 },
...     { "name": "Mark", "age": 5 },
...     { "name": "Pam", "age": 7 },
...     { "name": "Dick", "age": 12 }
... ]
>>> next(item for item in dicts if item["name"] == "Pam")
{'age': 7, 'name': 'Pam'}
As mentioned in the comments by @Matt, you can add a default value like so:
>>> next((item for item in dicts if item["name"] == "Pam"), False)
{'name': 'Pam', 'age': 7}
>>> next((item for item in dicts if item["name"] == "Sam"), False)
False
>>>
You can use a list comprehension:
def search(name, people):
    return [element for element in people if element['name'] == name]
I tested various methods to go through a list of dictionaries and return the dictionaries where key x has a certain value.
Results:
Speed: list comprehension > generator expression >> normal list iteration >>> filter.
All scale linearly with the number of dicts in the list (10x list size -> 10x time).
The number of keys per dictionary does not affect speed significantly, even for large numbers (thousands) of keys. Please see this graph I calculated: https://imgur.com/a/quQzv (method names: see below).
All tests done with Python 3.6.4, W7x64.
from random import randint
from timeit import timeit

list_dicts = []
for _ in range(1000):  # number of dicts in the list
    dict_tmp = {}
    for i in range(10):  # number of keys for each dict
        dict_tmp[f"key{i}"] = randint(0, 50)
    list_dicts.append(dict_tmp)

def a():
    # normal iteration over all elements
    for dict_ in list_dicts:
        if dict_["key3"] == 20:
            pass

def b():
    # use 'generator'
    for dict_ in (x for x in list_dicts if x["key3"] == 20):
        pass

def c():
    # use 'list'
    for dict_ in [x for x in list_dicts if x["key3"] == 20]:
        pass

def d():
    # use 'filter'
    for dict_ in filter(lambda x: x['key3'] == 20, list_dicts):
        pass
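The timing calls themselves are not shown in the post; a minimal sketch of how the numbers below might have been produced with the already-imported timeit (the number of runs is an assumption):
# Hypothetical timing harness: time each approach; absolute numbers depend on the machine.
for func in (a, b, c, d):
    print(func.__name__, timeit(func, number=1000))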
Results:
1.7303 # normal list iteration
1.3849 # generator expression
1.3158 # list comprehension
7.7848 # filter
people = [
    {'name': "Tom", 'age': 10},
    {'name': "Mark", 'age': 5},
    {'name': "Pam", 'age': 7}
]

def search(name):
    for p in people:
        if p['name'] == name:
            return p

search("Pam")
Have you ever tried out the pandas package? It's perfect for this kind of search task and optimized too.
import pandas as pd
listOfDicts = [
    {"name": "Tom", "age": 10},
    {"name": "Mark", "age": 5},
    {"name": "Pam", "age": 7}
]
# Create a data frame, keys are used as column headers.
# Dict items with the same key are entered into the same respective column.
df = pd.DataFrame(listOfDicts)
# The pandas dataframe allows you to pick out specific values like so:
df2 = df[ (df['name'] == 'Pam') & (df['age'] == 7) ]
# Alternate syntax, same thing
df2 = df[ (df.name == 'Pam') & (df.age == 7) ]
I've added a little bit of benchmarking below to illustrate pandas' faster runtimes at a larger scale, i.e. 100k+ entries:
setup_large = 'dicts = [];\
[dicts.extend(({ "name": "Tom", "age": 10 },{ "name": "Mark", "age": 5 },\
{ "name": "Pam", "age": 7 },{ "name": "Dick", "age": 12 })) for _ in range(25000)];\
from operator import itemgetter;import pandas as pd;\
df = pd.DataFrame(dicts);'
setup_small = 'dicts = [];\
dicts.extend(({ "name": "Tom", "age": 10 },{ "name": "Mark", "age": 5 },\
{ "name": "Pam", "age": 7 },{ "name": "Dick", "age": 12 }));\
from operator import itemgetter;import pandas as pd;\
df = pd.DataFrame(dicts);'
method1 = '[item for item in dicts if item["name"] == "Pam"]'
method2 = 'df[df["name"] == "Pam"]'
import timeit
t = timeit.Timer(method1, setup_small)
print('Small Method LC: ' + str(t.timeit(100)))
t = timeit.Timer(method2, setup_small)
print('Small Method Pandas: ' + str(t.timeit(100)))
t = timeit.Timer(method1, setup_large)
print('Large Method LC: ' + str(t.timeit(100)))
t = timeit.Timer(method2, setup_large)
print('Large Method Pandas: ' + str(t.timeit(100)))
#Small Method LC: 0.000191926956177
#Small Method Pandas: 0.044392824173
#Large Method LC: 1.98827004433
#Large Method Pandas: 0.324505090714
To add just a tiny bit to @Frédéric Hamidi's answer.
In case you are not sure a key is in the list of dicts, something like this would help:
next((item for item in dicts if item.get("name") and item["name"] == "Pam"), None)
Simply using list comprehension:
[i for i in dct if i['name'] == 'Pam'][0]
Sample code:
dct = [
    {'name': 'Tom', 'age': 10},
    {'name': 'Mark', 'age': 5},
    {'name': 'Pam', 'age': 7}
]
print([i for i in dct if i['name'] == 'Pam'][0])
> {'age': 7, 'name': 'Pam'}
You can achieve this with the filter and next functions in Python.
The filter function filters the given sequence and returns an iterator.
The next function accepts an iterator and returns the next element from it.
So you can find the element with:
my_dict = [
    {"name": "Tom", "age": 10},
    {"name": "Mark", "age": 5},
    {"name": "Pam", "age": 7}
]
next(filter(lambda obj: obj.get('name') == 'Pam', my_dict), None)
and the output is,
{'name': 'Pam', 'age': 7}
Note: The above code will return None in case the name we are searching for is not found.
One simple way using a list comprehension: if l is the list
l = [
    {"name": "Tom", "age": 10},
    {"name": "Mark", "age": 5},
    {"name": "Pam", "age": 7}
]
then
[d['age'] for d in l if d['name']=='Tom']
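For the sample list above, this returns [10], a list of the matching ages.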
def dsearch(lod, **kw):
    return filter(lambda i: all((i[k] == v for (k, v) in kw.items())), lod)

lod = [{'a': 33, 'b': 'test2', 'c': 'a.ing333'},
       {'a': 22, 'b': 'ihaha', 'c': 'fbgval'},
       {'a': 33, 'b': 'TEst1', 'c': 's.ing123'},
       {'a': 22, 'b': 'ihaha', 'c': 'dfdvbfjkv'}]
list(dsearch(lod, a=22))
[{'a': 22, 'b': 'ihaha', 'c': 'fbgval'},
{'a': 22, 'b': 'ihaha', 'c': 'dfdvbfjkv'}]
list(dsearch(lod, a=22, b='ihaha'))
[{'a': 22, 'b': 'ihaha', 'c': 'fbgval'},
{'a': 22, 'b': 'ihaha', 'c': 'dfdvbfjkv'}]
list(dsearch(lod, a=22, c='fbgval'))
[{'a': 22, 'b': 'ihaha', 'c': 'fbgval'}]
This is a general way of searching a value in a list of dictionaries:
def search_dictionaries(key, value, list_of_dictionaries):
    return [element for element in list_of_dictionaries if element[key] == value]
dicts = [
    {"name": "Tom", "age": 10},
    {"name": "Mark", "age": 5},
    {"name": "Pam", "age": 7}
]

from collections import defaultdict
dicts_by_name = defaultdict(list)

for d in dicts:
    dicts_by_name[d['name']] = d

print(dicts_by_name['Tom'])

# Output:
# {'age': 10, 'name': 'Tom'}
names = [{'name':'Tom', 'age': 10}, {'name': 'Mark', 'age': 5}, {'name': 'Pam', 'age': 7}]
resultlist = [d for d in names if d.get('name', '') == 'Pam']
first_result = resultlist[0]
This is one way...
You can try this:
''' lst: list of dictionaries '''
lst = [{"name": "Tom", "age": 10}, {"name": "Mark", "age": 5}, {"name": "Pam", "age": 7}]
search = input("What name: ")  # Input the name that needs to be searched (say 'Pam')
print([lst[i] for i in range(len(lst)) if lst[i]["name"] == search][0])  # Output
>>> {'age': 7, 'name': 'Pam'}
Put the accepted answer in a function for easy re-use:
def get_item(collection, key, target):
    return next((item for item in collection if item[key] == target), None)
Or also as a lambda
get_item_lambda = lambda collection, key, target: next((item for item in collection if item[key] == target), None)
Result
key = "name"
target = "Pam"
print(get_item(target_list, key, target))
print(get_item_lambda(target_list, key, target))
#{'name': 'Pam', 'age': 7}
#{'name': 'Pam', 'age': 7}
In case the key may not be in the target dictionary, use dict.get to avoid a KeyError:
def get_item(collection, key, target):
    return next((item for item in collection if item.get(key, None) == target), None)

get_item_lambda = lambda collection, key, target: next((item for item in collection if item.get(key, None) == target), None)
My first thought would be that you might want to consider creating a dictionary of these dictionaries ... if, for example, you were going to be searching it more than a small number of times.
However, that might be a premature optimization. What would be wrong with:
def get_records(key, store=dict()):
    '''Return a list of all records containing name==key from our store
    '''
    assert key is not None
    return [d for d in store if d['name'] == key]
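A quick usage sketch (not part of the original answer), passing the question's sample list in explicitly rather than relying on the empty-dict default:
people = [
    {"name": "Tom", "age": 10},
    {"name": "Mark", "age": 5},
    {"name": "Pam", "age": 7}
]
print(get_records("Pam", people))  # [{'name': 'Pam', 'age': 7}]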
Most (if not all) implementations proposed here have two flaws:
They assume only one key to be passed for searching, while it may be interesting to have more for complex dicts.
They assume all keys passed for searching exist in the dicts, hence they don't deal correctly with the KeyError that occurs when a key is missing.
An updated proposition:
def find_first_in_list(objects, **kwargs):
    return next((obj for obj in objects if
                 len(set(obj.keys()).intersection(kwargs.keys())) > 0 and
                 all([obj[k] == v for k, v in kwargs.items() if k in obj.keys()])),
                None)
Maybe not the most pythonic, but at least a bit more failsafe.
Usage:
>>> obj1 = find_first_in_list(list_of_dict, name='Pam', age=7)
>>> obj2 = find_first_in_list(list_of_dict, name='Pam', age=27)
>>> obj3 = find_first_in_list(list_of_dict, name='Pam', address='nowhere')
>>>
>>> print(obj1, obj2, obj3)
{"name": "Pam", "age": 7}, None, {"name": "Pam", "age": 7}
Here is a comparison of iterating through the list, using filter+lambda, and refactoring (if needed or valid for your case) your code to a dict of dicts rather than a list of dicts:
import time

# Build list of dicts
list_of_dicts = list()
for i in range(100000):
    list_of_dicts.append({'id': i, 'name': 'Tom'})

# Build dict of dicts
dict_of_dicts = dict()
for i in range(100000):
    dict_of_dicts[i] = {'name': 'Tom'}

# Find the one with ID of 99999

# 1. Iterate through the list
lod_ts = time.time()
for elem in list_of_dicts:
    if elem['id'] == 99999:
        break
lod_tf = time.time()
lod_td = lod_tf - lod_ts

# 2. Use filter
f_ts = time.time()
x = filter(lambda k: k['id'] == 99999, list_of_dicts)
f_tf = time.time()
f_td = f_tf - f_ts

# 3. Find it in the dict of dicts
dod_ts = time.time()
x = dict_of_dicts[99999]
dod_tf = time.time()
dod_td = dod_tf - dod_ts

print('List of Dictionaries took: %s' % lod_td)
print('Using filter took: %s' % f_td)
print('Dict of Dicts took: %s' % dod_td)
And the output is this:
List of Dictionaries took: 0.0099310874939
Using filter took: 0.0121960639954
Dict of Dicts took: 4.05311584473e-06
Conclusion:
Clearly, having a dictionary of dicts is the most efficient way to search in those cases where you know you will be searching by IDs only.
Interestingly, using filter is the slowest solution.
I would create a dict of dicts like so:
names = ["Tom", "Mark", "Pam"]
ages = [10, 5, 7]

my_d = {}
for i, j in zip(names, ages):
    my_d[i] = {"name": i, "age": j}
or, using exactly the same info as in the posted question:
info_list = [{"name": "Tom", "age": 10}, {"name": "Mark", "age": 5}, {"name": "Pam", "age": 7}]
my_d = {}
for d in info_list:
    my_d[d["name"]] = d
Then you could do my_d["Pam"] and get {"name": "Pam", "age": 7}
Ducks will be a lot faster than a list comprehension or filter. It builds an index on your objects so lookups don't need to scan every item.
pip install ducks
from ducks import Dex
dicts = [
    {"name": "Tom", "age": 10},
    {"name": "Mark", "age": 5},
    {"name": "Pam", "age": 7}
]

# Build the index
dex = Dex(dicts, {'name': str, 'age': int})

# Find matching objects
dex[{'name': 'Pam', 'age': 7}]
Result: [{'name': 'Pam', 'age': 7}]
You have to go through all elements of the list. There is no shortcut!
Unless you keep a dictionary of the names pointing to the items of the list somewhere else, but then you have to take care of the consequences of popping an element from your list (see the sketch below).
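A minimal sketch of that idea (not from the original answer), assuming names are unique:
people = [
    {"name": "Tom", "age": 10},
    {"name": "Mark", "age": 5},
    {"name": "Pam", "age": 7}
]

# Hypothetical index: maps each name to its entry for O(1) lookups.
by_name = {p["name"]: p for p in people}

print(by_name["Pam"])  # {'name': 'Pam', 'age': 7}

# When removing an entry, keep both structures in sync.
removed = by_name.pop("Mark")
people.remove(removed)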
I found this thread when I was searching for an answer to the same
question. While I realize that it's a late answer, I thought I'd
contribute it in case it's useful to anyone else:
def find_dict_in_list(dicts, default=None, **kwargs):
    """Find first matching :obj:`dict` in :obj:`list`.

    :param list dicts: List of dictionaries.
    :param dict default: Optional. Default dictionary to return.
        Defaults to `None`.
    :param **kwargs: `key=value` pairs to match in :obj:`dict`.
    :returns: First matching :obj:`dict` from `dicts`.
    :rtype: dict
    """
    rval = default
    for d in dicts:
        is_found = False
        # Search for keys in dict.
        for k, v in kwargs.items():
            if d.get(k, None) == v:
                is_found = True
            else:
                is_found = False
                break
        if is_found:
            rval = d
            break
    return rval


if __name__ == '__main__':
    # Tests
    dicts = []
    keys = 'spam eggs shrubbery knight'.split()
    start = 0
    for _ in range(4):
        dct = {k: v for k, v in zip(keys, range(start, start + 4))}
        dicts.append(dct)
        start += 4

    # Find each dict based on 'spam' key only.
    for x in range(len(dicts)):
        spam = x * 4
        assert find_dict_in_list(dicts, spam=spam) == dicts[x]

    # Find each dict based on 'spam' and 'shrubbery' keys.
    for x in range(len(dicts)):
        spam = x * 4
        assert find_dict_in_list(dicts, spam=spam, shrubbery=spam + 2) == dicts[x]

    # Search for one correct key, one incorrect key:
    for x in range(len(dicts)):
        spam = x * 4
        assert find_dict_in_list(dicts, spam=spam, shrubbery=spam + 1) is None

    # Search for non-existent dict.
    for x in range(len(dicts)):
        spam = x + 100
        assert find_dict_in_list(dicts, spam=spam) is None

Statistics on a list of dictionaries considering multiples keys

I have a list of dicts:
input = [{'name': 'A', 'Status': 'Passed', 'id': 'x1'},
         {'name': 'A', 'Status': 'Passed', 'id': 'x2'},
         {'name': 'A', 'Status': 'Failed', 'id': 'x3'},
         {'name': 'B', 'Status': 'Passed', 'id': 'x4'},
         {'name': 'B', 'Status': 'Passed', 'id': 'x5'}]
I want an output like:
output = [{'name': 'A', 'Passed': '2', 'Failed': '1', 'Total': '3', '%Pass': '66%'},
          {'name': 'B', 'Passed': '2', 'Failed': '0', 'Total': '2', '%Pass': '100%'},
          {'name': 'Total', 'Passed': '4', 'Failed': '1', 'Total': '5', '%Pass': '80%'}]
I started retrieving the different names by using a lookup:
lookup = {(d["name"]): d for d in input[::-1]}
names = [e for e in lookup.values()]
names = names[::-1]
and after that, a list comprehension, something like:
for name in names:
    name_passed = sum(["Passed" and "name" for d in input if 'Status' in d and name in d])
    name_failed = sum(["Failed" and "name" for d in input if 'Status' in d and name in d])
But I am not sure if there is a smarter way? Maybe a simple loop comparing dict values would be simpler!?
Assuming your input entries will always be grouped according to the "name" key-value pair:
entries = [
    {"name": "A", "Status": "Passed", "id": "x1"},
    {"name": "A", "Status": "Passed", "id": "x2"},
    {"name": "A", "Status": "Failed", "id": "x3"},
    {"name": "B", "Status": "Passed", "id": "x4"},
    {"name": "B", "Status": "Passed", "id": "x5"}
]

def to_grouped(entries):
    from itertools import groupby
    from operator import itemgetter

    for key, group_iter in groupby(entries, key=itemgetter("name")):
        group = list(group_iter)
        total = len(group)
        passed = sum(1 for entry in group if entry["Status"] == "Passed")
        failed = total - passed
        perc_pass = (100 // total) * passed
        yield {
            "name": key,
            "Passed": str(passed),
            "Failed": str(failed),
            "Total": str(total),
            "%Pass": f"{perc_pass:.0f}%"
        }

print(list(to_grouped(entries)))
Output:
[{'name': 'A', 'Passed': '2', 'Failed': '1', 'Total': '3', '%Pass': '66%'}, {'name': 'B', 'Passed': '2', 'Failed': '0', 'Total': '2', '%Pass': '100%'}]
This will not create the final entry you're looking for, which sums the statistics of all other entries. Though, that shouldn't be too hard to do.
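For completeness, a hedged sketch of that last step (not part of the answer above): sum the per-name counts from to_grouped and append a 'Total' row in the same string-valued format.
rows = list(to_grouped(entries))

# Aggregate the per-name rows into a final 'Total' row.
passed = sum(int(r["Passed"]) for r in rows)
failed = sum(int(r["Failed"]) for r in rows)
total = passed + failed
rows.append({
    "name": "Total",
    "Passed": str(passed),
    "Failed": str(failed),
    "Total": str(total),
    "%Pass": f"{100 * passed // total}%"
})
print(rows[-1])
# {'name': 'Total', 'Passed': '4', 'Failed': '1', 'Total': '5', '%Pass': '80%'}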

How to append a new key and value into a dictionary of lists using a function

{'Brazil': [19000000, 810000.00], 'Japan': [128000000, 350000.0]}
If I have dict1 which holds data about a country name, population and area, and I want to use a function to add a new country, population and area in the same format as my current dictionary (where the values are enclosed in a list), how would I do that?
Use the code below:
dict1 = {'Brazil': [19000000, 810000.00], 'Japan': [128000000, 350000.0]}  # initial dict

def addToDict(c, p, a):
    # Add new values to dict1: country (c) as the key, [population (p), area (a)] as the value.
    dict1[c] = [p, a]
    return dict1  # optional - used here just to see the final value

print(addToDict('India', 1000000000, 10098978.778))  # main call
#result --> {'Brazil': [19000000, 810000.0], 'Japan': [128000000, 350000.0], 'India': [1000000000, 10098978.778]}
See the addNew function:
dict1 = {'Brazil': [19000000, 810000.00], 'Japan': [128000000, 350000.0]}

def addNew(dict, country, population, area):
    dict.update({country: [population, area]})

addNew(dict1, 'countryTest', 200000, 1000.00)
print(dict1)
Output:
{'Brazil': [19000000, 810000.0], 'Japan': [128000000, 350000.0], 'countryTest': [200000, 1000.0]}
pop = 10
area = 20
name = "A"
dict1[name] = [pop, area]
dict1 = {}

def appendDict(country, population, area):
    dict1[country] = [population, area]

appendDict("Brazil", "19000000", "810000.00")
appendDict("Japan", "128000000", "350000.0")
print(dict1)
Here is a good example:
travel_log = [
    {
        "country": "France",
        "visits": 12,
        "cities": ["Paris", "Lille", "Dijon"]
    },
    {
        "country": "Germany",
        "visits": 5,
        "cities": ["Berlin", "Hamburg", "Stuttgart"]
    },
]

def add_new_country(country, visits, cities):
    dict1 = {}
    dict1['country'] = country
    dict1['visits'] = visits
    dict1['cities'] = cities
    travel_log.append(dict1)

add_new_country("Russia", 2, ["Moscow", "Saint Petersburg"])
print(travel_log)

Python: Convert multiple columns of CSV file to nested Json

This is my input CSV file with multiple columns. I would like to convert this CSV file to a JSON file with department, departmentID, and one nested field called customer that holds first and last.
department, departmentID, first, last
fans, 1, Caroline, Smith
fans, 1, Jenny, White
students, 2, Ben, CJ
students, 2, Joan, Carpenter
...
Output json file what I need:
[
    {
        "department": "fans",
        "departmentID": "1",
        "customer": [
            {
                "first": "Caroline",
                "last": "Smith"
            },
            {
                "first": "Jenny",
                "last": "White"
            }
        ]
    },
    {
        "department": "students",
        "departmentID": 2,
        "user": [
            {
                "first": "Ben",
                "last": "CJ"
            },
            {
                "first": "Joan",
                "last": "Carpenter"
            }
        ]
    }
]
my code:
from csv import DictReader
from itertools import groupby

with open('data.csv') as csvfile:
    r = DictReader(csvfile, skipinitialspace=True)
    data = [dict(d) for d in r]

groups = []
uniquekeys = []

for k, g in groupby(data, lambda r: (r['group'], r['groupID'])):
    groups.append({
        "group": k[0],
        "groupID": k[1],
        "user": [{k: v for k, v in d.items() if k != 'group'} for d in list(g)]
    })
    uniquekeys.append(k)

pprint(groups)
My issue is: groupID shows up twice in the data, both inside and outside the nested JSON. What I want is to use group and groupID as the groupby key.
The issue is that you mixed up the names of the keys, so this line:
"user": [{k:v for k, v in d.items() if k != 'group'} for d in list(g)]
did not strip them properly from your dictionary: there was no such key, so nothing was deleted.
I do not fully understand which keys you want, so the following example assumes that data.csv looks exactly like in your question (department and departmentID), but the script converts them to group and groupID:
from csv import DictReader
from itertools import groupby
from pprint import pprint

with open('data.csv') as csvfile:
    r = DictReader(csvfile, skipinitialspace=True)
    data = [dict(d) for d in r]

groups = []
uniquekeys = []

for k, g in groupby(data, lambda r: (r['department'], r['departmentID'])):
    groups.append({
        "group": k[0],
        "groupID": k[1],
        "user": [{k: v for k, v in d.items() if k not in ['department', 'departmentID']} for d in list(g)]
    })
    uniquekeys.append(k)

pprint(groups)
Output:
[{'group': 'fans',
'groupID': '1',
'user': [{'first': 'Caroline', 'last': 'Smith'},
{'first': 'Jenny', 'last': 'White'}]},
{'group': 'students',
'groupID': '2',
'user': [{'first': 'Ben', 'last': 'CJ'},
{'first': 'Joan', 'last': 'Carpenter'}]}]
I used different keys so it is really obvious which line does what, and so it's easy to customize for different keys in the input or output. A sketch of writing the result out as a JSON file follows.
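Since the question asks for a JSON file, here is a short sketch of the final write step (the filename output.json is just a placeholder):
import json

# Write the grouped structure to disk as JSON (filename is an example).
with open('output.json', 'w') as f:
    json.dump(groups, f, indent=4)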
