Convert json dictionary to dataframe in Python

My API gives me a json file as output with the following structure:
{
    "results": [
        {
            "statement_id": 0,
            "series": [
                {
                    "name": "PCJeremy",
                    "tags": {
                        "host": "001"
                    },
                    "columns": [
                        "time",
                        "memory"
                    ],
                    "values": [
                        [
                            "2021-03-20T23:00:00Z",
                            1049911288
                        ],
                        [
                            "2021-03-21T00:00:00Z",
                            1057692712
                        ]
                    ]
                },
                {
                    "name": "PCJohnny",
                    "tags": {
                        "host": "002"
                    },
                    "columns": [
                        "time",
                        "memory"
                    ],
                    "values": [
                        [
                            "2021-03-20T23:00:00Z",
                            407896064
                        ],
                        [
                            "2021-03-21T00:00:00Z",
                            406847488
                        ]
                    ]
                }
            ]
        }
    ]
}
I want to transform this output to a pandas dataframe so I can create some reports from it. I tried using the pd.DataFrame.from_dict method:
with open(fn) as f:
    data = json.load(f)
print(pd.DataFrame.from_dict(data))
But as a result, I just get one column and one row with all the data crammed into a single cell:
results
0 {'statement_id': 0, 'series': [{'name': 'PCJer...
The structure is quite hard for me to understand, as I am no professional. I would like to get a dataframe with 4 columns: name, host, time and memory, with a row of data for every combination of values in the json file. Example:
name host time memory
PCJeremy 001 "2021-03-20T23:00:00Z" 1049911288
PCJeremy 001 "2021-03-21T00:00:00Z" 1057692712
Is this in any way possible? Thanks a lot in advance!

First, extract the data you are interested in from the json:
extracted_data = []
for series in data['results'][0]['series']:
    d = {}
    d['name'] = series['name']
    d['host'] = series['tags']['host']
    d['time'] = [value[0] for value in series['values']]
    d['memory'] = [value[1] for value in series['values']]
    extracted_data.append(d)
df = pd.DataFrame(extracted_data)
# print(df)
name host time memory
0 PCJeremy 001 [2021-03-20T23:00:00Z, 2021-03-21T00:00:00Z] [1049911288, 1057692712]
1 PCJohnny 002 [2021-03-20T23:00:00Z, 2021-03-21T00:00:00Z] [407896064, 406847488]
Second, explode the multiple-valued columns into rows:
df1 = pd.concat([df.explode('time')['time'], df.explode('memory')['memory']], axis=1)
df_ = df.drop(['time','memory'], axis=1).join(df1).reset_index(drop=True)
# print(df_)
name host time memory
0 PCJeremy 001 2021-03-20T23:00:00Z 1049911288
1 PCJeremy 001 2021-03-21T00:00:00Z 1057692712
2 PCJohnny 002 2021-03-20T23:00:00Z 407896064
3 PCJohnny 002 2021-03-21T00:00:00Z 406847488
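Alternatively, if your pandas is 1.3 or newer, DataFrame.explode accepts a list of columns, which collapses the two steps into one; a minimal equivalent sketch:
df_ = df.explode(['time', 'memory']).reset_index(drop=True)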
By carefully constructing the dicts, it can be done without exploding:
extracted_data = []
for series in data['results'][0]['series']:
    d = {}
    d['name'] = series['name']
    d['host'] = series['tags']['host']
    for values in series['values']:
        d_ = d.copy()
        for column, value in zip(series['columns'], values):
            d_[column] = value
        extracted_data.append(d_)
df = pd.DataFrame(extracted_data)
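Printing this df yields the flat table directly, with no explode step needed:
# print(df)
name host time memory
0 PCJeremy 001 2021-03-20T23:00:00Z 1049911288
1 PCJeremy 001 2021-03-21T00:00:00Z 1057692712
2 PCJohnny 002 2021-03-20T23:00:00Z 407896064
3 PCJohnny 002 2021-03-21T00:00:00Z 406847488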

You could use jmespath to extract the data; it is quite a handy tool for such nested json data. You can read the docs for more details; I will summarize the basics: if you want to access a key, use a dot; if you want to access values in a list, use []. Combining these two helps in traversing the json paths. There are more tools, but these basics should get you started.
Your json is wrapped in a data variable:
data
{'results': [{'statement_id': 0,
'series': [{'name': 'PCJeremy',
'tags': {'host': '001'},
'columns': ['time', 'memory'],
'values': [['2021-03-20T23:00:00Z', 1049911288],
['2021-03-21T00:00:00Z', 1057692712]]},
{'name': 'PCJohnny',
'tags': {'host': '002'},
'columns': ['time', 'memory'],
'values': [['2021-03-20T23:00:00Z', 407896064],
['2021-03-21T00:00:00Z', 406847488]]}]}]}
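For instance, pulling just the series names out of this structure with the dot-and-brackets traversal described above:
import jmespath
jmespath.search('results[].series[].name', data)
# ['PCJeremy', 'PCJohnny']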
Let's create an expression to parse the json, and get the specific values:
expression = """{name: results[].series[].name,
host: results[].series[].tags.host,
time: results[].series[].values[*][0],
memory: results[].series[].values[*][-1]}
"""
Compile the expression and search the json data with it:
expression = jmespath.compile(expression).search(data)
expression
{'name': ['PCJeremy', 'PCJohnny'],
'host': ['001', '002'],
'time': [['2021-03-20T23:00:00Z', '2021-03-21T00:00:00Z'],
['2021-03-20T23:00:00Z', '2021-03-21T00:00:00Z']],
'memory': [[1049911288, 1057692712], [407896064, 406847488]]}
Note that time and memory are nested lists matching the values in data. Create the dataframe and explode the relevant columns:
pd.DataFrame(expression).apply(pd.Series.explode)
name host time memory
0 PCJeremy 001 2021-03-20T23:00:00Z 1049911288
0 PCJeremy 001 2021-03-21T00:00:00Z 1057692712
1 PCJohnny 002 2021-03-20T23:00:00Z 407896064
1 PCJohnny 002 2021-03-21T00:00:00Z 406847488
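If you prefer a clean sequential index instead of the repeated group labels, chain a reset_index onto the result:
pd.DataFrame(expression).apply(pd.Series.explode).reset_index(drop=True)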

Related

Normalizing json using pandas with inconsistent nested lists/dictionaries

I've been using pandas' json_normalize for a bit but ran into a problem with a specific json file, similar to the one seen here: https://github.com/pandas-dev/pandas/issues/37783#issuecomment-1148052109
I'm trying to find a way to retrieve the data within the Ats -> Ats dict and return any null values (like the one seen in the ID:101 entry) as NaN values in the dataframe. Ignoring errors within the json_normalize call doesn't prevent the TypeError that stems from trying to iterate through a null value.
Any advice or methods to receive a valid dataframe out of data with this structure is greatly appreciated!
import json
import pandas as pd

data = """[
    {
        "ID": "100",
        "Ats": {
            "Ats": [
                {
                    "Name": "At1",
                    "Desc": "Lazy At"
                }
            ]
        }
    },
    {
        "ID": "101",
        "Ats": null
    }
]"""
data = json.loads(data)
df = pd.json_normalize(data, ["Ats", "Ats"], "ID", errors='ignore')
df.head()
TypeError: 'NoneType' object is not iterable
I tried to iterate through the Ats dictionary, which would work normally for the data with ID 100 but not with ID 101. I expected ignoring errors within the function to return a NaN value in a dataframe but instead received a TypeError for trying to iterate through a null value.
The desired output would look like this:
Name Desc ID
0 At1 Lazy At 100
1 NaN NaN 101
Building all the rows first with map and constructing the DataFrame once can be more efficient on large datasets than appending to a DataFrame row by row.
import numpy as np

data = json.loads(data)
desired_data = list(
    map(lambda x: pd.json_normalize(x, ["Ats", "Ats"], "ID").to_dict(orient="records")[0]
        if x["Ats"] is not None
        else {"ID": x["ID"], "Name": np.nan, "Desc": np.nan}, data))
df = pd.DataFrame(desired_data)
Output:
Name Desc ID
0 At1 Lazy At 100
1 NaN NaN 101
You might want to consider this simple try and except approach when working with small datasets: whenever a TypeError is raised, a new row of NaNs is added instead. Note that DataFrame.append was removed in pandas 2.0, so the rows are collected in a list and concatenated at the end.
Example:
data = json.loads(data)
rows = []
for item in data:
    try:
        rows.append(pd.json_normalize(item, ["Ats", "Ats"], "ID"))
    except TypeError:
        # null "Ats": fall back to a single row of NaNs
        rows.append(pd.DataFrame([{"ID": item["ID"], "Name": np.nan, "Desc": np.nan}]))
df = pd.concat(rows, ignore_index=True)
print(df)
Output:
Name Desc ID
0 At1 Lazy At 100
1 NaN NaN 101
Maybe you can create a DataFrame from the data normally (without pd.json_normalize) and then transform it into the requested form afterwards:
import json
import pandas as pd
data = """\
[
{
"ID": "100",
"Ats": {
"Ats": [
{
"Name": "At1",
"Desc": "Lazy At"
}
]
}
},
{
"ID": "101",
"Ats": null
}
]"""
data = json.loads(data)
df = pd.DataFrame(data)
df["Ats"] = df["Ats"].str["Ats"]
df = df.explode("Ats")
df = pd.concat([df, df.pop("Ats").apply(pd.Series, dtype=object)], axis=1)
print(df)
Prints:
ID Name Desc
0 100 At1 Lazy At
1 101 NaN NaN

How to remove redundant elements from a JSON string in Python

I have the below JSON string which I converted from a Pandas data frame.
[
    {
        "ID":"1",
        "Salary1":69.43,
        "Salary2":513.0,
        "Date":"2022-06-09",
        "Name":"john",
        "employeeId":12,
        "DateTime":"2022-09-0710:57:55"
    },
    {
        "ID":"2",
        "Salary1":691.43,
        "Salary2":5123.0,
        "Date":"2022-06-09",
        "Name":"john",
        "employeeId":12,
        "DateTime":"2022-09-0710:57:55"
    }
]
I want to change the above JSON to the below format.
[
    {
        "Date":"2022-06-09",
        "Name":"john",
        "DateTime":"2022-09-0710:57:55",
        "employeeId":12,
        "Results":[
            {
                "ID":1,
                "Salary1":69.43,
                "Salary2":513.0
            },
            {
                "ID":2,
                "Salary1":691.43,
                "Salary2":5123.0
            }
        ]
    }
]
Kindly let me know how we can achieve this in Python.
Original Dataframe:
ID Salary1 Salary2 Date Name employeeId DateTime
1 69.43 513.0 2022-06-09 john 12 2022-09-0710:57:55
2 691.43 5123.0 2022-06-09 john 12 2022-09-0710:57:55
Thank you.
As @Harsha pointed out, you can adapt one of the answers from another question, with just some minor tweaks to make it work for the OP's case:
(
    df.groupby(["Date", "Name", "DateTime", "employeeId"])[["ID", "Salary1", "Salary2"]]
    # to_dict(orient="records") returns a list of rows, where each row is a dict,
    # "oriented" like [{column -> value}, ..., {column -> value}]
    .apply(lambda x: x.to_dict(orient="records"))
    # groupby makes a Series: grouping columns as index, lists of dicts as values.
    # This structure is no good for the final to_dict() call,
    # so here we create a new DataFrame out of the grouped Series,
    # with the Series' index levels as columns of the DataFrame,
    # and also renaming the Series' values to "Results" while we are at it.
    .reset_index(name="Results")
    # Finally we can achieve the desired structure with the last call to to_dict():
    .to_dict(orient="records")
)
# [{'Date': '2022-06-09', 'Name': 'john', 'DateTime': '2022-09-0710:57:55', 'employeeId': 12,
# 'Results': [
# {'ID': 1, 'Salary1': 69.43, 'Salary2': 513.0},
# {'ID': 2, 'Salary1': 691.43, 'Salary2': 5123.0}
# ]}]
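If the end goal is the JSON text itself rather than a list of Python dicts, assign the expression above to a variable, say result, and serialize it with the standard library (with a recent pandas, 1.3+, to_dict returns native Python scalars, so this serializes cleanly):
import json
print(json.dumps(result, indent=4))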

Pandas DataFrame - remove / replace dict values based on key

Say I have a DataFrame defined as:
df = {
    "customer_name": "john",
    "phone": {
        "mobile": 000,
        "office": 111
    },
    "mail": {
        "office": "john@office.com",
        "personal": "john@home.com",
        "fax": "12345"
    }
}
I want to somehow alter the value in column "mail" to remove the key "fax". E.g., the output DataFrame would be something like:
output_df = {
    "customer_name": "john",
    "phone": {
        "mobile": 000,
        "office": 111
    },
    "mail": {
        "office": "john@office.com",
        "personal": "john@home.com"
    }
}
where the "fax" key-value pair has been deleted. I tried to use pandas.map with a dict in the lambda, but it does not work. One bad workaround I had was to normalize the dict, but this created unnecessary output columns, and I could not merge them back. Eg.;
df = pd.json_normalize(df)
Is there a better way to do this?
You can use pop to remove an element from a dict by its key.
import pandas as pd
df['mail'].pop('fax')
df = pd.json_normalize(df)
df
Output:
customer_name phone.mobile phone.office mail.office mail.personal
0 john 0 111 john@office.com john@home.com
Is there a reason you just don't access it directly and delete it?
Like this:
del df['mail']['fax']
print(df)
{'customer_name': 'john',
 'phone': {'mobile': 0, 'office': 111},
 'mail': {'office': 'john@office.com', 'personal': 'john@home.com'}}
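One caveat: both del and pop mutate the original dict and raise KeyError if the key is missing; passing a default to pop avoids that:
df['mail'].pop('fax', None)  # returns the removed value, or None if 'fax' is absent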
This is the simplest technique to achieve your aim.
import pandas as pd

df = {
    "customer_name": "john",
    "phone": {
        "mobile": 000,
        "office": 111
    },
    "mail": {
        "office": "john@office.com",
        "personal": "john@home.com",
        "fax": "12345"
    }
}
del df['mail']['fax']
df = pd.json_normalize(df)
df
Output:
customer_name phone.mobile phone.office mail.office mail.personal
0 john 0 111 john@office.com john@home.com

How do you convert json output to a data frame in python

I need to convert this json file to a data frame in python:
print(resp2)
{
    "totalCount": 1,
    "nextPageKey": null,
    "result": [
        {
            "metricId": "builtin:tech.generic.cpu.usage",
            "data": [
                {
                    "dimensions": [
                        "process_345678"
                    ],
                    "dimensionMap": {
                        "dt.entity.process_group_instance": "process_345678"
                    },
                    "timestamps": [
                        1642021200000,
                        1642024800000,
                        1642028400000
                    ],
                    "values": [
                        10,
                        15,
                        12
                    ]
                }
            ]
        }
    ]
}
Output needs to be like this:
metricId dimensions timestamps values
builtin:tech.generic.cpu.usage process_345678 1642021200000 10
builtin:tech.generic.cpu.usage process_345678 1642024800000 15
builtin:tech.generic.cpu.usage process_345678 1642028400000 12
I have tried this:
print(pd.json_normalize(resp2, "data"))
I get invalid syntax, any ideas?
Take a look at the examples for json_normalize, and you'll see it expects a list of dictionaries whose keys are the column names you want, one dict per row. When you have nested lists/objects, the columns will be flattened using dot-notation, but nested arrays will not be expanded into duplicated rows.
Therefore, parse the data into a flat list, then you can use from_records.
data = []
for r in resp2['result']:
    metricId = r['metricId']
    for d in r['data']:
        dimension = d['dimensions'][0]  # unclear why this is an array
        timestamps = d['timestamps']
        values = d['values']
        for t, v in zip(timestamps, values):
            data.append({'metricId': metricId, 'dimensions': dimension, 'timestamps': t, 'values': v})
df = pd.DataFrame.from_records(data)
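A possible alternative sketch, assuming pandas >= 1.3 (where DataFrame.explode accepts a list of columns): let json_normalize pull out the records and metadata, then explode the parallel timestamps/values lists together.
df = pd.json_normalize(resp2['result'], record_path='data', meta='metricId')
# timestamps and values are same-length lists per row, so they can be exploded in lockstep
df = df.explode(['timestamps', 'values']).explode('dimensions')
df = df[['metricId', 'dimensions', 'timestamps', 'values']]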

DataFrame groupby a column which has dictionary values

I have a dataframe that contains a column of dictionaries, and I need to group that column by values inside the dictionaries. For example,
import pandas as pd
data = [
    {
        "name": "xx",
        "values": {
            "element": [
                {"path": "path1/id1"},
                {"path": "path2/id1"}
            ],
            "nonrequired": [
                {}
            ]
        }
    },
    {
        "name": "yy",
        "values": {
            "element": [
                {"path": "path1/id2"},
                {"path": "path2/id2"}
            ],
            "nonrequired": [
                {}
            ]
        }
    }
]
df = pd.DataFrame(data)
What I'm looking for:
I want to group the column "values" by a specific key inside it; the grouping should be values->element->path.
The grouping should be based on the partial path values: for example, if path="path1/id2", the grouping should be based on path="path1".
After grouping, I need to extract the result as a dictionary.
Expected result:
result = {
    'path1': [
        {
            "name": 'xx',
            "renamecolumn": ['id1', 'id2']
        }
    ],
    'path2': [
        {
            "name": 'yy',
            "renamecolumn": ['id1', 'id2']
        }
    ]
}
Still not 100% sure of the logic of the final dictionary creation as the example input and output don't quite match up. However, here is how you can extract the values and you can create your desired dictionary from there.
# extract the values and split them on the forward slash
df['split'] = df['values'].apply(lambda x: [item['path'].split('/') for item in x['element']])
# generate the path and ids columns
df['path'] = df['split'].apply(lambda x: [x[i][0] for i in range(0, len(x))])
df['ids'] = df['split'].apply(lambda x: [x[i][1] for i in range(0, len(x))])
# separate out all the lists and drop the duplicated cross-product rows
result = df.drop(['values', 'split'], axis=1) \
    .explode('ids').explode('path').drop_duplicates()
Result is:
name path ids
0 xx path1 id1
0 xx path2 id1
1 yy path1 id2
1 yy path2 id2
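From there, one possible way to assemble the requested dictionary (a sketch; "renamecolumn" is kept as the placeholder name from the question, and the grouping follows the extracted result table rather than the ambiguous expected output):
final = {
    path: [
        {"name": name, "renamecolumn": grp["ids"].tolist()}
        for name, grp in g.groupby("name")
    ]
    for path, g in result.groupby("path")
}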
