Creating Dataframe with JSON Keys - python

I have a JSON file which resulted from YouTube's iframe API and I want to put this JSON data into a pandas dataframe, where each JSON key will be a column, and each record should be a new row.
Normally I would use a loop and iterate over the rows of the JSON, but this particular JSON looks like this:
[
"{\"timemillis\":1563467467703,\"date\":\"18.7.2019\",\"time\":\"18:31:07,703\",\"videoId\":\"0HJx2JhQKQk\",\"startSecond\":\"0\",\"stopSecond\":\"90\",\"playerStateNumeric\":1,\"playerStateVerbose\":\"Playing\",\"curTimeFormatted\":\"0:02\",\"totalTimeFormatted\":\"9:46\",\"playoutLevelPercent\":0.3,\"bufferLevelPercent\":1.4,\"qual\":\"large\",\"qualLevels\":[\"hd720\",\"large\",\"medium\",\"small\",\"tiny\",\"auto\"],\"playbackRate\":1,\"playbackRates\":[0.25,0.5,0.75,1,1.25,1.5,1.75,2],\"playerErrorNumeric\":\"\",\"playerErrorVerbose\":\"\"}",
"{\"timemillis\":1563467468705,\"date\":\"18.7.2019\",\"time\":\"18:31:08,705\",\"videoId\":\"0HJx2JhQKQk\",\"startSecond\":\"0\",\"stopSecond\":\"90\",\"playerStateNumeric\":1,\"playerStateVerbose\":\"Playing\",\"curTimeFormatted\":\"0:03\",\"totalTimeFormatted\":\"9:46\",\"playoutLevelPercent\":0.5,\"bufferLevelPercent\":1.4,\"qual\":\"large\",\"qualLevels\":[\"hd720\",\"large\",\"medium\",\"small\",\"tiny\",\"auto\"],\"playbackRate\":1,\"playbackRates\":[0.25,0.5,0.75,1,1.25,1.5,1.75,2],\"playerErrorNumeric\":\"\",\"playerErrorVerbose\":\"\"}"
]
In this JSON, not every key is written on a new line. How can I extract the keys in this case and express them as columns?

A Pythonic solution is to use the keys and values API of the Python dictionary. It should be something like this:
import json

ls = [
"{\"timemillis\":1563467467703,\"date\":\"18.7.2019\",\"time\":\"18:31:07,703\",\"videoId\":\"0HJx2JhQKQk\",\"startSecond\":\"0\",\"stopSecond\":\"90\",\"playerStateNumeric\":1,\"playerStateVerbose\":\"Playing\",\"curTimeFormatted\":\"0:02\",\"totalTimeFormatted\":\"9:46\",\"playoutLevelPercent\":0.3,\"bufferLevelPercent\":1.4,\"qual\":\"large\",\"qualLevels\":[\"hd720\",\"large\",\"medium\",\"small\",\"tiny\",\"auto\"],\"playbackRate\":1,\"playbackRates\":[0.25,0.5,0.75,1,1.25,1.5,1.75,2],\"playerErrorNumeric\":\"\",\"playerErrorVerbose\":\"\"}",
"{\"timemillis\":1563467468705,\"date\":\"18.7.2019\",\"time\":\"18:31:08,705\",\"videoId\":\"0HJx2JhQKQk\",\"startSecond\":\"0\",\"stopSecond\":\"90\",\"playerStateNumeric\":1,\"playerStateVerbose\":\"Playing\",\"curTimeFormatted\":\"0:03\",\"totalTimeFormatted\":\"9:46\",\"playoutLevelPercent\":0.5,\"bufferLevelPercent\":1.4,\"qual\":\"large\",\"qualLevels\":[\"hd720\",\"large\",\"medium\",\"small\",\"tiny\",\"auto\"],\"playbackRate\":1,\"playbackRates\":[0.25,0.5,0.75,1,1.25,1.5,1.75,2],\"playerErrorNumeric\":\"\",\"playerErrorVerbose\":\"\"}"
]
ls = [json.loads(j) for j in ls]
keys = [j.keys() for j in ls]  # the keys of each record
vals = [j.values() for j in ls]  # the matching values, for further processing
print(keys)
print(vals)

The easiest way is to leverage json_normalize from pandas.
import json

import pandas as pd  # json_normalize is exposed at the top level in pandas >= 1.0

input_dict = [
"{\"timemillis\":1563467467703,\"date\":\"18.7.2019\",\"time\":\"18:31:07,703\",\"videoId\":\"0HJx2JhQKQk\",\"startSecond\":\"0\",\"stopSecond\":\"90\",\"playerStateNumeric\":1,\"playerStateVerbose\":\"Playing\",\"curTimeFormatted\":\"0:02\",\"totalTimeFormatted\":\"9:46\",\"playoutLevelPercent\":0.3,\"bufferLevelPercent\":1.4,\"qual\":\"large\",\"qualLevels\":[\"hd720\",\"large\",\"medium\",\"small\",\"tiny\",\"auto\"],\"playbackRate\":1,\"playbackRates\":[0.25,0.5,0.75,1,1.25,1.5,1.75,2],\"playerErrorNumeric\":\"\",\"playerErrorVerbose\":\"\"}",
"{\"timemillis\":1563467468705,\"date\":\"18.7.2019\",\"time\":\"18:31:08,705\",\"videoId\":\"0HJx2JhQKQk\",\"startSecond\":\"0\",\"stopSecond\":\"90\",\"playerStateNumeric\":1,\"playerStateVerbose\":\"Playing\",\"curTimeFormatted\":\"0:03\",\"totalTimeFormatted\":\"9:46\",\"playoutLevelPercent\":0.5,\"bufferLevelPercent\":1.4,\"qual\":\"large\",\"qualLevels\":[\"hd720\",\"large\",\"medium\",\"small\",\"tiny\",\"auto\"],\"playbackRate\":1,\"playbackRates\":[0.25,0.5,0.75,1,1.25,1.5,1.75,2],\"playerErrorNumeric\":\"\",\"playerErrorVerbose\":\"\"}"
]
input_json = [json.loads(j) for j in input_dict]
df = pd.json_normalize(input_json)
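For flat records like these, json_normalize mostly just builds the columns, but it also flattens any nested dicts into dotted column names while leaving list-valued fields (like qualLevels) as Python lists in a single cell. A small sketch with an illustrative record (the nested shape here is made up for demonstration):

```python
import pandas as pd

records = [{"videoId": "0HJx2JhQKQk",
            "qual": {"current": "large", "levels": ["hd720", "large"]}}]
df = pd.json_normalize(records)
# nested dict keys become dotted columns; the list stays in one cell
print(df.columns.tolist())
```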

I think you are asking to break your data down into keys and values, with the keys as columns and the values as rows.
This is my approach (and please always show what your expected output should look like).
ChainMap flattens your dicts into keys and values and is pretty much self-explanatory; note that duplicate keys are merged, so the two records collapse into a single row here.
data = ["{\"timemillis\":1563467467703,\"date\":\"18.7.2019\",\"time\":\"18:31:07,703\",\"videoId\":\"0HJx2JhQKQk\",\"startSecond\":\"0\",\"stopSecond\":\"90\",\"playerStateNumeric\":1,\"playerStateVerbose\":\"Playing\",\"curTimeFormatted\":\"0:02\",\"totalTimeFormatted\":\"9:46\",\"playoutLevelPercent\":0.3,\"bufferLevelPercent\":1.4,\"qual\":\"large\",\"qualLevels\":[\"hd720\",\"large\",\"medium\",\"small\",\"tiny\",\"auto\"],\"playbackRate\":1,\"playbackRates\":[0.25,0.5,0.75,1,1.25,1.5,1.75,2],\"playerErrorNumeric\":\"\",\"playerErrorVerbose\":\"\"}","{\"timemillis\":1563467468705,\"date\":\"18.7.2019\",\"time\":\"18:31:08,705\",\"videoId\":\"0HJx2JhQKQk\",\"startSecond\":\"0\",\"stopSecond\":\"90\",\"playerStateNumeric\":1,\"playerStateVerbose\":\"Playing\",\"curTimeFormatted\":\"0:03\",\"totalTimeFormatted\":\"9:46\",\"playoutLevelPercent\":0.5,\"bufferLevelPercent\":1.4,\"qual\":\"large\",\"qualLevels\":[\"hd720\",\"large\",\"medium\",\"small\",\"tiny\",\"auto\"],\"playbackRate\":1,\"playbackRates\":[0.25,0.5,0.75,1,1.25,1.5,1.75,2],\"playerErrorNumeric\":\"\",\"playerErrorVerbose\":\"\"}"]
import json
from collections import ChainMap

import pandas as pd

data = [json.loads(i) for i in data]
data = dict(ChainMap(*data))  # duplicate keys are merged into one mapping
keys = []
vals = []
for k, v in data.items():
    keys.append(k)
    vals.append(v)
data = pd.DataFrame(zip(keys, vals)).T
new_header = data.iloc[0]
data = data[1:]
data.columns = new_header
#startSecond playbackRates playbackRate qual totalTimeFormatted timemillis playerStateNumeric playerStateVerbose playerErrorNumeric date time stopSecond bufferLevelPercent playerErrorVerbose qualLevels videoId curTimeFormatted playoutLevelPercent
#0 [0.25, 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2] 1 large 9:46 1563467467703 1 Playing 18.7.2019 18:31:07,703 90 1.4 [hd720, large, medium, small, tiny, auto] 0HJx2JhQKQk 0:02 0.3
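Because ChainMap merges the dicts, the output above has only one row even though there were two records. If one row per record is wanted instead, the parsed list can go straight into the DataFrame constructor. A minimal sketch with shortened illustrative records (only two of the original keys):

```python
import json

import pandas as pd

raw = ['{"timemillis": 1563467467703, "qual": "large"}',
       '{"timemillis": 1563467468705, "qual": "large"}']
df = pd.DataFrame([json.loads(r) for r in raw])  # one row per record
```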


updating dictionary in a nested loop

In the code below, I would like to update the fruit_dict dictionary with the mean price of each row. But the code is not working as expected. Kindly help.
#!/usr/bin/python3
import random
import numpy as np
import pandas as pd

price = np.array(range(20)).reshape(5, 4)  # sample data for illustration
fruit_keys = []  # list of keys for dictionary
for i in range(5):
    key = "fruit_" + str(i)
    fruit_keys.append(key)
# initialize a dictionary
fruit_dict = dict.fromkeys(fruit_keys)
fruit_list = []
# print(fruit_dict)
# update dictionary values
for i in range(price.shape[1]):
    for key, value in fruit_dict.items():
        for j in range(price.shape[0]):
            fruit_dict[key] = np.mean(price[j])
    fruit_list.append(fruit_dict)
fruit_df = pd.DataFrame(fruit_list)
print(fruit_df)
Instead of creating the dictionary keys up front from a string pattern, you can fill in the mean of each row while iterating over the rows once.
If you do have a dictionary with a certain key pattern, you can update the values in a single loop, assigning each key the pattern you want displayed. You also don't need an additional list to create a DataFrame; pandas can build one from a dictionary directly, as described in the documentation for creating DataFrames from a dictionary Here. I have provided a sample output which may suit your requirement.
In case you need an output with the mean value as a column and the fruits as rows, you can use the implementation below.
#!/usr/bin/python3
import random
import numpy as np
import pandas as pd

row = 5
column = 4
price = np.array(range(20)).reshape(row, column)  # sample data for illustration
# initialize a dictionary
fruit_dict = {}
for j in range(row):
    fruit_dict['fruit_' + str(j)] = np.mean(price[j])
fruit_df = pd.DataFrame.from_dict(fruit_dict, orient='index', columns=['mean_value'])
print(fruit_df)
This produces the output below. As mentioned, you can build the DataFrame however you wish from a dictionary by referring to the DataFrame documentation above.
mean_value
fruit_0 1.5
fruit_1 5.5
fruit_2 9.5
fruit_3 13.5
fruit_4 17.5
You shouldn't nest the loop over the range and the dictionary items; you should iterate over them together, which you can do with enumerate().
You're also not using value, so there's no need for items().
for i, key in enumerate(fruit_dict):
    fruit_dict[key] = np.mean(price[i])
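A runnable check of this pattern with the question's sample data (a sketch; fruit_dict is initialized the same way as in the question):

```python
import numpy as np

price = np.array(range(20)).reshape(5, 4)
fruit_dict = dict.fromkeys("fruit_" + str(i) for i in range(5))
# one pass: each key gets the mean of the matching row
for i, key in enumerate(fruit_dict):
    fruit_dict[key] = np.mean(price[i])
```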
I could arrive at a solution based on the answer provided by Sangeerththan; please find it below.
#!/usr/bin/python3
import numpy as np
import pandas as pd

fruit_dict = {}
fruit_list = []
price = np.array(range(40)).reshape(4, 10)
for i in range(price.shape[0]):
    mark_price = np.square(price[i])
    for j in range(mark_price.shape[0]):
        fruit_dict['proj_fruit_price_' + str(j)] = np.mean(mark_price[j])
    fruit_list.append(fruit_dict.copy())
fruit_df = pd.DataFrame(fruit_list)
You can use this instead of your loops:
fruit_keys = []  # list of keys for dictionary
for i in range(5):
    key = "fruit_" + str(i)
    fruit_keys.append(key)
out = {fruit_keys[index]: np.mean(price[index]) for index in range(price.shape[0])}
Output:
{'fruit_0': 1.5, 'fruit_1': 5.5, 'fruit_2': 9.5, 'fruit_3': 13.5, 'fruit_4': 17.5}
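If a DataFrame is the end goal, the comprehension's result can feed from_dict directly. A sketch mirroring the sample data:

```python
import numpy as np
import pandas as pd

price = np.array(range(20)).reshape(5, 4)
out = {"fruit_" + str(i): np.mean(price[i]) for i in range(price.shape[0])}
# keys become the index, values the single column
df = pd.DataFrame.from_dict(out, orient="index", columns=["mean_value"])
```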

Creating multiple dataframes using a for loop

Hi I have code which looks like this:
with open("file123.json") as json_file:
    data = json.load(json_file)

df_1 = pd.DataFrame(dict([(k, pd.Series(v)) for k, v in data["spt"][1].items()]))
df_1_made = pd.json_normalize(json.loads(df_1.to_json(orient="records"))).T.drop(["content.id", "shortname", "name"])
df_2 = pd.DataFrame(dict([(k, pd.Series(v)) for k, v in data["spt"][2].items()]))
df_2_made = pd.json_normalize(json.loads(df_2.to_json(orient="records"))).T.drop(["content.id", "shortname", "name"])
df_3 = pd.DataFrame(dict([(k, pd.Series(v)) for k, v in data["spt"][3].items()]))
df_3_made = pd.json_normalize(json.loads(df_3.to_json(orient="records"))).T.drop(["content.id", "shortname", "name"])
The DataFrames are built from a JSON file.
The problem is that I am dealing with different JSON files, and each one can lead to a different number of DataFrames: the code above creates 3, but another file may need 7. Is there any way to make a for loop that takes the length of the data:
length = len(data["spt"])
and makes the correct number of DataFrames from it, so I don't have to do it manually?
The simplest option here will be to put all your dataframes into a dictionary or a list. First define a function that creates the dataframe and then use a list comprehension.
def create_df(section):
    df = pd.DataFrame(
        dict(
            [(k, pd.Series(v)) for k, v in section.items()]
        )
    )
    df = pd.json_normalize(
        json.loads(
            df.to_json(orient="records")
        )
    ).T.drop(["content.id", "shortname", "name"])
    return df

my_list_of_dfs = [create_df(x) for x in data["spt"]]
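If named access later matters more than position, a dict keyed by index works just as well. A sketch with a toy stand-in for data["spt"] (in the real code the per-section work stays inside create_df):

```python
import pandas as pd

spt = [{"a": [1, 2]}, {"a": [3, 4]}, {"a": [5, 6]}]  # toy stand-in for data["spt"]
# one DataFrame per section, retrievable by its index
dfs = {i: pd.DataFrame(section) for i, section in enumerate(spt)}
```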

Extract part of a json keys value and combine

I have this JSON dataset. From it, I only want the "column_names" key and its values and the "data" key and its values. Each value of column_names corresponds to a value in data. How do I combine only these two keys in Python for analysis?
{"dataset":{"id":42635350,"dataset_code":"MSFT","column_names":
["Date","Open","High","Low","Close","Volume","Dividend","Split",
"Adj_Open","Adj_High","Adj_Low","Adj_Close","Adj_Volume"],
"frequency":"daily","type":"Time Series",
"data":[["2017-12-28",85.9,85.93,85.55,85.72,10594344.0,0.0,1.0,83.1976157998082,
83.22667201021558,82.85862667838872,83.0232785373639,10594344.0],
["2017-12-27",85.65,85.98,85.215,85.71,14678025.0,0.0,1.0,82.95548071308001,
83.27509902756123,82.53416566217294,83.01359313389476,14678025.0]]}}
for cnames in data['dataset']['column_names']:
    print(cnames)
for cdata in data['dataset']['data']:
    print(cdata)
The for loops give me the column names and the data values I want, but I am not sure how to combine them into a pandas DataFrame for analysis.
Ref: the above piece of code is from the Quandl website.
data = {
"dataset": {
"id":42635350,"dataset_code":"MSFT",
"column_names": ["Date","Open","High","Low","Close","Volume","Dividend","Split","Adj_Open","Adj_High","Adj_Low","Adj_Close","Adj_Volume"],
"frequency":"daily",
"type":"Time Series",
"data":[
["2017-12-28",85.9,85.93,85.55,85.72,10594344.0,0.0,1.0,83.1976157998082, 83.22667201021558,82.85862667838872,83.0232785373639,10594344.0],
["2017-12-27",85.65,85.98,85.215,85.71,14678025.0,0.0,1.0,82.95548071308001,83.27509902756123,82.53416566217294,83.01359313389476,14678025.0]
]
}
}
Should the following code do what you want?
import pandas as pd

df = pd.DataFrame(columns=data['dataset']['column_names'])  # start empty, grow row by row
for i, data_row in enumerate(data['dataset']['data']):
    df.loc[i] = data_row
It's quite simple:
cols = data['dataset']['column_names']
rows = data['dataset']['data']
labeled_data = [dict(zip(cols, d)) for d in rows]
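The resulting list of per-row dicts can then be handed to pandas directly. A sketch with a shortened, illustrative subset of the columns:

```python
import pandas as pd

cols = ["Date", "Open", "Close"]
rows = [["2017-12-28", 85.9, 85.72],
        ["2017-12-27", 85.65, 85.71]]
# zip pairs each column name with its positional value
labeled_data = [dict(zip(cols, d)) for d in rows]
df = pd.DataFrame(labeled_data)
```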
The following snippet should work for you
import pandas as pd
df = pd.DataFrame(data['dataset']['data'],columns=data['dataset']['column_names'])
Check the following link to learn more
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html

How to create a hierarchical dictionary from a csv?

I am trying to build a hierarchical dict (please see below the desired output I am looking for) from my csv file.
The following is my code so far. I was searching through itertools, thinking it might be the right tool for this task; I cannot use pandas. I think I may need to put the values of the key into a new dictionary, then try to map the policy interfaces and build a new dict.
import csv
import pprint
from itertools import groupby

new_dict = []
with open("test_.csv", newline="") as file_data:  # text mode for Python 3's csv module
    reader = csv.DictReader(file_data)
    for keys, grouping in groupby(reader, lambda x: x['groupA_policy']):
        new_dict.append(list(grouping))
pprint.pprint(new_dict)
My csv file looks like this:
GroupA_Host,groupA_policy,groupA_policy_interface,GroupB_Host,GroupB_policy,GroupB_policy_interface
host1,policy10,eth0,host_R,policy90,eth9
host1,policy10,eth0.1,host_R,policy90,eth9.1
host2,policy20,eth2,host_Q,policy80,eth8
host2,policy20,eth2.1,host_Q,policy80,eth8.1
The desired output I want achieve is this:
[{'GroupA_Host': 'host1',
  'GroupB_Host': 'host_R',
  'GroupB_policy': 'policy90',
  'groupA_policy': 'policy10',
  'interfaces': [{'GroupB_policy_interface': 'eth9',
                  'groupA_policy_interface': 'eth0'},
                 {'GroupB_policy_interface': 'eth9.1',
                  'groupA_policy_interface': 'eth0.1'}]},
 {'GroupA_Host': 'host2',
  'GroupB_Host': 'host_Q',
  'GroupB_policy': 'policy80',
  'groupA_policy': 'policy20',
  'interfaces': [{'GroupB_policy_interface': 'eth8',
                  'groupA_policy_interface': 'eth2'},
                 {'GroupB_policy_interface': 'eth8.1',
                  'groupA_policy_interface': 'eth2.1'}]}]
I don't think itertools is necessary here. The important thing is to recognize that you're using ('GroupA_Host', 'GroupB_Host', 'groupA_policy', 'GroupB_policy') as the key for the grouping -- so you can use a dictionary to collect interfaces keyed on this key:
d = {}
for row in reader:
    key = row['GroupA_Host'], row['GroupB_Host'], row['groupA_policy'], row['GroupB_policy']
    interface = {'groupA_policy_interface': row['groupA_policy_interface'],
                 'GroupB_policy_interface': row['GroupB_policy_interface']}
    if key in d:
        d[key].append(interface)
    else:
        d[key] = [interface]

as_list = []
for key, interfaces in d.items():  # .iteritems() is Python 2 only
    record = {}
    record['GroupA_Host'] = key[0]
    record['GroupB_Host'] = key[1]
    record['groupA_policy'] = key[2]
    record['GroupB_policy'] = key[3]
    record['interfaces'] = interfaces
    as_list.append(record)
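The if/else bookkeeping above can also be folded into collections.defaultdict. A sketch over two illustrative rows standing in for the csv reader's output:

```python
from collections import defaultdict

rows = [
    {"GroupA_Host": "host1", "GroupB_Host": "host_R",
     "groupA_policy": "policy10", "GroupB_policy": "policy90",
     "groupA_policy_interface": "eth0", "GroupB_policy_interface": "eth9"},
    {"GroupA_Host": "host1", "GroupB_Host": "host_R",
     "groupA_policy": "policy10", "GroupB_policy": "policy90",
     "groupA_policy_interface": "eth0.1", "GroupB_policy_interface": "eth9.1"},
]
d = defaultdict(list)  # missing keys start as empty lists automatically
for row in rows:
    key = (row["GroupA_Host"], row["GroupB_Host"],
           row["groupA_policy"], row["GroupB_policy"])
    d[key].append({"groupA_policy_interface": row["groupA_policy_interface"],
                   "GroupB_policy_interface": row["GroupB_policy_interface"]})
```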

Making a pandas dataframe from a .npy file

I'm trying to make a pandas dataframe from a .npy file which, when read in using np.load, returns a numpy array containing a dictionary. My initial instinct was to extract the dictionary and then create a dataframe using pd.from_dict, but this fails every time because I can't seem to get the dictionary out of the array returned from np.load. It looks like it's just np.array([dictionary, dtype=object]), but I can't get the dictionary by indexing the array or anything like that. I've also tried using np.load('filename').item() but the result still isn't recognized by pandas as a dictionary.
Alternatively, I tried pd.read_pickle and that didn't work either.
How can I get this .npy dictionary into my dataframe? Here's the code that keeps failing...
import pandas as pd
import numpy as np
import os

targetdir = '../test_dir/'
filenames = []
successful = []
unsuccessful = []
for dirs, subdirs, files in os.walk(targetdir):
    for name in files:
        filenames.append(name)
        path_to_use = os.path.join(dirs, name)
        if path_to_use.endswith('.npy'):
            try:
                file_dict = np.load(path_to_use, allow_pickle=True).item()
                df = pd.DataFrame.from_dict(file_dict)  # pd.from_dict does not exist
                #df = pd.read_pickle(path_to_use)
                successful.append(path_to_use)
            except Exception:
                unsuccessful.append(path_to_use)
                continue
print(str(len(successful)) + " files were loaded successfully!")
print("The following files were not loaded:")
for item in unsuccessful:
    print(item + "\n")
print(df)
Let's assume once you load the .npy, the item (np.load(path_to_use).item()) looks similar to this;
{'user_c': 'id_003', 'user_a': 'id_001', 'user_b': 'id_002'}
So, if you need to come up with a DataFrame like below using above dictionary;
user_name user_id
0 user_c id_003
1 user_a id_001
2 user_b id_002
You can use:
df = pd.DataFrame(list(file_dict.items()), columns=['user_name', 'user_id'])  # .iteritems() is Python 2 only
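Note that on NumPy 1.16.3 and later, np.load refuses to unpickle object arrays unless allow_pickle=True is passed, which is the likely cause of the loading trouble. A round-trip sketch using a temporary file:

```python
import os
import tempfile

import numpy as np

d = {"user_a": "id_001", "user_b": "id_002"}
path = os.path.join(tempfile.mkdtemp(), "demo.npy")
np.save(path, d)  # a dict is stored as a 0-d object array (pickled)
loaded = np.load(path, allow_pickle=True).item()  # .item() unwraps the dict
```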
If you have a list of dictionaries like below;
users = [{'u_name': 'user_a', 'u_id': 'id_001'}, {'u_name': 'user_b', 'u_id': 'id_002'}]
You can simply use
df = pd.DataFrame(users)
To come up with a DataFrame similar to;
u_id u_name
0 id_001 user_a
1 id_002 user_b
Seems like you have a dictionary similar to this;
data = {
'Center': [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],
'Vpeak': [1.1, 2.2],
'ID': ['id_001', 'id_002']
}
In this case, you can simply use;
df = pd.DataFrame(data) # df = pd.DataFrame(file_dict.item()) in your case
To come up with a DataFrame similar to;
Center ID Vpeak
0 [0.1, 0.2, 0.3] id_001 1.1
1 [0.4, 0.5, 0.6] id_002 2.2
If you have ndarray within the dict, do some preprocessing similar to below; and use it to create the df;
for key in data:
if isinstance(data[key], np.ndarray):
data[key] = data[key].tolist()
df = pd.DataFrame(data)
