I have a dataframe as below with NaN values.
Category,Type,Capacity,Efficiency
Chiller,ChillerA,1000,6.0
Chiller,ChillerB,2000,5.5
Cooling Tower,Cooling TowerA,1000,NaN
Cooling Tower,Cooling TowerB,2000,NaN
I want to convert this pandas dataframe to below json format.
Can anyone tell me how to implement this?
{
  "Chiller": {
    "ChillerA": {
      "Capacity": 1000,
      "Efficiency": 6.0
    },
    "ChillerB": {
      "Capacity": 2000,
      "Efficiency": 5.5
    }
  },
  "Cooling Tower": {
    "Cooling TowerA": {
      "Capacity": 1000   <-- Efficiency omitted because it was NaN for this row
    },
    "Cooling TowerB": {
      "Capacity": 2000
    }
  }
}
This solution gets you to the desired output using a nested dict comprehension:
import numpy as np

df = df.set_index(['Category', 'Type'])
result = {
    level: {
        name: {k: v for k, v in row.items() if not np.isnan(v)}
        for name, row in df.xs(level).to_dict('index').items()
    }
    for level in df.index.levels[0]
}
#{'Cooling Tower':
# {'Cooling TowerA':
# {'Capacity': 1000.0},
# 'Cooling TowerB':
# {'Capacity': 2000.0}},
# 'Chiller':
# {'ChillerA': {'Efficiency': 6.0, 'Capacity': 1000.0},
# 'ChillerB': {'Efficiency': 5.5, 'Capacity': 2000.0}}}
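If you need the actual JSON string rather than a Python dict, a minimal follow-up sketch (assuming the comprehension above was stored in result):
import json

# Serialize the nested dict to a JSON string; the NaN entries were already
# filtered out, so every remaining value is JSON-safe.
json_str = json.dumps(result, indent=2)
print(json_str)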
I have been provided a very large dictionary in the following format, and I am unsure how to convert it to a dataframe that I can use to perform basic functions on.
{
'hash': {
'ids': [List of Unique IDs of records this hash has been seen in],
'weights': [List of weights],
'values': [List of values],
'measure_dates': [List of dates]
}
}
The number of items in ids, weights, values and measure_dates is the same within a hash. Different hashes can have a different number of items though. It depends on how often a measurement is taken.
Real(ish) data for an example of three records:
{
'IRR-99876-UTY': {
'ids': [9912234, 9912237, 45555889],
'weights': [0.09, 0.09, 0.113],
'values': [2.31220, 2.31219, 2.73944],
'measure_dates': ['2021-10-14', '2021-10-15', '2022-12-17']
},
'IRR-10881-CKZ': {
'ids': [45557231],
'weights': [0.31],
'values': [5.221001],
'measure_dates': ['2022-12-31']
},
'IRR-881-CKZ': {
'ids': [24661, 24662, 29431],
'weights': [0.05, 0.07, 0.105],
'values': [3.254, 4.500001, 7.3221],
'measure_dates': ['2018-05-05', '2018-05-06', '2018-07-01']
}
}
The value in an index corresponds to the same measurement being taken. For example in IRR-881-CKZ, there are 3 measurements.
Measurement 1 taken on 2018-05-05, with id 24661, weight 0.05, and value 3.254
Measurement 2 taken on 2018-05-06, with id 24662, weight 0.07 and value 4.500001
Measurement 3 taken on 2018-07-01, with id 29431, weight 0.105 and value 7.3221
No other combination of indexes is valid for this hash.
Information that I'm going to be attempting to get data on:
Which hash(es) are measured most often. This can be determined by which hashes have the largest number of items in the ids list. In this example, the first and third records have three items, so they would be the top results. I'd love to be able to use something like nlargest() or sort_values().head() for this, instead of parsing each record and counting the number of items.
Which hashes have an average value between two values. If I had a set number of columns, I think I'd be able to do something like df['average'] = df[['value1', 'value2']].mean(axis=1), but with a variable number of values I'm not sure how to do this.
How can I convert this dictionary of dictionaries of lists to a usable dataframe?
You can use .from_dict() in pandas to convert it to a dataframe.
import pandas as pd
# dictionary of dictionaries with list values
data = {
'IRR-99876-UTY': {
'ids': [9912234, 9912237, 45555889],
'weights': [0.09, 0.09, 0.113],
'values': [2.31220, 2.31219, 2.73944],
'measure_dates': ['2021-10-14', '2021-10-15', '2022-12-17']
},
'IRR-10881-CKZ': {
'ids': [45557231],
'weights': [0.31],
'values': [5.221001],
'measure_dates': ['2022-12-31']
},
'IRR-881-CKZ': {
'ids': [24661, 24662, 29431],
'weights': [0.05, 0.07, 0.105],
'values': [3.254, 4.500001, 7.3221],
'measure_dates': ['2018-05-05', '2018-05-06', '2018-07-01']
}
}
# convert to data frame
df = pd.DataFrame.from_dict(data, orient='index')
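Note that with orient='index' each cell still holds a list. A minimal follow-up sketch to flatten those lists into one row per measurement (multi-column DataFrame.explode requires pandas >= 1.3):
# Explode all four list columns together so every measurement
# becomes its own row; the per-hash list lengths already match.
df = df.explode(['ids', 'weights', 'values', 'measure_dates'])
print(df)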
You'll need to convert each entry of this dictionary into its own DataFrame and concatenate those to effectively work with this data:
Creating a Usable DataFrame
import pandas as pd
data = {
'IRR-99876-UTY': {
'ids': [9912234, 9912237, 45555889],
'weights': [0.09, 0.09, 0.113],
'values': [2.31220, 2.31219, 2.73944],
'measure_dates': ['2021-10-14', '2021-10-15', '2022-12-17']
},
'IRR-10881-CKZ': {
'ids': [45557231],
'weights': [0.31],
'values': [5.221001],
'measure_dates': ['2022-12-31']
},
'IRR-881-CKZ': {
'ids': [24661, 24662, 29431],
'weights': [0.05, 0.07, 0.105],
'values': [3.254, 4.500001, 7.3221],
'measure_dates': ['2018-05-05', '2018-05-06', '2018-07-01']
}
}
df = pd.concat(
{k: pd.DataFrame(v) for k, v in data.items()},
names=['hash', 'obs']
)
print(df)
ids weights values measure_dates
hash obs
IRR-99876-UTY 0 9912234 0.090 2.312200 2021-10-14
1 9912237 0.090 2.312190 2021-10-15
2 45555889 0.113 2.739440 2022-12-17
IRR-10881-CKZ 0 45557231 0.310 5.221001 2022-12-31
IRR-881-CKZ 0 24661 0.050 3.254000 2018-05-05
1 24662 0.070 4.500001 2018-05-06
2 29431 0.105 7.322100 2018-07-01
Now that our data is cleaned up, we can solve your questions.
Solving Your Questions
Which hash(es) are measured the most often
This is simply a Series.value_counts operation. However, since the data we're interested in is currently in the index, we'll need to grab it out with Index.get_level_values first.
Which hashes have an average value between two values.
This is a groupby operation where we calculate the average of the "values" column per unique "hash". From there we can use the Series.between method to check whether those averages fall between two arbitrary values.
# Which hash(es) are measured the most often.
df.index.get_level_values('hash').value_counts()
# IRR-99876-UTY 3
# IRR-881-CKZ 3
# IRR-10881-CKZ 1
# Name: hash, dtype: int64
# ---
# Which hashes have an average value between two values.
## Here you can see that I'm testing whether the average is between 0 and 4
print(df.groupby('hash')['values'].mean().between(0, 4))
# IRR-10881-CKZ False
# IRR-881-CKZ False
# IRR-99876-UTY True
# Name: values, dtype: bool
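To tie this back to the original wish for nlargest(): it works directly on the counts, and the boolean mask above can be reduced to the list of matching hashes. A short sketch, reusing the 0 and 4 bounds from the example:
# Top-2 most frequently measured hashes
counts = df.index.get_level_values('hash').value_counts()
print(counts.nlargest(2))

# Just the hashes whose average value falls between the bounds
avg = df.groupby('hash')['values'].mean()
print(avg[avg.between(0, 4)].index.tolist())
# ['IRR-99876-UTY']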
I have the below JSON string which I converted from a Pandas data frame.
[
{
"ID":"1",
"Salary1":69.43,
"Salary2":513.0,
"Date":"2022-06-09",
"Name":"john",
"employeeId":12,
"DateTime":"2022-09-0710:57:55"
},
{
"ID":"2",
"Salary1":691.43,
"Salary2":5123.0,
"Date":"2022-06-09",
"Name":"john",
"employeeId":12,
"DateTime":"2022-09-0710:57:55"
}
]
I want to change the above JSON to the below format.
[
{
"Date":"2022-06-09",
"Name":"john",
"DateTime":"2022-09-0710:57:55",
"employeeId":12,
"Results":[
{
"ID":1,
"Salary1":69.43,
"Salary2":513
},
{
"ID":"2",
"Salary1":691.43,
"Salary2":5123
}
]
}
]
Kindly let me know how we can achieve this in Python.
Original Dataframe:
ID Salary1 Salary2 Date Name employeeId DateTime
1 69.43 513.0 2022-06-09 john 12 2022-09-0710:57:55
2 691.43 5123.0 2022-06-09 john 12 2022-09-0710:57:55
Thank you.
As @Harsha pointed out, you can adapt one of the answers from another question, with just some minor tweaks to make it work for the OP's case:
(
df.groupby(["Date","Name","DateTime","employeeId"])[["ID","Salary1","Salary2"]]
# to_dict(orient="records") - returns list of rows, where each row is a dict,
# "oriented" like [{column -> value}, … , {column -> value}]
.apply(lambda x: x.to_dict(orient="records"))
# groupby + apply makes a Series: with the grouping columns as index, and dicts as values.
# This structure is no good for the next to_dict() method.
# So here we create a new DataFrame out of the grouped Series,
# with the Series' index levels as columns of the DataFrame,
# and also renaming our Series' values to "Results" while we are at it.
.reset_index(name="Results")
# Finally we can achieve the desired structure with the last call to to_dict():
.to_dict(orient="records")
)
# [{'Date': '2022-06-09', 'Name': 'john', 'DateTime': '2022-09-0710:57:55', 'employeeId': 12,
# 'Results': [
# {'ID': 1, 'Salary1': 69.43, 'Salary2': 513.0},
# {'ID': 2, 'Salary1': 691.43, 'Salary2': 5123.0}
# ]}]
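If the goal is the JSON string itself rather than a list of dicts, a small follow-up sketch (assuming the expression above was stored in records):
import json

# default=str guards against non-serializable values such as Timestamps,
# in case Date/DateTime are real datetime dtypes rather than strings.
json_str = json.dumps(records, indent=2, default=str)
print(json_str)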
Say I have a DataFrame defined as:
df = {
"customer_name":"john",
"phone":{
"mobile":000,
"office":111
},
"mail":{
"office":"john#office.com",
"personal":"john#home.com",
"fax":"12345"
}
}
I want to somehow alter the value in column "mail" to remove the key "fax". Eg, the output DataFrame would be something like:
output_df = {
"customer_name":"john",
"phone":{
"mobile":000,
"office":111
},
"mail":{
"office":"john#office.com",
"personal":"john#home.com"
}
}
where the "fax" key-value pair has been deleted. I tried to use pandas.map with a dict in the lambda, but it does not work. One bad workaround I had was to normalize the dict, but this created unnecessary output columns, and I could not merge them back. Eg.;
df = pd.json_normalize(df)
Is there a better way for this?
You can use pop to remove an element with the given key from a dict.
import pandas as pd
df['mail'].pop('fax')
df = pd.json_normalize(df)
df
Output:
customer_name phone.mobile phone.office mail.office mail.personal
0 john 0 111 john@office.com john@home.com
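One caveat worth hedging: dict.pop raises a KeyError when the key is missing, so if 'fax' is not guaranteed to be present, pass a default:
# Returns None instead of raising KeyError when 'fax' is absent.
df['mail'].pop('fax', None)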
Is there a reason you just don't access it directly and delete it?
Like this:
del df['mail']['fax']
print(df)
{'customer_name': 'john',
'phone': {'mobile': 0, 'office': 111},
'mail': {'office': 'john@office.com', 'personal': 'john@home.com'}}
This is the simplest technique to achieve your aim.
import pandas as pd
import numpy as np
df = {
"customer_name":"john",
"phone":{
"mobile":000,
"office":111
},
"mail":{
"office":"john#office.com",
"personal":"john#home.com",
"fax":"12345"
}
}
del df['mail']['fax']
df = pd.json_normalize(df)
df
Output :
customer_name phone.mobile phone.office mail.office mail.personal
0 john 0 111 john@office.com john@home.com
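If "mail" were actually a column of dicts inside a real DataFrame, as the question's mention of pandas.map suggests, the same idea can be applied row-wise. A minimal sketch, assuming a hypothetical frame with one dict per row:
import pandas as pd

# Hypothetical frame with a column of dicts, one per customer.
df = pd.DataFrame({
    'customer_name': ['john'],
    'mail': [{'office': 'john@office.com', 'personal': 'john@home.com', 'fax': '12345'}],
})

# Rebuild each dict without the 'fax' key; this avoids mutating the originals.
df['mail'] = df['mail'].apply(lambda d: {k: v for k, v in d.items() if k != 'fax'})
print(df.loc[0, 'mail'])
# {'office': 'john@office.com', 'personal': 'john@home.com'}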
Requirement
My requirement is to have a Python code extract some records from a database, format and upload a formatted JSON to a sink.
Planned approach
1. Create JSON-like templates for each record. E.g.
json_template_str = '{{
"type": "section",
"fields": [
{{
"type": "mrkdwn",
"text": "Today *{total_val}* customers saved {percent_derived}%."
}}
]
}}'
2. Extract records from DB to a dataframe.
3. Loop over the dataframe and substitute the {var} variables in bulk using something like .format(**locals())
Question
I haven't worked with dataframes before.
What would be the best way to accomplish Step 3 ? Currently I am
3.1 Looping over the dataframe rows one by one: for i, df_row in df.iterrows():
3.2 Assigning
total_val= df_row['total_val']
percent_derived= df_row['percent_derived']
3.3 Formatting the string inside the loop and appending it to a list: block.append(json.loads(json_template_str.format(**locals())))
I was trying to use the assign() method in dataframe but was not able to figure out a way to use like a lambda function to create a new column with my expected value that I can use.
As a novice in pandas, I feel there might be a more efficient way to do this (which may even involve changing the JSON template string - which I can totally do). Will be great to hear thoughts and ideas.
Thanks for your time.
I would not write a JSON string by hand, but rather create a corresponding Python object and then use the json library to convert it into a string. With this in mind, you could try the following:
import copy
import pandas as pd
# some sample data
df = pd.DataFrame({
'total_val': [100, 200, 300],
'percent_derived': [12.4, 5.2, 6.5]
})
# template dictionary for a single block
json_template = {
"type": "section",
"fields": [
{"type": "mrkdwn",
"text": "Today *{total_val:.0f}* customers saved {percent_derived:.1f}%."
}
]
}
# a function that will insert data from each row
# of the dataframe into a block
def format_data(row):
json_t = copy.deepcopy(json_template)
text_t = json_t["fields"][0]["text"]
json_t["fields"][0]["text"] = text_t.format(
total_val=row['total_val'], percent_derived=row['percent_derived'])
return json_t
# create a list of blocks
result = df.agg(format_data, axis=1).tolist()
The resulting list looks as follows, and can be converted into a JSON string if needed:
[{
'type': 'section',
'fields': [{
'type': 'mrkdwn',
'text': 'Today *100* customers saved 12.4%.'
}]
}, {
'type': 'section',
'fields': [{
'type': 'mrkdwn',
'text': 'Today *200* customers saved 5.2%.'
}]
}, {
'type': 'section',
'fields': [{
'type': 'mrkdwn',
'text': 'Today *300* customers saved 6.5%.'
}]
}]
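And if the upload step needs the JSON string, a one-line follow-up using the result list from above:
import json

# Convert the list of block dicts into a JSON string for the sink.
payload = json.dumps(result, indent=2)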
I am receiving the following json from a webservice:
{
"headers":[
{
"seriesId":"18805",
"Name":"Name1",
"assetId":"4"
},
{
"seriesId":"18801",
"Name":"Name2",
"assetId":"209"
}
],
"values":[
{
"Date":"01-Jan-2021",
"18805":"127.93",
"18801":"75.85"
}
]
}
Is there a way to create a MultiIndex dataframe from this data? I would like Date to be the row index and the rest to be column indexes.
The values key is a straightforward data frame; the columns can be rebuilt from the headers key:
js = {'headers': [{'seriesId': '18805', 'Name': 'Name1', 'assetId': '4'},
{'seriesId': '18801', 'Name': 'Name2', 'assetId': '209'}],
'values': [{'Date': '01-Jan-2021', '18805': '127.93', '18801': '75.85'}]}
import pandas as pd

# get values into a dataframe
df = pd.DataFrame(js["values"]).set_index("Date")
# get headers for use in rebuilding column names
dfc = pd.DataFrame(js["headers"])
# rebuild columns
df.columns = pd.MultiIndex.from_tuples(dfc.apply(tuple, axis=1), names=dfc.columns)
print(df)
seriesId 18805 18801
Name Name1 Name2
assetId 4 209
Date
01-Jan-2021 127.93 75.85
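A slightly more direct variant of the column rebuild: pd.MultiIndex.from_frame builds the MultiIndex straight from the headers frame, skipping the apply(tuple, ...) step.
# Each row of the headers frame becomes one column tuple, and the
# frame's column names become the MultiIndex level names.
df.columns = pd.MultiIndex.from_frame(pd.DataFrame(js["headers"]))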