Convert CSV to JSON (in specific format) using Python

I would like to convert a CSV file into a JSON file using Python 2.7. Below is the Python code I tried, but it is not giving me the expected result. I would also like to know if there is a simpler version than mine. Any help is appreciated.
Here is my csv file (SampleCsvFile.csv):
zipcode,date,state,val1,val2,val3,val4,val5
95110,2015-05-01,CA,50,30.00,5.00,3.00,3
95110,2015-06-01,CA,67,31.00,5.00,3.00,4
95110,2015-07-01,CA,97,32.00,5.00,3.00,6
Here is the expected json file (ExpectedJsonFile.json):
{
"zipcode": "95110",
"state": "CA",
"subset": [
{
"date": "2015-05-01",
"val1": "50",
"val2": "30.00",
"val3": "5.00",
"val4": "3.00",
"val5": "3"
},
{
"date": "2015-06-01",
"val1": "67",
"val2": "31.00",
"val3": "5.00",
"val4": "3.00",
"val5": "4"
},
{
"date": "2015-07-01",
"val1": "97",
"val2": "32.00",
"val3": "5.00",
"val4": "3.00",
"val5": "6"
}
]
}
Here's the python code I tried:
import pandas as pd
from itertools import groupby
import json
df = pd.read_csv('SampleCsvFile.csv')
names = df.columns.values.tolist()
data = df.values
master_list2 = [ (d["zipcode"], d["state"], d) for d in [dict(zip(names, d)) for d in data] ]
intermediate2 = [(k, [x[2] for x in list(v)]) for k,v in groupby(master_list2, lambda t: (t[0],t[1]) )]
nested_json2 = [dict(zip(names,(k[0][0], k[0][1], k[1]))) for k in [(i[0], i[1]) for i in intermediate2]]
#print json.dumps(nested_json2, indent=4)
with open('ExpectedJsonFile.json', 'w') as outfile:
    outfile.write(json.dumps(nested_json2, indent=4))

Since you are using pandas already, I tried to get as much mileage as I could out of DataFrame methods. I also ended up wandering fairly far afield from your implementation. The key here, though, is: don't try to get too clever with your list and/or dictionary comprehensions. You can very easily confuse yourself and everyone who reads your code.
import pandas as pd
from collections import OrderedDict
import json
df = pd.read_csv('SampleCsvFile.csv', dtype={
    "zipcode": str,
    "date": str,
    "state": str,
    "val1": str,
    "val2": str,
    "val3": str,
    "val4": str,
    "val5": str
})
results = []
for (zipcode, state), bag in df.groupby(["zipcode", "state"]):
    contents_df = bag.drop(["zipcode", "state"], axis=1)
    subset = [OrderedDict(row) for i, row in contents_df.iterrows()]
    results.append(OrderedDict([("zipcode", zipcode),
                                ("state", state),
                                ("subset", subset)]))
print json.dumps(results[0], indent=4)
#with open('ExpectedJsonFile.json', 'w') as outfile:
# outfile.write(json.dumps(results[0], indent=4))
The simplest way to have all the json datatypes written as strings, and to retain their original formatting, was to force read_csv to parse them as strings. If, however, you need to do any numerical manipulation on the values before writing out the json, you will have to allow read_csv to parse them numerically and coerce them into the proper string format before converting to json.
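As a sketch of that second path (using the column names from the sample CSV above, with the data inlined for self-containment): let read_csv parse numerically, do the arithmetic, then coerce the column back to a fixed two-decimal string form before dumping to JSON.

```python
import io

import pandas as pd

csv_text = """zipcode,date,state,val1,val2,val3,val4,val5
95110,2015-05-01,CA,50,30.00,5.00,3.00,3
95110,2015-06-01,CA,67,31.00,5.00,3.00,4
"""

# Let read_csv infer numeric dtypes so arithmetic works.
df = pd.read_csv(io.StringIO(csv_text))
df["val2"] = df["val2"] * 1.10                 # example numeric manipulation
df["val2"] = df["val2"].map("{:.2f}".format)   # restore the two-decimal string form
print(df["val2"].tolist())
```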

Related

Converting JSON to CSV with array objects in JSON Python

I have a JSON with mostly simple key-value pairs, but some of them are arrays. I want to convert the JSON to a simple CSV format in order to import the values into tables of my PostgreSQL database. The JSON file looks like this:
{
"name": "Text",
"operator_type": [
"one",
"two",
"three"
],
"street": "M\u00f6nchhaldenstra\u00dfe ",
"street_nr": "113",
"zipcode": "70191",
"city": "Stuttgart",
"operator_type_id": [
"1",
"2",
"3"
],
"dc_operator_per": [
"100",
"",
""
],
"input_power": 600.0,
"el_power": 800.0,
"col_power": 300.0
}
To convert, I'm using this simple method:
import pandas as pd
data=pd.read_json('export.json')
data.to_csv('text.csv')
data=pd.read_csv('text.csv')
But for each array element it creates a new line in the CSV, repeating the non-array values. Like this:
name,dc_operator_type,capacity_kwh,export_me
Test,Colocation,,600,True
Test,,,600,True,
I want to have it like this:
name,dc_operator_type,capacity_kwh,export_me
Test,Colocation,,600,True
Or, if one object has two operator_types, then like this:
name,dc_operator_type,capacity_kwh,export_me
Test,Colocation,,600,True
,Seconlocation,,,
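No answer is shown above, but one possible sketch for the desired shape (the record and column names here are illustrative, not taken from the real export): pad scalars and arrays to the same row count, emitting scalars only on the first row.

```python
import pandas as pd

# Hypothetical record mirroring the post's structure; names are illustrative.
record = {
    "name": "Test",
    "operator_type": ["Colocation", "Seconlocation"],
    "capacity_kwh": [600, ""],
}

# The longest array decides the row count.
n = max(len(v) if isinstance(v, list) else 1 for v in record.values())

rows = []
for i in range(n):
    row = {}
    for key, value in record.items():
        if isinstance(value, list):
            row[key] = value[i] if i < len(value) else ""
        else:
            # Scalars appear only on the first row, blank afterwards.
            row[key] = value if i == 0 else ""
    rows.append(row)

df = pd.DataFrame(rows)
print(df.to_csv(index=False))
```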

How to create a single json file from two DataFrames?

I have two DataFrames, and I want to post them as JSON (to a web service), but first I have to concatenate them into a single JSON document.
#first df
input_df = pd.DataFrame()
input_df['first'] = ['a', 'b']
input_df['second'] = [1, 2]
#second df
customer_df = pd.DataFrame()
customer_df['first'] = ['c']
customer_df['second'] = [3]
For converting to JSON, I used the following code for each DataFrame:
df.to_json(
    path_or_buf='out.json',
    orient='records',  # other options: 'split', 'index', 'columns', 'values', 'table'
    date_format='iso',
    force_ascii=False,
    default_handler=None,
    lines=False,
    indent=2
)
This code gives me output like this (for example, input_df exported to JSON):
[
{
"first":"a",
"second":1
},
{
"first":"b",
"second":2
}
]
My desired output is like this:
{
"input": [
{
"first": "a",
"second": 1
},
{
"first": "b",
"second": 2
}
],
"customer": [
{
"first": "d",
"second": 3
}
]
}
How can I get output like this? I couldn't find a way :(
You can concatenate the DataFrames with appropriate key names, then groupby the keys and build dictionaries at each group; finally build a json string from the entire thing:
out = (
    pd.concat([input_df, customer_df], keys=['input', 'customer'])
    .droplevel(1)
    .groupby(level=0).apply(lambda x: x.to_dict('records'))
    .to_json()
)
Output:
'{"customer":[{"first":"c","second":3}],"input":[{"first":"a","second":1},{"first":"b","second":2}]}'
Or get a dict by replacing the final to_json() with to_dict().
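A minimal end-to-end run of the same pipeline, using to_dict() at the end so the result can be merged into a larger payload (a sketch; the DataFrame contents mirror the question):

```python
import pandas as pd

input_df = pd.DataFrame({"first": ["a", "b"], "second": [1, 2]})
customer_df = pd.DataFrame({"first": ["c"], "second": [3]})

out = (
    pd.concat([input_df, customer_df], keys=["input", "customer"])
    .droplevel(1)                            # drop the original row index
    .groupby(level=0)                        # group on the 'input'/'customer' keys
    .apply(lambda x: x.to_dict("records"))
    .to_dict()
)
print(out)
```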

Transpose CSV to JSON

I have a CSV similar to the one below and want to transpose it to JSON.
Input:
header1,header2,header 3
C1,name1,address 1
C2,name2,address 2
C1,name3,address 3
Output I expect to get:
{
"C1" :[ {"header2" : "name1", "header 3" : "address 1"}, {"header2" : "name3", "header 3" : "address 3"}],
"C2" : [ {"header2" : "name2", "header 3" : "address 2"}]
}
Based on some comments, some people are just pandas haters. But I like to use the tool that allows me to solve the problem in the easiest manner possible, and with the fewest lines of code.
In this case, without a doubt, that's pandas.
An added benefit of using pandas is that the data can easily be cleaned, analyzed, and visualized, if needed.
Solutions at How to convert CSV file to multiline JSON? offer some basics, but won't help transform the csv into the required shape.
Because of the expected output of the JSON file, this is a non-trivial question, which requires reshaping/grouping the data in the csv and is easily accomplished with pandas.DataFrame.groupby.
groupby 'h1' since the column values will be the dict outer keys
groupby returns a DataFrameGroupBy object that can be split into, i, the value used to create the group ('c1' and 'c2' in this case) and the associated dataframe group, g.
Use pandas.DataFrame.to_dict to convert the dataframe into a list of dictionaries.
import json
import pandas as pd
# read the file
df = pd.read_csv('test.csv')
# display(df)
h1 h2 h3
0 c1 n1 a1
1 c2 n2 a2
2 c1 n3 a3
# groupby and create dict
data_dict = dict()
for i, g in df.groupby('h1'):
    data_dict[i] = g.drop(columns=['h1']).to_dict(orient='records')
# print(data_dict)
{'c1': [{'h2': 'n1', 'h3': 'a1'}, {'h2': 'n3', 'h3': 'a3'}],
'c2': [{'h2': 'n2', 'h3': 'a2'}]}
# save data_dict to a file as a JSON
with open('result.json', 'w') as fp:
    json.dump(data_dict, fp)
JSON file
{
"c1": [{
"h2": "n1",
"h3": "a1"
}, {
"h2": "n3",
"h3": "a3"
}
],
"c2": [{
"h2": "n2",
"h3": "a2"
}
]
}

Loading JSON data into pandas data frame and creating custom columns

Here is the example JSON I'm working with.
{
":#computed_region_amqz_jbr4": "587",
":#computed_region_d3gw_znnf": "18",
":#computed_region_nmsq_hqvv": "55",
":#computed_region_r6rf_p9et": "36",
":#computed_region_rayf_jjgk": "295",
"arrests": "1",
"county_code": "44",
"county_code_text": "44",
"county_name": "Mifflin",
"fips_county_code": "087",
"fips_state_code": "42",
"incident_count": "1",
"lat_long": {
"type": "Point",
"coordinates": [
-77.620031,
40.612749
]
}
}
I have been able to pull out the select columns I want, except I'm having trouble with "lat_long". So far my code looks like:
# PRINTS OUT SPECIFIED COLUMNS
col_titles = ['county_name', 'incident_count', 'lat_long']
df = df.reindex(columns=col_titles)
However 'lat_long' is added to the data frame as such: {'type': 'Point', 'coordinates': [-75.71107, 4...
I thought once I figured out how to properly add the coordinates to the data frame, I would then create two separate columns, one for latitude and one for longitude.
Any help with this matter would be appreciated. Thank you.
If I haven't misunderstood your requirements, you can try this approach with json_normalize. I've only added a demo for a single JSON record; you can use apply or a lambda for multiple records.
import pandas as pd
from pandas.io.json import json_normalize
df = {":#computed_region_amqz_jbr4":"587",":#computed_region_d3gw_znnf":"18",":#computed_region_nmsq_hqvv":"55",":#computed_region_r6rf_p9et":"36",":#computed_region_rayf_jjgk":"295","arrests":"1","county_code":"44","county_code_text":"44","county_name":"Mifflin","fips_county_code":"087","fips_state_code":"42","incident_count":"1","lat_long":{"type":"Point","coordinates":[-77.620031,40.612749]}}
df = json_normalize(df)
df_modified = df[['county_name', 'incident_count', 'lat_long.type']].copy()
# GeoJSON "Point" coordinates are ordered [longitude, latitude]
df_modified['lng'] = df['lat_long.coordinates'][0][0]
df_modified['lat'] = df['lat_long.coordinates'][0][1]
print(df_modified)
Here is how you can do it as well:
df1 = pd.io.json.json_normalize(df)
result = (pd.concat([df1,
                     df1['lat_long.coordinates'].apply(pd.Series)
                        .rename(columns={0: 'lng', 1: 'lat'})],  # GeoJSON order: [lng, lat]
                    axis=1)
          .drop(columns=['lat_long.coordinates', 'lat_long.type']))
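With newer pandas the same idea can be written with the top-level pd.json_normalize (a sketch on a trimmed-down record from the question; note that GeoJSON stores coordinates as [longitude, latitude]):

```python
import pandas as pd

record = {
    "county_name": "Mifflin",
    "incident_count": "1",
    "lat_long": {"type": "Point", "coordinates": [-77.620031, 40.612749]},
}

# Nested keys are flattened to 'lat_long.type' and 'lat_long.coordinates'.
df = pd.json_normalize(record)

# Split the [lng, lat] pairs into two columns, then drop the originals.
df[["lng", "lat"]] = pd.DataFrame(df["lat_long.coordinates"].tolist(), index=df.index)
df = df.drop(columns=["lat_long.coordinates", "lat_long.type"])
print(df)
```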

Complex json to pandas dataframe

There are lots of questions about JSON to pandas DataFrame, but none of them solved my issue. I am practicing on this complex JSON file, which looks like this:
{
"type" : "FeatureCollection",
"features" : [ {
"Id" : 265068000,
"type" : "Feature",
"geometry" : {
"type" : "Point",
"coordinates" : [ 22.170376666666666, 65.57273333333333 ]
},
"properties" : {
"timestampExternal" : 1529151039629
}
}, {
"Id" : 265745760,
"type" : "Feature",
"geometry" : {
"type" : "Point",
"coordinates" : [ 20.329506666666667, 63.675425000000004 ]
},
"properties" : {
"timestampExternal" : 1529151278287
}
} ]
}
I want to convert this JSON directly to a pandas DataFrame using pd.read_json(). My primary goal is to extract Id, Coordinates, and timestampExternal. As this is a deeply nested JSON, a plain pd.read_json() simply doesn't give the correct output. Can you suggest how to approach this kind of situation? Expected output is something like this:
Id,Coordinates,timestampExternal
265068000,[22.170376666666666, 65.57273333333333],1529151039629
265745760,[20.329506666666667, 63.675425000000004],1529151278287
You can read the JSON file to load it into a dictionary. Then, using a list comprehension over the features, extract the attributes you want as columns:
import json
import pandas as pd
_json = json.load(open('/path/to/json'))
df_dict = [{'id': item['Id'],
            'coordinates': item['geometry']['coordinates'],
            'timestampExternal': item['properties']['timestampExternal']}
           for item in _json['features']]
extracted_df = pd.DataFrame(df_dict)
>>>
coordinates id timestampExternal
0 [22.170376666666666, 65.57273333333333] 265068000 1529151039629
1 [20.329506666666667, 63.675425000000004] 265745760 1529151278287
You can read the JSON directly and then feed the features array to pandas as a list of dicts, like:
Code:
import json
import pandas as pd

with open('test.json') as f:
    data = json.load(f)

df = pd.DataFrame([dict(id=datum['Id'],
                        coords=datum['geometry']['coordinates'],
                        ts=datum['properties']['timestampExternal'])
                   for datum in data['features']])
print(df)
Results:
coords id ts
0 [22.170376666666666, 65.57273333333333] 265068000 1529151039629
1 [20.329506666666667, 63.675425000000004] 265745760 1529151278287
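For comparison, pd.json_normalize can flatten the features in one call; this sketch keeps only the three wanted columns (data inlined from the question):

```python
import pandas as pd

data = {
    "type": "FeatureCollection",
    "features": [
        {"Id": 265068000, "type": "Feature",
         "geometry": {"type": "Point",
                      "coordinates": [22.170376666666666, 65.57273333333333]},
         "properties": {"timestampExternal": 1529151039629}},
        {"Id": 265745760, "type": "Feature",
         "geometry": {"type": "Point",
                      "coordinates": [20.329506666666667, 63.675425000000004]},
         "properties": {"timestampExternal": 1529151278287}},
    ],
}

# Flatten each feature; nested keys become dotted column names.
df = pd.json_normalize(data["features"])
df = df[["Id", "geometry.coordinates", "properties.timestampExternal"]]
df.columns = ["Id", "Coordinates", "timestampExternal"]
print(df)
```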
