I want to delete everything in the object "names" in the given JSON file example but keep the object itself; in simple words, I want to clear the object.
{
"names": [
{
"player": "Player_Name",
"TB:": "12389",
"BW:": "596",
"SW:": "28",
"CQ:": "20"
}
]
}
I tried this code:
with open('players.json', 'w') as w:
    with open('players.json', 'r') as r:
        for line in r:
            element = json.loads(line.strip())
            if 'names' in element:
                del element['names']
            w.write(json.dumps(element))
but it just clears the whole json file
sorry for my bad english
The problem is that you open the same file twice, for reading and for writing simultaneously. Also, a JSON document cannot be parsed line by line, only as a whole.
import json

# 1. read
with open('players.json', 'r') as r:
    data = json.load(r)

# 2. modify
# (you might want to check that data is a dict first)
data['names'] = []

# 3. write
with open('players.json', 'w') as w:
    json.dump(data, w)
With indentation:
a_dict = {"name": "kevin", "id": 100001}
with open('test.json', "a+") as f:
    json.dump(a_dict, f, indent=4)  # Indent makes it more readable
    f.write("\n")
print("done")
The output is as below, but it can't be read back; json.loads fails with json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes:
{
"name": "kevin",
"id": 100001
}
{
"name": "kevin",
"id": 100001
}
Without indentation:
a_dict = {"name": "kevin", "id": 100001}
with open('test.json', "a+") as f:
    json.dump(a_dict, f)  # no indent this time, so each object stays on one line
    f.write("\n")
print("done")
Output that can be read:
{"name": "kevin", "id": 100001}
{"name": "kevin", "id": 100001}
{"name": "kevin", "id": 100001}
{"name": "kevin", "id": 100001}
Decoding:
p_list = []
with open('test.json') as f:
    for json_obj in f:
        test_dict = json.loads(json_obj)
        p_list.append(test_dict)
        # print(test_dict)

for emp in p_list:
    print(emp['name'])
Looks like you're trying to use the JSONL / JSON Lines format. The premise of this format is that each object, or whichever JSON entity, is wholly represented in a single line of the file. So you can't have it be JSONL and indented/prettified at the same time.
When you do:
with open('test.json') as f:
    for json_obj in f:
        ...
each json_obj is just one line of the file, not the JSON object read-in till the end of that object.
If you want to do it that way, you'll need to write your own JSON Decoder that reads in more lines until it's found the end-delimiter and a valid JSON entity. Same goes for writing the file - you'd need to write your own JSON Encoder.
The closest thing to having both JSON Lines and prettified output is the jq command line tool. And since it's not a Python package, in order to read and write data, use subprocess.run() with capture_output=True.
You can find questions related to this tool on StackOverflow with the tag jq.
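As a minimal sketch (assuming jq is installed and on PATH, and that test.json holds a stream of prettified objects like the output above), you could compact it back into valid JSON Lines from Python:

import json
import subprocess

# jq reads a stream of JSON values; -c re-emits each value compactly on its own line.
result = subprocess.run(['jq', '-c', '.', 'test.json'],
                        capture_output=True, text=True, check=True)
objects = [json.loads(line) for line in result.stdout.splitlines()]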
Edit: If you are certain that you will only be writing JSON objects to the file the same way always, you can setup the read to start at a line which starts with { without any spaces/indentation before it and continue reading until you reach a line with } without any spaces/indentation before it.
A rough idea:
with open('test.json') as f:
    parts = []
    in_obj = False
    for some_text in f:
        line = some_text.rstrip('\n')  # lines read from a file keep their newline
        if line == '{' and not in_obj:
            in_obj = True
            parts.append('{')
        elif in_obj:
            parts.append(line)
            if line == '}':
                in_obj = False
                # put this in a try-except block
                json_obj = json.loads('\n'.join(parts))
                yield json_obj  # or return
                parts = []  # reset
            elif not line.startswith(' ' * 4):
                print('error')  # in an object but wrong indent
                # the check above should actually include checking more than
                # just the starting 4 spaces since it could be nested further
        else:
            print('error')  # not in an object and not an end delimiter
You'll need to modify that to read multiple objects and be an actual parser.
Also, as noted by @ewen-lbh below, files in this format should have the .jsonl extension. If it's .json you're implying that it holds a single valid loadable json entity.
I have a JSON file I want to convert to a CSV file. How can I do this with Python?
I tried:
import json
import csv
f = open('data.json')
data = json.load(f)
f.close()
f = open('data.csv')
csv_file = csv.writer(f)
for item in data:
    csv_file.writerow(item)
f.close()
However, it did not work. I am using Django and the error I received is:
`file' object has no attribute 'writerow'`
I then tried the following:
import json
import csv
f = open('data.json')
data = json.load(f)
f.close()
f = open('data.csv')
csv_file = csv.writer(f)
for item in data:
    f.writerow(item)  # ← changed
f.close()
I then get the error:
`sequence expected`
Sample json file:
[{
"pk": 22,
"model": "auth.permission",
"fields": {
"codename": "add_logentry",
"name": "Can add log entry",
"content_type": 8
}
}, {
"pk": 23,
"model": "auth.permission",
"fields": {
"codename": "change_logentry",
"name": "Can change log entry",
"content_type": 8
}
}, {
"pk": 24,
"model": "auth.permission",
"fields": {
"codename": "delete_logentry",
"name": "Can delete log entry",
"content_type": 8
}
}, {
"pk": 4,
"model": "auth.permission",
"fields": {
"codename": "add_group",
"name": "Can add group",
"content_type": 2
}
}, {
"pk": 10,
"model": "auth.permission",
"fields": {
"codename": "add_message",
"name": "Can add message",
"content_type": 4
}
}
]
With the pandas library, this is as easy as using two commands!
df = pd.read_json()
read_json converts a JSON string to a pandas object (either a series or dataframe). Then:
df.to_csv()
Which can either return a string or write directly to a csv-file. See the docs for to_csv.
Based on the verbosity of previous answers, we should all thank pandas for the shortcut.
For unstructured JSON see this answer.
EDIT:
Someone asked for a working minimal example:
import pandas as pd
with open('jsonfile.json', encoding='utf-8') as inputfile:
    df = pd.read_json(inputfile)
df.to_csv('csvfile.csv', encoding='utf-8', index=False)
First, your JSON has nested objects, so it normally cannot be directly converted to CSV. You need to change that to something like this:
{
"pk": 22,
"model": "auth.permission",
"codename": "add_logentry",
"content_type": 8,
"name": "Can add log entry"
},
......]
Here is my code to generate CSV from that:
import csv
import json
x = """[
{
"pk": 22,
"model": "auth.permission",
"fields": {
"codename": "add_logentry",
"name": "Can add log entry",
"content_type": 8
}
},
{
"pk": 23,
"model": "auth.permission",
"fields": {
"codename": "change_logentry",
"name": "Can change log entry",
"content_type": 8
}
},
{
"pk": 24,
"model": "auth.permission",
"fields": {
"codename": "delete_logentry",
"name": "Can delete log entry",
"content_type": 8
}
}
]"""
x = json.loads(x)

f = csv.writer(open("test.csv", "wb+"))  # Python 2; on Python 3 use open("test.csv", "w", newline='')

# Write the CSV header. If you don't need it, remove this line.
f.writerow(["pk", "model", "codename", "name", "content_type"])

for x in x:
    f.writerow([x["pk"],
                x["model"],
                x["fields"]["codename"],
                x["fields"]["name"],
                x["fields"]["content_type"]])
You will get output as:
pk,model,codename,name,content_type
22,auth.permission,add_logentry,Can add log entry,8
23,auth.permission,change_logentry,Can change log entry,8
24,auth.permission,delete_logentry,Can delete log entry,8
I am assuming that your JSON file will decode into a list of dictionaries. First we need a function which will flatten the JSON objects:
def flattenjson(b, delim):
    val = {}
    for i in b.keys():
        if isinstance(b[i], dict):
            get = flattenjson(b[i], delim)
            for j in get.keys():
                val[i + delim + j] = get[j]
        else:
            val[i] = b[i]
    return val
The result of running this snippet on your JSON object:
flattenjson({
"pk": 22,
"model": "auth.permission",
"fields": {
"codename": "add_message",
"name": "Can add message",
"content_type": 8
}
}, "__")
is
{
"pk": 22,
"model": "auth.permission",
"fields__codename": "add_message",
"fields__name": "Can add message",
"fields__content_type": 8
}
After applying this function to each dict in the input array of JSON objects:
input = list(map(lambda x: flattenjson(x, "__"), input))  # list() so it can be iterated more than once on Python 3
and finding the relevant column names:
columns = [x for row in input for x in row.keys()]
columns = list(set(columns))
it's not hard to run this through the csv module:
with open(fname, 'wb') as out_file:
    csv_w = csv.writer(out_file)
    csv_w.writerow(columns)
    for i_r in input:
        csv_w.writerow(map(lambda x: i_r.get(x, ""), columns))
JSON can represent a wide variety of data structures -- a JS "object" is roughly like a Python dict (with string keys), a JS "array" roughly like a Python list, and you can nest them as long as the final "leaf" elements are numbers or strings.
CSV can essentially represent only a 2-D table -- optionally with a first row of "headers", i.e., "column names", which can make the table interpretable as a list of dicts, instead of the normal interpretation, a list of lists (again, "leaf" elements can be numbers or strings).
So, in the general case, you can't translate an arbitrary JSON structure to a CSV. In a few special cases you can (array of arrays with no further nesting; arrays of objects which all have exactly the same keys). Which special case, if any, applies to your problem? The details of the solution depend on which special case you do have. Given the astonishing fact that you don't even mention which one applies, I suspect you may not have considered the constraint, neither usable case in fact applies, and your problem is impossible to solve. But please do clarify!
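To make the two convertible special cases concrete, here is a minimal sketch (the data and file names are illustrative):

import csv

# Special case 1: an array of arrays with no further nesting maps to rows as-is.
array_of_arrays = [["a", 1], ["b", 2]]
with open('case1.csv', 'w', newline='') as f:
    csv.writer(f).writerows(array_of_arrays)

# Special case 2: an array of objects that all share exactly the same keys
# maps to a header row plus one row per object.
array_of_objects = [{"pk": 22, "model": "auth.permission"},
                    {"pk": 23, "model": "auth.permission"}]
with open('case2.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=array_of_objects[0].keys())
    writer.writeheader()
    writer.writerows(array_of_objects)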
A generic solution which translates any json list of flat objects to csv.
Pass the input.json file as first argument on command line.
import csv, json, sys
input = open(sys.argv[1])
data = json.load(input)
input.close()
output = csv.writer(sys.stdout)
output.writerow(data[0].keys()) # header row
for row in data:
    output.writerow(row.values())
Use json_normalize from pandas:
Using the sample data from the OP in a file named test.json.
encoding='utf-8' has been used here, but may not be necessary for other cases.
The following code takes advantage of the pathlib library.
.open is a method of pathlib.
Works with non-Windows paths too.
Use pandas.to_csv(...) to save the data to a csv file.
import pandas as pd
# As of Pandas 1.01, json_normalize as pandas.io.json.json_normalize is deprecated and is now exposed in the top-level namespace.
# from pandas.io.json import json_normalize
from pathlib import Path
import json
# set path to file
p = Path(r'c:\some_path_to_file\test.json')
# read json
with p.open('r', encoding='utf-8') as f:
    data = json.loads(f.read())
# create dataframe
df = pd.json_normalize(data)
# dataframe view
pk model fields.codename fields.name fields.content_type
22 auth.permission add_logentry Can add log entry 8
23 auth.permission change_logentry Can change log entry 8
24 auth.permission delete_logentry Can delete log entry 8
4 auth.permission add_group Can add group 2
10 auth.permission add_message Can add message 4
# save to csv
df.to_csv('test.csv', index=False, encoding='utf-8')
CSV Output:
pk,model,fields.codename,fields.name,fields.content_type
22,auth.permission,add_logentry,Can add log entry,8
23,auth.permission,change_logentry,Can change log entry,8
24,auth.permission,delete_logentry,Can delete log entry,8
4,auth.permission,add_group,Can add group,2
10,auth.permission,add_message,Can add message,4
Resources for more heavily nested JSON objects:
SO Answers:
Flatten a JSON array with python
How to flatten nested JSON recursively, with flatten_json
How to json_normalize a column with NaNs
Split / Explode a column of dictionaries into separate columns with pandas
See the json_normalize tag for other related questions.
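As a small taste of what those resources cover, pd.json_normalize can also unpack a nested list of records directly via record_path, carrying parent fields along with meta (the data here is illustrative):

import pandas as pd

nested = [{"pk": 22, "tags": [{"name": "a"}, {"name": "b"}]},
          {"pk": 23, "tags": [{"name": "c"}]}]
df = pd.json_normalize(nested, record_path='tags', meta=['pk'])
#   name  pk
# 0    a  22
# 1    b  22
# 2    c  23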
This code should work for you, assuming that your JSON data is in a file called data.json.
import json
import csv
with open("data.json") as file:
data = json.load(file)
with open("data.csv", "w") as file:
csv_file = csv.writer(file)
for item in data:
fields = list(item['fields'].values())
csv_file.writerow([item['pk'], item['model']] + fields)
It's easy to use csv.DictWriter(); a detailed implementation can look like this:
import csv
import json

def read_json(filename):
    with open(filename) as f:
        return json.load(f)

def write_csv(data, filename):
    with open(filename, 'w+') as outf:
        writer = csv.DictWriter(outf, data[0].keys())
        writer.writeheader()
        for row in data:
            writer.writerow(row)

# implement
write_csv(read_json('test.json'), 'output.csv')
Note that this assumes that all of your JSON objects have the same fields.
Here is the reference which may help you.
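If the objects might not all share the same fields, one way to cope (a sketch with illustrative data, not part of the answer above) is to build the header from every row and tell DictWriter how to handle gaps and surplus keys:

import csv

rows = [{"pk": 22, "model": "auth.permission"},
        {"pk": 23, "codename": "change_logentry"}]
fieldnames = sorted({key for row in rows for key in row})
with open('output.csv', 'w', newline='') as outf:
    # restval fills missing keys with ''; extrasaction='ignore' skips unknown keys.
    writer = csv.DictWriter(outf, fieldnames, restval='', extrasaction='ignore')
    writer.writeheader()
    writer.writerows(rows)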
I was having trouble with Dan's proposed solution, but this worked for me:
import json
import csv
f = open('test.json')
data = json.load(f)
f.close()
f = csv.writer(open('test.csv', 'wb+'))
for item in data:
    f.writerow([item['pk'], item['model']] + item['fields'].values())
Where "test.json" contained the following:
[
  {"pk": 22, "model": "auth.permission", "fields":
    {"codename": "add_logentry", "name": "Can add log entry", "content_type": 8}},
  {"pk": 23, "model": "auth.permission", "fields":
    {"codename": "change_logentry", "name": "Can change log entry", "content_type": 8}},
  {"pk": 24, "model": "auth.permission", "fields":
    {"codename": "delete_logentry", "name": "Can delete log entry", "content_type": 8}}
]
This is a modification of @MikeRepass's answer. This version writes the CSV to a file, and works for both Python 2 and Python 3.
import csv, json

input_file = "data.json"
output_file = "data.csv"

with open(input_file) as f:
    content = json.load(f)

try:
    context = open(output_file, 'w', newline='')  # Python 3
except TypeError:
    context = open(output_file, 'wb')  # Python 2

with context as file:
    writer = csv.writer(file)
    writer.writerow(content[0].keys())  # header row
    for row in content:
        writer.writerow(row.values())
Alec's answer is great, but it doesn't work in the case where there are multiple levels of nesting. Here's a modified version that supports multiple levels of nesting. It also makes the header names a bit nicer if the nested object already specifies its own key (e.g. Firebase Analytics / BigTable / BigQuery data):
"""Converts JSON with nested fields into a flattened CSV file.
"""
import sys
import json
import csv
import os
import jsonlines
from orderedset import OrderedSet
# from https://stackoverflow.com/a/28246154/473201
def flattenjson( b, prefix='', delim='/', val=None ):
if val is None:
val = {}
if isinstance( b, dict ):
for j in b.keys():
flattenjson(b[j], prefix + delim + j, delim, val)
elif isinstance( b, list ):
get = b
for j in range(len(get)):
key = str(j)
# If the nested data contains its own key, use that as the header instead.
if isinstance( get[j], dict ):
if 'key' in get[j]:
key = get[j]['key']
flattenjson(get[j], prefix + delim + key, delim, val)
else:
val[prefix] = b
return val
def main(argv):
if len(argv) < 2:
raise Error('Please specify a JSON file to parse')
print "Loading and Flattening..."
filename = argv[1]
allRows = []
fieldnames = OrderedSet()
with jsonlines.open(filename) as reader:
for obj in reader:
# print 'orig:\n'
# print obj
flattened = flattenjson(obj)
#print 'keys: %s' % flattened.keys()
# print 'flattened:\n'
# print flattened
fieldnames.update(flattened.keys())
allRows.append(flattened)
print "Exporting to CSV..."
outfilename = filename + '.csv'
count = 0
with open(outfilename, 'w') as file:
csvwriter = csv.DictWriter(file, fieldnames=fieldnames)
csvwriter.writeheader()
for obj in allRows:
# print 'allRows:\n'
# print obj
csvwriter.writerow(obj)
count += 1
print "Wrote %d rows" % count
if __name__ == '__main__':
main(sys.argv)
As mentioned in the previous answers, the difficulty in converting json to csv is that a json file can contain nested dictionaries and therefore be a multidimensional data structure, versus a csv, which is a 2D data structure. However, a good way to turn a multidimensional structure into a csv is to have multiple csvs that tie together with primary keys.
In your example, the first csv output has "pk", "model", "fields" as your columns. Values for "pk" and "model" are easy to get, but because the "fields" column contains a dictionary, it should be its own csv. Because "codename" appears to be the primary key, you can use it as the input for "fields" to complete the first csv. The second csv contains the dictionary from the "fields" column, with codename as the primary key that can be used to tie the 2 csvs together.
Here is a solution for your json file which converts a nested dictionaries to 2 csvs.
import csv
import json

def readAndWrite(inputFileName, primaryKey=""):
    input = open(inputFileName + ".json")
    data = json.load(input)
    input.close()

    header = set()
    if primaryKey != "":
        outputFileName = inputFileName + "-" + primaryKey
        if inputFileName == "data":
            for i in data:
                for j in i["fields"].keys():
                    if j not in header:
                        header.add(j)
    else:
        outputFileName = inputFileName
        for i in data:
            for j in i.keys():
                if j not in header:
                    header.add(j)

    with open(outputFileName + ".csv", 'wb') as output_file:
        fieldnames = list(header)
        writer = csv.DictWriter(output_file, fieldnames, delimiter=',', quotechar='"')
        writer.writeheader()
        for x in data:
            row_value = {}
            if primaryKey == "":
                for y in x.keys():
                    yValue = x.get(y)
                    if type(yValue) == int or type(yValue) == bool or type(yValue) == float or type(yValue) == list:
                        row_value[y] = str(yValue).encode('utf8')
                    elif type(yValue) != dict:
                        row_value[y] = yValue.encode('utf8')
                    else:
                        if inputFileName == "data":
                            row_value[y] = yValue["codename"].encode('utf8')
                            readAndWrite(inputFileName, primaryKey="codename")
                writer.writerow(row_value)
            elif primaryKey == "codename":
                for y in x["fields"].keys():
                    yValue = x["fields"].get(y)
                    if type(yValue) == int or type(yValue) == bool or type(yValue) == float or type(yValue) == list:
                        row_value[y] = str(yValue).encode('utf8')
                    elif type(yValue) != dict:
                        row_value[y] = yValue.encode('utf8')
                writer.writerow(row_value)

readAndWrite("data")
I know it has been a long time since this question was asked, but I thought I might add to everyone else's answers and share a blog post that I think explains the solution in a very concise way.
Here is the link
Open a file for writing
employ_data = open('/tmp/EmployData.csv', 'w')
Create the csv writer object
csvwriter = csv.writer(employ_data)
count = 0
for emp in emp_data:
    if count == 0:
        header = emp.keys()
        csvwriter.writerow(header)
        count += 1
    csvwriter.writerow(emp.values())
Make sure to close the file in order to save the contents
employ_data.close()
It is not a very smart way to do it, but I have had the same problem and this worked for me:
import csv
import json

f = open('data.json')
data = json.load(f)
f.close()

new_data = []
for i in data:
    flat = {}
    names = i.keys()
    for n in names:
        try:
            if len(i[n].keys()) > 0:
                for ii in i[n].keys():
                    flat[n + "_" + ii] = i[n][ii]
        except:
            flat[n] = i[n]
    new_data.append(flat)

f = open(filename, "w")  # note: "w", not "r"; a file opened for reading can't be written
writer = csv.DictWriter(f, new_data[0].keys())
writer.writeheader()
for row in new_data:
    writer.writerow(row)
f.close()
Surprisingly, I found that none of the answers posted here so far correctly deal with all possible scenarios (e.g., nested dicts, nested lists, None values, etc).
This solution should work across all scenarios:
def flatten_json(json):
    def process_value(keys, value, flattened):
        if isinstance(value, dict):
            for key in value.keys():
                process_value(keys + [key], value[key], flattened)
        elif isinstance(value, list):
            for idx, v in enumerate(value):
                process_value(keys + [str(idx)], v, flattened)
        else:
            flattened['__'.join(keys)] = value

    flattened = {}
    for key in json.keys():
        process_value([key], json[key], flattened)
    return flattened
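For example, given a small hypothetical record:

record = {"pk": 22, "fields": {"codename": "add_logentry", "tags": ["a", "b"]}}
print(flatten_json(record))
# {'pk': 22, 'fields__codename': 'add_logentry',
#  'fields__tags__0': 'a', 'fields__tags__1': 'b'}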
My simple way to solve this:
Create a new Python file like: json_to_csv.py
Add this code:
import csv, json, sys

# if you are not using utf-8 files, remove the next line
sys.setdefaultencoding("UTF-8")  # Python 2 only

# check if you pass the input file and output file
if sys.argv[1] is not None and sys.argv[2] is not None:
    fileInput = sys.argv[1]
    fileOutput = sys.argv[2]
    inputFile = open(fileInput)
    outputFile = open(fileOutput, 'w')
    data = json.load(inputFile)
    inputFile.close()
    output = csv.writer(outputFile)
    output.writerow(data[0].keys())  # header row
    for row in data:
        output.writerow(row.values())
After adding this code, save the file and run at the terminal:
python json_to_csv.py input.txt output.csv
I hope this helps you.
SEEYA!
This code works for a json file that decodes to a list of flat objects:
# -*- coding: utf-8 -*-
"""
Created on Mon Jun 17 20:35:35 2019
author: Ram
"""
import json
import csv
with open("file1.json") as file:
data = json.load(file)
# create the csv writer object
pt_data1 = open('pt_data1.csv', 'w')
csvwriter = csv.writer(pt_data1)
count = 0
for pt in data:
if count == 0:
header = pt.keys()
csvwriter.writerow(header)
count += 1
csvwriter.writerow(pt.values())
pt_data1.close()
Consider the below example for converting a json-format file to a csv-formatted file.
{
"item_data" : [
{
"item": "10023456",
"class": "100",
"subclass": "123"
}
]
}
The below code will convert the json file (data3.json) to a csv file (data3.csv).
import json
import csv

with open("/Users/Desktop/json/data3.json") as file:
    data = json.load(file)
print(data)

fname = "/Users/Desktop/json/data3.csv"
with open(fname, "w", newline='') as file:
    csv_file = csv.writer(file)
    csv_file.writerow(['item', 'class', 'subclass'])
    for item in data["item_data"]:
        csv_file.writerow([item.get('item'),
                           item.get('class'),
                           item.get('subclass')])
The above code was executed in a locally installed PyCharm and successfully converted the json file to the csv file. Hope this helps to convert the files.
This works relatively well.
It flattens the json to write it to a csv file.
Nested elements are managed :)
That's for python 3
import json
o = json.loads('your json string') # Be careful, o must be a list, each of its objects will make a line of the csv.
def flatten(o, k='/'):
    global l, c_line
    if isinstance(o, dict):
        for key, value in o.items():
            flatten(value, k + '/' + key)
    elif isinstance(o, list):
        for ov in o:
            flatten(ov, '')
    elif isinstance(o, str):
        o = o.replace('\r', ' ').replace('\n', ' ').replace(';', ',')
        if k not in l:
            l[k] = {}
        l[k][c_line] = o

def render_csv(l):
    ftime = True
    for i in range(100):  # len(l[list(l.keys())[0]])
        for k in l:
            if ftime:
                print('%s;' % k, end='')
                continue
            v = l[k]
            try:
                print('%s;' % v[i], end='')
            except:
                print(';', end='')
        print()
        ftime = False

def json_to_csv(object_list):
    global l, c_line
    l = {}
    c_line = 0
    for ov in object_list:  # Assumes json is a list of objects
        flatten(ov)
        c_line += 1
    render_csv(l)
json_to_csv(o)
enjoy.
Modified Alec McGail's answer to support JSON with lists inside
def flattenjson(self, mp, delim="|"):
    ret = []
    if isinstance(mp, dict):
        for k in mp.keys():
            csvs = self.flattenjson(mp[k], delim)
            for csv in csvs:
                ret.append(k + delim + csv)
    elif isinstance(mp, list):
        for k in mp:
            csvs = self.flattenjson(k, delim)
            for csv in csvs:
                ret.append(csv)
    else:
        ret.append(mp)
    return ret
Thanks!
import json, csv

t = type('a')  # str; used below to detect string values
json_data = []
data = None
write_header = True
item_keys = []

try:
    with open('kk.json') as json_file:
        json_data = json_file.read()
        data = json.loads(json_data)
except Exception as e:
    print(e)

with open('bar.csv', 'at') as csv_file:
    writer = csv.writer(csv_file)  # , quoting=csv.QUOTE_MINIMAL)
    for item in data:
        item_values = []
        for key in item:
            if write_header:
                item_keys.append(key)
            value = item.get(key, '')
            if type(value) == t:
                item_values.append(value.encode('utf-8'))
            else:
                item_values.append(value)
        if write_header:
            writer.writerow(item_keys)
            write_header = False
        writer.writerow(item_values)
Since the data appears to be in a dictionary format, you should use csv.DictWriter() to output the lines with the appropriate header information. This allows the conversion to be handled somewhat more easily. The fieldnames parameter sets up the order properly, while the output of the first line as the headers allows it to be read and processed later by csv.DictReader().
For example, Mike Repass used
output = csv.writer(sys.stdout)
output.writerow(data[0].keys()) # header row
for row in data:
    output.writerow(row.values())
However, just change the initial setup to
output = csv.DictWriter(filesetting, fieldnames=data[0].keys())
Note that since the order of elements in a dictionary is not defined, you might have to create fieldnames entries explicitly. Once you do that, the writerow will work. The writes then work as originally shown.
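Putting it together, a minimal sketch of the suggested substitution (reusing data.json from the question; everything else is illustrative):

import csv
import json
import sys

with open('data.json') as f:
    data = json.load(f)

output = csv.DictWriter(sys.stdout, fieldnames=data[0].keys())
output.writeheader()  # header row, readable later by csv.DictReader()
for row in data:
    output.writerow(row)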
Unfortunately I don't have enough reputation to make a small contribution to the amazing @Alec McGail answer.
I was using Python 3 and I needed to convert the map to a list, following the @Alexis R comment.
Additionally, I found the csv writer was adding an extra CR to the file (I had an empty line for each line with data inside the csv file). The solution was very easy, following the @Jason R. Coombs answer to this thread:
CSV in Python adding an extra carriage return
You need to simply add the lineterminator='\n' parameter to the csv.writer. It will be: csv_w = csv.writer( out_file, lineterminator='\n' )
You can use this code to convert a json file to a csv file.
After reading the file, I convert the object to a pandas dataframe and then save it to a CSV file.
import os
import json
import pandas as pd

data = []
os.chdir('D:\\Your_directory\\folder')
with open('file_name.json', encoding="utf8") as data_file:
    for line in data_file:
        data.append(json.loads(line))

dataframe = pd.DataFrame(data)
# Saving the dataframe to a csv file
dataframe.to_csv("filename.csv", encoding='utf-8', index=False)
I have tried a lot of the suggested solutions (pandas was also not normalizing my JSON correctly), but the one that really parses the JSON data correctly is from Max Berman.
I wrote an improvement that avoids new columns for each row and puts values into the existing column during parsing.
It also has the effect of storing a value as a string if only one datum exists, and as a list if there are more values for that column.
It takes an input.json file for input and spits out an output.csv.
import json
import pandas as pd

def flatten_json(json):
    def process_value(keys, value, flattened):
        if isinstance(value, dict):
            for key in value.keys():
                process_value(keys + [key], value[key], flattened)
        elif isinstance(value, list):
            for idx, v in enumerate(value):
                process_value(keys, v, flattened)
                # process_value(keys + [str(idx)], v, flattened)
        else:
            key1 = '__'.join(keys)
            if not flattened.get(key1) is None:
                if isinstance(flattened[key1], list):
                    flattened[key1] = flattened[key1] + [value]
                else:
                    flattened[key1] = [flattened[key1]] + [value]
            else:
                flattened[key1] = value

    flattened = {}
    for key in json.keys():
        process_value([key], json[key], flattened)
    return flattened

try:
    f = open("input.json", "r")
except:
    pass

y = json.loads(f.read())
flat = flatten_json(y)
text = json.dumps(flat)
df = pd.read_json(text)
df.to_csv('output.csv', index=False, encoding='utf-8')
I might be late to the party, but I think I have dealt with a similar problem. I had a json file which looked like this
I only wanted to extract a few keys/values from these json files, so I wrote the following code to extract them.
"""json_to_csv.py
This script reads n numbers of json files present in a folder and then extract certain data from each file and write in a csv file.
The folder contains the python script i.e. json_to_csv.py, output.csv and another folder descriptions containing all the json files.
"""
import os
import json
import csv
def get_list_of_json_files():
"""Returns the list of filenames of all the Json files present in the folder
Parameter
---------
directory : str
'descriptions' in this case
Returns
-------
list_of_files: list
List of the filenames of all the json files
"""
list_of_files = os.listdir('descriptions') # creates list of all the files in the folder
return list_of_files
def create_list_from_json(jsonfile):
"""Returns a list of the extracted items from json file in the same order we need it.
Parameter
_________
jsonfile : json
The json file containing the data
Returns
-------
one_sample_list : list
The list of the extracted items needed for the final csv
"""
with open(jsonfile) as f:
data = json.load(f)
data_list = [] # create an empty list
# append the items to the list in the same order.
data_list.append(data['_id'])
data_list.append(data['_modelType'])
data_list.append(data['creator']['_id'])
data_list.append(data['creator']['name'])
data_list.append(data['dataset']['_accessLevel'])
data_list.append(data['dataset']['_id'])
data_list.append(data['dataset']['description'])
data_list.append(data['dataset']['name'])
data_list.append(data['meta']['acquisition']['image_type'])
data_list.append(data['meta']['acquisition']['pixelsX'])
data_list.append(data['meta']['acquisition']['pixelsY'])
data_list.append(data['meta']['clinical']['age_approx'])
data_list.append(data['meta']['clinical']['benign_malignant'])
data_list.append(data['meta']['clinical']['diagnosis'])
data_list.append(data['meta']['clinical']['diagnosis_confirm_type'])
data_list.append(data['meta']['clinical']['melanocytic'])
data_list.append(data['meta']['clinical']['sex'])
data_list.append(data['meta']['unstructured']['diagnosis'])
# In few json files, the race was not there so using KeyError exception to add '' at the place
try:
data_list.append(data['meta']['unstructured']['race'])
except KeyError:
data_list.append("") # will add an empty string in case race is not there.
data_list.append(data['name'])
return data_list
def write_csv():
"""Creates the desired csv file
Parameters
__________
list_of_files : file
The list created by get_list_of_json_files() method
result.csv : csv
The csv file containing the header only
Returns
_______
result.csv : csv
The desired csv file
"""
list_of_files = get_list_of_json_files()
for file in list_of_files:
row = create_list_from_json(f'descriptions/{file}') # create the row to be added to csv for each file (json-file)
with open('output.csv', 'a') as c:
writer = csv.writer(c)
writer.writerow(row)
c.close()
if __name__ == '__main__':
write_csv()
I hope this will help. For details on how this code works you can check here.
Hi, I am trying to take the data from a json file, insert an id, and then perform a REST POST.
my file data.json has:
{
'name':'myname'
}
and I would like to add an id so that the json data looks like:
{
'id': 134,
'name': 'myname'
}
So I tried:
import json
f = open("data.json","r")
data = f.read()
jsonObj = json.loads(data)
I can't get the json file to load.
What should I do to convert the json file into a json object and add another id value?
Set item using data['id'] = ....
import json

with open('data.json', 'r+') as f:
    data = json.load(f)
    data['id'] = 134  # <--- add `id` value.
    f.seek(0)  # <--- should reset file position to the beginning.
    json.dump(data, f, indent=4)
    f.truncate()  # remove remaining part
falsetru's solution is nice, but has a little bug:
Suppose the original 'id' length was larger than 5 characters. When we then dump with the new 'id' (134, with only 3 characters), the length of the string written from position 0 in the file is shorter than the original length. Extra characters (such as '}') are left in the file from the original content.
I solved that by replacing the original file.
import json
import os

filename = 'data.json'
with open(filename, 'r') as f:
    data = json.load(f)
    data['id'] = 134  # <--- add `id` value.

os.remove(filename)
with open(filename, 'w') as f:
    json.dump(data, f, indent=4)
I would like to present a modified version of Vadim's solution. It helps to deal with asynchronous requests to write/modify json file. I know it wasn't a part of the original question but might be helpful for others.
In case of asynchronous file modification os.remove(filename) will raise FileNotFoundError if requests emerge frequently. To overcome this problem you can create temporary file with modified content and then rename it simultaneously replacing old version. This solution works fine both for synchronous and asynchronous cases.
import os, json, uuid

filename = 'data.json'
with open(filename, 'r') as f:
    data = json.load(f)
    data['id'] = 134  # <--- add `id` value.
    # add, remove, modify content

# create randomly named temporary file to avoid
# interference with other thread/asynchronous request
tempfile = os.path.join(os.path.dirname(filename), str(uuid.uuid4()))
with open(tempfile, 'w') as f:
    json.dump(data, f, indent=4)

# rename temporary file replacing old file
os.rename(tempfile, filename)
There are really quite a number of ways to do this, and all of the above are in one way or another valid approaches... Let me add a straightforward proposition. So assuming your current existing json file looks like this...
{
"name":"myname"
}
And you want to bring in this new json content (adding key "id")
{
"id": "134",
"name": "myname"
}
My approach has always been to keep the code extremely readable with easily traceable logic. So first, we read the entire existing json file into memory, assuming you are very well aware of your json's existing key(s).
import json

# first, get the absolute path to json file
PATH_TO_JSON = 'data.json'  # assuming same directory (but you can work your magic here with os.)

# read existing json to memory. you do this to preserve whatever existing data.
with open(PATH_TO_JSON, 'r') as jsonfile:
    json_content = json.load(jsonfile)  # this is now in memory! you can use it outside 'open'
Next, we use the 'with open()' syntax again, this time with the 'w' option. 'w' is a write mode which lets us edit and write new information to the file. Here's the catch that works for us: any existing json with the same target write name will be erased automatically.
So what we can do now, is simply write to the same filename with the new data
# add the id key-value pair (remember that it already has the "name" key-value)
json_content["id"] = "134"

with open(PATH_TO_JSON, 'w') as jsonfile:
    json.dump(json_content, jsonfile, indent=4)  # you decide the indentation level
And there you go!
data.json should be good to go for a good old POST request.
try this script:
with open("data.json") as f:
data = json.load(f)
data["id"] = 134
json.dump(data, open("data.json", "w"), indent = 4)
the result is:
{
    "name": "myname",
    "id": 134
}
Just the arrangement is different. You can solve the problem by converting the "data" type to a list, arranging it as you wish, then converting it back and saving the file, like this:
index_add = 0
with open("data.json") as f:
    data = json.load(f)

data_li = [[k, v] for k, v in data.items()]
data_li.insert(index_add, ["id", 134])
data = {data_li[i][0]: data_li[i][1] for i in range(0, len(data_li))}
json.dump(data, open("data.json", "w"), indent=4)
the result is:
{
"id":134,
"name":"myname"
}
you can add an if condition so the key isn't repeated but just changed, like this:
index_add = 0
n_k = "id"
n_v = 134
with open("data.json") as f:
    data = json.load(f)

if n_k in data:
    data[n_k] = n_v
else:
    data_li = [[k, v] for k, v in data.items()]
    data_li.insert(index_add, [n_k, n_v])
    data = {data_li[i][0]: data_li[i][1] for i in range(0, len(data_li))}
json.dump(data, open("data.json", "w"), indent=4)
This implementation should suffice:
with open(jsonfile, 'r') as file:
    data = json.load(file)

data['id'] = value  # `value` is whatever you want to store under the new key

with open(jsonfile, 'w') as file:
    json.dump(data, file)
This uses a context manager for opening the jsonfile. data holds the updated object, which is dumped into the overwritten jsonfile in 'w' mode.
Not exactly your solution, but it might help some people solving this issue with keys.
I have a list of files in a folder, and I need to make JSON out of it with keys.
After many hours of trying, the solution is simple.
Solution:
import os

async def return_file_names():
    dir_list = os.listdir("./tmp/")
    json_dict = {"responseObj": [{"Key": dir_list.index(value), "Value": value} for value in dir_list]}
    print(json_dict)
    return json_dict
The response looks like this:
{
"responseObj": [
{
"Key": 0,
"Value": "bottom_mask.GBS"
},
{
"Key": 1,
"Value": "bottom_copper.GBL"
},
{
"Key": 2,
"Value": "copper.GTL"
},
{
"Key": 3,
"Value": "soldermask.GTS"
},
{
"Key": 4,
"Value": "ncdrill.DRD"
},
{
"Key": 5,
"Value": "silkscreen.GTO"
}
]
}