I have a file which contains comma-separated data, and I want the final output to be JSON. I tried the code below, but I wanted to know if there is a better way to implement this?
data.txt
1,002, name, address
2,003, name_1, address_2
3,004, name_2, address_3
I want final output like below
{
"Id": "1",
"identifier": "002",
"mye": "name",
"add": "address"
}
{
"Id": "2",
"identifier": "003",
"mye": "name_2",
"add": "address_2"
}
and so on...
Here is the code I am trying:
rows = []  # renamed from `list`, which shadows the built-in
with open('data.txt') as reader:
    for line in reader:
        rows.append(line.split(','))
print(rows)
The above just returns a list of lists, but I need to convert each row into the JSON key/value pairs defined above.
Your desired result isn't actually JSON. It's just a series of dict structures. What I think you want is a list of dictionaries. Try this:
fields = ["Id", "identifier", "mye", "add"]
my_json = []
with open('data.txt') as reader:
    for line in reader:
        vals = line.rstrip().split(',')
        # enumerate pairs each value with its position, which avoids the
        # bug of looking values up by vals.index() when duplicates exist;
        # .strip() drops the leading spaces after each comma
        my_json.append({fields[i]: val.strip() for i, val in enumerate(vals)})
print(my_json)
Something like this should work:
import json

dataList = []
with open('data.txt') as reader:
    # split lines in a way that strips unnecessary whitespace and newlines
    for line in reader.read().replace(' ', '').split('\n'):
        if not line:  # skip a trailing empty line, if any
            continue
        lineData = line.split(',')
        dataList.append({
            "Id": lineData[0],
            "identifier": lineData[1],
            "mye": lineData[2],
            "add": lineData[3]
        })

out_json = json.dumps(dataList)
print(out_json)
Note that you can change this line:
out_json = json.dumps(dataList)
to
out_json = json.dumps(dataList, indent=4)
and change the indent value to format the json output.
And you can write it to a file if you want to (using a with statement so the file is closed properly):
with open("out.json", "w") as f:
    f.write(out_json)
As an extension to one of the other suggestions, you can use zip instead of a dict comprehension:
import json

my_json = []
dict_header = ["Id", "identifier", "mye", "add"]
with open('data.txt') as fh:
    for line in fh:
        my_json.append(dict(zip(dict_header, line.rstrip('\n').split(','))))

out_file = open("test1.json", "w")
json.dump(my_json, out_file, indent=4, sort_keys=False)
out_file.close()
This of course assumes you saved the data from Excel as tab-delimited text, in which case it would look like:
1 2 name address
2 3 name_1 address_2
3 4 name_2 address_3
I would suggest using the pandas library; it can't get any easier.
import pandas as pd
df = pd.read_csv("data.txt", header=None)
df.columns = ["Id","identifier", "mye","add"]
df.to_json("output.json")
I have the JSON file below and I am trying to extract the value of dob_year from each item if the name element is jim.
This is my try:
import json

with open('j.json') as json_file:
    data = json.load(json_file)
    if data['name'] == 'jim'
        print(data['dob_year'])
Error:
File "extractjson.py", line 6 if data['name'] == 'jim' ^ SyntaxError: invalid syntax
This is my file:
[
{
"name": "jim",
"age": "10",
"sex": "male",
"dob_year": "2007",
"ssn": "123-23-1234"
},
{
"name": "jill",
"age": "6",
"sex": "female",
"dob_year": "2011",
"ssn": "123-23-1235"
}
]
You need to iterate over the list in the JSON file:
data = json.load(json_file)
for item in data:
    if item['name'] == 'jim':
        print(item['dob_year'])
I suggest using a list comprehension:
import json

with open('j.json') as json_file:
    data = json.load(json_file)
    print([item["dob_year"] for item in data if item["name"] == "jim"])
json.load()
returns a list of entries, the way you have saved it. You have to iterate through the list items and search for your field in each one.
Also, if it is a repetitive task, make a function out of it. You can pass in different files, fields to be extracted, and so on, which makes the job easier.
Example:
def extract_info(search_field, extract_field, name, filename):
    import json
    with open(filename) as json_file:
        data = json.load(json_file)
        for item in data:
            if item[search_field] == name:
                print(item[extract_field])

extract_info('name', 'dob_year', 'jim', 'j.json')
data is a list of dictionaries, so you cannot access the key/value pairs directly.
Try wrapping the lookup in a loop that checks every dict in the list (note the variable is renamed here, since set shadows a built-in):
import json

with open('j.json') as json_file:
    data = json.load(json_file)
    for entry in data:
        if entry['name'] == 'jim':
            print(entry['dob_year'])
I am looking to create a Python script to convert a nested JSON file to a CSV file, with each innermost child having its own row that includes all of the parent fields as well.
My nested JSON looks like this:
(Note this is just a small excerpt, there are hundreds of date/value pairs)
{
"test1": true,
"test2": [
{
"name_id": 12345,
"tags": [
{
"prod_id": 54321,
"history": [
{
"date": "Feb-2-2019",
"value": 6
},
{
"date": "Feb-3-2019",
"value": 5
},
{
"date": "Feb-4-2019",
"value": 4
}
The goal is to write to a csv where each row shows the values for the inner most field and all of its parents. (e.g, date, value, prod_id, name_id, test1). Basically creating a row for each date & value, with all of the parent field values included as well.
I started using this resource as a foundation, but still not exactly what I'm trying to accomplish:
How to Flatten Deeply Nested JSON Objects in Non-Recursive Elegant Python
I've tried tweaking this script but have not been able to come up with a solution. This seems like a relatively easy task, so maybe there's something I'm missing.
A lot of what you want to do is very data-format specific. Here's something using a function loosely derived from the "traditional recursive" solution shown in the linked resource you cited. It works fine with this data since it's not that deeply nested, and it is simpler than the iterative approach also illustrated there.
The flatten_json() function returns a list, with each value corresponding to keys in the JSON object passed to it.
Note this is Python 3 code.
from collections import OrderedDict
import csv
import json


def flatten_json(nested_json):
    """ Flatten values of JSON object dictionary. """
    name, out = [], []

    def flatten(obj):
        if isinstance(obj, dict):
            for key, value in obj.items():
                name.append(key)
                flatten(value)
        elif isinstance(obj, list):
            for index, item in enumerate(obj):
                name.append(str(index))
                flatten(item)
        else:
            out.append(obj)

    flatten(nested_json)
    return out


def grouper(iterable, n):
    """ Collect data in iterable into fixed-length chunks or blocks. """
    args = [iter(iterable)] * n
    return zip(*args)
if __name__ == '__main__':
    json_str = """
        {"test1": true,
         "test2": [
            {"name_id": 12345,
             "tags": [
                {"prod_id": 54321,
                 "history": [
                    {"date": "Feb-2-2019", "value": 6},
                    {"date": "Feb-3-2019", "value": 5},
                    {"date": "Feb-4-2019", "value": 4}
                 ]
                }
             ]
            }
         ]
        }
    """

    # Flatten the json object into a list of values.
    json_obj = json.loads(json_str, object_pairs_hook=OrderedDict)
    flattened = flatten_json(json_obj)
    print('flattened:', flattened)

    # Create row dictionaries for each (date, value) pair at the end of the
    # list of flattened values, with all of the preceding fields repeated in
    # each one.
    test1, name_id, prod_id = flattened[:3]
    rows = []
    for date, value in grouper(flattened[3:], 2):
        rows.append({'date': date, 'value': value,
                     'prod_id': prod_id, 'name_id': name_id, 'test1': test1})

    # Write rows to a csv file.
    filename = 'product_tests.csv'
    fieldnames = 'date', 'value', 'prod_id', 'name_id', 'test1'
    with open(filename, mode='w', newline='') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames)
        writer.writeheader()  # Write csv file header row (optional).
        writer.writerows(rows)

    print('"{}" file written.'.format(filename))
Here's what it prints:
flattened: [True, 12345, 54321, 'Feb-2-2019', 6, 'Feb-3-2019', 5, 'Feb-4-2019', 4]
"product_tests.csv" file written.
And here's the contents of the product_tests.csv file produced:
date,value,prod_id,name_id,test1
Feb-2-2019,6,54321,12345,True
Feb-3-2019,5,54321,12345,True
Feb-4-2019,4,54321,12345,True
I am trying to create a JSON file from a CSV. The code below creates the data, however not quite the way I want it. I have some experience in Python. From my understanding, the JSON file should be written like this: [{},{},...,{}].
How do I:
Remove the last ','? (I am able to insert the ',' after each row.)
Insert '[' at the very beginning and ']' at the very end? I tried inserting them via outfile.write('[' ...), but they show up in too many places.
Not include the header as the first object in the JSON file?
Names.csv:
id,team_name,team_members
123,Biology,"Ali Smith, Jon Doe"
234,Math,Jane Smith
345,Statistics ,"Matt P, Albert Shaw"
456,Chemistry,"Andrew M, Matt Shaw, Ali Smith"
678,Physics,"Joe Doe, Jane Smith, Ali Smith "
Code:
import csv
import json
import os

with open('names.csv', 'r') as infile, open('names1.json', 'w') as outfile:
    for line in infile:
        row = dict()
        # print(row)
        id, team_name, *team_members = line.split(',')
        row["id"] = id
        row["team_name"] = team_name
        row["team_members"] = team_members
        json.dump(row, outfile)
        outfile.write("," + "\n")
Output so far:
{"id": "id", "team_name": "team_name", "team_members": ["team_members\n"]},
{"id": "123", "team_name": "Biology", "team_members": ["\"Ali Smith", " Jon Doe\"\n"]},
{"id": "234", "team_name": "Math", "team_members": ["Jane Smith \n"]},
{"id": "345", "team_name": "Statistics ", "team_members": ["\"Matt P", " Albert Shaw\"\n"]},
{"id": "456", "team_name": "Chemistry", "team_members": ["\"Andrew M", " Matt Shaw", " Ali Smith\"\n"]},
{"id": "678", "team_name": "Physics", "team_members": ["\"Joe Doe", " Jane Smith", " Ali Smith \""]},
First, how do you skip the header? That's easy:
next(infile)  # skip the first line
for line in infile:
However, you may want to consider using a csv.DictReader for input. It handles reading the header line, using the information there to create a dict for each row, and splitting the rows for you (as well as handling cases you may not have thought of, like quoted or escaped text that can be present in CSV files):
for row in csv.DictReader(infile):
    json.dump(row, outfile)
Now onto the harder problem.
A better solution would probably be to use an iterative JSON library that can dump an iterator as a JSON array. Then (with genjson standing in for some hypothetical such library) you could do something like this:
def rows(infile):
    for line in infile:
        row = dict()
        # print(row)
        id, team_name, *team_members = line.split(',')
        row["id"] = id
        row["team_name"] = team_name
        row["team_members"] = team_members
        yield row

with open('names.csv', 'r') as infile, open('names1.json', 'w') as outfile:
    genjson.dump(rows(infile), outfile)  # hypothetical API
The stdlib json.JSONEncoder has an example in the docs that does exactly this, although not very efficiently, because it first consumes the entire iterator to build a list and then dumps that:
class GenJSONEncoder(json.JSONEncoder):
    def default(self, o):
        try:
            iterable = iter(o)
        except TypeError:
            pass
        else:
            return list(iterable)
        # Let the base class default method raise the TypeError
        return json.JSONEncoder.default(self, o)

j = GenJSONEncoder()
with open('names.csv', 'r') as infile, open('names1.json', 'w') as outfile:
    outfile.write(j.encode(rows(infile)))
And really, if you're willing to build a whole list rather than encode line by line, it may be simpler to just do the listifying explicitly:
with open('names.csv', 'r') as infile, open('names1.json', 'w') as outfile:
    json.dump(list(rows(infile)), outfile)
You can go further by also overriding the iterencode method, but this will be a lot less trivial, and you'd probably want to look for an efficient, well-tested streaming iterative JSON library on PyPI instead of building it yourself from the json module.
But, meanwhile, here's a direct solution to your question, changing as little as possible from your existing code:
with open('names.csv', 'r') as infile, open('names1.json', 'w') as outfile:
    # print the opening [
    outfile.write('[\n')
    # keep track of the index, just to distinguish line 0 from the rest
    for i, line in enumerate(infile):
        row = dict()
        # print(row)
        id, team_name, *team_members = line.split(',')
        row["id"] = id
        row["team_name"] = team_name
        row["team_members"] = team_members
        # add the ,\n _before_ each row except the first
        if i:
            outfile.write(',\n')
        json.dump(row, outfile)
    # write the final ]
    outfile.write('\n]')
This trick (treating the first element as special rather than the last) simplifies a lot of problems of this type.
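If this comes up often, the pattern is easy to factor into a small reusable helper. This is just a sketch of the same comma-before-each-element idea, not part of the original code:

```python
import io
import json

def dump_json_array(items, outfile):
    """Stream items to outfile as a JSON array, writing the comma
    *before* every element except the first."""
    outfile.write('[\n')
    for i, item in enumerate(items):
        if i:  # every element but the first gets a leading comma
            outfile.write(',\n')
        json.dump(item, outfile)
    outfile.write('\n]')

# Usage: works with any iterable of JSON-serializable items.
buf = io.StringIO()
dump_json_array([{"id": "123"}, {"id": "234"}], buf)
print(buf.getvalue())
```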
Another way to simplify things is to actually iterate over adjacent pairs of lines, using a minor variation on the pairwise example in the itertools docs:
import itertools

def pairwise(iterable):
    a, b = itertools.tee(iterable)
    next(b, None)
    return itertools.zip_longest(a, b, fillvalue=None)
with open('names.csv', 'r') as infile, open('names1.json', 'w') as outfile:
    # print the opening [
    outfile.write('[\n')
    # iterate pairs of lines
    for line, nextline in pairwise(infile):
        row = dict()
        # print(row)
        id, team_name, *team_members = line.split(',')
        row["id"] = id
        row["team_name"] = team_name
        row["team_members"] = team_members
        json.dump(row, outfile)
        # add the , if there is a next line
        if nextline is not None:
            outfile.write(',')
        outfile.write('\n')
    # write the final ]
    outfile.write(']')
This is just as efficient as the previous version and conceptually simpler, but a lot more abstract.
With a minimal edit to your code, you can create a list of dictionaries in Python and dump it to file as JSON all at once (assuming your dataset is small enough to fit in memory):
import csv
import json
import os

rows = []  # Create list
with open('names.csv', 'r') as infile, open('names1.json', 'w') as outfile:
    for line in infile:
        row = dict()
        id, team_name, *team_members = line.split(',')
        row["id"] = id
        row["team_name"] = team_name
        row["team_members"] = team_members
        rows.append(row)  # Append row to list
    json.dump(rows[1:], outfile)  # Write entire list to file (except the header row)
As an aside, you should not use id as a variable name in Python as it is a built-in function.
Pandas can handle this with ease:
import json
import pandas as pd

df = pd.read_csv('names.csv', dtype=str)
df['team_members'] = (df['team_members']
                      .map(lambda s: s.split(','))
                      .map(lambda l: [x.strip() for x in l]))
records = df.to_dict('records')
with open('names1.json', 'w') as outfile:
    json.dump(records, outfile)
Seems like it would be a lot easier to use the csv.DictReader class instead of reinventing the wheel:
import csv
import json

data = []
with open('names.csv', 'r', newline='') as infile:
    for row in csv.DictReader(infile):
        data.append(row)

with open('names1.json', 'w') as outfile:
    json.dump(data, outfile, indent=4)
Contents of the names1.json file following execution (I used indent=4 just to make it more human-readable):
[
{
"id": "123",
"team_name": "Biology",
"team_members": "Ali Smith, Jon Doe"
},
{
"id": "234",
"team_name": "Math",
"team_members": "Jane Smith"
},
{
"id": "345",
"team_name": "Statistics ",
"team_members": "Matt P, Albert Shaw"
},
{
"id": "456",
"team_name": "Chemistry",
"team_members": "Andrew M, Matt Shaw, Ali Smith"
},
{
"id": "678",
"team_name": "Physics",
"team_members": "Joe Doe, Jane Smith, Ali Smith"
}
]
I'm trying to take data from a CSV and put it in a top-level array in JSON format.
Currently I am running this code:
import csv
import json

csvfile = open('music.csv', 'r')
jsonfile = open('file.json', 'w')
fieldnames = ("ID","Artist","Song", "Artist")
reader = csv.DictReader( csvfile, fieldnames)
for row in reader:
    json.dump(row, jsonfile)
    jsonfile.write('\n')
The CSV file is formatted as so:
| 1 | Empire of the Sun | We Are The People | Walking on a Dream |
| 2 | M83 | Steve McQueen | Hurry Up We're Dreaming |
Where: Column 1 = ID, Column 2 = Artist, Column 3 = Song, Column 4 = Album
And getting this output:
{"Song": "Empire of the Sun", "ID": "1", "Artist": "Walking on a Dream"}
{"Song": "M83", "ID": "2", "Artist": "Hurry Up We're Dreaming"}
I'm trying to get it to look like this though:
{
"Music": [
{
"id": 1,
"Artist": "Empire of the Sun",
"Name": "We are the People",
"Album": "Walking on a Dream"
},
{
"id": 2,
"Artist": "M83",
"Name": "Steve McQueen",
"Album": "Hurry Up We're Dreaming"
}
]
}
Pandas solves this really simply. First, read the file:
import pandas
df = pandas.read_csv('music.csv', names=("id", "Artist", "Song", "Album"))
Now you have some options. The quickest way to get a proper json file out of this is simply
df.to_json('file.json', orient='records')
Output:
[{"id":1,"Artist":"Empire of the Sun","Song":"We Are The People","Album":"Walking on a Dream"},{"id":2,"Artist":"M83","Song":"Steve McQueen","Album":"Hurry Up We're Dreaming"}]
This doesn't handle the requirement that you want it all in a "Music" object or the order of the fields, but it does have the benefit of brevity.
To wrap the output in a Music object, we can use to_dict:
import json

with open('file.json', 'w') as f:
    json.dump({'Music': df.to_dict(orient='records')}, f, indent=4)
Output:
{
"Music": [
{
"id": 1,
"Album": "Walking on a Dream",
"Artist": "Empire of the Sun",
"Song": "We Are The People"
},
{
"id": 2,
"Album": "Hurry Up We're Dreaming",
"Artist": "M83",
"Song": "Steve McQueen"
}
]
}
I would advise you to reconsider insisting on a particular order for the fields since the JSON specification clearly states "An object is an unordered set of name/value pairs" (emphasis mine).
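As an aside, on Python 3.7 and later, plain dicts preserve insertion order, and json.dumps serializes keys in that same order, so on a modern interpreter you get a stable key order without any extra machinery. A quick sketch:

```python
import json

# Since Python 3.7, dict preserves insertion order and json.dumps
# emits keys in that order.
row = {"ID": "1", "Artist": "Empire of the Sun",
       "Song": "We Are The People", "Album": "Walking on a Dream"}
print(json.dumps(row, indent=4))
```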
Alright, this is untested, but try the following (note the last field name is corrected to "Album"; the question's code listed "Artist" twice, which makes the two keys collide in each row dict):
import csv
import json
from collections import OrderedDict

fieldnames = ("ID", "Artist", "Song", "Album")
entries = []

# the with statement is better since it handles closing your file properly after usage
with open('music.csv', 'r') as csvfile:
    # python's standard dict does not guarantee any order, but if you write
    # into an OrderedDict, the order of write operations is kept in the output
    reader = csv.DictReader(csvfile, fieldnames)
    for row in reader:
        entry = OrderedDict()
        for field in fieldnames:
            entry[field] = row[field]
        entries.append(entry)

output = {
    "Music": entries
}

with open('file.json', 'w') as jsonfile:
    json.dump(output, jsonfile)
    jsonfile.write('\n')
Your logic is in the wrong order. json is designed to convert a single object into JSON, recursively. So you should always be thinking in terms of building up a single object before calling dump or dumps.
First collect it into an array:
music = [r for r in reader]
Then put it in a dict:
result = {'Music': music}
Then dump to JSON:
json.dump(result, jsonfile)
Or all in one line:
json.dump({'Music': [r for r in reader]}, jsonfile)
"Ordered" JSON
If you really care about the order of object properties in the JSON (even though you shouldn't), you shouldn't use the DictReader. Instead, use the regular reader and create OrderedDicts yourself:
from collections import OrderedDict
...
reader = csv.reader(csvfile)
music = [OrderedDict(zip(fieldnames, r)) for r in reader]
Or in a single line again:
json.dump({'Music': [OrderedDict(zip(fieldnames, r)) for r in reader]}, jsonfile)
Other
Also, use context managers for your files to ensure they're closed properly:
with open('music.csv', 'r') as csvfile, open('file.json', 'w') as jsonfile:
# Rest of your code inside this block
It didn't write to the JSON file in the order I would have liked
The csv.DictReader class returns Python dict objects. Python dictionaries are unordered collections, so you have no control over their presentation order.
Python does provide an OrderedDict, which you can use if you avoid using csv.DictReader().
and it skipped the song name altogether.
This is because the file is not really a CSV file. In particular, each line begins and ends with the field separator. We can use .strip("|") to fix this.
I need all this data to be output into an array named "Music"
Then the program needs to create a dict with "Music" as a key.
I need it to have commas after each artist info. In the output I get I get
This problem arises because you call json.dump() multiple times. You should only call it once if you want a valid JSON file.
Try this:
import csv
import json
from collections import OrderedDict


def MyDictReader(fp, fieldnames):
    fp = (x.strip().strip('|').strip() for x in fp)
    reader = csv.reader(fp, delimiter="|")
    reader = ([field.strip() for field in row] for row in reader)
    dict_reader = (OrderedDict(zip(fieldnames, row)) for row in reader)
    return dict_reader


csvfile = open('music.csv', 'r')
jsonfile = open('file.json', 'w')
fieldnames = ("ID", "Artist", "Song", "Album")
reader = MyDictReader(csvfile, fieldnames)
json.dump({"Music": list(reader)}, jsonfile, indent=2)