I'm trying to read existing data from a JSON file and append more data to it using Python (I'm a Python newbie). Here is the existing data in data.json, which I read in my script:
{
    "Config1": {
        "TestCase1": {
            "Data1": 200,
            "Data2": 2715
        }
    },
    "Config2": {
        "TestCase1": {
            "Data1": 2710,
            "Data2": 2715
        }
    }
}
After reading it, I want to append TestCase2 data. This is what I'm doing:
with open("data.json") as json_file: #load existing data
json_data = json.load(json_file)
test='TestCase2'
result=json_data
myConfigs = ['Config1','Config2']
for each, config in enumerate(myConfigs):
result.update({config:{test:{'Data1':2600,'Data2':2900}}})
with open('data.json', 'a') as outfile:
json.dump(result, outfile)
The new data in data.json is not valid, as pointed out by JSONLint. What am I doing wrong? Here is the new content:
{
    "Config1": {
        "TestCase1": {
            "Data1": 200,
            "Data2": 2715
        }
    },
    "Config2": {
        "TestCase1": {
            "Data1": 2710,
            "Data2": 2715
        }
    }
} {
    "Config1": {
        "TestCase2": {
            "Data1": 2600,
            "Data2": 2900
        }
    },
    "Config2": {
        "TestCase2": {
            "Data1": 2600,
            "Data2": 2900
        }
    }
}
In addition to opening the file in the wrong mode (should be 'w'), you are also overwriting your old "config" trees by defining a new dict inline.
Instead of:
result.update({config:{test:{'Data1':2600,'Data2':2900}}})
Try this:
result[config][test] = {'Data1': 2600, 'Data2': 2900}
This should give you the result you are looking for with your example. It will let result['Config1']['TestCase1'] persist while you add TestCase2. You may also need to make sure that the config tree exists before assigning into it, e.g. with result.setdefault(config, {}).
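A minimal sketch of that corrected flow (my illustration, not code from the question), using setdefault so each config tree is created if it doesn't exist yet:

import json

with open("data.json") as json_file:
    result = json.load(json_file)

test = 'TestCase2'
for config in ['Config1', 'Config2']:
    # create the config dict if missing, then add the new test case next to TestCase1
    result.setdefault(config, {})[test] = {'Data1': 2600, 'Data2': 2900}

# 'w' replaces the file with the single merged object instead of appending a second one
with open('data.json', 'w') as outfile:
    json.dump(result, outfile, indent=4)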
The problem is that you're appending the new JSON to the original JSON file here:
with open('data.json', 'a') as outfile:
    json.dump(result, outfile)
So you have two JSON objects in the same file as you can see:
...
            "Data2": 2715
        }
    }
} {   <--- original object ends here, new object starts here
    "Config1": {
...
JSONLint is expecting a single object, as will any JSON parser.
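You can reproduce the same complaint with Python's own parser; a tiny illustration with made-up data:

import json

text = '{"a": 1} {"b": 2}'  # two JSON objects back to back, like the broken file
try:
    json.loads(text)
except json.JSONDecodeError as err:
    print(err)  # something like: Extra data: line 1 column 10 (char 9)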
The main problem is that dict1.update(dict2) overwrites keys of dict1 that also exist in dict2, which is why the second object in your file doesn't have the TestCase1 key.
Another problem is that (as pointed out above) the file is opened in the wrong mode. It should be 'w', since 'a' appends a second object to the JSON file instead of replacing it.
You could try this:
with open("data.json") as json_file:
json_data = json.load(json_file)
test='TestCase2'
result=json_data
myConfigs = ['Config1','Config2']
for each, config in enumerate(myConfigs):
result[config].update({test:{'Data1':2600,'Data2':2900}})
with open('data.json', 'w') as outfile:
json.dump(result, outfile)
The key change is result[config].update(...) instead of result.update({config: ...}).
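For reference, after this change data.json should end up as a single object along these lines:

{
    "Config1": {
        "TestCase1": {"Data1": 200, "Data2": 2715},
        "TestCase2": {"Data1": 2600, "Data2": 2900}
    },
    "Config2": {
        "TestCase1": {"Data1": 2710, "Data2": 2715},
        "TestCase2": {"Data1": 2600, "Data2": 2900}
    }
}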
Related
I have to extract variable data from a JSON file, where the path is not constant.
Here's my code
import json

JSONFILE = "output.json"
jsonf = open(JSONFILE, 'r', encoding='utf-8')
with jsonf as json_file:
    data = json.load(json_file)
    print(data["x1"]["y1"][0])
The JSON file:
{
    "x1": {
        "y1": [
            {
                "value": "v1",
                "type": "t1"
            }
        ]
    },
    "x2": {
        "y2": [
            {
                "value": "v2",
                "type": "t2"
            }
        ]
    }
}
I want to extract all the values, not only [x1][y1][value].
All the values are already stored in data, which is a dictionary; it's up to you which keys you index to obtain them. Learn more about dictionaries in Python 3: https://docs.python.org/3/tutorial/datastructures.html#dictionaries
Feel free to ask further questions.
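For example, with the file shown above you can index each value directly:

print(data["x1"]["y1"][0]["value"])  # v1
print(data["x2"]["y2"][0]["value"])  # v2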
Use a for loop to iterate over the dictionary:
import json

JSONFILE = "output.json"
jsonf = open(JSONFILE, 'r', encoding='utf-8')
with jsonf as json_file:
    data = json.load(json_file)
    for key in data.keys():
        for key2 in data[key].keys():
            print(data[key][key2])
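If you only want the "value" fields collected into a list, a small variation of the same loop (a sketch based on the file above):

values = []
for key in data:
    for key2 in data[key]:
        for entry in data[key][key2]:  # each inner item is a list of dicts
            values.append(entry["value"])
print(values)  # ['v1', 'v2']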
I am trying to add data into a JSON key from a CSV file and maintain the original structure as-is. The JSON file looks like this:
{
    "inputDocuments": {
        "gcsDocuments": {
            "documents": [
                {
                    "gcsUri": "gs://test/.PDF",
                    "mimeType": "application/pdf"
                }
            ]
        }
    },
    "documentOutputConfig": {
        "gcsOutputConfig": {
            "gcsUri": "gs://test"
        }
    },
    "skipHumanReview": false
}
The CSV file I am trying to load contains only the gcsUri values; note that the mimeType is not included in the CSV file.
I already have code that can do this, but it's a bit manual, and I am looking for a simpler approach that would just take a CSV file with the values and add the data into the JSON structure. The expected outcome should look like this:
{
    "inputDocuments": {
        "gcsDocuments": {
            "documents": [
                {
                    "gcsUri": "gs://sampleinvoices/Handwritten/1.pdf",
                    "mimeType": "application/pdf"
                },
                {
                    "gcsUri": "gs://sampleinvoices/Handwritten/2.pdf",
                    "mimeType": "application/pdf"
                }
            ]
        }
    },
    "documentOutputConfig": {
        "gcsOutputConfig": {
            "gcsUri": "gs://test"
        }
    },
    "skipHumanReview": false
}
The code that I am currently using, which is a bit manual, looks like this:
import json

# function to add to JSON
def write_json(new_data, filename='keyvalue.json'):
    with open(filename, 'r+') as file:
        # load existing data into a dict
        file_data = json.load(file)
        # join new_data with file_data inside documents
        file_data["inputDocuments"]["gcsDocuments"]["documents"].append(new_data)
        # set the file's current position back to the start
        file.seek(0)
        # convert back to json
        json.dump(file_data, file, indent=4)

# python object to be appended
y = {
    "gcsUri": "gs://test/.PDF",
    "mimeType": "application/pdf"
}

write_json(y)
I would suggest something like this:
import pandas as pd
import json
from pathlib import Path

df_csv = pd.read_csv("your_data.csv")

json_file = Path("your_data.json")
json_data = json.loads(json_file.read_text())

documents = [
    {
        "gcsUri": cell,
        "mimeType": "application/pdf"
    }
    for cell in df_csv["column_name"]
]

json_data["inputDocuments"]["gcsDocuments"]["documents"] = documents

json_file.write_text(json.dumps(json_data))
Here "your_data.csv", "your_data.json" and "column_name" are placeholders for your actual file names and column header. Probably you should split this into separate functions, but it should communicate the general idea.
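For illustration, this assumes the CSV holds a single column of gcsUri values under a header row, something like the following (the header name is only a guess; use whatever you pass to df_csv[...]):

gcsUri
gs://sampleinvoices/Handwritten/1.pdf
gs://sampleinvoices/Handwritten/2.pdf

With a file like that, the comprehension would read df_csv["gcsUri"] instead of df_csv["column_name"].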
I am trying to achieve the below JSON format and store it in a json file:
{
    "Name": "Anurag",
    "resetRecordedDate": false,
    "ED": {
        "Link": "google.com"
    }
}
I know how to create a simple JSON file using json.dump, but I'm not really sure how to add a nested dictionary for one of the records within the JSON file.
Assuming the input JSON content is:
{
    "Name": "Anurag",
    "resetRecordedDate": false
}
Program
import json

# read file
with open('example.json', 'r') as infile:
    data = infile.read()

# parse file
parsed_json = json.loads(data)

# add dictionary element
parsed_json["ED"] = {
    "Link": "google.com"
}

# print(json.dumps(parsed_json, indent=4))

# write to json
with open('data.json', 'w') as outfile:
    json.dump(parsed_json, outfile)
Output:
{
    "Name": "Anurag",
    "resetRecordedDate": false,
    "ED": {
        "Link": "google.com"
    }
}
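If you are building the data in memory rather than reading it from an existing file, the same nested structure can be created directly; a sketch (the output file name is just an example):

import json

record = {
    "Name": "Anurag",
    "resetRecordedDate": False,
}
# add the nested dictionary for the "ED" record
record["ED"] = {"Link": "google.com"}

with open('data.json', 'w') as outfile:
    json.dump(record, outfile, indent=4)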
Code, including the JSON file:
Python (supposed to append a new user and balance):
import json

with open('users_balance.json', 'r') as file:
    data = json.load(file)['user_list']

data['user_list']
data.append({"user": "sdfsd", "balance": 40323420})

with open('users_balance.json', 'w') as file:
    json.dump(data, file, indent=2)
JSON (object the code is appending to):
{
  "user_list": [
    {
      "user": "<#!672986823185661955>",
      "balance": 400
    },
    {
      "user": "<#!737747404048171043>",
      "balance": 500
    }
  ]
}
Error (traceback given after executing the code):
data = json.load(file)['user_list']
KeyError: 'user_list'
The solution is to load the whole object, append to its user_list entry, and write the whole object back:
import json

with open('users_balance.json', 'r') as file:
    data = json.load(file)

data['user_list'].append({"user": "sdfsd", "balance": 40323420})

with open('users_balance.json', 'w') as file:
    json.dump(data, file, indent=2)
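After running this, users_balance.json should look roughly like this:

{
  "user_list": [
    {
      "user": "<#!672986823185661955>",
      "balance": 400
    },
    {
      "user": "<#!737747404048171043>",
      "balance": 500
    },
    {
      "user": "sdfsd",
      "balance": 40323420
    }
  ]
}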
The .json file I want to update has this structure:
{
  "username": "abc",
  "statistics": [
    {
      "followers": 1234,
      "date": "2018-02-06 02:00:00",
      "num_of_posts": 123,
      "following": 123
    }
  ]
}
and I want to insert a new statistic, like so:
{
  "username": "abc",
  "statistics": [
    {
      "followers": 1234,
      "date": "2018-02-06 02:00:00",
      "num_of_posts": 123,
      "following": 123
    },
    {
      "followers": 2345,
      "date": "2018-02-06 02:10:00",
      "num_of_posts": 234,
      "following": 234
    }
  ]
}
When working with
with open(filepath, 'w') as fp:
    json.dump(information, fp, indent=2)
the file will always be overwritten. But I want the items in statistics to be added. I tried reading the file in many different ways and appending to it afterwards, but it never worked.
The data comes in the information variable, like this:
information = {
    "username": username,
    "statistics": [
        {
            "date": datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
            "num_of_posts": num_of_posts,
            "followers": followers,
            "following": following
        }
    ]
}
So how do I update the .json file so that my information is added correctly?
You would want to do something along the lines of:
def append_statistics(filepath, num_of_posts, followers, following):
    with open(filepath, 'r') as fp:
        information = json.load(fp)

    information["statistics"].append({
        "date": datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
        "num_of_posts": num_of_posts,
        "followers": followers,
        "following": following
    })

    with open(filepath, 'w') as fp:
        json.dump(information, fp, indent=2)
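Called, for example, like this (the file path and numbers are placeholders, and the function assumes import json and from datetime import datetime at the top of the module):

append_statistics('stats.json', num_of_posts=234, followers=2345, following=234)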
You need to read the .json file, append the new dataset, and then dump the data back. See the code:
import json

appending_statistics_data = {}
appending_statistics_data["followers"] = 2346
appending_statistics_data["date"] = "2018-02-06 02:10:00"
appending_statistics_data["num_of_posts"] = 234
appending_statistics_data["following"] = 234

with open('file.json', 'r') as fp:
    data = json.load(fp)

data['statistics'].append(appending_statistics_data)

# print(json.dumps(data, indent=4))

with open('file.json', 'w') as fp:
    json.dump(data, fp, indent=2)
Normally, you don't directly update the file that you are reading from.
You might consider:
Read from the source file.
Do the processing.
Write to a new temp file.
Close both the source file and the temp file.
Rename (move) the temp file back to the source file.
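A minimal sketch of that pattern for the statistics file from the question (the paths and the helper name are my own, not from the original code):

import json
import os
import tempfile

def update_json_safely(filepath, new_stat):
    # read from the source file
    with open(filepath, 'r') as fp:
        data = json.load(fp)
    # do the processing
    data['statistics'].append(new_stat)
    # write to a new temp file in the same directory
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(filepath)))
    with os.fdopen(fd, 'w') as tmp:
        json.dump(data, tmp, indent=2)
    # rename (move) the temp file back over the source file
    os.replace(tmp_path, filepath)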