Update JSON file in Python

The .json file I want to update has this structure:
{
  "username": "abc",
  "statistics": [
    {
      "followers": 1234,
      "date": "2018-02-06 02:00:00",
      "num_of_posts": 123,
      "following": 123
    }
  ]
}
and I want to insert a new statistic into it, like so:
{
  "username": "abc",
  "statistics": [
    {
      "followers": 1234,
      "date": "2018-02-06 02:00:00",
      "num_of_posts": 123,
      "following": 123
    },
    {
      "followers": 2345,
      "date": "2018-02-06 02:10:00",
      "num_of_posts": 234,
      "following": 234
    }
  ]
}
When working with
with open(filepath, 'w') as fp:
    json.dump(information, fp, indent=2)
the file is always overwritten. But I want the items in statistics to be added. I tried reading the file in many possible ways and appending to it afterwards, but it never worked.
The data arrives in the information variable like this:
information = {
    "username": username,
    "statistics": [
        {
            "date": datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
            "num_of_posts": num_of_posts,
            "followers": followers,
            "following": following
        }
    ]
}
So how do I update the .json file so that my information is added correctly?

You would want to do something along the lines of:
import json
from datetime import datetime

def append_statistics(filepath, num_of_posts, followers, following):
    # Read the existing data
    with open(filepath, 'r') as fp:
        information = json.load(fp)
    # Append the new statistics entry
    information["statistics"].append({
        "date": datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
        "num_of_posts": num_of_posts,
        "followers": followers,
        "following": following
    })
    # Write the whole structure back
    with open(filepath, 'w') as fp:
        json.dump(information, fp, indent=2)
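Assuming the same file layout as in the question, a call could then look like this (the file name and values are just examples):
append_statistics('abc.json', num_of_posts=234, followers=2345, following=234)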

You need to read the .json file, append the new dataset, and then dump the data back. See the code:
import json

appending_statistics_data = {}
appending_statistics_data["followers"] = 2345
appending_statistics_data["date"] = "2018-02-06 02:10:00"
appending_statistics_data["num_of_posts"] = 234
appending_statistics_data["following"] = 234

with open('file.json', 'r') as fp:
    data = json.load(fp)

data['statistics'].append(appending_statistics_data)
# print(json.dumps(data, indent=4))

with open('file.json', 'w') as fp:
    json.dump(data, fp, indent=2)

Normally, you don't directly update the file that you are reading from.
You might consider:
read from the source file
do the processing
write to a new temp file
close both the source file and the temp file
rename (move) the temp file over the source file
A sketch of this pattern is shown below.
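For example, here is a minimal sketch of that temp-file approach, assuming the same statistics structure as in the question (the helper name is just illustrative):
import json
import os
import tempfile

def append_statistic(filepath, new_entry):
    # Read from the source file
    with open(filepath, 'r') as src:
        information = json.load(src)

    # Do the processing
    information["statistics"].append(new_entry)

    # Write to a new temp file in the same directory so the rename stays on one filesystem
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(filepath) or '.', suffix='.json')
    with os.fdopen(fd, 'w') as tmp:
        json.dump(information, tmp, indent=2)

    # Rename (move) the temp file over the source file
    os.replace(tmp_path, filepath)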

Related

Python Taking API response and adding to JSON Key error

I have a script that takes an ID from a JSON file and adds it to a URL for an API request. The aim is to have a loop run through the 1000-ish IDs and produce one JSON file with all the information contained within.
The current code calls the first request and creates and populates the JSON file, but when run in a loop it throws a key error.
import json
import requests

fname = "NewdataTest.json"

def write_json(data, fname):
    fname = "NewdataTest.json"
    with open(fname, "w") as f:
        json.dump(data, f, indent=4)
    with open(fname) as json_file:
        data = json.load(json_file)
        temp = data[0]
        # print(newData)
        y = newData
        data.append(y)

# Read test.json to get tmdb IDs
tmdb_ids = []
with open('test.json', 'r') as json_fp:
    imdb_info = json.load(json_fp)
tmdb_ids = [movie_info['tmdb_id'] for movies_chunk in imdb_info for movie_index, movie_info in movies_chunk.items()]

# Add IDs to API call URL
for tmdb_id in tmdb_ids:
    print("https://api.themoviedb.org/3/movie/" + str(tmdb_id) + "?api_key=****")
    # Send API Call
    response_API = requests.get("https://api.themoviedb.org/3/movie/" + str(tmdb_id) + "?api_key=****")
    # Check API Call Status
    print(response_API.status_code)
    write_json((response_API.json()), "NewdataTest.json")
The error is in this line: temp = data[0]. I have tried printing the keys for data, and got nothing. At this point I have no idea where I am with this, as I have hacked it about so much that it barely resembles a cohesive piece of code. My aim was to make a simple function to get the data from the JSON, one to produce the API call URLs, and one to write the results to the new JSON.
Example of API response JSON:
{
  "adult": false,
  "backdrop_path": "/e1cC9muSRtAHVtF5GJtKAfATYIT.jpg",
  "belongs_to_collection": null,
  "budget": 0,
  "genres": [
    {
      "id": 10749,
      "name": "Romance"
    },
    {
      "id": 35,
      "name": "Comedy"
    }
  ],
  "homepage": "",
  "id": 1063242,
  "imdb_id": "tt24640474",
  "original_language": "fr",
  "original_title": "Disconnect: The Wedding Planner",
  "overview": "After falling victim to a scam, a desperate man races the clock as he attempts to plan a luxurious destination wedding for an important investor.",
  "popularity": 34.201,
  "poster_path": "/tGmCxGkVMOqig2TrbXAsE9dOVvX.jpg",
  "production_companies": [],
  "production_countries": [
    {
      "iso_3166_1": "KE",
      "name": "Kenya"
    },
    {
      "iso_3166_1": "NG",
      "name": "Nigeria"
    }
  ],
  "release_date": "2023-01-13",
  "revenue": 0,
  "runtime": 107,
  "spoken_languages": [
    {
      "english_name": "English",
      "iso_639_1": "en",
      "name": "English"
    },
    {
      "english_name": "Afrikaans",
      "iso_639_1": "af",
      "name": "Afrikaans"
    }
  ],
  "status": "Released",
  "tagline": "",
  "title": "Disconnect: The Wedding Planner",
  "video": false,
  "vote_average": 5.8,
  "vote_count": 3
}
You can store all the results from the API calls in a list and then save this list in JSON format to a file. For example:
# ...
all_data = []
for tmdb_id in tmdb_ids:
    print("https://api.themoviedb.org/3/movie/" + str(tmdb_id) + "?api_key=****")
    # Send API call
    response_API = requests.get("https://api.themoviedb.org/3/movie/" + str(tmdb_id) + "?api_key=****")
    # Check API call status
    print(response_API.status_code)
    if response_API.status_code == 200:
        # Store the JSON data in the list
        all_data.append(response_API.json())

# Write the list to file
with open('output.json', 'w') as f_out:
    json.dump(all_data, f_out, indent=4)
This will produce output.json with all responses in JSON format.
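If you later need to read the combined results back, the list round-trips as you would expect; a small sketch, assuming output.json was written as above and each response has the fields shown in the question:
import json

with open('output.json') as f_in:
    all_data = json.load(f_in)

# all_data is a list of movie dicts, one per successful API call
for movie in all_data:
    print(movie.get('title'), movie.get('release_date'))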

I am getting identical sha256 for each json file in python

I am in a huge hashing crisis. Using CHIP-0007's default format I generated a few JSON files, and from these files I have been trying to generate a sha256 hash value for each. I expect a unique hash value per file.
However, my Python code isn't producing that. I thought there might be some issue with the JSON files, but there isn't. Something is wrong with the sha256 code.
All the json files ->
JSON File 1
{ "format": "CHIP-0007", "name": "adewale-the-amebo", "description": "Adewale always wants to be in everyone's business.", "attributes": [ { "trait_type": "Gender", "value": "male" } ], "collection": { "name": "adewale-the-amebo Collection", "id": "1" } }
JSON File 2
{ "format": "CHIP-0007", "name": "alli-the-queeny", "description": "Alli is an LGBT Stan.", "attributes": [ { "trait_type": "Gender", "value": "male" } ], "collection": { "name": "alli-the-queeny Collection", "id": "2" } }
JSON File 3
{ "format": "CHIP-0007", "name": "aminat-the-snnobish", "description": "Aminat never really wants to talk to anyone.", "attributes": [ { "trait_type": "Gender", "value": "female" } ], "collection": { "name": "aminat-the-snnobish Collection", "id": "3" } }
Sample CSV File:
Series Number,Filename,Description,Gender
1,adewale-the-amebo,Adewale always wants to be in everyone's business.,male
2,alli-the-queeny,Alli is an LGBT Stan.,male
3,aminat-the-snnobish,Aminat never really wants to talk to anyone.,female
Python CODE
# TODO 2: Generate a JSON file per entry in team's sheet in CHIP-0007's default format
new_jsonFile = f"{row[1]}.json"
json_data = {}
json_data["format"] = "CHIP-0007"
json_data["name"] = row[1]
json_data["description"] = row[2]

attribute_data = {}
attribute_data["trait_type"] = "Gender"  # gender
attribute_data["value"] = row[3]  # "value/male/female"
json_data["attributes"] = [attribute_data]

collection_data = {}
collection_data["name"] = f"{row[1]} Collection"
collection_data["id"] = row[0]  # "ID of the NFT collection"
json_data["collection"] = collection_data

filepath = f"Json_Files/{new_jsonFile}"
with open(filepath, 'w') as f:
    json.dump(json_data, f, indent=2)
    C += 1
    sha256_hash = sha256_gen(filepath)
    temp.append(sha256_hash)
    NEW.append(temp)

# TODO 3: Calculate the sha256 of each entry
def sha256_gen(fn):
    return hashlib.sha256(open(fn, 'rb').read()).hexdigest()
How can I generate a unique sha256 hash for each JSON?
I tried reading in byte blocks. That is also not working out. After many trials, I am going nowhere. Sharing the unexpected outputs of each JSON file:
[ All hashes are identical ]
Unexpected SHA256 output:
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Expected:
Unique Hash value. Different from each other
Because of output buffering, you're calling sha256_gen(filepath) before anything is written to the file, so you're getting the hash of an empty file (the value you're seeing, e3b0c442..., is exactly the SHA-256 of empty input). You should call it outside the with block, so that the JSON file is closed and the buffer is flushed:
with open(filepath, 'w') as f:
    json.dump(json_data, f, indent=2)

C += 1
sha256_hash = sha256_gen(filepath)
temp.append(sha256_hash)
NEW.append(temp)
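Alternatively, you could sidestep the buffering pitfall entirely by hashing the serialized string in memory and writing that same string to the file; a minimal sketch, reusing the json_data and filepath names from the question:
import hashlib
import json

# Serialize once, hash the exact bytes, then write those same bytes to the file
serialized = json.dumps(json_data, indent=2)
sha256_hash = hashlib.sha256(serialized.encode('utf-8')).hexdigest()

with open(filepath, 'w') as f:
    f.write(serialized)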

How to add a dictionary line to a JSON file

I am trying to achieve the below JSON format and store it in a json file:
{
  "Name": "Anurag",
  "resetRecordedDate": false,
  "ED": {
    "Link": "google.com"
  }
}
I know how to create a simple JSON file using json.dumps, but I'm not really sure how to add something like a nested dictionary for one of the records within the JSON file.
Assuming the input JSON content is:
{
  "Name": "Anurag",
  "resetRecordedDate": false
}
Program
import json

# Read file
with open('example.json', 'r') as infile:
    data = infile.read()

# Parse file
parsed_json = json.loads(data)

# Add dictionary element
parsed_json["ED"] = {
    "Link": "google.com"
}

# print(json.dumps(parsed_json, indent=4))

# Write to json
with open('data.json', 'w') as outfile:
    json.dump(parsed_json, outfile)
Output:
{
  "Name": "Anurag",
  "resetRecordedDate": false,
  "ED": {
    "Link": "google.com"
  }
}

Trouble with this error while adding a dict to an existing key value

Code, including the JSON file:
Python (supposed to append a new user and balance):
import json

with open('users_balance.json', 'r') as file:
    data = json.load(file)['user_list']

data['user_list']
data.append({"user": "sdfsd", "balance": 40323420})

with open('users_balance.json', 'w') as file:
    json.dump(data, file, indent=2)
JSON (the object the code is appending to):
{
  "user_list": [
    {
      "user": "<#!672986823185661955>",
      "balance": 400
    },
    {
      "user": "<#!737747404048171043>",
      "balance": 500
    }
  ]
}
Error (traceback given after executing the code):
data = json.load(file)['user_list']
KeyError: 'user_list'
The solution is to load the whole object instead of just the list, append the new entry to the list stored under 'user_list', and write the whole object back:
import json

with open('users_balance.json', 'r') as file:
    data = json.load(file)

data['user_list'].append({"user": "sdfsd", "balance": 40323420})

with open('users_balance.json', 'w') as file:
    json.dump(data, file, indent=2)
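If the file might not contain a 'user_list' key at all (which is what the reported KeyError suggests), one defensive option is setdefault; a sketch under that assumption:
import json

with open('users_balance.json', 'r') as file:
    data = json.load(file)

# Create the list if the key is missing, then append either way
data.setdefault('user_list', []).append({"user": "sdfsd", "balance": 40323420})

with open('users_balance.json', 'w') as file:
    json.dump(data, file, indent=2)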

python: read json data from file and append more data

I'm trying to read existing data from a JSON file and append more data to the file using Python (I'm a Python newbie). Here is the existing data in the data.json file, which I read in my script:
{
  "Config1": {
    "TestCase1": {
      "Data1": 200,
      "Data2": 2715
    }
  },
  "Config2": {
    "TestCase1": {
      "Data1": 2710,
      "Data2": 2715
    }
  }
}
After reading I want to append TestCase2 data. This is what I'm doing:
with open("data.json") as json_file: #load existing data
json_data = json.load(json_file)
test='TestCase2'
result=json_data
myConfigs = ['Config1','Config2']
for each, config in enumerate(myConfigs):
result.update({config:{test:{'Data1':2600,'Data2':2900}}})
with open('data.json', 'a') as outfile:
json.dump(result, outfile)
The new data in data.json is not valid, as pointed out by JSONLint. What am I doing wrong? Here is the new data:
{
  "Config1": {
    "TestCase1": {
      "Data1": 200,
      "Data2": 2715
    }
  },
  "Config2": {
    "TestCase1": {
      "Data1": 2710,
      "Data2": 2715
    }
  }
} {
  "Config1": {
    "TestCase2": {
      "Data1": 2600,
      "Data2": 2900
    }
  },
  "Config2": {
    "TestCase2": {
      "Data1": 2600,
      "Data2": 2900
    }
  }
}
In addition to opening the file in the wrong mode (should be 'w'), you are also overwriting your old "config" trees by defining a new dict inline.
Instead of:
result.update({config:{test:{'Data1':2600,'Data2':2900}}})
Try this:
result[config][test] = {'Data1': 2600, 'Data2': 2900}
This should give you the result you are looking for with your example. It lets result['Config1']['TestCase1'] persist while you add TestCase2. You may also need to make sure that the config tree exists first, for example by setting result[config] to {} if the key isn't there yet (see the sketch below).
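A minimal sketch of that guard using setdefault, reusing the result, config and test names from the question:
# Ensure the config tree exists, then assign (or replace) the test case under it
result.setdefault(config, {})[test] = {'Data1': 2600, 'Data2': 2900}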
The problem is that you're appending the new JSON to the original JSON file here:
with open('data.json', 'a') as outfile:
    json.dump(result, outfile)
So you have two JSON objects in the same file as you can see:
...
      "Data2": 2715
    }
  }
} {  <--- original object ends here, new object starts here
  "Config1": {
...
JSONLint expects a single object, as does any JSON parser.
The main problem is that the dict1.update(dict2) method overwrites dict1's keys when the same keys exist in dict2, which is why the second object in your file no longer has the TestCase1 key.
Another problem (as pointed out above) is that the file is opened in the wrong mode: it should be 'w', since 'a' appends a second object to the JSON file.
You could try this:
with open("data.json") as json_file:
json_data = json.load(json_file)
test='TestCase2'
result=json_data
myConfigs = ['Config1','Config2']
for each, config in enumerate(myConfigs):
result[config].update({test:{'Data1':2600,'Data2':2900}})
with open('data.json', 'w') as outfile:
json.dump(result, outfile)
Note the difference: result[config].update(...) instead of result.update({config: ...}). The snippet below illustrates why the original call drops TestCase1.
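A tiny standalone demonstration of the two update calls (the dict literals are just illustrative):
d = {'Config1': {'TestCase1': {'Data1': 200}}}

# Top-level update replaces the whole nested dict stored under 'Config1'
d.update({'Config1': {'TestCase2': {'Data1': 2600}}})
print(d)  # {'Config1': {'TestCase2': {'Data1': 2600}}}  (TestCase1 is gone)

# Updating the nested dict instead keeps the existing test case
d = {'Config1': {'TestCase1': {'Data1': 200}}}
d['Config1'].update({'TestCase2': {'Data1': 2600}})
print(d)  # {'Config1': {'TestCase1': {'Data1': 200}, 'TestCase2': {'Data1': 2600}}}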
