I have a set of JSON objects:
[
{
"group": "GroupName1",
"name": "Name1",
"nick": "Nick1",
"host": "Hostname1",
"user": "user1",
"sshport": "22",
"httpport": "80"
},
{
"group": "GroupName2",
"name": "Name2",
"nick": "Nick2",
"host": "hostname2",
"user": "user2",
"sshport": "22",
"httpport": "80"
}
]
I have a CLI script that takes raw_input and builds a new dict containing the new object's parameters, like so:
def main():
    # CLI Input
    group_in = raw_input("Group: ")
    name_in = raw_input("Name: ")
    nick_in = raw_input("Nick: ")
    host_in = raw_input("Host: ")
    user_in = raw_input("User: ")
    sshport_in = raw_input("SSH Port: ")
    httpport_in = raw_input("HTTP Port: ")
    # New server to add
    jdict = {
        "group": group_in,
        "name": name_in,
        "nick": nick_in,
        "host": host_in,
        "user": user_in,
        "sshport": sshport_in,
        "httpport": httpport_in
    }
Assuming the JSON file containing the aforementioned objects is loaded as:
with open(JSON_PATH, mode='r') as rf:
    jf = json.load(rf)
I know how to do this by hacking at the file with readlines/writelines, but how would I add jdict to the end of jf pythonically, so I can just write the file back out with the complete new set of objects formatted in the same way?
jf is now just a Python list, so you can append the new dictionary to your list:
jf.append(jdict)
Then write the whole list back out to your file, replacing the old JSON content:
with open(JSON_PATH, mode='w') as wf:
    json.dump(jf, wf)
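Putting the pieces together, a minimal end-to-end sketch (the add_server name, the servers.json path and the indent=4 argument are assumptions; pick whatever matches how your file is already formatted) might look like:

import json

JSON_PATH = "servers.json"  # assumed filename

def add_server(jdict):
    # Load the existing list of server objects
    with open(JSON_PATH, mode='r') as rf:
        jf = json.load(rf)

    # Append the new server and write the whole list back out
    jf.append(jdict)
    with open(JSON_PATH, mode='w') as wf:
        json.dump(jf, wf, indent=4)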
I am in a huge hashing crisis. Using CHIP-0007's default format I generated a few JSON files, and from these files I have been trying to generate SHA-256 hash values; I expect a unique hash value for each file.
However, the Python code isn't producing that. I thought there might be some issue with the JSON files, but there isn't; something is wrong with the sha256 code.
All the JSON files:
JSON File 1
{ "format": "CHIP-0007", "name": "adewale-the-amebo", "description": "Adewale always wants to be in everyone's business.", "attributes": [ { "trait_type": "Gender", "value": "male" } ], "collection": { "name": "adewale-the-amebo Collection", "id": "1" } }
JSON File 2
{ "format": "CHIP-0007", "name": "alli-the-queeny", "description": "Alli is an LGBT Stan.", "attributes": [ { "trait_type": "Gender", "value": "male" } ], "collection": { "name": "alli-the-queeny Collection", "id": "2" } }
JSON File 3
{ "format": "CHIP-0007", "name": "aminat-the-snnobish", "description": "Aminat never really wants to talk to anyone.", "attributes": [ { "trait_type": "Gender", "value": "female" } ], "collection": { "name": "aminat-the-snnobish Collection", "id": "3" } }
Sample CSV File:
Series Number,Filename,Description,Gender
1,adewale-the-amebo,Adewale always wants to be in everyone's business.,male
2,alli-the-queeny,Alli is an LGBT Stan.,male
3,aminat-the-snnobish,Aminat never really wants to talk to anyone.,female
Python code:
# TODO 2 : Generate a JSON file per entry in the team's sheet in CHIP-0007's default format
new_jsonFile = f"{row[1]}.json"
json_data = {}
json_data["format"] = "CHIP-0007"
json_data["name"] = row[1]
json_data["description"] = row[2]

attribute_data = {}
attribute_data["trait_type"] = "Gender"  # gender
attribute_data["value"] = row[3]  # "value/male/female"
json_data["attributes"] = [attribute_data]

collection_data = {}
collection_data["name"] = f"{row[1]} Collection"
collection_data["id"] = row[0]  # "ID of the NFT collection"
json_data["collection"] = collection_data

filepath = f"Json_Files/{new_jsonFile}"
with open(filepath, 'w') as f:
    json.dump(json_data, f, indent=2)
    C += 1
    sha256_hash = sha256_gen(filepath)
    temp.append(sha256_hash)
    NEW.append(temp)

# TODO 3 : Calculate sha256 of each entry
def sha256_gen(fn):
    return hashlib.sha256(open(fn, 'rb').read()).hexdigest()
How can I generate a unique sha256 hash for each JSON?
I also tried reading in byte blocks, but that isn't working out either. After many trials, I am getting nowhere. Sharing the unexpected outputs for each JSON file:
[ All hashes are identical ]
Unexpected SHA256 output:
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Expected:
Unique hash values, different from each other.
Because of output buffering, you're calling sha256_gen(filepath) before anything has actually been written to the file, so you're getting the hash of an empty file (the value shown is exactly the SHA-256 of zero bytes). You should call sha256_gen outside the with block, so that the JSON file is closed and the buffer is flushed first.
with open(filepath, 'w') as f:
    json.dump(json_data, f, indent=2)

C += 1
sha256_hash = sha256_gen(filepath)
temp.append(sha256_hash)
NEW.append(temp)
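If you would rather not re-read the file at all, you can also hash the serialized JSON text directly; a rough sketch, with sha256_of_json being just an illustrative name:

import hashlib
import json

def sha256_of_json(json_data):
    # Serialize with the same options used when writing the file,
    # then hash the UTF-8 bytes of that string
    serialized = json.dumps(json_data, indent=2)
    return hashlib.sha256(serialized.encode('utf-8')).hexdigest()

Note that this only matches the hash of the file on disk if the same dump options (here indent=2) are used in both places.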
I have the data as below
{
"employeealias": "101613177",
"firstname": "Lion",
"lastname": "King",
"date": "2022-04-21",
"type": "Thoughtful Intake",
"subject": "Email: From You Success Coach"
}
{
"employeealias": "101613177",
"firstname": "Lion",
"lastname": "King",
"date": "2022-04-21",
"type": null,
"subject": "Call- CDL options & career assessment"
}
I need to create a dictionary like the below:
You have to create a new dictionary with a list inside, and use a for-loop to check whether an entry with the same employeealias, firstname and lastname already exists, so you can add the other information to its sublist. If such an item doesn't exist, you create a new item with employeealias, firstname, lastname and the other information.
data = [
    {"employeealias": "101613177", "firstname": "Lion", "lastname": "King", "date": "2022-04-21", "type": "Thoughtful Intake", "subject": "Email: From You Success Coach"},
    {"employeealias": "101613177", "firstname": "Lion", "lastname": "King", "date": "2022-04-21", "type": None, "subject": "Call- CDL options & career assessment"},
]

result = {'interactions': []}

for row in data:
    found = False

    for item in result['interactions']:
        if (row["employeealias"] == item["employeealias"]
                and row["firstname"] == item["firstname"]
                and row["lastname"] == item["lastname"]):
            item["activity"].append({
                "date": row["date"],
                "subject": row["subject"],
                "type": row["type"],
            })
            found = True
            break

    if not found:
        result['interactions'].append({
            "employeealias": row["employeealias"],
            "firstname": row["firstname"],
            "lastname": row["lastname"],
            "activity": [{
                "date": row["date"],
                "subject": row["subject"],
                "type": row["type"],
            }]
        })

print(result)
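For the two sample rows above, both activities end up grouped under the single employee, so the printed result looks roughly like this:

{'interactions': [{'employeealias': '101613177', 'firstname': 'Lion', 'lastname': 'King',
                   'activity': [{'date': '2022-04-21', 'subject': 'Email: From You Success Coach', 'type': 'Thoughtful Intake'},
                                {'date': '2022-04-21', 'subject': 'Call- CDL options & career assessment', 'type': None}]}]}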
EDIT:
You are reading the lines as plain text, but you have to convert each line to a dictionary using the json module:
import json

data = []

with open("/Users/Downloads/amazon_activity_feed_0005_part_00.json") as a_file:
    for line in a_file:
        line = line.strip()
        dictionary = json.loads(line)
        data.append(dictionary)

print(data)
You can create a nested dictionary in Python like this:
student = {"name": "Suman", "age": 20, "gender": "male", "details": {"class": 11, "roll_no": 12}}
I'm very new to programming, so excuse any terrible explanations. Basically I have 1000 JSON files that all need to have the same text added to the end. Here is an example:
This is what it looks like now:
{"properties": {
"files": [
{
"uri": "image.png",
"type": "image/png"
}
],
"category": "image",
"creators": [
{
"address": "wallet address",
"share": 100
}
]
}
}
Which I want to look like this:
{"properties": {
"files": [
{
"uri": "image.png",
"type": "image/png"
}
],
"category": "image",
"creators": [
{
"address": "wallet address",
"share": 100
}
]
},
"collection": {"name": "collection name"}
}
I've tried my best with append and update but it always tells me there is no attribute to append. I also don't really know what I'm doing.
This will be embarrassing but here is what I tried and failed.
import json

entry = {"collection": {"name": "collection name"}}

for i in range((5)):
    a_file = open("./testjsons/" + str(i) + ".json", "r")
    json_obj = json.load(a_file)
    print(json_obj)
    json_obj["properties"].append(entry)
    a_file = open(str(i) + ".json", "w")
    json.dump(json_obj, a_file, indent=4)
    a_file.close()
    json.dump(a_file, f)
Error code: json_obj["properties"].append(entry)
AttributeError: 'dict' object has no attribute 'append'
You don't use append() to add to a dictionary. You can either assign to a key to add a single entry, or use .update() to merge dictionaries.
import json

entry = {"collection": {"name": "collection name"}}

for i in range(5):
    with open("./testjsons/" + str(i) + ".json", "r") as a_file:
        json_obj = json.load(a_file)
    print(json_obj)
    json_obj.update(entry)
    with open(str(i) + ".json", "w") as a_file:
        json.dump(json_obj, a_file, indent=4)
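Since the new key sits at the top level, plain assignment does the same thing as update() here:

json_obj["collection"] = {"name": "collection name"}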
JSON, like XML, is a specialized data format. You should always parse the data and work with it as JSON where possible. This is different from a plain text file where you would 'add to the end' or 'append' text.
There are a number of json parsing libraries in Python, but you'll probably want to use the json encoder that is built in to the standard Python library. For a file, myfile.json, you can:
import json

with open('myfile.json', 'r') as f:
    myfile = json.load(f)  # read the file into a Python dict

myfile["collection"] = {"name": "collection name"}  # here you're adding the "collection" field to the end of the Python dict
# If you want to add "collection" inside "properties", you'd do something like:
# myfile["properties"]["collection"] = {"name": "collection name"}

with open('myfile.json', 'w') as f:
    json.dump(myfile, f)  # save the modified dict into the json file
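To run this over all 1000 files, one rough sketch (assuming they all live in one directory and end in .json) is to loop with glob:

import glob
import json

for path in glob.glob('./testjsons/*.json'):  # assumed directory from the question
    with open(path, 'r') as f:
        data = json.load(f)
    data["collection"] = {"name": "collection name"}  # add the new field at the top level
    with open(path, 'w') as f:
        json.dump(data, f, indent=4)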
I have extracted id, username, and name for 100 followers for 102 politicians using Tweepy. The data is stored in a JSON file named pol_followers. Now I wish to append id and username and save it as a CSV file using the function below. However, when using the function in the last line append_followers_to_csv(pol_followers, "pol_followers.csv") I get the error seen at the bottom.
# Structure of pol_followers. The full pol_followers is much longer...
print(json.dumps(pol_followers, indent=4, sort_keys=True)) # see json data structure
[
{
"data": [
{
"id": "1464206217807601666",
"name": "terry alex",
"username": "terryal51850644"
},
{
"id": "1479032154394968064",
"name": "Charles Williams",
"username": "Charles99924770"
},
{
"id": "2526015770",
"name": "LISA P",
"username": "LISAP0910"
},
{
"id": "2957692520",
"name": "fayaz ahmad",
"username": "ahmadfayaz202"
}
],
"meta": {
"next_token": "F6HS7IU5SRGHEZZZ",
"result_count": 100
}
},
{
"data": [
{
"id": "2482703136",
"name": "HieuVu",
"username": "sachieuhaihanh"
},
{
"id": "580882148",
"name": "Maxine D. Harmon",
"username": "maxxximd"
},
{
"id": "1478867472841334787",
"name": "RBPsych1",
"username": "RBPsych1"
# Create file
csv_follower_file = open("pol_followers.csv", "a", newline="", encoding='utf-8')
csv_follower_writer = csv.writer(csv_follower_file)

# Create headers for the data I want to save. I only want to save these columns in my dataset
csv_follower_writer.writerow(['id', 'username'])
csv_follower_file.close()
def append_followers_to_csv(pol_followers, csv_follower_file):
    # A counter variable
    global follower_id, username
    counter = 0

    # Open OR create the target CSV file
    csv_follower_file = open(csv_follower_file, "a", newline="", encoding='utf-8')
    csv_follower_writer = csv.writer(csv_follower_file)

    for ids in pol_followers['data']:
        # 1. follower ID
        follower_id = ids['id']
        # 2. follower username
        username = ids['username']

        # Assemble all data in a list
        ress = [follower_id, username]

        # Append the result to the CSV file
        csv_follower_writer.writerow(ress)
        counter += 1

    # When done, close the CSV file
    csvFile.close()

    # Print the number of tweets for this iteration
    print("# of Tweets added from this response: ", counter)
append_followers_to_csv(pol_followers, "pol_followers.csv") # Save tweet data in a csv file
File "<input>", line 1, in <module>
File "<input>", line 11, in append_followers_to_csv
TypeError: list indices must be integers or slices, not str
You are just missing an additional loop, like so:
for each_dict in pol_followers:
    for ids in each_dict['data']:
        follower_id = ids['id']
        username = ids['username']
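Folded back into the question's function, a minimal sketch (same names as above, counter kept) could be:

import csv

def append_followers_to_csv(pol_followers, csv_filename):
    counter = 0
    with open(csv_filename, "a", newline="", encoding='utf-8') as csv_file:
        csv_follower_writer = csv.writer(csv_file)
        # pol_followers is a list of response pages, each holding a 'data' list
        for each_dict in pol_followers:
            for ids in each_dict['data']:
                csv_follower_writer.writerow([ids['id'], ids['username']])
                counter += 1
    print("# of followers added from this response: ", counter)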
You seem to have wrapped your JSON object in a list, so instead of getting the 'data' part of the JSON, you are asking for the 'data'th element of a list when you iterate in your append_followers_to_csv function, which you can't do in Python. Try removing the square brackets around the JSON, or make the loop for ids in pol_followers[0]['data'].
[
{
"name": "name one",
"id": 1
},
{
"name": "name two",
"id": 2
}
]
I want to append an object to the list in a .json file. How do I do that?
You could read the existing JSON content, update it, and rewrite the updated list.
import json

with open("myfile.json", "r+") as f:
    my_file = f.read()  # read the current content
    my_list = json.loads(my_file)  # parse the JSON text into a Python list

    dict_obj = {
        "name": "name three",
        "id": 3
    }
    my_list.append(dict_obj)

    f.seek(0)  # set the position back to the beginning of the file
    f.truncate()  # clear the previous content
    print(f"going to rewrite {my_list}")
    f.write(json.dumps(my_list))  # write the updated list back to the file
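An equivalent variant that avoids seek()/truncate() is to read and rewrite the file in two separate opens (indent=4 is optional, just for readability):

import json

with open("myfile.json", "r") as f:
    my_list = json.load(f)

my_list.append({"name": "name three", "id": 3})

with open("myfile.json", "w") as f:
    json.dump(my_list, f, indent=4)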
I'm not entirely sure of what you are asking but perhaps the code below will help:
const myList = [
  {
    "name": "name one",
    "id": 1
  },
  {
    "name": "name two",
    "id": 2
  }
]

const myNewItem = {
  "name": "name three",
  "id": 3
}

// Append the new item only if no existing entry already has its id
const addItemIfDifferentId = (list, newItem) =>
  list.map(({id}) => id).includes(newItem.id) ? [...list] : [...list, {...newItem}]

const newList = addItemIfDifferentId(myList, myNewItem)
newList
Maybe this will help you:
import json

# json.load parses the .json file straight into a Python object:
with open('data.json') as json_file:
    z = json.load(json_file)  # e.g. {"key1": "123", "key2": "456", "key3": "789"}

# python object to be appended
y = {"key4": "101112"}

# appending the data
z.update(y)

# the result as a JSON string:
print(json.dumps(z))

with open('data.json', 'w') as outfile:
    json.dump(z, outfile)