{
    "Basketball": {
        "first_name": "Michael",
        "last_name": "Jordan"
    },
    "Football": {
        "first_name": "Leo",
        "last_name": "Messi"
    },
    "Football2": {
        "first_name": "Cristiano",
        "last_name": "Ronaldo"
    }
}
This is my JSON file. I want to delete "Football2" from it, no matter what the value of "Football2" is.
So it should look like this after my code is executed:
{
    "Basketball": {
        "first_name": "Michael",
        "last_name": "Jordan"
    },
    "Football": {
        "first_name": "Leo",
        "last_name": "Messi"
    }
}
This is my code.
def delete_profile():
    delete_profile = input('Which one would you like to delete? ')
    with open(os.path.join('recources\datastorage\profiles.json')) as data_file:
        data = json.load(data_file)
        for element in data:
            print(element)
            if delete_profile in element:
                del(element[delete_profile])
    with open('data.json', 'w') as data_file:
        json.dump(data, data_file)
But it gives this error: TypeError: 'str' object does not support item deletion
What am I doing wrong and what is the correct way to do this?
You're looping over the items in your JSON dictionary unnecessarily when you just want to delete the "top"-level items:

def delete_profile():
    delete_profile = input('Which one would you like to delete? ')
    with open(os.path.join('recources\datastorage\profiles.json')) as data_file:
        data = json.load(data_file)
    # No need to loop; just check whether the profile is in the JSON dictionary,
    # since your profiles are top-level objects
    if delete_profile in data:
        # Delete the profile from the data dictionary, not from the elements of the dictionary
        del data[delete_profile]
    # Maybe add an else to handle if the requested profile is not in the JSON?
    with open('data.json', 'w') as data_file:
        json.dump(data, data_file)
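Following up on the else-branch hint: one way to make the deletion easy to test is to separate it from the file I/O. A minimal sketch, assuming the same top-level layout as the profiles file (the helper name `remove_profile` is my own, not from the original code):

```python
def remove_profile(data, name):
    # Delete a top-level profile from the parsed JSON dictionary;
    # return True if it existed, False otherwise
    if name in data:
        del data[name]
        return True
    return False
```

The caller can then rewrite the file or print a "not found" message depending on the return value.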
I want to print the IP addresses from jobs.json, but I am getting the error 'string indices must be integers'.
Here is my Python code:
import json

f = open('jobs.json')
data = json.load(f)
f.close()

for item in data["Jobs"]:
    print(item["ip"])
And here is the Jobs.json file:
{
    "Jobs": {
        "Carpenter": {
            "ip": "123.1432.515",
            "address": ""
        },
        "Electrician": {
            "ip": "643.452.234",
            "address": "mini-iad.com"
        },
        "Plumber": {
            "ip": "151.101.193",
            "Address": "15501 Birch St"
        },
        "Mechanic": {
            "ip": "218.193.942",
            "Address": "Yellow Brick Road"
        }
    }
}
data["Jobs"] is a dictionary, so you're iterating over the keys (which are strings). Use data["Jobs"].values():

import json

with open("jobs.json", "r") as f_in:
    data = json.load(f_in)

for item in data["Jobs"].values():
    print(item["ip"])

Prints:
123.1432.515
643.452.234
151.101.193
218.193.942
data["Jobs"] returns a dictionary. When iterating over that, you will get string keys for item, since that's what you get by default when iterating over a dictionary. Then you try to do item["ip"], where item is "Carpenter" for example, which causes your error.
You want to iterate over the values of the dictionary instead:

for item in data["Jobs"].values():
    print(item["ip"])
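If you also want the job names alongside the IPs, iterating .items() gives you both the key and the nested dictionary at once. A small sketch (`collect_ips` is a hypothetical helper, not from the question):

```python
def collect_ips(data):
    # Pair each job name with the "ip" field of its nested dictionary
    return {job: info["ip"] for job, info in data["Jobs"].items()}
```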
[
    {
        "name": "name one",
        "id": 1
    },
    {
        "name": "name two",
        "id": 2
    }
]
I want to append an object to the list in this .json file. How do I do that?
You could read the existing JSON content, update it, and rewrite the updated list:
import json

with open("myfile.json", "r+") as f:
    my_file = f.read()             # read the current content
    my_list = json.loads(my_file)  # parse the JSON into a Python list
    dict_obj = {
        "name": "name three",
        "id": 3
    }
    my_list.append(dict_obj)
    f.seek(0)     # set the file pointer to the beginning of the file
    f.truncate()  # clear the previous content
    f.write(json.dumps(my_list))  # write the updated version to the file
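If the r+/seek/truncate sequence feels error-prone, the same update can be done with two separate open calls: read, modify, then rewrite. A sketch under the same file-layout assumption (`append_item` is my own name for the helper):

```python
import json

def append_item(path, item):
    # Read the current list, append the new object, and rewrite the whole file
    with open(path) as f:
        items = json.load(f)
    items.append(item)
    with open(path, 'w') as f:
        json.dump(items, f, indent=4)
```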
I'm not entirely sure of what you are asking but perhaps the code below will help:
const myList = [
    { "name": "name one", "id": 1 },
    { "name": "name two", "id": 2 }
]

const myNewItem = { "name": "name three", "id": 3 }

// Only append when no existing item already has the new item's id
const addItemIfDifferentId = (list, newItem) =>
    list.some(({ id }) => id === newItem.id) ? list : [...list, newItem]

const newList = addItemIfDifferentId(myList, myNewItem)
newList
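Since the rest of this thread uses Python, the same duplicate-id guard can be sketched there as well (`add_item_if_new_id` is a hypothetical helper, not from either answer):

```python
def add_item_if_new_id(items, new_item):
    # Return the list unchanged when an entry with the same id already exists,
    # otherwise return a new list with the item appended
    if any(it["id"] == new_item["id"] for it in items):
        return items
    return items + [new_item]
```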
Maybe this will help you:

import json

# json.load parses the file directly into a Python dictionary,
# so there is no need to call json.loads on the result
with open('data.json') as json_file:
    z = json.load(json_file)  # e.g. {"key1": "123", "key2": "456", "key3": "789"}

# Python dictionary to be merged in
y = {"key4": "101112"}

# Merge the new data into the dictionary
z.update(y)

# The result as a JSON string:
print(json.dumps(z))

with open('data.json', 'w') as outfile:
    json.dump(z, outfile)
I am trying to achieve the below JSON format and store it in a json file:
{
    "Name": "Anurag",
    "resetRecordedDate": false,
    "ED": {
        "Link": "google.com"
    }
}
I know how to create a simple JSON file using json.dumps, but I am not sure how to add a nested dictionary for one of the records within the JSON file.
Assuming the input JSON content is

{
    "Name": "Anurag",
    "resetRecordedDate": false
}
Program
import json

# Read the file
with open('example.json', 'r') as infile:
    data = infile.read()

# Parse the JSON
parsed_json = json.loads(data)

# Add the dictionary element
parsed_json["ED"] = {
    "Link": "google.com"
}

# print(json.dumps(parsed_json, indent=4))

# Write back to JSON
with open('data.json', 'w') as outfile:
    json.dump(parsed_json, outfile)
Output:
{
    "Name": "Anurag",
    "resetRecordedDate": false,
    "ED": {
        "Link": "google.com"
    }
}
I'm having trouble generating a well-formatted CSV file out of some data I fetched from the Leadfeeder API. In the CSV file that is currently being created, not all values are in one row; id and type end up in a different row than the rest. Like here:
CSV Output
Later I would also like to load another JSON file, use it to map some values over the id, and then also put the visits per lead into my CSV file.
Do you have any advice for that as well?
This is my code so far:
import json
import csv

csv_columns = ['name', 'industry', 'website_url', 'status', 'crm_lead_id',
               'crm_organization_id', 'employee_count', 'id', 'type']

with open('data.json', 'r') as d:
    d = json.load(d)

csv_file = 'lead_daten.csv'

try:
    with open('leads.csv', 'w', newline='') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=csv_columns, extrasaction='ignore')
        writer.writeheader()
        for item in d['data']:
            writer.writerow(item)
            writer.writerow(item['attributes'])
except IOError:
    print("I/O error")
My JSON data has the following structure (I also need some of the nested values, like the id in relationships):
{
    "data": [
        {
            "attributes": {
                "crm_lead_id": null,
                "crm_organization_id": null,
                "employee_count": 5000,
                "facebook_url": null,
                "first_visit_date": "2019-01-31",
                "industry": "Furniture",
                "last_visit_date": "2019-01-31",
                "linkedin_url": null,
                "name": "Example Inc",
                "phone": null,
                "status": "new",
                "twitter_handle": "example",
                "website_url": "http://www.example.com"
            },
            "id": "s7ybF6VxqhQqVM1m1BCnZT_8SRo9XnuoxSUP5ChvERZS9",
            "relationships": {
                "location": {
                    "data": {
                        "id": "8SRo9XnuoxSUP5ChvERZS9",
                        "type": "locations"
                    }
                }
            },
            "type": "leads"
        },
        {
            "attributes": {
                "crm_lead_id": null,
When you write to a CSV, you must write one full row at a time. Your current code writes one row with only id and type, and then a different row with the other fields.
The correct way is to first fully build a dictionary containing all the fields, and only then write it in one single operation. The code could be:

...
writer.writeheader()
for item in d['data']:
    item.update(item["attributes"])
    writer.writerow(item)
...
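For the nested id in relationships that the question also asks about, the row can be flattened before writing it. A sketch assuming the structure shown above (`flatten_lead` and the `location_id` column name are my own, not from the API):

```python
def flatten_lead(item):
    # Merge the top-level fields, the attributes, and the nested
    # relationships -> location -> data -> id into one flat dictionary
    row = {"id": item["id"], "type": item["type"]}
    row.update(item["attributes"])
    location = item.get("relationships", {}).get("location", {}).get("data", {})
    row["location_id"] = location.get("id")
    return row
```

You would then add 'location_id' to csv_columns and call writer.writerow(flatten_lead(item)) in the loop.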
I'm dumping a MongoDB database in Python in JSON format. Here's part of my code:
cursor = collection.find()

with open(json_file_path, 'w') as outfile:
    dump = json.dumps([doc for doc in cursor], sort_keys=False,
                      indent=4, default=json_util.default)
    outfile.write(dump)
The problem is that pymongo adds an _id field by itself and creates an entry like "_id": {"$oid": "5c2b4813e43eda7815444204"}. This causes the error key '$oid' must not start with '$' when loading from this JSON file. So I was wondering whether I could either modify or skip this field altogether while exporting the database itself. How can I do that?
{
    "Employee ID": 9771504,
    "NAME": "Harsh Wardhan",
    "DOB": "14-Apr",
    "MOBILE": 12345697890,
    "Group": "SW-VS",
    "_id": {
        "$oid": "5c2b4813e43eda7815444204"
    },
    "Emai ID": "hwardhan@examples.com"
}
Assuming the extra _id is added for each entry in the cursor, you can just filter it out before writing, using a dict comprehension:
cursor = collection.find()

with open(json_file_path, 'w') as outfile:
    dump = json.dumps([{k: v for k, v in doc.items() if k != "_id"} for doc in cursor],
                      sort_keys=False, indent=4, default=json_util.default)
    outfile.write(dump)
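To skip the field while exporting, as the question asks, MongoDB can also drop _id server-side: find() accepts a projection document as its second argument, and {"_id": 0} excludes the field from every returned document. A sketch (`dump_collection` is my own wrapper name; if your documents contain other BSON types such as dates, keep default=json_util.default in the json.dump call):

```python
import json

def dump_collection(collection, json_file_path):
    # The second argument to find() is a projection; {"_id": 0}
    # tells MongoDB to omit _id from every returned document
    cursor = collection.find({}, {"_id": 0})
    with open(json_file_path, 'w') as outfile:
        json.dump(list(cursor), outfile, sort_keys=False, indent=4)
```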