I am having trouble converting a dictionary to a string in Python. I am trying to extract the information from one of my variables, but I cannot seem to remove the square brackets surrounding it.
for line in str(object):
    if line.startswith('['):
        new_object = object.replace('[', '')
Is there a way to remove the square brackets or do I have to find another way of taking the information out of the dictionary?
Edit:
In more detail, what I am trying to do here is the following:
import requests
city = 'dublin'
country = 'ireland'
info = requests.get('http://api.openweathermap.org/data/2.5/weather?q='+city +','+ country +'&mode=json')
weather = info.json()['weather']
fh = open('/home/Ricky92d3/city.txt', 'w')
fh.write(str(weather))
fh.close()
fl = open('/home/Ricky92d3/city.txt')
Object = fl.read()
fl.close()
for line in str(Object):
    if line.startswith('['):
        new_Object = Object.replace('[', '')
    if line.startswith('{'):
        new_Object = Object.replace('{u', '')
print new_Object
I hope this makes what I am trying to do a little clearer.
The object returned by info.json() is a Python dictionary, so you can access its contents using normal Python syntax. I admit that it can get a little bit tricky, since JSON dictionaries often contain other dictionaries and lists, but it's generally not too hard to figure out what's what if you print the JSON object out in a nicely formatted way. The easiest way to do that is by using the dumps() function in the standard Python json module.
The code below retrieves the JSON data into a dict called data.
It then prints the 'description' string from the list in the 'weather' item of data.
It then saves all the data (not just the 'weather' item) as an ASCII-encoded JSON file.
It then reads the JSON data back in again to a new dict called newdata, and pretty-prints it.
Finally, it prints the weather description again, to verify that we got back what we saw earlier. :)
import requests, json
#The base URL of the weather service
endpoint = 'http://api.openweathermap.org/data/2.5/weather'
#Filename for saving JSON data to
fname = 'data.json'
city = 'dublin'
country = 'ireland'
params = {
    'q': '%s,%s' % (city, country),
    'mode': 'json',
}
#Fetch the info
info = requests.get(endpoint, params=params)
data = info.json()
#print json.dumps(data, indent=4)
#Extract the value of 'description' from the list in 'weather'
print '\ndescription: %s\n' % data['weather'][0]['description']
#Save data
with open(fname, 'w') as f:
    json.dump(data, f, indent=4)
#Reload data
with open(fname, 'r') as f:
    newdata = json.load(f)
#Show all the data we just read in
print json.dumps(newdata, indent=4)
print '\ndescription: %s\n' % data['weather'][0]['description']
output
description: light intensity shower rain
{
"clouds": {
"all": 75
},
"name": "Dublin",
"visibility": 10000,
"sys": {
"country": "IE",
"sunset": 1438374108,
"message": 0.0118,
"type": 1,
"id": 5237,
"sunrise": 1438317600
},
"weather": [
{
"description": "light intensity shower rain",
"main": "Rain",
"id": 520,
"icon": "09d"
}
],
"coord": {
"lat": 53.340000000000003,
"lon": -6.2699999999999996
},
"base": "stations",
"dt": 1438347600,
"main": {
"pressure": 1014,
"humidity": 62,
"temp_max": 288.14999999999998,
"temp": 288.14999999999998,
"temp_min": 288.14999999999998
},
"id": 2964574,
"wind": {
"speed": 8.1999999999999993,
"deg": 210
},
"cod": 200
}
description: light intensity shower rain
I'm not quite sure what you're trying to do here (without seeing your dictionary) but if you have a string like x = "[myString]" you can just do the following:
x = x.replace("[", "").replace("]", "")
If this isn't working, there is a high chance you're actually getting a list back. If that were the case, you would see an error like this:
>>> x = [1,2,3]
>>> x.replace("[", "")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'list' object has no attribute 'replace'
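If the value really is a list (for example the weather value from the API), it is usually easier to index into it or serialize it properly than to strip brackets from its string form. A minimal sketch:

import json

x = [1, 2, 3]
print(x[0])           # 1 - access elements directly
print(json.dumps(x))  # prints [1, 2, 3] as a proper JSON string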
Edit 1:
I think there's a misunderstanding of what you're getting back here. If you're just looking for a CSV output file with the weather from your API, try this:
import requests
import csv
city = 'dublin'
country = 'ireland'
info = requests.get('http://api.openweathermap.org/data/2.5/weather?q='+city +','+ country +'&mode=json')
weather = info.json()['weather']
weather_fieldnames = ["id", "main", "description", "icon"]
with open('city.txt', 'w') as f:
    csvwriter = csv.DictWriter(f, fieldnames=weather_fieldnames)
    for w in weather:
        csvwriter.writerow(w)
This works by looping through the list of items you're getting and using a csv.DictWriter to write each one as a row in the CSV file.
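If you also want a header row with the column names, csv.DictWriter can write one for you; a small variation on the block above, assuming the same weather and weather_fieldnames variables:

with open('city.txt', 'w', newline='') as f:  # newline='' avoids blank lines on Windows under Python 3
    csvwriter = csv.DictWriter(f, fieldnames=weather_fieldnames)
    csvwriter.writeheader()  # writes the "id,main,description,icon" header row
    for w in weather:
        csvwriter.writerow(w)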
Bonus
Don't call your dictionary object - it shadows the built-in object type.
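For example, rebinding the name hides the built-in type until you delete the new binding:

object = {'a': 1}      # shadows the built-in object type
isinstance(1, object)  # TypeError: isinstance() arg 2 must be a type
del object             # the built-in object is visible again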
Related
I have a script that takes an ID from a JSON file and adds it to a URL for an API request. The aim is to have a loop run through the 1000-ish IDs and produce one JSON file with all the information contained within.
The current code makes the first request and creates and populates the JSON file, but when run in a loop it throws a KeyError.
import json
import requests
fname = "NewdataTest.json"
def write_json(data, fname):
    fname = "NewdataTest.json"
    with open(fname, "w") as f:
        json.dump(data, f, indent=4)
    with open(fname) as json_file:
        data = json.load(json_file)
        temp = data[0]
        #print(newData)
        y = newData
        data.append(y)
# Read test.json to get tmdb IDs
tmdb_ids = []
with open('test.json', 'r') as json_fp:
    imdb_info = json.load(json_fp)
    tmdb_ids = [movie_info['tmdb_id'] for movies_chunk in imdb_info for movie_index, movie_info in movies_chunk.items()]
# Add IDs to API call URL
for tmdb_id in tmdb_ids:
    print("https://api.themoviedb.org/3/movie/" + str(tmdb_id) + "?api_key=****")
    # Send API Call
    response_API = requests.get("https://api.themoviedb.org/3/movie/" + str(tmdb_id) + "?api_key=****")
    # Check API Call Status
    print(response_API.status_code)
    write_json((response_API.json()), "NewdataTest.json")
The error is on the line temp = data[0]. I have tried printing the keys for data: nothing. At this point I have no idea where I am with this, as I have hacked it about so much that it barely resembles a cohesive piece of code. My aim was to make one simple function to get the data from the JSON, one to produce the API call URLs, and one to write the results to the new JSON.
Example of the API response JSON:
{
"adult": false,
"backdrop_path": "/e1cC9muSRtAHVtF5GJtKAfATYIT.jpg",
"belongs_to_collection": null,
"budget": 0,
"genres": [
{
"id": 10749,
"name": "Romance"
},
{
"id": 35,
"name": "Comedy"
}
],
"homepage": "",
"id": 1063242,
"imdb_id": "tt24640474",
"original_language": "fr",
"original_title": "Disconnect: The Wedding Planner",
"overview": "After falling victim to a scam, a desperate man races the clock as he attempts to plan a luxurious destination wedding for an important investor.",
"popularity": 34.201,
"poster_path": "/tGmCxGkVMOqig2TrbXAsE9dOVvX.jpg",
"production_companies": [],
"production_countries": [
{
"iso_3166_1": "KE",
"name": "Kenya"
},
{
"iso_3166_1": "NG",
"name": "Nigeria"
}
],
"release_date": "2023-01-13",
"revenue": 0,
"runtime": 107,
"spoken_languages": [
{
"english_name": "English",
"iso_639_1": "en",
"name": "English"
},
{
"english_name": "Afrikaans",
"iso_639_1": "af",
"name": "Afrikaans"
}
],
"status": "Released",
"tagline": "",
"title": "Disconnect: The Wedding Planner",
"video": false,
"vote_average": 5.8,
"vote_count": 3
}
You can store all results from the API calls in a list and then save that list in JSON format to a file. For example:
#...
all_data = []
for tmdb_id in tmdb_ids:
    print("https://api.themoviedb.org/3/movie/" + str(tmdb_id) + "?api_key=****")
    # Send API Call
    response_API = requests.get("https://api.themoviedb.org/3/movie/" + str(tmdb_id) + "?api_key=****")
    # Check API Call Status
    print(response_API.status_code)
    if response_API.status_code == 200:
        # store the Json data in a list:
        all_data.append(response_API.json())

# write the list to file
with open('output.json', 'w') as f_out:
    json.dump(all_data, f_out, indent=4)
This will produce output.json with all the responses in JSON format.
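A quick sanity check, assuming output.json was written as above, is to load it back and count the entries:

with open('output.json') as f_in:
    all_data = json.load(f_in)
print(len(all_data))  # number of successful responses saved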
I recently generated 10,000 images, each with a corresponding .json file. I generated 10 before I did the bigger collection, and now I am trying to filter or search through the 10,000 JSON files for a specific key value. Here is one of the JSON files as an example:
{
"name": "GrapeGrannys #1",
"description": "Grannys with grapes etc.",
"image": "ipfs://NewUriToReplace/1.png",
"dna": "93596679f006e3a9226700e0e7539179b532bf29",
"edition": 1,
"date": 1667406230920,
"attributes": [
{
"trait_type": "Backgrounds",
"value": "sunrise_beach"
},
{
"trait_type": "main",
"value": "GrapeGranny"
},
{
"trait_type": "eyeColor",
"value": "gray"
},
{
"trait_type": "skirtAndTieColor",
"value": "green"
},
{
"trait_type": "Headwear",
"value": "hat1"
},
{
"trait_type": "specialItems",
"value": "ThugLife"
}
],
"compiler": "HashLips Art Engine"
}
In "attributes", I want to I want to target the first object and its value and check to see if that value is equal to "GrapeCity".
Then after all files have been read and searched through, Id like the files with that specific value "GrapeCity" to be stored in a new list or array that I can print and see which specific files contain that keyword. Here is what I have tried in Python:
import json
import glob
# from datetime import datetime
src = "./Assets/json"
# date = datetime.now()
data = []
files = glob.glob('$./Assets/json/*', recursive=True)
for single_file in files:
    with open(single_file, 'r') as f:
        try:
            json_file = json.load(f)
            data.append([
                json_file["attributes"]["values"]["GrapeCity"]
            ])
        except KeyError:
            print(f'Skipping {single_file}')
data.sort()
print(data)
# csv_filename = f'{str(date)}.csv'
# with open(csv_filename, "w", newline="") as f:
# writer = csv.writer(f)
# writer.writerows(data)
# print("Updated CSV")
At one point I was getting a TypeError, but now it is just outputting an empty array. Any help is appreciated!
json_file["attributes"] is a list so you can't access it like a dictionary.
Try this:
for single_file in files:
    with open(single_file, 'r') as f:
        try:
            json_file = json.load(f)
            attrs = json_file["attributes"]
            has_grape_city = any(attr["value"] == "GrapeCity" for attr in attrs)
            if has_grape_city:
                data.append(single_file)
        except KeyError:
            print(f'Skipping {single_file}')
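If the trait you are after is always a specific one (for example "main" in the sample file), a dict keyed by trait_type makes the check explicit. A sketch using the same attrs variable inside the try block, with the trait name adjusted to whatever you actually need:

traits = {attr["trait_type"]: attr["value"] for attr in attrs}
if traits.get("main") == "GrapeCity":
    data.append(single_file)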
I currently have a JSON file saved containing some data I want to convert to CSV. Here is a data sample below; please note that I have censored the actual values for security and privacy reasons.
{
"ID value1": {
"Id": "ID value1",
"TechnischContactpersoon": {
"Naam": "Value",
"Telefoon": "Value",
"Email": "Value"
},
"Disclaimer": [
"Value"
],
"Voorzorgsmaatregelen": [
{
"Attributes": {},
"FileId": "value",
"FileName": "value",
"FilePackageLocation": "value"
},
{
"Attributes": {},
"FileId": "value",
"FileName": "value",
"FilePackageLocation": "value"
},
]
},
"ID value2": {
"Id": "id value2",
"TechnischContactpersoon": {
"Naam": "Value",
"Telefoon": "Value",
"Email": "Value"
},
"Disclaimer": [
"Placeholder"
],
"Voorzorgsmaatregelen": [
{
"Attributes": {},
"FileId": "value",
"FileName": "value",
"FilePackageLocation": "value"
}
]
},
I know how to do this with a simple JSON string without issues (I already have a function to handle the JSON-to-CSV conversion), but I do not know how to do it with a JSON file that has this kind of structure, i.e. a second layer beneath the first. As you may have noticed, each ID value sits above another layer inside the JSON file. So in total I need two kinds of CSV files:
The main CSV file, containing just the ID and Disclaimer. This CSV file is called utility networks and contains all possible ID values and their Disclaimer values.
A file containing the "Voorzorgsmaatregelen" values. Because there are multiple values in this section, one CSV file per unique ID is needed, named after that unique ID.
Deleted this part because it was irrelevant.
Data_folder = "Data"
Unazones_file_name = "UnaZones"
Utilitynetworks_file_name = "utilityNetworks"
folder_path_JSON_BS_JSON = folder_path_creation(Data_folder)
pkml_file_path = os.path.join(folder_path_JSON_BS_JSON,"pmkl.json")
print(pkml_file_path)
json_object = json_open(pkml_file_path)
json_content_unazones = json_object.get("mapRequest").get("UnaZones")
json_content_utility_Networks = json_object.get("utilityNetworks")
Unazones_json_location = json_to_save(json_content_unazones,folder_path_JSON_BS_JSON,Unazones_file_name)
csv_file_location_unazones = os.path.join(folder_path_CSV_file_path(Data_folder),(Unazones_file_name+".csv"))
csv_file_location_Utilitynetwork = os.path.join(folder_path_CSV_file_path(Data_folder),(Unazones_file_name+".csv"))
json_content_utility_Networks = json_object.get("utilityNetworks")
Utility_networks_json_location = json_to_save(json_content_utility_Networks,folder_path_JSON_BS_JSON,Utilitynetworks_file_name)
def json_to_csv_convertion(json_file_path: str, csv_file_location: str):
    loaded_json_data = json_open(json_file_path)
    # now we will open a file for writing
    data_file = open(csv_file_location, 'w', newline='')
    # create the csv writer object
    csv_writer = csv.writer(data_file, delimiter=";")
    # Counter variable used for writing
    # headers to the CSV file
    count = 0
    for row in loaded_json_data:
        if count == 0:
            # Writing headers of CSV file
            header = row.keys()
            csv_writer.writerow(header)
            count += 1
        # Writing data of CSV file
        csv_writer.writerow(row.values())
    data_file.close()
def folder_path_creation(path: str):
    if not os.path.exists(path):
        os.makedirs(path)
    return path

def json_open(complete_folder_path):
    with open(complete_folder_path) as f:
        json_to_load = json.load(f)
    return json_to_load

def json_to_save(input_json, folder_path: str, file_name: str):
    json_save_location = save_file(input_json, folder_path, file_name, "json")
    return json_save_location
So how do I do this, starting from this?
for obj in json_content_utility_Networks:
Go from there?
Keep in mind that this JSON already has one layer above every object, so for every object I need to start one layer below it.
So how do I do this?
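Something like the loop below is what I have in mind, but I am not sure it is the right approach (a rough sketch, assuming json_content_utility_Networks is the dict shown in the sample above):

for object_id, obj in json_content_utility_Networks.items():
    # obj is one layer below the ID, so these are the parts I need
    disclaimer = obj.get("Disclaimer", [])
    voorzorgsmaatregelen = obj.get("Voorzorgsmaatregelen", [])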
I'm looking to pull the "name" fields from a large JSON text file and store them in another file for later, but I'm getting every piece of data that was in my original JSON file, albeit slightly modified. How do I make it so I only grab the data after the "name": field in my JSON file?
I've tried
names = []
with open('./out.json', 'r') as f:
    data = json.load(f)
    for name in data:
        names.append(data[name])

with open('./names.json', 'w') as f:
    for name in names:
        f.write('%s\r\n' % name)
and I'm getting my exact JSON file back, with no formatting and u' in front of everything, likely from the json.load(f), but I have no idea how to remedy this.
My text file is formatted like this, if it matters:
{
"array":[
{
"name": "Seranul",
"id": 5,
"type": "Paladin",
"itemLevel": 414,
"icon": "Paladin-Holy",
"total": 11107150,
"activeTime": 2205387,
"activeTimeReduced": 2205387
},
{
"name": "Contherious",
"id": 9,
"type": "Hunter",
"itemLevel": 412,
"icon": "Hunter-Marksmanship",
"total": 51102811,
"activeTime": 2637303,
"activeTimeReduced": 2637303
},
{
"name": "Unicorns",
"id": 17,
"type": "Priest",
"itemLevel": null,
"icon": "Priest",
"total": 12252005,
"activeTime": 1768883,
"activeTimeReduced": 1761797
},
...
}
]}
I'm expecting to see the corresponding data for each name field, but I'm getting my entire document back.
It looks like your code is ignoring the structure of the JSON data. Specifically, you are iterating through the keys in the JSON dictionary, which is just array, and then appending the value to your names list. This results in the whole array property being put into your names variable.
Here is what I believe you want: iterate through the entries in array and add them to a list, then export that as JSON to another file.
import json
names = []
with open('./out.json', 'r') as f:
    data = json.load(f)
    for entry in data["array"]:
        names.append(entry["name"])

with open('./names.json', 'w') as f:
    f.write(json.dumps(names))
This will result in the following JSON in names.json:
["Seranul", "Contherious", "Unicorns"]
I want to fetch the output of the below JSON file using Python.
JSON file:
{
"Name": [
{
"name": "John",
"Avg": "55.7"
},
{
"name": "Rose",
"Avg": "71.23"
},
{
"name": "Lola",
"Avg": "78.93"
},
{
"name": "Harry",
"Avg": "95.5"
}
]
}
I want to get the average marks of a person when I look for Harry, i.e. I need output in the below or a similar format:
Harry = 95.5
Here's my code
import json
json_file = open('test.json')  # the above contents are stored in the json
data = json.load(json_file)
do = data['Name'][0]
op1 = do['Name']
if op1 is 'Harry':
    print do['Avg']
But when I run it, I get the error IOError: [Errno 63] File name too long.
How do I print the score of Harry with Python 3?
import json
from pprint import pprint
with open('test.json') as data_file:
    data = json.load(data_file)

for l in data["Name"]:
    if str(l['name']) == 'Harry':
        pprint(l['Avg'])
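Note that Avg is stored as a string in the sample JSON, so convert it if you want a number (same loop variable as above):

print(float(l['Avg']))  # 95.5 as a float rather than the string '95.5'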
You can do something simple like this
import json
file = open('data.json')
json_data = json.load(file)
stud_list = json_data['Name']
y = {}
for var in stud_list:
    x = {var['name']: var['Avg']}
    y = dict(list(x.items()) + list(y.items()))
print(y)
It gives the output in dictionary format
{'Harry': '95.5', 'Lola': '78.93', 'Rose': '71.23', 'John': '55.7'}
json.load loads data from a file-like object. If you want to read it from a string, you may use json.loads.
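For example, the same parsing works directly on a string, without touching the filesystem:

import json

data = json.loads('{"Name": [{"name": "Harry", "Avg": "95.5"}]}')
print(data["Name"][0]["Avg"])  # prints 95.5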