I am using the code below to read JSON data from this URL:
https://opendata.ecdc.europa.eu/covid19/nationalcasedeath_eueea_daily_ei/json/
However, I am not getting all of the results (all countries) and I don't know why:
import json
import urllib.request

# Print the daily updated report from the JSON file at this URL
url = "https://opendata.ecdc.europa.eu/covid19/nationalcasedeath_eueea_daily_ei/json/"
data = urllib.request.urlopen(url).read()
obj = json.loads(data)

for p in obj['records']:
    print('Country: ' + p['countriesAndTerritories'])
    print('Date: ' + p['dateRep'])
    print('Cases: ' + str(p['cases']))
    print('Deaths: ' + str(p['deaths']))
    print('')
There was an SSL error at first, which I worked around by adding the snippet below before my code, but the original problem is still not solved:
import json
import ssl
import urllib.request

# Work around SSL certificate verification errors
try:
    _create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
    pass
else:
    ssl._create_default_https_context = _create_unverified_https_context

# Print the daily updated report from the JSON file at this URL
url = "https://opendata.ecdc.europa.eu/covid19/nationalcasedeath_eueea_daily_ei/json/"
data = urllib.request.urlopen(url).read().decode()
obj = json.loads(data)

for p in obj['records']:
    print('Country: ' + p['countriesAndTerritories'])
    print('Date: ' + p['dateRep'])
    print('Cases: ' + str(p['cases']))
    print('Deaths: ' + str(p['deaths']))
    print('')
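To narrow down whether the download itself is incomplete or the output is just being cut off in the console, it may help to count the records and list the distinct country names. A minimal sketch, using the same records structure as above:

import json
import urllib.request

url = "https://opendata.ecdc.europa.eu/covid19/nationalcasedeath_eueea_daily_ei/json/"
obj = json.loads(urllib.request.urlopen(url).read().decode())

records = obj['records']
countries = {p['countriesAndTerritories'] for p in records}

# If every EU/EEA country shows up here, the feed itself is complete and
# the earlier loop is simply printing more lines than the console keeps.
print('Total records:', len(records))
print('Distinct countries:', len(countries))
print(sorted(countries))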
Related
I am trying to extract data from a JSON API, but I get the error "string indices must be integers" and I couldn't find anything about it. Here is my code:
import requests
import json

name = input('input a name: ')
server = input('input a server: ')
response = requests.get('https://api.battlemetrics.com/players?fields[server]=name&filter[search]=' + name + '&filter[servers]=' + server + '&page[size]=10&include=server')

def jprint(obj):
    # create a formatted string of the Python JSON object
    text = json.dumps(obj, sort_keys=True, indent=4)
    print(text)

pass_times = response.json()
jprint(pass_times)

status = []
for d in pass_times:
    time = d["online"]
    status.append(time)
print(status)
response.json() returns a dict, so iterating over pass_times directly yields its string keys, and d["online"] then indexes into a string, which is what raises "string indices must be integers". Iterate over the list stored under the "data" key instead:

import requests
import json

name = "master oogway"
server = "6354292"
response = requests.get('https://api.battlemetrics.com/players?fields[server]=name&filter[search]=' + name + '&filter[servers]=' + server + '&page[size]=10&include=server')

def jprint(obj):
    # create a formatted string of the Python JSON object
    text = json.dumps(obj, sort_keys=True, indent=4)
    print(text)

pass_times = response.json()
# jprint(pass_times)

status = []
for data in pass_times["data"]:
    status.append(data["relationships"]["servers"]["data"][0]["meta"]["online"])
print(status)
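If a result has no server entry, the chained indexing above will raise a KeyError or IndexError. A slightly more defensive variant (a sketch, assuming the same response layout) could fall back to None:

status = []
for data in pass_times["data"]:
    servers = data.get("relationships", {}).get("servers", {}).get("data", [])
    if servers:
        # assumes "meta"/"online" are present whenever a server entry exists
        status.append(servers[0].get("meta", {}).get("online"))
    else:
        status.append(None)
print(status)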
I have the following code that is supposed to take a random top post from reddit.com/r/showerthoughts and print the title and author of the post.
import random, json
randnum = random.randint(0,99)
response = json.load('https://www.reddit.com/r/showerthoughts/top.json?sort=top&t=week&limit=100')["data"]["children"][randnum]["data"]
print("\n\"" + response["title"] + "\"")
print(" -" + response["author"] + "\n")
I get the following error:
Traceback (most recent call last):
  File "C:/Users/jacks/.PyCharmCE2019.1/config/scratches/scratch_4.py", line 4, in <module>
    response = json.load('https://www.reddit.com/r/showerthoughts/top.json?sort=top&t=week&limit=100')["data"]["children"][randnum]["data"]
  File "C:\Users\jacks\AppData\Local\Programs\Python\Python37\lib\json\__init__.py", line 293, in load
    return loads(fp.read(),
AttributeError: 'str' object has no attribute 'read'
Am I on the right track here?
UPDATE:
Got it to work with this code:
import random, requests
randnum = random.randint(0,99)
response = requests.get('https://www.reddit.com/r/showerthoughts/top.json?sort=top&t=week&limit=100', headers = {'User-Agent': 'showerbot'})
result = response.json()
result1 = result["data"]["children"][randnum]["data"]
print("\n\"" + result1["title"] + "\"")
print(" -" + result1["author"] + "\n")
You cannot pass a URL string to json.load(): it expects a file-like object (and json.loads() expects a string of JSON, not a URL). Fetch the content first, for example with the requests module.
Using json module
import random, json, requests
randnum = random.randint(0,99)
response = requests.get('https://www.reddit.com/r/showerthoughts/top.json?sort=top&t=week&limit=100')
response = json.loads(response.text)
response = response["data"]["children"][randnum]["data"]
print("\n\"" + response["title"] + "\"")
print(" -" + response["author"] + "\n")
Without using json module
import random, requests
randnum = random.randint(0,99)
response = requests.get('https://www.reddit.com/r/showerthoughts/top.json?sort=top&t=week&limit=100')
response = response.json()
response = response["data"]["children"][randnum]["data"]
print("\n\"" + response["title"] + "\"")
print(" -" + response["author"] + "\n")
The API returns JSON in the browser, but when I parse it in Python I get this exception: No JSON object could be decoded. I have used both json.load() and json.loads(), but both fail.
Here is the code.
def handler_timeout(self):
    try:
        data = json.load(
            urlopen(
                'https://www.zebapi.com/api/v1/market/ticker/btc/inr'
            )
        )
        buy_price = data['buy']
        sell_price = data['sell']
        status_message = "Buy: ₹ " + "{:,}".format(buy_price) + " Sell: ₹ " + "{:,}".format(sell_price)
        self.ind.set_label(status_message, "")
    except Exception, e:
        print str(e)
        self.ind.set_label("!", "")
    return True
Here is the output for urlopen(URL):
<addinfourl at 140336849031752 whose fp = <socket._fileobject object at 0x7fa2bb6f1cd0>>
Here is the output for urlopen(URL).read() :
��`I�%&/m�{J�J��t�`$ؐ#�������iG#)�*��eVe]f#�흼��{����{����;�N'���?\fdl��J�ɞ!���?~|?"~�o���G��~��=J�vv��;#�x���}��e���?=�N�u�/�h��ًWɧ�U�^���Ã���;���}�'���Q��ct
The content of the url is gzip-encoded.
>>> u = urllib.urlopen('https://www.zebapi.com/api/v1/market/ticker/btc/inr')
>>> info = u.info()
>>> info['Content-Encoding']
'gzip'
Decompress the content.
import gzip
import io
import json
import urllib
u = urllib.urlopen('https://www.zebapi.com/api/v1/market/ticker/btc/inr')
with io.BytesIO(u.read()) as f:
    gz = gzip.GzipFile(fileobj=f)
    print json.load(gz)
or use requests which decode gzip automatically:
import requests
print requests.get('https://www.zebapi.com/api/v1/market/ticker/btc/inr').json()
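On Python 3 the same approach would look roughly like this (a sketch: urllib.urlopen became urllib.request.urlopen and print is a function):

import gzip
import io
import json
import urllib.request

u = urllib.request.urlopen('https://www.zebapi.com/api/v1/market/ticker/btc/inr')
with io.BytesIO(u.read()) as f:
    gz = gzip.GzipFile(fileobj=f)
    print(json.load(gz))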
I am using a weather API to design a Slack bot service in Python.
My source code is:
import requests
import re
import json
from bs4 import BeautifulSoup

def weather(cityname):
    cityid = extractid(cityname)
    url = "http://api.openweathermap.org/data/2.5/forecast?id=" + str(cityid) + "&APPID=c72f730d08a4ea1d121c8e25da7e4411"
    while True:
        r = requests.get(url, timeout=5)
        while r.status_code is not requests.codes.ok:
            r = requests.get(url, timeout=5)
        soup = BeautifulSoup(r.text)
        data = ("City: " + soup.city["name"] + ", Country: " + soup.country.text + "\nTemperature: " + soup.temperature["value"] +
                " Celsius\nWind: " + soup.speed["name"] + ", Direction: " + soup.direction["name"] + "\n\n" + soup.weather["value"])
        # print data
        return data

def extractid(cname):
    with open('/home/sourav/Git-Github/fabulous/fabulous/services/city.list.json') as data_file:
        data = json.load(data_file)
    for item in data:
        if item["name"] == cname:
            return item["id"]

def on_message(msg, server):
    text = msg.get("text", "")
    match = re.findall(r"~weather (.*)", text)
    if not match:
        return
    searchterm = match[0]
    return weather(searchterm.encode("utf8"))

on_bot_message = on_message
But executing the code gives the following error:
  File "/usr/local/lib/python2.7/dist-packages/fabulous-0.0.1-py2.7.egg/fabulous/services/weather.py", line 19, in weather
    " Celsius\nWind: " + soup.speed["name"] + ", Direction: " + soup.direction["name"] + "\n\n" + soup.weather["value"])
TypeError: 'NoneType' object has no attribute '__getitem__'
I can't figure out what the error is. Please help!
__getitem__ is called when you look something up by key, so a['abc'] translates to a.__getitem__('abc'). In this case one of the soup attributes is None (speed, direction or weather).
Ensure that r.text contains the data you want; simply print it:
print(r.text)
List the structure of the parsed data:
for child in soup.findChildren():
    print child
Always assume your input data might be wrong: instead of doing soup.city, use soup.find('city') and check the result, because the tag might be missing:
city = soup.find('city')
if city is not None:
    city_name = city['name']
else:
    city_name = 'Error'  # or empty, or something else
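Applied to the weather() function above, a defensive version might look like the sketch below. Two assumptions are worth flagging: the mode=xml parameter (without it this endpoint typically returns JSON, in which case tags like city would indeed be None) and the tag names, which simply follow what the original code expects; the "xml" parser also needs lxml installed.

def weather(cityname):
    cityid = extractid(cityname)
    url = ("http://api.openweathermap.org/data/2.5/forecast?id=" + str(cityid) +
           "&mode=xml&APPID=c72f730d08a4ea1d121c8e25da7e4411")
    r = requests.get(url, timeout=5)
    soup = BeautifulSoup(r.text, "xml")

    city = soup.find("city")
    temperature = soup.find("temperature")
    # find() returns None for a missing tag, so check before indexing
    if city is None or temperature is None:
        return "Could not parse the weather data"

    return "City: " + city["name"] + "\nTemperature: " + temperature["value"] + " Celsius"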
So, I am testing this piece of code:
import requests
import json
searchTerm = 'parrot'
startIndex = '0'
searchUrl = "http://ajax.googleapis.com/ajax/services/search/images?v=1.0&q=" + \
searchTerm + "&start=" + startIndex
r = requests.get(searchUrl)
response = r.content.decode('utf-8')
result = json.loads(response)
print(r)
print(result)
And the response is:
<Response [200]>
{'responseData': None, 'responseStatus': 403, 'responseDetails': 'This API is no longer available.'}
It seems that I am trying to use the old API, which is deprecated now. When I check the Google Custom Search API I don't see any way to search Google Images directly. Is this even possible with the new API?
It is possible. Here is the new API reference:
https://developers.google.com/custom-search/json-api/v1/reference/cse/list
import requests
import json
searchTerm = 'parrot'
startIndex = '1'
key = ' Your API key here. '
cx = ' Your CSE ID:USER here. '
searchUrl = "https://www.googleapis.com/customsearch/v1?q=" + \
searchTerm + "&start=" + startIndex + "&key=" + key + "&cx=" + cx + \
"&searchType=image"
r = requests.get(searchUrl)
response = r.content.decode('utf-8')
result = json.loads(response)
print(searchUrl)
print(r)
print(result)
That works fine, I just tried.
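Each entry in the response's items list is then one image result; printing the direct image URLs might look like this (a sketch, assuming the standard Custom Search image-result fields):

# result["items"] holds the image results for this page (up to 10)
for item in result.get("items", []):
    print(item["link"])                   # direct URL of the image
    print(item["image"]["contextLink"])   # page where the image was found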