I'm trying to process this JSON using python3:
http://www.bom.gov.au/fwo/IDV60701/IDV60701.94857.json
But I'm getting the following error:
Traceback (most recent call last):
File "./weath.py", line 41, in <module>
data1 = response.json()
File "/home/dz/anaconda3/lib/python3.8/site-packages/requests/models.py", line 898, in json
return complexjson.loads(self.text, **kwargs)
File "/home/dz/anaconda3/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "/home/dz/anaconda3/lib/python3.8/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/home/dz/anaconda3/lib/python3.8/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
https://jsonlint.com/ says the JSON is valid, so I'm not sure why this is failing.
Here is the Python code:
import requests

url = "http://www.bom.gov.au/fwo/IDV60701/IDV60701.94857.json"
response = requests.get(url)
data1 = response.json()
This was working 2 weeks ago.
How can I fix this?
This is an issue with the specific service endpoint you're using: the site has started blocking automated requests, so the response body is an error message rather than JSON.
If you look at your response object, you'll see it's a 403 (forbidden) with the following message:
Potential automated request detected! We are making changes to our website therefore web scraping is no longer supported. Please contact us by filling in the details at http://reg.bom.gov.au/screenscraper/screenscraper_enquiry_form/ and we will get in touch with you.
You can verify this for yourself:
response = requests.get("http://www.bom.gov.au/fwo/IDV60701/IDV60701.94857.json")
print(response.status_code) # 403
print(response.text) # above quote
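Whichever way the site changes in future, it helps to guard the .json() call so a blocked or reformatted response fails with a readable message instead of a bare JSONDecodeError. A minimal sketch using only standard requests attributes (nothing specific to this endpoint):
import requests

url = "http://www.bom.gov.au/fwo/IDV60701/IDV60701.94857.json"
response = requests.get(url)

# Only parse JSON if the request succeeded and the body actually claims to be JSON.
if response.ok and "application/json" in response.headers.get("Content-Type", ""):
    data1 = response.json()
else:
    print("Unexpected response {}: {}".format(response.status_code, response.text[:200]))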
When I ran this code:
import requests
url = "http://www.bom.gov.au/fwo/IDV60701/IDV60701.94857.json"
response = requests.get(url).text
print(response)
The print returned
Potential automated request detected! We are making changes to our website therefore web scraping is no longer supported. Please contact us by filling in the details at http://reg.bom.gov.au/screenscraper/screenscraper_enquiry_form/ and we will get in touch with you.
So it looks like that website has disabled web scraping for automated clients.
After generating the access_token (which works when I use it on TD Ameritrade's API website), I'm trying to get option chains for a stock. The request works on TD Ameritrade's API website, and I get an OK response when I run my code, but no JSON data comes back. Any idea why? My relevant code is below.
import requests

content = requests.get(url="https://api.tdameritrade.com/v1/marketdata/chains",
                       params=params_dictionary, headers=access_token)
print(content)
print(repr(content.text))
data = content.json()
print(data)
but for my output I get
<Response [200]>
''
Traceback (most recent call last):
File "C:\Users\USER\Documents\GitHub\pythonfiles\TD Ameritrade API tests.py", line 98, in <module>
data = content.json()
File "C:\Users\USER\Anaconda3\lib\site-packages\requests\models.py", line 900, in json
return complexjson.loads(self.text, **kwargs)
File "C:\Users\USER\Anaconda3\lib\json\__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "C:\Users\USER\Anaconda3\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\USER\Anaconda3\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Expecting value
It was a mistake on my part that wasn't shown in my question. I was accidentally using 'https://api.tdameritrade.com/v1/marketdata/https://api.tdameritrade.com/v1/marketdata/chains' as my URL. I'm not sure why this didn't fail more loudly, but the duplicated URL was my issue.
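For anyone hitting a similar "200 but empty body" case, a small guard around the .json() call makes the problem visible. This is just a sketch: params_dictionary and access_token are the same variables as in the question, and the checks are generic requests usage rather than anything TD Ameritrade specific.
import requests

content = requests.get("https://api.tdameritrade.com/v1/marketdata/chains",
                       params=params_dictionary, headers=access_token)

content.raise_for_status()        # surface 4xx/5xx responses instead of parsing them
if not content.text.strip():      # a 200 with an empty body still isn't valid JSON
    raise ValueError("Empty response body; check the URL and query parameters")
data = content.json()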
I hope you're doing well.
I'm trying to get solar radiation values from 'solcast.com.au'. I followed their API documentation here: https://docs.solcast.com.au/#forecasts-by-location and wrote this code:
import requests
url = 'https://api.solcast.com.au/world_radiation/forecasts?latitude= -33.865143&longitude=151.209900&api_key=MYAPI'
res = requests.get(url)
data = res.json()
forecast = data["forecasts"]["ghi"]
print('forecastss: {} dgree'.format(forecast))
So when I run the code I'm getting this error:
Traceback (most recent call last):
File "/home/pi/Desktop/solcastoo.py", line 5, in <module>
data = res.json()
File "/usr/lib/python3/dist-packages/requests/models.py", line 897, in json
return complexjson.loads(self.text, **kwargs)
File "/usr/lib/python3/dist-packages/simplejson/__init__.py", line 518, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
simplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Would really appreciate your help.
As John mentioned, you need to specify in your request the format you want to receive.
You can do it by adding headers to your request:
import requests
url = 'https://api.solcast.com.au/world_radiation/forecasts?latitude= -33.865143&longitude=151.209900&api_key=API_KEY'
res = requests.get(url, headers={'Content-Type': 'application/json'})
data = res.json()
forecast = data["forecasts"][0]["ghi"]
print('forecastss: {} dgree'.format(forecast))
In their documentation they list three options:
"Accepts" HTTP request header, e.g. "application/json" for JSON
"format" query string, e.g. "format=json" for JSON
Endpoint suffix file extension, e.g. "forecasts.json" for JSON
The second option doesn't work, at least for this specific request. The third option works, but it's a bit odd.
The first option is the one most commonly used in APIs, but be prepared to use the other options too.
PS: they say in the documentation that headers={'Accepts': 'application/json'} should give the desired result, so I'd assume it could also work on other endpoints.
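For completeness, here is roughly how each documented option would look with requests. The coordinates and API key are placeholders, and I've used the standard HTTP header name Accept (the docs call it "Accepts"); treat the exact parameter handling as something to verify against the Solcast docs.
import requests

base = 'https://api.solcast.com.au/world_radiation/forecasts'
params = {'latitude': -33.865143, 'longitude': 151.209900, 'api_key': 'API_KEY'}

# Option 1: request header (the docs refer to this as the "Accepts" header)
res = requests.get(base, params=params, headers={'Accept': 'application/json'})

# Option 2: "format" query string parameter
res = requests.get(base, params=dict(params, format='json'))

# Option 3: endpoint suffix file extension
res = requests.get(base + '.json', params=params)

print(res.json()["forecasts"][0]["ghi"])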
Good luck
I am currently trying to play around with the unofficial StockX API found here:
https://pypi.org/project/stockx-py-sdk/
When I try passing in my login details in the code below, I get the following error:
File "stockxapi.py", line 11, in
stockx.authenticate(email, password) File "/Users/xxxxxx/anaconda3/lib/python3.6/site-packages/stockxsdk/wrapper.py",
line 48, in authenticate
customer = response.json().get('Customer', None) File "/Users/xxxxxx/anaconda3/lib/python3.6/site-packages/requests/models.py",
line 897, in json
return complexjson.loads(self.text, **kwargs) File "/Users/xxxxxx/anaconda3/lib/python3.6/json/init.py", line 354, in
loads
return _default_decoder.decode(s) File "/Users/xxxxxx/anaconda3/lib/python3.6/json/decoder.py", line 339, in
decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/Users/xxxxxx/anaconda3/lib/python3.6/json/decoder.py", line 357, in
raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char
0)
import json
import pandas as pd
import matplotlib as plt
import numpy
from stockxsdk import Stockx
stockx = Stockx()
email='xxxxxxxxx#gmail.com'
password='xxxxxxxxxxx'
stockx.authenticate(email, password)
I just want to be able to pass in my login details and have 'True' returned in the command line.
Today I've been trying to play with the same unofficial API and hit the same issue. I found that inside stockx.authenticate(email, password) the authentication fails with an HTTP 403 (Forbidden) response, which then surfaces as a JSONDecodeError. I assume this is due to a change on the StockX server side to prevent scraping via automated tools, but maybe it is avoidable by adding sufficient information to the HTTP request headers (see the sketch after the snippet below).
In [17]: response = requests.post(endpoint, json=payload)
In [18]: response
Out[18]: <Response [403]>
Access to this page has been denied because we believe you are using automation tools to browse the website.
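If you want to experiment with that header idea, here is a rough sketch. The endpoint and payload variables stand in for whatever the SDK sends to the login URL, the header values are just a guess at making the request look like a browser, and there's no guarantee the server will accept it.
import requests

headers = {
    # Pretend to be a regular browser; the exact string is illustrative.
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/91.0 Safari/537.36",
    "Accept": "application/json",
}
# endpoint and payload are the same login URL and JSON body the SDK would use.
response = requests.post(endpoint, json=payload, headers=headers)
print(response.status_code, response.text[:200])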
I believe that StockX closed their APIs. Therefore, when you call the authenticate function, the request returns a 403 code while the function is expecting a JSON string, and in turn, it throws a JSONDecodeError exception.
I want to use Python to grab a Google Street View image.
For example:
url = 'https://maps.googleapis.com/maps/api/streetview?location=48.15763817939112,11.533002555370581&size=512x512&key='
I run the following code:
import requests
result = requests.get(url)
result.json()
But it raises an error:
Traceback (most recent call last):
File "<ipython-input-69-180c2a4b335d>", line 1, in <module>
result.json()
File "/home/kang/.local/lib/python3.4/site-packages/requests/models.py", line 826, in json
return complexjson.loads(self.text, **kwargs)
File "/home/kang/.local/lib/python3.4/site-packages/simplejson/__init__.py", line 516, in loads
return _default_decoder.decode(s)
File "/home/kang/.local/lib/python3.4/site-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/home/kang/.local/lib/python3.4/site-packages/simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
JSONDecodeError: Expecting value
The response of this URL is an image, not JSON.
How can I fix that?
Thank you very much.
There is a new Street View Image Metadata API that does return JSON.
It will let you query for the availability of Street View panoramas at given locations (by address or latlng). If a panorama is found, the response will include its pano IDs. These queries are free.
Otherwise, the Street View Image API will always return an image. That's why the above JSON metadata API was introduced.
There is no JSON in the response that is coming back to you, which is why it is giving the error.
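A sketch of how the two endpoints could be used together. The metadata URL (https://maps.googleapis.com/maps/api/streetview/metadata) and parameter names are based on Google's documentation as I recall it, and the API key is a placeholder, so double-check them before relying on this.
import requests

params = {
    "location": "48.15763817939112,11.533002555370581",
    "size": "512x512",
    "key": "YOUR_API_KEY",
}

# Metadata endpoint: returns JSON describing whether a panorama exists there.
meta = requests.get("https://maps.googleapis.com/maps/api/streetview/metadata", params=params)
print(meta.json())  # e.g. a dict with "status" and, if found, "pano_id"

# Image endpoint: returns the picture itself, so save the bytes instead of calling .json().
img = requests.get("https://maps.googleapis.com/maps/api/streetview", params=params)
with open("streetview.jpg", "wb") as f:
    f.write(img.content)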
I'm trying to get some data from this website. I can pass the 'text' and 'longest_only' parameters, but when I add the 'ontologies' param it says "No JSON object could be decoded". Here's the complete URL: http://data.bioontology.org/annotator?text=lung cancer,bone marrow&ontologies=NCIT&longest_only=true
I'm using Python 2.7.
The argument is ontologies[], since you can specify more than one. Your request should be similar to the one that the online search uses:
text=lung+cancer%2Cbone+marrow&ontologies%5B%5D=NCIT&longest_only=true&raw=true
Simply execute the same search there, and use your browser's developer tools to check the actual payload being sent.
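A sketch of what that request could look like from Python. I'm assuming the REST endpoint at data.bioontology.org (rather than the HTML page) and that an apikey parameter is required; treat both as assumptions to verify against the BioPortal documentation.
import requests

params = {
    "text": "lung cancer,bone marrow",
    "ontologies[]": "NCIT",            # note the [] suffix on the parameter name
    "longest_only": "true",
    "apikey": "YOUR_BIOPORTAL_KEY",    # assumption: the REST API requires a key
}
response = requests.get("http://data.bioontology.org/annotator", params=params)
print(response.status_code)
print(response.json())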
This is not an answer, but it is the only place I can show the error that I see when executing the sample code. I placed the code in a new module under main and ran it with Python 3.4.
import requests

if __name__ == '__main__':
    url = 'http://bioportal.bioontology.org/annotator'
    params = {
        'text': 'lung cancer,bone marrow',
        'ontologies': 'NCIT',
        'longest_only': 'true'
    }

    session = requests.Session()
    session.get(url)
    response = session.post(url, data=params)
    data = response.json()

    # get the annotations
    for annotation in data['annotations']:
        print(annotation['annotatedClass']['prefLabel'])
I receive the following error.
Traceback (most recent call last):
File "/Users/.../Sandbox/Ontology.py", line 21, in <module>
data = response.json()
File "/Users/erwin/anaconda/lib/python3.4/site-packages/requests/models.py", line 799, in json
return json.loads(self.text, **kwargs)
File "/Users/erwin/anaconda/lib/python3.4/json/__init__.py", line 318, in loads
return _default_decoder.decode(s)
File "/Users/erwin/anaconda/lib/python3.4/json/decoder.py", line 343, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/Users/erwin/anaconda/lib/python3.4/json/decoder.py", line 361, in raw_decode
raise ValueError(errmsg("Expecting value", s, err.value)) from None
ValueError: Expecting value: line 1 column 1 (char 0)