I'm trying to use the YTS API, and I wrote the following code:
import urllib2
import json
url = 'https://yts.ag/api/v2/list_movies.json?quality=3D'
json_obj = urllib2.urlopen(url)
data = json.load(json_obj)
print data
But I ran into the following error:
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "C:\Python27\lib\urllib2.py", line 435, in open
response = meth(req, response)
File "C:\Python27\lib\urllib2.py", line 548, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python27\lib\urllib2.py", line 473, in error
return self._call_chain(*args)
File "C:\Python27\lib\urllib2.py", line 407, in _call_chain
result = func(*args)
File "C:\Python27\lib\urllib2.py", line 556, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
How can I fix this?
I'm not completely sure what's going wrong with your urllib2 request, as it works fine with the other URLs I tested, both with and without JSON responses.
I suspect it's an encoding issue with the quality=3D query parameter, but my brief attempt at handling that also failed.
Simply putting the request in a browser works, as does the following 'requests' code.
In a nutshell, I'm not truly answering your question about fixing the urllib2 usage, but I am fixing the problem of getting the data.
I'd suggest using the 'requests' module in this case. Try:
import requests
import json
url = 'https://yts.ag/api/v2/list_movies.json?quality=3D'
response = requests.get(url)
data = response.json()
print data
Note that requests' response.json() handles the json.load() step for you.
It will save you many a headache compared to urllib2.
The 'requests' documentation:
http://docs.python-requests.org/en/latest/
Related
I am really an ETL guy trying to learn Python, please help
import urllib2
urls =urllib2.urlopen("url1","url2")
i=0
while i < len(urls):
    htmlfile = urllib2.urlopen(urls[i])
    htmltext = htmlfile.read()
    print htmltext
    i += 1
I am getting this error:
Traceback (most recent call last):
File ".\test.py", line 2, in <module>
urls =urllib2.urlopen("url1","url2")
File "c:\python27\Lib\urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "c:\python27\Lib\urllib2.py", line 437, in open
response = meth(req, response)
File "c:\python27\Lib\urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "c:\python27\Lib\urllib2.py", line 475, in error
return self._call_chain(*args)
File "c:\python27\Lib\urllib2.py", line 409, in _call_chain
result = func(*args)
File "c:\python27\Lib\urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 405: Method Not Allowed
Your error is coming from line 2:
urls =urllib2.urlopen("url1","url2")
Whatever URL you're trying to access is returning an HTTP error code:
HTTP Error 405: Method Not Allowed
Looking at the urllib2 docs, you should pass only one URL as an argument:
https://docs.python.org/2/library/urllib2.html
Open the URL url, which can be either a string or a Request object.
data may be a string specifying additional data to send to the server, or None if no such data is needed. Currently HTTP requests are the only ones that use data; the HTTP request will be a POST instead of a GET when the data parameter is provided.
The second argument you're passing is treated as data, which turns the request into a POST; that explains the Method Not Allowed code.
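You can confirm the quoted docs without any network call: urllib2's Request object reports which HTTP method urlopen would use, and supplying a data argument flips it from GET to POST. A minimal sketch (the URL is a placeholder, and the Python 3 fallback import is only there so the snippet runs on either version):

```python
try:
    import urllib2  # Python 2
except ImportError:
    import urllib.request as urllib2  # same Request API in Python 3

url = "http://example.com/api"  # placeholder URL

# No data argument: urlopen would perform a GET
req_get = urllib2.Request(url)
print(req_get.get_method())  # GET

# With a data argument: urlopen would perform a POST
req_post = urllib2.Request(url, b"key=value")
print(req_post.get_method())  # POST
```

To fetch several pages, put the URLs in a Python list and call urlopen once per URL inside the loop, rather than passing two URLs to a single urlopen call.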
This question already has answers here:
urllib2 Error 403: Forbidden
(2 answers)
Closed 8 years ago.
I was following a tutorial on pythonforbeginners.com, and I came across a code which isn't running right on my OSX.
from bs4 import BeautifulSoup
import urllib2
url = "http://www.pythonforbeginners.com"
content = urllib2.urlopen(url).read()
soup = BeautifulSoup(content)
print soup.prettify()
This gives me the error:
Traceback (most recent call last):
  File "/Users/dhruvmullick/CS/Python/Extracting Data/test.py", line 8, in <module>
    content = urllib2.urlopen(url).read()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 410, in open
    response = meth(req, response)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 523, in http_response
    'http', request, response, code, msg, hdrs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 448, in error
    return self._call_chain(*args)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 382, in _call_chain
    result = func(*args)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 531, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
The 403 error indicates that the server is blocking your connection.
...a request from a client for a web page or resource to indicate that the server can be reached and understood the request, but refuses to take any further action.
Try a different domain and you'll find that it works as expected.
As a workaround, you can add a custom User-Agent header.
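A sketch of that workaround ("Mozilla/5.0" is just an illustrative browser-like value, and the Python 3 fallback import is only there so the snippet runs on either version):

```python
try:
    import urllib2  # Python 2
except ImportError:
    import urllib.request as urllib2  # same Request API in Python 3

url = "http://www.pythonforbeginners.com"
# Some servers answer 403 to the default "Python-urllib/x.y" User-Agent,
# so present a browser-like one instead.
req = urllib2.Request(url, headers={"User-Agent": "Mozilla/5.0"})
# content = urllib2.urlopen(req).read()  # the actual fetch; needs network access
```

The Request object is then passed to urlopen in place of the bare URL string.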
This question already has answers here:
urllib2 returns 404 for a website which displays fine in browsers
(3 answers)
Closed 8 years ago.
I'm using urllib2 to request URLs and read their contents, but unfortunately it's not working for some URLs. Look at these commands:
#No problem with this URL
urllib2.urlopen('http://www.huffingtonpost.com/2014/07/19/todd-akin-slavery_n_5602083.html')
#This one produced error
urllib2.urlopen('http://www.foxnews.com/us/2014/07/19/cartels-suspected-as-high-caliber-gunfire-sends-border-patrol-scrambling-on-rio/')
The second URL produced an error like this:
Traceback (most recent call last):
File "D:/Developer Center/Republishan/republishan2/republishan2/test.py", line 306, in <module>
urllib2.urlopen('http://www.foxnews.com/us/2014/07/19/cartels-suspected-as-high-caliber-gunfire-sends-border-patrol-scrambling-on-rio/')
File "C:\Python27\lib\urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "C:\Python27\lib\urllib2.py", line 410, in open
response = meth(req, response)
File "C:\Python27\lib\urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python27\lib\urllib2.py", line 448, in error
return self._call_chain(*args)
File "C:\Python27\lib\urllib2.py", line 382, in _call_chain
result = func(*args)
File "C:\Python27\lib\urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 404: Not Found
What's the problem with this?
I think the site is checking for a User-Agent and/or other headers which urllib2 doesn't set by default.
You can set a User-Agent manually.
The requests library sets a User-Agent automatically.
However, remember that the requests User-Agent may also be blocked by some sites.
Try this; it's working for me. You need to install the requests module first:
pip install requests
Then
import requests
r = requests.get("http://www.foxnews.com/us/2014/07/19/cartels-suspected-as-high-caliber-gunfire-sends-border-patrol-scrambling-on-rio/")
print r.text
urllib2 is harder to use and requires more code. Requests is simpler and more in line with the Python philosophy that code should be beautiful!
I'm trying to call the NYT events API with urllib2, but I'm receiving a 596 error. If I construct the URL myself, there is no problem, but if I call urlopen with the data argument instead, I receive the 596 error. What's going on? The 596 error seems to be undocumented, so that doesn't help.
>>> data = urllib.urlencode({'api-key': os.environ['NYT_EVENT_LISTING_API_KEY']})
>>> resp = urllib2.urlopen('?'.join([url,data]))
>>> resp = urllib2.urlopen(url, data)
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 400, in open
response = meth(req, response)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 513, in http_response
'http', request, response, code, msg, hdrs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 438, in error
return self._call_chain(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 372, in _call_chain
result = func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 521, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 596:
@Thomas is right: your first request uses GET, which constructs a URL like this:
nytimes.com/api/?MY_API_KEY
However, your second call to urllib2.urlopen sends the data as a POST request to
nytimes.com/api/
instead, which gives you the 596 "service not found" error.
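For completeness, here is how the working GET variant builds its URL: urlencode turns the dict into a query string, which is then appended after a '?'. (The endpoint path and key below are placeholders, and the Python 3 fallback import just lets the snippet run on either version.)

```python
try:
    from urllib import urlencode  # Python 2
except ImportError:
    from urllib.parse import urlencode  # Python 3

url = "http://api.example.com/events"  # placeholder endpoint
data = urlencode({"api-key": "MY_KEY"})  # placeholder key
full_url = "?".join([url, data])  # same trick as in the question
print(full_url)  # http://api.example.com/events?api-key=MY_KEY
```

Because the parameters end up in the URL and no data argument reaches urlopen, the request stays a GET.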
Now, urllib2 is notorious for its non-intuitive API and documentation, so you may want to consider using Requests instead:
import requests
api_key = {'api-key': os.environ['NYT_EVENT_LISTING_API_KEY']}
resp = requests.get(url, params=api_key)
print resp.text
print resp.json()
This way, GET requests and POST requests are a lot easier to distinguish, url and parameters are separated as well.
Your first request is a GET request - the second is a POST request. See the docs on this - when the parameter data is provided, urlopen performs a POST request.
Hey, I am trying to publish a score to Facebook through Python's urllib2 library.
import urllib2,urllib
url = "https://graph.facebook.com/USER_ID/scores"
data = {}
data['score']=SCORE
data['access_token']='APP_ACCESS_TOKEN'
data_encode = urllib.urlencode(data)
request = urllib2.Request(url, data_encode)
response = urllib2.urlopen(request)
responseAsString = response.read()
I am getting this error:
response = urllib2.urlopen(request)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 124, in urlopen
return _opener.open(url, data, timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 389, in open
response = meth(req, response)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 502, in http_response
'http', request, response, code, msg, hdrs)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 427, in error
return self._call_chain(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 361, in _call_chain
result = func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 510, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 400: Bad Request
Not sure if this is related to Facebook's Open Graph or to improper use of the urllib2 API.
I ran your code and got the same error (there is no further detail in the response body; I would have posted that in a comment, but I can't yet), so I googled "publish facebook scores."
I believe you'll need to grant your app permission to publish scores first, unless you've done that already. See http://developers.facebook.com/blog/post/539/.
You may have to set the User-Agent header to look like a browser. I remember getting a similar error while running a crawler on a website: it detected that no browser was making the request.