I'm sending a POST request to a URL, which returns 200 OK or 401 Unauthorized depending on the parameters provided in the POST request.
In addition to the return code, the website also returns a text message, which is especially useful on errors so the person who made the request knows why it failed. For that, I use this code:
#!/usr/bin/env python
import urllib
import urllib2

url = 'https://site/request'
params = {
    'param1': 'value1',
    'param2': 'value2',
    ...
}
data = urllib.urlencode(params)
req = urllib2.Request(url, data)
try:
    response = urllib2.urlopen(req)
    the_page = response.read()
except urllib2.URLError as e:
    print e.code, e.reason  # prints only "401 Unauthorized", not the text
When the request is successful, I get a 200 code and can grab the message from the the_page variable, which is pretty useless in that case.
But when it fails, the line that throws the URLError is the call to urlopen(), so I can't grab the web error message.
Is there any way to grab the message even on a URLError event? If not, is there an alternative way to do a POST request and grab the Web content on error?
The Python version is 2.7.6 in my case.
Thanks
If you catch an HTTPError (a more specific subclass of URLError, and I believe the one raised for a 401), it can be read as a file-like object, yielding the page contents:
urllib2.HTTPError documentation
Overriding urllib2.HTTPError or urllib.error.HTTPError and reading response HTML anyway
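For example, a minimal sketch based on the question's own code (the URL and parameter values are placeholders):

import urllib
import urllib2

url = 'https://site/request'
data = urllib.urlencode({'param1': 'value1', 'param2': 'value2'})
req = urllib2.Request(url, data)
try:
    response = urllib2.urlopen(req)
    the_page = response.read()
except urllib2.HTTPError as e:  # catch this before URLError: it is a subclass
    print e.code, e.reason      # 401 Unauthorized
    print e.read()              # the error text the server sent with the 401
except urllib2.URLError as e:
    print e.reason              # no HTTP response at all (e.g. connection refused)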
I would suggest using the requests library (install using pip install requests)
import requests

url = 'https://site/request'
params = {
    'param1': 'value1',
    'param2': 'value2',
}
# json=params sends a JSON body; use data=params for a form-encoded body
post_response = requests.post(url, json=params)
if post_response.ok:
    the_page = post_response.content
    # do your stuff here

print post_response.content      # this will give you the message regardless of failure
print post_response.status_code  # this will give you the status code of the request
post_response.raise_for_status() # this will throw an error for 4xx and 5xx statuses
Docs: http://docs.python-requests.org/en/latest/
I have to use Python 2.5.3. This would be easy if I were using Python 3 and the requests library, but sadly I am locked into Python 2.5.3 for work.
I need to make a PUT request to a RESTful API and get a 204 response back.
I tried using urllib2, but I don't get the response I need:
import urllib2

url = 'http://some_ulr.com'
try:
    request = urllib2.Request(url)
    request.get_method = lambda: 'PUT'
    response = urllib2.urlopen(request)
except urllib2.HTTPError, e:
    print e.code
    print e.read()
I keep getting a 505 as the response.
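For reference, here is a sketch of a PUT with a body that stays within Python 2.5 syntax; the URL, payload, and Content-Type are placeholders, and a 505 (HTTP Version Not Supported) typically comes from the server side rather than from this client code:

import urllib2

url = 'http://some_url.com/resource/1'  # placeholder
body = '{"status": "done"}'             # placeholder payload

request = urllib2.Request(url, data=body)
request.add_header('Content-Type', 'application/json')
request.get_method = lambda: 'PUT'      # force PUT instead of the default POST
try:
    response = urllib2.urlopen(request)
    # reaching this point means a 2xx response, e.g. the expected 204
except urllib2.HTTPError, e:  # Python 2.5: "except ... as e" arrived in 2.6
    print e.code
    print e.read()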
I want to get the response code from a web server, but sometimes I get code 200 even if the page doesn't exist, and I don't know how to deal with it.
I'm using this code:
import urllib.request
import urllib.error

def checking_url(link):
    try:
        link = urllib.request.urlopen(link)
        response = link.code
    except urllib.error.HTTPError as e:
        response = e.code
    return response
When I'm checking a website like this one:
https://www.wykop.pl/notexistlinkkk/
It still returns code 200 even if the page doesn't exist.
Is there any solution to deal with it?
I found a solution and am now going to test it with more websites: I had to use http.client.
You are getting response code 200, because the website you are checking has automatic redirection. In the URL you gave, even if you specify a non-existing page, it automatically redirects you to the home page, rather than returning a 404 status code. Your code works fine.
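One way to detect that redirect, sketched against the question's checking_url (the assumption here is that being redirected away from the requested URL means the page is missing):

import urllib.request
import urllib.error

def checking_url(link):
    try:
        response = urllib.request.urlopen(link)
        # Sites that redirect missing pages to the home page still answer 200;
        # comparing the final URL with the requested one exposes the redirect.
        if response.geturl().rstrip('/') != link.rstrip('/'):
            return 404  # treat a redirect as "not found" (an assumption)
        return response.code
    except urllib.error.HTTPError as e:
        return e.code

print(checking_url('https://www.wykop.pl/notexistlinkkk/'))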
import urllib2

thisCode = None
try:
    i = urllib2.urlopen('http://www.google.com')
    thisCode = i.code
except urllib2.HTTPError as e:
    thisCode = e.code
print thisCode
I am trying to submit a multipart POST request in Python. I looked around and found two variations:
Using 'requests' (http://docs.python-requests.org/en/latest/)
Using urllib2 (https://docs.python.org/2/library/urllib2.html#module-urllib2)
I tried both of them and am able to submit the request successfully.
Below is the sample code for both:
----------requests--------------
resp = requests.post(submiturl, files=multipart_form_data, headers=headers, timeout=5)
where multipart_form_data contains my file object as well as string parameters
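For reference, a typical shape for that dictionary with requests looks like this (the field names and file name are placeholders):

multipart_form_data = {
    'file': ('report.csv', open('report.csv', 'rb')),  # (filename, file object)
    'param1': (None, 'value1'),  # a plain string field, no filename
}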
---------------urllib2------------
# MultipartParam here presumably comes from the poster library
items.append(MultipartParam(name, value))
fileObj = open(inputFile, 'rb')  # binary mode for file uploads
items.append(MultipartParam('file', filename=inputFile, fileobj=fileObj))
res = urllib2.urlopen(request)
My Question:
Which one should I use?
Correct me if I am wrong, but I have seen that when submitting with urllib2 I get an HTTPError for response codes like 500. However, when using "requests" it does not throw an HTTPError for 500s; instead I have to manually add either:
resp.raise_for_status()
or:
if resp.status_code != 200: raise Exception(...)
Is this correct, or am I missing something?
Thanks!
Response.raise_for_status() raises for HTTP response codes in the 4xx and 5xx ranges. The source is very clear and readable.
You'll get a 2xx response for successful requests, but you may also want to consider other response codes, for example redirects.
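A minimal sketch of that pattern, reusing the submiturl and multipart_form_data names from the question:

import requests

resp = requests.post(submiturl, files=multipart_form_data, timeout=5)
try:
    resp.raise_for_status()  # raises requests.exceptions.HTTPError on 4xx/5xx
except requests.exceptions.HTTPError as err:
    print err           # e.g. "500 Server Error: ..."
    print resp.content  # the error body is still available on the response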
I am using the following recipe for getting the HTTP response code, but it fails to report 3xx ones:
import urllib2

for url in ["http://entrian.com/", "http://entrian.com/does-not-exist/"]:
    try:
        connection = urllib2.urlopen(url)
        print connection.getcode()
        connection.close()
    except urllib2.HTTPError as e:
        print e.getcode()
How can I disable the redirect processing on urllib2?
You can make a subclass of HTTPRedirectHandler that handles each redirect response code in whatever way you want. You can then use urllib2.build_opener to build your custom redirect handler into an opener, overriding the default redirect handling.
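A sketch of that approach (the handler name is mine; it hands back the 3xx response itself instead of following it):

import urllib
import urllib2

class NoRedirectHandler(urllib2.HTTPRedirectHandler):
    # Return the 3xx response as a normal result instead of following it.
    def http_error_302(self, req, fp, code, msg, headers):
        infourl = urllib.addinfourl(fp, headers, req.get_full_url())
        infourl.code = code
        return infourl
    http_error_301 = http_error_303 = http_error_307 = http_error_302

opener = urllib2.build_opener(NoRedirectHandler())
for url in ["http://entrian.com/", "http://entrian.com/does-not-exist/"]:
    try:
        print opener.open(url).getcode()  # now reports 3xx codes as well
    except urllib2.HTTPError as e:        # 4xx/5xx still raise
        print e.getcode()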
This is not a direct answer, but in this case you're much better off using requests:

import requests

for url in ["http://entrian.com/", "http://entrian.com/does-not-exist/"]:
    # requests.head() does not follow redirects by default, so 3xx codes show up
    print requests.head(url).status_code
I have the following code using urllib2, which prints HTTP Error 403: Forbidden. But if I use urllib instead to fetch the url, I don't see any error and I do get the list of my friends. The access token used is the same in both cases.
url = 'https://graph.facebook.com/me/friends/'
params = {'access_token': 'a valid access-token...', 'fields': 'id,name,birthday'}
req = urllib2.Request(url, data=urllib.urlencode(params))
try:
    con = urllib2.urlopen(req)
    print con.read()
except Exception as excp:
    print excp.read()
Please suggest what might be wrong.
This one is solved now. The trouble was that the request should be a GET, not a POST, so all the query parameters should be passed in the URL instead of as POST data. In my case, getting the friends list looks something like this:
url = 'https://graph.facebook.com/me/friends/'
params = {'access_token': 'a valid access-token...', 'fields': 'id,name,birthday'}
try:
    con = urllib2.urlopen(url + '?' + urllib.urlencode(params))
    print con.read()
except Exception as excp:
    print excp
Hope it helps someone.
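For comparison, the requests equivalent builds the query string for you (a sketch reusing the url and params from above):

import requests

resp = requests.get(url, params=params)  # params become the ?query=string
print resp.content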