How to get the HTTP error stream using the Python requests library?

How do I access the error stream using the Python requests library? For example, using HttpURLConnection in Java, I would do something like:
InputStream errorStream = conn.getErrorStream();
Is there a function like this with requests?
I'm looking for the error message in the response body that was supplied by the server, NOT the error status line, e.g. Internal Server Error

requests won't raise an exception when an HTTP error occurs, but you can get the server-supplied error message from the response content. Example:
import requests

url = 'https://stackoverflow.com/does_not_exist'
r = requests.get(url)
print(r.status_code, r.reason)
print(r.text)
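To make this concrete, here is a self-contained sketch; it spins up a throwaway local server standing in for the failing endpoint, since a live URL's error page can change. Note that requests returns normally even on a 500, and the server-supplied message is right there in r.text:

```python
import http.server
import threading

import requests

class ErrorHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"database connection refused"  # server-supplied error detail
        self.send_response(500)                # Internal Server Error
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), ErrorHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

r = requests.get("http://127.0.0.1:%d/" % server.server_port)
print(r.status_code, r.reason)  # 500 Internal Server Error
print(r.text)                   # database connection refused
server.shutdown()
```

So there is no separate "error stream" as in Java's HttpURLConnection: the error body and the success body arrive through the same response object.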

Related

POST request in Python creates an internal server error

I'm trying to create a POST request in Python, but I get an internal server error when issuing the request.
I'm trying to intercept it with a try statement, but that doesn't seem to work.
import logging
import requests

logging.basicConfig(filename='python.log', filemode='w', level=logging.DEBUG)

url = "https://redacted-url.com/my-api/check_email"
json = {"email": request.params["email"].strip(), "list": request.params["list"]}
headers = {"Content-Type": "application/json", "Accept": "text/plain"}

try:
    r = requests.post(url, headers=headers, json=json)
except requests.exceptions.RequestException as e:
    logging.error(e, exc_info=True)
A: I have no idea where that log file would be stored. Do I have to give the full server path? What if I just use "python.log"? Where would it end up?
B: The try/except doesn't seem to work; I still get an internal server error.
C: The error definitely occurs on the line r = requests.post(url, headers=headers, json=json). If I comment that out, the error doesn't occur.
D: Since I don't get a meaningful error: what am I doing wrong with that request? This is actually my main problem, but it would also be nice to figure out how to log and intercept that error.
Last but not least: if I run the same request from the terminal, it is processed fine. Why?

Can't read a raw cgi response with python

I'm trying to read a raw CGI response from an old website so I can parse the data, but I can't seem to read it with requests or urllib.
Python throws the following exception:
Exception has occurred: requests.exceptions.ConnectionError
('Connection aborted.', BadStatusLine('60.000\n',))
The data loads from 'http://powermeter.tradesmanmfg.ca:9900/basic.cgi?sid=0.487933' but only works in IE. I'd like to parse this raw data directly in Python, yet whatever I try, I can't get past the BadStatusLine error. My code is below. How can I read this response into Python to parse it?
import requests
webpage = 'http://powermeter.tradesmanmfg.ca:9900/basic.cgi?sid=0.487933'
r = requests.get(webpage, stream=True)
print(r.raw.data)

HTTP Error 403: Forbidden while fetching HTML source on the server

When I run the code locally, fetch data from the URL, and parse it to text, everything works properly.
When I run exactly the same code on the remote server, HTTP Error 403: Forbidden occurs.
Answers from these questions:
HTTP error 403 in Python 3 Web Scraping,
urllib2.HTTPError: HTTP Error 403: Forbidden
helped when I ran the code locally, and everything worked fine.
Do you know what can be different about fetching data from a remote server when the code is the same (locally and on the server) and the way of running it is the same, but the results are completely different?
URL that I want to fetch:
url=https://bithumb.cafe/notice
Code I was trying to use to fetch the data (it works locally but not on the server):
try:
    request = urllib.request.Request(url)
    request.add_header('User-Agent', 'cheese')
    logger.info("request: {}".format(request))
    content = urllib.request.urlopen(request).read()
    logger.info('content: {}'.format(content))
    decoded = content.decode('utf-8')
    logger.info('content_decoded: {}'.format(decoded))
    return decoded
except Exception as e:
    logger.error('failed with error message: {}'.format(e))
    return ''
The second way of fetching data (this also works locally but not on the remote server):
class AppURLopener(urllib.request.FancyURLopener):
    version = "Mozilla/5.0"
and the method:
try:
    opener = AppURLopener()
    response = opener.open(url)
    logger.info("request response: {}. response type: {}. response_dict: {}"
                .format(response, type(response), response.__dict__))
    html_response = response.read()
    logger.info("html_response: {}".format(html_response))
    encoding = response.headers.get_content_charset('utf-8')
    decoded_html = html_response.decode(encoding)
    logger.info('content_decoded: {}'.format(decoded_html))
    return decoded_html
except Exception as e:
    logger.error('failed with error message: {}'.format(e))
    return ''

Python SUDS - Getting Exception 415 when calling a SOAP method

from suds.client import Client
url = r'http://*********?singleWsdl'
c = Client(url)
This works fine up to this point, but when I execute the statement below, I get the error message shown at the end. Please help.
c.service.Method_Name('parameter1', 'parameter2')
The Error message is :
Exception: (415, u'Cannot process the message because the content type
\'text/xml; charset=utf-8\' was not the expected type
\'multipart/related; type="application/xop+xml"\'.')
A Content-Type header of multipart/related; type="application/xop+xml" is the type used by MTOM, a message format used to efficiently send attachments to/from web services.
I'm not sure why the error claims to be expecting it, because the solution I found for my situation was to override the Content-Type header to 'application/soap+xml;charset=UTF-8'.
Example:
soap_client.set_options(headers = {'Content-Type': 'application/soap+xml;charset=UTF-8'})
If you are able, you could also try checking for MTOM encoding in the web service's configuration and changing it.

Making a simple GET/POST with URL encoding in Python

I have a custom URL of the form
http://somekey:somemorekey#host.com/getthisfile.json
I have tried several approaches but keep getting errors.
Method 1:
from httplib2 import Http
ipdb> from urllib import urlencode
h=Http()
ipdb> resp, content = h.request("3b8138fedf8:1d697a75c7e50#abc.myshopify.com/admin/shop.json")
Error:
No help on =Http()
(I got this method from here.)
Method 2:
import urllib
urllib.urlopen(url).read()
Error:
*** IOError: [Errno url error] unknown url type: '3b8108519e5378'
I guess something is wrong with the encoding. I tried:
ipdb> url.encode('idna')
*** UnicodeError: label empty or too long
Is there any way to make this complex URL call easy?
You are using a PDB-based debugger instead of an interactive Python prompt. h is a command in PDB. Use ! to prevent PDB from trying to interpret the line as a command:
!h = Http()
urllib requires that you pass it a fully qualified URL; your URL is lacking a scheme:
urllib.urlopen('http://' + url).read()
Your URL does not appear to use any international characters in the domain name, so you do not need to use IDNA encoding.
You may want to look into the 3rd-party requests library; it makes interacting with HTTP servers that much easier and straightforward:
import requests
r = requests.get('http://abc.myshopify.com/admin/shop.json', auth=("3b8138fedf8", "1d697a75c7e50"))
data = r.json() # interpret the response as JSON data.
The current de facto HTTP library for Python is Requests.
import requests
response = requests.get(
"http://abc.myshopify.com/admin/shop.json",
auth=("3b8138fedf8", "1d697a75c7e50")
)
response.raise_for_status() # Raise an exception if HTTP error occurs
print(response.content) # Do something with the content.
