Python HTTPConnection - write request headers to file

I'm trying to log some connection info in the event that an error occurs. Using httplib's HTTPConnection I can print out the request headers by setting the debug level to 1:
connection = httplib.HTTPConnection('www.example.com')
connection.set_debuglevel(1)
However, this just prints directly to the shell, unconditionally. I need to be able to get this info as a string to store in a variable, so that I only print it out when an exception is thrown.
The specific info that I want is the request headers that the library is generating.

I would use the requests HTTP library.
To get response headers, you just need this little piece of code:
import requests

try:
    r = requests.get("http://www.example.com")
    # Raise an exception in case of a "bad"
    # (non-200) status code
    r.raise_for_status()
    print r.headers
except Exception as e:
    print e.message
Output:
{'connection': 'close',
'content-encoding': 'gzip',
'content-length': '1162',
'content-type': 'text/html; charset=UTF-8',
'date': 'Sun, 12 Aug 2012 12:49:44 GMT',
'last-modified': 'Wed, 09 Feb 2011 17:13:15 GMT',
'server': 'Apache/2.2.3 (CentOS)',
'vary': 'Accept-Encoding'}
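The question asks specifically for the request headers; requests keeps the prepared request on the response object, so the same try block can log the headers the library generated:
# Inside the try block above: the prepared request that was actually
# sent, including the headers requests generated.
print r.request.headers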

Turns out the trick is to redirect sys.stdout to a StringIO object, whose contents you can then write to a file or use however you like, since you can get at the string. Check out StringIO.StringIO.
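A minimal sketch of that approach, shown with Python 3's http.client and io.StringIO (under Python 2, substitute httplib and StringIO.StringIO):
import sys
import http.client
from io import StringIO

buffer = StringIO()
old_stdout = sys.stdout
sys.stdout = buffer  # debuglevel output is printed to stdout
try:
    connection = http.client.HTTPConnection('www.example.com')
    connection.set_debuglevel(1)
    connection.request('GET', '/')
    response = connection.getresponse()
    response.read()
finally:
    sys.stdout = old_stdout  # always restore stdout

debug_log = buffer.getvalue()  # headers as a string; print only on exception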

Related

Suppress printing of response from Azure Queue Storage

When I send a message to a queue, I want the response returned into an object which I can then include in my log or not. However, for some reason, when I execute the following code:
from azure.storage.queue import QueueClient, TextBase64EncodePolicy

# ... some code running ##########################
queue = QueueClient.from_connection_string(conn_str=conn_queue, queue_name="items",
                                           message_encode_policy=TextBase64EncodePolicy())
# ... some message generated #####################
response = queue.send_message(json.dumps(item_dict))
a complete message is printed to my log. It looks, for example, like this:
Request URL: 'https://{some_storage_account}.queue.core.windows.net/items/messages'
Request method: 'POST'
Request headers: 'Accept': 'application/xml'
'Content-Type': 'application/xml; charset=utf-8'
'x-ms-version': 'REDACTED'
'Content-Length': '1295'
'x-ms-date': 'REDACTED'
'x-ms-client-request-id': '3452464c-06b2-11eb-9f96-00155d6ebdc5'
'User-Agent': 'azsdk-python-storage-queue/12.1.3 Python/3.8.5 (Linux-4.19.104-microsoft-standard-x86_64-with-glibc2.2.5)'
'Authorization': 'REDACTED'
A body is sent with the request
Response status: 201
Response headers:
'Transfer-Encoding': 'chunked'
'Content-Type': 'application/xml'
'Server': 'Windows-Azure-Queue/1.0 Microsoft-HTTPAPI/2.0'
'x-ms-request-id': '1720a5da-c003-002b-18be-9ab4d4000000'
'x-ms-version': 'REDACTED'
'Date': 'Mon, 05 Oct 2020 02:26:41 GMT'
How can I prevent this gulp of information from being printed to my log?
I have spent at least half an hour on the Microsoft docs, but I can't find where to turn this behaviour off.
This worked for me - add this to your code:
logger = logging.getLogger("azure.core.pipeline.policies.http_logging_policy")
logger.setLevel(logging.WARNING)
I tried the logging levels - they did not do the trick for me. However, when I define a new logger for this connection, it works:
queue_logger = logging.getLogger("logger_name")
# queue_logger.disabled = True
queue_logger.setLevel(logging.DEBUG)
queue = QueueClient.from_connection_string(conn_str=conn_queue, queue_name="debug",
                                           message_encode_policy=TextBase64EncodePolicy(),
                                           logger=queue_logger)
Ultimately I feel like I should use the REST API instead of the Python Azure SDK for this. The REST API lets me produce log output based on the response status; for some weird reason the SDK does not offer me this possibility.
The answer should be here, in the README of azure-storage-python on GitHub:
Here is how we use the logging levels, it is recommended to use INFO:
DEBUG: log strings to sign
INFO: log outgoing requests and responses, as well as retry attempts # <--
WARNING: not used
ERROR: log calls that still failed after all the retries
This should do it, as recommended by Kalies LAMIRI (I didn't try it myself):
logger = logging.getLogger("azure.storage")
logger.setLevel(logging.ERROR)
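For completeness, a minimal sketch combining both suggestions with the question's snippet (conn_queue and item_dict are the placeholders from the question):
import json
import logging
from azure.storage.queue import QueueClient, TextBase64EncodePolicy

# Silence the request/response dump (logged at INFO) while keeping
# warnings and errors visible.
logging.getLogger("azure.core.pipeline.policies.http_logging_policy").setLevel(logging.WARNING)
logging.getLogger("azure.storage").setLevel(logging.ERROR)

queue = QueueClient.from_connection_string(conn_str=conn_queue, queue_name="items",
                                           message_encode_policy=TextBase64EncodePolicy())
response = queue.send_message(json.dumps(item_dict))  # no trace printed now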

POST request to the PrestaShop API with Python

I managed to list and create products through the PrestaShop API. I want to automate the product update process on my website a little.
But I have an issue trying to upload images, both when creating a new product with an image and when uploading an image to a product that I created through the webservice.
I don't see any error in my code, so I want to know if I'm using the PrestaShop API incorrectly.
My code:
def addNewImage(product_id):
    file = 'foto.png'
    fd = io.open(file, "rb")
    data = fd.read()
    r = requests.post(urlimg + product_id, data=data, auth=(key, ""),
                      headers={'Content-Type': 'multipart/form-data'})
    print(r.status_code)
    print(r.headers)
    print(r.content)
Prints:
500
{'Server': 'nginx', 'Date': 'Fri, 31 May 2019 09:18:27 GMT', 'Content-Type': 'text/xml;charset=utf-8', 'Transfer-Encoding': 'chunked', 'Connection': 'keep-alive', 'Access-Time': '1559294307', 'X-Powered-By': 'PrestaShop Webservice', 'PSWS-Version': '1.7.5.2', 'Execution-Time': '0.003', 'Set-Cookie': 'PrestaShop-30ff13c7718a401c862ad41ea4c0505f=def50200b7a8c608f3053d32136569a34c897c09cea1230b5f8a0aee977e6caac3e22bea39c63c30bfc955fe344d2cbabf640dc75039c63b33c88c5f33e6b01f2b282047bfb0e05c8f8eb7af08f2cc5b0c906d2060f92fea65f73ce063bf6d87bd8ac4d03d1f9fc0d7b6bf56b1eb152575ef559d95f89fc4f0090124630ae292633b4e08cfee38cee533eb8abe151a7d9c47ed84366a5dd0e241242b809300f84b9bb2; expires=Thu, 20-Jun-2019 09:18:27 GMT; Max-Age=1728000; path=/; domain=example.com; secure; HttpOnly', 'Vary': 'Authorization', 'MS-Author-Via': 'DAV'}
b'<?xml version="1.0" encoding="UTF-8"?>
<prestashop xmlns:xlink="http://www.w3.org/1999/xlink">
<errors>
<error>
<code><![CDATA[66]]></code>
<message><![CDATA[Unable to save this image]]></message>
</error>
</errors>
</prestashop>'
I tried using Python's logging library, but it only tells me this:
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): midominio:443
DEBUG:urllib3.connectionpool:https://midominio:443 "POST /api/images/products/20 HTTP/1.1" 500 None
I also tried changing the file config/defines.inc.php to activate debug mode, as I read on the PrestaShop forum, but it made no difference.
I also tried the prestapyt library (and prestapyt3), but they don't work with Python 3 and I read they are not compatible with PrestaShop 1.7.
Edit:
display_errors and log_errors are activated in my Plesk panel. But when I go to var/www/vhosts/midominio/logs/error_log, I can't see any error referring to PHP or a 500 error on any line.
Thanks in advance for any suggestion...
Edit: I tried the suggestion in the answer, but it returns the same error.
I think the problem is in the post command, assuming everything else is working fine on the backend. data= is used to send form data and other text data. To upload a file, you should do it like this:
files = {'media': open('test.jpg', 'rb')}
requests.post(url, files=files)
So your code translates to:
def addNewImage(product_id):
    file = 'foto.png'
    fd = io.open(file, "rb")
    # Don't set the Content-Type header yourself: when files= is used,
    # requests generates the multipart Content-Type (with the boundary),
    # and a hand-written 'multipart/form-data' header has no boundary,
    # which breaks the upload.
    r = requests.post(urlimg + product_id, auth=(key, ""), files={'media': fd})
    print(r.status_code)
    print(r.headers)
    print(r.content)
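If the 500 persists, the form-field name may be worth checking too; PrestaShop's image endpoint is often shown with the field named 'image' rather than 'media' (an assumption to verify against your PrestaShop version):
# Hypothetical variation: field name 'image', with an explicit
# filename and MIME type for the upload.
with open('foto.png', 'rb') as fd:
    r = requests.post(urlimg + product_id, auth=(key, ""),
                      files={'image': ('foto.png', fd, 'image/png')})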

Requests HTTP headers

I'm having trouble with the HTTP headers that the Requests module returns.
I'm using the following code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import requests

response = requests.get("http://www.google.co.il", proxies={'http': '', 'https': ''})
data = response.text
# response.text returns the appropriate html code
# (<!doctype html><html dir="rtl" itemscope=""....)
if response.status_code == requests.codes.ok:
    # How do I send those headers to the conn (browser)?
    print "HEADERS: " + str(response.headers)
    conn.send(data)
I'm trying to send a GET request to www.google.co.il and pass the response on to the browser (which I called "conn" in the example). The problem is that the browser won't show the received HTML code; instead I'm getting ERR_EMPTY_RESPONSE.
The headers in the response are :
HEADERS: {'Content-Length': '5451', 'X-XSS-Protection': '1; mode=block', 'Content-Encoding': 'gzip', 'Set-Cookie': 'NID=103=RJzu4RTCNxkh-75dvKBHx-_jen9M8iPes_AdOIQqzBVZ0VPTz1PlQaAVLpwYOmxZlTKmcogiDb1VoY__Es0HqSNwlkmHl3SuBZC8_8XUfqh1PzdWTjrXRnB4S738M1lm; expires=Wed, 08-Nov-2017 10:05:46 GMT; path=/; domain=.google.co.il; HttpOnly', 'Expires': '-1', 'Server': 'gws', 'Cache-Control': 'private, max-age=0', 'Date': 'Tue, 09 May 2017 10:05:46 GMT', 'P3P': 'CP="This is not a P3P policy! See https://www.google.com/support/accounts/answer/151657?hl=en for more info."', 'Content-Type': 'text/html; charset=windows-1255', 'X-Frame-Options': 'SAMEORIGIN'}
Someone told me that the problem is that I'm not sending any headers to the browser. Is this really the problem? Any other suggestions? And if it is the problem, how do I send the appropriate headers to the browser?
Edit: I forgot to mention that the connection is through a proxy server.
Any help would be great!
Thanks a lot, Yahli.
I couldn't find anything about getting the raw HTTP response (not response.raw) in the requests documentation, so I wrote a function:
def http_response(response):
    # Rebuild a raw HTTP/1.1 response: status line, headers, blank line, body.
    # Note: requests has already un-chunked and decompressed the body, so
    # headers such as Content-Length, Content-Encoding and Transfer-Encoding
    # may no longer match it and may need to be dropped before forwarding.
    return 'HTTP/1.1 {} {}\r\n{}\r\n\r\n{}'.format(
        response.status_code, response.reason,
        '\r\n'.join(k + ': ' + v for k, v in response.headers.items()),
        response.content
    )
I tested it by setting Firefox's HTTP proxy to localhost:port (with a socket listening on that port), and it works fine.
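In the question's snippet, it would replace the bare conn.send(data) call:
if response.status_code == requests.codes.ok:
    conn.send(http_response(response))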
Alternatively you can get the host from conn.recv, open a new socket to that host, and send the data. Example:
data = conn.recv(1024)
host = [l.split(':')[1].strip() for l in data.splitlines() if l.startswith('Host:')]
if len(host):
    cli = socket.socket()
    cli.connect((host[0], 80))
    cli.send(data)
    response = ''
    while True:
        data = cli.recv(1024)
        if not data.strip():
            break
        response += data
    conn.send(response)
    cli.close()
Here conn is the connection to the web browser. This is just a quick example, assuming only HTTP requests (port 80); there is room for much optimization.

Downloading a *.gz zipped file with python requests corrupts it

I use this code (it is only a part of it) to download a *.gz archive.
with requests.session() as s:
    s.post(login_to_site_URL, payload)
    load = s.get(scene, stream=True)
    with open(path_to_file, "wb") as save_command:
        for chunk in load.iter_content(chunk_size=1024, decode_unicode=False):
            if chunk:
                save_command.write(chunk)
                save_command.flush()
After the download, the file is about twice the size it has when I download it by clicking "Save as", and the file is corrupted.
Link for the file is:http://www.zsrcpod.aviales.ru/modistlm/archive/tlm/geo/00000/28325/terra_77835_20140806_060059.geo.hdf.gz
File require login and password, so I add a screenshot of what I see when I follow the link: http://i.stack.imgur.com/DGqtS.jpg
Looks like some option is set that defines this archive as text.
The response headers are:
{'content-length': '58277138',
 'content-encoding': 'gzip',
 'set-cookie': 'cidaviales=53616c7465645f5fc8f0abdb26f7b0536784ae4e8b302410a288f1f67ccc0afd13ce067d97ba237dc27749d9957f30457f1a1d9763b03637; path=/, avialestime=1407386483; path=/; expires=Wed, 05-Nov-2014 04:41:23 GMT, ciddaviales=53616c7465645f5fc8f0abdb26f7b0536784ae4e8b302410a288f1f67ccc0afd13ce067d97ba237dc27749d9957f30457f1a1d9763b03637; domain=aviales.ru; path=/',
 'accept-ranges': 'bytes',
 'server': 'Apache/1.3.37 (Unix) mod_perl/1.30',
 'last-modified': 'Wed, 06 Aug 2014 06:17:14 GMT',
 'etag': '"21d4e63-3793d12-53e1c86a"',
 'date': 'Thu, 07 Aug 2014 04:41:23 GMT',
 'content-type': 'text/plain; charset=windows-1251'}
How to properly download this file using python requests library?
It looks like requests automatically decompresses the content for you. See here:
Requests automatically decompresses gzip-encoded responses, and does its best to decode response content to unicode when possible. You can get direct access to the raw response (and even the socket), if needed as well.
This is the default behaviour when the Accept-Encoding request header contains gzip. You can check this by printing load.request.headers. To get the raw data you would have to modify that headers dict to exclude gzip, or read from the raw response; however, in your case the decompressed data looks like a valid hdf file - so just save it with that extension and use it!
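A minimal sketch of the raw-response route, keeping the question's variable names (login_to_site_URL, payload, scene, path_to_file):
import shutil
import requests

with requests.session() as s:
    s.post(login_to_site_URL, payload)
    load = s.get(scene, stream=True)
    with open(path_to_file, "wb") as f:
        # load.raw bypasses requests' automatic gzip decoding, so the
        # .gz archive is written to disk exactly as the server sent it.
        shutil.copyfileobj(load.raw, f)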

Suppress logging by cloudstorage.open() in google app engine GCS client library Python

I'm using the GCS client library to write files to Google Cloud Storage in my App Engine app in Python.
Before creating a file, I need to make sure it doesn't already exist to avoid overwriting.
To do this I am checking to see if the file exists before trying to create it:
import cloudstorage as gcs

try:
    gcs_file = gcs.open(filename, 'r')
    gcs_file.close()
    return "File Exists!"
except gcs.NotFoundError as e:
    return "File Does Not Exist: " + str(e)
cloudstorage.open() is logging (either directly or indirectly) the fact that it receives a 404 error when trying to read the non-existent file. I would like to suppress this if possible.
Thanks
Edit: Here's what is logged:
12:19:32.565 suspended generator _get_segment(storage_api.py:432) raised NotFoundError(Expect status [200, 206] from Google Storage. But got status 404. Path: '/cs-mailing/bde24e63-4d31-41e5-8aff-14b76b239388.html'. Request headers: {'Range': 'bytes=0-1048575', 'x-goog-api-version': '2', 'accept-encoding': 'gzip, *'}. Response headers: {'alternate-protocol': '443:quic', 'content-length': '127', 'via': 'HTTP/1.1 GWA', 'x-google-cache-control': 'remote-fetch', 'expires': 'Mon, 02 Jun 2014 11:19:32 GMT', 'server': 'HTTP Upload Server Built on May 19 2014 09:31:01 (1400517061)', 'cache-control': 'private, max-age=0', 'date': 'Mon, 02 Jun 2014 11:19:32 GMT', 'content-type': 'application/xml; charset=UTF-8'}. Body: "NoSuchKey: The specified key does not exist.". Extra info: None.)
The logging in your case was triggered by str(e).
The issue was fixed in https://code.google.com/p/appengine-gcs-client/source/detail?r=172
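Until that fix reaches your SDK version, a minimal workaround sketch is to check existence with cloudstorage.stat() and avoid stringifying the exception (a sketch, assuming stat() raises the same NotFoundError):
import cloudstorage as gcs

def gcs_file_exists(filename):
    # stat() raises NotFoundError for a missing object; we deliberately
    # avoid calling str(e) on it, since that is what emitted the log line.
    try:
        gcs.stat(filename)
        return True
    except gcs.NotFoundError:
        return False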
