I'm using Observium to pull Nginx stats on localhost; however, it returns '405 Not Allowed':
# curl -I localhost/nginx_status
HTTP/1.1 405 Not Allowed
Server: nginx
Date: Wed, 19 Jun 2013 22:12:37 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 166
Connection: keep-alive
Keep-Alive: timeout=5
# curl -I -H "Host: example.com" localhost/nginx_status
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 19 Jun 2013 22:12:43 GMT
Content-Type: text/plain
Connection: keep-alive
Keep-Alive: timeout=5
Could you please advise how to add a Host header with urllib2.urlopen in Python (Python 2.6.6)?
Current script:
#!/usr/bin/env python
import urllib2
import re
data = urllib2.urlopen('http://localhost/nginx_status').read()
params = {}
for line in data.split("\n"):
    smallstat = re.match(r"\s?Reading:\s(.*)\sWriting:\s(.*)\sWaiting:\s(.*)$", line)
    req = re.match(r"\s+(\d+)\s+(\d+)\s+(\d+)", line)
    if smallstat:
        params["Reading"] = smallstat.group(1)
        params["Writing"] = smallstat.group(2)
        params["Waiting"] = smallstat.group(3)
    elif req:
        params["Requests"] = req.group(3)
    else:
        pass
dataorder = [
    "Active",
    "Reading",
    "Writing",
    "Waiting",
    "Requests"
]
print "<<<nginx>>>\n"
for param in dataorder:
    if param == "Active":
        Active = int(params["Reading"]) + int(params["Writing"]) + int(params["Waiting"])
        print Active
    else:
        print params[param]
You might want to check out the urllib2 missing manual for more information, but basically you create a dictionary of your header labels and values and pass it to the urllib2.Request method. A (slightly) modified version of the code from the linked manual:
from urllib import urlencode
from urllib2 import Request, urlopen
# Define values that we'll pass to our urllib and urllib2 methods
url = 'http://www.something.com/blah'
user_host = 'example.com'
values = {'name': 'Engineero',  # dict of keys and values for our POST data
          'location': 'Interwebs',
          'language': 'Python'}
headers = {'Host': user_host}  # dict of keys and values for our header
# Set up our request, execute, and read
data = urlencode(values)  # encode for sending URL request
req = Request(url, data, headers)  # make POST request to url with data and headers
response = urlopen(req)  # get the response from the server
the_page = response.read()  # read the response from the server
# Do other stuff with the response
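Adapted to the original nginx_status check, a minimal sketch (assuming the same Host value that worked in the curl test) would be:
import urllib2

req = urllib2.Request('http://localhost/nginx_status', headers={'Host': 'example.com'})
data = urllib2.urlopen(req).read()
print data
urllib2 only fills in the Host header itself when you haven't supplied one, so the value passed here is what gets sent on the wire.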
I am trying to send an image using the following code. This is just part of my code; I didn't include the headers here, but they are set up correctly, with Content-Type set to multipart/form-data; boundary=eBayClAsSiFiEdSpOsTiMaGe:
img = "piano.jpg"
f = open(img,'rb')
out = f.read()
files = {'file':out}
p = requests.post("https://ecg-api.gumtree.com.au/api/pictures",headers=headers, data=files)
f.close()
I get a 400 error: "incorrect multipart/form-data format".
How do I send the image properly?
Extra Details:
Network analysis shows the following request being sent:
POST https://ecg-api.gumtree.com.au/api/pictures HTTP/1.1
host: ecg-api.gumtree.com.au
content-type: multipart/form-data; boundary=eBayClAsSiFiEdSpOsTiMaGe
authorization: Basic YXV5grehg534
accept: */*
x-ecg-ver: 1.49
x-ecg-ab-test-group: gblios_9069_b;gblios-8982-b
accept-encoding: gzip
x-ecg-udid: 73453-7578p-8657
x-ecg-authorization-user: id="1635662", token="ee56hgjfjdghgjhfj"
accept-language: en-AU
content-length: 219517
user-agent: Gumtree 12.6.0 (iPhone; iOS 13.3; en_AU)
x-ecg-original-machineid: Gk435454-hhttehr
Form data:
file: ����..JFIF.....H.H..��.LExif..MM.*..................�i.........&......�..
I cut off the form data part for file as it's too long. My headers are written as follows (I have made up the actual auth values here):
idd = "1635662"
token = "ee56hgjfjdghgjhfj"
headers = {
"authority":"ecg-api.gumtree.com.au",
"content-type":"multipart/form-data; boundary=eBayClAsSiFiEdSpOsTiMaGe",
"authorization":"Basic YXV5grehg534",
"accept":"*/*",
"x-ecg-ver":"1.49",
"x-ecg-ab-test-group":"gblios_9069_b;gblios-8982-b",
"accept-encoding":"gzip",
"x-ecg-udid":"73453-7578p-8657",
"x-ecg-authorization-user":f"id={idd}, token={token}",
"accept-language":"en-AU",
"content-length":"219517",
"user-agent":"Gumtree 12.6.0 (iPhone; iOS 13.3; en_AU)",
"x-ecg-original-machineid":"Gk435454-hhttehr"
}
Maybe it's the way I have written the headers? I suspect it's the way I have written the x-ecg-authorization-user part of the headers, because I realise that even putting random values for the token or id gives me the same 400 error: "incorrect multipart/form-data format".
You can try the following code. Don't set Content-Type in the headers; let requests do that for you:
import os
import requests

files = {'file': (os.path.basename(filename), open(filename, 'rb'), 'application/octet-stream')}
upload_files_url = "url"
headers = {'Authorization': access_token, 'JWTAUTH': jwt}
r2 = requests.post(upload_files_url, files=files, headers=headers)
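Applied to the original upload, that means dropping the hand-written content-type and content-length entries and letting requests generate the multipart body and boundary itself. A hedged sketch reusing the question's headers dict (whether the API accepts an auto-generated boundary is an assumption):
import requests

upload_headers = {k: v for k, v in headers.items()
                  if k not in ('content-type', 'content-length')}  # let requests set these two
with open('piano.jpg', 'rb') as f:
    files = {'file': ('piano.jpg', f, 'image/jpeg')}
    p = requests.post('https://ecg-api.gumtree.com.au/api/pictures',
                      headers=upload_headers, files=files)
print(p.status_code, p.text)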
I have an HTTP/JSON RESTful server implemented in Python with the Bottle web framework. I want to gzip the data sent to the client.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# curl -H "Content-Type: application/json" -X POST -d '{"key1": 1, "key2": 2}' http://localhost:6789/post
#
from bottle import run, request, post, route, response
from zlib import compress
import json
data = {'my': 'json'}
@post('/post')
def api_post():
    global data
    data = json.loads(request.body.read())
    return(data)

@route('/get')
def api_get():
    global data
    response.headers['Content-Encoding'] = 'identity'
    return(json.dumps(data).encode('utf-8'))

@route('/getgzip')
def api_get_gzip():
    global data
    if 'gzip' in request.headers.get('Accept-Encoding', ''):
        response.headers['Content-Encoding'] = 'gzip'
        ret = compress(json.dumps(data).encode('utf-8'))
    else:
        response.headers['Content-Encoding'] = 'identity'
        ret = json.dumps(data).encode('utf-8')
    return(ret)
run(host='localhost', port=6789, debug=True)
When I test my server with curl, the result is good (if I use the --compressed option):
$ curl -H "Accept-encoding: gzip, deflated" -v --compressed http://localhost:6789/getgzip
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 6789 (#0)
> GET /getgzip HTTP/1.1
> Host: localhost:6789
> User-Agent: curl/7.47.0
> Accept: */*
> Accept-encoding: gzip, deflated
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Date: Sun, 12 Nov 2017 09:09:09 GMT
< Server: WSGIServer/0.1 Python/2.7.12
< Content-Length: 22
< Content-Encoding: gzip
< Content-Type: text/html; charset=UTF-8
<
* Closing connection 0
{"my": "json"}
But not with HTTPie (or Firefox, or Chrome...):
$ http http://localhost:6789/getgzip
HTTP/1.0 200 OK
Content-Encoding: gzip
Content-Length: 22
Content-Type: text/html; charset=UTF-8
Date: Sun, 12 Nov 2017 09:10:10 GMT
Server: WSGIServer/0.1 Python/2.7.12
http: error: ContentDecodingError: ('Received response with content-encoding: gzip, but failed to decode it.', error('Error -3 while decompressing: incorrect header check',))
Any idea?
Hi Nicolargo,
According to the documentation of HTTPie, the default request header is Accept-Encoding: gzip, deflate, but you are using the compress function of Python's zlib module, which produces a zlib-wrapped DEFLATE stream rather than a gzip stream (gzip uses the same DEFLATE algorithm, but with a different header and trailer).
Also, according to the documentation of Bottle (https://bottlepy.org/docs/dev/recipes.html#gzip-compression-in-bottle), you will need a custom middleware to perform gzip compression (see an example here: http://svn.cherrypy.org/tags/cherrypy-2.1.1/cherrypy/lib/filter/gzipfilter.py).
Edit:
The raw DEFLATE data produced by zlib is gzip-compatible; I think the problem is really the header of the data (as the error mentions). In http://svn.cherrypy.org/tags/cherrypy-2.1.1/cherrypy/lib/filter/gzipfilter.py there is a write_gzip_header function; maybe you can try that.
Thanks to the edit section of Guillaume's answer, it now works with both HTTPie and curl.
Here is the complete code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# curl -H "Content-Type: application/json" -X POST -d '{"key1": 1, "key2": 2}' http://localhost:6789/post
#
from bottle import run, request, post, route, response
import zlib
import json
import struct
import time
data = {'my': 'json'}
@post('/post')
def api_post():
    global data
    data = json.loads(request.body.read())
    return(data)

@route('/get')
def api_get():
    global data
    response.headers['Content-Encoding'] = 'identity'
    return(json.dumps(data).encode('utf-8'))

@route('/getgzip')
def api_get_gzip():
    global data
    ret = json.dumps(data).encode('utf-8')
    if 'gzip' in request.headers.get('Accept-Encoding', ''):
        response.headers['Content-Encoding'] = 'gzip'
        ret = gzip_body(ret)
    else:
        response.headers['Content-Encoding'] = 'identity'
    return(ret)

def write_gzip_header():
    header = '\037\213'  # magic header
    header += '\010'     # compression method
    header += '\0'
    header += struct.pack("<L", long(time.time()))
    header += '\002'
    header += '\377'
    return header

def write_gzip_trailer(crc, size):
    footer = struct.pack("<l", crc)
    footer += struct.pack("<L", size & 0xFFFFFFFFL)
    return footer

def gzip_body(body, compress_level=6):
    yield write_gzip_header()
    crc = zlib.crc32("")
    size = 0
    zobj = zlib.compressobj(compress_level,
                            zlib.DEFLATED, -zlib.MAX_WBITS,
                            zlib.DEF_MEM_LEVEL, 0)
    size += len(body)
    crc = zlib.crc32(body, crc)
    yield zobj.compress(body)
    yield zobj.flush()
    yield write_gzip_trailer(crc, size)
run(host='localhost', port=6789, debug=True)
It's a little complicated, but it does the job...
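For what it's worth, the standard gzip module can take care of the header and trailer for you; a minimal sketch of the same idea (assuming Python 2, as in the code above) would be:
import gzip
from StringIO import StringIO

def gzip_compress(data, compress_level=6):
    buf = StringIO()
    # GzipFile writes the magic header, the DEFLATE stream and the CRC/size trailer
    with gzip.GzipFile(fileobj=buf, mode='wb', compresslevel=compress_level) as f:
        f.write(data)
    return buf.getvalue()
The route could then simply return gzip_compress(json.dumps(data).encode('utf-8')); on Python 3 this shrinks further to gzip.compress(...).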
I am trying to post data to my server from my microcontroller. I need to send raw HTTP data from my controller, and this is what I am sending:
POST /postpage HTTP/1.1
Host: https://example.com
Accept: */*
Content-Length: 18
Content-Type: application/json
{"cage":"abcdefg"}
My server requires JSON encoding, not a form-encoded request.
For the above request, I get a 400 error from the server: HTTP/1.1 400 Bad Request.
However, when I post to my server via a Python script from my laptop, I get a proper response.
import requests
url='https://example.com'
mycode = 'abcdefg'
def enter():
    value = requests.post(url + '/postpage',
                          params={'cage': mycode})
    print vars(value)
enter()
Can anyone please let me know where I could be going wrong in the raw http data I'm sending above ?
HTTP separates headers with a CRLF and requires a blank line (a double CRLF) between the headers and the content:
POST /postpage HTTP/1.1
Host: https://example.com
Accept: */*
Content-Length: 18
Content-Type: application/json

{"cage":"abcdefg"}
If you don’t think you’ve got all of the request right, try seeing what was sent by Python:
response = ...
request = response.request # request is a PreparedRequest.
headers = request.headers
url = request.url
Read the docs for PreparedRequest for more information.
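For example, a quick sketch against the question's placeholder URL (json= is used here so the body is actual JSON rather than form data):
import requests

resp = requests.post('https://example.com/postpage', json={'cage': 'abcdefg'})
prepared = resp.request           # the PreparedRequest that was actually sent
print(prepared.url)
print(prepared.headers)           # shows the Content-Type and Content-Length requests chose
print(prepared.body)              # '{"cage": "abcdefg"}'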
To pass a parameter, use this Python:
REQUEST = 'POST /postpage%s HTTP/1.1\r\nHost: example.com\r\nContent-Length: 0\r\nConnection: keep-alive\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nUser-Agent: python-requests/2.4.3 CPython/2.7.9 Linux/4.4.11-v7+\r\n\r\n'

def build_request(params):
    query = ''
    for k, v in params.items():
        query += '&' + k + '=' + v  # URL-encode here if you want.
    if len(query):
        query = '?' + query[1:]
    return REQUEST % query
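And for the original JSON request, a sketch of the full byte string the microcontroller needs to emit (example.com is the question's placeholder; note the CRLFs, the blank line, and a Content-Length that matches the body exactly):
body = '{"cage":"abcdefg"}'
raw = ('POST /postpage HTTP/1.1\r\n'
       'Host: example.com\r\n'            # host name only, no scheme
       'Accept: */*\r\n'
       'Content-Type: application/json\r\n'
       'Content-Length: %d\r\n'
       '\r\n'                             # blank line terminates the headers
       '%s') % (len(body), body)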
(New to Python)
I'm trying to do a simple authenticated PUT of a file... so I make two curl calls: the first one to authenticate (which prints the token out as expected), but when I use the same variable (token) to add it to the headers ("Authorization: Bearer %s" % str(token)), token is empty. What am I doing wrong here?
import urllib
import cStringIO
import pycurl
import requests
from urllib import urlencode
import os.path
# declarations
filename = "./profile.jpg"
response = cStringIO.StringIO()
c = pycurl.Curl()
# formdata
post_data = {'username': '...', 'password':'...'}
# Form data must be provided already urlencoded.
postfields = urlencode(post_data)
# Sets request method to POST,
# Content-Type header to application/x-www-form-urlencoded
# and data to send in request body.
print "*****************************************************"
# authenticate
c = pycurl.Curl()
c.setopt(c.POST, 1)
c.setopt(c.URL, "https://.../auth")
c.setopt(c.POSTFIELDS, postfields)
c.setopt(c.SSL_VERIFYPEER, 0)
c.setopt(c.SSL_VERIFYHOST, 0)
c.setopt(c.VERBOSE, 1)
c.perform()
c.close()
token = response.getvalue()
print token
print "*****************************************************"
# upload file
filesize = os.path.getsize(filename)
fin = open(filename, 'rb')
c = pycurl.Curl()
c.setopt(c.PUT, 1)
c.setopt(c.URL, "https://.../avatar")
c.setopt(c.HTTPPOST, [("file", (c.FORM_FILE, filename))])
c.setopt(c.HTTPHEADER, [
    "Authorization: Bearer %s" % str(token),
    "Content-Type: image/jpeg"
])
c.setopt(c.READFUNCTION, fin.read)
c.setopt(c.POSTFIELDSIZE, filesize)
c.setopt(c.SSL_VERIFYPEER, 0)
c.setopt(c.SSL_VERIFYHOST, 0)
c.setopt(c.VERBOSE, 1)
c.setopt(c.WRITEFUNCTION, response.write)
c.perform()
c.close()
print response.getvalue()
print "*****************************************************"
Request:
> PUT ../avatar HTTP/1.1
User-Agent: PycURL/7.19.3 libcurl/7.35.0 GnuTLS/2.12.23 zlib/1.2.8 libidn/1.28 librtmp/2.3
Host: 127.0.0.1:8080
Accept: */*
Transfer-Encoding: chunked
Authorization: Bearer
Expect: 100-continue
Response:
< HTTP/1.1 100 Continue
< HTTP/1.1 401 Unauthorized
< content-type: application/json; charset=utf-8
< cache-control: no-cache
< content-length: 86
< Date: Tue, 02 Jun 2015 19:09:29 GMT
< Connection: keep-alive
<
* Connection #1 to host 127.0.0.1 left intact
{"statusCode":401,"error":"Unauthorized","message":"Incorrect Token or Token Expired"}
I think there is an encoding problem. The print statement is able to output something without caring about the encoding. Looking at the PycURL quickstart documentation, it mentions this issue. I would try to manipulate the encoding on this line:
"Authorization: Bearer %s" % str(token)
and try to do something like this instead:
"Authorization: Bearer %s" % token.decode('iso-8859-1')
(I would try .decode("utf-8") also, depending on what the encoding is)
You might need to change response = cStringIO.StringIO() to response = BytesIO(). I cannot give a definitive answer because I'm unsure about your setup.
EDIT: My suspicions about encoding were affirmed by this post about cStringIO, where it says that Unicode is not supported.
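A minimal sketch of that suggestion for the authentication step (assuming, as the original script does, that /auth returns the bare token in the response body):
import io
import pycurl

auth_response = io.BytesIO()                     # bytes buffer instead of cStringIO
c = pycurl.Curl()
c.setopt(c.URL, "https://.../auth")
c.setopt(c.POST, 1)
c.setopt(c.POSTFIELDS, postfields)
c.setopt(c.WRITEFUNCTION, auth_response.write)   # capture the body into the buffer
c.setopt(c.SSL_VERIFYPEER, 0)
c.setopt(c.SSL_VERIFYHOST, 0)
c.perform()
c.close()

token = auth_response.getvalue().decode('utf-8').strip()
auth_header = "Authorization: Bearer %s" % token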
While using the requests module, is there any way to print the raw HTTP request?
I don't want just the headers; I want the request line, headers, and content printout. Is it possible to see what is ultimately constructed as the HTTP request?
Since v1.2.3 Requests added the PreparedRequest object. As per the documentation "it contains the exact bytes that will be sent to the server".
One can use this to pretty print a request, like so:
import requests
req = requests.Request('POST','http://stackoverflow.com',headers={'X-Custom':'Test'},data='a=1&b=2')
prepared = req.prepare()
def pretty_print_POST(req):
    """
    At this point it is completely built and ready
    to be fired; it is "prepared".

    However, pay attention to the formatting used in
    this function, because it is meant to be pretty
    printed and may differ from the actual request.
    """
    print('{}\n{}\r\n{}\r\n\r\n{}'.format(
        '-----------START-----------',
        req.method + ' ' + req.url,
        '\r\n'.join('{}: {}'.format(k, v) for k, v in req.headers.items()),
        req.body,
    ))
pretty_print_POST(prepared)
which produces:
-----------START-----------
POST http://stackoverflow.com/
Content-Length: 7
X-Custom: Test
a=1&b=2
Then you can send the actual request with this:
s = requests.Session()
s.send(prepared)
These links are to the latest documentation available, so they might change in content:
Advanced - Prepared requests and API - Lower level classes
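If the request should also pick up session state (cookies, default headers, auth), prepare it through the Session used above rather than calling req.prepare() directly; a small sketch:
prepared = s.prepare_request(req)   # merges session cookies/headers into the PreparedRequest
pretty_print_POST(prepared)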
import requests
response = requests.post('http://httpbin.org/post', data={'key1': 'value1'})
print(response.request.url)
print(response.request.body)
print(response.request.headers)
Response objects have a .request property which is the PreparedRequest object that was sent.
An even better idea is to use the requests_toolbelt library, which can dump out both requests and responses as strings for you to print to the console. It handles all the tricky cases with files and encodings which the above solution does not handle well.
It's as easy as this:
import requests
from requests_toolbelt.utils import dump
resp = requests.get('https://httpbin.org/redirect/5')
data = dump.dump_all(resp)
print(data.decode('utf-8'))
Source: https://toolbelt.readthedocs.org/en/latest/dumputils.html
You can simply install it by typing:
pip install requests_toolbelt
Note: this answer is outdated. Newer versions of requests support getting the request content directly, as AntonioHerraizS's answer documents.
It's not possible to get the true raw content of the request out of requests, since it only deals with higher level objects, such as headers and method type. requests uses urllib3 to send requests, but urllib3 also doesn't deal with raw data - it uses httplib. Here's a representative stack trace of a request:
-> r= requests.get("http://google.com")
/usr/local/lib/python2.7/dist-packages/requests/api.py(55)get()
-> return request('get', url, **kwargs)
/usr/local/lib/python2.7/dist-packages/requests/api.py(44)request()
-> return session.request(method=method, url=url, **kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py(382)request()
-> resp = self.send(prep, **send_kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py(485)send()
-> r = adapter.send(request, **kwargs)
/usr/local/lib/python2.7/dist-packages/requests/adapters.py(324)send()
-> timeout=timeout
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py(478)urlopen()
-> body=body, headers=headers)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py(285)_make_request()
-> conn.request(method, url, **httplib_request_kw)
/usr/lib/python2.7/httplib.py(958)request()
-> self._send_request(method, url, body, headers)
Inside the httplib machinery, we can see HTTPConnection._send_request indirectly uses HTTPConnection._send_output, which finally creates the raw request and body (if it exists), and uses HTTPConnection.send to send them separately. send finally reaches the socket.
Since there are no hooks for doing what you want, as a last resort you can monkey-patch httplib to get the content. It's a fragile solution, and you may need to adapt it if httplib changes. If you intend to distribute software using this solution, you may want to consider bundling httplib instead of using the system's, which is easy, since it's a pure Python module.
So, without further ado, the solution:
import requests
import httplib
def patch_send():
    old_send = httplib.HTTPConnection.send
    def new_send(self, data):
        print data
        return old_send(self, data)  # return is not necessary, but never hurts, in case the library is changed
    httplib.HTTPConnection.send = new_send
patch_send()
requests.get("http://www.python.org")
which yields the output:
GET / HTTP/1.1
Host: www.python.org
Accept-Encoding: gzip, deflate, compress
Accept: */*
User-Agent: python-requests/2.1.0 CPython/2.7.3 Linux/3.2.0-23-generic-pae
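On Python 3, httplib became http.client, so the same monkey patch (with the same fragility caveats) looks roughly like this:
import http.client
import requests

def patch_send():
    old_send = http.client.HTTPConnection.send
    def new_send(self, data):
        print(data)                 # request line, headers and body arrive here as bytes
        return old_send(self, data)
    http.client.HTTPConnection.send = new_send

patch_send()
requests.get("http://www.python.org")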
requests supports so-called event hooks (as of 2.23 there's actually only a response hook). The hook can be used on a request to print the full request-response pair's data, including the effective URL, headers and bodies, like:
import textwrap
import requests
def print_roundtrip(response, *args, **kwargs):
    format_headers = lambda d: '\n'.join(f'{k}: {v}' for k, v in d.items())
    print(textwrap.dedent('''
        ---------------- request ----------------
        {req.method} {req.url}
        {reqhdrs}
        {req.body}
        ---------------- response ----------------
        {res.status_code} {res.reason} {res.url}
        {reshdrs}
        {res.text}
    ''').format(
        req=response.request,
        res=response,
        reqhdrs=format_headers(response.request.headers),
        reshdrs=format_headers(response.headers),
    ))
requests.get('https://httpbin.org/', hooks={'response': print_roundtrip})
Running it prints:
---------------- request ----------------
GET https://httpbin.org/
User-Agent: python-requests/2.23.0
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive
None
---------------- response ----------------
200 OK https://httpbin.org/
Date: Thu, 14 May 2020 17:16:13 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 9593
Connection: keep-alive
Server: gunicorn/19.9.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
<!DOCTYPE html>
<html lang="en">
...
</html>
You may want to change res.text to res.content if the response is binary.
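If you want this for every request without repeating the hooks argument, you can register the hook once on a Session:
s = requests.Session()
s.hooks['response'].append(print_roundtrip)
s.get('https://httpbin.org/')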
Here is code that does the same, but for the response headers:
import socket
def patch_requests():
    old_readline = socket._fileobject.readline
    if not hasattr(old_readline, 'patched'):
        def new_readline(self, size=-1):
            res = old_readline(self, size)
            print res,
            return res
        new_readline.patched = True
        socket._fileobject.readline = new_readline
patch_requests()
I spent a lot of time searching for this, so I'm leaving it here in case someone needs it.
A fork of @AntonioHerraizS's answer (the HTTP version is missing, as stated in the comments).
Use this code to get a string representing the raw HTTP packet without sending it:
import requests
def get_raw_request(request):
    request = request.prepare() if isinstance(request, requests.Request) else request
    headers = '\r\n'.join(f'{k}: {v}' for k, v in request.headers.items())
    body = '' if request.body is None else request.body.decode() if isinstance(request.body, bytes) else request.body
    return f'{request.method} {request.path_url} HTTP/1.1\r\n{headers}\r\n\r\n{body}'
headers = {'User-Agent': 'Test'}
request = requests.Request('POST', 'https://stackoverflow.com', headers=headers, json={"hello": "world"})
raw_request = get_raw_request(request)
print(raw_request)
Result:
POST / HTTP/1.1
User-Agent: Test
Content-Length: 18
Content-Type: application/json
{"hello": "world"}
💡 You can also print the request stored in the response object:
r = requests.get('https://stackoverflow.com')
raw_request = get_raw_request(r.request)
print(raw_request)
I use the following function to format requests. It's like @AntonioHerraizS's answer, except it will pretty-print JSON objects in the body as well, and it labels all parts of the request.
import functools
import json
import textwrap

format_json = functools.partial(json.dumps, indent=2, sort_keys=True)
indent = functools.partial(textwrap.indent, prefix=' ')

def format_prepared_request(req):
    """Pretty-format 'requests.PreparedRequest'

    Example:
        res = requests.post(...)
        print(format_prepared_request(res.request))

        req = requests.Request(...)
        req = req.prepare()
        print(format_prepared_request(req))
    """
    headers = '\n'.join(f'{k}: {v}' for k, v in req.headers.items())
    content_type = req.headers.get('Content-Type', '')
    if 'application/json' in content_type:
        try:
            body = format_json(json.loads(req.body))
        except json.JSONDecodeError:
            body = req.body
    else:
        body = req.body
    s = textwrap.dedent("""
        REQUEST
        =======
        endpoint: {method} {url}
        headers:
        {headers}
        body:
        {body}
        =======
    """).strip()
    s = s.format(
        method=req.method,
        url=req.url,
        headers=indent(headers),
        body=indent(body),
    )
    return s
And I have a similar function to format the response:
def format_response(resp):
    """Pretty-format 'requests.Response'"""
    headers = '\n'.join(f'{k}: {v}' for k, v in resp.headers.items())
    content_type = resp.headers.get('Content-Type', '')
    if 'application/json' in content_type:
        try:
            body = format_json(resp.json())
        except json.JSONDecodeError:
            body = resp.text
    else:
        body = resp.text
    s = textwrap.dedent("""
        RESPONSE
        ========
        status_code: {status_code}
        headers:
        {headers}
        body:
        {body}
        ========
    """).strip()
    s = s.format(
        status_code=resp.status_code,
        headers=indent(headers),
        body=indent(body),
    )
    return s
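A quick usage sketch (the httpbin URL is just for illustration):
import requests

resp = requests.post('https://httpbin.org/post', json={'hello': 'world'})
print(format_prepared_request(resp.request))
print(format_response(resp))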
test_print.py content:
import logging
import pytest
import requests
from requests_toolbelt.utils import dump
def print_raw_http(response):
    data = dump.dump_all(response, request_prefix=b'', response_prefix=b'')
    return '\n' * 2 + data.decode('utf-8')

@pytest.fixture
def logger():
    log = logging.getLogger()
    log.addHandler(logging.StreamHandler())
    log.setLevel(logging.DEBUG)
    return log

def test_print_response(logger):
    session = requests.Session()
    response = session.get('http://127.0.0.1:5000/')
    assert response.status_code == 300, logger.warning(print_raw_http(response))
hello.py content:
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
    return 'Hello, World!'
Run:
$ FLASK_APP=hello.py python -m flask run
$ python -m pytest test_print.py
Stdout:
------------------------------ Captured log call ------------------------------
DEBUG urllib3.connectionpool:connectionpool.py:225 Starting new HTTP connection (1): 127.0.0.1:5000
DEBUG urllib3.connectionpool:connectionpool.py:437 http://127.0.0.1:5000 "GET / HTTP/1.1" 200 13
WARNING root:test_print_raw_response.py:25
GET / HTTP/1.1
Host: 127.0.0.1:5000
User-Agent: python-requests/2.23.0
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 13
Server: Werkzeug/1.0.1 Python/3.6.8
Date: Thu, 24 Sep 2020 21:00:54 GMT
Hello, World!