Non-blocking CherryPy does not receive anything - python

I am trying to run CherryPy with cherrypy.engine.start instead of cherrypy.quickstart. That's because I want to run CherryPy in a non-blocking state, so that I can start and stop a web server within my functional tests with py.test.
This works fine:
cherrypy.quickstart(WebServerTest(None), config=testconf)
The response to a curl is:
curl --head http://127.0.0.1:1026/index
HTTP/1.1 200 OK
Date: Thu, 08 Aug 2013 12:54:37 GMT
Content-Length: 0
Content-Type: text/html;charset=utf-8
Server: CherryPy/3.2.2
But it's blocking the rest of the script to execute.
However this does not work:
testconf = path.join(path.dirname(__file__), 'webservertest.conf')
web_server = WebServerTest(None)
cherrypy.tree.mount(web_server, "", config=testconf)
cherrypy.engine.start()
time.sleep(60)
cherrypy.engine.stop()
The response to a curl is:
curl --head http://127.0.0.1:1026/index
curl: (7) couldn't connect to host
Adding cherrypy.engine.block() after cherrypy.engine.start() does not solve the problem.
So how can I make it work with cherrypy.engine.start()?
The webservertest.conf config file is:
[global]
server.socket_host = "127.0.0.1"
server.socket_port = 1026
server.thread_pool = 10

You also need to pass the conf to cherrypy.config.update(conf). This is for global config (including your server host and port), whereas the tree.mount call only sets config for that particular app. Read the source code of quickstart to see all the gory details.

Related

how to get html file into code in python?

How do I get an HTML file into Python code using a socket? I was able to implement this with the requests library, but it needs to be rewritten to use sockets, and I don't understand how. The working requests implementation is below, along with my (incorrect) attempt at a socket version based on Google searches. Please help me implement this using sockets.
import requests
reg_get = requests.get("https://stackoverflow.blog/")
text = reg_get.text
print(text)
import socket
request = b"GET / HTTP/1.1\nHost: https://stackoverflow.blog/\n\n"
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("https://stackoverflow.blog/", 80))
s.send(request)
result = s.recv(10000)
while (len(result) > 0):
    print(result)
    result = s.recv(10000)
After reading the comments, I rewrote the code as follows. However, I still never got the HTML; I only received the response headers. How do I get the HTML structure in Python?
import socket
import ssl
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
request = "GET /r/AccidentalRenaissance/comments/8ciibe/mr_fluffies_betrayal/ HTTP/1.1\r\nHost: www.reddit.com\r\n\r\n"
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
s = context.wrap_socket(sock, server_hostname = "www.reddit.com")
s.connect(("www.reddit.com", 443))
s.sendall(request.encode())
contest = s.recv(1024).decode()
s.close()
print(contest)
result
HTTP/1.1 200 OK
Connection: keep-alive
Cache-control: private, s-maxage=0, max-age=0, must-revalidate, no-store
Content-Type: text/html; charset=utf-8
X-Frame-Options: SAMEORIGIN
Accept-Ranges: bytes
Date: Sun, 03 Oct 2021 03:34:25 GMT
Via: 1.1 varnish
Vary: Accept-Encoding, Accept-Encoding
A URL is composed of a protocol, a hostname, an optional port, and an optional path. In the URL http://stackoverflow.blog/, http is the protocol, stackoverflow.blog is the hostname, and no port or path is provided. For http, the port defaults to 80 and the path defaults to /. When using sockets, first establish a connection to the host at the port using connect, then send an HTTP command to retrieve the page on the path. The command to retrieve the page is "GET /"; send it and then receive the response from the server.
Note that I used http instead of https, because https adds security setup and negotiation that occurs after the connect is done but before the "GET /" is sent. It is quite complicated, and a good reason to use Requests instead of trying to implement it yourself. If you don't want to use Requests but also don't want to go down to the level of sockets, take a look at urllib3.
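To illustrate the connect-then-GET sequence without the TLS complication, here is a self-contained sketch that serves the current directory over plain HTTP on localhost and fetches it with a raw socket; the local server is a stand-in for a real site, where you would use its hostname and port 80 instead:

```python
import socket
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Local stand-in server so the example needs no network access.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

# Host: takes the bare hostname, not a URL, and HTTP lines end with \r\n.
request = "GET / HTTP/1.1\r\nHost: {}\r\nConnection: close\r\n\r\n".format(host)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))       # connect() takes (hostname, port), not a URL
s.sendall(request.encode())

# Keep calling recv() until the server closes the connection; a single
# recv() only returns whatever bytes have arrived so far.
chunks = []
while True:
    data = s.recv(4096)
    if not data:
        break
    chunks.append(data)
s.close()
server.shutdown()

response = b"".join(chunks).decode("latin-1")
print(response.splitlines()[0])
```

The receive loop is the part the original attempt was missing: the headers arrive first, and the HTML body only shows up once you keep reading past the first recv().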

UWSGI crash when started as service and POST big file: ConnectionError

I'm facing an inexplicable problem with uwsgi: crashes happen irregularly when uploading a large file. Scenario:
Context
For a simple wsgi application, here a python Flask application in /home/bastien/Projects/test_upload/wsgi.py:
# -*- coding: utf-8 -*-
from flask import Flask, request
app = Flask(__name__)
@app.route('/', methods=['POST'])
def hello_world():
f = request.files['file'].read()
return 'Hello, World! {}'.format(len(f))
application = app
No crash when:
Use this configuration uwsgi file, /etc/uwsgi/apps-available/test_upload.ini:
[uwsgi]
plugins = python3
chdir = /home/bastien/Projects/test_upload/tracim
home = /home/bastien/Projects/test_upload/venv3.4
module = wsgi
callable = application
enable-threads = true
env = PYTHON_EGG_CACHE=/tmp
limit-post = 0
chmod-socket = 660
vacuum = true
Run uwsgi with:
uwsgi -i /etc/uwsgi/apps-available/test_upload.ini --http-socket :6543
And send a file (~262 MB) with httpie:
http -h -f POST :6543 'file#/home/bastien/Téléchargements/pycharm-professional-2017.2.3.tar.gz'
The HTTP request can be repeated; no crash.
Crash when:
Use this configuration uwsgi file, /etc/uwsgi/apps-available/test_upload.ini with symbolic link into /etc/uwsgi/apps-enabled:
[uwsgi]
plugins = python3
chdir = /home/bastien/Projects/test_upload/tracim
home = /home/bastien/Projects/test_upload/venv3.4
module = wsgi
callable = application
http-socket = :4321
enable-threads = true
env = PYTHON_EGG_CACHE=/tmp
limit-post = 0
chmod-socket = 660
vacuum = true
Note: the only difference is the http-socket = :4321 line.
Run uwsgi with service uwsgi start (on Debian 8.9) and send the file with:
http -h -f POST :4321 'file#/home/bastien/Téléchargements/pycharm-professional-2017.2.3.tar.gz'
This request will work once, sometimes twice:
HTTP/1.1 200 OK
Cache-Control: no-cache
Content-Length: 5188
Content-Type: text/html; charset=utf-8
Pragma: no-cache
But it finally crashes with:
http: error: ConnectionError: ('Connection aborted.', BadStatusLine("''",)) while doing POST request to URL: http://localhost:4321/
Note: any WSGI application can be used to reproduce this.
Note: no log is produced by uwsgi or the application about this "error".
Summary:
The error is not consistent, and the only difference is running uwsgi as a service with:
Debian 8.9
uwsgi 2.0.7-1+deb8u1 apt installed
Question
How can this difference arise? Where can I look to find out how uwsgi is started by the service command?
The problem was solved by using Debian 9 with the apt-packaged uwsgi version 2.0.14+20161117-3.

Requests Library Force Use of HTTP/1.1 On HTTPS Proxy CONNECT

I am having a problem with a misbehaving HTTP Proxy server. I have no control over the proxy server, unfortunately -- it's an 'enterprise' product from IBM. The proxy server is part of a service virtualization solution being leveraged for software testing.
The fundamental issue (I think*) is that the proxy server sends back HTTP/1.0 responses. I can get it to work fine from SOAP UI (a Java application) and curl from the command line, but Python refuses to connect. From what I can tell, Python is behaving correctly and the other two are not, as the server expects HTTP/1.1 requests (it wants Host headers, at the very least, to route the service request to a given stub).
Is there a way to get Requests, or the underlying urllib3, or the even lower-level httplib, to always use HTTP/1.1, even if the other end appears to be using 1.0?
Here is a sample program to reproduce the problem (unfortunately, it requires an IBM Rational Integration Tester installation with RTCP to really replicate it):
import http.client as http_client
http_client.HTTPConnection.debuglevel = 1
import logging
import requests
logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
requests_log = logging.getLogger("requests.packages.urllib3")
requests_log.setLevel(logging.DEBUG)
requests_log.propagate = True
requests.post("https://host:8443/axl",
              headers={"soapAction": '"CUCM:DB ver=9.1 updateSipTrunk"'},
              data='<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:tns="http://www.cisco.com/AXL/API/9.1"><soapenv:Header/><soapenv:Body><tns:updateSipTrunk><name>PLACEHOLDER</name><newName>PLACEHOLDER</newName><destinations><destination><addressIpv4>10.10.1.5</addressIpv4><sortOrder>1</sortOrder></destination></destinations></tns:updateSipTrunk></soapenv:Body></soapenv:Envelope>',
              verify=False)
(Proxy is configured via HTTPS_PROXY environment variable)
Debug output before the error, note the HTTP/1.0:
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): host.com
send: b'CONNECT host.com:8443 HTTP/1.0\r\n'
send: b'\r\n'
header: Host: host.com:8443
header: Proxy-agent: Green Hat HTTPS Proxy/1.0
The exact error text that occurs in RHEL 6 is:
requests.exceptions.SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:646)
Even though the Host header is shown here, it does NOT show up on the wire. I confirmed this with a tcpdump:
14:03:14.315049 IP sourcehost.53214 > desthost.com: Flags [P.], seq 0:32, ack 1, win 115, options [nop,nop,TS val 2743933964 ecr 4116114841], length 32
0x0000: 0000 0c07 ac00 0050 56b5 4044 0800 4500 .......PV.#D..E.
0x0010: 0054 3404 4000 4006 2ca0 0af8 3f15 0afb .T4.#.#.,...?...
0x0020: 84f8 cfde 0c7f a4f8 280a 4ebd b425 8018 ........(.N..%..
0x0030: 0073 da46 0000 0101 080a a38d 1c0c f556 .s.F...........V
0x0040: XXXX XXXX XXXX XXXX XXXX XXXX XXXX XXXX ..CONNECT.host
0x0050: XXXX XXXX XXXX XXXX XXXX XXXX XXXX XXXX xx:8443.HTTP/1.0
0x0060: 0d0a
When I curl it with verbose, this is what the output looks like:
* About to connect() to proxy proxy-host.com port 3199 (#0)
* Trying 10.**.**.** ... connected
* Connected to proxy-host.com (10.**.**.**) port 3199 (#0)
* Establish HTTP proxy tunnel to host.com:8443
> CONNECT host.com:8443 HTTP/1.1
> Host: host.com:8443
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Proxy-Connection: Keep-Alive
> soapAction: "CUCM:DB ver=9.1 updateSipTrunk"
>
< HTTP/1.0 200 OK
< Host: host.com:8443
< Proxy-agent: Green Hat HTTPS Proxy/1.0
<
* Proxy replied OK to CONNECT request
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /path/to/store/ca-bundle.crt
CApath: none
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
Truncated after this point. You can see the HTTP/1.0 response from the proxy after connecting. curl's tcpdump also clearly shows the Host header, as well as HTTP/1.1.
*I can't be entirely sure this is the fundamental issue, as I can't test it. I do see HTTP/1.0 responses, and can tell that my non-working Python code sends CONNECT HTTP/1.0 messages, while the working Java sends HTTP/1.1 messages, as does Curl. It's possible the problem is unrelated (although I find that unlikely) or that Python is misbehaving, and not Java/curl. I simply don't know enough to know for sure.
So, is there a way to force urllib3/requests to use HTTP v1.1 at all times?
httplib (which requests relies upon for HTTP(S) heavy lifting) always uses HTTP/1.0 with CONNECT:
Lib/httplib.py:788:
def _tunnel(self):
    self.send("CONNECT %s:%d HTTP/1.0\r\n" % (self._tunnel_host,
        self._tunnel_port))
    for header, value in self._tunnel_headers.iteritems():
        self.send("%s: %s\r\n" % (header, value))
    self.send("\r\n")
<...>
So you can't "force" it to use "HTTP/1.1" here other than by editing the subroutine.
This MAY be the problem if the proxy doesn't support HTTP/1.0 - in particular, 1.0 does not require a Host: header, and indeed, as you can see by comparing your log output with the code above, httplib does not send it. In practice, though, a proxy may expect it regardless. But if this were the case, you should've gotten an error from the proxy, or something, in response to CONNECT -- unless the proxy is so broken that it substitutes some default (or garbage) for Host:, returns 200 anyway, and tries to connect God-knows-where, at which point you're getting timeouts.
You can make httplib add the Host: header to CONNECT by adding it to _tunnel_headers (indirectly):
s = requests.Session()
proxy_url = os.environ['HTTPS_PROXY']
s.proxies["https"] = proxy_url
# Have to specify the proxy here, because the env variable is only
# detected by httplib code, while we need to trigger requests' proxy
# logic, which acts earlier. "https" means any https host. Since a
# Session persists cookies, it's meaningless to make requests to
# multiple hosts through it anyway.
pm = s.get_adapter("https://").proxy_manager_for(proxy_url)
pm.proxy_headers['Host'] = "host.com"
del pm, proxy_url
<...>
s.get('https://host.com')
If you do not depend on the requests library you may find the following snippet useful:
import http.client
conn = http.client.HTTPSConnection("proxy.domain.lu", 8080)
conn.set_tunnel("www.domain.org", 443, headers={'User-Agent': 'curl/7.56.0'})
conn.request("GET", "/api")
response = conn.getresponse()
print( response.read() )

Service and version displayed via nmap scan for simple python socket server

I've got a simple python socket server. Here's the code:
import socket
host = "0.0.0.0" # address to bind on.
port = 8081
def listen_serv():
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((host, port))
        s.listen(4)
        ...
        # messages back and forth between the server and client
        ...

if __name__ == "__main__":
    while True:
        listen_serv()
When I run the Python server locally and then scan with nmap localhost, I see the open port 8081 with the service blackice-icecap running on it. A quick Google search revealed that this is a firewall service that uses port 8081 for a service called ice-cap remote. If I change the port to 12000, for example, I get another service called cce4x.
A further scan with nmap localhost -sV returns the contents of the python script
1 service unrecognized despite returning data. If you know the service/version,
please submit the following fingerprint at https://nmap.org/cgi-bin/submit.cgi?new-service :
SF-Port8081-TCP:V=7.25BETA1%I=7%D=8/18%Time=57B58EE7%P=x86_64-pc-linux-gn
SF:u%r(NULL,1A4,"\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\
SF:*\*\*\*\*\*\*\*\*\*\*\*\*\n\*\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x
SF:20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
SF:x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\*\n\*\x20\x20\x20\x20\x20\x
SF:20Welcome\x20to\x20ScapeX\x20Mail\x20Server\x20\x20\x20\x20\*\n\*\x20\x
SF:20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\
SF:x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
SF:\x20\x20\*\n\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\
SF:*\*\*\*\*\*\*\*\*\*\*\*\nHere\x20is\x20a\x20quiz\x20to\x20test\x20your\
SF:x20knowledge\x20of\x20hacking\.\.\.\n\n\nAnswer\x20correctly\x20and\x20
SF:we\x20will\x20reward\x20you\x20with\x20a\x20shell\x20:-\)\x20\nQuestion
etc...
etc...
Is there a way I can customize the service and version descriptions that are displayed by nmap for my simple python server?
I found a solution: send the following line as the first message from the server.
c.send(b"HTTP/1.1 200 OK\r\nServer: Netscape-Enterprise/6.1\r\nDate: Fri, 19 Aug 2016 10:28:43 GMT\r\nContent-Type: text/html; charset=UTF-8\r\nConnection: close\r\nVary: Accept-Encoding\r\nContent-Length: 32092\r\n\r\n")
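As a self-contained sketch of the same idea: a one-shot server whose first write is a canned HTTP banner. The Server: value is whatever product name you want a scanner's version probe to report back; Netscape-Enterprise here just mirrors the line above.

```python
import socket

# Canned banner sent as the very first bytes on every connection, so a
# version probe sees a plausible HTTP server instead of the script's
# real protocol.
BANNER = (
    b"HTTP/1.1 200 OK\r\n"
    b"Server: Netscape-Enterprise/6.1\r\n"
    b"Content-Type: text/html; charset=UTF-8\r\n"
    b"Connection: close\r\n"
    b"\r\n"
)

def listen_serv(host="127.0.0.1", port=8081):
    # Accept a single client, send the banner first, then close.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((host, port))
    s.listen(4)
    c, addr = s.accept()
    c.sendall(BANNER)  # first message the client (or scanner) sees
    c.close()
    s.close()
```

Called in a loop as in the question's `while True: listen_serv()`, every new connection, including nmap's probes, receives this banner before anything else.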

Unable to view files in a browser with python http server

I am creating a Python HTTP server for a folder on a remote machine using the command:
python -m SimpleHTTPServer 9999
But I am unable to view a file in the browser using this. As soon as I click a link, the file gets downloaded. Is there a way to create the server so that I can view the files in my browser instead?
To make the browser open files inline instead of downloading them, you have to serve the files with the appropriate http Content headers.
What makes the content load inline in the browser tab, instead of as a download, is the header Content-Disposition: inline.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition
To add these headers, you can subclass the default SimpleHTTPRequestHandler with a custom one.
This is how it can be done using Python 3. You have to modify the imports and maybe some other parts if you have to use Python 2.
Put it in an executable script file, which you can call myserver.py, and run it like so: ./myserver.py 9999
#!/usr/bin/env python3
from http.server import SimpleHTTPRequestHandler, test
import argparse

class InlineHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        mimetype = self.guess_type(self.path)
        is_file = not self.path.endswith('/')
        # This part adds extra headers for some file types.
        if is_file and mimetype in ['text/plain', 'application/octet-stream']:
            self.send_header('Content-Type', 'text/plain')
            self.send_header('Content-Disposition', 'inline')
        super().end_headers()

# The following is based on the standard library implementation
# https://github.com/python/cpython/blob/3.6/Lib/http/server.py#L1195
if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--bind', '-b', default='', metavar='ADDRESS',
                        help='Specify alternate bind address '
                             '[default: all interfaces]')
    parser.add_argument('port', action='store',
                        default=8000, type=int, nargs='?',
                        help='Specify alternate port [default: 8000]')
    args = parser.parse_args()
    test(InlineHandler, port=args.port, bind=args.bind)
Serving files like foo.sh should work fine using the SimpleHTTPServer. Using curl as the client, I get a HTTP response like this:
$ curl -v http://localhost:9999/so.sh
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9999 (#0)
> GET /so.sh HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:9999
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Server: SimpleHTTP/0.6 Python/2.7.6
< Date: Thu, 02 Jun 2016 07:28:57 GMT
< Content-type: text/x-sh
< Content-Length: 11
< Last-Modified: Thu, 02 Jun 2016 07:27:41 GMT
<
echo "foo"
* Closing connection 0
You see the header line
Content-type: text/x-sh
which is correct for the file so.sh. The mapping from the file extension .sh to text/x-sh happens in /etc/mime.types on GNU/Linux systems. My browser
Chromium 50.0.2661.102
is able to display the file inline.
Summary: As long as you serve known files and your browser can display them inline, everything should work.
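The extension-to-type mapping can also be checked from Python itself via the stdlib mimetypes module, which consults system tables such as /etc/mime.types in addition to its built-in defaults:

```python
import mimetypes

# guess_type() returns a (type, encoding) pair. The exact type reported
# for .sh depends on the system tables -- application/x-sh from the
# built-in defaults, or text/x-sh from /etc/mime.types on many distros.
mimetype, encoding = mimetypes.guess_type("foo.sh")
print(mimetype)
```

This is the same lookup SimpleHTTPRequestHandler performs when it picks the Content-type header for a served file.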
