No data received from wrapped SSLSocket in Python

I am writing a TLS server in Python. I accept a connection from a client, wrap the socket, and then try to read data, without success.
My server inherits from socketserver.TCPServer. My socket is non-blocking; I overrode the server_bind() method. The socket is wrapped, but the handshake has to be done separately because of the exception that is raised otherwise:
def get_request(self):
    cli_sock, cli_addr = self.socket.accept()
    ssl_sock = ssl.wrap_socket(cli_sock,
                               server_side=True,
                               certfile='/path/to/server.crt',
                               keyfile='/path/to/server.key',
                               ssl_version=ssl.PROTOCOL_TLSv1,
                               do_handshake_on_connect=False)
    try:
        ssl_sock.do_handshake()
    except ssl.SSLError as e:
        if e.args[1].find("SSLV3_ALERT_CERTIFICATE_UNKNOWN") == -1:
            raise
    return ssl_sock, cli_addr
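For context: on a non-blocking socket, do_handshake() raises ssl.SSLWantReadError or ssl.SSLWantWriteError while the TLS exchange is still in flight. A minimal sketch of the usual select-based retry loop (illustrative only, not the asker's code):

import select
import ssl

def complete_handshake(ssl_sock):
    # Retry until the handshake finishes; on a non-blocking socket
    # do_handshake() raises Want*Error while the exchange is pending.
    while True:
        try:
            ssl_sock.do_handshake()
            break
        except ssl.SSLWantReadError:
            select.select([ssl_sock], [], [])   # wait until readable
        except ssl.SSLWantWriteError:
            select.select([], [ssl_sock], [])   # wait until writable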
To handle received data, I created a class that inherits from socketserver.StreamRequestHandler (I also tried BaseRequestHandler, but with no luck; I ended up with the same problem: no data received).
When I print self.connection in the handle() method, I can see that it is of type SSLSocket, its fd is set (to some positive value), and both the local and remote IP and port have the expected values, so I assume the client is successfully connected to my server and the connection is open. However, when I try to read data:
self.connection.read(1)
There should be more bytes received; I tried with 1, 10, and 1024, but it makes no difference: the read() method always returns nothing. I tried to check its len or print it, but there is nothing to be printed.
I was monitoring packets using Wireshark, and I can see that the data I am expecting to read arrives at my server (I checked that the IP and port are the same for self.connection and in Wireshark), which sends an ACK and then receives FIN+ACK from the client. So it looks like the data arrives and is handled properly at a low level, but somehow the read() method is not able to access it.
If I remove the wrap_socket() call, then I am able to read data, but that is some data the client is sending for authentication.
I am using Python 3.4 on a Mac machine.
How is it possible that I can see in Wireshark that packets are arriving, but I am not able to read the data in my code?
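For context, a read on a non-blocking SSLSocket that has no decrypted data available yet raises ssl.SSLWantReadError rather than returning bytes. A sketch of a select-based read loop (illustrative only, not from the question):

import select
import ssl

def read_exactly(ssl_sock, n):
    # Read exactly n bytes from a non-blocking SSLSocket,
    # waiting with select whenever no decrypted data is ready.
    buf = b''
    while len(buf) < n:
        try:
            chunk = ssl_sock.recv(n - len(buf))
            if not chunk:        # peer closed the connection
                break
            buf += chunk
        except ssl.SSLWantReadError:
            select.select([ssl_sock], [], [])
    return buf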

Related

Checking ip:port is open on remote host

I have a list of servers and ip:ports (external addresses), and I need to check whether each server can connect to those addresses.
I loop through the file, try to open an SSH tunnel, and connect as below:
import socket
import time

import sshtunnel

tunnel = sshtunnel.SSHTunnelForwarder(
    ssh_host=host,
    ssh_port=22,
    ssh_username=ssh_username,
    ssh_pkey=privkey,
    remote_bind_address=(addr_ip, int(addr_port)),
    local_bind_address=('0.0.0.0', 10022)
    # ,logger=sshtunnel.create_logger(loglevel=10)
)
tunnel.start()
# use socket
try:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    res = s.connect(('localhost', 10022))
    print(res)
    # s.connect((addr_ip, int(addr_port)))
    s.close()
except socket.error as err:
    print('socket err:')
    print(err)
finally:
    s.close()

time.sleep(2)
tunnel.stop()
When I do this, though, the response is always 0 (i.e. the socket can connect to the local bind) even if the remote is incorrect; however, SSHTunnelForwarder throws:
ERROR | Secsh channel 0 open FAILED: Network is unreachable: Connect failed
ERROR | Could not establish connection from ('127.0.0.1', 10022) to remote side of the tunnel
How do I make my socket code check whether the remote_bind_address is available?
I tried using telnetlib, but I get a similar issue.
The code is effectively the same, with the socket block replaced with:
tn = telnetlib.Telnet()
tn.open('localhost', 10022)
tn.close()
I'm relatively new to all this, so I am still learning. If there is a better way to achieve what I need to do, please let me know.
Thanks
Set the attribute skip_tunnel_checkup to False to enable checking of the remote side availability (it's disabled by default for backwards compatibility):
tunnel.skip_tunnel_checkup = False
Adding this before starting the tunnel checks that the remote side is up on start, and throws an exception that can be handled.
I removed my socket code.
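A minimal sketch of that pattern, assuming sshtunnel's BaseSSHTunnelForwarderError is the exception to catch (hedged; not part of the original answer):

import sshtunnel

tunnel = sshtunnel.SSHTunnelForwarder(
    # ... same arguments as above ...
)
tunnel.skip_tunnel_checkup = False
try:
    tunnel.start()   # raises if the remote side is not reachable
except sshtunnel.BaseSSHTunnelForwarderError as err:
    print('remote side not reachable:', err)
else:
    tunnel.stop()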
I haven't tried that, but there's the tunnel_is_up attribute of the SSH tunnel class, which according to the documentation:
Describe whether or not the other side of the tunnel was reported to be up (and we must close it) or not (skip shutting down that tunnel)
Example of its content (it's a dictionary):
{('127.0.0.1', 55550): True,   # this tunnel is up
 ('127.0.0.1', 55551): False}  # this one isn't
So you shouldn't need to make an explicit connection yourself.
Note: you may need to set the attribute skip_tunnel_checkup to False (it is True by default for backwards compatibility) before setting up the tunnel, otherwise tunnel_is_up may always report True:
When skip_tunnel_checkup is disabled or the local bind is a UNIX socket, the value will always be True
So the code may look like:
tunnel = sshtunnel.SSHTunnelForwarder(
    # ...
)
tunnel.skip_tunnel_checkup = False
tunnel.start()
# tunnel.tunnel_is_up should be populated with actual tunnel status(es) now
In the code you posted, you're setting up a tunnel and then just opening a socket to the local endpoint of the tunnel, which is apparently open no matter what state the tunnel is in, so it always connects successfully.
Another approach would be to actually try to establish an SSH connection through the tunnel, but that's the paramiko.SSHClient alternative you're mentioning in a comment, I guess.
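Putting the pieces above together, a short status check might look like this (a sketch, assuming tunnel_is_up behaves as documented):

tunnel.skip_tunnel_checkup = False
tunnel.start()
for (ip, port), is_up in tunnel.tunnel_is_up.items():
    print(ip, port, 'up' if is_up else 'down')
tunnel.stop()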

Python 3.8 TLSv1.3 socket close result in ConnectionResetError or ConnectionAbortedError

Client side:
import socket
import ssl

data = b'\xff' * 1000000
ssock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
# context is created by ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ssock = context.wrap_socket(ssock, server_hostname='xd1337sv')
ssock.connect((SERVERADDR, SERVERPORT))
ssock.sendall(data)
# time.sleep(3)
ssock.close()
If I just use a regular non-SSL socket, everything works correctly, with the server receiving the exact amount of data. If I use a TLS socket, the behavior then depends on the version.
If I run either the server or the client on Python 3.6, and therefore TLSv1.2 is used, there's no problem.
The problem arises only when TLSv1.3 is used, and it depends on the size of the data and on how soon the client's ssock.close() line is executed.
If I put the right amount of time.sleep() before ssock.close(), depending on the size of the data, then I get no error. Otherwise, the server gets ConnectionResetError [WinError 10054] An existing connection was forcibly closed by the remote host and receives only part of the data, or throws ConnectionAbortedError [WinError 10053] An established connection was aborted by the software in your host machine and receives no data.
I'm testing both the server and the client on my local machine, with local address 192.168.1.2.
The difference is caused by TLS 1.3 sending a session ticket after the TLS handshake, while with previous TLS versions the session ticket is sent inside the TLS handshake. Thus, with TLS 1.3, data from the server (the session ticket) will arrive after ssock.connect(...) is done. Since your application does not read any data after the connect, it closes the socket while unread data is still inside the socket buffer of the underlying TCP socket. This causes an RST to be sent to the server, which in turn causes the connection reset error there.
This is a known problem with applications that never attempt to read from the server. If the application expected a response from the server and used recv to get it, this would implicitly also read the session ticket.
To fix this situation when you don't expect the server to return any application data, do a proper SSL shutdown of the socket before closing it. Since this reads the server's SSL shutdown message, it will also implicitly read the session ticket sent before by the server.
try:
    ssock = ssock.unwrap()   # proper TLS shutdown; also reads the session ticket
except:
    pass
ssock.close()
For more information see also this issue and this documentation.
I was getting a similar problem when the application was running through gunicorn with certificates. A JSON decode error randomly came up on the client, i.e. the response was empty. The only difference was that TLS 1.2 was used.
The solution was simple: I deployed the application on uWSGI and the problem went away.

Python Sockets: How to detect when client has disconnected ungracefully? (e.g. WiFi disconnect)

I'm writing a pair of client/server scripts where it is important to maintain the connection as well as to detect quickly when the client disconnects. The server normally only sends data, so to test whether the client has disconnected normally, I set the socket to timeout mode and check what it returns:
try:  # check if client disconnected
    c.settimeout(1)
    if not c.recv(1024):
        print("## Socket disconnected! ##")
        c.settimeout(None)
        closeConnection(c)
        return
except Exception as e:
    print(e)
    c.settimeout(None)
This works instantly if I close the client. However, if I disconnect the WiFi on the client machine, the recv on the server doesn't return anything. It just times out, as it would if the connection were up but nothing was being sent.
I've tried using send() to send empty messages to the client as a way to poll. When I do this, the operation succeeds and returns 0 regardless of whether the client has disconnected.
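One common technique for detecting a dead peer at the TCP level is keepalive, so the kernel itself probes the peer and eventually fails recv() on a dead link. A sketch (the TCP_KEEP* options are Linux-specific, and the values are illustrative only):

import socket

# c is the connected socket from the snippet above
c.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)    # enable keepalive
c.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 5)   # idle seconds before probing
c.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 1)  # seconds between probes
c.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)    # failed probes before reset

Once the probes fail, a blocking recv() raises an error instead of timing out indefinitely.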

How do I send and receive HTTP POST requests in Python?

I have these two Python scripts I'm using to attempt to work out how to send and receive POST requests in Python:
The Client:
import httplib
conn = httplib.HTTPConnection("localhost:8000")
conn.request("POST", "/testurl")
conn.send("clientdata")
response = conn.getresponse()
conn.close()
print(response.read())
The Server:
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

ADDR = "localhost"
PORT = 8000

class RequestHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        print(self.path)
        print(self.rfile.read())
        self.send_response(200, "OK")
        self.end_headers()
        self.wfile.write("serverdata")

httpd = HTTPServer((ADDR, PORT), RequestHandler)
httpd.serve_forever()
The problem is that the server hangs on self.rfile.read() until conn.close() has been called on the client, but if conn.close() is called on the client, the client cannot receive a response from the server. This creates a situation where one can either get a response from the server or read the POST data, but never both. I assume there is something I'm missing here that will fix this problem.
Additional information:
conn.getresponse() causes the client to hang until the response is received from the server. The response doesn't appear to be received until the function on the server has finished execution.
There are a couple of issues with your original example. The first is that if you use the request method, you should include the message body you want to send in that call, rather than calling send separately. The documentation notes send() can be used as an alternative to request:
As an alternative to using the request() method described above, you
can also send your request step by step, by using the four functions
below.
You just want conn.request("POST", "/testurl", "clientdata").
The second issue is the way you're trying to read what's sent to the server. self.rfile.read() attempts to read the entire input stream coming from the client, which means it will block until the stream is closed; the stream won't be closed until the connection is closed. What you want to do is read exactly as many bytes as were sent from the client, and no more. How do you know how many bytes that is? The headers, of course:
length = int(self.headers['Content-length'])
print(self.rfile.read(length))
I do highly recommend the python-requests library if you're going to do more than very basic tests. I also recommend using a better HTTP framework/server than BaseHTTPServer for more than very basic tests (flask, bottle, tornado, etc.).
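For instance, the whole client above reduces to a couple of lines with requests (a sketch; the URL matches the example server):

import requests

# POST the same body to the example server and print its reply
response = requests.post('http://localhost:8000/testurl', data='clientdata')
print(response.text)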
Answered long ago, but this came up during a search, so I'll bring another piece of the answer: to prevent the server from keeping the stream open (resulting in the response never being sent), you should use self.rfile.read1() instead of self.rfile.read().

Slow Python HTTP server on localhost

I am experiencing some performance problems when creating a very simple Python HTTP server. The key issue is that performance varies depending on which client I use to access it, even though the server and all clients are run on the local machine. For instance, a GET request issued from a Python script (urllib2.urlopen('http://localhost/').read()) takes just over a second to complete, which seems slow considering that the server is under no load. Running the GET request from Excel using MSXML2.ServerXMLHTTP also feels slow. However, requesting the data from Google Chrome or from RCurl, the curl add-in for R, yields an essentially instantaneous response, which is what I would expect.
Adding further to my confusion is that I do not experience any performance problems for any client when I am on my computer at work (the performance problems are on my home computer). Both systems run Python 2.6, although the work computer runs Windows XP instead of Windows 7.
Below is my very simple server example, which simply returns 'Hello world' for any GET request.
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        print("Just received a GET request")
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        self.end_headers()
        self.wfile.write('Hello world')
        return

    def log_request(self, code=None, size=None):
        print('Request')

    def log_message(self, format, *args):
        print('Message')

if __name__ == "__main__":
    try:
        server = HTTPServer(('localhost', 80), MyHandler)
        print('Started http server')
        server.serve_forever()
    except KeyboardInterrupt:
        print('^C received, shutting down server')
        server.socket.close()
Note that in MyHandler I override the log_request() and log_message() functions. The reason is that I read that a fully qualified domain name lookup performed by one of these functions might be a reason for a slow server. Unfortunately, setting them to just print a static message did not solve my problem.
Also, notice that I have put a print() statement as the first line of the do_GET() routine in MyHandler. The slowness occurs before this message is printed, meaning that none of the stuff that comes after it is causing the delay.
The request handler issues an inverse name lookup in order to display the client name in the log. My Windows 7 issues a first DNS lookup that fails with no delay, followed by 2 successive NetBIOS name queries to the HTTP client, and each one runs into a 2-second timeout, i.e. 4 seconds of delay!
Have a look at https://bugs.python.org/issue6085
Another fix that worked for me is to override BaseHTTPRequestHandler.address_string() in my request handler with a version that does not perform the name lookup:
def address_string(self):
    host, port = self.client_address[:2]
    # return socket.getfqdn(host)
    return host
Philippe
This does not sound like a problem with the code. A nifty way of troubleshooting an HTTP server is to connect to it with telnet on port 80. Then you can type something like:
GET /index.html HTTP/1.1
host: www.blah.com
<enter> <enter>
and observe the server's response. See if you get a delay using this approach.
You may also want to turn off any firewalls to see if they are responsible for the slowdown.
Try replacing localhost with 127.0.0.1. If that solves the problem, then that is a clue that the FQDN lookup may indeed be the possible cause.
Replacing localhost with 127.0.0.1 can solve the problem:)
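In the server example above, that is a one-line change (a sketch, keeping the original port):

server = HTTPServer(('127.0.0.1', 80), MyHandler)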
