Close lingering connection - python

I'm using python-requests for a client tool. It makes repeated requests to servers at an interval, but if the server disconnects, the client fails with a socket error on its next request. The client appears to keep the connection open on its side instead of reconnecting. Since the requests can be hours apart, the server will almost certainly have dropped the connection by then.
Is there a way to override keep alive and force it to close? Is there something similar to:
with requests.get(url) as r:
    doStuff(r)
# r is cleaned up; the socket is closed.
that would force the connection to clean up after I'm done?
As written that doesn't work, because requests.Response doesn't have an __exit__ method.

How about this?
I haven't tested it; this is based only on the API docs:
s = requests.Session()
r = s.get(url)
doStuff(r)
s.close()
Or, to make sure that the close is always called, even if there's an exception, here's how to emulate the with-statement using a try/finally:
s = requests.Session()
try:
    r = s.get(url)
    doStuff(r)
finally:
    s.close()
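For what it's worth, requests.Session is also a context manager that closes itself on exit, so the same cleanup can be written more compactly (an untested sketch, reusing the url and doStuff placeholders from the question):
with requests.Session() as s:
    r = s.get(url)
    doStuff(r)
# leaving the block closes the session and its pooled connections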

Related

During what part of an HTTP request can the protocol get stuck?

import python_http_client # 3.2.1
client = python_http_client.Client(host='https://www.google.com')
while True:
    print(client.get())  # has no request timeout
With this piece of code, the HTTP client gets stuck and hangs if I disconnect and reconnect my internet connection. Is this a bug in the package I'm using, or is it something that's inherently possible with the HTTP protocol?
Looks like python_http_client.Client can take a timeout in seconds, like
client = python_http_client.Client(host='https://www.google.com', timeout=30)
Citation: https://github.com/sendgrid/python-http-client/blob/d99717c9e48f07a1a7d598e657838070704b4da7/python_http_client/client.py#L75
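As to the broader question: any client that does a blocking read on a socket with no timeout can hang forever if the peer silently disappears, so this is inherent to TCP rather than a bug in one package. A minimal standard-library sketch of the same protection (http.client is the Python 3 spelling of httplib; the host is taken from the question):
import http.client

conn = http.client.HTTPSConnection('www.google.com', timeout=30)
conn.request('GET', '/')
response = conn.getresponse()  # raises socket.timeout instead of hanging forever
print(response.status)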

httplib.HTTPConnection timeout: connect vs other blocking calls

I'm trying to use python's HTTPConnection to make some long running remote procedure calls (~30 seconds)
httplib.HTTPConnection(..., timeout=45)
solves this. However, it means that failed connection attempts will cause a painfully long wait. With a raw socket I can control the connect and read timeouts independently; can I do the same when using HTTPConnection?
I understand you are not waiting to send a request, but if you deal with the connection failure first, you won't have to wait. That is, leave the long timeout off and open the connection with a cheap dummy request first, something like the following:
conn = httplib.HTTPSConnection('host')  # 'host' is the server to call
headers = {'Connection': 'Keep-Alive'}
conn.request('HEAD', '/', headers=headers)  # dummy request to open a conn and keep it alive
dummy_response = conn.getresponse()
dummy_response.read()  # drain the response so the connection can be reused
if dummy_response.status in (200, 201):  # if ok, do work with conn
    conn.request('POST', '/rpc')  # the real, long-running request
    response = conn.getresponse()
else:
    pass  # reconnect: close the connection and restart the dummy request
Just a suggestion, you can always bump the question or have it answered again.
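Alternatively, a minimal sketch of controlling the two timeouts independently: pass a short timeout so the connect fails fast, then lengthen the timeout on the underlying socket before the slow call. This relies on httplib exposing the socket as conn.sock after connect(), which is an implementation detail; 'host' and '/rpc' are placeholders:
conn = httplib.HTTPConnection('host', timeout=5)  # short timeout covers the connect
conn.connect()                  # an unreachable host now fails within ~5 seconds
conn.sock.settimeout(45)        # longer timeout for the slow RPC reads
conn.request('POST', '/rpc')
response = conn.getresponse()   # may legitimately take ~30 seconds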

Python requests module connection timeout

I'm looking at http://docs.python-requests.org/en/latest/ and "Connection Timeouts" is listed as a feature. However, when I read further, it states
timeout is not a time limit on the entire response download; rather, an exception is raised if the server has not issued a response for timeout seconds (more precisely, if no bytes have been received on the underlying socket for timeout seconds).
That doesn't sound like a description of a connection timeout. What I'm seeing is that the connection succeeds, the client uploads a big file, and then waits for a response. The response takes a while to arrive, and the request times out.
How can I set a connection timeout but still wait for slow responses once the connection has succeeded? Thanks a lot.
The requests (for humans) library has connection timeouts, see
- https://requests.kennethreitz.org/en/master/user/advanced/#timeouts
r = requests.get('https://github.com', timeout=(3.05, 27))
# e.g. explicitly
conn_timeout = 6
read_timeout = 60
timeouts = (conn_timeout, read_timeout)
r = requests.get('https://github.com', timeout=timeouts)
The tuple is (connect timeout, read timeout): the first value bounds the connection attempt and the second bounds each wait for data once connected.
The timeout value is applied to both the socket connect stage and the response reading stage: it bounds each wait for the socket to connect or for data to arrive, not the transfer as a whole. The one exception is streamed requests; if you set stream=True, the timeout cannot be applied to the reading portion.
If you need an overall timeout, use another technique, such as signals or eventlet timeouts; see: Timeout for python requests.get entire response
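As a concrete illustration of that route, here is an untested sketch of an overall deadline using a worker thread; get_with_deadline is a made-up helper, and the in-flight request is not aborted when the deadline expires, the caller just stops waiting for it:
import concurrent.futures
import requests

pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def get_with_deadline(url, deadline=10):
    # Bound the entire request, not just individual connect/read stalls.
    future = pool.submit(requests.get, url, timeout=(3.05, 27))
    return future.result(timeout=deadline)  # raises concurrent.futures.TimeoutError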

How do I send and receive HTTP POST requests in Python?

I have these two Python scripts I'm using to attempt to work out how to send and receive POST requests in Python:
The Client:
import httplib
conn = httplib.HTTPConnection("localhost:8000")
conn.request("POST", "/testurl")
conn.send("clientdata")
response = conn.getresponse()
conn.close()
print(response.read())
The Server:
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

ADDR = "localhost"
PORT = 8000

class RequestHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        print(self.path)
        print(self.rfile.read())
        self.send_response(200, "OK")
        self.end_headers()
        self.wfile.write("serverdata")

httpd = HTTPServer((ADDR, PORT), RequestHandler)
httpd.serve_forever()
The problem is that the server hangs on self.rfile.read() until conn.close() has been called on the client, but if conn.close() is called the client can no longer receive a response from the server. This creates a situation where one can either get a response from the server or read the POST data, but never both. I assume there is something I'm missing here that will fix this problem.
Additional information:
conn.getresponse() causes the client to block until the response is received from the server, and the response doesn't appear to arrive until the handler on the server has finished executing.
There are a couple of issues with your original example. The first is that if you use the request() method, you should include the message body you want to send in that call rather than calling send() separately; per the documentation, send() belongs to the step-by-step alternative to request():
As an alternative to using the request() method described above, you
can also send your request step by step, by using the four functions
below.
You just want conn.request("POST", "/testurl", "clientdata").
The second issue is the way you're trying to read what's sent to the server. self.rfile.read() attempts to read the entire input stream coming from the client, which means it will block until the stream is closed, and the stream won't be closed until the connection is closed. What you want to do is read exactly as many bytes as the client sent, and no more. How do you know how many bytes that is? The headers, of course:
length = int(self.headers['Content-length'])
print(self.rfile.read(length))
I do highly recommend the python-requests library if you're going to do more than very basic tests. I also recommend using a better HTTP framework/server than BaseHTTPServer for more than very basic tests (flask, bottle, tornado, etc.).
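Putting both fixes together, an untested sketch of the corrected pair, using the same Python 2 modules and address as the question:
The client:
import httplib

conn = httplib.HTTPConnection("localhost:8000")
conn.request("POST", "/testurl", "clientdata")  # body is passed to request()
response = conn.getresponse()  # read the response before closing
print(response.read())
conn.close()
The server:
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

class RequestHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers['Content-length'])
        print(self.path)
        print(self.rfile.read(length))  # read exactly the body; don't wait for EOF
        self.send_response(200, "OK")
        self.end_headers()
        self.wfile.write("serverdata")

HTTPServer(("localhost", 8000), RequestHandler).serve_forever()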
Long since answered, but this came up during a search, so here is another piece of the answer: to prevent the server from keeping the stream open (resulting in the response never being sent), use self.rfile.read1() instead of self.rfile.read(); read1() returns at most one buffered chunk rather than blocking until the client closes the stream.

Python: How to shutdown a threaded HTTP server with persistent connections (how to kill readline() from another thread)?

I'm using python2.6 with HTTPServer and the ThreadingMixIn, which will handle each request in a separate thread. I'm also using HTTP 1.1 persistent connections ('Connection: keep-alive'), so neither the server nor the client will close a connection after a request.
Here's roughly what the request handler looks like
request, client_address = sock.accept()
rfile = request.makefile('rb', rbufsize)
wfile = request.makefile('wb', wbufsize)

global server_stopping
while not server_stopping:
    request_line = rfile.readline()  # 'GET / HTTP/1.1'
    # etc - parse the full request, write to wfile with server response, etc

wfile.close()
rfile.close()
request.close()
The problem is that if I stop the server, there will still be a few threads waiting on rfile.readline().
I would put a select([rfile, closefile], [], []) above the readline() and write to closefile when I want to shut down the server, but I don't think it would work on Windows, because there select() only works with sockets.
My other idea is to keep track of all the running requests and call rfile.close() on each, but then I get broken-pipe errors.
Ideas?
You're almost there: the correct approach is to call rfile.close(), catch the resulting broken-pipe errors in the handler threads, and exit your loop when that happens.
If you set daemon_threads to true in your HTTPServer subclass, the activity of the threads will not prevent the server from exiting.
class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    daemon_threads = True
You could work around the Windows problem by making closefile a socket, too; after all, since it's presumably something your main thread opens, it's up to you whether to open it as a socket or a file ;-)
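A rough sketch of that socket-based wake-up (socket.socketpair is POSIX-only on Python 2, so on Windows you would emulate it with a localhost listener pair; note too that bytes already buffered in rfile are invisible to select()):
import select
import socket

wake_recv, wake_send = socket.socketpair()  # main thread writes to wake_send

def handle_connection(request, rfile, wfile):
    while True:
        ready, _, _ = select.select([request, wake_recv], [], [])
        if wake_recv in ready:
            break  # server is shutting down
        request_line = rfile.readline()  # data is waiting, so this returns promptly
        # ... parse the request and write the response to wfile as before ...

# in the main thread, to stop the server:
# wake_send.send('x')  # stays readable, so it wakes every blocked handler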
