httplib.HTTPConnection timeout: connect vs other blocking calls - python

I'm trying to use Python's HTTPConnection to make some long-running remote procedure calls (~30 seconds). Passing
httplib.HTTPConnection(..., timeout=45)
solves this. However, it means that failed connection attempts cause a painfully long wait. I can independently control the read and connect timeouts for a socket -- can I do the same when using HTTPConnection?

I understand you don't want to be left waiting on a failed connection. If you deal with the connection failure first, you won't have to wait: leave the long timeout off when opening the connection, make a cheap dummy request to confirm it is up, and only then do the real work on the same connection. Something like the following (placeholder host and paths):
import httplib

conn = httplib.HTTPSConnection('example.com')     # placeholder host; no long timeout here
headers = {'Connection': 'Keep-Alive'}
conn.request("HEAD", "/", headers=headers)         # dummy request to open the connection and keep it alive
dummy_response = conn.getresponse()
dummy_response.read()                              # drain it so the connection can be reused
if dummy_response.status in (200, 201):            # if OK, do work on the open connection
    conn.request("POST", "/rpc")                   # the real, long-running request
    response = conn.getresponse()
else:
    pass                                           # reconnect: restart the dummy request
Just a suggestion, you can always bump the question or have it answered again.
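To answer the connect-vs-read split directly: the timeout passed to the HTTPConnection constructor is applied when the socket is created for connect() and then stays in effect for later reads. A minimal sketch of separating the two (placeholder host and path, not from the answer above) is to connect with a short timeout and then raise the per-read timeout on the underlying socket:
import httplib

conn = httplib.HTTPConnection('example.com', timeout=5)   # 5 s budget for connect()
conn.connect()                   # fails fast if the host is unreachable
conn.sock.settimeout(45)         # now allow 45 s per read for the slow RPC
conn.request("POST", "/rpc")     # placeholder method and path
response = conn.getresponse()
print(response.status)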


What causes python socket timeout

I have a simple Python script running on Linux (Raspbian) that connects to a server using urlopen (which ultimately uses a Python socket):
import urllib.request

req = urllib.request.Request('myServer', data=params, headers=head)
try:
    response = urllib.request.urlopen(req, timeout=20)
except Exception as e:    # socket.timeout / URLError end up here
    print(e)
From the socket timeout docs:
timeout=None acts in blocking mode; this is not what I want, as it will hang forever if I have no internet connection.
timeout=0 acts in non-blocking mode, but with it I always get error 115 (Operation now in progress).
timeout=20 acts in timeout mode, blocking for up to 20 s and giving up if the connection cannot be created in that time.
My questions:
Why does the non-blocking mode always fail? (It may be a misconception, but I thought it should work sometimes rather than fail every time.)
What sometimes causes the 20 s timeout to occur? (In about 80% of cases the urlopen completes in 1-2 s; the other 20% hit the timeout.)
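A minimal sketch of why the non-blocking case behaves this way (placeholder host, not from the original question): a non-blocking connect() returns immediately with EINPROGRESS because the TCP handshake has only been started, while a timeout-mode connect() blocks until the handshake completes or the timeout expires.
import errno
import socket

HOST, PORT = "example.com", 80   # placeholder endpoint

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setblocking(False)             # equivalent to timeout=0
err = s.connect_ex((HOST, PORT))
print(errno.errorcode.get(err))  # typically 'EINPROGRESS' (error 115): the handshake was started, not finished
s.close()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(20)                 # equivalent to timeout=20
s.connect((HOST, PORT))          # raises socket.timeout if the handshake does not complete in 20 s
s.close()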

Python requests module connection timeout

I'm looking at http://docs.python-requests.org/en/latest/ and "Connection Timeouts" is listed as a feature. However, when I read further, it states
timeout is not a time limit on the entire response download; rather, an exception is raised if the server has not issued a response for timeout seconds (more precisely, if no bytes have been received on the underlying socket for timeout seconds).
That doesn't sound like a description of a connection timeout. What I'm seeing is that the connection succeeds, a large file is uploaded, and the client then waits for a response; the response takes a while, and the request times out.
How can I set a connection timeout but still wait for slow responses once the connection has succeeded? Thanks a lot.
The requests (for humans) library has connection timeouts, see
- https://requests.kennethreitz.org/en/master/user/advanced/#timeouts
import requests

r = requests.get('https://github.com', timeout=(3.05, 27))
# or, naming the values explicitly
conn_timeout = 6
read_timeout = 60
timeouts = (conn_timeout, read_timeout)
r = requests.get('https://github.com', timeout=timeouts)
The tuple is interpreted as (connect timeout, read timeout), as the advanced-usage docs linked above describe.
The timeout value is used for both the socket connect stage and the response reading stage; in other words, it only bounds waiting for the socket to connect or for data to be received. The one exception is streamed requests: if you set stream=True, the timeout cannot be applied to the reading portion.
If you need an overall (wall-clock) timeout, use another technique, such as interrupts or eventlets; see "Timeout for python requests.get entire response".
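As one hedged illustration of the interrupt approach (Unix-only, main thread only, with an assumed 10-second budget), SIGALRM can bound the whole call regardless of how the time is split between connecting, waiting, and downloading:
import signal
import requests

class OverallTimeout(Exception):
    pass

def _raise_timeout(signum, frame):
    raise OverallTimeout("request exceeded the wall-clock budget")

signal.signal(signal.SIGALRM, _raise_timeout)
signal.alarm(10)                  # overall budget in seconds
try:
    r = requests.get('https://github.com', timeout=(3.05, 27))
finally:
    signal.alarm(0)               # always cancel the pending alarm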

How to set the redis timeout waiting for the response with pipeline in redis-py?

In the code below, is the pipeline timeout 2 seconds?
import redis

client = redis.StrictRedis(host=host, port=port, db=0, socket_timeout=2)
pipe = client.pipeline(transaction=False)
for name in namelist:
    key = "%s-%s-%s-%s" % (key_sub1, key_sub2, name, key_sub3)
    pipe.smembers(key)
pipe.execute()
In Redis, the sets behind these keys have a lot of members. The code above always fails with the error below:
error Error while reading from socket: ('timed out',)
If I raise the socket_timeout value to 10, it succeeds.
Doesn't the "socket_timeout" parameter mean a connection timeout? It looks like it acts as a response timeout instead.
The redis-py version is 2.6.7.
I asked andymccurdy, the author of redis-py, on GitHub, and his answer is below:
If you're using redis-py<=2.9.1, socket_timeout is both the timeout
for socket connection and the timeout for reading/writing to the
socket. I pushed a change recently (465e74d) that introduces a new
option, socket_connect_timeout. This allows you to specify different
timeout values for socket.connect() differently from
socket.send/socket.recv(). This change will be included in 2.10 which
is set to be released later this week.
Since the redis-py version here is 2.6.7, socket_timeout is both the timeout for the socket connection and the timeout for reading/writing to the socket.
It is not a connection timeout; it is an operation timeout. Internally, the socket_timeout argument of StrictRedis() is passed to the socket's settimeout method.
See here for details: https://docs.python.org/2/library/socket.html#socket.socket.settimeout
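For redis-py 2.10 and later, the change quoted above lets the two timeouts be set separately. A minimal sketch (placeholder host, port, and key) might look like:
import redis

client = redis.StrictRedis(
    host='localhost',
    port=6379,
    db=0,
    socket_connect_timeout=2,    # budget for socket.connect()
    socket_timeout=10,           # budget for each send()/recv()
)
pipe = client.pipeline(transaction=False)
pipe.smembers('some-large-set')  # placeholder key
members = pipe.execute()[0]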

How do I send and receive HTTP POST requests in Python?

I have these two Python scripts I'm using to attempt to work out how to send and receive POST requests in Python:
The Client:
import httplib
conn = httplib.HTTPConnection("localhost:8000")
conn.request("POST", "/testurl")
conn.send("clientdata")
response = conn.getresponse()
conn.close()
print(response.read())
The Server:
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

ADDR = "localhost"
PORT = 8000

class RequestHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        print(self.path)
        print(self.rfile.read())
        self.send_response(200, "OK")
        self.end_headers()
        self.wfile.write("serverdata")

httpd = HTTPServer((ADDR, PORT), RequestHandler)
httpd.serve_forever()
The problem is that the server hangs on self.rfile.read() until conn.close() has been called on the client, but if conn.close() is called, the client cannot receive a response from the server. This creates a situation where one can either get a response from the server or read the POST data, but never both. I assume there is something I'm missing here that will fix this problem.
Additional information:
conn.getresponse() causes the client to hang until the response is received from the server. The response doesn't appear to be received until the function on the server has finished execution.
There are a couple of issues with your original example. The first is that if you use the request() method, you should include the message body you want to send in that call, rather than calling send() separately. The documentation notes that send() can be used as an alternative to request():
As an alternative to using the request() method described above, you
can also send your request step by step, by using the four functions
below.
You just want conn.request("POST", "/testurl", "clientdata").
The second issue is the way you're trying to read what's sent to the server. self.rfile.read() attempts to read the entire input stream coming from the client, which means it will block until the stream is closed. The stream won't be closed until the connection is closed. What you want to do is read exactly as many bytes as were sent from the client, and that's it. How do you know how many bytes that is? The headers, of course:
length = int(self.headers['Content-length'])
print(self.rfile.read(length))
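Putting both fixes together, a corrected client (same host and path as in the question) might look like:
import httplib

conn = httplib.HTTPConnection("localhost:8000")
conn.request("POST", "/testurl", "clientdata")   # body passed to request()
response = conn.getresponse()
print(response.read())                           # "serverdata"
conn.close()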
I do highly recommend the python-requests library if you're going to do more than very basic tests. I also recommend using a better HTTP framework/server than BaseHTTPServer for more than very basic tests (flask, bottle, tornado, etc.).
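For instance, the same POST against the server above could be written with requests as (a sketch, assuming that server is running locally):
import requests

r = requests.post("http://localhost:8000/testurl", data="clientdata")
print(r.status_code)   # 200
print(r.text)          # "serverdata"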
This was answered long ago, but it came up during a search, so here is another piece of the answer: to prevent the server from keeping the stream open (resulting in the response never being sent), use self.rfile.read1() instead of self.rfile.read().

Close lingering connection

I'm using python-requests for a client tool. It makes repeated requests to servers at an interval. However, if the server disconnects, the client fails with a socket error on its next request. It appears the client is keeping the connection open from its side rather than reconnecting. The requests can be hours apart, so it is very likely the server will have dropped the connection by then.
Is there a way to override keep alive and force it to close? Is there something similar to:
with requests.get(url) as r:
    doStuff(r)
# r is cleaned up, the socket is closed.
that would force the connection to clean up after I'm done?
As written, that doesn't work, because requests.Response doesn't have an __exit__ method.
How about this?
I haven't tested it; this is based only on the API docs:
import requests

s = requests.Session()
r = s.get(url)
doStuff(r)
s.close()
Or, to make sure that the close is always called, even if there's an exception, here's how to emulate the with-statement using a try/finally:
s = requests.Session()
try:
    r = s.get(url)
    doStuff(r)
finally:
    s.close()
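requests.Session also supports the context-manager protocol in current versions, so the try/finally can be written as a with-statement; the session, along with any pooled keep-alive connections, is closed on exit (url and doStuff as in the snippets above):
import requests

with requests.Session() as s:
    r = s.get(url)
    doStuff(r)
# the session and its pooled connections are closed here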
