Timeout on an HTTP request in Python

Very occasionally when making an HTTP request, I wait for ages for a response that never comes. What is the recommended way to cancel such a request after a reasonable period of time?

Set the HTTP request timeout.

Use the timeout parameter of urllib2.urlopen (or of httplib.HTTPConnection). The original urllib has no such convenient feature. You could also use an asynchronous HTTP client such as twisted.web.client, but that's probably not necessary here.
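A minimal Python 2 sketch; the 5-second value is an arbitrary choice for the example:

import socket
import urllib2

try:
    # Give up if the server has not responded within 5 seconds.
    response = urllib2.urlopen("http://example.com/", timeout=5)
    body = response.read()
except urllib2.URLError as e:
    # A connect timeout surfaces as URLError wrapping socket.timeout.
    print("Request failed: %s" % e)
except socket.timeout:
    # A timeout during read() can raise socket.timeout directly.
    print("Request timed out")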

If you are making a lot of HTTP requests, you can change the timeout globally by calling socket.setdefaulttimeout, which applies to every new socket that does not set its own timeout.
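For example (the 10-second value is arbitrary):

import socket

# Affects every socket created afterwards that does not
# explicitly set its own timeout.
socket.setdefaulttimeout(10)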

Related

API getting timeout error with JMeter and Python but working in Postman

I have a POST request that succeeds in Postman, but the same request throws a timeout exception in JMeter and python-requests. Please help me understand what might be wrong.
java.net.SocketTimeoutException: Read timed out
Most probably your server cannot respond fast enough, so JMeter terminates the connection without waiting for the response.
Postman's default configuration is to wait "forever", and JMeter's HTTP Request sampler behaviour should be the same. However, there can be default timeouts at the JVM or OS level that have an impact, so it makes sense to explicitly increase the timeout in the "Advanced" tab of the HTTP Request sampler (or, if you have more than one HTTP Request sampler, in HTTP Request Defaults, which sets the timeout for all HTTP Request samplers at once).
Also check that you're sending the same request (protocol, host, port, path, etc.) and pay extra attention to the HTTP headers; in JMeter you configure request headers via the HTTP Header Manager.
Finally, it may be that you need a proxy to reach the system under test: Postman may be using one, while in JMeter you have to configure it explicitly.
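On the python-requests side, the same advice applies: pass an explicit timeout instead of relying on defaults. A sketch, with a hypothetical URL and payload and arbitrary timeout values:

import requests

url = "https://api.example.com/endpoint"  # hypothetical endpoint
payload = {"key": "value"}

try:
    # (connect timeout, read timeout) in seconds; a generous read
    # timeout mimics Postman's wait-"forever" behaviour without
    # hanging indefinitely.
    response = requests.post(url, json=payload, timeout=(5, 120))
    response.raise_for_status()
except requests.exceptions.ReadTimeout:
    print("Connected, but the server was too slow to respond.")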

Reusing the same connection within a session in a loop

I want to ask about reusing the same connection while a loop sends the same POST request repeatedly. Assume I have this code:
import requests
import time

r = requests.Session()
url = "http://somenumbers.php"

while True:
    x = r.post(url)
    time.sleep(10)
Now, according to the documentation of the requests library:
Excellent news — thanks to urllib3, keep-alive is 100% automatic within a session! Any requests that you make within a session will automatically reuse the appropriate connection!
Note that connections are only released back to the pool for reuse once all body data has been read; be sure to either set stream to False or read the content property of the Response object
Does this work for the code above? I am trying to avoid re-sending the same request in case the server freezes or a read timeout occurs. In "Issue with sending POST requests using the library requests" I go over the whole problem, and one of the suggestions is to reuse the connection, but:
Won't sending the same request on the same connection just mean multiple entries, or will it fix the issue, since the connection is only released back once one entry is sent, as the documentation states?
Assuming the latter is true, won't that affect performance and cause long delays, since the request is trapped inside the connection?
r.post is a blocking call. The function only returns once the request has been sent and a response received. As long as you access x.content before the next iteration, the subsequent request will re-use the underlying TCP connection.
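A sketch of the loop adjusted accordingly (the 30-second timeout is an assumption, not part of the original code):

import requests
import time

session = requests.Session()
url = "http://somenumbers.php"

while True:
    response = session.post(url, timeout=30)
    _ = response.content  # drain the body so the connection returns to the pool
    time.sleep(10)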
Won't sending the same request on the same connection just mean multiple entries, or will it fix the issue, since the connection is only released back once one entry is sent, as the documentation states?
requests doesn't cache responses. It will not check whether a previous request with the same parameters was already made. If you need that, you will have to build it yourself.
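A purely illustrative sketch of such client-side deduplication, hashing each payload before sending (all names here are made up for the example):

import hashlib

sent_digests = set()

def already_sent(payload):
    # Skip byte payloads whose hash we have already sent once.
    digest = hashlib.sha256(payload).hexdigest()
    if digest in sent_digests:
        return True
    sent_digests.add(digest)
    return False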
Won't that affect performance and cause long delays, since the request is trapped inside the connection?
requests will only re-use an available connection. If no free connection exists, a new connection will be established. You can use requests.packages.urllib3.poolmanager.PoolManager to control the number of connections in the pool.
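In current versions of requests, the pool is more commonly tuned by mounting an HTTPAdapter on the session; a minimal sketch with arbitrary pool sizes:

import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
# pool_connections: number of per-host pools to cache;
# pool_maxsize: connections kept alive per host pool.
adapter = HTTPAdapter(pool_connections=1, pool_maxsize=4)
session.mount("http://", adapter)
session.mount("https://", adapter)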

Listen for an HTTP request in the body of a RequestHandler

There is a strange API I need to work with.
I want to make an HTTP call to the API; the API returns success immediately, but I then need to wait for a request from this API before I respond to my client.
What is the best way to accomplish that?
Is it an option to make your API RESTful?
An example flow: have the client POST to a URL to create a new resource, then GET/HEAD the state of that resource; that way you don't need to block your client while you do the blocking work.
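A sketch of that flow from the client's point of view, with hypothetical endpoints and assuming the server returns a Location header for the new resource:

import time
import requests

base = "https://api.example.com"  # hypothetical

# Create the resource; the server answers immediately.
created = requests.post(base + "/jobs", json={"input": "data"}, timeout=10)
job_url = created.headers["Location"]  # assumed to point at the new resource

# Poll the resource state instead of holding the connection open.
while True:
    state = requests.get(job_url, timeout=10).json()
    if state.get("status") == "done":
        break
    time.sleep(2)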

How to execute a method after a response is complete in Pyramid?

Using Pyramid with Akhet, how do I execute a method after a response has been returned to the client? I believe this was done with the __after__ method in Pylons. I'm trying to execute a DB query and don't want it to block the request response.
You can use a response callback for your case.
EDITED after Michael Merickel's comment: the response callback blocks the request to which it is added, but you shouldn't worry about it blocking other requests, since each request runs in a different thread. If you still need the callback not to block its own request, you can spawn a separate thread or process (if you can afford it), or look into message queuing systems, as mentioned in the comment below.
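A minimal sketch of a response callback in Pyramid (run_db_query is a hypothetical stand-in for your query):

def run_db_query():
    pass  # hypothetical DB work

def log_to_db(request, response):
    # Called after the view returns, before the response goes out.
    run_db_query()

def my_view(request):
    request.add_response_callback(log_to_db)
    return {"status": "ok"}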

async http request on Google App Engine Python

Does anybody know how to make an HTTP request from Google App Engine without waiting for the response?
It should be like pushing data over HTTP, without paying the latency of waiting for a response.
I think that this section of the AppEngine docs is what you are looking for.
Use the taskqueue. If you're just pushing data, there's no sense in waiting for the response.
What you could do is enqueue a task in the request handler with whatever data was received (using the deferred library). As soon as the task has been enqueued successfully, you can return a '200 OK' response and be ready for the next push.
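A sketch for the first-generation Python runtime, using the deferred library (the handler and function names are made up):

import webapp2
from google.appengine.ext import deferred

def process_push(payload):
    # Runs later on the task queue, outside the user-facing request.
    pass

class PushHandler(webapp2.RequestHandler):
    def post(self):
        # Enqueue and return immediately; the client is not kept waiting.
        deferred.defer(process_push, self.request.body)
        self.response.set_status(200)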
I've done this before by doing a URLFetch and setting a very low value for the deadline parameter. I used 0.1, i.e. 100 ms. You also need to wrap the URLFetch in a try/except, since the request will time out.
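A sketch of that fire-and-forget pattern (the URL is hypothetical; the 0.1-second deadline is the value from the answer):

from google.appengine.api import urlfetch

try:
    # The call will almost certainly exceed the 100 ms deadline;
    # we simply ignore the resulting error.
    urlfetch.fetch("https://example.com/push", payload="data",
                   method=urlfetch.POST, deadline=0.1)
except urlfetch.DownloadError:
    pass  # expected: deadline exceeded before the response arrived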
