Periodic REST call failing using python httplib - python

To monitor the server heartbeat, I am calling a REST API periodically, every 30 seconds.
import httplib

HEARTBEAT_URI = '/heartbeat'

class HearBeatCheck:
    """
    Has other code
    """

    @staticmethod
    def check_heartbeat(host, port, username, password, timeout=60):
        http_connection = httplib.HTTPConnection(host, port, timeout=timeout)
        http_connection.request('GET', HEARTBEAT_URI,
                                headers={'username': username,
                                         'password': password})
        response = http_connection.getresponse()
        response.read()
        if response.status in [200, 201, 202, 203, 204, 205, 206]:
            return True
        else:
            return False
Now, when I make this call every 30 seconds while the service is down (the heartbeat API would return 500 or some other error code), the error I get with the default timeout value is CannotSendRequest. If I change the polling interval to every 60 seconds, I get a BadStatusLine error instead.
I understand that CannotSendRequest is raised when a response is not read and the same HTTP connection object is reused, but here I am setting the timeout higher than the polling interval, and every periodic call uses a new http_connection object anyway. Can someone shed some light on whether I am missing something obvious? Thanks.
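For context, one way to rule out connection reuse entirely is to close the connection in a finally block and treat transport errors as a missed heartbeat. A minimal sketch only, assuming the same HEARTBEAT_URI as above plus import httplib and import socket:

def check_heartbeat(host, port, username, password, timeout=60):
    http_connection = httplib.HTTPConnection(host, port, timeout=timeout)
    try:
        http_connection.request('GET', HEARTBEAT_URI,
                                headers={'username': username,
                                         'password': password})
        response = http_connection.getresponse()
        response.read()
        return 200 <= response.status <= 206
    except (httplib.HTTPException, socket.error):
        # Any transport-level failure counts as a missed heartbeat.
        return False
    finally:
        # Always release the socket so the next poll starts from a clean state.
        http_connection.close()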

Related

AWS Lambda call url which takes longer than Lambda execution limit

Context
I want to run an AWS Lambda, call an endpoint (fire and forget), then stop the Lambda, all the while the endpoint is whirling away happily on its own.
Attempts
1.
Using a timeout, e.g.
try:
    requests.get(url, timeout=0.001)
except requests.exceptions.ReadTimeout:
    ...
2.
Using an async call with grequests:
import grequests
...

def handler(event, context):
    try:
        ...
        req = grequests.get(url)
        grequests.send(req, grequests.Pool(1))
        return {
            'statusCode': 200,
            'body': "Lambda done"
        }
    except Exception:
        logger.exception('Error while running lambda')
        raise
These requests don't appear to reach the API; it's almost like the request is being cancelled.
Any ideas why?
Question
How can a Lambda call a URL which takes a long time to complete? Thanks.
To anyone reading this: I fixed my problem by using AWS Batch.
You have to increase the timeout on your function. Depending on how you have defined your function, you can increase the default timeout of 300 seconds on the function page or inside your CloudFormation script.
The timeout defined inside your script has no effect on Lambda itself: AWS Lambda will kill your function after 300 seconds regardless of any timeout defined inside your code. See: https://docs.aws.amazon.com/lambda/latest/dg/limits.html
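Not part of the original answer, but if you would rather script the change than use the console or CloudFormation, boto3 exposes the same setting; a sketch where the function name is a placeholder:

import boto3

# Raise the function-level timeout; the Lambda service limit still applies.
lambda_client = boto3.client('lambda')
lambda_client.update_function_configuration(
    FunctionName='my-function',  # placeholder name
    Timeout=300,                 # seconds, capped by the service limit mentioned above
)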

python tornado async client

I created a batch delayed HTTP (async) client which can trigger multiple async HTTP requests and, most importantly, can delay the start of the requests so that, for example, 100 requests are not fired at the same time.
But it has an issue. The http .fetch() method has a handleMethod parameter which handles the response, but I found out that if the delay (sleep) after the fetch isn't long enough, the handle method is never triggered (maybe the request is killed in the meantime, or something).
It is probably related to the .run_sync method. How can I fix that? I want to keep the delays, but without this issue.
I need to parse the response regardless of how long the request takes and regardless of the following sleep call (that call has another purpose, as I said, and should not be related to response handling at all).
from tornado import gen, httpclient, ioloop

class BatchDelayedHttpClient:
    def __init__(self, requestList):
        # class members
        self.httpClient = httpclient.AsyncHTTPClient()
        self.requestList = requestList
        ioloop.IOLoop.current().run_sync(self.execute)

    @gen.coroutine
    def execute(self):
        print("exec start")
        for request in self.requestList:
            print("requesting " + request["url"])
            self.httpClient.fetch(request["url"], request["handleMethod"], method=request["method"], headers=request["headers"], body=request["body"])
            yield gen.sleep(request["sleep"])
        print("exec end")

Python Requests hanging/freezing

I'm using the requests library to get a lot of webpages from somewhere. Here's the pertinent code:
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

response = requests.Session()
retries = Retry(total=5, backoff_factor=.1)
response.mount('http://', HTTPAdapter(max_retries=retries))
response = response.get(url)
After a while it just hangs/freezes (never on the same webpage) while getting the page. Here's the traceback when I interrupt it:
File "/Users/Student/Hockey/Scrape/html_pbp.py", line 21, in get_pbp
response = r.read().decode('utf-8')
File "/anaconda/lib/python3.6/http/client.py", line 456, in read
return self._readall_chunked()
File "/anaconda/lib/python3.6/http/client.py", line 566, in _readall_chunked
value.append(self._safe_read(chunk_left))
File "/anaconda/lib/python3.6/http/client.py", line 612, in _safe_read
chunk = self.fp.read(min(amt, MAXAMOUNT))
File "/anaconda/lib/python3.6/socket.py", line 586, in readinto
return self._sock.recv_into(b)
KeyboardInterrupt
Does anybody know what could be causing it? Or (more importantly) does anybody know a way to stop it if it takes more than a certain amount of time so that I could try again?
Seems like setting a (read) timeout might help you.
Something along the lines of:
response = response.get(url, timeout=5)
(This will set both connect and read timeout to 5 seconds.)
In requests, unfortunately, neither connect nor read timeouts are set by default, even though the docs say it's good to set one:
Most requests to external servers should have a timeout attached, in case the server is not responding in a timely manner. By default, requests do not time out unless a timeout value is set explicitly. Without a timeout, your code may hang for minutes or more.
Just for completeness, the connect timeout is the number of seconds requests will wait for your client to establish a connection to a remote machine, and the read timeout is the number of seconds the client will wait between bytes sent from the server.
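If you want the two set independently, requests also accepts a (connect, read) tuple for the timeout parameter:

import requests

# 3.05 s to establish the connection, 27 s between bytes of the response
# (the numbers are only illustrative).
response = requests.get('http://www.example.com', timeout=(3.05, 27))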
Patching the documented "send" function will fix this for all requests, even in many dependent libraries and SDKs. When patching libs, be sure to patch supported/documented functions, otherwise you may wind up silently losing the effect of your patch.
import requests

DEFAULT_TIMEOUT = 180

old_send = requests.Session.send

def new_send(*args, **kwargs):
    if kwargs.get("timeout", None) is None:
        kwargs["timeout"] = DEFAULT_TIMEOUT
    return old_send(*args, **kwargs)

requests.Session.send = new_send
The effects of not having any timeout are quite severe, and using a default timeout can almost never break anything, because TCP itself has timeouts as well.
On Windows the default TCP timeout is 240 seconds, and the TCP RFCs recommend a minimum of 100 seconds for RTO*retries. Somewhere in that range is a safe default.
To set the timeout globally instead of specifying it in every request:

import os
import requests
from requests.adapters import TimeoutSauce

REQUESTS_TIMEOUT_SECONDS = float(os.getenv("REQUESTS_TIMEOUT_SECONDS", 5))

class CustomTimeout(TimeoutSauce):
    def __init__(self, *args, **kwargs):
        if kwargs["connect"] is None:
            kwargs["connect"] = REQUESTS_TIMEOUT_SECONDS
        if kwargs["read"] is None:
            kwargs["read"] = REQUESTS_TIMEOUT_SECONDS
        super().__init__(*args, **kwargs)

# Set it globally, instead of specifying ``timeout=..`` kwarg on each call.
requests.adapters.TimeoutSauce = CustomTimeout

sess = requests.Session()
sess.get(...)
sess.post(...)

requests process hangs

I'm using requests to get a URL, such as:
import socket
import requests

while True:
    try:
        rv = requests.get(url, timeout=1)
        doSth(rv)
    except socket.timeout as e:
        print e
    except Exception as e:
        print e
After it runs for a while, it quits working. There is no exception or error; it just seems suspended. I then stop the process by pressing Ctrl+C in the console. It shows that the process is waiting for data:
.............
httplib_response = conn.getresponse(buffering=True) #httplib.py
response.begin() #httplib.py
version, status, reason = self._read_status() #httplib.py
line = self.fp.readline(_MAXLINE + 1) #httplib.py
data = self._sock.recv(self._rbufsize) #socket.py
KeyboardInterrupt
Why is this happening? Is there a solution?
It appears that the server you're sending your request to is throttling you: it's sending bytes with less than 1 second between each packet (thus never triggering your timeout parameter), but slowly enough for the request to appear stuck.
The only fix for this I can think of is to reduce the timeout parameter, unless you can resolve the throttling with the server provider.
Do keep in mind that you'll need to account for latency when setting the timeout parameter, otherwise your connection will be dropped too quickly and might not work at all.
By default, requests does not set a timeout for connect or read. If for some reason the server cannot get back to the client in time, the client will get stuck on the connect or, more often, on reading the response.
The quick resolution is to set a timeout value on the request; the approach is well described here: http://docs.python-requests.org/en/master/user/advanced/#timeouts

Why doesn't requests.get() return? What is the default timeout that requests.get() uses?

In my script, requests.get never returns:
import requests

print("requesting..")

# This call never returns!
r = requests.get(
    "http://www.some-site.example",
    proxies={'http': '222.255.169.74:8080'},
)

print(r.ok)
What could be the possible reason(s)? Any remedy? What is the default timeout that get uses?
What is the default timeout that get uses?
The default timeout is None, which means it'll wait (hang) until the connection is closed.
Just specify a timeout value, like this:
r = requests.get(
    'http://www.example.com',
    proxies={'http': '222.255.169.74:8080'},
    timeout=5
)
From the requests documentation:
You can tell Requests to stop waiting for a response after a given number of seconds with the timeout parameter:
>>> requests.get('http://github.com', timeout=0.001)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
requests.exceptions.Timeout: HTTPConnectionPool(host='github.com', port=80): Request timed out. (timeout=0.001)
Note: timeout is not a time limit on the entire response download; rather, an exception is raised if the server has not issued a response for timeout seconds (more precisely, if no bytes have been received on the underlying socket for timeout seconds).
It happens a lot to me that requests.get() takes a very long time to return even if the timeout is 1 second. There are a few ways to overcome this problem:
1. Use the TimeoutSauce internal class
From: https://github.com/kennethreitz/requests/issues/1928#issuecomment-35811896
import requests
from requests.adapters import TimeoutSauce

class MyTimeout(TimeoutSauce):
    def __init__(self, *args, **kwargs):
        if kwargs['connect'] is None:
            kwargs['connect'] = 5
        if kwargs['read'] is None:
            kwargs['read'] = 5
        super(MyTimeout, self).__init__(*args, **kwargs)

requests.adapters.TimeoutSauce = MyTimeout
This code should cause us to set the read timeout as equal to the connect timeout, which is the timeout value you pass on your Session.get() call. (Note that I haven't actually tested this code, so it may need some quick debugging; I just wrote it straight into the GitHub window.)
2. Use a fork of requests from kevinburke: https://github.com/kevinburke/requests/tree/connect-timeout
From its documentation: https://github.com/kevinburke/requests/blob/connect-timeout/docs/user/advanced.rst
If you specify a single value for the timeout, like this:
r = requests.get('https://github.com', timeout=5)
The timeout value will be applied to both the connect and the read timeouts. Specify a tuple if you would like to set the values separately:
r = requests.get('https://github.com', timeout=(3.05, 27))
NOTE: The change has since been merged to the main Requests project.
3. Using eventlet or signal, as already mentioned in the similar question:
Timeout for python requests.get entire response
I wanted a default timeout easily added to a bunch of code (assuming that a timeout solves your problem).
This is the solution I picked up from a ticket submitted to the repository for Requests.
credit: https://github.com/kennethreitz/requests/issues/2011#issuecomment-477784399
The solution is the last couple of lines here, but I show more code for better context. I like to use a session for retry behaviour.
import requests
import functools
from requests.adapters import HTTPAdapter, Retry

def requests_retry_session(
    retries=10,
    backoff_factor=2,
    status_forcelist=(500, 502, 503, 504),
    session=None,
) -> requests.Session:
    session = session or requests.Session()
    retry = Retry(
        total=retries,
        read=retries,
        connect=retries,
        backoff_factor=backoff_factor,
        status_forcelist=status_forcelist,
    )
    adapter = HTTPAdapter(max_retries=retry)
    session.mount('http://', adapter)
    session.mount('https://', adapter)
    # set default timeout
    for method in ('get', 'options', 'head', 'post', 'put', 'patch', 'delete'):
        setattr(session, method, functools.partial(getattr(session, method), timeout=30))
    return session
Then you can do something like this:
requests_session = requests_retry_session()
r = requests_session.get(url=url,...
In my case, the reason requests.get() never returned was that it first tried to connect to the host's IPv6 address. If something goes wrong connecting to that IPv6 address and the attempt gets stuck, it only falls back to the IPv4 address if I explicitly set timeout=<N seconds> and the timeout is hit.
My solution is monkey-patching the Python socket module to ignore IPv6 (or IPv4, if IPv4 isn't the one that works); either this answer or this answer worked for me.
You might be wondering why the curl command works: curl connects over IPv4 without waiting for IPv6 to complete. You can trace the socket syscalls with strace -ff -e network -s 10000 -- curl -vLk '<your url>'. For Python, strace -ff -e network -s 10000 -- python3 <your python script> can be used.
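Since the linked answers are not reproduced here, this is roughly what the usual socket patch looks like; a sketch only, assuming IPv4 is the side that actually works for you:

import socket

_orig_getaddrinfo = socket.getaddrinfo

def _ipv4_only_getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
    # Force name resolution to return IPv4 results only, so requests never
    # stalls trying an unreachable IPv6 address first.
    return _orig_getaddrinfo(host, port, socket.AF_INET, type, proto, flags)

socket.getaddrinfo = _ipv4_only_getaddrinfo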
Patching the documented "send" function will fix this for all requests, even in many dependent libraries and SDKs. When patching libs, be sure to patch supported/documented functions, not TimeoutSauce, otherwise you may wind up silently losing the effect of your patch.
import requests

DEFAULT_TIMEOUT = 180

old_send = requests.Session.send

def new_send(*args, **kwargs):
    if kwargs.get("timeout", None) is None:
        kwargs["timeout"] = DEFAULT_TIMEOUT
    return old_send(*args, **kwargs)

requests.Session.send = new_send
The effects of not having any timeout are quite severe, and using a default timeout can almost never break anything, because TCP itself has default timeouts as well.
I reviewed all the answers and came to the conclusion that the problem still exists. On some sites requests may hang indefinitely, and using multiprocessing seems to be overkill. Here's my approach (Python 3.5+):
import asyncio
import aiohttp

async def get_http(url):
    async with aiohttp.ClientSession(conn_timeout=1, read_timeout=3) as client:
        try:
            async with client.get(url) as response:
                content = await response.text()
                return content, response.status
        except Exception:
            pass

loop = asyncio.get_event_loop()
task = loop.create_task(get_http('http://example.com'))
loop.run_until_complete(task)

result = task.result()
if result is not None:
    content, status = task.result()
    if status == 200:
        print(content)
UPDATE
If you receive a deprecation warning about using conn_timeout and read_timeout, check near the bottom of THIS reference for how to use the ClientTimeout data structure. One simple way to apply that data structure, per the linked reference, to the original code above would be:
async def get_http(url):
    timeout = aiohttp.ClientTimeout(total=60)
    async with aiohttp.ClientSession(timeout=timeout) as client:
        try:
            etc.
