I'm trying to make a POST request to a localhost address I've set inside the same Python file, but I am getting this error: ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine
Here's my code:
import requests
from quart import Quart, request

app = Quart(__name__)

@app.route('/callback_work/', methods=['POST'])
async def callback_work():
    content_type = request.headers.get('content-type')
    if content_type == 'application/json':
        request_json = await request.get_json()
        print(request_json)
        return 'Callback done'
    else:
        return 'Content-Type not supported!'

async def capture_callback(request_json):
    requests.post('http://localhost:5000/callback_work/',
                  json=request_json, timeout=2,
                  headers={"Content-Type": "application/json"})
I am already providing the request_json through another function and I know it's valid and it exists. Also, I've been sending POST requests through Postman all this time and everything was working fine. The timeout argument is there as a precaution, since without it the script never stopped waiting for the POST request to complete.
Do you think there's a problem with having both the function that handles the POST request and the function that makes the POST request in the same file?
The requests module is not async-aware; even inside an async function its calls block the event loop. What happened in your case is that your POST request blocks the loop, so your callback handler never gets a chance to respond.
You have two general options:
use an async-compatible library like aiohttp (see the sketch below)
use multiple processes, either by running multiple scripts or by using multiprocessing
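A minimal sketch of the aiohttp option, assuming the same localhost endpoint as in the question:

import aiohttp

async def capture_callback(request_json):
    # aiohttp awaits the POST without blocking the event loop,
    # so callback_work can still be scheduled and respond.
    async with aiohttp.ClientSession() as session:
        async with session.post('http://localhost:5000/callback_work/',
                                json=request_json,
                                timeout=aiohttp.ClientTimeout(total=2)) as resp:
            print(await resp.text())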
I need help with the Python web framework Quart. I want to build a server that returns 202 as soon as a client requests some time-consuming I/O task, and then calls the client back with that task's return value as soon as the task is done.
For that purpose, I add the task requested by the client as a background task using app.add_background_task(task), and that gives me a successful result: it returns 202 immediately. But I'm not sure how to get at the return value of the background task and call the client back with it.
I'm reading this article: https://quart.palletsprojects.com/en/latest/how_to_guides/server_sent_events.html. But I'm not sure how to handle it.
import asyncio
from datetime import datetime

from quart import Quart

app = Quart(__name__)

async def background_task(timeout=10):
    print("background task started at", datetime.now().strftime("%d/%m/%Y %H:%M:%S"))
    await asyncio.sleep(timeout)
    print("background task completed at", datetime.now().strftime("%d/%m/%Y %H:%M:%S"))
    return "requested task done"

@app.route("/", methods=["GET"])
async def main_route():
    print("Hello from main route")
    app.add_background_task(background_task, 10)
    return "request accepted", 202
To push information to the client, you'll need WebSockets or some other mechanism - it'll require server- and client-side implementations.
A simpler solution is to poll the server from the client to determine whether the task is complete: send requests repeatedly to the server until you get confirmation of what you expect, your maximum number of attempts is exceeded, or a request simply times out. A sketch of the polling approach follows.
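A minimal polling sketch under stated assumptions: the TASK_RESULTS registry and the /result/<task_id> route are hypothetical additions for illustration, not part of Quart's API.

import asyncio
from quart import Quart, jsonify

app = Quart(__name__)
TASK_RESULTS = {}  # task_id -> result, filled in by the background task

async def background_task(task_id, timeout=10):
    await asyncio.sleep(timeout)
    TASK_RESULTS[task_id] = "requested task done"

@app.route("/", methods=["GET"])
async def main_route():
    # hand the client a task id it can poll with
    app.add_background_task(background_task, "job-1", 10)
    return "request accepted", 202

@app.route("/result/<task_id>", methods=["GET"])
async def poll_result(task_id):
    # the client calls this repeatedly until done is True
    if task_id in TASK_RESULTS:
        return jsonify({"done": True, "result": TASK_RESULTS[task_id]})
    return jsonify({"done": False}), 202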
I'm trying to implement a fire-and-forget mechanism using FastAPI, and I'm facing a few difficulties.
I have two applications. One is developed with FastAPI and the other with Flask. FastAPI will run in AWS Lambda and will send requests to the Flask app running on AWS ECS.
Currently, I am able to send a request to the Flask API and receive an immediate response from the FastAPI app. But I see FastAPI still running bg_tasks.add_task(make_request, request) in the background, which times out after the Lambda execution limit (15 minutes).
FastAPI application:
from typing import Dict

import requests
from fastapi import APIRouter, BackgroundTasks

router = APIRouter()

def make_request(data):
    """
    Function to make a post request to flask application
    :param data: Data from the user to write into the file
    :return: None
    """
    print("***** Inside post *****")
    requests.post(url=root_url, data=data)
    print("***** Post completed *****")

@router.post("/write-to-file")
async def write_to_file(request: Dict, bg_tasks: BackgroundTasks):
    """
    Function to queue the requests and return to the post function
    :param request: Request from the user
    :param bg_tasks: Background task instance
    :return: Some message
    """
    print("****** Request call started ******")
    bg_tasks.add_task(make_request, request)
    print("****** Request completed ******")
    return {"Message": "Data will be written into the file"}
Flask Application:
import json
import time
from datetime import datetime

from flask import Flask, request

app = Flask(__name__)

@app.route('/', methods=['POST'])
def write():
    """
    Function to write the request data into the file
    :return:
    """
    request_data = request.form
    try:
        print(f"Sleep time {int(request_data['sleep_time'])}")
        time.sleep(int(request_data["sleep_time"]))
        request_data = dict(request_data)
        request_data['current_time'] = str(datetime.now())
        with open("data.txt", "a") as f:
            f.write("\n")
            f.write(json.dumps(request_data, indent=4))
        return {"Message": "Success"}
    except Exception as e:
        return {"Message": str(e)}  # str(e): exception objects are not JSON-serializable
FastAPI (http://localhost:8000/write-to-file/) calls the write_to_file method, which adds each request as a task to the background queue and runs it there.
This function does not wait for the task to complete; it returns the response to the client immediately. The make_request method then triggers the Flask endpoint (http://localhost:5000/), which processes the request and writes to a file. Treat make_request as running inside one AWS Lambda: if the Flask application takes hours to process, the Lambda waits just as long.
Is it possible to kill the Lambda once the request is published, or to do something else to solve the timeout issue?
With the current setup, your Lambda runs for as long as the Flask endpoint needs to process your request. Effectively, both APIs run for exactly the same time.
This is because requests.post in the Lambda function must wait for the response to finish. Given that you don't care about the result of that response, I can think of several other ways to solve this.
If I were you, I would move the queue processing to the ECS side. The Lambda would then only be responsible for putting a job into a queue that the ECS worker processes when it has capacity (see the sketch below).
This option would let you get rid of one of the APIs: you could query the Flask API directly and kill the Lambda function, or instead kill the Flask API and run a worker process on ECS.
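A hedged sketch of the queue option using SQS; the queue URL, function name, and payload shape are assumptions for illustration, not from the original code:

import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/write-jobs"  # hypothetical

def enqueue_job(data: dict) -> None:
    # The Lambda returns as soon as SQS accepts the message;
    # an ECS worker polls the queue and performs the slow file write.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(data))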
Alternatively, you could respond early on the Flask API side, which would finish your HTTP request, and thus the Lambda execution, sooner. This can be confusing to set up and defeats the purpose of exposing an HTTP API in the first place. Also, under some circumstances, the Flask request could be terminated by the webserver after a default timeout (~30 seconds).
And finally, in case you really, really want to leave your code as it is, you could set the request to time out after a short period. If you go this route, make sure to choose a timeout long enough for Flask to start processing the request:
try:
    requests.post(url=root_url, data=data, timeout=5)  # throw after 5 seconds of waiting
except requests.exceptions.Timeout:
    pass
I have a python server written with bottle. When I access the server from a website using Ajax, and then close the website before the server can send its response, the server gets stuck trying to send the response to a destination that no longer exists. When this happens, the server becomes unresponsive to any requests for about 10 seconds, before resuming normal operations.
How can I prevent this? I would like bottle to immediately stop trying if the website that made the request no longer exists.
I start the server like this:
bottle.run(host='localhost', port=port_to_listen_to, quiet=True)
and the only url exposed by the server is this:
import json

import bottle

@bottle.route('/', method='POST')
def main_server_input():
    request_data = bottle.request.forms['request_data']
    request_data = json.loads(request_data)
    try:
        response_data = process_message_from_scenario(request_data)
    except Exception:
        error_message = utilities.get_error_message_details()
        error_message = "Exception during processing of command:\n%s" % (error_message,)
        print(error_message)
        response_data = {
            'success': False,
            'error_message': error_message,
        }
    return json.dumps(response_data)
Is process_message_from_scenario a long-running function? (Say, 10 seconds?)
If so, your one-and-only server thread will be tied up in that function, and no subsequent requests will be serviced during that time. Have you tried running a concurrent server, like gevent? Try this:
bottle.run(host='localhost', port=port_to_listen_to, quiet=True, server='gevent')
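A minimal sketch of the gevent route; note that bottle's async primer recommends applying gevent's monkey-patching before any other imports (an assumption worth verifying against your bottle version):

from gevent import monkey; monkey.patch_all()  # must run before other imports

import bottle

# ... route definitions as above ...

bottle.run(host='localhost', port=port_to_listen_to, quiet=True, server='gevent')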
I want to create a messenger via websockets. My logic is: User_1 sends a message (JSON) to User_2 via a Tornado handler; the message is checked (def send_message_to_RDB_parallel) on the Tornado server (some requests to an RDB, PostgreSQL); then User_1 receives the response and User_2 receives the message.
Checking with requests to the RDB (def send_message_to_RDB_parallel) might block my Tornado server. Because of that, I want to do it via Celery (with RabbitMQ) or just yield it. As I understand it, this can unblock the Tornado server, but I need to get the response back when the check is done. I can launch it with or without Celery, but I can't get the response. And when I stop my Tornado server (press Ctrl-C), I see an error like "... object is not callable".
How can I get the response and send it (self.write_message())?
In this example I try to do it just with yield:
class MessagesHandler(tornado.websocket.WebSocketHandler):
    ...
    def on_message(self, mess):
        ...
        self.send_message_to_RDB(thread_id=thread_id,
                                 sender_id=self.user_id,
                                 recipient_id=recipient_id,
                                 message=message['msg'],
                                 time=datetime.datetime.now(datetime.timezone.utc),
                                 check=True)
        ...

    @tornado.gen.coroutine
    def send_message_to_RDB(self, thread_id, sender_id, recipient_id, message, time, check):
        response = yield tornado.gen.Task(send_message_to_RDB_parallel(thread_id=thread_id,
                                                                       sender_id=sender_id,
                                                                       recipient_id=recipient_id,
                                                                       message=message,
                                                                       time=time,
                                                                       check=check))
        if response.result[0] is False:
            self.write_message(response.result[1])
def send_message_to_RDB_parallel(thread_id, sender_id, recipient_id, message, time, check=False):
    """
    Send message to rdb. Check thread. One recipient_id !
    """
    # tf__ = False
    if check is True:
        if recipient_id == sender_id:
            return False, to_json_error(MessengerRecipientEqualSenderServerMessage[1])
        if User.objects.filter(id=recipient_id,
                               is_deleted=False,
                               is_active=True,
                               is_blocked=False).exists() is False:
            return False, to_json_error("Wrong User")
        ...
    else:
        me = Message()
        me.text = message
        me.thread_id = thread_id
        me.sender_id = sender_id
        me.datetime = time
        me.save()
    return True, None
There are a couple of general errors:
1) send_message_to_RDB_parallel is not async and doesn't even have a callback argument, yet you are trying to use it with gen.Task - no result will be set.
2) send_message_to_RDB is a coroutine; it's called in on_message, but it isn't yielded (awaited).
3) gen.Task takes a function (and optional additional arguments) and runs it, but in the code you are actually calling the function and passing its result, not passing the function itself.
Because of 2), any further errors that occur are not raised, which is why you only see them after ^C. A corrected sketch follows; also take a read of http://www.tornadoweb.org/en/stable/guide/async.html
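A hedged sketch of the corrected call chain, running the blocking function on a thread pool instead of gen.Task; the executor and message_kwargs are assumptions, not part of the original code:

from concurrent.futures import ThreadPoolExecutor

import tornado.gen
import tornado.websocket

executor = ThreadPoolExecutor(max_workers=4)

class MessagesHandler(tornado.websocket.WebSocketHandler):

    @tornado.gen.coroutine
    def on_message(self, mess):
        # ... parse mess into the keyword arguments used below, as before ...
        yield self.send_message_to_RDB(**message_kwargs)  # yielded, so errors propagate

    @tornado.gen.coroutine
    def send_message_to_RDB(self, **kwargs):
        # Tornado coroutines can yield concurrent.futures.Future objects,
        # so the blocking ORM calls run on a worker thread, off the IOLoop.
        ok, payload = yield executor.submit(send_message_to_RDB_parallel, **kwargs)
        if ok is False:
            self.write_message(payload)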
Solution
Of course you could use Celery and asynchronously wait for results (Tornado-Celery integration hacks).
But if you are using Postgres I would recommend using an existing async library (Saving API output async using SQLAlchemy and Tornado):
momoko - a Tornado-based Postgres client; it is not an ORM
aiopg - an asyncio-based Postgres client (Tornado 4.3 and above), with support for SQLAlchemy query builders; a small sketch follows
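A minimal aiopg sketch of the recipient check; the DSN and the table/column names are assumptions for illustration:

import aiopg

DSN = "dbname=messenger user=app password=secret host=127.0.0.1"  # hypothetical

async def recipient_ok(pool, recipient_id):
    # runs the query on the asyncio event loop without blocking it
    async with pool.acquire() as conn:
        async with conn.cursor() as cur:
            await cur.execute(
                "SELECT 1 FROM auth_user WHERE id = %s AND is_active AND NOT is_blocked",
                (recipient_id,))
            return (await cur.fetchone()) is not None

# pool = await aiopg.create_pool(DSN)  # created once at startup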
In my script, requests.get never returns:
import requests

print("requesting..")

# This call never returns!
r = requests.get(
    "http://www.some-site.example",
    proxies={'http': '222.255.169.74:8080'},
)

print(r.ok)
What could be the possible reason(s)? Any remedy? What is the default timeout that get uses?
What is the default timeout that get uses?
The default timeout is None, which means it'll wait (hang) until the connection is closed.
Just specify a timeout value, like this:
r = requests.get(
    'http://www.example.com',
    proxies={'http': '222.255.169.74:8080'},
    timeout=5
)
From requests documentation:
You can tell Requests to stop waiting for a response after a given number of seconds with the timeout parameter:
>>> requests.get('http://github.com', timeout=0.001)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
requests.exceptions.Timeout: HTTPConnectionPool(host='github.com', port=80): Request timed out. (timeout=0.001)
Note:
timeout is not a time limit on the entire response download; rather, an exception is raised if the server has not issued a response for timeout seconds (more precisely, if no bytes have been received on the underlying socket for timeout seconds).
It happens a lot to me that requests.get() takes a very long time to return even when the timeout is 1 second. There are a few ways to overcome this problem:
1. Use the TimeoutSauce internal class
From: https://github.com/kennethreitz/requests/issues/1928#issuecomment-35811896
import requests
from requests.adapters import TimeoutSauce

class MyTimeout(TimeoutSauce):
    def __init__(self, *args, **kwargs):
        if kwargs['connect'] is None:
            kwargs['connect'] = 5
        if kwargs['read'] is None:
            kwargs['read'] = 5
        super(MyTimeout, self).__init__(*args, **kwargs)

requests.adapters.TimeoutSauce = MyTimeout
This code should cause us to set the read timeout as equal to the connect timeout, which is the timeout value you pass on your Session.get() call. (Note that I haven't actually tested this code, so it may need some quick debugging, I just wrote it straight into the GitHub window.)
2. Use a fork of requests from kevinburke: https://github.com/kevinburke/requests/tree/connect-timeout
From its documentation: https://github.com/kevinburke/requests/blob/connect-timeout/docs/user/advanced.rst
If you specify a single value for the timeout, like this:
r = requests.get('https://github.com', timeout=5)
The timeout value will be applied to both the connect and the read timeouts. Specify a tuple if you would like to set the values separately:
r = requests.get('https://github.com', timeout=(3.05, 27))
NOTE: The change has since been merged to the main Requests project.
3. Use eventlet or signal, as already mentioned in the similar question (an eventlet sketch follows):
Timeout for python requests.get entire response
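A hedged sketch of the eventlet variant; the monkey-patching is needed so the green-thread timeout can actually interrupt blocking socket reads:

import eventlet
eventlet.monkey_patch()  # must happen before requests/socket are used

import requests

try:
    with eventlet.Timeout(10):  # raises eventlet.Timeout if exceeded
        r = requests.get("http://www.some-site.example")
except eventlet.Timeout:
    r = None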
I wanted a default timeout easily added to a bunch of code (assuming that timeout solves your problem)
This is the solution I picked up from a ticket submitted to the repository for Requests.
credit: https://github.com/kennethreitz/requests/issues/2011#issuecomment-477784399
The solution is the last couple of lines here, but I show more code for better context. I like to use a session for retry behaviour.
import functools

import requests
from requests.adapters import HTTPAdapter, Retry

def requests_retry_session(
    retries=10,
    backoff_factor=2,
    status_forcelist=(500, 502, 503, 504),
    session=None,
) -> requests.Session:
    session = session or requests.Session()
    retry = Retry(
        total=retries,
        read=retries,
        connect=retries,
        backoff_factor=backoff_factor,
        status_forcelist=status_forcelist,
    )
    adapter = HTTPAdapter(max_retries=retry)
    session.mount('http://', adapter)
    session.mount('https://', adapter)
    # set default timeout
    for method in ('get', 'options', 'head', 'post', 'put', 'patch', 'delete'):
        setattr(session, method, functools.partial(getattr(session, method), timeout=30))
    return session
then you can do something like this:
requests_session = requests_retry_session()
r = requests_session.get(url=url,...
In my case, the reason "requests.get never returns" is that requests.get() attempts to connect to the host's resolved IPv6 address first. If something goes wrong with that IPv6 connection and it gets stuck, it falls back to the IPv4 address only if I explicitly set timeout=<N seconds> and the timeout is hit.
My solution is monkey-patching the Python socket module to ignore IPv6 (or IPv4 if IPv4 isn't working); either of the linked answers worked for me, and a sketch of the idea follows.
You might wonder why the curl command works: curl connects over IPv4 without waiting for IPv6 to complete. You can trace the socket syscalls with the command strace -ff -e network -s 10000 -- curl -vLk '<your url>'. For Python, strace -ff -e network -s 10000 -- python3 <your python script> can be used.
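One hedged sketch of that monkey-patch, forcing urllib3 (which requests uses underneath) to resolve IPv4 only; allowed_gai_family is an internal hook, so this may vary across urllib3 versions:

import socket

import urllib3.util.connection

# Force requests/urllib3 to use IPv4 by overriding the address-family hook.
urllib3.util.connection.allowed_gai_family = lambda: socket.AF_INET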
Patching the documented send function will fix this for all requests - even in many dependent libraries and SDKs. When patching libraries, be sure to patch supported/documented functions, not TimeoutSauce - otherwise you may wind up silently losing the effect of your patch.
import requests

DEFAULT_TIMEOUT = 180

old_send = requests.Session.send

def new_send(*args, **kwargs):
    if kwargs.get("timeout", None) is None:
        kwargs["timeout"] = DEFAULT_TIMEOUT
    return old_send(*args, **kwargs)

requests.Session.send = new_send
The effects of not having any timeout are quite severe, and the use of a default timeout can almost never break anything - because TCP itself has default timeouts as well.
Reviewed all the answers and came to the conclusion that the problem still exists. On some sites requests may hang indefinitely, and using multiprocessing seems to be overkill. Here's my approach (Python 3.5+):
import asyncio

import aiohttp

async def get_http(url):
    async with aiohttp.ClientSession(conn_timeout=1, read_timeout=3) as client:
        try:
            async with client.get(url) as response:
                content = await response.text()
                return content, response.status
        except Exception:
            pass

loop = asyncio.get_event_loop()
task = loop.create_task(get_http('http://example.com'))
loop.run_until_complete(task)
result = task.result()
if result is not None:
    content, status = task.result()
    if status == 200:
        print(content)
UPDATE
If you receive a deprecation warning about using conn_timeout and read_timeout, check near the bottom of THIS reference for how to use the ClientTimeout data structure. One simple way to apply this data structure per the linked reference to the original code above would be:
async def get_http(url):
    timeout = aiohttp.ClientTimeout(total=60)
    async with aiohttp.ClientSession(timeout=timeout) as client:
        try:
            etc.