I have a simple experiment in the code snippet shown below. My goal is to have the browser client (via a WebSocket) kick off a long-running task on the server, but the server should service WebSocket messages from the client while the long-running task is running. Here's the workflow ("OK" means this step is working as-is in the snippet, while "?" means this is what I'm trying to figure out)...
OK - Run the code
OK - Launch a browser at 127.0.0.1
OK - WebSocket connects
OK - Click "Send" and the browser client generates a random number, sends it to the server, and the server echoes back the number
OK - Click "Begin" and this invokes a long-running task on the server (5.0 seconds)
? - During this 5sec (while the long-running task is running), I'd like to click "Send" and have the server immediately echo back the random number that was sent from the client while the long-running task continues to be concurrently executed in the event loop
For that last bullet point, it is not working that way: if you click "Send" while the long process is running, the long process finishes first and only then are the numbers echoed back. To me, this demonstrates that await simulate_long_process(websocket) truly waits for simulate_long_process() to complete, which makes sense. However, part of me expected that await simulate_long_process(websocket) would signal the event loop that it could go work on other tasks, and therefore return to the while True loop to service the next incoming messages. I expected this because simulate_long_process() is fully async (async def, await websocket.send_text(), and await asyncio.sleep()). The current behavior makes sense in hindsight, but it is not what I want. So my question is: how can I achieve my goal of responding to incoming messages on the WebSocket while the long-running task is running? I am interested in two (or more) approaches:
(1) Spawning the long-running task in a different thread, for example with asyncio.to_thread(), or by stuffing a message into a separate queue that another thread reads and then executes the long-running task (like a producer/consumer queue). Furthermore, I can see how, using those same queues, the long-running task could send acknowledgment messages on completion back to the Starlette/async thread, and from there back to the client over the WebSocket, to tell them a task has completed.
(2) Somehow achieving this "purely async", meaning mostly or entirely with features/methods from the asyncio package. This might delve into synchronous or blocking code, but here I'm thinking of things like organizing my coroutines into a TaskGroup() to get concurrent execution, using call_soon(), using run_in_executor(), etc. I'm really interested in hearing about this approach! But I'm skeptical, since it may be convoluted. The spirit of this is mentioned here: Long-running tasks with async server
I can certainly see the path to completion for approach (1). So I'm debating how far down the "pure async" path to go; maybe Starlette (running in its own thread) is the only async portion of my entire app, and the rest of my (CPU-bound, blocking) app lives on a different (synchronous) thread, with the Starlette async thread and the CPU-bound sync thread simply coordinating via a queue. This is where I'm headed, but I'd like to hear some thoughts on whether a "pure async" approach could be reasonably implemented. Stated differently, if someone could refactor the code snippet below to work as intended (responding immediately to "Send" while the long-running task is running), using only or mostly methods from asyncio, that would be a good demonstration.
from starlette.applications import Starlette
from starlette.responses import HTMLResponse
from starlette.routing import Route, WebSocketRoute
import uvicorn
import asyncio

index_str = """<!DOCTYPE HTML>
<html>
<head>
    <script type = "text/javascript">
        const websocket = new WebSocket("ws://127.0.0.1:80");
        window.addEventListener("DOMContentLoaded", () => {
            websocket.onmessage = ({ data }) => {
                console.log('Received: ' + data)
                document.body.innerHTML += data + "<br>";
            };
        });
    </script>
</head>
<body>
    WebSocket Async Experiment<br>
    <button onclick="websocket.send(Math.floor(Math.random()*10))">Send</button><br>
    <button onclick="websocket.send('begin')">Begin</button><br>
    <button onclick="websocket.send('close')">Close</button><br>
</body>
</html>
"""

def homepage(request):
    return HTMLResponse(index_str)

async def simulate_long_process(websocket):
    await websocket.send_text('Running long process...')
    await asyncio.sleep(5.0)

async def websocket_endpoint(websocket):
    await websocket.accept()
    await websocket.send_text('Server connected')
    while True:
        msg = await websocket.receive_text()
        print(f'server received: {msg}')
        if msg == 'begin':
            await simulate_long_process(websocket)
        elif msg == 'close':
            await websocket.send_text('Server closed')
            break
        else:
            await websocket.send_text(f'Server received {msg} from client')
    await websocket.close()
    print('Server closed')

if __name__ == '__main__':
    routes = [
        Route('/', homepage),
        WebSocketRoute('/', websocket_endpoint)]
    app = Starlette(debug=True, routes=routes)
    uvicorn.run(app, host='0.0.0.0', port=80)
First:
However, part of me was expecting that await simulate_long_process(websocket) would signal the event loop that it could go work on other tasks
That is exactly what await means: it means, "stop executing this coroutine (websocket_endpoint) while we wait for a result from simulate_long_process, and go service other coroutines".
As it happens, you don't have any concurrent coroutines running, so this just pauses things until simulate_long_process returns.
Second:
Even if you were to run simulate_long_process concurrently (e.g., by creating a task using asyncio.create_task and then checking if it's complete), your while loop blocks waiting for text from the client. This means that you can't, for instance, send the client a message the moment simulate_long_process completes, because the client needs to send you something before the body of the while loop can execute.
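That said, if you only need the echo behavior to stay responsive (and can live without an immediate completion message, per the caveat above), here is a minimal sketch of the create_task variant of the original endpoint, changing only the 'begin' branch:

async def websocket_endpoint(websocket):
    await websocket.accept()
    await websocket.send_text('Server connected')
    task = None
    while True:
        msg = await websocket.receive_text()
        if msg == 'begin':
            # Schedule the coroutine instead of awaiting it; keep a
            # reference so the task isn't garbage-collected early.
            task = asyncio.create_task(simulate_long_process(websocket))
        elif msg == 'close':
            await websocket.send_text('Server closed')
            break
        else:
            # receive_text() above suspends this coroutine, which is what
            # lets the scheduled task make progress between messages.
            await websocket.send_text(f'Server received {msg} from client')
    await websocket.close()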
I haven't worked with Starlette before, so this may not be the most canonical solution, but here's an implementation that uses a WebSocketEndpoint to implement the desired behavior:
from starlette.applications import Starlette
from starlette.responses import HTMLResponse
from starlette.routing import Route, WebSocketRoute
from starlette.endpoints import WebSocketEndpoint
import uvicorn
import asyncio

SERVER_PORT = 8000

index_str = """<!DOCTYPE HTML>
<html>
<head>
    <script type = "text/javascript">
        const websocket = new WebSocket("ws://127.0.0.1:%s");
        window.addEventListener("DOMContentLoaded", () => {
            websocket.onmessage = ({ data }) => {
                console.log('Received: ' + data)
                document.body.innerHTML += data + "<br>";
            };
        });
    </script>
</head>
<body>
    WebSocket Async Experiment<br>
    <button onclick="websocket.send(Math.floor(Math.random()*10))">Send</button><br>
    <button onclick="websocket.send('begin')">Begin</button><br>
    <button onclick="websocket.send('close')">Close</button><br>
</body>
</html>
""" % (SERVER_PORT)

def homepage(request):
    return HTMLResponse(index_str)

class Consumer(WebSocketEndpoint):
    encoding = 'text'
    task = None

    async def on_connect(self, ws):
        await ws.accept()

    async def on_receive(self, ws, data):
        match data:
            case 'begin':
                if self.task is not None:
                    await ws.send_text('background task is already running')
                    return
                await ws.send_text('start background task')
                self.task = asyncio.create_task(self.simulate_long_task(ws))
            case 'close':
                await ws.send_text('closing connection')
                await ws.close()
            case _:
                await ws.send_text(f'Server received {data} from client')

    async def simulate_long_task(self, ws):
        await ws.send_text('start long process')
        await asyncio.sleep(5)
        await ws.send_text('finish long process')
        self.task = None

    async def on_disconnect(self, ws, close_code):
        pass

if __name__ == '__main__':
    routes = [
        Route('/', homepage),
        WebSocketRoute('/', Consumer)]
    app = Starlette(debug=True, routes=routes)
    uvicorn.run(app, host='0.0.0.0', port=SERVER_PORT)
(Note that this by default uses port 8000 instead of port 80 because I already have something running on port 80 locally.)
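For approach (1) from the question, here is a minimal sketch of offloading genuinely blocking work with asyncio.to_thread (Python 3.9+); blocking_work is a hypothetical stand-in, and simulate_long_task is the variant you would drop into Consumer above:

import asyncio
import time

def blocking_work() -> int:
    # Hypothetical stand-in for blocking/CPU-bound work.
    time.sleep(5.0)
    return 0

async def simulate_long_task(ws):
    await ws.send_text('start long process')
    # The blocking call runs in a worker thread, so the event loop keeps
    # servicing incoming WebSocket messages in the meantime.
    result = await asyncio.to_thread(blocking_work)
    await ws.send_text(f'finish long process (code {result})')

Note that for heavy pure-Python CPU work the GIL still limits parallelism, so a ProcessPoolExecutor via loop.run_in_executor may be a better fit there.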
Issue
My problem is that I can't write a server that streams the response my application sends back.
The response is not retrieved chunk by chunk, but as a single block once the iterator has finished iterating.
Approach
When I write the response with the write method of Request, it is correctly treated as a chunk that we are sending.
I checked whether there is a buffer size used by Twisted, but the message size check seems to be done in doWrite.
After spending some time debugging, it seems that the reactor only reads and writes at the end.
If I understood correctly how a reactor works in Twisted, it writes and reads when the file descriptor is available.
What is a file descriptor in Twisted?
Why is it not available after writing the response?
Example
I have written a minimal script of what I would like my server to look like.
It's a "ASGI-like" server that runs an application, iterates over a function that returns a very large string:
# async_stream_server.py
import asyncio
from twisted.internet import asyncioreactor
twisted_loop = asyncio.new_event_loop()
asyncioreactor.install(twisted_loop)

import time
from sys import stdout

from twisted.web import http
from twisted.python.log import startLogging
from twisted.internet import reactor, endpoints

CHUNK_SIZE = 2**16

def async_partial(async_fn, *partial_args):
    async def wrapped(*args):
        return await async_fn(*partial_args, *args)
    return wrapped

def iterable_content():
    for _ in range(5):
        time.sleep(1)
        yield b"a" * CHUNK_SIZE

async def application(send):
    for part in iterable_content():
        await send(
            {
                "body": part,
                "more_body": True,
            }
        )
    await send({"more_body": False})

class Dummy(http.Request):
    def process(self):
        asyncio.ensure_future(
            application(send=async_partial(self.handle_reply)),
            loop=asyncio.get_event_loop()
        )

    async def handle_reply(self, message):
        http.Request.write(self, message.get("body", b""))
        if not message.get("more_body", False):
            http.Request.finish(self)
        print('HTTP response chunk')

class DummyFactory(http.HTTPFactory):
    def buildProtocol(self, addr):
        protocol = http.HTTPFactory.buildProtocol(self, addr)
        protocol.requestFactory = Dummy
        return protocol

startLogging(stdout)
endpoints.serverFromString(reactor, "tcp:1234").listen(DummyFactory())
asyncio.set_event_loop(reactor._asyncioEventloop)
reactor.run()
To execute this example:
- in a terminal, run: python async_stream_server.py
- in another terminal, run: curl http://localhost:1234/
You will have to wait a while before you see the whole message.
Details
$ python --version
Python 3.10.4
$ pip list
Package Version Editable project location
----------------- ------- --------------------------------------------------
asgiref 3.5.0
Twisted 22.4.0
You just need to sprinkle some more async over it.
As written, the iterable_content generator blocks the reactor until it finishes generating content. This is why you see no results until it is done. The reactor does not get control of execution back until it finishes.
That's only because you used time.sleep to insert a delay into it. time.sleep blocks. This -- and everything else in the "asynchronous" application -- is really synchronous and keeps control of execution until it is done.
If you replace iterable_content with something that's really asynchronous, like an asynchronous generator:
async def iterable_content():
    for _ in range(5):
        await asyncio.sleep(1)
        yield b"a" * CHUNK_SIZE
and then iterate over it asynchronously with async for:
async def application(send):
    async for part in iterable_content():
        await send(
            {
                "body": part,
                "more_body": True,
            }
        )
    await send({"more_body": False})
then the reactor has a chance to run in between iterations and the server begins to produce output chunk by chunk.
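If the content source were genuinely blocking and couldn't be rewritten, a hedged alternative (not part of the answer above) is to pull each chunk on a worker thread; iterate_in_thread is a hypothetical adapter built on asyncio.to_thread (Python 3.9+):

import asyncio

async def iterate_in_thread(sync_iterable):
    # Run each next() call in a worker thread so the asyncio loop (and
    # the Twisted reactor on top of it) keeps running between chunks.
    it = iter(sync_iterable)
    sentinel = object()
    while True:
        item = await asyncio.to_thread(next, it, sentinel)
        if item is sentinel:
            break
        yield item

async def application(send):
    async for part in iterate_in_thread(iterable_content()):
        await send({"body": part, "more_body": True})
    await send({"more_body": False})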
I am trying to set up a FastAPI server that will take as input some biological data, and run some processing on them. Since the processing takes up all the server's resources, queries should be processed sequentially. However, the server should stay responsive and add further requests in a buffer. I've been trying to use the BackgroundTasks module for this, but after sending the second query, the response gets delayed while the task is running. Any help appreciated, and thanks in advance.
import os
import sys
import time
from dataclasses import dataclass
from fastapi import FastAPI, Request, BackgroundTasks

EXPERIMENTS_BASE_DIR = "/experiments/"
QUERY_BUFFER = {}

app = FastAPI()

@dataclass
class Query():
    query_name: str
    query_sequence: str
    experiment_id: str = None
    status: str = "pending"

    def __post_init__(self):
        self.experiment_id = str(time.time())
        self.experiment_dir = os.path.join(EXPERIMENTS_BASE_DIR, self.experiment_id)
        os.makedirs(self.experiment_dir, exist_ok=False)

    def run(self):
        self.status = "running"
        # perform some long task using the query sequence and get a return code #
        self.status = "finished"
        return 0  # or another code depending on the final output

@app.post("/")
async def root(request: Request, background_tasks: BackgroundTasks):
    query_data = await request.body()
    query_data = query_data.decode("utf-8")
    query_data = dict(str(x).split("=") for x in query_data.split("&"))
    query = Query(**query_data)
    QUERY_BUFFER[query.experiment_id] = query
    background_tasks.add_task(process, query)
    return {"Query created": query, "Query ID": query.experiment_id, "Backlog Length": len(QUERY_BUFFER)}

async def process(query):
    """ Process query and generate data"""
    ret_code = await query.run()
    del QUERY_BUFFER[query.experiment_id]
    print(f'Query {query.experiment_id} processing finished with return code {ret_code}.')

@app.get("/backlog/")
def return_backlog():
    return {f"Currently {len(QUERY_BUFFER)} jobs in the backlog."}
EDIT:
The original answer was influenced by testing with httpx.AsyncClient (as flagged as a possibility in the original caveat). The test client causes background tasks to block in a way they do not block without it. As such, there's a simpler solution, provided you don't want to test with httpx.AsyncClient. The new solution runs under uvicorn, and I tested it manually with Postman instead.
This solution uses a plain function as the background task (process) so that it runs outside the main thread. It then schedules a job to run aprocess, which will run in the main thread when the event loop gets a chance. The aprocess coroutine is then able to await the run coroutine of your Query as before.
Additionally, I've added a time.sleep(10) to the process function to illustrate that even long-running non-IO tasks will not prevent your original HTTP session from sending a response back to the client (although this only works if the task releases the GIL; if it's CPU-bound, you might want a separate process altogether, using multiprocessing or a separate service). Finally, I've replaced the prints with logging so that they work along with the uvicorn logging.
import asyncio
import os
import sys
import time
from dataclasses import dataclass
from fastapi import FastAPI, Request, BackgroundTasks
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)-9s %(asctime)s - %(name)s - %(message)s")
LOGGER = logging.getLogger(__name__)

EXPERIMENTS_BASE_DIR = "/experiments/"
QUERY_BUFFER = {}

app = FastAPI()
loop = asyncio.get_event_loop()

@dataclass
class Query():
    query_name: str
    query_sequence: str
    experiment_id: str = None
    status: str = "pending"

    def __post_init__(self):
        self.experiment_id = str(time.time())
        self.experiment_dir = os.path.join(EXPERIMENTS_BASE_DIR, self.experiment_id)
        # os.makedirs(self.experiment_dir, exist_ok=False)  # Commented out for testing

    async def run(self):
        self.status = "running"
        await asyncio.sleep(5)  # simulate long running query
        # perform some long task using the query sequence and get a return code #
        self.status = "finished"
        return 0  # or another code depending on the final output

@app.post("/")
async def root(request: Request, background_tasks: BackgroundTasks):
    query_data = await request.body()
    query_data = query_data.decode("utf-8")
    query_data = dict(str(x).split("=") for x in query_data.split("&"))
    query = Query(**query_data)
    QUERY_BUFFER[query.experiment_id] = query
    background_tasks.add_task(process, query)
    LOGGER.info(f'root - added task')
    return {"Query created": query, "Query ID": query.experiment_id, "Backlog Length": len(QUERY_BUFFER)}

def process(query):
    """ Schedule processing of query, and then run some long running non-IO job without blocking the app"""
    asyncio.run_coroutine_threadsafe(aprocess(query), loop)
    LOGGER.info(f"process - {query.experiment_id} - Submitted query job. Now run non-IO work for 10 seconds...")
    time.sleep(10)  # simulate long running non-IO work, does not block app as this is in another thread - provided it is not cpu bound.
    LOGGER.info(f'process - {query.experiment_id} - wake up!')

async def aprocess(query):
    """ Process query and generate data """
    ret_code = await query.run()
    del QUERY_BUFFER[query.experiment_id]
    LOGGER.info(f'aprocess - Query {query.experiment_id} processing finished with return code {ret_code}.')

@app.get("/backlog/")
def return_backlog():
    return {f"return_backlog - Currently {len(QUERY_BUFFER)} jobs in the backlog."}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run("scratch_26:app", host="127.0.0.1", port=8000)
ORIGINAL ANSWER:
A caveat on this answer: I've tried testing this with httpx.AsyncClient, which might account for different behavior compared to deploying behind uvicorn.
From what I can tell (and I am very open to correction on this), BackgroundTasks actually need to complete prior to an HTTP response being sent. This is not what the Starlette docs or the FastAPI docs say, but it appears to be the case, at least while using the httpx AsyncClient.
Whether you add a coroutine (which is executed in the main thread) or a function (which gets executed in its own side thread), that HTTP response is blocked from being sent until the background task is complete.
If you want to await a long running (asyncio friendly) task, you can get around this problem by using a wrapper function. The wrapper function adds the real task (a coroutine, since it will be using await) to the event loop and then returns. Since this is very fast, the fact that it "blocks" no longer matters (assuming a few milliseconds doesn't matter).
The real task then gets executed in turn (but after the initial HTTP response has been sent), and although it's on the main thread, the asyncio part of the function will not block.
You could try this:
@app.post("/")
async def root(request: Request, background_tasks: BackgroundTasks):
    ...
    background_tasks.add_task(process_wrapper, query)
    ...

async def process_wrapper(query):
    loop = asyncio.get_event_loop()
    loop.create_task(process(query))

async def process(query):
    """ Process query and generate data"""
    ret_code = await query.run()
    del QUERY_BUFFER[query.experiment_id]
    print(f'Query {query.experiment_id} processing finished with return code {ret_code}.')
Note also that you'll need to make your run() function a coroutine by adding the async keyword, since you're expecting to await it from your process() function.
Here's a full working example that uses httpx.AsyncClient to test it. I've added the fmt_duration helper function to show the lapsed time for illustrative purposes. I've also commented out the code that creates directories, and simulated a 2 second query duration in the run() function.
import asyncio
import os
import sys
import time
from dataclasses import dataclass
from fastapi import FastAPI, Request, BackgroundTasks
from httpx import AsyncClient

EXPERIMENTS_BASE_DIR = "/experiments/"
QUERY_BUFFER = {}

app = FastAPI()
start_ts = time.time()

@dataclass
class Query():
    query_name: str
    query_sequence: str
    experiment_id: str = None
    status: str = "pending"

    def __post_init__(self):
        self.experiment_id = str(time.time())
        self.experiment_dir = os.path.join(EXPERIMENTS_BASE_DIR, self.experiment_id)
        # os.makedirs(self.experiment_dir, exist_ok=False)  # Commented out for testing

    async def run(self):
        self.status = "running"
        await asyncio.sleep(2)  # simulate long running query
        # perform some long task using the query sequence and get a return code #
        self.status = "finished"
        return 0  # or another code depending on the final output

@app.post("/")
async def root(request: Request, background_tasks: BackgroundTasks):
    query_data = await request.body()
    query_data = query_data.decode("utf-8")
    query_data = dict(str(x).split("=") for x in query_data.split("&"))
    query = Query(**query_data)
    QUERY_BUFFER[query.experiment_id] = query
    background_tasks.add_task(process_wrapper, query)
    print(f'{fmt_duration()} - root - added task')
    return {"Query created": query, "Query ID": query.experiment_id, "Backlog Length": len(QUERY_BUFFER)}

async def process_wrapper(query):
    loop = asyncio.get_event_loop()
    loop.create_task(process(query))

async def process(query):
    """ Process query and generate data"""
    ret_code = await query.run()
    del QUERY_BUFFER[query.experiment_id]
    print(f'{fmt_duration()} - process - Query {query.experiment_id} processing finished with return code {ret_code}.')

@app.get("/backlog/")
def return_backlog():
    return {f"{fmt_duration()} - return_backlog - Currently {len(QUERY_BUFFER)} jobs in the backlog."}

async def test_me():
    async with AsyncClient(app=app, base_url="http://example") as ac:
        res = await ac.post("/", content="query_name=foo&query_sequence=42")
        print(f"{fmt_duration()} - [{res.status_code}] - {res.content.decode('utf8')}")
        res = await ac.post("/", content="query_name=bar&query_sequence=43")
        print(f"{fmt_duration()} - [{res.status_code}] - {res.content.decode('utf8')}")
        content = ""
        while not content.endswith('0 jobs in the backlog."]'):
            await asyncio.sleep(1)
            backlog_results = await ac.get("/backlog")
            content = backlog_results.content.decode("utf8")
            print(f"{fmt_duration()} - test_me - content: {content}")

def fmt_duration():
    return f"Progress time: {time.time() - start_ts:.3f}s"

loop = asyncio.get_event_loop()
print(f'starting loop...')
loop.run_until_complete(test_me())
duration = time.time() - start_ts
print(f'Finished. Duration: {duration:.3f} seconds.')
In my local environment, if I run the above I get this output:
starting loop...
Progress time: 0.005s - root - added task
Progress time: 0.006s - [200] - {"Query created":{"query_name":"foo","query_sequence":"42","experiment_id":"1627489235.9300923","status":"pending","experiment_dir":"/experiments/1627489235.9300923"},"Query ID":"1627489235.9300923","Backlog Length":1}
Progress time: 0.007s - root - added task
Progress time: 0.009s - [200] - {"Query created":{"query_name":"bar","query_sequence":"43","experiment_id":"1627489235.932097","status":"pending","experiment_dir":"/experiments/1627489235.932097"},"Query ID":"1627489235.932097","Backlog Length":2}
Progress time: 1.016s - test_me - content: ["Progress time: 1.015s - return_backlog - Currently 2 jobs in the backlog."]
Progress time: 2.008s - process - Query 1627489235.9300923 processing finished with return code 0.
Progress time: 2.008s - process - Query 1627489235.932097 processing finished with return code 0.
Progress time: 2.041s - test_me - content: ["Progress time: 2.041s - return_backlog - Currently 0 jobs in the backlog."]
Finished. Duration: 2.041 seconds.
I also tried making process_wrapper a plain function so that Starlette executes it in a new thread. This works the same way; just use run_coroutine_threadsafe instead of create_task, i.e.:
def process_wrapper(query):
    loop = asyncio.get_event_loop()
    asyncio.run_coroutine_threadsafe(process(query), loop)
If there is some other way to get a background task to run without blocking the HTTP response I'd love to find out how, but absent that this wrapper solution should work.
I think your issue is in the task you want to run, not in the BackgroundTask itself.
FastAPI (and the underlying Starlette, which is responsible for running the background tasks) is built on top of asyncio and handles all requests asynchronously. That means that while one request is being processed, if there is an IO operation that supports the asynchronous approach, FastAPI will switch to the next request in the queue while that IO operation is pending.
The same goes for any background tasks added to the queue. If a background task is pending, other requests and background tasks will be handled only while FastAPI is waiting for an IO operation.
As you may see, this is not ideal when your view or task either doesn't have any IO operations or cannot run them asynchronously. There is a workaround for that situation:
- Declare your views or tasks as normal, non-asynchronous functions. Starlette will then run them in a separate thread, outside of the main async loop, so other requests can be handled at the same time.
- Manually run the part of your logic that may block the processing of other requests using asgiref.sync_to_async. This will also cause that logic to be executed in a separate thread, releasing the main async loop to take care of other requests until the function returns.
If you are not doing any asynchronous IO operations in your long-running task, the first approach will be most suitable for you. Otherwise, you should take any part of your code that is either long-running or performs any non-asynchronous IO operations and wrap it with sync_to_async.
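A minimal sketch of the sync_to_async workaround, assuming heavy_computation stands in for the blocking part of your own task:

import time
from asgiref.sync import sync_to_async

def heavy_computation(sequence):
    # Hypothetical blocking, non-asynchronous part of the task.
    time.sleep(5)
    return 0

async def process(query):
    # The blocking part runs in a worker thread, so the main async loop
    # stays free to handle other requests until it returns.
    ret_code = await sync_to_async(heavy_computation)(query.query_sequence)
    print(f'finished with return code {ret_code}')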
I am working on a Flask app in which the response to the client depends on replies that I get from a couple of external APIs. The requests to these APIs are logically independent from each other, so a speed gain can be realized by sending these requests in parallel (in the example below response time would be cut almost in half).
It seems to me the simplest and most modern way to achieve this is to use asyncio and process all work in a separate async function that is called from the flask view function using asyncio.run(). I have included a short working example below.
Using celery or any other type of queue with a separate worker process does not really make sense here, because the response has to wait for the API results anyway before sending a reply. As far as I can see this is a variant of this idea where a processing loop is accessed through asyncio. There are certainly applications for this, but I think if we really just want to parallelize IO before answering a request this is unnecessarily complicated.
However, I know that there can be some pitfalls in using various kinds of multithreading from within Flask. Therefore my questions are:
Would the implementation below be considered safe when used in a production environment? How does that depend on the kind of server that we run Flask on? Particularly, the built-in development server or a typical multi-worker gunicorn setup such as suggested on https://flask.palletsprojects.com/en/1.1.x/deploying/wsgi-standalone/#gunicorn?
Are there any considerations to be made about Flask's app and request contexts in the async function or can I simply use them as I would in any other function? I.e. can I simply import current_app to access my application config or use the g and session objects? When writing to them possible race conditions would clearly have to be considered, but are there any other issues? In my basic tests (not in example) everything seems to work alright.
Are there any other solutions that would improve on this?
Here is my example application. Since the asyncio interface changed a bit over time, it is probably worth noting that I tested this on Python 3.7 and 3.8, and I have done my best to avoid deprecated parts of asyncio.
import asyncio
import random
import time
from flask import Flask

app = Flask(__name__)

async def contact_api_a():
    print(f'{time.perf_counter()}: Start request 1')
    # This sleep simulates querying and having to wait for an external API
    await asyncio.sleep(2)
    # Here is our simulated API reply
    result = random.random()
    print(f'{time.perf_counter()}: Finish request 1')
    return result

async def contact_api_b():
    print(f'{time.perf_counter()}: Start request 2')
    await asyncio.sleep(1)
    result = random.random()
    print(f'{time.perf_counter()}: Finish request 2')
    return result

async def contact_apis():
    # Create the two tasks
    task_a = asyncio.create_task(contact_api_a())
    task_b = asyncio.create_task(contact_api_b())
    # Wait for both API requests to finish
    result_a, result_b = await asyncio.gather(task_a, task_b)
    print(f'{time.perf_counter()}: Finish both requests')
    return result_a, result_b

@app.route('/')
def hello_world():
    start_time = time.perf_counter()
    # All async processes are organized in a separate function
    result_a, result_b = asyncio.run(contact_apis())
    # We implement some final business logic before finishing the request
    final_result = result_a + result_b
    processing_time = time.perf_counter() - start_time
    return f'Result: {final_result:.2f}; Processing time: {processing_time:.2f}'
This will be safe to run in production, but asyncio will not work efficiently with the Gunicorn async workers such as gevent or eventlet. This is because result_a, result_b = asyncio.run(contact_apis()) will block the gevent/eventlet event loop until it completes, whereas the gevent/eventlet spawn equivalents would not. The Flask development server shouldn't be used in production. The Gunicorn threaded workers (or multiple Gunicorn processes) will be fine, as asyncio will only block the thread/process.
The globals will work fine as they are tied to either the thread (threaded workers) or green-thread (gevent/eventlet) and not to the asyncio task.
I would say Quart is an improvement (I'm the Quart author). Quart is the Flask API re-implemented on top of asyncio. With Quart, the snippet above becomes:
import asyncio
import random
import time
from quart import Quart

app = Quart(__name__)

async def contact_api_a():
    print(f'{time.perf_counter()}: Start request 1')
    # This sleep simulates querying and having to wait for an external API
    await asyncio.sleep(2)
    # Here is our simulated API reply
    result = random.random()
    print(f'{time.perf_counter()}: Finish request 1')
    return result

async def contact_api_b():
    print(f'{time.perf_counter()}: Start request 2')
    await asyncio.sleep(1)
    result = random.random()
    print(f'{time.perf_counter()}: Finish request 2')
    return result

async def contact_apis():
    # Create the two tasks
    task_a = asyncio.create_task(contact_api_a())
    task_b = asyncio.create_task(contact_api_b())
    # Wait for both API requests to finish
    result_a, result_b = await asyncio.gather(task_a, task_b)
    print(f'{time.perf_counter()}: Finish both requests')
    return result_a, result_b

@app.route('/')
async def hello_world():
    start_time = time.perf_counter()
    # All async processes are organized in a separate function
    result_a, result_b = await contact_apis()
    # We implement some final business logic before finishing the request
    final_result = result_a + result_b
    processing_time = time.perf_counter() - start_time
    return f'Result: {final_result:.2f}; Processing time: {processing_time:.2f}'
I'd also suggest using an asyncio-based request library such as httpx.
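For instance, here is a minimal sketch of contact_apis rewritten with httpx (the endpoint URLs are hypothetical placeholders):

import asyncio
import httpx

async def contact_apis():
    # One shared client; both GETs run concurrently and gather() waits
    # for both responses.
    async with httpx.AsyncClient() as client:
        resp_a, resp_b = await asyncio.gather(
            client.get("https://api-a.example.com/value"),  # hypothetical URL
            client.get("https://api-b.example.com/value"),  # hypothetical URL
        )
    return resp_a.json(), resp_b.json()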
I'm trying to create a Python-based CLI that communicates with a web service via websockets. One issue that I'm encountering is that requests made by the CLI to the web service intermittently fail to get processed. Looking at the logs from the web service, I can see that the problem is caused by the fact that frequently these requests are being made at the same time (or even after) the socket has closed:
2016-09-13 13:28:10,930 [22 ] INFO DeviceBridge - Device bridge has opened
2016-09-13 13:28:11,936 [21 ] DEBUG DeviceBridge - Device bridge has received message
2016-09-13 13:28:11,937 [21 ] DEBUG DeviceBridge - Device bridge has received valid message
2016-09-13 13:28:11,937 [21 ] WARN DeviceBridge - Unable to process request: {"value": false, "path": "testcube.pwms[0].enabled", "op": "replace"}
2016-09-13 13:28:11,936 [5 ] DEBUG DeviceBridge - Device bridge has closed
In my CLI I define a class CommunicationService that is responsible for handling all direct communication with the web service. Internally, it uses the websockets package to handle communication, which itself is built on top of asyncio.
CommunicationService contains the following method for sending requests:
def send_request(self, request: str) -> None:
    logger.debug('Sending request: {}'.format(request))
    asyncio.ensure_future(self._ws.send(request))
...where ws is a websocket opened earlier in another method:
self._ws = await websockets.connect(websocket_address)
What I want is to be able to await the future returned by asyncio.ensure_future and, if necessary, sleep for a short while after in order to give the web service time to process the request before the websocket is closed.
However, since send_request is a synchronous method, it can't simply await these futures. Making it asynchronous would be pointless as there would be nothing to await the coroutine object it returned. I also can't use loop.run_until_complete as the loop is already running by the time it is invoked.
I found someone describing a problem very similar to the one I have at mail.python.org. The solution that was posted in that thread was to make the function return the coroutine object in the case the loop was already running:
def aio_map(coro, iterable, loop=None):
    if loop is None:
        loop = asyncio.get_event_loop()
    coroutines = map(coro, iterable)
    coros = asyncio.gather(*coroutines, return_exceptions=True, loop=loop)
    if loop.is_running():
        return coros
    else:
        return loop.run_until_complete(coros)
This is not possible for me, as I'm working with PyRx (Python implementation of the reactive framework) and send_request is only called as a subscriber of an Rx observable, which means the return value gets discarded and is not available to my code:
class AnonymousObserver(ObserverBase):
    ...
    def _on_next_core(self, value):
        self._next(value)
On a side note, I'm not sure if this is some sort of problem with asyncio that's commonly come across or whether I'm just not getting it, but I'm finding it pretty frustrating to use. In C# (for instance), all I would need to do is probably something like the following:
void SendRequest(string request)
{
    this.ws.Send(request).Wait();
    // Task.Delay(500).Wait(); // Uncomment if necessary
}
Meanwhile, asyncio's version of "wait" unhelpfully just returns another coroutine that I'm forced to discard.
Update
I've found a way around this issue that seems to work. I have an asynchronous callback that gets executed after the command has executed and before the CLI terminates, so I just changed it from this...
async def after_command():
    await comms.stop()
...to this:
async def after_command():
    await asyncio.sleep(0.25)  # Allow time for communication
    await comms.stop()
I'd still be happy to receive any answers to this problem for future reference, though. I might not be able to rely on workarounds like this in other situations, and I still think it would be better practice to have the delay executed inside send_request so that clients of CommunicationService do not have to concern themselves with timing issues.
In regards to Vincent's question:
Does your loop run in a different thread, or is send_request called by some callback?
Everything runs in the same thread - it's called by a callback. What happens is that I define all my commands to use asynchronous callbacks, and when executed some of them will try to send a request to the web service. Since they're asynchronous, they don't do this until they're executed via a call to loop.run_until_complete at the top level of the CLI - which means the loop is running by the time they're mid-way through execution and making this request (via an indirect call to send_request).
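For reference, if the loop had been running in a dedicated thread instead, send_request could have blocked on the future safely. A minimal sketch of that pattern, assuming the websocket was connected on that loop:

import asyncio
import threading

# Run the event loop in its own thread.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

def send_request(ws, request: str) -> None:
    # This is not the loop thread, so blocking on the future here does
    # not starve the event loop.
    future = asyncio.run_coroutine_threadsafe(ws.send(request), loop)
    future.result(timeout=5)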
Update 2
Here's a solution based on Vincent's proposal of adding a "done" callback.
A new boolean field _busy is added to CommunicationService to represent if comms activity is occurring or not.
CommunicationService.send_request is modified to set _busy true before sending the request, and then provides a callback to _ws.send to reset _busy once done:
def send_request(self, request: str) -> None:
    logger.debug('Sending request: {}'.format(request))

    def callback(_):
        self._busy = False

    self._busy = True
    asyncio.ensure_future(self._ws.send(request)).add_done_callback(callback)
CommunicationService.stop is now implemented to wait for this flag to be set false before progressing:
async def stop(self) -> None:
    """
    Terminate communications with TestCube Web Service.
    """
    if self._listen_task is None or self._ws is None:
        return
    # Wait for comms activity to stop.
    while self._busy:
        await asyncio.sleep(0.1)
    # Allow short delay after final request is processed.
    await asyncio.sleep(0.1)
    self._listen_task.cancel()
    await asyncio.wait([self._listen_task, self._ws.close()])
    self._listen_task = None
    self._ws = None
    logger.info('Terminated connection to TestCube Web Service')
This seems to work too, and at least this way all communication timing logic is encapsulated within the CommunicationService class as it should be.
Update 3
Nicer solution based on Vincent's proposal.
Instead of self._busy we have self._send_request_tasks = [].
New send_request implementation:
def send_request(self, request: str) -> None:
    logger.debug('Sending request: {}'.format(request))
    task = asyncio.ensure_future(self._ws.send(request))
    self._send_request_tasks.append(task)
New stop implementation:
async def stop(self) -> None:
    if self._listen_task is None or self._ws is None:
        return
    # Wait for comms activity to stop.
    if self._send_request_tasks:
        await asyncio.wait(self._send_request_tasks)
    ...
You could use a set of tasks:
self._send_request_tasks = set()
Schedule the tasks using ensure_future and clean up using add_done_callback:
def send_request(self, request: str) -> None:
    task = asyncio.ensure_future(self._ws.send(request))
    self._send_request_tasks.add(task)
    task.add_done_callback(self._send_request_tasks.remove)
And wait for the set of tasks to complete:
async def stop(self):
    if self._send_request_tasks:
        await asyncio.wait(self._send_request_tasks)
Given that you're not inside an asynchronous function you can use the yield from keyword to effectively implement await yourself. The following code will block until the future returns:
def send_request(self, request: str) -> None:
    logger.debug('Sending request: {}'.format(request))
    future = asyncio.ensure_future(self._ws.send(request))
    yield from future.__await__()
I have the following HTTP server written using Tornado:
def reindex(index):
    # After some initialization, we execute a process and wait for its output
    result = subprocess.check_output([indexerBinPath, arg])

class ReindexRequestHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def post(self):
        reindexRequest = json.loads(self.request.body)
        p = self.application.settings.get('pool')
        p.apply_async(reindex, [reindexRequest['IndexName']], callback=self.onIndexingFinished)

    def onIndexingFinished(self, output):
        self.flush()
        self.finish()
        logger.info('Async callback: finished')

application = tornado.web.Application([
    (r"/reindex", ReindexRequestHandler)
], pool=Pool(8), queue=Queue())

if __name__ == "__main__":
    application.listen(8625)
    try:
        tornado.ioloop.IOLoop.instance().start()
    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
In the POST handler, I asynchronously execute the reindex function which in turn launches a process and wait for it to finish. That works fine - the process is always executed correctly. The process may, depending on its arguments, take up to several minutes to finish. If it completes within seconds, everything works fine.
However, when it takes e.g. over 3 minutes to complete, the HTTP client which sent the POST request never gets the answer. From the standpoint of the server, it looks ok - I can see Async callback: finished logged. However, the HTTP client waits indefinitely for the response (until it fails with a timeout). I had tried both Fiddler's request composer and the .NET HttpClient class.
Why does the HTTP client never gets the response if the request takes long to process?
I had a similar handler, and the self.finish() call is what triggers the response back to the client. So if you move that line above your p.apply_async call, it ought to work as you intend.
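A minimal sketch of that reordering, reusing the imports and definitions from the question's snippet, and assuming the client only needs an acknowledgment rather than the indexing result:

class ReindexRequestHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def post(self):
        reindexRequest = json.loads(self.request.body)
        p = self.application.settings.get('pool')
        # Respond to the client immediately...
        self.flush()
        self.finish()
        # ...then let the pool job run in the background.
        p.apply_async(reindex, [reindexRequest['IndexName']],
                      callback=self.onIndexingFinished)

    def onIndexingFinished(self, output):
        # The response has already been sent; just log completion.
        logger.info('Async callback: finished')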