I am a beginner in Flask and am building several APIs with it. Below is an example of one such API:
@app.route('/alarm', methods=['POST'])
def add_alarm():
    insert_task_alarmDB()
    requests.post("another server", data)
    response = requests.post("another server", data)
    requests.post("another server", data) ...
    return response.content
I would like to process the requests in the above code asynchronously. How can I do that? The approach from the official Flask 2.0 documentation does not work well for me.
You have to define the view as async (async def add_alarm():) and then use an async-capable HTTP library for the outgoing requests, for example grequests.
I have a FastAPI application for testing/development purposes. What I want is that any request that arrives to my app to automatically be sent, as is, to another app on another server, with exactly the same parameters and same endpoint. This is not a redirect, because I still want the app to process the request and return values as usual. I just want to initiate a similar request to a different version of the app on a different server, without waiting for the answer from the other server, so that the other app gets the request as if the original request was sent to it.
How can I achieve that? Below is a sample code that I use for handling the request:
@app.post("/my_endpoint/some_parameters")
def process_request(
    params: MyParamsClass,
    pwd: str = Depends(authenticate),
):
    # send the same request to http://my_other_url/my_endpoint/
    return_value = process_the_request(params)
    return return_value.as_json()
You could use the AsyncClient() from the httpx library, as described in this answer, as well as this answer and this answer (have a look at those answers for more details on the approach demonstrated below). You can spawn a Client inside the startup event handler, store it on the app instance—as described here, as well as here and here—and reuse it every time you need it. You can explicitly close the Client once you are done with it, using the shutdown event handler.
Working Example
Main Server
When building the request that is about to be forwarded to the other server, the main server uses request.stream() to read the request body from the client's request, which provides an async iterator. This way, if the client sends a request with a large body (for instance, uploading a large file), the main server does not have to wait for the entire body to be received and loaded into memory before forwarding the request. That is what would happen if you used await request.body() instead, and it would likely cause server issues if the body could not fit into RAM.
You can add multiple routes in the same way the /upload one has been defined below, specifying the path, as well as the HTTP method for the endpoint. Note that the /upload route below uses Starlette's path convertor to capture arbitrary paths, as demonstrated here and here. You could also specify the exact path parameters if you wish, but the below provides a more convenient way if there are too many of them. Regardless, the path will be evaluated against the endpoint in the other server below, where you can explicitly specify the path parameters.
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
from starlette.background import BackgroundTask
import httpx

app = FastAPI()

@app.on_event('startup')
async def startup_event():
    client = httpx.AsyncClient(base_url='http://127.0.0.1:8001/')  # this is the other server
    app.state.client = client

@app.on_event('shutdown')
async def shutdown_event():
    client = app.state.client
    await client.aclose()

async def _reverse_proxy(request: Request):
    client = request.app.state.client
    url = httpx.URL(path=request.url.path, query=request.url.query.encode('utf-8'))
    req = client.build_request(
        request.method, url, headers=request.headers.raw, content=request.stream()
    )
    r = await client.send(req, stream=True)
    return StreamingResponse(
        r.aiter_raw(),
        status_code=r.status_code,
        headers=r.headers,
        background=BackgroundTask(r.aclose)
    )

app.add_route('/upload/{path:path}', _reverse_proxy, methods=['POST'])

if __name__ == '__main__':
    import uvicorn
    uvicorn.run(app, host='0.0.0.0', port=8000)
The Other Server
Again, for simplicity, the Request object is used to read the body, but you can instead define UploadFile, Form and other parameters as usual. The app below is listening on port 8001.
from fastapi import FastAPI, Request

app = FastAPI()

@app.post('/upload/{p1}/{p2}')
async def upload(p1: str, p2: str, q1: str, request: Request):
    return {'p1': p1, 'p2': p2, 'q1': q1, 'body': await request.body()}

if __name__ == '__main__':
    import uvicorn
    uvicorn.run(app, host='0.0.0.0', port=8001)
Test the example above
import httpx
url = 'http://127.0.0.1:8000/upload/hello/world'
files = {'file': open('file.txt', 'rb')}
params = {'q1': 'This is a query param'}
r = httpx.post(url, params=params, files=files)
print(r.content)
Using FastAPI in a sync, not async, mode, I would like to be able to receive the raw, unchanged body of a POST request.
All the examples I can find show async code; when I try it in a normal sync way, request.body() shows up as a coroutine object.
When I test it by posting some XML to this endpoint, I get a 500 "Internal Server Error".
from fastapi import FastAPI, Response, Request, Body

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.post("/input")
def input_request(request: Request):
    # how can I access the RAW request body here?
    body = request.body()
    # do stuff with the body here
    return Response(content=body, media_type="application/xml")
Is this not possible with FastAPI?
Note: a simplified input request would look like:
POST http://127.0.0.1:1083/input
Content-Type: application/xml
<XML>
<BODY>TEST</BODY>
</XML>
and I have no control over how input requests are sent, because I need to replace an existing SOAP API.
Using async def endpoint
If an object is a coroutine, it needs to be awaited. FastAPI is actually Starlette underneath, and Starlette's methods for returning the request body are async methods (see the source code here as well); thus, one needs to await them (inside an async def endpoint). For example:
from fastapi import Request

@app.post("/input")
async def input_request(request: Request):
    return await request.body()
Update 1 - Using def endpoint
Alternatively, if you are confident that the incoming data is a valid JSON, you can define your endpoint with def instead, and use the Body field, as shown below (for more options on how to post JSON data, see this answer):
from fastapi import Body

@app.post("/input")
def input_request(payload: dict = Body(...)):
    return payload
If, however, the incoming data are in XML format, as in the example you provided, one option is to pass them using Files instead, as shown below—as long as you have control over how client data are sent to the server (have a look here as well). Example:
from fastapi import File

@app.post("/input")
def input_request(contents: bytes = File(...)):
    return contents
Update 2 - Using def endpoint and async dependency
As described in this post, you can use an async dependency function to pull out the body from the request. You can use async dependencies on non-async (i.e., def) endpoints as well. Hence, if there is some sort of blocking code in this endpoint that prevents you from using async/await—as I am guessing this might be the reason in your case—this is the way to go.
Note: I should also mention that this answer—which explains the difference between def and async def endpoints (that you might be aware of)—also provides solutions when you are required to use async def (as you might need to await for coroutines inside a route), but also have some synchronous expensive CPU-bound operation that might be blocking the server. Please have a look.
Example of the approach described earlier can be found below. You can uncomment the time.sleep() line, if you would like to confirm yourself that a request won't be blocking other requests from going through, as when you declare an endpoint with normal def instead of async def, it is run in an external threadpool (regardless of the async def dependency function).
from fastapi import FastAPI, Depends, Request
import time

app = FastAPI()

async def get_body(request: Request):
    return await request.body()

@app.post("/input")
def input_request(body: bytes = Depends(get_body)):
    print("New request arrived.")
    # time.sleep(5)
    return body
For convenience, you can simply use asgiref; this package provides async_to_sync and sync_to_async:
from asgiref.sync import async_to_sync
sync_body_func = async_to_sync(request.body)
print(sync_body_func())
async_to_sync executes an async function in an event loop; sync_to_async executes a sync function in a threadpool.
I am looking to create a server that takes in a request, does some processing, and forwards the request to another endpoint. I seem to be running into an issue at higher concurrency, where my client.post call causes an httpx.ConnectTimeout exception.
I haven't completely ruled out the possibility of an issue with the endpoint (I am currently working with them to debug anything that might be on their end), but I'm trying to figure out if there is something wrong on my end, or if there are any glaring inefficiencies I can improve upon.
I am running this in ECS, currently on a cluster where tasks have 4 vCPUs. I am using the docker image uvicorn-gunicorn-fastapi(https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker). Currently all default settings minus the bind/port/logging. Here is a minimal code example:
import httpx
from fastapi import FastAPI, Request, Response

app = FastAPI()

def process_request(path, request):
    # Process request here
    ...

def create_headers(path):
    # Create headers here
    ...

@app.get('/')
async def root(path: str, request: Request):
    endpoint = 'https://endpoint.com/'
    querystring = 'path=' + path
    data = process_request(path, request)
    headers = create_headers(request)
    async with httpx.AsyncClient() as client:
        await client.post(endpoint + "?" + querystring, data=data, headers=headers)
    return Response(status_code=200)
Could it be that the server on the other side is taking too long to respond, so the connection simply times out because httpx doesn't give the other endpoint enough time to complete the request?
If so, you could try increasing the timeout limit or disabling it (I suggest increasing it over disabling it).
See https://www.python-httpx.org/quickstart/#timeouts
I'm trying to mock a single request to an external URL, but the documentation only has examples for internal requests (paths starting with '/'); it's impossible to add routes that don't start with '/' in the current version of aiohttp.
I'm using pytest and pytest-aiohttp. Here is an example of the request code:
import aiohttp
import asyncio

async def fetch(client):
    async with client.get('http://python.org') as resp:
        return resp.status, (await resp.text())

async def main():
    async with aiohttp.ClientSession() as client:
        html = await fetch(client)
        print(html)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
The kind of assertion that I want to do is very simple, like check the status code, the headers, and the content.
You can patch your ClientSession (using asynctest.patch). In that case, you need to implement a simple response context manager with a .status attribute, an async .text() (and/or async .json()) method, and so on.
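A sketch of that approach, using unittest.mock.patch from the stdlib (asynctest.patch can be swapped in the same way); FakeResponse is a hypothetical minimal stand-in, not part of aiohttp:

```python
import asyncio
from unittest.mock import patch

import aiohttp

class FakeResponse:
    """Hand-rolled response context manager with .status and async .text()."""
    def __init__(self, status=200, body='<html>mock</html>'):
        self.status = status
        self._body = body

    async def text(self):
        return self._body

    # Support `async with client.get(...) as resp:`
    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc_info):
        return False

async def fetch(client):
    async with client.get('http://python.org') as resp:
        return resp.status, await resp.text()

async def main():
    # Replace ClientSession.get so no real network request is made
    with patch('aiohttp.ClientSession.get', return_value=FakeResponse()):
        async with aiohttp.ClientSession() as client:
            return await fetch(client)

print(asyncio.run(main()))  # (200, '<html>mock</html>')
```

You can then assert on the returned status, headers, or body exactly as you would against a live response.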
In your comment (below your question), you say you're actually looking to mock aiohttp responses. For that, I've been using the 3rd-party library aioresponses: https://github.com/pnuckowski/aioresponses
I use it for integration testing, where it seems better than directly mocking or patching aiohttp methods.
I made it into a little pytest fixture like so:
@pytest.fixture
def aiohttp_mock():
    with aioresponses() as aioresponse_mock:
        yield aioresponse_mock
which I can then call like an aiohttp client/session: aiohttp_mock.get(...)
Edit from the future: We actually went back to mocking aiohttp methods because aioresponses currently lacks the ability to verify the args that were used in the call. We decided that verifying the args was a requirement for us.
I'm working on an application that will have to consult multiple APIs for information and after processing the data, will output the answer to a client. The client uses a browser to connect to a web server to forward the request, afterwards, the web server will look for the information needed from the multiple APIs and after joining the responses from those APIs will then give an answer to the client.
The web server was built using Flask and a module that extracts the needed information for each API was also implemented (Python). Since the consulting process for each API takes time, I would like to give the web server a timeout for responding, therefore, after the requests are sent only those that are below the time buffer will be used.
My proposed solution:
Use a Redis Queue and an RQ worker to enqueue the requests for each API and store the responses on the Queue then wait for the timeout and collect the responses that were able to respond in the allowed time. Afterwards, process the information and give the response to the user.
The flask web server is setup something like this:
@app.route('/result', methods=["POST"])
def show_result():
    inputText = request.form["question"]
    tweetModule = Twitter()
    tweeterResponse = tweetModule.ask(params=inputText)
    redditObject = RedditModule()
    redditResponse = redditObject.ask(params=inputText)
    edmunds = Edmunds()
    edmundsJson = edmunds.ask(params=inputText)
    # More APIs could be consulted here
    # Send each request async and then synchronize the responses from the queue
    template = env.get_template('templates/result.html')
    return render_template(template, resp=resp)
The worker:
conn = redis.from_url(redis_url)

if __name__ == '__main__':
    with Connection(conn):
        worker = Worker(map(Queue, listen))
        worker.work()
And let's assume each module handles its own queueing process.
I can see some problems ahead:
What happens to the information stored on the queue that did not make it to the timeout?
How can I make Flask wait and then extract the responses from the Queue?
Is it possible that information could get mixed if two clients ask in the same time-frame?
Is there a better way to handle the async requests and then synchronize the response?
Thanks!
In such cases I prefer a combination of HTTPX and flask[async].
First - HTTPX
HTTPX offers a standard synchronous API by default, but also gives you the option of an async client if you need it.
Async is a concurrency model that is far more efficient than multi-threading, and can provide significant performance benefits and enable the use of long-lived network connections such as WebSockets.
If you're working with an async web framework then you'll also want to use an async client for sending outgoing HTTP requests.
>>> async with httpx.AsyncClient() as client:
... r = await client.get('https://www.example.com/')
...
>>> r
<Response [200 OK]>
Second - Using async and await in Flask
Routes, error handlers, before request, after request, and teardown functions can all be coroutine functions if Flask is installed with the async extra (pip install flask[async]). It requires Python 3.7+ where contextvars.ContextVar is available. This allows views to be defined with async def and use await.
For example, you should do something like this:
import asyncio
import httpx
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route('/async', methods=['GET', 'POST'])
async def async_form():
    if request.method == 'POST':
        ...
        async with httpx.AsyncClient() as client:
            tweeterResponse, redditResponse, edmundsJson = await asyncio.gather(
                client.get(f'https://api.tweeter....../id?id={request.form["tweeter_id"]}', timeout=None),
                client.get(f'https://api.redditResponse.....?key={APIKEY}&reddit={request.form["reddit_id"]}'),
                client.post(f'https://api.edmundsJson.......', data=inputText)
            )
        ...
        resp = {
            "tweeter_response": tweeterResponse,
            "reddit_response": redditResponse,
            "edmunds_json": edmundsJson
        }
        template = env.get_template('templates/result.html')
        return render_template(template, resp=resp)