I am building a REST API with FastAPI. I implemented the data layer separately from the FastAPI application, meaning I do not have direct access to the database session in my FastAPI application.
I have access to the storage object, which has a method like close_session that allows me to close the current session.
Is there an equivalent of Flask's teardown_request in FastAPI?
Flask Implementation
from models import storage
.....
.....
@app.teardown_request
def close_session(exception=None):
    storage.close_session()
I have looked at FastAPI's on_event('shutdown') and on_event('startup'). These two only run when the application is shutting down or starting up.
We can do this by using a dependency.
Credit to williamjemir: Click here to read the GitHub discussion.
from fastapi import FastAPI, Depends
from models import storage
async def close_session() -> None:
    """Close the current session after every request."""
    # code before the yield runs before the request is processed
    print('Closing current session')
    yield
    # code after the yield runs once the response has been sent
    storage.close_session()
    print('db session closed.')
app = FastAPI(dependencies=[Depends(close_session)])
@app.get('/')
def home():
    return "Hello World"
if __name__ == '__main__':
    import uvicorn
    uvicorn.run(app)
Use FastAPI middleware
A "middleware" is a function that works with every request before it is processed by any specific path operation. And also with every response before returning it.
It takes each request that comes to your application.
It can then do something to that request or run any needed code.
Then it passes the request to be processed by the rest of the application (by some path operation).
It then takes the response generated by the application (by some path operation).
It can do something to that response or run any needed code.
Then it returns the response.
Example:
import time
from fastapi import FastAPI, Request

app = FastAPI()

@app.middleware("http")
async def add_process_time_header(request: Request, call_next):
    # do things before the request
    start_time = time.time()
    response = await call_next(request)
    # do things after the response
    response.headers["X-Process-Time"] = str(time.time() - start_time)
    return response
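Applied to the original question, a minimal sketch of a middleware that closes the storage session after every request might look like this (assuming the storage object from the question):

from fastapi import FastAPI, Request
from models import storage

app = FastAPI()

@app.middleware("http")
async def close_session_middleware(request: Request, call_next):
    try:
        # let the path operation handle the request
        response = await call_next(request)
    finally:
        # close the session whether the request succeeded or failed
        storage.close_session()
    return response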
references:
https://fastapi.tiangolo.com/tutorial/middleware/
Related
I have a FastAPI application for testing/development purposes. What I want is for any request that arrives at my app to automatically be sent, as is, to another app on another server, with exactly the same parameters and the same endpoint. This is not a redirect, because I still want the app to process the request and return values as usual. I just want to initiate a similar request to a different version of the app on a different server, without waiting for the answer from that server, so that the other app receives the request as if the original request had been sent to it.
How can I achieve that? Below is a sample code that I use for handling the request:
#app.post("/my_endpoint/some_parameters")
def process_request(
params: MyParamsClass,
pwd: str = Depends(authenticate),
):
# send the same request to http://my_other_url/my_endpoint/
return_value = process_the_request(params)
return return_value.as_json()
You could use the AsyncClient() from the httpx library, as described in this answer, as well as this answer and this answer (have a look at those answers for more details on the approach demonstrated below). You can spawn a Client inside the startup event handler, store it on the app instance—as described here, as well as here and here—and reuse it every time you need it. You can explicitly close the Client once you are done with it, using the shutdown event handler.
Working Example
Main Server
When building the request that is about to be forwarded to the other server, the main server uses request.stream() to read the request body from the client's request, which provides an async iterator. That way, if the client sends a request with a large body (for instance, a large file upload), the main server does not have to wait for the entire body to be received and loaded into memory before forwarding it, which is what would happen if you used await request.body() instead and which would likely cause server issues if the body could not fit into RAM.
You can add multiple routes in the same way the /upload one has been defined below, specifying the path, as well as the HTTP method for the endpoint. Note that the /upload route below uses Starlette's path convertor to capture arbitrary paths, as demonstrated here and here. You could also specify the exact path parameters if you wish, but the below provides a more convenient way if there are too many of them. Regardless, the path will be evaluated against the endpoint in the other server below, where you can explicitly specify the path parameters.
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
from starlette.background import BackgroundTask
import httpx
app = FastAPI()
@app.on_event('startup')
async def startup_event():
    client = httpx.AsyncClient(base_url='http://127.0.0.1:8001/')  # this is the other server
    app.state.client = client

@app.on_event('shutdown')
async def shutdown_event():
    client = app.state.client
    await client.aclose()
async def _reverse_proxy(request: Request):
    client = request.app.state.client
    url = httpx.URL(path=request.url.path, query=request.url.query.encode('utf-8'))
    req = client.build_request(
        request.method, url, headers=request.headers.raw, content=request.stream()
    )
    r = await client.send(req, stream=True)
    return StreamingResponse(
        r.aiter_raw(),
        status_code=r.status_code,
        headers=r.headers,
        background=BackgroundTask(r.aclose)
    )
app.add_route('/upload/{path:path}', _reverse_proxy, ['POST'])
if __name__ == '__main__':
    import uvicorn
    uvicorn.run(app, host='0.0.0.0', port=8000)
The Other Server
Again, for simplicity, the Request object is used to read the body, but you can instead define UploadFile, Form and other parameters as usual. The app below is listening on port 8001.
from fastapi import FastAPI, Request
app = FastAPI()
@app.post('/upload/{p1}/{p2}')
async def upload(p1: str, p2: str, q1: str, request: Request):
    return {'p1': p1, 'p2': p2, 'q1': q1, 'body': await request.body()}
if __name__ == '__main__':
    import uvicorn
    uvicorn.run(app, host='0.0.0.0', port=8001)
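If you would rather declare the file and other parameters explicitly instead of reading the raw Request, a sketch of that variant (assuming python-multipart is installed, which FastAPI requires for file/form parameters):

from fastapi import FastAPI, UploadFile

app = FastAPI()

@app.post('/upload/{p1}/{p2}')
async def upload(p1: str, p2: str, q1: str, file: UploadFile):
    # FastAPI parses the multipart body and exposes the file object directly
    return {'p1': p1, 'p2': p2, 'q1': q1, 'filename': file.filename}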
Test the example above
import httpx
url = 'http://127.0.0.1:8000/upload/hello/world'
files = {'file': open('file.txt', 'rb')}
params = {'q1': 'This is a query param'}
r = httpx.post(url, params=params, files=files)
print(r.content)
I'm writing a web hook server to receive updates when my git repository is pushed.
Upon receiving the POST request from GitHub, I execute several commands like git pull and mvn install, which take a very long time.
But the web hook request sent by GitHub times out after 10 seconds.
My code:
import logging
import os
from fastapi import FastAPI
app = FastAPI()
logger = logging.getLogger("uvicorn")
def exec_cmd(command):
    out = os.system(command)
    logger.info(str(out))
@app.post('/')
def func():
    logger.info("WebHook received")
    exec_cmd("git pull")
    exec_cmd("mvn clean install")
    exec_cmd("killall java")
    return {}
if __name__ == "__main__":
    import uvicorn
    exec_cmd("git pull")
    uvicorn.run("main:app", debug=False, reload=False, host="0.0.0.0")
Therefore I want to run the long running tasks in the background, and respond to GitHub's request as soon as possible.
How can I do this?
(If I make the exec_cmd() function async, it doesn't run to completion by the time the request returns.)
They are called BackgroundTasks in FastAPI and can be used by adding a BackgroundTasks type to your view function signature.
The example given in the documentation can be further adapted to your needs:
from fastapi import BackgroundTasks, FastAPI
app = FastAPI()
def process_repository():
    exec_cmd("git pull")
    exec_cmd("mvn clean install")
    exec_cmd("killall java")

@app.post("/")
async def update_repository(background_tasks: BackgroundTasks):
    background_tasks.add_task(process_repository)
    return {"message": "Repository update has begun"}
Since you don't check the results, this should work for your use case.
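If you later do want to check the results, a minimal sketch using subprocess instead of os.system (this is an assumption, not part of the original code; logger is the one defined in the question):

import subprocess

def exec_cmd(command: str) -> None:
    # run the command, capture its output, and log failures
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    logger.info(result.stdout)
    if result.returncode != 0:
        logger.error(result.stderr)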
I am looking to create a server that takes in a request, does some processing, and forwards the request to another endpoint. I seem to be running into an issue at higher concurrency, where my client.post is causing an httpx.ConnectTimeout exception.
I haven't completely ruled out the possibility of an issue with the endpoint(I am currently working with them to debug anything that might be on their end), but I'm trying to figure out if there is something wrong on my end or if there are any glaring inefficiencies I can improve upon.
I am running this in ECS, currently on a cluster where tasks have 4 vCPUs. I am using the docker image uvicorn-gunicorn-fastapi(https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker). Currently all default settings minus the bind/port/logging. Here is a minimal code example:
import httpx
from fastapi import FastAPI, Request, Response
app = FastAPI()
def process_request(path, request):
    # process the request here
    ...

def create_headers(path):
    # create headers here
    ...

@app.get('/')
async def root(path: str, request: Request):
    endpoint = 'https://endpoint.com/'
    querystring = 'path=' + path
    data = process_request(path, request)
    headers = create_headers(path)
    async with httpx.AsyncClient() as client:
        await client.post(endpoint + "?" + querystring, data=data, headers=headers)
    return Response(status_code=200)
Could it be that the server on the other side is taking too long, and the connection simply times out because httpx doesn't give the other endpoint enough time to complete the request?
If yes, you could try disabling the timeout or increasing the limit (which I suggest over disabling).
See https://www.python-httpx.org/quickstart/#timeouts
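For example, a sketch of the endpoint above with a raised timeout (the 30-second value is an arbitrary assumption; httpx defaults to 5 seconds):

import httpx
from fastapi import FastAPI, Response

app = FastAPI()

@app.get('/')
async def root(path: str):
    # give the remote endpoint up to 30 seconds before timing out
    timeout = httpx.Timeout(30.0)
    async with httpx.AsyncClient(timeout=timeout) as client:
        await client.post('https://endpoint.com/?path=' + path)
    return Response(status_code=200)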
I am trying to use Flask to send a stream of events to a front-end client as documented in this question. This works fine if I don't access anything in the request context, but fails as soon as I do.
Here's an example to demonstrate.
from time import sleep
from flask import Flask, request, Response
app = Flask(__name__)
@app.route('/events')
def events():
    return Response(_events(), mimetype="text/event-stream")

def _events():
    while True:
        # yield "Test"  # Works fine
        yield request.args[0]  # Throws RuntimeError: Working outside of request context
        sleep(1)
Is there a way to access the request context for server-sent events?
You can use the @copy_current_request_context decorator to make a copy of the request context that your event stream function can use:
from time import sleep
from flask import Flask, request, Response, copy_current_request_context
app = Flask(__name__)
@app.route('/events')
def events():
    @copy_current_request_context
    def _events():
        while True:
            # yield "Test"  # Works fine
            yield request.args[0]
            sleep(1)

    return Response(_events(), mimetype="text/event-stream")
Note that to be able to use this decorator the target function must be moved inside the view function that has the source request.
I have a scikit-learn classifier running as a Dockerised Flask app, launched with gunicorn. It receives input data in JSON format as a POST request, and responds with a JSON object of results.
When the app is first launched with gunicorn, a large model (serialised with joblib) is read from a database, and loaded into memory before the app is ready for requests. This can take 10-15 minutes.
A reproducible example isn't feasible, but the basic structure is illustrated below:
from flask import Flask, jsonify, request, Response
import joblib
import json
def classifier_app(model_name):
    # Line below takes 10-15 mins to complete
    classifier = _load_model(model_name)

    app = Flask(__name__)

    @app.route('/classify_invoice', methods=['POST'])
    def apicall():
        query = request.get_json()
        results = _build_results(query['data'])
        return Response(response=results,
                        status=200,
                        mimetype='application/json')

    print('App loaded!')
    return app
How do I configure Flask or gunicorn to return a 'still loading' response (or suitable error message) to any incoming http requests while _load_model is still running?
Basically, you want to return two responses for one request, so there are two different possibilities.
The first is to run the time-consuming task in the background and ping the server with simple AJAX requests every two seconds to check whether the task is completed. If it is, return the result; if not, return a "Please stand by" string or something similar. A sketch of this approach follows below.
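A minimal sketch of that polling approach (the task-id bookkeeping is an assumption; your_heavy_function stands in for the slow work, as in the WebSocket example below):

import uuid
from threading import Thread
from flask import Flask, jsonify

app = Flask(__name__)
results = {}  # task_id -> result, filled in by the background thread

def do_work(task_id):
    results[task_id] = your_heavy_function()

@app.route("/api/", methods=["POST"])
def start():
    task_id = str(uuid.uuid4())
    Thread(target=do_work, args=(task_id,)).start()
    return jsonify({"task_id": task_id})

@app.route("/status/<task_id>")
def status(task_id):
    if task_id in results:
        return jsonify({"result": results[task_id]})
    return jsonify({"status": "Please stand by"})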
The second is to use WebSockets and the flask-socketio extension.
Basic server code would be something like this:
from flask import Flask, Response
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)
def do_work():
    result = your_heavy_function()
    socketio.emit("result", {"result": result}, namespace="/test/")
#app.route("/api/", methods=["POST"])
def start():
socketio.start_background_task(target=do_work)
# return intermediate response
return Response()
On the client side you should do something like this:

var socket = io.connect('http://' + document.domain + ':' + location.port + '/test/');
socket.on('result', function(msg) {
    // process the result here
});
For further details, visit this blog post, flask-socketio documentation for server-side reference and socketio documentation for client-side reference.
P.S. Using WebSockets like this, you can implement a progress bar too.
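For instance, a sketch of emitting progress events from the background task (the event name, payload, and the steps/run_step placeholders are assumptions):

def do_work():
    for i, step in enumerate(steps, start=1):
        run_step(step)  # hypothetical unit of work
        # push a progress update to the client after each step
        socketio.emit("progress", {"percent": 100 * i // len(steps)}, namespace="/test/")
    socketio.emit("result", {"result": "done"}, namespace="/test/")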