I want to create a REST service using FastAPI and aio-pika while working asynchronously. With other async database drivers, I can create clients on startup, then get them in route handlers. For example, with motor I would declare a simple connection manager:
from motor.motor_asyncio import AsyncIOMotorClient

class Database:
    client: AsyncIOMotorClient = None

db = Database()

async def connect_to_mongo():
    db.client = AsyncIOMotorClient("mongo:27017")

async def close_mongo_connection():
    db.client.close()

async def get_mongo_client() -> AsyncIOMotorClient:
    return db.client
Then add a couple of event handlers:
app.add_event_handler("startup", connect_to_mongo)
app.add_event_handler("shutdown", close_mongo_connection)
and then just use get_mongo_client to get a client in my route handler.
The problem here is that aio-pika needs an asyncio loop to function. Here is an example from the docs:
connection = await aio_pika.connect_robust(
    "amqp://guest:guest@127.0.0.1/", loop=loop
)
And with FastAPI I don't have an asyncio loop at hand. Is there any way to use aio-pika with an interface like in the motor example? Can I just get a loop using asyncio.get_event_loop() and pass it to connect_robust, without really using it anywhere else? Like this:
connection = await aio_pika.connect_robust(
    "amqp://guest:guest@127.0.0.1/", loop=asyncio.get_event_loop()
)
OK, so according to the docs, I can just use connect instead of connect_robust:
connection = await aio_pika.connect(
    "amqp://guest:guest@127.0.0.1/"
)
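For reference, the same startup/shutdown pattern from the motor example appears to carry over directly. Below is a minimal sketch, assuming a recent aio-pika where connect_robust can be called without an explicit loop and simply picks up the running one (the app.state.rabbit attribute name is arbitrary):

import aio_pika
from fastapi import FastAPI

app = FastAPI()

@app.on_event("startup")
async def connect_to_rabbit():
    # the startup hook already runs inside the event loop,
    # so no explicit loop needs to be passed
    app.state.rabbit = await aio_pika.connect_robust("amqp://guest:guest@127.0.0.1/")

@app.on_event("shutdown")
async def close_rabbit_connection():
    await app.state.rabbit.close()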
Related
I have a FastAPI application which, on several different occasions, needs to call external APIs. I use httpx.AsyncClient for these calls. The point is that I don't fully understand how I should use it.
From httpx's documentation I gather that I should use context managers:
async def foo():
    """
    I need to call foo quite often from different
    parts of my application
    """
    async with httpx.AsyncClient() as aclient:
        # make some http requests, e.g.,
        await aclient.get("http://example.it")
However, I understand that this way a new client is spawned each time I call foo(), and avoiding that is precisely why we use a client in the first place.
I suppose an alternative would be to have some global client defined somewhere, and just import it whenever I need it, like so:
aclient = httpx.AsyncClient()

async def bar():
    # make some http requests using the global aclient, e.g.,
    await aclient.get("http://example.it")
This second option looks somewhat fishy, though, as nobody is taking care of closing the session and the like.
So the question is: how do I properly (re)use httpx.AsyncClient() within a FastAPI application?
You can have a global client that is closed in the FastAPI shutdown event.
import logging

from fastapi import FastAPI
import httpx

logging.basicConfig(level=logging.INFO, format="%(levelname)-9s %(asctime)s - %(name)s - %(message)s")
LOGGER = logging.getLogger(__name__)


class HTTPXClientWrapper:

    async_client = None

    def start(self):
        """Instantiate the client. Call from the FastAPI startup hook."""
        self.async_client = httpx.AsyncClient()
        LOGGER.info(f'httpx AsyncClient instantiated. Id {id(self.async_client)}')

    async def stop(self):
        """Gracefully shutdown. Call from the FastAPI shutdown hook."""
        LOGGER.info(f'httpx async_client.is_closed: {self.async_client.is_closed} - Now close it. Id (will be unchanged): {id(self.async_client)}')
        await self.async_client.aclose()
        LOGGER.info(f'httpx async_client.is_closed: {self.async_client.is_closed}. Id (will be unchanged): {id(self.async_client)}')
        self.async_client = None
        LOGGER.info('httpx AsyncClient closed')

    def __call__(self):
        """Calling the instantiated HTTPXClientWrapper returns the wrapped singleton."""
        # Ensure we don't use it if not started / running
        assert self.async_client is not None
        LOGGER.info(f'httpx async_client.is_closed: {self.async_client.is_closed}. Id (will be unchanged): {id(self.async_client)}')
        return self.async_client


httpx_client_wrapper = HTTPXClientWrapper()
app = FastAPI()


@app.get('/test-call-external')
async def call_external_api(url: str = 'https://stackoverflow.com'):
    async_client = httpx_client_wrapper()
    res = await async_client.get(url)
    result = res.text
    return {
        'result': result,
        'status': res.status_code
    }


@app.on_event("startup")
async def startup_event():
    httpx_client_wrapper.start()


@app.on_event("shutdown")
async def shutdown_event():
    await httpx_client_wrapper.stop()


if __name__ == '__main__':
    import uvicorn
    LOGGER.info('starting...')
    uvicorn.run(f"{__name__}:app", host="127.0.0.1", port=8000)
Note - this answer was inspired by a similar answer I saw elsewhere a long time ago for aiohttp; I can't find the reference, but thanks to whoever that was!
EDIT
I've added uvicorn bootstrapping in the example so that it's now fully functional. I've also added logging to show what's going on at startup and shutdown, and you can visit localhost:8000/docs to trigger the endpoint and see what happens (via the logs).
The reason for calling the start() method from the startup hook is that by the time the hook is called the event loop has already started, so we know we will be instantiating the httpx client in an async context.
Also I was missing the async on the stop() method, and had a self.async_client = None instead of just async_client = None, so I have fixed those errors in the example.
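As an aside, newer FastAPI versions (0.93+) let you express the same startup/shutdown pairing as a single lifespan context manager, which would replace the two on_event hooks above; a minimal sketch:

from contextlib import asynccontextmanager
from fastapi import FastAPI

@asynccontextmanager
async def lifespan(app: FastAPI):
    httpx_client_wrapper.start()       # the event loop is already running here
    yield
    await httpx_client_wrapper.stop()  # runs once on shutdown

app = FastAPI(lifespan=lifespan)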
The answer to this question depends on how you structure your FastAPI application and how you manage your dependencies. One possible way to use httpx.AsyncClient() is to create a custom dependency function that yields an instance of the client and closes it when the request is finished. For example:
from fastapi import FastAPI, Depends
import httpx

app = FastAPI()

async def get_client():
    # create a new client for each request
    async with httpx.AsyncClient() as client:
        # yield the client to the endpoint function;
        # the client is closed when the request is done
        yield client

@app.get("/foo")
async def foo(client: httpx.AsyncClient = Depends(get_client)):
    # use the client to make some http requests, e.g.,
    response = await client.get("http://example.it")
    return response.json()
This way, you don't need to create a global client or worry about closing it manually. FastAPI will handle the dependency injection and the context management for you. You can also use the same dependency function for other endpoints that need to use the client.
Alternatively, you can create a global client and close it when the application shuts down. For example:
from fastapi import FastAPI
import httpx
import asyncio
import atexit

app = FastAPI()

# create a global client
client = httpx.AsyncClient()

# register a function to close the client when the app exits;
# client.aclose() is a coroutine, so it has to be driven with
# asyncio.run rather than registered directly
atexit.register(lambda: asyncio.run(client.aclose()))

@app.get("/bar")
async def bar():
    # use the global client to make some http requests, e.g.,
    response = await client.get("http://example.it")
    return response.json()
This way, you don't need to create a new client for each request, but you need to make sure that the client is closed properly when the application stops. You can use the atexit module to register a function that will be called when the app exits, or you can use other methods such as signal handlers or event hooks.
Both methods have their pros and cons, and you should choose the one that suits your needs and preferences. You can also check out the FastAPI documentation on dependencies and testing for more examples and best practices.
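The dependency version also makes testing straightforward, since you can swap the client out per test. A hypothetical sketch using httpx's MockTransport (the handler and canned response here are made up for illustration):

from fastapi.testclient import TestClient

async def get_fake_client():
    # every request gets a canned 200 response instead of hitting the network
    transport = httpx.MockTransport(lambda request: httpx.Response(200, json={"ok": True}))
    async with httpx.AsyncClient(transport=transport) as client:
        yield client

app.dependency_overrides[get_client] = get_fake_client
test_client = TestClient(app)
assert test_client.get("/foo").json() == {"ok": True}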
I'm developing an API (using Sanic) which is a gateway to IB (Interactive Brokers), using ib_insync.
This API exposes endpoints to place a new order and to get live positions, but it is also in charge of updating order statuses in a DB using ib_insync's events.
My question is: is it possible to have the API connect to IB only once, when it goes up, and reuse the same connection for all requests?
I'm currently connecting to IB using connectAsync on every request, and while this works, the API will not receive events when it is not currently handling a request.
This is one endpoint code, for reference
@app.post("/order/<symbol>")
async def post_order(request, symbol):
    order = jsonpickle.decode(request.body)
    with await IB().connectAsync("127.0.0.1", 7496, clientId=100) as ib:
        ib.orderStatusEvent += onOrderStatus
        ib.errorEvent += onTWSError
        ib.newOrderEvent += onNewOrderEvent
        contract = await ib.qualifyContractsAsync(contract)
        trade = ib.placeOrder(contract[0], order)
        return text(trade.order.orderId)
So I wish to not use the with statement, and just use a global ib connection.
When I connect in the module init (using connectAsync), every later async call, like qualifyContractsAsync, just hangs. Debugging showed me that it hangs on asyncio.gather, which means I'm doing something wrong with the event loops.
I'm not familiar with this particular connection, but yes it should be possible. Presumably the with statement is opening and closing the connection.
Instead, open and close it with a listener.
Docs re: listeners
@app.before_server_start
async def connect(app, _):
    app.ctx.foo = await Foobar()

@app.route("/")
async def handler(request):
    await request.app.ctx.foo.do_something()

@app.after_server_stop
async def close(app, _):
    app.ctx.foo.close()
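Applied to the question's ib_insync setup, that could look roughly like the sketch below. This is just a sketch: the host, port, and clientId are taken from the question, onOrderStatus is the question's handler, and the endpoint body is reduced to the connection handling:

from ib_insync import IB

@app.before_server_start
async def connect_ib(app, _):
    # connect once when the server comes up; the connection then
    # outlives individual requests, so events keep arriving
    app.ctx.ib = IB()
    await app.ctx.ib.connectAsync("127.0.0.1", 7496, clientId=100)
    app.ctx.ib.orderStatusEvent += onOrderStatus

@app.post("/order/<symbol>")
async def post_order(request, symbol):
    ib = request.app.ctx.ib  # reuse the long-lived connection
    ...

@app.after_server_stop
async def disconnect_ib(app, _):
    app.ctx.ib.disconnect()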
I have an async HTTP web service using FastAPI. I am running multiple instances of the same service on the server on different ports, with an nginx server in front so I can utilise them all. I have a particular resource I need to protect so that only one client accesses it at a time.
@app.get("/do_something")
async def do_something():
    critical_section_here()
I tried to protect this critical section using a file lock like this:
@app.get("/do_something")
async def do_something():
    with FileLock("dosomething.lock"):
        critical_section()
This prevents multiple processes from entering the critical section at the same time, but what I found is that it will actually deadlock. Consider the following sequence of events:
client 1 connects to port 8000 and enters the critical section
while client 1 is still using the resource, client 2 is routed to the same port 8000 and tries to acquire the file lock; it cannot, so it keeps trying, and this blocks the execution of client 1, so client 1 is never able to release the file lock. This means that not only is this process locked, every other server instance is locked as well.
Is there a way for me to coordinate these processes so that only one of them accesses the critical section? I thought about adding a timeout to the file lock, but I really don't want to reject users; I just want each request to wait until it is its turn to enter the critical section.
You can try something like this:
import asyncio
import fcntl
from contextlib import asynccontextmanager

from fastapi import FastAPI

app = FastAPI()

def acquire_lock():
    f = open("/tmp/test.lock", "w")
    fcntl.flock(f, fcntl.LOCK_EX)
    return f

@asynccontextmanager
async def lock():
    loop = asyncio.get_running_loop()
    # flock blocks, so take it in a worker thread to keep the event loop free
    f = await loop.run_in_executor(None, acquire_lock)
    try:
        yield
    finally:
        # closing the file descriptor releases the lock
        f.close()

@app.get("/test/")
async def test():
    async with lock():
        print("Enter critical section")
        await asyncio.sleep(5)
        print("End critical section")
It will basically serialize all your requests, across every process that opens the same lock file: flock blocks in a worker thread, so the event loop stays free, and closing the file releases the lock.
You could use aioredlock.
It allows you to create distributed locks between workers (processes). For more information about its usage, follow the link above.
The Redlock algorithm is a distributed lock implementation for Redis. There are many implementations of it in several languages; in this case, this is the asyncio-compatible implementation for Python 3.5+.
Example of usage:
from aioredlock import Aioredlock, LockError

# assumes a Redis instance reachable on localhost; adjust as needed
lock_manager = Aioredlock([{'host': 'localhost', 'port': 6379}])

# Or you can use the lock as async context manager:
try:
    async with await lock_manager.lock("resource_name") as lock:
        assert lock.valid is True
        # Do your stuff having the lock
        await lock.extend()  # alias for lock_manager.extend(lock)
        # Do more stuff having the lock
    assert lock.valid is False  # lock will be released by context manager
except LockError:
    print('Lock not acquired')
    raise
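Applied to the endpoint from the question, that could look something like the sketch below. The resource name and Redis address are assumptions, and the raised retry_count (an option aioredlock's constructor accepts, so that waiters keep retrying rather than failing fast) may need tuning for your version:

from aioredlock import Aioredlock, LockError
from fastapi import FastAPI

app = FastAPI()
lock_manager = Aioredlock([{'host': 'localhost', 'port': 6379}], retry_count=100)

@app.get("/do_something")
async def do_something():
    try:
        async with await lock_manager.lock("do_something") as lock:
            critical_section()  # the code to protect, as in the question
    except LockError:
        # retries exhausted; with a high retry_count this should be rare
        return {"detail": "resource busy"}
    return {"detail": "done"}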
I'm trying to create a very simple system where a user, in order to use a consumer, needs to include a key in the WS URL, like ws://127.0.0.1:8000/main?key=KEY. Once the consumer is called, Django Channels needs to perform a very simple DB query to check whether the key exists:
class TestConsumer(AsyncJsonWebsocketConsumer):
    async def websocket_connect(self, event):
        ...
        key_query = await database_sync_to_async(keys.objects.get(key=key))
        ...
But the problem with this code is that it gives the following error:
You cannot call this from an async context - use a thread or sync_to_async.
Is there any way to accomplish this? Any advice is appreciated!
database_sync_to_async must be called on the method, not on the result of the method:
key_query = await database_sync_to_async(keys.objects.get)(key=key)
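For completeness, here is a rough sketch of how the whole connect flow might look with that fix applied; the query-string parsing and the accept/close handling are assumptions, not part of the original snippet:

from urllib.parse import parse_qs

from channels.db import database_sync_to_async
from channels.generic.websocket import AsyncJsonWebsocketConsumer

class TestConsumer(AsyncJsonWebsocketConsumer):
    async def connect(self):
        # the key arrives as ws://127.0.0.1:8000/main?key=KEY
        params = parse_qs(self.scope["query_string"].decode())
        key = params.get("key", [None])[0]
        try:
            # wrap the callable, then call the wrapper with the arguments
            await database_sync_to_async(keys.objects.get)(key=key)
        except keys.DoesNotExist:
            await self.close()
            return
        await self.accept()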
My application will be sending hundreds, if not thousands, of messages over Redis every second, so we want to open the connection when the app launches and use that connection for every transaction. I am having trouble keeping that connection open.
Here is some of my redis handler class:
class RedisSingleDatabase(AsyncGetSetDatabase):
    def __init__(self):
        self._redis = None

    async def redis(self):
        if not self._redis:
            self._redis = await aioredis.create_redis(('redis', REDIS_PORT))
        return self._redis

    async def get(self, key):
        r = await self.redis()
        # r = await aioredis.create_redis(('redis', REDIS_PORT))
        print(f'CONNECTION IS {"CLOSED" if r.closed else "OPEN"}!')
        data = await r.get(key, encoding='utf-8')
        if data is not None:
            return json.loads(data)
This does not work: by the time we call r.get, the connection is closed (ConnectionClosedError).
If I uncomment the second line of the get method and connect to the database locally inside it, it works. But then we're no longer reusing the same connection.
I've considered trying all this with dependency injection as well. This is a Flask app, and in my create_app method I would connect to the database and construct my RedisSingleDatabase with that connection. Besides the question of whether my current problem would be the same, I can't seem to get this approach to work, since we need to await the connection, which I can't do in my root-level create_app, which can't be async (unless I'm missing something).
I've tried asyncio.run all over the place, and it either doesn't fix the problem or raises its own error that it can't be called from a running event loop.
Please help!!!
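Without seeing the rest of the app it's hard to be definitive, but one common cause of this symptom is that the connection gets created on one event loop (e.g. via asyncio.run at startup) and then used on another. Below is a sketch that sidesteps this by creating the connection lazily, inside whichever loop actually serves requests; it assumes the aioredis 1.x API from the question, and the asyncio.Lock guards concurrent first calls:

import asyncio
import json

import aioredis

REDIS_PORT = 6379  # assumed

class RedisSingleDatabase:
    def __init__(self):
        self._redis = None
        self._lock = asyncio.Lock()

    async def redis(self):
        # create (or re-create) the pool inside the running loop
        async with self._lock:
            if self._redis is None or self._redis.closed:
                self._redis = await aioredis.create_redis_pool(('redis', REDIS_PORT))
        return self._redis

    async def get(self, key):
        r = await self.redis()
        data = await r.get(key, encoding='utf-8')
        return None if data is None else json.loads(data)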