Making async calls from an async library? (Python)

The toy script shows an application using a class that depends on an implementation that is not asyncio-aware, and it obviously doesn't work.
How would the fetch method of MyFetcher be implemented, using the asyncio-aware client, while still maintaining the contract with the _internal_validator method of FetcherApp? To be very clear, FetcherApp and AbstractFetcher cannot be modified.
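For context, the base classes referenced above aren't included in this excerpt; a plausible minimal sketch, consistent with how the answer below uses them, might look like this (the validator body is a hypothetical stand-in):

from abc import ABC, abstractmethod

class AbstractFetcher(ABC):
    @abstractmethod
    def fetch(self):
        ...

class FetcherApp:
    def __init__(self, fetcher_implementation: AbstractFetcher):
        self.fetcher_implementation = fetcher_implementation

    def _internal_validator(self, data):
        # hypothetical stand-in for the real validation logic
        return data is not None

    def is_fetched_data_valid(self):
        data = self.fetcher_implementation.fetch()
        return self._internal_validator(data)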

To use the async fetch_data function inside fetch, both fetch and is_fetched_data_valid have to be async too. You can change them in child classes without modifying the parents:
import asyncio

class AsyncFetcherApp(FetcherApp):
    async def is_fetched_data_valid(self):  # async here
        data = await self.fetcher_implementation.fetch()  # await here
        return self._internal_validator(data)

class AsyncMyFetcher(AbstractFetcher):
    def __init__(self, client):
        super().__init__()
        self.client = client

    async def fetch(self):  # async here
        result = await self.client.fetch_data()  # await here
        return result

class AsyncClient:
    async def fetch_data(self):
        await asyncio.sleep(1)  # just to be sure it works
        return 1

async def main():
    async_client = AsyncClient()
    my_fetcher = AsyncMyFetcher(async_client)
    fetcherApp = AsyncFetcherApp(my_fetcher)
    # ...
    is_valid = await fetcherApp.is_fetched_data_valid()  # await here
    print(repr(is_valid))

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
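Note: on Python 3.7+, asyncio.run(main()) is the simpler equivalent of the last two lines.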


How to remove a value from a list safely in asyncio

There is a global list to store data. Different async functions may add or remove values from it.
Example:
a = []  # List[Connection]

async def foo():
    for v in a:
        await v.send('msg')

async def bar():
    await SomeAsyncFunc()
    a.pop(0)
Both foo and bar give up control to let other coroutines run, so while foo is iterating over the list it is not safe for bar to remove values from it.
The following example shows how to use a lock for this.
Create a connection manager:
import asyncio

class ConnectionsManager:
    def __init__(self, timeout=5):
        self.timeout = timeout
        self._lock = asyncio.Lock()
        self._connections = []

    async def __aenter__(self):
        await asyncio.wait_for(self._lock.acquire(), timeout=self.timeout)
        return self._connections

    async def __aexit__(self, *exc):
        self._lock.release()
The timeout is a safety measure that surfaces bugs involving circular waits instead of letting them hang forever.
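With the timeout in place, a deadlocked acquire shows up as an exception; a caller could handle it like this (a minimal sketch, with guarded_append as a hypothetical helper):

async def guarded_append(cm, item):
    try:
        async with cm as connections:
            connections.append(item)
    except asyncio.TimeoutError:
        # the lock was not acquired within cm.timeout seconds,
        # which usually points at a circular wait elsewhere
        print('possible deadlock: lock acquisition timed out')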
The manager can be used as follows:
async def foo():
    for _ in range(10):
        async with cm as connections:
            # do stuff with connection
            await asyncio.sleep(0.25)
            connections.append('foo')

async def bar():
    for _ in range(5):
        async with cm as connections:
            # do stuff with connection
            await asyncio.sleep(0.5)
            if len(connections) > 1:
                connections.pop()
            else:
                connections.append('bar')

cm = ConnectionsManager()

async def main():
    t1 = asyncio.create_task(foo())
    t2 = asyncio.create_task(bar())
    await t1
    await t2
    async with cm as connections:
        print(connections)

asyncio.run(main())
Note that you could also be more explicit here and pass cm in:
async def foo(cm):
    ...

async def bar(cm):
    ...
A comment on why being explicit is so beneficial in contrast to globals: at some point you may need to write unit tests for your code, where you will need to specify all inputs to your functions/methods. Forgetting conditions on implicit inputs (the globals a function uses) can easily result in untested states. For example, your bar coroutine expects an element in the list a and will break if it is empty. Most of the time it might do the right thing, but one day in production...
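For example, with an explicit parameter a test can construct exactly the state it needs. A hypothetical sketch using pytest-asyncio, assuming the explicit bar(cm) variant above:

import pytest

@pytest.mark.asyncio
async def test_bar_ends_with_single_element():
    cm = ConnectionsManager()
    await bar(cm)
    async with cm as connections:
        # from an empty list, bar alternates append/pop over 5 rounds
        assert connections == ['bar']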

Correct destruction process for async code running in a thread

Below is (working) code for a generic websocket streamer.
It creates a daemon thread, from which it performs asyncio.run(...).
The asyncio code spawns 2 tasks, which never complete.
How to correctly destroy this object?
One of the tasks is executing a keepalive 'ping', so I can easily exit that loop using a flag. But the other is blocking on a message from the websocket.
import json
import gzip
import asyncio
import aiohttp
from threading import Thread

class WebSocket:
    KEEPALIVE_INTERVAL_S = 10

    def __init__(self, url, on_connect, on_msg):
        self.url = url
        self.on_connect = on_connect
        self.on_msg = on_msg
        self.streams = {}
        self.worker_thread = Thread(name='WebSocket', target=self.thread_func, daemon=True).start()

    def thread_func(self):
        asyncio.run(self.aio_run())

    async def aio_run(self):
        async with aiohttp.ClientSession() as session:
            self.ws = await session.ws_connect(self.url)
            await self.on_connect(self)

            async def ping():
                while True:
                    print('KEEPALIVE')
                    await self.ws.ping()
                    await asyncio.sleep(WebSocket.KEEPALIVE_INTERVAL_S)

            async def main_loop():
                async for msg in self.ws:
                    def extract_data(msg):
                        if msg.type == aiohttp.WSMsgType.BINARY:
                            as_bytes = gzip.decompress(msg.data)
                            as_string = as_bytes.decode('utf8')
                            as_json = json.loads(as_string)
                            return as_json
                        elif msg.type == aiohttp.WSMsgType.TEXT:
                            return json.loads(msg.data)
                        elif msg.type == aiohttp.WSMsgType.ERROR:
                            print('⛔️ aiohttp.WSMsgType.ERROR')
                            return msg.data

                    data = extract_data(msg)
                    self.on_msg(data)

            # May want this approach if we want to handle graceful shutdown
            # self.task_ping = asyncio.create_task(ping())
            # self.task_main_loop = asyncio.create_task(main_loop())

            await asyncio.gather(
                ping(),
                main_loop()
            )

    async def send_json(self, J):
        await self.ws.send_json(J)
I'd suggest the use of asyncio.run_coroutine_threadsafe instead of asyncio.run. It returns a concurrent.futures.Future object which you can cancel:
def thread_func(self):
    self.future = asyncio.run_coroutine_threadsafe(
        self.aio_run(),
        asyncio.get_event_loop()
    )

# somewhere else
self.future.cancel()
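Note that run_coroutine_threadsafe needs an event loop that is already running somewhere. A common pattern (a sketch, not from the original answer) is to run a dedicated loop in a thread and submit coroutines to it:

import asyncio
from threading import Thread

loop = asyncio.new_event_loop()
Thread(target=loop.run_forever, daemon=True).start()

async def some_coro():  # hypothetical placeholder coroutine
    await asyncio.sleep(1)

# submit from the main thread; returns a concurrent.futures.Future
future = asyncio.run_coroutine_threadsafe(some_coro(), loop)
print(future.result())  # blocks until done; future.cancel() would cancel it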
Another approach would be to make ping and main_loop tasks and cancel them when necessary:
# inside `aio_run`
self.task_ping = asyncio.create_task(ping())
self.main_loop_task = asyncio.create_task(main_loop())
await asyncio.gather(
    self.task_ping,
    self.main_loop_task,
    return_exceptions=True
)

# somewhere else
self.task_ping.cancel()
self.main_loop_task.cancel()
This doesn't change the fact that aio_run should also be called with asyncio.run_coroutine_threadsafe. asyncio.run should be used as the main entry point for asyncio programs and should only be called once.
I would like to suggest one more variation of the solution. When finishing coroutines (tasks), I prefer to minimize the use of cancel() (without excluding it entirely), since it can sometimes make business logic difficult to debug (keep in mind that asyncio.CancelledError does not inherit from Exception).
In your case, the code might look like this (only the changes):
class WebSocket:
    KEEPALIVE_INTERVAL_S = 10

    def __init__(self, url, on_connect, on_msg):
        # ...
        self.worker_thread = Thread(name='WebSocket', target=self.thread_func)
        self.worker_thread.start()

    async def aio_run(self):
        self._loop = asyncio.get_event_loop()
        # ...
        self._ping_task = asyncio.create_task(ping())
        self._main_task = asyncio.create_task(main_loop())
        await asyncio.gather(
            self._ping_task,
            self._main_task,
            return_exceptions=True
        )
        # ...

    async def stop_ping(self):
        self._ping_task.cancel()
        try:
            await self._ping_task
        except asyncio.CancelledError:
            pass

    async def _stop(self):
        # wait for ping to end before closing the socket
        await self.stop_ping()
        # leads to a clean exit from `async for msg in self.ws`
        await self.ws.close()

    def stop(self):
        # wait for ping to stop and the socket to close
        asyncio.run_coroutine_threadsafe(
            self._stop(), self._loop
        ).result()
        self.worker_thread.join()  # wait for the thread to finish
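For completeness, a hypothetical usage of this variant from the main thread (the callback bodies and URL are made up):

async def on_connect(ws):
    # hypothetical handshake; send_json comes from the class above
    await ws.send_json({'op': 'subscribe'})

def on_msg(data):
    print(data)

ws = WebSocket('wss://example.invalid/stream', on_connect, on_msg)
# ... let the application run ...
ws.stop()  # cancels ping, closes the socket, and joins the worker thread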

How to run async coroutines from __init__ and wait until they complete

I am connecting to aioredis from __init__ (I do not want to move it out, since that would mean some major changes). How can I wait for the aioredis connection task in the example __init__ code below and have it print the self.sub and self.pub objects? Currently it gives an error saying

abc.py:42> exception=AttributeError("'S' object has no attribute 'pub'")

I do see the redis connections created and the coro create_connection done.
Is there a way to call blocking asyncio calls from __init__? If I replace asyncio.wait with loop.run_until_complete I get an error that roughly says

loop is already running.
import sys, json
from addict import Dict
import asyncio
import aioredis

class S():
    def __init__(self, opts):
        print(asyncio.Task.all_tasks())
        task = asyncio.wait(asyncio.create_task(self.create_connection()), return_when="ALL_COMPLETED")
        print(asyncio.Task.all_tasks())
        print(task)
        print(self.pub, self.sub)

    async def receive_message(self, channel):
        while await channel.wait_message():
            message = await channel.get_json()
            await asyncio.create_task(self.callback_loop(Dict(json.loads(message))))

    async def run_s(self):
        asyncio.create_task(self.listen())

        async def callback_loop(msg):
            print(msg)

        self.callback_loop = callback_loop

    async def create_connection(self):
        self.pub = await aioredis.create_redis("redis://c8:7070/0", password="abc")
        self.sub = await aioredis.create_redis("redis://c8:7070/0", password="abc")
        self.db = await aioredis.create_redis("redis://c8:7070/0", password="abc")
        self.listener = await self.sub.subscribe(f"abc")

    async def listen(self):
        self.tsk = asyncio.ensure_future(self.receive_message(self.listener[0]))
        await self.tsk

async def periodic():  # test function to show current tasks
    number = 5
    while True:
        await asyncio.sleep(number)
        print(asyncio.Task.all_tasks())

async def main(opts):
    loop.create_task(periodic())
    s = S(opts)
    print(s.pub, s.sub)
    loop.create_task(s.run_s())

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    main_task = loop.create_task(main(sys.argv[1:]))
    loop.run_forever()  # I DON'T WANT TO MOVE THIS UNLESS IT IS NECESSARY
I think what you want is to make sure the function create_connections runs to completion BEFORE the S constructor. A way to do that is to rearrange your code a little. Move the create_connections function outside the class:
async def create_connections():
    pub = await aioredis.create_redis("redis://c8:7070/0", password="abc")
    sub = await aioredis.create_redis("redis://c8:7070/0", password="abc")
    db = await aioredis.create_redis("redis://c8:7070/0", password="abc")
    listener = await sub.subscribe("abc")
    return pub, sub, db, listener
Now await that function before constructing S. So your main function becomes:
async def main(opts):
    loop.create_task(periodic())
    x = await create_connections()
    s = S(opts, x)  # pass the result of create_connections to S
    print(s.pub, s.sub)
    loop.create_task(s.run_s())
Now modify the S constructor to receive the objects created:
def __init__(self, opts, x):
    self.pub, self.sub, self.db, self.listener = x
I'm not sure what you're trying to do with the return_when argument and the call to asyncio.wait. The create_connections function doesn't launch a set of parallel tasks; rather, it awaits each of the calls before moving on to the next one. Perhaps you could improve performance by running the calls in parallel, but that's a different question.
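If you did want them in parallel, asyncio.gather would do it; a sketch, assuming the same aioredis calls as above:

async def create_connections():
    # open the three redis clients concurrently instead of sequentially
    pub, sub, db = await asyncio.gather(
        aioredis.create_redis("redis://c8:7070/0", password="abc"),
        aioredis.create_redis("redis://c8:7070/0", password="abc"),
        aioredis.create_redis("redis://c8:7070/0", password="abc"),
    )
    listener = await sub.subscribe("abc")
    return pub, sub, db, listener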

Pass instance of object to FastAPI router

What's wrong with this implementation of passing a class instance as a dependency to a FastAPI router, or is it a bug?
1) I have defined the router with a dependency:
app = FastAPI()
dbconnector_is = AsyncDBPool(conn=is_cnx, loop=None)
app.include_router(test_route.router, dependencies=[Depends(dbconnector_is)])
@app.on_event('startup')
async def startup():
    app.logger = await AsyncLogger().getlogger(log)
    await app.logger.warning('Webservice is starting up...')
    await app.logger.info("Establishing RDBS Integration Pool Connection...")
    await dbconnector_is.connect()
The router:
@router.get('/test')
async def test():
    data = await dbconnector_is.callproc('is_processes_get', rows=-1, values=[None, None])
    return Response(json.dumps(data, default=str))
The custom class whose instance is passed as a callable:
class AsyncDBPool:
    def __init__(self, conn: dict, loop=None):
        self.conn = conn
        self.pool = None
        self.connected = False
        self.loop = loop

    def __call__(self):
        return self

    async def connect(self):
        while self.pool is None:
            try:
                self.pool = await aiomysql.create_pool(**self.conn, loop=self.loop, autocommit=True)
            except aiomysql.OperationalError as e:
                await asyncio.sleep(1)
                continue
            else:
                return self.pool
And when I send a request, I receive this error:
data = await dbconnector_is.callproc('is_processes_get', rows=-1, values=[None, None])
NameError: name 'dbconnector_is' is not defined
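For what it's worth, the NameError suggests the route module never imports dbconnector_is: router-level dependencies declared via dependencies=[Depends(...)] run on each request, but their return values are discarded, so the instance has to be imported in the route module or injected as a parameter. A hypothetical sketch of the parameter-injection variant (module names are made up):

# test_route.py (hypothetical module layout)
import json
from fastapi import APIRouter, Depends
from fastapi.responses import Response

from app_module import dbconnector_is  # hypothetical: wherever the pool instance lives

router = APIRouter()

@router.get('/test')
async def test(pool=Depends(dbconnector_is)):
    # `pool` is the AsyncDBPool instance returned by its __call__
    data = await pool.callproc('is_processes_get', rows=-1, values=[None, None])
    return Response(json.dumps(data, default=str))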

Using decorators in python to set global variables

I am attempting to reduce SLOC by using decorators. I have a case where I need to start four TCP servers and save the connecting clients as global variables. Here is the code:
# The sockets we will be using
socket0_client = None
socket1_client = None
socket2_client = None
socket3_client = None

# Populate them
def save_client(global_client_var):
    def decorator(func):
        async def inner(client, verbose=False):
            global_client_var = client
            # Receive stuff
            with client:
                while True:
                    # Get data. If there is no data, quit
                    data = await loop.sock_recv(client, 10000)
                    if not data:
                        break
                    # respond to the data
                    await func(client, data)
        return inner
    return decorator

@save_client(socket0_client)
async def socket0_reply(client, data):
    await loop.sock_sendall(client, b'Got:'+data)

@save_client(socket1_client)
async def socket1_reply(client, data):
    await loop.sock_sendall(client, b'Got:'+data)

@save_client(socket2_client)
async def socket2_reply(client, data):
    await loop.sock_sendall(client, b'Got:'+data)

@save_client(socket3_client)
async def socket3_reply(client, data):
    await loop.sock_sendall(client, b'Got:'+data)

loop.create_task(tcp_server.single_server(('', 60001), task=socket0_reply, verbose=True))
loop.create_task(tcp_server.single_server(('', 60002), task=socket1_reply, verbose=True))
loop.create_task(tcp_server.single_server(('', 60003), task=socket2_reply, verbose=True))
loop.create_task(tcp_server.single_server(('', 60004), task=socket3_reply, verbose=True))
There is a function that I don't have the code for: the single_server function. It binds a server to the given address, waits for a connection, and then calls task with the newly connected client.
The problem I have is that although client is populated in the inner function, and it is clearly assigned to global_client_var, the global sockets are never set. They remain None.
What's going on here? How can I get these global variables set?
You cannot rebind the name that was passed in as a function argument and expect the caller's variable to change, which is what your code is attempting to do. Python passes arguments by assignment: global_client_var = client only rebinds the local parameter name inside inner. You can, however, mutate a shared container, so one fix is to store the clients in a dictionary:
sockets = {}

# Populate them
def save_client(global_client_var):
    def decorator(func):
        async def inner(client, verbose=False):
            sockets[global_client_var] = client
            # Receive stuff
            with client:
                while True:
                    # Get data. If there is no data, quit
                    data = await loop.sock_recv(client, 10000)
                    if not data:
                        break
                    # respond to the data
                    await func(client, data)
        return inner
    return decorator

@save_client(0)
async def socket0_reply(client, data):
    await loop.sock_sendall(client, b'Got:'+data)

@save_client(1)
async def socket1_reply(client, data):
    await loop.sock_sendall(client, b'Got:'+data)

@save_client(2)
async def socket2_reply(client, data):
    await loop.sock_sendall(client, b'Got:'+data)

@save_client(3)
async def socket3_reply(client, data):
    await loop.sock_sendall(client, b'Got:'+data)
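As a minimal illustration of why the original decorator could not work (a standalone sketch, not from the answer above):

def rebind(x):
    x = "new"  # rebinds the local name only; the caller never sees this

s = "old"
rebind(s)
print(s)  # prints 'old': the caller's binding is untouched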
