Using asyncio in the middle of a Python 3.6 script - python

I have a Python task that reports on its progress during execution using a status updater with one or more handlers attached. I want updates to be dispatched to each handler asynchronously (each handler makes an I/O-bound call with the updates: pushing to a queue, logging to a file, calling an HTTP endpoint, etc.). The status updater has a method like the one below, with each handler.dispatch method being a coroutine. This was working until a handler using aiohttp was added, and now I am getting weird errors from the aiohttp module.
def _dispatch(self, **updates):
    event_loop = asyncio.get_event_loop()
    tasks = (event_loop.create_task(handler.dispatch(**updates))
             for handler in self._handlers)
    event_loop.run_until_complete(asyncio.gather(*tasks))
Every example of asyncio I've seen basically has this pattern:
if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    loop.close()
My question is: is the way I am attempting to use the asyncio module in this case just completely wrong? Does the event loop need to be created once and only once, with everything else going through it?
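One pattern that avoids re-entering the loop machinery on every call is to create the loop once and reuse it for each dispatch. The sketch below assumes each handler.dispatch is a coroutine, as in the question; ListHandler is a hypothetical handler standing in for the queue/file/HTTP ones:

```python
import asyncio

class StatusUpdater:
    def __init__(self, handlers):
        self._handlers = handlers
        # The loop is created once, then reused for every dispatch call.
        self._loop = asyncio.new_event_loop()

    def _dispatch(self, **updates):
        # One task per handler, all scheduled on the same long-lived loop.
        tasks = [self._loop.create_task(h.dispatch(**updates))
                 for h in self._handlers]
        self._loop.run_until_complete(asyncio.gather(*tasks))

# Hypothetical handler for illustration only.
class ListHandler:
    def __init__(self):
        self.received = []

    async def dispatch(self, **updates):
        await asyncio.sleep(0)  # stand-in for real I/O
        self.received.append(updates)

handler = ListHandler()
updater = StatusUpdater([handler])
updater._dispatch(progress=50)
updater._dispatch(progress=100)
print(handler.received)
```

Because the loop lives for the whole run, libraries like aiohttp that cache the loop they were created on keep seeing the same one.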

Related

How to handle graceful shutdown inside of aiohttp coroutines?

I want to save state into the database of a background aiohttp coroutine before the server is shut down. I was thinking of creating a global array of coroutine jobs that need to be finished and doing an await asyncio.gather(*global_jobs) in the shutdown handler.
Is this the proper approach?
I'm not sure I understand what you mean by "the database of a background aiohttp coroutine" but, as far as cleanup actions at shutdown go, you can:
Use a signal handler
import asyncio
import signal
loop = asyncio.get_event_loop()
loop.add_signal_handler(signal.SIGINT, my_signal_handler, *additional_args_list)
In a Unix setting, if you know that the application is going to be interrupted with a specific signal, you can perform cleanup actions selectively for that signal. See loop.add_signal_handler(signum, callback, *args) for further information.
Note: you can employ a callable class rather than a function as callback, so that the class instance can hold a reference to any resource you wish to interact with during shutdown, e.g. the coroutines you mentioned in your question.
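A minimal sketch of that idea on Unix (ShutdownHandler and worker are illustrative names, not part of any library; here the handler is fired via call_later instead of an actual Ctrl-C so the example terminates on its own):

```python
import asyncio
import signal

class ShutdownHandler:
    """Callable used as the signal callback; it holds references to the
    resources (here, tasks) it must clean up at shutdown."""
    def __init__(self, loop, tasks):
        self.loop = loop
        self.tasks = tasks

    def __call__(self):
        for t in self.tasks:
            t.cancel()
        self.loop.stop()

async def worker():
    while True:
        await asyncio.sleep(3600)

loop = asyncio.new_event_loop()
task = loop.create_task(worker())
handler = ShutdownHandler(loop, [task])
loop.add_signal_handler(signal.SIGINT, handler)  # Unix only
# For the sketch, fire the handler from the loop instead of pressing Ctrl-C:
loop.call_later(0.01, handler)
loop.run_forever()
# Let the cancellation propagate before closing the loop.
loop.run_until_complete(asyncio.gather(task, return_exceptions=True))
loop.close()
print(task.cancelled())
```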
Catch asyncio.CancelledError
import asyncio
async def my_coro():
    try:
        # Normal interaction with aiohttp
        while True:
            await asyncio.sleep(1)  # must await, or cancellation is never delivered
    except asyncio.CancelledError:
        cleanup_actions()
If you can assume that, at shutdown, the event loop will be stopped cleanly, you can count on your running coroutines to be thrown an asyncio.CancelledError before the loop is closed.
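A runnable sketch of that guarantee, where cleanup_actions simply records that it ran and the sleep stands in for real aiohttp I/O:

```python
import asyncio

cleanup_ran = []

def cleanup_actions():
    cleanup_ran.append(True)

async def my_coro():
    try:
        while True:
            await asyncio.sleep(0.01)  # stand-in for aiohttp I/O
    except asyncio.CancelledError:
        cleanup_actions()
        raise  # re-raise so the task ends up in the cancelled state

async def main():
    task = asyncio.ensure_future(my_coro())
    await asyncio.sleep(0.03)
    task.cancel()  # this is what a clean loop shutdown does to running tasks
    await asyncio.gather(task, return_exceptions=True)

asyncio.run(main())
print(cleanup_ran)
```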

Non blocking python class method executed with asyncio

I'm trying to initialize a non-blocking task which shares data with its parent object. It is a websocket client, and it should not block the main execution, while still running "forever".
My humble expectation was that this would do it, but sadly it is blocking the main thread.
loop = asyncio.new_event_loop()
task = loop.create_task(self.initWS())
loop.run_forever()
self.initWS() is indeed not blocking the main thread, but loop.run_forever() is.
If you want to execute more tasks concurrently with self.initWS(), you have to add them to the asyncio loop, too.
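A sketch of that idea, with init_ws standing in for the websocket client's receive loop and other_work for the rest of the program; both are added to the same loop via asyncio.gather, so neither blocks the other:

```python
import asyncio

messages = []

async def init_ws():
    # Stand-in for the websocket client's receive loop.
    for i in range(3):
        await asyncio.sleep(0.01)
        messages.append(i)

async def other_work():
    # The "main execution" that must not be blocked.
    await asyncio.sleep(0.005)
    messages.append("other")

async def main():
    # Both coroutines are scheduled on the same loop and interleave.
    await asyncio.gather(init_ws(), other_work())

asyncio.run(main())
print(messages)
```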

Multithreaded server using Python Websockets library

I am trying to build a server and client using the Python websockets library, following the example given here: Python OCPP documentation. I want to edit this example so that for each new client (Charge Point), the server (Central System) runs on a new thread (or a new loop, as the websockets library uses Python's asyncio). Basically, I want to receive messages from multiple clients simultaneously. I modified the main method of the server (Central System) from the example by adding a loop parameter to websockets.serve(), hoping that whenever a new client connects, it runs on a different loop:
async def main():
    server = await websockets.serve(
        on_connect,
        '0.0.0.0',
        9000,
        subprotocols=['ocpp2.0'],
        ssl=ssl_context,
        loop=asyncio.new_event_loop(),
        ping_timeout=100000000
    )
    await server.wait_closed()

if __name__ == '__main__':
    asyncio.run(main())
I am getting the following error. Please help:
RuntimeError:
Task <Task pending coro=<main() running at server.py:36>
cb=_run_until_complete_cb() at /usr/lib/python3.7/asyncio/base_events.py:153]>
got Future <_GatheringFuture pending> attached to a different loop - sys:1:
RuntimeWarning: coroutine 'BaseEventLoop._create_server_getaddrinfo' was never awaited
Author of the OCPP package here. The examples in the project are able to handle multiple clients with 1 event loop.
The problem in your example is loop=asyncio.new_event_loop(). The websockets.serve() coroutine is awaited in the 'main' event loop, but you pass it a reference to a second event loop. This causes the future to be attached to a different event loop than the one it was created in.
You should avoid running multiple event loops.
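With the loop argument simply removed, one event loop multiplexes all client connections. The same principle can be sketched with the standard library alone; asyncio.start_server stands in here for websockets.serve:

```python
import asyncio

async def on_connect(reader, writer):
    # Echo one line back to the client.
    data = await reader.readline()
    writer.write(b"echo:" + data)
    await writer.drain()
    writer.close()

async def main():
    # One loop (the one asyncio.run creates) serves every client.
    server = await asyncio.start_server(on_connect, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    async def client(msg):
        reader, writer = await asyncio.open_connection("127.0.0.1", port)
        writer.write(msg + b"\n")
        await writer.drain()
        reply = await reader.readline()
        writer.close()
        return reply

    # Two clients handled concurrently by the same loop.
    replies = await asyncio.gather(client(b"a"), client(b"b"))
    server.close()
    await server.wait_closed()
    return replies

replies = asyncio.run(main())
print(replies)
```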

Running any web server event loop on a secondary thread

We have a rich backend application that handles messaging/queuing, database queries, and computer vision. An additional feature we need is TCP communication, preferably via HTTP. The point is: this is not primarily a web application. We would expect to have a set of HTTP channels set up for different purposes. Yes, we understand about messaging, including topics and publish-subscribe, but direct TCP-based request/response also has its place.
I have looked at and tried out a half dozen Python HTTP web servers. They either implicitly or explicitly describe a requirement to run the event loop on the main thread. For us this is the cart before the horse: the main thread is already occupied with other tasks, including coordination of the other activities.
To illustrate the intended structure I will lift code from my aiohttp-specific question How to run an aiohttp web application in a secondary thread. In that question I tried running in another standalone script but on a subservient thread:
def runWebapp():
    from aiohttp import web

    async def handle(request):
        name = request.match_info.get('name', "Anonymous")
        text = "Hello, " + name
        return web.Response(text=text)

    app = web.Application()
    app.add_routes([web.get('/', handle),
                    web.get('/{name}', handle)])
    web.run_app(app)

if __name__ == '__main__':
    from threading import Thread
    t = Thread(target=runWebapp)
    t.start()
    print("thread started, let's nap..")
    import time
    time.sleep(50)
This gives error:
RuntimeError: There is no current event loop in thread 'Thread-1'.
This error turns out to mean "hey you're not running this on the main thread".
We can logically replace aiohttp with other web servers here. Are there any for which this approach of asking the web server's event handling loop to run on a secondary thread will work? So far I have also tried cherrypy, tornado, and flask.
Note that one prominent webserver that I have not tried is django. But that one seems to require an extensive restructuring of the application around the directory structures expected (/required?) for django. We would not want to do that given the application has a set of other purposes that supersede this sideshow of having http servers.
An approach that I have looked at is asyncio. I have not understood whether it can support running event loops on a side thread or not: if so then it would be an answer to this question.
In any case are there any web servers that explicitly support having their event loops off of the main thread?
You can create and set an event loop while on the secondary thread:
asyncio.set_event_loop(asyncio.new_event_loop())
cherrypy and flask already work without this; tornado works with this.
On aiohttp, you get another error from it calling loop.add_signal_handler():
ValueError: set_wakeup_fd only works in main thread
You need to skip that, because only the main thread of the main interpreter is allowed to set a new signal handler; web servers running on a secondary thread therefore cannot directly handle signals for a graceful exit.
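The general pattern, independent of any particular framework, can be sketched with plain asyncio; serve is a stand-in for the framework's serving coroutine:

```python
import asyncio
import threading

result = []

async def serve():
    # Stand-in for a web framework's serving coroutine.
    await asyncio.sleep(0.01)
    result.append("served")

def run_in_thread():
    # There is no event loop on a non-main thread by default (hence the
    # RuntimeError); create and set one before the framework looks it up.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(serve())
    loop.close()

t = threading.Thread(target=run_in_thread)
t.start()
t.join()
print(result)
```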
Example: aiohttp
Set the event loop before calling run_app().
aiohttp 3.8+ already uses a new event loop in run_app(), so you can skip this.
Pass handle_signals=False when calling run_app() to not add signal handlers.
asyncio.set_event_loop(asyncio.new_event_loop()) # aiohttp<3.8
web.run_app(app, handle_signals=False)
Example: tornado
Set the event loop before calling app.listen().
asyncio.set_event_loop(asyncio.new_event_loop())
app.listen(8888)
tornado.ioloop.IOLoop.current().start()
Any Python program starts on a single thread, the main thread, and creating a Thread does not automatically give that thread an event loop of its own. Rather than managing a separate event loop per Thread, you can use multiprocessing instead of threading: each Process is its own interpreter and can create its own event loop.
from multiprocessing import Process
from aiohttp import web

def runWebapp(port):
    async def handle(request):
        name = request.match_info.get("name", "Anonymous")
        text = "Hello, " + name
        return web.Response(text=text)

    app = web.Application()
    app.add_routes([
        web.get("/", handle),
        web.get("/{name}", handle)
    ])
    web.run_app(app, port=port)

if __name__ == "__main__":
    p1 = Process(target=runWebapp, args=(8080,))
    p2 = Process(target=runWebapp, args=(8081,))
    p1.start()
    p2.start()

Multiple asyncio loops

I am implementing a service in a multi-service Python application. This service will wait for requests to be pushed into a Redis queue and resolve them whenever one is available.
async def foo():
    while True:
        _, request = await self.request_que.brpop(self.name)
        # processing request ...
And I'm adding this foo() function to an asyncio event loop:
loop = asyncio.new_event_loop()
asyncio.ensure_future(self.foo(), loop=loop)
loop.run_forever()
The main question is: is it OK to do this in several (say 3 or 4) different services, resulting in several asyncio loops running simultaneously? Will it harm the OS?
Simple answer: yes, it is fine. I previously implemented a service which runs on one asyncio loop and spawns additional processes which run on their own loops (within the same machine).
