How can coroutines be used in signal handlers? (Python asyncio)

I am developing an application that uses asyncio from python3.4 for networking. When this application shuts down cleanly, a node needs to "disconnect" from the hub. This disconnect is an active process that requires a network connection so the loop needs to wait for this to complete before shutting down.
My issue is that using a coroutine as a signal handler will result in the application not shutting down. Please consider the following example:
import asyncio
import functools
import os
import signal

@asyncio.coroutine
def ask_exit(signame):
    print("got signal %s: exit" % signame)
    yield from asyncio.sleep(10.0)
    loop.stop()

loop = asyncio.get_event_loop()
for signame in ('SIGINT', 'SIGTERM'):
    loop.add_signal_handler(getattr(signal, signame),
                            functools.partial(ask_exit, signame))

print("Event loop running forever, press CTRL+c to interrupt.")
print("pid %s: send SIGINT or SIGTERM to exit." % os.getpid())
loop.run_forever()
If you run this example and then press Ctrl+C, nothing will happen.
The question is, how do I make this behavior happen with signals and coroutines?

Syntax for Python >= 3.5
loop = asyncio.get_event_loop()
for signame in ('SIGINT', 'SIGTERM'):
    loop.add_signal_handler(getattr(signal, signame),
                            lambda: asyncio.ensure_future(ask_exit(signame)))

Syntax for Python >= 3.7
loop = asyncio.get_event_loop()
for signame in ('SIGINT', 'SIGTERM'):
    loop.add_signal_handler(getattr(signal, signame),
                            lambda signame=signame: asyncio.create_task(ask_exit(signame)))
Note
This is basically the same as @svs's answer, with two differences:
Use of the more recent Python 3.7+ function asyncio.create_task, which is "more readable" than asyncio.ensure_future.
Binding signame immediately as a default argument of the lambda avoids the late-binding problem that leads to the expected-unexpected™ behavior referred to in the comment by @R2RT (a small illustration follows below). This was shamelessly copied from Lynn Root's blog post Graceful Shutdowns with asyncio (read the whole series to learn more about asyncio's beautiful goriness).
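To see the late-binding pitfall that the signame=signame default guards against, here is a small self-contained illustration (plain closures in a loop, unrelated to signals):

# Without a default argument every lambda closes over the loop variable itself,
# so by the time the handlers run they all see its final value.
handlers = [lambda: print(name) for name in ('SIGINT', 'SIGTERM')]
for h in handlers:
    h()  # prints "SIGTERM" twice

# Binding the current value as a default argument captures it at definition time.
handlers = [lambda name=name: print(name) for name in ('SIGINT', 'SIGTERM')]
for h in handlers:
    h()  # prints "SIGINT", then "SIGTERM"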

loop = asyncio.get_event_loop()
for signame in ('SIGINT', 'SIGTERM'):
    loop.add_signal_handler(getattr(signal, signame),
                            asyncio.async, ask_exit(signame))
That way the signal causes your ask_exit to get scheduled in a task. (Note that asyncio.async is deprecated in later Python versions; use asyncio.ensure_future as shown above.)

Python 3.8
1st attempt: used async def handler_shutdown() and wrapped it in loop.create_task() when passing it to add_signal_handler()
2nd attempt: don't use async def for handler_shutdown()
3rd attempt: wrap handler_shutdown and its parameters in functools.partial()
e.g.
import asyncio
import functools
import signal

def handler_shutdown(signal, loop, tasks, http_runner):
    ...
    ...

def main():
    loop = asyncio.get_event_loop()
    for signame in ('SIGINT', 'SIGTERM', 'SIGQUIT'):
        print(f"add signal handler {signame} ...")
        loop.add_signal_handler(
            getattr(signal, signame),
            functools.partial(handler_shutdown,
                              signal=signame, loop=loop, tasks=tasks,
                              http_runner=http_runner
                              )
        )
The main issue I had was the error
raise TypeError("coroutines cannot be used "
I first worked around it by wrapping the coroutine in loop.create_task(), then solved it properly by removing async from the signal handler function.
For named parameters to the handler, also use functools.partial.
A fuller sketch of this pattern follows below.
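For illustration, here is a minimal self-contained sketch of that pattern: the callback registered with add_signal_handler() is a plain function, which schedules an async cleanup coroutine (the shutdown() body here is hypothetical, not the code above):

import asyncio
import functools
import signal

async def shutdown(signame, loop):
    # Hypothetical async cleanup: cancel outstanding tasks, then stop the loop.
    print(f"received {signame}, cancelling tasks...")
    tasks = [t for t in asyncio.all_tasks() if t is not asyncio.current_task()]
    for task in tasks:
        task.cancel()
    await asyncio.gather(*tasks, return_exceptions=True)
    loop.stop()

def handler_shutdown(signame, loop):
    # Plain (non-async) callback: add_signal_handler() rejects coroutine
    # functions, so schedule the async cleanup instead of awaiting it here.
    loop.create_task(shutdown(signame, loop))

def main():
    loop = asyncio.get_event_loop()
    for signame in ('SIGINT', 'SIGTERM'):
        loop.add_signal_handler(
            getattr(signal, signame),
            functools.partial(handler_shutdown, signame=signame, loop=loop),
        )
    loop.run_forever()

if __name__ == '__main__':
    main()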

Related

Python asyncio non-blocking wait after return

I created an API in Python and I want to start a long-running function, but I also want to tell the user that my endpoint returned successfully and that the task has started executing.
I want to do this so that the user does not have to wait for the function to finish.
In pseudocode, it would look roughly like this:
async def my_endpoint(context):
    func_name = context.func_name
    <something_validation_block>
    return 204  # if all right
So, how can I do this in one function?
I tried something like:
async def handle(context):
    <validate_block>
    threading.Thread(
        target=logn_func, args=(context,),
    ).start()
    return 204
But unfortunately it does not work :(
First, asyncio has a function named asyncio.to_thread (docs). It provides a friendly way to combine async code with threading. (Or you can run the task in a thread pool, docs.)
Then you can use asyncio.create_task(coro) to run an async function in the background. It returns a Task object, which is awaitable, or you can use task.add_done_callback to handle the result.
For example:
import asyncio
import time

def block() -> str:
    print("block function start")
    time.sleep(1)
    print("block function done")
    return "result"

async def main() -> int:
    task = asyncio.get_running_loop().run_in_executor(None, block)
    task.add_done_callback(lambda task: print("task with result:", task.result()))
    print("return 204")
    return 204

asyncio.run(main())
Output:
block function start
return 204
block function done
task with result: result
NOTE: Save a reference to tasks, to avoid a task disappearing mid-execution. The event loop only keeps weak references to tasks. A task that isn’t referenced elsewhere may get garbage collected at any time, even before it’s done.
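As a sketch of that advice (the names here are made up for illustration), one common pattern is to keep created tasks in a module-level set and discard them when they finish:

import asyncio

background_tasks = set()  # strong references keep tasks alive until they finish

async def long_running(context):
    # Hypothetical background work.
    await asyncio.sleep(0.1)
    print(f"processed {context}")

async def handle(context):
    task = asyncio.create_task(long_running(context))
    background_tasks.add(task)
    # Drop the reference once the task completes so the set does not grow forever.
    task.add_done_callback(background_tasks.discard)
    return 204

async def main():
    status = await handle("request-1")
    print("endpoint returned", status)
    # A real server keeps its loop running; here we wait for the background
    # work explicitly so the demo exits cleanly.
    await asyncio.gather(*background_tasks)

asyncio.run(main())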

RuntimeError: There is no current event loop in thread in async + apscheduler

I have an async function and need to run it with APScheduler every N minutes.
Here is the Python code:
import asyncio
import os

from aiohttp import ClientSession
from apscheduler.schedulers.asyncio import AsyncIOScheduler

URL_LIST = ['<url1>',
            '<url2>',
            '<url2>',
            ]

def demo_async(urls):
    """Fetch list of web pages asynchronously."""
    loop = asyncio.get_event_loop()  # event loop
    future = asyncio.ensure_future(fetch_all(urls))  # tasks to do
    loop.run_until_complete(future)  # loop until done

async def fetch_all(urls):
    tasks = []  # list of tasks, one per url
    async with ClientSession() as session:
        for url in urls:
            task = asyncio.ensure_future(fetch(url, session))
            tasks.append(task)  # create list of tasks
        _ = await asyncio.gather(*tasks)  # gather task responses

async def fetch(url, session):
    """Fetch a url, using specified ClientSession."""
    async with session.get(url) as response:
        resp = await response.read()
        print(resp)

if __name__ == '__main__':
    scheduler = AsyncIOScheduler()
    scheduler.add_job(demo_async, args=[URL_LIST], trigger='interval', seconds=15)
    scheduler.start()
    print('Press Ctrl+{0} to exit'.format('Break' if os.name == 'nt' else 'C'))

    # Execution will block here until Ctrl+C (Ctrl+Break on Windows) is pressed.
    try:
        asyncio.get_event_loop().run_forever()
    except (KeyboardInterrupt, SystemExit):
        pass
But when I try to run it I get the following error:
Job "demo_async (trigger: interval[0:00:15], next run at: 2017-10-12 18:21:12 +04)" raised an exception.....
..........\lib\asyncio\events.py", line 584, in get_event_loop
    % threading.current_thread().name)
RuntimeError: There is no current event loop in thread '<concurrent.futures.thread.ThreadPoolExecutor object at 0x0356B150>_0'.
Could you please help me with this?
Python 3.6, APScheduler 3.3.1.
In your def demo_async(urls), try to replace:
loop = asyncio.get_event_loop()
with:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
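A minimal sketch of demo_async with that change applied (fetch_all is the coroutine from the question):

import asyncio

def demo_async(urls):
    """Fetch a list of web pages asynchronously from a scheduler worker thread."""
    # APScheduler runs this job in a worker thread, which has no event loop
    # by default, so create and install one for this thread.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        loop.run_until_complete(fetch_all(urls))
    finally:
        loop.close()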
The important thing that hasn't been mentioned is why the error occurs. For me personally, knowing why the error occurs is as important as solving the actual problem.
Let's take a look at the implementation of get_event_loop in BaseDefaultEventLoopPolicy:
class BaseDefaultEventLoopPolicy(AbstractEventLoopPolicy):
    ...
    def get_event_loop(self):
        """Get the event loop.

        This may be None or an instance of EventLoop.
        """
        if (self._local._loop is None and
                not self._local._set_called and
                isinstance(threading.current_thread(), threading._MainThread)):
            self.set_event_loop(self.new_event_loop())
        if self._local._loop is None:
            raise RuntimeError('There is no current event loop in thread %r.'
                               % threading.current_thread().name)
        return self._local._loop
You can see that the self.set_event_loop(self.new_event_loop()) is only executed if all of the below conditions are met:
self._local._loop is None - _local._loop is not set
not self._local._set_called - set_event_loop hasn't been called yet
isinstance(threading.current_thread(), threading._MainThread) - current thread is the main one (this is not True in your case)
Therefore the exception is raised, because no loop is set in the current thread:
if self._local._loop is None:
    raise RuntimeError('There is no current event loop in thread %r.'
                       % threading.current_thread().name)
Just pass fetch_all to scheduler.add_job() directly. The asyncio scheduler supports coroutine functions as job targets.
If the target callable is not a coroutine function, it will be run in a worker thread (due to historical reasons), hence the exception.
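A sketch of that approach, reusing fetch_all and URL_LIST from the question (demo_async is no longer needed because the job runs on the scheduler's event loop):

import asyncio
from apscheduler.schedulers.asyncio import AsyncIOScheduler

scheduler = AsyncIOScheduler()
# Pass the coroutine function itself; the asyncio scheduler awaits it on its loop.
scheduler.add_job(fetch_all, args=[URL_LIST], trigger='interval', seconds=15)
scheduler.start()

try:
    asyncio.get_event_loop().run_forever()
except (KeyboardInterrupt, SystemExit):
    pass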
I had a similar issue where I wanted my asyncio module to be callable from a non-asyncio script (which was running under gevent... don't ask...). The code below resolved my issue because it tries to get the current event loop, but will create one if there isn't one in the current thread. Tested in python 3.9.11.
try:
    loop = asyncio.get_event_loop()
except RuntimeError as e:
    if str(e).startswith('There is no current event loop in thread'):
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
    else:
        raise
Use asyncio.run() instead of directly using the event loop.
It creates a new loop and closes it when finished.
This is what run looks like:
if events._get_running_loop() is not None:
    raise RuntimeError(
        "asyncio.run() cannot be called from a running event loop")
if not coroutines.iscoroutine(main):
    raise ValueError("a coroutine was expected, got {!r}".format(main))

loop = events.new_event_loop()
try:
    events.set_event_loop(loop)
    loop.set_debug(debug)
    return loop.run_until_complete(main)
finally:
    try:
        _cancel_all_tasks(loop)
        loop.run_until_complete(loop.shutdown_asyncgens())
    finally:
        events.set_event_loop(None)
        loop.close()
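Applied to the question, a sketch of demo_async that never touches the event loop directly (fetch_all as defined in the question):

import asyncio

def demo_async(urls):
    """Scheduler job: run the async fetch with a fresh, self-managed event loop."""
    # asyncio.run() creates a new event loop for this worker thread, runs
    # fetch_all to completion, then cancels leftover tasks and closes the loop.
    asyncio.run(fetch_all(urls))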
Since this question continues to appear on the first page, I will write my problem and my answer here.
I had a RuntimeError: There is no current event loop in thread 'Thread-X'. when using flask-socketio and Bleak.
Edit: well, I refactored my file and made a class.
I initialized the loop in the constructor, and now everything is working fine:
class BLE:
    def __init__(self):
        self.loop = asyncio.get_event_loop()

    # function example, improvement of
    # https://github.com/hbldh/bleak/blob/master/examples/discover.py :
    def list_bluetooth_low_energy(self) -> list:
        async def run() -> list:
            BLElist = []
            devices = await bleak.discover()
            for d in devices:
                BLElist.append(d.name)
            return 'success', BLElist
        return self.loop.run_until_complete(run())
Usage:
ble = path.to.lib.BLE()
list = ble.list_bluetooth_low_energy()
Original answer:
The solution was stupid. I did not pay attention to what I did, but I moved some import out of a function, like this:
import asyncio, platform
from bleak import discover

def listBLE() -> dict:
    async def run() -> dict:
        ...  # my code that keeps throwing exceptions

    loop = asyncio.get_event_loop()
    ble_list = loop.run_until_complete(run())
    return ble_list
So I thought that I needed to change something in my code, and I created a new event loop using this piece of code just before the line with get_event_loop():
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
At this moment I was pretty happy, since I had a loop running.
But it was not responding. And my code relied on a timeout to return some values, so it was pretty bad for my app.
It took me nearly two hours to figure out that the problem was the import, and here is my (working) code:
def list() -> dict:
    import asyncio, platform
    from bleak import discover

    async def run() -> dict:
        ...  # my code running perfectly

    loop = asyncio.get_event_loop()
    ble_list = loop.run_until_complete(run())
    return ble_list
Reading the given answers, I only managed to fix my websocket thread by using the hint (try replacing) from https://stackoverflow.com/a/46750562/598513 on this page.
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
The documentation of BaseDefaultEventLoopPolicy explains
Default policy implementation for accessing the event loop.
In this policy, each thread has its own event loop. However, we
only automatically create an event loop by default for the main
thread; other threads by default have no event loop.
So when using a thread one has to create the loop.
And I had to reorder my code, so my final code was:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
# !!! Place code after setting the loop !!!
server = Server()
start_server = websockets.serve(server.ws_handler, 'localhost', port)
In my case the line was like this:
asyncio.get_event_loop().run_until_complete(test())
I replaced the above line with this one, which solved my problem:
asyncio.run(test())

asyncio.lock in a Tornado application

I'm trying to write an asynchronous method for use in a Tornado application. My method needs to manage a connection that can and should be shared among other calls to the function. The connection is created by awaiting. To manage this, I was using asyncio.Lock. However, every call to my method would hang waiting for the lock.
After a few hours of experimenting, I found out a few things:
If nothing awaits in the lock block, everything works as expected
tornado.ioloop.IOLoop.configure('tornado.platform.asyncio.AsyncIOLoop') does not help
tornado.platform.asyncio.AsyncIOMainLoop().install() allows it to work, regardless if the event loop is started with tornado.ioloop.IOLoop.current().start() or asyncio.get_event_loop().run_forever()
Here is some sample code that won't work unless you uncomment AsyncIOMainLoop().install():
import tornado.ioloop
import tornado.web
import tornado.gen
import tornado.httpclient
from tornado.platform.asyncio import AsyncIOMainLoop
import asyncio
import tornado.locks

class MainHandler(tornado.web.RequestHandler):
    _lock = asyncio.Lock()
    #_lock = tornado.locks.Lock()

    async def get(self):
        print("in get")
        r = await tornado.gen.multi([self.foo(str(i)) for i in range(2)])
        self.write('\n'.join(r))

    async def foo(self, i):
        print("Getting first lock on " + i)
        async with self._lock:
            print("Got first lock on " + i)
            # Do something sensitive that awaits
            await asyncio.sleep(0)
        print("Unlocked on " + i)

        # Do some work
        print("Work on " + i)
        await asyncio.sleep(0)

        print("Getting second lock on " + i)
        async with self._lock:
            print("Got second lock on " + i)
            # Do something sensitive that doesn't await
            pass
        print("Unlocked on " + i)
        return "done"

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    #AsyncIOMainLoop().install()  # This will make it work
    #tornado.ioloop.IOLoop.configure('tornado.platform.asyncio.AsyncIOLoop')  # Does not help
    app = make_app()
    app.listen(8888)
    print('starting app')
    tornado.ioloop.IOLoop.current().start()
I now know that tornado.locks.Lock() exists and works, but I'm curious why the asyncio.Lock does not work.
Both Tornado and asyncio have a global singleton event loop which everything else depends on (for advanced use cases you can avoid the singleton, but using it is idiomatic). To use both libraries together, the two singletons need to be aware of each other.
AsyncIOMainLoop().install() makes a Tornado event loop that points to the asyncio singleton, then sets it as the tornado singleton. This works.
IOLoop.configure('AsyncIOLoop') tells Tornado "whenever you need an IOLoop, create a new (non-singleton!) asyncio event loop and use that." The asyncio loop becomes the singleton when the IOLoop is started. This almost works, but when the MainHandler class is defined (and creates its class-scoped asyncio.Lock), the asyncio singleton is still pointing to the default loop (which will be replaced by the one created by AsyncIOLoop).
TL;DR: Use AsyncIOMainLoop, not AsyncIOLoop, unless you're attempting to use the more advanced non-singleton use patterns. This will get simpler in Tornado 5.0 as asyncio integration will be enabled by default.

First experience with asyncio

I am learning python-asyncio, and I'm trying to write a simple model.
I have a function handling tasks. While handling, it goes to another remote service for data and then prints a message.
My code:
import asyncio
import threading as th
import time

dd = 0

@asyncio.coroutine
def slow_operation():
    global dd
    dd += 1
    print('Future is started!', dd)
    yield from asyncio.sleep(10 - dd)  # request to another server
    print('Future is done!', dd)

def add():
    while True:
        time.sleep(1)
        asyncio.ensure_future(slow_operation(), loop=loop)

loop = asyncio.get_event_loop()
future = asyncio.Future()
asyncio.ensure_future(slow_operation(), loop=loop)
th.Thread(target=add).start()
loop.run_forever()
But this code doesn't switch the context while sleeping in:
yield from asyncio.sleep(10 - dd)
How can I fix that?
asyncio.ensure_future is not thread-safe, that's why slow_operation tasks are not started when they should be. Use loop.call_soon_threadsafe:
callback = lambda: asyncio.ensure_future(slow_operation(), loop=loop)
loop.call_soon_threadsafe(callback)
Or use asyncio.run_coroutine_threadsafe if you're running Python 3.5.1 or later:
asyncio.run_coroutine_threadsafe(slow_operation(), loop)
However, you should probably keep the use of threads to the minimum. Unless you use a library running tasks in its own thread, all the code should run inside the event loop (or inside an executor, see loop.run_in_executor).
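For illustration, a sketch of the questioner's add() thread rewritten with run_coroutine_threadsafe (slow_operation is the coroutine defined in the question):

import asyncio
import threading
import time

def add(loop):
    """Runs in a plain thread and hands coroutines to the event loop thread-safely."""
    while True:
        time.sleep(1)
        # Schedules slow_operation() on the loop from this foreign thread and
        # returns a concurrent.futures.Future that could be waited on if needed.
        asyncio.run_coroutine_threadsafe(slow_operation(), loop)

loop = asyncio.get_event_loop()
threading.Thread(target=add, args=(loop,), daemon=True).start()
loop.run_forever()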

How to use Tornado.gen.coroutine in TCP Server?

I wrote a TCP server with Tornado.
Here is the code:
#! /usr/bin/env python
# coding=utf-8
from tornado.tcpserver import TCPServer
from tornado.ioloop import IOLoop
from tornado.gen import *

class TcpConnection(object):
    def __init__(self, stream, address):
        self._stream = stream
        self._address = address
        self._stream.set_close_callback(self.on_close)
        self.send_messages()

    def send_messages(self):
        self.send_message(b'hello \n')
        print("next")
        self.read_message()
        self.send_message(b'world \n')
        self.read_message()

    def read_message(self):
        self._stream.read_until(b'\n', self.handle_message)

    def handle_message(self, data):
        print(data)

    def send_message(self, data):
        self._stream.write(data)

    def on_close(self):
        print("the monitored %d has left", self._address)

class MonitorServer(TCPServer):
    def handle_stream(self, stream, address):
        print("new connection", address, stream)
        conn = TcpConnection(stream, address)

if __name__ == '__main__':
    print('server start .....')
    server = MonitorServer()
    server.listen(20000)
    IOLoop.instance().start()
And I face an error: assert self._read_callback is None, "Already reading". I guess the error is caused by issuing multiple reads from the socket at the same time, so I changed the function send_messages to use tornado.gen.coroutine. Here is the code:
@gen.coroutine
def send_messages(self):
    yield self.send_message(b'hello \n')
    response1 = yield self.read_message()
    print(response1)
    yield self.send_message(b'world \n')
    print((yield self.read_message()))
But there are some other errors: the code seems to stop after yield self.send_message(b'hello \n'), and the following code does not execute.
What should I do about it? If you're aware of any Tornado TCP server (not HTTP!) code using tornado.gen.coroutine, please tell me. I would appreciate any links!
send_messages() calls send_message() and read_message() with yield, but these methods are not coroutines, so this will raise an exception.
The reason you're not seeing the exception is that you called send_messages() without yielding it, so the exception has nowhere to go (the garbage collector should eventually notice and print the exception, but that can take a long time). Whenever you call a coroutine, you should either use yield to wait for it to finish, or IOLoop.current().spawn_callback() to run the coroutine in the "background" (this tells Tornado that you do not intend to yield the coroutine, so it will print the exception as soon as it occurs). Also, whenever you override a method you should read the documentation to see whether coroutines are allowed (when you override TCPServer.handle_stream() you can make it a coroutine, but __init__() may not be a coroutine).
Once the exception is getting logged, the next step is to fix it. You can either make send_message() and read_message() coroutines (getting rid of the handle_message() callback in the process), or you can use tornado.gen.Task() to call coroutine-style code from a coroutine. I generally recommend using coroutines everywhere.
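As a rough sketch of the coroutine-everywhere approach (an illustration under the assumption that a simple request/reply exchange is wanted, not the original poster's code): IOStream.write() and IOStream.read_until() return Futures when called without a callback, so they can be yielded directly inside a coroutine handle_stream():

from tornado import gen
from tornado.ioloop import IOLoop
from tornado.iostream import StreamClosedError
from tornado.tcpserver import TCPServer

class MonitorServer(TCPServer):
    @gen.coroutine
    def handle_stream(self, stream, address):
        print("new connection", address)
        try:
            # write() and read_until() return Futures here, so they can be yielded.
            yield stream.write(b'hello \n')
            reply = yield stream.read_until(b'\n')
            print(reply)
            yield stream.write(b'world \n')
            reply = yield stream.read_until(b'\n')
            print(reply)
        except StreamClosedError:
            print("the monitored %s has left" % (address,))

if __name__ == '__main__':
    print('server start .....')
    server = MonitorServer()
    server.listen(20000)
    IOLoop.instance().start()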
