Suppose I have two functions that work like this:
import tornado.gen
import tornado.ioloop

@tornado.gen.coroutine
def f():
    for i in range(4):
        print("f", i)
        yield tornado.gen.sleep(0.5)

@tornado.gen.coroutine
def g():
    yield tornado.gen.sleep(1)
    print("Let's raise RuntimeError")
    raise RuntimeError
In general, function f might contain an endless loop and never return (e.g. it could process some queue).
What I want is to be able to interrupt it at any point where it yields.
The most obvious way doesn't work: the exception is only raised after function f exits (and if f is endless, that obviously never happens).
@tornado.gen.coroutine
def main():
    try:
        yield [f(), g()]
    except Exception as e:
        print("Caught", repr(e))
    while True:
        yield tornado.gen.sleep(10)

if __name__ == "__main__":
    tornado.ioloop.IOLoop.instance().run_sync(main)
Output:
f 0
f 1
Let's raise RuntimeError
f 2
f 3
Traceback (most recent call last):
File "/tmp/test/lib/python3.4/site-packages/tornado/gen.py", line 812, in run
yielded = self.gen.send(value)
StopIteration
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
<...>
File "test.py", line 16, in g
raise RuntimeError
RuntimeError
That is, the exception is only raised once both coroutines have returned (both futures resolved).
This is partially solved by tornado.gen.WaitIterator, but it's buggy (unless I'm mistaken). That's not the point, though.
It still doesn't solve the problem of interrupting running coroutines: a coroutine continues to run even though the function that started it has exited.
EDIT: it seems that coroutine cancellation isn't really supported in Tornado, unlike in Python's asyncio, where you can easily throw a CancelledError at every yield point.
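For illustration, here is a minimal asyncio sketch of that cancellation behaviour (my addition, not part of the original post; assumes Python 3.7+): Task.cancel() throws CancelledError into the coroutine at its next await.

import asyncio

async def f():
    try:
        for i in range(4):
            print("f", i)
            await asyncio.sleep(0.5)  # cancellation is delivered here
    except asyncio.CancelledError:
        print("f was cancelled")
        raise  # re-raise so the task is marked as cancelled

async def main():
    task = asyncio.create_task(f())
    await asyncio.sleep(1)
    task.cancel()  # throws CancelledError into f at its next await
    try:
        await task
    except asyncio.CancelledError:
        print("f is gone")

asyncio.run(main())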
If you use WaitIterator according to the instructions, and use a toro.Event to signal between coroutines, it works as expected:
from datetime import timedelta
import tornado.gen
import tornado.ioloop
import toro
stop = toro.Event()
@tornado.gen.coroutine
def f():
    for i in range(4):
        print("f", i)
        # wait raises Timeout if not set before the deadline.
        try:
            yield stop.wait(timedelta(seconds=0.5))
            print("f done")
            return
        except toro.Timeout:
            print("f continuing")

@tornado.gen.coroutine
def g():
    yield tornado.gen.sleep(1)
    print("Let's raise RuntimeError")
    raise RuntimeError

@tornado.gen.coroutine
def main():
    wait_iterator = tornado.gen.WaitIterator(f(), g())
    while not wait_iterator.done():
        try:
            result = yield wait_iterator.next()
        except Exception as e:
            print("Error {} from {}".format(e, wait_iterator.current_future))
            stop.set()
        else:
            print("Result {} received from {} at {}".format(
                result, wait_iterator.current_future,
                wait_iterator.current_index))

if __name__ == "__main__":
    tornado.ioloop.IOLoop.instance().run_sync(main)
For now, pip install toro to get the Event class. Tornado 4.2 will include Event, see the changelog.
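On Tornado 4.2+ the same pattern should work without toro via tornado.locks.Event. A sketch of f rewritten that way (my adaptation; it assumes Event.wait accepts a timedelta timeout and raises tornado.util.TimeoutError on timeout, as the Tornado docs state — on older 4.x releases the exception lives at tornado.gen.TimeoutError):

from datetime import timedelta

import tornado.gen
from tornado.locks import Event
from tornado.util import TimeoutError  # tornado.gen.TimeoutError on older 4.x

stop = Event()

@tornado.gen.coroutine
def f():
    for i in range(4):
        print("f", i)
        try:
            yield stop.wait(timeout=timedelta(seconds=0.5))
            print("f done")
            return
        except TimeoutError:
            print("f continuing")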
Since version 5, Tornado runs on the asyncio event loop.
On Python 3, the IOLoop is always a wrapper around the asyncio event loop, and asyncio.Future and asyncio.Task are used instead of their Tornado counterparts.
Hence you can use asyncio task cancellation, i.e. asyncio.Task.cancel.
Your example, with a while-true loop reading off a queue, might look like this:
import logging
from asyncio import CancelledError

from tornado import ioloop, gen

async def read_off_a_queue():
    while True:
        try:
            await gen.sleep(1)
        except CancelledError:
            logging.debug('Reader cancelled')
            break
        else:
            logging.debug('Pretend a task is consumed')

async def do_some_work():
    await gen.sleep(5)
    logging.debug('do_some_work is raising')
    raise RuntimeError

async def main():
    logging.debug('Starting queue reader in background')
    reader_task = gen.convert_yielded(read_off_a_queue())

    try:
        await do_some_work()
    except RuntimeError:
        logging.debug('do_some_work failed, cancelling reader')
        reader_task.cancel()

        # give the task a chance to clean up, in case it
        # catches CancelledError and awaits something
        try:
            await reader_task
        except CancelledError:
            pass

if __name__ == '__main__':
    logging.basicConfig(level='DEBUG')
    ioloop.IOLoop.instance().run_sync(main)
If you run it, you should see:
DEBUG:asyncio:Using selector: EpollSelector
DEBUG:root:Starting queue reader in background
DEBUG:root:Pretend a task is consumed
DEBUG:root:Pretend a task is consumed
DEBUG:root:Pretend a task is consumed
DEBUG:root:Pretend a task is consumed
DEBUG:root:do_some_work is raising
DEBUG:root:do_some_work failed, cancelling reader
DEBUG:root:Reader cancelled
Warning: this is not a working solution. See the comments. Still, if you're new (as I was), this example can show the logical flow. Thanks @nathaniel-j-smith and @wgh.
What is the difference compared to using something more primitive, like a global variable?
import asyncio

event = asyncio.Event()  # unused here; the Event-based alternative would replace aflag
aflag = False

async def short():
    while not aflag:
        print('short repeat')
        await asyncio.sleep(1)
    print('short end')

async def long():
    global aflag
    print('LONG START')
    await asyncio.sleep(3)
    aflag = True
    print('LONG END')

async def main():
    await asyncio.gather(long(), short())

if __name__ == '__main__':
    asyncio.run(main())
It is for asyncio, but I guess the idea stays the same. This is half a question (why would Event be better?), yet the solution yields exactly the result the author needs:
LONG START
short repeat
short repeat
short repeat
LONG END
short end
UPDATE:
these slides may be really helpful in understanding the core of the problem.
Related
I am a new Python programmer trying to write a 'bot' to trade on Betfair for myself. (Ambitious!!!!)
The problem is this: I have the basics of an asyncio event loop running, but I have noticed that if one of the coroutines fails in its process (for instance an API call or a MongoDB read fails), the event loop just continues running and ignores the failed coroutine.
My question is: how can I either restart that one coroutine automatically, or handle the error so as to stop the complete asyncio loop? At the moment everything runs on, seemingly oblivious to the fact that something is not right and one portion of it has failed. In my case the loop never returned to the rungetcompetitionids function after a database read was unsuccessful, and it never returned to that function again even though it is in a while-true loop.
The user GUI is not yet functional and is only there to try out asyncio.
thanks
Clive
import sys
import datetime
from login import sessiontoken as gst
from mongoenginesetups.setupmongo import global_init as initdatabase
from asyncgetcompetitionids import competition_id_pass as gci
from asyncgetcompetitionids import create_comp_id_table_list as ccid
import asyncio
import PySimpleGUI as sg

sg.change_look_and_feel('DarkAmber')

layout = [
    [sg.Text('Password'), sg.InputText(password_char='*', key='password')],
    [sg.Text('', key='status')],
    [sg.Button('Submit'), sg.Button('Cancel')]
]

window = sg.Window('Betfair', layout)

def initialisethedatabase():
    initdatabase('xxxx', 'xxxx', xxxx, 'themongo1', True)

async def runsessiontoken():
    nextlogontime = datetime.datetime.now()
    while True:
        returned_login_time = gst(nextlogontime)
        nextlogontime = returned_login_time
        await asyncio.sleep(15)

async def rungetcompetitionids(compid_from_compid_table_list):
    nextcompidtime = datetime.datetime.now()
    while True:
        returned_time, returned_list = gci(nextcompidtime, compid_from_compid_table_list)
        nextcompidtime = returned_time
        compid_from_compid_table_list = returned_list
        await asyncio.sleep(10)

async def userinterface():
    while True:
        event, value = window.read(timeout=1)
        if event in (None, 'Cancel'):
            sys.exit()
        if event != "__TIMEOUT__":
            print(f"{event} {value}")
        await asyncio.sleep(0.0001)

async def wait_list():
    await asyncio.wait([runsessiontoken(),
                        rungetcompetitionids(compid_from_compid_table_list),
                        userinterface()
                        ])

initialisethedatabase()
compid_from_compid_table_list = ccid()
print(compid_from_compid_table_list)
nextcompidtime = datetime.datetime.now()
print(nextcompidtime)

loop = asyncio.get_event_loop()
loop.run_until_complete(wait_list())
loop.close()
A simple solution would be to use a wrapper function (or "supervisor") that catches Exception and then just blindly retries the function. A more elegant solution would print the exception and stack trace for diagnostic purposes, and query the application state to see whether it makes sense to try to continue. For instance, if Betfair tells you your account is not authorised, then continuing makes no sense, whereas retrying immediately after a general network error might be worthwhile. You might also want to stop retrying if the supervisor notices it has restarted a lot within a short space of time.
eg.
import asyncio
import traceback
import functools
from collections import deque
from time import monotonic

MAX_INTERVAL = 30
RETRY_HISTORY = 3
# That is, stop after the 3rd failure in a 30 second moving window

def supervise(func, name=None, retry_history=RETRY_HISTORY, max_interval=MAX_INTERVAL):
    """Simple wrapper function that automatically tries to name tasks"""
    if name is None:
        if hasattr(func, '__name__'):  # raw func
            name = func.__name__
        elif hasattr(func, 'func'):  # partial
            name = func.func.__name__
    return asyncio.create_task(supervisor(func, retry_history, max_interval), name=name)

async def supervisor(func, retry_history=RETRY_HISTORY, max_interval=MAX_INTERVAL):
    """Takes a no-args function that creates a coroutine, and repeatedly tries
    to run it. It stops if it thinks the coroutine is failing too often or
    too fast.
    """
    start_times = deque([float('-inf')], maxlen=retry_history)
    while True:
        start_times.append(monotonic())
        try:
            return await func()
        except Exception:
            if min(start_times) > monotonic() - max_interval:
                print(
                    f'Failure in task {asyncio.current_task().get_name()!r}.'
                    ' Is it in a restart loop?'
                )
                # we tried our best, this coroutine really isn't working.
                # We should try to shut down gracefully by setting a global flag
                # that other coroutines should periodically check and stop if they
                # see that it is set. However, here we just reraise the exception.
                raise
            else:
                print(func.__name__, 'failed, will retry. Failed because:')
                traceback.print_exc()

async def a():
    await asyncio.sleep(2)
    raise ValueError

async def b(greeting):
    for i in range(15):
        print(greeting, i)
        await asyncio.sleep(0.5)

async def main_async():
    tasks = [
        supervise(a),
        # passing repeated argument to coroutine (or use lambda)
        supervise(functools.partial(b, 'hello'))
    ]
    await asyncio.wait(
        tasks,
        # Only stop when all coroutines have completed
        # -- this allows for a graceful shutdown
        # Alternatively use FIRST_EXCEPTION to stop immediately
        return_when=asyncio.ALL_COMPLETED,
    )
    return tasks

def main():
    # we run outside of the event loop, so we can carry out a post-mortem
    # without needing the event loop to be running.
    done = asyncio.run(main_async())
    for task in done:
        if task.cancelled():
            print(task, 'was cancelled')
        elif task.exception():
            print(task, 'failed with:')
            # we use a try/except here to reconstruct the traceback for logging purposes
            try:
                task.result()
            except:
                # we can use a bare-except as we are not trying to block
                # the exception -- just record all that may have happened.
                traceback.print_exc()

main()
And this will result in output like:
hello 0
hello 1
hello 2
hello 3
a failed, will retry. Failed because:
Traceback (most recent call last):
File "C:\Users\User\Documents\python\src\main.py", line 30, in supervisor
return await func()
File "C:\Users\User\Documents\python\src\main.py", line 49, in a
raise ValueError
ValueError
hello 4
hello 5
hello 6
hello 7
a failed, will retry. Failed because:
Traceback (most recent call last):
File "C:\Users\User\Documents\python\src\main.py", line 30, in supervisor
return await func()
File "C:\Users\User\Documents\python\src\main.py", line 49, in a
raise ValueError
ValueError
hello 8
hello 9
hello 10
hello 11
Failure in task 'a'. Is it in a restart loop?
hello 12
hello 13
hello 14
exception=ValueError()> failed with:
Traceback (most recent call last):
File "C:\Users\User\Documents\python\src\main.py", line 84, in main
task.result()
File "C:\Users\User\Documents\python\src\main.py", line 30, in supervisor
return await func()
File "C:\Users\User\Documents\python\src\main.py", line 49, in a
raise ValueError
ValueError
I'm using Python to create a script that runs and interacts with some processes simultaneously, using asyncio to implement this parallelism. The main problem is how to run a separate cleanup routine when a KeyboardInterrupt or a SIGINT occurs.
Here's some example code I wrote to show the problem:
import asyncio
import logging
import signal
from time import sleep

class Process:
    async def start(self, arguments):
        self._process = await asyncio.create_subprocess_exec("/bin/bash", *arguments)
        return await self._process.wait()

    async def stop(self):
        self._process.terminate()

class BackgroundTask:
    async def start(self):
        # Very important process which needs to run while process 2 is running
        self._process1 = Process()
        self._process1_task = asyncio.create_task(self._process1.start(["-c", "sleep 100"]))

        self._process2 = Process()
        self._process2_task = asyncio.create_task(self._process2.start(["-c", "sleep 50"]))

        await asyncio.wait([self._process1_task, self._process2_task], return_when=asyncio.ALL_COMPLETED)

    async def stop(self):
        # Stop process 1
        await self._process1.stop()

        # Call a cleanup process which cleans up process 1
        cleanup_process = Process()
        await cleanup_process.start(["-c", "sleep 10"])

        # After that we can stop our second process
        await self._process2.stop()

backgroundTask = BackgroundTask()

async def main():
    await asyncio.create_task(backgroundTask.start())

logging.basicConfig(level=logging.DEBUG)
asyncio.run(main(), debug=True)
This code creates a background task that starts two processes (in this example two bash sleep commands) and waits for them to finish. This works fine, and both commands run in parallel.
The main problem is the stop routine. I'd like to run the stop method when the program receives a SIGINT or KeyboardInterrupt: first stop process1, then run a cleanup command, and stop process2 afterwards, because the cleanup command depends on process2.
What I've tried (instead of the asyncio.run() and the async main):
def main():
    try:
        asyncio.get_event_loop().run_until_complete(backgroundTask.start())
    except KeyboardInterrupt:
        asyncio.get_event_loop().run_until_complete(backgroundTask.stop())

main()
This of course doesn't work as expected: as soon as a KeyboardInterrupt exception occurs, the backgroundTask.start task is cancelled and backgroundTask.stop is started in the main loop, so my processes are cancelled and can't be stopped properly.
So is there a way to detect the KeyboardInterrupt without cancelling the current main loop, and to run my backgroundTask.stop method instead?
You want to add a signal handler as shown in this example in the docs:
import asyncio
import functools
import os
import signal

def ask_exit(signame, loop):
    print("got signal %s: exit" % signame)
    loop.stop()

async def main():
    loop = asyncio.get_running_loop()

    for signame in {'SIGINT', 'SIGTERM'}:
        loop.add_signal_handler(
            getattr(signal, signame),
            functools.partial(ask_exit, signame, loop))

    await asyncio.sleep(3600)

print("Event loop running for 1 hour, press Ctrl+C to interrupt.")
print(f"pid {os.getpid()}: send SIGINT or SIGTERM to exit.")

asyncio.run(main())
That's a bit of an overcomplicated/outdated example though; consider it more like this (your coroutine code goes where the asyncio.sleep call is):
import asyncio
from signal import SIGINT, SIGTERM

async def main():
    loop = asyncio.get_running_loop()
    for signal_enum in [SIGINT, SIGTERM]:
        loop.add_signal_handler(signal_enum, loop.stop)
    await asyncio.sleep(3600)  # Your code here

asyncio.run(main())
At this point a Ctrl + C will break the loop and raise a RuntimeError, which you can catch by putting the asyncio.run call in a try/except block like so:
try:
    asyncio.run(main())
except RuntimeError as exc:
    expected_msg = "Event loop stopped before Future completed."
    if exc.args and exc.args[0] == expected_msg:
        print("Bye")
    else:
        raise
That's not very satisfying though (what if something else caused the same error?), so I'd prefer to raise a distinct error. Also, if you're exiting on the command line, the proper thing to do is to return the appropriate exit code. (In fact, the code in the example just uses the signal's name, but it's actually an IntEnum whose numeric value is that exit code!)
import asyncio
from functools import partial
from signal import SIGINT, SIGTERM
from sys import stderr

class SignalHaltError(SystemExit):
    def __init__(self, signal_enum):
        self.signal_enum = signal_enum
        print(repr(self), file=stderr)
        super().__init__(self.exit_code)

    @property
    def exit_code(self):
        return self.signal_enum.value

    def __repr__(self):
        return f"\nExited due to {self.signal_enum.name}"

def immediate_exit(signal_enum, loop):
    loop.stop()
    raise SignalHaltError(signal_enum=signal_enum)

async def main():
    loop = asyncio.get_running_loop()
    for signal_enum in [SIGINT, SIGTERM]:
        exit_func = partial(immediate_exit, signal_enum=signal_enum, loop=loop)
        loop.add_signal_handler(signal_enum, exit_func)
    await asyncio.sleep(3600)

print("Event loop running for 1 hour, press Ctrl+C to interrupt.")
asyncio.run(main())
Which when Ctrl + C'd out of gives:
python cancelling_original.py
⇣
Event loop running for 1 hour, press Ctrl+C to interrupt.
^C
Exited due to SIGINT
echo $?
⇣
2
Now there's some code I'd be happy to serve! :^)
P.S. here it is with type annotations:
from __future__ import annotations

import asyncio
from asyncio.events import AbstractEventLoop
from functools import partial
from signal import Signals, SIGINT, SIGTERM
from sys import stderr

class SignalHaltError(SystemExit):
    def __init__(self, signal_enum: Signals):
        self.signal_enum = signal_enum
        print(repr(self), file=stderr)
        super().__init__(self.exit_code)

    @property
    def exit_code(self) -> int:
        return self.signal_enum.value

    def __repr__(self) -> str:
        return f"\nExited due to {self.signal_enum.name}"

def immediate_exit(signal_enum: Signals, loop: AbstractEventLoop) -> None:
    loop.stop()
    raise SignalHaltError(signal_enum=signal_enum)

async def main() -> None:
    loop = asyncio.get_running_loop()
    for signal_enum in [SIGINT, SIGTERM]:
        exit_func = partial(immediate_exit, signal_enum=signal_enum, loop=loop)
        loop.add_signal_handler(signal_enum, exit_func)
    await asyncio.sleep(3600)

print("Event loop running for 1 hour, press Ctrl+C to interrupt.")
asyncio.run(main())
The advantage of a custom exception here is that you can then catch it specifically, and avoid the traceback being dumped to the screen:
try:
    asyncio.run(main())
except SignalHaltError as exc:
    # log.debug(exc)
    pass
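To tie this back to the question: rather than stopping the loop outright, the signal handler can cancel just the foreground task and let the still-running loop execute the async cleanup. A minimal sketch (mine, assuming the BackgroundTask instance from the question):

import asyncio
from signal import SIGINT, SIGTERM

async def main():
    loop = asyncio.get_running_loop()
    start_task = asyncio.create_task(backgroundTask.start())

    # On SIGINT/SIGTERM, cancel only the start task; the loop keeps running,
    # so the except block below can await the async cleanup.
    for signal_enum in (SIGINT, SIGTERM):
        loop.add_signal_handler(signal_enum, start_task.cancel)

    try:
        await start_task
    except asyncio.CancelledError:
        await backgroundTask.stop()

asyncio.run(main())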
As my project heavily relies on asynchronous network I/O, I always have to expect some weird network error to occur: the service I'm connecting to may have an API outage, my own server may have a network issue, or something else may go wrong. Issues like that appear, and there's no real way around them. So I eventually tried to figure out a way to effectively "pause" a coroutine's execution from the outside whenever such a network issue occurred, until the connection has been reestablished. My approach is writing a decorator pausable that takes an argument pause, which is a coroutine function that will be yielded from / awaited, like this:
def pausable(pause, resume_check=None, delay_start=None):
    if not asyncio.iscoroutinefunction(pause):
        raise TypeError("pause must be a coroutine function")
    if not (delay_start is None or asyncio.iscoroutinefunction(delay_start)):
        raise TypeError("delay_start must be a coroutine function")

    def wrapper(coro):
        @asyncio.coroutine
        def wrapped(*args, **kwargs):
            if delay_start is not None:
                yield from delay_start()
            for x in coro(*args, **kwargs):
                try:
                    yield from pause()
                    yield x
                # catch exceptions the regular discord.py user might not catch
                except (asyncio.CancelledError,
                        aiohttp.ClientError,
                        websockets.WebSocketProtocolError,
                        ConnectionClosed,
                        # bunch of other network errors
                        ) as ex:
                    if any((resume_check() if resume_check is not None else False and
                            isinstance(ex, asyncio.CancelledError),
                            # clean disconnect
                            isinstance(ex, ConnectionClosed) and ex.code == 1000,
                            # connection issue
                            not isinstance(ex, ConnectionClosed))):
                        yield from pause()
                        yield x
                    else:
                        raise
        return wrapped
    return wrapper
Pay special attention to this bit:
for x in coro(*args, **kwargs):
    yield from pause()
    yield x
Example usage (ready is an asyncio.Event):
@pausable(ready.wait, resume_check=restarting_enabled, delay_start=ready.wait)
@asyncio.coroutine
def send_test_every_minute():
    while True:
        yield from client.send("Test")
        yield from asyncio.sleep(60)
However, this does not seem to work, and it does not seem like an elegant solution to me. Is there a working solution that is compatible with Python 3.5.3 and above? Compatibility with Python 3.4.4 and above is desirable.
Addendum
Just try/except-ing the exceptions raised in the coroutine that needs to be paused is neither always possible nor a viable option for me, as it heavily violates a core code design principle (DRY) I'd like to comply with; in other words, catching so many exceptions in so many coroutine functions would make my code messy.
A few words about the current solution:
for x in coro(*args, **kwargs):
    try:
        yield from pause()
        yield x
    except ...:
        ...
You won't be able to catch exceptions this way:
- the exception is raised outside of the for-loop;
- the generator is exhausted (not usable) after the first exception anyway.
import asyncio

@asyncio.coroutine
def test():
    yield from asyncio.sleep(1)
    raise RuntimeError()
    yield from asyncio.sleep(1)
    print('ok')

@asyncio.coroutine
def main():
    coro = test()
    try:
        for x in coro:
            try:
                yield x
            except Exception:
                print('Exception is NOT here.')
    except Exception:
        print('Exception is here.')

    try:
        next(coro)
    except StopIteration:
        print('And after first exception generator is exhausted.')

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(main())
    finally:
        loop.close()
Output:
Exception is here.
And after first exception generator is exhausted.
Even if it were possible to resume, consider what would happen if the coroutine had already done some cleanup operations because of the exception.
Given all of the above, if some coroutine raises an exception, the only option you have is to suppress that exception (if you want) and re-run the coroutine. You can re-run it after some event if you want. Something like this:
def restart(ready_to_restart):
    def wrapper(func):
        @asyncio.coroutine
        def wrapped(*args, **kwargs):
            while True:
                try:
                    return (yield from func(*args, **kwargs))
                except (ConnectionClosed,
                        aiohttp.ClientError,
                        websockets.WebSocketProtocolError,
                        # bunch of other network errors
                        ):
                    yield from ready_to_restart.wait()
        return wrapped
    return wrapper

ready_to_restart = asyncio.Event()  # set it when you're sure the network is fine
                                    # and you're ready to restart
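How the event gets set is up to you. For instance, a hypothetical watchdog coroutine might drive it (check_network is an assumed helper, not part of the answer):

@asyncio.coroutine
def network_watchdog():
    while True:
        # check_network() is a hypothetical coroutine that reports
        # whether connectivity is back; substitute your own health check.
        if (yield from check_network()):
            ready_to_restart.set()
        else:
            ready_to_restart.clear()
        yield from asyncio.sleep(5)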
Upd
However, how would I make the coroutine continue where it was interrupted now?
Just to make things clear:
@asyncio.coroutine
def test():
    with aiohttp.ClientSession() as client:
        yield from client.request_1()
        # STEP 1:
        # Let's say the line above raises an error.

        # STEP 2:
        # Imagine you somehow managed to return to this place
        # after the exception above, to resume execution.
        # But what is the state of 'client' now?
        # It was freed by the context manager when we left the coroutine.
        yield from client.request_2()
Neither functions nor coroutines are designed to resume execution after an exception has propagated out of them.
The only thing that comes to mind is to split the complex operation into small restartable steps, while the complex operation as a whole stores its state:
@asyncio.coroutine
def complex_operation():
    with aiohttp.ClientSession() as client:
        res = yield from step_1(client)
        # res/client - is the state of complex_operation.
        # It can be used by the restartable steps.
        res = yield from step_2(client, res)

@restart(ready_to_restart)
@asyncio.coroutine
def step_1(client):
    ...

@restart(ready_to_restart)
@asyncio.coroutine
def step_2(client, res):
    ...
I'm playing with Python's new(ish) asyncio stuff, trying to combine its event loop with traditional threading. I have written a class that runs the event loop in its own thread, to isolate it, and then provides a (synchronous) method that runs a coroutine on that loop and returns the result. (I realise this makes it a somewhat pointless example, because it necessarily serialises everything, but it's just a proof of concept.)
import asyncio
import aiohttp
from threading import Thread

class Fetcher(object):
    def __init__(self):
        self._loop = asyncio.new_event_loop()
        # FIXME Do I need this? It works either way...
        # asyncio.set_event_loop(self._loop)

        self._session = aiohttp.ClientSession(loop=self._loop)

        self._thread = Thread(target=self._loop.run_forever)
        self._thread.start()

    def __enter__(self):
        return self

    def __exit__(self, *e):
        self._session.close()
        self._loop.call_soon_threadsafe(self._loop.stop)
        self._thread.join()
        self._loop.close()

    def __call__(self, url:str) -> str:
        # FIXME Can I not get a future from some method of the loop?
        future = asyncio.run_coroutine_threadsafe(self._get_response(url), self._loop)
        return future.result()

    async def _get_response(self, url:str) -> str:
        async with self._session.get(url) as response:
            assert response.status == 200
            return await response.text()

if __name__ == "__main__":
    with Fetcher() as fetcher:
        while True:
            x = input("> ")
            if x.lower() == "exit":
                break
            try:
                print(fetcher(x))
            except Exception as e:
                print(f"WTF? {e.__class__.__name__}")
To avoid this sounding too much like a Code Review question: what is the purpose of asyncio.set_event_loop, and do I need it in the above? It works fine with and without. Moreover, is there a loop-level method to invoke a coroutine and return a future? It seems a bit odd to do this with a module-level function.
You would need to use set_event_loop if you called get_event_loop anywhere and wanted it to return the loop created when you called new_event_loop.
From the docs:
If there’s need to set this loop as the event loop for the current context, set_event_loop() must be called explicitly.
Since you do not call get_event_loop anywhere in your example, you can omit the call to set_event_loop.
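A minimal sketch of that relationship (my illustration, using the pre-3.10 get_event_loop semantics):

import asyncio

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)

# Without the set_event_loop call above, get_event_loop() in this
# thread would not hand back the loop we just created.
assert asyncio.get_event_loop() is loop

loop.close()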
I might be misinterpreting, but I think the comment by @dirn in the marked answer is incorrect in stating that get_event_loop works from a thread. See the following example:
import asyncio
import threading

async def hello():
    print('started hello')
    await asyncio.sleep(5)
    print('finished hello')

def threaded_func():
    el = asyncio.get_event_loop()
    el.run_until_complete(hello())

thread = threading.Thread(target=threaded_func)
thread.start()
This produces the following error:
RuntimeError: There is no current event loop in thread 'Thread-1'.
It can be fixed by:
- el = asyncio.get_event_loop()
+ el = asyncio.new_event_loop()
The documentation also specifies that this trick (creating an event loop by calling get_event_loop) only works on the main thread:
If there is no current event loop set in the current OS thread, the OS thread is main, and set_event_loop() has not yet been called, asyncio will create a new event loop and set it as the current one.
Finally, the docs also recommend using get_running_loop instead of get_event_loop if you're on version 3.7 or higher.
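For illustration (my sketch, not from the answer), get_running_loop only works inside a running loop, which sidesteps the thread ambiguity entirely:

import asyncio

async def show_loop():
    # Raises RuntimeError if called while no event loop is running.
    loop = asyncio.get_running_loop()
    print(loop)

asyncio.run(show_loop())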
I have a project in Python 3.5 without any usage of asynchronous features. I have to implement the following logic:
def should_return_in_3_sec(some_serious_job, arguments, finished_callback):
    # Start some_serious_job(*arguments) in a task
    # if it finishes within 3 sec:
    #     return result immediately
    # otherwise return None, but do not terminate task.
    # If the task finishes in 1 minute:
    #     call finished_callback(result)
    # else:
    #     call finished_callback(None)
    pass
The function should_return_in_3_sec() should remain synchronous, but it is up to me to write any new asynchronous code (including some_serious_job()).
What is the most elegant and pythonic way to do it?
Fork off a thread doing the serious job, let it write its result into a queue and then terminate. Read from that queue in your main thread with a timeout of three seconds. If the timeout occurs, start another thread and return None. Let the second thread read from the queue with a timeout of one minute; if that also times out, call finished_callback(None); otherwise call finished_callback(result).
I sketched it like this:
import threading, queue

def should_return_in_3_sec(some_serious_job, arguments, finished_callback):
    result_queue = queue.Queue(1)

    def do_serious_job_and_deliver_result():
        result = some_serious_job(arguments)
        result_queue.put(result)

    threading.Thread(target=do_serious_job_and_deliver_result).start()
    try:
        result = result_queue.get(timeout=3)
    except queue.Empty:  # timeout?
        def expect_and_handle_late_result():
            try:
                result = result_queue.get(timeout=60)
            except queue.Empty:
                finished_callback(None)
            else:
                finished_callback(result)

        threading.Thread(target=expect_and_handle_late_result).start()
        return None
    else:
        return result
The threading module has some simple timeout options; see Thread.join(timeout), for example.
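A tiny sketch of that primitive (my illustration):

import threading
import time

def job():
    time.sleep(5)  # stands in for the serious job

t = threading.Thread(target=job)
t.start()
t.join(timeout=3)                      # block for at most 3 seconds
print("still running:", t.is_alive())  # True here: the join timed out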
If you do choose to use asyncio, below is a partial solution that addresses some of your needs:
import asyncio
import time

async def late_response(task, flag, timeout, callback):
    done, pending = await asyncio.wait([task], timeout=timeout)
    callback(done.pop().result() if done else None)  # will raise an exception if some_serious_job failed
    flag[0] = True  # signal some_serious_job to stop
    return await task

async def launch_job(loop, some_serious_job, arguments, finished_callback,
                     timeout_1=3, timeout_2=5):
    flag = [False]
    task = loop.run_in_executor(None, some_serious_job, flag, *arguments)

    done, pending = await asyncio.wait([task], timeout=timeout_1)
    if done:
        return done.pop().result()  # will raise an exception if some_serious_job failed

    asyncio.ensure_future(
        late_response(task, flag, timeout_2, finished_callback))
    return None

def f(flag, n):
    for i in range(n):
        print("serious", i, flag)
        if flag[0]:
            return "CANCELLED"
        time.sleep(1)
    return "OK"

def finished(result):
    print("FINISHED", result)

loop = asyncio.get_event_loop()
result = loop.run_until_complete(launch_job(loop, f, [1], finished))
print("result:", result)
loop.run_forever()
This will run the job in a separate thread (use loop.set_default_executor(ProcessPoolExecutor()) to run a CPU-intensive task in a process instead). Keep in mind it is bad practice to terminate a process or thread forcibly; the code above uses a very simple list to signal the thread to stop (see also threading.Event / multiprocessing.Event).
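For example, the flag could be replaced with a threading.Event (my adaptation of the f above):

import threading
import time

stop_event = threading.Event()

def f(stop_event, n):
    for i in range(n):
        print("serious", i)
        if stop_event.is_set():  # replaces the flag[0] check
            return "CANCELLED"
        time.sleep(1)
    return "OK"

# Elsewhere, to ask the job to stop:
stop_event.set()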
While implementing your solution, you might discover you want to modify your existing code to use coroutines instead of threads.