I have a big project which depends on some third-party libraries, and sometimes its execution gets interrupted by a CancelledError.
To demonstrate the issue, let's look at a small example:
import asyncio

async def main():
    task = asyncio.create_task(foo())
    # Cancel the task in 1 second.
    loop = asyncio.get_event_loop()
    loop.call_later(1.0, lambda: task.cancel())
    await task

async def foo():
    await asyncio.sleep(999)

if __name__ == '__main__':
    asyncio.run(main())
Traceback:
Traceback (most recent call last):
File "/Users/ss/Library/Application Support/JetBrains/PyCharm2021.2/scratches/async.py", line 19, in <module>
asyncio.run(main())
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/runners.py", line 43, in run
return loop.run_until_complete(main)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
return future.result()
concurrent.futures._base.CancelledError
As you can see, there's no information about where the CancelledError originates. How do I find out its exact cause?
One approach I came up with is to place a lot of try/except blocks to catch the CancelledError and narrow down where it comes from, but that's quite tedious.
I've solved it by applying a decorator to every async function in the project. The decorator's job is simple: log a message when a CancelledError is raised from the function. This way we will see which functions (and, more importantly, in which order) get cancelled.
Here's the decorator code:
def log_cancellation(f):
    async def wrapper(*args, **kwargs):
        try:
            return await f(*args, **kwargs)
        except asyncio.CancelledError:
            print(f"Cancelled {f}")
            raise
    return wrapper
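Applied to the example above, a minimal sketch of how the decorator is used:

@log_cancellation
async def foo():
    await asyncio.sleep(999)

When the task is cancelled, this prints Cancelled <function foo ...> before re-raising, so the log shows the chain of cancelled functions.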
In order to add this decorator everywhere I used regex. Find: (.*)(async def). Replace with: $1@log_cancellation\n$1$2.
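For example (fetch is a hypothetical function name), the replacement turns

    async def fetch(url):

into

    @log_cancellation
    async def fetch(url):

with the captured indentation ($1) keeping the decorator aligned with the function.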
Also to avoid importing log_cancellation in every file I modified the builtins:
import builtins

builtins.log_cancellation = log_cancellation
The rich package helped us identify the cause of the CancelledError without much code change required.
from rich.console import Console

console = Console()

if __name__ == "__main__":
    try:
        asyncio.run(main())  # replace main() with your entrypoint
    except BaseException as e:
        console.print_exception(show_locals=True)
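rich is a third-party package (pip install rich), and show_locals=True renders each frame's local variables alongside the traceback, which usually pinpoints where the cancellation propagated from.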
I am trying to start a bunch (one or more) of aioserial instances using a for loop and asyncio.gather, without success.
# -*- coding: utf-8 -*-
import asyncio
import aioserial
from protocol import contactid, ademco

def main():
    # Asyncio loop
    loop = asyncio.get_event_loop()
    centrals = utils.auth()
    # List of coroutines to be executed in parallel
    lst_coro = []
    # Unpack centrals
    for central in centrals:
        protocol = central['protocol']
        id = central['id']
        name = central['name']
        port = central['port']
        logger = log.create_logging_system(name)
        # Protocols
        if protocol == 'contactid':
            central = contactid.process
        elif protocol == 'ademco':
            central = ademco.process
        else:
            print(f'Unknown protocol: {central["protocol"]}')
        # Serial (port ex: ttyUSB0/2400)
        dev = ''.join(['/dev/', *filter(str.isalnum, port.split('/')[0])])
        bps = int(port.split('/')[-1])
        aioserial_instance = aioserial.AioSerial(port=dev, baudrate=bps)
        lst_coro.append(central(aioserial_instance, id, name, logger))
    asyncio.gather(*lst_coro, loop=loop)

if __name__ == '__main__':
    asyncio.run(main())
I based this on the asyncio documentation example and some answers from Stack Overflow, but when I try to run it I just get errors:
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/Serial/serial.py", line 39, in <module>
asyncio.run(main())
File "/usr/lib/python3.7/asyncio/runners.py", line 37, in run
raise ValueError("a coroutine was expected, got {!r}".format(main))
ValueError: a coroutine was expected, got None
I also tried to use a set instead of a list, but nothing really changed. Is there a better way to start a bunch of parallel coroutines when you need to use a loop? Thanks for your attention.
Your problem isn't with how you call gather. It's with how you define main. The clue is in the error message
ValueError: a coroutine was expected, got None
and the last line of code in the traceback before the exception is raised:
asyncio.run(main())
asyncio.run wants a coroutine, exactly as the error says. You pass it the return value of main, but main doesn't return anything. Rather than adding a return value, though, the fix is to change how you define main:
async def main():
This will turn main from a regular function to a coroutine that can be awaited.
Edit
Once you've done this, you'll notice that gather doesn't actually seem to do anything. You'll need to await it in order for main to wait for everything in lst_coro to complete.
await asyncio.gather(*lst_coro)
Unrelated to your error: you shouldn't need to use loop inside main at all. gather's loop argument was deprecated in 3.8 and will be removed in 3.10. Unless you're using an older version of Python, you can remove it and your call to asyncio.get_event_loop.
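Putting both fixes together, a minimal sketch of the corrected entrypoint (the question's per-central setup is elided):

async def main():
    centrals = utils.auth()
    lst_coro = []
    for central in centrals:
        ...  # build the aioserial instance and append central(...) as before
    # await gather so main waits for every coroutine to complete
    await asyncio.gather(*lst_coro)

if __name__ == '__main__':
    asyncio.run(main())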
I am a new Python programmer who is trying to write a 'bot' to trade on Betfair for myself (ambitious!!!!).
My problem is this: I have the basics of an asyncio event loop running, but I have noticed that if one of the coroutines fails in its work (for instance, an API call or a MongoDB read fails), the event loop just continues running and ignores the one failed coroutine.
My question is: how could I either restart that coroutine automatically, or handle the error so as to stop the whole asyncio loop? At the moment everything runs on, seemingly oblivious to the fact that something is not right and one portion of it has failed. In my case the loop never returned to the rungetcompetitionids function after an unsuccessful database read, and never entered it again even though it is in a while True loop.
The user GUI is not yet functional and is only there to try out asyncio.
thanks
Clive
import sys
import datetime
from login import sessiontoken as gst
from mongoenginesetups.setupmongo import global_init as initdatabase
from asyncgetcompetitionids import competition_id_pass as gci
from asyncgetcompetitionids import create_comp_id_table_list as ccid
import asyncio
import PySimpleGUI as sg

sg.change_look_and_feel('DarkAmber')

layout = [
    [sg.Text('Password'), sg.InputText(password_char='*', key='password')],
    [sg.Text('', key='status')],
    [sg.Button('Submit'), sg.Button('Cancel')]
]

window = sg.Window('Betfair', layout)

def initialisethedatabase():
    initdatabase('xxxx', 'xxxx', xxxx, 'themongo1', True)

async def runsessiontoken():
    nextlogontime = datetime.datetime.now()
    while True:
        returned_login_time = gst(nextlogontime)
        nextlogontime = returned_login_time
        await asyncio.sleep(15)

async def rungetcompetitionids(compid_from_compid_table_list):
    nextcompidtime = datetime.datetime.now()
    while True:
        returned_time, returned_list = gci(nextcompidtime, compid_from_compid_table_list)
        nextcompidtime = returned_time
        compid_from_compid_table_list = returned_list
        await asyncio.sleep(10)

async def userinterface():
    while True:
        event, value = window.read(timeout=1)
        if event in (None, 'Cancel'):
            sys.exit()
        if event != "__TIMEOUT__":
            print(f"{event} {value}")
        await asyncio.sleep(0.0001)

async def wait_list():
    await asyncio.wait([runsessiontoken(),
                        rungetcompetitionids(compid_from_compid_table_list),
                        userinterface()
                        ])

initialisethedatabase()
compid_from_compid_table_list = ccid()
print(compid_from_compid_table_list)
nextcompidtime = datetime.datetime.now()
print(nextcompidtime)
loop = asyncio.get_event_loop()
loop.run_until_complete(wait_list())
loop.close()
A simple solution would be to use a wrapper function (or "supervisor") that catches Exception and then just blindly retries the function. More elegant solutions would include printing out the exception and stack trace for diagnostic purposes, and querying the application state to see if it makes sense to try to continue. For instance, if Betfair tells you your account is not authorised, then continuing makes no sense, whereas if it's a general network error then retrying immediately might be worthwhile. You might also want to stop retrying if the supervisor notices it has restarted a lot in a short space of time.
E.g.:
import asyncio
import traceback
import functools
from collections import deque
from time import monotonic

MAX_INTERVAL = 30
RETRY_HISTORY = 3
# That is, stop after the 3rd failure in a 30 second moving window

def supervise(func, name=None, retry_history=RETRY_HISTORY, max_interval=MAX_INTERVAL):
    """Simple wrapper function that automatically tries to name tasks"""
    if name is None:
        if hasattr(func, '__name__'):  # raw func
            name = func.__name__
        elif hasattr(func, 'func'):  # partial
            name = func.func.__name__
    return asyncio.create_task(supervisor(func, retry_history, max_interval), name=name)

async def supervisor(func, retry_history=RETRY_HISTORY, max_interval=MAX_INTERVAL):
    """Takes a no-args function that creates a coroutine, and repeatedly tries
    to run it. It stops if it thinks the coroutine is failing too often or
    too fast.
    """
    start_times = deque([float('-inf')], maxlen=retry_history)
    while True:
        start_times.append(monotonic())
        try:
            return await func()
        except Exception:
            if min(start_times) > monotonic() - max_interval:
                print(
                    f'Failure in task {asyncio.current_task().get_name()!r}.'
                    ' Is it in a restart loop?'
                )
                # we tried our best, this coroutine really isn't working.
                # We should try to shut down gracefully by setting a global flag
                # that other coroutines should periodically check and stop if they
                # see that it is set. However, here we just reraise the exception.
                raise
            else:
                print(func.__name__, 'failed, will retry. Failed because:')
                traceback.print_exc()

async def a():
    await asyncio.sleep(2)
    raise ValueError

async def b(greeting):
    for i in range(15):
        print(greeting, i)
        await asyncio.sleep(0.5)

async def main_async():
    tasks = [
        supervise(a),
        # passing repeated argument to coroutine (or use lambda)
        supervise(functools.partial(b, 'hello'))
    ]
    await asyncio.wait(
        tasks,
        # Only stop when all coroutines have completed
        # -- this allows for a graceful shutdown
        # Alternatively use FIRST_EXCEPTION to stop immediately
        return_when=asyncio.ALL_COMPLETED,
    )
    return tasks

def main():
    # we run outside of the event loop, so we can carry out a post-mortem
    # without needing the event loop to be running.
    done = asyncio.run(main_async())
    for task in done:
        if task.cancelled():
            print(task, 'was cancelled')
        elif task.exception():
            print(task, 'failed with:')
            # we use a try/except here to reconstruct the traceback for logging purposes
            try:
                task.result()
            except:
                # we can use a bare except as we are not trying to block
                # the exception -- just record all that may have happened.
                traceback.print_exc()

main()
And this will result in output like:
hello 0
hello 1
hello 2
hello 3
a failed, will retry. Failed because:
Traceback (most recent call last):
File "C:\Users\User\Documents\python\src\main.py", line 30, in supervisor
return await func()
File "C:\Users\User\Documents\python\src\main.py", line 49, in a
raise ValueError
ValueError
hello 4
hello 5
hello 6
hello 7
a failed, will retry. Failed because:
Traceback (most recent call last):
File "C:\Users\User\Documents\python\src\main.py", line 30, in supervisor
return await func()
File "C:\Users\User\Documents\python\src\main.py", line 49, in a
raise ValueError
ValueError
hello 8
hello 9
hello 10
hello 11
Failure in task 'a'. Is it in a restart loop?
hello 12
hello 13
hello 14
exception=ValueError()> failed with:
Traceback (most recent call last):
File "C:\Users\User\Documents\python\src\main.py", line 84, in main
task.result()
File "C:\Users\User\Documents\python\src\main.py", line 30, in supervisor
return await func()
File "C:\Users\User\Documents\python\src\main.py", line 49, in a
raise ValueError
ValueError
When I run this code in Python 3.7:
import asyncio

sem = asyncio.Semaphore(2)

async def work():
    async with sem:
        print('working')
        await asyncio.sleep(1)

async def main():
    await asyncio.gather(work(), work(), work())

asyncio.run(main())
It fails with RuntimeError:
$ python3 demo.py
working
working
Traceback (most recent call last):
File "demo.py", line 13, in <module>
asyncio.run(main())
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/runners.py", line 43, in run
return loop.run_until_complete(main)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
return future.result()
File "demo.py", line 11, in main
await asyncio.gather(work(), work(), work())
File "demo.py", line 6, in work
async with sem:
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/locks.py", line 92, in __aenter__
await self.acquire()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/locks.py", line 474, in acquire
await fut
RuntimeError: Task <Task pending coro=<work() running at demo.py:6> cb=[gather.<locals>._done_callback() at /opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py:664]> got Future <Future pending> attached to a different loop
Python 3.10+: this error message should not occur anymore; see the answer from @mmdanziger:
(...) the implementation of Semaphore has been changed and no longer grabs the current loop on init
Python 3.9 and older:
It's because the Semaphore constructor sets its _loop attribute – in asyncio/locks.py:
class Semaphore(_ContextManagerMixin):
    def __init__(self, value=1, *, loop=None):
        if value < 0:
            raise ValueError("Semaphore initial value must be >= 0")
        self._value = value
        self._waiters = collections.deque()
        if loop is not None:
            self._loop = loop
        else:
            self._loop = events.get_event_loop()
But asyncio.run() starts a completely new loop – in asyncio/runners.py; it's also mentioned in the documentation:
def run(main, *, debug=False):
    if events._get_running_loop() is not None:
        raise RuntimeError(
            "asyncio.run() cannot be called from a running event loop")
    if not coroutines.iscoroutine(main):
        raise ValueError("a coroutine was expected, got {!r}".format(main))
    loop = events.new_event_loop()
    ...
A Semaphore initiated outside of asyncio.run() grabs the asyncio "default" loop, and so cannot be used with the event loop that asyncio.run() creates.
Solution
Create the Semaphore from code called by asyncio.run(). You will have to pass it to the right place; there are several ways to do that (you could, for example, use contextvars), but here is the simplest example:
import asyncio

async def work(sem):
    async with sem:
        print('working')
        await asyncio.sleep(1)

async def main():
    sem = asyncio.Semaphore(2)
    await asyncio.gather(work(sem), work(sem), work(sem))

asyncio.run(main())
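For completeness, a sketch of the contextvars approach mentioned above (the variable name sem_var is mine): tasks spawned by gather() copy the current context, so a value set inside main() is visible to them.

import asyncio
import contextvars

# module-level ContextVar; the Semaphore itself is created inside the running loop
sem_var = contextvars.ContextVar('sem')

async def work():
    async with sem_var.get():
        print('working')
        await asyncio.sleep(1)

async def main():
    sem_var.set(asyncio.Semaphore(2))  # created inside asyncio.run()
    await asyncio.gather(work(), work(), work())

asyncio.run(main())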
The same issue (and solution) probably also applies to asyncio.Lock, asyncio.Event, and asyncio.Condition.
Update: As of Python 3.10 OP's code will run as written. This is because the implementation of Semaphore has been changed and no longer grabs the current loop on init. See this answer for more discussion.
Python 3.10 implementation from GitHub
class Semaphore(_ContextManagerMixin, mixins._LoopBoundMixin):
    """A Semaphore implementation.

    A semaphore manages an internal counter which is decremented by each
    acquire() call and incremented by each release() call. The counter
    can never go below zero; when acquire() finds that it is zero, it blocks,
    waiting until some other thread calls release().

    Semaphores also support the context management protocol.

    The optional argument gives the initial value for the internal
    counter; it defaults to 1. If the value given is less than 0,
    ValueError is raised.
    """

    def __init__(self, value=1, *, loop=mixins._marker):
        super().__init__(loop=loop)
        if value < 0:
            raise ValueError("Semaphore initial value must be >= 0")
        self._value = value
        self._waiters = collections.deque()
        self._wakeup_scheduled = False
An alternative solution for Python 3.9 and older is to instantiate the Event, Lock, Semaphore, etc. as a first step inside the main() task, where possible.
I validated this with an Event case tested on Python 3.10 (Windows) vs Python 3.9 (Raspberry Pi).
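A minimal sketch of that pattern (the names waiter and event are mine):

import asyncio

async def waiter(event):
    await event.wait()
    print('event was set')

async def main():
    # created inside the running loop, so on 3.9 it binds to the right one
    event = asyncio.Event()
    task = asyncio.create_task(waiter(event))
    event.set()
    await task

asyncio.run(main())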
I'm trying to run two autobahn.asyncio.wamp.ApplicationSessions in Python at the same time. Previously, I did this using a modification of the autobahn library as suggested in this post's answer. I now require a more professional solution.
After googling around for a while, this post appeared quite promising, but it uses the twisted library instead of asyncio. I wasn't able to identify a similar solution for the asyncio branch of the autobahn library, since it doesn't appear to use reactors.
The main problem I have is that ApplicationRunner.run() is blocking (which is why I previously outsourced it to a thread), so I can't just run a second ApplicationRunner after it.
I do need to access two websocket channels at the same time, which I apparently cannot do with a single ApplicationSession.
My code so far:
from autobahn.asyncio.wamp import ApplicationSession
from autobahn.asyncio.wamp import ApplicationRunner
from asyncio import coroutine
import time

channel1 = 'BTC_LTC'
channel2 = 'BTC_XMR'

class LTCComponent(ApplicationSession):
    def onConnect(self):
        self.join(self.config.realm)

    @coroutine
    def onJoin(self, details):
        def onTicker(*args, **kwargs):
            print('LTCComponent', args, kwargs)
        try:
            yield from self.subscribe(onTicker, channel1)
        except Exception as e:
            print("Could not subscribe to topic:", e)

class XMRComponent(ApplicationSession):
    def onConnect(self):
        self.join(self.config.realm)

    @coroutine
    def onJoin(self, details):
        def onTicker(*args, **kwargs):
            print('XMRComponent', args, kwargs)
        try:
            yield from self.subscribe(onTicker, channel2)
        except Exception as e:
            print("Could not subscribe to topic:", e)

def main():
    runner = ApplicationRunner("wss://api.poloniex.com:443", "realm1", extra={})
    runner.run(LTCComponent)
    runner.run(XMRComponent)  # <- is not being called

if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        quit()
    except Exception as e:
        print(time.time(), e)
My knowledge of the autobahn library is limited, and I'm afraid the documentation isn't improving my situation much. Am I overlooking something here? A function, a parameter, that would enable me to either combine my components or run them both at once?
Perhaps a solution similar to the one provided here, which implements an alternative ApplicationRunner?
Related Topics
Running two ApplicationSessions in twisted
Running Autobahn ApplicationRunner in Thread
Autobahn.wamp.ApplicationSession Source
Autobahn.wamp.Applicationrunner Source
As requested, the traceback from @stovfl's answer using the multithreading code:
Exception in thread Thread-2:
Traceback (most recent call last):
File "/home/nils/anaconda3/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/home/nils/git/tools/gemini_wss/t2.py", line 27, in run
self.appRunner.run(self.__ApplicationSession)
File "/home/nils/anaconda3/lib/python3.5/site-packages/autobahn- 0.14.1-py3.5.egg/autobahn/asyncio/wamp.py", line 143, in run
transport_factory = WampWebSocketClientFactory(create, url=self.url, serializers=self.serializers)
File "/home/nils/anaconda3/lib/python3.5/site-packages/autobahn- 0.14.1-py3.5.egg/autobahn/asyncio/websocket.py", line 319, in __init__
WebSocketClientFactory.__init__(self, *args, **kwargs)
File "/home/nils/anaconda3/lib/python3.5/site-packages/autobahn- 0.14.1-py3.5.egg/autobahn/asyncio/websocket.py", line 268, in __init__
self.loop = loop or asyncio.get_event_loop()
File "/home/nils/anaconda3/lib/python3.5/asyncio/events.py", line 626, in get_event_loop
return get_event_loop_policy().get_event_loop()
File "/home/nils/anaconda3/lib/python3.5/asyncio/events.py", line 572, in get_event_loop
% threading.current_thread().name)
RuntimeError: There is no current event loop in thread 'Thread-2'.
Exception in thread Thread-1:
**Same as in Thread-2**
...
RuntimeError: There is no current event loop in thread 'Thread-1'.
As I see from the traceback, we only reach Step 2 of 4
From the asyncio docs:
This module provides infrastructure for writing single-threaded concurrent code using coroutines, multiplexing I/O access over sockets and other resources
So I drop my first proposal using multithreading.
I can imagine the following three options:
Do it with multiprocessing instead of multithreading
Do it with coroutine inside asyncio loop
Switch between channels in def onJoin(self, details)
My second proposal uses the first option: multiprocessing.
I can start two asyncio loops this way, so appRunner.run(...) should work.
You can use a single ApplicationSession class if the channel is the only difference.
If you need to pass a different ApplicationSession class, add it to args=.
class __ApplicationSession(ApplicationSession):
    # ...
    try:
        yield from self.subscribe(onTicker, self.config.extra['channel'])
    except Exception as e:
        # ...
import multiprocessing as mp
import time

def ApplicationRunner_process(realm, channel):
    appRunner = ApplicationRunner("wss://api.poloniex.com:443", realm, extra={'channel': channel})
    appRunner.run(__ApplicationSession)

if __name__ == "__main__":
    AppRun = [{'process': None, 'channel': 'BTC_LTC'},
              {'process': None, 'channel': 'BTC_XMR'}]
    for app in AppRun:
        app['process'] = mp.Process(target=ApplicationRunner_process, args=('realm1', app['channel']))
        app['process'].start()
        time.sleep(0.1)
    AppRun[0]['process'].join()
    AppRun[1]['process'].join()
Following the approach you linked for Twisted, I managed to get the same behaviour with asyncio by setting start_loop=False:
import asyncio
from autobahn.asyncio.wamp import ApplicationSession, ApplicationRunner

class MyApplicationSession(ApplicationSession):
    def __init__(self, cfg):
        super().__init__(cfg)
        self.cli_id = cfg.extra['cli_id']

    def onJoin(self, details):
        print("session attached", self.cli_id)

runner1 = ApplicationRunner(url, realm, extra={'cli_id': 1})
coro1 = runner1.run(MyApplicationSession, start_loop=False)

runner2 = ApplicationRunner(url, realm, extra={'cli_id': 2})
coro2 = runner2.run(MyApplicationSession, start_loop=False)

asyncio.get_event_loop().run_until_complete(coro1)
asyncio.get_event_loop().run_until_complete(coro2)
asyncio.get_event_loop().run_forever()
Suppose I have two functions that work like this:
@tornado.gen.coroutine
def f():
    for i in range(4):
        print("f", i)
        yield tornado.gen.sleep(0.5)

@tornado.gen.coroutine
def g():
    yield tornado.gen.sleep(1)
    print("Let's raise RuntimeError")
    raise RuntimeError
In general, function f might contain an endless loop and never return (e.g. it could process some queue).
What I want to do is to be able to interrupt it at any point where it yields.
The most obvious way doesn't work: the exception is only raised after function f exits (and if f is endless, that never happens).
@tornado.gen.coroutine
def main():
    try:
        yield [f(), g()]
    except Exception as e:
        print("Caught", repr(e))
    while True:
        yield tornado.gen.sleep(10)

if __name__ == "__main__":
    tornado.ioloop.IOLoop.instance().run_sync(main)
Output:
f 0
f 1
Let's raise RuntimeError
f 2
f 3
Traceback (most recent call last):
File "/tmp/test/lib/python3.4/site-packages/tornado/gen.py", line 812, in run
yielded = self.gen.send(value)
StopIteration
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
<...>
File "test.py", line 16, in g
raise RuntimeError
RuntimeError
That is, the exception is only raised when both coroutines return (both futures resolve).
This is partially solved by tornado.gen.WaitIterator, but it's buggy (unless I'm mistaken). But that's not the point.
It still doesn't solve the problem of interrupting existing coroutines. A coroutine continues to run even though the function that started it has exited.
EDIT: it seems like coroutine cancellation is something not really supported in Tornado, unlike in Python's asyncio, where you can easily throw CancelledError at every yield point.
If you use WaitIterator according to the instructions, and use a toro.Event to signal between coroutines, it works as expected:
from datetime import timedelta

import tornado.gen
import tornado.ioloop
import toro

stop = toro.Event()

@tornado.gen.coroutine
def f():
    for i in range(4):
        print("f", i)
        # wait raises Timeout if not set before the deadline.
        try:
            yield stop.wait(timedelta(seconds=0.5))
            print("f done")
            return
        except toro.Timeout:
            print("f continuing")

@tornado.gen.coroutine
def g():
    yield tornado.gen.sleep(1)
    print("Let's raise RuntimeError")
    raise RuntimeError

@tornado.gen.coroutine
def main():
    wait_iterator = tornado.gen.WaitIterator(f(), g())
    while not wait_iterator.done():
        try:
            result = yield wait_iterator.next()
        except Exception as e:
            print("Error {} from {}".format(e, wait_iterator.current_future))
            stop.set()
        else:
            print("Result {} received from {} at {}".format(
                result, wait_iterator.current_future,
                wait_iterator.current_index))

if __name__ == "__main__":
    tornado.ioloop.IOLoop.instance().run_sync(main)
For now, pip install toro to get the Event class. Tornado 4.2 will include Event, see the changelog.
Since version 5, Tornado runs on the asyncio event loop.
On Python 3, the IOLoop is always a wrapper around the asyncio event loop, and asyncio.Future and asyncio.Task are used instead of their Tornado counterparts.
Hence you can use asyncio Task cancellation, i.e. asyncio.Task.cancel.
Your example, with a queue-reading while-True loop, might look like this:
import logging
from asyncio import CancelledError

from tornado import ioloop, gen

async def read_off_a_queue():
    while True:
        try:
            await gen.sleep(1)
        except CancelledError:
            logging.debug('Reader cancelled')
            break
        else:
            logging.debug('Pretend a task is consumed')

async def do_some_work():
    await gen.sleep(5)
    logging.debug('do_some_work is raising')
    raise RuntimeError

async def main():
    logging.debug('Starting queue reader in background')
    reader_task = gen.convert_yielded(read_off_a_queue())
    try:
        await do_some_work()
    except RuntimeError:
        logging.debug('do_some_work failed, cancelling reader')
        reader_task.cancel()
        # give the task a chance to clean up, in case it
        # catches CancelledError and awaits something
        try:
            await reader_task
        except CancelledError:
            pass

if __name__ == '__main__':
    logging.basicConfig(level='DEBUG')
    ioloop.IOLoop.instance().run_sync(main)
If you run it, you should see:
DEBUG:asyncio:Using selector: EpollSelector
DEBUG:root:Starting queue reader in background
DEBUG:root:Pretend a task is consumed
DEBUG:root:Pretend a task is consumed
DEBUG:root:Pretend a task is consumed
DEBUG:root:Pretend a task is consumed
DEBUG:root:do_some_work is raising
DEBUG:root:do_some_work failed, cancelling reader
DEBUG:root:Reader cancelled
Warning: this is not a working solution. Look at the commentary. Still, if you're new (as I am), this example can show the logical flow. Thanks @nathaniel-j-smith and @wgh.
What is the difference compared to using something more primitive, like a global variable?
import asyncio

event = asyncio.Event()
aflag = False

async def short():
    while not aflag:
        print('short repeat')
        await asyncio.sleep(1)
    print('short end')

async def long():
    global aflag
    print('LONG START')
    await asyncio.sleep(3)
    aflag = True
    print('LONG END')

async def main():
    await asyncio.gather(long(), short())

if __name__ == '__main__':
    asyncio.run(main())
It is for asyncio, but I guess the idea stays the same. This is a semi-question (why would Event be better?). Yet the solution yields exactly the result the author needs:
LONG START
short repeat
short repeat
short repeat
LONG END
short end
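To answer the semi-question: a minimal sketch of the same flow using asyncio.Event instead of the flag. The difference is that event.wait() suspends the coroutine until the event is set, instead of waking up to poll the flag once per second (the Event is created inside main, as the Semaphore answers above recommend for Python 3.9 and older):

import asyncio

async def short(event):
    # suspends here instead of polling in a loop
    await event.wait()
    print('short end')

async def long(event):
    print('LONG START')
    await asyncio.sleep(3)
    event.set()
    print('LONG END')

async def main():
    event = asyncio.Event()
    await asyncio.gather(long(event), short(event))

if __name__ == '__main__':
    asyncio.run(main())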
UPDATE:
These slides may be really helpful in understanding the core of the problem.