Multiprocess Queue synchronization with asyncio

I want to gather data from asyncio loops running in sibling processes with Python 3.7
Ideally I would use a multiprocessing.JoinableQueue, relying on its join() call for synchronization.
However, its synchronization primitives block the event loop in full (see my partial answer below for an example).
Illustrative prototype:
import multiprocessing

class MP_GatherDict(dict):
    '''A per-process dictionary which can be gathered from a single one'''

    def __init__(self):
        self.q = multiprocessing.JoinableQueue()
        super().__init__()

    async def worker_process_server(self):
        while True:
            (await?) self.q.put(dict(self))  # Put a shallow copy
            (await?) self.q.join()           # Wait for it to be gathered

    async def gather(self):
        all_dicts = []
        while not self.q.empty():
            all_dicts.append(await self.q.get())
            self.q.task_done()
        return all_dicts
Note that the put->get->join->put flow might not work as expected, but this question is really about using multiprocessing primitives in an asyncio event loop...
The question, then, is: how best to await multiprocessing primitives from an asyncio event loop?

This test shows that multiprocessing.Queue.get() blocks the whole event loop:
import asyncio
import multiprocessing as mp
import os

mp_q = mp.JoinableQueue()

async def mp_queue_wait():
    try:
        print('Queue:', mp_q.get(timeout=2))
    except Exception as ex:
        print('Queue:', repr(ex))

async def main_loop_task():
    task = asyncio.get_running_loop().create_task(mp_queue_wait())
    for i in range(3):
        print(i, os.times())
        await asyncio.sleep(1)
    await task
    print(repr(task))

asyncio.run(main_loop_task())
Whose output is:
0 posix.times_result(user=0.41, system=0.04, children_user=0.0, children_system=0.0, elapsed=17208620.18)
Queue: Empty()
1 posix.times_result(user=0.41, system=0.04, children_user=0.0, children_system=0.0, elapsed=17208622.18)
2 posix.times_result(user=0.41, system=0.04, children_user=0.0, children_system=0.0, elapsed=17208623.18)
<Task finished coro=<mp_queue_wait() done,...> result=None>
So I am looking at asyncio.loop.run_in_executor() as the next possible answer; however, spawning an executor/thread just for this seems overkill...
Here is the same test using the default executor:
async def mp_queue_wait():
    try:
        result = await asyncio.get_running_loop().run_in_executor(None, mp_q.get, True, 2)
    except Exception as ex:
        result = ex
    print('Queue:', repr(result))
    return result
And the (desired) result:
0 posix.times_result(user=0.36, system=0.02, children_user=0.0, children_system=0.0, elapsed=17210674.65)
1 posix.times_result(user=0.37, system=0.02, children_user=0.0, children_system=0.0, elapsed=17210675.65)
Queue: Empty()
2 posix.times_result(user=0.37, system=0.02, children_user=0.0, children_system=0.0, elapsed=17210676.66)
<Task finished coro=<mp_queue_wait() done, defined at /home/apozuelo/Documents/5G_SBA/Tera5G/services/db.py:211> result=Empty()>

This comes a bit late, but:
You need to create an async wrapper around the mp.JoinableQueue(), since both get() and put() block the calling thread, and with it the whole event loop.
There are two approaches for this:
1. Use threads
2. Use asyncio.sleep() and the get_nowait(), put_nowait() methods.
I chose option 2 since it is easy.
from __future__ import annotations

from asyncio import sleep
from queue import Empty, Full, Queue
from typing import Generic, TypeVar

T = TypeVar('T')

class AsyncQueue(Generic[T]):
    """Async wrapper for queue.Queue"""
    SLEEP: float = 0.01

    def __init__(self, queue: Queue[T]):
        self._Q: Queue[T] = queue

    async def get(self) -> T:
        while True:
            try:
                return self._Q.get_nowait()
            except Empty:
                await sleep(self.SLEEP)

    async def put(self, item: T) -> None:
        while True:
            try:
                self._Q.put_nowait(item)
                return
            except Full:
                await sleep(self.SLEEP)

    def task_done(self) -> None:
        self._Q.task_done()
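For illustration, here is a minimal usage sketch of this wrapper (my addition, not part of the original answer; the producer thread and names are made up). Since the wrapper only relies on get_nowait()/put_nowait()/task_done(), the same duck-typed approach should also work with a multiprocessing.JoinableQueue:

import asyncio
import threading
import time
from queue import Queue

def producer(q: Queue) -> None:
    # Simulates a worker feeding the queue from another thread.
    for i in range(3):
        q.put(i)
        time.sleep(0.1)

async def main() -> None:
    q: Queue = Queue()
    threading.Thread(target=producer, args=(q,), daemon=True).start()
    aq = AsyncQueue(q)
    for _ in range(3):
        print('got', await aq.get())  # polls without blocking the event loop
        aq.task_done()

asyncio.run(main())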

Related

asynchronous iteration, how to move to next step of iteration while waiting for a task to complete?

I am trying to write an iterator which moves on to the next step in the iteration while awaiting an IO-bound task. To roughly demonstrate what I'm trying to do in code:
for i in iterable:
await io_bound_task() # move on to next step in iteration
# do more stuff when task is complete
I initially tried running with a simple for loop, with a sleep simulating an IO-bound task:
import asyncio
import random
async def main() -> None:
for i in range(3):
print(f"starting task {i}")
result = await io_bound_task(i)
print(f"finished task {result}")
async def io_bound_task(i: int) -> int:
await asyncio.sleep(random.random())
return i
asyncio.run(main())
Here the code runs synchronously and outputs:
starting task 0
finished task 0
starting task 1
finished task 1
starting task 2
finished task 2
which I assume is because the for loop is blocking. So I think an asynchronous for loop is the way to proceed? So I tried using an asynchronous iterator:
from __future__ import annotations
import asyncio
import random
class AsyncIterator:
def __init__(self, max_value: int) -> None:
self.max_value = max_value
self.count = 0
def __aiter__(self) -> AsyncIterator:
return self
async def __anext__(self) -> int:
if self.count == self.max_value:
raise StopAsyncIteration
self.count += 1
return self.count
async def main() -> None:
async for i in AsyncIterator(3):
print(f"starting task {i}")
result = await io_bound_task(i)
print(f"finished task {result}")
async def io_bound_task(i: int) -> int:
await asyncio.sleep(random.random())
return i
asyncio.run(main())
but this also seems to run synchronously, resulting in the output:
starting task 1
finished task 1
starting task 2
finished task 2
starting task 3
finished task 3
every time. So I think the asynchronous iterator is not doing what I assumed it would do? At this point I'm stuck. Is it an issue with my understanding of the asynchronous iterator? Can someone give me some pointers as to how to achieve what I'm trying to do?
I'm new to working with async, so apologies if I'm doing something stupid. Any help is appreciated. Thanks.
I'm on python 3.8.10 if that is a relevant detail.
The thing that you are looking for is called a task, and can be created using the asyncio.create_task function. All the approaches you tried involved awaiting the coroutine io_bound_task(i), and await means something like "wait for this to complete before continuing". If you wrap your coroutine in a task, then it will run in the background rather than you having to wait for it to complete before continuing.
Here is a version of your code using tasks:
import asyncio
import random
async def main() -> None:
tasks = []
for i in range(3):
print(f"starting task {i}")
tasks.append(asyncio.create_task(io_bound_task(i)))
for task in tasks:
result = await task
print(f"finished task {result}")
async def io_bound_task(i: int) -> int:
await asyncio.sleep(random.random())
return i
asyncio.run(main())
Output:
starting task 0
starting task 1
starting task 2
finished task 0
finished task 1
finished task 2
You can also use asyncio.gather (if you need all results before continuing) or asyncio.wait for awaiting multiple tasks, rather than a loop. For example if task 2 completes before task 0 and you don't want to wait for task 0, you could do:
async def main() -> None:
pending = []
for i in range(3):
print(f"starting task {i}")
pending.append(asyncio.create_task(io_bound_task(i)))
while pending:
done, pending = await asyncio.wait(pending, return_when=asyncio.FIRST_COMPLETED)
for task in done:
result = await task
print(f"finished task {result}")

How to communicate between traditional thread and asyncio thread in Python?

In Python, what's the idiomatic way to establish one-way communication between two threading.Thread instances? Call them thread a and thread b.
a is the producer: it continuously generates values for b to consume.
b is the consumer: it reads one value generated by a, processes the value with a coroutine, and then reads the next value, and so on.
Illustration:
q = very_magic_queue.Queue()
def worker_of_a(q):
while True:
q.put(1)
time.sleep(1)
a = threading.Thread(target=worker_of_a, args=(q,))
a.start()
async def loop(q):
while True:
# v must be processed in the same order as they are produced
v = await q.get()
print(v)
async def foo():
pass
async def b_main(q):
loop_fut = asyncio.ensure_future(loop(q))
foo_fut = asyncio.ensure_future(foo())
_ = await asyncio.wait([loop_fut, foo_fut], ...)
# blah blah blah
def worker_of_b(q):
asyncio.set_event_loop(asyncio.new_event_loop())
asyncio.get_event_loop().run_until_complete(b_main(q))
b = threading.Thread(target=worker_of_b, args=(q,))
b.start()
Of course the above code doesn't work, because queue.Queue.get cannot be awaited, and asyncio.Queue cannot be used in another thread.
I also need a communication channel from b to a.
It would be great if the solution could also work with gevent.
Thanks :)
I had a similar problem: communicating data between a thread and asyncio. The solution I used is to create a sync Queue and add methods for async get and async put, using asyncio.sleep to make them non-blocking.
Here is my queue class:
import asyncio
import queue

# Class to provide a queue (sync or async morph)
class queMorph(queue.Queue):
    def __init__(self, qSize, qNM):
        super().__init__(qSize)
        self.timeout = 0.018
        self.me = f'queMorph-{qNM}'

    # Async awaitable morphs of the queue's get/put
    async def aget(self):
        while True:
            try:
                return self.get_nowait()
            except queue.Empty:
                await asyncio.sleep(self.timeout)

    async def aput(self, data):
        while True:
            try:
                return self.put_nowait(data)
            except queue.Full:
                print(f'{self.me} Queue full on put..')
                await asyncio.sleep(self.timeout)
To put/get items from the queue on the thread (synchronous) side, use the normal blocking q.get() and q.put() functions.
In the async loop, use q.aget() and q.aput(), which do not block.
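A rough usage sketch (my addition; the queue size and names are illustrative): a plain thread produces with the blocking put(), while a coroutine consumes with aget():

import asyncio
import threading
import time

def producer(q: queMorph) -> None:
    for i in range(3):
        q.put(i)          # blocking put from the thread side
        time.sleep(0.1)

async def consumer(q: queMorph) -> None:
    for _ in range(3):
        print('got', await q.aget())  # yields to the loop while polling

q = queMorph(16, 'demo')
threading.Thread(target=producer, args=(q,), daemon=True).start()
asyncio.run(consumer(q))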
You can use a synchronized queue from the queue module and defer the wait to a ThreadPoolExecutor:
async def loop(q):
from concurrent.futures import ThreadPoolExecutor
with ThreadPoolExecutor(max_workers=1) as executor:
loop = asyncio.get_event_loop()
while True:
# v must be processed in the same order as they are produced
v = await loop.run_in_executor(executor, q.get)
print(v)
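Wiring this up with the producer thread from the question might look like the following end-to-end sketch (my addition, assuming a plain queue.Queue and a bounded consumer so the program terminates):

import asyncio
import queue
import threading
import time

def worker_of_a(q: queue.Queue) -> None:
    for i in range(3):
        q.put(i)
        time.sleep(0.5)

async def consume(q: queue.Queue, n: int) -> None:
    loop = asyncio.get_running_loop()
    for _ in range(n):
        v = await loop.run_in_executor(None, q.get)  # blocking get runs in a worker thread
        print(v)

q = queue.Queue()
threading.Thread(target=worker_of_a, args=(q,), daemon=True).start()
asyncio.run(consume(q, 3))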
I've used Janus to solve this problem - it's a Python library that gives you a thread-safe queue that can be used to communicate between asyncio and a thread.
import asyncio
import janus

def threaded(sync_q):
    for i in range(100):
        sync_q.put(i)
    sync_q.join()

async def async_code(async_q):
    for i in range(100):
        val = await async_q.get()
        assert val == i
        async_q.task_done()

async def main():
    queue = janus.Queue()
    loop = asyncio.get_running_loop()
    fut = loop.run_in_executor(None, threaded, queue.sync_q)
    await async_code(queue.async_q)
    await fut

asyncio.run(main())

Join multiple async generators in Python [duplicate]

This question already has answers here: asynchronous python itertools chain multiple generators (2 answers). Closed 3 years ago.
I would like to listen for events from multiple instances of the same object and then merge this event streams to one stream. For example, if I use async generators:
import asyncio

class PeriodicYielder:
    def __init__(self, period: int) -> None:
        self.period = period

    async def updates(self):
        while True:
            await asyncio.sleep(self.period)
            yield self.period
I can successfully listen for events from one instance:
async def get_updates_from_one():
each_1 = PeriodicYielder(1)
async for n in each_1.updates():
print(n)
# 1
# 1
# 1
# ...
But how can I get events from multiple async generators? In other words: how can I iterate through multiple async generators in the order they are ready to produce next value?
async def get_updates_from_multiple():
each_1 = PeriodicYielder(1)
each_2 = PeriodicYielder(2)
async for n in magic_async_join_function(each_1.updates(), each_2.updates()):
print(n)
# 1
# 1
# 2
# 1
# 1
# 2
# ...
Is there such magic_async_join_function in stdlib or in 3rd party module?
You can use the wonderful aiostream library. It'll look like this:
import asyncio
from aiostream import stream
async def test1():
for _ in range(5):
await asyncio.sleep(0.1)
yield 1
async def test2():
for _ in range(5):
await asyncio.sleep(0.2)
yield 2
async def main():
combine = stream.merge(test1(), test2())
async with combine.stream() as streamer:
async for item in streamer:
print(item)
asyncio.run(main())
Result:
1
1
2
1
1
2
1
2
2
2
If you wanted to avoid the dependency on an external library (or as a learning exercise), you could merge the async iterators using a queue:
def merge_async_iters(*aiters):
# merge async iterators, proof of concept
queue = asyncio.Queue(1)
async def drain(aiter):
async for item in aiter:
await queue.put(item)
async def merged():
while not all(task.done() for task in tasks):
yield await queue.get()
tasks = [asyncio.create_task(drain(aiter)) for aiter in aiters]
return merged()
This passes the test from Mikhail's answer, but it's not perfect: it doesn't propagate the exception in case one of the async iterators raises. Also, if the task that exhausts the merged generator returned by merge_async_iters() gets cancelled, or if the same generator is not exhausted to the end, the individual drain tasks are left hanging.
A more complete version could handle the first issue by detecting an exception and transmitting it through the queue. The second issue can be resolved by having the merged generator cancel the drain tasks as soon as the iteration is abandoned. With those changes, the resulting code looks like this:
def merge_async_iters(*aiters):
queue = asyncio.Queue(1)
run_count = len(aiters)
cancelling = False
async def drain(aiter):
nonlocal run_count
try:
async for item in aiter:
await queue.put((False, item))
except Exception as e:
if not cancelling:
await queue.put((True, e))
else:
raise
finally:
run_count -= 1
async def merged():
try:
while run_count:
raised, next_item = await queue.get()
if raised:
cancel_tasks()
raise next_item
yield next_item
finally:
cancel_tasks()
def cancel_tasks():
nonlocal cancelling
cancelling = True
for t in tasks:
t.cancel()
tasks = [asyncio.create_task(drain(aiter)) for aiter in aiters]
return merged()
Different approaches to merging async iterators can be found in this answer, and also this one, where the latter allows for adding new streams mid-stride. The complexity and subtlety of these implementations shows that, while it is useful to know how to write one, actually doing so is best left to well-tested external libraries such as aiostream that cover all the edge cases.

How to make a asyncio pool cancelable?

I have a pool_map function that can be used to limit the number of simultaneously executing functions.
The idea is to have a coroutine function accepting a single parameter that is mapped over a list of possible parameters, but also to wrap every call in a semaphore acquisition, so that only a limited number are running at once:
from asyncio import Semaphore
from typing import Awaitable, Callable, Generator, Iterable, TypeVar

A = TypeVar('A')
V = TypeVar('V')

async def pool_map(
    func: Callable[[A], Awaitable[V]],
    arg_it: Iterable[A],
    size: int = 10,
) -> Generator[Awaitable[V], None, None]:
    """
    Maps an async function to iterables
    ensuring that only some are executed at once.
    """
    semaphore = Semaphore(size)

    async def sub(arg):
        async with semaphore:
            return await func(arg)

    return map(sub, arg_it)
I modified the above code for the sake of an example and didn't test it, but my variant works well. E.g. you can use it like this:
from asyncio import get_event_loop, coroutine, as_completed
from contextlib import closing
URLS = [...]
async def run_all(awaitables):
for a in as_completed(awaitables):
result = await a
print('got result', result)
async def download(url): ...
if __name__ == '__main__':
pool = pool_map(download, URLS)
with closing(get_event_loop()) as loop:
loop.run_until_complete(run_all(pool))
But a problem arises if there is an exception thrown while awaiting a future. I can't see how to cancel all scheduled or still-running tasks, nor the ones still waiting for the semaphore to be acquired.
Is there a library or an elegant building block for this that I don't know of, or do I have to build all the parts myself? (i.e. a Semaphore with access to its waiters, an as_finished that provides access to its running task queue, ...)
Use ensure_future to get a Task instead of a coroutine:
import asyncio
from contextlib import closing
def pool_map(func, args, size=10):
"""
Maps an async function to iterables
ensuring that only some are executed at once.
"""
semaphore = asyncio.Semaphore(size)
async def sub(arg):
async with semaphore:
return await func(arg)
tasks = [asyncio.ensure_future(sub(x)) for x in args]
return tasks
async def f(n):
print(">>> start", n)
if n == 7:
raise Exception("boom!")
await asyncio.sleep(n / 10)
print("<<< end", n)
return n
async def run_all(tasks):
exc = None
for a in asyncio.as_completed(tasks):
try:
result = await a
print('=== result', result)
except asyncio.CancelledError as e:
print("!!! cancel", e)
except Exception as e:
print("Exception in task, cancelling!")
for t in tasks:
t.cancel()
exc = e
if exc:
raise exc
pool = pool_map(f, range(1, 20), 3)
with closing(asyncio.get_event_loop()) as loop:
loop.run_until_complete(run_all(pool))
Here's a naive solution, based on the fact that cancel is a no-op if the task is already finished:
async def run_all(awaitables):
    futures = [asyncio.ensure_future(a) for a in awaitables]
    try:
        for fut in asyncio.as_completed(futures):
            result = await fut
            print('got result', result)
    except:
        for future in futures:
            future.cancel()
        await asyncio.wait(futures)
        raise

How can I periodically execute a function with asyncio?

I'm migrating from tornado to asyncio, and I can't find the asyncio equivalent of tornado's PeriodicCallback. (A PeriodicCallback takes two arguments: the function to run and the number of milliseconds between calls.)
Is there such an equivalent in asyncio?
If not, what would be the cleanest way to implement this without running the risk of getting a RecursionError after a while?
For Python versions below 3.5:
import asyncio
@asyncio.coroutine
def periodic():
while True:
print('periodic')
yield from asyncio.sleep(1)
def stop():
task.cancel()
loop = asyncio.get_event_loop()
loop.call_later(5, stop)
task = loop.create_task(periodic())
try:
loop.run_until_complete(task)
except asyncio.CancelledError:
pass
For Python 3.5 and above:
import asyncio
async def periodic():
while True:
print('periodic')
await asyncio.sleep(1)
def stop():
task.cancel()
loop = asyncio.get_event_loop()
loop.call_later(5, stop)
task = loop.create_task(periodic())
try:
loop.run_until_complete(task)
except asyncio.CancelledError:
pass
When you feel that something should happen "in the background" of your asyncio program, asyncio.Task might be a good way to do it. You can read this post to see how to work with tasks.
Here's a possible implementation of a class that executes some function periodically:
import asyncio
from contextlib import suppress
class Periodic:
def __init__(self, func, time):
self.func = func
self.time = time
self.is_started = False
self._task = None
async def start(self):
if not self.is_started:
self.is_started = True
# Start task to call func periodically:
self._task = asyncio.ensure_future(self._run())
async def stop(self):
if self.is_started:
self.is_started = False
# Stop task and await it stopped:
self._task.cancel()
with suppress(asyncio.CancelledError):
await self._task
async def _run(self):
while True:
await asyncio.sleep(self.time)
self.func()
Let's test it:
async def main():
p = Periodic(lambda: print('test'), 1)
try:
print('Start')
await p.start()
await asyncio.sleep(3.1)
print('Stop')
await p.stop()
await asyncio.sleep(3.1)
print('Start')
await p.start()
await asyncio.sleep(3.1)
finally:
await p.stop() # we should stop task finally
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
Output:
Start
test
test
test
Stop
Start
test
test
test
[Finished in 9.5s]
As you can see, on start we just start a task that calls the function and sleeps for some time in an endless loop. On stop we cancel that task. Note that the task should be stopped at the moment the program finishes.
One more important thing: your callback shouldn't take much time to execute, or it'll freeze your event loop. If you're planning to call some long-running func, you would possibly need to run it in an executor; a sketch follows.
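For example, here is a sketch (my addition) of a _run() variant that offloads a blocking callback to the default thread-pool executor, so a slow self.func doesn't stall the loop:

    async def _run(self):
        loop = asyncio.get_running_loop()
        while True:
            await asyncio.sleep(self.time)
            # Run the (possibly blocking) callback in the default executor
            # instead of calling it directly on the event loop thread.
            await loop.run_in_executor(None, self.func)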
A variant that may be helpful: if you want your recurring call to happen every n seconds instead of n seconds between the end of the last execution and the beginning of the next, and you don't want calls to overlap in time, the following is simpler:
async def repeat(interval, func, *args, **kwargs):
"""Run func every interval seconds.
If func has not finished before *interval*, will run again
immediately when the previous iteration finished.
*args and **kwargs are passed as the arguments to func.
"""
while True:
await asyncio.gather(
func(*args, **kwargs),
asyncio.sleep(interval),
)
And an example of using it to run a couple tasks in the background:
async def f():
await asyncio.sleep(1)
print('Hello')
async def g():
await asyncio.sleep(0.5)
print('Goodbye')
async def main():
t1 = asyncio.ensure_future(repeat(3, f))
t2 = asyncio.ensure_future(repeat(2, g))
await t1
await t2
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
There is no built-in support for periodic calls, no.
Just create your own scheduler loop that sleeps and executes any tasks scheduled:
import asyncio, math, time
async def scheduler():
while True:
# sleep until the next whole second
now = time.time()
await asyncio.sleep(math.ceil(now) - now)
# execute any scheduled tasks
async for task in scheduled_tasks(time.time()):
await task()
The scheduled_tasks() iterator should produce tasks that are ready to be run at the given time. Note that producing the schedule and kicking off all the tasks could in theory take longer than 1 second; the idea here is that the scheduler yields all tasks that should have started since the last check.
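For illustration, here is one hypothetical shape a scheduled_tasks() implementation could take (my addition; the heap-based schedule and the schedule_at() helper are made up for this sketch):

import heapq
import itertools

_schedule = []                 # heap of (when, seq, task_func) entries
_counter = itertools.count()   # tie-breaker so functions are never compared

def schedule_at(when, task_func):
    heapq.heappush(_schedule, (when, next(_counter), task_func))

async def scheduled_tasks(now):
    # Yield every task whose scheduled time has passed.
    while _schedule and _schedule[0][0] <= now:
        _, _, task_func = heapq.heappop(_schedule)
        yield task_func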
An alternative version with a decorator, for Python 3.7:
import asyncio
import time
def periodic(period):
def scheduler(fcn):
async def wrapper(*args, **kwargs):
while True:
asyncio.create_task(fcn(*args, **kwargs))
await asyncio.sleep(period)
return wrapper
return scheduler
@periodic(2)
async def do_something(*args, **kwargs):
await asyncio.sleep(5) # Do some heavy calculation
print(time.time())
if __name__ == '__main__':
asyncio.run(do_something('Maluzinha do papai!', secret=42))
Based on A. Jesse Jiryu Davis' answer (with comments from Torkel Bjørnson-Langen and ReWrite), this is an improvement which avoids drift.
import time
import asyncio
@asyncio.coroutine
def periodic(period):
def g_tick():
t = time.time()
count = 0
while True:
count += 1
yield max(t + count * period - time.time(), 0)
g = g_tick()
while True:
print('periodic', time.time())
yield from asyncio.sleep(next(g))
loop = asyncio.get_event_loop()
task = loop.create_task(periodic(1))
loop.call_later(5, task.cancel)
try:
loop.run_until_complete(task)
except asyncio.CancelledError:
pass
This solution uses the decoration concept from Fernando José Esteves de Souza, the drifting workaround from Wojciech Migda, and a superclass, in order to produce code that is as elegant as possible for dealing with asynchronous periodic functions.
Without threading.Thread
The solution comprises the following files:
periodic_async_thread.py with the base class for you to subclass
a_periodic_thread.py with an example subclass
run_me.py with an example instantiation and run
The PeriodicAsyncThread class in the file periodic_async_thread.py:
import time
import asyncio
import abc
class PeriodicAsyncThread:
def __init__(self, period):
self.period = period
def periodic(self):
def scheduler(fcn):
async def wrapper(*args, **kwargs):
def g_tick():
t = time.time()
count = 0
while True:
count += 1
yield max(t + count * self.period - time.time(), 0)
g = g_tick()
while True:
# print('periodic', time.time())
asyncio.create_task(fcn(*args, **kwargs))
await asyncio.sleep(next(g))
return wrapper
return scheduler
@abc.abstractmethod
async def run(self, *args, **kwargs):
return
def start(self):
asyncio.run(self.run())
An example of a simple subclass APeriodicThread in the file a_periodic_thread.py:
from periodic_async_thread import PeriodicAsyncThread
import time
import asyncio
class APeriodicThread(PeriodicAsyncThread):
def __init__(self, period):
super().__init__(period)
self.run = self.periodic()(self.run)
async def run(self, *args, **kwargs):
await asyncio.sleep(2)
print(time.time())
Instantiating and running the example class in the file run_me.py:
from a_periodic_thread import APeriodicThread
apt = APeriodicThread(2)
apt.start()
This code represents an elegant solution that also mitigates the time drift problem of other solutions. The output is similar to:
1642711285.3898764
1642711287.390698
1642711289.3924973
1642711291.3920736
With threading.Thread
The solution comprises the following files:
async_thread.py with the base asynchronous thread class.
periodic_async_thread.py with the base class for you to subclass
a_periodic_thread.py with an example subclass
run_me.py with an example instantiation and run
The AsyncThread class in the file async_thread.py:
from threading import Thread
import asyncio
import abc
class AsyncThread(Thread):
def __init__(self, *args, **kwargs) -> None:
super().__init__(*args, **kwargs)
@abc.abstractmethod
async def async_run(self, *args, **kwargs):
pass
def run(self, *args, **kwargs):
# loop = asyncio.new_event_loop()
# asyncio.set_event_loop(loop)
# loop.run_until_complete(self.async_run(*args, **kwargs))
# loop.close()
asyncio.run(self.async_run(*args, **kwargs))
The PeriodicAsyncThread class in the file periodic_async_thread.py:
import time
import asyncio
from .async_thread import AsyncThread
class PeriodicAsyncThread(AsyncThread):
def __init__(self, period, *args, **kwargs):
self.period = period
super().__init__(*args, **kwargs)
self.async_run = self.periodic()(self.async_run)
def periodic(self):
def scheduler(fcn):
async def wrapper(*args, **kwargs):
def g_tick():
t = time.time()
count = 0
while True:
count += 1
yield max(t + count * self.period - time.time(), 0)
g = g_tick()
while True:
# print('periodic', time.time())
asyncio.create_task(fcn(*args, **kwargs))
await asyncio.sleep(next(g))
return wrapper
return scheduler
An example of a simple subclass APeriodicAsyncThread in the file a_periodic_thread.py:
import time
from threading import current_thread
from .periodic_async_thread import PeriodicAsyncThread
import asyncio
class APeriodicAsyncThread(PeriodicAsyncThread):
async def async_run(self, *args, **kwargs):
print(f"{current_thread().name} {time.time()} Hi!")
await asyncio.sleep(1)
print(f"{current_thread().name} {time.time()} Bye!")
Instantiating and running the example class in the file run_me.py:
from .a_periodic_thread import APeriodicAsyncThread
a = APeriodicAsyncThread(2, name="a periodic async thread")
a.start()
a.join()
This code represents an elegant solution that also mitigates the time drift problem of other solutions. The output is similar to:
a periodic async thread 1643726990.505269 Hi!
a periodic async thread 1643726991.5069854 Bye!
a periodic async thread 1643726992.506919 Hi!
a periodic async thread 1643726993.5089169 Bye!
a periodic async thread 1643726994.5076022 Hi!
a periodic async thread 1643726995.509422 Bye!
a periodic async thread 1643726996.5075526 Hi!
a periodic async thread 1643726997.5093904 Bye!
a periodic async thread 1643726998.5072556 Hi!
a periodic async thread 1643726999.5091035 Bye!
For multiple types of scheduling I'd recommend APScheduler, which has asyncio support.
I use it for a simple Python process I can fire up using Docker; it just runs like a cron, executing something weekly, until I kill the Docker container/process.
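A minimal sketch of the asyncio integration (my addition; this assumes APScheduler 3.x and its AsyncIOScheduler):

import asyncio
from apscheduler.schedulers.asyncio import AsyncIOScheduler

async def tick():
    print('tick')

async def main():
    scheduler = AsyncIOScheduler()
    scheduler.add_job(tick, 'interval', seconds=5)  # run tick() every 5 seconds
    scheduler.start()
    await asyncio.Event().wait()  # keep the loop alive until killed

asyncio.run(main())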
This is what I did to test my theory of periodic callbacks using asyncio. I don't have experience using Tornado, so I'm not sure exactly how its periodic callbacks work. I am used to using the after(ms, callback) method in Tkinter, though, and this is what I came up with. A while True: loop just looks ugly to me, even if it is asynchronous (more so than globals). Note that call_later(delay, callback, *args) takes seconds, not milliseconds.
import asyncio
my_var = 0
def update_forever(the_loop):
global my_var
print(my_var)
my_var += 1
# exit logic could be placed here
the_loop.call_later(3, update_forever, the_loop) # the method adds a delayed callback on completion
event_loop = asyncio.get_event_loop()
event_loop.call_soon(update_forever, event_loop)
event_loop.run_forever()
