I have two functions:
i = 0

def update():
    global i
    i += 1

def output():
    print(i)
I want to run output() every 3 seconds and loop update() without any interval, both of course asynchronously.
I tried using asyncio, threading, multithreading and timeloop, but I couldn't get it to work with any of these libraries. If you figure out how to do it in any of these libraries, or some other library, please help. I'm ok with working with any library.
Using AsyncIO this would resemble:
import asyncio

def update(value):
    value["int"] += 1

def output(value):
    print(value["int"])

async def update_worker(value):
    while True:
        update(value)
        await asyncio.sleep(0)

async def output_worker(value):
    while True:
        output(value)
        await asyncio.sleep(3)

async def main():
    value = {"int": 0}
    await asyncio.gather(
        update_worker(value),
        output_worker(value))

if __name__ == "__main__":
    asyncio.run(main())
Notice that I changed the global value to a shared value, since it is best practice to do so. In other programming languages it would be unsafe to share a value for both reading and writing across multiple concurrent contexts, but since simple operations on Python objects are effectively atomic under the GIL, it is ok in this case. Otherwise, you should use a mutex or any other concurrency primitive to synchronise reads and writes.
AsyncIO concurrency is based on a cooperative multitasking model, so asynchronous tasks must explicitly yield control to other concurrent tasks when they are waiting for something (marked by the await keyword). Thus, to ensure that output_worker has a chance to run, one must add an await asyncio.sleep(0) in the infinite loop of update_worker so that the AsyncIO event loop can run output_worker.
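To see why that explicit yield matters, here is a minimal sketch of my own (not part of the program above): without an await inside its loop, the first coroutine runs to completion before the event loop ever gets a chance to schedule the second one.

import asyncio

async def busy_worker():
    # No await inside this loop: the coroutine never yields,
    # so the event loop cannot schedule any other task meanwhile.
    for _ in range(5):
        pass  # stand-in for CPU-bound work

async def starved_worker():
    print("runs only after busy_worker has finished")

async def main():
    await asyncio.gather(busy_worker(), starved_worker())

asyncio.run(main())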
Here is the same code using multithreading instead of AsyncIO:
from time import sleep
from threading import Thread, Lock

def update(value, lock):
    with lock:
        value["int"] += 1

def output(value, lock):
    with lock:
        print(value["int"])

def update_worker(value, lock):
    while True:
        update(value, lock)

def output_worker(value, lock):
    while True:
        output(value, lock)
        sleep(3)

def main():
    value = {"int": 0}
    lock = Lock()
    t1 = Thread(target=update_worker, args=(value, lock), daemon=True)
    t2 = Thread(target=output_worker, args=(value, lock), daemon=True)
    t1.start()
    t2.start()
    t1.join()
    t2.join()

if __name__ == "__main__":
    main()
Even though it is not necessary in this particular Python program, I used a Lock to synchronize reads and writes as it is the correct way to handle concurrency.
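As a quick illustration of why the lock is the correct habit in general (my own sketch, separate from the program above): value["int"] += 1 is a read-modify-write, so without a lock two threads can both read the same old value and one increment gets lost.

from threading import Thread

def demo():
    value = {"int": 0}

    def bump():
        for _ in range(100_000):
            value["int"] += 1  # read-modify-write; not atomic

    threads = [Thread(target=bump) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Without a lock the result can come out below 400000,
    # because increments from different threads interleave.
    print(value["int"])

demo()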
I'm new to this, so I apologize for mistakes. I'm trying to figure out a way to iterate inside a for loop range, calling an async function, but without waiting for a response.
Here's my code:
import asyncio
from random import randint
import time
import threading

async def print_i(i):
    number = 0
    if (number % 2) == 0:  # check for even number
        time.sleep(5)
    while number != 5:
        number = randint(0, 100)
        print("id-", i)

for i in range(0, 100):
    asyncio.run(print_i(i))
    # thread = threading.Thread(target=print_i(i))
    # thread.start()
Both asyncio.run and thread.start() execute the called function sequentially, whereas I was hoping that the for loop would call the function in all iterations in one go, with only the even values of i getting the time.sleep(5).
Is this possible?
Here are some basic examples I made of how to achieve concurrency in asyncio, threading, and trio. Treat the range() calls as a stand-in for your list in these cases.
If you wonder why trio is here: there's a better alternative to asyncio, built around an idea called Structured Concurrency, and it uses a different method for spawning concurrent tasks - you might stumble on it one day.
For asyncio:
import asyncio

async def task(num: int):
    print(f"task {num} started.")
    # an async function needs something 'awaitable' to be asynchronous
    await asyncio.sleep(3)
    print(f"task {num} finished.")

async def spawn_task():
    task_list = []
    for n in range(5):
        task_list.append(asyncio.create_task(task(n)))
    await asyncio.gather(*task_list)

asyncio.run(spawn_task())
For threading:
import threading
import time

def thread_workload(num: int):
    print(f"task {num} started.")
    # Most of Python's I/O functions (including time.sleep) release the GIL,
    # allowing other threads to run. The GIL prevents more than one thread
    # from running Python code at a time.
    time.sleep(3)
    print(f"task {num} finished.")

def spawn_thread():
    for n in range(5):
        t = threading.Thread(target=thread_workload, args=(n,))
        t.start()

spawn_thread()
For Trio:
import trio

async def task(num: int):
    print(f"task {num} started.")
    # an async function needs something 'awaitable' to be asynchronous
    await trio.sleep(3)
    print(f"task {num} finished.")

async def spawn_task():
    async with trio.open_nursery() as nursery:
        # explicit task spawning area. Nursery for tasks!
        for n in range(5):
            nursery.start_soon(task, n)

trio.run(spawn_task)
Output:
task 0 started.
task 1 started.
task 2 started.
task 3 started.
task 4 started.
task 0 finished.
task 1 finished.
task 2 finished.
task 3 finished.
task 4 finished.
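Applied to the question's print_i, the asyncio variant might look like the sketch below. Note two assumptions on my part: the evenness check is done on i (which seems to be the intent, rather than on number, which is always 0 at that point), and time.sleep is replaced with await asyncio.sleep so the delay does not block the other tasks.

import asyncio
from random import randint

async def print_i(i):
    if i % 2 == 0:  # even ids wait first
        await asyncio.sleep(5)  # non-blocking, unlike time.sleep
    number = 0
    while number != 5:
        number = randint(0, 100)
        print("id-", i)

async def main():
    # spawn all 100 calls at once instead of one asyncio.run per call
    await asyncio.gather(*(print_i(i) for i in range(100)))

asyncio.run(main())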
import asyncio
import queue

qq = queue.Queue()
qq.put('hi')

class MyApp():
    def __init__(self, q):
        self._queue = q

    def _process_item(self, item):
        print(f'Processing this item: {item}')

    def get_item(self):
        try:
            item = self._queue.get_nowait()
            self._process_item(item)
        except queue.Empty:
            pass

    async def listen_for_orders(self):
        '''
        Asynchronously check the orders queue for new incoming orders
        '''
        while True:
            self.get_item()
            await asyncio.sleep(0)

a = MyApp(qq)
loop = asyncio.get_event_loop()
loop.run_until_complete(a.listen_for_orders())
Using Python 3.6.
I'm trying to write an event handler that constantly listens for messages in the queue, and processes them (prints them in this case). But it must be asynchronous - I need to be able to run it in a terminal (IPython) and manually feed things to the queue (at least initially, for testing).
This code does not work - it blocks forever.
How do I make this run forever but return control after each iteration of the while loop?
Thanks.
side note:
To make the event loop work with IPython (version 7.2), I'm using this code from the ib_insync library; it's the same library I'm using for the real-world problem in the example above.
You need to make your queue an asyncio.Queue, and add things to the queue in a thread-safe manner. For example:
qq = asyncio.Queue()

class MyApp():
    def __init__(self, q):
        self._queue = q

    def _process_item(self, item):
        print(f'Processing this item: {item}')

    async def get_item(self):
        item = await self._queue.get()
        self._process_item(item)

    async def listen_for_orders(self):
        '''
        Asynchronously check the orders queue for new incoming orders
        '''
        while True:
            await self.get_item()

a = MyApp(qq)
loop = asyncio.get_event_loop()
loop.run_until_complete(a.listen_for_orders())
Your other thread must put stuff in the queue like this:
loop.call_soon_threadsafe(qq.put_nowait, <item>)
call_soon_threadsafe will ensure correct locking, and also that the event loop is woken up when a new queue item is ready.
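For example, if the other thread is the polling thread you described, a minimal end-to-end sketch could look like this (the producer function and its 1-second delay are made up for illustration):

import asyncio
import threading
import time

qq = asyncio.Queue()

async def consume():
    while True:
        item = await qq.get()
        print(f'Processing this item: {item}')

def producer(loop):
    # runs in a plain thread; hands items to the loop thread-safely
    for n in range(3):
        time.sleep(1)  # stand-in for the I/O-bound polling
        loop.call_soon_threadsafe(qq.put_nowait, f'order {n}')

loop = asyncio.get_event_loop()
threading.Thread(target=producer, args=(loop,), daemon=True).start()
loop.run_until_complete(consume())  # runs forever; interrupt to stop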
This is not an async queue. You need to use asyncio.Queue
qq = queue.Queue()
asyncio is built around an event loop. You call the loop, transferring control to it, and it runs until your function is complete - which never happens here:
loop.run_until_complete(a.listen_for_orders())
You commented:
I have another Thread that polls an external network resource for data (I/O intensive) and dumps the incoming messages into this thread.
Write that code async - so you'd have:
async def run():
    while 1:
        item = await get_item_from_network()
        process_item(item)

loop = asyncio.get_event_loop()
loop.run_until_complete(run())
If you don't want to do that, you can step through the loop manually, though this is generally something you don't want to do:
import asyncio

def run_once(loop):
    loop.call_soon(loop.stop)
    loop.run_forever()

loop = asyncio.get_event_loop()
for x in range(100):
    print(x)
    run_once(loop)
Then you simply schedule your async function, and each time you call run_once it will check your asyncio queue and pass control to your listen_for_orders function if the queue has an item in it.
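In an interactive session that might look like the following (reusing a, qq and run_once from the snippets above; this is my sketch of the idea, not tested against IPython specifics):

task = loop.create_task(a.listen_for_orders())  # schedule without blocking
qq.put_nowait('hi')  # feed the queue manually
run_once(loop)  # one pass of the loop; the task processes the item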
From what I know about asyncio, this should only print 0 to 4, but it goes through the full 10 digits.
Shouldn't the stop_loop coroutine stop awaiting the event and cancel the loop after i hits 5?
import asyncio

async def run():
    for i in range(10):
        if i == 5:
            e.set()
        print(i)

async def stop_loop():
    await e.wait()
    l.stop()

e = asyncio.Event()
l = asyncio.get_event_loop()
l.set_debug(True)
l.create_task(stop_loop())
l.create_task(run())
try:
    l.run_forever()
finally:
    l.close()
Output is
machine:programs user$ python3 conditional_stop.py
0
1
2
3
4
5
6
7
8
9
asyncio works by switching between tasks that are implemented as coroutines. A coroutine is a cooperative routine, in that coroutines voluntarily give up control once in a while, to let the asyncio event loop switch to another task. This is different from threading, where each task can and will be interrupted 'at will' by the scheduler.
And coroutines give up control every time they use await on another coroutine, usually at points where some I/O is involved. I/O is slow, and the asyncio event loop takes responsibility for monitoring changes in I/O streams, so it can know which tasks are ready to do more work again.
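Here is a minimal sketch of my own showing that switching behavior: each await asyncio.sleep(0) is a point where control changes hands, so the two tasks interleave.

import asyncio

async def task(name):
    for step in range(3):
        print(name, step)
        await asyncio.sleep(0)  # each await is a switch point

async def main():
    await asyncio.gather(task("a"), task("b"))

asyncio.run(main())
# prints: a 0, b 0, a 1, b 1, a 2, b 2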
Your problem is that you have a coroutine that is not cooperating:
async def run():
    for i in range(10):
        if i == 5:
            e.set()
        print(i)
That routine has no await statements, so it never gives up control to the event loop. No other coroutines can be run.
You could await on an asyncio.sleep() call:
async def run():
    for i in range(10):
        if i == 5:
            e.set()
        print(i)
        await asyncio.sleep(0.01)  # wait 1/100th of a second
Another option would be to replace the print(i) call (which is an I/O operation) with one that uses a non-blocking output stream. If you are not on Windows, then you can create a StreamWriter asynchronous I/O wrapper for sys.stdout:
import os
import sys

async def run():
    # create an async writer for sys.stdout
    loop = asyncio.get_event_loop()
    writer_transport, writer_protocol = await loop.connect_write_pipe(
        asyncio.streams.FlowControlMixin, os.fdopen(sys.stdout.fileno(), 'wb'))
    writer = asyncio.streams.StreamWriter(
        writer_transport, writer_protocol, None, loop)
    for i in range(10):
        if i == 5:
            e.set()
        writer.write(b'%d\n' % i)
        await writer.drain()
Unfortunately, there is no support yet for creating async I/O streams for the Windows console streams, see Python issue #26832; you'd have to use a threadpool executor instead.
Note that even with the latter coroutine, there is no guarantee that the stop coroutine will actually run soon enough after e.set() is called to cancel run() before it reaches 10! The loop is free to give control right back to the same coroutine after await writer.drain() has been handled. Writing short lines to the sys.stdout stream buffer is fast, and all .drain() does is give the writing coroutine time to flush the internal transport buffer; since direct non-blocking writes to sys.stdout succeed most of the time, there is not always enough of a window for stop_loop() to jump in before the run() coroutine has written all its lines to the writer transport.
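If you need the stop to take effect promptly, one variation (my own, not part of the answer above) is to cancel the run() task directly and give run() a switch point right after e.set():

import asyncio

async def run(e):
    for i in range(10):
        if i == 5:
            e.set()
        print(i)
        await asyncio.sleep(0)  # yield so stop_loop() can react

async def stop_loop(e, task):
    await e.wait()
    task.cancel()  # cancellation lands at run()'s next await

async def main():
    e = asyncio.Event()
    task = asyncio.ensure_future(run(e))
    await asyncio.gather(stop_loop(e, task), task, return_exceptions=True)

asyncio.get_event_loop().run_until_complete(main())
# prints 0 through 5, then run() is cancelled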
I was wondering how concurrency works in Python 3.6 with asyncio. My understanding is that when the interpreter executes an await statement, it will leave it there until the awaited operation is complete and then move on to execute another coroutine task. But what I see in the code below is not like that. The program runs synchronously, executing the tasks one by one.
What is wrong with my understanding and my implementation code?
import asyncio
import time

async def myWorker(lock, i):
    print("Attempting to attain lock {}".format(i))
    # acquire lock
    with await lock:
        # run critical section of code
        print("Currently Locked")
        time.sleep(10)
    # our worker releases lock at this point
    print("Unlocked Critical Section")

async def main():
    # instantiate our lock
    lock = asyncio.Lock()
    # await the execution of 2 myWorker coroutines
    # each with our same lock instance passed in
    # await asyncio.wait([myWorker(lock), myWorker(lock)])
    tasks = []
    for i in range(0, 100):
        tasks.append(asyncio.ensure_future(myWorker(lock, i)))
    await asyncio.wait(tasks)

# Start up a simple loop and run our main function
# until it is complete
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
print("All Tasks Completed")
loop.close()
Invoking a blocking call such as time.sleep in an asyncio coroutine blocks the whole event loop, defeating the purpose of using asyncio.
Change time.sleep(10) to await asyncio.sleep(10), and the code will behave like you expect.
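The fixed worker might look like this sketch (async with is the modern spelling for acquiring an asyncio.Lock; the question's with await lock form also worked on Python 3.6):

import asyncio

async def myWorker(lock, i):
    print("Attempting to attain lock {}".format(i))
    async with lock:
        print("Currently Locked")
        await asyncio.sleep(10)  # yields to the loop instead of blocking it
    print("Unlocked Critical Section")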
asyncio uses a loop to run everything; await yields control back to the loop so it can arrange for the next coroutine to run.
Is it possible to make a thread run method async so it can execute coroutines inside it? I realise I am mixing paradigms - I am trying to integrate a 3rd party library that uses coroutines whereas my project uses threads. Before I consider updating my project to use coroutines instead I'd like to explore executing coroutines within my threads.
Below is my example usecase where I have a thread but I want to call a coroutine from within my thread. My problem is the function MyThread::run() doesn't appear to be executing (printing). Is what I am trying possible...and advisable?
from threading import Thread

class MyThread(Thread):
    def __init__(self):
        Thread.__init__(self)
        self.start()

    # def run(self):
    #     while True:
    #         print("MyThread::run() sync")

    async def run(self):
        while True:
            # This line isn't executing/printing
            print("MyThread::run() sync")
            # Call coroutine...
            # volume = await market_place.get_24h_volume()

try:
    t = MyThread()
    while True:
        pass
except KeyboardInterrupt:
    pass
You need to create an asyncio event loop and wait until the coroutine completes.
import asyncio
from threading import Thread

class MyThread(Thread):
    def run(self):
        loop = asyncio.new_event_loop()  # loop = asyncio.get_event_loop()
        loop.run_until_complete(self._run())
        loop.close()
        # asyncio.run(self._run())  # in Python 3.7+

    async def _run(self):
        while True:
            print("MyThread::run() sync")
            await asyncio.sleep(1)
            # OR
            # volume = await market_place.get_24h_volume()

t = MyThread()
t.start()
try:
    t.join()
except KeyboardInterrupt:
    pass
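An alternative worth knowing (my own sketch, using the standard asyncio.run_coroutine_threadsafe API rather than the answer's approach): keep one event loop running in a dedicated thread and submit coroutines to it from any other thread.

import asyncio
from threading import Thread

loop = asyncio.new_event_loop()
Thread(target=loop.run_forever, daemon=True).start()

async def get_volume():
    await asyncio.sleep(1)  # stand-in for market_place.get_24h_volume()
    return 42

# submit the coroutine to the loop thread and wait for its result
future = asyncio.run_coroutine_threadsafe(get_volume(), loop)
print(future.result())  # blocks the calling thread until done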