I'm using an async library (asyncpg) and I want to debug some async calls to query the database.
I place a pdb breakpoint and want to try out a few queries:
(pdb) await asyncpg.fetch("select * from foo;")
*** SyntaxError: 'await' outside function
It would be great to be able to do this because it would allow me to try out a few SQL queries and see the result, all from the comfort of my debugger.
Is it possible?
I had a similar problem debugging the usage of aiofile. I then found a solution using nest_asyncio. For example, if one has the following async example script:
import asyncio
from aiofile import async_open
import nest_asyncio

async def main():
    async with async_open("/tmp/hello.txt", 'w+') as afp:
        await afp.write("Hello ")
        await afp.write("world")
        afp.seek(0)
        breakpoint()
        print(await afp.read())

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    nest_asyncio.apply(loop)
    loop.run_until_complete(main())
One can then do:
-> print(await afp.read())
(Pdb) loop = asyncio.get_event_loop()
(Pdb) loop.run_until_complete(afp.read())
'Hello world'
(Pdb)
Admittedly it is a bit more tedious than await asyncpg.fetch("select * from foo;") or await afp.read(), but it gets the job done. Hopefully a more elegant solution will come up in the future.
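Applied to the original asyncpg example, the same pattern from the (Pdb) prompt would look roughly like this (just a sketch; conn stands for whatever asyncpg connection or pool object is in scope at the breakpoint):
(Pdb) import asyncio, nest_asyncio
(Pdb) loop = asyncio.get_event_loop()
(Pdb) nest_asyncio.apply(loop)
(Pdb) loop.run_until_complete(conn.fetch("select * from foo;"))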
I want to add on to M.D.'s answer.
After applying nest_asyncio to your loop, you can also add the following function:
import asyncio
from typing import Any, Coroutine

def return_awaited_value(coroutine: Coroutine) -> Any:
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(coroutine)
    return result
If you use VSCode as your IDE, then this can be run from VSCode's debug console as
result = return_awaited_value(afp.read())
VSCode will then show the result as an object, which I find helpful to have while debugging.
We can't easily debug the coroutine directly (we would have to put the pdb call somewhere inside the asyncpg.fetch function by overriding it).
Instead, one possibility is to write a sync function, convert it to a coroutine using a package like awaits, and drop into pdb from there:
import asyncio
from awaits.awaitable import awaitable

@awaitable
def overridden_fetch_function(a, b):
    print(a)
    import pdb; pdb.set_trace()
    return a + b

# overridden_fetch_function is now a coroutine function! While it runs in a
# separate thread, control is passed back to the event loop.
print(asyncio.run(overridden_fetch_function(2, 2)))
Related
Can you help me see what I have misunderstood here, please? I have two functions and I would like the second one to run regardless of the status of the first one (whether it has finished or not). Hence I was thinking of making the first function asynchronous. This is what I have done:
import os
import asyncio
from datetime import datetime

async def do_some_iterations():
    for i in range(10):
        print(datetime.now().time())
        await asyncio.sleep(1)
    print('... Cool!')

async def main():
    task = asyncio.create_task(do_some_iterations())
    await task

def do_something():
    print('something...')

if __name__ == "__main__":
    asyncio.run(main())
    do_something()
The output is:
00:46:00.145024
00:46:01.148533
00:46:02.159751
00:46:03.169868
00:46:04.179915
00:46:05.187242
00:46:06.196356
00:46:07.207614
00:46:08.215997
00:46:09.225066
Cool!
something...
which looks like the traditional way where one function has to finish and then move to the next call.
I was hoping instead to execute do_something() before the asynchronous function started generating the print statements (or at least near the top of those statements).
What am I doing wrong please? How I should edit the script?
They both need to be part of the event loop that you created. asyncio.run() itself is not async, which means it will run until the loop ends. One easy way to do this is to use gather():
import asyncio
from datetime import datetime

async def do_some_iterations():
    for i in range(10):
        print(datetime.now().time())
        await asyncio.sleep(1)
    print('... Cool!')

async def do_something():
    print('something...')

async def main():
    await asyncio.gather(
        do_some_iterations(),
        do_something()
    )

if __name__ == "__main__":
    asyncio.run(main())
    print("done")
print("done")
This will print:
16:08:38.921879
something...
16:08:39.922565
16:08:40.923709
16:08:41.924823
16:08:42.926004
16:08:43.927044
16:08:44.927877
16:08:45.928724
16:08:46.929589
16:08:47.930453
... Cool!
done
You can also simply add another task:
async def main():
    task = asyncio.create_task(do_some_iterations())
    task2 = asyncio.create_task(do_something())
In both cases the function needs to be awaitable.
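For the create_task variant, keep in mind that main() should still wait for the tasks before it returns; otherwise asyncio.run(main()) finishes (and cancels the pending tasks) as soon as both tasks are created. A minimal sketch:
async def main():
    task = asyncio.create_task(do_some_iterations())
    task2 = asyncio.create_task(do_something())
    # without this, main() returns immediately and the tasks are cancelled
    await asyncio.gather(task, task2)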
I have a legacy Python application which is synchronous.
I started to use async code inside this application in this way (simplified):
import trio

async def loader():
    async with trio.open_nursery() as nursery:
        # some async tasks started here
        await trio.to_thread.run_sync(legacyCode)

if __name__ == '__main__':
    trio.run(loader)
Inside legacyCode I can use trio.from_thread.run(asyncMethod) to run some async code from the legacy synchronous code.
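A rough sketch of what this looks like inside legacyCode (asyncMethod stands for any trio-compatible async function):
def legacyCode():
    # ... existing synchronous logic ...
    result = trio.from_thread.run(asyncMethod)
    # ... keep working with result synchronously ...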
It works well, but now I need to include a new library (triopg) which internally uses trio_asyncio.
So I need to modify the way I start my application: I need to replace trio.run with trio_asyncio.run. That's easy, but after the trio.to_thread -> trio.from_thread round trip the async code does not work, because trio_asyncio has no loop defined.
Here is a short demonstration:
import trio
import trio_asyncio

def main():
    trio.from_thread.run(amain)

async def amain():
    print(f"Loop in amain: {trio_asyncio.current_loop.get()}")  # this prints None

async def loader():
    print(f"Loop in loader: {trio_asyncio.current_loop.get()}")  # this prints some loop
    await trio.to_thread.run_sync(main)

if __name__ == '__main__':
    trio_asyncio.run(loader)
How should I modify the example above so that trio_asyncio is able to find the loop inside the amain() function?
Or is this approach completely wrong? If so, how can I use small pieces of async code inside a huge synchronous application when the libraries need trio and trio_asyncio?
I use Python 3.9.
Finally I found the solution and ... it seems to be easy :-)
The trio_asyncio loop needs to be opened manually when we call the async function from a thread. So the only difference is to add an open_loop() call in the amain() function:
import trio
import trio_asyncio

def main():
    trio.from_thread.run(amain)

async def amain():
    async with trio_asyncio.open_loop():
        print(f"Loop in amain: {trio_asyncio.current_loop.get()}")  # this prints another loop

async def loader():
    print(f"Loop in loader: {trio_asyncio.current_loop.get()}")  # this prints one loop
    await trio.to_thread.run_sync(main)

if __name__ == '__main__':
    trio_asyncio.run(loader)
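With the loop open inside amain(), asyncio-flavoured code can then be driven from trio there via trio_asyncio. A minimal sketch, assuming some_asyncio_coro is a hypothetical asyncio coroutine function (for example, something triopg would call internally):
async def amain():
    async with trio_asyncio.open_loop():
        # wrap an asyncio-based callable so it can be awaited from trio code
        result = await trio_asyncio.aio_as_trio(some_asyncio_coro)()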
I'm trying to find a solution to call an async function in a synchronous context.
The following are my references:
Python call callback after async function is done
When using asyncio, how do you allow all running tasks to finish before shutting down the event loop
https://docs.python.org/zh-cn/3/library/asyncio-task.html
RuntimeError: This event loop is already running in python
call async function in main function
But I find that asyncio.get_event_loop() fails when it follows asyncio.run(). Here is my code to reproduce the issue:
import asyncio

async def asyncfunction(n):
    print(f'before sleep in asyncfunction({ n })')
    await asyncio.sleep(1)
    print(f'after sleep in asyncfunction({ n })')
    return f'result of asyncfunction({ n })'

def callback(r):
    print(f'inside callback, got: {r}')

r0 = asyncio.run(asyncfunction(0))  # causes the following asyncio.get_event_loop() to fail
callback(r0)
print('sync code following asyncio.run(0)')

r1 = asyncio.run(asyncfunction(1))  # but a following asyncio.run() still works
callback(r1)
print('sync code following asyncio.run(1)')

async def wrapper(n):
    r = await asyncfunction(n)
    callback(r)

# fails if there is an asyncio.run() before:
# RuntimeError: There is no current event loop in thread 'MainThread'.
asyncio.get_event_loop().create_task(wrapper(2))
print('sync code following loop.create_task(2)')

# the second call works if there is no asyncio.run() before
asyncio.get_event_loop().create_task(wrapper(3))
print('sync code following loop.create_task(3)')

# main
_all = asyncio.gather(*asyncio.all_tasks(asyncio.get_event_loop()))
asyncio.get_event_loop().run_until_complete(_all)
I think it might be because the event loop is "consumed" by something somehow, and asyncio.set_event_loop(asyncio.new_event_loop()) might be a workaround, but I'm not sure whether setting the event loop manually like that is expected usage for an end user. I'm also wondering why and how everything happens here.
After reading some of the source code of asyncio.run, I can see why this happens.
But I'm still wondering: what is the expected way to call an async function in a synchronous context?
It seems that the following code works (set a new event loop after each asyncio.run() call) :
asyncio.run(asyncfunction())
asyncio.set_event_loop(asyncio.new_event_loop())
but that is somewhat weird and doesn't seem to be the expected way.
When you call asyncio.run it creates a new event loop, and that loop is destroyed again when asyncio.run finishes. So in your example, after the second asyncio.run finishes there is no event loop at all; the two you previously created no longer exist. asyncio.get_event_loop will normally create a new event loop for you unless set_event_loop was previously called, which asyncio.run does do (this explains why things work if you remove the asyncio.run calls). To fix your code, create a new event loop yourself and use that instead of calling get_event_loop; bear in mind that this is a third loop, which may not be what you want.
import asyncio

async def asyncfunction(n):
    print(f'before sleep in asyncfunction({ n })')
    await asyncio.sleep(1)
    print(f'after sleep in asyncfunction({ n })')
    return f'result of asyncfunction({ n })'

def callback(r):
    print(f'inside callback, got: {r}')

r0 = asyncio.run(asyncfunction(0))
callback(r0)
print('sync code following asyncio.run(0)')

r1 = asyncio.run(asyncfunction(1))
callback(r1)
print('sync code following asyncio.run(1)')

async def wrapper(n):
    r = await asyncfunction(n)
    callback(r)

loop = asyncio.new_event_loop()
loop.create_task(wrapper(2))
print('sync code following loop.create_task(2)')
loop.create_task(wrapper(3))
print('sync code following loop.create_task(3)')

# main
_all = asyncio.gather(*asyncio.all_tasks(loop))
loop.run_until_complete(_all)
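For reference, the loop teardown described above can be observed directly; a minimal sketch (the exact exception and message can vary between Python versions):
import asyncio

async def noop():
    pass

asyncio.run(noop())  # creates a fresh loop, runs the coroutine, then closes the loop and unsets it
try:
    asyncio.get_event_loop()
except RuntimeError as e:
    print(e)  # e.g. "There is no current event loop in thread 'MainThread'."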
In my simple asyncio Python program below, bar_loop is supposed to run continuously with a 1 second delay between loops.
Things run as expected when we have simply
async def bar_loop(self):
    while True:
        print('bar')
However, when we add an await asyncio.sleep(1), the loop will end instead of looping.
async def bar_loop(self):
    while True:
        print('bar')
        await asyncio.sleep(1)
Why does asyncio.sleep() cause bar_loop to exit immediately? How can we let it loop with a 1 sec delay?
Full Example:
import asyncio
from typing import Optional

class Foo:
    def __init__(self):
        self.bar_loop_task: Optional[asyncio.Task] = None

    async def start(self):
        self.bar_loop_task = asyncio.create_task(self.bar_loop())

    async def stop(self):
        if self.bar_loop_task is not None:
            self.bar_loop_task.cancel()

    async def bar_loop(self):
        while True:
            print('bar')
            await asyncio.sleep(1)

if __name__ == '__main__':
    try:
        foo = Foo()
        asyncio.run(foo.start())
    except KeyboardInterrupt:
        asyncio.run(foo.stop())
Using Python 3.9.5 on Ubuntu 20.04.
This behavior has nothing to do with calling asyncio.sleep; it is the expected result of creating a task and then doing nothing else.
Tasks run in parallel on the asyncio loop, while other code that uses just coroutines and await expressions can be thought of as running in a linear pattern; however, since tasks are "out of the way" of the, let's call it, "visible path of execution", they also won't keep that flow from finishing.
In this case, your program simply reaches the end of the start method with nothing left being awaited, and the asyncio loop simply finishes its execution.
If you have no explicit code to run in parallel to bar_loop, just await the task. Change your start method to read:
async def start(self):
    self.bar_loop_task = asyncio.create_task(self.bar_loop())
    try:
        await self.bar_loop_task
    except XXX:
        # handle exceptions that might have taken place inside the task
        ...
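If Ctrl-C handling is also wanted, note that the two separate asyncio.run() calls in the question's __main__ block use two different event loops, so stop() would never see the task created by start(). A minimal sketch of one way to run and cancel inside a single loop (reusing the Foo class from the question, with run_foo as an illustrative helper name):
async def run_foo():
    foo = Foo()
    await foo.start()
    try:
        await foo.bar_loop_task  # keep the program alive while the loop runs
    except asyncio.CancelledError:
        pass  # the task was cancelled, e.g. via foo.stop()

if __name__ == '__main__':
    try:
        asyncio.run(run_foo())
    except KeyboardInterrupt:
        print('stopped')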
I need to call an async function inside the on_next of a Python rx subscription, like this:
from rx.subject import Subject
import asyncio

async def asyncPrint(value: str):
    print(f'async print: {value}')

async def main():
    s = Subject()
    s.subscribe(
        on_error=lambda e: print(e),
        on_next=lambda value: asyncPrint(value)
    )
    s.on_next('Im from the subject')

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())
but I get the following warning:
$ python test.py
rx\core\observer\autodetachobserver.py:26:
RuntimeWarning: coroutine 'asyncPrint' was never awaited
self._on_next(value)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
I really don't want to use asyncio.get_event_loop().run_until_complete(...) because I'm already running the main loop, and I don't want to start a new one or use nested loops.
I searched about it and saw that a lambda function can't be async. I think maybe that's the problem here, because I don't know how to get the on_next value without a lambda function when using the rx library.
I also looked at the async-rx library, but the only different thing it seems to offer is an await subscribe(...), and that's not what I want. I want something like subscribe(on_next=await...).
Is that possible? Coming from a JavaScript background, it's easy to start an async function inside a subscribe, so it looks like it should be doable. Hope someone can find a solution to this.
Thanks a lot!
Finally I managed to make this work by using asyncio.create_task():
from rx.subject import Subject
import asyncio

async def asyncPrint(value: str):
    print(f'async print: {value}')

async def main():
    s = Subject()
    s.subscribe(
        on_error=lambda e: print(e),
        on_next=lambda value: asyncio.create_task(asyncPrint(value))
    )
    s.on_next('Im from the subject')

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())
output:
$ python teste.py
async print: Im from the subject
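One caveat worth noting, not specific to rx: the event loop only keeps a weak reference to tasks created with asyncio.create_task(), so for longer-running handlers it is safer to keep a reference yourself. A rough sketch (background_tasks and schedule are illustrative names):
background_tasks = set()

def schedule(value):
    task = asyncio.create_task(asyncPrint(value))
    background_tasks.add(task)                        # keep a strong reference to the task
    task.add_done_callback(background_tasks.discard)  # drop it once the task finishes

# then subscribe with: on_next=schedule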