Python asyncio: function or coroutine, which to use?

I'm wondering if there are any noticeable differences in performance between foo and bar:
class Interface:
    def __init__(self, loop):
        self.loop = loop

    def foo(self, a, b):
        return self.loop.run_until_complete(self.bar(a, b))

    async def bar(self, a, b):
        v1 = await do_async_thing1(a)
        for i in range(whatever):
            do_some_synchronous_work(i)
        v2 = await do_async_thing2(b, v1)
        return v2

async def call_foo_bar(loop):
    a = 'squid'
    b = 'ink'
    interface = Interface(loop)
    v_foo = interface.foo(a, b)
    v_bar = await interface.bar(a, b)
But will the use of run_until_complete cause any practical, noticeable difference to the way my program runs?
(The reason I ask is I'm building an interface class which will accommodate decomposable "back-ends" some of which could be asynchronous. I'm hoping to use standard functions for the public (reusable) methods of the interface class so one API can be maintained for all code without messing up the use of asynchronous back-ends where an event-loop is available.)
Update: I didn't check the code properly and the first version was completely invalid, hence rewrite.

loop.run_until_complete() should be used outside of coroutines, at the very top level. Calling run_until_complete() while the event loop is already running is forbidden.
Also, loop.run_until_complete(f) is a blocking call: the executing thread is blocked until the coroutine or future f is done.
async def is the proper way to write asynchronous code: it lets you build concurrent coroutines which may communicate with each other.
A coroutine defined with async def requires a running event loop (either loop.run_until_complete() or loop.run_forever() should be called).
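A minimal sketch of the two call sites (illustrative names, not from the question): inside a coroutine you simply await, while at the top level, where no loop is running yet, you hand the coroutine to run_until_complete and block until it finishes.
import asyncio

async def bar():
    await asyncio.sleep(0.1)
    return 'done'

async def call_bar():
    # Inside a coroutine: just await; the already-running loop drives it.
    return await bar()

# At the top level no loop is running yet, so blocking here is fine.
loop = asyncio.new_event_loop()
result = loop.run_until_complete(call_bar())
loop.close()
print(result)  # 'done'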

There is a world of difference. It is a SyntaxError to use await in a normal def function.
import asyncio


async def atest():
    print("hello")


def test():
    await atest()


async def main():
    await atest()

test()
--
File "test.py", line 9
await atest()
^
SyntaxError: invalid syntax
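For contrast, a sketch of a version that does compile (assuming nothing else is already running an event loop): the synchronous function has to hand the coroutine to an event loop instead of awaiting it.
import asyncio

async def atest():
    print("hello")

def test():
    # A plain def function cannot use await; it must hand the
    # coroutine to an event loop and block until it finishes.
    asyncio.run(atest())

async def main():
    await atest()

test()               # fine: the sync function starts its own event loop
asyncio.run(main())  # fine: await is legal inside an async def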

I do not believe so. I have used both and have not really seen a difference. I prefer foo just because I have used it more and understand it a bit better, but it is up to you.

Related

Share a Python asyncio.Future between multiple coroutines

I would like to know if the following is a safe use of futures to bridge
callback-based code to asynchronous code.
I hope the following illustrates what I mean.
I would like to communicate between two coroutines. One coroutine is mainly
running a library (not my code, represented by b method below), which provides
a synchronous callback which is called whenever an event happens. (Note that the library is using asyncio, but this particular API it provides uses a synchronous callback). I would like
to get data from that callback into a second coroutine (my code, represented by
a method below).
I don't care about missing messages (X objects). callback is called
frequently, much more often than we actually need X objects. It is no problem to
just wait for the next if we miss it. And we don't always need the latest X,
there are only certain situations when we need to refresh it (get a new one),
the rest of the time we just ignore them.
Can I share a future across coroutines like this, or do I need some kind of locking around setting/accessing self.fut? I can't actually use an asyncio.Lock in callback (a synchronous function) because I would need to use await lock.acquire() or async with lock: .... As mentioned above, I can't change this API to make callback an async function.
Also, are there any better alternative approaches? I could use an unlimited asyncio.Queue and use put_nowait synchronously from
the callback function. But it seems like overkill to use a Queue for the reasons
above (I only care about the latest, I don't care about missing messages, I
ignore most of them). The queue would (should) only ever have at most 1 item.
import asyncio
from typing import Optional


class C:
    fut: Optional[asyncio.Future[X]]

    def __init__(self):
        self.fut = None

    async def a(self):
        while True:
            ...
            # Some condition comes up where we need an X.
            # We are signalling to b() that we want an X object by setting
            # self.fut to a Future (making it not None).
            fut = asyncio.get_running_loop().create_future()
            self.fut = fut
            x = await fut
            ...
            # (use the X object)

    async def b(self):
        while True:
            ...
            # Calls the synchronous callback when it has an X object ready.
            self.callback(x)

    def callback(self, x: X):
        ...
        # callback does other things as well as sending X objects over the Future.
        # But if the future is not None, it will use it to send the X object.
        fut = self.fut
        # Reset it to None because we only want one X object.
        self.fut = None
        if fut is not None:
            fut.set_result(x)


async def main():
    c = C()
    tasks = asyncio.create_task(c.a()), asyncio.create_task(c.b())
    for coro in asyncio.as_completed(tasks):
        await coro


if __name__ == '__main__':
    asyncio.run(main())
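For comparison, here is a minimal sketch of the queue-based alternative mentioned above (class and method names are illustrative, not from the original code): a one-slot asyncio.Queue where the synchronous callback uses put_nowait and simply drops the message when the previous one has not been consumed yet.
import asyncio


class CQueue:
    def __init__(self):
        # One slot: we only ever care about the latest X object.
        self.queue = asyncio.Queue(maxsize=1)

    async def a(self):
        while True:
            ...
            # When we need a fresh X, discard anything stale and wait for the next one.
            while not self.queue.empty():
                self.queue.get_nowait()
            x = await self.queue.get()
            ...
            # (use the X object)

    def callback(self, x):
        # Synchronous producer side: if the previous item has not been
        # consumed yet, just drop this one.
        try:
            self.queue.put_nowait(x)
        except asyncio.QueueFull:
            pass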

Create a synchronous wrapper for an async function compatible with Spyder IDE

I'm trying to create a wrapper around an asyncio coroutine that allows the user to use it as a "normal" function.
To give a bit of context, this is a function inside a package that was originally not async. For a series of reasons, I now need an async version of it. To avoid duplicating the whole code, I'm trying to create a wrapper that allows existing code (that doesn't use asyncio) to keep running without breaking backward compatibility.
To make things more complicated, the majority of the users (it's a company code) use this code inside Spyder IDE.
To sort it out, I did something like this:
import asyncio

async def an_async_subfunction(t, tag):
    print(f"I'm inside an_async_subfunction named {tag}")
    await asyncio.sleep(t)
    print(f"Leaving an_async_subfunction named {tag}")

async def an_async_function(n):
    print("I'm inside an_async_function")
    tasks = [an_async_subfunction(t, t) for t in range(n)]
    await asyncio.gather(*tasks)
    print("Leaving an_async_function")

async def main_async(n):
    # the old main function, now turned into a coroutine
    await an_async_function(n)
    return 'a result'

def main(*args):
    # the wrapper exposed to the users
    return asyncio.run(main_async(*args))

if __name__ == '__main__':
    print('Normal version')
    # The user can call the main function without bothering with asyncio
    result = main(3)

    print('Async version')
    # ...or can use the async version of main if they want
    async def a_user_defined_async_function():
        return await main_async(3)

    result = asyncio.run(a_user_defined_async_function())
This works as expected, allowing a basic user to call main without worrying that it is a coroutine, while a user who wants to use main inside a custom async function can use main_async instead.
However, if you try to run this code in Spyder, you get the error:
RuntimeError: asyncio.run() cannot be called from a running event loop
This is caused by the fact that Spyder has its own event loop running as explained here.
I tried to fix it by doing something like this:
def main(*args):
    if asyncio.get_event_loop().is_running():
        return asyncio.create_task(main_async(*args)).result()
    else:
        return asyncio.run(main_async(*args))
This is now "Spyder-friendly" an it works inside Spyder without problems. The problem is that .result() is called before the Task created by asyncio.create_task is finished and an InvalidStateError exception is returned.
I can't put an await in front of create_task as main is not a coroutine, and I can't make main a coroutine, otherwise the whole thing would have been pointless.
Is there a solution to this mess?
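One possible way out, sketched below under the assumption that blocking the calling thread is acceptable: when a loop is already running in the current thread, run the coroutine on a fresh event loop in a worker thread and wait for its result.
import asyncio
import concurrent.futures

def main(*args):
    # the wrapper exposed to the users
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop running in this thread: asyncio.run is safe to use.
        return asyncio.run(main_async(*args))
    # A loop is already running (e.g. inside Spyder): run the coroutine
    # on its own loop in a worker thread and block until it is done.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, main_async(*args)).result()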

Using an asynchronous function in __del__

I'd like to define what essentially is an asynchronous __del__ that closes a resource. Here's an example.
import asyncio

class Async:
    async def close(self):
        print('closing')
        return self

    def __del__(self):
        print('destructing')
        asyncio.ensure_future(self.close())

async def amain():
    Async()

if __name__ == '__main__':
    asyncio.run(amain())
This works, printing destructing and closing as expected. However, if the resource is defined outside an asynchronous function, __del__ is called, but closing is never performed.
def main():
    Async()
No warning is raised here, but the prints reveal that closing was not done. The warning is issued if an asynchronous function has been run but an instance is created outside of it.
def main2():
    Async()
    asyncio.run(amain())
RuntimeWarning: coroutine 'Async.close' was never awaited
This has been the subject in 1 and 2, but neither quite had what I was looking for, or maybe I didn't know how to look. Particularly the first question was about deleting a resource, and its answer suggested using asyncio.ensure_future, which was tested above. Python documentation suggests using the newer asyncio.create_task, but it straight up raises an error in the non-async case, there being no current loop. My final, desperate attempt was to use asyncio.run, which worked for the non-async case, but not for the asynchronous one, as calling run is prohibited in a thread that already has a running loop. Additionally, the documentation states that it should only be called once in a program.
I'm still new to async things. How could this be achieved?
A word on the use case, since asynchronous context managers were mentioned as the preferred alternative in comments. I agree, using them for short-term resource management would be ideal. However, my use case is different for two reasons.
1. Users of the class are not necessarily aware of the underlying resources. It is better user experience to hide closing the resource from a user who doesn't fiddle with the resource itself.
2. The class needs to be instantiated (or for it to be possible to instantiate it) in a synchronous context, and it is often created just once. For example, in a web server context the class would be instantiated in the global scope, after which its async functions would be used in the endpoint definitions.
For example:
asc = Async()

@server.route('/', 'GET')
async def root():
    return await asc.do_something(), 200
I'm open to other suggestions of implementing such a feature, but at this point even my curiosity for the possibility that this can be done is enough for me to want an answer to this specific question, not just the general problem.
The only thing that comes to mind is to run the cleanup after the server has shut down. It'll look something like this:
asc = Async()

try:
    asyncio.run(run_server())  # You already do it now somewhere
finally:
    asyncio.run(asc.close())
Since asyncio.run creates a new event loop each time, you may want to go even deeper and reuse the same event loop:
loop = asyncio.get_event_loop()
asc = Async()

try:
    loop.run_until_complete(run_server())
finally:
    loop.run_until_complete(asc.close())
It's absolutely ok to call run_until_complete multiple times as long as you know what you're doing.
Full example with your snippet:
import asyncio

class Async:
    async def close(self):
        print('closing')
        return self

    async def cleanup(self):
        print('destructing')
        await self.close()

loop = asyncio.get_event_loop()
asc = Async()

async def amain():
    await asyncio.sleep(1)  # Do something

if __name__ == '__main__':
    try:
        loop.run_until_complete(amain())
    finally:
        loop.run_until_complete(asc.cleanup())
        loop.close()

Python 3.6 async await without asyncio: how to write own simplest event loop?

I'd just like to understand the async/await syntax, so I am looking for a 'hello world' app that does not use asyncio at all.
So how do I create the simplest event loop using only Python syntax itself? The simplest code (taken from Start async function without importing the asyncio package; the code there goes well beyond hello world, which is why I am asking) looks like this:
async def cr():
    while True:
        print(1)

cr().send(None)
It prints 1 infinitely, not so good.
So the first question is: how do I yield from the coroutine back to the main flow? The yield keyword turns the coroutine into an async generator, which is not what we expected.
I would also appreciate a simple example application, like this:
i.e. we have a coroutine which prints 1, then yields to the event loop, then prints 2 and exits returning 3, plus a simple event loop which pushes the coroutine until it returns and consumes the result.
How about this?
import types

@types.coroutine
def simple_coroutine():
    print(1)
    yield
    print(2)
    return 3

future = simple_coroutine()
while True:
    try:
        future.send(None)
    except StopIteration as returned:
        print('It has returned', returned.value)
        break
I think your biggest problem is that you're mixing concepts. An async function is not the same as a coroutine; it is more appropriate to think of it as a way of combining coroutines, in the same way that ordinary def functions are a way of combining statements into functions. Yes, Python is a highly reflective language, so def is also a statement, and what you get from your async function is also a coroutine, but you need to have something at the bottom, something to start with. (At the bottom, yielding is just yielding. At any intermediate level, it is awaiting of something else, of course.) That something is given to you through the types.coroutine decorator in the standard library.
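To make the "something at the bottom" point concrete, here is a small sketch (illustrative names, not from the question): the types.coroutine-decorated generator is the only place a bare yield appears, everything above it only awaits, and the caller drives the whole chain with send():
import types

@types.coroutine
def bottom():
    # At the bottom, a plain yield is what actually suspends execution.
    value = yield 'suspended'
    return value

async def middle():
    # async def code can only await; it relies on bottom() to yield control.
    result = await bottom()
    return result * 2

coro = middle()
print(coro.send(None))      # prints 'suspended': control came back to the caller
try:
    coro.send(21)           # resume bottom() with a value
except StopIteration as finished:
    print('It has returned', finished.value)  # It has returned 42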
If you have any more questions, feel free to ask.

Python websockets: how to override sync method WebSocketCommonProtocol.connection_made() with async call?

Currently WebSocketCommonProtocol.connection_made() is defined as a sync call. If I want to override it and add some async call, there seems to be no way to do it.
Example: I use aioredis to talk to Redis, but I cannot use aioredis when overriding WebSocketCommonProtocol.connection_made(). The only workaround I can think of is using the sync library redis-py in this function and aioredis everywhere else. It works, but it is very ugly.
asgiref.sync.async_to_sync() doesn't work here: I already have an event loop running. This commit will prevent me from using it: https://github.com/django/asgiref/commit/9d42cb57129bd8d94a529a5c95dcf9f5d35a9beb
WebSocketCommonProtocol.connection_made() is inherited from asyncio.StreamReaderProtocol.connection_made(), so this is a generic question even for the Python standard library. I don't know whether anyone knows a solution already.
Please give me some suggestions to solve this issue.
I worked out one solution: https://pypi.org/project/syncasync/
It runs the async code in a new thread, so it will not have a race condition with the sync code. This approach is very slow: the main thread waits for the sync code, and the sync code waits for the new thread to finish.
Compared to the other solution (using both sync and async libraries in your program), this one allows you to use only the async library.
Give it a try and let me know any bugs, or suggest a better approach.
Example:
#!/usr/bin/env python
import asyncio
from syncasync import async_to_sync

class Base:
    def method(self):
        pass

    def run(self):
        self.method()

class Derived(Base):
    @async_to_sync
    async def method(self):
        await asyncio.sleep(0)

async def test():
    d = Derived()
    d.run()

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(test())
