I wrote a test program to try out create_task(), where the program needs to wait until the created task completes.
I tried using loop.run_until_complete() to wait for the task to finish, but it raises an error with a traceback:
/Users/jason/.virtualenvs/xxx/bin/python3.5 /Users/jason/asyncio/examples/hello_coroutine.py
Test
Hello World, is a task
Traceback (most recent call last):
  File "/Users/jason/asyncio/examples/hello_coroutine.py", line 42, in <module>
    loop.run_until_complete(test.greet_every_two_seconds())
  File "/Users/jason/asyncio/asyncio/base_events.py", line 373, in run_until_complete
    return future.result()
  File "/Users/jason/asyncio/asyncio/futures.py", line 274, in result
    raise self._exception
  File "/Users/jason/asyncio/asyncio/tasks.py", line 240, in _step
    result = coro.send(None)
  File "/Users/jason/asyncio/examples/hello_coroutine.py", line 33, in greet_every_two_seconds
    self.a()
  File "/Users/jason/asyncio/examples/hello_coroutine.py", line 26, in a
    t = loop.run_until_complete(self.greet_every_one_seconds(self.db_presell))
  File "/Users/jason/asyncio/asyncio/base_events.py", line 361, in run_until_complete
    self.run_forever()
  File "/Users/jason/asyncio/asyncio/base_events.py", line 326, in run_forever
    raise RuntimeError('Event loop is running.')
RuntimeError: Event loop is running.
The test code is as follows. The function a() must not be a coroutine. How can I wait for the task to complete?
import asyncio


class Test(object):

    def __init__(self):
        print(self.__class__.__name__)

    @asyncio.coroutine
    def greet_every_one_seconds(self, value):
        print('Hello World, one second.')
        fut = asyncio.sleep(1, result=value)
        a = yield from fut
        print(a)

    def a(self):
        loop = asyncio.get_event_loop()
        task = loop.run_until_complete(self.greet_every_one_seconds(4))
        # How can I wait for the task until complete?

    @asyncio.coroutine
    def greet_every_two_seconds(self):
        while True:
            self.a()
            print('Hello World, two seconds.')
            yield from asyncio.sleep(2)


if __name__ == '__main__':
    test = Test()
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(test.greet_every_two_seconds())
    finally:
        loop.close()
How can I wait for the task until complete?
Use loop.run_until_complete(task) in an ordinary function, or await task in a coroutine.
Here's a complete code example that demonstrates both cases:
#!/usr/bin/env python3
import asyncio


async def some_coroutine(loop):
    task = loop.create_task(asyncio.sleep(1))  # just some task
    await task  # wait for it (inside a coroutine)


loop = asyncio.get_event_loop()
task = loop.create_task(asyncio.sleep(1))  # just some task
loop.run_until_complete(task)  # wait for it (outside of a coroutine)
loop.run_until_complete(some_coroutine(loop))
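Applied to the question's code, one option (a sketch in the same Python 3.5 style; the use of asyncio.ensure_future is my addition, not the only way) is to let a() hand back an awaitable and have the calling coroutine do the waiting:
import asyncio


class Test(object):

    @asyncio.coroutine
    def greet_every_one_seconds(self, value):
        print('Hello World, one second.')
        result = yield from asyncio.sleep(1, result=value)
        print(result)

    def a(self):
        # An ordinary function can't wait, but it can return an awaitable.
        return asyncio.ensure_future(self.greet_every_one_seconds(4))

    @asyncio.coroutine
    def greet_every_two_seconds(self):
        while True:
            yield from self.a()  # the coroutine does the actual waiting
            print('Hello World, two seconds.')
            yield from asyncio.sleep(2)


if __name__ == '__main__':
    test = Test()
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(test.greet_every_two_seconds())
    finally:
        loop.close()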
I am trying to make some API calls to Firebase using the google.cloud.firestore.AsyncClient module. Fire2 is just a wrapper for this object.
The traceback suggests it's a problem with asyncio, but if I substitute a dummy await it runs just fine. What is the problem, and why does it occur?
Sample code:
import asyncio

import Fire2


class Test:

    def __init__(self):
        doc = asyncio.run(self.wait())

    async def wait(self):
        doc = await Fire2("test1").get(g1)  # gives the error
        # doc = await asyncio.sleep(1)      # runs without error
        return doc

    def test(self):
        x = Test2(p1)


class Test2:

    def __init__(self, p):
        doc = asyncio.run(self.run(p))
        print(doc.to_dict())

    async def run(self, p):
        doc = await Fire2('test2').get(p)
        return doc


p1 = 'foo'
g1 = 'bar'

h = Test()
h.test()
Traceback:
Traceback (most recent call last):
  File "<project path>\scratch.py", line 137, in <module>
    h.test()
  File "<project path>\scratch.py", line 123, in test
    x = Test2(p1)
  File "<project path>\scratch.py", line 127, in __init__
    doc = asyncio.run(self.run(p))
  File "<user AppData>\Local\Programs\Python\Python39\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "<user AppData>\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 642, in run_until_complete
    return future.result()
  File "<project path>\scratch.py", line 131, in run
    doc = await Fire2('test2').get(p)
  File "<project path>\<Fire2 file>", line 422, in get
    res = await self._collection.document(doc_id).get()
  File "<project path>\venv\lib\site-packages\google\cloud\firestore_v1\async_document.py", line 364, in get
    response_iter = await self._client._firestore_api.batch_get_documents(
  File "<project path>\venv\lib\site-packages\google\api_core\grpc_helpers_async.py", line 171, in error_remapped_callable
    call = callable_(*args, **kwargs)
  File "<project path>\venv\lib\site-packages\grpc\aio\_channel.py", line 165, in __call__
    call = UnaryStreamCall(request, deadline, metadata, credentials,
  File "<project path>\venv\lib\site-packages\grpc\aio\_call.py", line 553, in __init__
    self._send_unary_request_task = loop.create_task(
  File "<user AppData>\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 431, in create_task
    self._check_closed()
  File "<user AppData>\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 510, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
sys:1: RuntimeWarning: coroutine 'UnaryStreamCall._send_unary_request' was never awaited

Process finished with exit code 1
You are making two separate calls to asyncio.run. We can see from the stack trace that the error is coming from inside the grpc library. I suspect that the same grpc connection is being used both times, and that during its initialization it integrates itself with the async event loop in order to maintain the connection in the background between calls.
The problem is that every call to asyncio.run creates a new event loop and closes that loop when it returns. So by the time you call asyncio.run again and the grpc layer tries to do its work, it discovers that the original event loop has been closed, and it reports this error.
To fix this, make a single call to asyncio.run, typically on a "main" function or similar, which then calls the async functions that perform the actual work.
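For instance, here is a minimal sketch of that restructuring. It keeps the question's hypothetical Fire2 wrapper and document names, which are assumptions taken from the sample code:
import asyncio

import Fire2  # hypothetical wrapper from the question


async def main():
    # One event loop for the whole program; both Firestore calls share it,
    # so the grpc machinery set up on that loop stays valid between calls.
    doc1 = await Fire2("test1").get("bar")
    doc2 = await Fire2("test2").get("foo")
    print(doc2.to_dict())


asyncio.run(main())  # the single asyncio.run call in the program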
If the problem persists after that, there may be an issue inside the Fire2 class itself.
For more background, the Python documentation on coroutines and tasks outlines the high-level asyncio APIs with various examples.
If you hit this error under pytest, another option is a session-scoped event loop fixture in conftest.py, so every test shares one loop:
import asyncio

import pytest


@pytest.fixture(scope="session")
def event_loop():
    loop = asyncio.get_event_loop()
    yield loop
    loop.close()
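With that fixture in place, async tests share one loop for the whole session. A sketch of a test that relies on it, assuming the pytest-asyncio plugin (which consumes the event_loop fixture):
import asyncio

import pytest


@pytest.mark.asyncio
async def test_shares_session_loop():
    # This test and every other async test now run on the same loop,
    # so loop-bound resources survive across tests.
    await asyncio.sleep(0)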
Here's a simplified scenario for the problem I'm having.
import asyncio


async def c():
    print("yes")


def b():
    asyncio.run(c())


async def a():
    b()


asyncio.run(a())
I'm expecting the program to print "yes". However, I get this instead:
Traceback (most recent call last):
  File [redacted], line 12, in <module>
    asyncio.run(a())
  File "/usr/lib64/python3.7/asyncio/runners.py", line 43, in run
    return loop.run_until_complete(main)
  File "/usr/lib64/python3.7/asyncio/base_events.py", line 579, in run_until_complete
    return future.result()
  File [redacted], line 10, in a
    b()
  File [redacted], line 7, in b
    asyncio.run(c())
  File "/usr/lib64/python3.7/asyncio/runners.py", line 34, in run
    "asyncio.run() cannot be called from a running event loop")
RuntimeError: asyncio.run() cannot be called from a running event loop
sys:1: RuntimeWarning: coroutine 'c' was never awaited
What do you think the solution to this problem would be?
(Also, can this be done using pure asyncio?)
Don't call asyncio.run() from inside a running event loop. Instead, once the loop is running, create a task and await it. A sync function can create a new task but can't await it; it can, however, return the task as an awaitable, and the async caller can then await it.
import asyncio


async def c():
    print("yes")


def b():
    return asyncio.create_task(c())


async def a():
    task = b()
    await task


asyncio.run(a())
Output:
yes
How do I terminate loop.run_in_executor with a ProcessPoolExecutor gracefully? Shortly after starting the program, SIGINT (Ctrl+C) is sent.
import asyncio
import concurrent.futures
from time import sleep


def blocking_task():
    sleep(3)


async def main():
    exe = concurrent.futures.ProcessPoolExecutor(max_workers=4)
    loop = asyncio.get_event_loop()
    tasks = [loop.run_in_executor(exe, blocking_task) for i in range(3)]
    await asyncio.gather(*tasks)


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        print('ctrl + c')
With max_workers equal to or less than the number of tasks, everything works. But if max_workers is greater, the output of the above code is as follows:
Process ForkProcess-4:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.8/concurrent/futures/process.py", line 233, in _process_worker
    call_item = call_queue.get(block=True)
  File "/usr/lib/python3.8/multiprocessing/queues.py", line 97, in get
    res = self._recv_bytes()
  File "/usr/lib/python3.8/multiprocessing/connection.py", line 216, in recv_bytes
    buf = self._recv_bytes(maxlength)
  File "/usr/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
    buf = self._recv(4)
  File "/usr/lib/python3.8/multiprocessing/connection.py", line 379, in _recv
    chunk = read(handle, remaining)
KeyboardInterrupt
ctrl + c
I would like to catch the KeyboardInterrupt only once and ignore or mute the other exception(s) in the process pool, but how?
Update, for extra credit:
Can you explain (the reason for) the multiple exceptions?
Does adding a signal handler work on Windows?
If not, is there a solution that works without a signal handler?
You can use the initializer parameter of ProcessPoolExecutor to install a handler for SIGINT in each process.
Update:
On Unix, when a process is created, it becomes a member of its parent's process group. If you generate the SIGINT with Ctrl+C, the signal is sent to the entire process group, which is why every worker process receives it.
import asyncio
import concurrent.futures
import os
import signal
import sys
from time import sleep


def handler(signum, frame):
    print('SIGINT for PID=', os.getpid())
    sys.exit(0)


def init():
    signal.signal(signal.SIGINT, handler)


def blocking_task():
    sleep(15)


async def main():
    exe = concurrent.futures.ProcessPoolExecutor(max_workers=5, initializer=init)
    loop = asyncio.get_event_loop()
    tasks = [loop.run_in_executor(exe, blocking_task) for i in range(2)]
    await asyncio.gather(*tasks)


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        print('ctrl + c')
Ctrl-C shortly after start:
^CSIGINT for PID= 59942
SIGINT for PID= 59943
SIGINT for PID= 59941
SIGINT for PID= 59945
SIGINT for PID= 59944
ctrl + c
This question is different from "Is there a way to use asyncio.Queue in multiple threads?"
I have two asyncio event loops running in two different threads.
Thread1 produces data for Thread2 through an asyncio.Queue().
One of the threads throws an exception: got Future <Future pending> attached to a different loop.
This makes sense, because I have a single queue that I use in different loops. How can I share a queue between two loops, in two different threads?
Example code:
import asyncio
import threading

q = asyncio.Queue()


async def producer(q):
    await asyncio.sleep(3)
    q.put(1)


def prod_work(q):
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(producer(q))


async def consumer(q):
    await asyncio.sleep(3)
    res = await q.get()


def cons_work(q):
    loop2 = asyncio.new_event_loop()
    asyncio.set_event_loop(loop2)
    loop2.run_until_complete(consumer(q))


def worker(q):
    # main thread - uses this thread's loop
    prod = threading.Thread(target=prod_work, args=(q,))
    # separate thread - uses NEW loop
    cons = threading.Thread(target=cons_work, args=(q,))
    prod.start()
    cons.start()


worker(q)
Full stack trace:
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 917, in _bootstrap_inner
    self.run()
  File "/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 865, in run
    self._target(*self._args, **self._kwargs)
  File "asyncio_examples.py", line 24, in cons_work
    loop2.run_until_complete(consumer(q))
  File "/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 568, in run_until_complete
    return future.result()
  File "asyncio_examples.py", line 18, in consumer
    res = await q.get()
  File "/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/queues.py", line 159, in get
    await getter
RuntimeError: Task <Task pending coro=<consumer() running at asyncio_examples.py:18> cb=[_run_until_complete_cb() at /usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py:150]> got Future <Future pending> attached to a different loop
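One common fix (my sketch, not from the original post): keep the queue owned by a single event loop, and submit the producer's coroutine to that loop from the other thread with asyncio.run_coroutine_threadsafe, so both coroutines touch the queue on the same loop:
import asyncio
import threading


async def producer(q):
    await asyncio.sleep(3)
    await q.put(1)


async def consumer(q):
    res = await q.get()
    print('got', res)


def prod_work(loop, q):
    # Run the producer on the loop that owns the queue, instead of
    # creating a second event loop in this thread.
    fut = asyncio.run_coroutine_threadsafe(producer(q), loop)
    fut.result()  # block this worker thread until the coroutine finishes


async def main():
    loop = asyncio.get_running_loop()
    q = asyncio.Queue()  # created and used only on this loop
    prod = threading.Thread(target=prod_work, args=(loop, q))
    prod.start()
    await consumer(q)
    prod.join()


asyncio.run(main())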
I'm attempting to use asyncio to make an asynchronous client/server setup.
For some reason I'm getting AssertionError: yield from wasn't used with future when running the client.
Searching for this error didn't turn up much.
What does this error mean and what's causing it?
#!/usr/bin/env python3
import asyncio
import pickle
import uuid

port = 9999


class ClientProtocol(asyncio.Protocol):

    def __init__(self, loop):
        self.loop = loop
        self.conn = None
        self.uuid = uuid.uuid4()
        self.other_clients = []

    def connection_made(self, transport):
        print("Connected to server")
        self.conn = transport
        m = "hello"
        self.conn.write(m)

    def data_received(self, data):
        print('Data received: {!r}'.format(data))

    def connection_lost(self, exc):
        print('The server closed the connection')
        print('Stop the event loop')
        self.loop.stop()


# note that in my use-case, main() is called continuously by an external game engine
client_init = False


def main():
    # use a global here only for the purpose of providing example code
    # runnable outside of aforementioned game engine
    global client_init
    if client_init != True:
        loop = asyncio.get_event_loop()
        coro = loop.create_connection(lambda: ClientProtocol(loop), '127.0.0.1', port)
        task = asyncio.Task(coro)
        transport, protocol = loop.run_until_complete(coro)
        client_init = True

    # to avoid blocking the execution of main (and of game engine calling it),
    # only run one iteration of the event loop
    loop.stop()
    loop.run_forever()

    if transport:
        transport.write("some data")


if __name__ == "__main__":
    main()
Traceback:
Traceback (most recent call last):
  File "TCPclient.py", line 57, in <module>
    main()
  File "TCPclient.py", line 45, in main
    transport, protocol = loop.run_until_complete(coro)
  File "/usr/lib/python3.5/asyncio/base_events.py", line 337, in run_until_complete
    return future.result()
  File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
  File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
    result = coro.send(None)
  File "/usr/lib/python3.5/asyncio/base_events.py", line 599, in create_connection
    yield from tasks.wait(fs, loop=self)
  File "/usr/lib/python3.5/asyncio/tasks.py", line 341, in wait
    return (yield from _wait(fs, timeout, return_when, loop))
  File "/usr/lib/python3.5/asyncio/tasks.py", line 424, in _wait
    yield from waiter
  File "/usr/lib/python3.5/asyncio/futures.py", line 359, in __iter__
    assert self.done(), "yield from wasn't used with future"
AssertionError: yield from wasn't used with future
The problem seems to be that you create a task from your coroutine, but then pass the coroutine to run_until_complete instead:
coro = loop.create_connection(lambda: ClientProtocol(loop), '127.0.0.1', port)
task = asyncio.Task(coro)
transport, protocol = loop.run_until_complete(coro)
Either pass the task:
coro = loop.create_connection(lambda: ClientProtocol(loop), '127.0.0.1', port)
task = asyncio.Task(coro)
transport, protocol = loop.run_until_complete(task)
Or don't create the task at all, and just pass the coroutine; run_until_complete will create a task for you:
coro = loop.create_connection(lambda: ClientProtocol(loop), '127.0.0.1', port)
transport, protocol = loop.run_until_complete(coro)
In addition, you need to ensure the strings you are writing are byte strings; string literals in Python 3 default to unicode. You can either encode them, or write byte strings in the first place:
transport.write("some data".encode('utf-8'))
transport.write(b"some data")
EDIT: It's not clear to me why this is an issue; however, the source for run_until_complete has this to say:
WARNING: It would be disastrous to call run_until_complete()
with the same coroutine twice -- it would wrap it in two
different Tasks and that can't be good.
I suppose creating a task and then also passing the coroutine (which causes a second task to be created) has the same effect.