Can I pass a list to asyncio.gather? - python

I am trying to start a bunch (one or more) of aioserial instances using a for loop and asyncio.gather, without success.
# -*- coding: utf-8 -*-
import asyncio
import aioserial
from protocol import contactid, ademco

def main():
    # Asyncio event loop
    loop = asyncio.get_event_loop()
    centrals = utils.auth()
    # List of coroutines to be executed in parallel
    lst_coro = []
    # Unpack centrals
    for central in centrals:
        protocol = central['protocol']
        id = central['id']
        name = central['name']
        port = central['port']
        logger = log.create_logging_system(name)
        # Protocols
        if protocol == 'contactid':
            central = contactid.process
        elif protocol == 'ademco':
            central = ademco.process
        else:
            print(f'Unknown protocol: {central["protocol"]}')
        # Serial (port ex: ttyUSB0/2400)
        dev = ''.join(['/dev/', *filter(str.isalnum, port.split('/')[0])])
        bps = int(port.split('/')[-1])
        aioserial_instance = aioserial.AioSerial(port=dev, baudrate=bps)
        lst_coro.append(central(aioserial_instance, id, name, logger))
    asyncio.gather(*lst_coro, loop=loop)

if __name__ == '__main__':
    asyncio.run(main())
I based this on the asyncio documentation example and some answers from Stack Overflow, but when I try to run it, I just get errors:
Traceback (most recent call last):
  File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/Serial/serial.py", line 39, in <module>
    asyncio.run(main())
  File "/usr/lib/python3.7/asyncio/runners.py", line 37, in run
    raise ValueError("a coroutine was expected, got {!r}".format(main))
ValueError: a coroutine was expected, got None
I also tried to use a set instead of a list, but nothing really changed. Is there a better way to start a bunch of parallel coroutines when you need to build them in a loop? Thanks for the attention.

Your problem isn't with how you call gather. It's with how you define main. The clue is in the error message
ValueError: a coroutine was expected, got None
and the last line of code in the traceback before the exception is raised:
asyncio.run(main())
asyncio.run wants an awaitable. You pass it the return value of main, but main doesn't return anything. Rather than adding a return value, though, the fix is to change how you define main.
async def main():
This will turn main from a regular function to a coroutine that can be awaited.
Edit
Once you've done this, you'll notice that gather doesn't actually seem to do anything. You'll need to await it in order for main to wait for everything in lst_coro to complete.
await asyncio.gather(*lst_coro)
Unrelated to your error: you shouldn't need to use loop inside main at all. gather's loop argument was deprecated in 3.8 and will be removed in 3.10. Unless you're using an older version of Python, you can remove it and your call to asyncio.get_event_loop.
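Putting both fixes together, a minimal self-contained sketch of the corrected structure might look like the following. (The `process` coroutine here is a placeholder standing in for the aioserial protocol handlers from the question, since the serial hardware isn't available in an example.)

```python
import asyncio

async def process(name: str, delay: float) -> str:
    # Placeholder for a protocol handler such as contactid.process
    await asyncio.sleep(delay)
    return name

async def main():
    # Build the list of coroutines in a loop, then await them all at once
    lst_coro = [process(f"central-{i}", 0.01) for i in range(3)]
    # gather accepts the unpacked list and preserves the input order
    return await asyncio.gather(*lst_coro)

if __name__ == "__main__":
    print(asyncio.run(main()))
```

Because `main` is now a coroutine function, `asyncio.run(main())` receives an awaitable rather than `None`, and `await asyncio.gather(...)` actually drives the coroutines to completion.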

Related

How to find the cause of CancelledError in asyncio?

I have a big project which depends on some third-party libraries, and sometimes its execution gets interrupted by a CancelledError.
To demonstrate the issue, let's look at a small example:
import asyncio

async def main():
    task = asyncio.create_task(foo())
    # Cancel the task in 1 second.
    loop = asyncio.get_event_loop()
    loop.call_later(1.0, lambda: task.cancel())
    await task

async def foo():
    await asyncio.sleep(999)

if __name__ == '__main__':
    asyncio.run(main())
Traceback:
Traceback (most recent call last):
  File "/Users/ss/Library/Application Support/JetBrains/PyCharm2021.2/scratches/async.py", line 19, in <module>
    asyncio.run(main())
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/runners.py", line 43, in run
    return loop.run_until_complete(main)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
    return future.result()
concurrent.futures._base.CancelledError
As you can see, there's no information about the place the CancelledError originates from. How do I find out the exact cause of it?
One approach that I came up with is to place a lot of try/except blocks which would catch the CancelledError and narrow down the place where it comes from. But that's quite tedious.
I've solved it by applying a decorator to every async function in the project. The decorator's job is simple: log a message when a CancelledError is raised from the function. This way we can see which functions (and, more importantly, in which order) get cancelled.
Here's the decorator code:
def log_cancellation(f):
    async def wrapper(*args, **kwargs):
        try:
            return await f(*args, **kwargs)
        except asyncio.CancelledError:
            print(f"Cancelled {f}")
            raise
    return wrapper
In order to add this decorator everywhere I used regex. Find: (.*)(async def). Replace with: $1@log_cancellation\n$1$2.
Also to avoid importing log_cancellation in every file I modified the builtins:
builtins.log_cancellation = log_cancellation
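As an illustration (not from the original answer), the decorator can be exercised against the earlier foo/main example; the cancellation message pinpoints exactly which coroutine was cancelled:

```python
import asyncio

def log_cancellation(f):
    async def wrapper(*args, **kwargs):
        try:
            return await f(*args, **kwargs)
        except asyncio.CancelledError:
            # Log which wrapped function was cancelled, then propagate
            print(f"Cancelled {f}")
            raise
    return wrapper

@log_cancellation
async def foo():
    await asyncio.sleep(999)

async def main():
    task = asyncio.create_task(foo())
    await asyncio.sleep(0.01)
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass  # the decorator has already logged the cancellation

asyncio.run(main())
```

Running this prints a line naming `foo`, so nested cancellations show up in the order the stack unwinds.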
The rich package has helped us to identify the cause of CancelledError, without much code change required.
from rich.console import Console

console = Console()

if __name__ == "__main__":
    try:
        asyncio.run(main())  # replace main() with your entrypoint
    except BaseException as e:
        console.print_exception(show_locals=True)

Python asyncio restart coroutine

I am a new Python programmer who is trying to write a 'bot' to trade on Betfair for myself. (Ambitious!!!!)
My problem is this: I have the basics of an asyncio event loop running, but I have noticed that if one of the coroutines fails in its process (for instance, an API call or a MongoDB read fails), the asyncio event loop just continues running and ignores the one failed coroutine.
My question is: how could I either restart that one coroutine automatically, or handle the error so as to stop the complete asyncio loop? At the moment everything runs seemingly oblivious to the fact that something is not right and one portion of it has failed. In my case the loop never returned to the rungetcompetitionids function after a database read was unsuccessful, even though it is in a while True loop.
The user GUI is not yet functional and is only there to try out asyncio.
thanks
Clive
import sys
import datetime
from login import sessiontoken as gst
from mongoenginesetups.setupmongo import global_init as initdatabase
from asyncgetcompetitionids import competition_id_pass as gci
from asyncgetcompetitionids import create_comp_id_table_list as ccid
import asyncio
import PySimpleGUI as sg

sg.change_look_and_feel('DarkAmber')

layout = [
    [sg.Text('Password'), sg.InputText(password_char='*', key='password')],
    [sg.Text('', key='status')],
    [sg.Button('Submit'), sg.Button('Cancel')]
]

window = sg.Window('Betfair', layout)

def initialisethedatabase():
    initdatabase('xxxx', 'xxxx', xxxx, 'themongo1', True)

async def runsessiontoken():
    nextlogontime = datetime.datetime.now()
    while True:
        returned_login_time = gst(nextlogontime)
        nextlogontime = returned_login_time
        await asyncio.sleep(15)

async def rungetcompetitionids(compid_from_compid_table_list):
    nextcompidtime = datetime.datetime.now()
    while True:
        returned_time, returned_list = gci(nextcompidtime, compid_from_compid_table_list)
        nextcompidtime = returned_time
        compid_from_compid_table_list = returned_list
        await asyncio.sleep(10)

async def userinterface():
    while True:
        event, value = window.read(timeout=1)
        if event in (None, 'Cancel'):
            sys.exit()
        if event != "__TIMEOUT__":
            print(f"{event} {value}")
        await asyncio.sleep(0.0001)

async def wait_list():
    await asyncio.wait([runsessiontoken(),
                        rungetcompetitionids(compid_from_compid_table_list),
                        userinterface()
                        ])

initialisethedatabase()
compid_from_compid_table_list = ccid()
print(compid_from_compid_table_list)
nextcompidtime = datetime.datetime.now()
print(nextcompidtime)

loop = asyncio.get_event_loop()
loop.run_until_complete(wait_list())
loop.close()
A simple solution would be to use a wrapper function (or "supervisor") that catches Exception and then just blindly retries the function. More elegant solutions would include printing out the exception and stack trace for diagnostic purposes, and querying the application state to see if it makes sense to try to continue. For instance, if Betfair tells you your account is not authorised, then continuing makes no sense. But if it's a general network error, then retrying immediately might be worthwhile. You might also want to stop retrying if the supervisor notices it has restarted quite a lot in a short space of time.
e.g.
import asyncio
import traceback
import functools
from collections import deque
from time import monotonic

MAX_INTERVAL = 30
RETRY_HISTORY = 3
# That is, stop after the 3rd failure in a 30 second moving window

def supervise(func, name=None, retry_history=RETRY_HISTORY, max_interval=MAX_INTERVAL):
    """Simple wrapper function that automatically tries to name tasks"""
    if name is None:
        if hasattr(func, '__name__'):  # raw func
            name = func.__name__
        elif hasattr(func, 'func'):  # partial
            name = func.func.__name__
    return asyncio.create_task(supervisor(func, retry_history, max_interval), name=name)

async def supervisor(func, retry_history=RETRY_HISTORY, max_interval=MAX_INTERVAL):
    """Takes a no-args function that creates a coroutine, and repeatedly tries
    to run it. It stops if it thinks the coroutine is failing too often or
    too fast.
    """
    start_times = deque([float('-inf')], maxlen=retry_history)
    while True:
        start_times.append(monotonic())
        try:
            return await func()
        except Exception:
            if min(start_times) > monotonic() - max_interval:
                print(
                    f'Failure in task {asyncio.current_task().get_name()!r}.'
                    ' Is it in a restart loop?'
                )
                # We tried our best; this coroutine really isn't working.
                # We should try to shut down gracefully by setting a global flag
                # that other coroutines should periodically check and stop if they
                # see that it is set. However, here we just reraise the exception.
                raise
            else:
                print(func.__name__, 'failed, will retry. Failed because:')
                traceback.print_exc()

async def a():
    await asyncio.sleep(2)
    raise ValueError

async def b(greeting):
    for i in range(15):
        print(greeting, i)
        await asyncio.sleep(0.5)

async def main_async():
    tasks = [
        supervise(a),
        # passing repeated argument to coroutine (or use lambda)
        supervise(functools.partial(b, 'hello'))
    ]
    await asyncio.wait(
        tasks,
        # Only stop when all coroutines have completed
        # -- this allows for a graceful shutdown
        # Alternatively use FIRST_EXCEPTION to stop immediately
        return_when=asyncio.ALL_COMPLETED,
    )
    return tasks

def main():
    # We run outside of the event loop, so we can carry out a post-mortem
    # without needing the event loop to be running.
    done = asyncio.run(main_async())
    for task in done:
        if task.cancelled():
            print(task, 'was cancelled')
        elif task.exception():
            print(task, 'failed with:')
            # We use a try/except here to reconstruct the traceback for logging purposes
            try:
                task.result()
            except:
                # We can use a bare except as we are not trying to block
                # the exception -- just record all that may have happened.
                traceback.print_exc()

main()
And this will result in output like:
hello 0
hello 1
hello 2
hello 3
a failed, will retry. Failed because:
Traceback (most recent call last):
  File "C:\Users\User\Documents\python\src\main.py", line 30, in supervisor
    return await func()
  File "C:\Users\User\Documents\python\src\main.py", line 49, in a
    raise ValueError
ValueError
hello 4
hello 5
hello 6
hello 7
a failed, will retry. Failed because:
Traceback (most recent call last):
  File "C:\Users\User\Documents\python\src\main.py", line 30, in supervisor
    return await func()
  File "C:\Users\User\Documents\python\src\main.py", line 49, in a
    raise ValueError
ValueError
hello 8
hello 9
hello 10
hello 11
Failure in task 'a'. Is it in a restart loop?
hello 12
hello 13
hello 14
exception=ValueError()> failed with:
Traceback (most recent call last):
  File "C:\Users\User\Documents\python\src\main.py", line 84, in main
    task.result()
  File "C:\Users\User\Documents\python\src\main.py", line 30, in supervisor
    return await func()
  File "C:\Users\User\Documents\python\src\main.py", line 49, in a
    raise ValueError
ValueError

"Asyncio Event Loop is Closed" when getting loop

When trying to run the asyncio hello world code example given in the docs:
import asyncio

async def hello_world():
    print("Hello World!")

loop = asyncio.get_event_loop()
# Blocking call which returns when the hello_world() coroutine is done
loop.run_until_complete(hello_world())
loop.close()
I get the error:
RuntimeError: Event loop is closed
I am using python 3.5.3.
On Windows this seems to be a problem with the event loop policy; use this snippet to work around it:
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
asyncio.run(main())
You have already called loop.close() before you ran that sample piece of code, on the global event loop:
>>> import asyncio
>>> asyncio.get_event_loop().close()
>>> asyncio.get_event_loop().is_closed()
True
>>> asyncio.get_event_loop().run_until_complete(asyncio.sleep(1))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/.../lib/python3.6/asyncio/base_events.py", line 443, in run_until_complete
    self._check_closed()
  File "/.../lib/python3.6/asyncio/base_events.py", line 357, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
You need to create a new loop:
loop = asyncio.new_event_loop()
You can set that as the new global loop with:
asyncio.set_event_loop(asyncio.new_event_loop())
and then just use asyncio.get_event_loop() again.
Alternatively, just restart your Python interpreter, the first time you try to get the global event loop you get a fresh new one, unclosed.
As of Python 3.7, the process of creating, managing, and closing the loop (as well as a few other resources) is handled for you when you use asyncio.run(). It should be used instead of loop.run_until_complete(), and there is no longer any need to first get or set the loop.
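For example, the hello-world snippet above reduces to:

```python
import asyncio

async def hello_world():
    print("Hello World!")

# asyncio.run() creates a fresh event loop, runs the coroutine
# to completion, and closes the loop afterwards -- so a previously
# closed global loop can no longer cause "Event loop is closed".
asyncio.run(hello_world())
```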
...and just in case:
import platform

if platform.system() == 'Windows':
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
If you ever deploy your code in the cloud, it might save you some painful debugging.

IOerror when using .start() on imported multiprocessing processor subclass

I'm having a problem with python 2.7 multiprocessing (64 bit windows). Suppose I have a file pathfinder.py with the code:
import multiprocessing as mp

class MWE(mp.Process):
    def __init__(self, n):
        mp.Process.__init__(self)
        self.daemon = True
        self.list = []
        for i in range(n):
            self.list.append(i)

    def run(self):
        print "I'm running!"

if __name__ == '__main__':
    n = 10000000
    mwe = MWE(n)
    mwe.start()
This code executes fine for arbitrarily large values of n. However if I then import and run a class instance in another file
from pathfinder import MWE
mwe = MWE(10000)
mwe.start()
I get the following traceback if n >= ~ 10000:
Traceback (most recent call last):
  File <filepath>, in <module>
    mwe.start()
  File "C:\Python27\lib\multiprocessing\process.py", line 130, in start
    self._popen = Popen(self)
  File "C:\Python27\lib\multiprocessing\forking.py", line 280, in __init__
    to_child.close()
IOError: [Errno 22] Invalid argument
I thought this might be some sort of race condition bug, but using time.sleep to delay mwe.start() doesn't appear to affect this behavior. Does anyone know why this is happening, or how to get around it?
The problem is with how you use multiprocessing on Windows. When importing a module that defines a Process subclass, e.g.:
from pathfinder import MWE
you must encapsulate the running code inside if __name__ == '__main__': block. Therefore, change your client code to read:
from pathfinder import MWE

if __name__ == '__main__':
    mwe = MWE(10000)
    mwe.start()
    mwe.join()
(Also, note that you want to join() your process at some point.)
Check out the Windows-specific Python restrictions doc https://docs.python.org/2/library/multiprocessing.html#windows.
See https://stackoverflow.com/a/16642099/1510289 and https://stackoverflow.com/a/20222706/1510289 for similar questions.

python, COM and multithreading issue

I'm trying to take a look at IE's DOM from a thread separate from the one that dispatched IE, and for some properties I'm getting a "no such interface supported" error. I managed to reduce the problem to this script:
import threading, time
import pythoncom
from win32com.client import Dispatch, gencache

gencache.EnsureModule('{3050F1C5-98B5-11CF-BB82-00AA00BDCE0B}', 0, 4, 0)  # MSHTML

def main():
    pythoncom.CoInitializeEx(0)
    ie = Dispatch('InternetExplorer.Application')
    ie.Visible = True
    ie.Navigate('http://www.Rhodia-ecommerce.com/')
    while ie.Busy:
        time.sleep(1)

    def printframes():
        pythoncom.CoInitializeEx(0)
        document = ie.Document
        frames = document.getElementsByTagName(u'frame')
        for frame in frames:
            obj = frame.contentWindow

    thr = threading.Thread(target=printframes)
    thr.start()
    thr.join()

if __name__ == '__main__':
    thr = threading.Thread(target=main)
    thr.start()
    thr.join()
Everything is fine until the frame.contentWindow. Then bam:
Exception in thread Thread-2:
Traceback (most recent call last):
  File "C:\python22\lib\threading.py", line 414, in __bootstrap
    self.run()
  File "C:\python22\lib\threading.py", line 402, in run
    apply(self.__target, self.__args, self.__kwargs)
  File "testie.py", line 42, in printframes
    obj = frame.contentWindow
  File "C:\python22\lib\site-packages\win32com\client\__init__.py", line 455, in __getattr__
    return self._ApplyTypes_(*args)
  File "C:\python22\lib\site-packages\win32com\client\__init__.py", line 446, in _ApplyTypes_
    return self._get_good_object_(
com_error: (-2147467262, 'No such interface supported', None, None)
Any hint?
The correct answer is to marshal stuff by hand. That's not a workaround; it is what you are supposed to do here. You shouldn't have to use apartment threading, though.
You initialised as a multithreaded apartment - that tells COM that it can call your interfaces on any thread. It does not allow you to call other interfaces from any thread, nor excuse you from marshalling interfaces provided by COM. That will only work "by accident" - e.g. if the object you are calling happens to be an in-process MTA object, it won't matter.
CoMarshalInterThreadInterfaceInStream/CoGetInterfaceAndReleaseStream does the business.
The reason for this is that objects can provide their own proxies, which may or may not be free-threaded. (Or indeed provide custom marshalling). You have to marshal them to tell them they are moving between threads. If the proxy is free threaded, you may get the same pointer back.
