I am working on a project that uses the ccxt async library, which requires all resources used by a certain class to be released with an explicit call to the class's .close() coroutine. I want to exit the program with ctrl+c and await the close coroutine in the exception handler. However, it is never awaited.
The application consists of the modules harvesters, strategies, traders, broker, and main (plus config and such). The broker initiates the strategies specified for an exchange and executes them. Each strategy initiates the associated harvester, which collects the necessary data; the strategy also analyses the data and spawns a trader when there is a profitable opportunity. The main module creates a broker for each exchange and runs it. I have tried to catch the exception at each of these levels, but the close routine is never awaited. I'd prefer to catch it in the main module in order to close all exchange instances.
Harvester
async def harvest(self):
if not self.routes:
self.routes = await self.get_routes()
for route in self.routes:
self.logger.info("Harvesting route {}".format(route))
await asyncio.sleep(self.exchange.rateLimit / 1000)
yield await self.harvest_route(route)
Strategy
async def execute(self):
async for route_dct in self.harvester.harvest():
self.logger.debug("Route dictionary: {}".format(route_dct))
await self.try_route(route_dct)
Broker
async def run(self):
for strategy in self.strategies:
self.strategies[strategy] = getattr(
strategies, strategy)(self.share, self.exchange, self.currency)
while True:
try:
await self.execute_strategies()
except KeyboardInterrupt:
await safe_exit(self.exchange)
Main
async def main():
await load_exchanges()
await load_markets()
brokers = [Broker(
share,
exchanges[id]["api"],
currency,
exchanges[id]["strategies"]
) for id in exchanges]
futures = [broker.run() for broker in brokers]
for future in asyncio.as_completed(futures):
executed = await future
return executed
if __name__ == "__main__":
status = asyncio.run(main())
sys.exit(status)
I had expected the close() coroutine to be awaited, but I still get an error from the library that I must explicitly call it. Where do I catch the exception so that all exchange instances are closed properly?
Somewhere in your code there should be an entry point where the event loop is started.
Usually it is one of the functions below:
loop.run_until_complete(main())
loop.run_forever()
asyncio.run(main())
When ctrl+C happens, the KeyboardInterrupt can be caught at this line. Once it has been caught, you can run the event loop again to execute a finalizing coroutine.
This little example shows the idea:
import asyncio
async def main():
print('Started, press ctrl+C')
await asyncio.sleep(10)
async def close():
print('Finalizing...')
await asyncio.sleep(1)
if __name__ == '__main__':
loop = asyncio.get_event_loop()
try:
loop.run_until_complete(main())
except KeyboardInterrupt:
loop.run_until_complete(close())
finally:
print('Program finished')
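With asyncio.run() specifically, an alternative is to do the cleanup inside main() itself: when asyncio.run() is interrupted by ctrl+C, it cancels the main task before closing the loop, so a finally block in main() still runs inside the live event loop. A minimal sketch based on your code, reusing your exchanges mapping and ccxt's close() coroutine (the return value handling is simplified):

async def main():
    await load_exchanges()
    await load_markets()
    try:
        brokers = [Broker(
            share,
            exchanges[id]["api"],
            currency,
            exchanges[id]["strategies"]
        ) for id in exchanges]
        return await asyncio.gather(*(broker.run() for broker in brokers))
    finally:
        # Runs on normal exit and on the cancellation triggered by ctrl+C,
        # so every exchange session is closed in the same event loop.
        for id in exchanges:
            await exchanges[id]["api"].close()

if __name__ == "__main__":
    try:
        status = asyncio.run(main())
    except KeyboardInterrupt:
        status = 1
    sys.exit(status)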
Related
I want to find a way to stop a function call that runs too long.
For a regular function, I found this method:
from func_timeout import func_set_timeout
######## is ok #########
@func_set_timeout(timeout=2)
def is_ok_request():
import time
time.sleep(10)
is_ok_request()
But I can't stop the call when it happens inside an async function:
def down_file():
'''e.g. this is a third-party module'''
time.sleep(100000)
async def timeout_func():
'''down a file times out 10s to exit'''
print("start connection mysql")
down_file()
print("end connection mysql")
async def main():
try:
await asyncio.wait_for(timeout_func(),timeout=1)
except asyncio.TimeoutError:
print('timeout')
asyncio.run(main())
help
There are several problems:
Don't call time.sleep() in async programs! Always await asyncio.sleep() instead.
The timeout of asyncio.wait (link) is only the time when to stop waiting; it does not cancel anything. Use asyncio.wait_for (link) instead: it raises a TimeoutError that should be handled.
Not an error, but loop.run_until_complete() is not the recommended way to run an async program. Use asyncio.run() as the entry point; it is like run_until_complete with a cleanup afterward.
Another issue: task = my_request(). It is not a task, it is a coroutine. In asyncio, the term task has a fixed meaning (link). The wait documentation warns that it expects tasks and will not accept coroutines in future versions.
The code in its simplest form (actually, it is almost the same as an example in the linked docs):
import asyncio
async def my_request():
await asyncio.sleep(10)
async def main():
try:
await asyncio.wait_for(my_request(), timeout=1)
except asyncio.TimeoutError:
print('timeout')
asyncio.run(main())
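If the blocking call really is in third-party code you cannot change, as the question suggests, one workaround (a sketch, not part of the answer above) is to push the call onto a worker thread with asyncio.to_thread (Python 3.9+) so the event loop stays responsive. Note that wait_for then stops waiting after the timeout, but it cannot kill the background thread, which keeps running to completion:

import asyncio
import time

def down_file():
    time.sleep(100)  # stand-in for the blocking third-party call

async def main():
    try:
        # Run the blocking call in a worker thread; the event loop
        # is free to time out the await after 1 second.
        await asyncio.wait_for(asyncio.to_thread(down_file), timeout=1)
    except asyncio.TimeoutError:
        print('timeout')

asyncio.run(main())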
I am trying to do something similar to C#'s ManualResetEvent in Python.
I have attempted it, but it doesn't seem to work:
import asyncio
cond = asyncio.Condition()
async def main():
some_method()
cond.notify()
async def some_method():
print("Starting...")
await cond.acquire()
await cond.wait()
cond.release()
print("Finshed...")
main()
I want the some_method to start then wait until signaled to start again.
This code is not complete. First of all, you need to use asyncio.run() to bootstrap the event loop; this is why your code is not running at all.
Secondly, some_method() never actually starts. You need to start some_method() asynchronously using asyncio.create_task(). When you call an "async def function" (the more correct term is coroutine function) it returns a coroutine object; this object needs to be driven by the event loop, either by you awaiting it or by using the aforementioned function.
Your code should look more like this:
import asyncio

async def main():
    cond = asyncio.Condition()
    t = asyncio.create_task(some_method(cond))
    # The event loop hasn't had any time to start the task
    # until you await again. Sleeping for 0 seconds will let
    # the event loop start the task before continuing.
    await asyncio.sleep(0)
    # notify() must be called while the condition's lock is held,
    # otherwise it raises RuntimeError.
    async with cond:
        cond.notify()
    # You should never really "fire and forget" tasks,
    # the same way you never do with threading. Wait for
    # it to complete before returning:
    await t

async def some_method(cond):
    print("Starting...")
    async with cond:
        await cond.wait()
    print("Finished...")

asyncio.run(main())
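As an aside, if you want the exact semantics of a ManualResetEvent (signal once, stays signaled until reset), asyncio.Event is a closer match than asyncio.Condition. A sketch on the same example:

import asyncio

async def main():
    event = asyncio.Event()
    t = asyncio.create_task(some_method(event))
    await asyncio.sleep(0)  # let the task reach event.wait()
    event.set()  # signal; stays set until event.clear() is called
    await t

async def some_method(event):
    print("Starting...")
    await event.wait()
    print("Finished...")

asyncio.run(main())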
I am making a Discord bot using discord.py, and sometimes the script I use to run it needs to close out the bot. When I close it without using signal handlers, there are a lot of errors about a loop not closing, so I added a signal handler (using the code below). Inside it I need to call client.close() and client.logout(), but those are async functions and thus require me to await them, and I can't await them since the signal handler can't be an async function.
Here is the code:
def handler():
print("Logging out of Discord Bot")
client.logout()
client.close()
sys.exit()
@client.event
async def on_ready():
print('We have logged in as {0.user}'.format(client))
for signame in ('SIGINT', 'SIGTERM'):
client.loop.add_signal_handler(getattr(signal, signame),
lambda: asyncio.ensure_future(handler()))
Is there a way to either log out properly using the signal handler, or at least silence the warnings and errors from inside the code so nothing is printed to the console?
Your approach is on the right track - since add_signal_handler expects an ordinary function and not an async function, you do need to call ensure_future (or its cousin create_task) to submit an async function to run in the event loop. The next step is to actually make handler async, and await the coroutines it invokes:
async def handler():
print("Logging out of Discord Bot")
await client.logout()
await client.close()
asyncio.get_event_loop().stop()
Note that I changed sys.exit() to explicit stopping of the event loop, because asyncio doesn't react well to sys.exit() being invoked from the middle of a callback (it catches the SystemExit exception and complains of an unretrieved exception).
Since I don't have discord to test, I tested it by changing the logout and close with a sleep:
import asyncio, signal
async def handler():
print("sleeping a bit...")
await asyncio.sleep(0.2)
print('exiting')
asyncio.get_event_loop().stop()
def setup():
loop = asyncio.get_event_loop()
for signame in ('SIGINT', 'SIGTERM'):
loop.add_signal_handler(getattr(signal, signame),
lambda: asyncio.create_task(handler()))
setup()
asyncio.get_event_loop().run_forever()
If you are starting the event loop using something other than run_forever, such as asyncio.run(some_function()), then you will need to replace loop.stop() with the code that sets whatever event the main coroutine awaits. For example, if it awaits server.serve_forever() on a server, then you'd pass the server object to handler and call server.close(), and so on.
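A minimal sketch of that variant, assuming asyncio.run() as the entry point: the signal handler only sets an event, and the main coroutine does the cleanup itself (the sleep stands in for the real cleanup such as client.close()):

import asyncio, signal

async def main():
    stop = asyncio.Event()
    loop = asyncio.get_running_loop()
    for signame in ('SIGINT', 'SIGTERM'):
        # The handler just sets the event; main() performs the cleanup.
        loop.add_signal_handler(getattr(signal, signame), stop.set)
    print('running until a signal arrives...')
    await stop.wait()
    print('cleaning up...')
    await asyncio.sleep(0.2)

asyncio.run(main())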
I have two tasks in a consumer/producer relationship, separated by an asyncio.Queue. If the producer task fails, I'd like the consumer task to fail as soon as possible too, and not wait indefinitely on the queue. The consumer task can be created (spawned) independently of the producer task.
In general terms, I'd like to implement a dependency between two tasks, such that the failure of one is also the failure of the other, while keeping the two tasks concurrent (i.e. one will not await the other directly).
What kinds of solutions (e.g. patterns) could be used here?
Basically, I'm thinking of Erlang's "links".
I think it may be possible to implement something similar using callbacks, i.e. asyncio.Task.add_done_callback.
Thanks!
From the comment:
The behavior I'm trying to avoid is the consumer being oblivious to the producer's death and waiting indefinitely on the queue. I want the consumer to be notified of the producer's death and have a chance to react, or just fail, even while it's waiting on the queue.
In addition to the answer presented by Yigal, another way is to set up a third task that monitors the two and cancels one when the other finishes. This can be generalized to any two tasks:
async def cancel_when_done(source, target):
assert isinstance(source, asyncio.Task)
assert isinstance(target, asyncio.Task)
try:
await source
except:
# SOURCE is a task which we expect to be awaited by someone else
pass
target.cancel()
Now when setting up the producer and the consumer, you can link them with the above function. For example:
import asyncio
import itertools

async def producer(q):
for i in itertools.count():
await q.put(i)
await asyncio.sleep(.2)
if i == 7:
1/0
async def consumer(q):
while True:
val = await q.get()
print('got', val)
async def main():
    queue = asyncio.Queue()
    p = asyncio.create_task(producer(queue))
    c = asyncio.create_task(consumer(queue))
    asyncio.create_task(cancel_when_done(p, c))
    await asyncio.gather(p, c)

asyncio.run(main())
One way would be to propagate the exception through the queue, combined with delegation of the work handling:
import asyncio

class ValidWorkLoad:
async def do_work(self, handler):
await handler(self)
class HellBrokeLoose:
def __init__(self, exception):
self._exception = exception
async def do_work(self, handler):
raise self._exception
async def worker(name, queue):
async def handler(work_load):
print(f'{name} handled')
while True:
next_work = await queue.get()
try:
await next_work.do_work(handler)
except Exception as e:
print(f'{name} caught exception: {type(e)}: {e}')
break
finally:
queue.task_done()
async def producer(name, queue):
i = 0
while True:
try:
# Produce some work, or fail while trying
new_work = ValidWorkLoad()
i += 1
if i % 3 == 0:
raise ValueError(i)
await queue.put(new_work)
print(f'{name} produced')
await asyncio.sleep(0) # Preempt just for the sake of the example
except Exception as e:
print('Exception occurred')
await queue.put(HellBrokeLoose(e))
break
loop = asyncio.get_event_loop()
queue = asyncio.Queue()  # the loop= parameter was removed in Python 3.10
producer_coro = producer('Producer', queue)
consumer_coro = worker('Consumer', queue)
loop.run_until_complete(asyncio.gather(producer_coro, consumer_coro))
loop.close()
Which outputs:
Producer produced
Consumer handled
Producer produced
Consumer handled
Exception occurred
Consumer caught exception: <class 'ValueError'>: 3
Alternatively you could skip the delegation, and designate an item that signals the worker to stop. When catching an exception in the producer you put that designated item in the queue.
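That simpler variant might look like this (a sketch; _STOP is an arbitrary sentinel name, not from the answer above):

import asyncio

_STOP = object()  # unique sentinel telling the worker to stop

async def producer(queue):
    try:
        for i in range(5):
            if i == 3:
                raise ValueError(i)  # simulated failure
            await queue.put(i)
    except Exception as e:
        print(f'Producer failed: {e}')
        await queue.put(_STOP)

async def worker(queue):
    while True:
        item = await queue.get()
        if item is _STOP:
            print('Worker stopping')
            break
        print('Worker got', item)

async def main():
    queue = asyncio.Queue()
    await asyncio.gather(producer(queue), worker(queue))

asyncio.run(main())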
Another possible solution:
import asyncio
import functools
from typing import Union

def link_tasks(t1: Union[asyncio.Task, asyncio.Future], t2: Union[asyncio.Task, asyncio.Future]):
"""
Link the fate of two asyncio tasks,
such that the failure or cancellation of one
triggers the cancellation of the other
"""
def done_callback(other: asyncio.Task, t: asyncio.Task):
# TODO: log cancellation due to link propagation
if t.cancelled():
other.cancel()
elif t.exception():
other.cancel()
t1.add_done_callback(functools.partial(done_callback, t2))
t2.add_done_callback(functools.partial(done_callback, t1))
This uses asyncio.Task.add_done_callback to register callbacks that will cancel the other task if either one fails or is cancelled.
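Usage would look something like this (a sketch, reusing the producer and consumer coroutines from the first answer above):

async def main():
    queue = asyncio.Queue()
    p = asyncio.create_task(producer(queue))
    c = asyncio.create_task(consumer(queue))
    link_tasks(p, c)
    # return_exceptions=True so gather reports both the original failure
    # and the propagated cancellation instead of raising immediately
    results = await asyncio.gather(p, c, return_exceptions=True)
    print(results)

asyncio.run(main())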
I am using Sanic as the server and trying to handle multiple requests concurrently.
I have used await for the encode function (I use a for loop to simulate doing something), but when I run time curl http://0.0.0.0:8000/ in two separate consoles, the requests don't run concurrently.
I have searched Google but only found event_loop, which is for scheduling registered coroutines.
How do I await the for loop so the requests won't be blocked?
Thank you.
from sanic import Sanic
from sanic import response
from signal import signal, SIGINT
import asyncio
import uvloop
app = Sanic(__name__)
#app.route("/")
async def test(request):
# await asyncio.sleep(5)
await encode()
return response.json({"answer": "42"})
async def encode():
print('encode')
for i in range(0, 300000000):
pass
asyncio.set_event_loop(uvloop.new_event_loop())
server = app.create_server(host="0.0.0.0", port=8000)
loop = asyncio.get_event_loop()
task = asyncio.ensure_future(server)
signal(SIGINT, lambda s, f: loop.stop())
try:
loop.run_forever()
except:
loop.stop()
Running for i in range() is blocking. If you change that to put your await asyncio.sleep(5) into the encode method, you will see that it operates as expected.
#app.route("/")
async def test(request):
await encode()
return response.json({"answer": "42"})
async def encode():
print('encode')
await asyncio.sleep(5)
When you call await encode() and encode is a blocking method, it is still going to block, because nothing inside it ever yields control to the event loop. Your thread is still locked up.
You could also add more workers:
app.create_server(workers=2)
Try looking through this answer
Since the async handler actually runs in an event loop, it runs asynchronously as a callback rather than concurrently.
loop.run_forever() calls loop._run_once over and over again to run all the registered events; each await suspends the coroutine and yields control back to the event loop, and the event loop arranges to run the next event.
So basically, if you don't want a long-running for-loop to block, you have to manually hand control back to the event loop inside the for-loop; see the issue about relinquishing control:
async def encode():
print('encode')
for i in range(0, 300000000):
await asyncio.sleep(0)
Here is a quote from Guido:
asyncio.sleep(0) means just that -- let any other tasks run and then
come back here.
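One refinement, not part of the quoted advice: awaiting asyncio.sleep(0) on every single iteration of a loop this hot adds a lot of scheduling overhead, so in practice you would usually yield only every N iterations, for example:

async def encode():
    print('encode')
    for i in range(0, 300000000):
        if i % 1000000 == 0:
            # hand control back to the event loop occasionally,
            # not on every iteration
            await asyncio.sleep(0)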