I am using Sanic as the server and trying to handle multiple requests concurrently.
I have awaited the encode function (which uses a for loop to simulate doing something), but when I run time curl http://0.0.0.0:8000/ in two separate consoles, the requests don't run concurrently.
I have searched Google but only found event_loop, which is for scheduling registered coroutines.
How do I await the for loop so that requests won't be blocked?
Thank you.
from sanic import Sanic
from sanic import response
from signal import signal, SIGINT
import asyncio
import uvloop

app = Sanic(__name__)

@app.route("/")
async def test(request):
    # await asyncio.sleep(5)
    await encode()
    return response.json({"answer": "42"})

async def encode():
    print('encode')
    for i in range(0, 300000000):
        pass

asyncio.set_event_loop(uvloop.new_event_loop())
server = app.create_server(host="0.0.0.0", port=8000)
loop = asyncio.get_event_loop()
task = asyncio.ensure_future(server)
signal(SIGINT, lambda s, f: loop.stop())
try:
    loop.run_forever()
except:
    loop.stop()
Running for i in range() is blocking. If you instead put your await asyncio.sleep(5) into the encode method, you will see that it operates as expected.
#app.route("/")
async def test(request):
await encode()
return response.json({"answer": "42"})
async def encode():
print('encode')
await asyncio.sleep(5)
When you call await encode() and encode is a blocking method, it still blocks, because you are not "awaiting" anything that yields control. Your thread is still locked up.
You could also add more workers; note that the option is called workers and is passed to app.run:
app.run(host="0.0.0.0", port=8000, workers=2)
Try looking through this answer
Since the async handler actually runs in an event loop, it runs asynchronously as a callback rather than concurrently.
loop.run_forever() calls loop._run_once over and over again to run all the registered events; each await suspends the coroutine and yields control back to the event loop, and the event loop arranges to run the next event.
So basically, if you don't want a long-running for loop to block, you need to manually hand control back to the event loop inside the for loop; see the issue about relinquishing control:
async def encode():
    print('encode')
    for i in range(0, 300000000):
        await asyncio.sleep(0)
Here is a quote from Guido:
asyncio.sleep(0) means just that -- let any other tasks run and then
come back here.
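To see the effect in isolation, here is a minimal sketch (independent of the Sanic code above, with made-up coroutine names) in which two coroutines interleave only because each iteration yields with asyncio.sleep(0):

import asyncio

async def busy(name, chunks):
    for i in range(chunks):
        # pretend to do one chunk of CPU-bound work, then yield to the loop
        print(name, 'chunk', i)
        await asyncio.sleep(0)

async def main():
    # without the sleep(0) above, the first coroutine would run to
    # completion before the second one ever started
    await asyncio.gather(busy('a', 3), busy('b', 3))

asyncio.run(main())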
Related
I have two bots; one is using pydle for IRC, like:
async def start_ircbot():
    try:
        client = MyOwnBot(NICK,
                          realname=REALNAME,
                          sasl_username=SASL_USERNAME,
                          sasl_password=SASL_PASSWORD,
                          sasl_identity=SASL_IDENTITY,)
        loop = asyncio.get_event_loop()
        asyncio.ensure_future(client.connect(HOST, PORT, tls=True, tls_verify=False), loop=loop)
        loop.run_forever()
        loop.close()
    except Exception as e:
        print(e)
and another is using telethon for Telegram:
@client.on(events.NewMessage)
async def my_event_handler(event):
    ...

async def start_client():
    print("Telegram monitor started...")
    await client.start()
    await client.run_until_disconnected()
Both of them work fine separately.
Now I want to integrate the two, so I tried to launch both in my main function like this:
import Notifier
...

async def main():
    await asyncio.gather(Notifier.start_client(), start_ircbot())

asyncio.run(main())
It starts without issue, but my_event_handler never seems to get new messages. If I swap the order of the functions:
await asyncio.gather(start_ircbot(), Notifier.start_client())
the script gets stuck at launch. I suspect it has something to do with the event loops; I tried some different methods but without luck. Could anyone shed light on this for me?
Newer Python versions are removing the loop parameter from most methods, so you should try to avoid using it. As long as you don't use asyncio.run (which creates a new loop) and you don't create a new loop yourself, both libraries should be using the default loop from the main thread (I can't speak for pydle, but Telethon does this).
As long as the asyncio event loop is running, Telethon should have no trouble receiving updates. You can use client.loop to make sure everything runs in the same loop:
tlclient = TelegramClient(...)
irclient = MyOwnBot(...)

@tlclient.on(events.NewMessage)
async def my_event_handler(event):
    ...

async def main():
    await tlclient.start()
    await irclient.connect(HOST, PORT, tls=True, tls_verify=False)
    await tlclient.run_until_disconnected()  # similar to loop.run_forever() but stops when the client disconnects

tlclient.loop.run_until_complete(main())
I want to find a way to stop a function call after a timeout.
For a plain (synchronous) function, I found this method:
from func_timeout import func_set_timeout

######## is ok #########
@func_set_timeout(timeout=2)
def is_ok_request():
    import time
    time.sleep(10)

is_ok_request()
But currently I can't stop the call in an async function:
import asyncio
import time

def down_file():
    '''e.g. this is a third-party module'''
    time.sleep(100000)

async def timeout_func():
    '''downloading a file should time out after 10s and exit'''
    print("start connection mysql")
    down_file()
    print("end connection mysql")

async def main():
    try:
        await asyncio.wait_for(timeout_func(), timeout=1)
    except asyncio.TimeoutError:
        print('timeout')

asyncio.run(main())
help
There are several problems:
Don't call time.sleep() in async programs! Always await asyncio.sleep() instead.
The timeout of asyncio.wait is the time when to stop waiting. It does not cancel anything. Use asyncio.wait_for instead; it raises a TimeoutError that should be handled.
Not an error, but loop.run_until_complete() is not the recommended way to run an async program. Use asyncio.run() as the entry point; it is like run_until_complete with a cleanup afterwards.
Another issue: task = my_request(). It is not a task, it is a coroutine. In asyncio, the term task has a fixed meaning. The wait documentation warns that it expects tasks and will not accept coroutines in future versions (a sketch of passing a real Task to wait follows the example below).
The code in its simplest form (actually, it is almost the same as an example in the linked docs):
import asyncio

async def my_request():
    await asyncio.sleep(10)

async def main():
    try:
        await asyncio.wait_for(my_request(), timeout=1)
    except asyncio.TimeoutError:
        print('timeout')

asyncio.run(main())
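And for the asyncio.wait variant mentioned above, here is a minimal sketch (names are illustrative) of passing a real Task and cancelling it manually, since wait's timeout does not cancel anything:

import asyncio

async def my_request():
    await asyncio.sleep(10)

async def main():
    # wait() expects Task objects, not bare coroutines
    task = asyncio.create_task(my_request())
    done, pending = await asyncio.wait({task}, timeout=1)
    if task in pending:
        # unlike wait_for(), wait() does not cancel on timeout
        task.cancel()
        print('timeout')

asyncio.run(main())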
I am trying to do something similar to C#'s ManualResetEvent in Python.
I have attempted it in Python, but it doesn't seem to work.
import asyncio

cond = asyncio.Condition()

async def main():
    some_method()
    cond.notify()

async def some_method():
    print("Starting...")
    await cond.acquire()
    await cond.wait()
    cond.release()
    print("Finished...")

main()
I want some_method to start, then wait until it is signaled to start again.
This code is not complete. First of all, you need to use asyncio.run() to bootstrap the event loop; this is why your code is not running at all.
Secondly, some_method() never actually starts. You need to asynchronously start some_method() using asyncio.create_task(). When you call an "async def function" (the more correct term is coroutine function), it returns a coroutine object; this object needs to be driven by the event loop, either by you awaiting it or by using the aforementioned function.
Your code should look more like this:
import asyncio

async def main():
    cond = asyncio.Condition()
    t = asyncio.create_task(some_method(cond))
    # The event loop hasn't had any time to start the task
    # until you await again. Sleeping for 0 seconds will let
    # the event loop start the task before continuing.
    await asyncio.sleep(0)
    # notify() must be called while the condition's lock is held,
    # otherwise asyncio raises a RuntimeError.
    async with cond:
        cond.notify()
    # You should never really "fire and forget" tasks,
    # the same way you never do with threading. Wait for
    # it to complete before returning:
    await t

async def some_method(cond):
    print("Starting...")
    await cond.acquire()
    await cond.wait()
    cond.release()
    print("Finished...")

asyncio.run(main())
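As an aside, if you only need ManualResetEvent-style "block until signaled" semantics with no condition predicate, asyncio.Event is arguably the more direct analogue. A minimal sketch (the event stays set until clear() is called, just like a manual-reset event):

import asyncio

async def some_method(event):
    print("Starting...")
    await event.wait()   # suspends until event.set() is called
    print("Finished...")

async def main():
    event = asyncio.Event()
    t = asyncio.create_task(some_method(event))
    await asyncio.sleep(0)  # let the task reach event.wait()
    event.set()             # signal, like ManualResetEvent.Set()
    await t

asyncio.run(main())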
I need to be able to keep adding coroutines to the asyncio loop at runtime. I tried using create_task(), thinking that this would do what I want, but it still needs to be awaited.
This is the code I had; I'm not sure if there is a simple edit to make it work:
import asyncio
import time

import httpx

async def get_value_from_api():
    global ASYNC_CLIENT
    return ASYNC_CLIENT.get(api_address)

async def print_subs():
    count = await get_value_from_api()
    print(count)

async def save_subs_loop():
    while True:
        asyncio.create_task(print_subs())
        time.sleep(0.1)

async def start():
    global ASYNC_CLIENT
    async with httpx.AsyncClient() as ASYNC_CLIENT:
        await save_subs_loop()

asyncio.run(start())
I once created a similar pattern when I was mixing trio and kivy, as a demonstration of running multiple coroutines asynchronously.
It used a trio.MemoryChannel, which is roughly equivalent to asyncio.Queue; I'll just refer to it as a queue here.
The main idea is:
Wrap each task in a class that has a run function.
Give the object its own async method that puts the object itself back into the queue when execution is done.
Create a global task-spawning loop that waits for objects in the queue and schedules execution / creates a task for each object.
import asyncio
import traceback

import httpx

async def task_1(client: httpx.AsyncClient):
    resp = await client.get("http://127.0.0.1:5000/")
    print(resp.read())
    await asyncio.sleep(0.1)  # without this would be IP ban

async def task_2(client: httpx.AsyncClient):
    resp = await client.get("http://127.0.0.1:5000/meow/")
    print(resp.read())
    await asyncio.sleep(0.5)

class CoroutineWrapper:
    def __init__(self, queue: asyncio.Queue, coro_func, *param):
        self.func = coro_func
        self.param = param
        self.queue = queue

    async def run(self):
        try:
            await self.func(*self.param)
        except Exception:
            traceback.print_exc()
            return
        # put itself back into queue
        await self.queue.put(self)

class KeepRunning:
    def __init__(self):
        # queue for gathering CoroutineWrapper
        self.queue = asyncio.Queue()

    def add_task(self, coro, *param):
        wrapped = CoroutineWrapper(self.queue, coro, *param)
        # add tasks to be executed in queue
        self.queue.put_nowait(wrapped)

    async def task_processor(self):
        task: CoroutineWrapper
        while task := await self.queue.get():
            # wait for a new CoroutineWrapper object, then schedule its async method
            asyncio.create_task(task.run())

async def main():
    keep_running = KeepRunning()
    async with httpx.AsyncClient() as client:
        keep_running.add_task(task_1, client)
        keep_running.add_task(task_2, client)
        await keep_running.task_processor()

asyncio.run(main())
Server:

import time

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return str(time.time())

@app.route("/meow/")
def meow():
    return "meow"

app.run()
Output:
b'meow'
b'1639920445.965701'
b'1639920446.0767004'
b'1639920446.1887035'
b'1639920446.2986999'
b'1639920446.4067013'
b'meow'
b'1639920446.516704'
b'1639920446.6267014'
...
You can see the tasks running repeatedly, each at its own pace.
Old answer
It seems like you only want to cycle through a fixed set of tasks.
In that case, just iterate over a list of coroutine functions with itertools.cycle.
But this is no different from synchronous code, so let me know if what you need is asynchronous.
import asyncio
import itertools

import httpx

async def main_task(client: httpx.AsyncClient):
    resp = await client.get("http://127.0.0.1:5000/")
    print(resp.read())
    await asyncio.sleep(0.1)  # without this would be IP ban

async def main():
    async with httpx.AsyncClient() as client:
        for coroutine in itertools.cycle([main_task]):
            await coroutine(client)

asyncio.run(main())
Server:

import time

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return str(time.time())

app.run()
Output:
b'1639918937.7694323'
b'1639918937.8804302'
b'1639918937.9914327'
b'1639918938.1014295'
b'1639918938.2124324'
b'1639918938.3204308'
...
asyncio.create_task() works as you describe it. The problem you are having is that you create an infinite loop here:
async def save_subs_loop():
    while True:
        asyncio.create_task(print_subs())
        time.sleep(0.1)  # do not use time.sleep() in async code EVER
save_subs_loop() keeps creating tasks, but control is never yielded back to the event loop because there is no await in there. Try:
async def save_subs_loop():
    while True:
        asyncio.create_task(print_subs())
        await asyncio.sleep(0.1)  # yield control back to the loop to give tasks a chance to actually run
This problem is so common I'm thinking python should raise a RuntimeError if it detects time.sleep() within a coroutine :-)
You might want to try the TaskThread framework:
It allows you to add tasks at runtime.
Tasks are re-scheduled periodically (like in your while loop up there).
There is a built-in consumer/producer framework (parent/child relationships), which you seem to need.
Disclaimer: I wrote TaskThread out of necessity and it's been a life saver.
I am making a Discord bot using discord.py, and sometimes the script I use to run it needs to close the bot. When I closed it without signal handlers, there were a lot of errors about a loop not closing, so I added a signal handler (using the code below). Inside it I need to call client.close() and client.logout(), but the problem is that those are async functions and thus require me to await them, and I can't await them since the signal handler can't be an async function.
Here is the code:
def handler():
    print("Logging out of Discord Bot")
    client.logout()
    client.close()
    sys.exit()

@client.event
async def on_ready():
    print('We have logged in as {0.user}'.format(client))
    for signame in ('SIGINT', 'SIGTERM'):
        client.loop.add_signal_handler(getattr(signal, signame),
                                       lambda: asyncio.ensure_future(handler()))
Is there a way to either log out properly using the signal handler, or at least silence the warnings and errors from inside the code so nothing is printed to the console?
Your approach is on the right track: since add_signal_handler expects an ordinary function and not an async function, you do need to call ensure_future (or its cousin create_task) to submit an async function to run in the event loop. The next step is to actually make handler async and await the coroutines it invokes:
async def handler():
    print("Logging out of Discord Bot")
    await client.logout()
    await client.close()
    asyncio.get_event_loop().stop()
Note that I changed sys.exit() to explicitly stopping the event loop, because asyncio doesn't react well to sys.exit() being invoked from the middle of a callback (it catches the SystemExit exception and complains of an unretrieved exception).
Since I don't have discord to test with, I tested it by replacing the logout and close with a sleep:
import asyncio, signal

async def handler():
    print("sleeping a bit...")
    await asyncio.sleep(0.2)
    print('exiting')
    asyncio.get_event_loop().stop()

def setup():
    loop = asyncio.get_event_loop()
    for signame in ('SIGINT', 'SIGTERM'):
        loop.add_signal_handler(getattr(signal, signame),
                                lambda: asyncio.create_task(handler()))

setup()
asyncio.get_event_loop().run_forever()
If you are starting the event loop with something other than run_forever, such as asyncio.run(some_function()), then you will need to replace loop.stop() with code that sets whatever event the main coroutine awaits. For example, if it awaits server.serve_forever() on a server, you'd pass the server object to handler and call server.close(), and so on. A sketch of the asyncio.run variant follows.
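Here is a minimal sketch of that idea under asyncio.run, using an asyncio.Event as the thing the main coroutine awaits (the sleep stands in for the real logout/close calls):

import asyncio, signal

async def handler(stop_event):
    print("cleaning up...")
    await asyncio.sleep(0.2)  # stand-in for awaiting client.logout()/close()
    stop_event.set()          # let main() return instead of stopping the loop

async def main():
    stop_event = asyncio.Event()
    loop = asyncio.get_running_loop()
    for signame in ('SIGINT', 'SIGTERM'):
        loop.add_signal_handler(
            getattr(signal, signame),
            lambda: asyncio.create_task(handler(stop_event)))
    await stop_event.wait()   # the "run forever" point of the program

asyncio.run(main())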