How to make async requests run in parallel with Sanic - Python

I'm not able to free the main function in this code so that tasks complete in parallel and the server can receive another GET.
With this code, when I open http://0.0.0.0:8082/envioavisos?test1=AAAAAA&test2=test in Chrome, the get_avisos_grupo() function is executed in sequence rather than in parallel, and until the function ends I am not able to send another request to http://0.0.0.0:8082/envioavisos?test1=AAAAAA&test2=test
#!/usr/bin/env python3
import asyncio
import time
from sanic import Sanic
from sanic.response import text
from datetime import datetime
import requests

avisos_ips = ['1.1.1.1', '2.2.2.2']

app = Sanic(name='server')

async def get_avisos_grupo(ip_destino, test1, test2):
    try:
        try:
            print(datetime.now().strftime("%d/%m/%Y %H:%M:%S,%f"), 'STEP 2', ip_destino)
            r = requests.post('http://{}:8081/avisosgrupo?test1={}&test2={}'.format(ip_destino, test1, test2), timeout=10)
            await asyncio.sleep(5)
        except Exception as e:
            print('TIME OUT', str(e))
    except Exception as e:
        print(str(e))

@app.route("/envioavisos", methods=['GET', 'POST'])
async def avisos_telegram_send(request):  ## enviar avisos
    try:
        query_components = request.get_args(keep_blank_values=True)
        print(datetime.now().strftime("%d/%m/%Y %H:%M:%S,%f"), '>--------STEP 1', query_components['test1'][0])
        for ip_destino in avisos_ips:
            asyncio.ensure_future(get_avisos_grupo(ip_destino, query_components['test1'][0], query_components['test2'][0]))
    except Exception as e:
        print(str(e))
    print(datetime.now().strftime("%d/%m/%Y %H:%M:%S,%f"), 'STEP 4')
    return text('ok')

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8082, workers=4)
The expected result is for all the posts to happen in parallel.
I'm getting this result:
06/04/2021 16:25:18,669074 STEP 2 1.1.1.1
TIME OUT HTTPConnectionPool(host='1.1.1.1', port=8081): Max retries exceeded with url: '))
06/04/2021 16:25:28,684200 STEP 2 2.2.2.2
TIME OUT HTTPConnectionPool(host='2.2.2.2', port=8081): Max retries exceeded with url: '))
I expect to have something like this:
06/04/2021 16:25:18,669074 STEP 2 1.1.1.1
06/04/2021 16:25:28,684200 STEP 2 2.2.2.2
TIME OUT HTTPConnectionPool(host='1.1.1.1', port=8081): Max retries exceeded with url: '))
TIME OUT HTTPConnectionPool(host='2.2.2.2', port=8081): Max retries exceeded with url: '))

Asyncio is not a magic bullet that parallelizes operations, and neither is Sanic. What it does is make efficient use of the processor to allow multiple functions to "push the ball forward" a little at a time.
Everything runs in a single thread and a single process.
You are experiencing this because you are using a blocking HTTP call. You should replace requests with an async-compatible client so that Sanic can set the request aside and handle new requests while the outgoing operation takes place.
Take a look at this:
https://sanicframework.org/en/guide/basics/handlers.html#a-word-about-async
A common mistake!
Don't do this! You need to ping a website. What do you use? pip install your-fav-request-library 🙈
Instead, try using a client that is async/await capable. Your server will thank you. Avoid using blocking tools, and favor those that play well in the asynchronous ecosystem. If you need recommendations, check out Awesome Sanic
Sanic uses httpx inside of its testing package (sanic-testing) 😉.
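For instance, here is a minimal sketch of the handler from the question rewritten with httpx (the client Sanic itself uses for testing); the names and endpoints are taken from the question, and error handling is trimmed for brevity:

import asyncio
from datetime import datetime

import httpx
from sanic import Sanic
from sanic.response import text

avisos_ips = ['1.1.1.1', '2.2.2.2']
app = Sanic(name='server')

async def get_avisos_grupo(ip_destino, test1, test2):
    print(datetime.now().strftime("%d/%m/%Y %H:%M:%S,%f"), 'STEP 2', ip_destino)
    try:
        async with httpx.AsyncClient() as client:
            # await hands control back to the event loop while the POST is
            # in flight, so other requests can be served in the meantime
            await client.post(
                'http://{}:8081/avisosgrupo?test1={}&test2={}'.format(ip_destino, test1, test2),
                timeout=10)
    except httpx.HTTPError as e:
        print('TIME OUT', str(e))

@app.route("/envioavisos", methods=['GET', 'POST'])
async def avisos_telegram_send(request):
    query = request.get_args(keep_blank_values=True)
    for ip_destino in avisos_ips:
        # fire-and-forget: each outgoing POST runs concurrently on the event loop
        asyncio.ensure_future(
            get_avisos_grupo(ip_destino, query['test1'][0], query['test2'][0]))
    return text('ok')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8082)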

Related

How to stop execution of FastAPI endpoint after a specified time to reduce CPU resource usage/cost?

Use case
The client microservice, which calls /do_something, has a fixed timeout of 60 seconds in its requests.post() call that can't be changed. So if /do_something takes 10 minutes, it is wasting CPU resources, because the client microservice stops waiting for the response after 60 seconds while /do_something keeps consuming CPU for the full 10 minutes, which increases the cost. We have a limited budget.
The current code looks like this:
import time
from uvicorn import Server, Config
from random import randrange
from fastapi import FastAPI

app = FastAPI()

def some_func(text):
    """
    Some computationally heavy function
    whose execution time depends on input text size
    """
    randinteger = randrange(1, 120)
    time.sleep(randinteger)  # simulate processing of text
    return text

@app.get("/do_something")
async def do_something():
    response = some_func(text="hello world")
    return {"response": response}

# Running
if __name__ == '__main__':
    server = Server(Config(app=app, host='0.0.0.0', port=3001))
    server.run()
Desired Solution
Here, /do_something should stop processing the current request after 60 seconds and wait for the next request to process.
If execution of the endpoint is force-stopped after 60 seconds, we should be able to log it with a custom message.
This should not kill the service and should work with multithreading/multiprocessing.
I tried this, but when the timeout happens the server gets killed. Any solution to fix this?
import logging
import time
import timeout_decorator
from uvicorn import Server, Config
from random import randrange
from fastapi import FastAPI

app = FastAPI()

@timeout_decorator.timeout(seconds=2, timeout_exception=StopIteration, use_signals=False)
def some_func(text):
    """
    Some computationally heavy function
    whose execution time depends on input text size
    """
    randinteger = randrange(1, 30)
    time.sleep(randinteger)  # simulate processing of text
    return text

@app.get("/do_something")
async def do_something():
    try:
        response = some_func(text="hello world")
    except StopIteration:
        logging.warning('Stopped /do_something endpoint due to timeout!')
    else:
        logging.info('Completed /do_something endpoint')
        return {"response": response}

# Running
if __name__ == '__main__':
    server = Server(Config(app=app, host='0.0.0.0', port=3001))
    server.run()
This answer is not about improving CPU time (as you mentioned in the comments section) but rather explains what would happen if you defined an endpoint with normal def or async def, as well as provides solutions when you run blocking operations inside an endpoint.
You are asking how to stop the processing of a request after a while, in order to process further requests. It does not really make sense to start processing a request and then (60 seconds later) stop it as if it never happened (wasting server resources all that time and keeping other requests waiting). Instead, you should leave the handling of requests to the FastAPI framework itself. When you define an endpoint with async def, it runs on the main thread (in the event loop); i.e., the server processes requests sequentially, as long as there is no await call inside the endpoint (just like in your case). The keyword await passes function control back to the event loop. In other words, it suspends the execution of the surrounding coroutine and tells the event loop to let something else run until the awaited task completes (and has returned the result data). The await keyword only works within an async function.
Since you perform a heavy CPU-bound operation inside your async def endpoint (by calling your some_func() function), and you never give up control for other requests to run in the event loop (e.g., by awaiting for some coroutine), the server will be blocked and wait for that request to be fully processed and complete, before moving on to the next one(s)—have a look at this answer for more details.
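A minimal illustration of that difference (a sketch with trivial handlers, assumed for demonstration):

import asyncio, time

async def blocking_endpoint():
    time.sleep(5)           # blocks the event loop; no other request is served meanwhile

async def cooperative_endpoint():
    await asyncio.sleep(5)  # suspends this coroutine; the loop serves other requests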
Solutions
One solution would be to define your endpoint with normal def instead of async def. In brief, when you declare an endpoint with normal def instead of async def in FastAPI, it is run in an external threadpool that is then awaited, instead of being called directly (as it would block the server); hence, FastAPI would still work asynchronously.
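A minimal sketch of that first solution, reusing the names from the question:

import time
from fastapi import FastAPI

app = FastAPI()

def some_func(text):
    time.sleep(10)  # simulate heavy processing
    return text

@app.get("/do_something")
def do_something():  # plain def: FastAPI runs this in an external threadpool
    response = some_func(text="hello world")
    return {"response": response}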
Another solution, as described in this answer, is to keep the async def definition and run the CPU-bound operation in a separate thread and await it, using Starlette's run_in_threadpool(), thus ensuring that the main thread (event loop), where coroutines are run, does not get blocked. As described by @tiangolo here, "run_in_threadpool is an awaitable function, the first parameter is a normal function, the next parameters are passed to that function directly. It supports sequence arguments and keyword arguments". Example:
from fastapi.concurrency import run_in_threadpool
res = await run_in_threadpool(cpu_bound_task, text='Hello world')
Since this is about a CPU-bound operation, it would be preferable to run it in a separate process, using ProcessPoolExecutor, as described in the link provided above. In this case, this could be integrated with asyncio, in order to await the process to finish its work and return the result(s). Note that, as described in the link above, it is important to protect the main loop of code to avoid recursive spawning of subprocesses, etc—essentially, your code must be under if __name__ == '__main__'. Example:
import concurrent.futures
from functools import partial
import asyncio

loop = asyncio.get_running_loop()
with concurrent.futures.ProcessPoolExecutor() as pool:
    res = await loop.run_in_executor(pool, partial(cpu_bound_task, text='Hello world'))
About Request Timeout
With regards to the recent update on your question about the client having a fixed 60s request timeout; if you are not behind a proxy such as Nginx that would allow you to set the request timeout, and/or you are not using gunicorn, which would also allow you to adjust the request timeout, you could use a middleware, as suggested here, to set a timeout for all incoming requests. The suggested middleware (example is given below) uses asyncio's .wait_for() function, which waits for an awaitable function/coroutine to complete with a timeout. If a timeout occurs, it cancels the task and raises asyncio.TimeoutError.
Regarding your comment below:
My requirement is not unblocking next request...
Again, please read carefully the first part of this answer to understand that if you define your endpoint with async def and do not await some coroutine inside it, but instead perform some CPU-bound task (as you already do), it will block the server until it is completed (and even the approach below won't work as expected). That's like saying that you would like FastAPI to process one request at a time; in that case, there would be no reason to use an ASGI framework such as FastAPI, which takes advantage of the async/await syntax (i.e., processing requests asynchronously) in order to provide fast performance. Hence, you either need to drop the async definition from your endpoint (as mentioned earlier), or, preferably, run your synchronous CPU-bound task using ProcessPoolExecutor, as described earlier.
Also, your comment in some_func():
Some computationally heavy function whose execution time depends on
input text size
indicates that instead of (or along with) setting a request timeout, you could check the length of the input text (using a dependency function, for instance) and raise an HTTPException in case the text's length exceeds some pre-defined value that is known beforehand to require more than 60s to process. In that way, your system won't waste resources trying to perform a task that you already know will not be completed.
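A minimal sketch of that idea (the 1000-character limit, the parameter name, and the dependency itself are illustrative assumptions, not part of the question):

from fastapi import Depends, FastAPI, HTTPException

app = FastAPI()
MAX_TEXT_LENGTH = 1000  # assumed threshold known to need more than 60s beyond it

async def check_text_length(text: str = 'hello world'):
    if len(text) > MAX_TEXT_LENGTH:
        raise HTTPException(status_code=413,
                            detail='Input text too long to process within the time limit')
    return text

@app.get('/do_something')
async def do_something(text: str = Depends(check_text_length)):
    return {'response': text}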
Working Example
import time
import uvicorn
import asyncio
import concurrent.futures
from functools import partial
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from starlette.status import HTTP_504_GATEWAY_TIMEOUT
from fastapi.concurrency import run_in_threadpool

REQUEST_TIMEOUT = 2  # adjust timeout as desired

app = FastAPI()

@app.middleware('http')
async def timeout_middleware(request: Request, call_next):
    try:
        return await asyncio.wait_for(call_next(request), timeout=REQUEST_TIMEOUT)
    except asyncio.TimeoutError:
        return JSONResponse({'detail': 'Request exceeded the time limit for processing'},
                            status_code=HTTP_504_GATEWAY_TIMEOUT)

def cpu_bound_task(text):
    time.sleep(5)
    return text

@app.get('/')
async def main():
    loop = asyncio.get_running_loop()
    with concurrent.futures.ProcessPoolExecutor() as pool:
        res = await loop.run_in_executor(pool, partial(cpu_bound_task, text='Hello world'))
    return {'response': res}

if __name__ == '__main__':
    uvicorn.run(app)

How do I slow API calls for Binance API

What must I add to my code so I stop running into API rate limit errors? I believe I run into this error because my script is making too many API calls to the Binance servers.
My code is:
from binance.client import Client
client = Client(api_key=***, api_secret=***, tld='us')
The Client module uses the requests library under the hood. The Client constructor has an optional parameter requests_params=None, which lets you pass a "Dictionary of requests params to use for all calls" (quote from the documentation).
I have looked through the requests documentation but could not find anything to fix this issue. I found another library called ratelimit, but I do not know how to pass it through Client() effectively.
The error message I receive is:
requests.exceptions.SSLError: HTTPSConnectionPool(host='api.binance.us', port=443): Max retries exceeded with url: /api/v1/ping (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))
You can simply add a delay using time.sleep in between your requests.
from time import sleep
# Adds a delay of 3 seconds
sleep(3)
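As a usage sketch (get_symbol_ticker is a standard python-binance call; the symbols and the 3-second delay here are illustrative):

from time import sleep
from binance.client import Client

client = Client(api_key='...', api_secret='...', tld='us')

for symbol in ['BTCUSDT', 'ETHUSDT']:
    print(client.get_symbol_ticker(symbol=symbol))
    sleep(3)  # pause between calls to stay under the rate limit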
Have you tried a decorator? In my opinion it's a very clean and elegant way to solve your problem :-)
Here an example:
import requests
from functools import wraps
import time

def delay(sleep_time: int):
    def decorator(function):
        @wraps(function)
        def wrapper(*args, **kwargs):
            print(f"Sleeping {sleep_time} seconds")
            time.sleep(sleep_time)
            return function(*args, **kwargs)
        return wrapper
    return decorator

@delay(5)
def get_data(url: str) -> requests.models.Response:
    return requests.get(url)

while True:
    print(get_data("https://www.google.com"))
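If you'd rather use the ratelimit library mentioned in the question, here is a hedged sketch: you wrap your own call sites rather than passing it through Client(), and the 10-calls-per-second figure is illustrative, not Binance's actual limit.

from ratelimit import limits, sleep_and_retry
from binance.client import Client

client = Client(api_key='...', api_secret='...', tld='us')

@sleep_and_retry
@limits(calls=10, period=1)  # allow at most 10 calls per second, sleep otherwise
def ping():
    return client.ping()

for _ in range(100):
    ping()  # throttled automatically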

Gevent async server with blocking requests

I have what I would think is a pretty common use case for Gevent. I need a UDP server that listens for requests, and based on the request submits a POST to an external web service. The external web service essentially only allows one request at a time.
I would like to have an asynchronous UDP server so that data can be immediately retrieved and stored so that I don't miss any requests (this part is easy with the DatagramServer gevent provides). Then I need some way to send requests to the external web service serially, but in such a way that it doesn't ruin the async of the UDP server.
I first tried monkey patching everything and what I ended up with was a quick solution, but one in which my requests to the external web service were not rate limited in any way and which resulted in errors.
It seems like what I need is a single non-blocking worker to send requests to the external web service in serial while the UDP server adds tasks to the queue from which the non-blocking worker is working.
What I need is information on running a gevent server with additional greenlets for other tasks (especially with a queue). I've been using the serve_forever function of the DatagramServer and think that I'll need to use the start method instead, but haven't found much information on how it would fit together.
Thanks,
EDIT
The answer worked very well. I've adapted the UDP server example code with the answer from @mguijarr to produce a working example for my use case:
from __future__ import print_function
from gevent.server import DatagramServer
import gevent.queue
import gevent.monkey
import urllib
gevent.monkey.patch_all()

n = 0

def process_request(q):
    while True:
        request = q.get()
        print(request)
        print(urllib.urlopen('https://test.com').read())

class EchoServer(DatagramServer):
    __q = gevent.queue.Queue()
    __request_processing_greenlet = gevent.spawn(process_request, __q)

    def handle(self, data, address):
        print('%s: got %r' % (address[0], data))
        global n
        n += 1
        print(n)
        self.__q.put(n)
        self.socket.sendto('Received %s bytes' % len(data), address)

if __name__ == '__main__':
    print('Receiving datagrams on :9000')
    EchoServer(':9000').serve_forever()
Here is how I would do it:
Write a function taking a "queue" object as argument; this function will continuously process items from the queue. Each item is supposed to be a request for the web service.
This function could be a module-level function, not part of your DatagramServer instance:
def process_requests(q):
    while True:
        request = q.get()
        # do your magic with 'request'
        ...
In your DatagramServer, make the function run within a greenlet (like a background task):
self.__q = gevent.queue.Queue()
self.__request_processing_greenlet = gevent.spawn(process_requests, self.__q)
When you receive the UDP request in your DatagramServer instance, push the request to the queue:
self.__q.put(request)
This should do what you want. You still call 'serve_forever' on DatagramServer, no problem.

Python & URLLIB2 - Request webpage but don't wait for response

In Python, how would I go about making an HTTP request without waiting for the response? I don't care about getting any data back; I just need the server to register a page request.
Right now I use this code:
urllib2.urlopen("COOL WEBSITE")
But obviously this pauses the script until a response is returned; I just want to fire off the request and move on.
How would I do this?
What you want here is called threading or asynchronous requests.
Threading:
Wrap the call to urllib2.urlopen() in a threading.Thread()
Example:
from threading import Thread
import urllib2

def open_website(url):
    return urllib2.urlopen(url)

Thread(target=open_website, args=["http://google.com"]).start()
Asynchronous:
Unfortunately there is no standard way of doing this in the Python standard library.
Use the requests library which has this support.
Example:
from requests import async
async.get("http://google.com")
There is also a third option using the restclient library, which has had built-in asynchronous support for some time:
from restclient import GET
res = GET("http://google.com", async=True, resp=True)
Use thread:
import threading
threading.Thread(target=urllib.urlopen, args=('COOL WEBSITE',)).start()
Don't forget that the args argument should be a tuple; that's why there's a trailing ,.
You can do this with the requests library as follows:

import requests

try:
    requests.get("http://127.0.0.1:8000/test/", timeout=10)
except requests.exceptions.ReadTimeout:
    # this confirms that the request has reached the server
    do_something()
except:
    print("unable to reach server")
    raise

With the above code you can send async requests without getting the response. Specify the timeout according to your needs; if you don't set one, the request will never time out.
gevent may be a proper choice.
First patch the socket:

import gevent
from gevent import monkey

monkey.patch_socket()
monkey.patch_ssl()

Then use gevent.spawn() to encapsulate your requests in greenlets. It will not block the main thread and is very fast!
Here's a simple tutorial.
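A minimal sketch of that spawn step (the URL is a placeholder):

import gevent
from gevent import monkey
monkey.patch_socket()
monkey.patch_ssl()

import urllib2  # cooperative after monkey-patching

def fire(url):
    urllib2.urlopen(url)

g = gevent.spawn(fire, 'http://example.com')
# the main thread continues immediately; call gevent.joinall([g]) before exiting if needed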

How to read www-site properly?

I tried to read a WWW site in my Python project. However, the code will crash if I can't connect to the Internet. How can I catch the exception if the connection drops at some point while reading the site?
import sys
import time
import urllib3

# Gets the weather from foreca.fi.
def get_weather(url):
    http = urllib3.PoolManager()
    r = http.request('GET', url)
    return r.data

time = time.strftime("%Y%m%d")
url = "http://www.foreca.fi/Finland/Kuopio/Saaristokaupunki/details/" + time
weather_all = get_weather(url)
print(weather_all)
I tested your code with no connection; if there's no connection it will raise a MaxRetryError ("Raised when the maximum number of retries is exceeded."), so you can handle the exception with something like:
try:
    ...  # your code here
except urllib3.exceptions.MaxRetryError:
    ...  # handle the exception here
Another thing you can do is use a timeout and do something special when it times out, so that you have additional control; in a sense, that is what the raised exception is telling you: the maximum number of retries was hit.
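A sketch combining the timeout and exception-handling suggestions (the timeout values are illustrative):

import urllib3

def get_weather(url):
    http = urllib3.PoolManager()
    try:
        r = http.request('GET', url, timeout=urllib3.Timeout(connect=5.0, read=10.0))
        return r.data
    except urllib3.exceptions.MaxRetryError:
        print('No connection to host')
        return None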
Also, consider working with the requests library.
I presume urllib3 would throw a URLError exception if there is no route to the specified server (i.e., the internet connection is lost), so perhaps you could use a simple try/except? I'm not particularly well versed in urllib3, but for urllib it would be something like:
E.g.

import urllib.error

try:
    weather_all = get_weather(url)
except urllib.error.URLError as e:
    print("No connection to host")
