Flask + gevent: logging events from within the Process instance not handled

I want to implement Server-Sent Events in my application. I'm using Flask with a gevent backend. I mostly follow the official snippet, with two important distinctions:
The function that fires events is spawned as a separate process (an instance of multiprocessing.Process) instead of with gevent.spawn. It sends the "inner" event.
Event firing is done via logging, with QueueHandler from logutils.
Here's a minimal example:
from flask import Flask, Response
import time
import logging
from gevent.wsgi import WSGIServer
from gevent.queue import Queue
import multiprocessing as mp
from logutils.queue import QueueHandler

app = Flask(__name__)
logger = logging.getLogger('events')
logger.setLevel(logging.INFO)

@app.route("/publish")
def publish():
    logger = logging.getLogger('events')

    def notify():
        time.sleep(1)
        logger.info('inner')

    logger.info('outer')
    p = mp.Process(target=notify)
    p.start()
    return "OK"

@app.route("/subscribe")
def subscribe():
    def gen():
        logger = logging.getLogger("events")
        q = Queue()
        handler = QueueHandler(q)
        logger.addHandler(handler)
        while True:
            result = q.get().message
            yield result

    return Response(gen(), mimetype="text/event-stream")

if __name__ == "__main__":
    root = logging.getLogger()
    root.addHandler(logging.StreamHandler())
    root.setLevel(logging.INFO)

    app.debug = True
    server = WSGIServer(("", 5000), app)
    server.serve_forever()
The problem is that StreamHandler catches both the "inner" and "outer" events and logs them to stdout. QueueHandler, on the other hand, only pushes the "outer" event to the queue; it fails to catch the "inner" event, which originates in the other process.
Now, if I use multiprocessing.Queue instead of gevent.queue, the "/subscribe" route blocks. If I use the standard Werkzeug server (with threading) together with multiprocessing.Queue, the code above works as expected.
What's going on, and how can I tackle this?
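For context on the blocking behaviour, here is a minimal sketch (not from the original post) of one way a multiprocessing.Queue can be bridged into a gevent-served generator: poll it with get_nowait() and yield to the hub between attempts, so the greenlet never blocks the server. The queue name and sleep interval are illustrative assumptions.
import multiprocessing as mp
import queue   # only for the queue.Empty exception
import gevent

mp_queue = mp.Queue()  # hypothetical process-safe queue, filled by the child process

def gen():
    while True:
        try:
            # Non-blocking get: raises queue.Empty when nothing is available.
            record = mp_queue.get_nowait()
        except queue.Empty:
            gevent.sleep(0.1)  # yield to the gevent hub so other greenlets keep running
            continue
        # QueueHandler formats the record before enqueueing it, so .message is set.
        yield record.message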

Related

Signal handling in Uvicorn with FastAPI

I have an app using Uvicorn with FastAPI. I also have some connections open (e.g. to MongoDB). I want to gracefully close these connections once some signal occurs (SIGINT, SIGTERM and SIGKILL).
My server.py file:
import uvicorn
import fastapi
import signal
import asyncio
from fastapi.middleware.cors import CORSMiddleware  # needed for the middleware below
from source.gql import gql

app = fastapi.FastAPI()
app.add_middleware(CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"])
app.mount("/graphql", gql)

# handle signals
HANDLED_SIGNALS = (
    signal.SIGINT,
    signal.SIGTERM,
)

loop = asyncio.get_event_loop()
for sig in HANDLED_SIGNALS:
    loop.add_signal_handler(sig, _some_callback_func)

if __name__ == "__main__":
    uvicorn.run(app, port=6900)
Unfortunately, the way I'm trying to achieve this is not working. When I press Ctrl+C in the terminal, nothing happens. I believe this is because Uvicorn is started in a different thread...
What is the correct way of doing this? I have noticed the uvicorn.Server.install_signal_handlers() function, but haven't had any luck using it...
FastAPI allows defining event handlers (functions) that need to be executed before the application starts up, or when the application is shutting down. Thus, you could use the shutdown event, as described here:
#app.on_event("shutdown")
def shutdown_event():
# close connections here
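A slightly fuller sketch of that idea (the MongoDB client and connection string are placeholders, not from the question):
import fastapi
from pymongo import MongoClient

app = fastapi.FastAPI()
mongo_client = MongoClient("mongodb://localhost:27017")  # placeholder connection string

@app.on_event("shutdown")
def shutdown_event():
    # Uvicorn installs its own SIGINT/SIGTERM handlers, finishes in-flight
    # requests, and then fires this event, so the close runs on shutdown.
    mongo_client.close()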

How to use Process Pool Executor in tornado with synchronous code?

I am new to Tornado and have an API that makes a blocking database call. Because of this blocking call, Tornado isn't able to serve all the requests if multiple requests come in at the same time.
I looked into it and found that it could be solved using two approaches: making the code asynchronous and/or using a ProcessPoolExecutor. My assumption here is that having a process pool executor is like having multiple processes in Tornado to serve multiple requests. Every example I found of using a ProcessPoolExecutor also makes the code asynchronous.
I cannot make the code asynchronous for the time being because it would require more code changes, so I was looking for a simpler fix using a ProcessPoolExecutor.
What I have currently
import tornado.ioloop
import tornado.web

def blocking_call():
    import time
    time.sleep(60)
    return "Done"

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        val = blocking_call()
        self.write(val)

if __name__ == "__main__":
    app = tornado.web.Application([(r"/", MainHandler)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
What I tried
import tornado.ioloop
import tornado.web
from concurrent.futures import ProcessPoolExecutor

def blocking_call():
    import time
    time.sleep(60)
    return "Done"

class MainHandler(tornado.web.RequestHandler):
    def initialize(self, executor):
        self.executor = executor

    def get(self):
        val = self.executor.submit(blocking_call)
        self.write(val)

if __name__ == "__main__":
    executor = ProcessPoolExecutor(5)
    app = tornado.web.Application(
        [(r"/", MainHandler, dict(executor=executor))])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
My problem with this approach is that now I am getting a Future object instead of the actual response. How do I make the GET request wait for self.executor to complete before sending back the response?
executor.submit() returns a concurrent.futures.Future, which is not awaitable.
I suggest you use Tornado's run_in_executor method to execute the blocking task.
async def get(self):
    loop = tornado.ioloop.IOLoop.current()
    val = await loop.run_in_executor(self.executor, blocking_call)
    self.write(val)
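Put together, a complete version might look like the following (a sketch based on the question's code, keeping the pool size of 5 from there):
import time
import tornado.ioloop
import tornado.web
from concurrent.futures import ProcessPoolExecutor

def blocking_call():
    time.sleep(60)
    return "Done"

class MainHandler(tornado.web.RequestHandler):
    def initialize(self, executor):
        self.executor = executor

    async def get(self):
        loop = tornado.ioloop.IOLoop.current()
        # Offload the blocking function to a worker process; the event loop
        # keeps serving other requests while we await the result.
        val = await loop.run_in_executor(self.executor, blocking_call)
        self.write(val)

if __name__ == "__main__":
    executor = ProcessPoolExecutor(5)
    app = tornado.web.Application([(r"/", MainHandler, dict(executor=executor))])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()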

Python Flask API with stomp listener multiprocessing

TLDR:
I need to set up a Flask app for multiprocessing such that the API and the stomp queue listener run in separate processes and therefore don't interfere with each other's operations.
Details:
I am building a Python Flask app that has API endpoints and also creates a message queue listener that connects to an ActiveMQ queue with the stomp package.
I need to implement multiprocessing such that the API and the listener do not block each other. That way the API will accept new requests and the listener will continue to listen for new messages and carry out tasks accordingly.
A simplified version of the code is shown below (some details are omitted for brevity).
Problem: the multiprocessing is causing the application to get stuck. The worker's run method is not called consistently, and therefore the listener never gets created.
# Start the worker as a subprocess -- this is not working -- app gets stuck before the worker's run method is called
m = Manager()
shared_state = m.dict()
worker = MyWorker(shared_state=shared_state)
worker.start()
After several days of troubleshooting I suspect the problem is that the multiprocessing is not set up correctly. I was able to show this by stripping out all of the multiprocessing code and calling the worker's run method directly: all of the queue management code then works correctly, and the CustomWorker module creates the listener, creates the message, and picks up the message. I think this indicates that the queue management code is fine and that the source of the problem is most likely the multiprocessing setup.
# Removing the multiprocessing and calling the worker's run method directly works without getting stuck so the issue is likely due to multiprocessing not being setup correctly
worker = MyWorker()
worker.run()
Here is the code I have so far:
App
This part of the code creates the API and attempts to spawn a new process for the queue listener. The 'custom_worker_utils' module is a custom module that creates the stomp listener in the CustomWorker() class's run method.
from flask import Flask, request, make_response, jsonify
from flask_restx import Resource, Api
import sys, os, logging, time

basedir = os.path.dirname(os.getcwd())
sys.path.append('..')

from custom_worker_utils.custom_worker_utils import *
from multiprocessing import Manager

# app.py
def create_app():
    app = Flask(__name__)
    app.config['BASE_DIR'] = basedir
    api = Api(app, version='1.0', title='MPS Worker', description='MPS Common Worker')
    logger = get_logger()

    '''
    This is a placeholder to trigger the sending of a message to the first queue
    '''
    @api.route('/initialapicall', endpoint="initialapicall", methods=['GET', 'POST', 'PUT', 'DELETE'])
    class InitialApiCall(Resource):
        # Sends a message to the queue
        def get(self, *args, **kwargs):
            mqconn = get_mq_connection()
            message = create_queue_message(initial_tracker_file)
            mqconn.send('/queue/test1', message, headers={"persistent": "true"})
            return make_response(jsonify({'message': 'Initial Test Call Worked!'}), 200)

    # Start the worker as a subprocess -- this is not working -- the app gets stuck
    # before the worker's run method is called
    m = Manager()
    shared_state = m.dict()
    worker = MyWorker(shared_state=shared_state)
    worker.start()

    # Removing the multiprocessing and calling the worker's run method directly works
    # without getting stuck, so the issue is likely due to multiprocessing not being
    # set up correctly
    # worker = MyWorker()
    # worker.run()

    return app
Custom worker utils
The run() method is called, connects to the queue, and creates the listener with the stomp package.
# custom_worker_utils.py

from multiprocessing import Manager, Process
from _datetime import datetime
import os, time, json, stomp, requests, logging, random

'''
The listener
'''
class MyListener(stomp.ConnectionListener):

    def __init__(self, p):
        self.process = p
        self.logger = p.logger
        self.conn = p.mqconn
        self.conn.connect(_user, _password, wait=True)
        self.subscribe_to_queue()

    def on_message(self, headers, message):
        message_data = json.loads(message)
        ticket_id = message_data[constants.TICKET_ID]
        prev_status = message_data[constants.PREVIOUS_STEP_STATUS]
        task_name = message_data[constants.TASK_NAME]
        # Run the service
        if prev_status == "success":
            resp = self.process.do_task(ticket_id, task_name)
        elif hasattr(self, 'revert_task'):
            resp = self.process.revert_task(ticket_id, task_name)
        else:
            resp = True
        if resp:
            self.logger.debug('Acknowledging')
            self.logger.debug(resp)
            self.conn.ack(headers['message-id'], self.process.conn_id)
        else:
            self.conn.nack(headers['message-id'], self.process.conn_id)

    def on_disconnected(self):
        self.conn.connect('admin', 'admin', wait=True)
        self.subscribe_to_queue()

    def subscribe_to_queue(self):
        queue = os.getenv('QUEUE_NAME')
        self.conn.subscribe(destination=queue, id=self.process.conn_id, ack='client-individual')

def get_mq_connection():
    conn = stomp.Connection([(_host, _port)], heartbeats=(4000, 4000))
    conn.connect(_user, _password, wait=True)
    return conn

class CustomWorker(Process):

    def __init__(self, **kwargs):
        super(CustomWorker, self).__init__()
        self.logger = logging.getLogger("Worker Log")
        log_level = os.getenv('LOG_LEVEL', 'WARN')
        self.logger.setLevel(log_level)
        self.mqconn = get_mq_connection()
        self.conn_id = random.randrange(1, 100)
        for k, v in kwargs.items():
            setattr(self, k, v)

    def revert_task(self, ticket_id, task_name):
        # If the subclass does not implement this,
        # then there is nothing to undo, so just return True
        return True

    def run(self):
        lst = MyListener(self)
        self.mqconn.set_listener('queue_listener', lst)
        while True:
            pass
It seems like Celery is exactly what you need.
Celery is a task queue that can distribute work across worker processes and even across machines.
Miguel Grinberg wrote a great post about this, showing how to accept tasks via Flask and run them as Celery tasks.
Good luck!
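A minimal sketch of that pattern (the Redis broker URL, route, and task body are illustrative placeholders, not from Grinberg's post):
from flask import Flask
from celery import Celery

app = Flask(__name__)
celery = Celery(app.name, broker="redis://localhost:6379/0")  # placeholder broker URL

@celery.task
def process_message(payload):
    # Long-running work runs in a Celery worker process, not in the Flask process.
    return payload

@app.route("/enqueue/<payload>")
def enqueue(payload):
    process_message.delay(payload)  # returns immediately; a worker picks up the task
    return "queued"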
To resolve this issue I decided to run the Flask API and the message queue listener as two entirely separate applications in the same Docker container. I installed and configured supervisord to start and manage the processes individually.
[supervisord]
nodaemon=true
logfile=/home/appuser/logs/supervisord.log
[program:gunicorn]
command=gunicorn -w 1 -c gunicorn.conf.py "app:create_app()" -b 0.0.0.0:8081 --timeout 10000
directory=/home/appuser/app
user=appuser
autostart=true
autorestart=true
stdout_logfile=/home/appuser/logs/supervisord_worker_stdout.log
stderr_logfile=/home/appuser/logs/supervisord_worker_stderr.log
[program:mqlistener]
command=python3 start_listener.py
directory=/home/appuser/mqlistener
user=appuser
autostart=true
autorestart=true
stdout_logfile=/home/appuser/logs/supervisord_mqlistener_stdout.log
stderr_logfile=/home/appuser/logs/supervisord_mqlistener_stderr.log

Flask and Azure message queues

How can I use azure.servicebus with Flask?
I tried to use asyncio to run the process_queue function, but it blocked the REST requests.
Now I'm trying to use multiprocessing, but it never executes the print("while True").
I'm looking for good practices for using Flask with Azure message queues (or message queues in general).
My code is:
from multiprocessing import Process
from flask import Flask
from src.flask_settings import DevConfig
from src.rest import health
from src.rest import helloworld
import time
def create_app(config_object=DevConfig):
app = Flask(__name__)
app.config.from_object(config_object)
app.register_blueprint(health.blueprint)
app.register_blueprint(helloworld.blueprint)
return app
print("1")
from azure.servicebus import QueueClient, Message
# Create the QueueClient
queue_client = QueueClient.from_connection_string("Endpoint=sb://**********.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=*************", "queue1")
# Receive the message from the queue
def process_queue(sleep_time):
while True:
time.sleep(sleep_time)
print("while True")
with queue_client.get_receiver() as queue_receiver:
messages = queue_receiver.fetch_next(timeout=3)
for message in messages:
print(message)
message.complete()
p = Process(target=process_queue, args=(1, ))
p.start()
p.join()
print("2")
Thanks
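One thing worth noting about the snippet above: process_queue() loops forever, so the module-level p.join() never returns and blocks the importing process before Flask can serve anything. A sketch of the same code with the consumer started under a __main__ guard and left running (this reuses the question's azure.servicebus calls; it is not a verified fix for the missing print):
from multiprocessing import Process

if __name__ == "__main__":
    # Start the queue consumer in the background and leave it running;
    # do not join() it, so this process stays free to serve the Flask app.
    consumer = Process(target=process_queue, args=(1,), daemon=True)
    consumer.start()

    app = create_app()
    app.run(port=5000)  # or point a WSGI server at create_app()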

Make a Python asyncio call from a Flask route

I want to execute an async function every time the Flask route is executed. Why is the abar function never executed?
import asyncio
from flask import Flask

async def abar(a):
    print(a)

loop = asyncio.get_event_loop()
app = Flask(__name__)

@app.route("/")
def notify():
    asyncio.ensure_future(abar("abar"), loop=loop)
    return "OK"

if __name__ == "__main__":
    app.run(debug=False, use_reloader=False)
    loop.run_forever()
I also tried putting the blocking call in a separate thread. But it is still not calling the abar function.
import asyncio
from threading import Thread
from flask import Flask

async def abar(a):
    print(a)

app = Flask(__name__)

def start_worker(loop):
    asyncio.set_event_loop(loop)
    try:
        loop.run_forever()
    finally:
        loop.close()

worker_loop = asyncio.new_event_loop()
worker = Thread(target=start_worker, args=(worker_loop,))

@app.route("/")
def notify():
    asyncio.ensure_future(abar("abar"), loop=worker_loop)
    return "OK"

if __name__ == "__main__":
    worker.start()
    app.run(debug=False, use_reloader=False)
You can incorporate some async functionality into Flask apps without having to completely convert them to asyncio.
import asyncio
from flask import Flask

async def abar(a):
    print(a)

loop = asyncio.get_event_loop()
app = Flask(__name__)

@app.route("/")
def notify():
    loop.run_until_complete(abar("abar"))
    return "OK"

if __name__ == "__main__":
    app.run(debug=False, use_reloader=False)
This will block the Flask response until the async function returns, but it still allows you to do some clever things. I've used this pattern to perform many external requests in parallel using aiohttp, and then when they are complete, I'm back in traditional Flask for data processing and template rendering.
import aiohttp
import asyncio
import async_timeout
from flask import Flask

loop = asyncio.get_event_loop()
app = Flask(__name__)

async def fetch(url):
    async with aiohttp.ClientSession() as session, async_timeout.timeout(10):
        async with session.get(url) as response:
            return await response.text()

def fight(responses):
    return "Why can't we all just get along?"

@app.route("/")
def index():
    # perform multiple async requests concurrently
    responses = loop.run_until_complete(asyncio.gather(
        fetch("https://google.com/"),
        fetch("https://bing.com/"),
        fetch("https://duckduckgo.com"),
        fetch("http://www.dogpile.com"),
    ))
    # do something with the results
    return fight(responses)

if __name__ == "__main__":
    app.run(debug=False, use_reloader=False)
A simpler solution to your problem (in my biased view) is to switch from Flask to Quart. If so, your snippet simplifies to:
import asyncio
from quart import Quart

async def abar(a):
    print(a)

app = Quart(__name__)

@app.route("/")
async def notify():
    await abar("abar")
    return "OK"

if __name__ == "__main__":
    app.run(debug=False)
As noted in the other answers, Flask's app.run is blocking and does not interact with an asyncio loop. Quart, on the other hand, is the Flask API built on asyncio, so it should work the way you expect.
Also, as an update: Flask-Aiohttp is no longer maintained.
Your mistake is trying to run the asyncio event loop after calling app.run(). The latter doesn't return; instead, it runs the Flask development server.
In fact, that's how most WSGI setups work: either the main thread is busy dispatching requests, or the Flask app is imported as a module into a WSGI server, and you can't start an event loop there either.
You'll instead have to run your asyncio event loop in a separate thread, then run your coroutines in that separate thread via asyncio.run_coroutine_threadsafe(). See the Coroutines and Multithreading section in the documentation for what this entails.
Here is an implementation of a module that will run such an event loop thread, and gives you the utilities to schedule coroutines to be run in that loop:
import asyncio
import itertools
import threading

__all__ = ["EventLoopThread", "get_event_loop", "stop_event_loop", "run_coroutine"]

class EventLoopThread(threading.Thread):
    loop = None
    _count = itertools.count(0)

    def __init__(self):
        self.started = threading.Event()
        name = f"{type(self).__name__}-{next(self._count)}"
        super().__init__(name=name, daemon=True)

    def __repr__(self):
        loop, r, c, d = self.loop, False, True, False
        if loop is not None:
            r, c, d = loop.is_running(), loop.is_closed(), loop.get_debug()
        return (
            f"<{type(self).__name__} {self.name} id={self.ident} "
            f"running={r} closed={c} debug={d}>"
        )

    def run(self):
        self.loop = loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        loop.call_later(0, self.started.set)

        try:
            loop.run_forever()
        finally:
            try:
                shutdown_asyncgens = loop.shutdown_asyncgens()
            except AttributeError:
                pass
            else:
                loop.run_until_complete(shutdown_asyncgens)

            try:
                shutdown_executor = loop.shutdown_default_executor()
            except AttributeError:
                pass
            else:
                loop.run_until_complete(shutdown_executor)

            asyncio.set_event_loop(None)
            loop.close()

    def stop(self):
        loop, self.loop = self.loop, None
        if loop is None:
            return
        loop.call_soon_threadsafe(loop.stop)
        self.join()

_lock = threading.Lock()
_loop_thread = None

def get_event_loop():
    global _loop_thread

    if _loop_thread is None:
        with _lock:
            if _loop_thread is None:
                _loop_thread = EventLoopThread()
                _loop_thread.start()
                # give the thread up to a second to produce a loop
                _loop_thread.started.wait(1)

    return _loop_thread.loop

def stop_event_loop():
    global _loop_thread
    with _lock:
        if _loop_thread is not None:
            _loop_thread.stop()
            _loop_thread = None

def run_coroutine(coro):
    """Run the coroutine in the event loop running in a separate thread

    Returns a Future, call Future.result() to get the output
    """
    return asyncio.run_coroutine_threadsafe(coro, get_event_loop())
You can use the run_coroutine() function defined here to schedule asyncio coroutines. Use the returned Future instance to control the coroutine:
Get the result with Future.result(). You can give this a timeout; if no result is produced within the timeout, the coroutine is automatically cancelled.
You can query the state of the coroutine with the .cancelled(), .running() and .done() methods.
You can add callbacks to the future, which will be called when the coroutine has completed, is cancelled, or raises an exception (take into account that the callback will probably be called from the event loop thread, not the thread in which you called run_coroutine()).
For your specific example, where abar() doesn't return any result, you can just ignore the returned future, like this:
#app.route("/")
def notify():
run_coroutine(abar("abar"))
return "OK"
Note that before Python 3.8 you can't use an event loop running on a separate thread to create subprocesses! See my answer to Python3 Flask asyncio subprocess in route hangs for a backport of the Python 3.8 ThreadedChildWatcher class as a work-around.
For the same reason, you won't see this print:
if __name__ == "__main__":
app.run(debug=False, use_reloader=False)
print('Hey!')
loop.run_forever()
loop.run_forever() is never called since, as @dirn already noted, app.run is also blocking.
Running a global blocking event loop is the only way you can run asyncio coroutines and tasks, but it's not compatible with running a blocking Flask app (or with anything else like that, in general).
If you want to use an asynchronous web framework, you should choose one designed to be asynchronous. For example, probably the most popular right now is aiohttp:
from aiohttp import web

async def hello(request):
    return web.Response(text="Hello, world")

if __name__ == "__main__":
    app = web.Application()
    app.router.add_get('/', hello)
    web.run_app(app)  # this runs the asyncio event loop inside
Upd:
About your attempt to run the event loop in a background thread: I didn't investigate much, but it seems the problem is somehow related to thread safety: many asyncio objects are not thread-safe. If you change your code this way, it'll work:
def _create_task():
    asyncio.ensure_future(abar("abar"), loop=worker_loop)

@app.route("/")
def notify():
    worker_loop.call_soon_threadsafe(_create_task)
    return "OK"
But again, this is a very bad idea. Not only is it very inconvenient, I don't think it makes much sense: if you're going to use a thread to start asyncio, why not just use threads in Flask instead of asyncio? You'd have the Flask you want, plus parallelization.
If I still haven't convinced you, at least take a look at the Flask-aiohttp project. Its API is close to Flask's, and I think it's still better than what you're trying to do.
The main issue, as already explained in the other answers by @Martijn Pieters and @Mikhail Gerasimov, is that app.run is blocking, so the line loop.run_forever() is never called. You would need to manually set up and maintain a run loop on a separate thread.
Fortunately, with Flask 2.0, you don't need to create, run, and manage your own event loop anymore. You can define your route as async def and directly await on coroutines from your route functions.
https://flask.palletsprojects.com/en/2.0.x/async-await/
Using async and await
New in version 2.0.
Routes, error handlers, before request, after request, and teardown
functions can all be coroutine functions if Flask is installed with
the async extra (pip install flask[async]). It requires Python 3.7+
where contextvars.ContextVar is available. This allows views to be
defined with async def and use await.
Flask will take care of creating the event loop on each request. All you have to do is define your coroutines and await on them to finish:
https://flask.palletsprojects.com/en/2.0.x/async-await/#performance
Performance
Async functions require an event loop to run. Flask, as a WSGI
application, uses one worker to handle one request/response cycle.
When a request comes into an async view, Flask will start an event
loop in a thread, run the view function there, then return the result.
Each request still ties up one worker, even for async views. The
upside is that you can run async code within a view, for example to
make multiple concurrent database queries, HTTP requests to an
external API, etc. However, the number of requests your application
can handle at one time will remain the same.
Tweaking the original example from the question:
import asyncio
from flask import Flask, jsonify

async def send_notif(x: int):
    print(f"Called coro with {x}")
    await asyncio.sleep(1)
    return {"x": x}

app = Flask(__name__)

@app.route("/")
async def notify():
    futures = [send_notif(x) for x in range(5)]
    results = await asyncio.gather(*futures)
    response = list(results)
    return jsonify(response)

# The recommended way now is to use `flask run`.
# See: https://flask.palletsprojects.com/en/2.0.x/cli/
# if __name__ == "__main__":
#     app.run(debug=False, use_reloader=False)
$ time curl -s -XGET 'http://localhost:5000'
[{"x":0},{"x":1},{"x":2},{"x":3},{"x":4}]
real 0m1.016s
user 0m0.005s
sys 0m0.006s
Most common recipes using asyncio can be applied the same way. The one thing to take note of is, as of Flask 2.0.1, we cannot use asyncio.create_task to spawn background tasks:
https://flask.palletsprojects.com/en/2.0.x/async-await/#background-tasks
Async functions will run in an event loop until they complete, at which
stage the event loop will stop. This means any additional spawned
tasks that haven’t completed when the async function completes will be
cancelled. Therefore you cannot spawn background tasks, for example
via asyncio.create_task.
If you wish to use background tasks it is best to use a task queue to
trigger background work, rather than spawn tasks in a view function.
Other than the limitation with create_task, it should work for use-cases where you want to make async database queries or multiple calls to external APIs.
