How to restart flask server via API while it uses multiprocessing inside - python

I have a Flask app served by waitress that receives some data in a POST request, runs some long computations in long_function, and returns the result. The computations are parallel, and I'm using pebble because I need a timeout option. I also want the user to be able to send a request that restarts the server (e.g. they want to change the number of threads for waitress).
I've found this solution https://gist.github.com/naushadzaman/b65534d912f1551c7d8366b326b7a151
It mostly works, but it doesn't interact well with my pebble pool. I'm having trouble reloading the server while the pool is active. If I use long_function_without_pool, which doesn't use any multiprocessing, I can reload the server even while it is doing some job (results are lost, of course, but that is what I want). With long_function, however, I have to wait for the pool to close before I can restart the server. If I send the restart request while the pool is still open, I get an error:
OSError: [Errno 98] Address already in use
So I suppose that p.terminate() doesn't work while a Pool is running.
How can I fix this code, or should I use a different solution?
Brief instructions on how to replicate this error:
start the app
send a POST request with an empty body to http://localhost:5221/
before you get a response (you'll have 5 seconds), send a GET request without variables to http://localhost:5221/restart/
enjoy. The server is now stuck and not responding to anything.
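For example, with Python's requests library (my illustration of the steps above, not code from the question):

import threading
import time
import requests

# Step 1: fire the long-running POST in a background thread
threading.Thread(target=requests.post, args=("http://localhost:5221/",)).start()

# Step 2: within the 5-second window, hit the restart endpoint
time.sleep(1)
requests.get("http://localhost:5221/restart/")

The full app code follows.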
import subprocess
from flask import Flask
from flask_restful import Api, Resource
from flask_cors import CORS
from webargs.flaskparser import parser, abort
import json
import time
import sys
from waitress import serve
from multiprocessing import Process, Queue
from concurrent.futures import TimeoutError
from pebble import ProcessPool, ProcessExpired
import functools

some_queue = None

APP = Flask(__name__)
API = Api(APP)
CORS(APP)

@APP.route('/restart/', methods=['GET'], endpoint='start_flaskapp')
def restart():
    try:
        some_queue.put("something")
        print("Restarted successfully")
        return "Quit"
    except Exception:
        print("Failed in restart")
        return "Failed"

def start_flaskapp(queue):
    global some_queue
    some_queue = queue
    API.add_resource(FractionsResource, "/")
    serve(APP, host='0.0.0.0', port=5221, threads=2)

def long_function():
    with ProcessPool(5) as pool:
        data = [0, 1, 2, 3, 4]
        future = pool.map(functools.partial(add_const, const=1), data, timeout=5)
        iterator = future.result()
        result = []
        while True:
            try:
                result.append(next(iterator))
            except StopIteration:
                break
            except TimeoutError as error:
                print("function took longer than %d seconds" % error.args[1])
        return result

def long_function_without_pool():
    data = [0, 1, 2, 3, 4]
    result = list(map(functools.partial(add_const, const=1), data))
    return result

def add_const(number, const=0):
    time.sleep(5)
    return number + const

class FractionsResource(Resource):
    @APP.route('/', methods=['POST'])
    def post(self):
        response = long_function()
        return json.dumps(response)

if __name__ == "__main__":
    q = Queue()
    p = Process(target=start_flaskapp, args=(q,))
    p.start()
    while True:  # watching the queue; if there is no call then sleep, otherwise break
        if q.empty():
            time.sleep(1)
        else:
            break
    p.terminate()  # terminate flaskapp and then restart the app in a subprocess
    args = [sys.executable] + [sys.argv[0]]
    subprocess.call(args)
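A note on what is probably happening, offered as an assumption rather than a confirmed diagnosis: with a fork-based start method, the pebble worker processes inherit the parent's open file descriptors, including the socket waitress is listening on. p.terminate() kills the Flask process, but the workers keep the port bound, hence OSError: [Errno 98] Address already in use on restart. One possible fix, sketched and untested, is to keep a module-level handle on the pool so the restart handler can shut the workers down before signaling the main loop (the CURRENT_POOL name is mine, not from the original code):

CURRENT_POOL = None  # set while long_function has an active pool

def long_function():
    global CURRENT_POOL
    with ProcessPool(5) as pool:
        CURRENT_POOL = pool
        # ... same map/iterate logic as above ...
    CURRENT_POOL = None

@APP.route('/restart/', methods=['GET'], endpoint='start_flaskapp')
def restart():
    pool = CURRENT_POOL
    if pool is not None:
        pool.stop()  # discard pending tasks immediately
        pool.join()  # wait for workers to exit and release the inherited socket
    some_queue.put("something")
    return "Quit"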

Related

Flask and Azure message queues

How can I use azure.servicebus with flask?
I tried using asyncio to run the process_queue function, but it blocked the REST requests.
Now I'm trying multiprocessing, but the print("while True") never executes.
I'm looking for good practices for using Flask with Azure message queues (or message queues in general).
My code is:
from multiprocessing import Process
from flask import Flask
from src.flask_settings import DevConfig
from src.rest import health
from src.rest import helloworld
import time

def create_app(config_object=DevConfig):
    app = Flask(__name__)
    app.config.from_object(config_object)
    app.register_blueprint(health.blueprint)
    app.register_blueprint(helloworld.blueprint)
    return app

print("1")

from azure.servicebus import QueueClient, Message

# Create the QueueClient
queue_client = QueueClient.from_connection_string("Endpoint=sb://**********.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=*************", "queue1")

# Receive the message from the queue
def process_queue(sleep_time):
    while True:
        time.sleep(sleep_time)
        print("while True")
        with queue_client.get_receiver() as queue_receiver:
            messages = queue_receiver.fetch_next(timeout=3)
            for message in messages:
                print(message)
                message.complete()

p = Process(target=process_queue, args=(1, ))
p.start()
p.join()

print("2")
Thanks
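As written, the module can never serve requests: process_queue loops forever, so p.join() blocks the importing process before create_app() is even used. One pattern that avoids this, sketched under the assumption that the consumer may run as a daemon thread inside the web process (the Service Bus calls mirror the ones in the question):

import threading

def start_queue_consumer(sleep_time=1):
    # A daemon thread dies with the main process and never blocks startup
    worker = threading.Thread(target=process_queue, args=(sleep_time,), daemon=True)
    worker.start()
    return worker

app = create_app()
start_queue_consumer()
# hand `app` to the WSGI server; the consumer keeps polling in the background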

Make function not to wait for other function inside it

I have a flask service as below:
from flask import Flask, request
import json
import time

app = Flask(__name__)

@app.route("/first", methods=["POST"])
def main():
    print("Request received")
    func1()
    return json.dumps({"status": True})

def func1():
    time.sleep(100)
    print("Print function executed")

if __name__ == "__main__":
    app.run("0.0.0.0", 8080)
So now when I make a request to http://localhost:8080/first, control goes to the main method: it prints "Request received", waits for func1 to finish, and only then returns {"status": True}.
But I don't want to wait for func1 to finish; instead, the endpoint should send {"status": True} immediately while func1 continues its execution.
In order to reply to a request, Flask needs the decorated view function to finish (in your case, that's main).
If you want to execute something in parallel, you need to run it in another thread or another process. Multiple processes are for going beyond a single CPU (CPU-bound work); in your case you just need parallel execution, so it is better to go with threads.
A simple technique is a thread pool: import ThreadPoolExecutor from concurrent.futures, then submit work to it, which lets your view return while the work continues. Try this:
from flask import Flask, request
import json
import time
import os
from concurrent.futures import ThreadPoolExecutor

app = Flask(__name__)

# Task manager executor
_threadpool_cpus = int(os.cpu_count() / 2)
EXECUTOR = ThreadPoolExecutor(max_workers=max(_threadpool_cpus, 2))

@app.route("/first", methods=["POST"])
def main():
    print("Request received")
    EXECUTOR.submit(func1)
    return json.dumps({"status": True})

def func1():
    time.sleep(2)
    print("Print function executed")

if __name__ == "__main__":
    app.run("0.0.0.0", 8080)
This will run func1 in a different thread, allowing Flask to respond to the user without blocking until func1 is done.
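One caveat worth adding (my note, not part of the answer above): submit returns a Future, and an exception raised inside func1 is stored on that Future instead of being printed, so failures can pass silently. A callback makes them visible:

def _log_failure(future):
    # future.exception() is None when the task succeeded
    exc = future.exception()
    if exc is not None:
        print("background task failed:", exc)

future = EXECUTOR.submit(func1)
future.add_done_callback(_log_failure)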
Maybe working with subprocesses is what you need?
You can try something like:

import subprocess
import sys

# Popen returns immediately instead of waiting; "run_func1.py" is a
# placeholder name for a script that would contain func1's work
subprocess.Popen([sys.executable, "run_func1.py"])
I think the problem is in the POST method that you prescribed. Also, 100 seconds is too long a sleep time :)
def func1():
    print("Print function executed1")
    time.sleep(10)
    print("Print function executed2")

app = Flask(__name__)

@app.route("/first")
def main():
    print("Request received1")
    func1()
    print("Request received2")
    return json.dumps({"status": True})

if __name__ == "__main__":
    app.run("0.0.0.0", 8080)
Output:
Request received1
Print function executed1
Print function executed2
Request received2
After receiving a request for function 1, you can set/reset a global status flag (e.g. flag_func_1 = True means a request was received; False means it has been executed).
You can monitor the value of flag_func_1 and return {"status": True} immediately after setting the flag.
For example, inside the main function you can do something like:

if flag_func_1:
    func_1()
    flag_func_1 = False
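A minimal sketch of how the flag-watching could look in practice (my interpretation; the monitor thread and its names are not from the answer above):

import threading
import time

flag_func_1 = False  # True: request received; False: request executed

def monitor():
    global flag_func_1
    while True:
        if flag_func_1:
            func_1()
            flag_func_1 = False
        time.sleep(0.1)  # poll the flag ten times a second

# start watching before the server begins handling requests
threading.Thread(target=monitor, daemon=True).start()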
Warning: this is not a robust solution. You should look into distributed queues to persist these requests (for example RabbitMQ, Kafka, or Redis).
That being said, you can use a thread to start the function.
from threading import Thread

@app.route("/first", methods=["GET"])
def main():
    print("Request received")
    Thread(target=func1, args=()).start()
    return json.dumps({"status": True})
If you need Flask to return a response before starting your func1(), you should check out this answer, which provides details about the necessary workings of Flask.
Otherwise, you can use threading or multiprocessing:
from threading import Thread
from multiprocessing import Process  # and a multiprocessing queue if you use this
import queue  # for passing messages between main and func1

message_queue = queue.Queue()

@app.route("/first", methods=["GET"])
def main():
    print("Request received")
    # daemon=True if it should die with the main program, otherwise daemon=False
    func_thread = Thread(target=func1, args=(), daemon=True)
    func_thread.start()
    # or func_process = Process(...) in that case
    return json.dumps({"status": True})

def func1():
    ...
    print("func 1")
    message_queue.put(...)  # if you need to pass something
    message_queue.get(...)  # to get something like a stopping signal
    return
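As a concrete variant of the stopping-signal idea above (my own sketch, using threading.Event instead of the queue):

import threading

stop_event = threading.Event()

def func1():
    # loop until the main program signals a stop
    while not stop_event.is_set():
        print("func 1 working")
        stop_event.wait(timeout=1)  # sleep, but wake early if stopped

# elsewhere, e.g. at shutdown:
# stop_event.set()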
I think the simplest way to do what you're asking is to use the multiprocessing library.
from multiprocessing import Process

def run_together(*functions):
    processes = []
    for function in functions:
        process = Process(target=function)
        process.start()
        processes.append(process)
    for process in processes:
        process.join()  # note: this waits until every process has finished

@app.route("/first", methods=["POST"])
def main():
    print("Request received")
    return run_together(func1, func2)

def func1():
    time.sleep(100)
    print("Print function executed")

def func2():
    return json.dumps({"status": True})
This is rough code and I haven't tested it yet, but I hope it helps. Cheerio!

Use zerorpc inside Flask app throws error "operation would block forever"

I have a RPC Server using zerorpc in Python, written this way
import zerorpc
from service import Service

print('RPC server - loading')

def main():
    print('RPC server - main')
    s = zerorpc.Server(Service())
    s.bind("tcp://*:4242")
    s.run()

if __name__ == "__main__":
    main()
It works fine when I create a client
import zerorpc, sys
client_rpc = zerorpc.Client()
client_rpc.connect("tcp://127.0.0.1:4242")
name = sys.argv[1] if len(sys.argv) > 1 else "dude"
print(client_rpc.videos('138cd9e5-3c4c-488a-9b6f-49907b55a040.webm'))
and run it. The print() outputs what the videos function returns.
But when I try to use this same code inside a route of a Flask app, I receive the following error:
File "src/gevent/__greenlet_primitives.pxd", line 35, in
gevent.__greenlet_primitives._greenlet_switch
gevent.exceptions.LoopExit: This operation would block forever Hub:
The Flask method (excerpt):

import zerorpc, sys

client_rpc = zerorpc.Client()
client_rpc.connect("tcp://127.0.0.1:4242")

@app.route('/videos', methods=['POST'])
def videos():
    global client_rpc
    return client_rpc.videos('138cd9e5-3c4c-488a-9b6f-49907b55a040.webm')
I can't find out what might be happening. I'm quite new to Python, and I understand this may be related to how Flask handles threads, but I can't figure out how to solve it.
zerorpc depends on gevent, which provides async IO with cooperative coroutines. This means your Flask application must use gevent for all IO operations.
In your specific case, you are likely starting your application with a standard blocking-IO WSGI server.
Here is a snippet using the WSGI server from gevent:
import zerorpc
from flask import Flask
from gevent.pywsgi import WSGIServer

app = Flask(__name__)

client_rpc = zerorpc.Client()
client_rpc.connect("tcp://127.0.0.1:4242")

@app.route('/videos', methods=['POST'])
def videos():
    global client_rpc
    return client_rpc.videos('138cd9e5-3c4c-488a-9b6f-49907b55a040.webm')

# ...

if __name__ == "__main__":
    http = WSGIServer(('', 5000), app)
    http.serve_forever()
Excerpt from https://sdiehl.github.io/gevent-tutorial/#chat-server
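One extra detail, added here as an assumption rather than something the answer states: if the Flask views also perform other blocking IO (database drivers, the requests library, and so on), gevent's monkey patching makes the standard library cooperative, and it has to run before those modules are imported:

# must run before importing anything that does blocking IO
from gevent import monkey
monkey.patch_all()

import zerorpc
from flask import Flask
# ... the rest of the snippet above stays unchanged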

Running tornado web server with flask application and check asynchronous request handling [duplicate]

This question already has an answer here:
Flask and Tornado Application does not handle multiple concurrent requests
(1 answer)
Closed 5 years ago.
I am trying to run a Flask application on a Tornado server to check asynchronous request handling. I have two files, 'flask_req.py' and 'tornado_ex.py'. The two files look like this:
flask_req.py
from flask import Flask
from flask import request
import time

app = Flask(__name__)

@app.route('/hello', methods=['GET'])
def hello():
    print("hello 1")
    time.sleep(20)
    x = 2 * 2
    print(x)
    return "hello"

@app.route('/bye', methods=['GET'])
def bye():
    print("bye 1")
    time.sleep(5)
    y = 4 * 4
    print(y)
    return "bye"
tornado_ex.py
from __future__ import print_function
from tornado.wsgi import WSGIContainer
from tornado.web import Application, FallbackHandler
from tornado.websocket import WebSocketHandler
from tornado.ioloop import IOLoop
from tornado import gen
from tornado.httpclient import AsyncHTTPClient
import time
from flask_req import app

class WebSocket(WebSocketHandler):
    def open(self):
        print("Socket opened.")

    def on_message(self, message):
        self.write_message("Received: " + message)
        print("Received message: " + message)

    def on_close(self):
        print("Socket closed.")

@gen.coroutine
def fetch_and_handle():
    """Fetches the urls and handles/processes the response"""
    urls = [
        'http://127.0.0.1:8080/hello',
        'http://127.0.0.1:8080/bye'
    ]
    http_client = AsyncHTTPClient()
    waiter = gen.WaitIterator(*[http_client.fetch(url) for url in urls])
    while not waiter.done():
        try:
            response = yield waiter.next()
        except Exception as e:
            print(e)
            continue
        print(response.body)

if __name__ == "__main__":
    container = WSGIContainer(app)
    server = Application([
        (r'/websocket/', WebSocket),
        (r'.*', FallbackHandler, dict(fallback=container))
    ])
    server.listen(8080)
    fetch_and_handle()
    IOLoop.instance().start()
I want to check the asynchronous request handling of the Tornado server. Right now, when both URLs are requested, the total wait is 20 sec + 5 sec = 25 sec. I want the server to process the second request while the first one is taking its time, so that the total wait for the code above is only 20 sec, not 25 sec. How can I achieve that behavior here? Currently, running the above code gives me:
$ python tornado_ex.py
hello 1
4
bye 1
16
hello
bye
After printing 'hello 1' it waits 25 sec before doing the further processing, and after printing 'bye 1' it waits another 5 sec. What I want is that, while '/hello' is taking so much time after printing 'hello 1', '/bye' gets processed in the meantime.
Using the WSGI container means only one request is handled at a time, and a subsequent request is not handled until the first is complete.
Using Tornado to run WSGI applications is generally not a good idea when you need concurrency.
Either use multiple processes or convert your project to the asynchronous Tornado web framework instead of WSGI.
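For the multiple-processes route, here is a minimal sketch (my illustration, using the pre-fork API from the same Tornado generation as the question's code; server is the Application built in tornado_ex.py):

from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop

http_server = HTTPServer(server)
http_server.bind(8080)
http_server.start(0)  # fork one child per CPU; each child runs its own IOLoop
IOLoop.instance().start()

Each child still handles one WSGI request at a time, but a slow request in one child no longer blocks the others.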

Run gevent processes and server concurrently

How do I run a module where some functions run concurrently as daemon-like services (not necessarily tied to routing) while the app server runs at the same time?
For example:
# some other route functions: app.post(...)

# some other concurrent functions
def alarm():
    '''
    Run this service every X duration
    '''
    ALARM = 21
    try:
        while 1:
            # checking time and doing something, then finding INTERVAL
            gevent.sleep(INTERVAL)
    except KeyboardInterrupt:
        print 'exiting'
Do I have to use the above like this after main?

gevent.joinall([gevent.spawn(alarm)])
app.run(....)

or

gevent.joinall((gevent.spawn(alarm), gevent.spawn(app.run)))
The objective is to run these alarm-like daemon services so they do their work and snooze while the rest of the service operates as usual.
The server should start concurrently as well. Correct me if I'm not on the right track.
Gevent comes with its own WSGI servers, so it is really not necessary to use app.run. The servers are:

gevent.pywsgi.WSGIServer
gevent.wsgi.WSGIServer

Both provide the same interface. You can use either to achieve what you want:
import gevent
import gevent.monkey
gevent.monkey.patch_all()

import requests
from gevent.pywsgi import WSGIServer

# app = YourBottleApp

def alarm():
    '''
    Run this service every X duration
    '''
    ALARM = 21
    while 1:
        # checking time and doing something, then finding INTERVAL
        gevent.sleep(INTERVAL)

if __name__ == '__main__':
    http_server = WSGIServer(('', 8080), app)
    srv_greenlet = gevent.spawn(http_server.serve_forever)
    alarm_greenlet = gevent.spawn(alarm)
    try:
        gevent.joinall([srv_greenlet, alarm_greenlet])
    except KeyboardInterrupt:
        http_server.stop()
        print 'Quitting'
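A small addition of my own, not in the original answer: if you want the alarm loop to end promptly on shutdown as well, kill its greenlet when the server stops:

def shutdown(http_server, alarm_greenlet):
    # stop accepting connections, then cancel the background loop
    http_server.stop()
    alarm_greenlet.kill()  # raises GreenletExit inside alarm's gevent.sleep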
