I have a Flask app. I want the client-server connection to terminate if the server does not respond within a stipulated time (say 20 seconds). I read here that session.permanent = True can be set, but I am unclear where this goes in the server-side code (if this is even the right approach).
For simplicity, I am including the minimal server-side code I have. In reality, the server performs a file read/write operation and returns a result to the client.
from flask import Flask, session, app
from flask_restful import Api, Resource
from datetime import timedelta

app = Flask(__name__)
api = Api(app)

class GetParams(Resource):
    def get(self):
        print("Hello.")
        return 'OK'

api.add_resource(GetParams, '/data')

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=5002)
Can anyone tell me what I should do here so that the connection between my client and server is terminated if the server does not respond, i.e., does not send data back to the client, within 20 seconds?
Long-running tasks should be handled with a different design, because even if you allow your server to keep a request alive for 50 minutes, you cannot force the user's browser to do the same.
I would recommend implementing the long-running task as a background thread that notifies the user once it is done, as sketched below.
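A minimal sketch of that idea (the /start and /status endpoints, the in-memory results dict, and the task function are illustrative, not taken from your code):

import threading
import uuid

from flask import Flask, jsonify

app = Flask(__name__)
results = {}  # task_id -> result; an in-memory store, for the sketch only

def long_running_job(task_id):
    # ... the file read/write work would go here ...
    results[task_id] = 'OK'

@app.route('/start')
def start_task():
    # Kick off the work in a background thread and return immediately
    task_id = str(uuid.uuid4())
    results[task_id] = None
    threading.Thread(target=long_running_job, args=(task_id,), daemon=True).start()
    return jsonify({'task_id': task_id}), 202  # accepted, not finished yet

@app.route('/status/<task_id>')
def get_status(task_id):
    # The client polls this endpoint until the result is ready
    if results.get(task_id) is None:
        return jsonify({'status': 'pending'}), 202
    return jsonify({'status': 'done', 'result': results[task_id]})

The client can then enforce its own 20-second (or any other) timeout on each short poll instead of holding one long request open.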
For more reading about the problem and suggested solutions:
timeout issue with chrome and flask
long request time patterns
I believe the only thing you need is to put your connection statement in a try/except block, so that you can handle any kind of connection error.
Furthermore, a session timeout and a connection failure/unreachable server are different things. A session timeout disconnects a user whose session has been open for too long (usually used so that a forgotten open session does not stay alive forever), whereas with an unreachable server the user is never connected in the first place, so there is no session to time out.
from flask import Flask, session, app
from flask_restful import Api, Resource
from datetime import timedelta

app = Flask(__name__)
api = Api(app)

class GetParams(Resource):
    def get(self):
        print("Hello.")
        return 'OK'

api.add_resource(GetParams, '/data')

if __name__ == '__main__':
    try:
        app.run(host='130.0.1.1', port=5002)
    except:
        print("unexpected error")
You could qualify the received exception, but you'll have to read a bit of documentation: http://flask.pocoo.org/docs/1.0/quickstart/#what-to-do-if-the-server-does-not-start
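For example, a rough sketch of qualifying the exception (assuming the failed bind surfaces as an OSError, which is what a socket error during app.run typically raises):

if __name__ == '__main__':
    try:
        app.run(host='130.0.1.1', port=5002)
    except OSError as exc:
        # e.g. "Cannot assign requested address" or "Address already in use"
        print("server failed to start:", exc)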
I am developing a Flutter app using Flask as the back-end framework and MariaDB as the database.
To reduce the web service response time, each web service:
1- opens the connection at the beginning of the web service
2- executes the queries
3- closes the database connection before returning the response
Here is an example of my code architecture:
@app.route('/ws_name', methods=['GET'])
def ws_name():
    cnx = db_connexion()
    try:
        param = request.args.get('param')
        result = function_execute_many_query(cnx, param)
    except:
        cnx.close()
        return jsonify(result), 200
    response = {}
    cnx.close()
    return jsonify(result), 200
db_connexion is my function that handles connecting to the database.
The problem is that when only one user is connected to the app (using the web service), the response time is perfect,
but if 3 users (for example) are connected, the response time goes up from milliseconds to 10 seconds.
I suspect you have a problem with many requests sharing the same thread. Read https://werkzeug.palletsprojects.com/en/1.0.x/local/ for how the local context works and why you need werkzeug to manage your local context in a WSGI application.
You would want to do something like:
from werkzeug.local import LocalProxy
cnx = LocalProxy(db_connexion)
I also recommend closing your connection in a function decorated with @app.teardown_request, as in the sketch below.
See https://flask.palletsprojects.com/en/1.1.x/api/#flask.Flask.teardown_request
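A minimal sketch of that pattern, assuming db_connexion() returns a DB-API-style connection (this variant keeps the per-request connection on flask.g instead of a module-level proxy):

from flask import Flask, g

app = Flask(__name__)

def get_cnx():
    # Lazily open one connection per request and cache it on g
    if 'cnx' not in g:
        g.cnx = db_connexion()
    return g.cnx

@app.teardown_request
def close_cnx(exc):
    # Runs after every request, even if the view raised an exception
    cnx = g.pop('cnx', None)
    if cnx is not None:
        cnx.close()

Each view then calls get_cnx() instead of opening and closing the connection itself, so two concurrent requests never share a connection.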
I created a Python app in Flask. Here is the skeleton of the code:
app = Flask(__name__)

@app.route('/', methods=['GET'])
def authentication():
    '''authentication process'''
    return 'authenticated'
So when a user calls the app, it will authenticate. But if two users call it at the same time, or while one authentication is being processed, I want to hold the new request until the old one has finished, and only then start the new request. I've tried a semaphore, but it is not working. Here is what I've tried:
@app.route('/', methods=['GET'])
def authentication():
    sem.acquire()
    '''authentication process'''
    sem.release()
    return 'authenticated'
I have deployed this on Heroku. Any idea how I can achieve this?
PS: If this can't be done, I at least want to respond to the new request saying that another request is in progress and to try again after some time.
Short answer: Don't worry about it.
This is the job of the web server. When you host the application behind a server such as Apache or Nginx, the server creates multiple processes of your Flask app. When a request comes in, the server forwards it to one of the free processes; if no process is free, the server queues the request until one becomes free.
This is a high-level overview of how HTTP servers work.
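For example, if the app is served with gunicorn (common on Heroku), a sketch of a gunicorn.conf.py with several worker processes might look like this; the numbers are illustrative:

# gunicorn.conf.py (illustrative values)
bind = "0.0.0.0:8000"
workers = 4    # separate processes; concurrent requests are spread across them
threads = 2    # optional: threads per worker
timeout = 30   # seconds before a stuck worker is restarted

Started with something like gunicorn -c gunicorn.conf.py app:app, requests that arrive while every worker is busy simply wait in the server's queue until a worker frees up.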
I am trying to build a REST API with only one call.
Sometimes it takes up to 30 seconds for the program to return a response. But if the user thinks the service is lagging, they make a new call, and my app returns a response with error code 500 (Internal Server Error).
For now it is enough for me to block any new requests while the last one is not finished. Is there any simple way to do this?
I know there are a lot of queueing managers like Celery, but I prefer not to weigh my app down with large dependencies.
You could use Flask-Limiter to ignore new requests from that remote address.
pip install Flask-Limiter
Check this quickstart:
from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
limiter = Limiter(
    app,
    key_func=get_remote_address,
    default_limits=["200 per day", "50 per hour"]
)

@app.route("/slow")
@limiter.limit("1 per day")
def slow():
    return "24"

@app.route("/fast")
def fast():
    return "42"

@app.route("/ping")
@limiter.exempt
def ping():
    return "PONG"
As you can see, you can ignore requests from a remote IP address for a certain amount of time while you finish the process you're running.
DOCS
Check these two links:
Flask-Limiter documentation
Flask-Limiter quickstart
I have a scikit-learn classifier running as a Dockerised Flask app, launched with gunicorn. It receives input data in JSON format as a POST request, and responds with a JSON object of results.
When the app is first launched with gunicorn, a large model (serialised with joblib) is read from a database, and loaded into memory before the app is ready for requests. This can take 10-15 minutes.
A reproducible example isn't feasible, but the basic structure is illustrated below:
from flask import Flask, jsonify, request, Response
import joblib
import json

def classifier_app(model_name):
    # Line below takes 10-15 mins to complete
    classifier = _load_model(model_name)

    app = Flask(__name__)

    @app.route('/classify_invoice', methods=['POST'])
    def apicall():
        query = request.get_json()
        results = _build_results(query['data'])
        return Response(response=results,
                        status=200,
                        mimetype='application/json')

    print('App loaded!')
    return app
How do I configure Flask or gunicorn to return a 'still loading' response (or suitable error message) to any incoming http requests while _load_model is still running?
Basically, you want to return two responses for one request, so there are two different possibilities.
The first is to run the time-consuming task in the background and ping the server with simple AJAX requests every couple of seconds to check whether the task is completed. If it is, return the result; if not, return a "Please stand by" string or something similar.
The second is to use websockets and the flask-socketio extension.
Basic server code would be something like this:
from threading import Thread

from flask import Flask, Response
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

def do_work():
    result = your_heavy_function()
    socketio.emit("result", {"result": result}, namespace="/test/")

@app.route("/api/", methods=["POST"])
def start():
    socketio.start_background_task(target=do_work)
    # return intermediate response
    return Response()
On the client side, you should do something like this:
var socket = io.connect('http://' + document.domain + ':' + location.port + '/test/');
socket.on('result', function(msg) {
    // Process your request here
});
For further details, see this blog post, the flask-socketio documentation for a server-side reference, and the socketio documentation for a client-side reference.
PS: Using websockets this way, you can implement a progress bar too; a rough sketch follows.
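For instance, a variant of the do_work function above that reports progress as it goes (the event name and percentages are made up):

def do_work():
    # Report progress as the work advances
    for percent in (25, 50, 75):
        # ... do the next chunk of the heavy work here ...
        socketio.emit("progress", {"percent": percent}, namespace="/test/")
    result = "done"  # whatever the heavy work produced
    socketio.emit("result", {"result": result}, namespace="/test/")

The client listens for the "progress" events the same way it listens for "result".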
Just to give some context here: I'm a Node.js developer, but I'm on a project where I need to work with Python using the Flask framework.
The problem is that when a client makes a request to an endpoint of my REST Flask app, I need to emit an event using Socket.IO and get some data back from the socket server; that data is then the response of the endpoint. But I haven't figured out how to send it, because Flask needs a "return" statement saying what the response is, and my callback runs in another context.
A sample of what I'm trying to do (there are some comments explaining):
import socketio
import eventlet
from flask import Flask, request

sio = socketio.Server()
app = Flask(__name__)

@app.route('/test/<param>')
def get(param):
    def ack(data):
        print(data)  # Should be the response

    sio.emit('event', param, callback=ack)  # Socket server calls my ack function
    # Without a return statement, the endpoint returns 500

if __name__ == '__main__':
    app = socketio.Middleware(sio, app)
    eventlet.wsgi.server(eventlet.listen(('', 8000)), app)
Maybe, the right question here is: Is this possible?
I'm going to give you one way to implement what you want specifically, but I believe you have an important design flaw here, as I explain in a comment above. In the way you have this coded, your socketio.Server() object will broadcast to all your clients, so it will not be able to get a callback. If you want to emit to one client (hopefully not the same one that sent the HTTP request), then you need to add a room=client_sid argument to the emit. Or, if you are contacting a Socket.IO server, then you need to use a Socket.IO client here, not a server.
In any case, to block your HTTP route until the callback function is invoked, you can use an Event object. Something like this:
from threading import Event

from flask import jsonify

@app.route('/test/<param>')
def get(param):
    ev = Event()
    result = None

    def ack(data):
        nonlocal result
        result = {'data': data}
        ev.set()  # unblock the HTTP route

    sio.emit('event', param, room=some_client_sid, callback=ack)
    ev.wait()  # blocks until ev.set() is called
    return jsonify(result)
I had a similar problem using FastAPI + Socket.IO (async version) and was stuck at exactly the same point. There is no eventlet, so I could not try the monkey-patching option.
After a lot of head banging, it turns out that, for some reason, adding asyncio.sleep(.1) just before ev.wait() made everything work smoothly. Without it, the emitted event never actually reaches the other side (the socketio client, in my scenario). A rough sketch of that workaround is shown below.
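For reference, a rough sketch of that workaround (assuming a python-socketio AsyncServer mounted next to FastAPI; some_client_sid is a placeholder, as in the answer above):

import asyncio

import socketio
from fastapi import FastAPI

sio = socketio.AsyncServer(async_mode='asgi')
fastapi_app = FastAPI()
app = socketio.ASGIApp(sio, other_asgi_app=fastapi_app)  # run this with an ASGI server, e.g. uvicorn

@fastapi_app.get('/test/{param}')
async def get(param: str):
    ev = asyncio.Event()
    result = None

    def ack(data):
        nonlocal result
        result = {'data': data}
        ev.set()  # unblock the route

    # to=some_client_sid is a placeholder for the target client's sid
    await sio.emit('event', param, to=some_client_sid, callback=ack)
    await asyncio.sleep(0.1)  # without this, the event never seemed to reach the client
    await ev.wait()
    return result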