Flask with Gevent Blocking Requests on Separate Browser Windows - python

In the following snippet, I have a simple web server running that utilizes Flask. It appears as though all requests wait for the previous requests to complete before being processed.
To test, I point two windows in Chrome to localhost:5000. The second waits for the first request to finish completely.
This does not occur when I open one of those windows in 'Incognito' or when running two curl commands simultaneously.
If anyone has an idea why two separate windows get treated as the same connection (and why an incognito one is treated separately), this would be much appreciated.
Here is my code:
from gevent import monkey; monkey.patch_all()
monkey.patch_time()
from gevent.pywsgi import WSGIServer
from flask import Flask, Response, jsonify
import json
import time
app = Flask(__name__)
def toJson(obj):
    return json.dumps(obj, indent=None, separators=(',', ':'))

@app.route("/")
def hello():
    print('Received Request')
    time.sleep(5)
    return Response(toJson({'hello': 'world'}), mimetype='application/json')

print('Starting Server')
http = WSGIServer(('', 5000), app)
http.serve_forever()

Related

How to actually use pymongo ChangeStreams with Flask in a non-blocking way?

I am learning Flask and PyMongo right now and came across ChangeStreams. I understand how ChangeStreams work, but I have only worked with them in Node and Express. I have implemented ChangeStreams in my Flask app as follows:
with ms.db.collection.watch() as stream:
    for change in stream:
        print(change)
The official docs say that it's a blocking method. But how would I go about making it non-blocking? Currently my ChangeStream logic is in a different file and I import it into the server.py file, so it never gets past that import and the Flask app doesn't start at all. Below is my server.py
from flask import Flask, render_template, request
import mongo_starter as ms
import changestream as cs

app = Flask(__name__)

@app.route('/')
def home():
    return render_template('index.html')

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
Below is my ChangeStream.py
import mongo_starter as ms

with ms.db.collection.watch() as stream:
    for change in stream:
        print(change)
Below is my MongoStarter.py that actually initiates the connection to Mongo
import pymongo
import mongo_config as mc

print(mc.data_header)

try:
    print('Connecting to Database...')
    mongo_client = pymongo.MongoClient(mc.mongo_url)
    db = mongo_client['PyMongo']
    collection = db['Test Data']
    print("Connection to Database Successful!")
except pymongo.errors.InvalidURI:
    print('Error Connecting to Database')
When I run the app using nodemon, it prints the following output:
[nodemon] restarting due to changes...
[nodemon] starting `python server.py`
----------------- MONGO CONNECTION LOG --------------------
Connecting to Database...
Connection to Database Successful!
So it never actually gets past the change stream method. How can I make it work in an async way? I have looked at asyncio, but wanted to see if there was any way to implement it without using asyncio.
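One common way to do this without asyncio is to move the blocking watch() loop into a background thread and start it explicitly, instead of running it at import time. A minimal sketch of changestream.py along those lines (start_watching and _watch_collection are my own names, not from the question):

# changestream.py (sketch): run the blocking watch() loop in its own daemon thread
import threading
import mongo_starter as ms

def _watch_collection():
    # this loop blocks forever, but only inside this thread
    with ms.db.collection.watch() as stream:
        for change in stream:
            print(change)

def start_watching():
    # daemon=True so the watcher does not keep the process alive on shutdown
    thread = threading.Thread(target=_watch_collection, daemon=True)
    thread.start()
    return thread

server.py would then call cs.start_watching() before app.run(), so the import no longer blocks and the Flask app can start.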

Using Gevent in flask: API is not asynchronous

Earlier I was using Waitress. Now I'm using Gevent to run my Flask app, which has only one API endpoint:
from flask import Flask, request, jsonify
import documentUtil
from gevent.pywsgi import WSGIServer

app = Flask(__name__)

@app.route('/post-document-string', methods=['POST'])
def parse_data():
    req_data = request.get_json(force=True)
    text = req_data['text']
    result = documentUtil.parse(text)
    return jsonify(keywords=result)

if __name__ == '__main__':
    http_server = WSGIServer(('127.0.0.1', 8000), app)
    http_server.serve_forever()
This works fine, but the API is not asynchronous. If I fire the same API twice from the front-end at the same time, the second call waits for the first one to respond.
What is wrong here? How can I make it asynchronous?
We use Gunicorn to run Flask in multiple processes. You get more juice out of python that way + auto restarts and stuff. Sample config file:
import multiprocessing
bind = "0.0.0.0:80"
workers = (multiprocessing.cpu_count() * 2) + 1
# ... additional config
Then run with something like
gunicorn --config /path/to/file application.app
"""index.py"""
from flask import Flask
from flask import jsonify
app = Flask(__name__)
#app.route('/')
def index():
"""Main page"""
doc = {
'site': 'stackoverflow',
'page_id': 6347182,
'title': 'Using Gevent in flask'
}
return jsonify(doc)
# To start application
gunicorn -k gevent --bind 0.0.0.0 index:app
k : worker_class
--bind : bind address
# See https://docs.gunicorn.org/en/latest/settings.html
I'm not sure, but I think adding a thread parameter to the server object can solve the problem:
http_server = WSGIServer(('127.0.0.1', 8000), app, numthreads=50)
source: https://f.gallai.re/wsgiserver
I found the Chrome browser was the culprit, after reading this answer:
https://stackoverflow.com/a/62912019/253127
Basically, Chrome tries to cache the result of the first request and then serve that cached result to the additional tabs, so it holds the later requests until the first one completes.
You might get around this by disabling AJAX caching; assuming you're using jQuery, the code is:
$.post(
    {url: '/', cache: false},
    {'text': 'my data'}
).then(function(data){
    console.log(`server return data was: ${data}`);
});
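A server-side alternative (my own sketch, not from the linked answer) is to mark the response as non-cacheable in Flask, so Chrome has no reason to reuse or hold it for the other tab:

# Sketch: send no-store headers so the browser won't try to cache the response
# across tabs; the route and payload mirror the question's example
from flask import Flask, Response

app = Flask(__name__)

@app.route("/")
def hello():
    resp = Response('{"hello": "world"}', mimetype='application/json')
    resp.headers['Cache-Control'] = 'no-store, max-age=0'
    return resp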

What is the best way for a Python script to communicate with a Python Flask server that transports content to the client?

The following scenario:
I have a Raspberry Pi running as a server. Currently I am using a Python script with Flask, and I can also access the Raspberry Pi from my PC. (The Flask server serves a React app.)
But the functionality should be extended. It should look like the following:
A second Python script runs all the time. It fetches data from an external API every second and processes it. If certain conditions are met, the data should be processed and then communicated to the Flask server. The Flask server then forwards the data to the website running on the computer.
How, or with which method, is it best to program this "interprocess communication"? Are there any libraries? I tried Celery, but it trips up my second Python script whenever I want to access the external API, so I don't know if this is the right choice.
What else would be the best approach? Threading? Direct interprocess communication?
If important, this is how my server application looks so far:
from gevent import monkey
from flask import Flask, render_template
from flask_socketio import SocketIO

monkey.patch_all()

app = Flask(__name__, template_folder='./build', static_folder='./build/static')
socket_io = SocketIO(app)

@app.route('/')
def main():
    return render_template('index.html')

@socket_io.on('fromFrontend')
def handleInput(input):
    print('Input from Frontend: ' + input)
    send_time()

@socket_io.on('time')
def send_time():
    socket_io.emit('time', {'returnTime': "some time"})

if __name__ == '__main__':
    socket_io.run(app, host='0.0.0.0', port=5000, debug=True)
Well, I found a solution for my specific problem; I implemented it with a thread as follows:
import gevent.monkey
gevent.monkey.patch_all()

from flask import Flask, render_template
from flask_socketio import SocketIO
import time
import requests
from threading import Thread

app = Flask(__name__, template_folder='./build', static_folder='./build/static')
socket_io = SocketIO(app)

@app.route('/')
def main():
    thread = Thread(target=backgroundTask)
    thread.daemon = True
    thread.start()
    return render_template('index.html')

@socket_io.on('fromFrontend')
def handleInput(input):
    print('Input from Frontend: ' + input)

@socket_io.on('time')
def send_time():
    socket_io.emit('time', {'returnTime': 'hi frontend'})

def backgroundTask():
    # do something here
    # access socket to push some data
    socket_io.emit('time', {'returnTime': "some time"})

if __name__ == '__main__':
    socket_io.run(app, host='0.0.0.0', port=5000, debug=True)
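One caveat with this approach: main() starts a new background thread on every page load. A small guard (my own addition, just a sketch that would replace the main() route above) keeps it to a single worker:

# Sketch: start the background worker at most once, however often '/' is requested
_worker_started = False

@app.route('/')
def main():
    global _worker_started
    if not _worker_started:
        _worker_started = True
        thread = Thread(target=backgroundTask)
        thread.daemon = True
        thread.start()
    return render_template('index.html')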

Returning 'still loading' response with Flask API

I have a scikit-learn classifier running as a Dockerised Flask app, launched with gunicorn. It receives input data in JSON format as a POST request, and responds with a JSON object of results.
When the app is first launched with gunicorn, a large model (serialised with joblib) is read from a database, and loaded into memory before the app is ready for requests. This can take 10-15 minutes.
A reproducible example isn't feasible, but the basic structure is illustrated below:
from flask import Flask, jsonify, request, Response
import joblib
import json

def classifier_app(model_name):
    # Line below takes 10-15 mins to complete
    classifier = _load_model(model_name)

    app = Flask(__name__)

    @app.route('/classify_invoice', methods=['POST'])
    def apicall():
        query = request.get_json()
        results = _build_results(query['data'])
        return Response(response=results,
                        status=200,
                        mimetype='application/json')

    print('App loaded!')
    return app
How do I configure Flask or gunicorn to return a 'still loading' response (or suitable error message) to any incoming http requests while _load_model is still running?
Basically, you want to return two responses for one request, so there are two different possibilities.
The first is to run the time-consuming task in the background and ping the server with simple AJAX requests every couple of seconds to check whether the task has completed. If it has, return the result; if not, return a "Please stand by" string or something similar (a minimal polling sketch follows below).
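A rough sketch of that polling idea, applied to the model-loading case from the question (the /status endpoint, the state dict, and load_model_in_background are my own placeholder names; _load_model is the question's helper):

# Sketch: load the model in a daemon thread and expose a /status endpoint to poll
import threading
from flask import Flask, jsonify

app = Flask(__name__)
state = {"ready": False, "classifier": None}

def load_model_in_background(model_name):
    def _load():
        state["classifier"] = _load_model(model_name)  # the slow 10-15 minute step
        state["ready"] = True
    threading.Thread(target=_load, daemon=True).start()

@app.route('/status')
def status():
    if not state["ready"]:
        return jsonify({"status": "loading"}), 202  # still loading
    return jsonify({"status": "ready"}), 200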
The second is to use websockets and the flask-socketio extension.
Basic server code would be something like this:
from threading import Thread
from flask import Flask, Response
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

def do_work():
    result = your_heavy_function()
    socketio.emit("result", {"result": result}, namespace="/test/")

@app.route("/api/", methods=["POST"])
def start():
    socketio.start_background_task(target=do_work)
    # return intermediate response
    return Response()
On the client side, you should do something like this:
var socket = io.connect('http://' + document.domain + ':' + location.port + '/test/');
socket.on('result', function(msg) {
    // Process your request here
});
For further details, see this blog post, the flask-socketio documentation for server-side reference, and the Socket.IO documentation for client-side reference.
PS: Using websockets, you can build a progress bar this way too.

Running two processes on Flask with common variables

I want to build a Webapp with Flask where some data is printed on a dynamic page in real time.
The data is taken from a Python script which connects to a Websocket, then it's printed on the frontend with Flask.
I have two problems:
1) I can't run both the scripts together
2) I don't know how to call parsed from test to yield
Here is the code:
from time import sleep
from flask import Flask, render_template
import websocket
from bitmex_websocket import Instrument
from bitmex_websocket.constants import InstrumentChannels
from bitmex_websocket.constants import Channels
import json
from threading import Thread, Event

app = Flask(__name__)

websocket.enableTrace(True)

channels = [
    InstrumentChannels.trade,
]

XBTUSD = Instrument(symbol='XBTUSD',
                    channels=channels)
XBTUSD.on('action', lambda msg: test(msg))

def test(msg):
    parsed = json.loads(json.dumps(msg))
    print(parsed)

@app.route('/')
def index():
    # render the template (below) that will use JavaScript to read the stream
    return render_template('index.html')

@app.route('/stream_sqrt')
def stream():
    def generate():
        yield '{}\n'.format('test')
    return app.response_class(generate(), mimetype='text/plain')

if __name__ == '__main__':
    XBTUSD.run_forever()
    app.run()
If I put XBTUSD.run_forever() before app.run(), it starts the part that is supposed to retrieve the data, but the Flask app won't start. If I do the opposite, the Flask app runs but not the other part. How can I run the whole app together? How could I "share" variables between test and generate?
An easier way to go: use flask-socketio instead of plain Flask.
https://flask-socketio.readthedocs.io/en/latest/
Sample for sending messages using flask-socketio
https://flask-socketio.readthedocs.io/en/latest/#sending-messages
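A minimal sketch of how that could fit this question (the 'trade' event name is my own placeholder, and index.html is the question's template):

# Sketch: push each parsed message to the browser over Socket.IO, instead of trying
# to share variables between test() and a streaming generator
from flask import Flask, render_template
from flask_socketio import SocketIO
from bitmex_websocket import Instrument
from bitmex_websocket.constants import InstrumentChannels

app = Flask(__name__)
socketio = SocketIO(app)

XBTUSD = Instrument(symbol='XBTUSD', channels=[InstrumentChannels.trade])
XBTUSD.on('action', lambda msg: test(msg))

def test(msg):
    # called by the BitMEX websocket client; forward the data to connected browsers
    socketio.emit('trade', msg)

@app.route('/')
def index():
    return render_template('index.html')

if __name__ == '__main__':
    # run the BitMEX client in a background task so both loops can coexist
    socketio.start_background_task(XBTUSD.run_forever)
    socketio.run(app)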
