how to handle multiple requests using python flask [duplicate] - python

My Flask application has to do quite a large calculation to fetch a certain page. While Flask is running that function, other users cannot access the website, because Flask is busy with the large calculation.
Is there any way that I can make my Flask application accept requests from multiple users?

Yes, deploy your application on a different WSGI server; see the Flask deployment options documentation.
The server component that comes with Flask is really only meant for use while you are developing your application, even though it can be configured to handle concurrent requests with app.run(threaded=True) (as of Flask 1.0 this is the default). The document above lists several servers that can handle concurrent requests and are far more robust and tunable.
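As a rough sketch (the module name myapp, the /report route and the do_large_calculation stub are only placeholders), the threaded development server and a typical Gunicorn deployment look like this:

import time
from flask import Flask

app = Flask(__name__)

def do_large_calculation():
    # Stand-in for the expensive work described in the question.
    time.sleep(5)
    return 'done'

@app.route('/report')
def report():
    return do_large_calculation()

if __name__ == '__main__':
    # Development only: threaded=True serves each request in its own thread.
    app.run(threaded=True)

In production you would run the same app object under a WSGI server instead, for example gunicorn -w 4 "myapp:app", which serves several requests at once across worker processes.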

For requests that take a long time, you might want to consider starting a background job for them.
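A minimal sketch of that idea, assuming an in-process thread pool is acceptable (a real task queue such as Celery or RQ is the more robust choice, and the /start and /status routes here are made up for illustration):

import uuid
from concurrent.futures import ThreadPoolExecutor
from flask import Flask, jsonify

app = Flask(__name__)
executor = ThreadPoolExecutor(max_workers=2)
jobs = {}  # job id -> Future; lives in this process only

def do_large_calculation():
    return 42  # stand-in for the expensive work

@app.route('/start')
def start():
    job_id = str(uuid.uuid4())
    jobs[job_id] = executor.submit(do_large_calculation)
    return jsonify(job_id=job_id), 202

@app.route('/status/<job_id>')
def status(job_id):
    future = jobs.get(job_id)
    if future is None:
        return jsonify(error='unknown job'), 404
    if not future.done():
        return jsonify(done=False)
    return jsonify(done=True, result=future.result())

Note that the jobs dictionary is not shared between worker processes, so this only behaves as expected with a single process; that is exactly the sort of thing a dedicated task queue solves.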

Related

Solution to REST API caching for Python Flask app served through wsgi/nginx

I'm trying to cache the responses to API HTTP requests in a Python/Flask app served through WSGI/nginx. I used to use the Flask simplecache library for small standalone Flask apps. Those apps are now behind a WSGI/nginx layer, which makes them much more scalable, but I am wondering if there is a way to cache the API responses at that level? I was hoping nginx would handle that, but I'm stumped when I Google it.
E.g. cache the results of http://results.com?page=5 for 3 hrs.
The lack of questions and answers on this topic makes me think I am somehow asking the wrong question.
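For what it's worth, nginx can cache upstream responses with its proxy_cache directives, but the caching can also stay at the Flask layer. A minimal sketch with the Flask-Caching extension (the /results route and the expensive_lookup helper are only placeholders for illustration):

from flask import Flask, request
from flask_caching import Cache

app = Flask(__name__)
# SimpleCache is per-process; switch to Redis or memcached when running several workers.
cache = Cache(app, config={'CACHE_TYPE': 'SimpleCache'})

def expensive_lookup(page):
    return 'results for page {}'.format(page)  # stand-in for the real work

@app.route('/results')
@cache.cached(timeout=3 * 60 * 60, query_string=True)  # cache for 3 hours, keyed on ?page=...
def results():
    page = request.args.get('page', '1')
    return expensive_lookup(page)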

Run a Python script from a HTML page using nginx on a Raspberry Pi

I've been working on a project recently where, basically, I need to make a motor spin for a few seconds at certain times throughout the day, with those times customisable from your phone.
So far I have followed many tutorials and done a lot of browsing, and I've managed to have my Pi Zero host its own network (using nginx, hostapd and dnsmasq), which you can connect to on your phone and go to 192.168.4.1 to access an index.html page in /var/www/html/
I also have a Python script which, when run, turns one of the GPIO pins on for a few seconds and then off again, and this GPIO pin is in turn connected to the motor.
The trouble I am having is setting up the rest of the web side, where you are able to connect to the network, go to a page, insert 2 or 3 different times, submit them, and then when it's that time the Python scripts will run.
Since I've set up the Pi as an access point, I'm not sure how to reverse it and allow it to connect to wifi again without ruining the access point and the current set-up, so I'm not sure if there's an easy way to download any packages or modules I may need.
Anyway, any help anyone could give me would be incredibly useful - many thanks!!
Unless you are planning to run a production web server, for a simple application like displaying sensor status or controlling a sensor via a web page there is a simpler solution for beginners and for Python programmers. Since you are using Python, you don't have to use PHP, and you probably don't need nginx at this stage. In my experience there are two ways to do it.
1) Using http.server
The 'simple' way to serve a web page in Python is to use the standard library's http.server module, which provides a built-in socket-based HTTP server. It is less intuitive to set up for GET/POST requests and responses, though; that is too long to describe here, but I have a blog post on how to do it here.
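For reference, a bare-bones GET handler with http.server (this is a generic sketch, not the code from that blog post; the port and the sensor_reading value are placeholders) looks roughly like:

from http.server import BaseHTTPRequestHandler, HTTPServer

sensor_reading = 200  # placeholder for whatever your GPIO code measures

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = '<h1>My Sensor Web Page</h1><p>Reading: {}</p>'.format(sensor_reading)
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        self.end_headers()
        self.wfile.write(body.encode('utf-8'))

if __name__ == '__main__':
    # Bind to 0.0.0.0 so the page is reachable from a phone on the Pi's network.
    HTTPServer(('0.0.0.0', 8000), Handler).serve_forever()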
2) Using Flask web development micro framework
Flask lets you set up HTML templates, handle routes and run a web server easily from within Python. You need to install the Flask package for Python web development. The simplest Flask code that addresses your question of serving data to a web page would be:
from flask import Flask, render_template_string

app = Flask(__name__)
data = 200  # assuming this is the data you want to show in your web page

@app.route('/')
def index():
    return render_template_string('''
        <h1>My Sensor Web Page</h1>
        <p>My sensor reading is {}</p>
        '''.format(data))

if __name__ == '__main__':
    app.run(debug=True)
Launch your browser and point it to http://localhost:5000 (the Flask development server's default port), and you should see the data rendered as a web page per our simple example code.
What you will need to do is either import your code into this Flask example, or integrate it into the example, and pass the data you want to display to the render_template_string function.
I would suggest using a web framework to host the "website". Flask is one that I have used for similar applications. Since this method allows you to directly call Python functions in response to HTTP requests, it should be fairly easy to implement what you are trying to do.
As a bonus, you can use Flask with nginx, but I really don't think you need it for this specific application.

Do flask threads handle file access?

I am building a Flask app that uses a docx template to build a Word document. If I set threaded=True in app.run(), will Flask handle the critical region properly as multiple users access the file on the server concurrently?
Flask doesn't know what your code does. It's up to you to put whatever checks you need before taking an action. HTTP is a stateless protocol, you cannot make assumptions about how and when workers will access other data.
threaded=True just makes the development server handle each request in its own thread so that it can serve concurrent requests; it does not add any locking around your own code.
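If two requests must not touch the template file at the same time, you have to serialise that access yourself. A minimal sketch using a module-level threading.Lock (the file path and the build_document step are placeholders; with multiple worker processes you would need file locking or a task queue instead):

import threading
from flask import Flask, send_file

app = Flask(__name__)
docx_lock = threading.Lock()  # shared by all request threads in this process

def build_document(output_path):
    # Placeholder: fill in the docx template and save the result to output_path.
    with open(output_path, 'wb') as f:
        f.write(b'...')

@app.route('/report')
def report():
    output_path = '/tmp/report.docx'  # placeholder path
    with docx_lock:  # only one thread in the critical region at a time
        build_document(output_path)
        return send_file(output_path)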

of tornado and blocking code

I am trying to move away from CherryPy for a web service that I am working on, and one alternative I am considering is Tornado. Now, on the backend most of my requests look something like:
get POST data
see if I have it in cache (database access)
if not make multiple HTTP requests to some other web service which can take even a good few seconds depending on the number of requests
I keep hearing that one should not block the Tornado main loop. If all of the above code is executed in the post() method of a RequestHandler, does that mean I am blocking the loop? And if so, what is the appropriate way to use Tornado with the above requirements?
Tornado ships with an asynchronous HTTP client, AsyncHTTPClient (actually two implementations, if I recall correctly). Use that one if you need to make additional HTTP requests.
The database lookup should also be done with an asynchronous client so that it does not block the Tornado ioloop/main loop. I know there are a couple of tailor-made Tornado database clients out there (e.g. for Redis and MongoDB). The MySQL lib is included in the Tornado distribution.
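A rough sketch of that non-blocking pattern, written against the modern coroutine API rather than the older callback style (the URL, the /lookup route and the check_cache helper are made up for illustration):

import tornado.ioloop
import tornado.web
from tornado.httpclient import AsyncHTTPClient

async def check_cache(key):
    return None  # placeholder for a non-blocking cache/database lookup

class LookupHandler(tornado.web.RequestHandler):
    async def post(self):
        key = self.get_body_argument('key')

        cached = await check_cache(key)
        if cached is not None:
            self.write(cached)
            return

        client = AsyncHTTPClient()
        # The ioloop keeps serving other requests while this fetch is in flight.
        response = await client.fetch('https://other-service.example/api?q=' + key)
        self.write(response.body)

def make_app():
    return tornado.web.Application([(r'/lookup', LookupHandler)])

if __name__ == '__main__':
    make_app().listen(8888)
    tornado.ioloop.IOLoop.current().start()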
