Handling multiple requests in Flask - python

My Flask application has to do quite a large calculation to fetch a certain page. While Flask is doing that calculation, another user cannot access the website, because Flask is busy with it.
Is there any way that I can make my Flask application accept requests from multiple users?

Yes: deploy your application on a different WSGI server; see the Flask deployment options documentation.
The server component that comes with Flask is really only meant for development, even though it can be configured to handle concurrent requests with app.run(threaded=True) (which is the default as of Flask 1.0). The document above lists several server options that can handle concurrent requests and are far more robust and tunable.
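To make the dev-server point concrete, here is a minimal sketch (the module, route, and calculation are placeholders, not taken from the question):

    from flask import Flask

    app = Flask(__name__)

    @app.route("/slow")
    def slow():
        # Stand-in for the large calculation described in the question.
        total = sum(i * i for i in range(10_000_000))
        return str(total)

    if __name__ == "__main__":
        # Development only: handle each request in its own thread
        # (the default since Flask 1.0).
        app.run(threaded=True)

In production you would instead point a WSGI server at the same app object, e.g. gunicorn -w 4 app:app to get four worker processes (the module name app is assumed here).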

For requests that take a long time, you might want to consider starting a background job for them.
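A minimal sketch of such a background job, assuming an in-process ThreadPoolExecutor is acceptable (the routes and heavy_calculation are made up for illustration):

    import uuid
    from concurrent.futures import ThreadPoolExecutor
    from flask import Flask, jsonify

    app = Flask(__name__)
    executor = ThreadPoolExecutor(max_workers=2)
    jobs = {}  # job_id -> Future; in-memory, single-process only

    def heavy_calculation():
        # Placeholder for the large calculation.
        return sum(i * i for i in range(10_000_000))

    @app.route("/start")
    def start():
        job_id = str(uuid.uuid4())
        jobs[job_id] = executor.submit(heavy_calculation)
        return jsonify(job_id=job_id), 202

    @app.route("/result/<job_id>")
    def result(job_id):
        future = jobs.get(job_id)
        if future is None:
            return jsonify(error="unknown job"), 404
        if not future.done():
            return jsonify(status="pending"), 202
        return jsonify(result=future.result())

Note that the in-memory jobs dict only works within a single process; once you run several WSGI workers, a proper task queue such as Celery or RQ is the usual choice.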

Related

Performance of Singleton Controllers in fastapi

I am having trouble troubleshooting a performance bottleneck in one of my APIs, and I have a theory that I need somebody with deeper knowledge of Python to validate for me.
I have a FastAPI web service and a Node.js web service deployed on AWS. The Node.js API performs perfectly under heavier loads, with multiple concurrent requests taking the same amount of time to be served.
My FastAPI service, however, performs absurdly badly. If I make two requests concurrently, only one is served while the other has to wait for the first to finish, so the response time for the second request is twice as long as the first one's.
My theory is that because I use the Singleton pattern to instantiate the controller when a request comes in to a route, the object already being in use and locked causes the second request to wait until the first is resolved. Could this be it, or am I missing something very obvious here? Two concurrent requests should absolutely not be a problem for any type of web server.
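For what it's worth, the symptom described can be reproduced without any singleton at all. A hedged sketch, assuming the real workload does blocking work inside an async def endpoint (the sleeps stand in for that work):

    import asyncio
    import time
    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/blocking")
    async def blocking():
        # A blocking call inside a coroutine stalls the event loop:
        # a second concurrent request waits the full five seconds.
        time.sleep(5)
        return {"status": "done"}

    @app.get("/non-blocking")
    async def non_blocking():
        # Awaiting yields control, so concurrent requests overlap.
        await asyncio.sleep(5)
        return {"status": "done"}

Declaring the endpoint with plain def instead of async def also avoids the stall, because FastAPI runs sync endpoints in a threadpool.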

Solution to REST API caching for Python Flask app served through wsgi/nginx

I'm trying to cache the responses to API HTTP requests in a Python/Flask app served through WSGI/nginx. I used to use the Flask simplecache library for running small standalone Flask apps. Those apps are now behind a WSGI/nginx layer, which makes them much more scalable, but I am wondering whether there is a way to cache the API response at that level. I was hoping nginx would handle it, but I'm stumped when I Google it.
E.g. cache the results of http://results.com?page=5 for 3 hrs.
The lack of questions and answers on this topic makes me think I am somehow asking the wrong question.
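At the application level, one sketch using the Flask-Caching package (the successor to the simplecache approach mentioned above; the /results route is made up for illustration):

    from flask import Flask, request
    from flask_caching import Cache

    app = Flask(__name__)
    cache = Cache(app, config={"CACHE_TYPE": "SimpleCache"})

    @app.route("/results")
    @cache.cached(timeout=3 * 3600, query_string=True)
    def results():
        # query_string=True keys the cache on the full query string,
        # so ?page=5 gets its own three-hour entry.
        page = request.args.get("page", "1")
        return {"page": page}

Caching in front of the WSGI layer itself is a separate exercise; nginx's proxy_cache module is the usual tool for that.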

Do flask threads handle file access?

I am building a Flask app that uses a docx template to build a Word document. If I set threaded=True in app.run(), will Flask handle the critical region properly as multiple users access the file on the server concurrently?
Flask doesn't know what your code does. It's up to you to put whatever checks you need in place before taking an action. HTTP is a stateless protocol; you cannot make assumptions about how and when workers will access shared data.
threaded=True just enables multiple threads so that the development server can handle concurrent requests.
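If the file access does need a critical region, here is a minimal per-process sketch with a threading.Lock (the render function and filename are placeholders, and send_file with download_name assumes Flask 2.0+):

    import io
    import threading
    from flask import Flask, send_file

    app = Flask(__name__)

    # One lock per process: it coordinates threads started by
    # threaded=True, but not separate worker processes.
    docx_lock = threading.Lock()

    def render_docx():
        # Placeholder for the docx templating work; rendering into an
        # in-memory buffer keeps the output private to this request.
        return io.BytesIO(b"rendered document bytes")

    @app.route("/report")
    def report():
        with docx_lock:
            # Critical region: one thread at a time touches the shared template.
            buf = render_docx()
        return send_file(
            buf,
            download_name="report.docx",
            mimetype="application/vnd.openxmlformats-officedocument.wordprocessingml.document",
        )

Keep in mind a threading.Lock only coordinates threads within one process; across multiple WSGI worker processes you would need file locking or a different design.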

of tornado and blocking code

I am trying to move away from CherryPy for a web service that I am working on, and one alternative I am considering is Tornado. Most of my requests look something like this on the backend:
1. get POST data
2. see if I have it in cache (database access)
3. if not, make multiple HTTP requests to some other web service, which can take a good few seconds depending on the number of requests
I keep hearing that one should not block the Tornado main loop; if all of the above code is executed in the post() method of a RequestHandler, does that mean I am blocking it? And if so, what's the appropriate approach to using Tornado with the above requirements?
Tornado ships with an asynchronous HTTP client (two of them, actually, iirc): AsyncHTTPClient. Use that one if you need to make additional HTTP requests.
The database lookup should also be done using an asynchronous client in order not to block the Tornado ioloop/main loop. There are a couple of tailor-made Tornado database clients out there (e.g. for Redis and MongoDB). The MySQL lib is included in the Tornado distribution.
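In modern Tornado (5+) that looks roughly like this; the handler name, URL, and cache_lookup helper are placeholders:

    import tornado.web
    from tornado.httpclient import AsyncHTTPClient

    async def cache_lookup(key):
        # Placeholder for an asynchronous database/cache client call.
        return None

    class WorkHandler(tornado.web.RequestHandler):
        async def post(self):
            key = self.get_body_argument("key")   # 1. get POST data
            cached = await cache_lookup(key)      # 2. cache check
            if cached is not None:
                self.write(cached)
                return
            client = AsyncHTTPClient()
            # 3. Non-blocking HTTP call: awaiting hands the ioloop back
            #    to other requests while this one is in flight.
            response = await client.fetch(
                "https://other-service.example/lookup?key=" + key)
            self.write(response.body)

Each await hands control back to the ioloop, so other requests are served while the HTTP call or database lookup is pending.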
