I have a webpage made in Django that feeds data from a form to a script that takes quite a long time to run (1-5 minutes) and then returns a DetailView with the results of that script.
My problem is that I'm getting a request timeout. Is there a way to increase the time allowed before the timeout so that the script can finish?
[I have a spinner to let users know that the page is loading].
We don't change the request timeout for individual users on PythonAnywhere. In the vast majority of cases, a request that takes 5 min (or even, really, 1 min) indicates that something is very wrong with the app.
Yes, the timeout value can be adjusted in the web server configuration.
Does anyone else but you use this page? If so, you'll have to educate them to be patient and not click the Stop or Reload buttons on their browser.
There are two different services. One service (Django) receives the request from the front-end and then calls an API in the other service (Flask).
But the response time of the Flask service is high, and if the user navigates to another page, that request will be cancelled.
Should this be a background task or a pub/sub pattern? If so, how do I run it in the background and then tell the user "here is your last result"?
You have two main options:
Make an initial request to a "simple" Django view, which loads a skeleton HTML page with a spinner; some JS then triggers an XHR request to a second Django view that contains the call to the other service (Flask). This way you can properly tell your user that the loading takes time, and handle the exit on the browser side (ask for confirmation before leaving, abort the request...). See the sketch after this list.
If possible, cache the result of the Flask service, so you don't need to call it on each page load.
You can combine those two solutions by calling the service in an asynchronous request and caching its result (depending on the context, you may need to vary the cache per logged-in user, for example).
The first solution also works with pub/sub, websockets, or whatever you prefer, but a classical XHR seems fine for your case.
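For illustration, a minimal sketch of the first option combined with caching, assuming Django and the requests library; the view names, template name and FLASK_SERVICE_URL are placeholders, not anything from your project:
import requests
from django.core.cache import cache
from django.http import JsonResponse
from django.shortcuts import render

FLASK_SERVICE_URL = "http://flask-service.internal/api/report"  # placeholder

def report_page(request):
    # Returns immediately: just the skeleton page with the spinner.
    # Its JS fires an XHR to report_data() below.
    return render(request, "report_skeleton.html")

def report_data(request):
    # Slow call to the other service, cached per user so reloads are cheap.
    cache_key = f"report:{request.user.pk}"
    data = cache.get(cache_key)
    if data is None:
        resp = requests.get(FLASK_SERVICE_URL, timeout=300)
        resp.raise_for_status()
        data = resp.json()
        cache.set(cache_key, data, timeout=600)  # keep for 10 minutes
    return JsonResponse(data)
The JS side is then just an XMLHttpRequest/fetch call to the second URL that replaces the spinner with the returned data.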
On our project, we have a couple of time-expensive endpoints. Our solution was similar to a previous answer:
Once we receive a request, we call a Celery task that does its expensive work asynchronously. We do not wait for its results and return a quick response to the user. The Celery task sends its progress/results to the user via WebSockets, and the frontend handles this WS message. The benefit of this approach is that we do not spend the CPU of our backend; we spend the CPU of the Celery worker, which runs on another machine.
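A rough sketch of that hand-off, assuming Celery with Django; crunch_report, do_expensive_work and notify_user_over_websocket are illustrative names (the WebSocket push would go through whatever layer you use, e.g. Django Channels):
from celery import shared_task
from django.http import JsonResponse

@shared_task
def crunch_report(user_id, params):
    result = do_expensive_work(params)            # placeholder: the slow part, on the worker machine
    notify_user_over_websocket(user_id, result)   # placeholder: e.g. a Channels group_send

def start_report(request):
    # Enqueue and return immediately; the backend CPU is barely touched.
    crunch_report.delay(request.user.pk, dict(request.GET))
    return JsonResponse({"status": "started"})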
Basically I'm running a Flask web server that crunches a bunch of data and sends it back to the user. We aren't expecting many users (~60), but I've noticed what could be an issue with concurrency. Right now, if I open a tab and send a request to have some data crunched, it takes about 30s; for our application that's OK.
If I open another tab and send the same request at the same time, Gunicorn will handle them concurrently, which is great if we have two separate users making two separate requests. But what happens if one user opens 4 or 8 tabs and sends the same request? It backs up the server for everyone else. Is there a way I can tell Gunicorn to only accept 1 request at a time from the same IP?
A better solution than the answer by #jon would be to limit access in your web server instead of the application server. A good practice is to keep a separation between the responsibilities of the different layers of your application. Ideally the application server, Flask, should not carry any configuration for rate limiting or care about where the requests come from. The responsibility of the web server, in this case nginx, is to route requests based on certain parameters to the right upstream, and the limiting should be done at this layer.
Now, coming to the limiting, you can do it using the limit_req_zone directive in the http block of your nginx config:
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
    ...
    server {
        ...
        location / {
            limit_req zone=one burst=5;
            proxy_pass ...
        }
    }
}
where $binary_remote_addr is the IP of the client; no more than 1 request per second on average is allowed, with bursts not exceeding 5 requests.
Pro-tip: since subsequent requests from the same IP are held in a queue, there is a good chance of nginx timing out. Hence, it is advisable to set a higher proxy_read_timeout and, if the reports take longer, to also adjust Gunicorn's timeout.
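For illustration only, the nginx side of that adjustment might look like this (300s is an arbitrary value that should exceed your slowest report):
proxy_read_timeout 300s;    # in the same location/server block as proxy_pass
On the Gunicorn side the equivalent would be starting the workers with something like gunicorn --timeout 300, since the default worker timeout is 30 seconds.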
See the nginx documentation for limit_req_zone; nginx also has a blog post on rate limiting.
This is probably NOT best handled at the Flask level. But if you had to do it there, it turns out someone already built a Flask extension to do just this:
https://flask-limiter.readthedocs.io/en/stable/
If a request takes at least 30s, then set your per-address limit to one request every 30s. This will solve the issue of impatient users obsessively clicking instead of waiting for a very long process to finish.
This isn't exactly what you requested, since longer/shorter requests may overlap and allow multiple requests at the same time, which doesn't fully exclude the multiple-tabs behaviour you describe. That said, if you are able to tell your users to wait 30 seconds for anything, it sounds like you are in the driver's seat for setting UX expectations. A good wait/progress message will probably help too, if you can build an asynchronous server interaction.
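Combining the plugin suggestion with the one-request-per-30-seconds idea, a minimal sketch might look like this (the Limiter constructor signature varies slightly between Flask-Limiter versions, and crunch_data is a placeholder for the slow view):
from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
limiter = Limiter(get_remote_address, app=app)  # limit by client IP address

@app.route("/crunch")
@limiter.limit("2 per minute")  # roughly one request every 30 seconds per IP
def crunch():
    return crunch_data()        # placeholder for the 30-second report generation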
To test my API, I need to send a request to my viewer URL, on which there is a tracking service that tells my API how much time I've spent on the page (classic).
I have this small function in my tests:
import requests

def does_it_track(response, **kwargs):
    # some unrelated actions
    r = requests.get('my_viewer_url')
This request works fine, but it lasts less than a second, which doesn't allow me to test my statistics generator, nor the precision of my tracker.
I've tried:
This SO question: how to make python request.get wait a few seconds? It didn't help.
The sleep method (but I got a "has no attribute 'sleep'" error).
Repeating the request, but that obviously creates several stats, and I only need one longer one.
Does anyone know of a not-too-complicated way to make my request wait on my page?
I'm on Python 2.7.
Thank you!
"how many time you've spent on the page" has nothing to do with the HTTP request/response cycle, but with your browser.
From the server's point of view, the server gets a request, returns a response and the job is over, period - and from the client's point of view once the server returned a response the HTTP transaction is over too. There's not even a notion of "page" here, only HTTP request and response.
Your "tracker" is (obviously) using javascript to send data from the browser itself (most likely by sending a request each X seconds indicating the page is still displayed in the browser). IOW, the only way to test this is to use a headless browser that will execute javascript.
Try VCR; it might help you solve your issue. With it you can record your request and see what's happening.
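If you go that route, a minimal vcrpy sketch could look like this (the cassette path is arbitrary):
import vcr
import requests

# Records the HTTP interaction to a cassette on the first run, replays it afterwards.
with vcr.use_cassette("fixtures/viewer.yaml"):
    r = requests.get("my_viewer_url")   # same placeholder URL as in the question
    print(r.status_code)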
I have a rather unusual task, so I would like to ask the experts for a piece of advice :)
I need to build a small Flask-based web app with a built-in video player. Users will have to log in to access the videos. The problem is that I need to limit users by the amount of time they can spend using the service.
Could someone please recommend a possible way to make this work, or help me find a place to get started?
What I am thinking of: what if I create a user-profile field like "credits_minutes", and find a way to decrease credits_minutes by one every minute?
Sessions are based on requests. From my understanding, what you are trying to do is measure the amount of time actually spent on the site? You'll need some kind of keep-alive from the client,
such as WebSockets, repeated JavaScript calls or something else that tells you they are still on the site, and base your logic on that.
A simple solution would be to write something with jQuery that polls an endpoint of your choice, where you do something time-based for each poll, such as saving the oldest call and comparing it to each new one that arrives; when X minutes have elapsed, redirect the user.
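A rough sketch of that polling idea in Flask, assuming a hypothetical credits_seconds field on your user model, a current_user()/save() layer that your app provides, and a JS timer that POSTs to /heartbeat every 60 seconds:
from datetime import datetime
from flask import Flask, jsonify, session

app = Flask(__name__)
app.secret_key = "change-me"

@app.route("/heartbeat", methods=["POST"])
def heartbeat():
    user = current_user()                     # placeholder for your auth layer
    now = datetime.utcnow()
    last = session.get("last_heartbeat")
    if last is not None:
        elapsed = (now - datetime.fromisoformat(last)).total_seconds()
        user.credits_seconds -= min(elapsed, 120)   # cap in case polls were missed
        save(user)                            # placeholder for your persistence layer
    session["last_heartbeat"] = now.isoformat()
    if user.credits_seconds <= 0:
        return jsonify({"expired": True}), 403      # client-side JS stops playback / redirects
    return jsonify({"expired": False, "credits_seconds": user.credits_seconds})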
From the Flask-Session documentation: https://pythonhosted.org/Flask-Session/
PERMANENT_SESSION_LIFETIME: the lifetime of a permanent session as datetime.timedelta object. Starting with Flask 0.8 this can also be an integer representing seconds.
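If the session-lifetime route is enough for you (note it caps how long one login lasts, not the total minutes a user spends across sessions), a minimal sketch would be:
from datetime import timedelta
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "change-me"
app.config["PERMANENT_SESSION_LIFETIME"] = timedelta(minutes=30)  # arbitrary value

@app.before_request
def make_session_permanent():
    # Sessions must be marked permanent for the lifetime above to apply.
    session.permanent = True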
I have a Python web application in which one function can take up to 30 seconds to complete.
I have been kicking off the process with a cURL request (inc. parameters) from PHP but I don't want the user staring at a blank screen the whole time the Python function is working.
Is there a way to have it process the data 'in the background', e.g. close the http socket and allow the user to do other things while it continues to process the data?
Thank you.
You should use an asynchronous approach to transfer the data from the PHP script (or directly from the Python script) to an already-rendered HTML page on the user's side.
Check a JavaScript framework for whichever way is easiest for you to do that (for example, jQuery). Then return an HTML page without the results to the user, along with JavaScript code that shows a "calculating" animation and fetches the results, in XML or JSON, from the proper URL once they are done.
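A minimal sketch of the server side of that pattern, assuming Flask (the question doesn't name a framework) and an in-memory store; in production you would hand the work to a task queue such as Celery or RQ instead of a bare thread:
import threading
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
results = {}   # job_id -> result; use a real store (Redis, a DB) in production

def crunch(job_id, params):
    results[job_id] = do_expensive_work(params)   # placeholder for the 30-second function

@app.route("/start", methods=["POST"])
def start():
    job_id = str(uuid.uuid4())
    results[job_id] = None
    threading.Thread(target=crunch, args=(job_id, request.form.to_dict())).start()
    return jsonify({"job_id": job_id})            # returns immediately

@app.route("/result/<job_id>")
def result(job_id):
    # The page's JavaScript polls this URL until "done" is true, then renders the result.
    return jsonify({"done": results.get(job_id) is not None,
                    "result": results.get(job_id)})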