Internal Server Error 500 python flask app on Heroku

When I leave my app young-harbor-5584.herokuapp.com alone for a day or so and then try to access it, I see the error below.
Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
However, when I make a single character change and re-push to Heroku, the app seems to work fine. I know this because I removed a period from a TODO, re-pushed and launched it again.
How can I avoid this? Do I need to set more dynos?
I think this might be the answer, as I see the following in the Heroku documentation.
Free dynos will sleep after a half hour of inactivity and they can be
active (receiving traffic) for no more than 18 hours a day before
going to sleep. If a free dyno is sleeping, and it hasn’t exceeded the
18 hours, any web request will wake it. This causes a delay of a few
seconds for the first request upon waking. Subsequent requests will
perform normally.
Any confirmation or other advice would be greatly appreciated.
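One common workaround (the same idea the related question below relies on) is to ping the app from outside Heroku so the free dyno never idles, keeping the 18-hours-per-day limit in mind. This is only a sketch, assuming the requests library and your app's URL; run it from an external scheduler (a cron job on another machine, for example), since a pinger running on the same free dyno would sleep along with it:

import requests

APP_URL = "https://young-harbor-5584.herokuapp.com/"

def wake_dyno():
    # Hit the app so the free dyno does not idle out after 30 minutes of inactivity.
    try:
        response = requests.get(APP_URL, timeout=30)
        print(f"Pinged {APP_URL}: {response.status_code}")
    except requests.RequestException as exc:
        print(f"Ping failed: {exc}")

if __name__ == "__main__":
    wake_dyno()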

Related

How to prevent Heroku clock dyno from sleeping on free tier?

I am using Heroku to host a Django web app. This is just for a fun project that will make no money so paying for the premium service would not make sense.
I am using APScheduler to run a cron job once per day, so I have a clock dyno running. The issue is that the clock dyno keeps going idle after 30 minutes of inactivity. I read that you can ping the app to keep it from idling, but unfortunately this only keeps the web dyno from idling; the clock dyno still idles.
Any recommendations?
I'm essentially looking for a free way to send scheduled emails once a day. I tried using Mailchimp, but you have to pay to schedule an email.
Okay, so it looks like my original solution does actually work; the issue was with the timezone that was set for the cron job.
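For reference, pinning the timezone explicitly on the scheduler avoids that class of problem. A minimal sketch of a clock process, assuming APScheduler and a send_daily_email function of your own (the 09:00 UTC trigger is just an example):

from apscheduler.schedulers.blocking import BlockingScheduler

# Pin the timezone so the cron trigger fires when you expect,
# regardless of the dyno's locale (Heroku dynos default to UTC).
sched = BlockingScheduler(timezone="UTC")

@sched.scheduled_job("cron", hour=9, minute=0)
def send_daily_email():
    # hypothetical task: replace with your actual email-sending code
    print("Sending the daily email...")

sched.start()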
There is a service called Kaffeine that pings your app to keep it from idling.
http://kaffeine.herokuapp.com/

Heroku timeout error H12 when calling API

I'm receiving a Heroku timeout error with code H12 when I'm calling an API via my Flask app. The API usually responds within 2 minutes. I'm calling the API on a different thread so that the main Flask app thread keeps running.
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=5) as executor:
    future = executor.submit(shub_api, website, merchant.id)
    result = future.result()  # blocks here until shub_api returns, so the web request still waits
There is some documentation on Heroku about running background tasks; however, the Python examples use Redis, which I know nothing about. Are there other solutions to this problem?
This is not working because of the way Heroku is architected.
When your web application is deployed to Heroku, it runs on dynos. Dynos are "ephemeral webservers" that only live for a small amount of time. This means that when a user makes a request to your app, the user's request will be handled by a dyno that may only live for a short period of time.
Heroku dynos are constantly starting, stopping, and being moved around to other physical hosts. This means that web dynos should not be used to run tasks that take a long time to complete (there are different worker dynos for that).
Furthermore, every web request that is served by a Heroku dyno has a 30-second timeout. What this means is that if someone makes an HTTP request to your app on Heroku, your app must start responding to the client within 30 seconds, otherwise, Heroku's routing layer will issue an H12 TIMEOUT error to you because it thinks your app has frozen or gotten stuck in a loop somewhere.
To sum it up: Heroku's architecture is such that it is designed from the ground up to follow web best practices, which means having your HTTP requests finish quickly (< 30 seconds) and not relying on your web servers being permanent fixtures where you can just run code on them all the time.
What you should do to resolve this issue is to use a background worker process (essentially just a second type of dyno that runs code for long-running tasks) and have your web application notify the worker process to start running your task code.
This is typically done via a message queue like Redis, AWS SQS, etc. This Heroku article explains the concept in more detail.
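As a concrete illustration of that pattern, here is a rough sketch using RQ, one of the simpler Redis-backed queues for Python; the route name, the tasks module, and the REDIS_URL config var are assumptions, not something from the original question:

import os

from flask import Flask, request
from redis import Redis
from rq import Queue

from tasks import shub_api  # hypothetical module holding the long-running function

app = Flask(__name__)
redis_conn = Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379"))
queue = Queue(connection=redis_conn)

@app.route("/scrape/<int:merchant_id>")
def start_scrape(merchant_id):
    # Enqueue the slow API call and respond immediately, well inside the 30-second window.
    job = queue.enqueue(shub_api, request.args.get("website"), merchant_id)
    return {"job_id": job.get_id()}, 202

The worker runs as a separate dyno (for example worker: rq worker in the Procfile), picks jobs off the queue, and the client can poll a second endpoint for the result.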

Problem with Celery task in Django, stopped for unknown reason

I made a simple script using Django and Celery which queries the Django database, compares dates with the current date, and sends an email. I use Heroku, and a Redis Labs server as the Redis server.
I used Celery beat and a Celery worker to check every second.
I made a simple task which sends emails from Gmail, configured via settings.py in Django.
All fine.
When I deployed to Heroku it worked for a few minutes, then stopped.
What could be the possible reasons?
Is this the right approach?
What I think is: probably Gmail or the receiving mail server decided it was being flooded.
Or...
Please help and thank you in advance.
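For context, a once-a-day beat schedule along the lines the question describes would look roughly like this; it is only a sketch, assuming a celery_app.py module and a Redis broker URL in an environment variable:

import os

from celery import Celery
from celery.schedules import crontab

app = Celery("emails", broker=os.environ.get("REDIS_URL", "redis://localhost:6379"))

@app.task
def send_reminder_emails():
    # hypothetical task: query the Django models, compare dates, send mail via Gmail SMTP
    ...

# Fire the task once a day instead of polling every second.
app.conf.beat_schedule = {
    "daily-reminders": {
        "task": "celery_app.send_reminder_emails",
        "schedule": crontab(hour=8, minute=0),
    },
}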
There could be a lot of reasons.
Add Sentry as a logger. It will show you all errors in real time. You will probably fit within the free plan.
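A minimal Sentry setup for Django is only a few lines; this is a sketch, assuming the sentry-sdk package and a SENTRY_DSN config var on Heroku:

# settings.py
import os

import sentry_sdk
from sentry_sdk.integrations.celery import CeleryIntegration
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn=os.environ.get("SENTRY_DSN"),
    integrations=[DjangoIntegration(), CeleryIntegration()],
    traces_sample_rate=0.0,  # capture errors only, which keeps you inside the free plan
)

With this in place, every unhandled exception from the web dyno and the Celery worker shows up in the Sentry dashboard, which should reveal why the task stops.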

Is there a way to limit the number of concurrent requests from one IP with Gunicorn?

Basically I'm running a Flask web server that crunches a bunch of data and sends it back to the user. We aren't expecting many users (~60), but I've noticed what could be an issue with concurrency. Right now, if I open a tab and send a request to have some data crunched, it takes about 30s; for our application that's OK.
If I open another tab and send the same request at the same time, Gunicorn will handle them concurrently. This is great if we have two separate users making two separate requests. But what happens if one user opens 4 or 8 tabs and sends the same request? It backs up the server for everyone else. Is there a way I can tell Gunicorn to only accept one request at a time from the same IP?
A better approach than the answer by @jon would be to limit access at your web server instead of the application server. It is always good to keep a separation of responsibilities between the different layers of your application. Ideally, the application server (Flask) should not carry any configuration for rate limiting or care where requests come from. The responsibility of the web server, in this case nginx, is to route requests to the right upstream based on certain parameters, and limiting should be done at this layer.
Now, coming to the limiting, you can do it with the limit_req_zone directive in the http block of the nginx config:
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
    ...
    server {
        ...
        location / {
            limit_req zone=one burst=5;
            proxy_pass ...;
        }
    }
}
where $binary_remote_addr is the client's IP; on average no more than 1 request per second is allowed, with bursts not exceeding 5 requests.
Pro-tip: since subsequent requests from the same IP are held in a queue, there is a good chance of nginx timing out. It is therefore advisable to set a higher proxy_read_timeout, and if the reports take longer, to also adjust Gunicorn's timeout.
Documentation of limit_req_zone
A blog post by nginx on rate limiting can be found here
This is probably NOT best handled at the Flask level. But if you had to do it there, it turns out someone has already built a Flask extension to do just this:
https://flask-limiter.readthedocs.io/en/stable/
If a request takes at least 30s, then set your limit to one request per address every 30s. This will solve the issue of impatient users obsessively clicking instead of waiting for a very long process to finish.
This isn't exactly what you requested, since longer or shorter requests may overlap and allow multiple requests at the same time, so it doesn't fully exclude the multiple-tabs behavior you describe. That said, if you are able to tell your users to wait 30 seconds for anything, it sounds like you are in the driver's seat for setting UX expectations. A good wait/progress message will probably help too if you can build an asynchronous server interaction.
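If you do end up handling it in Flask, a sketch with Flask-Limiter along the lines of the one-request-per-30-seconds suggestion looks roughly like this (the /crunch route is hypothetical, and the exact Limiter constructor signature varies a little between Flask-Limiter releases):

from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)

# Key requests by client IP address.
limiter = Limiter(get_remote_address, app=app)

@app.route("/crunch")
@limiter.limit("1 per 30 seconds")  # one crunch per IP every 30 seconds
def crunch():
    # stand-in for the slow data-crunching view
    return "done"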

Django: Request timeout for long-running script

I have a webpage made in Django that feeds data from a form to a script that takes quite a long time to run (1-5 minutes) and then returns a detail view with the results of that script.
The problem is that I am getting a request timeout. Is there a way to increase the allowed time before a timeout so that the script can finish?
[I have a spinner to let users know that the page is loading.]
We don't change the request timeout for individual users on PythonAnywhere. In the vast majority of cases, a request that takes 5 min (or even, really, 1 min) indicates that something is very wrong with the app.
Yes, the timeout value can be adjusted in the web server configuration.
Does anyone else but you use this page? If so, you'll have to educate them to be patient and not click the Stop or Reload buttons on their browser.
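If the app runs behind Gunicorn rather than PythonAnywhere's own setup, the worker timeout lives in a plain-Python config file. A sketch, with values that are only examples:

# gunicorn.conf.py -- load it with: gunicorn myproject.wsgi -c gunicorn.conf.py
timeout = 300          # seconds a worker may spend on one request before being killed
workers = 2            # a couple of workers so one slow request does not block everyone else
graceful_timeout = 30  # time a worker gets to finish up on restart

If a reverse proxy such as nginx sits in front, its read timeout needs to be at least as long; for scripts that routinely take minutes, moving the work into a background task and polling for the result is the more robust fix.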
