How to prevent Heroku clock dyno from sleeping on free tier? - python

I am using Heroku to host a Django web app. This is just for a fun project that will make no money so paying for the premium service would not make sense.
I am using APScheduler to run a cron job once per day, so I have a clock dyno running. The issue is that the clock dyno keeps going idle after 30 minutes of inactivity. I read that you can ping the app to keep it from idling, but unfortunately this only keeps the web dyno from idling; the clock dyno still idles.
Any recommendations?
I'm essentially looking for a free way to send scheduled emails once a day. I tried using Mailchimp, but you have to pay to schedule an email.

Okay, so it looks like my original solution does actually work; the issue was with the timezone that was set for the cron job.
There is a service called Kaffeine that pings your app to keep it from idling:
http://kaffeine.herokuapp.com/
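For reference, the kind of timezone mismatch that can silently shift a daily cron job can be checked with the standard library. The 09:00 fire time and America/New_York zone below are just illustrative, not taken from the question:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A job meant to fire at 09:00 New York time...
local = datetime(2023, 1, 15, 9, 0, tzinfo=ZoneInfo("America/New_York"))

# ...corresponds to 14:00 UTC in winter (EST is UTC-5), so a scheduler
# whose cron trigger is configured in UTC would run it five hours "late".
utc = local.astimezone(ZoneInfo("UTC"))
print(utc.hour)
```

Passing an explicit timezone to the scheduler's cron trigger (rather than relying on the dyno's default, which is UTC on Heroku) avoids this class of bug.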

Related

heroku timeout error H12 when calling API

I'm receiving a Heroku timeout error with code H12 when I'm calling an API via my Flask app. The API usually responds within 2 minutes. I'm calling the API via a different thread so that the main Flask app thread keeps running.
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=5) as executor:
    future = executor.submit(shub_api, website, merchant.id)
    result = future.result()  # blocks here until the API call finishes
There is some documentation on Heroku about running background tasks, but the Python examples use Redis, which I know nothing about. Are there other solutions to this problem?
This is not working because of the way Heroku is architected.
When your web application is deployed to Heroku, it runs on dynos. Dynos are "ephemeral webservers" that only live for a short time, which means the dyno handling a given user's request may be recycled soon afterwards.
Heroku dynos are constantly starting, stopping, and being moved around to other physical hosts. This means that web dynos should not be used to run tasks that take a long time to complete (there are different worker dynos for that).
Furthermore, every web request that is served by a Heroku dyno has a 30-second timeout. What this means is that if someone makes an HTTP request to your app on Heroku, your app must start responding to the client within 30 seconds, otherwise, Heroku's routing layer will issue an H12 TIMEOUT error to you because it thinks your app has frozen or gotten stuck in a loop somewhere.
To sum it up: Heroku's architecture is such that it is designed from the ground up to follow web best practices, which means having your HTTP requests finish quickly (< 30 seconds) and not relying on your web servers being permanent fixtures where you can just run code on them all the time.
What you should do instead is use a background worker process (essentially a second type of dyno that runs code to process long-running tasks) and have your web application send a notification to your worker process to start running your task code.
This is typically done via a message queue like Redis, AWS SQS, etc. This Heroku article explains the concept in more detail.
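The enqueue-and-return pattern described above can be illustrated in-process with the standard library. In a real Heroku deployment the queue would be Redis (e.g. via RQ or Celery) and the consumer a separate worker dyno, but the shape is the same; the function names here are made up for the sketch:

```python
import queue
import threading

task_queue = queue.Queue()

results = []
def slow_api_call(website, merchant_id):
    # Stand-in for the multi-minute external API call.
    results.append((website, merchant_id))

def worker():
    # Worker-dyno stand-in: pull jobs off the queue and run them.
    while True:
        job = task_queue.get()
        func, args = job
        func(*args)
        task_queue.task_done()

def handle_request(website, merchant_id):
    # Web-dyno stand-in: enqueue the slow call and respond immediately,
    # well inside Heroku's 30-second routing timeout.
    task_queue.put((slow_api_call, (website, merchant_id)))
    return "202 Accepted: job queued"

threading.Thread(target=worker, daemon=True).start()
response = handle_request("example.com", 42)
task_queue.join()  # only here so the sketch waits for the worker
```

The web handler finishes as soon as the job is enqueued; the client would later poll a status endpoint or receive a callback with the result.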

Problem with Celery task in Django, stopped for unknown reason

I made a simple script using Django and Celery which queries the Django database, compares dates with the current date, and sends an email. I use Heroku, with Redis Labs hosting the Redis server.
I used Celery beat and a Celery worker to check every 1 second.
I made a simple task which sends emails via Gmail, configured in Django's settings.py.
All fine.
When I deployed to Heroku it was working for a few minutes, then stopped.
What could be the possible reasons?
Is this the right approach?
What I think is: probably Gmail or the receiving mail server decided it was flooding/spam.
Or...
Please help and thank you in advance.
There could be a lot of reasons.
Add Sentry as a logger. It will show you all errors in real time. You will probably fit within the free plan.
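For a Django + Celery app, wiring up Sentry is a short configuration fragment in settings.py. This is a sketch; the DSN is a placeholder you would replace with the value from your Sentry project settings:

```python
# settings.py (sketch)
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration
from sentry_sdk.integrations.celery import CeleryIntegration

sentry_sdk.init(
    dsn="https://<key>@<org>.ingest.sentry.io/<project>",  # placeholder
    integrations=[DjangoIntegration(), CeleryIntegration()],
)
```

With the Celery integration enabled, exceptions raised inside tasks (such as a mail provider rejecting sends) are reported too, which is exactly the "stopped for unknown reason" case here.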

Update Environment Variables

I'm in the process of porting my local Django app to Heroku and am hitting a snag, mainly with my environment variables. I can't very well create a .env file on my web server; it would just get overwritten when I push from GitHub again. So I've set environment variables using heroku config:set VAR='' --app <app>. These seem to work. However, I'm working with an API that requires I refresh my token every 60 minutes. Locally, I developed a method to update my .env every time the task that refreshed this token was executed, but this solution clearly isn't sufficient for my web server... I've attempted to update server-level variables in Python, but I don't think that's possible. Has anyone had to deal with an issue like this? Am I approaching this all wrong?
What is the best way for me to update an environment variable on a web server (ie heroku config:set VAR='' --app <app> but in my python code)? I need this variable to update every 60 minutes (I already have the celery task code built for this). Should I modify the task to simply write it to a text file and use that text file as my "web server .env file"? I'm really lost here, so any help would be much appreciated. Thanks!
EDIT:
As requested here's more information:
I'm building some middlware for two systems. The first system posts a record to my Django API. This event kicks off a task that subsequently updates a separate financial system. This separate financial system's API requires two things, an auth_code and an access_token. The access_token must be updated every 60 minutes.
I have a refresh_token that I use to get a new access_token. This refresh_token expires every 365 days. As a result, I can simply reuse this refresh_token every time I request a new access_token.
My app is in the really early stages and doesn't require anything but a simple api post from the first system to kick off this process. This will eventually be built out to require my own sort of auth_token to access my django api.
first system --> Django App --> Finance System
https://developer.blackbaud.com/skyapi/docs/authorization/auth-code-flow/tutorial
Process:
I currently have a celery task that runs in the background every 55 minutes. This task gathers the new access_token and recreates my .env file with the new access_token.
I have a separate celery task that runs an ETL pipeline and requires the access_token to post to the financial systems api.
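One common alternative to rewriting a .env file (which won't survive on Heroku's ephemeral filesystem) is to keep the token in application state or a shared store such as the database or cache, refreshing it on expiry. A minimal stdlib sketch of that idea; the fake_refresh callable and the 55-minute lifetime stand in for the real Blackbaud token exchange:

```python
import time

class TokenStore:
    """Caches an access token in memory and refreshes it when stale."""

    def __init__(self, refresh_func, lifetime_seconds=55 * 60):
        self._refresh = refresh_func  # e.g. exchanges refresh_token for access_token
        self._lifetime = lifetime_seconds
        self._token = None
        self._expires_at = 0.0

    def get_token(self):
        # Refresh lazily: only when there is no token or it has expired.
        if self._token is None or time.monotonic() >= self._expires_at:
            self._token = self._refresh()
            self._expires_at = time.monotonic() + self._lifetime
        return self._token

# Stand-in for the real refresh call to the finance system's API.
calls = []
def fake_refresh():
    calls.append(1)
    return f"token-{len(calls)}"

store = TokenStore(fake_refresh)
first = store.get_token()
second = store.get_token()  # served from cache, no second refresh
```

Because each dyno has its own memory, a multi-dyno setup would put the token in a shared store instead (a database row, or Django's cache backed by Redis), with the Celery task writing the fresh token there.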

Internal Server Error 500 python flask app on Heroku

When I leave my app young-harbor-5584.herokuapp.com for a day or so and then try to access it, I see the error below.
Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
However, when I make a single character change and re-push to Heroku, the app seems to work fine. I know this because I removed a period from a TODO, re-pushed and launched it again.
How can I avoid this? Do I need to set more dynos?
I think this might be the answer, as I see this in the Heroku documentation:
Free dynos will sleep after a half hour of inactivity and they can be active (receiving traffic) for no more than 18 hours a day before going to sleep. If a free dyno is sleeping, and it hasn't exceeded the 18 hours, any web request will wake it. This causes a delay of a few seconds for the first request upon waking. Subsequent requests will perform normally.
Any confirmation or other advice would be greatly appreciated.

Cron in Google App Engine

I have made an app using Google App Engine in python of weekly Project and assessment report submitting.
I want to check on Friday who has submitted the report and who hasn't, and send a scheduled notification email to those who haven't submitted a report in the last week.
But I don't want to send the notification email on Monday to those who did submit the report last week; it should go only to those who haven't.
Please suggest some ideas for this.
Hard to fathom what you want (your English is very hard to parse), but anyway, besides Task Queues which are much more flexible and powerful (and may be harder to use for simple jobs that cron functionality covers perfectly), you can use cron to schedule App Engine tasks in Python by following the instructions here.
Not sure what you want, but anything you can do with cron can also be done via Task Queues in GAE, so read this: http://code.google.com/appengine/docs/python/taskqueue/
App Engine applications can perform background processing by inserting tasks (modeled as web hooks) into a queue. App Engine will detect the presence of new, ready-to-execute tasks and automatically dispatch them for execution, subject to scheduling criteria.
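Whichever scheduling mechanism runs it, the selection step the question describes (notify only non-submitters) is just a set difference over last week's submissions. A sketch with placeholder data; in the real app the two sets would come from datastore queries and the mail call from App Engine's mail API:

```python
# Hypothetical data standing in for datastore query results.
all_users = {"alice@example.com", "bob@example.com", "carol@example.com"}
submitted_last_week = {"alice@example.com"}

# The weekly cron/task handler computes the non-submitters...
to_notify = sorted(all_users - submitted_last_week)

# ...and sends each one a reminder (mail sending stubbed out here).
for address in to_notify:
    pass  # e.g. mail.send_mail(...) in App Engine

print(to_notify)  # ['bob@example.com', 'carol@example.com']
```

Users who submitted are never in `to_notify`, so they receive nothing, which is exactly the behavior the question asks for.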