Run Django background task on heroku - python

So, I have a Django project with a background task that needs to run.
I made the following adjustments to my Procfile.
Initially:
web: python manage.py collectstatic --no-input; gunicorn project.wsgi --log-file - --log-level debug
Now:
web: python manage.py collectstatic --no-input; gunicorn project.wsgi --log-file - --log-level debug
worker: python manage.py process_tasks
In spite of adding the worker, when I deploy my project on Heroku it does not run the background task. The background task gets created and can be seen registered in the Django admin, but it does not run. After reading various articles (one of them being https://medium.com/#201651034/background-tasks-in-django-and-heroku-58ac91bc881c), I hoped that adding worker: python manage.py process_tasks would do the job, but it didn't.
If I execute heroku run python manage.py process_tasks in my Heroku CLI, it only runs on the data which was initially present in the database and not on any new data that I add after deployment.
Note: python manage.py process_tasks is what I use to get the background task to run on my local server.
So, I would appreciate any help in getting the background task to run after deployment on Heroku.

Your Procfile seems right; you need to scale your worker dyno using heroku scale worker=1 from the Heroku CLI, or you can also scale your worker dyno from the Heroku dashboard.
To scale the worker dyno through the browser:
Visit https://dashboard.heroku.com/apps/<your-app-name>/resources
Edit your worker dyno, scale it, and confirm your changes.
In the CLI, use the command heroku logs -t -p worker to see the status and logs for the worker dyno.
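Put together, a minimal CLI sequence might look like this (a sketch, assuming a single worker dyno is all you need):
heroku ps:scale worker=1
heroku ps
heroku logs -t -p worker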

Related

Run command when worker starts on Heroku-Django

I have a Django app running on Heroku, which consists of a web process and a worker process (running the django-background-tasks library). My Procfile looks like this:
web: gunicorn search.wsgi --log-file -
worker: python manage.py process_tasks
I need to run a Python management command when the worker starts (and not when the web process starts), to check some issues related to Heroku's daily dyno restart. Specifically, when the worker restarts, I need to run python manage.py somescript, and then python manage.py process_tasks (the command that starts the worker as needed by the django-background-tasks library).
How can I achieve this? Is there any way to run two or more commands per process in the Procfile? Thanks in advance!
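One common pattern (a sketch, assuming Heroku runs each Procfile entry through a shell, and reusing the somescript command name from the question) is to chain the commands with && so that process_tasks only starts after the check script exits successfully:
worker: python manage.py somescript && python manage.py process_tasks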

How to run django's "python manage.py runserver", celery's "celery -A app_name worker -l info" and redis-server in one command

I have recently started with Django, and I started doing a small project. I've been using Celery with a Redis worker, and every time I want to use Celery and Redis I have to start the Redis server, the Celery worker, and then the Django server, which is a bit of a lengthy process.
I have two questions.
1. Am I doing the right thing by starting the servers every time, or is there a better method for this process?
2. If I'm in the right direction, is there any method to do this?
I tried circus.ini, but it did not work.
If you use a UNIX system:
For this purpose you can get by with just bash. Just run Celery and Redis in the background using the & operator.
redis-server & celery -A app_name worker -l info & python manage.py runserver
The disadvantage of this approach is that Redis and Celery will keep running in the background even after you shut down the Django dev server, so you need to terminate those processes. See this Unix SE answer for examples of how to do that.
So you can create two bash scripts, start.sh (containing the commands with &) and cleanup.sh (terminating the processes), and run them respectively.
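For example, a minimal sketch of that pair of scripts (the pkill patterns are assumptions; adjust them to match your actual command lines):
start.sh:
#!/bin/bash
redis-server &
celery -A app_name worker -l info &
python manage.py runserver

cleanup.sh:
#!/bin/bash
pkill -f "celery -A app_name worker"
pkill -f redis-server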
For production purposes:
Use systemd or supervisor. You need to create conf files for your daemons and then run them.
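For illustration, a minimal supervisor program section for the Celery worker might look like this (a sketch; the section name and directory path are assumptions):
[program:celery_worker]
command=celery -A app_name worker -l info
directory=/path/to/your/project
autostart=true
autorestart=true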
Building upon Yevhenii M.'s answer, you can start a subshell command with a trap to kill all running processes in that subshell when you hit Ctrl+C:
(trap "kill 0" SIGINT; redis-server & celery -A app_name worker -l info & python manage.py runserver)
or as a more readable multiline command:
(
trap "kill 0" SIGINT
redis-server &
celery -A app_name worker -l info &
python manage.py runserver
)
Another option is to use a Procfile manager, but that requires installing additional dependencies/programs. Something like foreman or one of its ports in other languages:
forego - Go
node-foreman - Node.js
gaffer - Java/JVM
goreman - Go
honcho - Python
proclet - Perl
shoreman - shell
crank - Crystal
houseman - Haskell
(Source: foreman's README)
For this you create a Procfile (a file in your project root) where you specify which commands to run:
redis: redis-server
worker: celery -A app_name worker
web: python manage.py runserver
Then run foreman start.
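Since this is a Python project, honcho may be the most convenient of these ports: pip install honcho, then run honcho start in the directory containing the Procfile (it reads the same format).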

Deploying a python flask web application on Heroku with Windows

I am trying to deploy a Flask app I made to Heroku, without success.
The app is generated, but I get errors when I push the code to the Heroku repository.
My Flask app is inside a module called server.py and the variable is named app.
At first I tried using gunicorn, writing
web: gunicorn server:app
and deploying, but no web dynos were up and I got an error pointing at the Procfile.
I read about it and saw that Gunicorn does not really work on Windows, so I tried installing Waitress and deploying, without success. This time my Procfile was written as all of these (I tried several times):
web: waitress-serve --listen=*:8000 server.wsgi:application
web: waitress-serve --listen=*:8000 app.wsgi:application
And so on.
To add a web dyno I should scale it, because heroku ps shows that there are no dynos.
When I try to run heroku ps:scale web=1 I get:
Scaling dynos... !
▸ Couldn't find that process type.
What am I doing wrong?
I was having the same problem. In particular, waitress works locally on Windows (web: waitress-serve index:server inside a Procfile.windows file, run with the Heroku CLI via heroku local -f Procfile.windows), but it failed after Heroku deployment. The workaround for me was to test locally with waitress (as explained) but deploy with gunicorn (web: gunicorn index:server inside Procfile). Let me know if this works for you.
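Adapted to the layout in the question (Flask object app inside server.py), a minimal sketch of the two files might look like this (the local port is an assumption):
Procfile.windows (for local testing on Windows):
web: waitress-serve --listen=*:8000 server:app
Procfile (for Heroku):
web: gunicorn server:app
Test locally with heroku local -f Procfile.windows; Heroku itself only reads Procfile, so gunicorn is what runs after deployment.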

How can I run one-off dyno inside another one-off dyno on Heroku using custom django-admin commands?

I need to create a one-off dyno in a Django application deployed on Heroku using custom django-admin commands. I want to use Heroku Scheduler to run the command heroku run python manage.py test_function2, which creates a one-off dyno running the function test_function2. Then I would like to use the test_function2 function to create more one-off dynos. I added example code below. My problem is associated with the line command = 'heroku run:detached myworker2': when I use it in test_function2, I get the error sh: 1: heroku: not found.
In the Heroku documentation it is written that one-off dynos are created using heroku run. Does anyone have an idea how I can create a Heroku one-off dyno when I am already inside one?
test_function2:
import os
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    def handle(self, *args, **options):
        command = 'heroku run:detached myworker2'
        os.system(command)
Procfile:
web: sh -c 'gunicorn backend.wsgi --log-file -'
myworker2: python manage.py test_function2
myworker2: python manage.py test_function
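The heroku CLI is not installed inside dynos, which is why os.system('heroku ...') fails with heroku: not found. One workaround (a sketch, not an official recipe: it assumes you store an API token in a HEROKU_API_KEY config var, the app name in HEROKU_APP_NAME, and have the requests package installed) is to call the Heroku Platform API's dyno-creation endpoint directly:
import os

import requests
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    def handle(self, *args, **options):
        # Equivalent of `heroku run:detached ...`, but via the Platform
        # API, which is reachable from inside a running dyno.
        app_name = os.environ['HEROKU_APP_NAME']  # assumed config var
        response = requests.post(
            f'https://api.heroku.com/apps/{app_name}/dynos',
            headers={
                'Accept': 'application/vnd.heroku+json; version=3',
                'Authorization': f"Bearer {os.environ['HEROKU_API_KEY']}",
            },
            json={'command': 'python manage.py test_function'},
        )
        response.raise_for_status()
        self.stdout.write('Started one-off dyno: ' + response.json()['name'])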

How to add dyno in heroku?

At present, I am running 2 dynos in my Heroku app.
I want to add one more dyno: worker: python manage.py runworker
What should I do to add the dyno?
==========================================================
Edit:
I modified the Procfile and three dynos are now defined.
However, one of them has to run as a paid dyno. How can I add a paid dyno?
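Adding the entry to the Procfile only defines the process type; you still have to scale it, and paid dyno types are set with ps:type. A sketch of the CLI steps (standard-1x is an assumption; pick whichever paid tier you need):
heroku ps:scale worker=1
heroku ps:type worker=standard-1x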
