I have a Django app running on Heroku, which consists of a web process and a worker process (running the Django background tasks library). My Procfile looks like this:
web: gunicorn search.wsgi --log-file -
worker: python manage.py process_tasks
I need to run a Python management command when the worker starts (and not when the web process starts), to check some issues related to Heroku's daily dyno restart. Specifically, when the worker restarts, I need to run python manage.py somescript.py, and then python manage.py process_tasks (the command that starts the worker as required by the DJB library).
How can I achieve this? Is there any way to run two or more commands per process in the Procfile? Thanks in advance!
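One common approach (a sketch, not confirmed by the question) is to chain both commands in the worker entry, since Heroku runs each Procfile entry through a shell; note that management commands are normally invoked without the .py extension, so the custom command is written here as somescript:
worker: python manage.py somescript && python manage.py process_tasks
With &&, process_tasks only starts if somescript exits successfully; a plain ; would run it unconditionally.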
Related
I have some Celery workers in a Heroku app. My app is using Python 3.6 and Django; these are the relevant dependencies and their versions:
celery==3.1.26.post2
redis==2.10.3
django-celery==3.2.2
I do not know if they are useful to this question, but just in case: on Heroku we are running the Heroku-18 stack.
As usual, we have our workers declared in a Procfile, with the following content:
web: ... our django app ....
celeryd: python manage.py celery worker -Q celery --loglevel=INFO -O fair
one_type_of_worker: python manage.py celery worker -Q ... --maxtasksperchild=3 --loglevel=INFO -O fair
another_type: python manage.py celery worker -Q ... --maxtasksperchild=3 --loglevel=INFO -O fair
So, my current understanding of this process is the following:
Our celery queues run on multiple workers, each worker runs as a dyno on Heroku (not a server, but a “worker process” kind of thing, since servers aren’t a concept on Heroku). We also have multiple dynos running the same celery worker with the same queue, which results in multiple parallel “threads” for that queue to run more tasks simultaneously (scalability).
The web workers, celery workers, and celery queues can talk to each other because celery manages the orchestration between them. I think it's specifically the broker that handles this responsibility. But for example, this lets our web workers schedule a celery task on a specific queue and it is routed to the correct queue/worker, or a task running in one queue/worker can schedule a task on a different queue/worker.
Now here comes my question: how do the workers communicate? Do they use an API endpoint on localhost with a port? RPC? Do they use the broker URL? Magic?
I'm asking this because I'm trying to replicate this setup in ECS and I need to know how to set it up for celery.
See here for how Celery works on Heroku: https://devcenter.heroku.com/articles/celery-heroku
You can't run Celery on Heroku without getting a Heroku dyno for Celery. Also, make sure you have Redis configured in your Django Celery settings.
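A minimal sketch of that settings configuration, assuming Heroku Redis (which exposes the REDIS_URL config var) and the Celery 3.x setting names used elsewhere in this thread:
# settings.py
import os

BROKER_URL = os.environ.get('REDIS_URL', 'redis://localhost:6379/0')
CELERY_RESULT_BACKEND = BROKER_URL
All workers and the web process read the same broker URL, which is how tasks scheduled by one process reach the others.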
To run Celery on Heroku, you just add this line to your Procfile:
worker: celery -A YOUR-PROJECT_NAME worker -l info -B
Note: the above command runs both the Celery worker and Celery beat (because of the -B flag).
If you want to run them separately, you can use separate commands, but a single command is recommended.
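If you do split them, the Procfile entries might look like this (a sketch keeping the answer's YOUR-PROJECT_NAME placeholder):
worker: celery -A YOUR-PROJECT_NAME worker -l info
beat: celery -A YOUR-PROJECT_NAME beat -l info
Keep in mind each entry runs as its own dyno, so splitting them costs an extra dyno compared to the single command with -B.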
So, I have a Django project which has a background task that runs a method.
I made the following adjustments to my Procfile.
Initially
web: python manage.py collectstatic --no-input; gunicorn project.wsgi --log-file - --log-level debug
Now
web: python manage.py collectstatic --no-input; gunicorn project.wsgi --log-file - --log-level debug
worker: python manage.py process_tasks
In spite of adding the worker, when I deploy my project on Heroku it does not run the background task. The background task gets created and can be seen registered in Django admin, but it does not run. After reading various articles (one of them being https://medium.com/#201651034/background-tasks-in-django-and-heroku-58ac91bc881c) I hoped that adding worker: python manage.py process_tasks would do the job, but it didn't.
If I execute heroku run python manage.py process_tasks from the Heroku CLI, it only runs on the data which was initially present in the database and not on any new data that I add after deployment.
Note: python manage.py process_tasks is what I use to get the background task to run on my local server.
So, I'd appreciate it if anyone could help me get the background task running after deployment on Heroku.
Your Procfile seems right; you need to scale your worker dyno using heroku scale worker=1 from the Heroku CLI, or you can also scale your worker dyno from the Heroku dashboard.
To scale the worker dyno through the browser:
Visit https://dashboard.heroku.com/apps/<your-app-name>/resources
Edit your worker dyno, scale it from there, and confirm your changes
From the CLI, use the command heroku logs -t -p worker to see the status and logs for the worker dyno
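Putting the CLI steps together, a minimal sketch (heroku ps:scale is the fully qualified form of the scale command; the logs command is the one from the answer above):
heroku ps:scale worker=1   # scale the worker process type to one dyno
heroku ps                  # confirm the worker dyno is up
heroku logs -t -p worker   # tail the worker's logs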
I have recently started with Django and began a small project. I've been using Celery with a Redis worker, and every time I want to use Celery and Redis I have to run the Celery and Redis servers and then the Django server, which is a bit of a lengthy process.
I have two questions.
1. Am I doing the right thing by running the servers every time, or is there a better way to handle this process?
2. If I'm on the right track, is there any better method to do this?
I tried circus.ini, but it did not work.
If you use a UNIX system:
For this purpose you can get by with just bash. Run Celery and Redis in the background using &:
redis-server & celery -A app_name worker -l info & python manage.py runserver
The disadvantage of this approach is that Redis and Celery will keep working in the background even after you shut down the Django dev server, so you need to terminate these processes yourself. See this Unix SE answer for examples of how to do that.
So you can create two bash scripts, start.sh (containing the commands with &) and cleanup.sh (terminating the processes), and run them respectively.
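A rough sketch of what those two scripts could contain (the pkill pattern and redis-cli shutdown are assumptions about how you might stop the background processes, not something from the answer):
start.sh:
#!/bin/bash
# start redis and the celery worker in the background, then the django dev server
redis-server &
celery -A app_name worker -l info &
python manage.py runserver
cleanup.sh:
#!/bin/bash
# stop the processes left running by start.sh
pkill -f "celery -A app_name worker"
redis-cli shutdown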
For production purposes, use systemd or supervisor. You need to create conf files for your daemons and then run them.
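For instance, a minimal supervisor program section for the Celery worker might look like this (the program name and project path are placeholders, not from the question):
[program:celery_worker]
command=celery -A app_name worker -l info
directory=/path/to/project
autostart=true
autorestart=true
stopasgroup=true
A similar section would cover redis-server if you do not already run it as a system service.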
Building upon Yevhenii M.'s answer, you can start a subshell command with a trap to kill all running processes in that subshell when you hit Ctrl+C:
(trap "kill 0" SIGINT; redis-server & celery -A app_name worker -l info & python manage.py runserver)
or as a more readable multiline command:
(
trap "kill 0" SIGINT
redis-server &
celery -A app_name worker -l info &
python manage.py runserver
)
Another option is to use a Procfile manager, but that requires installing additional dependencies/programs. Something like foreman or one of its ports in other languages:
forego - Go
node-foreman - Node.js
gaffer - Java/JVM
goreman - Go
honcho - Python
proclet - Perl
shoreman - shell
crank - Crystal
houseman - Haskell
(Source: foreman's README)
For this, you create a Procfile (a file in your project root) where you specify which commands to run:
redis: redis-server
worker: celery -A app_name worker
web: python manage.py runserver
Then run foreman start.
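If you'd rather stay in the Python ecosystem, honcho reads the same Procfile, so the workflow might simply be:
pip install honcho
honcho start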
Error description:
I have a django-celery project and use supervisor to keep the Celery process running.
After a lot of activity, it threw an error and now I can't start a worker. It says:
stale pidfile exists. Removing it.
But I did not specify the pidfile path when I set up supervisor.
Question:
Where does supervisor keep the process's pidfile by default?
Could someone tell me the right way to run the commands so that I can see the tasks and workers in the Django admin site? I run the following when developing the project:
python manage.py runserver 0.0.0.0:8090
python manage.py celery events --camera=djcelery.snapshot.Camera
python manage.py celerybeat -l INFO
python manage.py celeryd -n worker_1 -l INFO
But when I run these through supervisor, with nginx + uWSGI, I see nothing in the Django admin site.
I need to create a one-off dyno in a Django application deployed on Heroku, using custom django-admin commands. I want to use the Heroku Scheduler to run the command heroku run python manage.py test_function2, which creates a one-off dyno running the function test_function2. Then I would like to use the test_function2 function to create more one-off dynos. I added example code below. My problem is associated with the line command = 'heroku run:detached myworker2'. When I use command = 'heroku run:detached myworker2' in test_function2, I get the error sh: 1: heroku: not found.
The Heroku documentation says that one-off dynos are created using heroku run. Does anyone have an idea how I can create a Heroku one-off dyno when I am already in one?
test_function2:
import os
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    def handle(self, *args, **options):
        # shells out to the Heroku CLI; this fails inside a dyno because the heroku binary is not installed there
        command = 'heroku run:detached myworker2'
        os.system(command)
Procfile:
web: sh -c 'gunicorn backend.wsgi --log-file -'
myworker2: python manage.py test_function2
myworker2: python manage.py test_function
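One possible way around the missing CLI (an assumption, not something from the original post) is to call the Heroku Platform API directly to create the one-off dyno. A minimal sketch with curl, assuming the app name and an API token are available to the dyno as config vars named APP_NAME and HEROKU_API_KEY:
curl -X POST "https://api.heroku.com/apps/$APP_NAME/dynos" \
  -H "Accept: application/vnd.heroku+json; version=3" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $HEROKU_API_KEY" \
  -d '{"command": "python manage.py test_function"}'
The same POST request can be made from Python inside the management command instead of shelling out to a CLI that is not present in the dyno.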