Celery worker crashing on Heroku - python

I am working on a Django project that I have pushed to Heroku, using Celery for background tasks. Celery works fine locally, but on Heroku I have observed that the celery worker keeps crashing. I have set CLOUDAMQP_URL properly in settings.py and configured the worker in the Procfile, but the worker still crashes.
Procfile
web: gunicorn my_django_app.wsgi --log-file -
worker: python manage.py celery worker --loglevel=info
settings.py
...
# Celery
BROKER_URL = os.environ.get("CLOUDAMQP_URL", "django://")
#CELERY_BROKER_URL = 'amqp://localhost'
BROKER_POOL_LIMIT = 1
BROKER_CONNECTION_MAX_RETRIES = 100
CELERY_TASK_SERIALIZER = "json"
CELERY_RESULT_SERIALIZER = "json"
CELERY_RESULT_BACKEND = "amqp://"
Logs
2019-08-05T15:03:51.296563+00:00 heroku[worker.1]: State changed from crashed to starting
2019-08-05T15:04:05.370900+00:00 heroku[worker.1]: Starting process with command `python manage.py celery worker --loglevel=info`
2019-08-05T15:04:06.173210+00:00 heroku[worker.1]: State changed from starting to up
2019-08-05T15:04:09.067794+00:00 heroku[worker.1]: State changed from up to crashed
2019-08-05T15:04:08.778426+00:00 app[worker.1]: Unknown command: 'celery'
2019-08-05T15:04:08.778447+00:00 app[worker.1]: Type 'manage.py help' for usage.
2019-08-05T15:04:09.048404+00:00 heroku[worker.1]: Process exited with status 1
manage.py
#!/usr/bin/env python
import os
import sys

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_django_app.settings")
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)

The "Unknown command: 'celery'" log line means manage.py has no celery subcommand; that subcommand came from the old django-celery integration, which recent Celery releases dropped in favor of the celery CLI. I made the following change in the Procfile and the error was resolved:
web: gunicorn my_django_app.wsgi --log-file -
worker: celery -A my_django_app worker -l info
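For celery -A my_django_app to find an application, the project package must expose a Celery app object. A minimal sketch of my_django_app/celery.py, following the standard Celery/Django pattern; the module path and settings names are assumptions based on the question:

# my_django_app/celery.py
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_django_app.settings")

app = Celery("my_django_app")
# Read the BROKER_URL / CELERY_* settings shown above from Django settings
app.config_from_object("django.conf:settings")
app.autodiscover_tasks()

# my_django_app/__init__.py
from .celery import app as celery_app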

Related

How to bind Heroku port to scrapyd

I created a simple Python app on Heroku to launch scrapyd. The scrapyd service starts, but it launches on port 6800. Heroku requires you to bind to the $PORT variable, and I was able to run the Heroku app locally. The logs from the process are included below. I looked at the scrapy-heroku package, but wasn't able to install it due to errors. The code in app.py of that package seems to provide some clues as to how it can be done. How can I implement this as a Python command to start scrapyd on the port provided by Heroku?
Procfile:
web: scrapyd
Heroku Logs:
2022-01-24T05:17:27.058721+00:00 app[web.1]: 2022-01-24T05:17:27+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] twistd 21.7.0 (/app/.heroku/python/bin/python 3.10.2) starting up.
2022-01-24T05:17:27.058786+00:00 app[web.1]: 2022-01-24T05:17:27+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] reactor class: twisted.internet.epollreactor.EPollReactor.
2022-01-24T05:17:27.059190+00:00 app[web.1]: 2022-01-24T05:17:27+0000 [-] Site starting on 6800
2022-01-24T05:17:27.059301+00:00 app[web.1]: 2022-01-24T05:17:27+0000 [twisted.web.server.Site#info] Starting factory <twisted.web.server.Site object at 0x7f1706e3eaa0>
2022-01-24T05:17:27.059649+00:00 app[web.1]: 2022-01-24T05:17:27+0000 [Launcher] Scrapyd 1.3.0 started: max_proc=32, runner='scrapyd.runner'
2022-01-24T05:18:25.204305+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2022-01-24T05:18:25.231596+00:00 heroku[web.1]: Stopping process with SIGKILL
2022-01-24T05:18:25.402503+00:00 heroku[web.1]: Process exited with status 137
You just need to read the PORT environment variable and write it into your scrapyd config file. The snippet below does exactly that.
# init.py
import io
import os

PORT = os.environ['PORT']

# Read to the end of the file first, so the write appends an
# http_port override instead of overwriting the start of the file.
with io.open("scrapyd.conf", 'r+', encoding='utf-8') as f:
    f.read()
    f.write(u'\nhttp_port = %s\n' % PORT)
Source: https://github.com/scrapy/scrapyd/issues/367#issuecomment-591446036
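One way to wire this up (assuming the snippet above is saved as init.py next to scrapyd.conf) is to run it before scrapyd starts; Heroku executes Procfile commands through a shell, so chaining works:
web: python init.py && scrapyd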

Django celery ImportError: no module named celery when using gunicorn bind?

I have been searching everywhere for an answer to this. I am setting up a server for my Django website on Ubuntu 16.04 (DigitalOcean), and my Django site requires Celery for some periodic tasks.
It works in my development environment: running python manage.py celery beat and python manage.py celery worker works just fine. Everything was installed inside a virtualenv as well.
Here are my files:
# __init__.py
from __future__ import absolute_import

from .celery_tasks import app as celery_app  # noqa

# celery_tasks.py
from __future__ import absolute_import

import os

from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

from django.conf import settings  # noqa

app = Celery('myproject')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
And this is the error that has been happening:
# gunicorn --bind 0.0.0.0:8000 myproject.wsgi:application
File "/root/myproject/myproject/__init__.py", line 2, in <module>
from .celery_tasks import app as celery_app # noqa
File "/root/myproject/myproject/celery_tasks.py", line 4, in <module>
from celery import Celery
ImportError: No module named celery
[2017-08-13 07:29:36 +0000] [5463] [INFO] Worker exiting (pid: 5463)
[2017-08-13 07:29:36 +0000] [5458] [INFO] Shutting down: Master
[2017-08-13 07:29:36 +0000] [5458] [INFO] Reason: Worker failed to boot.
There is also some more traceback which didn't seem as relevant.
Please, any help is MUCH appreciated. I think I'm missing something simple but I've been struggling with this for hours.
The error says it cannot find celery.
So add celery to your requirements.txt file, and it will be installed when you deploy.
Or, on your server, run:
pip install celery
or add celery to requirements.txt and run:
pip install -r requirements.txt
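As a quick sanity check (not part of the original answer), you can confirm that celery is importable by the same interpreter gunicorn runs under:
python -c "import celery; print(celery.__version__)"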

Django Celery Periodic Task not running (Heroku)?

I have a Django application that I've deployed to Heroku. I'm trying to use celery to create a periodic task every minute. However, when I watch the logs for the worker using the following command:
heroku logs -t -p worker
I don't see my task being executed. Perhaps there is a step I'm missing? My configuration is below.
Procfile
web: gunicorn activiist.wsgi --log-file -
worker: celery worker --app=trending.tasks.app
tasks.py
import celery
app = celery.Celery('activiist')

import os
from celery.schedules import crontab
from celery.task import periodic_task
from django.conf import settings

app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
                CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])
os.environ['DJANGO_SETTINGS_MODULE'] = 'activiist.settings'

from trending.views import *


@periodic_task(run_every=crontab())
def add():
    getarticles(30)
One thing to add: when I run the task from the Python shell using the delay() command, the task does indeed run (it shows in the logs), but it runs only once, at the moment it's called.
You need a separate process for beat, the scheduler responsible for dispatching periodic tasks:
web: gunicorn activiist.wsgi --log-file -
worker: celery worker --app=trending.tasks.app
beat: celery beat --app=trending.tasks.app
Note that beat only sends the scheduled tasks to the queue; the worker is still what executes them, so both lines are needed. The other possibility is to embed beat inside the worker:
web: gunicorn activiist.wsgi --log-file -
worker: celery worker --app=trending.tasks.app -B
but to quote the Celery documentation:
You can also embed beat inside the worker by enabling the worker's -B option; this is convenient if you will never run more than one worker node, but it's not commonly used and for that reason is not recommended for production use.
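As a side note, on newer Celery versions, where celery.task.periodic_task is deprecated, the same schedule can be declared directly on the app. A minimal sketch, assuming the app object from tasks.py above and the default task name trending.tasks.add:

from celery.schedules import crontab

# Equivalent to @periodic_task(run_every=crontab()):
# crontab() with no arguments fires once a minute.
app.conf.beat_schedule = {
    'add-every-minute': {
        'task': 'trending.tasks.add',
        'schedule': crontab(),
    },
}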

Celery Worker is offline

I am running a celery worker
foreman run python manage.py celery worker -E --maxtasksperchild=1000
And a celerymon
foreman run python manage.py celerymon
as well as celerycam
foreman run python manage.py celerycam
The Django admin shows that my worker is offline and all tasks remain in the delayed state. I have tried killing and restarting it several times, but it does not seem to come online.
Here is my configuration
BROKER_TRANSPORT = 'amqplib'
BROKER_POOL_LIMIT=0
BROKER_CONNECTION_MAX_RETRIES = 0
BROKER_URL = os.environ.get('AMQP_URL')
CELERY_RESULT_BACKEND = 'database'
CELERY_TASK_RESULT_EXPIRES = 14400

adding clock support to heroku with python django

This article explains how to run background jobs on Heroku.
I have my clock.py file in the same directory as the Procfile.
Here is my clock.py file:
from apscheduler.scheduler import Scheduler
from subscription.views import send_subs_mail

sched = Scheduler()


@sched.cron_schedule(day_of_week='sat', hour=23)
def scheduled_job():
    print 'This job is run every saturday at 11pm.'
    send_subs_mail()

sched.start()

while True:
    pass
I have updated my Procfile, to look as follows:
web: newrelic-admin run-program gunicorn hellodjango.wsgi -b 0.0.0.0:$PORT -w 3
clock: python clock.py
Earlier it used to look like:
web: newrelic-admin run-program gunicorn hellodjango.wsgi -b 0.0.0.0:$PORT -w 3
Having done all this, I do the following in my terminal:
heroku ps:scale clock=1
And I get the following error:
Scaling clock processes... failed ! No such type as clock.
I have updated my requirements file, as mentioned in the article.
And here are the Heroku logs:
2013-05-22T15:52:08.200587+00:00 heroku[web.1]: Starting process with command `newrelic-admin run-program gunicorn hellodjango.wsgi -b 0.0.0.0:48070 -w 3 clock: python clock.py`
2013-05-22T15:52:09.081883+00:00 heroku[web.1]: Process exited with status 0
2013-05-22T15:52:10.691985+00:00 app[web.1]: gunicorn: error: No application module specified.
2013-05-22T15:52:10.683457+00:00 app[web.1]: Usage: gunicorn [OPTIONS] APP_MODULE
2013-05-22T15:52:10.691843+00:00 app[web.1]:
2013-05-22T15:52:12.504514+00:00 heroku[web.1]: Process exited with status 2
2013-05-22T15:52:12.525765+00:00 heroku[web.1]: State changed from crashed to starting
2013-05-22T15:52:12.525765+00:00 heroku[web.1]: State changed from starting to crashed
2013-05-22T15:52:16.198417+00:00 heroku[web.1]: Starting process with command `newrelic-admin run-program gunicorn hellodjango.wsgi -b 0.0.0.0:55149 -w 3 clock: python clock.py`
2013-05-22T15:52:17.343513+00:00 app[web.1]: Usage: gunicorn [OPTIONS] APP_MODULE
2013-05-22T15:52:17.343513+00:00 app[web.1]: gunicorn: error: No application module specified.
2013-05-22T15:52:17.343513+00:00 app[web.1]:
2013-05-22T15:52:18.557818+00:00 heroku[web.1]: State changed from starting to crashed
2013-05-22T15:52:18.542409+00:00 heroku[web.1]: Process exited with status 2
What is wrong?
Each process type should be on its own line, like so:
web: newrelic-admin run-program gunicorn hellodjango.wsgi -b 0.0.0.0:$PORT -w 3
clock: python clock.py
Hope that helps!
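After deploying the corrected Procfile, the scaling command from the question should succeed, and heroku ps will confirm that both dynos are up:
heroku ps:scale clock=1
heroku ps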
