Please help me get out of this problem. I am getting this error when I run:
celery -A app.celery worker --loglevel=info
Error:
Unable to load celery application.
The module app.celery was not found.
My code is:
# Celery Configuration
from celery import Celery
from app import app
print("App Name=",app.import_name)
celery=Celery(app.name,broker=app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)
#celery.task
def download_content():
return "hello"
Directory structure:
newYoutube/app/auth/routes.py, and this function is present inside routes.py.
auth is a blueprint.
When invoking celery via
celery -A app.celery ...
celery will look for the name celery in the app namespace, expecting it to hold an instance of Celery. If you put that elsewhere (say, in app.auth.routes), then celery won't find it.
I have a working example you can crib from at https://github.com/davewsmith/flask-celery-starter
Or, refer to chapter 22 of the Flask Mega Tutorial, which uses rq instead of celery, but the general approach to structuring the code is similar.
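For instance, a minimal sketch of that arrangement (assuming the Flask app lives in the app package, i.e. app/__init__.py; the broker URL is illustrative):

# app/__init__.py -- keep the Celery instance at module level of the app package
from celery import Celery
from flask import Flask

app = Flask(__name__)
app.config['CELERY_BROKER_URL'] = 'redis://localhost:6379/0'

celery = Celery(app.name, broker=app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)

@celery.task
def download_content():
    return "hello"

With the instance importable as app.celery, celery -A app.celery worker --loglevel=info can find it.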
I'm trying to set up Celery on my Flask application. To do so, I followed this example.
So here is what I did:
main.py
from celery import Celery
# (app is the Flask instance created earlier in main.py)

# Celery configuration
app.config['CELERY_BROKER_URL'] = 'redis://localhost:6379/0'
app.config['CELERY_RESULT_BACKEND'] = 'redis://localhost:6379/0'

# Initialize Celery
celery = Celery(app.name, broker=app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)
The problem is that, whenever I try to start the celery process, I get the following error:
celery -A app.celery worker
Error:
Unable to load celery application.
The module app was not found.
Can anyone help me out on this?
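One detail worth checking (a guess, since only main.py is shown): -A app.celery tells celery to import a module or package named app and look for a celery attribute inside it. If the snippet above lives in main.py, the matching invocation would be something like:
celery -A main.celery worker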
I'm trying to use Django in combination with Celery.
In doing so, I came across autodiscover_tasks() and I'm not fully sure how to use it. The celery workers get tasks added by other applications (in this case a node backend).
So far I used this to start the worker:
celery worker -Q extraction --hostname=extraction_worker
which works fine.
Now I'm not sure what the general idea of the django-celery integration is. Should workers still be started externally (e.g. with the command above), or should they be managed and started from the Django application?
My celery.py looks like:
import os

from celery import Celery
from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'main.settings')
app = Celery('app')
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
Then I have 2 apps, each containing a tasks.py file with:
@shared_task
def extraction(total):
    return 'Task executed'
How can I now get the worker to register those tasks?
You just start the worker process as documented; you don't need to register anything else:
In a production environment you’ll want to run the worker in the
background as a daemon - see Daemonization - but for testing and
development it is useful to be able to start a worker instance by
using the celery worker manage command, much as you’d use Django’s
manage.py runserver:
celery -A proj worker -l info
For a complete listing of the command-line options available, use the
help command:
celery help
The celery worker collects/registers tasks when it starts, and it also consumes the tasks that it has found.
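As a minimal sketch of that flow (assuming a Django app named myapp listed in INSTALLED_APPS; the names are illustrative), a task defined with the shared_task decorator in myapp/tasks.py is found by app.autodiscover_tasks() and registered when the worker boots:

# myapp/tasks.py
from celery import shared_task

@shared_task
def extraction(total):
    # Registered as myapp.tasks.extraction when the worker starts.
    return 'Task executed'

Starting the worker with celery -A main worker -l info (assuming the project package is main, as in the DJANGO_SETTINGS_MODULE above) should then list myapp.tasks.extraction under [tasks] in the startup banner.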
I have a problem with setting up periodic tasks with celery.
I got the scheduler running by:
celery -A myproject beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
It seems as if the scheduler is up and running my task. In the admin interface, I can see and edit the task.
But it does nothing. IMHO, the file myproject.backend.tasks.importnewvideo.py should be executed.
But it is not.
In the Celery manual, I could not find any further information on how to set up a task via the admin interface.
Any ideas?
Thanks in advance.
I have a layout in my project like this (as the documentation says it has to be):
/zonia
    /backend
        __init__.py
        celery.py
        ...
    /musics
        tasks.py
        ...
    ...
In the __init__.py:
from __future__ import absolute_import, unicode_literals
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app
__all__ = ['celery_app']
In the celery.py:
from __future__ import absolute_import, unicode_literals
import os
import environ
from celery import Celery
env = environ.Env()
environ.Env.read_env()
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('C_FORCE_ROOT', 'true')
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'backend.settings')
app = Celery('backend', backend='rpc://', broker=env('broker'))
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self))
I have several shared_tasks in the tasks.py module that are like this:
@shared_task
def recalc_ranking_music():
    musics = musics_models.Music.objects.all().order_by("-num_points")
    for rank, music in enumerate(musics):
        music.ranking = rank + 1
        music.save()
When I start celery with the command celery -A backend worker -l info, the tasks that I have in the tasks.py module don't get read by the celery process, but the one in the celery.py file does.
The weird thing is that I have the exact same layout in another project and it works fine.
I have spent two days on this, and it is really taking time away. Any help?
UPDATE (from comment): 'musics' is a Django app, so it has an __init__.py file in it.
I've also tried passing the app names to the autodiscover method on the Celery instance, without any luck.
If I set app.autodiscover_tasks(force=True), it throws an error:
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
Does your musics directory have an __init__.py?
Also, try to specify the package name explicitly:
app.autodiscover_tasks(['musics'])
Celery documentation here
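A hedged sketch of what that change looks like in the celery.py shown above (keeping everything else the same and only adjusting the discovery call):

# backend/celery.py -- discovery call with explicit package names
# ... same imports and Celery() construction as above ...
app.config_from_object('django.conf:settings', namespace='CELERY')

# Name the package explicitly instead of relying on installed-app discovery;
# this scans for a musics/tasks.py module.
app.autodiscover_tasks(['musics'])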
Your problem might be related to the scope in which you are running Celery (e.g. a virtual environment).
For instance, when I run celery like this:
celery -A demoproject worker --loglevel=info
It outputs just one task (the only one in my demo app):
[tasks]
. core.tasks.demo
but when I run it from the virtualenv, it results into this:
[tasks]
. core.tasks.demo
. myapp.tasks.demo_task
See? It has discovered a new app just because of the environment.
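So, as a sketch (assuming a virtualenv created at ./venv in the project root), the difference is just whether the environment is activated before the worker starts:
source venv/bin/activate
celery -A demoproject worker --loglevel=info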
I found a solution: I just import the module (the Django app) in the celery.py file, and it reads all the tasks.
But this is not the behavior described in the Celery documentation.
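A minimal sketch of that workaround, assuming it is the musics app whose tasks module gets imported at the bottom of backend/celery.py:

# backend/celery.py (end of file)
app.autodiscover_tasks()

# Workaround: import the app's tasks module explicitly so its shared_task
# functions are registered even if autodiscovery misses them.
import musics.tasks  # noqa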
I'm trying to process some tasks using celery, and I'm not having much luck. I'm running celeryd and celerybeat as daemons. I have a tasks.py file that looks like this, with a simple app and task defined:
from celery import Celery

app = Celery('tasks', broker='amqp://user:pass@hostname:5672/vhostname')

@app.task
def process_file(f):
    # do some stuff
    # and log results
    pass
And this file is referenced from another file, process.py, which I use to monitor for file changes and which looks like:
from tasks import process_file
file_name = '/file/to/process'
result = process_file.delay(file_name)
result.get()
And with that little code, celery is unable to see tasks and process them. I can execute similar code in the python interpreter and celery processes them:
>>> from tasks import process_file
>>> process_file.delay('/file/to/process')
<AsyncResult: 8af23a4e-3f26-469c-8eee-e646b9d28c7b>
When I run the tasks from the interpreter however, beat.log and worker1.log don't show any indication that the tasks were received, but using logging I can confirm the task code was executed. There are also no obvious errors in the .log files. Any ideas what could be causing this problem?
My /etc/default/celerybeat looks like:
CELERY_BIN="/usr/local/bin/celery"
CELERYBEAT_CHDIR="/opt/dirwithpyfiles"
CELERYBEAT_OPTS="--schedule=/var/run/celery/celerybeat-schedule"
And /etc/default/celeryd:
CELERYD_NODES="worker1"
CELERY_BIN="/usr/local/bin/celery"
CELERYD_CHDIR="/opt/dirwithpyfiles"
CELERYD_OPTS="--time-limit=300 --concurrency=8"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
CELERY_CREATE_DIRS=1
So I figured out my issue here by running celery from the CLI instead of as a daemon, which let me see more detailed output of the errors that were happening. I did this by running:
user@hostname /opt/dirwithpyfiles $ su celery
celery@hostname /opt/dirwithpyfiles $ celery -A tasks worker --loglevel=info
There I could see that a permissions issue was happening as the celery user, one that did not happen when I ran the commands from the Python interpreter as my normal user. I fixed it by changing the permissions of /file/to/process so that both users could read it.
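The permission change itself was something along these lines (a sketch; the exact mode and ownership will differ per setup):
chmod a+r /file/to/process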