I have a project where I start my FastAPI app from a file (python main.py):
import uvicorn
from configuration import API_HOST, API_PORT
if __name__ == "__main__":
    uvicorn.run("endpoints:app", host="localhost", port=8811, reload=True, access_log=False)
Inside endpoints.py I have:
from celery import Celery
from fastapi import FastAPI
import os
import threading
import time
# Create the FastAPI application object
app = FastAPI(
    title="MYFASTAPI",
    description="MYDESCRIPTION",
    version="1.0",
    contact={"name": "ME!"},
)
celery = Celery(__name__)
celery.conf.broker_url = os.environ.get("CELERY_BROKER_URL", "redis://localhost:6379")
celery.conf.result_backend = os.environ.get("CELERY_RESULT_BACKEND", "redis://localhost:6379")
celery.conf.task_track_started = True
celery.conf.task_serializer = "pickle"
celery.conf.result_serializer = "pickle"
celery.conf.accept_content = ["pickle"]
# By default Celery uses as many worker processes as the instance has CPU cores.
celery.conf.worker_concurrency = os.cpu_count()
# Start the Celery worker. I start it in a separate thread, so FastAPI can run in parallel.
worker = celery.Worker()

def start_worker():
    worker.start()

ce = threading.Thread(target=start_worker)
ce.start()
#app.post("/taskA")
def taskA():
task = ask_taskA.delay()
return {"task_id": task.id}
#celery.task(name="ask_taskA", bind=True)
def ask_taskA(self):
time.sleep(100)
#app.post("/get_results")
def get_results(task_id):
task_result = celery.AsyncResult(task_id)
return {'task_status': task_result.status}
Given this code, how can I have two different queues, assign a specific number of workers to each queue, and assign a specific task to one of these queues?
I read that people usually execute Celery as:
celery -A proj worker
but the structure of my project limited me because of some imports, and in the end I settled on starting the Celery worker in a separate thread (which works perfectly).
Based on the official Celery documentation on manual routing (https://docs.celeryq.dev/en/stable/userguide/routing.html#manual-routing), you can do the following to define different queues:
from kombu import Queue
app.conf.task_default_queue = 'default'
app.conf.task_queues = (
    Queue('default', routing_key='task.#'),
    Queue('feed_tasks', routing_key='feed.#'),
)
app.conf.task_default_exchange = 'tasks'
app.conf.task_default_exchange_type = 'topic'
app.conf.task_default_routing_key = 'task.default'
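To cover the other two parts of your question (routing a specific task and giving each queue its own number of workers), the usual pattern is a task_routes entry plus one worker process per queue. A minimal sketch, assuming the Celery app is the celery object from your endpoints.py and the task is the one registered as "ask_taskA":

# Route this task to the feed_tasks queue whenever it is sent.
celery.conf.task_routes = {
    "ask_taskA": {"queue": "feed_tasks", "routing_key": "feed.ask_taskA"},
}

Then, if you can run separate worker processes, each one consumes a single queue with its own concurrency:

celery -A endpoints worker -Q default -c 2 -n default@%h
celery -A endpoints worker -Q feed_tasks -c 4 -n feed@%h

You can also pick the queue per call with ask_taskA.apply_async(queue="feed_tasks"). I haven't verified whether the celery.Worker() started in a thread, as in your code, can be limited to specific queues in your setup.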
I can't import my celery app to run tasks from my main Python application. I want to be able to run celery tasks from the myprogram.py file.
My celery_app.py file is as follows:
import celery
app = celery.Celery('MyApp', broker='redis://localhost:6379/0')
app.conf.broker_url = 'redis://localhost:6379/0'
app.conf.result_backend = 'redis://localhost:6379/0'
app.autodiscover_tasks()
@app.task(ignore_result=True)
def task_to_run():
    print("Task Running")

# The following call queues the task for a Celery worker
task_to_run.delay()
if __name__ == '__main__':
    app.start()
Application structure
projectfolder/core/celery_app.py # Celery app
projectfolder/core/myprogram.py # My Python application
projectfolder/core/other python files...
The file myprogram.py contains the following:
from .celery_app import task_to_run
task_to_run.delay()
Error:
Received unregistered task of type 'projectfolder.core.celery_app.task_to_run'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you're using relative imports?
strategy = strategies[type_]
KeyError: 'projectfolder.core.celery_app.task_to_run'
Thanks
Interesting, I didn't know about autodiscover_tasks; I guess it's new in 4.1.
As I see in the documentation, this function takes a list of packages to search. You might want to call it with:
app.autodiscover_tasks(['core.celery_app'])
or it might be better to extract the task to a separate file called tasks.py, and then it would be just:
app.autodiscover_tasks(['core'])
Alternatively, you can use the include parameter when creating the Celery instance:
app = celery.Celery('MyApp', broker='redis://localhost:6379/0', include=['core.celery_app']), or wherever your tasks are.
Good luck
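For reference, a minimal sketch of the tasks.py layout suggested above (the exact package path is an assumption based on the projectfolder/core structure in the question, and the registered task name depends on how the worker is started):

# projectfolder/core/celery_app.py
import celery

app = celery.Celery('MyApp', broker='redis://localhost:6379/0')
app.conf.result_backend = 'redis://localhost:6379/0'
app.autodiscover_tasks(['core'])  # looks for a core.tasks module

# projectfolder/core/tasks.py
from .celery_app import app

@app.task(ignore_result=True)
def task_to_run():
    print("Task Running")

myprogram.py then imports from .tasks import task_to_run and calls task_to_run.delay(), and the worker is started from a directory that makes core importable, e.g. celery -A core.celery_app worker.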
When run under Flask's local development server, jobs are added and run normally. When run under uWSGI, the job appears to get added to the job store but is never executed. A simple example with the described undesired behavior is given below:
__init__.py
import flask
from datetime import datetime, timedelta
from flask_apscheduler import APScheduler
app = flask.Flask("apscheduler_test")
app.config["SCHEDULER_API_ENABLED"] = True
scheduler = APScheduler()
scheduler.init_app(app)
scheduler.start()
def test_job():
    print("test job run")

@app.route("/test")
def apscheduler_test():
    print("Adding Job")
    scheduler.add_job(id="101",
                      func=test_job,
                      next_run_time=(datetime.now() + timedelta(seconds=10)))
    return "view", 200

if __name__ == '__main__':
    app.run(port=5050)
apschedule_test.ini
[uwsgi]
pidfile = /var/run/%n.pid
chdir = /opt/apscheduler
master = true
enable-threads = true
threads = 20
http-socket = :48197
logto = /var/log/%n.log
plugin = python3
module = %n
callable = app
processes = 1
uid = root
gid = root
daemonize = /var/log/apscheduler_test.log
Try adding this flag to your ini file:
lazy-apps=true
Similar issue: uWSGI lazy-apps and ThreadPool
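For context: by default uWSGI loads the application once in the master process and then forks the worker processes, and background threads started at import time (like APScheduler's scheduler thread here) do not survive the fork. With lazy-apps enabled, each worker imports the app itself, so the scheduler thread actually runs in the process that serves requests. Assuming the ini above, the relevant part becomes:

[uwsgi]
enable-threads = true
lazy-apps = true

(keep the rest of your existing settings unchanged).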
This is the first time I'm using Celery, and honestly, I'm not sure I'm doing it right. My system has to run on Windows, so I'm using RabbitMQ as the broker.
As a proof of concept, I'm trying to create a single object where one task sets the value, another task reads the value, and I also want to show the current value of the object when I go to a certain url. However I'm having problems sharing the object between everything.
This is my celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings
os.environ.setdefault('DJANGO_SETTINGS_MODULE','cesGroundStation.settings')
app = Celery('cesGroundStation')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
The object I'm trying to share is:
class SchedulerQ():
    item = 0

    def setItem(self, item):
        self.item = item

    def getItem(self):
        return self.item
This is my tasks.py
from celery import shared_task
from time import sleep
from scheduler.schedulerQueue import SchedulerQ
schedulerQ = SchedulerQ()
@shared_task()
def SchedulerThread():
    print("Starting Scheduler")
    counter = 0
    while(1):
        counter += 1
        if(counter > 100):
            counter = 0
        schedulerQ.setItem(counter)
        print("In Scheduler thread - " + str(counter))
        sleep(2)
    print("Exiting Scheduler")

@shared_task()
def RotatorsThread():
    print("Starting Rotators")
    while(1):
        item = schedulerQ.getItem()
        print("In Rotators thread - " + str(item))
        sleep(2)
    print("Exiting Rotators")

@shared_task()
def setSchedulerQ(schedulerQueue):
    schedulerQ = schedulerQueue

@shared_task()
def getSchedulerQ():
    return schedulerQ
I'm starting my tasks in my apps.py. I'm not sure if this is the right place, as the tasks/workers don't seem to work until I start the workers in a separate console where I run celery -A cesGroundStation -l info.
from django.apps import AppConfig
from scheduler.schedulerQueue import SchedulerQ
from scheduler.tasks import SchedulerThread, RotatorsThread, setSchedulerQ, getSchedulerQ
class SchedulerConfig(AppConfig):
    name = 'scheduler'

    def ready(self):
        schedulerQ = SchedulerQ()
        setSchedulerQ.delay(schedulerQ)
        SchedulerThread.delay()
        RotatorsThread.delay()
In my views.py I have this:
def schedulerQ():
    queue = getSchedulerQ.delay()
    return HttpResponse("Your list: " + queue)
The Django app runs without errors, however my output from celery -A cesGroundStation -l info is this: (screenshot of the Celery command output)
First, it seems to start multiple "SchedulerThread" tasks; secondly, the "SchedulerQ" object isn't being passed to the Rotators, as they never read the updated value.
And if I go to the url for the views.schedulerQ view I get this error:
(screenshot of the Django views error)
I have very, very little experience with Python, Django and web development in general, so I have no idea where to start with that last error. Solutions suggest using Redis to pass the object to the views, but I don't know how I'd do that using RabbitMQ. Later on, the schedulerQ object will implement a queue, and the scheduler and rotators will act more like a producer/consumer pair, with the view showing the contents of the queue, so I believe using the database might be too resource intensive. How can I share this object across all tasks, and is this even the right approach?
If you need to share information between tasks (in this example, what you are currently putting in your class), the right approach is to use a persistence layer, such as a database or result backend, to store it.
Celery operates on a distributed message-passing paradigm. A good way to distill that idea for this example is that your module is executed independently every time a task is dispatched: whenever a task is dispatched to Celery, you must assume it runs in a separate interpreter and is loaded independently of other tasks. That SchedulerQ class is instantiated anew each time.
You can share information between tasks in the ways described in the docs, and some of the best-practice tips there discuss data-persistence concerns.
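As one illustration (not the only option), here is a minimal sketch that shares the counter through Django's cache instead of an in-process object. The cache key name is made up for the example, and with RabbitMQ as the broker you would point Django's CACHES setting at a store every process can reach (Redis, Memcached, or the database cache):

# tasks.py (sketch)
from celery import shared_task
from django.core.cache import cache

@shared_task
def set_scheduler_item(value):
    # Store the shared value where every worker and the web process can see it.
    cache.set("scheduler_item", value, timeout=None)

@shared_task
def get_scheduler_item():
    # Read the shared value; returns None if it has never been set.
    return cache.get("scheduler_item")

The view can then call cache.get("scheduler_item") directly instead of dispatching a task and concatenating an AsyncResult into the response.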
I have an issue with Celery queue routing when using current_app.send_task
I have two workers (one for each queue):
python manage.py celery worker -E -Q priority --concurrency=8 --loglevel=DEBUG
python manage.py celery worker -Q low --concurrency=8 -E -B --loglevel=DEBUG
I have two queues defined in my celeryconfig.py file:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.core.exceptions import ImproperlyConfigured
from celery import Celery
from django.conf import settings
from kombu import Queue
try:
    app = Celery('proj', broker=getattr(settings, 'BROKER_URL', 'redis://'))
except ImproperlyConfigured:
    app = Celery('proj', broker='redis://')
app.conf.update(
    CELERY_TASK_SERIALIZER='json',
    CELERY_ACCEPT_CONTENT=['json'],
    CELERY_RESULT_SERIALIZER='json',
    CELERY_RESULT_BACKEND='djcelery.backends.database:DatabaseBackend',
    CELERY_DEFAULT_EXCHANGE='priority',
    CELERY_DEFAULT_EXCHANGE_TYPE='topic',
    CELERY_DEFAULT_ROUTING_KEY='task.priority',
    CELERY_QUEUES=(
        Queue('priority', routing_key='priority.#'),
        Queue('low', routing_key='low.#'),
    ),
    CELERY_IMPORTS=('mymodule.tasks',),
    CELERY_ENABLE_UTC=True,
    CELERY_TIMEZONE='UTC',
)

if __name__ == '__main__':
    app.start()
In the task definitions, we use the decorator to explicitly set the queue:
@task(name='mymodule.mytask', routing_key='low.mytask', queue='low')
def mytask():
    # does something
    pass
This task indeed runs in the low queue when it is called using:
from mymodule.tasks import mytask
mytask.delay()
But that's not the case when it's run using the following (it goes to the default queue, "priority"):
from celery import current_app
current_app.send_task('mymodule.mytask')
I wonder why this latter way doesn't route the task to the "low" queue!
P.S.: I use Redis.
send_task is a low-level method. It sends the task signature directly to the broker, without going through your task decorator.
With this method, you can even send a task without loading the task code/module.
To solve your problem, you can fetch the routing_key/queue from the configuration directly:
route = celery.amqp.routes[0].route_for_task("mymodule.mytask")
# -> {'queue': 'low', 'routing_key': 'low.mytask'}
celery.send_task("mymodule.mytask", queue=route['queue'], routing_key=route['routing_key'])
I want to write a test case for my Celery code.
But Celery usually needs to be started in a separate process, like $ celery -A CELERY_MODULE worker. Does that mean I can't run my test case code directly?
I configured Celery with an in-memory broker and backend to avoid the extra I/O in the test case. That configuration can't simply share the task queue between different processes.
Here is my naive implementation.
The Celery entry point is celery.bin.celeryd.WorkerCommand; it parses the args and runs the worker.
Use the solo pool to avoid multiprocessing in the test case.
You can use this before your Celery test case starts.
#!/usr/bin/env python
#vim: encoding=utf-8
import time
import unittest
from threading import Thread
from celery import Celery, states
from celery.bin.celeryd import WorkerCommand
class CELERY_CONFIG(object):
    BROKER_URL = "memory://"
    CELERY_CACHE_BACKEND = "memory"
    CELERY_RESULT_BACKEND = "cache"
    CELERYD_POOL = "solo"


class CeleryTestCase(unittest.TestCase):
    def test_inprocess(self):
        app = Celery(__name__)
        app.config_from_object(CELERY_CONFIG)

        @app.task
        def dumpy_task(dct):
            return 321

        worker = WorkerCommand(app)
        # worker.execute_from_commandline(["-P solo"])
        t = Thread(target=worker.execute_from_commandline, args=(["-c 1"],))
        t.daemon = True
        t.start()

        ar = dumpy_task.apply_async(({"a": 123},))
        while ar.status != states.SUCCESS:
            time.sleep(.01)
        self.assertEqual(states.SUCCESS, ar.status)
        self.assertEqual(ar.result, 321)
        t.join(0)
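As an aside, this predates the test helpers that newer Celery versions ship. On Celery 4+, a smaller sketch for most unit tests is to run tasks eagerly in the same process, with no worker thread at all (the config names below are the lowercase 4.x settings):

import unittest
from celery import Celery

app = Celery(__name__, broker="memory://", backend="cache+memory://")
app.conf.task_always_eager = True        # run tasks synchronously in-process
app.conf.task_eager_propagates = True    # re-raise task exceptions inside the test

@app.task
def dumpy_task(dct):
    return 321

class EagerCeleryTestCase(unittest.TestCase):
    def test_eager(self):
        result = dumpy_task.apply_async(({"a": 123},))
        self.assertEqual(result.get(), 321)

If you do need a real worker in the test, Celery also ships helpers in celery.contrib.testing (a start_worker context manager and pytest fixtures).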