Simple Celery test with print doesn't go to terminal - Python

EDIT 1:
Actually, print statements output to the Celery worker's terminal, not the terminal where the Python program is run, as @PatrickAllen indicated.
OP
I've recently started to use Celery, but can't even get a simple test going where I print a line to the terminal after a 30 second wait.
In my tasks.py:
from celery import Celery

celery = Celery(__name__, broker='amqp://guest@localhost//',
                backend='amqp://guest@localhost//')

@celery.task
def test_message():
    print("schedule task says hello")
In the main module of my package, I have:
import tasks

if __name__ == '__main__':
    # <do something>
    tasks.test_message.apply_async(countdown=30)
I run it from the terminal:
celery -A tasks worker --loglevel=info
The task runs correctly, but nothing appears on the terminal of the main program. Celery output:
[2016-03-06 17:49:46,890: INFO/MainProcess] Received task: tasks.test_message[4282fa1a-8b2f-4fa2-82be-d8f90288b6e2] eta:[2016-03-06 06:50:16.785896+00:00]
[2016-03-06 17:50:17,890: WARNING/Worker-2] schedule task says hello
[2016-03-06 17:50:17,892: WARNING/Worker-2] The client is not currently connected.
[2016-03-06 17:50:18,076: INFO/MainProcess] Task tasks.test_message[4282fa1a-8b2f-4fa2-82be-d8f90288b6e2] succeeded in 0.18711688100120227s: None
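
If the goal is to see the task's output in the calling program rather than in the worker's log, a common alternative is to return a value from the task and read it back through the result backend. A minimal sketch, assuming the same broker/backend setup as above:

from celery import Celery

celery = Celery(__name__, broker='amqp://guest@localhost//',
                backend='amqp://guest@localhost//')

@celery.task
def test_message():
    # Return the text instead of printing it; the return value is stored
    # in the result backend and can be fetched by the caller.
    return "schedule task says hello"

# In the main program:
# result = test_message.apply_async(countdown=30)
# print(result.get(timeout=60))  # blocks until the task finishes, then prints the value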

Related

How to send a scheduled task to Celery while the worker is running?

I have a problem creating a beat schedule task and adding it to a Celery worker that is already running. This is my code:
from celery import Celery
from celery.schedules import crontab

app = Celery('proj2')
app.config_from_object('proj2.celeryconfig')

@app.task
def hello() -> None:
    print('Hello')

app.add_periodic_task(10.0, hello)
So when I run Celery:
$ celery -A proj2 worker --beat -E
this works very well and the task runs every 10 seconds.
But I need to create a scheduled task and send it to the worker so it runs automatically, while the worker is already up.
Imagine I have the Celery worker running and everything is OK. I go to the Python shell and do the following:
>>> from proj2.celery import app
>>>
>>> app.conf.beat_schedule = {
...     'hello-every-5': {
...         'task': 'proj2.tasks.bye',  # I have a task called bye in the tasks module
...         'schedule': 5.0,
...     },
... }
>>>
It's not working. There is also no error, but it seems the schedule never reaches the worker.
P.S.: I have also used the add_periodic_task method. Still not working.
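
For comparison, a sketch of the documented way to register a periodic task: the schedule has to exist in the process that runs beat, either as beat_schedule in the config module it loads at startup or via the on_after_configure signal. Setting app.conf.beat_schedule from a separate Python shell only changes that shell's copy of the configuration, so the running beat process never sees it. Assuming proj2/tasks.py really contains a bye task (adjust names to your project):

from celery import Celery

app = Celery('proj2')
app.config_from_object('proj2.celeryconfig')

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Runs inside the worker/beat process, so the entry is actually registered.
    from proj2.tasks import bye
    sender.add_periodic_task(5.0, bye.s(), name='bye-every-5')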

Celery not processing tasks everytime

I have the following configuration for Celery:
celery = Celery(__name__,
                broker=os.environ.get('CELERY_BROKER_URL', 'redis://'),
                backend=os.environ.get('CELERY_BROKER_URL', 'redis://'))
celery.config_from_object(APP_SETTINGS)

ssl = celery.conf.get('REDIS_SSL', True)
r = redis.StrictRedis(REDIS_BROKER, int(REDIS_BROKER_PORT), 0,
                      charset='utf-8', decode_responses=True, ssl=ssl)

db_uri = celery.conf.get('SQLALCHEMY_DATABASE_URI')
db_uri = celery.conf.get('SQLALCHEMY_DATABASE_URI')
@celery.task
def process_task(data):
    # some code here
I am calling process_task inside an API endpoint like:
process_task.delay(data)
Sometimes it processes tasks, sometimes not.
Can someone help me resolve this issue?
I am running the worker like: celery worker -A api.celery --loglevel=DEBUG --concurrency=10
Once all the worker processes are busy, new tasks will just sit in the queue waiting for the next idle worker process to pick them up. This is most likely why you perceive this as "not processing tasks every time". Go through the monitoring and management section of the Celery documentation to learn how to monitor your Celery cluster. For starters, run celery -A api.celery inspect active to check the currently running tasks.
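
If you prefer to check this from Python instead of the CLI, the remote-control inspect API gives the same information. A small sketch, assuming the Celery app from api.celery is importable as celery (adjust the import to your project):

from api.celery import celery  # assumed import path

insp = celery.control.inspect()
print(insp.active())     # tasks currently executing on each worker
print(insp.reserved())   # tasks prefetched by workers but not started yet
print(insp.scheduled())  # tasks with an ETA/countdown still waiting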

celery apply_async choking rabbit mq

I am using Celery's apply_async method to queue tasks. I expect about 100,000 such tasks to run every day (the number will only go up). I am using RabbitMQ as the broker. I ran the code a few days back and RabbitMQ crashed after a few hours. I noticed that apply_async creates a new queue for each task with x-expires set at 1 day. My hypothesis is that RabbitMQ chokes when so many queues are being created. How can I stop Celery from creating these extra queues for each task?
I also tried passing the queue parameter to apply_async and assigned an x-message-ttl to that queue. Messages did go to this new queue; however, they were immediately consumed and never reached the 30-second TTL I had set. And this did not stop Celery from creating those extra queues.
Here's my code:
views.py
from celery import task, chain

chain(task1.s(a), task2.s(b)).apply_async(
    link_error=error_handler.s(a), queue="async_tasks_queue")
tasks.py
from celery import shared_task
from celery.result import AsyncResult

@shared_task
def error_handler(uuid, a):
    # Handle error
    pass

@shared_task
def task1(a):
    # Do something
    return a

@shared_task
def task2(a, b):
    # Do something more
    pass
celery.py
app = Celery(
    'app',
    broker=settings.QUEUE_URL,
    backend=settings.QUEUE_URL,
)
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
app.amqp.queues.add("async_tasks_queue",
                    queue_arguments={'durable': True, 'x-message-ttl': 30000})
From the celery logs:
[2016-01-05 01:17:24,398: INFO/MainProcess] Received task: project.tasks.task1[615e094c-2ec9-4568-9fe1-82ead2cd303b]
[2016-01-05 01:17:24,834: INFO/MainProcess] Received task: project.decorators.wrapper[bf9a0a94-8e71-4ad6-9eaa-359f93446a3f]
RabbitMQ had 2 new queues by the names "615e094c2ec945689fe182ead2cd303b" and "bf9a0a948e714ad69eaa359f93446a3f" when these tasks were executed
My code is running on Django 1.7.7, Celery 3.1.17 and RabbitMQ 3.5.3.
Any other suggestions for executing tasks asynchronously are also welcome.
Try using a different result backend - I recommend Redis. When we tried using RabbitMQ as both broker and backend, we discovered that it was ill suited to the backend role.
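
For context, the per-task queues come from the amqp result backend, which creates one result queue per task id; keeping RabbitMQ as the broker but moving results elsewhere (or disabling them) avoids that. A rough sketch against the celery.py above, with a placeholder Redis URL:

app = Celery(
    'app',
    broker=settings.QUEUE_URL,            # RabbitMQ stays as the broker
    backend='redis://localhost:6379/0',   # assumed Redis URL, adjust as needed
)

# If you never read task results, you can also skip the backend entirely
# (Celery 3.1 setting name):
# app.conf.CELERY_IGNORE_RESULT = True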

Celery - Completes task but never returns result

I've just installed Celery and am trying to follow the tutorial:
I have a file called tasks.py with the following code:
from celery import Celery

app = Celery('tasks', backend='amqp', broker='amqp://')

@app.task
def add(x, y):
    return x + y
I installed RabbitMQ (I did not configure it, since the tutorial didn't mention anything of that sort).
I run the celery worker server as follows:
celery -A tasks worker --loglevel=info
It seems to start up normally (here is the output: http://i.imgur.com/qnoNCzJ.png)
Then I run a script with the following:
from tasks import add
from time import sleep

result = add.delay(2, 2)
while not result.ready():
    sleep(10)
When I check result.ready() I always get False (so the while loop above runs forever). On the Celery logs, however, everything looks fine:
[2014-10-30 00:58:46,673: INFO/MainProcess] Received task: tasks.add[2bc4ceba-1319-49ce-962d-1ed0a424a2ce]
[2014-10-30 00:58:46,674: INFO/MainProcess] Task tasks.add[2bc4ceba-1319-49ce-962d-1ed0a424a2ce] succeeded in 0.000999927520752s: 4
So the task was received and succeeded. Yet result.ready() is still False. Any insight as to why this might be? I am on Windows 7 and am using RabbitMQ. Thanks in advance.
This should solve your problem:
ignore_result=False
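
In other words, make sure results are not being discarded for this task. A minimal sketch of what that could look like (the backend still has to be configured, as in the original tasks.py):

from celery import Celery

app = Celery('tasks', backend='amqp', broker='amqp://')

# ignore_result=False is the default, but stating it explicitly guarantees the
# return value is written to the result backend so .ready()/.get() work.
@app.task(ignore_result=False)
def add(x, y):
    return x + y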
Okay, I've set up a clear VM with fresh celery install, set the following files:
tasks.py:
from celery import Celery

app = Celery('tasks', backend='amqp', broker='amqp://')

@app.task
def add(x, y):
    return x + y
And runme.py
from tasks import add
import time

result = add.delay(1, 2)
while not result.ready():
    time.sleep(1)

print(result.get())
Then I set up celery with:
celery -A tasks worker --loglevel=info
And subsequently I run runme.py, which gives the expected result:
[puciek#somewhere tmp]# python3.3 runme.py
3
So clearly the issue is within your setup, most likely somewhere in the RabbitMQ installation, so I recommend reinstalling it with the latest stable version from source, which is what I am using, and as you can see, it works just fine.
Update:
Actually, your issue may be as trivial as imaginable: are you sure you are running the Celery worker and your consumer script with the same Python version? I just managed to reproduce your problem by running Celery on Python 3.3 and then running runme.py with version 2.7. The result was as you described.
Celery needs to have the result backend enabled. See
http://celery.readthedocs.org/en/latest/configuration.html#celery-result-backend
See also http://celery.readthedocs.org/en/latest/getting-started/first-steps-with-celery.html#calling-the-task.

Django Celery always returns False when checking if data is ready

I have set up Django with Celery running on RabbitMQ.
I have implemented the following example in my project: http://docs.celeryproject.org/en/master/django/first-steps-with-django.html
When I run a simple test in two terminal windows, the results are as follows:
# Terminal 1
>>> from Exercise.tasks import *
>>> result = add.delay(2,3)
>>> result
<AsyncResult: e6c92297-eea2-4f99-8902-1446ac74a6bb>
>>> result.ready()
False
# Terminal 2
$ celery -A Website3 worker -l info
[2014-10-02 14:39:59,269: INFO/MainProcess] Received task: Exercise.tasks.add[464249dd-ab89-4099-badd-9190a147310f]
[2014-10-02 14:39:59,271: INFO/MainProcess] Task Exercise.tasks.add[464249dd-ab89-4099-badd-9190a147310f] succeeded in 0.0010875929147s: 5
Obviously the task has completed, but I am not able to retrieve the result.
What am I doing wrong here?
Do you have your result backend set up properly? http://docs.celeryproject.org/en/master/userguide/configuration.html#task-result-backend-settings
Celery needs this set up properly to store state and results of async tasks.
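
With the first-steps-with-django layout, that usually means putting the backend in the Django settings that the Celery app reads. A minimal sketch, assuming the tutorial's app.config_from_object('django.conf:settings', namespace='CELERY') setup and placeholder URLs:

# settings.py
CELERY_BROKER_URL = 'amqp://guest@localhost//'   # adjust to your RabbitMQ instance
CELERY_RESULT_BACKEND = 'rpc://'                 # or 'redis://localhost:6379/0', etc.

Without a result backend, the worker still runs the task (as the log shows) but has nowhere to store its state, so result.ready() stays False.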
