I've just installed Celery and am trying to follow the tutorial:
I have a file called tasks.py with the following code:
from celery import Celery
app = Celery('tasks', backend='amqp', broker='amqp://')
@app.task
def add(x, y):
    return x + y
I installed RabbitMQ (I did not configure it, since the tutorial didn't mention anything of that sort).
I run the celery worker server as follows:
celery -A tasks worker --loglevel=info
It seems to start up normally (here is the output: http://i.imgur.com/qnoNCzJ.png)
Then I run a script with the following:
from tasks import add
from time import sleep
result = add.delay(2,2)
while not result.ready():
    sleep(10)
When I check result.ready() I always get False (so the while loop above runs forever). On the Celery logs, however, everything looks fine:
[2014-10-30 00:58:46,673: INFO/MainProcess] Received task: tasks.add[2bc4ceba-1319-49ce-962d-1ed0a424a2ce]
[2014-10-30 00:58:46,674: INFO/MainProcess] Task tasks.add[2bc4ceba-1319-49ce-962d-1ed0a424a2ce] succeeded in 0.000999927520752s: 4
So the task was received and succeeded. Yet, result.ready() is still False. Any insight as to why this might be? I am on Windows 7, and am using RabbitMQ. Thanks in advance.
Setting the following option on your task should solve your problem:
ignore_result=False
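A minimal sketch of where that option goes, assuming the tasks.py from the question (with a result backend configured this is normally the default, but setting it explicitly rules out a global ignore setting):

from celery import Celery

app = Celery('tasks', backend='amqp', broker='amqp://')

# Explicitly keep the task's result so result.ready() / result.get() work.
@app.task(ignore_result=False)
def add(x, y):
    return x + y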
Okay, I've set up a clean VM with a fresh Celery install and created the following files:
tasks.py:
from celery import Celery
app = Celery('tasks', backend='amqp', broker='amqp://')
@app.task
def add(x, y):
    return x + y
And runme.py
from tasks import add
import time
result = add.delay(1,2)
while not result.ready():
    time.sleep(1)
print(result.get())
Then I start the Celery worker with:
celery -A tasks worker --loglevel=info
And subsequently I run the runme.py which gives expected result:
[puciek@somewhere tmp]# python3.3 runme.py
3
So the issue is clearly within your setup, most likely somewhere in the RabbitMQ installation. I recommend reinstalling it with the latest stable version from source, which is what I am using, and as you can see, it works just fine.
Update:
Actually, your issue may be as trivial as it gets: are you sure you are running the Celery worker and your consumer script with the same Python version? I just managed to reproduce your problem by running the worker on Python 3.3 and then running runme.py under 2.7. The result was exactly as you've described.
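A quick way to rule that out (just an illustrative check; run it in both environments and compare the output):

import sys
import celery

# Run this in the environment that starts the worker and in the one
# that runs runme.py; the two outputs should match.
print(sys.version)
print(celery.__version__)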
Celery needs to have a result backend enabled for this to work. See
http://celery.readthedocs.org/en/latest/configuration.html#celery-result-backend
See also http://celery.readthedocs.org/en/latest/getting-started/first-steps-with-celery.html#calling-the-task.
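For example, a minimal sketch of an app with an explicit result backend (rpc:// is one option; the amqp backend used in the question also works on those older Celery releases):

from celery import Celery

# Both a broker and a result backend are configured, so the client
# has somewhere to read task states and return values from.
app = Celery('tasks', broker='amqp://', backend='rpc://')

@app.task
def add(x, y):
    return x + y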
Related
I have a simple celery application with two tasks, a_func() and b_func().
After starting the celery worker, I am calling a_func.apply_async(), and a_func, when running on worker is calling b_func.apply_async().
When using 'amqp://' as a backend everything is working well.
However, when using 'rpc://' as a backend, I am having problems.
I am trying to get the state and the return value of the tasks.
For the a_func() task, there is no problem. However for b_func() I am getting state = 'PENDING' forever, and get() is stuck forever.
I am using:
Celery version 4.3.0
RabbitMQ version 3.5.7 as broker
Python 2.7
Ubuntu 16.04 LTS
Worker cmd:
celery -A celery_test worker --loglevel=info
celery application:
from celery import Celery

app = Celery('my_app',
             backend='rpc://',
             broker='pyamqp://guest@localhost/celery',
             include=['tasks'])
a_func and b_func tasks:
@task
def a_func():
    print "A"
    b_func.apply_async()
    return "A"

@task
def b_func():
    print "B"
    return "B"
The system is running a Django server (1.11.5) with Celery (4.0.0) and RabbitMQ as broker.
It is necessary to send some tasks to a remote server to be processed there. This new server will have its own RabbitMQ installed to use as its broker. The problem is that, on the server where Django is running, we need to select which tasks keep running on the local machine and which are sent to the new server.
For architectural reasons it is not possible to solve this using queues; the tasks must be sent to the new broker.
Is it possible to create two different Celery apps in Django (each one pointing to a different broker), each with its own tasks? How can it be done?
You can create two Celery apps and rename celery.py to celery_app.py to avoid the automatic import.
from celery import Celery
app1 = Celery('hello', broker='amqp://guest@localhost//')

@app1.task
def hello1():
    return 'hello world from local'
and
from celery import Celery
app2 = Celery('hello', broker='amqp://guest@remote//')

@app2.task
def hello2():
    return 'hello world from remote'
and for shared task:
from celery import shared_task
@shared_task
def add(x, y):
    return x + y
When you run your celery worker node:
celery --app=PACKAGE.celery_app:app worker
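As a rough illustration of dispatching from the Django side (the module paths below are assumptions, adjust them to wherever app1 and app2 actually live), you can either call a task imported from the matching app or send it by name through the app bound to the broker you want:

# Hypothetical module paths; adjust to your project layout.
from local_app.celery_app import hello1
from remote_app.celery_app import app2

# Goes to the local broker, because hello1 is bound to app1.
hello1.delay()

# Goes to the remote broker; the task is addressed by its registered name,
# so the remote task code does not need to be importable locally.
app2.send_task('remote_app.celery_app.hello2')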
EDIT 1:
Actually, the print statements output to the Celery worker's terminal instead of the terminal where the Python program is run, as @PatrickAllen indicated.
OP
I've recently started to use Celery, but can't even get a simple test going where I print a line to the terminal after a 30 second wait.
In my tasks.py:
from celery import Celery
celery = Celery(__name__, broker='amqp://guest@localhost//', backend='amqp://guest@localhost//')
@celery.task
def test_message():
    print("schedule task says hello")
in the main module for my package, I have:
import tasks

if __name__ == '__main__':
    <do something>
    tasks.test_message.apply_async(countdown=30)
I run it from terminal:
celery -A tasks worker --loglevel=info
The task runs correctly, but nothing appears on the terminal of the main program. Celery output:
[2016-03-06 17:49:46,890: INFO/MainProcess] Received task: tasks.test_message[4282fa1a-8b2f-4fa2-82be-d8f90288b6e2] eta:[2016-03-06 06:50:16.785896+00:00]
[2016-03-06 17:50:17,890: WARNING/Worker-2] schedule task says hello
[2016-03-06 17:50:17,892: WARNING/Worker-2] The client is not currently connected.
[2016-03-06 17:50:18,076: INFO/MainProcess] Task tasks.test_message[4282fa1a-8b2f-4fa2-82be-d8f90288b6e2] succeeded in 0.18711688100120227s: None
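As noted in EDIT 1 above, the print output lands in the worker's terminal. A minimal sketch of seeing something in the calling program instead, assuming test_message is changed to return the string and a working result backend is configured:

# In the main module, after sending the task:
result = tasks.test_message.apply_async(countdown=30)
# Blocks until the worker has executed the task, then prints the returned value.
print(result.get(timeout=60))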
I am using Celery's apply_async method to queue tasks. I expect about 100,000 such tasks to run every day (the number will only go up). I am using RabbitMQ as the broker. I ran the code a few days back and RabbitMQ crashed after a few hours. I noticed that apply_async creates a new queue for each task with x-expires set at 1 day. My hypothesis is that RabbitMQ chokes when so many queues are being created. How can I stop Celery from creating these extra queues for each task?
I also tried giving the queue parameter to apply_async and assigned an x-message-ttl to that queue. Messages did go to this new queue, but they were consumed immediately and never reached the 30-second TTL I had set. And this did not stop Celery from creating those extra queues.
Here's my code:
views.py
from celery import task, chain

chain(task1.s(a), task2.s(b)).apply_async(
    link_error=error_handler.s(a), queue="async_tasks_queue"
)
tasks.py
from celery import shared_task
from celery.result import AsyncResult

@shared_task
def error_handler(uuid, a):
    # Handle error
    pass

@shared_task
def task1(a):
    # Do something
    return a

@shared_task
def task2(a, b):
    # Do something more
    pass
celery.py
from celery import Celery
from django.conf import settings

app = Celery(
    'app',
    broker=settings.QUEUE_URL,
    backend=settings.QUEUE_URL,
)
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
app.amqp.queues.add("async_tasks_queue", queue_arguments={'durable': True, 'x-message-ttl': 30000})
From the celery logs:
[2016-01-05 01:17:24,398: INFO/MainProcess] Received task: project.tasks.task1[615e094c-2ec9-4568-9fe1-82ead2cd303b]
[2016-01-05 01:17:24,834: INFO/MainProcess] Received task: project.decorators.wrapper[bf9a0a94-8e71-4ad6-9eaa-359f93446a3f]
RabbitMQ had 2 new queues by the names "615e094c2ec945689fe182ead2cd303b" and "bf9a0a948e714ad69eaa359f93446a3f" when these tasks were executed
My code is running on Django 1.7.7, celery 3.1.17 and RabbitMQ 3.5.3.
Any other suggestions for executing tasks asynchronously are also welcome.
Try using a different result backend; I recommend Redis. When we tried using RabbitMQ as both broker and backend, we discovered that it was ill suited to the result backend role.
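A minimal sketch of that split, keeping RabbitMQ as the broker and using Redis only for results (the URLs are assumptions, adjust them to your setup):

from celery import Celery

# RabbitMQ stays the broker; Redis stores task results, so no per-task
# result queues get declared in RabbitMQ.
app = Celery(
    'app',
    broker='amqp://guest@localhost//',
    backend='redis://localhost:6379/0',
)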
I have a Django project on an Ubuntu EC2 node, which I have been using to set up an asynchronous task queue using Celery.
I am following http://michal.karzynski.pl/blog/2014/05/18/setting-up-an-asynchronous-task-queue-for-django-using-celery-redis/ along with the docs.
I've been able to get a basic task working at the command line, using:
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ celery --app=myproject.celery:app worker --loglevel=INFO
I just realized that I have a bunch of tasks in my queue that had not executed:
[2015-03-28 16:49:05,916: WARNING/MainProcess] Restoring 4 unacknowledged message(s).
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ celery -A tp purge
WARNING: This will remove all tasks from queue: celery.
There is no undo for this operation!
(to skip this prompt use the -f option)
Are you sure you want to delete all tasks (yes/NO)? yes
Purged 81 messages from 1 known task queue.
How do I get a list of the queued items from the command line?
If you want to get all scheduled tasks,
celery inspect scheduled
To find all active queues
celery inspect active_queues
For status
celery inspect stats
For all commands
celery inspect
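If you prefer to do the same from Python, the inspect API exposes the same information (a sketch based on the worker command above, assuming your app is importable as myproject.celery):

from myproject.celery import app as celery_app

insp = celery_app.control.inspect()
print(insp.scheduled())      # tasks with an ETA/countdown held by workers
print(insp.active_queues())  # queues each worker consumes from
print(insp.stats())          # per-worker statistics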
If you want to inspect the queue itself explicitly, then, since you are using Redis as the queue, connect with redis-cli:
redis-cli
>KEYS * #find all keys
Then find the key that corresponds to your Celery queue, and check its length:
>LLEN KEY  # gives the length of the list, i.e. the number of queued messages
Here is a copy-paste solution for Redis:
def get_celery_queue_len(queue_name):
    from yourproject.celery import app as celery_app

    with celery_app.pool.acquire(block=True) as conn:
        return conn.default_channel.client.llen(queue_name)


def get_celery_queue_items(queue_name):
    import base64
    import json

    from yourproject.celery import app as celery_app

    with celery_app.pool.acquire(block=True) as conn:
        tasks = conn.default_channel.client.lrange(queue_name, 0, -1)
        decoded_tasks = []

    for task in tasks:
        j = json.loads(task)
        body = json.loads(base64.b64decode(j['body']))
        decoded_tasks.append(body)

    return decoded_tasks
It works with Django. Just don't forget to change yourproject.celery.
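A quick usage sketch (the queue name 'celery' is Celery's default; adjust it if you route tasks to custom queues):

# For example, from a Django shell or a management command:
print(get_celery_queue_len('celery'))        # number of waiting messages
for body in get_celery_queue_items('celery'):
    print(body)                              # decoded task payloads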