I have a server (Ubuntu Server) on the local network at IP address 192.168.1.9.
This server is running RabbitMQ in Docker.
I defined a basic Celery app:
from celery import Celery

app = Celery(
    'tasks',
    brocker='pyamqp://<username>:<password>@localhost//',
    backend='rpc://',
)

@app.task
def add(x, y):
    return x + y
Connected to the server, I run the worker with celery -A tasks worker --loglevel=INFO -c 2 -E
On my local laptop, in a Python shell, I try to execute the task remotely by creating a new Celery instance, this time with the IP address of my remote server.
from celery import Celery

app = Celery(
    'tasks',
    brocker='pyamqp://<username>:<password>@192.168.1.9//',
    backend='rpc://',
)
result = app.send_task('add', (2,2))
# Note: I also tried app.send_task('tasks.add', (2,2))
And from there nothing happens: the task stays PENDING forever, I can't see anything in the logs, and the server doesn't seem to pick up the task.
If I connect to the server and run the same commands locally (but with localhost as the address), it works fine.
What is wrong? How can I send tasks remotely?
Thank you.
The task name is your Celery app module's path plus the task name, because that is the file you defined it in.
Alternatively, you can start your worker at DEBUG log level, which will list all registered tasks:
celery -A tasks worker -l DEBUG
It should be
result = app.send_task('tasks.<celery_file>.add', (2,2))
But IMO you should use an API like Flower's (https://flower.readthedocs.io/en/latest/api.html) to have a more stable interface.
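For example, with Flower running alongside the broker you can submit a task over HTTP (a sketch; the host and default port 5555 are assumptions about your deployment, and tasks.add is the task name from the question):
import requests

# Submit tasks.add via Flower's async-apply endpoint (see the Flower API docs);
# adjust host/port to wherever Flower is actually running.
resp = requests.post(
    'http://192.168.1.9:5555/api/task/async-apply/tasks.add',
    json={'args': [2, 2]},
)
print(resp.json())  # the response includes the task-id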
Actually, there was just a typo: the argument was named brocker instead of broker.
In [1]: from celery import Celery
In [2]: app = Celery('tasks', broker='amqp://<username>:<password>@192.168.31.9:5672//', backend='rpc://')
In [3]: result = app.send_task('tasks.add', (2, 3))
In [4]: result.get()
Out[4]: 5
I use bottle and uwsgi.
uwsgi config:
[uwsgi]
http-socket = :8087
processes = 4
workers=4
master = true
file=app.py
app.py:
import bottle
import os

application = bottle.app()

@bottle.route('/test')
def test():
    os.system('./restart_ssdb.sh')

if __name__ == '__main__':
    bottle.run()
restart_ssdb.sh (it just restarts a service; it doesn't matter what the service is):
./ssdb-server -d ssdb.conf -s restart
Then I start uwsgi and it works well.
Then I access the URL 127.0.0.1/test.
One of the uwsgi processes becomes the ssdb server.
Then I stop uwsgi:
Port 8087 now belongs to ssdb, which makes the uwsgi server unable to restart because the port is already in use.
What causes this problem?
I just want to execute a shell script (restarting the ssdb server), but it must be guaranteed not to affect the uwsgi server. What can I do?
http://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
I solved it by setting the close-on-exec option in my uwsgi config.
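For reference, a minimal sketch of the change, reusing the config from the question (only the last line is new):
[uwsgi]
http-socket = :8087
processes = 4
master = true
file = app.py
; mark uwsgi's sockets close-on-exec, so commands spawned via os.system()
; (like the ssdb restart) do not inherit the listening socket on port 8087
close-on-exec = true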
celery.py
import os
from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')

app = Celery('project', broker='amqp://foo:bar@remoteserver:5672', backend='amqp')
# app = Celery('project')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
tasks.py (in the app folder)
from __future__ import absolute_import, unicode_literals
from celery import shared_task

@shared_task
def addnum(x, y):
    return x + y
When I call this task:
addnum.delay(3, 5)
It returns:
<AsyncResult: 82cb362a-5439-4c1c-9c64-b158a9a48786>
but the Celery worker just sits there waiting for tasks and doesn't receive any:
[2017-03-17 13:48:36,869: INFO/MainProcess] celery@gauravrajput ready.
The problem is that the tasks are not being queued to the remote rabbitmq server.
When I initialize Celery as:
app = Celery('project')
and then start the Celery worker, it starts to receive and complete tasks:
[2017-03-17 14:02:13,558: INFO/MainProcess] celery@gauravrajput ready.
[2017-03-17 14:02:13,560: INFO/MainProcess] Received task: app.tasks.addnum[82cb362a-5439-4c1c-9c64-b158a9a48786]
I found out that rabbitmq-server was running on my localhost. I don't know why, but the tasks were being queued to localhost instead of to the remote RabbitMQ server, even after explicitly declaring the remote RabbitMQ server as my broker. However, simply stopping the RabbitMQ server on my localhost fixed the issue.
sudo -u rabbitmq rabbitmqctl stop
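A quick way to check which broker the client-side app has actually resolved (a diagnostic sketch; the import path project.celery is assumed from the setup above, and broker_url is the Celery 4 setting name, older versions expose it as BROKER_URL):
# Run from the Django project root; prints the broker this app will publish tasks to.
from project.celery import app

print(app.conf.broker_url)  # if this prints localhost, something is overriding the broker passed to Celery(...)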
I am trying to run my Django server on an Ubuntu instance on AWS EC2. I am using gunicorn to run the server like this:
gunicorn --workers 4 --bind 127.0.0.1:8000 woc.wsgi:application --name woc-server --log-level=info --worker-class=tornado --timeout=90 --graceful-timeout=10
When I make a request, I get 502 Bad Gateway in the browser. Here is the server log: http://pastebin.com/Ej5KWrWs
Some sections of the settings.py file where behaviour changes based on the hostname are below. iUbuntu is the hostname of my laptop.
if socket.gethostname() == 'iUbuntu':
    '''
    Development mode
    "iUbuntu" is the hostname of Ishan's PC
    '''
    DEBUG = TEMPLATE_DEBUG = True
else:
    '''
    Production mode
    Anywhere else than Ishan's PC is considered as production
    '''
    DEBUG = TEMPLATE_DEBUG = False

if socket.gethostname() == 'iUbuntu':
    '''Development'''
    ALLOWED_HOSTS = ['*', ]
else:
    '''Production Won't let anyone pretend as us'''
    ALLOWED_HOSTS = ['domain.com', 'www.domain.com',
                     'api.domain.com', 'analytics.domain.com',
                     'ops.domain.com', 'localhost', '127.0.0.1']
(I don't get the purpose of this section of the code. Since I inherited the code from someone and the server was working, I didn't want to remove it without understanding what it does.)
if socket.gethostname() == 'iUbuntu':
    MAIN_SERVER = 'http://localhost'
else:
    MAIN_SERVER = 'http://domain.com'
I can't figure out what the problem is here. The same code runs fine with gunicorn on my laptop.
I have also made a small hello-world node.js app serving on port 8000 to test the nginx configuration, and it runs fine, so there are no nginx errors.
UPDATE:
I set DEBUG to True and copied the traceback: http://pastebin.com/ggFuCmYW
UPDATE:
Thanks to the reply by @ARJMP. This is indeed a problem with the celery consumer not being able to connect to the broker.
I am configuring celery like this: app.config_from_object('woc.celeryconfig'), and the contents of celeryconfig.py are:
BROKER_URL = 'amqp://celeryuser:celerypassword@localhost:5672/MyVHost'
CELERY_RESULT_BACKEND = 'rpc://'
I am running the worker like this: celery worker -A woc.async -l info --autoreload --include=woc.async -n woc_celery.%h
And the error that I am getting is:
consumer: Cannot connect to amqp://celeryuser:**@127.0.0.1:5672/MyVHost: [Errno 104] Connection reset by peer.
OK, so your problem, as far as I can tell, is that your celery worker can't connect to the broker. You have some middleware trying to call a celery task, so it will fail on every request (unless that analyse_urls.delay(**kw) call is conditional).
I found a similar issue that was solved by upgrading their version of celery.
Another cause could be that the EC2 instance can't connect to the message queue server because the EC2 security group won't allow it. If the message queue is running on a separate server, make sure you've allowed the connection between the EC2 instance and the message queue through AWS EC2 Security Groups.
Try setting the RabbitMQ connection timeout to 30 seconds; this usually clears up the problem of being unable to connect to a server.
You can add connection_timeout to your connection string:
BROKER_URL = 'amqp://celeryuser:celerypassword@server.example.com:5672/MyVHost?connection_timeout=30'
Note the format with the question mark: ?connection_timeout=30
This is a query string parameter for the RMQ connection string.
Also, make sure the URL points to your production server name / URL, and not localhost, in your production environment.
I am using RabbitMQ to launch processes on remote hosts located in other parts of the world. E.g., RabbitMQ is running on an Oregon host, and it receives a client message to launch processes in Ireland and California.
Most of the time the processes are launched and, when they finish, RabbitMQ returns the output to the client. But sometimes the jobs finish successfully yet RabbitMQ hasn't returned the output to the client, and the client keeps hanging, waiting for the response. These processes can take 10 minutes to execute, so the client hangs for 10 minutes waiting for the response.
I am using Celery to connect to RabbitMQ, and the client calls are blocking, using task.get(). In other words, the client hangs until it receives the response to its call. I would like to understand why the client did not get the response if the jobs have finished. How can I debug this problem?
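For reference, the blocking client call looks roughly like this (a sketch; the task name medusa.medusasystem.launch and its argument are hypothetical placeholders, and the timeout is only there so a missing result raises an exception instead of hanging forever):
from celery import Celery
from celery.exceptions import TimeoutError

app = Celery(broker='amqp://celeryuser:celery@medusa-rabbitmq/celeryvhost', backend='amqp')

# Send the job to a remote worker and block until its result is published.
result = app.send_task('medusa.medusasystem.launch', args=('ireland',))
try:
    output = result.get(timeout=900)  # wait at most 15 minutes
except TimeoutError:
    print('No result arrived; check the result backend and the worker logs')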
Here is my celeryconfig.py
import os
import sys
# add hadoop python to the env, just for running
sys.path.append(os.path.dirname(os.path.basename(__file__)))

# broker configuration
# medusa-rabbitmq is the name of the host where rabbitmq is running
BROKER_URL = "amqp://celeryuser:celery@medusa-rabbitmq/celeryvhost"
CELERY_RESULT_BACKEND = "amqp"
TEST_RUNNER = 'celery.contrib.test_runner.run_tests'
# for debug
# CELERY_ALWAYS_EAGER = True
# module loaded
CELERY_IMPORTS = ("medusa.mergedirs", "medusa.medusasystem",
"medusa.utility", "medusa.pingdaemon", "medusa.hdfs", "medusa.vote.voting")
I'm using nd_service_registry to register my Django service with ZooKeeper; the service is launched with uwsgi.
Versions:
uWSGI==2.0.10
Django==1.7.5
My question is: what is the correct place to put the nd_service_registry.set_node code so the service registers itself with the ZooKeeper server, avoiding duplicate registration or deregistration?
My uwsgi config ini, with processes=2, enable-threads=true, threads=2:
[uwsgi]
chdir = /data/www/django-proj/src
module = settings.wsgi:application
env = DJANGO_SETTINGS_MODULE=settings.test
master = true
pidfile = /tmp/uwsgi-proj.pid
socket = /tmp/uwsgi_proj.sock
processes = 2
threads = 2
harakiri = 20
max-requests = 50000
vacuum = true
home = /data/www/django-proj/env
enable-threads = true
buffer-size = 65535
chmod-socket=666
register code:
from nd_service_registry import KazooServiceRegistry
nd = KazooServiceRegistry(server=ZOOKEEPER_SERVER_URL)
nd.set_node('/web/test/server0', {'host': 'localhost', 'port': 80})
I've tested the following cases, and both worked as expected: the Django service registered only once, at uwsgi master process startup.
place the code in settings.py
place the code in wsgi.py
Even if I kill uwsgi worker processes (the master process then relaunches another worker), or let a worker be killed and restarted by the uwsgi harakiri option, no new register action is triggered.
So my question is whether my register code is correct for Django + uwsgi with processes and threads enabled, and where to place it.
The problem happens when you use uwsgi in master/worker mode. When the uwsgi master process spawns workers, the ZooKeeper connection maintained by a thread in the ZooKeeper client can't be copied to the workers correctly. So in a uwsgi application you should use the uwsgi decorator uwsgidecorators.postfork to run the register code; a function decorated with @postfork is called in each newly spawned worker.
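A minimal sketch of what that looks like (assuming the same ZOOKEEPER_SERVER_URL and node data as in the question; put it in a module uwsgi imports, e.g. wsgi.py):
from uwsgidecorators import postfork
from nd_service_registry import KazooServiceRegistry

@postfork
def register_service():
    # Runs inside each worker right after fork, so every worker creates its own
    # ZooKeeper connection instead of inheriting a broken copy from the master.
    # ZOOKEEPER_SERVER_URL comes from your settings, as in the question's snippet.
    nd = KazooServiceRegistry(server=ZOOKEEPER_SERVER_URL)
    nd.set_node('/web/test/server0', {'host': 'localhost', 'port': 80})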
Hope it helps.