I have created a Flask application to process GNSS data. Certain functions take a lot of time to execute, so I have integrated Celery to run those functions as asynchronous tasks. I first tested the app on localhost, with RabbitMQ as the message broker:
app.config['CELERY_BROKER_URL']='amqp://localhost//'
app.config['CELERY_RESULT_BACKEND']='db+postgresql://username:password@localhost/DBname'
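For reference, the Celery instance is wired to the Flask app roughly like this (a minimal sketch of the common Flask+Celery pattern; the make_celery helper name is illustrative, not from the actual project):
from celery import Celery
from flask import Flask

app = Flask(__name__)
app.config['CELERY_BROKER_URL'] = 'amqp://localhost//'
app.config['CELERY_RESULT_BACKEND'] = 'db+postgresql://username:password@localhost/DBname'

def make_celery(flask_app):
    # Build a Celery instance that reads the broker and result backend
    # from the Flask configuration above.
    celery = Celery(flask_app.import_name,
                    broker=flask_app.config['CELERY_BROKER_URL'],
                    backend=flask_app.config['CELERY_RESULT_BACKEND'])
    celery.conf.update(flask_app.config)
    return celery

celery = make_celery(app)
With this layout the worker is started as celery -A app.celery worker.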
After fully testing the application in a virtualenv, I deployed it on Heroku and added the RabbitMQ add-on. Then I changed the app.config as follows:
app.config['CELERY_BROKER_URL']='amqp://myUsername:Mypassword@small-fiver-23.bigwig.lshift.net:10123/FlGJwZfbz4TR'
app.config['CELERY_RESULT_BACKEND']='db+postgres://myusername:Mypassword@ec2-54-163-246-193.compute-1.amazonaws.com:5432/dhcbl58v8ifst/MYDB'
After changing the above I ran the Celery worker:
celery -A app.celery worker --loglevel=info
and got this error:
[2018-03-16 11:21:16,796: ERROR/MainProcess] consumer: Cannot connect to amqp://SHt1Xvhb:**@small-fiver-23.bigwig.lshift.net:10123/FlGJwZfbz4TR: timed out.
How can I check from the RabbitMQ management console whether my Heroku add-on is working?
It seems port 10123 is not exposed. Can you try telnet small-fiver-23.bigwig.lshift.net 10123 from the server and see whether you can connect successfully?
If not, you have to expose that port so it is reachable from the machine you're trying to connect from.
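If telnet is not installed on that machine, the same TCP check can be done with a few lines of Python (a minimal sketch; the host and port are the ones from the error message):
import socket

host = 'small-fiver-23.bigwig.lshift.net'
port = 10123

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)  # fail fast instead of hanging if the port is filtered
try:
    sock.connect((host, port))
    print('TCP connection succeeded, the port is reachable')
except (socket.timeout, socket.error) as exc:
    print('TCP connection failed: {0}'.format(exc))
finally:
    sock.close()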
Related
I have been creating a Flask application to process GNSS data received from the user. In the Flask application the data processing is done in the backend. Since GNSS data processing takes a long time, the user currently has to wait a few minutes, without closing the browser, to get the result via email.
Therefore I decided to integrate Celery with the Flask application.
Based on the available documentation, I installed RabbitMQ Server on the local C: drive by downloading the rabbitmq-server-3.7.3.exe file.
To learn how to integrate Celery with Flask I followed this video. At 2:15, when I tried to restart rabbitmq-server with the following command
service rabbitmq-server restart
it gave me the following error:
'service' is not recognized as an internal or external command,
operable program or batch file.
After creating the tasks.py file
from celery import Celery

app = Celery('tasks', broker='amqp://localhost//')

@app.task
def reverse(string):
    return string[::-1]
I ran the following command
celery -A tasks worker --loglevel=info
which gives the following error:
WindowsError: [Error 87] The parameter is incorrect
My other question is how he has integrated an Ubuntu console into Windows 10, as shown in the above video.
Use this link https://www.rabbitmq.com/install-windows-manual.html to register RabbitMQ as a service on Windows.
After that, try using amqp://localhost:5672 as the broker URL in Celery.
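In code, that just means giving Celery the broker URL with the default port spelled out (a sketch assuming a default local install with the guest account, which only works from localhost):
from celery import Celery

# A default RabbitMQ install listens on port 5672; guest/guest is the default local account.
app = Celery('tasks', broker='amqp://guest:guest@localhost:5672//')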
I am trying to connect to a gRPC server in a Celery task. I have the following piece of code:
import sys
import grpc

timeout = 1
host = '0.tcp.ngrok.io'
port = '7145'

channel = grpc.insecure_channel('{0}:{1}'.format(host, port))
try:
    # Block until the channel is ready, or give up after `timeout` seconds.
    grpc.channel_ready_future(channel).result(timeout=timeout)
except grpc.FutureTimeoutError:
    sys.exit(1)
stub = stub(channel)
When I run this snippet through the Python shell, I am able to establish the connection and execute the gRPC methods. However, when I run it through the Celery task, I get grpc.FutureTimeoutError and the connection does not get established.
The Celery worker is on the same machine as the gRPC server. I tried using the socket library to ping the gRPC server, and that worked (it returned some junk response).
I am using Python 2.7, with grpcio==1.6.0 installed. The Celery version is 4.1.0. Any pointers would be helpful.
I believe Celery uses fork under the hood, and gRPC 1.6 did not support any forking behavior.
Try updating to gRPC 1.7.
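To upgrade, something along these lines should do it (the exact version floor is up to you):
pip install --upgrade 'grpcio>=1.7.0'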
I have been attempting to push my Flask app running Socket.IO to Heroku, but to no avail. I have narrowed it down to the Procfile. I am constantly getting 503 server errors because my program won't accept connections. I tested it locally and it works just fine.
I have had a couple versions of the Procfile, which are
web: gunicorn -b 0.0.0.0:$PORT app:userchat_manager
and
web: python userchat_manager.py
where the userchat_manager file holds the SocketIO.run() function to run the app. What would be the best way to fix this?
EDIT: I changed the Procfile to
web: gunicorn -b 0.0.0.0:$PORT app:app
and it loads. However, whenever I try to send a message, it doesn't send the message and I get a 400 code.
See the Deployment section of the documentation. The gunicorn web server is only supported when used alongside eventlet or gevent, and in both cases you have to use a single worker process.
If you want to drop gunicorn and run the native web server instead, you should code your userchat_manager.py script so that it reads the port on which the server should listen from the PORT environment variable exposed by Heroku. If you go this route, I still think you should look into using eventlet or gevent; without an asynchronous framework the performance is pretty bad (no WebSocket support), and the number of clients that can be connected at the same time is very limited (just one client per worker).
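For the native-server route, userchat_manager.py would end up looking roughly like this (a sketch; the app/socketio import path is assumed, since it isn't shown in the question):
import os

from app import app, socketio  # assumed module layout

if __name__ == '__main__':
    # Heroku assigns the listening port at runtime via the PORT environment variable.
    port = int(os.environ.get('PORT', 5000))
    socketio.run(app, host='0.0.0.0', port=port)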
Try this:
web: gunicorn --worker-class eventlet -w 1 your_module:app
You don't need the port to connect the socket; just use your Heroku app URL as the socket connection, without :PORT.
I have many time-consuming tasks that need to be shared among several machines. I currently have one master machine using Celery workers to do the tasks. I'm using RabbitMQ as the broker and Redis as the backend, both running locally on that machine. The master machine is also responsible for deploying tasks and returning results.
I wonder if it is possible to have slave machines connect remotely to the broker and result backend on the master machine to fetch jobs, so that all the machines work together. I think I just need to configure the RabbitMQ and Redis settings somehow and then start Celery workers on the slave machines. Thanks a lot.
Looking at the Celery documentation, there is no limitation preventing your worker processes from accessing RabbitMQ on a remote server instead of just using localhost. Take a look at CELERY_QUEUE_HA_POLICY here.
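As a sketch, each slave machine only needs its Celery app pointed at the master's broker and result backend (the address and credentials below are placeholders, not from the question):
from celery import Celery

# Replace 192.168.1.10 with the master machine's reachable address.
app = Celery('tasks',
             broker='amqp://user:password@192.168.1.10:5672//',
             backend='redis://192.168.1.10:6379/0')
Then start a worker on each slave with celery -A tasks worker --loglevel=info; RabbitMQ and Redis on the master have to be configured to accept connections from the slaves' addresses rather than only from localhost.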
I have a wsgi app with a celery component. Basically, when certain requests come in they can hand off relatively time-consuming tasks to celery. I have a working version of this product on a server I set up myself, but our client recently asked me to deploy it to Cloud Foundry. Since Celery is not available as a service on Cloud Foundry, we (me and the client's deployment team) decided to deploy the app twice – once as a wsgi app and once as a standalone celery app, sharing a rabbitmq service.
The code between the apps is identical. The wsgi app responds correctly, returning the expected web pages. vmc logs celeryapp shows that celery appears to be up and running, but when I send requests to the wsgi app that should become celery tasks, they disappear as soon as they hit a .delay() statement. They neither appear in the celery logs nor show up as an error.
Attempts to debug:
I can't use celery.contrib.rdb in Cloud Foundry (to supply a telnet interface to pdb), as each app is sandboxed and port-restricted.
I don't know how to find the specific rabbitmq instance these apps are supposed to share, so I can see what messages it's passing.
Update: to corroborate the above statement about finding rabbitmq, here's what happens when I try to access the node that should be sharing celery tasks:
root@cf:~# export RABBITMQ_NODENAME=eecef185-e1ae-4e08-91af-47f590304ecc
root@cf:~# export RABBITMQ_NODE_PORT=57390
root@cf:~# ~/cloudfoundry/.deployments/devbox/deploy/rabbitmq/sbin/rabbitmqctl list_queues
Listing queues ...
=ERROR REPORT==== 18-Jun-2012::11:31:35 ===
Error in process <0.36.0> on node 'rabbitmqctl17951@cf' with exit value: {badarg,[{erlang,list_to_existing_atom,["eecef185-e1ae-4e08-91af-47f590304ecc@localhost"]},{dist_util,recv_challenge,1},{dist_util,handshake_we_started,1}]}
Error: unable to connect to node 'eecef185-e1ae-4e08-91af-47f590304ecc@cf': nodedown
diagnostics:
- nodes and their ports on cf: [{'eecef185-e1ae-4e08-91af-47f590304ecc',57390},
{rabbitmqctl17951,36032}]
- current node: rabbitmqctl17951@cf
- current node home dir: /home/cf
- current node cookie hash: 1igde7WRgkhAea8fCwKncQ==
How can I debug this and/or why are my tasks vanishing?
Apparently the problem was caused by a deadlock between the broker and the Celery worker, such that the worker would never acknowledge the task as complete, never accept a new task, but also never crash or fail. The tasks weren't vanishing; they were simply staying in the queue forever.
Update: the deadlock was caused by the fact that we were running celeryd inside a wrapper script that installed dependencies (literally pip install -r requirements.txt && ./celeryd -lINFO). Because of how Cloud Foundry manages process trees, Cloud Foundry would try to kill the parent process (bash), which would HUP celeryd, but ultimately many child processes would never die.
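For what it's worth, one common way to avoid leaving the worker under a throwaway bash parent is to exec it so it replaces the shell process; a sketch of such a wrapper (not the exact script we ran) would be:
pip install -r requirements.txt && exec ./celeryd -lINFO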