I have been creating a Flask application to process GNSS data received from the user. In this application the data processing is done in the backend. Since GNSS data processing takes a long time, the user currently has to wait several minutes, without closing the browser, to get the result via email.
Therefore I decided to integrate Celery with the Flask application.
Based on the available documentation I installed RabbitMQ Server on my local C: drive by downloading the rabbitmq-server-3.7.3.exe file.
To learn how to integrate Celery with Flask I followed this video. At 2:15 in the video, I tried to restart rabbitmq-server with the following command
service rabbitmq-server restart
It gives me the following error:
'service' is not recognized as an internal or external command,
operable program or batch file.
After creating the tasks.py file:
from celery import Celery

app = Celery('tasks', broker='amqp://localhost//')

@app.task
def reverse(string):
    return string[::-1]
I ran the following command:
celery -A tasks worker --loglevel=info
It gives the following error:
WindowsError: [Error 87] The parameter is incorrect
My other question is how he has integrated an Ubuntu console in Windows 10 as in the above video.
Use this link https://www.rabbitmq.com/install-windows-manual.html to register RabbitMQ as a service in Windows.
After that, try using RabbitMQ as amqp://localhost:5672 in Celery.
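As a minimal sketch of that configuration (assuming a local RabbitMQ install with the default guest/guest account and default vhost, neither of which is stated in the answer), the broker URL with the explicit port would look like this:

from celery import Celery

# Hypothetical example: RabbitMQ registered as a Windows service, listening on
# the default AMQP port 5672 with the default guest/guest credentials.
app = Celery('tasks', broker='amqp://guest:guest@localhost:5672//')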
I am looking for help deploying my Flask app. I've already written the app and it works well. I'm currently using the following command in the directory of my Flask code:
sudo uwsgi --socket 0.0.0.0:70 --protocol=http -w AppName:app --buffer-size=32768
This is on my Amazon Lightsail instance. I have the instance linked to a static public IP, and if I navigate to the website, it works great. However, to keep the command running in the background even after logging out of the Lightsail instance, I first start a screen session, execute the above line, and then detach the screen using Ctrl-A D.
The problem is, if the app crashes (which is understandable since it is very large and under development), or if the command is left running for too long, the process is killed, and it is no longer being served.
I am looking for a better method of deploying a Flask app on Amazon Lightsail so that the app is restarted in the event of a crash without any interaction from me.
Generally you would write your own unit file for systemd to keep your application running, auto-restart it when it crashes, and start it when your instance boots.
There are many tutorials out there showing how to write such a unit file. Some examples:
Systemd: Service File Examples
Creating a Linux service with systemd
How to write startup script for Systemd?
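As a rough sketch only (the working directory and the uwsgi path are placeholders to adapt to your instance; it runs as root because the original command binds port 70), a unit file for the command above could look something like this, saved for example as /etc/systemd/system/flaskapp.service:

[Unit]
Description=Flask app served by uWSGI
After=network.target

[Service]
WorkingDirectory=/home/ubuntu/AppName
ExecStart=/usr/local/bin/uwsgi --socket 0.0.0.0:70 --protocol=http -w AppName:app --buffer-size=32768
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

After saving it, sudo systemctl daemon-reload followed by sudo systemctl enable --now flaskapp.service starts the app immediately and on every boot, and restarts it automatically if it crashes.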
You can use PM2.
Starting an application with PM2 is straightforward. It will auto-discover the interpreter to run your application depending on the script extension. This can be configured via the Ecosystem config file, as I will show you later in this article.
All you need is to install pm2 and then run:
pm2 start appy.py
Great, this application will now run forever, meaning that if the process exits or throws an exception it will get automatically restarted. If you exit the console and connect again you will still be able to check the application state.
To list the applications managed by PM2 run:
pm2 ls
You can also check logs
pm2 logs
Keeping Processes Alive at Server Reboot
If you want to keep your application online across unexpected (or expected) server restarts, you will want to set up an init script to tell your system to boot PM2 and your applications.
It’s really simple with PM2, just run this command (without sudo):
pm2 startup
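One follow-up worth noting (standard PM2 usage rather than something stated in this answer): pm2 startup only prints a command that you then execute once with root privileges, and the apps that should come back after a reboot are the ones recorded with:

pm2 save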
PM2: Manage Python Processes
My Flask web application runs with nginx and gunicorn, and I use supervisor to keep the application running in the background. I always update my files using Windows PowerShell and the scp command. After moving the newly edited files, which already exist on my Ubuntu server, to the server, I run sudo supervisorctl reload to restart the Flask app and see the changes. But this time the Flask app did not start and I only get 502 Bad Gateway. It does not matter how many times I reload supervisor or restart nginx, I only get the error code 502.
The issue was a Python module that was not installed and a typo in a configuration file.
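For anyone debugging the same 502, a quick way to surface this kind of error (the program name flaskapp below is a placeholder for whatever name your supervisor config uses) is to ask supervisor for the process state and its stderr output:

sudo supervisorctl status
sudo supervisorctl tail flaskapp stderr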
I have successfully followed the steps at http://docs.deeppavlov.ai/en/master/integrations/aws_ec2.html to get a REST API running.
Specifically, as outlined in the steps at the link, I ssh to the Ubuntu server and create and activate a Python 3.6 virtual environment and install DeepPavlov and the dependencies and models as outlined in those steps.
The final step is to run the REST API service with the following format:
python -m deeppavlov riseapi <config_path> -p <port>
The screen will then state that Uvicorn is running and to press CTRL+C to quit.
At that point I am able to access the API from a browser and it logs HTTP requests to the screen.
But if I end the ssh session, then the API service is no longer running.
How can I:
Start the service so that it stays running even after I log out of the server.
Capture logging from the service.
Determine if the service is running or not and be able to stop/restart the service when desired.
You can create a systemd service (example with virtualenv and systemd). With systemd you can start, stop, and restart your service via the systemctl command, and view its logs via journalctl.
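For example (assuming the unit file is installed as deeppavlov.service, a placeholder name), day-to-day management would look like this:

sudo systemctl start deeppavlov     # start the service
sudo systemctl enable deeppavlov    # also start it automatically at boot
sudo systemctl status deeppavlov    # check whether it is running
sudo systemctl restart deeppavlov   # stop/restart when desired
journalctl -u deeppavlov -f         # follow the service's log output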
I have created a Flask application to process GNSS data. Certain functions take a lot of time to execute, so I have integrated Celery to perform those functions as asynchronous tasks. First I tested the app on localhost, with RabbitMQ added as the message broker:
app.config['CELERY_BROKER_URL']='amqp://localhost//'
app.config['CELERY_RESULT_BACKEND']='db+postgresql://username:pssword@localhost/DBname'
After fully testing the application in a virtualenv I deployed it on Heroku and added a RabbitMQ add-on. Then I changed the app.config as follows.
app.config['CELERY_BROKER_URL']='amqp://myUsername:Mypassowrd@small-fiver-23.bigwig.lshift.net:10123/FlGJwZfbz4TR'
app.config['CELERY_RESULT_BACKEND']='db+postgres://myusername:Mypassword@ec2-54-163-246-193.compute-1.amazonaws.com:5432/dhcbl58v8ifst/MYDB'
After changing the above I ran the celery worker
celery -A app.celery worker --loglevel=info
and got this error:
[2018-03-16 11:21:16,796: ERROR/MainProcess] consumer: Cannot connect to amqp://SHt1Xvhb:**@small-fiver-23.bigwig.lshift.net:10123/FlGJwZfbz4TR: timed out.
How can I check from the RabbitMQ management console whether my Heroku add-on is working?
It seems the port 10123 is not exposed. Can you try telnet small-fiver-23.bigwig.lshift.net 10123 from the server and see if you're able to connect?
If not, you have to expose that port to be accessible from the server you're trying to connect to.
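If telnet is not available where the worker runs (for example inside a Heroku dyno), an equivalent check can be done with Python's standard library; the host and port below are taken from the broker URL in the question:

import socket

# Try to open a TCP connection to the broker; a timeout or refusal means the
# port is not reachable from this machine.
try:
    socket.create_connection(("small-fiver-23.bigwig.lshift.net", 10123), timeout=5).close()
    print("port 10123 is reachable")
except OSError as exc:
    print("cannot connect:", exc)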
I have a wsgi app with a celery component. Basically, when certain requests come in they can hand off relatively time-consuming tasks to celery. I have a working version of this product on a server I set up myself, but our client recently asked me to deploy it to Cloud Foundry. Since Celery is not available as a service on Cloud Foundry, we (me and the client's deployment team) decided to deploy the app twice – once as a wsgi app and once as a standalone celery app, sharing a rabbitmq service.
The code between the apps is identical. The wsgi app responds correctly, returning the expected web pages. vmc logs celeryapp shows that celery is up and running, but when I send requests to the wsgi app that should become celery tasks, they disappear as soon as they hit a .delay() statement. They neither appear in the celery logs nor show up as errors.
Attempts to debug:
I can't use celery.contrib.rdb in Cloud Foundry (to supply a telnet interface to pdb), as each app is sandboxed and port-restricted.
I don't know how to find the specific rabbitmq instance these apps are supposed to share, so I can't see what messages it's passing (one way to look up the bound instance is sketched below).
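One way this lookup is commonly done on Cloud Foundry (a sketch based on the standard VCAP_SERVICES environment variable, not something described in the original post) is to read the bound service credentials from the app's environment:

import json
import os

# VCAP_SERVICES lists every service instance bound to the app, including a
# shared rabbitmq service; the exact key names vary by service broker.
services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
for service_name, instances in services.items():
    for instance in instances:
        print(service_name, instance.get("credentials", {}))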
Update: to corroborate the above statement about finding rabbitmq, here's what happens when I try to access the node that should be sharing celery tasks:
root@cf:~# export RABBITMQ_NODENAME=eecef185-e1ae-4e08-91af-47f590304ecc
root@cf:~# export RABBITMQ_NODE_PORT=57390
root@cf:~# ~/cloudfoundry/.deployments/devbox/deploy/rabbitmq/sbin/rabbitmqctl list_queues
Listing queues ...
=ERROR REPORT==== 18-Jun-2012::11:31:35 ===
Error in process <0.36.0> on node 'rabbitmqctl17951@cf' with exit value: {badarg,[{erlang,list_to_existing_atom,["eecef185-e1ae-4e08-91af-47f590304ecc@localhost"]},{dist_util,recv_challenge,1},{dist_util,handshake_we_started,1}]}
Error: unable to connect to node 'eecef185-e1ae-4e08-91af-47f590304ecc@cf': nodedown
diagnostics:
- nodes and their ports on cf: [{'eecef185-e1ae-4e08-91af-47f590304ecc',57390},
{rabbitmqctl17951,36032}]
- current node: rabbitmqctl17951@cf
- current node home dir: /home/cf
- current node cookie hash: 1igde7WRgkhAea8fCwKncQ==
How can I debug this and/or why are my tasks vanishing?
Apparently the problem was caused by a deadlock between the broker and the celery worker, such that the worker would never acknowledge the task as complete, and never accept a new task, but never crashed or failed either. The tasks weren't vanishing; they were simply staying in queue forever.
Update: The deadlock was caused by the fact that we were running celeryd inside a wrapper script that installed dependencies. (Literally pip install -r requirements.txt && ./celeryd -lINFO). Because of how Cloud Foundry manages process trees, Cloud Foundry would try to kill the parent process (bash), which would HUP celeryd, but ultimately lots of child processes would never die.