How to add a delay to supervised process in supervisor - linux - python

I added a bottle server that uses python's cassandra library, but it exits with this error: Bottle FATAL Exited too quickly (process log may have details). The log shows this:
File "/usr/local/lib/python2.7/dist-packages/cassandra/cluster.py", line 1765, in _reconnect_internal
raise NoHostAvailable("Unable to connect to any servers", errors)
So I tried to run it manually using supervisorctl start Bottle, and then it started with no issue. The conclusion: the Bottle service starts too fast, before the Cassandra supervised service it needs is up, so a delay is needed!

This is what I use:
[program:uwsgi]
command=bash -c 'sleep 5 && uwsgi /etc/uwsgi.ini'

Not happy with the sleep hack, I created a startup script and launched supervisorctl start processname from there.
[program:startup]
command=/startup.sh
startsecs = 0
autostart = true
autorestart = false
startretries = 1
priority=1
[program:myapp]
command=/home/website/venv/bin/gunicorn /home/website/myapp/app.py
autostart=false
autorestart=true
process_name=myapp
startup.sh
#!/bin/bash
sleep 5
supervisorctl start myapp
This way supervisor fires the startup script once, and that script starts myapp after 5 seconds. Mind the autostart=false and autorestart=true on myapp.
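If the fixed 5-second sleep feels fragile, the startup script can instead poll Cassandra's CQL port until it accepts connections. A sketch, assuming Cassandra listens on localhost:9042 and the app is the myapp program above:

```shell
#!/bin/bash
# startup.sh alternative: wait until a TCP port accepts connections,
# then start the app, instead of sleeping a fixed number of seconds.

wait_for_port() {
  local host=$1 port=$2 tries=0 max=${MAX_TRIES:-30}
  # /dev/tcp is a bash feature: the subshell exits 0 once it can connect
  until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$max" ]; then
      echo "gave up waiting for $host:$port" >&2
      return 1
    fi
    sleep 1
  done
}

# Usage in startup.sh:
#   wait_for_port localhost 9042 && supervisorctl start myapp
```

This retries once per second for up to 30 seconds, so the app starts as soon as Cassandra is actually reachable rather than after an arbitrary delay.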

I had a similar issue where starting 64 python rq-worker processes using supervisorctl raised CPU and RAM alerts at every restart. What I did was the following:
command=/bin/bash -c "sleep %(process_num)02d && virtualenv/bin/python3 manage.py rqworker --name %(program_name)s_my-rq-worker_%(process_num)02d default low"
Basically, before running the python command, I sleep for N seconds, where N is the process number. This means supervisor will start one rq-worker process every second.
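For context, a sketch of the full [program:x] section this command would sit in (the section name, numprocs value, and paths are assumptions):

```ini
; staggered rq-worker pool -- supervisor expands %(process_num)02d
; from 00 to numprocs-1, so each worker sleeps for its own index
[program:my-rq-worker]
command=/bin/bash -c "sleep %(process_num)02d && virtualenv/bin/python3 manage.py rqworker --name %(program_name)s_my-rq-worker_%(process_num)02d default low"
process_name=%(program_name)s_%(process_num)02d
numprocs=64
autostart=true
autorestart=true
```

With numprocs=64, worker 00 starts immediately, worker 01 after one second, and so on, spreading the startup load over about a minute.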

Related

How to Properly Exit Airflow Standalone?

I am running airflow standalone as a local development environment. I followed the instructions provided by Airflow to setup the environment, but now I'd like to shut it down in the most graceful way possible.
I ran the standalone command in a terminal, and so my first attempt was to simply use Ctrl+C. It looks promising:
triggerer | [2022-02-02 10:44:06,771] {triggerer_job.py:251} INFO - 0 triggers currently running
^Cstandalone | Shutting down components
However, even 10 minutes later, the shutdown is still in progress, with no more messages in the terminal. I used Ctrl+C again and got a KeyboardInterrupt. Did I do this the wrong way? Is there a better way to shut down the standalone environment?
You could try the following (in bash):
pkill --signal 2 -u $USER airflow
or
pkill --signal 15 -u $USER airflow
or
pkill --signal 9 -u $USER airflow
Say what?
Here's more description of each part:
pkill - The process-kill command.
--signal - Tells pkill which signal to send to the process.
2 | 15 | 9 - The id of the signal to send.
2 = SIGINT, which is like CTRL + C.
15 = SIGTERM, the default for pkill.
9 = SIGKILL, which doesn't mess around with gracefully ending a process.
For more info, run kill -L in your bash terminal.
-u - Tells pkill to only match processes whose real user ID is listed.
$USER - The current session user environment variable. This may be different on your system, so adjust accordingly.
airflow - The selection criteria or pattern to match.
See the pkill man page for more detail on the options available.
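A quick terminal demo of the difference, using kill directly (pkill just resolves names to PIDs and then sends the same signals):

```shell
#!/bin/bash
# Send SIGTERM (pkill's default) to a background sleep and inspect the result.
sleep 60 &
pid=$!

kill -15 "$pid"               # same as pkill --signal 15 on that process
wait "$pid" 2>/dev/null
status=$?
echo "exit status: $status"   # 128 + 15 = 143: terminated by SIGTERM
```

A process killed by signal N exits with status 128+N, which is how you can tell afterwards whether a shutdown was graceful (SIGINT/SIGTERM handled) or forced (SIGKILL).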

pkill -f not working from inside shell script

I have a shell script called kill.sh that helps me restart a python script I've written. I normally use pkill -f main.py to kill my forever-running python script. However, when I put the same command in a shell script, it does not work.
My script
pkill -f main.py
ps aux | grep main.py # Still shows the process running.
Just executing pkill -f main.py on the bash command line works as expected. Why is this?
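One thing worth ruling out before blaming pkill: the grep in that ps pipeline matches its own command line, so the "still running" line may just be the grep process itself. A quick check (the bracket trick is a common shell idiom, not specific to this script):

```shell
#!/bin/bash
# `grep main.py` can match the grep process itself in the ps snapshot.
ps aux | grep main.py                  # may print the grep, even if main.py is dead
ps aux | grep '[m]ain.py'              # bracket trick: grep's own cmdline no longer matches
pgrep -af main.py || true              # or use pgrep, which excludes itself
```

The pattern `[m]ain.py` still matches the string "main.py" in other processes' command lines, but not the literal "[m]ain.py" in grep's own, so an empty result really means the process is gone.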
This is not a fully satisfactory answer, as I could not find the root cause of why pkill -f does not work from inside a script. I ended up using a systemd service file to manage my python process instead. Here's an example, FYI.
[Unit]
Description=Service Name
[Service]
Environment=PYTHONUNBUFFERED=1
ExecStart=/path/to/python /path/to/python/script.py
Restart=on-failure
RestartSec=5s
WorkingDirectory=/python/project/dir/
Name the file main.service and place it in /lib/systemd/system/
Running the service systemctl start main.service
Stop the service systemctl stop main.service
Restart the service systemctl restart main.service
Show status and output systemctl status main.service -l
Now I don't have to worry about multiple processes running. If the program dies it'll even restart.
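One addition worth noting: the unit above has no [Install] section, so it can be started manually but not enabled to start at boot. The missing piece (multi-user.target is the usual default for services like this):

```ini
[Install]
WantedBy=multi-user.target
```

With that section in place, systemctl enable main.service registers the service to start automatically at boot.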

Supervisor command won't start Chromium

EDIT: Apparently the script DOES run, but it just doesn't start my browser. Still don't know why tho.
I'm trying to use supervisor to run commands/scripts, but I don't seem to be able to get it to work.
I got the idea from the Pi_Video_looper project, which does the same with the following script:
# Supervisord configuration to run video looper at boot and
# ensure it runs continuously.
[program:video_looper]
command=python -u -m Adafruit_Video_Looper.video_looper
autostart=true
autorestart=unexpected
startsecs=5
So I modified it to my needs to this:
# Supervisord configuration to run video looper at boot and
# ensure it runs continuously.
[program:video_looper]
command=chromium-browser http://google.be --incognito
autostart=true
autorestart=unexpected
startsecs=5
I also tried it with the command:
python /home/pi/Startup/Script.py
which does some testing and then calls the browser, but that doesn't do anything either, although it runs perfectly from the command line. Am I missing something?
EDIT: Doesn't work after reboot, doesn't work after a sudo service supervisor restart
EDIT 2 :
The logfile shows that it should be running, so apparently it just doesn't open in my GUI:
2016-01-27 16:40:43,569 INFO daemonizing the supervisord process
2016-01-27 16:40:43,573 INFO supervisord started with pid 4767
2016-01-27 16:40:44,583 INFO spawned: 'video_looper' with pid 4773
2016-01-27 16:40:49,593 INFO success: video_looper entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
The working version is below. The main issue here was that chromium can't be run as root, for some obscure reason:
# Supervisord configuration to run chromium at boot and
# ensure it runs continuously.
[program:chromiumbrowser]
command=chromium-browser http://google.be --incognito
user=pi
autostart=true
autorestart=unexpected
startsecs=5
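If the process shows RUNNING but nothing appears on screen (as in EDIT 2), one common cause is that supervisor-spawned processes do not inherit the X session's environment. A sketch of the extra settings that point the browser at the desktop, assuming the display is :0 and the session belongs to user pi:

```ini
[program:chromiumbrowser]
command=chromium-browser http://google.be --incognito
user=pi
environment=DISPLAY=":0",XAUTHORITY="/home/pi/.Xauthority"
autostart=true
autorestart=unexpected
startsecs=5
```

Without DISPLAY set, a GUI program started by a daemon has no X server to draw on, so it either exits or runs invisibly.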

Celery-Supervisor: How to restart a supervisor job to make newly updated celery-tasks working?

I have a running supervisor job for my celery server. Now I need to add a new task to it, but unfortunately my celery server command is not configured to track those dynamic changes automatically.
Here is my celery command:
python manage.py celery worker --broker=amqp://username:password@localhost/our_app_vhost
To restart my celery process, I have tried,
sudo supervisorctl -c /etc/supervisor/supervisord.conf restart <process_name>
supervisorctl stop all
supervisorctl start all
service supervisor restart
But none of them worked. How do I restart it?
If you want to manage processes with supervisorctl, you should configure the [supervisorctl], [unix_http_server] and [rpcinterface:supervisor] sections in your configuration file.
Here is a sample configuration file.
sample.conf
[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
[program:my_worker]
command = python manage.py celery worker --broker=amqp://username:password@localhost/our_app_vhost
[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
Now start supervisor with
supervisord -c sample.conf
Now if you want to restart your worker you can do it with
supervisorctl -c sample.conf restart my_worker
This restarts your worker. Alternatively, you can drop into the supervisor shell and restart it there:
sudo supervisorctl -c sample.conf
supervisor> restart my_worker
my_worker: stopped
my_worker: started
Note:
There is an option to autoreload workers in Celery
python manage.py celery worker --autoreload --broker=amqp://username:password@localhost/our_app_vhost
This should be used in development mode only. Using this in production is not recommended.
More about this in the celery docs.
You can put your celery program configuration in /etc/supervisor/conf.d/. Create a new config file for celery, e.g. celery.conf.
Assuming your virtualenv is venv, your django project is sample, and your celery script is in _celery.py,
the file should look like this:
[program:celery]
command=/home/ubuntu/.virtualenvs/venv/bin/celery --app=sample._celery:app worker --loglevel=INFO
directory=/home/ubuntu/sample/
user=ubuntu
numprocs=1
stdout_logfile=/home/ubuntu/logs/celery-worker.log
stderr_logfile=/home/ubuntu/logs/celery-error.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
After writing this supervisor program config, you need to tell supervisor about it.
If you added a new program, run this:
$ sudo supervisorctl reread
celery: available
If you added or updated programs, run this:
$ sudo supervisorctl update
celery: added process group
To check the status of your celery task
$ sudo supervisorctl status celery
celery RUNNING pid 18020, uptime 0:00:50
To stop the celery task
$ sudo supervisorctl stop celery
celery: stopped
To start the celery task
$ sudo supervisorctl start celery
celery: started
To restart the celery task (this stops and then starts the specified task):
$ sudo supervisorctl restart celery
celery: stopped
celery: started
If tasks are running, restarting celery will wait for them to complete, so you may need to kill all running processes first.
Run the following command to kill all celery processes:
kill -9 $(ps aux | grep celery | grep -v grep | awk '{print $2}' | tr '\n' ' ') > /dev/null 2>&1
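To see what that pipeline actually hands to kill, here is a dry run of the PID-extraction part on canned ps output (the fake process lines are illustrative):

```shell
#!/bin/bash
# Dry run of the PID-extraction pipeline from the kill command above.
ps_output='user   123  0.0 celery worker
user   456  0.0 celery beat
user   789  0.0 grep celery'

# grep celery: keep matching lines; grep -v grep: drop the grep process itself;
# awk '{print $2}': take the PID column; tr: join PIDs onto one line for kill.
pids=$(echo "$ps_output" | grep celery | grep -v grep | awk '{print $2}' | tr '\n' ' ')
echo "pids: $pids"    # prints: pids: 123 456
```

The grep -v grep step matters: without it, the pipeline would try to kill the grep process that is doing the searching.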
Restart celery:
sudo supervisorctl stop all
sudo supervisorctl start all

How can I run uWsgi as a service in CentOs?

I am in a hurry; I can find out how to do this myself, but I need some help to achieve it without losing too much time.
Currently what I do to run a uWsgi instance along with my ini file is just:
uwsgi --ini /home/myonlinesite/uwsgi.ini --pidfile /var/run/uwsgi_serv.pid
and then to stop:
uwsgi --stop /var/run/uwsgi_serv.pid
By the way, I have this code inside a uwsgi init file at /etc/init.d/uwsgi,
so when I run /etc/init.d/uwsgi start it executes with the ini config file, and when I execute /etc/init.d/uwsgi stop it stops the uwsgi process by its pid.
The problem is that when I start the uWSGI service it runs normally and logs every http request, any debug print and so on, but when I close PuTTY (which is where I manage my VPS) it kills all uWSGI processes and the site stops being served.
I do not know if I have to touch the pid file only, or what I need to do to leave the uWSGI process running after I close PuTTY.
Thanks in advance.
If you are setting the parameters in the command line, add the flag -d file.log to your command (-d stands for daemonize):
uwsgi --ini /home/myonlinesite/uwsgi.ini --pidfile /var/run/uwsgi_serv.pid -d file.log
If you are setting the parameters in a config file, add the following line in your config:
daemonize = /absolute/path/to/file.log
In both cases, uWSGI will run in the background and log everything to file.log. With these options there is no need for nohup et al.
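For reference, a minimal uwsgi.ini sketch with the daemonize option in place (the module, port, and log path are assumptions, not taken from the question):

```ini
[uwsgi]
module = myapp:application
http = :8080
pidfile = /var/run/uwsgi_serv.pid
daemonize = /var/log/uwsgi/myonlinesite.log
```

With daemonize set, uwsgi --ini uwsgi.ini detaches from the terminal on its own, so closing the SSH session no longer kills the server.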
Using nohup to start the uWSGI process should solve your problem of the process stopping when you log out.
A tutorial
Be sure to add
daemonize = logfile
to your config.
