Run a Python script with supervisor

I copied from here to run my Python code as a daemon.
For extra uptime, I thought it would be a better idea to use supervisor to keep this daemon running.
So I did this:
python_deamon.conf
[program:python_deamon]
directory=/usr/local/python_deamon/
command=/usr/local/python_venv/bin/python daemon_runnner.py start
stderr_logfile=/var/log/gunicorn.log
stdout_logfile=/var/log/gunicorn.log
autostart=true
autorestart=true
The problem is that although supervisor successfully starts python_deamon, it keeps retrying:
2015-09-23 16:10:45,592 CRIT Supervisor running as root (no user in config file)
2015-09-23 16:10:45,592 WARN Included extra file "/etc/supervisor/conf.d/python_daemon.conf" during parsing
2015-09-23 16:10:45,592 INFO RPC interface 'supervisor' initialized
2015-09-23 16:10:45,592 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2015-09-23 16:10:45,592 INFO supervisord started with pid 13880
2015-09-23 16:10:46,595 INFO spawned: 'python_deamon' with pid 17884
2015-09-23 16:10:46,611 INFO exited: python_deamon (exit status 1; not expected)
2015-09-23 16:10:47,614 INFO spawned: 'python_deamon' with pid 17885
2015-09-23 16:10:47,630 INFO exited: python_deamon (exit status 1; not expected)
2015-09-23 16:10:49,635 INFO spawned: 'python_deamon' with pid 17888
2015-09-23 16:10:49,656 INFO exited: python_deamon (exit status 1; not expected)
2015-09-23 16:10:52,662 INFO spawned: 'python_deamon' with pid 17891
2015-09-23 16:10:52,680 INFO exited: python_deamon (exit status 1; not expected)
2015-09-23 16:10:53,681 INFO gave up: python_deamon entered FATAL state, too many start retries too quickly
Just for the record: after overriding the run() method, I never return anything.
Is it possible to do what I am trying to do, or am I being dumb?
P.S.: I know that the root cause of the whole problem is that since run() never returns anything, supervisor keeps trying to start it, thinks the process failed, and reports the status FATAL Exited too quickly (process log may have details). My actual question is: am I doing it right? Or can this be done this way?
P.P.S.: Standalone (without supervisor), daemon_runnner.py runs fine both with and without sudo permissions.

Try setting startsecs = 0:
[program:foo]
command = ls
startsecs = 0
autorestart = false
http://supervisord.org/configuration.html
startsecs
The total number of seconds which the program needs to stay running after a startup to consider the start successful. If the program does not stay up for this many seconds after it has started, even if it exits with an “expected” exit code (see exitcodes), the startup will be considered a failure. Set to 0 to indicate that the program needn’t stay running for any particular amount of time.
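Applied to the question's config, a sketch might look like this (everything else as the asker had it; the launcher exits as soon as the daemon forks, so that exit must not be counted as a failure):
[program:python_deamon]
directory=/usr/local/python_deamon/
command=/usr/local/python_venv/bin/python daemon_runnner.py start
stderr_logfile=/var/log/gunicorn.log
stdout_logfile=/var/log/gunicorn.log
autostart=true
autorestart=false ; the launcher's exit is expected, don't restart it
startsecs=0       ; don't require the launcher to stay up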

This is what I normally do:
Create a service.conf file which describes the new Python script. This file references the shell script that actually launches the Python script, and it lives inside /etc/supervisor/conf.d/.
Create a shell script which launches the Python script, and make it executable: chmod 755 service.sh.
Configure log_stderr and stderr_logfile so you can verify any issues.
Update supervisor using reload and then check status:
supervisor> status
alexad RUNNING pid 32657, uptime 0:21:05
service.conf
[program:alexad]
; Set full path to celery program if using virtualenv
command=sh /usr/local/src/gonzo/supervisorctl/alexad.sh
directory=/usr/local/src/gonzo/services/alexa
log_stdout=true ; if true, log program stdout (default true)
log_stderr=true ; if true, log program stderr (default false)
stderr_logfile=/usr/local/src/gonzo/log/alexad.err
logfile=/usr/local/src/gonzo/log/alexad.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; Set Celery priority higher than default (999)
priority=500
service.sh
#!/bin/bash
cd /usr/local/src/gonzo/services/alexa
exec python reader.py
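After dropping both files in place, the make-executable and reload steps from the list above look like this (paths per the config):
chmod 755 /usr/local/src/gonzo/supervisorctl/alexad.sh
sudo supervisorctl reload
sudo supervisorctl status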

You should check your supervisor program logs, which normally live in /var/log/supervisor/.
In my case, I had a ModuleNotFoundError: No module named 'somemodule'.
I found this odd, because when I ran the script directly it worked fine.
After attempting to run the script with sudo, I realized what was happening.
Python imports are specific to the user, not to the folder. So if you run python or pip install as your currently logged-in user and then try running the same script as sudo or some other user, it will probably return ModuleNotFoundError: No module named 'somemodule'.
Supervisor by default runs as root.
I solved this by setting the user in the supervisor config file to the current user, which in my case was ubuntu:
[program:some_program]
directory=/home/ubuntu/scripts/
command=/usr/bin/python3 myscript.py
autostart=true
autorestart=true
user=ubuntu
stderr_logfile=/var/log/supervisor/myscriptlogs.err.log
stdout_logfile=/var/log/supervisor/myscriptlogs.out.log
As a side note, it's also important to make sure your Supervisor command= is calling the script with the version of Python you intend.
cd to the folder where your script lives, run which python, and use that path.
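For example (the printed path is illustrative; yours may point into a virtualenv):
cd /home/ubuntu/scripts
which python3
# prints e.g. /usr/bin/python3 -- use that absolute path in command=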

Your script is failing with an exit status of 1; Supervisor is simply trying to restart it.
Supervisor is started with root permissions; perhaps it is passing those permissions on to your script and that is causing it to fail (a change in the source directory or something). Check what happens when you run your daemon as root without Supervisor.
We really need more information to know why it is failing.

Not sure if the issue is the same with DaemonRunner, but if you use DaemonContext directly under supervisord, you need to set context.detach_process to False.
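A minimal sketch of that, assuming the python-daemon package (main() stands in for your actual service loop):
import daemon

def main():
    ...  # your service loop

# detach_process=False keeps the process in the foreground so
# supervisord can track it instead of losing the forked child.
with daemon.DaemonContext(detach_process=False):
    main()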


Multiple programs in supervisor

I'm deploying a Django app in a virtual environment and I'm using supervisor for the app itself and some Celery tasks. When my /etc/supervisor/conf.d/project is like this:
[program:botApp]
command = /home/ubuntu/gunicorn_start.bash;
user = ubuntu;
stdout_logfile = /home/ubuntu/logs/gunicorn_supervisor.log;
redirect_stderr = true;
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8;
it works fine: I run sudo systemctl restart supervisor and I can see it running properly. But when I add my second program in the same configuration file, like this:
[program:botApp]
command = /home/ubuntu/gunicorn_start.bash;
user = ubuntu;
stdout_logfile = /home/ubuntu/logs/gunicorn_supervisor.log;
redirect_stderr = true;
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8;
[program:worker]
command=/home/ubuntu/django_env/bin/celery -A botApp worker -l info;
user=ubuntu;
numprocs=1;
stdout_logfile=/home/ubuntu/logs/celeryworker.log;
redirect_stderr = true;
autostart=true;
autorestart=true;
startsecs=10;
stopwaitsecs = 600 ;
killasgroup=true;
priority=998;
it throws the following error:
● supervisor.service - Supervisor process control system for UNIX
Loaded: loaded (/lib/systemd/system/supervisor.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Tue 2018-09-04 08:09:26 UTC; 12s ago
Docs: http://supervisord.org
Process: 21931 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS)
Process: 21925 ExecStart=/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf (code=exited, status=2)
Main PID: 21925 (code=exited, status=2)
Sep 04 08:09:26 ip-172-31-45-13 systemd[1]: supervisor.service: Unit entered failed state.
Sep 04 08:09:26 ip-172-31-45-13 systemd[1]: supervisor.service: Failed with result 'exit-code'.
I have tried changing the second program to be identical to the first one, just with a different name and log file, and it throws the same error. Do I need to do something extra to use two programs with supervisor? Many thanks.
Since this question was asked over a year ago, it seems doubtful we'll ever receive the answers to these questions, but the following pieces of information would have been helpful:
what Linux distribution and version are (or were) you using; e.g., Ubuntu 18.04, CentOS 7, etc.?
did you look at the logs generated by systemd? (journalctl -xu supervisor)
what, if any, messages did they contain?
did you look at the individual log files generated by your two supervisord services (e.g., /home/ubuntu/logs/celeryworker.log)?
what, if any, messages did they contain?
My gut feeling is that the output of journalctl -xu supervisor will tell you what you need to know, or at least move you a step in the right direction.
Once the configuration file has been created, you may update the Supervisor configuration and start the processes using the following commands:
sudo supervisorctl reread
sudo supervisorctl update
and then restart your programs with supervisorctl:
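For example, restarting the two programs from the question after an update:
sudo supervisorctl restart botApp
sudo supervisorctl restart worker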

Supervisor command won't start Chromium

EDIT: Apparently the script DOES run, but it just doesn't start my browser. Still don't know why, though.
I'm trying to use supervisor to run commands/scripts, but I don't seem to be able to get it to work.
I got the idea from Pi_Video_looper, which does the same thing with the following script:
# Supervisord configuration to run video looper at boot and
# ensure it runs continuously.
[program:video_looper]
command=python -u -m Adafruit_Video_Looper.video_looper
autostart=true
autorestart=unexpected
startsecs=5
So I modified it to my needs:
# Supervisord configuration to run video looper at boot and
# ensure it runs continuously.
[program:video_looper]
command=chromium-browser http://google.be --incognito
autostart=true
autorestart=unexpected
startsecs=5
I also tried it with this command:
python /home/pi/Startup/Script.py
which does some testing and then calls the browser, but that doesn't do anything either, although it runs perfectly from the command line. Am I missing something?
EDIT: It doesn't work after a reboot, and doesn't work after a sudo service supervisor restart.
EDIT 2 :
The logfile shows that it should be running, so apparently it just doesn't open in my GUI:
2016-01-27 16:40:43,569 INFO daemonizing the supervisord process
2016-01-27 16:40:43,573 INFO supervisord started with pid 4767
2016-01-27 16:40:44,583 INFO spawned: 'video_looper' with pid 4773
2016-01-27 16:40:49,593 INFO success: video_looper entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
The working version is below. The main issue here was that Chromium can't be run as root, for some obscure reason:
# Supervisord configuration to run chromium at boot and
# ensure it runs continuously.
[program:chromiumbrowser]
command=chromium-browser http://google.be --incognito
user=pi
autostart=true
autorestart=unexpected
startsecs=5
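If the browser still fails to appear, tailing the program's stderr through supervisorctl usually shows why (program name per the config above):
sudo supervisorctl tail chromiumbrowser stderr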

How to add a delay to a supervised process in supervisor

I added a Bottle server that uses Python's cassandra library, but it exits with this error:
Bottle FATAL Exited too quickly (process log may have details)
The log shows this:
File "/usr/local/lib/python2.7/dist-packages/cassandra/cluster.py", line 1765, in _reconnect_internal
    raise NoHostAvailable("Unable to connect to any servers", errors)
So I tried to run it manually using supervisorctl start Bottle, and then it started with no issue. The conclusion: the Bottle service starts too fast (before the cassandra supervised service it needs is up): a delay is needed!
This is what I use:
[program:uwsgi]
command=bash -c 'sleep 5 && uwsgi /etc/uwsgi.ini'
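Note the bash -c wrapper: supervisord does not run command= through a shell, so the sleep 5 && ... chaining only works inside an explicit shell invocation.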
Not happy with the sleep hack, I created a startup script and launched supervisorctl start processname from there:
[program:startup]
command=/startup.sh
startsecs = 0
autostart = true
autorestart = false
startretries = 1
priority=1
[program:myapp]
command=/home/website/venv/bin/gunicorn /home/website/myapp/app.py
autostart=false
autorestart=true
process_name=myapp
startup.sh
#!/bin/bash
sleep 5
supervisorctl start myapp
This way supervisor fires the startup script once, and that script starts myapp after 5 seconds; mind the autostart=false and autorestart=true on myapp.
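To confirm the chain worked, you can check both programs afterwards (names per the configs above):
sudo supervisorctl status startup myapp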
I had a similar issue where starting 64 Python rq-worker processes using supervisorctl raised CPU and RAM alerts at every restart. What I did was the following:
command=/bin/bash -c "sleep %(process_num)02d && virtualenv/bin/python3 manage.py rqworker --name %(program_name)s_my-rq-worker_%(process_num)02d default low"
Basically, before running the Python command, I sleep for N seconds, where N is the process number; this means supervisor will start one rq-worker process per second.
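For context, here is a sketch of the full program section such a command would sit in (the program name is an assumption; note that %(process_num)02d expansion requires numprocs and a process_name that includes the process number):
[program:my-rq-worker]
command=/bin/bash -c "sleep %(process_num)02d && virtualenv/bin/python3 manage.py rqworker --name %(program_name)s_my-rq-worker_%(process_num)02d default low"
process_name=%(program_name)s_%(process_num)02d
numprocs=64
autostart=true
autorestart=true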

managing uWSGI with Upstart

I am trying to configure uWSGI with Upstart.
I created the file /etc/init/uwsgi-flask.conf:
description "uwsgi for flask"
start on runlevel [2345]
stop on runlevel [06]
exec /appdir/virtualenvdir/bin/uwsgi /appdir/virtualenvdir/uwsgi.ini --die-on-term
On reboot, it starts up correctly, but I am not able to stop the service.
If I type initctl stop uwsgi-flask in the shell, it gives:
initctl: Unknown instance:
Does anyone have any idea?
You probably have daemonize=some/log/file/path in your ini file. That will make the process exit with a "normal" exit code, so Upstart will figure that you wanted the job stopped and terminate it.
Remove daemonize and Upstart will track the process in the foreground.
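In other words, look for a line like this in uwsgi.ini (the path is illustrative) and delete or comment it out so uWSGI stays in the foreground:
[uwsgi]
; daemonize = /var/log/uwsgi.log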

How can I run uWSGI as a service in CentOS?

I am in a hurry; I can find out how to do this myself, but I need some help to achieve it without losing too much time.
Currently, what I do to run a uWSGI instance with my ini file is just:
uwsgi --ini /home/myonlinesite/uwsgi.ini --pidfile /var/run/uwsgi_serv.pid
and then to stop:
uwsgi --stop /var/run/uwsgi_serv.pid
By the way, I have this code inside an init file at /etc/init.d/uwsgi,
so when I run /etc/init.d/uwsgi start it runs uWSGI with the ini config file, and when I execute /etc/init.d/uwsgi stop it stops the uWSGI process using the pid file.
The problem is that when I start the uWSGI service it runs normally and logs every HTTP request, every debug print and so on, but when I close PuTTY (which is where I manage my VPS) it kills all the uWSGI processes and the site stops being served.
I do not know whether I only have to touch the pid file, or what I need to do to leave the uWSGI process running after I close PuTTY.
Thanks in advance.
If you are setting the parameters in the command line, add the flag -d file.log to your command (-d stands for daemonize):
uwsgi --ini /home/myonlinesite/uwsgi.ini --pidfile /var/run/uwsgi_serv.pid -d file.log
If you are setting the parameters in a config file, add the following line in your config:
daemonize = /absolute/path/to/file.log
In both cases, uWSGI will run in the background and log everything in file.log. Given these options, there is no need to use nohup et al.
Using nohup to start the uWSGI process should solve your problem of the process stopping when you log out.
A tutorial
Be sure to add
daemonize = logfile
to your config
