Why does my daemon-like Python script run multiple times?

I start a Python script from a bash script in /etc/init.d using sudo /etc/init.d/fanservice.sh start (or stop).
It starts and stops as expected, and starts on boot. The Python script works well.
However, in top or htop I see that the script is running multiple times.
What might be causing this, and how do I stop it?
Other than looking around for other triggers (I've checked there are no old references to the script in crontab), I'm not sure where else to look.
This is /etc/init.d/fanservice.sh
I run the Python script as root to allow access to GPIO.
#!/bin/sh
### BEGIN INIT INFO
# Provides: myservice
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Put a short description of the service here
# Description: Put a long description of the service here
### END INIT INFO
# Change the next 3 lines to suit where you install your script and what you want to call it
DIR=/home/pi
DAEMON=$DIR/fanservice.py
DAEMON_NAME=fanservice
# Add any command line options for your daemon here
DAEMON_OPTS=""
# This next line determines what user the script runs as.
# Root generally not recommended but necessary if you are using the Raspberry Pi GPIO from Python.
DAEMON_USER=root
# The process ID of the script when it runs is stored here:
PIDFILE=/var/run/$DAEMON_NAME.pid
. /lib/lsb/init-functions
do_start () {
    log_daemon_msg "Starting system $DAEMON_NAME daemon"
    start-stop-daemon --start --background --pidfile $PIDFILE --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON -- $DAEMON_OPTS
    log_end_msg $?
}

do_stop () {
    log_daemon_msg "Stopping system $DAEMON_NAME daemon"
    start-stop-daemon --stop --pidfile $PIDFILE --retry 10
    log_end_msg $?
}

case "$1" in
    start|stop)
        do_${1}
        ;;
    restart|reload|force-reload)
        do_stop
        do_start
        ;;
    status)
        status_of_proc "$DAEMON_NAME" "$DAEMON" && exit 0 || exit $?
        ;;
    *)
        echo "Usage: /etc/init.d/$DAEMON_NAME {start|stop|restart|status}"
        exit 1
        ;;
esac
exit 0
The Python script uses wiringPi, GPIO, tach, and paho-mqtt to read a temperature, set a fan speed, read the speed back from the tach, and then send an MQTT message to a broker. All of that is contained in a while True: loop with a time.sleep(58) at the end, so what I'm expecting is for the 'service' to start and loop every 58 seconds.
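In outline, the loop described is something like this (a sketch with hypothetical helper names and broker address, not the actual script):

import time
import paho.mqtt.publish as publish  # paho-mqtt

def run():
    while True:
        temperature = read_temperature()   # hypothetical helper: read temperature
        set_fan_speed(temperature)         # hypothetical helper: set PWM duty cycle via wiringPi/GPIO
        rpm = read_tach()                  # hypothetical helper: read fan speed from the tach
        publish.single("home/fan/rpm", payload=str(rpm), hostname="broker.local")  # placeholder topic/host
        time.sleep(58)

if __name__ == "__main__":
    run()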
It does this, but every minute or so a new instance appears in top.
If I examine /var/run/fanservice.pid, where the script PID is stored, I see it holds the PID of the first instance.
For example, after 15 or 20 minutes htop will show the script running with a PID that matches the one stored in the PID file, and that process will have a time of XX minutes and seconds since boot. Then there will be XX duplicate processes with their own PIDs, all with a time of 00:00.00, and the number of them corresponds (I think) to the number of minutes since boot.
Any tips very welcome. The init.d script is one I found on the internet, I can't remember where, but I think I've probably broken it somehow! That said, I moved to running these tasks in a timer loop rather than from cron because the Python script wasn't exiting after being called by cron. I also have other scripts running this way that need a faster loop than cron's one-minute minimum, so I'd like to learn how to make this work properly.
Thank you!

Related

init.d autostart python script after reboot (Centos)

Does anyone have a script to autostart a Python script after reboot (CentOS)?
I tried this code, but it is not working:
#! /bin/sh
# chkconfig: 2345 95 20
# description: almagest
# What your script does (not sure if this is necessary though)
# processname: www-almagest
# /etc/init.d/www-almagest start
case "$1" in
    start)
        echo "Starting almagest"
        # run application you want to start
        python ~almagest_clinic/app.py &> /dev/null &
        ;;
    stop)
        echo "Stopping example"
        # kill application you want to stop
        kill -9 $(sudo lsof -t -i:8002)
        ;;
    *)
        echo "Usage: /etc/init.d/www-private{start|stop}"
        exit 1
        ;;
esac
exit 0
chkconfig script on
I found a solution: https://github.com/frdmn/service-daemons/blob/master/centos
With an absolute path it worked for me.
The init process runs as root, and you have a relative path
python $HOME/almagest_clinic/app.py &> /dev/null &
in your script.
The root user may not be able to see that path (root's home directory is not yours), so I would suggest changing it to an absolute path.
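A related precaution inside the Python script itself is to resolve any files it opens from the script's own directory rather than from whatever working directory or $HOME the init process happens to have; a minimal sketch (the config file name is hypothetical):

import os

# Directory containing this script, independent of the caller's cwd or $HOME
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
CONFIG_PATH = os.path.join(BASE_DIR, "config.ini")  # hypothetical file used by the app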

How do I turn my Python program into a service on Ubuntu?

I have a piece of software written in Python that I am running on an Ubuntu instance. I want to package it so that the user can treat it as a service.
For example, they can run /etc/init.d/myPythonProgram restart to restart it, just like any other service.
You want to search for how to create a 'daemon' with python. So...
How do you create a daemon in Python?
https://pypi.python.org/pypi/python-daemon/
https://pypi.python.org/pypi/daemonize
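For instance, the daemonize package linked above is typically used along these lines (a sketch based on its documented API; the app name and pid file path are arbitrary):

import time
from daemonize import Daemonize

PID = "/tmp/myPythonProgram.pid"  # arbitrary pid file location

def main():
    while True:          # the long-running work of the program goes here
        time.sleep(60)

daemon = Daemonize(app="myPythonProgram", pid=PID, action=main)
daemon.start()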
You need to write a script in /etc/init.d/. In this script you define how to start and stop the software. Here is an example:
case "$1" in
    start)
        start_software
        ;;
    stop)
        stop_software
        ;;
    *)
        echo "Usage: $0 start|stop" >&2
        exit 3
        ;;
esac
exit 0

Run python script as daemon at boot time (Ubuntu)

I've created a small web server using Werkzeug and I can run it the usual Python way with python my_server.py. Pages load and everything works fine. Now I want to start it when my PC boots. What's the easiest way to do that? I've been struggling with upstart, but it doesn't seem to "live in the background", because right after I execute start my_server I receive: kernel: [ 8799.793942] init: my_server main process (7274) terminated with status 1
my_server.py:
...
if __name__ == '__main__':
    from werkzeug.serving import run_simple
    app = create_app()
    run_simple('0.0.0.0', 4000, app)
upstart configuration file my_server.conf:
description "My service"
author "Some Dude <blah@foo.com>"
start on runlevel [2345]
stop on runlevel [016]
exec /path/to/my_server.py
start on startup
Any ideas how to make it work? Or any other, better way to daemonize the script?
Update:
I believe the problem lies within my_server.py. It doesn't seem to start the web server (the run_simple() call) in the first place. What steps should be taken to make a .py file runnable by a task handler such as upstart?
Place a shebang as the first line: #!/usr/bin/env python
Allow execution permissions: chmod 755
Start the daemon with superuser rights (to be absolutely sure no permission restrictions prevent it from starting)
Make sure all Python libraries are there!
Something else?
Solved:
The problem was missing Python dependencies. When the script is started through a task manager (e.g. upstart or start-stop-daemon), no errors are shown. You need to be absolutely sure that the PYTHONPATH contains everything you need.
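One way to see such failures when there is no terminal attached is to log startup errors yourself, since stdout/stderr are usually discarded by the task manager; a sketch (the log path is arbitrary):

import sys
import traceback

try:
    from werkzeug.serving import run_simple  # the import that may be missing under upstart
except Exception:
    # Write the traceback and the interpreter's search path to a file for inspection
    with open("/tmp/my_server_startup_error.log", "w") as f:
        traceback.print_exc(file=f)
        f.write("\nsys.path = %r\n" % sys.path)
    raise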
In addition to gg.kaspersky's method, you could also turn your script into a "service", so that you can start or stop it using:
$ sudo service myserver start
* Starting system myserver.py Daemon [ OK ]
$ sudo service myserver status
* /path/to/myserver.py is running
$ sudo service myserver stop
* Stopping system myserver.py Daemon [ OK ]
and define it as a startup service using:
$ sudo update-rc.d myserver defaults
To do this, you must create this file and save it in /etc/init.d/.
#!/bin/sh -e
DAEMON="/path/to/myserver.py"
DAEMONUSER="myuser"
DAEMON_NAME="myserver.py"
PATH="/sbin:/bin:/usr/sbin:/usr/bin"
test -x $DAEMON || exit 0
. /lib/lsb/init-functions
d_start () {
    log_daemon_msg "Starting system $DAEMON_NAME Daemon"
    start-stop-daemon --background --name $DAEMON_NAME --start --user $DAEMONUSER --exec $DAEMON
    log_end_msg $?
}

d_stop () {
    log_daemon_msg "Stopping system $DAEMON_NAME Daemon"
    start-stop-daemon --name $DAEMON_NAME --stop --retry 5
    log_end_msg $?
}

case "$1" in
    start|stop)
        d_${1}
        ;;
    restart|reload|force-reload)
        d_stop
        d_start
        ;;
    force-stop)
        d_stop
        killall -q $DAEMON_NAME || true
        sleep 2
        killall -q -9 $DAEMON_NAME || true
        ;;
    status)
        status_of_proc "$DAEMON_NAME" "$DAEMON" "system-wide $DAEMON_NAME" && exit 0 || exit $?
        ;;
    *)
        echo "Usage: /etc/init.d/$DAEMON_NAME {start|stop|force-stop|restart|reload|force-reload|status}"
        exit 1
        ;;
esac
exit 0
In this example, I assume you have a shebang like #!/usr/bin/python at the head of your python file, so that you can execute it directly.
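That is, the top of the Python file would look roughly like this (a sketch; the actual server code is elided):

#!/usr/bin/python
# myserver.py -- launched directly by start-stop-daemon via --exec,
# so the shebang and the executable bit are what make it runnable as $DAEMON
import time

def main():
    while True:
        # actual server work goes here
        time.sleep(5)

if __name__ == "__main__":
    main()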
Last but not least, do not forget to give execution rights to your Python server and to the service script:
$ sudo chmod 755 /etc/init.d/myserver
$ sudo chmod 755 /path/to/myserver.py
Here's the page where I learned this originally (french).
Cheers.
One simple way to do this is using crontab:
$ crontab -e
A crontab file will appear for editing; write this line at the end:
@reboot python myserver.py
and quit. Now, after each reboot, the cron daemon will run your myserver Python script.
If you have the supervisor service starting at boot, writing a supervisor program entry is much, much simpler.
You can even set autorestart in case your program fails.

How to run own daemon processes with Django?

In my Django project I have to repeatedly do some processing in the background.
This processing needs access to Django internals, so I put it into Django management commands and run them as cron jobs.
Now I realize that I have to run some of them more frequently (cron can invoke a command at most once every minute). Another problem is that I don't have enough control to prevent the same command from running twice at the same time, which happens when one run takes longer than a minute.
I think I should run them as daemons, but I am looking for a clean way to do it with Django.
Have you ever faced this problem, or do you know of a clean solution for it?
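(For illustration, one common way to keep two runs of the same job from overlapping is an exclusive, non-blocking lock on a file; a sketch with an arbitrary lock path:)

import fcntl
import sys

def acquire_lock(path="/tmp/my_command.lock"):  # arbitrary lock file location
    lock_file = open(path, "w")
    try:
        # Non-blocking exclusive lock; raises if another instance already holds it
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except (IOError, OSError):
        sys.exit("another instance is already running")
    return lock_file  # keep the handle around so the lock is held for the process lifetime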
We do a lot of background processing for Django using Celery http://celeryproject.org/. It requires some effort to set up and there is a bit of a learning curve, but once it's up and running it's just awesome.
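A task then becomes an ordinary function with a decorator, roughly like this (a sketch; the project name, broker URL, and task body are placeholders):

# tasks.py
from celery import Celery

app = Celery("myproject", broker="amqp://localhost")  # placeholder broker URL

@app.task
def process_messages():
    # the work previously done by the management command goes here
    pass

Calling process_messages.delay() then queues the work for a Celery worker running in the background.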
We took a simpler approach: write the script as a normal script with an endless loop that iterates through a queryset, and then use supervise (from daemontools) to manage it as a daemon. Basically, this is all that is needed to have the daemon running:
$ sudo apt-get install daemontools daemontools-run
$ mkdir /etc/service/sendmsgd
$ cat > /etc/service/sendmsgd/run   # the run file contains the two lines below and must be executable
#!/bin/bash
exec /usr/local/bin/sendmsgd
$ sudo svc -d /etc/service/sendmsgd
$ sudo svc -u /etc/service/sendmsgd
$ sudo svstat /etc/service/sendmsgd
/etc/service/sendmsgd: up (pid 10521) 479 seconds
More about this - How do I daemonize an arbitrary script in unix?
Now, /usr/local/bin/sendmsgd may look like:-
import sys
import time

def main(args=None):
    while True:
        process_messages()   # defined elsewhere: iterates through the queryset and does the work
        time.sleep(10)

if __name__ == '__main__':
    import signal

    def signal_handler(signal, frame):
        sys.exit(0)

    signal.signal(signal.SIGINT, signal_handler)
    main(sys.argv)
I had some trouble understanding the documentation on Celery's website. I found this site, which did a nice job of explaining things. I got things working on a CentOS 6.2 system with Django 1.5 + Celery 3.0.17 + sqlite3. The only trouble I had was an error finding the settings module; I had to change it to "myprojectname.settings".
Step 1.
Create the following configuration file as /etc/default/celeryd. Note that you will need to change some of the contents depending on your system.
# Name of nodes to start, here we have a single node
CELERYD_NODES="w1"
# Where to chdir at start.
CELERYD_CHDIR="/var/www/some_folder/Myproject/"
# Python interpreter from environment, if using virtualenv
ENV_PYTHON="/somewhere/.virtualenvs/MyProject/bin/python"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryd_multi"
# How to call "manage.py celeryctl"
CELERYCTL="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryctl"
# Extra arguments to celeryd
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Name of the celery config module, don't change this.
CELERY_CONFIG_MODULE="celeryconfig"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# Set any other env vars here too!
PROJET_ENV="PRODUCTION"
# Name of the projects settings module.
# in this case is just settings and not the full path because it will change the dir to
# the project folder first.
export DJANGO_SETTINGS_MODULE="settings"
Step 2. Create the script below as /etc/init.d/celeryd and make it executable with
chmod +x /etc/init.d/celeryd
This one does not need to be modified. (Source)
#!/bin/sh -e
# ============================================
# celeryd - Starts the Celery worker daemon.
# ============================================
#
# :Usage: /etc/init.d/celeryd {start|stop|force-reload|restart|try-restart|status}
# :Configuration file: /etc/default/celeryd
#
# See http://docs.celeryq.org/en/latest/cookbook/daemonizing.html#init-script-celeryd
### BEGIN INIT INFO
# Provides: celeryd
# Required-Start: $network $local_fs $remote_fs
# Required-Stop: $network $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: celery task worker daemon
### END INIT INFO
#set -e
DEFAULT_PID_FILE="/var/run/celeryd#%n.pid"
DEFAULT_LOG_FILE="/var/log/celeryd#%n.log"
DEFAULT_LOG_LEVEL="INFO"
DEFAULT_NODES="celery"
DEFAULT_CELERYD="-m celery.bin.celeryd_detach"
# /etc/init.d/celeryd: start and stop the celery task worker daemon.
CELERY_DEFAULTS=${CELERY_DEFAULTS:-"/etc/default/celeryd"}
test -f "$CELERY_DEFAULTS" && . "$CELERY_DEFAULTS"
if [ -f "/etc/default/celeryd" ]; then
    . /etc/default/celeryd
fi

CELERYD_PID_FILE=${CELERYD_PID_FILE:-${CELERYD_PIDFILE:-$DEFAULT_PID_FILE}}
CELERYD_LOG_FILE=${CELERYD_LOG_FILE:-${CELERYD_LOGFILE:-$DEFAULT_LOG_FILE}}
CELERYD_LOG_LEVEL=${CELERYD_LOG_LEVEL:-${CELERYD_LOGLEVEL:-$DEFAULT_LOG_LEVEL}}
CELERYD_MULTI=${CELERYD_MULTI:-"celeryd-multi"}
CELERYD=${CELERYD:-$DEFAULT_CELERYD}
CELERYCTL=${CELERYCTL:="celeryctl"}
CELERYD_NODES=${CELERYD_NODES:-$DEFAULT_NODES}

export CELERY_LOADER

if [ -n "$2" ]; then
    CELERYD_OPTS="$CELERYD_OPTS $2"
fi

CELERYD_LOG_DIR=`dirname $CELERYD_LOG_FILE`
CELERYD_PID_DIR=`dirname $CELERYD_PID_FILE`
if [ ! -d "$CELERYD_LOG_DIR" ]; then
    mkdir -p $CELERYD_LOG_DIR
fi
if [ ! -d "$CELERYD_PID_DIR" ]; then
    mkdir -p $CELERYD_PID_DIR
fi

# Extra start-stop-daemon options, like user/group.
if [ -n "$CELERYD_USER" ]; then
    DAEMON_OPTS="$DAEMON_OPTS --uid=$CELERYD_USER"
    chown "$CELERYD_USER" $CELERYD_LOG_DIR $CELERYD_PID_DIR
fi
if [ -n "$CELERYD_GROUP" ]; then
    DAEMON_OPTS="$DAEMON_OPTS --gid=$CELERYD_GROUP"
    chgrp "$CELERYD_GROUP" $CELERYD_LOG_DIR $CELERYD_PID_DIR
fi
if [ -n "$CELERYD_CHDIR" ]; then
    DAEMON_OPTS="$DAEMON_OPTS --workdir=\"$CELERYD_CHDIR\""
fi

check_dev_null() {
    if [ ! -c /dev/null ]; then
        echo "/dev/null is not a character device!"
        exit 1
    fi
}

export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"

stop_workers () {
    $CELERYD_MULTI stop $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}

start_workers () {
    $CELERYD_MULTI start $CELERYD_NODES $DAEMON_OPTS \
        --pidfile="$CELERYD_PID_FILE" \
        --logfile="$CELERYD_LOG_FILE" \
        --loglevel="$CELERYD_LOG_LEVEL" \
        --cmd="$CELERYD" \
        $CELERYD_OPTS
}

restart_workers () {
    $CELERYD_MULTI restart $CELERYD_NODES $DAEMON_OPTS \
        --pidfile="$CELERYD_PID_FILE" \
        --logfile="$CELERYD_LOG_FILE" \
        --loglevel="$CELERYD_LOG_LEVEL" \
        --cmd="$CELERYD" \
        $CELERYD_OPTS
}

case "$1" in
    start)
        check_dev_null
        start_workers
        ;;
    stop)
        check_dev_null
        stop_workers
        ;;
    reload|force-reload)
        echo "Use restart"
        ;;
    status)
        $CELERYCTL status $CELERYCTL_OPTS
        ;;
    restart)
        check_dev_null
        restart_workers
        ;;
    try-restart)
        check_dev_null
        restart_workers
        ;;
    *)
        echo "Usage: /etc/init.d/celeryd {start|stop|restart|try-restart|kill}"
        exit 1
        ;;
esac
exit 0
Step 3. Use these commands to start, stop, etc. the script.
# to start celeryd
/etc/init.d/celeryd start
# to stop
/etc/init.d/celeryd stop
# see the status
/etc/init.d/celeryd status
# print the log in the screen
cat /var/log/celery/w1.log
If you have problems, there are a bunch of comments and other recommendations on that site. Hopefully it stays up for a long time.
You could try using The Fat Controller, which can take any script and daemonise it. It can also run scripts repeatedly with intervals specified in seconds, or even no interval at all, and it will prevent there ever being two instances running at a time.
It's written entirely in C so it's very stable, designed to run for months or years on end - no matter how much your own scripts might crash. It's also very easy to get up and running.
It can also do a lot more things, such as parallel running of a script, even adjusting the number of parallel instances in accordance with the amount of work - but I guess that's out of scope for your requirement.
On the website there are plenty of use cases and detailed instructions. If you need any further help with it then just get in touch or file a support ticket and I'll get back to you as soon as I can.
The website is: http://fat-controller.sourceforge.net/

Python script as linux service/daemon

Hello,
I'm trying to run a Python script as a service (daemon) on (Ubuntu) Linux.
On the web there are several solutions, like:
http://pypi.python.org/pypi/python-daemon/
A well-behaved Unix daemon process is tricky to get right, but the required steps are much the same for every daemon program. A DaemonContext instance holds the behaviour and configured process environment for the program; use the instance as a context manager to enter a daemon state.
http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
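For reference, the first package (python-daemon) is used roughly like this, following its documented context-manager style (the worker function here is just a placeholder):

import daemon

def do_main_program():
    # the long-running work goes here
    ...

with daemon.DaemonContext():
    do_main_program()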
However, as I want to integrate my Python script specifically with Ubuntu Linux, my solution is a combination with an init.d script:
#!/bin/bash
WORK_DIR="/var/lib/foo"
DAEMON="/usr/bin/python"
ARGS="/opt/foo/linux_service.py"
PIDFILE="/var/run/foo.pid"
USER="foo"
case "$1" in
    start)
        echo "Starting server"
        mkdir -p "$WORK_DIR"
        /sbin/start-stop-daemon --start --pidfile $PIDFILE \
            --user $USER --group $USER \
            -b --make-pidfile \
            --chuid $USER \
            --exec $DAEMON $ARGS
        ;;
    stop)
        echo "Stopping server"
        /sbin/start-stop-daemon --stop --pidfile $PIDFILE --verbose
        ;;
    *)
        echo "Usage: /etc/init.d/$USER {start|stop}"
        exit 1
        ;;
esac
exit 0
and in python:
import signal
import time
import multiprocessing

stop_event = multiprocessing.Event()

def stop(signum, frame):
    stop_event.set()

signal.signal(signal.SIGTERM, stop)

if __name__ == '__main__':
    while not stop_event.is_set():
        time.sleep(3)
My question now is whether this approach is correct. Do I have to handle any additional signals? Will it be a "well-behaved Unix daemon process"?
Assuming your daemon has some way of continually running (some event loop, twisted, whatever), you can try to use upstart.
Here's an example upstart config for a hypothetical Python service:
description "My service"
author "Some Dude <blah@foo.com>"
start on runlevel [234]
stop on runlevel [0156]
chdir /some/dir
exec /some/dir/script.py
respawn
If you save this as script.conf to /etc/init you simply do a one-time:
$ sudo initctl reload-configuration
$ sudo start script
You can stop it with stop script. What the above upstart conf says is to start this service on reboots and also restart it if it dies.
As for signal handling - your process should naturally respond to SIGTERM. By default this should be handled unless you've specifically installed your own signal handler.
Rloton's answer is good. Here is a light refinement, just because I spent a ton of time debugging. And I need to do a new answer so I can format properly.
A couple other points that took me forever to debug:
When it fails, first check /var/log/upstart/<jobname>.log
If your script implements a daemon with python-daemon, you do NOT use the 'expect daemon' stanza. Having no 'expect' works. I don't know why. (If anyone knows why - please post!)
Also, keep checking "initctl status script" to make sure you are up (start/running). (and do a reload when you update your conf file)
Here is my version:
description "My service"
author "Some Dude <blah@foo.com>"
env PYTHON_HOME=/<pathtovirtualenv>
env PATH=$PYTHON_HOME:$PATH
start on runlevel [2345]
stop on runlevel [016]
chdir <directory>
# NO expect stanza if your script uses python-daemon
exec $PYTHON_HOME/bin/python script.py
# Only turn on respawn after you've debugged getting it to start and stop properly
respawn
