I have a script that runs inside a while loop and monitors a MySQL data source every 2 seconds. If I run it from the command line, it works fine. But if I attach it to a daemon, it throws an error saying "MySQL has gone" or something similar. I checked and found MySQL up and running; I could even execute queries from other tools.
I badly need help. I am running Ubuntu 10.04.
Error Code
Traceback (most recent call last):
File "/home/masnun/Desktop/daemon/daemon.py", line 67, in <module>
main()
File "/home/masnun/Desktop/daemon/daemon.py", line 35, in main
USERPROG()
File "/home/masnun/Desktop/daemon/mymain.py", line 19, in main
cursor.execute("select * from hits_logs where id > '" + str(last) + "'")
File "/usr/lib/pymodules/python2.6/MySQLdb/cursors.py", line 166, in execute
self.errorhandler(self, exc, value)
File "/usr/lib/pymodules/python2.6/MySQLdb/connections.py", line 35, in defau$
raise errorclass, errorvalue
_mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away')
File: daemon
#! /bin/sh
# example python daemon starter script
# based on skeleton from Debian GNU/Linux
# cliechti@gmx.net
# place the daemon scripts in a folder accessible by root. /usr/local/sbin is a good idea
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/home/masnun/Desktop/daemon/daemon.py
NAME=pydaemon
DESC="Example daemon"
test -f $DAEMON || exit 0
set -e
case "$1" in
start)
echo -n "Starting $DESC: "
start-stop-daemon --start --quiet --pidfile /var/run/$NAME.pid \
--exec $DAEMON
echo "$NAME."
;;
stop)
echo -n "Stopping $DESC: "
start-stop-daemon --stop --quiet --pidfile /var/run/$NAME.pid
# \ --exec $DAEMON
echo "$NAME."
;;
#reload)
#
# If the daemon can reload its config files on the fly
# for example by sending it SIGHUP, do it here.
#
# If the daemon responds to changes in its config file
# directly anyway, make this a do-nothing entry.
#
# echo "Reloading $DESC configuration files."
# start-stop-daemon --stop --signal 1 --quiet --pidfile \
# /var/run/$NAME.pid --exec $DAEMON
#;;
restart|force-reload)
#
# If the "reload" option is implemented, move the "force-reload"
# option to the "reload" entry above. If not, "force-reload" is
# just the same as "restart".
#
echo -n "Restarting $DESC: "
start-stop-daemon --stop --quiet --pidfile \
/var/run/$NAME.pid
# --exec $DAEMON
sleep 1
start-stop-daemon --start --quiet --pidfile \
/var/run/$NAME.pid --exec $DAEMON
echo "$NAME."
;;
*)
N=/etc/init.d/$NAME
# echo "Usage: $N {start|stop|restart|reload|force-reload}" >&2
echo "Usage: $N {start|stop|restart|force-reload}" >&2
exit 1
;;
esac
exit 0
File: daemon.py
#!/usr/bin/env python
###########################################################################
# configure these paths:
LOGFILE = '/var/log/pydaemon.log'
PIDFILE = '/var/run/pydaemon.pid'
# and let USERPROG be the main function of your project
import mymain
USERPROG = mymain.main
###########################################################################
import sys, os
class Log:
"""file like for writes with auto flush after each write
to ensure that everything is logged, even during an
unexpected exit."""
def __init__(self, f):
self.f = f
def write(self, s):
self.f.write(s)
self.f.flush()
def main():
#change to data directory if needed
os.chdir("/home/masnun/Desktop/daemon")
#redirect outputs to a logfile
sys.stdout = sys.stderr = Log(open(LOGFILE, 'a+'))
#ensure that the daemon runs as a normal user
os.setegid(1000) #set group first "pydaemon"
os.seteuid(1000) #set user "pydaemon"
#start the user program here:
USERPROG()
if __name__ == "__main__":
# do the UNIX double-fork magic, see Stevens' "Advanced
# Programming in the UNIX Environment" for details (ISBN 0201563177)
try:
pid = os.fork()
if pid > 0:
# exit first parent
sys.exit(0)
except OSError, e:
print >>sys.stderr, "fork #1 failed: %d (%s)" % (e.errno, e.strerror)
sys.exit(1)
# decouple from parent environment
os.chdir("/") #don't prevent unmounting....
os.setsid()
os.umask(0)
# do second fork
try:
pid = os.fork()
if pid > 0:
# exit from second parent, print eventual PID before
#print "Daemon PID %d" % pid
open(PIDFILE,'w').write("%d"%pid)
sys.exit(0)
except OSError, e:
print >>sys.stderr, "fork #2 failed: %d (%s)" % (e.errno, e.strerror)
sys.exit(1)
# start the daemon main loop
main()
File: mymain.py
import MySQLdb
from ProxyChecker import ProxyChecker
from time import sleep
config = {"host":"localhost","username":"root","password":"masnun","database":"webtracc_db1"}
connection = MySQLdb.connect(config['host'],config['username'],config['password'],config['database'])
cursor = connection.cursor()
def main():
while True:
f = open("last","r")
last = f.read().strip()
f.close()
if last == '': last = 0;
last = int(last)
cursor.execute("select * from hits_logs where id > '" + str(last) + "'")
row = cursor.fetchall()
for x in row:
pc = ProxyChecker( x[2] )
pc.start()
last = x[0]
f = open("last","w")
f.write(str(last))
f.close()
sleep(2)
if __name__ == "__main__":
main()
File: ProxyChecker.py
#! /usr/bin/env python
from threading import Thread
from CheckProxy import CheckProxy
class ProxyChecker(Thread):
def __init__(self, data):
self.data = data
Thread.__init__(self)
def run(self):
pc = CheckProxy()
pc.check(self.data)
File: CheckProxy.py
#! /usr/bin/env python
import MySQLdb
import socket
class CheckProxy:
def __init__(self):
self.config = {"host":"localhost","username":"root","password":"masnun","database":"webtracc_db1"}
self.portList = [80]
def check(self,host):
connection = MySQLdb.connect(self.config['host'],self.config['username'],self.config['password'],self.config['database'])
cursor = connection.cursor()
proxy = False
try:
for x in self.portList:
sock = socket.socket()
sock.connect((host,x))
#print "connected to: " + str (x)
sock.close()
cursor.execute("select count(*) from list_entries where list='1' and ip='"+ host + "' ")
data = cursor.fetchall()
#print data[0][0]
if data[0][0] < 1:
print 'ok'
proxy = True
except socket.error, e:
#print e
if proxy:
cursor.execute("insert into list_entries (ip,list) values ('"+ host.strip() +"','1') ")
else:
cursor.execute("insert into list_entries (ip,list) values ('"+ host.strip() +"','2') ")
if __name__ == "__main__":
print "Direct access not allowed!"
I haven't worked with Python, but it looks like you are opening the database connection and then forking. The other way around should work: fork at will, then connect in the remaining (daemonized) process, most likely inside your mymain.py main() function.
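To make that concrete, here is a minimal sketch of mymain.py with the connection opened inside main(), i.e. only after daemon.py has finished its double fork (same config as in the question; the body of the polling loop is elided):
# mymain.py -- sketch: open the MySQL connection after daemonizing,
# inside main(), instead of at import time (module level).
import MySQLdb
from time import sleep
config = {"host": "localhost", "username": "root",
          "password": "masnun", "database": "webtracc_db1"}
def main():
    # The connection is created here, in the already-forked daemon process.
    connection = MySQLdb.connect(config['host'], config['username'],
                                 config['password'], config['database'])
    cursor = connection.cursor()
    while True:
        # ... the same polling loop as before, using this cursor ...
        sleep(2)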
Related
I am running a Python file at the command prompt and it is working fine. Executing the file as
python myfile.py -d device01 -s on
works, but when I import the same function from another file, it does not work.
Source code:
#!/usr/bin/env python3
import pytuya
import sys
import getopt
import json
def smartlite(argv):
device = ''
state = ''
try:
opts, args = getopt.getopt(argv,"hd:s:",["help", "device=", "state="])
except getopt.GetoptError:
print (sys.argv[0], '-d <device> -s <state>')
sys.exit(2)
for opt, arg in opts:
if opt in ("-h", "--help"):
print (sys.argv[0], '-d <device> -s <state>')
sys.exit()
elif opt in ("-d", "--device"):
device = arg
elif opt in ("-s", "--state"):
state = arg
filename = "p.json"
if filename:
with open(filename, 'r') as f:
datafile = json.load(f)
print ('Device is', device)
print ('State is', state)
print (datafile["devices"][device]["uuid"])
print (datafile["devices"][device]["key"])
print (datafile["devices"][device]["ip"])
d = pytuya.OutletDevice(datafile["devices"][device]["uuid"], datafile["devices"][device]["ip"], datafile["devices"][device]["key"])
data = d.status() # NOTE this does NOT require a valid key
print('Dictionary %r' % data)
print('state (bool, true is ON) %r' % data['dps']['1']) # Show status of first controlled switch on device
# Toggle switch state
switch_state = data['dps']['1']
data = d.set_status(not switch_state) # This requires a valid key
if data:
print('set_status() result %r' % data)
sys.exit(0)
if __name__ == "__main__":
smartlite(sys.argv[1:])
The above code works when I run it at the command prompt as
\.> python myfile.py -d device01 -s on
When I call the above code from another file, I get an error.
Importing the smartlite function:
from sourcefolder.pytuya import smartlite # this is good.
try:
smartlite.get("-d Lite01 -s on")
#os.system('python pytuya/mypytuya.py -d Lite01 -s on')
except sr.UnknownValueError:
print('error')
except sr.RequestError as e:
print('failed: {}'.format(e))
Error when executing:
('Device is', '')
('State is', '')
Traceback (most recent call last):
File "voicetotext.py", line 22, in <module>
smartlite("-d Lite01 -s on")
File "/home/respeaker/voice-engine/examples/pytuya/mypytuya.py", line 34, in smartlite
print (datafile["devices"][device]["uuid"])
KeyError: ''
Thank you guys, this conversation helped me.
I am now able to send the arguments to the function as a list, and it works:
if a == b:
    smartlite(['-d', 'device01', '-s', 'on'])
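If the arguments start out as a single string, shlex.split can build the same list; a small sketch (assuming smartlite is already imported):
from shlex import split
# split() turns "-d device01 -s on" into ['-d', 'device01', '-s', 'on'],
# which is the argv-style list that smartlite(argv) expects.
smartlite(split("-d device01 -s on"))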
You have several options:
(a) Edit your .bashrc, add something like:
export PYTHONPATH="$HOME/dir-base-sourcefolder"
(b) Review the directory structure. Check the relative depth from the module and the caller. Try with:
from .sourcefolder.pytuya import smartlite
from ..sourcefolder.pytuya import smartlite
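(c) A more ad-hoc possibility, sketched here under the assumption that the calling script sits in the directory that contains sourcefolder, is to extend sys.path before the import:
import os
import sys
# Hypothetical layout: this caller lives next to the 'sourcefolder' package.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from sourcefolder.pytuya import smartlite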
After some trial and error I came up with the following solution (popen.py):
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import pty
import sys
from subprocess import Popen
from shlex import join
import errno
master_fd, slave_fd = pty.openpty()
cmd = join(sys.argv[1:])
print(">>", cmd)
try:
p = Popen(
cmd,
shell=True,
stdout=slave_fd,
stderr=slave_fd,
stdin=slave_fd,
close_fds=True,
universal_newlines=True,
)
os.close(slave_fd)
while p.returncode is None:
buffer = os.read(master_fd, 512)
if buffer:
os.write(1, buffer)
else:
break
except OSError as err:
if err.errno != errno.EIO:
raise
except KeyboardInterrupt:
print(f"\n## Err: Terminating the PID: {p.pid}")
p.terminate()
This works well in most cases:
> ./popen.py date
>> date
Wed 13 May 19:10:54 BST 2020
> ./popen.py date +'%F_%T'
>> date +%F_%T
2020-05-13_19:10:56
> ./popen.py bash -c 'while :; do echo -n .; sleep .5; done;'
>> bash -c 'while :; do echo -n .; sleep .5; done;'
.......^C
## Err: Terminating the PID: 840102
However, it seems that my script is not capable of reading stdin:
> ./popen.py bash -c 'read -p "Enter something: " x; echo $x'
>> bash -c 'read -p "Enter something: " x; echo $x'
Enter something: bla bla
Come on... read it!!!
^C
## Err: Terminating the PID: 841583
> ./popen.py docker exec -it 9ab85463e3c1 sh
>> docker exec -it 9ab85463e3c1 sh
/opt/work_dir # ^[[20;19R
sfsdf
sdfsdf
^C
## Err: Terminating the PID: 847172
I've also tried to skip the os.close(slave_fd) step, but with exactly the same results :-/
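For what it's worth, the loop above only copies bytes from the pty master to this script's stdout; nothing ever forwards the script's own stdin to the master, which is why the child's read never sees any input. A rough sketch of the missing direction, using select (assuming blocking file descriptors and ignoring terminal raw mode):
import os
import select
def copy_loop(master_fd):
    # Relay bytes in both directions: pty master <-> our stdin/stdout.
    while True:
        rfds, _, _ = select.select([master_fd, 0], [], [])
        if master_fd in rfds:
            data = os.read(master_fd, 512)
            if not data:
                break                      # child closed its side
            os.write(1, data)              # child output -> our stdout
        if 0 in rfds:
            data = os.read(0, 512)
            if not data:
                break                      # our stdin hit EOF
            os.write(master_fd, data)      # our input -> child
pty.spawn, used in the wrapper further down, does essentially this bidirectional copy and additionally puts the controlling terminal into raw mode.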
My goal is to replicate a bash script similar to the following (bash.sh):
#! /bin/bash --
ACT="${1:?## Err: Sorry, what should I do?}"
DID="${2:?## Err: Oh please.. Where\'s ID?}"
CMD=( docker exec -it )
case "${ACT}" in
(run)
shift 2
echo ">> ${CMD[#]}" "${DID}" "$#"
"${CMD[#]}" "${DID}" "$#"
;;
(*)
echo "## Err: Something went wrong..."
exit 1
;;
esac
Example:
> ./bash.sh run 9ab85463e3c1 sh
>> docker exec -it 9ab85463e3c1 sh
/opt/work_dir # date
Wed May 13 19:08:05 UTC 2020
/opt/work_dir # ^C
> ./bash.sh run 9ab85463e3c1 date +"%F_%T"
>> docker exec -it 9ab85463e3c1 date +%F_%T
2020-05-13_19:35:09
For my use case I eventually came up with the following script (wrapper.py):
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import pty
import fire
from shlex import split
from os.path import realpath, expanduser
def proc(cmd, cd=None):
"""Spawn a process, and connect its controlling terminal with the
current process's standard io."""
print("## PID: {}".format(os.getpid()))
def m_read(fd):
"""Master read function."""
return os.read(fd, 1024)
def i_read(fd):
"""Standard input read function."""
return os.read(fd, 1024)
cmd = cmd if isinstance(cmd, (list, tuple)) else split(cmd)
try:
cwd = os.getcwd()
if cd is not None:
print("## Changing directory: {}".format(realpath(cd)))
os.chdir(expanduser(os.fsdecode(cd)))
ex_msg = "\n## Exit Status Indicator: {}"
print(ex_msg.format(pty.spawn(cmd, m_read, i_read)))
except Exception as err:
print("## Err: {}".format(str(err)))
finally:
os.chdir(cwd)
if __name__ == "__main__":
fire.Fire({"run": proc})
It can now be used as a regular wrapper script:
> ./wrapper.py run -cd .. 'docker exec -it 9ab85463e3c1 sh'
## PID: 1679972
## Changing directory: /home
/opt/work_dir # date
Sat May 16 15:18:46 UTC 2020
/opt/work_dir # cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.11.6
PRETTY_NAME="Alpine Linux v3.11"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
/opt/work_dir #
## Exit Status Indicator: 0
I hope that someone will find this helpful.
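One small note on the "Exit Status Indicator" line: since Python 3.4, pty.spawn returns the raw status from os.waitpid, so the conventional 0-255 exit code has to be decoded from it, for example:
import os
import pty
# pty.spawn returns the child's waitpid status; decode it into an exit code.
status = pty.spawn(["date"])
exit_code = os.WEXITSTATUS(status) if os.WIFEXITED(status) else -1
print("exit code:", exit_code)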
I'm trying to run terminal commands from a web Python script (CGI).
I tried many things but none seems to work, such as: adding 'www-data' to sudoers, using the full path to the binary, prefixing the command with sudo, and using three different system calls (os.spawnl and subprocess); none of them works.
Read-only commands like "ps aux" that only output information work, but a simple echo to a file doesn't. It seems like I need permissions to do so. What more can I try?
Example from output: Unexpected error: (, CalledProcessError(2, '/bin/echo hello > /var/www/html/cgi-bin/test2.htm'), )
In that example the /var/www/html/cgi-bin/ folder is owned by "www-data", the same user as in the server config.
#!/usr/bin/python3
# coding=utf-8
import os
import sys
import subprocess
import cgi
import subprocess
SCRIPT_PATH = "/var/www/html/scripts/aqi3.py"
DATA_FILE = "/var/www/html/assets/aqi.json"
KILL_PROCESS = "ps aux | grep " + SCRIPT_PATH + " | grep -v \"grep\" | awk '{print $2}' | xargs kill -9"
START_PROCESS = "/usr/bin/python3 " + SCRIPT_PATH + " start > /dev/null 2>&1 &"
STOP_PROCESS = "/usr/bin/python3 " + SCRIPT_PATH + " stop > /dev/null 2>&1 &"
# Don't edit
def killProcess():
os.spawnl(os.P_NOWAIT, KILL_PROCESS)
try:
os.spawnl(os.P_NOWAIT, "/bin/echo hello > /var/www/html/cgi-bin/test2.htm")
proc = subprocess.Popen(['sudo', 'echo', 'hello > /var/www/html/cgi-bin/test3.htm'])
print(subprocess.check_output("/bin/echo hello > /var/www/html/cgi-bin/test2.htm", shell=True, timeout = 10))
except:
print("Unexpected error:", sys.exc_info())
print(KILL_PROCESS)
def stopSensor():
killProcess()
os.spawnl(os.P_NOWAIT, STOP_PROCESS)
def restartProcess():
killProcess()
print(START_PROCESS)
print(os.spawnl(os.P_NOWAIT, START_PROCESS))
def main():
arguments = cgi.FieldStorage()
for key in arguments.keys():
value = arguments[key].value
if key == 'action':
if value == 'stop':
stopSensor()
print("ok")
return
elif value == 'start' or value == 'restart':
restartProcess()
print("ok")
return
elif value == 'resetdata':
try:
with open(DATA_FILE, 'w') as outfile:
outfile.write('[]')
except:
print("Unexpected error:", sys.exc_info())
print("ok")
return
print("?")
main()
I was able to solve my problem with: http://alexanderhoughton.co.uk/blog/lighttpd-changing-default-user-raspberry-pi/
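For anyone hitting the same wall, a quick sanity check before touching the server configuration is to have the CGI script report which user it actually runs as (debug snippet, not part of the original script):
#!/usr/bin/python3
import os
import pwd
# Minimal CGI response that reports the effective user of the process.
print("Content-Type: text/plain\n")
print("euid:", os.geteuid(), "user:", pwd.getpwuid(os.geteuid()).pw_name)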
I have a problem with a backup script that is supposed to call a bash start/stop script, in which a "daemon" (via GNU screen) is managed. For the moment my Python backup script is called via cron. The launch.sh script inspects the given parameter: if "stop" is given, the script echoes "Stopping..." and runs the GNU screen command to shut down the session; the same goes for "start". If the script is called via subprocess.call(..., shell=True) in Python, the string is printed but the screen session remains untouched. If it gets called directly in bash, everything works fine.
#!/usr/bin/env python
'''
Created on 27.07.2013
BackUp Script v0.2
#author: Nerade
'''
import time
import os
from datetime import date
from subprocess import check_output
import subprocess
script_dir = '/home/minecraft/automated_backup'
#folders = ['/home/minecraft/staff']
folders = ['/home/minecraft/bspack2','/home/minecraft/staff']
# log = 0
backup_date = date.today()
backup_dir = '/home/minecraft/automated_backup/' + backup_date.isoformat()
def main():
global log
init_log()
init_dirs()
for folder in folders:
token = folder.split("/")
stopCmd = folder + '/launch.sh stop'
log.write("Stopping server %s...\n" % (token[3]))
subprocess.call(stopCmd,shell=True)
#print stopCmd
while screen_present(token[3]):
time.sleep(0.5)
log.write("Server %s successfully stopped!\n" % (token[3]))
specificPath = backup_dir + '/' + token[3]
os.makedirs(specificPath)
os.system("cp /home/minecraft/%s/server.log %s/server.log" % (token[3],specificPath))
backup(folder,specificPath + '/' + backup_date.isoformat() + '.tar.gz')
dumpDatabase(backup_dir)
for folder in folders:
token = folder.split("/")
startCmd = folder + '/launch.sh start'
log.write("Starting server %s...\n" % (token[3]))
subprocess.call(startCmd,shell=True)
time.sleep(1)
log.write(screen_present(token[3]))
#print startCmd
def dumpDatabase(target):
global log
log.write("Dumping Database...\n")
cmd = "mysqldump -uroot -p<password> -A --quick --result-file=%s/%s.sql" % (backup_dir,backup_date.isoformat())
os.system(cmd)
#print cmd
def backup(source,target):
global log
log.write("Starting backup of folder %s to %s\n" % (source,target))
cmd = 'tar cfvz %s --exclude-from=%s/backup.conf %s' % (target,source,source)
os.system(cmd)
#print cmd
def screen_present(name):
var = check_output(["screen -ls; true"],shell=True)
if "."+name+"\t(" in var:
return True
else:
return False
def init_log():
global log
log = open("%s/backup.log" % script_dir,'a')
log.write(
"Starting script at %s\n" % time.strftime("%m/%d/%Y %H:%M:%S")
)
def init_dirs():
global backup_dir,log
log.write("Checking and creating directories...\n")
if not os.path.isdir(backup_dir):
os.makedirs(backup_dir)
if __name__ == '__main__':
main()
And the launch.sh:
#!/bin/sh
if [ $# -eq 0 ] || [ "$1" = "start" ]; then
echo "Starting Server bspack2"
screen -S bspack2 -m -d java -Xmx5G -Xms4G -jar mcpc-plus-legacy-1.4.7-R1.1.jar nogui
fi
if [ "$1" = "stop" ]; then
screen -S bspack2 -X stuff 'stop\015'
echo "Stopping Server bspack2"
fi
What's my problem here?
I'm sure by now you've solved this problem, but looking through your question I'd bet the answer is remarkably simple -- mcpc-plus-legacy-1.4.7-R1.1.jar isn't found by java, which fails, and subsequently screen terminates.
In launch.sh, screen will execute in the same working directory as the calling process. In this case your Python script, when run by cron, will have the running user's home directory as its working directory (so root crontabs will run in /root/, for instance, and a user crontab in /home/username/).
The simple solution is just to add the following:
cd /home/minecraft/bspack2
as the second line in your launch.sh script, just after #!/bin/sh.
In the future, I'd recommend when interacting with screen to leverage the -L parameter. This turns on autologging. By default, in the current directory a file "screenlog.0" will be generated when screen terminates, showing you a log history of activity during the screen session. This will allow you to debug screen problems with ease, and help encourage keeping track of "current directory" while working with shell scripts, to make finding the screen log output simple.
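On the Python side, an alternative tweak (a sketch, not what the answer above proposes) is to pin the working directory when calling launch.sh, so the call no longer depends on cron's environment; subprocess.call accepts a cwd argument:
import subprocess
# Run launch.sh with an explicit working directory instead of relying on
# whatever directory cron started the backup script in.
subprocess.call("./launch.sh stop", shell=True, cwd="/home/minecraft/bspack2")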
I have a Python script that I want to run in the background on startup. This is the script:
#!/usr/bin/python
from Adafruit_CharLCD import Adafruit_CharLCD
from subprocess import *
from time import sleep, strftime
from datetime import datetime
from datetime import timedelta
from os import system
from os import getloadavg
from glob import glob
#Variables
lcd = Adafruit_CharLCD() #Stores LCD object
cmdIP = "ip addr show eth0 | grep inet | awk '{print $2}' | cut -d/ -f1" #Current IP
cmdHD = "df -h / | awk '{print $5}'" # Available hd space
cmdSD = "df -h /dev/sda1 | awk '{print $5}'" # Available sd space
cmdRam = "free -h"
temp = 0
#Run shell command
def run_cmd(cmd):
p = Popen(cmd, shell=True, stdout=PIPE)
output = p.communicate()[0]
return output
#Initialises temp device
def initialise_temp():
#Initialise
system("sudo modprobe w1-gpio")
system("sudo modprobe w1-therm")
#Find device
devicedir = glob("/sys/bus/w1/devices/28-*")
device = devicedir[0]+"/w1_slave"
return device
#Gets temp
def get_temp(device):
f = open (device, 'r')
sensor = f.readlines()
f.close()
#parse results from the file
crc=sensor[0].split()[-1]
temp=float(sensor[1].split()[-1].strip('t='))
temp_C=(temp/1000.000)
temp_F = ( temp_C * 9.0 / 5.0 ) + 32
#output
return temp_C
#Gets time
def get_time():
return datetime.now().strftime('%b %d %H:%M:%S\n')
#Gets uptime
def get_uptime():
with open('/proc/uptime', 'r') as f:
seconds = float(f.readline().split()[0])
array = str(timedelta(seconds = seconds)).split('.')
string = array[0].split(' ')
totalString = string[0] + ":" + string[2]
return totalString
#Gets average load
def get_load():
array = getloadavg()
average = 0
for i in array:
average += i
average = average / 3
average = average * 100
average = "%.2f" % average
return str(average + "%")
#def get_ram():
def get_ram():
ram = run_cmd(cmdRam)
strippedRam = ram.replace("\n"," ");
splitRam = strippedRam.split(' ')
totalRam = int(splitRam[52].rstrip("M"))
usedRam = int(splitRam[59].rstrip("M"))
percentage = "%.2f" % ((float(usedRam) / float(totalRam)) * 100)
return percentage + "%"
#Gets the SD usage
def get_sd():
sd = run_cmd(cmdSD)
strippedSD = sd.lstrip("Use%\n")
return strippedSD
#Gets the HD usage
def get_hd():
hd = run_cmd(cmdSD)
strippedHD = hd.lstrip("Use%\n")
return strippedHD
def scroll():
while(1):
lcd.scrollDisplayLeft()
sleep(0.5)
#Uptime and IP
def screen1():
uptime = get_uptime()
lcd.message('Uptime %s\n' % (uptime))
ipaddr = run_cmd(cmdIP)
lcd.message('IP %s' % (ipaddr))
#Ram and load
def screen2():
ram = get_ram()
lcd.message('Ram Used %s\n' % (ram))
load = get_load()
lcd.message('Avg Load %s' % (load))
#Temp and time
def screen3():
time = get_time();
lcd.message('%s\n' % (time))
lcd.message('Temp %s' % (temp))
#HD and SD usage
def screen4():
sd = get_sd()
lcd.message('SD Used %s\n' % (sd))
hd = get_hd()
lcd.message('HD Used %s' % (hd))
#Pause and clear
def screenPause(time):
sleep(time)
#In here to reduce lag
global temp
temp = str(get_temp(device));
lcd.clear()
###########################################################################################################
#Initialise
lcd.begin(16,2)
device = initialise_temp()
lcd.clear()
#Testing
#Main loop
while(1):
screen1()
screenPause(5)
screen2()
screenPause(5)
screen3()
screenPause(5)
screen4()
screenPause(5)
I know I probably haven't done things the right way, but it's my first attempt.
My startup script is in /etc/init.d. This is the script:
#! /bin/sh
### BEGIN INIT INFO
# Provides: LCD looping
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: LCD daemon
# Description: This file should be used to construct scripts to be
# placed in /etc/init.d.
### END INIT INFO
# Author: Foo Bar <foobar@baz.org>
#
# Please remove the "Author" lines above and replace them
# with your own name if you copy and modify this script.
# Do NOT "set -e"
# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="Loops the LCD screen through LCD.py"
NAME=startup.py
DAEMON=/home/pi/Programming/LCD/startup.py
DAEMON_ARGS=""
PIDFILE=/var/run/daemonLCD.pid
SCRIPTNAME=/etc/init.d/daemonLCD
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
# Read configuration variable file if it is present
[ -r /etc/default/daemonLCD ] && . /etc/default/daemonLCD
# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh
# Define LSB log_* functions.
# Depend on lsb-base (>= 3.2-14) to ensure that this file is present
# and status_of_proc is working.
. /lib/lsb/init-functions
#
# Function that starts the daemon/service
#
do_start()
{
# Return
# 0 if daemon has been started
# 1 if daemon was already running
# 2 if daemon could not be started
start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \
|| return 1
start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \
$DAEMON_ARGS \
|| return 2
# Add code here, if necessary, that waits for the process to be ready
# to handle requests from services started subsequently which depend
# on this one. As a last resort, sleep for some time.
}
#
# Function that stops the daemon/service
#
do_stop()
{
# Return
# 0 if daemon has been stopped
# 1 if daemon was already stopped
# 2 if daemon could not be stopped
# other if a failure occurred
start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE --name $NAME
RETVAL="$?"
[ "$RETVAL" = 2 ] && return 2
# Wait for children to finish too if this is a daemon that forks
# and if the daemon is only ever run from this initscript.
# If the above conditions are not satisfied then add some other code
# that waits for the process to drop all resources that could be
# needed by services started subsequently. A last resort is to
# sleep for some time.
start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON
[ "$?" = 2 ] && return 2
# Many daemons don't delete their pidfiles when they exit.
rm -f $PIDFILE
return "$RETVAL"
#
# Function that sends a SIGHUP to the daemon/service
#
do_reload() {
#
# If the daemon can reload its configuration without
# restarting (for example, when it is sent a SIGHUP),
# then implement that here.
#
start-stop-daemon --stop --signal 1 --quiet --pidfile $PIDFILE --name $NAME
return 0
}
case "$1" in
start)
[ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
do_start
case "$?" in
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
stop)
[ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
do_stop
case "$?" in
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
status)
status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
;;
#reload|force-reload)
#
# If do_reload() is not implemented then leave this commented out
# and leave 'force-reload' as an alias for 'restart'.
#
#log_daemon_msg "Reloading $DESC" "$NAME"
#do_reload
#log_end_msg $?
#;;
restart|force-reload)
#
# If the "reload" option is implemented then remove the
# 'force-reload' alias
#
log_daemon_msg "Restarting $DESC" "$NAME"
do_stop
case "$?" in
0|1)
do_start
case "$?" in
0) log_end_msg 0 ;;
1) log_end_msg 1 ;; # Old process is still running
*) log_end_msg 1 ;; # Failed to start
esac
;;
*)
# Failed to stop
log_end_msg 1
;;
esac
;;
*)
#echo "Usage: $SCRIPTNAME {start|stop|restart|reload|force-reload}" >&2
echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload}" >&2
exit 3
;;
esac
:
I think I have missed something, as when I type daemonLCD start it says command not found.
Any input would be great.
Thanks
Assuming you may want to manage more than one daemon in the future, let me recommend Supervisord. It's much simpler than writing and managing your own init.d scripts.
For example, starting your script would be as easy as including this in the conf:
[program:myscript]
command=/usr/bin/python /path/to/myscript.py
I use an init.d script available here. Rename it to supervisord and copy it to your /etc/init.d/ then run:
sudo update-rc.d supervisord defaults
I believe that init script has supervisord run as root by default. You can have it drop to run as another user if you like. I'm not sure if children run as root or not, although I'd assume not. Go ahead and check, but if they don't, you can stick a sudo before the python command in your supervisord.conf where you call the script.
If that doesn't run (or if you want supervisord to run as non-root but still want your script run as root), you can allow anyone (or a group of users) to run the Python script as root (although you should make quite certain that this script cannot be edited by anyone other than root).
Edit your sudoers file with "sudo visudo" and add the following to the end:
USERS ALL=(ALL) NOPASSWD: /path/to/myscript.py
Then make sure you have a shebang at the beginning of your Python script and change the command to omit the python call, i.e.:
[program:myscript]
command=sudo /path/to/myscript.py
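Conversely, if supervisord itself runs as root but you want the script to run as an unprivileged user, the program section can name that user directly (the value pi below is just an example; check your supervisord version's docs for the user option):
[program:myscript]
command=/usr/bin/python /path/to/myscript.py
user=pi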
Here's a good blog post which deals with this question: Getting a Python script to run in the background (as a service) on boot
Use daemontools from djb. It is a lot easier than the other answers provided. For starters you can install daemontools with apt-get, so you do not need to worry about grabbing an unknown script from a gist, and you get updates through Debian like normal. daemontools also takes care of restarting the service if it dies and provides for logging. There is a description of daemontools and Debian here:
http://blog.rtwilson.com/how-to-set-up-a-simple-service-to-run-in-the-background-on-a-linux-machine-using-daemontools/
djb's page about daemontools:
http://cr.yp.to/daemontools.html
This is a classic mistake new Unix/Linux users make. /etc/init.d isn't in your path which is why you can't just run daemonLCD. Try using the full path (/etc/init.d/daemonLCD start) or prepending ./ (./daemonLCD start).
The script needs to be executable for either of the above to work.
Thanks for the code above. I've been using it to figure out how to set up a daemon on a Linux machine.
With some tweaking I could get it to work quite well.
But something puzzled me, and that was checking whether the process is running by checking for the existence of /var/run/myfile.pid.
That's just the pidfile, NOT the process, right?
Take a look at status_of_proc in /lib/lsb/init-functions:
status_of_proc () {
local pidfile daemon name status OPTIND
pidfile=
OPTIND=1
while getopts p: opt ; do
case "$opt" in
p) pidfile="$OPTARG";;
esac
done
shift $(($OPTIND - 1))
if [ -n "$pidfile" ]; then
pidfile="-p $pidfile"
fi
daemon="$1"
name="$2"
status="0"
pidofproc $pidfile $daemon >/dev/null || status="$?"
if [ "$status" = 0 ]; then
log_success_msg "$name is running"
return 0
elif [ "$status" = 4 ]; then
log_failure_msg "could not access PID file for $name"
return $status
else
log_failure_msg "$name is not running"
return $status
fi
}
That's only dealing with the success or failure of accessing the PID file.
Now, I'm building this daemon to go on a small device. I've discovered it's using BusyBox and I don't have init-functions :-(
But I do have pidof.
So I added
log_success_msg "pidof $NAME is $(pidof -x $NAME)" >> $LOGFILE
log_success_msg "PIDFILE of $NAME is" >> $LOGFILE
sed -n '1p' < $PIDFILE >> $LOGFILE
and checked $LOGFILE and lo and behold the numbers are different.
I did pstree -s -p on both numbers and
the pidof number spits out a very short tree, so it's for the root level process
but the $PIDFILE number vomits out branch after branch, so I don't think pstree can find the process.
Yes, the do_stop in Joseph Baldwin Roberts's code will kill both processes. But if the process is killed in another way, e.g. kill -9 12345, the $PIDFILE is still there. So the daemon will falsely believe the process is already running and refuse to start.
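One way to guard against that stale-pidfile case (a sketch, not from the init script above) is to check whether the PID in the file actually maps to a live process before trusting it, e.g. in Python:
import errno
import os
def pid_running(pidfile):
    # True only if the pidfile exists and its PID maps to a live process.
    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
    except (IOError, ValueError):
        return False              # no pidfile, or garbage inside it
    try:
        os.kill(pid, 0)           # signal 0: existence check, sends nothing
    except OSError as err:
        if err.errno == errno.EPERM:
            return True           # process exists, we just may not signal it
        return False              # no such process: the pidfile is stale
    return True
print(pid_running("/var/run/daemonLCD.pid"))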