Run tcsh script without interruption when called from Python

I am calling a tcsh script from my Python program. The tcsh script takes 10-12 minutes to complete, but when I call it from Python, Python seems to interrupt the script before it finishes. Here is the code snippet:
import subprocess
import os
os.chdir(dir_path_forCD)
subprocess.call('/home/sdcme/bin/nii_mdir_sdcme %s %s' % (a, a), shell=True)
print(a + 1)
Can someone point out how I can call the nii_mdir_sdcme script from Python without interrupting (killing) it before it has executed completely?
The complete script is as follows:
#!/usr/bin/python
import subprocess
import os
import dicom
import time
dire = '.'
directories = subprocess.check_output(
    ['find', '/Users/sdb99/Desktop/dicom', '-maxdepth', '1', '-type', 'd', '-mmin', '-660', '-type', 'd', '-mmin', '+5']
).splitlines()

number_of_directories = len(directories)

b_new = '.'
for n in range(1, number_of_directories):
    dire_str = (directories[n])
    dire_str = str(dire_str)  # [2:-1]
    print(dire_str)
    for dirpath, dirnames, filenames in os.walk(dire_str, topdown=True):
        a = 1
        for filename in filenames:
            print(dirpath)
            if filename[-4:] == '.dcm':
                firstfilename = os.path.join(dirpath, filename)
                dir_path_forCD = dirpath
                dcm_info = dicom.read_file(firstfilename, force=True)
                if dcm_info[0x0019, 0x109c].value == 'epiRTme':
                    os.chdir(dir_path_forCD)
                    subprocess.call('/home/sdcme/bin/nii_mdir_sdcme %s %s' % (a, a), shell=True)
                    print(a + 1)
                break
        break
    break
The tcsh script, nii_mdir_sdcme:
#!/bin/tcsh

if ($#argv < 2) then
    echo "Usage: nii_mdir_sdcme start_dir# end_dir#"
    exit
else
    set start = $argv[1]
    set end   = $argv[2]

    if ( ! -d ./medata ) then
        sudo mkdir ./medata
    endif
    sudo chown sdcme ./medata
    sudo chgrp users ./medata

    set i = $start
    while ( $i <= $end )
        echo " "
        if ( $i < 10 ) then
            echo "Entering 000$i..."
            cd 000$i
            sudo chmod 777 .
            niidicom_sdcme run0$i
            #mv *+orig.* ../medata
            sudo chmod 755 .
        else
            echo "Entering 00$i..."
            cd 00$i
            sudo chmod 777 .
            niidicom_sdcme run$i
            #mv *+orig.* ../medata
            sudo chmod 755 .
        endif
        cd ..
        @ i++
    end
endif

The problem was with the counter a that I am passing to the tcsh script.
It now seems that the problem was never Python interrupting the tcsh script: subprocess lets the tcsh script run without interruption when called with shell=True.
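For reference, here is a minimal sketch of that call with an explicit return-code check (the path and counter come from the question; the check itself is my addition). subprocess.call() blocks until the tcsh script exits, so a non-zero status usually means the script itself failed rather than being killed by Python:

import subprocess

a = 1  # directory counter, as in the question
ret = subprocess.call('/home/sdcme/bin/nii_mdir_sdcme %s %s' % (a, a), shell=True)
if ret != 0:
    print('nii_mdir_sdcme exited with status %d' % ret)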

Related

Output not getting redirected properly

I am running this command on a bash console through iTerm:
{ cd /usr/local/path/to/code; echo "hi1"; sudo chmod 777 /tmp/dissolve.log; echo "hi2"; python someapp/runner.py dissolve; echo "hi3"; } > /tmp/dissolve.log &
On tailing the file I get:
tail: /tmp/dissolve.log: file truncated
hi1
hi2
I am not able to figure out why I am not getting the output of python someapp/runner.py dissolve. When I do Cmd+C, the expected output appears in the tail log.
Code snippet from runner.py:
if __name__ == '__main__':
    program_name = sys.argv[1]
    if program_name == 'dissolve':
        obj = SomeClass()  # this is properly imported
        obj.some_function()  # this has lots of `print` statements, which I intend to catch in '/tmp/dissolve.log'
Is the output of the print calls inside some_function() going somewhere other than /tmp/dissolve.log?
Any suggestions as to why this could be happening?
This seems like a buffering issue, as you are sending the output to a file. You can force line buffering with stdbuf, like this:
{ cd /usr/local/path/to/code;
echo "hi1";
sudo chmod 777 /tmp/dissolve.log;
echo "hi2";
stdbuf -oL python someapp/runner.py dissolve;
echo "hi3"; } > /tmp/dissolve.log &

How to source script via python

I can easily source a bash script (without a shebang) as a bash command in a terminal, but when I try to do the same via Python with
sourcevars = "cd /etc/openvpn/easy-rsa && . ./vars"
runSourcevars = subprocess.Popen(sourcevars, shell = True)
or
sourcevars = [". /etc/openvpn/easy-rsa/vars"]
runSourcevars = subprocess.Popen(sourcevars, shell = True)
I receive :
Please source the vars script first (i.e. "source ./vars")
Make sure you have edited it to reflect your configuration.
What's the matter, and how do I do it correctly? I've read some topics here (e.g. here) but could not solve my problem using the given advice. Please explain with examples.
UPDATED:
# os.chdir = ('/etc/openvpn/easy-rsa')
initvars = "cd /etc/openvpn/easy-rsa && . ./vars && ./easy-rsa ..."
# initvars = "cd /etc/openvpn/easy-rsa && . ./vars"
# initvars = [". /etc/openvpn/easy-rsa/vars"]
cleanall = ["/etc/openvpn/easy-rsa/clean-all"]
# buildca = ["printf '\n\n\n\n\n\n\n\n\n' | /etc/openvpn/easy-rsa/build-ca"]
# buildkey = ["printf '\n\n\n\n\n\n\n\n\n\nyes\n ' | /etc/openvpn/easy-rsa/build-key AAAAAA"]
# buildca = "cd /etc/openvpn/easy-rsa && printf '\n\n\n\n\n\n\n\n\n' | ./build-ca"
runInitvars = subprocess.Popen(cmd, shell = True)
# runInitvars = subprocess.Popen(initvars,stdout=subprocess.PIPE, shell = True, executable="/bin/bash")
runCleanall = subprocess.Popen(cleanall , shell=True)
# runBuildca = subprocess.Popen(buildca , shell=True)
# runBuildca.communicate()
# runBuildKey = subprocess.Popen(buildkey, shell=True )
UPDATE 2
buildca = ["printf '\n\n\n\n\n\n\n\n\n' | /etc/openvpn/easy-rsa/build-ca"]
runcommands = subprocess.Popen(initvars+cleanall+buildca, shell = True)
There's absolutely nothing wrong with this in and of itself:
# What you're already doing -- this is actually fine!
sourcevars = "cd /etc/openvpn/easy-rsa && . ./vars"
runSourcevars = subprocess.Popen(sourcevars, shell=True)
# ...*however*, it won't have any effect at all on this:
runOther = subprocess.Popen('./easy-rsa build-key yadda yadda', shell=True)
However, if you subsequently try to run a second subprocess.Popen(..., shell=True) command, you'll see that it doesn't have any of the variables set by sourcing that configuration.
This is entirely normal and expected behavior: The entire point of using source is to modify the state of the active shell; each time you create a new Popen object with shell=True, it's starting a new shell -- their state isn't carried over.
Thus, combine into a single call:
prefix = "cd /etc/openvpn/easy-rsa && . ./vars && "
cmd = "/etc/openvpn/easy-rsa/clean-all"
runCmd = subprocess.Popen(prefix + cmd, shell=True)
...such that you're using the results of sourcing the script in the same shell invocation as that in which you actually source the script.
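Another pattern that works (a sketch on my part, not part of the answer above; it assumes vars only sets ordinary single-line values) is to source the file once, capture the resulting environment, and hand it to every later call explicitly:

import subprocess

# Source vars in one bash shell, then print the resulting environment.
out = subprocess.check_output(
    'cd /etc/openvpn/easy-rsa && . ./vars >/dev/null 2>&1 && env',
    shell=True, executable='/bin/bash', universal_newlines=True)

# Parse KEY=VALUE lines into a dict (assumes no multi-line values).
env = dict(line.split('=', 1) for line in out.splitlines() if '=' in line)

# Later calls see EASY_RSA and friends without re-sourcing.
subprocess.Popen(['/etc/openvpn/easy-rsa/clean-all'], env=env).wait()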
Alternately (and this is what I'd do), require your Python script to be invoked by a shell which already has the necessary variables in its environment. Thus:
# ask your users to do this
set -a; . ./vars; ./yourPythonScript
...and you can error out very easily if people don't do so:
import os, sys

if 'EASY_RSA' not in os.environ:
    print >>sys.stderr, "ERROR: Source vars before running this script"
    sys.exit(1)

python-notify module & cron: gio.Error

I'm asking for some help with showing notifications using python-crontab, because everything I've tried does not work. The display is not initialised when the script is launched by cron; when I start it manually, it works.
The code I've tried:
#!/usr/bin/env python
# coding: utf8
import subprocess
import os

#os.environ.setdefault("XAUTHORITY", "/home/guillaume" + "/.Xauthority")
#os.environ.setdefault('DISPLAY', ':0.0') # does not work
#os.environ['DISPLAY'] = ':0.0' # does not work
print os.environ

cmd2 = 'notify-send test'
subprocess.call(cmd2, shell=True)

# more code, which is working (using VLC)
cmd3 = "cvlc rtp://232.0.2.183:8200 --sout file/mkv:/path/save/file.mkv"  # to download the TV stream
with open("/path/debug_cvlc.log", 'w') as out:
    proc = subprocess.Popen(cmd3, stderr=out, shell=True, preexec_fn=os.setsid)

pid = proc.pid  # to get the pid
with open("/path/pid.log", "w") as f:
    f.write(str(pid))  # to write the pid to a file

# I'm using the pid to stop the download with another cron task, and to display another notify message.
# Downloading and stopping work very well, and so does zenity. But not notify-send.
Thanks
Edit: here are the environment variables I have for this cron script:
{'LANG': 'fr_FR.UTF-8', 'SHELL': '/bin/sh', 'PWD': '/home/guillaume', 'LOGNAME': 'guillaume', 'PATH': '/usr/bin:/bin', 'HOME': '/home/guillaume', 'DISPLAY': ':0.0'}
Edit2: I'm calling my script in cron like this:
45 9 30 6 * export DISPLAY=:0.0 && python /home/path/script.py > /home/path/debug_cron_on.log 2>&1
I should mention that I have two screens, so I think DISPLAY=:0.0 is the right way to display this notification, but I don't see it.
Edit 3: It appears that the problem is with notify-send specifically, because it works using zenity:
subprocess.call("zenity --warning --timeout 5 --text='this test is working'", shell=True)
I have notify-send version 0.7.3, and I should point out that notify-send works from the terminal.
Edit 4: Next try, with pynotify.
import pynotify
pynotify.init("Basic")
n = pynotify.Notification("Title", "TEST")
n.show()
The log file shows this (in French):
Traceback (most recent call last):
  File "/home/path/script.py", line 22, in <module>
    n.show()
gio.Error: Impossible de se connecter : Connexion refusée
# Translation: Unable to connect: Connection refused
So, do I have a problem with D-Bus? What is this?
Solution: get the DBUS_SESSION_BUS_ADDRESS before creating the cron entry:
cron = CronTab()
dbus = os.getenv("DBUS_SESSION_BUS_ADDRESS") # get the dbus
# creating cron
cmd_start = "export DBUS_SESSION_BUS_ADDRESS=" + str(dbus) + " && export DISPLAY=:0.0 && cd /path && python /path/script.py > path/debug_cron.log 2>&1"
job = cron.new(cmd_start)
job = job_start.day.on(self.day_on) # and all the lines to set cron, with hours etc..
cron.write() # write the cron's file
Finally, the cron line looks like this:
20 15 1 7 * export DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-M0JCXXbuhC && export DISPLAY=:0.0 && python script.py
Then the notification is displayed. Problem resolved! :)
You are calling the cron job like this:
45 9 30 6 * DISPLAY=:0.0 python /home/path/script.py > /home/path/debug_cron_on.log 2>&1
which is incorrect, since you are not exporting the DISPLAY variable, and the subsequent command does not run.
Try this instead
45 9 30 6 * export DISPLAY=:0.0 && cd /home/path/ && python script.py >> debug_cron.log 2>&1
Also, you are setting the DISPLAY variable within your script as well, so try whether the cron job works without exporting it in the job line:
45 9 30 6 * cd /home/path/ && python script.py >> debug_cron.log 2>&1
EDIT
While debugging, run the cron job every minute. The following worked for me:
Cron entry
* * * * * cd /home/user/Desktop/test/send-notify && python script.py
script.py
#!/usr/bin/env python
import subprocess
import os
os.environ.setdefault('DISPLAY', ':0.0')
print os.environ
cmd2 = 'notify-send test'
subprocess.call(cmd2, shell=True)
EDIT 2
Using pynotify, script.py becomes
#!/usr/bin/env python
import pynotify
import os
os.environ.setdefault('DISPLAY', ':0.0')
pynotify.init("Basic")
n = pynotify.Notification("Title", "TEST123")
n.show()
and cron entry becomes
* * * * * cd /home/user/Desktop/test/send-notify && python script.py
EDIT 3
One environment variable DBUS_SESSION_BUS_ADDRESS is missing from the cron environment.
It can be set in this and this fashion
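For example (a sketch only; the session process name gnome-session and the /proc lookup are my assumptions, not something given in the thread), the address can be recovered from a process that belongs to the desktop session and exported before notify-send is called:

import getpass
import os
import subprocess

# Find a process owned by the logged-in user that belongs to the desktop session.
pid = subprocess.check_output(
    ['pgrep', '-u', getpass.getuser(), 'gnome-session'],
    universal_newlines=True).split()[0]

# Its environment (NUL-separated) contains the session bus address.
with open('/proc/%s/environ' % pid) as f:
    for entry in f.read().split('\0'):
        if entry.startswith('DBUS_SESSION_BUS_ADDRESS='):
            os.environ['DBUS_SESSION_BUS_ADDRESS'] = entry.split('=', 1)[1]
            break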
crontab is considered an external host -- it doesn't have permission to write to your display.
Workaround: allow anyone to write to your display. Type this in your shell when you're logged in:
xhost +

Called bash script doesn't start up GNU screen session

I have a problem with a backup script which is supposed to call a bash start/stop script that manages a "daemon" (via GNU screen). For the moment my Python backup script is called via cron. The launch.sh script checks the given parameter: if "stop" is given, the script echoes "Stopping..." and runs the GNU screen command to shut down the session; the same goes for "start". If the script is called via subprocess.call(..., shell=True) in Python, the string is shown but the screen session remains untouched. If it is called directly from bash, everything works fine.
#!/usr/bin/env python
'''
Created on 27.07.2013
BackUp Script v0.2
@author: Nerade
'''
import time
import os
from datetime import date
from subprocess import check_output
import subprocess

script_dir = '/home/minecraft/automated_backup'
#folders = ['/home/minecraft/staff']
folders = ['/home/minecraft/bspack2', '/home/minecraft/staff']
# log = 0
backup_date = date.today()
backup_dir = '/home/minecraft/automated_backup/' + backup_date.isoformat()

def main():
    global log
    init_log()
    init_dirs()
    for folder in folders:
        token = folder.split("/")
        stopCmd = folder + '/launch.sh stop'
        log.write("Stopping server %s...\n" % (token[3]))
        subprocess.call(stopCmd, shell=True)
        #print stopCmd
        while screen_present(token[3]):
            time.sleep(0.5)
        log.write("Server %s successfully stopped!\n" % (token[3]))
        specificPath = backup_dir + '/' + token[3]
        os.makedirs(specificPath)
        os.system("cp /home/minecraft/%s/server.log %s/server.log" % (token[3], specificPath))
        backup(folder, specificPath + '/' + backup_date.isoformat() + '.tar.gz')
    dumpDatabase(backup_dir)
    for folder in folders:
        token = folder.split("/")
        startCmd = folder + '/launch.sh start'
        log.write("Starting server %s...\n" % (token[3]))
        subprocess.call(startCmd, shell=True)
        time.sleep(1)
        log.write(screen_present(token[3]))
        #print startCmd

def dumpDatabase(target):
    global log
    log.write("Dumping Database...\n")
    cmd = "mysqldump -uroot -p<password> -A --quick --result-file=%s/%s.sql" % (backup_dir, backup_date.isoformat())
    os.system(cmd)
    #print cmd

def backup(source, target):
    global log
    log.write("Starting backup of folder %s to %s\n" % (source, target))
    cmd = 'tar cfvz %s --exclude-from=%s/backup.conf %s' % (target, source, source)
    os.system(cmd)
    #print cmd

def screen_present(name):
    var = check_output(["screen -ls; true"], shell=True)
    if "." + name + "\t(" in var:
        return True
    else:
        return False

def init_log():
    global log
    log = open("%s/backup.log" % script_dir, 'a')
    log.write(
        "Starting script at %s\n" % time.strftime("%m/%d/%Y %H:%M:%S")
    )

def init_dirs():
    global backup_dir, log
    log.write("Checking and creating directories...\n")
    if not os.path.isdir(backup_dir):
        os.makedirs(backup_dir)

if __name__ == '__main__':
    main()
And the launch.sh:
#!/bin/sh
if [ $# -eq 0 ] || [ "$1" = "start" ]; then
    echo "Starting Server bspack2"
    screen -S bspack2 -m -d java -Xmx5G -Xms4G -jar mcpc-plus-legacy-1.4.7-R1.1.jar nogui
fi

if [ "$1" = "stop" ]; then
    screen -S bspack2 -X stuff 'stop\015'
    echo "Stopping Server bspack2"
fi
What's my problem here?
I'm sure by now you've solved this problem, but looking through your question I'd bet the answer is remarkably simple -- mcpc-plus-legacy-1.4.7-R1.1.jar isn't found by java, which fails, and subsequently screen terminates.
In launch.sh, screen will execute in the same directory as the calling script. In this case, your python script, when run by cron, will have an active directory of the running user's home directory (so root crontabs will run in /root/, for instance, and a user crontab in /home/username/).
The simple solution is just to add the following:
cd /home/minecraft/bspack2
as the second line of your launch.sh script, just after #!/bin/sh.
In the future, when interacting with screen, I'd recommend leveraging the -L parameter. This turns on autologging: by default, a file "screenlog.0" will be generated in the current directory when screen terminates, showing you a log of activity during the screen session. This will let you debug screen problems with ease, and it encourages keeping track of the current directory while working with shell scripts, which makes the screen log output simple to find.
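Alternatively (a sketch on my part, not something suggested in the answer above), the working directory can be pinned from the Python side by passing cwd to the subprocess call, so launch.sh runs where the jar actually lives:

import subprocess

folder = '/home/minecraft/bspack2'  # one of the entries in the backup script's folders list
# Run launch.sh with its own directory as the working directory, so the jar
# referenced inside it is found regardless of where cron starts the backup script.
subprocess.call(folder + '/launch.sh stop', shell=True, cwd=folder)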

Daemonizing a python script in debian

I have a Python script that I want to run in the background on startup. This is the script:
#!/usr/bin/python
from Adafruit_CharLCD import Adafruit_CharLCD
from subprocess import *
from time import sleep, strftime
from datetime import datetime
from datetime import timedelta
from os import system
from os import getloadavg
from glob import glob

#Variables
lcd = Adafruit_CharLCD()  #Stores LCD object
cmdIP = "ip addr show eth0 | grep inet | awk '{print $2}' | cut -d/ -f1"  #Current IP
cmdHD = "df -h / | awk '{print $5}'"  # Available hd space
cmdSD = "df -h /dev/sda1 | awk '{print $5}'"  # Available sd space
cmdRam = "free -h"
temp = 0

#Run shell command
def run_cmd(cmd):
    p = Popen(cmd, shell=True, stdout=PIPE)
    output = p.communicate()[0]
    return output

#Initialises temp device
def initialise_temp():
    #Initialise
    system("sudo modprobe w1-gpio")
    system("sudo modprobe w1-therm")
    #Find device
    devicedir = glob("/sys/bus/w1/devices/28-*")
    device = devicedir[0] + "/w1_slave"
    return device

#Gets temp
def get_temp(device):
    f = open(device, 'r')
    sensor = f.readlines()
    f.close()
    #parse results from the file
    crc = sensor[0].split()[-1]
    temp = float(sensor[1].split()[-1].strip('t='))
    temp_C = (temp / 1000.000)
    temp_F = (temp_C * 9.0 / 5.0) + 32
    #output
    return temp_C

#Gets time
def get_time():
    return datetime.now().strftime('%b %d %H:%M:%S\n')

#Gets uptime
def get_uptime():
    with open('/proc/uptime', 'r') as f:
        seconds = float(f.readline().split()[0])
        array = str(timedelta(seconds=seconds)).split('.')
        string = array[0].split(' ')
        totalString = string[0] + ":" + string[2]
    return totalString

#Gets average load
def get_load():
    array = getloadavg()
    average = 0
    for i in array:
        average += i
    average = average / 3
    average = average * 100
    average = "%.2f" % average
    return str(average + "%")

#def get_ram():
def get_ram():
    ram = run_cmd(cmdRam)
    strippedRam = ram.replace("\n", " ")
    splitRam = strippedRam.split(' ')
    totalRam = int(splitRam[52].rstrip("M"))
    usedRam = int(splitRam[59].rstrip("M"))
    percentage = "%.2f" % ((float(usedRam) / float(totalRam)) * 100)
    return percentage + "%"

#Gets the SD usage
def get_sd():
    sd = run_cmd(cmdSD)
    strippedSD = sd.lstrip("Use%\n")
    return strippedSD

#Gets the HD usage
def get_hd():
    hd = run_cmd(cmdSD)
    strippedHD = hd.lstrip("Use%\n")
    return strippedHD

def scroll():
    while(1):
        lcd.scrollDisplayLeft()
        sleep(0.5)

#Uptime and IP
def screen1():
    uptime = get_uptime()
    lcd.message('Uptime %s\n' % (uptime))
    ipaddr = run_cmd(cmdIP)
    lcd.message('IP %s' % (ipaddr))

#Ram and load
def screen2():
    ram = get_ram()
    lcd.message('Ram Used %s\n' % (ram))
    load = get_load()
    lcd.message('Avg Load %s' % (load))

#Temp and time
def screen3():
    time = get_time()
    lcd.message('%s\n' % (time))
    lcd.message('Temp %s' % (temp))

#HD and SD usage
def screen4():
    sd = get_sd()
    lcd.message('SD Used %s\n' % (sd))
    hd = get_hd()
    lcd.message('HD Used %s' % (hd))

#Pause and clear
def screenPause(time):
    sleep(time)
    #In here to reduce lag
    global temp
    temp = str(get_temp(device))
    lcd.clear()

###########################################################################################################
#Initialise
lcd.begin(16, 2)
device = initialise_temp()
lcd.clear()

#Testing

#Main loop
while(1):
    screen1()
    screenPause(5)
    screen2()
    screenPause(5)
    screen3()
    screenPause(5)
    screen4()
    screenPause(5)
I know I probably haven't done things the right way, but it's a first attempt.
My startup script is in /etc/init.d. This is the script:
#! /bin/sh
### BEGIN INIT INFO
# Provides:          LCD looping
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: LCD daemon
# Description:       This file should be used to construct scripts to be
#                    placed in /etc/init.d.
### END INIT INFO

# Author: Foo Bar <foobar@baz.org>
#
# Please remove the "Author" lines above and replace them
# with your own name if you copy and modify this script.

# Do NOT "set -e"

# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="Loops the LCD screen through LCD.py"
NAME=startup.py
DAEMON=/home/pi/Programming/LCD/startup.py
DAEMON_ARGS=""
PIDFILE=/var/run/daemonLCD.pid
SCRIPTNAME=/etc/init.d/daemonLCD

# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0

# Read configuration variable file if it is present
[ -r /etc/default/daemonLCD ] && . /etc/default/daemonLCD

# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh

# Define LSB log_* functions.
# Depend on lsb-base (>= 3.2-14) to ensure that this file is present
# and status_of_proc is working.
. /lib/lsb/init-functions

#
# Function that starts the daemon/service
#
do_start()
{
    # Return
    #   0 if daemon has been started
    #   1 if daemon was already running
    #   2 if daemon could not be started
    start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \
        || return 1
    start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \
        $DAEMON_ARGS \
        || return 2
    # Add code here, if necessary, that waits for the process to be ready
    # to handle requests from services started subsequently which depend
    # on this one. As a last resort, sleep for some time.
}

#
# Function that stops the daemon/service
#
do_stop()
{
    # Return
    #   0 if daemon has been stopped
    #   1 if daemon was already stopped
    #   2 if daemon could not be stopped
    #   other if a failure occurred
    start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE --name $NAME
    RETVAL="$?"
    [ "$RETVAL" = 2 ] && return 2
    # Wait for children to finish too if this is a daemon that forks
    # and if the daemon is only ever run from this initscript.
    # If the above conditions are not satisfied then add some other code
    # that waits for the process to drop all resources that could be
    # needed by services started subsequently. A last resort is to
    # sleep for some time.
    start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON
    [ "$?" = 2 ] && return 2
    # Many daemons don't delete their pidfiles when they exit.
    rm -f $PIDFILE
    return "$RETVAL"
}

#
# Function that sends a SIGHUP to the daemon/service
#
do_reload() {
    #
    # If the daemon can reload its configuration without
    # restarting (for example, when it is sent a SIGHUP),
    # then implement that here.
    #
    start-stop-daemon --stop --signal 1 --quiet --pidfile $PIDFILE --name $NAME
    return 0
}

case "$1" in
  start)
    [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
    do_start
    case "$?" in
        0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
        2)   [ "$VERBOSE" != no ] && log_end_msg 1 ;;
    esac
    ;;
  stop)
    [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
    do_stop
    case "$?" in
        0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
        2)   [ "$VERBOSE" != no ] && log_end_msg 1 ;;
    esac
    ;;
  status)
    status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
    ;;
  #reload|force-reload)
    #
    # If do_reload() is not implemented then leave this commented out
    # and leave 'force-reload' as an alias for 'restart'.
    #
    #log_daemon_msg "Reloading $DESC" "$NAME"
    #do_reload
    #log_end_msg $?
    #;;
  restart|force-reload)
    #
    # If the "reload" option is implemented then remove the
    # 'force-reload' alias
    #
    log_daemon_msg "Restarting $DESC" "$NAME"
    do_stop
    case "$?" in
      0|1)
        do_start
        case "$?" in
            0) log_end_msg 0 ;;
            1) log_end_msg 1 ;; # Old process is still running
            *) log_end_msg 1 ;; # Failed to start
        esac
        ;;
      *)
        # Failed to stop
        log_end_msg 1
        ;;
    esac
    ;;
  *)
    #echo "Usage: $SCRIPTNAME {start|stop|restart|reload|force-reload}" >&2
    echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload}" >&2
    exit 3
    ;;
esac

:
I think I have missed something, as when I type daemonLCD start it says command not found.
Any input would be great.
Thanks
Assuming you may want to manage more than one daemon in the future, let me recommend Supervisord. It's much simpler than writing and managing your own init.d scripts.
For example, starting your script would be as easy as including this in the conf:
[program:myscript]
command=/usr/bin/python /path/to/myscript.py
I use an init.d script available here. Rename it to supervisord and copy it to your /etc/init.d/ then run:
sudo update-rc.d supervisord defaults
I believe that init script has supervisord run as root by default. You can have it drop to run as another user if you like. I'm not sure whether the children run as root or not, although I'd assume not. Go ahead and check, but if they don't, you can stick a sudo before the python command in your supervisord.conf where you call the script.
If that doesn't run (or if you want supervisord to run as a non-root user but still want your script run as root), you can allow anyone (or a group of users) to run the Python script as root (although you should make quite certain that this script cannot be edited by anyone other than root).
Edit your sudoers file with "sudo visudo" and add the following to the end:
USERS ALL=(ALL) NOPASSWD: /path/to/myscript.py
Then make sure you have a shebang at the beginning of your Python script and change the command to omit the python call, i.e.:
[program:myscript]
command=sudo /path/to/myscript.py
Here's a good blog post which deals with this question: Getting a Python script to run in the background (as a service) on boot
Use daemontools from djb. It is a lot easier than the other answers provided. For starters, you can install daemontools with apt-get, so you do not need to worry about grabbing an unknown script from a gist, and you get updates through Debian as normal. daemontools also takes care of restarting the service if it dies and provides logging. There is a description of daemontools and Debian here:
http://blog.rtwilson.com/how-to-set-up-a-simple-service-to-run-in-the-background-on-a-linux-machine-using-daemontools/
djb's page about daemontools:
http://cr.yp.to/daemontools.html
This is a classic mistake new Unix/Linux users make. /etc/init.d isn't in your PATH, which is why you can't just run daemonLCD. Try using the full path (/etc/init.d/daemonLCD start) or prepending ./ (./daemonLCD start).
The script needs to be executable for either of the above to work.
Thanks for the code above. I've been using it to figure out how to set up a daemon on a Linux machine.
With some tweaking I got it to work quite well.
But something puzzled me: checking whether the process is running by checking the existence of /var/run/myfile.pid.
That's just the pidfile, NOT the process, right?
Take a look at status_of_proc in /lib/lsb/init-functions:
status_of_proc () {
    local pidfile daemon name status OPTIND

    pidfile=
    OPTIND=1
    while getopts p: opt ; do
        case "$opt" in
            p) pidfile="$OPTARG";;
        esac
    done
    shift $(($OPTIND - 1))

    if [ -n "$pidfile" ]; then
        pidfile="-p $pidfile"
    fi
    daemon="$1"
    name="$2"
    status="0"
    pidofproc $pidfile $daemon >/dev/null || status="$?"
    if [ "$status" = 0 ]; then
        log_success_msg "$name is running"
        return 0
    elif [ "$status" = 4 ]; then
        log_failure_msg "could not access PID file for $name"
        return $status
    else
        log_failure_msg "$name is not running"
        return $status
    fi
}
That's only dealing with the success or failure of accessing the PID file.
Now, I'm building this daemon to go on a small device. I've discovered it's using BusyBox and I don't have init-functions :-(
But I do have pidof.
So I added
log_success_msg "pidof $NAME is $(pidof -x $NAME)" >> $LOGFILE
log_success_msg "PIDFILE of $NAME is" >> $LOGFILE
sed -n '1p' < $PIDFILE >> $LOGFILE
and checked $LOGFILE and lo and behold the numbers are different.
I did pstree -s -p on both numbers: the pidof number spits out a very short tree, so it's for the root-level process, but the $PIDFILE number vomits out branch after branch, so I don't think pstree can find the process.
Yes, the do_stop in Joseph Baldwin Roberts's code will kill both processes. But if the process is killed in another way, e.g. kill -9 12345, the $PIDFILE is still there, so the daemon will falsely believe the process is already running and refuse to start.
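A small sketch of the usual guard against that (my illustration, not part of the init script above): read the pid from the pidfile and probe it with signal 0 before trusting it.

import errno
import os

def pid_running(pidfile):
    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
    except (IOError, ValueError):
        return False  # no pidfile, or garbage in it
    try:
        os.kill(pid, 0)  # signal 0: existence/permission check only, sends nothing
    except OSError as e:
        return e.errno == errno.EPERM  # EPERM means it exists but belongs to another user
    return True

print(pid_running('/var/run/daemonLCD.pid'))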
