zc.buildout variable substitution prepends a "="

When using collective.recipe.template, my template is generated fine, except that variables are inserted with an extra = in front of them.
So with the config and template below, the third line of the generated bin/npm script ends up reading:
cd = /home/andre/dev/myapp/webapp/frontend/static
... when in fact it should be:
cd /home/andre/dev/myapp/webapp/frontend/static
buildout.cfg
[baseconfig]
webapp_dir= = ${buildout:directory}/webapp
resources_dir = ${buildout:directory}/resources
playbooks_dir = ${buildout:directory}/playbooks
npm_root = ${:resources_dir}/static/
[config]
<= baseconfig
npm_root = ${:webapp_dir}/frontend/static
[npm]
recipe = collective.recipe.template
input = templates/npm.sh.in
output = ${buildout:bin-directory}/npm
mode = 744
templates/npm.sh.in
#!/bin/bash
cd ${config:npm_root}
case "$1" in
    install)
        npm install
        ;;
    workbox)
        npm run workbox
        ;;
    copy-workbox)
        npm run copy-workbox
        ;;
    build-bower)
        npm run build-bower
        ;;
    build-sw)
        npm run build-sw
        ;;
    build-all)
        npm run build-all
        ;;
    *)
        echo $"Usage: $0 {install|workbox|copy-workbox|build-bower|build-sw|build-all}"
        exit 1
esac

Whoops, discovered my error: a typo in webapp_dir, which I somehow never saw even after reading and re-reading multiple times. In webapp_dir= = ${buildout:directory}/webapp, buildout splits on the first =, so the option's value starts with a literal "= ", and every substitution of ${:webapp_dir} drags that stray = along. The line should simply read webapp_dir = ${buildout:directory}/webapp.

Related

Docker process suspends and gets killed

As in the title, a docker process suspends and gets killed.
My python project runs a bash script, one part of which runs an R script that pulls data from influxdb and then processes it. When the project fetches data for a short period of time, e.g. 1-5 days, there is no problem. The trouble starts with bigger time frames, like a couple of weeks: everything slows down so badly that it takes ages to generate anything (I checked the logs), and eventually the process gets killed. It's fine when the R script pulls about 25 MB of data, but 70 MB is not as easy. Could it be that Flask + bash + R together use too much memory at once? The problem does not appear when the same thing is invoked outside of Docker.
Dockerfile:
FROM ubuntu
# Install system packages for the flask app
RUN apt-get clean && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get upgrade -y && DEBIAN_FRONTEND=noninteractive apt-get install -y \
python3 \
python3-pip \
r-base \
r-base-dev \
r-cran-rgl \
mutt \
git \
texlive-fonts-recommended
# Install Python requirements for the flask app
# (assumption: requirements.txt sits next to the Dockerfile; a COPY like
# this is needed before pip can see it, and was presumably elided here)
COPY requirements.txt ./
RUN pip3 install -r ./requirements.txt
flask app snippet:
@app.route('/send', methods=['POST'])
def send():
    path = os.path.dirname(os.path.realpath(__file__))
    script = path + '/generate_pdf.sh'
    address = str(request.form['email'])
    start_date = convert_date(str(request.form['start_date']))
    end_date = convert_date(str(request.form['end_date']))
    command = [script, start_date, end_date, address]
    subprocess.run(command)
    return json.dumps({
        'status': 'OK',
        'message': 'The action is completed'
    })
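One detail worth checking from the Flask side: subprocess.run returns a CompletedProcess whose returncode distinguishes a normal failure from a kill. Here's a minimal sketch (the helper name run_report is just for illustration); a child killed directly by the OOM killer reports -SIGKILL, while a shell whose inner command was killed conventionally exits with 128 + 9 = 137:

import signal
import subprocess

def run_report(command):
    """Run the report script and flag a likely out-of-memory kill."""
    result = subprocess.run(command)
    if result.returncode in (-signal.SIGKILL, 128 + signal.SIGKILL):
        # SIGKILL that nobody sent by hand usually means the kernel OOM killer
        print('report generation was SIGKILLed - suspect out-of-memory')
    return result.returncode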
generate_pdf.sh:
#!/bin/bash
start_date="$1"
end_date="$2"
address="$3"
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
report_name="\"$DIR/my_document.pdf\""
R -e "rmarkdown::render('$DIR/generate_document.Rmd', output_file = $report_name)" --args "$start_date" "$end_date"
report_name="$DIR/my_document.pdf"
echo | mutt -s "Generated document" -a "$report_name" -- "$address"
out=$(rm "$report_name")
R script snippet:
where.clause <- paste0("time >= '",
                       start.date,
                       "' AND time <= '",
                       as.character(as.Date(end.date) + days(1)),
                       "'")
con <- influxdbr::influx_connection(host = "localhost",
                                    port = 8086,
                                    user = "root",
                                    pass = "root")
select.query <- paste0(
    'id, name, surname, car, employment_status'
)
rows <- influx_select(con, db = 'my_db', select.query, from = 'workers',
                      where = where.clause)
rows <- as.data.frame(rows, stringsAsFactors = FALSE)
if (is.data.frame(rows) && nrow(rows) == 0) {
    cat('No data could be obtained from the database.', sep = '\n')
    knitr::knit_exit()
}
Here are the logs I got while executing the app, which was supposed to retrieve about 74 MB of data.
....
label: unnamed-chunk-4 (with options)
List of 3
$ echo : logi FALSE
$ message: logi FALSE
$ warning: logi FALSE
Success: (204) No Content
/app/generate_pdf.sh: line 8: 58 Killed
....
The application works perfectly outside of Docker.
When rows <- influx_select is invoked, the data first arrives in raw form; before it gets cast to a dataframe it already weighs a lot: 24 MB, 70 MB and more.
I manually ran the script inside the docker container and the R script got a bit farther:
....
label: unnamed-chunk-8 (with options)
List of 4
$ echo : logi FALSE
$ message : logi FALSE
$ fig.align : chr "left"
$ fig.height: num 7
Quitting from lines 72-76 (generate_document.Rmd)
Error in system(paste(which, shQuote(names[i])), intern = TRUE, ignore.stderr = TRUE) :
cannot popen '/usr/bin/which 'pdfcrop' 2>/dev/null', probable reason 'Cannot allocate memory'
...
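Both symptoms ("Killed", then popen failing with "Cannot allocate memory") point at memory pressure inside the container. To see from the Python side how much memory the R step actually peaks at, the stdlib resource module can report the peak RSS of children the process has waited on. A rough sketch (run_with_peak_memory is my name for it; on Linux ru_maxrss is in kilobytes, and the value is a cumulative maximum over all waited-for children):

import resource
import subprocess

def run_with_peak_memory(command):
    """Run command and log the peak resident memory of its children."""
    result = subprocess.run(command)
    peak_kb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    print('child peak RSS: %.1f MB (returncode %s)'
          % (peak_kb / 1024.0, result.returncode))
    return result.returncode

Logging that per request would show how close the R step gets to whatever memory limit the container is running under.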

Tail file till process exits

Going through the answers at superuser.
I'm trying to modify this to listen for multiple strings and echo custom messages such as 'Your server started successfully' etc.
I'm also trying to attach it to another command, i.e. pip:
wait_str() {
    local file="$1"; shift
    local search_term="Successfully installed"
    local search_term2='Exception'
    local wait_time="${1:-5m}"    # 5 minutes as default timeout
    local line
    line=$( (timeout $wait_time tail -F -n0 "$file" &) | grep -m1 -E "$search_term|$search_term2" )
    case "$line" in
        *"$search_term"*)  echo 'Your server started successfully'; return 0 ;;
        *"$search_term2"*) echo 'An exception was logged'; return 1 ;;
    esac
    echo "Timeout of $wait_time reached. Unable to find '$search_term' or '$search_term2' in '$file'"
    return 1
}
The usage I have in mind is:
pip install -r requirements.txt > /var/log/pip/dump.log && wait_str /var/log/pip/dump.log
To clarify, I'd like to get wait_str to stop tailing when pip exits, whether successfully or not.
The following is a general answer; tail could be replaced by any command that results in a stream of lines.
If different strings need different actions, use the following:
tail -f /var/log/pip/dump.log | awk '/condition-1/ {action for condition-1} /condition-2/ {action for condition-2} .....'
If multiple conditions need the same action, separate them with the OR operator:
tail -f /var/log/pip/dump.log | awk '/condition-1/ || /condition-2/ || /condition-n/ {take this action}'
Based on comments: a single awk can do this.
tail -f /path/to/file |awk '/Exception/{ print "Worked"} /compiler/{ print "worked"}'
or
tail -f /path/to/file | awk '/Exception/||/compiler/{ print "worked"}'
Or exit as soon as a match is found:
tail -f logfile |awk '/Exception/||/compiler/{ print "worked";exit}'
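Note that in the intended usage above, && means wait_str only starts after pip has already exited, so a tail -F -n0 begun at that point never sees the interesting lines. One way to get exactly "stop watching when pip exits" is to skip tail and read pip's output directly from the pipe; a sketch along those lines (run_and_watch is a made-up helper, messages and log path are the ones from above):

import subprocess

def run_and_watch(cmd, logfile):
    """Run cmd, mirror its output to logfile, and react to marker strings."""
    with open(logfile, 'w') as log:
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT,
                                universal_newlines=True)
        for line in proc.stdout:    # this loop ends when the process exits
            log.write(line)
            if 'Successfully installed' in line:
                print('Your server started successfully')
            elif 'Exception' in line:
                print('An exception was logged')
    return proc.wait()

# usage, mirroring the pip example above
run_and_watch(['pip', 'install', '-r', 'requirements.txt'],
              '/var/log/pip/dump.log')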

Python startup script using STDOUT for outputs, using init.d method

My environment is Debian Jessie on an embedded ARM display.
I am using a bash script to automatically launch my app using the init.d method. It launches a second python script as a daemon that handles my application on startup and reboot.
Because it is run this way, to the best of my knowledge it is now a daemonized background process, with STDOUT and STDIN disconnected from my python script.
The system and application serve a single purpose, so spamming the console with output from a background process is not only not a problem, it is desired: with the output I can easily ssh or serial-console into the display and see all the live debug output and exceptions.
I have looked into ways to force the process to the foreground, or to redirect the output to STDOUT, but have not found any definite answer for a script run at startup.
My logging to a file works perfectly, and otherwise the app works well in its current state. Currently, when I need to debug, I stop the application and run it manually to get all the output.
I have considered using sockets to redirect the output from the app, with a separate listening script printing it to the console... but that seems less than ideal, and a better solution might exist.
Are there methods to achieve this, or should I just accept it?
EDIT 1 (additional details)
Because I am using multiple logs for multiple processes I have created a logger class. The stream handler uses the default which should be sys.stderr.
import logging
import logging.handlers

class LoggerSetup(object):
    def __init__(self, logName, logFileNameAndPath, logSizeBytes, logFileCount):
        log = logging.getLogger(logName)
        log.setLevel(logging.INFO)
        formatter = logging.Formatter(
            '%(asctime)s - %(levelname)s - ' + logName + ' - %(message)s',
            datefmt="%m-%d %H:%M:%S")
        # Add the log message handler to the logger
        if logFileCount > 0:
            fileHandler = logging.handlers.RotatingFileHandler(
                logFileNameAndPath, maxBytes=logSizeBytes, backupCount=logFileCount)
            log.addHandler(fileHandler)
            fileHandler.setFormatter(formatter)
        consoleHandler = logging.StreamHandler()
        log.addHandler(consoleHandler)
        consoleHandler.setFormatter(formatter)
        log.info(logName + ' initialized')
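A pragmatic workaround, given that a process put in the background by start-stop-daemon has no controlling terminal to print to: funnel everything, including bare print calls and uncaught tracebacks, into the existing log file, then watch it live over ssh with tail -f. A sketch of the redirection (StreamToLogger is a name I made up; 'app' stands in for whatever logName you pass to LoggerSetup above):

import logging
import sys

class StreamToLogger(object):
    """File-like object that forwards every write to a logger."""
    def __init__(self, logger, level):
        self.logger = logger
        self.level = level
    def write(self, message):
        for line in message.rstrip().splitlines():
            self.logger.log(self.level, line.rstrip())
    def flush(self):
        pass

# Create the handlers (LoggerSetup) BEFORE swapping sys.stdout/sys.stderr;
# StreamHandler captures the real stderr at construction time, so there is
# no feedback loop.
log = logging.getLogger('app')
sys.stdout = StreamToLogger(log, logging.INFO)
sys.stderr = StreamToLogger(log, logging.ERROR)

After that, tail -f on the log file from an ssh session gives the same live view a foreground run would.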
For more reference, here is the startup script launched on boot. It then runs my python run.py, which handles the rest of the startup process.
#!/bin/sh
### BEGIN INIT INFO
# Provides: ArcimotoStart
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Startup script
# Description: startup script that points to run.py
### END INIT INFO
# Change the next 3 lines to suit where you install your script and what you want to call it
DAEMON=/app/run.py
DAEMON_NAME=app
# Add any command line options for your daemon here
DAEMON_OPTS=""
# This next line determines what user the script runs as.
# Root generally not recommended but necessary if you are using certain features in Python.
DAEMON_USER=root
# The process ID of the script when it runs is stored here:
PIDFILE=/var/run/$DAEMON_NAME.pid
. /lib/lsb/init-functions
do_start () {
    log_daemon_msg "Starting system $DAEMON_NAME daemon"
    start-stop-daemon --start --background --pidfile $PIDFILE --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON -- $DAEMON_OPTS
    log_end_msg $?
}
do_stop () {
    log_daemon_msg "Stopping system $DAEMON_NAME daemon"
    start-stop-daemon --stop --pidfile $PIDFILE --retry 10
    log_end_msg $?
}

case "$1" in
    start|stop)
        do_${1}
        ;;
    restart|reload|force-reload)
        do_stop
        do_start
        ;;
    status)
        status_of_proc "$DAEMON_NAME" "$DAEMON" && exit 0 || exit $?
        ;;
    *)
        echo "Usage: /etc/init.d/$DAEMON_NAME {start|stop|restart|status}"
        exit 1
        ;;
esac
exit 0

How to source script via python

I can source a bash script (without a shebang) easily with the bash source command in a terminal, but when I try to do the same via python:
sourcevars = "cd /etc/openvpn/easy-rsa && . ./vars"
runSourcevars = subprocess.Popen(sourcevars, shell = True)
or
sourcevars = [". /etc/openvpn/easy-rsa/vars"]
runSourcevars = subprocess.Popen(sourcevars, shell = True)
I receive:
Please source the vars script first (i.e. "source ./vars")
Make sure you have edited it to reflect your configuration.
What's the matter; how do I do it correctly? I've read some topics here, e.g. here, but could not solve my problem using the given advice. Please explain with examples.
UPDATED:
# os.chdir = ('/etc/openvpn/easy-rsa')
initvars = "cd /etc/openvpn/easy-rsa && . ./vars && ./easy-rsa ..."
# initvars = "cd /etc/openvpn/easy-rsa && . ./vars"
# initvars = [". /etc/openvpn/easy-rsa/vars"]
cleanall = ["/etc/openvpn/easy-rsa/clean-all"]
# buildca = ["printf '\n\n\n\n\n\n\n\n\n' | /etc/openvpn/easy-rsa/build-ca"]
# buildkey = ["printf '\n\n\n\n\n\n\n\n\n\nyes\n ' | /etc/openvpn/easy-rsa/build-key AAAAAA"]
# buildca = "cd /etc/openvpn/easy-rsa && printf '\n\n\n\n\n\n\n\n\n' | ./build-ca"
runInitvars = subprocess.Popen(cmd, shell = True)
# runInitvars = subprocess.Popen(initvars,stdout=subprocess.PIPE, shell = True, executable="/bin/bash")
runCleanall = subprocess.Popen(cleanall , shell=True)
# runBuildca = subprocess.Popen(buildca , shell=True)
# runBuildca.communicate()
# runBuildKey = subprocess.Popen(buildkey, shell=True )
UPDATE 2
buildca = ["printf '\n\n\n\n\n\n\n\n\n' | /etc/openvpn/easy-rsa/build-ca"]
runcommands = subprocess.Popen(initvars+cleanall+buildca, shell = True)
There's absolutely nothing wrong with this in and of itself:
# What you're already doing -- this is actually fine!
sourcevars = "cd /etc/openvpn/easy-rsa && . ./vars"
runSourcevars = subprocess.Popen(sourcevars, shell=True)
# ...*however*, it won't have any effect at all on this:
runOther = subprocess.Popen('./easy-rsa build-key yadda yadda', shell=True)
However, if you subsequently try to run a second subprocess.Popen(..., shell=True) command, you'll see that it doesn't have any of the variables set by sourcing that configuration.
This is entirely normal and expected behavior: The entire point of using source is to modify the state of the active shell; each time you create a new Popen object with shell=True, it's starting a new shell -- their state isn't carried over.
Thus, combine into a single call:
prefix = "cd /etc/openvpn/easy-rsa && . ./vars && "
cmd = "/etc/openvpn/easy-rsa/clean-all"
runCmd = subprocess.Popen(prefix + cmd, shell=True)
...such that you're using the results of sourcing the script in the same shell invocation as that in which you actually source the script.
Alternately (and this is what I'd do), require your Python script to be invoked by a shell which already has the necessary variables in its environment. Thus:
# ask your users to do this
set -a; . ./vars; ./yourPythonScript
...and you can error out very easily if people don't do so:
import os, sys
if not 'EASY_RSA' in os.environ:
    print >>sys.stderr, "ERROR: Source vars before running this script"
    sys.exit(1)
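A third option, if the Python script has to stay self-contained, is to source vars once in a throwaway shell, dump the resulting environment, and fold it into os.environ so that every later Popen inherits it. A sketch of the idea (source_vars is a made-up helper; paths as above; env -0 assumes GNU coreutils):

import os
import subprocess

def source_vars(script_dir='/etc/openvpn/easy-rsa'):
    """Source ./vars in a shell and merge the resulting env into os.environ."""
    out = subprocess.check_output(
        ['bash', '-c', 'cd "$1" && . ./vars >/dev/null 2>&1; env -0',
         '_', script_dir])
    # env -0 separates entries with NUL bytes, so values may safely
    # contain newlines
    for entry in out.split(b'\0'):
        if b'=' in entry:
            key, _, value = entry.partition(b'=')
            os.environ[key.decode()] = value.decode()

source_vars()
# subsequent calls now see KEY_DIR, EASY_RSA, etc.
subprocess.Popen(['/etc/openvpn/easy-rsa/clean-all'])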

Daemonizing a python script in debian

I have a python script that I want to run in the background on startup. This is the script:
#!/usr/bin/python
from Adafruit_CharLCD import Adafruit_CharLCD
from subprocess import *
from time import sleep, strftime
from datetime import datetime
from datetime import timedelta
from os import system
from os import getloadavg
from glob import glob

#Variables
lcd = Adafruit_CharLCD() #Stores LCD object
cmdIP = "ip addr show eth0 | grep inet | awk '{print $2}' | cut -d/ -f1" #Current IP
cmdHD = "df -h / | awk '{print $5}'" # Available hd space
cmdSD = "df -h /dev/sda1 | awk '{print $5}'" # Available sd space
cmdRam = "free -h"
temp = 0

#Run shell command
def run_cmd(cmd):
    p = Popen(cmd, shell=True, stdout=PIPE)
    output = p.communicate()[0]
    return output

#Initialises temp device
def initialise_temp():
    #Initialise
    system("sudo modprobe w1-gpio")
    system("sudo modprobe w1-therm")
    #Find device
    devicedir = glob("/sys/bus/w1/devices/28-*")
    device = devicedir[0] + "/w1_slave"
    return device

#Gets temp
def get_temp(device):
    f = open(device, 'r')
    sensor = f.readlines()
    f.close()
    #parse results from the file
    crc = sensor[0].split()[-1]
    temp = float(sensor[1].split()[-1].strip('t='))
    temp_C = (temp / 1000.000)
    temp_F = (temp_C * 9.0 / 5.0) + 32
    #output
    return temp_C

#Gets time
def get_time():
    return datetime.now().strftime('%b %d %H:%M:%S\n')

#Gets uptime
def get_uptime():
    with open('/proc/uptime', 'r') as f:
        seconds = float(f.readline().split()[0])
        array = str(timedelta(seconds=seconds)).split('.')
        string = array[0].split(' ')
        totalString = string[0] + ":" + string[2]
        return totalString

#Gets average load
def get_load():
    array = getloadavg()
    average = 0
    for i in array:
        average += i
    average = average / 3
    average = average * 100
    average = "%.2f" % average
    return str(average + "%")

def get_ram():
    ram = run_cmd(cmdRam)
    strippedRam = ram.replace("\n", " ")
    splitRam = strippedRam.split(' ')
    totalRam = int(splitRam[52].rstrip("M"))
    usedRam = int(splitRam[59].rstrip("M"))
    percentage = "%.2f" % ((float(usedRam) / float(totalRam)) * 100)
    return percentage + "%"

#Gets the SD usage
def get_sd():
    sd = run_cmd(cmdSD)
    strippedSD = sd.lstrip("Use%\n")
    return strippedSD

#Gets the HD usage
def get_hd():
    hd = run_cmd(cmdHD)  # was cmdSD in the original, a copy-paste slip
    strippedHD = hd.lstrip("Use%\n")
    return strippedHD

def scroll():
    while(1):
        lcd.scrollDisplayLeft()
        sleep(0.5)

#Uptime and IP
def screen1():
    uptime = get_uptime()
    lcd.message('Uptime %s\n' % (uptime))
    ipaddr = run_cmd(cmdIP)
    lcd.message('IP %s' % (ipaddr))

#Ram and load
def screen2():
    ram = get_ram()
    lcd.message('Ram Used %s\n' % (ram))
    load = get_load()
    lcd.message('Avg Load %s' % (load))

#Temp and time
def screen3():
    time = get_time()
    lcd.message('%s\n' % (time))
    lcd.message('Temp %s' % (temp))

#HD and SD usage
def screen4():
    sd = get_sd()
    lcd.message('SD Used %s\n' % (sd))
    hd = get_hd()
    lcd.message('HD Used %s' % (hd))

#Pause and clear
def screenPause(time):
    sleep(time)
    #In here to reduce lag
    global temp
    temp = str(get_temp(device))
    lcd.clear()

###########################################################################################################
#Initialise
lcd.begin(16, 2)
device = initialise_temp()
lcd.clear()
#Testing
#Main loop
while(1):
    screen1()
    screenPause(5)
    screen2()
    screenPause(5)
    screen3()
    screenPause(5)
    screen4()
    screenPause(5)
I know I probably haven't done things the right way, but it's a first attempt.
My startup script is in /etc/init.d. This is the script:
#! /bin/sh
### BEGIN INIT INFO
# Provides: LCD looping
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: LCD daemon
# Description: This file should be used to construct scripts to be
# placed in /etc/init.d.
### END INIT INFO
# Author: Foo Bar <foobar@baz.org>
#
# Please remove the "Author" lines above and replace them
# with your own name if you copy and modify this script.
# Do NOT "set -e"
# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="Loops the LCD screen through LCD.py"
NAME=startup.py
DAEMON=/home/pi/Programming/LCD/startup.py
DAEMON_ARGS=""
PIDFILE=/var/run/daemonLCD.pid
SCRIPTNAME=/etc/init.d/daemonLCD
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
# Read configuration variable file if it is present
[ -r /etc/default/daemonLCD ] && . /etc/default/daemonLCD
# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh
# Define LSB log_* functions.
# Depend on lsb-base (>= 3.2-14) to ensure that this file is present
# and status_of_proc is working.
. /lib/lsb/init-functions
#
# Function that starts the daemon/service
#
do_start()
{
    # Return
    #   0 if daemon has been started
    #   1 if daemon was already running
    #   2 if daemon could not be started
    start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \
        || return 1
    start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \
        $DAEMON_ARGS \
        || return 2
    # Add code here, if necessary, that waits for the process to be ready
    # to handle requests from services started subsequently which depend
    # on this one. As a last resort, sleep for some time.
}

#
# Function that stops the daemon/service
#
do_stop()
{
    # Return
    #   0 if daemon has been stopped
    #   1 if daemon was already stopped
    #   2 if daemon could not be stopped
    #   other if a failure occurred
    start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE --name $NAME
    RETVAL="$?"
    [ "$RETVAL" = 2 ] && return 2
    # Wait for children to finish too if this is a daemon that forks
    # and if the daemon is only ever run from this initscript.
    # If the above conditions are not satisfied then add some other code
    # that waits for the process to drop all resources that could be
    # needed by services started subsequently. A last resort is to
    # sleep for some time.
    start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON
    [ "$?" = 2 ] && return 2
    # Many daemons don't delete their pidfiles when they exit.
    rm -f $PIDFILE
    return "$RETVAL"
}

#
# Function that sends a SIGHUP to the daemon/service
#
do_reload() {
    #
    # If the daemon can reload its configuration without
    # restarting (for example, when it is sent a SIGHUP),
    # then implement that here.
    #
    start-stop-daemon --stop --signal 1 --quiet --pidfile $PIDFILE --name $NAME
    return 0
}
case "$1" in
start)
[ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
do_start
case "$?" in
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
stop)
[ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
do_stop
case "$?" in
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
status)
status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
;;
#reload|force-reload)
#
# If do_reload() is not implemented then leave this commented out
# and leave 'force-reload' as an alias for 'restart'.
#
#log_daemon_msg "Reloading $DESC" "$NAME"
#do_reload
#log_end_msg $?
#;;
restart|force-reload)
#
# If the "reload" option is implemented then remove the
# 'force-reload' alias
#
log_daemon_msg "Restarting $DESC" "$NAME"
do_stop
case "$?" in
0|1)
do_start
case "$?" in
0) log_end_msg 0 ;;
1) log_end_msg 1 ;; # Old process is still running
*) log_end_msg 1 ;; # Failed to start
esac
;;
*)
# Failed to stop
log_end_msg 1
;;
esac
;;
*)
#echo "Usage: $SCRIPTNAME {start|stop|restart|reload|force-reload}" >&2
echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload}" >&2
exit 3
;;
esac
:
I think I have missed something, as when I type daemonLCD start it says command not found.
Any input would be great.
Thanks
Assuming you may want to manage more than one daemon in the future, let me recommend Supervisord. It's much simpler than writing and managing your own init.d scripts.
For example, starting your script would be as easy as including this in the conf:
[program:myscript]
command=/usr/bin/python /path/to/myscript.py
I use an init.d script available here. Rename it to supervisord and copy it to your /etc/init.d/ then run:
sudo update-rc.d supervisord defaults
I believe that init script has supervisord run as root by default. You can have it drop to run as another user if you like. I'm not sure if children run as root or not, although I'd assume not. Go ahead and check; if they don't, you can stick a sudo before the python command in your supervisord.conf where you call the script.
If that doesn't work (or if you want supervisord to run as non-root but still want your script run as root), you can allow anyone (or a group of users) to run the python script as root (although you should make quite certain that this script cannot be edited by anyone other than root).
Edit your sudoers file with "sudo visudo" and add the following to the end:
USERS ALL=(ALL) NOPASSWD: /path/to/myscript.py
Then make sure you have a shebang at the beginning of your python script, and change the command to omit the python call, i.e.:
[program:myscript]
command=sudo /path/to/myscript.py
Here's a good blog post which deals with this question: Getting a Python script to run in the background (as a service) on boot
Use daemontools from djb. It is a lot easier than the other answers provided. For starters, you can install daemontools with apt-get, so you do not need to worry about grabbing an unknown script from a gist, and you get updates through debian as normal. daemontools also takes care of restarting the service if it dies and provides logging. There is a description of daemontools and debian here:
http://blog.rtwilson.com/how-to-set-up-a-simple-service-to-run-in-the-background-on-a-linux-machine-using-daemontools/
djb's page about daemontools:
http://cr.yp.to/daemontools.html
This is a classic mistake new Unix/Linux users make. /etc/init.d isn't in your path which is why you can't just run daemonLCD. Try using the full path (/etc/init.d/daemonLCD start) or prepending ./ (./daemonLCD start).
The script needs to be executable for either of the above to work.
Thanks for the code above. I've been using it to figure out how to set up a daemon on a linux machine.
With some tweaking I could get it to work quite well.
But something puzzled me, and that was checking whether the process was running by checking the existence of /var/run/myfile.pid.
That's just the pidfile - NOT the process, right?
Take a look at status_of_proc in /lib/lsb/init-functions:
status_of_proc () {
    local pidfile daemon name status OPTIND

    pidfile=
    OPTIND=1
    while getopts p: opt ; do
        case "$opt" in
            p) pidfile="$OPTARG";;
        esac
    done
    shift $(($OPTIND - 1))

    if [ -n "$pidfile" ]; then
        pidfile="-p $pidfile"
    fi
    daemon="$1"
    name="$2"
    status="0"
    pidofproc $pidfile $daemon >/dev/null || status="$?"
    if [ "$status" = 0 ]; then
        log_success_msg "$name is running"
        return 0
    elif [ "$status" = 4 ]; then
        log_failure_msg "could not access PID file for $name"
        return $status
    else
        log_failure_msg "$name is not running"
        return $status
    fi
}
That's only dealing with the success or failure of accessing the PID file.
Now, I'm building this daemon to go on a small device. I've discovered it's using BusyBox and I don't have init-functions :-(
But I do have pidof.
So I added
log_success_msg "pidof $NAME is $(pidof -x $NAME)" >> $LOGFILE
log_success_msg "PIDFILE of $NAME is" >> $LOGFILE
sed -n '1p' < $PIDFILE >> $LOGFILE
and checked $LOGFILE and lo and behold the numbers are different.
I did pstree -s -p on both numbers:
the pidof number spits out a very short tree, so it's the root-level process,
but the $PIDFILE number vomits out branch after branch, so I don't think pstree can find the process.
Yes, the do_stop in Joseph Baldwin Roberts's code will kill both processes. But if the process is killed another way, e.g. kill -9 12345, the $PIDFILE is still there. So the daemon will falsely believe the process is already running and refuse to start.
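A pidfile really is just a file; if you want the same check in Python instead of relying on pidofproc, probing the recorded PID with signal 0 (which tests for existence without sending anything) handles the stale-pidfile case. A small sketch of the idea (pid_running is a made-up helper):

import errno
import os

def pid_running(pidfile):
    """Return True if the PID recorded in pidfile belongs to a live process."""
    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
    except (IOError, ValueError):
        return False              # no pidfile, or garbage inside it
    try:
        os.kill(pid, 0)           # signal 0: existence probe only
    except OSError as e:
        if e.errno == errno.ESRCH:
            return False          # stale pidfile: no such process
        # EPERM: the process exists but belongs to another user
    return True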
