SIGINT has no effect on a script - python

I am trying to understand the behavior of the SIGINT signal with a script launched in two different ways.
Here is a simple Python script:
import time
while True:
    time.sleep(10000)
If I launch the script in the background, check the PID and PPID (notice that the PPID is that of my terminal's shell), and kill it with SIGINT, it works:
user#host [~] > python script.py &
[1] 19077
user#host [~] > ps axo pid,ppid,command | grep script
19077 1055 python script.py
19093 1055 grep script
user#host [~] > kill -INT 19077
Traceback (most recent call last):
File "script.py", line 10, in <module>
time.sleep(10000)
KeyboardInterrupt
[1] + exit 1 python script.py
Now if I launch it through a Makefile:
user#host [~] > cat Makefile
all:
	python script.py &
user#host [~] > make
python script.py &
user#host [~] > ps axo pid,ppid,command | grep script
19118 1 python script.py
19122 1055 grep script
user#host [~] > kill -INT 19118
user#host [~] > ps axo pid,ppid,command | grep script
19118 1 python script.py
19128 1055 grep script
Notice that its PPID is now 1 (init, which seems logical) and that it does not get killed, as if the process never receives the signal. I changed my script to handle the signal myself:
import time, signal, sys
def signal_handler(signal, frame):
    print 'Killed !'
    sys.exit(0)
signal.signal(signal.SIGINT, signal_handler)
while True:
    time.sleep(10000)
Now the process does get killed, via the handler I wrote:
user#host [~] > make
python script.py &
user#host [~] > ps axo pid,ppid,command | grep script
19148 1 python script.py
19152 1055 grep script
user#host [~] > kill -INT 19148
Killed !
user#host [~] > ps axo pid,ppid,command | grep script
19158 1055 grep script
So my question is: why does the process not get killed by SIGINT when its PPID is 1, i.e. when it is launched through a Makefile? I cannot understand this behavior. I know that the best way would be to kill it with SIGTERM, as it is almost like a daemon, but I want to understand this anyway.
In Python the SIGINT signal is translated into a KeyboardInterrupt exception; I tried to catch it, without any success.
I wrote the same script in bash and the behavior is exactly the same.
Any ideas?

Signal handlers are inherited from the parent process and, as you demonstrate yourself, can be redefined. So either make redefines the handler, or, for SIGINT in particular, there may be logic that redefines it when the process loses its stdin or its terminal, since SIGINT is normally used for Ctrl-C: no terminal, no Ctrl-C.
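A quick way to test that hypothesis from inside the script (a minimal sketch relying on CPython's startup behavior: if SIGINT was inherited as ignored, Python leaves it as SIG_IGN instead of installing its usual KeyboardInterrupt handler):
import signal

# If the parent (e.g. the shell make used to run the recipe) started us
# with SIGINT ignored, this stays SIG_IGN and kill -INT is a no-op.
if signal.getsignal(signal.SIGINT) is signal.SIG_IGN:
    print("SIGINT was inherited as ignored")
else:
    print("SIGINT is deliverable:", signal.getsignal(signal.SIGINT))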

Related

Logs from signal handler hidden when redirecting stdout to file via tee

I have a python program like this:
import signal, time
def cleanup(*_):
    print("cleanup")
    # do stuff ...
    exit(1)
# trap ctrl+c and hide the traceback message
signal.signal(signal.SIGINT, cleanup)
time.sleep(20)
I run the program through a script:
#!/bin/bash
ARG1="$1"
trap cleanup INT TERM EXIT
cleanup() {
    echo "\ncleaning up..."
    killall -9 python >/dev/null 2>&1
    killall -9 python3 >/dev/null 2>&1
    # some more killing here ...
}
mystart() {
    echo "starting..."
    export PYTHONPATH=$(pwd)
    python3 -u myfolder/myfile.py $ARG1 2>&1 | tee "myfolder/log.txt"
}
mystart &&
cleanup
My problem is that the cleanup message appears neither on the terminal nor in the log file.
However, if I call the program without redirecting the output it works fine.
Pressing ^C sends SIGINT to the entire foreground process group (the current pipeline or shell "job"), killing tee before it can write the output from your handler anywhere. If you don't want this to happen, put tee in the background so it isn't part of the process group getting a SIGINT. For example, with bash 4.1 or newer, you can start a process substitution with an automatically-allocated file descriptor providing a handle:
#!/usr/bin/env bash
# ^^^^ NOT /bin/sh; >(...) is a bashism, likewise automatic FD allocation.
exec {log_fd}> >(exec tee log.txt) # run this first as a separate command
python3 -u myfile >&"$log_fd" 2>&1 # then here, ctrl+c will only impact Python...
exec {log_fd}>&- # here we close the file & thus the copy of tee.
Of course, if you put those three commands in a script, that entire script becomes your foreground process, so different techniques are called for. Thus:
python3 -u myfile > >(trap '' INT; exec tee log.txt) 2>&1
You can also use trap in the shell to immunize a command against SIGINT, as done inside the process substitution above, although that comes with obvious risks.
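For comparison, here is a rough Python-only sketch of the same idea (not from the answer above; it assumes a Unix system with tee on PATH, and uses start_new_session so tee sits outside the foreground process group):
import signal, subprocess, sys, time

# Run tee outside our process group/session: the terminal's SIGINT
# (Ctrl-C) is delivered to the foreground group only, so tee survives
# long enough to write whatever the handler sends it.
tee = subprocess.Popen(["tee", "log.txt"],
                       stdin=subprocess.PIPE,
                       start_new_session=True)

def cleanup(signum, frame):
    tee.stdin.write(b"cleanup\n")
    tee.stdin.flush()
    tee.stdin.close()
    tee.wait()
    sys.exit(1)

signal.signal(signal.SIGINT, cleanup)
time.sleep(20)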
Simply use the -i or --ignore-interrupts option of tee.
Documentation says:
-i, --ignore-interrupts
ignore interrupt signals
https://helpmanual.io/man1/tee/
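Applied to the pipeline from the question, that is the same command with only the flag added:
python3 -u myfolder/myfile.py $ARG1 2>&1 | tee -i "myfolder/log.txt"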

`ps -ef` shows running process twice if started with `subprocess.Popen`

I use the following snippet in a larger Python program to spawn a process in background:
import subprocess
command = "/media/sf_SharedDir/FOOBAR"
subprocess.Popen(command, shell=True)
After that I wanted to check whether the process was running when my Python program returned.
Output of ps -ef | grep -v grep | grep FOOBAR:
ap 3396 937 0 16:08 pts/16 00:00:00 /bin/sh -c /media/sf_SharedDir/FOOBAR
ap 3397 3396 0 16:08 pts/16 00:00:00 /bin/sh /media/sf_SharedDir/FOOBAR
I was surprised to see two lines, and they have different PIDs, so are there two processes running? Is there something wrong with my Popen call?
FOOBAR Script:
#!/bin/bash
while :
do
echo "still alive"
sleep 1
done
EDIT: When starting the script in a terminal, ps displays only one process.
Started via ./FOOBAR
ap#VBU:/media/sf_SharedDir$ ps -ef | grep -v grep | grep FOOBAR
ap 4115 3463 0 16:34 pts/5 00:00:00 /bin/bash ./FOOBAR
EDIT: shell=True is causing this issue (if it is one). But how would I fix that if I required shell to be True to run bash commands?
There is nothing wrong; what you see is perfectly normal. There is no "fix".
Each of your processes has a distinct function. The top-level process is running the python interpreter.
The second process, /bin/sh -c /media/sf_SharedDir/FOOBAR, is the shell that interprets the command line (because you want | or * or $HOME to be interpreted, you specified shell=True).
The third process, /bin/sh /media/sf_SharedDir/FOOBAR, is the FOOBAR command itself. The /bin/sh comes from the #! line inside your FOOBAR program. If it were a C program, you'd just see /media/sf_SharedDir/FOOBAR here. If it were a python program, you'd see /usr/bin/python /media/sf_SharedDir/FOOBAR.
If you are really bothered by the second process, you could modify your python program like so:
command = "exec /media/sf_SharedDir/FOOBAR"
subprocess.Popen(command, shell=True)
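And if you don't actually need any shell features for this particular command, a third option (not from the answer above, just the standard subprocess pattern) is to drop shell=True and pass the program directly:
import subprocess

# No intermediate "sh -c" process is created; the only child is FOOBAR
# itself, still interpreted by the shell named in its #! line.
subprocess.Popen(["/media/sf_SharedDir/FOOBAR"])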

running series of interactive shell commands in bash/python/perl script

Currently, I do the following steps:
a. Grep for pid of a process and kill it.
ps -aux | grep foo.bar # process of interest
kill -9 pid_of_foo.bar # kill the process
b. start virtualenv
cd {required_folder}
sudo virtualenv folder/
cd {folder2}
source bin/activate
c. Start the manage.py in shell mode
cd {required folder}
sudo python manage.py shell
d. In the interactive manage shell, execute the following commands:
from core import *
foo.bar.bz.clear.state()
exit
e. Execute a script
/baz/maz/foo
In bash we can write down a series of commands; however, is it possible to drive Django's interactive shell from bash and execute commands inside it? I was wondering if the above steps can be scripted.
Thanks
You need a script like this one:
#!/bin/bash
# kill all foo.bar's instances
for pid in $(ps -aux | grep foo.bar | grep -v grep | awk '{print $2;}'); do
    kill $pid
done
# start virtualenv
cd {required_folder}
...
# Start the manage.py in shell mode
cd {required folder}
cat << EOF | sudo python manage.py shell
from core import *
foo.bar.bz.clear.state()
exit
EOF
# Execute a script
/baz/maz/foo
The key point of the script is the heredoc that pipes the Python snippet into manage.py shell. Take a look at the example I've just tried in a console:
[alex#galene ~]$ cat <<EOF_MARK | python -
> import sys
> print "Hello, world from python %s" % sys.version
> exit
> EOF_MARK
Hello, world from python 2.7.6 (default, Nov 22 2013, 22:57:56)
[GCC 4.7.2 20121109 (ALT Linux 4.7.2-alt7)]
[alex#galene ~]$ _
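If you would rather drive the whole thing from Python instead of bash, the heredoc translates to feeding stdin with subprocess (a sketch using the placeholder commands from the question):
import subprocess

# Equivalent of the heredoc: write the commands to manage.py's stdin.
commands = b"from core import *\nfoo.bar.bz.clear.state()\nexit\n"
subprocess.run(["python", "manage.py", "shell"], input=commands)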

How to kill Django runserver sub processes from a bash script?

I'm working on a Django website where I have various compilation programs that need to run (Compass/Sass, coffeescript, hamlpy), so I made this shell script for convenience:
#!/bin/bash
SITE=/home/dev/sites/rmx
echo "RMX using siteroot=$SITE"
$SITE/rmx/manage.py runserver &
PIDS[0]=$!
compass watch $SITE/media/compass/ &
PIDS[1]=$!
coffee -o $SITE/media/js -cw $SITE/media/coffee &
PIDS[2]=$!
hamlpy-watcher $SITE/templates/hamlpy $SITE/templates/templates &
PIDS[3]=$!
trap "echo PIDS: ${PIDS[*]} && kill ${PIDS[*]}" SIGINT
wait
Everything except the Django server shuts down nicely on a Ctrl-C, because the PID of the server process isn't the PID of the python manage.py runserver command, which means that every time I stop the script I have to find the running server's PID and shut it down manually.
Here's an example:
$> ./compile.sh
RMX using siteroot....
...
[ctrl+c]
PIDS: 29725 29726 29728 29729
$> ps -A | grep python
29732 pts/2 00:00:00 python
The first PID, 29725, is the initial python manage.py runserver call, but 29732 is the actual dev server process.
EDIT: Looks like this is due to Django's auto-reload feature, which can be disabled with the --noreload flag. Since I'd like to keep the auto-reload feature, the question now becomes how to kill the child processes from the bash script. I would have thought that killing the initial python runserver command would do it...
SOLVED
Thanks to this SO question, I've changed my script to this:
#!/bin/bash
SITE=/home/dev/sites/rmx
echo "RMX using siteroot=$SITE"
$SITE/rmx/manage.py runserver &
compass watch $SITE/media/compass/ &
coffee -o $SITE/media/js -cw $SITE/media/coffee &
hamlpy-watcher $SITE/templates/hamlpy $SITE/templates/templates &
trap "kill -TERM -$$" SIGINT
wait
A PID preceded by a dash makes kill operate on the whole process group, and $$ refers to the PID of the bash script itself.
Thanks for the help, me!
No problem, self, and hey -- you're awesome.
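For reference, the same group-kill can be done from Python as well (a sketch; manage.py runserver stands in for any child that spawns further workers):
import os, signal, subprocess, time

# start_new_session=True gives the child its own process group, so we
# can later signal the whole group, reloader children included.
proc = subprocess.Popen(["python", "manage.py", "runserver"],
                        start_new_session=True)
time.sleep(30)
# Equivalent of `kill -TERM -PGID`: deliver SIGTERM to every member.
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)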
You can execute one of these to kill the process or server bound to a given port; substitute your PORT number:
$ netstat -tulpn | grep PORT | awk '{print $7}' | cut -d/ -f 1 | xargs kill
OR
$ sudo lsof -i tcp:PORT
$ sudo lsof -i tcp:PORT|awk '{print $2}'|cut -d/ -f 1|xargs kill

Changing Process Name using Shell for nagios monitoring with check_procs

I have a python script to start a process which I want to monitor using Nagios. When I run that script and perform ps -ef on my ubuntu EC2 instance, it shows process as python <filename>.py --arguments. For Nagios to monitor that process using check_procs, we need to supply process name. Here process name becomes 'python'.
/usr/lib/nagios/plugins/check_procs -C python
It returns output saying that one python process is running. This is fine when I'm running a single python process, but if I'm running multiple python scripts and want to monitor only a few of them, I have to supply the particular process name. If I put the python script's name in the above command, it throws an error. So I want to mask the whole python <filename>.py --arguments as some other name, so that I can give that new name to check_procs.
If anyone has any idea, please let me know. I have checked other Stack Overflow questions which suggest changing the python process name using setproctitle, but I want to do it using the shell.
Regards,
Sanket
You can use the check_procs command to look at arguments, which includes the module name. The following command will let you know if the python module 'module.py' is running.
/usr/lib/nagios/plugins/check_procs -c 1:1 -a module.py -C python
The -c argument lets you set the critical range. 1:1 will trigger a critical status if more or fewer than 1 matching process is running.
The -a argument will filter based on processes that contain the args 'module.py' (change it to the name of the module you want to monitor)
The -C argument will make sure that the process is a python process
If you need help figuring out how to create the service definition, I had to figure that out too. Just let me know.
REFERENCE:
check_procs plugin manpage
http://nagiosplugins.org/man/check_procs
You can't change the process name from pure Python, although you can use a wrapper (for example, written in C) to do so.
However, what you should do instead is make your program a daemon and use a pidfile. Have a look at the python daemon API and its implementation, python-daemon.
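A minimal sketch of that suggestion, assuming the python-daemon package is installed (pip install python-daemon) and that its pidfile helpers live in daemon.pidfile, as in current releases:
import time
import daemon
from daemon import pidfile

# Detach from the terminal and record our PID; the monitoring side can
# then look the process up via /tmp/myscript.pid instead of its name.
with daemon.DaemonContext(
        pidfile=pidfile.TimeoutPIDLockFile("/tmp/myscript.pid")):
    while True:
        time.sleep(60)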
check_procs already handles this situation.
check_procs can tell the difference between scripts launched as an argument to the interpreter and jobs run directly via a hashbang interpreter, even though both of these look the same in the ps output! The latter case will not be listed by check_procs -C python.
If you run your scripts explicitly via python, as in python <filename.py>, then you can monitor them with check_procs -C python -a filename.py.
If you put #!/usr/bin/python in your scripts and run them as ./filename.py, then you can monitor with check_procs -C filename.py.
Example command line session showing this behavior:
#make test.py directly executable. See code below
$ chmod a+x test.py
#launch via python explicitly:
$ /usr/bin/python ./test.py &
[1] 27094
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 1 process with command name 'python'
PROCS OK: 0 processes with command name 'test.py'
PROCS OK: 1 process with args 'test.py'
#launch via python implicitly
$ ./test.py &
[2] 27134
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 1 process with command name 'python'
PROCS OK: 1 process with command name 'test.py'
PROCS OK: 2 processes with args 'test.py'
#PS 'COMMAND' output looks the same
$ ps 27094 27134
PID TTY STAT TIME COMMAND
27094 pts/6 S 0:00 /usr/bin/python ./test.py
27134 pts/6 S 0:00 /usr/bin/python ./test.py
#kill the explicit test
$ kill 27094
[1] - terminated /usr/bin/python ./test.py
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 0 processes with command name 'python'
PROCS OK: 1 process with command name 'test.py'
PROCS OK: 1 process with args 'test.py'
#kill the implicit test
$ kill 27134
[2] + terminated ./test.py
$ check_procs -C python && check_procs -C test.py && check_procs -a test.py
PROCS OK: 0 processes with command name 'python'
PROCS OK: 0 processes with command name 'test.py'
PROCS OK: 0 processes with args 'test.py'
test.py is a python script that sleeps for 2 minutes. It is chmod +x and has a hashbang #! line invoking /usr/bin/python.
#!/usr/bin/python
import time
time.sleep(120)
Create a pid file and use that file for the process lookup with nagios.
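For instance, a minimal version of that (the path is an arbitrary placeholder):
import os

# Record our PID at startup; the Nagios check can then probe the exact
# process behind this file instead of matching command names.
with open("/tmp/myscript.pid", "w") as f:
    f.write(str(os.getpid()))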
I'm not saying this is the best solution (it wouldn't scale well at all), but you can create a symbolic link to the python command and execute your script using this link. e.g.
ln -s `which python` ~/mypython
~/mypython myscript.py
Scripts launched using the link should show up as mypython in ps.
You can use subprocess.Popen to change the executable name, but you'd have to use a wrapper script (or some weird fork magic). The following code causes ps to list the executable as kwyjibo /tmp/test.py instead of /usr/bin/python /tmp/test.py:
import subprocess
p = subprocess.Popen(['kwyjibo', '/tmp/test.py'], executable='/usr/bin/python')
