I am starting my script locally via:
sudo python run.py remote
This script happens to also open a subprocess (if that matters):
webcam = subprocess.Popen('avconv -f video4linux2 -s 320x240 -r 20 -i /dev/video0 -an -metadata title="OfficeBot" -f flv rtmp://6f7528a4.fme.bambuser.com/b-fme/xxx', shell = True)
I want to know how to terminate this script when I SSH in.
I understand I can do:
sudo pkill -f "python run.py remote"
or use:
ps -f -C python
to find the process ID and kill it that way.
However, none of these gracefully kill the process. I want to be able to trigger the equivalent of Ctrl-C so that an exit routine runs (I do lots of things on shutdown that aren't triggered when the process is simply killed).
Thank you!
You should use "signals" for it:
http://docs.python.org/2/library/signal.html
Example:
import signal, os

def handler(signum, frame):
    print 'Signal handler called with signal', signum

signal.signal(signal.SIGINT, handler)
# do your stuff
Then in a terminal:
kill -INT $PID
or Ctrl-C if your script is running in the current shell.
http://en.wikipedia.org/wiki/Unix_signal
Also, this might be useful:
How do you create a daemon in Python?
You can use signals for communicating with your process. If you want to emulate Ctrl-C, the signal is SIGINT, which you can send with kill -INT and the process id. You can also modify the behavior for SIGTERM, which would make your program shut down cleanly under a broader range of circumstances.
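For the original run.py, a minimal sketch that covers both signals (the handler body is illustrative; webcam is the Popen handle from the question):

import signal
import sys

def shutdown(signum, frame):
    print('Received signal %d, cleaning up' % signum)
    webcam.terminate()  # stop the streaming subprocess started earlier
    # ... the rest of your shutdown work ...
    sys.exit(0)

signal.signal(signal.SIGINT, shutdown)   # Ctrl-C / kill -INT
signal.signal(signal.SIGTERM, shutdown)  # plain kill / pkill

With this in place, sudo pkill -f "python run.py remote" (which sends SIGTERM by default) triggers the handler instead of dropping the process on the floor.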
I have a python program like this:
import signal, time
def cleanup(*_):
    print("cleanup")
    # do stuff ...
    exit(1)
# trap ctrl+c and hide the traceback message
signal.signal(signal.SIGINT, cleanup)
time.sleep(20)
I run the program through a script:
#!/bin/bash
ARG1="$1"
trap cleanup INT TERM EXIT
cleanup() {
    echo "\ncleaning up..."
    killall -9 python >/dev/null 2>&1
    killall -9 python3 >/dev/null 2>&1
    # some more killing here ...
}
mystart() {
    echo "starting..."
    export PYTHONPATH=$(pwd)
    python3 -u myfolder/myfile.py $ARG1 2>&1 | tee "myfolder/log.txt"
}
mystart &&
cleanup
My problem is that the message "cleanup" appears neither on the terminal nor in the log file.
However, if I call the program without redirecting the output it works fine.
Pressing ^C sends SIGINT to the entire foreground process group (the current pipeline or shell "job"), killing tee before it can write the output from your handler anywhere.
If you don't want this to happen, put tee in the background so it isn't part of the process group getting a SIGINT. For example, with bash 4.1 or newer, you can start a process substitution with an automatically-allocated file descriptor providing a handle:
#!/usr/bin/env bash
#    ^^^^ NOT /bin/sh; >(...) is a bashism, likewise automatic FD allocation.
exec {log_fd}> >(exec tee log.txt)  # run this first as a separate command
python3 -u myfile >&"$log_fd" 2>&1  # then here, ctrl+c will only impact Python...
exec {log_fd}>&-                    # here we close the file & thus the copy of tee.
Of course, if you put those three commands in a script, that entire script becomes your foreground process, so different techniques are called for. You can instead use trap in the shell to immunize a command against SIGINT, although that comes with obvious risks:
python3 -u myfile > >(trap '' INT; exec tee log.txt) 2>&1
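Applied to the asker's script, mystart could be rewritten along these lines (a sketch reusing the paths and names from the question; untested):

mystart() {
    echo "starting..."
    export PYTHONPATH=$(pwd)
    # tee runs in a process substitution with SIGINT trapped away,
    # so Ctrl-C only reaches the Python process
    python3 -u myfolder/myfile.py "$ARG1" > >(trap '' INT; exec tee "myfolder/log.txt") 2>&1
}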
Simply use the -i or --ignore-interrupts option of tee.
Documentation says:
-i, --ignore-interrupts
ignore interrupt signals
https://helpmanual.io/man1/tee/
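With the asker's pipeline, that would look like this (a sketch reusing the question's paths):

python3 -u myfolder/myfile.py "$ARG1" 2>&1 | tee -i "myfolder/log.txt"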
I'm trying to:
- launch a background process (a python script)
- run some bash commands
- then send Ctrl-C to shut down the background process once the foreground tasks are finished
Minimal example of what I've tried - Python test.py:
import sys

try:
    print("Running")
    while True:
        pass
except KeyboardInterrupt:
    print("Escape!")
Bash test.sh:
#!/bin/bash
python3 ./test.py &
pid=$!
# ... do something here ...
sleep 2
# Send an interrupt to the background process
# and wait for it to finish cleanly
echo "Shutdown"
kill -SIGINT $pid
wait
result=$?
echo $result
exit $result
But the bash script seems to hang on the wait, and the SIGINT signal is not being sent to the Python process.
I'm using Mac OS X and am looking for a solution that works for bash on both Linux and Mac.
Edit: Bash was sending interrupts but Python was not capturing them when being run as a background job. Fixed by adding the following to the Python script:
import signal
signal.signal(signal.SIGINT, signal.default_int_handler)
The point is that SIGINT is used to terminate a foreground process; you should use kill $pid directly to terminate a background process.
BTW, kill $pid is equivalent to kill -15 $pid or kill -SIGTERM $pid.
Update
You can use signal module to deal with this situation.
import signal
import sys

def handle(signum, frame):
    sys.exit(0)

signal.signal(signal.SIGINT, handle)

print("Running")
while True:
    pass
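If you follow the advice above and use plain kill (SIGTERM) for the background process, the same handler can cover that signal too; one extra registration suffices:

signal.signal(signal.SIGTERM, handle)  # also exit cleanly on plain `kill`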
I'm trying to catch SIGINT (or keyboard interrupt) in a Python 2.7 program. This is how my Python test script test looks:
#!/usr/bin/python
import time

try:
    time.sleep(100)
except KeyboardInterrupt:
    pass
except:
    print "error"
Next I have a shell script test.sh:
./test & pid=$!
sleep 1
kill -s 2 $pid
When I run the script with bash or sh (e.g. bash test.sh), the Python process test stays running and is not killable with SIGINT, whereas when I copy the commands from test.sh and paste them into a (bash) terminal, the Python process shuts down.
I cannot figure out what's going on, which I'd like to understand. So, where is the difference, and why?
This is not about how to catch SIGINT in Python! According to the docs, this is the way it should work:
Python installs a small number of signal handlers by default: SIGPIPE ... and SIGINT is translated into a KeyboardInterrupt exception
It is indeed catching KeyboardInterrupt when SIGINT is sent by kill if the program is started directly from the shell, but when the program is started in the background from a bash script, it seems that KeyboardInterrupt is never raised.
There is one case in which the default SIGINT handler is not installed at startup, and that is when the disposition of SIGINT is SIG_IGN at program startup. The code responsible for this can be found in the CPython source.
The dispositions of ignored signals are inherited from the parent process, while handled signals are reset to SIG_DFL. So if SIGINT was ignored, the condition if (Handlers[SIGINT].func == DefaultHandler) in the source won't trigger, and the default handler is not installed; Python doesn't override the settings made by the parent process in this case.
So let's try to show the used signal handler in different situations:
# invocation from interactive shell
$ python -c "import signal; print(signal.getsignal(signal.SIGINT))"
<built-in function default_int_handler>
# background job in interactive shell
$ python -c "import signal; print(signal.getsignal(signal.SIGINT))" &
<built-in function default_int_handler>
# invocation in non interactive shell
$ sh -c 'python -c "import signal; print(signal.getsignal(signal.SIGINT))"'
<built-in function default_int_handler>
# background job in non-interactive shell
$ sh -c 'python -c "import signal; print(signal.getsignal(signal.SIGINT))" &'
1
So in the last example, SIGINT is set to 1 (SIG_IGN). This is the same as when you start a background job in a shell script, as those are non-interactive by default (unless you use the -i option in the shebang).
So this is caused by the shell ignoring the signal when launching a background job in a non-interactive shell session, not by Python directly. At least bash and dash behave this way; I've not tried other shells.
There are two options to deal with this situation:
1. manually install the default signal handler:
import signal
signal.signal(signal.SIGINT, signal.default_int_handler)
2. add the -i option to the shebang of the shell script, e.g.:
#!/bin/sh -i
Edit: this behaviour is documented in the bash manual:
SIGNALS
...
When job control is not in effect, asynchronous commands ignore SIGINT and SIGQUIT in addition to these inherited handlers.
which applies to non-interactive shells as they have job control disabled by default, and is actually specified in POSIX: Shell Command Language
I am working on a script in Python where I first set ettercap to ARP poisoning and then start urlsnarf to log the URLs. I want ettercap to start first and then, while it is poisoning, start urlsnarf. The problem is that these jobs must run at the same time, with urlsnarf showing its output. So I thought it would be nice if I could run ettercap in the background, without waiting for it to exit, and then run urlsnarf. I tried the nohup command, but at the moment urlsnarf was supposed to show the URLs, the script just ended. I run:
subprocess.call(["ettercap",
                 "-M ARP /192.168.1.254/ /192.168.1.66/ -p -T -q -i wlan0"])
But I get:
ettercap NG-0.7.4.2 copyright 2001-2005 ALoR & NaGA
MITM method ' ARP /192.168.1.254/ /192.168.1.66/ -p -T -q -i wlan0' not supported...
Which means that somehow the arguments were not passed correctly.
You could use the subprocess module in the Python standard library to spawn ettercap as a separate process that runs simultaneously with the parent. Using the Popen class from subprocess, you'll be able to spawn your ettercap process, run your other processing, and then kill the ettercap process when you are done. More info here: Python Subprocess Package
import shlex, subprocess
args = shlex.split("ettercap -M ARP /192.168.1.254/ /192.168.1.66/ -p -T -q -i wlan0")
ettercap = subprocess.Popen(args)
# program continues without waiting for ettercap process to finish.
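When the rest of the script has finished, the ettercap process above can be stopped explicitly. A sketch using os.kill, which also works on older Pythons that lack Popen.terminate():

import os, signal

# ... run urlsnarf / other foreground work here ...
os.kill(ettercap.pid, signal.SIGTERM)  # ask ettercap to exit
ettercap.wait()                        # reap the child so it doesn't linger as a zombie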
I want to kill a subprocess if its execution time is too long.
I know I have to use os.kill or os.killpg.
However, the problem arises when I am not the root user. For example, in my designed GUI I want to call a subprocess, and use os.kill or os.killpg to kill it. But my GUI is owned by apache, so when it comes to os.kill I get the error:
OSError: [Errno 1] Operation not permitted
Besides, my Python version is 2.4.3, so terminate() etc. can't be used.
Could anyone give me some ideas?
Thanks a lot!
P.S.
Related part of my code:
timeout = 4
start = datetime.datetime.now()  # added for completeness; defined elsewhere in the real code
subp = subprocess.Popen('sudo %s' % commandtosend, shell=True, preexec_fn=os.setsid,
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
while subp.poll() is None:
    time.sleep(0.1)
    now = datetime.datetime.now()
    if (now - start).seconds > timeout:
        os.kill(subp.pid, signal.SIGKILL)
        #os.killpg(subp.pid, signal.SIGKILL)
        break
Remove sudo from the subprocess command if possible; you should do this anyway, because running a subprocess as root from your GUI is definitely a security risk:
subp = subprocess.Popen(commandtosend, shell=True, preexec_fn=os.setsid,
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
#                       ^^ no sudo in front of commandtosend
This way your subprocess will be launched as the www-data user (the Apache user), and you can kill it with os.kill(subp.pid, signal.SIGKILL).
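Since the question's Popen call already uses preexec_fn=os.setsid, the child is the leader of its own process group, so the commented-out killpg line works too and takes out the shell plus anything it spawned (a sketch):

import os, signal

# setsid made subp.pid the process-group id as well
os.killpg(subp.pid, signal.SIGKILL)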
If it's not possible to remove the sudo (which is bad) from the subprocess, you will have to execute the kill like this:
os.system("sudo kill %s" % (subp.pid, ))
Hope this can help :)
Your subprocess is running with superuser privileges (because you're starting it with sudo).
To kill it, you need to be superuser.
One option would be to not use os.kill but run 'sudo kill 5858', where 5858 would be the PID of the process spawned by subprocess.Popen.
It's also worth noting that if your program allows the user to control commandtosend, you will give the user superuser rights to the entire machine.
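As a safer variant of the os.system call above, passing an argument list avoids going through a shell at all (a sketch; subprocess.call is available in Python 2.4):

import subprocess

subprocess.call(["sudo", "kill", str(subp.pid)])  # no shell involved, nothing to inject into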