I'm trying to catch SIGINT (a keyboard interrupt) in a Python 2.7 program. This is how my Python test script test looks:
#!/usr/bin/python
import time
try:
    time.sleep(100)
except KeyboardInterrupt:
    pass
except:
    print "error"
Next I have a shell script test.sh:
./test & pid=$!
sleep 1
kill -s 2 $pid
When I run the script with bash test.sh (or sh test.sh), the Python process test keeps running and cannot be killed with SIGINT. But when I copy the commands from test.sh and paste them into an interactive (bash) terminal, the Python process shuts down.
I can't work out what's going on, and I'd like to understand it. So, where is the difference, and why?
This is not about how to catch SIGINT in Python! According to the docs, this is the way, and it should work:
Python installs a small number of signal handlers by default: SIGPIPE ... and SIGINT is translated into a KeyboardInterrupt exception
It does indeed catch KeyboardInterrupt when SIGINT is sent by kill and the program is started directly from a shell, but when the program is started in the background from a bash script, KeyboardInterrupt is apparently never raised.
There is one case in which the default SIGINT handler is not installed at startup, and that is when the signal disposition for SIGINT is SIG_IGN at program startup. The code responsible for this can be found here.
The dispositions of ignored signals are inherited from the parent process, while handled signals are reset to SIG_DFL. So if SIGINT was ignored, the condition if (Handlers[SIGINT].func == DefaultHandler) in the source won't trigger, and the default handler is not installed; Python doesn't override the settings made by the parent process in this case.
So let's inspect the installed signal handler in different situations:
# invocation from interactive shell
$ python -c "import signal; print(signal.getsignal(signal.SIGINT))"
<built-in function default_int_handler>
# background job in interactive shell
$ python -c "import signal; print(signal.getsignal(signal.SIGINT))" &
<built-in function default_int_handler>
# invocation in non interactive shell
$ sh -c 'python -c "import signal; print(signal.getsignal(signal.SIGINT))"'
<built-in function default_int_handler>
# background job in non-interactive shell
$ sh -c 'python -c "import signal; print(signal.getsignal(signal.SIGINT))" &'
1
So in the last example, SIGINT is set to 1 (SIG_IGN). This is the same as when you start a background job in a shell script, as scripts are non-interactive by default (unless you pass the -i option in the shebang).
So this is caused by the shell ignoring the signal when launching a background job in a non-interactive shell session, not by Python directly. At least bash and dash behave this way; I haven't tried other shells.
There are two options to deal with this situation:
manually install the default signal handler (a defensive variant is sketched after this list):
import signal
signal.signal(signal.SIGINT, signal.default_int_handler)
add the -i option to the shebang of the shell script, e.g:
#!/bin/sh -i
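A defensive variant of the first option (a minimal sketch) restores the default handler only when the inherited disposition is actually SIG_IGN:
import signal

# Restore Python's default SIGINT -> KeyboardInterrupt handler only when the
# parent process (e.g. a non-interactive shell starting a background job)
# left SIGINT set to SIG_IGN.
if signal.getsignal(signal.SIGINT) == signal.SIG_IGN:
    signal.signal(signal.SIGINT, signal.default_int_handler)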
Edit: this behaviour is documented in the bash manual:
SIGNALS
...
When job control is not in effect, asynchronous commands ignore SIGINT and SIGQUIT in addition to these inherited handlers.
which applies to non-interactive shells as they have job control disabled by default, and is actually specified in POSIX: Shell Command Language
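Since the quoted paragraph only applies "when job control is not in effect", enabling job control explicitly in the script with set -m should also keep the background job's SIGINT disposition at the default. An untested sketch of test.sh along those lines:
#!/bin/sh
set -m          # enable job control, so the background job keeps the default SIGINT disposition
./test & pid=$!
sleep 1
kill -s 2 $pid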
Related
I have a python program like this:
import signal, time

def cleanup(*_):
    print("cleanup")
    # do stuff ...
    exit(1)

# trap ctrl+c and hide the traceback message
signal.signal(signal.SIGINT, cleanup)

time.sleep(20)
I run the program through a script:
#!/bin/bash
ARG1="$1"
trap cleanup INT TERM EXIT
cleanup() {
    echo -e "\ncleaning up..."
    killall -9 python >/dev/null 2>&1
    killall -9 python3 >/dev/null 2>&1
    # some more killing here ...
}
mystart() {
    echo "starting..."
    export PYTHONPATH=$(pwd)
    python3 -u myfolder/myfile.py $ARG1 2>&1 | tee "myfolder/log.txt"
}
mystart &&
cleanup
My problem is that the cleanup message isn't appearing on the terminal or in the log file.
However, if I call the program without redirecting the output, it works fine.
Pressing ^C sends SIGINT to the entire foreground process group (the current pipeline or shell "job"), killing tee before it can write the output from your handler anywhere.
If you don't want that to happen, put tee in the background so it isn't part of the process group receiving the SIGINT. For example, with bash 4.1 or newer, you can start a process substitution with an automatically-allocated file descriptor providing a handle:
#!/usr/bin/env bash
# ^^^^ NOT /bin/sh; >(...) is a bashism, likewise automatic FD allocation.
exec {log_fd}> >(exec tee log.txt)  # run this first as a separate command
python3 -u myfile >&"$log_fd" 2>&1  # then here, ctrl+c will only impact Python...
exec {log_fd}>&-                    # ...and here we close the file, and thus our copy of tee.
Of course, if you put those three commands in a script, that entire script becomes your foreground process, so different techniques are called for. Thus:
python3 -u myfile > >(trap '' INT; exec tee log.txt) 2>&1
Here, trap '' INT immunizes the tee command against SIGINT, although that comes with obvious risks.
Simply use the -i or --ignore-interrupts option of tee.
Documentation says:
-i, --ignore-interrupts
ignore interrupt signals
https://helpmanual.io/man1/tee/
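Applied to the pipeline from the script in the question, that would look like:
python3 -u myfolder/myfile.py "$ARG1" 2>&1 | tee -i "myfolder/log.txt"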
I have an issue with executing an application via /usr/bin/timeout in a bash script.
In this specific case it is a simple Python fabric script (fabric version 1.14).
To install this version of the fabric library, run pip install "fabric<2".
The issue does not reproduce with the new fabric 2.x.
The shell script causing the issue:
[root@testhost:~ ] $ cat testNOK.sh
#!/bin/bash
timeout 10 ./test.py
echo "RETCODE=$?"
[root@testhost:~ ] $ ./testNOK.sh
[localhost] run: echo Hello!
RETCODE=124
[root@testhost:~ ] $
A similar script (without timeout) works fine:
[root@testhost:~ ] $ cat testOK.sh
#!/bin/bash
./test.py
echo "RETCODE=$?"
[root@testhost:~ ] $ ./testOK.sh
[localhost] run: echo Hello!
[localhost] out: Hello!
[localhost] out:
RETCODE=0
[root@testhost:~ ] $
Manual execution from the bash command line with timeout works fine:
[root@testhost:~ ] $ timeout 10 ./test.py && echo "RETCODE=$?"
[localhost] run: echo Hello!
[localhost] out: Hello!
[localhost] out:
RETCODE=0
[root@testhost:~ ] $
The Python 2.7 test.py script:
[root@testhost:~ ] $ cat test.py
#!/usr/bin/python
from fabric.api import run, settings
with settings(host_string='localhost', user='root', password='XXXXX'):
    run('echo Hello!')
[root@testhost:~ ] $
I have observed the same behavior on different Linux distributions.
Now the question is: why does an application executed via timeout within a bash script behave differently, and what would be the best solution to this issue?
You need to invoke timeout with the --foreground option:
timeout --foreground 10 ./test.py
This is only required if the timeout command is not executed from an interactive shell (that is, if it's executed from a script file).
Quoting from the timeout info page:
‘--foreground’
Don’t create a separate background program group, so that the
managed COMMAND can use the foreground TTY normally. This is
needed to support timing out commands not started directly from an
interactive shell, in two situations.
1. COMMAND is interactive and needs to read from the terminal for
example
2. the user wants to support sending signals directly to COMMAND
from the terminal (like Ctrl-C for example)
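Applied to the script from the question, testNOK.sh becomes:
#!/bin/bash
timeout --foreground 10 ./test.py
echo "RETCODE=$?"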
What's actually going on in this case is that fabric (or something it invokes) is calling tcsetattr to turn terminal echo off. I don't know why, but I suppose it has something to do with the process used to (not) collect the user's password. (I just saw the call in an strace; I made no attempt to track down where it comes from.) Attempting to change the tty configuration from a background process will cause the process to block until it regains control of the tty, and that's what's happening here.
It doesn't happen when timeout is not used because then bash doesn't create a separate background program group. I suppose that fabric 2 avoids the call to tcsetattr.
You could probably also avoid the issue by avoiding password-based SSH authentication, but I didn't try that.
You can also avoid the problem by redirecting stdin to /dev/null (either in the timeout command or in the invocation of the shell script). If you don't need to forward stdin to the remote command (and you probably don't), that might also be useful.
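Concretely, that would be either of:
timeout 10 ./test.py < /dev/null
./testNOK.sh < /dev/null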
You can also wait for a fixed number of seconds without bash, just by using the time module in Python:
import time
time.sleep(5)
# change the 5 to the number of seconds you need
I have a Python script (myscript.py) as follows:
#!/bin/python
import os
import optparse
import subprocess
import sys

sys.stdout.flush()
print("I can see this message on Jenkins console output")
cmd = "sshpass -p 'xxx' ssh test@testmachine 'cmd /c cd C:\\stage && test.bat'"
retval = subprocess.call(cmd, shell=True)
print retval
In Jenkins, I have a job with an "Execute shell" step as follows:
#!/bin/sh
./myscript.py
Problem:
Jenkins console shows only "I can see this message on Jenkins console output".
If there is any output from the subprocess call, it is not printed to the console.
If I PuTTY to Server A and run the same command (./myscript.py) in a shell, I can see the output of the subprocess call.
How can I print this output of subprocess call on Jenkins console?
FYI: As you can see from my command, the subprocess call is running a batch file on Windows; Jenkins is running on Linux; there is SSH set up between the two machines.
Edit:
My test.bat looks like this:
echo off
RMDIR /S /Q C:\Test
IF %ERRORLEVEL% NEQ 0 (
    ECHO Could not delete
    EXIT /b %ERRORLEVEL%
)
If I run this batch file locally on the Windows server, it returns 1 (because I am holding a file open in the Test folder).
But when the Python script calls this batch file using the subprocess call, all I get is a zero for retval.
Why is this, and how can I fix it? If I can capture the correct retval, I can make the Jenkins job fail.
Edit 12/12:
Hello!! Anybody! Somebody! Help!
I wonder if it has anything to do with stdout being buffered.
Can you try setting PYTHONUNBUFFERED before running your command?
export PYTHONUNBUFFERED=true
In my Jenkins environment, executing python scripts with the unbuffered argument makes the output appear immediately. Like this:
python3 -u some_script.py
More information comes from the help menu (python3 --help):
-u : force the stdout and stderr streams to be unbuffered;
this option has no effect on stdin; also PYTHONUNBUFFERED=x
TL;DR
The fix is to use some conditional execution (the || operator) on rmdir to fix the errorlevel being returned.
Investigation
This was a corker of a bug, with quite a few twists and turns! We initially suspected that the stdout chain was broken somehow, so we looked into that through explicit use of pipes in Popen, and then by removing sshpass from your command so as to use the output from ssh directly.
However, that didn't do the trick, so we moved on to looking at the return code of the command. With sshpass removed, ssh should return the result of the command that was run. However, this was always 0 for you.
At this point, I found a known bug in Windows that rmdir (which is the same as rd) doesn't always set errorlevel correctly. The fix is to use some conditional execution (the || operator) on rmdir to fix up the errorlevel.
See batch: Exit code for "rd" is 0 on error as well for full details.
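Applied to the test.bat from the question, the workaround might look like this (a sketch; the || branch fires when rd fails, even though rd itself leaves ERRORLEVEL at 0):
echo off
RMDIR /S /Q C:\Test || (
    ECHO Could not delete
    EXIT /b 1
)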
When you execute your script in a shell, Python passes your shell's STDOUT to the subprocess, so everything the subprocess prints goes straight to your terminal. I'm not sure why, but when you're executing under Jenkins the subprocess is not inheriting the shell's STDOUT, so its output is not displayed.
In all likelihood, the best way to solve your problem is to PIPE the STDOUT (and STDERR for good measure) and print it after the process ends. Also, if you exit with the exit code of your subprocess, a non-zero code will likely mark your Jenkins job as failed.
import subprocess
import sys

p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, shell=True)
exit_code = p.wait()  # wait for it to end
print('Got the following output from the script:\n', p.stdout.read().decode())
print('Got the following errors from the script:\n', p.stderr.read().decode())
print('Script returned exit code:', exit_code)
sys.exit(exit_code)
I'm trying to run a Python script from Python using the subprocess module, executing a script sequentially.
I'm doing this on UNIX, but before I launch Python in a new shell I need to execute a command (ppack_gnu) that sets up the environment for Python (and prints some lines to the console).
The thing is that when I run this command from a Python subprocess, the process hangs and waits for the command to finish, whereas when I do it in the UNIX console it jumps to the next line automatically.
Examples below:
From UNIX:
[user1@1:~]$ ppack_gnu; echo 1
You appear to be in prefix already (SHELL=/opt/soft/cdtng/tools/ppack_gnu/3.2/bin/bash)
1
[user1@1:~]$
From PYTHON:
processes.append(Popen("ppack_gnu; echo 1", shell=True, stdin = subprocess.PIPE))
This will print Entering Gentoo Prefix /opt/soft/cdtng/tools/ppack_gnu/3.2 - run 'bash -l' to source full bash profiles
in the Python console and then hang...
Popen() does not hang: it returns immediately while ppack_gnu may still be running in the background.
The fact that you see the shell prompt does not mean that the command has returned:
⟫ echo $$
9302 # current shell
⟫ bash
⟫ echo $$
12131 # child shell
⟫ exit
⟫ echo $$
9302 # current shell
($$ -- PID of the current shell)
Even in bash, you can't change environment variables of the parent shell (without gdb or similar hacks); that is why the source command exists.
stdin=PIPE suggests that you want to pass commands to the shell started by ppack_gnu. Perhaps you need to add process.stdin.flush() after the corresponding process.stdin.write(b'command\n').
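A minimal sketch of that pattern (assuming ppack_gnu starts a shell that reads commands from its stdin):
from subprocess import Popen, PIPE

# Start ppack_gnu and keep a pipe to its stdin so we can feed it commands.
process = Popen("ppack_gnu", shell=True, stdin=PIPE)
process.stdin.write(b"echo 1\n")
process.stdin.flush()  # make sure the child actually receives the command
process.stdin.close()  # send EOF so the child shell can terminate
process.wait()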
I am starting my script locally via:
sudo python run.py remote
This script also happens to open a subprocess (if that matters):
webcam = subprocess.Popen('avconv -f video4linux2 -s 320x240 -r 20 -i /dev/video0 -an -metadata title="OfficeBot" -f flv rtmp://6f7528a4.fme.bambuser.com/b-fme/xxx', shell = True)
I want to know how to terminate this script when I SSH in.
I understand I can do:
sudo pkill -f "python run.py remote"
or use:
ps -f -C python
to find the process ID and kill it that way.
However, none of these kills the process gracefully. I want to be able to trigger the equivalent of Ctrl-C so that an exit routine runs (I do lots of things on shutdown that aren't triggered when the process is simply killed).
Thank you!
You should use "signals" for it:
http://docs.python.org/2/library/signal.html
Example:
import signal, os

def handler(signum, frame):
    print 'Signal handler called with signal', signum

signal.signal(signal.SIGINT, handler)
# do your stuff
then in the terminal:
kill -INT $PID
or press Ctrl+C if your script is running in the foreground of the current shell.
http://en.wikipedia.org/wiki/Unix_signal
also this might be useful:
How do you create a daemon in Python?
You can use signals for communicating with your process. If you want to emulate Ctrl-C, the signal is SIGINT, which you can send with kill -INT and the process id. You can also handle SIGTERM, which would make your program shut down cleanly under a broader range of circumstances, since a plain kill sends SIGTERM by default.
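A minimal sketch combining the two, in Python 2 syntax to match the question (the cleanup body is a placeholder):
import signal
import sys

def shutdown(signum, frame):
    print 'shutting down on signal', signum
    # ... terminate the webcam subprocess, flush logs, etc. ...
    sys.exit(0)

signal.signal(signal.SIGINT, shutdown)   # sent by kill -INT <pid> or Ctrl-C
signal.signal(signal.SIGTERM, shutdown)  # sent by a plain kill <pid>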