Popen.wait never returning with docker-compose - python

I am developing a wrapper around docker compose with python.
However, I struggle with Popen.
Here is how I launch it:
import signal
import subprocess as sp

argList = ['docker-compose', 'up']
env = {'HOME': '/home/me/somewhere'}
p = sp.Popen(argList, env=env)

def handler(signum, frame):
    p.send_signal(signum)

for s in (signal.SIGINT,):
    signal.signal(s, handler)  # to redirect Ctrl+C

p.wait()
Everything works fine: when I hit Ctrl+C, docker-compose kills the container gracefully. However, p.wait() never returns...
Any hints?
NOTE: While writing the question, I thought I should check whether p.wait() actually returns and whether the hang happens after it (it's the last instruction in the script). Adding a print after it ends in the process exiting normally; any further hints on this behavior?

When I run your code as written, it works as intended in that it causes docker-compose to exit and then p.wait() returns. However, I occasionally see this behavior:
Killing example_service_1 ... done
ERROR: 2
I think that your code may end up delivering SIGINT twice to docker-compose. That is, I think docker-compose receives an initial SIGINT when you type CTRL-C, because it has the same controlling terminal as your Python script, and then you explicitly deliver another SIGINT in your handler function.
I don't always see this behavior, so it's possible my explanation is incorrect.
In any case, I think the correct solution here is simply to ignore SIGINT in your Python code:
import signal
import subprocess
argList = ["docker-compose", "up"]
p = subprocess.Popen(argList)
signal.signal(signal.SIGINT, signal.SIG_IGN)  # ignore Ctrl+C in the Python parent
p.wait()
With this implementation, your Python code ignores the SIGINT generated by CTRL-C, but it is received and processed normally by docker-compose.
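If you do want the Python script to stay in charge of signal delivery, a sketch along these lines (using Popen's standard start_new_session parameter, not something from the answer above) should also avoid the double delivery: docker-compose is moved out of the terminal's foreground process group, so Ctrl+C reaches only the Python script, which then forwards a single SIGINT.

import signal
import subprocess

# docker-compose runs in its own session, outside the terminal's
# foreground process group, so it never gets the terminal's SIGINT.
p = subprocess.Popen(["docker-compose", "up"], start_new_session=True)

def handler(signum, frame):
    p.send_signal(signum)  # forward exactly one SIGINT to docker-compose

signal.signal(signal.SIGINT, handler)
p.wait()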

Catching SIGINT (Ctrl+C) signal sent from systemd to a python daemon/service

EDIT: Narrowed the problem down from the original version: I originally assumed all SIGINT overrides were being ignored, but it's actually just the subprocess one; edited to reflect this.
I'd like Python to shut down safely when receiving SIGINT (Ctrl+C) from systemd. However, the command sudo systemctl kill --signal=SIGINT myapp ignores my subprocess.Popen(args, stdout=PIPE, stderr=PIPE, preexec_fn=os.setsid) line, which is supposed to prevent the SIGINT from reaching the called process (it works when NOT using systemd), and crashes my program anyway.
Here's my setup (similar to this: How can I make a python daemon handle systemd signals?):
import logging
import os
import signal

shutdown = False

def shutdown_handler(signal, frame):
    global shutdown
    is_thread = frame.f_code.co_name == "my_thread_func"
    if shutdown:
        logging.info("Force shutdown for process {0}".format(os.getpid()))
        raise KeyboardInterrupt
    else:
        shutdown = True
        if not is_thread:
            logging.info("Shutdown signal received. Waiting for sweeps to finish.")
            logging.info("Press Ctrl-C again to force shutdown.")
        return

signal.signal(signal.SIGINT, shutdown_handler)
Elsewhere:
subprocess.Popen(args, stdout=PIPE, stderr=PIPE, preexec_fn = os.setsid)
When running WITHOUT systemd (just python daemon.py), the Popen subprocess continues running as desired. But when using sudo systemctl kill --signal=SIGINT myapp, the signal is sent to the parent, child, and Popen (command-line) processes:
systemd[1]: fi_iot.service: Sent signal SIGINT to main process 512562 (python3) on client request.
systemd[1]: fi_iot.service: Sending signal SIGINT to process 512978 (python3) on client request.
systemd[1]: fi_iot.service: Sending signal SIGINT to process 513023 (my-cli-tool) on client request.
Anyone know why this is happening?
I'm also open to suggestions on alternative ways of implementing this (e.g. adding an ExecStop= argument to my systemd service config, or using a custom signal instead of SIGINT), though I'd rather override as little default behavior as possible: I want sudo systemctl stop myapp to do what it's supposed to do, without my custom code potentially messing things up or confusing others.
EDIT: It seems this issue is specific to how the Popen function is called. I might try setting it to SIG_IGN and see if that works; an earlier version of this post indicated this was a broader issue than it appears to be.
In Python 3, Ctrl+C is raised as the exception KeyboardInterrupt.
To catch it, use try with except KeyboardInterrupt:.
The best bet for catching it is to make a main method and put the try/except around where you call it.
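For example, a minimal sketch of that pattern (main() and the cleanup message are placeholders, not code from the question):

def main():
    ...  # start the threads / subprocesses and do the work

if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print("Interrupted, shutting down cleanly")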
Update:
Popen has a method called send_signal, so you can forward the systemd signal to it
Relevant Python docs:
https://docs.python.org/3/library/subprocess.html#subprocess.Popen.send_signal
Solution: Using a different method of preventing subprocess.Popen from overriding signals worked:
def preexec_function():
    # used by Popen to tell driver to ignore SIGINT
    signal.signal(signal.SIGINT, signal.SIG_IGN)

proc = Popen(args, stdout=PIPE, stderr=PIPE, preexec_fn=preexec_function)
Now the subprocess ignores the signal, unlike Copilot's preexec_fn=os.setsid, which doesn't do what I want under systemd; that's what I get for using GPT-3 generated code I don't understand.
I may look into Showierdata9978's suggestion of using send_signal, which could allow me to send the interrupt when the second Ctrl+C is pressed, letting the child shut down safely despite the ignore.
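A rough sketch of how that could look (not code from the question; "my-cli-tool" and the handler body are placeholders): the child ignores SIGINT at exec time, and the parent forwards the signal explicitly only on the second Ctrl+C.

import signal
import subprocess

def ignore_sigint():
    # runs in the child just before exec: start it with SIGINT ignored
    signal.signal(signal.SIGINT, signal.SIG_IGN)

proc = subprocess.Popen(["my-cli-tool"], preexec_fn=ignore_sigint)

shutdown = False

def shutdown_handler(signum, frame):
    global shutdown
    if shutdown:
        proc.send_signal(signal.SIGINT)  # second Ctrl+C: interrupt the child too
        raise KeyboardInterrupt
    shutdown = True  # first Ctrl+C: keep running, let the sweeps finish

signal.signal(signal.SIGINT, shutdown_handler)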

Killing a background process launched with python sh

I have a compiled program that I launch using Python sh as a background process. I want to run it for 20 seconds, then kill it. I always get an exception I can't catch. The code looks like:
import sh

cmd = sh.Command('./rtlogger')
try:
    p = cmd('config.txt', _bg=True, _out='/dev/null', _err='/dev/null', _timeout=20)
    p.wait()
except sh.TimeoutException:
    print('caught timeout')
I have also tried to use p.kill() and p.terminate() after catching the timeout exception. I see a stack trace that ends in SignalException_SIGKILL, which I can't seem to catch; the stack trace references none of my code. Also, the text comes to the screen even though I'm routing stdout and stderr to /dev/null.
The program seems to run OK and the logger collects the data, but I want to eliminate or catch the exception. Any advice appreciated.
_timeout for the original invocation only applies when the command is run synchronously, in the foreground. When you run a command asynchronously, in the background, with _bg=True, you need to pass timeout to the wait call instead, e.g.:
cmd = sh.Command('./rtlogger')
try:
    p = cmd('config.txt', _bg=True, _out='/dev/null', _err='/dev/null')
    p.wait(timeout=20)
except sh.TimeoutException:
    print('caught timeout')
Of course, in this case, you're not taking advantage of it being in the background (no work is done between launch and wait), so you may as well run it in the foreground and leave the _timeout on the invocation:
cmd = sh.Command('./rtlogger')
try:
    p = cmd('config.txt', _out='/dev/null', _err='/dev/null', _timeout=20)
except sh.TimeoutException:
    print('caught timeout')
You don't need to explicitly kill or terminate the child process; the _timeout_signal argument is used to signal the child on timeout (defaulting to signal.SIGKILL). You can change it to another signal if SIGKILL is not what you desire, but you don't need to call kill/terminate yourself either way; the act of timing out sends the signal for you.
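For example, building on the _timeout_signal argument described above, something like this should let rtlogger exit cleanly on timeout instead of being SIGKILLed (a sketch; adjust the signal to whatever the program actually handles):

import signal
import sh

cmd = sh.Command('./rtlogger')
try:
    # _timeout_signal swaps the default SIGKILL for a catchable SIGTERM
    cmd('config.txt', _out='/dev/null', _err='/dev/null',
        _timeout=20, _timeout_signal=signal.SIGTERM)
except sh.TimeoutException:
    print('caught timeout')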

subprocess.Popen makes terminal crash after KeyboardInterrupt

I wrote a simple Python script, ./vader-shell, which uses subprocess.Popen to launch a spark-shell, and I have to deal with KeyboardInterrupt since otherwise the child process would not die:
import subprocess

command = ['/opt/spark/current23/bin/spark-shell']
command.extend(params)
p = subprocess.Popen(command)
try:
    p.communicate()
except KeyboardInterrupt:
    p.terminate()
This is what I see with ps f
When I actually interrupt with Ctrl+C, I see the processes dying (most of the time). However, the terminal starts acting weird: I don't see any cursor, and all the lines start to appear randomly.
I am really lost as to the best way to run a subprocess with this library and how to handle killing the child processes. What I want to achieve is basic: whenever my Python process is killed with Ctrl+C, I want the whole family of processes to be killed. I googled several solutions (os.kill, p.wait() after termination, calling subprocess.Popen(['reset']) after termination) but none of them worked.
Do you know the best way to kill the children when KeyboardInterrupt happens? Or do you know any other, more reliable library to use to spin up processes?
There is nothing blatantly wrong with your code; the problem is that the command you are launching tries to do stuff with the current terminal and does not correctly restore the settings when shutting down. Replacing your command with a sleep, as below, will run just fine and stop on Ctrl+C without problems:
import subprocess

command = ['/bin/bash']
command.extend(['-c', 'sleep 600'])
p = subprocess.Popen(command)
try:
    p.communicate()
except KeyboardInterrupt:
    p.terminate()
I don't know what you're trying to do with spark-shell, but if you don't need its output you could try to redirect it to /dev/null so that it doesn't mess up the terminal display:
p = subprocess.Popen(command, stdout=subprocess.DEVNULL)
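A slightly fuller sketch of that suggestion (sending both streams to /dev/null and reaping the child after terminating it; params is a placeholder for your spark-shell arguments):

import subprocess

command = ['/opt/spark/current23/bin/spark-shell'] + params  # params: placeholder
p = subprocess.Popen(command, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
try:
    p.communicate()
except KeyboardInterrupt:
    p.terminate()
    p.wait()  # reap the child so it doesn't linger as a zombie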

How to capture Ctrl+C in a Python script which executes a Ruby script?

I am executing a Ruby script from within a Python script. Here is what my Python script "script007.py" looks like:
.
.
.
os.system("ruby script.rb") #executing ctrl+c here
print "should not be here"
.
.
.
I execute CTRL+C when the Ruby script is running but it just stops "script.rb" and continues with the rest of "script007.py". I know this because it prints "should not be here" when the Ruby script is stopped.
Is there a way that I can catch the CTRL+C in my Python script even though it happens in Ruby script? Let me know if further explanation is required.
In Python, SIGINT raises a special exception (KeyboardInterrupt) which you could catch. If, however, the child consumes the SIGINT signal and responds to it, it does not arrive at the parent process. Then you need to find a different way for the child to tell the parent why it exited. This is usually the exit code.
In any case, you should start replacing os.system() with tools from the subprocess module (this is documented; just go and read about it in the subprocess docs). You could emit a certain exit code in the child when it exits after receiving SIGINT, and analyze the exit code in the parent. You can then exit the parent conditionally right after the child process has terminated, depending on what the exit code of the child was.
Example: your child (the Ruby program) exits with code 15 after receiving SIGINT. In the parent (the Python program) you would do something along these lines:
p = subprocess.Popen(...)
out, err = p.communicate(...)
if p.returncode == 15:
    sys.exit(1)
print "should not be here"
In your Ruby script:
trap('SIGINT') { exit 1 }
In your Python script, os.system() returns the child's exit status. You can use that to redirect the flow of control however needed, e.g. call sys.exit().
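A sketch of how that check could look with os.system() (on Unix it returns the raw wait status rather than the exit code itself, so the code has to be extracted):

import os
import sys

status = os.system("ruby script.rb")
# os.WEXITSTATUS pulls the child's exit code out of the wait status
exit_code = os.WEXITSTATUS(status) if os.WIFEXITED(status) else -1

if exit_code == 1:  # the Ruby trap handler called exit 1 after Ctrl+C
    sys.exit(1)
print("should not be here")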

Python: How to prevent subprocesses from receiving CTRL-C / Control-C / SIGINT

I am currently working on a wrapper for a dedicated server running in the shell. The wrapper spawns the server process via subprocess and observes and reacts to its output.
The dedicated server must be explicitly given a command to shut down gracefully. Thus, CTRL-C must not reach the server process.
If I capture the KeyboardInterrupt exception or overwrite the SIGINT-handler in python, the server process still receives the CTRL-C and stops immediately.
So my question is:
How to prevent subprocesses from receiving CTRL-C / Control-C / SIGINT?
Somebody in the #python IRC channel (Freenode) helped me by pointing out the preexec_fn parameter of subprocess.Popen(...):
If preexec_fn is set to a callable object, this object will be called in the child process just before the child is executed. (Unix only)
Thus, the following code solves the problem (UNIX only):
import subprocess
import signal

def preexec_function():
    # Ignore the SIGINT signal by setting the handler to the standard
    # signal handler SIG_IGN.
    signal.signal(signal.SIGINT, signal.SIG_IGN)

my_process = subprocess.Popen(
    ["my_executable"],
    preexec_fn=preexec_function
)
Note: The signal is actually not prevented from reaching the subprocess. Instead, the preexec_fn above overwrites the signal's default handler so that the signal is ignored. Thus, this solution may not work if the subprocess overwrites the SIGINT handler again.
Another note: This solution works for all sorts of subprocesses, i.e. it is not restricted to subprocesses written in Python. For example, the dedicated server I am writing my wrapper for is in fact written in Java.
Combining some of the other answers will do the trick: no signal sent to the main app will be forwarded to the subprocess.
import os
from subprocess import Popen

def preexec():  # Don't forward signals.
    os.setpgrp()

Popen('whatever', preexec_fn=preexec)
You can do something like this to make it work on both Windows and Unix:
import signal
import subprocess
import sys

def pre_exec():
    # To ignore CTRL+C signal in the new process
    signal.signal(signal.SIGINT, signal.SIG_IGN)

if sys.platform.startswith('win'):
    # https://msdn.microsoft.com/en-us/library/windows/desktop/ms684863(v=vs.85).aspx
    # CREATE_NEW_PROCESS_GROUP=0x00000200 -> If this flag is specified, CTRL+C signals will be disabled
    my_sub_process = subprocess.Popen(["executable"], creationflags=0x00000200)
else:
    my_sub_process = subprocess.Popen(["executable"], preexec_fn=pre_exec)
After an hour of various attempts, this works for me:
process = subprocess.Popen(["someprocess"], creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP)
It's a solution for Windows.
Try setting SIGINT to be ignored before spawning the subprocess (reset it to default behavior afterward).
If that doesn't work, you'll need to read up on job control and learn how to put a process in its own background process group, so that ^C doesn't even cause the kernel to send the signal to it in the first place. (May not be possible in Python without writing C helpers.)
See also this older question.
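A minimal sketch of the first suggestion ("my_executable" is a placeholder): SIG_IGN is inherited across fork/exec, so the child starts with SIGINT ignored, and the parent restores its previous handler right after the spawn so it can still be interrupted itself.

import signal
import subprocess

previous = signal.signal(signal.SIGINT, signal.SIG_IGN)  # ignore around the spawn
child = subprocess.Popen(["my_executable"])              # child inherits SIG_IGN
signal.signal(signal.SIGINT, previous)                   # parent handles Ctrl+C again

child.wait()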
