How to stop SIGINT being passed to subprocess in Python?

My Python script intercepts the SIGINT signal with the signal module to prevent premature exit, but the signal is also passed to a subprocess that I open with Popen. Is there some way to prevent the signal from being passed to the subprocess, so that it too is not exited prematurely when the user presses Ctrl-C?

Signal dispositions are inherited when you start a subprocess, so if you use the signal module to ignore SIGINT (signal.signal(signal.SIGINT, signal.SIG_IGN)), then your child process will automatically ignore it as well.
There are two important caveats, though:
You have to set the ignore handler before you spawn the child process
Custom signal handlers are reset to the default handlers, since the child process won't have access to the handler code to run it.
So if you need to customise your handling of SIGINT rather than just ignoring it, you probably want to temporarily ignore SIGINT while you spawn your child process, then (re)set your custom signal handler.
If you're trying to catch SIGINT and set a flag so you can exit at a safe point rather than immediately, remember that when you get to that safe point your code will have to manually clean up its descendants, since your child process and any processes it starts will be ignoring the SIGINT.
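A minimal sketch of that ignore-spawn-restore dance (the command and the custom handler here are placeholders, not from the original answer):
import signal
import subprocess

def my_sigint_handler(signum, frame):
    # Placeholder: e.g. set a flag and exit at a safe point later.
    print('SIGINT received; deferring exit')

# Ignore SIGINT so the child inherits SIG_IGN across exec
signal.signal(signal.SIGINT, signal.SIG_IGN)
try:
    child = subprocess.Popen(['my_executable'])
finally:
    # (Re)set the custom handler -- this affects only the parent
    signal.signal(signal.SIGINT, my_sigint_handler)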

You can reassign the role of Ctrl-C using the tty module, which lets you manipulate how the terminal maps control characters to signals. Be warned, however, that unless you put the settings back the way they were before you modified them, they will persist for the terminal's entire session, even after the program exits.
Here is a simple snippet to get you started: it stores your old tty settings, reassigns Ctrl-C to Ctrl-X, and then restores the previous tty settings upon exit.
import sys
import time
import tty

# Back up previous tty settings
stdin_fileno = sys.stdin.fileno()
old_ttyattr = tty.tcgetattr(stdin_fileno)
try:
    print 'Reassigning ctrl-c to ctrl-x'
    # Enter raw mode on local tty
    tty.setraw(stdin_fileno)
    raw_ta = tty.tcgetattr(stdin_fileno)
    raw_ta[tty.LFLAG] |= tty.ISIG
    raw_ta[tty.OFLAG] |= tty.OPOST | tty.ONLCR
    # ^X is the new ^C; set this to 0 to disable it entirely
    raw_ta[tty.CC][tty.VINTR] = '\x18'
    # Set raw tty as active tty
    tty.tcsetattr(stdin_fileno, tty.TCSANOW, raw_ta)
    # Dummy program loop
    for _ in range(5):
        print 'doing stuff'
        time.sleep(1)
finally:
    print 'Resetting ctrl-c'
    # Restore previous tty no matter what
    tty.tcsetattr(stdin_fileno, tty.TCSANOW, old_ttyattr)

For a Python 2 codebase: the stdlib subprocess module is broken. The right thing is
import subprocess32 as subprocess
From the subprocess32 project description:
This is a backport of the Python 3 subprocess module for use on Python 2. This code has not been tested on Windows or other non-POSIX platforms.
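A common hedge is a try/except import, so the same code runs on Python 2 with the backport and on Python 3 unchanged (a sketch, not from the original answer):
try:
    import subprocess32 as subprocess  # Python 2 + backport
except ImportError:
    import subprocess  # Python 3, or Python 2 without the backport

# Used exactly like the stdlib module:
proc = subprocess.Popen(['echo', 'hello'])
proc.wait()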

Related

Gracefully terminating a subprocess in python [duplicate]

I am trying the code pasted below on Windows, but instead of handling the signal, it kills the process.
However, the same code works on Ubuntu.
import os, sys
import time
import signal

def func(signum, frame):
    print 'You raised a SigInt! Signal handler called with signal', signum

signal.signal(signal.SIGINT, func)

while True:
    print "Running...", os.getpid()
    time.sleep(2)
    os.kill(os.getpid(), signal.SIGINT)
Python's os.kill wraps two unrelated APIs on Windows. It calls GenerateConsoleCtrlEvent when the sig parameter is CTRL_C_EVENT or CTRL_BREAK_EVENT. In this case the pid parameter is a process group ID. If the latter call fails, and for all other sig values, it calls OpenProcess and then TerminateProcess. In this case the pid parameter is a process ID, and the sig value is passed as the exit code. Terminating a Windows process is akin to sending SIGKILL to a POSIX process. Generally this should be avoided since it doesn't allow the process to exit cleanly.
Note that the docs for os.kill mistakenly claim that "kill() additionally takes process handles to be killed", which was never true. It calls OpenProcess to get a process handle.
The decision to use WinAPI CTRL_C_EVENT and CTRL_BREAK_EVENT, instead of SIGINT and SIGBREAK, is unfortunate for cross-platform code. It's also not defined what GenerateConsoleCtrlEvent does when passed a process ID that's not a process group ID. Using this function in an API that takes a process ID is dubious at best, and potentially very wrong.
For your particular needs you can write an adapter function that makes os.kill a bit more friendly for cross-platform code. For example:
import os
import sys
import time
import signal

if sys.platform != 'win32':
    kill = os.kill
    sleep = time.sleep
else:
    # adapt the conflated API on Windows.
    import threading

    sigmap = {signal.SIGINT: signal.CTRL_C_EVENT,
              signal.SIGBREAK: signal.CTRL_BREAK_EVENT}

    def kill(pid, signum):
        if signum in sigmap and pid == os.getpid():
            # we don't know if the current process is a
            # process group leader, so just broadcast
            # to all processes attached to this console.
            pid = 0
        thread = threading.current_thread()
        handler = signal.getsignal(signum)
        # work around the synchronization problem when calling
        # kill from the main thread.
        if (signum in sigmap and
                thread.name == 'MainThread' and
                callable(handler) and
                pid == 0):
            event = threading.Event()
            def handler_set_event(signum, frame):
                event.set()
                return handler(signum, frame)
            signal.signal(signum, handler_set_event)
            try:
                os.kill(pid, sigmap[signum])
                # busy wait because we can't block in the main
                # thread, else the signal handler can't execute.
                while not event.is_set():
                    pass
            finally:
                signal.signal(signum, handler)
        else:
            os.kill(pid, sigmap.get(signum, signum))

    if sys.version_info[0] > 2:
        sleep = time.sleep
    else:
        import errno

        # If the signal handler doesn't raise an exception,
        # time.sleep in Python 2 raises an EINTR IOError, but
        # Python 3 just resumes the sleep.
        def sleep(interval):
            '''sleep that ignores EINTR in 2.x on Windows'''
            while True:
                try:
                    t = time.time()
                    time.sleep(interval)
                except IOError as e:
                    if e.errno != errno.EINTR:
                        raise
                interval -= time.time() - t
                if interval <= 0:
                    break

def func(signum, frame):
    # note: don't print in a signal handler.
    global g_sigint
    g_sigint = True
    #raise KeyboardInterrupt

signal.signal(signal.SIGINT, func)

g_kill = False
while True:
    g_sigint = False
    g_kill = not g_kill
    print('Running [%d]' % os.getpid())
    sleep(2)
    if g_kill:
        kill(os.getpid(), signal.SIGINT)
    if g_sigint:
        print('SIGINT')
    else:
        print('No SIGINT')
Discussion
Windows doesn't implement signals at the system level [*]. Microsoft's C runtime implements the six signals that are required by standard C: SIGINT, SIGABRT, SIGTERM, SIGSEGV, SIGILL, and SIGFPE.
SIGABRT and SIGTERM are implemented just for the current process. You can call the handler via C raise. For example (in Python 3.5):
>>> import signal, ctypes
>>> ucrtbase = ctypes.CDLL('ucrtbase')
>>> c_raise = ucrtbase['raise']
>>> foo = lambda *a: print('foo')
>>> signal.signal(signal.SIGTERM, foo)
<Handlers.SIG_DFL: 0>
>>> c_raise(signal.SIGTERM)
foo
0
SIGTERM is useless.
You also can't do much with SIGABRT using the signal module because the abort function kills the process once the handler returns, which happens immediately when using the signal module's internal handler (it trips a flag for the registered Python callable to be called in the main thread). For Python 3 you can instead use the faulthandler module. Or call the CRT's signal function via ctypes to set a ctypes callback as the handler.
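For reference, enabling faulthandler is a one-liner; it installs its own low-level handlers rather than going through the signal module:
import faulthandler
faulthandler.enable()  # dump Python tracebacks on SIGSEGV, SIGFPE, SIGABRT, SIGBUS, SIGILL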
The CRT implements SIGSEGV, SIGILL, and SIGFPE by setting a Windows structured exception handler for the corresponding Windows exceptions:
STATUS_ACCESS_VIOLATION         SIGSEGV
STATUS_ILLEGAL_INSTRUCTION      SIGILL
STATUS_PRIVILEGED_INSTRUCTION   SIGILL
STATUS_FLOAT_DENORMAL_OPERAND   SIGFPE
STATUS_FLOAT_DIVIDE_BY_ZERO     SIGFPE
STATUS_FLOAT_INEXACT_RESULT     SIGFPE
STATUS_FLOAT_INVALID_OPERATION  SIGFPE
STATUS_FLOAT_OVERFLOW           SIGFPE
STATUS_FLOAT_STACK_CHECK        SIGFPE
STATUS_FLOAT_UNDERFLOW          SIGFPE
STATUS_FLOAT_MULTIPLE_FAULTS    SIGFPE
STATUS_FLOAT_MULTIPLE_TRAPS     SIGFPE
The CRT's implementation of these signals is incompatible with Python's signal handling. The exception filter calls the registered handler and then returns EXCEPTION_CONTINUE_EXECUTION. However, Python's handler only trips a flag for the interpreter to call the registered callable sometime later in the main thread. Thus the errant code that triggered the exception will continue to trigger in an endless loop. In Python 3 you can use the faulthandler module for these exception-based signals.
That leaves SIGINT, to which Windows adds the non-standard SIGBREAK. Both console and non-console processes can raise these signals, but only a console process can receive them from another process. The CRT implements this by registering a console control event handler via SetConsoleCtrlHandler.
The console sends a control event by creating a new thread in an attached process that begins executing at CtrlRoutine in kernel32.dll or kernelbase.dll (undocumented). That the handler doesn't execute on the main thread can lead to synchronization problems (e.g. in the REPL or with input). Also, a control event won't interrupt the main thread if it's blocked while waiting on a synchronization object or waiting for synchronous I/O to complete. Care needs to be taken to avoid blocking in the main thread if it should be interruptible by SIGINT. Python 3 attempts to work around this by using a Windows event object, which can also be used in waits that should be interruptible by SIGINT.
When the console sends the process a CTRL_C_EVENT or CTRL_BREAK_EVENT, the CRT's handler calls the registered SIGINT or SIGBREAK handler, respectively. The SIGBREAK handler is also called for the CTRL_CLOSE_EVENT that the console sends when its window is closed. Python defaults to handling SIGINT by raising a KeyboardInterrupt in the main thread. However, the initial SIGBREAK disposition is the default CTRL_BREAK_EVENT handler, which simply calls ExitProcess(STATUS_CONTROL_C_EXIT).
You can send a control event to all processes attached to the current console via GenerateConsoleCtrlEvent. This can target a subset of processes that belong to a process group, or target group 0 to send the event to all processes attached to the current console.
Process groups aren't a well-documented aspect of the Windows API. There's no public API to query the group of a process, but every process in a Windows session belongs to a process group, even if it's just the wininit.exe group (services session) or winlogon.exe group (interactive session). A new group is created by passing the creation flag CREATE_NEW_PROCESS_GROUP when creating a new process. The group ID is the process ID of the created process. To my knowledge, the console is the only system that uses the process group, and that's just for GenerateConsoleCtrlEvent.
What the console does when the target ID isn't a process group ID is undefined and should not be relied on. If both the process and its parent process are attached to the console, then sending it a control event basically acts like the target is group 0. If the parent process isn't attached to the current console, then GenerateConsoleCtrlEvent fails, and os.kill calls TerminateProcess.

Weirdly, if you target the "System" process (PID 4) and its child process smss.exe (session manager), the call succeeds but nothing happens, except that the target is mistakenly added to the list of attached processes (i.e. GetConsoleProcessList). That's probably because the parent process is the "Idle" process, which, since it's PID 0, is implicitly accepted as the broadcast PGID. The parent-process rule also applies to non-console processes. Targeting a non-console child process does nothing -- except mistakenly corrupt the console process list by adding the unattached process. I hope it's clear that you should only send a control event either to group 0 or to a known process group that you created via CREATE_NEW_PROCESS_GROUP.
Don't rely on being able to send CTRL_C_EVENT to anything but group 0, since it's initially disabled in a new process group. It's not impossible to send this event to a new group, but the target process first has to enable CTRL_C_EVENT by calling SetConsoleCtrlHandler(NULL, FALSE).
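A sketch of that call via ctypes, to be run inside the child process (Windows only; assumes the child was started with CREATE_NEW_PROCESS_GROUP):
import ctypes

kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
# Passing NULL, FALSE re-enables Ctrl+C processing for this process.
if not kernel32.SetConsoleCtrlHandler(None, False):
    raise ctypes.WinError(ctypes.get_last_error())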
CTRL_BREAK_EVENT is all you can depend on since it can't be disabled. Sending this event is a simple way to gracefully kill a child process that was started with CREATE_NEW_PROCESS_GROUP, assuming it has a Windows CTRL_BREAK_EVENT or C SIGBREAK handler. If not, the default handler will terminate the process, setting the exit code to STATUS_CONTROL_C_EXIT. For example:
>>> import os, signal, subprocess
>>> p = subprocess.Popen('python.exe',
... stdin=subprocess.PIPE,
... creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
>>> os.kill(p.pid, signal.CTRL_BREAK_EVENT)
>>> STATUS_CONTROL_C_EXIT = 0xC000013A
>>> p.wait() == STATUS_CONTROL_C_EXIT
True
Note that CTRL_BREAK_EVENT wasn't sent to the current process, because the example targets the process group of the child process (including all of its child processes that are attached to the console, and so on). If the example had used group 0, the current process would have been killed as well since I didn't define a SIGBREAK handler. Let's try that, but with a handler set:
>>> ctrl_break = lambda *a: print('^BREAK')
>>> signal.signal(signal.SIGBREAK, ctrl_break)
<Handlers.SIG_DFL: 0>
>>> os.kill(0, signal.CTRL_BREAK_EVENT)
^BREAK
[*]
Windows has asynchronous procedure calls (APC) to queue a target function to a thread. See the article Inside NT's Asynchronous Procedure Call for an in-depth analysis of Windows APCs, especially to clarify the role of kernel-mode APCs. You can queue a user-mode APC to a thread via QueueUserAPC. They also get queued by ReadFileEx and WriteFileEx for the I/O completion routine.
A user-mode APC executes when the thread enters an alertable wait (e.g. WaitForSingleObjectEx or SleepEx with bAlertable as TRUE). Kernel-mode APCs, on the other hand, get dispatched immediately (when the IRQL is below APC_LEVEL). They're typically used by the I/O manager to complete asynchronous I/O Request Packets in the context of the thread that issued the request (e.g. copying data from the IRP to a user-mode buffer). See Waits and APCs for a table that shows how APCs affect alertable and non-alertable waits. Note that kernel-mode APCs don't interrupt a wait, but instead are executed internally by the wait routine.
Windows could implement POSIX-like signals using APCs, but in practice it uses other means for the same ends. For example:
Structured Exception Handling, e.g. __try, __except, __finally, __leave, RaiseException, AddVectoredExceptionHandler.
Kernel Dispatcher Objects (i.e. Synchronization Objects), e.g. SetEvent, SetWaitableTimer.
Window Messages, e.g. SendMessage (to a window procedure), PostMessage (to a thread's message queue to be dispatched to a window procedure), PostThreadMessage (to a thread's message queue), WM_CLOSE, WM_TIMER.
Window messages can be sent and posted to all threads that share the calling thread's desktop and that are at the same or lower integrity level. Sending a window message puts it in a system queue to call the window procedure when the thread calls PeekMessage or GetMessage. Posting a message adds it to the thread's message queue, which has a default quota of 10,000 messages. A thread with a message queue should have a message loop to process the queue via GetMessage and DispatchMessage.

Threads in a console-only process typically do not have a message queue, but the console host process, conhost.exe, obviously does. When the close button is clicked, or when the primary process of a console is killed via the task manager or taskkill.exe, a WM_CLOSE message is posted to the message queue of the console window's thread. The console in turn sends a CTRL_CLOSE_EVENT to all of its attached processes. If a process handles the event, it's given 5 seconds to exit gracefully before it's forcefully terminated.
For Python >= 3.8, use signal.raise_signal. It triggers the signal in the current process directly, avoiding the complications of os.kill's conflated handling of process IDs on Windows.
import os
import time
import signal

def func(signum, frame):
    print(f"You raised a SigInt! Signal handler called with signal {signum}")

signal.signal(signal.SIGINT, func)

while True:
    print(f"Running...{os.getpid()}")
    time.sleep(2)
    signal.raise_signal(signal.SIGINT)
Works great!

Python threads with os.system() calls. Main thread doesn't exit on ctrl+c

Please don't consider this a duplicate before reading. There are a lot of questions about multithreading and keyboard interrupts, but I didn't find any that consider os.system, and it looks like that's important.
I have a Python script which makes some external calls in worker threads.
I want it to exit when I press Ctrl+C, but it looks like the main thread ignores it.
Something like this:
from threading import Thread
import sys
import os

def run(i):
    while True:
        os.system("sleep 10")
        print i

def main():
    threads = []
    try:
        for i in range(0, 3):
            threads.append(Thread(target=run, args=(i,)))
            threads[i].daemon = True
            threads[i].start()
        for i in range(0, 3):
            while True:
                threads[i].join(10)
                if not threads[i].isAlive():
                    break
    except (KeyboardInterrupt, SystemExit):
        sys.exit("Interrupted by ctrl+c\n")

if __name__ == '__main__':
    main()
Surprisingly, it works fine if I change os.system("sleep 10") to time.sleep(10).
I'm not sure what operating system and shell you are using. I'll describe Mac OS X and Linux with zsh (bash/sh should behave similarly).
When you hit Ctrl+C, all programs running in the foreground in your current terminal receive SIGINT. In your case that's your main Python process and all the processes spawned by os.system.
The processes spawned by os.system then terminate. Usually, when a Python script receives SIGINT, it raises a KeyboardInterrupt exception, but your main process ignores SIGINT because of os.system(): Python's os.system() calls the standard C function system(), which makes the calling process ignore SIGINT (man Linux / man Mac OS X).
So none of your Python threads receives SIGINT; only the child processes get it.
When you remove the os.system() call, your Python process stops ignoring SIGINT, and you get a KeyboardInterrupt.
You can replace os.system("sleep 10") with subprocess.call(["sleep", "10"]). subprocess.call() doesn't make your process ignore SIGINT.
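Applied to the run() function from the question, the fix might look like this (Python 3 print used here):
import subprocess

def run(i):
    while True:
        # subprocess.call does not make the parent ignore SIGINT,
        # so Ctrl+C still raises KeyboardInterrupt in the main thread.
        subprocess.call(["sleep", "10"])
        print(i)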
I've had this same problem more times than I can count back when I was first learning Python multithreading.
Adding the sleep call within the loop makes your main thread block, which allows it to still hear and honor exceptions. What you want to do is use the Event class to set a flag in your child threads that serves as an exit condition to break execution on; set this flag in your KeyboardInterrupt handler, and just put the except clause for that in your main thread (see the sketch after the link below).
I'm not entirely certain what's going on with the different behavior between the Python-level sleep and the os.system one, but the remedy I'm offering should work for your desired end result. Just offering a guess: the os.system call probably blocks the interpreter itself in a different way?
Keep in mind that generally, in most situations where threads are required, the main thread is going to keep executing something, in which case the "sleeping" in your simple example would be implied.
http://docs.python.org/2/library/threading.html#event-objects
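A minimal sketch of that Event-based exit flag, assuming Python 3 (stop_event and the worker body are invented for illustration):
import subprocess
import threading

stop_event = threading.Event()

def run(i):
    while not stop_event.is_set():
        subprocess.call(["sleep", "10"])
        print(i)

def main():
    threads = [threading.Thread(target=run, args=(i,), daemon=True)
               for i in range(3)]
    for t in threads:
        t.start()
    try:
        # Joining with a timeout keeps the main thread interruptible.
        while any(t.is_alive() for t in threads):
            for t in threads:
                t.join(0.5)
    except KeyboardInterrupt:
        stop_event.set()  # workers exit at their next loop check

if __name__ == '__main__':
    main()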

Linux blocking signals to Python init

This is a follow-up to my other post, Installing signal handler with Python. In short, Linux blocks all signals to PID 1 (including SIGKILL) unless init has installed a signal handler for the particular signal, so as to prevent a kernel panic if someone sends a termination signal to PID 1. The issue I've been having is that the signal module in Python doesn't seem to install signal handlers in a way the system recognises. My Python init script was seemingly ignoring all signals, which I think were being blocked.
I seem to have found a solution: using ctypes to install the signal handlers with the signal() function in libc (in this case uClibc). Below is a Python-based test init. It opens a shell on TTY2, from which I can send signals to PID 1 for testing. It seems to work in the KVM I'm using for testing (I'm willing to share the VM with anyone interested).
Is this the best way around this issue? Is there a 'better' way to install the signal handlers without the signal module? (I am not at all concerned with portability.)
Is this a bug in Python?
#!/usr/bin/python
import os
import sys
import time
from ctypes import *

def SigHUP():
    print "Caught SIGHUP"
    return 0

def SigCHLD():
    print "Caught SIGCHLD"
    return 0

SIGFUNC = CFUNCTYPE(c_int)
SigHUPFunc = SIGFUNC(SigHUP)
SigCHLDFunc = SIGFUNC(SigCHLD)

libc = cdll.LoadLibrary('libc.so.0')
libc.signal(1, SigHUPFunc)    # 1 = SIGHUP
libc.signal(17, SigCHLDFunc)  # 17 = SIGCHLD

print "Mounting Proc: %s" % libc.mount(None, "/proc", "proc", 0, None)

print "forking for ash"
cpid = os.fork()
if cpid == 0:
    os.closerange(0, 4)
    sys.stdin = open('/dev/tty2', 'r')
    sys.stdout = open('/dev/tty2', 'w')
    sys.stderr = open('/dev/tty2', 'w')
    os.execv('/bin/ash', ('ash',))
print "ash started on tty2"

print "sleeping"
while True:
    time.sleep(0.01)
I did a bit of debugging under KVM and found that the kernel does deliver signals to PID 1 when the signal handlers are installed by the standard signal module. However, when the signal is received, "something" causes a clone of the process to be spawned, rather than printing the expected output.
Here is the strace output when I send HUP to the non-working init.sig-mod:
Which results in a new process running (pid 23) which is a clone of init.sig-mod:
I didn't have time to dig deeper into the cause, but this narrows things further. Probably something to do with Python's signal delivery logic (it registers a C hook which invokes your bytecode function when called). The ctypes technique bypasses this. The relevant Python source files are Python/pythonrun.c and Modules/signalmodule.c, in case you want to take a closer look.
Old Info -- I'm not sure this will solve your problem, but it might get you closer. I compared these different ways signal handlers are installed:
Installing a handler via Python's signal module.
Upstart's signal handlers.
Using ctypes to call the signal() syscall directly.
Some quick tests in C.
Both the ctypes-invoked signal() system call and Upstart's sigaction() syscalls set the SA_RESTART flag when the handler is registered. Setting this flag indicates that when a signal is received while the process is executing or blocking inside certain syscalls (read, write, wait, nanosleep, etc.), then after the signal handler completes the syscall should be automatically restarted. The application won't be aware of this.
When Python's signal module registers a handler, it zeros the SA_RESTART flag by calling siginterrupt(signum, 1). This says to the system: "when a system call is interrupted by a signal, after the signal handler completes, set errno to EINTR and return from the syscall". This leaves it up to the developer to handle EINTR and decide whether to restart the system call.
You can set the SA_RESTART flag by registering your signal this way:
import signal
signal.signal(signal.SIGHUP, handler)
signal.siginterrupt(signal.SIGHUP, False)
The issue turned out to be a compatibility problem with Python compiled against uClibc 0.9.31 with the old LinuxThreads implementation. Compiling against 0.9.32-rc3 and using NPTL fixed the issue.

Installing signal handler with Python

(There is a follow-up to this question here.)
I am working on writing a Python-based init system for Linux, but I'm having an issue getting signals to my Python init script. From the 'man 2 kill' page:
The only signals that can be sent to process ID 1, the init process, are those for which init has explicitly installed signal handlers.
In my Python-based init, I have a test function and a signal handler set up to call that function:
def SigTest(SIG, FRM):
    print "Caught SIGHUP!"

signal.signal(signal.SIGHUP, SigTest)
From another TTY (the init script executes sh on another tty), if I send a signal with kill -HUP 1, it is completely ignored and the text is never printed.
I found this issue because I wrote a reaping function for my Python init to reap its child processes as they die, but they all just became zombies; it took a while to figure out that Python was never getting the SIGCHLD signal. Just to ensure my environment is sane, I wrote a C program that forks and has the child send PID 1 a signal, and it did register.
How do I install a signal handler the system will acknowledge if signal.signal(SIG, FUNC) isn't working?
I'm going to try using ctypes to register my handler with C code and see if that works, but I'd rather have a pure Python answer if at all possible.
Ideas?
(I'm not a programmer, I'm really in over my head here :p)
Test code below...
import os
import sys
import time
import signal

def SigTest(SIG, FRM):
    print "SIGINT Caught"

print "forking for ash"
cpid = os.fork()
if cpid == 0:
    os.closerange(0, 4)
    sys.stdin = open('/dev/tty2', 'r')
    sys.stdout = open('/dev/tty2', 'w')
    sys.stderr = open('/dev/tty2', 'w')
    os.execv('/bin/ash', ('ash',))
print "ash started on tty2"

signal.signal(signal.SIGHUP, SigTest)

while True:
    time.sleep(5.0)
Signal handlers mostly work in Python, but there are some problems. One is that your handler won't run until the interpreter re-enters its bytecode loop; if your program is blocked in a C function, the signal handler is not called until it returns. You don't show the code where you are waiting. Are you using signal.pause()?
Another is that if you are in a system call, you will get an exception after the signal handler returns. You need to wrap all system calls with a retry handler (at least on Linux).
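A minimal retry-wrapper sketch (retry_on_eintr is a name invented here; note that Python 3.5+ retries EINTR automatically per PEP 475, so this mainly matters on older versions):
import errno

def retry_on_eintr(call, *args, **kwargs):
    # Retry a syscall-backed function that a signal handler interrupted.
    while True:
        try:
            return call(*args, **kwargs)
        except (OSError, IOError) as e:
            if e.errno != errno.EINTR:
                raise

# e.g. data = retry_on_eintr(os.read, fd, 4096)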
It's interesting that you are writing an init replacement... That's something like a process manager. The proctools code might interest you, since it does handle SIGCHLD.
By the way, this code:
import signal

def SigTest(SIG, FRM):
    print "SIGINT Caught"

signal.signal(signal.SIGHUP, SigTest)

while True:
    signal.pause()
Does work on my system.

Python: How to prevent subprocesses from receiving CTRL-C / Control-C / SIGINT

I am currently working on a wrapper for a dedicated server running in the shell. The wrapper spawns the server process via subprocess and observes and reacts to its output.
The dedicated server must be explicitly given a command to shut down gracefully. Thus, CTRL-C must not reach the server process.
If I capture the KeyboardInterrupt exception or overwrite the SIGINT handler in Python, the server process still receives the Ctrl-C and stops immediately.
So my question is:
How to prevent subprocesses from receiving CTRL-C / Control-C / SIGINT?
Somebody in the #python IRC channel (Freenode) helped me by pointing out the preexec_fn parameter of subprocess.Popen(...):
If preexec_fn is set to a callable object, this object will be called in the child process just before the child is executed. (Unix only)
Thus, the following code solves the problem (UNIX only):
import subprocess
import signal

def preexec_function():
    # Ignore the SIGINT signal by setting the handler to the standard
    # signal handler SIG_IGN.
    signal.signal(signal.SIGINT, signal.SIG_IGN)

my_process = subprocess.Popen(
    ["my_executable"],
    preexec_fn=preexec_function
)
Note: The signal is not actually prevented from reaching the subprocess. Instead, the preexec_fn above overwrites the signal's default handler so that the signal is ignored. Thus, this solution may not work if the subprocess overwrites the SIGINT handler again.
Another note: This solution works for all sorts of subprocesses, i.e. it is not restricted to subprocesses written in Python. For example, the dedicated server I am writing my wrapper for is in fact written in Java.
Combining some of the other answers will do the trick: no signal sent to the main app will be forwarded to the subprocess. os.setpgrp() puts the child in its own process group, and terminal-generated signals such as SIGINT are delivered only to the foreground process group, so they never reach the child.
import os
from subprocess import Popen

def preexec():
    # Don't forward signals: detach into a new process group.
    os.setpgrp()

Popen('whatever', preexec_fn=preexec)
You can do something like this to make it work on both Windows and Unix:
import signal
import subprocess
import sys

def pre_exec():
    # To ignore the CTRL+C signal in the new process
    signal.signal(signal.SIGINT, signal.SIG_IGN)

if sys.platform.startswith('win'):
    # https://msdn.microsoft.com/en-us/library/windows/desktop/ms684863(v=vs.85).aspx
    # CREATE_NEW_PROCESS_GROUP = 0x00000200 -> if this flag is specified,
    # CTRL+C signals will be disabled for the new process.
    my_sub_process = subprocess.Popen(["executable"], creationflags=0x00000200)
else:
    my_sub_process = subprocess.Popen(["executable"], preexec_fn=pre_exec)
After an hour of various attempts, this works for me:
process = subprocess.Popen(["someprocess"], creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP)
It's a solution for Windows.
Try setting SIGINT to be ignored before spawning the subprocess (reset it to default behavior afterward).
If that doesn't work, you'll need to read up on job control and learn how to put a process in its own background process group, so that ^C doesn't even cause the kernel to send the signal to it in the first place. (May not be possible in Python without writing C helpers.)
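For what it's worth, on Python 3.2+ (POSIX) the subprocess module can do this from pure Python: start_new_session=True makes the child call setsid(), giving it its own session and process group. A sketch, reusing the placeholder executable name from above:
import subprocess

# The child gets its own session and process group, so a terminal-generated
# SIGINT (delivered to the foreground process group) never reaches it.
proc = subprocess.Popen(["my_executable"], start_new_session=True)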
See also this older question.
