Python ignores SIGINT in multithreaded programs - how to fix that?

I have Python 2.6 on Mac OS X and a multithreaded operation. The following test code works fine and shuts the app down on Ctrl-C:
import threading, time, os, sys, signal

def SigIntHandler( signum, frame ) :
    sys.exit( 0 )

signal.signal( signal.SIGINT, SigIntHandler )

class WorkThread( threading.Thread ) :
    def run( self ) :
        while True :
            time.sleep( 1 )

thread = WorkThread()
thread.start()

time.sleep( 1000 )
But if I change just one line, adding some real work to the worker thread, the app never terminates on Ctrl-C:
import threading, time, os, sys, signal

def SigIntHandler( signum, frame ) :
    sys.exit( 0 )

signal.signal( signal.SIGINT, SigIntHandler )

class WorkThread( threading.Thread ) :
    def run( self ) :
        while True :
            os.system( "svn up" ) # This is really slow and can fail.
            time.sleep( 1 )

thread = WorkThread()
thread.start()

time.sleep( 1000 )
Is it possible to fix this, or is Python simply not intended to be used with threading?

A couple of things which may be causing your problem:
The Ctrl-C is perhaps being caught by svn, which is ignoring it.
You are creating a non-daemon thread and then just exiting the process. This makes the process wait for the thread to exit - which it never will. You need to either make the thread a daemon or give it a way to terminate, and join() it before exiting. While it always seems to stop on my Linux system, Mac OS X behaviour may be different.
Python works well enough with threads :-)
Update: You could try using subprocess, setting up the child process so that file handles are not inherited, and setting the child's stdin to subprocess.PIPE.
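For illustration, a minimal sketch of that advice applied to the question's code - the stop event, the daemon flag, and the switch to subprocess.call are my additions, not something from the original:

import signal
import subprocess
import sys
import threading

stop_event = threading.Event()  # set on Ctrl-C to ask the worker to stop

def sig_int_handler(signum, frame):
    stop_event.set()
    sys.exit(0)  # raises SystemExit in the main thread

signal.signal(signal.SIGINT, sig_int_handler)

class WorkThread(threading.Thread):
    def run(self):
        while not stop_event.is_set():
            # unlike os.system(), subprocess.call() does not make the
            # calling process ignore SIGINT
            subprocess.call(["svn", "up"])
            stop_event.wait(1)  # interruptible replacement for time.sleep(1)

thread = WorkThread()
thread.daemon = True  # a daemon thread will not keep the process alive on exit
thread.start()
thread.join()  # Ctrl-C interrupts this join via the handler above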

You likely do not need threads at all.
Try using Python's subprocess module, or even Twisted's process support.
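For instance, a thread-free sketch (illustrative; it assumes svn is on PATH):

import subprocess
import time

while True:
    # Ctrl-C raises KeyboardInterrupt here as usual, because subprocess.call
    # does not make the parent ignore SIGINT the way os.system does
    subprocess.call(["svn", "up"])
    time.sleep(1)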

I'm not an expert on threads in Python, but a quick read of the docs leads to a few conclusions.
1) Calling os.system() spawns a new subshell and is not encouraged. The subprocess module should be used instead: http://docs.python.org/release/2.6.6/library/os.html?highlight=os.system#os.system
2) The threading module doesn't seem to give a whole lot of control over the threads; maybe try using the thread module - at least there is a thread.exit() function. Also, the threading docs say that dummy threads may be created, which are always alive and daemonic, and furthermore
"… the entire Python program exits when only daemon threads are left."
So I would imagine that the least you need to do is signal the currently running threads that they need to exit before exiting the main thread, or join them on Ctrl-C to let them finish (although this would obviously contradict the point of Ctrl-C). Or perhaps just using the subprocess module to spawn the svn up would do the trick, as in the sketch below.
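A rough sketch of that last option - spawning svn up via subprocess so the main thread keeps a handle it can terminate on Ctrl-C (illustrative only):

import subprocess
import sys

proc = None
try:
    while True:
        proc = subprocess.Popen(["svn", "up"])  # keep a handle on the child
        proc.wait()
except KeyboardInterrupt:
    if proc is not None and proc.poll() is None:
        proc.terminate()  # stop the in-flight svn process
    sys.exit(0)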

Related

Python subprocess hang when a lot of executable are called

I have a problem using Python to run Windows executables in parallel. I will explain my problem in more detail.
I was able to write some code that creates a number of threads equal to the number of cores. Each thread executes the following function, which starts an executable with subprocess.Popen().
The executables are unit tests for an application. The tests use the gtest library. As far as I know, they just read and write on the file system.
def _execute(self, test_file_path) -> None:
    test_path = self._get_test_path_without_extension(test_file_path)
    process = subprocess.Popen(test_path,
                               shell=False,
                               stdout=sys.stdout,
                               stderr=sys.stderr,
                               universal_newlines=True)
    try:
        process.communicate(timeout=TEST_TIMEOUT_IN_SECONDS)
        if process.returncode != 0:
            print(f'Test fail')
    except subprocess.TimeoutExpired:
        process.kill()
During execution, some of the processes hang and never end. I set a timeout as a workaround, but I am wondering why some of these applications never terminate. This blocks the execution of the Python code.
The following code shows the creation of the threads. The function _execute_tests just takes a test from the Queue (with the .get() function) and passes it to the function _execute(test_file_path).
# Piece of code used to spawn the threads
for i in range(int(core_num)):
    thread = threading.Thread(target=self._execute_tests,
                              args=(tests,),
                              daemon=True)
    threads.append(thread)
    thread.start()

for thread in threads:
    thread.join()
I have already tried to:
use subprocess.run, subprocess.call, and the other functions explained on the documentation page
use a larger buffer via the bufsize parameter
disable the buffer
move the stdout to a file per thread
move the stdout to subprocess.DEVNULL
remove the use of subprocess.communicate()
remove the use of threading
use multiprocessing
On my local machine with 16 cores / 64 GB RAM I can run 16 threads without problems. All of them always terminate. To reproduce the problem I need to increase the number of threads to 30/40.
On an Azure machine with 8 cores / 32 GB RAM the issue can be reproduced with just 8 threads in parallel.
If I run the executables from a bat file
for /r "." %%a in (*.exe) do start /B "" "%%~fa"
the problem never happens.
Does anyone have an idea of what the problem could be?

How to stop Python PyGame program from a thread [duplicate]

How can I exit my entire Python application from one of its threads? sys.exit() only terminates the thread in which it is called, so that is no help.
I would not like to use an os.kill() solution, as this isn't very clean.
Short answer: use os._exit.
Long answer with example:
I yanked and slightly modified a simple threading example from a tutorial on DevShed:
import threading, sys, os

theVar = 1

class MyThread ( threading.Thread ):
    def run ( self ):
        global theVar
        print 'This is thread ' + str ( theVar ) + ' speaking.'
        print 'Hello and good bye.'
        theVar = theVar + 1
        if theVar == 4:
            #sys.exit(1)
            os._exit(1)
        print '(done)'

for x in xrange ( 7 ):
    MyThread().start()
If you keep sys.exit(1) commented out, the script will die after the third thread prints out. If you use sys.exit(1) and comment out os._exit(1), the third thread does not print (done), and the program runs through all seven threads.
os._exit "should normally only be used in the child process after a fork()" -- and a separate thread is close enough to that for your purpose. Also note that there are several enumerated values listed right after os._exit in that manual page, and you should prefer those as arguments to os._exit instead of simple numbers like I used in the example above.
If all your threads except the main one are daemons, the best approach is generally thread.interrupt_main() -- any thread can use it to raise a KeyboardInterrupt in the main thread, which normally leads to a reasonably clean exit from the main thread (including finalizers in the main thread getting called, etc.).
Of course, if this results in some non-daemon thread keeping the whole process alive, you need to followup with os._exit as Mark recommends -- but I'd see that as the last resort (kind of like a kill -9;-) because it terminates things quite brusquely (finalizers not run, including try/finally blocks, with blocks, atexit functions, etc).
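A minimal sketch of the interrupt_main() approach (the worker and timings are illustrative; in Python 3 the module is spelled _thread):

import threading
import time
try:
    import thread              # Python 2
except ImportError:
    import _thread as thread   # Python 3

def worker():
    time.sleep(2)              # stand-in for real work
    thread.interrupt_main()    # raise KeyboardInterrupt in the main thread

t = threading.Thread(target=worker)
t.daemon = True
t.start()

try:
    while True:
        time.sleep(1)          # main thread keeps running until interrupted
except KeyboardInterrupt:
    print('main thread interrupted, exiting cleanly')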
Using thread.interrupt_main() may not help in some situations. KeyboardInterrupt is often used in command line applications to exit the current command or to clear the input line.
In addition, os._exit will kill the process immediately without running any finally blocks in your code, which may be dangerous (files and connections will not be closed for example).
The solution I've found is to register a signal handler in the main thread that raises a custom exception. Use the background thread to fire the signal.
import signal
import os
import threading
import time

class ExitCommand(Exception):
    pass

def signal_handler(signal, frame):
    raise ExitCommand()

def thread_job():
    time.sleep(5)
    os.kill(os.getpid(), signal.SIGUSR1)

signal.signal(signal.SIGUSR1, signal_handler)
threading.Thread(target=thread_job).start()  # thread will fire in 5 seconds

try:
    while True:
        user_input = raw_input('Blocked by raw_input loop ')
        # do something with 'user_input'
except ExitCommand:
    pass
finally:
    print('finally will still run')
Related questions:
Why does sys.exit() not exit when called inside a thread in Python?
Python: How to quit CLI when stuck in blocking raw_input?
The easiest way to exit the whole program is to terminate it using its process id (pid).
import os
import psutil

current_system_pid = os.getpid()
ThisSystem = psutil.Process(current_system_pid)
ThisSystem.terminate()
To install psutil: pip install psutil
On Linux you can use kill() and pass the current process's ID and the SIGINT signal to start the steps to exit the app.
import os
import signal

os.kill(os.getpid(), signal.SIGINT)

Python threads with os.system() calls. Main thread doesn't exit on ctrl+c

Please don't consider it a duplicate before reading: there are a lot of questions about multithreading and keyboard interrupts, but I didn't find any that consider os.system, and it looks like that is important.
I have a Python script which makes some external calls in worker threads.
I want it to exit if I press Ctrl+C, but it looks like the main thread ignores it.
Something like this:
from threading import Thread
import sys
import os

def run(i):
    while True:
        os.system("sleep 10")
        print i

def main():
    threads=[]
    try:
        for i in range(0, 3):
            threads.append(Thread(target=run, args=(i,)))
            threads[i].daemon=True
            threads[i].start()
        for i in range(0, 3):
            while True:
                threads[i].join(10)
                if not threads[i].isAlive():
                    break
    except(KeyboardInterrupt, SystemExit):
        sys.exit("Interrupted by ctrl+c\n")

if __name__ == '__main__':
    main()
Surprisingly, it works fine if I change os.system("sleep 10") to time.sleep(10).
I'm not sure which operating system and shell you are using. I describe Mac OS X and Linux with zsh (bash/sh should behave similarly).
When you hit Ctrl+C, all programs running in the foreground in your current terminal receive the signal SIGINT. In your case that is your main Python process and all processes spawned by os.system.
Processes spawned by os.system then terminate their execution. Usually, when a Python script receives SIGINT, it raises a KeyboardInterrupt exception, but your main process ignores SIGINT because of os.system(). Python's os.system() calls the standard C function system(), which makes the calling process ignore SIGINT (man Linux / man Mac OS X).
So none of your Python threads receives SIGINT; only the child processes get it.
When you remove the os.system() call, your Python process stops ignoring SIGINT, and you get the KeyboardInterrupt.
You can replace os.system("sleep 10") with subprocess.call(["sleep", "10"]). subprocess.call() doesn't make your process ignore SIGINT.
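Applied to the script above, the worker loop might look like this (a sketch):

import subprocess

def run(i):
    while True:
        # subprocess.call() leaves the parent's SIGINT handling untouched,
        # so Ctrl+C raises KeyboardInterrupt in the main thread as expected
        subprocess.call(["sleep", "10"])
        print(i)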
I've had this same problem more times than I can count back when I was first learning Python multithreading.
Adding the sleep call within the loop makes your main thread block, which allows it to still hear and honor exceptions. What you want to do is use the Event class to set an event in your child threads that serves as an exit flag to break execution on. You can set this flag in your KeyboardInterrupt handler; just put the except clause for that in your main thread. A sketch follows the link below.
I'm not entirely certain what is going on with the different behavior between the Python-specific sleep and the OS-called one, but the remedy I am offering should work for your desired end result. Just offering a guess: the OS-called one probably blocks the interpreter itself in a different way?
Keep in mind that generally, in most situations where threads are required, the main thread is going to keep executing something, in which case the "sleeping" in your simple example would be implied.
http://docs.python.org/2/library/threading.html#event-objects
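A rough sketch of the Event-based exit flag applied to the question's code (names and timeouts are illustrative):

import subprocess
import threading

stop = threading.Event()

def run(i):
    while not stop.is_set():
        subprocess.call(["sleep", "10"])
        print(i)

threads = [threading.Thread(target=run, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
try:
    for t in threads:
        while t.is_alive():
            t.join(1)   # short timeout keeps the main thread responsive to Ctrl+C
except KeyboardInterrupt:
    stop.set()          # ask workers to exit after their current command finishes
    for t in threads:
        t.join()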

Python losing control of subprocess?

I'm using a commercial application that uses Python as part of its scripting API. One of the functions provided is something called App.run(). When this function is called, it starts a new Java process that does the rest of the execution. (Unfortunately, I don't really know what it's doing under the hood as the supplied Python modules are .pyc files, and many of the Python functions are SWIG generated).
The trouble I'm having is that I'm building the App.run() call into a larger Python application that needs to do some guaranteed cleanup code (closing a database, etc.). Unfortunately, if the subprocess is interrupted with Ctrl+C, it aborts and returns to the command line without returning control to the main Python program. Thus, my cleanup code never executes.
So far I've tried:
Registering a function with atexit... doesn't work
Putting cleanup in a class __del__ destructor... doesn't work. (App.run() is inside the class)
Creating a signal handler for Ctrl+C in the main Python app... doesn't work
Putting App.run() in a Thread... results in a Memory Fault after the Ctrl+C
Putting App.run() in a Process (from multiprocessing)... doesn't work
Any ideas what could be happening?
This is just an outline - but something like this?
import os

cpid = os.fork()
if not cpid:
    # change stdio handles etc
    os.setsid()  # Probably not needed
    App.run()
    os._exit(0)

os.waitpid(cpid, 0)
# clean up here
(os.fork is *nix only)
The same idea could be implemented with subprocess in an OS-agnostic way. The idea is running App.run() in a child process and then waiting for the child process to exit, regardless of how the child process died. On POSIX, you could also trap SIGCHLD (child process death). I'm not a Windows guru, so if applicable and subprocess doesn't work, someone else will have to chime in here.
After App.run() is called, I'd be curious what the process tree looks like. It's possible it's running an exec and taking over the Python process space. If that's happening, creating a child process is the only way I can think of to trap it.
If try: App.run() finally: cleanup() doesn't work; you could try to run it in a subprocess:
import sys
from subprocess import call
rc = call([sys.executable, 'path/to/run_app.py'])
cleanup()
Or if you have the code in a string you could use -c option e.g.:
rc = call([sys.executable, '-c', '''import sys
print(sys.argv)
'''])
You could implement #tMC's suggestion using subprocess by adding a preexec_fn=os.setsid argument (note: no ()), though I don't see how creating a process group might help here. Or you could try the shell=True argument to run it in a separate shell.
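A sketch of that variant, reusing the hypothetical run_app.py wrapper from the answer above:

import os
import sys
from subprocess import call

# start the child in its own session, so the terminal's Ctrl+C (delivered to
# the foreground process group) is not delivered to the child directly
rc = call([sys.executable, 'path/to/run_app.py'], preexec_fn=os.setsid)
cleanup()  # runs regardless of how the child exited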
You might give another try to multiprocessing:
import multiprocessing as mp

if __name__ == "__main__":
    p = mp.Process(target=App.run)
    p.start()
    p.join()
    cleanup()
Are you able to wrap the App.Run() in a Try/Catch?
Something like:
try:
    App.Run()
except (KeyboardInterrupt, SystemExit):
    print "User requested an exit..."
    cleanup()

Python: How to prevent subprocesses from receiving CTRL-C / Control-C / SIGINT

I am currently working on a wrapper for a dedicated server running in the shell. The wrapper spawns the server process via subprocess and observes and reacts to its output.
The dedicated server must be explicitly given a command to shut down gracefully. Thus, CTRL-C must not reach the server process.
If I capture the KeyboardInterrupt exception or overwrite the SIGINT-handler in python, the server process still receives the CTRL-C and stops immediately.
So my question is:
How to prevent subprocesses from receiving CTRL-C / Control-C / SIGINT?
Somebody in the #python IRC-Channel (Freenode) helped me by pointing out the preexec_fn parameter of subprocess.Popen(...):
"If preexec_fn is set to a callable object, this object will be called in the child process just before the child is executed. (Unix only)"
Thus, the following code solves the problem (UNIX only):
import subprocess
import signal

def preexec_function():
    # Ignore the SIGINT signal by setting the handler to the standard
    # signal handler SIG_IGN.
    signal.signal(signal.SIGINT, signal.SIG_IGN)

my_process = subprocess.Popen(
    ["my_executable"],
    preexec_fn = preexec_function
)
Note: The signal is actually not prevented from reaching the subprocess. Instead, the preexec_fn above overwrites the signal's default handler so that the signal is ignored. Thus, this solution may not work if the subprocess overwrites the SIGINT handler again.
Another note: This solution works for all sorts of subprocesses, i.e. it is not restricted to subprocesses written in Python. For example, the dedicated server I am writing my wrapper for is in fact written in Java.
Combining some of the other answers will do the trick - no signal sent to the main app will be forwarded to the subprocess.
import os
from subprocess import Popen

def preexec():  # Don't forward signals.
    os.setpgrp()

Popen('whatever', preexec_fn = preexec)
You can do something like this to make it work on both Windows and Unix:
import subprocess
import signal
import sys

def pre_exec():
    # To ignore CTRL+C signal in the new process
    signal.signal(signal.SIGINT, signal.SIG_IGN)

if sys.platform.startswith('win'):
    # https://msdn.microsoft.com/en-us/library/windows/desktop/ms684863(v=vs.85).aspx
    # CREATE_NEW_PROCESS_GROUP=0x00000200 -> If this flag is specified, CTRL+C signals will be disabled
    my_sub_process = subprocess.Popen(["executable"], creationflags=0x00000200)
else:
    my_sub_process = subprocess.Popen(["executable"], preexec_fn=pre_exec)
After an hour of various attempts, this works for me:
process = subprocess.Popen(["someprocess"], creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP)
This is a solution for Windows.
Try setting SIGINT to be ignored before spawning the subprocess (reset it to default behavior afterward).
If that doesn't work, you'll need to read up on job control and learn how to put a process in its own background process group, so that ^C doesn't even cause the kernel to send the signal to it in the first place. (May not be possible in Python without writing C helpers.)
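A sketch of the first suggestion - ignore SIGINT around the spawn so the child inherits the ignored disposition, then restore the old handler in the parent ("my_executable" is the placeholder from the accepted answer):

import signal
import subprocess

old_handler = signal.signal(signal.SIGINT, signal.SIG_IGN)  # child inherits SIG_IGN
child = subprocess.Popen(["my_executable"])
signal.signal(signal.SIGINT, old_handler)  # parent handles Ctrl-C again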
See also this older question.
