I'm writing a script that runs a background process in parallel. When restarting the script I want to be able to kill the background process and have it exit cleanly by sending it a CTRL_C_EVENT signal. For some reason, though, sending the CTRL_C_EVENT signal to the child process also causes the same signal to be sent to the parent process. I suspect that the KeyboardInterrupt exception isn't being cleaned up after the child process gets it and is then caught by the main process.
I'm using Python version 2.7.1 and running on Windows Server 2012.
import multiprocessing
import time
import signal
import os

def backgroundProcess():
    try:
        while True:
            time.sleep(10)
    except KeyboardInterrupt:
        # Exit cleanly
        return

def script():
    try:
        print "Starting function"
        # Kill all background processes
        for proc in multiprocessing.active_children():
            print "Killing " + str(proc) + " with PID " + str(proc.pid)
            os.kill(proc.pid, signal.CTRL_C_EVENT)
        print "Creating background process"
        newProc = multiprocessing.Process(target=backgroundProcess)
        print "Starting new background process"
        newProc.start()
        print "Process PID is " + str(newProc.pid)
    except KeyboardInterrupt:
        print "Unexpected keyboard interrupt"

def main():
    script()
    time.sleep(5)
    script()

if __name__ == "__main__":
    main()  # the guard is required for multiprocessing on Windows
I expect that the script() function should never receive a KeyboardInterrupt exception, but it is raised the second time the function is called. Why is this happening?
I'm still looking for an explanation as to why the issue occurs, but I'll post my (albeit somewhat hacky) workaround here in case it helps anyone else. Since the Ctrl+C gets propagated to the parent process (still not entirely sure why this happens), I'm going to just catch the exception when it arrives and do nothing.
Eryk suggested using a watchdog thread to handle terminating the background process, but for my application this adds complexity and seems overkill for the rare case where I actually need to kill the background process. Most of the time the background process in my application will close itself cleanly when it's done.
I'm still open to suggestions for a better implementation that doesn't add too much complexity (more processes, threads, etc.).
Modified code here:
import multiprocessing
import time
import signal
import os

def backgroundProcess():
    try:
        while True:
            time.sleep(10)
    except KeyboardInterrupt:
        # Exit cleanly
        return

def script():
    print "Starting function"
    # Kill all background processes
    for proc in multiprocessing.active_children():
        print "Killing " + str(proc) + " with PID " + str(proc.pid)
        try:
            # Apparently sending a CTRL-C to the child also sends it to the parent??
            os.kill(proc.pid, signal.CTRL_C_EVENT)
            # Sleep until the parent receives the KeyboardInterrupt, then ignore it
            time.sleep(1)
        except KeyboardInterrupt:
            pass
    print "Creating background process"
    newProc = multiprocessing.Process(target=backgroundProcess)
    print "Starting new background process"
    newProc.start()
    print "Process PID is " + str(newProc.pid)

def main():
    script()
    time.sleep(5)
    script()

if __name__ == "__main__":
    main()
I am new to multithreading. While reading 'Programming Python' by Mark Lutz, I got stuck at this passage:
note that because of its simple-minded infinite loops, at least one of its threads may not die on a Ctrl-C on Windows; you may need to use Task Manager to kill the python.exe process running this script, or close this window, to exit
But as far as I know about threading, all threads terminate when the main thread exits. So why not in this code?
# anonymous pipes and threads, not processes; this version works on Windows
import os
import time
import threading

def child(pipe_out):
    try:
        zzz = 0
        while True:
            time.sleep(zzz)
            msg = ('Spam %03d\n' % zzz).encode()
            os.write(pipe_out, msg)
            zzz = (zzz + 1) % 5
    except KeyboardInterrupt:
        print("Child exiting")

def parent(pipe_in):
    try:
        while True:
            line = os.read(pipe_in, 32)
            print('Parent %d got [%s] at %s' % (os.getpid(), line, time.time()))
    except KeyboardInterrupt:
        print('Parent Exiting')

pipe_in, pipe_out = os.pipe()
threading.Thread(target=child, args=(pipe_out,)).start()
parent(pipe_in)
print("main thread exiting")
A Python process will end when there are no more running non-daemon threads. If you pass the daemon=True argument to threading.Thread you will notice different behavior in your program.
I suggest reading the docs for the threading module to learn more about what I'm talking about.
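For instance, a minimal sketch of the difference (the worker function here is only illustrative):

import threading
import time

def worker():
    while True:        # never finishes on its own
        time.sleep(1)

# Without daemon=True the interpreter would wait on this thread forever.
t = threading.Thread(target=worker, daemon=True)
t.start()
print("main thread exiting")  # with daemon=True, the process ends here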
I have a python 2.7 process running in the background on Windows 8.1.
Is there a way to gracefully terminate this process and perform cleanup on shutdown or log off?
Try using win32api.GenerateConsoleCtrlEvent.
I solved this for a multiprocessing python program here:
Gracefully Terminate Child Python Process On Windows so Finally clauses run
I tested this solution using subprocess.Popen, and it also works.
Here is a code example:
import time
import win32api
import win32con
from multiprocessing import Process

def foo():
    try:
        while True:
            print "Child process still working..."
            time.sleep(1)
    except KeyboardInterrupt:
        print "Child process: caught ctrl-c"

if __name__ == "__main__":
    p = Process(target=foo)
    p.start()
    time.sleep(2)
    print "sending ctrl c..."
    try:
        win32api.GenerateConsoleCtrlEvent(win32con.CTRL_C_EVENT, 0)
        while p.is_alive():
            print "Child process is still alive."
            time.sleep(1)
    except KeyboardInterrupt:
        print "Main process: caught ctrl-c"
I'm trying to see how multithreading works in order to use it in an automation project. I can run the threads, but I cannot find a way to exit both of them completely: the threads restart after each keyboard interrupt. Is there a way to exit both threads with a keyboard interrupt?
import thread
from time import sleep

# parameters when starting
temp_c = 32
T_hot = 30
T_cold = 27
interval_temp = 2

def ctrl_fan(temp_c, T_hot, interval_temp):
    while True:
        if temp_c >= T_hot:
            print 'refreshing'
        else:
            print ' fan stopped'
        sleep(interval_temp)
    print 'shutting everything off'

def ctrl_light(temp_c, T_cold, interval_temp):
    while True:
        if temp_c <= T_cold:
            print 'warming'
        else:
            print 'light stopped'
        sleep(interval_temp)
    print 'shutting everything off'

try:
    thread.start_new_thread(ctrl_fan, (temp_c, T_hot, interval_temp))
    sleep(1)
    thread.start_new_thread(ctrl_light, (temp_c, T_cold, interval_temp))
except (KeyboardInterrupt, SystemExit):
    thread.exit()
    print "Error: unable to start thread"
Sure,
First, I'd recommend using the slightly higher-level threading module instead of the thread module.
To start a thread with threading, use the following:
import threading
t = threading.Thread(target=ctrl_fan, args=(temp_c, T_hot, interval_temp))
t.start()
There are a few things you'll need to do to get the program to exit on a Ctrl-C interrupt.
First, you will want to set the threads to be daemons, so that they allow the program to exit when the main thread exits (t.daemon = True).
You will also want the main thread to wait for the threads to complete; you can use t.join() for this. However, join() won't raise a KeyboardInterrupt until the thread finishes. There is a workaround for this, though:
while t.is_alive():
    t.join(1)
Providing a timeout value gets around this.
I'd be tempted to pull this together into a subclass to get the behaviour you want:
import threading

class CustomThread(threading.Thread):
    def __init__(self, *args, **kwargs):
        threading.Thread.__init__(self, *args, **kwargs)
        self.daemon = True

    def join(self, timeout=None):
        if timeout is None:
            # Join in short intervals so a KeyboardInterrupt
            # can be delivered between attempts
            while self.is_alive():
                threading.Thread.join(self, 10)
        else:
            return threading.Thread.join(self, timeout)

t1 = CustomThread(target=ctrl_fan, args=(temp_c, T_hot, interval_temp))
t1.start()
t2 = CustomThread(target=ctrl_light, args=(temp_c, T_cold, interval_temp))
t2.start()
t1.join()
t2.join()
The explanation is, again, in the documentation (https://docs.python.org/2/library/thread.html):
Threads interact strangely with interrupts: the KeyboardInterrupt exception will be received by an arbitrary thread. (When the signal module is available, interrupts always go to the main thread.)
You'd certainly find answers on https://stackoverflow.com/, like:
Propagate system call interruptions in threads
I have been programming in Python on the Raspberry Pi for several months now, and I am trying to make my scripts "well behaved" and wrap up (close files and make sure no writes to SD are being performed) upon reception of SIGTERM.
Following advice on SO (1, 2) I am able to handle SIGTERM if I kill the process manually (i.e. kill {process number}) but if I send the shutdown command (i.e. shutdown -t 30 now) my handler never gets called.
I also tried registering for all signals and checking the signal being sent on the shutdown event, but I am not getting any.
Here's simple example code:
import time
import signal
import sys

def myHandler(signum, frame):
    print "Signal #, ", signum
    sys.exit()

for i in [x for x in dir(signal) if x.startswith("SIG")]:
    try:
        signum = getattr(signal, i)
        signal.signal(signum, myHandler)
        print "Handler added for {}".format(i)
    except RuntimeError, m:
        print "Skipping %s" % i
    except ValueError:
        break

while True:
    print "goo"
    time.sleep(1)
Any ideas will be greatly appreciated =)
This code works for me on the Raspberry Pi; I can see the correct output in the file output.log after the restart:
import logging
import signal
import subprocess
import sys

logging.basicConfig(level=logging.WARNING,
                    filename='output.log',
                    format='%(message)s')

def quit():
    # cleaning code here
    logging.warning('exit')
    sys.exit(0)

def handler(signum=None, frame=None):
    quit()

# SIGKILL cannot be caught, so it is left out of the list
for sig in [signal.SIGTERM, signal.SIGHUP, signal.SIGQUIT]:
    signal.signal(sig, handler)

def restart():
    command = '/sbin/shutdown -r now'
    process = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
    output = process.communicate()[0]
    logging.warning('%s' % output)

restart()
Maybe your terminal handles the signal before the Python script does, so you can't actually see anything. Try writing the output to a file (with the logging module, or however you like).
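For example, a minimal sketch that records the signal in a file instead of printing (the file name signals.log is arbitrary):

import logging
import signal
import sys
import time

logging.basicConfig(filename='signals.log', level=logging.WARNING,
                    format='%(asctime)s %(message)s')

def handler(signum, frame):
    logging.warning('caught signal %s', signum)
    sys.exit(0)

signal.signal(signal.SIGTERM, handler)

while True:
    time.sleep(1)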
I need to do the following in Python. I want to spawn a process (subprocess module?), and:
if the process ends normally, to continue exactly from the moment it terminates;
if, otherwise, the process "gets stuck" and doesn't terminate within (say) one hour, to kill it and continue (possibly giving it another try, in a loop).
What is the most elegant way to accomplish this?
The subprocess module will be your friend. Start the process to get a Popen object, then pass it to a function like this. Note that this only raises an exception on timeout. If desired, you can catch the exception and call the kill() method on the Popen object. (kill is new in Python 2.6, btw.)
import time

def wait_timeout(proc, seconds):
    """Wait for a process to finish, or raise an exception after timeout."""
    start = time.time()
    end = start + seconds
    interval = min(seconds / 1000.0, .25)

    while True:
        result = proc.poll()
        if result is not None:
            return result
        if time.time() >= end:
            raise RuntimeError("Process timed out")
        time.sleep(interval)
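Usage might look like this (a sketch; 'some-command' is a placeholder for whatever you actually spawn):

import subprocess

proc = subprocess.Popen(['some-command'])
try:
    rcode = wait_timeout(proc, 60 * 60)  # give it one hour
except RuntimeError:
    proc.kill()   # requires Python 2.6+
    proc.wait()   # reap the killed process
    # optionally loop back and give it another try here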
There are at least two ways to do this using psutil, as long as you know the process PID.
Assuming the process is created as such:
import subprocess
subp = subprocess.Popen(['progname'])
...you can get its creation time in a busy loop like this:
import psutil, time

TIMEOUT = 60 * 60  # 1 hour

p = psutil.Process(subp.pid)
while True:
    if (time.time() - p.create_time()) > TIMEOUT:
        p.kill()
        raise RuntimeError('timeout')
    time.sleep(5)
...or simply, you can do this:
import psutil

p = psutil.Process(subp.pid)
try:
    p.wait(timeout=60*60)
except psutil.TimeoutExpired:
    p.kill()
    raise
Also, while you're at it, you might be interested in the following extra APIs:
>>> p.status()
'running'
>>> p.is_running()
True
>>>
I had a similar question and found this answer. Just for completeness, I want to add one more way to terminate a hanging process after a given amount of time: the Python signal library.
https://docs.python.org/2/library/signal.html
From the documentation:
import signal, os

def handler(signum, frame):
    print 'Signal handler called with signal', signum
    raise IOError("Couldn't open device!")

# Set the signal handler and a 5-second alarm
signal.signal(signal.SIGALRM, handler)
signal.alarm(5)

# This open() may hang indefinitely
fd = os.open('/dev/ttyS0', os.O_RDWR)

signal.alarm(0)  # Disable the alarm
Since you wanted to spawn a new process anyway, this might not be the best solution for your problem, though.
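If you did want to combine it with a spawned process, a sketch might look like this (Unix only, since SIGALRM does not exist on Windows; 'some-command' is a placeholder):

import signal
import subprocess

def timeout_handler(signum, frame):
    raise RuntimeError("process timed out")

signal.signal(signal.SIGALRM, timeout_handler)

proc = subprocess.Popen(['some-command'])
signal.alarm(60 * 60)      # one hour
try:
    proc.wait()            # interrupted by SIGALRM if the process hangs
    signal.alarm(0)        # finished in time; disable the alarm
except RuntimeError:
    proc.kill()
    proc.wait()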
Another nice, passive way is to use a threading.Timer and set up a callback function.
import os
import signal
import subprocess
from threading import Timer

# execute the command
p = subprocess.Popen(command)

# save the proc object - either on a class (as in this example), or 'p' can be global
self.p = p

# configure and start the timer; kill_proc is a callback function,
# which can also live on the class or simply be global
t = Timer(seconds, self.kill_proc)
t.start()

# wait for the process to return
rcode = p.wait()
t.cancel()
If the process finishes in time, wait() returns and the code continues here; cancel() then stops the timer. If instead the timer runs out and executes kill_proc in a separate thread, wait() will also return here, and cancel() will do nothing. The value of rcode tells you whether we timed out or not. The simplest kill_proc (you can of course do anything extra there):
def kill_proc(self):
    os.kill(self.p.pid, signal.SIGTERM)
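Pulled together into a self-contained sketch (the function name run_with_timeout is mine, and I use Popen.kill() instead of os.kill for brevity):

import subprocess
from threading import Timer

def run_with_timeout(command, seconds):
    proc = subprocess.Popen(command)
    t = Timer(seconds, proc.kill)  # fires in a separate thread on timeout
    t.start()
    rcode = proc.wait()            # returns when the process ends, killed or not
    t.cancel()                     # no-op if the timer already fired
    return rcode                   # negative on POSIX if the process was killed

# e.g. run_with_timeout(['sleep', '10'], 2) returns -9 on POSIX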
Kudos to Peter Shinners for his nice suggestion about the subprocess module. I was using exec() before and did not have any control over the running time, or especially over terminating it. My simplest template for this kind of task is the following, where I just use the timeout parameter of subprocess.run() to monitor the running time. Of course you can get standard output and error as well if needed:
from subprocess import run, TimeoutExpired, CalledProcessError

for file in fls:  # fls is a list of script paths; f is an open log file
    try:
        run(["python3.7", file], check=True, timeout=7200)  # 2-hour timeout
        print("scraped :)", file)
    except TimeoutExpired:
        message = "Timeout :( !!!"
        print(message, file)
        f.write("{message} {file}\n".format(file=file, message=message))
    except CalledProcessError:
        message = "SOMETHING HAPPENED :( !!!, CHECK"
        print(message, file)
        f.write("{message} {file}\n".format(file=file, message=message))