How to gracefully terminate a Python process on Windows

I have a python 2.7 process running in the background on Windows 8.1.
Is there a way to gracefully terminate this process and perform cleanup on shutdown or log off?

Try using win32api.GenerateConsoleCtrlEvent.
I solved this for a multiprocessing Python program here:
Gracefully Terminate Child Python Process On Windows so Finally clauses run
I tested this solution using subprocess.Popen, and it also works.
Here is a code example:
import time
import win32api
import win32con
from multiprocessing import Process

def foo():
    try:
        while True:
            print("Child process still working...")
            time.sleep(1)
    except KeyboardInterrupt:
        print("Child process: caught ctrl-c")

if __name__ == "__main__":
    p = Process(target=foo)
    p.start()
    time.sleep(2)
    print("sending ctrl c...")
    try:
        win32api.GenerateConsoleCtrlEvent(win32con.CTRL_C_EVENT, 0)
        while p.is_alive():
            print("Child process is still alive.")
            time.sleep(1)
    except KeyboardInterrupt:
        print("Main process: caught ctrl-c")
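For reference, a stdlib-only variant is possible as well (my own sketch, not tested on the asker's exact setup): os.kill can deliver the same console-control event via signal.CTRL_C_EVENT, with no pywin32 dependency. Both constants exist only on Windows, so the sketch guards on the platform:

```python
import os
import signal
import sys
import time
from multiprocessing import Process

def child():
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        print("Child: caught ctrl-c")

if __name__ == "__main__":
    if sys.platform == "win32":
        p = Process(target=child)
        p.start()
        time.sleep(2)
        try:
            # The event is delivered to every process attached to this
            # console, including the sender, hence the except below.
            os.kill(p.pid, signal.CTRL_C_EVENT)
            p.join()
        except KeyboardInterrupt:
            print("Main: caught ctrl-c")
    else:
        # CTRL_C_EVENT only exists on Windows; elsewhere SIGINT
        # plays the same role.
        print("CTRL_C_EVENT is Windows-only")
```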

Related

Why is the process not terminated?

I am new to multithreading. While reading 'Programming Python' by Mark Lutz, I got stuck at this line:
note that because of its simple-minded infinite loops, at least one of
its threads may not die on a Ctrl-C on Windows you may need to use
Task Manager to kill the python.exe process running this script or
close this window to exit
But according to my limited knowledge of threading, all threads terminate when the main thread exits. So why not in this code?
# anonymous pipes and threads, not process; this version works on Windows
import os
import time
import threading

def child(pipe_out):
    try:
        zzz = 0
        while True:
            time.sleep(zzz)
            msg = ('Spam %03d\n' % zzz).encode()
            os.write(pipe_out, msg)
            zzz = (zzz + 1) % 5
    except KeyboardInterrupt:
        print("Child exiting")

def parent(pipe_in):
    try:
        while True:
            line = os.read(pipe_in, 32)
            print('Parent %d got [%s] at %s' % (os.getpid(), line, time.time()))
    except KeyboardInterrupt:
        print('Parent Exiting')

pipe_in, pipe_out = os.pipe()
threading.Thread(target=child, args=(pipe_out,)).start()
parent(pipe_in)
print("main thread exiting")
A Python process will end when there are no more running non-daemon threads. If you pass the daemon=True argument to threading.Thread you will notice different behavior in your program.
I suggest reading the docs for the threading module to learn more.
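The difference is easy to demonstrate. The sketch below (my own example, not from Lutz's book) launches a child interpreter that starts a single thread sleeping for 30 seconds, with daemon=True; because the thread is daemonic, the interpreter exits immediately instead of waiting out the sleep:

```python
import subprocess
import sys
import textwrap

# The child program starts one thread that sleeps for 30 seconds.
# Whether the interpreter waits for it on exit depends on `daemon`.
CHILD = textwrap.dedent("""
    import sys, threading, time
    t = threading.Thread(target=time.sleep, args=(30,),
                         daemon=(sys.argv[1] == "daemon"))
    t.start()
    print("main thread exiting")
""")

def run(mode, timeout):
    try:
        subprocess.run([sys.executable, "-c", CHILD, mode], timeout=timeout)
        return True   # interpreter exited within the timeout
    except subprocess.TimeoutExpired:
        return False  # interpreter was still waiting on the thread

# A daemon thread is abandoned at interpreter shutdown, so exit is
# fast; a non-daemon thread would keep the process alive for the
# full 30 seconds (so "nondaemon" mode would return False here).
print("daemon exits quickly:", run("daemon", timeout=10))
```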

CTRL_C_EVENT sent to child process kills parent process

I'm writing a script that runs a background process in parallel. When restarting the script I want to be able to kill the background process and exit it cleanly by sending it a CTRL_C_EVENT signal. For some reason though, sending the CTRL_C_EVENT signal to the child process also causes the same signal to be sent to the parent process. I suspect that the KeyboardInterrupt exception isn't being cleaned up after the child process gets it and is then caught by the main process.
I'm using Python version 2.7.1 and running on Windows Server 2012.
import multiprocessing
import time
import signal
import os

def backgroundProcess():
    try:
        while True:
            time.sleep(10)
    except KeyboardInterrupt:
        # exit cleanly
        return

def script():
    try:
        print "Starting function"
        # Kill all background processes
        for proc in multiprocessing.active_children():
            print "Killing " + str(proc) + " with PID " + str(proc.pid)
            os.kill(proc.pid, signal.CTRL_C_EVENT)
        print "Creating background process"
        newProc = multiprocessing.Process(target=backgroundProcess)
        print "Starting new background process"
        newProc.start()
        print "Process PID is " + str(newProc.pid)
    except KeyboardInterrupt:
        print "Unexpected keyboard interrupt"

def main():
    script()
    time.sleep(5)
    script()

if __name__ == "__main__":
    main()
I expect that the script() function should never be receiving a KeyboardInterrupt exception, but it is triggered the second time that the function is called. Why is this happening?
I'm still looking for an explanation as to why the issue occurs, but I'll post my (albeit somewhat hacky) workaround here in case it helps anyone else. Since the Ctrl+C gets propagated to the parent process (still not entirely sure why this happens), I'm going to just catch the exception when it arrives and do nothing.
Eryk suggested using an extra watchdog thread to handle terminating the extra process, but for my application this introduces extra complexity and seems a bit overkill for the rare case that I actually need to kill the background process. Most of the time the background process in my application will close itself cleanly when it's done.
I'm still open to suggestions for a better implementation that doesn't add too much complexity (more processes, threads, etc.).
Modified code here:
import multiprocessing
import time
import signal
import os

def backgroundProcess():
    try:
        while True:
            time.sleep(10)
    except KeyboardInterrupt:
        # Exit cleanly
        return

def script():
    print "Starting function"
    # Kill all background processes
    for proc in multiprocessing.active_children():
        print "Killing " + str(proc) + " with PID " + str(proc.pid)
        try:
            # Apparently sending a CTRL-C to the child also sends it to the parent??
            os.kill(proc.pid, signal.CTRL_C_EVENT)
            # Sleep until the parent receives the KeyboardInterrupt, then ignore it
            time.sleep(1)
        except KeyboardInterrupt:
            pass
    print "Creating background process"
    newProc = multiprocessing.Process(target=backgroundProcess)
    print "Starting new background process"
    newProc.start()
    print "Process PID is " + str(newProc.pid)

def main():
    script()
    time.sleep(5)
    script()

if __name__ == "__main__":
    main()
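A commonly suggested alternative, offered here only as a sketch rather than the asker's solution: start the background process with subprocess in its own process group (CREATE_NEW_PROCESS_GROUP) and signal it with CTRL_BREAK_EVENT, which can be targeted at the child's group so the event never reaches the parent. The child must then handle SIGBREAK rather than KeyboardInterrupt. All three constants are Windows-only, hence the platform guard:

```python
import signal
import subprocess
import sys
import time

# The hypothetical child handles SIGBREAK (the Windows signal behind
# CTRL_BREAK_EVENT) and exits cleanly.
CHILD = (
    "import signal, sys, time\n"
    "signal.signal(signal.SIGBREAK, lambda *a: sys.exit(0))\n"
    "while True: time.sleep(1)\n"
)

if sys.platform == "win32":
    # A child in its own process group does not share console-control
    # events with the parent.
    proc = subprocess.Popen(
        [sys.executable, "-c", CHILD],
        creationflags=subprocess.CREATE_NEW_PROCESS_GROUP,
    )
    time.sleep(1)
    # CTRL_BREAK_EVENT can be sent to the child's group alone, so the
    # parent never sees a KeyboardInterrupt.
    proc.send_signal(signal.CTRL_BREAK_EVENT)
    proc.wait()
    print("child exited with", proc.returncode)
else:
    print("CTRL_BREAK_EVENT is Windows-only")
```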

Why is Python3 daemon thread instantly closing in console?

This code works in IDLE3, but in a console (macOS, Windows, Linux) thread2 closes instantly if set to daemon. Is there any explanation for that? Is there also a workaround to properly have a daemon thread asking for user input?
import queue
import threading
import sys

def worker(q):
    _text = ''
    while _text == '':
        _text = q.get()
    print('[worker]input was ', _text)
    sys.exit()

def dialog(q):
    while True:
        try:
            _select = input('[dialog]enter text:')
            if _select != '':
                q.put(_select)
        except EOFError:
            pass
        except KeyboardInterrupt:
            print("bye")
            sys.exit(0)
        except Exception as e:
            print(e)
            sys.exit(1)
        if 'esc'.lower() in _select.lower():
            sys.exit()

q = queue.Queue()
thread1 = threading.Thread(target=worker, args=(q,))
thread2 = threading.Thread(target=dialog, args=(q,))
thread1.setDaemon(True)
thread2.setDaemon(True)
print('start asking')
thread1.start()
thread2.start()
Thanks for any hints on this issue.
Daemon threads die when the main thread exits. The example code exits directly after starting the two child threads, so they are killed almost immediately. To solve this, 'join' the threads back to the main thread; the main thread will then wait for the child threads to finish.
thread1.join()
thread2.join()
at the end of your file should solve this problem.
https://docs.python.org/3.5/library/threading.html#threading.Thread.join
Also, why do you want to run these threads as daemons?

How to execute code just before terminating the process in python?

This question concerns multiprocessing in Python. I want to execute some code when I terminate the process, or to be more specific, just before it is terminated. I'm looking for a solution that works like atexit.register does for a normal Python program.
I have a method worker which looks:
def worker():
    while True:
        print('work')
        time.sleep(2)
    return
I run it by:
proc = multiprocessing.Process(target=worker, args=())
proc.start()
My goal is to execute some extra code just before terminating it, which I do by:
proc.terminate()
Use signal handling and intercept SIGTERM:
import multiprocessing
import time
import sys
from signal import signal, SIGTERM

def before_exit(*args):
    print('Hello')
    sys.exit(0)  # don't forget to exit!

def worker():
    signal(SIGTERM, before_exit)
    time.sleep(10)

proc = multiprocessing.Process(target=worker, args=())
proc.start()
time.sleep(3)
proc.terminate()
This produces the desired output just before the subprocess terminates.

Exiting Python with a hung thread

When you import and use a package, that package can start non-daemon threads. Until those threads finish, Python cannot exit properly (e.g. with sys.exit(0)). For example, imagine that thread t comes from some package. When an unhandled exception occurs in the main thread, you want to terminate, but this won't exit immediately; it will wait 60 seconds until the thread terminates.
import sys
import time, threading

def main():
    t = threading.Thread(target=time.sleep, args=(60,))
    t.start()
    a = 5 / 0

if __name__ == '__main__':
    try:
        main()
    except:
        sys.exit(1)
So I came up with two options: replace sys.exit(1) with os._exit(1), or enumerate all threads and make them daemonic. Both seem to work, but which do you think is better? os._exit won't flush stdio buffers, while setting the daemon attribute on running threads seems like a hack and may not be guaranteed to work every time.
import sys
import time, threading

def main():
    t = threading.Thread(target=time.sleep, args=(60,))
    t.start()
    a = 5 / 0

if __name__ == '__main__':
    try:
        main()
    except:
        for t in threading.enumerate():
            if not t.daemon and t.name != "MainThread":
                t._daemonic = True
        sys.exit(1)
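For what it's worth, the lost-buffer concern with os._exit can be mitigated by flushing the streams by hand first. A sketch (run in a child interpreter so the hard exit is observable): the program starts the same 60-second thread, hits the unhandled exception, prints, flushes, and exits immediately with status 1 rather than waiting on the thread:

```python
import subprocess
import sys

# The program starts a 60-second non-daemon thread, hits an unhandled
# exception, flushes stdio by hand, and then exits at once with
# os._exit -- without waiting on the still-sleeping thread.
PROG = (
    "import os, sys, threading, time\n"
    "t = threading.Thread(target=time.sleep, args=(60,))\n"
    "t.start()\n"
    "try:\n"
    "    a = 5 / 0\n"
    "except ZeroDivisionError:\n"
    "    print('cleaning up')\n"
    "    sys.stdout.flush()  # os._exit skips buffer flushing\n"
    "    os._exit(1)\n"
)

result = subprocess.run([sys.executable, "-c", PROG],
                        capture_output=True, text=True, timeout=10)
print(result.stdout.strip(), result.returncode)
```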
