SIGTERM signal not received by python on shutdown command - python

I have been programming in Python on the Raspberry Pi for several months now, and I am trying to make my scripts "well behaved": wrap up (close files and make sure no writes to the SD card are being performed) upon receiving SIGTERM.
Following advice on SO (1, 2) I am able to handle SIGTERM if I kill the process manually (i.e. kill {process number}), but if I send the shutdown command (i.e. shutdown -t 30 now) my handler never gets called.
I also tried registering for all signals and checking which signal is sent on the shutdown event, but I am not getting any.
Here's simple example code:
import time
import signal
import sys

def myHandler(signum, frame):
    print "Signal #, ", signum
    sys.exit()

for i in [x for x in dir(signal) if x.startswith("SIG")]:
    try:
        signum = getattr(signal, i)
        signal.signal(signum, myHandler)
        print "Handler added for {}".format(i)
    except RuntimeError, m:
        print "Skipping %s" % i
    except ValueError:
        break

while True:
    print "goo"
    time.sleep(1)
Any ideas would be greatly appreciated .. =)

This code works for me on the Raspberry Pi; I can see the correct output in the file output.log after the restart:
import logging
import signal
import subprocess
import sys

logging.basicConfig(level=logging.WARNING,
                    filename='output.log',
                    format='%(message)s')

def quit():
    # cleaning code here
    logging.warning('exit')
    sys.exit(0)

def handler(signum=None, frame=None):
    quit()

# note: SIGKILL cannot be caught, so trying to register it would raise an error
for sig in [signal.SIGTERM, signal.SIGHUP, signal.SIGQUIT]:
    signal.signal(sig, handler)

def restart():
    command = '/sbin/shutdown -r now'
    process = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
    output = process.communicate()[0]
    logging.warning('%s' % output)

restart()
Maybe your terminal handles the signal before the Python script does, so you can't actually see anything. Try writing the output to a file (with the logging module, or however you like).

Related

CTRL_C_EVENT sent to child process kills parent process

I'm writing a script that runs a background process in parallel. When restarting the script I want to be able to kill the background process and exit it cleanly by sending it a CTRL_C_EVENT signal. For some reason though, sending the CTRL_C_EVENT signal to the child process also causes the same signal to be sent to the parent process. I suspect that the KeyboardInterrupt exception isn't being cleaned up after the child process gets it and is then caught by the main process.
I'm using Python version 2.7.1 and running on Windows Server 2012.
import multiprocessing
import time
import signal
import os

def backgroundProcess():
    try:
        while True:
            time.sleep(10)
    except KeyboardInterrupt:
        # exit cleanly
        return

def script():
    try:
        print "Starting function"
        # Kill all background processes
        for proc in multiprocessing.active_children():
            print "Killing " + str(proc) + " with PID " + str(proc.pid)
            os.kill(proc.pid, signal.CTRL_C_EVENT)
        print "Creating background process"
        newProc = multiprocessing.Process(target=backgroundProcess)
        print "Starting new background process"
        newProc.start()
        print "Process PID is " + str(newProc.pid)
    except KeyboardInterrupt:
        print "Unexpected keyboard interrupt"

def main():
    script()
    time.sleep(5)
    script()
I expect that the script() function should never be receiving a KeyboardInterrupt exception, but it is triggered the second time that the function is called. Why is this happening?
I'm still looking for an explanation as to why the issue occurs, but I'll post my (albeit somewhat hacky) workaround here in case it helps anyone else. Since the Ctrl+C gets propagated to the parent process (still not entirely sure why this happens), I'm going to just catch the exception when it arrives and do nothing.
Eryk suggested using an extra watchdog thread to handle terminating the extra process, but for my application this introduces extra complexity and seems a bit overkill for the rare case that I actually need to kill the background process. Most of the time the background process in my application will close itself cleanly when it's done.
I'm still open to suggestions for a better implementation that doesn't add too much complexity (more processes, threads, etc.).
Modified code here:
import multiprocessing
import time
import signal
import os

def backgroundProcess():
    try:
        while True:
            time.sleep(10)
    except KeyboardInterrupt:
        # Exit cleanly
        return

def script():
    print "Starting function"
    # Kill all background processes
    for proc in multiprocessing.active_children():
        print "Killing " + str(proc) + " with PID " + str(proc.pid)
        try:
            # Apparently sending a CTRL-C to the child also sends it to the parent??
            os.kill(proc.pid, signal.CTRL_C_EVENT)
            # Sleep until the parent receives the KeyboardInterrupt, then ignore it
            time.sleep(1)
        except KeyboardInterrupt:
            pass
    print "Creating background process"
    newProc = multiprocessing.Process(target=backgroundProcess)
    print "Starting new background process"
    newProc.start()
    print "Process PID is " + str(newProc.pid)

def main():
    script()
    time.sleep(5)
    script()

Why is Python signal handler not getting triggered?

In the code snippet below, I registered the signal handler with a call to signal.signal. However, though the process is killed after the timeout, the print or system statements inside the handler are not being executed. What am I doing wrong? I am running Python 2.7.15rc1 on Ubuntu 18.04, 64 bit.
import signal
import time
import os
import multiprocessing as mp

def handler(signal, frame):
    print "Alarmed"
    os.system('echo Alarmed >> /tmp/log_alarm')

def launch():
    signal.signal(signal.SIGALRM, handler)
    signal.alarm(5)
    print "Process launched"
    os.execv('/bin/cat', ['cat'])
    print "You should not see this"

p = mp.Process(target=launch)
p.start()
print "Process started"
p.join()
At that point you are out of Python territory. os.execv() ultimately causes the execve(2) syscall to be made, and as its man page states:
All process attributes are preserved during an execve(), except the
following:
The dispositions of any signals that are being caught are reset to
the default (signal(7)).
...
This does make sense: you would not want new code to run with the old image's handlers still registered (and it is unclear how that could even work, since the handler functions themselves are replaced along with the rest of the process image).
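The reset is easy to observe from Python 3: register a handler in the parent, then ask a freshly exec'd interpreter what disposition SIGALRM has. The parent's handler does not carry over across the exec (a small illustrative sketch, not from the original answer):

```python
import signal
import subprocess
import sys

def handler(signum, frame):
    pass

# the parent installs a custom SIGALRM handler...
signal.signal(signal.SIGALRM, handler)

# ...but an exec'd child interpreter sees the default disposition,
# because execve resets all caught signals to SIG_DFL
out = subprocess.check_output([sys.executable, '-c',
        'import signal; print(signal.getsignal(signal.SIGALRM))'])
print(out.decode().strip())  # prints the SIG_DFL disposition
```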
If you implemented cat-like behavior in Python and did not execve a new process, it would have (more or less) worked the way you expected:
import signal
import time
import os
import sys
import multiprocessing as mp

def handler(signal, frame):
    print "Alarmed"
    os.system('echo Alarmed >> /tmp/log_alarm')
    sys.exit(0)

def launch():
    signal.signal(signal.SIGALRM, handler)
    signal.alarm(5)
    print "Process launched"
    stdin = os.fdopen(0)
    stdout = os.fdopen(1, 'w')
    line = stdin.readline()
    while line:
        stdout.write(line)
        line = stdin.readline()
    print "You may end here if EOF was reached before being alarmed."

p = mp.Process(target=launch)
p.start()
print "Process started"
p.join()
Note: I've just hard-coded handling of stdin/stdout in the child process. And since you've mentioned Python 2.7, I've avoided using for line in stdin:.
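Alternatively, if you really need the exec'd /bin/cat, you can keep the alarm in a process that stays in Python, such as the parent, and let its handler terminate the child. A hedged Python 3 sketch of that approach (POSIX only; not from the original answer):

```python
import signal
import subprocess

# run the real cat; its exec'd image has default signal dispositions
proc = subprocess.Popen(['cat'], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)

def handler(signum, frame):
    proc.terminate()  # the parent still has its Python handler

signal.signal(signal.SIGALRM, handler)
signal.alarm(1)  # fire in the parent after one second

ret = proc.wait()  # interrupted by SIGALRM; the handler kills the child
print(ret)  # → -15 (child terminated by SIGTERM)
```

Here the exec reset does not matter, because the handler lives in the parent's unchanged process image.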

handling unexpected shutdown python

Is there any way to catch an exception for an unexpected shutdown of a program in Python?
Let's say I am running a Python script in a console. Instead of pressing Ctrl+C to stop the program, I just click the close button of the console. Is there any way to catch the error before the console closes?
like this:
try:
    print("hello")
except KeyboardInterrupt:
    exit()
except UnexpectedClose:
    print("unexpected shutoff")
    exit()
thanks in advance
Following the link I already put in the comment above, and reading here that a forced close sends SIGHUP, this modified version writes an output file when the terminal window is closed and the Python process is "killed".
Note, I just combined information (as cited) available on SE.
import signal
import time

class GracefulKiller:
    kill_now = False

    def __init__(self):
        signal.signal(signal.SIGINT, self.exit_gracefully)
        signal.signal(signal.SIGTERM, self.exit_gracefully)
        signal.signal(signal.SIGHUP, self.exit_gracefully)

    def exit_gracefully(self, signum, frame):
        with open('kill.txt', 'w') as fpntr:
            fpntr.write('killed')
        self.kill_now = True

if __name__ == '__main__':
    killer = GracefulKiller()
    while True:
        time.sleep(1)
        print("doing something in a loop ...")
        if killer.kill_now:
            break
    print("End of the program. I was killed gracefully :)")

How to handle console exit and object destruction

Given this code:
from time import sleep

class TemporaryFileCreator(object):
    def __init__(self):
        print 'create temporary file'
        # create_temp_file('temp.txt')

    def watch(self):
        try:
            print 'watching temporary file'
            while True:
                # add_a_line_in_temp_file('temp.txt', 'new line')
                sleep(4)
        except (KeyboardInterrupt, SystemExit), e:
            print 'deleting the temporary file..'
            # delete_temporary_file('temp.txt')
            sleep(3)
            print str(e)

t = TemporaryFileCreator()
t.watch()
during the t.watch(), I want to close this application in the console..
I tried using CTRL+C and it works:
However, if I click the exit button:
it doesn't work.. I checked many related questions about this but it seems that I cannot find the right answer..
What I want to do:
The console can be exited while the program is still running.. to handle that, when the exit button is pressed, I want to make a cleanup of the objects (deleting of created temporary files), rollback of temporary changes, etc..
Question:
how can I handle console exit?
how can I integrate it on object destructors (__exit__())
Is it even possible? (how about py2exe?)
Note: code will be compiled on py2exe.. "hopes that the effect is the same"
You may want to have a look at signals. When a *nix terminal is closed with a running process, this process receives a couple of signals. For instance this code waits for the SIGHUP hangup signal and writes a final message. This code works under OS X and Linux. I know you are specifically asking for Windows, but you might want to give it a shot or investigate what signals a Windows command prompt emits during shutdown.
import signal
import sys
from time import sleep

def signal_handler(signal, frame):
    with open('./log.log', 'w') as f:
        f.write('event received!')

signal.signal(signal.SIGHUP, signal_handler)
print('Waiting for the final blow...')
# signal.pause()  # does not work under Windows
sleep(10)  # so let us just wait here
Quote from the documentation:
On Windows, signal() can only be called with SIGABRT, SIGFPE, SIGILL, SIGINT, SIGSEGV, or SIGTERM. A ValueError will be raised in any other case.
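Because the set of usable signals varies by platform, cross-platform code often guards its registrations. A hedged Python 3 sketch of one way to do that (the register helper is a name invented here for illustration):

```python
import signal

def register(sig_name, handler):
    # getattr guards against names that don't exist on this platform
    # (e.g. SIGHUP on Windows); signal.signal can still refuse some
    # signals, which surfaces as ValueError or OSError
    sig = getattr(signal, sig_name, None)
    if sig is None:
        return False
    try:
        signal.signal(sig, handler)
        return True
    except (ValueError, OSError, RuntimeError):
        return False

ok = register('SIGTERM', lambda signum, frame: None)
print(ok)  # → True: SIGTERM is allowed on every platform
```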
Update:
Actually, the closest thing in Windows is win32api.SetConsoleCtrlHandler (doc). This was already discussed here:
When using win32api.setConsoleCtrlHandler(), I'm able to receive shutdown/logoff/etc events from Windows, and cleanly shut down my app.
And if Daniel's code still works, this might be a nice way to use both (signals and CtrlHandler) for cross-platform purposes:
import os, sys

def set_exit_handler(func):
    if os.name == "nt":
        try:
            import win32api
            win32api.SetConsoleCtrlHandler(func, True)
        except ImportError:
            version = ".".join(map(str, sys.version_info[:2]))
            raise Exception("pywin32 not installed for Python " + version)
    else:
        import signal
        signal.signal(signal.SIGTERM, func)

if __name__ == "__main__":
    def on_exit(sig, func=None):
        print "exit handler triggered"
        import time
        time.sleep(5)

    set_exit_handler(on_exit)
    print "Press to quit"
    raw_input()
    print "quit!"
If you use tempfile to create your temporary file, it will be automatically deleted when the Python process is killed.
Try it with:
>>> foo = tempfile.NamedTemporaryFile()
>>> foo.name
'c:\\users\\blah\\appdata\\local\\temp\\tmpxxxxxx'
Now check that the named file is there. You can write to and read from this file like any other.
Now kill the Python window and check that file is gone (it should be)
You can simply call foo.close() to delete it manually in your code.
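The cleanup behaviour is easy to check in a few lines (Python 3 shown; on POSIX the default delete-on-close applies as below):

```python
import os
import tempfile

f = tempfile.NamedTemporaryFile()  # delete-on-close is the default
path = f.name
exists_before = os.path.exists(path)  # the file is there while open
f.close()
exists_after = os.path.exists(path)  # closing removed it
print(exists_before, exists_after)  # → True False
```

Note that on Windows the file cannot be reopened by name while the handle is open unless you pass delete=False, so the convenience comes with a platform caveat.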

Why is signal.SIGTERM not dealt with properly in my main thread?

I have python code which runs continuously (collecting sensor data). It is supposed to be launched at boot using start-stop-daemon. However, I'd like to be able to kill the process gracefully, so I've started from the advice in the post How to process SIGTERM signal gracefully? and put my main loop in a separate thread. I'd like to be able to gracefully shut it down both when it is running as a daemon (the start-stop-daemon will send a kill signal) and when I launch it briefly for testing in a terminal myself (me pressing ctrl-c).
However, the signal handler doesn't seem to be called if I kill the process (even without using the thread, the "done (killed)" never ends up in the file I've redirected to). And when I press ctrl-c, the collecting just continues and keeps printing data in the terminal (or to the file I am redirecting to).
What am I doing wrong in the following code?
from threading import Thread
import time, sys, signal

shutdown_flag = False  # used for graceful shutdown

def main_loop():
    while not shutdown_flag:
        collect_data()  # contains some print "data" statements
        time.sleep(5)
    print "done (killed)"

def sighandler(signum, frame):
    print 'signal handler called with signal: %s' % signum
    global shutdown_flag
    shutdown_flag = True

def main(argv=None):
    signal.signal(signal.SIGTERM, sighandler)  # so we can handle kill gracefully
    signal.signal(signal.SIGINT, sighandler)   # so we can handle ctrl-c
    try:
        Thread(target=main_loop, args=()).start()
    except Exception, reason:
        print reason

if __name__ == '__main__':
    sys.exit(main(sys.argv))
You are terminating your main thread with this statement:
if __name__ == '__main__':
    sys.exit(main(sys.argv))
So your signal handler never gets to run. The signal handler belongs to the main thread, not the main_loop thread you created, and Python only delivers signals to the main thread. Once the main thread exits, there is no longer anything to call the handler.
You need something like this:
def sighandler(signum, frame):
    print 'signal handler called with signal: %s' % signum
    global shutdown_flag
    shutdown_flag = True
    sys.exit()  # make sure you add this so the main thread exits as well

if __name__ == '__main__':
    main(sys.argv)
    while 1:  # this will force your main thread to live until you terminate it
        time.sleep(1)
A simple test to see how many threads are running in your program is this:
def main_loop():
    while not shutdown_flag:
        collect_data()  # contains some print "data" statements
        time.sleep(5)
    import threading
    print threading.enumerate()
    print "done (killed)"
