In the code snippet below, I registered the signal handler with a call to signal.signal. However, although the process is killed after the timeout, the print and os.system statements inside the handler are never executed. What am I doing wrong? I am running Python 2.7.15rc1 on Ubuntu 18.04, 64 bit.
import signal
import time
import os
import multiprocessing as mp

def handler(signal, frame):
    print "Alarmed"
    os.system('echo Alarmed >> /tmp/log_alarm')

def launch():
    signal.signal(signal.SIGALRM, handler)
    signal.alarm(5)
    print "Process launched"
    os.execv('/bin/cat', ['cat'])
    print "You should not see this"

p = mp.Process(target=launch)
p.start()
print "Process started"
p.join()
At that point you are out of Python territory. os.execv() ultimately causes the execve(2) syscall to be made, and as its man page states:
All process attributes are preserved during an execve(), except the
following:
The dispositions of any signals that are being caught are reset to
the default (signal(7)).
...
Which does make sense: you wouldn't really want the new program to run with the old handlers still registered for it (and it is hard to see how that could even work, since the handler code is replaced along with the rest of the process image).
If you implemented the cat-like behavior in Python and did not execve a new process, it would (more or less) work the way you expected:
import signal
import time
import os
import sys
import multiprocessing as mp

def handler(signal, frame):
    print "Alarmed"
    os.system('echo Alarmed >> /tmp/log_alarm')
    sys.exit(0)

def launch():
    signal.signal(signal.SIGALRM, handler)
    signal.alarm(5)
    print "Process launched"
    stdin = os.fdopen(0)
    stdout = os.fdopen(1, 'w')
    line = stdin.readline()
    while line:
        stdout.write(line)
        line = stdin.readline()
    print "You may end here if EOF was reached before being alarmed."

p = mp.Process(target=launch)
p.start()
print "Process started"
p.join()
Note: I've just hard-coded handling of stdin/stdout in the child process. And since you've mentioned Python 2.7, I've avoided using for line in stdin: because of its read-ahead buffering.
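If you want to drop the explicit pre-read, a hedged alternative (not part of the answer above) is the two-argument iter() idiom, which also avoids the Python 2 read-ahead buffering:

import os

stdin = os.fdopen(0)
stdout = os.fdopen(1, 'w')

# iter(stdin.readline, '') keeps calling readline() until it returns '' (EOF),
# without the read-ahead buffering that `for line in stdin:` has on Python 2.
for line in iter(stdin.readline, ''):
    stdout.write(line)
    stdout.flush()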
Related
Given this code:
from time import sleep

class TemporaryFileCreator(object):
    def __init__(self):
        print 'create temporary file'
        # create_temp_file('temp.txt')

    def watch(self):
        try:
            print 'watching temporary file'
            while True:
                # add_a_line_in_temp_file('temp.txt', 'new line')
                sleep(4)
        except (KeyboardInterrupt, SystemExit), e:
            print 'deleting the temporary file..'
            # delete_temporary_file('temp.txt')
            sleep(3)
            print str(e)

t = TemporaryFileCreator()
t.watch()
During t.watch(), I want to close this application from the console.
I tried using CTRL+C and it works. However, if I click the console window's exit button, it doesn't work. I checked many related questions about this but it seems that I cannot find the right answer.
What I want to do:
The console can be closed while the program is still running. To handle that, when the exit button is pressed I want to clean up the objects (delete the created temporary files), roll back temporary changes, etc.
Question:
How can I handle console exit?
How can I integrate it into object destructors (__exit__())?
Is it even possible? (How about py2exe?)
Note: code will be compiled on py2exe.. "hopes that the effect is the same"
You may want to have a look at signals. When a *nix terminal is closed with a running process, this process receives a couple of signals. For instance, this code waits for the SIGHUP hangup signal and writes a final message. This code works under OSX and Linux. I know you are specifically asking for Windows, but you might want to give it a shot or investigate what signals a Windows command prompt emits during shutdown.
import signal
import sys
from time import sleep

def signal_handler(signal, frame):
    with open('./log.log', 'w') as f:
        f.write('event received!')

signal.signal(signal.SIGHUP, signal_handler)

print('Waiting for the final blow...')
#signal.pause() # does not work under windows
sleep(10) # so let us just wait here
Quote from the documentation:
On Windows, signal() can only be called with SIGABRT, SIGFPE, SIGILL, SIGINT, SIGSEGV, or SIGTERM. A ValueError will be raised in any other case.
Update:
Actually, the closest thing in Windows is win32api.setConsoleCtrlHandler (doc). This was already discussed here:
When using win32api.setConsoleCtrlHandler(), I'm able to receive shutdown/logoff/etc events from Windows, and cleanly shut down my app.
And if Daniel's code still works, this might be a nice way to use both (signals and CtrlHandler) for cross-platform purposes:
import os, sys

def set_exit_handler(func):
    if os.name == "nt":
        try:
            import win32api
            win32api.SetConsoleCtrlHandler(func, True)
        except ImportError:
            version = ".".join(map(str, sys.version_info[:2]))
            raise Exception("pywin32 not installed for Python " + version)
    else:
        import signal
        signal.signal(signal.SIGTERM, func)

if __name__ == "__main__":
    def on_exit(sig, func=None):
        print "exit handler triggered"
        import time
        time.sleep(5)

    set_exit_handler(on_exit)
    print "Press to quit"
    raw_input()
    print "quit!"
If you use tempfile to create your temporary file, it will be automatically deleted when the Python process is killed.
Try it with:
>>> import tempfile
>>> foo = tempfile.NamedTemporaryFile()
>>> foo.name
'c:\\users\\blah\\appdata\\local\\temp\\tmpxxxxxx'
Now check that the named file is there. You can write to and read from this file like any other.
Now kill the Python window and check that file is gone (it should be)
You can simply call foo.close() to delete it manually in your code.
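A minimal sketch of the same idea in script form (the suffix and contents are just illustrative): the temporary file exists for the duration of the with block and is removed when the block exits, even if an exception propagates out of it.

import tempfile

# delete=True is the default, so closing the file -- which the with
# statement guarantees -- also removes it from disk.
with tempfile.NamedTemporaryFile(suffix='.txt') as tmp:
    tmp.write(b'new line\n')
    tmp.flush()
    print(tmp.name)  # path of the temporary file while it exists
# At this point the file has been closed and deleted.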
This should be very simple and I'm very surprised that I haven't been able to find this question answered already on Stack Overflow.
I have a daemon like program that needs to respond to the SIGTERM and SIGINT signals in order to work well with upstart. I read that the best way to do this is to run the main loop of the program in a separate thread from the main thread and let the main thread handle the signals. Then when a signal is received the signal handler should tell the main loop to exit by setting a sentinel flag that is routinely being checked in the main loop.
I've tried doing this but it is not working the way I expected. See the code below:
from threading import Thread
import signal
import time
import sys

stop_requested = False

def sig_handler(signum, frame):
    sys.stdout.write("handling signal: %s\n" % signum)
    sys.stdout.flush()
    global stop_requested
    stop_requested = True

def run():
    sys.stdout.write("run started\n")
    sys.stdout.flush()
    while not stop_requested:
        time.sleep(2)
    sys.stdout.write("run exited\n")
    sys.stdout.flush()

signal.signal(signal.SIGTERM, sig_handler)
signal.signal(signal.SIGINT, sig_handler)

t = Thread(target=run)
t.start()
t.join()

sys.stdout.write("join completed\n")
sys.stdout.flush()
I tested this in the following two ways:
1)
$ python main.py > output.txt&
[2] 3204
$ kill -15 3204
2)
$ python main.py
ctrl+c
In both cases I expect this written to the output:
run started
handling signal: 15
run exited
join completed
In the first case the program exits but all I see is:
run started
In the second case the SIGINT signal is seemingly ignored when ctrl+c is pressed and the program doesn't exit.
What am I missing here?
The problem is that, as explained in Execution of Python signal handlers:
A Python signal handler does not get executed inside the low-level (C) signal handler. Instead, the low-level signal handler sets a flag which tells the virtual machine to execute the corresponding Python signal handler at a later point (for example, at the next bytecode instruction).
…
A long-running calculation implemented purely in C (such as regular expression matching on a large body of text) may run uninterrupted for an arbitrary amount of time, regardless of any signals received. The Python signal handlers will be called when the calculation finishes.
Your main thread is blocked on threading.Thread.join, which ultimately means it's blocked in C on a pthread_join call. Of course that's not a "long-running calculation", it's a block on a syscall… but nevertheless, until that call finishes, your signal handler can't run.
And, while on some platforms pthread_join will fail with EINTR on a signal, on others it won't. On linux, I believe it depends on whether you select BSD-style or default siginterrupt behavior, but the default is no.
So, what can you do about it?
Well, I'm pretty sure the changes to signal handling in Python 3.3 actually changed the default behavior on Linux so you won't need to do anything if you upgrade; just run under 3.3+ and your code will work as you're expecting. At least it does for me with CPython 3.4 on OS X and 3.3 on Linux. (If I'm wrong about this, I'm not sure whether it's a bug in CPython or not, so you may want to raise it on python-list rather than opening an issue…)
On the other hand, pre-3.3, the signal module definitely doesn't expose the tools you'd need to fix this problem yourself. So, if you can't upgrade to 3.3, the solution is to wait on something interruptible, like a Condition or an Event. The child thread notifies the event right before it quits, and the main thread waits on the event before it joins the child thread. This is definitely hacky. And I can't find anything that guarantees it will make a difference; it just happens to work for me in various builds of CPython 2.7 and 3.2 on OS X and 2.6 and 2.7 on Linux…
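For illustration, here is a minimal sketch of that pre-3.3 workaround (the names are mine, not from the answer): the worker sets an Event just before it quits, and the main thread waits on that Event with a timeout, which in CPython 2.7 is a polling loop that gives the Python-level signal handler a chance to run.

import signal
import threading
import time

stop_requested = threading.Event()   # set by the signal handler
finished = threading.Event()         # set by the worker just before it exits

def sig_handler(signum, frame):
    stop_requested.set()

def run():
    while not stop_requested.is_set():
        time.sleep(2)
    finished.set()   # notify the main thread before quitting

signal.signal(signal.SIGTERM, sig_handler)
signal.signal(signal.SIGINT, sig_handler)

t = threading.Thread(target=run)
t.start()

# Waiting with a timeout keeps the main thread in an interruptible loop,
# so SIGTERM/SIGINT are handled promptly instead of being deferred by join().
while not finished.is_set():
    finished.wait(1)
t.join()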
abarnert's answer was spot on. I'm still using Python 2.7, however. In order to solve this problem for myself I wrote an InterruptableThread class.
Right now it doesn't allow passing additional arguments to the thread target, and join doesn't accept a timeout parameter either. This is just because I don't need those features; you can add them if you want. You will probably want to remove the output statements if you use this yourself. They are just there as a way of commenting and testing.
import threading
import signal
import sys

class InvalidOperationException(Exception):
    pass

# noinspection PyClassHasNoInit
class GlobalInterruptableThreadHandler:
    threads = []
    initialized = False

    @staticmethod
    def initialize():
        signal.signal(signal.SIGTERM, GlobalInterruptableThreadHandler.sig_handler)
        signal.signal(signal.SIGINT, GlobalInterruptableThreadHandler.sig_handler)
        GlobalInterruptableThreadHandler.initialized = True

    @staticmethod
    def add_thread(thread):
        if threading.current_thread().name != 'MainThread':
            raise InvalidOperationException("InterruptableThread objects may only be started from the Main thread.")
        if not GlobalInterruptableThreadHandler.initialized:
            GlobalInterruptableThreadHandler.initialize()
        GlobalInterruptableThreadHandler.threads.append(thread)

    @staticmethod
    def sig_handler(signum, frame):
        sys.stdout.write("handling signal: %s\n" % signum)
        sys.stdout.flush()
        for thread in GlobalInterruptableThreadHandler.threads:
            thread.stop()
        GlobalInterruptableThreadHandler.threads = []

class InterruptableThread:
    def __init__(self, target=None):
        self.stop_requested = threading.Event()
        self.t = threading.Thread(target=target, args=[self]) if target else threading.Thread(target=self.run)

    def run(self):
        pass

    def start(self):
        GlobalInterruptableThreadHandler.add_thread(self)
        self.t.start()

    def stop(self):
        self.stop_requested.set()

    def is_stop_requested(self):
        return self.stop_requested.is_set()

    def join(self):
        try:
            while self.t.is_alive():
                self.t.join(timeout=1)
        except (KeyboardInterrupt, SystemExit):
            self.stop_requested.set()
            self.t.join()
        sys.stdout.write("join completed\n")
        sys.stdout.flush()
The class can be used two different ways. You can sub-class InterruptableThread:
import time
import sys
from interruptable_thread import InterruptableThread

class Foo(InterruptableThread):
    def __init__(self):
        InterruptableThread.__init__(self)

    def run(self):
        sys.stdout.write("run started\n")
        sys.stdout.flush()
        while not self.is_stop_requested():
            time.sleep(2)
        sys.stdout.write("run exited\n")
        sys.stdout.flush()

foo = Foo()
foo2 = Foo()
foo.start()
foo2.start()
foo.join()
foo2.join()
sys.stdout.write("all exited\n")
sys.stdout.flush()
Or you can use it more like the way threading.Thread works. The run method has to take the InterruptableThread object as a parameter, though.
import time
import sys
from interruptable_thread import InterruptableThread

def run(t):
    sys.stdout.write("run started\n")
    sys.stdout.flush()
    while not t.is_stop_requested():
        time.sleep(2)
    sys.stdout.write("run exited\n")
    sys.stdout.flush()

t1 = InterruptableThread(run)
t2 = InterruptableThread(run)
t1.start()
t2.start()
t1.join()
t2.join()
sys.stdout.write("all exited\n")
sys.stdout.flush()
Do with it what you will.
I faced the same problem here: signal not handled when multiple threads join. After reading abarnert's answer, I switched to Python 3 and it solved the problem. But I didn't want to change my whole program to Python 3, so I fixed my program by not calling thread join() until after the signal is received. Below is my code.
It is not very good, but it solved my problem in Python 2.7. My question was marked as a duplicate, so I put my solution here.
import threading, signal, time, os

RUNNING = True
threads = []

def monitoring(tid, itemId=None, threshold=None):
    global RUNNING
    while(RUNNING):
        print "PID=", os.getpid(), ";id=", tid
        time.sleep(2)
    print "Thread stopped:", tid

def handler(signum, frame):
    print "Signal is received:" + str(signum)
    global RUNNING
    RUNNING = False
    #global threads

if __name__ == '__main__':
    signal.signal(signal.SIGUSR1, handler)
    signal.signal(signal.SIGUSR2, handler)
    signal.signal(signal.SIGALRM, handler)
    signal.signal(signal.SIGINT, handler)
    signal.signal(signal.SIGQUIT, handler)
    print "Starting all threads..."
    thread1 = threading.Thread(target=monitoring, args=(1,), kwargs={'itemId': '1', 'threshold': 60})
    thread1.start()
    threads.append(thread1)
    thread2 = threading.Thread(target=monitoring, args=(2,), kwargs={'itemId': '2', 'threshold': 60})
    thread2.start()
    threads.append(thread2)
    while(RUNNING):
        print "Main program is sleeping."
        time.sleep(30)
    for thread in threads:
        thread.join()
    print "All threads stopped."
I have been programming in Python on the Raspberry Pi for several months now and I am trying to make my scripts "well behaved": wrap up (close files and make sure no writes to the SD card are being performed) upon reception of SIGTERM.
Following advice on SO (1, 2) I am able to handle SIGTERM if I kill the process manually (i.e. kill {process number}), but if I send the shutdown command (i.e. shutdown -t 30 now) my handler never gets called.
I also tried registering for all signals and checking which signal is being sent on the shutdown event, but I am not getting any.
Here's some simple example code:
import time
import signal
import sys

def myHandler(signum, frame):
    print "Signal #, ", signum
    sys.exit()

for i in [x for x in dir(signal) if x.startswith("SIG")]:
    try:
        signum = getattr(signal, i)
        signal.signal(signum, myHandler)
        print "Handler added for {}".format(i)
    except RuntimeError, m:
        print "Skipping %s" % i
    except ValueError:
        break

while True:
    print "goo"
    time.sleep(1)
Any ideas will be greatly appreciated .. =)
This code works for me on the Raspberry Pi; I can see the correct output in the file output.log after the restart:
import logging
import signal
import subprocess
import sys

logging.basicConfig(level=logging.WARNING,
                    filename='output.log',
                    format='%(message)s')

def quit():
    # cleaning code here
    logging.warning('exit')
    sys.exit(0)

def handler(signum=None, frame=None):
    quit()

# Note: SIGKILL cannot be caught, so it is not registered here.
for sig in [signal.SIGTERM, signal.SIGHUP, signal.SIGQUIT]:
    signal.signal(sig, handler)

def restart():
    command = '/sbin/shutdown -r now'
    process = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
    output = process.communicate()[0]
    logging.warning('%s' % output)

restart()
Maybe your terminal handles the signal before the Python script does, so you can't actually see anything. Try writing the output to a file (with the logging module, or however you like).
On a Linux machine I am running a Python script which creates a child process using subprocess.check_output(), as follows:
subprocess.check_output(["ls", "-l"], stderr=subprocess.STDOUT)
The problem is that even if the parent process dies, the child is still running.
Is there any way I can kill the child process as well when the parent dies?
Yes, you can achieve this by two methods. Both of them require you to use Popen instead of check_output. The first is a simpler method, using try..finally, as follows:
import subprocess
from contextlib import contextmanager

@contextmanager
def run_and_terminate_process(*args, **kwargs):
    try:
        p = subprocess.Popen(*args, **kwargs)
        yield p
    finally:
        p.terminate()  # send sigterm, or ...
        p.kill()       # send sigkill

def main():
    with run_and_terminate_process(args) as running_proc:
        # Your code here, such as running_proc.stdout.readline()
        pass
This will catch sigint (keyboard interrupt) and sigterm, but not sigkill (if you kill your script with -9).
The other method is a bit more complex, and uses ctypes' prctl PR_SET_PDEATHSIG. The system will send a signal to the child once the parent exits for any reason (even sigkill).
import signal
import ctypes
import subprocess

libc = ctypes.CDLL("libc.so.6")

def set_pdeathsig(sig=signal.SIGTERM):
    def callable():
        return libc.prctl(1, sig)  # 1 == PR_SET_PDEATHSIG
    return callable

p = subprocess.Popen(args, preexec_fn=set_pdeathsig(signal.SIGTERM))
Your problem is with using subprocess.check_output - you are correct, you can't get the child PID using that interface. Use Popen instead:
import subprocess

proc = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# Here you can get the PID
global child_pid
child_pid = proc.pid

# Now we can wait for the child to complete
(output, error) = proc.communicate()
if error:
    print "error:", error
print "output:", output
To make sure you kill the child on exit:
import os
import signal

def kill_child():
    if child_pid is None:
        pass
    else:
        os.kill(child_pid, signal.SIGTERM)

import atexit
atexit.register(kill_child)
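One caveat worth adding (my note, not part of the answer above): atexit handlers only run on a normal interpreter exit, so if the parent is terminated by an unhandled SIGTERM, kill_child never runs. A minimal sketch (the _terminate name is mine; kill_child and child_pid come from the snippet above) is to turn SIGTERM into a normal exit:

import signal
import sys

def _terminate(signum, frame):
    # Exiting normally lets atexit run kill_child before the parent goes away.
    sys.exit(0)

signal.signal(signal.SIGTERM, _terminate)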
I don't know the specifics, but the best way is still to catch errors (and perhaps even all of them) with the signal module and terminate any remaining processes there.
import signal
import sys
import subprocess
import os

def signal_handler(signal, frame):
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)
a = subprocess.check_output(["ls", "-l"], stderr=subprocess.STDOUT)

while 1:
    pass  # Press Ctrl-C (breaks the application and is caught by signal_handler())
This is just a mockup; you'd need to catch more than just SIGINT, but the idea might get you started, and you'd still need to keep track of the spawned processes somehow.
http://docs.python.org/2/library/os.html#os.kill
http://docs.python.org/2/library/subprocess.html#subprocess.Popen.pid
http://docs.python.org/2/library/subprocess.html#subprocess.Popen.kill
I'd recommend rewriting a personalized version of check_output, because as I just realized, check_output is really just for simple debugging etc., since you can't interact with it much during execution.
Rewrite of check_output:
from subprocess import Popen, PIPE, STDOUT
from time import sleep, time

def checkOutput(cmd):
    a = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
    print(a.pid)
    start = time()
    # Poll while the process is still running and the 30 sec grace period
    # has not expired yet.
    while a.poll() is None and time() - start <= 30:
        sleep(0.25)
    if a.poll() is None:
        print('Still running, killing')
        a.kill()
    else:
        print('exit code:', a.poll())
    output = a.stdout.read()
    a.stdout.close()
    a.stdin.close()
    return output
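A short usage sketch (the command is only an example):

# Runs the command through the grace-period wrapper above and prints its output.
print(checkOutput('ls -l'))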
And do whatever you'd like with it; perhaps store the active executions in a temporary variable and kill them upon exit with signal or other means of intercepting errors/shutdowns of the main loop.
In the end, you still need to catch terminations in the main application in order to safely kill any children; the best way to approach this is with try & except or signal.
As of Python 3.2 there is a ridiculously simple way to do this:
from subprocess import Popen
with Popen(["sleep", "60"]) as process:
    print(f"Just launched server with PID {process.pid}")
I think this will be best for most use cases because it's simple and portable, and it avoids any dependence on global state.
If this solution isn't powerful enough, then I would recommend checking out the other answers and discussion on this question or on Python: how to kill child process(es) when parent dies?, as there are a lot of neat ways to approach the problem that provide different trade-offs around portability, resilience, and simplicity. 😊
Manually you could do this:
ps aux | grep <process name>
get the PID (second column) and
kill -9 <PID>
-9 is to force killing it
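If you want to do the same from Python rather than the shell, a minimal sketch (the PID value is just a placeholder) could use os.kill:

import os
import signal

pid = 12345  # placeholder: the PID found via `ps aux | grep <process name>`
os.kill(pid, signal.SIGKILL)  # equivalent of `kill -9 <PID>`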
In Python 2.7 on Windows, according to the documentation, you can send a CTRL_C_EVENT
(Python 2.7 Subprocess Popen.send_signal documentation).
However, when I tried it I did not receive the expected keyboard interrupt in the subprocess.
This is the sample code for the parent process:
# FILE : parentProcess.py
import subprocess
import time
import signal

CREATE_NEW_PROCESS_GROUP = 512

process = subprocess.Popen(['python', '-u', 'childProcess.py'],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT,
                           universal_newlines=True,
                           creationflags=CREATE_NEW_PROCESS_GROUP)
print "pid = ", process.pid

index = 0
maxLoops = 15
while index < maxLoops:
    index += 1
    # Send one message every 0.5 seconds
    time.sleep(0.5)
    # Send data to the subprocess
    process.stdin.write('Bar\n')
    # Read data from the subprocess
    temp = process.stdout.readline()
    print temp,
    if (index == 10):
        # Send Keyboard Interrupt
        process.send_signal(signal.CTRL_C_EVENT)
This is the sample code for the child process:
# FILE : childProcess.py
import sys

while True:
    try:
        # Get data from main process
        temp = sys.stdin.readline()
        # Write data out
        print 'Foo ' + temp,
    except KeyboardInterrupt:
        print "KeyboardInterrupt"
If I run the file parentProcess.py I expect to get "Foo Bar" ten times then a "KeyboardInterrupt" followed by "Foo Bar" 4 times but I get "Foo Bar" 15 times instead.
Is there a way to get the CTRL_C_EVENT to behave as a keyboard interrupt just as SIGINT behaves in Linux?
After doing some reading I found some information that seems to contradict the Python documentation regarding CTRL_C_EVENT; in particular, it says:
CTRL_C_EVENT (0): Generates a CTRL+C signal. This signal cannot be generated for process groups.
The following site provides more information about creation flags:
Process Creation Flags.
This method of signal handling by subprocesses worked for me on both Linux and Windows 2008, both using Python 2.7.2, but it uses Ctrl-Break instead of Ctrl-C. See the note about process groups and Ctrl-C in http://msdn.microsoft.com/en-us/library/ms683155%28v=vs.85%29.aspx.
catcher.py:
import os
import signal
import sys
import time

def signal_handler(signal, frame):
    print 'catcher: signal %d received!' % signal
    raise Exception('catcher: i am done')

if hasattr(os.sys, 'winver'):
    signal.signal(signal.SIGBREAK, signal_handler)
else:
    signal.signal(signal.SIGTERM, signal_handler)

print 'catcher: started'
try:
    while(True):
        print 'catcher: sleeping...'
        time.sleep(1)
except Exception as ex:
    print ex
    sys.exit(0)
thrower.py:
import signal
import subprocess
import time
import os

args = [
    'python',
    'catcher.py',
]

print 'thrower: starting catcher'
if hasattr(os.sys, 'winver'):
    process = subprocess.Popen(args, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
else:
    process = subprocess.Popen(args)

print 'thrower: waiting a couple of seconds for catcher to start...'
time.sleep(2)

print 'thrower: sending signal to catch'
if hasattr(os.sys, 'winver'):
    os.kill(process.pid, signal.CTRL_BREAK_EVENT)
else:
    process.send_signal(signal.SIGTERM)

print 'thrower: i am done'
Try with:
win32api.GenerateConsoleCtrlEvent(CTRL_C_EVENT, pgroupid)
or
win32api.GenerateConsoleCtrlEvent(CTRL_BREAK_EVENT, pgroupid)
references:
http://docs.activestate.com/activepython/2.5/pywin3/win32process_CREATE_NEW_PROCESS_GROUP.html
http://msdn.microsoft.com/en-us/library/ms683155%28v=vs.85%29.aspx
Read the info about dwProcessGroupId; the group id should be the same as the process id.
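Putting that together, a hedged sketch (my own assembly, not from the answer; the child script name is just an example) that starts the child in its own process group and then sends it a console control event:

import subprocess
import win32api  # pywin32

CTRL_BREAK_EVENT = 1  # per the Windows API; CTRL_C_EVENT would be 0

# Start the child in a new process group so the console event is delivered
# to it (its group id equals its process id), not to the parent.
proc = subprocess.Popen(['python', '-u', 'childProcess.py'],
                        creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)

# Send Ctrl-Break to the child's process group.
win32api.GenerateConsoleCtrlEvent(CTRL_BREAK_EVENT, proc.pid)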