How can I exit my entire Python application from one of its threads? sys.exit() only terminates the thread in which it is called, so that is no help.
I would not like to use an os.kill() solution, as this isn't very clean.
Short answer: use os._exit.
Long answer with example:
I yanked and slightly modified a simple threading example from a tutorial on DevShed:
import threading, sys, os

theVar = 1

class MyThread ( threading.Thread ):

    def run ( self ):
        global theVar
        print 'This is thread ' + str ( theVar ) + ' speaking.'
        print 'Hello and good bye.'
        theVar = theVar + 1
        if theVar == 4:
            # sys.exit(1)
            os._exit(1)
        print '(done)'

for x in xrange ( 7 ):
    MyThread().start()
If you keep sys.exit(1) commented out (so os._exit(1) runs), the script dies right after the third thread prints. If you instead use sys.exit(1) and comment out os._exit(1), only the third thread stops (it never prints '(done)'), and the program runs through all seven threads.
os._exit "should normally only be used in the child process after a fork()" -- and a separate thread is close enough to that for your purpose. Also note that there are several enumerated values listed right after os._exit in that manual page, and you should prefer those as arguments to os._exit instead of simple numbers like I used in the example above.
If all your threads except the main ones are daemons, the best approach is generally thread.interrupt_main() -- any thread can use it to raise a KeyboardInterrupt in the main thread, which can normally lead to reasonably clean exit from the main thread (including finalizers in the main thread getting called, etc).
Of course, if this results in some non-daemon thread keeping the whole process alive, you need to followup with os._exit as Mark recommends -- but I'd see that as the last resort (kind of like a kill -9;-) because it terminates things quite brusquely (finalizers not run, including try/finally blocks, with blocks, atexit functions, etc).
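A minimal sketch of the interrupt_main() approach (Python 2 module names; this assumes the worker thread is the one deciding when the whole app should shut down):

import thread      # Python 2; the module is called _thread in Python 3
import threading
import time

def worker():
    time.sleep(2)
    # Simulate Ctrl-C: raise KeyboardInterrupt in the main thread.
    thread.interrupt_main()

t = threading.Thread(target=worker)
t.daemon = True
t.start()

try:
    while True:
        time.sleep(1)   # the main thread wakes periodically, so it notices the interrupt
except KeyboardInterrupt:
    print('main thread interrupted, exiting cleanly')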
Using thread.interrupt_main() may not help in some situations: a KeyboardInterrupt is often used in command-line applications to abort the current command or to clear the input line.
In addition, os._exit will kill the process immediately without running any finally blocks in your code, which may be dangerous (for example, files and connections will not be closed).
The solution I've found is to register a signal handler in the main thread that raises a custom exception. Use the background thread to fire the signal.
import signal
import os
import threading
import time

class ExitCommand(Exception):
    pass

def signal_handler(signal, frame):
    raise ExitCommand()

def thread_job():
    time.sleep(5)
    os.kill(os.getpid(), signal.SIGUSR1)

signal.signal(signal.SIGUSR1, signal_handler)
threading.Thread(target=thread_job).start()  # thread will fire in 5 seconds

try:
    while True:
        user_input = raw_input('Blocked by raw_input loop ')
        # do something with 'user_input'
except ExitCommand:
    pass
finally:
    print('finally will still run')
Related questions:
Why does sys.exit() not exit when called inside a thread in Python?
Python: How to quit CLI when stuck in blocking raw_input?
The easiest way to exit the whole program is to terminate it using its process id (pid).
import os
import psutil

current_system_pid = os.getpid()

ThisSystem = psutil.Process(current_system_pid)
ThisSystem.terminate()  # sends SIGTERM to this process on POSIX systems
To install psutil: pip install psutil
On Linux you can use os.kill() and pass it the current process ID and the SIGINT signal to start the steps to exit the app.
import os
import signal

os.kill(os.getpid(), signal.SIGINT)
This is really baffling me, and I cannot find the answer.
import thread
import time
import random
import sys

sys.stdout.flush()

def run_often(thread_name, sleep_time):
    while True:
        time.sleep(sleep_time)
        print "%s" % (thread_name)

def run_randomly(thread_name, sleep_time):
    while True:
        time.sleep(sleep_time)
        print "%s" % (thread_name)

thread.start_new_thread(run_often, ("Often runs", 2))
thread.start_new_thread(run_randomly, ("Fast and random", random.random()))
The problem is that this does not print anything. In the terminal, I type python threading.py (that's the file name) and no error is output. So the code runs, but it just does not print. In other words, this is what happens:
$ python threading.py
$
I added sys.stdout.flush() thinking that stdout needed to be reset, but to no avail. And when I add a random print statement like "hello world" outside of the while loop in any of the functions, it gets printed. Also, when I run the exact same code in ipython, it prints perfectly. This was copied from a YouTube tutorial, and is not my own code.
Any ideas as to why it's not printing? The Python I'm running is 2.7.8.
Thanks in advance.
According to the thread documentation,
When the main thread exits, it is system defined whether the other threads survive. On SGI IRIX using the native thread implementation, they survive. On most other systems, they are killed without executing try ... finally clauses or executing object destructors.
So it is most likely that your child threads are being terminated before they can print anything, because the main thread ends too quickly.
The quick solution is to make your main thread wait before ending; add a time.sleep() call as the last line.
You could also use threading instead of thread, and use Thread.join, which forces the main thread to wait until the child thread is finished executing.
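A hedged sketch of that threading version (the loops are bounded here so join() can actually return; note the script must not itself be called threading.py, or the import would pick up the script instead of the standard module):

import random
import time
import threading

def run_often(thread_name, sleep_time):
    for _ in range(5):          # bounded so the thread (and join) eventually finishes
        time.sleep(sleep_time)
        print(thread_name)

t1 = threading.Thread(target=run_often, args=("Often runs", 2))
t2 = threading.Thread(target=run_often, args=("Fast and random", random.random()))
t1.start()
t2.start()
t1.join()   # the main thread waits here instead of exiting immediately
t2.join()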
I have a piece of Python code as below:
import sys
import signal
import atexit

def release():
    print "Release resources..."

def sigHandler(signo, frame):
    release()
    sys.exit(0)

if __name__ == "__main__":
    signal.signal(signal.SIGTERM, sigHandler)
    atexit.register(release)
    while True:
        pass
The real code is far more complex than this snippet, but the structure is the same: the main function maintains an infinite loop.
I need a signal callback to release the resources it has acquired, like a DB handle.
So I added a SIGTERM handler, in case the server is killed, which simply invokes the release function and then exits the process.
The atexit handler is meant to cover the case where the process completes successfully.
Now I have a problem: I want release to be invoked only once when the process is killed. Any improvement on my code?
Well, according to the documentation atexit handlers aren't executed if the program is killed by a signal not handled by Python, or in case of internal error, or if os._exit() is called. So I would use something like this (almost copied your code):
import sys
import signal
import atexit

def release():
    print "Release resources..."

def sigHandler(signo, frame):
    sys.exit(0)

if __name__ == "__main__":
    atexit.register(release)
    signal.signal(signal.SIGTERM, sigHandler)
    while True:
        pass
I've checked that release() is called once and only once in the case of both a TERM signal (issued externally) and an INT signal (Ctrl-C from the keyboard). If you need to, you may install more signal handlers (e.g. for HUP etc.). If you need a more graceful shutdown, you should find a way to gracefully break the loop and/or install external "shutdown handlers" (in the case of SIGKILL you won't get a chance to cleanly release resources), or simply make your application ACID.
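For example, one hedged way to break the loop gracefully is to have the handler set a flag (here a threading.Event) that the main loop checks, instead of calling sys.exit() from inside the handler:

import signal
import atexit
import threading

def release():
    print("Release resources...")

stop_event = threading.Event()

def sigHandler(signo, frame):
    stop_event.set()   # ask the main loop to finish instead of exiting abruptly

if __name__ == "__main__":
    atexit.register(release)
    signal.signal(signal.SIGTERM, sigHandler)
    while not stop_event.is_set():
        stop_event.wait(1.0)   # placeholder for real work; wake up to re-check the flag
    # falling off the end of the program runs the atexit handler exactly once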
I am creating a Python program that calls an external command periodically. The external command takes a few seconds to complete. I want to reduce the possibility of the external command terminating badly by adding a signal handler for SIGINT. Basically, I want SIGINT to wait until the command finishes executing before terminating the Python program. The problem is that the external program seems to be getting the SIGINT as well, causing it to end abruptly. I am invoking the command from a separate thread, since the Python documentation for signal (http://docs.python.org/2/library/signal.html) mentions that only the main thread receives signals.
Can someone help with this?
Here is a stub of my code. Imagine that the external program is /bin/sleep:
import sys
import time
import threading
import signal

def sleep():
    import subprocess
    global sleeping
    cmd = ['/bin/sleep', '10000']
    sleeping = True
    p = subprocess.Popen(cmd)
    p.wait()
    sleeping = False

def sigint_handler(signum, frame):
    if sleeping:
        print 'busy, will terminate shortly'
        while(sleeping): time.sleep(0.5)
        sys.exit(0)
    else:
        print 'clean exit'
        sys.exit(0)

sleeping = False
signal.signal(signal.SIGINT, sigint_handler)

while(1):
    t1 = threading.Thread(target=sleep)
    t1.start()
    time.sleep(500)
The expected behavior is that pressing Ctrl+C N seconds after the program starts will result in it waiting (10000 - N) seconds and then exiting. What happens instead is that the program terminates immediately.
Thanks!
The problem is the way signal handlers are modified when executing a new process. From POSIX:
A child created via fork(2) inherits a copy of its parent's signal dispositions. During an execve(2), the dispositions of handled signals are reset to the default; the dispositions of ignored signals are left unchanged.
So what you need to do is:
Ignore the SIGINT signal
Start the external program
Set the SIGINT handler as desired
That way, the external program will ignore SIGINT.
Of course, this leaves a (very) small time window when your script won't respond to SIGINT. But that's something you'll have to live with.
For example:
sleeping = False

while(1):
    t1 = threading.Thread(target=sleep)
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    t1.start()
    signal.signal(signal.SIGINT, sigint_handler)
    time.sleep(500)
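An alternative (Unix-only) sketch is to make only the child ignore SIGINT via subprocess's preexec_fn hook, so the parent never has to swap its handler at all:

import signal
import subprocess

def sleep():
    global sleeping
    sleeping = True
    # preexec_fn runs in the child between fork() and exec(); only the child
    # ends up ignoring SIGINT, while the parent keeps its own handler installed.
    p = subprocess.Popen(['/bin/sleep', '10000'],
                         preexec_fn=lambda: signal.signal(signal.SIGINT, signal.SIG_IGN))
    p.wait()
    sleeping = False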
Please don't mark this as a duplicate before reading: there are a lot of questions about multithreading and keyboard interrupts, but I didn't find any that consider os.system, and it looks like that's important here.
I have a python script which makes some external calls in worker threads.
I want it to exit when I press Ctrl+C, but it looks like the main thread ignores it.
Something like this:
from threading import Thread
import sys
import os

def run(i):
    while True:
        os.system("sleep 10")
        print i

def main():
    threads = []
    try:
        for i in range(0, 3):
            threads.append(Thread(target=run, args=(i,)))
            threads[i].daemon = True
            threads[i].start()
        for i in range(0, 3):
            while True:
                threads[i].join(10)
                if not threads[i].isAlive():
                    break
    except (KeyboardInterrupt, SystemExit):
        sys.exit("Interrupted by ctrl+c\n")

if __name__ == '__main__':
    main()
Surprisingly, it works fine if I change os.system("sleep 10") to time.sleep(10).
I'm not sure what operating system and shell you are using. I'll describe Mac OS X and Linux with zsh (bash/sh should behave similarly).
When you hit Ctrl+C, all programs running in the foreground in your current terminal receive the signal SIGINT. In your case it's your main python process and all processes spawned by os.system.
Processes spawned by os.system then terminate their execution. Usually when a Python script receives SIGINT it raises a KeyboardInterrupt exception, but your main process ignores SIGINT because of os.system(): Python's os.system() calls the standard C function system(), which makes the calling process ignore SIGINT while the command runs (see the Linux and Mac OS X man pages).
So none of your Python threads receive SIGINT; only the child processes get it.
When you remove os.system() call, your python process stops ignoring SIGINT, and you get KeyboardInterrupt.
You can replace os.system("sleep 10") with subprocess.call(["sleep", "10"]). subprocess.call() doesn't make your process ignore SIGINT.
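That is, the worker from the question would look roughly like this:

import subprocess

def run(i):
    while True:
        # subprocess.call() does not make the calling process ignore SIGINT,
        # so Ctrl+C still reaches the main thread as KeyboardInterrupt.
        subprocess.call(["sleep", "10"])
        print(i)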
I've had this same problem more times than I can count back when I was first learning Python multithreading.
Adding the sleep call within the loop makes your main thread block, which allows it to still hear and honor exceptions. What you want to do is use the Event class to give your child threads an exit flag they can check and break out of their loops on. You can set this flag in your KeyboardInterrupt handler; just put the except clause for that in your main thread.
I'm not entirely certain what is going on with the different behavior between the Python-specific sleep and the os.system() call, but the remedy I'm offering should work for your desired end result. Just a guess, but the os.system() call probably blocks the interpreter itself in a different way?
Keep in mind that in most situations where threads are required, the main thread is going to keep executing something anyway, in which case the "sleeping" in your simple example would be implied.
http://docs.python.org/2/library/threading.html#event-objects
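A rough sketch of that Event-based exit flag, combined with the subprocess.call() replacement from the other answer (since os.system() would keep SIGINT ignored while a command runs):

import threading
import subprocess

stop_flag = threading.Event()

def run(i):
    while not stop_flag.is_set():   # workers check the exit flag before each call
        subprocess.call(["sleep", "10"])
        print(i)

def main():
    threads = [threading.Thread(target=run, args=(i,)) for i in range(3)]
    for t in threads:
        t.start()
    try:
        while any(t.is_alive() for t in threads):
            for t in threads:
                t.join(1)           # short, repeated joins keep the main thread responsive
    except (KeyboardInterrupt, SystemExit):
        stop_flag.set()             # workers notice the flag and exit their loops
        for t in threads:
            t.join()

if __name__ == '__main__':
    main()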
Is there a way to pause a process (running from an executable) so that it stops generating CPU load while it's paused, and waits until it's unpaused to go on with its work? Possibly in Python, or in some way accessible from Python.
By using psutil ( https://github.com/giampaolo/psutil ):
>>> import psutil
>>> somepid = 1023
>>> p = psutil.Process(somepid)
>>> p.suspend()
>>> p.resume()
You are thinking of SIGTSTP -- the same signal that is sent when you press Ctrl-Z. This suspends the process until it gets SIGCONT.
Of course, some programs can just catch and ignore this signal, so it depends on the executable. However, if you can suspend and resume it manually, you can do it from a Python program too -- use os.kill().
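For example (a minimal sketch, assuming a Unix system and a hypothetical pid):

import os
import signal

somepid = 1023                    # hypothetical pid of the process to pause

os.kill(somepid, signal.SIGTSTP)  # suspend: the same signal as Ctrl-Z
# ... later ...
os.kill(somepid, signal.SIGCONT)  # resume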
I just implemented this with signals in Python, something like this:
import signal

def mysignalhandler(sig, frame):
    print "Got " + str(sig)
    if sig == signal.SIGUSR1:
        do_something()

signal.signal(signal.SIGUSR1, mysignalhandler)
signal.pause()
This will pause at the last line and call do_something() when it receives the signal USR1, for example through a
kill -USR1 <pid>
command.
This will only work in UNIX though.
There is an (almost) native way of doing this in Python, and it's quite simple:
import time
time.sleep(5)
In this snippet, 5 is the number of seconds you want to pause your program.