I am trying to write a Python script that handles stopping its own process via signals.
It runs each file one at a time, sleeps for a set interval, and runs again until it has finished every file in the directory. Processing each file takes around 5 to 10 minutes depending on its size.
I want the program to stop when I send a signal, but not immediately: it should finish the current file and stop afterwards.
So I cannot use Ctrl+Z, because that suspends the process right away.
import signal

stop = False

def handler(number, frame):
    global stop
    stop = True

signal.signal(signal.SIGUSR1, handler)

while not stop:
    pass  # Do things
Above is what I tried, but the process is killed immediately when I send the signal. It also goes into an infinite loop even after it has finished working on all the files.
What can I do to stop the process when I signal, allowing it to finish processing the current file first?
Just install a signal handler for signal.SIGTERM, and within it set a state variable that your program checks after finishing each file.
It is actually quite simple; see the documentation at https://docs.python.org/2/library/signal.html.
import os
import signal

terminate = False

def handler(signum, frame):
    global terminate
    print("Termination requested")
    terminate = True

# install the handler before starting the loop,
# otherwise the signal arrives before anyone listens for it
signal.signal(signal.SIGTERM, handler)

for filename in os.listdir("<your dir>"):
    if terminate:
        break
    process_next_file(filename)
(You can also use other signals; SIGINT, for example, is the one delivered when the user presses Ctrl+C.)
You can create a command listener thread while the main thread keeps doing the file processing. For example, the listener thread waits for a command on standard input; when you send the "stop" command, it sets a variable. The file-processing thread checks that variable before processing each file, so it stops when you want it to.
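A minimal sketch of that approach, assuming `process_next_file` stands in for your per-file work and the directory path is a placeholder:

import os
import threading

stop_requested = threading.Event()

def listen_for_commands():
    # Listener thread: block on stdin until the user types "stop".
    while True:
        if input().strip() == "stop":
            stop_requested.set()
            break

listener = threading.Thread(target=listen_for_commands, daemon=True)
listener.start()

for filename in os.listdir("<your dir>"):
    if stop_requested.is_set():
        break
    process_next_file(filename)  # placeholder for the per-file work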
I am using subprocess in Python to call an external program on Windows. I control the processes with a ThreadPool so that I can limit them to at most 6 at the same time, and a new process starts continuously whenever one finishes.
Code as below:
### some codes above

### Code of Subprocess Part
import subprocess
from multiprocessing.pool import ThreadPool as Pool

def FAST_worker(file):
    p = subprocess.Popen([r'E:/pyworkspace/FAST/FAST_RV_W64.exe', file],
                         cwd=r'E:/pyworkspace/FAST/',
                         shell=True)
    p.wait()

# List of *.in filenames
FAST_in_pathname_li = [
    '334.in',
    '893.in',
    '9527.in',
    ...
    '114514.in',
    '1919810.in',
]

# Limit to max 6 processes at the same time
with Pool(processes=6) as pool:
    for result in pool.imap_unordered(FAST_worker, FAST_in_pathname_li):
        pass

### some codes below
I ran into a problem when the external program unexpectedly terminated and showed an error-message pop-up. Though the other 5 processes kept going, the whole run eventually got stuck at the subprocess part and couldn't go forward anymore (unless I came to my desk and manually clicked "Shut down the program").
What I want to know is how I can avoid the pop-up and keep the whole script going, bypassing the error message rather than clicking manually, so as not to waste time.
Since we don't know enough about the program FAST_worker is calling, I'll assume you already checked that there isn't any "kill on error" or "quiet" mode that would be more convenient to use in a script.
My two cents: maybe you can set up a timeout on the worker execution, so that a stuck process is killed automatically after a certain delay.
Building on the snippet provided here, this is a draft:
import subprocess
from threading import Timer

def FAST_worker(file, timeout_sec):
    p = subprocess.Popen([r'E:/pyworkspace/FAST/FAST_RV_W64.exe', file],
                         cwd=r'E:/pyworkspace/FAST/',
                         shell=True)

    def kill_proc():
        """Called by the Timer thread upon expiration."""
        p.kill()
        # maybe add the task to a list of failed tasks, for traceability

    # set up a timer to kill the process after the timeout
    timer = Timer(timeout_sec, kill_proc)
    try:
        timer.start()
        p.wait()  # wait() returns the exit code, not (stdout, stderr)
    finally:
        timer.cancel()
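As a side note, if you are on Python 3.3 or newer, Popen.wait() itself accepts a timeout, which avoids the extra Timer thread entirely; a minimal sketch under the same assumptions about the FAST executable:

import subprocess

def FAST_worker(file, timeout_sec):
    p = subprocess.Popen([r'E:/pyworkspace/FAST/FAST_RV_W64.exe', file],
                         cwd=r'E:/pyworkspace/FAST/',
                         shell=True)
    try:
        p.wait(timeout=timeout_sec)
    except subprocess.TimeoutExpired:
        p.kill()  # the process is stuck: kill it and move on
        p.wait()  # reap the killed process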
Note that there are also GUI automation libraries in Python that can do the clicking for you, but that is likely to be more tedious to program:
tutorial for pyAutoGui
SO question on the subject
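A rough sketch of that GUI-automation route with pyautogui, run as a separate watchdog script; 'shutdown_button.png' is a hypothetical screenshot of the dialog's button that you would capture yourself beforehand:

import time
import pyautogui

# Poll the screen for the error dialog's button and dismiss it when it appears.
while True:
    try:
        location = pyautogui.locateCenterOnScreen('shutdown_button.png')
    except pyautogui.ImageNotFoundException:
        location = None  # newer pyautogui raises instead of returning None
    if location is not None:
        pyautogui.click(location.x, location.y)
    time.sleep(5)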
I am somewhat new to Python, so I imagine this question has a simple answer. But I cannot seem to find a solution anywhere.
I have a Python script that continually accepts input from a streaming API and saves the data out to a file.
My problem is when I need to stop the script to modify the code. If I use Ctrl+F2, I sometimes catch the script while it is in the process of writing to the output file, and the file ends up corrupted.
Is there a simple way to stop Python manually that allows it to finish executing the current line of code?
You can catch the SIGTERM or SIGINT signal and set a global variable that your script routinely checks to see if it should exit. It may mean you need to break your operations up into smaller chunks so that you can check the exit variable more frequently.
import signal

EXIT = False

def handler(signum, frame):
    global EXIT
    EXIT = True

signal.signal(signal.SIGINT, handler)

def long_running_operation():
    for i in range(1000000):
        if EXIT:
            # Do cleanup, or raise an exception so that cleanup
            # can be done higher up.
            return
        # Normal operation.
I am creating a Python program that calls an external command periodically. The external command takes a few seconds to complete. I want to reduce the possibility of the external command terminating badly by adding a signal handler for SIGINT. Basically, I want SIGINT to wait until the command finishes executing before terminating the Python program. The problem is that the external program seems to be getting the SIGINT as well, causing it to end abruptly. I am invoking the command from a separate thread, since the Python documentation for signal mentions that only the main thread receives the signal (http://docs.python.org/2/library/signal.html).
Can someone help with this?
Here is a stub of my code. Imagine that the external program is /bin/sleep:
import sys
import time
import threading
import signal
import subprocess

def sleep():
    global sleeping
    cmd = ['/bin/sleep', '10000']
    sleeping = True
    p = subprocess.Popen(cmd)
    p.wait()
    sleeping = False

def sigint_handler(signum, frame):
    if sleeping:
        print('busy, will terminate shortly')
        while sleeping:
            time.sleep(0.5)
        sys.exit(0)
    else:
        print('clean exit')
        sys.exit(0)

sleeping = False
signal.signal(signal.SIGINT, sigint_handler)

while True:
    t1 = threading.Thread(target=sleep)
    t1.start()
    time.sleep(500)
The expected behavior is that pressing Ctrl+C N seconds after the program starts will result in it waiting (10000 - N) seconds and then exiting. What actually happens is that the program terminates immediately.
Thanks!
The problem is the way signal handlers are modified when executing a new process. From POSIX:
A child created via fork(2) inherits a copy of its parent's signal dispositions. During an execve(2), the dispositions of handled signals are reset to the default; the dispositions of ignored signals are left unchanged.
So what you need to do is:
Ignore the SIGINT signal
Start the external program
Set the SIGINT handler as desired
That way, the external program will ignore SIGINT.
Of course, this leaves a (very) small time window when your script won't respond to SIGINT. But that's something you'll have to live with.
For example:
sleeping = False

while True:
    t1 = threading.Thread(target=sleep)
    signal.signal(signal.SIGINT, signal.SIG_IGN)   # ignore SIGINT while spawning
    t1.start()
    signal.signal(signal.SIGINT, sigint_handler)   # restore the handler afterwards
    time.sleep(500)
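If you want to avoid that window entirely, a POSIX-only alternative is to have only the child ignore SIGINT, using Popen's preexec_fn, so the parent's handler stays installed the whole time. A sketch, reusing the question's sleep() worker (note the subprocess docs caution that preexec_fn can be unsafe in the presence of threads):

import signal
import subprocess

def sleep():
    global sleeping
    sleeping = True
    # The child sets SIGINT to "ignore" between fork and exec; execve leaves
    # ignored signals ignored, so only the child is immune to Ctrl+C.
    p = subprocess.Popen(['/bin/sleep', '10000'],
                         preexec_fn=lambda: signal.signal(signal.SIGINT, signal.SIG_IGN))
    p.wait()
    sleeping = False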
I'm writing a test harness for a multi-process UDP server. The test harness runs multiple subprocesses, including several that spawn instances of the UDP server. I'm having trouble both terminating the subprocesses on exit and exiting the program from within; the only thing that works is Ctrl+C from the terminal, which kills the subprocesses and stops the program nicely.
I have several related problems:
The program does not quit if I use sys.exit(), either in the signal handler or after I fire the signal. It looks like it hits the exit code, and then hangs.
The program does not terminate the subprocesses if I use p.terminate() or os.kill(p, SIGINT).
The program does not terminate the subprocesses if I use os._exit().
Again, if I just leave the program running and type Ctrl+C from the terminal, the program immediately stops, taking all subprocesses with it. What's the best way to do this from within the program?
What I try at the end of the program:
os.kill(os.getpid(), signal.SIGINT)
The signal handler:
# handle Ctrl+C and remove open files
def signal_handler(signum, frame):
    print('You pressed Ctrl+C!')
    # remove all files
    try:
        filelist = [f for f in os.listdir(tmpdir)]
        for f in filelist:
            os.remove(tmpdir + '/' + f)
        # remove dir
        os.rmdir(tmpdir)
    except OSError:
        print("unable to remove temporary directory/files:", tmpdir)
    print("attempt sys.exit()")
    sys.exit()  # This doesn't do anything, the program hangs
    # os._exit(0)  # This stops the program, but doesn't kill subprocesses

signal.signal(signal.SIGINT, signal_handler)
Please don't consider this a duplicate before reading. There are a lot of questions about multithreading and keyboard interrupts, but I didn't find any that consider os.system, and that seems to be important here.
I have a Python script which makes some external calls in worker threads.
I want it to exit when I press Ctrl+C, but it looks like the main thread ignores it.
Something like this:
from threading import Thread
import sys
import os

def run(i):
    while True:
        os.system("sleep 10")
        print(i)

def main():
    threads = []
    try:
        for i in range(0, 3):
            threads.append(Thread(target=run, args=(i,)))
            threads[i].daemon = True
            threads[i].start()
        for i in range(0, 3):
            while True:
                threads[i].join(10)
                if not threads[i].is_alive():
                    break
    except (KeyboardInterrupt, SystemExit):
        sys.exit("Interrupted by ctrl+c\n")

if __name__ == '__main__':
    main()
Surprisingly, it works fine if I change os.system("sleep 10") to time.sleep(10).
I'm not sure what operating system and shell you are using. I describe Mac OS X and Linux with zsh (bash/sh should act similarly).
When you hit Ctrl+C, all programs running in the foreground in your current terminal receive the signal SIGINT. In your case, that's your main Python process and all processes spawned by os.system.
Processes spawned by os.system then terminate. Usually, when a Python script receives SIGINT, it raises a KeyboardInterrupt exception, but your main process ignores SIGINT because of os.system(): Python's os.system() calls the standard C function system(), which makes the calling process ignore SIGINT (see the Linux and Mac OS X man pages).
So none of your Python threads receives SIGINT; only the child processes get it.
When you remove the os.system() call, your Python process stops ignoring SIGINT, and you get the KeyboardInterrupt.
You can replace os.system("sleep 10") with subprocess.call(["sleep", "10"]). subprocess.call() doesn't make your process ignore SIGINT.
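Applied to the example above, the fix is a one-line change; a sketch:

import subprocess

def run(i):
    while True:
        # subprocess.call() does not make the parent ignore SIGINT, so
        # Ctrl+C now raises KeyboardInterrupt in the main thread as expected
        subprocess.call(["sleep", "10"])
        print(i)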
I've had this same problem more times than I can count back when I was first learning Python multithreading.
Adding the sleep call within the loop makes your main thread block, which allows it to still hear and honor exceptions. What you want to do is use the Event class to set an event in your child threads that serves as an exit flag to break execution on. You can set this flag in your KeyboardInterrupt handler; just put the except clause for that in your main thread. There is a sketch after the link below.
I'm not entirely certain what causes the different behavior between the Python-specific sleep and the OS-called one, but the remedy I'm offering should achieve your desired end result. Just a guess: the OS-called one probably blocks the interpreter itself in a different way?
Keep in mind that in most situations where threads are required, the main thread keeps executing something, in which case the "sleeping" in your simple example would be implied.
http://docs.python.org/2/library/threading.html#event-objects
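A minimal sketch of the Event-based exit flag, adapted to the example above (the subprocess.call swap from the other answer is assumed here, since os.system would still mask the interrupt):

import subprocess
import sys
from threading import Thread, Event

exit_flag = Event()

def run(i):
    # check the exit flag between iterations instead of looping forever
    while not exit_flag.is_set():
        subprocess.call(["sleep", "10"])
        print(i)

def main():
    threads = [Thread(target=run, args=(i,)) for i in range(3)]
    for t in threads:
        t.daemon = True
        t.start()
    try:
        while any(t.is_alive() for t in threads):
            for t in threads:
                t.join(1)  # short joins keep the main thread responsive to Ctrl+C
    except (KeyboardInterrupt, SystemExit):
        exit_flag.set()  # tell the workers to stop after their current call
        sys.exit("Interrupted by ctrl+c\n")

if __name__ == '__main__':
    main()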