Python ctrl+c, execute function if early termination

I have a python script that creates a lot of temporary files. If the script terminates early because of a ctrl+c interrupt, I would like to quickly delete those files before the program is allowed to end.
What's the Pythonic way of handling this?

Open the files in a with statement, if possible, or use a try statement with a finally block that closes the files. If you're using tempfile, the files will automatically be destroyed when closed; otherwise, you may need to delete them yourself in the finally block.
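For example, here is a minimal sketch using tempfile.NamedTemporaryFile; do_work_with is a hypothetical placeholder for whatever your script does with the file:
import tempfile

tmp = tempfile.NamedTemporaryFile(suffix='.dat')  # deleted automatically on close
try:
    tmp.write('some intermediate data')
    tmp.flush()
    do_work_with(tmp.name)  # hypothetical function that uses the temp file
finally:
    tmp.close()  # removes the file, Ctrl+C or not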

http://docs.python.org/2/library/exceptions.html#exceptions.KeyboardInterrupt
if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        cleanUp()

Either catch and handle KeyboardInterrupt, or set an exit handler with atexit.
Also, tempfile.
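A minimal sketch of the atexit approach; the file naming scheme here is an assumption for illustration:
import atexit
import glob
import os

@atexit.register
def cleanup_temp_files():
    # Remove any temp files this script created before the interpreter exits.
    for path in glob.glob('/tmp/myscript-*.tmp'):
        os.remove(path)

The handler runs on normal exit and also when an unhandled KeyboardInterrupt terminates the interpreter, but not if the process is killed outright (e.g. with os._exit or SIGKILL).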

Related

How can I terminate a Python script manually without interrupting file output?

I am somewhat new to Python, so I imagine this question has a simple answer. But I cannot seem to find a solution anywhere.
I have a Python script that continually accepts input from a streaming API and saves the data out to a file.
My problem is when I need to stop the script to modify the code. If I use ctrl-f2, I sometimes catch the script while it is in the process of writing to the output file, and the file ends up corrupted.
Is there a simple way to stop Python manually that allows it to finish executing the current line of code?
You can catch the SIGTERM or SIGINT signal and set a global variable that your script routinely checks to see if it should exit. It may mean you need to break your operations up into smaller chunks so that you can check the exit variable more frequently.
import signal

EXIT = False

def handler(signum, frame):
    global EXIT
    EXIT = True

signal.signal(signal.SIGINT, handler)

def long_running_operation():
    for i in range(1000000):
        if EXIT:
            # do cleanup or raise exception so that cleanup
            # can be done higher up.
            return
        # Normal operation.
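The answer mentions SIGTERM as well; to get the same behaviour for a plain kill, you can register the same handler for that signal too (this line is an addition, not part of the original snippet):
signal.signal(signal.SIGTERM, handler)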

Catching Keyboard Interrupt with Raw Input

I have a bit of Python code to try and make raw_input catch keyboard interrupts. If I run the code in this function it works perfectly fine, but if I run it in my program, the print statement is never reached, indicating that the keyboard interrupt is not caught. The program attempts to exit and fails until it escalates to SIGKILL, which of course works fine. My guess is that somewhere else the keyboard interrupt is being caught, preventing the exception from running at all. My question is: where would such an interrupt likely occur, and how can I prevent it from blocking this one? My plan has been to add a slight delay between the program catching a keyboard interrupt and killing itself, to give the except clause here a moment to catch it.
Any ideas appreciated
Thanks!
import sys

def interruptable_input(text=''):
    '''Takes raw input, but accepts keyboard interrupt'''
    try:
        return raw_input(text)
    except KeyboardInterrupt:
        print "Interrupted by user"
        sys.exit()
I have narrowed it down to the following:
import sys

text = ''
try:
    print raw_input(text)
except KeyboardInterrupt:
    print "Interrupted by user"
    sys.exit()
This works perfectly when I run it on the command line using Python 2.7.
It lets me type an input on the console, and when I hit Ctrl+C it prints "Interrupted by user".
Edit:
I misread your question at first; however, when I use the method from your example and call it from another method, the result is the same.
I have determined that the reason for my issue was another interrupt handler killing the script before the KeyboardInterrupt was hit. I solved it by setting my own interrupt handler for signal.SIGINT, like so:
import sys
import signal
import rospy  # the original handler logs via ROS; outside ROS, a plain print would do

def signal_term_handler(signal, frame):
    '''Handles KeyboardInterrupts to ensure smooth exit'''
    rospy.logerr('User Keyboard interrupt')
    sys.exit(0)

signal.signal(signal.SIGINT, signal_term_handler)
It's slightly less direct, but it gets the job done. Now raw_input() will simply die when told to.

Python run system command and then exit... won't exit

I have the following python code:
os.system("C:/Python27/python.exe C:/GUI/TestGUI.py")
sys.exit(0)
It runs the command fine, and a window pops up. However, it doesn't exit the first script. It just stays there, and I eventually have to force kill the process. No errors are produced. What's going on?
Instead of os.system, use subprocess.Popen.
This runs a command, doesn't wait for it, and then exits:
import subprocess
import sys
subprocess.Popen(["mupdf", "/home/dan/Desktop/Sieve-JFP.pdf"])
sys.exit(0)
Note that os.system(command) behaves like:
p = subprocess.Popen(command)
p.wait()
KeyboardInterrupts and signals are only seen by the process (i.e. the main thread). If your nested command hangs due to some kind of file read or write block, you won't be able to quit the program using any keyboard commands.
Why does a read-only open of a named pipe block?
If you can't eliminate the source of the disk block, then one way is to wrap the process in the thread so you can force kill it. But if you do this, you leave opportunity for half-written and corrupted files on disk.
I suggest using os._exit instead of sys.exit, as sys.exit doesn't quit a program; it raises a SystemExit exception (or exits a thread). os._exit(-1) quits the entire program.
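Here is a rough sketch of the thread-wrapping idea described above, reusing the command from the question; the 30-second timeout is an arbitrary choice for illustration:
import os
import threading
import subprocess

def run_nested_command():
    subprocess.call(["C:/Python27/python.exe", "C:/GUI/TestGUI.py"])

worker = threading.Thread(target=run_nested_command)
worker.daemon = True   # don't let this thread keep the interpreter alive
worker.start()

worker.join(30)        # give the nested command up to 30 seconds
if worker.is_alive():
    # The nested command is still blocked; force the whole program to exit.
    os._exit(-1)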
import sys
import subprocess
subprocess.Popen(["C:/Python27/python.exe", "C:/GUI/TestGUI.py"])
sys.exit(0)
Popen from the subprocess module is what you are looking for.

Graceful exiting of a program in Python?

I have a script that runs as a
while True:
    doStuff()
What is the best way to communicate with this script if I need to stop it but I don't want to kill it if it is in the middle of an operation?
And I'm assuming you mean killing it from outside the Python script.
The way I've found easiest is:
import atexit
import os

@atexit.register
def cleanup():
    os.unlink("myfile.%d" % os.getpid())

f = open("myfile.%d" % os.getpid(), "w")
f.write("Nothing")
f.close()

while os.path.exists("myfile.%d" % os.getpid()):
    doSomething()
Then to terminate the script just remove the myfile.xxx and the application should quit for you. You can use this even with multiple instances of the same script running at once if you only need to shut one down. And it tries to clean up after itself....
The best way is to rewrite the script so it doesn't use while True:.
Sadly, it's impossible to guess a single good way to terminate this, but here are some options:
You could use Linux signals.
You could use a timer and stop after a while.
You could have dostuff return a value and stop if the value is False (see the sketch after this list).
You could check for a local file and stop if the file exists.
You could check an FTP site for a remote file and stop if the file exists.
You could check an HTTP web page for information that indicates whether your loop should stop.
You could use OS-specific things like semaphores or shared memory.
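A minimal sketch of the return-value idea combined with a timer; the one-hour limit and the assumption that dostuff() returns False when it is done are both made up for illustration:
import time

start = time.time()
TIME_LIMIT = 60 * 60   # stop after one hour (arbitrary choice)

while True:
    if not dostuff():                       # dostuff() returns False when it wants to stop
        break
    if time.time() - start > TIME_LIMIT:    # or stop once the time limit is reached
        break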
I think the most elegant would be:
keep_running = True
while keep_running:
    dostuff()
and then dostuff() can set keep_running = False (it would need to declare it global) whenever it no longer wants to keep running; then the while loop ends and everything cleans up nicely.
If that's a console application and exiting by pressing Ctrl+C is OK, could that solve your problem?
try:
    while True:
        doStuff()
except KeyboardInterrupt:
    doOtherStuff()
I guess the problem with that approach is that you wouldn't have any control over exactly when and where in doStuff the execution is terminated.
A long time ago I implemented such a thing. It catches Ctrl+C (or a keyboard interrupt). It uses my package snuff-utils.
To install:
pip install snuff-utils
from snuff_utils.graceful_exit import graceful_exit

while True:
    do_task_until_complete()
    if graceful_exit:
        do_stuff_before_exit()
        break
On Ctrl+C it will log:
An interrupt signal has been received. The signal will be processed according to the logic of the application.
The goal I was after was to exit the program, but only after finishing the task that was already running.
Be careful with multiprocessing/multithreading. It is not tested.
The signal module can trap signals so you can react accordingly.

Daemon dies unexpectedly

I have a Python script, which I daemonise using this code:
import os  # needed for os.getpid() below

def daemonise():
    from os import fork, setsid, umask, dup2
    from sys import stdin, stdout, stderr
    if fork(): exit(0)
    umask(0)
    setsid()
    if fork(): exit(0)
    stdout.flush()
    stderr.flush()
    si = file('/dev/null', 'r')
    so = file('daemon-%s.out' % os.getpid(), 'a+')
    se = file('daemon-%s.err' % os.getpid(), 'a+')
    dup2(si.fileno(), stdin.fileno())
    dup2(so.fileno(), stdout.fileno())
    dup2(se.fileno(), stderr.fileno())
    print 'this file has the output from daemon%s' % os.getpid()
    print >> stderr, 'this file has the errors from daemon%s' % os.getpid()
The script runs in a loop like this:
while True:
    try:
        funny_code()
        sleep(10)
    except:
        pass
It runs fine for a few hours and then dies unexpectedly. How do I go about debugging such demons, err daemons?
[Edit]
Without starting a process like monit, is there a way to write a watchdog in Python which can watch my other daemons and restart them when they go down? (Who watches the watchdog?)
You really should use python-daemon for this, a library that implements PEP 3143 (Standard daemon process library). This way you ensure that your application does all the right things for whichever flavour of UNIX it is running under. No need to reinvent the wheel.
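A minimal sketch of how python-daemon is typically used; main_loop is a placeholder for your own code:
import daemon

def main_loop():
    # your while True: funny_code() loop goes here
    pass

with daemon.DaemonContext():
    main_loop()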
Why are you silently swallowing all exceptions? Try to see what exceptions are being caught by this:
while True:
    try:
        funny_code()
        sleep(10)
    except BaseException, e:
        print e.__class__, e.message
        pass
Something unexpected might be happening which is causing it to fail, but you'll never know if you blindly ignore all the exceptions.
I recommend using supervisord (written in Python, very easy to use) for daemonizing and monitoring processes. Running under supervisord you would not have to use your daemonise function.
What I've used with my clients is daemontools. It is a proven, well-tested tool to run anything daemonized.
You just write your application without any daemonization, to run in the foreground; then create a daemontools service folder for it, and it will discover and automatically restart your application from then on, and every time the system restarts.
It can also handle log rotation and the like. Saves a lot of tedious, repeated work.
