I am using the Slurm job manager on an HPC cluster. Sometimes a job is cancelled due to the time limit, and I would like to finish my program gracefully.
As far as I understand, the cancellation occurs in two stages precisely so that a software developer can finish the program gracefully:
srun: Job step aborted: Waiting up to 62 seconds for job step to finish.
slurmstepd: error: *** JOB 18522559 ON ncm0317 CANCELLED AT 2020-12-14T19:42:43 DUE TO TIME LIMIT ***
You can see that I am given 62 seconds to finish the job the way I want it to finish (by saving some files, etc.).
Question: how do I do this? I understand that first some Unix signal is sent to my job and I need to respond to it correctly. However, I cannot find any information in the Slurm documentation on what this signal is. Besides, I do not know exactly how to handle it in Python; probably through exception handling.
In Slurm, you can decide which signal is sent at which moment before your job hits the time limit.
From the sbatch man page:
--signal=[[R][B]:]<sig_num>[@<sig_time>]
When a job is within sig_time seconds of its end time, send it the signal sig_num.
So set
#SBATCH --signal=B:TERM@05:00
to get Slurm to signal the job with SIGTERM 5 minutes before the allocation ends. Note that depending on how you start your job, you might need to remove the B: part.
In your Python script, use the signal package. You need to define a "signal handler", a function that will be called when the signal is received, and "register" that function for a specific signal. As that function disrupts the normal flow when called, you need to keep it short and simple to avoid unwanted side effects, especially with multithreaded code.
A typical scheme in a Slurm environment is to have a script skeleton like this:
#!/usr/bin/env python
import signal, os, sys

# Global Boolean variable that indicates that a signal has been received
interrupted = False

# Global Boolean variable that indicates the natural end of the computations
converged = False

# Definition of the signal handler. All it does is flip the 'interrupted' variable
def signal_handler(signum, frame):
    global interrupted
    interrupted = True

# Register the signal handler
signal.signal(signal.SIGTERM, signal_handler)

try:
    # Try to recover a state file with the relevant variables stored
    # from a previous stop, if any
    with open('state', 'r') as file:
        vars = file.read()
except FileNotFoundError:
    # Otherwise bootstrap (start from scratch)
    vars = init_computation()

while not interrupted and not converged:
    do_computation_iteration()

# Save the current state before exiting
if interrupted:
    with open('state', 'w') as file:
        file.write(vars)
    sys.exit(99)

sys.exit(0)
This first tries to restart computations left by a previous run of the job, and otherwise bootstraps them. If the run was interrupted, it lets the current loop iteration finish properly, then saves the needed variables to disk and exits with return code 99. If Slurm is configured for it, this allows the job to be requeued automatically for further iterations.
If Slurm is not configured for it, you can requeue manually in the submission script like this:
python myscript.py || scontrol requeue $SLURM_JOB_ID
In most programming languages, Unix signals are captured using a callback. Python is no exception. To catch Unix signals using Python, just use the signal package.
For example, to gracefully exit:
import signal, sys

def terminate_signal(signalnum, frame):
    print('Terminate the process')
    # save results, whatever...
    sys.exit()

# Register the callback for SIGTERM
signal.signal(signal.SIGTERM, terminate_signal)

while True:
    pass  # work
List of possible signals. SIGTERM is the one used to "politely ask a program to terminate".
All I want to do is time out a function if it does not return before that.
It all started because urllib2 supports a timeout for urlopen, but not for the reading part, so my program hangs. Changing the default timeout for sockets does not work. Using signal.SIGALRM does not work. I can't switch to requests because then I would have to rewrite and test a lot more.
I DON'T want to make a thread run the function and then time out the thread; I want to time out the function. Any ideas how?
I like to use David's class here in my projects. I find it's very effective and I like that it provides a simple way to implement in existing code via a decorator. For example:
# Timeout after 30 seconds
@timeout(30)
def your_function():
    ...
CAUTION: This is not thread-safe! If you're using multithreading, the signal will get caught by a random thread. For single-threaded programs, however, this is the easiest solution.
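David's class itself isn't reproduced in this thread. As a rough sketch of what such a signal-based decorator can look like (SIGALRM-based, so Unix-only; the name timeout and the default error message are illustrative, not David's exact code):
import errno
import functools
import os
import signal

def timeout(seconds=10, error_message=os.strerror(errno.ETIME)):
    def decorator(func):
        def _handle_timeout(signum, frame):
            raise TimeoutError(error_message)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Install the handler and schedule SIGALRM
            old_handler = signal.signal(signal.SIGALRM, _handle_timeout)
            signal.alarm(seconds)
            try:
                return func(*args, **kwargs)
            finally:
                # Cancel the alarm and restore the previous handler
                signal.alarm(0)
                signal.signal(signal.SIGALRM, old_handler)
        return wrapper
    return decorator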
Yes, this can be done on Windows without signal, and it will work on other operating systems as well. It uses a thread, not to run the function, but to raise an exception for the timeout. The logic is to create a new thread, wait for the given time, and then raise an exception in the main thread using _thread.interrupt_main() (the module is called _thread in Python 3 and thread in Python 2). That exception is thrown in the main thread, and the with block exits when any exception occurs.
import threading
import time
import _thread  # 'import thread' in Python 2

class timeout():
    def __init__(self, time):
        self.time = time
        self.exit = False

    def __enter__(self):
        # Start a watchdog thread that will interrupt the main thread
        threading.Thread(target=self.callme).start()

    def callme(self):
        time.sleep(self.time)
        if self.exit == False:
            # Raises KeyboardInterrupt in the main thread
            # (use 'thread' instead of '_thread' in Python 2)
            _thread.interrupt_main()

    def __exit__(self, a, b, c):
        self.exit = True
Usage example:
with timeout(2):
    func()
The code in the with block should finish within 2 seconds; otherwise it will be interrupted after 2 seconds.
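Since _thread.interrupt_main() delivers a KeyboardInterrupt to the main thread, you can catch it explicitly if you want the program to carry on after the timeout (a small usage sketch; func is a placeholder for your own code):
try:
    with timeout(2):
        func()  # your long-running code
except KeyboardInterrupt:
    # raised by the watchdog thread via _thread.interrupt_main()
    print("func() was interrupted after 2 seconds")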
How to timeout a function in Python, timeout less than a second
I am running a function within a for loop, such as the following:
for element in my_list:
    my_function(element)
For some reason, some elements may lead the function into a very long processing time (perhaps even an infinite loop whose origin I cannot trace). So I want to add some loop control that skips the current element if, for example, its processing takes more than 2 seconds. How can this be done?
I would discourage the most obvious answer - using signal.alarm() and an alarm signal handler that asynchronously raises an exception to jump out of task execution. In theory it should work great, but in practice the CPython interpreter does not guarantee that the handler is executed within the time frame you want. Signal handling can be delayed by some number of bytecode instructions, so the exception can still be raised after you explicitly cancel the alarm (outside the context of the try block).
A problem we ran into regularly was that the alarm handler's exception would get raised after the timeoutable code completed.
Since there isn't much available by way of thread control, I have relied on process control for handling tasks that must be subject to a timeout. Basically, the gist is to hand the task off to a child process and kill the child process if the task takes too long. multiprocessing.Pool isn't quite that sophisticated, so I use a home-rolled pool for that level of control.
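The home-rolled pool is not shown here; as a minimal sketch of the process-control idea using only the standard multiprocessing module (returning results would additionally need a Queue or Pipe):
import multiprocessing
import time

def run_with_timeout(target, timeout, *args):
    """Run target(*args) in a child process; kill it if it takes too long."""
    proc = multiprocessing.Process(target=target, args=args)
    proc.start()
    proc.join(timeout)       # wait at most `timeout` seconds
    if proc.is_alive():
        proc.terminate()     # took too long: kill the child process
        proc.join()
        return False         # timed out
    return True              # finished in time

if __name__ == '__main__':
    print(run_with_timeout(time.sleep, 0.5, 10))  # False: killed after 0.5 s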
Something like this:
import signal
import time

class Timeout(Exception):
    pass

def try_one(func, t):
    def timeout_handler(signum, frame):
        raise Timeout()

    old_handler = signal.signal(signal.SIGALRM, timeout_handler)
    signal.alarm(t)  # trigger SIGALRM in t seconds

    try:
        t1 = time.perf_counter()
        func()
        t2 = time.perf_counter()
    except Timeout:
        print('{} timed out after {} seconds'.format(func.__name__, t))
        return None
    finally:
        # Restore the previous handler and cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)
        signal.alarm(0)

    return t2 - t1

def troublesome():
    while True:
        pass

try_one(troublesome, 2)
The function troublesome will never return on its own. If you run try_one(troublesome, 2), it successfully times out after 2 seconds.
I want to stop executing an R function called from Python (rpy2) after 2 seconds. Here is the Python code:
import signal
from rpy2 import robjects

# handler defined here for completeness
def handler(signum, frame):
    raise TimeoutError("R call timed out")

signal.signal(signal.SIGALRM, handler)
signal.alarm(2)  # set timeout to 2 seconds

# run R code
result = robjects.r('''
    Sys.sleep(10)
    "hello"
''')

signal.alarm(0)  # disable alarm
It doesn't work. I must wait 10 seconds for the signal handler.
The evaluation of R code does not release the Python GIL. The only way to get a Python script to monitor the execution time of R code is to have two processes.
You could check the rpy2 unit test testInterruptR(), although there are much more elegant ways to implement this in an application. There, a SIGINT is sent to an R process running an infinite loop.
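The test itself is not reproduced here, but as a rough sketch of the two-process idea using multiprocessing (note that proc.terminate() sends SIGTERM rather than the SIGINT used in the test, so R gets no chance to clean up; treat this as an illustration only):
import multiprocessing

def r_task():
    # Import inside the child so the embedded R lives in this process
    from rpy2 import robjects
    robjects.r('Sys.sleep(10)')

if __name__ == '__main__':
    proc = multiprocessing.Process(target=r_task)
    proc.start()
    proc.join(2)             # give the R code 2 seconds
    if proc.is_alive():
        proc.terminate()     # kill the worker running the R call
        proc.join()
        print('R call timed out')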
Try setting your alarm and then putting the operation you want to time out inside a try/except block. The alarm should raise a catchable exception. Hope that makes sense; it works for me, anyway.
I'm working with the Gnuradio framework. I work with flowgraphs I generate to send/receive signals. These flowgraphs initialize and start, but they don't return control flow to my application:
import time

while time.time() < endtime:
    # invoke GRC flowgraph for 1st sequence
    if not seq1_sent:
        tb = send_seq_2.top_block()
        tb.Run(True)
        seq1_sent = True
        if time.time() < endtime:
            break
    # invoke GRC flowgraph for 2nd sequence
    if not seq2_sent:
        tb = send_seq_2.top_block()
        tb.Run(True)
        seq2_sent = True
        if time.time() < endtime:
            break
The problem is that only the first if statement invokes the flow graph (which interacts with the hardware), and I'm stuck there. I could use a thread, but I'm inexperienced in timing out threads in Python. I doubt it is even possible, because killing threads doesn't seem to be within the APIs. This script only has to work on Linux...
How do you properly handle blocking functions in Python, without killing the whole program?
Another more concrete example for this problem is:
import signal, os

def handler(signum, frame):
    # print 'Signal handler called with signal', signum
    #raise IOError("Couldn't open device!")
    import time
    print "wait"
    time.sleep(3)

def foo():
    # Set the signal handler and a 3-second alarm
    signal.signal(signal.SIGALRM, handler)
    signal.alarm(3)
    # This open() may hang indefinitely
    fd = os.open('/dev/ttys0', os.O_RDWR)
    signal.alarm(0)  # Disable the alarm

foo()
print "hallo"
How do I still get to print "hallo"? ;)
Thanks,
Marius
First of all, the use of signals should be avoided at all costs:
1) It may lead to a deadlock. SIGALRM may reach the process BEFORE the blocking syscall (imagine super-high load in the system!), and then the syscall will not be interrupted. Deadlock.
2) Playing with signals may have some nasty non-local consequences. For example, syscalls in other threads may be interrupted, which is usually not what you want. Normally syscalls are restarted when a (non-deadly) signal is received. When you set up a signal handler, this behavior is automatically turned off for the whole process (or thread group, so to say). Check 'man siginterrupt' on that.
Believe me - I have hit both problems before, and they are not fun at all.
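In Python the second point can be seen directly: the standard signal module exposes siginterrupt(), and signal.signal() implicitly turns syscall restarting off for the signal whose handler it installs. A short sketch:
import signal

def handler(signum, frame):
    print('got SIGALRM')

# Installing a handler implicitly calls siginterrupt(SIGALRM, True):
# from now on, blocking syscalls interrupted by SIGALRM fail with EINTR
# instead of being restarted.
signal.signal(signal.SIGALRM, handler)

# Explicitly restore the "restart interrupted syscalls" behaviour:
signal.siginterrupt(signal.SIGALRM, False)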
In some cases blocking can be avoided explicitly - I strongly recommend using select() and friends (check the select module in Python) to handle blocking writes and reads. This will not solve a blocking open() call, though.
For that, I've tested this solution and it works well for named pipes. It opens the file in a non-blocking way, then turns non-blocking off and uses select() to time out if nothing becomes available.
import sys, os, select, fcntl

# Open without blocking (succeeds immediately on a FIFO)
f = os.open(sys.argv[1], os.O_RDONLY | os.O_NONBLOCK)

# Switch the descriptor back to blocking mode
flags = fcntl.fcntl(f, fcntl.F_GETFL, 0)
fcntl.fcntl(f, fcntl.F_SETFL, flags & ~os.O_NONBLOCK)

# Wait up to 2 seconds for data to become available
r, w, e = select.select([f], [], [], 2.0)

if r == [f]:
    print 'ready'
    print os.read(f, 100)
else:
    print 'unready'

os.close(f)
Test this with:
mkfifo /tmp/fifo
python <code_above.py> /tmp/fifo (1st terminal)
echo abcd > /tmp/fifo (2nd terminal)
With some additional effort, the select() call can be used as the main loop of the whole program, aggregating all events - you can use libev or libevent, or some Python wrappers around them.
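In the Python standard library the selectors module wraps select() and friends; as a tiny illustration of that event-loop pattern, here is a hypothetical echo server (all names and the port number are made up):
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(sock):
    conn, addr = sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, data=read)

def read(conn):
    data = conn.recv(1024)
    if data:
        conn.sendall(data)       # echo the data back
    else:                        # peer closed the connection
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(('localhost', 8765))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, data=accept)

while True:                      # the select() call is the main loop
    for key, events in sel.select(timeout=1.0):
        key.data(key.fileobj)    # dispatch to accept() or read()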
When you can't explicitly force non-blocking behavior, say you're just using an external library, it's going to be much harder. Threads may do it, but obviously that is not a state-of-the-art solution; it is usually just wrong.
I'm afraid that in general you can't solve this in a robust way - it really depends on WHAT you block.
IIUC, each top_block has a stop method. So you actually can run the top_block in a thread, and issue a stop if the timeout has arrived. It would be better if the top_block's wait() also had a timeout, but alas, it doesn't.
In the main thread, you then need to wait for two cases: a) the top_block completes, and b) the timeout expires. Busy-waits are evil :-), so you should use the thread's join-with-timeout to wait for the thread. If the thread is still alive after the join, you need to stop the top_block.
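A sketch of that scheme, assuming the top_block object exposes the run() and stop() methods described above (TIMEOUT and send_seq_2 come from the question's context):
import threading

tb = send_seq_2.top_block()          # flowgraph from the question

worker = threading.Thread(target=tb.run)
worker.start()

worker.join(TIMEOUT)                 # waits for case a) or case b)
if worker.is_alive():                # the timeout expired first
    tb.stop()                        # ask the flowgraph to shut down
    worker.join()                    # run() should now return promptly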
You can set a signal alarm that will interrupt your call with a timeout:
http://docs.python.org/library/signal.html
import signal

signal.alarm(1)  # 1 second
my_blocking_call()
signal.alarm(0)
You can also set a signal handler if you want to make sure it won't destroy your application:
def my_handler(signum, frame):
    pass

signal.signal(signal.SIGALRM, my_handler)
EDIT:
What's wrong with this piece of code? It should not abort your application:
import signal, time

def handler(signum, frame):
    print "Timed-out"

def foo():
    # Set the signal handler and a 3-second alarm
    signal.signal(signal.SIGALRM, handler)
    signal.alarm(3)
    # This sleep() would otherwise block for 5 seconds
    time.sleep(5)
    signal.alarm(0)  # Disable the alarm

foo()
print "hallo"
The thing is:
- The default handler for SIGALRM aborts the application; if you set your own handler, it no longer stops the application.
- Receiving a signal usually interrupts system calls (which then unblocks your application).
The easy part of your question relates to signal handling. From the perspective of the Python runtime, a signal received while the interpreter is making a system call is presented to your Python code as an OSError exception with an errno attribute corresponding to errno.EINTR.
So this probably works roughly as you intended:
#!/usr/bin/env python
import signal, os, errno, time

def handler(signum, frame):
    # print 'Signal handler called with signal', signum
    #raise IOError("Couldn't open device!")
    print "timed out"
    time.sleep(3)

def foo():
    # Set the signal handler and a 3-second alarm
    signal.signal(signal.SIGALRM, handler)
    try:
        signal.alarm(3)
        # This open() may hang indefinitely
        fd = os.open('/dev/ttys0', os.O_RDWR)
    except OSError, e:
        if e.errno != errno.EINTR:
            raise e
    signal.alarm(0)  # Disable the alarm

foo()
print "hallo"
Note I've moved the import of time out of the function definition as it seems to be poor form to hide imports in that way. It's not at all clear to me why you're sleeping in your signal handler and, in fact, it seems like a rather bad idea.
The key point I'm trying to make is that any (non-ignored) signal will interrupt your main line of Python code execution. Your handler will be invoked with arguments indicating which signal number triggered the execution (allowing for one Python function to be used for handling many different signals) and a frame object (which could be used for debugging or instrumentation of some sort).
Because the main flow through the code is interrupted, it's necessary for you to wrap that code in some exception handling in order to regain control after such events have occurred. (Incidentally, if you were writing code in C you'd have the same concern: you have to be prepared for any of your library functions with underlying system calls to return errors, and to handle EINTR in the system errno by looping back to retry or branching to some alternative in your main line, such as proceeding to some other file, or running without any file/input, etc.)
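That retry branch translates to Python roughly like this (a sketch; note that from Python 3.5 on, PEP 475 makes the interpreter retry interrupted system calls automatically, so this pattern mainly matters on older interpreters like the one used above):
import errno
import os

def open_retrying(path, flags):
    # Loop back and retry whenever the call is interrupted by a signal.
    while True:
        try:
            return os.open(path, flags)
        except OSError as e:
            if e.errno != errno.EINTR:
                raise        # a real error, not just an interruption
            # EINTR: a signal handler ran; loop around and retry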
As others have indicated in their responses to your question, basing your approach on SIGALRM is likely to be fraught with portability and reliability issues. Worse, some of these issues may be race conditions that you'll never encounter in your testing environment and that may only occur under conditions that are extremely hard to reproduce. The ugly details tend to be in cases of re-entrancy - what happens if signals are dispatched during execution of your signal handler?
I've used SIGALRM in some scripts and it hasn't been an issue for me under Linux. The code I was working on was suited to the task. It might be adequate for your needs.
Your primary question is difficult to answer without knowing more about how this Gnuradio code behaves, what sorts of objects you instantiate from it, and what sorts of objects they return.
Glancing at the docs to which you've linked, I see that they don't seem to offer any sort of "timeout" argument or setting that could be used to limit blocking behavior directly. In the table under "Controlling Flow Graphs" I see that they specifically say that .run() can execute indefinitely or until SIGINT is received. I also note that .start() can start threads in your application and, it seems, returns control to your Python code line while those are running. (That seems to depend on the nature of your flow graphs, which I don't understand sufficiently).
It sounds like you could create your flow graphs, .start() them, and then (after some time processing or sleeping in your main line of Python code) call the .lock() method on your controlling object (tb?). This, I'm guessing, puts the Python representation of the state ... the Python object ... into a quiescent mode to allow you to query the state or, as they say, reconfigure your flow graph. If you call .run() it will call .wait() after it calls .start(); and .wait() will apparently run until either all blocks "indicate they are done" or until you call the object's .stop() method.
So it sounds like you want to use .start() and neither .run() nor .wait(); then call .stop() after doing any other processing (including time.sleep()).
Perhaps something as simple as:
tb = send_seq_2.top_block()
tb.start()
time.sleep(endtime - time.time())
tb.stop()
seq1_sent = True
tb = send_seq_2.top_block()
tb.start()
seq2_sent = True
.. though I'm suspicious of my time.sleep() there. Perhaps you want to do something else to query the tb object's state (perhaps sleeping for smaller intervals, calling its .lock() method, accessing attributes that I know nothing about, and then calling its .unlock() before sleeping again).
if not seq1_sent:
    tb = send_seq_2.top_block()
    tb.Run(True)
    seq1_sent = True
    if time.time() < endtime:
        break
If the 'if time.time() < endtime:' test succeeds, you will break out of the loop and the seq2_sent part will never be reached. Maybe you mean 'time.time() > endtime' in that test?
You could try using deferred execution... the Twisted framework uses Deferreds a lot:
http://www6.uniovi.es/python/pycon/papers/deferex/
You mention killing threads in Python - this is partially possible, although you can kill/interrupt another thread only while Python code is running, not while it is in C code, so this may not help you as you'd like.
see this answer to another question:
python: how to send packets in multi thread and then the thread kill itself
or google for killable python threads for more details like this:
http://code.activestate.com/recipes/496960-thread2-killable-threads/
If you want to set a timeout on a blocking function, threading.Thread has the method join(timeout), which blocks until the timeout expires or the thread finishes. Note that join() does not kill the thread; if it is still running after the timeout, you need to stop the flowgraph as described above.
Basically, something like this should do what you want:
import threading

my_thread = threading.Thread(target=send_seq_2.top_block)
my_thread.start()
my_thread.join(TIMEOUT)  # returns after TIMEOUT seconds even if the thread is still running