python, break function after 3 seconds

Sometimes my function will hang. If it runs for longer than 3 seconds, I want to quit out of the function. How can I do this?
Thanks

Well, you should probably just figure out why it is hanging. If it is taking a long time to do the work, then perhaps you could be doing something more efficiently. If it needs more than three seconds, then how will breaking in the middle help you? The assumption is that the work actually needs to be done.
What call is hanging? If you don't own the code to the call that is hanging you're SOL unless you run it on another thread and create a timer to watch it.
This sounds to me like it may be a case of trying to find an answer to the wrong question, but perhaps you are waiting on a connection or some operation that should have a timeout period. With the information (or lack thereof) you have given us it is impossible to tell which is true. How about posting some code so that we are able to give a more definitive answer?

I had to do something like that, too. I whipped up a module that provides a decorator to limit the runtime of a function:
import logging
import signal
from functools import wraps

MODULELOG = logging.getLogger(__name__)

class Timeout(Exception):
    """The timeout handler was invoked"""

def _timeouthandler(signum, frame):
    """This function is called by SIGALRM"""
    MODULELOG.info('Invoking the timeout')
    raise Timeout

def timeout(seconds):
    """Decorate a function to set a timeout on its execution"""
    def wrap(func):
        """Wrap a timer around the given function"""
        @wraps(func)
        def inner(*args, **kwargs):
            """Set an execution timer, execute the wrapped function,
            then clear the timer."""
            MODULELOG.debug('setting an alarm for %d seconds on %s' % (seconds, func))
            oldsignal = signal.signal(signal.SIGALRM, _timeouthandler)
            signal.alarm(seconds)
            try:
                return func(*args, **kwargs)
            finally:
                MODULELOG.debug('clearing the timer on %s' % func)
                signal.alarm(0)
                signal.signal(signal.SIGALRM, oldsignal)
        return inner
    return wrap
I use it like:
@limits.timeout(300)
def dosomething(args): ...
In my case, dosomething was calling an external program which would periodically hang and I wanted to be able to cleanly exit whenever that happened.

I was able to set a timeout on input by using the select module.
You should be able to set your input type appropriately.
i, o, e = select.select([sys.stdin], [], [], 5)
This will listen for input for 5 seconds before moving on.
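As a runnable sketch of that pattern (note that select on file objects like sys.stdin works on Unix; on Windows, select only accepts sockets):
import select
import sys

# Wait up to 5 seconds for something readable on stdin.
ready, _, _ = select.select([sys.stdin], [], [], 5)
if ready:
    line = sys.stdin.readline()
    print('got: {}'.format(line.rstrip()))
else:
    print('no input within 5 seconds, moving on')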


Set function timeout without having to use contextlib [duplicate]

I looked online and found some SO discussions and ActiveState recipes for running some code with a timeout. It looks like there are some common approaches:
Use a thread that runs the code, and join it with the timeout. If the timeout elapses, kill the thread. This is not directly supported in Python (it uses the private _Thread__stop function), so it is bad practice.
Use signal.SIGALRM - but this approach does not work on Windows!
Use a subprocess with a timeout - but this is too heavy - what if I want to start an interruptible task often? I don't want to fire up a process for each one!
So, what is the right way? I'm not asking about workarounds (e.g. use Twisted and async IO), but the actual way to solve the actual problem - I have some function and I want to run it only with some timeout. If the timeout elapses, I want control back. And I want it to work on Linux and Windows.
A completely general solution to this really, honestly does not exist. You have to use the right solution for a given domain.
If you want timeouts for code you fully control, you have to write it to cooperate. Such code has to be able to break up into little chunks in some way, as in an event-driven system. You can also do this by threading if you can ensure nothing will hold a lock too long, but handling locks right is actually pretty hard.
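A minimal sketch of what such cooperating code can look like, assuming the work splits into chunks (process_chunk is a made-up stand-in, not from any answer here):
import time

def process_chunk(chunk):
    # Hypothetical stand-in for one small unit of the real job.
    return chunk * 2

def run_cooperatively(chunks, seconds):
    # The worker checks a deadline between chunks instead of being
    # interrupted from outside.
    deadline = time.monotonic() + seconds
    results = []
    for chunk in chunks:
        if time.monotonic() > deadline:
            raise TimeoutError('gave up after {} seconds'.format(seconds))
        results.append(process_chunk(chunk))
    return results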
If you want timeouts because you're afraid code is out of control (for example, if you're afraid the user will ask your calculator to compute 9**(9**9)), you need to run it in another process. This is the only easy way to sufficiently isolate it. Running it in your event system or even a different thread will not be enough. It is also possible to break things up into little chunks similar to the other solution, but requires very careful handling and usually isn't worth it; in any event, that doesn't allow you to do the same exact thing as just running the Python code.
What you might be looking for is the multiprocessing module. If subprocess is too heavy, then this may not suit your needs either.
import time
import multiprocessing

def do_this_other_thing_that_may_take_too_long(duration):
    time.sleep(duration)
    return 'done after sleeping {0} seconds.'.format(duration)

pool = multiprocessing.Pool(1)
print 'starting....'
res = pool.apply_async(do_this_other_thing_that_may_take_too_long, [8])
for timeout in range(1, 10):
    try:
        print '{0}: {1}'.format(timeout, res.get(timeout))
    except multiprocessing.TimeoutError:
        print '{0}: timed out'.format(timeout)
print 'end'
If it's network related you could try:
import socket
socket.setdefaulttimeout(number)
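For illustration, a small sketch (the address is a deliberately unroutable test value, not from the answer):
import socket

socket.setdefaulttimeout(2)
try:
    # 10.255.255.1 is typically unroutable, so this connect attempt hangs
    # until the 2-second default timeout fires.
    socket.create_connection(('10.255.255.1', 80))
except socket.timeout:
    print('network call gave up after 2 seconds')
except OSError as exc:
    print('failed for another reason: {}'.format(exc))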
I found this with the eventlet library:
http://eventlet.net/doc/modules/timeout.html
from eventlet.timeout import Timeout

timeout = Timeout(seconds, exception)
try:
    ...  # execution here is limited by timeout
finally:
    timeout.cancel()
For "normal" Python code, that doesn't linger prolongued times in C extensions or I/O waits, you can achieve your goal by setting a trace function with sys.settrace() that aborts the running code when the timeout is reached.
Whether that is sufficient or not depends on how co-operating or malicious the code you run is. If it's well-behaved, a tracing function is sufficient.
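A minimal sketch of that approach, under the assumption that the code is pure Python (the trace function fires on call and line events, so it cannot interrupt a single long-running C call):
import sys
import time

def run_with_trace_timeout(func, seconds):
    deadline = time.monotonic() + seconds

    def tracer(frame, event, arg):
        # Runs on every call/line event of the traced code; raising here
        # aborts whatever Python code is currently executing.
        if time.monotonic() > deadline:
            raise TimeoutError('aborted after {} seconds'.format(seconds))
        return tracer

    old = sys.gettrace()
    sys.settrace(tracer)
    try:
        return func()
    finally:
        sys.settrace(old)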
Another way is to use faulthandler:
import time
import faulthandler

faulthandler.enable()
try:
    # Dump the traceback of all threads after 3 seconds (pass exit=True to
    # terminate the process instead of just printing the traceback).
    faulthandler.dump_traceback_later(3)
    time.sleep(10)
finally:
    faulthandler.cancel_dump_traceback_later()
N.B.: The faulthandler module is part of the stdlib since Python 3.3, where the functions are named dump_traceback_later and cancel_dump_traceback_later (the older PyPI module used the plural dump_tracebacks_later).
If you're running code that you expect to die after a set time, then you should write it properly so that there aren't any negative effects on shutdown, no matter if it's a thread or a subprocess. A command pattern with undo would be useful here.
So, it really depends on what the thread is doing when you kill it. If it's just crunching numbers, who cares if you kill it? If it's interacting with the filesystem and you kill it, then maybe you should really rethink your strategy.
What is supported in Python when it comes to threads? Daemon threads and joins. Why does Python let the main thread exit if you've joined a daemon while it's still active? Because it's understood that someone using daemon threads will (hopefully) write the code in a way that it won't matter when that thread dies. Giving a timeout to a join and then letting main die, and thus taking any daemon threads with it, is perfectly acceptable in this context.
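A small sketch of that join-with-timeout pattern (hang_forever is a made-up worker):
import threading
import time

def hang_forever():
    while True:
        time.sleep(1)

worker = threading.Thread(target=hang_forever, daemon=True)
worker.start()
worker.join(3)  # give up waiting after 3 seconds
if worker.is_alive():
    print('worker is still stuck; since it is a daemon, it dies when main exits')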
I've solved it in this way:
For me it worked great (on Windows and not heavy at all). I hope it will be useful for someone.
import threading
import time

class LongFunctionInside(object):
    lock_state = threading.Lock()
    working = False

    def long_function(self, timeout):
        self.working = True
        timeout_work = threading.Thread(name="thread_name", target=self.work_time, args=(timeout,))
        timeout_work.setDaemon(True)
        timeout_work.start()
        while True:  # endless/long work
            time.sleep(0.1)  # at this rate the CPU is almost not used
            if not self.working:  # while the state is working == True, keep working
                break
        self.set_state(True)

    def work_time(self, sleep_time):
        # Thread function that just sleeps for the specified time; on waking
        # up it checks whether the function is still working and, if so,
        # sets the lock-protected variable `working` to False.
        time.sleep(sleep_time)
        if self.working:
            self.set_state(False)

    def set_state(self, state):  # lock-protected state change
        with self.lock_state:
            self.working = state

lw = LongFunctionInside()
lw.long_function(10)
The main idea is to create a thread that just sleeps in parallel to the "long work" and on waking up (after the timeout) changes the lock-protected state variable; the long function checks that variable during its work.
I'm pretty new to Python programming, so if that solution has fundamental errors (resources, timing, deadlock problems), please respond.
Solving it with the 'with' construct, merging the solution from
Timeout function if it takes too long to finish
with this thread, which works better:
import threading, time

class Exception_TIMEOUT(Exception):
    pass

class linwintimeout:
    def __init__(self, f, seconds=1.0, error_message='Timeout'):
        self.seconds = seconds
        self.thread = threading.Thread(target=f)
        self.thread.daemon = True
        self.error_message = error_message

    def handle_timeout(self):
        raise Exception_TIMEOUT(self.error_message)

    def __enter__(self):
        self.thread.start()
        self.thread.join(self.seconds)

    def __exit__(self, type, value, traceback):
        if self.thread.is_alive():
            return self.handle_timeout()

def function():
    while True:
        print "keep printing ...",
        time.sleep(1)

try:
    with linwintimeout(function, seconds=5.0, error_message='exceeded timeout of %s seconds' % 5.0):
        pass
except Exception_TIMEOUT, e:
    print " attention !! exceeded timeout, giving up ... %s " % e

Python timeout decorator

I'm using the code solution mentioned here.
I'm new to decorators, and don't understand why this solution doesn't work if I want to write something like the following:
@timeout(10)
def main_func():
    nested_func()
    while True:
        continue

@timeout(5)
def nested_func():
    print "finished doing nothing"
=> The result of this will be no timeout at all. We will be stuck in an endless loop.
However, if I remove the @timeout annotation from nested_func, I get a timeout error.
For some reason we can't use the decorator on a function and on a nested function at the same time. Any idea why, and how can I correct it to make it work? Assume that the containing function's timeout must always be bigger than the nested timeout.
This is a limitation of the signal module's timing functions, which the decorator you linked uses. Here's the relevant piece of the documentation (with emphasis added by me):
signal.alarm(time)
If time is non-zero, this function requests that a SIGALRM signal be sent to the process in time seconds. Any previously scheduled alarm is canceled (only one alarm can be scheduled at any time). The returned value is then the number of seconds before any previously set alarm was to have been delivered. If time is zero, no alarm is scheduled, and any scheduled alarm is canceled. If the return value is zero, no alarm is currently scheduled. (See the Unix man page alarm(2).) Availability: Unix.
So, what you're seeing is that when your nested_func is called, its timer cancels the outer function's timer.
You can update the decorator to pay attention to the return value of the alarm call (which will be the time before the previous alarm (if any) was due). It's a bit complicated to get the details right, since the inner timer needs to track how long its function ran for, so it can modify the time remaining on the previous timer. Here's an untested version of the decorator that I think gets it mostly right (but I'm not entirely sure it works correctly for all exception cases):
import time
import signal

class TimeoutError(Exception):
    def __init__(self, value="Timed Out"):
        self.value = value
    def __str__(self):
        return repr(self.value)

def timeout(seconds_before_timeout):
    def decorate(f):
        def handler(signum, frame):
            raise TimeoutError()
        def new_f(*args, **kwargs):
            old = signal.signal(signal.SIGALRM, handler)
            old_time_left = signal.alarm(seconds_before_timeout)
            if 0 < old_time_left < seconds_before_timeout:  # never lengthen an existing timer
                signal.alarm(old_time_left)
            start_time = time.time()
            try:
                result = f(*args, **kwargs)
            finally:
                signal.signal(signal.SIGALRM, old)
                if old_time_left > 0:  # deduct f's run time from the saved timer
                    old_time_left -= time.time() - start_time
                    # signal.alarm() takes an int; if the outer timer is
                    # already overdue, schedule it to fire as soon as possible.
                    signal.alarm(max(int(old_time_left), 1))
                else:
                    signal.alarm(0)
            return result
        new_f.func_name = f.func_name
        return new_f
    return decorate
As Blckknght pointed out, you can't use signals for nested decorators - but you can use multiprocessing to achieve that.
You might use this decorator; it supports nested decorators: https://github.com/bitranox/wrapt_timeout_decorator
And as ABADGER1999 points out in his blog https://anonbadger.wordpress.com/2018/12/15/python-signal-handlers-and-exceptions/
using signals and the TimeoutException is probably not the best idea - because it can be caught in the decorated function.
Of course you can use your own exception, derived from the base Exception class, but the code might still not work as expected -
see the next example - you may try it out in jupyter: https://mybinder.org/v2/gh/bitranox/wrapt_timeout_decorator/master?filepath=jupyter_test_wrapt_timeout_decorator.ipynb
import time
from wrapt_timeout_decorator import *

# caveats when using signals - the TimeoutError raised by the signal may be caught
# inside the decorated function.
# So you might use your own exception, derived from the base Exception class.
# In the Python 3.7.1 stdlib there are over 300 pieces of code that will catch your timeout
# if you were to base your exception on Exception. If you base your exception on BaseException,
# there are still 231 places that can potentially catch your exception.
# You should use use_signals=False if you want to make sure that the timeout is handled correctly!
# Therefore the default value for use_signals is False on this decorator!

@timeout(5, use_signals=True)
def mytest(message):
    try:
        print(message)
        for i in range(1, 10):
            time.sleep(1)
            print('{} seconds have passed - lets assume we read a big file here'.format(i))
    # TimeoutError is a subclass of OSError - therefore it is caught here!
    except OSError:
        for i in range(1, 10):
            time.sleep(1)
            print('Whats going on here? - Oops, the Timeout Exception is caught by the OSError! {}'.format(i))
    except Exception:
        # even worse!
        pass
    except:
        # the worst - and it exists more than 300x in the actual Python 3.7 stdlib code!
        # So you can never really rely on catching the TimeoutError when using signals!
        pass

if __name__ == '__main__':
    try:
        mytest('starting')
        print('no Timeout occurred')
    except TimeoutError:
        # this will never be printed because the decorated function implicitly catches the TimeoutError!
        print('Timeout occurred')
There's a better version of the timeout decorator currently on PyPI. It supports both UNIX and non-UNIX based operating systems. The part where SIGNALS are mentioned applies specifically to UNIX.
Assuming you aren't using UNIX: below is a code snippet from the decorator that shows you the list of parameters you can use as required.
def timeout(seconds=None, use_signals=True, timeout_exception=TimeoutError, exception_message=None)
For implementation on a non-UNIX based operating system, this is what I would do:
import time
import timeout_decorator

@timeout_decorator.timeout(10, use_signals=False)
def main_func():
    nested_func()
    while True:
        continue

@timeout_decorator.timeout(5, use_signals=False)
def nested_func():
    print "finished doing nothing"
If you notice, I'm doing use_signals=False. That's all, you should be good to go.

python, calling method on main thread from timer callback

I'm very new to Python development. I need to call a function every x seconds, so I'm trying to use a timer for that, something like:
def start_working_interval():
    def timer_tick():
        do_some_work()  # needs to be called on the main thread
        timer = threading.Timer(10.0, timer_tick)
        timer.start()

    timer = threading.Timer(10.0, timer_tick)
    timer.start()
The do_some_work() method needs to be called on the main thread, and I think using the timer causes it to execute on a different thread.
So my question is: how can I call this method on the main thread?
I'm not sure what you're trying to achieve, but I played with your code and did this:
import threading
import datetime

def do_some_work():
    print datetime.datetime.now()

def start_working_interval():
    def timer_tick():
        do_some_work()
        timer = threading.Timer(10.0, timer_tick)
        timer.start()

    timer_tick()

start_working_interval()
So basically what I did was to set the timer inside timer_tick(), so it will call itself after 10 sec and so on, but I removed the second timer.
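Another common pattern, not from this answer but worth sketching: have the timer thread only enqueue the work, and let the main thread drain the queue, so do_some_work() really runs on the main thread (Python 3 names):
import datetime
import queue
import threading

jobs = queue.Queue()

def do_some_work():
    print(datetime.datetime.now())

def timer_tick():
    jobs.put(do_some_work)  # schedule the call; don't run it on this thread
    threading.Timer(10.0, timer_tick).start()

timer_tick()
while True:        # the main thread's loop
    job = jobs.get()   # blocks until the next tick
    job()              # executes on the main thread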
I needed to do this too, here's what I did:
import threading
import time

MAXBLOCKINGSECONDS = 5  # maximum time that a new task will have to wait before its presence in the queue gets noticed

class repeater:
    repeatergroup = []  # our only static data member; it holds the current list of the repeaters that need to be serviced

    def __init__(self, callback, interval):
        self.callback = callback
        self.interval = abs(interval)  # because negative makes no sense; probably an assert would be better
        self.reset()
        self.processing = False

    def reset(self):
        self.nextevent = time.time() + self.interval

    def whennext(self):
        return self.nextevent - time.time()  # time until next event

    def service(self):
        if time.time() >= self.nextevent:
            if self.processing:  # or however you want to be re-entrant safe or thread safe
                return 0
            self.processing = True
            self.callback(self)  # just stuff all your args into the class and pull them back out?
            # use this calculation if you don't want slew
            self.nextevent += self.interval
            # use this calculation if you do want slew/don't want backlog
            #self.reset()
            # or put it just before the callback
            self.processing = False
            return 1
        return 0

    # this is the transition code between class and classgroup
    # I had these three as a property getter and setter but it was behaving badly/oddly
    def isenabled(self):
        return (self in self.repeatergroup)

    def start(self):
        if not (self in self.repeatergroup):
            self.repeatergroup.append(self)
            # another logical place to call reset if you don't want backlog:
            #self.reset()

    def stop(self):
        if (self in self.repeatergroup):
            self.repeatergroup.remove(self)

    # group calls; in c++ I'd make these static
    def serviceall(self):  # the VB hacker in me wants to name this doevents(), the c hacker in me wants to name this probe
        ret = 0
        for r in self.repeatergroup:
            ret += r.service()
        return ret

    def minwhennext(self, maxwait):  # this should probably be hidden
        ret = maxwait
        for r in self.repeatergroup:
            ret = min(ret, r.whennext())
        return ret

    def sleep(self, seconds):
        if not isinstance(threading.current_thread(), threading._MainThread):
            # if we're not on the main thread, don't process handlers, just sleep
            time.sleep(seconds)
            return
        endtime = time.time() + seconds  # record when caller wants control back
        while time.time() <= endtime:  # spin until then
            while self.serviceall() > 0:  # service each member of the group until none need service
                if (time.time() >= endtime):
                    return  # break out of service loop if caller needs control back already
            # done with servicing for a while; yield control to the os until we have
            # another repeater to service or it's time to return control to the caller
            minsleeptime = min(endtime - time.time(), MAXBLOCKINGSECONDS)  # smaller of caller's requested blocking time and our sanity number (1 min might be fine for some systems, 5 seconds is good for some, 0.25 to 0.03 might be better if there could be video refresh code waiting, 0.15-0.3 seems a common range for software debouncing of hardware buttons)
            minsleeptime = self.minwhennext(minsleeptime)
            time.sleep(max(0, minsleeptime))

###################################################################
# and now some demo code:

def handler1(repeater):
    print("latency is currently {0:0.7}".format(time.time() - repeater.nextevent))
    repeater.count += repeater.interval
    print("Seconds: {0}".format(repeater.count))

def handler2(repeater):  # or self if you prefer
    print("Timed message is: {0}".format(repeater.message))
    if repeater.other.isenabled():
        repeater.other.stop()
    else:
        repeater.other.start()
    repeater.interval += 1

def demo_main():
    counter = repeater(handler1, 1)
    counter.count = 0  # I'm still new enough to python
    counter.start()
    greeter = repeater(handler2, 2)
    greeter.message = "Hello world."  # that this feels like cheating
    greeter.other = counter  # but it simplifies everything.
    greeter.start()
    print("Currently {0} repeaters in service group.".format(len(repeater.repeatergroup)))
    print("About to yield control for a while")
    greeter.sleep(10)
    print("Got control back, going to do some processing")
    time.sleep(5)
    print("About to yield control for a while")
    counter.sleep(20)  # you can use any repeater to access sleep() but
    # it will only service those currently enabled.
    # notice how it gets behind but tries to catch up; we could add repeater.reset()
    # at the beginning of a handler to make it ignore missed events, or at the
    # end to let the timing slide, depending on what kind of processing we're doing
    # and what sort of sensitivity there is to time.

# now just replace all your main thread's calls to time.sleep() with calls to mycounter.sleep()
# and add a repeater.sleep(.01), or a while repeater.serviceall(): pass, to any loop that will take too long.
demo_main()
There's a couple of odd things left to consider:
Would it be better to sort handlers that you'd prefer to run on the main thread from handlers that you don't care about? I later went on to add a threadingstyle property which, depending on its value, would run on the main thread only, on either the main thread or a shared/group thread, or standalone on its own thread. That way longer or more time-sensitive tasks could run without slowing the other threads down as much, or closer to their scheduled time.
I wonder whether, depending on the implementation details of threading, my 'if not main thread: time.sleep(seconds); return' effectively makes it sufficiently more likely to be the main thread's turn, and whether I shouldn't worry about the difference.
(It seems like adding our MAXBLOCKINGSECONDS as the 3rd arg to the sched library could fix its notorious issue of not servicing new events after older, further-in-the-future events have already hit the front of the queue.)

python interrupt a command if it takes longer than it should [duplicate]

Possible Duplicate:
Timeout on a Python function call
How to timeout function in python, timeout less than a second
I am running a function within a for loop, such as the following:
for element in my_list:
    my_function(element)
For some reason, some elements may lead the function into a very long processing time (maybe even some infinite loop that I cannot really trace where it comes from). So I want to add some loop control to skip the current element if its processing takes, for example, more than 2 seconds. How can this be done?
I would discourage the most obvious answer - using signal.alarm() and an alarm signal handler that asynchronously raises an exception to jump out of task execution. In theory it should work great, but in practice the CPython interpreter code doesn't guarantee that the handler is executed within the time frame that you want. Signal handling can be delayed by x number of bytecode instructions, so the exception could still be raised after you explicitly cancel the alarm (outside the context of the try block).
A problem we ran into regularly was that the alarm handler's exception would get raised after the timeoutable code completed.
Since there isn't much available by way of thread control, I have relied on process control for handling tasks that must be subjected to a timeout. Basically, the gist is to hand off the task to a child process and kill the child process if the task takes too long. multiprocessing.Pool isn't quite that sophisticated - so I have a home-rolled pool for that level of control; a simplified sketch follows.
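A minimal sketch of that process-based, skip-on-timeout idea (not the author's home-rolled pool; my_function is a made-up stand-in for the per-element work):
import multiprocessing

def my_function(element):
    # Hypothetical stand-in for the real per-element work.
    return element * 2

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=1)
    for element in [1, 2, 3]:
        result = pool.apply_async(my_function, (element,))
        try:
            print(result.get(timeout=2))  # skip the element after 2 seconds
        except multiprocessing.TimeoutError:
            print('skipped', element)
            pool.terminate()  # the stuck worker has to be killed...
            pool = multiprocessing.Pool(processes=1)  # ...and replaced
    pool.terminate()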
Something like this:
import signal
import time

class Timeout(Exception):
    pass

def try_one(func, t):
    def timeout_handler(signum, frame):
        raise Timeout()

    old_handler = signal.signal(signal.SIGALRM, timeout_handler)
    signal.alarm(t)  # trigger the alarm in t seconds
    try:
        t1 = time.time()  # time.clock() is gone in Python 3.8; wall-clock time matches the alarm anyway
        func()
        t2 = time.time()
    except Timeout:
        print('{} timed out after {} seconds'.format(func.__name__, t))
        return None
    finally:
        signal.signal(signal.SIGALRM, old_handler)
        signal.alarm(0)
    return t2 - t1

def troublesome():
    while True:
        pass

try_one(troublesome, 2)
The function troublesome will never return on its own. If you use try_one(troublesome, 2), it successfully times out after 2 seconds.

How to timeout function in python, timeout less than a second

Specification of the problem:
I'm searching through a really great amount of lines of a log file, and I'm distributing those lines into groups according to regular expressions (regexes) I have stored, using the re.match() function. Unfortunately some of my regexes are too complicated and Python sometimes gets itself into backtracking hell. Due to this I need to protect it with some kind of timeout.
Problems:
re.match, which I'm using, is a Python library function, and as I found out somewhere here on StackOverflow (I'm really sorry, I can not find the link now), it is very difficult to interrupt a thread that is running inside a library call. For this reason threads are out of the game.
Because evaluating the re.match function takes a relatively short time and I want to analyse a great amount of lines with it, I need some timeout function that won't take too long to execute (this makes threads even less suitable, since it takes a really long time to initialise a new thread) and can be set to less than one second.
For those reasons, the answers here - Timeout on a function call
- and here - Timeout function if it takes too long to finish with a decorator (alarm - 1 sec and more) - are off the table.
I've spent this morning searching for a solution to this question but I did not find any satisfactory answer.
Solution:
I've just modified a script posted here: Timeout function if it takes too long to finish.
And here is the code:
from functools import wraps
import errno
import os
import signal

class TimeoutError(Exception):
    pass

def timeout(seconds=10, error_message=os.strerror(errno.ETIME)):
    def decorator(func):
        def _handle_timeout(signum, frame):
            raise TimeoutError(error_message)

        def wrapper(*args, **kwargs):
            signal.signal(signal.SIGALRM, _handle_timeout)
            signal.setitimer(signal.ITIMER_REAL, seconds)  # used a timer instead of alarm, for sub-second resolution
            try:
                result = func(*args, **kwargs)
            finally:
                signal.setitimer(signal.ITIMER_REAL, 0)  # cancel the timer the same way it was set
            return result

        return wraps(func)(wrapper)
    return decorator
And then you can use it like this:
import time
from timeout import timeout

@timeout(0.01)
def loop():
    while True:
        pass

try:
    begin = time.time()
    loop()
except TimeoutError, e:
    print "Time elapsed: {:.3f}s".format(time.time() - begin)
Which prints
Time elapsed: 0.010s
