thread.start_new_thread: transfer exception to main-thread [duplicate] - python

This question already has answers here:
Possible Duplicate: Catch a thread's exception in the caller thread in Python
Closed 10 years ago.
I have some given code that calls
thread.start_new_thread()
As the Python docs say: "When the function terminates with an unhandled exception, a stack trace is printed and then the thread exits (but other threads continue to run)."
But I also want to terminate the main thread when the new thread's function terminates with an exception, i.e. the exception should be transferred to the main thread. How can I do this?
edit:
here is part of my code:
def CaptureRegionAsync(region=SCREEN, name="Region", asyncDelay=None, subDir="de"):
    if asyncDelay is None:
        CaptureRegion(region, name, subDir)
    else:
        thread.start_new_thread(_CaptureRegionAsync, (region, name, asyncDelay, subDir))

def _CaptureRegionAsync(region, name, asyncDelay, subDir):
    time.sleep(max(0, asyncDelay))
    CaptureRegion(region, name, subDir)

def CaptureRegion(region=SCREEN, name="Region", subDir="de"):
    ...
    if found:
        return
    else:
        raise Exception(u"[warn] Screenshot has changed: %s" % filename)

CaptureRegionAsync(myregion, "name", 2)

UPD: This is not the best solution, as your question turned out to be different from what I thought it was: I expected you were trying to deal with an exception in thread code you cannot modify. However, I decided not to delete the answer.
It's hard to catch an exception from another thread; usually it has to be transferred manually.
Can you wrap the thread function? If you wrap the worker function that crashes in try...except, you can perform the actions needed to exit the main thread in the except block (sys.exit seems to be useless there, which surprised me, but there are thread.interrupt_main() and os.abort(), though you may want something more graceful, like setting a flag that the main thread checks regularly).
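For illustration, here is a minimal sketch of such a manual transfer (my own Python 3 example; the name PropagatingThread is made up). The worker records the exception, and the main thread re-raises it on join():

import threading

class PropagatingThread(threading.Thread):
    # Stores an uncaught exception from run() and re-raises it on join().
    def run(self):
        self.exc = None
        try:
            super().run()
        except BaseException as e:
            self.exc = e

    def join(self, timeout=None):
        super().join(timeout)
        if self.exc is not None:
            raise self.exc

def worker():
    raise RuntimeError("Kaboom")

t = PropagatingThread(target=worker)
t.start()
t.join()  # re-raises RuntimeError in the main thread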
However, if you cannot wrap the function (cannot modify the third-party code calling start_new_thread), you may try monkey-patching the thread module (or the third-party module itself). The patched version of start_new_thread() should wrap the function you worry about:
import thread
import sys
import time

def t():
    print "Thread started"
    raise Exception("t Kaboom")

def t1():
    print "Thread started"
    raise Exception("t1 Kaboom")

def decorate(s_n_t):
    def decorated(f, a, kw={}):
        if f != t:
            return s_n_t(f, a, kw)
        def thunk(*args, **kwargs):
            try:
                f(*args, **kwargs)
            except:
                print "Kaboom"
                thread.interrupt_main()  # or os.abort()
        return s_n_t(thunk, a, kw)
    return decorated

# Now let's do monkey patching:
thread.start_new_thread = decorate(thread.start_new_thread)
thread.start_new_thread(t1, ())
time.sleep(5)
thread.start_new_thread(t, ())
time.sleep(5)
Here an exception in t() causes thread.interrupt_main()/os.abort(). Other thread functions in the application are not affected.
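On Python 3.8+ there is also threading.excepthook, which runs for any uncaught exception in any thread; a minimal sketch (my own addition, not from the original answer) that takes the whole process down:

import os
import threading

def exit_on_thread_exception(args):
    # args carries exc_type, exc_value, exc_traceback and the thread object
    print("Uncaught exception in %s: %r" % (args.thread.name, args.exc_value))
    os._exit(1)  # hard-exits the process, main thread included

threading.excepthook = exit_on_thread_exception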

Related

How to timeout a looped script without stopping the loop [duplicate]

There is a socket-related function call in my code; that function is from another module and thus out of my control. The problem is that it occasionally blocks for hours, which is totally unacceptable. How can I limit the function's execution time from my code? I guess the solution must utilize another thread.
An improvement on @rik.the.vik's answer would be to use the with statement to give the timeout function some syntactic sugar:
import signal
from contextlib import contextmanager

class TimeoutException(Exception):
    pass

@contextmanager
def time_limit(seconds):
    def signal_handler(signum, frame):
        raise TimeoutException("Timed out!")
    signal.signal(signal.SIGALRM, signal_handler)
    signal.alarm(seconds)
    try:
        yield
    finally:
        signal.alarm(0)

try:
    with time_limit(10):
        long_function_call()
except TimeoutException as e:
    print("Timed out!")
I'm not sure how cross-platform this might be, but using signals and alarm might be a good way of looking at this. With a little work you could make this completely generic as well and usable in any situation.
http://docs.python.org/library/signal.html
So your code is going to look something like this.
import signal

def signal_handler(signum, frame):
    raise Exception("Timed out!")

signal.signal(signal.SIGALRM, signal_handler)
signal.alarm(10)  # Ten seconds
try:
    long_function_call()
except Exception, msg:
    print "Timed out!"
Here's a Linux/OSX way to limit a function's running time. This is in case you don't want to use threads, and want your program to wait until the function ends, or the time limit expires.
from multiprocessing import Process
from time import sleep

def f(time):
    sleep(time)

def run_with_limited_time(func, args, kwargs, time):
    """Runs a function with a time limit.

    :param func: The function to run
    :param args: The function's args, given as a tuple
    :param kwargs: The function's keywords, given as a dict
    :param time: The time limit in seconds
    :return: True if the function ended successfully. False if it was terminated.
    """
    p = Process(target=func, args=args, kwargs=kwargs)
    p.start()
    p.join(time)
    if p.is_alive():
        p.terminate()
        return False
    return True

if __name__ == '__main__':
    print run_with_limited_time(f, (1.5, ), {}, 2.5)  # True
    print run_with_limited_time(f, (3.5, ), {}, 2.5)  # False
I prefer a context manager approach because it allows the execution of multiple Python statements within a with time_limit statement. Because Windows does not have SIGALRM, a more portable and perhaps more straightforward method is to use a Timer:
from contextlib import contextmanager
import threading
import _thread

class TimeoutException(Exception):
    def __init__(self, msg=''):
        self.msg = msg

@contextmanager
def time_limit(seconds, msg=''):
    timer = threading.Timer(seconds, lambda: _thread.interrupt_main())
    timer.start()
    try:
        yield
    except KeyboardInterrupt:
        raise TimeoutException("Timed out for operation {}".format(msg))
    finally:
        # if the action ends in specified time, timer is canceled
        timer.cancel()

import time

# ends after 5 seconds
with time_limit(5, 'sleep'):
    for i in range(10):
        time.sleep(1)

# this will actually end after 10 seconds
with time_limit(5, 'sleep'):
    time.sleep(10)
The key technique here is the use of _thread.interrupt_main to interrupt the main thread from the timer thread. One caveat is that the main thread does not always respond to the KeyboardInterrupt raised by the Timer quickly. For example, time.sleep() calls a system function so a KeyboardInterrupt will be handled after the sleep call.
Here is a simple way of getting the desired effect:
https://pypi.org/project/func-timeout
This saved my life.
And now an example of how it works: let's say you have a huge list of items to be processed and you are iterating your function over those items. However, for some strange reason, your function gets stuck on item n without raising an exception. You need the other items to be processed, the more the better. In this case, you can set a timeout for processing each item:
import time
import func_timeout

def my_function(n):
    """Sleep for n seconds and return n squared."""
    print(f'Processing {n}')
    time.sleep(n)
    return n**2

def main_controller(max_wait_time, all_data):
    """
    Feed my_function with a list of items to process (all_data).
    However, if max_wait_time is exceeded, return the item and a fail info.
    """
    res = []
    for data in all_data:
        try:
            my_square = func_timeout.func_timeout(
                max_wait_time, my_function, args=[data]
            )
            res.append((my_square, 'processed'))
        except func_timeout.FunctionTimedOut:
            print('error')
            res.append((data, 'fail'))
            continue
    return res

timeout_time = 2.1  # my time limit
all_data = range(1, 10)  # the data to be processed
res = main_controller(timeout_time, all_data)
print(res)
Doing this from within a signal handler is dangerous: you might be inside an exception handler at the time the exception is raised, and leave things in a broken state. For example,
def function_with_enforced_timeout():
    f = open_temporary_file()
    try:
        ...
    finally:
        here()
        unlink(f.filename)
If your exception is raised while here() is executing, the temporary file will never be deleted.
The solution here is for asynchronous exceptions to be postponed until the code is not inside exception-handling code (an except or finally block), but Python doesn't do that.
Note that this won't interrupt anything while executing native code; it'll only interrupt it when the function returns, so this may not help this particular case. (SIGALRM itself might interrupt the call that's blocking--but socket code typically simply retries after an EINTR.)
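Since the blocking call in this question is socket I/O, it is worth noting (my addition, under the assumption that the third-party module creates its sockets after your code runs) that a socket-level timeout may sidestep the problem entirely; socket.setdefaulttimeout() applies to all sockets created afterwards:

import socket

socket.setdefaulttimeout(30)  # blocking socket operations now raise socket.timeout after 30s

try:
    third_party_call()  # hypothetical call into the blocking module
except socket.timeout:
    print("gave up after 30 seconds")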
Doing this with threads is a better idea, since it's more portable than signals. Since you're starting a worker thread and blocking until it finishes, there are none of the usual concurrency worries. Unfortunately, there's no way to deliver an exception asynchronously to another thread in Python (other thread APIs can do this). It'll also have the same issue with sending an exception during an exception handler, and require the same fix.
You don't have to use threads. You can use another process to do the blocking work, for instance, maybe using the subprocess module. If you want to share data structures between different parts of your program then Twisted is a great library for giving yourself control of this, and I'd recommend it if you care about blocking and expect to have this trouble a lot. The bad news with Twisted is you have to rewrite your code to avoid any blocking, and there is a fair learning curve.
You can use threads to avoid blocking, but I'd regard this as a last resort, since it exposes you to a whole world of pain. Read a good book on concurrency before even thinking about using threads in production, e.g. Jean Bacon's "Concurrent Systems". I work with a bunch of people who do really cool high performance stuff with threads, and we don't introduce threads into projects unless we really need them.
The only "safe" way to do this, in any language, is to use a secondary process to do that timeout-thing, otherwise you need to build your code in such a way that it will time out safely by itself, for instance by checking the time elapsed in a loop or similar. If changing the method isn't an option, a thread will not suffice.
Why? Because you're risking leaving things in a bad state when you do. If the thread is simply killed mid-method, locks being held, etc. will just be held, and cannot be released.
So look at the process way, do not look at the thread way.
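A hedged sketch of the process way using only the standard library (my own example, not from this answer). One caveat: ProcessPoolExecutor does not kill a worker that outlives the timeout, it only stops waiting for it, so for a hard kill you still want multiprocessing.Process.terminate() as shown earlier:

import concurrent.futures
import time

def slow(x):
    time.sleep(x)  # stands in for the untrusted blocking work
    return x * x

if __name__ == '__main__':
    with concurrent.futures.ProcessPoolExecutor(max_workers=1) as ex:
        fut = ex.submit(slow, 1)
        try:
            print(fut.result(timeout=2.5))  # prints 1 well before the deadline
        except concurrent.futures.TimeoutError:
            print("timed out")  # the worker process itself keeps running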
I would usually prefer using a contextmanager as suggested by @josh-lee, but in case someone is interested in having this implemented as a decorator, here's an alternative.
Here's how it would look:
import time
from timeout import timeout

class Test(object):
    @timeout(2)
    def test_a(self, foo, bar):
        print foo
        time.sleep(1)
        print bar
        return 'A Done'

    @timeout(2)
    def test_b(self, foo, bar):
        print foo
        time.sleep(3)
        print bar
        return 'B Done'

t = Test()
print t.test_a('python', 'rocks')
print t.test_b('timing', 'out')
And this is the timeout.py module:
import threading

class TimeoutError(Exception):
    pass

class InterruptableThread(threading.Thread):
    def __init__(self, func, *args, **kwargs):
        threading.Thread.__init__(self)
        self._func = func
        self._args = args
        self._kwargs = kwargs
        self._result = None

    def run(self):
        self._result = self._func(*self._args, **self._kwargs)

    @property
    def result(self):
        return self._result

class timeout(object):
    def __init__(self, sec):
        self._sec = sec

    def __call__(self, f):
        def wrapped_f(*args, **kwargs):
            it = InterruptableThread(f, *args, **kwargs)
            it.start()
            it.join(self._sec)
            if not it.is_alive():
                return it.result
            raise TimeoutError('execution expired')
        return wrapped_f
The output:
python
rocks
A Done
timing
Traceback (most recent call last):
...
timeout.TimeoutError: execution expired
out
Notice that even if the TimeoutError is thrown, the decorated method will continue to run in a different thread. If you also want this thread to be "stopped", see: Is there any way to kill a Thread in Python?
Using a simple decorator
Here's the version I made after studying the above answers. Pretty straightforward.
import signal
from contextlib import contextmanager
from functools import wraps

class TimeoutException(Exception):
    pass

def function_timeout(seconds: int):
    """Wrapper of decorator to pass arguments."""
    def decorator(func):
        @contextmanager
        def time_limit(seconds_):
            def signal_handler(signum, frame):  # noqa
                raise TimeoutException(f"Timed out in {seconds_} seconds!")
            signal.signal(signal.SIGALRM, signal_handler)
            signal.alarm(seconds_)
            try:
                yield
            finally:
                signal.alarm(0)

        @wraps(func)
        def wrapper(*args, **kwargs):
            with time_limit(seconds):
                return func(*args, **kwargs)
        return wrapper
    return decorator
How to use?
@function_timeout(seconds=5)
def my_naughty_function():
    while True:
        print("Try to stop me ;-p")
Well of course, don't forget to import the function if it is in a separate file.
Here's a timeout function I think I found via Google, and it works for me.
From:
http://code.activestate.com/recipes/473878/
def timeout(func, args=(), kwargs={}, timeout_duration=1, default=None):
    '''This function will spawn a thread and run the given function using the
    args and kwargs, and return the given default value if the
    timeout_duration is exceeded.
    '''
    import threading

    class InterruptableThread(threading.Thread):
        def __init__(self):
            threading.Thread.__init__(self)
            self.result = default

        def run(self):
            try:
                self.result = func(*args, **kwargs)
            except:
                self.result = default

    it = InterruptableThread()
    it.start()
    it.join(timeout_duration)
    # If the thread is still alive, result is still the default;
    # either way, the stored result is returned.
    return it.result
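A usage sketch (my own): a call that outlives the limit falls back to the default value:

import time

print(timeout(time.sleep, args=(10,), timeout_duration=1, default="gave up"))
# prints "gave up" after about one second; note the worker thread
# keeps sleeping in the background and delays interpreter exit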
The method from @user2283347 works when tested, but we want to get rid of the traceback messages. Using the pass trick from Remove traceback in Python on Ctrl-C, the modified code is:
from contextlib import contextmanager
import threading
import _thread

class TimeoutException(Exception):
    pass

@contextmanager
def time_limit(seconds):
    timer = threading.Timer(seconds, lambda: _thread.interrupt_main())
    timer.start()
    try:
        yield
    except KeyboardInterrupt:
        pass
    finally:
        # if the action ends in specified time, timer is canceled
        timer.cancel()
def timeout_svm_score(i):
    # from sklearn import svm
    # import numpy as np
    # from IPython.core.display import display
    # %store -r names X Y
    clf = svm.SVC(kernel='linear', C=1).fit(np.nan_to_num(X[[names[i]]]), Y)
    score = clf.score(np.nan_to_num(X[[names[i]]]), Y)
    # scoressvm.append((score, names[i]))
    display((score, names[i]))

%%time
with time_limit(5):
    i = 0
    timeout_svm_score(i)
# Wall time: 14.2 s

%%time
with time_limit(20):
    i = 0
    timeout_svm_score(i)
# (0.04541284403669725, '计划飞行时间')
# Wall time: 16.1 s

%%time
with time_limit(5):
    i = 14
    timeout_svm_score(i)
# Wall time: 5h 43min 41s
We can see that this method may take far too long to interrupt the calculation: we asked for 5 seconds, but it only stopped after 5 hours.
This code works on Windows Server Datacenter 2016 with Python 3.7.3; I didn't test it on Unix. After mixing some answers from Google and StackOverflow, it finally worked for me like this:
from multiprocessing import Process, Lock
import time
import os

def f(lock, id, sleepTime):
    lock.acquire()
    print("I'm P" + str(id) + " Process ID: " + str(os.getpid()))
    lock.release()
    time.sleep(sleepTime)  # sleeps for some time
    print("Process: " + str(id) + " took this much time:" + str(sleepTime))
    time.sleep(sleepTime)
    print("Process: " + str(id) + " took this much time:" + str(sleepTime * 2))

if __name__ == '__main__':
    timeout_function = float(9)  # 9 seconds for max function time
    print("Main Process ID: " + str(os.getpid()))
    lock = Lock()
    p1 = Process(target=f, args=(lock, 1, 6,))  # Change 6 to 3, for instance, to watch the behavior
    start = time.time()
    print(type(start))
    p1.start()
    if p1.is_alive():
        print("process running a")
    else:
        print("process not running a")
    while p1.is_alive():
        timeout = time.time()
        if timeout - start > timeout_function:
            p1.terminate()
            print("process terminated")
        print("watching, time passed: " + str(timeout - start))
        time.sleep(1)
    if p1.is_alive():
        print("process running b")
    else:
        print("process not running b")
    p1.join()
    if p1.is_alive():
        print("process running c")
    else:
        print("process not running c")
    end = time.time()
    print("I am the main process, the two processes are done")
    print("Time taken:- " + str(end - start) + " secs")  # MainProcess terminates at approx ~ 5 secs.
    time.sleep(5)  # To see on Task Manager that the child process is really terminated -- and it is
    print("finishing")
The main code is from this link:
Create two child process using python(windows)
Then I used .terminate() to kill the child process. You can see that the function f makes two prints, one after 5 seconds and another after 10 seconds; however, with a 7-second sleep and the terminate(), it does not show the last print.
It worked for me, hope it helps!

How to ensure that a longer running operation is not interrupted by Keyboard in Python? [duplicate]

Hitting ctrl+c while the dump operation is saving data, the interrupt results in the file being corrupted (i.e. only partially written, so it cannot be loaded again).
Is there a way to make dump, or in general any block of code, uninterruptable?
My current workaround looks something like this:
try:
    file = open(path, 'w')
    dump(obj, file)
    file.close()
except KeyboardInterrupt:
    file.close()
    file = open(path, 'w')
    dump(obj, file)
    file.close()
    raise
It seems silly to restart the operation if it is interrupted, so how can the interrupt be deferred?
The following is a context manager that attaches a signal handler for SIGINT. If the context manager's signal handler is called, the signal is delayed by only passing the signal to the original handler when the context manager exits.
import signal
import logging

class DelayedKeyboardInterrupt:
    def __enter__(self):
        self.signal_received = False
        self.old_handler = signal.signal(signal.SIGINT, self.handler)

    def handler(self, sig, frame):
        self.signal_received = (sig, frame)
        logging.debug('SIGINT received. Delaying KeyboardInterrupt.')

    def __exit__(self, type, value, traceback):
        signal.signal(signal.SIGINT, self.old_handler)
        if self.signal_received:
            self.old_handler(*self.signal_received)

with DelayedKeyboardInterrupt():
    # stuff here will not be interrupted by SIGINT
    critical_code()
Put the function in a thread, and wait for the thread to finish.
Python threads cannot be interrupted except with a special C api.
import time
from threading import Thread

def noInterrupt():
    for i in xrange(4):
        print i
        time.sleep(1)

a = Thread(target=noInterrupt)
a.start()
a.join()
print "done"
0
1
2
3
Traceback (most recent call last):
File "C:\Users\Admin\Desktop\test.py", line 11, in <module>
a.join()
File "C:\Python26\lib\threading.py", line 634, in join
self.__block.wait()
File "C:\Python26\lib\threading.py", line 237, in wait
waiter.acquire()
KeyboardInterrupt
See how the interrupt was deferred until the thread finished?
Here it is adapted to your use:
import time
from threading import Thread

def noInterrupt(path, obj):
    try:
        file = open(path, 'w')
        dump(obj, file)
    finally:
        file.close()

a = Thread(target=noInterrupt, args=(path, obj))
a.start()
a.join()
Use the signal module to disable SIGINT for the duration of the process:
s = signal.signal(signal.SIGINT, signal.SIG_IGN)
do_important_stuff()
signal.signal(signal.SIGINT, s)
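A slightly safer variant of the same idea (my own tweak, not from the original answer): restore the old handler even if the protected code raises:

s = signal.signal(signal.SIGINT, signal.SIG_IGN)
try:
    do_important_stuff()
finally:
    signal.signal(signal.SIGINT, s)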
In my opinion, using threads for this is overkill. You can make sure the file is saved correctly by simply doing it in a loop until the write succeeds:
def saveToFile(obj, filename):
    file = open(filename, 'w')
    cPickle.dump(obj, file)
    file.close()
    return True

done = False
while not done:
    try:
        done = saveToFile(obj, 'file')
    except KeyboardInterrupt:
        print 'retry'
        continue
This question is about blocking the KeyboardInterrupt, but for this situation I find atomic file writing to be cleaner and to provide additional protection.
With atomic writes, either the entire file gets written correctly, or nothing is. Stack Overflow has a variety of solutions, but personally I like just using the atomicwrites library.
After running pip install atomicwrites, just use it like this:
from atomicwrites import atomic_write

with atomic_write(path, overwrite=True) as file:
    dump(obj, file)
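A stdlib-only variant of the same idea (my own sketch, assuming dump(obj, file) writes the object to an open text file): write to a temporary file in the same directory, then atomically rename it over the target with os.replace(), which is atomic on both POSIX and Windows:

import os
import tempfile

def atomic_dump(obj, path):
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, 'w') as file:
            dump(obj, file)
        os.replace(tmp, path)  # atomic: readers see either the old or the new file
    except BaseException:
        os.unlink(tmp)
        raise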
I've been thinking a lot about the criticisms of the answers to this question, and I believe I have implemented a better solution, which is used like so:
with signal_fence(signal.SIGINT):
    file = open(path, 'w')
    dump(obj, file)
    file.close()
The signal_fence context manager is below, followed by an explanation of its improvements on the previous answers. The docstring of this function documents its interface and guarantees.
import os
import signal
from contextlib import contextmanager
from types import FrameType
from typing import Callable, Iterator, Optional, Tuple

from typing_extensions import assert_never

@contextmanager
def signal_fence(
    signum: signal.Signals,
    *,
    on_deferred_signal: Optional[Callable[[int, Optional[FrameType]], None]] = None,
) -> Iterator[None]:
    """
    A `signal_fence` creates an uninterruptible "fence" around a block of code. The
    fence defers a specific signal received inside of the fence until the fence is
    destroyed, at which point the original signal handler is called with the deferred
    signal. Multiple deferred signals will result in a single call to the original
    handler. An optional callback `on_deferred_signal` may be specified which will be
    called each time a signal is handled while the fence is active, and can be used
    to print a message or record the signal.

    A `signal_fence` guarantees the following with regards to exception-safety:

    1. If an exception occurs prior to creating the fence (installing a custom signal
    handler), the exception will bubble up as normal. The code inside of the fence will
    not run.
    2. If an exception occurs after creating the fence, including in the fenced code,
    the original signal handler will always be restored before the exception bubbles up.
    3. If an exception occurs while the fence is calling the original signal handler on
    destruction, the original handler may not be called, but the original handler will
    be restored. The exception will bubble up and can be detected by calling code.
    4. If an exception occurs while the fence is restoring the original signal handler
    (exceedingly rare), the original signal handler will be restored regardless.
    5. No guarantees about the fence's behavior are made if exceptions occur while
    exceptions are being handled.

    A `signal_fence` can only be used on the main thread, or else a `ValueError` will
    raise when entering the fence.
    """
    handled: Optional[Tuple[int, Optional[FrameType]]] = None

    def handler(signum: int, frame: Optional[FrameType]) -> None:
        nonlocal handled
        if handled is None:
            handled = (signum, frame)
        if on_deferred_signal is not None:
            try:
                on_deferred_signal(signum, frame)
            except:
                pass

    # https://docs.python.org/3/library/signal.html#signal.getsignal
    original_handler = signal.getsignal(signum)
    if original_handler is None:
        raise TypeError(
            "signal_fence cannot be used with signal handlers that were not installed"
            " from Python"
        )
    if isinstance(original_handler, int) and not isinstance(
        original_handler, signal.Handlers
    ):
        raise NotImplementedError(
            "Your Python interpreter's signal module is using raw integers to"
            " represent SIG_IGN and SIG_DFL, which shouldn't be possible!"
        )

    # N.B. to best guarantee the original handler is restored, the @contextmanager
    # decorator is used rather than a class with __enter__/__exit__ methods so
    # that the installation of the new handler can be done inside of a try block,
    # whereas per [PEP 343](https://www.python.org/dev/peps/pep-0343/) the
    # __enter__ call is not guaranteed to have a corresponding __exit__ call if an
    # exception interleaves
    try:
        try:
            signal.signal(signum, handler)
            yield
        finally:
            if handled is not None:
                if isinstance(original_handler, signal.Handlers):
                    if original_handler is signal.Handlers.SIG_IGN:
                        pass
                    elif original_handler is signal.Handlers.SIG_DFL:
                        signal.signal(signum, signal.SIG_DFL)
                        os.kill(os.getpid(), signum)
                    else:
                        assert_never(original_handler)
                elif callable(original_handler):
                    original_handler(*handled)
                else:
                    assert_never(original_handler)
            signal.signal(signum, original_handler)
    except:
        signal.signal(signum, original_handler)
        raise
First, why not use a thread (accepted answer)?
Running code in a non-daemon thread does guarantee that the thread will be joined on interpreter shutdown, but any exception on the main thread (e.g. KeyboardInterrupt) will not prevent the main thread from continuing to execute.
Consider what would happen if the thread method is using some data that the main thread mutates in a finally block after the KeyboardInterrupt.
Second, to address @benrg's feedback on the most upvoted answer using a context manager:
if an exception is raised after signal is called but before __enter__ returns, the signal will be permanently blocked;
My solution avoids this bug by using a generator context manager with the aid of the @contextmanager decorator. See the full comment in the code above for more details.
this code may call third-party exception handlers in threads other than the main thread, which CPython never does;
I don't think this bug is real. signal.signal is required to be called from the main thread, and raises ValueError otherwise. These context managers can only run on the main thread, and thus will only call third-party exception handlers from the main thread.
if signal returns a non-callable value, __exit__ will crash
My solution handles all possible values of the signal handler and calls them appropriately. Additionally I use assert_never to benefit from exhaustiveness checking in static analyzers.
Do note that signal_fence is designed to handle one interruption on the main thread such as a KeyboardInterrupt. If your user is spamming ctrl+c while the signal handler is being restored, not much can save you. This is unlikely given the relatively few opcodes that need to execute to restore the handler, but it's possible. (For maximum robustness, this solution would need to be rewritten in C)
A generic approach would be to use a context manager that accepts a set of signals to suspend:
import signal
from contextlib import contextmanager

@contextmanager
def suspended_signals(*signals):
    """
    Suspends signal handling execution
    """
    signal.pthread_sigmask(signal.SIG_BLOCK, set(signals))
    try:
        yield None
    finally:
        signal.pthread_sigmask(signal.SIG_UNBLOCK, set(signals))
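Usage might look like this (my own example; note that signal.pthread_sigmask is only available on Unix):

import signal
import time

with suspended_signals(signal.SIGINT, signal.SIGTERM):
    time.sleep(5)  # Ctrl+C here is held by the OS and delivered after the block exits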
This is not interruptible (try it), but also maintains a nice interface, so your functions can work the way you expect.
import concurrent.futures
import time

def do_task(func):
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as run:
        fut = run.submit(func)
        return fut.result()

def task():
    print("danger will robinson")
    time.sleep(5)
    print("all ok")

do_task(task)
and here's an easy way to create an uninterruptible sleep with no signal handling needed:
import contextlib
import concurrent.futures

def uninterruptible_sleep(secs):
    fut = concurrent.futures.Future()
    with contextlib.suppress(concurrent.futures.TimeoutError):
        fut.result(secs)

Error when using twisted and greenlets

I'm trying to use Twisted with greenlets, so I can write synchronous-looking code in Twisted without using inlineCallbacks.
Here is my code:
import time, functools
from twisted.internet import reactor, threads
from twisted.internet.defer import Deferred
from functools import wraps
import greenlet

def make_async(func):
    @wraps(func)
    def wrapper(*pos, **kwds):
        d = Deferred()
        def greenlet_func():
            try:
                rc = func(*pos, **kwds)
                d.callback(rc)
            except Exception, ex:
                print ex
                d.errback(ex)
        g = greenlet.greenlet(greenlet_func)
        g.switch()
        return d
    return wrapper

def sleep(t):
    print "sleep(): greenelet:", greenlet.getcurrent()
    g = greenlet.getcurrent()
    reactor.callLater(t, g.switch)
    g.parent.switch()

def wait_one(d):
    print "wait_one(): greenelet:", greenlet.getcurrent()
    g = greenlet.getcurrent()
    active = True
    def callback(result):
        if not active:
            g.switch(result)
        else:
            reactor.callLater(0, g.switch, result)
    def errback(failure):
        if not active:
            g.throw(failure)
        else:
            reactor.callLater(0, g.throw, failure)
    d.addCallback(callback)
    d.addErrback(errback)
    active = False
    rc = g.parent.switch()
    return rc

@make_async
def inner():
    print "inner(): greenelet:", greenlet.getcurrent()
    import random, time
    interval = random.random()
    print "Sleeping for %s seconds..." % interval
    sleep(interval)
    print "done"
    return interval

@make_async
def outer():
    print "outer(): greenelet:", greenlet.getcurrent()
    print wait_one(inner())
    print "Here"

reactor.callLater(0, outer)
reactor.run()
There are 5 main parts:
A sleep function that starts a timer, then switches back to the parent greenlet. When the timer goes off, it switches back to the greenlet that is sleeping.
A make_async decorator. This takes some synchronous-looking code and runs it in a greenlet. It also returns a deferred so the caller can register callbacks for when the code completes.
A wait_one function, which blocks the greenlet until the deferred being waited on resolves.
The inner function, which (when wrapped) returns a deferred, sleeps for a random time, and then passes the time it slept for to the deferred.
The outer function, which calls inner(), waits for it to return, then prints the return value.
When I run this code I get this output (Note the error on the last two lines):
outer(): greenelet: <greenlet.greenlet object at 0xb729cc5c>
inner(): greenelet: <greenlet.greenlet object at 0xb729ce3c>
Sleeping for 0.545666723422 seconds...
sleep(): greenelet: <greenlet.greenlet object at 0xb729ce3c>
wait_one(): greenelet: <greenlet.greenlet object at 0xb729cc5c>
done
0.545666723422
Here
Exception twisted.python.failure.Failure: <twisted.python.failure.Failure <class 'greenlet.GreenletExit'>> in <greenlet.greenlet object at 0xb729ce3c> ignored
GreenletExit did not kill <greenlet.greenlet object at 0xb729ce3c>
Doing a bit of research I've found that:
The last line is logged by greenlet.c
The previous line is logged by Python itself, as it is ignoring an exception raised in a __del__ method.
I'm having real trouble debugging this as I can't access the GreenletExit or twisted.python.failure.Failure exceptions to get their stack traces.
Does anyone have any ideas what I'm doing wrong, or how I get debug the exceptions that are being thrown?
One other data point: If I hack wait_one() to just return immediately (and not to register anything on the deferred it is passed), the errors go away. :-/
Rewrite your error callback in wait_one like this:
def errback(failure):
    ## new code
    if g.dead:
        return
    ##
    if not active:
        g.throw(failure)
    else:
        reactor.callLater(0, g.throw, failure)
If the greenlet is dead (finished running), there is no point in throwing exceptions into it.
mguijarr's answer fixed the problem, but I wanted to write up how I got into this situation.
I have three greenlets:
{main}, which is running the reactor.
{outer}, which is running outer().
{inner}, which is running inner().
When the sleep finishes, {main} switches to {inner}, which switches to {outer}. outer() then returns, and this raises GreenletExit in {inner}. That propagates back to Twisted: it sees an exception being raised from callback(), and so invokes errback(). This tries to throw the exception into {outer} (which has already exited), and I hit the error.
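A minimal sketch of that mechanism (my own, not from the answer): a greenlet that is still suspended when its object becomes unreachable gets GreenletExit thrown into it by the garbage collector:

import greenlet

def child():
    try:
        greenlet.getcurrent().parent.switch()  # suspend and wait to be resumed
    except greenlet.GreenletExit:
        print("GreenletExit raised in child during teardown")
        raise

g = greenlet.greenlet(child)
g.switch()  # run child until it suspends
del g       # collecting the unfinished greenlet raises GreenletExit inside it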
