Hitting Ctrl+C while the dump operation is saving data results in a corrupted file (i.e. the file is only partially written, so it cannot be loaded again).
Is there a way to make dump, or in general any block of code, uninterruptible?
My current workaround looks something like this:
try:
    file = open(path, 'w')
    dump(obj, file)
    file.close()
except KeyboardInterrupt:
    file.close()
    file = open(path, 'w')
    dump(obj, file)
    file.close()
    raise
It seems silly to restart the operation if it is interrupted, so how can the interrupt be deferred?
The following is a context manager that attaches a signal handler for SIGINT. If the context manager's signal handler is called, the signal is deferred: it is passed to the original handler only when the context manager exits.
import signal
import logging

class DelayedKeyboardInterrupt:
    def __enter__(self):
        self.signal_received = False
        self.old_handler = signal.signal(signal.SIGINT, self.handler)

    def handler(self, sig, frame):
        self.signal_received = (sig, frame)
        logging.debug('SIGINT received. Delaying KeyboardInterrupt.')

    def __exit__(self, type, value, traceback):
        signal.signal(signal.SIGINT, self.old_handler)
        if self.signal_received:
            self.old_handler(*self.signal_received)
with DelayedKeyboardInterrupt():
    # stuff here will not be interrupted by SIGINT
    critical_code()
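A minimal sketch applying the context manager to the original problem, assuming `path` and `obj` from the question and Python 3's pickle (which needs a binary-mode file):

import pickle

# Defer Ctrl+C until the dump has finished writing the file.
with DelayedKeyboardInterrupt():
    with open(path, 'wb') as file:
        pickle.dump(obj, file)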
Put the function in a thread, and wait for the thread to finish.
Python threads cannot be interrupted except with a special C API.
import time
from threading import Thread

def noInterrupt():
    for i in xrange(4):
        print i
        time.sleep(1)

a = Thread(target=noInterrupt)
a.start()
a.join()
print "done"
0
1
2
3
Traceback (most recent call last):
File "C:\Users\Admin\Desktop\test.py", line 11, in <module>
a.join()
File "C:\Python26\lib\threading.py", line 634, in join
self.__block.wait()
File "C:\Python26\lib\threading.py", line 237, in wait
waiter.acquire()
KeyboardInterrupt
See how the interrupt was deferred until the thread finished?
Here it is adapted to your use:
import time
from threading import Thread

def noInterrupt(path, obj):
    file = open(path, 'w')
    try:
        dump(obj, file)
    finally:
        file.close()

a = Thread(target=noInterrupt, args=(path, obj))
a.start()
a.join()
Use the signal module to ignore SIGINT for the duration of the critical operation, then restore the original handler:

import signal

s = signal.signal(signal.SIGINT, signal.SIG_IGN)
do_important_stuff()
signal.signal(signal.SIGINT, s)
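A small hardening of that snippet, so the original handler is restored even if the operation raises (do_important_stuff stands in for your own code):

import signal

s = signal.signal(signal.SIGINT, signal.SIG_IGN)
try:
    do_important_stuff()
finally:
    # restore the saved handler no matter how the operation exits
    signal.signal(signal.SIGINT, s)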
In my opinion using threads for this is overkill. You can make sure the file is saved correctly by simply retrying in a loop until the write succeeds:
import cPickle

def saveToFile(obj, filename):
    file = open(filename, 'w')
    cPickle.dump(obj, file)
    file.close()
    return True

done = False
while not done:
    try:
        done = saveToFile(obj, 'file')
    except KeyboardInterrupt:
        print 'retry'
        continue
This question is about blocking the KeyboardInterrupt, but for this situation I find atomic file writing to be cleaner and to provide additional protection.
With atomic writes either the entire file gets written correctly, or nothing does. Stack Overflow has a variety of solutions, but personally I like just using the atomicwrites library.
After running pip install atomicwrites, just use it like this:
from atomicwrites import atomic_write

with atomic_write(path, overwrite=True) as file:
    dump(obj, file)
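For reference, a stdlib-only sketch of the same idea: write to a temporary file in the same directory, then atomically rename it over the target with os.replace. The helper name atomic_dump is hypothetical, and pickle is assumed as the serializer:

import os
import pickle
import tempfile

def atomic_dump(obj, path):
    # the temp file must live on the same filesystem for the rename to be atomic
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, 'wb') as tmp_file:
            pickle.dump(obj, tmp_file)
        os.replace(tmp_path, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.remove(tmp_path)
        raise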
I've been thinking a lot about the criticisms of the answers to this question, and I believe I have implemented a better solution, which is used like so:
with signal_fence(signal.SIGINT):
    file = open(path, 'w')
    dump(obj, file)
    file.close()
The signal_fence context manager is below, followed by an explanation of its improvements on the previous answers. The docstring of this function documents its interface and guarantees.
import os
import signal
from contextlib import contextmanager
from types import FrameType
from typing import Callable, Iterator, Optional, Tuple
from typing_extensions import assert_never

@contextmanager
def signal_fence(
    signum: signal.Signals,
    *,
    on_deferred_signal: Optional[Callable[[int, Optional[FrameType]], None]] = None,
) -> Iterator[None]:
    """
    A `signal_fence` creates an uninterruptible "fence" around a block of code. The
    fence defers a specific signal received inside of the fence until the fence is
    destroyed, at which point the original signal handler is called with the deferred
    signal. Multiple deferred signals will result in a single call to the original
    handler. An optional callback `on_deferred_signal` may be specified which will be
    called each time a signal is handled while the fence is active, and can be used
    to print a message or record the signal.

    A `signal_fence` guarantees the following with regard to exception-safety:

    1. If an exception occurs prior to creating the fence (installing a custom signal
       handler), the exception will bubble up as normal. The code inside of the fence
       will not run.
    2. If an exception occurs after creating the fence, including in the fenced code,
       the original signal handler will always be restored before the exception
       bubbles up.
    3. If an exception occurs while the fence is calling the original signal handler
       on destruction, the original handler may not be called, but the original
       handler will be restored. The exception will bubble up and can be detected by
       calling code.
    4. If an exception occurs while the fence is restoring the original signal handler
       (exceedingly rare), the original signal handler will be restored regardless.
    5. No guarantees about the fence's behavior are made if exceptions occur while
       exceptions are being handled.

    A `signal_fence` can only be used on the main thread, or else a `ValueError` will
    raise when entering the fence.
    """
    handled: Optional[Tuple[int, Optional[FrameType]]] = None

    def handler(signum: int, frame: Optional[FrameType]) -> None:
        nonlocal handled
        if handled is None:
            handled = (signum, frame)
        if on_deferred_signal is not None:
            try:
                on_deferred_signal(signum, frame)
            except:
                pass

    # https://docs.python.org/3/library/signal.html#signal.getsignal
    original_handler = signal.getsignal(signum)
    if original_handler is None:
        raise TypeError(
            "signal_fence cannot be used with signal handlers that were not installed"
            " from Python"
        )
    if isinstance(original_handler, int) and not isinstance(
        original_handler, signal.Handlers
    ):
        raise NotImplementedError(
            "Your Python interpreter's signal module is using raw integers to"
            " represent SIG_IGN and SIG_DFL, which shouldn't be possible!"
        )

    # N.B. to best guarantee the original handler is restored, the @contextmanager
    # decorator is used rather than a class with __enter__/__exit__ methods so
    # that the installation of the new handler can be done inside of a try block,
    # whereas per [PEP 343](https://www.python.org/dev/peps/pep-0343/) the
    # __enter__ call is not guaranteed to have a corresponding __exit__ call if an
    # exception interleaves
    try:
        try:
            signal.signal(signum, handler)
            yield
        finally:
            if handled is not None:
                if isinstance(original_handler, signal.Handlers):
                    if original_handler is signal.Handlers.SIG_IGN:
                        pass
                    elif original_handler is signal.Handlers.SIG_DFL:
                        signal.signal(signum, signal.SIG_DFL)
                        os.kill(os.getpid(), signum)
                    else:
                        assert_never(original_handler)
                elif callable(original_handler):
                    original_handler(*handled)
                else:
                    assert_never(original_handler)
            signal.signal(signum, original_handler)
    except:
        signal.signal(signum, original_handler)
        raise
First, why not use a thread (accepted answer)?
Running code in a non-daemon thread does guarantee that the thread will be joined on interpreter shutdown, but any exception on the main thread (e.g. KeyboardInterrupt) will not prevent the main thread from continuing to execute.
Consider what would happen if the thread method is using some data that the main thread mutates in a finally block after the KeyboardInterrupt.
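A hypothetical sketch of that race (the names shared and worker are illustrative only): Ctrl+C interrupts the join on the main thread, whose finally block then mutates data the still-running worker thread reads.

import threading
import time

shared = {'data': 1}

def worker():
    time.sleep(5)            # still running when Ctrl+C arrives
    print(shared['data'])    # may observe the finally-block mutation

t = threading.Thread(target=worker)
try:
    t.start()
    t.join()                 # KeyboardInterrupt lands here
finally:
    shared['data'] = None    # races with the worker's read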
Second, to address @benrg's feedback on the most upvoted answer using a context manager:
if an exception is raised after signal is called but before __enter__ returns, the signal will be permanently blocked;
My solution avoids this bug by using a generator context manager with the aid of the @contextmanager decorator. See the full comment in the code above for more details.
this code may call third-party exception handlers in threads other than the main thread, which CPython never does;
I don't think this bug is real. signal.signal is required to be called from the main thread, and raises ValueError otherwise. These context managers can only run on the main thread, and thus will only call third-party exception handlers from the main thread.
if signal returns a non-callable value, __exit__ will crash
My solution handles all possible values of the signal handler and calls them appropriately. Additionally I use assert_never to benefit from exhaustiveness checking in static analyzers.
Do note that signal_fence is designed to handle one interruption on the main thread such as a KeyboardInterrupt. If your user is spamming ctrl+c while the signal handler is being restored, not much can save you. This is unlikely given the relatively few opcodes that need to execute to restore the handler, but it's possible. (For maximum robustness, this solution would need to be rewritten in C)
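A usage sketch with the optional callback, assuming `path` and `obj` as in the question and pickle as the serializer (the callback name report is hypothetical):

import pickle
import signal

def report(signum, frame):
    print('Signal', signum, 'deferred until the dump completes.')

with signal_fence(signal.SIGINT, on_deferred_signal=report):
    with open(path, 'wb') as file:
        pickle.dump(obj, file)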
A generic approach would be to use a context manager that accepts a set of signals to suspend:
import signal
from contextlib import contextmanager

@contextmanager
def suspended_signals(*signals):
    """
    Suspends signal handling execution
    """
    signal.pthread_sigmask(signal.SIG_BLOCK, set(signals))
    try:
        yield None
    finally:
        signal.pthread_sigmask(signal.SIG_UNBLOCK, set(signals))
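A brief usage sketch (save is a placeholder for your own code). Note that signal.pthread_sigmask is only available on POSIX systems, so this approach will not work on Windows:

import signal

with suspended_signals(signal.SIGINT, signal.SIGTERM):
    save()  # a pending Ctrl+C is delivered only after the block exits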
This is not interruptible (try it), but also maintains a nice interface, so your functions can work the way you expect.
import concurrent.futures
import time

def do_task(func):
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as run:
        fut = run.submit(func)
        return fut.result()

def task():
    print("danger will robinson")
    time.sleep(5)
    print("all ok")

do_task(task)
and here's an easy way to create an uninterruptible sleep with no signal handling needed:
import concurrent.futures
import contextlib

def uninterruptible_sleep(secs):
    fut = concurrent.futures.Future()
    with contextlib.suppress(concurrent.futures.TimeoutError):
        fut.result(secs)
Is there a way to time-limit operations in Python, e.g.:
try:
    cmds.file(file, o=1, pmt=0)
except:
    print "Sorry, run out of time"
    pass
If you're on Mac or a Unix-based system, you can use signal.SIGALRM to forcibly time out functions that take too long, so your code would look like:
import signal

class TimeoutException(Exception):   # custom exception
    pass

def timeout_handler(signum, frame):  # raises exception when signal sent
    raise TimeoutException

# When the SIGALRM signal is sent, call timeout_handler, which raises your exception
signal.signal(signal.SIGALRM, timeout_handler)

# Start the timer. Once 5 seconds are over, a SIGALRM signal is sent.
signal.alarm(5)

try:
    cmds.file(file, o=1, pmt=0)
except TimeoutException:
    print "Sorry, run out of time"   # you don't need pass because that's in the exception definition
Basically, you're creating a custom exception that's raised when the time limit is up (i.e., the SIGALRM is sent). You can of course tweak the time limit.
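A small reusable sketch of the same pattern, also cancelling the pending alarm with signal.alarm(0) so a late SIGALRM cannot fire into unrelated code (run_with_timeout is a hypothetical helper reusing timeout_handler from above; like the original, it is Unix-only):

import signal

def run_with_timeout(func, seconds):
    signal.signal(signal.SIGALRM, timeout_handler)
    signal.alarm(seconds)   # schedule SIGALRM
    try:
        return func()
    finally:
        signal.alarm(0)     # cancel the alarm on success or failure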
I've read a lot of questions on SO and elsewhere on this topic but can't get it working. Perhaps it's because I'm using Windows, I don't know.
What I'm trying to do is download a bunch of files (whose URLs are read from a CSV file) in parallel. I've tried using multiprocessing and concurrent.futures for this with no success.
The main problem is that I can't stop the program on Ctrl-C - it just keeps running. This is especially bad in the case of processes instead of threads (I used multiprocessing for that) because I have to kill each process manually every time.
Here is my current code:
import concurrent.futures
import signal
import sys
import urllib.request

class Download(object):
    def __init__(self, url, filename):
        self.url = url
        self.filename = filename

def perform_download(download):
    print('Downloading {} to {}'.format(download.url, download.filename))
    return urllib.request.urlretrieve(download.url, filename=download.filename)

def main(argv):
    args = parse_args(argv)
    queue = []
    with open(args.results_file, 'r', encoding='utf8') as results_file:
        # Irrelevant CSV parsing...
        queue.append(Download(url, filename))

    def handle_interrupt(signum, frame):
        print('CAUGHT SIGINT!!!!!!!!!!!!!!!!!!!11111111')
        sys.exit(1)

    signal.signal(signal.SIGINT, handle_interrupt)

    with concurrent.futures.ThreadPoolExecutor(max_workers=args.num_jobs) as executor:
        futures = {executor.submit(perform_download, d): d for d in queue}
        try:
            concurrent.futures.wait(futures)
        except KeyboardInterrupt:
            print('Interrupted')
            sys.exit(1)
I'm trying to catch Ctrl-C in two different ways here, but neither works. The latter one (except KeyboardInterrupt) actually gets run, but the process won't exit after calling sys.exit.
Before this I used the multiprocessing module like this:
try:
    pool = multiprocessing.Pool(processes=args.num_jobs)
    pool.map_async(perform_download, queue).get(1000000)
except Exception as e:
    pool.close()
    pool.terminate()
    sys.exit(0)
So what is the proper way to add ability to terminate all worker threads or processes once you hit Ctrl-C in the terminal?
System information:
Python version: 3.6.1 32-bit
OS: Windows 10
You are catching the SIGINT signal in a signal handler and re-routing it as a SystemExit exception. This prevents the KeyboardInterrupt exception from ever reaching your main loop.
Moreover, if the SystemExit is not raised in the main thread, it will just kill the child thread where it is raised.
Jesse Noller, the author of the multiprocessing library, explains how to deal with CTRL+C in an old blog post.
import signal
from multiprocessing import Pool

def initializer():
    """Ignore CTRL+C in the worker process."""
    signal.signal(SIGINT, SIG_IGN)

pool = Pool(initializer=initializer)

try:
    pool.map(perform_download, downloads)
except KeyboardInterrupt:
    pool.terminate()
    pool.join()
I don't believe the accepted answer works under Windows, certainly not under current versions of Python (I am running 3.8.5). In fact, it won't run at all, since SIGINT and SIG_IGN will be undefined (what is needed is signal.SIGINT and signal.SIG_IGN).
This is a known problem under Windows. A solution I have come up with is essentially the reverse of the accepted solution: the main process must ignore keyboard interrupts, and we initialize the process pool to set a global flag ctrl_c_entered to False and to set this flag to True if Ctrl-C is entered. Any multiprocessing worker function (or method) is then decorated with a special decorator, handle_ctrl_c, which first tests the ctrl_c_entered flag. Only if it is False does it run the worker function, after re-enabling keyboard interrupts and establishing a try/except handler for keyboard interrupts. If the ctrl_c_entered flag was True, or if a keyboard interrupt occurs during the execution of the worker function, the value returned is an instance of KeyboardInterrupt, which the main process can check to determine whether a Ctrl-C was entered.
Thus all submitted tasks will be allowed to start, but each will immediately terminate with a return value of a KeyboardInterrupt exception, and the actual worker function will never be called by the decorator once a Ctrl-C has been entered.
import signal
from multiprocessing import Pool
from functools import wraps
import time

def handle_ctrl_c(func):
    """
    Decorator function.
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        global ctrl_c_entered
        if not ctrl_c_entered:
            # re-enable keyboard interrupts:
            signal.signal(signal.SIGINT, default_sigint_handler)
            try:
                return func(*args, **kwargs)
            except KeyboardInterrupt:
                ctrl_c_entered = True
                return KeyboardInterrupt()
            finally:
                signal.signal(signal.SIGINT, pool_ctrl_c_handler)
        else:
            return KeyboardInterrupt()
    return wrapper

def pool_ctrl_c_handler(*args, **kwargs):
    global ctrl_c_entered
    ctrl_c_entered = True

def init_pool():
    # set global variables for each process in the pool:
    global ctrl_c_entered
    global default_sigint_handler
    ctrl_c_entered = False
    default_sigint_handler = signal.signal(signal.SIGINT, pool_ctrl_c_handler)

@handle_ctrl_c
def perform_download(download):
    print('begin')
    time.sleep(2)
    print('end')
    return True

if __name__ == '__main__':
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    pool = Pool(initializer=init_pool)
    results = pool.map(perform_download, range(20))
    if any(map(lambda x: isinstance(x, KeyboardInterrupt), results)):
        print('Ctrl-C was entered.')
    print(results)
I want to run a task when my Python program finishes, but only if it finishes successfully. As far as I know, using the atexit module means that my registered function will always be run at program termination, regardless of success. Is there a similar functionality to register a function so that it runs only on successful exit? Alternatively, is there a way for my exit function to detect whether the exit was normal or exceptional?
Here is some code that demonstrates the problem. It will print that the program succeeded, even when it has failed.
import atexit

def myexitfunc():
    print "Program succeeded!"

atexit.register(myexitfunc)
raise Exception("Program failed!")
Output:
$ python atexittest.py
Traceback (most recent call last):
File "atexittest.py", line 8, in <module>
raise Exception("Program failed!")
Exception: Program failed!
Program succeeded!
Out of the box, atexit is not quite suited for what you want to do: it's primarily used for resource cleanup at the very last moment, as things are shutting down and exiting. By analogy, it's the "finally" of a try/except, whereas what you want is the "else" of a try/except.
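To illustrate the analogy, a toy sketch (run_script is a placeholder for the program body):

try:
    run_script()
except Exception:
    raise                             # the failure path
else:
    print("runs only on success")     # the behavior atexit lacks
finally:
    print("always runs")              # the behavior atexit provides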
The simplest way I can think of is to create a global flag which you set only when your script "succeeds"... and then have all the functions you attach to atexit check that flag, and do nothing unless it's been set.
Eg:
import atexit

_success = False

def atsuccess(func, *args, **kwds):
    def wrapper():
        if _success:
            func(*args, **kwds)
    atexit.register(wrapper)

def set_success():
    global _success
    _success = True

# then call atsuccess() to attach your callbacks,
# and call set_success() before your script returns
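A hypothetical usage sketch (report_success is a placeholder callback):

def report_success():
    print("All tasks completed.")

atsuccess(report_success)

# ... the body of the script ...

set_success()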
One limitation is if you have any code which calls sys.exit(0) before setting the success flag. Such code should (probably) be refactored to return to the main function first, so that you call set_success and sys.exit in only one place. Failing that, you'll need to add something like the following wrapper around the main entry point in your script:
try:
    main()
except SystemExit, err:
    if err.code == 0:
        set_success()
    raise
Wrap the body of your program in a with statement and define a corresponding context object that only performs your action when no exceptions have been raised. Something like:
class AtExit(object):
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_value is None:
            print "Success!"
        else:
            print "Failure!"

if __name__ == "__main__":
    with AtExit():
        print "Running"
        # raise Exception("Error")