I have this decorator taken directly from an example I found on the net:
import signal

class TimedOutExc(Exception):
    pass

def timeout(timeout):
    def decorate(f):
        def handler(signum, frame):
            raise TimedOutExc()
        def new_f(*args, **kwargs):
            old = signal.signal(signal.SIGALRM, handler)
            signal.alarm(timeout)
            try:
                result = f(*args, **kwargs)
            except TimedOutExc:
                return None
            finally:
                signal.signal(signal.SIGALRM, old)
                signal.alarm(0)
            return result
        new_f.__name__ = f.__name__  # was new_f.func_name in the Python 2 original
        return new_f
    return decorate
It raises a TimedOutExc if the function f times out, which the wrapper catches in order to return None.
Well, it works, but when I use this decorator on a multiprocessing function and it stops due to a timeout, it doesn't terminate the processes involved in the computation. How can I do that?
I don't want to raise an exception and stop the program. Basically, what I want is for f to return None when it times out, and then to terminate the processes involved.
While I agree with the main point of Aaron's answer, I would like to elaborate a bit.
The processes launched by multiprocessing must be stopped in the function to be decorated; I don't think that this can be done generally and simply from the decorator itself (the decorated function is the only entity that knows what calculations it launched).
Instead of having the decorated function catch SIGALRM, you can also have it catch your custom TimedOutExc exception; this might be more flexible. Your example would then become:
import signal
import functools

class TimedOutExc(Exception):
    """
    Raised when a timeout happens
    """

def timeout(timeout):
    """
    Return a decorator that raises a TimedOutExc exception
    after timeout seconds, if the decorated function did not return.
    """
    def decorate(f):
        def handler(signum, frame):
            raise TimedOutExc()
        @functools.wraps(f)  # Preserves the documentation, name, etc.
        def new_f(*args, **kwargs):
            old_handler = signal.signal(signal.SIGALRM, handler)
            signal.alarm(timeout)
            result = f(*args, **kwargs)  # f() always returns, in this scheme
            signal.signal(signal.SIGALRM, old_handler)  # Old signal handler is restored
            signal.alarm(0)  # Alarm removed
            return result
        return new_f
    return decorate
@timeout(10)
def function_that_takes_a_long_time():
    try:
        # ... long, parallel calculation ...
    except TimedOutExc:
        # ... Code that shuts down the processes ...
        # ...
        return None  # Or exception raised, which means that the calculation is not complete
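To make the skeleton concrete, here is a minimal sketch assuming the parallel work goes through a multiprocessing.Pool; some_long_task and the pool itself are illustrative, not from the original question:

import multiprocessing

def some_long_task(x):  # placeholder work item
    return x * x

@timeout(10)
def function_that_takes_a_long_time():
    pool = multiprocessing.Pool()
    try:
        results = pool.map(some_long_task, range(100))  # long, parallel calculation
    except TimedOutExc:
        pool.terminate()  # the decorated function knows its own processes
        pool.join()
        return None  # calculation is not complete
    pool.close()
    pool.join()
    return results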
I doubt that can be done with a decorator: A decorator is a wrapper for a function; the function is a black box. There is no communication between the decorator and the function it wraps.
What you need to do is rewrite your function's code to use the SIGALRM handler to terminate any processes it has started.
Related
I'm providing some services through a REST API that performs DB operations while handling a request.
So I'm trying to create a class that performs queries using oracledb (cx_Oracle).
A problem arises here: when that class executes a time-consuming query, I don't want other operations to block. So I looked through a lot of snippets for one that executes a method asynchronously. However, in all of them, blocking occurred when there was a return value.
An asynchronous method without a result works perfectly,
such as (reference: Python Threading inside a class):
import datetime
import time
from threading import Thread

def threaded(fn):
    def wrapper(*args, **kwargs):
        Thread(target=fn, args=args, kwargs=kwargs).start()
    return wrapper

class MyClass:
    somevar = 'someval'

    @threaded
    def func_to_be_threaded(self):
        time.sleep(3)
        self.finished()

    def finished(self):
        print(datetime.datetime.now(), end=' ')

test = MyClass()
test.func_to_be_threaded()
test.func_to_be_threaded()
The result is exactly what I want.
2022-07-06 16:08:54.177499 2022-07-06 16:08:54.177499
but an asynchronous method with a result blocks.
Example from the same reference:
import datetime
import time
from concurrent.futures import Future
from threading import Thread

def call_with_future(fn, future, args, kwargs):
    try:
        result = fn(*args, **kwargs)
        future.set_result(result)
    except Exception as exc:
        future.set_exception(exc)

def threaded(fn):
    def wrapper(*args, **kwargs):
        future = Future()
        Thread(target=call_with_future, args=(fn, future, args, kwargs)).start()
        return future
    return wrapper

class Test:
    @threaded
    def run_something(self):
        time.sleep(5)
        return datetime.datetime.now()

test = Test()
print(test.run_something().result(), test.run_something().result())
The result is
2022-07-06 16:08:12.159146 2022-07-06 16:08:17.167825
Is there any way to wait asynchronously for the result?
I don't want to hang while getting the query result.
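For what it's worth, the five-second gap in the second example comes from evaluation order rather than from the threads themselves: the first .result() call blocks before the second run_something() even starts. A small sketch against the same Test class, starting both futures before waiting on either:

test = Test()
f1 = test.run_something()  # both threads are started first ...
f2 = test.run_something()
print(f1.result(), f2.result())  # ... so the total wait is ~5s, not ~10s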
I have Python/C++ bindings implemented (using Boost.Python) such that calling foop() from Python runs a C++ function fooc(). I would like to set a timeout from Python such that foop returns after t seconds. The solutions here work fine for Python functions, but not for foop because I'm unable to interrupt the C++ code; example below for calling run_for_timeout(foop). Is there a way to do this from Python (i.e. without implementing the timer functionality in C++)?
import signal

class CallbackValueError(ValueError):
    """Raise for improper data values with callback functions and their utils."""
    pass

class TimeoutError(RuntimeError):
    pass

def run_for_timeout(func, args=(), kwargs=None, timeout=5):
    """Run a function until it times out.

    Note that ``timeout`` = 0 does not imply no timeout, but rather that there
    is no time for the function to run, and this function raises an error.

    Parameters
    ----------
    func : function
    args : tuple
    kwargs : dict | None
    timeout : int
        (seconds)

    Returns
    -------
    result : object | None
        Return object from function, or None if it timed out

    Raises
    ------
    CallbackValueError
    """
    if timeout <= 0:
        raise CallbackValueError("{}s is a nonsensical value for the "
                                 "timeout function".format(timeout))

    def handler(signum, frame):
        raise TimeoutError()

    # Set the timeout handler
    signal.signal(signal.SIGALRM, handler)
    signal.alarm(timeout)
    if kwargs is None:
        kwargs = {}
    try:
        result = func(*args, **kwargs)
    except TimeoutError:
        result = None
    finally:
        # Cancel the alarm (the function may have returned before the timeout)
        signal.alarm(0)
    return result
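As a hedged alternative for code that SIGALRM cannot interrupt (the interpreter only checks for signals between bytecodes, so a long-running C++ call never sees the alarm), the call can be pushed into a worker process that is killed on timeout. A sketch using multiprocessing, assuming foop is importable in the child process and that its arguments and return value are picklable; run_for_timeout_mp is my name, not part of any library:

import multiprocessing

def run_for_timeout_mp(func, args=(), kwargs=None, timeout=5):
    """Run func in a worker process; return its result, or None on timeout."""
    if kwargs is None:
        kwargs = {}
    with multiprocessing.Pool(processes=1) as pool:
        async_result = pool.apply_async(func, args, kwargs)
        try:
            return async_result.get(timeout=timeout)
        except multiprocessing.TimeoutError:
            pool.terminate()  # kills the worker even while it is inside C++
            return None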
I have some blocks of code which need to be wrapped by a function:

try:
    if config.DEVELOPMENT == True:
        # do_some_stuff
except:
    logger.info("Config is not set for development")
Then I'll do it again:

try:
    if config.DEVELOPMENT == True:
        # do_some_another_stuff
except:
    logger.info("Config is not set for development")
So, how can I wrap this "do_some_stuff" and "do_some_another_stuff"?
I'm trying to write a function with contextmanager:

from contextlib import contextmanager

@contextmanager
def try_dev_config(name):
    try:
        if name is not None:
            yield
    except Exception as e:
        print("not dev config")

with try_dev_config("config.DEVELOPMENT"):
    # do_some_stuff
And I got an error:
RuntimeError: generator didn't yield
You could pass in a function.
boolean = True

def pass_this_in():
    print("I just did some stuff")

def the_try_except_bit(function):
    try:
        if boolean:
            function()
    except:
        print("Excepted")

# Calling the above code
the_try_except_bit(pass_this_in)
If you want to reduce the "pass_this_in" definition bit, you can use a lambda:

pass_this_in = lambda: print("I just did some stuff")
I am not sure that a context manager is the right method to achieve what you want. The goal of a context manager is to provide a mechanism to open/instantiate a resource, give access to it (or not), and close/clean it up automatically when you no longer need it.
IMHO, what you need is a decorator.
A decorator aims at executing code around a function call. It would force you to put each block of code in a function, but I don't think that is so difficult. You can implement it like this:
class Config(object):
    """for demonstration purpose only: used to have a config.DEVELOPMENT value"""
    DEVELOPMENT = True

class Logger(object):
    """for demonstration purpose only: used to have a logger.info method"""
    @staticmethod
    def info(msg):
        print("Logged: {}".format(msg))

def check_dev_config(config, logger):
    def dev_config_checker(func):
        def wrapper(*args, **kwargs):
            try:
                if config.DEVELOPMENT:
                    func(*args, **kwargs)
            except Exception as err:
                logger.info(
                    "Config is not set for development: {}".format(err))
        return wrapper
    return dev_config_checker

@check_dev_config(Config, Logger)
def do_stuff_1():
    print("stuff 1 done")

@check_dev_config(Config, Logger)
def do_stuff_2():
    raise Exception("stuff 2 failed")

do_stuff_1()
do_stuff_2()
This code prints
stuff 1 done
Logged: Config is not set for development: stuff 2 failed
Explanations:
The check_dev_config function is actually a decorator generator which accepts the config and the logger as arguments.
It returns the dev_config_checker function which is an actual (and parameterised) decorator, and which accepts a function to decorate as argument.
This decorator returns a wrapper function which will actually run code around the decorated function call. In this wrapper, the decorated function is called inside a try/except structure, and only if config.DEVELOPMENT evaluates to True. In case of an exception, the logger is used to log a message.
Each block of code to decorate is put into a function (do_stuff_1, do_stuff_2) and decorated with the check_dev_config decorator generator, giving it the config and the logger.
When the decorated functions are called, they are called via their decorator and not directly. As you can see, the do_stuff_2 exception has been caught and a message has been logged.
I'm looking to encapsulate logic for database transactions into a with block, wrapping the code in a transaction and handling various exceptions (locking issues). This is simple enough; however, I'd also like to have the block encapsulate retrying the code block after certain exceptions. I can't see a way to package this up neatly into the context manager.
Is it possible to repeat the code within a with statement?
I'd like to use it as simply as this, which is really neat.
def do_work():
    ...

    # This is ideal!
    with transaction(retries=3):
        # Atomic DB statements
        ...

    ...
I'm currently handling this with a decorator, but I'd prefer to offer the context manager (or in fact both), so I can choose to wrap a few lines of code in the with block instead of an inline function wrapped in a decorator, which is what I do at the moment:
def do_work():
    ...

    # This is not ideal!
    @transaction(retries=3)
    def _perform_in_transaction():
        # Atomic DB statements
        ...
    _perform_in_transaction()

    ...
Is it possible to repeat the code within a with statement?
No.
As pointed out earlier in that mailing list thread, you can reduce a bit of duplication by making the decorator call the passed function:
def do_work():
    ...

    # This is not ideal!
    @transaction(retries=3)
    def _perform_in_transaction():
        # Atomic DB statements
        ...
    # called implicitly

    ...
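A hedged sketch of a decorator that behaves this way; the retry loop is illustrative, and a real version would wrap the call in an actual transaction and narrow the except clause:

def transaction(retries=3):
    def decorator(func):
        last_exc = None
        for attempt in range(retries):
            try:
                return func()  # call immediately; the result, not a wrapper, is bound
            except Exception as exc:
                last_exc = exc
        raise last_exc
    return decorator

Because the decorator returns func()'s result rather than a function, applying @transaction(retries=3) to _perform_in_transaction runs it on the spot.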
The way that occurs to me to do this is just to implement a standard database transaction context manager, but allow it to take a retries argument in the constructor. Then I'd just wrap that up in your method implementations. Something like this:
class transaction(object):
    def __init__(self, retries=0):
        self.retries = retries

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, traceback):
        pass

    # Implementation...
    def execute(self, query):
        err = None
        for _ in range(self.retries + 1):  # initial attempt plus retries
            try:
                return self._cursor.execute(query)  # _cursor: assumed set up in __enter__
            except Exception as e:
                err = e  # probably ought to save all errors, but hey
        raise err

with transaction(retries=3) as cursor:
    cursor.execute('BLAH')
As decorators are just functions themselves, you could do the following:
with transaction(_perform_in_transaction, retries=3) as _perf:
    _perf()
For the details, you'd need to implement transaction() as a factory that returns an object whose __call__() runs the original method and repeats it up to retries number of times on failure; __enter__() and __exit__() would be defined as normal for database transaction context managers.
You could alternatively set up transaction() such that it itself executes the passed method up to retries number of times, which would probably require about the same amount of work as implementing the context manager but would mean actual usage would be reduced to just transaction(_perform_in_transaction, retries=3) (which is, in fact, equivalent to the decorator example delnan provided).
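A rough sketch of that factory approach; the transaction details (begin/commit/rollback) are placeholders, and the broad except would need narrowing in real code:

class transaction(object):
    def __init__(self, fn, retries=0):
        self.fn = fn
        self.retries = retries

    def __call__(self, *args, **kwargs):
        last_exc = None
        for _ in range(self.retries + 1):  # initial attempt plus retries
            try:
                return self.fn(*args, **kwargs)
            except Exception as exc:
                last_exc = exc
        raise last_exc

    def __enter__(self):
        # begin the transaction here in a real implementation
        return self

    def __exit__(self, exc_type, exc_val, traceback):
        # commit or roll back here in a real implementation
        return False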
While I agree it can't be done with a context manager... it can be done with two context managers!
The result is a little awkward, and I am not sure whether I approve of my own code yet, but this is what it looks like as the client:
with RetryManager(retries=3) as rm:
    while rm:
        with rm.protect:
            print("Attempt #%d of %d" % (rm.attempt_count, rm.max_retries))
            # Atomic DB statements
There is an explicit while loop still, and not one, but two, with statements, which leaves a little too much opportunity for mistakes for my liking.
Here's the code:
class RetryManager(object):
    """ Context manager that counts attempts to run statements without
    exceptions being raised.
    - returns True when there should be more attempts
    """

    class _RetryProtector(object):
        """ Context manager that only raises exceptions if its parent
        RetryManager has given up."""
        def __init__(self, retry_manager):
            self._retry_manager = retry_manager

        def __enter__(self):
            self._retry_manager._note_try()
            return self

        def __exit__(self, exc_type, exc_val, traceback):
            if exc_type is None:
                self._retry_manager._note_success()
            else:
                # This would be a good place to implement sleep between
                # retries.
                pass
            # Suppress exception if the retry manager is still alive.
            return self._retry_manager.is_still_trying()

    def __init__(self, retries=1):
        self.max_retries = retries
        self.attempt_count = 0  # Note: 1-based.
        self._success = False
        self.protect = RetryManager._RetryProtector(self)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, traceback):
        pass

    def _note_try(self):
        self.attempt_count += 1

    def _note_success(self):
        self._success = True

    def is_still_trying(self):
        return not self._success and self.attempt_count < self.max_retries

    def __bool__(self):
        return self.is_still_trying()
Bonus: I know you don't want to separate your work off into separate functions wrapped with decorators... but if you were happy with that, the redo package from Mozilla offers decorators to do that, so you don't have to roll your own. There is even a context manager that effectively acts as a temporary decorator for your function, but it still relies on your retriable code being factored out into a single function.
This question is a few years old but after reading the answers I decided to give this a shot.
This solution requires the use of a "helper" class, but I think it does provide an interface with retries configured through a context manager.
class Client:
    def _request(self):
        # do request stuff
        print("tried")
        raise Exception()

    def request(self):
        retry = getattr(self, "_retry", None)
        if not retry:
            return self._request()
        else:
            for n in range(retry.tries):
                try:
                    return self._request()
                except Exception:
                    retry.attempts += 1

class Retry:
    def __init__(self, client, tries=1):
        self.client = client
        self.tries = tries
        self.attempts = 0

    def __enter__(self):
        self.client._retry = self

    def __exit__(self, *exc):
        print(f"Tried {self.attempts} times")
        del self.client._retry
>>> client = Client()
>>> with Retry(client, tries=3):
...     # will try 3 times
...     response = client.request()
tried
tried
tried
Tried 3 times
My experiment code looks like this:

import signal

def hi(signum, frame):
    print("hi")

signal.signal(signal.SIGINT, hi)
signal.signal(signal.SIGINT, signal.SIG_IGN)
"hi" didn't get printed, because the signal handler was overridden by signal.SIG_IGN.
How can I avoid this?
You could check whether there is already a handler. If so, put your desired handler and the old handler in a wrapper function that calls both of them:
import signal

def append_signal(sig, f):
    old = None
    if callable(signal.getsignal(sig)):
        old = signal.getsignal(sig)
    def helper(*args, **kwargs):
        if old is not None:
            old(*args, **kwargs)
        f(*args, **kwargs)
    signal.signal(sig, helper)
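Usage would look something like this; SIGUSR1 is chosen because its default handler is not callable (for SIGINT, getsignal returns Python's default handler, which is callable and would be chained in too):

import os
import signal

def handler_a(signum, frame):
    print("first handler")

def handler_b(signum, frame):
    print("second handler")

append_signal(signal.SIGUSR1, handler_a)  # installed alone: no callable handler yet
append_signal(signal.SIGUSR1, handler_b)  # chained to run after handler_a

os.kill(os.getpid(), signal.SIGUSR1)  # prints "first handler", then "second handler"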
If you don't want to override your own handler, check to see if you've set one:
if signal.getsignal(signal.SIGINT) in [signal.SIG_IGN, signal.SIG_DFL]:
    signal.signal(signal.SIGINT, hi)
According to the documentation, it is possible that some superior process has already reassigned the handler from the default. If you don't want to override that, add None to the list of values checked.
The obvious counterpart for the signal.signal(..., signal.SIG_IGN) case would be a "not in" test.
added in response to comment
Chaining signal handlers is not often done because signals are so granular. If I really wanted to do this, I'd follow the model of atexit and register functions to be called by your handler.
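A hedged sketch of that atexit-style registry (the names are mine, not from any library):

import signal

_callbacks = {}  # signal number -> list of registered callbacks

def _dispatch(signum, frame):
    for callback in _callbacks.get(signum, []):
        callback(signum, frame)

def register_handler(signum, callback):
    """Register callback for signum, preserving earlier registrations."""
    callbacks = _callbacks.setdefault(signum, [])
    if not callbacks:  # first registration installs the dispatcher
        signal.signal(signum, _dispatch)
    callbacks.append(callback)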
Simply do the same as you would do in C:

import os
import signal

sig_hand_prev = None

def signal_handler(signum, frame):
    ...
    signal.signal(signum, sig_hand_prev)  # reinstall the previous handler
    os.kill(os.getpid(), signum)          # re-raise the signal for it

def install_handler(signum):
    global sig_hand_prev
    sig_hand_prev = signal.signal(signum, signal_handler)
The key idea here is that you save only the previous handler and re-raise the signal for it when you have finished your work. This way the handlers form a singly linked list, each one handing the signal off to the one installed before it.