Timeout a C++ function from Python

I have Python-C++ bindings implemented (using Boost.Python) such that calling foop() from Python runs a C++ function fooc(). I would like to set a timeout from Python such that foop returns after t seconds. The solutions here work fine for Python functions, but not with foop, because I'm unable to interrupt the C++ code -- example below for calling run_for_timeout(foop). Is there a way to do this from Python (i.e. without implementing the timer functionality in C++)?
import signal

class CallbackValueError(ValueError):
    """Raise for improper data values with callback functions and their utils."""
    pass

class TimeoutError(RuntimeError):
    pass

def run_for_timeout(func, args=(), kwargs=None, timeout=5):
    """Run a function until it times out.

    Note that ``timeout = 0`` does not imply no timeout; rather, it leaves
    no time for the function to run, and this function raises an error.

    Parameters
    ----------
    func : function
    args : tuple
    kwargs : dict | None
    timeout : int
        (seconds)

    Returns
    -------
    result : object | None
        Return object from the function, or None if it timed out.

    Raises
    ------
    CallbackValueError
    """
    if timeout <= 0:
        raise CallbackValueError("{}s is a nonsensical value for the "
                                 "timeout function".format(timeout))

    def handler(signum, frame):
        raise TimeoutError()

    # Set the timeout handler
    signal.signal(signal.SIGALRM, handler)
    signal.alarm(timeout)
    if kwargs is None:
        kwargs = {}
    try:
        result = func(*args, **kwargs)
    except TimeoutError:
        result = None
    finally:
        # Cancel the alarm whether or not the function timed out
        signal.alarm(0)
    return result
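For context, a minimal sketch of the usual workaround when signals cannot help: a SIGALRM handler only runs when control is in the Python interpreter, so a C++ call that never returns to the interpreter cannot be interrupted this way. Running the call in a child process makes it killable. This sketch discards the return value (collecting it would need a multiprocessing.Queue or Pipe); the function name is illustrative:

import multiprocessing

def run_for_timeout_in_process(func, args=(), kwargs=None, timeout=5):
    """Run func in a child process and kill that process on timeout."""
    proc = multiprocessing.Process(target=func, args=args, kwargs=kwargs or {})
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()  # SIGTERM ends the worker even while it is inside C++ code
        proc.join()
        return None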

Related

Coroutine that is guaranteed to exit its context managers

I'd like to use a context manager within a coroutine. This coroutine should handle an unknown number of steps. However, due to the unknown number of steps, it's unclear when the context manager should exit. I'd like it to exit when the coroutine goes out of scope / is garbage collected; however, this seems not to happen in the example below:
import contextlib

@contextlib.contextmanager
def cm():
    print("STARTED")
    yield
    print("ENDED")

def coro(a: str):
    with cm():
        print(a)
        while True:
            val1, val2 = yield
            print(val1, val2)

c = coro("HI")
c.send(None)
print("---")
c.send((1, 2))
print("---!")
Output of this program:
STARTED
HI
---
1 2
---!
The context manager never printed "ENDED".
How can I make a coroutine that will support any number of steps, and be guaranteed to exit gracefully? I don't want to make this a responsibility of the caller.
TLDR: So the issue is that when an exception is raised (and not handled) inside a with block, the __exit__ method of the context manager is called with that exception. For contextmanager-decorated generators, this causes the exception to be thrown to the generator. cm does not handle this exception, and thus the cleanup code is not run. When coro is garbage collected, its close method is called, which throws a GeneratorExit to coro (which then gets thrown to cm). What follows is a detailed description of the above steps.
The close method throws a GeneratorExit to coro which means a GeneratorExit is raised at the point of yield. coro doesn't handle the GeneratorExit so it exits the context via an error. This causes the __exit__ method of the context to be called with an error and error information. What does the __exit__ method from a contextmanager-decorated generator do? If it is called with an exception, it throws that exception to the underlying generator.
At this point a GeneratorExit is raised from the yield statement in the body of our context manager. That unhandled exception prevents the cleanup code from running; it propagates out of the context manager's generator and back to the __exit__ of the contextmanager decorator. Being the same error that was thrown in, __exit__ returns False to indicate that the original error sent to __exit__ was unhandled.
Finally, the GeneratorExit continues to propagate outside of the with block inside coro, where it remains unhandled. However, not handling a GeneratorExit is normal for generators, so the original close method suppresses it.
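To see these steps without waiting for garbage collection, you can trigger them explicitly; close() performs exactly the GeneratorExit throw described above (a minimal sketch using the coro and cm defined earlier):

c = coro("HI")
c.send(None)
c.close()  # throws GeneratorExit into coro at its yield;
           # __exit__ then throws it into cm at cm's yield

With the original cm, "ENDED" is still not printed, for exactly the reasons above.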
See this part of the yield documentation:
If the generator is not resumed before it is finalized (by reaching a zero reference count or by being garbage collected), the generator-iterator’s close() method will be called, allowing any pending finally clauses to execute.
Looking at the close documentation we see:
Raises a GeneratorExit at the point where the generator function was paused. If the generator function then exits gracefully, is already closed, or raises GeneratorExit (by not catching the exception), close returns to its caller.
This part of the with statement documentation:
The suite is executed.
The context manager’s __exit__() method is invoked. If an exception caused the suite to be exited, its type, value, and traceback are passed as arguments to __exit__(). Otherwise, three None arguments are supplied.
And the code of the __exit__ method for the contextmanager decorator.
So with all this context (rim-shot), the easiest way we can get the desired behavior is with a try-except-finally in the definition of our context manager. This is the suggested method from the contextlib docs, and all their examples follow this form.
Thus, you can use a try…except…finally statement to trap the error (if any), or ensure that some cleanup takes place.
import contextlib

@contextlib.contextmanager
def cm():
    try:
        print("STARTED")
        yield
    except Exception:
        raise
    finally:
        print("ENDED")

def coro(a: str):
    with cm():
        print(a)
        while True:
            val1, val2 = yield
            print(val1, val2)

c = coro("HI")
c.send(None)
print("---")
c.send((1, 2))
print("---!")
The output is now:
STARTED
HI
---
1 2
---!
ENDED
as desired.
We could also define our context manager in the traditional manner: as a class with an __enter__ and an __exit__ method, and still get the correct behavior:
class CM:
    def __enter__(self):
        print('STARTED')

    def __exit__(self, exc_type, exc_value, traceback):
        print('ENDED')
        return False
The situation is somewhat simpler here, because we can see exactly what the __exit__ method is without having to go to the source code. The GeneratorExit gets sent (as a parameter) to __exit__, where __exit__ happily runs its cleanup code and then returns False. Returning False is not strictly necessary, as otherwise None (another falsy value) would be returned, but it indicates that any exception sent to __exit__ was not handled. (The return value of __exit__ doesn't matter if there was no exception.)
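A quick usage sketch, assuming the same coro as above but with "with CM():" in place of "with cm():"; closing the generator now runs the cleanup:

c = coro("HI")
c.send(None)
c.close()  # GeneratorExit reaches CM.__exit__, which prints ENDED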
You can do it by telling the coroutine to shut down, sending it something that will cause it to break out of the loop and return, as illustrated below. Doing so will cause a StopIteration exception to be raised where this is done, so I added another context manager to allow it to be suppressed. Note I have also added a coroutine decorator to make them start up automatically when first called, but that part is strictly optional.
import contextlib
from typing import Callable

QUIT = 'quit'

def coroutine(func: Callable):
    """ Decorator to make coroutines automatically start when called. """
    def start(*args, **kwargs):
        cr = func(*args, **kwargs)
        next(cr)
        return cr
    return start

@contextlib.contextmanager
def ignored(*exceptions):
    try:
        yield
    except exceptions:
        pass

@contextlib.contextmanager
def cm():
    print("STARTED")
    yield
    print("ENDED")

@coroutine
def coro(a: str):
    with cm():
        print(a)
        while True:
            value = (yield)
            if value == QUIT:
                break
            val1, val2 = value
            print(val1, val2)

with ignored(StopIteration):
    c = coro("HI")
    # c.send(None)  # No longer needed.
    print("---")
    c.send((1, 2))
    c.send((3, 5))
    c.send(QUIT)  # Tell coroutine to clean itself up and exit.
print("---!")
Output:
STARTED
HI
---
1 2
3 5
ENDED
---!

How do I apply timeout and retry decorator functions to google-cloud-storage client in python?

I want to alter the default timeout and retries for Google Cloud Storage requests. I can see there are decorator functions in the google cloud package in the api_core directory; how do I apply these to the storage client (if that is the right term)?
from google.cloud import storage
from google.api_core import timeout
timeout_ = timeout.ConstantTimeout(timeout=600)
storage_client = storage.Client()
storage_client.get_bucket(name_of_bucket)
Here is a custom solution based on the api_core code. The difference resides in the func_with_timeout function: it applies the specified timeout and number of retries to the function you are wrapping.
import functools
import signal
from multiprocessing import TimeoutError

import six
from google.cloud import storage

_PARTIAL_VALID_ASSIGNMENTS = ("__doc__",)

def wraps(wrapped):
    """A functools.wraps helper that handles partial objects on Python 2."""
    # https://github.com/google/pytype/issues/322
    if isinstance(wrapped, functools.partial):  # pytype: disable=wrong-arg-types
        return six.wraps(wrapped, assigned=_PARTIAL_VALID_ASSIGNMENTS)
    else:
        return six.wraps(wrapped)

def raise_timeout(signum, frame):
    raise TimeoutError

@six.python_2_unicode_compatible
class ConstantTimeout(object):
    """A decorator that adds a constant timeout argument.

    This is effectively equivalent to
    ``functools.partial(func, timeout=timeout)``.

    Args:
        timeout (Optional[float]): the timeout (in seconds) to be applied to
            the wrapped function. If `None`, the target function is expected
            to never time out.
        retries (Optional[int]): the number of times to retry the wrapped
            function after a timeout.
    """
    def __init__(self, timeout=None, retries=None):
        self._timeout = timeout
        self._retries = retries

    def __call__(self, func):
        """Apply the timeout decorator.

        Args:
            func (Callable): The function to apply the timeout argument to.

        Returns:
            Callable: The wrapped function.
        """
        @wraps(func)
        def func_with_timeout(*args, **kwargs):
            """Wrapped function that adds timeout."""
            signal.signal(signal.SIGALRM, raise_timeout)
            i = 1
            while i <= self._retries:
                try:
                    # Schedule the signal to be sent after ``timeout`` seconds.
                    signal.alarm(self._timeout)
                    return func(*args, **kwargs)
                except TimeoutError:
                    i += 1
                finally:
                    # Cancel any pending alarm once this attempt is over.
                    signal.alarm(0)
            raise TimeoutError("Exceeded maximum amount of retries: {}".format(self._retries))
        return func_with_timeout

    def __str__(self):
        return "<ConstantTimeout timeout={:.1f}>".format(self._timeout)
Then you just have to create a new function passing the one you want to wrap as an argument to the timeout_ decorator:
timeout_ = ConstantTimeout(timeout=2, retries=3)
storage_client = storage.Client()
get_bucket_with_timeout = timeout_(storage_client.get_bucket)
buck = get_bucket_with_timeout(name_of_bucket)
print(buck)
Alternatively, just pass the timeout in seconds as a keyword parameter:
storage_client.get_bucket(name_of_bucket, timeout=120)
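For retries, recent versions of google-cloud-storage also accept a retry keyword argument on most client methods, alongside timeout. A minimal sketch; the Retry parameters shown are illustrative, not recommendations:

from google.api_core.retry import Retry
from google.cloud import storage

storage_client = storage.Client()
# Retry transient errors for up to 120 s total, with exponential backoff.
retry = Retry(initial=1.0, maximum=60.0, multiplier=2.0, deadline=120.0)
bucket = storage_client.get_bucket(name_of_bucket, timeout=600, retry=retry)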

How to make use of decorators on a task function in a DAG template

I have almost 20 functions, so I wrote a decorator instead of injecting the same code into each function. My decorator should check whether each function returns a 500 response status code and, if so, raise an exception. Can anyone give me more clarity?
This is the decorator I wrote
def check_tasks_failed(a_func):
    """
    Wrapper function to check whether a task failed.

    :param a_func: the task function to wrap
    :return: the wrapped function
    """
    def wrapper(**kwargs):
        response = a_func(**kwargs)
        # Check here what the criteria are for the task to fail
        if response.status_code == 500:
            raise ValueError('invalid return status 500')
        return response
    return wrapper
This is an example of a function that needs to be decorated:
@check_tasks_failed
def get_naming_(**kwargs) -> requests.Response:
    """Get job from custom_etl."""
    try:
        naming_res = get_custom_etl_job(VALIDATED_CONTAINER,
                                        VALIDATED_CONTAINER,
                                        VALIDATED_CONTAINER,
                                        **kwargs)
        return naming_res
    except Exception as e:
        return {'success': False, 'reason': f'{e}'}
My expected result is that when get_naming_ fails, the following exception is raised: ValueError('invalid return status 500').
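A minimal self-contained sketch of the pattern; get_fake_job and the httpbin URL are hypothetical stand-ins for get_custom_etl_job:

import requests

@check_tasks_failed
def get_fake_job(**kwargs) -> requests.Response:
    # httpbin returns whatever status code you request, so this
    # hypothetical task always comes back with a 500.
    return requests.get("https://httpbin.org/status/500", **kwargs)

get_fake_job()  # raises ValueError('invalid return status 500')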

Python: decorator/wrapper for try/except statement

I have some blocks of code which need to be wrapped by a function:
try:
    if config.DEVELOPMENT == True:
        do_some_stuff()
except:
    logger.info("Config is not set for development")
Then I'll do it again:
try:
    if config.DEVELOPMENT == True:
        do_some_another_stuff()
except:
    logger.info("Config is not set for development")
So, how can I wrap this "do_some_stuff" and "do_some_another_stuff"?
I'm trying to write a function with contextmanager:
from contextlib import contextmanager

@contextmanager
def try_dev_config(name):
    try:
        if name is not None:
            yield
    except Exception as e:
        print("not dev config")

with try_dev_config("config.DEVELOPMENT"):
    do_some_stuff()
And I got an error:
RuntimeError: generator didn't yield
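(The @contextmanager protocol requires the generator to yield exactly once; if the yield is skipped, for example when name is None, __enter__ raises this RuntimeError. A minimal sketch of a version that always yields, and that swallows any error raised inside the with block, since a contextmanager generator that catches the thrown exception and returns suppresses it:)

from contextlib import contextmanager

@contextmanager
def try_dev_config(name):
    try:
        yield  # always yield exactly once
    except Exception:
        # catching the exception here (and returning) suppresses it,
        # so errors inside the with block do not propagate
        print("not dev config")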
You could pass in a function.
boolean = True

def pass_this_in():
    print("I just did some stuff")

def the_try_except_bit(function):
    try:
        if boolean:
            function()
    except:
        print("Excepted")

# Calling the above code
the_try_except_bit(pass_this_in)
If you want to reduce the "pass_this_in" definition bit, then you can use a lambda function definition:
pass_this_in = lambda: print("I just did some stuff")
I am not sure that a context manager is the right method to achieve what you want. The context manager's goal is to provide a mechanism to open/instantiate a resource, give access to it (or not), and close/clean it up automatically when you no longer need it.
IMHO, what you need is a decorator.
A decorator aims at executing code around a function call. It would force you to put each block of code in a function but I don't think it is so difficult. You can implement it like this:
class Config(object):
    """for demonstration purposes only: used to have a config.DEVELOPMENT value"""
    DEVELOPMENT = True

class Logger(object):
    """for demonstration purposes only: used to have a logger.info method"""
    @staticmethod
    def info(msg):
        print("Logged: {}".format(msg))

def check_dev_config(config, logger):
    def dev_config_checker(func):
        def wrapper(*args, **kwargs):
            try:
                if config.DEVELOPMENT:
                    func(*args, **kwargs)
            except Exception as err:
                logger.info(
                    "Config is not set for development: {}".format(err))
        return wrapper
    return dev_config_checker

@check_dev_config(Config, Logger)
def do_stuff_1():
    print("stuff 1 done")

@check_dev_config(Config, Logger)
def do_stuff_2():
    raise Exception("stuff 2 failed")

do_stuff_1()
do_stuff_2()
This code prints
stuff 1 done
Logged: Config is not set for development: stuff 2 failed
Explanations:
The check_dev_config function is actually a decorator generator which accepts the config and the logger as arguments.
It returns the dev_config_checker function which is an actual (and parameterised) decorator, and which accepts a function to decorate as argument.
This decorator returns a wrapper function which will actually run code around the decorated function call. In this function, the decorated function is called inside a try/except structure and only if the config.DEVELOPMENT is evaluated to True. In case of exception, the logger is used to log an information.
Each block of code to decorate is put into a function (do_stuff_1, do_stuff_2) and decorated with the check_dev_config decorator generator, giving it the config and the logger.
When decorated functions are called, they are called via their decorator and not directly. As you can see, the do_stuff_2 exception has been caught and a message has been logged.

Time out decorator on a multiprocessing function

I have this decorator taken directly from an example I found on the net:
import signal

class TimedOutExc(Exception):
    pass

def timeout(timeout):
    def decorate(f):
        def handler(signum, frame):
            raise TimedOutExc()

        def new_f(*args, **kwargs):
            old = signal.signal(signal.SIGALRM, handler)
            signal.alarm(timeout)
            try:
                result = f(*args, **kwargs)
            except TimedOutExc:
                return None
            finally:
                signal.signal(signal.SIGALRM, old)
                signal.alarm(0)
            return result

        new_f.__name__ = f.__name__
        return new_f
    return decorate
It throws an exception if the f function times out.
Well, it works, but when I use this decorator on a multiprocessing function and it stops due to a timeout, it doesn't terminate the processes involved in the computation. How can I do that?
I don't want to launch an exception and stop the program. Basically what I want is when f times out, have it return None and then terminate the processes involved.
While I agree with the main point of Aaron's answer, I would like to elaborate a bit.
The processes launched by multiprocessing must be stopped in the function to be decorated; I don't think that this can be done generally and simply from the decorator itself (the decorated function is the only entity that knows what calculations it launched).
Instead of having the decorated function catch SIGALRM, you can also catch your custom TimedOutExc exception; this might be more flexible. Your example would then become:
import signal
import functools

class TimedOutExc(Exception):
    """
    Raised when a timeout happens
    """

def timeout(timeout):
    """
    Return a decorator that raises a TimedOutExc exception
    after timeout seconds, if the decorated function did not return.
    """
    def decorate(f):
        def handler(signum, frame):
            raise TimedOutExc()

        @functools.wraps(f)  # Preserves the documentation, name, etc.
        def new_f(*args, **kwargs):
            old_handler = signal.signal(signal.SIGALRM, handler)
            signal.alarm(timeout)
            result = f(*args, **kwargs)  # f() always returns, in this scheme
            signal.signal(signal.SIGALRM, old_handler)  # Old signal handler is restored
            signal.alarm(0)  # Alarm removed
            return result

        return new_f
    return decorate

@timeout(10)
def function_that_takes_a_long_time():
    try:
        ...  # long, parallel calculation
    except TimedOutExc:
        ...  # code that shuts down the processes
        return None  # Or exception raised, which means that the calculation is not complete
I doubt that can be done with a decorator: A decorator is a wrapper for a function; the function is a black box. There is no communication between the decorator and the function it wraps.
What you need to do is rewrite your function's code to use the SIGALRM handler to terminate any processes it has started.
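For concreteness, a sketch of that idea with a worker pool; slow_task and the timeouts are illustrative, and the key point is that the code that started the worker process is also the code that terminates it:

import multiprocessing
import time

def slow_task(x):
    time.sleep(30)  # stand-in for the long calculation
    return x * x

def run_with_timeout(func, args=(), timeout=5):
    """Run func in a worker process; terminate the worker on timeout."""
    pool = multiprocessing.Pool(processes=1)
    try:
        result = pool.apply_async(func, args).get(timeout=timeout)
        pool.close()
        return result
    except multiprocessing.TimeoutError:
        pool.terminate()  # actually kill the worker instead of abandoning it
        return None
    finally:
        pool.join()

if __name__ == "__main__":
    print(run_with_timeout(slow_task, (3,), timeout=2))  # prints None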
