How could one write a debounce decorator in Python which debounces not only on the function called but also on the function arguments / combination of function arguments used?
Debouncing means to suppress calls to a function within a given timeframe: say you call a function 100 times within 1 second but only want to allow it to run once every 10 seconds; a debounce-decorated function would run the function once, 10 seconds after the last call, provided no new calls were made in the meantime. Here I'm asking how one could debounce a function call with specific function arguments.
An example could be to debounce an expensive update of a person object like:
@debounce(seconds=10)
def update_person(person_id):
    # time consuming, expensive op
    print('>>Updated person {}'.format(person_id))
Then debouncing on the function - including function arguments:
update_person(person_id=144)
update_person(person_id=144)
update_person(person_id=144)
>>Updated person 144
update_person(person_id=144)
update_person(person_id=355)
>>Updated person 144
>>Updated person 355
So calling the function update_person with the same person_id would be suppressed (debounced) until the 10-second debounce interval has passed without a new call to the function with that same person_id.
There are a few debounce decorators around, but none of them includes the function arguments; example: https://gist.github.com/walkermatt/2871026
I've done a similar throttle decorator by function and arguments:
import time

def throttle(s, keep=60):
    def decorate(f):
        caller = {}

        def wrapped(*args, **kwargs):
            nonlocal caller
            called_args = '{}'.format(*args)
            t_ = time.time()
            if caller.get(called_args, None) is None or t_ - caller.get(called_args, 0) >= s:
                result = f(*args, **kwargs)
                caller = {key: val for key, val in caller.items() if t_ - val > keep}
                caller[called_args] = t_
                return result

            # Keep only calls > keep
            caller = {key: val for key, val in caller.items() if t_ - val > keep}
            caller[called_args] = t_
        return wrapped
    return decorate
The main takeaway is that it keeps the function arguments in caller[called_args].
See also the difference between throttle and debounce: http://demo.nimius.net/debounce_throttle/
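For illustration, a quick way to exercise the throttle above might look like this; the 2-second window and the expensive_op name are made up for this example:

import time

@throttle(s=2)
def expensive_op(key):
    print('ran expensive_op for {}'.format(key))

expensive_op('a')   # runs immediately
expensive_op('a')   # suppressed: called again within 2 seconds
time.sleep(2.5)
expensive_op('a')   # runs again: the 2-second window has passed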
Update:
After some tinkering with the above throttle decorator and the threading.Timer example in the gist, I actually think this should work:
from threading import Timer
from inspect import signature
import time
def debounce(wait):
    def decorator(fn):
        sig = signature(fn)
        caller = {}

        def debounced(*args, **kwargs):
            nonlocal caller
            try:
                bound_args = sig.bind(*args, **kwargs)
                bound_args.apply_defaults()
                called_args = fn.__name__ + str(dict(bound_args.arguments))
            except:
                called_args = ''
            t_ = time.time()

            def call_it(key):
                try:
                    # always remove on call
                    caller.pop(key)
                except:
                    pass
                fn(*args, **kwargs)

            try:
                # Always try to cancel timer
                caller[called_args].cancel()
            except:
                pass

            caller[called_args] = Timer(wait, call_it, [called_args])
            caller[called_args].start()
        return debounced
    return decorator
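A quick usage sketch of the decorator above; the 2-second wait and the final sleep are only there so the example finishes after the timers fire:

import time

@debounce(2)
def update_person(person_id):
    print('>>Updated person {}'.format(person_id))

update_person(person_id=144)
update_person(person_id=144)   # cancels the pending timer for person 144
update_person(person_id=355)   # separate timer: different argument combination
time.sleep(3)
# ~2 seconds after the last calls, both pending timers fire once each:
# >>Updated person 144
# >>Updated person 355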
I've had the same need to build a debounce annotation for a personal project. After stumbling upon the same gist / discussion you have, I ended up with the following solution:
import threading

def debounce(wait_time):
    """
    Decorator that will debounce a function so that it is called after wait_time seconds.
    If it is called multiple times, will wait for the last call to be debounced and run only this one.
    """
    def decorator(function):
        def debounced(*args, **kwargs):
            def call_function():
                debounced._timer = None
                return function(*args, **kwargs)

            # if we already have a call to the function currently waiting to be executed, reset the timer
            if debounced._timer is not None:
                debounced._timer.cancel()

            # after wait_time, call the function provided to the decorator with its arguments
            debounced._timer = threading.Timer(wait_time, call_function)
            debounced._timer.start()

        debounced._timer = None
        return debounced

    return decorator
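For comparison, a small usage sketch of this version (the 1-second wait is arbitrary); note that it keys only on the function itself, not on its arguments, so the last call wins:

import time

@debounce(1)
def greet(name):
    print('Hello {}'.format(name))

greet('Alice')
greet('Bob')      # cancels the pending call for 'Alice'
time.sleep(1.5)   # after ~1 second, prints "Hello Bob" once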
I've created an open-source project to provide functions such as debounce, throttle, filter ... as decorators. Contributions are more than welcome to improve on the solutions I have for these decorators or to add other useful decorators: decorator-operations repository
Not sure if this is possible in Python, but I'm trying to profile a large function and indicate which parts of its processing / I/O are slow. I was attempting to write a couple of decorator functions: a top-level one to wrap the function being profiled, and decorators for some of the nested functions to report on their timing if a threshold is exceeded for the top-level decorator. I'm not sure how I could share this context across decorators though.
Top level Decorator
def time_stack(name, threshold=60000):
    def wrapper(f):
        def wrapped(*args, **kwargs):
            start = time_millis()
            f(*args, **kwargs)
            end = time_millis()
            if end - start > threshold:
                # Log out frame timings here
                pass
        return wrapped
    return wrapper
For nested functions
def time_frame(name):
    def wrapper(f):
        def wrapped(*args, **kwargs):
            start = time_millis()
            f(*args, **kwargs)
            end = time_millis()
            t = end - start
            # Somehow remember this value for the outer time_stack to use if needed
        return wrapped
    return wrapper
Example
@time_frame(name="do_some_io")
def do_some_io(string):
    # do some io
    ...

@time_frame(name="do_a_transform")
def do_a_transform(result):
    # do some transforming
    ...

@time_frame(name="do_some_caching")
def do_some_caching(stuff):
    # do some caching
    ...

@time_stack(name="search", threshold=100000)
def search(string):
    result = do_some_io(string)
    transformed = do_a_transform(result)
    return do_some_caching(transformed)
Here, if the execution time of search exceeds 100000ms, it would print out something like
search took 123456ms
do_some_io: 23000ms
do_a_transform: 13678ms
do_some_caching: 86778ms
I thought about passing an object down through the kwargs to keep track of the times, but then all the functions in the call stack would have to have **kwargs in their signature; if there's a way to achieve this without having to do that, it would be preferable.
You can define a global stack which keeps the data of each time_frame. It will be set up in time_stack before calling the function and reset at the end of it. You can use its data if the elapsed time has passed the threshold.
However, this assumes there is only one time_stack at a time; for multiple time_stack functions, you would need a stack containing stacks.
A sketch of this idea is something like:
PROFILE_STACK = []
STACK_IS_SET = False

def time_stack(name, threshold=60000):
    def wrapper(f):
        def wrapped(*args, **kwargs):
            global STACK_IS_SET
            STACK_IS_SET = True
            start = time_millis()
            f(*args, **kwargs)
            end = time_millis()
            if end - start > threshold:
                # use PROFILE_STACK
                pass
            PROFILE_STACK.clear()
            STACK_IS_SET = False
        return wrapped
    return wrapper
And
def time_frame(name):
    def wrapper(f):
        def wrapped(*args, **kwargs):
            start = time_millis()
            f(*args, **kwargs)
            end = time_millis()
            t = end - start
            if STACK_IS_SET:
                PROFILE_STACK.append("SOMETHING")
                # Somehow remember this value for the outer time_stack to use if needed
        return wrapped
    return wrapper
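To try the sketch out, something along these lines should work; time_millis is not defined above, so this helper (and the sleep-based stand-in for real work) is an assumption:

import time

def time_millis():
    # assumed helper: current time in milliseconds
    return int(time.time() * 1000)

@time_frame(name="do_some_io")
def do_some_io(string):
    time.sleep(0.2)        # stand-in for real I/O
    return string.upper()

@time_stack(name="search", threshold=100)
def search(string):
    return do_some_io(string)

search("hello")            # ~200ms > 100ms threshold, so the entries appended by time_frame
                           # would be available inside the "# use PROFILE_STACK" branch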
Is there any way to check, inside function f1 in my example, whether the calling function (here decorated or not_decorated) has a specific decorator (in the code, @out)? Is such information passed to a function?
def out(fun):
    def inner(*args, **kwargs):
        fun(*args, **kwargs)
    return inner

@out
def decorated():
    f1()

def not_decorated():
    f1()

def f1():
    if is_decorated_by_out:  # here I want to check it
        print('I am')
    else:
        print('I am not')

decorated()
not_decorated()
Expected output:
I am
I am not
To be clear, this is egregious hackery, so I don't recommend it. But since you've ruled out additional parameters, and f1 will be the same whether wrapped or not, hacks are your only option. The solution is to add a local variable to the wrapper function for the sole purpose of being found by means of stack inspection:
import inspect

def out(fun):
    def inner(*args, **kwargs):
        __wrapped_by__ = out
        fun(*args, **kwargs)
    return inner

def is_wrapped_by(func):
    try:
        return inspect.currentframe().f_back.f_back.f_back.f_locals.get('__wrapped_by__') is func
    except AttributeError:
        return False

@out
def decorated():
    f1()

def not_decorated():
    f1()

def f1():
    if is_wrapped_by(out):
        print('I am')
    else:
        print('I am not')

decorated()
not_decorated()
This assumes a specific degree of nesting (the manual back-tracking via f_back accounts for is_wrapped_by itself, f1, decorated and finally inner from out). If you want to determine whether out was involved anywhere in the call stack, make is_wrapped_by loop until the stack is exhausted:
def is_wrapped_by(func):
    frame = None
    try:
        # Skip is_wrapped_by and caller
        frame = inspect.currentframe().f_back.f_back
        while True:
            if frame.f_locals.get('__wrapped_by__') is func:
                return True
            frame = frame.f_back
    except AttributeError:
        pass
    finally:
        # Leaving frame on the call stack can cause a cycle involving locals
        # which delays cleanup until the cycle collector runs;
        # explicitly break the cycle to save yourself the headache
        del frame
    return False
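With the looping version (and the same out decorator that plants __wrapped_by__), the check also works when extra frames sit between the decorated function and f1; helper here is just an illustrative intermediate function:

@out
def decorated():
    helper()

def not_decorated():
    helper()

def helper():          # an extra frame between the caller and f1
    f1()

def f1():
    print('I am' if is_wrapped_by(out) else 'I am not')

decorated()            # I am
not_decorated()        # I am not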
If you are open to creating an additional parameter in f1 (you could also use a default parameter), you can use functools.wraps and check for the existence of the __wrapped__ attribute. To do so, pass the calling function to f1:
import functools

def out(fun):
    @functools.wraps(fun)
    def inner(*args, **kwargs):
        fun(*args, **kwargs)
    return inner

@out
def decorated():
    f1(decorated)

def not_decorated():
    f1(not_decorated)

def f1(_func):
    if getattr(_func, '__wrapped__', False):
        print('I am')
    else:
        print('I am not')

decorated()
not_decorated()
Output:
I am
I am not
Suppose you have a function decoration like this one
def double_arg(fun):
    def inner(x):
        return fun(x*2)
    return inner
however you can't access it (it's inside a 3rd party lib or something). In this case you can wrap it into another function that adds the name of the decoration to the resulting function
def keep_decoration(decoration):
    def f(g):
        h = decoration(g)
        h.decorated_by = decoration.__name__
        return h
    return f
and replace the old decoration by the wrapper.
double_arg = keep_decoration(double_arg)
You can even write a helper function that checks whether a function is decorated or not.
def is_decorated_by(f, decoration_name):
    try:
        return f.decorated_by == decoration_name
    except AttributeError:
        return False
Example of use...
@double_arg
def inc_v1(x):
    return x + 1

def inc_v2(x):
    return x + 1

print(inc_v1(5))
print(inc_v2(5))

print(is_decorated_by(inc_v1, 'double_arg'))
print(is_decorated_by(inc_v2, 'double_arg'))
Output
11
6
True
False
report.py
if __name__ == "__main__":
    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter, description="CHECK-ACCESS REPORTING.")
    parser.add_argument('--input', '-i', help='Filepath containing the Active Directory userlist')
    parser.add_argument('--timestamp', '-t', nargs='?', const="BLANK", help='filepath with environment variable set')
    args, unknownargs = parser.parse_known_args(sys.argv[1:])
    timestampchecker(args.timestamp)
    # checking the value of cons.DISPLAYRESULT is TRUE
    main()
timestampchecker function :
def timestampchecker(status):
    """ Check if the timestamp is to display or not from command line"""
    if status is not None:
        cons.DISPLAY_TIME_STAMP = True
This function checks whether the user has set the -t argument. If it is set, it sets the constant cons.DISPLAY_TIME_STAMP to True.
The function works fine and sets the constant to True.
But the decorator below, which I use in main(), does not see the True value, only False:
timer.py
def benchmarking(timestaus):
    def wrapper(funct):
        def timercheck(*args, **kwarg):
            if timestaus is True:
                starttime = time.time()
            funct(*args, **kwarg)
            if timestaus is True:
                print('Time Taken:', round(time.time()-starttime, 4))
        return timercheck
    return wrapper
I have decorated some methods called from main() in report.py with the decorator above. For example, this is a class used in report.py whose method is decorated with it:
class NotAccountedReport:
    def __init__(self, pluginoutputpath):
        """ Path where the plugins result are stored need these files"""
        self.pluginoutputpath = pluginoutputpath

    @benchmarking(cons.DISPLAY_TIME_STAMP)
    def makeNotAccountableReport():
        # some functionality
        ...
Here I have passed the constant value as the decorator argument. Even though the constant is set to True before the method is called, the decorator still sees False, so the timing code never runs. Where is the problem? I can't figure it out.
You didn't post a complete minimal verifiable example, so there might be something else too, but if your point is that when calling NotAccountedReport().makeNotAccountableReport() you don't get your "Time taken" printed, then it's really not a surprise: the benchmarking decorator is applied when the function is defined (when the module is imported), well before the if __name__ == '__main__' clause is executed, so at that time cons.DISPLAY_TIME_STAMP has not yet been updated by your command-line args.
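The timing issue is easy to reproduce in isolation; in this made-up snippet the decorator argument is frozen at definition time, while a value read inside the wrapper reflects later changes:

FLAG = False

def show(value):                  # decorator argument: evaluated once, at decoration time
    def wrapper(func):
        def inner(*args, **kwargs):
            print('decoration-time value:', value)   # stays False forever
            print('call-time value:', FLAG)          # read each time the function runs
            return func(*args, **kwargs)
        return inner
    return wrapper

@show(FLAG)                       # FLAG is still False when this line executes
def work():
    pass

FLAG = True
work()
# decoration-time value: False
# call-time value: True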
If you want a runtime flag to activate / deactivate your decorator's behaviour, the obvious solution is to check cons.DISPLAY_TIME_STAMP within the decorator instead of passing it as an argument, i.e.:
def benchmarking(func):
    def timercheck(*args, **kwarg):
        if cons.DISPLAY_TIME_STAMP:
            starttime = time.time()
        result = func(*args, **kwarg)
        if cons.DISPLAY_TIME_STAMP:
            logger.debug('Time Taken: %s', round(time.time()-starttime, 4))
        return result
    return timercheck

class NotAccountedReport(object):
    @benchmarking
    def makeNotAccountableReport():
        # some functionality
        ...
I have been learning Scala recently, so I wrote some recursion in Python.
And I found there is no tail-recursion optimization in Python.
Then I found a magic(?) decorator that seems to optimize the tail-recursion.
It solved the RuntimeError: maximum recursion depth exceeded.
But I don't understand how and why this code works.
Can somebody explain the magic power inside this code?
code:
# This program shows off a python decorator
# which implements tail call optimization. It
# does this by throwing an exception if it is
# its own grandparent, and catching such
# exceptions to recall the stack.
import sys

class TailRecurseException:
    def __init__(self, args, kwargs):
        self.args = args
        self.kwargs = kwargs

def tail_call_optimized(g):
    """
    This function decorates a function with tail call
    optimization. It does this by throwing an exception
    if it is its own grandparent, and catching such
    exceptions to fake the tail call optimization.

    This function fails if the decorated
    function recurses in a non-tail context.
    """
    def func(*args, **kwargs):
        f = sys._getframe()
        if f.f_back and f.f_back.f_back \
                and f.f_back.f_back.f_code == f.f_code:
            raise TailRecurseException(args, kwargs)
        else:
            while 1:
                try:
                    return g(*args, **kwargs)
                except TailRecurseException, e:
                    args = e.args
                    kwargs = e.kwargs
    func.__doc__ = g.__doc__
    return func
@tail_call_optimized
def factorial(n, acc=1):
    "calculate a factorial"
    if n == 0:
        return acc
    return factorial(n-1, n*acc)

print factorial(10000)
# prints a big, big number,
# but doesn't hit the recursion limit.
@tail_call_optimized
def fib(i, current=0, next=1):
    if i == 0:
        return current
    else:
        return fib(i - 1, next, current + next)

print fib(10000)
# also prints a big number,
# but doesn't hit the recursion limit.
without tail call optimization your stack looks like this:
factorial(10000)
factorial(9999)
factorial(9998)
factorial(9997)
factorial(9996)
...
and grows until you reach sys.getrecursionlimit() calls (then kaboom).
with tail call optimization:
factorial(10000,1)
factorial(9999,10000) <-- f.f_back.f_back.f_code = f.f_code? nope
factorial(9998,99990000) <-- f.f_back.f_back.f_code = f.f_code? yes, raise excn.
and the exception makes the decorator go to the next iteration of its while loop.
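The question's code is Python 2; as a rough sketch, a Python 3 translation of the same trick could look like this (the exception now derives from Exception, print is a function, and the stack-frame check is unchanged):

import sys

class TailRecurseException(Exception):
    def __init__(self, args, kwargs):
        self.args = args          # positional arguments of the suppressed call
        self.kwargs = kwargs

def tail_call_optimized(g):
    def func(*args, **kwargs):
        f = sys._getframe()
        # grandparent frame runs the same code object -> we are a nested tail call:
        # unwind back to the outer while-loop instead of growing the stack
        if f.f_back and f.f_back.f_back and f.f_back.f_back.f_code == f.f_code:
            raise TailRecurseException(args, kwargs)
        while True:
            try:
                return g(*args, **kwargs)
            except TailRecurseException as e:
                args, kwargs = e.args, e.kwargs
    func.__doc__ = g.__doc__
    return func

@tail_call_optimized
def factorial(n, acc=1):
    if n == 0:
        return acc
    return factorial(n - 1, n * acc)

print(factorial(10000))   # big number, no RecursionError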
I wrote get_function_arg_data(func), as in the code below, to get the function func's argument information:
def get_function_arg_data(func):
    import inspect
    func_data = inspect.getargspec(func)
    args_name = func_data.args         # func argument list
    args_default = func_data.defaults  # func argument default data list
    return args_name, args_default

def showduration(user_function):
    ''' show time duration decorator'''
    import time
    def wrapped_f(*args, **kwargs):
        t1 = time.clock()
        result = user_function(*args, **kwargs)
        print "%s()_Time: %0.5f" % (user_function.__name__, time.clock()-t1)
        return result
    return wrapped_f

def foo(para1, para2=5, para3=7):
    for i in range(1000):
        s = para1+para2+para3
    return s

@showduration
def bar(para1, para2, para3):
    for i in range(1000):
        s = para1+para2+para3
    return s

print get_function_arg_data(foo)
bar(1,2,3)
print get_function_arg_data(bar)
>>>
(['para1', 'para2', 'para3'], (5, 7))
bar()_Time: 0.00012
([], None)
>>>
get_function_arg_data() works for foo but not for bar, because bar is decorated by the decorator @showduration. My question is how to penetrate the decorator to get the underlying function's information (argument list and default values)?
Thanks for your tips.
I don't think there is, or at least know of, any general way to "penetrate" a decorated function and get at the underlying function's information, because Python's concept of function decoration is so general -- in fact, generally speaking, there's nothing that requires or guarantees that the original function will be called at all (although that's usually the case).
Therefore, a more practical question would be: How could I write my own decorators which would allow me to inspect the underlying function's argument information?
One easy way, previously suggested, would be to use Michele Simionato's decorator module (and write decorators compatible with it).
A less robust, but extremely simple way of doing this would be to do what is shown below based on the code in your question:
def get_function_arg_data(func):
    import inspect
    func = getattr(func, '_original_f', func)  # use saved original if decorated
    func_data = inspect.getargspec(func)
    args_name = func_data.args         # func argument list
    args_default = func_data.defaults  # func argument default data list
    return args_name, args_default

def showduration(user_function):
    '''show time duration decorator'''
    import time
    def wrapped_f(*args, **kwargs):
        t1 = time.clock()
        result = user_function(*args, **kwargs)
        print "%s()_Time: %0.5f" % (user_function.__name__, time.clock()-t1)
        return result
    wrapped_f._original_f = user_function  # save original function
    return wrapped_f

def foo(para1, para2=5, para3=7):
    for i in range(1000):
        s = para1+para2+para3
    return s

@showduration
def bar(para1, para2, para3):
    for i in range(1000):
        s = para1+para2+para3
    return s

print 'get_function_arg_data(foo):', get_function_arg_data(foo)
print 'get_function_arg_data(bar):', get_function_arg_data(bar)
All the modification involves is saving the original function in an attribute named _original_f which is added to the wrapped function returned by the decorator. The get_function_arg_data() function then simply checks for this attribute and returns information based on its value rather than on the decorated function passed to it.
While this approach doesn't work with just any decorated function, only ones which have had the special attribute added to them, it is compatible with both Python 2 & 3.
Output produced by the code shown:
get_function_arg_data(foo): (['para1', 'para2', 'para3'], (5, 7))
get_function_arg_data(bar): (['para1', 'para2', 'para3'], None)
Assuming you've installed Michele Simionato's decorator module, you can make your showduration decorator work with it by making some minor modifications to it and to the nested wrapped_f() function defined in it, so the latter fits the signature that module's decorator.decorator() function expects:
import decorator

def showduration(user_function):
    ''' show time duration decorator'''
    import time
    def wrapped_f(user_function, *args, **kwargs):
        t1 = time.clock()
        result = user_function(*args, **kwargs)
        print "%s()_Time: %0.5f" % (user_function.__name__, time.clock()-t1)
        return result
    return decorator.decorator(wrapped_f, user_function)
However, the module really shines because it will let you reduce boilerplate stuff like the above down to just:
import decorator

@decorator.decorator
def showduration(user_function, *args, **kwargs):
    import time
    t1 = time.clock()
    result = user_function(*args, **kwargs)
    print "%s()_Time: %0.5f" % (user_function.__name__, time.clock()-t1)
    return result
With either set of the above changes, your sample code would output:
(['para1', 'para2', 'para3'], (5, 7))
bar()_Time: 0.00026
(['para1', 'para2', 'para3'], None)
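As a side note, on Python 3 the standard-library way to make a decorator transparent to inspection is functools.wraps, which records the original function in __wrapped__ so that inspect.signature() can see through the wrapper; a small Python 3 sketch of the same showduration idea:

import functools
import inspect
import time

def showduration(user_function):
    @functools.wraps(user_function)        # copies metadata and sets __wrapped__
    def wrapped_f(*args, **kwargs):
        t1 = time.perf_counter()
        result = user_function(*args, **kwargs)
        print("%s()_Time: %0.5f" % (user_function.__name__, time.perf_counter() - t1))
        return result
    return wrapped_f

@showduration
def bar(para1, para2=5, para3=7):
    return para1 + para2 + para3

bar(1, 2, 3)
print(inspect.signature(bar))   # (para1, para2=5, para3=7) -- follows __wrapped__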