Passing multiple functions to a class method - python

I have defined the TaskTimer class below. I would like to execute multiple functions when the timer event is triggered, and those functions may or may not take arguments. I am looking for a generic way of doing this. My functions are not being executed and I do not understand why. Are my arguments in t.start() incorrect?
import System
from System.Timers import (Timer, ElapsedEventArgs)

class TaskTimer(object):
    def __init__(self):
        self.timer = Timer()
        self.timer.Enabled = False
        self.handlers = []

    def On_Timed_Event(self, source, event):
        print 'Event fired', event.SignalTime
        for handler in self.handlers:
            handler(*self.my_args, **self.kwargs)

    def start(self, interval, repeat, *args, **kwargs):
        self.repeat = repeat
        self.run = True  # True if timer is running
        self.timer.Interval = interval
        self.timer.Enabled = True
        self.my_args = args
        self.kwargs = kwargs
        for x in self.my_args:
            self.handlers.append(x)
        self.timer.Elapsed += self.On_Timed_Event
def func1(a, b):
    print 'function 1. this function does task1'
    print '1...1...1...'
    return None

def func2(d):
    print 'Function 2. This function does something'
    print '2...2...2...'
    return None

def func3(a, b, c):
    print 'function 3. This function does something else'
    return None

def main():
    t = TaskTimer()
    D = {'fun2': 'func2', 'arg2': '3'}
    t.start(5000, False, func1, func2, func3, a=1, b=3, c=4, d=D)

if __name__ == '__main__':
    main()
I am experimenting, so I edited the On_Timed_Event function and func1, func2 and func3 as shown below. I also added print statements to the functions as suggested by @Ewan. Does Python automatically substitute function parameters from **self.kwargs?
def On_Timed_Event(self, source, event):
    print '\nEvent fired', event.SignalTime
    for handler in self.handlers:
        print 'length of self.handlers', len(self.handlers)
        print 'args', self.my_args
        print 'kwargs', self.kwargs
        print 'handler', handler
        handler(**self.kwargs)
        self.handler[handler](**self.kwargs)

def func1(a, b):
    print 'function 1. this function does task1'
    print 'func1', a, b
    print '1...1...1...'
    return None

def func2(d):
    print 'Function 2. This function does something'
    print 'func2', d
    print '2...2...2...'
    return None

def func3(a, b, c):
    print 'function 3. This function does something else'
    print 'func3', a, b, c
    return None
The code runs inside the IronPython console.
(Screenshot of the IronPython console output: http://i.stack.imgur.com/vYW0S.jpg)

First of all, I can see you having trouble with this line:
handler(*self.my_args, **self.kwargs)
As you aren't accepting any extra kwargs in func1, func2 or func3, I would expect to see the following if the call were reaching them:
In [1]: def func1(a, b):
   ...:     print(a, b)
   ...:
In [2]: kwg = {'a':1, 'b':3, 'c':4, 'd':{'fun2':'func2', 'arg2':'3'}}
In [3]: func1(**kwg)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-3-12557b6441ce> in <module>()
----> 1 func1(**kwg)
TypeError: func1() got an unexpected keyword argument 'c'
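(As an aside, not something from the original answer: if the goal really is to pass one big kwargs dict around and let each handler pick out only what it needs, the handler's signature can be inspected first. A minimal sketch, assuming Python 3's inspect.signature; on IronPython 2.7, inspect.getargspec would play the same role.)
import inspect

def call_with_known_kwargs(handler, kwargs):
    # Keep only the keyword arguments that the handler actually declares.
    accepted = {name: value for name, value in kwargs.items()
                if name in inspect.signature(handler).parameters}
    return handler(**accepted)

def func1(a, b):
    print(a, b)

call_with_known_kwargs(func1, {'a': 1, 'b': 3, 'c': 4})  # prints: 1 3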
Personally, I'd put some decent logging/debug statements in to find out how far your program is getting.
Is the program completing successfully, or are you getting a Traceback?
What is this designed to do: self.timer.Elapsed += self.On_Timed_Event?
It looks like it's supposed to be adding a time onto your timer.Elapsed value; however, nothing is returned by self.On_Timed_Event.
On_Timed_Event takes two parameters (source, event), but you don't seem to be passing them in; this should also cause the program to fail.
Are you seeing "Event fired" in your stdout, or is the program not getting to On_Timed_Event at all? This may show a problem in your System.Timers code.
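If the intent is simply for each registered function to be called with its own arguments when the timer fires, one possible restructuring is to store each handler together with its arguments. This is only a sketch of that idea in plain Python (the System.Timers wiring from the question is left out), not the original code:
def func1(a, b):
    print('func1 called with', a, b)

def func2(d):
    print('func2 called with', d)

class TaskTimer(object):
    def __init__(self):
        self.handlers = []                      # list of (function, args, kwargs) tuples

    def add_handler(self, func, *args, **kwargs):
        self.handlers.append((func, args, kwargs))

    def on_timed_event(self, source, event):
        for func, args, kwargs in self.handlers:
            func(*args, **kwargs)               # each handler gets only its own arguments

t = TaskTimer()
t.add_handler(func1, 1, 3)
t.add_handler(func2, {'fun2': 'func2', 'arg2': '3'})
t.on_timed_event(None, None)                    # simulate the Elapsed event firing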

Related

Function prints none instead of printing what I want it to print in decorators in Python

I am studying decorators in Python and was trying to use a decorator with arguments. I'm having a problem with it: I defined two inner functions in the outer decorator function, and it returns None when I use it as below:
from threading import Thread

def prefix(write: bool = False):
    def thread(func):
        def wrapper(*args, **kwargs):
            t1 = Thread(target=func, args=args, kwargs=kwargs)
            t1.start()
            if write:
                print("write parameter is true.")
        return wrapper
    return thread

@prefix(write=True)
def something(x):
    return x + x

print(something(5))
As you can see, I defined two different functions named prefix and something. If the write parameter is true, it prints the string. But the something function prints None instead of printing the result of 5 + 5.
What's wrong?
Well, your wrapper() function doesn't have a return statement, so it will always return None.
Furthermore, how would you expect it to print 5 + 5 (or rather, the result thereof) when that may not have been computed yet, considering you're starting a new thread to do that and never do anything with the return value of func at all?
IOW, if we expand your example a bit:
import time
from threading import Thread

def prefix(write: bool = False):
    def thread(func):  # <- this function replaces `something`
        def wrapper(*args, **kwargs):
            t1 = Thread(target=func, args=args, kwargs=kwargs)
            t1.start()
            if write:
                print("write parameter is true.")
            return "hernekeitto"
        return wrapper
    return thread

@prefix(write=True)
def something(x):
    print("Computing, computing...")
    time.sleep(0.5)
    print("Hmm, hmm, hmm...")
    time.sleep(0.5)
    print("Okay, got it!")
    return x + x

value = something(9)
print("The value is:", value)
This will print out
Computing, computing...
write parameter is true.
The value is: hernekeitto
Hmm, hmm, hmm...
Okay, got it!
As you can see, the thread's first print() happens first, then the write print, then the value print, and then the rest of what happens in the thread. And as you can see, we only know what x + x is after "Okay, got it!", so there's no way you could have returned that out of wrapper() where "hernekeitto" is returned.
See futures (or the equivalent JavaScript concept promises) for a "value that's not yet ready":
import time
from concurrent.futures import Future
from threading import Thread

def in_future(func):
    def wrapper(*args, **kwargs):
        fut = Future()

        def func_wrapper():
            # Wraps the threaded function to resolve the future.
            try:
                fut.set_result(func(*args, **kwargs))
            except Exception as e:
                fut.set_exception(e)

        t1 = Thread(target=func_wrapper)
        t1.start()
        return fut
    return wrapper

@in_future
def something(x):
    print("Computing, computing...")
    time.sleep(0.5)
    print("Hmm, hmm, hmm...")
    time.sleep(0.5)
    print("Okay, got it!")
    return x + x

value_fut = something(9)
print("The value is:", value_fut)
print("Waiting for it to be done...")
print("Here it is!", value_fut.result())
This prints out
Computing, computing...
The value is: <Future at 0x... state=pending>
Waiting for it to be done...
Hmm, hmm, hmm...
Okay, got it!
Here it is! 18
so you can see the future is just a "box" where you'll need to wait for the actual value to be done (or an error to occur getting it).
Normally you'd use futures with the executors in concurrent, but the above is an example of how to do it by hand.
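For reference, roughly the same behaviour with the standard executor (a minimal sketch, not part of the original example) looks like this; submit() hands back the same kind of Future object:
import time
from concurrent.futures import ThreadPoolExecutor

def something(x):
    print("Computing, computing...")
    time.sleep(0.5)
    return x + x

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(something, 9)          # runs in a worker thread
    print("The value is:", fut)              # still a pending Future at this point
    print("Here it is!", fut.result())       # blocks until the result is ready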

Using string/data from another function

I'm trying to use the data returned from one function in multiple other functions, but I don't want the first function to run each time, which is what is happening in my case.
# Function lab
def func_a():
    print('running function a')
    data = 'test'
    return data

def func_b():
    print(func_a())

def func_c():
    print(func_a())

def func_d():
    print(func_a())

if __name__ == '__main__':
    func_a()
    func_b()
    func_c()
    func_d()
Each time, the whole func_a runs. But I just want the returned data from func_a in the other functions.
IIUC, you could alleviate this with a simple class.
I hold the result of running func_a in an attribute called output. Once the method has run, I can reference that output attribute as often as I like in all the other functions without having to re-run func_a.
Hope this helps!
class FunctionA:
    def __init__(self):
        self.output = None

    def run_function(self):
        print('running function a')
        data = 'test'
        self.output = data

def func_b():
    print(func_a.output)

def func_c():
    print(func_a.output)

def func_d():
    print(func_a.output)

if __name__ == '__main__':
    func_a = FunctionA()
    func_a.run_function()
    func_b()
    func_c()
    func_d()

>>> running function a
>>> test
>>> test
>>> test
Your func_a does two things. To make this clear, let's call it print_and_return_data.
There are several ways to break apart the two things print_and_return_data does. One way is to split the two behaviors into smaller sub-functions:
def print_and_return_data():
    print('running function a')  # keeping the old print behavior
    data = 'test'
    return data
into:
def print_run():
    print('running function a')  # keeping the old print behavior

def return_data():
    return 'test'

def print_and_return_data():
    print_run()
    return return_data()
So that other functions only use what they need:
def func_b():
    print(return_data())
Another way is to change print_and_return_data to behave differently the first time it's called than on subsequent calls (I don't recommend this, because functions that change behavior based on how many times they've been called can be confusing):
context = {'has_printed_before': False}

def print_and_return_data():
    if not context['has_printed_before']:
        print('running function a')
        context['has_printed_before'] = True
    data = 'test'
    return data

def func_b():
    print(print_and_return_data())

if __name__ == '__main__':
    print_and_return_data()  # prints
    func_b()                 # won't print
One way to avoid "functions behaving differently when they're called" is to pass the variation (the "context") in as an argument:
def return_data(also_print=False):
    if also_print:
        print('running function a')
    data = 'test'
    return data

def func_b():
    print(return_data())

if __name__ == '__main__':
    return_data(also_print=True)  # prints
    func_b()                      # won't print
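For completeness, another way to get "run once, reuse the result everywhere" behaviour is memoization. This is a sketch using functools.lru_cache and is not part of the original answer:
from functools import lru_cache

@lru_cache(maxsize=None)          # the body runs only on the first call
def func_a():
    print('running function a')
    return 'test'

def func_b():
    print(func_a())               # later calls return the cached 'test'

if __name__ == '__main__':
    func_a()    # prints "running function a" and caches 'test'
    func_b()    # prints only "test"
    func_b()    # prints only "test"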

Class method wraps a function - Problems with Arguments

In my main, I have a function with an error and a class that tracks errors in a list inside the class itself. In other words, instead of just calling the function, I would like to give this function to a class-method which then "logs" the error in a list and suppresses the error.
Here is my problem:
This function has input arguments. When I hand over my function to the class method, I would like to hand over the inputs, too. What happens is that the function is executed before it ever reaches the class method, so the class method can't suppress the error that happens inside the function.
In the code below, I set the variable silent=True; therefore, it should not raise an error (because of the try/except clause within the method). Unfortunately, the code raises a TypeError which comes from the function.
Any advice would be much appreciated
PS: I am not looking for a decorator solution :)
Here is the class with the class method which can suppress the error
class ErrorTracker:
    def __init__(self):
        self.list = list()

    def track_func(self, func, silent=False):
        try:
            self.list.append('...in trying')
            print('....trying.....')
            return func
        except Exception as e:
            self.list.append('...in except')
            self.list.append(e)  # important line - here the error gets "logged"
            if not silent:
                raise e
Here is the function with an error
def transformation_with_error(app1, app2):
    # DO STUFF HERE with inputs
    result = str(app1) + str(app2)
    print(result)
    print('TYPE ERROR here')
    raise TypeError
    return result
Here is the main routine:
if __name__ == "__main__":
    error_tracker = ErrorTracker()
    print('-- start transformation')
    error_tracker.track_func(transformation_with_error(app1='AA', app2='BB'), silent=True)
    print('-- end transformation')
    print(error_tracker.list)
If I understand your issue, in your main routine
error_tracker.track_func(transformation_with_error(app1='AA', app2='BB'), silent=True)
calls transformation_with_error before entering error_tracker.track_func. This happens simply because you are indeed calling transformation_with_error there. If you want error_tracker.track_func to call transformation_with_error, you have to pass the latter as an argument, as you would do for a callback.
For example:
def test(var1, var2):
    print("{} {}".format(var1, var2))

def callFn(func, *vars):
    func(*vars)

callFn(test, "foo", "bar")
outputs foo bar
Thx VincentRG
That was it
Just for the record, below are the changes I made:
(side note: I added the **kwargs, too, to be able to deal with default values)
thx mate
class changes
class ErrorTracker:
    def __init__(self):
        self.list = list()

    def track_func(self, func, silent=False, *args, **kwargs):
        try:
            self.list.append('...in trying')
            print('....trying.....')
            return func(*args, **kwargs)
        except Exception as e:
            self.list.append('...in except')
            self.list.append(e)  # important line - here the error gets "logged"
            if not silent:
                raise e
change in call
if __name__ == "__main__":
    error_tracker = ErrorTracker()
    print('-- start transformation')
    error_tracker.track_func(transformation_with_error, silent=True, app1='AA', app2='BB')
    print('-- end transformation')
    print(error_tracker.list)
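One further refinement worth considering (my suggestion, not part of the original exchange): because silent sits before *args in the signature above, a positional argument intended for func could accidentally be captured as silent. Making silent keyword-only avoids that:
class ErrorTracker:
    def __init__(self):
        self.list = list()

    def track_func(self, func, *args, silent=False, **kwargs):
        # silent is keyword-only here, so positional arguments are always forwarded to func
        try:
            self.list.append('...in trying')
            return func(*args, **kwargs)
        except Exception as e:
            self.list.append('...in except')
            self.list.append(e)
            if not silent:
                raise
The call error_tracker.track_func(transformation_with_error, silent=True, app1='AA', app2='BB') works unchanged with this signature.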

Returning value when exiting python context manager

Maybe this is a stupid (and indeed not very practical) question, but I'm asking it because I can't wrap my head around it.
While researching whether a return statement inside a with block would prevent __exit__ from being called (no, it doesn't), I found that it seems common to draw an analogy between __exit__ and finally in a try/finally block (for example here: https://stackoverflow.com/a/9885287/3471881), because:
def test():
    try:
        return True
    finally:
        print("Good bye")
Would execute the same as:
class MyContextManager:
    def __enter__(self):
        return self

    def __exit__(self, *args):
        print('Good bye')

def test():
    with MyContextManager():
        return True
This really helped me understand how context managers work, but after playing around a bit I realised that this analogy won't hold if we are returning something rather than printing.
def test():
    try:
        return True
    finally:
        return False

test()
--> False
While a return inside __exit__ seemingly has no effect at all:
class MyContextManager:
    def __enter__(self):
        return self

    def __exit__(self, *args):
        return False

def test():
    with MyContextManager():
        return True

test()
--> True
This led me to think that perhaps you can't actually return anything inside __exit__, but you can:
class MyContextManager:
    def __enter__(self):
        return self

    def __exit__(self, *args):
        return self.last_goodbye()

    def last_goodbye(self):
        print('Good bye')

def test():
    with MyContextManager():
        return True

test()
--> Good bye
--> True
Note that it doesn't matter if we don't return anything inside the test() function.
This leads me to my question:
Is it impossible to return a value from inside __exit__ and if so, why?
Yes. It is impossible to alter the return value of the context from inside __exit__.
If the context is exited with a return statement, you cannot alter that return value from your context_manager.__exit__. This is different from a try ... finally ... clause, because the code in finally still belongs to the parent function, while context_manager.__exit__ runs in its own scope.
In fact, __exit__ can return a boolean value (True or False) and it will be understood by Python. It tells Python whether the exception that exits the context (if any) should be suppressed (not propagate to outside the context).
See this example of the meaning of the return value of __exit__:
>>> class MyContextManager:
...     def __init__(self, suppress):
...         self.suppress = suppress
...
...     def __enter__(self):
...         return self
...
...     def __exit__(self, exc_type, exc_obj, exc_tb):
...         return self.suppress
...
>>> with MyContextManager(True):  # suppress exception
...     raise ValueError
...
>>> with MyContextManager(False):  # let exception pass through
...     raise ValueError
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
ValueError
>>>
In the above example, both ValueErrors will cause control to jump out of the context. In the first block, the __exit__ method of the context manager returns True, so Python suppresses this exception and it's not reflected in the REPL. In the second block, the context manager returns False, so Python lets the outer code handle the exception, which gets printed out by the REPL.
The workaround is to store the result in an attribute instead of returning it, and access it later. That is, if you intend to use that value for more than a print.
For example, take this simple context manager:
import time

class time_this_scope():
    """Context manager to measure how much time was spent in the target scope."""

    def __init__(self, allow_print=False):
        self.t0 = None
        self.dt = None
        self.allow_print = allow_print

    def __enter__(self):
        self.t0 = time.perf_counter()

    def __exit__(self, type=None, value=None, traceback=None):
        self.dt = time.perf_counter() - self.t0  # Store the desired value.
        if self.allow_print is True:
            print(f"Scope took {self.dt*1000: 0.1f} milliseconds.")
It could be used this way:
with time_this_scope(allow_print=True):
    time.sleep(0.100)

>>> Scope took 100 milliseconds.
or like so:
timer = time_this_scope()
with timer:
    time.sleep(0.100)
dt = timer.dt
The variant below does not work as written, because "with time_this_scope() as timer" binds timer to whatever __enter__ returns, and __enter__ above returns nothing. We need to modify the class and add return self to __enter__ (see the sketch after the error below). Before that modification, you would get an error:
with time_this_scope() as timer:
    time.sleep(0.100)
dt = timer.dt

>>> AttributeError: 'NoneType' object has no attribute 'dt'
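For clarity, the modified class would look something like this; only the return self line in __enter__ is new compared to the version above:
import time

class time_this_scope():
    """Context manager to measure how much time was spent in the target scope."""

    def __init__(self, allow_print=False):
        self.t0 = None
        self.dt = None
        self.allow_print = allow_print

    def __enter__(self):
        self.t0 = time.perf_counter()
        return self                      # <- makes "with time_this_scope() as timer" work

    def __exit__(self, type=None, value=None, traceback=None):
        self.dt = time.perf_counter() - self.t0
        if self.allow_print is True:
            print(f"Scope took {self.dt*1000: 0.1f} milliseconds.")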
Finally, here is a simple use example:
"""Calculate the average time spent sleeping."""
import numpy as np
import time
N = 100
dt_mean = 0
for n in range(N)
timer = time_this_scope()
with timer:
time.sleep(0.001 + np.random.rand()/1000) # 1-2 ms per loop.
dt = timer.dt
dt_mean += dt/N
print(f"Loop {n+1}/{N} took {dt}s.")
print(f"All loops took {dt_mean}s on average.)

How to access (and edit) variables from a callback function?

I use Boto to access Amazon S3, and for file uploading I can assign a callback function. The problem is that I cannot access the needed variables from that callback function unless I make them global. On the other hand, if I make them global, they are global for all other Celery tasks, too (until I restart Celery), as the file uploading is executed from a Celery task.
Here is a function that uploads a JSON file with information about video conversion progress.
def upload_json():
    global current_frame
    global path_to_progress_file
    global bucket
    json_file = Key(bucket)
    json_file.key = path_to_progress_file
    json_file.set_contents_from_string('{"progress": "%s"}' % current_frame,
                                       cb=json_upload_callback, num_cb=2,
                                       policy="public-read")
And here are 2 callback functions for uploading frames generated by ffmpeg during the video conversion and a JSON file with the progress information.
# Callback functions that are called by get_contents_to_filename.
# The first argument is representing the number of bytes that have
# been successfully transmitted from S3 and the second is representing
# the total number of bytes that need to be transmitted.
def frame_upload_callback(transmitted, to_transmit):
    if transmitted == to_transmit:
        upload_json()

def json_upload_callback(transmitted, to_transmit):
    global uploading_frame
    if transmitted == to_transmit:
        print "Frame uploading finished"
        uploading_frame = False
Theoretically, I could pass the uploading_frame variable to the upload_json function, but it wouldn’t get to json_upload_callback as it’s executed by Boto.
In fact, I could write something like this.
In [1]: def make_function(message):
   ...:     def function():
   ...:         print message
   ...:     return function
   ...:
In [2]: hello_function = make_function("hello")
In [3]: hello_function
Out[3]: <function function at 0x19f4c08>
In [4]: hello_function()
hello
Which, however, doesn’t let you edit the value from the function, just lets you read the value.
def myfunc():
    stuff = 17
    def lfun(arg):
        print "got arg", arg, "and stuff is", stuff
    return lfun

my_function = myfunc()
my_function("hello")
This works.
def myfunc():
    stuff = 17
    def lfun(arg):
        print "got arg", arg, "and stuff is", stuff
        stuff += 1
    return lfun

my_function = myfunc()
my_function("hello")
And this gives an UnboundLocalError: local variable 'stuff' referenced before assignment.
Thanks.
In Python 2.x, closed-over variables are read-only (not because of the Python VM, but because the syntax doesn't allow writing to a variable that is neither local nor global).
You can, however, use a closure over a mutable value, i.e.
def myfunc():
    stuff = [17]  # <<---- this is a mutable object
    def lfun(arg):
        print "got arg", arg, "and stuff[0] is", stuff[0]
        stuff[0] += 1
    return lfun

my_function = myfunc()
my_function("hello")
my_function("hello")
If you are instead using Python 3.x, the keyword nonlocal can be used to specify that a variable that is read and written in a closure is not a local, but should be captured from the enclosing scope:
def myfunc():
    stuff = 17
    def lfun(arg):
        nonlocal stuff
        print("got arg", arg, "and stuff is", stuff)
        stuff += 1
    return lfun

my_function = myfunc()
my_function("hello")
my_function("hello")
You could create a partial function via functools.partial. This is a way to call a function with some variables pre-baked into the call. However, to make that work you'd need to pass a mutable value, e.g. a list or dict, into the function, rather than just a bool.
from functools import partial

def callback(arg1, arg2, arg3):
    arg1[:] = [False]
    print arg1, arg2, arg3

local_var = [True]
partial_func = partial(callback, local_var)
partial_func(2, 1)
print local_var  # prints [False]
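Applied to the question's situation, the same partial trick can pre-bind a mutable state dict to a callback with boto's (transmitted, to_transmit) signature. A sketch only; the extra state parameter is my addition, and the direct call at the end just simulates boto invoking the callback:
from functools import partial

def json_upload_callback(state, transmitted, to_transmit):
    if transmitted == to_transmit:
        print("Frame uploading finished")
        state['uploading_frame'] = False    # visible to the caller afterwards

state = {'uploading_frame': True}
callback = partial(json_upload_callback, state)   # this is what you would pass as cb=...
callback(1024, 1024)                              # simulate boto calling it
print(state['uploading_frame'])                   # -> False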
A simple way to do these things is to use a local function:
def myfunc():
    stuff = [17]  # mutable value (the trick from above), so the nested function can update it in Python 2
    def lfun(arg):
        print "got arg", arg, "and stuff is", stuff[0]
        stuff[0] += 1
    register_callback(lfun)  # register_callback stands in for whatever accepts the callback
This will create a new function every time you call myfunc, and it will be able to use the local "stuff" copy.
