I have a method like this in Python:
def test(a, b):
    return a + b, a - b
How can I run this in a background thread and wait until the function returns?
The problem is that the method is pretty big and the project involves a GUI, so I can't block while waiting for it to return.
In my opinion, you have two options: run another thread alongside this one that periodically checks whether the result has arrived, or implement a callback that is invoked when the thread finishes. Since you have a GUI, which as far as I know is simply a class, you can store the result in an object or class attribute and check whether it has arrived.
I would use a mutable container variable, a common technique. Let's create a special class that will be used for storing results from thread functions.
import threading
import time

class ResultContainer:
    results = []  # mutable - anything inside this list is accessible anywhere in your program

# Let's use a decorator with an argument.
# This way it won't break your function.
def save_result(cls):
    def decorator(func):
        def wrapper(*args, **kwargs):
            # get the result from the function
            func_result = func(*args, **kwargs)
            # pass the result into the mutable list in our ResultContainer class
            cls.results.append(func_result)
            # return the result from the function
            return func_result
        return wrapper
    return decorator

# as the argument to the decorator, pass the class with the mutable list
@save_result(ResultContainer)
def func(a, b):
    time.sleep(3)
    return a, b

th = threading.Thread(target=func, args=(1, 2))
th.daemon = True
th.start()

while not ResultContainer.results:
    time.sleep(1)

print(ResultContainer.results)
So, in this code, we have the class ResultContainer with a list. Whatever you put in it can easily be accessed from anywhere in the code, including from other threads (the exception is separate processes, which don't share memory). I made a decorator, so you can store the result of any function without modifying the function itself. This is just an example of how you can run threads and let them store their results on their own, without you taking care of it. All you have to do is check whether the result has arrived.
You can use global variables to do the same thing, but I don't advise it. They are ugly, and you have to be very careful when using them.
For even more simplicity, if you don't mind modifying your function, you can skip the decorator and push the result into the class's list directly inside the function, like this:
def func(a, b):
    time.sleep(3)
    ResultContainer.results.append((a, b))
    return a, b
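The callback option mentioned at the start of this answer could look like this. A minimal sketch, where run_with_callback and done are illustrative names of my own, not part of the code above:
import threading
import time

def func(a, b):
    time.sleep(3)
    return a, b

def run_with_callback(target, args, callback):
    # run target in a background thread and hand its return value to the callback
    def runner():
        callback(target(*args))
    th = threading.Thread(target=runner)
    th.daemon = True
    th.start()
    return th

def done(result):
    print('result arrived:', result)

th = run_with_callback(func, (1, 2), done)
th.join()  # a real GUI would not block like this; it would poll or use the toolkit's event queue
Keep in mind that the callback runs on the worker thread, so in a GUI it should hand the result back to the main thread rather than touch widgets directly.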
I have a decorator @newthread which wraps functions to run in a separate thread (using wraps from functools and Thread from threading). However, there are some functions for which I only want this to happen some of the time.
At the moment, I have @newthread check the keyword arguments of the function to be wrapped: if it finds a bool new_thread equal to True, it runs the function in a separate thread; otherwise it runs the function normally. For example:
@newthread
def foo(new_thread=False):
    # Do stuff...
    pass

foo()                 # Runs normally
foo(new_thread=True)  # Runs in new thread
Is this the canonical way of doing this, or am I missing something?
Don't use newthread as a decorator, then. A decorator is just a function that takes a function and returns a function.
If you want it to run in the current thread, call
foo(some, params)
If you want to run foo in a new thread, call
newthread(foo)(some, params)
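For reference, a minimal sketch of what such a newthread decorator might look like; the question only says it uses wraps from functools and Thread from threading, so the rest is an assumption:
import threading
from functools import wraps

def newthread(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        # start func in its own thread; return the Thread so the caller can join() it
        t = threading.Thread(target=func, args=args, kwargs=kwargs)
        t.start()
        return t
    return wrapper
With a plain function like this, foo(some, params) runs synchronously, while newthread(foo)(some, params) starts foo in a new thread.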
@newthread
def foo(new_thread=False):
    # Do stuff...
    pass

foo()                 # Runs normally
foo(new_thread=True)  # Runs in new thread
That is good, but I, for one, would prefer to have the decorator consume the new_thread argument instead of having it show up in the parameter list of the decorated functions.
Also, you could use a default value so that you'd pick up the actual need for a different thread from somewhere else (like an environment variable):
import os

MARKER = object()

def newthread(func):
    def wrapper(*args, newthread=MARKER, **kwargs):
        if newthread is MARKER:
            newthread = os.environ.get("force_threads", True)
        if newthread:
            ...  # create a new thread and return a future-like object
        else:
            return func(*args, **kwargs)
    return wrapper
I have this start.py:
# start.py
class Start:
    def __init__(self):
        self.mylist = []

    def run(self):
        # some code
        ...
Executing its run() method will at some point invoke the put_item(obj) function in moduleX.py:
# moduleX.py
def put_item(obj):
    # what should I write here?
    ...
run() is NOT the direct caller of put_item(obj). In fact, from run() to put_item(obj) the execution is quite complex and involves a lot of other invocations.
My problem is, when put_item(obj) is called, can I directly add the value of obj back to mylist in the class Start? For example:
s = Start()
# suppose during this execution, put_item(obj) has been
# invoked 3 times, with obj equal to 1, 2, 3 each time
s.run()
print(s.mylist)  # I want it to be [1, 2, 3]
UPDATE:
From run() to put_item(obj), the execution involves heavy use of 3rd-party modules and function calls that I have no control over. In other words, the execution in between run() and put_item(obj) is like a black box to me, and this execution produces the value of obj that I'm interested in.
obj is consumed in put_item(obj) in moduleX.py, which is also a 3rd-party module. put_item(obj) originally contains GUI code that displays obj in a fancy way. However, I want to modify its original behavior so that I can add obj to mylist in class Start and use mylist later in my own way.
Therefore, I cannot pass a Start reference along the call chain to put_item, since I don't know the call chain and simply cannot modify it. Also, I cannot change the function signatures in moduleX.py; otherwise I'll break the original API. What I can change is the content of put_item(obj) and start.py.
Simply make put_item return the item you want to put in your instance:
def put_item():
    # some code
    return 42

class Start:
    def __init__(self):
        self.mylist = []

    def run(self):
        # some code
        self.mylist.append(put_item())

s = Start()
s.run()
print(s.mylist)
Prints:
[42]
Yes, you can, but you will have to propagate a reference to your Start object's list down the call stack to put_item(). put_item() can then add items to that list; it does not have to know or care that the list it is passed belongs to a Start object. It can just blindly append to it.
For example:
class Start:
    def __init__(self):
        self.mylist = []

    def run(self):
        foo(self.mylist)
        print(self.mylist)

def foo(listRef):
    bar(listRef)

def bar(listRef):
    someItem = "Hello, World!"
    put_item(listRef, someItem)

def put_item(listRef, obj):
    listRef.append(obj)

x = Start()
x.run()
Of course, you'll get the appropriate runtime error if the thing you pass to foo turns out not to be a list.
Can you explain to me how the following decorator works:
def set_ev_cls(ev_cls, dispatchers=None):
    def _set_ev_cls_dec(handler):
        if 'callers' not in dir(handler):
            handler.callers = {}
        for e in _listify(ev_cls):
            handler.callers[e] = _Caller(_listify(dispatchers), e.__module__)
        return handler
    return _set_ev_cls_dec
@set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
def _switch_features_handler(self, ev):
    datapath = ev.msg.datapath
    ...
Please, don't go into details of what's going on inside the function. I'm interested in how the decorator with parameters wraps methods here. By the way, it's a code snippet from Ryu (the event registration mechanism).
Thank you in advance.
First, a decorator is just a function that gets called with a function. In particular, the following are (almost) the same thing:
@spam
def eggs(arg): pass

def eggs(arg): pass
eggs = spam(eggs)
So, what happens when the decorator takes parameters? Same thing:
@spam(arg2)
def eggs(arg): pass

def eggs(arg): pass
eggs = spam(arg2)(eggs)
Now, notice that the function _set_ev_cls_dec, which is what ultimately gets applied to _switch_features_handler, is a local function, defined inside the decorator factory. That means it can be a closure over variables from the outer function, including the parameters of the outer function. So it can use the handler argument at call time, plus the ev_cls and dispatchers arguments that it got at decoration time.
So:
set_ev_cls creates a local function that closes over its ev_cls and dispatchers arguments, and returns that function.
That closure gets called with _switch_features_handler as its parameter, and it modifies and returns that parameter by adding a callers attribute, which is a dict of _Caller objects built from that closed-over dispatchers parameter and keyed off that closed-over ev_cls parameter.
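To make the closure mechanics concrete, here is a stripped-down imitation of the same pattern; tag_with, tags, and handle are illustrative names, not part of Ryu:
def tag_with(key, value):
    def decorator(handler):
        # key and value are closed over: captured at decoration time, used here
        if not hasattr(handler, 'tags'):
            handler.tags = {}
        handler.tags[key] = value
        return handler  # the original function is returned, just with an extra attribute
    return decorator

@tag_with('event', 'SwitchFeatures')
def handle(ev):
    pass

print(handle.tags)  # {'event': 'SwitchFeatures'}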
Explain how it works without detailing what's going on inside? That kind of sounds like "explain without explaining," but here's a rough walkthrough:
Think of set_ev_cls as a factory for decorators. It's there to catch the arguments at the time the decorator is invoked:
@set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
And return a function, _set_ev_cls_dec, that has its variables bound to:
ev_cls = ofp_event.EventOFPSwitchFeatures
dispatchers = CONFIG_DISPATCHER
Or put another way, you now have a 'customized' or 'parametrized' decorator that's logically equivalent to:
def custom_decorator(handler):
    if 'callers' not in dir(handler):
        handler.callers = {}
    for e in _listify(ofp_event.EventOFPSwitchFeatures):
        handler.callers[e] = _Caller(_listify(CONFIG_DISPATCHER), e.__module__)
    return handler
(This assumes you captured the values of ofp_event.EventOFPSwitchFeatures and CONFIG_DISPATCHER at the moment the @set_ev_cls(...) line was executed.)
The custom_decorator of step 1 is applied to _switch_features_handler as a more traditional unparameterized decorator.
I want to process a large for loop in parallel, and from what I have read the best way to do this is to use the multiprocessing library that comes standard with Python.
I have a list of around 40,000 objects, and I want to process them in parallel in a separate class. The reason for doing this in a separate class is mainly because of what I read here.
In one class I have all the objects in a list, and via multiprocessing.Pool and Pool.map I want to carry out parallel computations for each object by passing it through another class and returning a value.
# ... some class that generates the list_objects
pool = multiprocessing.Pool(4)
results = pool.map(Parallel, self.list_objects)
And then I have a class which I want to process each object passed by the pool.map function:
class Parallel(object):
    def __init__(self, args):
        self.some_variable = args[0]
        self.some_other_variable = args[1]
        self.yet_another_variable = args[2]
        self.result = None

    def __call__(self):
        self.result = self.calculate(self.some_variable)
The reason I have a __call__ method is due to the post I linked before, yet I'm not sure I'm using it correctly, as it seems to have no effect; I'm not getting the self.result value to be generated.
Any suggestions?
Thanks!
Use a plain function, not a class, when possible. Use a class only when there is a clear advantage to doing so.
If you really need to use a class, then given your setup, pass an instance of Parallel:
results = pool.map(Parallel(args), self.list_objects)
Since the instance has a __call__ method, the instance itself is callable, like a function.
By the way, the __call__ needs to accept an additional argument:
def __call__(self, val):
since pool.map is essentially going to call, in parallel:
p = Parallel(args)
result = []
for val in self.list_objects:
    result.append(p(val))
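And for the plain-function route recommended above, a minimal self-contained sketch; calculate and the sample data are stand-ins of mine, not from the original code:
import multiprocessing

def calculate(obj):
    # stand-in for the real per-object computation
    return obj * 2

if __name__ == '__main__':
    list_objects = list(range(8))
    with multiprocessing.Pool(4) as pool:
        results = pool.map(calculate, list_objects)
    print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]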
Pool.map simply applies a function (actually, a callable) in parallel. It has no notion of objects or classes. Since you pass it a class, it simply calls __init__; __call__ is never executed. You need to either call it explicitly from __init__ or use pool.map(Parallel.__call__, preinitialized_objects).
I have only started learning Python recently. Let me explain what I am trying to accomplish. I have this .py script that basically has several functions (hard-coded into the script) that all need to be added to a single list, so that I can get the function I require by simply using the index operator as follows:
needed_function = function_list[needed_function_index]
My first attempt at implementing this resulted in the following code structure:
(imports)

function_list = []

(other global variables)

def function_0(): ...
function_list.append(function_0)

def function_1(): ...
function_list.append(function_1)

def function_2(): ...
function_list.append(function_2)

(rest of code)
But I don't like that solution, since it isn't very elegant. My goal is to be able to simply add the function definition to the script (without the append call) and have the script automatically add it to the list of functions.
I've thought of defining all the functions within another function, but I don't think I'd get anywhere with that. I thought of maybe "tagging" each function with a decorator, but I realized that decorators (if I understand them correctly) are called every time a function is called, not just once.
After some time I came up with this solution:
(imports)

(global variables)

def function_0(): ...
def function_1(): ...
def function_2(): ...

function_list = [globals()[x] for x in globals() if re.match('^function_[0-9]+$', x)]

(rest of code)
I like it a bit more as a solution, but my only qualm with it is that I would prefer, for cleanliness purposes, to completely define function_list at the top of the script. However, I cannot do that since an invocation of globals() at the top of the script would not contain the functions since they have not been defined yet.
Perhaps I should simply settle for a less elegant solution, or maybe I am not writing my script in an idiomatic way. Whatever the case, any input and suggestions are appreciated.
You are mistaken about decorators. They are invoked once, when the function is defined; the function they return is then the value assigned to the function's name, and it is that function that is invoked each time. So you can do what you want in a decorator without incurring any per-call overhead.
my_functions = []

def put_in_list(fn):
    my_functions.append(fn)
    return fn

@put_in_list
def function1():
    pass

@put_in_list
def function2():
    pass
PS: You probably don't need to worry about runtime overhead anyway.
PPS: You are also trying to optimize odd things; you might be better off simply maintaining the list in your file. How often are you adding functions, and with how little thought? A list is not difficult to update in the source file.
Example of using a decorator that does not add any overhead to the function call:
my_list = []

def add_to_my_list(func):
    print('decorator called')
    my_list.append(func)
    return func

@add_to_my_list
def foo():
    print('foo called')

@add_to_my_list
def bar():
    print('bar called')

print('-- done defining functions --')

my_list[0]()
my_list[1]()
One way to solve this problem would be to put all those functions into a single container, then extract the functions from the container to build your list.
The most Pythonic container would be a class. I'm not saying to make them member functions of the class; just define them in the class.
class MyFunctions(object):
    def func0():
        pass

    def func1():
        pass

lst_funcs = [func for name, func in vars(MyFunctions).items() if not name.startswith('_')]
But I like the decorator approach even better; that's probably the most Pythonic solution.