I'm trying to use the data returned from one function in several other functions, but I don't want the first function to run each time, which is what happens in my case.
# Function lab
def func_a():
    print('running function a')
    data = 'test'
    return data

def func_b():
    print(func_a())

def func_c():
    print(func_a())

def func_d():
    print(func_a())

if __name__ == '__main__':
    func_a()
    func_b()
    func_c()
    func_d()
Each time, the whole of func_a runs, but I just want the returned data from func_a in the other functions.
IIUC, you could alleviate this with a simple class.
I hold the state of func_a in an attribute called output. Once run_function has been called, I can reference that output attribute as often as I like from all the other functions without having to re-run func_a.
Hope this helps!
class FunctionA:
    def __init__(self):
        self.output = None

    def run_function(self):
        print('running function a')
        data = 'test'
        self.output = data

def func_b():
    print(func_a.output)

def func_c():
    print(func_a.output)

def func_d():
    print(func_a.output)

if __name__ == '__main__':
    func_a = FunctionA()
    func_a.run_function()
    func_b()
    func_c()
    func_d()
>>> running function a
>>> test
>>> test
>>> test
Your func_a does two things. To make this clear, let's call it print_and_return_data.
There are several ways to break apart the two things print_and_return_data does. One way is to split up the two behaviors into smaller sub-functions:
def print_and_return_data():
    print('running function a')  # keeping the old print behavior
    data = 'test'
    return data
into:
def print_run():
    print('running function a')  # keeping the old print behavior

def return_data():
    return 'test'

def print_and_return_data():
    print_run()
    return return_data()
So that other functions only use what they need:
def func_b():
    print(return_data())
Another way is to change print_and_return_data to behave differently the first time it's called than on subsequent calls (I don't recommend this, because a function that changes behavior based on how many times it has been called can be confusing):
context = {'has_printed_before': False}

def print_and_return_data():
    if not context['has_printed_before']:
        print('running function a')
        context['has_printed_before'] = True
    data = 'test'
    return data

def func_b():
    print(print_and_return_data())

if __name__ == '__main__':
    print_and_return_data()  # prints 'running function a'
    func_b()                 # won't print it again
One way to avoid functions behaving differently across calls is to pass the variation (the "context") in as an argument:
def return_data(also_print=False):
    if also_print:
        print('running function a')
    data = 'test'
    return data

def func_b():
    print(return_data())

if __name__ == '__main__':
    return_data(also_print=True)  # prints 'running function a'
    func_b()                      # won't print it again
As the title says, I need to execute the corresponding function according to the value of a string variable.
choice_function is the function that needs to be improved. If I have a lot of functions that might need to be executed, the if/else chain becomes cumbersome. Is there an easy way to simplify choice_function?
My code is as follows:
def function1():
    print('function1')
    return

def function2():
    print('function2')
    return

def choice_function(name):
    if name == 'function1':
        function1()
    elif name == 'function2':
        function2()
    return

def main():
    function_name = 'function1'
    choice_function(function_name)
    function_name = 'function2'
    choice_function(function_name)
    return

if __name__ == '__main__':
    main()
You can use vars() to do it.
code:
def function1():
    print('function1')
    return

def function2():
    print('function2')
    return

vars()["function1"]()
vars()["function2"]()
result:
function1
function2
If you want to use it inside a function like choice_function, use globals() instead.
def function1():
    print('function1')
    return

def function2():
    print('function2')
    return

def choice_function(name):
    try:
        globals()[name]()
    except KeyError:
        print(f"function not found: {name}")
    return

def main():
    function_name = 'function1'
    choice_function(function_name)
    function_name = 'function2'
    choice_function(function_name)
    return

if __name__ == '__main__':
    main()
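As a quick usage illustration (not part of the original answer): a name that is not defined at module level makes globals()[name] raise KeyError, so the fallback message is printed.

choice_function('function3')  # prints: function not found: function3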
I am working with threading and multiprocessing and encountered an issue; here is my code:
import threading
import multiprocessing
from functools import partial

xxx = []

def func1(*args):
    # do something 1
    global xxx
    xxx.append('x')

def func2(*args2):
    func1x = partial(func1, ...(*argsx)...)
    with multiprocessing.Pool(3) as pool:
        pool.map(func1x, arg)

def func3(arg3):
    while True:
        try:
            # do something 2
            global xxx
            print(xxx)
        except:
            continue

def main():
    t1 = threading.Thread(target=func2, args=(*args2,))
    t2 = threading.Thread(target=func3, args=(args3,))
    t1.start()
    t2.start()

main()
My code runs smoothly, without any errors, with this nested multiprocessing and threading.
The problem is that even though I try to update the global variable xxx in func1(), the print call in func3() still prints [] instead of ['x'] as I expected.
I use a while loop to wait for func1() to update xxx, but it still doesn't work.
How can I see the new value of a global variable every time it's changed, from a running thread?
So I have this code:
import time
import threading

bar = False

def foo():
    while True:
        if bar == True:
            print "Success!"
        else:
            print "Not yet!"
        time.sleep(1)

def example():
    while True:
        time.sleep(5)
        bar = True

t1 = threading.Thread(target=foo)
t1.start()
t2 = threading.Thread(target=example)
t2.start()
I'm trying to understand why I can't get bar to be set to True. If it were, the other thread should see the change and print Success!
bar is a global variable. You should put global bar inside example():
def example():
    global bar
    while True:
        time.sleep(5)
        bar = True
When a variable is read, Python first looks it up inside the function and, if it is not found there, in the enclosing (module) scope. That's why it's not necessary to put global bar inside foo().
When a variable is assigned to, the assignment creates a local variable inside the function unless the global statement has been used. That's why it is necessary to put global bar inside example().
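As a minimal illustration of that rule (this sketch is not from the original answers; the name counter is just for the example):

counter = 0

def read_counter():
    # Reading only: 'counter' is not assigned here, so Python finds the
    # module-level variable.
    print(counter)

def assign_locally():
    # Assigning without 'global' creates a new local 'counter';
    # the module-level one is untouched.
    counter = 1

def assign_globally():
    global counter
    counter = 1  # rebinds the module-level 'counter'

read_counter()      # 0
assign_locally()
read_counter()      # still 0
assign_globally()
read_counter()      # 1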
You must declare 'bar' as a global variable inside the function; otherwise 'bar' is treated as a local variable.
def example():
    global bar
    while True:
        time.sleep(5)
        bar = True
I use Boto to access Amazon S3, and for file uploads I can assign a callback function. The problem is that I cannot access the variables I need from that callback function unless I make them global. On the other hand, if I make them global, they are global for all other Celery tasks too (until I restart Celery), since the file upload is executed from a Celery task.
Here is a function that uploads a JSON file with information about video conversion progress.
def upload_json():
    global current_frame
    global path_to_progress_file
    global bucket
    json_file = Key(bucket)
    json_file.key = path_to_progress_file
    json_file.set_contents_from_string('{"progress": "%s"}' % current_frame,
                                        cb=json_upload_callback, num_cb=2,
                                        policy="public-read")
And here are the two callback functions, one for uploading the frames generated by ffmpeg during the video conversion and one for the JSON file with the progress information.
# Callback functions that are called by get_contents_to_filename.
# The first argument is representing the number of bytes that have
# been successfully transmitted from S3 and the second is representing
# the total number of bytes that need to be transmitted.
def frame_upload_callback(transmitted, to_transmit):
    if transmitted == to_transmit:
        upload_json()

def json_upload_callback(transmitted, to_transmit):
    global uploading_frame
    if transmitted == to_transmit:
        print "Frame uploading finished"
        uploading_frame = False
Theoretically, I could pass the uploading_frame variable to the upload_json function, but it wouldn’t get to json_upload_callback as it’s executed by Boto.
In fact, I could write something like this.
In [1]: def make_function(message):
   ...:     def function():
   ...:         print message
   ...:     return function
   ...:

In [2]: hello_function = make_function("hello")

In [3]: hello_function
Out[3]: <function function at 0x19f4c08>

In [4]: hello_function()
hello
Which, however, doesn’t let you edit the value from the function, just lets you read the value.
def myfunc():
    stuff = 17
    def lfun(arg):
        print "got arg", arg, "and stuff is", stuff
    return lfun

my_function = myfunc()
my_function("hello")
This works.
def myfunc():
    stuff = 17
    def lfun(arg):
        print "got arg", arg, "and stuff is", stuff
        stuff += 1
    return lfun

my_function = myfunc()
my_function("hello")
And this gives an UnboundLocalError: local variable 'stuff' referenced before assignment.
Thanks.
In Python 2.x, closed-over variables are read-only (not because of the Python VM, but simply because there is no syntax for writing to a variable that is neither local nor global).
You can however use a closure over a mutable value... i.e.
def myfunc():
    stuff = [17]  # <<---- this is a mutable object
    def lfun(arg):
        print "got arg", arg, "and stuff[0] is", stuff[0]
        stuff[0] += 1
    return lfun

my_function = myfunc()
my_function("hello")
my_function("hello")
If you are instead using Python 3.x, the nonlocal keyword can be used to specify that a variable that is read and written in a closure is not a local, but should be captured from the enclosing scope:
def myfunc():
    stuff = 17
    def lfun(arg):
        nonlocal stuff
        print("got arg", arg, "and stuff is", stuff)
        stuff += 1
    return lfun

my_function = myfunc()
my_function("hello")
my_function("hello")
You could create a partial function via functools.partial. This is a way to call a function with some variables pre-baked into the call. However, to make that work you'd need to pass a mutable value, e.g. a list or dict, into the function, rather than just a bool.
from functools import partial

def callback(arg1, arg2, arg3):
    arg1[:] = [False]
    print arg1, arg2, arg3

local_var = [True]
partial_func = partial(callback, local_var)
partial_func(2, 1)
print local_var  # prints [False]
A simple way to do these things is to use a local function.
def myfunc():
    stuff = [17]  # a mutable container, so the nested function can update it
    def lfun(arg):
        print "got arg", arg, "and stuff is", stuff[0]
        stuff[0] += 1
    register_callback(lfun)  # register_callback stands for whatever API expects the callback
This will create a new function every time you call myfunc, and it will be able to use the local "stuff" copy.
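To tie this back to the original Boto question, here is a hedged sketch (not from the original answer) of how a closure over a mutable dict could replace the globals in the upload callback. It reuses the Key / set_contents_from_string call shown in the question and assumes bucket, path_to_progress_file and current_frame are supplied by the caller:

from boto.s3.key import Key  # boto 2.x Key class, as used in the question

def make_json_upload_callback(state):
    # 'state' is a mutable dict captured by the closure, so the callback can
    # record what happened without any global variables.
    def json_upload_callback(transmitted, to_transmit):
        if transmitted == to_transmit:
            print "Frame uploading finished"
            state['uploading_frame'] = False
    return json_upload_callback

def upload_json(bucket, path_to_progress_file, current_frame, state):
    json_file = Key(bucket)
    json_file.key = path_to_progress_file
    json_file.set_contents_from_string('{"progress": "%s"}' % current_frame,
                                       cb=make_json_upload_callback(state),
                                       num_cb=2, policy="public-read")

state = {'uploading_frame': True}
# upload_json(bucket, path_to_progress_file, current_frame, state)
# ...afterwards, state['uploading_frame'] tells you whether the last upload finished.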