So I’m trying to get a strobe-like effect in a game I’m building, and the way I currently have it, it’s destroying my frame rate because the sleep calls also delay the draw function. Can someone explain why this happens, and the logic I’m failing to understand? Why can’t I just have the return happen every .5 seconds without it affecting the .1 sleep I have in my hue function?
Here’s a crude demonstration of what the code kind of does.
from random import randint
import time

def rand_intr():
    r = randint(1, 256)
    time.sleep(.5)
    return r

def rand_intg():
    g = randint(1, 256)
    time.sleep(.5)
    return g

def rand_intb():
    b = randint(1, 256)
    time.sleep(.5)
    return b

def hue():
    r = rand_intr()
    g = rand_intg()
    b = rand_intb()
    print(r, g, b)
    print('test')
    time.sleep(.01)

while True:
    hue()
The sleep function blocks the main thread. This means rand_intg does not run until rand_intr "wakes up" from its sleep.
Similarly, rand_intb has to wait for rand_intg, and hue has to wait for all the previous 3 functions. This means the total time hue has to wait before it can do any work is at least the amount of time needed to complete rand_intr, rand_intg, and rand_intb.
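A rough timing check makes the accumulation concrete: three back-to-back 0.5-second sleeps on one thread cost at least 1.5 seconds before anything else (such as drawing) can happen. This is a minimal sketch, with slow_channel standing in for any one of the rand_int* functions:

```python
import time

def slow_channel():
    # stand-in for one of the rand_int* functions above
    time.sleep(0.5)

start = time.perf_counter()
slow_channel()  # r
slow_channel()  # g
slow_channel()  # b
elapsed = time.perf_counter() - start
print("elapsed: %.2f s" % elapsed)  # at least 1.5 s, since the sleeps run back to back
```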
We can understand what is happening if we modify your example slightly and look at the output.
from random import randint
import time

def log_entry_exit(f):
    def wrapped():
        print("Entered {}".format(f.__name__))
        result = f()
        print("Leaving {}".format(f.__name__))
        return result
    return wrapped

@log_entry_exit
def rand_intr():
    r = randint(1, 256)
    time.sleep(.5)
    return r

@log_entry_exit
def rand_intg():
    g = randint(1, 256)
    time.sleep(.5)
    return g

@log_entry_exit
def rand_intb():
    b = randint(1, 256)
    time.sleep(.5)
    return b

def hue():
    r = rand_intr()
    g = rand_intg()
    b = rand_intb()
    print(r, g, b)
    print('test')
    time.sleep(.01)

while True:
    hue()
Here I just modified your functions to print a message when we enter and exit each function.
The output is
Entered rand_intr
Leaving rand_intr
Entered rand_intg
Leaving rand_intg
Entered rand_intb
Leaving rand_intb
172 206 115
test
Entered rand_intr
Leaving rand_intr
Entered rand_intg
Leaving rand_intg
Entered rand_intb
Leaving rand_intb
240 33 135
test
...
Here, the effect of each sleep on hue can be seen clearly. You don't get to print the rgb values or "test" until the previous functions have completed.
What you can do is to call your hue function periodically using a timer callback, and then modify the rgb values according to some pattern. See this stackoverflow question on
executing periodic actions for an example on how to periodically execute a function using a basic time-based mechanism.
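As a sleep-free alternative for the strobe itself, the colour can also be computed from elapsed time whenever the game loop asks for it, so nothing ever blocks. The hue_at helper below is a hypothetical sketch, assuming one full cycle around the colour wheel every 2 seconds:

```python
import colorsys

def hue_at(t):
    # map elapsed seconds onto the colour wheel (2 s per full cycle, assumed)
    h = (t % 2.0) / 2.0
    r, g, b = colorsys.hsv_to_rgb(h, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

print(hue_at(0.0))  # (255, 0, 0): pure red at t = 0
```

The game loop would then call hue_at(elapsed) once per frame instead of sleeping between colour changes.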
Edit
Based on your comment to @jasonharper
If you call hue every 60 seconds, it does not make sense for the functions that generate the random rgb values to run at a faster rate, because any changes made in the intervening time will never be seen by hue.
What you can do is call hue every 60 seconds, then generate your rgb values inside it with whatever pattern you want.
Modifying the answer by @kev in the post I linked to above,
import time, threading

def update():
    # Do whatever you want here.
    # This function will be called again in 60 seconds.
    # ...
    hue()
    # Whatever other things you want to do
    # ...
    threading.Timer(60.0, update).start()

def hue():
    r = rand_intr()
    g = rand_intg()
    b = rand_intb()
    print(r, g, b)
    # Don't call sleep.

if __name__ == "__main__":
    update()
Now you should only call update once, possibly in some startup part of your code and remove all the calls to sleep in your functions.
Related
I'm working on a project that needs to run two different CPU-intensive functions, so a multiprocessing approach seems to be the way to go. The challenge I'm facing is that one function has a slower runtime than the other. For the sake of argument, let's say that execute has a runtime of .1 seconds while update takes a full second to run. The goal is that while update is running, execute will have calculated an output value 10 times. Once update has finished, it needs to pass a set of parameters to execute, which can then continue generating output with the new parameters. After some time, update needs to run again and once more generate a new set of parameters.
Furthermore both functions will require a different set of input variables.
The image link below should hopefully visualize my conundrum a bit better.
function runtime visualisation
From what I've gathered (https://zetcode.com/python/multiprocessing/), using an asymmetric mapping approach might be the way to go, but it doesn't really seem to work. Any help is greatly appreciated.
Pseudo Code
from multiprocessing import Pool
from datetime import datetime
import time
import numpy as np

class MyClass():
    def __init__(self, inital_parameter_1, inital_parameter_2):
        self.parameter_1 = inital_parameter_1
        self.parameter_2 = inital_parameter_2

    def execute(self, input_1, input_2, time_in):
        print('starting execute function for time:' + str(time_in))
        time.sleep(0.1)  # wait for 100 milliseconds
        # generate some output
        output = (self.parameter_1 * input_1) + (self.parameter_2 + input_2)
        print('exiting execute function')
        return output

    def update(self, update_input_1, update_input_2, time_in):
        print('starting update function for time:' + str(time_in))
        time.sleep(1)  # wait for 1 second
        # generate parameters
        self.parameter_1 += update_input_1
        self.parameter_2 += update_input_2
        print('exiting update function')

    def smap(f):
        return f()

if __name__ == "__main__":
    update_input_1 = 3
    update_input_2 = 4
    input_1 = 0
    input_2 = 1
    # initialize class
    my_class = MyClass(1, 2)
    # total runtime (arbitrary)
    runtime = int(10e6)
    # update_time (arbitrary)
    update_time = np.array([10, 10e2, 15e4, 20e5])

    for current_time in range(runtime):
        # if time equals update time run both functions simultaneously until update is complete
        if any(update_time == current_time):
            with Pool() as pool:
                res = pool.map_async(my_class.smap,
                                     [my_class.execute(input_1, input_2, current_time),
                                      my_class.update(update_input_1, update_input_2, current_time)])
        # otherwise run only execute
        else:
            output = my_class.execute(input_1, input_2, current_time)
        # increment input
        input_1 += 1
        input_2 += 2
I confess to not being able to fully follow your code vis-a-vis your description. But I see some issues:
Method update is not returning any value other than None, which is implicitly returned due to the lack of a return statement.
Your with Pool() ...: block will call terminate upon block exit, which is immediately after your call to pool.map_async, which is non-blocking. But you have no provision to wait for the completion of this submitted task (terminate will most likely kill the running task before it completes).
What you are passing to the map_async function is the worker function name and an iterable. But you are invoking method calls to execute and update from the current main process and using their return values as elements of the iterable and these return values are definitely not functions suitable for passing to smap. So there is no multiprocessing being done and this is just plain wrong.
You are also creating and destroying process pools over and over again. Much better to create the process pool just once.
I would therefore recommend the following changes at the very least. But note that this code potentially generates tasks much faster than they can be completed and you could have millions of tasks queued up to run given your current runtime value, which could be quite a strain on system resources such as memory. So I've inserted some code that ensures that the rate of submitting tasks is throttled so that the number of incomplete submitted tasks is never more than three times the number of CPU cores available.
# we won't need heavy-duty numpy for what we are doing:
#import numpy as np
from multiprocessing import cpu_count
from threading import Lock
... # etc.

if __name__ == "__main__":
    update_input_1 = 3
    update_input_2 = 4
    input_1 = 0
    input_2 = 1
    # initialize class
    my_class = MyClass(1, 2)
    # total runtime (arbitrary)
    runtime = int(10e6)
    # update_time (arbitrary)
    # we don't need overhead of numpy (remove import of numpy):
    #update_time = np.array([10, 10e2, 15e4, 20e5])
    update_time = [10, 10e2, 15e4, 20e5]

    tasks_submitted = 0
    lock = Lock()

    execute_output = []
    def execute_result(result):
        global tasks_submitted
        with lock:
            tasks_submitted -= 1
        # result is the return value from method execute
        # do something with it, e.g. execute_output.append(result)
        pass

    update_output = []
    def update_result(result):
        global tasks_submitted
        with lock:
            tasks_submitted -= 1
        # result is the return value from method update
        # do something with it, e.g. update_output.append(result)
        pass

    n_processors = cpu_count()
    with Pool() as pool:
        for current_time in range(runtime):
            # if time equals update time run both functions simultaneously until update is complete
            #if any(update_time == current_time):
            if current_time in update_time:
                # run both update and execute:
                pool.apply_async(my_class.update,
                                 args=(update_input_1, update_input_2, current_time),
                                 callback=update_result)
                with lock:
                    tasks_submitted += 1
            pool.apply_async(my_class.execute,
                             args=(input_1, input_2, current_time),
                             callback=execute_result)
            with lock:
                tasks_submitted += 1
            # increment input
            input_1 += 1
            input_2 += 2
            while tasks_submitted > n_processors * 3:
                time.sleep(.05)
        # Ensure all tasks have completed:
        pool.close()
        pool.join()
    assert(tasks_submitted == 0)
I am working on a Python project, and I reached a point where I need a function to stop and return after x time, which is passed as a parameter. A simple example:
def timedfunc(time_to_stop):
    result = None
    while time_has_not_passed:
        do()
    return result
I explain:
When time has passed, timedfunc stops, interrupts everything in it, and jumps right to return result. So what I need is a way to make this function work for as long as possible (time_to_stop) and then return the result variable, which should be as accurate as possible (more time, more calculations, more accuracy). Of course, when time is out, do() also stops. To put it another way: the function is continuously changing the value of result, and once the time has passed it returns the current value. (do() stands for all the calculations that change result.)
I just made a simple example to better explain what I want:
def multiply(time):
    result = 10
    while time_has_not_passed:
        temporary = result*10  # Actually much more time-consuming, also like 3 minutes.
        temporary /= 11
        result = temporary
    return result
This explains what kind of calculations do() makes, and I need as many *10/11 as python can do in, for example, 0.5 sec.
I know that this is pretty complicated, but any help would be great.
import time

start_time = time.time()
program()
if (time.time() - start_time) > 0.5:  # you can change 0.5 to any other value you want
    exit()
It is something like this. You can also put this if statement right inside your program function.
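A more direct way to express "run until x seconds have passed" is to compute a deadline up front and test it inside the loop. This sketch applies that idea to the *10/11 example from the question (time.monotonic is used instead of time.time because it is meant for measuring intervals):

```python
import time

def multiply(time_to_stop):
    result = 10
    deadline = time.monotonic() + time_to_stop
    while time.monotonic() < deadline:
        # one unit of work; the loop stops *starting* new units
        # once the deadline has passed
        result = result * 10 / 11
    return result

print(multiply(0.1))
```

Note that the check only happens between iterations: a single three-minute do() call would still run to completion, so truly interrupting a calculation mid-flight needs threads, processes, or signals instead.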
Maybe you can use:
from time import time, sleep

sleep(x)
do()

# OR

now = time()
cooldown = x
if now + cooldown < time():
    do()
If you want it to do something for a while:
now = time()
needed_time = x
while now + needed_time > time():
    do()
When I put a delay before a loop that has a delay inside a function, the function seems neither to delay nor to loop when called.
from time import *
from random import *

def _print(s):
    global e_times
    print(s)
    return 10

def doloop(l_delay, s_delay):
    sleep(s_delay)
    while True:
        sleep(l_delay)

doloop(_print('Hello, world!'), 20)
My expectation was that the output would delay for 20 seconds and then print the 'Hello, world!' string once every 10 seconds. But when executed, it neither delays nor loops. What should I do?
doloop(_print('Hello, world!'), 20)
This will do the following:
evaluate _print('Hello, world!') -> print the string once and get 10
call doloop like this: doloop(10, 20)
Function arguments are evaluated before being passed to the function.
So you will obviously not get a loop that calls the function multiple times.
What you need to do is pass the function itself to doloop, and use its return value.
def doloop(call_func, s_delay):
    sleep(s_delay)
    while True:
        l_delay = call_func()
        sleep(l_delay)
And then call it with:
doloop(lambda: _print('Hello, world!'), 20)
The lambda here turns the function call into a closure: the argument is bound inside the new function object, and the call itself is deferred until the lambda is invoked.
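The difference between passing a call's result and passing the function itself can be seen in a tiny sketch (shout and call_twice are made-up names for illustration):

```python
calls = []

def shout():
    calls.append("shout!")
    return 5

def call_twice(func):
    # expects a callable and invokes it two times
    func()
    func()

call_twice(lambda: shout())   # shout runs twice, once per invocation
print(len(calls))             # 2
# call_twice(shout())         # would run shout once, immediately, and then
#                             # fail: shout() evaluates to 5, and 5 is not callable
```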
Your functions really don't make sense to me after reading your expectations for the program you want to build... I would simply use the following code:
from time import sleep

def _print(n, s):  # n = seconds to sleep, s = string to print
    while True:  # infinite loop
        print(s)  # print the string
        sleep(n)  # sleep for the number of seconds you specify

sleep(20)  # initial sleep of 20 seconds
_print(10, "Hello, World!")  # sleep for n seconds, print s, forever
Hope you have got what you want.
from time import *
from random import *

def _print(s):
    global e_times
    print(s)
    return 10

def doloop(call_func, s_delay):
    sleep(s_delay)
    while True:
        l_delay = call_func()
        sleep(l_delay)

doloop(lambda: _print('Hello, world!'), 20)
You sometimes have to use a lambda function. A lambda works like a variable holding the function, which is then passed to the other function.
I need to generate sine wave data (only positive values) between 0 and the specified interval, and for each value of the sine wave data, call some function.
Currently, I am generating sine wave data between 0 and the specified interval using below code
np.sin(np.linspace(0,180, count)* np.pi / 180. )
It generates the sine values for count angles between 0 and 180 degrees; the size of the array is equal to count.
Now I need to call some function for each value of the generated array. All of the calls together should complete within some predefined time interval. I tried to use the sleep function, dividing the predefined time interval by count.
I am wondering if there is any other way to achieve the above functionality because the instruction execution can take some time.
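The pacing described in the question — dividing the predefined interval by count and sleeping between calls — might look like this sketch; call_over_interval is a hypothetical helper name, and plain math is used instead of numpy to keep it self-contained:

```python
import math
import time

def call_over_interval(values, func, total_time):
    # naive pacing: a fixed sleep after each call, which ignores
    # how long func itself takes to run
    pause = total_time / len(values)
    for v in values:
        func(v)
        time.sleep(pause)

# half a sine cycle (angles 0..180 in 45-degree steps), paced over ~1 second
wave = [math.sin(math.radians(a)) for a in range(0, 181, 45)]
seen = []
call_over_interval(wave, seen.append, 1.0)
```

As the question suspects, this drifts: the real period per value is pause plus func's own runtime, which is what the answer below addresses.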
Let's say you want to run function foo() every 10 seconds, but the actual running time of foo() is unknown. The best you can do, without resorting to hard real-time programming, is to get the current time before and after the call to foo() and then sleep() for the rest of the interval:
import time
INTERVAL = 10 # seconds
# Repeat this fragment as needed
start = time.time() # in seconds
foo()
elapsed = time.time() - start
remains = INTERVAL - elapsed
time.sleep(max(0, remains))  # avoid a negative argument if foo overran the interval
However, keep in mind that sleep sleeps at least that much time. It may sleep longer, due to scheduling, in which case your function foo may be executed less frequently than needed.
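If cumulative drift matters, a common refinement is to schedule against absolute deadlines rather than re-measuring each cycle, so small overruns don't push every later call back. The sketch below uses a made-up helper name, run_periodically, and time.monotonic for interval arithmetic:

```python
import time

def run_periodically(foo, interval, iterations):
    next_time = time.monotonic()
    for _ in range(iterations):
        foo()
        next_time += interval  # absolute deadline for the next call
        # never pass a negative value to sleep; if foo overran,
        # just start the next cycle immediately
        time.sleep(max(0.0, next_time - time.monotonic()))

ticks = []
start = time.monotonic()
run_periodically(lambda: ticks.append(time.monotonic() - start), 0.1, 4)
```

Because each deadline is next_time += interval rather than "now plus interval", one slow foo call doesn't shift the whole schedule.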
Just to put some Python around @DYZ's answer, you could use a decorator or a context manager in order to "patch" your target function and make it take the time you want to complete.
In the following code, you have a list with five elements and you want to print each one, the total time is 5s, so print each element should take 1s.
import time

data = [1, 2, 3, 4, 5]

# Decorator.
def patch_execution_time(limit):
    def wrapper(func):
        def wrapped(*args, **kwargs):
            init = time.time()
            result = func(*args, **kwargs)
            end = time.time()
            elapsed = end - init
            if elapsed < limit:
                time.sleep(limit - elapsed)
            return result
        return wrapped
    return wrapper

# Context manager, more useful if the total time interval
# is dynamic.
class patch_execution_time_cxt(object):
    def __init__(self, operation, time):
        self.operation = operation
        self.time = time

    def __enter__(self):
        return patch_execution_time(self.time)(self.operation)

    def __exit__(self, *args):
        pass

# Two sample functions, one decorated and the other for
# illustrating the use of the context manager.
@patch_execution_time(1)
def foo(item):
    print(item)

def foo_1(item):
    print(item)

print("Using decorated ...")
for item in data:
    foo(item)

print("Using context manager ...")
with patch_execution_time_cxt(foo_1, 1) as patched_foo:
    for item in data:
        patched_foo(item)
I'm learning Twisted recently, and just now I re-read some basic docs on Deferred. Here is some example code from: http://twistedmatrix.com/documents/12.3.0/core/howto/defer.html
What about commenting the second g = Getter() out?
Will there be a re-entry problem? Do you have any good ideas on how to avoid this kind of issue?
from twisted.internet import reactor, defer

class Getter:
    def gotResults(self, x):
        """
        The Deferred mechanism provides a mechanism to signal error
        conditions. In this case, odd numbers are bad.

        This function demonstrates a more complex way of starting
        the callback chain by checking for expected results and
        choosing whether to fire the callback or errback chain
        """
        if self.d is None:
            print "Nowhere to put results"
            return
        d = self.d
        self.d = None
        if x % 2 == 0:
            d.callback(x*3)
        else:
            d.errback(ValueError("You used an odd number!"))

    def _toHTML(self, r):
        """
        This function converts r to HTML.

        It is added to the callback chain by getDummyData in
        order to demonstrate how a callback passes its own result
        to the next callback
        """
        return "Result: %s" % r

    def getDummyData(self, x):
        """
        The Deferred mechanism allows for chained callbacks.
        In this example, the output of gotResults is first
        passed through _toHTML on its way to printData.

        Again this function is a dummy, simulating a delayed result
        using callLater, rather than using a real asynchronous
        setup.
        """
        self.d = defer.Deferred()
        # simulate a delayed result by asking the reactor to schedule
        # gotResults in 2 seconds time
        reactor.callLater(2, self.gotResults, x)
        self.d.addCallback(self._toHTML)
        return self.d

def printData(d):
    print d

def printError(failure):
    import sys
    sys.stderr.write(str(failure))

# this series of callbacks and errbacks will print an error message
g = Getter()
d = g.getDummyData(3)
d.addCallback(printData)
d.addErrback(printError)

# this series of callbacks and errbacks will print "Result: 12"
#g = Getter() #<= What about commenting this line out?
d = g.getDummyData(4)
d.addCallback(printData)
d.addErrback(printError)

reactor.callLater(4, reactor.stop)
reactor.run()
Yes, if you comment out the second g = Getter(), you will have a problem, because the Deferred is stored on the Getter object: the second call to getDummyData overwrites the first Deferred, so the first Deferred never fires, and the result that was meant for one chain ends up on the other.
You shouldn't do this. As a general point, I don't think it is a good idea to hold onto Deferred objects, because they can only fire once and it is all too easy to have a problem like you do.
What you should do is this:
def getDummyData(self, x):
    ...
    d = defer.Deferred()
    # simulate a delayed result by asking the reactor to schedule
    # gotResults in 2 seconds time
    reactor.callLater(2, self.gotResults, x, d)
    d.addCallback(self._toHTML)
    return d
And:
def gotResults(self, x, d):
    """
    The Deferred mechanism provides a mechanism to signal error
    conditions. In this case, odd numbers are bad.

    This function demonstrates a more complex way of starting
    the callback chain by checking for expected results and
    choosing whether to fire the callback or errback chain
    """
    if d is None:
        print "Nowhere to put results"
        return
    if x % 2 == 0:
        d.callback(x*3)
    else:
        d.errback(ValueError("You used an odd number!"))
Notice that in this case Getter has no state, which is good, and you don't need a class for it!
My opinion is that Deferreds should be used to give the caller of your function the ability to do something with the result when it becomes available. They should not be used for anything fancier. So I always have
def func():
    d = defer.Deferred()
    ...
    return d
If the caller has to hold on to the Deferred for whatever reason, they may, but I can freely call func multiple times without having to worry about hidden state.
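The one-shot nature of a Deferred has a close stdlib analogue: concurrent.futures.Future (Python 3.8+) also refuses to be resolved twice, which illustrates why handing out a fresh object per call is the safer pattern.

```python
from concurrent.futures import Future, InvalidStateError

f = Future()
f.set_result(3)
try:
    f.set_result(4)  # a Future, like a Deferred, can only be resolved once
except InvalidStateError:
    print("already resolved")
```

Reusing one stored Future across calls would hit exactly this error, just as reusing self.d caused the overwrite problem above.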