I need to generate sine wave data (only positive values) over a specified interval, and for each value of the sine wave data, call some function.
Currently, I am generating the sine wave data with the code below:
np.sin(np.linspace(0, 180, count) * np.pi / 180.)
This generates the sine values for angles from 0 to 180 degrees; the size of the array is equal to count.
Now I need to call some function for each value of the generated array, and the calls for all values should complete within some predefined time interval. I tried using the sleep function, dividing the predefined time interval by count.
I am wondering if there is any other way to achieve this, because the function execution itself can take some time.
Let's say you want to run function foo() every 10 seconds, but the actual running time of foo() is unknown. The best you can do, without resorting to hard real-time programming, is to get the current time before and after the call to foo() and then sleep() for the rest of the interval:
import time
INTERVAL = 10 # seconds
# Repeat this fragment as needed
start = time.time() # in seconds
foo()
elapsed = time.time() - start
remains = INTERVAL - elapsed
# guard against a negative value if foo() overran the interval
time.sleep(max(0, remains))
However, keep in mind that sleep() sleeps for at least that much time. It may sleep longer due to scheduling, in which case your function foo may be executed less frequently than intended.
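If those small overruns must not accumulate across iterations, a common refinement is to schedule against absolute deadlines instead of sleeping for a freshly computed remainder each time. A minimal sketch, assuming foo() is a stand-in for your real work:

import time

INTERVAL = 10  # seconds

def foo():
    pass  # placeholder for the real work

next_deadline = time.monotonic() + INTERVAL
while True:
    foo()
    # Sleep until the absolute deadline, so per-iteration
    # errors do not accumulate into long-term drift.
    time.sleep(max(0, next_deadline - time.monotonic()))
    next_deadline += INTERVAL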
Just to put some Python around @DYZ's answer: you could use a decorator or a context manager to "patch" your target function and make it take the time you want it to complete in.
In the following code, you have a list with five elements and you want to print each one; the total time is 5 s, so printing each element should take 1 s.
import time

data = [1, 2, 3, 4, 5]

# Decorator.
def patch_execution_time(limit):
    def wrapper(func):
        def wrapped(*args, **kwargs):
            init = time.time()
            result = func(*args, **kwargs)
            end = time.time()
            elapsed = end - init
            if elapsed < limit:
                time.sleep(limit - elapsed)
            return result
        return wrapped
    return wrapper

# Context manager, more useful if the total time interval
# is dynamic.
class patch_execution_time_cxt(object):
    def __init__(self, operation, time):
        self.operation = operation
        self.time = time

    def __enter__(self):
        return patch_execution_time(self.time)(self.operation)

    def __exit__(self, *args):
        pass

# Two sample functions, one decorated and the other for
# illustrating the use of the context manager.
@patch_execution_time(1)
def foo(item):
    print(item)

def foo_1(item):
    print(item)

print("Using decorated ...")
for item in data:
    foo(item)

print("Using context manager ...")
with patch_execution_time_cxt(foo_1, 1) as patched_foo:
    for item in data:
        patched_foo(item)
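As a quick check, assuming the code above, you can time the whole loop to confirm it takes roughly five seconds in total:

start = time.time()
for item in data:
    foo(item)
print("total: %.2f s" % (time.time() - start))  # expect roughly 5 s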
I know the execution time of any Python program depends on the OS and cannot be controlled by the user. But what I want is for the program to go to sleep if the execution time is lower than necessary.
Let's say I have a Python program which has a print statement at the end.
def foo():
    ...
    ...
    return(ans)

print(foo())
Using timeit I have evaluated the range of execution times taken by foo. Let it be from 0.8 seconds to 5.5 seconds. I chose 10 seconds as the execution time of the complete script, to be on the safe side.
I want the program to add a delay of 9.2 seconds before the print statement if the execution of foo completed in 0.8 seconds, and likewise a delay of 4.5 seconds if the execution completed in 5.5 seconds.
You basically just have to sleep for the difference between the maximum time and the actual execution time. You could also make a general-purpose decorator.
import time

class padtime:
    def __init__(self, maxtime):
        self.maxtime = float(maxtime)

    def __call__(self, f):
        def _f(*args, **kwargs):
            start = time.time()
            ret = f(*args, **kwargs)
            end = time.time()
            delay = self.maxtime - (end - start)
            if delay > 0.0:
                time.sleep(delay)
            return ret
        return _f

@padtime(9.5)
def foo():
    ...
    return("Answer")
That can be applied to any function.
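As a quick usage sketch, assuming the definitions above: the decorated foo() should now always take about 9.5 seconds, no matter how quickly its body returns:

start = time.time()
print(foo())  # prints "Answer"
print("took %.2f s" % (time.time() - start))  # roughly 9.5 s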
You can measure the execution time of foo() using two calls to time.time(). You can then compute the amount of time to stall the execution of the program using the computed execution time of foo():
import time

def foo():
    ...

start_time = time.time()
foo()
end_time = time.time()

if end_time - start_time < 10:
    time.sleep(10 - (end_time - start_time))
Note that we use time.sleep() rather than a while loop that repeatedly checks whether enough time has elapsed, since busy waiting wastes resources.
I'm working on a project that needs to run two different CPU-intensive functions, so a multiprocessing approach seems to be the way to go. The challenge I'm facing is that one function has a slower runtime than the other. For the sake of argument, let's say that execute has a runtime of 0.1 seconds while update takes a full second to run. The goal is that while update is running, execute will have calculated an output value 10 times. Once update has finished, it needs to pass a set of parameters to execute, which can then continue generating output with the new set of parameters. After some time, update needs to run again and once more generate a new set of parameters.
Furthermore, both functions will require a different set of input variables.
The image link below should hopefully visualize my conundrum a bit better.
function runtime visualisation
From what I've gathered (https://zetcode.com/python/multiprocessing/), using an asymmetric mapping approach might be the way to go, but it doesn't really seem to work. Any help is greatly appreciated.
Pseudo Code
from multiprocessing import Pool
from datetime import datetime
import time
import numpy as np

class MyClass():
    def __init__(self, inital_parameter_1, inital_parameter_2):
        self.parameter_1 = inital_parameter_1
        self.parameter_2 = inital_parameter_2

    def execute(self, input_1, input_2, time_in):
        print('starting execute function for time:' + str(time_in))
        time.sleep(0.1)  # wait for 100 milliseconds
        # generate some output
        output = (self.parameter_1 * input_1) + (self.parameter_2 + input_2)
        print('exiting execute function')
        return output

    def update(self, update_input_1, update_input_2, time_in):
        print('starting update function for time:' + str(time_in))
        time.sleep(1)  # wait for 1 second
        # generate parameters
        self.parameter_1 += update_input_1
        self.parameter_2 += update_input_2
        print('exiting update function')

    def smap(f):
        return f()

if __name__ == "__main__":
    update_input_1 = 3
    update_input_2 = 4
    input_1 = 0
    input_2 = 1

    # initialize class
    my_class = MyClass(1, 2)

    # total runtime (arbitrary)
    runtime = int(10e6)
    # update_time (arbitrary)
    update_time = np.array([10, 10e2, 15e4, 20e5])

    for current_time in range(runtime):
        # if time equals update time, run both functions simultaneously until update is complete
        if any(update_time == current_time):
            with Pool() as pool:
                res = pool.map_async(my_class.smap, [my_class.execute(input_1, input_2, current_time),
                                                     my_class.update(update_input_1, update_input_2, current_time)])
        # otherwise run only execute
        else:
            output = my_class.execute(input_1, input_2, current_time)

        # increment input
        input_1 += 1
        input_2 += 2
I confess to not being able to fully follow your code vis-a-vis your description. But I see some issues:
Method update is not returning any value other than None, which is implicitly returned due to the lack of a return statement.
Your with Pool() ...: block will call terminate upon block exit, which is immediately after your call to pool.map_async, which is non-blocking. But you have no provision to wait for the completion of this submitted task (terminate will most likely kill the running task before it completes).
What you are passing to the map_async function is the worker function name and an iterable. But you are invoking the calls to execute and update in the current main process and using their return values as elements of the iterable, and these return values are definitely not functions suitable for passing to smap. So there is no multiprocessing being done, and this is just plain wrong.
You are also creating and destroying process pools over and over again. Much better to create the process pool just once.
I would therefore recommend the following changes at the very least. But note that this code potentially generates tasks much faster than they can be completed and you could have millions of tasks queued up to run given your current runtime value, which could be quite a strain on system resources such as memory. So I've inserted some code that ensures that the rate of submitting tasks is throttled so that the number of incomplete submitted tasks is never more than three times the number of CPU cores available.
# we won't need heavy-duty numpy for what we are doing:
#import numpy as np
from multiprocessing import cpu_count
from threading import Lock
... # etc.

if __name__ == "__main__":
    update_input_1 = 3
    update_input_2 = 4
    input_1 = 0
    input_2 = 1

    # initialize class
    my_class = MyClass(1, 2)

    # total runtime (arbitrary)
    runtime = int(10e6)
    # update_time (arbitrary)
    # we don't need the overhead of numpy (remove import of numpy):
    #update_time = np.array([10, 10e2, 15e4, 20e5])
    update_time = [10, 10e2, 15e4, 20e5]

    tasks_submitted = 0
    lock = Lock()

    execute_output = []
    def execute_result(result):
        global tasks_submitted
        with lock:
            tasks_submitted -= 1
        # result is the return value from method execute
        # do something with it, e.g. execute_output.append(result)
        pass

    update_output = []
    def update_result(result):
        global tasks_submitted
        with lock:
            tasks_submitted -= 1
        # result is the return value from method update
        # do something with it, e.g. update_output.append(result)
        pass

    n_processors = cpu_count()
    with Pool() as pool:
        for current_time in range(runtime):
            # if time equals update time, run both functions simultaneously until update is complete
            #if any(update_time == current_time):
            if current_time in update_time:
                # run both update and execute:
                pool.apply_async(my_class.update, args=(update_input_1, update_input_2, current_time), callback=update_result)
                with lock:
                    tasks_submitted += 1
            pool.apply_async(my_class.execute, args=(input_1, input_2, current_time), callback=execute_result)
            with lock:
                tasks_submitted += 1

            # increment input
            input_1 += 1
            input_2 += 2

            # throttle submissions so the backlog stays bounded
            while tasks_submitted > n_processors * 3:
                time.sleep(.05)

        # Ensure all tasks have completed:
        pool.close()
        pool.join()
    assert tasks_submitted == 0
I'm pretty new to Python. I'm trying to make a text-based strategy-like game in Python, and I want a value that increases constantly (I want some other values to increase or decrease at the same time too, but this is just for a start). But if I use a while True loop, I can't do anything else in the program: it just keeps increasing the value, and nothing else can run. I want the value to increase continuously while I can get some input from the user or run some other functions. Please tell me if there is a module I can use, or anything else.
import time

print("PLANET EARTH" + "" + "\n Buildings:",
      " ", "Resources:")

class ironMine():
    def __init__(self, bc, ps, w):
        self.buildingCost = bc
        self.productionSpeed = ps
        self.warehouse = w

    def production(self):
        while True:
            print(" " +
                  "iron:", self.warehouse,
                  end="\r")
            self.warehouse += self.productionSpeed
            time.sleep(0.5)
            x = input("Write something")
            if x == upgrade:
                self.productionSpeed += 5
            else:
                print("there is no such command")

t1 = ironMine([300, 200, 100], 10, 0)
t1.production()
For example, this part is the resource production part for iron. I just added a random input to show I can't get it done. And I don't know if this part, if x == upgrade: self.productionSpeed += 5, will update the existing self.productionSpeed value for object t1.
Effectively, you are trying to implement your own version of a clock, which counts up at some rate, using a loop like this:
value = initial_value
while True:
    time.sleep(1)
    value += rate
There is more to your code than that, of course; you also want to read user input and control the rate based on that. But at the core of it, you're trying to create a clock, and it's not working because your clock stops "ticking" while input is waiting for the user to enter something.
Instead of writing a clock, you should use one from the standard library. The time.monotonic() function works like a clock, in the sense that if you call the function twice, the difference between the two numbers is the number of seconds which elapsed between the two function calls.
The simple "clock" above, which has a variable value increasing at a fixed rate, can be replaced by a function call which calculates the current value based on the number of seconds that have elapsed, instead of continuously maintaining its current value in a variable:
import time

initial_time = time.monotonic()

def get_current_value():
    current_time = time.monotonic()
    seconds = current_time - initial_time
    # use int(seconds) for discrete updates once per second
    return initial_value + rate * int(seconds)
For your case, where the rate can change dynamically, it is a bit more complicated, but the key idea is the same; don't write your own clock, use an existing one. Since there are two things we need to be able to do - get the current value, and change the rate - let's encapsulate those two operations in a class:
import time

class TimeBasedVariable:
    def __init__(self, initial_value, rate):
        self.initial_value = initial_value
        self.rate = rate
        self.initial_time = time.monotonic()

    def get_value(self, current_time=None):
        if current_time is None:
            current_time = time.monotonic()
        seconds = current_time - self.initial_time
        return self.initial_value + self.rate * int(seconds)

    def set_rate(self, rate):
        # reset the reference point to the current time
        t = time.monotonic()
        self.initial_value = self.get_value(t)
        self.initial_time = t
        self.rate = rate
Note that I simplified the problem slightly by making the variable update every second, rather than every 0.5 seconds. If you do want it to update every half-second, just write int(2 * seconds) instead of int(seconds).
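As a usage sketch for the game loop (the names here are just illustrative), the value keeps ticking on its own even though the program spends most of its time blocked in input():

iron = TimeBasedVariable(0, 10)  # start at 0, +10 per second
while True:
    command = input("Write something: ")
    if command == "upgrade":
        iron.set_rate(iron.rate + 5)
    print("iron:", iron.get_value())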
So I'm trying to get a strobe-like effect in a game I'm building, and the way I currently have it is destroying my frame rate, because the sleep function also applies to the draw function. Can someone explain why this happens, and the logic that I'm failing to understand? Why can't I just have the return happen every .5 seconds without it affecting the .1 sleep I have in my hue function?
Here’s a crude demonstration of what the code kind of does.
from random import randint
import time

def rand_intr():
    r = randint(1, 256)
    time.sleep(.5)
    return r

def rand_intg():
    g = randint(1, 256)
    time.sleep(.5)
    return g

def rand_intb():
    b = randint(1, 256)
    time.sleep(.5)
    return b

def hue():
    r = rand_intr()
    g = rand_intg()
    b = rand_intb()
    print(r, g, b)
    print('test')
    time.sleep(.01)

while True:
    hue()
The sleep function blocks the main thread. This means rand_intg does not run until rand_intr "wakes up" from its sleep.
Similarly, rand_intb has to wait for rand_intg, and hue has to wait for all of the previous three functions. This means the total time hue needs before it can do any work is at least the amount of time needed to complete rand_intr, rand_intg, and rand_intb: three 0.5-second sleeps, or roughly 1.5 seconds per call.
We can understand what is happening if we modify your example slightly and look at the output.
from random import randint
import time

def log_entry_exit(f):
    def wrapped():
        print("Entered {}".format(f.__name__))
        result = f()
        print("Leaving {}".format(f.__name__))
        return result
    return wrapped

@log_entry_exit
def rand_intr():
    r = randint(1, 256)
    time.sleep(.5)
    return r

@log_entry_exit
def rand_intg():
    g = randint(1, 256)
    time.sleep(.5)
    return g

@log_entry_exit
def rand_intb():
    b = randint(1, 256)
    time.sleep(.5)
    return b

def hue():
    r = rand_intr()
    g = rand_intg()
    b = rand_intb()
    print(r, g, b)
    print('test')
    time.sleep(.01)

while True:
    hue()
Here I just modified your functions to print a message when we enter and exit each function.
The output is
Entered rand_intr
Leaving rand_intr
Entered rand_intg
Leaving rand_intg
Entered rand_intb
Leaving rand_intb
172 206 115
test
Entered rand_intr
Leaving rand_intr
Entered rand_intg
Leaving rand_intg
Entered rand_intb
Leaving rand_intb
240 33 135
test
...
Here, the effect of each sleep on hue can be seen clearly. You don't get to print the rgb values or "test" until the previous functions have completed.
What you can do is call your hue function periodically using a timer callback, and then modify the rgb values according to some pattern. See this Stack Overflow question on executing periodic actions for an example of how to execute a function periodically using a basic time-based mechanism.
Edit
Based on your comment to @jasonharper:
If you call hue every 60 seconds, it does not make sense for your calls to the functions that generate the random rgb values to occur at a faster rate, because any changes in the intervening time will never be seen by hue.
What you can do is call hue every 60 seconds, and then generate your rgb values according to whatever pattern you want inside it.
Modifying the answer by @kev in the post I linked to above:
import time, threading

def update():
    # Do whatever you want here.
    # This function will be called again in 60 seconds.
    # ...
    hue()
    # Whatever other things you want to do
    # ...
    threading.Timer(60.0, update).start()

def hue():
    r = rand_intr()
    g = rand_intg()
    b = rand_intb()
    print(r, g, b)
    # Don't call sleep.

if __name__ == "__main__":
    update()
Now you should call update only once, possibly in some startup part of your code, and remove all the calls to sleep from your functions.
I'm programming in Python on Windows and would like to accurately measure the time it takes for a function to run. I have written a function time_it that takes another function, runs it, and returns the time it took to run.
import time

def time_it(f, *args):
    # time.clock() was removed in Python 3.8; perf_counter() is the replacement
    start = time.perf_counter()
    f(*args)
    return (time.perf_counter() - start) * 1000
I call this 1000 times and average the result. (The 1000 constant at the end is to give the answer in milliseconds.)
This function seems to work, but I have this nagging feeling that I'm doing something wrong, and that by doing it this way I'm using more time than the function actually uses when it's running.
Is there a more standard or accepted way to do this?
When I changed my test function to call a print so that it takes longer, my time_it function returns an average of 2.5 ms while cProfile.run('f()') returns an average of 7.0 ms. I figured my function would overestimate the time if anything; what is going on here?
One additional note: it is the relative time of functions compared to each other that I care about, not the absolute time, as this will obviously vary depending on hardware and other factors.
Use the timeit module from the Python standard library.
Basic usage:
from timeit import Timer

# The first argument is the code to be run; the second "setup" argument is only run once,
# and is not included in the execution time.
t = Timer("""x.index(123)""", setup="""x = range(1000)""")

print(t.timeit())      # prints a float, for example 5.8254
# ...or...
print(t.timeit(1000))  # repeat 1000 times instead of the default 1 million
Instead of writing your own profiling code, I suggest you check out the built-in Python profilers (profile or cProfile, depending on your needs): http://docs.python.org/library/profile.html
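For example, a minimal cProfile run might look like this (foo here is just a placeholder for whatever you are measuring):

import cProfile

def foo():
    sum(range(1000000))  # placeholder workload

cProfile.run('foo()')  # prints per-call counts and cumulative times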
You can create a "timeme" decorator like so
import time

def timeme(method):
    def wrapper(*args, **kw):
        startTime = int(round(time.time() * 1000))
        result = method(*args, **kw)
        endTime = int(round(time.time() * 1000))
        print(endTime - startTime, 'ms')
        return result
    return wrapper

@timeme
def func1(a, b, c='c', sleep=1):
    time.sleep(sleep)
    print(a, b, c)

func1('a', 'b', 'c', 0)
func1('a', 'b', 'c', 0.5)
func1('a', 'b', 'c', 0.6)
func1('a', 'b', 'c', 1)
This code is very inaccurate:

total = 0
for i in range(1000):
    start = time.perf_counter()  # time.clock() was removed in Python 3.8
    function()
    end = time.perf_counter()
    total += end - start
avg_time = total / 1000  # don't name this "time", or it shadows the module
This code is less inaccurate:

start = time.perf_counter()
for i in range(1000):
    function()
end = time.perf_counter()
avg_time = (end - start) / 1000
The very inaccurate version suffers from measurement bias if the run time of the function is close to the resolution of the clock. Most of the measured times are merely random numbers between 0 and a few ticks of the clock.
Depending on your system workload, the "time" you observe from a single function may be entirely an artifact of OS scheduling and other uncontrollable overheads.
The second (less inaccurate) version has less measurement bias. If your function is really fast, you may need to run it 10,000 times to damp out OS scheduling and other overheads.
Both are, of course, terribly misleading. The run time of your program as a whole is not the sum of the function run times. You can only use the numbers for relative comparisons. They are not absolute measurements that convey much meaning.
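If relative comparison is the goal, a sketch along these lines (the two implementations here are stand-ins) keeps the measurement conditions identical for both candidates:

import timeit

def version_a():
    return [i * i for i in range(1000)]

def version_b():
    return list(map(lambda i: i * i, range(1000)))

# The same number of repetitions under the same conditions,
# so the ratio is meaningful even if the absolute times are not.
t_a = timeit.timeit(version_a, number=10000)
t_b = timeit.timeit(version_b, number=10000)
print("b/a ratio: %.2f" % (t_b / t_a))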
If you want to time a Python method even if the block you measure may throw, one good approach is to use the with statement. Define a Timer class as:
import time

class Timer:
    def __enter__(self):
        self.start = time.perf_counter()  # time.clock() was removed in Python 3.8
        return self

    def __exit__(self, *args):
        self.end = time.perf_counter()
        self.interval = self.end - self.start
Then you may want to time a connection method that may throw. Use:

import http.client  # httplib was renamed to http.client in Python 3

with Timer() as t:
    conn = http.client.HTTPConnection('google.com')
    conn.request('GET', '/')

print('Request took %.03f sec.' % t.interval)
The __exit__() method will be called even if the connection request throws. More precisely, you'd have to use try/finally to see the result in case it throws, as with:

try:
    with Timer() as t:
        conn = http.client.HTTPConnection('google.com')
        conn.request('GET', '/')
finally:
    print('Request took %.03f sec.' % t.interval)
This is neater:

from contextlib import contextmanager
import time

@contextmanager
def timeblock(label):
    start = time.perf_counter()  # time.clock() was removed in Python 3.8
    try:
        yield
    finally:
        end = time.perf_counter()
        print('{} : {}'.format(label, end - start))

with timeblock("just a test"):
    print("yippee")
Similar to @AlexMartelli's answer:

import timeit
timeit.timeit(fun, number=10000)

can do the trick.
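Here fun must be a zero-argument callable. If the timings are noisy, a common refinement (a sketch, not the only option) is timeit.repeat, taking the minimum of several runs:

import timeit

def fun():
    sum(range(1000))  # placeholder workload

# The minimum of several repeats filters out scheduling noise;
# each repeat executes the callable 10000 times.
best = min(timeit.repeat(fun, number=10000, repeat=5))
print("best of 5: %.4f s" % best)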