I'm trying to run a set of code that starts exactly on 5-second boundaries of UTC time, starting at an even minute.
For example it would execute each sample at exactly:
11:20:45
11:20:50
11:20:55
11:21:00
11:21:05
11:21:10
I want that to happen regardless of the execution time of the code block: whether running the code is instant or takes 3 seconds, I still want it to execute at the 5-second UTC intervals.
I'm not exactly sure how to do this, though I think that datetime.datetime.utcnow().timestamp() - (datetime.datetime.utcnow().timestamp() % 5.0) + 5 gets me the next upcoming start time?
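That expression does round up to the next 5-second boundary, assuming both utcnow() calls land in the same 5-second window. A minimal sketch of the idea, using a single clock reading to avoid that race:
import time

def next_boundary(interval=5.0):
    # one reading of the clock, rounded up to the next multiple of `interval`;
    # Unix time is UTC-based, so the boundaries line up with UTC
    now = time.time()
    return now - (now % interval) + interval

start = next_boundary()
time.sleep(max(0.0, start - time.time()))  # wait until the boundary
print("started at", time.time())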
You can use Python's sched module:
import sched, time

s = sched.scheduler(time.time, time.sleep)

def execute_something(start_time):
    print("starting at: %f" % time.time())
    time.sleep(3)  # simulate a task taking 3 seconds
    print("Done at: %f" % time.time())
    # Schedule the next iteration relative to the intended start time,
    # not the actual one, so the schedule does not drift
    next_start_time = start_time + 5
    s.enterabs(next_start_time, 1, execute_something, argument=(next_start_time,))

next_start_time = round(time.time() + 5, -1)  # align to the next 10-second boundary
s.enterabs(next_start_time, 1, execute_something, argument=(next_start_time,))
print("Starting scheduler at: %f" % time.time())
s.run()
# Starting scheduler at: 1522031714.523436
# starting at: 1522031720.005633
# Done at: 1522031723.008825
# starting at: 1522031725.002102
# Done at: 1522031728.005263
# starting at: 1522031730.002157
# Done at: 1522031733.005365
# starting at: 1522031735.002160
# Done at: 1522031738.005370
Use time.sleep to wait until the desired time. Note that this is approximate; especially when the system is under high load, your process might not be woken in time. You can increase the process priority to improve your chances.
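For example, on Unix you could lower the process's nice value (a hedged sketch; actually raising priority normally requires elevated privileges):
import os

try:
    os.nice(-5)  # a negative increment raises priority on Unix
except PermissionError:
    print("raising priority requires elevated privileges")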
To avoid blocking the waiting thread, run the task in a separate thread, either by constructing a new thread for every task or using a (faster) thread pool, like this:
import concurrent.futures
import time

def do_something():  # Replace this with your real code
    # This code outputs the time and then simulates work taking between 0 and 10 seconds
    import datetime
    import random
    print(datetime.datetime.utcnow())
    time.sleep(10 * random.random())

pool = concurrent.futures.ThreadPoolExecutor()

while True:
    now = time.time()
    time.sleep(5 - now % 5)
    pool.submit(do_something)
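The thread-per-task alternative mentioned above would look much the same (a sketch reusing do_something from the pool version):
import threading
import time

while True:
    time.sleep(5 - time.time() % 5)  # sleep until the next 5-second boundary
    threading.Thread(target=do_something).start()  # one new thread per task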
I am using the schedule module to automatically run a function...
I am trying to change the scheduling time dynamically, but my solution does not work.
Code -
import schedule
import pandas
from time import gmtime, strftime, sleep
import time
import random

time = 0.1

def a():
    global time
    print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))
    index = random.randint(1, 9)
    print(index, time)
    if(index==2):
        time = 1
        print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))

schedule.every(time).minutes.do(a)  # specify the minutes to automatically run the api

while True:
    schedule.run_pending()
In this program, I scheduled the function to run every 6 seconds (0.1 minutes). If the random integer index becomes 2, then the time variable is set to 1 (1 minute). I checked that the time variable does change to 1 after index becomes 2. The issue: after changing the time variable to 1, the scheduler still runs a() every 6 seconds, not every 1 minute.
How to change the scheduling time dynamically?
Thank you
After changing the time variable to 1, the scheduling still runs the function a() every 6 seconds not 1 minute.
This is because schedule.every(time).minutes.do(a) reads the value of time only once, when that line executes and the job is registered. At that moment time was 0.1 (6 seconds), and the job's interval stays fixed afterwards no matter how you change the variable.
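You can see this by inspecting the job object that do() returns; its interval attribute keeps the value captured at registration (a small sketch using the question's function a):
import schedule

time = 0.1
job = schedule.every(time).minutes.do(a)  # a as defined in the question
time = 1  # rebinding the variable has no effect on the existing job
print(job.interval)  # still 0.1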
How to change the scheduling time dynamically?
After reading the documentation, I found nothing (I think) about changing the interval manually when a certain condition is satisfied, but it does have a built-in random-interval feature where the library itself picks a random interval within a range.
In your case you could do:
schedule.every(5).to(10).seconds.do(a)
The problem remains that you cannot change the interval when a certain condition is satisfied.
There might be some way around that issue, but I could not figure it out. Still, this information may help you investigate further.
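One workaround that should work (a sketch, not something from the documentation): when the condition fires, register a new job with the new interval and cancel the current one by returning schedule.CancelJob:
import random
import schedule
from time import gmtime, strftime, sleep

def a():
    print(strftime("%Y-%m-%d %H:%M:%S", gmtime()))
    if random.randint(1, 9) == 2:
        schedule.every(1).minutes.do(a)  # re-register with the new interval
        return schedule.CancelJob        # removes the old 6-second job

schedule.every(0.1).minutes.do(a)

while True:
    schedule.run_pending()
    sleep(1)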
I usually use custom schedulers, as they allow greater control and are also less memory intensive. The variable "time" needs to be shared between processes. This is where Manager().Namespace() comes to the rescue: it shares state between processes.
import time
import random
from multiprocessing import Process, Manager

ns = Manager().Namespace()
ns.time = 0.1
processes = []

def a():
    print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime()))
    index = random.randint(1, 4)
    if(index==2):
        ns.time = 1
    print(index, ns.time)

while True:
    try:
        s = time.time() + ns.time*60
        for x in processes:
            if not x.is_alive():
                x.join()
                processes.remove(x)
        print('Sleeping :', round(s - time.time()))
        time.sleep(round(s - time.time()))
        p = Process(target=a)
        p.start()
        processes.append(p)
    except:
        print('Killing all to prevent orphaning ...')
        [p.terminate() for p in processes]
        [processes.remove(p) for p in processes]
        break
I'm currently trying to have a function called every 10ms to acquire data from a sensor.
Basically I was triggering the callback from a GPIO interrupt, but I changed my sensor and the one I'm currently using doesn't have an INT pin to drive the callback.
So my goal is to have the same behavior but with an internal interrupt generated by a timer.
I tried this from this topic
import threading
import time

def work():
    threading.Timer(0.25, work).start()
    print(time.time())
    print("stackoverflow")

work()
But when I run it, I can see that the timer is not really precise and deviates over time, as shown below.
1494418413.1584847
stackoverflow
1494418413.1686869
stackoverflow
1494418413.1788757
stackoverflow
1494418413.1890721
stackoverflow
1494418413.1992736
stackoverflow
1494418413.2094712
stackoverflow
1494418413.2196639
stackoverflow
1494418413.2298684
stackoverflow
1494418413.2400634
stackoverflow
1494418413.2502584
stackoverflow
1494418413.2604961
stackoverflow
1494418413.270702
stackoverflow
1494418413.2808678
stackoverflow
1494418413.2910736
stackoverflow
1494418413.301277
stackoverflow
So the timer is deviating by 0.2 milliseconds every 10 milliseconds, which is quite a big bias after a few seconds.
I know that Python is not really made for "real-time" work, but I think there should be a way to do it.
If someone has already had to handle time constraints with Python, I would be glad to have some advice.
Thanks.
This code works on my laptop - it logs the delta between target and actual time - the main thing is to minimise what is done in the work() function, because e.g. printing and scrolling the screen can take a long time.
The key thing is to start the next timer based on the difference between the time when that call is made and the target.
I slowed the interval down to 0.1s so it is easier to see the jitter, which on my Win7 x64 can exceed 10ms; that would cause problems by passing a negative value to the Timer() call :-o
This logs 100 samples, then prints them - if you redirect the output to a .csv file you can load it into Excel to display graphs.
from multiprocessing import Queue
import threading
import time

# this accumulates a record of the difference between the target and actual times
actualdeltas = []

INTERVAL = 0.1

def work(queue, target):
    # first thing to do is record the jitter - the difference between target and actual time
    actualdeltas.append(time.clock() - target + INTERVAL)
#    t0 = time.clock()
#    print("Current time\t" + str(time.clock()))
#    print("Target\t" + str(target))
#    print("Delay\t" + str(target - time.clock()))
#    print()
#    t0 = time.clock()
    if len(actualdeltas) > 100:
        # print the accumulated deltas then exit
        for d in actualdeltas:
            print(d)
        return
    threading.Timer(target - time.clock(), work, [queue, target + INTERVAL]).start()

myQueue = Queue()

target = time.clock() + INTERVAL
work(myQueue, target)
Typical output (i.e. don't rely on millisecond timing on Windows in Python):
0.00947008617187
0.0029628920052
0.0121824719378
0.00582923077099
0.00131316206917
0.0105631524709
0.00437298744466
-0.000251418553351
0.00897956530515
0.0028528821332
0.0118192949105
0.00546301269675
0.0145723546788
0.00910063698529
I tried your solution but I got strange results.
Here is my code:
from multiprocessing import Queue
import threading
import time

def work(queue, target):
    t0 = time.clock()
    print("Target\t" + str(target))
    print("Current time\t" + str(t0))
    print("Delay\t" + str(target - t0))
    print()
    threading.Timer(target - t0, work, [queue, target + 0.01]).start()

myQueue = Queue()

target = time.clock() + 0.01
work(myQueue, target)
And here is the output
Target 0.054099
Current time 0.044101
Delay 0.009998
Target 0.064099
Current time 0.045622
Delay 0.018477
Target 0.074099
Current time 0.046161
Delay 0.027937999999999998
Target 0.084099
Current time 0.0465
Delay 0.037598999999999994
Target 0.09409899999999999
Current time 0.046877
Delay 0.047221999999999986
Target 0.10409899999999998
Current time 0.047211
Delay 0.05688799999999998
Target 0.11409899999999998
Current time 0.047606
Delay 0.06649299999999997
So we can see that the target increases by 10ms each time, and for the first loop the delay for the timer seems to be good.
The point is that instead of starting again at current_time + delay, it starts again at 0.045622, which represents a delay of 0.001521 instead of 0.01000.
Did I miss something? My code seems to follow your logic, doesn't it?
Working example for #Chupo_cro
Here is my working example
from multiprocessing import Queue
import RPi.GPIO as GPIO
import threading
import time
import os

INTERVAL = 0.01
ledState = True

GPIO.setmode(GPIO.BCM)
GPIO.setup(2, GPIO.OUT, initial=GPIO.LOW)

def work(queue, target):
    global ledState  # must be declared before use, or Python 3 raises a SyntaxError
    try:
        threading.Timer(target - time.time(), work, [queue, target + INTERVAL]).start()
        GPIO.output(2, ledState)
        ledState = not ledState
    except KeyboardInterrupt:
        GPIO.cleanup()

try:
    myQueue = Queue()
    target = time.time() + INTERVAL
    work(myQueue, target)
except KeyboardInterrupt:
    GPIO.cleanup()
I've found that numpy.fft.fft (and its variants) is very slow when run in the background. Here is an example of what I'm talking about:
import numpy as np
import multiprocessing as mproc
import time
import sys

# the producer function, which will run in the background and produce data
def Producer(dataQ):
    numFrames = 5
    n = 0
    while n < numFrames:
        data = np.random.rand(3000, 200)
        dataQ.put(data)  # send the data to the consumer
        time.sleep(0.1)  # sleep for 0.1 second, so we don't overload the CPU
        n += 1

# the consumer function, which will run in the background and consume data from the producer
def Consumer(dataQ):
    while True:
        data = dataQ.get()
        t1 = time.time()
        fftdata = np.fft.rfft(data, n=3000*5)
        tDiff = time.time() - t1
        print("Elapsed time is %0.3f" % tDiff)
        time.sleep(0.01)
        sys.stdout.flush()

# the if __name__ == '__main__': guard is necessary so this code runs only
# when the program is started by the user, not when child processes import it
if __name__ == '__main__':
    data = np.random.rand(3000, 200)
    t1 = time.time()
    fftdata = np.fft.rfft(data, n=3000*5, axis=0)
    tDiff = time.time() - t1
    print("Elapsed time is %0.3f" % tDiff)

    # generate a queue for transferring data between the producer and the consumer
    dataQ = mproc.Queue(4)

    # start up the processes
    producerProcess = mproc.Process(target=Producer, args=[dataQ], daemon=False)
    consumerProcess = mproc.Process(target=Consumer, args=[dataQ], daemon=False)

    print("starting up processes")
    producerProcess.start()
    consumerProcess.start()

    time.sleep(10)  # let the program run for 10 seconds

    producerProcess.terminate()
    consumerProcess.terminate()
The output it produces on my machine:
Elapsed time is 0.079
starting up processes
Elapsed time is 0.859
Elapsed time is 0.861
Elapsed time is 0.878
Elapsed time is 0.863
Elapsed time is 0.758
As you can see, it is roughly 10x slower when run in the background, and I can't figure out why this would be the case. The time.sleep() calls should ensure that the other processes (the main process and the producer process) aren't doing anything while the FFT is being computed, so it should be able to use all the cores. I've checked CPU utilization through Windows Task Manager and it seems to sit at about 25% when numpy.fft.fft is called heavily, in both the single-process and multiprocess cases.
Anyone have an idea what's going on?
The main problem is that your fft call in the background process is:
fftdata = np.fft.rfft(data, n=3000*5)
rather than:
fftdata = np.fft.rfft(data, n=3000*5, axis=0)
which for me made all the difference.
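Without axis=0 the transform runs along the last axis, so numpy performs 3000 transforms of the 200-point rows (each zero-padded to 15000 points) instead of 200 transforms of the 3000-point columns - far more work for the same n. A quick sketch of the difference:
import numpy as np

data = np.random.rand(3000, 200)
# default axis=-1: 3000 transforms of length 200, each zero-padded to 15000 points
print(np.fft.rfft(data, n=3000*5).shape)          # (3000, 7501)
# axis=0: 200 transforms of length 3000, each zero-padded to 15000 points
print(np.fft.rfft(data, n=3000*5, axis=0).shape)  # (7501, 200)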
There are a few other things worth noting. Rather than having the time.sleep() calls everywhere, why not just let the processor take care of this itself? Furthermore, rather than suspending the main thread, you can use
consumerProcess.join()
and then have the producer process run dataQ.put(None) once it has finished loading the data, and break out of the loop in the consumer process, i.e.:
def Consumer(dataQ):
    while True:
        data = dataQ.get()
        if data is None:
            break
        ...
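The matching producer-side change would look something like this (a sketch of the suggestion above):
def Producer(dataQ):
    for _ in range(5):
        dataQ.put(np.random.rand(3000, 200))
    dataQ.put(None)  # sentinel: tells the consumer to break out of its loop
With that in place, the main block can join both processes instead of sleeping and terminating them.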
Running it on Ubuntu 14 with Python 2.7.6
I simplified the script to show my problem:
import time
import multiprocessing

data = range(1, 3)
start_time = time.clock()

def lol():
    for i in data:
        print time.clock() - start_time, "lol seconds"

def worker(n):
    print time.clock() - start_time, "multiprocesor seconds"

def mp_handler():
    p = multiprocessing.Pool(1)
    p.map(worker, data)

if __name__ == '__main__':
    lol()
    mp_handler()
And the output:
8e-06 lol seconds
6.9e-05 lol seconds
-0.030019 multiprocesor seconds
-0.029907 multiprocesor seconds
Process finished with exit code 0
Using time.time() gives non-negative values (as noted here: Timer shows negative time elapsed),
but I'm curious what the problem is with time.clock() in Python multiprocessing and reading time from the CPU.
multiprocessing spawns new processes, and time.clock() on Linux has the same meaning as C's clock():
The value returned is the CPU time used so far as a clock_t;
So the value returned by clock() restarts from 0 when a process starts. However, your code uses the parent process's start_time to determine the time spent in the child process, which is obviously incorrect if the child process's CPU time resets.
The clock() function makes sense only when handling one process, because its return value is the CPU time spent by that process. Child processes are not taken into account.
The time() function on the other hand uses a system-wide clock, and thus can be used even between different processes (although it is not monotonic, so it might return wrong results if somebody changes the system time during the events).
Forking a running Python instance is probably faster than starting a new one from scratch, hence start_time is almost always bigger than the value returned by time.clock() in the child.
Take into account that the parent process also had to read your file on disk, perform the imports which may require reading other .py files, searching directories etc.
The forked child processes don't have to do all that.
Example code that shows that the return value of time.clock() resets to 0:
from __future__ import print_function

import time
import multiprocessing

data = range(1, 3)
start_time = time.clock()

def lol():
    for i in data:
        t = time.clock()
        print('t: ', t, end='\t')
        print(t - start_time, "lol seconds")

def worker(n):
    t = time.clock()
    print('t: ', t, end='\t')
    print(t - start_time, "multiprocesor seconds")

def mp_handler():
    p = multiprocessing.Pool(1)
    p.map(worker, data)

if __name__ == '__main__':
    print('start_time', start_time)
    lol()
    mp_handler()
Result:
$python ./testing.py
start_time 0.020721
t: 0.020779 5.8e-05 lol seconds
t: 0.020804 8.3e-05 lol seconds
t: 0.001036 -0.019685 multiprocesor seconds
t: 0.001166 -0.019555 multiprocesor seconds
Note how t is monotonic in the lol case, while it goes back to 0.001 in the other case.
To add a concise Python 3 example to Bakuriu's excellent answer above, you can use the following method to get a global timer independent of the subprocesses:
import multiprocessing as mp
import time

# create an iterable
iterable = range(4)

# adds three to the given element
def add_3(num):
    a = num + 3
    return a

# multiprocessing attempt
def main():
    pool = mp.Pool(2)
    results = pool.map(add_3, iterable)
    return results

if __name__ == "__main__":  # required not to spawn deviant children
    start = time.time()
    results = main()
    print(list(results))
    elapsed = time.time() - start
    print("\n", "time elapsed is :", elapsed)
Note that if we had used time.process_time() instead of time.time(), we would get an undesired result.
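That is because time.process_time() counts only the current process's CPU time, so work done inside the pool's child processes (and any time spent sleeping or waiting) is invisible to it. A small sketch to illustrate:
import time

start = time.process_time()
time.sleep(2)  # sleeping uses no CPU in this process; child-process work would likewise not count
print(time.process_time() - start)  # prints a value close to 0, not 2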
This might be incredibly easy, but how do I get Python code to loop every x minutes between the times y and z?
For example if I wanted my script to run between midnight (00:00) through to 10 pm (22:00) looping every 5 minutes.
Try the sched module in the standard library. Here's an example of calling a function once per second, starting five seconds in the future, and ending ten seconds in the future:
from sched import scheduler
from time import time, sleep

s = scheduler(time, sleep)

def run_periodically(start, end, interval, func):
    event_time = start
    while event_time < end:
        s.enterabs(event_time, 0, func, ())
        event_time += interval
    s.run()

if __name__ == '__main__':
    def say_hello():
        print('hello')

    run_periodically(time() + 5, time() + 10, 1, say_hello)
Alternatively, you can work with threading.Timer, but you need to do a little more work to get it to start at a given time, run every five minutes, and stop at a fixed time.
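A rough shape for that threading.Timer approach might be (a sketch under the same assumptions, not a drop-in solution):
import threading
import time

def run_between(start, end, interval, func):
    def tick():
        if time.time() < end:
            threading.Timer(interval, tick).start()  # re-arm before doing the work
            func()
    # wait until the start time, then begin ticking
    threading.Timer(max(0, start - time.time()), tick).start()

run_between(time.time() + 5, time.time() + 10, 1, lambda: print('hello'))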