I've got a Python loop that should run every minute: do some data processing, then sleep until the next minute is up. However, the processing takes a variable amount of time: sometimes it's close to zero when there is not much to do, sometimes it takes 10 or even 20 seconds.
To compensate for that, I measure the time the processing takes, like this:
while True:
    time_start = time.time()
    do_something()  # <-- This takes unknown time
    time_spent = time.time() - time_start
    time.sleep(60 - time_spent)
It kind of works, but over a couple of days it still drifts by a number of seconds. I guess it happens when the computer (a small Raspberry Pi) is busy and delays the start of the loop, and then it all starts slipping away.
I don't need do_something() executed exactly every minute, so there's no need for a real-time OS or anything like that, but I don't want one delayed start to affect all the subsequent ones either.
Is there some kind of scheduler that can run my function at a predefined rate? Or some more clever way to compensate for the occasional delays?
Playing with the loop a little, this seems to work quite well. The trick is to record the start time once before the loop starts, not on every iteration. That way one delayed start won't affect any future ones.
import random
import time
from datetime import datetime

rate_sec = 60
time_start = time.time()
while True:
    print("{}".format(datetime.now()))
    # Emulate a processing time of 0 s to 20 s
    time.sleep(random.randint(0, 20))
    # Sleep until the next 'rate_sec' multiple
    delay = rate_sec - (time.time() - time_start) % rate_sec
    time.sleep(delay)
Is sleeping a prerequisite of your project? I mean, you don't need to keep your process blocked if you just want to run the task every ~1 minute.
Since you are on a Raspberry Pi, you can (and probably should) use crontab.
This gives you the most flexibility and means the computer isn't stuck in an idle sleep loop.
To do this, open your crontab with
crontab -e
and add the following entry:
* * * * * /usr/bin/env python3 /path/to/script.py
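If you want to see what the script printed (for debugging), one option is to redirect its output to a log file in the crontab entry; the log path here is only an example:
# Same schedule, but append stdout/stderr to a log file (example path)
* * * * * /usr/bin/env python3 /path/to/script.py >> /tmp/script.log 2>&1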
Related
I'm using multithreading to download videos from a website; however, sometimes if a video is too small, the program starts the next thread too quickly and the server blocks my request.
I don't want to use a plain time.sleep, because that would slow down requests that don't need to be slowed down.
So basically I need a way to enforce a minimum execution time, like this pseudocode:
minimum_time = 20 seconds
if the thread completed faster than minimum_time:
    wait until minimum_time has been reached
Without trying anything overly complex, you could (a) start a timer when the thread starts and (b) when the thread finishes, sleep only for the remainder of the time needed to reach the minimum duration:
start = ...
<threading code>
duration = now() - start
if duration < minimum:
    time.sleep(minimum - duration)
Which is basically a slightly less-pseudo version of the pseudocode in your question.
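For completeness, here is a minimal runnable sketch of the same idea inside a worker thread; download_video and MINIMUM_SECONDS are placeholders for your real download logic and threshold:
import threading
import time

MINIMUM_SECONDS = 20  # example minimum duration per download

def download_video(url):
    # Placeholder for the real download logic
    pass

def worker(url):
    start = time.monotonic()
    download_video(url)
    elapsed = time.monotonic() - start
    if elapsed < MINIMUM_SECONDS:
        # Finished early: pad out the rest so the thread never
        # completes in less than MINIMUM_SECONDS
        time.sleep(MINIMUM_SECONDS - elapsed)

t = threading.Thread(target=worker, args=("http://example.com/video",))
t.start()
t.join()
Using time.monotonic() instead of time.time() keeps the measurement immune to system clock adjustments.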
I've written a Python script that automates my daily schedule for me, like opening certain apps at a certain time and a few other similar functionalities. I was wondering if I could run this script every time my Windows laptop wakes up from sleep. I've seen other questions that talk about adding it to the startup folder, but that only runs when I fully shut down and restart the laptop. Instead I want it to run when the laptop wakes up from sleep, because I only very occasionally fully shut it down. Any ideas on how to do this? Thanks!
When the computer is put to sleep, it pauses the current processes. One way to check whether your process has been paused is to keep checking the time. In the following script, the loop sleeps for 1 second and then checks the time difference; if the difference is more than 10 seconds, it assumes the machine was put to sleep and has just woken up.
import time
import datetime

curtime = datetime.datetime.now()
while True:
    time.sleep(1)
    diff = (datetime.datetime.now() - curtime).total_seconds()
    print(diff)
    if diff > 10:
        print("....... I'm Awake .......")
    curtime = datetime.datetime.now()
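If you want the detection loop to actually launch your automation script when it detects a wake-up, rather than just print a message, one option is to start it with subprocess; the script path below is only a placeholder:
import datetime
import subprocess
import time

SCRIPT = r"C:\path\to\daily_schedule.py"  # placeholder: your automation script

curtime = datetime.datetime.now()
while True:
    time.sleep(1)
    diff = (datetime.datetime.now() - curtime).total_seconds()
    if diff > 10:
        # A large gap between iterations: assume the machine just woke from sleep
        subprocess.Popen(["python", SCRIPT])
    curtime = datetime.datetime.now()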
I'm trying to program two devices: the first by calling an application and manually clicking on program, and the second by calling a batch file and waiting for it to finish. I need each iteration of this loop to take 30 s so both devices can be programmed.
I've tried recording the time when the iteration starts and the time at the end of programming the second device, then calling time.sleep(30 - total time taken). This results in an execution time of slightly longer than 30 s per iteration.
for i in range(48):
    t1 = time.time()
    # Program the 1st board by calling the app from Python and clicking it using Python.
    # Wait a static number of seconds (s), as there is no feedback from this app.
    # Program the 2nd board by calling a batch file.
    # This gives feedback, as the code does not move to the next line until the
    # batch file has finished.
    t2 = time.time()
    time.sleep(30 - (t2 - t1))
    # some other code
Actual results: a little over 30 seconds.
Expected results: exactly 30 seconds.
Is this because of scheduling in Python?
This is a result of scheduling in your operating system. When a process relinquishes the processor by calling sleep, there is no guarantee that it will wake up after the elapsed time requested in the call to sleep. Depending on how busy the system is, it could be delayed by a little, or it could be delayed by a lot.
If you have hard timing requirements, you need a realtime operating system.
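If an average rate of exactly 30 s is enough (rather than each individual iteration being exact), you can adapt the anchored-start-time trick from the first answer above so the small oversleeps don't accumulate. A minimal sketch, with the board-programming steps left as placeholders:
import time

PERIOD = 30  # target seconds per iteration
start = time.time()

for i in range(48):
    # ... program the 1st board ...
    # ... program the 2nd board via the batch file ...

    # Sleep until the next multiple of PERIOD measured from 'start',
    # so one slow iteration doesn't push all the later ones back
    delay = PERIOD - (time.time() - start) % PERIOD
    time.sleep(delay)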
I need to count the number of seconds that have passed between the execution of some code on a Raspberry Pi. Normally I'd do it as follows in Python:
start = time.time()
execute_my_function()
end = time.time()
elapsed = end - start
However, the Raspberry Pi doesn't include an RTC and instead relies on NTP. This means that for the first little while after booting, the system time is January 1, 1970, and so the difference between "end" and "start" often becomes about 47 years.
How do I measure the elapsed time in seconds if the system time is unreliable (from what I can gather, the "timeit" module relies on "time" and thus won't work either)? It doesn't have to be completely accurate--a second or two too much or too little is fine.
Edit: I've made a sort of hack where I read /proc/uptime, which I believe is independent of the system time, but it feels like a dirty workaround. I'm hoping there is a somewhat less OS-dependent solution.
You could have your program wait until time.time() returns a meaningful value before taking the first timestamp:
while time.time() < 1e6:
    time.sleep(10)
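Putting that together with the measurement from the question gives a minimal sketch like the one below; the 10-second poll interval is arbitrary:
import time

# Wait until NTP has set the clock; before that, time.time() is still
# within a couple of weeks of the 1970 epoch (1e6 s is about 12 days)
while time.time() < 1e6:
    time.sleep(10)

start = time.time()
execute_my_function()   # the function being timed, from the question
end = time.time()
elapsed = end - start
As an aside, time.monotonic() (Python 3.3+) measures elapsed time against a clock that is not affected by system clock updates, which also sidesteps the problem for pure duration measurements.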
I have a Python program that I want to run every 10 seconds, just like a cron job. I cannot just use sleep in a loop because the time interval would become uncertain. The way I am doing it now is like this:
from threading import Thread
import time

interval = 10.0
next = time.time()
while True:
    now = time.time()
    if now < next:
        time.sleep(next - now)
    t = Thread(target=control_lights)
    t.start()  # start a thread
    next += interval
It generates a new thread that executes the control_lights function. The problem is that, as time goes on, the number of Python processes grows and takes up memory/CPU. Is there a good way to do this? Thanks a lot.
Maybe try using supervisord or god for this script? They are very simple to use and make it easy to control a number of your processes on a UNIX-like operating system.
Take a look at a program called The Fat Controller, which is a scheduler similar to cron but with many more options. The interval can be measured from the end of the previous run (like a for loop) or regularly every x seconds, which I think is what you want. Particularly useful in this case is that you can tell The Fat Controller what to do if one of the processes takes longer than x seconds:
- run a new instance anyway (increasing parallel processes up to a specified maximum)
- wait for the previous one to finish
- kill the previous one and start a new one
There should be plenty of information in the documentation on how to get it set up.
You can run a cron-style job every 10 seconds: just set the second parameter to '0/10'. It will run at seconds 0, 10, 20, and so on.
#run every 10 seconds from mon-fri, between 8-17
CronTrigger(day_of_week='mon-fri', hour='8-17', second='0/10')
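For context, a CronTrigger that takes day_of_week, hour, and second parameters like this is APScheduler's. A minimal sketch wiring it up to the control_lights function from the question, here running every 10 seconds around the clock rather than only during working hours:
from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.triggers.cron import CronTrigger

def control_lights():
    # Placeholder for the real work from the question
    pass

scheduler = BlockingScheduler()
# Fire at second 0, 10, 20, ... of every minute
scheduler.add_job(control_lights, CronTrigger(second='0/10'))
scheduler.start()
By default APScheduler runs at most one instance of a job at a time, so a slow control_lights call causes the next run to be skipped rather than piling up extra threads.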