How do I establish a minimum time for thread completion?

I'm using multithreading to download videos from a website. However, when a video is very small, the program starts the next thread too quickly and the server blocks my request.
I don't want to use time.sleep because that would also slow down requests that don't need to be slowed down.
So basically I need a way to enforce a minimum execution time, like this:
Pseudocode
minimum time = 20 seconds
if thread completed faster than minimum time:
    wait until minimum time has been reached

Without trying something overly complex, you could (a) start a timer when the thread starts, and (b) when the thread finishes, sleep only for the remainder of the time needed to reach the minimum duration:
import time

start = time.time()
# <threading code>
duration = time.time() - start
if duration < minimum:
    time.sleep(minimum - duration)
Which is basically a slightly less-pseudo version of the pseudocode in your question.
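A slightly fuller sketch of the same idea, padding each worker to the minimum duration (download_video, the example URL, and the 20-second minimum are placeholders based on the question):
import time
import threading

MINIMUM = 20  # seconds; minimum wall-clock time per download, per the question

def download_with_minimum(url):
    start = time.monotonic()
    download_video(url)  # hypothetical stand-in for the real download code
    elapsed = time.monotonic() - start
    if elapsed < MINIMUM:
        time.sleep(MINIMUM - elapsed)

# Each worker thread now takes at least MINIMUM seconds to finish:
t = threading.Thread(target=download_with_minimum, args=("https://example.com/video",))
t.start()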

Related

Running Python function at defined intervals

I've got a Python loop that should run every minute, do some data processing and sleep until the next minute is up. However, the processing takes a variable amount of time: sometimes it's close to zero when there is not much to do, and sometimes it takes as much as 10 or 20 seconds.
To compensate for that I measure the time it takes to run the processing like this:
while True:
    time_start = time.time()
    do_something()  # <-- This takes unknown time
    time_spent = time.time() - time_start
    time.sleep(60 - time_spent)
It kind of works but over a couple of days it still drifts away by a number of seconds. I guess it happens when the computer (small Raspberry Pi) is busy and delays the start of the loop, then it all starts slipping away.
I don't need do_something() executed exactly every minute, so no need for a real-time OS or anything like that, but I don't want one delayed start to affect all the subsequent ones either.
Is there some kind of scheduler that can run my function at a predefined rate? Or some more clever way to compensate for the occasional delays?
Playing with the loop a little, this seems to work quite well. The trick is to record the start time once, before the loop starts, not on every iteration. That way one delayed start won't affect any future ones.
import random
import time
from datetime import datetime

rate_sec = 60
time_start = time.time()
while True:
    print("{}".format(datetime.now()))
    # Emulate a processing time of 0 to 20 seconds
    time.sleep(random.randint(0, 20))
    # Sleep until the next 'rate_sec' multiple
    delay = rate_sec - (time.time() - time_start) % rate_sec
    time.sleep(delay)
Is sleeping a prerequisite of your project? I mean, you don't need to keep your process blocked just to run the task every ~1 minute.
Since you are on a Raspberry Pi, you can (and probably should) use crontab.
This gives you the most flexibility and avoids keeping a Python process sleeping idle.
To do this, open your crontab with
crontab -e
and add the following entry:
* * * * * /usr/bin/env python3 /path/to/script.py
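Note that with cron doing the scheduling, script.py no longer needs its own loop or sleep: it does one pass of the processing and exits, and cron starts it again a minute later. A minimal sketch (do_something stands in for the processing from the question):
#!/usr/bin/env python3
# /path/to/script.py -- cron invokes this once per minute, so no loop or sleep is needed.

def do_something():
    ...  # the per-minute data processing from the question

if __name__ == "__main__":
    do_something()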

How can I limit one iteration of a loop to a fixed time in seconds?

I'm trying to program two devices: the first by calling an application and manually clicking Program, and the second by calling a batch file and waiting for it to finish. I need each iteration of this loop to take 30 s so both devices can be programmed.
I've tried recording the time when the iteration starts and the time when programming of the second device ends, then calling time.sleep(30 - total time taken). However, each iteration still takes slightly longer than 30 s.
for i in range(48):
    t1 = time.time()
    # Program 1st board by calling the app from Python and clicking it using Python.
    # Wait a static number of seconds (s) as there is no feedback from this app.
    # Program 2nd board by calling a batch file.
    # This gives feedback as the code does not move to the next line until the
    # batch file is finished.
    t2 = time.time()
    time.sleep(30 - (t2 - t1))
    # some other code
Actual results: a little over 30 seconds.
Expected results: exactly 30 seconds.
Is this because of scheduling in Python?
This is a result of scheduling in your operating system. When a process relinquishes the processor by calling sleep, there is no guarantee that it will wake up after the elapsed time requested in the call to sleep. Depending on how busy the system is, it could be delayed by a little, or it could be delayed by a lot.
If you have hard timing requirements, you need a realtime operating system.
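You can't make sleep wake up exactly on schedule, but you can stop the overshoot from accumulating by sleeping until an absolute deadline instead of sleeping for a measured remainder. A minimal sketch along those lines (the 30-second period and 48 iterations are taken from the question):
import time

PERIOD = 30  # seconds per iteration, per the question

start = time.monotonic()
for i in range(48):
    # ... program both boards here ...
    # Sleep until the next 30-second boundary measured from the fixed start time,
    # so one slow iteration or a late wake-up does not shift all later ones.
    remaining = start + (i + 1) * PERIOD - time.monotonic()
    if remaining > 0:
        time.sleep(remaining)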

Measure time with Python when device's epoch time is unreliable

I need to count the number of seconds that have passed between the execution of some code on a Raspberry Pi. Normally I'd do it as follows in Python:
start = time.time()
execute_my_function()
end = time.time()
elapsed = end - start
However, the Raspberry Pi doesn't include an RTC and instead relies on NTP. This means that for the first little while after booting, the system time is January 1, 1970, and so the difference between "end" and "start" often becomes about 47 years.
How do I measure the elapsed time in seconds if the system time is unreliable (from what I can gather, the "timeit" module relies on "time" and thus won't work either)? It doesn't have to be completely accurate--a second or two too much or too little is fine.
Edit: I've made a sort of hack where I read /proc/uptime which I believe is independent of the system time, but I kind of feel dirty this way. I'm hoping there is a somewhat less OS dependent solution.
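For reference, a minimal sketch of that hack (the helper name is illustrative; the first field of /proc/uptime is the number of seconds since boot, independent of the system clock):
def uptime_seconds():
    with open("/proc/uptime") as f:
        # /proc/uptime holds two numbers; the first is seconds since boot.
        return float(f.read().split()[0])

start = uptime_seconds()
execute_my_function()
elapsed = uptime_seconds() - start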
You could have your program wait until start_time has a meaningful value:
while time.time() < 1e6:
    time.sleep(10)

How to advance the clock and go through all the events

Reading this answer (point 2) to a question related to Twisted's task.Clock for testing purposes, I found it very weird that there is no way to advance the clock from t0 to t1 while catching all the callLater calls scheduled between t0 and t1.
Of course, you could solve this problem by doing something like:
clock = task.Clock()
reactor.callLater = clock.callLater
...

def advance_clock(total_elapsed, step=0.01):
    elapsed = 0
    while elapsed < total_elapsed:
        clock.advance(step)
        elapsed += step

...
time_to_advance = 10  # seconds
advance_clock(time_to_advance)
But then we have shifted the problem toward choosing a sufficiently small step, which could be very tricky for callLater calls that sample the time from a probability distribution, for instance.
Can anybody think of a solution to this problem?
I found it very weird that there is no way to advance the clock from t0 to t1 while catching all the callLater calls scheduled between t0 and t1.
Based on what you wrote later in your question, I'm going to suppose that the case you're pointing out is the one demonstrated by the following example program:
from twisted.internet.task import Clock

def foo(reactor, n):
    if n == 0:
        print "Done!"
    reactor.callLater(1, foo, reactor, n - 1)

reactor = Clock()
foo(reactor, 10)
reactor.advance(10)
One might expect this program to print Done! but it does not. If the last line is replaced with:
for i in range(10):
    reactor.advance(1)
Then the resulting program does print Done!.
The reason Clock works this way is that it's exactly the way real clocks work. As far as I know, there are no computer clocks that operate with a continuous time system. I won't say it is impossible to implement a timed-event system on top of a clock with discrete steps such that it appears to offer a continuous flow of time - but I will say that Twisted makes no attempt to do so.
The only real difference between Clock and the real reactor implementations is that with Clock you can make the time-steps much larger than you are likely to encounter in typical usage of a real reactor.
However, it's quite possible for a real reactor to get into a situation where a very large chunk of time all passes in one discrete step. This could be because the system clock changes (there's some discussion of making it possible to schedule events independent of the system clock so that this case goes away) or it could be because some application code blocked the reactor for a while (actually, application code always blocks the reactor! But in typical programs it only blocks it for a period of time short enough for most people to ignore).
Giving Clock a way to mimic these large steps makes it possible to write tests for what your program does when one of these cases arises. For example, perhaps you really care that, when the kernel decides not to schedule your program for 2.1 seconds because of a weird quirk in the Linux I/O elevator algorithm, your physics engine nevertheless computes 2.1 seconds of physics even though 420 calls of your 200Hz simulation loop have been skipped.
It might be fair to argue that the default (standard? only?) time-based testing tool offered by Twisted should be somewhat more friendly towards the common case... Or not. Maybe that would encourage people to write programs that only work in the common case and break in the real world when the uncommon (but, ultimately, inevitable) case arises. I'm not sure.
Regarding Mike's suggestion to advance exactly to the next scheduled call, you can do this easily and without hacking any internals. clock.advance(clock.getDelayedCalls()[0].getTime() - clock.seconds()) will do exactly this (perhaps you could argue Clock would be better if it at least offered an obvious helper function for this to ease testing of the common case). Just remember that real clocks do not advance like this so if your code has a certain desirable behavior in your unit tests when you use this trick, don't be fooled into thinking this means that same desirable behavior will exist in real usage.
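If you do want that trick as a small helper, it might look like this (the name advance_to_next_call is just illustrative, and it assumes at least one call is pending):
def advance_to_next_call(clock):
    # Jump the test clock exactly to the time of the earliest pending IDelayedCall.
    calls = clock.getDelayedCalls()
    clock.advance(min(c.getTime() for c in calls) - clock.seconds())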
Given that the typical use case for Twisted is to mix hardware events and timers, I'm confused why you would want to do this, but...
My understanding is that internally Twisted tracks callLater events via a number of lists inside the reactor object (see: http://twistedmatrix.com/trac/browser/tags/releases/twisted-15.2.0/twisted/internet/base.py#L437 - the xxxTimedCalls lists inside class ReactorBase).
I haven't done any work to figure out whether those lists are exposed anywhere, but if you want to take the reactor's life into your own hands I'm sure you could hack your way in.
With access to the timing lists you could simply forward time to whenever the next element of the list is due ... though if you're trying to test code that interacts with IO events, I can't imagine this is going to do anything but confuse you...
Best of luck
Here's a function that will advance the reactor to the next IDelayedCall by iterating over reactor.getDelayedCalls. This has the problem Mike mentioned of not catching IO events, so you can specify a minimum and maximum time that it should wait, as well as a maximum time step.
def advance_through_delayeds(reactor, min_t=None, max_t=None, max_step=None):
    elapsed = 0
    while True:
        if max_t is not None and elapsed >= max_t:
            break
        try:
            step = min(d.getTime() - reactor.seconds() for d in reactor.getDelayedCalls())
        except ValueError:
            # nothing else pending
            if min_t is not None and elapsed < min_t:
                step = min_t - elapsed
            else:
                break
        if max_step is not None:
            step = min(step, max_step)
        if max_t is not None:
            step = min(step, max_t - elapsed)
        reactor.advance(step)
        elapsed += step
    return elapsed
If you need to wait for some I/O to complete, then set min_t and max_step to reasonable values.
# wait at least 10s, advancing the reactor by no more than 0.1s at a time
advance_through_delayeds(reactor, min_t=10, max_step=0.1)
If min_t is set, it will exit once getDelayedCalls returns an empty list after that time is reached.
It's probably a good idea to always set max_t to a sane value to prevent the test suite from hanging. For example, on the above foo function by JPC it does reach the print "Done!" statement, but then would hang forever as the callback chain never completes.

Python Cron job on Ubuntu

I have a Python program that I want to run every 10 seconds, just like a cron job. I cannot use sleep in a loop because the time interval would become uncertain. The way I am doing it now is like this:
interval = 10.0
next = time.time()
while True:
    now = time.time()
    if now < next:
        time.sleep(next - now)
    t = Thread(target=control_lights)
    t.start()  # start a thread
    next += interval
It generates a new thread that executes the control_lights function. The problem is that, as time goes on, the number of Python processes grows and eats up memory/CPU. Is there any good way to do this? Thanks a lot.
Maybe try using supervisord or god for this script? They are very simple to use and let you control a number of your processes on a UNIX-like operating system.
Take a look at a program called The Fat Controller, which is a scheduler similar to cron but with many more options. The interval can be measured from the end of the previous run (like a for loop) or regularly every x seconds, which I think is what you want. Particularly useful in this case is that you can tell The Fat Controller what to do if one of the processes takes longer than x seconds:
run a new instance anyway (increase parallel processes up to a specified maximum)
wait for the previous one to finish
kill the previous one and start a new one
There should be plenty of information in the documentation on how to get it set up.
You can run a cron-style job every 10 seconds with APScheduler's CronTrigger: just set the second parameter to '0/10' and it will fire at 0, 10, 20, and so on.
#run every 10 seconds from mon-fri, between 8-17
CronTrigger(day_of_week='mon-fri', hour='8-17', second='0/10')
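For completeness, a fuller sketch assuming this is APScheduler's CronTrigger (import paths are APScheduler 3.x; control_lights stands in for the job from the question):
from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.triggers.cron import CronTrigger

def control_lights():
    ...  # the job from the question

scheduler = BlockingScheduler()
# Fire every 10 seconds, Mon-Fri, during hours 8-17
scheduler.add_job(control_lights, CronTrigger(day_of_week='mon-fri', hour='8-17', second='0/10'))
scheduler.start()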
