How would I stop a while loop after 5 minutes if it does not achieve what I want it to achieve?
while True:
    test = 0
    if test == 5:
        break
    test = test - 1
This code throws me in an endless loop.
Try the following:
import time

timeout = time.time() + 60*5   # 5 minutes from now
while True:
    test = 0
    if test == 5 or time.time() > timeout:
        break
    test = test - 1
You may also want to add a short sleep here so this loop is not hogging CPU (for example time.sleep(1) at the beginning or end of the loop body).
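For example, a minimal sketch of the loop above with a one-second sleep added at the end of the body (the sleep interval is just an illustrative choice):

import time

timeout = time.time() + 60*5   # 5 minutes from now
while True:
    test = 0
    if test == 5 or time.time() > timeout:
        break
    test = test - 1
    time.sleep(1)   # yield the CPU between checks instead of busy-waiting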
You do not need to use the while True: loop in this case. There is a much simpler way to use the time condition directly:
import time

# timeout variable can be omitted, if you use specific value in the while condition
timeout = 300   # [seconds]
timeout_start = time.time()
while time.time() < timeout_start + timeout:
    test = 0
    if test == 5:
        break
    test -= 1
Try this module: http://pypi.python.org/pypi/interruptingcow/
from interruptingcow import timeout

try:
    with timeout(60*5, exception=RuntimeError):
        while True:
            test = 0
            if test == 5:
                break
            test = test - 1
except RuntimeError:
    pass
Petr Krampl's answer is the best in my opinion, but more needs to be said about the nature of loops and how to optimize the use of the system. Beginners who happen upon this thread may be further confused by the logical and algorithmic errors in the question and existing answers.
First, let's look at what your code does as you originally wrote it:
while True:
    test = 0
    if test == 5:
        break
    test = test - 1
If you say while True in a loop context, normally your intention is to stay in the loop forever. If that's not your intention, you should consider other options for the structure of the loop. Petr Krampl showed you a perfectly reasonable way to handle this that's much more clear to someone else who may read your code. In addition, it will be more clear to you several months later should you need to revisit your code to add or fix something. Well-written code is part of your documentation. There are usually multiple ways to do things, but that doesn't make all of the ways equally valid in all contexts. while True is a good example of this, especially in this context.
Next, we will look at the algorithmic error in your original code. The very first thing you do in the loop is assign 0 to test. The very next thing you do is to check if the value of test is 5, which will never be the case unless you have multiple threads modifying the same memory location. Threading is not in scope for this discussion, but it's worth noting that the code could technically work, but even with multiple threads a lot would be missing, e.g. semaphores. Anyway, you will sit in this loop forever regardless of the fact that the sentinel is forcing an infinite loop.
The statement test = test - 1 is useless regardless of what it does because the variable is reset at the beginning of the next iteration of the loop. Even if you changed it to be test = 5, the loop would still be infinite because the value is reset each time. If you move the initialization statement outside the loop, then it will at least have a chance to exit. What you may have intended was something like this:
test = 0
while True:
    test = test - 1
    if test == 5:
        break
The order of the statements in the loop depends on the logic of your program. It will work in either order, though, which is the main point.
The next issue is the potential and probable logical error of starting at 0, continually subtracting 1, and then comparing with a positive number. Yes, there are occasions where this may actually be what you intend to do as long as you understand the implications, but this is most likely not what you intended. Newer versions of python will not wrap around when you reach the 'bottom' of the range of an integer like C and various other languages. It will let you continue to subtract 1 until you've filled the available memory on your system or at least what's allocated to your process. Look at the following script and the results:
test = 0
while True:
    test -= 1
    if test % 100 == 0:
        print("Test = %d" % test)
    if test == 5:
        print("Test = 5")
        break
which produces this:
Test = -100
Test = -200
Test = -300
Test = -400
...
Test = -21559000
Test = -21559100
Test = -21559200
Test = -21559300
...
The value of test will never be 5, so this loop will never exit.
To add to Petr Krampl's answer, here's a version that's probably closer to what you actually intended in addition to exiting the loop after a certain period of time:
import time

test = 0
timeout = 300   # [seconds]
timeout_start = time.time()
while time.time() < timeout_start + timeout:
    if test == 5:
        break
    test -= 1
It still won't break based on the value of test, but this is a perfectly valid loop with a reasonable initial condition. Further boundary checking could help you to avoid execution of a very long loop for no reason, e.g. check if the value of test is less than 5 upon loop entry, which would immediately break the loop.
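A sketch of that boundary check (the early exit is my own addition, building on the code above):

import time

test = 0
timeout = 300   # [seconds]
timeout_start = time.time()

# Counting down from a value below 5 can never reach 5, so skip the timed loop entirely.
if test < 5:
    print("test starts below 5; the countdown can never reach it")
else:
    while time.time() < timeout_start + timeout:
        if test == 5:
            break
        test -= 1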
One other thing should be mentioned that no other answer has addressed. Sometimes when you loop like this, you may not want to consume the CPU for the entire allotted time. For example, say you are checking the value of something that changes every second. If you don't introduce some kind of delay, you would use every available CPU cycle allotted to your process. That's fine if it's necessary, but good design will allow a lot of programs to run in parallel on your system without overburdening the available resources. A simple sleep statement will free up the vast majority of the CPU cycles allotted to your process so other programs can do work.
The following example isn't very useful, but it does demonstrate the concept. Let's say you want to print something every second. One way to do it would be like this:
import time

tCurrent = time.time()
while True:
    if time.time() >= tCurrent + 1:
        print("Time = %d" % time.time())
        tCurrent = time.time()
The output would be this:
Time = 1498226796
Time = 1498226797
Time = 1498226798
Time = 1498226799
The process CPU usage for that loop is huge for doing basically no work. This code is much nicer to the rest of the system:
import time

tCurrent = time.time()
while True:
    time.sleep(0.25)   # sleep for 250 milliseconds
    if time.time() >= tCurrent + 1:
        print("Time = %d" % time.time())
        tCurrent = time.time()
The output is the same:
Time = 1498226796
Time = 1498226797
Time = 1498226798
Time = 1498226799
and the CPU usage is way, way lower.
import time

abort_after = 5 * 60
start = time.time()
while True:
    delta = time.time() - start
    if delta >= abort_after:
        break
I want to share the one I am using:
import time

# provide a waiting-time list:
lst = [1, 2, 7, 4, 5, 6, 4, 3]
# set the timeout limit
timeLimit = 4

for i in lst:
    timeCheck = time.time()
    while True:
        time.sleep(i)
        if time.time() <= timeCheck + timeLimit:
            print([i, 'looks ok'])
            break
        else:
            print([i, 'too long'])
            break
Then you will get:
[1, 'looks ok']
[2, 'looks ok']
[7, 'too long']
[4, 'looks ok']
[5, 'too long']
[6, 'too long']
[4, 'looks ok']
[3, 'looks ok']
I have read this, but I just want to ask something: wouldn't something like what I have written work at all?
I have done the testing for 5, 10 and 20 seconds. The timing isn't exactly accurate, but the results are really close to the actual values.
import time

begin_time = 0
while begin_time < 5:
    begin_time += 1
    time.sleep(1.0)
print("The Work is Done")
I'm not a Python expert, but I wrote a small function to check for a timeout and break a while loop:
from datetime import datetime as dt, timedelta

# compare now against the given time; return True if it is older than delta
def is_time_older_than(time, delta):
    print(dt.utcnow() - time, delta)
    if (dt.utcnow() - time) > delta:
        return True
    return False

startTime = dt.utcnow()
while True:
    print("waiting")
    if is_time_older_than(startTime, timedelta(seconds=5)):
        break
You can record the time before execution and then pass it to the function as the starting time, along with a delta value in seconds (or 60*1 for a minute). It compares the difference and returns True or False, that's it.
Try the following:
from datetime import datetime, timedelta

end_time = datetime.now() + timedelta(minutes=5)
while True:
    current_time = datetime.now()
    if current_time >= end_time:   # >= rather than ==, which would almost never match exactly
        break
Old Thread, but I just want to share my re-usable solution using an Iterator:
import random
import time
from time import sleep

class Timeoutloop:
    """ universal time-out loop """
    def __init__(self, timeout):
        self.timeout = timeout

    def __iter__(self):
        self.start_time = time.time()
        return self

    def __next__(self):
        now = time.time()
        time_passed = now - self.start_time
        if time_passed > self.timeout:
            raise StopIteration
        else:
            return time_passed

## usage example:
for passed in Timeoutloop(10.0):
    print(passed)
    sleep(random.random())
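Applied to the original question, the same iterator could drive the countdown directly; this combination is my own sketch, not part of the answer above:

# uses the Timeoutloop class and imports defined above
test = 0
for passed in Timeoutloop(60 * 5):   # give up after 5 minutes
    if test == 5:                    # ...or stop early once the target is reached
        break
    test -= 1
    sleep(1)                         # avoid busy-waiting between checks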
Inspired by @andrefsp's answer, here's a simple context manager that gives a deadline:
import logging
import time
from contextlib import contextmanager
from typing import Union

logger = logging.getLogger(__name__)

class DeadlineExceededException(Exception):
    pass

@contextmanager
def deadline(timeout_seconds: Union[int, float]):
    """
    Context manager that gives a deadline to run some code.

    Usage:
        `with deadline(secs): ...`
    or
        ```
        @deadline(secs)
        def func(): ...
        ```

    Args:
        timeout_seconds: number of seconds before the context manager raises a DeadlineExceededException

    Raises:
        DeadlineExceededException if more than timeout_seconds elapses.
    """
    start_time = time.time()
    yield
    elapsed_time = time.time() - start_time
    if elapsed_time > timeout_seconds:
        msg = f"Deadline of {timeout_seconds} seconds exceeded by {elapsed_time - timeout_seconds} seconds"
        logger.exception(msg)
        raise DeadlineExceededException(msg)
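A small usage sketch (the loop below is assumed from the original question, not part of this answer); note that this context manager only checks the clock after the block finishes, so the wrapped code still needs its own way to end:

import time

try:
    with deadline(5):
        test = 0
        while test != 50:        # this takes about 10 seconds...
            test += 1
            time.sleep(0.2)
except DeadlineExceededException:
    print("the block ran past its 5-second deadline")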
I am trying to run an apriori analysis on a series of hashtags scraped from Twitter in Python using JupyterLab, and need to find a way to time out a function after a certain period of time. The function is run using a while loop that incrementally reduces the size of the support value, and stops after ten seconds have passed.
def association_rules(hashtags_list):
    # Convert the list of hashtags into a list of transactions
    transactions = [hashtags for hashtags in hashtags_list]
    # Initialize the support
    min_support = 1
    # Initialize the confidence
    min_confidence = 0.1
    # Initialize the lowest support
    lowest_support = 1
    # Start the timer
    start_time = time.time()
    while True:
        try:
            # Find the association rules
            association_rules = apriori(transactions, min_confidence=min_confidence, min_support=min_support)
            # Convert the association rules into a list
            association_rules_list = list(association_rules)
            # End the timer
            end_time = time.time()
            # Calculate the running time
            running_time = end_time - start_time
            # check if running time is over the maximum time
            if running_time >= 10:
                break
            lowest_support = min_support
            if min_support > 0.01:
                min_support = min_support - 0.01
            else:
                min_support = min_support - 0.005
            if min_support <= 0:
                min_support = 0.01
        except Exception as e:
            print("An error occurred:", e)
            break
    return association_rules_list, round(lowest_support, 3)
The problem is that, because the timeout check happens within the loop itself, the loop can get hung up if the apriori support value gets too low before the 10 seconds are up, which often happens with small datasets, so I need an external way to stop the loop.
I've been looking into parallel processing with no success, and still can't really determine if it can even be carried out in Jupyter Lab.
Any ideas on how to stop a function would be appreciated.
Edited to add that I am running on Win 10, which may affect some options.
This is what I have so far, which seems to work, but I could be wrong...
import time
import func_timeout
import tempfile

def test_function():
    highest_n = 0
    n = 0
    temp_file = tempfile.NamedTemporaryFile(delete=False)
    start_time = time.time()
    while time.time() - start_time < timeout_time:
        n += 1
        if n > highest_n:
            highest_n = n
            temp_file.write(str(n).encode() + b' ' + str(highest_n).encode())
    temp_file.close()
    return highest_n

timeout_time = 5
try:
    result = func_timeout.func_timeout(timeout_time, test_function)
    print("highest number counted to: ", result)
except func_timeout.FunctionTimedOut:
    print("Function Timed Out after ", timeout_time, " seconds")
    temp_file = open(temp_file.name, 'r')
    n, highest_n = temp_file.read().split()
    n = int(n)
    highest_n = int(highest_n)
    print("highest number counted to: ", highest_n)
Edit: After some discussion in the comments, here's a different suggestion.
You can terminate a function from outside the function by causing an exception to be thrown via the signalling mechanism as in the example below:
import signal
import time
import random

def out_of_time(signum, frame):
    raise TimeoutError

def slow_function():
    time.sleep(random.randint(1, 10))
    return random.random()

signal.signal(signal.SIGALRM, out_of_time)
signal.alarm(5)

v = 0
while True:
    try:
        v = slow_function()
    except TimeoutError:
        print("ran out of time")
        break

print("v:", v)
What's going on here is that we have a function, slow_function, that will run for an unknown period of time (1-10 seconds). We run that in a while True loop. At the same time, we've set up the signal system to throw a TimeoutError after 5 seconds. So when that happens, we can catch the exception.
A few things to note though:
This does not work on Windows.
There is absolutely NO GUARANTEE that the code will be where you think it will be when the exception is thrown. If the loop is already completed and the interpreter is not currently running slow_function, you don't know what will happen. So you'll need to 'arm' the exception-throwing mechanism somehow, for example by checking the frame parameter that is passed to out_of_time, to make sure that the exception is only thrown if the signal comes while we're inside the expected function (see the sketch after these notes).
This is kind of the Python equivalent of a goto in the sense that it causes the execution to jump around in unexpected ways.
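For example, one way to 'arm' the handler is a module-level flag that it checks before raising; this flag-based variation is my own sketch, not part of the original answer:

import random
import signal
import time

armed = False   # only raise the TimeoutError while we are inside slow_function

def out_of_time(signum, frame):
    if armed:
        raise TimeoutError

def slow_function():
    time.sleep(random.randint(1, 10))
    return random.random()

signal.signal(signal.SIGALRM, out_of_time)
signal.alarm(5)
deadline = time.time() + 5

v = 0
while time.time() < deadline:   # fallback exit in case the alarm fires while unarmed
    try:
        armed = True
        v = slow_function()
    except TimeoutError:
        print("ran out of time")
        break
    finally:
        armed = False

print("v:", v)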
A better solution would be to insert some code into the function that you want to terminate to periodically check to see if it should keep running.
Change the while loop from while True to:
while time.time() - start_time < 10 and <some other criteria>:
    # loop body
Then you can get rid of the break and you can add whatever halting criteria needed to the loop statement.
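For instance, here is a sketch of that suggestion applied to a simplified stand-in for the asker's loop (the apriori call is replaced by a sleep so the snippet runs on its own, and min_support acts as the other halting criterion):

import time

min_support = 1.0
lowest_support = 1.0
start_time = time.time()

while time.time() - start_time < 10 and min_support > 0.005:
    time.sleep(0.5)   # stand-in for the costly apriori() call
    lowest_support = min_support
    min_support -= 0.01 if min_support > 0.01 else 0.005

print(round(lowest_support, 3))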
I think I found a simpler way just using a list rather than a temporary file.
import time
from func_timeout import func_timeout, FunctionTimedOut

start_time = time.time()
temp_list = []
running_time = 3.3

def while_loop():
    min_support = 1
    lowest_support = 1
    while True:
        time.sleep(0.1)
        if lowest_support > 0.01:
            lowest_support -= 0.01
        else:
            lowest_support -= 0.001
        if lowest_support <= 0:
            lowest_support = 0.001
        temp_list.append(lowest_support)
        if time.time() - start_time > running_time:
            break

try:
    func_timeout(running_time, while_loop)
except FunctionTimedOut:
    pass

temp_list[-1]
This is more to do with my code, as I'm sure I'm doing something wrong. All I'm trying to do is call a main function that repeats itself with a recursive number that at some point I'll reset.
Here is my code:
sys.setrecursionlimit(8000)

didMenuFinish = 0

def mainThread(interval):
    time.sleep(0.2)
    print("interval is: " + str(interval))
    if (interval < 1650):
        raceTrack(interval)
    if (interval == 1680):
        resetButtons()
        restartRace()
    if didMenuFinish == 0:
        mainThread(interval + 1)
    else:
        didMenuFinish = 0
        mainThread(0)
        time.sleep(4)
        keyboard.press(Key.enter)
        keyboard.release(Key.enter)

mainThread(0)
At first the main thread runs fine, but somewhere around interval 2500 (give or take) it just stops with this error code for a stack overflow:
Process finished with exit code -1073741571 (0xC00000FD)
I am using Windows and PyCharm.
Do you know how I can handle that, or am I doing something wrong?
You are most likely recursively looping forever. You recursively call mainThread in these parts:
def mainThread(interval):
    # ...
    if didMenuFinish == 0:
        mainThread(interval + 1)
    else:
        didMenuFinish = 0
        mainThread(0)
But you never return to the caller. You need a base case that stops the recursive loop.
Consider this example of a recursive factorial function:
def fac(x):
if x <= 1:
return 1
return x*fac(x-1)
The base case (terminating case) is when x <= 1, and this will be reached eventually, as fac(x-1) implies that x is decreasing towards the condition x <= 1. That is, you need to have some sort of case where recursion is not needed. Your current function is recursively equivalent to:
def foo():
    foo()
Perhaps infinitely looping and locally updating variables will work?
def mainThread(interval):
    global didMenuFinish   # assumed: didMenuFinish is the module-level flag from the question
    while True:
        time.sleep(0.2)
        print(f"interval is: {interval}")
        if interval < 1650:
            raceTrack(interval)
        if interval == 1680:
            resetButtons()
            restartRace()
        if didMenuFinish == 0:
            # No need to call the function again, just change the local variable
            interval += 1
        else:
            didMenuFinish = 0
            # Same here
            interval = 0
You are getting a stack overflow error because your recursive calls never terminate.
Try using a loop instead:
def mainThread(interval):
    global didMenuFinish   # needed because the flag is reassigned inside the function
    while True:
        time.sleep(0.2)
        print("interval is: " + str(interval))
        if (interval < 1650):
            raceTrack(interval)
        if (interval == 1680):
            resetButtons()
            restartRace()
        if didMenuFinish == 0:
            interval += 1
        else:
            didMenuFinish = 0
            interval = 0
I am using the python multiprocessing library in order to run a number of tests on a large array of numbers.
I have the follow syntax:
import multiprocessing as mp

pool = mp.Pool(processes=6)
res = pool.map_async(testFunction, arrayOfNumbers)
However I want to return the first number that passes the test, and then exit. I am not interested in storing the array of results.
Currently testFunction will return 0 for any numbers that fail, so if doing this without any optimisation, I would wait for it to finish and use:
return filter(lambda x: x != 0, res)[0]
assuming there is a result. However since it is running asynchronously, I want to get the non-zero value as soon as possible.
What is the best approach to this?
I am not sure if this is the best approach, but it is a working approach. Submitting tasks to the pool is non-blocking, so the program keeps operating. By storing all the returned result objects, I can iterate over them myself.
The return values are close to promise objects: by checking their ready() method I can see whether a result is ready to be read, and with get() I can retrieve the value. If a value is 0, I can terminate the pool early and return the final result.
A minimal working example demonstrating this is the following:
import time
import multiprocessing as mp

def worker(value):
    print('working')
    time.sleep(3)
    return value

def main():
    pool = mp.Pool(2)  # Only two workers
    results = []
    for n in range(0, 8):
        value = 0 if n == 0 else 1
        results.append(pool.apply_async(worker, (value,)))

    running = True
    while running:
        for result in results:
            if result.ready() and result.get() == 0:
                print("There was a zero returned")
                pool.terminate()
                running = False
        if all(result.ready() for result in results):
            running = False

    pool.close()
    pool.join()

if __name__ == '__main__':
    main()
The expected output would be:
working
working
working
There was a zero returned
Process finished with exit code 0
I created a small pool of 2 processes that call a function which sleeps for 3 seconds and then returns either 1 or 0. Currently the first task returns a 0, and the program terminates early once that result is available.
If there is no terminating task, the line:
if all(result.ready() for result in results):
    running = False
will terminate the loop once all processes are done.
If you would like to know all the results, you can use:
print([result.get() for result in results if result.ready()])
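As an alternative to polling ready() yourself, Pool.imap_unordered yields results in completion order, so you can stop at the first hit. This is a separate sketch of my own, not part of the answer above, and test_function here is a made-up stand-in:

import multiprocessing as mp

def test_function(n):
    return n if n % 7 == 0 else 0   # hypothetical test: "passes" when divisible by 7

if __name__ == '__main__':
    with mp.Pool(processes=6) as pool:
        for result in pool.imap_unordered(test_function, range(1, 1000)):
            if result != 0:
                print("first passing number:", result)
                break   # the with-block terminates the remaining workers on exit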
This is my first post to stack overflow. I'll try to include all the necessary information, but please let me know if there's more info I can provide to clarify my question.
I'm trying to multithread a costly function for an astrophysical code in python using pool.map. The function takes as an input a list of objects. The basic code structure is like this:
There's a class of Stars with physical properties:
class Stars:
    def __init__(self, mass, metals, positions, age):
        self.mass = mass
        self.metals = metals
        self.positions = positions
        self.age = age

    def info(self):
        return (self.mass, self.metals, self.positions, self.age)
and there's a list of these objects:
stars_list = []
for i in range(nstars):
    stars_list.append(Stars(mass[i], metals[i], positions[i], age[i]))
(where mass, metals, positions and age are known from another script).
There's a costly function that I run with these star objects that returns a spectrum for each one:
def newstars_gen(stars_list):
    ....
    return stellar_nu, stellar_fnu
where stellar_nu and stellar_fnu are numpy arrays
What I would like to do is break the list of star objects (stars_list) up into chunks, and then run newstars_gen on these chunks on multiple threads to gain a speedup. So, to do this, I split the list up into three sublists, and then try to run my function through pool.map:
p = Pool(processes = 3)

nchunks = 3
chunk_start_indices = []
chunk_start_indices.append(0)  # the start index is 0
delta_chunk_indices = nstars / nchunks

for n in range(1, nchunks):
    chunk_start_indices.append(chunk_start_indices[n-1] + delta_chunk_indices)

for n in range(nchunks):
    stars_list_chunk = stars_list[chunk_start_indices[n]:chunk_start_indices[n]+delta_chunk_indices]
    # if we're on the last chunk, we might not have the full list included, so need to make sure that we have that here
    if n == nchunks-1:
        stars_list_chunk = stars_list[chunk_start_indices[n]:-1]
    chunk_sol = p.map(newstars_gen, stars_list_chunk)
But when I do this, I get as the error:
File "/Users/[username]/python2.7/multiprocessing/pool.py", line 250, in map
return self.map_async(func, iterable, chunksize).get()
File "/Users/[username]/python2.7/multiprocessing/pool.py", line 554, in get
raise self._value
AttributeError: Stars instance has no attribute '__getitem__'
So, I'm confused as to what sort of attribute I should include with the Stars class. I've tried reading about this online and am not sure how to define an appropriate __getitem__ for this class. I'm quite new to object oriented programming (and python in general).
Any help is much appreciated!
So, it looks like there may be a couple of things wrong here that could be cleaned up or made more pythonic. However, the key problem is that you are using multiprocessing.Pool.map incorrectly for what you have. Your newstars_gen function expects a list, but p.map is going to break up the list you give it into chunks and hand your function one Stars object at a time. You should probably rewrite newstars_gen to operate on one star at a time and then throw away all but the first and last lines of your last code block. If the calculations in newstars_gen aren't independent between Stars (e.g., the mass of one impacts the calculation for another), you will have to do a more dramatic refactoring.
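Here is a minimal sketch of that refactor, reusing the names from the question (the spectrum calculation itself is replaced with a placeholder):

from multiprocessing import Pool
import numpy as np

def newstars_gen(star):
    # operate on ONE Stars object; Pool.map handles splitting stars_list into chunks
    mass, metals, positions, age = star.info()
    stellar_nu = np.linspace(1e14, 1e16, 100)      # placeholder frequency grid
    stellar_fnu = np.ones_like(stellar_nu) * mass  # placeholder spectrum
    return stellar_nu, stellar_fnu

if __name__ == '__main__':
    p = Pool(processes=3)
    spectra = p.map(newstars_gen, stars_list)      # stars_list as built in the question
    p.close()
    p.join()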
It also looks like it would behoove you to learn about list comprehensions. Be aware that the other built-in structures (e.g., set, dict) have equivalents, and also look into generator comprehensions.
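For example, the stars_list construction from the question written as a list comprehension:

stars_list = [Stars(mass[i], metals[i], positions[i], age[i]) for i in range(nstars)]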
I've written a function for distributing the processing of an iterable (like your list of stars objects) among multiple processors, which I'm pretty sure will work well for you.
from multiprocessing import Process, cpu_count, Lock
from sys import stdout
from time import perf_counter as clock   # time.clock() was removed in Python 3.8; perf_counter is the closest replacement

def run_multicore_function(iterable, function, func_args=[], max_processes=0):
    # Directly pass in a function that is going to be looped over, and fork those
    # loops onto independent processors. Any arguments the function needs must be provided as a list.
    if max_processes == 0:
        cpus = cpu_count()
        if cpus > 7:
            max_processes = cpus - 3
        elif cpus > 3:
            max_processes = cpus - 2
        elif cpus > 1:
            max_processes = cpus - 1
        else:
            max_processes = 1

    running_processes = 0
    child_list = []
    start_time = round(clock())
    elapsed = 0
    counter = 0
    print("Running function %s() on %s cores" % (function.__name__, max_processes))

    # fire up the multi-core!!
    stdout.write("\tJob 0 of %s" % len(iterable))
    stdout.flush()

    for next_iter in iterable:
        if type(iterable) is dict:
            next_iter = iterable[next_iter]
        while 1:  # Only fork a new process when there is a free processor.
            if running_processes < max_processes:
                # Start new process
                stdout.write("\r\tJob %s of %s (%i sec)" % (counter, len(iterable), elapsed))
                stdout.flush()
                if len(func_args) == 0:
                    p = Process(target=function, args=(next_iter,))
                else:
                    p = Process(target=function, args=(next_iter, func_args))
                p.start()
                child_list.append(p)
                running_processes += 1
                counter += 1
                break
            else:
                # processor wait loop
                while 1:
                    for next in range(len(child_list)):
                        if child_list[next].is_alive():
                            continue
                        else:
                            child_list.pop(next)
                            running_processes -= 1
                            break
                    if (start_time + elapsed) < round(clock()):
                        elapsed = round(clock()) - start_time
                        stdout.write("\r\tJob %s of %s (%i sec)" % (counter, len(iterable), elapsed))
                        stdout.flush()
                    if running_processes < max_processes:
                        break

    # wait for remaining processes to complete --> this is the same code as the processor wait loop above
    while len(child_list) > 0:
        for next in range(len(child_list)):
            if child_list[next].is_alive():
                continue
            else:
                child_list.pop(next)
                running_processes -= 1
                break  # need to break out of the for-loop, because the child_list index is changed by pop
        if (start_time + elapsed) < round(clock()):
            elapsed = round(clock()) - start_time
            stdout.write("\r\tRunning job %s of %s (%i sec)" % (counter, len(iterable), elapsed))
            stdout.flush()

    print(" --> DONE\n")
    return
As a usage example, let's use your star_list, and send the result of newstars_gen to a shared file. Start by setting up your iterable, file, and a file lock
star_list = []
for i in range(nstars):
    star_list.append(Stars(mass[i], metals[i], positions[i], age[i]))

outfile = "some/where/output.txt"
file_lock = Lock()
Define your costly function like so:
def newstars_gen(stars_list_item, args):  # args = [outfile, file_lock]
    outfile, file_lock = args
    ....
    with file_lock:
        with open(outfile, "a") as handle:
            handle.write("%s %s\n" % (stellar_nu, stellar_fnu))   # write() takes a single string
Now send your list of stars into run_multicore_function()
run_multicore_function(star_list, newstars_gen, [outfile,file_lock])
After all of your items have been calculated, you can go back into the output file to grab the data and carry on. Instead of writing to a file, you can also share state with multiprocessing.Value or multiprocessing.Array, but I've run into the occasional issue with data getting lost if my list is large and the function I'm calling is fairly fast. Maybe someone else out there can see why that's happening.
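A minimal sketch of the multiprocessing.Value alternative mentioned above (my own illustration, not part of the helper function):

from multiprocessing import Process, Value

def worker(shared_total, x):
    with shared_total.get_lock():        # Value carries its own lock
        shared_total.value += x * x

if __name__ == '__main__':
    total = Value('d', 0.0)              # shared double, starts at 0.0
    procs = [Process(target=worker, args=(total, n)) for n in range(5)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(total.value)                   # 0 + 1 + 4 + 9 + 16 = 30.0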
Hopefully this all makes sense!
Good luck,
-Steve
I'm running part of a script just once per minute and came up with this:
import time

def minutePassed(oldminute):
    currentminute = time.gmtime()[4]
    if ((currentminute - oldminute) >= 1) or (oldminute == 59 and currentminute >= 0):
        return True
    else:
        return False
The problem here is that if the minute is 59, it runs every time until it's past that time. It's not much of a bother performance-wise for me, but I still don't like it happening!
I thought of something like this now:
def minutePassed(oldminute):
    currentminute = time.gmtime()[4]
    if ((currentminute - oldminute) >= 1) or (oldminute == 59 and currentminute >= 0 and ran == False):
        ran = True
        return True
    else:
        return False
Then in another part of the script I set ran to False again when the minute is != 59 and the variable isn't already False, but that seems crude?
On another note: is there any other way to check if a minute has passed? Maybe I'm making things complicated...
Edit: Maybe I was not clear enough:
Run only ONCE per minute.
Execution time varies by many seconds but takes less than 30s.
I'm looking at timedelta now.
Don't work with minutes like that; if the time is, for example, 00:00:59, your code will believe a minute has passed the very next second.
Instead, use something like time.time(), which returns seconds passed since the epoch:
def minute_passed(oldepoch):
    return time.time() - oldepoch >= 60
That can still be off by almost a second for the same reasons, but that's probably more acceptable.
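A small usage sketch (my own, assuming the surrounding loop from the question): call minute_passed() inside the main loop and reset oldepoch whenever the once-a-minute work runs:

import time

oldepoch = time.time()
while True:
    # ... the frequent work goes here ...
    if minute_passed(oldepoch):
        print("running the once-per-minute work")
        oldepoch = time.time()
    time.sleep(1)   # avoid spinning flat out between checks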
You can use seconds since epoch to get the time in seconds so you don't have to worry about minutes wrapping around:
import time

oldtime = time.time()
# check
if time.time() - oldtime > 59:
    print("it's been a minute")
I think you'll find it much easier to use the time() function in the time module, which returns the number of seconds elapsed since the 'epoch'.
import time

ran = False   # assumed module-level flag; without it (and the global below) this raises an error
oldtime = time.time()

def minutePassed(oldtime):
    global ran
    currenttime = time.time()
    if currenttime - oldtime > 60 and ran == False:
        ran = True
        return True
    else:
        return False
Use time.sleep:
from time import sleep

while True:
    # do something
    sleep(60)
I had not much success with timedelta so I went with my former crude idea:
def minutePassed(oldminute):
    currentminute = time.gmtime()[4]
    if ((currentminute - oldminute) >= 1) or (oldminute == 59 and currentminute == 0):
        return True
    else:
        return False
Then in the script:
lastrun = time.time()   # initialise before the loop
while True:
    dosomething()
    if minutePassed(time.gmtime(lastrun)[4]):
        domore()
        lastrun = time.time()
This works perfectly fine for me now; it only runs once a minute. Sleeping or anything like that is a bad choice for me here, since the execution time of each loop is unreliable.
I had the exact same problem and I think I found a solution:
import time

oldtime = time.time()
print(oldtime)
while oldtime + 60 >= time.time():   # >= rather than ==, which would almost never match a float exactly
    if oldtime + 3 <= time.time():   # likewise <= instead of ==
        print("60 secs passed")
        print(time.time())
        break
You can delete the print statements; I had them there just to see if it works. Hope it's what you are looking for.