Printing something in a for loop every n seconds

I have a for loop that iterates over a number and performs some simple calculations. I am trying to figure out how to print (or log to a file) the current value of 'val' every 0.5 to 1 second without having to pause or sleep during the loop. Here is a super simple example:
val_list = []
for i in xrange(iterations):
    val = (i * (1/2)) * pi
    val2 = np.linalg.pinv(val)
    # print or write to log val2 after every half second (or 1 second)
    val_list.append(val2)

Just use time.time to capture the time before starting, then check how long it's been after you calculate val2:
import time

val_list = []
prev_time = time.time()
for i in xrange(iterations):
    val = (i * (1/2)) * pi
    val2 = np.linalg.pinv(val)
    # print or write to log val2 after every half second (or 1 second)
    dt = time.time() - prev_time
    if dt > 1:
        # print or write to log here
        prev_time = time.time()
    val_list.append(val2)

You can use time.time():
from time import time as t

val_list = []
nowTime = t()
for i in xrange(iterations):
    val = (i * (1/2)) * pi
    val2 = np.linalg.pinv(val)
    curTime = t()
    if curTime - nowTime >= 0.5:
        # Do your stuff
        nowTime = curTime
    val_list.append(val2)

You can achieve this using threads.
Here's the documentation on how to use threads: https://docs.python.org/3/library/threading.html (if you're using Python 2.7, change the 3 in the URL to a 2).
Here's a link to a similar question that should also point you in the right direction: Python threading.timer - repeat function every 'n' seconds
Basically, you create a thread that only executes every n seconds; on each run it prints the current value. The link above should suffice for that. Good luck!
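As a rough illustration of that idea (not part of the original answer), here is a minimal sketch using threading.Timer; the names log_progress and current_val are hypothetical stand-ins for the loop's state:

import threading

# Hypothetical shared state; the main loop would update current_val["value"] = val2.
current_val = {"value": None}

def log_progress(interval=1.0):
    # Print (or log) the latest value, then re-arm the timer so this repeats every `interval` seconds.
    print(current_val["value"])
    timer = threading.Timer(interval, log_progress, args=(interval,))
    timer.daemon = True  # don't keep the process alive just for logging
    timer.start()

log_progress()  # start the periodic logging before entering the main loop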

Related

How to end a for loop after a given amount of time

I execute my code in a loop over many objects, and it seems to take too much time.
I would like to add a condition that stops the execution after, for example, 30 minutes. How should this be done? Do I need another for loop and the timeit module, or can it be done more simply?
You can do it with something like this:
import time

time_limit = 60 * 30  # 30 minutes, expressed in seconds
t0 = time.time()
for obj in list_of_objects_to_iterate_over:
    do_some_stuff(obj)
    if time.time() - t0 > time_limit:
        break
The break statement exits the loop as soon as an iteration finishes after the time limit you have set has been reached.
You could record a starting time and stop executing once 30 minutes have passed since then:
from datetime import datetime, timedelta

starting_time = datetime.now()
for item in something:
    # do something
    if (datetime.now() - starting_time) // timedelta(minutes=1) >= 30:
        break
Here is my variant of how this can be done (assuming we gather the data from an SQL response):
# Fragment: assumes this runs inside a function where total_records, start_time_ms,
# logger, cur and conn have already been set up, and total_records is incremented
# elsewhere in the loop body.
for i in items:
    if total_records % 100 == 0:
        logger.warning(
            "Processed {} items in {} ms".format(total_records, int(time.time() * 1000) - start_time_ms))
    if int(time.time() * 1000) - start_time_ms > 3600000:  # stop after one hour
        cur.close()
        conn.close()
        return total_records

python time results not as expected: time.time() - time.time()

While playing around with timing in Python, I found odd behavior when calling time.time() twice within a single statement. There is a very small processing delay between the two calls during statement execution.
E.g. time.time() - time.time()
In a perfect world, if both calls executed at the same instant, this would evaluate to 0.
In the real world, however, it results in a very small number, since there is a delay between when the processor executes the first time.time() call and the second. But when running this same expression twice and comparing the two results, the outcomes are skewed in one direction.
See the small code snippet below.
This also holds true for very large data sets
import time

counts = 300000

def at_once():
    first = 0
    second = 0
    x = 0
    while x < counts:
        x += 1
        exec_first = time.time() - time.time()
        exec_second = time.time() - time.time()
        if exec_first > exec_second:
            first += 1
        else:
            second += 1
    print('1sts: %s' % first)
    print('2nds: %s' % second)
prints:
1sts: 39630
2nds: 260370
Unless I have my logic incorrect, I would expect the results to be very close to 50:50, but that does not seem to be the case. Can anyone explain what causes this behavior, or point out a flaw in the code logic that is skewing the results in one direction?
Could it be that exec_first == exec_second? Your if-else would add 1 to second in that case.
Try changing your if-else to something like:
if exec_first > exec_second:
    first += 1
elif exec_second > exec_first:
    second += 1
else:
    pass
You assign all of the ties to one category. Try it with a middle ground:
import time

counts = 300000
first = 0
second = 0
same = 0
for _ in range(counts):
    exec_first = time.time() - time.time()
    exec_second = time.time() - time.time()
    if exec_first == exec_second:
        same += 1
    elif exec_first > exec_second:
        first += 1
    else:
        second += 1
print('1sts: %s' % first)
print('same: %s' % same)
print('2nds: %s' % second)
Output:
$ python3 so.py
1sts: 53099
same: 194616
2nds: 52285
$ python3 so.py
1sts: 57529
same: 186726
2nds: 55745
Also, I'm confused as to why you think that a function call might take 0 time. Every invocation requires at least access to the system clock and copying that value to a temporary location of some sort. This isn't free of overhead on any current computer.
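As a rough illustration of that overhead (not part of the original answer), you can time the call itself with timeit; the exact figure will vary by machine:

import timeit

# Average cost of a single time.time() call, in seconds.
per_call = timeit.timeit("time.time()", "import time", number=1000000) / 1000000
print("time.time() costs roughly %.1f ns per call" % (per_call * 1e9))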

Increment list size based on elapsed time

I am creating a program that will measure the execution times of various sorting algorithms (Selection, Bubble, Merge, and Tree sort).
The list sizes used for the test cases should start at 10,000, and go up by 10,000 for each test until the execution time for the test exceeds 60 seconds.
And that is my issue.
I have this probably very wrong (and ugly) code that I have created (I am currently testing with just the Bubble Sort).
import random
import time

def bubbleSort(a_list):
    for passnum in range(len(a_list)-1, 0, -1):
        for i in range(passnum):
            if a_list[i] > a_list[i+1]:
                temp = a_list[i]
                a_list[i] = a_list[i+1]
                a_list[i+1] = temp

a_list = []
for i in range(10000):
    a_list.append(random.randrange(0, 10000))

start = time.perf_counter()
bubbleSort(a_list)
end = time.perf_counter()
elapsed = end - start

print("{0:.8f}".format(elapsed, "\n"))
print(a_list)

if elapsed <= 60:
    for i in range(len(a_list), len(a_list)+10000):
        a_list.append(random.randrange(len(a_list)+10000))
    start = time.perf_counter()
    bubbleSort(a_list)
    end = time.perf_counter()
    elapsed = end - start
    print("{0:.8f}".format(elapsed, "\n"))
    print(a_list)
else:
    # it'll quit
I'm sorry for the ignorance that is very apparent. So above was my first reaction. Then I came up with this loop:
start = time.perf_counter()
while start <= 60:
    for i in range(len(a_list)+10000):
        a_list.append(random.randrange(len(a_list)+10000))
    bubbleSort(a_list)
    end = time.perf_counter()
    elapsed = end - start
    print("{0:.8f}".format(elapsed, "\n"))
    print(a_list)
I would be very grateful if someone could give me a push in the right direction and help me think through the logic behind it. Thank you very much in advance.
First, collapse some of the code to improve readability:
The element swap now uses the Python idiom a, b = b, a
Build the list with a comprehension, not a loop
Parametrize the list size; increment it each time through the loop.
Do you really need 8 decimal places for the execution time?
Code:
import random
import time

def bubbleSort(a_list):
    for passnum in range(len(a_list)-1, 0, -1):
        for i in range(passnum):
            if a_list[i] > a_list[i+1]:
                a_list[i], a_list[i+1] = a_list[i+1], a_list[i]

elapsed = 0
size = 0
size_inc = 10000

print("Size\tTime")
while elapsed < 60:
    # Add 10,000 numbers to the list
    size += size_inc
    a_list = [random.randrange(0, size) for i in range(size)]
    start = time.perf_counter()
    bubbleSort(a_list)
    end = time.perf_counter()
    elapsed = end - start
    print(size, "\t{0:.8f}".format(elapsed, "sec.\n"))
Output:
Size Time
10000 12.05934826
20000 47.99201040
30000 111.39582218

Algorithm timing in Python

I want to compute how many times my computer can do counter += 1 in one second. A naive approach is the following:
from time import time

counter = 0
startTime = time()
while time() - startTime < 1:
    counter += 1
print counter
The problem is time() - startTime < 1 may be considerably more expensive than counter += 1.
Is there a way to make a less "clean" 1 sec sample of my algorithm?
The usual way to time algorithms is the other way around: Use a fixed number of iterations and measure how long it takes to finish them. The best way to do such timings is the timeit module.
import timeit
print timeit.timeit("counter += 1", "counter = 0", number=100000000)
Note that timing counter += 1 seems rather pointless, though. What do you want to achieve?
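If the goal really is an increments-per-second figure, one way (a rough sketch, not from the original answer) is to divide the iteration count by the measured time; the iteration count here is arbitrary:

import timeit

n = 10000000
elapsed = timeit.timeit("counter += 1", "counter = 0", number=n)
print("Roughly %.0f increments per second" % (n / elapsed))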
Why don't you infer the time instead? You can run something like:
from datetime import datetime

def operation():
    counter = 0
    tbeg = datetime.utcnow()
    for _ in range(10**6):
        counter += 1
    td = datetime.utcnow() - tbeg
    return (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) / 10.0**6

def timer(n):
    stack = []
    for _ in range(n):
        stack.append(operation())  # units of microseconds per increment
    print sum(stack) / len(stack)

if __name__ == "__main__":
    timer(10)
and get the average elapsed microseconds per increment; I get 0.09 (most likely very inaccurate). Now, it is a simple operation to infer that if I can make one increment in 0.09 microseconds, then I am able to make about 11258992 in one second.
I think the measurements are very inaccurate, but maybe it is a sensible approximation?
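The inference itself is just a division; as a sketch, using only the example figure quoted above:

avg_us_per_increment = 0.09                           # example average from the runs above
increments_per_second = 1e6 / avg_us_per_increment    # 1 second = 1,000,000 microseconds
print(increments_per_second)                          # roughly 11 million with this example figure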
I have never worked with the time library, but according to that code I assume it counts seconds, so what if you do the per-second calculation after Ctrl+C happens? It would be something like:
#! /usr/bin/env python
from time import time
import signal
import sys

# The Ctrl+C interruption handler:
def signal_handler(signal, frame):
    counts_per_sec = counter / (time() - startTime)
    print counts_per_sec
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)

counter = 0
startTime = time()
while 1:
    counter = counter + 1
Of course, it won't be exact because of the time that passes between the last second processed and the interruption signal, but the longer you leave the script running, the more precise it will be :)
Here is my approach:
import time

m = 0
timeout = time.time() + 1
while True:
    if time.time() > timeout:
        break
    m = m + 1
print(m)

Calculating Time Difference

At the start and end of my program, I have:
from time import strftime
print int(strftime("%Y-%m-%d %H:%M:%S")
Y1=int(strftime("%Y"))
m1=int(strftime("%m"))
d1=int(strftime("%d"))
H1=int(strftime("%H"))
M1=int(strftime("%M"))
S1=int(strftime("%S"))
Y2=int(strftime("%Y"))
m2=int(strftime("%m"))
d2=int(strftime("%d"))
H2=int(strftime("%H"))
M2=int(strftime("%M"))
S2=int(strftime("%S"))
print "Difference is:"+str(Y2-Y1)+":"+str(m2-m1)+":"+str(d2-d1)\
+" "+str(H2-H1)+":"+str(M2-M1)+":"+str(S2-S1)
But when I tried to get the difference, I get syntax errors.... I am doing a few things wrong, but I'm not sure what is going on...
Basically, I just want to store a time in a variable at the start of my program, then store a 2nd time in a second variable near the end, then at the last bit of the program, compute the difference and display it. I am not trying to time a function speed. I am trying to log how long it took for a user to progress through some menus. What is the best way to do this?
The datetime module will do all the work for you:
>>> import datetime
>>> a = datetime.datetime.now()
>>> # ...wait a while...
>>> b = datetime.datetime.now()
>>> print(b-a)
0:03:43.984000
If you don't want to display the microseconds, just use (as gnibbler suggested):
>>> a = datetime.datetime.now().replace(microsecond=0)
>>> b = datetime.datetime.now().replace(microsecond=0)
>>> print(b-a)
0:03:43
from time import time
start_time = time()
...
end_time = time()
seconds_elapsed = end_time - start_time
hours, rest = divmod(seconds_elapsed, 3600)
minutes, seconds = divmod(rest, 60)
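To display that as hours:minutes:seconds (an illustrative addition, not part of the original answer), something like this works:

print("%d:%02d:%02d" % (hours, minutes, seconds))  # e.g. 0:03:43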
You cannot calculate the differences separately ... what difference would that yield for 7:59 and 8:00 o'clock? Try
import time
time.time()
which gives you the seconds since the start of the epoch.
You can then get the intermediate time with something like
timestamp1 = time.time()
# Your code here
timestamp2 = time.time()
print "This took %.2f seconds" % (timestamp2 - timestamp1)
Both time.monotonic() and time.monotonic_ns() are correct. Correct as in monotonic.
>>> import time
>>>
>>> time.monotonic()
452782.067158593
>>>
>>> t0 = time.monotonic()
>>> time.sleep(1)
>>> t1 = time.monotonic()
>>> print(t1 - t0)
1.001658110995777
Regardless of language, monotonic time is the right answer, and real time is the wrong answer. The difference is that monotonic time is supposed to give a consistent answer when measuring durations, while real time isn't, as real time may be adjusted – indeed needs to be adjusted – to keep up with reality. Monotonic time is usually the computer's uptime.
As such, time.time() and datetime.now() are wrong ways to do this.
Python also has time.perf_counter() and time.perf_counter_ns(), which are specified to have the highest available resolution, but aren't guaranteed to be monotonic. On PC hardware, though, both typically have nanosecond resolution.
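You can inspect what guarantees your platform's clocks actually give with time.get_clock_info (shown here as an illustration; the exact values depend on the OS):

import time

for name in ("time", "monotonic", "perf_counter"):
    info = time.get_clock_info(name)
    # 'monotonic' tells you whether the clock can go backwards; 'resolution' is in seconds.
    print(name, "monotonic:", info.monotonic, "resolution:", info.resolution)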
Here is a piece of code to do so:
def StringChallenge(str1):
    #str1 = str1[1:-1]
    h1 = 0
    h2 = 0
    m1 = 0
    m2 = 0

    def time_dif(h1, m1, h2, m2):
        if h1 == h2:
            return m2 - m1
        else:
            return (h2 - h1 - 1) * 60 + (60 - m1) + m2

    count_min = 0

    if str1[1] == ':':
        h1 = int(str1[:1])
        m1 = int(str1[2:4])
    else:
        h1 = int(str1[:2])
        m1 = int(str1[3:5])

    if str1[-7] == '-':
        h2 = int(str1[-6])
        m2 = int(str1[-4:-2])
    else:
        h2 = int(str1[-7:-5])
        m2 = int(str1[-4:-2])

    if h1 == 12:
        h1 = 0
    if h2 == 12:
        h2 = 0

    if "am" in str1[:8]:
        flag1 = 0
    else:
        flag1 = 1
    if "am" in str1[7:]:
        flag2 = 0
    else:
        flag2 = 1

    if flag1 == flag2:
        if h2 > h1 or (h2 == h1 and m2 >= m1):
            count_min += time_dif(h1, m1, h2, m2)
        else:
            count_min += 1440 - time_dif(h2, m2, h1, m1)
    else:
        count_min += (12 - h1 - 1) * 60
        count_min += (60 - m1)
        count_min += (h2 * 60) + m2

    return count_min