Time delta gives 0.0 output - python

I'm trying to time a linear search with a time delta. When I run in debug mode, the delta variable logs the difference, but when run as a regular Python script, print doesn't give the right result: it prints 0.0 even if I change the target, which isn't an accurate time difference.
import time
import itertools

def linear_search(arry, target):
    for index, value in enumerate(arry):
        if value == target:
            return True
    return False

k = list(itertools.islice(range(2000000), 1, 2000000, 2))
start = time.time()
linear_search(k, 25)
done = time.time()
delta = done - start
print(delta)
Can someone help me figure out whether there's anything wrong with the print statement?

The issue might be the resolution of time.time(), particularly on Windows, where the resolution is reportedly about 16 milliseconds.
There is a different module (timeit) that is more oriented toward timing very short method calls.
Here is an example:
import timeit
setup = '''
import itertools

def linear_search(arry, target):
    for index, value in enumerate(arry):
        if value == target:
            return True
    return False

k = list(itertools.islice(range(2000000), 1, 2000000, 2))
n = 25
'''
print(timeit.timeit("linear_search(k, n)", setup=setup, number=1000))
This gives me 0.00021399999999999197 seconds in total over 1000 runs (note that timeit.timeit returns the total time, not the average), i.e. roughly 0.2 microseconds per call.
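If you just want a single reading rather than a benchmark loop, time.perf_counter() is designed for interval measurement and uses the highest-resolution clock available. A minimal sketch of the same search timed that way:
import time
import itertools

def linear_search(arry, target):
    for index, value in enumerate(arry):
        if value == target:
            return True
    return False

k = list(itertools.islice(range(2000000), 1, 2000000, 2))

start = time.perf_counter()  # monotonic and high-resolution, unlike time.time()
linear_search(k, 25)
print(time.perf_counter() - start)  # tiny but non-zero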

Why is the time function working differently in both these cases?

I am solving Problem 14 of Project Euler and I wrote two programs, one optimised and the other not. I've imported the time module to calculate the time taken, but it isn't working properly. It works fine in the unoptimised code:
import time
start = time.time()

def collatz(n):
    chain = 1
    while n > 1:
        chain += 1
        if n % 2 == 0:
            n /= 2
        else:
            n = 3*n + 1
    return chain

maxChain = 0
num = 0
counter = 10**6
while counter > 13:
    coll = collatz(counter)
    if coll > maxChain:
        maxChain = coll
        num = counter
    counter -= 1

end = time.time()
print("Time taken:", end - start)
print(str(start) + ', ' + str(end))
the output is:
Time taken: 47.83728861808777
1591290440.8452923, 1591290488.682581
But in my other code:
import time
start = time.time()

dict = {n: 0 for n in range(1, 10**6)}
dict[1], dict[2] = 1, 2
for i in range(3, 10**6):
    counter = 0
    start = i
    while i > 1:
        # Have we already encountered this sequence?
        if i < start:
            dict[start] = counter + dict[i]
            break
        if i % 2 == 0:
            i /= 2
        else:
            i = 3*i + 1
        counter += 1

end = time.time()
print('Time taken:', end - start)
print(str(start) + ', ' + str(end))
the output is:
Time taken: 1590290651.4527032
999999, 1591290650.4527032
The start time in the second program is 999999 while the end time is fine. This problem doesn't occur in the first program. I don't know why this is happening.
Translated from comment:
You can see that in the second version of the code you shadow/reuse the variable start, using it to hold each sequence's starting number. Thus the 999999 in your output, and the strange results.
Renaming it to anything else will fix you right up =)
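A minimal sketch of the fix (renaming the loop's start to the arbitrary name seq_start; the dict variable is also renamed, since it shadows the built-in):
import time
start = time.time()

chain_len = {n: 0 for n in range(1, 10**6)}
chain_len[1], chain_len[2] = 1, 2
for i in range(3, 10**6):
    counter = 0
    seq_start = i          # no longer shadows the timing variable
    while i > 1:
        if i < seq_start:  # reached an already-computed chain: reuse it
            chain_len[seq_start] = counter + chain_len[i]
            break
        if i % 2 == 0:
            i //= 2        # floor division keeps i an int
        else:
            i = 3*i + 1
        counter += 1

end = time.time()
print('Time taken:', end - start)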

Negative algorithm run time

I'm trying to estimate the running time of AES in Python. I have code from here:
https://gist.github.com/jeetsukumaran/1291836
And I'm using this:
https://repl.it/languages/python3
Sometimes I get negative run times. Why is that? How do I measure it correctly?
Relevant timing loop:
start = timeit.timeit()
r = Rijndael("abcdefg1234567890123456789012345", block_size = 32)
ciphertext = r.encrypt("99999999999999999999999999999995")
plaintext = r.decrypt(ciphertext)
end = timeit.timeit()
The full code is here.
Use time.time(), not timeit.timeit(). Called with no arguments, timeit.timeit() doesn't return the current time: it runs its default statement (pass) one million times and returns how long that took, so end - start here is the difference of two unrelated benchmark durations and can easily be negative.
import time
# unrelated code
start = time.time()
r = Rijndael("abcdefg1234567890123456789012345", block_size = 32)
ciphertext = r.encrypt("99999999999999999999999999999995")
plaintext = r.decrypt(ciphertext)
end = time.time()
elapsed = end - start # will not be negative!
Notes
How does time.time() work?
time.time() returns the number of seconds since January 1, 1970, 00:00:00 (UTC) as a float, so subtracting one reading from a later one gives the elapsed time in seconds.
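For example (the exact value depends on when you run it):
>>> import time
>>> time.time()  # float seconds since the Unix epoch
1591290440.8452923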
How is timeit.timeit() used?
Time one-liners; it returns the total time in seconds over 1,000,000 calls by default.
>>> import timeit
>>> timeit.timeit('4 + 5') # runs 4 + 5 1,000,000 times; returns total time in seconds
0.009406077000000401
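If you do want a benchmark of the encryption itself, timeit can time it directly. A sketch, assuming the gist's Rijndael class has been saved as a local module named rijndael (a hypothetical name):
import timeit

setup = '''
from rijndael import Rijndael  # hypothetical module holding the gist's class
r = Rijndael("abcdefg1234567890123456789012345", block_size=32)
'''
stmt = '''
ciphertext = r.encrypt("99999999999999999999999999999995")
plaintext = r.decrypt(ciphertext)
'''
# total seconds for 1000 encrypt/decrypt round trips
print(timeit.timeit(stmt, setup=setup, number=1000))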

python time results not as expected: time.time() - time.time()

While playing around with timing in Python, I found odd behavior when calling time.time() twice within a single statement: there is a very small processing delay between the first call and the second.
E.g. time.time() - time.time()
In a perfect world this would evaluate to 0. In the real world, however, it yields a very small number, because there is some delay between the moment the processor evaluates the first time.time() call and the moment it evaluates the second. But when I run this expression twice and compare the two results, the outcomes are skewed in one direction.
See the small code snippet below.
This also holds true for very large iteration counts.
import time

counts = 300000

def at_once():
    first = 0
    second = 0
    x = 0
    while x < counts:
        x += 1
        exec_first = time.time() - time.time()
        exec_second = time.time() - time.time()
        if exec_first > exec_second:
            first += 1
        else:
            second += 1
    print('1sts: %s' % first)
    print('2nds: %s' % second)

at_once()
prints:
1sts: 39630
2nds: 260370
Unless my logic is incorrect, I would expect the results to be very close to 50:50, but that does not seem to be the case. Can anyone explain what causes this behavior, or point out a flaw in the code logic that skews the results in one direction?
Could it be that exec_first == exec_second? Your if-else adds 1 to second in that case.
Try changing your if-else to something like:
if exec_first > exec_second:
    first += 1
elif exec_second > exec_first:
    second += 1
else:
    pass
You assign all of the ties to one category. Try it with a middle ground:
import time

counts = 300000

first = 0
second = 0
same = 0
for _ in range(counts):
    exec_first = time.time() - time.time()
    exec_second = time.time() - time.time()
    if exec_first == exec_second:
        same += 1
    elif exec_first > exec_second:
        first += 1
    else:
        second += 1
print('1sts: %s' % first)
print('same: %s' % same)
print('2nds: %s' % second)
Output:
$ python3 so.py
1sts: 53099
same: 194616
2nds: 52285
$ python3 so.py
1sts: 57529
same: 186726
2nds: 55745
Also, I'm confused as to why you think that a function call might take 0 time. Every invocation requires at least access to the system clock and copying that value to a temporary location of some sort, which isn't free of overhead on any current computer. The large number of ties is also expected: time.time()'s resolution is coarse enough that two back-to-back calls frequently return the identical value, making the difference exactly 0.0.
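A minimal sketch, if you want to see that granularity directly, counting how often two back-to-back time.time() calls return the identical value:
import time

ties = 0
trials = 300000
for _ in range(trials):
    # with a coarse clock, both calls often land in the same tick
    if time.time() == time.time():
        ties += 1
print('%d of %d consecutive call pairs were identical' % (ties, trials))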

python - printing something in a for loop every n seconds

I have a for loop that iterates over a number of values and performs some simple calculations. I am trying to figure out how to print (or log to file) the current value of val every 0.5 to 1 seconds without having to pause or sleep during the loop. Here is a super simple example:
val_list = []
for i in xrange(iterations):
    val = (i*(1/2)) * pi
    val2 = np.linalg.pinv(val)
    # print or write to log val2 after every half second (or 1 second)
    val_list.append(val2)
Just use time.time to capture the time before starting, then check how long it's been after you calculate val2:
import time

val_list = []
prev_time = time.time()
for i in xrange(iterations):
    val = (i*(1/2)) * pi
    val2 = np.linalg.pinv(val)
    # print or write to log val2 after every half second (or 1 second)
    dt = time.time() - prev_time
    if dt > 1:
        # print or write to log here
        prev_time = time.time()
    val_list.append(val2)
You can use time.time():
from time import time as t

val_list = []
nowTime = t()
for i in xrange(iterations):
    val = (i*(1/2)) * pi
    val2 = np.linalg.pinv(val)
    curTime = t()
    if curTime - nowTime >= 0.5:
        # Do your stuff
        nowTime = curTime
    val_list.append(val2)
You can achieve this using threads.
Here's the documentation on how to use threads: https://docs.python.org/3/library/threading.html (if you're using Python 2.7, change the 3 in the URL to a 2).
Here's a link which is similar to what you want and should point you in the right direction: Python threading.timer - repeat function every 'n' seconds
Basically you have to create a thread that runs every n seconds and prints the current value on each run; the link above covers that, and there's a minimal sketch below. Good luck!
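A minimal sketch of that idea using threading.Timer (the names latest and report are arbitrary; latest is a module-level variable the main loop keeps updated):
import threading

latest = None  # updated by the main loop, read by the timer

def report(interval):
    print(latest)  # None until the main loop has produced a value
    # re-arm the timer so it fires again in `interval` seconds
    t = threading.Timer(interval, report, args=(interval,))
    t.daemon = True  # don't keep the process alive after the loop ends
    t.start()

report(0.5)  # start the periodic printing

# main loop: just update `latest`; no sleeping or clock checks needed
for i in range(10**7):
    latest = i * 0.5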

Algorithm timing in Python

I want to compute how many times my computer can do counter += 1 in one second. A naive approach is the following:
from time import time

counter = 0
startTime = time()
while time() - startTime < 1:
    counter += 1
print counter
The problem is that time() - startTime < 1 may be considerably more expensive than counter += 1.
Is there a way to get a cleaner one-second sample of my algorithm?
The usual way to time algorithms is the other way around: use a fixed number of iterations and measure how long it takes to finish them. The best way to do such timings is the timeit module.
import timeit
print timeit.timeit("counter += 1", "counter = 0", number=100000000)
Note that timing counter += 1 seems rather pointless, though. What do you want to achieve?
Why don't you infer the time instead? You can run something like:
from datetime import datetime

def operation():
    counter = 0
    tbeg = datetime.utcnow()
    for _ in range(10**6):
        counter += 1
    td = datetime.utcnow() - tbeg
    return (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) / 10.0**6

def timer(n):
    stack = []
    for _ in range(n):
        stack.append(operation())  # units of musec/increment
    print sum(stack) / len(stack)

if __name__ == "__main__":
    timer(10)
and get the average elapsed microseconds per increment; I get 0.09 (most likely very inaccurate). Now, it is a simple operation to infer that if I can make one increment in 0.09 microseconds, then I am able to make about 11258992 in one second.
I think the measurements are very inaccurate, but maybe it is a sensible approximation?
I have never worked with the time() function before, but according to that code I assume it counts seconds, so what if you do the per-second calculation after Ctrl+C happens? It would be something like:
#! /usr/bin/env python
from time import time
import signal
import sys

# The Ctrl+C interruption handler:
def signal_handler(sig, frame):
    counts_per_sec = counter / (time() - startTime)
    print counts_per_sec
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)

counter = 0
startTime = time()
while 1:
    counter = counter + 1
Of course, it won't be exact because of the time that passes between the last second processed and the interruption signal, but the longer you leave the script running, the more precise it will be :)
Here is my approach
import time

m = 0
timeout = time.time() + 1
while True:
    if time.time() > timeout:
        break
    m = m + 1
print(m)
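Since the original concern was that the time() check costs more than the increment itself, a minimal sketch that amortises it by consulting the clock only every 10,000 increments (the batch size is an arbitrary choice):
from time import time

counter = 0
batch = 10000  # increments between clock checks
startTime = time()
while time() - startTime < 1:
    for _ in range(batch):
        counter += 1
print(counter)  # may overshoot the 1 s window by up to one batch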
