I've read this reply, which explains that CPython has an optimization to append to a string in place, without a copy, when using a = a + b or a += b. I've also read this PEP 8 recommendation:
Code should be written in a way that does not disadvantage other
implementations of Python (PyPy, Jython, IronPython, Cython, Psyco,
and such). For example, do not rely on CPython’s efficient
implementation of in-place string concatenation for statements in the
form a += b or a = a + b. This optimization is fragile even in CPython
(it only works for some types) and isn’t present at all in
implementations that don’t use refcounting. In performance sensitive
parts of the library, the ''.join() form should be used instead. This
will ensure that concatenation occurs in linear time across various
implementations.
So if I understand correctly, instead of writing a += b + c to trigger the CPython optimization that does the replacement in place, the proper way is to call a = ''.join([a, b, c])?
But then why is the join form significantly slower than the += form in this example? (In loop1 I'm deliberately using a = a + b + c so as not to trigger the CPython optimization.)
import time

if __name__ == "__main__":
    start_time = time.time()
    print("begin: %s " % (start_time))

    s = ""
    for i in range(100000):
        s = s + str(i) + '3'
    time1 = time.time()
    print("end loop1: %s " % (time1 - start_time))

    s2 = ""
    for i in range(100000):
        s2 += str(i) + '3'
    time2 = time.time()
    print("end loop2: %s " % (time2 - time1))

    s3 = ""
    for i in range(100000):
        s3 = ''.join([s3, str(i), '3'])
    time3 = time.time()
    print("end loop3: %s " % (time3 - time2))
The results show join is significantly slower in this case:
~/testdir$ python --version
Python 3.10.6
~/testdir$ python concatenate.py
begin: 1675268345.0761461
end loop1: 3.9019
end loop2: 0.0260
end loop3: 0.9289
Is my version with join wrong?
In "loop3" you bypass a lot of the gain of join() by continuously calling it in an unneeded way. It would be better to build up the full list of characters then join() once.
Check out:
import time

iterations = 100_000

##----------------
s = ""
start_time = time.time()
for i in range(iterations):
    s = s + "." + '3'
end_time = time.time()
print("end loop1: %s " % (end_time - start_time))
##----------------

##----------------
s = ""
start_time = time.time()
for i in range(iterations):
    s += "." + '3'
end_time = time.time()
print("end loop2: %s " % (end_time - start_time))
##----------------

##----------------
s = ""
start_time = time.time()
for i in range(iterations):
    s = ''.join([s, ".", '3'])
end_time = time.time()
print("end loop3: %s " % (end_time - start_time))
##----------------

##----------------
s = []
start_time = time.time()
for i in range(iterations):
    s.append(".")
    s.append("3")
s = "".join(s)
end_time = time.time()
print("end loop4: %s " % (end_time - start_time))
##----------------

##----------------
s = []
start_time = time.time()
for i in range(iterations):
    s.extend((".", "3"))
s = "".join(s)
end_time = time.time()
print("end loop5: %s " % (end_time - start_time))
##----------------
Just to be clear, you can run this with:
iterations = 10_000_000
if you like; just be sure to remove "loop1" and "loop3" first, as they get dramatically slower beyond about 300k iterations.
When I run this with 10 million iterations I see:
end loop2: 16.977502584457397
end loop4: 1.6301295757293701
end loop5: 1.0435805320739746
So, clearly there is a way to use join() that is fast :-)
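As a further sketch of the same idea (my addition, not part of the timings above): the pieces can also be fed to join() directly from a generator, which keeps the one-pass, linear behavior without managing the list by hand:
import time

iterations = 10_000_000

start_time = time.time()
# build all the pieces lazily and concatenate them in a single join() call
s = "".join("." + "3" for i in range(iterations))
end_time = time.time()
print("end loop6: %s " % (end_time - start_time))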
ADDENDUM:
@Étienne has suggested that making the appended string longer reverses the findings, and that the optimization of loop2 does not happen unless it is inside a function. I do not see the same.
import time

iterations = 10_000_000
string_to_append = "345678912"

def loop2(iterations):
    s = ""
    for i in range(iterations):
        s += "." + string_to_append
    return s

def loop4(iterations):
    s = []
    for i in range(iterations):
        s.append(".")
        s.append(string_to_append)
    return "".join(s)

def loop5(iterations):
    s = []
    for i in range(iterations):
        s.extend((".", string_to_append))
    return "".join(s)

##----------------
start_time = time.time()
s = loop2(iterations)
end_time = time.time()
print("end loop2: %s " % (end_time - start_time))
##----------------

##----------------
start_time = time.time()
s = loop4(iterations)
end_time = time.time()
print("end loop4: %s " % (end_time - start_time))
##----------------

##----------------
start_time = time.time()
s = loop5(iterations)
end_time = time.time()
print("end loop5: %s " % (end_time - start_time))
##----------------
On Python 3.10 and 3.11 the results are similar; I get results like the following:
end loop2: 336.98531889915466
end loop4: 1.0211727619171143
end loop5: 1.1640543937683105
that continue to suggest to me that join() is overwhelmingly faster.
This is just to add the results from @JonSG's answer with the different Python implementations I have available, posted as an answer because I cannot use formatting in a comment.
The only modification is that I used 1M iterations, and for "local" I wrapped the whole test in a test() function; doing it inside an if __name__ == "__main__": block doesn't seem to help with the 3.11 regression Étienne mentioned. With 3.12.0a5 I'm seeing a similar difference between a local and a global s variable, but it's a lot faster overall.
loop                    | 3.10.10 global | 3.10.10 local | 3.11.2 global | 3.11.2 local | 3.12.0a5 global | 3.12.0a5 local | pypy 3.9.16 global | pypy 3.9.16 local
a = a + b + c           | 71.04          | 71.76         | 92.55         | 90.57        | 91.24           | 92.08          | 120.05             | 97.94
a += b + c              | 0.38           | 0.20          | 26.57         | 0.21         | 24.06           | 0.03           | 108.98             | 89.62
a = ''.join(a, b, c)    | 23.26          | 21.96         | 25.31         | 24.60        | 23.94           | 23.79          | 94.04              | 90.88
a.append(b);a.append(c) | 0.50           | 0.38          | 0.35          | 0.23         | 0.0692          | 0.0334         | 0.12               | 0.12
a.extend((b, c))        | 0.35           | 0.27          | 0.29          | 0.19         | 0.0684          | 0.0343         | 0.10               | 0.10
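For reference, a minimal sketch of the kind of harness behind one cell of this table; the test() wrapper and the b/c values here are illustrative assumptions, not the exact script used:
import time

b, c = ".", "3"
iterations = 1_000_000

def test():  # "local" variant: s is a function local
    s = []
    start = time.time()
    for i in range(iterations):
        s.extend((b, c))
    s = "".join(s)
    return time.time() - start

print("a.extend((b, c)) local: %s" % test())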
I have a problem with the function time.time().
I've written code that has 3 different hash functions and then measures how long each of them takes to execute.
start_time = time.time()
arr.add(Book1, 1)
end_time = time.time()
elapsed_time = start_time - end_time
print(elapsed_time)
When I execute this in PyCharm/IDLE/Visual it shows 0. When I do this in an online compiler (https://www.programiz.com/python-programming/online-compiler/) it shows a good result. Why is that?
Here is the full code if needed.
import time

class Ksiazka:
    def __init__(self, nazwa, autor, wydawca, rok, strony):
        self.nazwa = nazwa
        self.autor = autor
        self.wydawca = wydawca
        self.rok = rok
        self.strony = strony

    def hash_1(self):
        h = 0
        for char in self.nazwa:
            h += ord(char)
        return h

    def hash_2(self):
        h = 0
        for char in self.autor:
            h += ord(char)
        return h

    def hash_3(self):
        h = self.strony + self.rok
        return h

class HashTable:
    def __init__(self):
        self.size = 6
        self.arr = [None for i in range(self.size)]

    def add(self, key, c):
        if c == 1:
            h = Ksiazka.hash_1(key) % self.size
            print("Hash 1: ", h)
        if c == 2:
            h = Ksiazka.hash_2(key) % self.size
            print("Hash 2: ", h)
        if c == 3:
            h = Ksiazka.hash_3(key) % self.size
            print("Hash 3: ", h)
        self.arr[h] = key
arr = HashTable()
Book1 = Ksiazka("Harry Potter", "J.K Rowling", "foo", 1990, 700)
start_time = time.time()
arr.add(Book1, 1)
end_time = time.time()
elapsed_time = end_time - start_time
print(elapsed_time)
start_time = time.time()
arr.add(Book1, 2)
end_time = time.time()
elapsed_time = end_time - start_time
print(elapsed_time)
start_time = time.time()
arr.add(Book1, 3)
end_time = time.time()
elapsed_time = end_time - start_time
print(elapsed_time)
It looks like 0 might just be the return value for a successful script execution. You need to add a print statement to show anything. Also, you should swap the order of the subtraction:
start_time = time.time()
arr.add(Book1, 1)
end_time = time.time()
elapsed_time = end_time - start_time
print(elapsed_time)
Edit because of the updated question:
If it still shows 0, it might simply be that your add operation is extremely fast. In that case, try averaging over several runs, i.e. instead of a single add operation use a version like this:
start_time = time.time()
for _ in range(10**6):
    arr.add(Book1, 1)
end_time = time.time()
elapsed_time = end_time - start_time
print(elapsed_time)  # numerically equal to the average microseconds per single run
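The same averaging can also be done with the standard timeit module; a minimal sketch reusing the names from the question (note that add itself prints, so the measured time is dominated by that I/O):
import timeit

# run arr.add a million times and report the average seconds per call
elapsed = timeit.timeit(lambda: arr.add(Book1, 1), number=10**6)
print(elapsed / 10**6)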
The documentation for time.time says:
Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second. While this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls.
So, depending on your OS, anything that is faster than 1 second might be displayed as a difference of 0.
I suggest you use time.perf_counter instead:
Return the value (in fractional seconds) of a performance counter, i.e. a clock with the highest available resolution to measure a short duration.
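As a sketch, the timing block from the question rewritten with time.perf_counter (same logic, just a higher-resolution clock):
import time

start_time = time.perf_counter()
arr.add(Book1, 1)
end_time = time.perf_counter()
elapsed_time = end_time - start_time  # fractional seconds, highest available resolution
print(elapsed_time)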
I want to print the numbers from 0 to 100000 sequentially and also using a processing pool of 12 processes.
import time
from multiprocessing import Pool

def print1(x):
    print(x)

def print2(x):
    i = 0
    while i < x:
        print(i)
        i += 1

if __name__ == '__main__':
    start_time = time.time()
    p = Pool(12)
    p.map(print1, [i for i in range(100000)])
    print("--- %s seconds ---" % (time.time() - start_time))

    start_time = time.time()
    print2(100000)
    print("--- %s seconds ---" % (time.time() - start_time))
But both take almost the same time, and the CPU utilization is also about the same.
[Screenshot of CPU utilization: multiprocessing on the left, sequential on the right]
Why is the CPU using the same resources?
I do not understand what the time.perf_counter() command does.
Here is code that uses time.perf_counter():
import random
import time

num_nums = 100

start_time = time.perf_counter()
numbers = str(random.randint(1, 100))
for i in range(num_nums):
    num = random.randint(1, 100)
    numbers += ',' + str(num)
end_time = time.perf_counter()
td1 = end_time - start_time

start_time = time.perf_counter()
numbers = []
for i in range(num_nums):
    num = random.randint(1, 100)
    numbers.append(str(num))
numbers = ', '.join(numbers)
end_time = time.perf_counter()
td2 = end_time - start_time

start_time = time.perf_counter()
numbers = [str(random.randint(1, 100)) for i in range(1, num_nums)]
numbers = ', '.join(numbers)
end_time = time.perf_counter()
td3 = end_time - start_time

print('''Number of numbers: {:,}
Time Delta 1: {}
Time Delta 2: {}
Time Delta 3: {}'''.format(num_nums, td1, td2, td3))
And here is the result:
Number of numbers: 100
Time Delta 1: 0.0003232999999909225
Time Delta 2: 0.00016150000010384247
Time Delta 3: 0.0003734999997959676
Based on the definition here: https://docs.python.org/3/library/time.html#time.perf_counter
time.perf_counter() returns the reading of a high-resolution performance counter. The absolute value has no defined meaning; only the difference between two consecutive calls does, and that difference measures elapsed time with the highest resolution the clock offers.
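A minimal sketch of that idea (the absolute readings are arbitrary; only their difference matters):
import time

t0 = time.perf_counter()  # arbitrary reference point, not wall-clock time
time.sleep(0.1)
t1 = time.perf_counter()
print(t1 - t0)            # elapsed time, roughly 0.1 seconds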
I'm trying to make the program output the time it took to complete fib(n), but while it's calculating it continuously prints minute amounts of time. How do I get the program to output the time just once? Here is my program:
import time

def fib(n):
    if n <= 1:
        return 1
    else:
        start_time = time.time()
        answer = fib(n-1) + fib(n-2)
        end_time = time.time()
        total_time = end_time - start_time
        print(total_time)
        return answer
Since your function is recursive, each call prints its own time. If you want to know how much time the function took overall, wrap the top-level call to fib in the timing code rather than putting the timing inside the function itself.
Instead of placing the code that calculates the time inside the fib() function, place it outside the function, like so:
import time

def fib(n):
    if n <= 1:
        return 1
    else:
        answer = fib(n-1) + fib(n-2)
        return answer

# Place the timing here, around the top-level call
start_time = time.time()
fib(30)  # or some other number; note that naive recursion is exponential,
         # so a value like 90 would effectively never finish
end_time = time.time()
total_time = end_time - start_time
print(total_time)
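If you time functions like this often, the same idea can be packaged as a small decorator; this is a generic sketch (the timed/timed_fib names are hypothetical helpers, not from the answers above):
import time
from functools import wraps

def timed(func):
    # hypothetical helper: one outer call prints one total time
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print("%s took %s seconds" % (func.__name__, time.perf_counter() - start))
        return result
    return wrapper

def fib(n):  # the plain recursive fib from above, untouched
    return 1 if n <= 1 else fib(n-1) + fib(n-2)

@timed
def timed_fib(n):
    # wrap an outer call only, so the recursion inside fib is not re-timed
    return fib(n)

timed_fib(30)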
You can use this timing program that I wrote.
#!python3
import timeit
from os import system
system('cls')  # clear the console (Windows)

# % % % % % % % % % % % % % % % % %
# times the code 100 times
runs = 100
totalTime = 0.0
testTimes = []
for i in range(runs):
    startTimer = timeit.default_timer()
    # % % % % % % % % % % % % % % % %
    # >>>>> code to be tested goes here <<<<<
    def fib(n):
        if n <= 1:
            return 1
        else:
            answer = fib(n - 1) + fib(n - 2)
            return answer

    r = fib(26)
    print('fib result is:', r)
    # % % % % % % % % % % % % % % % %
    endTimer = timeit.default_timer()
    timeInterval = endTimer - startTimer
    testTimes.append(timeInterval)
    totalTime += timeInterval
    print('\n', '{} {:.4f} {}'.format("This run's time is", timeInterval,
                                      'seconds' + '\n'))

# print the results
print('{} {:.4f} {}'.format(' Total time:', totalTime, 'seconds'))
print('{} {:.4f} {}'.format('Shortest time:', min(testTimes), 'seconds'))
print('{} {:.4f} {}'.format(' Longest time:', max(testTimes), 'seconds'))
print('{} {:.4f} {}'.format(' Average time:', (totalTime / runs), 'seconds'))
As others noted, to time a recursive function, place the timing around the call to the function, not inside it. Here is some additional code that times computing fib for n from 1 to 29.
import time
import numpy as np

def fib(n):
    if n <= 1:
        answer = 1
    else:
        answer = fib(n-1) + fib(n-2)
    return answer

for i in np.arange(1, 30):
    start = time.time()
    f = fib(i)
    end = time.time()
    total = end - start
    print(i, f, total)  # reuse f rather than recomputing fib(i) here
I'm looking for a faster way of sampling a single element at random from a large Python set. Below I've benchmarked three obvious approaches. Is there a faster way of doing this?
import random
import time
test_set = set(["".join(["elem-", str(l)]) for l in range(0, 1000000)])
t0 = time.time()
random_element = random.choice(list(test_set))
print(time.time() - t0)
t0 = time.time()
random_element = random.sample(test_set, 1)  # deprecated for sets since 3.9; raises TypeError on 3.11+
print(time.time() - t0)
t0 = time.time()
rand_idx = random.randrange(0, len(test_set)-1)  # note: randrange already excludes the stop value, so this skips one element
random_element = list(test_set)[rand_idx]
print(time.time() - t0)
Output:
0.0692291259765625
0.06741929054260254
0.07094502449035645
You could use numpy and add it to your benchmarks.
import numpy

# exploit the known "elem-<n>" structure instead of sampling the set itself
random_num = numpy.random.randint(0, 1000000)
element = 'elem-' + str(random_num)

# or build an array from the set once, then sample from it with numpy.random.choice
test_array = numpy.array([x for x in test_set])
Specifically, this is a piece of code that benchmarks the different methods:
random_choice_times = []
random_sample_times = []
random_randrange_times = []
numpy_choice_times = []

for i in range(0, 10):
    t0 = time.time()
    random_element = random.choice(list(test_set))
    time_elps = time.time() - t0
    random_choice_times.append(time_elps)

    t0 = time.time()
    random_element = random.sample(test_set, 1)
    time_elps = time.time() - t0
    random_sample_times.append(time_elps)

    t0 = time.time()
    rand_idx = random.randrange(0, len(test_set)-1)
    random_element = list(test_set)[rand_idx]
    time_elps = time.time() - t0
    random_randrange_times.append(time_elps)

    t0 = time.time()
    # note: re-wrapping test_array in numpy.array() adds a copy to the measured time
    random_num = numpy.random.choice(numpy.array(test_array))
    time_elps = time.time() - t0
    numpy_choice_times.append(time_elps)

print("Avg time for random.choice: ", sum(random_choice_times) / 10)
print("Avg time for random.sample: ", sum(random_sample_times) / 10)
print("Avg time for random.randrange: ", sum(random_randrange_times) / 10)
print("Avg time for numpy.choice: ", sum(numpy_choice_times) / 10)
Here are the times
>>> Avg time for random.choice: 0.06497154235839844
>>> Avg time for random.sample: 0.06054067611694336
>>> Avg time for random.randrange: 0.05938301086425781
>>> Avg time for numpy.choice: 0.017636775970458984
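For what it's worth (my addition, not part of the benchmark above): with the modern numpy Generator API you can index the pre-built array directly, which avoids re-wrapping the array on every draw:
import numpy

rng = numpy.random.default_rng()
# uniform pick from the array built once from the set
element = test_array[rng.integers(0, len(test_array))]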
You could try this.
from time import perf_counter

def random_set_ele(set_: set):
    copy = set_  # NB: this only binds a second name to the same set, it is not a copy
    return copy.pop()

test_set = set(["".join(["elem-", str(l)]) for l in range(0, 1000000)])

start = perf_counter()
print(random_set_ele(test_set))
print(perf_counter() - start)
Result:
elem-57221
0.00016391400276916102
The .pop() method removes and returns an arbitrary element of the set; note that "arbitrary" is not the same as uniformly random. Also, copy = set_ above only creates a second name for the same set rather than copying it, so the popped element really is removed from test_set; to leave the original untouched you would need set_.copy(), which is itself O(n) and would erase most of the speed advantage.
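If the element must stay in the set, a non-destructive variant along these lines is possible; this is my sketch, with the same caveat that the pick is arbitrary rather than uniformly random:
from time import perf_counter

test_set = set("elem-" + str(l) for l in range(1000000))

start = perf_counter()
element = next(iter(test_set))  # peek at an arbitrary element without removing it
print(element)
print(perf_counter() - start)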