Multiprocessing in Python takes the same time as sequential processing

I want to print the numbers from 0 to 100000 sequentially and also using a processing pool of 12 processes.
import time
from multiprocessing import Pool

def print1(x):
    print(x)

def print2(x):
    i = 0
    while i < x:
        print(i)
        i += 1

if __name__ == '__main__':
    start_time = time.time()
    p = Pool(12)
    p.map(print1, [i for i in range(100000)])
    print("--- %s seconds ---" % (time.time() - start_time))

    start_time = time.time()
    print2(100000)
    print("--- %s seconds ---" % (time.time() - start_time))
But both take almost the same time, and the CPU utilization is also about the same (on the left with multiprocessing, on the right sequentially).
Why does the CPU use the same resources in both cases?
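For comparison, here is a minimal sketch (the function sum_squares and the sizes are made up for illustration, not taken from the question) of a CPU-bound workload, where a 12-process pool behaves differently from a sequential loop because each call does real computation instead of waiting on print():

# Hypothetical comparison: a CPU-bound task instead of print(), so each
# worker actually has computation to do rather than I/O.
import time
from multiprocessing import Pool

def sum_squares(n):
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    args = [200_000] * 48

    start_time = time.time()
    with Pool(12) as p:
        p.map(sum_squares, args)
    print("--- pool:       %s seconds ---" % (time.time() - start_time))

    start_time = time.time()
    for a in args:
        sum_squares(a)
    print("--- sequential: %s seconds ---" % (time.time() - start_time))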

Correct way to append to string in python

I've read this reply which explains that CPython has an optimization to do an in-place append without copy when appending to a string using a = a + b or a += b. I've also read this PEP8 recommendation:
Code should be written in a way that does not disadvantage other
implementations of Python (PyPy, Jython, IronPython, Cython, Psyco,
and such). For example, do not rely on CPython’s efficient
implementation of in-place string concatenation for statements in the
form a += b or a = a + b. This optimization is fragile even in CPython
(it only works for some types) and isn’t present at all in
implementations that don’t use refcounting. In performance sensitive
parts of the library, the ''.join() form should be used instead. This
will ensure that concatenation occurs in linear time across various
implementations.
So if I understand correctly, instead of doing a += b + c in order to trigger this CPython optimization, which does the replacement in place, the proper way is to call a = ''.join([a, b, c])?
But then why is this form with join significantly slower than the += form in this example? (In loop1 I'm using a = a + b + c on purpose in order not to trigger the CPython optimization.)
import os
import time

if __name__ == "__main__":
    start_time = time.time()
    print("begin: %s " % (start_time))

    s = ""
    for i in range(100000):
        s = s + str(i) + '3'
    time1 = time.time()
    print("end loop1: %s " % (time1 - start_time))

    s2 = ""
    for i in range(100000):
        s2 += str(i) + '3'
    time2 = time.time()
    print("end loop2: %s " % (time2 - time1))

    s3 = ""
    for i in range(100000):
        s3 = ''.join([s3, str(i), '3'])
    time3 = time.time()
    print("end loop3: %s " % (time3 - time2))
The results show join is significantly slower in this case:
~/testdir$ python --version
Python 3.10.6
~/testdir$ python concatenate.py
begin: 1675268345.0761461
end loop1: 3.9019
end loop2: 0.0260
end loop3: 0.9289
Is my version with join wrong?
In "loop3" you bypass a lot of the gain of join() by continuously calling it in an unneeded way. It would be better to build up the full list of characters then join() once.
Check out:
import time

iterations = 100_000

##----------------
s = ""
start_time = time.time()
for i in range(iterations):
    s = s + "." + '3'
end_time = time.time()
print("end loop1: %s " % (end_time - start_time))
##----------------

##----------------
s = ""
start_time = time.time()
for i in range(iterations):
    s += "." + '3'
end_time = time.time()
print("end loop2: %s " % (end_time - start_time))
##----------------

##----------------
s = ""
start_time = time.time()
for i in range(iterations):
    s = ''.join([s, ".", '3'])
end_time = time.time()
print("end loop3: %s " % (end_time - start_time))
##----------------

##----------------
s = []
start_time = time.time()
for i in range(iterations):
    s.append(".")
    s.append("3")
s = "".join(s)
end_time = time.time()
print("end loop4: %s " % (end_time - start_time))
##----------------

##----------------
s = []
start_time = time.time()
for i in range(iterations):
    s.extend((".", "3"))
s = "".join(s)
end_time = time.time()
print("end loop5: %s " % (end_time - start_time))
##----------------
Just to be clear, you can also run this with:
iterations = 10_000_000
if you like; just be sure to remove "loop1" and "loop3" first, as they get dramatically slower after about 300k iterations.
When I run this with 10 million iterations I see:
end loop2: 16.977502584457397
end loop4: 1.6301295757293701
end loop5: 1.0435805320739746
So, clearly there is a way to use join() that is fast :-)
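Along the same lines, a tiny sketch (the "loop6" label is made up here and not part of the benchmark above) of calling join() exactly once over a generator expression:

import time

iterations = 10_000_000

start_time = time.time()
# every fragment is produced lazily and join() is called exactly once
s = "".join("." + "3" for _ in range(iterations))
end_time = time.time()
print("end loop6: %s " % (end_time - start_time))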
ADDENDUM:
@Étienne has suggested that making the string to append longer reverses the findings, and that the optimization of loop2 does not happen unless it is in a function. I do not see the same.
import time

iterations = 10_000_000
string_to_append = "345678912"

def loop2(iterations):
    s = ""
    for i in range(iterations):
        s += "." + string_to_append
    return s

def loop4(iterations):
    s = []
    for i in range(iterations):
        s.append(".")
        s.append(string_to_append)
    return "".join(s)

def loop5(iterations):
    s = []
    for i in range(iterations):
        s.extend((".", string_to_append))
    return "".join(s)

##----------------
start_time = time.time()
s = loop2(iterations)
end_time = time.time()
print("end loop2: %s " % (end_time - start_time))
##----------------

##----------------
start_time = time.time()
s = loop4(iterations)
end_time = time.time()
print("end loop4: %s " % (end_time - start_time))
##----------------

##----------------
start_time = time.time()
s = loop5(iterations)
end_time = time.time()
print("end loop5: %s " % (end_time - start_time))
##----------------
On Python 3.10 and 3.11 the results are similar. I get results like the following:
end loop2: 336.98531889915466
end loop4: 1.0211727619171143
end loop5: 1.1640543937683105
that continue to suggest to me that join() is overwhelmingly faster.
This is just to add the results from @JonSG's answer with the different Python implementations I have available, posted as an answer because I cannot use formatting in a comment.
The only modification is that I used 1M iterations, and for "local" I wrapped the whole test in a test() function; doing it inside an if __name__ == "__main__": block doesn't seem to help with the 3.11 regression Étienne mentioned. With 3.12.0a5 I'm seeing a similar difference between the local and global s variable, but it's a lot faster.
loop                       | 3.10.10 global | 3.10.10 local | 3.11.2 global | 3.11.2 local | 3.12.0a5 global | 3.12.0a5 local | pypy 3.9.16 global | pypy 3.9.16 local
a = a + b + c              | 71.04          | 71.76         | 92.55         | 90.57        | 91.24           | 92.08          | 120.05             | 97.94
a += b + c                 | 0.38           | 0.20          | 26.57         | 0.21         | 24.06           | 0.03           | 108.98             | 89.62
a = ''.join(a, b, c)       | 23.26          | 21.96         | 25.31         | 24.60        | 23.94           | 23.79          | 94.04              | 90.88
a.append(b);a.append(c)    | 0.50           | 0.38          | 0.35          | 0.23         | 0.0692          | 0.0334         | 0.12               | 0.12
a.extend((b, c))           | 0.35           | 0.27          | 0.29          | 0.19         | 0.0684          | 0.0343         | 0.10               | 0.10
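For reference, a rough sketch of what "global" versus "local" means in the table above, using the a += b + c row as an example (this assumes roughly this shape of loop; the exact harness used for the table is not reproduced here):

import time

iterations = 1_000_000
b, c = ".", "3"

def local_case():
    # "local": s is a function-local variable
    s = ""
    for _ in range(iterations):
        s += b + c
    return s

start = time.time()
# "global": the same loop at module level, so s is a module global
s = ""
for _ in range(iterations):
    s += b + c
print("global: %.2f s" % (time.time() - start))

start = time.time()
local_case()
print("local:  %.2f s" % (time.time() - start))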

Problem with showing elapsed time in python

I have a problem with the function time.time().
I've written code that has 3 different hash functions and then measures how long they take to execute.
start_time = time.time()
arr.add(Book1, 1)
end_time = time.time()
elapsed_time = start_time - end_time
print(elapsed_time)
When I execute this in PyCharm/IDLE/Visual it shows 0. When I run it in an online compiler (https://www.programiz.com/python-programming/online-compiler/) it shows a sensible result. Why is that?
Here is the full code if needed.
import time

class Ksiazka:
    def __init__(self, nazwa, autor, wydawca, rok, strony):
        self.nazwa = nazwa
        self.autor = autor
        self.wydawca = wydawca
        self.rok = rok
        self.strony = strony

    def hash_1(self):
        h = 0
        for char in self.nazwa:
            h += ord(char)
        return h

    def hash_2(self):
        h = 0
        for char in self.autor:
            h += ord(char)
        return h

    def hash_3(self):
        h = self.strony + self.rok
        return h

class HashTable:
    def __init__(self):
        self.size = 6
        self.arr = [None for i in range(self.size)]

    def add(self, key, c):
        if c == 1:
            h = Ksiazka.hash_1(key) % self.size
            print("Hash 1: ", h)
        if c == 2:
            h = Ksiazka.hash_2(key) % self.size
            print("Hash 2: ", h)
        if c == 3:
            h = Ksiazka.hash_3(key) % self.size
            print("Hash 3: ", h)
        self.arr[h] = key

arr = HashTable()
Book1 = Ksiazka("Harry Potter", "J.K Rowling", "foo", 1990, 700)

start_time = time.time()
arr.add(Book1, 1)
end_time = time.time()
elapsed_time = end_time - start_time
print(elapsed_time)

start_time = time.time()
arr.add(Book1, 2)
end_time = time.time()
elapsed_time = end_time - start_time
print(elapsed_time)

start_time = time.time()
arr.add(Book1, 3)
end_time = time.time()
elapsed_time = end_time - start_time
print(elapsed_time)
It looks like 0 might just be the return value for successful script execution. You need to add a print statement to show anything. Also, you might want to change the order of the subtraction:
start_time = time.time()
arr.add(Book1, 1)
end_time = time.time()
elapsed_time = end_time - start_time
print(elapsed_time)
Edit because of the updated question:
If it still shows 0, it may simply be that your add operation is extremely fast. In that case, try averaging over several runs, i.e. instead of a single add operation use a version like this:
start_time = time.time()
for _ in range(10**6):
    arr.add(Book1, 1)
end_time = time.time()
elapsed_time = end_time - start_time
print(elapsed_time) # indicates the average microseconds for a single run
The documentation for time.time says:
Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second. While this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls.
So, depending on your OS, anything that is faster than 1 second might be displayed as a difference of 0.
I suggest you use time.perf_counter instead:
Return the value (in fractional seconds) of a performance counter, i.e. a clock with the highest available resolution to measure a short duration.
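A minimal sketch of the same measurement using time.perf_counter, reusing arr and Book1 from the question's code (only the clock changes):

import time

# perf_counter has much finer resolution than time.time on most platforms
start_time = time.perf_counter()
arr.add(Book1, 1)
end_time = time.perf_counter()
elapsed_time = end_time - start_time
print(elapsed_time)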

Retrieve values from multiprocessing library

I am trying to use the multiprocessing library to compare the performance of my processor on 1 core vs 2 cores.
Therefore I compute a large product using 1 loop, 2 loops on 1 core, and 2 loops on 2 cores (1 core per loop). The problem is that the values of D1.result and D2.result are 0, although they are expected to be the product computed by each half-loop.
The code is the following:
import random
import time
from multiprocessing import Process as Task, freeze_support

N = 10 ** 3
l = [random.randint(2 ** 999, 2 ** 1000 - 1) for x in range(N)]

# ---------------------------------------------------------------
class Loop:
    def __init__(self):
        self.result = 0

    def boucle(self, start, end):
        self.result = l[start]
        for v in l[start+1:end]:
            self.result = self.result * v
# ---------------------------------------------------------------

if __name__ == "__main__":
    print("1 Loop without multiprocessing")
    A = Loop()
    sta = time.time()
    ra = A.boucle(0, N)
    end = time.time()
    print("--> Time :", end - sta)
    # ----------------------------------------------------------------------
    print("2 Loops without multiprocessing")
    B1 = Loop()
    B2 = Loop()
    sta = time.time()
    rb1 = B1.boucle(0, N // 2)
    rb2 = B2.boucle(N // 2, N)
    rb = B1.result * B2.result
    end = time.time()
    print("--> Time :", end - sta)
    if rb - A.result == 0:
        check = "OK"
    else:
        check = "fail"
    print("--> Check :", check)
    # ----------------------------------------------------------------------
    print("2 Loops with multiprocessing")
    freeze_support()
    D1 = Loop()
    D2 = Loop()
    v1 = Task(target=D1.boucle, args=(0, N // 2))
    v2 = Task(target=D2.boucle, args=(N // 2, N))
    sta = time.time()
    v1.start()
    v2.start()
    v1.join()
    v2.join()
    rd = D1.result * D2.result
    end = time.time()
    print("D1", D1.result)
    print("D2", D2.result)
    print("--> Time :", end - sta)
    if rd - A.result == 0:
        check = "OK"
    else:
        check = "fail"
    print("--> Check :", check)
The result of this code is :
1 Loop without multiprocessing
--> Time : 0.5025153160095215
2 Loops without multiprocessing
--> Time : 0.283463716506958
--> Check : OK
2 Loops with multiprocessing
D1 0
D2 0
--> Time : 0.2579989433288574
--> Check : fail
Process finished with exit code 0
Why are D1 and D2 0 and not the result of the loop?
Thank you!
The issue with this code is shown when D1 and D2 are displayed:
In multiprocessing, tasks are executed in a forked process. That process gets a copy of the data.
In each forked process the value is properly computed, but it is never sent back to the main process.
To work around this you can:
- Use shared memory to store the result, but in this case you are limited to C types. Your numbers do not fit in 64 bits (the maximum integer size in C), so this is not a good solution.
- Use a pool of processes; data will then be exchanged through queues and you will be able to handle real Python types.
The last option requires that the "boucle" function returns the result.
Here is the code:
import random
from multiprocessing import Process as Task, freeze_support, Pool
import time

N = 10 ** 3
l = [random.randint(2 ** 999, 2 ** 1000 - 1) for x in range(N)]

# ---------------------------------------------------------------
class Loop:
    def __init__(self):
        self.result = 0

    def boucle(self, start, end):
        self.result = l[start]
        for v in l[start + 1:end]:
            self.result = self.result * v
        return self.result
# ---------------------------------------------------------------

if __name__ == "__main__":
    print("1 Loop without multiprocessing")
    A = Loop()
    sta = time.time()
    ra = A.boucle(0, N)
    end = time.time()
    print("--> Time :", end - sta)
    # ----------------------------------------------------------------------
    print("2 Loops without multiprocessing")
    B1 = Loop()
    B2 = Loop()
    sta = time.time()
    rb1 = B1.boucle(0, N // 2)
    rb2 = B2.boucle(N // 2, N)
    rb = B1.result * B2.result
    end = time.time()
    print("--> Time :", end - sta)
    if rb - A.result == 0:
        check = "OK"
    else:
        check = "fail"
    print("--> Check :", check)
    # ----------------------------------------------------------------------
    print("2 Loops with multiprocessing")
    freeze_support()
    D1 = Loop()
    D2 = Loop()
    pool = Pool(processes=2)
    with pool:
        sta = time.time()
        rb1 = pool.apply_async(B1.boucle, (0, N // 2))
        rb2 = pool.apply_async(B2.boucle, (N // 2, N))
        v1 = rb1.get()
        v2 = rb2.get()
        rd = v1 * v2
        end = time.time()
    print("D1", D1.result)
    print("D2", D2.result)
    print("--> Time :", end - sta)
    if rd - A.result == 0:
        check = "OK"
    else:
        check = "fail"
    print("--> Check :", check)
And the result:
1 Loop without multiprocessing
--> Time : 0.3473360538482666
2 Loops without multiprocessing
--> Time : 0.18696999549865723
--> Check : OK
2 Loops with multiprocessing
D1 0
D2 0
--> Time : 0.1116642951965332
--> Check : OK
You can also use map with the pool to get the values back, but I have not tried it in this case because you only call 2 functions, and pool workers take tasks in "packets" of functions (see the chunksize and maxtasksperchild parameters), so it is possible that a single worker would take both functions for itself.
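For completeness, an untested sketch of that map-style variant using starmap, meant to replace the apply_async block in the code above (it reuses B1, N and Pool from that code; the worker simply returns the partial product):

    # sketch only: same idea as apply_async, but letting the pool unpack
    # the (start, end) argument tuples itself
    with Pool(processes=2) as pool:
        r1, r2 = pool.starmap(B1.boucle, [(0, N // 2), (N // 2, N)])
    rd = r1 * r2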

"How to print list data on a specific time delay list ?"

I want to print the list data with the specific delays that are in another list. I want to loop this process for a specific amount of time, but I'm unable to implement it in a thread.
from time import sleep
import datetime

now = datetime.datetime.now()
Start_Time = datetime.datetime.now()
Str_time = Start_Time.strftime("%H:%M:%S")
End_Time = '11:15:00'

class sampleTest:
    @staticmethod
    def test():
        list1 = ["Hello", "Hi", "Ola"]
        list2 = [5, 10, 7]
        # print(f"{data} delay {delay} & time is {t} ")
        # sleep(delay)
        i = 0
        while i < len(list1):
            t = datetime.datetime.now().strftime('%H:%M:%S')
            print(f"{list1[i]} delay {list2[i]} & time is {t} ")
            sleep(list2[i])
            i += 1
        else:
            print("All Data is printed")

if __name__ == '__main__':
    obj = sampleTest
    while Str_time < End_Time:
        obj.test()
        Str_time = datetime.datetime.now().strftime("%H:%M:%S")
    else:
        print("Time Is done")
Expected output: On the first loop it should print all the list data, but from the second loop onward it should run as per the delays.
1st time: Hello, Hi, Ola
after that
1. Every 5 seconds it should print Hello
2. Every 10 seconds it should print Hi
3. Every 7seconds it should print Ola
Actual output: the list data is printed sequentially, each item after its own delay.
Hello delay 5 & time is 11:41:45
Hi delay 10 & time is 11:41:50
Ola delay 3 & time is 11:42:00
All Data is printed
Hello delay 5 & time is 11:42:03
Hi delay 10 & time is 11:42:08
Ola delay 3 & time is 11:42:18
You can try comparing the current time with the start time, for example:
time.sleep(1)
diff = int(time.time() - start_time)
if (diff % wait_time == 0):
    print(text_to_print)
Here is the full code implementing this:
from time import sleep
import time
import datetime

now = datetime.datetime.now()
Start_Time = datetime.datetime.now()
Str_time = Start_Time.strftime("%H:%M:%S")
End_Time = '11:15:00'
starttime = time.time()
diff = 0

class sampleTest:
    @staticmethod
    def test():
        list1 = ["Hello", "Hi", "Ola"]
        list2 = [5, 10, 7]
        for i in range(len(list1)):
            if (diff % list2[i] == 0):
                t = datetime.datetime.now().strftime('%H:%M:%S')
                print(f"{list1[i]} delay {list2[i]} & time is {t} ")

if __name__ == '__main__':
    obj = sampleTest
    while Str_time < End_Time:
        obj.test()
        time.sleep(1)
        diff = int(time.time() - starttime)
        Str_time = datetime.datetime.now().strftime("%H:%M:%S")
    else:
        print("Time Is done")
In accordance with your desired output, I believe threads are the best option, which means:
from time import sleep
import datetime
import threading

now = datetime.datetime.now()
Start_Time = datetime.datetime.now()
Str_time = Start_Time.strftime("%H:%M:%S")
End_Time = '11:15:00'

class sampleTest:
    def __init__(self):
        self.run = True
        print("1st time: Hello, Hi, Ola")
        print("Now: " + datetime.datetime.now().strftime('%H:%M:%S'))

    def test(self, i):
        list1 = ["Hello", "Hi", "Ola"]
        list2 = [5, 10, 7]
        while self.run:
            sleep(list2[i])
            t = datetime.datetime.now().strftime('%H:%M:%S')
            print(f"{list1[i]} delay {list2[i]} & time is {t}")

    def stop(self):
        self.run = False

if __name__ == '__main__':
    obj = sampleTest()
    t1 = threading.Thread(target=obj.test, args=(0,))
    t2 = threading.Thread(target=obj.test, args=(1,))
    t3 = threading.Thread(target=obj.test, args=(2,))
    t1.start()
    t2.start()
    t3.start()

    while Str_time < End_Time:
        Str_time = datetime.datetime.now().strftime("%H:%M:%S")
    else:
        obj.stop()
        t1.join()
        t2.join()
        t3.join()
        print("All data is printed")
        print("Time Is done")

Outputting time once in python

I'm trying to make the program output the time it took to complete fib(n), but while it's calculating it continuously prints tiny amounts of time. How do I get the program to output the time just once? Here is my program:
import time

def fib(n):
    if n <= 1:
        return 1
    else:
        start_time = time.time()
        answer = fib(n-1) + fib(n-2)
        end_time = time.time()
        total_time = end_time - start_time
        print(total_time)
        return answer
Since your function is recursive, each call will print out its own time. If you want to know how much time the function took overall, I would suggest wrapping the top-level call to fib in the timing code, rather than putting the timing inside the function itself.
Instead of placing the code which calculates the time inside the fib() function, place it outside the function, like so:
import time

def fib(n):
    if n <= 1:
        return 1
    else:
        answer = fib(n-1) + fib(n-2)
        return answer

# Place it all here
start_time = time.time()
fib(90)  # Or some other number
end_time = time.time()
total_time = end_time - start_time
print(total_time)
You can use my timing program that I wrote.
#!python3
import timeit
from os import system
system('cls')

# % % % % % % % % % % % % % % % % %
# times the code 100 times
runs = 100
totalTime = 0.0; average = 0.0
testTimes = []

for i in range(runs):
    startTimer = timeit.default_timer()

    # % % % % % % % % % % % % % % % %
    # >>>>> code to be tested goes here <<<<<
    def fib(n):
        if n <= 1:
            return 1
        else:
            answer = fib(n - 1) + fib(n - 2)
            return answer

    r = fib(26)
    print('fib result is:', r)
    # % % % % % % % % % % % % % % % %

    endTimer = timeit.default_timer()
    timeInterval = endTimer - startTimer
    testTimes.append(timeInterval)
    totalTime += timeInterval
    print('\n', '{} {:.4f} {}'.format("This run's time is", timeInterval,
                                      'seconds' + '\n'))

# print the results
print('{} {:.4f} {}'.format(' Total time:', totalTime, 'seconds'))
print('{} {:.4f} {}'.format('Shortest time:', min(testTimes), 'seconds'))
print('{} {:.4f} {}'.format(' Longest time:', max(testTimes), 'seconds'))
print('{} {:.4f} {}'.format(' Average time:', (totalTime / runs), 'seconds'))
As others noted, to time a recursive function, place the timing around the call to the function, not inside the function. Here is some additional code to time computing the first 30 numbers of the sequence.
import time
import numpy as np

def fib(n):
    if n <= 1:
        answer = 1
    else:
        answer = fib(n-1) + fib(n-2)
    return answer

for i in np.arange(1, 30):
    start = time.time()
    f = fib(i)
    end = time.time()
    total = end - start
    print(i, fib(i), total)
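As a further option, here is a small sketch using the timeit module directly, which also reports a single number (fib here is the plain recursive version with no timing inside it):

import timeit

def fib(n):
    if n <= 1:
        return 1
    return fib(n - 1) + fib(n - 2)

# total time for 10 calls of fib(26); divide by 10 for the average per call
print(timeit.timeit(lambda: fib(26), number=10))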
