I am playing around with multiprocessing in Python 3 to try and understand how it works and when it's good to use it.
I am basing my examples on this question, which is really old (2012).
My computer is a Windows, 4 physical cores, 8 logical cores.
First: not segmented data
First I try to brute-force compute numpy.sin for a million values. The million values are passed as a single chunk, not segmented.
import time
import numpy
from multiprocessing import Pool

# so that iPython works
__spec__ = "ModuleSpec(name='builtins', loader=<class '_frozen_importlib.BuiltinImporter'>)"

def numpy_sin(value):
    return numpy.sin(value)

a = numpy.arange(1000000)

if __name__ == '__main__':
    pool = Pool(processes=8)

    start = time.time()
    result = numpy.sin(a)
    end = time.time()
    print('Single threaded {}'.format(end - start))

    start = time.time()
    result = pool.map(numpy_sin, a)
    pool.close()
    pool.join()
    end = time.time()
    print('Multithreaded {}'.format(end - start))
No matter the number of processes, the multiprocessing version always takes roughly 10 times as long as the single-process version. In the Task Manager I see that not all the CPUs are maxed out; total CPU usage hovers between 18% and 31%.
So I try something else.
Second: segmented data
I try to split up the original 1 million computations into 10 batches of 100,000 each. Then I try again with 10 million computations in 10 batches of 1 million each.
import time
import numpy
from multiprocessing import Pool

# so that iPython works
__spec__ = "ModuleSpec(name='builtins', loader=<class '_frozen_importlib.BuiltinImporter'>)"

def numpy_sin(value):
    return numpy.sin(value)

p = 3
s = 1000000
a = [numpy.arange(s) for _ in range(10)]

if __name__ == '__main__':
    print('processes = {}'.format(p))
    print('size = {}'.format(s))

    start = time.time()
    result = numpy.sin(a)
    end = time.time()
    print('Single threaded {}'.format(end - start))

    pool = Pool(processes=p)
    start = time.time()
    result = pool.map(numpy_sin, a)
    pool.close()
    pool.join()
    end = time.time()
    print('Multithreaded {}'.format(end - start))
I ran this last piece of code for different numbers of processes p and different array lengths s, 100,000 and 1,000,000.
At least now the Task Manager shows the CPU maxed out at 100% usage.
I get the following results for the elapsed times (ORANGE: multiprocess, BLUE: single):
So multiprocessing never wins over the single process.
Why??
On some systems, importing numpy changes the CPU affinity of the parent process so that it only runs on one core. You can call os.system("taskset -p 0xff %d" % os.getpid()) after you import numpy to reset the CPU affinity so that all cores are used (note that taskset is a Linux utility, so this won't help on Windows).
See this question for more details
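For reference, a minimal sketch of where such an affinity reset would go; the 0xff mask assumes at most 8 logical cores (adjust it for your machine), and as noted this only applies on Linux:

import os
import numpy  # importing numpy may restrict CPU affinity on some Linux/BLAS combinations

# Reset affinity so the process may run on all 8 logical cores (mask 0xff).
# taskset is Linux-only; on Windows this command will simply fail.
os.system("taskset -p 0xff %d" % os.getpid())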
A single CPU core can really only do one thing at a time. When multi-threading or multi-processing on one core, the computer is just switching back and forth between tasks quickly. With the provided problem, the computer can either perform the calculation 1,000,000 times in one process, or split up the work between 10 "workers", each performing 100,000 calculations.
Multi-processing does not shine when computing something straight through, because the computer has to take time to create the extra processes; it shines while waiting for something. The classic example is web scraping. If a program requests data from a list of websites and waits for each server to respond before requesting data from the next, the program has to sit idle for a couple of seconds per site. If instead it issues all the requests first and waits on them concurrently, the total running time is much shorter.
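To illustrate, here is a rough sketch comparing sequential and concurrent downloads. The URL list is a placeholder, and it uses a thread pool rather than a process pool (threads are enough here because the work is I/O-bound; the same idea applies with multiprocessing):

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical list of sites; replace with your own.
urls = ["https://www.example.com"] * 10

def fetch(url):
    # Download one page and return its size in bytes.
    with urlopen(url, timeout=5) as resp:
        return len(resp.read())

if __name__ == '__main__':
    start = time.time()
    sizes = [fetch(u) for u in urls]          # one request at a time
    print('Sequential: {:.2f} s'.format(time.time() - start))

    start = time.time()
    with ThreadPoolExecutor(max_workers=10) as ex:
        sizes = list(ex.map(fetch, urls))     # all requests wait concurrently
    print('Concurrent: {:.2f} s'.format(time.time() - start))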
Related
I'm trying to finish my programming course and I'm stuck on one exercise.
I have to measure how much time it takes in Python to create threads, and whether it depends on the number of threads already created.
I wrote a simple script and I don't know if it is good:
import threading
import time

def fun1(a, b):
    c = a + b
    print(c)
    time.sleep(100)

times = []
for i in range(10000):
    start = time.time()
    threading.Thread(target=fun1, args=(55, 155)).start()
    end = time.time()
    times.append(end - start)

print(times)
In times[] I got 10,000 results near 0.0 or exactly 0.0.
Now I don't know whether I built the test wrong because I misunderstand something, or whether the result is correct and the time to create a thread does not depend on the number of already created ones.
Can you help me with it? If the solution is wrong, explain why; if it's correct, confirm it? :)
So there are two ways to interpret your question:
Whether the existence of other threads (that have not been started) affects creation time for new threads
Whether other threads running in the background (threads already started) affect creation time for new threads.
Checking the first one
In this case, you simply don't start the threads:
import threading
import time

def fun1(a, b):
    c = a + b
    print(c)
    time.sleep(100)

times = []
for i in range(10):
    start = time.time()
    threading.Thread(target=fun1, args=(55, 155))  # don't start
    end = time.time()
    times.append(end - start)

print(times)
output for 10 runs:
[4.696846008300781e-05, 2.8848648071289062e-05, 2.6941299438476562e-05, 2.5987625122070312e-05, 2.5987625122070312e-05, 2.5987625122070312e-05, 2.5987625122070312e-05, 2.5987625122070312e-05, 2.5033950805664062e-05, 2.6941299438476562e-05]
As you can see, the times are about the same (as you would expect).
Checking the second one
In this case, we want the previously created threads to keep running as we create more threads. So we give each thread a task that never finishes:
import threading
import time

def fun1(a, b):
    while True:
        pass  # never ends

times = []
for i in range(100):
    start = time.time()
    threading.Thread(target=fun1, args=(55, 155)).start()
    end = time.time()
    times.append(end - start)

print(times)
output:
Over 100 runs, the first creation took 0.0003440380096435547 seconds whereas the last took 0.3017098903656006 seconds, so creation time increased by roughly three orders of magnitude.
I've found that numpy.fft.fft (and its variants) is very slow when run in a background process. Here is an example of what I'm talking about:
import numpy as np
import multiprocessing as mproc
import time
import sys

# the producer function, which will run in the background and produce data
def Producer(dataQ):
    numFrames = 5
    n = 0
    while n < numFrames:
        data = np.random.rand(3000, 200)
        dataQ.put(data)   # send the data to the consumer
        time.sleep(0.1)   # sleep for 0.1 second, so we don't overload the CPU
        n += 1

# the consumer function, which will run in the background and consume data from the producer
def Consumer(dataQ):
    while True:
        data = dataQ.get()
        t1 = time.time()
        fftdata = np.fft.rfft(data, n=3000*5)
        tDiff = time.time() - t1
        print("Elapsed time is %0.3f" % tDiff)
        time.sleep(0.01)
        sys.stdout.flush()

# the main program; the if __name__ == '__main__': guard is necessary so that this code
# runs only when the program is started by the user, not when the module is imported
if __name__ == '__main__':
    data = np.random.rand(3000, 200)
    t1 = time.time()
    fftdata = np.fft.rfft(data, n=3000*5, axis=0)
    tDiff = time.time() - t1
    print("Elapsed time is %0.3f" % tDiff)

    # create a queue for transferring data between the producer and the consumer
    dataQ = mproc.Queue(4)

    # start up the processes
    producerProcess = mproc.Process(target=Producer, args=[dataQ], daemon=False)
    consumerProcess = mproc.Process(target=Consumer, args=[dataQ], daemon=False)

    print("starting up processes")
    producerProcess.start()
    consumerProcess.start()

    time.sleep(10)  # let the program run for 10 seconds

    producerProcess.terminate()
    consumerProcess.terminate()
The output it produces on my machine:
Elapsed time is 0.079
starting up processes
Elapsed time is 0.859
Elapsed time is 0.861
Elapsed time is 0.878
Elapsed time is 0.863
Elapsed time is 0.758
As you can see, it is roughly 10x slower when run in the background, and I can't figure out why this would be the case. The time.sleep() calls should ensure that the other processes (the main process and the producer process) aren't doing anything while the FFT is being computed, so it should be able to use all the cores. I've checked CPU utilization in the Windows Task Manager and it sits at about 25% when numpy.fft.fft is being called heavily, in both the single-process and the multiprocess case.
Anyone have an idea what's going on?
The main problem is that your FFT call in the background process is:
fftdata = np.fft.rfft(data, n=3000*5)
rather than:
fftdata = np.fft.rfft(data, n=3000*5, axis=0)
which for me made all the difference.
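To see why the axis matters, here is a quick sketch (timings will vary by machine): with data of shape (3000, 200), the default axis=-1 computes 3000 padded transforms, while axis=0 computes only 200.

import time
import numpy as np

data = np.random.rand(3000, 200)

t1 = time.time()
np.fft.rfft(data, n=3000*5)            # default axis=-1: 3000 FFTs of length 15000
print("axis=-1: %0.3f s" % (time.time() - t1))

t1 = time.time()
np.fft.rfft(data, n=3000*5, axis=0)    # axis=0: only 200 FFTs of length 15000
print("axis=0:  %0.3f s" % (time.time() - t1))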
There are a few other things worth noting. Rather than sprinkling time.sleep() everywhere, why not just let the scheduler take care of this itself? Furthermore, rather than suspending the main process with a fixed sleep, you can use
consumerProcess.join()
and then have the producer process run dataQ.put(None) once it has finished loading the data, and break out of the loop in the consumer process, i.e.:
def Consumer(dataQ):
    while True:
        data = dataQ.get()
        if data is None:
            break
        ...
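Putting those suggestions together, a minimal sketch of the sentinel pattern might look like this (the structure mirrors the original Producer/Consumer, but the exact shutdown logic is just an illustration):

import multiprocessing as mproc
import numpy as np

def Producer(dataQ):
    for _ in range(5):
        dataQ.put(np.random.rand(3000, 200))
    dataQ.put(None)  # sentinel: tell the consumer we are done

def Consumer(dataQ):
    while True:
        data = dataQ.get()
        if data is None:
            break  # producer is finished, exit cleanly
        np.fft.rfft(data, n=3000*5, axis=0)

if __name__ == '__main__':
    dataQ = mproc.Queue(4)
    producerProcess = mproc.Process(target=Producer, args=[dataQ])
    consumerProcess = mproc.Process(target=Consumer, args=[dataQ])
    producerProcess.start()
    consumerProcess.start()
    producerProcess.join()   # no fixed sleep or terminate() needed
    consumerProcess.join()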
The ordering of results from the iterator returned by imap_unordered is arbitrary, and it doesn't seem to run faster than imap (which I checked with the following code), so why would one use this method?
from multiprocessing import Pool
import time

def square(i):
    time.sleep(0.01)
    return i ** 2

p = Pool(4)
nums = range(50)

start = time.time()
print 'Using imap'
for i in p.imap(square, nums):
    pass
print 'Time elapsed: %s' % (time.time() - start)

start = time.time()
print 'Using imap_unordered'
for i in p.imap_unordered(square, nums):
    pass
print 'Time elapsed: %s' % (time.time() - start)
Using pool.imap_unordered instead of pool.imap will not have a large effect on the total running time of your code. It might be a little faster, but not by too much.
What it may do, however, is make the interval between values being available in your iteration more even. That is, if you have operations that can take very different amounts of time (rather than the consistent 0.01 seconds you were using in your example), imap_unordered can smooth things out by yielding faster-calculated values ahead of slower-calculated values. The regular imap will delay yielding the faster ones until after the slower ones ahead of them have been computed (but this does not delay the worker processes moving on to more calculations, just the time for you to see them).
Try making your work function sleep for i*0.1 seconds, shuffling your input list and printing i in your loops. You'll be able to see the difference between the two imap versions. Here's my version (the main function and the if __name__ == '__main__' boilerplate are required to run correctly on Windows):
from multiprocessing import Pool
import time
import random

def work(i):
    time.sleep(0.1 * i)
    return i

def main():
    p = Pool(4)
    nums = range(50)
    random.shuffle(nums)

    start = time.time()
    print 'Using imap'
    for i in p.imap(work, nums):
        print i
    print 'Time elapsed: %s' % (time.time() - start)

    start = time.time()
    print 'Using imap_unordered'
    for i in p.imap_unordered(work, nums):
        print i
    print 'Time elapsed: %s' % (time.time() - start)

if __name__ == "__main__":
    main()
The imap version will have long pauses while values like 49 are being handled (taking 4.9 seconds), then it will fly over a bunch of other values (which were calculated by the other processes while we were waiting for 49 to be processed). In contrast, the imap_unordered loop will usually not pause nearly as long at one time. It will have more frequent, but shorter pauses, and its output will tend to be smoother.
imap_unordered also seems to use less memory over time than imap. At least that's what I experienced with an iterator over millions of items.
I am trying to understand how to get children to write to a parent's variables. Maybe I'm doing something wrong here, but I would have imagined that multiprocessing would have taken a fraction of the time that it is actually taking:
import multiprocessing, time

def h(x):
    h.q.put('Doing: ' + str(x))
    return x

def f_init(q):
    h.q = q

def main():
    q = multiprocessing.Queue()
    p = multiprocessing.Pool(None, f_init, [q])
    results = p.imap(h, range(1, 5))
    p.close()

    for i in range(len(range(1, 5))):
        print results.next()    # prints 1, 2, 3, 4

if __name__ == '__main__':
    start = time.time()
    main()
    print "Multiprocessed: %s seconds" % (time.time() - start)

    start = time.time()
    for i in range(1, 5):
        print i
    print "Normal: %s seconds" % (time.time() - start)

-----Results-----:
1
2
3
4
Multiprocessed: 0.0695610046387 seconds
1
2
3
4
Normal: 2.78949737549e-05 seconds    # much shorter
@Blender basically already answered your question, but as a comment. There is some overhead associated with the multiprocessing machinery, so if you incur that overhead without doing any significant work per task, multiprocessing will be slower than doing the work directly.
Try actually doing some work that parallelizes well. For example, write Python code to open a file, scan it using a regular expression, and pull out matching lines; then make a list of ten big files and time how long it takes to do all ten with multiprocessing vs. plain Python. Or write code to compute an expensive function and try that.
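As a rough sketch of that experiment (the file names and the regular expression here are just placeholders), assuming Python 3:

import re
import time
from multiprocessing import Pool

# Hypothetical inputs: ten big log files and a pattern to search for.
FILES = ['big_file_%d.log' % i for i in range(10)]
PATTERN = re.compile(r'ERROR .*timeout')

def scan(path):
    # Open one file and return the matching lines.
    with open(path, errors='ignore') as f:
        return [line for line in f if PATTERN.search(line)]

if __name__ == '__main__':
    start = time.time()
    serial = [scan(p) for p in FILES]        # plain Python, one file at a time
    print('Serial: %.2f s' % (time.time() - start))

    start = time.time()
    with Pool(4) as pool:
        parallel = pool.map(scan, FILES)     # files handled by four workers
    print('Pool:   %.2f s' % (time.time() - start))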
I have used multiprocessing.Pool() just to run a bunch of instances of an external program. I used subprocess to run an audio encoder, and it ran four instances of the encoder at once for a noticeable speedup.
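That external-program pattern looks roughly like the following sketch (the encoder command and file names are placeholders, not the original code):

import subprocess
from multiprocessing import Pool

# Hypothetical list of input files and a placeholder encoder command.
WAV_FILES = ['track01.wav', 'track02.wav', 'track03.wav', 'track04.wav']

def encode(path):
    # Each worker just blocks while one external encoder process runs.
    return subprocess.call(['oggenc', '-q', '5', path])

if __name__ == '__main__':
    with Pool(4) as pool:                    # four encoder instances at once
        return_codes = pool.map(encode, WAV_FILES)
    print(return_codes)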
Here is the code in question (a very simple crawler); the file is a list of URLs, usually more than 1000 of them.
import sys, gevent
from gevent import monkey
from gevent.pool import Pool
import httplib, socket
from urlparse import urlparse
from time import time

pool = Pool(100)
monkey.patch_all(thread=False)

count = 0
size = 0
failures = 0

global_timeout = 5
socket.setdefaulttimeout(global_timeout)

def process(ourl, mode='GET'):
    global size, failures, global_timeout, count
    try:
        url = urlparse(ourl)
        start = time()
        conn = httplib.HTTPConnection(url.netloc, timeout=global_timeout)
        conn.request(mode, ourl)
        res = conn.getresponse()
        req = res.read()
        end = time()
        bytes = len(req)
        took = end - start
        print mode, ourl, bytes, took
        size = size + len(req)
        count += 1
    except Exception, e:
        failures += 1

start = time()

gevent.core.dns_init()
print "spawning..."
for url in open('domains'):
    pool.spawn(process, url.rstrip())
print "done...joining..."
pool.join()
print "complete"

end = time()
took = end - start
rate = size / took

print "It took %.2f seconds to process %d urls." % (took, count)
print rate, " bytes/sec"
print rate / 1024, " KB/sec"
print rate / 1048576, " MB/sec"
print "--- summary ---"
print "total:", count, "failures:", failures
I get wildly different speeds when I alter the pool size:
pool = Pool(100)
I've been mulling over the idea of writing an algorithm to calculate the ideal pool size on the fly, but rather than jumping in I'd like to know whether there's something I've overlooked.
Any parallel processing will be either CPU-bound or IO-bound. From the nature of your code, it looks like at smaller pool sizes it will be IO-bound. Specifically, it will be bound by the bandwidth of your network interface and perhaps by the number of concurrently open sockets the system can sustain (I'm thinking of some versions of Windows here, where I have managed to run out of available sockets on more than one occasion).

It is possible that as you increase the pool size, the process starts tipping towards being CPU-bound (especially if you have more data processing not shown here). To keep the pool size at the optimal value you need to monitor all of these variables (number of open sockets, bandwidth utilization by your process, CPU utilization, etc.). You can do this manually, by profiling the metrics as you run the crawler and adjusting the pool size, or you can try to automate it. Whether or not something like that is possible from within Python is a different matter.
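If you do want to watch those metrics from inside the crawler, a rough sketch using the third-party psutil package (its use here is only a suggestion, not part of the crawler above) might look like this:

import time
import psutil  # third-party: pip install psutil

def log_metrics(interval=5):
    # Periodically print CPU usage, receive bandwidth and open-socket count.
    proc = psutil.Process()
    psutil.cpu_percent()                 # prime the CPU counter; first reading is meaningless
    last = psutil.net_io_counters()
    while True:
        time.sleep(interval)
        now = psutil.net_io_counters()
        recv_rate = (now.bytes_recv - last.bytes_recv) / float(interval)
        last = now
        print("CPU %5.1f%%  recv %10.0f B/s  open sockets %d" % (
            psutil.cpu_percent(),
            recv_rate,
            len(proc.connections()),
        ))

You would run something like this in a separate thread or greenlet alongside the crawl and adjust the pool size based on what it reports.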