Multiprocessing in Python refuses to run more than one process

import random
import time
import multiprocessing
import sys

start = time.time()
numbers1 = []

def NumGenerator(NumbersArray):
    while NumberCheck(NumbersArray):
        number = random.randint(0, 100)
        NumbersArray.append(number)
    end = time.time()
    print(end - start)
    print('average is: ' + str(sum(NumbersArray) / len(NumbersArray)))
    print(str(NumbersArray).replace("[", "").replace("]", ""))
    sys.exit()

def NumberCheck(NumbersArray):
    # Checks if the average of the array is 50
    if NumbersArray:
        if sum(NumbersArray) / len(NumbersArray) != 50:
            return True
        else:
            return False
    else:
        return True

process1 = multiprocessing.Process(target=NumGenerator, args=(numbers1,))
process2 = multiprocessing.Process(target=NumGenerator, args=(numbers1,))
process3 = multiprocessing.Process(target=NumGenerator, args=(numbers1,))
process4 = multiprocessing.Process(target=NumGenerator, args=(numbers1,))

process1.start()
process2.start()
process3.start()
process4.start()

process1.join()
process2.join()
process3.join()
process4.join()
This is supposed to run on 4 processes, generating random numbers between 0 and 100 and adding them to an array until the average of that array is 50. Currently it does the second part, but on just one CPU core.

Try multiprocessing.pool's ThreadPool.
It follows an API similar to multiprocessing.Pool.
Import it with: from multiprocessing.pool import ThreadPool
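For illustration, here is a minimal sketch of how the question's four processes could be moved onto a ThreadPool. It reuses the question's logic, but drops the sys.exit() call in favour of a plain return and prints the result once from the main thread (adjustments not present in the original code):
import random
import time
from multiprocessing.pool import ThreadPool

def number_check(numbers):
    # keep generating until the average of the list is exactly 50
    return not numbers or sum(numbers) / len(numbers) != 50

def num_generator(numbers):
    # the ThreadPool workers are threads, so they all share the same list
    while number_check(numbers):
        numbers.append(random.randint(0, 100))
    return numbers

if __name__ == '__main__':
    start = time.time()
    numbers1 = []
    pool = ThreadPool(processes=4)          # same API as multiprocessing.Pool
    for _ in range(4):
        pool.apply_async(num_generator, (numbers1,))
    pool.close()
    pool.join()
    print(time.time() - start)
    print('average is: ' + str(sum(numbers1) / len(numbers1)))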

Related

How to call a pool with sleep between executions within a multiprocessing process in Python?

In the main function, I am starting a process to run the imp_workload() method in parallel for each DP_WORKLOAD:
#!/usr/bin/env python
import multiprocessing
import subprocess

if __name__ == "__main__":
    for DP_WORKLOAD in DP_WORKLOAD_NAME:
        p1 = multiprocessing.Process(target=imp_workload, args=(DP_WORKLOAD, DP_DURATION_SECONDS, DP_CONCURRENCY, ))
        p1.start()
However, inside this imp_workload() method, I need the import_command_run() method to run a number of processes (equal to the variable DP_CONCURRENCY), but with a sleep of 60 seconds before each new execution.
This is the sample code I have written.
def imp_workload(DP_WORKLOAD, DP_DURATION_SECONDS, DP_CONCURRENCY):
    while DP_DURATION_SECONDS > 0:
        pool = multiprocessing.Pool(processes=DP_CONCURRENCY)
        for j in range(DP_CONCURRENCY):
            pool.apply_async(import_command_run, args=(DP_WORKLOAD, dp_workload_cmd, j,)
            # Sleep for 1 minute
            time.sleep(60)
        pool.close()
        # Clean the schemas after import is completed
        clean_schema(DP_WORKLOAD)
        # Sleep for 1 minute
        time.sleep(60)

def import_command_run(DP_WORKLOAD):
    abccmd = 'impdp admin/DP_PDB_ADMIN_PASSWORD#DP_PDB_FULL_NAME SCHEMAS=ABC'
    defcmd = 'impdp admin/DP_PDB_ADMIN_PASSWORD#DP_PDB_FULL_NAME SCHEMAS=DEF'
    # any of the above commands
    run_imp_cmd(eval(dp_workload_cmd))

def run_imp_cmd(cmd):
    output = subprocess.Popen([cmd], shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
    stdout, stderr = output.communicate()
    return stdout
When I tried running it in this format, I got the following error:
time.sleep(60)
^
SyntaxError: invalid syntax
So, how can I kick off the 'abccmd' job DP_CONCURRENCY times in parallel, with a sleep of 1 minute between each job, while each of these pools also runs in its own process?
I am working on Python 2.7.5 (due to restrictions I can't use Python 3.x, so I would appreciate answers specific to Python 2.x).
P.S. This is a very large and complex script, so I have posted only the relevant excerpts. Please ask for more details if necessary (or if anything is unclear from this much).
Let me offer two possibilities:
Possibility 1
Here is an example of how you would kick off a worker function in parallel with DP_CURRENCY == 4 possible arguments (0, 1, 2 and 3), cycling over and over for up to DP_DURATION_SECONDS seconds with a pool size of DP_CURRENCY. As soon as a job completes it is restarted, but with the guarantee that at least TIME_BETWEEN_SUBMITS == 60 seconds have elapsed between successive submissions of the same argument.
from __future__ import print_function
from multiprocessing import Pool
import time
from queue import SimpleQueue

TIME_BETWEEN_SUBMITS = 60

def worker(i):
    print(i, 'started at', time.time())
    time.sleep(40)
    print(i, 'ended at', time.time())
    return i  # the argument

def main():
    q = SimpleQueue()

    def callback(result):
        # every time a job finishes, put result (the argument) on the queue
        q.put(result)

    DP_CURRENCY = 4
    DP_DURATION_SECONDS = TIME_BETWEEN_SUBMITS * 10
    pool = Pool(DP_CURRENCY)
    t = time.time()
    expiration = t + DP_DURATION_SECONDS
    # kick off initial tasks:
    start_times = [None] * DP_CURRENCY
    for i in range(DP_CURRENCY):
        pool.apply_async(worker, args=(i,), callback=callback)
        start_times[i] = time.time()
    while True:
        i = q.get()  # wait for a job to complete
        t = time.time()
        if t >= expiration:
            break
        time_to_wait = TIME_BETWEEN_SUBMITS - (t - start_times[i])
        if time_to_wait > 0:
            time.sleep(time_to_wait)
        pool.apply_async(worker, args=(i,), callback=callback)
        start_times[i] = time.time()
    # wait for all jobs to complete:
    pool.close()
    pool.join()

# required by Windows:
if __name__ == '__main__':
    main()
Possibility 2
This is closer to what you had, in that TIME_BETWEEN_SUBMITS == 60 seconds of sleeping is done between the submissions of any two jobs. But to me this doesn't make as much sense. If, for example, the worker function only took 50 seconds to complete, you would not be doing any parallel processing at all. In fact, each job would need to take at least 180 seconds (i.e. (DP_CURRENCY - 1) * TIME_BETWEEN_SUBMITS) to complete in order to have all 4 processes in the pool busy running jobs at the same time.
from __future__ import print_function
from multiprocessing import Pool
import time
from queue import SimpleQueue

TIME_BETWEEN_SUBMITS = 60

def worker(i):
    print(i, 'started at', time.time())
    # A task must take at least 180 seconds to run to have 4 tasks running in parallel if
    # you wait 60 seconds between starting each successive task:
    # take 182 seconds to run
    time.sleep(3 * TIME_BETWEEN_SUBMITS + 2)
    print(i, 'ended at', time.time())
    return i  # the argument

def main():
    q = SimpleQueue()

    def callback(result):
        # every time a job finishes, put result (the argument) on the queue
        q.put(result)

    # at most 4 tasks at a time but only if worker takes at least 3 * TIME_BETWEEN_SUBMITS
    DP_CURRENCY = 4
    DP_DURATION_SECONDS = TIME_BETWEEN_SUBMITS * 10
    pool = Pool(DP_CURRENCY)
    t = time.time()
    expiration = t + DP_DURATION_SECONDS
    # kick off initial tasks:
    for i in range(DP_CURRENCY):
        if i != 0:
            time.sleep(TIME_BETWEEN_SUBMITS)
        pool.apply_async(worker, args=(i,), callback=callback)
        time_last_job_submitted = time.time()
    while True:
        i = q.get()  # wait for a job to complete
        t = time.time()
        if t >= expiration:
            break
        time_to_wait = TIME_BETWEEN_SUBMITS - (t - time_last_job_submitted)
        if time_to_wait > 0:
            time.sleep(time_to_wait)
        pool.apply_async(worker, args=(i,), callback=callback)
        time_last_job_submitted = time.time()
    # wait for all jobs to complete:
    pool.close()
    pool.join()

# required by Windows:
if __name__ == '__main__':
    main()

Multiprocessing: callback on condition?

I'm using this code as a template (KILLING IT section)
https://stackoverflow.com/a/36962624/9274778
So I've solved this for now; I changed the code to the following:
import random
from time import sleep

def worker(i, ListOfData):
    print "%d started" % i
    # MyCalculations with ListOfData
    x = ListOfData * Calcs
    if x > 0.95:
        return ListOfDataRow, True
    else:
        return ListOfDataRow, False

# callback running only in main
def quit(arg):
    if arg[1] == True:
        p.terminate()  # kill all pool workers

if __name__ == "__main__":
    import multiprocessing as mp
    Loops = len(ListOfData) / 25
    Start = 0
    End = 25
    pool = mp.Pool()
    for y in range(0, Loops):
        results = [pool.apply(worker, args=(i, ListOfData[x]), callback=quit)
                   for y in range(0, len(ListofData))]
        for c in results:
            if c[1] == True:
                break
        Start = Start + 25
        End = End + 25
So I chunk my data frame (assume for now that my ListOfData is always divisible by 25) and send it off to multiprocessing. I've found that, for my PC's performance, groups of 25 work best. If the first set doesn't return a TRUE, I move on to the next chunk.
I couldn't use the async method, as the jobs all finished at different times and sometimes I'd get a TRUE back that was further down the list (not what I wanted).
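For reference, here is a runnable sketch of the chunking approach described above; calc(), THRESHOLD and the sample ListOfData are hypothetical stand-ins for the real calculation and data:
import multiprocessing as mp

THRESHOLD = 0.95
CHUNK = 25

def calc(row):
    return row                          # hypothetical stand-in for the real calculation

def worker(row):
    # "MyCalculations with ListOfData" from the snippet above
    return row, calc(row) > THRESHOLD

if __name__ == "__main__":
    ListOfData = [0.1, 0.5, 0.99, 0.3] * 25   # hypothetical data, divisible by 25
    pool = mp.Pool()
    hit = None
    for start in range(0, len(ListOfData), CHUNK):
        chunk = ListOfData[start:start + CHUNK]
        # pool.map blocks until the whole chunk is finished, so the next chunk is
        # only submitted when nothing in this one crossed the threshold
        for row, above in pool.map(worker, chunk):
            if above:
                hit = row
                break
        if hit is not None:
            break
    pool.close()
    pool.join()
    print(hit)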

Return value while executing parallel processes with Python

I have been using multiprocessing with Python, and I have used queues successfully, but there are some variables that I need to monitor (from the main process) while the child process is still executing.
I know that it is not good practice to use global variables, but not even that approach has worked.
Can anyone point me in the right direction?
Thanks in advance,
GCCruz
Addendum:
I am posting a simple example of what I would like to do:
import multiprocessing
import time

def sampleprocess(array, count):
    '''process with heavy image processing in a loop'''
    for i in range(count):
        # Do processing on this array that outputs a given variable
        sample_variable = count*10  # I would like to monitor this variable

if __name__ == '__main__':
    p = multiprocessing.Process(target=sampleprocess, args=(array, 1000,))
    p.start()
    # continuously monitor the dummy variable that is being computed on the process
    while sample_variable < 1000:
        time.sleep(0.1)
        print ' Still less than 1000'
Here are a few programs that might demonstrate how you could implement a solution:
Example 1
from multiprocessing import *
import time

def main():
    array, loops = list(range(1000)), 1000
    variable = Value('I')
    p = Process(target=sample, args=(array, loops, variable))
    p.start()
    while variable.value < 1000:
        print('Still less than 1000')
        time.sleep(0.005)
    print('Must be at least 1000')
    p.join()
    print('Value is', variable.value)

def sample(array, loops, variable):
    for number in range(loops):
        variable.value = number * 10
    print('Sample is done')

if __name__ == '__main__':
    main()
Example 2
from multiprocessing import *
import time

def main():
    processes = 10
    array, loops = list(range(1000)), 1000
    shared = Array('I', processes)
    p_array = []
    for index in range(processes):
        p = Process(target=sample, args=(array, loops, shared, index))
        p.start()
        p_array.append(p)
    while True:
        less_than_1000 = [p for p in enumerate(shared[:]) if p[1] < 1000]
        if less_than_1000:
            print(less_than_1000)
            time.sleep(0.001)
        else:
            break
    print('No process in less than 1000')
    for p in p_array:
        p.join()
    print(shared[:])

def sample(array, loops, p_array, index):
    time.sleep(1)
    for number in range(loops):
        time.sleep(0.001)
        p_array[index] = number * 10
    print('Sample is done')

if __name__ == '__main__':
    main()
Example 3
from multiprocessing import *
import time

def main():
    array, loops = list(range(1000)), 1000
    with Manager() as manager:
        variable = manager.Value('I', 0)
        p = Process(target=sample, args=(array, loops, variable))
        p.start()
        while variable.value < 1000:
            print('Still less than 1000')
            time.sleep(0.005)
        print('Must be at least 1000')
        p.join()
        print('Value is', variable.value)

def sample(array, loops, variable):
    for number in range(loops):
        variable.value = number * 10
    print('Sample is done')

if __name__ == '__main__':
    main()
Example 4
from multiprocessing import *
import time

def main():
    array, loops = list(range(1000)), 1000
    event = Event()
    p = Process(target=sample, args=(array, loops, event))
    p.start()
    event.wait()
    print('Must be at least 1000')
    p.join()

def sample(array, loops, event):
    for number in range(loops):
        if number >= 100 and not event.is_set():
            event.set()
        time.sleep(0.001)
    print('Sample is done')

if __name__ == '__main__':
    main()
As you can see, there are a variety of ways to accomplish what you are asking.
One option is to use multiprocessing's Value and Array for shared data objects.
https://docs.python.org/2/library/multiprocessing.html#multiprocessing-managers
Here is a working example based on your sample code. If more than one process were writing, a lock would be needed to synchronize the writes.
from multiprocessing import Process, Value, Array, Lock
import time

def sampleprocess(s, count, lock):
    '''process with heavy image processing in a loop'''
    for i in range(count.value):
        # Do processing on this array that outputs a given variable
        sample_variable = count.value * 10  # I would like to monitor this variable
        with lock:
            s.value = i * 10

if __name__ == '__main__':
    val = Value('i', 1000)
    sample_variable = Value('i', 0)
    lock = Lock()
    p = Process(target=sampleprocess, args=(sample_variable, val, lock))
    p.start()
    # continuously monitor the dummy variable that is being computed on the process
    while sample_variable.value < 1000:
        time.sleep(0.1)
        print ' Still less than 1000'

Stopping the processes spawned using pool.apply_async() before their completion

Suppose we have some processes spawned using pool.apply_async(). How can one stop all other processes when either one of them returns a value?
Also, is this the right way to get the running time of an algorithm?
Here's the sample code:
import timeit
import multiprocessing as mp

data = range(1, 200000)

def func(search):
    for val in data:
        if val >= search:
            # Doing something such that other processes stop ????
            return val*val

if __name__ == "__main__":
    cpu_count = mp.cpu_count()
    pool = mp.Pool(processes=cpu_count)
    output = []
    start = timeit.default_timer()
    results = []
    while cpu_count >= 1:
        results.append(pool.apply_async(func, (150000,)))
        cpu_count = cpu_count - 1
    output = [p.get() for p in results]
    stop = timeit.default_timer()
    print output
    pool.close()
    pool.join()
    print "Running Time : " + str(stop - start) + " seconds"
I've never done this, but the Python docs seem to give an idea of how it should be done.
Refer: https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Process.terminate
In your snippet, I would do this:
while cpu_count >= 1:
    if len(results) > 0:
        pool.terminate()
        pool.close()
        break
    results.append(pool.apply_async(func, (150000,)))
    cpu_count = cpu_count - 1
Also, your timing method seems okay. I would use time.time() at the start and the stop and then print the difference, simply because that's what I'm used to.
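For what it's worth, here is a hedged sketch (not from the answer above) of how the same terminate() idea can be driven from an apply_async callback: the callback fires in the main process as soon as the first worker returns, and the main thread then terminates the pool.
import timeit
import threading
import multiprocessing as mp

data = range(1, 200000)

def func(search):
    for val in data:
        if val >= search:
            return val * val

if __name__ == "__main__":
    pool = mp.Pool(processes=mp.cpu_count())
    first_result = []
    got_result = threading.Event()

    def on_result(value):
        # runs in a helper thread of the main process when a task finishes
        first_result.append(value)
        got_result.set()

    start = timeit.default_timer()
    for _ in range(mp.cpu_count()):
        pool.apply_async(func, (150000,), callback=on_result)
    got_result.wait()        # block until the first worker returns a value
    pool.terminate()         # stop the remaining workers
    pool.join()
    stop = timeit.default_timer()
    print first_result[0]
    print "Running Time : " + str(stop - start) + " seconds"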

Python saving execution time when multithreading

I am having a problem when multithreading and using queues in python 2.7. I want the code with threads to take about half as long as the one without, but I think I'm doing something wrong. I am using a simple looping technique for the fibonacci sequence to best show the problem.
Here is the code without threads and queues. It printed 19.9190001488 seconds as its execution time.
import time
start_time = time.time()

def fibonacci(priority, num):
    if num == 1 or num == 2:
        return 1
    a = 1
    b = 1
    for i in range(num-2):
        c = a + b
        b = a
        a = c
    return c

print fibonacci(0, 200000)
print fibonacci(1, 100)
print fibonacci(2, 200000)
print fibonacci(3, 2)

print("%s seconds" % (time.time() - start_time))
Here is the code with threads and queues. It printed 21.7269999981 seconds as its execution time.
import time
start_time = time.time()
from Queue import *
from threading import *

numbers = [200000, 100, 200000, 2]
q = PriorityQueue()
threads = []

def fibonacci(priority, num):
    if num == 1 or num == 2:
        q.put((priority, 1))
        return
    a = 1
    b = 1
    for i in range(num-2):
        c = a + b
        b = a
        a = c
    q.put((priority, c))
    return

for i in range(4):
    priority = i
    num = numbers[i]
    t = Thread(target=fibonacci, args=(priority, num))
    threads.append(t)

#print threads

for t in threads:
    t.start()

for t in threads:
    t.join()

while not q.empty():
    ans = q.get()
    q.task_done()
    print ans[1]

print("%s seconds" % (time.time() - start_time))
What I thought would happen is that the multithreaded code would take about half as long as the code without threads. Essentially I thought all the threads would work at the same time, so the two threads calculating the Fibonacci number at 200,000 would finish at the same time, making execution roughly twice as fast as the code without threads. Apparently that's not what happened. Am I doing something wrong? I just want to execute all threads at the same time, print the results in the order the threads were started, and have the longest-running thread determine the overall execution time.
EDIT:
I updated my code to use processes, but now the results aren't being printed. Only an execution time of 0.163000106812 seconds is showing. Here is the new code:
import time
start_time = time.time()
from Queue import *
from multiprocessing import *

numbers = [200000, 100, 200000, 2]
q = PriorityQueue()
processes = []

def fibonacci(priority, num):
    if num == 1 or num == 2:
        q.put((priority, 1))
        return
    a = 1
    b = 1
    for i in range(num-2):
        c = a + b
        b = a
        a = c
    q.put((priority, c))
    return

for i in range(4):
    priority = i
    num = numbers[i]
    p = Process(target=fibonacci, args=(priority, num))
    processes.append(p)

#print processes

for p in processes:
    p.start()

for p in processes:
    p.join()

while not q.empty():
    ans = q.get()
    q.task_done()
    print ans[1]

print("%s seconds" % (time.time() - start_time))
You've run into one of the basic limiting factors of the CPython implementation, the Global Interpreter Lock or GIL. Effectively this serializes your program: your threads take turns executing, with one thread owning the GIL while the other threads wait for it to come free.
One solution would be to use separate processes. Each process has its own GIL, so they can execute in parallel. Probably the easiest way to do this is to use Python's multiprocessing module as a replacement for the threading module.
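A minimal sketch of that suggestion (not the answerer's exact code): the plain Queue.PriorityQueue in the edited version is not shared between processes, so the parent's queue stays empty; a multiprocessing.Pool returns results to the parent directly, and map() preserves submission order.
import time
from multiprocessing import Pool

def fibonacci(num):
    if num == 1 or num == 2:
        return 1
    a = b = 1
    for _ in range(num - 2):
        a, b = a + b, a
    return a

if __name__ == '__main__':
    start_time = time.time()
    numbers = [200000, 100, 200000, 2]
    pool = Pool(processes=4)
    # map() returns results in the order the inputs were submitted
    for result in pool.map(fibonacci, numbers):
        print(result)
    pool.close()
    pool.join()
    print("%s seconds" % (time.time() - start_time))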
