Child process hanging multiprocessing - python

I am having a problem where child processes are hanging in my Python application: only 4 of 16 processes have finished, and all of these processes add items to a multiprocessing queue. According to the Python docs (https://docs.python.org/3/library/multiprocessing.html#pipes-and-queues):
Warning
As mentioned above, if a child process has put items on a queue (and
it has not used JoinableQueue.cancel_join_thread), then that process
will not terminate until all buffered items have been flushed to the
pipe.
This means that if you try joining that process you may get a deadlock
unless you are sure that all items which have been put on the queue
have been consumed. Similarly, if the child process is non-daemonic
then the parent process may hang on exit when it tries to join all its
non-daemonic children.
Note that a queue created using a manager does not have this issue.
See Programming guidelines.
I believe this may be my problem; however, I do a get() off the queue before I join. I am not sure what other alternatives I can take.
def RunInThread(dictionary):
    startedProcesses = list()
    resultList = list()
    output = Queue()
    scriptList = ThreadChunk(dictionary, 16) # last number determines how many threads
    for item in scriptList:
        if __name__ == '__main__':
            proc = Process(target=CreateScript, args=(item, output))
            startedProcesses.append(proc)
            proc.start()
    while not output.empty():
        resultList.append(output.get())
    #we must wait for the processes to finish before continuing
    for process in startedProcesses:
        process.join()
    print "finished"
#defines chunk of data each thread will process
def ThreadChunk(seq, num):
    avg = len(seq) / float(num)
    out = []
    last = 0.0
    while last < len(seq):
        out.append(seq[int(last):int(last + avg)])
        last += avg
    return out
def CreateScript(scriptsToGenerate, queue):
    start = time.clock()
    for script in scriptsToGenerate:
        ...
        queue.put([script['timeInterval'], script['script']])
    print time.clock() - start
    print "I have finished"

The issue with your code is that while not output.empty() is not reliable (see the documentation for Queue.empty()). You might also run into the scenario where the interpreter hits while not output.empty() before the processes you created have finished their initialization (so the Queue really is empty at that point).
Since you know exactly how many items will be put in the queue (i.e. len(dictionary)), you can read that number of items from the queue:
def RunInThread(dictionary):
    startedProcesses = list()
    output = Queue()
    scriptList = ThreadChunk(dictionary, 16) # last number determines how many threads
    for item in scriptList:
        proc = Process(target=CreateScript, args=(item, output))
        startedProcesses.append(proc)
        proc.start()
    resultList = [output.get() for _ in xrange(len(dictionary))]
    #we must wait for the processes to finish before continuing
    for process in startedProcesses:
        process.join()
    print "finished"
If at some point you modify your script and no longer know how many items will be produced, you can use Queue.get with a reasonable timeout:
def RunInThread(dictionary):
    startedProcesses = list()
    resultList = list()
    output = Queue()
    scriptList = ThreadChunk(dictionary, 16) # last number determines how many threads
    for item in scriptList:
        proc = Process(target=CreateScript, args=(item, output))
        startedProcesses.append(proc)
        proc.start()
    try:
        while True:
            resultList.append(output.get(True, 2)) # block with a 2 second timeout, just in case
    except queue.Empty: # the Empty exception lives in the queue module (Queue in Python 2)
        pass # no more items produced
    #we must wait for the processes to finish before continuing
    for process in startedProcesses:
        process.join()
    print "finished"
You might need to adjust the timeout depending on the actual time of the computation in your CreateScript.
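If you would rather not guess a timeout at all, another common pattern is to have each worker put a sentinel such as None on the queue when it is done, and have the parent keep reading until it has seen one sentinel per process. This is not part of the original answer, and CreateScriptWithSentinel is a made-up wrapper around the existing CreateScript, so treat it as a sketch only:
def RunInThread(dictionary):
    startedProcesses = list()
    resultList = list()
    output = Queue()
    scriptList = ThreadChunk(dictionary, 16)
    for item in scriptList:
        proc = Process(target=CreateScriptWithSentinel, args=(item, output))
        startedProcesses.append(proc)
        proc.start()
    finishedWorkers = 0
    while finishedWorkers < len(startedProcesses):
        result = output.get()        # blocks until a worker produces something
        if result is None:           # a worker's "I am done" marker
            finishedWorkers += 1
        else:
            resultList.append(result)
    for process in startedProcesses:
        process.join()
    print "finished"

# hypothetical wrapper around the original CreateScript: same work, plus a sentinel
def CreateScriptWithSentinel(scriptsToGenerate, queue):
    CreateScript(scriptsToGenerate, queue)
    queue.put(None)
Because each worker's sentinel is the last thing it puts, seeing all the sentinels guarantees that every buffered item has already been consumed, so the joins cannot deadlock.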

Related

A way to wait for currently running tasks to finish then stop in multiprocessing Pool

I have a large number of tasks (40,000 to be exact) that I am using a Pool to run in parallel. To maximize efficiency, I pass the list of all tasks at once to starmap and let them run.
I would like it so that if my program is interrupted with Ctrl+C, currently running tasks are allowed to finish but new ones are not started. I have figured out the signal handling part to handle the Ctrl+C interruption just fine using the recommended method, and this works well (at least with Python 3.6.9, which I am using):
import os
import signal
import random as rand
import multiprocessing as mp

def init():
    signal.signal(signal.SIGINT, signal.SIG_IGN)

def child(a, b, c):
    st = rand.randrange(5, 20+1)
    print("Worker thread", a+1, "sleep for", st, "...")
    os.system("sleep " + str(st))

pool = mp.Pool(initializer=init)
try:
    pool.starmap(child, [(i, 2*i, 3*i) for i in range(10)])
    pool.close()
    pool.join()
    print("True exit!")
except KeyboardInterrupt:
    pool.terminate()
    pool.join()
    print("Interrupted exit!")
The problem is that Pool seems to have no function that lets the currently running tasks complete and then stops; it only has terminate and close. In the example above I use terminate, but this is not what I want, as it immediately terminates all running tasks (whereas I want to let the currently running tasks run to completion). On the other hand, close merely prevents adding more tasks, and calling close then join will wait for all pending tasks to complete (40,000 of them in my real case), whereas I only want the currently running tasks to finish, not all of them.
I could gradually add my tasks one by one or in chunks, so that I could use close and join when interrupted, but this seems less efficient unless there is a way to manually add a new task as soon as one finishes (which I'm not seeing how to do from the Pool documentation). It really seems like my use case should be common and that Pool should have a function for this, but I have not seen this question asked anywhere (or maybe I'm just not searching for the right thing).
Does anyone know how to accomplish this easily?
I tried to do something similar with concurrent.futures - see the last code block in this answer: it attempts to throttle adding tasks to the pool and only adds new tasks as other tasks complete. You could change the logic to fit your needs. Maybe keep the number of pending work items slightly greater than the number of workers so you don't starve the executor. Something like:
import concurrent.futures
import random as rand
import signal   # needed by child below
import sys      # needed for sys.stderr
import time

def child(*args, n=0):
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    a, b, c = args
    st = rand.randrange(1, 5)
    time.sleep(st)
    x = f"Worker {n} thread {a+1} slept for {st} - args:{args}"
    return (n, x)

if __name__ == '__main__':
    nworkers = 5  # ncpus?
    results = []
    fs = []
    with concurrent.futures.ProcessPoolExecutor(max_workers=nworkers) as executor:
        data = ((i, 2*i, 3*i) for i in range(100))
        for n, args in enumerate(data):
            try:
                # limit pending tasks
                while len(executor._pending_work_items) >= nworkers + 2:
                    # wait till one completes and get the result
                    futures = concurrent.futures.wait(fs, return_when=concurrent.futures.FIRST_COMPLETED)
                    #print(futures)
                    results.extend(future.result() for future in futures.done)
                    print(f'{len(results)} results so far')
                    fs = list(futures.not_done)
                print(f'add a new task {n}')
                fs.append(executor.submit(child, *args, **{'n': n}))
            except KeyboardInterrupt as e:
                print('ctrl-c!!', file=sys.stderr)
                # don't add any more tasks
                break
        # get leftover results as they finish
        for future in concurrent.futures.as_completed(fs):
            print(f'{len(executor._pending_work_items)} tasks pending:')
            result = future.result()
            results.append(result)
    results.sort()
    # separate the results from the value used to sort
    for n, result in results:
        print(result)
Here is a way to get the results sorted in submission order without modifying the task. It uses a dictionary to relate each future to its submission order and uses it for the sort key.
# same imports as above (concurrent.futures, random as rand, signal, sys, time)
def child(*args):
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    a, b, c = args
    st = rand.randrange(1, 5)
    time.sleep(st)
    x = f"Worker thread {a+1} slept for {st} - args:{args}"
    return x

if __name__ == '__main__':
    nworkers = 5  # ncpus?
    sort_dict = {}
    results = []
    fs = []
    with concurrent.futures.ProcessPoolExecutor(max_workers=nworkers) as executor:
        data = ((i, 2*i, 3*i) for i in range(100))
        for n, args in enumerate(data):
            try:
                # limit pending tasks
                while len(executor._pending_work_items) >= nworkers + 2:
                    # wait till one completes and grab it
                    futures = concurrent.futures.wait(fs, return_when=concurrent.futures.FIRST_COMPLETED)
                    results.extend(future for future in futures.done)
                    print(f'{len(results)} futures completed so far')
                    fs = list(futures.not_done)
                future = executor.submit(child, *args)
                fs.append(future)
                print(f'task {n} added - future:{future}')
                sort_dict[future] = n
            except KeyboardInterrupt as e:
                print('ctrl-c!!', file=sys.stderr)
                # don't add any more tasks
                break
        # get leftover futures as they finish
        for future in concurrent.futures.as_completed(fs):
            print(f'{len(executor._pending_work_items)} tasks pending:')
            results.append(future)
    # sort the futures
    results.sort(key=lambda f: sort_dict[f])
    # get the results
    for future in results:
        print(future.result())
You could also just add an attribute to each future and sort on that (no need for the dictionary):
...
future = executor.submit(child, *args)
# add an attribute to the future that can be sorted on
future.submitted = n
fs.append(future)
...
results.sort(key=lambda f: f.submitted)
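If you would rather stay with multiprocessing.Pool, the "add tasks gradually" idea from the question can be sketched the same way with apply_async. This is a rough sketch, not from the original answer: child is a stand-in worker, and the backlog window of nworkers * 2 is an arbitrary choice.
import multiprocessing as mp
import random
import signal
import time

def init():
    # workers ignore Ctrl+C; only the parent reacts to it
    signal.signal(signal.SIGINT, signal.SIG_IGN)

def child(a, b, c):
    time.sleep(random.randrange(1, 5))      # stand-in for real work
    return (a, b, c)

if __name__ == '__main__':
    nworkers = 5
    results = []
    pending = []
    pool = mp.Pool(nworkers, initializer=init)
    try:
        for i in range(100):
            # keep the submitted-but-unfinished backlog small, so that an
            # interrupt only has a handful of tasks left to wait for
            while len(pending) >= nworkers * 2:
                pending[0].wait(0.2)                 # give the oldest task a moment
                still_running = []
                for r in pending:
                    if r.ready():
                        results.append(r.get())
                    else:
                        still_running.append(r)
                pending = still_running
            pending.append(pool.apply_async(child, (i, 2*i, 3*i)))
        print("all tasks submitted")
    except KeyboardInterrupt:
        print("interrupted: waiting only for the small submitted backlog")
    pool.close()      # no new tasks will be accepted
    pool.join()       # wait for whatever was already submitted
    for r in pending:
        results.append(r.get())
    print("%d results collected" % len(results))
Because submissions are throttled, close() followed by join() after a Ctrl+C only has to wait for the few tasks already handed to the pool, which is close to the "let running tasks finish" behaviour the question asks for.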

Problem trying to make two child processes share the load of processing the same resource

I'm messing around with the python multiprocessing module, but something is not working as I was expecting, so now I'm a little confused.
In a python script, I create two child processes so they can work with the same resource. I was thinking that they were going to "share" the load more or less equally, but it seems that, instead, one of the processes executes just once while the other one processes almost everything.
To test it, I wrote the following code:
#!/usr/bin/python

import os
import multiprocessing

# Worker function
def worker(queueA, queueB):
    while(queueA.qsize() != 0):
        item = queueA.get()
        item = "item: " + item + ". processed by worker " + str(os.getpid())
        queueB.put(item)
    return

# IPC Manager
manager = multiprocessing.Manager()
queueA = multiprocessing.Queue()
queueB = multiprocessing.Queue()

# Fill queueA with data
for i in range(0, 10):
    queueA.put("hello" + str(i+1))

# Create processes
process1 = multiprocessing.Process(target = worker, args = (queueA, queueB,))
process2 = multiprocessing.Process(target = worker, args = (queueA, queueB,))

# Call processes
process1.start()
process2.start()

# Wait for processes to stop processing
process1.join()
process2.join()

for i in range(0, queueB.qsize()):
    print queueB.get()
And that prints the following:
item: hello1. processed by worker 11483
item: hello3. processed by worker 11483
item: hello4. processed by worker 11483
item: hello5. processed by worker 11483
item: hello6. processed by worker 11483
item: hello7. processed by worker 11483
item: hello8. processed by worker 11483
item: hello9. processed by worker 11483
item: hello10. processed by worker 11483
item: hello2. processed by worker 11482
As you can see, one of the processes works with just one of the elements, and it doesn't continue to get more elements of the queue, while the other has to work with everything else.
I'm thinking that this is not correct, or at least not what I expected. Could you tell me which is the correct way of implementing this idea?
You're right that they won't be exactly equal, but mostly that's because your testing sample is so small. It takes time for each process to get started and start processing. The time it takes to process an item in the queue is extremely low and therefore one can quickly process 9 items before the other gets through one.
I tested this below (in Python 3, but it should apply to 2.7 as well; just change the print() function to a print statement):
import os
import multiprocessing

# Worker function
def worker(queueA, queueB):
    for item in iter(queueA.get, 'STOP'):
        out = str(os.getpid())
        queueB.put(out)
    return

# IPC Manager
manager = multiprocessing.Manager()
queueA = multiprocessing.Queue()
queueB = multiprocessing.Queue()

# Fill queueA with data
for i in range(0, 1000):
    queueA.put("hello" + str(i+1))

# Create processes
process1 = multiprocessing.Process(target = worker, args = (queueA, queueB,))
process2 = multiprocessing.Process(target = worker, args = (queueA, queueB,))

# Call processes
process1.start()
process2.start()

queueA.put('STOP')
queueA.put('STOP')

# Wait for processes to stop processing
process1.join()
process2.join()

all = {}
for i in range(1000):
    item = queueB.get()
    if item not in all:
        all[item] = 1
    else:
        all[item] += 1
print(all)
My output (a count of how many were done from each process):
{'18376': 537,
'18377': 463}
While they aren't exactly the same, over longer runs they will get closer and closer to being equal.
Edit:
Another way to confirm this is to add a time.sleep(3) inside the worker function:
import time  # needed for time.sleep

def worker(queueA, queueB):
    for item in iter(queueA.get, 'STOP'):
        time.sleep(3)
        out = str(os.getpid())
        queueB.put(out)
    return
I ran a range(10) test like in your original example and got:
{'18428': 5,
'18429': 5}
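One more hazard worth flagging, which the original answer does not address: the question's while queueA.qsize() != 0 check can race. Both workers may see a non-empty queue, then one of them takes the last item and the other blocks forever inside get(), which in turn hangs the join(). A defensive rewrite of the question's worker using a non-blocking get is sketched below; the 'STOP' sentinel used above is just as good a fix. It relies on queueA being fully populated before the workers start, as in the question.
import os
from Queue import Empty   # Python 3: from queue import Empty

def worker(queueA, queueB):
    while True:
        try:
            item = queueA.get_nowait()   # never blocks, so no worker can hang on an empty queue
        except Empty:
            return                       # nothing left: exit instead of blocking in get()
        queueB.put("item: " + item + ". processed by worker " + str(os.getpid()))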

How to implement a dynamic amount of concurrent threads?

I am launching concurrent threads doing some stuff:
concurrent = 10

q = Queue(concurrent * 2)
for j in range(concurrent):
    t = threading.Thread(target=doWork)
    t.daemon = True
    t.start()

try:
    # process each line and assign it to an available thread
    for line in call_file:
        q.put(line)
    q.join()
except KeyboardInterrupt:
    sys.exit(1)
At the same time I have a distinct thread counting time:
def printit():
    threading.Timer(1.0, printit).start()
    print current_status

printit()
I would like to increase (or decrease) the number of concurrent threads in the main process, say every minute. I can make a time counter in the timer thread and have it do something every minute, but how do I change the number of concurrent threads in the main process?
Is it possible (and if so, how) to do that?
This is my worker:
def UpdateProcesses(start,processnumber,CachesThatRequireCalculating,CachesThatAreBeingCalculated,CacheDict,CacheLock,IdleLock,FileDictionary,MetaDataDict,CacheIndexDict):
    NewPool()
    while start[processnumber]:
        IdleLock.wait()
        while len(CachesThatRequireCalculating)>0 and start[processnumber] == True:
            CacheLock.acquire()
            try:
                cacheCode = CachesThatRequireCalculating[0] # The list can be empty if another process takes the last item during the CacheLock
                CachesThatRequireCalculating.remove(cacheCode)
                print cacheCode,"starts processing by",processnumber,"process"
            except:
                CacheLock.release()
            else:
                CacheLock.release()
                CachesThatAreBeingCalculated.append(cacheCode[:3])
                Array,b,f = TIPP.LoadArray(FileDictionary[cacheCode[:2]]) #opens the dask array
                Array = ((Array[:,:,CacheIndexDict[cacheCode[:2]][cacheCode[2]]:CacheIndexDict[cacheCode[:2]][cacheCode[2]+1]].compute()/2.**(MetaDataDict[cacheCode[:2]]["Bit Depth"])*255.).astype(np.uint16)).transpose([1,0,2]) #slices and calculates the array
                f.close() #close the file
                if CachesThatAreBeingCalculated.count(cacheCode[:3]) != 0: #if not, this cache is not needed anymore (the cacheCode is removed by a wavelength change)
                    CachesThatAreBeingCalculated.remove(cacheCode[:3])
                    try: #If the object is not available the first time, try a second time
                        CacheDict[cacheCode[:3]] = Array
                    except:
                        CacheDict[cacheCode[:3]] = Array
                print cacheCode,"done processing by",processnumber,"process"
        if start[processnumber]:
            IdleLock.clear()
This is how I start them:
self.ProcessLst = [] #list with all the processes that calculate the caches
for processnumber in range(min(NumberOfMaxProcess,self.processes)):
    self.ProcessTerminateLst.append(True)
for processnumber in range(min(NumberOfMaxProcess,self.processes)):
    self.ProcessLst.append(process.Process(target=Proc.UpdateProcesses,args=(self.ProcessTerminateLst,processnumber,self.CachesThatRequireCalculating,self.CachesThatAreBeingCalculated,self.CacheDict,self.CacheLock,self.IdleLock,self.FileDictionary,self.MetaDataDict,self.CacheIndexDict,)))
    self.ProcessLst[-1].daemon = True
    self.ProcessLst[-1].start()
I close them like this:
for i in range(len(self.ProcessLst)): #For both while loops in the processes self.ProcessTerminateLst[i] must be True. So either the process is now ready to be terminated or it is still in idle mode.
    self.ProcessTerminateLst[i] = False
self.IdleLock.set() #Makes sure no process is in idle mode and all are ready to be terminated
I would use a pool. A pool has a maximum number of threads it uses at the same time, but you can submit any number of jobs; they stay in the waiting list until a thread is available. I don't think you can change the number of current processes in the pool.
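A minimal sketch of that idea follows. Since the question is about threads, it uses multiprocessing.pool.ThreadPool; doWork and the list of lines are stand-ins for the question's real work, so adjust as needed.
from multiprocessing.pool import ThreadPool
import time

def doWork(line):
    time.sleep(0.1)                  # stand-in for the real per-line work
    return line.upper()

pool = ThreadPool(processes=10)      # at most 10 jobs run at the same time
lines = ["line %d" % i for i in range(100)]

# every line is submitted immediately; excess jobs simply wait for a free thread
async_results = [pool.apply_async(doWork, (line,)) for line in lines]

pool.close()
pool.join()
results = [r.get() for r in async_results]
print len(results), "lines processed"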

multiprocessing - reading big input data - program hangs

I want to run parallel computation on some input data which is loaded from a file. (The file can be really big, so I use a generator for this.)
On a certain number of items, my code runs OK but above this threshold the program hangs (some of the worker processes do not end).
Any suggestions? (I am running this with python2.7, 8 CPUs; 5,000 lines still OK, 7,500 does not work.)
Firstly, you need an input file. Generate it in bash:
for i in {0..10000}; do echo -e "$i"'\r' >> counter.txt; done
Then, run this:
python2.7 main.py 100 counter.txt > run_log.txt
main.py:
#!/usr/bin/python2.7

import os, sys, signal, time
import Queue
import multiprocessing as mp

def eat_queue(job_queue, result_queue):
    """Eats input queue, feeds output queue
    """
    proc_name = mp.current_process().name
    while True:
        try:
            job = job_queue.get(block=False)
            if job == None:
                print(proc_name + " DONE")
                return
            result_queue.put(execute(job))
        except Queue.Empty:
            pass

def execute(x):
    """Does the computation on the input data
    """
    return x*x

def save_result(result):
    """Saves results in a list
    """
    result_list.append(result)

def load(ifilename):
    """Generator reading the input file and
    yielding it row by row
    """
    ifile = open(ifilename, "r")
    for line in ifile:
        line = line.strip()
        num = int(line)
        yield (num)
    ifile.close()
    print("file closed".upper())

def put_tasks(job_queue, ifilename):
    """Feeds the job queue
    """
    for item in load(ifilename):
        job_queue.put(item)
    for _ in range(get_max_workers()):
        job_queue.put(None)

def get_max_workers():
    """Returns optimal number of processes to run
    """
    max_workers = mp.cpu_count() - 2
    if max_workers < 1:
        return 1
    return max_workers

def run(workers_num, ifilename):
    job_queue = mp.Queue()
    result_queue = mp.Queue()

    # decide how many processes are to be created
    max_workers = get_max_workers()
    print "processes available: %d" % max_workers
    if workers_num < 1 or workers_num > max_workers:
        workers_num = max_workers

    workers_list = []
    # a process for feeding job queue with the input file
    task_gen = mp.Process(target=put_tasks, name="task_gen",
                          args=(job_queue, ifilename))
    workers_list.append(task_gen)

    for i in range(workers_num):
        tmp = mp.Process(target=eat_queue, name="w%d" % (i+1),
                         args=(job_queue, result_queue))
        workers_list.append(tmp)

    for worker in workers_list:
        worker.start()

    for worker in workers_list:
        worker.join()
        print "worker %s finished!" % worker.name

if __name__ == '__main__':
    result_list = []
    args = sys.argv
    workers_num = int(args[1])
    ifilename = args[2]
    run(workers_num, ifilename)
This is because nothing in your code takes anything off result_queue. The behavior then depends on internal queue buffering details: if "not a lot" of data is waiting, everything appears fine, but if "a lot" of data is waiting, everything freezes. Not much more can be said, because it involves layers of internal magic ;-) But the docs do warn about it:
Warning
As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread), then that process will not terminate until all buffered items have been flushed to the pipe.
This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children.
Note that a queue created using a manager does not have this issue. See Programming guidelines.
One easy way to repair that: First add
result_queue.put(None)
before eat_queue() returns. Then add:
count = 0
while count < workers_num:
    if result_queue.get() is None:
        count += 1
before the main program .join()s the workers. That drains the result queue, and everything shuts down cleanly then.
BTW, this code is pretty bizarre:
while True:
    try:
        job = job_queue.get(block=False)
        if job == None:
            print(proc_name + " DONE")
            return
        result_queue.put(execute(job))
    except Queue.Empty:
        pass
Why are you doing non-blocking get()? This turns into a CPU-hog "busy loop" so long as the queue is empty. The primary point of .get() is to supply an efficient way to wait for work to show up. So:
while True:
    job = job_queue.get()
    if job is None:
        print(proc_name + " DONE")
        break
    else:
        result_queue.put(execute(job))
result_queue.put(None)
does the same thing, but far more efficiently.
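Putting both changes together, the relevant parts of the script might look like this. This is only a sketch: drain_results is a name introduced here to package the counting loop, the call to the question's save_result is an extra so results are actually kept (the counting loop above just discards them), and the rest of main.py stays as in the question.
def eat_queue(job_queue, result_queue):
    """Consumes the job queue with a blocking get, then announces it is done."""
    proc_name = mp.current_process().name
    while True:
        job = job_queue.get()               # blocks instead of busy-looping
        if job is None:
            print(proc_name + " DONE")
            break
        result_queue.put(execute(job))
    result_queue.put(None)                  # sentinel: this worker has produced everything it will produce

def drain_results(result_queue, workers_num):
    """Reads the result queue until every worker's None sentinel has been seen."""
    count = 0
    while count < workers_num:
        result = result_queue.get()
        if result is None:
            count += 1
        else:
            save_result(result)             # keep the result (the question's save_result)

# inside run(), call drain_results(result_queue, workers_num) just before the
# "for worker in workers_list: worker.join()" loop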
Queue size caution
You didn't ask about this, but let's cover it before it bites you ;-) By default, there is no bound on a Queue's size. If, e.g., you add a billion items to the Queue, it will demand enough RAM to hold a billion items. So if your producer(s) can generate work items faster than your consumer(s) can process them, memory use can get out of hand quickly.
Fortunately, that's easy to repair: specify a maximum queue size. For example,
job_queue = mp.Queue(maxsize=10*workers_num)
                     ^^^^^^^^^^^^^^^^^^^^^^
Then job_queue.put(some_work_item) will block until consumers reduce the size of the queue to less than the maximum. This way you can process enormous problems with a queue that requires trivial RAM.
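A toy demonstration of that backpressure, with made-up numbers and nothing taken from the original program: with maxsize set, the producer's put() calls pause whenever the consumer falls behind.
import multiprocessing as mp
import time

def slow_consumer(q):
    while True:
        item = q.get()
        if item is None:
            break
        time.sleep(0.1)                     # consumer is slower than the producer

if __name__ == '__main__':
    q = mp.Queue(maxsize=4)                 # at most 4 items buffered at any time
    consumer = mp.Process(target=slow_consumer, args=(q,))
    consumer.start()
    for i in range(20):
        q.put(i)                            # blocks whenever the buffer is full
        print("queued %d" % i)
    q.put(None)                             # sentinel so the consumer exits
    consumer.join()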

Recovering lost multiprocessing.Queue items when a worker Process dies

My scenario is this:
I've got a worker which enqueues tasks into a multiprocessing.Queue() if said queue is empty. This is to ensure that tasks execute in a certain priority order, since multiprocessing.Queue() doesn't do priorities.
There are a number of workers which pop from the mp.Queue and do some stuff. Sometimes (<0.1%) these fail and die without having the chance to re-enqueue the task.
My tasks are locked via a central database and may only run once (hard requirement). For this they have certain states which they can transition from/to.
My current solution: Let all workers answer via another queue which tasks have been completed and introduce a deadline by which a task has to be done. Reset the task and re-enqueue it if a deadline has been reached. This has the problem that the solution is "soft", i.e. the deadline is arbitrary.
I am searching for the simplest possible solution. Is there a simpler or a more stringent solution to this?
This solution uses three queues to keep track of the work (simulated as WORK_ID):
todo_q: Any work to be done (including that to be redone if the process died in-flight)
start_q: Any work that has been started by a process
finish_q: Any work that has been completed
Using this method you should not need a timer. As long as you assign a process identifier and keep track of assignments, you can check whether Process.is_alive(); if a process died, add its work back to the todo queue.
In the code below, I simulate a worker process dying 25% of the time...
from multiprocessing import Process, Queue
from Queue import Empty
from random import choice as rndchoice
import time

def worker(id, todo_q, start_q, finish_q):
    """multiprocessing worker"""
    msg = None
    while (msg!='DONE'):
        try:
            msg = todo_q.get_nowait()       # Poll non-blocking on todo_q
            if (msg!='DONE'):
                start_q.put((id, msg))      # Let the controller know work started
                time.sleep(0.05)
                if (rndchoice(range(3))==1):
                    # Die a fraction of the time before finishing
                    print "DEATH to worker %s who had task=%s" % (id, msg)
                    break
                finish_q.put((id, msg))     # Acknowledge work finished
        except Empty:
            pass
    return

if __name__ == '__main__':
    NUM_WORKERS = 5
    WORK_ID = set(['A','B','C','D','E'])    # Work to be done, you will need to
                                            #   name work items so they are unique
    WORK_DONE = set([])                     # Work that has been done
    ASSIGNMENTS = dict()                    # Who was assigned a task
    workers = dict()

    todo_q = Queue()
    start_q = Queue()
    finish_q = Queue()

    print "Starting %s tasks" % len(WORK_ID)
    # Add work
    for work in WORK_ID:
        todo_q.put(work)

    # spawn workers
    for ii in xrange(NUM_WORKERS):
        p = Process(target=worker, args=(ii, todo_q, start_q, finish_q))
        workers[ii] = p
        p.start()

    finished = False
    while True:
        try:
            start_ack = start_q.get_nowait()      # Poll for work started
            ## Check for race condition between start_ack and finished_ack
            if not ASSIGNMENTS.get(start_ack[0], False):
                ASSIGNMENTS[start_ack[0]] = start_ack  # Track the assignment
                print "ASSIGNED worker=%s task=%s" % (start_ack[0],
                    start_ack[1])
                WORK_ID.remove(start_ack[1])      # Account for started tasks
            else:
                # Race condition. Never overwrite existing assignments
                # Wait until the ASSIGNMENT is cleared
                start_q.put(start_ack)
        except Empty:
            pass

        try:
            finished_ack = finish_q.get_nowait()  # Poll for work finished
            # Check for race condition between start_ack and finished_ack
            if (ASSIGNMENTS[finished_ack[0]][1]==finished_ack[1]):
                # Clean up after the finished task
                print "REMOVED worker=%s task=%s" % (finished_ack[0],
                    finished_ack[1])
                del ASSIGNMENTS[finished_ack[0]]
                WORK_DONE.add(finished_ack[1])
            else:
                # Race condition. Never overwrite existing assignments
                # It was received out of order... wait for the 'start_ack'
                finish_q.put(finished_ack)
                finished_ack = None
        except Empty:
            pass

        # Look for any dead workers, and put their work back on the todo_q
        if not finished:
            for id, p in workers.items():
                status = p.is_alive()
                if not status:
                    print " WORKER %s FAILED!" % id
                    # Add to the work again...
                    todo_q.put(ASSIGNMENTS[id][1])
                    WORK_ID.add(ASSIGNMENTS[id][1])
                    del ASSIGNMENTS[id]           # Worker is dead now
                    del workers[id]
                    ii += 1
                    print "Spawning worker number", ii
                    # Respawn a worker to replace the one that died
                    p = Process(target=worker, args=(ii, todo_q, start_q,
                        finish_q))
                    workers[ii] = p
                    p.start()
        else:
            for id, p in workers.items():
                p.join()
                del workers[id]
            break

        if (WORK_ID==set([])) and (ASSIGNMENTS.keys()==list()):
            finished = True
            [todo_q.put('DONE') for x in xrange(NUM_WORKERS)]
        else:
            pass

    print "We finished %s tasks" % len(WORK_DONE)
Running this on my laptop...
mpenning@mpenning-T61:~$ python queueack.py
Starting 5 tasks
ASSIGNED worker=2 task=C
ASSIGNED worker=0 task=A
ASSIGNED worker=4 task=B
ASSIGNED worker=3 task=E
ASSIGNED worker=1 task=D
DEATH to worker 4 who had task=B
DEATH to worker 3 who had task=E
WORKER 3 FAILED!
Spawning worker number 5
WORKER 4 FAILED!
Spawning worker number 6
REMOVED worker=2 task=C
REMOVED worker=0 task=A
REMOVED worker=1 task=D
ASSIGNED worker=0 task=B
ASSIGNED worker=2 task=E
REMOVED worker=2 task=E
DEATH to worker 0 who had task=B
WORKER 0 FAILED!
Spawning worker number 7
ASSIGNED worker=5 task=B
REMOVED worker=5 task=B
We finished 5 tasks
mpenning@mpenning-T61:~$
I tested this with over 10000 work items at a 25% mortality rate.
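For reference, a larger test like that can be approximated by swapping in generated work IDs and a bigger worker count; the numbers below are made up and the rest of the script stays the same.
NUM_WORKERS = 20
WORK_ID = set("task%05d" % ii for ii in xrange(10000))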
