python multiprocessing IOError: [Errno 232] The pipe is being closed

The following code will count all 750 joins and will print the results queue, but after it does that it gets stuck in deadlock. If I assign results to multiprocessing.Queue(), the program deadlocks immediately.
import itertools
import multiprocessing

def function(job, results):
    # do stuff
    results.put(stuff)

if __name__ == '__main__':
    devices = {}
    with open('file.txt', 'r') as f:
        projectFile = f.readlines()
    jobs = multiprocessing.Queue()
    results = multiprocessing.Manager().Queue()
    pool = [multiprocessing.Process(target=function, args=(jobs, results))
            for ip in itertools.islice(projectFile, 0, 750)]
    for p in pool:
        p.start()
    for n in projectFile:
        jobs.put(n.strip())
    for p in pool:
        jobs.put(None)
    count = 0
    for p in pool:
        p.join()
        count += 1
    print count
    print results
Does anyone see anything that could be causing the deadlocks? I am pretty unsure of how to proceed as it all seems to check out in my head. Any help would be appreciated!

I think this problem is caused by creating so many processes. This is not necessarily a deadlock; starting that many processes simply takes a long time. I made a test with threads and it apparently ran much faster. Look at the code:
import multiprocessing
import itertools
import threading

def function(job, results):
    # do stuff
    results.put(stuff)

if __name__ == '__main__':
    devices = {}
    with open('file.txt', 'r') as f:
        projectFile = f.readlines()
    jobs = multiprocessing.Queue()
    results = multiprocessing.Manager().Queue()
    pool = [threading.Thread(target=function, args=(jobs, results))
            for ip in itertools.islice(projectFile, 0, 750)]
    for i, p in enumerate(pool):
        print "Started Thread Number", i  # Log to verify
        p.start()
    for n in projectFile:
        jobs.put(n.strip())
    for p in pool:
        jobs.put(None)
    count = 0
    for p in pool:
        p.join()  # This join is dangerous, make sure the thread does not raise any error
        count += 1
    print count
    print results
I don't know if this code will solve your problem, but it may run faster.
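If you do want to stay with processes, an alternative (a sketch of my own, not part of the answer above) is to start only a handful of worker processes that each loop over the jobs queue, rather than one process per line. Draining the results queue before joining also matters: the multiprocessing docs warn that joining a process which has put items on a queue can block until those items are consumed, which may be what the original code runs into. The do_stuff helper below is a hypothetical stand-in for the real per-line work:
import multiprocessing

def do_stuff(line):
    # hypothetical placeholder for the real per-line work
    return line.upper()

def worker(jobs, results):
    # pull lines until the None sentinel arrives
    for line in iter(jobs.get, None):
        results.put(do_stuff(line))

if __name__ == '__main__':
    with open('file.txt', 'r') as f:
        lines = [l.strip() for l in f]
    jobs = multiprocessing.Queue()
    results = multiprocessing.Queue()
    nworkers = multiprocessing.cpu_count()
    workers = [multiprocessing.Process(target=worker, args=(jobs, results))
               for _ in range(nworkers)]
    for p in workers:
        p.start()
    for line in lines:
        jobs.put(line)
    for _ in workers:
        jobs.put(None)        # one sentinel per worker
    collected = [results.get() for _ in lines]   # drain results before joining
    for p in workers:
        p.join()
    print(len(collected))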

Related

How do I restart a thread if its duration is longer than 10 seconds?

How can I track, from the main thread, the duration of the function write_file()?
Task: create a condition so that, if the execution time of the function exceeds 10 seconds, the function is restarted.
from multiprocessing import Pool

def write_file(file: str):
    f = open(file, 'w')
    for item in range(0, 1500000):
        f.write("%s\n" % item)
    f.close()

if __name__ == '__main__':
    list_files = ['1.txt', '2.txt', '3.txt']
    with Pool(3) as p:
        p.map(write_file, list_files)
I found the attempts suggested here to amend Pool to be overcomplicated.
The Pool class keeps workers alive until the whole work queue is done and therefore has a fairly complex mechanism for controlling them.
Instead, if your requirement on the 10 seconds is not very strict, you can use the following code:
from multiprocessing import Process
import time

pdict = {}
for fname in list_files:
    p = Process(target=write_file, args=(fname,))
    pdict[fname] = p
    p.start()
while pdict:
    to_del = []
    time.sleep(10)
    for pname in pdict:
        if pdict[pname].exitcode is None or pdict[pname].is_alive():
            pdict[pname].terminate()  # killing old; that should also release the file resource
            pdict[pname] = Process(target=write_file, args=(pname,))
            pdict[pname].start()      # simply creating new and starting
        else:
            to_del.append(pname)
    for pname in to_del:
        del pdict[pname]
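If the 10-second limit needs to be respected more tightly, one variation (a sketch of my own, not from the answer above; it reuses write_file and list_files from the question and should live under the if __name__ == '__main__': guard) is to record each process's start time and poll more often:
from multiprocessing import Process
import time

TIMEOUT = 10   # seconds, the limit assumed from the question

procs = {fname: Process(target=write_file, args=(fname,)) for fname in list_files}
started = {}
for fname, p in procs.items():
    p.start()
    started[fname] = time.time()

while procs:
    time.sleep(0.5)                      # poll twice per second
    for fname in list(procs):
        p = procs[fname]
        if not p.is_alive():             # finished normally
            del procs[fname]
        elif time.time() - started[fname] > TIMEOUT:
            p.terminate()                # over the limit: kill and restart
            p.join()
            procs[fname] = Process(target=write_file, args=(fname,))
            procs[fname].start()
            started[fname] = time.time()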

Producer consumer in python locks on get

I'm struggling to make a producer consumer queue in Python3. I can't get my consumer to wake up:
from multiprocessing import Process, Queue
import time

def consumer(q):
    while(True):
        data=q.get()
        if (data[0]==False):
            print("Killing")
            return
        print((data[1]))
        time.sleep(1)

maxitems=3
q = Queue(maxitems)
p = Process(target=consumer, args=(q,))
p.start()
for idx in range(0,10):
    q.put((True,idx))
    #Where idx would normally be a chunk of data
p.put((False,False))
p.join()
Output:
0
then it locks...
How do I get the consumer thread to wake up when I push data to it?
Launch:
python3.3 tryit.py
Built with:
[ebuild R ] dev-lang/python-3.3.5-r1:3.3::gentoo USE="gdbm ipv6 ncurses readline ssl threads xml -build -doc -examples -hardened -sqlite -tk -wininst" 0 KiB
p.put((False,False)) is wrong (it should be q.put((False,False))), and there is some non-idiomatic Python; otherwise it's fine.
from multiprocessing import Process, Queue
import time

def consumer(q):
    while True:
        data=q.get()
        if data[0]==False:
            print("Killing")
            break
        print(data[1])
        time.sleep(1)

maxitems=3
q = Queue(maxitems)
p = Process(target=consumer, args=(q,))
p.start()
for idx in range(0,10):
    q.put((True,idx))
    #Where idx would normally be a chunk of data
q.put((False,False))
p.join()
Somehow this needs to run from main; wrapping it in if __name__ == '__main__': is required on platforms such as Windows, where child processes are spawned rather than forked:
from multiprocessing import Process, Queue
import time

def consumer(q):
    while True:
        data=q.get()
        if data[0]==False:
            print("Killing")
            return
        print(data[1])
        time.sleep(1)

if __name__ == '__main__':
    maxitems=3
    q = Queue(maxitems)
    p = Process(target=consumer, args=(q,))
    p.start()
    for idx in range(0,10):
        q.put((True,idx))
        #Where idx would normally be a chunk of data
    q.put((False,False))
    p.join()
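For what it's worth, the shutdown logic can also be written with the iter(callable, sentinel) idiom, which drops the (flag, payload) tuples entirely; a small sketch of that variation (mine, not part of the answer):
from multiprocessing import Process, Queue
import time

def consumer(q):
    # iter() keeps calling q.get() until it returns the None sentinel
    for data in iter(q.get, None):
        print(data)
        time.sleep(1)
    print("Killing")

if __name__ == '__main__':
    q = Queue(3)
    p = Process(target=consumer, args=(q,))
    p.start()
    for idx in range(10):
        q.put(idx)       # idx would normally be a chunk of data
    q.put(None)          # sentinel: tells the consumer to stop
    p.join()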

From synchronous to asynchronous "Processing" when working with list chunks

I am using the multiprocessing module, via the Process class, to do some non-CPU-bound tasks, e.g. I/O or web requests. If the tasks take too long the CPU reaches 100% usage (all processes are waiting for data to return). I suspect an asynchronous execution solution would help, but I have never done anything like this. The code I am using is something like the following, where I have a huge list and each process works on a chunk.
Could you please make a suggestion in this direction?
Thanks in advance!!
import math
import multiprocessing
import urllib
from Queue import Empty

def getData(urlsChunk, myQueue):
    for url in urlsChunk:
        fp = urllib.urlopen(url)
        try:
            data = fp.read()
            myQueue.put(data)
        finally:
            fp.close()
    return myQueue

manager = multiprocessing.Manager()
MYQUEUE = manager.Queue()
urls = ['a huge list of url items']
nprocs = multiprocessing.cpu_count()
processes = []
chunksize = int(math.ceil(len(urls) / float(nprocs)))
for i in range(nprocs):
    p = multiprocessing.Process(
        target=getData,  # This is my worker
        args=(urls[chunksize * i:chunksize * (i + 1)],
              MYQUEUE
              )
        )
    processes.append(p)
    p.start()
for p in processes:
    p.join()
while True:
    try:
        MYQUEUEelem = MYQUEUE.get(block=False)
    except Empty:
        break
    else:
        'do something with the MYQUEUEelem'
Using multiprocessing.Pool, your code can be simplified:
import multiprocessing
import urllib

def getData(url):
    fp = urllib.urlopen(url)
    try:
        return fp.read()
    finally:
        fp.close()

if __name__ == '__main__':  # should protect the "entry point" of the program
    urls = ['a huge list of url items']
    pool = multiprocessing.Pool()
    for result in pool.imap(getData, urls, chunksize=10):
        pass  # do something with the result
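Since the work here is I/O-bound rather than CPU-bound, threads are usually sufficient and avoid the cost of spawning processes. A minimal sketch (not from the answer above; it assumes Python 3) using concurrent.futures.ThreadPoolExecutor:
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def getData(url):
    with urlopen(url) as fp:
        return fp.read()

if __name__ == '__main__':
    urls = ['a huge list of url items']
    with ThreadPoolExecutor(max_workers=20) as pool:
        for result in pool.map(getData, urls):
            pass  # do something with the result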

Write data to hdf file using multiprocessing

This seems like a simple issue but I can't get my head around it.
I have a simulation which runs in a double for loop and writes the results to an HDF file. A simple version of this program is shown below:
import tables as pt

a = range(10)
b = range(5)

def Simulation():
    hdf = pt.openFile('simulation.h5', mode='w')
    for ii in a:
        print(ii)
        hdf.createGroup('/', 'A%s' % ii)
        for i in b:
            hdf.createArray('/A%s' % ii, 'B%s' % i, [ii, i])
    hdf.close()
    return

Simulation()
This code does exactly what I want but since the process can take quite a while to run I tried to use the multiprocessing module and use the following code:
import multiprocessing
import tables as pt

a = range(10)
b = range(5)

def Simulation(ii):
    hdf = pt.openFile('simulation.h5', mode='w')
    print(ii)
    hdf.createGroup('/', 'A%s' % ii)
    for i in b:
        hdf.createArray('/A%s' % ii, 'B%s' % i, [ii, i])
    hdf.close()
    return

if __name__ == '__main__':
    jobs = []
    for ii in a:
        p = multiprocessing.Process(target=Simulation, args=(ii,))
        jobs.append(p)
        p.start()
This however only prints the last simulation to the HDF file; somehow it overwrites all the other groups.
Each time you open a file in write (w) mode, a new file is created -- so the contents of the file are lost if it already exists. Only the last file handle can successfully write to the file. Even if you changed that to append mode, you should not try to write to the same file from multiple processes -- the output will get garbled if two processes try to write at the same time.
Instead, have all the worker processes put output in a queue, and have a single dedicated process (either a subprocess or the main process) handle the output from the queue and write to the file:
import multiprocessing as mp
import tables as pt

num_arrays = 100
num_processes = mp.cpu_count()
num_simulations = 1000
sentinel = None

def Simulation(inqueue, output):
    for ii in iter(inqueue.get, sentinel):
        output.put(('createGroup', ('/', 'A%s' % ii)))
        for i in range(num_arrays):
            output.put(('createArray', ('/A%s' % ii, 'B%s' % i, [ii, i])))

def handle_output(output):
    hdf = pt.openFile('simulation.h5', mode='w')
    while True:
        args = output.get()
        if args:
            method, args = args
            getattr(hdf, method)(*args)
        else:
            break
    hdf.close()

if __name__ == '__main__':
    output = mp.Queue()
    inqueue = mp.Queue()
    jobs = []
    proc = mp.Process(target=handle_output, args=(output, ))
    proc.start()
    for i in range(num_processes):
        p = mp.Process(target=Simulation, args=(inqueue, output))
        jobs.append(p)
        p.start()
    for i in range(num_simulations):
        inqueue.put(i)
    for i in range(num_processes):
        # Send the sentinel to tell Simulation to end
        inqueue.put(sentinel)
    for p in jobs:
        p.join()
    output.put(None)
    proc.join()
For comparison, here is a version which uses mp.Pool:
import multiprocessing as mp
import tables as pt

num_arrays = 100
num_processes = mp.cpu_count()
num_simulations = 1000

def Simulation(ii):
    result = []
    result.append(('createGroup', ('/', 'A%s' % ii)))
    for i in range(num_arrays):
        result.append(('createArray', ('/A%s' % ii, 'B%s' % i, [ii, i])))
    return result

def handle_output(result):
    hdf = pt.openFile('simulation.h5', mode='a')
    for args in result:
        method, args = args
        getattr(hdf, method)(*args)
    hdf.close()

if __name__ == '__main__':
    # clear the file
    hdf = pt.openFile('simulation.h5', mode='w')
    hdf.close()
    pool = mp.Pool(num_processes)
    for i in range(num_simulations):
        pool.apply_async(Simulation, (i, ), callback=handle_output)
    pool.close()
    pool.join()
It looks simpler, doesn't it? However there is one significant difference. The original code used output.put to send args to handle_output, which was running in its own subprocess. handle_output would take args from the output queue and handle them immediately. With the Pool code above, Simulation accumulates a whole bunch of args in result, and result is not sent to handle_output until after Simulation returns.
If Simulation takes a long time, there will be a long waiting period while nothing is being written to simulation.h5.
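One way to shorten that waiting period, if you are willing to restructure the work (this is my own variation, not part of the answer above), is to make each Pool task a single array rather than a whole simulation, so results trickle back continuously and the main process acts as the single writer. The one_array helper is hypothetical:
import itertools
import multiprocessing as mp
import tables as pt

num_arrays = 100
num_simulations = 1000

def one_array(args):
    ii, i = args
    # compute a single array; here just the toy payload from the question
    return ('/A%s' % ii, 'B%s' % i, [ii, i])

if __name__ == '__main__':
    hdf = pt.openFile('simulation.h5', mode='w')
    for ii in range(num_simulations):
        hdf.createGroup('/', 'A%s' % ii)       # groups are cheap, create them up front
    pool = mp.Pool()
    tasks = itertools.product(range(num_simulations), range(num_arrays))
    # imap_unordered hands back each array as soon as it is computed,
    # so the single writer (the main process) appends to the file continuously
    for where, name, data in pool.imap_unordered(one_array, tasks, chunksize=16):
        hdf.createArray(where, name, data)
    pool.close()
    pool.join()
    hdf.close()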

Multiprocessing, writing to file, and deadlock on large loops

I have a very weird problem with the code below. When numrows = 10 the Process loop completes and the program finishes. If the growing list becomes larger, it goes into a deadlock. Why is this, and how can I solve it?
import multiprocessing, time, sys

# ----------------- Calculation Engine -------------------
def feed(queue, parlist):
    for par in parlist:
        queue.put(par)

def calc(queueIn, queueOut):
    while True:
        try:
            par = queueIn.get(block = False)
            print "Project ID: %s started. " % par
            res = doCalculation(par)
            queueOut.put(res)
        except:
            break

def write(queue, fname):
    print 'Started to write to file'
    fhandle = open(fname, "w")
    while True:
        try:
            res = queue.get(block = False)
            for m in res:
                print >>fhandle, m
        except:
            break
    fhandle.close()
    print 'Complete writing to the file'

def doCalculation(project_ID):
    numrows = 100
    toFileRowList = []
    for i in range(numrows):
        toFileRowList.append([project_ID]*100)
        print "%s %s" % (multiprocessing.current_process().name, i)
    return toFileRowList

def main():
    parlist = [276, 266]
    nthreads = multiprocessing.cpu_count()
    workerQueue = multiprocessing.Queue()
    writerQueue = multiprocessing.Queue()
    feedProc = multiprocessing.Process(target = feed , args = (workerQueue, parlist))
    calcProc = [multiprocessing.Process(target = calc , args = (workerQueue, writerQueue)) for i in range(nthreads)]
    writProc = multiprocessing.Process(target = write, args = (writerQueue, 'somefile.csv'))
    feedProc.start()
    feedProc.join ()
    for p in calcProc:
        p.start()
    for p in calcProc:
        p.join()
    writProc.start()
    writProc.join()

if __name__=='__main__':
    sys.exit(main())
I think the problem is the Queue buffer getting filled, so you need to read from the queue before you can put additional stuff into it.
For example, in your feed process you have:
queue.put(par)
If you keep putting stuff without reading it, this will block until the buffer is freed, but the problem is that you only free the buffer in your calc processes, which in turn don't get started before you join your blocking feed process.
So, in order for your feed process to finish, the buffer should be freed, but the buffer won't be freed before the process finishes :)
Try to organize your queue accesses better.
The feedProc and the writProc are not actually running in parallel with the rest of your program. When you have
proc.start()
proc.join()
you start the process and then, on the join(), you immediately wait for it to finish. In that case there is no gain from multiprocessing, only overhead. Try to start ALL of the processes before you join any of them. This will also have the effect that your queues get emptied regularly and you won't deadlock.
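For concreteness, here is one way main() could be reorganized along those lines (a sketch of my own, with None sentinels added so the workers and the writer shut down cleanly instead of relying on non-blocking gets; doCalculation stays as in the question):
import multiprocessing

def calc(queueIn, queueOut):
    # block on get and stop at the None sentinel instead of breaking on an empty queue
    for par in iter(queueIn.get, None):
        print "Project ID: %s started. " % par
        queueOut.put(doCalculation(par))
    queueOut.put(None)                       # tell the writer this worker is done

def write(queue, fname, nworkers):
    fhandle = open(fname, "w")
    done = 0
    while done < nworkers:                   # stop once every worker has sent its sentinel
        res = queue.get()
        if res is None:
            done += 1
            continue
        for m in res:
            print >>fhandle, m
    fhandle.close()

def main():
    parlist = [276, 266]
    nthreads = multiprocessing.cpu_count()
    workerQueue = multiprocessing.Queue()
    writerQueue = multiprocessing.Queue()
    calcProc = [multiprocessing.Process(target=calc, args=(workerQueue, writerQueue))
                for i in range(nthreads)]
    writProc = multiprocessing.Process(target=write, args=(writerQueue, 'somefile.csv', nthreads))
    writProc.start()                         # the writer drains writerQueue from the start
    for p in calcProc:
        p.start()
    for par in parlist:
        workerQueue.put(par)
    for p in calcProc:
        workerQueue.put(None)                # one sentinel per worker
    for p in calcProc:
        p.join()
    writProc.join()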
