I'm working through a multiprocessing example (An introduction to parallel programming). I modified the Pool class example to meet my specific needs: to clip a bunch of rasters with a study-area polygon in parallel. On the plus side, the script finishes and prints "Processing complete." On the negative side, no output is generated. I suspect I have some procedural error in the `pool.apply_async` call. Why is this script producing no results?
import arcpy, os
import multiprocessing as mp
arcpy.env.workspace = r'F:\temp\inws'
outws_utm11 = r'F:\temp\outws'
clipper_utm11 = r'F:\temp\some_polygon.shp'
rasters = arcpy.ListRasters()
pool = mp.Pool(processes=4)
def clip_raster(clipper, outws, raster):
    arcpy.Clip_management(raster, "#", os.path.join(outws, raster), clipper, nodata_value = 0, clipping_geometry = "ClippingGeometry")
[pool.apply_async(clip_raster, args=(clipper_utm11,outws_utm11, ras)) for ras in rasters]
print "Processing complete."
The apply_async function kicks off your function in a worker process, but it does not block until the function completes. You're letting the main process finish and exit instead of waiting for the workers, so they are killed, most likely before they can create your output.
Since you're applying the same function to every item in the rasters list, you should consider using pool.map instead. It accepts a function and an iterable as its arguments and calls the function on each item in the iterable; every call runs in one of the pool's worker processes. One caveat of pool.map is that the function you pass it must accept only one argument: the item from the list. Your clip_raster function takes a couple of other arguments, so in my example below I use functools.partial to create a new version of clip_raster with the first two arguments already filled in. This new function, with clipper_utm11 and outws_utm11 bound to it, can then be used with pool.map.
import arcpy, os
import functools
import multiprocessing as mp
arcpy.env.workspace = r'F:\temp\inws'
outws_utm11 = r'F:\temp\outws'
clipper_utm11 = r'F:\temp\some_polygon.shp'
rasters = arcpy.ListRasters()
pool = mp.Pool(processes=4)
def clip_raster(clipper, outws, raster):
    arcpy.Clip_management(raster, "#", os.path.join(outws, raster), clipper, nodata_value = 0, clipping_geometry = "ClippingGeometry")
bound_clip_raster = functools.partial(clip_raster, clipper_utm11, outws_utm11)
results = pool.map(bound_clip_raster, rasters)
print "Processing complete."
This code will call the bound_clip_raster function once for each of the items in your rasters list, including clipper_utm11 and outws_utm11. All of the results will be available in a list called results, and the call to pool.map is blocking, so the main process will wait until all the workers are done before it exits.
If, for some strange reason, you're intent on using apply_async, you'll need to add code at the end of your script that uses the returned AsyncResult objects to block the main process until the workers complete, for example by calling wait() on each one, or by polling ready() in a loop. But you should really use pool.map for this use case; it's exactly what it's made for.
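For completeness, here is a minimal sketch of the apply_async variant, assuming the same pool, clip_raster, clipper_utm11, outws_utm11, and rasters as above: hold on to the AsyncResult objects and wait for the workers before exiting.

results = [pool.apply_async(clip_raster, args=(clipper_utm11, outws_utm11, ras))
           for ras in rasters]
pool.close()                      # no more tasks will be submitted
pool.join()                       # block until every worker has finished
# alternatively: for r in results: r.wait()
print("Processing complete.")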
I have answered a question that may be useful to you too. Take a look here: question
There are a few good practices, like putting everything inside functions as you did, but a must-have is to define a main() function and guard it with:
if __name__ == '__main__':
    main()
Another addition is a function that wraps the pool.apply_async calls, which I've inserted into your code.
I've also made a few other modifications for you to try; I've tested it and it works for me:
import arcpy, os
from multiprocessing import Pool
arcpy.env.workspace = r'C:\Gis\lab_geo\2236622'
outws_utm11 = r'C:\Gis\lab_geo\2236622\outs'
clipper_utm11 = r'C:\Gis\arcpy_teste\teste.shp'
rasters = arcpy.ListRasters()
def clipRaster(clipper, outws, raster):
    arcpy.Clip_management(raster, "#", os.path.join(outws, raster), clipper, 0, "ClippingGeometry")

def clipRasterMulti(processList):
    pool = Pool(processes=4, maxtasksperchild=10)
    jobs = {}
    for item in processList:
        # key each job by its raster name so every result is kept and checked
        jobs[item[2]] = pool.apply_async(clipRaster, [x for x in item])
    for item, result in jobs.items():
        try:
            result = result.get()
        except Exception as e:
            print(e)
    pool.close()
    pool.join()

def main():
    processList = [(clipper_utm11, outws_utm11, ras) for ras in rasters]
    clipRasterMulti(processList)
    print("Processing complete.")

if __name__ == '__main__':
    main()
I have a method which calculates a final result using multiple other methods. It has a while loop inside which continuously checks for new data, and if new data is received, it runs the other methods and calculates the results. This main method is the only one which is called by the user, and it stays active until the program is closed. The basic structure is as follows:
class sample:
    def __init__(self):
        self.results = []

    def main_calculation(self):
        while True:
            # code to get data
            if newdata != olddata:
                # insert code to prepare data for analysis
                res1 = self.calc1(prepped_data)
                res2 = self.calc2(prepped_data)
                final = res1 + res2
                self.results.append(final)
I want to run calc1 and calc2 in parallel, so that I can get the final result faster. However, I am unsure of how to implement multiprocessing in this way, since I'm not using a __main__ guard. Is there any way to run these processes in parallel?
This is likely not the best organization for this code, but it is what is easiest for the actual calculations I am running, since it is necessary that this code be imported and run from a different file. However, I can restructure the code if this is not a salvageable structure.
According to the documentation, the reason you need to use a __main__ guard is that when your program creates a multiprocessing.Process object, it starts up a whole new copy of the Python interpreter which will import a new copy of your program's modules. If importing your module calls multiprocessing.Process() itself, that will create yet another copy of the Python interpreter which interprets yet another copy of your code, and so on until your system crashes (or actually, until Python hits a non-reentrant piece of the multiprocessing code).
In the main module of your program, which usually calls some code at the top level, checking __name__ == '__main__' is the way you can tell whether the program is being run for the first time or is being run as a subprocess. But in a different module, there might not be any code at the top level (other than definitions), and in that case there's no need to use a guard because the module can be safely imported without starting a new process.
In other words, this is dangerous:
import multiprocessing as mp
def f():
    ...

p = mp.Process(target=f)
p.start()
p.join()
but this is safe:
import multiprocessing as mp
def f():
    ...

def g():
    p = mp.Process(target=f)
    p.start()
    p.join()
and this is also safe:
import multiprocessing as mp
def f():
    ...

class H:
    def g(self):
        p = mp.Process(target=f)
        p.start()
        p.join()
So in your example, you should be able to directly create Process objects in your function.
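As a minimal sketch of that idea (the placeholder computations and the _run helper are hypothetical, not the asker's real code), calc1 and calc2 are assumed to take prepped_data and return a value, and a Queue carries the results back because a Process has no return value; on spawn-based platforms, whoever calls main_calculation should still do so under a __main__ guard:

import multiprocessing as mp

class sample:
    def __init__(self):
        self.results = []

    def calc1(self, data):
        return sum(data)           # placeholder computation

    def calc2(self, data):
        return max(data)           # placeholder computation

    def _run(self, fn, data, q, tag):
        q.put((tag, fn(data)))     # ship the result back to the parent

    def main_calculation(self, prepped_data):
        q = mp.Queue()
        p1 = mp.Process(target=self._run, args=(self.calc1, prepped_data, q, 'res1'))
        p2 = mp.Process(target=self._run, args=(self.calc2, prepped_data, q, 'res2'))
        p1.start()
        p2.start()
        parts = dict(q.get() for _ in range(2))   # collect both results
        p1.join()
        p2.join()
        self.results.append(parts['res1'] + parts['res2'])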
However, I'd suggest making it clear in the documentation for the class that the method creates a Process, because whoever uses it (maybe you) needs to know that it's not safe to call that method at the top level of a module. It would be like doing this, which also falls into the "dangerous" category:
import multiprocessing as mp
def f():
    ...

class H:
    def g(self):
        p = mp.Process(target=f)
        p.start()
        p.join()

H().g()  # this creates a Process at the top level
You could also consider an alternative approach where you make the caller do all the process creation. In this approach, either your sample class constructor or the main_calculation() method could accept, say, a Pool object, and it can use the processes from that pool to do its calculations. For example:
class sample:
    def main_calculation(self, pool):
        while True:
            if newdata != olddata:
                res1_async = pool.apply_async(self.calc1, [prepped_data])
                res2_async = pool.apply_async(self.calc2, [prepped_data])
                res1 = res1_async.get()
                res2 = res2_async.get()
                # and so on
This pattern may also allow your program to be more efficient in its use of resources, if there are many different calculations happening, because they can all use the same pool of processes.
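Here is a hedged usage sketch of that approach, assuming the sample class above (note that the asker's while True loop never returns, so the close()/join() calls are only illustrative):

import multiprocessing as mp

if __name__ == '__main__':
    pool = mp.Pool(processes=2)    # the caller creates the pool once
    s = sample()
    s.main_calculation(pool)       # the method borrows the pool's workers
    pool.close()
    pool.join()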
How do I call a method from a different class (in a different module) using a multiprocessing Pool in Python?
My aim is to start a process that keeps running until a task is provided, and once the task is completed it goes back to waiting.
Below is the code, which has three modules. The Reader class is my runtime task; I hand execution of its reader method to ProcessExecutor.
ProcessExecutor is the process pool; it continues its while loop until a task is provided to it.
The main module initiates everything.
Module 1
class Reader(object):
    def __init__(self, message):
        self.message = message

    def reader(self):
        print self.message
Module 2
class ProcessExecutor():
    def run(self, queue):
        print 'Before while loop'
        while True:
            print 'Reached Run'
            try:
                pair = queue.get()
                print 'Running process'
                print pair
                func = pair.get('target')
                arguments = pair.get('args', None)
                if arguments is None:
                    func()
                else:
                    func(arguments)
                queue.task_done()
            except Exception:
                print Exception.message
Main module
from process_helper import ProcessExecutor
from reader import Reader
import multiprocessing
import Queue
if __name__ == '__main__':
    queue = Queue.Queue()
    myReader = Reader('Hi')
    ps = ProcessExecutor()
    pool = multiprocessing.Pool(2)
    pool.apply_async(ps.run, args=(queue, ))
    param = {'target': myReader.reader}
    queue.put(param)
Code executed without any error: C:\Python27\python.exe
C:/Users/PycharmProjects/untitled1/main/main.py
Process finished with exit code 0
The code runs, but it never reaches the run method. I am not sure whether it is even possible to call a method of a different class using multiple processes.
I tried apply_async, map, and apply, but none of them work.
All the examples I found online call the target method from the script where the main method is implemented.
I am using python 2.7
Please help.
Your first problem is that you just exit without waiting on anything. You have a Pool, a Queue, and an AsyncResult, but you just ignore all of them and exit as soon as you've created them. You should be able to get away with only waiting on the AsyncResult (after that, there's no more work to do, so who cares what you abandon), except for the fact that you're trying to use Queue.task_done, which doesn't make any sense without a Queue.join on the other side, so you need to wait on that as well.
Your second problem is that you're using the Queue from the Queue module, instead of the one from the multiprocessing module. The Queue module only works across threads in the same process.
Also, you can't call task_done on a plain Queue; that's only a method for the JoinableQueue subclass.
Once you've gotten to the point where the pool tries to actually run a task, you will get the problem that bound methods can't be pickled unless you write a pickler for them. Doing that is a pain, even though it's the right way. The traditional workaround—hacky and cheesy, but everyone did it, and it works—is to wrap each method you want to call in a top-level function. The modern solution is to use the third-party dill or cloudpickle libraries, which know how to pickle bound methods, and how to hook into multiprocessing. You should definitely look into them. But, to keep things simple, I'll show you the workaround.
Notice that, because you've created an extra queue to pass methods onto, in addition to the one built into the pool, you'll need the workaround for both targets.
With these problems fixed, your code looks like this:
from process_helper import ProcessExecutor
from reader import Reader
import multiprocessing
def call_run(ps):
    ps.run(queue)

def call_reader(reader):
    return reader.reader()

if __name__ == '__main__':
    queue = multiprocessing.JoinableQueue()
    myReader = Reader('Hi')
    ps = ProcessExecutor()
    pool = multiprocessing.Pool(2)
    res = pool.apply_async(call_run, args=(ps,))
    param = {'target': call_reader, 'args': myReader}
    queue.put(param)
    print res.get()
    queue.join()
You have additional bugs beyond this in your ProcessExecutor, but I'm not going to debug everything for you. This gets you past the initial hurdles and shows the answer to the specific question you were asking about. Also, I'm not sure what the point of all that code is. You seem to be trying to replace what Pool already does, on top of Pool, only in a more complicated but less powerful way, but I'm not entirely sure.
Meanwhile, here's a program that does what I think you want, with no problems, by just throwing away that ProcessExecutor and everything that goes with it:
from reader import Reader
import multiprocessing
def call_reader(reader):
    return reader.reader()

if __name__ == '__main__':
    myReader = Reader('Hi')
    pool = multiprocessing.Pool(2)
    res = pool.apply_async(call_reader, args=(myReader,))
    print res.get()
Earlier I tried to use the threading module in Python to create multiple threads. Then I learned about the GIL and how it does not allow taking advantage of multiple CPU cores on a single machine. So now I'm trying multiprocessing (I don't strictly need separate threads).
Here is some sample code I wrote to see whether distinct processes are being created. But as can be seen in the output below, I'm getting the same process ID every time, so multiple processes are not being created. What am I missing?
import multiprocessing as mp
import os
def pri():
    print(os.getpid())

if __name__ == '__main__':
    # Checking number of CPU cores
    print(mp.cpu_count())
    processes = [mp.Process(target=pri()) for x in range(1, 4)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
Output:
4
12554
12554
12554
The Process class requires a callable as its target.
Instead of running the function in the separate process, you are calling it and passing its result (None in this case) to the Process class.
Just change the following:
mp.Process(target=pri())
with:
mp.Process(target=pri)
Since the subprocesses run as separate processes, you won't see their print statements. They also don't share the same memory space. You pass pri() to target, where it needs to be pri: you need to pass a callable object, not execute it.
The prints you see come from your main process, because by passing pri() the code is actually executed there. You need to change your code so that the pri function returns its value rather than printing it.
Then you need to implement a queue that all your workers write to; when they're done, your main process reads from the queue.
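A minimal sketch of that queue idea, assuming all you want is for each worker to report its PID back to the parent:

import multiprocessing as mp
import os

def pri(q):
    q.put(os.getpid())             # send the result back to the parent

if __name__ == '__main__':
    q = mp.Queue()
    procs = [mp.Process(target=pri, args=(q,)) for _ in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    for _ in procs:
        print(q.get())             # read one result per worker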
A nice feature of the multiprocessing module is the Pool object. It allows you to create a pool of worker processes and then just use it, which is more convenient.
I have tried your code; the thing is that the processes finish too quickly, so the OS reuses the PIDs. If you add a time.sleep(1) to your pri function, it works as you expect.
That is true only on Windows; the example below was made on a Windows platform. On Unix-like machines you won't need the sleep.
The more convenient solution looks like this:
from multiprocessing import Pool
from time import sleep
import os
def pri(x):
    sleep(1)
    return os.getpid()

def use_procs():
    p_pool = Pool(4)
    p_results = p_pool.map(pri, [_ for _ in range(1, 4)])
    p_pool.close()
    p_pool.join()
    return p_results

if __name__ == '__main__':
    res = use_procs()
    for r in res:
        print r
Without the sleep:
==================== RESTART: C:/Python27/tests/test2.py ====================
6576
6576
6576
>>>
with the sleep:
==================== RESTART: C:/Python27/tests/test2.py ====================
10396
10944
9000
I'm trying to run a function with multiprocessing. This is the code:
import multiprocessing as mu
output = []
def f(x):
    output.append(x*x)

jobs = []
np = mu.cpu_count()
for n in range(np*500):
    p = mu.Process(target=f, args=(n,))
    jobs.append(p)

running = []
for i in range(np):
    p = jobs.pop()
    running.append(p)
    p.start()

while jobs != []:
    for r in running:
        if r.exitcode == 0:
            try:
                running.remove(r)
                p = jobs.pop()
                p.start()
                running.append(p)
            except IndexError:
                break

print "Done:"
print output
The output is [], while it should be [1, 4, 9, ...]. Does anyone see where I'm making a mistake?
You are using multiprocessing, not threading. So your output list is not shared between the processes.
There are several possible solutions:
Retain most of your program but use a multiprocessing.Queue instead of a list. Let the workers put their results in the queue, and read it from the main program. It will copy data from process to process, so for big chunks of data this will have significant overhead.
You could use shared memory in the form of multiprocessing.Array. This might be the best solution if the processed data is large (a short sketch of this option follows after this list).
Use a Pool. This takes care of all the process management for you. Just like with a queue, it copies data from process to process. It is probably the easiest to use. IMO this is the best option if the data sent to/from each worker is small.
Use threading so that the output list is shared between threads. Threading in CPython has the restriction that only one thread at a time can be executing Python bytecode, so you might not get as much performance benefit as you'd expect. And unlike the multiprocessing solutions it will not take advantage of multiple cores.
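To illustrate the shared-memory option, here is a minimal sketch using multiprocessing.Array; the worker and the array size are hypothetical, chosen only to mirror the x*x example:

import multiprocessing as mp

def f(i, out):
    out[i] = i * i                              # write into shared memory

if __name__ == '__main__':
    n = 8
    output = mp.Array('i', n)                   # shared array of C ints
    procs = [mp.Process(target=f, args=(i, output)) for i in range(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(list(output))                         # [0, 1, 4, 9, ...]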
Edit:
Thanks to @Roland Smith for pointing this out.
The main problem is the function f(x). When the child processes call it, they are unable to find the output variable, since it is not shared between processes.
Edit:
As @cdarke said, with multiprocessing you have to carefully control any shared object that the child processes can access (perhaps with a lock), and that is pretty complicated and hard to debug.
Personally, I suggest using the Pool.map method for this.
For instance, assuming you run this code directly rather than importing it as a module, your code would be:
import multiprocessing as mu
def f(x):
    return x*x

if __name__ == '__main__':
    np = mu.cpu_count()
    args = [n for n in range(np*500)]
    pool = mu.Pool(processes=np)
    result = pool.map(f, args)
    pool.close()
    pool.join()
    print result
But there is something you must know: if you run this file directly rather than importing it as a module, the if __name__ == '__main__': guard is important, because Python will import this file as a module in each worker process. If you do not place the function f outside the if __name__ == '__main__': block, the child processes will not be able to find your function f.
Edit: thanks to @Roland Smith for pointing out that we could use a tuple.
If the function f takes more than one argument, you can pack the arguments into a tuple, for instance:
def f((x, y)):
    return x*y

args = [(n, 1) for n in range(np*500)]
result = pool.map(f, args)
or check here for more detailed discussion
How does the flow of apply_async work between calling the iterable (?) function and the callback function?
Setup: I am reading some lines of all the files inside a 2000-file directory, some with millions of lines, some with only a few. Some header/formatting/date data is extracted to characterize each file. This is done on a 16-CPU machine, so it made sense to multiprocess it.
Currently, the expected result is being sent to a list (ahlala) so I can print it out; later, this will be written to *.csv. This is a simplified version of my code, originally based on this extremely helpful post.
import multiprocessing as mp
import os

def dirwalker(directory):
    ahlala = []

    # X() reads files and grabs lines, calls helper function to calculate
    # info, and returns stuff to the callback function
    def X(f):
        fileinfo = Z(arr_of_lines)
        return fileinfo

    # Y() reads other types of files and does the same thing
    def Y(f):
        fileinfo = Z(arr_of_lines)
        return fileinfo

    # results() is the callback function
    def results(r):
        ahlala.extend(r)  # or .append, haven't yet decided

    # helper function
    def Z(arr):
        return fileinfo  # to X() or Y()!

    for _, _, files in os.walk(directory):
        pool = mp.Pool(mp.cpu_count())
        for f in files:
            if filetype(f) == filetypeX:
                pool.apply_async(X, args=(f,), callback=results)
            elif filetype(f) == filetypeY:
                pool.apply_async(Y, args=(f,), callback=results)
        pool.close()
        pool.join()
    return ahlala
Note: the code works if I put all of Z(), the helper function, into either X(), Y(), or results(), but isn't that either repetitive or slower than necessary? I know that the callback function is called for every function call, but when exactly is it called? Is it after pool.apply_async() finishes all the jobs for the processes? Shouldn't it be faster if these helper functions were called within the scope of the first function pool.apply_async() takes (in this case, X())? If not, should I just put the helper function in results()?
Other related ideas: are daemon processes the reason nothing shows up? I am also very confused about how to queue things, and whether that is the problem. This seems like the place to start learning it, but can queuing be safely ignored when using apply_async, or only at a noticeable cost in efficiency?
You're asking about a whole bunch of different things here, so I'll try to cover it all as best I can:
The function you pass to callback will be executed in the main process (not the worker) as soon as the worker process returns its result. It is executed in a thread that the Pool object creates internally. That thread consumes objects from a result_queue, which is used to get the results from all the worker processes. After the thread pulls the result off the queue, it executes the callback. While your callback is executing, no other results can be pulled from the queue, so it's important that the callback finishes quickly. With your example, as soon as one of the calls to X or Y you make via apply_async completes, the result will be placed into the result_queue by the worker process, and then the result-handling thread will pull the result off of the result_queue, and your callback will be executed.
Second, I suspect the reason you're not seeing anything happen with your example code is because all of your worker function calls are failing. If a worker function fails, callback will never be executed. The failure won't be reported at all unless you try to fetch the result from the AsyncResult object returned by the call to apply_async. However, since you're not saving any of those objects, you'll never know the failures occurred. If I were you, I'd try using pool.apply while you're testing so that you see errors as soon as they occur.
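As a hedged sketch of that debugging tip, using the asker's hypothetical filetype/filetypeX names and assuming X and Y are picklable (i.e. defined at module level, as discussed next): swap apply_async for the blocking pool.apply while testing, so any worker exception is raised immediately in the main process:

for f in files:
    if filetype(f) == filetypeX:
        ahlala.append(pool.apply(X, args=(f,)))   # blocks; re-raises worker errors here
    elif filetype(f) == filetypeY:
        ahlala.append(pool.apply(Y, args=(f,)))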
The reason the workers are probably failing (at least in the example code you provided) is that X and Y are defined as functions inside another function. multiprocessing passes functions and objects to worker processes by pickling them in the main process and unpickling them in the worker processes. Functions defined inside other functions are not picklable, which means multiprocessing won't be able to successfully unpickle them in the worker process. To fix this, define both functions at the top level of your module, rather than embedded inside the dirwalker function.
You should definitely continue to call Z from X and Y, not in results. That way, Z can be run concurrently across all your worker processes, rather than having to be run one call at a time in your main process. And remember, your callback function is supposed to be as quick as possible, so you don't hold up processing results. Executing Z in there would slow things down.
Here's some simple example code that's similar to what you're doing, that hopefully gives you an idea of what your code should look like:
import multiprocessing as mp
import os

# X() reads files and grabs lines, calls helper function to calculate
# info, and returns stuff to the callback function
def X(f):
    fileinfo = Z(f)
    return fileinfo

# Y() reads other types of files and does the same thing
def Y(f):
    fileinfo = Z(f)
    return fileinfo

# helper function
def Z(arr):
    return arr + "zzz"

def dirwalker(directory):
    ahlala = []

    # results() is the callback function
    def results(r):
        ahlala.append(r)  # or .extend, haven't yet decided

    for _, _, files in os.walk(directory):
        pool = mp.Pool(mp.cpu_count())
        for f in files:
            if len(f) > 5:  # Just an arbitrary thing to split up the list with
                pool.apply_async(X, args=(f,), callback=results)
                # In Python 3 you could also pass error_callback=handle_error
                # to handle errors; it's not available in Python 2.7 though :(
            else:
                pool.apply_async(Y, args=(f,), callback=results)
        pool.close()
        pool.join()
    return ahlala

if __name__ == "__main__":
    print(dirwalker("/usr/bin"))
Output:
['ftpzzz', 'findhyphzzz', 'gcc-nm-4.8zzz', 'google-chromezzz' ... # lots more here ]
Edit:
You can create a dict object that's shared between your parent and child processes using the multiprocessing.Manager class:
pool = mp.Pool(mp.cpu_count())
m = mp.Manager()
helper_dict = m.dict()

for f in files:
    if len(f) > 5:
        pool.apply_async(X, args=(f, helper_dict), callback=results)
    else:
        pool.apply_async(Y, args=(f, helper_dict), callback=results)
Then make X and Y take a second argument called helper_dict (or whatever name you want), and you're all set.
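As a hedged sketch of that change (the dict key here is an arbitrary choice for illustration), the modified worker might look like:

def X(f, helper_dict):
    fileinfo = Z(f)
    helper_dict[f] = fileinfo    # every read/write here goes through the Manager process
    return fileinfo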
The caveat is that this works by creating a server process that holds a normal dict, and all your other processes talk to that one dict via a Proxy object. So every time you read from or write to the dict, you're doing IPC. This makes it a lot slower than a real dict.