I am using multiprocessing.Pool to create a pool of workers that load files into Pandas. It hangs frequently, probably about 75% of the time.
import pandas as pd
import multiprocessing as mp

def load(filename):
    thing = pd.read_table(filename)
    return thing

files = ['a', 'b', 'c']  # A list of a bunch of files

with mp.Pool(5) as pool:
    result = pool.map(load, files)
By "hangs" I mean never finishes. I've straced the procs and they are waiting on futuexes, so I have no idea what that means. Am I invoking the pool correctly?
Again, it works perfectly 25% of the time, so I must be doing something right... thx!
Ubuntu Xenial Python3.5.2
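For reference, hangs like this are commonly caused by forking a process that already has background threads holding locks (for example BLAS threads pulled in by NumPy/Pandas), which matches workers stuck on futexes under strace. Below is a minimal sketch of the same pool invocation using the spawn start method, offered as a hedged workaround rather than a confirmed diagnosis for this case:
import pandas as pd
import multiprocessing as mp

def load(filename):
    return pd.read_table(filename)

if __name__ == '__main__':
    files = ['a', 'b', 'c']  # a list of a bunch of files
    # spawn starts each worker in a fresh interpreter instead of forking,
    # so locks held by the parent's background threads are not inherited
    with mp.get_context('spawn').Pool(5) as pool:
        result = pool.map(load, files)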
I have a command-line program which runs on a single core. It takes an input file, does some calculations, and produces several files which I need to parse to store the output.
I have to call the program several times, changing the input file each time. To speed things up, I was thinking parallelization would be useful.
Until now I have performed this task by calling every run separately within a loop using the subprocess module.
I wrote a script which creates a new working folder on every run and then calls the program, directing its output to that folder and returning some data which I need to store. My question is: how can I adapt the following code, found here, to execute my script always using the indicated number of CPUs, and to store the output?
Note that each run has a unique running time.
Here is the mentioned code:
import subprocess
import multiprocessing as mp
from tqdm import tqdm

NUMBER_OF_TASKS = 4
progress_bar = tqdm(total=NUMBER_OF_TASKS)

def work(sec_sleep):
    command = ['python', 'worker.py', sec_sleep]
    subprocess.call(command)

def update_progress_bar(_):
    progress_bar.update()

if __name__ == '__main__':
    pool = mp.Pool(NUMBER_OF_TASKS)
    for seconds in [str(x) for x in range(1, NUMBER_OF_TASKS + 1)]:
        pool.apply_async(work, (seconds,), callback=update_progress_bar)
    pool.close()
    pool.join()
I am not entirely clear on what your issue is. I have some recommendations for improvement below, but you seem to claim on the page you link to that everything works as expected, and I don't see anything very wrong with the code as long as you are running on Linux.
Since the subprocess.call method is already creating a new process, you should just be using multithreading to invoke your worker function, work. But had you been using multiprocessing on a platform that uses the spawn method to create new processes (such as Windows), then creating the progress bar outside of the if __name__ == '__main__': block would have resulted in 4 additional progress bars that did nothing. Not good! So for portability it is best to move its creation inside the if __name__ == '__main__': block.
import subprocess
from multiprocessing.pool import ThreadPool
from tqdm import tqdm

def work(sec_sleep):
    command = ['python', 'worker.py', sec_sleep]
    subprocess.call(command)

def update_progress_bar(_):
    progress_bar.update()

if __name__ == '__main__':
    NUMBER_OF_TASKS = 4
    progress_bar = tqdm(total=NUMBER_OF_TASKS)
    pool = ThreadPool(NUMBER_OF_TASKS)
    for seconds in [str(x) for x in range(1, NUMBER_OF_TASKS + 1)]:
        pool.apply_async(work, (seconds,), callback=update_progress_bar)
    pool.close()
    pool.join()
Note: If your worker.py program prints to the console, it will mess up the progress bar (the progress bar will be re-written repeatedly on multiple lines).
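One way around that, sketched below under the assumption that you don't actually need worker.py's console output interleaved with the bar, is to capture the child's output instead of letting it reach the terminal (subprocess.run with capture_output requires Python 3.7+). The captured text is returned from work, so it becomes the argument that the callback currently ignores:
def work(sec_sleep):
    command = ['python', 'worker.py', sec_sleep]
    # Capture stdout/stderr so the child cannot overwrite the progress bar
    completed = subprocess.run(command, capture_output=True, text=True)
    return completed.stdout  # becomes the value passed to the callback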
Have you considered importing worker.py (some refactoring of that code might be necessary) instead of invoking a new Python interpreter to execute it (in this case you would want to be using multiprocessing explicitly)? On Windows this might not save you anything, since a new Python interpreter would be launched for each new process anyway, but it could save you something on Linux:
from multiprocessing.pool import Pool
from worker import do_work
from tqdm import tqdm

def update_progress_bar(_):
    progress_bar.update()

if __name__ == '__main__':
    NUMBER_OF_TASKS = 4
    progress_bar = tqdm(total=NUMBER_OF_TASKS)
    pool = Pool(NUMBER_OF_TASKS)
    for seconds in [str(x) for x in range(1, NUMBER_OF_TASKS + 1)]:
        pool.apply_async(do_work, (seconds,), callback=update_progress_bar)
    pool.close()
    pool.join()
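You also mentioned needing to store what each run produces. apply_async returns an AsyncResult, so here is a sketch of the main block above modified to keep those handles and collect the return values (this assumes do_work returns whatever you want stored, which is an assumption about your worker):
if __name__ == '__main__':
    NUMBER_OF_TASKS = 4
    progress_bar = tqdm(total=NUMBER_OF_TASKS)
    pool = Pool(NUMBER_OF_TASKS)
    async_results = []
    for seconds in [str(x) for x in range(1, NUMBER_OF_TASKS + 1)]:
        # Keep the AsyncResult so the worker's return value is not discarded
        async_results.append(
            pool.apply_async(do_work, (seconds,), callback=update_progress_bar)
        )
    pool.close()
    pool.join()
    # get() returns each worker's result (and re-raises any worker exception)
    outputs = [r.get() for r in async_results]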
I need to run a bunch of parallel processes, but cannot use the standard multiprocessing package since its pickle-based serialization does not work for more complex objects. Therefore I'm currently using pathos.multiprocessing, which uses dill for serialization, and it works flawlessly in that regard.
However, I would like to have an exit condition, so that all processes get terminated once the result of one process meets a certain condition (I'm computing objective values for an optimization problem, and I want all processes to stop once the result from a process is worse than any of the previous results).
For the standard multiprocessing package I found this solution (taken from https://stackoverflow.com/a/21491438/15799363). Can I do something similar with pathos.multiprocessing? I couldn't figure out how to pass a callback function to processes with pathos.
from random import random
from multiprocessing import Pool
from time import sleep

def add_something(i):
    # Sleep to simulate the long calculation
    sleep(random() * 30)
    return i + 1

def run_my_process():
    # Create a process pool
    pool = Pool(100)

    # Callback function that checks results and kills the pool
    def check_result(result):
        print(result)
        if result == 90:
            pool.terminate()

    # Start up all of the processes
    for i in range(100):
        pool.apply_async(add_something, args=[i], callback=check_result)

    pool.close()
    pool.join()

if __name__ == '__main__':
    run_my_process()
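If pathos' own pool does not expose a callback parameter, one possible workaround (an assumption about your setup, not a confirmed pathos recipe) is to use the multiprocess package that pathos depends on: it is a fork of multiprocessing that serializes with dill but keeps the same apply_async/callback API, so the pattern above carries over almost unchanged:
from random import random
from time import sleep
from multiprocess import Pool  # dill-based fork of multiprocessing (installed with pathos)

def add_something(i):
    sleep(random() * 30)  # simulate a long objective-value calculation
    return i + 1

def run_my_process():
    pool = Pool(100)

    def check_result(result):
        print(result)
        if result == 90:      # replace with your "worse than previous results" test
            pool.terminate()  # stop all remaining workers early

    for i in range(100):
        pool.apply_async(add_something, args=[i], callback=check_result)

    pool.close()
    pool.join()

if __name__ == '__main__':
    run_my_process()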
This program returns the resolution of a video, but since I need it for a large-scale project I need multiprocessing. I have tried parallel processing using a different function, but that just ran it multiple times without making it more efficient. I am posting the entire code. Can you help me create a main process that uses all cores?
import os
from tkinter.filedialog import askdirectory
from moviepy.editor import VideoFileClip

if __name__ == "__main__":
    dire = askdirectory()
    d = dire[:]
    print(dire)
    death = os.listdir(dire)
    print(death)
    for i in death:  # multiprocess this loop
        dire = d
        dire += f"/{i}"
        v = VideoFileClip(dire)
        print(f"{i}: {v.size}")
This code works fine, but I need help creating a main process (one that uses all cores) for the for loop alone. Please excuse the variable names; I was angry at multiprocessing. Also, if you have any tips on making the code more efficient, I would appreciate them.
You are, I suppose, assuming that every file in the directory is a video clip. I am assuming that processing the video clip is an I/O-bound "process" for which threading is appropriate. Here I have rather arbitrarily created a thread pool of 20 threads this way:
MAX_WORKERS = 20 # never more than this
N_WORKERS = min(MAX_WORKERS, len(death))
You would have to experiment with how large MAX_WORKERS can be before performance degrades. The limit might be low not because your system cannot support lots of threads, but because concurrent access to multiple files spread across the disk may be inefficient.
import os
from tkinter.filedialog import askdirectory
from moviepy.editor import VideoFileClip
from concurrent.futures import ThreadPoolExecutor as Executor
from functools import partial

def process_video(parent_dir, file):
    v = VideoFileClip(f"{parent_dir}/{file}")
    print(f"{file}: {v.size}")

if __name__ == "__main__":
    dire = askdirectory()
    print(dire)
    death = os.listdir(dire)
    print(death)
    worker = partial(process_video, dire)
    MAX_WORKERS = 20  # never more than this
    N_WORKERS = min(MAX_WORKERS, len(death))
    with Executor(max_workers=N_WORKERS) as executor:
        results = executor.map(worker, death)  # results is a list: [None, None, ...]
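If you want the resolutions back in the main thread rather than printed from the workers, here is a sketch of the same approach with the worker returning its result (same assumptions about the directory contents as above):
import os
from functools import partial
from tkinter.filedialog import askdirectory
from moviepy.editor import VideoFileClip
from concurrent.futures import ThreadPoolExecutor as Executor

def process_video(parent_dir, file):
    v = VideoFileClip(f"{parent_dir}/{file}")
    return file, v.size  # return the result instead of printing inside the worker

if __name__ == "__main__":
    dire = askdirectory()
    death = os.listdir(dire)
    worker = partial(process_video, dire)
    N_WORKERS = min(20, len(death))
    with Executor(max_workers=N_WORKERS) as executor:
        # executor.map yields (file, size) pairs in submission order
        for file, size in executor.map(worker, death):
            print(f"{file}: {size}")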
Update
According to @Reishin, moviepy ends up executing the ffmpeg executable, thus creating a separate process in which the work is actually done. So there is no point in also using multiprocessing here.
moviepy is just a wrapper around ffmpeg and is designed to edit clips, working with one file at a time; the performance is quite poor. Invoking a new process for each of a number of files is time-consuming. In the end, the need for multiple processes might be the result of choosing the wrong library.
I'd like to recommend using the PyAV library instead, which provides direct Python bindings for ffmpeg and good performance:
import av
import os
from tkinter.filedialog import askdirectory
import multiprocessing
from concurrent.futures import ThreadPoolExecutor as Executor

MAX_WORKERS = int(multiprocessing.cpu_count() * 1.5)

def get_video_resolution(path):
    container = None
    try:
        container = av.open(path)
        frame = next(container.decode(video=0))
        return path, f"{frame.width}x{frame.height}"
    finally:
        if container:
            container.close()

def files_to_process():
    video_dir = askdirectory()
    return (full_file_path
            for f in os.listdir(video_dir)
            if (full_file_path := os.path.join(video_dir, f)) and not os.path.isdir(full_file_path))

def main():
    for f in files_to_process():
        print(f"{os.path.basename(f)}: {get_video_resolution(f)[1]}")

def main_multi_threaded():
    with Executor(max_workers=MAX_WORKERS) as executor:
        for path, resolution in executor.map(get_video_resolution, files_to_process()):
            print(f"{os.path.basename(path)}: {resolution}")

if __name__ == "__main__":
    # main()
    main_multi_threaded()
Above are single-threaded and multi-threaded implementations, with a reasonable parallelism setting (in case multithreading is absolutely required).
Is there a way to use both ThreadPool and Pool in Python to parallelise a loop, specifying the number of CPUs and cores you wish to use?
For example, I would have a loop execute as:
from multiprocessing.dummy import Pool as ThreadPool
from tqdm import tqdm
import numpy as np

def my_function(x):
    return x + 1

pool = ThreadPool(4)
my_array = np.arange(0, 1e6, 1)
results = list(tqdm(pool.imap(my_function, my_array), total=len(my_array)))
This uses 4 cores, but if I wanted to spread the work out over multiple CPUs as well, is there a simple way to adapt the code?
You are confusing a core with a CPU. For most purposes the two can be treated as the same thing (let's call it a processor from now on).
When you create a thread pool in Python, the threads are user-level threads and, because of the Global Interpreter Lock (GIL), only one of them can execute Python bytecode at a time. So with (Python) threads you don't get any real parallelism for CPU-bound tasks.
How to solve this? Easy: spawn multiple Python processes running on different processors (each with its own interpreter). This is what the multiprocessing (mp) module is for: it spawns multiple processes from the parent Python process in which it is called.
You can verify this by running htop (on Linux or macOS) and looking at the number of Python processes. With the mp module, they will all have the same name as the parent script in which pool.map is called.
Timing for your code on an 8-core Mac: 39.7 s
Timing for this code on the same machine: 2.9 s (note: I could use up to 8 cores, but for comparison purposes I am using only 4)
Below is the modified code:
from multiprocessing.dummy import Pool as ThreadPool
from tqdm import tqdm
import numpy as np
import time
import multiprocessing as mp

def my_function(x):
    return x + 1

pool = ThreadPool(4)
my_array = np.arange(0, 1e6, 1)

t1 = time.time()
# results = list(tqdm(pool.imap(my_function, my_array), total=len(my_array)))
pool = mp.Pool(processes=4)  # Generally, set to 2*num_cores you have
res = pool.map(my_function, my_array)
print("Time taken = ", time.time() - t1)
multiprocessing.dummy.Pool is just a plain ThreadPool, which doesn't use multiple cores or CPUs (because of the GIL). You must use multiprocessing.Pool to run OS-level processes: if you define Pool(N), N is the number of processes; if you don't, the default is the number of cores in your OS. The processes pull their arguments from the Pool's internal task queue. That way you will use all CPUs and all cores in your OS.
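As a small illustration of that default (a sketch using the same function and array as above; the chunksize value is just an assumption to tune), letting Pool pick the process count and handing map a chunksize keeps per-task overhead low for a million tiny items:
import multiprocessing as mp
import numpy as np

def my_function(x):
    return x + 1

if __name__ == '__main__':
    my_array = np.arange(0, 1e6, 1)
    # Pool() with no argument uses os.cpu_count() worker processes
    with mp.Pool() as pool:
        res = pool.map(my_function, my_array, chunksize=10_000)
    print(len(res))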
When we run multiprocessing with several pools, how do we make the processes quit after their tasks and release their memory? I already include close() and join(), but I saw that the processes still hold the memory after their jobs are done. How do we free this memory?
For example, my code looks like this:
from multiprocessing import Pool
import pandas as pd

def sub_test(aux):
    return sum(aux.A)
Below, a pool of four worker processes is created, and the workers hold several copies of aux.
def test(aux):
    pool = Pool(processes=4)
    pool.map(sub_test, [aux] * 4)  # send a copy of aux to each of the four tasks
    pool.close()
    pool.join()
I expect the processes to release the memory after close and join, but they still hold it.
# Here is the main function
aux = pd.DataFrame()
aux['A'] = [1, 1, 1]

for x in range(10):
    test(aux)
Are there any commands that handle this issue?
Problem solved: pool.close() and pool.join() do work properly.
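For reference, here is a minimal sketch of the same test function written so that worker teardown is explicit: the pool is used as a context manager and maxtasksperchild=1 recycles each worker process after a single task (this is a sketch of a general pattern, not a claim about what held the memory in the original run):
from multiprocessing import Pool
import pandas as pd

def sub_test(aux):
    return sum(aux.A)

def test(aux):
    # maxtasksperchild=1 replaces each worker process after it finishes one task,
    # and the with-block shuts the pool down when the block exits
    with Pool(processes=4, maxtasksperchild=1) as pool:
        results = pool.map(sub_test, [aux] * 4)
    return results

if __name__ == '__main__':
    aux = pd.DataFrame({'A': [1, 1, 1]})
    for _ in range(10):
        print(test(aux))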