Python multiprocessing loses activity without exiting file

I have a problem where my .py script, which maxes out the CPU through multiprocessing, stops doing work without the script itself exiting.
I am running a heavy task that uses all cores of an old MacBook Pro (2012). The task runs fine at first, and I can see four python3.7 processes populate the Activity Monitor window. However, after about 20 minutes, those four python3.7 processes disappear from Activity Monitor.
The strangest part is that the main .py script is still running, i.e. it never threw an uncaught exception and never exited.
Does anyone have ideas as to what's going on? My guesses are 1) it's most likely an error in the script, or 2) the old computer is overheating.
Thanks!
Edit: Below is the multiprocessing code, where the function to execute is func, with a list as its argument. I hope this helps!
import multiprocessing

def main():
    pool = multiprocessing.Pool()
    for i in range(24):
        pool.apply_async(func, args=([],))
    pool.close()
    pool.join()

if __name__ == '__main__':
    main()

Use a context manager to handle closing processes properly.
from multiprocessing import Pool

def main():
    with Pool() as p:
        result = p.apply_async(func, args=([],))
        print(result.get())  # wait for the task to finish and print its return value

if __name__ == '__main__':
    main()
I wasn't sure what you were doing with the for i in range() part.
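If the for i in range(24) loop was meant to submit 24 independent tasks, here is a minimal sketch of the same pattern inside the context manager (with a placeholder func, since the real worker isn't shown in the question); waiting on each AsyncResult also surfaces any exception a worker raised, which would otherwise pass silently with apply_async:
from multiprocessing import Pool

def func(items):
    # Placeholder for the question's CPU-bound worker.
    return len(items)

def main():
    with Pool() as p:
        # Submit all 24 tasks up front, then wait for each one.
        async_results = [p.apply_async(func, args=([],)) for _ in range(24)]
        for r in async_results:
            # .get() blocks until the task finishes and re-raises any
            # exception the worker hit, so failures are not silent.
            print(r.get())

if __name__ == '__main__':
    main()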

Related

How to call a linux command line program in parallel with python

I have a command-line program that runs on a single core. It takes an input file, does some calculations, and produces several files that I need to parse to store the output.
I have to call the program several times with different input files. To speed things up, I was thinking parallelization would be useful.
Until now I have performed this task by calling every run separately within a loop using the subprocess module.
I wrote a script that creates a new working folder for every run, then executes the program with its output directed to that folder, and returns some data which I need to store. My question is: how can I adapt the following code, found here, to execute my script always using the indicated number of CPUs, and store the output?
Note that each run has a unique running time.
Here is the mentioned code:
import subprocess
import multiprocessing as mp
from tqdm import tqdm

NUMBER_OF_TASKS = 4
progress_bar = tqdm(total=NUMBER_OF_TASKS)

def work(sec_sleep):
    command = ['python', 'worker.py', sec_sleep]
    subprocess.call(command)

def update_progress_bar(_):
    progress_bar.update()

if __name__ == '__main__':
    pool = mp.Pool(NUMBER_OF_TASKS)
    for seconds in [str(x) for x in range(1, NUMBER_OF_TASKS + 1)]:
        pool.apply_async(work, (seconds,), callback=update_progress_bar)
    pool.close()
    pool.join()
I am not entirely clear what your issue is. I have some recommendations for improvement below, but you seem to claim on the page that you link to that everything works as expected, and I don't see anything very wrong with the code as long as you are running on Linux.
Since the subprocess.call method already creates a new process, you should just be using multithreading to invoke your worker function, work. But had you been using multiprocessing on a platform that uses the spawn method to create new processes (such as Windows), then creating the progress bar outside the if __name__ == '__main__': block would have resulted in 4 additional progress bars that did nothing. Not good! So for portability it is best to move its creation inside the if __name__ == '__main__': block.
import subprocess
from multiprocessing.pool import ThreadPool
from tqdm import tqdm

def work(sec_sleep):
    command = ['python', 'worker.py', sec_sleep]
    subprocess.call(command)

def update_progress_bar(_):
    progress_bar.update()

if __name__ == '__main__':
    NUMBER_OF_TASKS = 4
    progress_bar = tqdm(total=NUMBER_OF_TASKS)
    pool = ThreadPool(NUMBER_OF_TASKS)
    for seconds in [str(x) for x in range(1, NUMBER_OF_TASKS + 1)]:
        pool.apply_async(work, (seconds,), callback=update_progress_bar)
    pool.close()
    pool.join()
Note: If your worker.py program prints to the console, it will mess up the progress bar (the progress bar will be re-written repeatedly on multiple lines).
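One way to keep the bar intact (a sketch, assuming you don't need the workers' console output on the terminal; the log file name here is hypothetical) is to redirect each subprocess's output to a per-run log file:
import subprocess

def work(sec_sleep):
    command = ['python', 'worker.py', sec_sleep]
    # Redirect stdout and stderr away from the terminal so the
    # tqdm progress bar is not re-drawn across multiple lines.
    with open(f'worker_{sec_sleep}.log', 'w') as log:
        subprocess.call(command, stdout=log, stderr=subprocess.STDOUT)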
Have you considered importing worker.py (some refactoring of that code might be necessary) instead of invoking a new Python interpreter to execute it? In that case you would want to be explicitly using multiprocessing. On Windows this might not save you anything, since a new Python interpreter is launched for each new process anyway, but it could save you something on Linux:
from multiprocessing.pool import Pool
from tqdm import tqdm

from worker import do_work

def update_progress_bar(_):
    progress_bar.update()

if __name__ == '__main__':
    NUMBER_OF_TASKS = 4
    progress_bar = tqdm(total=NUMBER_OF_TASKS)
    pool = Pool(NUMBER_OF_TASKS)
    for seconds in [str(x) for x in range(1, NUMBER_OF_TASKS + 1)]:
        pool.apply_async(do_work, (seconds,), callback=update_progress_bar)
    pool.close()
    pool.join()
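For that version to run, worker.py needs a do_work function that accepts the same string argument the command-line version received; the question doesn't show the real worker, so this stand-in is purely hypothetical:
# worker.py (hypothetical stand-in for the question's actual worker)
import time

def do_work(sec_sleep):
    # The subprocess version passed the value as a command-line string,
    # so keep the same interface and convert it here.
    time.sleep(int(sec_sleep))
    return sec_sleep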

Python3 Process.join() not actually waiting on Linux when the process is created in multi-thread

I need to put a timeout on a process that is created inside a thread, but I encountered some strange behavior and I'm not sure how to proceed.
The following code, executed on Linux, produces a weird bug where (if the number of threads is greater than 2 (my laptop has 8 cores) or the code is executed in a loop a few times) process.join() doesn't actually wait for the process to finish or for the timeout to expire, but just goes on to the next instruction.
If the same code is executed on Windows with Python 3.9, it gives a circular import error in the libraries for no apparent reason.
If it is executed with Python 3.8, it works almost perfectly up to around 256 threads, then gives the same strange behavior on process.join() as on Linux.
Error on Windows with Python 3.9:
ImportError: cannot import name 'Queue' from partially initialized module 'multiprocessing.queues' (most likely due to a circular import)
Furthermore, if I remove the return value from the process, and therefore the Queue, then on Linux process.join() starts working properly for arbitrarily large n_threads. However, running the code in a loop still produces the error even for very small n_threads.
import random
from multiprocessing import Process, Queue
from threading import Thread

def dummy_process():
    return random.randint(1, 10)

# Function to retrieve the process return value.
def process_returner(queue, function, args):
    queue.put(function(*args))

# Function that creates the process with a timeout.
def execute_with_timeout(function, args, timeout=3):
    q = Queue()
    p1 = Process(
        target=process_returner,
        args=(q, function, args),
        name="P",
    )
    p1.start()
    p1.join(timeout=timeout)  # SOMETIMES IT DOES NOT WAIT FOR THE PROCESS TO FINISH
    if p1.exitcode is None:
        print(f"Oops, {p1} timeouts!")  # SO IT RAISES THIS ERROR even if nowhere near 3 seconds have passed
        raise TimeoutError
    p1.terminate()
    return q.get() if not q.empty() else None

# Thread target that just calls the new process and stores the return value in the given array.
def dummy_thread(result_array, index):
    try:
        result_array[index] = execute_with_timeout(dummy_process, args=())
    except TimeoutError:
        pass

def test():
    # In a loop because with n_threads as low as 4 the error is not so common.
    for _ in range(10):
        n_threads = 8
        results = [-1] * n_threads
        threads = set()
        for i in range(n_threads):
            t = Thread(target=dummy_thread, args=(results, i))
            threads.add(t)
            t.start()
        for t in threads:
            t.join()
        print(results)

if __name__ == '__main__':
    test()
I ran into a similar problem when using the multiprocessing module on Linux. Process.join() started returning immediately instead of waiting. exitcode would be equal to None and is_alive() would return True.
It turns out the problem wasn't in the Python code. I was calling my Python program from a Bash script that would sometimes execute trap "" SIGCHLD. Normally, setting a trap only affects the script itself, but trap "" some_signal makes the shell's child processes ignore the signal as well, and ignoring SIGCHLD interferes with the multiprocessing module.
In my case, adding signal.signal(signal.SIGCHLD, signal.SIG_DFL) to the beginning of the Python program fixed the problem.
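As a minimal sketch of that fix (Linux/Unix only, since signal.SIGCHLD does not exist on Windows), restoring the default disposition before any processes are spawned looks like this:
import signal

# Restore default SIGCHLD handling in case the parent shell set it to
# "ignore" (e.g. via `trap "" SIGCHLD` in Bash) before starting Python.
signal.signal(signal.SIGCHLD, signal.SIG_DFL)

# ... the rest of the program that uses multiprocessing ...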

python multiprocessing does not run functions

I just want to see a simple implementation of multiprocessing on Windows, but it doesn't enter/run the functions, either in a Jupyter notebook or when running the saved .py file.
import time
import multiprocessing

s = [1, 4]

def subu(remo):
    s[remo-1] = remo*9
    print(f'here{remo}')
    return

if __name__ == "__main__":
    p1 = multiprocessing.Process(target=subu, args=[1])
    p2 = multiprocessing.Process(target=subu, args=[2])
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    # print("2222here")
    print(s)
    input()
The output from the .py file is:
[1, 4]
[1, 4]
and the output in the Jupyter notebook is:
[1, 4]
which I hoped would be:
here1
here2
[9, 18]
What's wrong with the code above? And what about this code:
import concurrent.futures

thread_num = 2
s = [1, 4]

def subu(remo):
    s[remo-1] = remo*9
    print(f'here{remo}')
    return

with concurrent.futures.ProcessPoolExecutor() as executor:
    ## or if __name__ == "__main__":
    ##     with concurrent.futures.ProcessPoolExecutor() as executor:
    results = [executor.submit(subu, i) for i in range(thread_num)]
    for f in concurrent.futures.as_completed(results):
        print(f.result())

input()
It does not run at all in Jupyter, raising the error:
BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
I kind of know I can't expect Jupyter to run multiprocessing, but the saved .py file also can't run it, and it exits without waiting for input().
There are a couple of potential problems. The worker function needs to be importable (at least on Windows) so that it can be found by the subprocess. And since the subprocess's memory isn't visible to the parent, the results need to be returned. So, put the worker in a separate module:
subumodule.py
def subu(remo):
    remo = remo*9
    print(f'here{remo}')
    return remo
Then use a process pool's existing infrastructure to return each worker's return value to the parent. You could do:
import multiprocessing

from subumodule import subu

if __name__ == "__main__":
    with multiprocessing.Pool(2) as pool:
        s = list(pool.map(subu, (1, 2)))  # here
    print(s)
    input()
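The same idea carries over to the concurrent.futures version from the question, provided the executor is created under the if __name__ == "__main__": guard and the worker returns its result instead of mutating a module-level list; a sketch using the subumodule.py worker above:
import concurrent.futures

from subumodule import subu

if __name__ == "__main__":
    with concurrent.futures.ProcessPoolExecutor(max_workers=2) as executor:
        # Each child process gets its own copy of module globals, so
        # collect return values here rather than relying on a shared list.
        results = list(executor.map(subu, (1, 2)))
    print(results)
    input()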

Python multiprocessing module not calling function

I have a program that needs to create several graphs, with each one often taking hours. Therefore I want to run these simultaneously on different cores, but cannot seem to get these processes to run with the multiprocessing module. Here is my code:
if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=full_graph)
        jobs.append(p)
        p.start()
        p.join()
(full_graph() has been defined earlier in the program, and is simply a function that runs a collection of other functions)
The function normally outputs some graphs and saves the data to .txt files (all data goes to the same two text files). However, calling the function with the above code gives no console output and nothing is written to the text files. All that happens is a pause of a few seconds, and then the program exits.
I am using the Spyder IDE with WinPython 3.6.3
Without a simple full_graph sample nobody can tell you what's happening. But your code is inherently wrong.
if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=full_graph)
        jobs.append(p)
        p.start()
        p.join()  # <- This would block until p is done
See the comment after p.join(). If your processes really take hours to complete, you would run one process for hours, then the 2nd, then the 3rd: serially, and using only a single core at a time.
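If you want to stick with Process objects rather than a Pool, the usual fix is to start all the processes first and only join them afterwards; a sketch (full_graph here is just a placeholder for the question's real function):
import multiprocessing

def full_graph():
    # Placeholder for the question's real graph-building function.
    print('building one graph')

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=full_graph)
        jobs.append(p)
        p.start()   # start all five processes first...
    for p in jobs:
        p.join()    # ...then wait for all of them to finish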
From the docs: https://docs.python.org/3/library/multiprocessing.html
Process.join: https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Process.join
If the optional argument timeout is None (the default), the method blocks until the process whose join() method is called terminates. If timeout is a positive number, it blocks at most timeout seconds. Note that the method returns None if its process terminates or if the method times out. Check the process’s exitcode to determine if it terminated.
If each process does something different, you should then also pass some args to full_graph (hint: might that be the missing factor?).
You probably want to use an interface like map from Pool
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool
And do (from the docs again)
from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    with Pool(5) as p:
        print(p.map(f, [1, 2, 3]))

How does Python multiprocessing work in the backend?

When I tried to run this code:
import multiprocessing

def worker():
    """worker function"""
    print 'Worker'
    return

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()
the output is blank; it simply executes without printing "Worker". How do I print the required output with multiprocessing?
What actually happens when using multiprocessing?
What is the maximum number of cores we can use for multiprocessing?
I've tried your code on Windows 7, Cygwin, and Ubuntu. For me, all the processes finish before the loop comes to an end, so I get all the prints to show, but using join() will guarantee all the processes have finished.
import multiprocessing

def worker():
    """worker function"""
    print 'Worker'
    return

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()
    for i in range(len(jobs)):
        jobs.pop().join()
As far as how multiprocessing works in the backend, I'm going to let someone more experienced than myself answer that one :) I'll probably just make a fool of myself.
I get "Worker" printed 5 times for my part. Are you on Python 3? If so, you must use print("Worker"). From my experiments, I think multithreading doesn't mean using multiple cores; it just runs the different threads alternately to give a form of parallelism. Try reading the multiprocessing library documentation for more info.
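On the question of how many cores are available: multiprocessing can start as many processes as you like, but only about as many as the machine has CPUs can run truly in parallel, and you can query that number directly (a quick check, not tied to any answer above):
import multiprocessing

if __name__ == '__main__':
    # Number of CPUs the OS reports; also the default size of
    # multiprocessing.Pool() when no argument is given.
    print(multiprocessing.cpu_count())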
