External programs keep running after the multiprocessing Python script is closed

In Python 3.5, I started running external executables (written in C++) via multiprocessing.Pool.map + subprocess from an Xshell connection. However, the Xshell connection was interrupted due to a bad internet connection.
After reconnecting, I see that the managing Python process is gone but the C++ executables are still running (and apparently correctly; the Pool still seems to control them).
The question is whether this is a bug, and what I should do in this case. I cannot kill or kill -9 them.
Update: after removing all the sublst_file files by hand, all the running executables (cmd) are gone. It seems the except sub.SubprocessError as e: branch is still working.
The basic structure of my program is outlined below.
import subprocess as sub
import multiprocessing as mp
import itertools as it
import os
import time

def chunks(lst, chunksize=5):
    return it.zip_longest(*[iter(lst)]*chunksize)

class Work():
    def __init__(self, lst):
        self.lst = lst

    def _work(self, sublst):
        retry_times = 6
        for i in range(retry_times):
            try:
                cmd = 'my external c++ cmd'
                sublst_file = 'a config file generated from sublst'
                sub.check_call([cmd, sublst_file])
                os.remove(sublst_file)
                return sublst  # return success sublst
            except sub.SubprocessError as e:
                if i == (retry_times-1):
                    print('\n[ERROR] %s %s failed after %d tries\n' % (cmd, sublst_file, retry_times))
                    return []
                else:
                    print('\n[WARNING] %dth try failed, sleeping before restart\n' % (i+1))
                    time.sleep(1+i)

    def work(self):
        with mp.Pool(4) as pool:
            results = pool.map(self._work, chunks(self.lst, 5))
            for r in it.chain(results):
                # other work on success items
                print(r)

multiprocessing.Pool does terminate its workers upon terminate(), which is also called by __del__, which in turn is called on module unload (at exit).
The reason these processes are orphaned is that the processes spawned by subprocess.check_call are not terminated on exit.
This is not stated explicitly in the reference documentation, but nothing there indicates that spawned processes are terminated. A brief review of the source code also turned up nothing. The behavior is also easy to test.
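A minimal way to test it (a sketch, assuming a POSIX sleep command is available):

import multiprocessing as mp
import subprocess as sub

def spawn_sleep(_):
    # launch a long-running external command and return immediately
    sub.Popen(["sleep", "300"])

if __name__ == "__main__":
    with mp.Pool(2) as pool:
        pool.map(spawn_sleep, range(2))
    # The pool and its workers are gone once the script exits,
    # but `ps` will still show the two `sleep 300` processes.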
To clean up on parent termination, use the Popen interface together with this answer: Killing child process when parent crashes in python.
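For reference, a hedged Linux-only sketch of that approach: set prctl(PR_SET_PDEATHSIG, ...) in a preexec_fn so the external command receives SIGTERM when its parent dies. The command and config-file strings are the question's own placeholders; macOS and Windows need a different mechanism, such as a watchdog that polls the parent PID.

import ctypes
import signal
import subprocess as sub

libc = ctypes.CDLL("libc.so.6", use_errno=True)
PR_SET_PDEATHSIG = 1  # from <linux/prctl.h>

def _set_pdeathsig():
    # Runs in the child between fork() and exec(): ask the kernel to deliver
    # SIGTERM to this child when its parent (the pool worker) dies.
    libc.prctl(PR_SET_PDEATHSIG, int(signal.SIGTERM))

proc = sub.Popen(['my external c++ cmd', 'a config file generated from sublst'],
                 preexec_fn=_set_pdeathsig)
proc.wait()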

Related

Is there a problem with Pipes in Python multiprocessing on macOS?

I'm encountering some strange behavior with Pipe in Python multiprocessing on my Mac (Intel, Monterey). I've tried the following code in 3.7 and 3.11 and in both cases, not all the tasks are executed.
def _mp_job(nth, child):
    print("Nth is", nth)

if __name__ == "__main__":
    from multiprocessing import Pool, Pipe, set_start_method, log_to_stderr
    import logging, time
    set_start_method("spawn")
    logger = log_to_stderr()
    logger.setLevel(logging.DEBUG)
    with Pool(processes = 10) as mp_pool:
        jobs = []
        for i in range(20):
            parent, child = Pipe()
            # child = None
            r = mp_pool.apply_async(_mp_job, args = (i, child))
            jobs.append(r)
        while jobs:
            new_jobs = []
            for job in jobs:
                if not job.ready():
                    new_jobs.append(job)
            jobs = new_jobs
            print("%d jobs remaining" % len(jobs))
            time.sleep(1)
I know exactly what's going on, but I don't know why.
[EDITED: my explanation for what was happening was quite unclear on my first pass, as reflected in the comments, so I've cleaned it up. Thanks for your patience.]
If I run this code on my macOS Monterey machine, it will loop forever, reporting that some number of jobs are remaining. The logging information reveals that the child processes are failing; you'll see a number of lines like this:
[DEBUG/SpawnPoolWorker-10] worker got EOFError or OSError -- exiting
What's happening is that when the child worker dequeues a job and tries to unpickle the argument list, it encounters ConnectionRefusedError when unpickling the child connection side of the Pipe in the arguments (I know these details not because of the output of the function above, but because I inserted a traceback printout at the point in the Python multiprocessing library where the worker reports encountering the OSError). At that point the worker fails, having removed the job from the work queue but not having completed it. That's why I have # child = None in there; if I uncomment that, everything works fine.
My first suspicion is that this is a bug in Python on macOS (I haven't tested this on other platforms, but it makes no sense to me that something this basic would have been missed unless it's a platform-specific error). I don't understand why the child process would get ConnectionRefusedError, since the Pipe establishes a socket pair and you shouldn't be able to get ConnectionRefusedError in that case, as far as I understand.
This seems more likely to happen the more processes I have in the pool. If I have 2, it seems to work reliably. But 4 or more seem to cause a problem; I have a six-core computer, so I don't think that's part of what's happening.
Does anyone have any insight into this? Am I doing something obviously wrong?
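One commonly suggested workaround, sketched here under the assumption that the goal is simply to get results back from the pool workers (it does not explain the macOS behaviour): avoid passing raw Pipe connection ends through apply_async and use a Manager().Queue() instead, since manager proxies are designed to be shared with pool workers.

from multiprocessing import Pool, Manager

def _mp_job(nth, q):
    q.put(("done", nth))
    print("Nth is", nth)

if __name__ == "__main__":
    with Manager() as manager:
        q = manager.Queue()  # a proxy object, safe to pass to pool workers
        with Pool(processes=10) as mp_pool:
            jobs = [mp_pool.apply_async(_mp_job, args=(i, q)) for i in range(20)]
            for job in jobs:
                job.wait()
        while not q.empty():
            print(q.get())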

Python 3 Process.join() not actually waiting on Linux when the process is created inside a thread

I need to put a timeout on a process that is created inside a thread; however, I encountered some strange behavior and I'm not sure how to proceed.
The following code, executed on Linux, produces a weird bug where (if the number of threads is greater than 2 (my laptop has 8 cores) or the code is executed in a loop a few times) process.join() doesn't actually wait for the process to finish or the timeout to expire, but just goes on to the next instruction.
If the same code is executed on Windows with Python 3.9, it gives a circular import error in the libraries for no apparent reason.
If it is executed with Python 3.8, it works almost perfectly up to about 256 threads, then shows the same strange behavior on process.join() as on Linux.
Error on Windows with Python 3.9:
ImportError: cannot import name 'Queue' from partially initialized module 'multiprocessing.queues' (most likely due to a circular import)
Furthermore, if I remove the return value from the process (and thus the Queue), then on Linux process.join() starts working properly for arbitrarily large n_threads. However, running the code in a loop still gives the error even for very small n_threads.
import random
from multiprocessing import Process, Queue
from threading import Thread

def dummy_process():
    return random.randint(1, 10)

# function to retrieve the process return value
def process_returner(queue, function, args):
    queue.put(function(*args))

# function that creates the process with a timeout
def execute_with_timeout(function, args, timeout=3):
    q = Queue()
    p1 = Process(
        target=process_returner,
        args=(q, function, args),
        name="P",
    )
    p1.start()
    p1.join(timeout=timeout)  # SOMETIMES IT DOES NOT WAIT FOR THE PROCESS TO FINISH
    if p1.exitcode is None:
        print(f"Oops, {p1} timeouts!")  # SO IT RAISES THIS ERROR even if nowhere near 3 seconds have passed
        raise TimeoutError
    p1.terminate()
    return q.get() if not q.empty() else None

# thread that just calls the new process and stores the return value in the given array
def dummy_thread(result_array, index):
    try:
        result_array[index] = execute_with_timeout(dummy_process, args=())
    except TimeoutError:
        pass

def test():
    # in a loop because with n_threads as low as 4 the error is not so common
    for _ in range(10):
        n_threads = 8
        results = [-1] * n_threads
        threads = set()
        for i in range(n_threads):
            t = Thread(target=dummy_thread, args=(results, i))
            threads.add(t)
            t.start()
        for t in threads:
            t.join()
        print(results)

if __name__ == '__main__':
    test()
I ran into a similar problem when using the multiprocessing module on Linux. Process.join() started returning immediately instead of waiting. exitcode would be equal to None and is_alive() would return True.
It turns out the problem wasn't in the Python code. I was calling my Python program from a Bash script that would sometimes execute trap "" SIGCHLD. Normally, setting trap only affects the script itself, but trap "" some_signal tells the shell's child processes to ignore the signal as well. Blocking SIGCHLD interferes with the multiprocessing module.
In my case, adding signal.signal(signal.SIGCHLD, signal.SIG_DFL) to the beginning of the Python program fixed the problem.
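A minimal sketch of where that fix goes, assuming a POSIX system (signal.SIGCHLD does not exist on Windows):

import signal
# Restore default SIGCHLD handling before multiprocessing waits on children;
# this undoes an inherited `trap "" SIGCHLD` from the calling shell.
signal.signal(signal.SIGCHLD, signal.SIG_DFL)

from multiprocessing import Process

def work():
    pass

if __name__ == "__main__":
    p = Process(target=work)
    p.start()
    p.join()           # now blocks and returns a real exit code again
    print(p.exitcode)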

How to call a method from a different class using a multiprocessing pool in Python

How do I call a method from a different class (in a different module) using a multiprocessing Pool in Python?
My aim is to start a process which keeps running until some task is provided, and once the task is completed it goes back to waiting mode.
Below is the code, which has three modules. The Reader class is my runtime task; I hand execution of its reader method to ProcessExecutor.
ProcessExecutor is the process pool; it runs a while loop until some task is provided to it.
The main module initiates everything.
Module 1
class Reader(object):
    def __init__(self, message):
        self.message = message

    def reader(self):
        print self.message
Module 2
class ProcessExecutor():
    def run(self, queue):
        print 'Before while loop'
        while True:
            print 'Reached Run'
            try:
                pair = queue.get()
                print 'Running process'
                print pair
                func = pair.get('target')
                arguments = pair.get('args', None)
                if arguments is None:
                    func()
                else:
                    func(arguments)
                queue.task_done()
            except Exception:
                print Exception.message
main Module
from process_helper import ProcessExecutor
from reader import Reader
import multiprocessing
import Queue

if __name__=='__main__':
    queue = Queue.Queue()
    myReader = Reader('Hi')
    ps = ProcessExecutor()
    pool = multiprocessing.Pool(2)
    pool.apply_async(ps.run, args=(queue, ))
    param = {'target': myReader.reader}
    queue.put(param)
The code executes without any error:
C:\Python27\python.exe C:/Users/PycharmProjects/untitled1/main/main.py
Process finished with exit code 0
The code executes, but it never reaches the run method. I am not sure whether it is possible to call a method of a different class using multiple processes or not.
I tried apply_async, map, and apply, but none of them work.
All the examples I found online call the target method from the script where the main method is implemented.
I am using Python 2.7.
Please help.
Your first problem is that you just exit without waiting on anything. You have a Pool, a Queue, and an AsyncResult, but you just ignore all of them and exit as soon as you've created them. You should be able to get away with only waiting on the AsyncResult (after that, there's no more work to do, so who cares what you abandon), except for the fact that you're trying to use Queue.task_done, which doesn't make any sense without a Queue.join on the other side, so you need to wait on that as well.
Your second problem is that you're using the Queue from the Queue module, instead of the one from the multiprocessing module. The Queue module only works across threads in the same process.
Also, you can't call task_done on a plain Queue; that's only a method for the JoinableQueue subclass.
Once you've gotten to the point where the pool tries to actually run a task, you will get the problem that bound methods can't be pickled unless you write a pickler for them. Doing that is a pain, even though it's the right way. The traditional workaround—hacky and cheesy, but everyone did it, and it works—is to wrap each method you want to call in a top-level function. The modern solution is to use the third-party dill or cloudpickle libraries, which know how to pickle bound methods, and how to hook into multiprocessing. You should definitely look into them. But, to keep things simple, I'll show you the workaround.
Notice that, because you've created an extra queue to pass methods onto, in addition to the one built into the pool, you'll need the workaround for both targets.
With these problems fixed, your code looks like this:
from process_helper import ProcessExecutor
from reader import Reader
import multiprocessing

def call_run(ps):
    ps.run(queue)

def call_reader(reader):
    return reader.reader()

if __name__=='__main__':
    queue = multiprocessing.JoinableQueue()
    myReader = Reader('Hi')
    ps = ProcessExecutor()
    pool = multiprocessing.Pool(2)
    res = pool.apply_async(call_run, args=(ps,))
    param = {'target': call_reader, 'args': myReader}
    queue.put(param)
    print res.get()
    queue.join()
You have additional bugs beyond this in your ProcessExecutor, but I'm not going to debug everything for you. This gets you past the initial hurdles, and shows the answer to the specific question you were asking about. Also, I'm not sure what the point of all that code is. You seem to be trying to replace what Pool already does, on top of Pool, only in a more complicated but less powerful way, but I'm not entirely sure.
Meanwhile, here's a program that does what I think you want, with no problems, by just throwing away that ProcessExecutor and everything that goes with it:
from reader import Reader
import multiprocessing

def call_reader(reader):
    return reader.reader()

if __name__=='__main__':
    myReader = Reader('Hi')
    pool = multiprocessing.Pool(2)
    res = pool.apply_async(call_reader, args=(myReader,))
    print res.get()
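As a footnote to the dill/cloudpickle route mentioned earlier, a hedged sketch of what it buys you (dill is a third-party package, installed separately): it can serialize bound methods directly, so the wrapper functions become unnecessary if you swap it in for plain pickle when building the queue payload.

import dill
from reader import Reader

myReader = Reader('Hi')
payload = dill.dumps(myReader.reader)  # a bound method, which plain pickle rejects
restored = dill.loads(payload)
restored()                             # prints 'Hi'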

Running multiple independent python scripts concurrently

My goal is to create one main Python script that executes multiple independent Python scripts on Windows Server 2012 at the same time. One of the benefits in my mind is that I can point Task Scheduler at one main.py script as opposed to multiple .py scripts. My server has 1 CPU. I have read about multiprocessing, threading & subprocess, which only added to my confusion a bit. I am basically running multiple trading scripts for different stock symbols, all at the same time after market open at 9:30 EST. The following is my attempt, but I have no idea whether it is right. Any direction/feedback is highly appreciated!
import subprocess
subprocess.Popen(["python", '1.py'])
subprocess.Popen(["python", '2.py'])
subprocess.Popen(["python", '3.py'])
subprocess.Popen(["python", '4.py'])
I think I'd try to do it like this:
from multiprocessing import Pool

def do_stuff_with_stock_symbol(symbol):
    return _call_api()

if __name__ == '__main__':
    symbols = ["GOOG", "APPL", "TSLA"]
    p = Pool(len(symbols))
    results = p.map(do_stuff_with_stock_symbol, symbols)
    print(results)
(Modified example from multiprocessing introduction: https://docs.python.org/3/library/multiprocessing.html#introduction)
Consider using a constant pool size if you deal with a lot of stock symbols, because every python process will use some amount of memory.
Also, please note that using threads might be a lot better if you are dealing with an I/O bound workload (calling an API, writing and reading from disk). Processes really become necessary with python when dealing with compute bound workloads (because of the global interpreter lock).
An example using threads and the concurrent futures library would be:
import concurrent.futures

TIMEOUT = 60

def do_stuff_with_stock_symbol(symbol, timeout):
    return _call_api()

if __name__ == '__main__':
    symbols = ["GOOG", "APPL", "TSLA"]
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(symbols)) as executor:
        results = {executor.submit(do_stuff_with_stock_symbol, symbol, TIMEOUT): symbol for symbol in symbols}
        for future in concurrent.futures.as_completed(results):
            symbol = results[future]
            try:
                data = future.result()
            except Exception as exc:
                print('{} generated an exception: {}'.format(symbol, exc))
            else:
                print('stock symbol: {}, result: {}'.format(symbol, data))
(Modified example from: https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor-example)
Note that threads will still use some memory, but less than processes.
You could use asyncio or green threads if you want to reduce memory consumption per stock symbol to a minimum, but at some point you will run into network bandwidth problems because of all the concurrent API calls :)
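For illustration, a minimal asyncio sketch; _call_api_async below is a hypothetical coroutine standing in for a real async HTTP client such as aiohttp or httpx:

import asyncio

async def _call_api_async(symbol):
    await asyncio.sleep(0.1)  # placeholder for a real async HTTP call
    return {"symbol": symbol, "price": 0.0}

async def main():
    symbols = ["GOOG", "APPL", "TSLA"]
    # run all API calls concurrently on one thread
    results = await asyncio.gather(*(_call_api_async(s) for s in symbols))
    print(results)

if __name__ == "__main__":
    asyncio.run(main())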
While what you're asking for might not be the best way to handle what you're doing, I've wanted to do similar things in the past, and it took a while to find what I needed, so to answer your question:
I'm not promising this to be the "best" way to do it, but it worked in my use case.
I created a class I wanted to use to extend threading.
thread.py
"""
Extends threading.Thread giving access to a Thread object which will accept
A thread_id, thread name, and a function at the time of instantiation. The
function will be called when the threads start() method is called.
"""
import threading
class Thread(threading.Thread):
def __init__(self, thread_id, name, func):
threading.Thread.__init__(self)
self.threadID = thread_id
self.name = name
# the function that should be run in the thread.
self.func = func
def run(self):
return self.func()
I needed some work done that was part of another package
work_module.py
import...

def func_that_does_work():
    # do some work
    pass

def more_work():
    # do some work
    pass
Then the main script I wanted to run
main.py
from thread import Thread
import work_module as wm

mythreads = []
mythreads.append(Thread(1, "a_name", wm.func_that_does_work))
mythreads.append(Thread(2, "another_name", wm.more_work))

for t in mythreads:
    t.start()
The threads die when run() returns. Since this extends Thread from threading, there are several options available in the docs here: https://docs.python.org/3/library/threading.html
If all you're looking to do is automate the startup, creating a .bat file is a great and simple alternative to trying to do it with another Python script.
The example linked in the comments shows how to do it with bash on Unix-based machines, but batch files can do a very similar thing with the START command:
start_py.bat:
START "" /B "path\to\python.exe" "path\to\script_1.py"
START "" /B "path\to\python.exe" "path\to\script_2.py"
START "" /B "path\to\python.exe" "path\to\script_3.py"
The full syntax for START can be found here.
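If you prefer to stay with the pure-Python approach from the question, a hedged variant of that main.py which also waits for the children, so Task Scheduler sees a single process that runs until every script finishes (the script names are the question's own placeholders):

import subprocess
import sys

scripts = ["1.py", "2.py", "3.py", "4.py"]
# launch every script with the same interpreter that runs main.py
procs = [subprocess.Popen([sys.executable, script]) for script in scripts]
for proc in procs:
    proc.wait()  # keep main.py alive until all children have exited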

How to use multiprocessing with class instances in Python?

I am trying to create a class that can run a separate process to go do some work that takes a long time, launch a bunch of these from a main module, and then wait for them all to finish. I want to launch the processes once and then keep feeding them things to do rather than creating and destroying processes. For example, maybe I have 10 servers running the dd command, then I want them all to scp a file, etc.
My ultimate goal is to create a class for each system that keeps track of the information for the system it is tied to, like IP address, logs, runtime, etc. But that class must be able to launch a system command and then return execution back to the caller while that system command runs, to follow up with the result of the system command later.
My attempt is failing because I cannot send an instance method of a class over the pipe to the subprocess via pickle. Those are not pickleable. I therefore tried to fix it various ways but I can't figure it out. How can my code be patched to do this? What good is multiprocessing if you can't send over anything useful?
Is there any good documentation of multiprocessing being used with class instances? The only way I can get the multiprocessing module to work is on simple functions. Every attempt to use it within a class instance has failed. Maybe I should pass events instead? I don't understand how to do that yet.
import multiprocessing
import sys
import re

class ProcessWorker(multiprocessing.Process):
    """
    This class runs as a separate process to execute worker's commands in parallel
    Once launched, it remains running, monitoring the task queue, until "None" is sent
    """

    def __init__(self, task_q, result_q):
        multiprocessing.Process.__init__(self)
        self.task_q = task_q
        self.result_q = result_q
        return

    def run(self):
        """
        Overloaded function provided by multiprocessing.Process. Called upon start() signal
        """
        proc_name = self.name
        print '%s: Launched' % (proc_name)
        while True:
            next_task_list = self.task_q.get()
            if next_task is None:
                # Poison pill means shutdown
                print '%s: Exiting' % (proc_name)
                self.task_q.task_done()
                break
            next_task = next_task_list[0]
            print '%s: %s' % (proc_name, next_task)
            args = next_task_list[1]
            kwargs = next_task_list[2]
            answer = next_task(*args, **kwargs)
            self.task_q.task_done()
            self.result_q.put(answer)
        return
# End of ProcessWorker class

class Worker(object):
    """
    Launches a child process to run commands from derived classes in separate processes,
    which sit and listen for something to do
    This base class is called by each derived worker
    """

    def __init__(self, config, index=None):
        self.config = config
        self.index = index

        # Launch the ProcessWorker for anything that has an index value
        if self.index is not None:
            self.task_q = multiprocessing.JoinableQueue()
            self.result_q = multiprocessing.Queue()

            self.process_worker = ProcessWorker(self.task_q, self.result_q)
            self.process_worker.start()
            print "Got here"
            # Process should be running and listening for functions to execute
        return

    def enqueue_process(target):  # No self, since it is a decorator
        """
        Used to place a command target from this class object into the task_q
        NOTE: Any function decorated with this must use fetch_results() to get the
        target task's result value
        """
        def wrapper(self, *args, **kwargs):
            self.task_q.put([target, args, kwargs])  # FAIL: target is a class instance method and can't be pickled!
        return wrapper

    def fetch_results(self):
        """
        After all processes have been spawned by multiple modules, this command
        is called on each one to retrieve the results of the call.
        This blocks until the execution of the item in the queue is complete
        """
        self.task_q.join()          # Wait for it to finish
        return self.result_q.get()  # Return the result

    @enqueue_process
    def run_long_command(self, command):
        print "I am running number % as process "%number, self.name

        # In here, I will launch a subprocess to run a long-running system command
        # p = Popen(command), etc
        # p.wait(), etc
        return

    def close(self):
        self.task_q.put(None)
        self.task_q.join()

if __name__ == '__main__':
    config = ["some value", "something else"]
    index = 7

    workers = []
    for i in range(5):
        worker = Worker(config, index)
        worker.run_long_command("ls /")
        workers.append(worker)

    for worker in workers:
        worker.fetch_results()

    # Do more work... (this would actually be done in a distributor in another class)

    for worker in workers:
        worker.close()
Edit: I tried to move the ProcessWorker class and the creation of the multiprocessing queues outside of the Worker class, and then tried to manually pickle the worker instance. Even that doesn't work, and I get the error:
RuntimeError: Queue objects should only be shared between processes through inheritance
But I am only passing references to those queues into the worker instance?? I am missing something fundamental. Here is the modified code from the main section:
if __name__ == '__main__':
    config = ["some value", "something else"]
    index = 7

    workers = []
    for i in range(1):
        task_q = multiprocessing.JoinableQueue()
        result_q = multiprocessing.Queue()
        process_worker = ProcessWorker(task_q, result_q)
        worker = Worker(config, index, process_worker, task_q, result_q)
        something_to_look_at = pickle.dumps(worker)  # FAIL: Doesn't like queues??
        process_worker.start()
        worker.run_long_command("ls /")
So, the problem was that I was assuming that Python was doing some sort of magic that is somehow different from the way that C++/fork() works. I somehow thought that Python only copied the class, not the whole program into a separate process. I seriously wasted days trying to get this to work because all of the talk about pickle serialization made me think that it actually sent everything over the pipe. I knew that certain things could not be sent over the pipe, but I thought my problem was that I was not packaging things up properly.
This all could have been avoided if the Python docs gave me a 10,000 ft view of what happens when this module is used. Sure, it tells me what the methods of multiprocess module does and gives me some basic examples, but what I want to know is what is the "Theory of Operation" behind the scenes! Here is the kind of information I could have used. Please chime in if my answer is off. It will help me learn.
When you start a process using this module, the whole program is copied into another process. But since it is not the "__main__" process and my code was checking for that, it doesn't fire off yet another process infinitely. It just stops and sits out there waiting for something to do, like a zombie. Everything that was initialized in the parent at the time of calling multiprocessing.Process() is all set up and ready to go. Once you put something in the multiprocessing.Queue, or shared memory, or a pipe, etc. (however you are communicating), the separate process receives it and gets to work. It can draw upon all imported modules and setup just as if it were the parent. However, once some internal state variables change in the parent or the separate process, those changes are isolated. Once the process is spawned, it becomes your job to keep them in sync if necessary, through a queue, pipe, shared memory, etc.
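A tiny sketch of that isolation (the exact mechanics differ between fork and spawn, but the observable behavior here is the same):

import multiprocessing

counter = 0

def bump():
    global counter
    counter += 1
    print("in child: %d" % counter)   # the child's private copy becomes 1

if __name__ == '__main__':
    p = multiprocessing.Process(target=bump)
    p.start()
    p.join()
    print("in parent: %d" % counter)  # still 0; use a Queue, Pipe, or Value to share state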
I threw out the code and started over, but now I am only putting one extra function out in the ProcessWorker, an "execute" method that runs a command line. Pretty simple. I don't have to worry about launching and then closing a bunch of processes this way, which has caused me all kinds of instability and performance issues in the past in C++. When I switched to launching processes at the beginning and then passing messages to those waiting processes, my performance improved and it was very stable.
BTW, I looked at this link to get help, which threw me off because the example made me think that methods were being transported across the queues: http://www.doughellmann.com/PyMOTW/multiprocessing/communication.html
The second example of the first section used "next_task()" that appeared (to me) to be executing a task received via the queue.
Instead of attempting to send a method itself (which is impractical), try sending a name of a method to execute.
Provided that each worker runs the same code, it's a matter of a simple getattr(self, task_name).
I'd pass tuples (task_name, task_args), where task_args were a dict to be directly fed to the task method:
next_task_name, next_task_args = self.task_q.get()
if next_task_name:
    task = getattr(self, next_task_name)
    answer = task(**next_task_args)
    ...
else:
    # poison pill, shut down
    break
REF: https://stackoverflow.com/a/14179779
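For completeness, a self-contained toy version of that name-based dispatch, mirroring the question's ProcessWorker but with an illustrative say() method (the method name and arguments are assumptions, not the poster's code):

import multiprocessing

class ProcessWorker(multiprocessing.Process):
    def __init__(self, task_q, result_q):
        multiprocessing.Process.__init__(self)
        self.task_q = task_q
        self.result_q = result_q

    def say(self, text):
        # an example task method; anything reachable via getattr(self, name) works
        return "%s says %s" % (self.name, text)

    def run(self):
        while True:
            task_name, task_args = self.task_q.get()
            if task_name is None:  # poison pill
                self.task_q.task_done()
                break
            self.result_q.put(getattr(self, task_name)(**task_args))
            self.task_q.task_done()

if __name__ == "__main__":
    task_q = multiprocessing.JoinableQueue()
    result_q = multiprocessing.Queue()
    worker = ProcessWorker(task_q, result_q)
    worker.start()
    task_q.put(("say", {"text": "hello"}))  # only a string and a dict cross the queue
    task_q.put((None, None))
    task_q.join()
    print(result_q.get())
    worker.join()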
Answer on Jan 6 at 6:03 by David Lynch is not factually correct when he says that he was misled by
http://www.doughellmann.com/PyMOTW/multiprocessing/communication.html.
The code and examples provided are correct and work as advertised. next_task() is indeed executing a task received via the queue; try to understand what the Task.__call__() method is doing.
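For context, the PyMOTW pattern being referred to, reduced to a hedged sketch (paraphrased from memory, so treat the details as approximate): the queue carries picklable Task instances, and the consumer's next_task() call invokes Task.__call__().

class Task(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __call__(self):
        # This is the work that runs inside the consumer process when it
        # calls next_task() on the object it pulled off the queue.
        return '%s * %s = %s' % (self.a, self.b, self.a * self.b)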
In my case, what tripped me up was syntax errors in my implementation of run(). It seems that the sub-process will not report these and just fails silently, leaving things stuck in weird loops! Make sure you have some kind of syntax checker running, e.g. Flymake/Pyflakes in Emacs.
Debugging via multiprocessing.log_to_stderr() helped me narrow down the problem.
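For reference, that debugging hook is a two-line setup (also used in the macOS question above):

import logging
import multiprocessing

logger = multiprocessing.log_to_stderr()
logger.setLevel(logging.DEBUG)  # worker-level log lines now go to stderr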
