Python multiprocessing

This question is more fact finding and thought process than code oriented.
I have many compiled C++ programs that I need to run at different times and with different parameters. I'm looking at using Python multiprocessing to read a job from a job queue (RabbitMQ) and then feed that job to a C++ program to run (maybe via subprocess). I was looking at the multiprocessing module because this will all run on a dual Xeon server, so I want to take full advantage of the multiprocessor ability of my server.
The Python program would be the central manager and would simply read jobs from the queue, spawn a process (or subprocess?) with the appropriate C++ program to run the job, get the results (subprocess stdout & stderr), feed that to a callback and put the process back in a queue of processes waiting for the next job to run.
First, does this sound like a valid strategy?
Second, are there any examples of something similar to this?
Thank you in advance.

The Python program would be the central manager and would simply read jobs from the queue, spawn a process (or subprocess?) with the appropriate C++ program to run the job, get the results (subprocess stdout & stderr), feed that to a callback and put the process back in a queue of processes waiting for the next job to run.
You don't need the multiprocessing module for this. The multiprocessing module is good for running Python functions as separate processes. To run a C++ program and read results from stdout, you'd only need the subprocess module. The queue could be a list, and your Python program would simply loop while the list is non-empty.
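For illustration, here is a minimal sketch of that subprocess-only loop; the ./my_cpp_tool binary and the handle_result callback are made-up placeholders, not from the question:

import subprocess

def handle_result(job, stdout, stderr):
    # placeholder callback: do something with the job's output
    print(job, stdout, stderr)

# hypothetical job list; in practice this would be filled from RabbitMQ
jobs = [['./my_cpp_tool', '--param', '1'],
        ['./my_cpp_tool', '--param', '2']]

while jobs:
    job = jobs.pop(0)
    # run one C++ program to completion and capture its output
    proc = subprocess.run(job, capture_output=True, text=True)
    handle_result(job, proc.stdout, proc.stderr)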
However, if you want to

- spawn multiple worker processes
- have them read from a common queue
- use the arguments from the queue to spawn C++ programs (in parallel)
- use the output of the C++ programs to put new items in the queue

then you could do it with multiprocessing like this:
test.py:
import multiprocessing as mp
import subprocess
import shlex

def worker(q):
    while True:
        # Get an argument from the queue
        x = q.get()
        # You might change this to run your C++ program
        proc = subprocess.Popen(
            shlex.split('python test2.py {x}'.format(x=x)),
            stdout=subprocess.PIPE, universal_newlines=True)
        out, err = proc.communicate()
        out = out.strip()
        print('{name}: using argument {x} outputs {o}'.format(
            x=x, name=mp.current_process().name, o=out))
        q.task_done()
        # Put a new argument into the queue
        q.put(int(out))

def main():
    q = mp.JoinableQueue()
    # Put some initial values into the queue
    for t in range(1, 3):
        q.put(t)
    # Create and start a pool of worker processes
    for i in range(3):
        p = mp.Process(target=worker, args=(q,))
        p.daemon = True
        p.start()
    q.join()
    print("Finished!")

if __name__ == '__main__':
    main()
test2.py (a simple substitute for your C++ program):
import time
import sys
x=int(sys.argv[1])
time.sleep(0.5)
print(x+3)
Running test.py might yield something like this:
Process-1: using argument 1 outputs 4
Process-3: using argument 3 outputs 6
Process-2: using argument 2 outputs 5
Process-3: using argument 6 outputs 9
Process-1: using argument 4 outputs 7
Process-2: using argument 5 outputs 8
Process-3: using argument 9 outputs 12
Process-1: using argument 7 outputs 10
Process-2: using argument 8 outputs 11
Process-1: using argument 10 outputs 13
Notice that the numbers in the right-hand column are fed back into the queue, and are (eventually) used as arguments to test2.py and show up as numbers in the left-hand column.

First, does this sound like a valid strategy?
Yes.
Second, are there any examples of something similar to this?
Celery
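Since the jobs already come from RabbitMQ, a minimal sketch of a Celery task wrapping a compiled program could look like the following; the broker URL, the module name jobs.py and the ./my_cpp_tool path are assumptions for illustration only:

import subprocess
from celery import Celery

# assumed RabbitMQ broker URL; adjust to your setup
app = Celery('jobs', broker='amqp://guest@localhost//')

@app.task
def run_cpp_job(args):
    # run the (hypothetical) compiled program and return its output
    proc = subprocess.run(['./my_cpp_tool'] + list(args),
                          capture_output=True, text=True)
    return {'returncode': proc.returncode,
            'stdout': proc.stdout,
            'stderr': proc.stderr}

Assuming this lives in jobs.py, a worker started with "celery -A jobs worker" would pick the tasks up from the queue, and callers would enqueue work with run_cpp_job.delay(['--param', '1']).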

Sounds like a good strategy, but you don't need the multiprocessing module for it; the subprocess module is enough. subprocess is for running child processes from a Python program and interacting with them (stdin, stdout, pipes, etc.), while multiprocessing is more about distributing Python code to run in multiple processes to gain performance through parallelism.
Depending on the responsiveness strategy, you may also want to look at threading for launching subprocesses from a thread. This will allow you to wait on one subprocess while still being responsive on the queue to accept other jobs.
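As a rough illustration of that idea (the ./my_cpp_tool command and the queue contents are made up), each job could be handed to a thread that blocks on the child process while the main loop keeps pulling jobs:

import queue
import subprocess
import threading

def run_job(cmd, on_done):
    # communicate() blocks in this thread only; the main loop stays free
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, text=True)
    out, err = proc.communicate()
    on_done(cmd, out, err)

def on_done(cmd, out, err):
    print('finished', cmd, out, err)

jobs = queue.Queue()
jobs.put(['./my_cpp_tool', '--param', '1'])  # hypothetical job
jobs.put(None)                               # sentinel: no more jobs

while True:
    job = jobs.get()
    if job is None:
        break
    threading.Thread(target=run_job, args=(job, on_done)).start()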

Related

Python subprocess hang when a lot of executable are called

I have a problem with using Python to run Windows executables in parallel.
I will explain my problem in more detail.
I was able to write some code that creates a number of threads equal to the number of cores. Each thread executes the following function, which starts the executable using subprocess.Popen().
The executables are unit tests for an application. The tests use the gtest library. As far as I know they just read and write on the file system.
def _execute(self, test_file_path) -> None:
    test_path = self._get_test_path_without_extension(test_file_path)
    process = subprocess.Popen(test_path,
                               shell=False,
                               stdout=sys.stdout,
                               stderr=sys.stderr,
                               universal_newlines=True)
    try:
        process.communicate(timeout=TEST_TIMEOUT_IN_SECONDS)
        if process.returncode != 0:
            print(f'Test fail')
    except subprocess.TimeoutExpired:
        process.kill()
During the execution of the processes it happens that some of them hang and never end. I set a timeout as a workaround, but I am wondering why some of these applications never terminate. This blocks the execution of the Python code.
The following code shows the creation of the threads. The function _execute_tests just takes a test from the Queue (with the .get() function) and passes it to the function _execute(test_file_path).
### Piece of code used to spawn the threads
for i in range(int(core_num)):
    thread = threading.Thread(target=self._execute_tests,
                              args=(tests,),
                              daemon=True)
    threads.append(thread)
    thread.start()

for thread in threads:
    thread.join()
I have already tried to:
- use subprocess.run, subprocess.call and the other functions explained on the documentation page
- use a larger buffer via the bufsize parameter
- disable the buffer
- redirect stdout to a file per thread
- redirect stdout to subprocess.DEVNULL
- remove the use of subprocess.communicate()
- remove the use of threading
- use multiprocessing
On my local machine with 16 cores / 64 GB RAM I can run 16 threads without problems. All of them always terminate. To be able to reproduce the problem I need to increase the number of threads to 30/40.
On an Azure machine with 8 cores / 32 GB RAM the issue can be reproduced with just 8 threads in parallel.
If I run the executables from a .bat file
for /r "." %%a in (*.exe) do start /B "" "%%~fa"
the problem never happens.
Does anyone have an idea of what the problem could be?

Running separate file n. times from main script

I'm quite new to Python/programming and I've stumbled upon this "issue".
I'd like to have my main script run a separate script multiple times, keep track of all the processes I have opened this way, and have the possibility to shut them off.
I need to specify some variables each time from my main code, so that those processes run separately, each with its own "settings" that I specify from the terminal in my main file.
Can you point me in the right direction? What do I need to read/search to address this?
Basic example:
my main.py file will ask for a string, e.g. val = input("value: "). I type "Hello".
my process.py file will run in a loop, printing "hello" until I say stop.
While the first process is running, I can specify another string in my main, e.g. val = "bye", and process.py would start again with the string "bye", without shutting down the one with "hello".
Until I say so, they both have to run.
And I'd also like to have a list of the total number of process.py instances running, with their "specs", and the possibility to shut them off, e.g. with an id assigned to each one of them.
Am I doing it all wrong? Is there a better way to do this? I don't know at all.
Thanks for helping me.
You can spawn subprocesses running your process.py script with different parameters.
These subprocesses will continue running independently of the parent process, or you can monitor and kill them from your main.py, which could look like this.
main.py
import subprocess
import sys
import time

def spawn_process_py(arguments):
    args = [
        sys.executable,  # use the same Python this script is using
        'process.py',    # the script to run
    ] + list(arguments)
    return subprocess.Popen(args)

p1 = spawn_process_py(['hello'])
p2 = spawn_process_py(['ohai'])

time.sleep(5)   # wait for a bit
p1.terminate()  # kill process 1
time.sleep(5)   # wait for a bit
p2.terminate()  # kill process 2
You can naturally keep track of the subprocesses using a dict or an array instead of just keeping them in free variables like p1 and p2.
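For example, here is a small sketch of that bookkeeping (the job arguments are made up) that assigns an id to each spawned process and can shut one down by id, reusing spawn_process_py from above:

running = {}   # id -> Popen object
next_id = 0

def start(arguments):
    global next_id
    next_id += 1
    running[next_id] = spawn_process_py(arguments)
    return next_id

def stop(proc_id):
    proc = running.pop(proc_id)
    proc.terminate()

hello_id = start(['hello'])
bye_id = start(['bye'])
stop(hello_id)   # only the "hello" instance is shut down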
process.py
import sys
import time

while True:
    print('Process.py:', sys.argv[1:])
    time.sleep(.2)
Other methods are e.g. the multiprocessing module, which adds inter-process communication, etc, but this is arguably the most straightforward.

Why os.remove() raises exception PermissionError?

On a Windows 7 platform I'm using Python 3.6 as a framework to start worker processes (written in C).
subprocess.Popen is used for starting the processes. The following shows the relevant code (one thread per process to be started).
redirstream = open(redirfilename, "w")
proc = subprocess.Popen(batchargs, shell=False, stdout=redirstream)
outs, errs = proc.communicate(timeout=60)
# wait for job to be finished
ret = proc.wait()
...
if ret == 0:  # changed !!
    redirstream.flush()
    redirstream.close()
    os.remove(redirfilename)
communicate is just used to be able to terminate the executable after 60 seconds, in case it hangs. redirstream is used to write output from the executable (written in C) to a file, for general debugging purposes (not related to this issue). Of course, all processes are passed redirfiles with different filenames.
Up to ten such subprocesses are started in that way from independent Python threads.
Although it works, I made a mysterious observation:
When an executable has finished without errors, I want to delete redirfilename, because it is not needed anymore.
Now let's say I have started processes A, B and C.
Processes A and B are finished and gave back 0 as result.
Process C, however, intentionally doesn't get data (just for testing, a serial connection has been disconnected) and waits for input from a named pipe (created from Python) using the Windows ReadFile function:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa365467(v=vs.85).aspx
In that case, while "C" is still waiting for ReadFile to finish, os.remove(redirfilename) for A and B sometimes throws a PermissionError, saying that the file is still used by another process. But from Task Manager I can see that processes A and B no longer exist (as expected).
I tried to catch the PermissionError and repeat the delete command after some delay. Only after "C" has terminated (timeout after 60 seconds) can the redirfiles for A and B be deleted.
Why is the redirstream still blocked and somehow in use, although the process behind it is not alive anymore, and why is it blocked by ReadFile() in a completely unrelated process, which is definitely not related to that particular file? Is this an issue in Python or in my implementation?
Any hints are highly appreciated...

Python Multiprocessing - sending inputs to child processes

I am using the multiprocessing module in Python to launch a few processes in parallel. These processes are independent of each other. They generate their own output and write out the results in different files. Each process calls an external tool using the subprocess.call method.
It was working fine until I discovered an issue in the external tool where, due to some error condition, it goes into a 'prompt' mode and waits for user input. Now in my Python script I use the join method to wait until all the processes finish their tasks. This causes the whole thing to wait for this erroneous subprocess call. I could put a timeout on each process, but I do not know in advance how long each one is going to run, so this option is ruled out.
How do I figure out if any child process is waiting for user input, and how do I send an 'exit' command to it? Any pointers or suggestions to relevant modules in Python will be really appreciated.
My code here:
import subprocess
import sys
import os
import multiprocessing

def write_script(fname, e):
    f = open(fname, 'w')
    f.write("Some useful command calling the external tool")
    f.close()
    subprocess.call(['chmod', '+x', os.path.abspath(fname)])
    return os.path.abspath(fname)

def run_use(mname, script):
    print "ssh "+mname+" "+script
    subprocess.call(['ssh', mname, script])

if __name__ == '__main__':
    dict1 = {}
    dict1['mod1'] = ['pp1', 'ext2', 'les3', 'pw4']
    dict1['mod2'] = ['aaa', 'bbb', 'ccc', 'ddd']
    machines = ['machine1', 'machine2', 'machine3', 'machine4']
    log_file = open('run.log', 'w')  # log file for the module names
    log_file.write(str(dict1.keys()))
    for key in dict1.keys():
        arr = []
        for mod in dict1[key]:
            d = {}
            arr.append(mod)
            if ((mod == dict1[key][-1]) | (len(arr) % 4 == 0)):
                for i in range(0, len(arr)):
                    e = arr.pop()
                    script = write_script(e+"_temp.sh", e)
                    d[i] = multiprocessing.Process(target=run_use, args=(machines[i], script,))
                    d[i].daemon = True
                for pp in d:
                    d[pp].start()
                for pp in d:
                    d[pp].join()
Since you're writing a shell script to run your subcommands, can you simply tell them to read input from /dev/null?
#!/bin/bash
# ...
my_other_command -a -b arg1 arg2 < /dev/null
# ...
This may stop them blocking on input and is a really simple solution. If this doesn't work for you, read on for some other options.
The subprocess.call() function is simply shorthand for constructing a subprocess.Popen instance and then calling the wait() method on it. So, your worker processes could instead create their own subprocess.Popen instances and poll them with the poll() method on the object instead of wait() (in a loop with a suitable delay). This leaves them free to remain in communication with the main process, so you can, for example, allow the main process to tell the child process to terminate the Popen instance with the terminate() or kill() methods and then exit itself.
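As a rough sketch of that polling loop, assuming the worker is handed a multiprocessing.Event (here called stop_requested) that the main process can set, it might look like this:

import subprocess
import time

def run_with_poll(cmd, stop_requested):
    # cmd is a hypothetical command list; stop_requested is a multiprocessing.Event
    proc = subprocess.Popen(cmd)
    while proc.poll() is None:       # None means the child is still running
        if stop_requested.is_set():  # main process asked us to give up
            proc.terminate()
            break
        time.sleep(0.5)              # poll at a modest rate
    return proc.wait()               # reap the child and return its exit code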
So, the question is how does the child process tell whether the subprocess is awaiting user input, and that's a trickier question. I would say perhaps the easiest approach is to monitor the output of the subprocess and search for the user input prompt, assuming that it always uses some string that you can look for. Alternatively, if the subprocess is expected to generate output continually then you could simply look for any output and if a configured amount of time goes past without any output then you declare that process dead and terminate it as detailed above.
Since you're reading the output, actually you don't need poll() or wait() - the process closing its output file descriptor is good enough to know that it's terminated in this case.
Here's an example of a modified run_use() method which watches the output of the subprocess:
def run_use(mname, script):
    print "ssh "+mname+" "+script
    proc = subprocess.Popen(['ssh', mname, script], stdout=subprocess.PIPE)
    for line in proc.stdout:
        if "UserPrompt>>>" in line:
            proc.terminate()
            break
In this example we assume that the process either gets hung up on UserPrompt>>> (replace with the appropriate string) or terminates naturally. If it were to get stuck in an infinite loop, for example, then your script would still not terminate - you can only really address that with an overall timeout, but you didn't seem keen to do that. Hopefully your subprocess won't misbehave in that way, however.
Finally, if you don't know in advance the prompt that will be given by your process, then your job is rather harder. Effectively what you're asking to do is monitor an external process and know when it's blocked reading on a file descriptor, and I don't believe there's a particularly clean solution to this. You could consider running the process under strace or similar, but that's quite an awful hack and I really wouldn't recommend it. Things like strace are great for manual diagnostics, but they really shouldn't be part of a production setup.

Best way to fork multiple shell commands/processes in Python?

Most of the examples I've seen with os.fork and the subprocess/multiprocessing modules show how to fork a new instance of the calling Python script or a chunk of Python code. What would be the best way to spawn a set of arbitrary shell commands concurrently?
I suppose, I could just use subprocess.call or one of the Popen commands and pipe the output to a file, which I believe will return immediately, at least to the caller. I know this is not that hard to do, I'm just trying to figure out the simplest, most Pythonic way to do it.
Thanks in advance
All calls to subprocess.Popen return immediately to the caller. It's the calls to wait and communicate which block. So all you need to do is spin up a number of processes using subprocess.Popen (set stdin to /dev/null for safety), and then one by one call communicate until they're all complete.
Naturally I'm assuming you're just trying to start a bunch of unrelated (i.e. not piped together) commands.
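For instance, a minimal sketch of that pattern (the command lists are placeholders) could be:

import subprocess

commands = [['./job_a'], ['./job_b'], ['./job_c']]   # hypothetical commands

procs = [subprocess.Popen(cmd,
                          stdin=subprocess.DEVNULL,
                          stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE)
         for cmd in commands]

# communicate() blocks, so collecting the results one by one
# waits for all the children to finish
results = [p.communicate() for p in procs]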
I like to use PTYs instead of pipes. For a bunch of processes where I only want to capture error messages I did this.
import pty
import sys
from subprocess import Popen

RNULL = open('/dev/null', 'r')
WNULL = open('/dev/null', 'w')
logfile = open("myprocess.log", "a", 1)
REALSTDERR = sys.stderr
sys.stderr = logfile
This next part was in a loop spawning about 30 processes.
sys.stderr = REALSTDERR
master, slave = pty.openpty()
self.subp = Popen(self.parsed, shell=False, stdin=RNULL, stdout=WNULL, stderr=slave)
sys.stderr = logfile
After this I had a select loop which collected any error messages and sent them to the single log file. Using PTYs meant that I never had to worry about partial lines getting mixed up because the line discipline provides simple framing.
There is no best for all possible circumstances. The best depends on the problem at hand.
Here's how to spawn a process and save its output to a file combining stdout/stderr:
import os
import subprocess
import sys

def spawn(cmd, output_file):
    on_posix = 'posix' in sys.builtin_module_names
    return subprocess.Popen(cmd, close_fds=on_posix, bufsize=-1,
                            stdin=open(os.devnull, 'rb'),
                            stdout=output_file,
                            stderr=subprocess.STDOUT)
To spawn multiple processes that can run in parallel with your script and each other:
processes, files = [], []
try:
    for i, cmd in enumerate(commands):
        files.append(open('out%d' % i, 'wb'))
        processes.append(spawn(cmd, files[-1]))
finally:
    for p in processes:
        p.wait()
    for f in files:
        f.close()
Note: cmd is a list everywhere.
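For example, commands could be a list of argument lists such as (these programs are made up):

commands = [['./simulate', '--seed', '1'],
            ['./simulate', '--seed', '2'],
            ['grep', '-r', 'TODO', '.']]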
I suppose, I could just use subprocess.call or one of the Popen commands and pipe the output to a file, which I believe will return immediately, at least to the caller.
That's not a good way to do it if you want to process the data.
In this case, better do
sp = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)
and then sp.communicate() or read directly from sp.stdout.read().
If the data shall be processed in the calling program at a later time, there are two ways to go:
1. You can retrieve the data as soon as possible, maybe via a separate thread, reading it and storing it somewhere the consumer can get it.
2. You can let the producing subprocess block and retrieve the data from it when you need it. The subprocess produces as much data as fits in the pipe buffer (usually 64 KiB) and then blocks on further writes. As soon as you need the data, you read() from the subprocess object's stdout (maybe stderr as well) and use it - or, again, you use sp.communicate() at that later time.
Way 1 would be the way to go if producing the data takes a long time, so that your program would have to wait.
Way 2 would be preferred if the size of the data is quite large and/or the data is produced so fast that buffering would make no sense.
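As a small sketch of way 1 (reusing the ['ls', '-l'] example from above), a background thread can drain stdout into a queue while the main program does other work:

import queue
import subprocess
import threading

lines = queue.Queue()

def drain(pipe):
    # read lines as soon as the child produces them
    for line in pipe:
        lines.put(line)
    pipe.close()

sp = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE, text=True)
reader = threading.Thread(target=drain, args=(sp.stdout,))
reader.start()

# ... do other work here while the output is being collected ...

reader.join()   # all output has been read once the pipe closes
sp.wait()
while not lines.empty():
    print(lines.get(), end='')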
See an older answer of mine, including code snippets, that does the following:
- Uses processes, not threads, for blocking I/O because they can more reliably be terminated with p.terminate()
- Implements a retriggerable timeout watchdog that restarts counting whenever some output happens
- Implements a long-term timeout watchdog to limit overall runtime
- Can feed in stdin (although I only need to feed in one-time short strings)
- Can capture stdout/stderr in the usual Popen means (only stdout is coded, and stderr is redirected to stdout, but they can easily be separated)
- It's almost realtime because it only checks every 0.2 seconds for output. But you could decrease this or remove the waiting interval easily
- Lots of debugging printouts are still enabled to see what's happening when.
For spawning multiple concurrent commands, you would need to alter the class RunCmd to instantiate multiple read-output/write-input queues and to spawn multiple Popen subprocesses.
