I am running a lot of simulations in parallel in the background using:
for i in range(a, b):
    os.system("python xxx.py &")
    # To-Add: check if tasks are complete, then process the results
xxx.py itself calls another program.
Is there any way to check if the tasks are completed, so I can process the result?
If you use subprocess, you will have much better control over the external processes: you can start multiple processes without blocking and check whether they have completed. You will probably want to collect a set of Popen instances and use poll() to see if they are complete.
Try this code:
#!/usr/bin/python3
import subprocess
import time

proc = subprocess.Popen('python xxx.py', shell=True)
while proc.poll() is None:
    print(proc.pid)
    time.sleep(1)
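To track several simulations at once, as suggested above, you can keep all the Popen objects and poll them together. A minimal sketch along those lines (untested; the bounds a and b stand in for the ones in your loop, and the final print is just a placeholder for your result processing):

import subprocess
import time

a, b = 0, 4  # example bounds; use the same range as in your loop

# Launch one process per simulation without waiting for any of them.
procs = [subprocess.Popen("python xxx.py", shell=True) for i in range(a, b)]

# Poll until every process has finished.
while any(p.poll() is None for p in procs):
    time.sleep(1)

# All processes are done; process the results here.
for p in procs:
    print(p.pid, p.returncode)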
I'm writing a small application with a Tkinter GUI to interact with an existing executable that does not have a GUI. The executable can export Solid Edge files to different formats (to PDF, for example; see Solid Edge Translation Services on the web). The goal is to export files to PDF in batch.
So the part of the code that calls the executable is here. I need multiprocessing because running the executable takes a while and would otherwise make my app unresponsive.
for cmd in commands:
    print(f'running cmd {cmd}')
    p = Process(target=exportSingleFile, args=(cmd,))
    p.start()
(commands is a list of command strings, each with arguments for the input file, the output file, and the output file type (pdf).) Something like this:
"C:/Program Files/Solid Edge ST9/Program/SolidEdgeTranslationServices.exe" -i="input file" -o="output file" -t=pdf"
But when I try to replace it with the following, my app seems to become unresponsive and nothing really happens. I guess it's better to use a pool when exporting potentially dozens of files.
exportResult = []
with Pool() as pool:
    exportResult = pool.imap_unordered(exportSingleFile, commands)
    for r in exportResult:
        print(r)
This is what "exportsinglefile" does
def exportSingleFile(cmd):
    return subprocess.run(cmd, shell=True)
The multiprocessing module is mostly for running multiple parallel Python processes. Since your commands are already running as separate processes, it's redundant to use multiprocessing on top of that.
Instead, consider using the subprocess.Popen constructor directly, which starts a subprocess but does not wait for it to complete. Store these process objects in a list. You can then regularly poll() every process in the list to see if it completed. To schedule such a poll, use Tkinter's after function.
Rough sketch of such an implementation — you will need to adapt this to your situation, and I didn't test it:
class ParallelCommands:
    def __init__(self, commands, num_parallel):
        self.commands = commands[::-1]
        self.num_parallel = num_parallel
        self.processes = []
        self.poll()

    def poll(self):
        # Poll processes for completion, and raise on errors.
        for process in self.processes:
            process.poll()
            if process.returncode is not None and process.returncode != 0:
                raise RuntimeError("Process finished with nonzero exit code")

        # Remove completed processes.
        self.processes = [
            p for p in self.processes
            if p.returncode is None
        ]

        # Start new processes up to the maximum amount.
        while self.commands and len(self.processes) < self.num_parallel:
            command = self.commands.pop()
            process = subprocess.Popen(command, shell=True)
            self.processes.append(process)

    def is_done(self):
        return not self.processes and not self.commands
To start a bunch of commands, running at most 10 at the same time:
commands = ParallelCommands(["ls /bin", "ls /lib"], 10)
To wait for completion synchronously, blocking the UI; just for demonstration purposes:
while not commands.is_done():
    commands.poll()
    time.sleep(0.1)
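To keep the GUI responsive instead, the same poll can be scheduled with Tkinter's after, as mentioned above. A minimal sketch, assuming root is your Tk instance and commands is the ParallelCommands object created above (the 100 ms interval is arbitrary):

def check_progress():
    commands.poll()
    if commands.is_done():
        print("all exports finished")  # update the GUI here instead
    else:
        root.after(100, check_progress)  # check again in 100 ms without blocking

root.after(100, check_progress)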
I'm trying to make a non-blocking subprocess call to run a slave.py script from my main.py program. I need to pass args from main.py to slave.py once, when slave.py is first started via subprocess.call; after this, slave.py runs for a period of time and then exits.
main.py
for insert, (list) in enumerate(list, start=1):
    sys.args = [list]
    subprocess.call(["python", "slave.py", sys.args], shell=True)

{loop through program and do more stuff..}
And my slave script
slave.py
print sys.args
while True:
    {do stuff with args in loop till finished}
    time.sleep(30)
Currently, slave.py blocks main.py from running the rest of its tasks. I simply want slave.py to be independent of main.py; once I've passed args to it, the two scripts no longer need to communicate.
I've found a few posts on the net about non-blocking subprocess.call, but most of them are centered on requiring communication with slave.py at some point, which I currently do not need. Would anyone know how to implement this in a simple fashion?
You should use subprocess.Popen instead of subprocess.call.
Something like:
subprocess.Popen(["python", "slave.py"] + sys.argv[1:])
From the docs on subprocess.call:
Run the command described by args. Wait for command to complete, then return the returncode attribute.
(Also, don't pass the arguments as a list if you're going to use shell=True.)
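To illustrate the two forms (the slave.py argument is just a placeholder):

import subprocess

# Without shell=True: pass the program and its arguments as a list.
subprocess.Popen(["python", "slave.py", "arg1"])

# With shell=True: pass a single command string for the shell to parse.
subprocess.Popen("python slave.py arg1", shell=True)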
Here's an MCVE1 that demonstrates a non-blocking subprocess call:
import subprocess
import time

p = subprocess.Popen(['sleep', '5'])
while p.poll() is None:
    print('Still sleeping')
    time.sleep(1)

print('Not sleeping any longer. Exited with returncode %d' % p.returncode)
An alternative approach that relies on more recent changes to the Python language to allow for coroutine-based parallelism is:
# python3.5 required but could be modified to work with python3.4.
import asyncio

async def do_subprocess():
    print('Subprocess sleeping')
    proc = await asyncio.create_subprocess_exec('sleep', '5')
    returncode = await proc.wait()
    print('Subprocess done sleeping. Return code = %d' % returncode)

async def sleep_report(number):
    for i in range(number + 1):
        print('Slept for %d seconds' % i)
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()
tasks = [
    asyncio.ensure_future(do_subprocess()),
    asyncio.ensure_future(sleep_report(5)),
]
loop.run_until_complete(asyncio.gather(*tasks))
loop.close()
1 Tested on OS X using Python 2.7 & Python 3.6
There are three levels of thoroughness here.
As mgilson says, if you just swap out subprocess.call for subprocess.Popen, keeping everything else the same, then main.py will not wait for slave.py to finish before it continues. That may be enough by itself.
If you care about zombie processes hanging around, you should save the object returned from subprocess.Popen and at some later point call its wait method. (The zombies automatically go away when main.py exits, so this is only a serious problem if main.py runs for a very long time and/or might create many subprocesses.)
Finally, if you don't want a zombie but you also don't want to decide where to do the waiting (this might be appropriate if both processes run for a long and unpredictable time afterward), use the python-daemon library to have the slave disassociate itself from the master; in that case you can continue using subprocess.call in the master.
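A minimal sketch of the second option, assuming the same "python slave.py" invocation as in the question (the argument and the exact point where wait is called are up to you):

import subprocess

# Start slave.py without waiting for it.
slave = subprocess.Popen(["python", "slave.py", "some-arg"])

# ... main.py continues with its other tasks here ...

# Later, reap the child so it does not linger as a zombie once it has exited.
slave.wait()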
For Python 3.8.x
import shlex
import subprocess
cmd = "<full filepath plus arguments of child process>"
cmds = shlex.split(cmd)
p = subprocess.Popen(cmds, start_new_session=True)
This will allow the parent process to exit while the child process continues to run. Not sure about zombies.
Tested on Python 3.8.1 on macOS 10.15.5
The easiest solution for your non-blocking situation is simply to call Popen directly:
subprocess.Popen(["python", "slave.py"])
This does not block the execution of the rest of the program. (A trailing "&" is only meaningful to a shell; passed as a list element here it would just arrive as an extra argument to slave.py.)
If you want to start a function several times with different arguments in a non-blocking way, you can use a ThreadPoolExecutor.
You submit your function calls to the executor like this:
from concurrent.futures import ThreadPoolExecutor

def threadmap(fun, xs):
    with ThreadPoolExecutor(max_workers=8) as executor:
        return list(executor.map(fun, xs))
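For example, threadmap could drive the slave.py calls from the question roughly like this (a sketch; the argument values are made up):

import subprocess

def run_slave(arg):
    # Each call blocks only its own worker thread, not the main program flow.
    return subprocess.run(["python", "slave.py", str(arg)])

results = threadmap(run_slave, ["a", "b", "c"])

Note that threadmap itself returns only after all submitted calls have finished; the calls run concurrently with each other.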
I'm writing a simple script that executes a system command on a sequence of files.
To speed things up, I'd like to run them in parallel, but not all at once; I need to control the maximum number of simultaneously running commands.
What would be the easiest way to approach this?
If you are calling subprocesses anyway, I don't see the need to use a thread pool. A basic implementation using the subprocess module would be
import subprocess
import os
import time

files = <list of file names>
command = "/bin/touch"
processes = set()
max_processes = 5

for name in files:
    processes.add(subprocess.Popen([command, name]))
    if len(processes) >= max_processes:
        os.wait()
        processes.difference_update([
            p for p in processes if p.poll() is not None])
On Windows, os.wait() is not available (nor is any other method of waiting for any child process to terminate). You can work around this by polling at regular intervals:
for name in files:
    processes.add(subprocess.Popen([command, name]))
    while len(processes) >= max_processes:
        time.sleep(.1)
        processes.difference_update([
            p for p in processes if p.poll() is not None])
The time to sleep for depends on the expected execution time of the subprocesses.
The answer from Sven Marnach is almost right, but there is a problem. If one of the last max_processes processes ends, the main program will try to start another process, and the for loop will end. This will close the main process, which can in turn close the child processes. For me, this behavior happened with the screen command.
The code on Linux will be like this (and will only work on Python 2.7):
import subprocess
import os
import time

files = <list of file names>
command = "/bin/touch"
processes = set()
max_processes = 5

for name in files:
    processes.add(subprocess.Popen([command, name]))
    if len(processes) >= max_processes:
        os.wait()
        processes.difference_update(
            [p for p in processes if p.poll() is not None])

# Check if all the child processes were closed
for p in processes:
    if p.poll() is None:
        p.wait()
You need to combine a Semaphore object with threads. A Semaphore is an object that lets you limit the number of threads that are running in a given section of code. In this case we'll use a semaphore to limit the number of threads that can run the os.system call.
First we import the modules we need:
#!/usr/bin/python
import threading
import os
Next we create a Semaphore object. The number four here is the number of threads that can acquire the semaphore at one time. This limits the number of subprocesses that can be run at once.
semaphore = threading.Semaphore(4)
This function simply wraps the call to the subprocess in calls to the Semaphore.
def run_command(cmd):
    semaphore.acquire()
    try:
        os.system(cmd)
    finally:
        semaphore.release()
If you're using Python 2.6+ this can become even simpler as you can use the 'with' statement to perform both the acquire and release calls.
def run_command(cmd):
    with semaphore:
        os.system(cmd)
Finally, to show that this works as expected we'll call the "sleep 10" command eight times.
for i in range(8):
    threading.Thread(target=run_command, args=("sleep 10", )).start()
Running the script using the 'time' program shows that it only takes 20 seconds as two lots of four sleeps are run in parallel.
aw#aw-laptop:~/personal/stackoverflow$ time python 4992400.py
real 0m20.032s
user 0m0.020s
sys 0m0.008s
I merged the solutions by Sven and Thuener into one that waits for trailing processes and also stops if one of the processes crashes:
import logging
import shlex
import subprocess
import time

def removeFinishedProcesses(processes):
    """ given a list of (commandString, process),
        remove those that have completed and return the result
    """
    newProcs = []
    for pollCmd, pollProc in processes:
        retCode = pollProc.poll()
        if retCode is None:
            # still running
            newProcs.append((pollCmd, pollProc))
        elif retCode != 0:
            # failed
            raise Exception("Command %s failed" % pollCmd)
        else:
            logging.info("Command %s completed successfully" % pollCmd)
    return newProcs

def runCommands(commands, maxCpu):
    processes = []
    for command in commands:
        logging.info("Starting process %s" % command)
        proc = subprocess.Popen(shlex.split(command))
        procTuple = (command, proc)
        processes.append(procTuple)
        while len(processes) >= maxCpu:
            time.sleep(.2)
            processes = removeFinishedProcesses(processes)

    # wait for all processes
    while len(processes) > 0:
        time.sleep(0.5)
        processes = removeFinishedProcesses(processes)
    logging.info("All processes completed")
What you are asking for is a thread pool. There is a fixed number of threads that can be used to execute tasks. When a thread is not running a task, it waits on a task queue in order to get a new piece of code to execute.
There is this thread pool module, but there is a comment saying it is not considered complete yet. There may be other packages out there, but this was the first one I found.
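On current Python versions, the standard library's concurrent.futures offers the same idea. A minimal sketch, not part of the original suggestion, that limits the number of simultaneously running commands (the command list is a placeholder):

import subprocess
from concurrent.futures import ThreadPoolExecutor

commands = [["/bin/touch", "file%d" % i] for i in range(20)]  # placeholder commands

# At most 5 worker threads, so at most 5 subprocesses run at the same time.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(subprocess.run, commands))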
If you're running system commands, you can just create the process instances with the subprocess module and call them as you want. There shouldn't be any need to thread (it's unpythonic), and multiprocessing seems a tad overkill for this task.
This answer is very similar to other answers present here but it uses a list instead of sets.
For some reason when using those answers I was getting a runtime error regarding the size of the set changing.
from subprocess import PIPE
import subprocess
import time

def submit_job_max_len(job_list, max_processes):
    sleep_time = 0.1
    processes = list()
    for command in job_list:
        print('running {n} processes. Submitting {proc}.'.format(
            n=len(processes), proc=str(command)))
        processes.append(subprocess.Popen(command, shell=False, stdout=None,
                                          stdin=PIPE))
        while len(processes) >= max_processes:
            time.sleep(sleep_time)
            processes = [proc for proc in processes if proc.poll() is None]

    # Wait for the remaining processes to finish.
    while len(processes) > 0:
        time.sleep(sleep_time)
        processes = [proc for proc in processes if proc.poll() is None]

cmd = '/bin/bash run_what.sh {n}'
job_list = ((cmd.format(n=i)).split() for i in range(100))
submit_job_max_len(job_list, max_processes=50)