I have a script where I launch a shell command with popen.
The problem is that the script doesn't wait until that popen command is finished and continues right away.
om_points = os.popen(command, "w")
.....
How can I tell to my Python script to wait until the shell command has finished?
Depending on how you want your script to work, you have two options. If you want the command to block and do nothing else while it executes, you can just use subprocess.call.
# start and block until done; do the redirection in Python, since a ">"
# in an argument list is not interpreted by the shell
with open(diz['d'] + "/points.xml", "w") as outfile:
    subprocess.call([data["om_points"]], stdout=outfile)
If you want to do other things while it is executing, or feed data into stdin, you can use communicate() after the Popen call.
# start the process, do other things, then wait
with open(diz['d'] + "/points.xml", "w") as outfile:
    p = subprocess.Popen([data["om_points"]], stdout=outfile)
    print("Happens while running")
    p.communicate()  # now wait; communicate() can also send data to the process
As stated in the documentation, wait() can deadlock when used with pipes, so communicate() is advisable.
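To make the point concrete, here is a minimal sketch (the command name is a placeholder): communicate() drains the pipe while it waits, whereas wait() alone can block forever once the OS pipe buffer fills.
import subprocess

# Hypothetical command that produces a lot of output on stdout
p = subprocess.Popen(["some_chatty_command"], stdout=subprocess.PIPE)

# p.wait() here could deadlock: the child blocks writing to a full pipe
# while we block waiting for it to exit. communicate() avoids this by
# reading stdout to EOF before waiting.
out, err = p.communicate()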
You can use the subprocess module to achieve this.
import subprocess

# This command could contain multiple commands separated by a newline (\n)
some_command = "export PATH=$PATH://server.sample.mo/app/bin \n customupload abc.txt"

p = subprocess.Popen(some_command, stdout=subprocess.PIPE, shell=True)

# communicate() waits for the process to finish and collects its output
(output, err) = p.communicate()

# This is redundant after communicate(), but it returns the exit status
p_status = p.wait()

# This will give you the output of the command being executed
print("Command output: " + output.decode())
Force popen not to continue until all output is read by doing:
os.popen(command).read()
Let the command you are trying to run be
os.system('x')
Then convert it to a statement:
t = os.system('x')
Now Python will wait for the command to finish so that its exit status can be assigned to the variable t. Note that os.system() returns the exit status, not the command's output.
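For example (the command is just a stand-in and assumes a POSIX shell):
import os

# Blocks until the command exits; t receives the exit status, not the output
t = os.system('sleep 2 && echo done')
print(t)  # 0 on success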
wait() works fine for me. The subprocesses p1, p2 and p3 are executed at the same time. Therefore, all processes are done after 3 seconds.
import subprocess

processes = []
p1 = subprocess.Popen("sleep 3", stdout=subprocess.PIPE, shell=True)
p2 = subprocess.Popen("sleep 3", stdout=subprocess.PIPE, shell=True)
p3 = subprocess.Popen("sleep 3", stdout=subprocess.PIPE, shell=True)
processes.append(p1)
processes.append(p2)
processes.append(p3)

for p in processes:
    if p.wait() != 0:
        print("There was an error")

print("all processes finished")
What you are looking for is the wait method.
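A minimal sketch:
import subprocess

p = subprocess.Popen(["sleep", "2"])
p.wait()  # blocks until the child exits and returns its exit code
print("done, exit code:", p.returncode)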
I think process.communicate() is suitable when the output is small. For larger output it would not be the best approach, since it buffers everything in memory.
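For large output, one option is to read the stream incrementally instead of buffering it all at once; a rough sketch (the command name is a placeholder):
import subprocess

p = subprocess.Popen(["some_chatty_command"], stdout=subprocess.PIPE, text=True)
for line in p.stdout:  # consume output line by line as it is produced
    print(line, end="")  # or hand each line to your own handler
p.wait()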
I need to run a shell command asynchronously from a Python script. By this I mean that I want my Python script to continue running while the external command goes off and does whatever it needs to do.
I read this post:
Calling an external command in Python
I then went off and did some testing, and it looks like os.system() will do the job provided that I use & at the end of the command so that I don't have to wait for it to return. What I am wondering is if this is the proper way to accomplish such a thing? I tried commands.call() but it will not work for me because it blocks on the external command.
Please let me know if using os.system() for this is advisable or if I should try some other route.
subprocess.Popen does exactly what you want.
from subprocess import Popen
p = Popen(['watch', 'ls']) # something long running
# ... do other stuff while subprocess is running
p.terminate()
(Edit to complete the answer from comments)
The Popen instance can do various other things like you can poll() it to see if it is still running, and you can communicate() with it to send it data on stdin, and wait for it to terminate.
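For instance, a small sketch of those methods:
from subprocess import Popen, PIPE

p = Popen(["cat"], stdin=PIPE, stdout=PIPE)
print(p.poll())  # None while the process is still running
out, _ = p.communicate(b"hello\n")  # send data on stdin, then wait for exit
print(out, p.returncode)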
If you want to run many processes in parallel and then handle them when they yield results, you can use polling like in the following:
from subprocess import Popen, PIPE
import time

running_procs = [
    Popen(['/usr/bin/my_cmd', '-i %s' % path], stdout=PIPE, stderr=PIPE)
    for path in '/tmp/file0 /tmp/file1 /tmp/file2'.split()]

while running_procs:
    for proc in running_procs:
        retcode = proc.poll()
        if retcode is not None:  # Process finished.
            running_procs.remove(proc)
            break
    else:  # No process is done, wait a bit and check again.
        time.sleep(.1)
        continue

    # Here, `proc` has finished with return code `retcode`
    if retcode != 0:
        """Error handling."""
    handle_results(proc.stdout)
The control flow there is a little bit convoluted because I'm trying to make it small -- you can refactor to your taste. :-)
This has the advantage of servicing the early-finishing requests first. If you call communicate on the first running process and that turns out to run the longest, the other running processes will have been sitting there idle when you could have been handling their results.
This is covered by Python 3 Subprocess Examples under "Wait for command to terminate asynchronously". Run this code using IPython or python -m asyncio:
import asyncio

proc = await asyncio.create_subprocess_exec(
    'ls', '-lha',
    stdout=asyncio.subprocess.PIPE,
    stderr=asyncio.subprocess.PIPE)

# do something else while ls is working;
# if proc takes very long to complete, the CPUs are free to use cycles for
# other processes
stdout, stderr = await proc.communicate()
The process will start running as soon as the await asyncio.create_subprocess_exec(...) has completed. If it hasn't finished by the time you call await proc.communicate(), it will wait there in order to give you your output status. If it has finished, proc.communicate() will return immediately.
The gist here is similar to Terrel's answer, but I think Terrel's answer overcomplicates things.
See asyncio.create_subprocess_exec for more information.
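If you are running a plain script rather than IPython or python -m asyncio, you can wrap the awaits in a coroutine and drive it with asyncio.run():
import asyncio

async def main():
    proc = await asyncio.create_subprocess_exec(
        'ls', '-lha', stdout=asyncio.subprocess.PIPE)
    # ... do other work here while ls runs ...
    stdout, _ = await proc.communicate()
    print(stdout.decode())

asyncio.run(main())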
What I am wondering is if this [os.system()] is the proper way to accomplish such a thing?
No. os.system() is not the proper way. That's why everyone says to use subprocess.
For more information, read http://docs.python.org/library/os.html#os.system
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. Use the subprocess module. Check especially the Replacing Older Functions with the subprocess Module section.
The accepted answer is very old.
I found a better modern answer here:
https://kevinmccarthy.org/2016/07/25/streaming-subprocess-stdin-and-stdout-with-asyncio-in-python/
and made some changes:
make it work on Windows
make it work with multiple commands
import sys
import asyncio

if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())


async def _read_stream(stream, cb):
    while True:
        line = await stream.readline()
        if line:
            cb(line)
        else:
            break


async def _stream_subprocess(cmd, stdout_cb, stderr_cb):
    try:
        process = await asyncio.create_subprocess_exec(
            *cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
        )
        # wrap in tasks; passing bare coroutines to asyncio.wait is an
        # error on Python 3.11+
        await asyncio.wait(
            [
                asyncio.create_task(_read_stream(process.stdout, stdout_cb)),
                asyncio.create_task(_read_stream(process.stderr, stderr_cb)),
            ]
        )
        rc = await process.wait()
        return process.pid, rc
    except OSError as e:
        # the program will hang if we let any exception propagate
        return e


def execute(*aws):
    """Run the given coroutines in an asyncio loop;
    returns a list containing the values returned from each coroutine.
    """
    loop = asyncio.get_event_loop()
    rc = loop.run_until_complete(asyncio.gather(*aws))
    loop.close()
    return rc


def printer(label):
    def pr(*args, **kw):
        print(label, *args, **kw)

    return pr


def name_it(start=0, template="s{}"):
    """A simple generator for task names."""
    while True:
        yield template.format(start)
        start += 1


def runners(cmds):
    """
    cmds is a list of commands to execute as subprocesses;
    each item is a list appropriate for use by subprocess.call
    """
    next_name = name_it().__next__
    for cmd in cmds:
        name = next_name()
        out = printer(f"{name}.stdout")
        err = printer(f"{name}.stderr")
        yield _stream_subprocess(cmd, out, err)


if __name__ == "__main__":
    cmds = (
        [
            "sh",
            "-c",
            """echo "$SHELL"-stdout && sleep 1 && echo stderr 1>&2 && sleep 1 && echo done""",
        ],
        [
            "bash",
            "-c",
            "echo 'hello, Dave.' && sleep 1 && echo dave_err 1>&2 && sleep 1 && echo done",
        ],
        [sys.executable, "-c", 'print("hello from python");import sys;sys.exit(2)'],
    )

    print(execute(*runners(cmds)))
It is unlikely that the example commands will work perfectly on your system, and it doesn't handle weird errors, but this code does demonstrate one way to run multiple subprocesses using asyncio and stream the output.
I've had good success with the asyncproc module, which deals nicely with the output from the processes. For example:
import os
from asyncproc import Process

myProc = Process("myprogram.app")

while True:
    # check to see if the process has ended
    poll = myProc.wait(os.WNOHANG)
    if poll is not None:
        break
    # print any new output
    out = myProc.read()
    if out != "":
        print(out)
Using pexpect with non-blocking readlines is another way to do this. Pexpect solves the deadlock problems, allows you to easily run the processes in the background, and gives easy ways to have callbacks when your process spits out predefined strings, and generally makes interacting with the process much easier.
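A rough sketch of that approach, assuming pexpect is installed (the command and the expected string are arbitrary):
import pexpect

# Spawn a long-running command and react when a known string appears
child = pexpect.spawn('ping -c 3 127.0.0.1')
index = child.expect(['3 packets', pexpect.EOF, pexpect.TIMEOUT], timeout=10)
if index == 0:
    print('saw the expected output')
child.close()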
Considering "I don't have to wait for it to return", one of the easiest solutions will be this:
import subprocess

subprocess.Popen(
    [path_to_executable, arg1, arg2, ... argN],
    creationflags=subprocess.CREATE_NEW_CONSOLE,
).pid
But... from what I read, this is not "the proper way to accomplish such a thing" because of the security risks created by the subprocess.CREATE_NEW_CONSOLE flag.
The key things happening here are the use of subprocess.CREATE_NEW_CONSOLE to create a new console, and .pid (which returns the process ID so that you can check on the program later if you want to), so that you don't have to wait for the program to finish its job.
I had the same problem trying to connect to a 3270 terminal using the s3270 scripting software in Python. Now I'm solving the problem with a subclass of Process that I found here:
http://code.activestate.com/recipes/440554/
And here is the sample taken from the file:
import sys
import time

# `Popen` below is the recipe's subclass of subprocess.Popen, which adds
# non-blocking recv(), recv_err() and send() methods (PIPE is
# subprocess.PIPE); `message` is an error string defined in the full recipe.

def recv_some(p, t=.1, e=1, tr=5, stderr=0):
    if tr < 1:
        tr = 1
    x = time.time() + t
    y = []
    r = ''
    pr = p.recv
    if stderr:
        pr = p.recv_err
    while time.time() < x or r:
        r = pr()
        if r is None:
            if e:
                raise Exception(message)
            else:
                break
        elif r:
            y.append(r)
        else:
            time.sleep(max((x - time.time()) / tr, 0))
    return ''.join(y)

def send_all(p, data):
    while len(data):
        sent = p.send(data)
        if sent is None:
            raise Exception(message)
        data = data[sent:]

if __name__ == '__main__':
    if sys.platform == 'win32':
        shell, commands, tail = ('cmd', ('dir /w', 'echo HELLO WORLD'), '\r\n')
    else:
        shell, commands, tail = ('sh', ('ls', 'echo HELLO WORLD'), '\n')

    a = Popen(shell, stdin=PIPE, stdout=PIPE)
    print(recv_some(a), end='')
    for cmd in commands:
        send_all(a, cmd + tail)
        print(recv_some(a), end='')
    send_all(a, 'exit' + tail)
    print(recv_some(a, e=0))
    a.wait()
There are several answers here, but none of them satisfied the requirements below:
I don't want to wait for the command to finish or pollute my terminal with subprocess output.
I want to run a bash script with redirects.
I want to support piping within my bash script (for example find ... | tar ...).
The only combination that satisfies the above requirements is:
subprocess.Popen(['./my_script.sh "arg1" > "redirect/path/to"'],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
shell=True)
I'm launching a subprocess with the following command:
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
However, when I try to kill it using:
p.terminate()
or
p.kill()
The command keeps running in the background, so I was wondering how I can actually terminate the process.
Note that when I run the command with:
p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE)
It does terminate successfully when issuing p.terminate().
Use a process group so as to enable sending a signal to all the processes in the group. For that, you should attach a session id to the parent process of the spawned/child processes, which is a shell in your case. This will make it the group leader of the processes. So now, when a signal is sent to the process group leader, it's transmitted to all of the child processes of this group.
Here's the code:
import os
import signal
import subprocess
# The os.setsid() is passed in the argument preexec_fn so
# it's run after the fork() and before exec() to run the shell.
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE,
shell=True, preexec_fn=os.setsid)
os.killpg(os.getpgid(pro.pid), signal.SIGTERM) # Send the signal to all the process groups
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
p.kill()
p.kill() ends up killing the shell process and cmd is still running.
I found a convenient fix for this:
p = subprocess.Popen("exec " + cmd, stdout=subprocess.PIPE, shell=True)
This will cause cmd to inherit the shell process, instead of having the shell launch a child process, which does not get killed. p.pid will be the id of your cmd process then.
p.kill() should work.
I don't know what effect this will have on your pipe though.
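A quick way to see the exec trick in action (sleep stands in for cmd here):
import subprocess
import time

p = subprocess.Popen("exec sleep 100", stdout=subprocess.PIPE, shell=True)
time.sleep(1)
p.kill()  # kills sleep itself, since exec replaced the shell with it
p.wait()
print(p.returncode)  # negative signal number, e.g. -9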
If you can use psutil, then this works perfectly:
import subprocess
import psutil

def kill(proc_pid):
    process = psutil.Process(proc_pid)
    for proc in process.children(recursive=True):
        proc.kill()
    process.kill()

proc = subprocess.Popen(["infinite_app", "param"], shell=True)
try:
    proc.wait(timeout=3)
except subprocess.TimeoutExpired:
    kill(proc.pid)
I could do it using
from subprocess import Popen
process = Popen(command, shell=True)
Popen("TASKKILL /F /PID {pid} /T".format(pid=process.pid))
It killed the cmd.exe and the program that I gave the command for.
(On Windows)
When shell=True the shell is the child process, and the commands are its children. So any SIGTERM or SIGKILL will kill the shell but not its child processes, and I don't remember a good way to do it.
The best way I can think of is to use shell=False, otherwise when you kill the parent shell process, it will leave a defunct shell process.
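A minimal sketch of the shell=False approach:
import shlex
import subprocess

p = subprocess.Popen(shlex.split("sleep 100"))
p.terminate()  # SIGTERM reaches sleep directly; there is no intermediate shell
p.wait()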
None of these answers worked for me, so I'm leaving the code that did work. In my case, even after killing the process with .kill() and getting a .poll() return code, the process didn't terminate.
Following the subprocess.Popen documentation:
"...in order to cleanup properly a well-behaved application should kill the child process and finish communication..."
import subprocess

proc = subprocess.Popen(...)
try:
    outs, errs = proc.communicate(timeout=15)
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
In my case I was missing the proc.communicate() after calling proc.kill(). That cleans up the process's stdin, stdout ... and does terminate the process.
As Sai said, the shell is the child, so signals are intercepted by it -- best way I've found is to use shell=False and use shlex to split the command line:
import shlex
import subprocess

# (On Python 2, a unicode command may need encoding first:
#  if isinstance(command, unicode): command = command.encode('utf8'))
args = shlex.split(command)
p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
Then p.kill() and p.terminate() should work how you expect.
Send the signal to all the processes in the group:
import os
import signal
from subprocess import Popen, PIPE, STDOUT

self.proc = Popen(commands,
                  stdout=PIPE,
                  stderr=STDOUT,
                  universal_newlines=True,
                  preexec_fn=os.setsid)

os.killpg(os.getpgid(self.proc.pid), signal.SIGHUP)
os.killpg(os.getpgid(self.proc.pid), signal.SIGTERM)
There is a very simple way for Python 3.5 or later (actually tested on Python 3.8):
import subprocess, signal, time

p = subprocess.Popen(['cmd'], shell=True)
time.sleep(5)  # wait 5 seconds before killing
p.send_signal(signal.CTRL_C_EVENT)  # CTRL_C_EVENT exists on Windows only
Then, your code may crash at some point if you have keyboard-input detection or something like that. In this case, on the line of code/function where the error is raised, just use:
try:
    FailingCode  # here goes the code which is raising KeyboardInterrupt
except KeyboardInterrupt:
    pass
What this code does is just send a "CTRL+C" signal to the running process, which will cause the process to get killed.
Solution that worked for me
import os
import signal
import subprocess

if os.name == 'nt':  # Windows
    subprocess.Popen("TASKKILL /F /PID {pid} /T".format(pid=process.pid))
else:
    os.kill(process.pid, signal.SIGTERM)
A full-blown solution that will kill the running process (including its subtree) when a timeout is reached, or under specific conditions via a callback function.
It works both on Windows and Linux, from Python 2.7 up to 3.10 as of this writing.
Install with pip install command_runner
Example for timeout:
from command_runner import command_runner
# Kills ping after 2 seconds
exit_code, output = command_runner('ping 127.0.0.1', shell=True, timeout=2)
Example for specific condition:
Here we'll stop ping if the current system time's seconds digit is greater than 5:
from time import time
from command_runner import command_runner

def my_condition():
    # Arbitrary condition for demo
    return int(str(int(time()))[-1]) > 5

# Calls my_condition() every second (check_interval) and kills ping if
# my_condition() returns True
exit_code, output = command_runner('ping 127.0.0.1', shell=True,
                                   stop_on=my_condition, check_interval=1)