I can't get output redirection to work with Popen. Here's the problematic code:
subprocess.Popen(['program',
                  'arg',  # arguments
                  '0'
                  '&> program.out'])
The program runs, but stdout and stderr don't get routed to the output file. Worse, the last argument gets concatenated with the redirect string into a single argument ('0&> program.out' in this case). When I join everything into one command string and pass it to Popen with shell=True, things go smoothly, but I think that might not be the recommended way to use Popen.
The &> filename syntax is shell (specifically bash) syntax; Popen passes it through as a literal argument rather than interpreting it. You'd probably want to do something closer to:
with open('program.out', 'w') as fd:
    subprocess.Popen(['program', 'arg'], stdout=fd, stderr=fd)
as per the docs for stdout and stderr:
Valid values are PIPE, DEVNULL, an existing file descriptor (a positive integer), an existing file object, and None.
where the code above is just passing "an existing file object"
A standard feature of Python's subprocess API is to combine STDERR and STDOUT by using the keyword argument
stderr = subprocess.STDOUT
But currently I need the opposite: Redirect the STDOUT of a command to STDERR. Right now I am doing this manually using subprocess.getoutput or subprocess.check_output and printing the result, but is there a more concise option?
Ouroborus mentions in the comments that we ought to be able to do something like
subprocess.run(args, stdout = subprocess.STDERR)
However, the docs don't mention the existence of subprocess.STDERR, and at least on my installation (3.8.10) that doesn't actually exist.
According to the docs,
stdin, stdout and stderr specify the executed program’s standard input, standard output and standard error file handles, respectively. Valid values are PIPE, DEVNULL, an existing file descriptor (a positive integer), an existing file object with a valid file descriptor, and None.
Assuming you're on a UNIX-type system (and if you're not, I'm not sure what you're planning on doing with stdout / stderr anyway), the file descriptors for stdin/stdout/stderr are always the same:
0 is stdin
1 is stdout
2 is stderr
3+ are used for fds you create in your program
So we can do something like
subprocess.run(args, stdout = 2)
to run a process and redirect its stdout to stderr.
Of course I would recommend you save that as a constant somewhere instead of just leaving a raw number 2 there. And if you're on Windows or something you may have to do a little research to see if things work exactly the same.
Update:
A subsequent search suggests that this numbering convention is part of POSIX, and that Windows explicitly obeys it.
Update:
kdb points out in the comments that sys.stderr will typically satisfy the "an existing file object with a valid file descriptor" condition, making it an attractive alternative to using a raw fd here.
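A minimal sketch of that alternative (echo used purely for illustration):

import subprocess
import sys

# sys.stderr is "an existing file object with a valid file descriptor",
# so the child's stdout ends up on our stderr.
subprocess.run(['echo', 'hello'], stdout=sys.stderr)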
I've been trying out how not to print shell outputs from Python's subprocess.call() by assigning open(os.devnull, 'w') and subprocess.PIPE to the stdout value:
subprocess.call(command, stdout=open(os.devnull, 'w'), shell=True)
and
subprocess.call(command, stdout=subprocess.PIPE, shell=True)
Both of these lines execute the shell command stored in the command variable quietly, i.e. without printing anything to the terminal. However, I don't know the difference between the two. I am new to using subprocess.
The first redirects standard output to a file (/dev/null on POSIX systems), while the second builds a pipe that connects the child's output to your own process.
The subprocess module's own docstring (shown by help(subprocess)) describes the idea: "This module allows you to spawn processes, connect to their input/output/error pipes, and obtain their return codes."
With stdout=subprocess.PIPE, the output is held in a buffer, like a message queue in memory, for later use. But subprocess.call only returns the status code; it gives you no handle for reading that pipe, so you cannot take the value and pass it along via something like subprocess.call(command, stdin=the_stdout, shell=True). That makes it hard to build a connection between two commands this way.
Based on the info in this article: http://blog.acipo.com/running-shell-commands-in-python/
Also Python 2.7 Documentation: https://docs.python.org/2/library/subprocess.html
If you do need the output, it is recommended to use Popen with communicate().
Popen is the lower-level, more flexible class that the module's convenience functions are built on.
There is a good resource about this: https://stackabuse.com/pythons-os-and-subprocess-popen-commands/
os.devnull points to /dev/null on Linux; everything written to /dev/null is discarded.
A pipe has two ends: whatever you write to one end can be read from the other.
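A minimal sketch contrasting the two (using echo as a stand-in command):

import os
import subprocess

# Redirect to devnull: the output is discarded and cannot be recovered.
subprocess.call('echo discarded', stdout=open(os.devnull, 'w'), shell=True)

# Redirect to a pipe: the output is buffered and can be read afterwards.
p = subprocess.Popen('echo captured', stdout=subprocess.PIPE, shell=True)
out, _ = p.communicate()
print(out)  # b'captured\n'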
I'm trying to write a Python script that starts a subprocess, and writes to the subprocess stdin. I'd also like to be able to determine an action to be taken if the subprocess crashes.
The process I'm trying to start is a program called nuke, which has its own built-in version of Python that I'd like to submit commands to, and then tell it to quit after the commands execute. So far I've worked out that if I start Python at the command prompt and then start nuke as a subprocess, I can type commands into nuke. But I'd like to put this all in a script, so that the master Python program can start nuke, write to its standard input (and thus into its built-in version of Python), and tell it to do snazzy things. So I wrote a script that starts nuke like this:
subprocess.call(["C:/Program Files/Nuke6.3v5/Nuke6.3", "-t", "E:/NukeTest/test.nk"])
Then nothing happens because nuke is waiting for user input. How would I now write to standard input?
I'm doing this because I'm running a plugin with nuke that causes it to crash intermittently when rendering multiple frames. So I'd like this script to be able to start nuke, tell it to do something, and then, if it crashes, try again. So if there is a way to catch a crash and still recover, that'd be great.
It might be better to use communicate:
from subprocess import Popen, PIPE, STDOUT
p = Popen(['myapp'], stdout=PIPE, stdin=PIPE, stderr=PIPE)
stdout_data = p.communicate(input='data_to_write')[0]
"Better", because of this warning:
Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.
To clarify some points:
As jro has mentioned, the right way is to use Popen.communicate.
However, when feeding stdin through communicate's input argument, you need to create the Popen object with stdin=subprocess.PIPE, according to the docs:
Note that if you want to send data to the process’s stdin, you need to create the Popen object with stdin=PIPE. Similarly, to get anything other than None in the result tuple, you need to give stdout=PIPE and/or stderr=PIPE too.
Also, qed mentioned in the comments that for Python 3.4 you need to encode the string, i.e. pass bytes rather than a string to input. This is not entirely true: according to the docs, if the streams were opened in text mode, the input should be a string (the source is the same page).
If streams were opened in text mode, input must be a string. Otherwise, it must be bytes.
So, if the streams were not opened explicitly in text mode, then something like below should work:
import subprocess
command = ['myapp', '--arg1', 'value_for_arg1']
p = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output = p.communicate(input='some data'.encode())[0]
I've left the stderr value above deliberately as STDOUT as an example.
That being said, sometimes you might want the output of another process rather than building it up from scratch. Let's say you want to run the equivalent of echo -n 'CATCH\nme' | grep -i catch | wc -m. This should normally return the number of characters in 'CATCH' plus a newline character, i.e. 6. The point of the echo here is to feed the CATCH\nme data to grep. In the Python subprocess chain, we can feed the data to grep via stdin from a variable, and then pass grep's stdout through a PIPE to the wc process's stdin (getting rid of the extra newline character along the way):
import subprocess
what_to_catch = 'catch'
what_to_feed = 'CATCH\nme'
# We create the first subprocess, note that we need stdin=PIPE and stdout=PIPE
p1 = subprocess.Popen(['grep', '-i', what_to_catch], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# We immediately run the first subprocess and get the result
# Note that we encode the data, otherwise we'd get a TypeError
p1_out = p1.communicate(input=what_to_feed.encode())[0]
# Well the result includes an '\n' at the end,
# if we want to get rid of it in a VERY hacky way
p1_out = p1_out.decode().strip().encode()
# We create the second subprocess, note that we need stdin=PIPE
p2 = subprocess.Popen(['wc', '-m'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# We run the second subprocess feeding it with the first subprocess' output.
# We decode the output to convert to a string
# We still have a '\n', so we strip that out
output = p2.communicate(input=p1_out)[0].decode().strip()
This is somewhat different from the response here, where you pipe the two processes directly instead of injecting data from Python in between.
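For reference, a sketch of that direct-piping approach with the same grep/wc commands (no intermediate decoding in Python):

import subprocess

p1 = subprocess.Popen(['grep', '-i', 'catch'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# Connect p1's stdout directly to p2's stdin.
p2 = subprocess.Popen(['wc', '-m'], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdin.write(b'CATCH\nme')
p1.stdin.close()   # send EOF so grep can finish
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits first
output = p2.communicate()[0].decode().strip()  # '6' here, since the newline is kept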
Hope that helps someone out.
Since Python 3.5, there is the subprocess.run() function, which provides a convenient way to initialize and interact with Popen() objects. run() takes an optional input argument through which you can pass things to stdin (as you would with Popen.communicate(), but all in one go).
Adapting jro's example to use run() would look like:
import subprocess
p = subprocess.run(['myapp'], input='data_to_write', capture_output=True, text=True)
After execution, p will be a CompletedProcess object. By setting capture_output to True, we make available a p.stdout attribute which gives us access to the output, if we care about it. text=True tells it to work with regular strings rather than bytes. If you want, you might also add the argument check=True to make it throw an error if the exit status (accessible regardless via p.returncode) isn't 0.
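As a small sketch of the check=True behavior (using the standard false utility, which always exits with a non-zero status):

import subprocess

try:
    subprocess.run(['false'], check=True)
except subprocess.CalledProcessError as e:
    print('command failed with exit status', e.returncode)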
This is the "modern"/quick and easy way to do to this.
One can write data to the subprocess object on-the-fly, instead of collecting all the input in a string beforehand to pass through the communicate() method.
This example sends a list of animal names to the Unix utility sort, and sends the output to standard output.
import sys, subprocess

p = subprocess.Popen('sort', stdin=subprocess.PIPE, stdout=sys.stdout)
for v in ('dog', 'cat', 'mouse', 'cow', 'mule', 'chicken', 'bear', 'robin'):
    p.stdin.write(v.encode() + b'\n')
p.communicate()
Note that writing to the process is done via p.stdin.write(v.encode()). I tried using
print(v.encode(), file=p.stdin), but that failed with the message TypeError: a bytes-like object is required, not 'str'. I haven't figured out how to get print() to work with this.
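One approach that should work is opening the pipe in text mode with text=True (Python 3.7+), so that print() can write strings to it directly; a sketch:

import sys, subprocess

# With text=True, p.stdin accepts str, so print() works.
p = subprocess.Popen('sort', stdin=subprocess.PIPE, stdout=sys.stdout, text=True)
for v in ('dog', 'cat', 'mouse'):
    print(v, file=p.stdin)
p.communicate()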
You can provide a file-like object to the stdin argument of subprocess.call().
The documentation for the Popen object applies here.
To capture the output, you should instead use subprocess.check_output(), which takes similar arguments. From the documentation:
>>> subprocess.check_output(
... "ls non_existent_file; exit 0",
... stderr=subprocess.STDOUT,
... shell=True)
b'ls: non_existent_file: No such file or directory\n'
I am calling a java program from my Python script, and it is outputting a lot of useless information I don't want. I have tried adding stdout=None to the Popen call:
subprocess.Popen(['java', '-jar', 'foo.jar'], stdout=None)
But it behaves the same. Any ideas?
From the 3.3 documentation:
stdin, stdout and stderr specify the executed program’s standard input, standard output and standard error file handles, respectively. Valid values are PIPE, DEVNULL, an existing file descriptor (a positive integer), an existing file object, and None.
So:
subprocess.check_call(['java', '-jar', 'foo.jar'], stdout=subprocess.DEVNULL)
This only exists in 3.3 and later. But the documentation says:
DEVNULL indicates that the special file os.devnull will be used.
And os.devnull exists way back to 2.4 (before subprocess existed). So, you can do the same thing manually:
with open(os.devnull, 'w') as devnull:
subprocess.check_call(['java', '-jar', 'foo.jar'], stdout=devnull)
Note that if you're doing something more complicated that doesn't fit into a single line, you need to keep devnull open for the entire life of the Popen object, not just its construction. (That is, put the whole thing inside the with statement.)
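For instance, a minimal sketch of keeping devnull open for the object's whole lifetime:

import os
import subprocess

with open(os.devnull, 'w') as devnull:
    p = subprocess.Popen(['java', '-jar', 'foo.jar'], stdout=devnull)
    # ... interact with p while it runs ...
    p.wait()  # devnull stays open until the process has finished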
The advantage of redirecting to /dev/null (POSIX) or NUL: (Windows) is that you don't create an unnecessary pipe, and, more importantly, can't run into edge cases where the subprocess blocks on writing to that pipe.
The disadvantage is that, in theory, subprocess may work on some platforms that os.devnull does not. If you only care about CPython on POSIX and Windows, PyPy, and Jython (which is most of you), this will never be a problem. For other cases, test before distributing your code.
From the documentation:
With the default settings of None, no redirection will occur.
You need to set stdout to subprocess.PIPE, then call .communicate() and simply ignore the captured output.
p = subprocess.Popen(['java', '-jar', 'foo.jar'], stdout=subprocess.PIPE)
p.communicate()
although I suspect that using subprocess.call() more than suffices for your needs:
subprocess.call(['java', '-jar', 'foo.jar'], stdout=subprocess.PIPE)