Redirect subprocess stdout to stderr - python

A standard feature of Python's subprocess API is to combine STDERR and STDOUT by using the keyword argument
stderr = subprocess.STDOUT
But currently I need the opposite: redirecting the STDOUT of a command to STDERR. Right now I am doing this manually using subprocess.getoutput or subprocess.check_output and printing the result, but is there a more concise option?
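For reference, the manual workaround looks something like this (a minimal sketch; some_cmd stands in for the real command):
import subprocess
import sys

# Capture the command's stdout, then re-print it on our own stderr.
out = subprocess.check_output(["some_cmd"], text=True)
print(out, end="", file=sys.stderr)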

Ouroborus mentions in the comments that we ought to be able to do something like
subprocess.run(args, stdout = subprocess.STDERR)
However, the docs don't mention the existence of subprocess.STDERR, and at least on my installation (3.8.10) that doesn't actually exist.
According to the docs,
stdin, stdout and stderr specify the executed program’s standard input, standard output and standard error file handles, respectively. Valid values are PIPE, DEVNULL, an existing file descriptor (a positive integer), an existing file object with a valid file descriptor, and None.
Assuming you're on a UNIX-type system (and if you're not, I'm not sure what you're planning on doing with stdout / stderr anyway), the file descriptors for stdin/stdout/stderr are always the same:
0 is stdin
1 is stdout
2 is stderr
3+ are used for fds you create in your program
So we can do something like
subprocess.run(args, stdout = 2)
to run a process and redirect its stdout to stderr.
Of course I would recommend you save that as a constant somewhere instead of just leaving a raw number 2 there. And if you're on Windows or something you may have to do a little research to see if things work exactly the same.
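For example (a sketch assuming a POSIX-style system; STDERR_FILENO is our own name for the constant, not something from the stdlib):
import subprocess

# POSIX reserves file descriptor 2 for standard error.
STDERR_FILENO = 2

subprocess.run(["echo", "this goes to stderr"], stdout=STDERR_FILENO)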
Update:
A subsequent search suggests that this numbering convention is part of POSIX, and that Windows explicitly obeys it.
Update:
@kdb points out in the comments that sys.stderr will typically satisfy the "an existing file object with a valid file descriptor" condition, making it an attractive alternative to using a raw fd here.
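That version would look like this (it assumes sys.stderr is still backed by a real file descriptor, i.e. it hasn't been replaced by something like io.StringIO):
import subprocess
import sys

# subprocess calls .fileno() on the file object, so any file object
# with a valid descriptor works here.
subprocess.run(["echo", "this goes to stderr"], stdout=sys.stderr)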

Related

Cannot redirect IO using popen

I cannot get output redirection to work using Popen. Here's the problematic code:
subprocess.Popen(['program',
                  'arg',  # arguments
                  '0'
                  '&> program.out'])
The program runs, but stdout and stderr don't get routed to the output file. Furthermore, the last argument gets concatenated with the redirect string into a single argument ('0&> program.out' in this case). When I join the commands together and pass the whole command string to Popen with shell=True, things go smoothly, but I think this might not be the recommended way to use Popen.
(The argument concatenation you're seeing is Python's adjacent-string-literal concatenation, caused by the missing comma in the list.) The &> filename syntax is shell (e.g. bash) redirection syntax, so you'd probably want to do something closer to:
with open('program.out', 'w') as fd:
    subprocess.Popen(['program', 'arg'], stdout=fd, stderr=fd)
as per the docs for stdout and stderr:
Valid values are PIPE, DEVNULL, an existing file descriptor (a positive integer), an existing file object, and None.
where the code above is simply passing "an existing file object".
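If you go this route, it's cleanest to wait for the child inside the with block, so the parent's handle stays open for the child's whole lifetime (a sketch; 'program' and 'arg' are the placeholders from the question). Strictly speaking the child inherits its own copy of the descriptor, but waiting inside the block avoids surprises:
import subprocess

with open('program.out', 'w') as fd:
    proc = subprocess.Popen(['program', 'arg'], stdout=fd, stderr=fd)
    proc.wait()  # keep fd open until the child finishes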

How do I spawn a shell sub-command, with its stdout bound to the parent's stdout in python

I want a python program to spawn a shell sub-command, with the stdout of the shell being written to the stdout of the python parent.
The python script will be run as a CGI, so the shell output should be sent back to the browser.
As the Python subprocess module docs say (emphasis mine):
stdin, stdout and stderr specify the executed program’s standard input, standard output and standard error file handles, respectively. Valid values are PIPE, an existing file descriptor (a positive integer), an existing file object, and None. PIPE indicates that a new pipe to the child should be created. With the default settings of None, no redirection will occur; the child’s file handles will be inherited from the parent. Additionally, stderr can be STDOUT, which indicates that the stderr data from the child process should be captured into the same file handle as for stdout.
So, it should be as easy as
subprocess.call(["your_shell_command", /*command line arguments, if any*/])
E.g.
subprocess.call(["ls", "-l"])
subprocess.call(["mkdir", "newdir"])
subprocess.call("your_shell_script.sh")

Ignoring output from subprocess.Popen [duplicate]

This question already has answers here:
How to read the first byte of a subprocess's stdout and then discard the rest in Python?
I am calling a Java program from my Python script, and it is outputting a lot of useless information I don't want. I have tried adding stdout=None to the Popen call:
subprocess.Popen(['java', '-jar', 'foo.jar'], stdout=None)
But it behaves the same. Any idea?
From the 3.3 documentation:
stdin, stdout and stderr specify the executed program’s standard input, standard output and standard error file handles, respectively. Valid values are PIPE, DEVNULL, an existing file descriptor (a positive integer), an existing file object, and None.
So:
subprocess.check_call(['java', '-jar', 'foo.jar'], stdout=subprocess.DEVNULL)
This only exists in 3.3 and later. But the documentation says:
DEVNULL indicates that the special file os.devnull will be used.
And os.devnull exists way back to 2.4 (before subprocess existed). So, you can do the same thing manually:
with open(os.devnull, 'w') as devnull:
    subprocess.check_call(['java', '-jar', 'foo.jar'], stdout=devnull)
Note that if you're doing something more complicated that doesn't fit into a single line, you need to keep devnull open for the entire life of the Popen object, not just its construction. (That is, put the whole thing inside the with statement.)
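In other words, something like this sketch:
import os
import subprocess

with open(os.devnull, 'w') as devnull:
    proc = subprocess.Popen(['java', '-jar', 'foo.jar'], stdout=devnull)
    # ... interact with proc while devnull is still open ...
    proc.wait()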
The advantage of redirecting to /dev/null (POSIX) or NUL: (Windows) is that you don't create an unnecessary pipe, and, more importantly, can't run into edge cases where the subprocess blocks on writing to that pipe.
The disadvantage is that, in theory, subprocess may work on some platforms that os.devnull does not. If you only care about CPython on POSIX and Windows, PyPy, and Jython (which is most of you), this will never be a problem. For other cases, test before distributing your code.
From the documentation:
With the default settings of None, no redirection will occur.
You need to set stdout to subprocess.PIPE, then call .communicate() and simply ignore the captured output.
p = subprocess.Popen(['java', '-jar', 'foo.jar'], stdout=subprocess.PIPE)
p.communicate()
although I suspect that using subprocess.call() more than suffices for your needs:
subprocess.call(['java', '-jar', 'foo.jar'], stdout=subprocess.PIPE)
(Be aware that handing PIPE to call() without ever reading from it can block the child once the OS pipe buffer fills up; if you just want to discard output, the DEVNULL approaches above are safer.)

Is it possible to prepend each STDERR line with a given string

I am writing a program to interact with a linux machine through the serial port, and I am using pexpect.spawn as my main communication channel as follows:
proc = pexpect.spawn("cu dir -l /dev/ttyUSB0 -s 115200", logfile = *someFile*)
and I am sending commands to the machine with the sendline("cmd") method, and at the end of each session I parse the log file to see how the commands behaved.
I would like to be able to distinguish between lines that were printed to stdout and stderr from my log file, but currently I have no way of doing that.
Is there a way to globally prepend each line printed to stderr with a given string?
You don't mention how you capture stdout and stderr, but one simple way to distinguish them is to place stdout and stderr in different files. For example:
./command.py >stdout-log 2>stderr-log
I think this is a limitation of pexpect. You're basically dealing with a black box command prompt, so pexpect has no knowledge about whether a string returned to the console (effectively) is stdout or stderr, just that something came back. Can you safely assume a limited set of message and error formats in your system so that you could write some regex-based post-processor?
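If the shell on the other end of the serial line is bash, another option is to tag stderr in-band with process substitution, so the prefixed lines land in your pexpect logfile (a sketch; proc is the pexpect.spawn object from the question, and the STDERR: prefix is arbitrary):
# sed prefixes each stderr line before it reaches the terminal,
# and therefore before it reaches pexpect's logfile.
proc.sendline("some_command 2> >(sed 's/^/STDERR: /')")
Note that the sed process runs asynchronously, so tagged lines can interleave with stdout slightly out of order.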

scrambled output from a child process run from subprocess

I'm using the following code to run another Python script. The problem I'm facing is that the output of that script comes out in a disorderly manner.
While running it from the command line, I get the correct output, i.e.:
some output here
Editing xml file and saving changes
Uploading xml file back..
While running the script using subprocess, I am getting some of the output in reverse order:
correct output till here
Uploading xml file back..
Editing xml file and saving changes
The script is executing without errors and making the right changes. So I think the culprit might be the code that is calling the child script, but I can't find the problem:
cmd = "child_script.py"
proc = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE,stderr=subprocess.STDOUT)
(fout ,ferr) = ( proc.stdout, proc.stderr )
print "Going inside while - loop"
while True:
line = proc.stdout.readline()
print line
fo.write(line)
try :
err = ferr.readline()
fe.write(err)
except Exception, e:
pass
if not line:
pass
break
[EDIT]: fo and fe are file handles to the output and error logs. Also, the script is being run on Windows. Sorry for missing these details.
There are a few problems with the part of the script you've quoted, I'm afraid:
As mentioned in detly's comment, what are fo and fe? Presumably those are objects to which you're writing the output of the child process? (Update: you indicate that these are both for writing output logs.)
There's an indentation error on line 3. (Update: I've fixed that in the original post.)
You're specifying stderr=subprocess.STDOUT, so: (a) ferr will always be None in your loop and (b) due to buffering, standard output and error may be mixed in an unpredictable way. However, it looks from your code as if you actually want to deal with standard output and standard error separately, so perhaps try stderr=subprocess.PIPE instead.
It would be a good idea to rewrite your loop as jsbueno suggests:
from subprocess import Popen, PIPE

proc = Popen(["child_script.py"], stdout=PIPE, stderr=PIPE)
fout, ferr = proc.stdout, proc.stderr
for line in fout:
    print(line.rstrip())
    fo.write(line)
for line in ferr:
    fe.write(line)
... or to reduce it even further, since it seems the aim is essentially to write the child's standard output and standard error to fo and fe, you could simply do:
proc = subprocess.Popen(["child_script.py"], stdout=fo, stderr=fe)
If you still see the output lines swapped in the file that fo is writing to, then we can only assume that there is some way in which this can happen in the child script. e.g. is the child script multi-threaded? Is one of the lines printed via a callback from another function?
Most of the times I've seen output order differ from one run to another, some output was sent to the C standard IO stream stdout and some was sent to stderr. The buffering characteristics of stdout and stderr vary depending on whether they are connected to a terminal, pipes, files, etc.:
NOTES
The stream stderr is unbuffered. The stream stdout is line-buffered when it points to a terminal. Partial lines will not appear until fflush(3) or exit(3) is called, or a newline is printed. This can produce unexpected results, especially with debugging output. The buffering mode of the standard streams (or any other stream) can be changed using the setbuf(3) or setvbuf(3) call. Note that in case stdin is associated with a terminal, there may also be input buffering in the terminal driver, entirely unrelated to stdio buffering. (Indeed, normally terminal input is line buffered in the kernel.) This kernel input handling can be modified using calls like tcsetattr(3); see also stty(1), and termios(3).
So perhaps you should configure both stdout and stderr to go to the same source, so the same buffering will be applied to both streams.
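For example, since the child here is itself a Python script, one sketch is to run it unbuffered and merge the two streams into a single pipe, so there is only one ordering to preserve (python -u disables the child interpreter's stdio buffering):
import subprocess

proc = subprocess.Popen(
    ["python", "-u", "child_script.py"],  # -u: unbuffered child stdio
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,             # merge stderr into the same pipe
)
for line in proc.stdout:
    print(line.rstrip())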
Also, some programs open the terminal directly with open("/dev/tty", ...) (mostly so they can read passwords), so comparing terminal output with pipe output isn't always going to work.
Further, if your program is mixing direct write(2) calls with standard IO calls, the order of output can be different based on the different buffering choices.
I hope one of these is right :) let me know which, if any.
