How do I make this compatible with Windows? - python

Good day, Stackoverflow!
I have a little (big) problem with porting one of my Python scripts for Linux to Windows. The hairy thing about this is that I have to start a process and redirect all of its streams into pipes that I go over and read and write to and from in my script.
With Linux this is a piece of cake:
server_startcmd = [
    "java",
    "-Xmx%s" % self.java_heapmax,
    "-Xms%s" % self.java_heapmin,
    "-jar",
    server_jar,
    "nogui"
]

server = Popen(server_startcmd, stdout = PIPE,
                                stderr = PIPE,
                                stdin  = PIPE)

outputs = [
    server_socket,  # A listener socket that has been set up before
    server.stderr,
    server.stdout,
    sys.stdin       # Because I also have to read and process this.
]

clients = []

while True:
    read_ready, write_ready, except_ready = select.select(outputs, [], [], 1.0)
    if read_ready == []:
        perform_idle_command()  # important step
    else:
        for s in read_ready:
            if s == sys.stdin:
                pass  # Do stdin stuff
            elif s == server_socket:
                pass  # Accept client and add it to 'clients'
            elif s in clients:
                pass  # Got data from one of the clients
The three-way alternation between the server socket, the script's own stdin, and the output channels of the child process (plus the child's input channel, which my script writes to, although that one is not in the select() list) is the most important part of the script.
I know that for Windows there is win32pipe in the win32api module. The problem is that finding resources on this API is pretty hard, and what I found was not really helpful.
How do I use this win32pipe module to do what I want? I have some sources where it is used in a different but similar situation, but they confused me quite a bit:
if os.name == 'nt':
    import win32pipe
    (stdin, stdout) = win32pipe.popen4(" ".join(server_args))
else:
    server = Popen(server_args,
                   stdout = PIPE,
                   stdin  = PIPE,
                   stderr = PIPE)
    outputs = [server.stderr, server.stdout, sys.stdin]
    stdin = server.stdin

[...]

while True:
    try:
        if os.name == 'nt':
            outready = [stdout]
        else:
            outready, inready, exceptready = select.select(outputs, [], [], 1.0)
    except:
        break
stdout here is the combined stdout and stderr of the child process that has been started with win32pipe.popen4(...)
The questions arising are:
Why not select() for the Windows version? Does that not work?
If you don't use select() there, how can I implement the necessary timeout that select() provides (which obviously won't work like this here)?
Please, help me out!

I think you cannot use select() on pipes on Windows.
In one of the projects where I was porting a Linux app to Windows, I too had missed this point and had to rewrite the whole logic.
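For what it's worth, a common workaround on Windows is to read each of the child's pipes in its own background thread and funnel the data through a queue; the queue's get(timeout=...) then stands in for the timeout that select() used to provide. A minimal sketch, assuming Python 3 and reusing perform_idle_command() from the question (the pump helper, the shortened java command line, and the [stdout]/[stderr] tags are mine, not part of any win32 API):

import queue
import sys
import threading
from subprocess import Popen, PIPE

def pump(pipe, tag, q):
    # select() can't watch anonymous pipes on Windows, so read each pipe in a thread.
    for line in iter(pipe.readline, b''):
        q.put((tag, line))
    q.put((tag, None))  # signal EOF for this stream

server = Popen(["java", "-jar", "server.jar", "nogui"],
               stdin=PIPE, stdout=PIPE, stderr=PIPE)
q = queue.Queue()
for tag, pipe in (("stdout", server.stdout), ("stderr", server.stderr)):
    threading.Thread(target=pump, args=(pipe, tag, q), daemon=True).start()

open_streams = 2
while open_streams:
    try:
        tag, line = q.get(timeout=1.0)   # stands in for select()'s 1-second timeout
    except queue.Empty:
        perform_idle_command()           # the "important step" from the question
        continue
    if line is None:
        open_streams -= 1
    else:
        sys.stdout.write("[%s] %s" % (tag, line.decode(errors="replace")))

The server socket and sys.stdin would need their own reader threads too, or you can keep a select() loop just for the socket, since select() does work on sockets on Windows.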

Related

Catch prints in Python from a long process that activated via os.system [duplicate]

I am trying to find a way in Python to run other programs in such a way that:
The stdout and stderr of the program being run can be logged separately.
The stdout and stderr of the program being run can be viewed in near-real time, such that if the child process hangs, the user can see. (i.e. we do not wait for execution to complete before printing the stdout/stderr to the user)
Bonus criterion: The program being run does not know it is being run via Python, and thus will not do unexpected things (like chunk its output instead of printing it in real-time, or exit because it demands a terminal to view its output). This small criterion pretty much means we will need to use a pty, I think.
Here is what I've got so far...
Method 1:
def method1(command):
    ## subprocess.communicate() will give us the stdout and stderr separately,
    ## but we will have to wait until the end of command execution to print anything.
    ## This means if the child process hangs, we will never know....
    proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, executable='/bin/bash')
    stdout, stderr = proc.communicate()  # record both, but no way to print stdout/stderr in real-time
    print ' ######### REAL-TIME ######### '
    ########  Not Possible
    print ' ########## RESULTS ########## '
    print 'STDOUT:'
    print stdout
    print 'STDERR:'
    print stderr
Method 2
def method2(command):
    ## Using pexpect to run our command in a pty, we can see the child's stdout in real-time,
    ## however we cannot see the stderr from "curl google.com", presumably because it is not connected to a pty?
    ## Furthermore, I do not know how to log it beyond writing out to a file (p.logfile). I need the stdout and stderr
    ## as strings, not files on disk! On the upside, pexpect would give a lot of extra functionality (if it worked!)
    proc = pexpect.spawn('/bin/bash', ['-c', command])
    print ' ######### REAL-TIME ######### '
    proc.interact()
    print ' ########## RESULTS ########## '
    ########  Not Possible
Method 3:
def method3(command):
    ## This method is very much like method1, and would work exactly as desired
    ## if only proc.xxx.read(1) wouldn't block waiting for something. Which it does. So this is useless.
    proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, executable='/bin/bash')
    print ' ######### REAL-TIME ######### '
    out, err, outbuf, errbuf = '', '', '', ''
    firstToSpeak = None
    while proc.poll() == None:
        stdout = proc.stdout.read(1)  # blocks
        stderr = proc.stderr.read(1)  # also blocks
        if firstToSpeak == None:
            if stdout != '':   firstToSpeak = 'stdout'; outbuf, errbuf = stdout, stderr
            elif stderr != '': firstToSpeak = 'stderr'; outbuf, errbuf = stdout, stderr
        else:
            if (stdout != '') or (stderr != ''):
                outbuf += stdout; errbuf += stderr
            else:
                out += outbuf; err += errbuf
                if firstToSpeak == 'stdout': sys.stdout.write(outbuf+errbuf); sys.stdout.flush()
                else:                        sys.stdout.write(errbuf+outbuf); sys.stdout.flush()
                firstToSpeak = None
    print ''
    print ' ########## RESULTS ########## '
    print 'STDOUT:'
    print out
    print 'STDERR:'
    print err
To try these methods out, you will need to import sys, subprocess, and pexpect.
pexpect is pure Python and can be had with
sudo pip install pexpect
I think the solution will involve Python's pty module, which is somewhat of a black art: I cannot find anyone who knows how to use it. Perhaps SO knows :)
As a heads-up, I recommend you use 'curl www.google.com' as a test command, because it prints its status out on stderr for some reason :D
UPDATE-1:
OK so the pty library is not fit for human consumption. The docs, essentially, are the source code.
Any presented solution that is blocking and not async is not going to work here. The Threads/Queue method by Padraic Cunningham works great, although adding pty support is not possible - and it's 'dirty' (to quote Freenode's #python).
It seems like the only solution fit for production-standard code is using the Twisted framework, which even supports pty as a boolean switch to run processes exactly as if they were invoked from the shell.
But adding Twisted into a project requires a total rewrite of all the code. This is a total bummer :/
UPDATE-2:
Two answers were provided, one of which addresses the first two criteria and will work well where you just need both the stdout and stderr using Threads and Queue. The other answer uses select, a non-blocking method for reading file descriptors, and pty, a method to "trick" the spawned process into believing it is running in a real terminal just as if it was run from Bash directly - but may or may not have side-effects. I wish I could accept both answers, because the "correct" method really depends on the situation and why you are subprocessing in the first place, but alas, I could only accept one.
The stdout and stderr of the program being run can be logged separately.
You can't use pexpect because both stdout and stderr go to the same pty and there is no way to separate them after that.
The stdout and stderr of the program being run can be viewed in near-real time, such that if the child process hangs, the user can see. (i.e. we do not wait for execution to complete before printing the stdout/stderr to the user)
If the output of a subprocess is not a tty then it is likely that it uses block buffering, and therefore if it doesn't produce much output then it won't be "real time"; e.g., if the buffer is 4K then your parent Python process won't see anything until the child process prints 4K chars and the buffer overflows or is flushed explicitly (inside the subprocess). This buffer is inside the child process and there are no standard ways to manage it from outside. Here's a picture that shows the stdio buffers and the pipe buffer for a command1 | command2 shell pipeline:
The program being run does not know it is being run via python, and thus will not do unexpected things (like chunk its output instead of printing it in real-time, or exit because it demands a terminal to view its output).
It seems you meant the opposite, i.e., it is likely that your child process chunks its output instead of flushing each output line as soon as possible if the output is redirected to a pipe (when you use stdout=PIPE in Python). It means that the default threading or asyncio solutions won't work as-is in your case.
There are several options to work around it:
the command may accept a command-line argument such as grep --line-buffered or python -u, to disable block buffering.
stdbuf works for some programs i.e., you could run ['stdbuf', '-oL', '-eL'] + command using the threading or asyncio solution above and you should get stdout, stderr separately and lines should appear in near-real time:
#!/usr/bin/env python3
import os
import sys
from select import select
from subprocess import Popen, PIPE

with Popen(['stdbuf', '-oL', '-e0', 'curl', 'www.google.com'],
           stdout=PIPE, stderr=PIPE) as p:
    readable = {
        p.stdout.fileno(): sys.stdout.buffer,  # log separately
        p.stderr.fileno(): sys.stderr.buffer,
    }
    while readable:
        for fd in select(readable, [], [])[0]:
            data = os.read(fd, 1024)  # read available
            if not data:  # EOF
                del readable[fd]
            else:
                readable[fd].write(data)
                readable[fd].flush()
finally, you could try the pty + select solution with two ptys:
#!/usr/bin/env python3
import errno
import os
import pty
import sys
from select import select
from subprocess import Popen

masters, slaves = zip(pty.openpty(), pty.openpty())
with Popen([sys.executable, '-c', r'''import sys, time
print('stdout', 1) # no explicit flush
time.sleep(.5)
print('stderr', 2, file=sys.stderr)
time.sleep(.5)
print('stdout', 3)
time.sleep(.5)
print('stderr', 4, file=sys.stderr)
'''],
           stdin=slaves[0], stdout=slaves[0], stderr=slaves[1]):
    for fd in slaves:
        os.close(fd)  # no input
    readable = {
        masters[0]: sys.stdout.buffer,  # log separately
        masters[1]: sys.stderr.buffer,
    }
    while readable:
        for fd in select(readable, [], [])[0]:
            try:
                data = os.read(fd, 1024)  # read available
            except OSError as e:
                if e.errno != errno.EIO:
                    raise  # XXX cleanup
                del readable[fd]  # EIO means EOF on some systems
            else:
                if not data:  # EOF
                    del readable[fd]
                else:
                    readable[fd].write(data)
                    readable[fd].flush()
for fd in masters:
    os.close(fd)
I don't know what the side-effects are of using different ptys for stdout and stderr. You could try whether a single pty is enough in your case, e.g., set stderr=PIPE and use p.stderr.fileno() instead of masters[1]. A comment in the sh source suggests that there are issues if stderr is not in {STDOUT, pipe}.
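If you want to try that single-pty variant, a sketch might look like this (an untested assumption on my part; it simply replaces masters[1] with a plain pipe while keeping the same select()/EIO handling):

#!/usr/bin/env python3
# Sketch of the single-pty variant: the child's stdout goes to a pty (so it stays
# line-buffered), stderr goes to an ordinary pipe, and both are multiplexed with
# select(). POSIX-only, like the two-pty example above.
import errno
import os
import pty
import sys
from select import select
from subprocess import Popen, PIPE

master, slave = pty.openpty()
with Popen(['curl', 'www.google.com'], stdout=slave, stderr=PIPE) as p:
    os.close(slave)  # the parent only needs the master end
    readable = {
        master: sys.stdout.buffer,
        p.stderr.fileno(): sys.stderr.buffer,
    }
    while readable:
        for fd in select(readable, [], [])[0]:
            try:
                data = os.read(fd, 1024)
            except OSError as e:
                if e.errno != errno.EIO:
                    raise
                data = b''  # EIO on a pty means EOF
            if not data:
                del readable[fd]
            else:
                readable[fd].write(data)
                readable[fd].flush()
os.close(master)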
If you want to read from stderr and stdout and get the output separately, you can use a Thread with a Queue, not overly tested but something like the following:
import threading
import queue
from subprocess import Popen, PIPE

def run(fd, q):
    for line in iter(fd.readline, ''):
        q.put(line)
    q.put(None)

def create(fd):
    q = queue.Queue()
    t = threading.Thread(target=run, args=(fd, q))
    t.daemon = True
    t.start()
    return q, t

process = Popen(["curl", "www.google.com"], stdout=PIPE, stderr=PIPE,
                universal_newlines=True)

std_q, std_out = create(process.stdout)
err_q, err_read = create(process.stderr)

while std_out.is_alive() or err_read.is_alive():
    for line in iter(std_q.get, None):
        print(line)
    for line in iter(err_q.get, None):
        print(line)
While J.F. Sebastian's answer certainly solves the heart of the problem, I'm running Python 2.7 (which wasn't in the original criteria), so I'm just throwing this out there for any other weary travellers who just want to cut/paste some code.
I haven't tested this thoroughly yet, but on all the commands I have tried it seems to work perfectly :)
You may want to change .decode('ascii') to .decode('utf-8') - I'm still testing that bit out.
#!/usr/bin/env python2.7
import errno
import os
import pty
import sys
from select import select
import subprocess

stdout = ''
stderr = ''
command = 'curl google.com ; sleep 5 ; echo "hey"'

masters, slaves = zip(pty.openpty(), pty.openpty())

p = subprocess.Popen(command, stdin=slaves[0], stdout=slaves[0], stderr=slaves[1],
                     shell=True, executable='/bin/bash')
for fd in slaves: os.close(fd)

readable = { masters[0]: sys.stdout, masters[1]: sys.stderr }

try:
    print ' ######### REAL-TIME ######### '
    while readable:
        for fd in select(readable, [], [])[0]:
            try:
                data = os.read(fd, 1024)
            except OSError as e:
                if e.errno != errno.EIO: raise
                del readable[fd]  # EIO means EOF on some systems
            else:
                if not data:  # EOF
                    del readable[fd]
                else:
                    if fd == masters[0]: stdout += data.decode('ascii')
                    else:                stderr += data.decode('ascii')
                    readable[fd].write(data)
                    readable[fd].flush()
except:
    print "Unexpected error:", sys.exc_info()[0]
    raise
finally:
    p.wait()
    for fd in masters: os.close(fd)
    print ''
    print ' ########## RESULTS ########## '
    print 'STDOUT:'
    print stdout
    print 'STDERR:'
    print stderr

python subprocess.Popen output during execution [duplicate]

I'm using a Python script as a driver for a hydrodynamics code. When it comes time to run the simulation, I use subprocess.Popen to run the code and collect the output from stdout and stderr into a subprocess.PIPE; then I can print (and save to a log-file) the output information, and check for any errors. The problem is, I have no idea how the code is progressing. If I run it directly from the command line, it gives me output about what iteration it's at, what time, what the next time-step is, etc.
Is there a way to both store the output (for logging and error checking), and also produce a live-streaming output?
The relevant section of my code:
ret_val = subprocess.Popen( run_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True )
output, errors = ret_val.communicate()
log_file.write(output)
print output
if( ret_val.returncode ):
    print "RUN failed\n\n%s\n\n" % (errors)
    success = False

if( errors ): log_file.write("\n\n%s\n\n" % errors)
Originally I was piping the run_command through tee so that a copy went directly to the log-file, and the stream still output directly to the terminal -- but that way I can't store any errors (to my knowledge).
My temporary solution so far:
ret_val = subprocess.Popen( run_command, stdout=log_file, stderr=subprocess.PIPE, shell=True )
while not ret_val.poll():
    log_file.flush()
then, in another terminal, run tail -f log.txt (s.t. log_file = 'log.txt').
TLDR for Python 3:
import subprocess
import sys

with open("test.log", "wb") as f:
    process = subprocess.Popen(your_command, stdout=subprocess.PIPE)
    for c in iter(lambda: process.stdout.read(1), b""):
        sys.stdout.buffer.write(c)
        f.write(c)
You have two ways of doing this, either by creating an iterator from the read or readline functions and do:
import subprocess
import sys

# replace "w" with "wb" for Python 3
with open("test.log", "w") as f:
    process = subprocess.Popen(your_command, stdout=subprocess.PIPE)
    # replace "" with b'' for Python 3
    for c in iter(lambda: process.stdout.read(1), ""):
        sys.stdout.write(c)
        f.write(c)
or
import subprocess
import sys

# replace "w" with "wb" for Python 3
with open("test.log", "w") as f:
    process = subprocess.Popen(your_command, stdout=subprocess.PIPE)
    # replace "" with b"" for Python 3
    for line in iter(process.stdout.readline, ""):
        sys.stdout.write(line)
        f.write(line)
Or you can create a reader and a writer file, pass the writer to the Popen, and read from the reader:
import io
import time
import subprocess
import sys

filename = "test.log"
with io.open(filename, "wb") as writer, io.open(filename, "rb", 1) as reader:
    process = subprocess.Popen(command, stdout=writer)
    while process.poll() is None:
        sys.stdout.write(reader.read())
        time.sleep(0.5)
    # Read the remaining
    sys.stdout.write(reader.read())
This way you will have the data written in the test.log as well as on the standard output.
The only advantage of the file approach is that your code doesn't block. So you can do whatever you want in the meantime and read whenever you want from the reader in a non-blocking way. When you use PIPE, read and readline functions will block until either one character is written to the pipe or a line is written to the pipe respectively.
Executive Summary (or "tl;dr" version): it's easy when there's at most one subprocess.PIPE, otherwise it's hard.
It may be time to explain a bit about how subprocess.Popen does its thing.
(Caveat: this is for Python 2.x, although 3.x is similar; and I'm quite fuzzy on the Windows variant. I understand the POSIX stuff much better.)
The Popen function needs to deal with zero-to-three I/O streams, somewhat simultaneously. These are denoted stdin, stdout, and stderr as usual.
You can provide:
None, indicating that you don't want to redirect the stream. The child will simply inherit it as usual. Note that on POSIX systems, at least, this does not mean it will use Python's sys.stdout, just Python's actual stdout; see demo at end.
An int value. This is a "raw" file descriptor (in POSIX at least). (Side note: PIPE and STDOUT are actually ints internally, but are "impossible" descriptors, -1 and -2.)
A stream—really, any object with a fileno method. Popen will find the descriptor for that stream, using stream.fileno(), and then proceed as for an int value.
subprocess.PIPE, indicating that Python should create a pipe.
subprocess.STDOUT (for stderr only): tell Python to use the same descriptor as for stdout. This only makes sense if you provided a (non-None) value for stdout, and even then, it is only needed if you set stdout=subprocess.PIPE. (Otherwise you can just provide the same argument you provided for stdout, e.g., Popen(..., stdout=stream, stderr=stream).)
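To make the first four kinds of values concrete, here is a small sketch (the commands and file names are made up for illustration, and it assumes a POSIX system where echo is an executable):

import os
import subprocess

raw_fd = os.open("child_raw.log", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)  # a plain int descriptor
stream = open("child_stream.log", "w")                                     # any object with fileno()

subprocess.Popen(["echo", "inherits the real stdout"]).wait()               # None: no redirection
subprocess.Popen(["echo", "goes to the raw fd"], stdout=raw_fd).wait()      # int descriptor
subprocess.Popen(["echo", "goes to the stream"], stdout=stream).wait()      # stream, via fileno()

p = subprocess.Popen(["echo", "comes back through a pipe"],
                     stdout=subprocess.PIPE)                                 # PIPE: Python makes a pipe
print(p.communicate()[0])

os.close(raw_fd)
stream.close()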
The easiest cases (no pipes)
If you redirect nothing (leave all three as the default None value or supply explicit None), Popen has it quite easy. It just needs to spin off the subprocess and let it run. Or, if you redirect to a non-PIPE (an int, or a stream's fileno()), it's still easy, as the OS does all the work. Python just needs to spin off the subprocess, connecting its stdin, stdout, and/or stderr to the provided file descriptors.
The still-easy case: one pipe
If you redirect only one stream, Popen still has things pretty easy. Let's pick one stream at a time and watch.
Suppose you want to supply some stdin, but let stdout and stderr go un-redirected, or go to a file descriptor. As the parent process, your Python program simply needs to use write() to send data down the pipe. You can do this yourself, e.g.:
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
proc.stdin.write('here, have some data\n') # etc
or you can pass the stdin data to proc.communicate(), which then does the stdin.write shown above. There is no output coming back so communicate() has only one other real job: it also closes the pipe for you. (If you don't call proc.communicate() you must call proc.stdin.close() to close the pipe, so that the subprocess knows there is no more data coming through.)
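For example, the communicate() form of the same thing writes, closes the pipe, and waits in one call. A sketch in Python 2 style, matching the rest of this answer; 'cat' is just a stand-in for a command that reads its stdin:

import subprocess

proc = subprocess.Popen(['cat'], stdin=subprocess.PIPE)  # 'cat' echoes whatever it reads
proc.communicate('here, have some data\n')               # writes to stdin, closes it, waits for exit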
Suppose you want to capture stdout but leave stdin and stderr alone. Again, it's easy: just call proc.stdout.read() (or equivalent) until there is no more output. Since proc.stdout is a normal Python I/O stream you can use all the normal constructs on it, like:
for line in proc.stdout:
or, again, you can use proc.communicate(), which simply does the read() for you.
If you want to capture only stderr, it works the same as with stdout.
There's one more trick before things get hard. Suppose you want to capture stdout, and also capture stderr but on the same pipe as stdout:
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
In this case, subprocess "cheats"! Well, it has to do this, so it's not really cheating: it starts the subprocess with both its stdout and its stderr directed into the (single) pipe-descriptor that feeds back to its parent (Python) process. On the parent side, there's again only a single pipe-descriptor for reading the output. All the "stderr" output shows up in proc.stdout, and if you call proc.communicate(), the stderr result (second value in the tuple) will be None, not a string.
The hard cases: two or more pipes
The problems all come about when you want to use at least two pipes. In fact, the subprocess code itself has this bit:
def communicate(self, input=None):
    ...
    # Optimization: If we are only using one pipe, or no pipe at
    # all, using select() or threads is unnecessary.
    if [self.stdin, self.stdout, self.stderr].count(None) >= 2:
But, alas, here we've made at least two, and maybe three, different pipes, so the count(None) returns either 1 or 0. We must do things the hard way.
On Windows, this uses threading.Thread to accumulate results for self.stdout and self.stderr, and has the parent thread deliver self.stdin input data (and then close the pipe).
On POSIX, this uses poll if available, otherwise select, to accumulate output and deliver stdin input. All this runs in the (single) parent process/thread.
Threads or poll/select are needed here to avoid deadlock. Suppose, for instance, that we've redirected all three streams to three separate pipes. Suppose further that there's a small limit on how much data can be stuffed into a pipe before the writing process is suspended, waiting for the reading process to "clean out" the pipe from the other end. Let's set that small limit to a single byte, just for illustration. (This is in fact how things work, except that the limit is much bigger than one byte.)
If the parent (Python) process tries to write several bytes, say 'go\n', to proc.stdin, the first byte goes in and then the second causes the Python process to suspend, waiting for the subprocess to read the first byte, emptying the pipe.
Meanwhile, suppose the subprocess decides to print a friendly "Hello! Don't Panic!" greeting. The H goes into its stdout pipe, but the e causes it to suspend, waiting for its parent to read that H, emptying the stdout pipe.
Now we're stuck: the Python process is asleep, waiting to finish saying "go", and the subprocess is also asleep, waiting to finish saying "Hello! Don't Panic!".
The subprocess.Popen code avoids this problem with threading-or-select/poll. When bytes can go over the pipes, they go. When they can't, only a thread (not the whole process) has to sleep—or, in the case of select/poll, the Python process waits simultaneously for "can write" or "data available", writes to the process's stdin only when there is room, and reads its stdout and/or stderr only when data are ready. The proc.communicate() code (actually _communicate where the hairy cases are handled) returns once all stdin data (if any) have been sent and all stdout and/or stderr data have been accumulated.
If you want to read both stdout and stderr on two different pipes (regardless of any stdin redirection), you will need to avoid deadlock too. The deadlock scenario here is different—it occurs when the subprocess writes something long to stderr while you're pulling data from stdout, or vice versa—but it's still there.
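In practice, when you do want two (or three) pipes, the safe route is to let communicate() do the multiplexing for you. A sketch (Python 2 style; 'tr a-z A-Z' is just a stand-in for a command that reads stdin and writes stdout):

import subprocess

proc = subprocess.Popen(['tr', 'a-z', 'A-Z'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
# communicate() feeds stdin and drains both output pipes at the same time
# (threads on Windows, poll/select on POSIX), so neither side can deadlock.
out, err = proc.communicate('go\n')
print repr(out), repr(err)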
The Demo
I promised to demonstrate that, un-redirected, Python subprocesses write to the underlying stdout, not sys.stdout. So, here is some code:
from cStringIO import StringIO
import os
import subprocess
import sys

def show1():
    print 'start show1'
    save = sys.stdout
    sys.stdout = StringIO()
    print 'sys.stdout being buffered'
    proc = subprocess.Popen(['echo', 'hello'])
    proc.wait()
    in_stdout = sys.stdout.getvalue()
    sys.stdout = save
    print 'in buffer:', in_stdout

def show2():
    print 'start show2'
    save = sys.stdout
    sys.stdout = open(os.devnull, 'w')
    print 'after redirect sys.stdout'
    proc = subprocess.Popen(['echo', 'hello'])
    proc.wait()
    sys.stdout = save

show1()
show2()
When run:
$ python out.py
start show1
hello
in buffer: sys.stdout being buffered
start show2
hello
Note that the first routine will fail if you add stdout=sys.stdout, as a StringIO object has no fileno. The second will omit the hello if you add stdout=sys.stdout since sys.stdout has been redirected to os.devnull.
(If you redirect Python's file-descriptor-1, the subprocess will follow that redirection. The open(os.devnull, 'w') call produces a stream whose fileno() is greater than 2.)
We can also use the default file iterator for reading stdout instead of using the iter construct with readline().
import subprocess
import sys

process = subprocess.Popen(
    your_command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
for line in process.stdout:
    sys.stdout.write(line)
In addition to all these answers, one simple approach could also be as follows:
process = subprocess.Popen(your_command, stdout=subprocess.PIPE)

while process.stdout.readable():
    line = process.stdout.readline()
    if not line:
        break
    print(line.strip())
Loop through the readable stream as long as it's readable and if it gets an empty result, stop.
The key here is that readline() returns a line (with \n at the end) as long as there's an output and empty if it's really at the end.
Hope this helps someone.
If you're able to use third-party libraries, you might be able to use something like sarge (disclosure: I'm its maintainer). This library allows non-blocking access to output streams from subprocesses - it's layered over the subprocess module.
If all you need is for the output to be visible on the console, the easiest solution for me was to pass the following arguments to Popen:
with Popen(cmd, stdout=sys.stdout, stderr=sys.stderr) as proc:
which will use your Python script's stdio file handles.
Solution 1: Log stdout AND stderr concurrently in realtime
A simple solution which logs both stdout AND stderr concurrently, line-by-line in realtime into a log file.
import subprocess as sp
from concurrent.futures import ThreadPoolExecutor

def log_popen_pipe(p, stdfile):
    with open("mylog.txt", "w") as f:
        while p.poll() is None:
            f.write(stdfile.readline())
            f.flush()
        # Write the rest from the buffer
        f.write(stdfile.read())

with sp.Popen(["ls"], stdout=sp.PIPE, stderr=sp.PIPE, text=True) as p:
    with ThreadPoolExecutor(2) as pool:
        r1 = pool.submit(log_popen_pipe, p, p.stdout)
        r2 = pool.submit(log_popen_pipe, p, p.stderr)
        r1.result()
        r2.result()
Solution 2: A function read_popen_pipes() that allows you to iterate over both pipes (stdout/stderr), concurrently in realtime
import subprocess as sp
from queue import Queue, Empty
from concurrent.futures import ThreadPoolExecutor

def enqueue_output(file, queue):
    for line in iter(file.readline, ''):
        queue.put(line)
    file.close()

def read_popen_pipes(p):
    with ThreadPoolExecutor(2) as pool:
        q_stdout, q_stderr = Queue(), Queue()

        pool.submit(enqueue_output, p.stdout, q_stdout)
        pool.submit(enqueue_output, p.stderr, q_stderr)

        while True:
            if p.poll() is not None and q_stdout.empty() and q_stderr.empty():
                break

            out_line = err_line = ''
            try:
                out_line = q_stdout.get_nowait()
            except Empty:
                pass
            try:
                err_line = q_stderr.get_nowait()
            except Empty:
                pass

            yield (out_line, err_line)

# The function in use:
with sp.Popen(["ls"], stdout=sp.PIPE, stderr=sp.PIPE, text=True) as p:
    for out_line, err_line in read_popen_pipes(p):
        print(out_line, end='')
        print(err_line, end='')
    p.poll()
Similar to previous answers, but the following solution worked for me on Windows using Python 3 to provide a common method to print and log in realtime (source):
import subprocess

def print_and_log(command, logFile):
    with open(logFile, 'wb') as f:
        command = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)

        while True:
            output = command.stdout.readline()
            if not output and command.poll() is not None:
                f.close()
                break
            if output:
                f.write(output)
                print(str(output.strip(), 'utf-8'), flush=True)

        return command.poll()
A good but "heavyweight" solution is to use Twisted - see the bottom.
If you're willing to live with only stdout something along those lines should work:
import subprocess
import sys

popenobj = subprocess.Popen(["ls", "-Rl"], stdout=subprocess.PIPE)
while not popenobj.poll():
    stdoutdata = popenobj.stdout.readline()
    if stdoutdata:
        sys.stdout.write(stdoutdata)
    else:
        break
print "Return code", popenobj.returncode
(If you use read() it tries to read the entire "file", which isn't useful; what we really could use here is something that reads all the data that's in the pipe right now.)
One might also try to approach this with threading, e.g.:
import subprocess
import sys
import threading

popenobj = subprocess.Popen("ls", stdout=subprocess.PIPE, shell=True)

def stdoutprocess(o):
    while True:
        stdoutdata = o.stdout.readline()
        if stdoutdata:
            sys.stdout.write(stdoutdata)
        else:
            break

t = threading.Thread(target=stdoutprocess, args=(popenobj,))
t.start()
popenobj.wait()
t.join()
print "Return code", popenobj.returncode
Now we could potentially add stderr as well by having two threads.
Note, however, that the subprocess docs discourage using these files directly and recommend communicate() (mostly out of concern about deadlocks, which I think isn't an issue above), and the solutions are a little clunky, so it really seems like the subprocess module isn't quite up to the job (also see: http://www.python.org/dev/peps/pep-3145/) and we need to look at something else.
A more involved solution is to use Twisted as shown here: https://twistedmatrix.com/documents/11.1.0/core/howto/process.html
The way you do this with Twisted is to create your process using reactor.spawnprocess() and providing a ProcessProtocol that then processes output asynchronously. The Twisted sample Python code is here: https://twistedmatrix.com/documents/11.1.0/core/howto/listings/process/process.py
Based on all the above I suggest a slightly modified version (python3):
while loop calling readline (the iter solution suggested seemed to block forever for me - Python 3, Windows 7)
structured so handling of read data does not need to be duplicated after poll returns not-None
stderr piped into stdout so both outputs are read
Added code to get the exit value of cmd.
Code:
import subprocess
import time

proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT, universal_newlines=True)
while True:
    rd = proc.stdout.readline()
    print(rd, end='')  # and whatever you want to do...
    if not rd:  # EOF
        returncode = proc.poll()
        if returncode is not None:
            break
        time.sleep(0.1)  # cmd closed stdout, but not exited yet

# You may want to check on ReturnCode here
I found a simple solution to a much more complicated problem.
Both stdout and stderr need to be streamed.
Both of them need to be non-blocking: when there is no output and when there is too much output.
Do not want to use Threading or multiprocessing, and I'm also not willing to use pexpect.
This solution uses a gist I found here
import subprocess as sbp
import fcntl
import os

def non_block_read(output):
    fd = output.fileno()
    fl = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, fl | os.O_NONBLOCK)
    try:
        return output.readline()
    except:
        return ""

with sbp.Popen('find / -name fdsfjdlsjf',
               shell=True,
               universal_newlines=True,
               encoding='utf-8',
               bufsize=1,
               stdout=sbp.PIPE,
               stderr=sbp.PIPE) as p:
    while True:
        out = non_block_read(p.stdout)
        err = non_block_read(p.stderr)
        if out:
            print(out, end='')
        if err:
            print('E: ' + err, end='')
        if p.poll() is not None:
            break
It looks like line-buffered output will work for you, in which case something like the following might suit. (Caveat: it's untested.) This will only give the subprocess's stdout in real time. If you want to have both stderr and stdout in real time, you'll have to do something more complex with select.
proc = subprocess.Popen(run_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
while proc.poll() is None:
    line = proc.stdout.readline()
    print line
    log_file.write(line + '\n')

# Might still be data on stdout at this point. Grab any
# remainder.
for line in proc.stdout.read().split('\n'):
    print line
    log_file.write(line + '\n')

# Do whatever you want with proc.stderr here...
Why not set stdout directly to sys.stdout? And if you need to output to a log as well, then you can simply override the write method of f.
import sys
import subprocess

class SuperFile(open.__class__):

    def write(self, data):
        sys.stdout.write(data)
        super(SuperFile, self).write(data)

f = SuperFile("log.txt", "w+")
process = subprocess.Popen(command, stdout=f, stderr=f)
All of the above solutions I tried failed either to separate stderr and stdout output (multiple pipes), or blocked forever when the OS pipe buffer was full, which happens when the command you are running produces output too fast (there is a warning about this in the Python subprocess manual for poll()). The only reliable way I found was through select, but this is a POSIX-only solution:
import subprocess
import sys
import os
import select
from errno import EINTR

# returns command exit status, stdout text, stderr text
# rtoutput: show realtime output while running
def run_script(cmd, rtoutput=0):
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    poller = select.poll()
    poller.register(p.stdout, select.POLLIN)
    poller.register(p.stderr, select.POLLIN)

    coutput = ''
    cerror = ''
    fdhup = {}
    fdhup[p.stdout.fileno()] = 0
    fdhup[p.stderr.fileno()] = 0
    while sum(fdhup.values()) < len(fdhup):
        try:
            r = poller.poll(1)
        except select.error, err:
            if err.args[0] != EINTR:
                raise
            r = []
        for fd, flags in r:
            if flags & (select.POLLIN | select.POLLPRI):
                c = os.read(fd, 1024)
                if rtoutput:
                    sys.stdout.write(c)
                    sys.stdout.flush()
                if fd == p.stderr.fileno():
                    cerror += c
                else:
                    coutput += c
            else:
                fdhup[fd] = 1
    return p.poll(), coutput.strip(), cerror.strip()
None of the Pythonic solutions worked for me.
It turned out that proc.stdout.read() or similar may block forever.
Therefore, I use tee like this:
subprocess.run('./my_long_running_binary 2>&1 | tee -a my_log_file.txt && exit ${PIPESTATUS}', shell=True, check=True, executable='/bin/bash')
This solution is convenient if you are already using shell=True.
${PIPESTATUS} captures the success status of the entire command chain (only available in Bash).
If I omitted the && exit ${PIPESTATUS}, then this would always return zero since tee never fails.
unbuffer might be necessary for printing each line immediately into the terminal, instead of waiting way too long until the "pipe buffer" gets filled.
However, unbuffer swallows the exit status of assert (SIG Abort)...
2>&1 also logs stderr to the file.
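If you do reach for unbuffer (it ships with the expect package), the invocation might look like this; a sketch only, with a made-up binary name, and mind the caveat above about it swallowing the abort status:

import subprocess

subprocess.run(
    'unbuffer ./my_long_running_binary 2>&1 | tee -a my_log_file.txt && exit ${PIPESTATUS}',
    shell=True, check=True, executable='/bin/bash')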
I think that the subprocess.communicate method is a bit misleading: it actually fills the stdout and stderr that you specify in the subprocess.Popen.
Yet, reading from the subprocess.PIPE that you can provide to the subprocess.Popen's stdout and stderr parameters will eventually fill up OS pipe buffers and deadlock your app (especially if you've multiple processes/threads that must use subprocess).
My proposed solution is to provide the stdout and stderr with files - and read the files' content instead of reading from the deadlocking PIPE. These files can be tempfile.NamedTemporaryFile() - which can also be accessed for reading while they're being written into by subprocess.communicate.
Below is a sample usage:
try:
    with ProcessRunner(
        ("python", "task.py"), env=os.environ.copy(), seconds_to_wait=0.01
    ) as process_runner:
        for out in process_runner:
            print(out)
except ProcessError as e:
    print(e.error_message)
    raise
And this is the source code which is ready to be used with as many comments as I could provide to explain what it does:
If you're using python 2, please make sure to first install the latest version of the subprocess32 package from pypi.
import os
import sys
import threading
import time
import tempfile
import logging

if os.name == 'posix' and sys.version_info[0] < 3:
    # Support python 2
    import subprocess32 as subprocess
else:
    # Get latest and greatest from python 3
    import subprocess

logger = logging.getLogger(__name__)


class ProcessError(Exception):
    """Base exception for errors related to running the process"""


class ProcessTimeout(ProcessError):
    """Error that will be raised when the process execution will exceed a timeout"""


class ProcessRunner(object):
    def __init__(self, args, env=None, timeout=None, bufsize=-1, seconds_to_wait=0.25, **kwargs):
        """
        Constructor facade to subprocess.Popen that receives parameters which are more specifically required for the
        Process Runner. This is a class that should be used as a context manager - and that provides an iterator
        for reading captured output from subprocess.communicate in near realtime.

        Example usage:

        try:
            with ProcessRunner(('python', task_file_path), env=os.environ.copy(), seconds_to_wait=0.01) as process_runner:
                for out in process_runner:
                    print(out)
        except ProcessError as e:
            print(e.error_message)
            raise

        :param args: same as subprocess.Popen
        :param env: same as subprocess.Popen
        :param timeout: same as subprocess.communicate
        :param bufsize: same as subprocess.Popen
        :param seconds_to_wait: time to wait between each readline from the temporary file
        :param kwargs: same as subprocess.Popen
        """
        self._seconds_to_wait = seconds_to_wait
        self._process_has_timed_out = False
        self._timeout = timeout
        self._process_done = False
        self._std_file_handle = tempfile.NamedTemporaryFile()
        self._process = subprocess.Popen(args, env=env, bufsize=bufsize,
                                         stdout=self._std_file_handle, stderr=self._std_file_handle, **kwargs)
        self._thread = threading.Thread(target=self._run_process)
        self._thread.daemon = True

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self._thread.join()
        self._std_file_handle.close()

    def __iter__(self):
        # read all output from stdout file that subprocess.communicate fills
        with open(self._std_file_handle.name, 'r') as stdout:
            # while process is alive, keep reading data
            while not self._process_done:
                out = stdout.readline()
                out_without_trailing_whitespaces = out.rstrip()
                if out_without_trailing_whitespaces:
                    # yield stdout data without trailing \n
                    yield out_without_trailing_whitespaces
                else:
                    # if there is nothing to read, then please wait a tiny little bit
                    time.sleep(self._seconds_to_wait)

            # this is a hack: terraform seems to write to buffer after process has finished
            out = stdout.read()
            if out:
                yield out

        if self._process_has_timed_out:
            raise ProcessTimeout('Process has timed out')

        if self._process.returncode != 0:
            raise ProcessError('Process has failed')

    def _run_process(self):
        try:
            # Start gathering information (stdout and stderr) from the opened process
            self._process.communicate(timeout=self._timeout)
            # Graceful termination of the opened process
            self._process.terminate()
        except subprocess.TimeoutExpired:
            self._process_has_timed_out = True
            # Force termination of the opened process
            self._process.kill()

        self._process_done = True

    @property
    def return_code(self):
        return self._process.returncode
Here is a class which I'm using in one of my projects. It redirects output of a subprocess to the log. At first I tried simply overwriting the write-method but that doesn't work as the subprocess will never call it (redirection happens on filedescriptor level). So I'm using my own pipe, similar to how it's done in the subprocess-module. This has the advantage of encapsulating all logging/printing logic in the adapter and you can simply pass instances of the logger to Popen: subprocess.Popen("/path/to/binary", stderr = LogAdapter("foo"))
import logging
import os
import threading

class LogAdapter(threading.Thread):

    def __init__(self, logname, level=logging.INFO):
        super().__init__()
        self.log = logging.getLogger(logname)
        self.readpipe, self.writepipe = os.pipe()

        logFunctions = {
            logging.DEBUG: self.log.debug,
            logging.INFO: self.log.info,
            logging.WARN: self.log.warn,
            logging.ERROR: self.log.warn,
        }

        try:
            self.logFunction = logFunctions[level]
        except KeyError:
            self.logFunction = self.log.info

    def fileno(self):
        # when fileno is called this indicates the subprocess is about to fork => start thread
        self.start()
        return self.writepipe

    def finished(self):
        """If the write-filedescriptor is not closed this thread will
        prevent the whole program from exiting. You can use this method
        to clean up after the subprocess has terminated."""
        os.close(self.writepipe)

    def run(self):
        inputFile = os.fdopen(self.readpipe)

        while True:
            line = inputFile.readline()

            if len(line) == 0:
                # no new data was added
                break

            self.logFunction(line.strip())
If you don't need logging but simply want to use print() you can obviously remove large portions of the code and keep the class shorter. You could also expand it by an __enter__ and __exit__ method and call finished in __exit__ so that you could easily use it as context.
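For instance, the context-manager extension mentioned above could look roughly like this; a sketch building on the LogAdapter class above (it assumes the same imports plus subprocess, and the subclass name is mine):

class LogAdapterCM(LogAdapter):
    """Context-manager variant: __exit__ closes the write end via finished()."""

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.finished()

# usage: the with-block guarantees the parent's write end gets closed
with LogAdapterCM("foo") as log:
    subprocess.Popen("/path/to/binary", stderr=log).wait()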
import os

def execute(cmd, callback):
    for line in iter(os.popen(cmd).readline, ''):
        callback(line[:-1])

execute('ls -a', print)
Had the same problem and worked out a simple and clean solution using process.stdout.read1(), which works perfectly for my needs in Python 3.
Here is a demo using the ping command (requires internet connection):
from subprocess import Popen, PIPE

cmd = "ping 8.8.8.8"
proc = Popen([cmd], shell=True, stdout=PIPE)
while True:
    print(proc.stdout.read1())
Every second or so a new line is printed in the python console as the ping command reports its data in real time.
In my view "live output from subprocess command" means that both stdout and stderr should be live. And stdin should also be delivered to the subprocess.
The fragment below produces live output on stdout and stderr and also captures them as bytes in outcome.{stdout,stderr}.
The trick involves proper use of select and poll.
Works well for me on Python 3.9.
if self.log == 1:
    print(f"** cmnd= {fullCmndStr}")

self.outcome.stdcmnd = fullCmndStr
try:
    process = subprocess.Popen(
        fullCmndStr,
        shell=True,
        encoding='utf8',
        executable="/bin/bash",
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
except OSError:
    self.outcome.error = OSError
else:
    process.stdin.write(stdin)
    process.stdin.close()  # type: ignore

    stdoutStrFile = io.StringIO("")
    stderrStrFile = io.StringIO("")

    pollStdout = select.poll()
    pollStderr = select.poll()

    pollStdout.register(process.stdout, select.POLLIN)
    pollStderr.register(process.stderr, select.POLLIN)

    stdoutEOF = False
    stderrEOF = False

    while True:
        stdoutActivity = pollStdout.poll(0)
        if stdoutActivity:
            c = process.stdout.read(1)
            if c:
                stdoutStrFile.write(c)
                if self.log == 1:
                    sys.stdout.write(c)
            else:
                stdoutEOF = True

        stderrActivity = pollStderr.poll(0)
        if stderrActivity:
            c = process.stderr.read(1)
            if c:
                stderrStrFile.write(c)
                if self.log == 1:
                    sys.stderr.write(c)
            else:
                stderrEOF = True

        if stdoutEOF and stderrEOF:
            break

if self.log == 1:
    print(f"** cmnd={fullCmndStr}")

process.wait()  # type: ignore
self.outcome.stdout = stdoutStrFile.getvalue()
self.outcome.stderr = stderrStrFile.getvalue()
self.outcome.error = process.returncode  # type: ignore
The only way I've found to read a subprocess's output in a streaming fashion (while also capturing it in a variable) in Python, for multiple output streams (i.e. both stdout and stderr), is by passing the subprocess a named temporary file to write to and then opening the same temporary file in a separate reading handle.
Note: this is for Python 3
import io
import subprocess
import sys
import tempfile
import time

stdout_write = tempfile.NamedTemporaryFile()
stdout_read = io.open(stdout_write.name, "r")
stderr_write = tempfile.NamedTemporaryFile()
stderr_read = io.open(stderr_write.name, "r")

stdout_captured = ""
stderr_captured = ""

proc = subprocess.Popen(["command"], stdout=stdout_write, stderr=stderr_write)
while True:
    proc_done: bool = proc.poll() is not None

    while True:
        content = stdout_read.read(1024)
        sys.stdout.write(content)
        stdout_captured += content
        if len(content) < 1024:
            break

    while True:
        content = stderr_read.read(1024)
        sys.stderr.write(content)
        stderr_captured += content
        if len(content) < 1024:
            break

    if proc_done:
        break

    time.sleep(0.1)

stdout_write.close()
stdout_read.close()
stderr_write.close()
stderr_read.close()
However, if you don't need to capture the output, then you can simply pass the sys.stdout and sys.stderr streams from your Python script to the called subprocess, as xaav suggested in his answer:
subprocess.Popen(["command"], stdout=sys.stdout, stderr=sys.stderr)

Getting output from and giving commands to a python subprocess

I am trying to get output from a subprocess and then give commands to that process based on the preceding output. I need to do this a variable number of times, when the program needs further input. (I also need to be able to hide the subprocess command prompt if possible).
I figured this would be an easy task given that I have seen this problem being discussed in posts from 2003 and it is nearly 2012 and it appears to be a pretty common need and really seems like it should be a basic part of any programming language. Apparently I was wrong and somehow almost 9 years later there is still no standard way of accomplishing this task in a stable, non-destructive, platform independent way!
I don't really understand much about file i/o and buffering or threading so I would prefer a solution that is as simple as possible. If there is a module that accomplishes this that is compatible with python 3.x, I would be very willing to download it. I realize that there are multiple questions that ask basically the same thing, but I have yet to find an answer that addresses the simple task that I am trying to accomplish.
Here is the code I have so far based on a variety of sources; however I have absolutely no idea what to do next. All my attempts ended in failure and some managed to use 100% of my CPU (to do basically nothing) and would not quit.
import subprocess
from subprocess import Popen, PIPE

p = Popen(r'C:\postgis_testing\shellcomm.bat', stdin=PIPE, stdout=PIPE,
          stderr=subprocess.STDOUT, shell=True)
stdout, stdin = p.communicate(b'command string')
In case my question is unclear, I am posting the text of a sample batch file that demonstrates a situation in which it is necessary to send multiple commands to the subprocess (if you type an incorrect command string the program loops).
#echo off
:looper
set INPUT=
set /P INPUT=Type the correct command string:
if "%INPUT%" == "command string" (echo you are correct) else (goto looper)
If anyone can help me I would very much appreciate it, and I'm sure many others would as well!
EDIT: here is the functional code using eryksun's code (next post):
import subprocess
import threading
import time
import sys

try:
    import queue
except ImportError:
    import Queue as queue

def read_stdout(stdout, q, p):
    it = iter(lambda: stdout.read(1), b'')
    for c in it:
        q.put(c)
        if stdout.closed:
            break

_encoding = getattr(sys.stdout, 'encoding', 'latin-1')

def get_stdout(q, encoding=_encoding):
    out = []
    while 1:
        try:
            out.append(q.get(timeout=0.2))
        except queue.Empty:
            break
    return b''.join(out).rstrip().decode(encoding)

def printout(q):
    outdata = get_stdout(q)
    if outdata:
        print('Output: %s' % outdata)

if __name__ == '__main__':
    #setup
    p = subprocess.Popen(['shellcomm.bat'], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         bufsize=0, shell=True)  # I put shell=True to hide prompt
    q = queue.Queue()
    encoding = getattr(sys.stdin, 'encoding', 'utf-8')

    #for reading stdout
    t = threading.Thread(target=read_stdout, args=(p.stdout, q, p))
    t.daemon = True
    t.start()

    #command loop
    while p.poll() is None:
        printout(q)
        cmd = input('Input: ')
        cmd = (cmd + '\n').encode(encoding)
        p.stdin.write(cmd)
        time.sleep(0.1)  # I added this to give some time to check for closure (otherwise it doesn't work)

    #tear down
    for n in range(4):
        rc = p.poll()
        if rc is not None:
            break
        time.sleep(0.25)
    else:
        p.terminate()
        rc = p.poll()
        if rc is None:
            rc = 1

    printout(q)
    print('Return Code: %d' % rc)
However when the script is run from a command prompt the following happens:
C:\Users\username>python C:\postgis_testing\shellcomm7.py
Input: sth
Traceback (most recent call last):
File "C:\postgis_testing\shellcomm7.py", line 51, in <module>
p.stdin.write(cmd)
IOError: [Errno 22] Invalid argument
It seems that the program closes out when run from the command prompt. Any ideas?
This demo uses a dedicated thread to read from stdout. If you search around, I'm sure you can find a more complete implementation written up in an object oriented interface. At least I can say this is working for me with your provided batch file in both Python 2.7.2 and 3.2.2.
shellcomm.bat:
#echo off
echo Command Loop Test
echo.
:looper
set INPUT=
set /P INPUT=Type the correct command string:
if "%INPUT%" == "command string" (echo you are correct) else (goto looper)
Here's what I get for output based on the sequence of commands "wrong", "still wrong", and "command string":
Output:
Command Loop Test
Type the correct command string:
Input: wrong
Output:
Type the correct command string:
Input: still wrong
Output:
Type the correct command string:
Input: command string
Output:
you are correct
Return Code: 0
For reading the piped output, readline might work sometimes, but set /P INPUT in the batch file naturally isn't writing a line ending. So instead I used lambda: stdout.read(1) to read a byte at a time (not so efficient, but it works). The reading function puts the data on a queue. The main thread gets the output from the queue after it writes a command. Using a timeout on the get call here makes it wait a small amount of time to ensure the program is waiting for input. Instead you could check the output for prompts to know when the program is expecting input.
All that said, you can't expect a setup like this to work universally because the console program you're trying to interact with might buffer its output when piped. In Unix systems there are some utility commands available that you can insert into a pipe to modify the buffering to be non-buffered, line-buffered, or a given size -- such as stdbuf. There are also ways to trick the program into thinking it's connected to a pty (see pexpect). However, I don't know a way around this problem on Windows if you don't have access to the program's source code to explicitly set the buffering using setvbuf.
import subprocess
import threading
import time
import sys

if sys.version_info.major >= 3:
    import queue
else:
    import Queue as queue
    input = raw_input

def read_stdout(stdout, q):
    it = iter(lambda: stdout.read(1), b'')
    for c in it:
        q.put(c)
        if stdout.closed:
            break

_encoding = getattr(sys.stdout, 'encoding', 'latin-1')

def get_stdout(q, encoding=_encoding):
    out = []
    while 1:
        try:
            out.append(q.get(timeout=0.2))
        except queue.Empty:
            break
    return b''.join(out).rstrip().decode(encoding)

def printout(q):
    outdata = get_stdout(q)
    if outdata:
        print('Output:\n%s' % outdata)

if __name__ == '__main__':
    ARGS = ["shellcomm.bat"]  ### Modify this

    #setup
    p = subprocess.Popen(ARGS, bufsize=0, stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    q = queue.Queue()
    encoding = getattr(sys.stdin, 'encoding', 'utf-8')

    #for reading stdout
    t = threading.Thread(target=read_stdout, args=(p.stdout, q))
    t.daemon = True
    t.start()

    #command loop
    while 1:
        printout(q)
        if p.poll() is not None or p.stdin.closed:
            break
        cmd = input('Input: ')
        cmd = (cmd + '\n').encode(encoding)
        p.stdin.write(cmd)

    #tear down
    for n in range(4):
        rc = p.poll()
        if rc is not None:
            break
        time.sleep(0.25)
    else:
        p.terminate()
        rc = p.poll()
        if rc is None:
            rc = 1

    printout(q)
    print('\nReturn Code: %d' % rc)

Windows: subprocess making new console window, losing stdin/out

I'm using Windows Vista and Python 2.7.2, but answers needn't be in Python.
So I can start and interact with a subprocess's stdin/stdout normally (using Python), for command-line programs such as `dir'.
- however -
the program I now want to call likes to make a new console window for itself on Windows (not curses), with new handles, even when run from a pre-existing cmd.exe window. (Odd, as it's the "remote control" interface of VLC.) Is there any way of either:
getting the handles for the process-made console's stdin/out; or
getting the new shell to run within the old (like invoking bash from within bash)?
Failing that, so that I can hack the subprocesses' code, how would a new console be set up in Windows and in/output transferred?
Edit:
I.e.
>>> p = Popen(args=['vlc','-I','rc'],stdin=PIPE,stdout=PIPE)
# [New console appears with text, asking for commands]
>>> p.stdin.write("quit\r\n")
Traceback:
File "<stdin>", line 1, in <module>
IOError: [Errno 22] Invalid argument
>>> p.stdout.readline()
''
>>> p.stdout.readline()
''
# [...]
But the new console window that comes up doesn't accept keyboard input either.
Whereas normally:
>>> p = Popen(args=['cmd'],stdin=PIPE,stdout=PIPE)
>>> p.stdin.write("dir\r\n")
>>> p.stdin.flush()
>>> p.stdout.readline() #Don't just do this IRL, may block.
'Microsoft Windows [Version...
I haven't gotten the rc interface to work with a piped stdin/stdout on Windows; I get IOError at all attempts to communicate or write directly to stdin. There's an option --rc-fake-tty that lets the rc interface be scripted on Linux, but it's not available in Windows -- at least not in my somewhat dated version of VLC (1.1.4). Using the socket interface, on the other hand, seems to work fine.
The structure assigned to the startupinfo option -- and used by the Win32 CreateProcess function -- can be configured to hide a process window. However, for the VLC rc console, I think it's simpler to use the existing --rc-quiet option. In general, here's how to configure startupinfo to hide a process window:
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
subprocess.Popen(cmd, startupinfo=startupinfo)
Just to be complete -- in case using pipes is failing on your system too -- here's a little demo I cooked up using the --rc-host option to communicate using a socket. It also uses --rc-quiet to hide the console. This just prints the help and quits. I haven't tested anything else. I checked that it works in Python versions 2.7.2 and 3.2.2. (I know you didn't ask for this, but maybe it will be useful to you nonetheless.)
import socket
import subprocess
from select import select

try:
    import winreg
except ImportError:
    import _winreg as winreg

def _get_vlc_path():
    views = [(winreg.HKEY_CURRENT_USER, 0),
             (winreg.HKEY_LOCAL_MACHINE, winreg.KEY_WOW64_64KEY),
             (winreg.HKEY_LOCAL_MACHINE, winreg.KEY_WOW64_32KEY)]
    subkey = r'Software\VideoLAN\VLC'
    access = winreg.KEY_QUERY_VALUE
    for hroot, flag in views:
        try:
            with winreg.OpenKey(hroot, subkey, 0, access | flag) as hkey:
                value, type_id = winreg.QueryValueEx(hkey, None)
                if type_id == winreg.REG_SZ:
                    return value
        except WindowsError:
            pass
    raise SystemExit("Error: VLC not found.")

g_vlc_path = _get_vlc_path()

def send_command(sock, cmd, get_result=False):
    try:
        cmd = (cmd + '\n').encode('ascii')
    except AttributeError:
        cmd += b'\n'
    sent = total = sock.send(cmd)
    while total < len(cmd):
        sent = sock.send(cmd[total:])
        if sent == 0:
            raise socket.error('Socket connection broken.')
        total += sent
    if get_result:
        return receive_result(sock)

def receive_result(sock):
    data = bytearray()
    sock.setblocking(0)
    while select([sock], [], [], 1.0)[0]:
        chunk = sock.recv(1024)
        if chunk == b'':
            raise socket.error('Socket connection broken.')
        data.extend(chunk)
    sock.setblocking(1)
    return data.decode('utf-8')

def main(address, port):
    import time
    rc_host = '{0}:{1}'.format(address, port)
    vlc = subprocess.Popen([g_vlc_path, '-I', 'rc', '--rc-host', rc_host,
                            '--rc-quiet'])
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.connect((address, port))
        help_msg = send_command(sock, 'help', True)
        print(help_msg)
        send_command(sock, 'quit')
    except socket.error as e:
        exit("Error: " + e.args[0])
    finally:
        sock.close()
        time.sleep(0.5)
        if vlc.poll() is None:
            vlc.terminate()

if __name__ == '__main__':
    main('localhost', 12345)
With reference to monitoring the stdout which appears in the newly spawned console window:
Here's another question/answer that solves the problem.
In summary (as answered by Adam M-W):
Suppress the new spawned console by launching vlc in quiet mode --intf=dummy --dummy-quiet or --intf=rc --rc-quiet.
Monitor stdErr of launched process
Note: As for stdin commands for the rc interface, the --rc-host solution is described by eryksun's answer.

Merging a Python script's subprocess' stdout and stderr while keeping them distinguishable

I would like to direct a Python script's subprocess' stdout and stderr into the same file. What I don't know is how to make the lines from the two sources distinguishable? (For example, prefix the lines from stderr with an exclamation mark.)
In my particular case there is no need for live monitoring of the subprocess, the executing Python script can wait for the end of its execution.
tsk = subprocess.Popen(args,stdout=subprocess.PIPE,stderr=subprocess.STDOUT)
subprocess.STDOUT is a special flag that tells subprocess to route all stderr output to stdout, thus combining your two streams.
By the way, select doesn't have a poll() on Windows. subprocess only uses the file handle number, and doesn't call your file output object's write method.
To capture the output, do something like:
logfile = open(logfilename, 'w')

while tsk.poll() is None:
    line = tsk.stdout.readline()
    logfile.write(line)
I found myself having to tackle this problem recently, and it took a while to get something I felt worked correctly in most cases, so here it is! (It also has the nice side effect of processing the output via a Python logger, which I've noticed is another common question here on Stack Overflow.)
Here is the code:
import sys
import logging
import subprocess
from threading import Thread

logging.basicConfig(stream=sys.stdout, level=logging.INFO)

logging.addLevelName(logging.INFO + 2, 'STDERR')
logging.addLevelName(logging.INFO + 1, 'STDOUT')
logger = logging.getLogger('root')

pobj = subprocess.Popen(['python', '-c', 'print 42;bargle'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)

def logstream(stream, loggercb):
    while True:
        out = stream.readline()
        if out:
            loggercb(out.rstrip())
        else:
            break

stdout_thread = Thread(target=logstream,
                       args=(pobj.stdout, lambda s: logger.log(logging.INFO + 1, s)))

stderr_thread = Thread(target=logstream,
                       args=(pobj.stderr, lambda s: logger.log(logging.INFO + 2, s)))

stdout_thread.start()
stderr_thread.start()

while stdout_thread.isAlive() and stderr_thread.isAlive():
    pass
Here is the output:
STDOUT:root:42
STDERR:root:Traceback (most recent call last):
STDERR:root: File "<string>", line 1, in <module>
STDERR:root:NameError: name 'bargle' is not defined
You can replace the subprocess call to do whatever you want, I just chose running python with a command that I knew would print to both stdout and stderr. The key bit is reading stderr and stdout each in a separate thread. Otherwise you may be blocking on reading one while there is data ready to be read on the other.
If you want to interleave to get roughly the same order that you would if you ran the process interactively then you need to do what the shell does and poll stdin/stdout and write in the order that they poll.
Here's some code that does something along the lines of what you want - in this case sending the stdout/stderr to a logger info/error streams.
tsk = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

poll = select.poll()
poll.register(tsk.stdout, select.POLLIN | select.POLLHUP)
poll.register(tsk.stderr, select.POLLIN | select.POLLHUP)
pollc = 2

events = poll.poll()
while pollc > 0 and len(events) > 0:
    for event in events:
        (rfd, event) = event
        if event & select.POLLIN:
            if rfd == tsk.stdout.fileno():
                line = tsk.stdout.readline()
                if len(line) > 0:
                    logger.info(line[:-1])
            if rfd == tsk.stderr.fileno():
                line = tsk.stderr.readline()
                if len(line) > 0:
                    logger.error(line[:-1])
        if event & select.POLLHUP:
            poll.unregister(rfd)
            pollc = pollc - 1
    if pollc > 0:
        events = poll.poll()
tsk.wait()
At the moment all other answers don't handle buffering on the child subprocess' side if the subprocess is not a Python script that accepts -u flag. See "Q: Why not just use a pipe (popen())?" in the pexpect documentation.
To simulate -u flag for some of C stdio-based (FILE*) programs you could try stdbuf.
If you ignore this then your output won't be properly interleaved and might look like:
stderr
stderr
...large block of stdout including parts that are printed before stderr...
You could try it with the following client program, notice the difference with/without -u flag (['stdbuf', '-o', 'L', 'child_program'] also fixes the output):
#!/usr/bin/env python
from __future__ import print_function
import random
import sys
import time
from datetime import datetime

def tprint(msg, file=sys.stdout):
    time.sleep(.1 * random.random())
    print("%s %s" % (datetime.utcnow().strftime('%S.%f'), msg), file=file)

tprint("stdout1 before stderr")
tprint("stdout2 before stderr")
for x in range(5):
    tprint('stderr%d' % x, file=sys.stderr)
tprint("stdout3 after stderr")
On Linux you could use pty to get the same behavior as when the subprocess runs interactively e.g., here's a modified #T.Rojan's answer:
import logging, os, select, subprocess, sys, pty

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

master_fd, slave_fd = pty.openpty()
p = subprocess.Popen(args, stdout=slave_fd, stderr=subprocess.PIPE, close_fds=True)
with os.fdopen(master_fd) as stdout:
    poll = select.poll()
    poll.register(stdout, select.POLLIN)
    poll.register(p.stderr, select.POLLIN | select.POLLHUP)

    def cleanup(_done=[]):
        if _done: return
        _done.append(1)
        poll.unregister(p.stderr)
        p.stderr.close()
        poll.unregister(stdout)
        assert p.poll() is not None

    read_write = {stdout.fileno(): (stdout.readline, logger.info),
                  p.stderr.fileno(): (p.stderr.readline, logger.error)}
    while True:
        events = poll.poll(40)  # poll with a small timeout to avoid both
                                # blocking forever and a busy loop
        if not events and p.poll() is not None:
            # no IO events and the subprocess exited
            cleanup()
            break

        for fd, event in events:
            if event & select.POLLIN:  # there is something to read
                read, write = read_write[fd]
                line = read()
                if line:
                    write(line.rstrip())
            elif event & select.POLLHUP:  # free resources if stderr hung up
                cleanup()
            else:  # something unexpected happened
                assert 0
sys.exit(p.wait())  # return child's exit code
It assumes that stderr is always unbuffered/line-buffered and stdout is line-buffered in an interactive mode. Only full lines are read. The program might block if there are non-terminated lines in the output.
I suggest you write your own handlers, something like (not tested, I hope you catch the idea):
class my_buffer(object):
    def __init__(self, fileobject, prefix):
        self._fileobject = fileobject
        self.prefix = prefix

    def write(self, text):
        return self._fileobject.write('%s %s' % (self.prefix, text))

    # delegate other methods to fileobject if necessary

log_file = open('log.log', 'w')
my_out = my_buffer(log_file, 'OK:')
my_err = my_buffer(log_file, '!!!ERROR:')

p = subprocess.Popen(command, stdout=my_out, stderr=my_err, shell=True)
You may write the stdout/err to a file after the command execution.
In the example below I use pickling, so I am sure I will be able to read the result without any particular parsing to differentiate between stdout and stderr, and at some point I could dump the exit code and the command itself as well.
import subprocess
import cPickle

command = 'ls -altrh'
outfile = 'log.errout'

pipe = subprocess.Popen(command, stdout = subprocess.PIPE,
                        stderr = subprocess.PIPE, shell = True)
stdout, stderr = pipe.communicate()

f = open(outfile, 'w')
cPickle.dump({'out': stdout, 'err': stderr}, f)
f.close()
