I would like to start off a Python process and log subprocess error messages to the logging object of the parent script. I would ideally like to unify the two log streams into one file. Can I somehow access the output stream of the logging class? One solution I know of is to use a separate file for the subprocess output (procLog below). As described in the answer below, I could read from proc.stdout and proc.stderr and log that, but I'd have duplicate logging headers. I wonder if there is a way to pass the file descriptor underlying the logging class directly to the subprocess?
logging.basicConfig(filename="test.log",level=logging.DEBUG)
logging.info("Started")
procLog = open(os.path.expanduser("subproc.log"), 'w')
proc = subprocess.Popen(cmdStr, shell=True, stderr=procLog, stdout=procLog)
proc.wait()
procLog.flush()
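As a minimal sketch of the idea in the question, the stream underlying the FileHandler that basicConfig installs can be handed straight to Popen. This assumes exactly one FileHandler on the root logger; the answers below explain why the handler's file position can then get out of sync, so this is a starting point, not a complete solution.
import logging
import subprocess

logging.basicConfig(filename="test.log", level=logging.DEBUG)
stream = logging.getLogger().handlers[0].stream  # file object behind the FileHandler
stream.flush()                                   # push out any buffered records first
proc = subprocess.Popen(cmdStr, shell=True, stdout=stream, stderr=stream)
proc.wait()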
Based on Adam Rosenfield's code, you could use select.select to block until there is output to be read from proc.stdout or proc.stderr, read and log that output, then repeat until the process is done.
Note that the following writes to /tmp/test.log and runs the command ls -laR /tmp. Change to suit your needs.
(PS: Typically /tmp contains directories which cannot be read by normal users, so running ls -laR /tmp produces output on both stdout and stderr. The code below correctly interleaves those two streams as they are produced.)
import logging
import subprocess
import shlex
import select
import fcntl
import os
import errno
import contextlib
logger = logging.getLogger(__name__)
def make_async(fd):
'''add the O_NONBLOCK flag to a file descriptor'''
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)
def read_async(fd):
'''read some data from a file descriptor, ignoring EAGAIN errors'''
try:
return fd.read()
    except IOError as e:
        if e.errno != errno.EAGAIN:
            raise
else:
return ''
def log_fds(fds):
for fd in fds:
out = read_async(fd)
if out:
logger.info(out)
@contextlib.contextmanager
def plain_logger():
root = logging.getLogger()
hdlr = root.handlers[0]
formatter_orig = hdlr.formatter
hdlr.setFormatter(logging.Formatter('%(message)s'))
yield
hdlr.setFormatter(formatter_orig)
def main():
# fmt = '%(name)-12s: %(levelname)-8s %(message)s'
    logging.basicConfig(filename='/tmp/test.log', filemode='w',
                        level=logging.DEBUG)
logger.info("Started")
cmdStr = 'ls -laR /tmp'
with plain_logger():
proc = subprocess.Popen(shlex.split(cmdStr),
stdout = subprocess.PIPE, stderr = subprocess.PIPE)
# without `make_async`, `fd.read` in `read_async` blocks.
make_async(proc.stdout)
make_async(proc.stderr)
while True:
# Wait for data to become available
rlist, wlist, xlist = select.select([proc.stdout, proc.stderr], [], [])
log_fds(rlist)
if proc.poll() is not None:
# Corner case: check if more output was created
# between the last call to read_async and now
log_fds([proc.stdout, proc.stderr])
break
logger.info("Done")
if __name__ == '__main__':
main()
Edit:
You can redirect stdout and stderr to logfile = open('/tmp/test.log', 'a').
A small difficulty with doing so, however, is that any logger handler that is also writing to /tmp/test.log will not be aware of what the subprocess is writing, and so the log file may get garbled.
If you do not make logging calls while the subprocess is doing its business, then the only problem is that the logger handler has the wrong position in the file after the subprocess has finished. That can be fixed by calling
handler.stream.seek(0, 2)
so the handler will resume writing at the end of the file.
import logging
import subprocess
import contextlib
import shlex
logger = logging.getLogger(__name__)
@contextlib.contextmanager
def suspended_logger():
root = logging.getLogger()
handler = root.handlers[0]
yield
handler.stream.seek(0, 2)
def main():
logging.basicConfig(filename = '/tmp/test.log', filemode = 'w',
level = logging.DEBUG)
logger.info("Started")
with suspended_logger():
        cmdStr = 'test2.py'  # output is redirected via the stdout/stderr arguments below
logfile = open('/tmp/test.log', 'a')
proc = subprocess.Popen(shlex.split(cmdStr),
stdout = logfile,
stderr = logfile)
proc.communicate()
logger.info("Done")
if __name__ == '__main__':
main()
Related
import serial
import string
from time import sleep
from subprocess import call, check_call, CalledProcessError
import logging
import random
import signal
import os
LOG_FILENAME = "/tmp/dfu.log"
LOG_FD = open(LOG_FILENAME, "w")
logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
dfu_image_list = ['/tmp/sample.dfu']
while True:
try:
for test_file in dfu_image_list:
logging.info("\n==================\nbegin dfu download test")
check_call('sudo dfu-util -vD /tmp/sample.dfu', shell=True,
stdout=LOG_FD, stderr=LOG_FD)
logging.info("Download completed successfully!")
sleep(5)
except CalledProcessError as e:
msg = "dfu-util failed with return code :%s \n\nMessage:%s" %
(e.returncode, e.message)
logging.warning(msg)
logging.warning("USB device likely needs time to re-enumerate,
waiting 10 seconds before restarting")
sleep(10)
except OSError:
logging.error("dfu-util executable not found!")
exit(1)
Running the above Python script writes logs to /tmp/dfu.log, but the only entries that appear in the file come from check_call. The expected behavior is to see the main thread's logs, such as
logging.info("\n==================\nbegin dfu download test")
followed by the output of check_call, followed by
logging.info("Download completed successfully!")
Instead, only the check_call output is reflected; the main thread's messages like "begin dfu download test" and "Download completed successfully!" never make it into the log file.
Remember that the logging module does some buffering, so a call like logging.info() doesn't necessarily mean the record has already been written to the file.
Also, you're opening the same file twice and writing to it from two different places. That's usually a bad idea: even though Linux is preemptive and you flush the log, the two writers keep independent file positions, so it may appear to work but remains fragile.
What about using communicate() with the process instead of check_call(), and then logging stdout and stderr however you wish? For example:
for image in dfu_image_list:
logging.info('=' * 10)
logging.info('Begin dfu download using {}'.format(image))
process = Popen(['sudo', 'dfu-util', '-vD', image], stdout=PIPE, stderr=PIPE)
stdout, stderr = process.communicate()
logging.info(stdout)
if stderr:
logging.warning(stderr)
logging.info('Download completed successfully!')
By the way, your loop logic is flawed, since any error restarts the whole loop over dfu_image_list from the beginning.
I think this is more what you want to do:
from sys import exit
from time import sleep
from subprocess import Popen, PIPE
import logging
LOG_FILENAME = "/tmp/dfu.log"
ATTEMPTS = 3
logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
def download(where):
logging.info('=' * 10)
logging.info('Begin dfu download to {}'.format(where))
for attempt in range(ATTEMPTS):
logging.info('Attempt #{}...'.format(attempt + 1))
        try:
            process = Popen(
                ['sudo', 'dfu-util', '-vD', where],
                stdout=PIPE, stderr=PIPE
            )
        except OSError:
            logging.critical('dfu-util executable not found!')
            return False
        stdout, stderr = process.communicate()
        logging.info(stdout)
        if stderr:
            logging.warning(stderr)
        # Note: Popen + communicate() never raises CalledProcessError;
        # inspect the return code instead.
        if process.returncode != 0:
            logging.warning(
                'dfu-util failed with return code {}'.format(process.returncode)
            )
            logging.warning(
                'USB device likely needs time to re-enumerate, '
                'waiting 10 seconds before restarting...'
            )
            sleep(10)
            continue
        logging.info('Download completed successfully!')
        return True
if __name__ == '__main__':
if not download('/tmp/sample.dfu'):
exit(1)
This is what I am trying to achieve
def fun():
runner = InteractiveConsole()
while(True):
code = raw_input()
code.rstrip('\n')
# I want to achieve the following
# By default the output and error of the 'code' is sent to STDOUT and STDERR
# I want to obtain the output in two variables out and err
out,err = runner.push(code)
All the solutions I have looked at so far use pipes and issue a separate script-execution command (which is not possible in my case). Is there any other way I can achieve this?
import StringIO, sys
from code import InteractiveConsole
from contextlib import contextmanager
@contextmanager
def redirected(out=sys.stdout, err=sys.stderr):
saved = sys.stdout, sys.stderr
sys.stdout, sys.stderr = out, err
try:
yield
finally:
sys.stdout, sys.stderr = saved
def fun():
runner = InteractiveConsole()
while True:
out = StringIO.StringIO()
err = StringIO.StringIO()
with redirected(out=out, err=err):
out.flush()
err.flush()
code = raw_input()
code.rstrip('\n')
# I want to achieve the following
# By default the output and error of the 'code' is sent to STDOUT and STDERR
# I want to obtain the output in two variables out and err
runner.push(code)
output = out.getvalue()
print output
In newer versions of Python, this context manager is built in:
with contextlib.redirect_stdout(out), contextlib.redirect_stderr(err):
...
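For example, a self-contained sketch using those built-ins (Python 3.4+; the InteractiveConsole loop above works the same way):
import contextlib
import io
from code import InteractiveConsole

runner = InteractiveConsole()
out, err = io.StringIO(), io.StringIO()
with contextlib.redirect_stdout(out), contextlib.redirect_stderr(err):
    runner.push('print("hi")')   # console output lands in the buffers
print(out.getvalue())            # 'hi\n' was captured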
InteractiveConsole doesn't expose any API for setting a file-like object for output or errors, so you'll need to monkey patch sys.stdout and sys.stderr. As always with monkey patching, be mindful of what the side effects might be. In this case, you'd be replacing the global stdout and stderr file objects with your own implementation, which might swallow up unintended output as well (especially if you're using any threads).
It would be slightly safer to 'tee' the output with something like:
import sys
import StringIO
from code import InteractiveConsole
class TeeBuffer(object):
def __init__(self, real):
self.real = real
self.buf = StringIO.StringIO()
def write(self, val):
self.real.write(val)
self.buf.write(val)
def fun():
runner = InteractiveConsole()
out = TeeBuffer(sys.stdout)
err = TeeBuffer(sys.stderr)
sys.stdout = out
sys.stderr = err
    while True:
        code = raw_input()
        code.rstrip('\n')
        runner.push(code)  # push() returns a bool, not the captured streams
outstr = out.buf.getvalue()
errstr = err.buf.getvalue()
sys.stdout = out.real
sys.stderr = err.real
Then your user still sees the output, without you having to worry about printing it back out to the correct place on each run.
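A quick standalone check of the TeeBuffer idea (Python 2, matching the class above):
import sys
import StringIO

t = TeeBuffer(sys.stdout)
sys.stdout = t
print "hello"            # still reaches the real stdout, via TeeBuffer.write
sys.stdout = t.real      # restore before inspecting the capture
print t.buf.getvalue()   # the same text was captured: 'hello\n'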
You can use a context manager to redirect stdout temporarily:
import sys
from contextlib import contextmanager

@contextmanager
def stdout_redirected(new_stdout):
save_stdout = sys.stdout
sys.stdout = new_stdout
try:
yield None
finally:
sys.stdout = save_stdout
Used as follows:
with open(filename, "w") as f:
with stdout_redirected(f):
print "Hello world"
This isn't thread-safe, of course, but neither is doing this same dance manually. In single-threaded programs (for example, in scripts) it is a popular way of doing things.
It's easy to tweak this to redirect both stdout and stderr to cStringIOs:
import cStringIO

@contextmanager
def out_redirected():
save_stdout = sys.stdout
save_stderr = sys.stderr
    sys.stdout = cStringIO.StringIO()
    sys.stderr = cStringIO.StringIO()
try:
yield sys.stdout, sys.stderr
finally:
sys.stdout = save_stdout
sys.stderr = save_stderr
You'd use this as
with out_redirected() as (out, err):
runner.push(code)
print out.getvalue()
I have a large project consisting of a sufficiently large number of modules, each printing something to the standard output. As the project has grown, the large number of print statements writing to stdout has made the program considerably slower.
So I now want to decide at runtime whether or not to print anything to stdout. I cannot make changes in the modules, as there are plenty of them. (I know I can redirect stdout to a file, but even this is considerably slow.)
So my question is: how do I redirect stdout to nothing, i.e., how do I make print statements do nothing?
# I want to do something like this.
sys.stdout = None # this obviously will give an error, as a NoneType object does not have a write method.
Currently the only idea I have is to make a class which has a write method (which does nothing) and redirect the stdout to an instance of this class.
class DontPrint(object):
def write(*args): pass
dp = DontPrint()
sys.stdout = dp
Is there an inbuilt mechanism in python for this? Or is there something better than this?
Cross-platform:
import os
import sys
f = open(os.devnull, 'w')
sys.stdout = f
On Windows:
f = open('nul', 'w')
sys.stdout = f
On Linux:
f = open('/dev/null', 'w')
sys.stdout = f
A nice way to do this is to create a small context manager that you wrap your prints in. You then just use it in a with-statement to silence all output.
Python 2:
import os
import sys
from contextlib import contextmanager
@contextmanager
def silence_stdout():
old_target = sys.stdout
try:
with open(os.devnull, "w") as new_target:
sys.stdout = new_target
yield new_target
finally:
sys.stdout = old_target
with silence_stdout():
print("will not print")
print("this will print")
Python 3.4+:
Python 3.4 has a context manager like this built in, so you can simply use contextlib like this:
import contextlib
with contextlib.redirect_stdout(None):
print("will not print")
print("this will print")
If the code you want to suppress writes directly to sys.stdout, using None as the redirect target won't work. Instead you can use:
import contextlib
import sys
import os
with contextlib.redirect_stdout(open(os.devnull, 'w')):
sys.stdout.write("will not print")
sys.stdout.write("this will print")
If your code writes to stderr instead of stdout, you can use contextlib.redirect_stderr instead of redirect_stdout.
Running this code only prints the second line of output, not the first:
$ python test.py
this will print
This works cross-platform (Windows + Linux + Mac OSX), and is cleaner than the ones other answers imho.
If you're on Python 3.4 or higher, there's a simple and safe solution using the standard library:
import contextlib
with contextlib.redirect_stdout(None):
print("This won't print!")
At least on my system, it appears that writing to os.devnull is about 5x faster than writing to a DontPrint-style class. That is, this benchmark:
#!/usr/bin/python
import os
import sys
import datetime
ITER = 10000000
def printlots(out, it, st="abcdefghijklmnopqrstuvwxyz1234567890"):
temp = sys.stdout
sys.stdout = out
i = 0
start_t = datetime.datetime.now()
while i < it:
print st
i = i+1
end_t = datetime.datetime.now()
sys.stdout = temp
print out, "\n took", end_t - start_t, "for", it, "iterations"
class devnull():
def write(*args):
pass
printlots(open(os.devnull, 'wb'), ITER)
printlots(devnull(), ITER)
gave the following output:
<open file '/dev/null', mode 'wb' at 0x7f2b747044b0>
took 0:00:02.074853 for 10000000 iterations
<__main__.devnull instance at 0x7f2b746bae18>
took 0:00:09.933056 for 10000000 iterations
If you're in a Unix environment (Linux included), you can redirect output to /dev/null:
python myprogram.py > /dev/null
And for Windows:
python myprogram.py > nul
You can just mock it.
import mock
import sys

sys.stdout = mock.MagicMock()
Your class will work just fine (with the exception of the write() method name -- it needs to be called write(), lowercase). Just make sure you save a copy of sys.stdout in another variable.
If you're on a *NIX, you can do sys.stdout = open('/dev/null'), but this is less portable than rolling your own class.
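A minimal sketch of that save-and-restore pattern, reusing the DontPrint class from the question:
import sys

saved_stdout = sys.stdout     # keep a reference so output can be restored
sys.stdout = DontPrint()
print("this goes nowhere")
sys.stdout = saved_stdout     # printing works normally again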
How about this:
from contextlib import ExitStack, redirect_stdout
import os
with ExitStack() as stack:
if should_hide_output():
null_stream = open(os.devnull, "w")
stack.enter_context(null_stream)
stack.enter_context(redirect_stdout(null_stream))
noisy_function()
This uses the features in the contextlib module to hide the output of whatever command you are trying to run, depending on the result of should_hide_output(), and then restores the output behavior after that function is done running.
If you want to hide standard error output, then import redirect_stderr from contextlib and add a line saying stack.enter_context(redirect_stderr(null_stream)).
The main downside is that this only works in Python 3.4 and later versions.
sys.stdout = None
That is OK for the print() case, but it can cause an error if you call any method of sys.stdout directly, e.g. sys.stdout.write().
There is a note in the docs:
Under some conditions stdin, stdout and stderr as well as the original
values __stdin__, __stdout__ and __stderr__ can be None. It is usually
the case for Windows GUI apps that aren't connected to a console and
Python apps started with pythonw.
Supplement to iFreilicht's answer - it works for both Python 2 and 3.
import sys
class NonWritable:
def write(self, *args, **kwargs):
pass
class StdoutIgnore:
def __enter__(self):
self.stdout_saved = sys.stdout
sys.stdout = NonWritable()
return self
def __exit__(self, *args):
sys.stdout = self.stdout_saved
with StdoutIgnore():
print("This won't print!")
If you don't want to deal with resource allocation or rolling your own class, you may want to use TextIO from Python's typing module. It has all the required methods stubbed out by default.
import sys
from typing import TextIO
sys.stdout = TextIO()
There are a number of good answers in this thread, but here is my Python 3 answer (for cases where sys.stdout.fileno() isn't supported anymore):
import os
import sys
oldstdout = os.dup(1)
oldstderr = os.dup(2)
oldsysstdout = sys.stdout
oldsysstderr = sys.stderr
# Cancel all stdout outputs (will be lost) - optionally also cancel stderr
def cancel_stdout(stderr=False):
sys.stdout.flush()
devnull = open('/dev/null', 'w')
os.dup2(devnull.fileno(), 1)
sys.stdout = devnull
if stderr:
os.dup2(devnull.fileno(), 2)
sys.stderr = devnull
# Redirect all stdout outputs to a file - optionally also redirect stderr
def reroute_stdout(filepath, stderr=False):
sys.stdout.flush()
file = open(filepath, 'w')
os.dup2(file.fileno(), 1)
sys.stdout = file
if stderr:
os.dup2(file.fileno(), 2)
sys.stderr = file
# Restores stdout to default - and stderr
def restore_stdout():
sys.stdout.flush()
sys.stdout.close()
os.dup2(oldstdout, 1)
os.dup2(oldstderr, 2)
sys.stdout = oldsysstdout
sys.stderr = oldsysstderr
To use it:
Cancel all stdout and stderr outputs with:
cancel_stdout(stderr=True)
Route all stdout (but not stderr) to a file:
reroute_stdout('output.txt')
To restore stdout and stderr:
restore_stdout()
Why don't you try this?
sys.stdout.close()
sys.stderr.close()
I will add an example to the numerous answers here:
import argparse
import contextlib
class NonWritable:
def write(self, *args, **kwargs):
pass
parser = argparse.ArgumentParser(description='my program')
parser.add_argument("-p", "--param", help="my parameter", type=str, required=True)
#with contextlib.redirect_stdout(None): # No effect as `argparse` will output to `stderr`
#with contextlib.redirect_stderr(None): # AttributeError: 'NoneType' object has no attribute 'write'
with contextlib.redirect_stderr(NonWritable()): # this works!
args = parser.parse_args()
The normal output would be:
>python TEST.py
usage: TEST.py [-h] -p PARAM
TEST.py: error: the following arguments are required: -p/--param
I use this. Redirect stdout to a string, which you subsequently ignore. I use a context manager to save and restore the original setting for stdout.
from io import StringIO
...
with StringIO() as out:
with stdout_redirected(out):
# Do your thing
where stdout_redirected is defined as:
import sys
from contextlib import contextmanager

@contextmanager
def stdout_redirected(new_stdout):
save_stdout = sys.stdout
sys.stdout = new_stdout
try:
yield None
finally:
sys.stdout = save_stdout
I've been writing a small Python script that executes some shell commands using the subprocess module and a helper function:
import datetime
import subprocess as sp
import sys
def run(command, description):
"""Runs a command in a formatted manner. Returns its return code."""
start=datetime.datetime.now()
sys.stderr.write('%-65s' % description)
s=sp.Popen(command, shell=True, stderr=sp.PIPE, stdout=sp.PIPE)
out,err=s.communicate()
end=datetime.datetime.now()
duration=end-start
status='Done' if s.returncode==0 else 'Failed'
print '%s (%d seconds)' % (status, duration.seconds)
The following lines read the standard output and error:
s=sp.Popen(command, shell=True, stderr=sp.PIPE, stdout=sp.PIPE)
out,err=s.communicate()
As you can see, stdout and stderr are not used. Suppose that I want to write the output and error messages to a log file, in a formatted way, e.g.:
[STDOUT: 2011-01-17 14:53:55] <message>
[STDERR: 2011-01-17 14:53:56] <message>
My question is, what's the most Pythonic way to do it? I thought of three options:
Inherit the file object and override the write method.
Use a Delegate class which implements write.
Connect to the PIPE itself in some way.
UPDATE : reference test script
I'm checking the results with this script, saved as test.py:
#!/usr/bin/python
import sys
sys.stdout.write('OUT\n')
sys.stdout.flush()
sys.stderr.write('ERR\n')
sys.stderr.flush()
Any ideas?
1 and 2 are reasonable solutions, but overriding write() won't be enough.
The problem is that Popen needs file handles to attach to the process, so plain Python file-like objects don't work; the handles have to be OS-level. To solve that, you need a Python object that carries an OS-level file handle. The only way I can think of is to use pipes, so you have an OS-level file descriptor to write to. But then you need another thread that polls that pipe for things to read, so it can log them. (So this is more strictly an implementation of 2, as it delegates to logging.)
Said and done:
import io
import logging
import os
import select
import subprocess
import time
import threading
LOG_FILENAME = 'output.log'
logging.basicConfig(filename=LOG_FILENAME,level=logging.DEBUG)
class StreamLogger(io.IOBase):
def __init__(self, level):
self.level = level
self.pipe = os.pipe()
self.thread = threading.Thread(target=self._flusher)
self.thread.start()
def _flusher(self):
self._run = True
buf = b''
while self._run:
for fh in select.select([self.pipe[0]], [], [], 0)[0]:
buf += os.read(fh, 1024)
while b'\n' in buf:
data, buf = buf.split(b'\n', 1)
self.write(data.decode())
time.sleep(1)
self._run = None
def write(self, data):
return logging.log(self.level, data)
def fileno(self):
return self.pipe[1]
def close(self):
if self._run:
self._run = False
while self._run is not None:
time.sleep(1)
os.close(self.pipe[0])
os.close(self.pipe[1])
So that class starts an OS-level pipe that Popen can attach the subprocess's stdout/stderr to. It also starts a thread that polls the other end of that pipe once a second for things to log, which it then logs with the logging module.
Possibly this class should implement more things for completeness, but it works in this case anyway.
Example code:
with StreamLogger(logging.INFO) as out:
with StreamLogger(logging.ERROR) as err:
subprocess.Popen("ls", stdout=out, stderr=err, shell=True)
output.log ends up like so:
INFO:root:output.log
INFO:root:streamlogger.py
INFO:root:and
INFO:root:so
INFO:root:on
Tested with Python 2.6, 2.7 and 3.1.
I would think any implementation of 1 and 3 would need to use similar techniques. It is a bit involved, but unless you can make the Popen command log correctly itself, I don't have a better idea.
I would suggest option 3, with the logging standard library package. In this case I'd say the other 2 were overkill.
1 and 2 won't work. Here's an implementation of the principle:
import subprocess
import time
FileClass = open('tmptmp123123123.tmp', 'w').__class__
class WrappedFile(FileClass):
TIMETPL = "%Y-%m-%d %H:%M:%S"
TEMPLATE = "[%s: %s] "
def __init__(self, name, mode='r', buffering=None, title=None):
self.title = title or name
if buffering is None:
super(WrappedFile, self).__init__(name, mode)
else:
super(WrappedFile, self).__init__(name, mode, buffering)
def write(self, s):
stamp = time.strftime(self.TIMETPL)
if not s:
return
# Add a line with timestamp per line to be written
s = s.split('\n')
spre = self.TEMPLATE % (self.title, stamp)
s = "\n".join(["%s %s" % (spre, line) for line in s]) + "\n"
super(WrappedFile, self).write(s)
The reason it doesn't work is that Popen never calls stdout.write. A wrapped file works fine when we call its write method ourselves, and it will even be written to when passed to Popen, but the writing happens at a lower layer, through the OS-level file descriptor, skipping the Python write method.
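A small demonstration of that lower layer: subprocess only asks the passed object for its OS-level descriptor via fileno(), so the Python-level write is bypassed entirely (FilenoOnly is a hypothetical wrapper for illustration):
import subprocess

class FilenoOnly(object):
    def __init__(self, f):
        self.f = f
    def fileno(self):
        # Popen only uses this; the child writes straight to the descriptor
        return self.f.fileno()
    def write(self, s):
        raise AssertionError("never called by the subprocess")

with open('out.txt', 'w') as f:
    subprocess.call(['echo', 'hi'], stdout=FilenoOnly(f))  # no AssertionError raised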
This simple solution worked for me:
import sys
import datetime
import tempfile
import subprocess as sp
def run(command, description):
"""Runs a command in a formatted manner. Returns its return code."""
with tempfile.SpooledTemporaryFile(8*1024) as so:
print >> sys.stderr, '%-65s' % description
start=datetime.datetime.now()
retcode = sp.call(command, shell=True, stderr=sp.STDOUT, stdout=so)
end=datetime.datetime.now()
so.seek(0)
for line in so.readlines():
print >> sys.stderr,'logging this:', line.rstrip()
duration=end-start
status='Done' if retcode == 0 else 'Failed'
print >> sys.stderr, '%s (%d seconds)' % (status, duration.seconds)
REF_SCRIPT = r"""#!/usr/bin/python
import sys
sys.stdout.write('OUT\n')
sys.stdout.flush()
sys.stderr.write('ERR\n')
sys.stderr.flush()
"""
SCRIPT_NAME = 'refscript.py'
if __name__ == '__main__':
with open(SCRIPT_NAME, 'w') as script:
script.write(REF_SCRIPT)
run('python ' + SCRIPT_NAME, 'Reference script')
This uses Adam Rosenfield's make_async and read_async. Whereas my original answer used select.epoll and was thus Linux-only, it now uses select.select, which should work under Unix or Windows.
This logs output from the subprocess to /tmp/test.log as it occurs:
import logging
import subprocess
import shlex
import select
import fcntl
import os
import errno
def make_async(fd):
# https://stackoverflow.com/a/7730201/190597
'''add the O_NONBLOCK flag to a file descriptor'''
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)
def read_async(fd):
# https://stackoverflow.com/a/7730201/190597
'''read some data from a file descriptor, ignoring EAGAIN errors'''
try:
return fd.read()
    except IOError as e:
        if e.errno != errno.EAGAIN:
            raise
else:
return ''
def log_process(proc,stdout_logger,stderr_logger):
loggers = { proc.stdout: stdout_logger, proc.stderr: stderr_logger }
def log_fds(fds):
for fd in fds:
out = read_async(fd)
if out.strip():
loggers[fd].info(out)
make_async(proc.stdout)
make_async(proc.stderr)
while True:
# Wait for data to become available
rlist, wlist, xlist = select.select([proc.stdout, proc.stderr], [], [])
log_fds(rlist)
if proc.poll() is not None:
# Corner case: check if more output was created
# between the last call to read_async and now
log_fds([proc.stdout, proc.stderr])
break
if __name__=='__main__':
formatter = logging.Formatter('[%(name)s: %(asctime)s] %(message)s')
handler = logging.FileHandler('/tmp/test.log','w')
handler.setFormatter(formatter)
stdout_logger=logging.getLogger('STDOUT')
stdout_logger.setLevel(logging.DEBUG)
stdout_logger.addHandler(handler)
stderr_logger=logging.getLogger('STDERR')
stderr_logger.setLevel(logging.DEBUG)
stderr_logger.addHandler(handler)
proc = subprocess.Popen(shlex.split('ls -laR /tmp'),
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
log_process(proc,stdout_logger,stderr_logger)
I'm looking for a Python solution that will allow me to save the output of a command in a file without hiding it from the console.
FYI: I'm asking about tee (as in the Unix command-line utility) and not the function with the same name from the Python itertools module.
Details
Python solution (not calling tee, it is not available under Windows)
I do not need to provide any input to stdin for called process
I have no control over the called program. All I know is that it will output something to stdout and stderr and return with an exit code.
To work when calling external programs (subprocess)
To work for both stderr and stdout
Being able to differentiate between stdout and stderr, because I may want to display only one of them on the console, or I could try to output stderr using a different color - this means that stderr = subprocess.STDOUT will not work.
Live output (progressive) - the process can run for a long time, and I'm not able to wait for it to finish.
Python 3 compatible code (important)
References
Here are some incomplete solutions I found so far:
http://devlishgenius.blogspot.com/2008/10/logging-in-real-time-in-python.html (mkfifo works only on Unix)
http://blog.kagesenshi.org/2008/02/teeing-python-subprocesspopen-output.html (doesn't work at all)
Diagram http://blog.i18n.ro/wp-content/uploads/2010/06/Drawing_tee_py.png
Current code (second try)
#!/usr/bin/python
from __future__ import print_function
import sys, os, time, subprocess, io, threading
cmd = "python -E test_output.py"
from threading import Thread
class StreamThread(Thread):
    def __init__(self, buffer):
        Thread.__init__(self)
        self.buffer = buffer

    def run(self):
        while True:
            line = self.buffer.readline()
            print(line, end="")
            sys.stdout.flush()
            if line == '':
                break
proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdoutThread = StreamThread(io.TextIOWrapper(proc.stdout))
stderrThread = StreamThread(io.TextIOWrapper(proc.stderr))
stdoutThread.start()
stderrThread.start()
proc.communicate()
stdoutThread.join()
stderrThread.join()
print("--done--")
#### test_output.py ####
#!/usr/bin/python
from __future__ import print_function
import sys, os, time
for i in range(0, 10):
if i%2:
print("stderr %s" % i, file=sys.stderr)
else:
print("stdout %s" % i, file=sys.stdout)
time.sleep(0.1)
Real output
stderr 1
stdout 0
stderr 3
stdout 2
stderr 5
stdout 4
stderr 7
stdout 6
stderr 9
stdout 8
--done--
The expected output was to have the lines ordered. Note that modifying the Popen call to use only one PIPE is not allowed, because in real life I will want to do different things with stderr and stdout.
Also, even in the second case I was not able to obtain real-time output; in fact, all the results were received only after the process finished. By default, Popen should use no buffering (bufsize=0).
I see that this is a rather old post but just in case someone is still searching for a way to do this:
proc = subprocess.Popen(["ping", "localhost"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
with open("logfile.txt", "w") as log_file:
while proc.poll() is None:
line = proc.stderr.readline()
if line:
print "err: " + line.strip()
log_file.write(line)
line = proc.stdout.readline()
if line:
print "out: " + line.strip()
log_file.write(line)
If requiring Python 3.6 isn't an issue, there is now a way of doing this using asyncio. This method allows you to capture stdout and stderr separately but still have both stream to the tty without using threads. Here's a rough outline:
import asyncio
import os
import sys

class RunOutput:
def __init__(self, returncode, stdout, stderr):
self.returncode = returncode
self.stdout = stdout
self.stderr = stderr
async def _read_stream(stream, callback):
while True:
line = await stream.readline()
if line:
callback(line)
else:
break
async def _stream_subprocess(cmd, stdin=None, quiet=False, echo=False) -> RunOutput:
    if isWindows():  # isWindows() is the author's helper, e.g. sys.platform == "win32"
platform_settings = {"env": os.environ}
else:
platform_settings = {"executable": "/bin/bash"}
if echo:
print(cmd)
p = await asyncio.create_subprocess_shell(
cmd,
stdin=stdin,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
**platform_settings
)
out = []
err = []
def tee(line, sink, pipe, label=""):
line = line.decode("utf-8").rstrip()
sink.append(line)
if not quiet:
print(label, line, file=pipe)
await asyncio.wait(
[
_read_stream(p.stdout, lambda l: tee(l, out, sys.stdout)),
_read_stream(p.stderr, lambda l: tee(l, err, sys.stderr, label="ERR:")),
]
)
return RunOutput(await p.wait(), out, err)
def run(cmd, stdin=None, quiet=False, echo=False) -> RunOutput:
loop = asyncio.get_event_loop()
result = loop.run_until_complete(
_stream_subprocess(cmd, stdin=stdin, quiet=quiet, echo=echo)
)
return result
The code above was based on this blog post: https://kevinmccarthy.org/2016/07/25/streaming-subprocess-stdin-and-stdout-with-asyncio-in-python/
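Hypothetical usage of the run() helper sketched above (the command string is just an example):
result = run("echo hello && echo oops 1>&2")
print(result.returncode)   # exit code from p.wait()
print(result.stdout)       # list of captured stdout lines
print(result.stderr)       # list of captured stderr lines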
This is a straightforward port of tee(1) to Python.
import sys
sinks = sys.argv[1:]
sinks = [open(sink, "w") for sink in sinks]
sinks.append(sys.stderr)
while True:
input = sys.stdin.read(1024)
if input:
for sink in sinks:
sink.write(input)
else:
break
I'm running on Linux right now but this ought to work on most platforms.
Now for the subprocess part, I don't know how you want to 'wire' the subprocess's stdin, stdout and stderr to your stdin, stdout, stderr and file sinks, but I know you can do this:
import subprocess
callee = subprocess.Popen(
["python", "-i"],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
Now you can access callee.stdin, callee.stdout and callee.stderr like normal files, enabling the above "solution" to work. If you want to get the callee.returncode, you'll need to make an extra call to callee.poll().
Be careful with writing to callee.stdin: if the process has exited when you do that, an error may be raised (on Linux, I get IOError: [Errno 32] Broken pipe).
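A guard for that broken-pipe case might look like this (Python 2 style to match the IOError above; the command text is hypothetical):
import errno

try:
    callee.stdin.write('print(1 + 1)\n')
    callee.stdin.flush()
except IOError as e:
    if e.errno != errno.EPIPE:  # EPIPE == 32, "Broken pipe"
        raise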
This is how it can be done:
import sys
from subprocess import Popen, PIPE
with open('log.log', 'w') as log:
proc = Popen(["ping", "google.com"], stdout=PIPE, encoding='utf-8')
while proc.poll() is None:
text = proc.stdout.readline()
log.write(text)
sys.stdout.write(text)
If you don't want to interact with the process you can use the subprocess module just fine.
Example:
tester.py
import os
import sys
for file in os.listdir('.'):
print file
sys.stderr.write("Oh noes, a shrubbery!")
sys.stderr.flush()
sys.stderr.close()
testing.py
import subprocess
p = subprocess.Popen(['python', 'tester.py'], stdout=subprocess.PIPE,
stdin=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
print stdout, stderr
In your situation you can simply write stdout/stderr to a file first. You can send arguments to your process with communicate as well, though I wasn't able to figure out how to continually interact with the subprocess.
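A minimal sketch of the "write stdout/stderr to a file first" suggestion (the file names are placeholders):
import subprocess

with open('stdout.log', 'w') as out, open('stderr.log', 'w') as err:
    subprocess.call(['python', 'tester.py'], stdout=out, stderr=err)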
On Linux, if you really need something like the tee(2) syscall, you can get it like this:
import os
import ctypes
ld = ctypes.CDLL(None, use_errno=True)
SPLICE_F_NONBLOCK = 0x02
def tee(fd_in, fd_out, length, flags=SPLICE_F_NONBLOCK):
result = ld.tee(
ctypes.c_int(fd_in),
ctypes.c_int(fd_out),
ctypes.c_size_t(length),
ctypes.c_uint(flags),
)
if result == -1:
errno = ctypes.get_errno()
raise OSError(errno, os.strerror(errno))
return result
To use this, you probably want to use Python 3.10 and something with os.splice (or use ctypes in the same way to get splice). See the tee(2) man page for an example.
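Hypothetical usage of the tee() wrapper above; per tee(2), both descriptors must be pipes, and the duplicated data stays readable on the input pipe:
import os  # plus the tee() definition above

r1, w1 = os.pipe()
r2, w2 = os.pipe()
os.write(w1, b"hello")
n = tee(r1, w2, 64)      # copy up to 64 buffered bytes from r1 into w2
print(os.read(r2, n))    # b'hello'
print(os.read(r1, n))    # b'hello' -- tee did not consume the input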
My solution isn't elegant, but it works.
You can use PowerShell to gain access to "tee" under Windows.
import subprocess
import sys
cmd = ['powershell', 'ping', 'google.com', '|', 'tee', '-a', 'log.txt']
if 'darwin' in sys.platform:
cmd.remove('powershell')
p = subprocess.Popen(cmd)
p.wait()