Python safely capture live output from multiple subprocesses - python

It is explained in https://stackoverflow.com/a/18422264/7238575 how one can run a subprocess and read out the results live. However, it looks like it creates a file with a name test.log to do so. This makes me worry that if multiple scripts are using this trick in the same directory the test.log file might well be corrupted. Is there a way that does not require a file to be created outside Python? Or can we make sure that each process uses a unique log file? Or am I completely misunderstanding the situation and is there no risk of simultaneous writes by different programs to the same test.log file?

You don't need to write the live output to a file. You can simply write it to STDOUT with sys.stdout.write("your message").
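For example, here is a minimal sketch of reading a subprocess live without any log file (the ping command is just a placeholder for your real subprocess):
import subprocess
import sys

# Sketch: stream a child's output line by line as it arrives; no file involved.
proc = subprocess.Popen(["ping", "-c", "3", "localhost"],
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                        universal_newlines=True)
for line in iter(proc.stdout.readline, ''):
    sys.stdout.write(line)
proc.wait()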
On the other hand you can generate unique log files for each process:
import os
import psutil
pid = psutil.Process(os.getpid())
process_name = pid.name()
path, extension = os.path.splitext(os.path.join(os.getcwd(), "my_basic_log_file.log"))
created_log_file_name = "{0}_{1}{2}".format(path, process_name, extension)
print(created_log_file_name)
Output:
>>> python3 test_1.py
/home/my_user/test_folder/my_basic_log_file_python3.log
As the example above shows, my process name was python3, so that name was inserted into the "basic" log file name. With this solution you can create distinct log files for your processes.
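Note that two processes that share the same name (for example, several python3 interpreters) would still collide on the same file, so as a hedge you can fall back on the PID or the tempfile module, both of which are unique per process. A minimal sketch:
import os
import tempfile

# Sketch: per-process unique log names without psutil.
pid_log = "my_basic_log_file_{}.log".format(os.getpid())  # the PID is unique among running processes
tmp_log = tempfile.NamedTemporaryFile(prefix="my_log_", suffix=".log", delete=False)
print(pid_log, tmp_log.name)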
You can set your process name with setproctitle.setproctitle("my_process_name").
Here is an example.
import os
import psutil
import setproctitle
setproctitle.setproctitle("milan_balazs")
pid = psutil.Process(os.getpid())
process_name = pid.name()
path, extension = os.path.splitext(os.path.join(os.getcwd(), "my_basic_log_file.log"))
created_log_file_name = "{0}_{1}{2}".format(path, process_name, extension)
print(created_log_file_name)
Output:
>>> python3 test_1.py
/home/my_user/test_folder/my_basic_log_file_milan_balazs.log
I have previously written a fairly robust command caller that produces live output (without writing to a file). You can check it:
import sys
import os
import subprocess
import select
import errno
def poll_command(process, realtime):
    """
    Watch for error or output from the process
    :param process: the process, running the command
    :param realtime: flag if realtime logging is needed
    :return: Return STDOUT and return code of the command processed
    """
    coutput = ""
    poller = select.poll()
    poller.register(process.stdout, select.POLLIN)
    fdhup = {process.stdout.fileno(): 0}
    while sum(fdhup.values()) < len(fdhup):
        try:
            r = poller.poll(1)
        except select.error as err:
            if err.args[0] != errno.EINTR:
                raise
            r = []
        for fd, flags in r:
            if flags & (select.POLLIN | select.POLLPRI):
                c = version_conversion(fd, realtime)
                coutput += c
            else:
                fdhup[fd] = 1
    return coutput.strip(), process.poll()


def version_conversion(fd, realtime):
    """
    Read from the file descriptor; there are some differences
    between Python 2/3, so this conversion is needed.
    """
    c = os.read(fd, 4096)
    if sys.version_info >= (3, 0):
        c = c.decode("ISO-8859-1")
    if realtime:
        sys.stdout.write(c)
        sys.stdout.flush()
    return c


def exec_shell(command, real_time_out=False):
    """
    Call commands.
    :param command: Command line.
    :param real_time_out: If this variable is True, the output of the command is logged in real time
    :return: Return STDOUT and return code of the command processed.
    """
    if not command:
        print("Command is not available.")
        return None, None
    print("Executing '{}'".format(command))
    p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    out, return_code = poll_command(p, real_time_out)
    if return_code:  # non-zero return code means the command failed
        error_msg = "Return code: {ret_code} Error message: {err_msg}".format(
            ret_code=return_code, err_msg=out
        )
        print(error_msg)
    else:
        print("[OK] - The command calling was successful. CMD: '{}'".format(command))
    return out, return_code


exec_shell("echo test running", real_time_out=True)
Output:
>>> python3 test.py
Executing 'echo test running'
test running
[OK] - The command calling was successful. CMD: 'echo test running'
I hope this answers your question! :)

Related

Understanding subprocess module in python when running program from python script

I usually run a program from my OpenSuse linux terminal by typing ./run file_name. This will bring up a series of options that I can choose from by typing a numeric value 0-9 and hitting return on my keyboard. Now I want to do this from a python script automatically. My example below is not working, but I can't understand where I'm failing and how to debug:
import subprocess
p = subprocess.Popen(["/path/to/program/run", file_name], stdin = subprocess.PIPE,stdout=subprocess.PIPE,shell=False)
print "Hello"
out, err = p.communicate(input='0\r\n')
print out
print err
for line in p.stdout.readlines():
print line
The output of this program is just
>> Hello
>>
i.e. then it seems to freeze (I have no idea what's actually happening!). I would have expected to see what I see when I run ./run file_name, hit 0, and then press return directly in my terminal, but I assure you this is not the case.
What can I do to debug my code?
Edit 1: as suggested in comments
import subprocess
fileName = 'test_profile'
p = subprocess.Popen(["/path/to/program/run", fileName], stdin = subprocess.PIPE,stdout=subprocess.PIPE,shell=False)
print "Hello"
for line in iter(p.stdout.readline,""):
print line
will indeed return the stdout of my program!
communicate waits for the completion of the program. For example:
import subprocess
p = subprocess.Popen(["cut", "-c2"], stdin=subprocess.PIPE, stdout=subprocess.PIPE,shell=False)
out, err = p.communicate(input='abc')
print("Result: '{}'".format(out.strip()))
# Result: 'b'
It sounds like you have a more interactive script, in which case you should probably try pexpect:
import pexpect
child = pexpect.spawn('cut -c2')
child.sendline('abc')
child.readline() # repeat what was typed
print(child.readline()) # prints 'b'

Logging last Bash command to file from script

I write lots of small scripts to manipulate files on a Bash-based server. I would like to have a mechanism by which to log which commands created which files in a given directory. However, I don't just want to capture every input command, all the time.
Approach 1: a wrapper script that uses a Bash builtin (a la history or fc -ln -1) to grab the last command and write it to a log file. I have not been able to figure out any way to do this, as the shell builtin commands do not appear to be recognized outside of the interactive shell.
Approach 2: a wrapper script that pulls from ~/.bash_history to get the last command. This, however, requires setting up the Bash shell to flush every command to history immediately (as per this comment) and seems also to require that the history be allowed to grow inexorably. If this is the only way, so be it, but it would be great to avoid having to edit the ~/.bashrc file on every system where this might be implemented.
Approach 3: use script. My problem with this is that it requires multiple commands to start and stop the logging, and because it launches its own shell it is not callable from within another script (or at least, doing so complicates things significantly).
I am trying to figure out an implementation that's of the form log_this.script other_script other_arg1 other_arg2 > file, where everything after the first argument is logged. The emphasis here is on efficiency and minimizing syntax overhead.
EDIT: iLoveTux and I both came up with similar solutions. For those interested, my own implementation follows. It is somewhat more constrained in its functionality than the accepted answer, but it also auto-updates any existing logfile entries with changes (though not deletions).
Sample usage:
$ cmdlog.py "python3 test_script.py > test_file.txt"
creates a log file in the parent directory of the output file with the following:
2015-10-12#10:47:09 test_file.txt "python3 test_script.py > test_file.txt"
Additional file changes are added to the log:
$ cmdlog.py "python3 test_script.py > test_file_2.txt"
the log now contains
2015-10-12#10:47:09 test_file.txt "python3 test_script.py > test_file.txt"
2015-10-12#10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt"
Running on the original file name again changes the file order in the log, based on modification time of the files:
$ cmdlog.py "python3 test_script.py > test_file.txt"
produces
2015-10-12#10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt"
2015-10-12#10:48:01 test_file.txt "python3 test_script.py > test_file.txt"
Full script:
#!/usr/bin/env python3
'''
A wrapper script that will write the command-line
args associated with any files generated to a log
file in the directory where the files were made.
'''
import sys
import os
from os import listdir
from os.path import isfile, join
import subprocess
import time
from datetime import datetime


def listFiles(mypath):
    """
    Return relative paths of all files in mypath
    """
    return [join(mypath, f) for f in listdir(mypath) if
            isfile(join(mypath, f))]


def read_log(log_file):
    """
    Reads a file history log and returns a dictionary
    of {filename: command} entries.
    Expects tab-separated lines of [time, filename, command]
    """
    entries = {}
    with open(log_file) as log:
        for l in log:
            l = l.strip()
            mod, name, cmd = l.split("\t")
            # cmd = cmd.lstrip("\"").rstrip("\"")
            entries[name] = [cmd, mod]
    return entries


def time_sort(t, fmt):
    """
    Turn a strftime-formatted string into a tuple
    of time info
    """
    parsed = datetime.strptime(t, fmt)
    return parsed


ARGS = sys.argv[1]
ARG_LIST = ARGS.split()

# Guess where logfile should be put
if ">" in ARG_LIST or ">>" in ARG_LIST:
    # Get position after redirect in arg list
    redirect_index = max(ARG_LIST.index(e) for e in ARG_LIST if e in (">", ">>"))
    output = ARG_LIST[redirect_index + 1]
    output = os.path.abspath(output)
    out_dir = os.path.dirname(output)
elif "cp" in ARG_LIST or "mv" in ARG_LIST:
    output = ARG_LIST[-1]
    out_dir = os.path.dirname(output)
else:
    out_dir = os.getcwd()

# Set logfile location within the inferred output directory
LOGFILE = out_dir + "/cmdlog_history.log"

# Get file list state prior to running
all_files = listFiles(out_dir)
pre_stats = [os.path.getmtime(f) for f in all_files]

# Run the desired external commands
subprocess.call(ARGS, shell=True)

# Get done time of external commands
TIME_FMT = "%Y-%m-%d#%H:%M:%S"
log_time = time.strftime(TIME_FMT)

# Get existing entries from logfile, if present
if LOGFILE in all_files:
    logged = read_log(LOGFILE)
else:
    logged = {}

# Get file list state after run is complete
post_stats = [os.path.getmtime(f) for f in all_files]
post_files = listFiles(out_dir)

# Find files whose states have changed since the external command
changed = [e[0] for e in zip(all_files, pre_stats, post_stats) if e[1] != e[2]]
new = [e for e in post_files if e not in all_files]
all_modded = list(set(changed + new))

if not all_modded:  # exit early, no need to log
    sys.exit(0)

# Replace files that have changed, add those that are new
for f in all_modded:
    name = os.path.basename(f)
    logged[name] = [ARGS, log_time]

# Write changed files to logfile
with open(LOGFILE, 'w') as log:
    for name, info in sorted(logged.items(), key=lambda x: time_sort(x[1][1], TIME_FMT)):
        cmd, mod_time = info
        if not cmd.startswith("\""):
            cmd = "\"{}\"".format(cmd)
        log.write("\t".join([mod_time, name, cmd]) + "\n")

sys.exit(0)
You can use the tee command, which stores its standard input to a file and outputs it on standard output. Pipe the command line into tee, and pipe tee's output into a new invocation of your shell:
echo '<command line to be logged and executed>' | \
tee --append /path/to/your/logfile | \
$SHELL
i.e., for your example of other_script other_arg1 other_arg2 > file,
echo 'other_script other_arg1 other_arg2 > file' | \
tee --append /tmp/mylog.log | \
$SHELL
If your command line needs single quotes, they need to be escaped properly.
OK, so you don't mention Python in your question, but it is tagged Python, so I figured I would see what I could do. I came up with this script:
import sys
from os.path import expanduser, join
from subprocess import Popen, PIPE


def issue_command(command):
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
    return process.communicate()


home = expanduser("~")
log_file = join(home, "command_log")
command = " ".join(sys.argv[1:])  # join the args into a single shell command string

with open(log_file, "a") as fout:
    fout.write("{}\n".format(command))

out, err = issue_command(command)
which you can call like (if you name it log_this and make it executable):
$ log_this echo hello world
and it will put "echo hello world" in a file ~/command_log. Note, though, that if you want to use pipes or redirection you have to quote your command (this may or may not be a real downfall for your use case; I haven't figured out how to do this without the quotes just yet), like this:
$ log_this "echo hello world | grep h >> /tmp/hello_world"
but since it's not perfect, I thought I would add a little something extra.
The following script allows you to specify a different file to log your commands to as well as record the execution time of the command:
#!/usr/bin/env python
from subprocess import Popen, PIPE
import argparse
from os.path import expanduser, join
from time import time


def issue_command(command):
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
    return process.communicate()


home = expanduser("~")
default_file = join(home, "command_log")

parser = argparse.ArgumentParser()
parser.add_argument("-f", "--file", type=argparse.FileType("a"), default=default_file)
parser.add_argument("-p", "--profile", action="store_true")
parser.add_argument("command", nargs=argparse.REMAINDER)
args = parser.parse_args()

if args.profile:
    start = time()
    out, err = issue_command(args.command)
    runtime = time() - start
    entry = "{}\t{}\n".format(" ".join(args.command), runtime)
else:
    out, err = issue_command(args.command)
    entry = "{}\n".format(" ".join(args.command))

args.file.write(entry)
args.file.close()
You would use this the same way as the other script, but if you wanted to specify a different file to log to just pass -f <FILENAME> before your actual command and your log will go there, and if you wanted to record the execution time just provide the -p (for profile) before your actual command like so:
$ log_this -p -f ~/new_log "echo hello world | grep h >> /tmp/hello_world"
I will try to make this better, but if you can think of anything else this could do for you, I am making a github project for this where you can submit bug reports and feature requests.

Getting output from and giving commands to a python subprocess

I am trying to get output from a subprocess and then give commands to that process based on the preceding output. I need to do this a variable number of times, when the program needs further input. (I also need to be able to hide the subprocess command prompt if possible).
I figured this would be an easy task given that I have seen this problem being discussed in posts from 2003 and it is nearly 2012 and it appears to be a pretty common need and really seems like it should be a basic part of any programming language. Apparently I was wrong and somehow almost 9 years later there is still no standard way of accomplishing this task in a stable, non-destructive, platform independent way!
I don't really understand much about file i/o and buffering or threading so I would prefer a solution that is as simple as possible. If there is a module that accomplishes this that is compatible with python 3.x, I would be very willing to download it. I realize that there are multiple questions that ask basically the same thing, but I have yet to find an answer that addresses the simple task that I am trying to accomplish.
Here is the code I have so far based on a variety of sources; however I have absolutely no idea what to do next. All my attempts ended in failure and some managed to use 100% of my CPU (to do basically nothing) and would not quit.
import subprocess
from subprocess import Popen, PIPE
p = Popen(r'C:\postgis_testing\shellcomm.bat', stdin=PIPE, stdout=PIPE, stderr=subprocess.STDOUT, shell=True)
stdout, stdin = p.communicate(b'command string')
In case my question is unclear, I am posting the text of a sample batch file that demonstrates a situation in which it is necessary to send multiple commands to the subprocess (if you type an incorrect command string, the program loops).
#echo off
:looper
set INPUT=
set /P INPUT=Type the correct command string:
if "%INPUT%" == "command string" (echo you are correct) else (goto looper)
If anyone can help me I would very much appreciate it, and I'm sure many others would as well!
EDIT: here is the functional code using eryksun's code (next post):
import subprocess
import threading
import time
import sys

try:
    import queue
except ImportError:
    import Queue as queue


def read_stdout(stdout, q, p):
    it = iter(lambda: stdout.read(1), b'')
    for c in it:
        q.put(c)
        if stdout.closed:
            break


_encoding = getattr(sys.stdout, 'encoding', 'latin-1')


def get_stdout(q, encoding=_encoding):
    out = []
    while 1:
        try:
            out.append(q.get(timeout=0.2))
        except queue.Empty:
            break
    return b''.join(out).rstrip().decode(encoding)


def printout(q):
    outdata = get_stdout(q)
    if outdata:
        print('Output: %s' % outdata)


if __name__ == '__main__':
    # setup
    p = subprocess.Popen(['shellcomm.bat'], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         bufsize=0, shell=True)  # I put shell=True to hide prompt
    q = queue.Queue()
    encoding = getattr(sys.stdin, 'encoding', 'utf-8')

    # for reading stdout
    t = threading.Thread(target=read_stdout, args=(p.stdout, q, p))
    t.daemon = True
    t.start()

    # command loop
    while p.poll() is None:
        printout(q)
        cmd = input('Input: ')
        cmd = (cmd + '\n').encode(encoding)
        p.stdin.write(cmd)
        time.sleep(0.1)  # I added this to give some time to check for closure (otherwise it doesn't work)

    # tear down
    for n in range(4):
        rc = p.poll()
        if rc is not None:
            break
        time.sleep(0.25)
    else:
        p.terminate()
        rc = p.poll()
        if rc is None:
            rc = 1

    printout(q)
    print('Return Code: %d' % rc)
However when the script is run from a command prompt the following happens:
C:\Users\username>python C:\postgis_testing\shellcomm7.py
Input: sth
Traceback (most recent call last):
File "C:\postgis_testing\shellcomm7.py", line 51, in <module>
p.stdin.write(cmd)
IOError: [Errno 22] Invalid argument
It seems that the program closes out when run from the command prompt. Any ideas?
This demo uses a dedicated thread to read from stdout. If you search around, I'm sure you can find a more complete implementation written up in an object oriented interface. At least I can say this is working for me with your provided batch file in both Python 2.7.2 and 3.2.2.
shellcomm.bat:
#echo off
echo Command Loop Test
echo.
:looper
set INPUT=
set /P INPUT=Type the correct command string:
if "%INPUT%" == "command string" (echo you are correct) else (goto looper)
Here's what I get for output based on the sequence of commands "wrong", "still wrong", and "command string":
Output:
Command Loop Test
Type the correct command string:
Input: wrong
Output:
Type the correct command string:
Input: still wrong
Output:
Type the correct command string:
Input: command string
Output:
you are correct
Return Code: 0
For reading the piped output, readline might work sometimes, but set /P INPUT in the batch file naturally isn't writing a line ending. So instead I used lambda: stdout.read(1) to read a byte at a time (not so efficient, but it works). The reading function puts the data on a queue. The main thread gets the output from the queue after it writes a command. Using a timeout on the get call here makes it wait a small amount of time to ensure the program is waiting for input. Instead you could check the output for prompts to know when the program is expecting input.
All that said, you can't expect a setup like this to work universally because the console program you're trying to interact with might buffer its output when piped. In Unix systems there are some utility commands available that you can insert into a pipe to modify the buffering to be non-buffered, line-buffered, or a given size -- such as stdbuf. There are also ways to trick the program into thinking it's connected to a pty (see pexpect). However, I don't know a way around this problem on Windows if you don't have access to the program's source code to explicitly set the buffering using setvbuf.
import subprocess
import threading
import time
import sys

if sys.version_info.major >= 3:
    import queue
else:
    import Queue as queue
    input = raw_input


def read_stdout(stdout, q):
    it = iter(lambda: stdout.read(1), b'')
    for c in it:
        q.put(c)
        if stdout.closed:
            break


_encoding = getattr(sys.stdout, 'encoding', 'latin-1')


def get_stdout(q, encoding=_encoding):
    out = []
    while 1:
        try:
            out.append(q.get(timeout=0.2))
        except queue.Empty:
            break
    return b''.join(out).rstrip().decode(encoding)


def printout(q):
    outdata = get_stdout(q)
    if outdata:
        print('Output:\n%s' % outdata)


if __name__ == '__main__':
    ARGS = ["shellcomm.bat"]  ### Modify this

    # setup
    p = subprocess.Popen(ARGS, bufsize=0, stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    q = queue.Queue()
    encoding = getattr(sys.stdin, 'encoding', 'utf-8')

    # for reading stdout
    t = threading.Thread(target=read_stdout, args=(p.stdout, q))
    t.daemon = True
    t.start()

    # command loop
    while 1:
        printout(q)
        if p.poll() is not None or p.stdin.closed:
            break
        cmd = input('Input: ')
        cmd = (cmd + '\n').encode(encoding)
        p.stdin.write(cmd)

    # tear down
    for n in range(4):
        rc = p.poll()
        if rc is not None:
            break
        time.sleep(0.25)
    else:
        p.terminate()
        rc = p.poll()
        if rc is None:
            rc = 1

    printout(q)
    print('\nReturn Code: %d' % rc)

Merging a Python script's subprocess' stdout and stderr while keeping them distinguishable

I would like to direct a python script's subprocess' stdout and stderr into the same file. What I don't know is how to make the lines from the two sources distinguishable. (For example, prefix the lines from stderr with an exclamation mark.)
In my particular case there is no need for live monitoring of the subprocess, the executing Python script can wait for the end of its execution.
tsk = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
subprocess.STDOUT is a special flag that tells subprocess to route all stderr output to stdout, thus combining your two streams.
By the way, select doesn't have poll() on Windows. subprocess only uses the file handle number, and doesn't call your file output object's write method.
To capture the output, do something like:
logfile = open(logfilename, 'w')
while tsk.poll() is None:
    line = tsk.stdout.readline()
    logfile.write(line)
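One caveat with this loop: output that arrives between the last readline and process exit can be missed. A hedged addition that drains whatever is left once poll() reports completion:
for line in tsk.stdout:  # drain any output still buffered in the pipe
    logfile.write(line)
logfile.close()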
I found myself having to tackle this problem recently, and it took a while to get something I felt worked correctly in most cases, so here it is! (It also has the nice side effect of processing the output via a python logger, which I've noticed is another common question here on Stackoverflow).
Here is the code:
import sys
import logging
import subprocess
from threading import Thread

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.addLevelName(logging.INFO + 2, 'STDERR')
logging.addLevelName(logging.INFO + 1, 'STDOUT')
logger = logging.getLogger('root')

pobj = subprocess.Popen(['python', '-c', 'print 42;bargle'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)


def logstream(stream, loggercb):
    while True:
        out = stream.readline()
        if out:
            loggercb(out.rstrip())
        else:
            break


stdout_thread = Thread(target=logstream,
                       args=(pobj.stdout, lambda s: logger.log(logging.INFO + 1, s)))
stderr_thread = Thread(target=logstream,
                       args=(pobj.stderr, lambda s: logger.log(logging.INFO + 2, s)))

stdout_thread.start()
stderr_thread.start()

# join() blocks until each reader thread has drained its stream,
# which is cleaner than busy-waiting on isAlive()
stdout_thread.join()
stderr_thread.join()
Here is the output:
STDOUT:root:42
STDERR:root:Traceback (most recent call last):
STDERR:root: File "<string>", line 1, in <module>
STDERR:root:NameError: name 'bargle' is not defined
You can replace the subprocess call to do whatever you want, I just chose running python with a command that I knew would print to both stdout and stderr. The key bit is reading stderr and stdout each in a separate thread. Otherwise you may be blocking on reading one while there is data ready to be read on the other.
If you want to interleave the output, to get roughly the same order that you would see if you ran the process interactively, then you need to do what the shell does: poll stdout/stderr and write in the order in which they become readable.
Here's some code that does something along the lines of what you want - in this case sending the stdout/stderr to a logger info/error streams.
tsk = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

poll = select.poll()
poll.register(tsk.stdout, select.POLLIN | select.POLLHUP)
poll.register(tsk.stderr, select.POLLIN | select.POLLHUP)
pollc = 2

events = poll.poll()
while pollc > 0 and len(events) > 0:
    for event in events:
        (rfd, event) = event
        if event & select.POLLIN:
            if rfd == tsk.stdout.fileno():
                line = tsk.stdout.readline()
                if len(line) > 0:
                    logger.info(line[:-1])
            if rfd == tsk.stderr.fileno():
                line = tsk.stderr.readline()
                if len(line) > 0:
                    logger.error(line[:-1])
        if event & select.POLLHUP:
            poll.unregister(rfd)
            pollc = pollc - 1
    if pollc > 0:
        events = poll.poll()
tsk.wait()
At the moment, the other answers don't handle buffering on the child subprocess' side if the subprocess is not a Python script that accepts the -u flag. See "Q: Why not just use a pipe (popen())?" in the pexpect documentation.
To simulate the -u flag for C stdio-based (FILE*) programs you could try stdbuf.
If you ignore this then your output won't be properly interleaved and might look like:
stderr
stderr
...large block of stdout including parts that are printed before stderr...
You could try it with the following client program, and notice the difference with/without the -u flag (['stdbuf', '-o', 'L', 'child_program'] also fixes the output):
#!/usr/bin/env python
from __future__ import print_function
import random
import sys
import time
from datetime import datetime


def tprint(msg, file=sys.stdout):
    time.sleep(.1 * random.random())
    print("%s %s" % (datetime.utcnow().strftime('%S.%f'), msg), file=file)


tprint("stdout1 before stderr")
tprint("stdout2 before stderr")
for x in range(5):
    tprint('stderr%d' % x, file=sys.stderr)
tprint("stdout3 after stderr")
On Linux you could use pty to get the same behavior as when the subprocess runs interactively, e.g., here's a modified version of @T.Rojan's answer:
import logging, os, select, subprocess, sys, pty

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

master_fd, slave_fd = pty.openpty()
p = subprocess.Popen(args, stdout=slave_fd, stderr=subprocess.PIPE, close_fds=True)

with os.fdopen(master_fd) as stdout:
    poll = select.poll()
    poll.register(stdout, select.POLLIN)
    poll.register(p.stderr, select.POLLIN | select.POLLHUP)

    def cleanup(_done=[]):
        if _done:
            return
        _done.append(1)
        poll.unregister(p.stderr)
        p.stderr.close()
        poll.unregister(stdout)
        assert p.poll() is not None

    read_write = {stdout.fileno(): (stdout.readline, logger.info),
                  p.stderr.fileno(): (p.stderr.readline, logger.error)}
    while True:
        events = poll.poll(40)  # poll with a small timeout to avoid both
                                # blocking forever and a busy loop
        if not events and p.poll() is not None:
            # no IO events and the subprocess exited
            cleanup()
            break

        for fd, event in events:
            if event & select.POLLIN:  # there is something to read
                read, write = read_write[fd]
                line = read()
                if line:
                    write(line.rstrip())
            elif event & select.POLLHUP:  # free resources if stderr hung up
                cleanup()
            else:  # something unexpected happened
                assert 0

sys.exit(p.wait())  # return child's exit code
It assumes that stderr is always unbuffered/line-buffered and stdout is line-buffered in an interactive mode. Only full lines are read. The program might block if there are non-terminated lines in the output.
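If blocking on a partial line is a concern, a hedged alternative (the same trick the poll_command example near the top of this page uses) is to read raw chunks instead of lines:
import os

# Sketch: fd is assumed to be a raw pipe/pty file descriptor. os.read returns
# whatever bytes are ready, up to 4096, so a prompt without a trailing newline
# does not block the reader the way readline() can.
data = os.read(fd, 4096)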
I suggest you write your own handlers, something like (not tested, I hope you catch the idea):
class my_buffer(object):
    def __init__(self, fileobject, prefix):
        self._fileobject = fileobject
        self.prefix = prefix

    def write(self, text):
        return self._fileobject.write('%s %s' % (self.prefix, text))

    # delegate other methods to fileobject if necessary


log_file = open('log.log', 'w')
my_out = my_buffer(log_file, 'OK:')
my_err = my_buffer(log_file, '!!!ERROR:')
p = subprocess.Popen(command, stdout=my_out, stderr=my_err, shell=True)
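As noted further up this page, Popen passes OS-level file descriptors to the child and never calls a Python object's write method, so the handler above won't actually be invoked as written. A hedged variant that achieves the same prefixing by capturing both streams and writing them out itself:
import subprocess

# Sketch: capture both streams, then write prefixed lines to the log.
# 'command' and the OK:/!!!ERROR: prefixes mirror the snippet above.
p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                     shell=True, universal_newlines=True)
out, err = p.communicate()
with open('log.log', 'w') as log_file:
    for line in out.splitlines():
        log_file.write('OK: %s\n' % line)
    for line in err.splitlines():
        log_file.write('!!!ERROR: %s\n' % line)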
You may write the stdout/err to a file after the command execution. In the example below I use pickling, so I am sure I will be able to read it back without any particular parsing to differentiate between stdout/err, and at some point I could dump the exit code and the command itself as well.
import subprocess
import cPickle
command = 'ls -altrh'
outfile = 'log.errout'
pipe = subprocess.Popen(command, stdout = subprocess.PIPE,
stderr = subprocess.PIPE, shell = True)
stdout, stderr = pipe.communicate()
f = open(outfile, 'w')
cPickle.dump({'out': stdout, 'err': stderr},f)
f.close()

Assign output of os.system to a variable and prevent it from being displayed on the screen [duplicate]

This question already has answers here:
Running shell command and capturing the output
(21 answers)
Closed 2 years ago.
I want to assign the output of a command I run using os.system to a variable and prevent it from being output to the screen. But in the code below, the output is sent to the screen and the value printed for var is 0, which I guess signifies whether the command ran successfully or not. Is there any way to assign the command output to the variable and also stop it from being displayed on the screen?
var = os.system("cat /etc/services")
print var #Prints 0
From this question which I asked a long time ago, what you may want to use is popen:
os.popen('cat /etc/services').read()
From the docs for Python 3.6:
This is implemented using subprocess.Popen; see that class's documentation for more powerful ways to manage and communicate with subprocesses.
Here's the corresponding code for subprocess:
import subprocess
proc = subprocess.Popen(["cat", "/etc/services"], stdout=subprocess.PIPE)  # a list of args needs shell=False (the default)
(out, err) = proc.communicate()
print("program output:", out)
You might also want to look at the subprocess module, which was built to replace the whole family of Python popen-type calls.
import subprocess
output = subprocess.check_output("cat /etc/services", shell=True)
The advantage it has is that there is a ton of flexibility with how you invoke commands, where the standard in/out/error streams are connected, etc.
The commands module is a reasonably high-level way to do this (Python 2 only; it was removed in Python 3):
import commands
status, output = commands.getstatusoutput("cat /etc/services")
status is 0, output is the contents of /etc/services.
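Since commands is gone in Python 3, subprocess.getstatusoutput is the closest equivalent there:
import subprocess

# Python 3 replacement for commands.getstatusoutput; returns (exitcode, output).
status, output = subprocess.getstatusoutput("cat /etc/services")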
For Python 3.5+ it is recommended that you use the run function from the subprocess module. This returns a CompletedProcess object, from which you can easily obtain the output as well as the return code. Since you are only interested in the output, you can write a utility wrapper like this.
from subprocess import PIPE, run


def out(command):
    result = run(command, stdout=PIPE, stderr=PIPE, universal_newlines=True, shell=True)
    return result.stdout


my_output = out("echo hello world")
# Note: with shell=True the command should be a single string; a list such as
# ["echo", "hello world"] is only handled portably when shell=False.
I know this has already been answered, but I wanted to share a potentially better-looking way to call Popen via the use of from x import x and functions:
from subprocess import PIPE, Popen
def cmdline(command):
    process = Popen(
        args=command,
        stdout=PIPE,
        shell=True
    )
    return process.communicate()[0]


print cmdline("cat /etc/services")
print cmdline('ls')
print cmdline('rpm -qa | grep "php"')
print cmdline('nslookup google.com')
I do it with os.system and a temp file:
import tempfile, os


def readcmd(cmd):
    ftmp = tempfile.NamedTemporaryFile(suffix='.out', prefix='tmp', delete=False)
    fpath = ftmp.name
    if os.name == "nt":
        fpath = fpath.replace("/", "\\")  # for Windows
    ftmp.close()
    os.system(cmd + " > " + fpath)
    data = ""
    with open(fpath, 'r') as file:
        data = file.read()
    os.remove(fpath)
    return data
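For example:
output = readcmd("cat /etc/services")
print(output)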
The Python 2.6 and 3 documentation warns that using PIPE for stdout or stderr can deadlock when the pipe buffer fills and nothing reads from it. An approach that avoids PIPE entirely is
import subprocess
# must create a file object to store the output. Here we are getting
# the ssid we are connected to
outfile = open('/tmp/ssid', 'w')
status = subprocess.Popen(["iwgetid"], bufsize=0, stdout=outfile)
outfile.close()
# now operate on the file
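For instance, reading the captured output back once the child has finished writing:
status.wait()  # make sure iwgetid has flushed and exited
with open('/tmp/ssid') as f:
    ssid = f.read().strip()
print(ssid)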
from os import system, remove
from uuid import uuid4


def bash_(shell_command: str) -> tuple:
    """
    :param shell_command: your shell command
    :return: ( 1 | 0, stdout)
    """
    logfile: str = '/tmp/%s' % uuid4().hex
    # "> file 2>&1" instead of bash's "&>", so it also works when /bin/sh is not bash
    err: int = system('%s > %s 2>&1' % (shell_command, logfile))
    out: str = open(logfile, 'r').read()
    remove(logfile)
    return err, out


# Example:
print(bash_('cat /usr/bin/vi | wc -l'))
>>> (0, '3296\n')
