Concatenate command of an external shell software in python [duplicate] - python

If I do the following:
import subprocess
from cStringIO import StringIO
subprocess.Popen(['grep','f'],stdout=subprocess.PIPE,stdin=StringIO('one\ntwo\nthree\nfour\nfive\nsix\n')).communicate()[0]
I get:
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 533, in __init__
(p2cread, p2cwrite,
File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 830, in _get_handles
p2cread = stdin.fileno()
AttributeError: 'cStringIO.StringI' object has no attribute 'fileno'
Apparently a cStringIO.StringIO object doesn't quack close enough to a file duck to suit subprocess.Popen. How do I work around this?

Popen.communicate() documentation:
Note that if you want to send data to
the process’s stdin, you need to
create the Popen object with
stdin=PIPE. Similarly, to get anything
other than None in the result tuple,
you need to give stdout=PIPE and/or
stderr=PIPE too.
Replacing os.popen*
pipe = os.popen(cmd, 'w', bufsize)
# ==>
pipe = Popen(cmd, shell=True, bufsize=bufsize, stdin=PIPE).stdin
Warning Use communicate() rather than
stdin.write(), stdout.read() or
stderr.read() to avoid deadlocks due
to any of the other OS pipe buffers
filling up and blocking the child
process.
So your example could be written as follows:
from subprocess import Popen, PIPE, STDOUT
p = Popen(['grep', 'f'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
grep_stdout = p.communicate(input=b'one\ntwo\nthree\nfour\nfive\nsix\n')[0]
print(grep_stdout.decode())
# -> four
# -> five
# ->
On Python 3.5+ (3.6+ for the encoding argument), you could use subprocess.run to pass input as a string to an external command and get its exit status, and its output as a string, back in one call:
#!/usr/bin/env python3
from subprocess import run, PIPE
p = run(['grep', 'f'], stdout=PIPE,
        input='one\ntwo\nthree\nfour\nfive\nsix\n', encoding='ascii')
print(p.returncode)
# -> 0
print(p.stdout)
# -> four
# -> five
# ->

I figured out this workaround:
>>> p = subprocess.Popen(['grep','f'],stdout=subprocess.PIPE,stdin=subprocess.PIPE)
>>> p.stdin.write(b'one\ntwo\nthree\nfour\nfive\nsix\n') #expects a bytes type object
>>> p.communicate()[0]
'four\nfive\n'
>>> p.stdin.close()
Is there a better one?

There's a beautiful solution if you're using Python 3.4 or better. Use the input argument instead of the stdin argument, which accepts a bytes argument:
output_bytes = subprocess.check_output(
    ["sed", "s/foo/bar/"],
    input=b"foo",
)
This works for check_output and run, but not for call or check_call, which don't accept an input argument.
In Python 3.7+, you can also add text=True to make check_output take a string as input and return a string (instead of bytes):
output_string = subprocess.check_output(
    ["sed", "s/foo/bar/"],
    input="foo",
    text=True,
)

I'm a bit surprised nobody suggested creating a pipe, which is, in my opinion, by far the simplest way to pass a string to stdin of a subprocess:
import os
import subprocess

read, write = os.pipe()
os.write(write, b"stdin input here")  # os.write() expects bytes
os.close(write)
subprocess.check_call(['your-command'], stdin=read)

I am using Python 3 and found out that you need to encode your string before you can pass it into stdin:
p = Popen(['grep', 'f'], stdout=PIPE, stdin=PIPE, stderr=PIPE)
out, err = p.communicate(input='one\ntwo\nthree\nfour\nfive\nsix\n'.encode())
print(out)

Apparently a cStringIO.StringIO object doesn't quack close enough to
a file duck to suit subprocess.Popen
I'm afraid not. The pipe is a low-level OS concept, so it absolutely requires a file object that is represented by an OS-level file descriptor. Your workaround is the right one.
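If you already have the data in a StringIO object, a minimal sketch of that workaround (Python 3 names; the grep example is taken from the question) is to hand its contents to communicate() rather than trying to use the object itself as stdin:
from io import StringIO
from subprocess import Popen, PIPE

buf = StringIO('one\ntwo\nthree\nfour\nfive\nsix\n')
p = Popen(['grep', 'f'], stdin=PIPE, stdout=PIPE, universal_newlines=True)
out, _ = p.communicate(buf.getvalue())  # feed the in-memory string through a real pipe
print(out)
# -> four
# -> five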

from subprocess import Popen, PIPE
from tempfile import SpooledTemporaryFile as tempfile
f = tempfile()
f.write('one\ntwo\nthree\nfour\nfive\nsix\n')
f.seek(0)
print Popen(['/bin/grep','f'],stdout=PIPE,stdin=f).stdout.read()
f.close()

"""
Ex: Dialog (2-way) with a Popen()
"""
p = subprocess.Popen('Your Command Here',
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT,
                     stdin=subprocess.PIPE,
                     universal_newlines=True,  # text mode, so we can write/read str
                     shell=True,
                     bufsize=0)
p.stdin.write('START\n')
out = p.stdout.readline()
while out:
    line = out.rstrip("\n")
    if "WHATEVER1" in line:
        pr = 1
        p.stdin.write('DO 1\n')
        out = p.stdout.readline()
        continue
    if "WHATEVER2" in line:
        pr = 2
        p.stdin.write('DO 2\n')
        out = p.stdout.readline()
        continue
    """
    ..........
    """
    out = p.stdout.readline()
p.wait()

On Python 3.7+ do this:
my_data = "whatever you want\nshould match this f"
subprocess.run(["grep", "f"], text=True, input=my_data)
and you'll probably want to add capture_output=True to get the output of running the command as a string.
On older versions of Python, replace text=True with universal_newlines=True:
subprocess.run(["grep", "f"], universal_newlines=True, input=my_data)

Beware that Popen.communicate(input=s) may give you trouble if s is too big, because apparently the parent process will buffer it before forking the child subprocess, meaning it needs "twice as much" used memory at that point (at least according to the "under the hood" explanation and linked documentation found here). In my particular case, s was a generator that was first fully expanded and only then written to stdin, so the parent process was huge right before the child was spawned,
and no memory was left to fork it:
File "/opt/local/stow/python-2.7.2/lib/python2.7/subprocess.py", line 1130, in _execute_child
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
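One possible workaround (only a sketch; make_chunks is a hypothetical generator function standing in for whatever produces your data lazily) is to keep stdin as a pipe and write the data in chunks yourself, rather than handing one huge string to communicate():
from subprocess import Popen, PIPE

def feed(cmd, make_chunks):
    p = Popen(cmd, stdin=PIPE, stdout=PIPE)
    for chunk in make_chunks():   # chunks of bytes, produced lazily
        p.stdin.write(chunk)      # the child consumes data as we write it
    p.stdin.close()               # send EOF so the child can finish
    out = p.stdout.read()
    p.wait()
    return out
Note that writing and reading by hand like this can still deadlock if the child produces a lot of output before it has read all of its input; moving one of the two directions into a thread avoids that.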

This is overkill for grep, but through my journeys I've learned about the command-line tool expect, and the Python library pexpect.
expect: dialogue with interactive programs
pexpect: Python module for spawning child applications; controlling them; and responding to expected patterns in their output.
import pexpect
child = pexpect.spawn('grep f', timeout=10)
child.sendline('text to match')
print(child.before)
Working with interactive shell applications like ftp is trivial with pexpect
import pexpect
child = pexpect.spawn ('ftp ftp.openbsd.org')
child.expect ('Name .*: ')
child.sendline ('anonymous')
child.expect ('Password:')
child.sendline ('noah@example.com')
child.expect ('ftp> ')
child.sendline ('ls /pub/OpenBSD/')
child.expect ('ftp> ')
print child.before # Print the result of the ls command.
child.interact() # Give control of the child to the user.

import time
from subprocess import Popen, PIPE, STDOUT

p = Popen(['grep', 'f'], stdout=PIPE, stdin=PIPE, stderr=STDOUT,
          universal_newlines=True)  # text mode, so str can be written to stdin
p.stdin.write('one\n')
time.sleep(0.5)
p.stdin.write('two\n')
time.sleep(0.5)
p.stdin.write('three\n')
time.sleep(0.5)
testresult = p.communicate()[0]
time.sleep(0.5)
print(testresult)

Related

os.system and subprocess don't give me any output [duplicate]


Substitution of subprocess.PIPE in Python?

I am using the subprocess module to interact with the output of Linux commands. Below is my code.
import subprocess
import sys
file_name = 'myfile.txt'
p = subprocess.Popen("grep \"SYSTEM CONTROLLER\" "+ file_name, stdout=subprocess.PIPE, shell=True)
(output, err) = p.communicate()
print output.strip()
p = subprocess.Popen("grep \"controller\|worker\" "+ file_name, stdout=subprocess.PIPE, shell=True)
(output, err) = p.communicate()
lines = output.rstrip().split("\n")
print lines
My program hangs while executing second subprocess i.e.
p = subprocess.Popen("grep \"controller\|worker\""+ file_name,stdout=subprocess.PIPE, shell=True)
I got to know that the reason of process hang is buffer redirected to subprocess.PIPE is getting filled, which blocks the process from writing further.
I want to know if there is any way to avoid the buffer full situation so that my program keeps on executing without any hang issue ?
The actual issue is that there is a space missing between the pattern and the filename, and therefore grep waits for input on its standard input (stdin).
The "buffer full" issue (.communicate() is not susceptible to it) and p.stdout.read() (which fixes nothing: it loads the output into memory and, unlike .communicate(), it fails if more than one pipe is used) are red herrings here.
Drop shell=True and use a list argument for the command:
#!/usr/bin/env python
from subprocess import Popen, PIPE
p = Popen(["grep", r"controller\|worker", file_name], stdout=PIPE)
output = p.communicate()[0]
if p.returncode == 0:
print('found')
elif p.returncode == 1:
print('not found')
else:
print('error')
As it says at https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate:
Note: The data read is buffered in memory, so do not use this method
if the data size is large or unlimited.
Instead, use the file objects to read the text as it is produced:
output = p.stdout.read()
As long as no other pipes (e.g. stderr) fill up while you are reading, the process shouldn't be blocked.
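A minimal sketch of that approach for the grep example (file_name is the variable from the question; the print call is just a placeholder for your own processing):
from subprocess import Popen, PIPE

p = Popen(["grep", r"controller\|worker", file_name], stdout=PIPE,
          universal_newlines=True)
for line in p.stdout:            # consume the output as it is produced
    print(line.rstrip("\n"))     # replace with your own processing
p.stdout.close()
p.wait()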

Running shell command and capturing the output

I want to write a function that will execute a shell command and return its output as a string, no matter, is it an error or success message. I just want to get the same result that I would have gotten with the command line.
What would be a code example that would do such a thing?
For example:
def run_command(cmd):
# ??????
print run_command('mysqladmin create test -uroot -pmysqladmin12')
# Should output something like:
# mysqladmin: CREATE DATABASE failed; error: 'Can't create database 'test'; database exists'
In all officially maintained versions of Python, the simplest approach is to use the subprocess.check_output function:
>>> subprocess.check_output(['ls', '-l'])
b'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
check_output runs a single program that takes only arguments as input.1 It returns the result exactly as printed to stdout. If you need to write input to stdin, skip ahead to the run or Popen sections. If you want to execute complex shell commands, see the note on shell=True at the end of this answer.
The check_output function works in all officially maintained versions of Python. But for more recent versions, a more flexible approach is available.
Modern versions of Python (3.5 or higher): run
If you're using Python 3.5+, and do not need backwards compatibility, the new run function is recommended by the official documentation for most tasks. It provides a very general, high-level API for the subprocess module. To capture the output of a program, pass the subprocess.PIPE flag to the stdout keyword argument. Then access the stdout attribute of the returned CompletedProcess object:
>>> import subprocess
>>> result = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE)
>>> result.stdout
b'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
The return value is a bytes object, so if you want a proper string, you'll need to decode it. Assuming the called process returns a UTF-8-encoded string:
>>> result.stdout.decode('utf-8')
'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
This can all be compressed to a one-liner if desired:
>>> subprocess.run(['ls', '-l'], stdout=subprocess.PIPE).stdout.decode('utf-8')
'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
If you want to pass input to the process's stdin, you can pass a bytes object to the input keyword argument:
>>> cmd = ['awk', 'length($0) > 5']
>>> ip = 'foo\nfoofoo\n'.encode('utf-8')
>>> result = subprocess.run(cmd, stdout=subprocess.PIPE, input=ip)
>>> result.stdout.decode('utf-8')
'foofoo\n'
You can capture errors by passing stderr=subprocess.PIPE (capture to result.stderr) or stderr=subprocess.STDOUT (capture to result.stdout along with regular output). If you want run to throw an exception when the process returns a nonzero exit code, you can pass check=True. (Or you can check the returncode attribute of result above.) When security is not a concern, you can also run more complex shell commands by passing shell=True as described at the end of this answer.
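For example, a small sketch combining those two options (check=True plus a separately captured stderr); the deliberately failing ls call is only illustrative:
import subprocess

try:
    result = subprocess.run(['ls', 'no-such-file'],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            check=True)
except subprocess.CalledProcessError as exc:
    print(exc.returncode)               # nonzero exit status
    print(exc.stderr.decode('utf-8'))   # whatever ls printed to stderr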
Later versions of Python streamline the above further. In Python 3.7+, the above one-liner can be spelled like this:
>>> subprocess.run(['ls', '-l'], capture_output=True, text=True).stdout
'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
Using run this way adds just a bit of complexity, compared to the old way of doing things. But now you can do almost anything you need to do with the run function alone.
Older versions of Python (3-3.4): more about check_output
If you are using an older version of Python, or need modest backwards compatibility, you can use the check_output function as briefly described above. It has been available since Python 2.7.
subprocess.check_output(*popenargs, **kwargs)
It takes the same arguments as Popen (see below), and returns a string containing the program's output. The beginning of this answer has a more detailed usage example. In Python 3.5+, check_output is equivalent to executing run with check=True and stdout=PIPE, and returning just the stdout attribute.
You can pass stderr=subprocess.STDOUT to ensure that error messages are included in the returned output. When security is not a concern, you can also run more complex shell commands by passing shell=True as described at the end of this answer.
If you need to pipe from stderr or pass input to the process, check_output won't be up to the task. See the Popen examples below in that case.
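As a quick sketch of folding error messages into the returned value with check_output (again, the failing command is only illustrative):
import subprocess

try:
    out = subprocess.check_output(['ls', 'no-such-file'],
                                  stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as exc:
    out = exc.output   # stderr was merged into stdout, so the message is here
print(out.decode('utf-8'))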
Complex applications and legacy versions of Python (2.6 and below): Popen
If you need deep backwards compatibility, or if you need more sophisticated functionality than check_output or run provide, you'll have to work directly with Popen objects, which encapsulate the low-level API for subprocesses.
The Popen constructor accepts either a single command without arguments, or a list containing a command as its first item, followed by any number of arguments, each as a separate item in the list. shlex.split can help parse strings into appropriately formatted lists. Popen objects also accept a host of different arguments for process IO management and low-level configuration.
To send input and capture output, communicate is almost always the preferred method. As in:
output = subprocess.Popen(["mycmd", "myarg"],
stdout=subprocess.PIPE).communicate()[0]
Or
>>> import subprocess
>>> p = subprocess.Popen(['ls', '-a'], stdout=subprocess.PIPE,
... stderr=subprocess.PIPE)
>>> out, err = p.communicate()
>>> print out
.
..
foo
If you set stdin=PIPE, communicate also allows you to pass data to the process via stdin:
>>> cmd = ['awk', 'length($0) > 5']
>>> p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
... stderr=subprocess.PIPE,
... stdin=subprocess.PIPE)
>>> out, err = p.communicate('foo\nfoofoo\n')
>>> print out
foofoo
Note Aaron Hall's answer, which indicates that on some systems, you may need to set stdout, stderr, and stdin all to PIPE (or DEVNULL) to get communicate to work at all.
In some rare cases, you may need complex, real-time output capturing. Vartec's answer suggests a way forward, but methods other than communicate are prone to deadlocks if not used carefully.
As with all the above functions, when security is not a concern, you can run more complex shell commands by passing shell=True.
Notes
1. Running shell commands: the shell=True argument
Normally, each call to run, check_output, or the Popen constructor executes a single program. That means no fancy bash-style pipes. If you want to run complex shell commands, you can pass shell=True, which all three functions support. For example:
>>> subprocess.check_output('cat books/* | wc', shell=True, text=True)
' 1299377 17005208 101299376\n'
However, doing this raises security concerns. If you're doing anything more than light scripting, you might be better off calling each process separately, and passing the output from each as an input to the next, via
run(cmd, [stdout=etc...], input=other_output)
Or
Popen(cmd, [stdout=etc...]).communicate(other_output)
The temptation to directly connect pipes is strong; resist it. Otherwise, you'll likely see deadlocks or have to do hacky things like this.
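A small sketch of that chaining with two run calls (sort and uniq are only placeholders for your own pipeline):
from subprocess import run, PIPE

first = run(['sort'], input=b'banana\napple\napple\n', stdout=PIPE)
second = run(['uniq'], input=first.stdout, stdout=PIPE)
print(second.stdout.decode('utf-8'))
# -> apple
# -> banana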
This is way easier, but it only works on Unix (including Cygwin) and Python 2.
import commands
print commands.getstatusoutput('wc -l file')
It returns a tuple with the (return_value, output).
For a solution that works in both Python2 and Python3, use the subprocess module instead:
from subprocess import Popen, PIPE
output = Popen(["date"],stdout=PIPE)
response = output.communicate()
print response
I had the same problem but figured out a very simple way of doing this:
import subprocess
output = subprocess.getoutput("ls -l")
print(output)
Note: This solution is Python 3 specific, as subprocess.getoutput() doesn't work in Python 2.
Something like this:
import subprocess

def runProcess(exe):
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        # returns None while subprocess is running
        retcode = p.poll()
        line = p.stdout.readline()
        yield line
        if retcode is not None:
            break
Note that I'm redirecting stderr to stdout; it might not be exactly what you want, but I want error messages too.
This function yields line by line as they come (normally you'd have to wait for subprocess to finish to get the output as a whole).
For your case the usage would be:
for line in runProcess('mysqladmin create test -uroot -pmysqladmin12'.split()):
print line,
This is a tricky but super simple solution which works in many situations:
import os
os.system('sample_cmd > tmp')
print(open('tmp', 'r').read())
A temporary file (here tmp) is created with the output of the command, and you can read your desired output from it.
Extra note from the comments: you can remove the tmp file in the case of a one-time job; if you need to do this several times, there is no need to delete it.
os.remove('tmp')
Vartec's answer doesn't read all lines, so I made a version that did:
def run_command(command):
    p = subprocess.Popen(command,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    return iter(p.stdout.readline, b'')
Usage is the same as the accepted answer:
command = 'mysqladmin create test -uroot -pmysqladmin12'.split()
for line in run_command(command):
print(line)
You can use the following commands to run any shell command. I have used them on Ubuntu.
import os
os.popen('your command here').read()
Note: os.popen() is deprecated since Python 2.6. Now you should use subprocess.Popen. Below is an example:
import subprocess
p = subprocess.Popen("Your command", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0]
print p.split("\n")
I had a slightly different flavor of the same problem with the following requirements:
Capture and return STDOUT messages as they accumulate in the STDOUT buffer (i.e. in realtime).
@vartec solved this Pythonically with his use of generators and the 'yield'
keyword above
Print all STDOUT lines (even if process exits before STDOUT buffer can be fully read)
Don't waste CPU cycles polling the process at high-frequency
Check the return code of the subprocess
Print STDERR (separate from STDOUT) if we get a non-zero error return code.
I've combined and tweaked previous answers to come up with the following:
import subprocess
from time import sleep

def run_command(command):
    p = subprocess.Popen(command,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         shell=True)
    # Read stdout from subprocess until the buffer is empty!
    for line in iter(p.stdout.readline, b''):
        if line:  # Don't print blank lines
            yield line
    # This ensures the process has completed, AND sets the 'returncode' attr
    while p.poll() is None:
        sleep(.1)  # Don't waste CPU cycles
    # Empty STDERR buffer
    err = p.stderr.read()
    if p.returncode != 0:
        # The run_command() function is responsible for logging STDERR
        print("Error: " + str(err))
This code would be executed the same as previous answers:
for line in run_command(cmd):
print(line)
Your mileage may vary; I attempted @senderle's spin on Vartec's solution in Windows on Python 2.6.5, but I was getting errors, and no other solutions worked. My error was: WindowsError: [Error 6] The handle is invalid.
I found that I had to assign PIPE to every handle to get it to return the output I expected - the following worked for me.
import subprocess

def run_command(cmd):
    """given shell command, returns communication tuple of stdout and stderr"""
    return subprocess.Popen(cmd,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            stdin=subprocess.PIPE).communicate()
and call like this, ([0] gets the first element of the tuple, stdout):
run_command('tracert 11.1.0.1')[0]
After learning more, I believe I need these pipe arguments because I'm working on a custom system that uses different handles, so I had to directly control all the std's.
To stop console popups (with Windows), do this:
def run_command(cmd):
    """given shell command, returns communication tuple of stdout and stderr"""
    # instantiate a startupinfo obj:
    startupinfo = subprocess.STARTUPINFO()
    # set the use-show-window flag; might make conditional on being in Windows:
    startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
    # pass as the startupinfo keyword argument:
    return subprocess.Popen(cmd,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            stdin=subprocess.PIPE,
                            startupinfo=startupinfo).communicate()
run_command('tracert 11.1.0.1')
On Python 3.7+, use subprocess.run and pass capture_output=True:
import subprocess
result = subprocess.run(['echo', 'hello', 'world'], capture_output=True)
print(repr(result.stdout))
This will return bytes:
b'hello world\n'
If you want it to convert the bytes to a string, add text=True:
result = subprocess.run(['echo', 'hello', 'world'], capture_output=True, text=True)
print(repr(result.stdout))
This will read the bytes using your default encoding:
'hello world\n'
If you need to manually specify a different encoding, use encoding="your encoding" instead of text=True:
result = subprocess.run(['echo', 'hello', 'world'], capture_output=True, encoding="utf8")
print(repr(result.stdout))
Splitting the initial command for the subprocess might be tricky and cumbersome.
Use shlex.split() to help yourself out.
Sample command
git log -n 5 --since "5 years ago" --until "2 year ago"
The code
from subprocess import check_output
from shlex import split
res = check_output(split('git log -n 5 --since "5 years ago" --until "2 year ago"'))
print(res)
>>> b'commit 7696ab087a163e084d6870bb4e5e4d4198bdc61a\nAuthor: Artur Barseghyan...'
Without shlex.split() the code would look as follows
res = check_output([
    'git',
    'log',
    '-n',
    '5',
    '--since',
    '5 years ago',
    '--until',
    '2 year ago',
])
print(res)
>>> b'commit 7696ab087a163e084d6870bb4e5e4d4198bdc61a\nAuthor: Artur Barseghyan...'
Here is a solution that works whether you want to print the output while the process is running or not.
I also added the current working directory; it has been useful to me more than once.
Hoping the solution helps someone :).
import subprocess

def run_command(cmd_and_args, print_constantly=False, cwd=None):
    """Runs a system command.

    :param cmd_and_args: the command to run, with or without a pipe (|).
    :param print_constantly: if True, the output is logged continuously until the command ends.
    :param cwd: the current working directory (the directory from which you would like to execute the command)
    :return: a tuple containing the return code, the stdout and the stderr of the command
    """
    output = []
    process = subprocess.Popen(cmd_and_args, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd)
    while True:
        next_line = process.stdout.readline()
        if next_line:
            output.append(str(next_line))
            if print_constantly:
                print(next_line)
        elif process.poll() is not None:  # no more output and the process has exited
            break
    error = process.communicate()[1]
    return process.returncode, '\n'.join(output), error
For some reason, this one works on Python 2.7 and you only need to import os!
import os

def bash(command):
    output = os.popen(command).read()
    return output

print_me = bash('ls -l')
print(print_me)
If you need to run a shell command on multiple files, this did the trick for me.
import os
import subprocess

# Define a function for running commands and capturing stdout line by line
# (Modified from Vartec's solution because it wasn't printing all lines)
def runProcess(exe):
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return iter(p.stdout.readline, b'')

# Get all filenames in working directory
for filename in os.listdir('./'):
    # This command will be run on each file
    cmd = 'nm ' + filename
    # Run the command and capture the output line by line.
    for line in runProcess(cmd.split()):
        # Eliminate leading and trailing whitespace
        line = line.strip()
        # Split the output
        output = line.split()
        # Filter the output and print relevant lines
        if len(output) > 2:
            if output[2] == 'set_program_name':
                print filename
                print line
Edit: Just saw Max Persson's solution with J.F. Sebastian's suggestion. Went ahead and incorporated that.
According to @senderle, if you use Python 3.6 like me:
import subprocess

def sh(cmd, input=""):
    rst = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, input=input.encode("utf-8"))
    assert rst.returncode == 0, rst.stderr.decode("utf-8")
    return rst.stdout.decode("utf-8")

sh("ls -a")
This will act just as if you ran the command in a shell.
An improvement for better logging: for richer output you can iterate over the process's pipes.
from subprocess import Popen, PIPE

def shell_command(cmd):
    result = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
    output = iter(result.stdout.readline, b'')
    error = iter(result.stderr.readline, b'')
    print("##### Output #####")
    for line in output:
        print(line.decode("utf-8"))
    print("##### Error #####")
    for line in error:
        print(line.decode("utf-8"))  # Convert bytes to str

shell_command("ls")  # this will display all the files & folders in the directory
Another method, using getstatusoutput (easy to understand):
from subprocess import getstatusoutput

status_code, output = getstatusoutput("ls")
print(output)  # this will give the terminal output, i.e. the files & folders in the directory
If you use the subprocess Python module, you are able to handle the STDOUT, STDERR and return code of the command separately. Below is an example of a complete command-caller implementation. Of course you can extend it with try..except if you want.
The function below returns the STDOUT, STDERR and return code so you can handle them in another script.
import subprocess

def command_caller(command=None):
    sp = subprocess.Popen(command, stderr=subprocess.PIPE, stdout=subprocess.PIPE, shell=False)
    out, err = sp.communicate()
    if sp.returncode:
        print(
            "Return code: %(ret_code)s Error message: %(err_msg)s"
            % {"ret_code": sp.returncode, "err_msg": err}
        )
    return sp.returncode, out, err
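A small usage sketch (the command list here is only an example):
returncode, out, err = command_caller(["ls", "-l"])
if returncode == 0:
    print(out.decode("utf-8"))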
I would like to suggest simppl as an option for consideration. It is a module available via PyPI (pip install simppl) and runs on Python 3.
simppl allows the user to run shell commands and read the output from the screen.
The developers suggest three types of use cases:
The simplest usage will look like this:
from simppl.simple_pipeline import SimplePipeline
sp = SimplePipeline(start=0, end=100)
sp.print_and_run('<YOUR_FIRST_OS_COMMAND>')
sp.print_and_run('<YOUR_SECOND_OS_COMMAND>')
To run multiple commands concurrently use:
commands = ['<YOUR_FIRST_OS_COMMAND>', '<YOUR_SECOND_OS_COMMAND>']
max_number_of_processes = 4
sp.run_parallel(commands, max_number_of_processes)
Finally, if your project uses the cli module, you can run directly another command_line_tool as part of a pipeline. The other tool will
be run from the same process, but it will appear from the logs as
another command in the pipeline. This enables smoother debugging and
refactoring of tools calling other tools.
from example_module import example_tool
sp.print_and_run_clt(example_tool.run, ['first_number', 'second_number'],
                     {'-key1': 'val1', '-key2': 'val2'},
                     {'--flag'})
Note that the printing to STDOUT/STDERR is via python's logging module.
Here is a complete code to show how simppl works:
import logging
from logging.config import dictConfig
logging_config = dict(
    version = 1,
    formatters = {
        'f': {'format':
              '%(asctime)s %(name)-12s %(levelname)-8s %(message)s'}
    },
    handlers = {
        'h': {'class': 'logging.StreamHandler',
              'formatter': 'f',
              'level': logging.DEBUG}
    },
    root = {
        'handlers': ['h'],
        'level': logging.DEBUG,
    },
)
dictConfig(logging_config)
from simppl.simple_pipeline import SimplePipeline
sp = SimplePipeline(0, 100)
sp.print_and_run('ls')
Here is a simple and flexible solution that works on a variety of OS versions, and both Python 2 and 3, using IPython in shell mode:
from IPython.terminal.embed import InteractiveShellEmbed
my_shell = InteractiveShellEmbed()
result = my_shell.getoutput("echo hello world")
print(result)
Out: ['hello world']
It has a couple of advantages
It only requires an IPython install, so you don't really need to worry about your specific Python or OS version when using it. It comes with Jupyter, which has a wide range of support.
It takes a simple string by default - so no need to use shell mode arg or string splitting, making it slightly cleaner IMO
It also makes it cleaner to easily substitute variables or even entire Python commands in the string itself
To demonstrate:
var = "hello world "
result = my_shell.getoutput("echo {var*2}")
print(result)
Out: ['hello world hello world']
Just wanted to give you an extra option, especially if you already have Jupyter installed
Naturally, if you are in an actual Jupyter notebook as opposed to a .py script you can also always do:
result = !echo hello world
print(result)
To accomplish the same.
The output can be redirected to a text file and then read back.
import subprocess
import os
import tempfile
def execute_to_file(command):
    """
    This function executes the command
    and passes its output to a tempfile, then reads it back.
    It is useful for processes that deploy child processes.
    """
    temp_file = tempfile.NamedTemporaryFile(delete=False)
    temp_file.close()
    path = temp_file.name
    command = command + " > " + path
    proc = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
    if proc.stderr:
        # if the command failed, return
        os.unlink(path)
        return
    with open(path, 'r') as f:
        data = f.read()
    os.unlink(path)
    return data

if __name__ == "__main__":
    path = "Somepath"
    command = 'ecls.exe /files ' + path
    print(execute_to_file(command))
e.g., execute('ls -ahl')
It differentiates three/four possible returns and OS platforms:
no output, but ran successfully
output of an empty line, ran successfully
the run failed
output of something, ran successfully
The function is below:
import os
import platform
import subprocess
from time import sleep

def execute(cmd, output=True, DEBUG_MODE=False):
    """Executes a bash command.
    (cmd, output=True)
    output: whether to print shell output to screen; only affects screen display, does not affect returned values
    return: ...regardless of output=True/False...
        returns shell output as a list, with each element being a line of string (whitespace stripped on both sides) from the output;
        could be
        [], ie, len()=0 --> no output;
        [''] --> output empty line;
        None --> error occurred, see below
    if an error occurs, returns None (ie, is None) and prints the error message to screen
    """
    if not DEBUG_MODE:
        print "Command: " + cmd

        # https://stackoverflow.com/a/40139101/2292993
        def _execute_cmd(cmd):
            if os.name == 'nt' or platform.system() == 'Windows':
                # set stdin, out, err all to PIPE to get results (other than None) after running the Popen() instance
                p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
            else:
                # Use bash; the default is sh
                p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, executable="/bin/bash")

            # the Popen() instance starts running once instantiated (??)
            # additionally, communicate(), or poll() and wait for the process to terminate
            # communicate() accepts optional input as stdin to the pipe (requires setting stdin=subprocess.PIPE above), returns out, err as a tuple
            # if communicate(), the results are buffered in memory

            # Read stdout from subprocess until the buffer is empty!
            # if an error occurs, the stdout is '', which means the below loop is essentially skipped
            # A prefix of 'b' or 'B' is ignored in Python 2;
            # it indicates that the literal should become a bytes literal in Python 3
            # (e.g. when code is automatically converted with 2to3).
            # return iter(p.stdout.readline, b'')
            for line in iter(p.stdout.readline, b''):
                # # Windows has \r\n, Unix has \n, old Mac has \r
                # if line not in ['','\n','\r','\r\n']: # Don't print blank lines
                yield line
            while p.poll() is None:
                sleep(.1)  # Don't waste CPU cycles
            # Empty STDERR buffer
            err = p.stderr.read()
            if p.returncode != 0:
                # responsible for logging STDERR
                print("Error: " + str(err))
                yield None

        out = []
        for line in _execute_cmd(cmd):
            # error did not occur earlier
            if line is not None:
                # trailing comma to avoid a newline (by print itself) being printed
                if output: print line,
                out.append(line.strip())
            else:
                # error occurred earlier
                out = None
        return out
    else:
        print "Simulation! The command is " + cmd
        print ""

read subprocess stdout line by line

My python script uses subprocess to call a linux utility that is very noisy. I want to store all of the output to a log file and show some of it to the user. I thought the following would work, but the output doesn't show up in my application until the utility has produced a significant amount of output.
#fake_utility.py, just generates lots of output over time
import time
i = 0
while True:
    print hex(i)*512
    i += 1
    time.sleep(0.5)
#filters output
import subprocess
proc = subprocess.Popen(['python','fake_utility.py'],stdout=subprocess.PIPE)
for line in proc.stdout:
    #the real code does filtering here
    print "test:", line.rstrip()
The behavior I really want is for the filter script to print each line as it is received from the subprocess. Sorta like what tee does but with python code.
What am I missing? Is this even possible?
Update:
If a sys.stdout.flush() is added to fake_utility.py, the code has the desired behavior in python 3.1. I'm using python 2.6. You would think that using proc.stdout.xreadlines() would work the same as py3k, but it doesn't.
Update 2:
Here is the minimal working code.
#fake_utility.py, just generates lots of output over time
import sys, time
for i in range(10):
    print i
    sys.stdout.flush()
    time.sleep(0.5)
#display output line by line
import subprocess
proc = subprocess.Popen(['python','fake_utility.py'],stdout=subprocess.PIPE)
#works in python 3.0+
#for line in proc.stdout:
for line in iter(proc.stdout.readline,''):
    print line.rstrip()
I think the problem is with the statement for line in proc.stdout, which reads the entire input before iterating over it. The solution is to use readline() instead:
#filters output
import subprocess
proc = subprocess.Popen(['python','fake_utility.py'],stdout=subprocess.PIPE)
while True:
    line = proc.stdout.readline()
    if not line:
        break
    #the real code does filtering here
    print "test:", line.rstrip()
Of course you still have to deal with the subprocess' buffering.
Note: according to the documentation the solution with an iterator should be equivalent to using readline(), except for the read-ahead buffer, but (or exactly because of this) the proposed change did produce different results for me (Python 2.5 on Windows XP).
Bit late to the party, but was surprised not to see what I think is the simplest solution here:
import io
import subprocess
proc = subprocess.Popen(["prog", "arg"], stdout=subprocess.PIPE)
for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"):  # or another encoding
    # do something with line, e.g.:
    print(line, end='')
(This requires Python 3.)
Indeed, once you have sorted out the iterator, buffering may become your problem. You can tell the Python in the sub-process not to buffer its output.
proc = subprocess.Popen(['python','fake_utility.py'],stdout=subprocess.PIPE)
becomes
proc = subprocess.Popen(['python','-u', 'fake_utility.py'],stdout=subprocess.PIPE)
I have needed this when calling python from within python.
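A related sketch (not part of the answer above, just an alternative under the same assumptions): you can get unbuffered output by setting the PYTHONUNBUFFERED environment variable for the child, which helps when you cannot edit the command line:
import os
import subprocess

# sketch: run the question's fake_utility.py unbuffered by setting
# PYTHONUNBUFFERED in the child's environment instead of passing -u
env = dict(os.environ, PYTHONUNBUFFERED='1')
proc = subprocess.Popen(['python', 'fake_utility.py'],
                        stdout=subprocess.PIPE,
                        env=env)
for line in iter(proc.stdout.readline, b''):
    print(line.rstrip())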
You want to pass these extra parameters to subprocess.Popen:
bufsize=1, universal_newlines=True
Then you can iterate as in your example. (Tested with Python 3.5)
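A minimal sketch of what that looks like, reusing fake_utility.py from the question (the Update 2 version that flushes its output):
import subprocess

# bufsize=1 plus universal_newlines=True gives a line-buffered, text-mode pipe,
# so each line can be consumed as soon as the child writes (and flushes) it
proc = subprocess.Popen(['python', 'fake_utility.py'],
                        stdout=subprocess.PIPE,
                        bufsize=1,
                        universal_newlines=True)
for line in proc.stdout:
    print('test:', line.rstrip())
proc.wait()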
A function that allows iterating over both stdout and stderr concurrently, in realtime, line by line
In case you need to get the output stream for both stdout and stderr at the same time, you can use the following function.
The function uses Queues to merge both Popen pipes into a single iterator.
Here we create the function read_popen_pipes():
from queue import Queue, Empty
from concurrent.futures import ThreadPoolExecutor

def enqueue_output(file, queue):
    for line in iter(file.readline, ''):
        queue.put(line)
    file.close()

def read_popen_pipes(p):
    with ThreadPoolExecutor(2) as pool:
        q_stdout, q_stderr = Queue(), Queue()
        pool.submit(enqueue_output, p.stdout, q_stdout)
        pool.submit(enqueue_output, p.stderr, q_stderr)
        while True:
            if p.poll() is not None and q_stdout.empty() and q_stderr.empty():
                break
            out_line = err_line = ''
            try:
                out_line = q_stdout.get_nowait()
            except Empty:
                pass
            try:
                err_line = q_stderr.get_nowait()
            except Empty:
                pass
            yield (out_line, err_line)
read_popen_pipes() in use:
import subprocess as sp

with sp.Popen(my_cmd, stdout=sp.PIPE, stderr=sp.PIPE, text=True) as p:
    for out_line, err_line in read_popen_pipes(p):
        # Do stuff with each line, e.g.:
        print(out_line, end='')
        print(err_line, end='')

    return_code = p.poll()  # exit status; use `return p.poll()` if this code lives inside a function
You can also read all the lines at once, without a loop. This works in Python 3.6:
import os
import subprocess
process = subprocess.Popen(command, stdout=subprocess.PIPE)
list_of_byte_strings = process.stdout.readlines()
Python 3.5 added the run() function to the subprocess module, which returns a CompletedProcess object. With this you are fine using proc.stdout.splitlines() (note that capture_output and text require Python 3.7+):
proc = subprocess.run(command, shell=True, capture_output=True, text=True, check=True)
for line in proc.stdout.splitlines():
    print("stdout:", line)
See also How to Execute Shell Commands in Python Using the Subprocess Run Method
I tried this with Python 3 and it worked.
When you use Popen to spawn the new process, you tell the operating system to PIPE the stdout of the child process so the parent process can read it; here, stderr is redirected into stdout (stderr=subprocess.STDOUT).
In output_reader we read each line of the child's stdout by wrapping it in an iterator that yields the child's output line by line, as soon as a new line is ready.
import subprocess
import threading
import time

def output_reader(proc):
    for line in iter(proc.stdout.readline, b''):
        print('got line: {0}'.format(line.decode('utf-8')), end='')

def main():
    proc = subprocess.Popen(['python', 'fake_utility.py'],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)

    t = threading.Thread(target=output_reader, args=(proc,))
    t.start()

    try:
        time.sleep(0.2)
        # the parent does its own work while the reader thread prints the
        # child's output; bounded here so the finally block is reachable
        for i in range(10):
            print(hex(i) * 512)
            time.sleep(0.5)
    finally:
        proc.terminate()
        try:
            proc.wait(timeout=0.2)
            print('== subprocess exited with rc =', proc.returncode)
        except subprocess.TimeoutExpired:
            print('subprocess did not terminate in time')
    t.join()

if __name__ == '__main__':
    main()
The following modification of Rômulo's answer works for me on Python 2 and 3 (2.7.12 and 3.6.1):
import os
import subprocess
process = subprocess.Popen(command, stdout=subprocess.PIPE)
while True:
    line = process.stdout.readline()
    if line != b'':
        os.write(1, line)
    else:
        break
I was having a problem with the arg list of Popen when updating servers; the following code resolves this a bit.
import getpass
from subprocess import Popen, PIPE
username = 'user1'
ip = '127.0.0.1'
print ('What is the password?')
password = getpass.getpass()
cmd1 = f"""sshpass -p {password} ssh {username}@{ip}"""
cmd2 = f"""echo {password} | sudo -S apt update"""
cmd3 = " && "
cmd4 = f"""echo {password} | sudo -S apt upgrade -y"""
cmd5 = " && "
cmd6 = "exit"
commands = [cmd1, cmd2, cmd3, cmd4, cmd5, cmd6]
command = " ".join(commands)
cmd = command.split()
with Popen(cmd, stdout=PIPE, bufsize=1, universal_newlines=True) as p:
    for line in p.stdout:
        print(line, end='')
And to run the update on a local computer, the following code example does this.
import getpass
from subprocess import Popen, PIPE
print ('What is the password?')
password = getpass.getpass()
cmd1_local = f"""apt update"""
cmd2_local = f"""apt upgrade -y"""
commands = [cmd1_local, cmd2_local]
with Popen(['echo', password], stdout=PIPE) as auth:
    for cmd in commands:
        cmd = cmd.split()
        with Popen(['sudo','-S'] + cmd, stdin=auth.stdout, stdout=PIPE, bufsize=1, universal_newlines=True) as p:
            for line in p.stdout:
                print(line, end='')

Retrieving the output of subprocess.call() [duplicate]

This question already has answers here:
Store output of subprocess.Popen call in a string [duplicate]
(15 answers)
Closed 4 years ago.
How can I get the output of a process run using subprocess.call()?
Passing a StringIO.StringIO object to stdout gives this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 444, in call
return Popen(*popenargs, **kwargs).wait()
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 588, in __init__
errread, errwrite) = self._get_handles(stdin, stdout, stderr)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 945, in _get_handles
c2pwrite = stdout.fileno()
AttributeError: StringIO instance has no attribute 'fileno'
>>>
If you have Python version >= 2.7, you can use subprocess.check_output which basically does exactly what you want (it returns standard output as string).
Simple example (linux version, see note):
import subprocess
print subprocess.check_output(["ping", "-c", "1", "8.8.8.8"])
Note that the ping command uses Linux notation (-c for count). If you try this on Windows, remember to change it to -n for the same result.
As commented below you can find a more detailed explanation in this other answer.
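If the snippet has to run on both platforms, a small sketch (just an illustration, not part of the answer above) can pick the right flag at runtime:
import platform
import subprocess

# choose the ping count flag based on the OS: -n on Windows, -c elsewhere
count_flag = '-n' if platform.system() == 'Windows' else '-c'
print(subprocess.check_output(['ping', count_flag, '1', '8.8.8.8']).decode())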
Output from subprocess.call() should only be redirected to files.
You should use subprocess.Popen() instead. Then you can pass subprocess.PIPE for the stderr, stdout, and/or stdin parameters and read from the pipes by using the communicate() method:
from subprocess import Popen, PIPE
p = Popen(['program', 'arg1'], stdin=PIPE, stdout=PIPE, stderr=PIPE)
output, err = p.communicate(b"input data that is passed to subprocess' stdin")
rc = p.returncode
The reasoning is that the file-like object used by subprocess.call() must have a real file descriptor, and thus implement the fileno() method. Just using any file-like object won't do the trick.
See here for more info.
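To illustrate the first point, here is a minimal sketch (the file name output.txt is just a placeholder) that redirects subprocess.call() output to a real file, i.e. an object that actually has a fileno():
import subprocess

# a real file object has a fileno(), so call() accepts it as stdout
with open('output.txt', 'w') as outfile:
    subprocess.call(['ls', '-l'], stdout=outfile)

with open('output.txt') as outfile:
    print(outfile.read())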
For python 3.5+ it is recommended that you use the run function from the subprocess module. This returns a CompletedProcess object, from which you can easily obtain the output as well as return code.
from subprocess import PIPE, run
command = ['echo', 'hello']
result = run(command, stdout=PIPE, stderr=PIPE, universal_newlines=True)
print(result.returncode, result.stdout, result.stderr)
I have the following solution. It captures the exit code, the stdout, and the stderr too of the executed external command:
import shlex
from subprocess import Popen, PIPE
def get_exitcode_stdout_stderr(cmd):
    """
    Execute the external command and get its exitcode, stdout and stderr.
    """
    args = shlex.split(cmd)

    proc = Popen(args, stdout=PIPE, stderr=PIPE)
    out, err = proc.communicate()
    exitcode = proc.returncode
    #
    return exitcode, out, err
cmd = "..." # arbitrary external command, e.g. "python mytest.py"
exitcode, out, err = get_exitcode_stdout_stderr(cmd)
I also have a blog post on it here.
Edit: the solution was updated to a newer one that doesn't need to write to temp files.
I recently just figured out how to do this, and here's some example code from a current project of mine:
#Getting the random picture.
#First find all pictures:
import shlex, subprocess
cmd = 'find ../Pictures/ -regex ".*\(JPG\|NEF\|jpg\)" '
#cmd = raw_input("shell:")
args = shlex.split(cmd)
output,error = subprocess.Popen(args,stdout = subprocess.PIPE, stderr= subprocess.PIPE).communicate()
#Another way to get output
#output = subprocess.Popen(args,stdout = subprocess.PIPE).stdout
ber = raw_input("search complete, display results?")
print output
#... and on to the selection process ...
You now have the output of the command stored in the variable "output". "stdout = subprocess.PIPE" tells the class to create a file object named 'stdout' from within Popen. The communicate() method, from what I can tell, just acts as a convenient way to return a tuple of the output and the errors from the process you've run. Also, the process is run when instantiating Popen.
The key is to use the function subprocess.check_output
For example, the following function captures stdout and stderr of the process and returns that as well as whether or not the call succeeded. It is Python 2 and 3 compatible:
from subprocess import check_output, CalledProcessError, STDOUT
def system_call(command):
    """
    params:
        command: list of strings, ex. `["ls", "-l"]`
    returns: output, success
    """
    try:
        output = check_output(command, stderr=STDOUT).decode()
        success = True
    except CalledProcessError as e:
        output = e.output.decode()
        success = False
    return output, success
output, success = system_call(["ls", "-l"])
If you want to pass commands as strings rather than arrays, use this version:
from subprocess import check_output, CalledProcessError, STDOUT
import shlex
def system_call(command):
    """
    params:
        command: string, ex. `"ls -l"`
    returns: output, success
    """
    command = shlex.split(command)
    try:
        output = check_output(command, stderr=STDOUT).decode()
        success = True
    except CalledProcessError as e:
        output = e.output.decode()
        success = False
    return output, success
output, success = system_call("ls -l")
In Ipython shell:
In [8]: import subprocess
In [9]: s=subprocess.check_output(["echo", "Hello World!"])
In [10]: s
Out[10]: 'Hello World!\n'
Based on sargue's answer. Credit to sargue.
