I am unable to pass commands to stdin in Python 3.2.5. I have tried the following two approaches.
Also: This question is a continuation of a previous question.
from subprocess import Popen, PIPE, STDOUT
import time
p = Popen([r'fileLoc/uploader.exe'], shell=True, stdout=PIPE, stdin=PIPE, stderr=STDOUT)
p.stdin.write('uploader -i file.txt -d outputFolder\n')
print (p.communicate()[0])
p.stdin.close()
I also get numbers such as 96, 0, and 85 returned when I try the code in the IDLE interpreter, along with errors like the following from print(p.communicate()[0]):
Traceback (most recent call last):
File "<pyshell#132>", line 1, in <module>
p.communicate()[0]
File "C:\Python32\lib\subprocess.py", line 832, in communicate
return self._communicate(input)
File "C:\Python32\lib\subprocess.py", line 1060, in _communicate
self.stdin.close()
IOError: [Errno 22] Invalid argument
I have also used:
from subprocess import Popen, PIPE, STDOUT
import time
p = Popen([r'fileLoc/uploader.exe'], shell=True, stdout=PIPE, stdin=PIPE, stderr=STDOUT)
p.communicate(input=bytes(r'uploader -i file.txt -d outputFolder\n', 'UTF-8'))[0]
print (p.communicate()[0])
p.stdin.close()
but with no luck.
Don't use shell=True when passing the arguments as a list.
stdin.write needs a bytes object as its argument. You are trying to write a str.
communicate() writes the input to stdin and returns a tuple with the output of stdout and stderr, and it waits until the process has finished. You can only use it once; trying to call it a second time will result in an error.
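Putting those points together, a corrected version of your first attempt might look like this (a minimal sketch, reusing the uploader.exe path and argument line from your question):
from subprocess import Popen, PIPE, STDOUT

# No shell=True: the program is given directly as a list.
p = Popen([r'fileLoc/uploader.exe'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)

# communicate() takes bytes, writes them to stdin, waits for the process
# to finish, and returns (stdout, stderr); it may only be called once.
out, _ = p.communicate(input=b'uploader -i file.txt -d outputFolder\n')
print(out.decode())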
Are you sure the line you're writing should be passed to your process on stdin? Shouldn't it be the command you're trying to run?
Pass command arguments as arguments, not as stdin
The command might read the username/password from the console directly without using subprocess' stdin. In this case you might need the winpexpect or SendKeys modules. See my answer to a similar question that has corresponding code examples.
Here's an example of how to start a subprocess with arguments, pass some input, and write the merged subprocess stdout/stderr to a file:
#!/usr/bin/env python3
import os
from subprocess import Popen, PIPE, STDOUT

command = r'fileLoc\uploader.exe -i file.txt -d outputFolder'  # use a str on Windows
input_bytes = os.linesep.join(["username@email.com", "password"]).encode("ascii")
with open('command_output.txt', 'wb') as outfile:
    with Popen(command, stdin=PIPE, stdout=outfile, stderr=STDOUT) as p:
        p.communicate(input_bytes)
Related
How do I use subprocess if my temp-file argument is in the middle of the command? For example, a terminal command looks like this:
program subprogram -a -b tmpFILE otherFILE
I tried variations of this:
from subprocess import Popen, PIPE
from tempfile import SpooledTemporaryFile as tempfile

tmpFILE = tempfile()
tmpFILE.write(someList)
tmpFILE.seek(0)
print Popen(['program', 'subprogram', '-a', '-b', otherFile], stdout=PIPE, stdin=tmpFILE).stdout.read()
tmpFILE.close()
or
print Popen(['program', 'subprogram', '-a', '-b', tmpFILE, otherFile], stdout=PIPE, stdin=tmpFILE).stdout.read()
but nothing works... My temporary file generated in Python shouldn't be the last parameter.
Thanks
Is there a reason to use SpooledTemporaryFile instead of other types of temp file? If not, I recommend using NamedTemporaryFile, as you can retrieve the name from it. I tried to retrieve the name from a SpooledTemporaryFile and got '<fdopen>', which does not seem to be valid.
Here is the suggested code:
from subprocess import Popen, PIPE
import tempfile

with tempfile.NamedTemporaryFile() as temp_file:
    temp_file.write(someList)
    temp_file.flush()
    process = Popen(['program', 'subprogram', '-a', '-b', temp_file.name, otherFile], stdout=PIPE, stderr=PIPE)
    stdout, stderr = process.communicate()
Discussion
Using the with statement, you don't have to worry about closing the file. As soon as the with block is finished, the file is automatically closed.
Instead of calling seek, you should call flush to commit your file buffer to disk before invoking program.
If I do the following:
import subprocess
from cStringIO import StringIO
subprocess.Popen(['grep','f'],stdout=subprocess.PIPE,stdin=StringIO('one\ntwo\nthree\nfour\nfive\nsix\n')).communicate()[0]
I get:
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 533, in __init__
(p2cread, p2cwrite,
File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 830, in _get_handles
p2cread = stdin.fileno()
AttributeError: 'cStringIO.StringI' object has no attribute 'fileno'
Apparently a cStringIO.StringIO object doesn't quack close enough to a file duck to suit subprocess.Popen. How do I work around this?
Popen.communicate() documentation:
Note that if you want to send data to the process's stdin, you need to create the Popen object with stdin=PIPE. Similarly, to get anything other than None in the result tuple, you need to give stdout=PIPE and/or stderr=PIPE too.
Replacing os.popen*
pipe = os.popen(cmd, 'w', bufsize)
# ==>
pipe = Popen(cmd, shell=True, bufsize=bufsize, stdin=PIPE).stdin
Warning: Use communicate() rather than stdin.write(), stdout.read() or stderr.read() to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.
So your example could be written as follows:
from subprocess import Popen, PIPE, STDOUT
p = Popen(['grep', 'f'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
grep_stdout = p.communicate(input=b'one\ntwo\nthree\nfour\nfive\nsix\n')[0]
print(grep_stdout.decode())
# -> four
# -> five
# ->
On Python 3.5+ (3.6+ for encoding), you could use subprocess.run to pass input as a string to an external command and get its exit status, and its output as a string, back in one call:
#!/usr/bin/env python3
from subprocess import run, PIPE
p = run(['grep', 'f'], stdout=PIPE,
        input='one\ntwo\nthree\nfour\nfive\nsix\n', encoding='ascii')
print(p.returncode)
# -> 0
print(p.stdout)
# -> four
# -> five
# ->
I figured out this workaround:
>>> p = subprocess.Popen(['grep','f'],stdout=subprocess.PIPE,stdin=subprocess.PIPE)
>>> p.stdin.write(b'one\ntwo\nthree\nfour\nfive\nsix\n')  # expects a bytes object
>>> p.communicate()[0]
b'four\nfive\n'
>>> p.stdin.close()
Is there a better one?
There's a beautiful solution if you're using Python 3.4 or later. Use the input argument instead of the stdin argument; it accepts a bytes object:
output_bytes = subprocess.check_output(
    ["sed", "s/foo/bar/"],
    input=b"foo",
)
This works for check_output and run, but not call or check_call for some reason.
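The subprocess.run equivalent looks like this (a minimal sketch; run returns a CompletedProcess whose stdout holds the captured bytes):
import subprocess

result = subprocess.run(
    ["sed", "s/foo/bar/"],
    input=b"foo",
    stdout=subprocess.PIPE,  # or capture_output=True on Python 3.7+
)
print(result.stdout)  # -> b'bar' (possibly with a trailing newline)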
In Python 3.7+, you can also add text=True to make check_output take a string as input and return a string (instead of bytes):
output_string = subprocess.check_output(
    ["sed", "s/foo/bar/"],
    input="foo",
    text=True,
)
I'm a bit surprised nobody suggested creating a pipe, which is in my opinion by far the simplest way to pass a string to stdin of a subprocess:
import os
import subprocess

read, write = os.pipe()
os.write(write, b"stdin input here")  # os.write() takes bytes
os.close(write)
subprocess.check_call(['your-command'], stdin=read)
os.close(read)
I am using Python 3 and found out that you need to encode your string before you can pass it into stdin:
from subprocess import Popen, PIPE

p = Popen(['grep', 'f'], stdout=PIPE, stdin=PIPE, stderr=PIPE)
out, err = p.communicate(input='one\ntwo\nthree\nfour\nfive\nsix\n'.encode())
print(out)
Apparently a cStringIO.StringIO object doesn't quack close enough to a file duck to suit subprocess.Popen
I'm afraid not. The pipe is a low-level OS concept, so it absolutely requires a file object that is represented by an OS-level file descriptor. Your workaround is the right one.
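You can see the distinction directly by asking each object for its file descriptor (a minimal sketch, using the modern io and tempfile modules):
import io
import tempfile

f = tempfile.TemporaryFile()     # backed by a real OS-level file
print(f.fileno())                # prints a real file descriptor, e.g. 3

buf = io.StringIO('one\ntwo\n')  # purely in-memory buffer
try:
    buf.fileno()
except io.UnsupportedOperation as e:
    print(e)                     # no descriptor, so Popen can't use it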
from subprocess import Popen, PIPE
from tempfile import SpooledTemporaryFile as tempfile
f = tempfile()
f.write('one\ntwo\nthree\nfour\nfive\nsix\n')
f.seek(0)
print Popen(['/bin/grep', 'f'], stdout=PIPE, stdin=f).stdout.read()
f.close()
"""
Ex: Dialog (2-way) with a Popen()
"""
import subprocess

p = subprocess.Popen('Your Command Here',
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT,
                     stdin=subprocess.PIPE,
                     shell=True,
                     universal_newlines=True,  # work with str instead of bytes
                     bufsize=0)
p.stdin.write('START\n')
out = p.stdout.readline()
while out:
    line = out.rstrip('\n')
    if 'WHATEVER1' in line:
        pr = 1
        p.stdin.write('DO 1\n')
        out = p.stdout.readline()
        continue
    if 'WHATEVER2' in line:
        pr = 2
        p.stdin.write('DO 2\n')
        out = p.stdout.readline()
        continue
    # ..........
    out = p.stdout.readline()
p.wait()
On Python 3.7+ do this:
import subprocess

my_data = "whatever you want\nshould match this f"
subprocess.run(["grep", "f"], text=True, input=my_data)
and you'll probably want to add capture_output=True to get the output of running the command as a string.
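For example (a sketch, reusing my_data from above; capture_output requires Python 3.7+):
import subprocess

my_data = "whatever you want\nshould match this f"
p = subprocess.run(["grep", "f"], text=True, input=my_data,
                   capture_output=True)
print(p.stdout)  # -> should match this f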
On older versions of Python, replace text=True with universal_newlines=True:
subprocess.run(["grep", "f"], universal_newlines=True, input=my_data)
Beware that Popen.communicate(input=s) may give you trouble if s is too big, because apparently the parent process will buffer it before forking the child subprocess, meaning it needs "twice as much" used memory at that point (at least according to the "under the hood" explanation and the linked documentation found here). In my particular case, s was a generator that was first fully expanded and only then written to stdin, so the parent process was huge right before the child was spawned, and no memory was left to fork it:
File "/opt/local/stow/python-2.7.2/lib/python2.7/subprocess.py", line 1130, in _execute_child
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
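If s is large or lazily generated, one workaround (a sketch, with a hypothetical chunks() generator standing in for whatever produces your data) is to feed stdin incrementally yourself instead of handing everything to communicate() at once, so the input is never fully expanded in the parent:
from subprocess import Popen, PIPE

def chunks():
    # hypothetical: yields the input piece by piece as bytes
    for i in range(1000):
        yield ('line %d\n' % i).encode()

p = Popen(['grep', 'f'], stdin=PIPE, stdout=PIPE)
for chunk in chunks():
    p.stdin.write(chunk)  # written piece by piece, never expanded in full
p.stdin.close()
output = p.stdout.read()
p.wait()
Note that the deadlock warning quoted earlier still applies: if the child fills its stdout pipe buffer before you finish writing, this loop can block, so it is only safe when the output stays small or is drained concurrently.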
This is overkill for grep, but through my journeys I've learned about the Linux command expect and the Python library pexpect.
expect: dialogue with interactive programs
pexpect: Python module for spawning child applications; controlling them; and responding to expected patterns in their output.
import pexpect
child = pexpect.spawn('grep f', timeout=10)
child.sendline('text to match')
print(child.before)
Working with interactive shell applications like ftp is trivial with pexpect
import pexpect
child = pexpect.spawn('ftp ftp.openbsd.org')
child.expect('Name .*: ')
child.sendline('anonymous')
child.expect('Password:')
child.sendline('noah@example.com')
child.expect('ftp> ')
child.sendline('ls /pub/OpenBSD/')
child.expect('ftp> ')
print(child.before)  # Print the result of the ls command.
child.interact()     # Give control of the child to the user.
from subprocess import Popen, PIPE, STDOUT
import time

p = Popen(['grep', 'f'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
p.stdin.write(b'one\n')
time.sleep(0.5)
p.stdin.write(b'two\n')
time.sleep(0.5)
p.stdin.write(b'three\n')
time.sleep(0.5)
testresult = p.communicate()[0]
time.sleep(0.5)
print(testresult)
In Python 2.7, I would like to execute an OS command (for example, ls -l in UNIX) and save its output to a file. I don't want the execution results to show anywhere other than the file.
Is this achievable without using os.system?
Use subprocess.check_call redirecting stdout to a file object:
from subprocess import check_call, STDOUT, CalledProcessError
with open("out.txt","w") as f:
try:
check_call(['ls', '-l'], stdout=f, stderr=STDOUT)
except CalledProcessError as e:
print(e.message)
Whatever you want to do when the command returns a non-zero exit status should be handled in the except. If you want one file for stdout and another to handle stderr, open two files:
from subprocess import check_call, CalledProcessError

with open("stdout.txt", "w") as f, open("stderr.txt", "w") as f2:
    try:
        check_call(['ls', '-l'], stdout=f, stderr=f2)
    except CalledProcessError as e:
        print(e.message)
Assuming you just want to run a command and have its output go into a file, you could use the subprocess module like:
subprocess.call( "ls -l > /tmp/output", shell=True )
though that will not redirect stderr.
You can open a file and pass it to subprocess.call as the stdout parameter and the output destined for stdout will go to the file instead.
import subprocess
with open("result.txt", "w") as f:
subprocess.call(["ls", "-l"], stdout=f)
It won't catch any output to stderr, though; that would have to be redirected by passing a file to subprocess.call as the stderr parameter. I'm not certain if you can use the same file.
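If you do want both streams in one file, passing stderr=subprocess.STDOUT sends stderr to wherever stdout is going, so a single file object suffices (a minimal sketch):
import subprocess

with open("result.txt", "w") as f:
    # stderr=STDOUT folds stderr into the same destination as stdout
    subprocess.call(["ls", "-l"], stdout=f, stderr=subprocess.STDOUT)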
I am using Popen to call a shell script that is continuously writing its stdout and stderr to a log file. Is there any way to simultaneously output the log file continuously (to the screen), or alternatively, make the shell script write to both the log file and stdout at the same time?
I basically want to do something like this in Python:
cat file 2>&1 | tee -a logfile #"cat file" will be replaced with some script
Again, this pipes stderr/stdout together to tee, which writes it both to stdout and my logfile.
I know how to write stdout and stderr to a logfile in Python. Where I'm stuck is how to duplicate these back to the screen:
subprocess.Popen("cat file", shell=True, stdout=logfile, stderr=logfile)
Of course, I could just do something like this, but is there any way to do this without tee and shell file descriptor redirection?:
subprocess.Popen("cat file 2>&1 | tee -a logfile", shell=True)
You can use a pipe to read the data from the program's stdout and write it to all the places you want:
import sys
import subprocess
logfile = open('logfile', 'w')
proc = subprocess.Popen(['cat', 'file'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in proc.stdout:
    sys.stdout.write(line)
    logfile.write(line)
proc.wait()
UPDATE
In Python 3, the universal_newlines parameter controls how pipes are used. If False, pipe reads return bytes objects and may need to be decoded (e.g., line.decode('utf-8')) to get a string. If True, Python does the decoding for you.
Changed in version 3.3: When universal_newlines is True, the class uses the encoding locale.getpreferredencoding(False) instead of locale.getpreferredencoding(). See the io.TextIOWrapper class for more information on this change.
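For example, a Python 3 variant of the loop above with universal_newlines=True (a sketch), so the pipe yields str and no manual decoding is needed:
import sys
import subprocess

with open('logfile', 'w') as logfile:
    proc = subprocess.Popen(['cat', 'file'],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,
                            universal_newlines=True)  # pipe yields str
    for line in proc.stdout:
        sys.stdout.write(line)
        logfile.write(line)
    proc.wait()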
To emulate: subprocess.call("command 2>&1 | tee -a logfile", shell=True) without invoking the tee command:
#!/usr/bin/env python2
from subprocess import Popen, PIPE, STDOUT
p = Popen("command", stdout=PIPE, stderr=STDOUT, bufsize=1)
with p.stdout, open('logfile', 'ab') as file:
for line in iter(p.stdout.readline, b''):
print line, #NOTE: the comma prevents duplicate newlines (softspace hack)
file.write(line)
p.wait()
To fix possible buffering issues (if the output is delayed), see links in Python: read streaming input from subprocess.communicate().
Here's a Python 3 version:
#!/usr/bin/env python3
import sys
from subprocess import Popen, PIPE, STDOUT
with Popen("command", stdout=PIPE, stderr=STDOUT, bufsize=1) as p, \
open('logfile', 'ab') as file:
for line in p.stdout: # b'\n'-separated lines
sys.stdout.buffer.write(line) # pass bytes as is
file.write(line)
Write to terminal byte by byte for interactive applications
This method writes any bytes it gets to stdout immediately, which more closely simulates the behavior of tee, especially for interactive applications.
main.py
#!/usr/bin/env python3
import subprocess
import sys

with subprocess.Popen(sys.argv[1:], stdout=subprocess.PIPE, stderr=subprocess.STDOUT) as proc, \
        open('logfile.txt', 'bw') as logfile:
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            logfile.write(byte)
            # logfile.flush()
        else:
            break
exit_status = proc.returncode
sleep.py
#!/usr/bin/env python3
import sys
import time
for i in range(10):
    print(i)
    sys.stdout.flush()
    time.sleep(1)
First we can do a non-interactive sanity check:
./main.py ./sleep.py
And we see it counting to stdout in real time.
Next, for an interactive test, you can run:
./main.py bash
Then the characters you type appear immediately on the terminal as you type them, which is very important for interactive applications. This is what happens when you run:
bash | tee logfile.txt
Also, if you want the output to show in the output file immediately, then you can also add a:
logfile.flush()
but tee does not do this, and I'm afraid it would kill performance. You can test this out easily with:
tail -f logfile.txt
Related question: live output from subprocess command
Tested on Ubuntu 18.04, Python 3.6.7.
How can I get the output of a process run using subprocess.call()?
Passing a StringIO.StringIO object to stdout gives this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 444, in call
return Popen(*popenargs, **kwargs).wait()
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 588, in __init__
errread, errwrite) = self._get_handles(stdin, stdout, stderr)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 945, in _get_handles
c2pwrite = stdout.fileno()
AttributeError: StringIO instance has no attribute 'fileno'
>>>
If you have Python version >= 2.7, you can use subprocess.check_output, which basically does exactly what you want (it returns standard output as a string).
Simple example (linux version, see note):
import subprocess
print subprocess.check_output(["ping", "-c", "1", "8.8.8.8"])
Note that the ping command uses Linux notation (-c for count). If you try this on Windows, remember to change it to -n for the same result.
As commented below you can find a more detailed explanation in this other answer.
Output from subprocess.call() should only be redirected to files.
You should use subprocess.Popen() instead. Then you can pass subprocess.PIPE for the stderr, stdout, and/or stdin parameters and read from the pipes by using the communicate() method:
from subprocess import Popen, PIPE
p = Popen(['program', 'arg1'], stdin=PIPE, stdout=PIPE, stderr=PIPE)
output, err = p.communicate(b"input data that is passed to subprocess' stdin")
rc = p.returncode
The reasoning is that the file-like object used by subprocess.call() must have a real file descriptor, and thus implement the fileno() method. Just using any file-like object won't do the trick.
See here for more info.
For Python 3.5+, it is recommended that you use the run function from the subprocess module. This returns a CompletedProcess object, from which you can easily obtain the output as well as the return code.
from subprocess import PIPE, run
command = ['echo', 'hello']
result = run(command, stdout=PIPE, stderr=PIPE, universal_newlines=True)
print(result.returncode, result.stdout, result.stderr)
I have the following solution. It captures the exit code, the stdout, and the stderr too of the executed external command:
import shlex
from subprocess import Popen, PIPE
def get_exitcode_stdout_stderr(cmd):
    """
    Execute the external command and get its exitcode, stdout and stderr.
    """
    args = shlex.split(cmd)
    proc = Popen(args, stdout=PIPE, stderr=PIPE)
    out, err = proc.communicate()
    exitcode = proc.returncode
    return exitcode, out, err

cmd = "..."  # arbitrary external command, e.g. "python mytest.py"
exitcode, out, err = get_exitcode_stdout_stderr(cmd)
I also have a blog post on it here.
Edit: the solution was updated to a newer one that doesn't need to write to temp. files.
I recently just figured out how to do this, and here's some example code from a current project of mine:
# Getting the random picture.
# First find all pictures:
import shlex, subprocess

cmd = 'find ../Pictures/ -regex ".*\(JPG\|NEF\|jpg\)" '
# cmd = raw_input("shell:")
args = shlex.split(cmd)
output, error = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()

# Another way to get output
# output = subprocess.Popen(args, stdout=subprocess.PIPE).stdout
ber = raw_input("search complete, display results?")
print output
# ... and on to the selection process ...
You now have the output of the command stored in the variable output. stdout=subprocess.PIPE tells the class to create a file object named stdout within Popen. The communicate() method, from what I can tell, just acts as a convenient way to return a tuple of the output and errors from the process you've run. Also, the process is run when instantiating Popen.
The key is to use the function subprocess.check_output
For example, the following function captures the stdout and stderr of the process and returns them, as well as whether or not the call succeeded. It is Python 2 and 3 compatible:
from subprocess import check_output, CalledProcessError, STDOUT
def system_call(command):
    """
    params:
        command: list of strings, ex. `["ls", "-l"]`
    returns: output, success
    """
    try:
        output = check_output(command, stderr=STDOUT).decode()
        success = True
    except CalledProcessError as e:
        output = e.output.decode()
        success = False
    return output, success
output, success = system_call(["ls", "-l"])
If you want to pass commands as strings rather than arrays, use this version:
from subprocess import check_output, CalledProcessError, STDOUT
import shlex
def system_call(command):
    """
    params:
        command: string, ex. `"ls -l"`
    returns: output, success
    """
    command = shlex.split(command)
    try:
        output = check_output(command, stderr=STDOUT).decode()
        success = True
    except CalledProcessError as e:
        output = e.output.decode()
        success = False
    return output, success
output, success = system_call("ls -l")
In an IPython shell:
In [8]: import subprocess
In [9]: s=subprocess.check_output(["echo", "Hello World!"])
In [10]: s
Out[10]: 'Hello World!\n'
Based on sargue's answer. Credit to sargue.