Python: Obtain output using subprocess.call, not Popen [duplicate]

This question already has answers here:
Retrieving the output of subprocess.call() [duplicate]
(7 answers)
Closed 8 years ago.
I make a system call from a Python script. I would like to have a timeout as well as use the output of the call. I was able to do one at a time: implement a timeout using subprocess.call() and retrieve the output using subprocess.Popen(). However, I need both the timeout and the output.
Is there any way to achieve this?
The following code gives me an AttributeError: 'int' object has no attribute 'stdout', because subprocess.call() returns an exit code (an int), not the Popen object I need to use.
... Open file here ...
try:
    result = subprocess.call(cmd, stdout=subprocess.PIPE, timeout=30)
    out = result.stdout.read()
    print(out)
except subprocess.TimeoutExpired as e:
    print("Timed out!")
... Write to file here ...
Any help would be appreciated.

In the documentation on subprocess.call(), one of the first things I noticed was:
Note:
Do not use stdout=PIPE or stderr=PIPE with this function. As the pipes are not being read in the current process, the child process may block if it generates enough output to a pipe to fill up the OS pipe buffer.
The next thing I noticed was the first line of the documentation:
Run the command described by args. Wait for command to complete, then return the returncode attribute.
subprocess.call() returns the "exit code", an int: generally 0 = success, 1 = something went wrong, etc.
For more information on exit codes, see http://www.tldp.org/LDP/abs/html/exitcodes.html
Since you want the output from your command, you might want to revert to
from subprocess import Popen, PIPE

timer_out = Popen(command, shell=True, stdout=PIPE, stderr=PIPE, universal_newlines=True)
stdout, stderr = timer_out.communicate()
...or something like it.
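If you are on Python 3.5 or newer, a minimal sketch using subprocess.run() gets you both the timeout and the output in one call (capture_output requires Python 3.7; on older versions pass stdout=subprocess.PIPE instead). The cmd value here is a placeholder for your own command:

import subprocess

cmd = ["ls", "-l"]  # hypothetical command; substitute your own

try:
    # run() waits for completion, enforces the timeout, and reads the
    # pipes while the child runs, so there is no deadlock risk.
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
    print(result.stdout)
except subprocess.TimeoutExpired:
    print("Timed out!")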

stdout.read() from finished subprocess sometimes returning empty?

I have created a dictionary where I associate an id with a subprocess.
Something like:
cmd = "ls"
processes[id] = subprocess.Popen([cmd], shell=True, stdout=subprocess.PIPE)
Then I call a method that takes this process map as input and checks which processes have finished. If a process has finished, I check its stdout.read() for a particular string match.
The issue is sometimes stdout.read() returns an empty value which causes issues in string matching.
Sample Code:
#Create a map
processes[id] = subprocess.Popen([cmd], shell=True, stdout=subprocess.PIPE)
...
#Pass that map to a method which checks which processes have finished
completedProcesses(processes)

def completedProcesses(processes):
    processList = []
    for id, process in processes.iteritems():
        if process.poll() is not None:
            #If some error in process stdout then print id
            verifySuccessStatus(id, processes[id])
            processList.add(id)

def verifySuccessStatus(id, process):
    file = open(FAILED_IDS_FILE, 'a+')
    buffer = process.stdout.read()  #This returns an empty value sometimes
    if 'Error' not in buffer:
        file.write(id)
        file.write('\n')
    file.close()
I am new to Python; I might be missing some understanding of subprocess internals.
There are at least two issues:
There is no point in calling process.stdout.read() more than once: .read() does not return until EOF, and after EOF it returns an empty string.
You should read from the pipes while the processes are still running, otherwise they may hang if they generate enough output to fill the OS pipe buffers (~65K on my Linux box).
If you want to run multiple external processes concurrently and check their output after they are finished, then see this answer that shows "thread pool" and asyncio solutions.
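As a rough sketch of the thread-pool approach (not the linked answer's exact code): each worker calls subprocess.run(), which consumes the pipe while the child writes, so the buffer cannot fill up. The id-to-command map is hypothetical:

import subprocess
from concurrent.futures import ThreadPoolExecutor

commands = {'id1': ['ls'], 'id2': ['date']}  # hypothetical map

def run_one(cmd):
    # run() reads stdout as it is produced, avoiding the pipe deadlock,
    # and only returns once the process has finished.
    return subprocess.run(cmd, stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT, text=True)

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {idx: pool.submit(run_one, cmd) for idx, cmd in commands.items()}

for idx, future in futures.items():
    result = future.result()  # blocks until that process is done
    if 'Error' in result.stdout:
        print(idx, 'reported an error')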
Judging by your example command of ls, your issue may be caused by the stdout pipe filling up. Using the process.communicate() method handles this case for you, since you don't need to write multiple times to stdin.
# Recommend the future print function for easier file writing.
from __future__ import print_function

# Create a map
# Keeping access to 'stderr' is generally recommended, but not required.
# Also, if you don't know you need 'shell=True', it's safer practice not to use it.
processes[id] = subprocess.Popen(
    [cmd],
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
...
#Pass that map to a method which checks which processes have finished
check_processes(processes)

def check_processes(processes):
    process_ids = []
    # 'id' is a built-in function in python, so it's safer to use a different name.
    for idx, process in processes.iteritems():
        # When using pipes, communicate() will handle the case of the pipe
        # filling up for you.
        stdout, stderr = process.communicate()
        if not is_success(stdout):
            write_failed_id(idx)
        process_ids.append(idx)

def is_success(stdout):
    return 'Error' not in stdout

def write_failed_id(idx):
    # Recommend using a context manager when interacting with files.
    # Also, 'file' is a built-in function in python.
    with open(FAILED_IDS_FILE, 'a+') as fail_file:
        # The future print function makes file printing simpler.
        print(idx, file=fail_file)
You're only reading stdout and looking for "Error". Perhaps you should also be looking in stderr:
processes[id] = subprocess.Popen(
    [cmd],
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
From the subprocess docs:
subprocess.STDOUT
Special value that can be used as the stderr argument to Popen and indicates that standard error should go into the same handle as standard output.
The process could have failed unexpectedly, returning no stdout but a non-zero return code. You can check this using process.returncode.
Popen.returncode
The child return code, set by poll() and wait() (and indirectly by communicate()). A None value indicates that the process hasn’t terminated yet.
A negative value -N indicates that the child was terminated by signal N (Unix only).
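A minimal sketch of that check, with cmd standing in as a hypothetical command:

import subprocess

cmd = 'ls'  # hypothetical command
process = subprocess.Popen([cmd], shell=True,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode != 0:
    # A negative value means the child was terminated by a signal (Unix only).
    print('process failed with code', process.returncode)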

How to know the exact output from a Python subprocess.check_output() call?

I'm running some terminal commands from within Python using the subprocess.check_output() call. This all works fine when it returns correct results. When interacting with the bitcoin daemon, however, I can get several responses. For example, the raw output (as seen on a normal bash command line) can be:
: Cannot obtain a lock on data directory /home/kramer65/.bitcoin. Bitcoin is probably already running.
or
error: couldn't connect to server
Both these responses give an error code 1, however. I tried printing out the following attributes of the subprocess.CalledProcessError as e, which in both cases results in the same output (except for the cmd attribute, of course):
print e.args, e.cmd, e.message, e.output, e.returncode
# () 'BTC' 1
# () 'BTC getinfo' 1
I guess the only thing that distinguishes the two errors is the raw string that is output on the command line, which I listed above. So my question is: how can I get that raw string from within Python?
It is most likely coming on stderr.
You could set up your subprocess with
stderr=subprocess.STDOUT
as a kwarg to merge the stdout and stderr together.
Alternatively, if you need them separately, do something like
proc = subprocess.Popen(..., stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()
Note: unlike check_output, this method will not raise exception if the return code was nonzero, so you will have to do that manually if that's the behaviour you wanted.
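A minimal sketch of the merged-stream approach with check_output() (the bitcoin command here is hypothetical): because stderr is redirected into stdout, the raw error text ends up in the output attribute of the raised CalledProcessError:

import subprocess

try:
    out = subprocess.check_output(['bitcoin-cli', 'getinfo'],
                                  stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
    # e.output now contains the raw text the daemon printed,
    # so the two error cases can be told apart by content.
    print(e.returncode, e.output)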

How do I write to a Python subprocess' stdin?

I'm trying to write a Python script that starts a subprocess, and writes to the subprocess stdin. I'd also like to be able to determine an action to be taken if the subprocess crashes.
The process I'm trying to start is a program called nuke, which has its own built-in version of Python that I'd like to submit commands to, and then tell it to quit after the commands execute. So far I've worked out that if I start Python at the command prompt and then start nuke as a subprocess, I can type commands into nuke. But I'd like to be able to put this all in a script, so that the master Python program can start nuke, write to its standard input (and thus into its built-in version of Python), and tell it to do snazzy things. So I wrote a script that starts nuke like this:
subprocess.call(["C:/Program Files/Nuke6.3v5/Nuke6.3", "-t", "E:/NukeTest/test.nk"])
Then nothing happens because nuke is waiting for user input. How would I now write to standard input?
I'm doing this because I'm running a plugin with nuke that causes it to crash intermittently when rendering multiple frames. So I'd like this script to be able to start nuke, tell it to do something and then if it crashes, try again. So if there is a way to catch a crash and still be OK then that'd be great.
It might be better to use communicate():
from subprocess import Popen, PIPE, STDOUT
p = Popen(['myapp'], stdout=PIPE, stdin=PIPE, stderr=PIPE)
stdout_data = p.communicate(input='data_to_write')[0]
"Better", because of this warning:
Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.
To clarify some points:
As jro has mentioned, the right way is to use Popen.communicate().
Yet, when feeding stdin using communicate() with input, you need to create the subprocess with stdin=subprocess.PIPE according to the docs.
Note that if you want to send data to the process’s stdin, you need to create the Popen object with stdin=PIPE. Similarly, to get anything other than None in the result tuple, you need to give stdout=PIPE and/or stderr=PIPE too.
Also, qed has mentioned in the comments that for Python 3.4 you need to encode the string, meaning you need to pass bytes to input rather than a string. This is not entirely true. According to the docs, if the streams were opened in text mode, the input should be a string (the source is the same page).
If streams were opened in text mode, input must be a string. Otherwise, it must be bytes.
So, if the streams were not opened explicitly in text mode, then something like below should work:
import subprocess
command = ['myapp', '--arg1', 'value_for_arg1']
p = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output = p.communicate(input='some data'.encode())[0]
I've left the stderr value above deliberately as STDOUT as an example.
That being said, sometimes you might want the output of another process rather than building it up from scratch. Let's say you want to run the equivalent of echo -n 'CATCH\nme' | grep -i catch | wc -m. This should normally return the number of characters in 'CATCH' plus a newline character, which results in 6. The point of the echo here is to feed the CATCH\nme data to grep. So we can feed the data to grep's stdin as a variable in the Python subprocess chain, and then pass its stdout as a PIPE to the wc process's stdin (getting rid of the extra newline character in the meantime):
import subprocess
what_to_catch = 'catch'
what_to_feed = 'CATCH\nme'
# We create the first subprocess, note that we need stdin=PIPE and stdout=PIPE
p1 = subprocess.Popen(['grep', '-i', what_to_catch], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# We immediately run the first subprocess and get the result
# Note that we encode the data, otherwise we'd get a TypeError
p1_out = p1.communicate(input=what_to_feed.encode())[0]
# Well the result includes an '\n' at the end,
# if we want to get rid of it in a VERY hacky way
p1_out = p1_out.decode().strip().encode()
# We create the second subprocess, note that we need stdin=PIPE
p2 = subprocess.Popen(['wc', '-m'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# We run the second subprocess feeding it with the first subprocess' output.
# We decode the output to convert to a string
# We still have a '\n', so we strip that out
output = p2.communicate(input=p1_out)[0].decode().strip()
This is somewhat different than the response here, where you pipe two processes directly without adding data directly in Python.
Hope that helps someone out.
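For comparison, here is a sketch of the direct-pipe variant referenced above, where grep's stdout is wired straight into wc's stdin without round-tripping the data through Python:

import subprocess

p1 = subprocess.Popen(['grep', '-i', 'catch'],
                      stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p2 = subprocess.Popen(['wc', '-m'],
                      stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdin.write(b'CATCH\nme')
p1.stdin.close()
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits early
output = p2.communicate()[0].decode().strip()
print(output)  # 6: the five letters of 'CATCH' plus grep's trailing newline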
Since Python 3.5, there is the subprocess.run() function, which provides a convenient way to initialize and interact with Popen() objects. run() takes an optional input argument, through which you can pass things to stdin (like you would using Popen.communicate(), but all in one go).
Adapting jro's example to use run() would look like:
import subprocess
p = subprocess.run(['myapp'], input='data_to_write', capture_output=True, text=True)
After execution, p will be a CompletedProcess object. By setting capture_output to True (added in Python 3.7; on earlier versions pass stdout=subprocess.PIPE instead), we make a p.stdout attribute available which gives us access to the output, if we care about it. text=True tells it to work with regular strings rather than bytes. If you want, you can also add the argument check=True to make it throw an error if the exit status (accessible regardless via p.returncode) isn't 0.
This is the "modern"/quick-and-easy way to do this.
One can write data to the subprocess object on-the-fly, instead of collecting all the input in a string beforehand to pass through the communicate() method.
This example sends a list of animal names to the Unix utility sort, and sends the output to standard output.
import sys, subprocess

p = subprocess.Popen('sort', stdin=subprocess.PIPE, stdout=sys.stdout)
for v in ('dog', 'cat', 'mouse', 'cow', 'mule', 'chicken', 'bear', 'robin'):
    p.stdin.write(v.encode() + b'\n')
p.communicate()
Note that writing to the process is done via p.stdin.write(v.encode()). I tried using
print(v.encode(), file=p.stdin), but that failed with the message TypeError: a bytes-like object is required, not 'str'. I haven't figured out how to get print() to work with this.
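One hedged workaround: print() always writes str, while p.stdin above is a binary pipe. Creating the Popen object in text mode makes the pipe accept strings, so print() works:

import sys, subprocess

# text=True (Python 3.7+; use universal_newlines=True on older versions)
# opens the pipe in text mode, so print() can write plain strings to it.
p = subprocess.Popen('sort', stdin=subprocess.PIPE, stdout=sys.stdout, text=True)
for v in ('dog', 'cat', 'mouse', 'cow', 'mule', 'chicken', 'bear', 'robin'):
    print(v, file=p.stdin)
p.communicate()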
You can provide a file-like object to the stdin argument of subprocess.call().
The documentation for the Popen object applies here.
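A minimal sketch of that approach, assuming a hypothetical input file named animals.txt:

import subprocess

# Any real file object can serve as the child's stdin; call() waits
# for the command to complete and returns its exit code.
with open('animals.txt', 'rb') as f:
    returncode = subprocess.call(['sort'], stdin=f)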
To capture the output, you should instead use subprocess.check_output(), which takes similar arguments. From the documentation:
>>> subprocess.check_output(
... "ls non_existent_file; exit 0",
... stderr=subprocess.STDOUT,
... shell=True)
'ls: non_existent_file: No such file or directory\n'

Capture subprocess output [duplicate]

This question already has answers here:
Read streaming input from subprocess.communicate()
(7 answers)
Closed 6 years ago.
I learned that when executing commands in Python, I should use subprocess.
What I'm trying to achieve is to encode a file via ffmpeg and observe the program output until the file is done. Ffmpeg logs the progress to stderr.
If I try something like this:
child = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)
complete = False
while not complete:
    stderr = child.communicate()
    # Get progress
    print "Progress here later"
    if child.poll() is not None:
        complete = True
    time.sleep(2)
the program does not continue after calling child.communicate() and instead waits for the command to complete. Is there any other way to follow the output?
communicate() blocks until the child process returns, so the rest of the lines in your loop will only get executed after the child process has finished running. Reading from stderr will block too, unless you read character by character, like so:
import subprocess
import sys

child = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)
while True:
    out = child.stderr.read(1)
    if out == '' and child.poll() is not None:
        break
    if out != '':
        sys.stdout.write(out)
        sys.stdout.flush()
This will provide you with real-time output. Taken from Nadia's answer here.
.communicate() "Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate."
Instead, you should be able to just read from child.stderr like an ordinary file.
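For example, a sketch of reading stderr like an ordinary file, line by line. This assumes the child emits newline-terminated progress lines; ffmpeg often uses carriage returns instead, in which case the read(1) loop above is more reliable. The command string is a placeholder:

import subprocess

command = 'ffmpeg -i input.mp4 output.avi'  # hypothetical command
child = subprocess.Popen(command, shell=True,
                         stderr=subprocess.PIPE, text=True)  # text=True: Python 3.7+
for line in child.stderr:
    print('Progress:', line.rstrip())
child.wait()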
