This question already has answers here:
Read streaming input from subprocess.communicate()
I have learned that when executing commands in Python, I should use the subprocess module.
What I'm trying to achieve is to encode a file via ffmpeg and watch the program's output until the file is done. ffmpeg logs its progress to stderr.
If I try something like this:
child = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)
complete = False
while not complete:
    stderr = child.communicate()
    # Get progress
    print "Progress here later"
    if child.poll() is not None:
        complete = True
    time.sleep(2)
the program does not continue after calling child.communicate(); it waits for the command to complete. Is there any other way to follow the output?
communicate() blocks until the child process returns, so the rest of the lines in your loop will only get executed after the child process has finished running. Reading from stderr will block too, unless you read character by character like so:
import subprocess
import sys

child = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)
while True:
    out = child.stderr.read(1)
    if out == '' and child.poll() is not None:
        break
    if out != '':
        sys.stdout.write(out)
        sys.stdout.flush()
This will provide you with real-time output. Taken from Nadia's answer here.
.communicate() "Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate."
Instead, you should be able to just read from child.stderr like an ordinary file.
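For example, a minimal sketch of that approach (assuming command is the same ffmpeg invocation as above): read stderr in small chunks and echo them, so the progress appears as it is produced rather than all at the end.

import subprocess
import sys

child = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE,
                         universal_newlines=True)
while True:
    chunk = child.stderr.read(64)   # small reads so progress shows up promptly
    if not chunk:                   # empty string means ffmpeg closed stderr
        break
    sys.stdout.write(chunk)
    sys.stdout.flush()
child.wait()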
Related
I'm trying to do two things when executing a shell cmd with Python:
1. Capture stdout and print it as it happens
2. Capture stdout as a whole and process it when the cmd is complete
I looked at subprocess.check_output, but it does not have an stdout param that would allow me to print the output as it happens.
So after reading this question, I realized I may need to try a different approach.
from subprocess import Popen, PIPE
process = Popen(task_cmd, stdout = PIPE)
stdout, stderr = process.communicate()
print(stdout, stderr)
The problem with this approach is that according to the docs, Popen.communicate():
Reads data from stdout and stderr, until end-of-file is reached.
Wait for process to terminate
I still cannot seem to redirect output both to stdout AND to some sort of buffer that can be parsed when the command is complete.
Ideally, I'd like something like:
# captures the process output and dumps it to stdout in realtime
stdout_capture = Something(prints_to_stdout = True)
process = Popen(task_cmd, stdout = stdout_capture)
# prints the entire output of the executed process
print(stdout_capture.complete_capture)
Is there a recommended way to accomplish this?
You were on the right track giving Popen stdout=PIPE, but you can't use .communicate() because it only returns the values after execution. Instead, I suggest you read from .stdout directly.
The only guaranteed way to get the output the moment it's generated is to read from the pipe one character at a time. Here is my approach:
def passthrough_and_capture_output(args):
    import sys
    import subprocess
    process = subprocess.Popen(args, stdout=subprocess.PIPE, universal_newlines=True)
    # universal_newlines means that the output of the process will be interpreted as text
    capture = ""
    s = process.stdout.read(1)
    while len(s) > 0:
        sys.stdout.write(s)
        sys.stdout.flush()
        capture += s
        s = process.stdout.read(1)
    return capture
Note that reading one character at a time can incur significant overhead, so if you are alright with lagging behind a bit, I suggest that you replace the 1 in read(1) with a different number of characters to output in batches.
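For example, a quick usage sketch (the ping command is only a stand-in for whatever your actual task_cmd is):

# stream the output live, then keep the whole capture for later processing
captured = passthrough_and_capture_output(["ping", "-c", "3", "localhost"])
print("Captured {} characters".format(len(captured)))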
from subprocess import check_output, CalledProcessError

def shell_command(args):
    # Note: check_output waits for the command to finish, so this captures the
    # output (even on failure) but does not stream it as it happens.
    try:
        res = check_output(args).decode()
    except CalledProcessError as e:
        res = e.output.decode()
    for r in ['\r', '\n\n']:
        res = res.replace(r, '')
    return res.strip()
This question already has answers here:
Read streaming input from subprocess.communicate()
Using the subprocess module (Python 2.7), I'm running a command and attempting to process its output as it runs.
I have code like the following:
process = subprocess.Popen(
    ['udevadm', 'monitor', '--subsystem=usb', '--property'],
    stdout=subprocess.PIPE)

for line in iter(process.stdout.readline, ''):
    print(line)
However, the output only gets printed after I press Ctrl+C, even if I add sys.stdout.flush() after the print statement.
Why is this happening, and how can I live stream the output from this process?
Notably, this udevadm monitor command is not intended to terminate, so I can't simply wait for the process to terminate and process its output all at once.
I found live output from subprocess command but the approach in the accepted answer did not solve my problem.
You could use unbuffer (from the expect package):
process = subprocess.Popen(
    ["unbuffer", 'udevadm', 'monitor', '--subsystem=usb', '--property'],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

for line in iter(process.stdout.readline, ''):
    print(line)
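If unbuffer is not installed, GNU coreutils' stdbuf may give a similar effect by forcing the child's stdout into line-buffered mode (a sketch; whether it helps depends on how the child buffers its output):

process = subprocess.Popen(
    ["stdbuf", "-oL", 'udevadm', 'monitor', '--subsystem=usb', '--property'],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

for line in iter(process.stdout.readline, ''):
    print(line)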
This question already has answers here:
Retrieving the output of subprocess.call() [duplicate]
I am making a system call from a Python script. I would like to have a timeout as well as use the output of the call. I was able to do one at a time: implement a timeout using subprocess.call() and retrieve the output using subprocess.Popen(). However, I need both the timeout and the output.
Is there any way to achieve this?
The following code gives me an AttributeError: 'int' object has no attribute 'stdout', because subprocess.call() returns an int, not the Popen object I need to use.
... Open file here ...
try:
    result = subprocess.call(cmd, stdout=subprocess.PIPE, timeout=30)
    out = result.stdout.read()
    print(out)
except subprocess.TimeoutExpired as e:
    print("Timed out!")
... Write to file here ...
Any help would be appreciated.
In the documentation on subprocess.call(), one of the first things I noticed was:
Note:
Do not use stdout=PIPE or stderr=PIPE with this function. As the pipes are not being read in the current process, the child process may block if it generates enough output to a pipe to fill up the OS pipe buffer.
The next thing was the first line of the documentation:
Run the command described by args. Wait for command to complete, then return the returncode attribute.
subprocess.call will return the "exit code", an int, generally 0 = success, 1 = something went wrong, etc.
For more information on exit codes, see http://www.tldp.org/LDP/abs/html/exitcodes.html
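For example (the ls commands here are only placeholders, assuming a POSIX system):

import subprocess

rc = subprocess.call(["ls", "/"])             # output goes straight to the terminal
print(rc)                                     # 0 on success
rc = subprocess.call(["ls", "/no/such/path"])
print(rc)                                     # non-zero on failure; nothing is captured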
Since you want the 'output' from your timed command, you might want to revert to:
timer_out = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE, universal_newlines=True)
stdout, stderr = timer_out.communicate()
...or something like it.
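On Python 3.3+, communicate() itself accepts a timeout, so the timeout and the output can come from one call; a sketch, assuming cmd is the same command as in the question:

import subprocess

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        universal_newlines=True)
try:
    out, err = proc.communicate(timeout=30)   # raises TimeoutExpired after 30 s
    print(out)
except subprocess.TimeoutExpired:
    proc.kill()                               # stop the runaway child
    out, err = proc.communicate()             # collect whatever was produced
    print("Timed out!")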
I have a script named 1st.py which creates a REPL (read-eval-print-loop):
print "Something to print"
while True:
r = raw_input()
if r == 'n':
print "exiting"
break
else:
print "continuing"
I then launched 1st.py with the following code:
p = subprocess.Popen(["python","1st.py"], stdin=PIPE, stdout=PIPE)
And then tried this:
print p.communicate()[0]
It failed, providing this traceback:
Traceback (most recent call last):
  File "1st.py", line 3, in <module>
    r = raw_input()
EOFError: EOF when reading a line
Can you explain what is happening here please? When I use p.stdout.read(), it hangs forever.
.communicate() writes input (there is no input in this case so it just closes subprocess' stdin to indicate to the subprocess that there is no more input), reads all output, and waits for the subprocess to exit.
The exception EOFError is raised in the child process by raw_input() (it expected data but got EOF (no data)).
p.stdout.read() hangs forever because it tries to read all output from the child while the child is waiting for input (raw_input()), which causes a deadlock.
To avoid the deadlock you need to read/write asynchronously (e.g., by using threads or select) or to know exactly when and how much to read/write, for example:
from subprocess import PIPE, Popen

p = Popen(["python", "-u", "1st.py"], stdin=PIPE, stdout=PIPE, bufsize=1)
print p.stdout.readline(),          # read the first line
for i in range(10):                 # repeat several times to show that it works
    print >>p.stdin, i              # write input
    p.stdin.flush()                 # not necessary in this case
    print p.stdout.readline(),      # read output

print p.communicate("n\n")[0],      # signal the child to exit,
                                    # read the rest of the output,
                                    # wait for the child to exit
Note: this is very fragile code; if the reads and writes get out of sync, it deadlocks.
Beware of the block-buffering issue (here it is solved by the "-u" flag, which turns off buffering for stdin and stdout in the child).
bufsize=1 makes the pipes line-buffered on the parent side.
Do not use communicate(input=""). It writes input to the process, closes its stdin and then reads all output.
Do it like this:
p=subprocess.Popen(["python","1st.py"],stdin=PIPE,stdout=PIPE)
# get output from process "Something to print"
one_line_output = p.stdout.readline()
# write 'a line\n' to the process
p.stdin.write('a line\n')
# get output from process "not time to break"
one_line_output = p.stdout.readline()
# write "n\n" to that process for if r=='n':
p.stdin.write('n\n')
# read the last output from the process "Exiting"
one_line_output = p.stdout.readline()
What you would do to remove the error:
all_the_process_will_tell_you = p.communicate('all you will ever say to this process\nn\n')[0]
But since communicate closes stdout, stdin, and stderr, you cannot read or write after you have called communicate.
Your second bit of code starts the first bit of code as a subprocess with piped input and output. It then closes its input and tries to read its output.
The first bit of code tries to read from standard input, but the process that started it closed its standard input, so it immediately reaches an end-of-file, which Python turns into an exception.
I'm trying to start a program (HandBrakeCLI) as a subprocess or thread from within Python 2.7. I have gotten as far as starting it, but I can't figure out how to monitor its stderr and stdout.
The program outputs its status (% done) and info about the encode to stderr and stdout, respectively. I'd like to be able to periodically retrieve the % done from the appropriate stream.
I've tried calling subprocess.Popen with stderr and stdout set to PIPE and using subprocess.communicate, but it sits and waits until the process is killed or completes, and only then retrieves the output. That doesn't do me much good.
I've got it up and running as a thread, but as far as I can tell I still have to eventually call subprocess.Popen to execute the program and run into the same wall.
Am I going about this the right way? What other options do I have, or how do I get this to work as described?
I have accomplished the same with ffmpeg. This is a stripped-down version of the relevant portions. bufsize=1 means line buffering and may not be needed.
import subprocess

def Run(command):
    proc = subprocess.Popen(command, bufsize=1,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                            universal_newlines=True)
    return proc

def Trace(proc):
    while proc.poll() is None:
        line = proc.stdout.readline()
        if line:
            # Process output here
            print 'Read line', line

proc = Run([handbrakePath] + allOptions)
Trace(proc)
Edit 1: I noticed that the subprocess (handbrake in this case) needs to flush after lines to use this (ffmpeg does).
Edit 2: Some quick tests reveal that bufsize=1 may not be actually needed.
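To pull the % done out of each line, something like this could slot into the loop above (a sketch only; the regex just looks for a number followed by a percent sign, since I don't have the exact HandBrakeCLI output format in front of me):

import re

PERCENT_RE = re.compile(r'(\d+(?:\.\d+)?)\s*%')   # e.g. "43.25 %"

def TraceProgress(proc):
    while proc.poll() is None:
        line = proc.stdout.readline()
        match = PERCENT_RE.search(line)
        if match:
            print("Progress: {}%".format(match.group(1)))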