I'm new to Python and trying to run an exe program from Python on Windows.
I wrote the following code:
import sys
from subprocess import STDOUT, Popen, PIPE

cmd = r'C:\Users\lenaq\Desktop\sep\WATv16\TLWMA-0.09.exe'
with open('test.log', 'w') as f:
    p = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
    for c in iter(lambda: p.stdout.read(1), ''):
        sys.stdout.write(c)
        f.write(c)
The exe program has some runtime errors, and I need to get the program's output in order to fix the params file and prevent the errors.
The problem is that with the above code I don't get the full output of the exe (compared to running it with os.system()). The exe's error message window pops up before the output has finished being written, and I don't know where the problem is.
Can you please help me?
stderr=PIPE redirects the error stream to p.stderr, and you're not reading that (note that p.communicate lets you get both streams' results, but reading them separately can lead to deadlocks).
Anyway, if you don't care about merging both out & err streams, you could change that to:
stderr=STDOUT
so both out & err use the same stream p.stdout
Also: don't use shell=True, you don't need it.
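Putting both suggestions together, the relevant part of your snippet could look like this (a sketch; universal_newlines=True is added here only so the stream yields text and the '' sentinel of iter() can match):

import sys
from subprocess import STDOUT, Popen, PIPE

cmd = r'C:\Users\lenaq\Desktop\sep\WATv16\TLWMA-0.09.exe'

with open('test.log', 'w') as f:
    # No shell=True, and stderr merged into stdout via stderr=STDOUT
    p = Popen([cmd], stdin=PIPE, stdout=PIPE, stderr=STDOUT,
              universal_newlines=True)
    for c in iter(lambda: p.stdout.read(1), ''):
        sys.stdout.write(c)
        f.write(c)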
If that doesn't fix it in your case, it means that the underlying program crashed without flushing its output. Output flushing behaves differently when output is redirected, which may explain why you get more output when running it without redirection via os.system (more about this issue: forcing a program to flush its standard output when redirected).
One lead yet to be explored would be to use winpty, which is an equivalent of unbuffer on Windows (see: What is the equivalent of the unbuffer program on Windows?). Something like:
cmd = ["winpty.exe","-Xallow-non-tty","-Xplain","TLWMA-0.09.exe"]
p = subprocess.Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
Related
I have a Python process running, with a logger object configured to write logs to a log file.
Now I am trying to call a Scala script from this Python process using the subprocess module.
subprocess.Popen(scala_run_command, stdout=subprocess.PIPE, shell=True)
The issue is that whenever the Python process exits, it hangs the shell, which comes back to life only after explicitly running the stty sane command. My guess is that this is caused by the Scala script writing to the shell; something in its stdout causes the shell to lose its sanity.
For the same reason, I wanted to capture the output of the Scala script in my default log file, which I have not managed to do despite trying several approaches.
So the question boils down to: how do I get the stdout of a shell command run through the subprocess module into a log file? Even if there is a better way to achieve this than subprocess.run, I would love to hear the ideas.
The current state of code looks like this.
__echo_command = 'echo ":load %s"'
__spark_console_command = 'spark;'

def run_scala_script(self, script):
    echo_command = self.__echo_command % script
    spark_console_command = self.__spark_console_command
    echo_result = subprocess.run(echo_command, stdout=subprocess.PIPE, shell=True)
    result = subprocess.run(spark_console_command, stdout=subprocess.PIPE, shell=True,
                            input=echo_result.stdout)
    logger.info('Scala script %s completed successfully' % script)
    logger.info(result.stdout)
Use
p = subprocess.Popen(...)
followed by
stdout, stderr = p.communicate()
and then stdout and stderr will contain the output bytes from the subprocess' output streams. You can then log the stdout value.
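For example, a minimal sketch using the scala_run_command from your question (the logger configuration is assumed to already exist):

import subprocess

p = subprocess.Popen(scala_run_command,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     shell=True)
stdout, stderr = p.communicate()  # waits for the command to finish

# stdout and stderr are bytes; decode them before logging
logger.info(stdout.decode('utf-8', errors='replace'))
if stderr:
    logger.warning(stderr.decode('utf-8', errors='replace'))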
I tried to write code that can execute Python scripts easily, but when I used the subprocess library like this:
import subprocess
print(subprocess.Popen("py setup.py install", shell = True, stdout = subprocess.PIPE).stdout.read())
print(subprocess.Popen("py setup.py py2exe", shell = True, stdout = subprocess.PIPE).stdout.read())
I only saw this result:
b''
Please help me.
Most likely the commands you are trying to run are producing output on stderr, which your code does not display. It is possible to send the stderr messages to stdout if you don't want to handle them separately.
I'll use a different command in the subprocess that is relatively safe. And I will break it up a little instead of having one long line.
import subprocess

p = subprocess.Popen("python filedoesntexist",
                     shell=True,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
print(p.stdout.read())
Note that I added the parameter stderr=subprocess.STDOUT; this sends all the error messages to stdout. The subprocess tries to run "python filedoesntexist", and since filedoesntexist is a file that doesn't exist, it will print this message:
b"python: can't open file 'filedoesntexist': [Errno 2] No such file or directory\n"
But you might just want to get the string instead of bytes, and you can add the parameter universal_newlines=True like this:
p = subprocess.Popen("python filedoesntexist",
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True)
print(p.stdout.read())
Now it prints just the string like this:
python: can't open file 'filedoesntexist': [Errno 2] No such file or directory
For additional information, see the Python documentation.
Edit
The documentation recommends using run(), which can be done like this (updated after comments from J.F. Sebastian):
subprocess.run(["python", "filedoesntexist"])
If you need to handle stdout in some way, add parameters described earlier in the Popen examples.
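For instance, a sketch that captures both streams as text with run(), mirroring the earlier Popen example:

import subprocess

# Capture stdout and redirect stderr into it, decoded as text
result = subprocess.run(["python", "filedoesntexist"],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT,
                        universal_newlines=True)
print(result.stdout)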
Basically I'm trying to automate some Linux installers (and other tasks) using the subprocess library (Popen).
In the past I've been able to open processes like this:
self.process = subprocess.Popen(self.executable,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT,
                                universal_newlines=True, shell=True)
output, cmdError = self.process.communicate()
I can then print output or cmdError for error messages and this works well for single processes or commands.
But when I need to interact with a subprocess and examine the output, it is very difficult. Here is my code for doing this:
def ExecProcessWithAnswers(self):
    self.process = subprocess.Popen(self.executable,
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.STDOUT,
                                    stdin=subprocess.PIPE,
                                    universal_newlines=True, shell=True)
    while self.process.poll() is None:
        print self.process.stdout.readline()
Basically the idea is that I would poll the output (stdout pipe) and then send commands when a certain input is requested from the installer (stdin).
I've tried flushing the buffer, using 3 different ways to read/iterate the output from stdout, but all of them just block or only give me a small fraction of the output text. On the other hand, if I use the communicate method, I get all of the text I expect, but it terminates the process. I've also had EOF errors and other random things.
I've read around: some guides say this is a bug in 2.6.x but it is still in 2.7.x - apparently the stdout is buffered and cannot be changed. I've tried many different ways of parsing the output from various threads here but I still can't get this to work on 2.7.X.
Surely someone must know how to interact with a subprocess? Is my only option here to use pexpect?
I can't really switch to Python 3.x within my environment. I was hoping this would be fairly straightforward :(
Cheers
Edit: I've also tried removing the different Pipes, writing to files, changing the buffer size on popen, disabling the shell and universal newlines, etc.
My problem is this: I need to get output from a subprocess, and I am using the following code to call it (feel free to ignore the long arguments; the important thing is the stdout=subprocess.PIPE):
(stdout, stderr) = subprocess.Popen([self.ChapterToolPath, "-x", book.xmlPath, "-a", book.aacPath, "-o", book.outputPath + "/" + fileName + ".m4b"], stdout=subprocess.PIPE).communicate()
print stdout
Thanks to an answer below, I've been able to get the output of the program, but I still end up waiting for the process to terminate before I get anything. The interesting thing is that in my debugger, all sorts of text fly by in the console and are ignored, but the moment anything is written to the console in black (I am using PyCharm), the program continues without a problem. Could the main program be waiting for some kind of output in order to move on? That would make sense, because I am trying to communicate with it... Is there a difference between text that I can see in the console and actual text that makes it to stdout? And how would I collect the text written to the console?
Thanks!
The first line of the documentation for subprocess.call() describes it as such:
Run the command described by args. Wait for command to complete, then return the returncode attribute.
Thus, it necessarily waits for the subprocess to exit.
subprocess.Popen(), by contrast, does not do this; it returns a handle on a process with which one can then communicate().
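A minimal illustration of the difference (ls is just a stand-in command):

import subprocess

# call() blocks until the child exits and returns its exit code.
returncode = subprocess.call(["ls", "-l"])

# Popen() returns immediately with a process handle; communicate()
# then waits for the child and collects its output.
p = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
stdout, _ = p.communicate()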
To get all output from a program:
from subprocess import check_output as qx
output = qx([program, arg1, arg2, ...])
To get output while the program is running:
from subprocess import Popen, PIPE
p = Popen([program, arg1, ...], stdout=PIPE)
for line in iter(p.stdout.readline, ''):
    print line,
There might be a buffering issue on the program's side if it prints line by line when run interactively but buffers its output when run as a subprocess. There are various solutions depending on your OS or the program; e.g., you could run it using the pexpect module.
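For example, a minimal sketch with pexpect (POSIX only; the program name and arguments are placeholders):

import sys
import pexpect

# pexpect runs the child in a pseudo-terminal, so many programs fall back
# to line buffering as if they were running interactively.
child = pexpect.spawn('program arg1 arg2', encoding='utf-8', timeout=None)
for line in child:
    sys.stdout.write(line)
child.close()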
I am trying to start a program (HandBrakeCLI) as a subprocess or thread from within Python 2.7. I have gotten as far as starting it, but I can't figure out how to monitor its stderr and stdout.
The program outputs its status (% done) and info about the encode to stderr and stdout, respectively. I'd like to be able to periodically retrieve the % done from the appropriate stream.
I've tried calling subprocess.Popen with stderr and stdout set to PIPE and using subprocess.communicate, but it sits and waits until the process is killed or completes and only then retrieves the output. That doesn't do me much good.
I've got it up and running as a thread, but as far as I can tell I still have to eventually call subprocess.Popen to execute the program and run into the same wall.
Am I going about this the right way? What other options do I have, or how do I get this to work as described?
I have accomplished the same with ffmpeg. This is a stripped-down version of the relevant portions. bufsize=1 means line buffering and may not be needed.
def Run(command):
    proc = subprocess.Popen(command, bufsize=1,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                            universal_newlines=True)
    return proc

def Trace(proc):
    while proc.poll() is None:
        line = proc.stdout.readline()
        if line:
            # Process output here
            print 'Read line', line

proc = Run([handbrakePath] + allOptions)
Trace(proc)
Edit 1: I noticed that the subprocess (handbrake in this case) needs to flush after lines to use this (ffmpeg does).
Edit 2: Some quick tests reveal that bufsize=1 may not be actually needed.
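If the goal is to pull the percent-done figure out of each line, a regular expression inside the read loop is one option. A sketch, assuming the progress lines contain something like "42.50 %" (the exact HandBrakeCLI output format is an assumption here):

import re

# Assumed progress format, e.g. "Encoding: task 1 of 1, 42.50 %";
# adjust the pattern to whatever HandBrakeCLI actually prints.
PERCENT_RE = re.compile(r'(\d+(?:\.\d+)?)\s*%')

def TraceProgress(proc):
    while proc.poll() is None:
        line = proc.stdout.readline()
        match = PERCENT_RE.search(line)
        if match:
            print('Progress: %s%%' % match.group(1))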