I need to run an external exe file inside a Python script, and I need two things out of this:
Get whatever the exe writes to stdout (and stderr).
The exe stops executing only after I press the Enter key, and I can't change this behavior. I need the script to pass the Enter key input after it gets the output from the previous step.
This is what I have done so far, and I am not sure how to proceed from here:
import subprocess
first = subprocess.Popen(["myexe.exe"], shell=True, stdout=subprocess.PIPE)
from subprocess import Popen, PIPE, STDOUT

first = Popen(['myexe.exe'], stdout=PIPE, stderr=STDOUT, stdin=PIPE)
while first.poll() is None:
    # read1() returns whatever is available right now; a plain read()
    # would block until EOF and deadlock against the prompt
    data = first.stdout.read1(4096)
    if b'press enter to' in data:
        first.stdin.write(b'\n')
        first.stdin.flush()
first.stdin.close()
first.stdout.close()
This pipes stdin as well; do not forget to close your open file handles (stdin and stdout are file handles of a sort).
Also, avoid shell=True if at all possible. I use it a lot myself, but best practice says you shouldn't.
I assumed Python 3 here, where stdin and stdout expect and produce bytes.
first.poll() checks for an exit code from your exe; if it returns None, the process is still running.
Some other tips
Passing arguments to Popen can be tedious; one neat trick is:
import shlex
Popen(shlex.split(cmd_str), shell=False)
It preserves space-separated arguments with quotes around them. For instance, python myscript.py debug "pass this parameter somewhere" results in three parameters in sys.argv: ['myscript.py', 'debug', 'pass this parameter somewhere']. This might be useful in the future when working with Popen.
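A quick demonstration of what shlex.split produces (cmd_str here is just an illustrative value):
import shlex

cmd_str = 'python myscript.py debug "pass this parameter somewhere"'
print(shlex.split(cmd_str))
# prints: ['python', 'myscript.py', 'debug', 'pass this parameter somewhere']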
Another good idea is to check whether there is output in stdout before reading from it; otherwise the read might hang the application. To do this you could use select.
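A minimal sketch using select with the first object from above; note that select on pipe handles works on POSIX systems, while on Windows select only supports sockets:
import select

# wait up to one second for data to become readable on first.stdout
ready, _, _ = select.select([first.stdout], [], [], 1.0)
if ready:
    data = first.stdout.read1(4096)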
Or you could use pexpect, which is often used with SSH, since the prompt lives in another user space than your application. When the program asks for input, you need to either fork your exe manually and read from that specific pid with os.read(), or use pexpect.
Related
I'm trying to write a Python script that starts a subprocess and writes to the subprocess's stdin. I'd also like to be able to determine an action to be taken if the subprocess crashes.
The process I'm trying to start is a program called nuke, which has its own built-in version of Python that I'd like to submit commands to, and then tell it to quit after the commands execute. So far I've worked out that if I start Python from the command prompt and then start nuke as a subprocess, I can type commands into nuke. But I'd like to put all this in a script, so that the master Python program can start nuke, write to its standard input (and thus into its built-in version of Python), and tell it to do snazzy things. So I wrote a script that starts nuke like this:
subprocess.call(["C:/Program Files/Nuke6.3v5/Nuke6.3", "-t", "E:/NukeTest/test.nk"])
Then nothing happens because nuke is waiting for user input. How would I now write to standard input?
I'm doing this because I'm running a plugin with nuke that causes it to crash intermittently when rendering multiple frames. So I'd like this script to be able to start nuke, tell it to do something, and then try again if it crashes. So if there is a way to catch a crash and still be OK, that'd be great.
It might be better to use communicate:
from subprocess import Popen, PIPE, STDOUT
p = Popen(['myapp'], stdout=PIPE, stdin=PIPE, stderr=PIPE)
stdout_data = p.communicate(input='data_to_write')[0]
"Better", because of this warning:
Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.
To clarify some points:
As jro has mentioned, the right way is to use Popen.communicate.
Yet, when feeding stdin using communicate with input, you need to create the subprocess with stdin=subprocess.PIPE, according to the docs:
Note that if you want to send data to the process’s stdin, you need to create the Popen object with stdin=PIPE. Similarly, to get anything other than None in the result tuple, you need to give stdout=PIPE and/or stderr=PIPE too.
Also, qed has mentioned in the comments that for Python 3.4 you need to encode the string, meaning you need to pass bytes to input rather than a string. This is not entirely true. According to the docs, if the streams were opened in text mode, the input should be a string (the source is the same page):
If streams were opened in text mode, input must be a string. Otherwise, it must be bytes.
So, if the streams were not opened explicitly in text mode, then something like the following should work:
import subprocess
command = ['myapp', '--arg1', 'value_for_arg1']
p = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output = p.communicate(input='some data'.encode())[0]
I've deliberately left stderr above as STDOUT, as an example.
That being said, sometimes you might want the output of another process rather than building it up from scratch. Let's say you want to run the equivalent of echo -n 'CATCH\nme' | grep -i catch | wc -m. This should normally return the number of characters in 'CATCH' plus a newline character, which results in 6. The point of the echo here is to feed the CATCH\nme data to grep. So we can feed the data to grep via stdin in the Python subprocess chain as a variable, and then pass the stdout as a PIPE to the wc process's stdin (getting rid of the extra newline character along the way):
import subprocess
what_to_catch = 'catch'
what_to_feed = 'CATCH\nme'
# We create the first subprocess, note that we need stdin=PIPE and stdout=PIPE
p1 = subprocess.Popen(['grep', '-i', what_to_catch], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# We immediately run the first subprocess and get the result
# Note that we encode the data, otherwise we'd get a TypeError
p1_out = p1.communicate(input=what_to_feed.encode())[0]
# The result includes a trailing '\n';
# we get rid of it in a VERY hacky way
p1_out = p1_out.decode().strip().encode()
# We create the second subprocess, note that we need stdin=PIPE
p2 = subprocess.Popen(['wc', '-m'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# We run the second subprocess feeding it with the first subprocess' output.
# We decode the output to convert to a string
# We still have a '\n', so we strip that out
output = p2.communicate(input=p1_out)[0].decode().strip()
This is somewhat different from the response here, where you pipe two processes directly without adding the data in Python.
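For comparison, a sketch of that direct-piping approach, using the same grep and wc commands as above:
import subprocess

# connect grep's stdout straight to wc's stdin, without routing
# the intermediate data through Python
p1 = subprocess.Popen(['grep', '-i', 'catch'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p2 = subprocess.Popen(['wc', '-m'], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdin.write(b'CATCH\nme')
p1.stdin.close()   # signal EOF to grep
p1.stdout.close()  # drop our copy so grep can get SIGPIPE if wc exits early
output = p2.communicate()[0].decode().strip()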
Hope that helps someone out.
Since Python 3.5, there is the subprocess.run() function, which provides a convenient way to initialize and interact with Popen() objects. run() takes an optional input argument, through which you can pass things to stdin (like you would using Popen.communicate(), but all in one go).
Adapting jro's example to use run() would look like:
import subprocess
p = subprocess.run(['myapp'], input='data_to_write', capture_output=True, text=True)
After execution, p will be a CompletedProcess object. Setting capture_output to True (added in Python 3.7) makes a p.stdout attribute available, which gives us access to the output, if we care about it. text=True tells it to work with regular strings rather than bytes. If you want, you can also add check=True to make it raise an error if the exit status (accessible regardless via p.returncode) isn't 0.
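For example, a sketch with check=True added (myapp and its input are the placeholders from the example above):
import subprocess

try:
    p = subprocess.run(['myapp'], input='data_to_write',
                       capture_output=True, text=True, check=True)
    print(p.stdout)
except subprocess.CalledProcessError as e:
    # raised whenever the command exits with a non-zero status
    print('myapp failed with exit status', e.returncode)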
This is the "modern"/quick and easy way to do to this.
One can write data to the subprocess object on-the-fly, instead of collecting all the input in a string beforehand to pass through the communicate() method.
This example sends a list of animal names to the Unix utility sort, and sends the output to standard output.
import sys, subprocess
p = subprocess.Popen('sort', stdin=subprocess.PIPE, stdout=sys.stdout)
for v in ('dog', 'cat', 'mouse', 'cow', 'mule', 'chicken', 'bear', 'robin'):
    p.stdin.write(v.encode() + b'\n')
p.communicate()
Note that writing to the process is done via p.stdin.write(v.encode()). I tried using
print(v.encode(), file=p.stdin), but that failed with the message TypeError: a bytes-like object is required, not 'str'. I haven't figured out how to get print() to work with this.
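One way that should work is to open the pipe in text mode with text=True (Python 3.7+; older versions spell it universal_newlines=True), so that p.stdin accepts str and print() can write to it:
import sys, subprocess

# with text=True, p.stdin is a text stream, so print() works directly
p = subprocess.Popen('sort', stdin=subprocess.PIPE, stdout=sys.stdout, text=True)
for v in ('dog', 'cat', 'mouse', 'cow', 'mule', 'chicken', 'bear', 'robin'):
    print(v, file=p.stdin)
p.communicate()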
You can provide a file-like object to the stdin argument of subprocess.call().
The documentation for the Popen object applies here.
To capture the output, you should instead use subprocess.check_output(), which takes similar arguments. From the documentation:
>>> subprocess.check_output(
... "ls non_existent_file; exit 0",
... stderr=subprocess.STDOUT,
... shell=True)
b'ls: non_existent_file: No such file or directory\n'
I am attempting to wrap a program that is routinely used at work. When called with an insufficient number of arguments, or with a misspelled argument, the program issues a prompt to the user, asking for the needed input. As a consequence, when calling the routine with subprocess.Popen, the routine never sends any information to stdout or stderr when wrong parameters are passed. Popen.communicate() and Popen.stdout.read(1) both wait for a newline character before any information becomes available.
Is there any way to retrieve information from subprocess.Popen.stdout before the newline character is issued? If not, is there any method that can be used to determine whether the subprocess is waiting for input?
First thing to try: use the bufsize argument to Popen, and set it to 0:
subprocess.Popen(args, bufsize=0, ...)
Unfortunately, whether or not this works also depends upon how the subprocess flushes its output, and I presume you don't have much control over that.
On some platforms, when data written to stdout actually gets flushed depends on whether the underlying I/O library detects an interactive terminal or a pipe. So while you might think the data is there waiting to be read (because that's how it works in a terminal window), it might actually be line-buffered when you're running the same program as a subprocess from within Python.
Added: I just realised that bufsize=0 was the default anyway in Python 2 (since Python 3.3.1 the default is -1, i.e. buffered). Nuts.
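For completeness, a sketch of what the unbuffered read would look like (./prompt.sh stands in for the prompting program; the next answer defines such a script):
import subprocess

# bufsize=0 gives an unbuffered pipe, so read(1) returns as soon as a
# single byte arrives instead of waiting for a full line; it still
# blocks until the child actually flushes something
p = subprocess.Popen(['./prompt.sh'], bufsize=0,
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
ch = p.stdout.read(1)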
After asking around quite a bit, someone pointed me to the solution. Use pexpect.spawn and pexpect.expect. For example:
Bash "script" in a file titled prompt.sh to emulate the problem - read cannot be called directly from pexpect.spawn.
#!/bin/bash
read -p "This is a prompt: "
This will hang when called by subprocess.Popen. It can be handled by pexpect.spawn, though:
import pexpect

child = pexpect.spawn('./prompt.sh')
search = 'This is a prompt: '  # the pattern to wait for
child.expect(search)           # returns 0, the index of the matched pattern
print(child.after)             # prints the matched text: 'This is a prompt: '
A list, a compiled regex, or a list of compiled regexes can also be used in place of the string in pexpect.expect to deal with differing prompts.
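For instance, a sketch using a list of patterns (the extra patterns are illustrative):
import re
import pexpect

child = pexpect.spawn('./prompt.sh')
# expect() returns the index of whichever pattern matched first
index = child.expect(['This is a prompt: ', re.compile('[Pp]assword: '), pexpect.EOF])
if index == 0:
    child.sendline('')  # answer the prompt by sending Enter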
I'm having trouble getting this to work. Basically, I have a Python program that expects some data on stdin and reads it with sys.stdin.readlines(). I have tested this, and it works without problems with things like echo "" | myprogram.py.
I have a second program that uses the subprocess module to call the first program, with the following code:
proc = subprocess.Popen(final_shell_cmd,
                        stderr=subprocess.PIPE, stdout=subprocess.PIPE,
                        shell=False, env=shell_env)
f = ' '.join(shell_cmd_args)
#f.append('\4')
return proc.communicate(f)
The second program is a daemon, and I have discovered that it works well as long as I hit Ctrl-D after calling it from the first program.
So it seems there is something wrong with subprocess not closing the file, and my first program keeps expecting more input when nothing more should be sent.
Does anyone have any idea how I can get this working?
The main problem here is that shell_cmd_args may contain passwords and other sensitive information that we do not want to pass as part of the command name, as it would show up in tools like ps.
You want to redirect the subprocess's stdin, so you need stdin=subprocess.PIPE.
You should not need to write Control-D ('\4') to the file object. Control-D tells the shell to close the standard input that's connected to the program. The program doesn't see a Control-D character in that context.
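Putting both points together, a sketch of the corrected call (final_shell_cmd, shell_env, and shell_cmd_args are the variables from the question):
import subprocess

# stdin=PIPE lets communicate() feed the data; communicate() then closes
# the pipe, so the child sees EOF without any Control-D
proc = subprocess.Popen(final_shell_cmd,
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        shell=False, env=shell_env)
out, err = proc.communicate(' '.join(shell_cmd_args).encode())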
Is there a way to do multiple calls in the same "session" in Popen? For instance, can I make a call through it and then another one after it without having to concatenate the commands into one long string?
You're not "making a call" when you use popen, you're running an executable and talking to it over stdin, stdout, and stderr. If the executable has some way of doing a "session" of work (for instance, by reading lines from stdin) then, yes, you can do it. Otherwise, you'll need to exec multiple times.
subprocess.Popen is (mostly) just a wrapper around fork(2) and execvp(3).
Assuming you want to be able to run a shell and send it multiple commands (and read their output), it appears you can do something like this:
from subprocess import *
p = Popen(['/bin/sh'], shell=False, stdin=PIPE, stdout=PIPE, stderr=PIPE)
After which, e.g.,:
>>> p.stdin.write(b"cat /etc/motd\n")
14
>>> p.stdin.flush()
>>> p.stdout.readline()
b'Welcome to dev-linux.mongo.com.\n'
(Of course, you should check stderr too, or else ask Popen to merge it with stdout). One major problem with the above is that the stdin and stdout pipes are in blocking mode, so it's easy to get "stuck" waiting forever for output from the shell. Although I haven't tried it, there's a recipe at the ActiveState site that shows how to address this.
Update: after looking at the related questions/answers, it looks like it might be simpler to just use Python's built-in select module to see if there's data to read on stdout (you should also do the same for stderr, of course), e.g.:
>>> import select
>>> select.select([p.stdout], [], [], 0)
([<_io.BufferedReader name=3>], [], [])
For instance, can I make a call through it and then another one after it without having to concatenate the commands into one long string?
Sounds like you're using shell=True. Don't, unless you need to. Instead use shell=False (the default) and pass in a command/arg list.
Is there a way to do multiple calls in the same "session" in Popen? For instance, can I make a call through it and then another one after it without having to concatenate the commands into one long string?
Any reason you can't just create two Popen instances and wait/communicate on each as necessary? That's the normal way to do it, if I understand you correctly.
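A minimal sketch of that approach, with echo standing in for your actual commands:
import subprocess

# one Popen instance per command; communicate() waits for each to finish
p1 = subprocess.Popen(['echo', 'first command'], stdout=subprocess.PIPE)
out1, _ = p1.communicate()

p2 = subprocess.Popen(['echo', 'second command'], stdout=subprocess.PIPE)
out2, _ = p2.communicate()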