How to interact with an external program in Python 3? - python

Using Python 3, I want to execute an external program, interact with it by providing some text into standard input, and then print the result.
As an example, I created the following external program, called test.py:
print('Test Program')
print('1 First option, 2 Second Option')
choice = input()
if choice == '1':
    second_param = input('Insert second param: ')
    result = choice + ' ' + second_param
    print(result)
If I run this program directly, it works as expected. If I provide the input 1 and then 2, the result is 1 2.
I want to run this program in another script and interact with it to print the same result.
After reading the documentation for subprocess, and checking out similar questions on SO, I ended up with the following:
from subprocess import Popen, PIPE

EXTERNAL_PROG = 'test.py'
p = Popen(['py', EXTERNAL_PROG], stdout=PIPE, stdin=PIPE, shell=True)
print(p.stdout.readline().decode('utf-8'))
print(p.stdout.readline().decode('utf-8'))
p.stdin.write(b'1\n')
p.stdin.write(b'2\n')
print(p.stdout.readline().decode('utf-8'))
However, when I run the code, the program freezes after printing 1 First option, 2 Second Option, and I need to restart my shell. This is probably caused by the fact that p.stdout.readline() expects to find a newline character, and the prompt for the second param doesn't contain one.
I found 2 SO questions that talk about something similar but I couldn’t get it to work.
Here, the answer recommends using the pexpect module. I tried to adapt the code to my situation but it didn’t work.
Here, the suggestion is to use -u, but adding it didn’t change anything.
I know that a solution can be found by modifying test.py, but this is not possible in my case since I need to use another external program and this is just a minimal example based on it.

If your program's input is fixed (i.e. it does not change at run time), then this solution can be relevant.
Answer
First, create an input file: name it input.txt and put 1 and 2 in it, each on its own line.
import os

command = "python test.py < input.txt > output.txt 2>&1"
# now run this command
os.system(command)
When you run this, you will find output.txt in the same directory. If your program executed successfully, output.txt contains the output of test.py; if your code raised any error, the error ends up in output.txt as well (because of the 2>&1 redirection).
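The same idea also works without the shell redirection; here is a minimal sketch using subprocess.run (assuming Python 3.7+ for capture_output and text), passing the fixed input directly instead of writing it to input.txt:

import subprocess

# Feed both answers up front and capture everything the program prints.
result = subprocess.run(['python', 'test.py'],
                        input='1\n2\n',
                        capture_output=True,
                        text=True)
print(result.stdout)  # normal output of test.py
print(result.stderr)  # any error output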
Answer As You Want
main.py becomes:
import sys
from subprocess import PIPE, Popen
EXTERNAL_PROG = 'test.py'
p = Popen(['python3', EXTERNAL_PROG], stdout=PIPE, stdin=PIPE, stderr=PIPE)
print(p.stdout.readline())
print(p.stdout.readline())
p.stdin.write(b'1\n')
p.stdin.write(b'2\n')
p.stdin.flush()
print(p.stdout.readline())
print(p.stdout.readline())
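If you prefer to work with strings instead of bytes, a minimal variation of the same idea (assuming Python 3.7+ for the text argument) is:

from subprocess import PIPE, Popen

p = Popen(['python3', 'test.py'], stdout=PIPE, stdin=PIPE, stderr=PIPE, text=True)
print(p.stdout.readline())   # 'Test Program'
print(p.stdout.readline())   # '1 First option, 2 Second Option'
p.stdin.write('1\n')
p.stdin.write('2\n')
p.stdin.flush()              # make sure the child actually receives the input
print(p.stdout.readline())   # prompt and result, e.g. 'Insert second param: 1 2'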

Related

How can I check if a command was successful or not?

So, I'm trying to check if the command was successful or not when running it with subprocess.
I'm really bad at explaining but just look at my example:
Here's my code
import subprocess

output = subprocess.getoutput("sdf")
print(output)
I want to check if the output is:
'sdf' is not recognized as an internal or external command,
operable program or batch file.
I tried this code:
error_temp = fr"'sdf' is not recognized as an internal or external command, operable program or batch file."
if output == error_temp:
    print("'sdf' was not recognized by this system, please register this command and try again later.")
else:
    print(output)
But it's not really working; I think it's got to do with a line break in the output...
Any help is appreciated, thanks.
EDIT:
I fixed this problem thanks to #Cristian
Here's my updated code:
status = subprocess.getstatusoutput("sdf")
print(status[0])
You can use the getstatusoutput function from the same package. It returns a tuple with the exit code and the message. If the exit code is 0, it is considered as a successful completion. Other codes indicate an abnormal completion.
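For example, a minimal sketch of that check (reusing the nonexistent sdf command from the question):

import subprocess

# getstatusoutput returns (exitcode, output); 0 means the command succeeded.
exitcode, output = subprocess.getstatusoutput("sdf")
if exitcode == 0:
    print(output)
else:
    print("'sdf' was not recognized by this system, please register this command and try again later.")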
I just want to show an alternative to getstatusoutput, which is an older function that always launches a shell to execute your command; this may be inefficient if you do not need the facilities a shell provides (such as wildcard expansion).
The following uses subprocess.run (which can also use a shell to execute your program if you specify shell=True). The first example does not capture the output from the executed program and the second example does. The program being run is a small Python program, test.py, executed with the command python test.py.
test.py
print('It works.\n')
Example 1 -- Do not capture output
import subprocess
completed_process = subprocess.run(['python', 'test.py'])
print(completed_process.returncode)
Prints:
It works.
0
Example 2 -- Capture output
import subprocess
completed_process = subprocess.run(['python', 'test.py'], capture_output=True, text=True)
print(completed_process.returncode)
print(completed_process.stdout)
Prints:
0
It works.

Python - Run process and wait for output

I want to run a program, wait for its output, send inputs to it, and repeat until a condition is met.
All I could find was questions about waiting for a program to finish, which is NOT the case. The process will still be running, it just won't be giving any (new) outputs.
Program output is in stdout and in a log file, either can be used.
Using Linux.
Code so far:
import subprocess
flag = True
vsim = subprocess.Popen(['./run_vsim'],
                        stdin=subprocess.PIPE,
                        shell=True,
                        cwd='path/to/program')

while flag:
    with open(log_file, 'r') as f:
        for l in f:
            if condition:
                break
    vsim.stdin.write(b'do something\n')
    vsim.stdin.flush()
    vsim.stdin.write(b'do something else\n')
    vsim.stdin.flush()
As is, the "do something" input is being sent multiple times even before the program has finished starting up. Also, the log file is read before the program finishes running the command from the last while iteration. That causes it to buffer the inputs, so it keeps executing the commands even after the condition has been met.
I could use time.sleep after each stdin.write, but since the time needed to execute each command is variable, I would need to use delays longer than necessary, making the Python script slower. Also, that's a dumb solution to this.
Thanks!
If you are using Python 3, you can try updating your code to use subprocess.run instead. It should wait for your task to complete and return the output.
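For example, a minimal sketch (reusing the ./run_vsim command, working directory, and input lines from the question, and assuming the whole input can be supplied up front; requires Python 3.7+ for capture_output and text):

import subprocess

# Run the program to completion, feeding all input at once and capturing its output.
result = subprocess.run(['./run_vsim'],
                        input='do something\ndo something else\n',
                        capture_output=True,
                        text=True,
                        cwd='path/to/program')
print(result.returncode)
print(result.stdout)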
As of 2019, you can use subprocess.getstatusoutput() to run a process and wait for the output, i.e.:
import subprocess
args = "echo 'Sleep for 5 seconds' && sleep 5"
status_output = subprocess.getstatusoutput(args)
if status_output[0] == 0:  # exitcode 0 means NO error
    print("Ok:", status_output[1])
else:
    print("Error:", status_output[1])
From python docs:
subprocess.getstatusoutput(_cmd_)
Return (exitcode, output) of executing cmd in a shell.
Execute the string cmd in a shell with Popen.check_output() and return a 2-tuple (exitcode, output). The locale encoding is used; see the notes on Frequently Used Arguments for more details.
A trailing newline is stripped from the output. The exit code for the command can be interpreted as the return code of subprocess. Example:
>>> subprocess.getstatusoutput('ls /bin/ls')
(0, '/bin/ls')
>>> subprocess.getstatusoutput('cat /bin/junk')
(1, 'cat: /bin/junk: No such file or directory')
>>> subprocess.getstatusoutput('/bin/junk')
(127, 'sh: /bin/junk: not found')
>>> subprocess.getstatusoutput('/bin/kill $$')
(-15, '')
You can use the commands module instead of subprocess (Python 2 only; it was removed in Python 3). Here is an example with the ls command:
import commands
status_output = commands.getstatusoutput('ls ./')
print status_output[0] #this will print the return code (0 if everything is fine)
print status_output[1] #this will print the output (list the content of the current directory)

Start a subprocess, wait for it to complete and then retrieve data in Python

I'm struggling to get some python script to start a subprocess, wait until it completes and then retrieve the required data. I'm quite new to Python.
The command I wish to run as a subprocess is
./bin.testing/Eva -t --suite="temp0"
Running that command by hand in the Linux terminal produces:
in terminal mode
Evaluation error = 16.7934
I want to run the command as a python sub-process, and receive the output back. However, everything I try seems to skip the second line (ultimately, it's the second line that I want.) At the moment, I have this:
def job(self, fen_file):
    from subprocess import Popen, PIPE
    from sys import exit
    try:
        eva = Popen('{0}/Eva -t --suite="{1}"'.format(self.exedir, fen_file),
                    shell=True, stdout=PIPE, stderr=PIPE)
        stdout, stderr = eva.communicate()
    except:
        print('Error running test suite ' + fen_file)
        exit("Stopping")
    print(stdout)
    .
    .
    .
    return 0
All this seems to produce is
in terminal mode
0
with the important line missing. The print statement is just so I can see what I am getting back from the sub-process -- the intention is that it will be replaced with code that processes the number from the second line and returns the output (here I'm just returning 0 just so I can get this particular bit to work first. The caller of this function prints the result, which is why there is a zero at the end of the output.) exedir is just the directory of the executable for the sub-process, and fen-file is just an ascii file that the sub-process needs. I have tried removing the 'in terminal mode' from the source code of the sub-process and re compiling it, but that doesn't work -- it still doesn't return the important second line.
Thanks in advance; I expect what I am doing wrong is really very simple.
Edit: I ought to add that the subprocess Eva can take a second or two to complete.
Since the 2nd line is an error message, it's probably stored in your stderr variable!
To know for sure you can print your stderr in your code, or you can run the program on the command line and see if the output is split into stdout and stderr. One easy way is to do ./bin.testing/Eva -t --suite="temp0" > /dev/null. Any messages you get are stderr since stdout is redirected to /dev/null.
Also, typically with Popen the shell=True option is discouraged unless really needed. Instead pass a list:
[os.path.join(self.exedir, 'Eva'), '-t', '--suite=' + fen_file], shell=False, ...
This can avoid problems down the line if one of your arguments would normally be interpreted by the shell. (Note, I removed the ""'s, because the shell would normally eat those for you!)
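Putting both points together, a minimal sketch (reusing self.exedir and fen_file, so it belongs inside the job method from the question) might look like:

import os
from subprocess import Popen, PIPE

eva = Popen([os.path.join(self.exedir, 'Eva'), '-t', '--suite=' + fen_file],
            stdout=PIPE, stderr=PIPE)
stdout, stderr = eva.communicate()
print(stdout.decode())
print(stderr.decode())  # the 'Evaluation error = ...' line probably shows up here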
Try using subprocess.check_output.
import subprocess

output_lines = subprocess.check_output(['./bin.testing/Eva', '-t', '--suite="temp0"'])
for line in output_lines.splitlines():
    print(line)

Controlling a python script from another script

I am trying to learn how to write a script, control.py, that runs another script, test.py, in a loop a certain number of times. In each run it reads test.py's output and halts it if some predefined output is printed (e.g. the text 'stop now'), and then the loop continues its iteration (once test.py has finished, either on its own or by force). So, something along the lines of:
for i in range(n):
    os.system('test.py someargument')
    if output == 'stop now':  # stop the current test.py process and continue with the next iteration
        # output here is supposed to contain what test.py prints
The problem with the above is that, it does not check the output of test.py as it is running, instead it waits until test.py process is finished on its own, right?
Basically trying to learn how I can use a python script to control another one, as it is running. (e.g. having access to what it prints and so on).
Finally, is it possible to run test.py in a new terminal (i.e. not in control.py's terminal) and still achieve the above goals?
An attempt:
test.py is this:
from itertools import permutations
import random as random
perms = [''.join(p) for p in permutations('stop')]
for i in range(1000000):
    rand_ind = random.randrange(0, len(perms))
    print perms[rand_ind]
And control.py is this: (following Marc's suggestion)
import subprocess
command = ["python", "test.py"]
n = 10
for i in range(n):
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        output = p.stdout.readline().strip()
        print output
        #if output == '' and p.poll() is not None:
        #    break
        if output == 'stop':
            print 'success'
            p.kill()
            break
    #Do whatever you want
    #rc = p.poll() #Exit Code
You can use the subprocess module or also os.popen
os.popen(command[, mode[, bufsize]])
Open a pipe to or from command. The return value is an open file object connected to the pipe, which can be read or written depending on whether mode is 'r' (default) or 'w'.
With subprocess I would suggest
subprocess.call(['python.exe', command])
or subprocess.Popen, which is similar to os.popen (for instance).
With popen you can read the connected file-like object and check whether "Stop now" is there.
os.system is not deprecated and you can use it as well (but you won't get an object back from it); you can only check its return code at the end of execution.
With subprocess.call you can run it in a new terminal, or if you want to call ONLY test.py multiple times, you can put your script in a def main() and run main as many times as you want until "Stop now" is generated.
Hope this solves your query :-) otherwise comment again.
Looking at what you wrote above, you can also redirect the output to a file directly from the OS call, e.g. os.system('test.py *args >> /tmp/mickey.txt'), and then check the file on each round.
As said, popen gives you a file-like object that you can access.
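A minimal sketch of the os.popen idea (the exact command line is an assumption based on the question):

import os

# Open a read pipe to the child's stdout and scan it line by line.
pipe = os.popen('python test.py someargument')
for line in pipe:
    print(line.rstrip())
    if 'stop now' in line:
        break
# Closing the pipe makes the child exit on its next write (broken pipe).
pipe.close()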
What you are hinting at in your comment to Marc Cabos' answer is Threading
There are several ways Python can use the functionality of other files. If the content of test.py can be encapsulated in a function or class, then you can import the relevant parts into your program, giving you greater access to the runnings of that code.
As described in other answers you can use the stdout of a script, running it in a subprocess. This could give you separate terminal outputs as you require.
However if you want to run the test.py concurrently and access variables as they are changed then you need to consider threading.
Yes, you can use Python to control another program using stdin/stdout, but when reading another process's output there is often a buffering problem; in other words, the other process doesn't really output anything until it's done.
There are even cases in which the output is buffered or not depending on if the program is started from a terminal or not.
If you are the author of both programs then probably is better using another interprocess channel where the flushing is explicitly controlled by the code, like sockets.
You can use the "subprocess" library for that.
import subprocess
command = ["python", "test.py", "someargument"]
for i in range(n):
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        output = p.stdout.readline()
        if output == '' and p.poll() is not None:
            break
        if output == 'stop now':
            pass  # Do whatever you want
    rc = p.poll()  # Exit Code

One Python script giving "user input" to another python script

this is my first question, I hope I'm doing this right.
let's say I have this file:
"simple.py":
a=raw_input("your name?")
print "Hello",a
but with a different script, I want to execute "simple.py" many times, giving the input automatically, so that it would work like:
"everyone.py"
run simple.py input=Alice
run simple.py input=Bob
...
to get
"Hello Alice"
"Hello Bob"
...
I know it's possible to make "everyone.py" run "simple.py" by using os.system, but is there any practical way to do something like this? And what if the first script asks for input several times?
It's important that I CANNOT EDIT SIMPLE.PY, only the other file
Thanks in advance :)
For a case as simple as simple.py, you should be able to use subprocess.Popen:
import subprocess
child = subprocess.Popen(['python', 'simple.py'], stdin=subprocess.PIPE)
child.communicate('Alice')
For more complex cases, the Pexpect module may be useful. It helps automate interaction with normally-interactive programs by providing more convenient, robust interfaces to send input, wait for prompts, and read output. It should work in cases where Popen doesn't work or is more annoying.
import pexpect
child = pexpect.spawn('python simple.py')
child.expect("your name?")
child.sendline('Alice')
import sys
print "Hello",sys.argv[1]
running C:\Python26\python.exe simple.py Alice would produce
Hello Alice
There's a good example on how to get input from the system into a python application here:
http://www.tutorialspoint.com/python/python_command_line_arguments.htm
Since you mentioned that you cannot modify simple.py, you would need to automatically feed input into raw_input(), and the fastest way to do this is simply to pipe data into the script as "input":
C:> echo "Alice" | run simple.py
Unfortunately neither of the answers above worked for me so I came up with a third solution for others to try.
To send inputs from one python file to another (python version 3.7), I used three files.
File for running the subprocess
File for outputs (very simple)
File that needs the inputs
Here are the three files in the same order as above.
You don't need to print out the output, but I'll include the terminal output below the file examples.
The subprocess file:
from subprocess import Popen,PIPE
p1 = Popen(["python","output_file.py"], stdout=PIPE)
p2 = Popen(["python", "input_file.py"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()
output = p2.communicate()[0]
print(output)
The output file is very simple and there may be a way to work around it. Nevertheless, here is my version:
print(1)
print(2)
print('My String')
The input file requires type casting for numbers.
i = input('Enter a number: ')
j = input('Enter another: ')
k = int(i) + int(j)
print(k)
l = input('Tell me something. ')
print(l)
Here is the terminal output:
b'Enter a number: Enter another: 3\r\nTell me something. My String!\r\n'
