Controlling a python script from another script

I am trying to learn how to write a script control.py that runs another script test.py in a loop a certain number of times, reads its output in each run, and halts it if some predefined output is printed (e.g. the text 'stop now'), after which the loop continues with the next iteration (once test.py has finished, either on its own or by force). So something along the lines of:
for i in range(n):
    os.system('test.py someargument')
    if output == 'stop now': # stop the current test.py process and continue with next iteration
        # output here is supposed to contain what test.py prints
The problem with the above is that it does not check the output of test.py as it is running; instead it waits until the test.py process has finished on its own, right?
Basically I am trying to learn how I can use a Python script to control another one as it is running (e.g. having access to what it prints and so on).
Finally, is it possible to run test.py in a new terminal (i.e. not in control.py's terminal) and still achieve the above goals?
An attempt:
test.py is this:
from itertools import permutations
import random as random
perms = [''.join(p) for p in permutations('stop')]
for i in range(1000000):
    rand_ind = random.randrange(0, len(perms))
    print perms[rand_ind]
And control.py is this: (following Marc's suggestion)
import subprocess
command = ["python", "test.py"]
n = 10
for i in range(n):
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        output = p.stdout.readline().strip()
        print output
        #if output == '' and p.poll() is not None:
        #    break
        if output == 'stop':
            print 'success'
            p.kill()
            break
    #Do whatever you want
    #rc = p.poll() #Exit Code

You can use the subprocess module or also os.popen:
os.popen(command[, mode[, bufsize]])
Open a pipe to or from command. The return value is an open file object connected to the pipe, which can be read or written depending on whether mode is 'r' (default) or 'w'.
With subprocess I would suggest
subprocess.call(['python.exe', command])
or subprocess.Popen, which is similar to os.popen (for instance).
With popen you can read the connected object/file and check whether 'stop now' is there.
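For instance, a minimal sketch of the os.popen approach, assuming test.py prints 'stop now' on its own line (the exact command string is illustrative):

import os

# Read test.py's output line by line through a pipe.
with os.popen('python test.py someargument') as pipe:
    for line in pipe:
        if line.strip() == 'stop now':
            print('stop detected')
            break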
os.system is not deprecated and you can use it as well (but you won't get an object back from it); you can only check its return code at the end of execution.
With subprocess.call you can run it in a new terminal, or if you want to call ONLY test.py multiple times, you can put your script in a def main() and run main as often as you want until 'stop now' is generated (see the sketch after this answer).
Hope this solves your query :-) otherwise comment again.
Looking at what you wrote above, you can also redirect the output to a file directly from the OS call, e.g. os.system('python test.py args >> /tmp/mickey.txt'), and then check the file at each round.
As said, popen gives you a file-like object that you can read.
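A sketch of that def main() approach, assuming test.py is refactored so its work lives in a main() function that returns the text it would otherwise print (both the refactoring and the return value are assumptions, not part of the original test.py):

import test  # hypothetical: test.py refactored to expose a main() function

n = 10
for i in range(n):
    output = test.main('someargument')  # assumed to return the text test.py would print
    if output == 'stop now':
        break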

What you are hinting at in your comment to Marc Cabos' answer is threading.
There are several ways Python can use the functionality of other files. If the content of test.py can be encapsulated in a function or class, then you can import the relevant parts into your program, giving you greater access to the runnings of that code.
As described in other answers you can use the stdout of a script, running it in a subprocess. This could give you separate terminal outputs as you require.
However, if you want to run test.py concurrently and access variables as they are changed, then you need to consider threading.
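A minimal sketch of that idea; run_test below is a hypothetical stand-in for test.py's loop, refactored into a function that appends its output to a shared list (none of these names come from the question):

import threading

def run_test(arg, output_lines):
    # hypothetical stand-in for test.py's work
    for word in ['tops', 'spot', 'stop']:
        output_lines.append(word)

output_lines = []
t = threading.Thread(target=run_test, args=('someargument', output_lines))
t.start()

# the controlling code can inspect the shared list while (or after) the worker runs
while t.is_alive() and 'stop' not in output_lines:
    pass  # busy-wait is fine for a sketch; real code would sleep or use an Event
t.join()
if 'stop' in output_lines:
    print('stop detected')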

Yes, you can use Python to control another program using stdin/stdout, but when consuming another process's output there is often a problem of buffering; in other words, the other process doesn't really output anything until it's done.
There are even cases in which the output is buffered or not depending on whether the program is started from a terminal or not.
If you are the author of both programs then it is probably better to use another interprocess channel where flushing is explicitly controlled by the code, like sockets.
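A minimal self-contained sketch of that idea; the port number and the messages are made up, and both ends run in one script (via a thread) only to keep the example short:

import socket
import threading

HOST, PORT = '127.0.0.1', 50007  # assumed free local port

def worker():
    # plays the role of test.py: connects and sends status lines, flushing each one
    with socket.create_connection((HOST, PORT)) as conn:
        f = conn.makefile('w')
        for msg in ['working', 'working', 'stop now']:
            f.write(msg + '\n')
            f.flush()  # flushing is under our control, unlike stdout buffering

# socket.create_server requires Python 3.8+
with socket.create_server((HOST, PORT)) as server:
    threading.Thread(target=worker, daemon=True).start()
    conn, _ = server.accept()
    with conn, conn.makefile('r') as lines:
        for line in lines:
            if line.strip() == 'stop now':
                print('controller: stop detected')
                break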

You can use the "subprocess" library for that.
import subprocess
command = ["python", "test.py", "someargument"]
for i in range(n):
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        output = p.stdout.readline()
        if output == '' and p.poll() is not None:
            break
        if output == 'stop now':
            #Do whatever you want
            rc = p.poll() #Exit Code

Related

Python: program hangs when trying to read from subprocess stdout while it is running

I am trying to communicate with a C++ program (let's call it script A) using the Python subprocess module. Script A is running alongside the Python program and is constantly interacted with. My goal is to send script A input commands, and capture the outputs that script A then prints to STDOUT. I'm working on Windows 10.
Here is a snippet describing the logic:
proc = subprocess.Popen([".\\build\\bin.exe"], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
terminate = False
while not terminate:
    command = input("Enter your command here: ")
    if command == "q":
        terminate = True
    else:
        proc.stdin.write(command.encode()) # send input to script A
        output = proc.stdout.readline().decode() # problematic line, trying to capture output from script A
        print(f"Output is: {output}")
The problem is that, while script A writes output to STDOUT after each command like I expect it to, the Python script hangs when it reaches the line highlighted above. I tried to capture the output using proc.stdout.read(1) with bufsize=0 on the call to Popen, and with for line in iter(proc.stdout.readlines()), and some other ways, but the problem persists.
Would appreciate any help on this because nothing I tried is working for me.
Thanks in advance!
You already suggested using bufsize=0, which seems the right solution. However, this only affects buffering on the Python side. If the executable you are calling uses buffered input or output, I don't think there's anything you can do about it (as also mentioned here).
If both programs are under your own control, then you can easily make this work. Here is an example. For simplicity I created two Python scripts that interact with each other in a similar way you are doing. Note that this doesn't differ very much from the situation with a C++ application, since in both cases an executable is started as subprocess.
File pong.py (simple demo application that reads input and responds to it - similar to your "script A"):
while True:
    try:
        line = input()
    except EOFError:
        print('EOF')
        break
    if line == 'ping':
        print('pong')
    elif line == 'PING':
        print('PONG')
    elif line in ['exit', 'EXIT', 'quit', 'QUIT']:
        break
    else:
        print('what?')
print('BYE!')
File main.py (the main program that communicates with pong.py):
import subprocess
class Test:
    def __init__(self):
        self.proc = subprocess.Popen(['python.exe', 'pong.py'], bufsize=0, encoding='ascii',
                                     stdout=subprocess.PIPE, stdin=subprocess.PIPE)

    def talk(self, tx):
        print('TX: ' + tx)
        self.proc.stdin.write(tx + '\n')
        rx = self.proc.stdout.readline().rstrip('\r\n')
        print('RX: ' + rx)

def main():
    test = Test()
    test.talk('ping')
    test.talk('test')
    test.talk('PING')
    test.talk('exit')

if __name__ == '__main__':
    main()
Output of python main.py:
TX: ping
RX: pong
TX: test
RX: what?
TX: PING
RX: PONG
TX: exit
RX: BYE!
Of course there are other solutions as well. For example, you might use a socket to communicate between the two applications. However, this is only applicable if you can modify both application (e.g. if you are developing both applications), not if the executable you are calling is a third-party application.
First, bufsize=0 is the right solution, as already suggested. However, this is not enough.
In your executable program, you should also set the stdout buffer size to 0, or flush in time.
In a C/C++ program, you can add
setbuf(stdout, nullptr);
to your source code.

How to interact with an external program in Python 3?

Using Python 3, I want to execute an external program, interact with it by providing some text into standard input, and then print the result.
As an example, I created the following external program, called test.py:
print('Test Program')
print('1 First option, 2 Second Option')
choice = input()
if choice == '1':
    second_param = input('Insert second param: ')
    result = choice + ' ' + second_param
    print(result)
If I run this program directly, it works as expected. If I provide the input 1 and then 2, the result is 1 2.
I want to run this program in another script and interact with it to print the same result.
After reading the documentation for subprocess, and checking out similar questions on SO, I ended up with the following:
from subprocess import PIPE, Popen

EXTERNAL_PROG = 'test.py'
p = Popen(['py', EXTERNAL_PROG], stdout=PIPE, stdin=PIPE, shell=True)
print(p.stdout.readline().decode('utf-8'))
print(p.stdout.readline().decode('utf-8'))
p.stdin.write(b'1\n')
p.stdin.write(b'2\n')
print(p.stdout.readline().decode('utf-8'))
However, when I run the code, the program freezes after printing 1 First option, 2 Second Option, and I need to restart my shell. This is probably caused by the fact that subprocess.stdout.readline() expects to find a newline character, and the prompt for the second param doesn’t contain one.
I found 2 SO questions that talk about something similar but I couldn’t get it to work.
Here, the answer recommends using the pexpect module. I tried to adapt the code to my situation but it didn’t work.
Here, the suggestion is to use -u, but adding it didn’t change anything.
I know that a solution can be found by modifying test.py, but this is not possible in my case since I need to use another external program and this is just a minimal example based on it.
If you have fixed input to your program (meaning the input does not change at run time), then this solution can be relevant.
Answer
First create an input file: name it input.txt and put 1 and 2 on separate lines in it.
command = "python test.py < input.txt > output.txt 2>&1"
# now run this command
os.system(command)
When you run this, you will find output.txt in the same directory. If your program executes successfully, output.txt contains the output of test.py; if your code raises any error, the error ends up in output.txt as well.
Answer As You Want
main.py become
import sys
from subprocess import PIPE, Popen
EXTERNAL_PROG = 'test.py'
p = Popen(['python3', EXTERNAL_PROG], stdout=PIPE, stdin=PIPE, stderr=PIPE)
print(p.stdout.readline())
print(p.stdout.readline())
p.stdin.write(b'1\n')
p.stdin.write(b'2\n')
p.stdin.flush()
print(p.stdout.readline())
print(p.stdout.readline())

Python - Run process and wait for output

I want to run a program, wait for its output, send inputs to it, and repeat until a condition is met.
All I could find was questions about waiting for a program to finish, which is NOT the case. The process will still be running, it just won't be giving any (new) outputs.
Program output is in stdout and in a log file, either can be used.
Using linux.
Code so far:
import subprocess
flag = True
vsim = subprocess.Popen(['./run_vsim'],
                        stdin=subprocess.PIPE,
                        shell=True,
                        cwd='path/to/program')
while flag:
    with open(log_file, 'r') as f:
        for l in f:
            if condition:
                break
    vsim.stdin.write(b'do something\n')
    vsim.stdin.flush()
    vsim.stdin.write(b'do something else\n')
    vsim.stdin.flush()
As is, the "do something" input is being sent multiple times even before the program finished starting up. Also, the log file is read before the program finishes running the command from the last while iteraction. That causes it to buffer the inputs, so I keeps executing the commands even after the condition as been met.
I could use time.sleep after each stdin.write but since the time needed to execute each command is variable, I would need to use times longer than necessary making the python script slower. Also, that's a dumb solution to this.
Thanks!
If you are using Python 3, you can try updating your code to use subprocess.run instead. It should wait for your task to complete and return the output.
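A minimal sketch of that suggestion, reusing the ./run_vsim command from the question (note that run() only returns after the process exits, and capture_output requires Python 3.7+):

import subprocess

result = subprocess.run(['./run_vsim'], capture_output=True, text=True,
                        cwd='path/to/program')
print(result.returncode)  # exit code of the process
print(result.stdout)      # everything it wrote to stdout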
As of 2019, you can use subprocess.getstatusoutput() to run a process and wait for the output, i.e.:
import subprocess
args = "echo 'Sleep for 5 seconds' && sleep 5"
status_output = subprocess.getstatusoutput(args)
if status_output[0] == 0: # exitcode 0 means NO error
    print("Ok:", status_output[1])
else:
    print("Error:", status_output[1])
From python docs:
subprocess.getstatusoutput(cmd)
Return (exitcode, output) of executing cmd in a shell.
Execute the string cmd in a shell with Popen.check_output() and return a 2-tuple (exitcode, output). The locale encoding is used; see the notes on Frequently Used Arguments for more details.
A trailing newline is stripped from the output. The exit code for the command can be interpreted as the return code of subprocess. Example:
>>> subprocess.getstatusoutput('ls /bin/ls')
(0, '/bin/ls')
>>> subprocess.getstatusoutput('cat /bin/junk')
(1, 'cat: /bin/junk: No such file or directory')
>>> subprocess.getstatusoutput('/bin/junk')
(127, 'sh: /bin/junk: not found')
>>> subprocess.getstatusoutput('/bin/kill $$')
(-15, '')
You can use the commands module (Python 2 only) instead of subprocess. Here is an example with the ls command:
import commands
status_output = commands.getstatusoutput('ls ./')
print status_output[0] #this will print the return code (0 if everything is fine)
print status_output[1] #this will print the output (list the content of the current directory)

Python Popen not behaving like a subprocess

My problem is this: I need to get output from a subprocess and I am using the following code to call it (feel free to ignore the long arguments; the important thing is the stdout=subprocess.PIPE):
(stdout, stderr) = subprocess.Popen([self.ChapterToolPath, "-x", book.xmlPath , "-a", book.aacPath , "-o", book.outputPath+ "/" + fileName + ".m4b"], stdout= subprocess.PIPE).communicate()
print stdout
Thanks to an answer below, I've been able to get the output of the program, but I still end up waiting for the process to terminate before I get anything. The interesting thing is that in my debugger, there is all sorts of text flying by in the console and it is all ignored. But the moment that anything is written to the console in black (I am using pycharm) the program continues without a problem. Could the main program be waiting for some kind of output in order to move on? This would make sense because I am trying to communicate with it.... Is there a difference between text that I can see in the console and actual text that makes it to the stdout? And how would I collect the text written to the console?
Thanks!
The first line of the documentation for subprocess.call() describes it as such:
Run the command described by args. Wait for command to complete, then return the returncode attribute.
Thus, it necessarily waits for the subprocess to exit.
subprocess.Popen(), by contrast, does not do this; it returns a handle on a process with which one can then communicate().
To get all output from a program:
from subprocess import check_output as qx
output = qx([program, arg1, arg2, ...])
To get output while the program is running:
from subprocess import Popen, PIPE
p = Popen([program, arg1, ...], stdout=PIPE)
for line in iter(p.stdout.readline, ''):
    print line,
There might be a buffering issue on the program's side if it prints line-by-line when run interactively but buffers its output when run as a subprocess. There are various solutions depending on your OS or the program; e.g., you could run it using the pexpect module.
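For reference, a hedged pexpect sketch of that last suggestion (pexpect is a third-party, POSIX-only package, and the program name below is a placeholder, not the asker's ChapterTool command):

import pexpect  # third-party: pip install pexpect

# Running the program in a pseudo-terminal usually makes it line-buffer its output.
child = pexpect.spawn('./some_program arg1', encoding='utf-8', timeout=10)

while True:
    # Wait for either a full line or the end of the program's output.
    index = child.expect(['\r\n', pexpect.EOF])
    print(child.before)   # text received before the matched newline/EOF
    if index == 1:        # EOF: the program has exited
        break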

detecting end of tty output

Hi, I'm writing a pseudo-terminal that can live in a tty and spawn a second tty whose input and output it filters.
I'm writing it in Python for now; spawning the second tty and reading and writing is easy,
but when I read, the read does not end, it waits for more input.
import subprocess
pfd = subprocess.Popen(['/bin/sh'], shell=True,
                       stdout=subprocess.PIPE, stdin=subprocess.PIPE)
cmd = "ls"
pfd.stdin.write(cmd + '\n')
out = ''
while 1:
    c = pfd.stdout.read(1)
    if not c: # if end of output (this never happens)
        break
    if c == '\n': # print line when found
        print repr(out)
        out = ''
    else:
        out += c
----------------------------- outputs ------------------------
intty $ python intty.py
'intty.py'
'testA_blank'
'testB_blank'
(hangs here does not return)
It looks like it's reaching the end of the buffer, and instead of returning None or '' it hangs waiting for more input.
what should I be looking for to see if the output has completed? the end of the buffer? a non-printable character?
---------------- edit -------------
This happens also when I run xpcshell instead of ls. I'm assuming these interactive programs have some way of knowing to display the prompt again;
strangely the prompt, in this case "js>", never appears.
Well, your output actually hasn't completed. Because you spawned /bin/sh, the shell is still running after "ls" completes. There is no EOF indicator, because it's still running.
Why not simply run /bin/ls?
You could do something like
pfd = subprocess.Popen(['ls'], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
out, err_output = pfd.communicate()
This also highlights subprocess.communicate, which is a safer way to get output (For outputs which fit in memory, anyway) from a single program run. This will return only when the program has finished running.
Alternately, you could read linewise from the shell, but you'd be looking for a special shell sequence like the sh~# line, which could easily show up in program output. Thus, running a shell is probably a bad idea all around.
Edit: Here is what I was referring to, but it's still not really the best solution, as it has a LOT of caveats:
while 1:
    c = pfd.stdout.read(1)
    if not c:
        break
    elif c == '\n': # print line when found
        print repr(out)
        out = ''
    else:
        out += c
    if out.strip() == 'sh#':
        break
Note that this will break out if any other command outputs 'sh#' at the beginning of the line, and also if for some reason the output is different from expected, you will enter the same blocking situation as before. This is why it's a very sub-optimal situation for a shell.
For applications like a shell, the output will not end until the shell ends. Either use select.select() to check if it has more output waiting for you, or end the process.
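A sketch of the select.select() approach (POSIX only); it mirrors the /bin/sh pipe from the code above, and the 0.5-second timeout is an arbitrary choice. bufsize=0 keeps the pipe unbuffered so select() and read() agree on what data is available:

import select
import subprocess

pfd = subprocess.Popen(['/bin/sh'], bufsize=0,
                       stdout=subprocess.PIPE, stdin=subprocess.PIPE)
pfd.stdin.write(b'ls\n')
pfd.stdin.flush()

while True:
    # Ask whether the pipe has data ready before reading, so read() never blocks.
    ready, _, _ = select.select([pfd.stdout], [], [], 0.5)  # wait up to 0.5 s
    if not ready:
        print('no more output waiting right now')
        break
    print(pfd.stdout.read(1).decode(), end='')

pfd.terminate()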
