Executing an external Python file in the background and getting its output in Python

I have a Python file that takes some inputs and prints outputs according to those inputs. I am trying to execute this script from another script. All I need to do is send some parameters to this file and get the outputs in the calling script.
The script that I need to be executed :
while True:
    print("Sabah 1")
    print("Oglen 2")
    print("Aksam 3")
    print("Gece 4")
    print("---------")
    secim = raw_input("Gun icerisindeki zaman dilimini giriniz")
    isim = raw_input("Isminizi giriniz.")
    if (secim == '1'):
        print("Gunaydin" + isim)
    elif (secim == '2'):
        print("Tunaydin" + isim)
    elif (secim == '3'):
        print("iyi aksamlar" + isim)
    elif (secim == '4'):
        print("Iyi geceler" + isim)
    else:
        print("Program sonlandiriliyor")
        break
The script that should execute the script above :
import subprocess, threading, time

can_break = False

def run():
    args = ['python', 'odev2.py', 'arg1', 'arg2']
    popen = subprocess.Popen(args, shell=False, stdout=subprocess.PIPE)
    while not can_break:
        print(popen.stdout.readline())

t = threading.Thread(target=run)
try:
    t.start()
    while True:
        print('Main Thread.... ')
        time.sleep(1)
except KeyboardInterrupt:
    can_break = True
The problem is that the second script keeps printing 'Main Thread....' and I never see any output from the child script.
Thanks.

I assume you are wondering why you aren't seeing any of the prompts from odev2.py?
The short answer is to replace run with:
import sys

def run():
    args = ['python', '-u', 'odev2.py', 'arg1', 'arg2']
    popen = subprocess.Popen(args, shell=False, stdout=subprocess.PIPE)
    while not can_break:
        sys.stdout.write(popen.stdout.read(1))
The long answer is that there are two issues.
First, there is buffering. IO is expensive, so rather than immediately writing out your data, File objects will sometimes instead collect the data in a buffer and only write the data out when the buffer is full. The builtin file objects (like sys.stdout) will decide to buffer depending on what they are writing to. If they are writing to a terminal, then they will turn off buffering so the output gets to the user immediately. But if they are writing to a file, they will buffer to make it more efficient.
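As a quick illustration (a minimal sketch, works in both Python 2 and 3), you can ask a file object what it is attached to; isatty() is how you can observe the difference yourself:

import sys

# True when stdout is a terminal (output appears immediately),
# False when stdout is redirected to a pipe or file (output is buffered)
print(sys.stdout.isatty())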
So when you run odev2.py at the command line, print is writing to a terminal, so output is unbuffered and your prompts appear immediately. But when you run odev2.py via subprocess.Popen(stdout=PIPE), odev2.py gets a pipe as standard output, so the file object buffers. That means you don't see the prompt.
You can see the same effect running odev2.py from the command line using 'python odev2.py arg1 arg2 | cat'. This sets standard output to a pipe and you won't see any prompt, but odev2.py will be waiting for you to type.
This buffering problem can be fixed by having odev2.py write the prompts itself (rather than having raw_input print them) and then call sys.stdout.flush() at the appropriate times. But a quick and dirty way is to run python with the -u flag that turns off all buffering. Try 'python -u odev2.py arg1 arg2 | cat' to see the difference.
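A minimal sketch of that explicit-flush approach in odev2.py (Python 2, matching the script above): write the prompt yourself, flush, then call raw_input with no argument:

import sys

sys.stdout.write("Gun icerisindeki zaman dilimini giriniz")
sys.stdout.flush()    # push the prompt through the pipe immediately
secim = raw_input()   # read the answer without printing a prompt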
Note: -u turns off all buffering, which can have a huge impact on performance. So this is only a quick and dirty solution.
Second, your reading thread uses readline. readline won't return until it sees a newline, but odev2.py's prompts don't end with newlines. So the readline in run won't return until odev2.py sends a newline, but odev2 won't send a newline until it reads some input from you. So you are deadlocked.
The robust solution is to replace readline with non-blocking IO that returns whatever is available. But there doesn't seem to be a portable way to do that in Python 2.7, so a quick hack is to do single-character reads: instead of popen.stdout.readline(), just do popen.stdout.read(1).
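Putting both fixes together, here is a minimal sketch of the complete runner script under those assumptions (-u child, single-character reads):

import subprocess, sys, threading, time

can_break = False

def run():
    # -u disables buffering in the child; read(1) avoids waiting for newlines
    args = ['python', '-u', 'odev2.py', 'arg1', 'arg2']
    popen = subprocess.Popen(args, shell=False, stdout=subprocess.PIPE)
    while not can_break:
        sys.stdout.write(popen.stdout.read(1))
        sys.stdout.flush()

t = threading.Thread(target=run)
try:
    t.start()
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    can_break = True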

Related

Python: program hangs when trying to read from subprocess stdout while it is running

I am trying to communicate with a C++ program (let's call it script A) using the Python subprocess module. Script A runs alongside the Python program and is interacted with constantly. My goal is to send script A input commands and capture the output it prints to STDOUT afterwards. I'm working on Windows 10.
Here is a snippet describing the logic:
proc = subprocess.Popen([".\\build\\bin.exe"], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
terminate = False
while not terminate:
    command = input("Enter your command here: ")
    if command == "q":
        terminate = True
    else:
        proc.stdin.write(command.encode())  # send input to script A
        output = proc.stdout.readline().decode()  # problematic line, trying to capture output from script A
        print(f"Output is: {output}")
The problem is that while script A writes output to STDOUT after each command like I expect it to, the Python script hangs when it reaches the line marked above. I tried to capture the output using proc.stdout.read(1) with bufsize=0 on the call to Popen, with for line in iter(proc.stdout.readlines()), and in some other ways, but the problem persists.
Would appreciate any help on this because nothing I tried is working for me.
Thanks in advance!
You already suggested using bufsize=0, which seems the right solution. However, this only affects buffering on the Python side. If the executable you are calling uses buffered input or output, I don't think there's anything you can do about it (as also mentioned here).
If both programs are under your own control, then you can easily make this work. Here is an example. For simplicity, I created two Python scripts that interact with each other in a similar way to what you are doing. Note that this doesn't differ very much from the situation with a C++ application, since in both cases an executable is started as a subprocess.
File pong.py (simple demo application that reads input and responds to it - similar to your "script A"):
while True:
    try:
        line = input()
    except EOFError:
        print('EOF')
        break
    if line == 'ping':
        print('pong')
    elif line == 'PING':
        print('PONG')
    elif line in ['exit', 'EXIT', 'quit', 'QUIT']:
        break
    else:
        print('what?')
print('BYE!')
File main.py (the main program that communicates with pong.py):
import subprocess

class Test:
    def __init__(self):
        self.proc = subprocess.Popen(['python.exe', 'pong.py'], bufsize=0, encoding='ascii',
                                     stdout=subprocess.PIPE, stdin=subprocess.PIPE)

    def talk(self, tx):
        print('TX: ' + tx)
        self.proc.stdin.write(tx + '\n')
        rx = self.proc.stdout.readline().rstrip('\r\n')
        print('RX: ' + rx)

def main():
    test = Test()
    test.talk('ping')
    test.talk('test')
    test.talk('PING')
    test.talk('exit')

if __name__ == '__main__':
    main()
Output of python main.py:
TX: ping
RX: pong
TX: test
RX: what?
TX: PING
RX: PONG
TX: exit
RX: BYE!
Of course there are other solutions as well. For example, you might use a socket to communicate between the two applications. However, this is only applicable if you can modify both applications (e.g. if you are developing both), not if the executable you are calling is a third-party application.
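For illustration, a minimal sketch of the socket alternative (the address, port, and protocol here are assumptions for the example, not from the post); sendall writes immediately, so no pipe buffering is involved:

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 50007))   # assumed address/port for this sketch
srv.listen(1)
conn, _ = srv.accept()
while True:
    data = conn.recv(1024)       # returns as soon as any data arrives
    if not data:
        break
    reply = b'pong\n' if data.strip() == b'ping' else b'what?\n'
    conn.sendall(reply)          # sent immediately, no buffering layer
conn.close()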
First, bufsize=0 is the right solution. However, it is not enough on its own.
In your executable program, you should also set the stdout buffer size to 0, or flush it in time.
In a C/C++ program, you can add
setbuf(stdout, nullptr);
to your source code.

Python subprocess does not return real time tshark output [duplicate]

I am using Python to call a Shell script with
import subprocess

def run_command(cmd):
    print "Start to run: " + cmd
    run = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        line = run.stdout.readline().decode()[:-1]
        if line == '' and run.poll() is not None:
            break
        print line  # print the log from shell
    recode = run.returncode
    if recode != 0:
        raise Exception("Error occurs!")
    print "End to run: " + cmd
Then I run
run_command("sh /home/tome/a.sh")
I notice the console output from a.sh is not in real time; it looks like there is a buffer for stdout, and the output is only printed once the buffer is full.
How can I disable the stdout buffering for my script a.sh?
Thanks!
The buffering in question would largely be a problem on the script's side, not the Python side; while Python would buffer the reads, it wouldn't block unless the buffer was emptied and there was nothing available to read.
So really, you need to disable buffering in the script itself. Adding stdbuf -oL (or -o0 for completely unbuffered, but line buffering should cover you since you read by line as well) to your commands should help in some cases (where the programs don't adjust their own buffering internally).
If you're seeing this behavior only by looking at Python's output, be aware that Python itself can buffer output as well. You can disable this by passing -u when running Python, or setting the environment variable PYTHONUNBUFFERED=1 before running it, or from within a script, you can manually call sys.stdout.flush() after any writes (direct, or implicit via print) to stdout. On modern Python, print takes an argument to force a flush after printing, but since you're on Python 2.x, that's not an option.
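For illustration, a minimal sketch applying stdbuf to the question's command (Python 2, matching the question; assumes GNU coreutils stdbuf is on the PATH, and note that tshark also has its own -l flag to flush per packet):

import subprocess

# Wrap the command so the child's stdout is line buffered.
run = subprocess.Popen("stdbuf -oL sh /home/tome/a.sh", shell=True,
                       stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(run.stdout.readline, ''):
    print line.rstrip()   # each line appears as soon as the script emits it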

Python subprocess: Giving stdin, reading stdout, then giving more stdin

I'm working with a piece of scientific software called Chimera. Some of the code downstream of this question requires that I use Python 2.7.
I want to call a process, give that process some input, read its output, give it more input based on that, etc.
I've used Popen to open the process, process.stdin.write to pass standard input, but then I've gotten stuck trying to get output while the process is still running. process.communicate() stops the process, process.stdout.readline() seems to keep me in an infinite loop.
Here's a simplified example of what I'd like to do:
Let's say I have a bash script called exampleInput.sh.
#!/bin/bash
# exampleInput.sh
# Read a number from the input
read -p 'Enter a number: ' num
# Multiply the number by 5
ans1=$( expr $num \* 5 )
# Give the user the multiplied number
echo $ans1
# Ask the user whether they want to keep going
read -p 'Based on the previous output, would you like to continue? ' doContinue
if [ $doContinue == "yes" ]
then
    echo "Okay, moving on..."
    # [...] more code here [...]
else
    exit 0
fi
Interacting with this through the command line, I'd run the script, type in "5" and then, if it returned "25", I'd type "yes" and, if not, I would type "no".
I want to run a Python script where I pass exampleInput.sh "5" and, if it gives me "25" back, then I pass "yes".
So far, this is as close as I can get:
#!/home/user/miniconda3/bin/python2
# talk_with_example_input.py
import subprocess

process = subprocess.Popen(["./exampleInput.sh"],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE)
process.stdin.write("5")
answer = process.communicate()[0]
if answer == "25":
    process.stdin.write("yes")
    ## I'd like to print the STDOUT here, but the process is already terminated
But that fails of course, because after process.communicate(), my process isn't running anymore.
(Just in case/FYI): Actual problem
Chimera is usually a gui-based application to examine protein structure. If you run chimera --nogui, it'll open up a prompt and take input.
I often need to know what chimera outputs before I run my next command. For example, I will often try to generate a protein surface and, if Chimera can't generate a surface, it doesn't break--it just says so through STDOUT. So, in my python script, while I'm looping through many proteins to analyze, I need to check STDOUT to know whether to continue analysis on that protein.
In other use cases, I'll run lots of commands through Chimera to clean up a protein first, and then I'll want to run lots of separate commands to get different pieces of data, and use that data to decide whether to run other commands. I could get the data, close the subprocess, and then run another process, but that would require re-running all of those cleaning up commands each time.
Anyways, those are some of the real-world reasons why I want to be able to push STDIN to a subprocess, read the STDOUT, and still be able to push more STDIN.
Thanks for your time!
You don't need to use process.communicate in your example.
Simply read and write using process.stdin.write and process.stdout.read. Also make sure to send a newline, otherwise read won't return. And when you read the output, you also have to handle the newlines coming from echo.
Note: process.stdout.read will block until EOF.
# talk_with_example_input.py
import subprocess

process = subprocess.Popen(["./exampleInput.sh"],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE)
process.stdin.write("5\n")
stdout = process.stdout.readline()
print(stdout)
if stdout == "25\n":
    process.stdin.write("yes\n")
    print(process.stdout.readline())
$ python2 test.py
25
Okay, moving on...
Update
When communicating with a program in that way, you have to pay special attention to what the application is actually writing. It is best to analyze the output in a hex dump:
$ chimera --nogui 2>&1 | hexdump -C
Please note that readline [1] only reads to the next newline (\n). In your case you have to call readline at least four times to get that first block of output.
If you just want to read everything up until the subprocess stops printing, you have to read byte by byte and implement a timeout. Sadly, neither read nor readline provides such a timeout mechanism. This is probably because the underlying read syscall [2] (Linux) does not provide one either.
On Linux we can write a single-threaded read_with_timeout() using poll / select. For an example see [3].
from select import epoll, EPOLLIN

def read_with_timeout(fd, timeout__s):
    """Reads from fd until there is no new data for at least timeout__s seconds.

    This only works on linux > 2.5.44.
    """
    buf = []
    e = epoll()
    e.register(fd, EPOLLIN)
    while True:
        ret = e.poll(timeout__s)
        if not ret or ret[0][1] != EPOLLIN:
            break
        buf.append(fd.read(1))
    return ''.join(buf)
In case you need a reliable way to read non-blocking under both Windows and Linux, this answer might be helpful.
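As an illustrative sketch of such a portable approach (my own assumption, not taken from the linked answer): a helper thread pumps the pipe into a Queue, and the main thread polls the queue with a timeout. This works on both Windows and Linux:

import subprocess, threading
from Queue import Queue, Empty   # Python 2; the module is named queue on Python 3

def pump(pipe, q):
    # read byte by byte and hand everything to the queue
    for ch in iter(lambda: pipe.read(1), ''):
        q.put(ch)

def read_with_timeout(q, timeout__s):
    buf = []
    while True:
        try:
            buf.append(q.get(timeout=timeout__s))
        except Empty:
            break
    return ''.join(buf)

proc = subprocess.Popen(["./prog.sh"], stdout=subprocess.PIPE)
q = Queue()
t = threading.Thread(target=pump, args=(proc.stdout, q))
t.daemon = True   # don't keep the interpreter alive just for the reader
t.start()
print(read_with_timeout(q, 1.5))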
[1] from the python 2 docs:
readline(limit=-1)
Read and return one line from the stream. If limit is specified, at most limit bytes will be read.
The line terminator is always b'\n' for binary files; for text files, the newline argument to open() can be used to select the line terminator(s) recognized.
[2] from man 2 read:
#include <unistd.h>
ssize_t read(int fd, void *buf, size_t count);
[3] example
$ tree
.
├── prog.py
└── prog.sh
prog.sh
#!/usr/bin/env bash
for i in $(seq 3); do
    echo "${RANDOM}"
    sleep 1
done
sleep 3
echo "${RANDOM}"
prog.py
# talk_with_example_input.py
import subprocess
from select import epoll, EPOLLIN

def read_with_timeout(fd, timeout__s):
    """Reads from fd until there is no new data for at least timeout__s seconds.

    This only works on linux > 2.5.44.
    """
    buf = []
    e = epoll()
    e.register(fd, EPOLLIN)
    while True:
        ret = e.poll(timeout__s)
        if not ret or ret[0][1] != EPOLLIN:
            break
        buf.append(fd.read(1))
    return ''.join(buf)

process = subprocess.Popen(
    ["./prog.sh"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE
)

print(read_with_timeout(process.stdout, 1.5))
print('-----')
print(read_with_timeout(process.stdout, 3))
$ python2 prog.py
6194
14508
11293
-----
10506

Python - Run process and wait for output

I want to run a program, wait for its output, send input to it, and repeat until a condition is met.
All I could find was questions about waiting for a program to finish, which is NOT the case. The process will still be running, it just won't be giving any (new) outputs.
Program output is in stdout and in a log file, either can be used.
Using linux.
Code so far:
import subprocess

flag = True
vsim = subprocess.Popen(['./run_vsim'],
                        stdin=subprocess.PIPE,
                        shell=True,
                        cwd='path/to/program')
while flag:
    with open(log_file, 'r') as f:
        for l in f:
            if condition:
                break
    vsim.stdin.write(b'do something\n')
    vsim.stdin.flush()
    vsim.stdin.write(b'do something else\n')
    vsim.stdin.flush()
As is, the "do something" input is being sent multiple times even before the program finished starting up. Also, the log file is read before the program finishes running the command from the last while iteraction. That causes it to buffer the inputs, so I keeps executing the commands even after the condition as been met.
I could use time.sleep after each stdin.write but since the time needed to execute each command is variable, I would need to use times longer than necessary making the python script slower. Also, that's a dumb solution to this.
Thanks!
If you are using Python 3, you can try updating your code to use subprocess.run instead. It waits for the task to complete and returns the output.
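A minimal sketch of that suggestion (requires Python 3.7+ for capture_output; note that subprocess.run waits for the process to exit, so it only fits the case where each invocation terminates):

import subprocess

result = subprocess.run(['./run_vsim'], cwd='path/to/program',
                        input=b'do something\n', capture_output=True)
print(result.returncode)
print(result.stdout.decode())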
As of 2019, you can use subprocess.getstatusoutput() to run a process and wait for the output, i.e.:
import subprocess
args = "echo 'Sleep for 5 seconds' && sleep 5"
status_output = subprocess.getstatusoutput(args)
if status_output[0] == 0: # exitcode 0 means NO error
print("Ok:", status_output[1])
else:
print("Error:", status_output[1])
From python docs:
subprocess.getstatusoutput(cmd)
Return (exitcode, output) of executing cmd in a shell.
Execute the string cmd in a shell with Popen.check_output() and return a 2-tuple (exitcode, output). The locale encoding is used; see the notes on Frequently Used Arguments for more details.
A trailing newline is stripped from the output. The exit code for the command can be interpreted as the return code of subprocess. Example:
>>> subprocess.getstatusoutput('ls /bin/ls')
(0, '/bin/ls')
>>> subprocess.getstatusoutput('cat /bin/junk')
(1, 'cat: /bin/junk: No such file or directory')
>>> subprocess.getstatusoutput('/bin/junk')
(127, 'sh: /bin/junk: not found')
>>> subprocess.getstatusoutput('/bin/kill $$')
(-15, '')
You can use the commands module (Python 2 only) instead of subprocess. Here is an example with the ls command:
import commands
status_output = commands.getstatusoutput('ls ./')
print status_output[0] #this will print the return code (0 if everything is fine)
print status_output[1] #this will print the output (list the content of the current directory)

Controlling a python script from another script

I am trying to learn how to write a script control.py that runs another script test.py in a loop a certain number of times, reads its output in each run, and halts it if some predefined output is printed (e.g. the text 'stop now'); the loop then continues with the next iteration (once test.py has finished, either on its own or by force). So, something along these lines:
for i in range(n):
    os.system('test.py someargument')
    if output == 'stop now': #stop the current test.py process and continue with next iteration
        #output here is supposed to contain what test.py prints
The problem with the above is that it does not check the output of test.py while it is running; instead it waits until the test.py process has finished on its own, right?
Basically trying to learn how I can use a python script to control another one, as it is running. (e.g. having access to what it prints and so on).
Finally, is it possible to run test.py in a new terminal (i.e. not in control.py's terminal) and still achieve the above goals?
An attempt:
test.py is this:
from itertools import permutations
import random as random

perms = [''.join(p) for p in permutations('stop')]
for i in range(1000000):
    rand_ind = random.randrange(0, len(perms))
    print perms[rand_ind]
And control.py is this: (following Marc's suggestion)
import subprocess

command = ["python", "test.py"]
n = 10
for i in range(n):
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        output = p.stdout.readline().strip()
        print output
        #if output == '' and p.poll() is not None:
        #    break
        if output == 'stop':
            print 'success'
            p.kill()
            break
    #Do whatever you want
    #rc = p.poll() #Exit Code
You can use the subprocess module, or also os.popen:
os.popen(command[, mode[, bufsize]])
Open a pipe to or from command. The return value is an open file object connected to the pipe, which can be read or written depending on whether mode is 'r' (default) or 'w'.
With subprocess I would suggest
subprocess.call(['python.exe', command])
or subprocess.Popen, which is similar to os.popen (for instance).
With popen you can read the connected object/file and check whether "Stop now" is there.
os.system is not deprecated and you can use it as well (but you won't get an object back from it); you can only check its return value at the end of execution.
With subprocess.call you can run it in a new terminal; or, if you want to call ONLY test.py multiple times, you can put your script in a def main() and run main as often as you want until the "Stop now" is generated.
Hope this solves your query :-) otherwise comment again.
Looking at what you wrote above, you can also redirect the output to a file directly from the OS call, e.g. os.system('python test.py args >> /tmp/mickey.txt'), and then check the file on each round.
As said, os.popen returns a file object that you can access.
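For illustration, a minimal os.popen sketch (Python 2, matching the question; 'stop now' is the marker from the question):

import os

# os.popen returns a file object connected to the command's output
pipe = os.popen('python test.py someargument')
for line in pipe:
    if line.strip() == 'stop now':
        break   # stop reading; the outer loop can then move on
pipe.close()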
What you are hinting at in your comment to Marc Cabos' answer is threading.
There are several ways Python can use the functionality of other files. If the content of test.py can be encapsulated in a function or class, then you can import the relevant parts into your program, giving you greater access to the runnings of that code.
As described in other answers you can use the stdout of a script, running it in a subprocess. This could give you separate terminal outputs as you require.
However, if you want to run test.py concurrently and access variables as they are changed, then you need to consider threading, as in the sketch below.
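A minimal sketch of that idea, with loudly hypothetical names (it assumes test.py is importable and exposes a run(state) entry point, neither of which is in the original post):

import threading

state = {'latest': None}   # shared, mutable state the worker updates

def worker():
    import test            # hypothetical: test.py must be importable
    test.run(state)        # hypothetical entry point that writes into state

t = threading.Thread(target=worker)
t.daemon = True
t.start()
# the main thread can now inspect state['latest'] while the worker runs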
Yes, you can use Python to control another program via its stdin/stdout, but when consuming another process's output there is often a problem of buffering; in other words, the other process doesn't really output anything until it's done.
There are even cases in which the output is buffered or not depending on whether the program is started from a terminal or not.
If you are the author of both programs, it is probably better to use another interprocess channel where flushing is explicitly controlled by the code, such as sockets.
You can use the "subprocess" library for that.
import subprocess

command = ["python", "test.py", "someargument"]
for i in range(n):
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        output = p.stdout.readline().strip()  # strip the trailing newline before comparing
        if output == '' and p.poll() is not None:
            break
        if output == 'stop now':
            pass  # Do whatever you want
    rc = p.poll()  # Exit Code
