Python talking with other application(s) using subprocess

Here's the idea: I'll have a 'main' Python script that starts (using subprocess) app1 and app2. The 'main' script will send input to app1 and pass the result to app2 and vice versa (the main script also needs to remember what was sent, so I can't simply pipe app1 straight into app2).
This is the main script.
import subprocess
import time

def main():
    prvi = subprocess.Popen(['python', 'random1.py'], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while 1:
        prvi.stdin.write('131231\n')
        time.sleep(1)  # maybe it needs to wait
        print "procitano", prvi.stdout.read()

if __name__ == '__main__':
    main()
And this is the 'random1.py' file.
import random

def main():
    while 1:
        inp = raw_input()
        print inp, random.random()

if __name__ == '__main__':
    main()
First I tried with only one subprocess, just to see if it works. It doesn't: it only outputs 'procitano' and then waits there.
How can I read the output from 'prvi' without communicate()? When I use communicate(), it exits my app, and that's something I don't want.

Add prvi.stdin.flush() after prvi.stdin.write(...).
Explanation: to optimize communication between processes, the OS buffers data (typically 4 KB) before it sends the whole buffer to the other process. If you send less than that, you need to tell the OS "that's it, send it now": that's what flush() does.
[EDIT] The next problem is that prvi.stdout.read() will never return, since the child doesn't exit.
You will need to develop a protocol between the processes, so that each side knows how many bytes of data to read when it gets something. A simple solution is to use a line-based protocol (each "message" is terminated by a newline). To do that, replace read() with readline(), and don't forget to append \n to everything you send, plus flush().
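For example, a minimal sketch of that line-based protocol on the parent side (Python 2 syntax to match the question; the -u flag keeps the child's output unbuffered, as another answer below also suggests):

import subprocess

# -u makes random1.py's own output unbuffered, so replies arrive immediately
prvi = subprocess.Popen(['python', '-u', 'random1.py'],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)
for i in range(3):
    prvi.stdin.write('131231\n')  # one message = one line, terminated by \n
    prvi.stdin.flush()            # push it past the buffer right away
    # readline() returns as soon as the child prints one complete line
    print "procitano", prvi.stdout.readline().rstrip()
prvi.stdin.close()
prvi.wait()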

main.py
import subprocess
import time

def main():
    prvi = subprocess.Popen(['python', 'random1.py'], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    prvi.stdin.write('131231\n')
    time.sleep(1)  # maybe it needs to wait
    print "procitano", prvi.stdout.read()

if __name__ == '__main__':
    main()
random1.py
import random

def main():
    inp = raw_input()
    print inp, random.random()
    inp = raw_input()

if __name__ == '__main__':
    main()
I tested with the code above and got the same problem as with your code.
I think the problem is timing. Here is my guess: when main.py tries the code below

prvi.stdout.read()  # I think this call blocks on the random1.py process

the line below in random1.py has already grabbed control and is blocking on its own input:

inp = raw_input()

To solve this problem, I think, as Aaron Digulla says, you need to develop a protocol to make it work.

use the -u flag to make random1.py's output unbuffered (or flush explicitly; see the sketch after this list)
use p.stdout.readline() instead of .read()
time.sleep is unnecessary, because .read() blocks
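Equivalently, if you'd rather not rely on -u, random1.py can flush explicitly after each reply. A sketch of that variant (my rewording of the question's child script):

import random
import sys

def main():
    while 1:
        inp = raw_input()           # read one request line from the parent
        print inp, random.random()  # answer with exactly one line
        sys.stdout.flush()          # make the line visible to the parent now

if __name__ == '__main__':
    main()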

Related

How to run & stop python script from another python script?

I want code like this:

if True:
    run('ABC.PY')
else:
    if ScriptRunning('ABC.PY'):
        stop('ABC.PY')
    run('ABC.PY')
Basically, I want to run a file, let's say abc.py, and, based on some condition, stop it and run it again from another Python script. Is it possible?
I am using Windows.
You can use Python's Popen objects to run scripts as child processes.
So run('ABC.PY') would be p = Popen("python 'ABC.PY'"),
ScriptRunning('ABC.PY') would be if p.poll() is None,
and stop('ABC.PY') would be p.kill().
Below is a very basic example of what you are trying to achieve.
Please check out the subprocess.Popen docs to fine-tune your logic for running the script.
import subprocess
import shlex
import time

def run(script):
    scriptArgs = shlex.split(script)
    commandArgs = ["python"]
    commandArgs.extend(scriptArgs)
    procHandle = subprocess.Popen(commandArgs, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return procHandle

def isScriptRunning(procHandle):
    return procHandle.poll() is None

def stopScript(procHandle):
    procHandle.terminate()
    time.sleep(5)
    # Forcefully terminate the script if it is still running
    if isScriptRunning(procHandle):
        procHandle.kill()

def getOutput(procHandle):
    # stderr is redirected to stdout due to the "stderr=subprocess.STDOUT" argument in the Popen call
    stdout, _ = procHandle.communicate()
    returncode = procHandle.returncode
    return returncode, stdout

def main():
    procHandle = run("main.py --arg 123")
    time.sleep(5)
    isScriptRunning(procHandle)
    stopScript(procHandle)
    print getOutput(procHandle)

if __name__ == "__main__":
    main()
One thing that you should be aware of is stdout=subprocess.PIPE.
If your Python script produces a lot of output, the pipe buffer can fill up, causing the child to block until .communicate() is called on the handle.
To avoid this, pass a file handle to stdout, like this:

fileHandle = open("main_output.txt", "w")
subprocess.Popen(..., stdout=fileHandle)

This way, the output of the Python process is dumped into the file. (You will have to modify the getOutput() function for this too.)
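A sketch of how getOutput() might change for the file-based variant (the file name main_output.txt is just this answer's example):

def getOutput(procHandle, outputPath="main_output.txt"):
    # wait for the process to finish, then read what it wrote to the file
    returncode = procHandle.wait()
    with open(outputPath) as f:
        return returncode, f.read()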
import subprocess

process = None

def run_or_rerun(flag):
    global process
    if flag:
        assert process is None
        process = subprocess.Popen(['python', 'ABC.PY'])
        process.wait()  # must wait or caller will hang
    else:
        if process.poll() is None:  # it is still running
            process.terminate()  # terminate process
        process = subprocess.Popen(['python', 'ABC.PY'])  # rerun
        process.wait()  # must wait or caller will hang
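For instance, a hypothetical driver for the function above:

run_or_rerun(True)   # first run: starts ABC.PY and waits for it to finish
run_or_rerun(False)  # later: terminate it if still alive, then run it again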

Popen([...], stderr=PIPE) ignores input() message from spawned python program

I have one file running the other through Popen().
# a.py
import subprocess

p = subprocess.Popen(['python3', '/home/scotty/b.py'],
                     stderr=subprocess.PIPE)
p.wait()

# b.py
input('???')
When I run a.py, "???" doesn't appear, but the prompt still works... why? And how can I fix it?
If I remove stderr=subprocess.PIPE, then "???" does show up.
According to the docs, the prompt of input() "is written to standard output", and I'm not touching standard output.
It seems that, in your example, everything happens as if opening a pipe for stderr but not for stdout had the effect of redirecting the prompt to stderr: in the tests I made, stderr receives input()'s prompt.
Below is code that works for me on Linux. It is probably overcomplicated, because I use socketpairs, which is most certainly not necessary, but at least it is robust and it works.
##################################
# a.py
import subprocess
import socket
import select

esock, echildsock = socket.socketpair()
osock, ochildsock = socket.socketpair()
p = subprocess.Popen(['python3', 'b.py'],
                     stderr=echildsock.fileno(),
                     stdout=ochildsock.fileno())
while p.poll() is None:
    r, w, x = select.select([esock, osock], [], [], 1.0)
    if not r:
        continue  # timed out
    for s in r:
        print('stdout ready' if s is osock else 'stderr ready')
        data = s.recv(1024)
        print('received', data.decode('utf8'))
osock.shutdown(socket.SHUT_RDWR)
osock.close()
esock.shutdown(socket.SHUT_RDWR)
esock.close()

##################################
# b.py
res = input('???')
print('in b:', res)
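If the socketpairs feel like overkill, a simpler variant (my own simplification, assuming you only need the prompt to become visible) is to read the child's stderr pipe, where the prompt was observed to go, and echo it to your own terminal:

# a_simple.py -- hypothetical simplification of the code above
import subprocess
import sys

p = subprocess.Popen(['python3', 'b.py'], stderr=subprocess.PIPE)
while p.poll() is None:
    chunk = p.stderr.read1(1024)  # return whatever bytes are currently available
    if chunk:
        sys.stderr.write(chunk.decode('utf8'))
        sys.stderr.flush()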

PYTHON subprocess cmd.exe closes after first command

I am working on a Python program which drives the cmd window.
I am using subprocess with PIPE.
If, for example, I write "dir" (via stdin), I use communicate() in order to get the response from cmd, and it does work.
The problem is that in a while True loop this doesn't work more than once; it seems like the subprocess closes itself.
Help me please.
import subprocess

process = subprocess.Popen('cmd.exe', shell=False, stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE, stderr=None)
x = ""
while x != "x":
    x = raw_input("insert a command \n")
    process.stdin.write(x + "\n")
    o, e = process.communicate()
    print o
process.stdin.close()
The main problem is that reading from subprocess.PIPE deadlocks when the program is still running but there is nothing to read from stdout. communicate() avoids this by closing stdin and waiting for the process to exit, which is why it only works once.
A solution is to put the code that reads stdout in another thread and access it via a Queue, which allows reliable sharing of data between threads by timing out instead of deadlocking.
The new thread reads standard output continuously, stopping when there is no more data.
Each line is taken from the queue until a timeout is reached (no more data in the Queue), then the collected lines are printed to the screen.
This approach works for non-interactive programs.
import subprocess
import threading
import Queue

def read_stdout(stdout, queue):
    while True:
        queue.put(stdout.readline())  # this blocks when there is no IO

process = subprocess.Popen('cmd.exe', shell=False, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
q = Queue.Queue()
t = threading.Thread(target=read_stdout, args=(process.stdout, q))
t.daemon = True  # t stops when the main thread stops
t.start()

while True:
    x = raw_input("insert a command \n")
    if x == "x":
        break
    process.stdin.write(x + "\n")
    o = []
    try:
        while True:
            o.append(q.get(timeout=.1))
    except Queue.Empty:
        print ''.join(o)

Get realtime output from python subprocess

I'm trying to invoke a command line utility from Python. The code is as follows
import subprocess
import sys

class Executor:
    def executeEXE(self, executable):
        CREATE_NO_WINDOW = 0x08000000
        process = subprocess.Popen(executable, stdout=subprocess.PIPE,
                                   creationflags=CREATE_NO_WINDOW)
        while True:
            line = process.stdout.readline()
            if line == '' and process.poll() is not None:
                break
            print line
The problem with the above code is that I want the real-time output of the process, and I'm not getting it. What am I doing wrong here?
There are 2 problems in your code:
First of all, readline() will block until a new line is printed out and flushed. That means you should execute the

while True:
    ...

loop in a new Thread and call a callback function when the output is ready.
Second, since readline() is waiting for a complete line, your executable must flush every time it prints:

print 'Hello World'
sys.stdout.flush()
You can see some code and examples on my git:
pyCommunicator
If instead your external tool buffers its output, the only thing you can try is to use stderr as a PIPE:
https://stackoverflow.com/a/11902799/2054758
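Putting both points together, a minimal sketch (Python 2 to match the question; the executable name and the on_line callback are placeholders):

import subprocess
import threading

def stream_output(process, callback):
    # background thread: hand each line to the callback as soon as it arrives
    for line in iter(process.stdout.readline, ''):
        callback(line.rstrip())

def on_line(line):  # hypothetical callback
    print line

process = subprocess.Popen(['your_tool.exe'], stdout=subprocess.PIPE)  # placeholder command
t = threading.Thread(target=stream_output, args=(process, on_line))
t.daemon = True
t.start()
process.wait()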

Capturing stdout from subprocess after sending SIGINT

I have a dtrace snippet run via a Python script; the dtrace snippet generates its data when CTRL-C is issued to it. So I defined a signal_handler in the Python script to catch CTRL-C from the user and relay it to the dtrace invocation done via subprocess.Popen, but I am unable to get any output in my log file. Here is the script:
import signal
import subprocess
import time

# dtrace_script (the D program text) is defined elsewhere in the original script
Proc = []
signal_posted = False

def signal_handler(sig, frame):
    print("Got CTRL-C!")
    global signal_posted
    signal_posted = True
    global Proc
    Proc.send_signal(signal.SIGINT)  # signal posting from handler

def execute_hotkernel():
    #
    # Generate the .out output file
    #
    fileout = "hotkernel.out"
    fileo = open(fileout, "w+")
    global Proc
    Proc = subprocess.Popen(['/usr/sbin/dtrace', '-n', dtrace_script], stdout=fileo)
    while Proc.poll() is None:
        time.sleep(0.5)

def main():
    signal.signal(signal.SIGINT, signal_handler)  # change our signal handler
    execute_hotkernel()

if __name__ == '__main__':
    main()
Since I have the file hotkernel.out set as stdout in the subprocess.Popen call, I was expecting the output from dtrace to be redirected to hotkernel.out on CTRL-C, but the file is empty. What is missing here?
I have a similar issue.
In my case, it's a shell script that runs until you hit Control-C and then prints out summary information. When I run this using subprocess.Popen, whether using a PIPE or a file object for stdout, I either don't get the information (with a file object), or it hangs when I try to run stdout.readline().
I finally tried running the subprocess from the interpreter and discovered that I could get the last line of output after the SIGINT with a PIPE if I call stdout.readline() (where it hangs), hit Control-C (in the interpreter), and then call stdout.readline() again.
I do not know how to emulate this in a script, either for file output or for a PIPE. I did not try file output in the interpreter.
EDIT:
I finally got back to this and determined that it's actually pretty easy to reproduce outside of Python, and it really has nothing to do with Python:

/some_cmd_that_ends_on_sigint
(enter control-c)
*data from stdout in event handler*

Works.

/some_cmd_that_ends_on_sigint | tee some.log
(enter control-c)
*nothing sent to stdout in the event handler prints to the screen or to the log*

Where's my log?
I ended up just adding a file stream in the event handler (in the some_cmd_that_ends_on_sigint source) that writes the data to a (possibly secondary) log. It works, if a bit awkwardly: you get the data on the screen if running without any piping, but I can also read it when piped, or from Python, via the secondary log.
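On the Python side, one thing worth double-checking (my speculation; not part of the answer above) is that the parent waits for the child to exit after the SIGINT and only closes the file afterwards, so the child's end-of-run output actually reaches hotkernel.out:

import signal
import subprocess

# hypothetical sketch: 'some_dtrace_script' stands in for the question's dtrace_script
with open("hotkernel.out", "w+") as fileo:
    proc = subprocess.Popen(['/usr/sbin/dtrace', '-n', 'some_dtrace_script'],
                            stdout=fileo)
    try:
        proc.wait()
    except KeyboardInterrupt:
        proc.send_signal(signal.SIGINT)
        proc.wait()  # let dtrace write its end-of-run data before the file closes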
