How to get asynchronous input AND output using subprocess.Popen - Python

I've been working on this for a few hours and haven't been able to come up with a good solution. A little background: I'm running a closed-source password-cracking program from the command line, but I have to constantly pause it when my GPU temperature gets too hot.
I do other manipulations on this program in Python, so that's the language I'd prefer. Anyway, the password program gives periodic updates on how well it's doing, the GPU temperature, etc., and allows me to pause it at any time.
I'm getting the temperature fine, but I'm guessing that because of blocking issues I can't send the pause command. It's not doing anything, at least. I've seen several examples of threading the output, but haven't seen anything that threads both input and output without causing issues.
For all I know this could be impossible under the current Popen constraints, but I'd appreciate some direction.
popen = Popen(command, stdout=PIPE, stdin=PIPE, shell=True)
lines_iterator = iter(popen.stdout.readline, b"")
while 1:
    for line in lines_iterator:
        cleanLine = line.replace("\n", "")
        p = re.compile('[0-9][0-9]c Temp')
        m = p.search(cleanLine)
        print cleanLine
        if m:
            temperature = m.group(0)
            if int(temperature[:2]) > 80:
                overheating = True
                print "overheating"
        if overheating:
            if "[s]tatus [p]ause [r]esume [b]ypass [q]uit" in line:
                # It's not doing anything right here, it just continues
                print popen.communicate("p")[0]
This is the gist of my code. It's still kind of in the hacky phase, so I know it might not follow best coding practices.

EDIT: Sorry, I was confused in the initial answer about the scope of overheating. I deleted the first part of my answer since it's not relevant anymore.
communicate will wait for the process to exit so it might not be what you're looking for in this case. If you want the process to keep going
you can use something like popen.stdin.write("p"). You might also need to send a "\n" along if that's required by your process.
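As a minimal sketch of that write-without-waiting approach (untested, and assuming the cracker accepts a bare "p" plus newline to pause); overheated() is a hypothetical helper standing in for your regex-based temperature check:

popen = Popen(command, stdout=PIPE, stdin=PIPE, shell=True)
for line in iter(popen.stdout.readline, b""):
    if overheated(line):          # hypothetical: your temperature check
        popen.stdin.write("p\n")  # send the pause key without waiting for exit
        popen.stdin.flush()       # make sure it actually reaches the process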
Also, if you're OK with an extra dependency you might be interested in the pexpect module that was designed to control interactive processes.

A simple portable solution is to use threads here. Threads are enough as long as there are no block-buffering issues.
To read output and stop input if overheating is detected (not tested):
#!/usr/bin/env python
from subprocess import Popen, PIPE, CalledProcessError
from threading import Event, Thread

def detect_overheating(pipe, overheating):
    with pipe: # read output here
        for line in iter(pipe.readline, ''):
            if detected_overheating(line.rstrip('\n')):
                overheating.set() # overheating
            elif paused: #XXX global
                overheating.clear() # no longer overheating

process = Popen(args, stdout=PIPE, stdin=PIPE, bufsize=1,
                universal_newlines=True) # enable text mode
overheating = Event()
t = Thread(target=detect_overheating, args=[process.stdout, overheating])
t.daemon = True # close pipe if the process dies
t.start()

paused = False
with process.stdin: # write input here
    while process.poll() is None:
        if overheating.wait(1): # Python 2.7+
            # overheating
            if not paused:
                process.stdin.write('p\n') # pause
                process.stdin.flush()
                paused = True
        elif paused: # no longer overheating
            pass #XXX unpause here
            paused = False

if process.wait() != 0: # non-zero exit status may indicate failure
    raise CalledProcessError(process.returncode, args)
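The detected_overheating() helper is left undefined above. A possible implementation, based on the "NNc Temp" lines the question's regex matches (the 80-degree threshold is also taken from the question), might be:

import re

def detected_overheating(line, threshold=80):
    # the question's output format: two digits followed by "c Temp"
    m = re.search(r'([0-9][0-9])c Temp', line)
    return bool(m) and int(m.group(1)) > threshold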

Related

Subprocess will not run any commands after initial args in Popen constructor, even after flushing all pipes [duplicate]

I want to achieve something which is very similar to this.
My actual goal is to run Rasa from within python.
Taken from Rasa's site:
Rasa is a framework for building conversational software: Messenger/Slack bots, Alexa skills, etc. We’ll abbreviate this as a bot in this documentation.
It is basically a chatbot which runs in the command prompt. (The question included a screenshot of it running in cmd.)
Now I want to run Rasa from Python so that I can integrate it with my Django-based website, i.e. I want to keep taking input from the user, pass it to Rasa, have Rasa process the text, and show the output back to the user.
I have tried this (running it from cmd for now):
import sys
import subprocess
from threading import Thread
from queue import Queue, Empty # python 3.x

def enqueue_output(out, queue):
    for line in iter(out.readline, b''):
        queue.put(line)
    out.close()

def getOutput(outQueue):
    outStr = ''
    try:
        while True: # Adds output from the Queue until it is empty
            outStr += outQueue.get_nowait()
    except Empty:
        return outStr

p = subprocess.Popen('command_to_run_rasa',
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     shell=False,
                     universal_newlines=True,
                     )

outQueue = Queue()
outThread = Thread(target=enqueue_output, args=(p.stdout, outQueue))
outThread.daemon = True
outThread.start()

someInput = ""
while someInput != "stop":
    someInput = input("Input: ") # to take input from user
    p.stdin.write(someInput) # passing input to be processed by the rasa command
    p.stdin.flush()
    output = getOutput(outQueue)
    print("Output: " + output + "\n")
    p.stdout.flush()
But it only works for the first line of output, not for successive input/output cycles.
How do I get it working for multiple cycles?
I've referred to this, and I think I understand the problem in my code from it, but I don't know how to solve it.
EDIT: I'm using Python 3.6.2 (64-bit) on Windows 10
You need to keep interacting with your subprocess - at the moment, once you pick up the output from your subprocess, you're pretty much done, as you close its STDOUT stream.
Here is the most rudimentary way to continue the user input -> process output cycle:
import subprocess
import sys
import time

if __name__ == "__main__": # a guard from unintended usage
    input_buffer = sys.stdin # a buffer to get the user input from
    output_buffer = sys.stdout # a buffer to write rasa's output to
    proc = subprocess.Popen(["path/to/rasa", "arg1", "arg2", "etc."], # start the process
                            stdin=subprocess.PIPE, # pipe its STDIN so we can write to it
                            stdout=output_buffer, # pipe directly to the output_buffer
                            universal_newlines=True)
    while True: # run a main loop
        time.sleep(0.5) # give some time for `rasa` to forward its STDOUT
        print("Input: ", end="", file=output_buffer, flush=True) # print the input prompt
        print(input_buffer.readline(), file=proc.stdin, flush=True) # forward the user input
You can replace input_buffer with a buffer coming from your remote user(s) and output_buffer with a buffer that forwards the data to your user(s), and you'll get essentially what you're looking for - the sub-process will be getting its input directly from the user (input_buffer) and printing its output to the user (output_buffer).
If you need to perform other tasks while all this is running in the background, just run everything under the if __name__ == "__main__": guard in a separate thread, and I'd suggest adding a try..except block to pick up KeyboardInterrupt and exit gracefully.
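A minimal sketch of that graceful-exit wrapper (run_loop() is a hypothetical helper holding the while True loop shown above):

try:
    run_loop() # hypothetical: the user input -> process output loop from above
except KeyboardInterrupt:
    proc.terminate() # ask the subprocess to stop
    proc.wait()      # and reap it before exiting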
But... soon enough you'll notice that it doesn't exactly work properly all the time - if rasa takes longer than half a second to print to its STDOUT and enter the waiting-for-STDIN stage, the outputs will start to mix. This problem is considerably more complex than you might expect. The main issue is that STDOUT and STDIN (and STDERR) are separate buffers and you cannot know when a subprocess is actually expecting something on its STDIN. This means that without a clear indication from the subprocess (like the \r\n[path]> that the Windows CMD prompt gives on its STDOUT, for example) you can only send data to the subprocess's STDIN and hope it will be picked up.
Based on your screenshot, it doesn't really give a distinguishable STDIN request prompt, because the first prompt is ... :\n and then it waits for STDIN, but once the command is sent it lists options without an indication of the end of its STDOUT stream (technically making the prompt just ...\n, but that would match any line preceding it as well). Maybe you can be clever: read the STDOUT line by line, measure on each new line how much time has passed since the sub-process last wrote, and once a threshold of inactivity is reached, assume that rasa expects input and prompt the user for it. Something like:
import subprocess
import sys
import threading

# we'll be using a separate thread and a timed event to request the user input
def timed_user_input(timer, wait, buffer_in, buffer_out, buffer_target):
    while True: # user input loop
        timer.wait(wait) # wait for the specified time...
        if not timer.is_set(): # if the timer was not stopped/restarted...
            print("Input: ", end="", file=buffer_out, flush=True) # print the input prompt
            print(buffer_in.readline(), file=buffer_target, flush=True) # forward the input
        timer.clear() # reset the 'timer' event

if __name__ == "__main__": # a guard from unintended usage
    input_buffer = sys.stdin # a buffer to get the user input from
    output_buffer = sys.stdout # a buffer to write rasa's output to
    proc = subprocess.Popen(["path/to/rasa", "arg1", "arg2", "etc."], # start the process
                            stdin=subprocess.PIPE, # pipe its STDIN so we can write to it
                            stdout=subprocess.PIPE, # pipe its STDOUT so we can process it
                            universal_newlines=True)
    # let's build a timer which will fire off if we don't reset it
    timer = threading.Event() # a simple Event timer
    input_thread = threading.Thread(target=timed_user_input,
                                    args=(timer, # pass the timer
                                          1.0, # prompt after one second
                                          input_buffer, output_buffer, proc.stdin))
    input_thread.daemon = True # no need to keep the input thread blocking...
    input_thread.start() # start the timer thread
    # now we'll read the `rasa` STDOUT line by line, forward it to output_buffer
    # and reset the timer each time a new line is encountered
    for line in proc.stdout:
        output_buffer.write(line) # forward the STDOUT line
        output_buffer.flush() # flush the output buffer
        timer.set() # reset the timer
You can use a similar technique to check for more complex 'expected user input' patterns. There is a whole module called pexpect designed to deal with this type of task, and I wholeheartedly recommend it if you're willing to give up some flexibility.
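For illustration, a minimal pexpect sketch (the "..." prompt pattern is only an assumption about what rasa prints before reading input, not its documented behaviour):

import pexpect

child = pexpect.spawn("path/to/rasa", ["arg1", "arg2"], encoding="utf-8")
while True:
    child.expect(r"\.\.\.")          # wait for the (assumed) prompt
    print(child.before, end="")      # show everything printed before the prompt
    child.sendline(input("Input: ")) # forward the user's reply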
Now... all this being said, you are aware that Rasa is built in Python, installs as a Python module and has a Python API, right? Since you're already using Python why would you call it as a subprocess and deal with all this STDOUT/STDIN shenanigans when you can directly run it from your Python code? Just import it and interact with it directly, they even have a very simple example that does exactly what you're trying to do: Rasa Core with minimal Python.

How to make subprocess only communicate error

We have created a utility function, used in many projects, which uses subprocess to start a command. This function is as follows:
def _popen( command_list ):
    p = subprocess.Popen( command_list, stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE )
    out, error_msg = p.communicate()
    # Some processes (e.g. system_start) print a number of dots in stderr
    # even when no error occurs.
    if error_msg.strip('.') == '':
        error_msg = ''
    return out, error_msg
For most processes this works as intended.
But now I have to use it with a background process which needs to keep running as long as my Python script is running as well, and thus now the fun starts ;-).
Note: the script also needs to start other non-background processes using this same _popen function.
I know that by skipping p.communicate I can make the process start in the background while my Python script continues.
But there are 2 problems with this:
I need to check that the background process started correctly
While the main process is running I need to check the stdout and stderr of the background process from time to time, without stopping that process or ending up hanging in it.
Check background process started correctly
For 1 I currently adapted the _popen version to take an extra parameter skip_com (default False) to skip the p.communicate call. In that case I return the p object instead of out and error_msg,
so I can check whether the process is running directly after starting it up, and if not, call communicate on the p object to check what the error_msg is.
MY_COMMAND_LIST = [ "<command that should go to background>" ]

def _popen( command_list, skip_com=False ):
    p = subprocess.Popen( command_list, stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE )
    if not skip_com:
        out, error_msg = p.communicate()
        # Some processes (e.g. system_start) print a number of dots in stderr
        # even when no error occurs.
        if error_msg.strip('.') == '':
            error_msg = ''
        return out, error_msg
    else:
        return p

...

p = _popen( MY_COMMAND_LIST, True )
error = _get_command_pid( MY_COMMAND_LIST ) # checks if background command is running using _popen and ps -ef
if error:
    _, error_msg = p.communicate()
I do not know if there is a better way to do this.
Check stdout / stderr
For 2 I have not found a solution which does not cause the script to wait for the end of the background process.
The only way I know to communicate is using iter on e.g. p.stdout.readline, but that will hang if the process is still running:

for line in iter( p.stdout.readline, "" ):
    print line
Anyone have an idea how to do this?
/edit/ I need to check the data I get from stdout and stderr separately. Especially stderr is important in this case, since if the background process encounters an error it will exit, and I need to catch that in my main program to be able to prevent errors caused by that exit.
The stdout output is needed in some situations to check the expected behaviour of the background process and to react to it.
Update
The subprocess will actually exit if it encounters an error
If you don't need to read the output to detect an error then redirect it to DEVNULL and call .poll() to check child process' status from time to time without stopping the process.
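A minimal sketch of that approach (subprocess.DEVNULL requires Python 3.3+):

from subprocess import Popen, DEVNULL

process = Popen(command, stdout=DEVNULL, stderr=DEVNULL)
# ... later, from time to time:
if process.poll() is not None: # None means the child is still running
    print("child exited with", process.returncode)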
assuming you have to read the output:
Do not use stdout=PIPE, stderr=PIPE unless you read from the pipes. Otherwise, the child process may hang as soon as any of the corresponding OS pipe buffers fill up.
If you want to start a process and do something else while it is running then you need a non-blocking way to read its output. A simple portable way is to use a thread:
def process_output(process):
    with finishing(process): # close pipes, call .wait()
        for line in iter(process.stdout.readline, b''):
            if detected_error(line):
                communicate_error(process, line)

process = Popen(command, stdout=PIPE, stderr=STDOUT, bufsize=1)
Thread(target=process_output, args=[process]).start()
I need to check the data I get from stdout and stderr separately.
Use two threads:
def read_stdout(process):
    with waiting(process), process.stdout: # close pipe, call .wait()
        for line in iter(process.stdout.readline, b''):
            do_something_with_stdout(line)

def read_stderr(process):
    with process.stderr:
        for line in iter(process.stderr.readline, b''):
            if detected_error(line):
                communicate_error(process, line)

process = Popen(command, stdout=PIPE, stderr=PIPE, bufsize=1)
Thread(target=read_stdout, args=[process]).start()
Thread(target=read_stderr, args=[process]).start()
You could put the code into a custom class (to group do_something_with_stdout(), detected_error(), communicate_error() methods).
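A minimal sketch of such a class (do_something_with_stdout(), detected_error(), and communicate_error() remain the same placeholder hooks as above):

from subprocess import Popen, PIPE
from threading import Thread

class BackgroundCommand:
    def __init__(self, command):
        self.process = Popen(command, stdout=PIPE, stderr=PIPE, bufsize=1)
        Thread(target=self._read_stdout, daemon=True).start()
        Thread(target=self._read_stderr, daemon=True).start()

    def _read_stdout(self):
        with self.process.stdout:
            for line in iter(self.process.stdout.readline, b''):
                self.do_something_with_stdout(line) # placeholder hook

    def _read_stderr(self):
        with self.process.stderr:
            for line in iter(self.process.stderr.readline, b''):
                if self.detected_error(line):       # placeholder hook
                    self.communicate_error(line)    # placeholder hook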
It may be better or worse than what you imagine...
Anyway, the correct way of reading a pipe line by line is simply:
for line in p.stdout:
    # process the line if you want, or just
    print line
Or, if you need to fetch lines one at a time inside a higher-level loop:

line = next(p.stdout)
But a harder problem can come from the commands started from Python. Many programs use the underlying C standard library, and by default stdout is a buffered stream. The library detects whether standard output is connected to a terminal, and automatically flushes output on a newline (\n) or on a read from the same terminal. But if output is connected to a pipe or a file, everything is buffered until the buffer is full, which on current systems takes several kilobytes. In that case nothing can be done at the Python level. The code above would get a full line as soon as it is written to the pipe, but cannot guess anything before the callee has actually written something...

Filter out command that needs a terminal in Python subprocess module

I am developing a robot that accepts commands from the network (XMPP) and uses the subprocess module in Python to execute them and send back their output. Essentially it is an SSH-like, XMPP-based, non-interactive shell.
The robot only executes commands from authenticated trusted sources, so arbitrary shell commands are allowed (shell=True).
However, when I accidentally send some command that needs a tty, the robot is stuck.
For example:
subprocess.check_output(['vim'], shell=False)
subprocess.check_output('vim', shell=True)
Should either of the above commands be received, the robot gets stuck, and the terminal from which the robot is run breaks.
Though the robot only receives commands from authenticated trusted sources, humans err. How could I make the robot filter out those commands that would break it? I know there is os.isatty, but how could I utilize it? Is there a way to detect those "bad" commands and refuse to execute them?
TL;DR:
Say, there are two kinds of commands:
Commands like ls: does not need a tty to run.
Commands like vim: needs a tty; breaks subprocess if no tty is given.
How could I tell whether a command is ls-like or vim-like, and refuse to run it if it is vim-like?
What you expect is a function that receives a command as input and returns meaningful output by running the command.
Since the command is arbitrary, the requirement for a tty is just one of many bad cases that may happen (others include running an infinite loop). Your function should only concern itself with the running period; in other words, whether a command is "bad" or not should be determined by whether it ends in a limited time. And since subprocess is asynchronous by nature, you can just run the command and handle it from a higher level.
Demo code to play with; you can change the cmd value to see how it performs differently:
#!/usr/bin/env python
# coding: utf-8

import time
import subprocess
from subprocess import PIPE

#cmd = ['ls']
#cmd = ['sleep', '3']
cmd = ['vim', '-u', '/dev/null']

print 'call cmd'
p = subprocess.Popen(cmd, shell=False, # a list of args should be run with shell=False
                     stdin=PIPE, stderr=PIPE, stdout=PIPE)
print 'called', p

time_limit = 2
timer = 0
time_gap = 0.2
ended = False
while True:
    time.sleep(time_gap)
    returncode = p.poll()
    print 'process status', returncode
    timer += time_gap
    if timer >= time_limit:
        print 'timeout, kill process'
        p.kill()
        break
    if returncode is not None:
        ended = True
        break
if ended:
    print 'process ended by', returncode
    print 'read'
    out, err = p.communicate()
    print 'out', repr(out)
    print 'error', repr(err)
else:
    print 'process failed'
Three points are notable in the above code:
We use Popen instead of check_output to run the command. Unlike check_output, which will wait for the process to end, Popen returns immediately, so we can do further things to control the process.
We implement a timer to check the process's status; if it runs for too long, we kill it manually, because we consider a process meaningless if it cannot end in a limited time. In this way your original problem is solved: vim will never end, so it will definitely be killed as a "meaningless" command.
After the timer helps us filter out bad commands, we can get the stdout and stderr of the command by calling the communicate method of the Popen object; after that it's your choice what to return to the user.
Conclusion
tty simulation is not needed. Run the subprocess asynchronously, then control it with a timer to determine whether it should be killed; for those that end normally, it's safe and easy to get the output.
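For what it's worth, on Python 3.3+ the manual timer above can be replaced by the built-in timeout support; a minimal sketch of the same kill-after-timeout policy:

from subprocess import Popen, PIPE, TimeoutExpired

p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
try:
    out, err = p.communicate(timeout=2) # wait at most 2 seconds
except TimeoutExpired:
    p.kill()                   # same "meaningless command" policy as above
    out, err = p.communicate() # collect whatever was produced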
Well, SSH is already a tool that will allow users to remotely execute commands and be authenticated at the same time. The authentication piece is extremely tricky, please be aware that building the software you're describing is a bit risky from a security perspective.
There isn't a way to determine in advance whether a process is going to need a tty. And os.isatty won't help: it tells you whether a file descriptor you already have is attached to a terminal, not whether the subprocess you are about to run will want one. :)
In general, it would probably be safer from a security perspective and also a solution to this problem if you were to consider a white list of commands. You could choose that white list to avoid things that would need a tty, because I don't think you'll easily get around this.
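A minimal sketch of such a white list (the allowed set here is purely illustrative):

import subprocess

ALLOWED = {'ls', 'df', 'uptime', 'whoami'} # illustrative; choose tty-free commands

def run_if_allowed(command_list):
    if command_list[0] not in ALLOWED:
        raise ValueError('command not in white list: %r' % command_list[0])
    return subprocess.check_output(command_list, shell=False)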
Thanks a lot to J.F. Sebastian for his help (see comments under the question); I've found a solution (workaround?) for my case.
The reason why vim breaks the terminal while ls does not is that vim needs a tty. As Sebastian says, we can feed vim a pty using pty.openpty(). Feeding it a pty guarantees the command will not break the terminal, and we can add a timeout to auto-kill such processes. Here is a (dirty) working example:
#!/usr/bin/env python3
import pty
from subprocess import STDOUT, check_output, TimeoutExpired

master_fd, slave_fd = pty.openpty()

try:
    output1 = check_output(['ls', '/'], stdin=slave_fd, stderr=STDOUT,
                           universal_newlines=True, timeout=3)
    print(output1)
except TimeoutExpired:
    print('Timed out')

try:
    output2 = check_output(['vim'], stdin=slave_fd, stderr=STDOUT,
                           universal_newlines=True, timeout=3)
    print(output2)
except TimeoutExpired:
    print('Timed out')
Note it is stdin that we need to take care of, not stdout or stderr.
You can refer to my answer in https://stackoverflow.com/a/43012138/3555925, which uses a pseudo-terminal to make stdout non-blocking and uses select to handle stdin/stdout.
I can just change the command var to 'vim', and the script works fine.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
import select
import termios
import tty
import pty
from subprocess import Popen

command = 'vim'

# save original tty setting then set it to raw mode
old_tty = termios.tcgetattr(sys.stdin)
tty.setraw(sys.stdin.fileno())

# open pseudo-terminal to interact with subprocess
master_fd, slave_fd = pty.openpty()

# use os.setsid() to make the process the leader of a new session,
# or bash job control will not be enabled
p = Popen(command,
          preexec_fn=os.setsid,
          stdin=slave_fd,
          stdout=slave_fd,
          stderr=slave_fd,
          universal_newlines=True)

while p.poll() is None:
    r, w, e = select.select([sys.stdin, master_fd], [], [])
    if sys.stdin in r:
        d = os.read(sys.stdin.fileno(), 10240)
        os.write(master_fd, d)
    elif master_fd in r:
        o = os.read(master_fd, 10240)
        if o:
            os.write(sys.stdout.fileno(), o)

# restore tty settings back
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)

python subprocess - How to check if there is no new data in the PIPE?

I have the following example:
import subprocess

p = subprocess.Popen("cmd", stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write(b'cd\\' + b'\r\n')
p.stdin.write(b'dir' + b'\r\n')
p.stdin.write(b'\r\n')

while True:
    line = p.stdout.readline()
    print(line.decode('ascii'), end='')
    if line.rstrip().decode('ascii') == 'C:\>': # I need to check at this point if there is new data in the PIPE
        print('End of File')
        break
I am listening to the pipe for any output from the subprocess, and if no new data comes through the pipe I would like to stop reading. I would like a control statement that tells me the pipe is empty. This would help me avoid problems in case my process freezes or ends with an unexpected result.
Unless the process is over or there is a signal you are expecting to stop reading at, there is no good way to know ahead of time whether there is data in the pipe, because a read will only return when it has read the number of bytes you asked for [.read(n)], reached a newline char [.readline()], or reached the end of the file (which doesn't exist until the process is over).
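A common workaround (a sketch, not tested on Windows specifically) is a reader thread plus a timed queue get, so that "no data for N seconds" stands in for "the pipe is empty":

from queue import Queue, Empty
from threading import Thread

q = Queue()

def pump(pipe):
    for line in iter(pipe.readline, b''):
        q.put(line)

Thread(target=pump, args=(p.stdout,), daemon=True).start()
try:
    line = q.get(timeout=2.0) # treat 2 s of silence as "no new data"
except Empty:
    print('pipe looks idle')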
However, you don't need to run cmd.exe to run your program, since your script is already being run from the cmd shell.
I suggest you use subprocess to call the program directly and handle exceptions/return codes in your code. You could do something like...
import subprocess
import time

p = subprocess.Popen(["your_program.exe", "-f", "filename"], # args must go in one list
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE)

# If you have to use stdin, do it here.
p.stdin.write('lawl here are my inputs\n')

run_for = 0
while p.poll() is None:
    time.sleep(1)
    if run_for > 10:
        p.kill()
        break
    run_for += 1

if p.returncode == 0:
    ...handle success...
else:
    ...handle failure...
You could do this in a loop and spin up a new process that would run the next file.
If it's that costly to spin up the program (and if it's not then stop reading now because it's about to get embarrassing) then perhaps (and this is a total hack, but) after your process has run a while, you could pass a particularly odd but innocuous string to p.stdin, as in p.stdin.write("\n~%$%~\n").
If you could get away with that, then you could do something like...
for line in p.stdout.readlines():
    if '~%$%~' in line:
        break
But holy crap, please don't do that. It's such a hack.
