stdin.write() being blocked from interacting with xfoil.exe - python

I'm writing a wrapper for Xfoil and my first set of commands is:
commands = []
commands.append('plop\n')
commands.append('g,f\n')
commands.append('\n')
commands.append('load ' + afile + '\n')
commands.append('\n')
#commands.append('ppar\n')
#commands.append('n %g\n' % n)
commands.append('\n')
commands.append('\n')
commands.append('oper\n')
commands.append('iter ' + str(iter) + '\n')
commands.append('visc {0:f}\n'.format(Re))
commands.append('m {0:f}\n'.format(M))
I'm interacting with xfoil as below:
xfoil_path = os.getcwd() + '/xfoil.exe'
Xfoil = Popen(xfoil_path, shell=True, stdin=PIPE, stdout=None, stderr=None, creationflags=0)
for i in commands:
    print '\nExecuting:', i
    # stdin.write returns None if write is blocked and that seems to be the case here
    Xfoil.stdin.write(i)
    Xfoil.wait()
    #print Xfoil.stdin.write(i)
However, Xfoil.stdin.write is being blocked from interacting with the program, xfoil.exe, since Xfoil.stdin.write(i) returns None.
This happens immediately after the first command, i.e. plop.
How do I resolve this?

The solution is to add Xfoil.stdin.close(); closing the pipe allows the program to proceed.
Xfoil = Popen(xfoil_path, shell=True, stdin=PIPE, stdout=None, stderr=None, creationflags=0)
for i in commands:
    Xfoil.stdin.write(i)
Xfoil.stdin.close()
Xfoil.wait()
I'm seeking help to understand why Xfoil.stdin.close() needs to be added. How does closing the pipe allow xfoil.exe to proceed?

To send multiple commands, you could use the Popen.communicate() method, which sends the commands, closes the pipe, and waits for the child process to finish:
import os
from subprocess import Popen, PIPE
process = Popen(os.path.abspath('xfoil.exe'), stdin=PIPE)
process.communicate(b"".join(commands))
Xfoil.wait() in your code waits for the executable to finish after the first command. Closing the pipe (Xfoil.stdin) signals EOF; otherwise a deadlock may happen if xfoil.exe reads until EOF (no command makes it exit otherwise).
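Putting the pieces together with the command list from the question, a minimal sketch (assuming Python 3, where the pipe expects bytes, so the str commands are joined and encoded first):
import os
from subprocess import Popen, PIPE

# commands is the list of str commands built in the question
process = Popen(os.path.abspath('xfoil.exe'), stdin=PIPE)
# communicate() writes everything, closes stdin (signalling EOF), and waits
process.communicate("".join(commands).encode())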

Related

Running a subprocess in a script and based on the output, continuing the execution of the script

I am writing a script which uses subprocess to launch a server and then continues with the execution of the script.
This is my current code:
cmd = "some command"
process = check_output(cmd, shell=True, stdin=None, stdout=None, stderr=None, close_fds=True)
time.sleep(70)
print(process.returncode)
I am using time.sleep to delay the execution of the next lines in the script so that the server starts, but this is not efficient.
Once the server starts, I get this output in the console: INFO:bridge_serving!
Is there a way that I can check the console output, and once it says INFO:bridge_serving!, let the next lines of the script continue running?
I am guessing that you want the process to stay open/running while continually checking the output.
subprocess.check_output() won't return anything until the process is complete.
You probably want to keep the process open and use a pipe to read the output as it is created.
See: https://stackoverflow.com/a/4760517/3389859 for more info.
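A minimal sketch of that idea, assuming the server prints INFO:bridge_serving! to stdout (stderr is merged in below in case it logs there instead):
import subprocess

cmd = "some command"  # the server command from the question
process = subprocess.Popen(cmd, shell=True,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT,
                           universal_newlines=True)

# Read the server's output line by line until the ready message appears
while True:
    line = process.stdout.readline()
    if not line:  # EOF: the server exited before becoming ready
        break
    print(line, end='')
    if "INFO:bridge_serving!" in line:
        break  # server is up; continue the script while it keeps running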

Python waiting for subprocess to finish

I'm really new to Python and I have a little problem with the subprocess class.
I'm starting an external program with:
thread1.event.clear()
thread2.event.clear()
print "Sende Motoren STOP"
print "Gebe BILD in Auftrag"
proc = Popen(['gphoto2 --capture-image &'], shell=True, stdin=None, stdout=None, stderr=None, close_fds=True)
sleep(args.max + 2)
thread1.event.set()
thread2.event.set()
sleep(args.tp - 2 - args.max)
My problem is that in the shell where I started the Python script, I still get the output of gphoto2, and I think Python is still waiting for gphoto2 to finish.
Any ideas?
The documentation for subprocess.Popen states:
stdin, stdout and stderr specify the executed program's standard input, standard output and standard error file handles, respectively.
[...]
With None, no redirection will occur; the child's file handles will be
inherited from the parent.
So you might want to try something along the lines of this, which, by the way, blocks until completion, so you might not need the sleep() (here the wait() from subprocess.Popen might be what you want):
import subprocess
ret_code = subprocess.call(["echo", "Hello World!"], stdout=subprocess.PIPE)
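If the goal is instead to silence gphoto2's output and block only until the capture finishes, a rough sketch (assuming Python 2.7, which predates subprocess.DEVNULL; note it drops shell=True and the trailing &, letting wait() do the synchronization instead of sleep()):
import os
from subprocess import Popen

devnull = open(os.devnull, 'w')
# Redirect both streams so the child no longer writes to the parent's console
proc = Popen(['gphoto2', '--capture-image'],
             stdout=devnull, stderr=devnull, close_fds=True)
ret_code = proc.wait()  # blocks until gphoto2 finishes; no sleep() guessing
devnull.close()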

subprocess popen.communicate() vs. stdin.write() and stdout.read()

I have noticed two different behaviors with two approaches that should have result in the same outcome.
The goal - to execute an external program using subprocess module, send some data and read the results.
The external program is PLINK, the platform is Windows XP, and the Python version is 3.3.
The main idea:
execution=["C:\\Pr..\\...\\plink.exe", "-l", username, "-pw", "***", IP]
a=subprocess.Popen(execution, bufsize=0, stdout=PIPE, stdin=PIPE, stderr=STDOUT, shell=False)
con=a.stdout.readline()
if (con.decode("utf-8").count("FATAL ERROR: Network error: Connection timed out")==0):
a.stdin.write(b"con rout 1\n")
print(a.stdout.readline().decode("utf-8"))
a.stdin.write(b"infodf\n")
print(a.stdout.readline().decode("utf-8"))
else:
print("ERROR")
a.kill()
So far so good.
Now, I want to be able to do a loop (after each write to the subprocess's stdin) that waits until EOF of the subprocess's stdout, prints it, then sends another stdin command, and so on.
So I first tried what previous discussions about the same topic yielded (live output from subprocess command, read subprocess stdout line by line, python, subprocess: reading output from subprocess).
And it didn't work (it hangs forever), because the PLINK process remains alive until I kill it myself, so there is no use waiting for the subprocess's stdout to reach EOF, or looping while stdout is true, because it will always be true until I kill it.
So I decided to read from stdout twice every time I write to stdin (good enough for me):
execution=["C:\\Pr..\\...\\plink.exe", "-l", username, "-pw", "***", IP]
a=subprocess.Popen(execution, bufsize=0, stdout=PIPE, stdin=PIPE, stderr=STDOUT, shell=False)
con=a.stdout.readline()
if (con.decode("utf-8").count("FATAL ERROR: Network error: Connection timed out")==0):
a.stdin.write(b"con rout 1\n")
print(a.stdout.readline().decode("utf-8"))
print(a.stdout.readline().decode("utf-8")) //the extra line [1]
a.stdin.write(b"infodf\n")
print(a.stdout.readline().decode("utf-8"))
print(a.stdout.readline().decode("utf-8")) //the extra line [2]
else:
print("ERROR")
a.kill()
But the first extra readline() hangs forever, as far as I understand, for the same reason I mentioned: it waits forever for output, because the only output line was already read by the first readline(), and because PLINK stays alive, the function just sits there and waits for a new output line to arrive.
So I tried this code, expecting the same hang, because PLINK never dies until I kill it:
execution=["C:\\Pr..\\...\\plink.exe", "-l", username, "-pw", "***", IP]
a=subprocess.Popen(execution, bufsize=0, stdout=PIPE, stdin=PIPE, stderr=STDOUT, shell=False)
con=a.stdout.readline()
if (con.decode("utf-8").count("FATAL ERROR: Network error: Connection timed out")==0):
a.stdin.write(b"con rout 1\n")
print(a.stdout.readline().decode("utf-8"))
a.stdin.write(b"infodf\n")
print(a.stdout.readline().decode("utf-8"))
print(a.communicate()[0].decode("utf-8")) //Popen.communicate() function
else:
print("ERROR")
a.kill()
I tried that because, according to the documentation of communicate(), the function waits until the process has ended, and then it finishes. Also, it reads from stdout until EOF (the same as writing to stdin and reading from stdout).
But communicate() finishes and does not hang, unlike the previous code block.
What am I missing here? Why does PLINK end when using communicate(), but not when using readline()?
Your program without communicate() deadlocks because both processes are waiting on each other to write something before they write anything more themselves.
communicate() does not deadlock in your example because it closes the stream, like the command a.stdin.close() would. This sends an EOF to your subprocess, letting it know that there is no more input coming, so it can close itself, which in turn closes its output, so a.stdout.read() eventually returns an EOF (empty string).
There is no special signal that your main process will receive from your subprocess to let you know that it is done writing the results from one command, but is ready for another command.
This means that to communicate back and forth with one subprocess like you're trying to, you must read the exact number of lines that the subprocess sends. Like you saw, if you try to read too many lines, you deadlock. You might be able to use what you know, such as the command you sent it, and the output you have seen so far, to figure out exactly how many lines to read.
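One way to know when to stop reading, assuming the device prints a recognizable command prompt at the end of each response, is to read until that prompt appears. A sketch (the "router>" marker below is hypothetical; substitute whatever prompt your device actually prints):
def read_until(stream, marker):
    # Read lines until one containing the marker appears, then return
    # everything read. The marker must be something the device reliably
    # prints at the end of a response, e.g. its command prompt.
    lines = []
    while True:
        raw = stream.readline()
        if not raw:  # EOF: the subprocess closed its output
            break
        line = raw.decode("utf-8")
        lines.append(line)
        if marker in line:
            break
    return "".join(lines)

a.stdin.write(b"con rout 1\n")
print(read_until(a.stdout, "router>"))  # "router>" is a hypothetical prompt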
You can use threads to write and read at the same time, especially if the output only needs to be printed to the user:
from threading import Thread

def print_remaining(stream):
    for line in stream:
        print(line.decode("utf-8"))

con = a.stdout.readline()
if "FATAL ERROR" not in con.decode("utf-8"):
    Thread(target=print_remaining, args=[a.stdout]).start()
    for cmd in LIST_OF_COMMANDS_TO_SEND:
        a.stdin.write(cmd)
The thing is that when using subprocess.Popen, your code continues to run even before the process terminates. Try appending .wait() to your Popen call (see documentation):
a = subprocess.Popen(execution, bufsize=0, stdout=PIPE, stdin=PIPE, stderr=STDOUT, shell=False).wait()
This will ensure that the execution finishes before going on with anything else.

Popen subprocessing problems

I'm trying to learn about the subprocess module and am therefore making an hlds server administrator.
My goal is to be able to start server instances and send all commands through dispatcher.py to administrate multiple servers, i.e. send commands to each subprocess's stdin.
Here's what I've got so far from some initial testing, but I'm already stuck :]
#dispatcher.py
import subprocess
RUN = '/home/daniel/hlds/hlds_run -game cstrike -map de_dust2 -maxplayers 11'
#RUN = "ls -l"
hlds = subprocess.Popen(RUN.split(), stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
print hlds.communicate()[0]
print hlds.communicate()[1]
hlds.communicate('quit')
I am not getting any stdout from the hlds server, but it works fine if I don't set stdout to PIPE. And hlds.communicate('quit') does not seem to be sent to the hlds process's stdin either. The ls -l command returns stdout correctly, but hlds does not.
All help appreciated! :)
See the Popen.communicate docs (emphasis mine):
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional input argument should be a string to be sent to the child process, or None, if no data should be sent to the child.
So you can only call communicate once per run of a process, since it waits for the process to terminate. That's why ls -l seems to work -- it terminates immediately, while hlds doesn't.
You'd need to do:
out, error = hlds.communicate('quit')
if you want to send in quit and get all output until it terminates.
If you need more interactivity, you'll need to use hlds.stdout, hlds.stdin, and hlds.stderr directly.
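For that interactive route, a rough sketch (assuming hlds answers each console command on its stdout; the status command is only an illustration, and each reply may span an unknown number of lines, which is the deadlock caveat from the PLINK answer above):
hlds.stdin.write('status\n')  # 'status' is just an illustrative hlds console command
hlds.stdin.flush()            # make sure the command isn't stuck in Python's buffer
print hlds.stdout.readline()  # read one line of the reply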

Python monitoring stderr and stdout of a subprocess

I'm trying to start a program (HandBrakeCLI) as a subprocess or thread from within Python 2.7. I have gotten as far as starting it, but I can't figure out how to monitor its stderr and stdout.
The program outputs its status (% done) and info about the encode to stderr and stdout, respectively. I'd like to be able to periodically retrieve the % done from the appropriate stream.
I've tried calling subprocess.Popen with stderr and stdout set to PIPE and using subprocess.communicate, but it sits and waits until the process is killed or completes, and only then retrieves the output. That doesn't do me much good.
I've got it up and running as a thread, but as far as I can tell I still have to eventually call subprocess.Popen to execute the program, and I run into the same wall.
Am I going about this the right way? What other options do I have, or how do I get this to work as described?
I have accomplished the same with ffmpeg. This is a stripped down version of the relevant portions. bufsize=1 means line buffering and may not be needed.
import subprocess

def Run(command):
    proc = subprocess.Popen(command, bufsize=1,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                            universal_newlines=True)
    return proc

def Trace(proc):
    while proc.poll() is None:
        line = proc.stdout.readline()
        if line:
            # Process output here
            print 'Read line', line

proc = Run([handbrakePath] + allOptions)
Trace(proc)
Edit 1: I noticed that the subprocess (handbrake in this case) needs to flush after lines to use this (ffmpeg does).
Edit 2: Some quick tests reveal that bufsize=1 may not be actually needed.
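Since the question asks to periodically retrieve the % done rather than block in a loop, one possible variation (a sketch, not tested against HandBrake) is to run the reader on a daemon thread and keep only the latest line for the main thread to poll:
import threading

latest = {'line': None}  # shared slot the main thread can poll

def TraceLatest(proc):
    # Same loop as Trace above, but stores the newest line instead of printing
    while proc.poll() is None:
        line = proc.stdout.readline()
        if line:
            latest['line'] = line

t = threading.Thread(target=TraceLatest, args=(proc,))
t.daemon = True
t.start()
# ...check latest['line'] whenever a progress update is wanted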
