Reading process output - python

I'm writing a simple wrapper over python debugger (pdb) and I need to parse pdb output. But I have a problem reading text from process pipe.
Example of my code:
import subprocess, threading, time

def readProcessOutput(process):
    while not process.poll():
        print(process.stdout.readline())

process = subprocess.Popen('python -m pdb script.py', shell=True, universal_newlines=True,
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT, stdin=subprocess.PIPE)
read_thread = threading.Thread(target=readProcessOutput, args=(process,))
read_thread.start()
while True:
    time.sleep(0.5)
When I execute the given command (python -m pdb script.py) in an OS shell, I get results like this:
> c:\develop\script.py(1)<module>()
-> print('hello, world!')
(Pdb)
But when I run my script I get only the first two lines and never the pdb prompt; writing commands to stdin after this has no effect. So my question is:
why can't I read the third line? How can I avoid this problem and get the correct output?
Platform: Windows XP, Python 3.3

The third line cannot be read by readline() because it is not yet terminated by an end of line. At the terminal you normally see the cursor sitting after "(Pdb) " until you type something and press Enter.
Communicating with processes that present a prompt is usually more complicated. In my experience it helps to start with an independent writer thread as well, so the communication is easier to test and the main thread is guaranteed never to freeze if too much is written or read. Once everything works, the design can be simplified again.
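As a concrete illustration of why readline() blocks, here is a minimal sketch (my addition, assuming the prompt text is "(Pdb) ") that reads one character at a time and treats the prompt itself as an end-of-message marker:
import subprocess

PROMPT = '(Pdb) '

process = subprocess.Popen('python -m pdb script.py', shell=True,
                           universal_newlines=True, stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

buf = ''
while True:
    ch = process.stdout.read(1)    # blocks until one character arrives; '' at EOF
    if ch == '':
        break
    buf += ch
    if buf.endswith('\n'):         # a complete, newline-terminated line
        print(buf, end='')
        buf = ''
    elif buf.endswith(PROMPT):     # the unterminated prompt: pdb is waiting for input
        print(buf)
        process.stdin.write('where\n')   # placeholder command; send whatever you need
        process.stdin.flush()
        buf = ''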

Related

Run a program in the background and then open another program using subprocess

On the terminal, I have two programs to run using subprocess
First, I will call ./matrix-odas & so the first program will run in the background and I can then type the second command. The first command will return some messages.
The second command ~/odas/bin/odaslive -vc ~/odas/config/odaslive/matrix_creator.cfg will open the second program and it will keep running and keep printing out text. I'd like to use subprocess to open these programs and capture both outputs.
I have never used subprocess before, so I am following tutorials and writing the script in a Jupyter notebook (Python 3.7) in order to see the output easily.
from subprocess import Popen, PIPE
p = Popen(["./matrix-odas", "&"], stdout=PIPE, stderr=PIPE, cwd=wd, universal_newlines=True)
stdout, stderr = p.communicate()
print(stdout)
This is the code I tried for opening the first program, but the Jupyter notebook always gets stuck at p.communicate() and I can't see the messages. Without running the first program in the background, I won't be able to get the command prompt back after the messages are printed.
I would like to know which subprocess function I should use to solve this issue and which platform is better for testing subprocess code. Any suggestions will be appreciated. Thank you so much!
From this example at the end of this section of the docs:
with Popen(["ifconfig"], stdout=PIPE) as proc:
    log.write(proc.stdout.read())
it looks like you can access stdout (and I would assume stderr) from the object directly. I am not sure whether you need to use Popen as a context manager to access that property or not.
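Building on that, here is a minimal sketch (my illustration, not part of the answer) that avoids the blocking communicate() call: a daemon thread drains the first program's output while the notebook cell finishes, leaving you free to start the second program. The "&" argument is dropped because Popen itself already runs the child concurrently, so shell job control is not needed.
import threading, time
from subprocess import Popen, PIPE

def drain(pipe, sink):
    # read lines until EOF without blocking the main thread
    for line in iter(pipe.readline, ''):
        sink.append(line)

wd = '.'  # working directory from the question; adjust to your setup
p = Popen(["./matrix-odas"], stdout=PIPE, stderr=PIPE,
          cwd=wd, universal_newlines=True)

messages = []
threading.Thread(target=drain, args=(p.stdout, messages), daemon=True).start()

time.sleep(1.0)            # give the program a moment to print its startup messages
print(''.join(messages))   # the cell returns; inspect `messages` again at any time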

Get windows shell output of long running script in python

My problem is very similar to the problems described here and here, with one thing not covered: assume that I have an external long_running.exe on Windows, and I'm calling it with subprocess.Popen(). My exe prints some output and after some time goes into a loopback mode and waits for input. When that happens, it outputs a single dot every second in the Windows command prompt. It puts the subsequent dots on the same line, whereas all output before that is on its own line. I seem to be able to catch all output before that point, but I cannot get the dots, probably due to some buffering going on(?). How can I get this output in my Python console? Relevant code below:
import subprocess
import sys

cmd = 'long_running.exe'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
for line in iter(p.stdout.readline, b''):
    sys.stdout.write(line)
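The symptom (dots printed without a trailing newline) suggests that readline() simply blocks until the line is complete. A minimal sketch, assuming Python 3 and the same long_running.exe, that reads single bytes instead of whole lines:
import subprocess
import sys

p = subprocess.Popen('long_running.exe', shell=True, stdout=subprocess.PIPE)
while True:
    b = p.stdout.read(1)        # blocks until one byte is available; b'' at EOF
    if not b:
        break
    sys.stdout.write(b.decode(errors='replace'))
    sys.stdout.flush()          # show each dot immediately, newline or not
Note that if long_running.exe itself block-buffers its output when attached to a pipe rather than a console, no amount of careful reading on the Python side will reveal the dots until that buffer flushes.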

How to use Popen with an interactive command? nslookup, ftp

Is there any way to use Popen with interactive commands? I mean nslookup, ftp, powershell... I have read the whole subprocess documentation several times but I can't find a way.
What I have (removing the parts of the project which aren't of interest here) is:
from subprocess import call, PIPE, Popen
command = raw_input('>>> ')
command = command.split(' ')
process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
execution = process.stdout.read()
error = process.stderr.read()
output = execution + error
process.stderr.close()
process.stdout.close()
print(output)
Basically, when I try to print the output of a command like dir, the output is a string, so I can work with .read() on it. But when I try to use nslookup, for example, the output isn't a string, so it can't be read, and the script enters a deadlock.
I know that I can invoke nslookup in non-interactive mode, but that's not the point. I want to remove all chances of a deadlock and make it work with every command you can run in a normal cmd.
The real way the project works is through sockets, so the raw_input is a s.recv() and the output is sending back the output, but I have simplified it to focus on the problem.
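One common pattern for interactive children, sketched below as a minimal illustration (not a posted answer), is to keep the process alive, write commands to its stdin, and drain its stdout on a separate thread so that no read can ever deadlock the main loop:
import threading
from subprocess import Popen, PIPE, STDOUT

def drain(pipe):
    # print everything the child writes, as it arrives
    for line in iter(pipe.readline, ''):
        print(line, end='')

proc = Popen(['nslookup'], stdin=PIPE, stdout=PIPE, stderr=STDOUT,
             universal_newlines=True)
threading.Thread(target=drain, args=(proc.stdout,), daemon=True).start()

proc.stdin.write('example.com\n')   # a query at the interactive prompt
proc.stdin.flush()
proc.stdin.write('exit\n')          # nslookup's own quit command
proc.stdin.flush()
proc.wait()
Knowing when the output for one command is complete still requires recognizing the program's prompt, as discussed in the first answer on this page, and some programs buffer their output differently when attached to a pipe instead of a console.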

How to execute a shell script in the background from a Python script

I am working on executing the shell script from Python and so far it is working fine. But I am stuck on one thing.
In my Unix machine I am executing one command in the background by using & like this. This command will start my app server -
david#machineA:/opt/kml$ /opt/kml/bin/kml_http --config=/opt/kml/config/httpd.conf.dev &
Now I need to execute the same thing from my Python script, but as soon as it executes my command it never reaches the else block and never prints out execute_steps::Successful; it just hangs there.
proc = subprocess.Popen("/opt/kml/bin/kml_http --config=/opt/kml/config/httpd.conf.dev &",
                        shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        executable='/bin/bash')
if proc.returncode != 0:
    logger.error("execute_steps::Errors while executing the shell script: %s" % stderr)
    sleep(0.05)  # delay for 50 ms
else:
    logger.info("execute_steps::Successful: %s" % stdout)
Anything wrong I am doing here? I want to print out execute_steps::Successful after executing the shell script in the background.
All other command works fine but only the command which I am trying to run in background doesn't work fine.
There's a couple things going on here.
First, you're launching a shell in the background, and then telling that shell to run the program in the background. I don't know why you think you need both, but let's ignore that for now. In fact, by adding executable='/bin/bash' on top of shell=True, you're actually trying to run a shell to run a shell to run the program in the background, although that doesn't actually quite work.*
Second, you're using PIPE for the process's output and error, but then not reading them. This can cause the child to deadlock. If you don't want the output, use DEVNULL, not PIPE. If you want to process the output yourself, use proc.communicate()** (a sketch follows below), or use a higher-level function like check_output. If you just want it to intermingle with your own output, just leave those arguments off.
* If you're using the shell because kml_http is a non-executable script that has to be run by /bin/bash, then don't use shell=True or executable for that; just make /bin/bash the first argument in the command line and /opt/kml/bin/kml_http the second. But this doesn't seem likely; why would you install something non-executable into a bin directory?
** Or you can read it explicitly from proc.stdout and proc.stderr, but that gets more complicated.
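For that second point, a minimal sketch of capturing the output safely with communicate() (my illustration, using the question's paths; the question's logger is replaced with print to keep the example self-contained):
import subprocess

proc = subprocess.Popen(["/opt/kml/bin/kml_http",
                         "--config=/opt/kml/config/httpd.conf.dev"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        universal_newlines=True)
stdout, stderr = proc.communicate()   # reads both pipes to EOF, then waits for exit
if proc.returncode != 0:
    print("execute_steps::Errors: %s" % stderr)
else:
    print("execute_steps::Successful: %s" % stdout)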
At any rate, the whole point of executing something in the background is that it keeps running in the background, and your script keeps running in the foreground. So, you're checking its returncode before it's finished, and then moving on to whatever's next in your code, and never coming back again.
It seems like you want to wait for it to be finished. In that case, don't run it in the background—use proc.wait, or just use subprocess.call() instead of creating a Popen object. And don't use & either, of course. While we're at it, don't use the shell, either:
retcode = subprocess.call(["/opt/kml/bin/kml_http",
                           "--config=/opt/kml/config/httpd.conf.dev"],
                          stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
if retcode != 0:
    # etc.
Now, you won't get to that if statement until kml_http finishes running.
If you want to wait for it to be finished, but at the same time keep doing other stuff, then you're trying to do two things at once in your program, which means you need a thread to do the waiting:
def run_kml_http():
    retcode = subprocess.call(["/opt/kml/bin/kml_http",
                               "--config=/opt/kml/config/httpd.conf.dev"],
                              stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    if retcode != 0:
        # etc.

t = threading.Thread(target=run_kml_http)
t.start()
# Now you can do other stuff in the main thread, and the background thread will
# wait around until kml_http is finished and execute the `if` statement whenever
# that happens
You're using stderr=PIPE, stdout=PIPE, which means that rather than letting the stdout and stderr of the child process be forwarded to the current process's standard output and error streams, they are redirected to pipes which you must read from in your Python process (via proc.stdout and proc.stderr).
To "background" a process, simply omit the usage of PIPE:
#!/usr/bin/python
from subprocess import Popen
from time import sleep

proc = Popen(
    ['/bin/bash', '-c', 'for i in {0..10}; do echo "BASH: $i"; sleep 1; done'])
for x in range(10):
    print "PYTHON: {0}".format(x)
    sleep(1)
proc.wait()
which will show the process being "backgrounded".

Python Popen not behaving like a subprocess

My problem is this: I need to get output from a subprocess, and I am using the following code to call it (feel free to ignore the long arguments; the important thing is the stdout=subprocess.PIPE):
(stdout, stderr) = subprocess.Popen([self.ChapterToolPath, "-x", book.xmlPath,
                                     "-a", book.aacPath,
                                     "-o", book.outputPath + "/" + fileName + ".m4b"],
                                    stdout=subprocess.PIPE).communicate()
print stdout
Thanks to an answer below, I've been able to get the output of the program, but I still end up waiting for the process to terminate before I get anything. The interesting thing is that in my debugger there is all sorts of text flying by in the console, and it is all ignored. But the moment anything is written to the console in black (I am using PyCharm), the program continues without a problem. Could the main program be waiting for some kind of output in order to move on? This would make sense, because I am trying to communicate with it... Is there a difference between text that I can see in the console and actual text that makes it to stdout? And how would I collect the text written to the console?
Thanks!
The first line of the documentation for subprocess.call() describes it as such:
Run the command described by args. Wait for command to complete, then return the returncode attribute.
Thus, it necessarily waits for the subprocess to exit.
subprocess.Popen(), by contrast, does not do this; it returns a handle on a process with which one can then communicate().
To get all output from a program:
from subprocess import check_output as qx
output = qx([program, arg1, arg2, ...])
To get output while the program is running:
from subprocess import Popen, PIPE
p = Popen([program, arg1, ...], stdout=PIPE)
for line in iter(p.stdout.readline, ''):
    print line,
There might be a buffering issue on the program's side if it prints line by line when run interactively but buffers its output when run as a subprocess. There are various solutions depending on your OS and the program; e.g., you could run it using the pexpect module.
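To illustrate that last suggestion, here is a minimal sketch with the third-party pexpect module (POSIX only; 'program arg1' is a placeholder command), which runs the child in a pseudo-terminal so it typically stays line-buffered as if it were running interactively:
import pexpect  # third-party: pip install pexpect

# the child sees a pseudo-terminal instead of a pipe, so it keeps line-buffering
child = pexpect.spawn('program arg1', encoding='utf-8', timeout=None)
while True:
    line = child.readline()  # returns '' at EOF
    if not line:
        break
    print(line, end='')
child.close()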
