Waiting for output from a subprocess which does not terminate - Python

I need to run a subprocess from my script. The subprocess is an interactive (shell-like) application, to which I issue commands through the subprocess' stdin.
After I issue a command, the subprocess outputs the result to stdout and then waits for the next command (but does not terminate).
For example:
from subprocess import Popen, PIPE
p = Popen(args=[...], stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=False)
# Issue a command:
p.stdin.write('command\n')
# *** HERE: get the result from p.stdout ***
# CONTINUE with the rest of the script once there is no more data in p.stdout.
# NOTE that the subprocess is still running and waiting for the next command
# through stdin.
My problem is getting the result from p.stdout. The script needs to get the output while there is new data in p.stdout; but once there is no more data, I want to continue with the script.
The subprocess does not terminate, so I cannot use communicate() (which waits for the process to terminate).
I tried reading from p.stdout after issuing the command, like this:
res = p.stdout.read()
But the subprocess is not fast enough, and I just get an empty result.
I thought about polling p.stdout in a loop until I get something, but then how do I know I got everything? And it seems wasteful anyway.
Any suggestions?

Use gevent.subprocess from gevent 1.0 as a substitute for the standard subprocess module. It can handle concurrent tasks using synchronous logic and won't block the script. Here is a brief tutorial about gevent.subprocess.
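A minimal sketch, assuming gevent >= 1.0 and a hypothetical some_interactive_tool; gevent.subprocess mirrors the standard subprocess API, but its blocking calls yield to other greenlets instead of stalling the whole script:
import gevent
from gevent.subprocess import Popen, PIPE

p = Popen(['some_interactive_tool'], stdin=PIPE, stdout=PIPE)  # hypothetical command
p.stdin.write(b'command\n')
p.stdin.flush()

def pump():
    # Blocks only this greenlet; the rest of the script keeps running.
    for line in p.stdout:
        print(line)

reader = gevent.spawn(pump)
# ... do other work here ...
gevent.sleep(1)  # give the tool time to respond (tune for your tool)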

Use circuits.io.Process in circuits-dev to wrap an asynchronous call to subprocess.
Example: https://bitbucket.org/circuits/circuits-dev/src/tip/examples/ping.py

After investigating several options I reached two solutions:
Setting the subprocess' stdout stream to be non-blocking by using the fcntl module.
Using a thread to collect the subprocess' output to a proxy queue, and then reading the queue from the main thread.
I describe both solutions (and the problem and its origin) in this post.
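A minimal sketch of the second approach, assuming Python 3 and a hypothetical some_interactive_tool; a background thread drains stdout into a queue, and the main thread treats a short quiet period as "no more data for this command":
import queue
import threading
from subprocess import Popen, PIPE

def drain(stream, q):
    # Background thread: forward every output line to the queue.
    for line in iter(stream.readline, b''):
        q.put(line)

p = Popen(['some_interactive_tool'], stdin=PIPE, stdout=PIPE)  # hypothetical command
q = queue.Queue()
threading.Thread(target=drain, args=(p.stdout, q), daemon=True).start()

p.stdin.write(b'command\n')
p.stdin.flush()

result = []
while True:
    try:
        result.append(q.get(timeout=0.5))  # tune the timeout to the tool's latency
    except queue.Empty:
        break  # no new data for 0.5 s: assume this command's output is complete
print(b''.join(result).decode())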

Related

No stdout from killed subprocess

I have a homework assignment to capture a 4-way handshake between a client and AP using Scapy. I'm trying to use "aircrack-ng capture.pcap" to check for valid handshakes in the capture file I created using Scapy.
I launch the program using Popen. The program waits for user input, so I have to kill it. When I try to get stdout after killing it, the output is empty.
I've tried stdout.read(), I've tried communicate(), I've tried reading stderr, and I've tried it both with and without a shell.
from subprocess import Popen, PIPE
check = Popen("aircrack-ng capture.pcap", shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
check.kill()
print(check.stdout.read())
While you shouldn't do this (relying on hardcoded delays is inherently race-condition-prone), the issue is that your kill() is delivered while sh is still starting up. This can be demonstrated by the problem being "solved" (not reliably, but well enough for a demonstration) by a tiny sleep, just long enough to let the shell start up and the echo run:
import time
from subprocess import Popen, PIPE
check=Popen("echo hello && sleep 1000", shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
time.sleep(0.01) # BAD PRACTICE: Race-condition-prone, use one of the below instead.
check.kill()
print(check.stdout.read())
That said, a much better-practice solution would be to close the stdin descriptor so the child's reads from it immediately return 0-byte results. On newer versions of Python (modern 3.x), you can do that with DEVNULL:
from subprocess import Popen, PIPE, DEVNULL
check = Popen("echo hello && read input && sleep 1000",
              shell=True, stdin=DEVNULL, stdout=PIPE, stderr=PIPE)
print(check.stdout.read())
...or, with Python 2.x, a similar effect can be achieved by passing an empty string to communicate(), thus close()ing the stdin pipe immediately:
from subprocess import Popen, PIPE
check = Popen("echo hello && read input && sleep 1000",
              shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
print(check.communicate('')[0])
Never, and I mean never, kill a process as part of normal operation. There's no guarantee whatsoever how far it has proceeded by the time you kill it, so you cannot expect any specific results from it in such a case.
To explicitly pass nothing to a subprocess as input to prevent hanging when it tries to read stdin:
connect its stdin to /dev/null (NUL on Windows) as per run a process to /dev/null in python:
p=Popen(<...>, stdin=open(os.devnull)) #or stdin=subprocess.DEVNULL in Python 3.3+
or use stdin=PIPE and <process>.communicate() without arguments -- this will pass an empty stream
Use <process>.communicate(), or use subprocess.check_output() instead of Popen to read output reliably
A process, in the general case, is not guaranteed to output any data at any particular moment due to I/O buffering. So you need to read the output stream after the process completes to be sure you've got everything.
At the same time, you need to keep reading the stream in the meantime if the process can produce enough output to fill an I/O buffer¹. Otherwise, it will hang waiting for you to read the buffered data. If both stdout and stderr are PIPEs, you need to read them both, in parallel -- i.e. in different threads.
communicate() and check_output (which uses the former under the hood) achieve this by reading stdout and stderr in two separate threads.
Prefer convenience functions to Popen for common use cases -- in your case, check_output -- as they take care of all the aforementioned caveats for you.
¹ Pipes are fully buffered, and a typical buffer size is 64 KB.
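For the aircrack-ng question above, a sketch combining both recommendations (no stdin, convenience function), assuming Python 3.3+ for DEVNULL:
from subprocess import check_output, DEVNULL, STDOUT

# check_output reads the pipe while waiting, so a full buffer can't hang the child;
# stdin=DEVNULL stops aircrack-ng from waiting for user input.
out = check_output(['aircrack-ng', 'capture.pcap'], stdin=DEVNULL, stderr=STDOUT)
print(out.decode())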

Running a nohup command through SSH with Python's Popen then logging out

I'm trying to use this answer to issue a long-running process with nohup using subprocess.Popen, then exit the connection.
From the login shell, if I run
$ nohup sleep 20 &
$ exit
I get the expected behavior, namely that I can log out while the sleep process is still running, and connect back later to check on its status. If I try to do the same through Python, however, it seems as though the exit command does not get executed until the sleeping time is over.
import subprocess
HOST=<host>
sshProcess = subprocess.Popen(['ssh', HOST],
                              stdin=subprocess.PIPE,
                              stdout=subprocess.PIPE,
                              universal_newlines=True,
                              bufsize=0)
sshProcess.stdin.write("nohup sleep 20 &\n")
sshProcess.stdin.write("exit\n")
sshProcess.stdin.close()
What am I missing?
From the docs: Python Docs
Warning Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.
Alright, pretty sure I got it now:
About communicate(): [...] Wait for process to terminate
So I guess your earlier solution was better. But if you call it at the end it shouldn't be a problem, or just don't call it at all if you don't need stdout or stderr as output.
However, according to this StackOverflow comment, if you set preexec_fn=os.setpgrp in your Popen call it should work.
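An untested sketch of that suggestion applied to the question's code (HOST as in the question):
import os
import subprocess

sshProcess = subprocess.Popen(['ssh', HOST],
                              stdin=subprocess.PIPE,
                              stdout=subprocess.PIPE,
                              universal_newlines=True,
                              bufsize=0,
                              preexec_fn=os.setpgrp)  # run the child in its own process group
sshProcess.stdin.write("nohup sleep 20 &\n")
sshProcess.stdin.write("exit\n")
sshProcess.stdin.close()
sshProcess.wait()  # returns once the remote shell exits; sleep keeps running remotely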

Async read of console output with Python while the process is running

I want to run a long process (a CalculiX simulation) via Python.
As mentioned here, one can read the console output with communicate().
As far as I understand, the string is returned only after the process is completed? Is there a possibility to get the console output while the process is running?
You have to use subprocess.Popen.poll to check whether the process has terminated or not.
while sub_process.poll() is None:
    output_line = sub_process.stdout.readline()
This will give you runtime output.
This should work:
sp = subprocess.Popen([your args], stdout=subprocess.PIPE)
while sp.poll() is None:  # sp.poll() returns None while the subprocess is running
    output = sp.stdout.readline()  # read stdout line by line while the process runs
    # Do stuff with output
Notice we don't call communicate() on the subprocess here.

Python Popen().stdout.read() hang

I'm trying to get the output of another script using Python's subprocess.Popen, as follows:
process = Popen(command, stdout=PIPE, shell=True)
exitcode = process.wait()
output = process.stdout.read() # hangs here
It hangs at the third line, but only when I run it as a Python script; I cannot reproduce this in the Python shell.
The other script prints just a few words, so I assume it's not a buffer issue.
Does anyone have an idea about what I am doing wrong here?
You probably want to use .communicate() rather than .wait() plus .read(). Note the warning about wait() on the subprocess documentation page:
Warning This will deadlock when using stdout=PIPE and/or stderr=PIPE and the child process generates enough output to a pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use communicate() to avoid that.
http://docs.python.org/2/library/subprocess.html#subprocess.Popen.wait
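For reference, the questioner's snippet rewritten with communicate(), keeping command as given:
from subprocess import Popen, PIPE

process = Popen(command, stdout=PIPE, shell=True)
output, _ = process.communicate()  # drains stdout while waiting, so the child can't block
exitcode = process.returncode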
read() waits for EOF before returning.
You can:
wait for the subprocess to die; then read() will return.
use readline() if your output is broken into lines (it will still hang if no complete line arrives).
use os.read(F, N), which returns at most N bytes from F but will still block if the pipe is empty (unless O_NONBLOCK is set on the fd; see the sketch below).
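A minimal sketch of the non-blocking variant, assuming POSIX, Python 3, and a hypothetical some_tool:
import fcntl
import os
from subprocess import Popen, PIPE

p = Popen(['some_tool'], stdout=PIPE)  # hypothetical command

# Set O_NONBLOCK on the pipe so os.read() returns immediately when it is empty.
fd = p.stdout.fileno()
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)

try:
    chunk = os.read(fd, 65536)  # at most one pipe buffer's worth
except BlockingIOError:
    chunk = b''  # pipe currently empty; try again later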
You can see how to deal with hanging reads of stdout/stderr in the following source:
readingproc

Proper way of re-using and closing a subprocess object

I have the following code in a loop:
while True:
    # Define shell_command
    p1 = Popen(shell_command, shell=shell_type, stdout=PIPE, stderr=PIPE, preexec_fn=os.setsid)
    result = p1.stdout.read()
    # Define condition
    if condition:
        break
where shell_command is something like ls (it just prints stuff).
I have read in different places that I can close/terminate/exit a Popen object in a variety of ways, e.g.:
p1.stdout.close()
p1.stdin.close()
p1.terminate()
p1.kill()
My question is:
What is the proper way of closing a subprocess object once we are done using it?
Considering the nature of my script, is there a way to open a subprocess object only once and reuse it with different shell commands? Would that be more efficient in any way than opening new subprocess objects each time?
Update
I am still a bit confused about the sequence of steps to follow depending on whether I use p1.communicate() or p1.stdout.read() to interact with my process.
From what I understood in the answers and the comments:
If I use p1.communicate() I don't have to worry about releasing resources, since communicate() would wait until the process is finished, grab the output and properly close the subprocess object
If I follow the p1.stdout.read() route (which I think fits my situation, since the shell command is just supposed to print stuff) I should call things in this order:
p1.wait()
p1.stdout.read()
p1.terminate()
Is that right?
What is the proper way of closing a subprocess object once we are done using it?
stdout.close() and stdin.close() will not terminate a process unless it exits itself on end of input or on write errors.
.terminate() and .kill() both do the job, with kill being a bit more "drastic" on POSIX systems, as SIGKILL is sent, which cannot be ignored by the application. Specific differences are explained in this blog post, for example. On Windows, there's no difference.
Also, remember to .wait() and to close the pipes after killing a process to avoid zombies and force the freeing of resources.
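A minimal cleanup sequence, sketched with a stand-in long-running child:
from subprocess import Popen, PIPE

p = Popen(['sleep', '1000'], stdout=PIPE, stderr=PIPE)  # stand-in child
p.terminate()     # SIGTERM on POSIX; a child may catch and ignore it
p.wait()          # reap the exit status so the child doesn't linger as a zombie
p.stdout.close()  # free the pipe file descriptors
p.stderr.close()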
A special case that is often encountered is a process which reads from STDIN and writes its result to STDOUT, closing itself when EOF is encountered. With these kinds of programs, it's often sensible to use communicate():
>>> p = Popen(["sort"], stdin=PIPE, stdout=PIPE)
>>> p.communicate("4\n3\n1")
('1\n3\n4\n', None)
>>> p.returncode
0
This can also be used for programs which print something and exit right after:
>>> p = Popen(["ls", "/home/niklas/test"], stdin=PIPE, stdout=PIPE)
>>> p.communicate()
('file1\nfile2\n', None)
>>> p.returncode
0
Considering the nature of my script, is there a way to open a subprocess object only once and reuse it with different shell commands? Would that be more efficient in any way than opening new subprocess objects each time?
I don't think the subprocess module supports this and I don't see what resources could be shared here, so I don't think it would give you a significant advantage.
Considering the nature of my script, is there a way to open a subprocess object only once and reuse it with different shell commands?
Yes.
#!/usr/bin/env python
from __future__ import print_function
import uuid
import random
from subprocess import Popen, PIPE, STDOUT

MARKER = str(uuid.uuid4())
shell_command = 'echo a'
p = Popen('sh', stdin=PIPE, stdout=PIPE, stderr=STDOUT,
          universal_newlines=True)  # decode output as utf-8, newline is '\n'
while True:
    # write next command
    print(shell_command, file=p.stdin)
    # insert MARKER into stdout to separate output from different shell_command
    print("echo '%s'" % MARKER, file=p.stdin)
    p.stdin.flush()  # make sure the commands reach the shell
    # read command output
    for line in iter(p.stdout.readline, MARKER + '\n'):
        if line.endswith(MARKER + '\n'):
            print(line[:-len(MARKER) - 1])
            break  # command output ended without a newline
        print(line, end='')
    # exit on condition
    if random.random() < 0.1:
        break
# cleanup
p.stdout.close()
if p.stderr:
    p.stderr.close()
p.stdin.close()
p.wait()
Put while True inside try: ... finally: to perform the cleanup in case of exceptions. On Python 3.2+ you could use with Popen(...): instead.
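For example, a sketch of the Python 3.2+ form; the context manager closes the pipes and waits for the process on exit:
from subprocess import Popen, PIPE, STDOUT

with Popen('sh', stdin=PIPE, stdout=PIPE, stderr=STDOUT,
           universal_newlines=True) as p:
    print('echo a', file=p.stdin)
    p.stdin.close()  # EOF makes sh exit once the command is done
    print(p.stdout.read(), end='')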
Would that be more efficient in any way than opening new subprocess objects each time?
Does it matter in your case? Don't guess. Measure it.
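For instance, a rough way to measure the per-spawn overhead (assumes Python 3.5+ for subprocess.run and a POSIX true binary; numbers are machine-dependent):
import timeit

# Average wall-clock cost of spawning one trivial subprocess.
t = timeit.timeit("subprocess.run(['true'])", setup="import subprocess", number=100)
print('avg spawn cost: %.2f ms' % (t / 100 * 1000))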
The "correct" order is:
Create a thread to read stdout (and a second one to read stderr, unless you merged them into one).
Write commands to be executed by the child to stdin. If you're not reading stdout at the same time, writing to stdin can block.
Close stdin (this is the signal for the child that it can now terminate by itself whenever it is done)
When stdout returns EOF, the child has terminated. Note that you need to synchronize the stdout reader thread and your main thread.
call wait() to see if there was a problem and to clean up the child process
If you need to stop the child process for any reason (maybe the user wants to quit), then you can:
Close stdin if the child terminates when it reads EOF.
Kill the child with terminate(). This is the correct solution for child processes which ignore stdin.
If the child doesn't respond, try kill().
In all three cases, you must call wait() to clean up the dead child process.
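A minimal sketch of that sequence, assuming Python 3 and using cat as a stand-in child that echoes its stdin:
import threading
from subprocess import Popen, PIPE

p = Popen(['cat'], stdin=PIPE, stdout=PIPE)  # stand-in child

chunks = []
def reader():
    # 1. Drain stdout in a thread so writes to stdin cannot deadlock.
    for line in iter(p.stdout.readline, b''):
        chunks.append(line)

t = threading.Thread(target=reader)
t.start()

p.stdin.write(b'hello\n')  # 2. write commands to the child
p.stdin.close()            # 3. EOF tells the child it may finish
t.join()                   # 4. EOF on stdout means the child is done
print(p.wait(), b''.join(chunks))  # 5. reap the child and check its exit status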
Depends on what you expect the process to do; you should always call p1.wait() in order to avoid zombies. Other steps depend on the behaviour of the subprocess: if it produces any output, you should consume it (e.g. p1.stdout.read(), but this would eat lots of memory) and only then call p1.wait(); or you may wait for some timeout and call p1.terminate() to kill the process if you think it isn't working as expected, and possibly call p1.wait() to clean up the zombie.
Alternatively, p1.communicate(...) would do the handling of I/O and waiting for you (but not the killing).
Subprocess objects aren't supposed to be reused.
