Under what condition does a Python subprocess get a SIGPIPE? - python

I am reading the Python documentation on the Popen class in the subprocess module and I came across the following code:
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
output = p2.communicate()[0]
The documentation also states that
"The p1.stdout.close() call after starting the p2 is important in order for p1 to receive a SIGPIPE if p2 exits before p1.
Why must the p1.stdout be closed before we can receive a SIGPIPE and how does p1 knows that p2 exits before p1 if we already closed it?

SIGPIPE is a signal that would be sent to dmesg if it tried to write to a pipe whose read end has been closed. Here, dmesg's stdout pipe ends up with two read handles: one in your Python process and one in the grep process.
That's because subprocess duplicates file handles (using the os.dup2() function). Configuring p2 with stdin=p1.stdout triggers an os.dup2() call that asks the OS to duplicate the pipe file handle; the duplicate is what connects dmesg to grep.
With two open read handles on dmesg's stdout, dmesg is never given a SIGPIPE if only one of them closes early, so grep exiting would never be detected; dmesg would needlessly continue to produce output.
So by closing p1.stdout in the parent immediately, you ensure that the only remaining handle reading from dmesg's stdout is the one held by the grep process, and if that process exits, dmesg receives a SIGPIPE on its next write.
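As a hedged illustration (not from the documentation), you can watch the same mechanics with yes and head standing in for dmesg and grep, since yes writes forever and head exits almost immediately:
from subprocess import Popen, PIPE

p1 = Popen(["yes"], stdout=PIPE)
p2 = Popen(["head", "-n", "1"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()             # the parent drops its read handle; head now holds the only one
print(p2.communicate()[0])    # b'y\n'
print(p1.wait())              # -13 on Linux: yes was killed by SIGPIPE once head exited
Without the p1.stdout.close() line, yes would keep writing until the pipe buffer filled and then hang, because the parent's read handle keeps the pipe alive.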

Related

Python close_fds not clear

I had an issue with close_fds in Python 2.7, so after doing some research I found this example:
from subprocess import Popen, PIPE, STDOUT
p1 = Popen(['cat'], stdin=PIPE, stdout=PIPE)
p2 = Popen(['grep', 'a'], stdin=p1.stdout, stdout=PIPE)
p1.stdin.write("aaaaaaaaaaaaaaaa\n")
p1.stdin.close()
p2.stdout.read()
My problem is that I can't understand why p1.stdin remains open. p1 is not a child of p2, so p2 shouldn't inherit any p1 resource except p1.stdout, which is explicitly passed. Furthermore, why does setting close_fds=True in p2 resolve the issue? The documentation says:
If close_fds is true, all file descriptors except 0, 1 and 2 will be closed before the child process is executed.
So even if I could understand the inheritance between p1 and p2, p1.stdin still shouldn't be closed by close_fds=True, because it is the standard input (fd 0).
Since p1 and p2 are siblings, there is no inheritance going on between their corresponding processes directly.
However, consider the file descriptor that the parent sees as p1.stdin, inherited by p1 and redirected to its stdin. This file descriptor exists in the parent process (with a number other than 0, 1, or 2 - you can verify this by printing p1.stdin.fileno()), and it has to exist, because we intend to write to it from the parent. It is this file descriptor that is unintentionally inherited and kept open by p2.
When an open file is referenced by multiple file descriptors, as is the case with p1.stdin, it is only closed when all the descriptors are closed. This is why it is necessary to both close p1.stdin and pass close_fds to p2. (If you implemented the spawning code manually, you would simply close the file descriptor after the second fork().)
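Here is a sketch of that fix (my own, written for Python 3, where close_fds=True is already the default; on Python 2.7 you must pass it explicitly as discussed):
from subprocess import Popen, PIPE

p1 = Popen(['cat'], stdin=PIPE, stdout=PIPE)
print(p1.stdin.fileno())          # a descriptor > 2 in the parent, not fd 0
p2 = Popen(['grep', 'a'], stdin=p1.stdout, stdout=PIPE, close_fds=True)
p1.stdin.write(b"aaaaaaaaaaaaaaaa\n")
p1.stdin.close()                  # the last descriptor for cat's stdin is now gone
print(p2.stdout.read())           # cat sees EOF, grep finishes, read() returns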

Python subprocesses with several pipes

I know how to do several "nested" pipes using subprocess; however, I have another doubt. I want to do the following:
p1 = Popen(cmd1, stdout=PIPE)
p2 = Popen(cmd2, stdin=p1.stdout)
p3 = Popen(cmd3, stdin=p1.stdout)
Take into account that p3 uses p1.stdout instead of p2.stdout. The problem is that after doing p2, p1.stdout is blank. Please help me!
You can't send the same pipe to two different processes. Or, rather, if you do, they end up accessing the same pipe, meaning if one process reads something, it's no longer available to the other one.
What you need to do is "tee" the data in some way.
If you don't need to stream the data as they come in, you can read all the output from p1, then send it as input to both p2 and p3. This is easy:
from subprocess import check_output, Popen, PIPE

output = check_output(cmd1)
p2 = Popen(cmd2, stdin=PIPE)
p2.communicate(output)
p3 = Popen(cmd3, stdin=PIPE)
p3.communicate(output)
If you just need p2 and p3 to run in parallel, you can just run them each in a thread.
But if you actually need real-time streaming, you have to connect things up more carefully. If you can be sure that p2 and p3 will always consume their input, without blocking, faster than p1 can supply it, you can do this without threads (just loop on p1.stdout.read()), but otherwise, you'll need an output thread for each consumer process, and a Queue or some other way to pass the data around. See the source code to communicate for more ideas on how to synchronize the separate threads.
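For the parallel case mentioned above, a minimal sketch (assuming cmd1, cmd2, and cmd3 are defined as in the question): collect p1's output once, then push it into p2 and p3 from two threads.
from subprocess import Popen, PIPE, check_output
from threading import Thread

output = check_output(cmd1)                  # read everything from the first command
p2 = Popen(cmd2, stdin=PIPE)
p3 = Popen(cmd3, stdin=PIPE)
t2 = Thread(target=p2.communicate, args=(output,))
t3 = Thread(target=p3.communicate, args=(output,))
t2.start(); t3.start()                       # feed both consumers in parallel
t2.join(); t3.join()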
If you want to copy the output from a subprocess to other processes without reading all output at once then here's an implementation of #abarnert's suggestion to loop over p1.stdout that achieves it:
from subprocess import Popen, PIPE
# start subprocesses
p1 = Popen(cmd1, stdout=PIPE, bufsize=1)
p2 = Popen(cmd2, stdin=PIPE, bufsize=1)
p3 = Popen(cmd3, stdin=PIPE, bufsize=1)
# "tee" the data
for line in iter(p1.stdout.readline, b''):  # assume line-oriented data
    p2.stdin.write(line)
    p3.stdin.write(line)
# clean up
for pipe in [p1.stdout, p2.stdin, p3.stdin]:
    pipe.close()
for proc in [p1, p2, p3]:
    proc.wait()

Python subprocess reading process terminates before writing process example, clarification needed

Code snippet from: http://docs.python.org/3/library/subprocess.html#replacing-shell-pipeline
output=`dmesg | grep hda`
# becomes
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
output = p2.communicate()[0]
Question: I do not quite understand why this line is needed: p1.stdout.close()?
What if, by doing this, p1's stdout is closed even before it is completely done outputting data while p2 is still alive? Aren't we risking that by closing p1.stdout so soon? How does this work?
p1.stdout.close() closes Python's copy of the file descriptor. p2 already has that descriptor open (via stdin=p1.stdout), so closing Python's copy doesn't affect p2. However, that pipe end is now open in only one place, so when it is closed (e.g. because p2 dies), p1 will get SIGPIPE on its next write.
If you didn't close p1.stdout in Python and p2 died, p1 would get no signal, because Python's descriptor would be holding the pipe open.
Pipes are external to processes (it's an operating system thing) and are accessed by processes using read and write handles. Many processes can have handles to the same pipe and can read and write in all sorts of disastrous ways if not managed properly. Pipes close only when all handles to them are closed.
Although process execution works differently in Linux and Windows, here is basically what happens (I'm going to get killed on this!):
p1 = Popen(["dmesg"], stdout=PIPE)
Create pipe_1, give a write handle to dmesg as its stdout, and return a read handle in the parent as p1.stdout. You now have 1 pipe with 2 handles (pipe_1 write in dmesg, pipe_1 read in the parent).
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
Create pipe_2. Give grep a write handle to pipe_2 and a copy of the read handle to pipe_1. You now have 2 pipes and 5 handles (pipe_1 write in dmesg, pipe_1 read and pipe_2 write in grep, pipe_1 read and pipe_2 read in the parent).
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
Notice that pipe_1 has two read handles. You want grep to have the read handle so that it reads dmesg data. You don't need the handle in the parent any more. Close it so that there is only 1 read handle on pipe_1. If grep dies, its pipe_1 read handle is closed, the operating system notices there are no remaining read handles for pipe_1 and gives dmesg the bad news.
output = p2.communicate()[0]
dmesg sends data to stdout (the pipe_1 write handle) which begins filling pipe_1. grep reads stdin (the pipe_1 read handle) which empties pipe_1. grep also writes stdout (the pipe_2 write handle) filling pipe_2. The parent process reads pipe_2... and you got yourself a pipeline!
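As a hedged, Linux-only illustration of this bookkeeping (not part of the original answer), you can peek at /proc/<pid>/fd to see which pipe handles each process holds before and after the close; dmesg may already have exited by the time you look, so the exact counts are only illustrative.
import os
from subprocess import Popen, PIPE

def pipe_fds(pid):
    # list the descriptors of a process that point at pipes (Linux-specific)
    d = "/proc/%d/fd" % pid
    out = {}
    for fd in os.listdir(d):
        try:
            target = os.readlink(os.path.join(d, fd))
        except OSError:
            continue
        if target.startswith("pipe:"):
            out[int(fd)] = target
    return out

p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
print(pipe_fds(os.getpid()))   # parent: read end of pipe_1 plus read end of pipe_2
print(pipe_fds(p2.pid))        # grep: read end of pipe_1, write end of pipe_2
p1.stdout.close()              # give up the parent's extra read handle on pipe_1
print(p2.communicate()[0])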

Explain example pipeline from Python subprocess module

Section 17.1.4.2: Replacing shell pipeline of the python subprocess module says to replace
output=`dmesg | grep hda`
with
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
output = p2.communicate()[0]
The comment on the third line explains why the close function is being called, but not why it makes sense. It doesn't, to me. Won't closing p1.stdout before the communicate method is called prevent any output from being sent through the pipe? (Obviously it won't; I've run the code and it runs fine.) Why is it necessary to call close to make p1 receive SIGPIPE? What kind of close is it that doesn't close? What, exactly, is it closing?
Please consider this an academic question, I'm not trying to accomplish anything except understanding these things better.
You are closing the parent's copy of the pipe's read end (p1.stdout), thus leaving grep as the only process with that file descriptor open. If you didn't do this, then even after grep exited, the read end would still be open in the parent, and dmesg would never get a SIGPIPE. (The OS basically keeps a reference count of open read ends and delivers SIGPIPE to the writer only when it hits zero. If you don't close your copy, you prevent it from ever reaching zero.)
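To see that reference counting at the OS level, here is a small sketch of my own using os.pipe (the Python interpreter ignores SIGPIPE by default, so the doomed write shows up as a BrokenPipeError/EPIPE on Python 3 instead of killing the process):
import os

r, w = os.pipe()
r2 = os.dup(r)                     # a second read handle, like the copy grep inherits
os.close(r)                        # one reader gone, but r2 still keeps the pipe open
os.write(w, b"still fine\n")       # succeeds: the read-end count is not yet zero
os.close(r2)                       # last read end closed; count drops to zero
try:
    os.write(w, b"boom\n")         # the writer is now notified
except BrokenPipeError:
    print("EPIPE / SIGPIPE territory: no readers left")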

blocks - send input to python subprocess pipeline

I'm testing subprocess pipelines with Python. I'm aware that I can do what the programs below do in Python directly, but that's not the point. I just want to test the pipeline so I know how to use it.
My system is Linux Ubuntu 9.04 with default python 2.6.
I started with this documentation example.
from subprocess import Popen, PIPE
p1 = Popen(["grep", "-v", "not"], stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]
print output
That works, but since p1's stdin is not being redirected, I have to type stuff in the terminal to feed the pipe. When I type ^D closing stdin, I get the output I want.
However, I want to send data to the pipe using a python string variable. First I tried writing on stdin:
p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
p1.stdin.write('test\n')
output = p2.communicate()[0] # blocks forever here
Didn't work. I tried using p2.stdout.read() instead on the last line, but it also blocks. I added p1.stdin.flush() and p1.stdin.close() but it didn't work either. Then I moved to communicate:
p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
p1.communicate('test\n') # blocks forever here
output = p2.communicate()[0]
So that's still not it.
I noticed that running a single process (like p1 above, removing p2) works perfectly. And passing a file handle to p1 (stdin=open(...)) also works. So the problem is:
Is it possible to pass data to a pipeline of 2 or more subprocesses in python, without blocking? Why not?
I'm aware I could run a shell and run the pipeline in the shell, but that's not what I want.
UPDATE 1: Following Aaron Digulla's hint below I'm now trying to use threads to make it work.
First I've tried running p1.communicate on a thread.
p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
t = threading.Thread(target=p1.communicate, args=('some data\n',))
t.start()
output = p2.communicate()[0] # blocks forever here
Okay, didn't work. Tried other combinations like changing it to .write() and also p2.read(). Nothing. Now let's try the opposite approach:
def get_output(subp):
    output = subp.communicate()[0]  # blocks on thread
    print 'GOT:', output

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
t = threading.Thread(target=get_output, args=(p2,))
t.start()
p1.communicate('data\n')  # blocks here.
t.join()
The code ends up blocking somewhere, either in the spawned thread, in the main thread, or both. So it didn't work. If you know how to make it work, it would be easier if you could provide working code. I'm trying here.
UPDATE 2
Paul Du Bois answered below with some information, so I did more tests.
I've read the entire subprocess.py module and understood how it works. So I tried applying exactly that to my code.
I'm on Linux, but since I was testing with threads, my first approach was to replicate the exact Windows threading code seen in subprocess.py's communicate() method, but for two processes instead of one. Here's the entire listing of what I tried:
import os
from subprocess import Popen, PIPE
import threading
def get_output(fobj, buffer):
    while True:
        chunk = fobj.read()  # BLOCKS HERE
        if not chunk:
            break
        buffer.append(chunk)

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
b = []  # create a buffer
t = threading.Thread(target=get_output, args=(p2.stdout, b))
t.start()  # start reading thread
for x in xrange(100000):
    p1.stdin.write('hello world\n')  # write data
    p1.stdin.flush()
p1.stdin.close()  # close input...
t.join()
Well. It didn't work. Even after p1.stdin.close() was called, p2.stdout.read() still blocks.
Then I tried the POSIX code from subprocess.py:
import os
from subprocess import Popen, PIPE
import select
p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
numwrites = 100000
to_read = [p2.stdout]
to_write = [p1.stdin]
b = [] # create buffer
while to_read or to_write:
    read_now, write_now, xlist = select.select(to_read, to_write, [])
    if read_now:
        data = os.read(p2.stdout.fileno(), 1024)
        if not data:
            p2.stdout.close()
            to_read = []
        else:
            b.append(data)
    if write_now:
        if numwrites > 0:
            numwrites -= 1
            p1.stdin.write('hello world!\n'); p1.stdin.flush()
        else:
            p1.stdin.close()
            to_write = []
print b
Also blocks on select.select(). By spreading prints around, I found out this:
Reading is working. Code reads many times during execution.
Writing is also working. Data is written to p1.stdin.
At the end of numwrites, p1.stdin.close() is called.
When select() starts blocking, only to_read has something, p2.stdout. to_write is already empty.
os.read() call always returns something, so p2.stdout.close() is never called.
Conclusion from both tests: Closing the stdin of the first process on the pipeline (grep in the example) is not making it dump its buffered output to the next and die.
No way to make it work?
PS: I don't want to use a temporary file, I've already tested with files and I know it works. And I don't want to use windows.
I found out how to do it.
It is not about threads, and not about select().
When I run the first process (grep), it creates two low-level file descriptors, one for each pipe. Let's call those a and b.
When I run the second process, b gets passed to cut's stdin. But there is a brain-dead default on Popen - close_fds=False.
The effect of that is that cut also inherits a. So grep never sees EOF even if I close a in the parent, because the write end of its stdin is still open in cut's process (cut just ignores it).
The following code now runs perfectly.
from subprocess import Popen, PIPE
p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
p1.stdin.write('Hello World\n')
p1.stdin.close()
result = p2.stdout.read()
assert result == "Hello Worl\n"
close_fds=True SHOULD BE THE DEFAULT on unix systems. On windows it closes all fds, so it prevents piping.
EDIT:
PS: For people with a similar problem reading this answer: As pooryorick said in a comment, that also could block if data written to p1.stdin is bigger than the buffers. In that case you should chunk the data into smaller pieces, and use select.select() to know when to read/write. The code in the question should give a hint on how to implement that.
EDIT2: Found another solution, with more help from pooryorick - instead of using close_fds=True and closing ALL fds, one could close the fds that belong to the first process when executing the second, and it will work. The closing must be done in the child, so the preexec_fn argument to Popen comes in very handy for just that. When executing p2 you can do:
devnull = open("/dev/null", "w")  # stderr target used in the original snippet
p2 = Popen(cmd2, stdin=p1.stdout, stdout=PIPE, stderr=devnull, preexec_fn=p1.stdin.close)
Working with large files
Two principles need to be applied uniformly when working with large files in Python.
Since any IO routine can block, we must keep each stage of the pipeline in a different thread or process. We use threads in this example, but subprocesses would let you avoid the GIL.
We must use incremental reads and writes so that we don't wait for EOF before starting to make progress.
An alternative is to use nonblocking IO, though this is cumbersome in standard Python. See gevent for a lightweight threading library that implements the synchronous IO API using nonblocking primitives.
Example code
We'll construct a silly pipeline that is roughly
{cat /usr/share/dict/words} | grep -v not \
| {upcase, filtered tee to stderr} | cut -c 1-10 \
| {translate 'E' to '3'} | grep K | grep Z | {downcase}
where each stage in braces {} is implemented in Python while the others use standard external programs. TL;DR: See this gist.
We start with the expected imports.
#!/usr/bin/env python
from subprocess import Popen, PIPE
import sys, threading
Python stages of the pipeline
All but the last Python-implemented stage of the pipeline needs to go in a thread so that its IO does not block the others. These could instead run in Python subprocesses if you wanted them to actually run in parallel (avoid the GIL).
def writer(output):
    for line in open('/usr/share/dict/words'):
        output.write(line)
    output.close()

def filter(input, output):
    for line in input:
        if 'k' in line and 'z' in line:  # Selective 'tee'
            sys.stderr.write('### ' + line)
        output.write(line.upper())
    output.close()

def leeter(input, output):
    for line in input:
        output.write(line.replace('E', '3'))
    output.close()
Each of these needs to be put in its own thread, which we'll do using this convenience function.
def spawn(func, **kwargs):
    t = threading.Thread(target=func, kwargs=kwargs)
    t.start()
    return t
Create the pipeline
Create the external stages using Popen and the Python stages using spawn. The argument bufsize=-1 says to use the system default buffering (usually 4 kiB). This is generally faster than the default (unbuffered) or line buffering, but you'll want line buffering if you want to visually monitor the output without lags.
grepv = Popen(['grep','-v','not'], stdin=PIPE, stdout=PIPE, bufsize=-1)
cut = Popen(['cut','-c','1-10'], stdin=PIPE, stdout=PIPE, bufsize=-1)
grepk = Popen(['grep', 'K'], stdin=PIPE, stdout=PIPE, bufsize=-1)
grepz = Popen(['grep', 'Z'], stdin=grepk.stdout, stdout=PIPE, bufsize=-1)
twriter = spawn(writer, output=grepv.stdin)
tfilter = spawn(filter, input=grepv.stdout, output=cut.stdin)
tleeter = spawn(leeter, input=cut.stdout, output=grepk.stdin)
Drive the pipeline
Assembled as above, all the buffers in the pipeline will fill up, but since nobody is reading from the end (grepz.stdout), they will all block. We could read the entire thing in one call to grepz.stdout.read(), but that would use a lot of memory for large files. Instead, we read incrementally.
for line in grepz.stdout:
    sys.stdout.write(line.lower())
The threads and processes clean up once they reach EOF. We can explicitly clean up using
for t in [twriter, tfilter, tleeter]: t.join()
for p in [grepv, cut, grepk, grepz]: p.wait()
Python-2.6 and earlier
Internally, subprocess.Popen calls fork, configures the pipe file descriptors, and calls exec. The child process from fork has copies of all file descriptors in the parent process, and both copies will need to be closed before the corresponding reader will get EOF. This can be fixed by manually closing the pipes (either by close_fds=True or a suitable preexec_fn argument to subprocess.Popen) or by setting the FD_CLOEXEC flag to have exec automatically close the file descriptor. This flag is set automatically in Python-2.7 and later, see issue12786. We can get the Python-2.7 behavior in earlier versions of Python by calling
p._set_cloexec_flag(p.stdin)
before passing p.stdin as an argument to a subsequent subprocess.Popen.
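If you would rather not rely on a private method, here is a hedged sketch of setting FD_CLOEXEC by hand with fcntl (assuming a POSIX system; on Python 3.4+ the descriptors Popen creates are already non-inheritable, so this mainly matters for older versions):
import fcntl
from subprocess import Popen, PIPE

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
# Mark the parent's write end of p1's stdin close-on-exec, so the exec() in a
# later Popen closes it automatically in that child.
flags = fcntl.fcntl(p1.stdin, fcntl.F_GETFD)
fcntl.fcntl(p1.stdin, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
# Now closing p1.stdin in the parent really closes the last descriptor.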
There are three main tricks to making pipes work as expected:
1. Make sure each end of the pipe is used in a different thread/process (some of the examples near the top suffer from this problem).
2. Explicitly close the unused end of the pipe in each process.
3. Deal with buffering by either disabling it (Python's -u option), using ptys, or simply filling up the buffer with something that won't affect the data (maybe '\n', but whatever fits).
The examples in the Python "pipeline" module (I'm the author) fit your scenario
exactly, and make the low-level steps fairly clear.
http://pypi.python.org/pypi/pipeline/
More recently, I used the subprocess module as part of a
producer-processor-consumer-controller pattern:
http://www.darkarchive.org/w/Pub/PythonInteract
This example deals with buffered stdin without resorting to using a pty, and
also illustrates which pipe ends should be closed where. I prefer processes to
threading, but the principle is the same. Additionally, it illustrates
synchronizing Queues which feed the producer and collect output from the consumer,
and how to shut them down cleanly (look out for the sentinels inserted into the
queues). This pattern allows new input to be generated based on recent output,
allowing for recursive discovery and processing.
Nosklo's offered solution will quickly break if too much data is written into the pipe:
from subprocess import Popen, PIPE
p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
p1.stdin.write('Hello World\n' * 20000)
p1.stdin.close()
result = p2.stdout.read()
assert result == "Hello Worl\n"
If this script doesn't hang on your machine, just increase "20000" to something that exceeds the size of your operating system's pipe buffers.
This is because the operating system is buffering the input to "grep", but once that buffer is full, the p1.stdin.write call will block until something reads from p2.stdout. In toy scenarios, you can get away with writing to/reading from a pipe in the same process, but in normal usage, it is necessary to write from one thread/process and read from a separate thread/process. This is true for subprocess.Popen, os.pipe, os.popen*, etc.
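As a rough, Linux-only sketch (mine, not from the answer), you can ask the kernel how large that buffer actually is; writes beyond this size block until a reader drains the pipe:
import fcntl
from subprocess import Popen, PIPE

F_GETPIPE_SZ = 1032   # Linux fcntl constant; fcntl.F_GETPIPE_SZ only exists in newer Pythons

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
print(fcntl.fcntl(p1.stdin, F_GETPIPE_SZ))   # typically 65536 bytes
p1.stdin.close()
p1.wait()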
Another twist is that sometimes you want to keep feeding the pipe with items generated from earlier output of the same pipe. The solution is to make both the pipe feeder and the pipe reader asynchronous to the main program, and implement two queues: one between the main program and the pipe feeder and one between the main program and the pipe reader. PythonInteract is an example of that.
Subprocess is a nice convenience module, but because it hides the details of the os.pipe and os.fork calls it makes under the hood, it can sometimes be more difficult to deal with than the lower-level calls it utilizes. For this reason, subprocess is not a good way to learn about how inter-process pipes really work.
You must do this in several threads. Otherwise, you'll end up in a situation where you can't send data: child p1 won't read your input since p2 doesn't read p1's output because you don't read p2's output.
So you need a background thread that reads what p2 writes out. That will allow p2 to continue after writing some data to the pipe, so it can read the next line of input from p1 which again allows p1 to process the data which you send to it.
Alternatively, you can send the data to p1 with a background thread and read the output from p2 in the main thread. But either side must be a thread.
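A minimal sketch of that arrangement (mine, written for Python 3, where close_fds already defaults to True): the main thread writes to p1 while a background thread drains p2, so neither end can stall the other.
from subprocess import Popen, PIPE
from threading import Thread

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()                      # the parent keeps no extra read end

chunks = []
t = Thread(target=lambda: chunks.append(p2.stdout.read()))
t.start()                              # background reader keeps p2's output drained
for i in range(100000):
    p1.stdin.write(b"hello world\n")   # main thread feeds the pipeline
p1.stdin.close()                       # EOF lets grep, then cut, finish
t.join()
print(b"".join(chunks)[:22])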
Responding to nosklo's assertion (see other comments to this question) that it can't be done without close_fds=True:
close_fds=True is only necessary if you've left other file
descriptors open. When opening multiple child processes, it's always good to
keep track of open files that might get inherited, and to explicitly close any
that aren't needed:
from subprocess import Popen, PIPE
p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p1.stdin.write('Hello World\n')
p1.stdin.close()
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
result = p2.stdout.read()
assert result == "Hello Worl\n"
close_fds defaults to False because subprocess
prefers to trust the calling program to know what it's doing with open file
descriptors, and just provide the caller with an easy option to close them all
if that's what it wants to do.
But the real issue is that pipe buffers will bite you for all but toy examples.
As I have said in my other answers to this question, the rule of thumb is to
not have your reader and your writer open in the same process/thread. Anyone
who wants to use the subprocess module for two-way communication would be
well-served to study os.pipe and os.fork, first. They're actually not that
hard to use if you have a good example to look at.
I think you may be examining the wrong problem. Certainly, as Aaron says, if you try to be both a producer to the beginning of a pipeline and a consumer of the end of the pipeline, it is easy to get into a deadlock situation. This is the problem that communicate() solves.
communicate() isn't exactly correct for you since stdin and stdout are on different subprocess objects; but if you take a look at the implementation in subprocess.py you'll see that it does exactly what Aaron suggested.
Once you see that communicate both reads and writes, you'll see that in your second try communicate() competes with p2 for the output of p1:
p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
# ...
p1.communicate('data\n') # reads from p1.stdout, as does p2
I am running on win32, which definitely has different i/o and buffering characteristics, but this works for me:
p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
t = threading.Thread(target=get_output, args=(p2,))
t.start()
p1.stdin.write('hello world\n' * 100000)
p1.stdin.close()
t.join()
I tuned the input size to produce a deadlock when using a naive unthreaded p2.read()
You might also try buffering into a file, e.g.
import os, tempfile

fd, _ = tempfile.mkstemp()
os.write(fd, 'hello world\r\n' * 100000)
os.lseek(fd, 0, os.SEEK_SET)
p1 = Popen(["grep", "-v", "not"], stdin=fd, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
print p2.stdout.read()
That also works for me without deadlocks.
In one of the comments above, I challenged nosklo to either post some code to back up his assertions about select.select or to upvote my responses he had previously down-voted. He responded with the following code:
from subprocess import Popen, PIPE
import select
p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
data_to_write = 100000 * 'hello world\n'
to_read = [p2.stdout]
to_write = [p1.stdin]
b = [] # create buffer
written = 0
while to_read or to_write:
    read_now, write_now, xlist = select.select(to_read, to_write, [])
    if read_now:
        data = p2.stdout.read(1024)
        if not data:
            p2.stdout.close()
            to_read = []
        else:
            b.append(data)
    if write_now:
        if written < len(data_to_write):
            part = data_to_write[written:written+1024]
            written += len(part)
            p1.stdin.write(part); p1.stdin.flush()
        else:
            p1.stdin.close()
            to_write = []
print b
One problem with this script is that it second-guesses the size/nature of the
system pipe buffers. The script would experience fewer failures if it could remove
magic numbers like 1024.
The big problem is that this script code only works consistently with the right
combination of data input and external programs. grep and cut both work with
lines, and so their internal buffers behave a bit differently. If we use a
more generic command like "cat", and write smaller bits of data into the pipe,
the fatal race condition will pop up more often:
from subprocess import Popen, PIPE
import select
import time
p1 = Popen(["cat"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cat"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
data_to_write = 'hello world\n'
to_read = [p2.stdout]
to_write = [p1.stdin]
b = [] # create buffer
written = 0
while to_read or to_write:
    time.sleep(1)
    read_now, write_now, xlist = select.select(to_read, to_write, [])
    if read_now:
        print 'I am reading now!'
        data = p2.stdout.read(1024)
        if not data:
            p2.stdout.close()
            to_read = []
        else:
            b.append(data)
    if write_now:
        print 'I am writing now!'
        if written < len(data_to_write):
            part = data_to_write[written:written+1024]
            written += len(part)
            p1.stdin.write(part); p1.stdin.flush()
        else:
            print 'closing file'
            p1.stdin.close()
            to_write = []
print b
In this case, two different results will manifest:
write, write, close file, read -> success
write, read -> hang
So again, I challenge nosklo to either post code showing the use of
select.select to handle arbitrary input and pipe buffering from a
single thread, or to upvote my responses.
Bottom line: don't try to manipulate both ends of a pipe from a single thread.
It's just not worth it. See
pipeline for a nice low-level
example of how to do this correctly.
What about using a SpooledTemporaryFile ? This bypasses (but perhaps doesn't solve) the issue:
http://docs.python.org/library/tempfile.html#tempfile.SpooledTemporaryFile
You can write to it like a file, but it's actually a memory block.
Or am I totally misunderstanding...
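For what it's worth, a sketch of that idea (mine): the catch is that Popen needs a real file descriptor, so calling fileno() on a SpooledTemporaryFile rolls it over to an actual temp file, which is why this sidesteps rather than solves the buffering problem.
from subprocess import Popen, PIPE
from tempfile import SpooledTemporaryFile

spool = SpooledTemporaryFile(max_size=1024 * 1024)
spool.write(b"hello world\n" * 100000)
spool.seek(0)
# Popen asks for spool.fileno(), which forces the spool onto disk.
p1 = Popen(["grep", "-v", "not"], stdin=spool, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()
print(p2.communicate()[0][:22])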
Here's an example of using Popen together with os.fork to accomplish the same
thing. Instead of using close_fds it just closes the pipes at the
right places. Much simpler than trying to use select.select, and
takes full advantage of system pipe buffers.
from subprocess import Popen, PIPE
import os
import sys
p1 = Popen(["cat"], stdin=PIPE, stdout=PIPE)
pid = os.fork()
if pid:  # parent
    p1.stdin.close()
    p2 = Popen(["cat"], stdin=p1.stdout, stdout=PIPE)
    data = p2.stdout.read()
    sys.stdout.write(data)
    p2.stdout.close()
else:  # child
    data_to_write = 'hello world\n' * 100000
    p1.stdin.write(data_to_write)
    p1.stdin.close()
It's much simpler than you think!
import sys
from subprocess import Popen, PIPE
# Pipe the command here. It will read from stdin.
# So cat a file to stdin, like (cat myfile | ./this.py),
# or type in the terminal and hit control+d when done, etc.
# No need to handle this yourself, that's why we have shells!
p = Popen("grep -v not | cut -c 1-10", shell=True, stdout=PIPE)
nextData = None
while True:
    nextData = p.stdout.read()
    if nextData in (b'', ''):
        break
    sys.stdout.write(nextData.decode('utf-8'))
p.wait()
This code is written for Python 3.6 and works with Python 2.7.
Use it like:
cat README.md | python ./example.py
or
python example.py < README.md
To pipe the contents of "README.md" to this program.
But... at this point, why not just use "cat" directly and pipe the output like you want? Like:
cat filename | grep -v not | cut -c 1-10
typed into the console will do the job as well. I personally would only use the code option if I was further processing the output; otherwise a shell script would be easier to maintain and retain.
Just use the shell to do the piping for you: in one end, out the other. That's what shells are GREAT at doing: managing processes and managing single-width chains of input and output. Some would call it a shell's best non-interactive feature.
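If you would rather feed that shell pipeline from a Python string instead of the terminal or a file, a small sketch (mine) using communicate(), which does the writing and reading for you:
from subprocess import Popen, PIPE

p = Popen("grep -v not | cut -c 1-10", shell=True, stdin=PIPE, stdout=PIPE)
out, _ = p.communicate(b"this line passes\nbut not this one\n")
print(out.decode())   # -> "this line "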
