I'm experiencing an issue where a call to proc.communicate() still hangs even after calling proc.terminate().
I create tasks to be run in the background with the call
import subprocess as sub
p = sub.Popen(command, stdout=sub.PIPE, stderr=sub.PIPE, shell=False)
At the completion of the script, I call terminate(), communicate(), and wait() to gather information from the process.
p.terminate()
errorcode = p.wait()
(pout, perr) = p.communicate()
The script hangs at the call to communicate. I'd assumed that any call to communicate that follows a call to terminate would return immediately. Is there any reason why this would fail?
Edit: I'm using this method because the command is really a tight loop that won't terminate on its own. I'd like to use p.terminate() to do that, and then see what the stdout and stderr has to offer.
You don't need the wait() call: communicate() already waits for the process to exit and sets p.returncode. Worse, calling wait() first while stdout/stderr are pipes can itself deadlock once the child fills a pipe buffer. Call terminate() and then communicate() directly.
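A minimal sketch of that pattern (`sleep 100` stands in for the tight-loop command from the question):

```python
import subprocess as sub

# Stand-in for the long-running command that never exits on its own.
p = sub.Popen(['sleep', '100'], stdout=sub.PIPE, stderr=sub.PIPE, shell=False)

p.terminate()                 # ask the process to stop (SIGTERM on POSIX)
pout, perr = p.communicate()  # drains the pipes and reaps the process
errorcode = p.returncode      # negative signal number on POSIX when killed
```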
Related
I have a python script that creates a subprocess to run indexing operations (logstash to elasticsearch).
the code snippet is as follows,
process = subprocess.Popen([logstash, '-f', sample.conf],
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
I do not call process.wait(), as the subprocess I'm creating needs to run independent of the rest of the script.
But I do have to update a database record when the subprocess is complete, and the indexing operation I'm running does not let me register a post-job hook that could do the update.
How can I handle this with python subprocess? I do store the PIDs of the jobs in a text file, but I'd like to have a trigger in place that knows when the subprocess is complete to execute the next script.
Since you appear to stash the process variable somewhere, later you can check its returncode attribute after calling its poll method.
If the process has completed then its returncode value won't be None and you can update your database.
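A sketch of that check (the function name and the database update are placeholders, not part of any library):

```python
import subprocess

def check_if_finished(process):
    """Poll a stashed Popen; return its exit code if finished, else None."""
    rc = process.poll()  # non-blocking; also updates process.returncode
    if rc is not None:
        # The subprocess has exited; update the database record here.
        return rc
    return None          # still running
```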
You could create your process in a thread: the thread blocks on wait() and collects the output, while the main thread stays unblocked.
import threading
import subprocess

def run_command():
    p = subprocess.Popen([logstash, '-f', sample.conf],
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output = p.stdout.read()
    p.wait()
    # now your command has ended, do whatever you like
and in your main thread:
t = threading.Thread(target=run_command)
t.start()
# continue with main processing
I have a shell script which I need to invoke in the Python program.
from subprocess import *
p = Popen('some_script', stdin=PIPE)
p.communicate('S')
But after communicate() is called, the process deadlocks. The shell script was not written by me and I cannot modify it. I just need to kill the process after the input has been sent, but my program should not exit.
Please help me in this regard.
The entire point of Popen.communicate is that it will wait until the process terminates. If this is not desired behavior, you must explicitly interact with the process' stdin/stdout/stderr.
import subprocess

p = subprocess.Popen('some_script', stdin=subprocess.PIPE)
p.stdin.write('S\n')
p.stdin.flush()  # make sure the child actually receives the input before the kill
p.kill()
Do subprocess calls in Python hang? That is, do subprocess calls operate in the same thread as the rest of the Python code, or is it a non-blocking model? I couldn't find anything in the docs or on SO on the matter. Thanks!
Most functions in the subprocess module are blocking, meaning that they wait for the subprocess to complete before returning. However, subprocess.Popen is non-blocking.
result = subprocess.call(cmd) # This will block until cmd is complete
p = subprocess.Popen(cmd) # This will return a Popen object right away
Once you have the Popen object, you can use the poll instance method to see if the subprocess is complete without blocking.
if p.poll() is None:  # check against None: poll() returns 0 once the process has completed
    print "Process is still running"
Subprocesses run in the background. In the subprocess module, the Popen class starts a process in the background. It has a wait() method you can use to wait for the process to finish, and a communicate() helper method that handles stdin/stdout/stderr and waits for the process to complete. The module also has convenience functions like call() and check_call() that create a Popen object and then wait for it to complete.
So, subprocess implements a non-blocking model but also gives you blocking helper functions.
I need to run a subprocess from my script. The subprocess is an interactive (shell-like) application, to which I issue commands through the subprocess' stdin.
After I issue a command, the subprocess outputs the result to stdout and then waits for the next command (but does not terminate).
For example:
from subprocess import Popen, PIPE
p = Popen(args = [...], stdin = PIPE, stdout = PIPE, stderr = PIPE, shell = False)
# Issue a command:
p.stdin.write('command\n')
# *** HERE: get the result from p.stdout ***
# CONTINUE with the rest of the script once there is not more data in p.stdout
# NOTE that the subprocess is still running and waiting for the next command
# through stdin.
My problem is getting the result from p.stdout. The script needs to get the output while there is new data in p.stdout; but once there is no more data, I want to continue with the script.
The subprocess does not terminate, so I cannot use communicate() (which waits for the process to terminate).
I tried reading from p.stdout after issuing the command, like this:
res = p.stdout.read()
But the subprocess is not fast enough, and I just get an empty result.
I thought about polling p.stdout in a loop until I get something, but then how do I know I got everything? And it seems wasteful anyway.
Any suggestions?
Use gevent.subprocess in gevent 1.0 as a substitute for the standard subprocess module. It lets you write concurrent tasks in a synchronous style without blocking the script. There is a brief tutorial about gevent.subprocess.
Use circuits.io.Process in circuits-dev to wrap an asynchronous call to subprocess.
Example: https://bitbucket.org/circuits/circuits-dev/src/tip/examples/ping.py
After investigating several options I reached two solutions:
Setting the subprocess' stdout stream to be non-blocking by using the fcntl module.
Using a thread to collect the subprocess' output to a proxy queue, and then reading the queue from the main thread.
I describe both solutions (and the problem and its origin) in this post.
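The second approach (a reader thread feeding a queue) can be sketched like this; the `echo` command is just a placeholder for the interactive child:

```python
import queue
import subprocess
import threading

def enqueue_output(stream, q):
    # Runs in a background thread: blocks on readline without
    # blocking the main thread.
    for line in iter(stream.readline, b''):
        q.put(line)
    stream.close()

p = subprocess.Popen(['echo', 'hello'], stdout=subprocess.PIPE)
q = queue.Queue()
t = threading.Thread(target=enqueue_output, args=(p.stdout, q), daemon=True)
t.start()

# Main thread: grab whatever is available, then move on.
try:
    line = q.get(timeout=1.0)  # wait briefly for output
except queue.Empty:
    line = None                # no more data; continue with the script
```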
I have the following code in a loop:
import os
from subprocess import Popen, PIPE

while True:
    # Define shell_command
    p1 = Popen(shell_command, shell=shell_type, stdout=PIPE, stderr=PIPE,
               preexec_fn=os.setsid)
    result = p1.stdout.read()
    # Define condition
    if condition:
        break
where shell_command is something like ls (it just prints stuff).
I have read in different places that I can close/terminate/exit a Popen object in a variety of ways, e.g. :
p1.stdout.close()
p1.stdin.close()
p1.terminate()
p1.kill()
My question is:
What is the proper way of closing a subprocess object once we are done using it?
Considering the nature of my script, is there a way to open a subprocess object only once and reuse it with different shell commands? Would that be more efficient in any way than opening new subprocess objects each time?
Update
I am still a bit confused about the sequence of steps to follow depending on whether I use p1.communicate() or p1.stdout.read() to interact with my process.
From what I understood in the answers and the comments:
If I use p1.communicate() I don't have to worry about releasing resources, since communicate() would wait until the process is finished, grab the output and properly close the subprocess object
If I follow the p1.stdout.read() route (which I think fits my situation, since the shell command is just supposed to print stuff) I should call things in this order:
p1.wait()
p1.stdout.read()
p1.terminate()
Is that right?
What is the proper way of closing a subprocess object once we are done using it?
stdout.close() and stdin.close() will not terminate a process unless it exits itself on end of input or on write errors.
.terminate() and .kill() both do the job, with kill being a bit more "drastic" on POSIX systems, as SIGKILL is sent, which cannot be ignored by the application. Specific differences are explained in this blog post, for example. On Windows, there's no difference.
Also, remember to .wait() and to close the pipes after killing a process to avoid zombies and force the freeing of resources.
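A cleanup sequence along those lines (`sleep 100` is a placeholder for a long-running child):

```python
import subprocess

p = subprocess.Popen(['sleep', '100'],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)

p.kill()          # or p.terminate() for the gentler SIGTERM
p.wait()          # reap the child so it doesn't linger as a zombie
p.stdout.close()  # release the pipe file descriptors
p.stderr.close()
```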
A special case that is often encountered are processes which read from STDIN and write their result to STDOUT, closing themselves when EOF is encountered. With these kinds of programs, it's often sensible to use Popen.communicate:
>>> p = Popen(["sort"], stdin=PIPE, stdout=PIPE)
>>> p.communicate("4\n3\n1")
('1\n3\n4\n', None)
>>> p.returncode
0
This can also be used for programs which print something and exit right after:
>>> p = Popen(["ls", "/home/niklas/test"], stdin=PIPE, stdout=PIPE)
>>> p.communicate()
('file1\nfile2\n', None)
>>> p.returncode
0
Considering the nature of my script, is there a way to open a subprocess object only once and reuse it with different shell commands? Would that be more efficient in any way than opening new subprocess objects each time?
I don't think the subprocess module supports this and I don't see what resources could be shared here, so I don't think it would give you a significant advantage.
Considering the nature of my script, is there a way to open a subprocess object only once and reuse it with different shell commands?
Yes.
#!/usr/bin/env python
from __future__ import print_function
import uuid
import random
from subprocess import Popen, PIPE, STDOUT

MARKER = str(uuid.uuid4())
shell_command = 'echo a'
p = Popen('sh', stdin=PIPE, stdout=PIPE, stderr=STDOUT,
          universal_newlines=True)  # text mode, newline is '\n'

while True:
    # write next command
    print(shell_command, file=p.stdin)
    # insert MARKER into stdout to separate output from different shell_command
    print("echo '%s'" % MARKER, file=p.stdin)
    p.stdin.flush()  # stdin is block-buffered; push the commands to the child

    # read command output
    for line in iter(p.stdout.readline, MARKER+'\n'):
        if line.endswith(MARKER+'\n'):
            print(line[:-len(MARKER)-1])
            break  # command output ended without a newline
        print(line, end='')

    # exit on condition
    if random.random() < 0.1:
        break

# cleanup
p.stdout.close()
if p.stderr:
    p.stderr.close()
p.stdin.close()
p.wait()
Put while True inside try: ... finally: to perform the cleanup in case of exceptions. On Python 3.2+ you could use with Popen(...): instead.
Would that be more efficient in any way than opening new subprocess objects each time?
Does it matter in your case? Don't guess. Measure it.
The "correct" order is:
Create a thread to read stdout (and a second one to read stderr, unless you merged them into one).
Write commands to be executed by the child to stdin. If you're not reading stdout at the same time, writing to stdin can block.
Close stdin (this is the signal for the child that it can now terminate by itself whenever it is done).
When stdout returns EOF, the child has terminated. Note that you need to synchronize the stdout reader thread and your main thread.
call wait() to see if there was a problem and to clean up the child process
If you need to stop the child process for any reason (maybe the user wants to quit), then you can:
Close stdin if the child terminates when it reads EOF.
Kill the child with terminate(). This is the correct solution for child processes which ignore stdin.
If the child doesn't respond, try kill()
In all three cases, you must call wait() to clean up the dead child process.
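The steps above can be sketched as follows, using `cat` as a stand-in for a child that reads stdin and echoes to stdout:

```python
import subprocess
import threading

p = subprocess.Popen(['cat'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

lines = []
def read_stdout():
    # 1. Reader thread: drain stdout so writes to stdin can't block.
    for line in iter(p.stdout.readline, b''):
        lines.append(line)

t = threading.Thread(target=read_stdout)
t.start()

# 2. Write commands to the child's stdin.
p.stdin.write(b'first\n')
p.stdin.write(b'second\n')

# 3. Close stdin: signals the child it can finish.
p.stdin.close()

# 4. EOF on stdout means the child terminated; join the reader thread
#    to synchronize with the main thread.
t.join()

# 5. wait() reaps the child and reports problems via the exit code.
rc = p.wait()
```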
It depends on what you expect the process to do; you should always call p1.wait() in order to avoid zombies. The other steps depend on the behaviour of the subprocess: if it produces any output, you should consume it (e.g. p1.stdout.read(), though this buffers the whole output in memory) and only then call p1.wait(); or you may wait for some timeout and call p1.terminate() to kill the process if you think it isn't working as expected, and then call p1.wait() to clean up the zombie.
Alternatively, p1.communicate(...) handles the I/O and the waiting for you (but not the killing).
Subprocess objects aren't supposed to be reused.