Popen.communicate is stuck until process spawned by sub-process terminates - python

I have the three scripts below. When I run main.py, it spawns child.py, which in turn executes subchild.py and terminates quickly; subchild.py, however, keeps running for a long time.
The problem is that main.py blocks at p.communicate() until subchild.py terminates. If I open Task Manager and kill the running subchild.py, main.py immediately returns the output of child.py.
So my questions are:
main.py is supposed to wait only until child.py terminates, so why is it waiting for subchild.py to terminate?
How do I make p.communicate() not wait until subchild.py completes its execution?
# main.py file
if __name__ == '__main__':
    import subprocess

    p = subprocess.Popen(
        ["python", "child.py"],
        stdout=subprocess.PIPE,
        stdin=subprocess.PIPE,
        stderr=subprocess.PIPE
    )
    out, _ = p.communicate()
    print(out)
# child.py file
if __name__ == '__main__':
    import subprocess

    print("Child running")
    p = subprocess.Popen(["python", "subchild.py"])
    print("Child pid - ", p.pid)
    exit(1)
# subchild.py file
if __name__ == '__main__':
    import time

    time.sleep(10000)
Note: I'm trying this on Windows 7 Enterprise with Python 3.6.6.
Update after comment:
In main.py I need child.py's pid, stdout, stderr, and process object so that I can kill child.py from main.py at any later point if I want to. This code snippet is part of an API I am building, where the user should be able to kill the process if he wishes to. subprocess.call or subprocess.run would not give me control over the process object. I also won't have control over what child.py command I receive as input to main.py, so I need to somehow not wait for subchild.py and exit immediately with child.py's output as soon as child.py completes.

Your communicate call doesn't just wait for the child to complete. It first tries to read the entire contents of the child's stdout and stderr.
Even when the child has completed, the parent can't stop reading, because the grandchild inherits the child's stdin, stdout, and stderr. The parent needs to wait for the grandchild to complete, or at least for the grandchild to close its stdout and stderr, before the parent can be sure it's done reading.
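If you can modify child.py, one common mitigation is to point the grandchild's standard streams at subprocess.DEVNULL, so it never holds the pipes that main.py is reading. This is a minimal sketch, not a guaranteed fix on every platform; on Windows with Python 3.6 the handle-inheritance bug linked at the end of this thread can still interfere:
# child.py -- sketch: detach the grandchild from child.py's standard streams
if __name__ == '__main__':
    import subprocess

    print("Child running")
    p = subprocess.Popen(
        ["python", "subchild.py"],
        stdin=subprocess.DEVNULL,   # don't inherit child.py's stdin
        stdout=subprocess.DEVNULL,  # don't hold the stdout pipe main.py reads
        stderr=subprocess.DEVNULL,  # likewise for stderr
    )
    print("Child pid - ", p.pid)
    exit(1)
With the grandchild holding no pipe handles, communicate() in main.py can finish as soon as child.py exits.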

As written, main.py ends up waiting for subchild.py to finish. If you don't need to wait, you can try subprocess.call() as an alternative in the child.py code, replacing the Popen() logic with call():
p = subprocess.call(["python", "subchild.py"])
Here is a reference: https://docs.python.org/2/library/subprocess.html#subprocess.call
An alternative is to keep child.py as it is and instead switch to .call() in the main.py file:
import subprocess

p = subprocess.call(
    ["python", "child.py"],
    stdout=subprocess.PIPE,
    stdin=subprocess.PIPE,
    stderr=subprocess.PIPE
)
You mentioned you can't use .call() because you'll need to kill the process, but this is not necessarily the case. Making this change should allow you to:
return from main.py and child.py quickly,
print the pid of the subchild process from child.py, and
let subchild.py continue running.

Finally I found the solution to this myself; it seems to be an open issue with Python. I would be glad to see it fixed.
https://bugs.python.org/issue26731

Related

Random behaviour while reading from subprocess stdout in a thread

I am writing a script that starts a process and checks its stdout while it runs (not just at the end of execution).
The obvious choice seemed to be a thread that blocks reading lines from the process's stdout.
I have tested it with WSL2 bash using:
python __main__.py 'echo ok'
The outcome is random, resulting in one of the following cases:
Execution terminated without any output
"ok" printed as expected
"ok" printed follow by a 'ValueError: readline of closed file' exception
Any idea on what might be the problem ?
The code:
import argparse
from subprocess import Popen, PIPE
import sys
import threading

class ReadlineThread(threading.Thread):
    def __init__(self, proc):
        threading.Thread.__init__(self)
        self._proc = proc

    def run(self):
        while self._proc.poll() is None:
            line = self._proc.stdout.readline()
            sys.stdout.buffer.write(line)
            sys.stdout.flush()

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('command', nargs='+', help='bar help')
    args = parser.parse_args()
    with Popen(args.command, stdout=PIPE, stderr=PIPE, shell=True) as proc:
        stdout_thread = ReadlineThread(proc)
        stdout_thread.start()

if __name__ == "__main__":
    main()
When you create a thread, it becomes part of the parent process, whose main thread runs your main function. In main you call stdout_thread.start(), which begins starting the thread and then returns immediately. After that there is no more code in main, so Python begins shutting down the main process. Since your thread is part of that process, it is taken down when the process terminates. Meanwhile, the thread you started is still being created.
Here we have what is called a race condition: your thread is starting while, simultaneously, the process it belongs to is shutting down. If your thread manages to start up and complete its work before the process terminates, you get the expected result. If the process terminates before the thread has started, you get no output. In the third situation, the process closes its stdout before the thread has finished reading it, resulting in an error.
To fix this, in your main function you should wait for your spawned thread to finish, which could be achieved by calling stdout_thread.join().
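A minimal sketch of that fix, joining inside the with block so the pipes stay open while the reader thread drains them:
def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('command', nargs='+', help='bar help')
    args = parser.parse_args()
    with Popen(args.command, stdout=PIPE, stderr=PIPE, shell=True) as proc:
        stdout_thread = ReadlineThread(proc)
        stdout_thread.start()
        stdout_thread.join()  # wait for the reader before the with block closes the pipes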

Have subprocess.Popen only wait on its child process to return, but not any grandchildren

I have a python script that does this:
p = subprocess.Popen(pythonscript.py, stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=False)
theStdin=request.input.encode('utf-8')
(outputhere,errorshere) = p.communicate(input=theStdin)
It works as expected: it waits for the subprocess to finish via p.communicate(). However, within pythonscript.py I want to "fire and forget" a "grandchild" process. I'm currently doing this by overriding the join method:
from multiprocessing import Process  # import implied by the snippet

class EverLastingProcess(Process):
    def join(self, *args, **kwargs):
        pass  # Overrides join so that it doesn't block. Otherwise the parent waits.

    def __del__(self):
        pass
And starting it like this:
p = EverLastingProcess(target=nameOfMyFunction, args=(arg1, etc,), daemon=False)
p.start()
This also works fine when I just run pythonscript.py in a bash terminal or bash script: control returns, with a response, while the child process started by EverLastingProcess keeps going. However, when I run pythonscript.py with Popen as shown above, the timings suggest that Popen is waiting on the grandchild to finish.
How can I make it so that the Popen only waits on the child process, and not any grandchild processes?
The previously accepted solution (using the overridden join method with the shell=True addition) stopped working when we upgraded our Python recently.
There are many references on the internet about the pieces and parts of this, but it took me some doing to come up with a useful solution to the entire problem.
The following solution has been tested in Python 3.9.5 and 3.9.7.
Problem Synopsis
The names of the scripts match those in the code example below.
A top-level program (grandparent.py):
Uses subprocess.run or subprocess.Popen to call a program (parent.py)
Checks return value from parent.py for sanity.
Collects stdout and stderr from the main process 'parent.py'.
Does not want to wait around for the grandchild to complete.
The called program (parent.py)
Might do some stuff first.
Spawns a very long process (the grandchild - "longProcess" in the code below).
Might do a little more work.
Returns its results and exits while the grandchild (longProcess) continues doing what it does.
Solution Synopsis
The important part isn't so much what happens with subprocess. Instead, the method for creating the grandchild/longProcess is the critical part. It is necessary to ensure that the grandchild is truly emancipated from parent.py.
Subprocess only needs to be used in a way that captures output.
The longProcess (grandchild) needs the following to happen:
It should be started using multiprocessing.
It needs multiprocessing's 'daemon' set to False.
It should also be invoked using the double-fork procedure.
In the double-fork, extra work needs to be done to ensure that the process is truly separate from parent.py. Specifically:
Move the execution away from the environment of parent.py.
Use file handling to ensure that the grandchild no longer uses the file handles (stdin, stdout, stderr) inherited from parent.py.
Example Code
grandparent.py - calls parent.py using subprocess.run()
#!/usr/bin/env python3
import subprocess
p = subprocess.run(["/usr/bin/python3", "/path/to/parent.py"], capture_output=True)
## Comment the following if you don't need reassurance
print("The return code is: " + str(p.returncode))
print("The standard out is: ")
print(p.stdout)
print("The standard error is: ")
print(p.stderr)
parent.py - starts the longProcess/grandchild and exits, leaving the grandchild running. After 10 seconds, the grandchild will write timing info to /tmp/timelog.
#!/usr/bin/env python3
import time

def longProcess():
    time.sleep(10)
    fo = open("/tmp/timelog", "w")
    fo.write("I slept! The time now is: " + time.asctime(time.localtime()) + "\n")
    fo.close()

import os, sys

def spawnDaemon(func):
    # do the UNIX double-fork magic, see Stevens' "Advanced
    # Programming in the UNIX Environment" for details (ISBN 0201563177)
    try:
        pid = os.fork()
        if pid > 0:  # parent process
            return
    except OSError as e:
        print("fork #1 failed. See next.")
        print(e)
        sys.exit(1)

    # Decouple from the parent environment.
    os.chdir("/")
    os.setsid()
    os.umask(0)

    # do second fork
    try:
        pid = os.fork()
        if pid > 0:
            # exit from second parent
            sys.exit(0)
    except OSError as e:
        print("fork #2 failed. See next.")
        print(e)
        sys.exit(1)

    # Redirect standard file descriptors.
    # Here, they are reassigned to /dev/null, but they could go elsewhere.
    sys.stdout.flush()
    sys.stderr.flush()
    si = open('/dev/null', 'r')
    so = open('/dev/null', 'a+')
    se = open('/dev/null', 'a+')
    os.dup2(si.fileno(), sys.stdin.fileno())
    os.dup2(so.fileno(), sys.stdout.fileno())
    os.dup2(se.fileno(), sys.stderr.fileno())

    # Run your daemon
    func()
    # Ensure that the daemon exits when complete
    os._exit(os.EX_OK)

import multiprocessing

daemonicGrandchild = multiprocessing.Process(target=spawnDaemon, args=(longProcess,))
daemonicGrandchild.daemon = False
daemonicGrandchild.start()
print("have started the daemon")  # This will get captured as stdout by grandparent.py
References
The code above was mainly inspired by the following two resources.
This reference is succinct about the use of the double-fork but does not include the file handling we need in this situation.
This reference contains the needed file handling, but does many other things that we do not need.
Edit: the below stopped working after a Python upgrade; see the accepted answer from Lachele.
Working answer from a colleague: change to shell=True like this:
p = subprocess.Popen(pythonscript.py, stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=True)
I've tested, and the grandchild subprocesses stay alive after the child process returns, without the parent waiting for them to finish.

How to kill subprocess after time.sleep()? [duplicate]

I am running some shell scripts with the subprocess module in Python. If a shell script runs too long, I'd like to kill the subprocess. I thought it would be enough to pass timeout=30 to my run(...) call.
Here is the code:
try:
    result = run(
        ['utilities/shell_scripts/{0} {1} {2}'.format(
            self.language_conf[key][1], self.proc_dir, config.main_file)],
        shell=True,
        check=True,
        stdout=PIPE,
        stderr=PIPE,
        universal_newlines=True,
        timeout=30,
        bufsize=100)
except TimeoutExpired as timeout:
I have tested this call with a shell script that runs for 120 s. I expected the subprocess to be killed after 30 s, but in fact the process finishes the 120 s script and then raises the TimeoutExpired exception. Now the question: how can I kill the subprocess on timeout?
The documentation explicitly states that the process should be killed:
from the docs for subprocess.run:
"The timeout argument is passed to Popen.communicate(). If the timeout expires, the child process will be killed and waited for. The TimeoutExpired exception will be re-raised after the child process has terminated."
But in your case you're using shell=True, and I've seen issues like that before, because the blocking process is a child of the shell process.
I don't think you need shell=True if you decompose your arguments properly and your scripts have the proper shebang. You could try this:
result = run(
    [os.path.join('utilities/shell_scripts', self.language_conf[key][1]),
     self.proc_dir, config.main_file],  # don't compose the argument line yourself
    shell=False,  # no shell wrapper
    check=True,
    stdout=PIPE,
    stderr=PIPE,
    universal_newlines=True,
    timeout=30,
    bufsize=100)
Note that I can reproduce this issue very easily on Windows (using Popen, but it's the same thing):
import subprocess, time

p = subprocess.Popen("notepad", shell=True)
time.sleep(1)
p.kill()
=> notepad stays open, probably because it manages to detach from the parent shell process.
import subprocess, time

p = subprocess.Popen("notepad", shell=False)
time.sleep(1)
p.kill()
Funnily enough, if you remove time.sleep(), kill() works even with shell=True probably because it successfully kills the shell which is launching notepad.
I'm not saying you have exactly the same issue, I'm just demonstrating that shell=True is evil for many reasons, and not being able to kill/timeout the process is one more reason.
However, if you need shell=True for a reason, you can use psutil to kill all the children in the end. In that case, it's better to use Popen so you get the process id directly:
import subprocess, time, psutil

parent = subprocess.Popen("notepad", shell=True)
for _ in range(30):  # 30 seconds
    if parent.poll() is not None:  # process just ended
        break
    time.sleep(1)
else:
    # the for loop ended without break: timeout
    parent = psutil.Process(parent.pid)
    for child in parent.children(recursive=True):  # or parent.children() for recursive=False
        child.kill()
    parent.kill()
(source: how to kill process and child processes from python?)
that example kills the notepad instance as well.
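On POSIX systems there is also a shell=True option that doesn't need psutil: start the shell in its own session/process group and signal the whole group on timeout. A POSIX-only sketch (this relies on setsid/killpg, which don't exist on Windows; "sleep 120" is just a stand-in for a long-running script):
import os, signal, subprocess

# Run the shell (and everything it spawns) in a new session/process group.
parent = subprocess.Popen("sleep 120", shell=True, start_new_session=True)
try:
    parent.wait(timeout=30)
except subprocess.TimeoutExpired:
    # Kill the whole group so the shell's children die too.
    os.killpg(os.getpgid(parent.pid), signal.SIGKILL)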

Subprocess Deadlock

I have a shell script which I need to invoke from a Python program.
from subprocess import *
p=Popen('some_script',stdin=PIPE)
p.communicate('S')
But after the communicate() call executes, the process deadlocks. The shell script was not written by me and I cannot modify it. I just need to kill the process after the communicate() method executes, but the program should not exit.
Please help me in this regard.
The entire point of Popen.communicate is that it will wait until the process terminates. If this is not desired behavior, you must explicitly interact with the process' stdin/stdout/stderr.
import subprocess

p = subprocess.Popen('some_script', stdin=subprocess.PIPE)
p.stdin.write(b'S\n')  # the pipe is binary, so write bytes on Python 3
p.stdin.flush()        # make sure the input actually reaches the script
p.kill()
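If the goal is "send the input, give the script a bounded amount of time, then kill it", a sketch combining communicate() with its timeout parameter (Python 3.3+) would also work:
import subprocess

p = subprocess.Popen('some_script', stdin=subprocess.PIPE)
try:
    p.communicate(input=b'S\n', timeout=5)  # pick whatever bound fits
except subprocess.TimeoutExpired:
    p.kill()         # give up on the script
    p.communicate()  # reap it so it doesn't linger as a zombie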

web.py + subprocess = hang

Here's my main file:
import subprocess, time
pipe = subprocess.PIPE
popen = subprocess.Popen('pythonw -uB test_web_app.py', stdout=pipe)
time.sleep(3)
And here's test_web_app.py:
import web

class Handler:
    def GET(self): pass

app = web.application(['/', 'Handler'], globals())
app.run()
When I run the main file, the program executes, but a zombie process is left hanging and I have to kill it manually. Why is this? How can I get the Popen to die when the program ends? The Popen only hangs if I pipe stdout and sleep for a bit before the program ends.
Edit -- here's the final, working version of the main file:
import subprocess, time, atexit

pipe = subprocess.PIPE
popen = subprocess.Popen('pythonw -uB test_web_app.py', stdout=pipe)

def kill_app():
    popen.kill()
    popen.wait()

atexit.register(kill_app)
time.sleep(3)
You have not waited for the process. Once it's done, you have to call popen.wait.
You can check if the process is terminated using the poll method of the popen object to see if it has completed.
If you don't need the stdout of the web server process, you can simply ignore the stdout option.
You can use the atexit module to implement a hook that gets called when your main file exits. This should use the kill method of the Popen object and then wait on it to make sure that it's terminated.
If your main script doesn't need to be doing anything else while the subprocess executes I'd do:
import subprocess, time
pipe = subprocess.PIPE
popen = subprocess.Popen('pythonw -uB test_web_app.py', stdout=pipe, stderr=pipe)
out, err = popen.communicate()
(I think if you specifically pipe stdout back to your program, you need to read it at some point to avoid creating zombie processes - communicate will read it in a reasonably safe way).
Or if you don't care about parsing stdout / stderr, don't bother piping them:
popen = subprocess.Popen('pythonw -uB test_web_app.py')
popen.communicate()
