Here's my main file:
import subprocess, time
pipe = subprocess.PIPE
popen = subprocess.Popen('pythonw -uB test_web_app.py', stdout=pipe)
time.sleep(3)
And here's test_web_app.py:
import web
class Handler:
    def GET(self): pass
app = web.application(['/', 'Handler'], globals())
app.run()
When I run the main file, the program executes, but a zombie process is left hanging and I have to kill it manually. Why is this? How can I get the Popen to die when the program ends? The Popen only hangs if I pipe stdout and sleep for a bit before the program ends.
Edit -- here's the final, working version of the main file:
import subprocess, time, atexit
pipe = subprocess.PIPE
popen = subprocess.Popen('pythonw -uB test_web_app.py', stdout=pipe)
def kill_app():
    popen.kill()
    popen.wait()
atexit.register(kill_app)
time.sleep(3)
You have not waited for the process. Once it's done, you have to call popen.wait().
You can check whether the process has terminated by using the poll method of the Popen object.
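For example, a minimal sketch of such a check against the popen object from the question:

# poll() returns None while the child is still running,
# and its exit code once it has terminated.
if popen.poll() is None:
    print('server still running')
else:
    print('server exited with code', popen.returncode)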
If you don't need the stdout of the web server process, you can simply omit the stdout option.
You can use the atexit module to implement a hook that gets called when your main file exits. This should use the kill method of the Popen object and then wait on it to make sure that it's terminated.
If your main script doesn't need to be doing anything else while the subprocess executes, I'd do:
import subprocess, time
pipe = subprocess.PIPE
popen = subprocess.Popen('pythonw -uB test_web_app.py', stdout=pipe, stderr=pipe)
out, err = popen.communicate()
(I think that if you specifically pipe stdout back to your program, you need to read it at some point to avoid creating zombie processes; communicate will read it in a reasonably safe way.)
Or if you don't care about parsing stdout / stderr, don't bother piping them:
popen = subprocess.Popen('pythonw -uB test_web_app.py')
popen.communicate()
I'm using Python 3.6.8, and I have a situation where one process cannot continue until another one has finished.
p1 is in the main thread and must stay open for a long time doing things.
p2 must run in a separate thread (daemon=True), read stdout/stderr using communicate(), and finish.
(All pipes are needed and must not be disabled.)
As you will see below, when the code is run with Python 3.10.4 I get the output "thread.popen/communicate", but Python 3.6.8 does not print this line.
I think it gets stuck inside communicate().
What I'm asking for is a workaround for 3.6.8, and optionally an explanation of what is going on with Python 3.6.8: is it a bug with locks, or maybe pipes?
Thank you!
import threading
from time import sleep
from subprocess import Popen, PIPE, STDOUT
def run():
    print('thread')
    p2 = Popen('git', stdin=PIPE, stdout=PIPE, stderr=PIPE)
    o, e = p2.communicate()
    print('thread.popen/communicate')

if __name__ == '__main__':
    threading.Thread(target=run, daemon=True).start()
    p1 = Popen('cmd', stdin=PIPE, stdout=PIPE, stderr=STDOUT)
    print('main.popen')
    # p1.wait()
    sleep(2)
F:\MySSDPrograms\cudatext\py\cuda_lsp>python.exe new.py
thread
main.popen
thread.popen/communicate
F:\MySSDPrograms\cudatext\py\cuda_lsp>f:\Python36\python.exe new.py
thread
main.popen
I have a shell script which I need to invoke from a Python program.
from subprocess import *
p = Popen('some_script', stdin=PIPE)
p.communicate('S')
But after the communicate() call is executed, the process deadlocks. The shell script was not written by me and I cannot modify it. I just need to kill the process after the communicate() method has executed, but the program should not exit.
Please help me in this regard.
The entire point of Popen.communicate is that it will wait until the process terminates. If this is not desired behavior, you must explicitly interact with the process' stdin/stdout/stderr.
import subprocess

p = subprocess.Popen('some_script', stdin=subprocess.PIPE)
p.stdin.write(b'S\n')  # stdin is a binary pipe, so write bytes
p.stdin.flush()        # make sure the data actually reaches the script
p.kill()
I'm really new to Python and I have a little problem with the subprocess module.
I'm starting an external program with:
thread1.event.clear()
thread2.event.clear()
print "Sending motors STOP"
print "Requesting PICTURE"
proc = Popen(['gphoto2 --capture-image &'], shell=True, stdin=None, stdout=None, stderr=None, close_fds=True)
sleep(args.max + 2)
thread1.event.set()
thread2.event.set()
sleep(args.tp - 2 - args.max)
My problem is that in the shell where I started the Python script, I still get the output of gphoto2, and I think Python is still waiting for gphoto2 to finish.
Any ideas?
The documentation for subprocess.Popen states:
stdin, stdout and stderr specify the executed program's standard
input, standard output and standard error file handles, respectively.
[...]
With None, no redirection will occur; the child's file handles will be
inherited from the parent.
So you might want to try something along the lines of this, which, by the way, blocks until completion, so you might not need the sleep() (the wait() method of subprocess.Popen might be what you want here):
import subprocess
ret_code = subprocess.call(["echo", "Hello World!"], stdout=subprocess.PIPE)
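Applied to the gphoto2 call from the question, a minimal sketch (assuming Python 3.3+ for DEVNULL; dropping the trailing & and shell=True so Popen manages the process directly):

import subprocess

# Run gphoto2 directly (no shell), discard its output, and block until done.
proc = subprocess.Popen(['gphoto2', '--capture-image'],
                        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
proc.wait()  # returns the exit code once the capture has finished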
I am using Linux/CPython 3.3/Bash. Here's my problem:
#!/usr/bin/env python3
from subprocess import Popen, PIPE, DEVNULL
import time
s = Popen('cat', stdin=PIPE, stdout=DEVNULL, stderr=DEVNULL)
s.stdin.write(b'helloworld')
s.stdin.close()
time.sleep(1000) #doing stuff
This leaves cat as a zombie (and I'm busy "doing stuff" and can't wait on the child process). Is there a way in bash that I can wrap cat (e.g. through creating a grand-child) that would allow me to write to cat's stdin, but have init take over as the parent? A python solution would work too, and I can also use nohup, disown etc.
Run the subprocess from another process whose only task is to wait on it.
import os, sys, time
from subprocess import Popen, PIPE, DEVNULL

pid = os.fork()
if pid == 0:
    # Child: start cat, feed it, and stay around to reap it.
    s = Popen('cat', stdin=PIPE, stdout=DEVNULL, stderr=DEVNULL)
    s.stdin.write(b'helloworld')
    s.stdin.close()
    s.wait()
    sys.exit()
time.sleep(1000)
One workaround might be to "daemonize" your cat: fork, then quickly fork again and exit in the 2nd process, with the 1st one wait()ing for the 2nd. The 3rd process can then exec() cat, which will inherit its file descriptors from its parent. Thus you need to create a pipe first, then close stdin in the child and dup() it from the pipe.
I don't know how to do these things in Python, but I'm fairly certain it should be possible.
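A minimal sketch of that idea in Python, assuming a POSIX system (cat stands in for the real child, and stdout/stderr are discarded as in the question):

import os

# Create the pipe first; the grandchild will read its stdin from it.
r, w = os.pipe()

pid = os.fork()
if pid == 0:                 # first child
    if os.fork() == 0:       # grandchild: becomes cat
        os.close(w)
        os.dup2(r, 0)        # stdin now comes from the pipe
        os.close(r)
        devnull = os.open(os.devnull, os.O_WRONLY)
        os.dup2(devnull, 1)  # discard stdout and stderr
        os.dup2(devnull, 2)
        os.execvp('cat', ['cat'])
    os._exit(0)              # first child exits at once; init adopts cat

os.close(r)
os.waitpid(pid, 0)           # reap the short-lived first child
os.write(w, b'helloworld')   # we can still write to cat's stdin
os.close(w)                  # cat sees EOF, exits, and init reaps it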
I need to run a subprocess from my script. The subprocess is an interactive (shell-like) application, to which I issue commands through the subprocess' stdin.
After I issue a command, the subprocess outputs the result to stdout and then waits for the next command (but does not terminate).
For example:
from subprocess import Popen, PIPE
p = Popen(args=[...], stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=False)
# Issue a command:
p.stdin.write('command\n')
# *** HERE: get the result from p.stdout ***
# CONTINUE with the rest of the script once there is not more data in p.stdout
# NOTE that the subprocess is still running and waiting for the next command
# through stdin.
My problem is getting the result from p.stdout. The script needs to get the output while there is new data in p.stdout; but once there is no more data, I want to continue with the script.
The subprocess does not terminate, so I cannot use communicate() (which waits for the process to terminate).
I tried reading from p.stdout after issuing the command, like this:
res = p.stdout.read()
But the subprocess is not fast enough, and I just get an empty result.
I thought about polling p.stdout in a loop until I get something, but then how do I know I got everything? And it seems wasteful anyway.
Any suggestions?
Use gevent.subprocess in gevent 1.0 as a substitute for the standard subprocess module. It can run concurrent tasks using synchronous logic and won't block the script. Here is a brief tutorial about gevent.subprocess.
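A minimal sketch, assuming gevent >= 1.0 (gevent.subprocess mirrors the standard subprocess API, but blocking calls yield to other greenlets; cat stands in for the real interactive tool):

import gevent
from gevent.subprocess import Popen, PIPE

def talk():
    p = Popen(['cat'], stdin=PIPE, stdout=PIPE)
    p.stdin.write(b'command\n')
    p.stdin.flush()
    print(p.stdout.readline())  # blocks this greenlet only, not the whole script
    p.stdin.close()
    p.wait()

gevent.joinall([gevent.spawn(talk)])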
Use circuits.io.Process in circuits-dev to wrap an asynchronous call to subprocess.
Example: https://bitbucket.org/circuits/circuits-dev/src/tip/examples/ping.py
After investigating several options I reached two solutions:
Setting the subprocess's stdout stream to be non-blocking by using the fcntl module.
Using a thread to collect the subprocess's output into a proxy queue, and then reading the queue from the main thread (a minimal sketch of this approach follows below).
I describe both solutions (and the problem and its origin) in this post.
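A minimal sketch of the second (thread-based) approach, assuming Python 3 and using cat as a stand-in for the interactive tool; a short queue timeout approximates "no more data for now", since the process itself never exits:

import subprocess, threading, queue

def enqueue_output(pipe, q):
    # Move each line from the pipe into the queue as it appears.
    for line in iter(pipe.readline, b''):
        q.put(line)
    pipe.close()

p = subprocess.Popen(['cat'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
q = queue.Queue()
threading.Thread(target=enqueue_output, args=(p.stdout, q), daemon=True).start()

# Issue a command:
p.stdin.write(b'command\n')
p.stdin.flush()

# Drain whatever the command produced.
while True:
    try:
        line = q.get(timeout=0.5)  # treat a timeout as "output finished"
    except queue.Empty:
        break
    print(line)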