I'm using Python 3.6.8, and I have a situation where one process cannot continue until another one has finished.
p1 is started in the main thread and must stay open for a long time doing its work.
p2 must run in a separate thread (daemon=True), read its stdout/stderr using communicate(), and finish.
(All pipes are needed; I must not disable them.)
As you will see below, when I run the code with Python 3.10.4 I get the line "thread.popen/communicate" in the output, but Python 3.6.8 never prints it.
It gets stuck inside communicate(), I think.
What I'm asking for is a workaround for 3.6.8 and, optionally, an explanation of what is going on in Python 3.6.8. Is it a bug with locks, or maybe pipes?
Thank you!
import threading
from time import sleep
from subprocess import Popen, PIPE, STDOUT

def run():
    print('thread')
    p2 = Popen('git', stdin=PIPE, stdout=PIPE, stderr=PIPE)
    o, e = p2.communicate()
    print('thread.popen/communicate')

if __name__ == '__main__':
    threading.Thread(target=run, daemon=True).start()
    p1 = Popen('cmd', stdin=PIPE, stdout=PIPE, stderr=STDOUT)
    print('main.popen')
    # p1.wait()
    sleep(2)
F:\MySSDPrograms\cudatext\py\cuda_lsp>python.exe new.py
thread
main.popen
thread.popen/communicate
F:\MySSDPrograms\cudatext\py\cuda_lsp>f:\Python36\python.exe new.py
thread
main.popen
Related
I am struggling with Python's subprocess module on Windows.
This is the first test script (named test1.py):
import subprocess as sbp

with sbp.Popen('python tests/test2.py', stdout=sbp.PIPE) as proc:
    print('parent process')
    print(proc.stdout.read(1))
    print('end.')
and this is the second (named test2.py):
import random
import time

def r():
    while True:
        yield random.randint(0, 100)

for i in r():
    print(i)
    time.sleep(1)
In short, test2.py generates random integers (0–100) and prints them out forever.
I want test1.py to launch it as a subprocess and read its stdout in real time (not waiting for the subprocess to finish).
But when I run the code, the output is:
python.exe test1.py
parent process
It blocks on stdout.read() forever.
I have tried:
Replacing stdout.read() with communicate(); it doesn't help, and as the Python docs say, it blocks until the subprocess terminates.
Using poll() to check on the subprocess and reading n bytes; it still blocks forever on read().
Modifying test2.py to generate only one number and break the loop; the parent process then prints it out immediately (I think because the child process terminated).
I searched a lot of similar answers and did as they suggested (use stdout instead of communicate()), but it still didn't work.
Could anyone explain why, and how to do this?
This is my platform information:
Python 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:54:40) [MSC v.1900 64 bit (AMD64)] on win32
It has to do with Python's output buffering (of the child process, in your case). Try disabling the buffering and your code should work. You can do it either by running python with the -u flag, or by calling sys.stdout.flush().
To use the -u flag you need to modify the arguments in the call to Popen; to use the flush() call you need to modify test2.py.
Also, your test1.py would print just a single byte, because you read only 1 byte from the pipe instead of reading in a loop.
Solution 1:
test1.py
import subprocess as sbp

with sbp.Popen(["python3", "-u", "./test2.py"], stdout=sbp.PIPE) as proc:
    print("parent process")
    while proc.poll() is None:  # Check that the child process is still running
        data = proc.stdout.read(1)  # Note: it reads bytes, not text
        print(data)
    print("end")
This way you don't have to touch the test2.py at all.
Solution 2:
test1.py
import subprocess as sbp

with sbp.Popen("./test2.py", stdout=sbp.PIPE) as proc:
    print("parent process")
    while proc.poll() is None:  # Check that the child process is still running
        data = proc.stdout.read(1)  # Note: it reads bytes, not text
        print(data)
    print("end")
test2.py
import random
import time
import sys

def r():
    while True:
        yield random.randint(0, 100)

for i in r():
    print(i)
    sys.stdout.flush()  # Here you force Python to instantly flush the buffer
    time.sleep(1)
This will print each received byte on a new line, e.g.:
parent process
b'9'
b'5'
b'\n'
b'2'
b'6'
b'\n'
You can switch the pipe to text mode by providing an encoding in the arguments, or by passing universal_newlines=True, which makes it use the default encoding. Then you can write directly to sys.stdout of your parent process. This basically streams the output of the child process to the output of the parent process.
test1.py
import subprocess as sbp
import sys

with sbp.Popen("./test2.py", stdout=sbp.PIPE, universal_newlines=True) as proc:
    print("parent process")
    while proc.poll() is None:  # Check that the child process is still running
        data = proc.stdout.read(1)  # Note: now it reads text, not bytes
        sys.stdout.write(data)
    print("end")
This provides the output as if test2.py were executed directly:
parent process
33
94
27
I am trying to play all the mp3 files in a folder in the background by creating a process using the multiprocessing library.
import os
import subprocess
from multiprocessing import Process

def music_player():
    music_folder = "/home/pi/Music/"
    files = os.listdir(music_folder)
    for mp3_file in files:
        print("playing " + mp3_file)
        p = subprocess.Popen(["omxplayer", "-o", "local", music_folder + mp3_file],
                             stdout=subprocess.PIPE,
                             stdin=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        print(p)
        print(p.poll())
        print(p.pid)
        p.wait()

p = Process(target=music_player)
print(p, p.is_alive())
p.start()
print(p.pid)
print(p, p.is_alive())

command = raw_input()
if command == "stop":
    print("terminating...")
    p.terminate()
    print(p, p.is_alive())
    print(p.exitcode)
After entering the "stop" command the script exits, but the music keeps playing, and running ps shows two omxplayer processes, which I then have to kill manually with kill <pid> to make the music stop.
I previously tried using the subprocess library alone and killing the process with kill() and terminate(), but the same issue occurred.
First observation: you don't need the multiprocessing module for what you're doing here. subprocess is for creating and managing processes that run other scripts and programs; multiprocessing is for creating and managing processes that call code already internal to your (parent) script.
I suspect that you're seeing the effect of buffering. By the time you kill this process, it has already buffered a significant amount of music out to the hardware (or even to the OS buffers for the device).
What happens if you start the same omxplayer program from your shell, but in the background (add the & token to the end of your Unix shell command line to push a program into the background)? Then use the kill command on that process and see if you get the same results.
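If the two leftover processes are the omxplayer wrapper plus the player it spawns, a hedged alternative (a sketch only; the file path is illustrative and this is untested with omxplayer) is to start the player in its own process group and kill the whole group on "stop":

import os
import signal
import subprocess

# Start the player in its own process group (start_new_session=True runs
# setsid() in the child), so the wrapper and anything it spawns can be
# signalled together.
p = subprocess.Popen(["omxplayer", "-o", "local", "/home/pi/Music/song.mp3"],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     start_new_session=True)

# ... later, on "stop": signal the whole group, then reap the child.
os.killpg(os.getpgid(p.pid), signal.SIGTERM)
p.wait()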
I am using linux/cpython 3.3/bash. Here's my problem:
#!/usr/bin/env python3
from subprocess import Popen, PIPE, DEVNULL
import time
s = Popen('cat', stdin=PIPE, stdout=DEVNULL, stderr=DEVNULL)
s.stdin.write(b'helloworld')
s.stdin.close()
time.sleep(1000) #doing stuff
This leaves cat as a zombie (and I'm busy "doing stuff" and can't wait on the child process). Is there a way in bash to wrap cat (e.g. by creating a grandchild) so that I can still write to cat's stdin, but init takes over as its parent? A Python solution would work too, and I can also use nohup, disown, etc.
Run the subprocess from another process whose only task is to wait on it.
import os
import sys
import time
from subprocess import Popen, PIPE, DEVNULL

pid = os.fork()
if pid == 0:  # intermediate child: spawn cat, feed it, wait on it, exit
    s = Popen('cat', stdin=PIPE, stdout=DEVNULL, stderr=DEVNULL)
    s.stdin.write(b'helloworld')
    s.stdin.close()
    s.wait()
    sys.exit()

time.sleep(1000)  # parent: doing stuff
One workaround might be to "daemonize" your cat: fork, then quickly fork again and exit in the 2nd process, with the 1st one wait()ing for the 2nd. The 3rd process can then exec() cat, which will inherit its file descriptors from its parent. Thus you need to create the pipe first, then close stdin in the child and dup() it from the pipe.
I don't know how to do these things in python, but I'm fairly certain it should be possible.
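Here is a hedged Python translation of that description (an untested sketch, assuming a POSIX system):

import os

# Create the pipe before forking so every process can reach it.
r, w = os.pipe()

pid = os.fork()
if pid == 0:                    # 1st child: exists only to fork again and exit
    grandchild = os.fork()
    if grandchild == 0:         # 2nd child: becomes cat
        os.close(w)
        os.dup2(r, 0)           # cat's stdin now reads from our pipe
        os.close(r)
        devnull = os.open(os.devnull, os.O_WRONLY)
        os.dup2(devnull, 1)
        os.dup2(devnull, 2)
        os.execlp('cat', 'cat')
    os._exit(0)                 # 1st child dies immediately; init adopts cat

os.waitpid(pid, 0)              # reap the 1st child right away; no zombie
os.close(r)
os.write(w, b'helloworld')
os.close(w)                     # EOF: cat exits and init reaps it
# ... continue "doing stuff" without ever having to wait on cat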
I need to run a subprocess from my script. The subprocess is an interactive (shell-like) application, to which I issue commands through the subprocess' stdin.
After I issue a command, the subprocess outputs the result to stdout and then waits for the next command (but does not terminate).
For example:
from subprocess import Popen, PIPE
p = Popen(args=[...], stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=False)
# Issue a command:
p.stdin.write('command\n')
# *** HERE: get the result from p.stdout ***
# CONTINUE with the rest of the script once there is not more data in p.stdout
# NOTE that the subprocess is still running and waiting for the next command
# through stdin.
My problem is getting the result from p.stdout. The script needs to read the output while there is new data in p.stdout; once there is no more data, I want to continue with the script.
The subprocess does not terminate, so I cannot use communicate() (which waits for the process to terminate).
I tried reading from p.stdout after issuing the command, like this:
res = p.stdout.read()
But the subprocess is not fast enough, and I just get an empty result.
I thought about polling p.stdout in a loop until I get something, but then how do I know I got everything? And it seems wasteful anyway.
Any suggestions?
Use gevent.subprocess from gevent 1.0 as a substitute for the standard subprocess module. It lets you write the concurrent tasks with synchronous logic, and it won't block the script. Here is a brief tutorial about gevent.subprocess.
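A minimal sketch of that idea (gevent 1.0+; 'some_interactive_tool' is a placeholder, not a real command): gevent's Popen pipes are cooperative, so a blocking readline only suspends one greenlet, not the whole script.

import gevent
from gevent.subprocess import Popen, PIPE

p = Popen(['some_interactive_tool'], stdin=PIPE, stdout=PIPE)  # placeholder command
p.stdin.write(b'command\n')
p.stdin.flush()

def reader():
    # readline here is cooperative: it yields to other greenlets while waiting
    for line in iter(p.stdout.readline, b''):
        print(line.decode(), end='')

g = gevent.spawn(reader)
gevent.sleep(2)   # the rest of the script keeps running in the meantime
g.kill()          # stop collecting output once we have moved on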
Use circuits.io.Process in circuits-dev to wrap an asynchronous call to subprocess.
Example: https://bitbucket.org/circuits/circuits-dev/src/tip/examples/ping.py
After investigating several options, I reached two solutions:
Setting the subprocess's stdout stream to be non-blocking using the fcntl module.
Using a thread to collect the subprocess's output into a proxy queue, and then reading the queue from the main thread (a sketch of this approach follows below).
I describe both solutions (and the problem and its origin) in this post.
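Here is a hedged sketch of the second (thread plus queue) approach; 'some_interactive_tool' is again a placeholder command, and the timeout is a tunable heuristic for "no more data":

import queue
import threading
from subprocess import Popen, PIPE

# A background thread moves lines from the child's stdout into a queue;
# the main thread polls the queue with a timeout and moves on once no
# new data arrives within that window.
def _drain(stream, q):
    for line in iter(stream.readline, b''):
        q.put(line)
    stream.close()

p = Popen(['some_interactive_tool'], stdin=PIPE, stdout=PIPE)  # placeholder command
q = queue.Queue()
threading.Thread(target=_drain, args=(p.stdout, q), daemon=True).start()

p.stdin.write(b'command\n')
p.stdin.flush()

while True:
    try:
        line = q.get(timeout=0.5)  # tune the timeout for your tool's pace
    except queue.Empty:
        break                      # no new output; continue with the script
    print(line.decode(), end='')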
Here's my main file:
import subprocess, time
pipe = subprocess.PIPE
popen = subprocess.Popen('pythonw -uB test_web_app.py', stdout=pipe)
time.sleep(3)
And here's test_web_app.py:
import web

class Handler:
    def GET(self): pass

app = web.application(['/', 'Handler'], globals())
app.run()
When I run the main file, the program executes, but a zombie process is left hanging and I have to kill it manually. Why is this? How can I get the Popen to die when the program ends? The Popen only hangs if I pipe stdout and sleep for a bit before the program ends.
Edit -- here's the final, working version of the main file:
import subprocess, time, atexit

pipe = subprocess.PIPE
popen = subprocess.Popen('pythonw -uB test_web_app.py', stdout=pipe)

def kill_app():
    popen.kill()
    popen.wait()

atexit.register(kill_app)
time.sleep(3)
You have not waited on the process. Once it's done, you have to call popen.wait().
You can check whether the process has terminated using the poll() method of the Popen object.
If you don't need the stdout of the web server process, you can simply omit the stdout option.
You can use the atexit module to install a hook that is called when your main file exits. The hook should call the kill() method of the Popen object and then wait() on it to make sure it has terminated.
If your main script doesn't need to be doing anything else while the subprocess executes I'd do:
import subprocess, time
pipe = subprocess.PIPE
popen = subprocess.Popen('pythonw -uB test_web_app.py', stdout=pipe, stderr=pipe)
out, err = popen.communicate()
(I think that if you specifically pipe stdout back to your program, you need to read it at some point to avoid creating zombie processes; communicate will read it in a reasonably safe way.)
Or if you don't care about parsing stdout / stderr, don't bother piping them:
popen = subprocess.Popen('pythonw -uB test_web_app.py')
popen.communicate()