I am trying to play all the mp3 files in a folder in the background by creating a process using the multiprocessing library.
import os
import subprocess
from multiprocessing import Process

def music_player():
    music_folder = "/home/pi/Music/"
    files = os.listdir(music_folder)
    for mp3_file in files:
        print("playing " + mp3_file)
        p = subprocess.Popen(["omxplayer","-o","local",music_folder+mp3_file],
                             stdout = subprocess.PIPE,
                             stdin = subprocess.PIPE,
                             stderr = subprocess.PIPE)
        print(p)
        print(p.poll())
        print(p.pid)
        p.wait()
p = Process(target = music_player)
print(p, p.is_alive())
p.start()
print(p.pid)
print(p, p.is_alive())
command = raw_input()
if(command == "stop"):
    print("terminating...")
    p.terminate()
    print(p, p.is_alive())
    print(p.exitcode)
After entering the "stop" command the code exits, but the music keeps playing, and when I run ps I see two omxplayer processes, which I then have to kill manually with kill <pid> to stop the music.
I previously tried using the subprocess library alone, killing the process with kill() and terminate(), but the same issue occurred.
First observation: you don't need the multiprocessing module for what you're doing here. subprocess is for creating and managing processes that run other scripts and programs; multiprocessing is for creating and managing processes that call code which is already internal to your (parent) script.
I suspect that you're seeing the effect of buffering. By the time you kill this process it has already buffered a significant amount of music out to the hardware (or even into the OS buffers for the device).
What happens if you start the same omxplayer program from your shell, but in the background (add the & token to the end of your Unix shell command line to push a program into the background)? Then use the kill command on that process and see if you get the same results.
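Beyond that test, note that terminating the multiprocessing Process only kills your Python child, not the omxplayer it spawned (and omxplayer is normally a shell wrapper that launches omxplayer.bin, which would explain the two processes you see in ps). One common fix, sketched below with an illustrative file name, is to start the player in its own process group and kill the whole group when the user types "stop":

import os
import signal
import subprocess

# Sketch: run omxplayer as its own session/process-group leader so the
# wrapper script and omxplayer.bin can be signalled together.
p = subprocess.Popen(["omxplayer", "-o", "local", "/home/pi/Music/song.mp3"],
                     preexec_fn=os.setsid)

# ... later, when the user types "stop":
os.killpg(os.getpgid(p.pid), signal.SIGTERM)  # signal the whole group
p.wait()                                      # reap the wrapper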
I'm using Python 3.6.8, and I have a situation where one process cannot continue until another one is finished.
p1 is in the main thread and must stay open for a long time doing things.
p2 must run in a separate thread (daemon=True), read stdout/stderr using communicate(), and finish.
(all pipes are needed and must not be disabled)
As you will see below, when the code is run with Python 3.10.4 I get the output "thread.popen/communicate", but Python 3.6.8 will not print this line.
I think it gets stuck inside communicate().
What I'm asking for is a workaround for 3.6.8 and, optionally, an explanation of what is going on with Python 3.6.8. Is it a bug with locks, or maybe pipes?
Thank you!
import threading
from time import sleep
from subprocess import Popen, PIPE, STDOUT

def run():
    print('thread')
    p2 = Popen('git', stdin = PIPE, stdout = PIPE, stderr = PIPE)
    o,e = p2.communicate()
    print('thread.popen/communicate')

if __name__ == '__main__':
    threading.Thread(target=run, daemon=True).start()
    p1 = Popen('cmd', stdin = PIPE, stdout = PIPE, stderr = STDOUT)
    print('main.popen')
    # p1.wait()
    sleep(2)
F:\MySSDPrograms\cudatext\py\cuda_lsp>python.exe new.py
thread
main.popen
thread.popen/communicate
F:\MySSDPrograms\cudatext\py\cuda_lsp>f:\Python36\python.exe new.py
thread
main.popen
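My guess (an assumption, not something I can verify against your exact setup): in Python 3.6 on Windows, Popen temporarily makes the child ends of its pipes inheritable while it creates the process, so two Popen calls running concurrently can leak handles into each other's children; p1 then keeps the write end of p2's pipes open, and communicate() never sees EOF. Later 3.x versions pass an explicit handle list to the child, which would explain why 3.10.4 works. A workaround for 3.6.8 is to serialize process creation with a lock:

import threading
from time import sleep
from subprocess import Popen, PIPE, STDOUT

popen_lock = threading.Lock()  # only one Popen may be mid-creation at a time

def run():
    print('thread')
    with popen_lock:           # p1 can no longer inherit p2's pipe handles
        p2 = Popen('git', stdin = PIPE, stdout = PIPE, stderr = PIPE)
    o,e = p2.communicate()     # blocking outside the lock is fine
    print('thread.popen/communicate')

if __name__ == '__main__':
    threading.Thread(target=run, daemon=True).start()
    with popen_lock:
        p1 = Popen('cmd', stdin = PIPE, stdout = PIPE, stderr = STDOUT)
    print('main.popen')
    sleep(2)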
I work on Unix, and I have a "general tool" that launches another process (a GUI utility) in the background and then exits.
I call my "general tool" from a Python script, using Popen and the proc.communicate() method.
My "general tool" runs for ~1 second, launches the GUI process in the background, and exits immediately.
The problem is that proc.communicate() keeps waiting for the process even though it has already terminated. I have to manually close the GUI (the subprocess running in the background) before proc.communicate() returns.
How can this be solved?
I need proc.communicate() to return once the main process terminates, and not wait for the subprocesses running in the background...
Thanks!!!
EDIT:
Adding some code snippets:
My "General Tool" last Main lines (Written in Perl):
if ($args->{"gui"}) {
    my $script_abs_path = abs_path($0);
    my $script_dir = dirname($script_abs_path);
    my $gui_util_path = $script_dir . "/bgutil";
    system("$gui_util_path $args->{'work_area'} &");
}
return 0;
My Python script that runs the "General Tool":
cmd = PATH_TO_MY_GENERAL_TOOL
proc = subprocess.Popen(cmd, shell = True, stdout = subprocess.PIPE, stderr = subprocess.STDOUT)
stdout, dummy = proc.communicate()
exit_code = proc.returncode
if exit_code != 0:
    print 'The tool has failed with status: {0}. Error message is:\n{1}'.format(exit_code, stdout)
    sys.exit(1)
print 'This line is printed only when the GUI process is terminated...'
Don't use communicate. communicate is explicitly designed to wait until the stdout of the process has closed. Presumably Perl is not closing stdout, as it's leaving it open for its own subprocess to write to.
You also don't really need to use Popen, as you're not really using its features. That is, you create pipes and then just reprint to stdout with your own message. And it doesn't look like you need a shell at all.
Try using subprocess.call or even subprocess.check_call.
e.g.
subprocess.check_call(cmd)
No need to check the return value as check_call throws an exception (which contains the exit code) if the process returns with a non-zero exit code. The output of the process is directly written to the controlling terminal -- no need to redirect the output.
Finally, if cmd is a single string containing both the path to an executable and its arguments, use shlex.split to break it into an argument list.
e.g.
cmd = "echo whoop" # or cmd = "ls 'does not exist'"
subprocess.check_call(shlex.split(cmd))
Sample code to test with:
mypython.py
import subprocess, shlex
subprocess.check_call(shlex.split("perl myperl.pl"))
print("finishing top level process")
myperl.pl
print "starting perl subprocess\n";
my $cmd = 'python -c "
import time
print(\'starting python subprocess\')
time.sleep(3);
print(\'finishing python subprocess\')
" &';
system($cmd);
print "finishing perl subprocess\n";
Output is:
$ python mypython.py
starting perl subprocess
finishing perl subprocess
finishing top level process
$ starting python subprocess
finishing python subprocess
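If you do need the tool's output for the error message, one alternative (a sketch using the same cmd variable as your script) is to hand the child a real file instead of a pipe: proc.wait() returns as soon as the direct child exits, no matter which handles the background GUI still holds, and you read the file afterwards.

import subprocess
import sys
import tempfile

# Sketch: capture output in a temporary file rather than a pipe, so
# returning doesn't depend on every inherited pipe handle being closed.
with tempfile.TemporaryFile() as out:
    proc = subprocess.Popen(cmd, shell = True, stdout = out,
                            stderr = subprocess.STDOUT)
    exit_code = proc.wait()   # returns when the tool itself exits
    out.seek(0)
    stdout = out.read()

if exit_code != 0:
    print 'The tool has failed with status: {0}. Error message is:\n{1}'.format(exit_code, stdout)
    sys.exit(1)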
I wrote a Twitter-like service, and I want to do some stress testing on it with Python.
I have a client program, called "client".
I want to write a script that will start several processes of the "client" program, send a few messages, wait a few seconds, and exit.
What I wrote is
p = subprocess.Popen(['client','c1','localhost','4981'],stdin=subprocess.PIPE)
Now I can't call the communicate method, because it waits for an EOF, but the process isn't over yet.
Calling stdin.flush doesn't seem to work either.
Any tips on how to do this?
(I don't have to do this in Python; if there's a way to do it with a bash script, that's also OK.)
You can use a bash loop to run several clients and put them into the background. If you need to communicate with one of them, bring it to the foreground using fg and then push it to the background again using bg.
Call p.stdin.close() to signal that there are no more messages:
#!/usr/bin/python
import time
from subprocess import Popen, PIPE

message = 'hello'  # placeholder: whatever your client expects on stdin

# start several processes of the "client" program
processes = [Popen(['client','c1','localhost','4981'], stdin=PIPE)
             for _ in range(5)]

# send a few messages
for p in processes:
    print >>p.stdin, message
    p.stdin.close()

# wait a few seconds (remove finished processes while we wait)
for _ in range(3):
    for p in processes[:]:
        if p.poll() is not None:
            processes.remove(p)
    time.sleep(1)

# and will exit (kill unfinished subprocesses)
for p in processes:
    if p.poll() is None:
        p.kill()
        p.wait()
From within my Python script, I want to start another Python script which will run in the background waiting for the instruction to terminate.
Host Python script (H1) starts subprocess P1.
P1 performs some short-lived work and returns a sentinel to indicate that it is now going to sleep, awaiting the instruction to terminate.
H1 polls for this sentinel repeatedly. When it receives the sentinel, it performs some other IO bound task, and when that completes, tells P1 to die gracefully (meaning close any resources it has acquired).
Is this feasible with the subprocess module?
Yes, start the process with:
p=subprocess.Popen([list for the script to execute], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
You can then read from p.stdout and p.stderr to watch for your sentinel and write to p.stdin to send messages to the child process. If you are running on a posix system, you might consider using pexpect instead; it doesn't support MS Windows, but it handles communicating with child processes better than subprocess.
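For instance, sticking with subprocess, here's a minimal sketch of the sentinel pattern; the sentinel string and the inline child script are invented for illustration:

import subprocess
import sys

# Hypothetical child script (inlined for the sketch): print a sentinel,
# then wait on stdin for the instruction to terminate.
child_code = (
    "import sys\n"
    "print('READY')\n"
    "sys.stdout.flush()\n"
    "while True:\n"
    "    line = sys.stdin.readline()\n"
    "    if not line or line.strip() == 'quit':\n"
    "        break\n"
)

p = subprocess.Popen([sys.executable, '-u', '-c', child_code],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     universal_newlines=True)

# H1 polls stdout until the sentinel arrives.
while p.stdout.readline().strip() != 'READY':
    pass

# ... perform the other IO bound task here ...

p.stdin.write('quit\n')  # tell the child to die gracefully
p.stdin.flush()
p.wait()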
"""H1"""
from multiprocessing import Process, Pipe
import sys
def P1(conn):
print 'P1: some short lived work'
sys.stdout.flush()
conn.send('work done')
# wait for shutdown command...
conn.recv()
conn.close()
print 'P1: shutting down'
if __name__ == '__main__':
parent_conn, child_conn = Pipe()
p = Process(target=P1, args=(child_conn,))
p.start()
print parent_conn.recv()
print 'H1: some other IO bound task'
parent_conn.send("game over")
p.join()
Output:
P1: some short lived work
work done
H1: some other IO bound task
P1: shutting down
Here's my main file:
import subprocess, time
pipe = subprocess.PIPE
popen = subprocess.Popen('pythonw -uB test_web_app.py', stdout=pipe)
time.sleep(3)
And here's test_web_app.py:
import web

class Handler:
    def GET(self): pass

app = web.application(['/', 'Handler'], globals())
app.run()
When I run the main file, the program executes, but a zombie process is left hanging and I have to kill it manually. Why is this? How can I get the Popen to die when the program ends? The Popen only hangs if I pipe stdout and sleep for a bit before the program ends.
Edit -- here's the final, working version of the main file:
import subprocess, time, atexit

pipe = subprocess.PIPE
popen = subprocess.Popen('pythonw -uB test_web_app.py', stdout=pipe)

def kill_app():
    popen.kill()
    popen.wait()

atexit.register(kill_app)
time.sleep(3)
You have not waited for the process. Once it's done, you have to call popen.wait() so its exit status is collected and the zombie entry disappears.
You can check whether the process has terminated by using the poll method of the Popen object.
If you don't need the stdout of the web server process, you can simply omit the stdout option.
You can use the atexit module to implement a hook that gets called when your main file exits. This should use the kill method of the Popen object and then wait on it to make sure that it's terminated.
If your main script doesn't need to do anything else while the subprocess executes, I'd do:
import subprocess, time
pipe = subprocess.PIPE
popen = subprocess.Popen('pythonw -uB test_web_app.py', stdout=pipe, stderr=pipe)
out, err = popen.communicate()
(I think if you specifically pipe stdout back to your program, you need to read it at some point to avoid creating zombie processes - communicate will read it in a reasonably safe way).
Or if you don't care about parsing stdout / stderr, don't bother piping them:
popen = subprocess.Popen('pythonw -uB test_web_app.py')
popen.communicate()
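And on Python 3.3+, if you'd rather silence the server's output than have it written to your terminal, subprocess.DEVNULL avoids creating pipes you never read:

import subprocess

# Discard the child's output entirely; no pipe means nothing to drain.
popen = subprocess.Popen('pythonw -uB test_web_app.py',
                         stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL)
popen.wait()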