Pausing a process or thread in Python

I've created a little audio player that looks up a chapter of an audiobook at a website I've specified, downloads it, and plays it. The only problem is that I can't pause it. To play the mp3 file I'm using os.system("afplay chapter.mp3").
I've thought about creating a thread with the os.system call in it, but I'm pretty sure I can't pause it that way. If the thread were a loop, I could just lock a variable it needs to access and unlock it when I'm ready to resume; but since this thread would be just one line of code, that doesn't seem possible. I've also looked at creating a process and sending SIGSTOP to it, but for some unknown reason that won't work.
import os, signal
from multiprocessing import Process

def play():
    os.system("afplay chapter.mp3")

p = Process(target=play)
p.start()
raw_input("press enter to pause: ")
os.kill(p.pid, signal.SIGSTOP)
The code just executes silently without stopping the process.
I know there are alternatives to afplay, but for now I'm going to stick with the os.system call. So my question is: how can I pause a one-line thread or process? Instead of creating a new process with the Process() call, do I need to find the process id of afplay? If so, how?

os.system creates a child process and waits for that child process to exit. You can use os.execv to replace a process with another program, or use subprocess.Popen to create a child process whose pid you can find with Popen.pid.
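For example, a minimal sketch of that suggestion (assuming macOS's afplay, as in the question): launch the player directly with subprocess.Popen so you hold its pid, then pause and resume it with SIGSTOP and SIGCONT.

import signal, subprocess

# afplay is now a direct child, so the signals reach the player itself
player = subprocess.Popen(["afplay", "chapter.mp3"])
raw_input("press enter to pause: ")
player.send_signal(signal.SIGSTOP)  # suspend playback
raw_input("press enter to resume: ")
player.send_signal(signal.SIGCONT)  # resume playback
player.wait()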

Why don't you just use the subprocess module instead of multiprocessing and os.system? That should give you better control over the spawned processes.

Related

Get a signal once a subprocess ends

I am using the subprocess.Popen function to run a command line. I want to check the subprocess with Popen.poll() once it has finished, without having to block in Popen.wait(). Any suggestions on how to do this?
import subprocess
job = subprocess.Popen('command line', shell=True)
print(job.poll())
As it is, job.poll() is printed immediately, before the subprocess has finished. I want to be notified when it ends, but I don't want to use wait() because the rest of the user interface becomes unusable until the process ends. This is in PyQt4.
As Python - wait on a condition without high cpu usage says, there are only two ways to wait for something: polling, or setting up and using a notification system.
If it's a UI, don't forget about one notification system you always have: the message queue.
Besides, you can always (and, in a UI, should always) perform time-consuming tasks in worker threads. In which case, you are just fine with a synchronous call.
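A sketch of that worker-thread approach, assuming PyQt4 as in the question: the thread blocks in wait() and reports the exit code back through Qt's message queue via a signal, so the UI stays responsive. JobWatcher is a hypothetical helper of mine, not a PyQt class.

import subprocess
from PyQt4 import QtCore

class JobWatcher(QtCore.QThread):
    done = QtCore.pyqtSignal(int)  # emitted with the exit code

    def __init__(self, cmd, parent=None):
        QtCore.QThread.__init__(self, parent)
        self.cmd = cmd

    def run(self):
        job = subprocess.Popen(self.cmd, shell=True)
        self.done.emit(job.wait())  # delivered to slots on the main thread

# usage:
# watcher = JobWatcher('command line')
# watcher.done.connect(on_job_done)  # on_job_done is your slot
# watcher.start()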

Python subprocess script keeps running after it is done

In one of my Django views, I am calling a python script and getting its pid with:
from subprocess import Popen
p = Popen(['python', 'script.py'])
mypid = p.pid
When trying to find out whether the process is still running from another page, I use the following function on mypid (thanks to this question):
import errno, os

def doesProcessExist(pid):
    if pid < 0:
        return False
    try:
        os.kill(pid, 0)  # signal 0 only checks for existence
    except OSError as e:
        return e.errno == errno.EPERM
    else:
        return True
No matter how long I wait, the process still shows up as running. The only thing that stops it is spawning a new python script process with Popen. Is there any way I can fix this? I am not sure if this is caused by Django not closing python properly after the script is finished, or by something else. In Ubuntu's process status manager, the process shows up as [python] <defunct>.
--
The problem occurs with every script.py I have tried. I am currently using one as simple as:
from time import sleep
sleep(5)
Really, what you're doing is wrong. When you use a high-level wrapper like subprocess.Popen, you need to manage the process through that object. Just having the PID elsewhere isn't enough to manage it.
If you insist on dealing in PIDs instead of Popen objects, then you should use the low-level APIs in os.
Fortunately, you're not doing anything complicated, like creating pipes to talk to the child process. So, you can just launch it with your favorite spawn variant, then wait for it with waitpid or one of its variants.
I'm assuming you're doing this all in a single-process web server. If you're using a forking web server, where the other page could be in a different process, even using PIDs won't work. The parent process has to reap the child, not some other arbitrary process. If you want to make that work, you'll have to make things more complicated, and you're really going to have to learn about the Unix process model before anyone can explain it to you.
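A sketch of that low-level route (os.spawnlp and script.py mirror the question; the non-blocking check is my addition):

import os

pid = os.spawnlp(os.P_NOWAIT, 'python', 'python', 'script.py')

# ... later, in the same process:
finished, status = os.waitpid(pid, os.WNOHANG)  # non-blocking check
if finished == 0:
    print('still running')
else:
    print('exited with status %d' % os.WEXITSTATUS(status))  # child is now reaped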
What you see is a zombie process. It doesn't keep running. It can't. It is dead. The only thing that is left is some info that allows for related processes to retrieve its status.
To find out whether a subprocess is alive without blocking, call p.poll(). If it returns None then the process is still alive, otherwise you can safely forget about it (it is already reaped by .poll()).
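A sketch of that check, reusing the Popen object from the question:

from subprocess import Popen

p = Popen(['python', 'script.py'])  # as in the question
# ... later, in the same process:
if p.poll() is None:
    print('still running')
else:
    print('finished with code %d' % p.returncode)  # poll() already reaped it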
The subprocess module calls a _cleanup() function that reaps zombie processes inside the Popen() constructor, so normally your script won't create many zombie processes anyway.
To see a list of zombie processes:
import os
#NOTE: don't use Popen() here
print os.popen(r"ps aux | grep Z | grep -v grep").read(),
Processes in Unix stick around until the parent waits for them. Calling wait on the object returned by Popen will wait for the process to be done and reap it so it goes away. Until you do that, it will exist as a zombie process. See this message for info on getting the process to go away in the background while your web server runs, without waiting for it in a foreground thread/view.
So, let's say that you do
p = subprocess.Popen(...)
At some point you need to call
p.wait()
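If the view itself must not block, a minimal sketch (my suggestion, in the spirit of the linked message) hands the wait to a background thread:

import subprocess, threading

p = subprocess.Popen(['python', 'script.py'])
# the thread blocks in wait() and reaps the child; the view returns immediately
threading.Thread(target=p.wait).start()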

python/django spawn background process and avoid zombie process

I need to spawn a background process in Django: the view returns immediately, and the background process carries on, makes some changes, then updates the db. This is done with the os.spawnl() function, which calls a separate .py file.
The problem is that after the background process is done, it becomes a zombie process [python] <defunct>.
How do I avoid that? I followed this and this example, but I still get the child process as a zombie after the Django render process.
I want to take this chance to practice my *nix process management skills so please do me a favor, don't give me Celery or other mq/async task solutions, and I hate dependencies.
This got too long for a comment:
The wait syscall (which os.wait is a wrapper for) reaps exit codes/pids from dead processes. You will want to call os.wait in the process that is a generation above your zombie processes, i.e. the parent of the zombie processes. The parent process receives a SIGCHLD signal when one of its child processes dies. If you insist on doing all of this yourself, you will need to install a signal handler to trap SIGCHLD and call os.wait in that handler. Read some documentation on Unix process handling, as well as the Python documentation on the os module: there are non-blocking variations of the os.wait function which may be helpful.
import os, signal

# reap a dead child whenever SIGCHLD arrives
signal.signal(signal.SIGCHLD, lambda signum, frame: os.wait())
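A slightly more defensive handler (my own variation, not from the answer above): several children can exit before the handler runs, and one SIGCHLD delivery can stand for all of them, so reap in a loop with the non-blocking os.WNOHANG flag:

import os, signal

def reap_children(signum, frame):
    while True:
        try:
            pid, status = os.waitpid(-1, os.WNOHANG)
        except OSError:  # no children at all
            break
        if pid == 0:  # children exist, but none have exited yet
            break

signal.signal(signal.SIGCHLD, reap_children)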
I had a similar problem. I used active_children() from the multiprocessing module; calling it has the side effect of joining any processes that have already finished.
import multiprocessing
# somewhere in middleware, or wherever appropriate, call:
multiprocessing.active_children()

Listening for subprocess failure in python

Using subprocess.Popen(), I'm launching a process that is supposed to take a long time. However, there is a chance that the process will fail shortly after it launches (producing a return code of 1). If that happens, I want to intercept the failure and present an explanatory message to the user. Is there a way to "listen" to the process and respond if it fails? I can't just use Popen.wait() because my python program has to keep running.
The hack I have in place right now is to time.sleep() my python program for 0.5 seconds (which should be enough time for the subprocess to fail, if it's going to do so). After the python program resumes, it polls the subprocess to determine whether it has failed.
I imagine that a better solution might use threading and Popen.wait(), but I'm a relative beginner to python.
Edit:
The subprocess is a Java daemon that I'm launching. If another instance of the daemon is already running on the system, the Java subprocess will exit with a return code of 1, and I want to intercept the messy Java exception stack trace and present an understandable error message to the user.
Two approaches:
Call Popen.wait() on a thread as you suggested yourself, then call an error handler function if the exit code is non-zero. Make sure that the error handler is thread safe, preferably by dispatching the error message to the main thread if your application has an event loop.
Rewrite your application to use an event loop that already supports monitoring child processes, such as pyev. If you just want to monitor one subprocess, this is probably overkill.
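A sketch of the first approach; the java command line and the on_failure handler are placeholders of mine:

import subprocess, threading

def on_failure(message):
    # in a real UI, dispatch this to the event-loop/main thread instead
    print(message)

def watch(proc):
    if proc.wait() != 0:  # blocks this worker thread only
        on_failure('daemon exited with code %d' % proc.returncode)

daemon = subprocess.Popen(['java', '-jar', 'daemon.jar'])
threading.Thread(target=watch, args=(daemon,)).start()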

subprocess.wait() not waiting for Popen process to finish (when using threads)?

I am experiencing some problems when using subprocess.Popen() to spawn several instances of the same application from my python script, using threads to have them running simultaneously. In each thread I start the application with the Popen() call, and then I wait for it to finish by calling wait(). The problem seems to be that the wait() call does not actually wait for the process to finish. I experimented by using only one thread, and by printing out text messages when the process starts and when it finishes. So the thread function would look something like this:
def worker():
    while True:
        job = q.get()  # q is a global Queue of jobs
        print('Starting process %d' % job['id'])
        proc = subprocess.Popen(job['cmd'], shell=True)
        proc.wait()
        print('Finished process %d' % job['id'])
        q.task_done()  # mark this queue item as done
But even when I only use one thread, it will print out several "Starting process..." messages, before any "Finished process..." message appears. Are there any cases when wait() does not actually wait? I have several different external applications (C++ console applications), which in turn will have several instances running simultaneously, and for some of them my code works, but for others it won't. Could there be some issue with the external applications that somehow affects the call to wait()?
The code for creating the threads looks something like this:
for i in range(1):
    t = Thread(target=worker)
    t.daemon = True
    t.start()

q.join()  # wait for the queue to empty
Update 1:
I should also add that for some of the external applications I sometimes get a return code (proc.returncode) of -1073471801. For example, one of the external applications will give that return code the first two times Popen is called, but not the last two (when I have four jobs).
Update 2:
To clear things up, right now I have four jobs in the queue, which are four different test cases. When I run my code, for one of the external applications the first two Popen-calls generate the return code -1073471801. But if I print the exact command which Popen calls, and run it in a command window, it executes without any problems.
Solved!
I managed to solve the issues I was having. I think the problem was my lack of experience in threaded programming: I missed the fact that once I had created my first worker threads, they would keep on living until the python script exits. By mistake I created more worker threads each time I put new items on the queue (I do that in batches for every external program I want to run). So by the time I got to the fourth external application, I had four threads running simultaneously even though I thought I only had one.
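In other words (a sketch of the fix, assuming Python 2's Queue module to match the question's code): create the worker pool exactly once, before queueing any batch, and afterwards only q.put() new jobs.

from Queue import Queue
from threading import Thread

q = Queue()

# create the pool once; queueing later batches must not spawn more workers
for i in range(4):
    t = Thread(target=worker)  # worker() as defined above
    t.daemon = True
    t.start()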
You could also use check_call() instead of Popen. check_call() waits for the command to finish, even when shell=True, and raises CalledProcessError if the command exits with a non-zero code.
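For example (the command line is a placeholder):

import subprocess

# blocks until the command finishes; raises CalledProcessError on a
# non-zero exit code
subprocess.check_call('mycommand --flag', shell=True)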
Sadly, when running your subprocess using shell=True, wait() will only wait for the sh wrapper process to finish, and not for the command cmd itself.
I suggest not using shell=True if possible; if you can't avoid it, you can create a process group, as in this answer, and use os.waitpid to wait for the process group rather than just the shell process.
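A sketch of the first suggestion (the command line is a placeholder): pass an argument list so no shell is involved, and wait() blocks on the actual program.

import shlex, subprocess

cmd = 'myprog --input data.txt'  # hypothetical command line
proc = subprocess.Popen(shlex.split(cmd))  # no shell wrapper in between
proc.wait()
print('exit code: %d' % proc.returncode)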
Hope this was helpful :)
Make sure all the applications you are calling return valid exit codes when they finish.
I was having issues as well, but was inspired by yours.
Mine looks like this, and works beautifully:
# Windows-only: these STARTUPINFO flags hide the child's console window
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags = subprocess.STARTF_USESHOWWINDOW
startupinfo.wShowWindow = subprocess.SW_HIDE

proc = subprocess.Popen(command, startupinfo=startupinfo)
proc.communicate()  # waits for the process to finish
proc.wait()  # redundant after communicate(), but harmless
Notice that this one hides the window as well.
