I've been using Python to run a video processing program on a large collection of .mp4 files. The video processing program (which I did not write and can't alter) does not exit once it reaches the final frame of the video, so using os.system(cmd) in a loop going through all the .mp4 files didn't work for me unless I sat there killing the processing program after each video ended.
I tried to solve this by launching a subprocess and terminating it after a predetermined amount of time (long enough for the video to end):
import os
import subprocess
from time import sleep

for file in os.listdir(myPath):
    if file.endswith(".mp4"):
        vidfile = os.path.join(myPath, file)
        command = "./Tracking " + vidfile
        p = subprocess.Popen(command, shell=True)
        sleep(840)
        p.terminate()
However, the Tracking program still doesn't exit, so I end up with tons of videos open at the same time. I can only get rid of them by force quitting each separate frame or by using kill -9 id for the id of that particular instance of the program. I've read that using shell=True isn't recommended, but I'm not sure if that would cause this behavior.
How can I kill the Tracking program after a certain amount of time? I'm extremely new to Python and am not sure how to do this. I was considering doing something like os.system("kill -9 id") after the sleep(), but I don't know how to get the id of the program either.
Drop shell=True and use p.kill() to kill the process:
import subprocess
from time import time as timer, sleep
p = subprocess.Popen(["./Tracking", vidfile])
deadline = timer() + 840
while timer() < deadline:
    if p.poll() is not None:  # already finished
        break
    sleep(1)
else:  # timeout
    try:
        p.kill()
    except EnvironmentError:
        pass  # ignore errors
    p.wait()
If that doesn't help, try creating a new process group and killing it instead. See How to terminate a python subprocess launched with shell=True.
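A minimal sketch of that approach, assuming a POSIX system and reusing vidfile from the loop above; killing the process group also takes down any children ./Tracking spawns:

import os
import signal
import subprocess
from time import sleep

# Start ./Tracking in its own process group (POSIX only).
p = subprocess.Popen(["./Tracking", vidfile], preexec_fn=os.setsid)
sleep(840)
if p.poll() is None:                              # still running after the timeout
    os.killpg(os.getpgid(p.pid), signal.SIGKILL)  # kill the whole group
p.wait()                                          # reap the child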
I am using subprocess in Python to call an external program on Windows. I control the processes with a ThreadPool so that at most 6 run at the same time, and a new process starts as soon as one finishes.
Code as below:
### some code above

### Code of Subprocess Part
import subprocess
from multiprocessing.pool import ThreadPool as Pool

def FAST_worker(file):
    p = subprocess.Popen([r'E:/pyworkspace/FAST/FAST_RV_W64.exe', file],
                         cwd=r'E:/pyworkspace/FAST/',
                         shell=True)
    p.wait()

# List of *.in filenames
FAST_in_pathname_li = [
    '334.in',
    '893.in',
    '9527.in',
    ...
    '114514.in',
    '1919810.in',
]

# Limit max 6 processes at same time
with Pool(processes=6) as pool:
    for result in pool.imap_unordered(FAST_worker, FAST_in_pathname_li):
        pass

### some code below
I ran into a problem when the external program unexpectedly terminated and showed an error message pop-up. Though the other 5 processes kept going, the whole run eventually got stuck at the "subprocess part" and couldn't go forward anymore (unless I came to my desk and manually clicked "Shut down the program").
What I want to know is how I can avoid the pop-up and keep the whole script going, e.g. by bypassing the error message, rather than clicking manually, so that no time is wasted.
Since we don't know enough about the program FAST_worker is calling, I'll assume you already checked there isn't any "kill on error" or "quiet" mode that would be more convenient to use in a script.
My two cents: maybe you can set up a timeout on the worker execution, so that a stuck process is killed automatically after a certain delay.
Building on the snippet provided here, here is a draft:
import subprocess
from threading import Timer

def FAST_worker(file, timeout_sec):
    def kill_proc():
        """called by the Timer thread upon expiration"""
        p.kill()
        # maybe add the task to a list of failed tasks, for traceability
    p = subprocess.Popen([r'E:/pyworkspace/FAST/FAST_RV_W64.exe', file],
                         cwd=r'E:/pyworkspace/FAST/',
                         shell=True)
    # set up a timer to kill the process after a timeout
    timer = Timer(timeout_sec, kill_proc)
    try:
        timer.start()
        p.wait()  # stdout/stderr are not piped, so wait() is enough here
    finally:
        timer.cancel()
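Since pool.imap_unordered passes a single argument to the worker, one way to supply the extra timeout_sec parameter is functools.partial; a sketch (the 600-second value is just an illustrative choice):

from functools import partial

# Bind a fixed timeout so the worker still takes a single filename argument.
with Pool(processes=6) as pool:
    for result in pool.imap_unordered(partial(FAST_worker, timeout_sec=600),
                                      FAST_in_pathname_li):
        pass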
Note that there are also GUI automation libraries in python that can do the clicking for you, but that is likely to be more tedious to program:
tutorial for pyAutoGui
SO question on the subject
Update 2: So I piped the output of stderr, and it looks like when I include shell=True, I just get the help file for omxplayer (it lists all the command line switches and such). Is it possible that shell=True might not play nicely with omxplayer?
Update: I came across that link before but it failed on me so I moved on without digging deeper. After Tshepang suggested it again I looked into it further. I have two problems, and I'm hoping the first is caused by the second. The first problem is that when I include shell=True as an arg, the video never plays. If I don't include it, the video plays, but is not ever killed. Updated code below.
So I am trying to write a python app for my raspberry pi that plays a video on a loop (I came across Popen as a good way to accomplish this using OMXplayer) and then on keyboard interrupt, it kills that process and opens another process (playing a different video). My eventual goal is to be able to use vid1 as a sort of "screensaver" and have vid2 play when a user interacts with the system, but for now I'm simply trying to kill vid1 on keyboard input and having quite a hard time doing it. I'm hoping someone can tell me where my code is falling down.
Fair warning: I'm extremely new to Python and Linux-based systems in general, so if I'm doing this terribly wrong, please feel free to redirect me, but this seemed to be the fastest way to get there.
Here is my code as it stands:
import subprocess
import os
import signal

vid1 = ['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4']

while True:
    #vid = subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'], stdout=subprocess.PIPE, shell=True, preexec_fn=os.setsid)
    vid = subprocess.Popen(vid1, stdout=subprocess.PIPE, preexec_fn=os.setsid)
    print 'SID is: ', vid.pid  # with os.setsid the child's session id equals its pid
    #vid = subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'])
    id = raw_input()
    if not id:
        break
    os.killpg(vid.pid, signal.SIGTERM)
    print "your input: ", id

print "While loop has exited"
So I am trying to write a python app for my raspberry pi that plays a video on a loop (I came across Popen as a good way to accomplish this using OMXplayer) and then on keyboard interrupt, it kills that process and opens another process (playing a different video).
By default, SIGINT is propagated to all processes in the foreground process group, see "How Ctrl+C works". preexec_fn=os.setsid (or os.setpgrp) actually prevents it: use it only if you do not want omxplayer to receive Ctrl+C, i.e., use it if you manually call os.killpg when you need to kill a process tree (assuming omxplayer's children do not change their process group).
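A minimal sketch of that manual pattern (assuming you do want to kill omxplayer and its children yourself rather than let Ctrl+C reach them):

import os
import signal
import subprocess

# A new session also means a new process group whose id equals the child's pid.
vid = subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'],
                       preexec_fn=os.setsid)
# ... later, when the video should stop:
os.killpg(vid.pid, signal.SIGTERM)  # signal the whole process group
vid.wait()                          # reap the player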
"keyboard interrupt" (sigint signal) is visible as KeyboardInterrupt exception in Python. Your code should catch it:
#!/usr/bin/env python
from subprocess import call, check_call

try:
    rc = call(['omxplayer', 'first file'])
except KeyboardInterrupt:
    check_call(['omxplayer', 'second file'])
else:
    if rc != 0:
        raise RuntimeError('omxplayer failed to play the first file, '
                           'return code: %d' % rc)
The above assumes that omxplayer exits on Ctrl+C.
You could see the help message for several reasons, e.g., omxplayer does not support the --loop option (run it manually to check), or you mistakenly use shell=True while passing the command as a list: always pass the command as a single string if you need shell=True, and conversely, always (on POSIX) pass the command as a list of arguments if shell=False (the default).
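For illustration, the two forms look like this (a sketch using the question's omxplayer command):

import subprocess

# shell=True: the command is a single string, parsed by /bin/sh
subprocess.Popen('omxplayer --loop /home/pi/Vids/2779832.mp4', shell=True)

# shell=False (the default): the command is a list of arguments, no shell involved
subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'])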
I don't know if what I intend to do is even possible or reasonable, so I'm open for any suggestions.
Currently I have a script which starts n subprocesses of some.exe id, which I regularly poll() to determine whether they terminated and, if so, with which errorlevel. The subprocesses are saved in a dict (key = subprocess, value = id). n instances of some.exe are kept running, each with its own id, until every id of a predefined list has been processed.
some.exe has no GUI of its own; it writes its progress to stdout, which I do not need. Now, for some reason, some.exe sometimes doesn't continue, as if it were waiting - thus poll() never produces an errorlevel and done = proc.poll() is not None is never true. Sooner or later this leaves my dict of n procs all inactive, and the overall progress is stuck.
If issued manually in a cmd window, some.exe with an id that shows this behaviour in the script works perfectly fine.
Therefore my idea was to start a new cmd window from the script, which runs some.exe with the id, but I should still be able to poll() said exe.
Here's roughly what I have so far:
while id_list:
    if len(proc_dict) < n:
        id = next_id()
        proc = subprocess.Popen(["some.exe", id], stdout=subprocess.PIPE)
        proc.poll()
        proc_dict[proc] = id
    else:
        done_procs = []
        for proc in proc_dict.keys():
            done = proc.poll() is not None
            if done:
                print("returncode: " + str(proc.returncode))
                done_procs.append(proc)
        if done_procs:
            for p in done_procs:
                del proc_dict[p]
        time.sleep(2)
edit: if I call proc.communicate()[0] in the else: branch where the sleep is located, some.exe is able to continue/finish, but since communicate waits for the process, it slows down the script way too much.
I believe the problem is that some.exe's output is enough to fill the OS pipe buffer, causing a deadlock. There is a warning about this in the docs here.
If you want to discard the stdout, instead of sending it to a pipe you could send it to devnull; this post explains how to do that.
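A minimal sketch of that idea (discarding some.exe's output instead of piping it, so the pipe buffer can never fill up):

import os
import subprocess

# Send stdout (and stderr) to the null device instead of a pipe.
# On Python 3.3+ you can use stdout=subprocess.DEVNULL instead.
devnull = open(os.devnull, 'w')
proc = subprocess.Popen(["some.exe", id],
                        stdout=devnull,
                        stderr=subprocess.STDOUT)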
I have a python program which launches subprocesses using Popen and consumes their output nearly real-time as it is produced. The code of the relevant loop is:
def run(self, output_consumer):
    self.prepare_to_run()
    popen_args = self.get_popen_args()
    logging.debug("Calling popen with arguments %s" % popen_args)
    self.popen = subprocess.Popen(**popen_args)
    while True:
        outdata = self.popen.stdout.readline()
        if not outdata and self.popen.returncode is not None:
            # Terminate when we've read all the output and the returncode is set
            break
        output_consumer.process_output(outdata)
        self.popen.poll()  # updates returncode so we can exit the loop
    output_consumer.finish(self.popen.returncode)
    self.post_run()

def get_popen_args(self):
    return {
        'args': self.command,
        'shell': False,  # Just being explicit for security's sake
        'bufsize': 0,  # More likely to see what's being printed as it happens
                       # Not guaranteed since the process itself might buffer its output
                       # run `python -u` to unbuffer output of a python process
        'cwd': self.get_cwd(),
        'env': self.get_environment(),
        'stdout': subprocess.PIPE,
        'stderr': subprocess.STDOUT,
        'close_fds': True,  # Doesn't seem to matter
    }
This works great on my production machines, but on my dev machine, the call to .readline() hangs when certain subprocesses complete. That is, it will successfully process all of the output, including the final output line saying "process complete", but then will again poll readline and never return. This method exits properly on the dev machine for most of the sub-processes I call, but consistently fails to exit for one complex bash script that itself calls many sub-processes.
It's worth noting that popen.returncode gets set to a non-None (usually 0) value many lines before the end of the output. So I can't just break out of the loop when that is set or else I lose everything that gets spat out at the end of the process and is still buffered waiting for reading. The problem is that when I'm flushing the buffer at that point, I can't tell when I'm at the end because the last call to readline() hangs. Calling read() also hangs. Calling read(1) gets me every last character out, but also hangs after the final line. popen.stdout.closed is always False. How can I tell when I'm at the end?
All systems are running python 2.7.3 on Ubuntu 12.04LTS. FWIW, stderr is being merged with stdout using stderr=subprocess.STDOUT.
Why the difference? Is it failing to close stdout for some reason? Could the sub-sub-processes do something to keep it open somehow? Could it be because I'm launching the process from a terminal on my dev box, but in production it's launched as a daemon through supervisord? Would that change the way the pipes are processed and if so how do I normalize them?
The main code loop looks right. It could be that the pipe isn't closing because another process is keeping it open. For example, if the script launches a background process that keeps stdout open, then the pipe will not close. Are you sure no other child process is still running?
An idea is to change modes once you see that .returncode has been set. Once you know the main process is done, read all of its remaining output from the buffer, but don't get stuck waiting. You can use select to read from the pipe with a timeout. Set a timeout of a few seconds and you can clear the buffer without getting stuck waiting on a child process.
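A rough sketch of that mode switch, as a hypothetical helper to call once .returncode is set (the pipe argument would be self.popen.stdout from the question; the 2-second default is arbitrary):

import os
import select

def drain_remaining_output(pipe, timeout=2.0):
    """Read whatever is left in the pipe, giving up after `timeout` seconds
    of silence instead of blocking forever on readline()."""
    leftover = []
    while True:
        ready, _, _ = select.select([pipe], [], [], timeout)
        if not ready:                    # nothing arrived within the timeout
            break
        chunk = os.read(pipe.fileno(), 4096)
        if not chunk:                    # real end of file
            break
        leftover.append(chunk)
    return b''.join(leftover)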
Without knowing the contents of the "one complex bash script" which causes the problem, there's too many possibilities to determine the exact cause.
However, focusing on the fact that you claim it works if you run your Python script under supervisord, then it might be getting stuck if a sub-process is trying to read from stdin, or just behaves differently if stdin is a tty, which (I presume) supervisord will redirect from /dev/null.
This minimal example seems to cope better with cases where my example test.sh runs subprocesses which try to read from stdin...
import os
import subprocess

f = subprocess.Popen(args='./test.sh',
                     shell=False,
                     bufsize=0,
                     stdin=open(os.devnull, 'rb'),
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT,
                     close_fds=True)

while 1:
    s = f.stdout.readline()
    if not s and f.returncode is not None:
        break
    print s.strip()
    f.poll()

print "done %d" % f.returncode
Otherwise, you can always fall back to using a non-blocking read, and bail out when you get your final output line saying "process complete", although it's a bit of a hack.
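A sketch of that fallback, building on the f from the snippet above (POSIX-only; fcntl puts the pipe into non-blocking mode, and "process complete" is the sentinel line mentioned in the question):

import errno
import fcntl
import os
import time

fd = f.stdout.fileno()
# Switch the pipe to non-blocking mode.
flags = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

collected = ''
while 'process complete' not in collected:
    try:
        chunk = os.read(fd, 4096)
        if not chunk:            # real EOF: the pipe closed
            break
        collected += chunk
    except OSError as e:
        if e.errno != errno.EAGAIN:
            raise                # a real error, not just "no data yet"
        time.sleep(0.1)          # no data right now; avoid busy-waiting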
If you use readline() or read(), it should not hang. No need to check returncode or poll(). If it is hanging when you know the process is finished, it is most probably a subprocess keeping your pipe open, as others said before.
There are two things you could do to debug this:
* Try to reproduce it with a minimal script instead of the current complex one (see the sketch after this list), or
* Run that complex script with strace -f -e clone,execve,exit_group and see what that script starts, and whether any process survives the main script (check when the main script calls exit_group; if strace is still waiting after that, you have a child still alive).
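For the first point, a minimal reproduction might look like this (a repro of the symptom only, not of your actual script: a background child inherits stdout, so the pipe never reaches EOF and the second readline() hangs even though the main script has exited):

import subprocess

p = subprocess.Popen(['bash', '-c', 'echo process complete; sleep 600 &'],
                     stdout=subprocess.PIPE)
print(p.stdout.readline())   # "process complete"
print(p.stdout.readline())   # hangs: the background sleep still holds the pipe open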
I find that calls to read (or readline) sometimes hang, despite previously calling poll. So I resorted to calling select to find out if there is readable data. However, select without a timeout can hang, too, if the process was closed. So I call select in a semi-busy loop with a tiny timeout for each iteration (see below).
I'm not sure if you can adapt this to readline, as readline might hang if the final \n is missing, or if the process doesn't close its stdout before you close its stdin and/or terminate it. You could wrap this in a generator, and every time you encounter a \n in stdout_collected, yield the current line.
Also note that in my actual code, I'm using pseudoterminals (pty) to wrap the popen handles (to more closely fake user input) but it should work without.
# imports needed by this fragment (the enclosing method is not shown)
import os
import select
from datetime import datetime

# handle to read from
handle = self.popen.stdout
# how many seconds to wait without data
timeout = 1
begin = datetime.now()
stdout_collected = ""

while self.popen.poll() is None:
    try:
        fds = select.select([handle], [], [], 0.01)[0]
    except select.error, exc:
        print exc
        break

    if len(fds) == 0:
        # select timed out, no new data
        delta = (datetime.now() - begin).total_seconds()
        if delta > timeout:
            return stdout_collected
        # try longer
        continue
    else:
        # have data, timeout counter resets again
        begin = datetime.now()
        for fd in fds:
            if fd == handle:
                # os.read() needs a file descriptor, hence .fileno()
                data = os.read(handle.fileno(), 1024)
                # can handle the bytes as they come in here
                # self._handle_stdout(data)
                stdout_collected += data

# process exited
# if using a pseudoterminal, close the handles here
self.popen.wait()
Why are you setting stderr to STDOUT?
The real benefit of making a communicate() call on a subprocess is that you are able to retrieve a tuple containing the stdout response as well as the stderr message.
Those might be useful if the logic depends on their success or failure.
Also, it would save you from the pain of having to iterate through lines. communicate() gives you everything, and there would be no unresolved questions about whether or not the full message was received.
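A minimal sketch of that approach, reusing the command and output_consumer names from the question (note that communicate() buffers all output in memory and only returns once the process exits, so it trades near-real-time consumption for simplicity):

import subprocess

popen = subprocess.Popen(self.command,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
# communicate() reads everything and waits for the process to exit,
# so there is no final readline() to hang on.
stdout, stderr = popen.communicate()
output_consumer.process_output(stdout)
output_consumer.finish(popen.returncode)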
I wrote a demo with a bash subprocess that can be easily explored.
A closed pipe can be recognized by '' in the output from readline(), while the output from an empty line is '\n'.
from subprocess import Popen, PIPE, STDOUT

p = Popen(['bash'], stdout=PIPE, stderr=STDOUT)
out = []
while True:
    outdata = p.stdout.readline()
    if not outdata:
        break
    #output_consumer.process_output(outdata)
    print "* " + repr(outdata)
    out.append(outdata)
print "* closed", repr(out)
print "* returncode", p.wait()
Example of input/output: the pipe is closed distinctly before the process terminates. That is why wait() should be used instead of poll().
[prompt] $ python myscript.py
echo abc
* 'abc\n'
exec 1>&- # close stdout
exec 2>&- # close stderr
* closed ['abc\n']
exit
* returncode 0
[prompt] $
Your code did output a huge number of empty strings for this case.
Example: Fast terminated process without '\n' on the last line:
echo -n abc
exit
* 'abc'
* closed ['abc']
* returncode 0
I have the following example:
import subprocess

p = subprocess.Popen("cmd", stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write(b'cd\\' + b'\r\n')
p.stdin.write(b'dir' + b'\r\n')
p.stdin.write(b'\r\n')

while True:
    line = p.stdout.readline()
    print(line.decode('ascii'), end='')
    if line.rstrip().decode('ascii') == 'C:\>':  # I need to check at this point if there is new data in the PIPE
        print('End of File')
        break
I am listening to the PIPE for any output from the subprocess and if there is not any new data coming through the PIPE I would like to stop reading. I would like to have a control statement that would tell me that PIPE is empty. This would help me to avoid problems in case my process freezes or ends with an unexpected result.
Unless the process is over or there is a signal you are expecting to stop reading at, there is no good way to know ahead of time if there is data in the pipe, because the read call will only return when it has read the number of bytes you asked for [.read(n)], reaches a newline char [.readline()], or reaches the end of the file (which doesn't exist until the process is over).
However, you don't need to run cmd.exe to run your program, since your program will already be run in the cmd shell.
I suggest you use subprocess to call the program directly, and handle exceptions/return_code in your code. You could do something like...
import subprocess
import time

p = subprocess.Popen(["your_program.exe", "-f", "filename"],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE)

# If you have to use stdin, do it here.
p.stdin.write(b'lawl here are my inputs\n')

run_for = 0
while p.poll() is None:
    time.sleep(1)
    if run_for > 10:
        p.kill()
        break
    run_for += 1

if p.returncode == 0:
    ...handle success...
else:
    ...handle failure...
You could do this in a loop and spin up a new process that would run the next file.
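For example, a sketch of that loop (filenames is a hypothetical list of input files; run_with_timeout just wraps the poll/kill pattern above):

import subprocess
import time

def run_with_timeout(filename, timeout=10):
    """Run the program on one file, killing it if it runs longer than `timeout` seconds."""
    p = subprocess.Popen(["your_program.exe", "-f", filename],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE)
    waited = 0
    while p.poll() is None:
        time.sleep(1)
        waited += 1
        if waited > timeout:
            p.kill()
            break
    p.wait()                        # make sure returncode is set after a kill
    return p.returncode

for filename in filenames:          # hypothetical list of input files
    rc = run_with_timeout(filename)
    print('%s -> %s' % (filename, rc))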
If it's that costly to spin up the program (and if it's not then stop reading now because it's about to get embarrassing) then perhaps (and this is a total hack, but) after your process has run a while, you could pass a particularly odd but innocuous string to p.stdin, as in p.stdin.write("\n~%$%~\n").
If you could get away with that, then you could do something like...
for line in p.stdout.readlines():
if '~%$%~' in line:
break
But holy crap, please don't do that. It's such a hack.