Calling ffmpeg kills script in background only - python

I've got a Python script that calls ffmpeg via subprocess to do some mp3 manipulation. It works fine in the foreground, but if I run it in the background, it gets as far as the ffmpeg command, which itself gets as far as dumping its configuration to stderr. At that point everything stops and the parent task is reported as stopped, without an exception being raised anywhere. I've tried a few other simple commands in place of ffmpeg; they execute normally in the foreground or background.
This is the minimal example of the problem:
import subprocess

inf = "3HTOSD.mp3"
outf = "out.mp3"
args = ["ffmpeg",
        "-y",
        "-i", inf,
        "-ss", "0",
        "-t", "20",
        outf]

print "About to do"
result = subprocess.call(args)
print "Done"
I really can't work out why or how a wrapped process can cause the parent to stop without at least raising an error somewhere, or why it only happens in so niche a circumstance. What is going on?
Also, I'm aware that ffmpeg isn't the nicest of packages, but I'm interfacing with something that has ffmpeg compiled into it, so using it here as well seems sensible.

It might be related to Linux process in background - “Stopped” in jobs? e.g., using parent.py:
from subprocess import check_call
check_call(["python", "-c", "import sys; sys.stdin.readline()"])
should reproduce the issue: "parent.py script shown as stopped" if you run it in bash as a background job:
$ python parent.py &
[1] 28052
$ jobs
[1]+ Stopped python parent.py
When a process in a background job tries to read from the terminal, it is sent the SIGTTIN signal, whose default action is to stop the process; that is why the shell reports the job as Stopped.
The solution is to redirect the input:
import os
from subprocess import check_call

try:
    from subprocess import DEVNULL  # Python 3
except ImportError:  # Python 2
    DEVNULL = open(os.devnull, 'r+b', 0)

check_call(["python", "-c", "import sys; sys.stdin.readline()"], stdin=DEVNULL)
If you don't need to see ffmpeg's stdout/stderr, you can redirect them to /dev/null as well (STDOUT also comes from subprocess):

from subprocess import STDOUT

check_call(ffmpeg_cmd, stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)
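Putting it all together for the original ffmpeg example, a minimal sketch reusing the DEVNULL fallback above:

import os
from subprocess import check_call

try:
    from subprocess import DEVNULL  # Python 3
except ImportError:  # Python 2
    DEVNULL = open(os.devnull, 'r+b', 0)

args = ["ffmpeg", "-y", "-i", "3HTOSD.mp3", "-ss", "0", "-t", "20", "out.mp3"]
check_call(args, stdin=DEVNULL)  # ffmpeg never touches the terminal, so no SIGTTIN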

I like to use the commands module. It's simpler to use, in my opinion. (Note that commands is Python 2 only; it was removed in Python 3 in favor of subprocess.)
import commands

cmd = "ffmpeg -y -i %s -ss 0 -t 20 %s 2>&1" % (inf, outf)
status, output = commands.getstatusoutput(cmd)
if status != 0:
    raise Exception(output)
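If you are on Python 3, the same one-liner is available as subprocess.getstatusoutput; a sketch of the equivalent:

import subprocess

inf, outf = "3HTOSD.mp3", "out.mp3"
# Python 3 replacement for the removed commands module
status, output = subprocess.getstatusoutput("ffmpeg -y -i %s -ss 0 -t 20 %s 2>&1" % (inf, outf))
if status != 0:
    raise Exception(output)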
As a side note, sometimes PATH can be an issue, and you might want to use an absolute path to the ffmpeg binary.
matt@goliath:~$ which ffmpeg
/opt/local/bin/ffmpeg
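If you want to resolve the absolute path from Python instead of hard-coding it, something like this should work (shutil.which needs Python 3.3+):

import shutil

ffmpeg_path = shutil.which("ffmpeg")  # returns None if the binary is not on PATH
if ffmpeg_path is None:
    raise RuntimeError("ffmpeg not found on PATH")
args = [ffmpeg_path, "-y", "-i", "3HTOSD.mp3", "-ss", "0", "-t", "20", "out.mp3"]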

From the Python documentation for subprocess.call:
Wait for command to complete, then return the returncode attribute.
So as long as the process you called does not exit, your program does not go on.
You should set up a Popen process object, send its standard output and error to separate pipes, and terminate the process when an error appears.
Maybe something like this works:
import subprocess

proc = subprocess.Popen(args, stderr=subprocess.PIPE)  # puts stderr into a new stream
while proc.poll() is None:
    err = proc.stderr.read()  # caution: read() blocks until the stream reaches EOF
    if err:
        proc.terminate()
        break
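A simpler variant that sidesteps the blocking read() is to let communicate() collect stderr and enforce a timeout; a sketch, assuming the timeout parameter of Python 3.3+:

import subprocess

proc = subprocess.Popen(args, stderr=subprocess.PIPE)  # args as in the question
try:
    _, err = proc.communicate(timeout=60)  # waits for exit, collects all of stderr
except subprocess.TimeoutExpired:
    proc.terminate()
    _, err = proc.communicate()
if proc.returncode != 0:
    raise RuntimeError(err)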

Related

Start background process in python via subprocess and write output to file

Dear stackoverflow users,
I'm looking for a solution to a probably quite simple problem. I want to automate some quantum chemical calculations and ran into a small issue.
Normally you start your quantum chemistry program (in my case it's called orca) with your input file (*.inp) on a remote server as a background process and pipe the output into an output file (*.out) via
nohup orca H2.inp >& H2.out &
or something similar.
Now I wanted to use a Python script (with some templating) to write the input file automatically. At the end, the script should start the calculation in such a way that I can log out of the server without stopping orca. I tried
subprocess.run(["orca", input_file], stdout=output_file)
but so far it has not worked. How do I "emulate" the command given at the top with the subprocess module?
Regards
Update
I have one file called H2.xyz. The script reads the filename, splits it at the dot, and creates an input file named H2.inp; the output should be written into the file H2.out.
Update 2
The input file name is derived from the *.xyz file via
xyzfile = str(sys.argv[1])
input_file = xyzfile.split(".")[0] + ".inp"
output_file = xyzfile.split(".")[0] + ".out"
and is created within the script via templating. In the end I want to run the script in the following way:
python3 script.py H2_0_1.xyz
Why not simply:
subprocess.Popen(f'orca {input_file} >& {output_file}',
                 shell=True, stdin=None, stdout=None, stderr=None, close_fds=True)
More info:
Run Process and Don't Wait
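One caveat: with shell=True the command runs under /bin/sh, and `>&` is csh/bash syntax that plain sh may reject. A shell-free sketch that emulates `nohup orca H2.inp >& H2.out &` (start_new_session and DEVNULL assume Python 3.2/3.3+):

import subprocess

with open("H2.out", "wb") as out:
    subprocess.Popen(["orca", "H2.inp"],
                     stdout=out,
                     stderr=subprocess.STDOUT,  # merge stderr into the file, like >&
                     stdin=subprocess.DEVNULL,
                     start_new_session=True)  # detach from the terminal session, like nohup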
For me (Windows, Python 2.7) the subprocess.call method works fine like this:
with open('H2.out', 'a') as out:
    subprocess.call(['orca', infile], stdout=out,
                    stderr=out,
                    shell=True)  # Yes, I know. But it's Windows.
On Linux you may not need shell=True when passing a list of arguments.
Is the usage of subprocess important? If not, you could use os.system.
The Python call would get really short, in your case
os.system("nohup orca H2.inp >& H2.out &")
should do the trick.
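One hedge on that: os.system also hands the line to /bin/sh, where `>&` is not guaranteed to work, so the portable spelling is safer:

import os

# portable redirection for plain /bin/sh (dash, etc.)
status = os.system("nohup orca H2.inp > H2.out 2>&1 &")
if status != 0:
    print("failed to launch orca")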
I had the same problem not long ago.
Here is my solution:
import subprocess
import traceback

commandLineCode = "nohup orca H2.inp >& H2.out &"

try:
    proc = subprocess.Popen(commandLineCode,
                            shell=True,  # the command is a single string, so run it through the shell
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            cwd=workingDir)  # workingDir defined elsewhere
except OSError:
    print("Windows error occurred")
    print(traceback.format_exc())

timeoutInSeconds = 100
try:
    outs, errs = proc.communicate(timeout=timeoutInSeconds)
except subprocess.TimeoutExpired:
    print("timeout")
    proc.kill()
    outs, errs = proc.communicate()

stdoutDecode = outs.decode("utf-8")
stderrDecode = errs.decode("utf-8")

for line in stdoutDecode.splitlines():
    pass  # write line to outputFile
if stderrDecode:
    for line in stderrDecode.splitlines():
        pass  # write line to error log
The OSError exception is pretty important, since you never know what your OS might do wrong.
For more on communicate(), which feeds input to the process and waits for it to finish, read:
https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate

Calling a shell script from python

I have a python script that calls a shell script, which in turn calls a .exe called iv4_console. I need to print the stdout of iv4_console for debugging purposes. I used this:
Python:
import sys
import subprocess

var = "rW015005000000"
proc = subprocess.Popen(["c.sh", var], shell=True, stdout=subprocess.PIPE)
output = ''
for line in iter(proc.stdout.readline, ""):
    print line
    output += line
Shell:
start_dir=$PWD
release=$1
echo Release inside shell: $release
echo Directory: $start_dir
cd $start_dir
cd ../../iv_system4/ports/visualC12/Debug
echo Debug dir: $PWD
./iv4_console.exe ../embedded/LUA/analysis/verbose-udp-toxml.lua ../../../../../logs/$release/VASP_DUN722_20160307_Krk_Krk_113048_092_1_$release.dvl &>../../../../FCW/ObjectDetectionTest/VASP_DUN722_20160307_Krk_Krk_113048_092_1_$release.xml
./iv4_console.exe ../embedded/LUA/analysis/verbose-udp-toxml.lua ../../../../../logs/$release/VASP_FL140_20170104_C60_Checkout_afterIC_162557_001_$release.dvl &>../../../../FCW/ObjectDetectionTest/VASP_FL140_20170104_C60_Checkout_afterIC_162557_001_$release.xml
exit
But this didn't work, it prints nothing. What do you think?
See my comment; the best approach (IMO) would be to just use Python only.
However, in answer of your question, try:
import subprocess

var = "rW015005000000"
# Best to avoid shell=True because of security vulnerabilities.
proc = subprocess.Popen(["/bin/bash", "/full/path/to/c.sh", var], stdout=subprocess.PIPE)
# communicate() waits for the script to finish and returns bytes,
# so decode the output to print it as a string.
output, errors = proc.communicate()
print(output.decode())
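For reference, the original snippet likely printed nothing because with shell=True a list's first element becomes the shell command line and the remaining elements become the shell's own positional parameters, so var never reached c.sh (and any "command not found" error went to the uncaptured stderr). A small demonstration of the difference:

import subprocess

# with shell=True and a list, the second element becomes $0 of the shell,
# not an argument of the command:
subprocess.call(["echo hello", "ignored"], shell=True)  # prints just "hello"

# a single string forwards everything as typed:
subprocess.call("echo hello world", shell=True)  # prints "hello world"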
You can use
import subprocess
subprocess.call(['./c.sh'])
to call the shell script from a Python file,
or
import subprocess
import shlex
subprocess.call(shlex.split('./c.sh var'))

How to redirect print and stdout to a pipe and read it from parent process?

If possible I would like to not use subprocess.Popen. The reason I want to capture the stdout of the process started by the child is that I need to save the output of the child in a variable to display it back later. However I have yet to find a way to do so. I also need to activate multiple programs without necessarily closing the active one, and I need to control the child process from the parent process.
I'm launching a subprocess like this
import os
import sys

listProgram = ["./perroquet.py"]
listOutput = ["", "", ""]

pipePerroquet = os.pipe()
pipeMain = os.pipe()
pipeAge = os.pipe()
pipeSavoir = os.pipe()

pid = os.fork()
process = 1
if pid == 0:
    os.close(pipePerroquet[1])
    os.dup2(pipePerroquet[0], 0)
    sys.stdout = os.fdopen(pipeMain[1], 'w')
    os.execvp("./perroquet.py", listProgram)
Now as you can see, I'm launching the program with os.execvp and using os.dup2() to redirect the child's stdout. However I'm not sure what I've done is correct, and I want to know the right way to redirect stdout with os.dup2 and then read it from the parent process.
Thank you for your help.
I cannot understand why you do not want to use the excellent subprocess module, which could save you a lot of boilerplate code (and just as many error possibilities...). Anyway, I assume perroquet.py is a Python script, not an executable program. The shell knows how to find the correct interpreter for a script, but the exec family are low-level functions that expect something directly executable (a binary, or a script with a shebang line and the execute bit set).
You should at least have something like :
listProgram = ["python", "./perroquet.py", "", ""]
...
os.execvp("python", listProgram)
But I'd rather use :
prog = subprocess.Popen(("python", "./perroquet.py", "", ""), stdout=subprocess.PIPE)
or even as you are already in python import it and directly call the functions from there.
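A sketch of that last option, assuming perroquet.py exposes a callable entry point (main() is a hypothetical name here):

import perroquet  # assumes perroquet.py is importable from the current directory

perroquet.main()  # hypothetical entry point; call whatever function the script exposes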
EDIT:
It looks like what you really want is:
- the user gives you a command (it can be almost anything)
- [you validate that the command is safe] - unsure whether you intend to do this, but you should...
- you make the shell execute the command and get its output - you may want to read stderr too and check the exit code
You should try something like
while True:
    cmd = raw_input("commande :")  # use input() with Python 3
    if cmd.strip().lower() == "exit":
        break
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, shell=True)
    out, err = proc.communicate()
    code = proc.returncode
    print("OUT", out, "ERR", err, "CODE", code)
It is absolutely unsafe, since this code executes any command just as the underlying shell would (including rm -rf *, rd /s/q ., ...), but it gives you the output, the error output and the return code of the command, and it can be used in a loop. The only limitation is that since each command uses a separate shell, you cannot use commands that change the shell environment - they will be executed but will have no effect.
Here's a solution if you need to capture any changes to the environment:
from subprocess import Popen, PIPE
import os

def execute_and_get_env(cmd, initial_env=None):
    if initial_env is None:
        initial_env = os.environ
    r_fd, w_fd = os.pipe()
    write_env = "; env >&{}".format(w_fd)
    p = Popen(cmd + write_env, shell=True, env=initial_env, pass_fds=[w_fd],
              stdout=PIPE, stderr=PIPE)
    output, error = p.communicate()
    # this will cause problems if the environment gets very large, as
    # writing to the pipe will hang because it gets full and we only
    # read from the pipe when the process is over
    os.close(w_fd)
    with open(r_fd) as f:
        env = dict(line[:-1].split("=", 1) for line in f)
    return output, error, env

export_cmd = "export my_var='hello world'"
echo_cmd = "echo $my_var"

out, err, env = execute_and_get_env(export_cmd)
out, err, env = execute_and_get_env(echo_cmd, env)

print(out)

python subprocess.call output is not interleaved

I have a python (v3.3) script that runs other shell scripts. My python script also prints messages like "About to run script X" and "Done running script X".
When I run my script I'm getting all the output of the shell scripts separate from my print statements. I see something like this:
All of script X's output
All of script Y's output
All of script Z's output
About to run script X
Done running script X
About to run script Y
Done running script Y
About to run script Z
Done running script Z
My code that runs the shell scripts looks like this:
print( "running command: " + cmnd )
ret_code = subprocess.call( cmnd, shell=True )
print( "done running command")
I wrote a basic test script and do *not* see this behaviour. This code does what I would expect:
print("calling")
ret_code = subprocess.call("/bin/ls -la", shell=True )
print("back")
Any idea on why the output is not interleaved?
Thanks. This works but has one limitation - you can't see any output until after the command completes. I found an answer to another question (here) that uses Popen and also lets me see the output in real time. Here's what I ended up with:
import subprocess
import sys

cmd = ['/media/sf_git/test-automation/src/SalesVision/mswm/shell_test.sh', '4', '2']

print('running command: "{0}"'.format(cmd))  # output the command.

# Here, we join the STDERR of the application with the STDOUT of the application.
process = subprocess.Popen(cmd, bufsize=1, universal_newlines=True,
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(process.stdout.readline, ''):
    line = line.replace('\n', '')
    print(line)
    sys.stdout.flush()

process.wait()  # Wait for the underlying process to complete.
errcode = process.returncode  # Harvest its returncode, if needed.
print('Script ended with return code of: ' + str(errcode))
This uses Popen and allows me to see the progress of the called script.
It has to do with STDOUT and STDERR buffering: when Python's stdout is redirected to a file or pipe it becomes block-buffered, so your print output sits in Python's buffer while each child writes straight to the shared file descriptor. You should use subprocess.Popen to redirect STDOUT and STDERR from your child process into your application, then output them as needed. Example:
import subprocess

cmd = ['ls', '-la']
print('running command: "{0}"'.format(cmd))  # output the command.

# Here, we join the STDERR of the application with the STDOUT of the application.
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, err = process.communicate()  # waits for completion and captures STDOUT/STDERR
errcode = process.returncode  # Harvest its returncode, if needed.
print(out)
print('done running command')
Additionally, I wouldn't use shell=True unless it's really required. It forces subprocess to fire up a whole shell just to run a command; it's usually better to pass the command directly as a list of arguments and, if you need to adjust environment variables, to supply them via the env parameter of Popen.
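If you just want the original prints to land in the right order without capturing anything, flushing Python's own buffer before each call is often enough; a minimal sketch (scriptX.sh is a hypothetical stand-in for your scripts):

import subprocess
import sys

print("About to run script X")
sys.stdout.flush()  # push our buffered text out before the child starts writing
subprocess.call("./scriptX.sh", shell=True)  # hypothetical script name
print("Done running script X")
sys.stdout.flush()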

Communicate with python subprocess while it is running

I'm running a subprocess that runs a piece of software in "command" mode. (This software is Nuke by The Foundry, in case you know it.)
When in command mode, the software waits for user input. This mode allows you to create compositing scripts without any UI.
I have written this bit of code that starts the process and detects when the application has finished starting; then I try to send the process some commands, but stdin doesn't seem to deliver them properly.
Here the sample code I did to test this process.
import subprocess

appPath = '/Applications/Nuke6.3v3/Nuke6.3v3.app/Nuke6.3v3'
readyForCommand = False

commandAndArgs = [appPath, '-V', '-t']
commandAndArgs = ' '.join(commandAndArgs)
process = subprocess.Popen(commandAndArgs,
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT,
                           shell=True)
while True:
    if readyForCommand:
        print 'trying to send command to nuke...'
        process.stdin.write('import nuke')
        process.stdin.write('print nuke')
        process.stdin.write('quit()')
        print 'done sending commands'
        readyForCommand = False
    else:
        print 'Reading stdout ...'
        outLine = process.stdout.readline().rstrip()
        if outLine:
            print 'stdout:', outLine
            if outLine.endswith('getenv.tcl'):
                print 'setting ready for command'
                readyForCommand = True
        if outLine == '' and process.poll() is not None:
            print 'in break!'
            break
print('return code: %d' % process.returncode)
when I run nuke in a shell and send the same commands here is what I get:
sylvain.berger core/$ nuke -V -t
[...]
Loading /Applications/Nuke6.3v3/Nuke6.3v3.app/Contents/MacOS/plugins/getenv.tcl
>>> import nuke
>>> print nuke
<module 'nuke' from '/Applications/Nuke6.3v3/Nuke6.3v3.app/Contents/MacOS/plugins/nuke/__init__.pyc'>
>>> quit()
sylvain.berger core/$
Any idea why the stdin is not sending the commands properly?
Thanks
Your code will send the text
import nukeprint nukequit()
with no newlines, so the Python instance inside Nuke never tries to execute anything; everything just sits in a buffer waiting for a newline.
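The fix is to terminate each command with a newline and flush the pipe, e.g.:

process.stdin.write('import nuke\n')
process.stdin.write('print nuke\n')
process.stdin.write('quit()\n')
process.stdin.flush()  # make sure the child actually sees the commands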
The subprocess module is not intended for interactive communication with a process. At best, you can give it a single pre-computed standard input string and then read its stdout and stderr:
p = Popen(..., stdin=PIPE, stdout=PIPE, stderr=PIPE)
out, err = p.communicate(predefined_stdin)
If you actually need interaction, consider using pexpect.
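A sketch of the pexpect route (pexpect is a third-party package, pip install pexpect; spawn/expect/sendline as in its documented API):

import pexpect

child = pexpect.spawn('/Applications/Nuke6.3v3/Nuke6.3v3.app/Nuke6.3v3 -V -t')
child.expect('getenv.tcl')  # wait for the startup marker, as in the question
child.sendline('import nuke')  # sendline appends the newline for us
child.sendline('print nuke')
child.sendline('quit()')
child.expect(pexpect.EOF)
print(child.before)  # everything the child printed before exiting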
