I am unable to save the output of subprocess.Popen correctly; this is what I get in the file I chose. The path is correct, since just above this call I have the script erase the text already in the file, and that works. Any solutions?
The code is below:
import subprocess

f = open("hunter_logs.txt", "w")
subp = subprocess.Popen(
    'docker run -p 5001-5110:5001-5110/udp -v D:\Hunter\hunter\hunter-scenarios:/hunter-scenarios europe-west3-docker.pkg.dev/hunter-all/controller-repo/hunter_controller:latest -d /hunter-scenarios -s croatia -i OPFOR',
    stdout=f)
Probably the process is outputting some of its logs to stderr and some to stdout. Add stderr=f as another argument to Popen() in order to capture both streams to the same file.
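A minimal sketch of that change (docker_cmd here stands for the full docker run command string from the question):

with open("hunter_logs.txt", "w") as f:
    subp = subprocess.Popen(docker_cmd, stdout=f, stderr=f)  # both streams go to the same file
    subp.wait()  # wait for the container to exit so the log file is complete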
Related
I want to extract scene-change timestamps using ffmpeg's scene change detection. I have to run it on a few hundred videos, so I wanted to use a Python subprocess to loop over the contents of a folder.
My problem is that the command I was using to get these values for a single video involves piping the output to a file, which does not seem to be an option from inside a subprocess call.
This is my code:
p=subprocess.check_output(["ffmpeg", "-i", sourcedir+"/"+name+".mpg","-filter:v", "select='gt(scene,0.4)',showinfo\"","-f","null","-","2>","output"])
This one tells me that ffmpeg needs an output.
output = "./result/"+name
p=subprocess.check_output(["ffmpeg", "-i", sourcedir+"/"+name+".mpg","-filter:v", "select='gt(scene,0.4)',metadata=print:file=output","-an","-f","null","-"])
This one gives me no error but doesn't create the file.
This is the original command that I use directly with ffmpeg:
ffmpeg -i input.flv -filter:v "select='gt(scene,0.4)',showinfo" -f null - 2> ffout
I just need the output of this command to be written to a file. Does anyone see how I could make it work?
Is there a better way than subprocess, or just another way? Would it be easier in C?
You can redirect the stderr output directly from Python, without any need for shell=True, which can lead to shell injection.
It's as simple as:
with open(output_path, 'w') as f:
    subprocess.check_call(cmd, stderr=f)
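Applied to the command from the question, a sketch might look like this (sourcedir and name are the variables already used above; the showinfo filter writes its log lines to stderr):

import subprocess

output_path = "./result/" + name
cmd = ["ffmpeg", "-i", sourcedir + "/" + name + ".mpg",
       "-filter:v", "select='gt(scene,0.4)',showinfo",
       "-f", "null", "-"]
with open(output_path, 'w') as f:
    subprocess.check_call(cmd, stderr=f)

Note that in list form the filter string needs no extra shell quoting.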
Things are easier in your case if you use the shell argument of subprocess, and it should behave the same. With shell=True you can pass in a string as the command rather than a list of args.
cmd = "ffmpeg -i {0} -filter:v \"select='gt(scene,0.4)',showinfo\" -f {1} - 2> ffout".format(inputName, outputFile)
p=subprocess.check_output(cmd, shell=True)
If you want to pass different arguments, you can simply format the string, as shown above.
I am trying to run the bash command pdfcrack in Python on a remote server. This is my code:
bashCommand = "pdfcrack -f pdf123.pdf > myoutput.txt"
import subprocess
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
I, however, get the following error message:
Non-option argument myoutput2.txt
Error: file > not found
Can anybody see my mistake?
The first argument to Popen is a list containing the command name and its arguments. > is not an argument to the command, though; it is shell syntax. You could simply pass the entire line to Popen and instruct it to use the shell to execute it:
process = subprocess.Popen(bashCommand, shell=True)
(Note that since you are redirecting the output of the command to a file, though, there is no reason to set its standard output to a pipe, because there will be nothing to read.)
A better solution, though, is to let Python handle the redirection.
process = subprocess.Popen(['pdfcrack', '-f', 'pdf123.pdf'],
                           stdout=subprocess.PIPE, universal_newlines=True)  # universal_newlines so stdout yields str, matching the text-mode file
with open('myoutput.txt', 'w') as fh:
    for line in process.stdout:
        fh.write(line)
        # Do whatever else you want with line
Also, don't use str.split as a replacement for the shell's word splitting. A valid command line like pdfcrack -f "foo bar.pdf" would be split into the incorrect list ['pdfcrack', '-f', '"foo', 'bar.pdf"'], rather than the correct list ['pdfcrack', '-f', 'foo bar.pdf'].
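If you do need to split a shell-style command string, the standard library's shlex.split handles the quoting for you (a small illustration, not part of the original answer):

import shlex

print(shlex.split('pdfcrack -f "foo bar.pdf"'))
# ['pdfcrack', '-f', 'foo bar.pdf']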
The > is interpreted by the shell, but is not valid otherwise.
So, that would work (don't split, use as-is):
process = subprocess.Popen(bashCommand, shell=True)
(and stdout=subprocess.PIPE isn't useful since all output is redirected to the output file)
But it would be better to use native Python for the redirection to the output file and to pass the arguments as a list (which handles quote protection if needed):
with open("myoutput.txt","w") as f:
process = subprocess.Popen(["pdfcrack","-f","pdf123.pdf"], stdout=subprocess.PIPE)
f.write(process.read())
process.wait()
Your mistake is the > in the command.
It is not treated as a redirection to a file, because normally bash does that, and here you run the command without bash.
Try shell=True if you want to use the shell; then you don't have to split the command into a list:
subprocess.Popen("pdfcrack -f pdf123.pdf > myoutput.txt", shell=True)
I'm trying to get a command's STDOUT from the HandBrakeCLI program while encoding a video. I can't seem to get Python to handle its output on the standard output stream. I've tried the following code:
import subprocess
import sys
encode = subprocess.check_output("HandBrakeCLI -i video.mkv -o out.mp4", shell=True, stderr=subprocess.STDOUT, universal_newlines=True)
print(encode)
This printed nothing, and so did the following, which I also tried:
import subprocess
import sys
encode = subprocess.Popen("HandBrakeCLI -i video.mkv -o out.mp4", stdout=subprocess.PIPE, stderr = subprocess.PIPE, shell=True, universal_newlines=True)
print(encode.stdout.read())
As stated before, both result in no output. This application is the type that updates text on a single line in the terminal as it encodes. I'm not sure whether that kind of output stream creates a problem for Python or not.
It seems HandBrakeCLI changes its output depending on whether it is printing to a terminal. Either specify a command-line flag to force the necessary output, or trick it by providing a pseudo-tty (if your system supports it) using pexpect or the pty module directly.
Code examples of how to get output from a subprocess using the pexpect and pty modules:
Last unbuffered line can't be read
Python subprocess readlines() hangs
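For reference, a minimal sketch of the pexpect approach (it assumes pexpect is installed and HandBrakeCLI is on PATH; the pseudo-terminal makes HandBrakeCLI believe it is writing to an interactive terminal):

import pexpect

child = pexpect.spawn('HandBrakeCLI -i video.mkv -o out.mp4',
                      encoding='utf-8', timeout=None)
output = child.read()  # read everything, including the progress updates, until the process exits
child.close()
print(output)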
I'm attempting to create a looped video file by calling ffmpeg from the python subprocess library. Here's the part that's giving me problems:
import subprocess as sp
sp.Popen(['ffmpeg', '-f', 'concat', '-i', "<(for f in ~/Desktop/*.mp4; do echo \"file \'$f\'\"; done)", "-c", "copy", "~/Desktop/sample3.mp4"])
With the above code I'm getting the following error:
<(for f in /home/delta/Desktop/*.mp4; do echo "file '$f'"; done): No such file or directory
I did find a similarly phrased question here. But I'm not sure how the solution might apply to solving my issue.
Following the advice in the comments and looking elsewhere I ended up changing the code to this:
sp.Popen("ffmpeg -f concat -i <(for f in ~/Desktop/*.mp4; do echo \"file \'$f\'\"; done) -c copy ~/Desktop/sample3.mp4",
shell=True, executable="/bin/bash")
--which works fine. – moorej
If you need to parameterize input and output files, consider breaking out your parameters:
import os
import subprocess as sp

# sample variables
inputDirectory = os.path.expanduser('~/Desktop')
outputDirectory = os.path.expanduser('~/dest.mp4')

sp.Popen(['''ffmpeg -f concat -i <(for f in "$1"/*; do
              echo "file '$f'";
            done) -c copy "$2" ''',
          'bash',           # this becomes $0
          inputDirectory,   # this becomes $1
          outputDirectory,  # this becomes $2
         ], shell=True, executable="/bin/bash")
...as this ensures that your code won't do untoward things even when given an input directory with a hostile name like /uploads/$(rm -rf ~)'$(rm -rf ~)'. (ffmpeg is likely to fail to parse an input file with such a name, and if there is any video in the current working directory that you don't want included, we'd need to know the escaping rules ffmpeg uses to avoid that; but it's far better for ffmpeg to fail than to execute arbitrary code.)
I'm trying to write data to files in a chroot environment. Since I'm a non-root user, the only way I can communicate with the chroot is the schroot command.
Currently I'm using the following trick to write the data:
$ schroot -c chroot_session -r -d /tmp -- bash -c "echo \"$text\" > file.txt"
But I'm sure this will give me a lot of grief if text has special characters, quotes, etc. So what's a better way of sending $text to the chroot? Most probably I'll be using the above command through a Python script. Is there a simpler method?
Kinda hackish, but…
import os
import ConfigParser  # configparser on Python 3

c = ConfigParser.RawConfigParser()
c.readfp(open(os.path.join('/var/lib/schroot/session', chroot_session), 'r'))
chroot_basedir = c.get(chroot_session, 'mount-location')

with open(os.path.join(chroot_basedir, 'tmp/file.txt'), 'w') as fp:  # no leading slash, so the path stays under the chroot's mount point
    fp.write(text)
Okay, so privileges don't let you get in by any method other than schroot, huh?
import subprocess
import sys

p = subprocess.Popen(['schroot', '-c', name, '-r', 'tee', '/tmp/file.txt'],
                     stdin=subprocess.PIPE,
                     stdout=open('/dev/null', 'w'),
                     stderr=sys.stderr)
p.stdin.write(text)  # on Python 3, pass bytes: p.stdin.write(text.encode())
p.stdin.close()
rc = p.wait()
assert rc == 0
You can use Python to write $text into a file that Python has the rights to write, and then copy that file to file.txt inside the chroot.
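A rough sketch of that idea, assuming your home directory is bind-mounted inside the session (the schroot default), so a file written there is visible on both sides; chroot_session and text are the names used above, and the staging path is arbitrary:

import os
import subprocess

staging = os.path.expanduser('~/staging.txt')  # hypothetical staging file in the shared home directory
with open(staging, 'w') as f:
    f.write(text)

subprocess.check_call(['schroot', '-c', chroot_session, '-r', '--',
                       'cp', staging, '/tmp/file.txt'])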