Show Filename in Video ffmpeg batch script - python

I have a folder with around 10 different mov files. I would like to add the filename as text on each of the videos using ffmpeg in a bat file. Could someone help me achieve this please?
EDIT:
I have tried using
@ECHO OFF&Setlocal EnableDelayedExpansion
Set INPUT=E:\\Users\\Oli\\Documents\\Projects\\v1.3.0\\downloads3
Set OUTPUT=E:\\Users\\Oli\\Documents\\Projects\\v1.3.0\\downloads3
for %%a in ("%INPUT%\*.*") DO (
set "filename=%%~na"
ffmpeg -i "%%a" -vf "drawtext=text=!fileName:.= !:x=105:y=120:fontfile=E:\\Users\\Oli\\Documents\\Projects\\v1.3.0\\downloads3\\impact.ttf:fontsize=25:fontcolor=white" -b:v 1M -r 60 -b:a 320k -ar 48000 -crf 17 "%%~na.mov"
)
But it gives me the error:
Cannot find a valid font for the family Sans
[AVFilterGraph @ 0000026eb75a9f40] Error initializing filter 'drawtext' with args 'text=FileName1'
Error reinitializing filters!
Failed to inject frame into filter network: No such file or directory
Error while processing the decoded data for stream #0:0

Let's get rid of the variable assignment and simply use variable expansion to set the name. Also, though it will still work, remove the secondary backslashes because they are not needed and look ugly. Lastly, always wrap set variables for paths in double quotes. Give this a try.
@echo off
set "INPUT=E:\Users\Oli\Documents\Projects\v1.3.0\downloads3"
set "OUTPUT=E:\Users\Oli\Documents\Projects\v1.3.0\downloads3"
for %%a in ("%INPUT%\*.*") do (
ffmpeg -i "%%~a" -vf "drawtext=text=%%~na:x=105:y=120:fontfile=%~dp0impact.ttf:fontsize=25:fontcolor=white" -b:v 1M -r 60 -b:a 320k -ar 48000 -crf 17 "%%~na.mov"
)
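If you would rather drive this from Python (the title mentions python), here is a minimal sketch of the same loop using subprocess and pathlib. The folder and font paths come from the question; the output naming and the filter-level quoting of the fontfile path are assumptions you may need to adjust.

import subprocess
from pathlib import Path

# Paths taken from the question; adjust as needed.
input_dir = Path(r"E:\Users\Oli\Documents\Projects\v1.3.0\downloads3")
font_file = input_dir / "impact.ttf"

for src in input_dir.glob("*.mov"):
    # Draw the file name (without extension) onto the video.
    # The single quotes protect the ':' in the Windows path inside the filtergraph.
    drawtext = (
        f"drawtext=text='{src.stem}'"
        f":x=105:y=120:fontsize=25:fontcolor=white"
        f":fontfile='{font_file.as_posix()}'"
    )
    dst = src.with_name(src.stem + "_labeled.mov")
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-vf", drawtext,
        "-b:v", "1M", "-r", "60", "-b:a", "320k", "-ar", "48000", "-crf", "17",
        str(dst),
    ], check=True)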

Related

I have an ffmpeg command to concatenate 300+ videos of different formats. What is the proper syntax for the concat complex filter?

I plan to concatenate a large amount of video files of different formats and resolution, some without sound, and add a short black screen "pause" of about 0.5s between each.
I wrote a python script to generate such a command.
I created a 0.5s video file using ffmpeg.exe -t 0.5 -f lavfi -i color=c=black:s=640x480 -c:v libx264 -tune stillimage -pix_fmt yuv420p blank500ms.mp4.
I then added a silent audio to it with -f lavfi -i anullsrc -c:v copy -c:a aac -shortest
I now have the problem of adding a blank audio track for streams without one, but I don't want to generate a new file; I want to add it to my complex filter.
Here are my complex filter script and the generated command.
The command (there are line returns, because I send this with the python subprocess module)
ffmpeg.exe
-i
input0.mp4
-i
input1.mp4
-i
input2.mp4
-i
input3.mp4
-i
input4.mp4
-i
input5.mp4
-i
input6.mp4
-i
input7.mp4
-i
input8.mp4
-i
input9.mp4
-i
input10.mp4
-f
lavfi
-i
anullsrc
-filter_complex_script
C:/filter_complex_script.txt
-map
"[final_video]"
-map
"[final_audio]"
output.mp4
The complex_filter_script:
[0]fps=24[fps0];
[fps0]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled0];
[1]fps=24[fps1];
[fps1]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled1];
[2]fps=24[fps2];
[fps2]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled2];
[3]fps=24[fps3];
[fps3]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled3];
[4]fps=24[fps4];
[fps4]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled4];
[5]fps=24[fps5];
[fps5]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled5];
[6]fps=24[fps6];
[fps6]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled6];
[7]fps=24[fps7];
[fps7]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled7];
[8]fps=24[fps8];
[fps8]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled8];
[9]fps=24[fps9];
[fps9]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled9];
[10]fps=24[fps10];
[fps10]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2,setsar=1,setpts=PTS-STARTPTS[rescaled10];
[10]split=10[blank0][blank1][blank2][blank3][blank4][blank5][blank6][blank7][blank8][blank9];
[rescaled0:v][0:a][blank0][rescaled1:v][1:a][blank1][rescaled2:v][2:a][blank2][rescaled3:v][3:a][blank3][rescaled4:v][4:a][blank4][rescaled5:v][5:a][blank5][rescaled6:v][11:a][blank6][rescaled7:v][11:a][blank7][rescaled8:v][11:a][blank8][rescaled9:v][11:a][blank9]concat=n=22:v=1:a=1[final_video][final_audio]
As you can see, some videos use [11:a], because it's a silent audio stream.
input10.mp4, mapped to [10] and then split (or "cloned") into blank0 to blank9, is a short pause separator.
ffmpeg tells me the error
[Parsed_split_55 @ 000001591c33b280] Media type mismatch between the 'Parsed_split_55' filter output pad 1 (video) and the 'Parsed_concat_56' filter input pad 5 (audio)
[AVFilterGraph @ 000001591bf1e6c0] Cannot create the link split:1 -> concat:5
Error initializing complex filters.
Invalid argument
I'm a bit lost when it comes to using the [X:Y:Z] syntax and how the order matters in the concat argument list.
I'm open to any other suggestion to solve my problem. I would rather do this in a single command, without intermediate files.
EDIT:
For details, I already wrote a large concat+xstack filter that worked well with 8GB of memory.
In this case, there are a lot of inputs, but those inputs are small (most of them between 1 and 10 MB), so it would probably not cause out-of-memory problems, although I'm not certain.
While theoretically doable, I don't recommend calling FFmpeg with so many input files. This will increase the memory footprint of the runtime and is likely to bog down the speed (if not throw an out-of-memory error). Instead, my suggestion is to approach this in 2 steps:
Step 1: Transcode each video file so each is properly encoded exactly in the way you like it. Do this in a loop and save the results as intermediate files.
Step 2: Copy-concat all the intermediate files to form the final output
The important part here is that all temp files have the exact same stream config. Video: codec, framerate (fps), pix_fmt (pfmt), size (w, h), and timebase; Audio: codec, sample_fmt (sfmt), sampling rate (fs), channel layout (layout), and timebase. (I'm using these "variables" inside curly braces in the command sketches below.)
Step 1 command sketches:
Below I am assuming that the video & audio configs are identical among the input files except for the size, which you already addressed in your code. If not, you may need additional filters.
If video file has both audio & video:
ffmpeg -i input.mp4 \
-f lavfi -i color=c=black:s={w}x{h}:d=0.5:r={fps},format={pfmt} \
-f lavfi -i aevalsrc=0:n=1:c={layout}:s={fs},aformat={sfmt} \
-filter_complex [0:v]scale={w}:{h}:force_original_aspect_ratio=decrease,pad={w}:{h}:-1:-1,setsar=1[v]; \
[v][0:a][1:v][2:a]concat=n=2:v=1:a=1[vout][aout] \
-map [vout] -map [aout] -enc_time_base 0 output.mp4
If video file only has video stream:
ffmpeg -i input.mp4 \
-f lavfi -i color=c=black:s={w}x{h}:d=0.5:r={fps},format={pfmt} \
-f lavfi -i aevalsrc=0:n=1:c={layout}:s={fs},aformat={sfmt} \
-filter_complex [0:v]scale={w}:{h}:force_original_aspect_ratio=decrease,pad={w}:{h}:-1:-1,setsar=1[v]; \
[v][2:a][1:v][2:a]concat=n=2:v=1:a=1[vout][aout] \
-map [vout] -map [aout] -enc_time_base 0 output.mp4
Note that the only difference between 1 & 2 is the 2nd input of the concat filter. If the audio is missing, just use the aevalsrc stream for the missing one.
No 0.5-s padding for the last input video:
With audio
ffmpeg -i input.mp4 \
-vf scale={w}:{h}:force_original_aspect_ratio=decrease,pad={w}:{h}:-1:-1,setsar=1 \
-enc_time_base 0 output.mp4
Without audio:
ffmpeg -i input.mp4 \
-f lavfi -i aevalsrc=0:n=1:c={layout}:s={fs},aformat={sfmt} \
-filter_complex [0:v]scale={w}:{h}:force_original_aspect_ratio=decrease,pad={w}:{h}:-1:-1,setsar=1[v]; \
[v][2:a]concat=n=1:v=1:a=1[vout][aout] \
-map [vout] -map [aout] -enc_time_base 0 output.mp4
Use ffprobe to identify whether the file has an audio stream (you can also use ffmpeg, but I prefer this approach):
ffprobe -of default=nk=1:nw=1 -select_streams a -show_entries stream input.mp4
In Python, you can run this command with subprocess.run with stdout=sp.PIPE and check the length of the obtained stdout bytes (>0 means it has audio, =0 means no audio).
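A minimal sketch of that check, assuming ffprobe is on the PATH:

import subprocess as sp

def has_audio(path):
    # Non-empty ffprobe output means at least one audio stream was found.
    result = sp.run(
        ["ffprobe", "-of", "default=nk=1:nw=1",
         "-select_streams", "a", "-show_entries", "stream", path],
        stdout=sp.PIPE, stderr=sp.DEVNULL)
    return len(result.stdout) > 0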
While running the per-input ffmpeg, also compose the ffconcat text file.
The concat demuxer takes a text file as the input, and it has the following format:
ffconcat version 1.0
file output1.mp4
file output2.mp4
...
where the output#.mp4 are the names of the files you generated in the loop. Build this file in the Step-1 loop and save it in the same directory as the intermediate video files (call it ffconcat.txt).
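For example, a sketch of writing that list file at the end of the Step-1 loop (the file names are placeholders for whatever you generate):

# intermediate_files is filled in by the Step-1 loop, e.g. ["output1.mp4", "output2.mp4", ...]
with open("ffconcat.txt", "w") as f:
    f.write("ffconcat version 1.0\n")
    for name in intermediate_files:
        f.write(f"file {name}\n")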
Step 2 command sketch
Most of the work is done at this point, and you should be able to obtain the final video by:
ffmpeg -i ffconcat.txt -c copy final.mp4
Warning: I didn't test these commands, so if you encounter any typo that you cannot figure out, please leave a comment and I'll be happy to correct/clarify.
One-n-done sketch
What's written above can be extended to a single-run (or a partial combo) approach. Assuming there are 100 files, you can do:
ffmpeg -i input0.mp4 -i input1.mp4 ... -i input99.mp4 \
-f lavfi -i color=c=black:s={w}x{h}:d=0.5:r={fps},format={pfmt} \
-f lavfi -i aevalsrc=0:n=1:c={layout}:s={fs},aformat={sfmt} \
-filter_complex \
[0:v]scale={w}:{h}:force_original_aspect_ratio=decrease,pad={w}:{h}:-1:-1,setsar=1[v0]; \
[1:v]scale={w}:{h}:force_original_aspect_ratio=decrease,pad={w}:{h}:-1:-1,setsar=1[v1]; \
...
[99:v]scale={w}:{h}:force_original_aspect_ratio=decrease,pad={w}:{h}:-1:-1,setsar=1[v99]; \
[v0][0:a][100:v][101:a][v1][101:a][100:v][101:a]...[100:v][101:a][v99][99:a]concat=n=199:v=1:a=1[vout][aout] \
-map [vout] -map [aout] output.mp4
Here, I assumed that the 1st and last inputs have audio and the second has no audio. Input #100 = color filter, input #101 = aevalsrc filter. The total number of video-audio stream pairs to concatenate is 199 (100 videos and 99 0.5-s pauses). The key here is that you can reuse the filter outputs as many times as you need.

Passing ffmpeg command containing % through Python subprocess

I'm trying to build a GUI with Tkinter where a set of images is converted, via press of a button, to an .mp4 video.
When I run the following from the command line, all is well:
> "ffmpeg -r 5 -i ptimage%03d -crf 20 animation.mp4"
However, in Python, the following gives me an error that I think is related to passing the % in the argument:
commandString = "ffmpeg -r 5 -i ptimage%03d -crf 20 animation.mp4"
args = shlex.split(commandString)
p = subprocess.run(args)
The error I get is ptimage%03d: No such file or directory. I'm 99% sure I'm running the command from the right directory; when I run the same command replacing ptimage%03d with ptimage000.jpg, a specific image in the list, I get a (really short) video successfully.
I've tried escaping the % with \%, but that doesn't help.
Any ideas?
You omitted the file extension. Use ptimage%03d.jpg, not ptimage%03d. With ptimage%03d, ffmpeg expects files named ptimage000, ptimage001, etc.
ffmpeg -framerate 5 -i ptimage%03d.jpg -crf 20 animation.mp4
Unrelated notes: some players (YouTube excluded) can't handle such a low frame rate, so consider adding the -r 10 output option. Same with the chroma subsampling: consider adding the -vf format=yuv420p output option.
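For completeness, a sketch of running the corrected command from Python. Passing the arguments as a list means no shell is involved, so the % needs no escaping; the -r 10 and format=yuv420p additions are the optional ones suggested above.

import subprocess

# %03d is expanded by ffmpeg's image sequence demuxer, not by a shell,
# so it can be passed through subprocess as-is.
cmd = ["ffmpeg", "-framerate", "5", "-i", "ptimage%03d.jpg",
       "-r", "10", "-vf", "format=yuv420p", "-crf", "20", "animation.mp4"]
subprocess.run(cmd, check=True)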

FFmpeg same Filename for output but without extension with python

I'm kind of a noob with Python, but I managed to make this code to encode multiple video files from a folder to H.265. Everything works fine except for the output name.
Currently the output name keeps the old file extension along with the new one, like "MyMovie.mov.mp4", and I want it to be named like "MyMovie.mp4". Is there any way to exclude the original file extension from the output file?
import os, sys, re
input_folder= '/content/drive/My Drive/Videos'
output_folder= '/content/drive/My Drive/Videos/X265'
quality_setting = '30'
file_type = 'mp4'
my_suffixes = (".mp4", ".mov", ".mkv", ".avi", ".ts", ".flv", ".webm", ".wmv", ".mpg", ".m4v", ".f4v")
from pathlib import Path
Path(output_folder).mkdir(parents=True, exist_ok=True)
for filename in os.listdir(input_folder):
    if filename.endswith(my_suffixes):
        cmd = !ffmpeg -v quiet -stats -hwaccel cuvid -i "$input_folder/{filename}" -metadata comment="X265-QF$quality_setting-AAC" -c:v hevc_nvenc -preset:v slow -rc vbr -cq $quality_setting -c:a aac -b:a 160k "$output_folder/{filename}.$file_type"
PS: This code is used on Google Colab; that's why I need this in Python.
In your for-loop you check if filename ends with one of the extensions listed above, so filename already contains something like output.mp4. Then in cmd you add .$file_type (which is set to mp4) to the end of your command. I think you should remove that last part so you will only have the extension contained in filename.
I found a way: I used rpartition to cut the file extension.
cmd = !ffmpeg -v quiet -stats -hwaccel cuvid -i "$input_folder/{filename}" -metadata comment="X265-QF$quality_setting-AAC" -c:v hevc_nvenc -preset:v slow -rc vbr -cq $quality_setting -c:a aac -b:a 160k "$output_folder/{filename.rpartition('.')[0]}.$file_type"
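As an alternative sketch, os.path.splitext or pathlib's Path.stem drops the final extension the same way, which may read a bit more clearly:

import os
from pathlib import Path

filename = "MyMovie.mov"
stem1 = os.path.splitext(filename)[0]  # 'MyMovie'
stem2 = Path(filename).stem            # 'MyMovie'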

How can I create a way to bulk cut audio?

I want to create an automated way to cut .mp3 files to 45 seconds.
So far I have been able to use ffmpeg to cut the audio to 45 seconds with this command:
ffmpeg -t 45 -i input.mp3 -acodec copy output.mp3
However this does not actually speed anything up; if I have to do this with each file I might as well use Audacity. I know that I should be able to use a .bat file to create a loop for this, but I don't know how to set up the loop. In Python I would create a list of the file names in my directory with listdir:
fileNames = listdir(path),
and then create a for loop:
(something like
i = 1
for fileName in fileNames:
    x = 2 * int(i)
    ffmpeg -t 45 -i str(fileName)+'.mp3' -acodec copy str(x)+'.mp3'
that)
However I don't know how to create something like this in a .bat file. Some help with this, or a way to achieve this in python, would be much appreciated.
You can try using the script below. Save the code into a *.bat file in the folder where you have your mp3 songs and execute it; it will process all your songs.
@ECHO OFF
setlocal enableextensions enabledelayedexpansion
set /a count = 1
for %%f in (*.mp3) do (
set "output=!count!.mp3"
ffmpeg -t 45 -i %%f -acodec copy !output!
set /a count+=1
)
endlocal
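Since you mentioned a Python approach would also be welcome, here is a minimal sketch of the same loop using subprocess and os.listdir; the folder path is a placeholder and the numbered output names follow the batch answer above.

import os
import subprocess

path = "."  # folder containing the .mp3 files

count = 1
for fileName in sorted(os.listdir(path)):
    if fileName.lower().endswith(".mp3"):
        # Same command as above: keep the first 45 seconds, stream-copy the audio.
        subprocess.run(
            ["ffmpeg", "-t", "45", "-i", os.path.join(path, fileName),
             "-acodec", "copy", f"{count}.mp3"],
            check=True)
        count += 1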

Why do the results of my "ffmpeg -ss -to" split have audio but no video?

I am trying to split clips into short intervals (that I am reading in from a csv) using ffmpeg. The commands that I'm using look like this:
ffmpeg -i filename.mp4 -ss 00:00:00.030000 -to 00:00:02.030000
-pix_fmt yuv420p -c copy new_filename.mp4
This successfully splits the parent mp4 into many smaller mp4s, but the smaller files lose some or all of their video. Most of them end up being just audio. Some have video - but only for about half of the clip (the rest is black). The audio is always there. Any ideas why this might be happening?
A couple of notes: I'm using ffmpeg 3.0.2. Also, I am creating this command as a Python list and running it with the following call:
subprocess.run(cmd, stderr=subprocess.STDOUT)
This was answered by comments from Mulvya and szatmary. Posting those as an answer to close out the question.
"The OP is streamcopying. To the OP, if you change -c copy to -c:a
copy, it will work." –Mulvya
"-c copy will not work because there are no keyframes in that time
range. Your must reencode." –szatmary
The solution Mulvya suggested was to add a stream specifier. I did this, and got the result I was looking for. There's documentation here: https://ffmpeg.org/ffmpeg.html#Stream-specifiers-1
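For example, a sketch of the corrected call as a Python list, per Mulvya's suggestion (only the audio is stream-copied, so the video gets re-encoded); the file names and timestamps are the ones from the question.

import subprocess

cmd = [
    "ffmpeg", "-i", "filename.mp4",
    "-ss", "00:00:00.030000", "-to", "00:00:02.030000",
    "-pix_fmt", "yuv420p",
    "-c:a", "copy",  # stream-copy audio only; video is re-encoded
    "new_filename.mp4",
]
subprocess.run(cmd, stderr=subprocess.STDOUT)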
