ffmpeg rtmp broadcast on youtube speed below 1x - python

I made a Python and OpenCV program that produces around 8-15 fps in MJPEG output format, served on a localhost webserver (0.0.0.0:5000). I am trying to broadcast its frames to an RTMP server such as YouTube using ffmpeg, i.e. convert the MJPEG stream to FLV and forward it to the RTMP server with the following command:
ffmpeg -f mjpeg -i http://0.0.0.0:5000/video_feed -f lavfi -i anullsrc -c:v libx264 -vf "scale=trunc(oh*a/2)*2:320,unsharp=lx=3:ly=3:la=1.0" -crf 24 -c:a aac -ac 1 -f flv rtmp://a.rtmp.youtube.com/live2/xxx-xxx-xxx
Unfortunately the YouTube stream buffers roughly every 5 seconds, and the ffmpeg terminal reports a writing speed of only about 0.317x (expected to stay in sync with YouTube at around 0.99-1x). My questions are:
Is there a way to stream in 'real time' at around 8-15 fps and automatically stay in sync with the YouTube RTMP server without buffering? I suspect YouTube expects around 30 fps while my source is only 8-15 fps, which is probably causing the buffering.
Is there an additional ffmpeg parameter that can speed up writing? Thank you.

Raw video will usually be assigned a framerate of 25, but your source is variable frame rate. You need to assign wall-clock time as the timestamp and generate a constant-frame-rate output for YouTube:
ffmpeg -f mjpeg -use_wallclock_as_timestamps true -i http://0.0.0.0:5000/video_feed -f lavfi -re -i anullsrc -vsync cfr -r 25 -c:v libx264 -vf "scale=trunc(oh*a/2)*2:320,unsharp=lx=3:ly=3:la=1.0" -crf 24 -c:a aac -ac 1 -f flv rtmp://a.rtmp.youtube.com/live2/xxx-xxx-xxx
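Since the question is tagged python, the fixed command can also be assembled and launched from the same program. A minimal sketch, assuming the answer's command verbatim; the helper names are mine, not from the answer:

```python
import subprocess

def build_stream_cmd(source_url, rtmp_url, fps=25):
    """Build the argument list for the ffmpeg command above."""
    return [
        "ffmpeg",
        "-f", "mjpeg", "-use_wallclock_as_timestamps", "true",
        "-i", source_url,
        "-f", "lavfi", "-re", "-i", "anullsrc",
        "-vsync", "cfr", "-r", str(fps),
        "-c:v", "libx264",
        "-vf", "scale=trunc(oh*a/2)*2:320,unsharp=lx=3:ly=3:la=1.0",
        "-crf", "24", "-c:a", "aac", "-ac", "1",
        "-f", "flv", rtmp_url,
    ]

def start_stream(cmd):
    # Requires ffmpeg on PATH; runs until the MJPEG source stops.
    return subprocess.Popen(cmd)

cmd = build_stream_cmd("http://0.0.0.0:5000/video_feed",
                       "rtmp://a.rtmp.youtube.com/live2/xxx-xxx-xxx")
```

Passing a list instead of one long string avoids shell-quoting problems with the filter expression.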

Related

Ffmpeg alpha channel video generation

I am trying to remove a background from a video using ffmpeg and a Python library (backgroundremover) that does exactly that: it creates a matte.mp4 file as output, with the background black and the person as a white silhouette.
PY lib: https://github.com/nadermx/backgroundremover#advance-usage-for-video
What I am doing at the moment:
Shrink & convert the video to MP4
ffmpeg -i ios.mov -s 320x240 -filter:v fps=30 -vf scale=320:-2 edited.mp4
Create the matte video
backgroundremover -i edited.mp4 -wn 4 -mk -o matte.mp4
Create video with alpha channel (the problem)
ffmpeg -i edited.mp4 -i matte.mp4 -filter_complex "[0:v][1:v]alphamerge" -shortest -c:v qtrle -an output.mov
The last command fails because the frame sizes don't match; how do I force a frame size or skip this check?
Error:
[swscaler @ 0x7ff5c957b000] No accelerated colorspace conversion found from yuv420p to argb.
[Parsed_alphamerge_0 @ 0x7ff5c4e6d480] Input frame sizes do not match (320x240 vs 426x320).
[Parsed_alphamerge_0 @ 0x7ff5c4e6d480] Failed to configure output pad on Parsed_alphamerge_0
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Answer:
ffmpeg -y -i edited.mp4 -i matte.mp4 -f lavfi -i color=c=black:s=320x240 -filter_complex "[1:v]scale=320:240,setsar=1:1,split[vs][alpha];[0:v][vs]alphamerge[vt];[2:v][vt]overlay=shortest=1[rgb];[rgb][alpha]alphamerge" -shortest -c:v hevc_videotoolbox -allow_sw 1 -alpha_quality 0.75 -vtag hvc1 -pix_fmt yuva420p -an output.mov
The error Input frame sizes do not match (320x240 vs 426x320) is self-explanatory:
The resolution of edited.mp4 is 320x240.
The resolution of matte.mp4 is 426x320.
I don't know why backgroundremover modifies the resolution from 320x240 to 426x320.
The rest of the messages are just warnings.
I am not sure about it, but I think the first FFmpeg command should be:
ffmpeg -y -i ios.mov -vf "fps=30,scale=320:240,setsar=1:1" edited.mp4
It's not solving the issue - the resolution of matte.mp4 is still 426x320.
It could be a bug in backgroundremover...
You may solve the error message using the scale filter.
The alpha merge should be followed by an overlay filter:
ffmpeg -y -i edited.mp4 -i matte.mp4 -f lavfi -i color=c=black:s=320x240 -filter_complex "[1:v]scale=320:240,setsar=1:1[vs];[0:v][vs]alphamerge[vt];[2:v][vt]overlay=shortest=1" -shortest -c:v qtrle -an output.mov

Set images duration in FFmpeg when converting a list of images to one video?

I managed to convert a list of images in a folder to one video using this command:
ffmpeg -framerate 1 -loop 1 -pattern_type glob -i '*.jpg' -pix_fmt yuv420p -t 12 video.mp4;
I just need to set a duration, for example 6 seconds, for each image. Is this possible with FFmpeg?
If you want to play each frame for 6 seconds use -framerate 1/6. Also, -loop 1 is not required unless you want the files to all finish encoding once and then start over at the beginning.
ffmpeg -framerate 1/6 -pattern_type glob -i "*.jpg" -vcodec libx264 \
-pix_fmt yuv420p -r 30 -threads 4 -crf 25 -refs 1 -bf 0 -coder 0 -g 25 \
-keyint_min 15 -movflags +faststart output.mp4
You can change -crf etc to your requirements.
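The 1/6 value generalizes: each image shows for N seconds when -framerate is 1/N. A trivial helper to make that relationship explicit (the function name is mine):

```python
from fractions import Fraction

def framerate_for_duration(seconds_per_image):
    # -framerate 1/N shows each image for N seconds,
    # e.g. 6 s per image -> -framerate 1/6.
    return Fraction(1, int(seconds_per_image))

print(framerate_for_duration(6))  # 1/6
print(framerate_for_duration(2))  # 1/2
```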

How to combine multiple audios and videos into a single file?

I have a bunch of audio and video files and their corresponding start time in a stream.
Like this diagram below:
How can I combine this all into a single video file efficiently?
I've tried ffmpeg but it was too slow (I don't know if I used ffmpeg correctly or not)
This is my ffmpeg command example:
ffmpeg -i audio_0_3.mp3 -i audio_1_4.mp3 -i audio_2_5.mp3 -i audio_2_6.mp3 -i audio_2_7.mp3 -i audio_3_10.mp3 -i audio_3_8.mp3 -i audio_3_9.mp3 \
-filter_complex "[0:a]adelay=0ms[s0];[1:a]adelay=36282ms[s1];[2:a]adelay=462385ms[s2];[3:a]adelay=686909ms[s3];[4:a]adelay=876931ms[s4];[5:a]adelay=1675187ms[s5];[6:a]adelay=1339944ms[s6];[7:a]adelay=1567946ms[s7];[s0][s1][s2][s3][s4][s5][s6][s7]amix=inputs=8[s8]" \
-map [s8] out.mp4
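The long adelay/amix filter string above can be generated programmatically from the list of start times instead of written by hand, which avoids label typos. A sketch of the same filter graph; the function name and the [out] label are mine:

```python
def build_amix_filter(delays_ms):
    """Build an adelay/amix filter_complex string.

    delays_ms: start time in milliseconds for each audio input,
    in the same order as the -i inputs.
    """
    # One adelay per input: [i:a]adelay=<ms>ms[si]
    parts = [f"[{i}:a]adelay={d}ms[s{i}]" for i, d in enumerate(delays_ms)]
    # Feed all delayed streams into a single amix.
    labels = "".join(f"[s{i}]" for i in range(len(delays_ms)))
    parts.append(f"{labels}amix=inputs={len(delays_ms)}[out]")
    return ";".join(parts)

print(build_amix_filter([0, 36282, 462385]))
```

The returned string is passed to -filter_complex, with -map "[out]" selecting the mixed stream.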

How to concat multiple explicit image paths to video with ffmpeg

I am trying to figure out how to create a video from explicit paths to an image sequence.
I am writing these images from Houdini.
Instead of doing some kind of regex matching to replace $F3 with %03d I am trying to figure out how to concat the image paths into a video.
I'm trying to do something like this:
ffmpeg -y -framerate 12 -i -start_number 1 -i test_00001.jpg -start_number 2 -i test_00002.jpg -start_number 3 -i test_00003.jpg -start_number 4 -i test_00004.jpg -start_number 5 -i test_00005.jpg -start_number 6 -i test_00006.jpg -start_number 7 -i test_00007.jpg -start_number 8 -i test_00008.jpg -start_number 9 -i test_00009.jpg -filter_complex "concat=n=3" -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p $HOME/Desktop/test.mp4
The resulting video only plays a couple of frames.
If I write the images to a text file with the following format
file '/Volumes/hqueue/projects/Fire/render/test_1.jpg'
file '/Volumes/hqueue/projects/Fire/render/test_2.jpg'
file '/Volumes/hqueue/projects/Fire/render/test_3.jpg'
file '/Volumes/hqueue/projects/Fire/render/test_4.jpg'
file '/Volumes/hqueue/projects/Fire/render/test_5.jpg'
file '/Volumes/hqueue/projects/Fire/render/test_6.jpg'
file '/Volumes/hqueue/projects/Fire/render/test_7.jpg'
and then run a command like the following
ffmpeg -y -framerate 12 -f concat -i /var/folders/fy/8zlxyq497kz0nzgb1nqc9xf59rwbjm/T/image_list.txt -c:v libx264 -profile:v high -c copy -crf 20 -pix_fmt yuv420p $HOME/Desktop/test.mp4
I get the following output
[concat @ 0x7f870e809c00] Unsafe file name '/Volumes/hqueue/projects/Fire/render/test_1.jpg'
/var/folders/fy/8zlxyq497kz0nzgb1nqc9xf59rwbjm/T/image_list.txt: Operation not permitted
Right now it's producing a video, but only with a couple of the input frames.
I put selected files in one folder and use
ffmpeg -pattern_type glob -i '*.jpg' output.mp4
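The concat-demuxer attempt in the question can also be made to work: the "Unsafe file name" error is triggered by absolute paths in the list file, and -safe 0 disables that check (the -c copy in the original command would also override -c:v libx264, so it is dropped here). A hedged sketch that writes the list file and builds the command; the function names are mine:

```python
def write_list_file(list_path, image_paths, seconds_per_frame=1 / 12):
    # Concat demuxer syntax: a "file '<path>'" line, optionally followed
    # by a "duration" line controlling how long that entry is shown.
    with open(list_path, "w") as f:
        for p in image_paths:
            f.write(f"file '{p}'\n")
            f.write(f"duration {seconds_per_frame}\n")

def build_concat_cmd(list_path, out_path):
    # -safe 0 allows absolute paths, avoiding the "Unsafe file name" error.
    return ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
            "-i", list_path,
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out_path]
```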

ffmpeg video recording gets corrupted

I run the following command to record video through ffmpeg:
ffmpeg -y -rtbufsize 100M -f gdigrab -framerate 10 -i desktop -c:v libx264 -r 10 -tune zerolatency -pix_fmt yuv420p record.mp4
This works fine when I run it through PowerShell (I stop the recording manually by pressing Ctrl+C).
I am trying to do the same thing thru Python and I have created two functions to start and stop the operation.
def recThread():
    cmd = 'ffmpeg -y -rtbufsize 100M -f gdigrab -framerate 10 -i desktop -c:v libx264 -r 10 -tune zerolatency -pix_fmt yuv420p ' + videoFile
    global proc
    proc = subprocess.Popen(cmd)
    proc.wait()

def stop():
    proc.terminate()
However when I run this, the video is corrupted.
I have tried using os.system instead of subprocess and got the same result. Any help would be appreciated.
I tried changing the output format to AVI and it worked like a charm. After that I investigated why the same thing was not working with MP4, and found that when the h264 encoder is used, ffmpeg performs a finalization step at exit to produce a playable MP4. proc.terminate() does not let ffmpeg exit gracefully, so that step never runs and the file ends up corrupted.
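One way to let ffmpeg exit gracefully is to open it with a stdin pipe and send 'q', which ffmpeg treats as a quit request from the keyboard, so it finishes writing the file before exiting. A sketch under that assumption; the function names are mine:

```python
import subprocess

def start_recording(video_file):
    # The gdigrab command from the question; stdin=PIPE lets us
    # stop the recording gracefully later.
    cmd = ["ffmpeg", "-y", "-rtbufsize", "100M", "-f", "gdigrab",
           "-framerate", "10", "-i", "desktop", "-c:v", "libx264",
           "-r", "10", "-tune", "zerolatency", "-pix_fmt", "yuv420p",
           video_file]
    return subprocess.Popen(cmd, stdin=subprocess.PIPE)

def stop_recording(proc, timeout=10):
    # Sending 'q' asks ffmpeg to finish the file (e.g. write the MP4
    # trailer), unlike terminate(), which kills it mid-write.
    proc.stdin.write(b"q")
    proc.stdin.flush()
    proc.stdin.close()
    proc.wait(timeout=timeout)
```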
