ffmpeg video recording gets corrupted - python

I run the following command to record video through ffmpeg:
ffmpeg -y -rtbufsize 100M -f gdigrab -framerate 10 -i desktop -c:v libx264 -r 10 -tune zerolatency -pix_fmt yuv420p record.mp4
This works fine when I run it through PowerShell (I stop the recording manually by pressing Ctrl+C).
I am trying to do the same thing through Python, and I have created two functions to start and stop the operation.
import subprocess

def recThread():
    cmd = 'ffmpeg -y -rtbufsize 100M -f gdigrab -framerate 10 -i desktop -c:v libx264 -r 10 -tune zerolatency -pix_fmt yuv420p ' + videoFile
    global proc
    proc = subprocess.Popen(cmd)
    proc.wait()

def stop():
    proc.terminate()
However, when I run this, the video is corrupted.
I have tried using os.system instead of subprocess and got the same result. Any help would be appreciated.

I tried changing the container format to AVI and it worked like a charm. I then investigated why the same thing was not working with MP4 and found that when the H.264 encoder is used, ffmpeg has to perform a finalization step on exit to produce a playable MP4 (writing the file's trailer). proc.terminate() kills the process before it can exit gracefully, so that step never runs and the file is left corrupted.
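A minimal sketch of the workaround, assuming the same command and videoFile variable as above: open ffmpeg with a stdin pipe and send it the letter q (the key that stops an interactive ffmpeg) instead of calling terminate(), so it can finish writing the file before exiting.
import subprocess

proc = None

def recThread():
    global proc
    cmd = ('ffmpeg -y -rtbufsize 100M -f gdigrab -framerate 10 -i desktop '
           '-c:v libx264 -r 10 -tune zerolatency -pix_fmt yuv420p ' + videoFile)
    # stdin must be a pipe so that stop() can send the 'q' keypress later
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
    proc.wait()

def stop():
    # Ask ffmpeg to quit instead of killing it, so it can finalize the MP4
    proc.stdin.write(b'q')
    proc.stdin.flush()
    proc.wait(timeout=10)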

Related

FFmpeg alpha channel video generation

I am trying to remove the background from a video using ffmpeg and a Python library that does that. The library (backgroundremover) just creates a matte.mp4 file as output, with the background black and the person as a white silhouette.
Library: https://github.com/nadermx/backgroundremover#advance-usage-for-video
What I am doing at the moment:
Shrink & convert the video to MP4:
ffmpeg -i ios.mov -s 320x240 -filter:v fps=30 -vf scale=320:-2 edited.mp4
Create the matte video:
backgroundremover -i edited.mp4 -wn 4 -mk -o matte.mp4
Create the video with an alpha channel (the problem):
ffmpeg -i edited.mp4 -i matte.mp4 -filter_complex "[0:v][1:v]alphamerge" -shortest -c:v qtrle -an output.mov
The last command fails with mismatched frame sizes. How do I force a frame size or skip this check?
Error:
[swscaler @ 0x7ff5c957b000] No accelerated colorspace conversion found from yuv420p to argb.
[Parsed_alphamerge_0 @ 0x7ff5c4e6d480] Input frame sizes do not match (320x240 vs 426x320).
[Parsed_alphamerge_0 @ 0x7ff5c4e6d480] Failed to configure output pad on Parsed_alphamerge_0
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Answer:
ffmpeg -y -i edited.mp4 -i matte.mp4 -f lavfi -i color=c=black:s=320x240 -filter_complex "[1:v]scale=320:240,setsar=1:1,split[vs][alpha];[0:v][vs]alphamerge[vt];[2:v][vt]overlay=shortest=1[rgb];[rgb][alpha]alphamerge" -shortest -c:v hevc_videotoolbox -allow_sw 1 -alpha_quality 0.75 -vtag hvc1 -pix_fmt yuva420p -an output.mov
The error Input frame sizes do not match (320x240 vs 426x320) is self-explanatory:
The resolution of edited.mp4 is 320x240.
The resolution of matte.mp4 is 426x320.
I don't know why backgroundremover modifies the resolution from 320x240 to 426x320.
The rest of the messages are just warnings.
I am not sure about it, but I think the first FFmpeg command should be:
ffmpeg -y -i ios.mov -vf fps=30,scale=320:240,setsar=1:1 edited.mp4
(Note that -filter:v and -vf are the same option, so the filters have to go in a single chain; specifying both means only the last one is applied.)
It does not solve the issue, though - the resolution of matte.mp4 is still 426x320.
It could be a bug in backgroundremover...
You may work around the error message using the scale filter.
The alphamerge should be followed by an overlay filter:
ffmpeg -y -i edited.mp4 -i matte.mp4 -f lavfi -i color=c=black:s=320x240 -filter_complex "[1:v]scale=320:240,setsar=1:1[vs];[0:v][vs]alphamerge[vt];[2:v][vt]overlay=shortest=1" -shortest -c:v qtrle -an output.mov
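When chasing mismatches like this one, it may help to check both resolutions up front. A small sketch, assuming ffprobe is on the PATH and the file names used above:
import subprocess

def resolution(path):
    # ffprobe prints WIDTHxHEIGHT for the first video stream
    out = subprocess.check_output([
        'ffprobe', '-v', 'error', '-select_streams', 'v:0',
        '-show_entries', 'stream=width,height', '-of', 'csv=s=x:p=0', path])
    return out.decode().strip()

print(resolution('edited.mp4'))  # 320x240
print(resolution('matte.mp4'))   # 426x320 - the mismatch alphamerge complains about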

How to combine multiple audios and videos into a single file?

I have a bunch of audio and video files and their corresponding start times in a stream.
How can I combine them all into a single video file efficiently?
I've tried ffmpeg, but it was too slow (I don't know whether I used it correctly or not).
This is my ffmpeg command example:
ffmpeg -i audio_0_3.mp3 -i audio_1_4.mp3 -i audio_2_5.mp3 -i audio_2_6.mp3 -i audio_2_7.mp3 -i audio_3_10.mp3 -i audio_3_8.mp3 -i audio_3_9.mp3 \
-filter_complex "[0:a]adelay=0ms[s0];[1:a]adelay=36282ms[s1];[2:a]adelay=462385ms[s2];[3:a]adelay=686909ms[s3];[4:a]adelay=876931ms[s4];[5:a]adelay=1675187ms[s5];[6:a]adelay=1339944ms[s6];[7:a]adelay=1567946ms[s7];[s0][s1][s2][s3][s4][s5][s6][s7]amix=inputs=8[s8]" \
-map "[s8]" out.mp4
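Since the filtergraph is just the same adelay pattern repeated, it can be generated from the (file, start time) list. A hypothetical sketch that builds and runs an equivalent command, assuming a files list of (path, offset in milliseconds) pairs and adelay's per-channel millisecond syntax:
import subprocess

# Hypothetical input list: (file path, start offset in milliseconds)
files = [('audio_0_3.mp3', 0), ('audio_1_4.mp3', 36282), ('audio_2_5.mp3', 462385)]

inputs, labels, graph = [], [], []
for i, (path, offset) in enumerate(files):
    inputs += ['-i', path]
    # delay both channels of input i by its start offset
    graph.append(f'[{i}:a]adelay={offset}|{offset}[s{i}]')
    labels.append(f'[s{i}]')
graph.append(''.join(labels) + f'amix=inputs={len(files)}[out]')

cmd = ['ffmpeg', '-y'] + inputs + ['-filter_complex', ';'.join(graph), '-map', '[out]', 'out.mp4']
subprocess.run(cmd, check=True)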

Joining an audio file and frame pictures into a video with Python and ffmpeg

I have a tmp directory which contains a collection of image frames and audio files. I am using Linux Mint 19.3 and Python 3.8. In the terminal I type
ffmpeg -i tmp/%d.png -vcodec png tmp/output.mov -y
and ffmpeg -i tmp/output.mov -i tmp/audio.mp3 -codec copy output.mov -y
and the collection of images and audio in the directory becomes a complete video.
When I run it in Python using the syntax (with from subprocess import call, STDOUT and import os)
call(["ffmpeg", "-i", "tmp/%d.png", "-vcodec", "png", "tmp/output.mov", "-y"], stdout=open(os.devnull, "w"), stderr=STDOUT)
and
call(["ffmpeg", "-i", "tmp/output.mov", "-i", "tmp/audio.mp3", "-codec", "copy", "output.mov", "-y"], stdout=open(os.devnull, "w"), stderr=STDOUT)
it does not merge into a video (and produces no error output).
I tried the syntax
os.system("ffmpeg -i tmp/%d.png -vcodec png tmp/output.mov -y")
and
os.system("ffmpeg -i tmp/output.mov -i tmp/audio.mp3 -codec copy output.mov -y")
and the video failed to merge, with the error output tmp/output.mov: No such file or directory.
Please help. Thank you.
Use the full path
Instead of tmp/output.mov use /tmp/output.mov. Do this for the rest of the inputs and outputs.
Do everything in one command
ffmpeg -y -i /tmp/%d.png -i /tmp/audio.mp3 -codec copy -shortest output.mov
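A sketch of that single command run from Python, assuming the frames and audio really live under /tmp. Leaving stderr visible (instead of sending it to os.devnull) also makes failures like the missing-file error easy to spot:
import subprocess

# One pass: copy the PNG frames and the MP3 into the MOV without re-encoding
subprocess.run([
    "ffmpeg", "-y",
    "-i", "/tmp/%d.png",
    "-i", "/tmp/audio.mp3",
    "-codec", "copy", "-shortest",
    "output.mov",
], check=True)  # check=True raises CalledProcessError if ffmpeg fails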

How can I stop a shell command using Python?

ffmpeg -f gdigrab -framerate 30 -i desktop -c:v h264_nvenc -qp 0 output.mkv
Hi there.
That's my shell command for capturing the screen. The command waits for the q key, and when I press q it stops.
For example, I want to run the command through subprocess.Popen and stop it after 5 seconds.
How can I do that?
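One possible sketch, reusing the send-q idea from the first question above: start ffmpeg with a stdin pipe, sleep for 5 seconds, then write q so it shuts down the same way it would interactively.
import subprocess
import time

proc = subprocess.Popen(
    ['ffmpeg', '-f', 'gdigrab', '-framerate', '30', '-i', 'desktop',
     '-c:v', 'h264_nvenc', '-qp', '0', 'output.mkv'],
    stdin=subprocess.PIPE)

time.sleep(5)           # record for 5 seconds
proc.stdin.write(b'q')  # equivalent to pressing q in the terminal
proc.stdin.flush()
proc.wait(timeout=10)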

ffmpeg rtmp broadcast on youtube speed below 1x

I made a Python and OpenCV program that produces around 8-15 fps, with MJPEG output served by a local web server (0.0.0.0:5000). I am trying to broadcast its frames to an RTMP server such as YouTube using ffmpeg, so basically I convert the MJPEG stream to FLV and forward it to the RTMP server with the following command:
ffmpeg -f mjpeg -i http://0.0.0.0:5000/video_feed -f lavfi -i anullsrc -c:v libx264 -vf "scale=trunc(oh*a/2)*2:320,unsharp=lx=3:ly=3:la=1.0" -crf 24 -c:a aac -ac 1 -f flv rtmp://a.rtmp.youtube.com/live2/xxx-xxx-xxx
Unfortunately the YouTube stream buffers roughly every 5 seconds, and the ffmpeg output reports a writing speed of only around 0.317x (it is expected to stay in sync with YouTube at around 0.99-1x). My questions are:
Is there a way to stream in 'realtime' at around 8-15 fps and stay in sync with YouTube's RTMP server without buffering? I suspect YouTube expects around 30 fps while my feed is only 8-15 fps, which is probably causing the buffering.
Is there an additional ffmpeg parameter that can speed up the writing? Thank you.
A raw video stream will usually be assigned a framerate of 25, but your source is variable frame rate. You need to assign wallclock time as the timestamp and generate a constant frame rate output for YouTube.
ffmpeg -f mjpeg -use_wallclock_as_timestamps true -i http://0.0.0.0:5000/video_feed -f lavfi -re -i anullsrc -vsync cfr -r 25 -c:v libx264 -vf "scale=trunc(oh*a/2)*2:320,unsharp=lx=3:ly=3:la=1.0" -crf 24 -c:a aac -ac 1 -f flv rtmp://a.rtmp.youtube.com/live2/xxx-xxx-xxx
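To check from Python whether the stream is keeping up, ffmpeg's -progress output can be watched for the speed= field (the same number the question saw stuck at 0.317x). A rough sketch, with the stream key left as the placeholder from above:
import subprocess

cmd = ['ffmpeg', '-f', 'mjpeg', '-use_wallclock_as_timestamps', 'true',
       '-i', 'http://0.0.0.0:5000/video_feed',
       '-f', 'lavfi', '-re', '-i', 'anullsrc',
       '-vsync', 'cfr', '-r', '25', '-c:v', 'libx264',
       '-vf', 'scale=trunc(oh*a/2)*2:320,unsharp=lx=3:ly=3:la=1.0',
       '-crf', '24', '-c:a', 'aac', '-ac', '1',
       '-progress', 'pipe:1',  # emit machine-readable progress on stdout
       '-f', 'flv', 'rtmp://a.rtmp.youtube.com/live2/xxx-xxx-xxx']

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    # -progress prints key=value lines; speed should hover near 1x for live
    if line.startswith('speed='):
        print(line.strip())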
