I have a video file:
Duration: 00:19:36.69, start: 48891.159489, bitrate: 7234 kb/s
Stream #0:0[0x1e0]: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 3840x2160, 241/12 fps
I converted the video in Python by iterating over it frame by frame:
vid_cap = cv2.VideoCapture(video_path)
fps = vid_cap.get(cv2.CAP_PROP_FPS)
w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
vid_writer = cv2.VideoWriter(vid_full_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))

flag, im0 = vid_cap.read()
while flag:
    vid_writer.write(im0)
    flag, im0 = vid_cap.read()
vid_writer.release()
The output file's details:
Duration: 00:19:38.85, start: 0.000000, bitrate: 43718 kb/s
Stream #0:0(und): Video: mpeg4 (Simple Profile) (mp4v / 0x7634706D), yuv420p, 3840x2160 [SAR 1:1 DAR 16:9], 43717 kb/s, 20 fps
I tried using ffmpeg to convert the video to mp4, but I still don't get the same output file.
Is there any way to produce this exact converted video with ffmpeg, without a frame-by-frame loop?
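For reference, something along these lines should approximate what the OpenCV loop produces. This is a minimal sketch, not a verified equivalent: input.mp4 and output.mp4 are placeholder names, and -q:v 2 is an assumed quality setting to tune.

import subprocess

# ffmpeg's 'mpeg4' encoder is the same MPEG-4 Part 2 codec that the
# 'mp4v' FourCC selects in cv2.VideoWriter; -r 20 matches the ~20 fps
# OpenCV reported, and -an drops audio, since VideoWriter never writes audio.
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "mpeg4", "-r", "20", "-q:v", "2", "-an",
    "output.mp4",
], check=True)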
MWE
import cv2

FPS = 30
KEY_ESC = 27
OUTPUT_FILE = "vid.mp4"

cam = cv2.VideoCapture(0)
codec = cv2.VideoWriter.fourcc(*"mp4v")  # MPEG-4 http://mp4ra.org/#/codecs
frame_size = cam.read()[1].shape[:2]
video_writer = cv2.VideoWriter(OUTPUT_FILE, codec, FPS, frame_size)

# record until user exits with ESC
while True:
    success, image = cam.read()
    cv2.imshow("window", image)
    video_writer.write(image)
    if cv2.waitKey(5) == KEY_ESC:
        break

cam.release()
video_writer.release()
Problem
Video does not play.
Firefox reports "No video with supported format and MIME type found."
VLC reports "cannot find any /moov/trak" and "No streams found".
The problem is that np.ndarray.shape returns (rows, columns) for a 2-D array, which corresponds to (height, width), while VideoWriter's frameSize parameter, although not clearly documented, expects (width, height).
You can correct this with:
frame_size = tuple(reversed(cam.read()[1].shape[:2]))
But, I recommend creating the VideoWriter using the VideoCapture properties as follows:
output_file = "vid.mp4"
codec = cv2.VideoWriter.fourcc(*"mp4v")
fps = cam.get(cv2.CAP_PROP_FPS)
frame_width = cam.get(cv2.CAP_PROP_FRAME_WIDTH)
frame_height = cam.get(cv2.CAP_PROP_FRAME_HEIGHT)
frame_size = (int(frame_width), int(frame_height))
video_writer = cv2.VideoWriter(output_file, codec, fps, frame_size)
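One caveat: for live cameras, some capture backends return 0 from CAP_PROP_FPS, so it is worth guarding with a fallback. A minimal sketch, where the 30.0 default is an assumption you should match to your camera's actual rate:

fps = cam.get(cv2.CAP_PROP_FPS)
if not fps or fps <= 0:  # backend could not report a frame rate
    fps = 30.0  # assumed fallback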
Sources:
VideoWriter()
VideoCapture.get()
https://answers.opencv.org/question/66545/problems-with-the-video-writer-in-opencv-300/
Related:
OpenCV - Video doesn't play after saving in Python
OpenCV VideoWriter: Issues with playing video
I've written a downloader in Scrapy which downloads multiple .ts files asynchronously. Now I'm trying to concatenate them and convert the result to mp4. I've written the following code, but the video order doesn't seem to be correct, and the mp4 file is completely messed up and not watchable.
import ffmpeg

if all_files_downloaded:
    BASE_URL = f"/test{i}"
    process = (
        ffmpeg
        .input('pipe:')
        .output(f'{BASE_URL}/video.mp4', vcodec='h264_nvenc')
        .overwrite_output()
        .run_async(pipe_stdin=True)
    )

    for snippets in sorted(self.parallelviddeos.get(id)):
        self.parallelviddeos.get(id)[snippets].seek(0)
        process.stdin.write(self.parallelviddeos.get(id)[snippets].read())

    process.stdin.close()
    process.wait()
The dictionary 'parallelviddeos', which has the id as key and a dict of BytesIO objects (the *.ts files) as the input for the corresponding video, looks like this:
parallelviddeos = {id1: {0: BytesIO, 2: BytesIO, 3: BytesIO, 4: BytesIO}, id2: {0: BytesIO, ...}}
Can somebody help me with the transformation from the BytesIO parts to the complete mp4 file?
I'm looking for a pipeline-based method.
I'm getting the .ts files as the response to a request, in byte format.
We probably aren't supposed to write TS files to the pipe that way.
Without the pipe, we could use one of FFmpeg's concat options.
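For instance, a minimal sketch of the concat-demuxer route, assuming the snippets have first been flushed to disk (the segment file names here are made up):

import subprocess

# Hypothetical segment files, written to disk in playback order.
segments = ['seg0.ts', 'seg1.ts', 'seg2.ts']

# The concat demuxer reads a text file that lists the inputs.
with open('list.txt', 'w') as f:
    f.writelines(f"file '{name}'\n" for name in segments)

# -c copy joins without re-encoding; this works for HLS-style TS segments
# because they share codec parameters and a continuous timebase.
subprocess.run(['ffmpeg', '-y', '-f', 'concat', '-safe', '0',
                '-i', 'list.txt', '-c', 'copy', 'video.mp4'], check=True)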
I think the safest option is using another FFmpeg sub-process to decode the TS into raw video frames, and writing the pipe frame by frame.
You may start with my following post, and execute it in a loop.
Here is a "self contained" code sample:
import ffmpeg
import numpy as np
import cv2  # Use OpenCV for testing
import io
import subprocess as sp
import threading
from functools import partial

out_filename = 'video.mp4'

# Build synthetic input, and read it into BytesIO
###############################################
# Assume we know the width and height in advance
# (in case you don't know the resolution, I posted a solution for getting it using FFprobe).
width = 192
height = 108
fps = 10
n_frames = 100

in_filename1 = 'in1.ts'
in_filename2 = 'in2.ts'

# Build synthetic video (in1.ts) for testing:
(
    ffmpeg
    .input(f'testsrc=size={width}x{height}:rate=1:duration={n_frames}', f='lavfi', r=fps)
    .filter('setpts', f'N/{fps}/TB')
    .output(in_filename1, vcodec='libx264', crf=17, pix_fmt='yuv420p', loglevel='error')
    .global_args('-hide_banner')
    .overwrite_output()
    .run()
)

# Build synthetic video (in2.ts) for testing:
(
    ffmpeg
    .input(f'mandelbrot=size={width}x{height}:rate=1', f='lavfi', r=fps)
    .filter('setpts', f'N/{fps}/TB')
    .output(in_filename2, vcodec='libx264', crf=17, pix_fmt='yuv420p', loglevel='error', t=n_frames)
    .global_args('-hide_banner')
    .overwrite_output()
    .run()
)

# Read the first file into an in-memory binary stream
with open(in_filename1, 'rb') as f:
    in_bytes = f.read()
    stream1 = io.BytesIO(in_bytes)

# Read the second file into an in-memory binary stream
with open(in_filename2, 'rb') as f:
    in_bytes = f.read()
    stream2 = io.BytesIO(in_bytes)

# Use a list instead of a dictionary (just for the example).
in_memory_viddeos = [stream1, stream2]
###############################################


# Writer thread: write to stdin in chunks of 1024 bytes
def writer(decoder_process, stream):
    for chunk in iter(partial(stream.read, 1024), b''):
        decoder_process.stdin.write(chunk)
    decoder_process.stdin.close()


def decode_in_memory_and_re_encode(vid_bytesio):
    """ Decode video in BytesIO, and write the decoded frames into the encoder sub-process """
    vid_bytesio.seek(0)

    # Execute the video decoder sub-process
    decoder_process = (
        ffmpeg
        .input('pipe:')  # , f='mpegts', vcodec='h264')
        .video
        .output('pipe:', format='rawvideo', pix_fmt='bgr24')
        .run_async(pipe_stdin=True, pipe_stdout=True)
    )

    thread = threading.Thread(target=writer, args=(decoder_process, vid_bytesio))
    thread.start()

    # Read the decoded video frame by frame, and display each frame (using cv2.imshow for testing)
    while True:
        # Read a raw video frame from stdout as a bytes array.
        in_bytes = decoder_process.stdout.read(width * height * 3)

        if not in_bytes:
            break

        # Write the decoded frame to the encoder.
        encoder_process.stdin.write(in_bytes)

        # Transform the bytes read into a numpy array (for testing)
        in_frame = np.frombuffer(in_bytes, np.uint8).reshape([height, width, 3])

        # Display the frame (for testing)
        cv2.imshow('in_frame', in_frame)
        cv2.waitKey(10)

    thread.join()
    decoder_process.wait()


# Execute the video encoder sub-process
encoder_process = (
    ffmpeg
    .input('pipe:', r=fps, f='rawvideo', s=f'{width}x{height}', pixel_format='bgr24')
    .video
    .output(out_filename, vcodec='libx264', crf=17, pix_fmt='yuv420p')
    .overwrite_output()
    .run_async(pipe_stdin=True)
)

# Re-encode the "in memory" videos in a loop
for memvid in in_memory_viddeos:
    decode_in_memory_and_re_encode(memvid)

encoder_process.stdin.close()
encoder_process.wait()

cv2.destroyAllWindows()
Sorry for ignoring your dictionary structure.
I assume the issue is related to video encoding, and not to the way you are iterating over the BytesIO objects.
Update:
Reading the width and height from an in-memory video stream:
There are a few options; I decided to read the width and height from the header of a BMP image.
Start an FFmpeg sub-process with the following arguments (image2pipe format, and bmp video codec):
decoder_process = ffmpeg.input('pipe:', f='mpegts', vcodec='h264').video.output('pipe:', f='image2pipe', vcodec='bmp').run_async(pipe_stdin=True, pipe_stdout=True)
Get the width and height by parsing the header:
in_bytes = decoder_process.stdout.read(54)  # 54 bytes BMP header
(width, height) = struct.unpack("<ll", in_bytes[18:26])
(BMP stores width and height as signed little-endian 32-bit integers at byte offsets 18 and 22; a negative height marks a top-down bitmap, hence the abs() in the code below.)
Complete code sample:
import ffmpeg
import numpy as np
import cv2  # Use OpenCV for testing
import io
import subprocess as sp
import threading
from functools import partial
import struct

out_filename = 'video.mp4'

# Build synthetic input, and read it into BytesIO
###############################################
# Assume we know the width and height in advance
width = 192
height = 108
fps = 10
n_frames = 100

in_filename1 = 'in1.ts'
in_filename2 = 'in2.ts'

## Build synthetic video (in1.ts) for testing:
#(
#    ffmpeg
#    .input(f'testsrc=size={width}x{height}:rate=1:duration={n_frames}', f='lavfi', r=fps)
#    .filter('setpts', f'N/{fps}/TB')
#    .output(in_filename1, vcodec='libx264', crf=17, pix_fmt='yuv420p', loglevel='error')
#    .global_args('-hide_banner')
#    .overwrite_output()
#    .run()
#)

## Build synthetic video (in2.ts) for testing:
#(
#    ffmpeg
#    .input(f'mandelbrot=size={width}x{height}:rate=1', f='lavfi', r=fps)
#    .filter('setpts', f'N/{fps}/TB')
#    .output(in_filename2, vcodec='libx264', crf=17, pix_fmt='yuv420p', loglevel='error', t=n_frames)
#    .global_args('-hide_banner')
#    .overwrite_output()
#    .run()
#)

# Read the first file into an in-memory binary stream
with open(in_filename1, 'rb') as f:
    in_bytes = f.read()
    stream1 = io.BytesIO(in_bytes)

# Read the second file into an in-memory binary stream
with open(in_filename2, 'rb') as f:
    in_bytes = f.read()
    stream2 = io.BytesIO(in_bytes)

# Use a list instead of a dictionary (just for the example).
in_memory_viddeos = [stream1, stream2]
###############################################


# Writer thread: write to stdin in chunks of 1024 bytes
def writer(decoder_process, stream):
    for chunk in iter(partial(stream.read, 1024), b''):
        try:
            decoder_process.stdin.write(chunk)
        except (BrokenPipeError, OSError):
            # get_in_memory_video_frame_size causes BrokenPipeError and OSError exceptions.
            # This is not a clean solution, but it's the simplest I could find.
            return
    decoder_process.stdin.close()


def get_in_memory_video_frame_size(vid_bytesio):
    """ Get the resolution of a video in BytesIO """
    vid_bytesio.seek(0)

    # Execute the video decoder sub-process; the output format is BMP
    decoder_process = (
        ffmpeg
        .input('pipe:')  # , f='mpegts', vcodec='h264')
        .video
        .output('pipe:', f='image2pipe', vcodec='bmp')
        .run_async(pipe_stdin=True, pipe_stdout=True)
    )

    thread = threading.Thread(target=writer, args=(decoder_process, vid_bytesio))
    thread.start()

    # Read the BMP header of the first frame from stdout.
    # https://en.wikipedia.org/wiki/BMP_file_format
    in_bytes = decoder_process.stdout.read(54)  # 54 bytes BMP header

    decoder_process.stdout.close()
    thread.join()
    decoder_process.wait()

    vid_bytesio.seek(0)

    # The width and height are located in bytes 18 to 25 (4 bytes each)
    (width, height) = struct.unpack("<ll", in_bytes[18:26])

    # A negative height marks a top-down BMP, so take the absolute value.
    return width, abs(height)


def decode_in_memory_and_re_encode(vid_bytesio):
    """ Decode video in BytesIO, and write the decoded frames into the encoder sub-process """
    vid_bytesio.seek(0)

    # Execute the video decoder sub-process
    decoder_process = (
        ffmpeg
        .input('pipe:')  # , f='mpegts', vcodec='h264')
        .video
        .output('pipe:', format='rawvideo', pix_fmt='bgr24')
        .run_async(pipe_stdin=True, pipe_stdout=True)
    )

    thread = threading.Thread(target=writer, args=(decoder_process, vid_bytesio))
    thread.start()

    # Read the decoded video frame by frame, and display each frame (using cv2.imshow for testing)
    while True:
        # Read a raw video frame from stdout as a bytes array.
        in_bytes = decoder_process.stdout.read(width * height * 3)

        if not in_bytes:
            break

        # Write the decoded frame to the encoder.
        encoder_process.stdin.write(in_bytes)

        # Transform the bytes read into a numpy array (for testing)
        in_frame = np.frombuffer(in_bytes, np.uint8).reshape([height, width, 3])

        # Display the frame (for testing)
        cv2.imshow('in_frame', in_frame)
        cv2.waitKey(10)

    thread.join()
    decoder_process.wait()


width, height = get_in_memory_video_frame_size(in_memory_viddeos[0])

# Execute the video encoder sub-process
encoder_process = (
    ffmpeg
    .input('pipe:', r=fps, f='rawvideo', s=f'{width}x{height}', pixel_format='bgr24')
    .video
    .output(out_filename, vcodec='libx264', crf=17, pix_fmt='yuv420p')
    .overwrite_output()
    .run_async(pipe_stdin=True)
)

# Re-encode the "in memory" videos in a loop
for memvid in in_memory_viddeos:
    decode_in_memory_and_re_encode(memvid)

encoder_process.stdin.close()
encoder_process.wait()

cv2.destroyAllWindows()
My goal is to re-stream local video content / desktop screencasting to a UDP stream that I need to process in a Python script.
Here is the FFmpeg command I'm using:
ffmpeg -re -i C:\Users\test\Downloads\out.ts -strict -2 -c:v copy -an -preset slower -tune stillimage -b 11200k -f rawvideo udp://127.0.0.1:5000
And here is the simple Python script that is supposed to read the stream:
import cv2

cap = cv2.VideoCapture('udp://127.0.0.1:5000', cv2.CAP_FFMPEG)
if not cap.isOpened():
    print('VideoCapture not opened')
    exit(-1)

width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)    # float
height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)  # float
print(str(width))
print(str(height))

while True:
    ret, frame = cap.read()
    if not ret:
        print('frame empty')
        break
    # Crop the region of interest (check ret first, so an empty frame is never sliced)
    imgray = frame[int(round((height/100)*70, 0)):int(round((width/100)*42, 0)),
                   int(round((height/100)*74, 0)):int(round((width/100)*54, 0))]
    cv2.imshow('image', imgray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
I'm able to visualize a portion of the streamed video as expected, but I'm facing a lot of video quality degradation, especially video artifacts, probably due to missed packets:
These are the errors the script is logging:
[h264 @ 0000026eb272f280] error while decoding MB 105 66, bytestream -21
[h264 @ 0000026eb2fcb740] error while decoding MB 100 53, bytestream -11
[h264 @ 0000026eb272f280] error while decoding MB 32 22, bytestream -11
[h264 @ 0000026ead9ee300] error while decoding MB 60 20, bytestream -25
[h264 @ 0000026eb27f00c0] error while decoding MB 9 62, bytestream -5
[h264 @ 0000026ead9ee780] error while decoding MB 85 44, bytestream -5
[h264 @ 0000026eb27f0800] error while decoding MB 64 25, bytestream -15
[h264 @ 0000026eb272f280] error while decoding MB 112 23, bytestream -17
[h264 @ 0000026eb2735200] error while decoding MB 30 21, bytestream -7
Actually, I don't care about video fluidity; I can even reduce the FPS. The important thing is the video quality. I'm not sure whether I'm doing something wrong in the Python script or whether I'm using the wrong FFmpeg command.
Many thanks.
For video quality you will have to use multi-threading, so that reading the stream never stalls behind the display. It'll fix it up. I will give you an example that you can work on.
import threading
import queue

import cv2

q = queue.Queue()

def receive():
    cap = cv2.VideoCapture('udp://@127.0.0.1:5000?buffer_size=65535&pkt_size=65535&fifo_size=65535')
    ret, frame = cap.read()
    q.put(frame)
    while ret:
        ret, frame = cap.read()
        q.put(frame)

def display():
    while True:
        if not q.empty():
            frame = q.get()
            cv2.imshow('Video', frame)
        k = cv2.waitKey(1) & 0xff
        if k == 27:  # press 'ESC' to quit
            break

tr = threading.Thread(target=receive, daemon=True)
td = threading.Thread(target=display)
tr.start()
td.start()
td.join()
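One hedged refinement to this sketch: queue.Queue() here is unbounded, so if display() falls behind receive(), frames pile up and latency grows. Bounding the queue and dropping the oldest frame keeps memory and delay fixed; receive() would then call put_latest(frame) instead of q.put(frame):

import queue

# Bounded queue: at most 2 frames buffered at a time.
q = queue.Queue(maxsize=2)

def put_latest(frame):
    # Discard the stale frame when the consumer falls behind.
    if q.full():
        try:
            q.get_nowait()
        except queue.Empty:
            pass
    q.put_nowait(frame)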
I am trying to save a video in OpenCV, but I keep getting the error "could not demultiplex stream". I then checked the file size and found out that it was only a few kB. I primarily want to save grayscale videos; how do I make that possible?
Is there a specific codec I need to use?
mplayer gives the following output:
MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team
mplayer: could not connect to socket
mplayer: No such file or directory
Failed to open LIRC support. You will not be able to use your remote control.
Playing output.avi.
libavformat version 54.20.4 (external)
Mismatching header version 54.20.3
AVI file format detected.
[aviheader] Video stream found, -vid 0
AVI: Missing video stream!? Contact the author, it may be a bug :(
libavformat file format detected.
[lavf] stream 0: video (mpeg4), -vid 0
VIDEO: [MP4V] 1280x720 24bpp -nan fps 0.0 kbps ( 0.0 kbyte/s)
Clip info:
encoder: Lavf54.20.4
Load subtitles in ./
Failed to open VDPAU backend libvdpau_nouveau.so: cannot open shared object file: No such file or directory
[vdpau] Error when calling vdp_device_create_x11: 1
==========================================================================
Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
libavcodec version 54.35.1 (external)
Mismatching header version 54.35.0
Unsupported AVPixelFormat 53
Selected video codec: [ffodivx] vfm: ffmpeg (FFmpeg MPEG-4)
==========================================================================
Audio: no sound
Starting playback...
V: 0.0 0/ 0 ??% ??% ??,?% 0 0
Exiting... (End of file)
So far I have tried multiple codecs:
import imutils
import cv2
import numpy as np

interval = 30
outfilename = 'output.avi'
threshold = 100.
fps = 10
deletedcount = 0

cap = cv2.VideoCapture("video.mp4")
ret, frame = cap.read()
height, width, nchannels = frame.shape

fourcc = cv2.cv.CV_FOURCC(*'DIVX')
out = cv2.VideoWriter(outfilename, fourcc, fps, (width, height))

ret, frame = cap.read()
frame = imutils.resize(frame, width=500)
frame = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)

while True:
    frame0 = frame
    ret, frame = cap.read()
    if not ret:  # check before touching the frame, otherwise resize() fails on None
        deletedcount += 1
        break
    frame = imutils.resize(frame, width=500)
    frame = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
    if np.sum(np.absolute(frame - frame0)) / np.size(frame) > threshold:
        out.write(frame)
    else:
        print("Deleted")
    cv2.imshow('Feed - Press "q" to exit', frame)
    key = cv2.waitKey(interval) & 0xFF
    if key == ord('q'):
        print('received key q')
        break

cap.release()
out.release()
print('Successfully completed')
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
out = cv2.VideoWriter('output.avi', -1, 20.0, (640, 480))

while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        gray = cv2.cvtColor(src=frame, code=cv2.COLOR_BGR2GRAY)
        out.write(gray)
        cv2.imshow('frame', gray)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

cap.release()
out.release()
cv2.destroyAllWindows()
Try this: pass -1 as the fourcc (as in the second snippet above) so OpenCV opens its codec-selection dialog, and select the Intel IYUV codec there.
The out.avi was the non-working file; the output.avi is the new, working file.
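If that codec-selection dialog is not available (it only appears on Windows), the same codec can be requested directly; a minimal sketch, with the file name, fps, and resolution as placeholder values:

import cv2

# IYUV is uncompressed planar YUV 4:2:0: files are large, but nearly
# every player can demultiplex it from an AVI container.
fourcc = cv2.VideoWriter_fourcc(*'IYUV')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))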
If the video is not getting saved, the reason may be the capture size, which is hardcoded as (640, 480).
You can try the below code:
cap = cv2.VideoCapture(0)
fourcc_codec = cv2.VideoWriter_fourcc(*'XVID')
fps = 20.0
capture_size = (int(cap.get(3)), int(cap.get(4)))
out = cv2.VideoWriter("output.avi", fourcc_codec, fps, capture_size)
You can also check that you are passing the correct shape; do it like this:
h, w, _ = frame.shape
size = (w, h)
out = cv2.VideoWriter('video.avi', cv2.VideoWriter_fourcc(*'XVID'), 30, size)
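Since the question is specifically about grayscale video: writing single-channel frames to a writer opened in its default color mode also produces broken files. A minimal sketch of the two usual fixes (file name, codec, fps, and size are assumptions, and isColor support varies by backend):

import cv2

# Option 1: open the writer in grayscale mode via the isColor flag,
# so it accepts single-channel frames directly.
out = cv2.VideoWriter('gray.avi', cv2.VideoWriter_fourcc(*'XVID'),
                      20.0, (640, 480), isColor=False)

# Option 2: keep the default color writer and expand each grayscale
# frame back to three channels before writing:
# out.write(cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR))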
I've written a small Python program which captures the video stream from the webcam and writes it into an output file.
I've put a sleep of 50 ms and specified an fps of 20.0 in the VideoWriter, as follows:
#!/usr/bin/python
import cv2
from PIL import Image
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn
from io import StringIO, BytesIO
import time
import datetime

capture = None
out = None

class CamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.endswith('.mjpg'):
            self.send_response(200)
            self.send_header('Content-type', 'multipart/x-mixed-replace; boundary=--jpgboundary')
            self.end_headers()
            while True:
                try:
                    rc, img = capture.read()
                    if not rc:
                        continue
                    # Get the timestamp on the frame
                    timestamp = datetime.datetime.now()
                    ts = timestamp.strftime("%A %d %B %Y %I:%M:%S%p")
                    cv2.putText(img, ts, (10, img.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)
                    # Store the frame into the output file
                    out.write(img)
                    # Some processing before sending the frame to the webserver
                    imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
                    jpg = Image.fromarray(imgRGB)
                    tmpFile = BytesIO()
                    jpg.save(tmpFile, 'JPEG')
                    self.wfile.write("--jpgboundary".encode())
                    self.send_header('Content-type', 'image/jpeg')
                    self.send_header('Content-length', str(tmpFile.getbuffer().nbytes))
                    self.end_headers()
                    jpg.save(self.wfile, 'JPEG')
                    time.sleep(0.05)
                except KeyboardInterrupt:
                    break
            return
        if self.path.endswith('.html'):
            self.send_response(200)
            self.send_header('Content-type', 'text/html')
            self.end_headers()
            self.wfile.write('<html><head></head><body>'.encode())
            self.wfile.write('<img src="http://127.0.0.1:8080/cam.mjpg"/>'.encode())
            self.wfile.write('</body></html>'.encode())
            return

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    """Handle requests in a separate thread."""

def main():
    global capture
    global out
    capture = cv2.VideoCapture(0)
    # Define the codec and create VideoWriter object
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))
    global img
    try:
        server = ThreadedHTTPServer(('0.0.0.0', 8080), CamHandler)
        print("server started")
        server.serve_forever()
    except KeyboardInterrupt:
        capture.release()
        server.socket.close()
        out.release()
        cv2.destroyAllWindows()

if __name__ == '__main__':
    main()
It works fine, and I'm able to get the video saved. However, the fps shown in the video's properties is 600.0 (30× what I set!):
$ mediainfo output.avi
General
Complete name : output.avi
Format : AVI
Format/Info : Audio Video Interleave
File size : 4.95 MiB
Duration : 17s 252ms
Overall bit rate : 2 408 Kbps
Writing application : Lavf58.3.100
Video
ID : 0
Format : MPEG-4 Visual
Format profile : Simple#L1
Format settings, BVOP : No
Format settings, QPel : No
Format settings, GMC : No warppoints
Format settings, Matrix : Default (H.263)
Codec ID : XVID
Codec ID/Hint : XviD
Duration : 17s 252ms
Bit rate : 2 290 Kbps
Width : 640 pixels
Height : 480 pixels
Display aspect ratio : 4:3
Frame rate : 600.000 fps
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Compression mode : Lossy
Bits/(Pixel*Frame) : 0.012
Stream size : 4.71 MiB (95%)
Writing library : Lavc58.6.103
I'm pretty sure my code looks alright; do let me know if there are any obvious mistakes. In case it matters, I'm running the above on Ubuntu, on an Asus X553M laptop with a built-in webcam.
EDIT 1:
I'm using Python 3, if that matters.
EDIT 2:
Using the MJPG codec did solve the problem (thanks @api55), so can someone tell me why XVID would give the incorrect fps?
Is it possible that the XVID codec writes the fps property incorrectly, while the video is actually correctly encoded at 20 fps?
You are grabbing frames every 50 ms plus processing time, but writing them to a video that claims one frame per 50 ms, so they will play back faster, with ratio = (50 ms + processing time) / 50 ms. For example, if each iteration spends an extra 10 ms on processing, the video plays 60/50 = 1.2× too fast.
I have this recipe; it's pretty similar to what Andrey answered. The measured fps gets pretty close to what's expected.
import time

import cv2

cap = cv2.VideoCapture(capture)  # capture = 0 | filepath
fps = cap.get(cv2.CAP_PROP_FPS)
period = 1000 / fps  # target milliseconds per frame

while cap.isOpened():
    start = time.time()
    success, frame = cap.read()
    if not success:
        if capture:
            break    # video file: end of stream
        continue     # cam: transient read failure
    # processing steps
    cv2.imshow('video (ESC to quit)', frame)
    processing_time = (time.time() - start) * 1000
    wait_ms = period - processing_time if period > processing_time else period
    if cv2.waitKey(int(wait_ms)) & 0xFF == 27:
        break
    end = (time.time() - start)
    print(f'FPS: {1.0 / end:.2f} (expected {fps:.2f})\r', end='')