Why does the OpenCV VideoCapture function read video frames with wrong pixel values? - python

I am capturing videos from a camera and saving them with the OpenCV VideoWriter function, as uncompressed AVI files. When recording finishes, another script is supposed to read the video frame by frame and process the pixel values. However, when I read the frames of the saved video back, the pixel values are slightly off.
For example, comparing the first frame of the video being written with the first frame of the video being read (which should be 100% identical), the RGB values differ by a small amount, usually less than 5.
I have already made sure that I am using the exact same video codec when writing the video and when reading it (see the code below).
def write_video(frames):
    out = cv2.VideoWriter("encrypted.avi", cv2.VideoWriter_fourcc(*'XVID'), 30, (640, 480))
    for frame in frames:
        out.write(frame)
    out.release()

def read_video():
    cap = cv2.VideoCapture("encrypted.avi")
    cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'XVID'))
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
For my application, the frames being written and read should match 100%. I have included an image highlighting the difference between the first frame in the video being written, and the video being read. Any help is much appreciated!

These are compression artifacts: XVID is a lossy codec. If you need your frames to match down to the last bit, write them as a sequence of .png files instead; PNG is losslessly compressed and preserves every pixel. Be aware that a PNG sequence will take much more disk space than a compressed video.

Related

DIY compression of an image given a previous frame (inter-frame compression)

Is there any way I can compress a numpy image given a previous frame?
I want to build a sort of DIY live video stream for my 3D-printed RC car (Newone) from my Raspberry Pi.
Currently I am continuously sending JPEGs without any inter-frame compression. It would be nice to have some sort of motion prediction, which would save a lot of traffic.
npImg = cap.read()[1]                  # capture an image
npImg = cv2.resize(npImg, (320, 180))  # downsize the image for streaming
jImg = cv2.imencode('.jpg', npImg, [int(cv2.IMWRITE_JPEG_QUALITY), 90])[1]  # numpy image to JPEG
stream = io.BytesIO(jImg)              # stream for reading chunks of the JPEG
while run:
    part = stream.read(504)            # 506 is the max UDP packet length, minus 2 for video authentication
    if not part:                       # empty read means the JPEG is finished; break the loop
        break
    soc.sendto(b'v' + part, address_server)
soc.sendto(b'f', address_server)       # image finished
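A minimal sketch of the inter-frame idea, assuming numpy frames (the helper names `encode_diff`/`decode_diff` are hypothetical):

```python
import numpy as np

def encode_diff(prev, curr):
    # Keep only the signed difference to the previous frame;
    # int16 avoids uint8 wrap-around for negative differences.
    return curr.astype(np.int16) - prev.astype(np.int16)

def decode_diff(prev, diff):
    # Reconstruct the current frame from the previous one plus the diff.
    return (prev.astype(np.int16) + diff).clip(0, 255).astype(np.uint8)

prev = np.zeros((4, 4), dtype=np.uint8)  # toy "previous frame"
curr = prev.copy()
curr[1, 1] = 200                         # one changed pixel
restored = decode_diff(prev, encode_diff(prev, curr))
assert (restored == curr).all()
```

For mostly static scenes the diff is mostly zeros and compresses well, but if the diffs themselves go through lossy JPEG encoding the errors accumulate, so periodic key frames (full images) would still be needed.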

Why is the OpenCV video-reading FPS not the same as the video's encoded FPS?

In OpenCV with Python, when the FPS of the webcam and of a video file in the directory are the same, why does the video file play in fast forward while the webcam continues to show frames at a normal rate? What role does the cv2.waitKey() function play here?
The fps of a video file describes how it was encoded: how many frames are contained within each second, as the name suggests. For example, extracting 1 second of the video will produce exactly that number of frames (images).
The fps of the web camera means how many frames the camera can capture per second. If saved to a video file, that is how many frames end up in each 1-second span.
There is a third (probably hidden) concept here, though: how fast OpenCV can read a video file. For typical resolutions on a modern computer, this rate is higher than the file's actual fps, so the playback looks fast-forwarded because frames are read (and displayed) at a faster pace than the video's fps.
You can calculate the delay you need to introduce into the playback loop to force it to display at the normal pace, rather than finding it by trial and error.
Hope this clarifies the issue.

Reading .sec format video with OpenCV

I'm dealing with videos in the .sec file format (Samsung camera backup files). Each backup folder contains an .exe application to play back those backups. The backups come in two forms: single-camera backups and multiple-camera backups (multiple positions at the same place). The single-camera backup frames are captured successfully by OpenCV, but the multiple-camera ones are not. I have noticed a couple of points:
The single-camera video frames always exist, but the multiple-camera ones do not (motion detection activated?).
The .exe file of the multiple-camera backup does play all the videos.
The cap object (cv2.VideoCapture) for those .sec files does not accept setting parameters (cap.set() returns False).
I used an app called "MediaInfo.exe" to get info about those files:
the single camera: (MediaInfo screenshot not included)
the multiple cameras: (MediaInfo screenshot not included)
What I'm looking for is to successfully capture frames from one (or more) of the multiple-camera backups.
Thanks in advance.
UPDATE
It seems the problem is not clear, so here is the code I have:
cap = cv2.VideoCapture('a_file_from_single_camera_backup.sec')
ret, frame = cap.read()
print(ret, frame)
output:
True [[[132 140 130][133 141 131][134 142 132]...[ 60 51 43][ 60 51 43][ 60 51 43]]...
and
cap = cv2.VideoCapture('a_file_from_multiple_cameras_backup.sec')
ret, frame = cap.read()
print(ret, frame)
output:
False None
and
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter.fourcc('H', '2', '6', '4'))
output: (for both files)
False
For anyone facing an issue reading the .sec (Samsung security camera backup) file format with OpenCV, there are two scenarios (in my case):
The backup belongs to a single camera (one position) and can be read by OpenCV directly (nothing to do). You just need to know that the video uses the H.264 codec.
The backup belongs to multiple cameras (multiple positions) and won't be read by OpenCV, but you can change the file extension from .sec to .hevc and OpenCV will do the job. You also need to know that this video uses the H.265 codec.
For more, see https://en.wikipedia.org/wiki/High_Efficiency_Video_Coding
Good luck!

YUYV Framerate faster than MJPG from USB Camera OpenCV

I am using a SOM running a Cortex-A5 @ 500 MHz and am trying to maximise the frame rate received from a USB camera. My camera supports video capture in YUYV and MJPEG.
Other posts suggested forcing OpenCV to read MJPEG frames from the camera; however, this slowed down the frame rate.
I can currently get about 18 fps reading the YUYV format and about 10 fps reading MJPEG at 640x480. Currently I am just grabbing frames and not doing any other processing. I am reading the CAP_PROP_FOURCC value each loop iteration to confirm that OpenCV is setting the capture format correctly.
I am currently running opencv 4 and python3.5
Any ideas why this may be happening?
EDIT: Capture code:
# Repeatedly capture the current image
while True:
    ret, image = cap.read()
    if image is None:
        time.sleep(0.5)
        continue
    codec = cap.get(cv2.CAP_PROP_FOURCC)
    print(codec)
    # Print the framerate.
    text = '{:.2f}, {:.2f}, {:.2f} fps'.format(*fps.tick())
    print(text)
Please provide the exact SOM and the camera that you are using.
There are many factors, for example the format of the images captured by the camera, how they are transferred, and how they are received and handled by the SOM.
Transferring them shouldn't be a problem in terms of bandwidth.
I am assuming that the OpenCV setting only applies on the SOM side and doesn't change the camera's capture format, so the SOM has more processing to do, hence the frame rate drops.
[EDIT]
I cannot comment yet, so I hope you read this: your camera link is dead.

Is it possible to delete frames from a video file in OpenCV?

I can open a video and play it with opencv 2 using the cv2.VideoCapture(myvideo). But is there a way to delete a frame within that video using opencv 2? The deletion must happen in-place, that is, the file being played will end up with a shorter time due to deleted frames. Simply zeroing out the matrix wouldn't be sufficient.
For example something like:
video = cv2.VideoCapture("myvideo.flv")
while True:
    ret, img = video.read()
    if not ret:
        break
    # Show the image
    cv2.imshow("frame", img)
    cv2.waitKey(1)
    # Then delete it and proceed to the next frame, but is this possible?
    # delete(img)??
So the above code would, in principle, leave the file empty at the end, since it reads each frame and then deletes it from the video file.
OpenCV is not the right tool for this job. What you need is a media processing framework, such as ffmpeg (i.e. libavformat/libavcodec/libswscale) or GStreamer.
Also, depending on the encoding scheme used, deleting just a single frame may not even be possible. Frame-exact editing is only possible in a video consisting entirely of intra frames (I-frames). If the video is encoded with so-called groups of pictures (GOPs), removing a single frame requires re-encoding the whole GOP it was part of.
You can't do it in-place, but you can use OpenCV's VideoWriter to write the frames that you want in a new video file.
