How does QueryFrame work? - Python

import cv

# create a window
winname = "myWindow"
win = cv.NamedWindow(winname, cv.CV_WINDOW_AUTOSIZE)

# load video file
invideo = cv.CaptureFromFile("video.avi")

# interval between frames in ms
fps = cv.GetCaptureProperty(invideo, cv.CV_CAP_PROP_FPS)
interval = int(1000.0 / fps)

# play video
while True:
    im = cv.QueryFrame(invideo)
    cv.ShowImage(winname, im)
    if cv.WaitKey(interval) == 27:  # ASCII 27 is the ESC key
        break

del invideo
cv.DestroyWindow(winname)
Above is simple Python code that uses the OpenCV library to play a video file.
The only part I don't understand is im = cv.QueryFrame(invideo).
According to the OpenCV API, "QueryFrame grabs a frame from a camera or video file, decompresses it and returns it."
To my understanding, it just returns an image in IplImage format for one single frame, but how does it know which frame to return? The only parameter QueryFrame needs is the video capture, but there is no index to tell it which frame among the video's frames I want to retrieve. What if I need to play a video starting from the middle?

You have to use cv.GetCaptureProperty with CV_CAP_PROP_FRAME_COUNT to get the number of frames in your video.
Divide it by 2 to find the middle.
Call QueryFrame until you reach that value: each call decodes the frame at the capture's current internal position and advances that position by one, which is how QueryFrame "knows" which frame comes next without an index parameter.

Related

Save video clip from longer video between two timestamps in Python cv2

I have an hour-long video from which I would like to save a clip between two timestamps, say 11:20-11:35. Is the best way to do this frame by frame, or is there a better way?
Here's the gist of what I did frame by frame. If there's a less lossy way to do it, I'd love to know! I know I could do it from the terminal using ffmpeg, but I'm curious how best to do it using cv2.
def get_clip(input_filename, output_filename, start_sec, end_sec):
    # input and output videos are probably mp4
    vidcap = cv2.VideoCapture(input_filename)
    # math to find starting and ending frame number
    fps = find_frames_per_second(vidcap)
    start_frame = int(start_sec * fps)
    end_frame = int(end_sec * fps)
    vidcap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)
    # open video writer
    vidwrite = cv2.VideoWriter(output_filename, cv2.VideoWriter_fourcc(*'MP4V'), fps, get_frame_size(vidcap))
    success, image = vidcap.read()
    frame_count = start_frame
    while success and (frame_count < end_frame):
        vidwrite.write(image)  # write frame into video
        success, image = vidcap.read()  # read frame from video
        frame_count += 1
    vidwrite.release()
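The start_sec and end_sec arguments above are plain seconds; a small helper of my own (not part of cv2 and not in the original snippet, purely for convenience) converts the question's 11:20-style timestamps into seconds:

```python
def timestamp_to_seconds(ts):
    """Convert 'SS', 'MM:SS' or 'HH:MM:SS' into a number of seconds."""
    seconds = 0
    for part in ts.split(":"):
        seconds = seconds * 60 + int(part)
    return seconds

# the 11:20-11:35 clip from the question:
start_sec = timestamp_to_seconds("11:20")  # 680
end_sec = timestamp_to_seconds("11:35")    # 695
```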

How to skip a video in an nth increment in OpenCV

I've been reading the OpenCV documentation about reading and skipping through a video using a trackbar; for example with a step of 10, the video would play at 10s, 20s, 30s, ... n. But I can't seem to implement it the right way in Python. I just want to ask for your suggestion of an algorithm, and some snippets if any. Thank you.
You can loop over the video files in a folder and open every video[i] with a step, like in this code:
import cv2
import os

video_files = os.listdir(folder)
step = 10  # take every 10th video
# loop over the videos, showing every 10th one
for i in range(0, 100, step):
    video_name = video_files[i]
    cap = cv2.VideoCapture('../folder/' + video_name)
    num_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # loop over the video and show it
    for j in range(num_frames):
        ret_prev, frame = cap.read()
        cv2.imshow('frame', frame)
        cv2.waitKey(0)
Now I have understood your comment.
This is code to show just every 10s, 20s, etc.:
import cv2

cap = cv2.VideoCapture('../folder/' + video_name)
num_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = cap.get(cv2.CAP_PROP_FPS)
# loop over the video, showing one frame every 10 seconds
for i in range(num_frames):
    ret_prev, frame = cap.read()
    # show only every 10s; other frames are skipped
    if i % int(fps * 10) == 0:
        cv2.imshow('frame', frame)
        cv2.waitKey(0)
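One subtlety worth double-checking here: in Python, `i % fps*10` parses as `(i % fps) * 10`, so the every-ten-seconds test needs explicit grouping and a comparison, `i % int(fps * 10) == 0`. A quick sanity check with an assumed frame rate (pure Python, no video needed):

```python
fps = 30  # assumed frame rate, for illustration only
# frame indices the condition keeps out of the first 1000
shown = [i for i in range(1000) if i % int(fps * 10) == 0]
print(shown)  # [0, 300, 600, 900] -> timestamps 0s, 10s, 20s, 30s
```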

Get frames per second of a gif in python?

In Python, I'm loading a GIF with PIL. I extract the first frame, modify it, and put it back. I save the modified GIF with the following code:
imgs[0].save('C:\\etc\\test.gif',
             save_all=True,
             append_images=imgs[1:],
             duration=10,
             loop=0)
Where imgs is an array of the images that make up the GIF, and duration is the delay between frames in milliseconds. I'd like to make the duration value the same as in the original GIF, but I'm unsure how to extract either the total duration of a GIF or the frames displayed per second.
As far as I'm aware, the GIF header does not provide any fps information.
Does anyone know how I could get the correct value for duration?
Thanks in advance.
Edit: Example GIF as requested:
Retrieved from here.
In GIF files, each frame has its own duration. So there is no general fps for a GIF file. The way PIL supports this is by providing an info dict that gives the duration of the current frame. You could use seek and tell to iterate through the frames and calculate the total duration.
Here is an example program that calculates the average frames per second for a GIF file.
import os
from PIL import Image

FILENAME = os.path.join(os.path.dirname(__file__),
                        'Rotating_earth_(large).gif')

def get_avg_fps(PIL_Image_object):
    """Returns the average framerate of a PIL Image object."""
    PIL_Image_object.seek(0)
    frames = duration = 0
    while True:
        try:
            frames += 1
            duration += PIL_Image_object.info['duration']
            PIL_Image_object.seek(PIL_Image_object.tell() + 1)
        except EOFError:
            return frames / duration * 1000

def main():
    img_obj = Image.open(FILENAME)
    print(f"Average fps: {get_avg_fps(img_obj)}")

if __name__ == '__main__':
    main()
If you assume that the duration is equal for all frames, you can just do:
print(1000 / Image.open(FILENAME).info['duration'])
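A quick way to sanity-check the duration round trip without a file on disk (my own example, not from the question): build a tiny GIF in memory with Pillow and read the per-frame duration back out of the info dict:

```python
from io import BytesIO
from PIL import Image

# build a 2-frame GIF in memory with 100 ms per frame
frames = [Image.new("P", (8, 8), color=i) for i in range(2)]
buf = BytesIO()
frames[0].save(buf, format="GIF", save_all=True,
               append_images=frames[1:], duration=100, loop=0)

# reopen it and read the current frame's duration back
buf.seek(0)
gif = Image.open(buf)
print(gif.info["duration"])  # 100 (ms), i.e. 10 fps
```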

How to create array of frames from an mp4 with opencv for python

I am attempting to use opencv_python to break an mp4 file down into its frames so I can later open them with Pillow, or at least be able to run my own methods on the images.
I understand that the following snippet of code gets a frame from a live video or a recorded video.
import cv2
cap = cv2.VideoCapture("myfile.mp4")
boolean, frame = cap.read()
What exactly does the read function return, and how can I create an array of images that I can modify?
Adapted from How to process images of a video, frame by frame, in video streaming using OpenCV and Python. Untested; however, each frame is read as a numpy array and appended to a list, which is converted to a numpy array once all the frames have been read.
import cv2
import numpy as np

images = []
cap = cv2.VideoCapture("./out.mp4")
while not cap.isOpened():
    cap = cv2.VideoCapture("./out.mp4")
    cv2.waitKey(1000)
    print("Wait for the header")

pos_frame = cap.get(cv2.CAP_PROP_POS_FRAMES)
while True:
    frame_ready, frame = cap.read()  # get the frame
    if frame_ready:
        # The frame is ready and already captured
        # cv2.imshow('video', frame)
        # store a copy of the current frame (already a numpy array)
        images.append(frame.copy())
        pos_frame = cap.get(cv2.CAP_PROP_POS_FRAMES)
    else:
        # The next frame is not ready, so we try to read it again
        cap.set(cv2.CAP_PROP_POS_FRAMES, pos_frame - 1)
        print("frame is not ready")
        # It is better to wait for a while for the next frame to be ready
        cv2.waitKey(1000)
    if cv2.waitKey(10) == 27:
        break
    if cap.get(cv2.CAP_PROP_POS_FRAMES) == cap.get(cv2.CAP_PROP_FRAME_COUNT):
        # If the number of captured frames is equal to the total number
        # of frames, we stop
        break
all_frames = np.array(images)
Simply use this code to get an array of frames from your video:
import cv2
import numpy as np

frames = []
video = cv2.VideoCapture("spiderino_turning.mp4")
while True:
    read, frame = video.read()
    if not read:
        break
    frames.append(frame)
frames = np.array(frames)
But regarding your question, video.read() returns two values. The first one (read in the example code) indicates whether the frame was read successfully (True on success, False on any error). The second return value is the frame, which can be empty if the read attempt is unsuccessful, or a 3D array (i.e., a color image) otherwise.
But why can a read attempt be unsuccessful?
If you are reading from a camera, any problem with the camera (e.g., the cable is disconnected or the camera's battery is dead) can cause an error.
If you are reading from a video, the read attempt will fail when all the frames are read, and there are no more.

Python: How to get some images from a video in Python3?

How can I extract frames from a video file using Python3?
For example, I want to get 16 pictures from a video and combine them into a 4x4 grid.
I don't want 16 separate images at the end, I want one image containing 16 frames from the video.
----Edit----
import av

container = av.open('/home/uguraba/Downloads/equals/equals.mp4')
video = next(s for s in container.streams)
for packet in container.demux(video):
    for frame in packet.decode():
        if frame.index % 3000 == 0:
            frame.to_image().save('/home/uguraba/Downloads/equals/frame-%04d.jpg' % frame.index)
By using this script I can get frames, but there will be lots of frames saved. Can I take specific frames, like 5000, 7500, 10000?
Also, how can I see the total frame number?
Use PyMedia or PyAV to access image data and PIL or Pillow to manipulate it in desired form(s).
These libraries have plenty of examples, so with basic knowledge of video muxing/demuxing and picture editing you should be able to do it pretty quickly. It's not as complicated as it seems at first.
Essentially, you demux the video stream into frames, going frame by frame.
You get the picture either in its original (e.g. JPEG) or raw form and push it into PIL/Pillow.
You do with it what you want, resizing etc... - PIL provides all necessary stuff.
And then you paste it into one big image at desired position.
That's all.
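The paste-into-one-big-image step can be sketched with Pillow alone. This is my own sketch: the frames are assumed to already be equally sized PIL images (extracted with PyAV or cv2 as discussed above), and make_grid is a hypothetical helper, not a library function:

```python
from PIL import Image

def make_grid(frames, cols=4, rows=4):
    """Paste cols*rows equally sized PIL images into one grid image."""
    w, h = frames[0].size
    grid = Image.new("RGB", (cols * w, rows * h))
    for idx, frame in enumerate(frames[:cols * rows]):
        # column = idx % cols, row = idx // cols
        grid.paste(frame, ((idx % cols) * w, (idx // cols) * h))
    return grid

# e.g. with 16 frames extracted from the video:
# grid = make_grid(frames)
# grid.save("grid.jpg")
```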
You can do that with OpenCV 3, its Python wrapper, and Numpy.
The first thing you need to do is capture the frames, then save them separately, and finally paste them all into a bigger matrix.
import numpy as np
import cv2

cap = cv2.VideoCapture(video_source)

# capture the 4 frames
_, frame1 = cap.read()
_, frame2 = cap.read()
_, frame3 = cap.read()
_, frame4 = cap.read()

# 'glue' the frames together using numpy vertical/horizontal stacks
big_frame = np.vstack((np.hstack((frame1, frame2)),
                       np.hstack((frame3, frame4))))

# show the four frames as a single 2x2 image
cv2.imshow('result', big_frame)
cv2.waitKey(1000)
To compile and install OpenCV3 and Numpy in Python3 you can follow this tutorial.
You can implement a kind of "control panel" from 4 different video sources with something like this:
import numpy as np
import cv2

cam1 = cv2.VideoCapture(video_source1)
cam2 = cv2.VideoCapture(video_source2)
cam3 = cv2.VideoCapture(video_source3)
cam4 = cv2.VideoCapture(video_source4)

while True:
    more1, frame_cam1 = cam1.read()
    more2, frame_cam2 = cam2.read()
    more3, frame_cam3 = cam3.read()
    more4, frame_cam4 = cam4.read()
    if not all([more1, more2, more3, more4]) or cv2.waitKey(1) & 0xFF in (ord('q'), ord('Q')):
        break
    big_frame = np.vstack((np.hstack((frame_cam1, frame_cam2)),
                           np.hstack((frame_cam3, frame_cam4))))
    # show the four frames as a single 2x2 image
    cv2.imshow('result', big_frame)
print('END. One or more sources ended.')
