Save video clip from longer video between two timestamps in Python cv2

I have an hour-long video, and I would like to save a clip between two timestamps -- say, 11:20-11:35. Is the best way to do this frame-by-frame, or is there a better way?

Here's the gist of what I did frame-by-frame. If there's a less lossy way to do it, I'd love to know! I know I could do it from the terminal using ffmpeg, but I'm curious about the best way to do it with cv2.
import cv2

def get_clip(input_filename, output_filename, start_sec, end_sec):
    # input and output videos are probably mp4
    vidcap = cv2.VideoCapture(input_filename)

    # math to find the starting and ending frame numbers
    fps = find_frames_per_second(vidcap)  # e.g. vidcap.get(cv2.CAP_PROP_FPS)
    start_frame = int(start_sec * fps)
    end_frame = int(end_sec * fps)
    vidcap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)

    # open video writer
    vidwrite = cv2.VideoWriter(output_filename, cv2.VideoWriter_fourcc(*'MP4V'),
                               fps, get_frame_size(vidcap))

    success, image = vidcap.read()
    frame_count = start_frame
    while success and (frame_count < end_frame):
        vidwrite.write(image)           # write frame into the output video
        success, image = vidcap.read()  # read the next frame from the input video
        frame_count += 1
    vidwrite.release()
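
For reference, a minimal way to call this for the 11:20-11:35 range in the question is to convert the timestamps to seconds first. This sketch assumes the helper functions find_frames_per_second and get_frame_size are filled in (e.g. with vidcap.get(cv2.CAP_PROP_FPS) and the frame width/height properties), and the file names are hypothetical:

# hypothetical usage of get_clip for the 11:20-11:35 range
start_sec = 11 * 60 + 20  # 11:20 -> 680 seconds
end_sec = 11 * 60 + 35    # 11:35 -> 695 seconds
get_clip("full_video.mp4", "clip.mp4", start_sec, end_sec)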

Related

How to skip a video in an nth increment in OpenCV

I've been reading the OpenCV documentation about reading and skipping through a video using a trackbar: with a step of, for example, 10, the video would play at 10s, 20s, 30s, ... n. But I can't seem to implement it the right way in Python. I just want to ask for your suggestion of an algorithm, and some snippets if possible. Thank you.
You can loop over the video files and open every video[i] with a step, like in this code:
import os
import cv2

video_files = os.listdir(folder)
step = 10  # every 10th video

# loop over the videos, showing every 10th one
for i in range(0, 100, step):
    video_name = video_files[i]
    cap = cv2.VideoCapture('../folder/' + video_name)
    num_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # loop over the video and show each frame
    for j in range(num_frames):
        ret, frame = cap.read()
        cv2.imshow('frame', frame)
        cv2.waitKey(0)
Now I have understood your comment.
This is code to show a frame only every 10s, 20s, etc.:
import cv2

cap = cv2.VideoCapture('../folder/' + video_name)
num_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = cap.get(cv2.CAP_PROP_FPS)

# loop over the video; show only one frame every 10 seconds, the rest are skipped
for i in range(num_frames):
    ret, frame = cap.read()
    if i % int(fps * 10) == 0:
        cv2.imshow('frame', frame)
        cv2.waitKey(0)
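
If decoding every frame just to skip most of them is too slow, another option (a minimal sketch, assuming a seekable file with the hypothetical name video.mp4) is to seek directly to the 10s, 20s, 30s, ... marks with CAP_PROP_POS_MSEC:

import cv2

# sketch: jump straight to each 10-second mark instead of reading every frame in between
cap = cv2.VideoCapture('video.mp4')  # hypothetical file name

t_ms = 0
while True:
    cap.set(cv2.CAP_PROP_POS_MSEC, t_ms)  # seek to the next 10-second mark
    ret, frame = cap.read()
    if not ret:                           # past the end of the video
        break
    cv2.imshow('frame', frame)
    cv2.waitKey(0)
    t_ms += 10 * 1000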

Inconsistent number of read video frames with OpenCV

I'm trying to extract the frames from the following video (disclosure: I'm not the owner of the video; it is taken from a public dataset). To get the number of video frames I do:
cap = cv2.VideoCapture(video_path)
cap.get(cv2.CAP_PROP_FRAME_COUNT) # This returns 32
To extract the frames I have this method:
def obtain_frames(video_path: str):
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        success, image = cap.read()
        if not success:
            break
        frames.append(image)
    return frames
Finally, I count the number of extracted video frames with:
frames = obtain_frames(video_path)
len(frames) # This returns 17
and I get an inconsistent number compared to cv2.CAP_PROP_FRAME_COUNT.
I'm also aware of this SO question, but still: when I display the video I can play it all the way to the end, and yet I can't read all the frames.
Any pointers/directions are welcome.
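
One thing worth checking: CAP_PROP_FRAME_COUNT is typically taken from the container's metadata, so it can disagree with the number of frames that actually decode. A minimal diagnostic sketch that compares the two counts for the same file:

import cv2

cap = cv2.VideoCapture(video_path)  # video_path as in the question
reported = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))  # count from container metadata

# count the frames that can actually be decoded
decoded = 0
while True:
    success, _ = cap.read()
    if not success:
        break
    decoded += 1

print("reported:", reported, "decoded:", decoded)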

Possible error with code and extracting frames with opencv at a specific fps?

I believe there is a possible error with this code, because it saves more images than it should when I calculate (fps * seconds). It also saves a NULL image at the end. I feel like this is the most basic code to save images, and I'm not sure what the error could be.
Also, I'd like an option to specify the fps rate at which I take the images. The fps I would like to use most is 1 fps. I couldn't find a feature in OpenCV that allows us to specify this, so I was considering the approach of somehow saving a frame at each second. Another approach could be saving all the frames, but iterating through and using only the frame at each second. I'm fairly new to these concepts. Are these approaches good, and how should I begin coding them? I'm not sure where to begin.
import cv2

def extractFrames():
    vidcap = cv2.VideoCapture('video.AVI')
    success, image = vidcap.read()
    count = 0
    while success:
        success, image = vidcap.read()
        cv2.imwrite("%d.jpg" % count, image)
        count += 1
This should do what you request:
import time
import cv2

def extractFrames(n):
    """
    Extract images from a live video at n fps,
    if n is smaller than the fps of the camera.
    """
    vidcap = cv2.VideoCapture('video.AVI')
    now = time.time()
    success, image = vidcap.read()
    real_fps = 1.0 / (time.time() - now)
    if n > real_fps:
        print('real fps is smaller than what you want, change to less than ' + str(real_fps))
        return
    count = 0
    while success:
        now = time.time()
        success, image = vidcap.read()
        cv2.imwrite("%d.jpg" % count, image)
        count += 1
        ch = 0xFF & cv2.waitKey(1)
        if ch == 27:  # ESC
            break
        time.sleep(max(0.0, (1.0 / n) - (time.time() - now)))
    cv2.destroyAllWindows()

extractFrames(1)
P.S.: I recommend using a library like Pygame for event loops and drawing instead of OpenCV's functions.
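
For a pre-recorded file rather than a live camera, another way to get the 1 fps behaviour the question asks about is the second approach the asker mentions: read every frame, but only save one per second of video time. A minimal sketch, assuming the same hypothetical 'video.AVI' file and a valid FPS value in its metadata:

import cv2

def extract_frames_at_1fps(path):
    # sketch: read every frame, but save only one frame per second of video time
    vidcap = cv2.VideoCapture(path)
    fps = int(round(vidcap.get(cv2.CAP_PROP_FPS)))

    frame_index = 0
    saved = 0
    while True:
        success, image = vidcap.read()
        if not success:
            break
        if frame_index % fps == 0:  # first frame of each second
            cv2.imwrite("%d.jpg" % saved, image)
            saved += 1
        frame_index += 1

extract_frames_at_1fps('video.AVI')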

How does QueryFrame work?

import cv

# create a window
winname = "myWindow"
win = cv.NamedWindow(winname, cv.CV_WINDOW_AUTOSIZE)

# load video file
invideo = cv.CaptureFromFile("video.avi")

# interval between frames in ms
fps = cv.GetCaptureProperty(invideo, cv.CV_CAP_PROP_FPS)
interval = int(1000.0 / fps)

# play video
while True:
    im = cv.QueryFrame(invideo)
    cv.ShowImage(winname, im)
    if cv.WaitKey(interval) == 27:  # ASCII 27 is the ESC key
        break

del invideo
cv.DestroyWindow(winname)
Above is a simple Python script using the OpenCV library to play a video file.
The only part I don't understand is im = cv.QueryFrame(invideo).
According to the OpenCV API, "QueryFrame grabs a frame from a camera or video file, decompresses it and returns it."
As I understand it, it just returns an image in IplImage format for one single frame, but how does it know which frame to return? The only parameter QueryFrame needs is the video capture, and there is no index to tell it which frame among the video's frames I want to retrieve. What if I need to play a video starting from the middle?
Each call to QueryFrame simply returns the next frame from the capture's current position and advances that position by one, so which frame you get is determined by how many frames you have already read.
You have to use cv.GetCaptureProperty with CV_CAP_PROP_FRAME_COUNT to get the number of frames in your video.
Divide it by 2 to find the middle.
Use QueryFrame until you reach this value.
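
A minimal sketch of that suggestion, in the same legacy cv API as the question (it assumes the same video.avi file; frames before the midpoint are grabbed and discarded):

import cv

invideo = cv.CaptureFromFile("video.avi")
num_frames = int(cv.GetCaptureProperty(invideo, cv.CV_CAP_PROP_FRAME_COUNT))
middle = num_frames // 2

# grab and discard frames until the capture's internal position reaches the middle
for _ in range(middle):
    cv.QueryFrame(invideo)

# the next call now returns the middle frame
im = cv.QueryFrame(invideo)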

Python JPEG to movie

I am looking for a way to concatenate a directory of image files (e.g., JPEGs) into a movie file (MOV, MP4, AVI) with Python. Ideally, this would also allow me to take multiple JPEGs from that directory and "paste" them into a grid which is one frame of a movie file. Which modules could achieve this?
You could use the Python interface of OpenCV, in particular a VideoWriter could probably do the job. From what I understand of the doc, the following would do what you want:
w = cvCreateVideoWriter(filename, -1, <your framerate>,
                        <your frame size>, is_color=1)
and, in a loop, for each file:
cvWriteFrame(w, frame)
Note that I have not tried this code, but I think that I got the idea right. Please tell me if it works.
Here's a cut-down version of a script I have that took frames from one video, modified them (that code is taken out), and wrote them to another video. Maybe it'll help.
import cv2

fourcc = cv2.cv.CV_FOURCC(*'XVID')
out = cv2.VideoWriter('out_video.avi', fourcc, 24, (704, 240))
c = cv2.VideoCapture('in_video.avi')

while True:
    _, f = c.read()
    if f is None:
        break
    f2 = f.copy()   # make a copy of the frame
    # do a bunch of stuff (missing)
    out.write(f2)   # write the frame to the output video

out.release()
cv2.destroyAllWindows()
c.release()
If you have a bunch of images, load them in a loop and just write one image after another to your video.
I finally got to a working version of the project that got me into this question.
Now I want to contribute the knowledge I gained.
Here is my solution for taking all the pictures in the current directory and converting them into a video, with each image centered on a black background, so it works for images of different sizes.
import glob
import cv2
import numpy as np

DESIRED_SIZE = (800, 600)
SLIDE_TIME = 5  # seconds each image is shown
FPS = 24

fourcc = cv2.VideoWriter.fourcc(*'X264')
writer = cv2.VideoWriter('output.avi', fourcc, FPS, DESIRED_SIZE)

for file_name in glob.iglob('*.jpg'):
    img = cv2.imread(file_name)

    # Resize image to fit into DESIRED_SIZE
    height, width, _ = img.shape
    proportion = min(DESIRED_SIZE[0] / width, DESIRED_SIZE[1] / height)
    new_size = (int(width * proportion), int(height * proportion))
    img = cv2.resize(img, new_size)

    # Center image in a black frame of DESIRED_SIZE
    target_size_img = np.zeros((DESIRED_SIZE[1], DESIRED_SIZE[0], 3), dtype='uint8')
    width_offset = (DESIRED_SIZE[0] - new_size[0]) // 2
    height_offset = (DESIRED_SIZE[1] - new_size[1]) // 2
    target_size_img[height_offset:height_offset + new_size[1],
                    width_offset:width_offset + new_size[0]] = img

    for _ in range(SLIDE_TIME * FPS):
        writer.write(target_size_img)

writer.release()
Is it actually important to you that the solution uses Python and produces a movie file? Or are these just your expectations of what a solution would look like?
If you just want to play back a bunch of JPEG files as a movie, you can do it without using Python or cluttering up your computer with .avi/.mov/.mp4 files by going to vidmyfigs.com and using your mouse to select image files from your hard drive. The "movie" plays back in your web browser.
