Extract images from live video feed of IP camera - Python

I want to extract an image every 5 minutes from the live video feed of an IP camera using OpenCV. I have the code below to extract frames from a video file, but I don't know how to do the same for a live stream from an IP camera.
The code below grabs an image every 5 seconds from a valid video file:
import cv2

videoFile = "folder-path"
cap = cv2.VideoCapture(videoFile)
success = True
count = 0
while success:
    # Jump to the desired position (in milliseconds) and grab a frame
    cap.set(cv2.CAP_PROP_POS_MSEC, count * 1000)
    success, image = cap.read()
    if success:
        cv2.imwrite("file path/frame%d.jpg" % count, image)
    count = count + 5

Use cv2.VideoCapture() with the index of the camera you care about. If you have only one camera, cv2.VideoCapture(0) will do the trick. If you have several, increment the index until you are accessing the correct camera. For an IP camera, you can instead pass the camera's stream URL (e.g. an RTSP or HTTP address) to cv2.VideoCapture().
This code will capture a frame from camera 0 every 5 minutes:
camera = cv2.VideoCapture(0)  # open a connection to the camera
ret, frame = camera.read()    # read a single frame
cv2.waitKey(300000)           # wait 5 minutes (only pauses while a HighGUI window is open)

Related

Duration of videoWriter being controlled with fps and frame Count (OpenCV & Python)

When I use the code below, it gives me a video from the stream with the number of frames and FPS I choose. However, the video length becomes duration = (1/fps)*frameCount. I guess mp4 compresses the video so that the longer duration does not make the file bigger, which is a good thing. Nevertheless, is there a way to get a video with a shorter duration?
EDIT: An example scenario:
What I want is to keep 1 frame for every minute of streaming, and to end up with, let's say, a 1 FPS video containing 1000 frames. In this scenario the actual streaming duration is 1000 minutes, but I want a 1 FPS video that plays for 1000 seconds.
Here is the code I am using below:
import os
import math
import cv2 as cv
import numpy as np

cap = cv.VideoCapture('rtsp://**.***.**.*:*****/***********')
# Frame sizes
down_width = 640
down_height = 480
down_points = (down_width, down_height)
# Stream FPS and desired FPS
fpsSystem = cap.get(cv.CAP_PROP_FPS)  # 25 in my case
fps = 1.0/60
# Frame count parameters
frameRead = 0  # number of successfully read frames
DesFrameCount = 60*6  # desired frame count
takeFrameTime = 1/fps  # a frame will be stored every takeFrameTime seconds
frameCycle = 0  # every frameCycle-th frame will be stored
frameWritten = 0
success = True
# Define the codec and create the VideoWriter object
fourcc = cv.VideoWriter_fourcc(*'mp4v')
# Video name
randomName = np.random.randint(0, 1000)
out = cv.VideoWriter('output' + str(randomName) + ".mp4", fourcc, fps, (down_width, down_height))
while success:
    success, frame = cap.read()
    if not success:
        print("Can't receive frame (stream end?). Exiting ...")
        break
    else:
        frameRead += 1
        frameCycle += 1
        # Frame resizing
        frame = cv.resize(frame, down_points, interpolation=cv.INTER_LINEAR)
        # Save the particular frame desired to be written according to frame parameters
        if frameCycle == math.floor(fpsSystem * takeFrameTime):
            frameCycle = 0
            out.write(frame)
            frameWritten += 1
    # Stop the loop when the desired number of frames is obtained
    if cv.waitKey(1) == ord('q') or (frameWritten == DesFrameCount):
        break

# Release everything if the job is finished
cap.release()
out.release()
cv.destroyAllWindows()
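The duration mismatch comes from passing the sampling rate (1/60) as the writer's FPS. The fps argument of VideoWriter only sets the playback rate of the resulting file; the sampling interval is already enforced separately by the frameCycle check. A small sketch of the distinction (variable names here are illustrative):

```python
def playback_duration(frame_count, playback_fps):
    # How long the written file plays for, in seconds
    return frame_count / playback_fps

sample_interval = 60.0  # keep one frame per minute of streaming
playback_fps = 1.0      # play the result back at 1 frame per second

# With 1000 sampled frames the file plays for 1000 s, not 1000 min
print(playback_duration(1000, playback_fps))  # 1000.0
```

In the question's code this would mean passing 1.0 (rather than fps = 1.0/60) to cv.VideoWriter, while keeping takeFrameTime at 60 seconds for frame selection.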

Inconsistent number of read video frames with OpenCv

I'm trying to extract the frames from the following video (Disclosure: I'm not the owner of the video, the video is taken from a public dataset). To get the number of video frames I do:
cap = cv2.VideoCapture(video_path)
cap.get(cv2.CAP_PROP_FRAME_COUNT) # This returns 32
To extract the frames I have this method:
def obtain_frames(video_path: str):
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        success, image = cap.read()
        if not success:
            break
        frames.append(image)
    return frames
Finally, I count the number of extracted video frames with:
frames = obtain_frames(video_path)
len(frames) # This returns 17
and I get an inconsistent number compared to cv2.CAP_PROP_FRAME_COUNT.
I'm also aware of this SO question but still, when I display the video I can go all through the end and yet I can't read all the frames.
Any pointers/directions are welcome.

How to crop a video (such as a zoom call) into multiple files?

I have a conference call video with different people's tiles arranged on a grid.
Example:
gallery view zoom
Can I crop every video tile to a separate file using python or nodejs?
Yes, you can achieve that using the OpenCV library:
1. Read the video in OpenCV using the VideoCapture API, noting the frame rate.
2. Parse through each frame and crop it.
3. Write each cropped frame to a video using OpenCV's VideoWriter.
Here is example code using (640, 480) as the new dimensions:
import cv2

cap = cv2.VideoCapture(<video_file_name>)
fps = cap.get(cv2.CAP_PROP_FPS)
out = cv2.VideoWriter('<output video file name>', -1, fps, (640, 480))
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # Crop the region of interest (x, y, w, h are the tile's coordinates)
    crop_frame = frame[y:y+h, x:x+w]
    # Write the cropped frame
    out.write(crop_frame)
# Release reader and writer after parsing all frames
cap.release()
out.release()
Here's the code (tested). It works by initialising a number of video outputs, then, for each frame of the input video, cropping each region of interest (roi) and writing it to the relevant output video. You might need to make tweaks depending on the input video dimensions, number of tiles, offsets, etc.
import numpy as np
import cv2

cap = cv2.VideoCapture('in.mp4')
ret, frame = cap.read()
(h, w, d) = np.shape(frame)
horiz_divisions = 5  # Number of tiles stacked horizontally
vert_divisions = 5   # Number of tiles stacked vertically
divisions = horiz_divisions * vert_divisions  # Total number of tiles
seg_h = int(h / vert_divisions)   # Tile height
seg_w = int(w / horiz_divisions)  # Tile width
# Initialise the output videos
outvideos = [None] * divisions
for i in range(divisions):
    outvideos[i] = cv2.VideoWriter('out{}.avi'.format(str(i)), cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), 10, (seg_w, seg_h))
# Main loop
while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        vid = 0  # video counter
        for i in range(vert_divisions):
            for j in range(horiz_divisions):
                # Get the coordinates (top-left corner) of the current tile
                row = i * seg_h
                col = j * seg_w
                roi = frame[row:row+seg_h, col:col+seg_w, 0:3]  # Region of interest
                outvideos[vid].write(roi)
                vid += 1
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
# Release all the objects
cap.release()
for i in range(divisions):
    outvideos[i].release()
# Release everything if the job is finished
cv2.destroyAllWindows()
Hope this helps!

Python and OpenCV - Determine Correct Frame Rate of Captured Streaming Video

I’m using OpenCV in a Python environment to capture a video stream from an external source, display the video, and write the video to a file. The video stream can come from different video sources. I need to write the video using the exact same frame rate as the incoming video (e.g., 60 fps, 29.97 fps, 30 fps, etc.).
Because the streaming video does not have the frame rate embedded in the stream, I need to determine the correct frame rate. I have tried suggestions by others of sampling some frames then divide the number of captured frames by the elapsed time. For me, this results in a frame rate that is close, but not close enough.
When I capture the video with VLC Media Player, VLC determines the frame rate correctly.
Here is the Python script that I’m currently using. It buffers 500 frames to compute the frame rate and then starts writing the video while continuing to capture (with a 500-frame delay). (VLC capture/write doesn’t have a noticeable delay.)
Foremost importance to me – correctly determine the frame rate of the incoming video stream. Second importance – I want to write the video with minimum delay after capture.
Any suggestions?
import numpy as np
import cv2
from time import time

cap = cv2.VideoCapture(0)
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
kount = 0
delay = 500
buffer = []
start = time()
while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        kount += 1
        buffer.append(frame)
        cv2.imshow('frame', frame)
        if kount >= delay:
            if kount == delay:
                end = time()
                fps = kount / (end - start)
                out = cv2.VideoWriter('output.avi', fourcc, fps, (frame_width, frame_height))
            out.write(buffer[kount-delay])
        if cv2.waitKey(1) & 0xFF == ord('q'):
            for i in range(kount - delay, kount):
                out.write(buffer[i])
            break
    else:
        break
print("Frames Per Second = ", fps)
cap.release()
out.release()
cv2.destroyAllWindows()
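One alternative to dividing frame count by elapsed time is to timestamp each frame as it arrives and take the median inter-frame interval; the median shrugs off the occasional stall that skews the mean. A sketch with synthetic timestamps standing in for time.time() samples:

```python
import statistics

def estimate_fps(timestamps):
    # Estimate the frame rate from per-frame arrival times (in seconds).
    # The median interval is robust to the occasional delayed frame.
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return 1.0 / statistics.median(intervals)

# A 30 fps stream where one frame arrives 0.2 s late:
ts = [i / 30.0 + (0.2 if i > 50 else 0.0) for i in range(100)]
print(round(estimate_fps(ts), 2))  # 30.0
```

By contrast, dividing the same 100 frames by the elapsed 3.5 s gives roughly 28.6 fps, which matches the "close but not close enough" behaviour described above.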

How do I take the median of 5 grayscale frames?

I am taking input from a video and I want to take the median of the first 5 frames so that I can use it as the background image for motion detection by frame differencing.
I also want a time condition: say, if motion is not detected, recalculate the background; otherwise wait t seconds. I am new to OpenCV and don't know how to do this. Please help.
I also want to capture the video at 1 fps, but this does not work. Here is the code I have:
import cv2
BLUR_SIZE = 3
NOISE_CUTOFF = 12
cam = cv2.VideoCapture('gh10fps.mp4')
cam.set(3, 640)
cam.set(4, 480)
cam.set(cv2.cv.CV_CAP_PROP_FPS, 1)
fps=cam.get(cv2.cv.CV_CAP_PROP_FPS)
print "Current FPS: ",fps
If you really want the median of the first 5 frames, then the following should do what you are looking for:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
frames = []
for _ in range(5):
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frames.append(gray)
median = np.median(frames, axis=0).astype(dtype=np.uint8)
cv2.imshow('frame', median)
cv2.waitKey(0)
cap.release()
cv2.destroyAllWindows()
Note, this is just taking the source from a webcam as an example.
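For the motion-detection part of the question, here is a minimal sketch of differencing a grayscale frame against that median background. The cutoff echoes the question's NOISE_CUTOFF constant, and the changed-pixel fraction is an illustrative threshold:

```python
import numpy as np

def motion_detected(gray_frame, background, noise_cutoff=12, min_changed=0.01):
    # Pixels whose absolute difference from the background exceeds the
    # noise cutoff count as changed; flag motion when enough pixels changed.
    diff = np.abs(gray_frame.astype(np.int16) - background.astype(np.int16))
    changed = np.count_nonzero(diff > noise_cutoff)
    return changed / diff.size >= min_changed

background = np.full((480, 640), 100, dtype=np.uint8)
still = background.copy()
moving = background.copy()
moving[:100, :100] = 200  # a bright object enters the top-left corner
print(motion_detected(still, background), motion_detected(moving, background))  # False True
```

The casts to int16 matter: subtracting uint8 arrays directly would wrap around instead of going negative. For the "recalculate background when no motion" condition, the median computation above can simply be re-run whenever this function has returned False for t seconds.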
