I have working code that applies background subtraction to a video, but it won't properly write the background-subtracted frames to the output file. I get the .avi file with the filename I specified in cv2.VideoWriter, but it doesn't seem to write any of the frames I pass to it:
import cv2
import numpy as np
cap = cv2.VideoCapture('traffic-mini.mp4')
fgbg = cv2.createBackgroundSubtractorMOG2()
cv2.startWindowThread()
fourcc = cv2.VideoWriter_fourcc('M','J','P','G')
out = cv2.VideoWriter('test_output.avi',fourcc, 20.0, (640,480))
while True:
    ret, frame = cap.read()
    if ret == True:
        frame = fgbg.apply(frame)
        out.write(frame)
        cv2.imshow('fg', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

cap.release()
out.release()
cv2.destroyAllWindows()
for i in range(1, 5):
    cv2.waitKey(1)
The output video test_output.avi is always 6 KB and contains none of the frames I pass in. What am I missing? Thanks in advance.
Try this:
# Add a 0 to the end of the VideoWriter call, after (640, 480)
out = cv2.VideoWriter('test_output.avi', fourcc, 20.0, (640, 480), 0)
while True:
    ret, frame = cap.read()
    if ret == True:
        frame = cv2.resize(frame, (640, 480))
        frame = fgbg.apply(frame)
        out.write(frame)
        cv2.imshow('fg', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
The reason is that to write black-and-white frames you need the 0 at the end to tell OpenCV that there is no colour channel involved.
You may have to swap the two numbers in the resize call, as I can't remember offhand which is the width and which is the height of the frame, but the point is that the frame size must match for both your output and your input. Also, a hint for background subtraction is to convert the video to grayscale first, like
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
It's because the size of frame is not (640,480). Instead of
out = cv2.VideoWriter('test_output.avi',fourcc, 20.0, (640,480))
try
out = cv2.VideoWriter('test_output.avi', fourcc, 20.0, (int(cap.get(3)), int(cap.get(4))))  # 3 = CAP_PROP_FRAME_WIDTH, 4 = CAP_PROP_FRAME_HEIGHT
MNM's proposed solution - adding 0 as the last parameter of VideoWriter - works well on my end, using OpenCV 3.4.5 on Raspbian Stretch (Raspberry Pi 3).
This is although the official documentation - https://docs.opencv.org/3.4.5/dd/d9e/classcv_1_1VideoWriter.html - states: "isColor: If it is not zero, the encoder will expect and encode color frames, otherwise it will work with grayscale frames (the flag is currently supported on Windows only)." Evidently it is applicable to other OSes as well.
Related
I have a conference call video with different people's tiles arranged on a grid.
Example: (image of a Zoom gallery view)
Can I crop every video tile to a separate file using Python or Node.js?
Yes, you can achieve that using the OpenCV library:
Read the video in OpenCV using the VideoCapture API. Note down the framerate while reading.
Parse through each frame and crop it.
Write the cropped frame to a video using the OpenCV VideoWriter.
Here is example code using (640, 480) as the new dimensions (x, y, w, h are the crop coordinates you choose):

cap = cv2.VideoCapture(<video_file_name>)
fps = cap.get(cv2.CAP_PROP_FPS)
out = cv2.VideoWriter('<output video file name>', -1, fps, (640, 480))
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # crop the frame; the cropped size (w, h) must match the VideoWriter size
    crop_frame = frame[y:y+h, x:x+w]
    # write the cropped frame
    out.write(crop_frame)
# Release reader and writer after parsing all frames
cap.release()
out.release()
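The slicing used for the crop can be checked on a dummy array without OpenCV. The x, y, w, h values here are made up for illustration; the point is that numpy indexes rows (y) first and columns (x) second, while VideoWriter takes (width, height):

```python
import numpy as np

# Dummy 480x640 BGR "frame", standing in for the array cap.read() returns
frame = np.zeros((480, 640, 3), dtype=np.uint8)

# Hypothetical crop position and size
x, y, w, h = 100, 50, 320, 240

# Rows (y) come first, columns (x) second
crop_frame = frame[y:y+h, x:x+w]
print(crop_frame.shape)  # (240, 320, 3) -> pass (w, h) = (320, 240) to VideoWriter
```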
Here's the code (tested). It works by initialising a number of video outputs, then, for each frame of the input video, cropping each region of interest (roi) and writing it to the relevant output video. You might need to make tweaks depending on the input video dimensions, number of tiles, offsets, etc.
import numpy as np
import cv2

cap = cv2.VideoCapture('in.mp4')
ret, frame = cap.read()
(h, w, d) = np.shape(frame)

horiz_divisions = 5  # Number of tiles stacked horizontally
vert_divisions = 5   # Number of tiles stacked vertically
divisions = horiz_divisions * vert_divisions  # Total number of tiles
seg_h = int(h / vert_divisions)   # Tile height
seg_w = int(w / horiz_divisions)  # Tile width

# Initialise the output videos
outvideos = [0] * divisions
for i in range(divisions):
    outvideos[i] = cv2.VideoWriter('out{}.avi'.format(str(i)),
                                   cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), 10, (seg_w, seg_h))

# main code
while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        vid = 0  # video counter
        for i in range(vert_divisions):
            for j in range(horiz_divisions):
                # Get the coordinates (top left corner) of the current tile
                row = i * seg_h
                col = j * seg_w
                roi = frame[row:row+seg_h, col:col+seg_w, 0:3]  # Copy the region of interest
                outvideos[vid].write(roi)
                vid += 1
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

# Release all the objects
cap.release()
for i in range(divisions):
    outvideos[i].release()

# Release everything if job is finished
cv2.destroyAllWindows()
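The tile arithmetic can be sanity-checked without OpenCV; the 1000x500 frame size below is made up, but the grid is the same 5x5 one as in the code:

```python
# Hypothetical 1000x500 frame, split into a 5x5 grid as above
h, w = 500, 1000
horiz_divisions = 5
vert_divisions = 5
seg_h = int(h / vert_divisions)   # Tile height: 100
seg_w = int(w / horiz_divisions)  # Tile width: 200

# Top-left corner of the last tile (i = j = 4)
row = 4 * seg_h
col = 4 * seg_w
print(row, col)  # 400 800 -> the last tile covers rows 400-499 and columns 800-999
```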
Hope this helps!
I'm getting an error when I run my code. I don't understand what is happening, but I think it occurs when the program finishes, because I do get the result I want: the existing video is converted to grayscale and saved.
cv2.error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
cap = cv2.VideoCapture('videos/output.avi')
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('results/output.avi', fourcc, 20.0, (640, 480))

while cap.isOpened():
    _, frame = cap.read()
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    out.write(frame)
    # Key events
    key = cv2.waitKey(1)
    if key == 27:  # esc
        break

cap.release()
cv2.destroyAllWindows()
Thank you!
There may be a case where at least one of the frames in your video wasn't read in properly. That's why the cv2.cvtColor method is throwing an error: the frame data you are providing is empty.
You should use the first output of cv2.VideoCapture.read() to make sure the video frame was properly captured before writing it to file. The first output is a flag that tells you whether the current frame was read in successfully. You'll also need to handle the case where we reach the end of the video; the flag will be False then, so we should exit the loop. Finally, since it's your intent to write grayscale frames, there is an optional fifth parameter in cv2.VideoWriter called isColor; setting it to False tells the writer to expect single-channel grayscale frames - exactly what cv2.cvtColor produces here.
One additional thing I'll recommend is to infer the frame width and height from the video file instead of setting it yourself. This way the input and output resolution is the same. Finally, don't forget to release the cv2.VideoWriter object when you're finished and I've added an additional check for the video file to see if it has properly opened:
import numpy as np
import cv2
import sys

cap = cv2.VideoCapture('videos/output.avi')

# Check to see if the video has properly opened
if not cap.isOpened():
    print("File could not be opened")
    sys.exit(1)

fourcc = cv2.VideoWriter_fourcc(*'XVID')
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))  # Get the frame width and height
frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Change
out = cv2.VideoWriter('results/output.avi', fourcc, 20.0, (frame_width, frame_height), isColor=False)

while True:
    ret, frame = cap.read()  # New
    if not ret:  # New
        break  # Get out if we don't read a frame successfully
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    out.write(frame)
    # Key events
    key = cv2.waitKey(1)
    if key == 27:  # esc
        break

cap.release()
out.release()  # New
cv2.destroyAllWindows()
As a minor note, you don't have any windows being displayed, so cv2.destroyAllWindows() is superfluous here. Consider removing it from your code.
This answer takes another approach (here you can also extract a particular colour by changing the corresponding weights in front of the B, G, R values):
import cv2

cap = cv2.VideoCapture('videos/output.avi')
frame_width = int(cap.get(3))   # finds the frame width automatically
frame_height = int(cap.get(4))  # finds the frame height automatically
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('results/outpy.avi', fourcc, 10, (frame_width, frame_height))

while cap.isOpened():  # True while the file is successfully opened
    ret, frame = cap.read()
    if ret == True:  # False means the file ended
        # grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # The grey matrix has a different shape than the frame matrix;
        # that's why the output files were blank.
        # To circumvent BGR2GRAY, manually apply the normalized R, G, B weights:
        frame[:, :, 0] = 0.114*frame[:, :, 0] + 0.587*frame[:, :, 1] + 0.299*frame[:, :, 2]
        # for extracting red, make 0.299 into 1.0 and the others 0.0; same goes for other colours
        frame[:, :, 1] = frame[:, :, 0]  # make the G and R channels the same as B
        frame[:, :, 2] = frame[:, :, 0]
        # now frame is a 3-channel grayscale image: identical to the cv2.cvtColor result,
        # except every pixel holds the same value in all three channels
        out.write(frame)
    else:
        break

cap.release()
out.release()
cv2.destroyAllWindows()
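The weighted sum above can be sanity-checked on a tiny hand-made array (the pixel values here are made up; the weights are the same ones used in the loop):

```python
import numpy as np

# A 1x2 BGR image: one pure-blue pixel and one white pixel
frame = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.uint8)

# The same normalized B, G, R weights as in the answer
gray = (0.114*frame[:, :, 0] + 0.587*frame[:, :, 1] + 0.299*frame[:, :, 2]).astype(np.uint8)
print(gray)  # blue comes out dark (~29), white stays bright (~255)
```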
I'm using OpenCV and Python to take images and video. I would like OpenCV to take multiple pictures and videos when I press specific letters on the keyboard. Currently I can only take multiple images; I can't take a video at the same time. This is my current code:
import numpy as np
import cv2, time

video = cv2.VideoCapture(0)
a = 0
i = 0

while True:
    a = a + 1
    check, frame = video.read()
    cv2.imshow("Capturing", frame)
    key = cv2.waitKey(1)
    if key == ord('r'):
        fourcc = cv2.VideoWriter_fourcc(*'XVID')
        out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))
        out.write(frame)
    if key == ord('x'):
        i += 1
        cv2.imwrite('image' + str(i) + '.jpg', frame)
        cv2.imshow("Hasil Capture", frame)
        print('taking pictures')
    if key == ord('q'):
        break

print(a)
video.release()
cv2.destroyAllWindows()
I am trying to write an OpenCV program where I break a video into frames and compare two consecutive frames; if both are the same I reject the frame, otherwise I append the frame to an output file.
How can I achieve it?
OpenCV 2.4.13, Python 2.7
The following example captures frames from the first camera connected to your system, compares each frame to the previous frame, and when different, the frame is added to a file. If you sit still in front of the camera, you might see the diagnostic 'no change' message printed if you run the program from a console terminal window.
There are a number of ways to measure how different one frame is from another. For simplicity we have used the average difference, pixel by pixel, between the new frame and the previous frame, compared to a threshold.
Note that frames are returned as numpy arrays by the openCV read function.
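One caveat with the pixel-by-pixel difference: frames come back as uint8 arrays, and subtracting two uint8 arrays wraps around rather than going negative, which distorts the measured difference. A minimal demonstration:

```python
import numpy as np

a = np.array([10], dtype=np.uint8)
b = np.array([20], dtype=np.uint8)

# uint8 subtraction wraps around instead of going negative
print(np.absolute(a - b))  # [246], not [10]

# Casting to a signed type first gives the true per-pixel difference
print(np.absolute(a.astype(int) - b.astype(int)))  # [10]
```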
import numpy as np
import cv2

interval = 100
fps = 1000. / interval
camnum = 0
outfilename = 'temp.avi'
threshold = 100.

cap = cv2.VideoCapture(camnum)
ret, frame = cap.read()
height, width, nchannels = frame.shape

fourcc = cv2.VideoWriter_fourcc(*'MJPG')
out = cv2.VideoWriter(outfilename, fourcc, fps, (width, height))

while True:
    # previous frame
    frame0 = frame
    # new frame
    ret, frame = cap.read()
    if not ret:
        break
    # how different is it? (cast to int first: uint8 subtraction wraps around)
    if np.sum(np.absolute(frame.astype(int) - frame0.astype(int))) / np.size(frame) > threshold:
        out.write(frame)
    else:
        print('no change')
    # show it
    cv2.imshow('Type "q" to close', frame)
    # check for keystroke
    key = cv2.waitKey(interval) & 0xFF
    # exit if so commanded
    if key == ord('q'):
        print('received key q')
        break

# When everything is done, release the capture and the writer
cap.release()
out.release()
print('VideoDemo - exit')
As a task for school, our group has to create an application that knows when a goal is scored, that is, when a ball-shaped object passes a line.
First we are attempting to input a video, get OpenCV to track the ball, and then to output it as a video.
I have put together a bunch of code snippets that I found on Stack Overflow, but it doesn't work.
I am creating a new post because all the other related threads are either C++ or use colour detection instead of the shape detection that we use. I also can't find a clear answer on outputting the video file when it was turned into a series of images.
Following is the code that I have so far:
import cv2
import numpy as np

cap = cv2.VideoCapture('bal.mp4')
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output1.avi', fourcc, 20.0, (640, 480))

while(1):
    # Take each frame
    ret, frame = cap.read()
    if ret == True:
        if ret == 0:
            break
        frame = cv2.medianBlur(frame, 5)
        cimg = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        circles = cv2.HoughCircles(cimg,3,1,20,param1=50,param2=30,minRadi$
        if circles == None:
            print "NoneType"
            break
        circles = np.uint16(np.around(circles.astype(np.double), 3))
        for i in circles[0, :]:
            # draw the outer circle
            cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
            # draw the center of the circle
            cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
        cv2.imwrite('test.jpg', cimg)
        out.write(cimg)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
out.release()
cv2.destroyAllWindows()
We get working images, but the video is unplayable with VLC or any other media player.
This is an image from the program: (screenshot omitted)
The issue now is turning it into a playable video.
Thanks in advance.
Not sure if you got this working in the end, but changing the output to MP4 worked for me on my Mac:
frame_width = int(cap.get(3))
frame_height = int(cap.get(4))
out = cv2.VideoWriter('static/video/outpy.mp4', 0x7634706d, 20, (frame_width, frame_height))
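For what it's worth, the magic number 0x7634706d is just the FourCC 'mp4v' packed as a little-endian 32-bit integer, the same value cv2.VideoWriter_fourcc(*'mp4v') produces. A sketch of the packing (the helper name here is made up):

```python
def fourcc_code(c1, c2, c3, c4):
    # Pack four characters into a 32-bit little-endian FourCC code
    return ord(c1) | (ord(c2) << 8) | (ord(c3) << 16) | (ord(c4) << 24)

print(hex(fourcc_code(*'mp4v')))  # 0x7634706d
```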