I want to crop each frame of this video and save all the cropped images so I can use them as input for focus stacking software, but my approach:
cap = cv2.VideoCapture(r"C:\Users\HP\Downloads\VID_20221128_112556.mp4")
ret, frames = cap.read()
count=0
for img in frames:
    stops.append(img)
    cv2.imwrite("../stack/croppedframe%d.jpg" % count, img[500:1300, 100:1000])
    print(count)
    count += 1
Throws this error:
error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\imgcodecs\src\loadsave.cpp:801: error: (-215:Assertion failed) !_img.empty() in function 'cv::imwrite'
What can I do?
If you iterate over the frames variable with a for loop, you are iterating over the rows of a single image along the y-axis, not over the frames of the video. If you use a while loop and read the next frame on each iteration, the code will work. You can try the example below.
import cv2

cap = cv2.VideoCapture(r"C:\Users\HP\Downloads\VID_20221128_112556.mp4")
stops = []
ret, frame = cap.read()
count = 0
while ret:
    stops.append(frame)
    cv2.imwrite("../stack/croppedframe%d.jpg" % count, frame[500:1300, 100:1000])
    print(count)
    count += 1
    ret, frame = cap.read()
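To make the pitfall concrete, here is a minimal sketch (using a dummy NumPy array in place of a real decoded frame) showing that iterating over a single frame walks down its rows rather than over further frames:

import numpy as np

# Dummy 720x1280 BGR "frame"; a frame returned by cap.read() has the same layout.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)

rows = [row for row in frame]       # iterating the array yields one row per step
print(len(rows), rows[0].shape)     # 720 rows, each of shape (1280, 3)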
Related
While extracting frames from a video using OpenCV, how can I set a particular value at which the frame extraction will occur?
I have seen many sample codes for image extraction, but none of them show an option for the frame rate.
There are many ways to extract frames; one is to use ffmpeg.
The other is the code below. Note that you cannot use just any interval value, which you will understand when you try different values.
Change the directories as per your system.
import math
import cv2

count = 0
videoFile = "train/train.mp4"
cap = cv2.VideoCapture(videoFile)
frameRate = cap.get(cv2.CAP_PROP_FPS)  # frame rate of the video
while cap.isOpened():
    frameId = cap.get(cv2.CAP_PROP_POS_FRAMES)  # index of the frame about to be read
    ret, frame = cap.read()
    if ret != True:
        break
    elif frameId % math.floor(frameRate) == 0:  # keep roughly one frame per second
        filename = "train/frame2/frame%d.jpg" % count
        count += 1
        cv2.imwrite(filename, frame)
cap.release()
print("Done!")
I am trying to slice a video capture frame for image data collection of hand gestures. I thought an easy place to start would be to slice a video frame and write the frames from a specific part of the frame to a directory, but I keep getting an assertion error when I try to so much as show the sliced frame. I've searched for other solutions, but I can't figure it out.
The code is
import numpy as np
import cv2

t, r, b, l = 250, 500, 500, 50
fgbg = cv2.BackgroundSubtractorMOG2()
cam = cv2.VideoCapture(0)
while cam.isOpened():
    ret, frame = cam.read()
    cv2.rectangle(frame, (l, t), (r, b), (255, 0, 0), 2)
    cv2.imshow('frame', frame)
    (h, w) = frame.shape[:2]
    roi = frame[t:b, r:l]
    cv2.imshow('roi', roi)  # giving error
    if 0xFF & cv2.waitKey(1) == ord('q'):
        break
cam.release()
cv2.destroyAllWindows()
I've also tried:
cv2.imshow('roi', roi.astype('uint8'))
and
cv2.imshow('roi',np.asarray(roi,dtype='uint8'))
All are giving me the same error
I keep getting this error:
error: OpenCV(4.5.1) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-kh7iq4w7\opencv\modules\highgui\src\window.cpp:376: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'
Edit: roi = frame[t:b, r:l]
keeps returning an empty array. How can I slice the ndarray frame so that I get a proper sliced array?
Any help would be really appreciated. Thank you
So it looks like the issue was entirely about the way it was being sliced:
My code was:
roi = frame[t:b, r:l], where the slice should be [rows, columns].
My mistake was slicing the columns right to left instead of left to right.
roi = frame[t:b, l:r] gives the sliced frame as desired.
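As a quick sanity check (the array and the coordinates are only illustrative), NumPy slicing on a frame is [y1:y2, x1:x2], so the column bounds have to run left to right:

import numpy as np

# Dummy 480x640 BGR "frame" standing in for a real capture.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
t, b, l, r = 250, 400, 50, 500

bad_roi = frame[t:b, r:l]    # columns given right to left: empty slice
good_roi = frame[t:b, l:r]   # rows (top:bottom), then columns (left:right)
print(bad_roi.shape, good_roi.shape)    # (150, 0, 3) vs (150, 450, 3)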
I'm getting an error when I run my code. I don't understand what is happening, but I think it occurs when the program finishes, because I already get the result I want: the existing video is converted to grayscale and saved.
cv2.error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
import cv2

cap = cv2.VideoCapture('videos/output.avi')
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('results/output.avi', fourcc, 20.0, (640, 480))

while cap.isOpened():
    _, frame = cap.read()
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    out.write(frame)

    # Key events
    key = cv2.waitKey(1)
    if key == 27:  # esc
        break

cap.release()
cv2.destroyAllWindows()
Thank you!
There may be a case where at least one of the frames in your video wasn't read in properly. That's why the cv2.cvtColor method is throwing an error: the frame data you are providing is empty.
You should use the first output of cv2.VideoCapture.read() to make sure the video frame was properly captured before writing it to file. The first output is a flag that indicates whether the current frame was read in successfully. You also need to handle reaching the end of the video: in that case the flag will be False, so you should exit the loop. Finally, if your intent is to write grayscale frames, there is an optional fifth parameter in cv2.VideoWriter called isColor; setting it to False lets you write the single-channel grayscale frames produced by cv2.cvtColor directly, without converting them back to BGR first.
One additional recommendation is to infer the frame width and height from the video file instead of setting them yourself, so that the input and output resolutions match. Finally, don't forget to release the cv2.VideoWriter object when you're finished. I've also added a check to see whether the video file opened properly:
import numpy as np
import cv2
import sys

cap = cv2.VideoCapture('videos/output.avi')

# Check to see if the video has properly opened
if not cap.isOpened():
    print("File could not be opened")
    sys.exit(1)

fourcc = cv2.VideoWriter_fourcc(*'XVID')
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))    # Get the frame width and height
frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Change - write grayscale frames at the input resolution
out = cv2.VideoWriter('results/output.avi', fourcc, 20.0, (frame_width, frame_height), isColor=False)

while True:
    ret, frame = cap.read()  # New
    if not ret:              # New
        break                # Get out if we don't read a frame successfully

    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    out.write(frame)

    # Key events
    key = cv2.waitKey(1)
    if key == 27:  # esc
        break

cap.release()
out.release()  # New
cv2.destroyAllWindows()
As a minor note, you don't have any windows being displayed, so cv2.destroyAllWindows() is superfluous here. Consider removing it from your code.
This answer takes another approach. (Here, you can also extract a different colour by changing the corresponding weights in front of the B, G, R values.)
import cv2

cap = cv2.VideoCapture('videos/output.avi')
frame_width = int(cap.get(3))    # finds the frame width automatically
frame_height = int(cap.get(4))   # finds the frame height automatically
out = cv2.VideoWriter('results/outpy.avi', cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), 10, (frame_width, frame_height))

while cap.isOpened():  # value is true if the file is successfully opened.
    ret, frame = cap.read()
    if ret == True:  # checks if the return value is True or False. False means the file ended.
        # grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # the grey matrix has a different shape than the frame matrix,
        # which is why the output files were blank.
        # To circumvent RGB2GRAY, the "normalized" R, G, B weights are applied manually.
        frame[:, :, 0] = 0.114*frame[:, :, 0] + 0.587*frame[:, :, 1] + 0.299*frame[:, :, 2]  # multiplying normalized coefficients with B, G, R
        # for extracting red, make 0.299 equal to 1.0 and the others 0.0; same goes for other colours.
        frame[:, :, 1] = frame[:, :, 0]  # making the G and R values the same as the B.
        frame[:, :, 2] = frame[:, :, 0]
        # now frame is a 3-channel grayscale matrix, identical to the cv2.cvtColor result except it has 3 channels:
        # the R, G, B values for each pixel hold the same number.
        out.write(frame)
    else:
        break

cap.release()
out.release()
cv2.destroyAllWindows()
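To illustrate the weight-swapping idea from the comments above, here is a standalone sketch on a dummy frame (the array and values are only illustrative) that keeps the red intensity instead of the grayscale mix:

import numpy as np

# Dummy BGR frame standing in for a decoded video frame.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[:, :, 2] = 200   # put some "red" into the dummy frame

# Weight 1.0 on R and 0.0 on B and G keeps only the red intensity.
frame[:, :, 0] = 0.0*frame[:, :, 0] + 0.0*frame[:, :, 1] + 1.0*frame[:, :, 2]
frame[:, :, 1] = frame[:, :, 0]   # replicate it across the other channels
frame[:, :, 2] = frame[:, :, 0]
print(frame[0, 0])   # [200 200 200] -- red intensity copied into all three channels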
We are doing a project on license plate recognition using Python (2.7.12). We have divided the video into frames using the following code:
import cv2   #importing opencv library
import numpy #importing numpy

cap = cv2.VideoCapture('C:/Python27/project/license.avi') #read the video
success, image = cap.read() #divide into frames
count = 0
success = True
while cap.isOpened():
    success, image = cap.read()
    print 'Read a new frame: ', success #when frame has been read successfully
    if type(image) == type(None): #check for invalid frame
        break
    else:
        gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) #convert frame into grey
        cv2.imwrite("frame%d.jpg" % count, gray_image) # save frame as JPEG file
        count += 1 #repeat for all the frames
cap.release()
We are trying to get the best frame (with high quality). Is it possible to automatically select a frame that contains the complete license plate?
Any suggestions would be helpful.
What I need to do is fairly simple:
load a 5-frame video file
detect the background
On every frame, one by one:
subtract the background (create a foreground mask)
do some calculations on the foreground mask
save both the original frame and the foreground mask
Just to see the 5 frames and the 5 corresponding fgmasks:
import numpy as np
import cv2

cap = cv2.VideoCapture('test.avi')
fgbg = cv2.BackgroundSubtractorMOG()

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    fgmask = fgbg.apply(frame)

    # Display the fgmask frame
    cv2.imshow('fgmask', fgmask)
    # Display original frame
    cv2.imshow('img', frame)

    k = cv2.waitKey(0) & 0xff
    if k == 5:
        break

cap.release()
cv2.destroyAllWindows()
Every frame gets opened and displayed correctly, but the displayed fgmask does not correspond to the displayed original frame. Somewhere in the process, the order of the fgmasks gets mixed up.
The background does get subtracted correctly, but I don't get the 5 expected fgmasks.
What am I missing? I feel like this should be straightforward: the while loop runs over the 5 frames of the video and fgbg.apply applies the background subtraction function to each frame.
The OpenCV version I use is opencv-2.4.9-3.
As bikz05 suggested, the running average method worked pretty well on my 5-image sets. Thanks for the tip!
import cv2
import numpy as np

c = cv2.VideoCapture('test.avi')
_, f = c.read()

avg1 = np.float32(f)
avg2 = np.float32(f)

# loop over images and estimate background
for x in range(0, 4):
    _, f = c.read()

    cv2.accumulateWeighted(f, avg1, 1)
    cv2.accumulateWeighted(f, avg2, 0.01)

    res1 = cv2.convertScaleAbs(avg1)
    res2 = cv2.convertScaleAbs(avg2)

    cv2.imshow('img', f)
    cv2.imshow('avg1', res1)
    cv2.imshow('avg2', res2)

    k = cv2.waitKey(0) & 0xff
    if k == 5:
        break
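As a possible follow-up (not part of the original answer; test.avi, the 0.01 learning rate, and the threshold of 30 are only illustrative), a foreground mask for each frame could then be derived from the running-average background with absdiff and a threshold:

import cv2
import numpy as np

c = cv2.VideoCapture('test.avi')
_, f = c.read()
avg = np.float32(f)

for x in range(0, 4):
    _, f = c.read()
    cv2.accumulateWeighted(f, avg, 0.01)            # slowly updated background estimate
    background = cv2.convertScaleAbs(avg)

    diff = cv2.absdiff(f, background)               # per-pixel difference to the background
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, fgmask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

    cv2.imshow('frame', f)
    cv2.imshow('fgmask', fgmask)
    if cv2.waitKey(0) & 0xff == 27:                 # Esc to stop early
        break

c.release()
cv2.destroyAllWindows()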