How can I output the video with the initial frame superimposed using cv2.BackgroundSubtractorMOG2?
Here is my code:
import cv2
import numpy as np
cap = cv2.VideoCapture(0)
fgbg = cv2.BackgroundSubtractorMOG2()
fourcc = cv2.cv.CV_FOURCC(*'XVID')
out = cv2.VideoWriter('output.avi',fourcc, 20.0, (640,480), isColor = False)
while(cap.isOpened()):
    ret, frame = cap.read()
    if ret == True:
        fgmask = fgbg.apply(frame)
        cv2.imshow('original', frame)
        cv2.imshow('fg', fgmask)
        out.write(fgmask)
        k = cv2.waitKey(30) & 0xFF
        if k == 27:
            break
    else:
        break
cap.release()
out.release()
cv2.destroyAllWindows()
This is what I get:
You can use the cv2.addWeighted function to achieve the superimpose effect.
However, note that the MOG2 background subtractor continuously updates its model of the background as new frames are passed to apply (which is why objects that were originally in the foreground mask eventually disappear once they stay stationary for too long), so a superimposed "initial frame" may be misleading.
You might be better off superimposing the immediately previous frame (or frames) instead of the initial frame if you want to see the effect of the foreground mask.
I'm using OpenCV and MediaPipe to create an application where multiple faces in the live feed are not desired. Hence I need a solution that either destroys the video feed and displays an image until there is only one face in the frame (and then displays the feed again, of course),
or overlays an image on the entire video-feed window (hence hiding the feed).
Here's the code so far:
import cv2
import mediapipe as mp
import time
cap = cv2.VideoCapture(0)
face_detection = mp.solutions.face_detection.FaceDetection(0.8)
mul = cv2.imread('images.png')
mul = cv2.resize(mul, (640, 480))
while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, (640, 480))
    imgRGB = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB input
    results = face_detection.process(imgRGB)
    for count, detection in enumerate(results.detections):
        continue
    count += 1
    if count > 1:
        cv2.destroyAllWindows()
        cv2.imshow("output", mul)
        time.sleep(10)
        continue
    cv2.imshow("output", frame)
cap.release()
cv2.destroyAllWindows()
I'm trying to destroy the feed window and display the image instead. The time.sleep(10) delay is there because without it the windows switch between the video feed and the image at a very high rate, making it hard to see what's happening.
The problem is that the image is not being displayed: the window appears blank grey, and after 10 seconds the video feed comes back up. It then takes very long to display the image again, even though the multiple faces never leave the frame.
Thank you
You observe grey frames because you are destroying the window every time the loop starts over. You also get stuck at the if count > 1: statement, since count is increased on every frame regardless of any condition and is never reinitialized (so count stays above 1 after two faces have been detected, even though those faces were in different frames). Here is my solution to the problem; hope it helps.
import cv2
import mediapipe as mp
import time
cap = cv2.VideoCapture(0)
face_detection = mp.solutions.face_detection.FaceDetection(0.8)
mul = cv2.imread('image.jpg')
mul = cv2.resize(mul, (640, 480))
count = 0
while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, (640, 480))
    cv2.imshow("Original", frame)
    imgRGB = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = face_detection.process(imgRGB)
    if results.detections:
        for count, detection in enumerate(results.detections):
            count += 1
    print("Number of Faces: ", count)
    if count > 1:
        cv2.imshow('StackOverflow', mul)
    else:
        cv2.imshow('StackOverflow', frame)
    count = 0
    if cv2.waitKey(5) & 0xFF == 27:
        break
cap.release()
cv2.destroyAllWindows()
Here is the result with one face in the frame:
Result with multiple faces:
I'm trying to segment the moving propeller of this video. My approach is to detect all black and moving pixels in order to separate the propeller from the rest.
Here is what I tried so far:
import numpy as np
import cv2
x, y, h, w = 350, 100, 420, 500  # cropping values
cap = cv2.VideoCapture('Video Path')
while(1):
    _, frame = cap.read()
    frame = frame[y:y+h, x:x+w]  # crop video
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower_black = np.array([0, 0, 0])
    upper_black = np.array([360, 255, 90])
    mask = cv2.inRange(hsv, lower_black, upper_black)
    res = cv2.bitwise_and(frame, frame, mask=mask)
    nz = np.argwhere(mask)
    cv2.imshow('Original', frame)
    cv2.imshow('Propeller Segmentation', mask)
    k = cv2.waitKey(30) & 0xff  # press Esc to exit
    if k == 27:
        break
cap.release()
cv2.destroyAllWindows()
Screenshot from the video
Result of the Segmentation
I think you should have a look at background subtraction, for example with the function cv2.createBackgroundSubtractorMOG2(). It should be the right approach for your problem.
OpenCV provides a good tutorial on this: Link
In the following snippet of code, how would I set the size of the capture window? I want to take a 256×256-pixel picture.
import numpy as np
import cv2
cap = cv2.VideoCapture(0)
while(True):
    ret, frame = cap.read()
    cv2.imshow('img1', frame)
    if cv2.waitKey(1) & 0xFF == ord('y'):
        cv2.imwrite('imag.png', frame)
        cv2.destroyAllWindows()
        break
cap.release()
You can alternatively use OpenCV's resize function, or slice your frame if you are just interested in a certain region of the captured image.
You can set the capture size like so (note that many webcams only support specific resolutions, and the driver may fall back to the nearest supported one):
video_capture = cv2.VideoCapture(0)
video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 256)
video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 256)
I want my code to capture a single image from the webcam and process it further, e.g. later detecting colours, Sobel edges, and a lot more.
In short, I want to do image acquisition.
To use your webcam, you can use VideoCapture:
import cv2
cap = cv2.VideoCapture(0) # use 0 if you only have front facing camera
ret, frame = cap.read() #read one frame
print(frame.shape)
cap.release() # release the VideoCapture object.
You start the webcam, read one image and immediately release it. The frame is the image and you can preprocess it however you want. You can view the image using imshow:
cv2.imshow('image', frame)
if cv2.waitKey(0) & 0xff == ord('q'):  # press q to exit
    cv2.destroyAllWindows()
import cv2
cap = cv2.VideoCapture(0)  # usually 0 if you have only one camera; with multiple cameras the indices may be 0, 1, 2, ...
ret, frame = cap.read()  # ret is a True/False status showing whether a frame was read from the webcam successfully; frame is the image array

# If you want a loop that reads continuously:
ret = True
while ret:
    ret, frame = cap.read()
    if frame is None:
        continue  # skip empty frames; the loop itself ends once ret becomes False
If this is the answer you wanted, note that it has been asked multiple times. Make sure you have tried to search for the answer before asking.
cam = cv2.VideoCapture(0)
image = cam.read()[1]  # read() returns (ret, frame); take the frame
cam.release()  # release the camera as soon as the frame is captured
cv2.imshow("image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
I can capture a video from the webcam and save it fine with this code
cap = cv2.VideoCapture(0)
fgbg= cv2.BackgroundSubtractorMOG()
fourcc = cv2.cv.CV_FOURCC(*'DIVX')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640,480))
while(cap.isOpened()):
    ret, frame = cap.read()
    if ret:
        fgmask = fgbg.apply(frame)
        out.write(frame)  # save it
        cv2.imshow('Background Subtraction', fgmask)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break  # q to quit
    else:
        break  # EOF
cap.release()
out.release()
cv2.destroyAllWindows()
This records it as one would expect, and shows the background subtraction as well. It saves to output.avi. All is well. But I can't save the foreground mask; it gives me a "Could not demultiplex stream" error when the write line in the code above is changed to:
out.write(fgmask) #Save it
Why is this? Is fgmask not a frame like the ones I read from the capture?
Alright, figured it out! Let me know if there's a more efficient way to do this or if I am missing something.
The foreground mask produced by background subtraction is an 8-bit single-channel (binary) image, while the VideoWriter above was opened for colour frames, so the mask has to be converted before writing. A better format probably exists, but I used RGB:
frame = cv2.cvtColor(fgmask, cv2.COLOR_GRAY2RGB)