OpenCV remove background preserving the foreground - python

I'm new to OpenCV and I'm trying to extract myself from the background in a video I got from my webcam. So far I'm able to remove the background, but the foreground (the video of me) appears in white. I want it to look exactly like it did in the original video. Below is my current code:
import cv2

cap = cv2.VideoCapture(0)
fgbg = cv2.createBackgroundSubtractorMOG2()
while True:
    ret, frame = cap.read()
    fgmask = fgbg.apply(frame)
    # show the original frame and the foreground mask
    cv2.imshow('frame', frame)
    cv2.imshow('fgmask', fgmask)
    if cv2.waitKey(1) & 0xFF == 27:
        break
Any hint or some sense of direction will be much appreciated.

fgmask is a white mask of the foreground. You must mask your original frame with it. The frame is an RGB (or, in OpenCV, more likely BGR) image, while the mask is single-channel (you can check by printing frame.shape and fgmask.shape). Therefore you must convert the mask to a 3-channel image and then apply it:
# Expand the single-channel mask to 3 channels so it matches the frame
mask_rgb = cv2.cvtColor(fgmask, cv2.COLOR_GRAY2RGB)
# Keep only the pixels where the mask is white
out_frame = cv2.bitwise_and(frame, mask_rgb)
cv2.imshow("FG", out_frame)

Related

Enhancing OpenCV Masking

I made a program that applies a mask over an object, as described in this StackOverflow question. I did so using colour thresholding, making the mask select only the colour range of human skin (I don't know if it works for white people as I am not white, but it works well for me). The problem is that when I run it, some greys (a grey area on the wall, or a shadow) are also picked up by the mask and it is applied there.
I wanted to know whether there is a way to remove those unnecessary bits in the background, and/or whether I could solve this using object detection. P.S. I tried using createBackgroundSubtractorGMG/MOG/etc., but that came out very weird and way worse.
Here is my code:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
image = cv2.imread('yesh1.jpg')
bg = cv2.imread('kruger.jpg')
bg = cv2.cvtColor(bg, cv2.COLOR_BGR2RGB)
kernel1 = np.ones((1, 1), np.uint8)
kernel2 = np.ones((10, 10), np.uint8)

while True:
    ret, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # rough skin-colour range in HSV
    lowerBound = np.array([1, 1, 1])
    upperBound = np.array([140, 255, 140])
    mask = cv2.inRange(hsv, lowerBound, upperBound)
    blur = cv2.GaussianBlur(mask, (5, 5), 0)
    ret1, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel1)
    # cv2.IMREAD_COLOR is an imread flag, not a conversion code;
    # cv2.COLOR_GRAY2BGR is the intended 3-channel conversion
    contourthickness = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
    res = cv2.bitwise_and(frame, frame, mask=mask)
    crop_bg = bg[0:480, 0:640]
    final = frame + res
    final = np.where(contourthickness != 0, crop_bg, final)
    cv2.imshow('frame', frame)
    cv2.imshow('Final', final)  # TIS WORKED BBYY
    key = cv2.waitKey(1) & 0xFF
    if key == 27:
        break

cv2.destroyAllWindows()
EDIT:
Following @fmw42's comment, I am adding the original image as well as a screenshot of how the different frames look. The masked image also changes colour; something to fix that would also be helpful.
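One way to suppress stray background blobs like those greys, as a rough sketch (it assumes the hand or person is the largest skin-coloured region, which may not hold in every scene), is to keep only the biggest contour of the mask:
import cv2
import numpy as np

def keep_largest_blob(mask):
    # Find all external white regions (OpenCV 4 returns contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask
    # Redraw only the largest region onto a blank mask
    largest = max(contours, key=cv2.contourArea)
    cleaned = np.zeros_like(mask)
    cv2.drawContours(cleaned, [largest], -1, 255, cv2.FILLED)
    return cleaned
Calling mask = keep_largest_blob(mask) right after the morphologyEx step would drop the small grey patches while keeping the main region.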
@Jeremi: your code works 100%. I used a white wall as the background, avoiding the door (it is cream, not white) and the shadows around the edges to prevent noise. A white bed sheet or white walls also work. I am using a Raspberry Pi 4B/8GB with a 4K monitor; I can't get the actual size of the window.
Here is the output:
What you see in my output: I placed my hand behind a white sheet, closer to the camera. I don't have a white wall in my room; my room is greener, which is why you see the logo in the background. By the way, I can move my hand with no problem.

Python video analysis -- detecting color change and time stamp

I'm trying to use OpenCV to analyze a 10-minute video for green in the top-right corner and print out a time stamp every time green appears.
I've used meanshift and tried splitting the video into frames, but a 10-minute video requires a lot of frames, especially since I'm trying to be accurate.
I was thinking of masking over the green, but I can't find a way to print the time stamps... any tips on what packages or tools I should use?
Here's what I have. I've tried basically everything in OpenCV, but I've never used it before, so I'm lost...
import cv2
import numpy as np

cap = cv2.VideoCapture('video.mp4')

# Loop over the video frame by frame
while True:
    ret, frame = cap.read()
    if not ret:
        # stop when the video ends
        break
    # Convert the image from BGR to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Define the range of green colour in HSV
    lower_green = np.array([121, 240, 9])    # darker green
    upper_green = np.array([254, 255, 255])  # white green
    # Create a mask so that only the green-coloured objects are highlighted
    mask = cv2.inRange(hsv, lower_green, upper_green)
    res = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow('frame', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('res', res)
    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

# Destroy all of the HighGUI windows and release the capture
cv2.destroyAllWindows()
cap.release()
Splitting the video into frames gives me thousands of images, and just reading the video doesn't give me time stamps, which is what I want it to print out. Thanks!
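One rough sketch of how this could be done (the HSV green range and the top-right-quarter region of interest are assumptions to adjust): test only that corner of each frame, and read the current position in milliseconds from the capture via cv2.CAP_PROP_POS_MSEC.
import cv2
import numpy as np

cap = cv2.VideoCapture('video.mp4')
lower_green = np.array([40, 50, 50])    # hypothetical HSV range for green
upper_green = np.array([80, 255, 255])

while True:
    ret, frame = cap.read()
    if not ret:
        break
    h, w = frame.shape[:2]
    roi = frame[0:h // 2, w // 2:w]     # top-right quarter of the frame
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_green, upper_green)
    if cv2.countNonZero(mask) > 0:
        # current position in the video, in milliseconds
        msec = cap.get(cv2.CAP_PROP_POS_MSEC)
        print('green at %.2f s' % (msec / 1000))

cap.release()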

How to get a foreground mask when you already have the background image

I know that with cv2.createBackgroundSubtractorMOG2() we can subtract the foreground mask using a background-estimation method based on the last 500 frames (the default history). But what if I already have a background picture and just want to subtract the foreground using that picture in each frame? What I'm trying is like this:
import numpy as np
import cv2

video = "xx.avi"
cap = cv2.VideoCapture(video)
bg = cv2.imread("bg.png")

while True:
    ret, frame = cap.read()
    if ret:
        # copy only after confirming a frame was read
        original_frame = frame.copy()
        # get foreground mask?
        fgmask = frame - bg
        # filter kernel for denoising:
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        opening = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)
        closing = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel)
        # Dilate to merge adjacent blobs
        dilation = cv2.dilate(closing, kernel, iterations=2)
        # show fg: dilation
        cv2.imshow('fg mask', dilation)
        cv2.imshow('original', original_frame)
        k = cv2.waitKey(30) & 0xff
        if k == 27:
            cap.release()
            cv2.destroyAllWindows()
            break
    else:
        break
However, I get colourful frames when doing fgmask = frame - bg. How can I get the correct foreground mask?
You are getting colourful images because you are subtracting two colour images, so the value you get at each pixel is the difference between both images on each channel (B, G and R).
In order to perform background subtraction, as dhanushka comments, the simplest option is to use MOG2 and feed it your background image for some number of frames (e.g. 500) so it learns it as the background. MOG2 is designed to learn the variability of each pixel's colour with a Gaussian model, so if you always feed it the same image it will not learn that variability, but I think it should still work for what you are intending to do.
The nice thing about this approach is that MOG2 takes care of many more things, like updating the model over time, dealing with shadows, and so on.
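A minimal sketch of that idea, assuming the bg.png and xx.avi filenames from the question:
import cv2

bg = cv2.imread("bg.png")
fgbg = cv2.createBackgroundSubtractorMOG2(history=500)

# Feed the known background repeatedly so MOG2 adopts it as its model
for _ in range(500):
    fgbg.apply(bg)

cap = cv2.VideoCapture("xx.avi")
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # learningRate=0 freezes the model so the video frames don't overwrite it
    fgmask = fgbg.apply(frame, learningRate=0)
    cv2.imshow('fg mask', fgmask)
    if cv2.waitKey(30) & 0xFF == 27:
        break
cap.release()
cv2.destroyAllWindows()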
Another option is to implement your own background-subtraction method, as you tried to do.
If you want to test it, you need to convert your fgmask colour image into something you can easily threshold to decide, for each pixel, whether it is background or foreground. A simple option is to convert it to grayscale and then apply a threshold; the lower the threshold, the more "sensitive" your subtraction method is (play with the thresh value), i.e.:
...
# get foreground mask: absdiff avoids uint8 wrap-around on negative differences
fgmask = cv2.absdiff(frame, bg)
im_gray = cv2.cvtColor(fgmask, cv2.COLOR_BGR2GRAY)
thresh = 20
im_bw = cv2.threshold(im_gray, thresh, 255, cv2.THRESH_BINARY)[1]
# filter kernel for denoising:
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
opening = cv2.morphologyEx(im_bw, cv2.MORPH_OPEN, kernel)
...
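To view the colour foreground rather than the binary mask, applying the cleaned mask back to the frame should do (continuing the fragment above; opening is the denoised mask):
...
# keep only the foreground pixels of the original frame
fg = cv2.bitwise_and(frame, frame, mask=opening)
cv2.imshow('foreground', fg)
...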

How can I efficiently segment hands from my video?

I'm new here.
I'm having some problems segmenting hands from video with OpenCV and Python. I looked over every topic I could, but I can't see the mistake in the code I wrote.
Please note that I have worked with Python only once, and only with the basics; this is my first OpenCV application ever.
My problem is that the output video is black with some white squares. I assume I'm using wrong values in the thresholds. I took the colour from the middle of the hand, converted it to HSV, and set the lower values to [99,100,100] and the upper to [119,255,255].
Can anyone help me?
Here is the code, if anyone would like to look at it:
import numpy as np
import cv2

cap = cv2.VideoCapture('vid2.avi')
# cv2.cv.CV_FOURCC is the old OpenCV 2.x API; VideoWriter_fourcc is its modern equivalent
fourcc = cv2.VideoWriter_fourcc(*'MP4V')
out = cv2.VideoWriter('output.avi', fourcc, 29, (1920, 1080))

while True:
    ret, frame = cap.read()
    if not ret:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    blur = cv2.GaussianBlur(hsv, (15, 15), 0)
    # HSV range sampled from the middle of the hand
    lower = np.array([99, 100, 100])
    upper = np.array([119, 255, 255])
    mask = cv2.inRange(blur, lower, upper)
    res = cv2.bitwise_and(frame, frame, mask=mask)
    out.write(res)

cap.release()
out.release()
cv2.destroyAllWindows()
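As an aside on picking those threshold values: a hedged sketch of deriving inRange bounds from a single sampled BGR pixel (the margins here are arbitrary and need tuning):
import cv2
import numpy as np

def hsv_bounds_from_bgr(bgr_pixel, h_margin=10, sv_floor=100):
    # Convert the single pixel to HSV via a 1x1 image
    pixel = np.uint8([[bgr_pixel]])
    h, s, v = cv2.cvtColor(pixel, cv2.COLOR_BGR2HSV)[0][0]
    # Build a loose band around the sampled hue; OpenCV hue runs 0-179
    lower = np.array([max(int(h) - h_margin, 0), sv_floor, sv_floor])
    upper = np.array([min(int(h) + h_margin, 179), 255, 255])
    return lower, upper

lower, upper = hsv_bounds_from_bgr([120, 80, 60])   # hypothetical sampled colour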

Real-time object detection based on color in OpenCV and Python

I am new to OpenCV. I have seen many tutorials on color detection. My camera is rotating and I want it to stop rotating when it detects, say, a blue color. How can I check whether a blue object has come into my frame? I want the object to be in the centre of the frame. Basically I want feedback, maybe a flag bit, when a blue color is detected. I am using OpenCV 2 and Python. Please help.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    # Take each frame
    _, frame = cap.read()
    # Convert BGR to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Define the range of blue color in HSV
    lower_blue = np.array([110, 50, 50])
    upper_blue = np.array([130, 255, 255])
    # Threshold the HSV image to get only blue colors
    # (the original passed undefined lower_green/upper_green here)
    mask = cv2.inRange(hsv, lower_blue, upper_blue)
    # Bitwise-AND the mask and the original image
    res = cv2.bitwise_and(frame, frame, mask=mask)
I got this far. Now what should I check? Whether res is a zero matrix? That is, I want an acknowledgement when a blue object is detected.
I want something like this: http://www.youtube.com/watch?v=0PXnzAAKro8. So, how should I check whether a blue object has been detected?
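One sketch of such a check, reusing the blue range from the code above (the pixel-count threshold of 500 is a hypothetical value to tune for your camera and object size): count the white pixels in the mask, restricted to the centre of the frame.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    _, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([110, 50, 50]), np.array([130, 255, 255]))
    # Test only the middle third of the frame, since the object
    # should be in the centre
    h, w = mask.shape
    centre = mask[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
    # Flag when enough blue pixels appear (500 is a hypothetical threshold)
    if cv2.countNonZero(centre) > 500:
        print('blue object detected in the centre of the frame')
    cv2.imshow('frame', frame)
    if cv2.waitKey(5) & 0xFF == 27:
        break
cap.release()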
