Enhancing OpenCV Masking - Python

I made a program that applies a mask over an object, as described in this StackOverflow question. I did so using colour thresholding, making the mask select only the colour range of human skin (I don't know if it works for pale skin tones, as I am not white and it works well for me). The problem is that when I run it, some greys (a grey area on the wall, or a shadow) are also picked up by the mask and it is applied there.
I wanted to know whether there is a way to remove the unnecessary bits in the background, and/or whether I could solve this using object detection. PS: I tried using createBackgroundSubtractorGMG/MOG/etc., but the results came out very strange and much worse.
Here is my code:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
image = cv2.imread('yesh1.jpg')
bg = cv2.imread('kruger.jpg')
# Note: cv2.imshow expects BGR, so converting bg to RGB here swaps the red and
# blue channels in the displayed result (a likely cause of the colour change)
bg = cv2.cvtColor(bg, cv2.COLOR_BGR2RGB)

kernel1 = np.ones((1, 1), np.uint8)
kernel2 = np.ones((10, 10), np.uint8)

while True:
    ret, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Threshold the HSV image to the (approximate) skin colour range
    lowerBound = np.array([1, 1, 1])
    upperBound = np.array([140, 255, 140])
    mask = cv2.inRange(hsv, lowerBound, upperBound)
    # Smooth the mask and re-binarise it with Otsu's threshold
    blur = cv2.GaussianBlur(mask, (5, 5), 0)
    ret1, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel1)
    # Make the mask 3-channel so np.where can compare it against colour images
    # (the original passed cv2.IMREAD_COLOR, which is an imread flag, not a
    # colour-conversion code)
    contourthickness = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
    res = cv2.bitwise_and(frame, frame, mask=mask)
    crop_bg = bg[0:480, 0:640]
    final = frame + res
    final = np.where(contourthickness != 0, crop_bg, final)

    cv2.imshow('frame', frame)
    cv2.imshow('Final', final)  # TIS WORKED BBYY

    key = cv2.waitKey(1) & 0xFF
    if key == 27:
        break

cap.release()
cv2.destroyAllWindows()
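One way to suppress the stray grey patches (an editorial sketch, not from the original question) is to keep only the largest connected component of the mask, on the assumption that the skin region is the biggest blob in the frame:

import cv2
import numpy as np

def keep_largest_blob(mask):
    # Keep only the largest contour of a binary mask; everything else goes black
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask
    largest = max(contours, key=cv2.contourArea)
    cleaned = np.zeros_like(mask)
    cv2.drawContours(cleaned, [largest], -1, 255, thickness=cv2.FILLED)
    return cleaned

Calling keep_largest_blob(mask) right after the morphological opening would discard small grey blobs on the wall while keeping the (presumably larger) skin region.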
EDIT:
Following fmw42's comment, I am adding the original image as well as a screenshot of how the different frames look. The masked image also changes colour; something to fix that would also be helpful.

@Jeremi: Your code works 100% when using a white wall as the background. Avoid the door (it is not white, it is cream) and shadows around the edges, to prevent noise; a white bed sheet or white walls work well. I am using a Raspberry Pi 4B/8GB with a 4K monitor, and I can't get the actual size of the window.
Here is the output:
In my output, I placed my hand behind a white sheet, closer to the camera. I don't have a white wall in my room; my room is greener, which is why you see the logo in the background. By the way, I can move my hand with no problem.

Related

How do I remove the background of this type of image with black color?

I want to change the image background to black. I tried some code, but it didn't work; sometimes it removes the object itself. The backgrounds in these images may vary depending on the place. Out of curiosity: do I need to use machine learning to remove the background from this kind of image for better results?
import numpy as np
import cv2

image = cv2.imread(r'D:/IMG_6334.JPG')

# Resize so the width is 150 pixels, keeping the aspect ratio
r = 150.0 / image.shape[1]
dim = (150, int(image.shape[0] * r))
resized = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)

lower_white = np.array([80, 1, 1], np.uint8)       # lower HSV bound
upper_white = np.array([130, 255, 255], np.uint8)  # upper HSV bound
hsv_img = cv2.cvtColor(resized, cv2.COLOR_BGR2HSV) # BGR to HSV colour space

# Filter the background pixels
frame_threshed = cv2.inRange(hsv_img, lower_white, upper_white)
kernel = np.ones((5, 5), np.uint8)

# Dilate the result to remove noise in the background.
# The number of iterations and kernel size will depend on the size of the background noise.
dilation = cv2.dilate(frame_threshed, kernel, iterations=2)
resized[dilation == 255] = (0, 0, 0)  # set background pixels to black

cv2.imshow('res', resized)
cv2.waitKey(0)
This is a separate topic in itself. Image matting is what you are looking for: it is used to convert your background to black and your foreground to white (which in this case you don't have to do). Check out http://alphamatting.com/, where the state-of-the-art matting algorithms are listed, and try implementing one in your code. That said, this is a really long route, so I could suggest a better solution if you mention what exactly you are planning to do after removing the backgrounds of the images.
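As a middle ground before full matting (an editorial sketch, not part of the original answer), OpenCV's built-in cv2.grabCut can often separate a foreground object from a varying background, given a rough bounding rectangle around the object. A minimal sketch, where the file names and rectangle are placeholders:

import cv2
import numpy as np

img = cv2.imread('input.jpg')  # placeholder path
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Rough rectangle around the object (x, y, width, height); adjust per image
rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked as definite or probable background become black
result = img.copy()
result[(mask == cv2.GC_BGD) | (mask == cv2.GC_PR_BGD)] = (0, 0, 0)
cv2.imwrite('result.png', result)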

Expanding background color to connected components (Flood fill) - Image Processing

I'm kind of stuck trying to figure out how I can expand the background colour inwards.
I have this image, which was generated through a mask after noisy background subtraction.
I am trying to turn it into this:
So far I have tried the following, to no avail:
import cv2
from PIL import Image
import numpy as np

img = Image.open("example_of_misaligned_frame.png")  # open poor frame
img_copy = np.asanyarray(img).copy()

# Threshold a grayscale copy so findContours has a binary image to work on
gray = cv2.cvtColor(img_copy, cv2.COLOR_RGB2GRAY)
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)  # find contours

# Create a bounding box around the blob and figure out the rows/cols to iterate over
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))

# Flood fill the entire region with black, hoping that the off-white region
# gets filled due to connected components
for row in range(y, y + h):
    for col in range(x, x + w):
        cv2.floodFill(img_copy, None, seedPoint=(col, row), newVal=0)
This results in a completely black image :(
Any help, pointing me in the right direction, is greatly appreciated.
You can solve it by using floodFill twice:
First time - fill the black pixels with Off-White color.
Second time - fill the Off-White pixels with black color.
There is still the issue of finding the RGB values of the Off-White colour.
I found an improvised way to do so (I don't know the exact rules for which colour is considered to be background).
Here is a working code sample:
import cv2
import numpy as np
#Image.open("example_of_misaligned_frame.png") # open poor frame
img = cv2.imread("example_of_misaligned_frame.png")
#img_copy = np.asanyarray(img).copy()
img_copy = img.copy()
# Improvised way to find the Off-White colour (it works because the Off-White has the maximum colour component values in the image)
tmp = cv2.dilate(img, np.ones((50,50), np.uint8), iterations=10)
# Color of Off-White pixel
offwhite = tmp[0, 0, :]
# Convert to tuple
offwhite = tuple((int(offwhite[0]), int(offwhite[1]), int(offwhite[2])))
# Fill black pixels with off-white color
cv2.floodFill(img_copy, None, seedPoint=(0,0), newVal=offwhite)
# Fill off-white pixels with black color
cv2.floodFill(img_copy, None, seedPoint=(0,0), newVal=0, loDiff=(2, 2, 2, 2), upDiff=(2, 2, 2, 2))
cv2.imshow("img_copy", img_copy)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result of cv2.dilate:
Result of first cv2.floodFill:
Result of second cv2.floodFill:
In Python/OpenCV, you can simply extract a binary mask from your flood-filled image and erode that mask, then reapply it to the input or to your flood-filled result.
Input:
import cv2
# read image
img = cv2.imread("masked_image.png")
# convert img to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# make anything not black into white
gray[gray!=0] = 255
# erode mask
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (51,51))
mask = cv2.morphologyEx(gray, cv2.MORPH_ERODE, kernel)
# make mask into 3 channels
mask = cv2.merge([mask,mask,mask])
# apply new mask to img
result = img.copy()
result = cv2.bitwise_and(img, mask)
# write result to disk
cv2.imwrite("masked_image_original_mask.png", gray)
cv2.imwrite("masked_image_eroded_mask.png", mask)
cv2.imwrite("masked_image_eroded_image.png", result)
# display it
cv2.imshow("IMAGE", img)
cv2.imshow("MASK", mask)
cv2.imshow("RESULT", result)
cv2.waitKey(0)
Mask:
Eroded Mask:
Result:
Adjust the size of the circular (elliptical) morphology kernel as desired for more or less erosion.

Python video analysis -- detecting color change and time stamp

I'm trying to use OpenCV to analyze a 10-minute video for green in the top-right corner and print out the time stamps every time there is green.
I've used meanshift and tried splitting the video into frames, but a 10-minute video would produce a lot of frames, especially since I'm trying to be accurate.
I was thinking of masking over the green, but I haven't found a way to print the time stamps... any tips on what packages or tools I should use?
Here's what I have. I've tried basically everything in OpenCV, but I've never used it before, so I'm lost...
import cv2
import numpy as np

cap = cv2.VideoCapture('video.mp4')

# Loop over the video frame by frame
while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Convert the image from BGR to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Define the range of green colour in HSV
    lower_green = np.array([121, 240, 9])    # darker green
    upper_green = np.array([254, 255, 255])  # lighter green

    # Create a mask so that only the green-coloured objects are highlighted
    mask = cv2.inRange(hsv, lower_green, upper_green)
    res = cv2.bitwise_and(frame, frame, mask=mask)

    cv2.imshow('frame', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('res', res)

    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

# Destroy all of the HighGUI windows and release the capture
cv2.destroyAllWindows()
cap.release()
Splitting the video into frames gives me thousands of images, and just reading the video doesn't give me time stamps, which is what I want it to print out. Thanks!
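A minimal sketch of one way to print time stamps when green is detected (an editorial addition, not from the thread): cv2.CAP_PROP_POS_MSEC reports the current position of the capture in milliseconds, so you can check the top-right corner of each frame for green and print the time whenever enough green pixels are present. The corner size, green range, and pixel-count threshold below are placeholder values to tune (note that OpenCV's hue channel runs 0 to 179):

import cv2
import numpy as np

cap = cv2.VideoCapture('video.mp4')

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Position of this frame in the video, in milliseconds
    msec = cap.get(cv2.CAP_PROP_POS_MSEC)

    # Look only at the top-right corner (placeholder size: 100x100 pixels)
    corner = frame[0:100, frame.shape[1] - 100:]
    hsv = cv2.cvtColor(corner, cv2.COLOR_BGR2HSV)

    # Rough green range in OpenCV's HSV
    mask = cv2.inRange(hsv, np.array([40, 50, 50]), np.array([80, 255, 255]))

    # Print a time stamp when enough of the corner is green (placeholder threshold)
    if cv2.countNonZero(mask) > 500:
        seconds = msec / 1000.0
        print(f"green at {int(seconds // 60):02d}:{seconds % 60:06.3f}")

cap.release()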

How to get a foreground mask when I already have the background image

I know that with cv2.createBackgroundSubtractorMOG2() we can subtract the foreground mask using a background-estimation method based on the last 500 frames (the default). But what if I already have a background picture and just want to subtract the foreground using that picture in each frame? What I'm trying is this:
import numpy as np
import cv2

video = "xx.avi"
cap = cv2.VideoCapture(video)
bg = cv2.imread("bg.png")

while True:
    ret, frame = cap.read()
    if ret:
        original_frame = frame.copy()
        # get foreground mask?
        fgmask = frame - bg

        # filter kernel for denoising:
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        opening = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)
        closing = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel)

        # dilate to merge adjacent blobs
        dilation = cv2.dilate(closing, kernel, iterations=2)

        # show the foreground mask
        cv2.imshow('fg mask', dilation)
        cv2.imshow('original', original_frame)
        k = cv2.waitKey(30) & 0xff
        if k == 27:
            cap.release()
            cv2.destroyAllWindows()
            break
    else:
        break
However, I get colourful frames when doing frame = frame - bg. How can I get the correct foreground mask?
You are getting colourful images because you are subtracting two colour images, so the value you get at each pixel is the per-channel (B, G and R) difference between the two images.
To perform background subtraction, as dhanushka comments, the simplest option is to use MOG2 and feed it your background image for some number of frames (e.g. 500) so that it learns it as the background. MOG2 is designed to learn the variability of each pixel's colour with a Gaussian model, so if you always feed it the same image it will not learn that variability; still, I think it should work for what you are intending to do.
The nice thing about this approach is that MOG2 will take care of lots more things, like updating the model over time and dealing with shadows.
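A minimal sketch of this idea (an editorial addition; it assumes bg.png has the same size as the video frames, and the frame count and learning rate are values to tune):

import cv2

bg = cv2.imread("bg.png")
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

# "Teach" the subtractor the known background by feeding it the same image
for _ in range(500):
    subtractor.apply(bg)

cap = cv2.VideoCapture("xx.avi")
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # learningRate=0 freezes the model so the learned background is kept as-is
    fgmask = subtractor.apply(frame, learningRate=0)
    cv2.imshow("fg mask", fgmask)
    if cv2.waitKey(30) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()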
Another option would be to implement your own background subtraction method as you tried to do.
So, if you want to test it, you need to convert your fgmask colour image into something you can easily threshold so you can decide, for each pixel, whether it is background or foreground. A simple option is to convert it to grayscale and then apply a simple threshold; the lower the threshold, the more "sensitive" your subtraction method is (play with the thresh value), i.e.:
...
# get foreground mask? (cv2.absdiff avoids the uint8 wrap-around that plain
# subtraction suffers from; the original snippet used frame - bg)
fgmask = cv2.absdiff(frame, bg)
gray_image = cv2.cvtColor(fgmask, cv2.COLOR_BGR2GRAY)
thresh = 20
im_bw = cv2.threshold(gray_image, thresh, 255, cv2.THRESH_BINARY)[1]

# filter kernel for denoising:
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
opening = cv2.morphologyEx(im_bw, cv2.MORPH_OPEN, kernel)
...

How can I efficiently segment hands in my video?

I'm new here.
I'm having some problems segmenting hands from video with OpenCV in Python. I have looked over every topic I could find, but I can't see the mistake in the code I wrote.
Please note that I have worked with Python only once, and only with the basics; this is my first OpenCV application ever.
My problem is that the output video comes out black, with some white squares. I assume I'm using wrong values in the thresholds: I took a colour from the middle of the hand, converted it to HSV, and set the lower values to [99,100,100] and the upper to [119,255,255].
Can anyone help me?
Here is code, if anyone would like to look at it:
import numpy as np
import cv2

cap = cv2.VideoCapture('vid2.avi')
# cv2.cv.CV_FOURCC is the OpenCV 2.x API; on OpenCV 3+ use cv2.VideoWriter_fourcc
fourcc = cv2.VideoWriter_fourcc(*'MP4V')
out = cv2.VideoWriter('output.avi', fourcc, 29, (1920, 1080))

while True:
    ret, frame = cap.read()
    if ret == True:
        # Blur in HSV space, then threshold to the sampled hand colour range
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        blur = cv2.GaussianBlur(hsv, (15, 15), 0)
        lower = np.array([99, 100, 100])
        upper = np.array([119, 255, 255])
        mask = cv2.inRange(blur, lower, upper)
        res = cv2.bitwise_and(frame, frame, mask=mask)
        out.write(res)
    else:
        break

cap.release()
out.release()
cv2.destroyAllWindows()
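As a sanity check on the threshold values (an editorial sketch, not from the thread), a colour sampled from the hand in BGR can be converted to HSV directly to verify the range; remember that OpenCV's hue channel runs 0 to 179. A minimal sketch, with a placeholder sample colour:

import cv2
import numpy as np

# A BGR colour sampled from the middle of the hand (placeholder value)
sample_bgr = np.uint8([[[120, 140, 200]]])
sample_hsv = cv2.cvtColor(sample_bgr, cv2.COLOR_BGR2HSV)[0, 0]
h, s, v = int(sample_hsv[0]), int(sample_hsv[1]), int(sample_hsv[2])
print(f"sampled HSV: {h}, {s}, {v}")

# A common rule of thumb: about +/-10 around the hue, wide saturation/value bounds
lower = np.array([max(h - 10, 0), 50, 50])
upper = np.array([min(h + 10, 179), 255, 255])
print("suggested range:", lower, upper)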
