I want to remove the background of an image using a mask image. I already have the mask image, and I try to set the original image's background to 0 wherever the mask value is 0, but the result is very bad. How can I solve this problem? Thank you.
from skimage import io
import numpy as np
img = io.imread("GT06.jpg")
mask = io.imread("GT03.png")
mask2 = np.where((mask==0),0,1).astype('uint8')
img = img*mask2[:,:,np.newaxis]
io.imshow(img)
io.show()
Input image (GT06.jpg):
Mask image (GT03.png):
This results in:
I want to get the foreground like this:
The problem is that your mask isn't pure black and white, i.e. all 0 or 255. Changing your mask generation to:
mask2 = np.where((mask<200),0,1).astype('uint8')
results in:
You could either play with the mask or the threshold number - I used 200.
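If you would rather not hand-pick the threshold, a hedged alternative is to let Otsu's method choose it for you (a minimal sketch using skimage.filters.threshold_otsu, assuming the mask is a single-channel array):
from skimage import filters
# let Otsu pick the split point instead of hardcoding 200 (assumes a single-channel mask)
thresh = filters.threshold_otsu(mask)
mask2 = np.where((mask < thresh), 0, 1).astype('uint8')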
In Python you could use OpenCV. Here are instructions to install OpenCV in Python if you don't have it on your system. I think you could do the same with other libraries and the procedure will be the same; the trick is to invert the mask and apply it to some background, so that you have your masked image and a masked background, then you combine both.
image1 is your image masked with the original mask, image2 is the background image masked with the inverted mask, and image3 is the combined image. Important: image1, image2 and image3 must be of the same size and type. The mask must be grayscale.
import cv2
import numpy as np
# opencv loads the image in BGR, convert it to RGB
img = cv2.cvtColor(cv2.imread('E:\\FOTOS\\opencv\\iT5q1.png'),
                   cv2.COLOR_BGR2RGB)
# load mask and make sure it is black & white
_, mask = cv2.threshold(cv2.imread('E:\\FOTOS\\opencv\\SH9jL.png', 0),
                        0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# load background (could be an image too)
bk = np.full(img.shape, 255, dtype=np.uint8) # white bk, same size and type of image
bk = cv2.rectangle(bk, (0, 0), (int(img.shape[1] / 2), int(img.shape[0] / 2)), 0, -1) # rectangles
bk = cv2.rectangle(bk, (int(img.shape[1] / 2), int(img.shape[0] / 2)), (img.shape[1], img.shape[0]), 0, -1)
# get masked foreground
fg_masked = cv2.bitwise_and(img, img, mask=mask)
# get masked background, mask must be inverted
mask = cv2.bitwise_not(mask)
bk_masked = cv2.bitwise_and(bk, bk, mask=mask)
# combine masked foreground and masked background
final = cv2.bitwise_or(fg_masked, bk_masked)
mask = cv2.bitwise_not(mask) # revert mask to original
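To actually view or save the combined result, a small sketch (note that img was converted to RGB at load time, so convert back before writing with OpenCV):
# final is in RGB here; cv2.imwrite expects BGR, so convert back
cv2.imwrite('combined.png', cv2.cvtColor(final, cv2.COLOR_RGB2BGR))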
I have the following image that I generate with the script below.
I would like to know how I can eliminate the contours at the borders (i.e. between the black background and the purple pixels).
You can find the image as a PyTorch tensor here.
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = np.moveaxis(image.cpu().numpy(), 0, -1)  # image is a pytorch tensor (C, H, W)
img *= 255.0 / img.max()
img = img.astype(np.uint8)
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
_, t_img = cv2.threshold(img, 90, 155, cv2.THRESH_TOZERO_INV)
c_img = cv2.Canny(t_img, 10, 100)
contours, _ = cv2.findContours(c_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours, -1, (255, 0, 0), 1)  # draws in place
plt.imshow(img)
I still don't know if this is what you want, but what you could do is: generate a binary mask of the foreground, using either simple thresholding or masking black regions if the background is always black, as seems to be the case here. Then you can erode the foreground mask to remove a defined number of pixels from the border of the mask (side note: use binary_dilation for the opposite operation):
import scipy.ndimage as ndimage
# img should be a numpy array with RGB channels in the last dimension
fg_mask = (img > (0, 0, 0)).any(axis=-1) # binary foreground mask (all pixels which are not black)
filled = ndimage.binary_fill_holes(fg_mask) # fill potential holes in mask (not needed here)
eroded = ndimage.binary_erosion(filled,
                                iterations=1,
                                structure=ndimage.generate_binary_structure(2, connectivity=2))
new_img = img * eroded[..., None] # apply eroded mask to img
Parameters to adjust are iterations (the higher the value the more pixels are removed from the foreground/background border) and structure (connectivity of 1 delivers smoother edges, 2 is used here) of binary_erosion.
Results:
Original picture:
Processed picture (new_img) with eroded borders:
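If you need smoother edges or a wider rim removed, a variant of the erosion call above might look like this (just a sketch; tune iterations to the thickness of the contour halo):
# connectivity=1 gives a 4-connected (smoother) structure; more iterations remove a wider rim
eroded_smooth = ndimage.binary_erosion(
    filled,
    iterations=3,
    structure=ndimage.generate_binary_structure(2, connectivity=1))
new_img_smooth = img * eroded_smooth[..., None]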
I have two images and a mask, all of same dimensions, as Numpy arrays:
Desired output
I would like to merge them in such a way that the output will be like this:
Current code
def merge(lena, rocket, mask):
    '''Mask init and cropping'''
    mask = np.zeros(lena.shape[:2], dtype='uint8')
    cv2.fillConvexPoly(mask, circle, 255)  # might be polygon

    '''Bitwise operations'''
    lena = cv2.bitwise_or(lena, lena, mask=mask)
    mask_inv = cv2.bitwise_not(mask)  # mask inverting
    rocket = cv2.bitwise_or(rocket, rocket, mask=mask_inv)
    output = cv2.bitwise_or(rocket, lena)

    return output
Current result
This code gives me this result:
Applying cv2.GaussianBlur(mask, (51,51), 0) distorts the colors of the overlaid image in different ways.
Other SO questions relate to similar problems but not solving exactly this type of blurred compositing.
Update: this gives the same result as the current one:
mask = np.zeros(lena.shape[:2], dtype='uint8')
mask = cv2.GaussianBlur(mask, (51,51), 0)
mask = mask[..., np.newaxis]
cv2.fillConvexPoly(mask, circle, 1)
output = mask * lena + (1 - mask) * rocket
Temporary solution
Possibly this is not optimal due to the many conversions; please advise.
mask = np.zeros(generated.shape[:2])
polygon = np.array(polygon, np.int32) # 2d array of x,y coords
cv2.fillConvexPoly(mask, polygon, 1)
mask = cv2.GaussianBlur(mask, (51, 51), 0)
mask = mask.astype('float32')
mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
foreground = cv2.multiply(lena, mask, dtype=cv2.CV_8U)
background = cv2.multiply(rocket, (1 - mask), dtype=cv2.CV_8U)
output = cv2.add(foreground, background)
Please advise: how can I blur a mask, properly merge it with the foreground, and then overlay it on the background image?
You need to renormalize the mask before blending:
def blend_merge(lena, rocket, mask):
    mask = cv2.GaussianBlur(mask, (51, 51), 0)
    mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
    mask = mask.astype('float32') / 255
    foreground = cv2.multiply(lena, mask, dtype=cv2.CV_8U)
    background = cv2.multiply(rocket, (1 - mask), dtype=cv2.CV_8U)
    output = cv2.add(foreground, background)
    return output
A full working example is here.
Here is how to do that in Python/OpenCV. Your second method is close.
Read the 3 input images
Apply linear (or Gaussian) blur to the circle
Stretch the circle to full dynamic range (0 to 255) as a mask
Convert the mask to float in the range 0 to 1 as 3 channels
Apply mask to image1 and inverted mask to image2 via multiplication and add the products together
Convert the result to 8-bit range (0 to 255) clipping to be sure no overflow or wrap around
Save the results
Input images:
import cv2
import numpy as np
# Read images
image1 = cv2.imread('lena_wide.jpg')
image2 = cv2.imread('rocket.jpg')
circle = cv2.imread('white_circle.jpg', cv2.IMREAD_GRAYSCALE)
# linear blur mask
mask = cv2.blur(circle, (30,30))
# alternate using Gaussian blur
#mask = cv2.GaussianBlur(circle, (0,0), sigmaX=10, sigmaY=10)
# stretch mask to full dynamic range
mask = cv2.normalize(mask, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
# convert mask to float in range 0 to 1
maskf = (mask/255).astype(np.float64)
maskf = cv2.merge([maskf,maskf,maskf])
# apply mask to image1 and inverted mask to image2
result = maskf*image1 + (1-maskf)*image2
result = result.clip(0,255).astype(np.uint8)
# save results
cv2.imwrite('white_circle_ramped.jpg', mask)
cv2.imwrite('lena_wide_rocked_composited.png', result)
# show results
cv2.imshow("mask", mask)
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Ramped mask image:
Result:
ADDITION:
Here is an alternate approach using mostly Numpy.
import cv2
import numpy as np
# Read images
image1 = cv2.imread('lena_wide.jpg')
image2 = cv2.imread('rocket.jpg')
circle = cv2.imread('white_circle2.jpg', cv2.IMREAD_GRAYSCALE)
# linear blur mask
mask = cv2.blur(circle, (30,30))
# alternate using Gaussian blur
#mask = cv2.GaussianBlur(circle, (0,0), sigmaX=10, sigmaY=10)
# stretch mask to full dynamic range
mask = cv2.normalize(mask, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
# convert mask to 3 channels
mask = cv2.merge([mask,mask,mask])
# apply mask to image1 and inverted mask to image2
image1_masked = np.multiply(image1, mask/255).clip(0,255).astype(np.uint8)
image2_masked = np.multiply(image2, 1-mask/255).clip(0,255).astype(np.uint8)
# add the two masked images together
result = np.add(image1_masked, image2_masked)
# save results
cv2.imwrite('white_circle_ramped2.jpg', mask)
cv2.imwrite('lena_wide_rocked_composited2.png', result)
# show results
cv2.imshow("mask", mask)
cv2.imshow("image1_masked", image1_masked)
cv2.imshow("image2_masked", image2_masked)
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
I'm writing a script that creates a mask for an image. My input image looks like this:
The original image is only 40x40px, here it is for reference:
I want to create a mask of the purple area in the center of the image. This is what I do:
import cv2
import numpy as np
import matplotlib.pyplot as plt

# read the 40x40 image and convert it to RGB
input_image = cv2.cvtColor(cv2.imread('image.png'), cv2.COLOR_BGR2RGB)
# get the value of the color in the center of the image
center_color = input_image[20, 20]
# create the mask: pixels with same color = 255 (white), other pixels = 0 (black)
mask_bw = np.where(input_image == center_color, 255, 0)
# show the image
plt.imshow(mask_bw)
Most of the time this works perfectly fine, but for some images (like the one I attached to this question) I consistently get some blue areas in my mask like on the image below. This is reproducible and the areas are always the same for the same input images.
This is already weird enough, but if I try to remove the blue areas, this doesn't work either.
mask_bw[mask_bw != (255, 255, 255)] = 0 # this doesn't change anything..
Why is this happening and how do I fix this?
Additional info
Tried with numpy versions 1.17.3 and 1.17.4
Reproduced in my local environment and in a Google Colab notebook
The main problem is that you're comparing three channels but only setting the value for one channel, which is most likely what causes the blue areas on the mask. When you use np.where() to set the other pixels to black, you are only setting the 1st channel instead of all three. You can visualize this by splitting each channel and printing the before/after arrays, which will show you that the resulting array values are RGB (0, 0, 255). To fix this, we need to compare each individual channel, then set the desired area to white and every other area to black across all three channels. Here is one way to do it:
import numpy as np
import cv2
image = cv2.imread('1.png')
center_color = image[20, 20]
b, g, r = cv2.split(image)
mask = (b == center_color[0]) & (g == center_color[1]) & (r == center_color[2])
image[mask] = 255
image[mask==0] = 0
cv2.imshow('image', image)
cv2.waitKey()
A hotfix to remove the blue areas using your current code would be to convert the image to grayscale (1-channel) then change all non-white pixels to black.
import numpy as np
import cv2
# Load image, find color, create mask
image = cv2.imread('1.png')
center_color = image[20, 20]
mask = np.where(image == center_color, 255, 0)
mask = np.array(mask, dtype=np.uint8)
# Convert image to grayscale, convert all non-white pixels to black
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
mask[mask != 255] = 0
cv2.imshow('mask', mask)
cv2.waitKey()
Here are two alternative methods to obtain a mask of the purple area:
Method #1: Work in grayscale space
import numpy as np
import cv2
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
center_color = gray[20, 20]
mask = np.array(np.where(gray == center_color, 255, 0), dtype=np.uint8)
cv2.imshow('mask', mask)
cv2.waitKey()
Method #2: Color thresholding
The idea is to convert the image to HSV color space, then use a lower and upper color range to segment the image and create a binary mask.
import numpy as np
import cv2
image = cv2.imread('1.png')
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower = np.array([0, 124, 0])
upper = np.array([179, 255, 255])
mask = cv2.inRange(hsv, lower, upper)
cv2.imshow('mask', mask)
cv2.waitKey()
Both methods should yield the same result
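To sanity-check that on your own image, you could store the two results under different names, say mask_gray and mask_hsv (hypothetical names), and compare them directly:
# hypothetical names: mask_gray from Method #1, mask_hsv from Method #2
print(np.array_equal(mask_gray, mask_hsv))  # True if the masks match pixel for pixel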
If you have a 3-channel image (i.e. RGB or BGR or somesuch) and you want to generate a single-channel mask (i.e. you want 0/1 or True/False for each pixel), then you effectively need to group the 3 values into a single one using np.all(), like this:
import cv2
import numpy as np
# Load image and get centre colour
im = cv2.imread('40x40.png')
cc = im[20, 20]
print(im.shape)
(40, 40, 3)
# Generate list of unique colours present in image so we know what we are dealing with
print(np.unique(im.reshape(-1,3), axis=0))
array([[140, 109, 142],
       [151, 106, 140],
       [160, 101, 137],
       [165, 134, 157],
       [175, 149, 171],
       [206,  87, 109],
       [206, 185, 193]], dtype=uint8)
# Generate mask of pixels matching centre colour
mask_bw = np.where(np.all(im==cc,axis=2), 255, 0)
# Check shape of mask - no 3rd dimension !!!
print(mask_bw.shape)
(40, 40)
# Check unique colours in mask
print(np.unique(mask_bw.reshape(-1,1), axis=0))
array([[  0],
       [255]])
How can I apply a mask to a color image in the latest Python binding (cv2)? In the previous Python binding the simplest way was to use cv.Copy, e.g.
cv.Copy(dst, src, mask)
But this function is not available in the cv2 binding. Is there any workaround without using boilerplate code?
Here you could use the cv2.bitwise_and function if you already have the mask image.
Check the code below:
img = cv2.imread('lena.jpg')
mask = cv2.imread('mask.png',0)
res = cv2.bitwise_and(img,img,mask = mask)
The output will be as follows for a lena image and a rectangular mask.
Well, here is a solution if you want the background to be something other than a solid black color. We only need to invert the mask and apply it to a background image of the same size, and then combine both background and foreground. A pro of this solution is that the background could be anything (even another image).
This example is modified from Hough Circle Transform. First image is the OpenCV logo, second the original mask, third the background + foreground combined.
# http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_houghcircles/py_houghcircles.html
import cv2
import numpy as np
# load the image
img = cv2.imread('E:\\FOTOS\\opencv\\opencv_logo.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# detect circles
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_RGB2GRAY), 5)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 20, param1=50, param2=50, minRadius=0, maxRadius=0)
circles = np.uint16(np.around(circles))
# draw mask
mask = np.full((img.shape[0], img.shape[1]), 0, dtype=np.uint8)  # black single-channel mask, same size as the image
for i in circles[0, :]:
    cv2.circle(mask, (i[0], i[1]), i[2], (255, 255, 255), -1)
# get first masked value (foreground)
fg = cv2.bitwise_or(img, img, mask=mask)
# get second masked value (background) mask must be inverted
mask = cv2.bitwise_not(mask)
background = np.full(img.shape, 255, dtype=np.uint8)
bk = cv2.bitwise_or(background, background, mask=mask)
# combine foreground+background
final = cv2.bitwise_or(fg, bk)
Note: It is better to use the OpenCV methods because they are optimized.
import cv2 as cv
im_color = cv.imread("lena.png", cv.IMREAD_COLOR)
im_gray = cv.cvtColor(im_color, cv.COLOR_BGR2GRAY)
At this point you have a color and a gray image. We are dealing with 8-bit, uint8 images here. That means the images can have pixel values in the range of [0, 255] and the values have to be integers.
Let's do a binary thresholding operation. It creates a black and white masked image: the black regions have value 0 and the white regions 255.
_, mask = cv.threshold(im_gray, thresh=180, maxval=255, type=cv.THRESH_BINARY)
im_thresh_gray = cv.bitwise_and(im_gray, mask)
The mask can be seen below on the left. The image on its right is the result of applying the bitwise_and operation between the gray image and the mask. What happened is that the spatial locations where the mask had pixel value zero (black) became zero in the result image, while at the locations where the mask had pixel value 255 (white) the resulting image retained its original gray value.
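As a tiny numeric illustration of that behavior (a sketch with made-up 2x2 values):
import cv2 as cv
import numpy as np
# where the mask is 0 the AND result is 0; where it is 255 the gray value survives
gray = np.array([[10, 200], [30, 40]], dtype=np.uint8)
m = np.array([[0, 255], [255, 0]], dtype=np.uint8)
print(cv.bitwise_and(gray, m))
# [[  0 200]
#  [ 30   0]]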
To apply this mask to our original color image, we need to convert the mask into a 3 channel image as the original color image is a 3 channel image.
mask3 = cv.cvtColor(mask, cv.COLOR_GRAY2BGR) # 3 channel mask
Then, we can apply this 3 channel mask to our color image using the same bitwise_and function.
im_thresh_color = cv.bitwise_and(im_color, mask3)
mask3 from the code is the image below on the left, and im_thresh_color is on its right.
You can plot the results and see for yourself.
cv.imshow("original image", im_color)
cv.imshow("binary mask", mask)
cv.imshow("3 channel mask", mask3)
cv.imshow("im_thresh_gray", im_thresh_gray)
cv.imshow("im_thresh_color", im_thresh_color)
cv.waitKey(0)
The original image is lenacolor.png that I found here.
The answer given by Abid Rahman K is not completely correct. I also tried it and found it very helpful, but I got stuck.
This is how I copy image with a given mask.
x, y = np.where(mask != 0)
pts = zip(x, y)
# Assuming dst and src are of same sizes
for pt in pts:
    dst[pt] = src[pt]
This is a bit slow but gives correct results.
EDIT:
Pythonic way.
idx = (mask!=0)
dst[idx] = src[idx]
The other methods described assume a binary mask. If you want to use a real-valued single-channel grayscale image as a mask (e.g. from an alpha channel), you can expand it to three channels and then use it for interpolation:
assert len(mask.shape) == 2 and issubclass(mask.dtype.type, np.floating)
assert len(foreground_rgb.shape) == 3
assert len(background_rgb.shape) == 3
alpha3 = np.stack([mask]*3, axis=2)
blended = alpha3 * foreground_rgb + (1. - alpha3) * background_rgb
Note that mask needs to be in range 0..1 for the operation to succeed. It is also assumed that 1.0 encodes keeping the foreground only, while 0.0 means keeping only the background.
If the mask may have the shape (h, w, 1), this helps:
alpha3 = np.squeeze(np.stack([np.atleast_3d(mask)]*3, axis=2))
Here np.atleast_3d(mask) makes the mask (h, w, 1) if it is (h, w) and np.squeeze(...) reshapes the result from (h, w, 3, 1) to (h, w, 3).
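For instance, if your alpha channel arrives as uint8, a minimal usage sketch (assuming hypothetical arrays alpha_u8, foreground_rgb and background_rgb of matching size) would be:
# alpha_u8, foreground_rgb, background_rgb are hypothetical inputs
mask = alpha_u8.astype(np.float64) / 255.0   # rescale to the required 0..1 range
alpha3 = np.stack([mask] * 3, axis=2)
blended = (alpha3 * foreground_rgb + (1. - alpha3) * background_rgb).astype(np.uint8)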
I am trying to remove noise in an image and am currently running this code:
import numpy as np
import argparse
import cv2
from skimage import morphology
# Construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
                help="Path to the image")
args = vars(ap.parse_args())
# Load the image, convert it to grayscale, and blur it slightly
image = cv2.imread(args["image"])
cv2.imshow("Image", image)
cv2.imwrite("image.jpg", image)
greenLower = np.array([50, 100, 0], dtype = "uint8")
greenUpper = np.array([120, 255, 120], dtype = "uint8")
green = cv2.inRange(image, greenLower, greenUpper)
#green = cv2.GaussianBlur(green, (3, 3), 0)
cv2.imshow("green", green)
cv2.imwrite("green.jpg", green)
cleaned = morphology.remove_small_objects(green, min_size=64, connectivity=2)
cv2.imshow("cleaned", cleaned)
cv2.imwrite("cleaned.jpg", cleaned)
cv2.waitKey(0)
However, the image does not seem to have changed from "green" to "cleaned" despite using the remove_small_objects function. Why is this, and how do I clean the image up? Ideally I would like to isolate only the image of the cabbage.
My thought process is: after thresholding, remove pixels less than 100 in size, then smooth the image with a blur and fill up the black holes surrounded by white - that is what I did in MATLAB. If anybody could direct me to get the same results as my MATLAB implementation, that would be greatly appreciated. Thanks for your help.
Edit: I made a few mistakes when changing the code; it has been updated to what it currently is, and the 3 images are displayed below.
image:
green:
clean:
My goal is to get something like the picture below from my MATLAB implementation:
Preprocessing
A good idea when you're filtering an image is to lowpass the image or blur it a bit; that way neighboring pixels become a little more uniform in color, so it will ease brighter and darker spots on the image and keep holes out of your mask.
img = cv2.imread('image.jpg')
blur = cv2.GaussianBlur(img, (15, 15), 2)
lower_green = np.array([50, 100, 0])
upper_green = np.array([120, 255, 120])
mask = cv2.inRange(blur, lower_green, upper_green)
masked_img = cv2.bitwise_and(img, img, mask=mask)
cv2.imshow('', masked_img)
cv2.waitKey()
Colorspace
Currently, you're trying to contain an image by a range of colors with different brightness---you want green pixels, regardless of whether they are dark or light. This is much more easily accomplished in the HSV colorspace. Check out my answer here going in-depth on the HSV colorspace.
img = cv2.imread('image.jpg')
blur = cv2.GaussianBlur(img, (15, 15), 2)
hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
lower_green = np.array([37, 0, 0])
upper_green = np.array([179, 255, 255])
mask = cv2.inRange(hsv, lower_green, upper_green)
masked_img = cv2.bitwise_and(img, img, mask=mask)
cv2.imshow('', masked_img)
cv2.waitKey()
Removing noise in a binary image/mask
The answer provided by ngalstyan shows how to do this nicely with morphology. What you want to do is called opening, which is the combined process of eroding (which more or less just removes everything within a certain radius) and then dilating (which adds back to any remaining objects however much was removed). In OpenCV, this is accomplished with cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel). The tutorials on that page show how it works nicely.
img = cv2.imread('image.jpg')
blur = cv2.GaussianBlur(img, (15, 15), 2)
hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
lower_green = np.array([37, 0, 0])
upper_green = np.array([179, 255, 255])
mask = cv2.inRange(hsv, lower_green, upper_green)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
opened_mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
masked_img = cv2.bitwise_and(img, img, mask=opened_mask)
cv2.imshow('', masked_img)
cv2.waitKey()
Filling in gaps
In the above, opening was shown as the method to remove small bits of white from your binary mask. Closing is the opposite operation---removing chunks of black from your image that are surrounded by white. You can do this with the same idea as above, but using cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel). This isn't even necessary after the above in your case, as the mask doesn't have any holes. But if it did, you could close them up with closing. You'll notice my opening step actually removed a small bit of the plant at the bottom. You could actually fill those gaps with closing first, and then opening to remove the spurious bits elsewhere, but it's probably not necessary for this image.
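If your mask did have holes, a minimal sketch of that close-then-open order, reusing mask and kernel from the block above, would be:
# close first to fill black holes inside the foreground, then open to drop small white specks
closed_mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
opened_mask = cv2.morphologyEx(closed_mask, cv2.MORPH_OPEN, kernel)
masked_img = cv2.bitwise_and(img, img, mask=opened_mask)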
Trying out new values for thresholding
You might want to get more comfortable playing around with different colorspaces and threshold levels to get a feel for what will work best for a particular image. It's not complete yet and the interface is a bit wonky, but I have a tool you can use online to try out different thresholding values in different colorspaces; check it out here if you'd like. That's how I quickly found values for your image.
The above problem is solved using cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel), but if somebody wants to use morphology.remove_small_objects to remove areas smaller than a specified size, this answer may be helpful.
The code I used to remove noise from the above image is:
import numpy as np
import cv2
from skimage import morphology
# Load the image, convert it to grayscale, and blur it slightly
image = cv2.imread('im.jpg')
cv2.imshow("Image", image)
#cv2.imwrite("image.jpg", image)
greenLower = np.array([50, 100, 0], dtype = "uint8")
greenUpper = np.array([120, 255, 120], dtype = "uint8")
green = cv2.inRange(image, greenLower, greenUpper)
#green = cv2.GaussianBlur(green, (3, 3), 0)
cv2.imshow("green", green)
cv2.imwrite("green.jpg", green)
imglab = morphology.label(green) # create labels in segmented image
cleaned = morphology.remove_small_objects(imglab, min_size=64, connectivity=2)
img3 = np.zeros(cleaned.shape)  # create an array of the same size as cleaned
img3[cleaned > 0] = 255
img3= np.uint8(img3)
cv2.imshow("cleaned", img3)
cv2.imwrite("cleaned.jpg", img3)
cv2.waitKey(0)
Cleaned image is shown below:
To use morphology.remove_small_objects, labeling the blobs first is essential; for that I use imglab = morphology.label(green). Labeling works like this: all pixels of the 1st blob are numbered 1, all pixels of the 7th blob are numbered 7, and so on. After removing the small areas, the remaining blobs' pixel values should be set to 255 so that cv2.imshow() can show them. For that I create an array img3 of the same size as the cleaned image and use the line img3[cleaned > 0] = 255 to convert all pixels whose value is more than 0 to 255.
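A toy example of that labeling step (a sketch with a made-up 2x4 binary image containing one 3-pixel blob and one 1-pixel blob):
import numpy as np
from skimage import morphology
toy = np.array([[1, 1, 0, 0],
                [1, 0, 0, 1]], dtype=bool)
lab = morphology.label(toy)                              # blob pixels get ids 1, 2, ...
kept = morphology.remove_small_objects(lab, min_size=2)  # drops the 1-pixel blob
print(lab)   # [[1 1 0 0]
             #  [1 0 0 2]]
print(kept)  # [[1 1 0 0]
             #  [1 0 0 0]]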
It seems what you want to remove is a disconnected group of small blobs.
I think erode() will do a good job remove them with the right kernel.
Given an nxn kernel, erode moves the kernel through the image and replaces the center pixel by the minimum pixel in the kernel.
Then you can dilate() the resulting image to restore eroded edges of the green part.
Another option would be to use fastNlMeansDenoising.
##### option 1
kernel_size = (5,5) # should roughly have the size of the elements you want to remove
kernel_el = cv2.getStructuringElement(cv2.MORPH_RECT, kernel_size)
eroded = cv2.erode(green, kernel_el, anchor=(-1, -1))
cleaned = cv2.dilate(eroded, kernel_el, anchor=(-1, -1))
##### option 2
cleaned = cv2.fastNlMeansDenoising(green, h=10)  # green is single-channel, so use the grayscale variant