I am using the code below to remove the background of my images and highlight only my region of interest (ROI). However, on some images the algorithm behaves incorrectly, discarding the stain (the ROI) and deleting it along with the background.
import numpy as np
import cv2
#Read the image and perform threshold
img = cv2.imread('photo.bmp')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.medianBlur(gray,5)
_,thresh = cv2.threshold(blur,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
#Search for contours and select the biggest one
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
#Create a new mask for the result image
h, w = img.shape[:2]
mask = np.zeros((h, w), np.uint8)
#Draw the contour on the new mask and perform the bitwise operation
cv2.drawContours(mask, [cnt],-1, 255, -1)
res = cv2.bitwise_and(img, img, mask=mask)
#Display the result
cv2.imwrite('photo.png', res)
#cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
I don't know if I understand correctly, because when I run your code I do not get the output you posted. If you would like to obtain only the mole, it can't be done by simply thresholding, because the mole is too near the border; plus, if you look at your image closely, you will see that it has some sort of frame. However, there is a simple way to do this for this image, though it may not work in other cases. You can draw a fake border over your image to separate the ROI from the other noisy areas, then set a threshold on contour size to choose which contour to display. Cheers!
Example:
#Import all necessary libraries
import numpy as np
import cv2
#Read the image and get its height and width
img = cv2.imread('moles.png')
h, w = img.shape[:2]
# Transform to gray colorspace and blur the image.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(5,5),0)
# Make a fake rectangle around the image border that will separate the main contour.
cv2.rectangle(blur, (0,0), (w,h), (255,255,255), 10)
# Perform Otsu threshold.
_,thresh = cv2.threshold(blur,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# Create a mask for bitwise operation
mask = np.zeros((h, w), np.uint8)
# Search for contours and iterate over them. Apply a size threshold to
# eliminate the other contours.
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
for i in contours:
    cnt = cv2.contourArea(i)
    if 1000000 > cnt > 100000:
        cv2.drawContours(mask, [i], -1, 255, -1)
# Perform the bitwise operation.
res = cv2.bitwise_and(img, img, mask=mask)
# Display the result.
cv2.imwrite('mole_res.jpg', res)
cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
I'm using a clothing dataset for some tasks and I need to create a black and white mask of the clothing items. I'm using this code to achieve the below:
img = cv2.imread(filepath, 0)
#img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(img,(5,5),0)
ret3,th3 = cv2.threshold(blur,0,250,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
cv2.imwrite(os.path.join(new_dir_path, filename), th3)
Here's an example of what I have so far:
and I'm getting erroneous ones like these
How can I fill out the holes appropriately so I can get a solid appearance like the first pair?
** EDIT **
I also tried employing edge detection with the code below:
edges = cv2.Canny(image=img_blur, threshold1=100, threshold2=200) # Canny Edge Detection
and I get this output for this input:
The problem is that it's very hit and miss: for certain cases I get correct edges, and for others I don't. Also, how would I be able to fill in the holes if the edges are complex like the ones above?
Given that you know the background will always be white, I think you can do something like this:
import cv2
import numpy as np
# read the image
filepath = 'dress.png'
img = cv2.imread(filepath)
# find the mask
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# set some absolute values for the threshold, since we know the background will always be white
_, mask = cv2.threshold(img_gray,244,255,cv2.THRESH_BINARY)
mask_inv = cv2.bitwise_not(mask)
# find the largest contour
contours, _ = cv2.findContours(mask_inv, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
largest_contour = sorted(contours, key=cv2.contourArea, reverse=True)[0]
# draw the largest contour to fill in the holes in the mask
# create a white uint8 canvas and fill the largest contour with black,
# giving a solid black-and-white mask of the garment
final_result = np.full(img.shape[:2], 255, np.uint8)
final_result = cv2.drawContours(final_result, [largest_contour], -1, color=0, thickness=cv2.FILLED)
# show results
cv2.imshow('mask', mask)
cv2.imshow('img', img)
cv2.imshow('final_result', final_result)
cv2.waitKey(0)
cv2.destroyAllWindows()
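One note on the design: the largest-contour fill gives a solid mask even when the threshold leaves holes inside the garment. If the garment's outline itself is broken, though, the outer contour can miss parts of the silhouette. As a hedged extra step (the 15x15 kernel size is an assumption to tune, not a tested value), a morphological closing on mask_inv right after the bitwise_not and before findContours can join those gaps:
# optional: close small gaps in the white silhouette before the contour search
kernel = np.ones((15, 15), np.uint8)
mask_inv = cv2.morphologyEx(mask_inv, cv2.MORPH_CLOSE, kernel)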
I'm trying to calculate the size of a wound. For that, I capture an image with a coin in it, which serves as a reference to establish the scale of the image and calculate the size of the wound region. I'm using the watershed algorithm for segmentation, but the coin is being covered by the segmentation. Can someone help me outline the coin and extract it from the image, along with the wound area?
Input:
Output:
import numpy as np
import cv2
# Read the image and perform an OTSU threshold
img = cv2.imread('feridatest.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# Remove hair with opening
kernel = np.ones((6,6),np.uint8)
opening = cv2.morphologyEx(thresh,cv2.MORPH_OPEN,kernel, iterations = 2)
# Combine surrounding noise with ROI
kernel = np.ones((6,6),np.uint8)
dilate = cv2.dilate(opening,kernel,iterations=3)
# Blur the image for smoother ROI
blur = cv2.blur(dilate,(5,5))
# Perform another OTSU threshold and search for biggest contour
ret, thresh = cv2.threshold(blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
# Create a new mask for the result image
h, w = img.shape[:2]
mask = np.zeros((h, w), np.uint8)
# Draw the contour on the new mask and perform the bitwise operation
cv2.drawContours(mask, [cnt],-1, 255, -1)
res = cv2.bitwise_and(img, img, mask=mask)
# Display the result
cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
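The question mentions watershed, but the snippet above sidesteps it with morphology and a contour search. For reference, below is a minimal sketch of OpenCV's standard marker-based watershed pipeline; the file name, the kernel size, and the 0.7 distance-transform factor are assumptions to tune, not tested values:
import numpy as np
import cv2
# Read the image and get a rough foreground mask with an OTSU threshold
img = cv2.imread('feridatest.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# Remove small noise with opening
kernel = np.ones((3, 3), np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)
# Sure background: dilate the foreground mask
sure_bg = cv2.dilate(opening, kernel, iterations=3)
# Sure foreground: threshold the distance transform
dist = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
ret, sure_fg = cv2.threshold(dist, 0.7 * dist.max(), 255, 0)
sure_fg = np.uint8(sure_fg)
# Unknown region between sure background and sure foreground
unknown = cv2.subtract(sure_bg, sure_fg)
# Label the markers: background becomes 1, unknown becomes 0
ret, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
# Watershed marks region boundaries with -1
markers = cv2.watershed(img, markers)
img[markers == -1] = (0, 0, 255)
cv2.imshow('watershed', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Because each region gets its own marker id, the coin and the wound come out as separate labels that can then be measured independently (for example by counting the pixels of each label and converting with the coin's known diameter).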
I'm trying to use Tesseract in the code below to extract the two lines of text from the image. I tried to improve the image quality, but even so it didn't work.
Can anyone help me?
from PIL import Image, ImageEnhance, ImageFilter
import pytesseract
img = Image.open(r'C:\ocr\test00.jpg')
new_size = tuple(4*x for x in img.size)
img = img.resize(new_size, Image.LANCZOS)  # Image.ANTIALIAS was removed in Pillow 10; LANCZOS is the same filter
img.save(r'C:\\test02.jpg', 'JPEG')
print( pytesseract.image_to_string( img ) )
Given the comment by @barny I don't know if this will work, but you can try the code below. I created a script that selects the display area and warps it into a straight image. Next, a threshold creates a black-and-white mask of the characters, and the result is cleaned up a bit.
Try it and see if it improves recognition. If it does, also look at the intermediate stages so you'll understand everything that happens.
Update: it seems Tesseract prefers black text on a white background, so I inverted and dilated the result.
Result:
Updated result:
Code:
import numpy as np
import cv2
# load image
image = cv2.imread('disp.jpg')
# create grayscale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# perform threshold
retr, mask = cv2.threshold(gray_image, 190, 255, cv2.THRESH_BINARY)
# findcontours
contours, hier = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# select the largest contour
largest_area = 0
for cnt in contours:
    if cv2.contourArea(cnt) > largest_area:
        cont = cnt
        largest_area = cv2.contourArea(cnt)
# find the rectangle (and the cornerpoints of that rectangle) that surrounds the contours / photo
rect = cv2.minAreaRect(cont)
box = cv2.boxPoints(rect)
box = box.astype(int)  # np.int0 was removed in NumPy 2.0
#### Warp image to square
# assign cornerpoints of the region of interest
pts1 = np.float32([box[2],box[3],box[1],box[0]])
# provide new coordinates of cornerpoints
pts2 = np.float32([[0,0],[500,0],[0,110],[500,110]])
# determine and apply the transformation matrix
M = cv2.getPerspectiveTransform(pts1,pts2)
tmp = cv2.warpPerspective(image,M,(500,110))
# create grayscale
gray_image2 = cv2.cvtColor(tmp, cv2.COLOR_BGR2GRAY)
# perform threshold
retr, mask2 = cv2.threshold(gray_image2, 160, 255, cv2.THRESH_BINARY_INV)
# remove noise / close gaps
kernel = np.ones((5,5),np.uint8)
result = cv2.morphologyEx(mask2, cv2.MORPH_CLOSE, kernel)
#draw rectangle on original image
cv2.drawContours(image, [box], 0, (255,0,0), 2)
# dilate result to make characters more solid
kernel2 = np.ones((3,3),np.uint8)
result = cv2.dilate(result,kernel2,iterations = 1)
#invert to get black text on white background
result = cv2.bitwise_not(result)
#show image
cv2.imshow("Result", result)
cv2.imshow("Image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
I'm pre-processing some images in order to remove the background from my area of interest. However, the images in my dataset have dark rounded edges due to the camera's lens. How do I discard these rounded edges and remove only my object of interest from the image? With the code below I can remove the background of the image, but it does not work correctly because of the edges around it.
import numpy as np
import cv2
#Read the image and get its height and width
img = cv2.imread('IMD408.bmp')
h, w = img.shape[:2]
# Transform to gray colorspace and blur the image.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(5,5),0)
# Make a fake rectangle around the image border that will separate the main contour.
cv2.rectangle(blur, (0,0), (w,h), (255,255,255), 10)
# Perform Otsu threshold.
_,thresh = cv2.threshold(blur,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# Create a mask for bitwise operation
mask = np.zeros((h, w), np.uint8)
# Search for contours and iterate over them. Apply a size threshold to
# eliminate the other contours.
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
for i in contours:
    cnt = cv2.contourArea(i)
    if 1000000 > cnt > 100000:
        cv2.drawContours(mask, [i], -1, 255, -1)
# Perform the bitwise operation.
res = cv2.bitwise_and(img, img, mask=mask)
# Display the result.
cv2.imwrite('IMD408.png', res)
cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
Input image:
Output:
Error:
Since you mentioned that all the images have the same hue, this should work well for them. The steps are to do some white balancing, which will increase the contrast a bit.
Get the grayscale.
Threshold the grayscale image. Values greater than 127 are set to 255 (white). This will give you a binary image, which will become a mask for the original image.
Apply the mask.
You might have to play around with the thresholding if you want better results; see OpenCV's thresholding documentation for that. But this should get you started. I'm using a different OpenCV version than you, so you might have to tweak the code a bit.
import cv2
def equaliseWhiteBalance(image):
    ''' Return equalised WB of an image '''
    wb = cv2.xphoto.createSimpleWB()            #Create WB object
    imgWB = wb.balanceWhite(image)              #Balance white on the image
    b,g,r = cv2.split(imgWB)                    #Get individual b,g,r channels (OpenCV images are BGR)
    b_equ = cv2.equalizeHist(b)                 #Equalise BLUE channel
    g_equ = cv2.equalizeHist(g)                 #Equalise GREEN channel
    r_equ = cv2.equalizeHist(r)                 #Equalise RED channel
    img_equ_WB = cv2.merge([b_equ,g_equ,r_equ]) #Merge equalised channels
    return img_equ_WB                           #Return the equalised image
#Read the image
img = cv2.imread('IMD408.bmp')
result = img.copy()
#Get whiteBalance of image
imgWB = equaliseWhiteBalance(img)
cv2.imshow('img', imgWB)
cv2.waitKey(0)
# Get gray image
gray = cv2.cvtColor(imgWB,cv2.COLOR_BGR2GRAY)  # OpenCV images are BGR, not RGB
cv2.imshow('img', gray)
cv2.waitKey(0)
# Perform threshold
_, thresh = cv2.threshold(gray,127,255,cv2.THRESH_BINARY)
cv2.imshow('img', thresh)
cv2.waitKey(0)
# Apply mask
result[thresh!=0] = (255,255,255)
cv2.imshow('img', result)
cv2.waitKey(0)
If the dark corner vignettes have different sizes in each image, then I suggest looking at the centroids of the contours in the binary (mask) image. Centroids a short distance from any corner of your image will belong to the dark vignettes, so their value can be changed from black to white.
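A minimal sketch of that idea, continuing from the thresh image in the code above; the 0.25 distance factor is an arbitrary assumption you would need to tune:
import numpy as np
# Dark regions (lesion + corner vignettes) become white blobs after inversion
inv = cv2.bitwise_not(thresh)
h, w = thresh.shape[:2]
corners = [(0, 0), (w, 0), (0, h), (w, h)]
contours, hierarchy = cv2.findContours(inv, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    M = cv2.moments(c)
    if M['m00'] == 0:
        continue
    cx, cy = M['m10'] / M['m00'], M['m01'] / M['m00']
    # A centroid close to an image corner marks a dark vignette, not the lesion,
    # so flip that blob from black to white in the mask
    if min(np.hypot(cx - x, cy - y) for x, y in corners) < 0.25 * min(h, w):
        cv2.drawContours(thresh, [c], -1, 255, -1)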
I am working with skin images, recognizing skin blemishes, and due to the presence of noise, mainly hairs, this work becomes more complicated.
I have an example image on which I am working, trying to highlight only the skin spot, but due to the large number of hairs the algorithm is not effective. With this in mind, I would like your help in developing an algorithm to remove or reduce the amount of hair, so that I can highlight only my area of interest (ROI), i.e. the spots.
Algorithm used to highlight skin blemishes:
import numpy as np
import cv2
#Read the image and perform threshold
img = cv2.imread('IMD006.bmp')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.medianBlur(gray,5)
_,thresh = cv2.threshold(blur,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
#Search for contours and select the biggest one
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
#Create a new mask for the result image
h, w = img.shape[:2]
mask = np.zeros((h, w), np.uint8)
#Draw the contour on the new mask and perform the bitwise operation
cv2.drawContours(mask, [cnt],-1, 255, -1)
res = cv2.bitwise_and(img, img, mask=mask)
#Display the result
cv2.imwrite('IMD006.png', res)
#cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
Example image used:
How can I deal with this noise in a way that improves my region of interest?
This is quite a difficult task because the hair goes over your ROI (the mole). I don't know how to help remove it from the mole itself, but I can help remove the background, as in the picture without hairs. For the removal of the hairs from the mole, I advise you to search for "removing watermarks from images" and "deep neural networks", to maybe train a model to remove the hairs (note that this task will be quite difficult).
That being said, for removing the background you could try the same code that you already have for detection without hairs. You will get a binary image like this:
Now your region is filled with white lines (hairs) that go over your contour, which is your ROI, and cv2.findContours() would also pick them out because they are connected. But if you look at the picture you will notice that the white lines are quite thin, and you can remove them from the image by performing an opening (cv2.morphologyEx). Opening is erosion followed by dilation, so when you erode the image with a big enough kernel size, the white lines will disappear:
Now you have a white spot with some noise around it, which you can connect by performing another dilation (cv2.dilate()):
To make the ROI a bit smoother you can blur the image with cv2.blur():
After that you can apply another threshold and search for the biggest contour. The final result:
Hope it helps a bit. Cheers!
Example code:
import numpy as np
import cv2
# Read the image and perform an OTSU threshold
img = cv2.imread('hair.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# Remove hair with opening
kernel = np.ones((5,5),np.uint8)
opening = cv2.morphologyEx(thresh,cv2.MORPH_OPEN,kernel, iterations = 2)
# Combine surrounding noise with ROI
kernel = np.ones((6,6),np.uint8)
dilate = cv2.dilate(opening,kernel,iterations=3)
# Blur the image for smoother ROI
blur = cv2.blur(dilate,(15,15))
# Perform another OTSU threshold and search for biggest contour
ret, thresh = cv2.threshold(blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
# Create a new mask for the result image
h, w = img.shape[:2]
mask = np.zeros((h, w), np.uint8)
# Draw the contour on the new mask and perform the bitwise operation
cv2.drawContours(mask, [cnt],-1, 255, -1)
res = cv2.bitwise_and(img, img, mask=mask)
# Display the result
cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()