Extract ridges and valleys from a finger image - Python

For my class project I am trying to extract ridges and valleys from a finger image. An example is given below.
#The code I am using
import cv2
import numpy as np
import math
import fingerprint_enhancer
clip_hist_percent=25
image = cv2.imread("")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Calculate grayscale histogram
hist = cv2.calcHist([gray],[0],None,[256],[0,256])
hist_size = len(hist)
# Calculate cumulative distribution from the histogram
accumulator = []
accumulator.append(float(hist[0]))
for index in range(1, hist_size):
    accumulator.append(accumulator[index - 1] + float(hist[index]))
# Locate points to clip
maximum = accumulator[-1]
clip_hist_percent *= (maximum/100.0)
clip_hist_percent /= 2.0
# Locate left cut
minimum_gray = 0
while accumulator[minimum_gray] < clip_hist_percent:
    minimum_gray += 1
# Locate right cut
maximum_gray = hist_size -1
while accumulator[maximum_gray] >= (maximum - clip_hist_percent):
    maximum_gray -= 1
# Calculate alpha and beta values
alpha = 255 / (maximum_gray - minimum_gray)
beta = -minimum_gray * alpha
auto_result = cv2.convertScaleAbs(image, alpha=alpha, beta=beta)
gray = cv2.cvtColor(auto_result, cv2.COLOR_BGR2GRAY)
# compute gamma = log(mid*255)/log(mean)
mid = 0.5
mean = np.mean(gray)
gamma = math.log(mid*255)/math.log(mean)
# do gamma correction
img_gamma1 = np.power(auto_result, gamma).clip(0, 255).astype(np.uint8)
g1 = cv2.cvtColor(img_gamma1, cv2.COLOR_BGR2GRAY)
# blur = cv2.GaussianBlur(g1,(2,1),0)
thresh2 = cv2.adaptiveThreshold(g1, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                cv2.THRESH_BINARY, 199, 3)
# blur = cv2.GaussianBlur(thresh2,(2,1),0)
blur=((3,3),1)
erode_=(5,5)
dilate_=(3, 3)
dilate = cv2.dilate(cv2.erode(cv2.GaussianBlur(thresh2/255, blur[0], blur[1]),
                              np.ones(erode_)), np.ones(dilate_))*255
out = fingerprint_enhancer.enhance_Fingerprint(dilate)
I am having difficulty extracting the lines on the finger. I tried to adjust the brightness and contrast, applied calcHist, adaptive thresholding, applied blur, then applied the Gabor filters (as per the UTKARSH code). The result looks like the above.
We can clearly see that the lower part of the image has many spurious lines. My project requirement is to get clear lines from the RGB image. Could anyone help me with the steps and the code?
Thank you in advance
reference:
https://github.com/Utkarsh-Deshmukh/Fingerprint-Enhancement-Python
https://ieeexplore.ieee.org/abstract/document/7358782

There are several strange things (IMO) about your code.
First, you do a contrast stretch that sets the 12.5% darkest pixels to black and the 12.5% brightest pixels to white. You probably already have that many white pixels, so not much happens there, but you do remove all the information in the darkest region of the fingerprint.
Next you threshold. Here you remove most of the remaining information. Thresholding is something you should leave until the very last step of any processing. In particular, the algorithm implemented in fingerprint_enhancer.enhance_Fingerprint() takes a gray-scale image as input. You should not binarize its input at all!
I would start with a local contrast stretch, then you can directly apply the enhancement algorithm:
import cv2
import fingerprint_enhancer
image = cv2.imread("zMxbO.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Apply local contrast stretch
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25)) # larger than the width of the widest ridges
low = cv2.morphologyEx(gray, cv2.MORPH_OPEN, se) # locally lowest grayvalue
high = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, se) # locally highest grayvalue
gray = (gray - low) / (high - low + 1e-6)
# Apply fingerprint enhancement
out = fingerprint_enhancer.enhance_Fingerprint(gray, resize=True)
The local contrast stretch yields this:
The fingerprint enhancement algorithm now yields this:
Note that things go wrong around the edges, where the background was cut out and replaced with white, as well as in the dark region, where the noise dominates and the enhancement algorithm hallucinates a bit. I don't think you can extract meaningful information from that area; better illumination would be necessary.


Approximating edge with rough outline - OpenCV

I've been researching and trying a couple functions to get what I want and I feel like I might be overthinking it.
One version of my code is below. The sample image is here.
My end goal is to find the angle (yellow) of the approximated line with respect to the frame (green line).
I haven't even got to the angle portion of the program yet.
The results I was obtaining from the code below were as follows (images: Canny, Closed, Small Removed).
Anybody have a better way of creating the difference and establishing the estimated line?
Any help is appreciated.
import cv2
import numpy as np
pX = int(512)
pY = int(768)
img = cv2.imread('IMAGE LOCATION', cv2.IMREAD_COLOR)
imgS = cv2.resize(img, (pX, pY))
aimg = cv2.imread('IMAGE LOCATION', cv2.IMREAD_GRAYSCALE)
# Blur image to reduce noise and resize for viewing
blur = cv2.medianBlur(aimg, 5)
rblur = cv2.resize(blur, (384, 512))
canny = cv2.Canny(rblur, 120, 255, 1)
cv2.imshow('canny', canny)
kernel = np.ones((2, 2), np.uint8)
#fringeMesh = cv2.dilate(canny, kernel, iterations=2)
#fringeMesh2 = cv2.dilate(fringeMesh, None, iterations=1)
#cv2.imshow('fringeMesh', fringeMesh2)
closing = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, kernel)
cv2.imshow('Closed', closing)
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(closing, connectivity=8)
#connectedComponentswithStats yields every separated component with information on each of them, such as size
sizes = stats[1:, -1]; nb_components = nb_components - 1
min_size = 200 #num_pixels
fringeMesh3 = np.zeros((output.shape))
for i in range(0, nb_components):
    if sizes[i] >= min_size:
        fringeMesh3[output == i + 1] = 255
#contours, _ = cv2.findContours(fringeMesh3, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
#cv2.drawContours(fringeMesh3, contours, -1, (0, 255, 0), 1)
cv2.imshow('final', fringeMesh3)
#cv2.imshow("Natural", imgS)
#cv2.imshow("img", img)
cv2.imshow("aimg", aimg)
cv2.imshow("Blur", rblur)
cv2.waitKey()
cv2.destroyAllWindows()
You can fit a straight line to the first white pixel you encounter in each column, starting from the bottom.
I had to trim your image because you shared a screen grab of it with a window decoration, title and frame rather than your actual image:
import cv2
import math
import numpy as np
# Load image as greyscale
im = cv2.imread('trimmed.jpg', cv2.IMREAD_GRAYSCALE)
# Get index of first white pixel in each column, starting at the bottom
yvals = (im[::-1,:]>200).argmax(axis=0)
# Make the x values 0, 1, 2, 3...
xvals = np.arange(0,im.shape[1])
# Fit a line of the form y = mx + c
z = np.polyfit(xvals, yvals, 1)
# Convert the slope to an angle
angle = np.arctan(z[0]) * 180/math.pi
Note 1: The value of z (the result of fitting) is:
array([ -0.74002694, 428.01463745])
which means the equation of the line you are looking for is:
y = -0.74002694 * x + 428.01463745
i.e. the y-intercept is at row 428 from the bottom of the image.
Note 2: Try to avoid JPEG format as an intermediate format in image processing - it is lossy and changes your pixel values - so where you have thresholded and done your morphology you are expecting values of 255 and 0, JPEG will lossily alter those values and you end up testing for a range or thresholding again.
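For example, when saving an intermediate mask, a lossless format such as PNG keeps the 0/255 values intact; a one-line illustration using the closing image from the code above:
cv2.imwrite('closed_mask.png', closing)  # PNG is lossless; JPEG would smear the 0/255 values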
Your 'Closed' image seems to quite clearly segment the two regions, so I'd suggest you focus on turning that boundary into a line that you can do something with. Connected components analysis and contour detection don't really provide any useful information here, so aren't necessary.
One quite simple approach to finding the line angle is to find the first white pixel in each row. To get only the rows that are part of your diagonal, don't include rows where that pixel is too close to either side (e.g. within 5%). That gives you a set of points (pixel locations) on the boundary of your two types of grass.
From there you can either do a linear regression to get an equation for the straight line, or you can get two points by averaging the x values for the top and bottom half of the rows, and then calculate the gradient angle from that.
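A rough sketch of that row-wise approach, assuming the 'Closed' image has been saved as closed.png (the 200 threshold and the 5% margin are just illustrative values to tune):
import cv2
import numpy as np

closed = cv2.imread('closed.png', cv2.IMREAD_GRAYSCALE)
h, w = closed.shape
first_white = (closed > 200).argmax(axis=1)                   # column of the first white pixel in each row
margin = int(0.05 * w)
keep = (first_white > margin) & (first_white < w - margin)    # drop rows that hit too close to either side
m, c = np.polyfit(np.arange(h)[keep], first_white[keep], 1)   # boundary as x = m*y + c
angle = np.degrees(np.arctan(m))                              # angle of the boundary relative to the vertical frame edge
print(angle)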
An alternative approach would be doing another morphological close with a very large kernel, to end up with just a solid white region and a solid black region, which you could turn into a line with canny or findContours. From there you could either get some points by averaging, use the endpoints, or given a smooth enough result from a large enough kernel you could detect the line with hough lines.
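A sketch of that alternative, again starting from the saved 'Closed' image (the kernel size and Hough parameters are guesses you would need to tune):
import cv2
import numpy as np

closed = cv2.imread('closed.png', cv2.IMREAD_GRAYSCALE)
solid = cv2.morphologyEx(closed, cv2.MORPH_CLOSE, np.ones((51, 51), np.uint8))  # one solid white and one solid black region
edges = cv2.Canny(solid, 50, 150)                             # boundary between the two regions
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=20)
if lines is not None:
    x1, y1, x2, y2 = lines[0][0]
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))          # angle of the detected line
    print(angle)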

How can I remove these parallel lines noise on my image using opencv

I'm new to OpenCV and I'm trying to remove all these diagonal parallel lines that are noise in my image.
I have tried using HoughLinesP after some erosion/dilation (keeping only the lines with an angle near 135 degrees), but the result is poor.
import cv2
import math
import numpy as np

img = cv2.imread('images/dungeon.jpg')
ret,img = cv2.threshold(img,180,255,0)
element = cv2.getStructuringElement(cv2.MORPH_CROSS,(5,5))
eroded = cv2.erode(img,element)
dilate = cv2.dilate(eroded, element)
skeleton = cv2.subtract(img, dilate)
gray = cv2.cvtColor(skeleton,cv2.COLOR_BGR2GRAY)
minLineLength = 10
lines = cv2.HoughLinesP(gray, 1, np.pi/180, 1, 10, 0.5)
for line in lines:
    for x1, y1, x2, y2 in line:
        angle = math.atan2(y2 - y1, x2 - x1)
        if (angle > -0.1 and angle < 0.1):
            cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 1)
cv2.imshow("result", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
My thinking here was to detect these lines in order to remove them afterwards, but I'm not even sure that's a good way to do this.
I guess you are trying to get the contours of the walls, right? Here’s a possible path to the solution using mainly spatial filtering. You will still need to clean the results to get where you want. The idea is to try and compute a mask of the parallel lines (high-frequency noise) of the image and calculate the difference between the (binary) input and this mask. These are the steps:
Convert the input image to grayscale
Apply Gaussian Blur to get rid of the high-frequency noise you are trying to eliminate
Get a binary image of the blurred image
Apply area filters to get rid of everything that is not noise, to get a noise mask
Compute the difference between the original binary mask and the noise mask
Clean up the difference image
Compute contours on this image
Let’s see the code:
import cv2
import numpy as np
# Set image path
path = "C://opencvImages//"
fileName = "map.png"
# Read Input image
inputImage = cv2.imread(path+fileName)
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Apply Gaussian Blur:
blurredImage = cv2.GaussianBlur(grayscaleImage, (3, 3), cv2.BORDER_DEFAULT)
# Threshold via Otsu:
_, binaryImage = cv2.threshold(blurredImage, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Save a copy of the binary mask
binaryCopy = cv2.cvtColor(binaryImage, cv2.COLOR_GRAY2BGR)
This is the output:
Up until now you get this binary mask. The process so far has smoothed the noise and is creating thick black blobs where the noise is located. Again, the idea is to generate a noise mask that can be subtracted from this image.
Let’s apply an area filter and try to remove the big white blobs, which are NOT the noise we are interested to preserve. I’ll define the function towards the end, for now I just want to present the general idea:
# Set the minimum pixels for the area filter:
minArea = 50000
# Perform an area filter on the binary blobs:
filteredImage = areaFilter(minArea, binaryImage)
The area filter keeps only the white blobs whose area is above the minimum threshold; subtracting this mask later removes them from the binary image. The value is big because in this particular case we are interested in preserving only the black blobs. This is the result:
We have a pretty solid mask. Let’s subtract this from the original binary mask we created earlier:
# Get the difference between the binary image and the mask:
imgDifference = binaryImage - filteredImage
This is what we get:
The difference image has some small noise. Let’s apply the area filter again to get rid of it. This time with a more traditional threshold value:
# Set the minimum pixels for the area filter:
minArea = 20
# Perform an area filter on the binary blobs:
filteredImage = areaFilter(minArea, imgDifference)
Cool. This is the final mask:
Just for completeness. Let’s compute contours on this input, which is very straightforward:
# Find the big contours/blobs on the filtered image:
contours, hierarchy = cv2.findContours(filteredImage, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
# Draw the contours on the mask image:
cv2.drawContours(binaryCopy, contours, -1, (0, 255, 0), 3)
Let’s see the result:
As you see it is not perfect. However, there’s still some room for improvement, perhaps you can polish a little bit more this idea to get a potential solution. Here's the definition and implementation of the areaFilter function:
def areaFilter(minArea, inputImage):
    # Perform an area filter on the binary blobs:
    componentsNumber, labeledImage, componentStats, componentCentroids = \
        cv2.connectedComponentsWithStats(inputImage, connectivity=4)
    # Get the indices/labels of the remaining components based on the area stat
    # (skip the background component at index 0)
    remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]
    # Filter the labeled pixels based on the remaining labels,
    # assign pixel intensity to 255 (uint8) for the remaining pixels
    filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels), 255, 0).astype('uint8')
    return filteredImage

Opencv, Python - How to remove the gray pixels around the date text

I am trying to remove the grayish “noise” surrounding the dates using Python/OpenCV to help the OCR (Optical Character Recognition) to recognize the dates.
The original image looks like this: https://static.mothership.sg/1/2017/03/10-Feb-MC-1.jpg
The Python script I tried looks like the one below. However, I have other similar images in which the contrast or lighting conditions vary.
import cv2
import numpy as np
img = cv2.imread("mc.jpeg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
alpha = 3.5
beta = -2
new = alpha * img + beta
new = np.clip(new, 0, 255).astype(np.uint8)
cv2.imwrite("cleaned.png", new)
I also tried thresholding and/or adaptive thresholding, and sometimes I was able to separate the dates from the grayish background; sometimes it was very challenging. I wonder whether there is an automatic way to determine the threshold value?
Below are examples of what I hope to achieve.
Blurry Image:
Otsu's Binarization automatically calculates a threshold value from an image histogram.
# Otsu's thresholding after Gaussian filtering
blur = cv2.GaussianBlur(img,(5,5),0)
ret,Otsu = cv2.threshold(blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
cv2.imwrite("Otsu's_thresholding", Otsu)
see this link
You can try to build a model of the background and then weight each input pixel by that model. The output gain should be relatively constant during most of the image. These are the steps for this method:
Apply a soft median blur filter to get rid of small noise
Get the model of the background via local maximum. Apply a very strong close operation, with a big structuring element (I’m using a rectangular kernel of size 15)
Perform gain adjustment by dividing 255 by each local-maximum pixel. Weight this value with each input image pixel.
You should get a nice image where the background illumination is pretty much normalized; threshold this image to get a binary mask of the text.
This is the code:
import numpy as np
import cv2
# image path
path = "C:/opencvImages/sheet01.jpg"
# Read an image in default mode:
inputImage = cv2.imread(path)
# Remove small noise via median:
filterSize = 5
imageMedian = cv2.medianBlur(inputImage, filterSize)
# Get local maximum:
kernelSize = 15
maxKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
localMax = cv2.morphologyEx(imageMedian, cv2.MORPH_CLOSE, maxKernel, None, None, 1, cv2.BORDER_REFLECT101)
# Adjust image gain:
height, width, depth = localMax.shape
# Create output Mat:
outputImage = np.zeros(shape=[height, width, depth], dtype=np.uint8)
for i in range(0, height):
    for j in range(0, width):
        # Get current BGR pixels:
        v1 = inputImage[i, j]
        v2 = localMax[i, j]
        # Gain adjust:
        tempArray = []
        for c in range(0, 3):
            currentPixel = v2[c]
            if currentPixel != 0:
                gain = 255 / v2[c]
                gain = v1[c] * gain
            else:
                gain = 0
            # Gain set and clamp:
            tempArray.append(np.clip(gain, 0, 255))
        # Set pixel vec to out image:
        outputImage[i, j] = tempArray
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(outputImage, cv2.COLOR_BGR2GRAY)
# Threshold:
threshValue = 110
_, binaryImage = cv2.threshold(grayscaleImage, threshValue, 255, cv2.THRESH_BINARY)
# Write image:
imageFilename = "C:/opencvImages/binaryMask2.png"
cv2.imwrite(imageFilename, binaryImage)
I get the following results testing the complete image:
And the cropped text:
Please note that the gain adjustment operations are not vectorized. The script is slow, mainly because I'm starting with Python and don’t know the proper Numpy syntax to speed-up this operation. I've been using C++ for a long time, so feel free to further improve the code.
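For reference, here is one possible vectorized version of the gain-adjustment loop above, using NumPy broadcasting; it is a sketch that reuses the variable names from the code above and should produce the same result:
localMaxFloat = localMax.astype(np.float32)
# Per-pixel gain: 255 / localMax, with 0 wherever localMax is 0 (same rule as the loop above)
gain = np.divide(255.0, localMaxFloat, out=np.zeros_like(localMaxFloat), where=localMaxFloat != 0)
outputImage = np.clip(inputImage.astype(np.float32) * gain, 0, 255).astype(np.uint8)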
Edit:
Please be aware that your result can only be as good as the quality of your input. Look at your input and ask yourself "Is this a good input for an automated process?" (automated processes are usually not very smart). The second picture you posted is very low quality. Not only is it blurry, but it is also low-resolution and has compression artifacts. All these factors will hinder automated processing.
With that said, here's an improvement you can include in the original:
Try to normalize brightness-contrast on the grayscale output:
grayscaleImage = np.uint8(cv2.normalize(grayscaleImage, grayscaleImage, 0, 255, cv2.NORM_MINMAX))
Your grayscale image goes from this:
to this:
A little bit darker and improved on contrast. Let's try to compute the optimal threshold value automatically via Otsu thresholding:
threshValue, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
It gets you this:
However, we can adjust the result if we add bias to Otsu's threshold, like this:
threshValue, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
bias = 0.9
threshValue = bias * threshValue
_, binaryImage = cv2.threshold(grayscaleImage, threshValue, 255, cv2.THRESH_BINARY)
That's the best quality you can get with these images using this method.
If you find these suggestions and tips useful, please, at least up-vote my answer.

Robust Algorithm to detect uneven illumination in images [Detection Only Needed]

One of the biggest challenges in tesseract OCR text recognition is the uneven illumination of images.
I need an algorithm that can decide whether an image contains uneven illumination or not.
Test Images
I attached images: a no-illumination image (evenly lit), a glare image (white-spotted image), and a shadow-containing image.
If we give an image to the algorithm, the algorithm should classify it into one of two classes:
No uneven illumination - our no-illumination image will fall into this category.
Uneven illumination - our glare image (white-spotted image) and shadow-containing image will fall into this category.
No Illumination Image - Category A
Uneven Illumination Image (glare image (white-spotted image)) - Category B
Uneven Illumination Image (shadow-containing image) - Category B
Initial Approach
Change colour space to HSV
Histogram analysis of the value channel of HSV to identify the uneven illumination.
Instead of the first two steps, we can use the perceived brightness channel rather than the value channel of HSV.
Set a low threshold value to get the number of pixels which are less than the low threshold.
Set a high threshold value to get the number of pixels which are higher than the high threshold.
Use the percentage of low-value pixels and the percentage of high-value pixels to detect uneven lighting conditions (setting a threshold for the percentages as well).
But I could not find strong similarities between the uneven-illumination images. With histogram analysis I just found that some pixels have low values and some pixels have high values.
Basically, what I feel is: set a low threshold value and find how many pixels are below it, and set a high threshold value and find how many pixels are above it. With those pixel counts, can we come to a conclusion to detect uneven lighting conditions in images? Here we need to finalize two threshold values and the percentage of pixels to come to a conclusion.
import cv2
import numpy as np
from matplotlib import pyplot as plt

def show_hist_v(img_path):
    img = cv2.imread(img_path)
    hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv_img)
    histr = cv2.calcHist([v], [0], None, [255], [0, 255])
    plt.plot(histr)
    plt.show()
    low_threshold = np.count_nonzero(v < 50)
    high_threshold = np.count_nonzero(v > 200)
    total_pixels = img.shape[0] * img.shape[1]
    percent_low = low_threshold / total_pixels * 100
    percent_high = high_threshold / total_pixels * 100
    print("Total Pixels - {}\nPixels More than 200 - {}\nPixels Less than 50 - {}\nPixels percentage more than 200 - {}\nPixels percentage less than 50 - {}\n".format(
        total_pixels, high_threshold, low_threshold, percent_high, percent_low))
    return total_pixels, high_threshold, low_threshold, percent_low, percent_high
So can someone improve my initial approach or give better than this approach to detect uneven illumination in images for general cases?
Also, I tried perceived brightness instead of the value channel. Since the value channel takes the maximum of the (b, g, r) values, I think perceived brightness is a better choice.
def get_perceive_brightness(float_img):
    float_img = np.float64(float_img)  # uint8 would overflow
    b, g, r = cv2.split(float_img)
    float_brightness = np.sqrt(
        (0.241 * (r ** 2)) + (0.691 * (g ** 2)) + (0.068 * (b ** 2)))
    brightness_channel = np.uint8(np.absolute(float_brightness))
    return brightness_channel
def show_hist_v(img_path):
    img = cv2.imread(img_path)
    v = get_perceive_brightness(img)
    histr = cv2.calcHist([v], [0], None, [255], [0, 255])
    plt.plot(histr)
    plt.show()
    low_threshold = np.count_nonzero(v < 50)
    high_threshold = np.count_nonzero(v > 200)
    total_pixels = img.shape[0] * img.shape[1]
    percent_low = low_threshold / total_pixels * 100
    percent_high = high_threshold / total_pixels * 100
    print("Total Pixels - {}\nPixels More than 200 - {}\nPixels Less than 50 - {}\nPixels percentage more than 200 - {}\nPixels percentage less than 50 - {}\n".format(
        total_pixels, high_threshold, low_threshold, percent_high, percent_low))
    return total_pixels, high_threshold, low_threshold, percent_low, percent_high
Histogram analysis of perceived brightness channel
As Ahmet suggested.
def get_percentage_of_binary_pixels(img=None, img_path=None):
    if img is None:
        if img_path is not None:
            gray_img = cv2.imread(img_path, 0)
        else:
            return "No img or img_path"
    else:
        print(img.shape)
        if len(img.shape) > 2:
            gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        else:
            gray_img = img
    h, w = gray_img.shape
    gaussian_blur = cv2.GaussianBlur(gray_img, (5, 5), 0)
    thresh_value, otsu_img = cv2.threshold(gaussian_blur, 0, 255,
                                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imwrite("binary/{}".format(img_path.split('/')[-1]), otsu_img)
    black_pixels = np.count_nonzero(otsu_img == 0)
    # white_pixels = np.count_nonzero(otsu_img == 255)
    black_pixels_percentage = black_pixels / (h * w) * 100
    # white_pixels_percentage = white_pixels / (h * w) * 100
    return black_pixels_percentage
When we get more than 35% black-pixel percentage with Otsu binarization, we can detect uneven-illumination images about 80% of the time. When the illumination problem occurs only in a small region of the image, the detection fails.
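A small sketch of that decision rule using the function above (the 35% cut-off is the empirical value mentioned above; 'test.jpg' is just a placeholder path):
black_percentage = get_percentage_of_binary_pixels(img_path='test.jpg')
if black_percentage > 35:
    print('Uneven illumination suspected')
else:
    print('Illumination looks even')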
Thanks in advance
I suggest using the division trick to separate text from the background, and then calculating statistics on the background only. After setting some reasonable thresholds it is easy to create a classifier for the illumination.
import cv2
from matplotlib import pyplot as plt

def get_image_stats(img_path, lbl):
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (25, 25), 0)
    no_text = gray * ((gray/blurred) > 0.99)                        # select background only
    no_text[no_text < 10] = no_text[no_text > 20].mean()            # convert black pixels to mean value
    no_bright = no_text.copy()
    no_bright[no_bright > 220] = no_bright[no_bright < 220].mean()  # disregard bright pixels
    print(lbl)
    std = no_bright.std()
    print('STD:', std)
    bright = (no_text > 220).sum()
    print('Brigth pixels:', bright)
    plt.figure()
    plt.hist(no_text.reshape(-1, 1), 25)
    plt.title(lbl)
    if std > 25:
        print("!!! Detected uneven illumination")
    if no_text.mean() < 200 and bright > 8000:
        print("!!! Detected glare")
This results in:
good_img
STD: 11.264569863071165
Brigth pixels: 58
glare_img
STD: 15.00149131296984
Brigth pixels: 15122
!!! Detected glare
uneven_img
STD: 57.99510339944441
Brigth pixels: 688
!!! Detected uneven illumination
Now let's analyze the histograms and apply some common sense. We expect the background to be even and have low variance, as is the case in "good_img". If it has high variance, then its standard deviation would be high, and that is the case of uneven brightness. On the lower image you can see 3 (smaller) peaks that are responsible for the 3 differently illuminated areas. The largest peak in the middle is the result of setting all black pixels to the mean value. I believe it is safe to call images with an STD above 25 an "uneven illumination" case.
It is easy to spot a high number of bright pixels when there is glare (see the image on the right). A glared image looks like a good image, besides the hot spot. Setting a bright-pixel threshold to something like 8000 (1.5% of the total image size) should be good to detect such images. There is a possibility that the background is very bright everywhere, so if the mean of the no_text pixels is above 200, then that is the case and there is no need to detect hot spots.
Why don't you remove the lighting effect from the images?
For instance:
If we want to read it with pytesseract, the output will be ' \n\f'.
But if we remove the lighting:
import cv2
import pytesseract
img = cv2.imread('img2.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
smooth = cv2.GaussianBlur(gray, (95, 95), 0)
division = cv2.divide(gray, smooth, scale=192)
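A minimal way to feed the cleaned image to Tesseract (assuming pytesseract and the Tesseract binary are installed):
text = pytesseract.image_to_string(division)
print(text)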
Reading it with pytesseract, some part of the output will be:
.
.
.
Dosage & use
See package insert for compicic
information,
Instruction:
Keep all medicines out of the re.
Read the instructions carefully
Storage:
Store at temperature below 30°C.
Protect from Heat, light & moisture. BATCH NO. : 014C003
MFG. DATE - 03-2019
—— EXP. DATE : 03-2021
GENIX Distributed
AS Exclusi i :
genx PHARMA PRIVATE LIMITED Cevoka Pv 2 A ‘<
» 45-B, Kore ci
Karachi-75190, | Pakisier al Pei yaa fans
www.genixpharma.com
Repeat for the last image:
And reading it with pytesseract, some part of the output will be:
.
.
.
Dosage & use
See package insert for complete prescribing
information. Rx Only
Instruction:
Keep all medicines out of the reach of children.
Read the instructions carefully before using.
Storage:
Store at temperature below 30°C. 5
Protect from Neat, light & moisture. BATCH NO, : 0140003
MFG. DATE : 03-2019
EXP. DATE : 03-2021
Manufactured by:
GENI N Exclusively Distributed by:
GENIX PHARMA PRIVATE LIMITED Ceyoka (Pvt) Ltd.
44, 45-B, Korangi Creek Road, 55, Negombe Road,
Karachi-75190, Pakistan. Peliyagoda, Snianka,
www. genixpharma.com
Update
You can find the illuminated part using erosion and dilation.
Result:
Code:
import cv2
import imutils
import numpy as np
from skimage import measure
from imutils import contours
img = cv2.imread('img2.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (95, 95), 0)
thresh = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)[1]
thresh = cv2.erode(thresh, None, iterations=2)
thresh = cv2.dilate(thresh, None, iterations=4)
labels = measure.label(thresh, neighbors=8, background=0)
mask = np.zeros(thresh.shape, dtype="uint8")
for label in np.unique(labels):
    if label == 0:
        continue
    labelMask = np.zeros(thresh.shape, dtype="uint8")
    labelMask[labels == label] = 255
    numPixels = cv2.countNonZero(labelMask)
    if numPixels > 300:
        mask = cv2.add(mask, labelMask)
cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = contours.sort_contours(cnts)[0]
for (i, c) in enumerate(cnts):
    (x, y, w, h) = cv2.boundingRect(c)
    ((cX, cY), radius) = cv2.minEnclosingCircle(c)
    cv2.circle(img, (int(cX), int(cY)), int(radius),
               (0, 0, 255), 3)
    cv2.putText(img, "#{}".format(i + 1), (x, y - 15),
                cv2.FONT_HERSHEY_SIMPLEX, 0.45, (0, 0, 255), 2)
cv2.imshow("Image", img)
cv2.waitKey(0)
Though I only tested with the second image; you may need to change the parameters for the other images.
Here is a quick solution in ImageMagick. But it can easily be implemented in Python/OpenCV as shown further down.
Use division normalization.
Read the input
Optionally convert to grayscale
Copy the image and blur it
Divide the blurred image by the original
Save the results
Input:
convert 8W0bp.jpg \( +clone -blur 0x13 \) +swap -compose divide -composite x1.png
convert ob87W.jpg \( +clone -blur 0x13 \) +swap -compose divide -composite x2.png
convert HLJuA.jpg \( +clone -blur 0x13 \) +swap -compose divide -composite x3.png
Results:
In Python/OpenCV:
import cv2
import numpy as np
import skimage.filters as filters
# read the image
img = cv2.imread('8W0bp.jpg')
#img = cv2.imread('ob87W.jpg')
#img = cv2.imread('HLJuA.jpg')
# convert to gray
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# blur
smooth = cv2.GaussianBlur(gray, (33,33), 0)
# divide gray by morphology image
division = cv2.divide(gray, smooth, scale=255)
# sharpen using unsharp masking
sharp = filters.unsharp_mask(division, radius=1.5, amount=2.5, multichannel=False, preserve_range=False)
sharp = (255*sharp).clip(0,255).astype(np.uint8)
# save results
cv2.imwrite('8W0bp_division.jpg',division)
cv2.imwrite('8W0bp_division_sharp.jpg',sharp)
#cv2.imwrite('ob87W_division.jpg',division)
#cv2.imwrite('ob87W_division_sharp.jpg',sharp)
#cv2.imwrite('HLJuA_division.jpg',division)
#cv2.imwrite('HLJuA_division_sharp.jpg',sharp)
# show results
cv2.imshow('smooth', smooth)
cv2.imshow('division', division)
cv2.imshow('sharp', sharp)
cv2.waitKey(0)
cv2.destroyAllWindows()
Results:
Here is my pipeline:
%matplotlib inline
import numpy as np
import cv2
from matplotlib import pyplot as plt
from scipy.signal import find_peaks
I use the functions:
def get_perceived_brightness(float_img):
    float_img = np.float64(float_img)  # uint8 would overflow
    b, g, r = cv2.split(float_img)
    float_brightness = np.sqrt((0.241 * (r ** 2)) + (0.691 * (g ** 2)) + (0.068 * (b ** 2)))
    brightness_channel = np.uint8(np.absolute(float_brightness))
    return brightness_channel
# from: https://stackoverflow.com/questions/46300577/find-locale-minimum-in-histogram-1d-array-python
def smooth(x, window_len=11, window='hanning'):
    if x.ndim != 1:
        raise ValueError("smooth only accepts 1 dimension arrays.")
    if x.size < window_len:
        raise ValueError("Input vector needs to be bigger than window size.")
    if window_len < 3:
        return x
    if window not in ['flat', 'hanning', 'hamming', 'bartlett', 'blackman']:
        raise ValueError("Window is one of 'flat', 'hanning', 'hamming', 'bartlett', 'blackman'")
    s = np.r_[x[window_len-1:0:-1], x, x[-2:-window_len-1:-1]]
    if window == 'flat':  # moving average
        w = np.ones(window_len, 'd')
    else:
        w = eval('np.' + window + '(window_len)')
    y = np.convolve(w / w.sum(), s, mode='valid')
    return y
I load the image
image_file_name = 'im3.jpg'
image = cv2.imread(image_file_name)
# image category
category = 0
# gray convertion
image_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
height = image.shape[0]
width = image.shape[1]
First test. Does the image have any big white spots?
# First test. Does the image have any big white spots?
saturation_thresh = 250
raw_saturation_region = cv2.threshold(image_gray, saturation_thresh, 255, cv2.THRESH_BINARY)[1]
num_raw_saturation_regions, raw_saturation_regions,stats, _ = cv2.connectedComponentsWithStats(raw_saturation_region)
# index 0 is the background -> to remove
area_raw_saturation_regions = stats[1:,4]
min_area_bad_spot = 1000 # this can be calculated as percentage of the image area
if (np.max(area_raw_saturation_regions) > min_area_bad_spot):
    category = 2  # there is at least one spot
The result for the image normal:
The result for the image with spots:
The result for the image with shadows:
If the image passes the first test, I run the second test: is the image dark?
# Second test. Is the image dark?
min_mean_intensity = 60
if category == 0:
    mean_intensity = np.mean(image_gray)
    if (mean_intensity < min_mean_intensity):
        category = 3  # dark image
If the image also passes the second test, I run the third test: is the image uniformly illuminated?
window_len = 15  # odd number
delay = int((window_len - 1) / 2)  # delay is the shift introduced by the smoothing; it's half window_len
# for example, if window_len is 15, the delay is 7
# in fact hist.shape = 256 and smoothed_hist.shape = 270 (= 256 + 2*delay)
if category == 0:
    perceived_brightness = get_perceived_brightness(image)
    hist, bins = np.histogram(perceived_brightness.ravel(), 256, [0, 256])
    # smoothed_hist is shifted from the original one
    smoothed_hist = smooth(hist, window_len)
    # smoothed histogram synchronized with the original histogram
    sync_smoothed_hist = smoothed_hist[delay:-delay]
    # if there are peaks with:
    #   20 < bin < 250
    #   prominence >= mean histogram value
    # the image could have shadows (but it could also have a background with some colors)
    mean_hist = int(height * width / 256)
    peaks, _ = find_peaks(sync_smoothed_hist, prominence=mean_hist)
    selected_peaks = peaks[(peaks > 20) & (peaks < 250)]
    if (selected_peaks.size > 1):
        category = 4  # there are shadows
The histogram for the image normal:
The histogram for the image with spots:
The histogram for the image with shadows:
If the image passes all the tests, then it's normal.
# all tests are passed. The image is ok
if (category == 0):
    category = 1  # the image is ok
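For convenience, the category codes used throughout the pipeline can be mapped to labels like this (the dictionary is just a summary of the assignments above, not part of the original pipeline):
category_labels = {1: 'normal', 2: 'white spots / glare', 3: 'dark image', 4: 'shadows / uneven illumination'}
print('Image classified as:', category_labels.get(category, 'unknown'))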

How to remove hair from skin images using opencv?

I am working on recognition of skin spots. For this, I work with a number of images with different kinds of noise. One of these noises is hair, because I have images with hairs over the area of the spot (ROI). How can I decrease or remove this type of image noise?
The code below shrinks the area where hairs are, but does not remove hairs that are above the area of interest (ROI).
import numpy as np
import cv2
IMD = 'IMD436'
# Read the image and perform an OTSU threshold
img = cv2.imread(IMD+'.bmp')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# Remove hair with opening
kernel = np.ones((2,2),np.uint8)
opening = cv2.morphologyEx(thresh,cv2.MORPH_OPEN,kernel, iterations = 2)
# Combine surrounding noise with ROI
kernel = np.ones((6,6),np.uint8)
dilate = cv2.dilate(opening,kernel,iterations=3)
# Blur the image for smoother ROI
blur = cv2.blur(dilate,(15,15))
# Perform another OTSU threshold and search for biggest contour
ret, thresh = cv2.threshold(blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
# Create a new mask for the result image
h, w = img.shape[:2]
mask = np.zeros((h, w), np.uint8)
# Draw the contour on the new mask and perform the bitwise operation
cv2.drawContours(mask, [cnt],-1, 255, -1)
res = cv2.bitwise_and(img, img, mask=mask)
# Display the result
cv2.imwrite(IMD+'.png', res)
cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
How can I remove hair from the top of my region of interest?
Images used:
I am responding to your tag on a related post. As I understand it, you and another colleague are working together on a project to locate the moles on the skin? I think I have already given help to one or maybe both of you on similar questions, and I already mentioned that removing the hair is a very tricky and difficult task. If you remove the hair from the image you lose information and you can't replace that part of the image (no program or algorithm can guess what is under the hair - but it can make an estimation). What you could do, as I mentioned in other posts, and what I think would be the best approach, is to learn about deep neural networks and make your own for hair removal. You can google "watermark removal deep neural network" and see what I mean. That being said, your code does not seem to extract all the ROIs (the moles) in the example image you have given. I have made another example of how you can better extract the moles. Basically, you should perform closing before transforming to binary, and you will get better results.
For the second part - hair removal - if you do not wish to make a neural network, I think an alternative solution could be to calculate the mean pixel intensity of the region that contains the mole. Then iterate through every pixel and set some sort of criterion on how much a pixel can differ from the mean. Hair seems to be represented by pixels that are darker than the mole area. So when you find such a pixel, replace it with a neighbouring pixel that does not fall under this criterion. In the example I have made some simple logic which will not work with every image, but it can serve as an example. To make a fully operational solution you should make a better, more complex algorithm, which I guess will take quite some time. Hope it helps a bit! Cheers!
import numpy as np
import cv2
from PIL import Image
# Read the image and perform an OTSU threshold
img = cv2.imread('skin2.png')
kernel = np.ones((15,15),np.uint8)
# Perform closing to remove hair and blur the image
closing = cv2.morphologyEx(img,cv2.MORPH_CLOSE,kernel, iterations = 2)
blur = cv2.blur(closing,(15,15))
# Binarize the image
gray = cv2.cvtColor(blur,cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# Search for contours and select the biggest one
_, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
# Create a new mask for the result image
h, w = img.shape[:2]
mask = np.zeros((h, w), np.uint8)
# Draw the contour on the new mask and perform the bitwise operation
cv2.drawContours(mask, [cnt],-1, 255, -1)
res = cv2.bitwise_and(img, img, mask=mask)
# Calculate the mean color of the contour
mean = cv2.mean(res, mask = mask)
print(mean)
# Make some sort of criterion as the ratio hair vs. skin color varies
# thus makes it hard to unify the threshold.
# NOTE that this is only for example and it will not work with all images!!!
if mean[2] > 182:
    bp = mean[0]/100*35
    gp = mean[1]/100*35
    rp = mean[2]/100*35
elif 182 > mean[2] > 160:
    bp = mean[0]/100*30
    gp = mean[1]/100*30
    rp = mean[2]/100*30
elif 160 > mean[2] > 150:
    bp = mean[0]/100*50
    gp = mean[1]/100*50
    rp = mean[2]/100*50
elif 150 > mean[2] > 120:
    bp = mean[0]/100*60
    gp = mean[1]/100*60
    rp = mean[2]/100*60
else:
    bp = mean[0]/100*53
    gp = mean[1]/100*53
    rp = mean[2]/100*53
# Write temporary image
cv2.imwrite('temp.png', res)
# Open the image with PIL and load it to RGB pixelpoints
mask2 = Image.open('temp.png')
pix = mask2.load()
x,y = mask2.size
# Iterate through the image and apply some sort of logic to replace the pixels that
# differ from the mean of the image
# NOTE that this algorithm is only an example and it will not work with other images
for i in range(0, x):
    for j in range(0, y):
        if -1 < pix[i,j][0] < bp or -1 < pix[i,j][1] < gp or -1 < pix[i,j][2] < rp:
            try:
                pix[i,j] = b, g, r
            except:
                pix[i,j] = (int(mean[0]), int(mean[1]), int(mean[2]))
        else:
            b, g, r = pix[i,j]
# Transform the image back to cv2 format and mask the result
res = np.array(mask2)
res = res[:,:,::-1].copy()
final = cv2.bitwise_and(res, res, mask=mask)
# Display the result
cv2.imshow('img', final)
cv2.waitKey(0)
cv2.destroyAllWindows()
You can try the following steps, at least to get a road map to a proper solution implementation:
Find the hair region using adaptive local thresholding - Otsu's method or any other method. I think "local thresholding" or even "local histogram equalization and then global thresholding" will find the hair regions.
To fill the hair regions, use "texture synthesis" to synthesize skin-like texture for the hair region.
One good and easy method for texture synthesis is described in A.A. Efros and T.K. Leung, "Texture synthesis by non-parametric sampling", in Proceedings of the International Conference on Computer Vision (ICCV), Kerkyra, Greece, 1999.
Texture synthesis will give a better result than averaging or median filtering to estimate the pixels in the hair region.
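If implementing full texture synthesis is too heavy for a first attempt, a simpler stand-in that follows the same mask-then-fill idea is a morphological black-hat filter plus OpenCV inpainting; this is not texture synthesis, just a rough approximation (the file name, kernel size, and threshold below are assumptions to tune per image):
import cv2
import numpy as np

img = cv2.imread('skin.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)       # thin, dark structures such as hair
_, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)  # binary mask of the hair pixels
result = cv2.inpaint(img, hair_mask, 3, cv2.INPAINT_TELEA)          # fill the hair pixels from their surroundings
cv2.imwrite('skin_no_hair.png', result)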
Also, take a look at this paper, it should help you a lot:
http://link.springer.com/article/10.1007%2Fs00521-012-1149-1?LI=true
