How to segment blood vessels with Python and OpenCV

I am trying to segment the blood vessels in retinal images using Python and OpenCV. Here is the original image:
Ideally I want all the blood vessels to be very visible like this (different image):
Here is what I have tried so far. I took the green color channel of the image.
img = cv2.imread('images/HealthyEyeFundus.jpg')
b,g,r = cv2.split(img)
Then I tried to create a matched filter by following this article, and this is the output image:
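(For context, a minimal sketch of the kind of matched filter that article describes, a zero-mean Gaussian cross-section rotated over several orientations, might look like the following; the kernel length, sigma, and angle step here are illustrative guesses, not values from the article.)
import cv2
import numpy as np

def matched_kernel(L=9, sigma=1.5, angle_deg=0):
    # negative Gaussian cross-section, rotated; vessels are darker than background
    half = L // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    t = np.deg2rad(angle_deg)
    xr = x * np.cos(t) + y * np.sin(t)  # coordinate across the vessel direction
    k = -np.exp(-(xr ** 2) / (2 * sigma ** 2))
    return (k - k.mean()).astype(np.float32)  # zero-mean so flat regions give no response

g = cv2.imread('images/HealthyEyeFundus.jpg')[:, :, 1].astype(np.float32)  # green channel
responses = [cv2.filter2D(g, -1, matched_kernel(angle_deg=a)) for a in range(0, 180, 15)]
matched = np.max(responses, axis=0)  # strongest response over all orientations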
Then I tried doing max entropy thresholding:
def max_entropy(data):
    # calculate CDF (cumulative distribution function)
    cdf = data.astype(float).cumsum()
    # find the histogram's nonzero range
    valid_idx = np.nonzero(data)[0]
    first_bin = valid_idx[0]
    last_bin = valid_idx[-1]
    # initialize search for maximum
    max_ent, threshold = 0, 0
    for it in range(first_bin, last_bin + 1):
        # Background (dark)
        hist_range = data[:it + 1]
        hist_range = hist_range[hist_range != 0] / cdf[it]  # normalize within selected range & remove all 0 elements
        tot_ent = -np.sum(hist_range * np.log(hist_range))  # background entropy
        # Foreground/Object (bright)
        hist_range = data[it + 1:]
        # normalize within selected range & remove all 0 elements
        hist_range = hist_range[hist_range != 0] / (cdf[last_bin] - cdf[it])
        tot_ent -= np.sum(hist_range * np.log(hist_range))  # accumulate object entropy
        # find max
        if tot_ent > max_ent:
            max_ent, threshold = tot_ent, it
    return threshold
import cv2
import numpy as np
import skimage.io

img = skimage.io.imread('image.jpg')
# obtain histogram
hist = np.histogram(img, bins=256, range=(0, 256))[0]
# get threshold
th = max_entropy(hist)
print(th)
ret, th1 = cv2.threshold(img, th, 255, cv2.THRESH_BINARY)
This is the result I'm getting, which is obviously not showing all the blood vessels:
I've also tried taking the matched-filter version of the image and computing the magnitude of its Sobel gradients.
img0 = cv2.imread('image.jpg', 0)
sobelx = cv2.Sobel(img0, cv2.CV_64F, 1, 0, ksize=5)  # x derivative
sobely = cv2.Sobel(img0, cv2.CV_64F, 0, 1, ksize=5)  # y derivative
magnitude = np.sqrt(sobelx**2 + sobely**2)
This makes the vessels pop out more:
Then I tried Otsu thresholding on it:
from PIL import Image

img0 = cv2.imread('image.jpg', 0)
# Otsu's thresholding
ret2, th2 = cv2.threshold(img0, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Otsu's thresholding after Gaussian filtering
blur = cv2.GaussianBlur(img0, (9, 9), 5)
ret3, th3 = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
Image.fromarray(th2).show()
Image.fromarray(th3).show()
Otsu doesn't give adequate results; it ends up including noise:
Any help is appreciated on how I can segment the blood vessels successfully.

I worked on retinal vessel detection a few years ago, and there are different ways to do it:
If you don't need a top result but something fast, you can use oriented openings; see here and here.
There is also another version using mathematical morphology, here.
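For what it's worth, a minimal sketch of the oriented-openings idea might look like this (the line length, the angle step, and the placeholder filename are my own choices for the sketch):
import cv2
import numpy as np

def line_kernel(length, angle_deg):
    # 1-pixel-wide line structuring element at the given angle
    k = np.zeros((length, length), np.uint8)
    c = length // 2
    t = np.deg2rad(angle_deg)
    dx, dy = int(round(c * np.cos(t))), int(round(c * np.sin(t)))
    cv2.line(k, (c - dx, c - dy), (c + dx, c + dy), 1)
    return k

g = cv2.imread('fundus.jpg')[:, :, 1]  # green channel; filename is a placeholder
inv = 255 - g  # vessels are dark on the green channel, so invert first
# open with line elements at several orientations and keep the pixelwise max:
# only structures elongated in at least one direction survive the opening
opened = [cv2.morphologyEx(inv, cv2.MORPH_OPEN, line_kernel(15, a))
          for a in range(0, 180, 15)]
vessels = np.max(opened, axis=0)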
For better results, here are some ideas:
Personally, I used a combination of Gabor filters, and the results were pretty good. See the segmentation result here on the first image of DRIVE.
Gabor filters can also be combined with learning for a good result; see here, or here.
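As an illustration only, such a Gabor filter bank can be put together in a few lines of OpenCV; all kernel parameters below are plausible defaults I picked for the sketch, not the ones I tuned back then:
import cv2
import numpy as np

g = cv2.imread('fundus.jpg')[:, :, 1].astype(np.float32)  # green channel; placeholder filename
g = cv2.normalize(g, None, 0, 1, cv2.NORM_MINMAX)
responses = []
for theta in np.arange(0, np.pi, np.pi / 12):  # 12 orientations
    kern = cv2.getGaborKernel((15, 15), sigma=3, theta=theta, lambd=8, gamma=0.5, psi=0)
    kern -= kern.mean()  # zero-mean, so flat regions give no response
    responses.append(cv2.filter2D(g, -1, kern))
vesselness = np.max(responses, axis=0)  # strongest response over all orientations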
A few years ago, they claimed to have the best algorithm, but I never had the opportunity to test it. I was skeptical about the performance gap and about the way they thresholded the line-detector results; it was kind of obscure.
But I know that nowadays many people try to tackle the problem using CNNs, though I haven't heard of significant improvements.

Related

Extract ridges and valleys from a finger image

For my class project, I am trying to extract ridges and valleys from a finger image. An example is given below.
# The code I am using
import cv2
import math
import numpy as np
import fingerprint_enhancer

clip_hist_percent = 25
image = cv2.imread("")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Calculate grayscale histogram
hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
hist_size = len(hist)
# Calculate cumulative distribution from the histogram
accumulator = []
accumulator.append(float(hist[0]))
for index in range(1, hist_size):
    accumulator.append(accumulator[index - 1] + float(hist[index]))
# Locate points to clip
maximum = accumulator[-1]
clip_hist_percent *= (maximum / 100.0)
clip_hist_percent /= 2.0
# Locate left cut
minimum_gray = 0
while accumulator[minimum_gray] < clip_hist_percent:
    minimum_gray += 1
# Locate right cut
maximum_gray = hist_size - 1
while accumulator[maximum_gray] >= (maximum - clip_hist_percent):
    maximum_gray -= 1
# Calculate alpha and beta values
alpha = 255 / (maximum_gray - minimum_gray)
beta = -minimum_gray * alpha
auto_result = cv2.convertScaleAbs(image, alpha=alpha, beta=beta)
gray = cv2.cvtColor(auto_result, cv2.COLOR_BGR2GRAY)
# compute gamma = log(mid*255)/log(mean)
mid = 0.5
mean = np.mean(gray)
gamma = math.log(mid * 255) / math.log(mean)
# do gamma correction
img_gamma1 = np.power(auto_result, gamma).clip(0, 255).astype(np.uint8)
g1 = cv2.cvtColor(img_gamma1, cv2.COLOR_BGR2GRAY)
# blur = cv2.GaussianBlur(g1, (2, 1), 0)
thresh2 = cv2.adaptiveThreshold(g1, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                cv2.THRESH_BINARY, 199, 3)
# blur = cv2.GaussianBlur(thresh2, (2, 1), 0)
blur = ((3, 3), 1)  # (kernel size, sigma) for the GaussianBlur below
erode_ = (5, 5)
dilate_ = (3, 3)
dilate = cv2.dilate(cv2.erode(cv2.GaussianBlur(thresh2 / 255, blur[0], blur[1]),
                              np.ones(erode_, np.uint8)), np.ones(dilate_, np.uint8)) * 255
out = fingerprint_enhancer.enhance_Fingerprint(dilate)
I am having difficulty extracting the lines on the finger. I tried adjusting the brightness and contrast, applied calcHist, adaptive thresholding, applied a blur, and then applied the Gabor filters (as per the Utkarsh code). The result looks like the above.
You can clearly see that the lower part of the image has many spurious lines. My project requires getting clear lines from the RGB image. Could anyone help me with the steps and the code?
Thank you in advance.
reference:
https://github.com/Utkarsh-Deshmukh/Fingerprint-Enhancement-Python
https://ieeexplore.ieee.org/abstract/document/7358782
There are several strange things (IMO) about your code.
First you do a contrast stretch that sets the 12.5% darkest pixels to black and the 12.5% brightest pixels to white. You probably already have about that many white pixels, so not much happens there, but you do remove all the information in the darkest regions of the fingerprint.
Next you threshold, which removes most of the remaining information. Thresholding is something you should leave until the very last step of any processing. In particular, the algorithm implemented in fingerprint_enhancer.enhance_Fingerprint() takes a gray-scale image as input; you should not binarize its input at all!
I would start with a local contrast stretch, then you can directly apply the enhancement algorithm:
import cv2
import numpy as np
import fingerprint_enhancer

image = cv2.imread("zMxbO.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Apply local contrast stretch
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25))  # larger than the width of the widest ridges
low = cv2.morphologyEx(gray, cv2.MORPH_OPEN, se)    # locally lowest gray value
high = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, se)  # locally highest gray value
gray = (gray.astype(np.float32) - low) / (high - low + 1e-6)
# Apply fingerprint enhancement
out = fingerprint_enhancer.enhance_Fingerprint(gray, resize=True)
The local contrast stretch yields this:
The finger print enhancement algorithm now yields this:
Note that things go wrong around the edges, where the background was cut out and replaced with white, as well as in the dark region, where the noise dominates and the enhancement algorithm hallucinates a bit. I don't think you can extract meaningful information from that area; better illumination would be necessary.

Remove small objects -- Liver image segmentation

I need to perform liver image segmentation starting from a matrix of Hounsfield units (input-image) and a mask approximation of the liver (input-mask).
After some processing, I ended up with this representation of the liver. The main problem now is how to remove those small objects and keep only the liver in the image. I will explain what I did to obtain this image:
1) Hounsfield thresholding + Normalization - After this step, the image looks like this
def slice_window(img, level, window):
    low = level - window / 2
    high = level + window / 2
    return img.clip(low, high)

# `hu_mat` is the input image
hu_mat_slice = slice_window(hu_mat, 100, 50)

def translate_ranges(img, from_range_low, from_range_high, to_range_low, to_range_high):
    return np.interp(img,
                     (from_range_low, from_range_high),
                     (to_range_low, to_range_high))

hu_mat_norm = translate_ranges(hu_mat_slice, hu_mat_slice.min(), hu_mat_slice.max(), 0, 1)
2) ROI (Convex Hull) + Binarizing - After this step, the image looks like this
I tried to isolate the liver as much as I could by using the initial mask approximation. I generated the convex hull of the mask and kept only the points inside it.
from matplotlib.path import Path

def in_hull(hull, points, x):
    hull_path = Path(points[hull.vertices])
    # radius=25 "expands" the polygon; this ensures the liver will not end up cut
    return hull_path.contains_point(x, radius=25)

hu_mat_hull = np.zeros((len(hu_mat_norm), len(hu_mat_norm[0])))
for i in range(len(hu_mat_norm)):
    for j in range(len(hu_mat_norm[0])):
        if not in_hull(hull, points, (i, j)):
            hu_mat_hull[i][j] = 0
        else:
            hu_mat_hull[i][j] = hu_mat_norm[i][j]

threshold_confidence = 0.5
hu_mat_binary = np.array([[0 if el < threshold_confidence else 1 for el in row] for row in hu_mat_hull])
3) Remove small objects
For this part, I tried to use some morphology to remove the small objects from the image:
from skimage import morphology
hu_mat_bool = np.array(hu_mat_binary, bool)
rem_small = morphology.remove_small_objects(hu_mat_bool, min_size=1000).astype(int)
I used different values for the min_size parameter, but this is the best image I obtained. It does remove something, but very little; the small objects close to the liver are ignored.
I've also tried to find contours in the image and keep only the largest one:
from skimage import measure

contours = measure.find_contours(hu_mat_hull, 0.95)
The contours found are shown here. I tried to perform a dilation starting from the small contours, but didn't succeed in removing the small objects from the image.
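In case it is useful, here is a minimal sketch of the keep-the-largest-component idea using labeled regions instead of contours (assuming hu_mat_binary from step 2):
import numpy as np
from scipy import ndimage
from skimage import measure

# label connected components and keep only the largest one
labels = measure.label(hu_mat_binary, connectivity=2)
regions = measure.regionprops(labels)
if regions:
    largest = max(regions, key=lambda r: r.area)
    liver_mask = (labels == largest.label)
    # optionally fill internal holes in the remaining mask
    liver_mask = ndimage.binary_fill_holes(liver_mask).astype(int)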
What else should I try in order to remove those small objects and generate a mask similar to this?

What are the best practices for Tesseract OCR on low-quality images?

I've been working for a while on an OCR solution for my business and I can't seem to get the hang of image filtering for low-quality images. The balance between removing the noise and not breaking the characters is genuinely complicated.
What's the issue?
More specifically, this is the kind of text image that I work with.
Character code to recognize
And this would be the result after cleaning as much as I can.
Character code after cleaning
I'm using Python. When I pass this to Tesseract using --oem 3 --psm 8, the result is SC454B1TAC, which is not that bad, but I think the image should be good enough to get the characters read correctly.
What Am I Doing?
For the moment, the filtering that I perform goes like this:
Change the image to black and white
Get a threshold image with a gaussian filter applied to it
Remove the dark band on the bottom
Dilate and erode the image to remove spots
Get the connected components of the resulting image to close gaps
Give the image to Tesseract and print the result
Here is some code; I hope it's clear enough:
# Remove dark band
def remove_band(self, image):
    col1 = [row[0] for row in image]  # First column
    col2 = [row[1] for row in image]  # Second column
    col3 = [row[2] for row in image]  # Third column
    for i, c in enumerate(zip(col1, col2, col3)):
        if c[0] == 0 and c[1] == 0 and c[2] == 0:
            image[i] = 255
    return image
# Tesseract func
def print_text(self, rotated):
    # Get OCR output using Pytesseract
    # NOTE: We are using Tesseract 5. If you use Tesseract 4, white/blacklisting doesn't work. Also the algorithm is worse.
    # Installation guide: https://ubuntuhandbook.org/index.php/2021/12/install-tesseract-ocr-5-ubuntu/
    custom_config = '--oem ' + str(self.oem) + ' --psm ' + str(self.psm) \
        + ' -c tessedit_char_whitelist=0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ -c tessedit_char_blacklist=.-}{abcdefghijklmnopqrstuvwxyz'
    out = pytesseract.image_to_string(rotated, config=custom_config, lang=self.lang)
    return out
# Perform all steps for OCR
def perform_ocr(self, image):
    # Convert to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Binary threshold with gaussian filter
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 17, 5)
    # Remove the dark band
    noband = self.remove_band(thresh)
    # Dilate and erode
    kernel_d = np.ones((2, 2), np.uint8)
    kernel_e = np.ones((2, 2), np.uint8)
    img_dilation = cv2.dilate(noband, kernel_d, iterations=2)
    img_erosion = cv2.erode(img_dilation, kernel_e, iterations=2)
    # Get the connected components
    num_comps, labeled_pixels, comp_stats, comp_centroids = \
        cv2.connectedComponentsWithStats(img_erosion, connectivity=4)
    min_comp_area = 10  # pixels
    # get the indices/labels of the remaining components based on the area stat
    # (skip the background component at index 0)
    remaining_comp_labels = [i for i in range(1, num_comps) if comp_stats[i, cv2.CC_STAT_AREA] >= min_comp_area]
    # filter the labeled pixels based on the remaining labels,
    # assign pixel intensity to 255 (uint8) for the remaining pixels
    noiseless = np.where(np.isin(labeled_pixels, remaining_comp_labels), 255, 0).astype('uint8')
    # Save image
    final_save_file = os.path.join(self.final_save_path, 'final_' + str(self.img_num) + ".jpg")
    cv2.imwrite(final_save_file, noiseless)
    # Get Tesseract result
    out = self.print_text(noiseless)
    return out
As you can see, there are a lot of manually tuned parameters that could be changed. I've played with them, and these give the best results so far.
How can you help?
Can you give me some advice on how to improve this method? Any libraries for cleaning, useful functions I'm not using, a better set of parameters for these functions, advice on image resolution, lighting...
Anything helps!
Also, I think this is a known issue, but many times if the image is not good, Tesseract will recognize characters twice and print both results. What is a good way to handle this?
Thanks for everything,
Fran.

Detect crop lines using OpenCV

I am working on a lane-detection project and I want to find the path a robot can take between crop rows. I initially converted the image to a bird's-eye view for better processing and tried the Hough transform, but it is not giving me good results.
Bird's eye view of the image
Are there any other approaches I am missing out?
Before applying the Hough lines algorithm you could do the following:
1) Color shifting
Apply color shifting, where you split the image into its blue, green, and red channels. Since the crop rows are green, you can amplify the green channel so it stands out more from the other channels.
b, g, r = cv2.split(img)
# cast to a wider signed type so 2*g - r - b does not wrap around in uint8
gscale = np.clip(2 * g.astype(np.int16) - r - b, 0, 255).astype(np.uint8)
2) Canny edge detection
Fiddle with the min and max arguments of the cv2.Canny() function until the result is satisfactory.
gscale = cv2.Canny(gscale, minVal, maxValue)
3) Skeletonization
Skeletonization is the process of thinning the regions of interest down to one-pixel-wide skeletons. This makes it easier to perform pattern recognition.
size = np.size(gscale)  # returns the product of the array dimensions
skel = np.zeros(gscale.shape, np.uint8)  # array of zeros
ret, gscale = cv2.threshold(gscale, 128, 255, 0)  # thresholding the image
element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
done = False
while not done:
    eroded = cv2.erode(gscale, element)
    temp = cv2.dilate(eroded, element)
    temp = cv2.subtract(gscale, temp)
    skel = cv2.bitwise_or(skel, temp)
    gscale = eroded.copy()
    zeros = size - cv2.countNonZero(gscale)
    if zeros == size:
        done = True
You should get better results from the Hough lines algorithm after applying all of these steps in order.
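For completeness, a minimal sketch of the Hough step on the skeleton could look like this, using skel from the code above; the threshold, minLineLength, and maxLineGap values are placeholders to tune for your bird's-eye image:
import cv2
import numpy as np

# probabilistic Hough transform on the skeletonized image
lines = cv2.HoughLinesP(skel, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=100, maxLineGap=20)
vis = cv2.cvtColor(skel, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(vis, (x1, y1), (x2, y2), (0, 0, 255), 2)  # draw detected rows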

Adaptive Canny Edge Detection Algorithm

I am trying to implement the Canny algorithm from scratch in Python.
I am following these steps:
Bilateral filtering of the image
Gradient calculation using the first derivative of Gaussian, oriented in 4 different directions
import math
import numpy as np

def deroGauss(w=5, s=1, angle=0):
    wlim = (w - 1) / 2
    y, x = np.meshgrid(np.arange(-wlim, wlim + 1), np.arange(-wlim, wlim + 1))
    G = np.exp(-(np.square(x) + np.square(y)) / (2 * np.float64(s) ** 2))
    G = G / np.sum(G)
    dGdx = -np.multiply(x, G) / np.float64(s) ** 2
    dGdy = -np.multiply(y, G) / np.float64(s) ** 2
    angle = angle * math.pi / 180  # converting to radians
    dog = math.cos(angle) * dGdx + math.sin(angle) * dGdy
    return dog
Non-max suppression on each of the 4 gradient images:
def nonmaxsup(I, gradang):
    dim = I.shape
    Inms = np.zeros(dim)
    xshift = int(np.round(math.cos(gradang * np.pi / 180)))
    yshift = int(np.round(math.sin(gradang * np.pi / 180)))
    Ipad = np.pad(I, (1,), 'constant', constant_values=(0, 0))
    for r in range(1, dim[0] + 1):
        for c in range(1, dim[1] + 1):
            # compare each pixel with its two neighbors along the gradient direction
            maggrad = [Ipad[r - xshift, c - yshift], Ipad[r, c], Ipad[r + xshift, c + yshift]]
            if Ipad[r, c] == np.max(maggrad):
                Inms[r - 1, c - 1] = Ipad[r, c]
    return Inms
Double thresholding and hysteresis: here the real problem comes.
I am using Otsu's method to calculate the thresholds.
Should I use the grayscale image or the gradient images to calculate the threshold?
In the gradient images, the pixel intensity values are reduced to very low values after bilateral filtering, and then reduced further after convolving with the derivative of Gaussian (for example: 28, 15).
The threshold calculated from the grayscale image is much higher than the threshold calculated from the gradient images.
Also, whether I use the grayscale image or the gradient images to calculate the thresholds, the resulting image is not good enough and does not contain all the edges.
So practically, I have nothing left to apply hysteresis on.
I have tried
img_edge = img_edge*255/np.max(img_edge)
to scale up the values, but the result remains the same.
But if I use the same thresholds with cv2.Canny, the result is very good.
What actually can be wrong?
Applying the Otsu threshold computed from the original image doesn't make sense; it is completely unrelated to the gradient intensities.
Otsu computed from the gradient intensities is not perfect either, because the statistical distributions of noise and edges are skewed and overlap a lot.
You can try some small multiple of Otsu, or some small multiple of the average. But in no case will you get perfect results from simple or hysteresis thresholding alone; edge detection is an ill-posed problem.
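For illustration, a minimal sketch of the small-multiple-of-Otsu idea; grad_mag is a hypothetical name for your combined gradient-magnitude image, and the 1.5 and 0.5 factors are starting points to tune:
import cv2
import numpy as np

# scale the gradient magnitude to 8-bit so Otsu can be applied to it
mag8 = cv2.normalize(grad_mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
otsu, _ = cv2.threshold(mag8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
high = 1.5 * otsu  # strong-edge threshold: a small multiple of Otsu
low = 0.5 * high   # weak-edge threshold for hysteresis
strong = mag8 >= high
weak = (mag8 >= low) & ~strong  # candidates to keep only if connected to strong edges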
