I need to perform liver image segmentation, starting from a matrix of Hounsfield units (input-image) and a mask approximation of the liver (input-mask).
After some processing, I ended up with this representation of the liver. The main problem now is how to remove those small objects and keep only the liver in the image. I will explain what I did to obtain this image:
1) Hounsfield thresholding + Normalization - After this step, the image looks like this
import numpy as np

def slice_window(img, level, window):
    low = level - window / 2
    high = level + window / 2
    return img.clip(low, high)

# `hu_mat` is the input image
hu_mat_slice = slice_window(hu_mat, 100, 50)

def translate_ranges(img, from_range_low, from_range_high, to_range_low, to_range_high):
    return np.interp(img,
                     (from_range_low, from_range_high),
                     (to_range_low, to_range_high))

hu_mat_norm = translate_ranges(hu_mat_slice, hu_mat_slice.min(), hu_mat_slice.max(), 0, 1)
2) ROI (Convex Hull) + Binarizing - After this step, the image looks like this
I tried to isolate the liver as much as I could by using the initial mask approximation. I generated the Convex Hull and kept only the points inside the convex hull.
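For reference, a minimal sketch of how such a hull can be built from the mask approximation (assuming the initial approximation is available as a binary array input_mask; points and hull are the names used below):

import numpy as np
from scipy.spatial import ConvexHull

# sketch: build the hull from the coordinates of the mask approximation
points = np.argwhere(input_mask > 0)   # (row, col) coordinates of mask pixels
hull = ConvexHull(points)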
from matplotlib.path import Path

# `hull` is assumed to be a scipy.spatial.ConvexHull built from `points`
def in_hull(hull, points, x):
    hull_path = Path(points[hull.vertices])
    # radius=25: "expands" the polygon; this ensures the liver will not end up cut off
    return hull_path.contains_point(x, radius=25)

hu_mat_hull = np.zeros((len(hu_mat_norm), len(hu_mat_norm[0])))
for i in range(len(hu_mat_norm)):
    for j in range(len(hu_mat_norm[0])):
        if not in_hull(hull, points, (i, j)):
            hu_mat_hull[i][j] = 0
        else:
            hu_mat_hull[i][j] = hu_mat_norm[i][j]

threshold_confidence = 0.5
hu_mat_binary = np.array([[0 if el < threshold_confidence else 1 for el in row] for row in hu_mat_hull])
3) Remove small objects
For this part, I tried to use some morphology operations to remove the small objects from the image:
from skimage import morphology
hu_mat_bool = np.array(hu_mat_binary, bool)
rem_small = morphology.remove_small_objects(hu_mat_bool, min_size=1000).astype(int)
I used different values for the min_size parameter, but this is the best resulting image. It actually removes something, but very little; the small objects that are close to the liver are ignored.
I've also tried to find contours in the image and keep only the largest one:
from skimage import measure
contours = measure.find_contours(hu_mat_orig_hull, 0.95)
The contours found are shown here. I tried to apply a dilation starting from the small contours, but didn't succeed in removing the small objects from the image.
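One more direction I'm considering is to label the connected components and keep only the largest one; a minimal sketch with skimage (assuming hu_mat_binary from step 2) would be:

from skimage import measure
import numpy as np

# sketch: keep only the largest connected component
labels = measure.label(hu_mat_binary, connectivity=2)
if labels.max() > 0:
    counts = np.bincount(labels.ravel())
    counts[0] = 0                                  # ignore the background label
    liver_mask = (labels == counts.argmax()).astype(np.uint8)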
What else should I try in order to remove those small objects and generate a mask similar to this?
I've been researching and trying a couple functions to get what I want and I feel like I might be overthinking it.
One version of my code is below. The sample image is here.
My end goal is to find the angle (yellow) of the approximated line with respect to the frame (green line): Final
I haven't even got to the angle portion of the program yet.
The results I was obtaining from the code below were as follows: Canny, Closed, Small Removed.
Anybody have a better way of creating the difference and establishing the estimated line?
Any help is appreciated.
import cv2
import numpy as np
pX = int(512)
pY = int(768)
img = cv2.imread('IMAGE LOCATION', cv2.IMREAD_COLOR)
imgS = cv2.resize(img, (pX, pY))
aimg = cv2.imread('IMAGE LOCATION', cv2.IMREAD_GRAYSCALE)
# Blur image to reduce noise and resize for viewing
blur = cv2.medianBlur(aimg, 5)
rblur = cv2.resize(blur, (384, 512))
canny = cv2.Canny(rblur, 120, 255, 1)
cv2.imshow('canny', canny)
kernel = np.ones((2, 2), np.uint8)
#fringeMesh = cv2.dilate(canny, kernel, iterations=2)
#fringeMesh2 = cv2.dilate(fringeMesh, None, iterations=1)
#cv2.imshow('fringeMesh', fringeMesh2)
closing = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, kernel)
cv2.imshow('Closed', closing)
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(closing, connectivity=8)
#connectedComponentswithStats yields every separated component with information on each of them, such as size
sizes = stats[1:, -1]; nb_components = nb_components - 1
min_size = 200 #num_pixels
fringeMesh3 = np.zeros(output.shape)
for i in range(0, nb_components):
    if sizes[i] >= min_size:
        fringeMesh3[output == i + 1] = 255
#contours, _ = cv2.findContours(fringeMesh3, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
#cv2.drawContours(fringeMesh3, contours, -1, (0, 255, 0), 1)
cv2.imshow('final', fringeMesh3)
#cv2.imshow("Natural", imgS)
#cv2.imshow("img", img)
cv2.imshow("aimg", aimg)
cv2.imshow("Blur", rblur)
cv2.waitKey()
cv2.destroyAllWindows()
You can fit a straight line to the first white pixel you encounter in each column, starting from the bottom.
I had to trim your image because you shared a screen grab of it with a window decoration, title and frame rather than your actual image:
import cv2
import math
import numpy as np
# Load image as greyscale
im = cv2.imread('trimmed.jpg', cv2.IMREAD_GRAYSCALE)
# Get index of first white pixel in each column, starting at the bottom
yvals = (im[::-1,:]>200).argmax(axis=0)
# Make the x values 0, 1, 2, 3...
xvals = np.arange(0,im.shape[1])
# Fit a line of the form y = mx + c
z = np.polyfit(xvals, yvals, 1)
# Convert the slope to an angle
angle = np.arctan(z[0]) * 180/math.pi
Note 1: The value of z (the result of fitting) is:
array([ -0.74002694, 428.01463745])
which means the equation of the line you are looking for is:
y = -0.74002694 * x + 428.01463745
i.e. the y-intercept is at row 428 from the bottom of the image.
Note 2: Try to avoid JPEG format as an intermediate format in image processing - it is lossy and changes your pixel values - so where you have thresholded and done your morphology you are expecting values of 255 and 0, JPEG will lossily alter those values and you end up testing for a range or thresholding again.
Your 'Closed' image seems to quite clearly segment the two regions, so I'd suggest you focus on turning that boundary into a line that you can do something with. Connected components analysis and contour detection don't really provide any useful information here, so aren't necessary.
One quite simple approach to finding the line angle is to find the first white pixel in each row. To get only the rows that are part of your diagonal, don't include rows where that pixel is too close to either side (e.g. within 5%). That gives you a set of points (pixel locations) on the boundary of your two types of grass.
From there you can either do a linear regression to get an equation for the straight line, or you can get two points by averaging the x values for the top and bottom half of the rows, and then calculate the gradient angle from that.
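A minimal sketch of that approach (assuming closing is the binary result of your morphological close, with values 0 and 255) could look like this:

import numpy as np

h, w = closing.shape[:2]
margin = int(0.05 * w)              # skip rows whose first white pixel hugs either side
ys, xs = [], []
for y in range(h):
    cols = np.flatnonzero(closing[y] > 0)
    if cols.size and margin < cols[0] < w - margin:
        ys.append(y)
        xs.append(cols[0])

# fit x = m*y + c through the boundary points
m, c = np.polyfit(ys, xs, 1)
# m is dx/dy, so this is the boundary's angle measured from the vertical image edge
angle = np.degrees(np.arctan(m))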
An alternative approach would be doing another morphological close with a very large kernel, to end up with just a solid white region and a solid black region, which you could turn into a line with canny or findContours. From there you could either get some points by averaging, use the endpoints, or given a smooth enough result from a large enough kernel you could detect the line with hough lines.
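Again only a sketch of that alternative, with guessed kernel size and Hough parameters (closing as before):

import cv2
import numpy as np

big_kernel = np.ones((51, 51), np.uint8)
solid = cv2.morphologyEx(closing, cv2.MORPH_CLOSE, big_kernel)   # one solid white region, one black
edges = cv2.Canny(solid, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=20)
if lines is not None:
    x1, y1, x2, y2 = lines[0][0]
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))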
Here is the grayscale uint8 image I'm working with: source grayscale image.
This image is the result of stitching 6 different colorized depth images into one. There are 3 rectangular objects in the image, and my goal is to find the edges of these objects. Obviously, I have no problem finding the external edges of the objects. But separating the objects from each other is a big pain.
Desired rectangles in image:
Input image as numpy array: https://drive.google.com/file/d/1uN9R4MgVQBzjJuMhcqWMUAhWDJCatHSf/view?usp=sharing
First of all, I tried to threshold-binarize the image, followed by some erosion + dilation to distinguish all three objects from each other. Then contours + minAreaRect would give me the necessary result. This option isn't robust enough, because the objects in the scene can be so close to each other that the edge between them has the same depth as the roughness of the object surfaces. So important edges can be "blended" with the deviations of the object surfaces. Consequently, I sometimes get two objects merged into one.
Using Canny edge detection with automatically calculated coefficients (from the picture median) catches all the unnecessary brightness changes together with the edges. Canny with manually adjusted coefficients works better, but it doesn't give a closed edge result + it is not reliable (it must be manually tweaked each time).
Another thing I tried was adjusting the image brightness nonlinearly (a power-law transformation) to increase the brightness of the object surfaces while leaving the dark edge cavities unchanged.
p = 0.2
c = input_image.max() / (input_image.max() ** p)
output_image = (c * blur_gray.astype(float) ** p).astype(np.uint8)
Here is a result: brightness adjusted image.
Threshold binarizing of this image gives better results in terms of edges. I tried Canny and Laplacian edge detection, but the obtained results give disconnected parts of the contour with some noise in the object surface areas: binarized result of Laplacian filtering. The next step, in my mind, must be some kind of edge estimation/restoration algorithm. I tried the Hough transform to get edge lines, but it didn't give any intelligible result.
It seems to me that I'm just going around in circles without achieving any intelligible result, so I'm asking for help. Probably my approach is fundamentally wrong, or I am missing something because I don't have sufficient knowledge. Any ideas or suggestions?
P.S. After posting this, I'll continue and try to implement the watershed segmentation algorithm; maybe it will work.
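For reference, a first attempt would probably follow the standard OpenCV marker-based watershed recipe (the names mask and img_u8 below are placeholders for the thresholded binary image and a uint8 grayscale version of the input):

import cv2
import numpy as np

# sketch: marker-based watershed (placeholder inputs, not tested on this data)
dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = sure_fg.astype(np.uint8)
sure_bg = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=3)
unknown = cv2.subtract(sure_bg, sure_fg)
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
markers = cv2.watershed(cv2.cvtColor(img_u8, cv2.COLOR_GRAY2BGR), markers)
# region boundaries are labelled -1 in `markers`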
I tried to come up with a method to emphasize the vertical and horizontal lines separating the shapes.
I started by thresholding the original image (from numpy) and just used a [0, 10] range that seemed reasonable.
I ran a vertical and horizontal line kernel over the image to generate two masks
Vertical Kernel
Horizontal Kernel
I combined the two masks so that we'd have both of the lines separating the boxes
Now we can use findContours to find the boxes. I filtered out small contours to get just the 3 rectangles and used a 4-sided approximation to try and get just their sides.
import cv2
import numpy as np
import random
# approx n-sided shape
def approxSides(contour, numSides, step_size):
    # approx until numSides points
    num_points = 999999;
    percent = step_size;
    while num_points >= numSides:
        # get number of points
        epsilon = percent * cv2.arcLength(contour, True);
        approx = cv2.approxPolyDP(contour, epsilon, True);
        num_points = len(approx);
        # increment
        percent += step_size;
    # step back and get the points
    # there could be more than numSides points if our step size misses it
    percent -= step_size * 2;
    epsilon = percent * cv2.arcLength(contour, True);
    approx = cv2.approxPolyDP(contour, epsilon, True);
    return approx;
# convolve (hand-rolled version; not actually used below, cv2.filter2D does the filtering)
def conv(mask, kernel, size, half):
    # get res
    h, w = mask.shape[:2];
    # loop
    nmask = np.zeros_like(mask);
    for y in range(half, h - half):
        print("Y: " + str(y) + " || " + str(h));
        for x in range(half, w - half):
            total = np.sum(np.multiply(mask[y-half:y+half+1, x-half:x+half+1], kernel));
            total /= 255;
            if total > half:
                nmask[y][x] = 255;
            else:
                nmask[y][x] = 0;
    return nmask;
# load numpy array
img = np.load("output_data.npy");
mask = cv2.inRange(img, 0, 10);
# resize
h,w = mask.shape[:2];
scale = 0.25;
h = int(h*scale);
w = int(w*scale);
mask = cv2.resize(mask, (w,h));
# use a line filter
size = 31; # size / 2 is max bridge size
half = int(size/2);
vKernel = np.zeros((size,size), np.float32);
for a in range(size):
    vKernel[a][half] = 1/size;
hKernel = np.zeros((size,size), np.float32);
for a in range(size):
    hKernel[half][a] = 1/size;
# run filters
vmask = cv2.filter2D(mask, -1, vKernel);
vmask = cv2.inRange(vmask, (half * 255 / size), 255);
hmask = cv2.filter2D(mask, -1, hKernel);
hmask = cv2.inRange(hmask, (half * 255 / size), 255);
combined = cv2.bitwise_or(vmask, hmask);
# contours OpenCV3.4, if you're using OpenCV 2 or 4, it returns (contours, _)
combined = cv2.bitwise_not(combined);
_, contours, _ = cv2.findContours(combined, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE);
# filter out small contours
cutoff_size = 1000;
big_cons = [];
for con in contours:
    area = cv2.contourArea(con);
    if area > cutoff_size:
        big_cons.append(con);
# do approx for 4-sided shape
colored = cv2.cvtColor(combined, cv2.COLOR_GRAY2BGR);
four_sides = [];
for con in big_cons:
    approx = approxSides(con, 4, 0.01);
    color = [random.randint(0,255) for a in range(3)];
    cv2.drawContours(colored, [approx], -1, color, 2);
    four_sides.append(approx); # not used for anything
# show
cv2.imshow("Image", img);
cv2.imshow("mask", mask);
cv2.imshow("vmask", vmask);
cv2.imshow("hmask", hmask);
cv2.imshow("combined", combined);
cv2.imshow("Color", colored);
cv2.waitKey(0);
I am trying to segment the blood vessels in retinal images using Python and OpenCV. Here is the original image:
Ideally I want all the blood vessels to be very visible like this (different image):
Here is what I have tried so far. I took the green color channel of the image.
img = cv2.imread('images/HealthyEyeFundus.jpg')
b,g,r = cv2.split(img)
Then I tried to create a matched filter by following this article and this is what the output image is:
Then I tried doing max entropy thresholding:
def max_entropy(data):
    # calculate CDF (cumulative density function)
    cdf = data.astype(float).cumsum()
    # find histogram's nonzero area
    valid_idx = np.nonzero(data)[0]
    first_bin = valid_idx[0]
    last_bin = valid_idx[-1]
    # initialize search for maximum
    max_ent, threshold = 0, 0
    for it in range(first_bin, last_bin + 1):
        # Background (dark)
        hist_range = data[:it + 1]
        hist_range = hist_range[hist_range != 0] / cdf[it]  # normalize within selected range & remove all 0 elements
        tot_ent = -np.sum(hist_range * np.log(hist_range))  # background entropy
        # Foreground/Object (bright)
        hist_range = data[it + 1:]
        # normalize within selected range & remove all 0 elements
        hist_range = hist_range[hist_range != 0] / (cdf[last_bin] - cdf[it])
        tot_ent -= np.sum(hist_range * np.log(hist_range))  # accumulate object entropy
        # find max
        if tot_ent > max_ent:
            max_ent, threshold = tot_ent, it
    return threshold
import skimage.io

img = skimage.io.imread('image.jpg')
# obtain histogram
hist = np.histogram(img, bins=256, range=(0, 256))[0]
# get threshold
th = max_entropy(hist)
print(th)
ret, th1 = cv2.threshold(img, th, 255, cv2.THRESH_BINARY)
This is the result I'm getting, which is obviously not showing all the blood vessels:
I've also tried taking the matched filter version of the image and taking the magnitude of its sobel values.
img0 = cv2.imread('image.jpg',0)
sobelx = cv2.Sobel(img0,cv2.CV_64F,1,0,ksize=5) # x
sobely = cv2.Sobel(img0,cv2.CV_64F,0,1,ksize=5) # y
magnitude = np.sqrt(sobelx**2+sobely**2)
This makes the vessels pop out more:
Then I tried Otsu thresholding on it:
from PIL import Image

img0 = cv2.imread('image.jpg', 0)
# Otsu's thresholding
ret2, th2 = cv2.threshold(img0, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Otsu's thresholding after Gaussian filtering
blur = cv2.GaussianBlur(img0, (9, 9), 5)
ret3, th3 = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
one = Image.fromarray(th2).show()
one = Image.fromarray(th3).show()
Otsu doesn't give adequate results. It ends up including noise in the results:
Any help is appreciated on how I can segment the blood vessels successfully.
I worked on retina vessel detection a few years ago, and there are different ways to do it:
If you don't need a top result but something fast, you can use oriented openings, see here and here.
Then there is another version using mathematical morphology here.
For better results, here are some ideas:
Personally, I used a combination of Gabor filters, and the results were pretty good. See the segmentation result here on the first image of DRIVE.
And Gabor can be combined with learning for a good result, or here.
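As a rough illustration of the Gabor idea (not my exact setup; the parameters here are placeholders and green is assumed to be the inverted green channel):

import cv2
import numpy as np

def gabor_vessel_response(green, n_orientations=12):
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        # ksize, sigma, theta, wavelength, aspect ratio, phase offset
        kern = cv2.getGaborKernel((15, 15), 4.0, theta, 10.0, 0.5, 0, ktype=cv2.CV_32F)
        kern /= kern.sum() + 1e-8                      # rough normalisation
        responses.append(cv2.filter2D(green.astype(np.float32), cv2.CV_32F, kern))
    # vessels respond strongly at the orientation matching their local direction
    return np.max(responses, axis=0)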
A few years ago, they claimed to have the best algorithm, but I've never had the opportunity to test it. I was sceptical about the performance gap and the way they thresholded the line detector results; it was kind of obscure.
I know that nowadays many people try to tackle the problem using CNNs, but I haven't heard about significant improvements.
I read this blog post where he uses a laser and a webcam to estimate the distance of the cardboard from the webcam.
I had another idea about that. I don't want to calculate the distance from the webcam.
I want to check if an object is approaching the webcam. The algorithm, according to me, will be something like:
Detect the object in the webcam feed.
If the object is approaching the webcam it'll grow larger and larger in the video feed.
Use this data for further calculations.
Since I want to detect random objects, I am using the findContours() method to find the contours in the video feed. Using that, I will at least have the outlines of the objects in the video feed. The source code is:
import numpy as np
import cv2
vid=cv2.VideoCapture(0)
ans, instant=vid.read()
average=np.float32(instant)
cv2.accumulateWeighted(instant, average, 0.01)
background=cv2.convertScaleAbs(average)
while True:
    _, f = vid.read()
    imgray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(imgray, 127, 255, 0)
    diff = cv2.absdiff(f, background)
    cv2.imshow("input", f)
    cv2.imshow("Difference", diff)
    if cv2.waitKey(5) == 27:
        break
cv2.destroyAllWindows()
The output is:
I am stuck here. I have the contours stored in an array. What do I do with it when the size increases? How do I proceed?
One trouble here is recognising and differentiating the moving objects from other stuff in the video feed. An approach might be to let the camera 'learn' what the background looks like with no object. Then you can constantly compare its input against this background. One way to get the background is to use a running average.
Any difference greater than a small threshold means there is a moving object. If you constantly display this difference, you basically have a motion tracker. The size of the objects is simply the sum of all the non-zero (thresholded) pixels, or their bounding rectangles. You can track this size and use it to guess whether the object is moving closer or further. Morphological operations can help group the contours into one cohesive object.
Since it will be tracking ANY movement, if there are two objects, they will be counted together. Here is where you can use the contours to find and track individual objects, e.g. using the contour bounds or centroids. You could also possibly separate them by colour.
Here are some results using this strategy (the grey blob is my hand):
It actually did a fairly good job of guessing which way my hand was moving.
Code:
import cv2
import numpy as np
AVERAGE_ALPHA = 0.2 # 0-1 where 0 never adapts, and 1 instantly adapts
MOVEMENT_THRESHOLD = 30 # Lower values pick up more movement
REDUCED_SIZE = (400, 600)
MORPH_KERNEL = np.ones((10, 10), np.uint8)
def reduce_image(input_image):
    """Make the image easier to deal with."""
    reduced = cv2.resize(input_image, REDUCED_SIZE)
    reduced = cv2.cvtColor(reduced, cv2.COLOR_BGR2GRAY)
    return reduced
# Initialise
vid = cv2.VideoCapture(0)
average = None
old_sizes = np.zeros(20)
size_update_index = 0
while True:
    got_frame, frame = vid.read()
    if got_frame:
        # Reduce image
        reduced = reduce_image(frame)
        if average is None: average = np.float32(reduced)
        # Get background
        cv2.accumulateWeighted(reduced, average, AVERAGE_ALPHA)
        background = cv2.convertScaleAbs(average)
        # Get thresholded difference image
        movement = cv2.absdiff(reduced, background)
        _, threshold = cv2.threshold(movement, MOVEMENT_THRESHOLD, 255, cv2.THRESH_BINARY)
        # Apply morphology to help find object
        dilated = cv2.dilate(threshold, MORPH_KERNEL, iterations=10)
        closed = cv2.morphologyEx(dilated, cv2.MORPH_CLOSE, MORPH_KERNEL)
        # Get contours
        contours, _ = cv2.findContours(closed, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        cv2.drawContours(closed, contours, -1, (150, 150, 150), -1)
        # Find biggest bounding rectangle
        areas = [cv2.contourArea(c) for c in contours]
        if areas != list():
            max_index = np.argmax(areas)
            max_cont = contours[max_index]
            x, y, w, h = cv2.boundingRect(max_cont)
            cv2.rectangle(closed, (x, y), (x+w, y+h), (255, 255, 255), 5)
            # Guess movement direction
            size = w*h
            if size > old_sizes.mean():
                print("Towards")
            else:
                print("Away")
            # Update object size
            old_sizes[size_update_index] = size
            size_update_index += 1
            if size_update_index >= len(old_sizes): size_update_index = 0
        # Display image (waitKey is needed for the imshow window to refresh)
        cv2.imshow('RaptorVision', closed)
        cv2.waitKey(1)
Obviously this needs more work in terms of identifying, selecting and tracking the objects etc (at the moment it does horribly if there is something else moving in the background). There are also many parameters to vary and tweak (the ones set are what worked well for my system). I'll leave that up to you though.
Some links:
background extraction
motion tracking
If you want to get a bit more high-tech with the background removal, have a look here:
wallflower
Detect the object in the webcam feed.
If the object is approaching the webcam it'll grow larger and larger in the video feed.
Use this data for further calculations.
Good idea.
If you want to use the contour detection approach, you could do it the following way:
You have a series of images I_1, I_2, ..., I_n.
Do a contour detection on each one: C_1, C_2, ..., C_n (a contour is a set of points in OpenCV).
Take a large enough sample of points from each contour: S_i ⊆ C_i, i ∈ 1...n.
For every point in your sample S_i, find the nearest point on C_{i+1}. That gives you trajectories for all your points.
Check whether these trajectories point mostly outwards (the tricky part ;).
If they point outwards for a sufficient number of frames, your contour got bigger.
Alternatively, you could try to prune the points that are not part of the correct contour and work with a covering rectangle. It's very easy to check the size that way, but I don't know how easy it will be to choose the "correct" points.
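A rough sketch of the trajectory check described above (the helper name and sampling step are placeholders; cont_prev and cont_next are contours from cv2.findContours for frames i and i+1):

import numpy as np

def contour_growing(cont_prev, cont_next, sample_step=10):
    prev_pts = cont_prev.reshape(-1, 2)[::sample_step].astype(np.float32)
    next_pts = cont_next.reshape(-1, 2).astype(np.float32)
    centroid = prev_pts.mean(axis=0)
    outward = 0.0
    for p in prev_pts:
        # nearest point on the next frame's contour
        q = next_pts[np.argmin(np.linalg.norm(next_pts - p, axis=1))]
        # positive if the point moved further away from the centroid
        outward += np.linalg.norm(q - centroid) - np.linalg.norm(p - centroid)
    return outward / len(prev_pts) > 0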
What would be the approach to trim an image that's been input using a scanner and therefore has a large white/black area?
The entropy solution seems problematic and computationally intensive. Why not edge detect?
I just wrote this Python code to solve this same problem for myself. My background was dirty white-ish, so the criterion I used was darkness and color. I simplified this by just taking the smallest of the R, G or B values for each pixel, so that black or saturated red both stood out the same. I also used the average of the N darkest pixels (the obviousness setting below) for each row or column. Then I started at each edge and worked my way in until I crossed a threshold.
Here is my code:
#these values set how sensitive the bounding box detection is
threshold = 200 #the average of the darkest values must be _below_ this to count (0 is darkest, 255 is lightest)
obviousness = 50 #how many of the darkest pixels to include (1 would mean a single dark pixel triggers it)
from PIL import Image
import numpy as np

def find_line(vals):
    # implement edge detection once, use many times
    for i, tmp in enumerate(vals):
        tmp.sort()
        average = float(sum(tmp[:obviousness])) / len(tmp[:obviousness])
        if average <= threshold:
            return i
    return i  # i is left over from a failed threshold search; it is the bound
def getbox(img):
    # get the bounding box of the interesting part of a PIL image object
    # this is done by getting the darkest of the R, G or B value of each pixel
    # and finding where the edge gets dark/colored enough
    # returns a tuple of (left, upper, right, lower)
    width, height = img.size  # for making a 2d array
    retval = [0, 0, width, height]  # values will be disposed of, but this is a black image's box
    pixels = list(img.getdata())
    vals = []  # store the value of the darkest color
    for pixel in pixels:
        vals.append(min(pixel))  # the darkest of the R, G or B values
    # make 2d array
    vals = np.array([vals[i * width:(i + 1) * width] for i in range(height)])
    # start with upper bounds
    forupper = vals.copy()
    retval[1] = find_line(forupper)
    # next, do lower bounds
    forlower = vals.copy()
    forlower = np.flipud(forlower)
    retval[3] = height - find_line(forlower)
    # left edge, same as before but rotate the data so left edge is top edge
    forleft = vals.copy()
    forleft = np.swapaxes(forleft, 0, 1)
    retval[0] = find_line(forleft)
    # and right edge is bottom edge of rotated array
    forright = vals.copy()
    forright = np.swapaxes(forright, 0, 1)
    forright = np.flipud(forright)
    retval[2] = width - find_line(forright)
    if retval[0] >= retval[2] or retval[1] >= retval[3]:
        print("error, bounding box is not legit")
        return None
    return tuple(retval)
if __name__ == '__main__':
    image = Image.open('cat.jpg')
    box = getbox(image)
    print("result is:", box)
    result = image.crop(box)
    result.show()
For starters, here is a similar question. Here is a related question. And another related question.
Here is just one idea; there are certainly other approaches. I would select an arbitrary crop edge, measure the entropy* on either side of the line, and then re-select the crop line (probably using something like a bisection method) until the entropy of the cropped-out portion falls below a defined threshold. I think you may need to resort to a brute-force root-finding method, as you will not have a good indication of when you have cropped too little. Then repeat for the remaining 3 edges.
*I recall discovering that the entropy method in the referenced website was not completely accurate, but I could not find my notes (I'm sure it was in a SO post, however.)
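As a small sketch of the entropy criterion (using a simple inward scan rather than bisection; img is assumed to be a grayscale numpy array and the threshold value is a placeholder):

import numpy as np

def strip_entropy(strip):
    # Shannon entropy of the pixel-value histogram of a strip
    hist, _ = np.histogram(strip, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def find_top(img, threshold=1.0, step=10):
    # walk the top edge inward while the cropped-off strip still looks "empty"
    top = 0
    while top + step < img.shape[0] and strip_entropy(img[top:top + step]) < threshold:
        top += step
    return top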
Edit:
Other criteria for the "emptiness" of an image portion (other than entropy) might be contrast ratio or contrast ratio on an edge-detect result.