I am trying to convert the result of a skeletonization into a set of line segments, where the vertices correspond to the junction points of the skeleton. The shape is not a closed polygon and it may be somewhat noisy (the segments are not as straight as they should be).
Here is an example input image:
And here are the points I want to retrieve:
I have tried using the Harris corner detector, but it has trouble in some areas even after tweaking the algorithm's parameters (for example, the angled section at the bottom of the image). Here are the results:
Do you know of any method capable of doing this? I am using Python with mostly OpenCV and NumPy, but I am not bound to any library. Thanks in advance.
Edit: I've gotten some good responses regarding the junction points; I am really grateful. I would also appreciate any solutions for extracting the line segments between the junction points. I think @nathancy's answer could be used to extract line segments by subtracting the intersection mask from the line masks, but I am not sure.
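For instance, something like this rough, untested sketch (the horizontal, vertical and joints masks are the ones from @nathancy's answer below):

import cv2
import numpy as np

# Dilate the joints so the subtraction fully disconnects the segments:
joint_area = cv2.dilate(joints, np.ones((5, 5), np.uint8))
# Remove the joint areas from the combined line masks:
segments = cv2.subtract(cv2.bitwise_or(horizontal, vertical), joint_area)
# Each remaining connected component would be one candidate line segment:
num_labels, labels = cv2.connectedComponents(segments)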
My approach is based on my previous answer here. It involves convolving the image with a special kernel that identifies the end-points of the lines as well as the intersections. This will result in a points mask containing the pixels that match the points you are looking for. After that, apply a little bit of morphology to join possibly duplicated points. Note that the method is sensitive to the corners produced by the skeleton.
This is the code:
import cv2
import numpy as np
# image path
path = "D://opencvImages//"
fileName = "Repn3.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
inputImageCopy = inputImage.copy()
# Convert to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Compute the skeleton:
skeleton = cv2.ximgproc.thinning(grayscaleImage, None, 1)
# Threshold the image so that white pixels get a value of 10 and
# black pixels a value of 0:
_, binaryImage = cv2.threshold(skeleton, 128, 10, cv2.THRESH_BINARY)
# Set the convolution kernel:
h = np.array([[1, 1, 1],
              [1, 10, 1],
              [1, 1, 1]])
# Convolve the image with the kernel:
imgFiltered = cv2.filter2D(binaryImage, -1, h)
So far I convolved the skeleton image with my special kernel. You can inspect the image produced and search for the numerical values at the corners and intersections.
This is the output so far:
Next, identify a corner or an intersection. This bit is tricky, because the threshold values depend directly on the skeleton image, which sometimes doesn't produce good (close to straight) corners. Since white pixels are set to 10 and the kernel's center weight is 10, a skeleton pixel's response is 100 + 10*k, where k is its number of white neighbors: an end-point (one neighbor) gives 110, a T-junction (three neighbors) gives 130, and 40 corresponds to a background pixel with four white neighbors (the hollow center of a crossing):
# Create list of thresholds:
thresh = [130, 110, 40]
# Prepare the final mask of points:
(height, width) = binaryImage.shape
pointsMask = np.zeros((height, width, 1), np.uint8)
# Perform convolution and create points mask:
for t in range(len(thresh)):
    # Get current threshold:
    currentThresh = thresh[t]
    # Locate the threshold in the filtered image:
    tempMat = np.where(imgFiltered == currentThresh, 255, 0)
    # Convert and shape the image to a uint8 height x width x channels
    # numpy array:
    tempMat = tempMat.astype(np.uint8)
    tempMat = tempMat.reshape(height, width, 1)
    # Accumulate mask:
    pointsMask = cv2.bitwise_or(pointsMask, tempMat)
This is the binary mask:
Let's dilate to join close points:
# Set kernel (structuring element) size:
kernelSize = 3
# Set operation iterations:
opIterations = 4
# Get the structuring element:
morphKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
# Perform Dilate:
pointsMask = cv2.morphologyEx(pointsMask, cv2.MORPH_DILATE, morphKernel, None, None, opIterations, cv2.BORDER_REFLECT101)
This is the output:
Now, simply extract the external contours. Get their bounding boxes and calculate their centroids:
# Look for the outer contours (no children):
contours, _ = cv2.findContours(pointsMask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Store the points here:
pointsList = []
# Loop through the contours:
for i, c in enumerate(contours):
    # Get the contour's bounding rectangle:
    boundRect = cv2.boundingRect(c)
    # Get the centroid of the rectangle:
    cx = int(boundRect[0] + 0.5 * boundRect[2])
    cy = int(boundRect[1] + 0.5 * boundRect[3])
    # Store centroid into list:
    pointsList.append((cx, cy))
    # Set centroid circle and text:
    color = (0, 0, 255)
    cv2.circle(inputImageCopy, (cx, cy), 3, color, -1)
    font = cv2.FONT_HERSHEY_COMPLEX
    cv2.putText(inputImageCopy, str(i), (cx, cy), font, 0.5, (0, 255, 0), 1)
# Show image:
cv2.imshow("Circles", inputImageCopy)
cv2.waitKey(0)
This is the result. Some corners are missed; you might want to improve the solution before computing the skeleton.
Here's a simple approach. The idea is:
Obtain binary image. Load image, convert to grayscale, Gaussian blur, then Otsu's threshold.
Obtain horizontal and vertical line masks. Create horizontal and vertical structuring elements with cv2.getStructuringElement then perform cv2.morphologyEx to isolate the lines.
Find joints. We cv2.bitwise_and the two masks together to get the joints. The idea is that the intersection points on the two masks are the joints.
Find centroid on joint mask. We find contours then calculate the centroid.
Find leftover endpoints. Endpoints do not correspond to an intersection, so to find those, we can use the Shi-Tomasi corner detector.
Horizontal and vertical line masks
Results (joints in green and endpoints in blue)
Code
import cv2
import numpy as np
# Load image, grayscale, Gaussian blur, Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Find horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,1))
horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=1)
# Find vertical lines
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,5))
vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=1)
# Find joint intersections then the centroid of each joint
joints = cv2.bitwise_and(horizontal, vertical)
cnts = cv2.findContours(joints, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    # Find the centroid of each joint and draw the center point:
    center, size, angle = cv2.minAreaRect(c)
    cx, cy = int(center[0]), int(center[1])
    cv2.circle(image, (cx, cy), 5, (36,255,12), -1)
# Find endpoints
corners = cv2.goodFeaturesToTrack(thresh, 5, 0.5, 10)
corners = np.intp(corners)  # np.intp replaces the removed np.int0 alias
for corner in corners:
    x, y = corner.ravel()
    cv2.circle(image, (x, y), 5, (255,100,0), -1)
cv2.imshow('thresh', thresh)
cv2.imshow('joints', joints)
cv2.imshow('horizontal', horizontal)
cv2.imshow('vertical', vertical)
cv2.imshow('image', image)
cv2.waitKey()
I would like to get the 4 corners of a page,
The steps I took:
Converted to grayscale
Applied a threshold to the image
Applied Canny for detecting edges
After that I have used findContours
Drew the approximated polygon for each contour; my assumption was that the relevant polygon must have 4 vertices.
But along the way I found out that my solution sometimes misses; apparently it is not robust enough (probably a bit naive).
I think some of the reasons for those paper corner detection failure are:
The thresholds are picked manually for Canny detection.
The same goes for the epsilon value for approxPolyDP.
My Code
import cv2
import numpy as np
image = cv2.imread('page1.jpg')
descalingFactor = 3
imgheight, imgwidth = image.shape[:2]
resizedImg = cv2.resize(image, (int(imgwidth / descalingFactor), int(imgheight / descalingFactor)),
                        interpolation=cv2.INTER_AREA)
cv2.imshow(winname="original", mat=resizedImg)
cv2.waitKey()
gray = cv2.cvtColor(resizedImg, cv2.COLOR_BGR2GRAY)
cv2.imshow(winname="gray", mat=gray)
cv2.waitKey()
img_blur = cv2.GaussianBlur(gray, (5, 5), 1)
cv2.imshow(winname="blur", mat=img_blur)
cv2.waitKey()
canny = cv2.Canny(gray,
                  threshold1=120,
                  threshold2=255,
                  edges=1)
cv2.imshow(winname="Canny", mat=canny)
cv2.waitKey()
contours, _ = cv2.findContours(image=canny, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)
for idx, cnt in enumerate(contours):
    # print("Contour #", idx)
    # print("Contour #", idx, " len(cnt): ", len(cnt))
    cv2.drawContours(image=resizedImg, contours=[cnt], contourIdx=0, color=(255, 0, 0), thickness=3)
    cv2.imshow(winname="contour" + str(idx), mat=resizedImg)
    conv = cv2.convexHull(cnt)
    epsilon = 0.1 * cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, epsilon, True)
    cv2.drawContours(resizedImg, [approx], 0, (0, 0, 255), 3)
    cv2.waitKey(0)
    if len(approx) == 4:
        print("found the paper!!")
        break
pts = np.squeeze(approx)
Another approach
I was wondering: wouldn't it be a better approach to fit a polygon with 4 vertices (a quadrilateral) to the contour, and then check whether the area difference between the polygon and the contour is below a specified threshold?
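A rough sketch of what I have in mind (the epsilon sweep and the 5% area threshold are just guesses):

import cv2
import numpy as np

def fit_quadrilateral(cnt, max_area_diff=0.05):
    # Try to approximate cnt with exactly 4 vertices, and accept the fit only
    # if the polygon/contour area difference is below max_area_diff (fraction):
    contour_area = cv2.contourArea(cnt)
    if contour_area == 0:
        return None
    perimeter = cv2.arcLength(cnt, True)
    # Sweep the approximation tolerance from tight to loose:
    for eps_frac in np.linspace(0.01, 0.15, 15):
        approx = cv2.approxPolyDP(cnt, eps_frac * perimeter, True)
        if len(approx) == 4:
            area_diff = abs(cv2.contourArea(approx) - contour_area) / contour_area
            if area_diff < max_area_diff:
                return np.squeeze(approx)
    return None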
Can somebody please suggest a more robust solution (demonstrating it with code)? Thank you.
The images:
image1: https://ibb.co/K2SqLwZ
image2: https://ibb.co/mbGFsNp
image3: https://ibb.co/m6QKkzw
image4: https://ibb.co/xh7W41V
As fmw42 suggested, you need to restrict the problem more. There are way too many variables to build a "works under all circumstances" solution. A possible, very basic, solution would be to try and get the convex hull of the page.
Another, more robust approach would be to search for the four vertices of the corners and extrapolate lines to approximate the paper edges. That way you don't need perfect, clean edges, because you would reconstruct them using the four (maybe even three) corners.
To find the vertices you can run Hough Line detector or a Corner Detector on the edges and get at least four discernible clusters of end/starting points. From that you can average the four clusters to get a pair of (x, y) points per corner and extrapolate lines using those points.
That solution would be hypothetical and pretty laborious for a Stack Overflow question, so let me try the first proposal - detection via convex hull. Here are the steps:
Threshold the input image
Get edges from the input
Get the external contours of the edges using a minimum area filter
Get the convex hull of the filtered image
Get the corners of the convex hull
Let's see the code:
# imports:
import cv2
import numpy as np
# image path
path = "D://opencvImages//"
fileName = "img2.jpg"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Deep copy for results:
inputImageCopy = inputImage.copy()
# Convert BGR to grayscale:
grayInput = cv2.cvtColor(inputImageCopy, cv2.COLOR_BGR2GRAY)
# Threshold via Otsu:
_, binaryImage = cv2.threshold(grayInput, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
The first step is to get a binary image, very straightforward. This is the result if you threshold via Otsu:
It is never a good idea to try and segment an object from a textured (or high-frequency) background; however, in this case the paper is discernible in the image histogram and the binary image is reasonably good. Let's try to detect edges on this image. I'm applying Canny with the same parameters as your code:
# Get edges:
cannyImage = cv2.Canny(binaryImage, threshold1=120, threshold2=255, edges=1)
Which produces this:
Seems good enough; the target edges are mostly present. Let's detect contours. The idea is to set an area filter, because the target contour is the biggest amongst the rest. I (heuristically) set a minimum area of 100000 pixels. Once the target contour is found, I get its convex hull, like this:
# Find the EXTERNAL contours on the binary image:
contours, hierarchy = cv2.findContours(cannyImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Store the corners:
cornerList = []
# Look for the outer bounding boxes (no children):
for i, c in enumerate(contours):
    # Approximate the contour to a polygon:
    contoursPoly = cv2.approxPolyDP(c, 3, True)
    # Convert the polygon to a bounding rectangle:
    boundRect = cv2.boundingRect(contoursPoly)
    # Get the bounding rect's data:
    rectX = boundRect[0]
    rectY = boundRect[1]
    rectWidth = boundRect[2]
    rectHeight = boundRect[3]
    # Estimate the bounding rect area:
    rectArea = rectWidth * rectHeight
    # Set a min area threshold:
    minArea = 100000
    # Filter blobs by area:
    if rectArea > minArea:
        # Get the convex hull for the target contour:
        hull = cv2.convexHull(c)
        # (Optional) Draw the hull:
        color = (0, 0, 255)
        cv2.polylines(inputImageCopy, [hull], True, color, 2)
You'll notice I've prepared beforehand a list (cornerList) in which I'll store (hopefully) all the corners. The last two lines of the previous snippet are optional; they draw the convex hull via cv2.polylines. This would be the resulting image:
Still inside the loop, after we compute the convex hull, we will get the corners via cv2.goodFeaturesToTrack, which implements a Corner Detector. The function receives a binary image, so we need to prepare a black image with the convex hull points drawn in white:
# Create image for good features to track:
(height, width) = cannyImage.shape[:2]
# Black image same size as original input:
hullImg = np.zeros((height, width), dtype=np.uint8)
# Draw the points:
cv2.drawContours(hullImg, [hull], 0, 255, 2)
cv2.imshow("hullImg", hullImg)
cv2.waitKey(0)
This is the image:
Now, we must set the corner detector. It needs the number of corners you are looking for, a minimum "quality" parameter that discards poor points detected as "corners" and a minimum distance between the corners. Check out the documentation for more parameters. Let's set the detector, it will return an array of points where it detected a corner. After we get this array, we will store each point in our cornerList, like this:
# Set the corner detection:
maxCorners = 4
qualityLevel = 0.01
minDistance = int(max(height, width) / maxCorners)
# Get the corners:
corners = cv2.goodFeaturesToTrack(hullImg, maxCorners, qualityLevel, minDistance)
corners = np.intp(corners)  # np.intp replaces the removed np.int0 alias
# Loop through the corner array and store/draw the corners:
for c in corners:
    # Flatten the array of corner points:
    (x, y) = c.ravel()
    # Store the corner point in the list:
    cornerList.append((x, y))
    # (Optional) Draw the corner points:
    cv2.circle(inputImageCopy, (x, y), 5, 255, 5)
cv2.imshow("Corners", inputImageCopy)
cv2.waitKey(0)
Additionally, you can draw the corners as circles; it will yield this image:
This is the same algorithm tested on your third image:
I am attempting to find the area inside an arbitrarily-shaped closed curve plotted in Python (example image below). So far, I have tried to use both the alphashape and polygon methods to achieve this, but both have failed. I am now attempting to use OpenCV and the flood-fill method to count the number of pixels inside the curve; I will later convert that count to an area, given the area that a single pixel encloses on the plot.
Example image:
testplot.jpg
In order to do this, I am doing the following, which I adapted from another post about OpenCV.
import cv2
import numpy as np
# Input image
img = cv2.imread('testplot.jpg', cv2.IMREAD_GRAYSCALE)
# Dilate to better detect contours
temp = cv2.dilate(img, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
# Find largest contour. Use 255-temp and cv2.RETR_TREE to account for how cv2
# expects the background to be black, not white, so the background is converted to black:
cnts, _ = cv2.findContours(255 - temp, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
largestCnt = []  # I expect this to yield the blue contour
for cnt in cnts:
    if len(cnt) > len(largestCnt):
        largestCnt = cnt
# Determine center of area of largest contour
M = cv2.moments(largestCnt)
x = int(M["m10"] / M["m00"])
y = int(M["m01"] / M["m00"])
# Initial mask for flood filling, should cover entire figure
width, height = temp.shape
mask = np.ones((width + 2, height + 2), np.uint8) * 255
mask[1:width, 1:height] = 0
# Generate intermediate image, draw largest contour onto it, flood fill this contour
temp = np.zeros(temp.shape, np.uint8)
temp = cv2.drawContours(temp, largestCnt, -1, 255, cv2.FILLED)
_, temp, mask, _ = cv2.floodFill(temp, mask, (x, y), 255)
temp = cv2.morphologyEx(temp, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
area = cv2.countNonZero(temp) #Number of pixels encircled by blue line
From this I expect to get the same image as above, but with the interior of the contour filled in white and the background and original blue contour in black. Instead, I end up with this:
result.jpg
While this at first glance appears to have accurately turned the area inside the contour white, the white area is actually larger than the area inside the contour, so the result overestimates the number of pixels inside it.
Any input on this would be greatly appreciated. I am fairly new to OpenCV so I may have misunderstood something.
EDIT:
Thanks to a comment below, I made some edits and this is now my code, with edits noted:
import cv2
import numpy as np
# EDITED INPUT IMAGE: Input image
img = cv2.imread('testplot2.jpg', cv2.IMREAD_GRAYSCALE)
# EDIT: threshold
_, temp = cv2.threshold(img, 250, 255, cv2.THRESH_BINARY_INV)
# EDIT, REMOVED: Dilate to better detect contours
# Find largest contour
cnts, _ = cv2.findContours(temp, cv2.RETR_EXTERNAL , cv2.CHAIN_APPROX_NONE)
largestCnt = []  # I expect this to yield the blue contour
for cnt in cnts:
    if len(cnt) > len(largestCnt):
        largestCnt = cnt
# Determine center of area of largest contour
M = cv2.moments(largestCnt)
x = int(M["m10"] / M["m00"])
y = int(M["m01"] / M["m00"])
# Initial mask for flood filling, should cover entire figure
width, height = temp.shape
mask = np.ones((width + 2, height + 2), np.uint8) * 255
mask[1:width, 1:height] = 0
# Generate intermediate image, draw largest contour, flood filled
temp = np.zeros(temp.shape, np.uint8)
temp = cv2.drawContours(temp, largestCnt, -1, 255, cv2.FILLED)
_, temp, mask, _ = cv2.floodFill(temp, mask, (x, y), 255)
temp = cv2.morphologyEx(temp, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
area = cv2.countNonZero(temp) #Number of pixels encircled by blue line
I input a different image, with the axes and the frame that Python adds by default removed for ease. I get what I expect at the second step (this image). However, in the result both the original contour and the area it encircles appear to have been made white, whereas I want the original contour to be black and only the area it encircles to be white. How might I achieve this?
The problem is your opening operation at the end. This morphological operation includes a dilation at the end that expands the white contour, increasing its area. Let’s try a different approach where no morphology is involved. These are the steps:
Convert your image to grayscale
Apply Otsu’s thresholding to get a binary image, let’s work with black and white pixels only.
Apply a first flood-fill operation at image location (0,0) to get rid of the outer white space.
Filter small blobs using an area filter
Find the “Curve Canvas” (The white space that encloses the curve) and locate and store its starting point at (targetX, targetY)
Apply a second flood-fill at location (targetX, targetY)
Get the area of the isolated blob with cv2.countNonZero
Let’s take a look at the code:
import cv2
import numpy as np
# Set image path
path = "C:/opencvImages/"
fileName = "cLIjM.jpg"
# Read Input image
inputImage = cv2.imread(path+fileName)
inputCopy = inputImage.copy()
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Threshold via Otsu + bias adjustment:
threshValue, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
This is the binary image you get:
Now, let’s flood-fill at the corner located at (0,0) with a black color to get rid of the first white space. This step is very straightforward:
# Flood-fill background, seed at (0,0) and use black color:
cv2.floodFill(binaryImage, None, (0, 0), 0)
This is the result, note how the first big white area is gone:
Let’s get rid of the small blobs applying an area filter. Everything below an area of 100 is gonna be deleted:
# Perform an area filter on the binary blobs:
componentsNumber, labeledImage, componentStats, componentCentroids = \
    cv2.connectedComponentsWithStats(binaryImage, connectivity=4)
# Set the minimum pixels for the area filter:
minArea = 100
# Get the indices/labels of the remaining components based on the area stat
# (skip the background component at index 0)
remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]
# Filter the labeled pixels based on the remaining labels,
# assign pixel intensity to 255 (uint8) for the remaining pixels
filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels), 255, 0).astype('uint8')
This is the result of the filter:
Now, what remains is the second white area. I need to locate its starting point, because I want to apply a second flood-fill operation at that location. I'll traverse the image to find the first white pixel, like this:
# Get Image dimensions:
height, width = filteredImage.shape
# Store the flood-fill point here:
targetX = -1
targetY = -1
for i in range(0, width):
    for j in range(0, height):
        # Get current binary pixel:
        currentPixel = filteredImage[j, i]
        # Check if it is the first white pixel:
        if targetX == -1 and targetY == -1 and currentPixel == 255:
            targetX = i
            targetY = j
            print("Flooding in X = " + str(targetX) + " Y: " + str(targetY))
There’s probably a more elegant, Python-oriented way of doing this, but I’m still learning the language. Feel free to improve the script (and share it here). The loop, however, gets me the location of the first white pixel, so I can now apply a second flood-fill at this exact location:
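For instance, a vectorized NumPy version of the scan (a sketch that should find the same first white pixel, in the same column-major order as the loop):

import numpy as np

# Coordinates of every white pixel:
ys, xs = np.nonzero(filteredImage == 255)
if xs.size > 0:
    # Sort by x first, then y, to match the loop's column-major scan:
    k = np.lexsort((ys, xs))[0]
    targetX, targetY = int(xs[k]), int(ys[k])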
# Flood-fill background, seed at (targetX, targetY) and use black color:
cv2.floodFill(filteredImage, None, (targetX, targetY), 0)
You end up with this:
As you can see, all that's left is to count the number of non-zero pixels:
# Get the area of the target curve:
area = cv2.countNonZero(filteredImage)
print("Curve Area is: "+str(area))
The result is:
Curve Area is: 1510
Here is another approach using Python/OpenCV.
Read the input
Convert to HSV colorspace
Threshold on the color range of blue
Find the largest contour
Get its area and print it
Draw the contour as a white filled contour on a black background
Save the results
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('closed_curve.jpg')
# convert to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# select blue color range in hsv
lower = (24,128,115)
upper = (164,255,255)
# threshold on blue in hsv
thresh = cv2.inRange(hsv, lower, upper)
# get largest contour
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contours = contours[0] if len(contours) == 2 else contours[1]
big_contour = max(contours, key=cv2.contourArea)
area = cv2.contourArea(big_contour)
print("Area =",area)
# draw filled contour on black background
result = np.zeros_like(thresh)
cv2.drawContours(result, [big_contour], -1, 255, cv2.FILLED)
# save result
cv2.imwrite("closed_curve_thresh.jpg", thresh)
cv2.imwrite("closed_curve_result.jpg", result)
# view result
cv2.imshow("threshold", thresh)
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Threshold Image:
Result Filled Contour On Black Background:
Area Result:
Area = 2347.0
My goal is to accurately measure the diameter of a hole from a microscope image. The workflow is: take an image, process it for fitting, fit, convert the radius in pixels to mm, and write it to a CSV.
This is an output of my image processing script used to measure the diameter of a hole. I'm having an issue where it seems like my circle fitting is prioritizing matching the contour rather than something like a least squares approach.
I've alternatively averaged many fits in something like this:
My issue here is that I'd like to quickly scan the output to make sure the circle fit is appropriate. The trade-off is that the more fits I average, the more realistic the result, but the fewer I have, the easier it is to verify the number is correct. My circles aren't always as pretty and circular as this one, so it's important to me.
Here's the piece of my script that fits circles. Could you take a look and tell me how to do more of a least-squares approach, on the order of 5 circles? I don't want to use minimum-enclosing-circle detection, because a fluid is flowing through this hole, so I'd like it to be more like a hydraulic diameter. Thanks!
# Make black and white:
(thresh, blackAndWhiteImage0) = cv2.threshold(img0, 100, 255, cv2.THRESH_BINARY)
# Get rid of noise:
median0 = cv2.medianBlur(blackAndWhiteImage0, 151)
# Fit circles to the image:
circles0 = cv2.HoughCircles(median0, cv2.HOUGH_GRADIENT, 1, minDist=5, param1=25, param2=10,
                            minRadius=min_radius_small, maxRadius=max_radius_small)
Here is another way to fit a circle, using Python/OpenCV/Skimage: get the equivalent circle center and radius from the binary image via connected components, then draw a circle from that.
Input:
import cv2
import numpy as np
from skimage import measure
# load image
img = cv2.imread("dark_circle.png")
# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# blur
blur = cv2.GaussianBlur(gray, (3,3), 0)
# threshold
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# apply morphology open with a circular shaped kernel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5))
binary = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)
# find contour and draw on input (for comparison with circle)
cnts = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
c = cnts[0]
result = img.copy()
cv2.drawContours(result, [c], -1, (0, 255, 0), 1)
# find radius and center of equivalent circle from binary image and draw circle
# see https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.regionprops
# Note: this should be the same as getting the centroid and area=cv2.CC_STAT_AREA
# from cv2.connectedComponentsWithStats and computing radius = 0.5*sqrt(4*area/pi),
# or approximately from the area of the contour and the centroid computed via image moments.
regions = measure.regionprops(binary)
circle = regions[0]
yc, xc = circle.centroid
radius = circle.equivalent_diameter / 2.0
print("radius =",radius, " center =",xc,",",yc)
xx = int(round(xc))
yy = int(round(yc))
rr = int(round(radius))
cv2.circle(result, (xx,yy), rr, (0, 0, 255), 1)
# write result to disk
cv2.imwrite("dark_circle_fit.png", result)
# display it
cv2.imshow("image", img)
cv2.imshow("thresh", thresh)
cv2.imshow("binary", binary)
cv2.imshow("result", result)
cv2.waitKey(0)
Result showing contour (green) compared to circle fit (red):
Circle Radius and Center:
radius = 117.6142467296168 center = 220.2169911178609 , 150.26823599797507
A least squares fit method (between the contour points and a circle) can be obtained using Scipy. For example, see:
https://gist.github.com/lorenzoriano/6799568
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html
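For completeness, here is a minimal sketch of such a least-squares fit with scipy.optimize.least_squares, assuming pts is an (N, 2) array of contour points (for example, np.squeeze(c) from the contour above):

import numpy as np
from scipy import optimize

def fit_circle_lsq(pts):
    # Least-squares circle fit: minimize the radial residuals of the points.
    x, y = pts[:, 0].astype(float), pts[:, 1].astype(float)

    def residuals(params):
        cx, cy, r = params
        return np.hypot(x - cx, y - cy) - r

    # Initialize at the centroid, with the mean distance as the radius:
    cx0, cy0 = x.mean(), y.mean()
    r0 = np.hypot(x - cx0, y - cy0).mean()
    return optimize.least_squares(residuals, x0=[cx0, cy0, r0]).x  # cx, cy, r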
I would suggest computing a mask as in nathancy's answer, but then simply counting the number of pixels in the mask opening that he computed (which is an unbiased estimate of the area of the hole), and then translating the area to a radius using radius = sqrt(area/pi). This will give you the radius of the circle with the same area as the hole, and corresponds to one method to obtain a best fit circle.
A different way of obtaining a best-fit circle is to take the contour of the hole (as returned in cnts by cv2.findContours in nathancy's answer), find its centroid, and then compute the mean distance of each vertex to the centroid. This would correspond approximately* to a least-squares fit of a circle to the hole perimeter.
* I say approximately because the vertices of the contour are an approximation to the contour, and the distances between these vertices are likely not uniform. The error should be really small, though.
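For reference, a rough OpenCV/NumPy sketch of both estimates (assuming opening is the binary hole mask from nathancy's answer):

import cv2
import numpy as np

# Method 1: radius of the circle with the same area as the mask:
area = cv2.countNonZero(opening)
radius_area = np.sqrt(area / np.pi)

# Method 2: mean distance from the contour vertices to the centroid:
cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
c = max(cnts, key=cv2.contourArea)
M = cv2.moments(c)
cx, cy = M["m10"] / M["m00"], M["m01"] / M["m00"]
pts = np.squeeze(c).astype(float)
radius_mean = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy).mean()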
Here's a code example using DIPlib (disclosure: I'm an author). Note: the import PyDIP statement below requires you to install DIPlib, and you cannot install it with pip; there is a binary release for Windows on the GitHub page, or otherwise you need to build it from sources.
import PyDIP as dip
import imageio
import math
img = imageio.imread('https://i.stack.imgur.com/szvc2.jpg')
img = dip.Image(img[:,2600:-1])
img.SetPixelSize(0.01, 'mm') # Use your actual values!
bin = ~dip.OtsuThreshold(dip.Gauss(img, [3]))
bin = dip.Opening(bin, 25)
#dip.Overlay(img, bin - dip.BinaryErosion(bin, 1, 3)).Show()
msr = dip.MeasurementTool.Measure(dip.Label(bin), features=['Size', 'Radius'])
#print(msr)
print('Method 1:', math.sqrt(msr[1]['Size'][0] / 3.14), 'mm')
print('Method 2:', msr[1]['Radius'][1], 'mm')
The MeasurementTool.Measure function computes 'Size', which is the area; and 'Radius', which returns the max, mean, min and standard deviation of the distances between each boundary pixel and the centroid. From 'Radius', we take the 2nd value, the mean radius.
This outputs:
Method 1: 7.227900647539411 mm
Method 2: 7.225178113501325 mm
But do note that I assigned a random pixel size (0.01mm per pixel), you'll need to fill in the right pixels-to-mm conversion value.
Note how the two estimates are very close. Both methods are good, unbiased estimates. The first method is computationally cheaper.
One suggestion I have is looking at cv2.fitEllipse().
Hopefully you can use the aspect ratio between the ellipse width and height to tell the odd ones out.
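A minimal sketch of that idea, assuming cnts holds the detected contours and image is the frame to draw on (cv2.fitEllipse needs at least 5 contour points):

import cv2

for c in cnts:
    if len(c) >= 5:  # fitEllipse requires at least 5 points
        (ecx, ecy), (w, h), angle = cv2.fitEllipse(c)
        aspect = min(w, h) / max(w, h)  # close to 1.0 for a circular hole
        if aspect > 0.9:  # the 0.9 cut-off is just a guess
            cv2.ellipse(image, ((ecx, ecy), (w, h), angle), (36, 255, 12), 2)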
An approach is to Gaussian blur, then Otsu's threshold the image to obtain a binary image. From here, we perform morphological opening with an elliptical-shaped kernel to effectively remove the tiny noise particles. To obtain a nice estimate of the circle, we find contours and use cv2.minEnclosingCircle(), which also gives us the center point and the radius. Here's a visualization:
Input image (screenshotted)
Binary image
Morph open
Result -> Result w/ radius
Radius: 122.11396026611328
From here you can convert the radius in pixels to mm based on your calibration scale.
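For example (the calibration value here is a placeholder; r comes from the code below):

mm_per_pixel = 0.0047  # placeholder: measure this from a calibration target
diameter_mm = 2 * r * mm_per_pixel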
Code
import cv2
import numpy as np
# Load image, convert to grayscale, Gaussian blur, then Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Morph open with an elliptical-shaped kernel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)
# Find contours and draw minimum enclosing circle
cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    ((x, y), r) = cv2.minEnclosingCircle(c)
    cv2.circle(image, (int(x), int(y)), int(r), (36, 255, 12), 2)
    print('Radius: {}'.format(r))
cv2.imshow('thresh', thresh)
cv2.imshow('opening', opening)
cv2.imshow('image', image)
cv2.waitKey()
I'm trying to write a program that can detect the straight line cut on the circular lens, as seen on the left side of the image:
Now, I have tried using Canny edge detection, the Hough line transform, and findContours to separate the line alone, but I have been unsuccessful in doing so.
I have also tried to detect the line by first detecting the outer circle of the lens and performing a contour search within the ROI (the detected circle), but I get random lines across the lens and not the output I want.
First of all, I would like to point out that your image is very noisy, meaning that simply looking for contours, edges, or lines will probably not work due to the noise. This makes the task very difficult. If you are looking for a way to automate such a task, I would suggest putting some effort into finding the right lighting (I think a classical dome light would suffice), as it would produce much less noise in the image (fewer reflections...) and hence make it easier to build such an algorithm.
That being said, I have made an example of how I would try to achieve such a task. Note that this solution might not work for other images, but for this one example the result is quite good. It will maybe give you a new point of view on how to tackle this problem.
First, I would perform a histogram equalization before transforming the image to binary with an Otsu threshold. After that, I would perform opening (erosion followed by dilation) on the image:
After that, I would make a bounding box over the biggest contour. With x, y, w, and h, I can calculate the center of the bounding box, which will serve as the center of the ROI I am going to create. Draw a circle with a radius slightly less than w/2 on a copy of the image, and a circle with a radius equal to w/2 on a new mask. Then perform a bitwise operation:
Now you have your ROI and have to threshold it again to get a boundary without noise, and search for contours:
Now you can see that you have two contours (inner and outer), so you can extract the area where the lens is cut. You can do this by calculating the distances between every point of the inner contour and the outer contour. The formula for the distance between two points is sqrt((x2-x1)^2 + (y2-y1)^2). Threshold these distances so that if a distance is less than some integer, a line is drawn between the two points on the image; I drew the distances with blue lines. After that, transform the image to HSV colorspace and mask it with a bitwise operation again, so that all that is left are those blue lines:
Perform an Otsu threshold again, select the biggest contour (those blue lines), and fit a line through the contour. Draw the line on the original image and you will get the final result:
Example code:
import cv2
import numpy as np
### Perform histogram equalization and threshold with OTSU.
img = cv2.imread('lens.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
equ = cv2.equalizeHist(gray)
_, thresh = cv2.threshold(equ,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
### Perform opening (erosion followed by dilation) and search for contours.
kernel = np.ones((2,2),np.uint8)
opening = cv2.morphologyEx(thresh,cv2.MORPH_OPEN,kernel, iterations = 2)
### Note: this uses the OpenCV 3.x API; in OpenCV 4.x, findContours returns
### only (contours, hierarchy).
_, contours, hierarchy = cv2.findContours(opening,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
### Select the biggest one and create a bounding box.
### This will be used to calculate the center of your ROI.
cnt = max(contours, key=cv2.contourArea)
### Calculate x and y of the center.
x,y,w2,h2 = cv2.boundingRect(cnt)
center_x = int(x+(w2/2))
center_y = int(y+(h2/2))
### Create the radius of your inner circle ROI and draw it on a copy of the image.
img2 = img.copy()
radius = int((w2/2)-20)
cv2.circle(img2,(center_x,center_y), radius, (0,0,0), -1)
### Create the radius of your inner circle ROI and draw it on a blank mask.
radius_2 = int(w2/2)
h,w = img.shape[:2]
mask = np.zeros((h, w), np.uint8)
cv2.circle(mask,(center_x,center_y), radius_2, (255,255,255), -1)
### Perform a bitwise operation so that you will get your ROI
res = cv2.bitwise_and(img2, img2, mask=mask)
### Modify the image a bit to eliminate noise with thresholding and closing.
_, thresh = cv2.threshold(res,190,255,cv2.THRESH_BINARY)
kernel = np.ones((3,3),np.uint8)
closing = cv2.morphologyEx(thresh,cv2.MORPH_CLOSE,kernel, iterations = 2)
### Search for contours again and select two biggest one.
gray = cv2.cvtColor(closing,cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
_, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
area = sorted(contours, key=cv2.contourArea, reverse=True)
contour1 = area[0]
contour2 = area[1]
### Iterate through both contours and calculate the minimum distance.
### If it is less than the threshold you provide, draw the lines on the image.
### Formula is sqrt((x2-x1)^2 + (y2-y1)^2).
for i in contour1:
    x = i[0][0]
    y = i[0][1]
    for j in contour2:
        x2 = j[0][0]
        y2 = j[0][1]
        dist = np.sqrt((x2-x)**2 + (y2-y)**2)
        if dist < 12:
            xy = (x, y)
            x2y2 = (x2, y2)
            cv2.line(img2, xy, x2y2, (255, 0, 0), 2)
### Transform the image to HSV colorspace and mask the result.
hsv = cv2.cvtColor(img2, cv2.COLOR_BGR2HSV)
lower_blue = np.array([110,50,50])
upper_blue = np.array([130,255,255])
mask = cv2.inRange(hsv, lower_blue, upper_blue)
res = cv2.bitwise_and(img2,img2, mask= mask)
### Search for contours again.
gray = cv2.cvtColor(res,cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
_, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
### Fit a line through the contour and draw it on the original image.
[vx,vy,x,y] = cv2.fitLine(cnt, cv2.DIST_L2,0,0.01,0.01)
left = int((-x*vy/vx) + y)
right = int(((w-x)*vy/vx)+y)
cv2.line(img,(w-1,right),(0,left),(0,0,255),2)
### Display the result.
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()