OpenCV HoughLines - python

Currently I'm working on a free-time project which consists of parsing a video feed into an HTML template. In order to differentiate the types of DOM elements, I use four differently angled lines in a rectangle (please ignore the triangle and the circle), but to calculate their angles, I must first find the lines. OpenCV's HoughLinesP comes in handy, but for whatever reason it finds only a few lines.
Here are some sample images: on the left is the source image (with the found lines drawn on it in green), and on the right is the processed image. First, the source is converted to grayscale, then I run Canny on it. (I tried blurring it too.)
In my opinion, only a small number of lines is detected. This makes me question my threshold parameters. Currently they are the following:
gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
canny = cv2.Canny(gray, 50, 150, apertureSize=3)
And then for the line detection:
rho = 1.5
theta = np.pi / 180.
threshold = 60
min_line_length = 100
max_line_gap = 10
# minLineLength and maxLineGap must be passed by keyword: the fifth
# positional argument of cv2.HoughLinesP is `lines`, not minLineLength
lines = cv2.HoughLinesP(canny, rho, theta, threshold,
                        minLineLength=min_line_length,
                        maxLineGap=max_line_gap)
Please note that I'm only running edge/line detection on components which have a parent; that's why there are no lines detected on the outermost component.
Am I doing something wrong? Are my parameters completely off? I would greatly appreciate any suggestions on how to improve this!
Thank you in advance!

Related

How to measure the central angle with Python cv2 package

Our team set up a vision system with a camera, a microscope and a tunable lens to look at the internal surface of a cone.
Visually speaking, the camera takes 12 images per cone, with each image covering 30 degrees.
Now we've collected many sample images and want to make sure each "fan" (as shown below) is at least 30 degrees.
Is there any way in Python, with cv2 or other packages, to measure this central angle? Thanks.
Here is one way to do that in Python/OpenCV.
Read the image
Convert to gray
Threshold
Use morphology open and close to smooth and fill out the boundary
Apply Canny edge extraction
Separate the image into top edge and bottom edge by blackening the opposite side to each edge
Fit lines to the top and bottom edges
Compute the angle of each edge
Compute the difference between the two angles
Draw the lines on the input
Save the results
Input:
import cv2
import numpy as np
import math
# read image
img = cv2.imread('cone_shape.jpg')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# threshold
thresh = cv2.threshold(gray,11,255,cv2.THRESH_BINARY)[1]
# apply open then close to smooth boundary
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (13,13))
morph = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
kernel = np.ones((33,33), np.uint8)
morph = cv2.morphologyEx(morph, cv2.MORPH_CLOSE, kernel)
# apply canny edge detection
edges = cv2.Canny(morph, 150, 200)
hh, ww = edges.shape
hh2 = hh // 2
# split edge image in half vertically and blacken opposite half
top_edge = edges.copy()
top_edge[hh2:hh, 0:ww] = 0
bottom_edge = edges.copy()
bottom_edge[0:hh2, 0:ww] = 0
# get coordinates of white pixels in top and bottom
# note: need to transpose y,x in numpy to x,y for opencv
top_white_pts = np.argwhere(top_edge.transpose()==255)
bottom_white_pts = np.argwhere(bottom_edge.transpose()==255)
# fit lines to white pixels
# (x,y) is point on line, (vx,vy) is unit vector along line
(vx1,vy1,x1,y1) = cv2.fitLine(top_white_pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
(vx2,vy2,x2,y2) = cv2.fitLine(bottom_white_pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
# compute angle for vectors vx,vy
top_angle = (180/math.pi)*math.atan(vy1/vx1)
bottom_angle = (180/math.pi)*math.atan(vy2/vx2)
print(top_angle, bottom_angle)
# cone angle is the difference
cone_angle = math.fabs(top_angle - bottom_angle)
print(cone_angle)
# draw lines on input
lines = img.copy()
p1x1 = int(x1-1000*vx1)
p1y1 = int(y1-1000*vy1)
p1x2 = int(x1+1000*vx1)
p1y2 = int(y1+1000*vy1)
cv2.line(lines, (p1x1,p1y1), (p1x2,p1y2), (0, 0, 255), 1)
p2x1 = int(x2-1000*vx2)
p2y1 = int(y2-1000*vy2)
p2x2 = int(x2+1000*vx2)
p2y2 = int(y2+1000*vy2)
cv2.line(lines, (p2x1,p2y1), (p2x2,p2y2), (0, 0, 255), 1)
# save resulting images
cv2.imwrite('cone_shape_thresh.jpg',thresh)
cv2.imwrite('cone_shape_morph.jpg',morph)
cv2.imwrite('cone_shape_edges.jpg',edges)
cv2.imwrite('cone_shape_lines.jpg',lines)
# show thresh and result
cv2.imshow("thresh", thresh)
cv2.imshow("morph", morph)
cv2.imshow("edges", edges)
cv2.imshow("top edge", top_edge)
cv2.imshow("bottom edge", bottom_edge)
cv2.imshow("lines", lines)
cv2.waitKey(0)
cv2.destroyAllWindows()
Thresholded image:
Morphology processed image:
Edge Image:
Lines on input:
Cone Angle (in degrees):
42.03975696357633
That sounds possible. You need to do some preprocessing and filtering to figure out what works and there is probably some tweaking involved.
There are three approaches that could work.
1.)
The basic idea is to somehow get two lines and measure the angle between them.
Pick a threshold that defines the outer black region (outside the central angle) and set all values below it to zero.
This will also set some of the blurry stripes inside the central angle to zero, so we have to try to "heal" them. This is done using morphological transformations. You can read about them here and here.
You could try the Closing operation, but I don't know if it fixes stripes. Usually it fixes dots or scratches. This answer seems to indicate that it should work on lines.
Maybe at that point apply some Gaussian blurring, do the thresholding again, and then try some edge or line detection.
It's basically trial and error; you have to see what works.
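A minimal sketch of that pipeline might look like this (the file name and every threshold/kernel value below are placeholders to tune, not values validated on these images):
import cv2

# hypothetical file name; every numeric value here is a starting point to tune
img = cv2.imread('cone.jpg', cv2.IMREAD_GRAYSCALE)

# threshold away the outer black region (everything below the cutoff -> 0)
_, mask = cv2.threshold(img, 40, 255, cv2.THRESH_BINARY)

# closing (dilate then erode) to "heal" the dark stripes inside the fan
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# blur, re-threshold, then run edge detection on the cleaned mask
mask = cv2.GaussianBlur(mask, (5, 5), 0)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
edges = cv2.Canny(mask, 50, 150)
cv2.imwrite('cone_edges.jpg', edges)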
2.)
Another thing that could work is to try to use the arc-like scratches, maybe even strengthen them, and use the Hough Circle Transform. I think it detects arcs as well.
Just try it and see what the function returns. In the best case there are several circles / arcs that you can use to estimate the central angle.
There are several approaches on arc detection here on StackOverflow or here.
I am not sure if that's the same for all your images, but the one above looks like there are some thin green and pink arcs that seem to stretch all along the central angle. You could filter for that color, then convert to grayscale.
This question might be helpful.
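As a rough starting point for the Hough Circle idea (the file name and all parameters below are guesses, not values tuned to the actual images):
import cv2
import numpy as np

gray = cv2.imread('cone.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical file
gray = cv2.medianBlur(gray, 5)

# a low param2 accepts less complete circles (arcs) at the cost of
# more false positives; experiment with all of these values
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=25, minRadius=0, maxRadius=0)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print('center=({}, {}), radius={}'.format(x, y, r))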
3.)
Apply an edge filter, e.g. Canny (skimage.feature.canny).
Try several sigmas and post the images in your question; then we can try to think about how to continue.
What could work is to calculate the convex hull around all points that are part of an edge. Then get the two lines that form the central angle from the convex hull.
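A sketch of the convex hull step, using cv2 instead of skimage and a hypothetical file name:
import cv2
import numpy as np

# any binary edge image works here (e.g. the Canny output from above)
edges = cv2.Canny(cv2.imread('cone.jpg', cv2.IMREAD_GRAYSCALE), 50, 150)

pts = np.argwhere(edges > 0)[:, ::-1].astype(np.int32)  # (row, col) -> (x, y)
hull = cv2.convexHull(pts)                              # polygon, shape (N, 1, 2)

# the hull vertices can then be scanned for the two long straight segments
# that bound the fan, e.g. by segment length and orientation
print(hull.reshape(-1, 2))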

how to center MRI images

I work on MRIs. The problem is that the images are not always centered. In addition, there are often black bands around the patient's body.
I would like to be able to remove the black borders and center the patient's body like this:
I have already tried to determine the edges of the patient's body by reading the pixel array, but I haven't come up with anything very conclusive.
In fact, my solution works on only 50% of the images... I don't see any other way to do it...
Development environment: Python3.7 + OpenCV3.4
I'm not sure this is the standard or most efficient way to do this, but it seems to work:
import cv2
import numpy as np

# Load image as grayscale (since it's b&w to start with)
im = cv2.imread('im.jpg', cv2.IMREAD_GRAYSCALE)
# Threshold it. I tried a few pixel values, and got something reasonable at min = 5
_,thresh = cv2.threshold(im,5,255,cv2.THRESH_BINARY)
# Find contours:
# Find contours (OpenCV 3.x returns three values; in OpenCV 4.x, drop im2):
im2, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
# Put all contours together and reshape to (_,2).
# The first "column" will be your x values of your contours, and second will be y values
c = np.vstack(contours).reshape(-1,2)
# Extract the leftmost, rightmost, uppermost and lowermost points
xmin = np.min(c[:,0])
ymin = np.min(c[:,1])
xmax = np.max(c[:,0])
ymax = np.max(c[:,1])
# Use those as a guide of where to crop your image
crop = im[ymin:ymax, xmin:xmax]
cv2.imwrite('cropped.jpg', crop)
What you get in the end is this:
There are multiple ways to do this, and this answer is pretty much a collection of computer vision tips and tricks.
If the mass is in the center, and the area outside is always going to be black, you can threshold the image and then find the edge pixels like you already are. I'd add 10 pixels to the border to adjust for variance in the thresholding.
Or, if the body is always similarly sized, you can find the centroid of the blob (the white area in the thresholded image) and then crop a fixed area around it.
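A minimal sketch of that centroid variant (the crop half-sizes w and h are made-up values you would fix per dataset):
import cv2

im = cv2.imread('im.jpg', cv2.IMREAD_GRAYSCALE)
_, thresh = cv2.threshold(im, 5, 255, cv2.THRESH_BINARY)

# centroid of the white blob from the image moments
M = cv2.moments(thresh, binaryImage=True)
cx, cy = int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])

# fixed-size crop around the centroid; w and h are hypothetical half-sizes
# chosen to fit the dataset
w, h = 200, 250
crop = im[max(cy - h, 0):cy + h, max(cx - w, 0):cx + w]
cv2.imwrite('cropped_centered.jpg', crop)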

Detecting silver and reflecting balls with OpenCV

I am trying to detect silver balls reflecting the environment with OpenCV:
With black balls, I successfully did it by detecting circles:
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray,(5,5),0);
gray = cv2.medianBlur(gray,5)
gray = cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,11,3.5)
kernel = np.ones((3,3),np.uint8)
gray = cv2.erode(gray,kernel,iterations = 1)
gray = cv2.dilate(gray,kernel,iterations = 1)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 260, \
                           param1=30, param2=65, minRadius=0, maxRadius=0)
But when using the program with silver balls, we don't get any result.
When looking at the edges calculated by the program, the edges of the ball are quite sharp, but the code is not recognizing any ball.
How do I improve the detection rate for the silver ball? I can think of two ways of doing that:
- Improving the edge calculation
- Making the circle detection accept an image with unclear edges
Is that possible? What is the best way of doing so?
Help is very appreciated.
You have to tune your parameters. The HoughCircles function does a good job of detecting circles (even with gaps). Note that HoughCircles performs an internal binarization using Canny edge detection, so you don't have to do any thresholding yourself.
Given your image above, the code
import cv2
from matplotlib import pyplot as plt
import numpy as np
PATH = 'path/to/the/image.jpg'
img = cv2.imread(PATH, cv2.IMREAD_GRAYSCALE)
plt.imshow(img, cmap='gray')
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 20, param1=130, param2=30, minRadius=0, maxRadius=0)
if circles is not None:
    for x, y, r in circles[0]:
        c = plt.Circle((x, y), r, fill=False, lw=3, ec='C1')
        plt.gca().add_patch(c)
plt.gcf().set_size_inches((12, 8))
plt.show()
yields the result
What do the different parameters mean?
The function signature is defined as
cv.HoughCircles(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]])
image and circles are self explanatory and will be skipped.
method
Specifies the variant of the Hough algorithm that is used internally. As stated in the documentation, only HOUGH_GRADIENT is supported at the moment. This method utilizes the 21HT (p. 2, "The 2-1 Hough Transform") algorithm. The major advantage of this variant lies in the reduction of memory usage. The standard way of detecting circles using the Hough transform requires a search in a 3D Hough space (x, y and radius). With 21HT, however, the Hough space is reduced to only two dimensions, which lowers memory consumption by a fair amount.
dp
The dp parameter sets the inverse accumulator resolution: dp = 1 keeps the accumulator at the same resolution as the input image, while dp = 2 makes it half as wide and tall. A good explanation can be found here. Note that this explanation uses the standard Hough transform as an example, but the effect is the same for 21HT; the accumulator for 21HT is just a bit different than for the standard HT.
minDist
Simply specifies the minimum distance between the centers of detected circles. In the code example above it's set to 20, which means that the centers of two detected circles must be at least 20 pixels apart. I'm not sure how OpenCV filters the circles out, but scanning the source code, it looks like circles with fewer votes are thrown out.
param1
Specifies the thresholds that are passed to the Canny Edge algorithm. Basically it's called like cv2.Canny(image, param1 / 2, param1).
param2
This paragraph should probably be validated by someone who is more familiar with the OpenCV source code.
param2 specifies the accumulator threshold. This value decides how complete a circle has to be in order to count as a valid circle. I'm not sure in which unit the parameter is given, though. But (scanning the source code again) it looks like it is an absolute vote threshold (meaning that it is directly affected by different radii).
The image below shows different circles (or what can be identified as a circle). The further you go to the right, the lower the threshold has to be in order to detect that circle.
minRadius and maxRadius
Simply limits the circle search to radii in the range [minRadius, maxRadius]. This is useful (and can increase performance) if you can approximate (or know) the size of the circles you are searching for.
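For example, if you knew the balls were roughly 40-60 px in radius, the search from the example above could be constrained like this (the bounds are purely illustrative):
import cv2

img = cv2.imread('path/to/the/image.jpg', cv2.IMREAD_GRAYSCALE)

# same call as in the example above, but with the radius search window
# narrowed to hypothetical bounds of 40-60 px
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=130, param2=30, minRadius=40, maxRadius=60)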

Trying to improve my road segmentation program in OpenCV

I am trying to make a program that is capable of identifying a road in a scene, and I proceeded to use morphological filtering and the watershed algorithm. However, the program produces either mediocre or bad results. It seems to do okay (though not well enough) if the road takes up most of the scene. In other pictures, however, the sky gets segmented instead (the watershed follows the clouds).
I tried to see if I could perform more image processing to improve the results, but this is the best I have so far and I don't know how to move forward to improve my program.
How can I improve my program?
Code:
import numpy as np
import cv2
from matplotlib import pyplot as plt
import imutils
def invert_img(img):
    img = (255 - img)
    return img
#img = cv2.imread('images/coins_clustered.jpg')
img = cv2.imread('images/road_4.jpg')
img = imutils.resize(img, height = 300)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
thresh = invert_img(thresh)
# noise removal
kernel = np.ones((3,3), np.uint8)
opening = cv2.morphologyEx(thresh,cv2.MORPH_OPEN,kernel, iterations = 4)
# sure background area
sure_bg = cv2.dilate(opening,kernel,iterations=3)
#sure_bg = cv2.morphologyEx(sure_bg, cv2.MORPH_TOPHAT, kernel)
# Finding sure foreground area
dist_transform = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
ret, sure_fg = cv2.threshold(dist_transform,0.7*dist_transform.max(),255,0)
# Finding unknown region
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg,sure_fg)
# Marker labelling
ret, markers = cv2.connectedComponents(sure_fg)
# Add one to all labels so that sure background is not 0, but 1
markers = markers+1
# Now, mark the region of unknown with zero
markers[unknown==255] = 0
'''
imgray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
imgray = cv2.GaussianBlur(imgray, (5, 5), 0)
img = cv2.Canny(imgray,200,500)
'''
markers = cv2.watershed(img,markers)
img[markers == -1] = [255,0,0]
cv2.imshow('background',sure_bg)
cv2.imshow('foreground',sure_fg)
cv2.imshow('threshold',thresh)
cv2.imshow('result',img)
cv2.waitKey(0)
For a start, segmentation problems are hard. The more general you want the solution to be, the harder it gets. Road segmentation is a well-known problem, and I'm sure you can find many papers which tackle this issue from various directions.
Something that helps me get ideas for computer vision problems is trying to think about what makes it so easy for me to detect something and so hard for a computer.
For example, let's look at the road in your images. What makes it unique from the background?
Distinct gray color.
Always has two white shoulder lines.
Always in the bottom section of the image.
Always has a separation line in the middle (yellow/white).
Pretty smooth.
Wider at the bottom, vanishing into the horizon.
Now, after we have found some unique features, we need to find ways to quantify them, so they become as obvious to the algorithm as they are to us.
Work on the RGB (or even better, HSV) image; don't convert it to grayscale at the beginning and lose all the color data. Look for gray areas (see the sketch after this list)!
Again, let's find white regions (inside gray ones). You can try edge detection in the specific orientation of the shoulder lines. You are looking for a line that spans about half the height of the image. Etc...
Let's delete the upper half of the image. It's unlikely that you'll ever have a road there, and you will get rid of a lot of noise in your algorithm.
See 2...
Let's check the local standard deviation, or some other smoothness feature.
If we find some shape, let's check whether it fits what we expect.
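A minimal sketch of ideas 1 and 3 combined, with purely illustrative HSV bounds for "gray asphalt" (low saturation, medium value):
import cv2
import numpy as np

img = cv2.imread('images/road_4.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# gray asphalt: any hue, low saturation, medium value (illustrative bounds)
mask = cv2.inRange(hsv, np.array([0, 0, 50]), np.array([180, 60, 200]))

# idea 3: the road lives in the bottom half, so discard the top half
mask[:mask.shape[0] // 2, :] = 0

road = cv2.bitwise_and(img, img, mask=mask)
cv2.imshow('road candidate', road)
cv2.waitKey(0)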
I know these are just ideas and I don't claim they are easy to implement, but if you want to improve your algorithm you must give it more "knowledge", just as you have.
Exploit some domain knowledge; in other words, make some simplifying assumptions. Even basic things like "the camera's not upside down" and "the pavement has a uniform hue" will improve the common case.
If you can treat crossroads as a special case, then finding the edges of the roadway may be a simpler and more useful task than finding the roadway itself.

OpenCV concave and convex corner points of polygons

Problem
I am working on a project where I need to get the bounding boxes of dumbbell-like shapes. However, I need the fewest points possible, and the boxes need to fit the shapes at all corners. Here's an image I made to test: Blurry, cracked, dumbbell shape
I don't care about the gaps going into the shape, I just want to clean it up, and straighten the edges so that I can get the contours of a shape like this: Cleaned up
I have been attempting to threshold() it, get its contours using findContours(), and then use approxPolyDP() to simplify the crazy number of points the contours end up with. So, after fiddling with this for about three days now, how can I simply get either:
Two boxes specifying the ends of the dumbbell and a rectangle in the middle, or
One contour with the twelve points for all the corners
The second option would be preferred since that really is my ultimate goal: getting the points that are at those corners.
A few things to note:
I am using OpenCV for Python
There will generally be many of these shapes of all sizes all over the input image
They will have only horizontal or vertical positioning. No strange 27-degree angles...
What I need:
I really don't need someone to write the code for me, I just need some method or algorithm in order to get this done, preferably with some simple examples.
My Code
Here is my overly clean code, with functions I don't even use but figure I'll use eventually:
import cv2
import numpy as np

class traceImage():
    def __init__(self, imageLocation):
        self.threshNum = 127
        self.im = cv2.imread(imageLocation)
        self.imOrig = self.im
        self.imGray = cv2.cvtColor(self.im, cv2.COLOR_BGR2GRAY)
        self.ret, self.imThresh = cv2.threshold(self.imGray, self.threshNum, 255, 0)
        self.contours, self.hierarchy = cv2.findContours(self.imThresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    def createGray(self):
        self.imGray = cv2.cvtColor(self.im, cv2.COLOR_BGR2GRAY)

    def adjustThresh(self, threshNum):
        self.ret, self.imThresh = cv2.threshold(self.imGray, threshNum, 255, 0)

    def getContours(self):
        self.contours, self.hierarchy = cv2.findContours(self.imThresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    def approximatePoly(self, percent):
        i = 0
        for shape in self.contours:
            shape = cv2.approxPolyDP(shape, percent*cv2.arcLength(shape, True), True)
            self.contours[i] = shape
            i += 1

    def drawContours(self, blobWidth, color=(255,255,255)):
        cv2.drawContours(self.im, self.contours, -1, color, blobWidth)

    def newWindow(self, name):
        cv2.namedWindow(name)

    def showImage(self, window):
        cv2.imshow(window, self.im)

    def display(self):
        while True:
            cv2.waitKey()

    def displayUntil(self, key):
        while True:
            pressed = cv2.waitKey()
            if pressed == key:
                break

if __name__ == "__main__":
    blobWidth = 30
    ti = traceImage("dumbell.png")
    ti.approximatePoly(0.01)
    for thresh in range(127,256):
        ti.adjustThresh(thresh)
        ti.getContours()
        ti.drawContours(blobWidth)
        ti.showImage("Image")
        ti.displayUntil(10)
    ti.createGray()
    ti.adjustThresh(127)
    ti.getContours()
    ti.approximatePoly(0.0099)
    ti.drawContours(2, (0,255,0))
    ti.showImage("Image")
    ti.display()
Code Explanation
I know I might not be doing some things right here, but hey, I'm proud of it :)
So, the idea is that there are very often holes and gaps in these dumbbells, and so I figured that if I iterated through all the threshold values from 127 to 255 and drew the contours onto the image with a large enough thickness, the thickness of drawing the contours would fill in any small enough holes, and I could use the new, blobby image to get the edges and then scale the sides back down to size. That was my thinking. There's got to be another, better way, though...
Summary
I want to end up with 12 points; one for each corner of the shape.
EDIT:
After trying out some erosion and dilation, it seems that the best option would be to slice the contours at certain points, use bounding boxes around the sliced shapes to get the right boxy corners, and then do some calculations to rejoin the boxes into one shape. A rather interesting challenge...
EDIT 2:
I discovered something that works well! I made my own line detection system that only detects horizontal or vertical lines, and then, on a detected line/contour edge, the program draws a black line that extends across the whole image, effectively slicing the image at the straight lines of the contours. Once it does that, it gets new contours of the sliced-up boxes, draws bounding boxes around the pieces, and then uses dilation to close the gaps. So far it works well on large shapes, but when the shapes are small, it tends to lose a bit of the shape.
So, after fiddling with erosion, dilation, closing, opening, and looking at straight contours, I have figured out a solution that works. Thank you @Ante and @a.alsram! Your two ideas combined helped me get to my solution. So here's how it works.
Method
The program iterates over each contour, and over every pair of points in the contour, looking for point pairs that lie on the same axis, and calculates the distance between them. If the distance is greater than an adjustable threshold, the program decides that those points form an edge of the shape. Then the program uses that edge and draws a black line along the whole contour, thus cutting the contour at that edge. The program then re-detects the contours; since the shape was cut, the pieces that were cut off are now their own contours, which are then bounded by bounding boxes. Finally, all shapes are dilated and eroded (a close operation) to rejoin the boxes that were cut apart.
This method can be done several times, but each time there is a little bit of accuracy loss. But it works for what I need and certainly was a fun challenge! Thanks for your help guys!
natebot13
Maybe a simple solution can help. If there is a threshold length for closing gaps, it is possible to split the image into a grid with cell lengths >= threshold, and use the cells that have something inside. With that there will be only horizontal and vertical lines, and by taking care that the grid follows the original horizontal and vertical lines, it will cover the main line features.
Update
Take a look at mathematical morphology. I think a closing operation with a structuring element of (2*k+1)x(2*k+1) pixels can do what you are looking for.
The algorithm should take a threshold parameter k, and perform a dilation and then an erosion. That means changing the image so that, for each white pixel, all neighbours within distance k (a (2*k+1)x(2*k+1) box) are set to white, and then changing the image so that, for each black pixel, all neighbours within distance k are set to black.
It is enough to do the operations on boundary pixels.
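In OpenCV terms, that closing could be sketched as follows (k and the threshold value are placeholders to tune; the file name comes from the question):
import cv2
import numpy as np

k = 5  # half-width of the gaps to close; tune per image
img = cv2.imread('dumbell.png', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# closing = dilation followed by erosion with the same kernel
kernel = np.ones((2 * k + 1, 2 * k + 1), np.uint8)
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
cv2.imwrite('dumbell_closed.png', closed)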
