Parallel line detection using Hough Transform, OpenCV and Python

I need help with an algorithm I've been working on. I'm trying to detect all the lines in a thresholded image and then output only those that are parallel. The thresholded image isolates the object of interest, and I then pass it through a Canny edge detector. The edge image is then passed through the Probabilistic Hough Transform. Now, I want the algorithm to be capable of detecting parallel lines in any image. I had in mind to do this by detecting the coordinates of all the lines and calculating their slope (and from that the angle). Parallel lines must have the same, or almost the same, angle, so that way I could output only the lines with the same angle. Maybe I could draw an imaginary line in the image and then use it as a reference for all the detected lines? I just don't understand how to use the coordinates of all the lines detected through the function cv2.HoughLinesP(). The documentation of this function says that the output is a 4D array, and this is confusing for me. This is a part of my code:
# Line detection through the Probabilistic Hough Transform
rho_res = .1               # [pixels]
theta_res = np.pi / 180.   # [radians]
threshold = 50             # [number of votes]
min_line_length = 100      # [pixels]
max_line_gap = 40          # [pixels]
lines = cv2.HoughLinesP(edge_image, rho_res, theta_res, threshold, np.array([]),
                        minLineLength=min_line_length, maxLineGap=max_line_gap)

# Draw the detected line segments and compute the slope of each one
if lines is not None:
    for i in range(len(lines)):
        coords = lines[i][0]
        slope = (float(coords[3]) - coords[1]) / (float(coords[2]) - coords[0])
        cv2.line(img, (coords[0], coords[1]), (coords[2], coords[3]), (0, 0, 255), 2, cv2.LINE_AA)
Any idea on how I could work through all the detected lines and then output only those that are parallel? I have tried a few algorithms online but none seems to work. Again, my problem is understanding and working with the output of the function cv2.HoughLinesP(). I have also found code that is supposed to calculate the slope. I tried it, but it just gives me one value (one slope). I want the slope of every line in the image.

Project the Hough transform onto the angle axis. This gives you a 1D signal as a function of theta that is proportional to the "amount of line" at that orientation. Peaks in this signal indicate orientations that have many parallel lines. Find the largest peak; that gives you a theta.
Now go back to the Hough transform image, and detect peaks with this value of theta (maybe allow a little bit of wiggle). Now you’ll have all parallel lines at this orientation.
Sorry I can’t give you code that works with cv2.HoughLinesP, I don’t know this function. I hope this description gives you a starting point.
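A minimal sketch of this idea; since OpenCV does not expose the Hough accumulator, this sketch assumes scikit-image is available and uses skimage.transform.hough_line to obtain it (edge_image is the Canny output from the question):

import numpy as np
from skimage.transform import hough_line, hough_line_peaks

tested_angles = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
accumulator, thetas, rhos = hough_line(edge_image, theta=tested_angles)

# Project the accumulator onto the angle axis: one vote count per theta
votes_per_theta = accumulator.sum(axis=0)
dominant_theta = thetas[np.argmax(votes_per_theta)]

# Keep only the accumulator peaks whose angle is close to the dominant one
_, peak_thetas, peak_rhos = hough_line_peaks(accumulator, thetas, rhos)
tolerance = np.deg2rad(2)   # allow a little bit of wiggle
parallel_lines = [(rho, theta) for rho, theta in zip(peak_rhos, peak_thetas)
                  if abs(theta - dominant_theta) < tolerance]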

Calculate the slope (angle) of every line in the range 0..Pi using the atan2 function. To keep the range positive, add Pi to negative results.
Sort the results by slope. Walk through the sorted list and group neighbouring values that are close together; those lines are nearly parallel. Note that you might get a long run of slightly different neighbouring values where the start and end of the run differ a lot, so use some (angular) threshold to break such a run.
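A minimal sketch of this grouping, assuming lines is the (non-empty) array returned by cv2.HoughLinesP in the question and img is the image to draw on; the 3-degree threshold is just an example value:

import cv2
import numpy as np

# Angle of every detected segment, limited to the range 0..pi
angles = []
for line in lines:
    x1, y1, x2, y2 = line[0]
    angle = np.arctan2(y2 - y1, x2 - x1)   # range -pi..pi
    if angle < 0:
        angle += np.pi
    angles.append((angle, (x1, y1, x2, y2)))

angles.sort(key=lambda item: item[0])

# Walk through the sorted list and group neighbouring angles that are close
angle_threshold = np.deg2rad(3)
groups, current = [], [angles[0]]
for prev, cur in zip(angles, angles[1:]):
    if cur[0] - prev[0] < angle_threshold:
        current.append(cur)
    else:
        groups.append(current)
        current = [cur]
groups.append(current)

# The largest group holds the most lines sharing (almost) the same angle
parallel = max(groups, key=len)
for _, (x1, y1, x2, y2) in parallel:
    cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2, cv2.LINE_AA)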

I just don't understand how to use the coordinates of all the lines detected through the function cv2.HoughLinesP(). The documentation of this function says that the output is a 4D array and this is confusing for me.
The 4D array is just the output vector of detected lines. Each line is represented by a 4-element vector (x1, y1, x2, y2), where (x1, y1) and (x2, y2) are the end points of each detected line segment.
Please refer to the attached picture to get an idea of what those mean. Keep in mind that those coordinates are in image space.
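Concretely, the array returned by cv2.HoughLinesP has shape (N, 1, 4), so a small sketch of unpacking it, reusing the variables from the question, could look like this:

if lines is not None:
    print(lines.shape)             # (N, 1, 4): N segments, each stored as (x1, y1, x2, y2)
    for line in lines:
        x1, y1, x2, y2 = line[0]   # end points of one detected segment
        print((x1, y1), (x2, y2))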

Related

Generating a segmentation mask for circular particles from threshold mask?

I am trying to find all the circular particles in the image attached. This is the only image I have (along with its inverse).
I have read this post, and yet I can't use HSV values for thresholding. I have tried using the Hough Transform:
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=0.01, minDist=0.1, param1=10, param2=5, minRadius=3,maxRadius=6)
and using the following code to plot
names = [circles]
for nums in names:
    color_img = cv2.imread(path)
    blue = (211, 211, 211)
    for x, y, r in nums[0]:
        cv2.circle(color_img, (int(x), int(y)), int(r), blue, 1)
    plt.figure(figsize=(15, 15))
    plt.title("Hough")
    plt.imshow(color_img, cmap='gray')
The following code was to plot the mask:
for masks in names:
    black = np.zeros(img_gray.shape)
    for x, y, r in masks[0]:
        cv2.circle(black, (int(x), int(y)), int(r), 255, -1)  # -1 to draw filled circles
    plt.imshow(black, cmap='gray')
Yet I am only able to get the following mask, which is fairly poor.
This is an image of what is considered a particle and what is not.
One simple approach involves slightly eroding the image, to separate touching circular objects, then doing a connected component analysis and discarding all objects larger than some chosen threshold, and finally dilating the image back so the circular objects are approximately of the original size again. We can do this dilation on the labelled image, such that you retain the separated objects.
I'm using DIPlib because I'm most familiar with it (I'm an author).
import diplib as dip
a = dip.ImageRead('6O0Oe.png')
a = a(0) > 127 # the PNG is a color image, but OP's image is binary,
# so we binarize here to simulate OP's condition.
separation = 7 # tweak these two parameters as necessary
size_threshold = 500
b = dip.Erosion(a, dip.SE(separation))
b = dip.Label(b, maxSize=size_threshold)
b = dip.Dilation(b, dip.SE(separation))
Do note that the image we use here seems to be a zoomed-in screen grab rather than the original image OP is dealing with. If so, the parameters must be made smaller to identify the smaller objects in the smaller image.
My approach is based on the simple observation that most of the particles in your image have approximately the same perimeter, while the "not particles" have a greater perimeter.
First, have a look at the RANSAC algorithm and how it finds inliers and outliers. It is basically meant for 2D data, but in our case we will turn the problem into 1D data.
In your case, I am calling the correct particles inliers and the incorrect particles outliers.
The data we will work on is the perimeter of these particles. To get the perimeters, find the contours in this image and take the perimeter of each contour. Refer to this for information about contours.
Now we have the data, knowledge of the RANSAC algorithm, and the simple observation mentioned above. In this data, we have to find the densest and most compact cluster, which will contain all the inliers; everything else will be outliers.
Now let's assume the inliers are in the range of 40-60 and the outliers are beyond 60. Let's start with a threshold value T = 0. We say that for each point in the data, the inliers for that point are in the range (value of that point - T, value of that point + T).
First iterate over all the points in the data, count the number of inliers for each point with this T, and store this information; find the maximum number of inliers possible for this value of T. Now increment T by 1 and again find the maximum number of inliers possible for that T. Repeat these steps, incrementing T one by one.
There will be a range of values of T for which the maximum number of inliers stays the same. Those inliers are the particles in your image, and the particles with a perimeter greater than those inliers are the outliers, thus the "not particles" in your image.
I have tried this algorithm on my own test cases, which are similar to yours, and it works; I am always able to determine the outliers. I hope it works for you too.
One last thing: I see that the boundaries of your particles are irregular and not smooth. If the algorithm doesn't work for you on this image as-is, try to make the boundaries smooth and then use it.
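A minimal sketch of this perimeter-based inlier search; the variable mask (the binary particle image) and the OpenCV 4.x findContours signature are assumptions for illustration:

import cv2
import numpy as np

# Perimeters of all contours in the binary particle image
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
perimeters = np.array([cv2.arcLength(c, True) for c in contours])

def max_inliers(perimeters, T):
    # For each perimeter value, count how many values lie within +/- T of it,
    # and return the best count together with its centre value
    counts = [np.sum(np.abs(perimeters - p) <= T) for p in perimeters]
    best = int(np.argmax(counts))
    return counts[best], perimeters[best]

# Increase T until the maximum inlier count stops growing; that plateau marks
# the dense, compact cluster of "real" particle perimeters
T, prev_count = 0, -1
while True:
    count, center = max_inliers(perimeters, T)
    if count == prev_count:
        break
    prev_count = count
    T += 1

# Inliers are the particles; contours with a larger perimeter are the "not particles"
particles = [c for c, p in zip(contours, perimeters) if abs(p - center) <= T]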

OpenCV Python creating bounding box or enclosing circle/polygon around scattered points

I am working on object detection for collision avoidance using OpenCV Python on a small quad. First I need to detect objects using the Optical Flow Pyramid (LK) (OpenCV) approach. I was able to track points on the image ROI, as shown in the image (points tracked using optical flow LK pyramid).
I need to create a bounding box, enclosing convex hull, or some polygonal shape, as shown below, to show that these are the detected objects (the red lines are ones I drew). Ignoring isolated points, only points at a certain distance from each other should be taken.
If anyone could help me or provide ideas it would be useful.
If my question is not precise or too vast, please let me know.
You can get the minimum and maximum x,y values by looping through the cluster labels, and then draw rectangles using those four values for each cluster.
The following code should help you.
ret, label, center = cv2.kmeans(Z, 10, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
for i in np.unique(label.ravel()):
    x_values = []
    y_values = []
    cluster = Z[label.ravel() == i]   # all points assigned to cluster i
    for x, y in cluster:
        x_values.append(x)
        y_values.append(y)
    min_x = int(min(x_values))
    min_y = int(min(y_values))
    max_x = int(max(x_values))
    max_y = int(max(y_values))
    cv2.rectangle(frame, (max_x, max_y), (min_x, min_y), (0, 255, 0), 3)
To get the bounding box draw a rectangle around p1(x_min, y_min) and p2(x_max, y_max), where x_min/max and y_min/max denote the minimum and maximum x and y coordinates of a point cluster.
So as you already have your points the first step is to form clusters of close points and get rid of outliers.
Please research cluster analysis to find out how to do this. I'm not willing to write a book here. https://en.wikipedia.org/wiki/Cluster_analysis might give you a first idea.
The problem involves two steps:
You need to perform clustering over the key-points. I suggest taking a look at scikit-learn clustering.
Build a bounding rectangle or any other type of convex envelope over each of the clusters. For this OpenCV provides the function boundingRect (link to the documentation), which provides the desired functionality: the function calculates and returns the minimal up-right bounding rectangle for the specified point set.
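A minimal sketch of both steps; the choice of DBSCAN and its parameters, as well as the variables points (the tracked key-points) and frame, are assumptions for illustration:

import cv2
import numpy as np
from sklearn.cluster import DBSCAN

# points: (N, 2) array of tracked (x, y) key-points; frame: the image to draw on
labels = DBSCAN(eps=30, min_samples=3).fit_predict(points)

for cluster_id in set(labels):
    if cluster_id == -1:                      # -1 marks isolated points (noise); skip them
        continue
    cluster = points[labels == cluster_id].astype(np.int32)
    x, y, w, h = cv2.boundingRect(cluster)    # minimal up-right bounding rectangle
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)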

OpenCV - Estimating Box dimensions in Python

This is the continuation of my previous question. I now have an image like this
Here the corners are detected. Now I am trying to estimate the dimensions of the bigger box, while the smaller black box's dimensions are known.
Can anyone guide me on the best way to estimate the dimensions of the box? I can do it with a simple Euclidean distance, but I don't know whether that is the correct way. And even if it is, then from a list of tuples (coordinates), how can I find distances like A-B or A-D or G-H, but not like A-C or A-F?
The sequence has to be preserved in order to get correct dimensions. Also, I have two boxes here, so when I create the list of corner coordinates it will contain all coordinates from A-J and I don't know which coordinates belong to which box. How can I keep them separate for the two boxes? I want to run this code on more, similar images.
Note: each corner in this image is not a single point but a set of points, so I clustered each set of corner points and averaged it to get a single (x, y) coordinate per corner.
I have tried my best to explain my questions. Will be extremely glad to have some answers :) Thanks.
For the "How can I find distances like A-B or A-D or G-H but not like A-C or A-F" part:
Here's some quick code; it is not efficient for images with lots of corners, but for your case it's OK. The idea is to start from the dilated edge image you got in your other question (with only the big box, but the idea is the same for the image where the small box is also present).
Then, for every possible combination of corners, you look at a few points on an imaginary line between them and check whether those points actually fall on a real line in the image.
import cv2
import numpy as np

# Get an intermediate point on the line between point1 and point2.
# For example, calling this function with (p1, p2, 3) returns the point
# on the line between p1 and p2, at 1/3 of the distance from p1.
def get_intermediate_point(p1, p2, ratio):
    return [int(p1[0] + (p2[0] - p1[0]) / ratio),
            int(p1[1] + (p2[1] - p1[1]) / ratio)]

# Open the dilated edge image
img = cv2.imread(dilated_edges, 0)

# Corners you got from your segmentation and other question
corners = [[29, 94], [102, 21], [184, 52], [183, 547], [101, 576], [27, 509]]
nb_corners = len(corners)

# Intermediate points between corners that are going to be tested
ratios = [2, 4, 6, 8]   # in this example: the middle point, the quarter point, etc.
nb_ratios = len(ratios)

# List which will contain all pairs of connected corners
connected_corners = []

# Double loop going through every possible pair of corners
for i in range(nb_corners - 1):
    for j in range(i + 1, nb_corners):
        cpt = 0
        c1 = corners[i]; c2 = corners[j]
        # Test every intermediate point between the selected corners
        for ratio in ratios:
            p = get_intermediate_point(c1, c2, ratio)
            # Check whether this point falls on a white pixel in the image
            if img[p[0], p[1]] == 255:
                cpt += 1
        # If enough of the intermediate points fall on a white pixel,
        # we assume that the two corners are indeed connected by a line
        if cpt >= int(nb_ratios * 0.75):
            connected_corners.append([i, j])

print(connected_corners)
In general you cannot, since any reconstruction is only up to scale.
Basically, given a calibrated camera and 6 2D points (6x2 = 12 equations), you want to find 6 3D points plus scale (6x3+1 = 19 unknowns). There aren't enough equations.
In order to do so, you will have to make some assumptions and insert them into the equations.
For example:
The box edges are perpendicular to each other (which means that every 2 neighboring points share at least one coordinate value).
You need to assume that you know the height of the bottom points, i.e. they are on the same plane as your calibration box (this will give you the Z of the visible bottom points).
Hopefully, these constraints give you at least as many equations as unknowns, so you can solve the linear equation set.

Rectangle(quadrilateral) Detection by ConvexHull

I want to detect the rectangle from an image.
I used cv2.findContours() with cv2.convexHull() to filter out the irregular polygon.
Afterwards, I will use the length of the hull to determine whether the contour is a rectangle or not.
hull = cv2.convexHull(contour, returnPoints=True)
if len(hull) == 4:
    return True
However, sometimes, the convexHull() will return an array with length 5.
If I am using the criterion above, I will miss this rectangle.
For example,
After using cv2.canny()
By using the methods above, I will get the hull :
[[[819 184]]
[[744 183]]
[[745 145]]
[[787 145]]
[[819 146]]]
Here is my question: Given an array (Convex Hull) with length 5, how can I determine whether it is actually referring to a quadrilateral? Thank you.
=====================================================================
updated:
After using Sobel in the X and Y directions,
sobelxy = cv2.Sobel(img_inversion, cv2.CV_8U, 1, 1, ksize=3)
I got:
Well, this is not the right way to extract rectangles. Since we are talking basics here, I would suggest you take the inversion of the image, apply Sobel in the X and Y directions, and then run the findContours function. With this you will be able to get a lot of rectangles that you can filter. You will have to apply a lot of checks to identify the rectangle having text in it. Also, I don't understand why you want to force-select a rectangle with length 5; you are limiting the scale.
Secondly, another way is to use the Sobel X and Y image and then apply OpenCV's LineSegmentDetector. Once you get all the line segments you have to apply RANSAC for a quad fit, so the condition here should be that all the angles in a set of randomly chosen intersecting lines are (roughly) acute, and finally filter out the quad ROI with text (for this use SWT or other reliable techniques).
As for your query, you should ideally select a quad with length 4 (points).
Ref: Crop the largest rectangle using OpenCV
This link will give you the gist of detecting the rectangle in a very simple way.
The images below give you a sort of walkthrough for inversion and Sobel of the image. Inversion of the image eliminates the double boundaries you get from Sobel.
For inversion you use the tilde operator.
Also, before taking the inversion, it is better to suppress illumination artifacts. This can be done using homomorphic filtering, or by taking the log of the image.
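A minimal sketch of the inversion/Sobel/contour route described above; the file name, thresholds, and the final approxPolyDP check (used here to decide whether a hull is really a quadrilateral) are my own assumptions, not something prescribed above:

import cv2
import numpy as np

gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
inverted = ~gray                               # tilde operator inverts the image

# Sobel in X and Y, combined into one edge magnitude image
sobel_x = cv2.Sobel(inverted, cv2.CV_64F, 1, 0, ksize=3)
sobel_y = cv2.Sobel(inverted, cv2.CV_64F, 0, 1, ksize=3)
edges = cv2.convertScaleAbs(cv2.magnitude(sobel_x, sobel_y))
_, edges = cv2.threshold(edges, 50, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
quads = []
for contour in contours:
    hull = cv2.convexHull(contour)
    # Simplify the hull; a 5-point hull with one nearly-collinear vertex
    # collapses to 4 points after this step
    approx = cv2.approxPolyDP(hull, 0.02 * cv2.arcLength(hull, True), True)
    if len(approx) == 4 and cv2.contourArea(approx) > 100:
        quads.append(approx)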
It isn't so easy to fit a rectangle to a convex polygon.
You can try to find the minimum area or minimum perimeter rectangle by rotating calipers (https://en.wikipedia.org/wiki/Rotating_calipers).
Then by comparing the areas/perimeters of the hull and the rectangle, you can assess "rectangularity".
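A minimal sketch of that rectangularity check, using cv2.minAreaRect as the minimum-area (rotating calipers) rectangle; the 0.9 ratio threshold is an assumption:

import cv2
import numpy as np

def is_roughly_rectangular(hull, min_ratio=0.9):
    # Minimum-area rotated rectangle around the hull
    (cx, cy), (w, h), angle = cv2.minAreaRect(hull)
    rect_area = w * h
    if rect_area == 0:
        return False
    # A true rectangle fills (almost) the whole minimum-area rectangle
    return cv2.contourArea(hull) / rect_area >= min_ratio

# The 5-point hull from the question still passes, because the extra vertex
# barely changes the area
hull = np.array([[[819, 184]], [[744, 183]], [[745, 145]], [[787, 145]], [[819, 146]]], dtype=np.int32)
print(is_roughly_rectangular(hull))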

Python OpenCV HoughLinesP Fails to Detect Lines

I am using OpenCV HoughLinesP to find horizontal and vertical lines. It is not finding any lines most of the time, and even when it finds lines they are not even close to the actual image.
import cv2
import numpy as np

img = cv2.imread('image_with_edges.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

flag, b = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU)

element = cv2.getStructuringElement(cv2.MORPH_CROSS, (1, 1))
cv2.erode(b, element)

edges = cv2.Canny(b, 10, 100, apertureSize=3)

lines = cv2.HoughLinesP(edges, 1, np.pi/2, 275, minLineLength=100, maxLineGap=200)[0].tolist()

for x1, y1, x2, y2 in lines:
    for index, (x3, y3, x4, y4) in enumerate(lines):
        if y1 == y2 and y3 == y4:      # Horizontal Lines
            diff = abs(y1 - y3)
        elif x1 == x2 and x3 == x4:    # Vertical Lines
            diff = abs(x1 - x3)
        else:
            diff = 0
        if diff < 10 and diff is not 0:
            del lines[index]
    gridsize = (len(lines) - 2) / 2
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)

cv2.imwrite('houghlines3.jpg', img)
Input Image:
Output Image: (see the Red Line):
@ljetibo Try this with: c_6.jpg
There's quite a bit wrong here so I'll just start from the beginning.
Ok, the first thing you do after opening an image is thresholding. I strongly recommend that you have another look at the OpenCV manual on thresholding and the exact meaning of the threshold methods.
The manual mentions that
cv2.threshold(src, thresh, maxval, type[, dst]) → retval, dst
the special value THRESH_OTSU may be combined with one of the above
values. In this case, the function determines the optimal threshold
value using the Otsu’s algorithm and uses it instead of the specified
thresh .
I know it's a bit confusing because you don't actually combine THRESH_OTSU with any of the other methods (THRESH_BINARY etc...); unfortunately that manual can be like that. What this method actually does is assume that there's a "foreground" and a "background" that follow a bi-modal histogram, and then apply THRESH_BINARY, I believe.
Imagine this as if you're taking an image of a cathedral or a tall building at mid-day. On a sunny day the sky will be very bright and blue, and the cathedral/building will be quite a bit darker. This means the group of pixels belonging to the sky will all have high brightness values, that is, they will be on the right side of the histogram, and the pixels belonging to the church will be darker, that is, towards the middle and left side of the histogram.
Otsu uses this to try and guess the right "cutoff" point, called thresh. For your image Otsu's alg. supposes that all that white on the side of the map is the background, and the map itself the foreground. Therefore your image after thresholding looks like this:
After this point it's not hard to guess what goes wrong. But let's go on. What you're trying to achieve is, I believe, something like this:
flag,b = cv2.threshold(gray,160,255,cv2.THRESH_BINARY)
Then you go on and try to erode the image. I'm not sure why you're doing this: was your intention to "bold" the lines, or was it to remove noise? In any case, you never assigned the result of the erosion to anything. Numpy arrays, which is how images are represented, are mutable, but that's not how the syntax works:
cv2.erode(src, kernel, [optionalOptions] ) → dst
So you have to write:
b = cv2.erode(b,element)
Ok, now for the element and how erosion works. Erosion drags a kernel over an image. A kernel is a simple matrix with 1's and 0's in it. One of the elements of that matrix, usually the centre one, is called the anchor. The anchor is the element that will be replaced at the end of the operation. When you created
cv2.getStructuringElement(cv2.MORPH_CROSS, (1, 1))
what you created is actually a 1x1 matrix (1 column, 1 row). This makes erosion completely useless.
What erosion does, is firstly retrieves all the values of pixel brightness from the original image where the kernel element, overlapping the image segment, has a "1". Then it finds a minimal value of retrieved pixels and replaces the anchor with that value.
What this means, in your case, is that you drag the [1] matrix over the image, compare whether the source image pixel brightness is larger than, equal to or smaller than itself, and then replace it with itself.
If your intention was to remove "noise", then it's probably better to use a rectangular kernel over the image. Think of it this way: "noise" is the thing that "doesn't fit in" with its surroundings. So if you compare your centre pixel with its surroundings and find that it doesn't fit, it's most likely noise.
Additionally, I've said it replaces the anchor with the minimal value retrieved by the kernel. Numerically, minimal value is 0, which is coincidentally how black is represented in the image. This means that in your case of a predominantly white image, erosion would "bloat up" the black pixels. Erosion would replace the 255 valued white pixels with 0 valued black pixels if they're in the reach of the kernel. In any case it shouldn't be of a shape (1,1), ever.
>>> cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
array([[0, 1, 0],
[1, 1, 1],
[0, 1, 0]], dtype=uint8)
If we erode the second image with a 3x3 rectangular kernel we get the image below.
Ok, now we got that out of the way, next thing you do is you find edges using Canny edge detection. The image you get from that is:
Ok, now we look for EXACTLY vertical and EXACTLY horizontal lines ONLY. Of course there are no such lines apart from the meridian on the left of the image (is that what it's called?), and the end image you get after you do it right would be this:
Now since you never described your exact idea, and my best guess is that you want the parallels and meridians, you'll have more luck on maps with a smaller scale because those aren't lines to begin with, they are curves. Additionally, is there a specific reason to use the Probabilistic Hough? Doesn't the "regular" Hough suffice?
Sorry for the too-long post, hope it helps a bit.
Text here was added as a request for clarification from the OP Nov. 24th. because there's no way to fit the answer into a char limited comment.
I'd suggest the OP asks a new question more specific to the detection of curves, because you are dealing with curves, OP, not horizontal and vertical lines.
There are several ways to detect curves but none of them are easy. In the order of simplest-to-implement to hardest:
Use the RANSAC algorithm. Develop a formula describing the nature of the longitude and latitude lines depending on the map in question. I.e. latitude curves will be almost perfectly straight lines on the map when you're near the equator, with the equator being the perfectly straight line, but they will be very curved, resembling circle segments, when you're at high latitudes (near the poles). Libraries such as scikit-learn and scikit-image already have RANSAC implemented as a class; all you have to do is find and then programmatically define the model you want to fit to the curves (see the short sketch after this list). Of course there's the ever-useful 4dummies text here. This is the easiest because all you have to do is the math.
A bit harder would be to create a rectangular grid and then try to use cv2.findHomography to warp the grid into place on the image. For the various geometric transformations you can apply to the grid, check the OpenCV manual. This is a sort of hack-ish approach and might work worse than 1. because it depends on being able to re-create a grid with enough detail and objects on it that cv can identify the structures on the image you're trying to warp it to. This one requires you to do similar math to 1. and just a bit of coding to compose the end solution out of several different functions.
To actually do it properly: there are mathematically neat ways of describing curves as a list of tangent lines to the curve. You can try to fit a bunch of shorter HoughLines to your image or image segment, then group all the found lines and determine, by assuming they are tangents to a curve, whether they really follow a curve of the desired shape or are just random. See this paper on the matter. Out of all the approaches this one is the hardest because it requires quite a bit of solo coding and some math about the method.
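A minimal sketch of option 1, using scikit-learn's RANSACRegressor on synthetic data (the polynomial model, data, and thresholds are illustrative assumptions):

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import RANSACRegressor

# Hypothetical edge-pixel coordinates of one latitude curve, plus some outliers
xs = np.linspace(0, 500, 200)
ys = 0.0008 * (xs - 250) ** 2 + 100 + np.random.normal(0, 1, xs.size)
ys[::20] += 120                                    # simulated junk detections

# Model the latitude as a 2nd-order polynomial and fit it robustly with RANSAC
model = make_pipeline(PolynomialFeatures(degree=2),
                      RANSACRegressor(residual_threshold=5.0))
model.fit(xs.reshape(-1, 1), ys)

inlier_mask = model.named_steps['ransacregressor'].inlier_mask_
fitted_ys = model.predict(xs.reshape(-1, 1))       # the recovered curve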
There could be easier ways; I've never actually had to deal with curve detection before. Maybe there are tricks to do it more easily, I don't know. If you ask a new question, one that hasn't been closed as answered already, you might have more people notice it. Do make sure to ask a full and complete question on the exact topic you're interested in. People won't usually spend so much time writing on such a broad topic.
To show you what you can do with just the Hough transform, check out the code below:
import cv2
import numpy as np

def draw_lines(hough, image, nlines):
    n_x, n_y = image.shape
    # Convert to a color image so that you can see the lines
    draw_im = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)

    for (rho, theta) in hough[0][:nlines]:
        try:
            x0 = np.cos(theta) * rho
            y0 = np.sin(theta) * rho
            pt1 = ( int(x0 + (n_x + n_y) * (-np.sin(theta))),
                    int(y0 + (n_x + n_y) * np.cos(theta)) )
            pt2 = ( int(x0 - (n_x + n_y) * (-np.sin(theta))),
                    int(y0 - (n_x + n_y) * np.cos(theta)) )
            alph = np.arctan( (pt2[1] - pt1[1]) / (pt2[0] - pt1[0]) )
            alphdeg = alph * 180 / np.pi
            # OpenCV uses a weird angle system, see:
            # http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html
            if abs(np.cos(alph - 180)) > 0.8:  # 0.995:
                cv2.line(draw_im, pt1, pt2, (255, 0, 0), 2)
            if rho > 0 and abs(np.cos(alphdeg - 90)) > 0.7:
                cv2.line(draw_im, pt1, pt2, (0, 0, 255), 2)
        except:
            pass

    cv2.imwrite("/home/dino/Desktop/3HoughLines.png", draw_im,
                [cv2.IMWRITE_PNG_COMPRESSION, 12])

img = cv2.imread('a.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

flag, b = cv2.threshold(gray, 160, 255, cv2.THRESH_BINARY)
cv2.imwrite("1tresh.jpg", b)

element = np.ones((3, 3))
b = cv2.erode(b, element)
cv2.imwrite("2erodedtresh.jpg", b)

edges = cv2.Canny(b, 10, 100, apertureSize=3)
cv2.imwrite("3Canny.jpg", edges)

hough = cv2.HoughLines(edges, 1, np.pi/180, 200)
draw_lines(hough, b, 100)
As you can see from the image below, the straight lines are only the longitudes. The latitudes are not as straight, so for each latitude you have several detected lines that behave like tangents to the curve. The blue lines are drawn by the if abs( np.cos( alph - 180 )) > 0.8: condition, while the red lines are drawn by the rho>0 and abs( np.cos( alphdeg - 90)) > 0.7 condition. Pay close attention when comparing the original image with the image with lines drawn on it. The resemblance is uncanny (heh, get it?), but because they're not lines a lot of it just looks like junk; especially that highest detected latitude line that seems too "angled", but in reality those lines make a perfect tangent to the latitude line at its thickest point, just as the Hough algorithm demands. Acknowledge that there are limitations to detecting curves with a line detection algorithm.
