Python OpenCV HoughLinesP Fails to Detect Lines

I am using OpenCV HoughLinesP to find horizontal and vertical lines. It is not finding any lines most of the time, and even when it finds a line it is not even close to the actual image.
import cv2
import numpy as np
img = cv2.imread('image_with_edges.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
flag,b = cv2.threshold(gray,0,255,cv2.THRESH_OTSU)
element = cv2.getStructuringElement(cv2.MORPH_CROSS,(1,1))
cv2.erode(b,element)
edges = cv2.Canny(b,10,100,apertureSize = 3)
lines = cv2.HoughLinesP(edges,1,np.pi/2,275, minLineLength = 100, maxLineGap = 200)[0].tolist()
for x1,y1,x2,y2 in lines:
    for index, (x3,y3,x4,y4) in enumerate(lines):
        if y1==y2 and y3==y4: # Horizontal Lines
            diff = abs(y1-y3)
        elif x1==x2 and x3==x4: # Vertical Lines
            diff = abs(x1-x3)
        else:
            diff = 0
        if diff < 10 and diff is not 0:
            del lines[index]
    gridsize = (len(lines) - 2) / 2
    cv2.line(img,(x1,y1),(x2,y2),(0,0,255),2)
cv2.imwrite('houghlines3.jpg',img)
Input Image:
Output Image: (see the Red Line):
@ljetibo Try this with:
c_6.jpg

There's quite a bit wrong here so I'll just start from the beginning.
Ok, first thing you do after opening an image is tresholding. I recommend strongly that you have another look at the OpenCV manual on tresholding and the exact meaning of the treshold methods.
The manual mentions that
cv2.threshold(src, thresh, maxval, type[, dst]) → retval, dst
the special value THRESH_OTSU may be combined with one of the above
values. In this case, the function determines the optimal threshold
value using the Otsu’s algorithm and uses it instead of the specified
thresh .
the special value THRESH_OTSU may be combined with one of the above values. In this case, the function determines the optimal threshold value using Otsu's algorithm and uses it instead of the specified thresh.
I know it's a bit confusing because you don't actually combine THRESH_OTSU with any of the other methods (THRESH_BINARY etc...); unfortunately that manual can be like that. What this method actually does is assume that there's a "foreground" and a "background" that follow a bi-modal histogram, and then it applies THRESH_BINARY, I believe.
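For completeness, here is what "combining" looks like in code; this is a generic illustration rather than a line from the OP's script, with gray standing for any single-channel 8-bit image:

# Otsu picks the threshold itself; the 0 passed as thresh is ignored, and
# the value Otsu computed is returned as ret.
ret, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)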
Imagine this as if you're taking an image of a cathedral or a tall building at midday. On a sunny day the sky will be very bright and blue, and the cathedral/building will be quite a bit darker. This means the group of pixels belonging to the sky will all have high brightness values, that is, they will be on the right side of the histogram, and the pixels belonging to the church will be darker, that is, in the middle and on the left side of the histogram.
Otsu uses this to try and guess the right "cutoff" point, called thresh. For your image, Otsu's algorithm supposes that all that white on the side of the map is the background and the map itself is the foreground. Therefore your image after thresholding looks like this:
After this point it's not hard to guess what goes wrong. But let's go on. What you're trying to achieve is, I believe, something like this:
flag,b = cv2.threshold(gray,160,255,cv2.THRESH_BINARY)
Then you go on and try to erode the image. I'm not sure why you're doing this: was your intention to "bold" the lines, or was it to remove noise? In any case, you never assigned the result of the erosion to anything. Numpy arrays, which are how images are represented, are mutable, but that's not how the syntax works:
cv2.erode(src, kernel, [optionalOptions] ) → dst
So you have to write:
b = cv2.erode(b,element)
Ok, now for the element and how erosion works. Erosion drags a kernel over an image. A kernel is a simple matrix with 1's and 0's in it. One of the elements of that matrix, usually the centre one, is called the anchor. The anchor is the element that will be replaced at the end of the operation. When you created
cv2.getStructuringElement(cv2.MORPH_CROSS, (1, 1))
what you actually created is a 1x1 matrix (1 column, 1 row). This makes the erosion completely useless.
What erosion does is, first, retrieve all the values of pixel brightness from the original image where the kernel element overlapping the image segment has a "1". Then it finds the minimal value of the retrieved pixels and replaces the anchor with that value.
What this means, in your case, is that you drag the [1] matrix over the image, check whether the source image pixel brightness is larger than, equal to, or smaller than itself, and then replace it with itself.
If your intention was to remove "noise", then it's probably better to use a rectangular kernel over the image. Think of it this way: "noise" is the thing that "doesn't fit in" with its surroundings. So if you compare your centre pixel with its surroundings and find that it doesn't fit, it's most likely noise.
Additionally, I've said it replaces the anchor with the minimal value retrieved by the kernel. Numerically, the minimal value is 0, which is coincidentally how black is represented in the image. This means that in your case of a predominantly white image, erosion would "bloat up" the black pixels: erosion would replace the 255-valued white pixels with 0-valued black pixels if they're within reach of the kernel. In any case, it shouldn't ever be of shape (1,1).
>>> cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
array([[0, 1, 0],
       [1, 1, 1],
       [0, 1, 0]], dtype=uint8)
If we erode the second image with a 3x3 rectangular kernel, we get the image below.
Ok, now that we've got that out of the way, the next thing you do is find edges using Canny edge detection. The image you get from that is:
Ok, now we look for EXACTLY vertical and EXACTLY horizontal lines ONLY. Of course there are no such lines apart from the meridian on the left of the image (is that what it's called?), and the end image you get after doing it right would be this:
Now, since you never described your exact idea, my best guess is that you want the parallels and meridians. You'll have more luck on maps with a smaller scale, because on this map those aren't lines to begin with, they are curves. Additionally, is there a specific reason to use the Probabilistic Hough? Doesn't the "regular" Hough suffice?
Sorry for the too-long post, hope it helps a bit.
This text was added in response to a request for clarification from the OP on Nov. 24th, because there's no way to fit the answer into a character-limited comment.
I'd suggest the OP asks a new question more specific to the detection of curves, because you are dealing with curves, OP, not horizontal and vertical lines.
There are several ways to detect curves but none of them are easy. In order from simplest to implement to hardest:
Use the RANSAC algorithm. Develop a formula describing the nature of the long. and lat. lines depending on the map in question. I.e. latitude curves will be almost perfectly straight lines on the map when you're near the equator, with the equator being the perfectly straight line, but will be very curved, resembling circle segments, when you're at high latitudes (near the poles). SciPy already has RANSAC implemented as a class; all you have to do is find and then programmatically define the model you want to fit to the curves. Of course there's the ever-useful 4dummies text here. This is the easiest because all you have to do is the math (see the sketch after this list).
A bit harder would be to create a rectangular grid and then try to use cv findHomography to warp the grid into place on the image. For the various geometric transformations you can apply to the grid, check out the OpenCV manual. This is a sort of hack-ish approach and might work worse than 1. because it depends on being able to re-create a grid with enough detail and objects on it that cv can identify the structures on the image you're trying to warp it to. This one requires you to do similar math to 1. and just a bit of coding to compose the end solution out of several different functions.
To actually do it properly. There are mathematically neat ways of describing curves as a list of tangent lines on the curve. You can try to fit a bunch of shorter HoughLines to your image or image segment and then try to group all found lines and determine, by assuming that they're tangents to a curve, whether they really follow a curve of the desired shape or are random. See this paper on the matter. Of all the approaches this one is the hardest because it requires quite a bit of solo coding and some math about the method.
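As a rough illustration of approach 1, here is a minimal sketch using scikit-learn's RANSACRegressor in place of the RANSAC class mentioned above (any RANSAC implementation works the same way); edges is assumed to be a Canny edge image like the one produced earlier, and the polynomial degree and threshold are arbitrary choices:

import numpy as np
from sklearn.linear_model import RANSACRegressor

ys, xs = np.nonzero(edges)             # coordinates of all edge pixels

# model a latitude curve as a low-order polynomial y = f(x);
# build the design matrix by hand (columns x^2, x, 1)
X = np.vander(xs, 3)

model = RANSACRegressor(residual_threshold=3.0)
model.fit(X, ys)
inliers = model.inlier_mask_           # edge pixels consistent with the fitted curve

This only finds the single most dominant curve; in practice you would run it repeatedly, removing the inliers each time, once per expected parallel or meridian.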
There could be easier ways; I've never actually had to deal with curve detection before. Maybe there are tricks to do it more easily, I don't know. If you ask a new question, one that hasn't been closed as answered already, you might have more people notice it. Do make sure to ask a full and complete question on the exact topic you're interested in. People won't usually spend so much time writing on such a broad topic.
To show you what you can do with just the Hough transform, check out below:
import cv2
import numpy as np

def draw_lines(hough, image, nlines):
    n_x, n_y = image.shape
    # convert to color image so that you can see the lines
    draw_im = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)

    for (rho, theta) in hough[0][:nlines]:
        try:
            x0 = np.cos(theta)*rho
            y0 = np.sin(theta)*rho
            pt1 = ( int(x0 + (n_x+n_y)*(-np.sin(theta))),
                    int(y0 + (n_x+n_y)*np.cos(theta)) )
            pt2 = ( int(x0 - (n_x+n_y)*(-np.sin(theta))),
                    int(y0 - (n_x+n_y)*np.cos(theta)) )
            alph = np.arctan( (pt2[1]-pt1[1])/( pt2[0]-pt1[0]) )
            alphdeg = alph*180/np.pi
            # OpenCV uses a weird angle system, see:
            # http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html
            if abs( np.cos( alph - 180 )) > 0.8:  # 0.995:
                cv2.line(draw_im, pt1, pt2, (255, 0, 0), 2)
            if rho > 0 and abs( np.cos( alphdeg - 90)) > 0.7:
                cv2.line(draw_im, pt1, pt2, (0, 0, 255), 2)
        except:
            pass

    cv2.imwrite("/home/dino/Desktop/3HoughLines.png", draw_im,
                [cv2.IMWRITE_PNG_COMPRESSION, 12])

img = cv2.imread('a.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

flag, b = cv2.threshold(gray, 160, 255, cv2.THRESH_BINARY)
cv2.imwrite("1tresh.jpg", b)

element = np.ones((3, 3))
b = cv2.erode(b, element)
cv2.imwrite("2erodedtresh.jpg", b)

edges = cv2.Canny(b, 10, 100, apertureSize=3)
cv2.imwrite("3Canny.jpg", edges)

hough = cv2.HoughLines(edges, 1, np.pi/180, 200)
draw_lines(hough, b, 100)
As you can see from the image below, the straight lines are only the longitudes. The latitudes are not as straight, so for each latitude you have several detected lines that behave like tangents to the curve. The blue lines are drawn by the if abs( np.cos( alph - 180 )) > 0.8: condition, while the red lines are drawn by the rho>0 and abs( np.cos( alphdeg - 90)) > 0.7 condition. Pay close attention when comparing the original image with the image with lines drawn on it. The resemblance is uncanny (heh, get it?), but because they're not lines a lot of it only looks like junk. (Especially that highest detected latitude line that seems too "angled": in reality those lines make a perfect tangent to the latitude line at its thickest point, just as the Hough algorithm demands.) Acknowledge that there are limitations to detecting curves with a line detection algorithm.

Related

Generating a segmentation mask for circular particles from threshold mask?

I am trying to find all the circular particles in the image attached. This is the only image I have (along with its inverse).
I have read this post, and yet I can't use HSV values for thresholding. I have tried using the Hough Transform.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=0.01, minDist=0.1, param1=10, param2=5, minRadius=3,maxRadius=6)
and using the following code to plot
names = [circles]
for nums in names:
    color_img = cv2.imread(path)
    blue = (211, 211, 211)
    for x, y, r in nums[0]:
        cv2.circle(color_img, (x, y), r, blue, 1)
    plt.figure(figsize=(15, 15))
    plt.title("Hough")
    plt.imshow(color_img, cmap='gray')
The following code was to plot the mask:
for masks in names:
    black = np.zeros(img_gray.shape)
    for x, y, r in masks[0]:
        cv2.circle(black, (x, y), int(r), 255, -1)  # -1 to draw filled circles
    plt.imshow(black, gray)
Yet I am only able to get the following mask, which is fairly poor.
This is an image of what is considered a particle and what is not.
One simple approach involves slightly eroding the image, to separate touching circular objects, then doing a connected component analysis and discarding all objects larger than some chosen threshold, and finally dilating the image back so the circular objects are approximately of the original size again. We can do this dilation on the labelled image, such that you retain the separated objects.
I'm using DIPlib because I'm most familiar with it (I'm an author).
import diplib as dip
a = dip.ImageRead('6O0Oe.png')
a = a(0) > 127   # the PNG is a color image, but OP's image is binary,
                 # so we binarize here to simulate OP's condition.
separation = 7 # tweak these two parameters as necessary
size_threshold = 500
b = dip.Erosion(a, dip.SE(separation))
b = dip.Label(b, maxSize=size_threshold)
b = dip.Dilation(b, dip.SE(separation))
Do note that the image we use here seems to be a zoomed-in screen grab rather than the original image OP is dealing with. If so, the parameters must be made smaller to identify the smaller objects in the smaller image.
My approach is based on the simple observation that most of the particles in your image have approximately the same perimeter, and the "not particles" have a greater perimeter.
First, have a look at the RANSAC algorithm and how it finds inliers and outliers. It is basically meant for 2D data, but we will have to transform it to 1D data in our case.
In your case, I am calling the correct particles inliers and the incorrect particles outliers.
The data we will work on is the perimeter of these particles. To get the perimeters, find the contours in this image and take the perimeter of each contour. Refer to this for information about contours.
Now we have the data, knowledge about the RANSAC algorithm, and our simple observation mentioned above. In this data, we have to find the densest and most compact cluster, which will contain all the inliers; everything else will be outliers.
Now let's assume the inliers are in the range of 40-60 and the outliers are beyond 60. Let's define a threshold value T = 0. We say that for each point in the data, the inliers for that point are in the range (value of that point - T, value of that point + T).
First iterate over all the points in the data, count the number of inliers for each point given T, and store this information. Find the maximum number of inliers possible for that value of T. Now increment T by 1 and again find the maximum number of inliers possible for that T. Repeat these steps, incrementing T by one each time.
There will be a range of values of T for which the maximum number of inliers stays the same. Those inliers are the particles in your image, and the particles with a greater perimeter are the outliers, i.e. the "not particles" in your image.
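A rough sketch of that procedure, assuming the input is a binary image of the particles (the file name and the plateau-detection details here are mine, not the answerer's):

import cv2
import numpy as np

img = cv2.imread('particles_binary.png', cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
perimeters = np.array([cv2.arcLength(c, True) for c in contours])

def max_inliers(data, T):
    # for each value, count how many values lie within +/- T of it
    return max(np.sum(np.abs(data - p) <= T) for p in data)

# sweep T upwards until the maximum inlier count stops growing (plateau)
prev, chosen_T = -1, 0
for T in range(int(perimeters.max()) + 1):
    best = max_inliers(perimeters, T)
    if best == prev:
        chosen_T = T
        break
    prev = best

# the densest cluster at chosen_T gives the "real" particles
counts = np.array([np.sum(np.abs(perimeters - p) <= chosen_T) for p in perimeters])
centre = perimeters[counts.argmax()]
particles = [c for c, p in zip(contours, perimeters) if abs(p - centre) <= chosen_T]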
I have tried this algorithm on my test cases, which are similar to yours, and it works. I am always able to determine the outliers. I hope it works for you too.
One last thing: I see that the boundaries of your particles are irregular and not smooth. Try to make them smooth, and then use this algorithm if it doesn't work for you on this image as-is.

Python Opencv: Filter Image for Text Detection

I have this set of images I want to de-noise in order to run OCR on:
I am trying to read the 1973 from the image.
I have tried
import cv2, numpy as np

img = cv2.imread('uxWbP.png', 0)
img = cv2.resize(img, (0, 0), fx=2, fy=2)
copy_img = np.copy(img)

# adaptive threshold as the image has different lighting conditions in different areas
thresh = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 21, 2)

contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# kill small contours
for i_cnt, cnt in enumerate(sorted(contours, key=lambda x: cv2.boundingRect(x)[0])):
    _area = cv2.contourArea(cnt)
    x, y, w, h = cv2.boundingRect(cnt)
    x_y_area = w * h
    if 10000 < x_y_area and x_y_area < 400000:
        pass
        # cv2.rectangle(copy_img, (x, y), (x + w, y + h), (255, 0, 255), 2)
        # cv2.putText(copy_img, str(int(x_y_area)) + ' , ' + str(w) + ' , ' + str(h), (x, y + 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
        # cv2.drawContours(copy_img, [cnt], 0, (0, 255, 0), 1)
    elif 10000 > x_y_area:
        # write over small contours
        cv2.drawContours(thresh, [cnt], -1, 255, -1)

cv2.imshow('img', copy_img)
cv2.imshow('thresh', thresh)
cv2.waitKey(0)
Which significantly improves the image to:
Any recommendations on how to filter this image sufficiently, either by improving the filtered image or with a completely different approach from the start, so that I could run OCR or some ML detection scripts on it? I'd like to split out the numbers for detection, but I'm open to other methods as well.
Another thing to try, either separately from the blurring or in combination with it, is the erosion/dilation game, as hinted at in the comment by @eldesgraciado, to whom I think a good part of the credit for these answers should go.
These two (erosion and dilation) can be applied one after the other, repeatedly. I think the trick is to change the kernel size. Anyway, I know I've used that to reduce noise in the past. Here's one example of dilation:
>>> import cv2
>>> import numpy as np
>>> im_0 = cv2.imread("FWM8b.png")
>>> k_size = 3
>>> kernel = np.ones((k_size, k_size), np.uint8)
>>> im_dilated = cv2.dilate(im_0, kernel, iterations=1)
>>> cv2.imshow("d", im_dilated)
>>> cv2.waitKey(0)
Make whatever kernel you want for erosion, and check out the effects.
>>> im_eroded = cv2.erode(im_0, kernel, iterations=1)
>>> cv2.imshow("erosion", im_eroded)
>>> cv2.waitKey(0)
Edit with possible improvements:
>>> im_blurred = cv2.GaussianBlur(im_dilated, (0, 0), 3)
>>> im_better = cv2.addWeighted(im_0, 0.5, im_blurred, 1.2, 0)
# Getting closer.
^ dilated, blurred, and combined (added) with original, 1st way
# Even better, I think.
>>> im_better2 = cv2.addWeighted(im_0, 0.9, im_blurred, 1.7, 0)
^ dilated, blurred, and combined (added) with original, 2nd way
You could do artifact removal, but be careful not to get rid of the stalk of the 7. If you can keep the 7 together, you can do connected-component analysis and keep the biggest connected components.
You could sum the values of the pixels in each column and each row, which would probably lead to something like this (very approximate, almost time for work). Note that I was much more careful with the green curve (sums of columns), but the consistency of scaling is probably off.
Note that this is really a sum of (255 - pixel_value). That could find you rectangles where your to-be-found glyphs (digits) should be. You could do a 2-D map of column_pixel_sum + row_pixel_sum, or just do some approximation, as I have done below.
Also, feel free to rotate the image (or take pixel sums at different angles), and combine your info for each rotation.
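A hedged sketch of those projection sums, assuming thresh is the binarized image from the question (dark glyphs on a light background); the 0.2 cutoff is an arbitrary placeholder:

import numpy as np

ink = 255 - thresh                   # so dark (glyph) pixels contribute
col_sums = ink.sum(axis=0)           # one value per column (the green curve)
row_sums = ink.sum(axis=1)           # one value per row

# columns/rows whose sum exceeds a fraction of the maximum likely contain glyphs
busy_cols = np.flatnonzero(col_sums > 0.2 * col_sums.max())
busy_rows = np.flatnonzero(row_sums > 0.2 * row_sums.max())

# the bounding rectangle of the "busy" region roughly locates the digits
x0, x1 = busy_cols.min(), busy_cols.max()
y0, y1 = busy_rows.min(), busy_rows.max()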
Lots of other things to try ... the suggestion by #eldesgraciado of a noise model is especially intriguing.
Another thing you could try out is to create a "noise model" and subtract it from the original image. First, take the image and apply Gaussian Blur with very low parameters, just barely blurring it, next subtract this mask from the image. From here, the steps are experimental: The difference should be again blurred and thresholded. Save this image. You run this pre-processing with various parameters and saving each time the final binary image, then, average the masks obtained so far. The persistent blobs should be the ones you are looking for... like some sort of spatial bandstop, I guess...
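A loose sketch of that noise-model idea; every parameter here (the sigmas, the threshold of 10) is a guess to be experimented with, and FWM8b.png is the image used elsewhere in this answer:

import cv2
import numpy as np

img = cv2.imread('FWM8b.png', cv2.IMREAD_GRAYSCALE)

masks = []
for sigma in (0.5, 1.0, 1.5):                      # vary the pre-processing
    barely = cv2.GaussianBlur(img, (0, 0), sigma)  # "just barely blurring it"
    diff = cv2.absdiff(img, barely)                # subtract the mask from the image
    diff = cv2.GaussianBlur(diff, (0, 0), 2)       # blur the difference again
    _, binary = cv2.threshold(diff, 10, 255, cv2.THRESH_BINARY)
    masks.append(binary.astype(np.float32))

# average the binary masks; the persistent blobs survive the averaging
average = np.mean(masks, axis=0).astype(np.uint8)
cv2.imwrite('noise_model_average.png', average)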
Keep experimenting.
Unsharp mask (my other answer) on this result image. More noise gone, but hurts the 7.
My first thought is to put on a Gaussian blur for a sort of "unsharp filter". (I think my second idea is better; it combines this blur-and-add with the erosion/dilation game. I posted it as a separate answer, because I think it is a different-enough strategy to merit that.) @eldesgraciado noted frequency stuff, which is basically what we're doing here. I'll put on some code and explanation. (Here is one answer to an SO post that has a lot about sharpening - the answer linked is a more variable unsharp mask written in Python. Do take the time to look at other answers - including this one, one of many simple implementations that look just like mine - though some are written in different programming languages.) You'll need to mess with parameters. It's possible this won't work, but it's the first thing I thought of.
>>> import cv2
>>> im_0 = cv2.imread("FWM8b.png")
>>> cv2.imshow("FWM8b.png", im_0)
>>> cv2.waitKey(0)
## Press any key.
>>> ## Here's where we get to frequency. We'll use a Gaussian Blur.
## We want to take out the "frequency" of changes from white to black
## and back to white that are less than the thickness of the "1973"
>>> k_size = 0 ## This is the kernel size - the "width frequency",
## if you will. Using zero gives a width based on sigmas in
## the Gaussian function.
## You'll want to experiment with this and the other
## parameters, perhaps trying to run OCR over the image
## after each combination of parameters.
## Hint, avoid even numbers, and think of it as a radius
>>> gs_border = 3
>>> im_blurred = cv2.GaussianBlur(im_0, (k_size, k_size), gs_border)
>>> cv2.imshow("gauss", im_blurred)
>>> cv2.waitKey(0)
Okay, my parameters probably didn't blur this enough. The parts of the words that you want to get rid of aren't really blurry. I doubt you'll even see much of a difference from the original, but hopefully you'll get the idea.
We're going to multiply the original image by a value, multiply the blurry image by a value, and subtract value*blurry from value*orig. Code will be clearer, I hope.
>>> orig_img_multiplier = 1.5
>>> blur_subtraction_factor = -0.5
>>> gamma = 0
>>> im_better = cv2.addWeighted(im_0, orig_img_multiplier, im_blurred, blur_subtraction_factor, gamma)
>>> cv2.imshow("First shot at fixing", im_better)
Yeah, not too much different. Mess around with the parameters, try to do the blur before you do your adaptive threshold, and try some other methods. I can't guarantee it will work, but hopefully it will get you started going somewhere.
Edit
This is a great question. Responding to the tongue-in-cheek criticism of @eldesgraciado:
Ah, naughty, naughty. Trying to break them CAPTCHA codes, huh? They are difficult to break for a reason. The text segmentation, as you see, is non-trivial. In your particular image, there’s a lot of high-frequency noise, you could try some frequency filtering first and see what result you get.
I submit the following from the Wikipedia article on reCAPTCHA (archived).
reCAPTCHA has completely digitized the archives of The New York Times and books from Google Books, as of 2011. The archive can be searched from the New York Times Article Archive. Through mass collaboration, reCAPTCHA was helping to digitize books that are too illegible to be scanned by computers, as well as translate books to different languages, as of 2015.
Also look at this article (archived).
I don't think this CAPTCHA is part of Massive-scale Online Collaboration, though.
Edit: Some other type of sharpening will be needed. I just realized that I'm applying 1.5 and -0.5 multipliers to pixels which usually have values very close to 0 or 255, meaning I'm probably just recovering the original image after the sharpening. I welcome any feedback on this.
Also, from comments with @eldesgraciado:
Someone probably knows a better sharpening algorithm than the one I used. Blur it enough, and maybe threshold on average values over an n-by-n grid (pixel density). I don't know too much about the whole adaptive-thresholding-then-contours thing. Maybe that could be re-done after the blurring...
Just to give you some ideas ...
Here's a blur with k_size = 5
Here's a blur with k_size = 25
Note those are the BLURS, not the fixes. You'll likely need to mess with the orig_img_multiplier and blur_subtraction_factor based on the frequency (I can't remember exactly how, so I can't really tell you how it's done.) Don't hesitate to fiddle with gs_border, gamma, and anything else you might find in the documentation for the methods I've shown.
Good luck with it.
By the way, the frequency is more something based on the 2-D Fast Fourier Transform, and possibly based on kernel details. I've just messed around with this stuff myself - definitely not an expert AND definitely happy if someone wants to give more details - but I hope I've given a basic idea. Adding some jitter noise (up and down or side to side blurring, rather than radius-based), might be helpful as well.

Parallel Line detection using Hough Transform, OpenCV and python

I need help with an algorithm I've been working on. I'm trying to detect all the lines in a thresholded image and then output only those that are parallel. The thresholded image outputs the object of my interest, and then I filter this image through a Canny edge detector. This edge image is then passed through the Probabilistic Hough Transform. Now, I want the algorithm to be capable of detecting parallel lines in any image. I had in mind to do this by detecting the coordinates of all the lines and calculating their slope (and with that the angle). Parallel lines must have the same or almost the same angle, and in that way I could output only the lines with the same angle. Maybe I could draw an imaginary line in the image and then use it as a reference for all the detected lines? I just don't understand how to use the coordinates of all the lines detected through the function cv2.HoughLinesP(). The documentation of this function says that the output is a 4D array, and this is confusing for me. This is a part of my code:
Line Detection through Probabilistic Hough Transform
rho_res = .1 # [pixels]
theta_res = np.pi / 180. # [radians]
threshold = 50 # [# votes]
min_line_length = 100 # [pixels]
max_line_gap = 40 # [pixels]
lines = cv2.HoughLinesP(edge_image, rho_res, theta_res, threshold, np.array([]),
                        minLineLength=min_line_length, maxLineGap=max_line_gap)
Draw lines
if lines is not None:
    for i in range(0, len(lines)):
        coords = lines[i][0]
        slope = (float(coords[3]) - coords[1]) / (float(coords[2]) - coords[0])
        cv2.line(img, (coords[0], coords[1]), (coords[2], coords[3]), (0,0,255), 2, cv2.LINE_AA)
Any idea on how I could extrapolate all the detected lines and then output only those that are parallel? I have tried a few algorithms online but none seems to work. Again, my problem is understanding and working with the output variables of the function cv2.HoughLinesP(). I have also found code that is supposed to calculate the slope. I tried it, but it just gives me one value (one slope). I want the slope of all the lines in the image.
Project the Hough transform onto the angle axis. This gives you a 1D signal as a function of theta, that is proportional to the “amount of line” in that orientation. Peaks in this signal indicate orientations that have many parallel lines. Find the largest peak, that gives you a theta.
Now go back to the Hough transform image, and detect peaks with this value of theta (maybe allow a little bit of wiggle). Now you’ll have all parallel lines at this orientation.
Sorry I can’t give you code that works with cv2.HoughLinesP, I don’t know this function. I hope this description gives you a starting point.
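A hedged sketch of that projection idea using cv2.HoughLines (the standard, non-probabilistic transform), since its output carries theta directly; edges is assumed to be the questioner's Canny edge image and the 2-degree wiggle is arbitrary:

import cv2
import numpy as np

lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)   # shape (N, 1, 2): (rho, theta)
thetas = lines[:, 0, 1]

# "project onto the angle axis": histogram the orientations and find the peak
hist, bin_edges = np.histogram(thetas, bins=180, range=(0, np.pi))
peak_theta = bin_edges[hist.argmax()] + (bin_edges[1] - bin_edges[0]) / 2

# keep the lines whose orientation is close to the dominant one
wiggle = np.deg2rad(2)
parallel = lines[np.abs(thetas - peak_theta) < wiggle]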
Calculate the slope (angle) for all lines in the range 0..Pi using the atan2 function. To limit the range to positive angles, add Pi to negative results.
Sort the results by slope. Walk through the sorted list and make unions of close values; these lines are near-parallel. Note that you might have a long run of slightly different neighbouring values whose start and end differ a lot, so use some (angular) threshold to break such a run (see the sketch below).
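A small sketch of that grouping, assuming lines is the (N, 1, 4) array returned by cv2.HoughLinesP above; the 2-degree threshold is a placeholder:

import numpy as np

segs = lines[:, 0, :]                                    # rows of (x1, y1, x2, y2)
angles = np.arctan2(segs[:, 3] - segs[:, 1], segs[:, 2] - segs[:, 0])
angles[angles < 0] += np.pi                              # fold into [0, pi)

order = np.argsort(angles)
ang_thresh = np.deg2rad(2)                               # break a run on a large angular jump

groups, current = [], [order[0]]
for prev, idx in zip(order[:-1], order[1:]):
    if angles[idx] - angles[prev] < ang_thresh:
        current.append(idx)
    else:
        groups.append(current)
        current = [idx]
groups.append(current)

# groups with more than one segment hold (near-)parallel lines
parallel_groups = [g for g in groups if len(g) > 1]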
I just don't understand how to use the coordinates of all the lines detected through the function cv2.HoughLinesP(). The documentation of this function says that the output is a 4D array and this is confusing for me.
The 4D array is just the output vector of detected lines. Each line is represented by a 4-element vector (x1, y1, x2, y2), where (x1, y1) and (x2, y2) are the end points of each detected line segment.
Please refer to the attached picture to get an idea of what those mean. Keep in mind that those coordinates are in the image space.
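In Python terms, that just means the result has shape (N, 1, 4), so each detected segment can be unpacked like this (a small illustration, not code from the question):

for line in lines:
    x1, y1, x2, y2 = line[0]
    # (x1, y1) and (x2, y2) are the end points of one detected segment,
    # in image (pixel) coordinates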

OpenCV concave and convex corner points of polygons

Problem
I am working on a project where I need to get the bounding boxes of dumbbell-like shapes. However, I need the fewest points possible, and the boxes need to fit the shapes at all corners. Here's an image I made to test: Blurry, cracked, dumbbell shape
I don't care about the gaps going into the shape, I just want to clean it up, and straighten the edges so that I can get the contours of a shape like this: Cleaned up
I have been attempting to threshold() it out, getting the contours of it using findContours() and then using approxPolyDP() to simplify the crazy amount of points the contours end up being. So, after fiddling with this for about three days now, how can I simply get either:
Two boxes specifying the ends of the dumbell and a rectangle in the middle, or
One contour with the twelve points for all the corners
The second option would be preferred since that really is my ultimate goal: getting the points that are at those corners.
A few things to note:
I am using OpenCV for Python
There will generally be many of these shapes of all sizes all over the input image
They will have only horizontal or vertical positioning. No strange 27 degree angles...
What I need:
I really don't need someone to write the code for me, I just need some method or algorithm in order to get this done, preferably with some simple examples.
My Code
Here is my overly clean code with functions I don't even use but figure I would use them eventually:
import cv2
import numpy as np

class traceImage():
    def __init__(self, imageLocation):
        self.threshNum = 127
        self.im = cv2.imread(imageLocation)
        self.imOrig = self.im
        self.imGray = cv2.cvtColor(self.im, cv2.COLOR_BGR2GRAY)
        self.ret, self.imThresh = cv2.threshold(self.imGray, self.threshNum, 255, 0)
        self.contours, self.hierarchy = cv2.findContours(self.imThresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    def createGray(self):
        self.imGray = cv2.cvtColor(self.im, cv2.COLOR_BGR2GRAY)

    def adjustThresh(self, threshNum):
        self.ret, self.imThresh = cv2.threshold(self.imGray, threshNum, 255, 0)

    def getContours(self):
        self.contours, self.hierarchy = cv2.findContours(self.imThresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    def approximatePoly(self, percent):
        i = 0
        for shape in self.contours:
            shape = cv2.approxPolyDP(shape, percent*cv2.arcLength(shape, True), True)
            self.contours[i] = shape
            i += 1

    def drawContours(self, blobWidth, color=(255,255,255)):
        cv2.drawContours(self.im, self.contours, -1, color, blobWidth)

    def newWindow(self, name):
        cv2.namedWindow(name)

    def showImage(self, window):
        cv2.imshow(window, self.im)

    def display(self):
        while True:
            cv2.waitKey()

    def displayUntil(self, key):
        while True:
            pressed = cv2.waitKey()
            if pressed == key:
                break

if __name__ == "__main__":
    blobWidth = 30
    ti = traceImage("dumbell.png")
    ti.approximatePoly(0.01)
    for thresh in range(127, 256):
        ti.adjustThresh(thresh)
        ti.getContours()
        ti.drawContours(blobWidth)
        ti.showImage("Image")
        ti.displayUntil(10)
    ti.createGray()
    ti.adjustThresh(127)
    ti.getContours()
    ti.approximatePoly(0.0099)
    ti.drawContours(2, (0, 255, 0))
    ti.showImage("Image")
    ti.display()
Code Explanation
I know I might not be doing some things right here, but hey, I'm proud of it :)
So, the idea is that there are very often holes and gaps in these dumbbells, and so I figured that if I iterated through all the threshold values from 127 to 255 and drew the contours onto the image with a large enough thickness, the thickness of the drawn contours would fill in any small enough holes, and I could use the new, blobby image to get the edges and then scale the sides back down to size. That was my thinking. There's got to be another, better way though...
Summary
I want to end up with 12 points; one for each corner of the shape.
EDIT:
After trying out some erosion and dilation, it seems that the best option would be to slice the contours at certain points and then use bounding boxes around the sliced shapes to get the right boxy corners, and then doing some calculations to rejoin the boxes into one shape. A rather interesting challenge...
EDIT 2:
I discovered something that works well! I made my own line detection system, that only detects horizontal or vertical lines, and then on a detected line/contour edge, the program draws a black line that extends across the whole image, thus effectively slicing the image at the straight lines of the contours. Once it does that, it gets new contours of the sliced up boxes, draws bounding boxes around the pieces and then uses dilation to close the gaps. So far, it works well on large shapes, but when the shapes are small, it tends to lose a bit of the shape.
So, after fiddling with erosion, dilation, closing, opening, and looking at straight contours, I have figured out a solution that works. Thank you @Ante and @a.alsram! Your two ideas combined helped me get to my solution. So here's how it works.
Method
The program iterates over each contour, and over every pair of points in the contour, looking for point pairs that lie on the same axis and calculating the distance between them. If the distance is greater than an adjustable threshold, the program decides that those points form an edge of the shape. Then the program uses that edge and draws a black line along the whole contour, thus cutting the contour at that edge. Then the program redetermines the contours; since the shape was cut, the pieces that were cut off are now their own contours, which are then bounded by bounding boxes. Finally, all shapes are dilated and eroded (closed) to rejoin the boxes that were cut off.
This method can be done several times, but each time there is a little bit of accuracy loss. But it works for what I need and certainly was a fun challenge! Thanks for your help guys!
natebot13
Maybe a simple solution can help. If there is a threshold length for closing gaps, it is possible to split the image into a grid with cell lengths >= threshold, and use the cells that have something inside. With that there will be only horizontal and vertical lines, and by taking care that the grid follows the original horizontal and vertical lines it will cover the main line features.
Update
Take a look at mathematical morphology. I think a closing operation with a structuring element of (2*k+1)x(2*k+1) pixels can do what you are looking for (see the sketch below).
The algorithm takes a threshold parameter k and performs a dilation and then an erosion. That means changing the image so that for each white pixel all neighbours within distance k (a (2*k+1)x(2*k+1) box) are set to white, and then changing the image so that for each black pixel the neighbours within distance k are set to black.
It is enough to do operations on boundary pixels.
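In OpenCV terms that closing is one call; a minimal sketch, assuming dumbell.png is the OP's test image and k is the gap-closing threshold:

import cv2
import numpy as np

k = 5
img = cv2.imread('dumbell.png', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

kernel = np.ones((2 * k + 1, 2 * k + 1), np.uint8)
# closing = dilation followed by erosion; fills gaps narrower than roughly 2*k pixels
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
cv2.imwrite('dumbell_closed.png', closed)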

Finding shapes in an image using opencv

I'm trying to look for shapes in an image using OpenCV. I know the shapes I want to match (there are some shapes I don't know about, but I don't need to find them) and their orientations. I don't know their sizes (scale) and locations.
My current approach:
Detect contours
For each contour, calculate the maximum bounding box
Match each bounding box to one of the known shapes separately. In my real project, I'm scaling the region to the template size and calculating differences in Sobel gradient, but for this demo, I'm just using the aspect ratio.
Where this approach comes undone is where shapes touch. The contour detection picks up the two adjacent shapes as a single contour (single bounding box). The matching step will then obviously fail.
Is there a way to modify my approach to handle adjacent shapes separately? Also, is there a better way to perform step 3?
For example: (Es colored green, Ys colored blue)
Failed case: (unknown shape in red)
Source code:
import cv
import sys

E = cv.LoadImage('e.png')
E_ratio = float(E.width)/E.height
Y = cv.LoadImage('y.png')
Y_ratio = float(Y.width)/Y.height
EPSILON = 0.1

im = cv.LoadImage(sys.argv[1], cv.CV_LOAD_IMAGE_GRAYSCALE)
storage = cv.CreateMemStorage(0)
seq = cv.FindContours(im, storage, cv.CV_RETR_EXTERNAL,
                      cv.CV_CHAIN_APPROX_SIMPLE)
regions = []
while seq:
    pts = [ pt for pt in seq ]
    x, y = zip(*pts)
    min_x, min_y = min(x), min(y)
    width, height = max(x) - min_x + 1, max(y) - min_y + 1
    regions.append((min_x, min_y, width, height))
    seq = seq.h_next()

rgb = cv.LoadImage(sys.argv[1], cv.CV_LOAD_IMAGE_COLOR)
for x, y, width, height in regions:
    pt1 = x, y
    pt2 = x + width, y + height
    if abs(float(width)/height - E_ratio) < EPSILON:
        color = (0, 255, 0, 0)
    elif abs(float(width)/height - Y_ratio) < EPSILON:
        color = (255, 0, 0, 0)
    else:
        color = (0, 0, 255, 0)
    cv.Rectangle(rgb, pt1, pt2, color, 2)

cv.ShowImage('rgb', rgb)
cv.WaitKey(0)
e.png:
y.png:
good:
bad:
Before anybody asks, no, I'm not trying to break a captcha :) OCR per se isn't really relevant here: the actual shapes in my real project aren't characters -- I'm just lazy, and characters are the easiest thing to draw (and still get detected by trivial methods).
As your shapes can vary in size and ratio, you should look at scale-invariant descriptors. A bunch of such descriptors would be perfect for your application.
Process those descriptors on your test template and then use some kind of simple classification to extract them. It should give pretty good results with simple shapes as you show.
I used Zernike and Hu moments in the past, the latter being the most famous. You can find an example of implementation here : http://www.lengrand.fr/2011/11/classification-hu-and-zernike-moments-matlab/.
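As a rough illustration using the modern cv2 API (not the legacy cv module used in the question), the Hu moments of two binary shapes can be compared like this; cv2.matchShapes performs essentially the same comparison in one call:

import cv2
import numpy as np

def hu_signature(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    # log-scale so the seven moments are on comparable magnitudes
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# smaller distance = more similar shape, independent of scale and translation
distance = np.linalg.norm(hu_signature('e.png') - hu_signature('y.png'))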
Another thing: given your problem, you should look at OCR technologies (OCR stands for optical character recognition: http://en.wikipedia.org/wiki/Optical_character_recognition ;)).
Hope this helps a bit.
Julien
Have you tried Chamfer matching, or contour matching (correspondence) using CCH as a descriptor?
Chamfer matching uses the distance transform of the target image and the template contour. It is not exactly scale invariant, but it is fast.
The latter is rather slow, as the complexity is at least quadratic for the bipartite matching problem. On the other hand, this method is invariant to scale and rotation, and probably to local distortion (for approximate matching, which IMHO is good for the bad example above).
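For reference, a bare-bones chamfer-matching sketch (single scale, brute-force sliding window; the file names are hypothetical): the distance transform of the scene's edge map is sampled at the template's edge pixels, and the offset with the lowest average distance is the best match.

import cv2
import numpy as np

scene = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)
templ = cv2.imread('template.png', cv2.IMREAD_GRAYSCALE)

scene_edges = cv2.Canny(scene, 50, 150)
templ_edges = cv2.Canny(templ, 50, 150)

# distance from every pixel to the nearest scene edge
dist = cv2.distanceTransform(255 - scene_edges, cv2.DIST_L2, 3)

ty, tx = np.nonzero(templ_edges)        # template edge coordinates
H, W = scene_edges.shape
h, w = templ_edges.shape

best_cost, best_pos = np.inf, None
for y in range(0, H - h, 4):            # coarse stride keeps the sketch fast
    for x in range(0, W - w, 4):
        cost = dist[ty + y, tx + x].mean()   # average distance = chamfer cost
        if cost < best_cost:
            best_cost, best_pos = cost, (x, y)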
