Why is OpenCV/Hough Transform not finding the whole line? - python

I finished a tutorial on OpenCV for finding lanes, and I am trying to apply it to finding a piece of tape on the floor. I got the code running and set the region of interest, but it only finds a few edges of the tape. I think it has to do with the thickness, but I am not 100% sure. Any help would be appreciated.
import cv2
import numpy as np
import matplotlib.pyplot as plt
def canny(image):
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    blur = cv2.GaussianBlur(gray, (5,5), 0)
    canny = cv2.Canny(blur, 50, 150)
    return canny

def display_lines(image, lines):
    line_image = np.zeros_like(image)
    if lines is not None:
        for line in lines:
            x1, y1, x2, y2 = line.reshape(4)
            cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 10)
    return line_image

def region_of_interest(image):
    height = image.shape[0]
    polygons = np.array([
        [(200, height), (400, height), (355, 0)]
    ])
    mask = np.zeros_like(image)
    cv2.fillPoly(mask, polygons, 255)
    masked_image = cv2.bitwise_and(image, mask)
    return masked_image
image = cv2.imread('tape3.jpg')
lane_image = np.copy(image)
canny_image = canny(image)
cropped_image = region_of_interest(canny_image)
lines = cv2.HoughLinesP(cropped_image, 2, np.pi/180, 100, np.array([]), minLineLength=40, maxLineGap=5)
line_image = display_lines(lane_image, lines)
combo_image = cv2.addWeighted(lane_image, 0.8, line_image, 1, 1)
# cv2 print image
print(region_of_interest(image))
cv2.imshow("result", combo_image)
cv2.waitKey(0)

This may not answer your original question, but this could be an alternate way to achieve what you're looking for.
I started by thresholding the grayscale of the image to try and isolate the tape.
Then I used OpenCV's findContours to get the segmentation points of each white blob.
The thresholding method I used is sensitive to light and shadow so you may have to find some other thresholding method if this isn't a workable constraint. If different colored tape is a concern, you can threshold off of other values (convert to HSV or LAB and threshold off of the H or B channels respectively to look for red).
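For reference, here is a minimal sketch of that threshold-and-contours approach. The filename, the Otsu thresholding, and the drawing step are my assumptions to make it self-contained, not necessarily what was used here:
import cv2
import numpy as np
# hypothetical input; substitute your own image
img = cv2.imread('tape3.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Otsu picks a global threshold automatically; a fixed value may suit your lighting better
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# get the outline of each white blob
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
out = img.copy()
cv2.drawContours(out, contours, -1, (0, 255, 0), 2)
cv2.imshow("contours", out)
cv2.waitKey(0)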
Edit:
If you still want to use HoughLinesP, here's a working example with your picture.
First I applied canny:
Then I used the HoughLinesP function:
I've never used HoughLinesP before, so I'm not sure of the potential pitfalls, but it seems to work. Note that with these parameters it actually creates a bunch of overlapping lines; you'll have to play around with it a bit.
Relevant Code:
import cv2
import numpy as np
# read the image and convert to grayscale (filename taken from the question)
img = cv2.imread('tape3.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# canny
canned = cv2.Canny(gray, 591, 269)
# dilate to thicken the thin canny edges
kernel = np.ones((3,3), np.uint8)
canned = cv2.dilate(canned, kernel, iterations=1)
# hough
lines = cv2.HoughLinesP(canned, rho=1, theta=1*np.pi/180, threshold=30, minLineLength=10, maxLineGap=20)
Edit 2:
I looked at the documentation for the function and the third parameter (theta) refers to the angle resolution. I think it might not have worked in your code because you didn't run dilation on the image after Canny. With a one-degree search resolution it's not hard to imagine that we could miss the very thin line that canny returns. It might even be worth dilating the lines more than I did in the example by using a larger kernel (or dilating multiple times).
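As a rough sketch of that idea, continuing from the code above (the kernel size and iteration count here are guesses to experiment with, not tuned values):
# thicken the canny edges more aggressively before running Hough
kernel = np.ones((5, 5), np.uint8)  # larger than the 3x3 used above
canned = cv2.dilate(canned, kernel, iterations=2)  # or more iterations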

Related

Detect lines in Python OpenCV without applying Gaussian Blur

I am detecting lines in a noiseless, programmatically generated png file. I would normally use Hough Lines, which requires me to first provide edges from a canny detection, but the first step of the canny detection is to apply a gaussian blur to eliminate noise. Is there a way I can do edge detection on my original image without ever intentionally blurring it? I suspect this will yield better results than blurring first since the lines are already perfectly clean and high-contrast.
Here is a simple example using canny detection and an image. The lines in each group start at 5 pixels wide, then the next line is 4, then 3, 2, and 1. As you can see, the canny detection doesn't work perfectly (the 2 pixel lines appear smaller than the 1 pixel ones):
Original image:
Edges (Result of canny detection):
Sample code:
import cv2
import numpy as np
import matplotlib.pylab as plot
# img = cv2.imread("8px_and_2px_lines.png")
img = cv2.imread("5-1px_lines.png")
crop_size = 520
img = img[100:crop_size, 100:crop_size]
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imwrite("5-1px_lines_cropped.png", img)
cv2.imshow("start", img)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)
cv2.imshow("canny", edges)
cv2.imwrite("5-1px_lines_cropped_canny.png", edges)
# plot.imshow(edges, cmap="gray")
# plot.show()
lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)
line_length = 3000
for line in lines:
    rho, theta = line[0]
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a * rho
    y0 = b * rho
    x1 = int(x0 + line_length * (-b))
    y1 = int(y0 + line_length * (a))
    x2 = int(x0 - line_length * (-b))
    y2 = int(y0 - line_length * (a))
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imshow("lines", img)
cv2.waitKey()
Any ideas on how I can do a better line detection on these images? I think the gaussian blur built into the canny detector is making the lines harder to detect.
One simple way is to threshold, invert so the lines are white, and then skeletonize. Here is code for Python/OpenCV/Skimage.
Input:
import cv2
import numpy as np
import skimage.morphology
img = cv2.imread("lines_horizontal.png")
ht, wd = img.shape[:2]
# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# create a binary thresholded image
thresh = cv2.threshold(gray, 0, 1, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
# invert so lines are white
thresh = 1 - thresh
# apply skeletonization
skeleton = skimage.morphology.skeletonize(thresh)
skeleton = (255*skeleton).clip(0,255).astype(np.uint8)
# save result
cv2.imwrite("lines_horizontal_skeleton.png", skeleton)
# show results
cv2.imshow("skeleton", skeleton)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
Note that there will be some distortion at the line endpoints from the skeletonization.
Note also that the OpenCV opencv-contrib-python package has a thinning method that is similar to skeletonization; a sketch follows.
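A hedged sketch of that thinning call, assuming `thresh` is the 0/1 inverted binary from the code above (cv2.ximgproc is only available with opencv-contrib-python installed):
# thinning expects an 8-bit binary image, so scale the 0/1 mask to 0/255
thinned = cv2.ximgproc.thinning((255 * thresh).astype(np.uint8))
cv2.imwrite("lines_horizontal_thinned.png", thinned)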
Presumably, the posted image does not represent the general case, so my answer is probably inappropriate.
If you get the pixels on a vertical line, nothing's easier than detecting the transitions from white to black and back. As the lines are perfectly horizontal, it is enough to do this for a single column (but you can repeat it for every column if you want)!
By the above method, you get both sides of the lines, with their original spacing. If you need a single trace per line, average the ordinates in pairs, as in the sketch below.
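A minimal sketch of that column scan, assuming a 0/1 binary image like the `thresh` built in the previous answer (the column index 100 is arbitrary):
# one column of the binary image; int8 so the difference can go negative
col = thresh[:, 100].astype(np.int8)
# nonzero differences mark white/black transitions, i.e. the line edges
edges_y = np.flatnonzero(np.diff(col))
# for a single trace per line, average the edge ordinates in pairs
# (assumes each line contributes exactly a top and a bottom edge)
centers = edges_y.reshape(-1, 2).mean(axis=1)
print(centers)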

Rotate Image to align features with X-axis in OpenCV Python

In the following microscopy image, I extracted the horizontal white line grid using morphological operators in OpenCV. I couldn't completely get rid of the noise which is why there are some white lines in-between. The grid lines need to be parallel to the x-axis. During the microscopic reading process, perfect parallelism cannot be ensured. In this case, the lines are moving slightly upwards from left to right.
How can I realign the lines to the x-axis so that they are parallel to the lower and upper edges of the image using OpenCV or any other Python package?
I'm relatively new to OpenCV so if anyone could give me a hint what operations or functions would be helpful to tackle this problem, I'd be grateful.
Thanks!
You may fit lines, get the mean angle and rotate the image.
The suggested solution uses the following stages:
Threshold (binarize) the image.
Apply closing morphological operation for connecting the lines.
Find contours.
Iterate the contours and fit a line for each contour.
Compute the angle of each line, and build a list of angles.
Compute the mean angle of the angles that are "close to the median angle".
Rotate the image by the mean angle.
Here is the code:
import cv2
import numpy as np
import math
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE) # Read input image as grayscale.
threshed = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU)[1] # threshold (binarize) the image
# Apply closing for connecting the lines
threshed = cv2.morphologyEx(threshed, cv2.MORPH_CLOSE, np.ones((1, 10)))
# Find contours
contours = cv2.findContours(threshed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2] # [-2] indexing takes return value before last (due to OpenCV compatibility issues).
img2 = cv2.cvtColor(threshed, cv2.COLOR_GRAY2BGR) # BGR image - used for drawing
angles = [] # List of line angles.
# Iterate the contours and fit a line for each contour
# Remark: consider ignoring small contours
for c in contours:
    vx, vy, cx, cy = cv2.fitLine(c, cv2.DIST_L2, 0, 0.01, 0.01).ravel()  # Fit line (ravel unpacks to plain scalars)
    w = img.shape[1]
    cv2.line(img2, (int(cx-vx*w), int(cy-vy*w)), (int(cx+vx*w), int(cy+vy*w)), (0, 255, 0))  # Draw the line for testing
    ang = (180/np.pi)*math.atan2(vy, vx)  # Compute the angle of the line.
    angles.append(ang)
angles = np.array(angles)  # Convert angles to NumPy array.
# Remove outliers: keep only angles between the 40th and 60th percentiles (those close to the median), then average them.
lo_val, up_val = np.percentile(angles, (40, 60))
mean_ang = np.mean(angles[np.where((angles >= lo_val) & (angles <= up_val))])
print(f'mean_ang = {mean_ang}') # -0.2424
M = cv2.getRotationMatrix2D((img.shape[1]//2, img.shape[0]//2), mean_ang, 1) # Get transformation matrix - for rotating by mean_ang
img = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]), flags=cv2.INTER_CUBIC)  # Rotate the image (flags must be passed by keyword; the positional slot is dst)
# Display results
cv2.imshow('img2', img2)
cv2.imshow('img', img)
cv2.waitKey()
cv2.destroyAllWindows()
Result:
img2 (for testing):
img (after rotating):
Note:
The code is just an example - I don't expect it to solve all of your microscopy images.

HoughCircles can't detect this circle

I'm using OpenCV to detect some coins. First I used some functions to fill the coin area so I can make a solid white circle where the coin is; then I'm trying to use HoughCircles to detect the white circle so I can crop it and send it to a neural network. But HoughCircles is not detecting anything. Any tips on this?
Here is the code:
import numpy as np
import cv2
gray = cv2.imread('coin25a2.jpg',0)
color = cv2.imread('coin25a2.jpg',1)
gray_blur = cv2.GaussianBlur(gray, (15,15), 0)
thresh = cv2.adaptiveThreshold(gray_blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11,1)
kernel = np.ones((3, 3), np.uint8)
closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=7)
circles = cv2.HoughCircles(closing,cv2.HOUGH_GRADIENT,1, 200, 20,30,30, 0)
circles = np.uint16(np.around(circles))
print(circles)
cv2.imshow("a", closing)
cv2.waitKey(0)
The circles variable is not returning any valid (x,y,r).
circles = cv2.HoughCircles(closing,cv2.HOUGH_GRADIENT,1, 200, 20,30,30, 0)
The last parameter is the maximum radius of the circle that you want to find. I think you need to put a large value there, instead of 0.
A better plan is to go with only the default parameters and adjust later.
cv2.HoughCircles(image, method, dp, minDist)
which is the same as:
cv2.HoughCircles(closing,cv2.HOUGH_GRADIENT,1, 200)
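For example, here is a sketch that starts from the defaults and names every further parameter explicitly, so nothing lands in the wrong positional slot. The values are starting points to tune, not known-good settings:
# keyword arguments avoid the positional mix-up in the question's call
circles = cv2.HoughCircles(closing, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                           param1=50, param2=30, minRadius=20, maxRadius=300)
if circles is not None:  # HoughCircles returns None when nothing is found
    circles = np.uint16(np.around(circles))
    print(circles)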

How to remove an extension to a blob caused by morphology

I have an image that I'm eroding and dilating like so:
kernel = np.ones((5,5),np.float32)/1
eroded_img = cv2.erode(self.inpainted_adjusted_image, kernel, iterations=10)
dilated_img = cv2.dilate(eroded_img, kernel, iterations=10)
Here's the result of the erosion and dilation:
and then I'm taking a threshold of it like so:
self.thresh = cv2.threshold(dilated_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
But the threshold gives me an unwanted extension that I've marked in the image below (The region above the red line is the unwanted region):
How do I get rid of this unwanted region? Is there a better way to do what I'm doing?
Working with a different type of threshold (an adaptive threshold, which takes local brightness into account) will already get rid of your problem: the adaptive threshold result is what you are looking for.
[EDIT: I have taken the liberty of adding some code on Hough circles. I admit that I played with the parameters for this single image to get a nice-looking result, though I do not know what kind of accuracy you need for this type of problem.]
import cv2
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread('image.png',0)
thresh = cv2.threshold(img, 210, 255, cv2.THRESH_BINARY)[1]  # the original passed cv2.ADAPTIVE_THRESH_MEAN_C here, which happens to equal cv2.THRESH_BINARY; for a true adaptive threshold use cv2.adaptiveThreshold
canny = cv2.Canny(thresh,50,150)
cimg = cv2.cvtColor(img,cv2.COLOR_GRAY2BGR)
circles = cv2.HoughCircles(canny,cv2.HOUGH_GRADIENT,1,20, param1=50,param2=23,minRadius=0,maxRadius=0)
circles = np.uint16(np.around(circles))
for i in circles[0,:]:
    # draw the outer circle
    cv2.circle(cimg, (i[0], i[1]), i[2], (255, 0, 0), 3)
    # draw the center of the circle
    cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
titles = ['Original Image', 'Adaptive Thresholding', "Canny", "Hough Circle"]
images = [img, thresh, canny, cimg]
for i in range(4):  # xrange in the original Python 2 code
    plt.subplot(2, 2, i+1), plt.imshow(images[i], 'gray')
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
Let us know if this is not yet sufficient.
From the binary image it would be fairly easy to fit a circle using a Hough transform. Once you have the outer boundary of the circle, I would suggest bleeding the boundary and cropping out the portion that lies outside it.
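A rough sketch of that cropping step, assuming `img` is the input image and a circle center (x, y) and radius r have already been found with HoughCircles (the 5-pixel padding is an arbitrary "bleed"):
x, y, r = 150, 150, 80  # hypothetical circle from HoughCircles
mask = np.zeros(img.shape[:2], np.uint8)
cv2.circle(mask, (x, y), r + 5, 255, -1)  # filled circle, radius padded to bleed the boundary
cropped = cv2.bitwise_and(img, img, mask=mask)  # everything outside the circle goes black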
Another approach is to adjust your threshold value; it looks like you could get away with that. You might need some morphological operations to get a clean edge. Using a disk kernel will help retain the shape to a good extent, as in the sketch below.
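By a disk kernel, something like the following is meant (a sketch, assuming `thresh` is the binary image from the question; the kernel size is arbitrary):
# an elliptical (disk-shaped) structuring element preserves round blobs
# better than the square np.ones kernel used in the question
disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
cleaned = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, disk)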
Since your question has been rolled back to its original version, I have attached a solution using flood fill which works on your images.
import numpy as np
import cv2
import matplotlib.pyplot as plt
img = cv2.imread('image.png', 0)
h, w = img.shape[:2]
mask = np.zeros((h+2, w+2), np.uint8)
gray = cv2.blur(img,(5,5))
(minVal, maxVal, minLoc, maxLoc) = cv2.minMaxLoc(gray)
print(maxLoc)
flooded = img.copy()
mask[:] = 0
connectivity = 4  # or 8
flags = connectivity
flags |= cv2.FLOODFILL_FIXED_RANGE
cv2.floodFill(flooded, mask, maxLoc, (255, 255, 255), (60,)*3, (60,)*3, flags)
thresh = cv2.threshold(flooded, 250, 255, cv2.THRESH_BINARY)[1]
titles = ['Original Image', 'Blurred', "Floodfill", "Threshold"]
images = [img, gray, flooded, thresh]
for i in range(4):  # xrange in the original Python 2 code
    plt.subplot(2, 2, i+1), plt.imshow(images[i], 'gray')
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()

Improve contour detection with OpenCV (Python)

I am trying to identify cards from a photo. I managed to do what I wanted on ideal photos, but I am now having a hard time applying the same procedure with slightly different lighting, etc. So the question is about making the following contour detection more robust.
I need to share a big part of my code so readers can reproduce the images of interest, but my question relates only to the last block and image.
import numpy as np
import cv2
from matplotlib import pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid
import math
img = cv2.imread('image.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # the original referenced an undefined 'frame' here
plt.imshow(img)
Then the cards are detected:
# Prepocess
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(1,1),1000)
flag, thresh = cv2.threshold(blur, 120, 255, cv2.THRESH_BINARY)
# Find contours
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea,reverse=True)
# Select long perimeters only
perimeters = [cv2.arcLength(contours[i],True) for i in range(len(contours))]
listindex=[i for i in range(15) if perimeters[i]>perimeters[0]/2]
numcards=len(listindex)
# Show image
imgcont = img.copy()
[cv2.drawContours(imgcont, [contours[i]], 0, (0,255,0), 5) for i in listindex]
plt.imshow(imgcont)
The perspective is corrected:
#plt.rcParams['figure.figsize'] = (3.0, 3.0)
warp = [None] * numcards  # range() in the original; a list is needed for item assignment in Python 3
for i in range(numcards):
    card = contours[i]
    peri = cv2.arcLength(card, True)
    approx = cv2.approxPolyDP(card, 0.02*peri, True)
    rect = cv2.minAreaRect(contours[i])
    r = cv2.boxPoints(rect)  # cv2.cv.BoxPoints in the old OpenCV API
    h = np.array([ [0,0],[399,0],[399,399],[0,399] ], np.float32)
    approx = np.array([item for sublist in approx for item in sublist], np.float32)
    transform = cv2.getPerspectiveTransform(approx, h)
    warp[i] = cv2.warpPerspective(img, transform, (400, 400))
# Show perspective correction
fig = plt.figure(1, (10, 10))
grid = ImageGrid(fig, 111,  # similar to subplot(111)
                 nrows_ncols=(4, 4),  # creates a 4x4 grid of axes
                 axes_pad=0.1,  # pad between axes, in inches
                 aspect=True)
for i in range(numcards):
    grid[i].imshow(warp[i])  # the ImageGrid object works as a list of axes
That is where I am having my problem. I want to detect the contours of the shapes. The best way I found is using a combination of bilateralFilter and adaptiveThreshold on a grayscale image:
fig = plt.figure(1, (10, 10))
grid = ImageGrid(fig, 111,  # similar to subplot(111)
                 nrows_ncols=(4, 4),
                 axes_pad=0.1,
                 aspect=True)
for i in range(numcards):
    image2 = cv2.bilateralFilter(warp[i].copy(), 10, 100, 100)
    grey = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
    # cv2.cv.AdaptiveThreshold in the original old-style API; the modern equivalent:
    grey2 = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, blockSize=31, C=6)
    grid[i].imshow(grey2, cmap=plt.cm.binary)
This is very close to what I would like, but how can I improve it to get closed contours in white, and everything else in black?
Why not just use Canny and apply perspective correction after finding the contours (because it seems to blur the edges)? For example, using the small image you provided in your question (the result could be better on a bigger one):
Based on some parts of your code:
import numpy as np
import cv2
import math
img = cv2.imread('image.bmp')
# Prepocess
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
flag, thresh = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY)
# Find contours
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 3 returned an extra image value first
contours = sorted(contours, key=cv2.contourArea, reverse=True)
# Select long perimeters only
perimeters = [cv2.arcLength(contours[i],True) for i in range(len(contours))]
listindex=[i for i in range(15) if perimeters[i]>perimeters[0]/2]
numcards=len(listindex)
card_number = -1 #just so happened that this is the worst case
stencil = np.zeros(img.shape).astype(img.dtype)
cv2.drawContours(stencil, [contours[listindex[card_number]]], 0, (255, 255, 255), cv2.FILLED)
res = cv2.bitwise_and(img, stencil)
cv2.imwrite("out.bmp", res)
canny = cv2.Canny(res, 100, 200)
cv2.imwrite("canny.bmp", canny)
First, remove everything except a single card for simplicity, then apply Canny edge detector:
Then you can dilate/erode, correct perspective, remove the largest contour etc.
Except for the image in the bottom right corner, the following steps should generally work:
Dilate and erode the binary masks to bridge any one- or two-pixel gaps between contour fragments.
Use non-maximum suppression to turn your thick binary masks along the boundary of your shapes into thin edges.
As used earlier in the pipeline, use cv2.findContours to identify closed contours. Each contour identified by the method can be tested for being closed, as in the sketch below.
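One possible closedness test, sketched here, compares a contour's area against its perimeter: an open, hairline contour traces out and back and encloses almost no area. The ratio threshold is my assumption, not a standard OpenCV flag:
for c in contours:
    area = cv2.contourArea(c)
    peri = cv2.arcLength(c, True)
    if peri > 0 and area / peri > 2.0:  # arbitrary ratio; tune per image
        print("likely closed contour, area:", area)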
As a general solution to such problems, I would advise you to try my algorithm to find closed contours around a given point. Check active segmentation with fixation
