Our team set up a vision system with a camera, a microscope and a tunable lens to look at the internal surface of a cone.
The camera takes 12 images per cone, with each image covering 30 degrees.
Now we've collected many sample images and want to make sure each "fan" (as shown below) spans at least 30 degrees.
Is there any way in Python, with cv2 or other packages, to measure this central angle? Thanks.
Here is one way to do that in Python/OpenCV.
Read the image
Convert to gray
Threshold
Use morphology open and close to smooth and fill out the boundary
Apply Canny edge extraction
Separate the image into top edge and bottom edge by blackening the opposite side to each edge
Fit lines to the top and bottom edges
Compute the angle of each edge
Compute the difference between the two angles
Draw the lines on the input
Save the results
Input:
import cv2
import numpy as np
import math
# read image
img = cv2.imread('cone_shape.jpg')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# threshold
thresh = cv2.threshold(gray,11,255,cv2.THRESH_BINARY)[1]
# apply open then close to smooth boundary
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (13,13))
morph = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
kernel = np.ones((33,33), np.uint8)
morph = cv2.morphologyEx(morph, cv2.MORPH_CLOSE, kernel)
# apply canny edge detection
edges = cv2.Canny(morph, 150, 200)
hh, ww = edges.shape
hh2 = hh // 2
# split edge image in half vertically and blacken opposite half
top_edge = edges.copy()
top_edge[hh2:hh, 0:ww] = 0
bottom_edge = edges.copy()
bottom_edge[0:hh2, 0:ww] = 0
# get coordinates of white pixels in top and bottom
# note: need to transpose y,x in numpy to x,y for opencv
top_white_pts = np.argwhere(top_edge.transpose()==255)
bottom_white_pts = np.argwhere(bottom_edge.transpose()==255)
# fit lines to white pixels
# (x,y) is point on line, (vx,vy) is unit vector along line
(vx1,vy1,x1,y1) = cv2.fitLine(top_white_pts, cv2.DIST_L2, 0, 0.01, 0.01)
(vx2,vy2,x2,y2) = cv2.fitLine(bottom_white_pts, cv2.DIST_L2, 0, 0.01, 0.01)
# compute angle for vectors vx,vy
top_angle = (180/math.pi)*math.atan(vy1/vx1)
bottom_angle = (180/math.pi)*math.atan(vy2/vx2)
print(top_angle, bottom_angle)
# cone angle is the difference
cone_angle = math.fabs(top_angle - bottom_angle)
print(cone_angle)
# draw lines on input
lines = img.copy()
p1x1 = int(x1-1000*vx1)
p1y1 = int(y1-1000*vy1)
p1x2 = int(x1+1000*vx1)
p1y2 = int(y1+1000*vy1)
cv2.line(lines, (p1x1,p1y1), (p1x2,p1y2), (0, 0, 255), 1)
p2x1 = int(x2-1000*vx2)
p2y1 = int(y2-1000*vy2)
p2x2 = int(x2+1000*vx2)
p2y2 = int(y2+1000*vy2)
cv2.line(lines, (p2x1,p2y1), (p2x2,p2y2), (0, 0, 255), 1)
# save resulting images
cv2.imwrite('cone_shape_thresh.jpg',thresh)
cv2.imwrite('cone_shape_morph.jpg',morph)
cv2.imwrite('cone_shape_edges.jpg',edges)
cv2.imwrite('cone_shape_lines.jpg',lines)
# show thresh and result
cv2.imshow("thresh", thresh)
cv2.imshow("morph", morph)
cv2.imshow("edges", edges)
cv2.imshow("top edge", top_edge)
cv2.imshow("bottom edge", bottom_edge)
cv2.imshow("lines", lines)
cv2.waitKey(0)
cv2.destroyAllWindows()
Thresholded image:
Morphology processed image:
Edge Image:
Lines on input:
Cone Angle (in degrees):
42.03975696357633
That sounds possible. You need to do some preprocessing and filtering to figure out what works and there is probably some tweaking involved.
There are three approaches that could work.
1.)
The basic idea is to somehow get two lines and measure the angle between them.
Pick a threshold that separates the outer black region (outside the central angle) and set all values below it to zero.
This will also set some of the blurry stripes inside the central angle to zero, so we have to try to "heal" them away. This is done by using Morphological Transformations. You can read about them here and here.
You could try the operation Closing, but I don't know if it fixes stripes. Usually it fixes dots or scratches. This answer seems to indicate that it should work on lines.
Maybe at that point apply some Gaussian blurring and do the thresholding again. Then try some edge or line detection, as in the sketch below.
It's basically trial and error; you have to see what works.
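Here is a rough sketch of that first approach; the filename 'fan.jpg', the threshold values and the kernel sizes are placeholders that would need tuning for your images:
import cv2
import numpy as np
gray = cv2.imread('fan.jpg', cv2.IMREAD_GRAYSCALE)
# zero out the dark region outside the central angle
_, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
# closing to "heal" the dark stripes inside the fan
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25))
closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
# blur, re-threshold, then look for the two straight edges of the fan
blurred = cv2.GaussianBlur(closed, (5, 5), 0)
_, mask2 = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)
edges = cv2.Canny(mask2, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 50, minLineLength=100, maxLineGap=10)
print(0 if lines is None else len(lines))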
2.)
Another thing that could work is to try to use the arc-like scratches, maybe even strengthen them, and use the Hough Circle Transform. I think it detects arcs as well.
Just try it and see what the function returns. In the best case there are several circles / arcs that you can use to estimate the central angle.
There are several approaches on arc detection here on StackOverflow or here.
I am not sure if that's the same with all your images, but the one above looks like there are some thin green and pink arcs that seem to stretch all along the central angle. You could filter for that color, then convert to grayscale.
This question might be helpful.
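As a hedged sketch of the Hough Circle idea (the filename 'fan.jpg' and all HoughCircles parameters are guesses you would have to tune):
import cv2
import numpy as np
img = cv2.imread('fan.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)
# look for circles/arcs; param2 controls how strict the accumulator is
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=50, maxRadius=0)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)   # detected arc/circle
        cv2.circle(img, (x, y), 2, (0, 0, 255), 3)   # estimated centre
cv2.imwrite('hough_circles.jpg', img)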
3.)
Apply an edge filter, e.g. Canny (skimage.feature.canny).
Try several sigmas and post the images in your question, then we can think about how to continue.
What could work is to calculate the convex hull around all points that are part of an edge. Then get the two lines that form the central angle from the convex hull.
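A rough sketch of this third approach (again, 'fan.jpg' and the sigma are placeholders); picking the two hull edges that actually form the central angle is left to you:
import cv2
import numpy as np
from skimage import io, feature
gray = io.imread('fan.jpg', as_gray=True)
edges = feature.canny(gray, sigma=2)            # try several sigmas
pts = np.argwhere(edges)[:, ::-1]               # edge pixels as (x, y)
# convex hull around all edge points; the two longest hull edges are candidates
# for the sides of the fan, and their directions give the central angle
hull = cv2.convexHull(pts.astype(np.int32)).squeeze()
lengths = np.linalg.norm(np.roll(hull, -1, axis=0) - hull, axis=1)
print(hull[np.argsort(lengths)[-2:]])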
I am using OpenCV and skimage for document analysis of datasheets.
I am trying to segment out the shaded region separately.
I am currently able to segment out the part and number as different clusters.
Using felzenszwalb() from skimage I segment the parts:
import matplotlib.pyplot as plt
import numpy as np
from skimage.segmentation import felzenszwalb
from skimage.io import imread
img = imread('test.jpg')
segments_fz = felzenszwalb(img, scale=100, sigma=0.2, min_size=50)
print("Felzenszwalb number of segments {}".format(len(np.unique(segments_fz))))
plt.imshow(segments_fz)
plt.tight_layout()
plt.show()
But I am not able to connect them. Any idea on how to methodically connect and label the corresponding segments with the part and part number would be of great help.
Thanks in advance for your time – if I’ve missed out anything, over- or under-emphasised a specific point let me know in the comments.
Preliminaries
Some preliminary code:
%matplotlib inline
%load_ext Cython
import numpy as np
import cv2
from matplotlib import pyplot as plt
import skimage as sk
import skimage.morphology as skm
import itertools
def ShowImage(title,img,ctype):
plt.figure(figsize=(20, 20))
if ctype=='bgr':
b,g,r = cv2.split(img) # get b,g,r
rgb_img = cv2.merge([r,g,b]) # switch it to rgb
plt.imshow(rgb_img)
elif ctype=='hsv':
rgb = cv2.cvtColor(img,cv2.COLOR_HSV2RGB)
plt.imshow(rgb)
elif ctype=='gray':
plt.imshow(img,cmap='gray')
elif ctype=='rgb':
plt.imshow(img)
else:
raise Exception("Unknown colour type")
plt.axis('off')
plt.title(title)
plt.show()
For reference, here's your original image:
#Read in image
img = cv2.imread('part.jpg')
ShowImage('Original',img,'bgr')
Identifying Numbers
To simplify things, we'll want to classify pixels as being either on or off. We can do so with thresholding. Since our image contains two clear classes of pixels (black and white), we can use Otsu's method. We'll invert the colour scheme since the libraries we're using consider black pixels boring and white pixels interesting.
#Convert image to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
#Apply Otsu's method to eliminate pixels of intermediate colour
ret, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
ShowImage('Applying Otsu',thresh,'gray')
#Verify that pixels are either black or white and nothing in between
np.unique(thresh)
Our strategy will be to locate numbers and then follow the line(s) near them to parts and then to label those parts. Since, conveniently, all of the Arabic numerals are formed from contiguous pixels, we can start by finding the connected components.
ret, components = cv2.connectedComponents(thresh)
#Each component is a different colour
ShowImage('Connected Components', components, 'rgb')
We can then filter the connected components to find the numbers by filtering for dimension. Note that this is not a super robust method of doing this. A better option would be to use character recognition, but this is left as an exercise to the reader :-)
class Box:
def __init__(self,x0,x1,y0,y1):
self.x0, self.x1, self.y0, self.y1 = x0,x1,y0,y1
def overlaps(self,box2,tol):
if self.x0 is None or box2.x0 is None:
return False
return not (self.x1+tol<=box2.x0 or self.x0-tol>=box2.x1 or self.y1+tol<=box2.y0 or self.y0-tol>=box2.y1)
def merge(self,box2):
self.x0 = min(self.x0,box2.x0)
self.x1 = max(self.x1,box2.x1)
self.y0 = min(self.y0,box2.y0)
self.y1 = max(self.y1,box2.y1)
box2.x0 = None #Used to mark `box2` as being no longer valid. It can be removed later
def dist(self,x,y):
#Get center point
ax = (self.x0+self.x1)/2
ay = (self.y0+self.y1)/2
#Get distance to center point
return np.sqrt((ax-x)**2+(ay-y)**2)
def good(self):
return not (self.x0 is None)
def ExtractComponent(original_image, component_matrix, component_number):
"""Extracts a component from a ConnectedComponents matrix"""
#Create a true-false matrix indicating if a pixel is part of a particular component
is_component = component_matrix==component_number
#Find the coordinates of those pixels
coords = np.argwhere(is_component)
# Bounding box of non-black pixels.
y0, x0 = coords.min(axis=0)
y1, x1 = coords.max(axis=0) + 1 # slices are exclusive at the top
# Get the contents of the bounding box.
return x0,x1,y0,y1,original_image[y0:y1, x0:x1]
numbers_img = thresh.copy() #This is used purely to show that we can identify numbers
numbers = []
for component in range(components.max()):
tx0,tx1,ty0,ty1,this_component = ExtractComponent(thresh, components, component)
#ShowImage('Component #{0}'.format(component), this_component, 'gray')
cheight, cwidth = this_component.shape
#print(cwidth,cheight) #Enable this to see dimensions
#Identify numbers based on aspect ratio
if (abs(cwidth-14)<3 or abs(cwidth-7)<3) and abs(cheight-24)<3:
numbers_img[ty0:ty1,tx0:tx1] = 128
numbers.append(Box(tx0,tx1,ty0,ty1))
ShowImage('Numbers', numbers_img, 'gray')
We now connect the numbers into contiguous blocks by expanding their bounding boxes slightly and looking for overlaps.
#This is kind of a silly way to do this, but it will work fine for small quantities (hundreds)
merged=True #If true, then a merge happened this round
while merged: #Continue until there are no more mergers
merged=False #Reset merge indicator
for a,b in itertools.combinations(numbers,2): #Consider all pairs of numbers
if a.overlaps(b,10): #If this pair overlaps
a.merge(b) #Merge it
merged=True #Make a note that we've merged
numbers = [x for x in numbers if x.good()] #Eliminate those boxes that were gobbled by the mergers
#This is used purely to show that we can identify numbers
numbers_img = thresh.copy()
for n in numbers:
numbers_img[n.y0:n.y1,n.x0:n.x1] = 128
thresh[n.y0:n.y1,n.x0:n.x1] = 0 #Drop numbers from thresholded image
ShowImage('Numbers', numbers_img, 'gray')
Okay, so now we've identified the numbers! We'll use these later to identify parts.
Identifying Arrows
Next, we'll want to figure out what parts the numbers are pointing to. To do so, we want to detect lines. The Hough transform is good for this. To reduce the number of false positives, we skeletonize the data, which transforms it into a representation which is at most one pixel wide.
skel = sk.img_as_ubyte(skm.skeletonize(thresh>0))
ShowImage('Skeleton', skel, 'gray')
Now we perform the Hough transform. We're looking for one that identifies all of the lines going from the numbers to the parts. Getting this right may take some fiddling with the parameters.
lines = cv2.HoughLinesP(
skel,
1, #Resolution of r in pixels
np.pi / 180, #Resolution of theta in radians
30, #Minimum number of intersections to detect a line
None,
80, #Min line length
10 #Max line gap
)
lines = [x[0] for x in lines]
line_img = thresh.copy()
line_img = cv2.cvtColor(line_img, cv2.COLOR_GRAY2BGR)
for l in lines:
color = tuple(map(int, np.random.randint(low=0, high=255, size=3)))
cv2.line(line_img, (l[0], l[1]), (l[2], l[3]), color, 3, cv2.LINE_AA)
ShowImage('Lines', line_img, 'bgr')
We now want to find the line or lines which are closest to each number and retain only these. We're essentially filtering out all of the lines which are not arrows. To do so, we compare the end points of each line to the center point of each number box.
comp_labels = np.zeros(img.shape[0:2], dtype=np.uint8)
for n_idx,n in enumerate(numbers):
distvals = []
for i,l in enumerate(lines):
#Distances from each point of line to midpoint of rectangle
dists = [n.dist(l[0],l[1]),n.dist(l[2],l[3])]
#Minimum distance and the end point (0 or 1) of the line associated with that point
#Tuples of (Line Number, Line Point, Dist to Line Point) are produced
distvals.append( (i,np.argmin(dists),np.min(dists)) )
#Sort by distance between the number box and the line
distvals = sorted(distvals, key=lambda x: x[2])
#Include nearby lines, not just the closest one. This accounts for forking.
distvals = [x for x in distvals if x[2]<1.5*distvals[0][2]]
#Draw a white rectangle where the number box was
cv2.rectangle(comp_labels, (n.x0,n.y0), (n.x1,n.y1), 1, cv2.FILLED)
#Draw white lines where the arrows are
for dv in distvals:
l = lines[dv[0]]
lp = (l[0],l[1]) if dv[1]==0 else (l[2],l[3])
cv2.line(comp_labels, (l[0], l[1]), (l[2], l[3]), 1, 3, cv2.LINE_AA)
cv2.line(comp_labels, (lp[0], lp[1]), ((n.x0+n.x1)//2, (n.y0+n.y1)//2), 1, 3, cv2.LINE_AA)
ShowImage('Lines', comp_labels, 'gray')
Finding Parts
This part was hard! We now want to segment the parts in the image. If there was some way to disconnect the lines linking subparts together, this would be easy. Unfortunately, the lines connecting the subparts are the same width as many of the lines which constitute the parts.
To work around this, we could use a lot of logic. It would be painful and error-prone.
Alternatively, we could assume you have an expert-in-the-loop. This expert's sole job is to cut the lines connecting the subparts. This should be both easy and fast for them. Labeling everything would be slow and sad for humans, but is fast for computers. Separating things is easy for humans, but hard for computers. So we let both do what they do best.
In this case, you could probably train someone to do this job in a few minutes, so a true "expert" isn't really necessary. Just a mildly competent human.
If you pursue this, you'll need to write the expert-in-the-loop tool. To do so, save the skeleton images, have your expert modify them, and read the skeletonized images back in. Like so.
#Save the image, or display it on a GUI
#cv2.imwrite("/z/skel.png", skel);
#EXPERT DOES THEIR THING HERE
#Read the expert-mediated image back in
skelhuman = cv2.imread('/z/skel.png')
#Convert back to the form we need
skelhuman = cv2.cvtColor(skelhuman,cv2.COLOR_BGR2GRAY)
ret, skelhuman = cv2.threshold(skelhuman,0,255,cv2.THRESH_OTSU)
ShowImage('SkelHuman', skelhuman, 'gray')
Now that we have the parts separated, we'll eliminate as many of the arrows as possible. We've already extracted these above, so we can add them back later if we need to.
To eliminate the arrows, we'll find all of the lines that terminate in locations other than by another line. That is, we'll locate pixels which have only one neighbouring pixel. We'll then eliminate the pixel and look at its neighbour. Doing this iteratively eliminates the arrows. Since I don't know another term for it, I'll call this a Fuse Transform. Since this will require manipulating individual pixels, which would be super slow in Python, we'll write the transform in Cython.
%%cython -a --cplus
import cython
from libcpp.queue cimport queue
import numpy as np
cimport numpy as np
#cython.boundscheck(False)
#cython.wraparound(False)
#cython.nonecheck(False)
#cython.cdivision(True)
cpdef void FuseTransform(unsigned char [:, :] image):
# set the variable extension types
cdef int c, x, y, nx, ny, width, height, neighbours
cdef queue[int] q
# grab the image dimensions
height = image.shape[0]
width = image.shape[1]
cdef int dx[8]
cdef int dy[8]
#Offsets to neighbouring cells
dx[:] = [-1,-1,0,1,1,1,0,-1]
dy[:] = [0,-1,-1,-1,0,1,1,1]
#Find seed cells: those with only one neighbour
for y in range(1, height-1):
for x in range(1, width-1):
if image[y,x]==0: #Seed cells cannot be blank cells
continue
neighbours = 0
for n in range(0,8): #Looks at all neighbours
nx = x+dx[n]
ny = y+dy[n]
if image[ny,nx]>0: #This neighbour has a value
neighbours += 1
if neighbours==1: #Was there only one neighbour?
q.push(y*width+x) #If so, this is a seed cell
#Starting with the seed cells, gobble up the lines
while not q.empty():
c = q.front()
q.pop()
y = c//width #Convert flat index into 2D x-y index
x = c%width
image[y,x] = 0 #Gobble up this part of the fuse
neighbour = -1 #No neighbours yet
for n in range(0,8): #Look at all neighbours
nx = x+dx[n] #Find coordinates of neighbour cells
ny = y+dy[n]
#If the neighbour would be off the side of the matrix, ignore it
if nx<0 or ny<0 or nx==width or ny==height:
continue
if image[ny,nx]>0: #Is the neighbouring cell active?
if neighbour!=-1: #If we've already found an active neighbour
neighbour=-1 #Then pretend we found no neighbours
break #And stop looking. This is the end of the fuse.
else: #Otherwise, make a note of the neighbour's index.
neighbour = ny*width+nx
if neighbour!=-1: #If there was only one neighbour
q.push(neighbour) #Continue burning the fuse
Back in standard Python:
#Apply the Fuse Transform
skh_dilated=skelhuman.copy()
FuseTransform(skh_dilated)
ShowImage('Fuse Transform', skh_dilated, 'gray')
Now that we've eliminated all of the arrows and lines connecting the parts, we dilate the remaining pixels a lot.
kernel = np.ones((3,3),np.uint8)
dilated = cv2.dilate(skh_dilated, kernel, iterations=6)
ShowImage('Dilation', dilated, 'gray')
Putting It All Together
And overlay the labels and arrows we segmented out earlier...
comp_labels_dilated = cv2.dilate(comp_labels, kernel, iterations=5)
labels_combined = np.uint8(np.logical_or(comp_labels_dilated,dilated))
ShowImage('Comp Labels', labels_combined, 'gray')
Finally, we take the merged number boxes, component arrows, and parts and color each of them using pretty colors from Color Brewer. We then overlay this on the original image to obtain the desired highlighting.
ret, labels = cv2.connectedComponents(labels_combined)
colormask = np.zeros(img.shape, dtype=np.uint8)
#Colors from Color Brewer
colors = [(228,26,28),(55,126,184),(77,175,74),(152,78,163),(255,127,0),(255,255,51),(166,86,40),(247,129,191),(153,153,153)]
for l in range(labels.max()):
if l==0: #Background component
colormask[labels==0] = (255,255,255)
else:
colormask[labels==l] = colors[l]
ShowImage('Comp Labels', colormask, 'bgr')
blended = cv2.addWeighted(img,0.7,colormask,0.3,0)
ShowImage('Blended', blended, 'bgr')
The final image
So, to recap, we identified numbers, arrows, and parts. In some cases, we were able to separate them automatically. In other cases, we used expert in the loop. Where we had to manipulate pixels individually, we used Cython for speed.
Of course, the danger with this sort of thing is that some other image will break the (many) assumptions I've made here. But that's a risk that you take when you try to use a single image to present a problem.
I'm currently using skimage.measure.find_contours() to find contours on a surface. Now that I've found the contours I need to able to find the area enclosed within them.
When all of the vertices are within the data set this is fine as a have a fully enclosed polygon.
However, how do I ensure the polygon is fully enclosed if the contour breaches the edge of the surface, either at an edge or at a corner? When this happens I would like to use the edge of the surface as additional vertices to close off the polygon. For example in the following image, with contours shown, you can see that the contours end at the edge of the image, how do I close them up? Also in the example of the brown contour, which is just a single line, I don't think I want an area returned, how would I single out this case?
I know I can check for enclosed contours/polygons by checking if the last vertices of the polygon is the same as the first.
I have code for calculating the area inside a polygon, taken from here
def find_area(array):
a = 0
ox,oy = array[0]
for x,y in array[1:]:
a += (x*oy-y*ox)
ox,oy = x,y
return -a/2
I just need help in closing off the polygons. And checking for the different cases that might occur.
Thanks
Update:
After applying the solution suggested by @soupault I have this code:
import numpy as np
import matplotlib.pyplot as plt
from skimage import measure
# Construct some test data
x, y = np.ogrid[-np.pi:np.pi:100j, -np.pi:np.pi:100j]
r = np.sin(np.exp((np.sin(x)**3 + np.cos(y)**2)))
# Coordinates of point of interest
pt = [(49,75)]
# Apply thresholding to the surface
threshold = 0.8
blobs = r > threshold
# Make a labelled image based on the thresholding regions
blobs_labels = measure.label(blobs, background = 0)
# Show the thresholded regions
plt.figure()
plt.imshow(blobs_labels, cmap='spectral')
# Apply regionprops to characterise each of the regions
props = measure.regionprops(blobs_labels, intensity_image = r)
# Loop through each region in regionprops, identify if the point of interest is
# in that region. If so, plot the region and print its area.
plt.figure()
plt.imshow(r, cmap='Greys')
plt.plot(pt[0][0], pt[0][1],'rx')
for prop in props:
coords = prop.coords
if np.sum(np.all(coords[:,[1,0]] == pt[0], axis=1)):
plt.plot(coords[:,1],coords[:,0],'r.')
print(prop.area)
This solution assumes that each pixel is 1x1 in size. In my real data this isn't the case, so I have also applied the following function to apply linear interpolation to the data. I believe you can also apply a similar function to make the area of each pixel smaller and increase the resolution of the data.
import numpy as np
from scipy import interpolate
def interpolate_patch(x,y,patch):
x_interp = np.arange(np.ceil(x[0]), x[-1], 1)
y_interp = np.arange(np.ceil(y[0]), y[-1], 1)
f = interpolate.interp2d(x, y, patch, kind='linear')
patch_interp = f(x_interp, y_interp)
return x_interp, y_interp, patch_interp
If you need to measure the properties of different regions, it is natural to start with finding the regions (not contours).
The algorithm will be the following, in this case:
Prepare a labeled image:
1.a Either fill the areas between different contour lines with the different colors;
1.b Or apply some image thresholding function, and then run skimage.measure.label (http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.label);
Execute regionprops using the labeled image as an input (http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.regionprops);
Iterate over regions in regionprops and calculate the desired parameters (area, perimeter, etc).
Once you identified the regions in your image via regionprops, you can call .coords for each of them to get the enclosed contour.
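A minimal sketch of that workflow, assuming a 2D array surface and a threshold suited to your data (both are placeholders):
import numpy as np
from skimage import measure
surface = np.random.rand(100, 100)               # stand-in for your real surface
labeled = measure.label(surface > 0.8, background=0)
for region in measure.regionprops(labeled):
    # region.area counts the enclosed pixels even when the region touches the
    # image border, and region.coords lists their (row, col) positions
    print(region.label, region.area, len(region.coords))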
If someone needs to close open contours along the image edges (and make a polygon), here is a way:
import shapely.geometry as sgeo
import shapely.ops as sops
def close_contour_with_image_edge(contour, image_shape):
"""
this function uses shapely because its easiest way to do that
:param contour: contour generated by skimage.measure.find_contours()
:param image_shape: tuple (row, cols), standard return of numpy shape()
:return:
"""
# make contour linestring
contour_line = sgeo.LineString(contour)
# make image box linestring
box_rows, box_cols = image_shape[0], image_shape[1]
img_box = sgeo.LineString(coordinates=(
(0, 0),
(0, box_cols-1),
(box_rows-1, box_cols-1),
(box_rows-1, 0),
(0, 0)
))
# intersect box with non-closed contour and get shortest line which touch both of contour ends
edge_points = img_box.intersection(contour_line)
edge_parts = sops.split(img_box, edge_points)
edge_parts = list(part for part in edge_parts.geoms if part.touches(edge_points.geoms[0]) and part.touches(edge_points.geoms[1]))
edge_parts.sort(reverse=False, key=lambda x: x.length)
contour_edge = edge_parts[0]
# weld it
contour_line = contour_line.union(contour_edge)
contour_line = sops.linemerge(contour_line)
contour_polygon = sgeo.Polygon(contour_line.coords)
return contour_polygon
I am using OpenCV HoughLinesP to find horizontal and vertical lines. It is not finding any lines most of the time. Even when it finds lines, they are not even close to the lines in the actual image.
import cv2
import numpy as np
img = cv2.imread('image_with_edges.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
flag,b = cv2.threshold(gray,0,255,cv2.THRESH_OTSU)
element = cv2.getStructuringElement(cv2.MORPH_CROSS,(1,1))
cv2.erode(b,element)
edges = cv2.Canny(b,10,100,apertureSize = 3)
lines = cv2.HoughLinesP(edges,1,np.pi/2,275, minLineLength = 100, maxLineGap = 200)[0].tolist()
for x1,y1,x2,y2 in lines:
for index, (x3,y3,x4,y4) in enumerate(lines):
if y1==y2 and y3==y4: # Horizontal Lines
diff = abs(y1-y3)
elif x1==x2 and x3==x4: # Vertical Lines
diff = abs(x1-x3)
else:
diff = 0
if diff < 10 and diff is not 0:
del lines[index]
gridsize = (len(lines) - 2) / 2
cv2.line(img,(x1,y1),(x2,y2),(0,0,255),2)
cv2.imwrite('houghlines3.jpg',img)
Input Image:
Output Image: (see the Red Line):
@ljetibo Try this with:
c_6.jpg
There's quite a bit wrong here so I'll just start from the beginning.
Ok, the first thing you do after opening an image is thresholding. I strongly recommend that you have another look at the OpenCV manual on thresholding and the exact meaning of the threshold methods.
The manual mentions that
cv2.threshold(src, thresh, maxval, type[, dst]) → retval, dst
the special value THRESH_OTSU may be combined with one of the above
values. In this case, the function determines the optimal threshold
value using the Otsu’s algorithm and uses it instead of the specified
thresh .
I know it's a bit confusing because you don't actually combine THRESH_OTSU with any of the other methods (THRESH_BINARY etc...); unfortunately the manual can be like that. What this method actually does is assume that there's a "foreground" and a "background" that follow a bi-modal histogram and then apply THRESH_BINARY, I believe.
Imagine this as if you're taking an image of a cathedral or a high building mid day. On a sunny day the sky will be very bright and blue, and the cathedral/building will be quite a bit darker. This means the group of pixels belonging to the sky will all have high brightness values, that is will be on the right side of the histogram, and the pixels belonging to the church will be darker, that is to the middle and left side of the histogram.
Otsu uses this to try and guess the right "cutoff" point, called thresh. For your image Otsu's alg. supposes that all that white on the side of the map is the background, and the map itself the foreground. Therefore your image after thresholding looks like this:
After this point it's not hard to guess what goes wrong. But let's go on. What you're trying to achieve is, I believe, something like this:
flag,b = cv2.threshold(gray,160,255,cv2.THRESH_BINARY)
Then you go on and try to erode the image. I'm not sure why you're doing this: was your intention to "bold" the lines, or was it to remove noise? In any case, you never assigned the result of the erosion to anything. Numpy arrays, which is the way images are represented, are mutable, but that's not the way the syntax works:
cv2.erode(src, kernel, [optionalOptions] ) → dst
So you have to write:
b = cv2.erode(b,element)
Ok, now for the element and how erosion works. Erosion drags a kernel over an image. A kernel is a simple matrix with 1's and 0's in it. One of the elements of that matrix, usually the centre one, is called the anchor. The anchor is the element that will be replaced at the end of the operation. When you created
cv2.getStructuringElement(cv2.MORPH_CROSS, (1, 1))
what you created is actually a 1x1 matrix (1 column, 1 row). This makes erosion completely useless.
What erosion does is first retrieve all the pixel brightness values from the original image where the kernel element overlapping the image segment has a "1". Then it finds the minimal value among the retrieved pixels and replaces the anchor with that value.
What this means, in your case, is that you drag a [1] matrix over the image, check whether the source pixel brightness is larger than, equal to, or smaller than itself, and then replace it with itself.
If your intention was to remove "noise", then it's probably better to use a rectangular kernel over the image. Think of it this way: "noise" is that thing that "doesn't fit in" with its surroundings. So if you compare your centre pixel with its surroundings and you find it doesn't fit, it's most likely noise.
Additionally, I've said it replaces the anchor with the minimal value retrieved by the kernel. Numerically, the minimal value is 0, which is coincidentally how black is represented in the image. This means that in your case of a predominantly white image, erosion would "bloat up" the black pixels. Erosion would replace the 255-valued white pixels with 0-valued black pixels if they're within reach of the kernel. In any case it shouldn't be of shape (1,1), ever.
>>> cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
array([[0, 1, 0],
[1, 1, 1],
[0, 1, 0]], dtype=uint8)
If we erode the second image with a 3x3 rectangular kernel we get the image below.
Ok, now that we've got that out of the way, the next thing you do is find edges using Canny edge detection. The image you get from that is:
Ok, now we look for EXACTLY vertical and EXACTLY horizontal lines ONLY. Of course there are no such lines apart from the meridian on the left of the image (is that what it's called?), and the end image you get after doing it right would be this:
Now since you never described your exact idea, and my best guess is that you want the parallels and meridians, you'll have more luck on maps with lesser scale because those aren't lines to begin with, they are curves. Additionally, is there a specific reason to use the probabilistic Hough? Doesn't the "regular" Hough suffice?
Sorry for the too-long post, hope it helps a bit.
The text below was added Nov. 24th in response to a request for clarification from the OP, because there's no way to fit the answer into a character-limited comment.
I'd suggest the OP asks a new question more specific to the detection of curves, because you are dealing with curves, OP, not horizontal and vertical lines.
There are several ways to detect curves but none of them are easy. In the order of simplest-to-implement to hardest:
Use the RANSAC algorithm. Develop a formula describing the nature of the long. and lat. lines depending on the map in question. I.e. latitude curves will be almost perfectly straight lines on the map when you're near the equator, with the equator being the perfectly straight line, but will be very curved, resembling circle segments, when you're at high latitudes (near the poles). SciPy already has RANSAC implemented as a class; all you have to do is find and then programmatically define the model you want to fit to the curves (see the sketch after this list). Of course there's the ever-useful 4dummies text here. This is the easiest because all you have to do is the math.
A bit harder to do would be to create a rectangular grid and then try to use cv findHomography to warp the grid into place on the image. For various geometric transformations you can do to the grid you can check out OpenCv manual. This is sort of a hack-ish approach and might work worse than 1. because it depends on the fact that you can re-create a grid with enough details and objects on it that cv can identify the structures on the image you're trying to warp it to. This one requires you to do similar math to 1. and just a bit of coding to compose the end solution out of several different functions.
To actually do it. There are mathematically neat ways of describing curves as a list of tangent lines on the curve. You can try to fit a bunch of shorter HoughLines to your image or image segment and then try to group all found lines and determine, by assuming that they're tangents to a curve, whether they really follow a curve of the desired shape or are random. See this paper on the matter. Out of all the approaches this one is the hardest because it requires quite a bit of solo coding and some math about the method.
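As a quick, hedged illustration of option 1: the text above mentions SciPy, but skimage.measure.ransac is one readily available RANSAC implementation. The sketch below fits its built-in straight-line model to synthetic edge points; for real latitude curves you would define your own model class, which is where the actual work lies.
import numpy as np
from skimage.measure import LineModelND, ransac
# synthetic stand-in for edge points; in practice use e.g. np.argwhere(edges)[:, ::-1]
rng = np.random.default_rng(0)
x = np.arange(100)
edge_points = np.column_stack([x, 0.5 * x + rng.normal(0, 1, 100)])
# robustly fit a line; swap LineModelND for a custom circle/curve model as needed
model, inliers = ransac(edge_points, LineModelND,
                        min_samples=2, residual_threshold=1, max_trials=1000)
print(model.params)                    # (origin point, direction vector)
print(inliers.sum(), 'inliers out of', len(edge_points))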
There could be easier ways; I've never actually had to deal with curve detection before. Maybe there are tricks to do it more easily, I don't know. If you ask a new question, one that hasn't been closed as answered already, you might have more people notice it. Do make sure to ask a full and complete question on the exact topic you're interested in. People won't usually spend so much time writing on such a broad topic.
To show you what you can do with just the Hough transform, check out the code below:
import cv2
import numpy as np
def draw_lines(hough, image, nlines):
n_x, n_y=image.shape
#convert to color image so that you can see the lines
draw_im = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
for (rho, theta) in hough[0][:nlines]:
try:
x0 = np.cos(theta)*rho
y0 = np.sin(theta)*rho
pt1 = ( int(x0 + (n_x+n_y)*(-np.sin(theta))),
int(y0 + (n_x+n_y)*np.cos(theta)) )
pt2 = ( int(x0 - (n_x+n_y)*(-np.sin(theta))),
int(y0 - (n_x+n_y)*np.cos(theta)) )
alph = np.arctan( (pt2[1]-pt1[1])/( pt2[0]-pt1[0]) )
alphdeg = alph*180/np.pi
#OpenCv uses weird angle system, see: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html
if abs( np.cos( alph - 180 )) > 0.8: #0.995:
cv2.line(draw_im, pt1, pt2, (255,0,0), 2)
if rho>0 and abs( np.cos( alphdeg - 90)) > 0.7:
cv2.line(draw_im, pt1, pt2, (0,0,255), 2)
except:
pass
cv2.imwrite("/home/dino/Desktop/3HoughLines.png", draw_im,
[cv2.IMWRITE_PNG_COMPRESSION, 12])
img = cv2.imread('a.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
flag,b = cv2.threshold(gray,160,255,cv2.THRESH_BINARY)
cv2.imwrite("1tresh.jpg", b)
element = np.ones((3,3))
b = cv2.erode(b,element)
cv2.imwrite("2erodedtresh.jpg", b)
edges = cv2.Canny(b,10,100,apertureSize = 3)
cv2.imwrite("3Canny.jpg", edges)
hough = cv2.HoughLines(edges, 1, np.pi/180, 200)
draw_lines(hough, b, 100)
As you can see from the image below, straight lines are only the longitudes. Latitudes are not as straight; therefore, for each latitude you have several detected lines that behave like tangents on the line. Blue lines are drawn by the if abs( np.cos( alph - 180 )) > 0.8: condition, while the red lines are drawn by the rho>0 and abs( np.cos( alphdeg - 90)) > 0.7 condition. Pay close attention when comparing the original image with the image with lines drawn on it. The resemblance is uncanny (heh, get it?), but because they're not lines a lot of it only looks like junk (especially that highest detected latitude line that seems too "angled", but in reality those lines make a perfect tangent to the latitude line at its thickest point, just as the Hough algorithm demands). Acknowledge that there are limitations to detecting curves with a line detection algorithm.
Here is an example of the kinds of images I'll be dealing with:
(source: csverma at pages.cs.wisc.edu)
There is one bright spot on each ball. I want to locate the coordinates of the centre of the bright spot. How can I do it in Python or Matlab? The problem I'm having right now is that more than one point on the spot has the same (or roughly the same) white colour, but what I need is to find the centre of this 'cluster' of white points.
Also, for the leftmost and rightmost images, how can I find the centre of the whole circular object?
You can simply threshold the image and find the average coordinates of what is remaining. This handles the case when there are multiple values that have the same intensity. When you threshold the image, there will obviously be more than one bright white pixel, so if you want to bring it all together, find the centroid or the average coordinates to determine the centre of all of these white bright pixels. There isn't a need to filter in this particular case. Here's something to go with in MATLAB.
I've read in that image directly, converted to grayscale and cleared off the white border that surrounds each of the images. Next, I split up the image into 5 chunks, threshold the image, find the average coordinates that remain and place a dot on where each centre would be:
im = imread('http://pages.cs.wisc.edu/~csverma/CS766_09/Stereo/callight.jpg');
im = rgb2gray(im);
im = imclearborder(im);
%// Split up images and place into individual cells
split_point = floor(size(im,2) / 5);
images = mat2cell(im, size(im,1), split_point*ones(5,1));
%// Show image to place dots
imshow(im);
hold on;
%// For each image...
for idx = 1 : 5
%// Get image
img = images{idx};
%// Threshold
thresh = img > 200;
%// Find coordinates of thresholded image
[y,x] = find(thresh);
%// Find average
xmean = mean(x);
ymean = mean(y);
%// Place dot at centre
%// Make sure you offset by the right number of columns
plot(xmean + (idx-1)*split_point, ymean, 'r.', 'MarkerSize', 18);
end
I get this:
If you want a Python solution, I recommend using scikit-image combined with numpy and matplotlib for plotting. Here's the above code transcribed in Python. Note that I saved the image referenced by the link manually on disk and named it balls.jpg:
import skimage.io
import skimage.segmentation
import numpy as np
import matplotlib.pyplot as plt
# Read in the image
# Note - intensities are floating point from [0,1]
im = skimage.io.imread('balls.jpg', True)
# Threshold the image first then clear the border
im_clear = skimage.segmentation.clear_border(im > (200.0/255.0))
# Determine where to split up the image
split_point = int(im.shape[1]/5)
# Show image in figure and hold to place dots in
plt.figure()
plt.imshow(np.dstack([im,im,im]))
# For each image...
for idx in range(5):
# Extract sub image
img = im_clear[:,idx*split_point:(idx+1)*split_point]
# Find coordinates of thresholded image
y,x = np.nonzero(img)
# Find average
xmean = x.mean()
ymean = y.mean()
# Plot on figure
plt.plot(xmean + idx*split_point, ymean, 'r.', markersize=14)
# Show image and make sure axis is removed
plt.axis('off')
plt.show()
We get this figure:
Small sidenote
I could have totally skipped the above code and used regionprops (MATLAB link, scikit-image link). You could simply threshold the image, then apply regionprops to find the centroids of each cluster of white pixels, but I figured I'd show you a more manual way so you can appreciate the algorithm and understand it for yourself.
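For reference, here's a quick sketch of that shortcut in Python with scikit-image; 'balls.jpg' and the 200/255 threshold follow the code above (border clearing omitted for brevity):
import skimage.io
import skimage.measure
import matplotlib.pyplot as plt
im = skimage.io.imread('balls.jpg', True)
labels = skimage.measure.label(im > (200.0/255.0))
plt.imshow(im, cmap='gray')
for region in skimage.measure.regionprops(labels):
    cy, cx = region.centroid                 # centroid comes back as (row, col)
    plt.plot(cx, cy, 'r.', markersize=14)
plt.axis('off')
plt.show()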
Hope this helps!
Use a 2D convolution and then find the point with the highest intensity. You can apply a convex non-linear function (such as exp) to the intensity values before applying the 2D convolution, to intensify the bright spots relative to the dimmer parts of the image. Something like conv2(exp(img), ker).
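A hedged Python sketch of this idea (the kernel size is arbitrary, and 'balls.jpg' is the image from the question; on the whole image this finds only the single brightest spot, so crop per ball as above to get one spot each):
import numpy as np
import skimage.io
from scipy.signal import convolve2d
im = skimage.io.imread('balls.jpg', True)       # grayscale, values in [0, 1]
ker = np.ones((15, 15)) / 225.0                 # simple box (averaging) kernel
response = convolve2d(np.exp(im), ker, mode='same')
# location of the strongest smoothed response = centre of the bright spot
y, x = np.unravel_index(np.argmax(response), response.shape)
print('brightest spot at', (x, y))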