I am working with 2D convex polygons and have two sets of points surrounding each one; let's call the neighborhood closest to the polygon A and the one further out B. These sets of points are offsets of the original polygon. The offsetting itself was done using Shapely in Python (which admittedly does not do a great job; if anyone knows of any alternatives, please do comment).
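For context, here is a minimal sketch of how such offsets can be produced with Shapely's buffer; the polygon and the two distances are made up for illustration:

import numpy as np
from shapely.geometry import Polygon

# Hypothetical convex polygon; the real vertices come from the application.
poly = Polygon([(0, 0), (4, 0), (5, 3), (2, 5), (-1, 2)])

# Positive distances grow the polygon outward; join_style=2 (mitre)
# keeps the corners sharp instead of rounding them off.
offset_a = poly.buffer(0.5, join_style=2)   # inner neighborhood A
offset_b = poly.buffer(1.0, join_style=2)   # outer neighborhood B

# Sample the offset boundaries to get the point sets A and B.
inner_points = np.asarray(offset_a.exterior.coords)
outer_points = np.asarray(offset_b.exterior.coords)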
What I would like to do now is, for each point in A, find the point in B that is closest to it in the normal direction of the polygon's edges. I've attached some images below that explain what I'm doing better than I can in words.
So far I've attempted to do this myself and get poor results. My approach is: for each point in A, find its two closest vertices using a nearest-neighbor search; these two points can be used to calculate a slope. I find the line perpendicular to this slope, anchored at the point of A I am currently iterating through. I then construct a bunch of points along this line and find the point in B that is closest to it. To make sure I am searching in the right direction, I actually find the two closest points in B and choose the one whose distance to the point I'm iterating through in A is smallest. This is done for all points in A.
A few shortcomings of this approach:
If a point lies exactly at a vertex, it fails and gives garbage.
If there is a long edge and the point is near the beginning or end of this edge, it might choose the wrong two vertices to calculate the slope.
This leads me to believe there must be a better way. Here is an image explaining what I would like to do:
The red dashed line is an example of the convex polygon I am working with. The black points represent the smaller offset, A, and the white points are the bigger offset, B. The green points are the points in B that my code currently identifies as the normal points, though they are clearly wrong. I have drawn in the blue arrows to show what I mean by the normal direction of each edge. As an example of the code's shortcomings, you can see how at the rightmost points, where two points lie exactly at a vertex, the code doesn't choose the points we would expect.
Here is a copy of the code itself:
import numpy as np
from scipy.spatial import KDTree

def Nearest_Points(inner_points, outer_points, vertices):
    step = 0.1
    near_points = []
    vertex_tree = KDTree(vertices)
    outer_tree = KDTree(outer_points)
    for point in inner_points:
        # Find the two vertices closest to the current inner point
        dist, index = vertex_tree.query(point, k=2)
        nearest_vertex1 = vertices[index[0]]
        nearest_vertex2 = vertices[index[1]]
        # Construct a line perpendicular to the edge, anchored at the point
        dy = nearest_vertex2[1] - nearest_vertex1[1]
        dx = nearest_vertex2[0] - nearest_vertex1[0]
        line_points = []
        if dy != 0:
            for i in range(-50, 51):
                x = point[0] + i * step
                y = -dx / dy * (x - point[0]) + point[1]
                line_points.append([x, y])
        else:
            # Horizontal edge: the perpendicular line is vertical
            for i in range(-50, 51):
                line_points.append([point[0], point[1] + step * i])
        # Find the two points in the outer neighborhood closest to the line
        dist2, index_2 = outer_tree.query(line_points, k=2)
        dist2_arr = np.asarray(dist2)
        min_1 = np.min(dist2_arr[:, 0])
        min_2 = np.min(dist2_arr[:, 1])
        for i in range(len(dist2)):
            if dist2[i][0] == min_1:
                index_2_1 = i
            if dist2[i][1] == min_2:
                index_2_2 = i
        near1 = outer_points[index_2[index_2_1][0]]
        near2 = outer_points[index_2[index_2_2][1]]
        # Of these two points, keep the one closest to the current inner point
        near1_dist = (near1[0] - point[0])**2 + (near1[1] - point[1])**2
        near2_dist = (near2[0] - point[0])**2 + (near2[1] - point[1])**2
        if near1_dist < near2_dist:
            near_points.append(near1)
        else:
            near_points.append(near2)
    return near_points
Thank you.
I am trying to triangulate a number of polygons such that the triangles do not add extra points. For the sake of keeping the question short I will be using two circles, one inside the other; in reality these will be OpenCV contours, but the translation between the two is quite complex, and the circles show the problem just as well.
So I have the following code (based on the example) to first build the circles and then triangulate them with the triangle library:
import matplotlib.pyplot as plt
import numpy as np
import triangle as tr

def circle(N, R):
    i = np.arange(N)
    theta = i * 2 * np.pi / N
    pts = np.stack([np.cos(theta), np.sin(theta)], axis=1) * R
    seg = np.stack([i, i + 1], axis=1) % N
    return pts, seg

pts0, seg0 = circle(30, 1.4)
pts1, seg1 = circle(16, 0.6)
pts = np.vstack([pts0, pts1])
seg = np.vstack([seg0, seg1 + seg0.shape[0]])
print(pts)
print(seg)

A = dict(vertices=pts, segments=seg, holes=[[0, 0]])
B = tr.triangulate(A)  # note that the original example uses 'qpa0.05' here
tr.compare(plt, A, B)
plt.show()
Now this causes both the outer and inner circles to get triangulated, as shown here, clearly ignoring the hole. However, by setting the 'qpa0.05' flag we can make the triangulation respect the hole, as seen here. But doing this causes the triangles to be split, adding many extra triangles; increasing the qpa value does reduce the number of triangles somewhat, but they remain.
Note that I want to be able to handle multiple holes in the same shape, and that shapes might end up being concave.
Anybody know how to get the triangulation to use the holes without adding extra triangles?
You can connect the hole (or holes) to the exterior perimeter so that you get a single "degenerate polygon" defined by a single point sequence that connects all points without self-intersection.
You go in and out through the same segment. If you follow the outer perimeter clockwise, you need to follow the hole perimeter counterclockwise, or vice versa; otherwise it would self-intersect.
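A minimal sketch of that construction (the function name is mine), assuming the two rings have opposite orientations and that the bridge segment between outer_pts[0] and hole_pts[0] does not cross either ring:

import numpy as np

def keyhole(outer_pts, hole_pts):
    # Walk the outer ring, return to its bridge point, bridge into the
    # hole, walk the hole ring, and close it at its bridge point; the
    # combined ring then closes back to outer_pts[0] through the same
    # bridge segment it entered by.
    return np.vstack([
        outer_pts,        # around the outer perimeter
        outer_pts[:1],    # back to the bridge point on the outer ring
        hole_pts,         # bridge in, then around the hole
        hole_pts[:1],     # close the hole loop at the bridge point
    ])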
I figured it out:
the 'qpa0.05' should have been just a 'p'. The 'p' makes the code take the holes into account, while the 'a' sets the maximum area for the triangles, and it is the area constraint that causes the extra points to be added.
B = tr.triangulate(A, 'p')
I am trying to "shrink" the size of the outer boundary (in red) that is a set of pixels along the edge of an image. I already shrunk the boundary using normal vectors, and I have a set of points that are a certain distance inside the outer boundary. However, there are a lot of points that I could get rid of to make the inner boundary (in blue) smoother. So, how do I get rid of the extraneous points?
I tried converting everything to polar coordinates and then taking the point that is closest to the center of the image for each interval of pi/60 radians from 0 to 2*pi. However, I realized that this wouldn't work for something like a boomerang-ish shape, where there would be two points that I would want to keep within one pi/60 interval.
polarShrunk is the set of shrunk points converted to polar coordinates.
cell_img is the object with a distanceBetween method, which finds the distance between two points, and a findMoments method, which finds the "center of mass" of the image when it is binarized to black and white. pol2cart converts polar coordinates to Cartesian coordinates.
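Since the angle test in the loop below depends on how each polar point is stored, here are hypothetical stand-ins for those helpers showing the convention assumed in the fixed code (the real methods live on cell_img):

import numpy as np

# Assumed convention: each polar point is stored as [radius, angle].
def pol2cart(p):
    r, theta = p
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def distanceBetween(p, q):
    return float(np.hypot(*(np.asarray(p) - np.asarray(q))))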
import numpy as np

smoothArray = []
for i in range(1, 121):
    theta1 = (i - 1) * np.pi / 60
    theta2 = i * np.pi / 60
    # Collect the points whose angle falls in this pi/60 wedge
    # (both bounds must test the same, angular, component of the point)
    simAnglePoints = [y for y in polarShrunk if theta1 <= y[1] < theta2]
    if not simAnglePoints:
        continue
    # Of those, keep the point closest to the image's center of mass
    closestPoint = None
    closestDist = float('inf')
    for z in simAnglePoints:
        d = cell_img.distanceBetween(np.array(cell_img.pol2cart(z)),
                                     cell_img.findMoments())
        if d < closestDist:
            closestDist = d
            closestPoint = z
    smoothArray.append(closestPoint)
I want only the points that follow the black line that I drew over the picture, and to get rid of all of the other points from the polarShrunk array.
Any help would be appreciated! I really only need a way to approach this problem, and I would like to figure out how to implement that approach in Python on my own.
Good morning all!
I have to calculate the area of a polygon using Python.
The formula to do that is given by (sorry, can't post pictures yet..)
area = ((x0*y1 - y0*x1) + (x1*y2 - y1*x2) + ... + (x(n-1)*y0 - y(n-1)*x0)) / 2
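As a quick sanity check, for the counter-clockwise triangle (0,0), (1,0), (0,1) the sum is (0*0 - 0*1) + (1*1 - 0*0) + (0*0 - 1*0) = 1, so the area comes out to 1/2, as expected.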
This is the code I came up with. However, it results in a negative value (of the correct magnitude), and I have no idea why.
Would it be valid to simply multiply by -0.5 instead, or is there something wrong with my code?
Any help is greatly appreciated!!
polygon = [[0,0],[-1,5],[2,3],[1,5],[3,6],[4,5],[5,3],[8,-2],[4,-4],[2,-5]]

area = 0.0
n = len(polygon)
for i in range(n):
    i1 = (i + 1) % n
    area += polygon[i][0]*polygon[i1][1] - polygon[i1][0]*polygon[i][1]
area *= 0.5
print('area =', area)
The formula works by computing a sum of the cross products of each pair of vectors from the origin to the two ends of each line segment composing the polygon. In essence, the area is computed as the difference between the areas of the green and red triangles in the picture below. (Note that the red triangles are partially underneath the green ones.)
The sign of the cross product depends on the orientation of the vectors, i.e. whether the second vector is reached from the first by turning left or right. Therefore, you will get either a negative or a positive area depending on whether the points of the polygon are ordered clockwise or counter-clockwise. The solution is correct, but you need to apply the abs() function to the result, since you don't need the sign.
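The same signed sum can also be written compactly with NumPy; a minimal sketch (the function name is mine):

import numpy as np

def shoelace_area(polygon):
    # Signed area: positive for counter-clockwise vertex order,
    # negative for clockwise
    p = np.asarray(polygon, dtype=float)
    x, y = p[:, 0], p[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)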
The sign of the final answer is based on the orientation of the polygon. You can check this by reversing the list of points in the given example.
polygon = [[0,0],[-1,5],[2,3],[1,5],[3,6],[4,5],[5,3],[8,-2],[4,-4],[2,-5]]
polygon.reverse()
...
In this case you'll find the area to be positive, though it is essentially the same polygon.
You can read more about why orientation makes area negative here.
You simply need to take the absolute value of the final result.
print('area =', abs(area))
I have a historical time sequence of seafloor images scanned from film that need registration.
from pylab import *
import cv2
from urllib.request import urlretrieve

urlretrieve('http://geoport.whoi.edu/images/frame014.png', 'frame014.png')
urlretrieve('http://geoport.whoi.edu/images/frame015.png', 'frame015.png')

gray1 = cv2.imread('frame014.png', 0)
gray2 = cv2.imread('frame015.png', 0)

figure(figsize=(14, 6))
subplot(121); imshow(gray1, cmap=cm.gray)
subplot(122); imshow(gray2, cmap=cm.gray)
I want to use the black region on the left of each image to do the registration, since that region was inside the camera and should be fixed in time. So I just need to compute the affine transformation between the black regions.
I determined these regions by thresholding and finding the largest contour:
def find_biggest_contour(gray, threshold=40):
    # threshold a grayscale image
    ret, thresh = cv2.threshold(gray, threshold, 255, 1)
    # find the contours
    contours, h = cv2.findContours(thresh, mode=cv2.RETR_LIST, method=cv2.CHAIN_APPROX_NONE)
    # measure the perimeter of each contour
    perim = [cv2.arcLength(cnt, True) for cnt in contours]
    # return the contour with the largest perimeter
    i = perim.index(max(perim))
    return contours[i]

c1 = find_biggest_contour(gray1)
c2 = find_biggest_contour(gray2)

x1 = c1[:, 0, 0]; y1 = c1[:, 0, 1]
x2 = c2[:, 0, 0]; y2 = c2[:, 0, 1]
figure(figsize=(8,8))
imshow(gray1,cmap=cm.gray, alpha=0.5);plot(x1,y1,'b-')
imshow(gray2,cmap=cm.gray, alpha=0.5);plot(x2,y2,'g-')
axis([0,1500,1000,0]);
The blue is the longest contour from the 1st frame, the green is the longest contour from the 2nd frame.
What is the best way to determine the rotation and offset between the blue and green contours?
I only want to use the right side of the contours in some region surrounding the step, something like the region between the arrows.
Of course, if there is a better way to register these images, I'd love to hear it. I already tried a standard feature matching approach on the raw images, and it didn't work well enough.
Following Shambool's suggested approach, here's what I've come up with. I used a Ramer-Douglas-Peucker algorithm to simplify the contour in the region of interest and identified the two turning points. I was going to use the two turning points to get my three unknowns (xoffset, yoffset and angle of rotation), but the 2nd turning point is a bit too far toward the right because RDP simplified away the smoother curve in this region. So instead I used the angle of the line segment leading up to the 1st turning point. Differencing this angle between image1 and image2 gives me the rotation angle. I'm still not completely happy with this solution. It worked well enough for these two images, but I'm not sure it will work well on the entire image sequence. We'll see.
It would really be better to fit the contour to the known shape of the black border.
# select region of interest from largest contour
ind1=where((x1>190.) & (y1>200.) & (y1<900.))[0]
ind2=where((x2>190.) & (y2>200.) & (y2<900.))[0]
figure(figsize=(10,10))
imshow(gray1,cmap=cm.gray, alpha=0.5);plot(x1[ind1],y1[ind1],'b-')
imshow(gray2,cmap=cm.gray, alpha=0.5);plot(x2[ind2],y2[ind2],'g-')
axis([0,1500,1000,0])
import math

def angle(x1, y1):
    # Returns the angle of each segment along an (x, y) track
    return array([math.atan2(y, x) for (y, x) in zip(diff(y1), diff(x1))])

def simplify(x, y, tolerance=40, min_angle=60.*pi/180.):
    """
    Use the Ramer-Douglas-Peucker algorithm to simplify the path
    http://en.wikipedia.org/wiki/Ramer-Douglas-Peucker_algorithm
    Python implementation: https://github.com/sebleier/RDP/
    """
    from RDP import rdp
    points = vstack((x, y)).T
    simplified = array(rdp(points.tolist(), tolerance))
    sx, sy = simplified.T
    theta = abs(diff(angle(sx, sy)))
    # Select the indices of the points with the greatest theta;
    # large theta is associated with the greatest change in direction.
    idx = where(theta > min_angle)[0] + 1
    return sx, sy, idx
sx1,sy1,i1 = simplify(x1[ind1],y1[ind1])
sx2,sy2,i2 = simplify(x2[ind2],y2[ind2])
fig = plt.figure(figsize=(10,6))
ax =fig.add_subplot(111)
ax.plot(x1, y1, 'b-', x2, y2, 'g-',label='original path')
ax.plot(sx1, sy1, 'ko-', sx2, sy2, 'ko-',lw=2, label='simplified path')
ax.plot(sx1[i1], sy1[i1], 'ro', sx2[i2], sy2[i2], 'ro',
markersize = 10, label='turning points')
ax.invert_yaxis()
plt.legend(loc='best')
# determine x,y offset between 1st turning points, and
# angle from difference in slopes of line segments approaching 1st turning point
xoff = sx2[i2[0]] - sx1[i1[0]]
yoff = sy2[i2[0]] - sy1[i1[0]]
iseg1 = [i1[0]-1, i1[0]]
iseg2 = [i2[0]-1, i2[0]]
ang1 = angle(sx1[iseg1], sy1[iseg1])
ang2 = angle(sx2[iseg2], sy2[iseg2])
ang = -(ang2[0] - ang1[0])
print(xoff, yoff, ang*180./pi)   # offsets in pixels, rotation in degrees
-28 14 0.514485
# 2x3 affine matrix M
M=array([cos(ang),sin(ang),xoff,-sin(ang),cos(ang),yoff]).reshape(2,3)
print(M)
[[ 9.99959685e-01 8.97932821e-03 -2.80000000e+01]
[ -8.97932821e-03 9.99959685e-01 1.40000000e+01]]
# warp 2nd image into coordinate frame of 1st
Minv = cv2.invertAffineTransform(M)
gray2b = cv2.warpAffine(gray2,Minv,shape(gray2.T))
figure(figsize=(10,10))
imshow(gray1,cmap=cm.gray, alpha=0.5);plot(x1[ind1],y1[ind1],'b-')
imshow(gray2b,cmap=cm.gray, alpha=0.5);
axis([0,1500,1000,0]);
title('image1 and transformed image2 overlain with 50% transparency');
Good question.
One approach is to represent the contours as 2D point clouds and then do registration.
Simpler and clearer Matlab code that can give you the affine transform.
And more complex C++ code (using the VXL lib), with Python and Matlab wrappers included.
Or you can use a modified ICP (iterative closest point) algorithm that is robust to noise and can handle an affine transform.
Also, your contours seem not to be very accurate, which can be a problem.
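To make the ICP idea concrete, here is a bare-bones point-to-point rigid ICP sketch (my own illustration, not a robust or affine variant; those replace the SVD step with a robust or affine fit):

import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(src, dst, iters=50):
    # src, dst: (N, 2) and (M, 2) point arrays
    src = np.asarray(src, dtype=float).copy()
    dst = np.asarray(dst, dtype=float)
    R_total, t_total = np.eye(2), np.zeros(2)
    tree = cKDTree(dst)
    for _ in range(iters):
        # Pair each source point with its nearest destination point
        _, idx = tree.query(src)
        matched = dst[idx]
        # Best rigid transform for these pairs via the Kabsch/SVD method
        mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total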
Another approach is to use some kind of registration that works on pixel values.
Matlab code (I think it uses some kind of minimizer plus a cross-correlation metric).
Also, maybe some kind of optical flow registration (or another kind), as used in medical imaging, would work.
Also, you can use point features such as SIFT (or SURF).
You can try it quickly in FIJI (ImageJ);
also this link.
Open the 2 images
Plugins -> Feature Extraction -> SIFT (or other)
Set the expected transformation to affine
Look at the estimated transformation model (a [3,3] homography matrix) in the ImageJ log.
If it works well, you can implement it in Python using OpenCV, or maybe using Jython with ImageJ.
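A rough sketch of what that could look like in OpenCV (assuming a recent OpenCV build where SIFT is available; the match count of 50 is an arbitrary choice):

import cv2
import numpy as np

gray1 = cv2.imread('frame014.png', 0)
gray2 = cv2.imread('frame015.png', 0)

# Detect SIFT keypoints and descriptors in both frames
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(gray1, None)
kp2, des2 = sift.detectAndCompute(gray2, None)

# Brute-force match the descriptors and keep the 50 best matches
bf = cv2.BFMatcher()
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)[:50]
src = np.float32([kp1[m.queryIdx].pt for m in matches])
dst = np.float32([kp2[m.trainIdx].pt for m in matches])

# Robustly estimate rotation + translation (+ uniform scale)
M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
print(M)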
It would also be better if you posted the original images and described all the conditions (it seems that the image content changes between frames).
You can represent these contours by their respective ellipses. These ellipses are centered on the centroid of the contour and oriented along the main density axis. You can compare the centroids and the orientation angles.
1) Fill the contours => drawContours with thickness=CV_FILLED
2) Find moments => cvMoments()
3) And use them.
Centroid: {x, y} = {M10/M00, M01/M00}
Orientation (theta): theta = 0.5 * arctan(2*XY / (XX - YY))
EDIT: I customized the sample code from legacy (enteringblobdetection.cpp) for your case.
/* Image moments */
double M00,X,Y,XX,YY,XY;
CvMoments m;
CvRect r = ((CvContour*)cnt)->rect;
CvMat mat;
cvMoments( cvGetSubRect(pImgFG,&mat,r), &m, 0 );
M00 = cvGetSpatialMoment( &m, 0, 0 );
X = cvGetSpatialMoment( &m, 1, 0 )/M00;
Y = cvGetSpatialMoment( &m, 0, 1 )/M00;
XX = (cvGetSpatialMoment( &m, 2, 0 )/M00) - X*X;
YY = (cvGetSpatialMoment( &m, 0, 2 )/M00) - Y*Y;
XY = (cvGetSpatialMoment( &m, 1, 1 )/M00) - X*Y;
/* Contour description */
CvPoint myCentroid(r.x+(float)X,r.y+(float)Y);
double myTheta = atan( 2*XY/(XX-YY) );
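For reference, a minimal Python sketch of the same computation using cv2.moments (the function name is mine; cnt is a contour as returned by cv2.findContours):

import cv2
import numpy as np

def contour_orientation(cnt):
    m = cv2.moments(cnt)
    # Centroid from the raw spatial moments
    x = m['m10'] / m['m00']
    y = m['m01'] / m['m00']
    # Normalized central second moments
    xx = m['mu20'] / m['m00']
    yy = m['mu02'] / m['m00']
    xy = m['mu11'] / m['m00']
    # Principal-axis orientation
    theta = 0.5 * np.arctan2(2 * xy, xx - yy)
    return (x, y), theta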
Also, check this with OpenCV 2.0 examples.
If you don't want to find the homography between the two images but an affine transformation instead, you have three unknowns: the rotation angle (R) and the displacements in x and y (X, Y). Therefore a minimum of two points (with two known values each) is needed to find the unknowns. You can match either two points between the two images or two lines, each of which carries two known values: the intercept and the slope. If you go with the point-matching approach, the further the points are from each other, the more robust the found transform is to noise (this is very simple to see if you remember the error-propagation rules).
In the two-point matching method (sketched in code after this list):
find two points (A and B) in the first image I1 and their corresponding points (A', B') in the second image I2
find the middle point between A and B, call it C, and the middle point between A' and B', call it C'
the difference between C and C' (C - C') gives the translation between the images (X and Y)
using the dot product of C-A and C'-A' you can find the rotation angle (R)
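A minimal sketch of those steps (names are mine; atan2 of each vector is used instead of a raw dot product so that the sign of the angle is preserved):

import numpy as np

def two_point_affine(A, B, A2, B2):
    # A, B are matched points in image 1; A2, B2 their matches in image 2
    A, B, A2, B2 = (np.asarray(p, dtype=float) for p in (A, B, A2, B2))
    C = (A + B) / 2.0            # midpoint in image 1
    C2 = (A2 + B2) / 2.0         # midpoint in image 2
    translation = C2 - C         # (X, Y)
    # Rotation: angle between the vectors C->A and C2->A2
    v1, v2 = A - C, A2 - C2
    R = np.arctan2(v2[1], v2[0]) - np.arctan2(v1[1], v1[0])
    return translation, R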
To detect robust points, I would find the points along the side of the contour you have found with the highest absolute value of the second derivative (Hessian) and then try to match them. Since you mentioned this is video footage, you can safely assume the transformation between any two consecutive frames is small, which lets you reject outliers.
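A sketch of that point selection, using a simple second-difference proxy for curvature (the function name and the choice of k are mine):

import numpy as np

def high_curvature_points(x, y, k=2):
    # Second differences along the contour as a curvature proxy
    d2x = np.gradient(np.gradient(np.asarray(x, dtype=float)))
    d2y = np.gradient(np.gradient(np.asarray(y, dtype=float)))
    mag = np.hypot(d2x, d2y)
    # Indices of the k points with the largest curvature magnitude
    return np.argsort(mag)[-k:]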