Fit Curve-Spline to 3D Point Cloud - python

Objective
I have a 3D facet model (e.g. an .off file) which can, for example, look like a pipe/tube (see the example picture). The goal is to derive an approximate spline (ideally a combination of line segments and splines) that represents the 3D skeleton of this tube, using Python.
State of the art
Stack Overflow posts in the same field:
how-to-fit-a-line-through-a-3d-pointcloud
General:
Fitting-Spline-Curves-through-Set-of-Unorganized Point Cloud
Skeletons from point cloud
Powercrust / NN-Crust? I read about those; however, I can't find a Python implementation and I have not managed to implement them on my own. As far as I understood, the basis is Delaunay/Voronoi, which I know already, but I don't know the further steps.
My Approach (so far)
Starting from the example facet model (Image 1), I used a Python package to convert the 3D model to a point cloud (Image 2). This point cloud can be used for a voxelized representation (Image 3). Consequently, these three types of data are my starting point.
Basically, this problem does not seem too complicated to me; however, I am missing a starting point for the logic. Most of the research papers overcomplicate this for various further-reaching tasks.
One idea would be to run a PCA to derive the major axes of the component and scan along these axes. However, this doesn't appear to lead to good results in a performant way.
Another idea would be to use the voxelized grid and detect a path via voxel adjacencies.
Another idea would be to use a KD-tree to evaluate closest points, in order to detect the correct planes for defining the spline direction via their plane normals.
An approach that I tried was to select N random points from the point cloud and search for all neighbors within a radius (cKDTree.query_ball_point). I calculated the center of all neighboring points, which leads to the result in Image 4. The result seems good as a first approach; however, it essentially comes down to tuning the radius parameter.
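A minimal sketch of this neighborhood-averaging attempt (assuming cloud is the (N, 3) point-cloud array from Image 2; the radius value is the tuning parameter mentioned above):
import numpy as np
from scipy.spatial import cKDTree

radius = 5.0  # tuning parameter: roughly the tube radius
tree = cKDTree(cloud)
sample_idx = np.random.choice(len(cloud), size=200, replace=False)
centers = []
for i in sample_idx:
    neighbors = tree.query_ball_point(cloud[i], r=radius)
    centers.append(cloud[neighbors].mean(axis=0))  # local centroid, roughly on the skeleton
centers = np.array(centers)  # approximate skeleton samples (Image 4)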
Image 1:
Image 2:
Image 3:
Image 4:

Delaunay/Voronoi methods can be used for this problem, since the medial axis is a sub-graph of the Voronoi graph
(see for example this paper by Attali, Boissonnat and Edelsbrunner).
In the following I will demonstrate the methods on an example of points sampled from a quarter torus surface with small radius 10 and large radius 100 (the medial path/skeleton starts at point (100, 0, 0) and ends at (0, 100, 0)).
The Voronoi graph is the dual of the 3D Delaunay tetrahedralization (I will from now on use the term triangulation for this).
Computing the Delaunay triangulation can be done using scipy's scipy.spatial.Delaunay class.
Below is a figure of the sample points (200 in this example) and their full Delaunay triangulation
(the triangulation was plotted with the function from here).
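For reference, a minimal sketch of this setup (the sampling scheme and the names pts and dt are my assumptions; the figures use 200 such points):
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
n = 200
u = rng.uniform(0.0, np.pi / 2.0, n)  # angle along the large circle (quarter turn)
v = rng.uniform(0.0, 2.0 * np.pi, n)  # angle around the tube
R, r = 100.0, 10.0  # large and small torus radii
pts = np.column_stack(((R + r * np.cos(v)) * np.cos(u),
                       (R + r * np.cos(v)) * np.sin(u),
                       r * np.sin(v)))
dt = Delaunay(pts)  # 3D Delaunay tetrahedralization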
The Voronoi vertex corresponding to a Delaunay tetrahedron is the center of the circumscribing sphere of the tetrahedron.
The following is a function that computes these Delaunay centers; it is an extension of the 2D function from my previous answer here.
def compute_delaunay_tetra_circumcenters(dt):
    """
    Compute the centers of the circumscribing sphere of each tetrahedron in the Delaunay triangulation.
    :param dt: the Delaunay triangulation
    :return: array of xyz points
    """
    simp_pts = dt.points[dt.simplices]
    # (n, 4, 3) array of tetrahedra points where simp_pts[i, j, :] holds the j'th 3D point (of four) of the i'th tetrahedron
    assert simp_pts.shape[1] == 4 and simp_pts.shape[2] == 3

    # finding the circumcenter (x, y, z) of a simplex defined by four points:
    # (x-x0)**2 + (y-y0)**2 + (z-z0)**2 = (x-x1)**2 + (y-y1)**2 + (z-z1)**2
    # (x-x0)**2 + (y-y0)**2 + (z-z0)**2 = (x-x2)**2 + (y-y2)**2 + (z-z2)**2
    # (x-x0)**2 + (y-y0)**2 + (z-z0)**2 = (x-x3)**2 + (y-y3)**2 + (z-z3)**2
    # becomes three linear equations (squares are canceled):
    # 2(x1-x0)*x + 2(y1-y0)*y + 2(z1-z0)*z = (x1**2 + y1**2 + z1**2) - (x0**2 + y0**2 + z0**2)
    # 2(x2-x0)*x + 2(y2-y0)*y + 2(z2-z0)*z = (x2**2 + y2**2 + z2**2) - (x0**2 + y0**2 + z0**2)
    # 2(x3-x0)*x + 2(y3-y0)*y + 2(z3-z0)*z = (x3**2 + y3**2 + z3**2) - (x0**2 + y0**2 + z0**2)

    # building the 3x3 matrix of the linear equations
    a = 2 * (simp_pts[:, 1, 0] - simp_pts[:, 0, 0])
    b = 2 * (simp_pts[:, 1, 1] - simp_pts[:, 0, 1])
    c = 2 * (simp_pts[:, 1, 2] - simp_pts[:, 0, 2])
    d = 2 * (simp_pts[:, 2, 0] - simp_pts[:, 0, 0])
    e = 2 * (simp_pts[:, 2, 1] - simp_pts[:, 0, 1])
    f = 2 * (simp_pts[:, 2, 2] - simp_pts[:, 0, 2])
    g = 2 * (simp_pts[:, 3, 0] - simp_pts[:, 0, 0])
    h = 2 * (simp_pts[:, 3, 1] - simp_pts[:, 0, 1])
    i = 2 * (simp_pts[:, 3, 2] - simp_pts[:, 0, 2])

    # right-hand side of the linear system
    v1 = (simp_pts[:, 1, 0] ** 2 + simp_pts[:, 1, 1] ** 2 + simp_pts[:, 1, 2] ** 2) - (simp_pts[:, 0, 0] ** 2 + simp_pts[:, 0, 1] ** 2 + simp_pts[:, 0, 2] ** 2)
    v2 = (simp_pts[:, 2, 0] ** 2 + simp_pts[:, 2, 1] ** 2 + simp_pts[:, 2, 2] ** 2) - (simp_pts[:, 0, 0] ** 2 + simp_pts[:, 0, 1] ** 2 + simp_pts[:, 0, 2] ** 2)
    v3 = (simp_pts[:, 3, 0] ** 2 + simp_pts[:, 3, 1] ** 2 + simp_pts[:, 3, 2] ** 2) - (simp_pts[:, 0, 0] ** 2 + simp_pts[:, 0, 1] ** 2 + simp_pts[:, 0, 2] ** 2)

    # solve a 3x3 system by inversion (see https://en.wikipedia.org/wiki/Invertible_matrix#Inversion_of_3_%C3%97_3_matrices)
    A = e*i - f*h
    B = -(d*i - f*g)
    C = d*h - e*g
    D = -(b*i - c*h)
    E = a*i - c*g
    F = -(a*h - b*g)
    G = b*f - c*e
    H = -(a*f - c*d)
    I = a*e - b*d
    det = a*A + b*B + c*C

    # multiplying inv*[v1, v2, v3] to get solution point (x, y, z)
    x = (A*v1 + D*v2 + G*v3) / det
    y = (B*v1 + E*v2 + H*v3) / det
    z = (C*v1 + F*v2 + I*v3) / det
    return (np.vstack((x, y, z))).T
We would like to filter out tetrahedra that are outside of the original surface (see for example the long tetrahedra in the figure above).
This might be done by testing the tetrahedra against the original surface.
However, a simpler way, which is well suited to the input of a tube/pipe surface, is
to filter out tetrahedra that have a large circumscribing radius.
This is what the alpha-shape algorithm does.
This is easily done within our context since the radius is just the distance between the center and any of the tetrahedron points.
The following figure shows the Delaunay triangulation after filtering out tetrahedra with radius greater than 20.
We can now use these building blocks to build the Voronoi sub-graph of the tetrahedra that pass the radius condition.
The function below does this using the connectivity information in the Delaunay triangulation to construct the Voronoi sub-graph, represented as an edge list.
def compute_voronoi_vertices_and_edges(points, r_thresh=np.inf):
    """
    Compute (finite) Voronoi edges and vertices of a set of points.
    :param points: input points.
    :param r_thresh: radius value for filtering out vertices corresponding to
        Delaunay tetrahedrons with large radii of circumscribing sphere (alpha-shape condition).
    :return: array of xyz Voronoi vertex points and an edge list.
    """
    dt = Delaunay(points)
    xyz_centers = compute_delaunay_tetra_circumcenters(dt)

    # filtering tetrahedrons that have radius > thresh
    simp_pts_0 = dt.points[dt.simplices[:, 0]]
    radii = np.linalg.norm(xyz_centers - simp_pts_0, axis=1)
    is_in = radii < r_thresh

    # build an edge list from (filtered) tetrahedrons neighbor relations
    edge_lst = []
    for i in range(len(dt.neighbors)):
        if not is_in[i]:
            continue  # i is an outside tetra
        for j in dt.neighbors[i]:
            if j != -1 and is_in[j]:
                edge_lst.append((i, j))
    return xyz_centers, edge_lst
The result is still not sufficient as can be seen in the figure below, where the sub-graph edges are the black line segments.
The reason is that 3D Delaunay triangulations are notorious for having thin tetrahedra
(so-called slivers, needles and caps in this paper by Shewchuk),
which cause the outer "spikes" in the results.
While there are general methods to remove these unwanted spikes
(see, for example, Amenta and Bern),
in the context of a tube surface there is a simple solution.
The path we are looking for can be computed as the shortest Euclidean path in the graph starting at the closest point to the start of the tube and ending at the closest point to the end.
The following code does this using a networkx graph with weights set to the lengths of the edges.
from scipy.spatial import cKDTree
import networkx as nx

# get closest vertex to start and end points
xyz_centers, edge_lst = compute_voronoi_vertices_and_edges(pts, r_thresh=20.)
kdt = cKDTree(xyz_centers)
dist0, idx0 = kdt.query(np.array([100., 0, 0]))
dist1, idx1 = kdt.query(np.array([0, 100., 0]))

# compute shortest weighted path
edge_lengths = [np.linalg.norm(xyz_centers[e[0], :] - xyz_centers[e[1], :]) for e in edge_lst]
g = nx.Graph((i, j, {'weight': dist}) for (i, j), dist in zip(edge_lst, edge_lengths))
path_s = nx.shortest_path(g, source=idx0, target=idx1, weight='weight')
The figure below shows the results for the original 200 points.
And these are the results for a denser sample of 1000 points.
Now you can fit an approximating spline (interpolating or least-squares) through the path points.
You can use scipy.interpolate.UnivariateSpline as suggested in the link,
or scipy.interpolate.splrep as done here, or any other standard spline implementation.
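For example, a minimal sketch of this last step with a parametric least-squares spline (splprep/splev; the smoothing factor s is a tuning assumption):
from scipy.interpolate import splprep, splev

path_pts = xyz_centers[path_s]  # ordered (m, 3) skeleton path points
tck, _ = splprep(path_pts.T, s=10.0)  # smoothing parametric spline in 3D
uu = np.linspace(0.0, 1.0, 100)
skeleton = np.array(splev(uu, tck)).T  # (100, 3) points along the fitted spline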

Related

Why is the angle of a point (X, Y) w.r.t. the origin different after rotation than before in Python?

I have two questions. I have X and Y coordinates, which I have listed below. I have also plotted the coordinates, as shown in the picture below.
x = [0, 1, 1, 0, 0, 1]
y = [1, 1, 2, 2, 3, 3]
Now, I have decided to rotate the geometry clockwise. Therefore, I have rotated all points by 45 degrees (+ve) using the formula below.
x_dash = x[i] * math.cos(theta) + y[i] * math.sin(theta)
y_dash = -x[i] * math.sin(theta) + y[i] * math.cos(theta)
After using the above code (formula), I got the results below, which show the new coordinate points after the 45 degree clockwise rotation; plotting them gives the plot below.
x_dash = [0.8509035245341184, 1.3762255133518482, 2.2271290378859665, 1.7018070490682369, 2.552710573602355, 3.078032562420085]
y_dash = [0.5253219888177297, -0.3255815357163887, 0.19974045310134103, 1.0506439776354595, 1.575965966453189, 0.7250624419190707]
My questions:
(1) If I take the two coordinates (X and Y) of one point and find its angle using
theta = np.degrees(np.arctan2(y, x)),
I do not get a 45 degree difference.
For example:
np.degrees(np.arctan2(0.5253219888177297, 0.8509035245341184))
Result: 31.68992191129556
However, when I found the angle of the 1st point before rotation, np.degrees(np.arctan2(1, 0)), I got 90.0.
I would like to know why there is a difference between the angle of the same point before and after the rotation.
(2) If I have a rotated geometry like in the 2nd picture and I do not know the angle of rotation, what should I do to recover the geometry without rotation (like in the first picture)?
Kindly help me with these questions.
By default, math.sin() and math.cos() assume that their arguments are in radians. So the code treats the angle of rotation as 45 radians, not 45 degrees.
You can define theta as:
theta = numpy.radians(45)
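With that fix, the angle of the first point changes by exactly 45 degrees. A quick check, reusing the formula from the question:
import math
import numpy as np

theta = math.radians(45)  # 45 degrees expressed in radians
x, y = 0, 1  # the 1st point, at 90 degrees from the origin
x_dash = x * math.cos(theta) + y * math.sin(theta)
y_dash = -x * math.sin(theta) + y * math.cos(theta)
print(np.degrees(np.arctan2(y_dash, x_dash)))  # 45.0, i.e. 90 - 45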
Hope this clarifies everything.

I have pixel coordinates in one plane, am I able to transform them to a new plane using homography and then convert to real world coordinates?

I am using DeepSORT to gather the x and y coordinates of an object. I am using homography to get a zoomed-in bird's-eye view of a specific portion of the video. I know the real-world distances of the area I zoom into, and I want to know the real-world position and speed of the object as it moves through the identified area.
Here is my current code for the homographic transformation:
# points for tracking window
pt_A = [x_0, y_0]
pt_B = [x_1, y_1]
pt_C = [x_2, y_2]
pt_D = [x_3, y_3]
# euclidean distances between each point
width_AD = np.sqrt(((pt_A[0] - pt_D[0]) ** 2) + ((pt_A[1] - pt_D[1]) ** 2))
width_BC = np.sqrt(((pt_B[0] - pt_C[0]) ** 2) + ((pt_B[1] - pt_C[1]) ** 2))
max_width = max(int(width_AD), int(width_BC))
height_AB = np.sqrt(((pt_A[0] - pt_B[0]) ** 2) + ((pt_A[1] - pt_B[1]) ** 2))
height_CD = np.sqrt(((pt_C[0] - pt_D[0]) ** 2) + ((pt_C[1] - pt_D[1]) ** 2))
max_height = max(int(height_AB), int(height_CD))
input_pts = np.float32([pt_A, pt_B, pt_C, pt_D])
output_pts = np.float32([[0, 0], [0, max_height - 1], [max_width - 1, max_height - 1], [max_width - 1, 0]])
# Compute the perspective transform h_transform
h_transform = cv2.getPerspectiveTransform(input_pts, output_pts)
This h_transform warps the video into the orientation I want when used with warpPerspective. I now want to apply the transformation to the tracked points of objects moving through the area, and measure the position and speed of the objects in m/s using the known length and width of the area I zoom into.
This is one thing I tried to convert the points of each object into the new plane; would this be correct?
# Finally, any 2D point in rectangle A can be found in rectangle B using this operation:
point_in_A = np.array([x, y, 1.0])
temp = h_transform @ point_in_A  # temp is (x', y', scale)
xy_in_B = (temp[0] / temp[2], temp[1] / temp[2])
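For reference, OpenCV can apply the same mapping to many points at once; a minimal sketch, assuming tracked_xy is an (N, 2) array of tracked pixel coordinates:
import numpy as np
import cv2

src = np.float32(tracked_xy).reshape(-1, 1, 2)  # (N, 1, 2) as expected by OpenCV
dst = cv2.perspectiveTransform(src, h_transform).reshape(-1, 2)
# dst holds the points in the bird's-eye-view plane; the division by the
# homogeneous scale from the manual version is performed internally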
Once this is correct, how do I convert these pixel coordinates into real-world coordinates? I believe the above transformation places the coordinates onto a plane whose corners are the selected area, with the origin at the bottom-left.
I have confused myself trying to wrap my head around this, so I apologize if this is not a viable question or is very confusing!

Why does the output contain only 2 values but not the displacement for the entire image?

I have been stuck here for some time now. I cannot understand what I am doing wrong in calculating the displacement vectors along the x-axis and y-axis using the Lucas-Kanade method.
I implemented it as given in the above Wikipedia link. Here is what I have done:
import cv2
import numpy as np
img_a = cv2.imread("./images/1.png",0)
img_b = cv2.imread("./images/2.png",0)
# Calculate gradient along x and y axis
ix = cv2.Sobel(img_a, cv2.CV_64F, 1, 0, ksize = 3, scale = 1.0/3.0)
iy = cv2.Sobel(img_a, cv2.CV_64F, 0, 1, ksize = 3, scale = 1.0/3.0)
# Calculate temporal difference between the 2 images
it = img_b - img_a
ix = ix.flatten()
iy = iy.flatten()
it = -it.flatten()
A = np.vstack((ix, iy)).T
atai = np.linalg.inv(np.dot(A.T,A))
atb = np.dot(A.T, it)
v = np.dot(np.dot(np.linalg.inv(np.dot(A.T,A)),A.T),it)
print(v)
This code runs without an error, but it prints an array of just 2 values! I expected the v matrix to be the same size as the image. Why does this happen? What am I doing incorrectly?
PS: I know there are methods directly available with OpenCV but I want to write this simple algorithm (as also given in the Wikipedia link shared above) myself.
To properly compute the Lucas–Kanade optical flow estimate you need to solve the system of two equations for every pixel, using information from its neighborhood, not for the image as a whole.
This is the recipe (notation refers to that used on the Wikipedia page):
Compute the image gradient (A) for the first image (ix, iy in the OP) using any method (Sobel is OK, I prefer Gaussian derivatives; note that it is important to apply the right scaling in Sobel: 1/8).
ix = cv2.Sobel(img_a, cv2.CV_64F, 1, 0, ksize = 3, scale = 1.0/8.0)
iy = cv2.Sobel(img_a, cv2.CV_64F, 0, 1, ksize = 3, scale = 1.0/8.0)
Compute the structure tensor (ATWA): Axx = ix * ix, Axy = ix * iy, Ayy = iy * iy. Each of these three images must be smoothed with a Gaussian filter (this is the windowing). For example,
Axx = cv2.GaussianBlur(ix * ix, (0,0), 5)
Axy = cv2.GaussianBlur(ix * iy, (0,0), 5)
Ayy = cv2.GaussianBlur(iy * iy, (0,0), 5)
These three images together form the structure tensor, which is a 2x2 symmetric matrix at each pixel. For a pixel at (i,j), the matrix is:
| Axx(i,j) Axy(i,j) |
| Axy(i,j) Ayy(i,j) |
Compute the temporal gradient (b) by subtracting the two images (it in the OP).
it = img_b - img_a
Compute ATWb: Abx = ix * it, Aby = iy * it, and smooth these two images with the same Gaussian filter as above.
Abx = cv2.GaussianBlur(ix * it, (0,0), 5)
Aby = cv2.GaussianBlur(iy * it, (0,0), 5)
Compute the inverse of ATWA (a symmetric positive-definite matrix) and multiply by ATWb. Note that this inverse is of the 2x2 matrix at each pixel, not of the images as a whole. You can write this out as a set of simple arithmetic operations on the images Axx, Axy, Ayy, Abx and Aby.
The inverse of the matrix ATWA is given by:
| Ayy -Axy |
| -Axy Axx | / ( Axx*Ayy - Axy*Axy )
so you can write the solution as
norm = Axx*Ayy - Axy*Axy
vx = ( Ayy * Abx - Axy * Aby ) / norm
vy = ( Axx * Aby - Axy * Abx ) / norm
If the image is natural, it will have at least a tiny bit of noise, and norm will not have zeros. But for artificial images norm could have zeros, meaning you can't divide by it. Simply adding a small value to it will avoid division by zero errors: norm += 1e-6.
The size of the Gaussian filter is chosen as a compromise between precision and allowed motion speed: a larger filter will yield less precise results, but will work with larger shifts between images.
Typically, vx and vy are only evaluated where the two eigenvalues of the matrix ATWA are sufficiently large (if at least one is small, the result is inaccurate or possibly wrong).
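For completeness, a minimal end-to-end sketch assembling the steps above (sigma 5 follows the examples; depending on the sign convention chosen for it, the resulting flow may need to be negated, as in the OP's -it):
import cv2
import numpy as np

img_a = cv2.imread("./images/1.png", 0).astype(np.float64)  # float avoids uint8 wrap-around
img_b = cv2.imread("./images/2.png", 0).astype(np.float64)

ix = cv2.Sobel(img_a, cv2.CV_64F, 1, 0, ksize=3, scale=1.0/8.0)
iy = cv2.Sobel(img_a, cv2.CV_64F, 0, 1, ksize=3, scale=1.0/8.0)
it = img_b - img_a

Axx = cv2.GaussianBlur(ix * ix, (0, 0), 5)
Axy = cv2.GaussianBlur(ix * iy, (0, 0), 5)
Ayy = cv2.GaussianBlur(iy * iy, (0, 0), 5)
Abx = cv2.GaussianBlur(ix * it, (0, 0), 5)
Aby = cv2.GaussianBlur(iy * it, (0, 0), 5)

norm = Axx * Ayy - Axy * Axy + 1e-6  # guard against division by zero
vx = (Ayy * Abx - Axy * Aby) / norm  # per-pixel flow, x component
vy = (Axx * Aby - Axy * Abx) / norm  # per-pixel flow, y component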
Using DIPlib (disclosure: I'm an author) this is all very easy because it supports images with a matrix at each pixel. You would do this as follows:
import diplib as dip
img_a = dip.ImageRead("./images/1.png")
img_b = dip.ImageRead("./images/2.png")
A = dip.Gradient(img_a, [1.0])
b = img_b - img_a
ATA = dip.Gauss(A * dip.Transpose(A), [5.0])
ATb = dip.Gauss(A * b, [5.0])
v = dip.Inverse(ATA) * ATb

Calculate rotation matrix to align two vectors in 3D space?

I have two separate vectors of 3D data points that represent curves and I'm plotting these as scatter data in a 3D plot with matplotlib.
Both vectors start at the origin, and both are of unit length. The curves are similar to each other; however, there is typically a rotation between the two curves (for test purposes, I've actually been using one curve and applying a rotation matrix to it to create the second curve).
I want to align the two curves so that they line up in 3D, e.g. rotate curve b so that its start and end points line up with curve a. I've been trying to do this by subtracting the final point from the first to get a direction vector representing the straight line from the start to the end of each curve, converting these to unit vectors, and then calculating the cross and dot products and using the methodology outlined in this answer (https://math.stackexchange.com/a/476311/357495) to calculate a rotation matrix.
However, when I do this, the calculated rotation matrix is wrong, and I'm not sure why.
My code is below (I'm using Python 2.7):
# curve_1, curve_2 are arrays of 3D points, of the same length (both start at the origin)
curve_vec_1 = (curve_1[0] - curve_1[-1]).reshape(3,1)
curve_vec_2 = (curve_2[index][0] - curve_2[index][-1]).reshape(3,1)
a,b = (curve_vec_1/ np.linalg.norm(curve_vec_1)).reshape(3), (curve_vec_2/ np.linalg.norm(curve_vec_2)).reshape(3)
v = np.cross(a,b)
c = np.dot(a,b)
s = np.linalg.norm(v)
I = np.identity(3)
vXStr = '{} {} {}; {} {} {}; {} {} {}'.format(0, -v[2], v[1], v[2], 0, -v[0], -v[1], v[0], 0)
k = np.matrix(vXStr)
r = I + k + np.square(k) * ((1 -c)/(s**2))
for i in xrange(item.shape[0]):
    item[i] = (np.dot(r, item[i]).reshape(3,1)).reshape(3)
In my test case, curve 2 is simply curve 1 with the following rotation matrix applied:
[[ 1      0      0     ]
 [ 0      0.5    0.866 ]
 [ 0     -0.866  0.5   ]]
(just a 60 degree rotation around the x axis).
The rotation matrix computed by my code to align the two vectors again is:
[[ 1. -0.32264329 0.27572962]
[ 0.53984249 1. -0.35320293]
[-0.20753816 0.64292975 1. ]]
The plot of the direction vectors for the two original curves (a and b in blue and green respectively) and the result of b transformed with the computed rotation matrix (red) is below. I'm trying to compute the rotation matrix to align the green vector to the blue.
Based on Daniel F's correction, here is a function that does what you want:
import numpy as np
def rotation_matrix_from_vectors(vec1, vec2):
    """ Find the rotation matrix that aligns vec1 to vec2
    :param vec1: A 3d "source" vector
    :param vec2: A 3d "destination" vector
    :return mat: A transform matrix (3x3) which when applied to vec1, aligns it with vec2.
    """
    a, b = (vec1 / np.linalg.norm(vec1)).reshape(3), (vec2 / np.linalg.norm(vec2)).reshape(3)
    v = np.cross(a, b)
    c = np.dot(a, b)
    s = np.linalg.norm(v)
    kmat = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    rotation_matrix = np.eye(3) + kmat + kmat.dot(kmat) * ((1 - c) / (s ** 2))
    return rotation_matrix
Test:
vec1 = [2, 3, 2.5]
vec2 = [-3, 1, -3.4]
mat = rotation_matrix_from_vectors(vec1, vec2)
vec1_rot = mat.dot(vec1)
assert np.allclose(vec1_rot/np.linalg.norm(vec1_rot), vec2/np.linalg.norm(vec2))
Problem is here:
r = I + k + np.square(k) * ((1 -c)/(s**2))
np.square(k) squares each element of the matrix. You want np.matmul(k, k) or k @ k, which is the matrix multiplied by itself.
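A quick illustration of the difference:
import numpy as np

k = np.array([[0., -1.], [1., 0.]])
print(np.square(k))  # [[0. 1.] [1. 0.]]   -- each entry squared
print(k @ k)         # [[-1. 0.] [0. -1.]] -- the matrix multiplied by itself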
I'd also implement the side cases (especially s=0) mentioned in the comments of that answer, or you will end up with errors for quite a few cases.
Based on @Peter's and @Daniel F's work. The above function worked for me, except for cases of the same direction vector, where v would be a zero vector. I catch this here and return the identity matrix instead.
import numpy

def rotation_matrix_from_vectors(vec1, vec2):
    """ Find the rotation matrix that aligns vec1 to vec2
    :param vec1: A 3d "source" vector
    :param vec2: A 3d "destination" vector
    :return mat: A transform matrix (3x3) which when applied to vec1, aligns it with vec2.
    """
    a, b = (vec1 / numpy.linalg.norm(vec1)).reshape(3), (vec2 / numpy.linalg.norm(vec2)).reshape(3)
    v = numpy.cross(a, b)
    if any(v):  # if not all zeros, the vectors are not parallel
        c = numpy.dot(a, b)
        s = numpy.linalg.norm(v)
        kmat = numpy.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        return numpy.eye(3) + kmat + kmat.dot(kmat) * ((1 - c) / (s ** 2))
    else:
        return numpy.eye(3)  # cross of all zeros only occurs on identical directions
One can use scipy for this, reproducing @Peter's answer with scipy's Rotation class; see:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.transform.Rotation.html?highlight=scipy%20spatial%20transform%20rotation#scipy.spatial.transform.Rotation
from scipy.spatial.transform import Rotation as R
import numpy as np
def get_rotation_matrix(vec2, vec1=np.array([1, 0, 0])):
    """get rotation matrix between two vectors using scipy"""
    vec1 = np.reshape(vec1, (1, -1))
    vec2 = np.reshape(vec2, (1, -1))
    r = R.align_vectors(vec2, vec1)
    return r[0].as_matrix()
vec1 = np.array([2, 3, 2.5])
vec2 = np.array([-3, 1, -3.4])
mat = get_rotation_matrix(vec1=vec1, vec2=vec2)
print(mat)
vec1_rot = mat.dot(vec1)
assert np.allclose(vec1_rot / np.linalg.norm(vec1_rot), vec2 / np.linalg.norm(vec2))
Regards, Markus
I think that if you do not constrain the rotation axis, the rotation matrix is not unique: any additional rotation about the destination vector also aligns the two vectors.

Find point along line a specified distance from a polygon

Given a 2-D closed polygon defined by a series of points and an infinite line, I would like to find the points on that line that are a specified distance from the polygon. The polygon is known to be closed, non-self-intersecting, and without 3 consecutive collinear points. In general there are many possible points along the line. Ideally I would like to find them all, or alternatively the one nearest to some initial guess location. I am using Python, but a solution in any language would be helpful. I believe scipy.spatial's kdtree will be one important component, but I cannot see how to do the whole solution. Here is some code to define the problem, which shows at least some of the corner cases involved:
import numpy as np
import matplotlib.pyplot as plt
poly = np.array([[0, 0],
                 [10, 0],
                 [10, 3],
                 [1, 1],
                 [1, 6],
                 [0, 6],
                 [.8, 4],
                 [0, 0]])
line = np.array([[-2, 4.5],
                 [12, 3]])
plt.plot(poly[:, 0], poly[:, 1])
plt.plot(line[:, 0], line[:, 1])
plt.xlim([-1, 11])
plt.ylim([-1, 7])
plt.show()
points = find_points_distance_from_polygon(poly, line, distance)
Edit: I am looking for the algorithm to find the points.
Update:
What I have tried so far is an approximate solution using the distance to each point. My thought was that if I refined the polygon by adding additional points along each edge, this approach might be accurate enough. However, I would have to add a lot of points if the distance were small, and I thought there is probably a better way.
import scipy.spatial as spatial
import scipy.optimize as opt
import math
def find_point_distance_from_polygon_along_line(tree, line, dist, guess_ratio):
    def f(x):
        pt = line[0, :] + x * (line[1, :] - line[0, :])
        d, i = tree.query(pt)
        return math.fabs(d - dist)
    res = opt.minimize(f, [guess_ratio])
    return line[0, :] + res.x * (line[1, :] - line[0, :])
tree = spatial.cKDTree(poly)
pt = find_point_distance_from_polygon_along_line(tree, line, 1, 0)
For the example in the plot and a distance of 0.5, I expect to find 4 points at approximately (.1, 4.2), (1.5, 4.1), (9.1, 3.3), and (10.5, 3.1). My current plan would find more points, particularly points which are some distance from the opposite edge of the polygon. I want the line connecting the point on the line to the polygon to be external to the polygon.
If the number of polygon edges is reasonable, you can use a simple linear algorithm.
Let the parametric equation of the line be
L(u) = L0 + u * dL
where L0 is some base point, dL is the direction vector, and u is the parameter,
and the parametric equation of the i-th segment be
P = P[i] + t * Dir[i]
where P[i] is the first point of the segment, Dir[i] is the normalized direction vector, and t is a parameter in the range 0..Len[i] (the segment length).
An arbitrary point on the line has its projection onto the given segment at parameter
t = DotProduct(L(u) - P[i], Dir[i]) //equation 1
and the length of the normal to the projection (the needed distance) is
Dist = Abs(CrossProduct(L(u) - P[i], Dir[i]))
Abs((L0x + u * dLx - Px) * Diry - (L0y + u * dLy - Py) * Dirx) = Dist
so
u = (±Dist - ((L0x - Px)*Diry - (L0y - Py)*Dirx)) / (dLx * Diry - dLy * Dirx)
Substitute each value of u into equation 1 and check whether the parameter t is in the range 0..Len[i] (projection inside the segment). If yes, L(u) is a needed point.
Then check the distance to the vertices: solve
(L0x + u * dLx - Px)^2 + (L0y + u * dLy - Py)^2 = Dist^2
which is a quadratic equation in u.
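A minimal Python sketch of this recipe, following the signature sketched in the question (it returns candidate points; each should still be verified against the full polygon, since a point at the right distance from one edge's line may be closer to another edge):
import numpy as np

def find_points_distance_from_polygon(poly, line, dist, eps=1e-12):
    """Candidate points on the infinite line at distance dist from the polygon."""
    L0 = line[0].astype(float)
    dL = line[1] - line[0]
    us = []
    for i in range(len(poly) - 1):  # poly is closed: last point == first
        P = poly[i].astype(float)
        seg = poly[i + 1] - P
        seg_len = np.linalg.norm(seg)
        Dir = seg / seg_len
        denom = dL[0] * Dir[1] - dL[1] * Dir[0]
        base = (L0[0] - P[0]) * Dir[1] - (L0[1] - P[1]) * Dir[0]
        if abs(denom) > eps:  # skip edges parallel to the line
            for signed_dist in (dist, -dist):
                u = (signed_dist - base) / denom
                t = np.dot(L0 + u * dL - P, Dir)  # equation 1
                if 0.0 <= t <= seg_len:  # projection falls inside the segment
                    us.append(u)
        # vertex case: |L(u) - P|^2 = dist^2 is a quadratic in u
        w = L0 - P
        a, b, c = np.dot(dL, dL), 2.0 * np.dot(w, dL), np.dot(w, w) - dist ** 2
        disc = b * b - 4.0 * a * c
        if disc >= 0.0:
            us.extend([(-b + s * np.sqrt(disc)) / (2.0 * a) for s in (1.0, -1.0)])
    return np.array([L0 + u * dL for u in sorted(set(us))])

points = find_points_distance_from_polygon(poly, line, 0.5)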
