Sort points in 2D space to make a spline - python

I have a sequence of points which are distributed in 2D space. They represent a shape, but they are not ordered. So I can plot them as points to give an idea of the shape, but if I plot the line connecting them, I miss the shape, because the order of the points is not the right connection order.
I'm wondering, how can I put them in the right order such that, if I connect them one by one in sequence, I get a spline showing the shape they represent? I found and tried the convex hull in Matlab, but with no results. The shape can be complex, for example a star: with the convex hull I get a shape that is far too simplified (many points are not taken into account).
Thanks for the help!
EDIT
The image could be anything. I've randomly created one to show you a possible case, with some parts that cut into the shape, and the points can also be at different distances from each other.
I've tried the convex hull function in Matlab, and that's what I get. Every time the contour has a "sharp corner", I miss it and the final shape is not what I'm looking for. Also, the Matlab function has no parameters for changing the convex hull result (at least I can't see any in the help).
hull = convhull(coords(:,1),coords(:,2));
plot(coords(hull,1),coords(hull,2),'.r');

You need to somehow order your points so they form a sequence; in the case of your drawing example, the points can likely be ordered by taking the minimal distance to the next not-yet-used point, starting at one end (you'll probably have to provide the end yourself).
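A minimal sketch of that greedy ordering in Python/numpy (it assumes the points are an (n, 2) array and that you supply the start index yourself, as noted above):

import numpy as np

def order_points(points, start=0):
    # Greedy nearest-neighbour ordering of a 2D point cloud.
    # points: (n, 2) array; start: index of the point to begin at.
    remaining = list(range(len(points)))
    order = [remaining.pop(start)]
    while remaining:
        last = points[order[-1]]
        dists = np.linalg.norm(points[remaining] - last, axis=1)
        order.append(remaining.pop(int(np.argmin(dists))))
    return points[order]

Note that this greedy approach can fail where the sampling is uneven: if a point across the shape happens to be closer than the true next point along the contour, the path will jump.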
Then you can draw a spline, maybe using Chaikin's algorithm, which produces a curve that locally approximates a Bézier curve.
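For the spline step, here is a minimal sketch of Chaikin's corner cutting for a closed contour (it assumes the points are already ordered as above; the iteration count is just a tuning assumption):

import numpy as np

def chaikin_closed(points, iterations=3):
    # Each pass replaces every edge (P, Q) with the two points
    # 0.75*P + 0.25*Q and 0.25*P + 0.75*Q; repeating converges
    # to a smooth curve that cuts the polygon's corners.
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        nxt = np.roll(pts, -1, axis=0)      # next vertex, wrapping around
        q = 0.75 * pts + 0.25 * nxt
        r = 0.25 * pts + 0.75 * nxt
        pts = np.column_stack([q, r]).reshape(-1, 2)
    return pts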
You need to start working on this, and post another question with your code, if you are having difficulties.

Alpha shapes may perform better than convex hulls for this problem. An alpha shape will touch all the points on the exterior of a point cloud, and can even carve out holes.
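A minimal sketch of a 2D alpha shape built from a Delaunay triangulation (points is assumed to be an (n, 2) numpy array; the alpha value is an assumption you must tune to your point spacing):

import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    # Keep Delaunay triangles whose circumradius is below 1/alpha;
    # boundary edges are those used by exactly one kept triangle.
    tri = Delaunay(points)
    edge_count = {}
    for ia, ib, ic in tri.simplices:
        a, b, c = points[ia], points[ib], points[ic]
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(c - a)
        lc = np.linalg.norm(a - b)
        s = 0.5 * (la + lb + lc)
        area = max(np.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0)), 1e-12)
        if la * lb * lc / (4.0 * area) < 1.0 / alpha:   # circumradius test
            for e in ((ia, ib), (ib, ic), (ic, ia)):
                e = tuple(sorted(e))
                edge_count[e] = edge_count.get(e, 0) + 1
    return [e for e, n in edge_count.items() if n == 1]

Unlike the convex hull, the kept boundary follows concavities such as the points of a star, at the cost of having to tune alpha.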
But for complicated shape reconstruction, I would recommend trying the beta-skeleton based approach discussed in https://people.eecs.berkeley.edu/~jrs/meshpapers/AmentaBernEppstein.pdf
See more details on β-Skeleton at https://en.wikipedia.org/wiki/Beta_skeleton
Quote from the linked article:
The circle-based β-skeleton may be used in image analysis to reconstruct the shape of a two-dimensional object, given a set of sample points on the boundary of the object (a computational form of the connect the dots puzzle where the sequence in which the dots are to be connected must be deduced by an algorithm rather than being given as part of the puzzle).
it is possible to prove that the choice β = 1.7 will correctly reconstruct the entire boundary of any smooth surface, and not generate any edges that do not belong to the boundary, as long as the samples are generated sufficiently densely relative to the local curvature of the surface
Cheers

Related

How to check if a 2D point is inside or outside a 2D Closed Bezier Curve using Python?

Hello,
in my 2D software I have two inputs available:
an array of XY points
[(x,y),(1,1),(2,2),(2,3),(-1,3),...]
and another matrix representing the handles of the closed 2D Bézier curve:
[((x,y),(x,y),(x,y)),
((-1,-1),(1,1),(1,2)),
((1,1),(2,2),(2,3)),
...]
How can I check if a point is inside or outside the given curve using Python, preferably with numpy?
I don't know the theory of Bézier curves, so if your second list of points is a kind of compressed way to represent a Bézier curve, first try to sample some points of the curve at the precision you want.
Then you have n points of your curve, and you can apply a simple PIP (point-in-polygon) algorithm: https://en.wikipedia.org/wiki/Point_in_polygon
I can explain in detail later if you want to know how to do it programmatically.
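To illustrate the sampling step, here is a minimal sketch for cubic Bézier segments; how your handle matrix maps to the four control points per segment is an assumption you'll have to adapt to your software's convention:

import numpy as np

def sample_cubic_bezier(p0, p1, p2, p3, n=32):
    # Bernstein form of a cubic Bezier:
    # B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * np.asarray(p0, float)
            + 3 * (1 - t) ** 2 * t * np.asarray(p1, float)
            + 3 * (1 - t) * t ** 2 * np.asarray(p2, float)
            + t ** 3 * np.asarray(p3, float))

Concatenate the samples of all segments into one polygon and feed that to the PIP step.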
I can't write code right here, because I would need the entire program to do it properly; however, I can offer two approaches.
The harder way is to approximate each Bézier curve by a polyline. Then, according to the wiki, you can use two techniques:
Ray casting algorithm: in short, you shoot a ray that starts from a point outside the polygon and passes all the way through it. The polygon's edges split that ray into segments, which alternate between lying inside and outside the polygon, and you then check which segment your specific point falls on (a sketch of the even-odd version follows after this list).
Winding number algorithm: here you compute the winding number of the polygon around the point. If the winding number is non-zero, the point lies inside the polygon.
The huge drawback of this approach is that the accuracy depends on how closely the polyline approximates the curve.
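A minimal sketch of the even-odd ray casting test on the sampled polyline (pure Python; polygon is assumed to be a list of (x, y) vertices):

def point_in_polygon(px, py, polygon):
    # Cast a horizontal ray to the right of (px, py) and count
    # how many polygon edges it crosses; an odd count means inside.
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):              # edge straddles the ray's line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:                    # crossing is to the right
                inside = not inside
    return inside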
The second way is to use a bitmap. For example, you set your points to white, then render the area under the curve in black, and check whether your points remain white. This method is more accurate, and the fastest one, because you can use the GPU for the render.
And some links related to the first approach:
https://pomax.github.io/bezierinfo/#intersections
http://web.mit.edu/hyperbook/Patrikalakis-Maekawa-Cho/node80.html

Curvature of a one-pixel wide curve

I have a numpy array depicting a one-pixel wide, discrete, connected curve. This curve is obtained by the skeletonization operation of image processing. I am trying to find the curvature of this curve at an arbitrary point, to detect bends/kinks (which will have high curvature values).
I tried to implement the above using the general formula for curvature. However, since this is a pixelated, discrete curve whose generating function is unknown, I tried to resort to using the numpy gradient instead.
The problem I see with the above is that, since the curve is one pixel wide, at any point the slope can only be 0, ±1 or infinity. As a result, the curvature values that I get are mostly meaningless or useless.
I am looking for some suggestion on where to start in order to get a smooth curve out of the above, so that I can calculate curvature in a more meaningful way. Can somebody suggest any mathematical operation or convolution that I can apply to achieve the same? Below is a representative binary image that I have.
P.S. I am very, very new to image processing, so references to standard algorithms (in math books) or library implementations would be very helpful.
An established way to do this is to fit a low-order parametric curve to each of the skeletonized points using two or more neighbouring points. Then you compute curvature at the point using the fitted curve parameters with an analytic formula. Several curve models can be used. The two main models are:
A circle. The radius of curvature, R, is the reciprocal of the curvature. For a curve, it equals the radius of the circular arc which best approximates the curve at that point. You can fit a circle to a set of 2D data points using various methods; a Python library that has implemented several is here.
A quadratic. This can be fitted to the point and its neighbours, and curvature can then be estimated through second-order differentiation of the fitted curve. You can use numpy.polyfit to fit this model. A simple strategy is to first estimate the tangent vector at the point, by fitting a local line (e.g. with polyfit using an order 1 curve). Then you rotate the points to align the tangent vector with the x axis. Finally you fit a 1D quadratic f(x) to the rotated points using polyfit (see the sketch after this list).
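A minimal sketch of both models (it assumes the skeleton points have already been traced into an ordered (n, 2) array; the window size k is a neighbourhood-scale assumption, as discussed below):

import numpy as np

def circle_curvature(x, y):
    # Algebraic (Kasa) least-squares circle fit:
    # solve x^2 + y^2 + D*x + E*y + F = 0 for D, E, F.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    R = np.sqrt(cx ** 2 + cy ** 2 - F)          # radius of curvature
    return 1.0 / R

def quadratic_curvature(points, idx, k=5):
    # Fit a line to get the tangent, rotate it onto the x axis,
    # then fit y = a*x^2 + b*x + c and evaluate the analytic
    # curvature |f''| / (1 + f'^2)^(3/2) at the centre point.
    w = points[max(idx - k, 0):idx + k + 1]
    x, y = w[:, 0], w[:, 1]
    theta = np.arctan(np.polyfit(x, y, 1)[0])
    c, s = np.cos(-theta), np.sin(-theta)
    xr, yr = c * x - s * y, s * x + c * y
    a, b, _ = np.polyfit(xr, yr, 2)
    x0 = xr[min(idx, k)]                        # rotated centre point
    fp, fpp = 2 * a * x0 + b, 2 * a
    return abs(fpp) / (1 + fp ** 2) ** 1.5

Note the order-1 fit is unreliable for near-vertical tangents; a PCA-based tangent estimate would be more robust there.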
The tricky thing with making any curvature estimator is that curvature can be estimated at different scales. For example, do I want my estimator to be sensitive to high frequency detail or is this actually noise? This decision manifests in the choice of neighbourhood size. Too small, and errors from noise and discretization lead to unstable estimates. However too large, and there may be large modelling error (error by approximating the curve as a parametric function). Generally you have to select the best neighbourhood size yourself.
You're also going to have some poor curvature estimates at junction points, but that's largely unavoidable as curvature is not well defined there. A naïve fix could be to segment all paths at junction points, and then estimate curvature on each path individually.
Toby gave an excellent suggestion regarding junction points: detect the junction points and take each line in between those independently.
Detecting junction points (and end points). This is quite simple: all pixels that are set and have more than two neighbors are junction points. All pixels that are set and have exactly one neighbor are end points. Detect all those points and put their coordinates in a list.
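A minimal sketch of that neighbour count with a 3x3 convolution (it assumes skel is a boolean numpy array of the skeleton):

import numpy as np
from scipy.ndimage import convolve

def junctions_and_endpoints(skel):
    # Count the 8-neighbours of every set pixel; set pixels with
    # more than two neighbours are junction points, and set pixels
    # with exactly one neighbour are end points.
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    nbrs = convolve(skel.astype(int), kernel, mode='constant')
    junctions = np.argwhere(skel & (nbrs > 2))
    endpoints = np.argwhere(skel & (nbrs == 1))
    return junctions, endpoints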
Finding the lines in between pairs of points. Starting at each coordinate in your list, look for a line starting there. Note that for the junction points, you'll have at least three lines starting there. If you do this, you'll find each line two times. You can remove duplicates by reversing the lines that end to the left of where they start (and if the two end points are on the same image column, take the one on top as the start). Now they will be directly comparable, so you can delete the duplicates (or not store them in the first place). Note that just comparing start and end point is not sufficient as you can have different lines with the same start and end points.
Tracing each line. The step above requires that you trace each line. See if you can figure it out, it's fun! Here is a description of an algorithm that traces the outline of objects, you can use it as inspiration as this problem is very similar. Store a vector with x-coordinates and one with y-coordinates for each line.
Smoothing the lines. As you noticed, consecutive steps are in one of 8 directions, so angles are strongly discretized. You can prevent this by smoothing the coordinate vectors. This is a quick-and-dirty trick, but it works. Think of these vectors as 1D images, and apply a smoothing filter (I prefer the Gaussian filter for many reasons). Here you filter the vector with x-coordinates separately from the vector with y-coordinates.
Computing the curvature. Finally, you can compute the curvature of the curve, as the norm of the derivative of the unit normal to the curve. Don't forget to take the distance between points into account when computing derivatives!
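Putting the last two steps together, a minimal sketch (x and y are the coordinate vectors of one traced line; the value of sigma is a smoothing-scale assumption):

import numpy as np
from scipy.ndimage import gaussian_filter1d

def line_curvature(x, y, sigma=3.0):
    # Smooth the coordinate vectors as 1D signals, then apply the
    # parametric curvature formula, which accounts for the varying
    # distance between consecutive points.
    xs = gaussian_filter1d(np.asarray(x, float), sigma)
    ys = gaussian_filter1d(np.asarray(y, float), sigma)
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5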

How to determine and extract surface points of 3D object?

I have a 3D object which is not hollow, so there are many 3D points. How would you determine which of these points (especially for a very curvaceous surface) are on the surface? I understand how to extract them, but I need either a function somewhat like libraryUNK.surfacePoint..., which I don't know,
or, better, an understanding of what is considered to be a surface point, which I don't know either and for which I couldn't (yet) develop any proper definition myself.
I know I can do triangulation to get the surface. But I don't get what to do next, as I will then be left with a set of triangles, some of which are on the surface, some of which are not... and again I have no definition of what counts as being on the surface and what does not.
It seems to me that you want to compute the convex hull of your 3D point cloud.
It's not an easy problem, but there are plenty of solutions (and algorithms) for it; there's a ConvexHull function in scipy.spatial.
Here are the details, with an example (2D, but it works in any dimension):
http://scipy.github.io/devdocs/generated/scipy.spatial.ConvexHull.html
This function uses the Qhull library:
http://www.qhull.org/
There are plenty of resources on the Qhull page. There's also a Wikipedia page (again, this is not the only method to compute convex hulls; you may want to try others):
https://en.wikipedia.org/wiki/Quickhull
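A minimal usage sketch (the random cloud is just a stand-in for your points):

import numpy as np
from scipy.spatial import ConvexHull

points = np.random.rand(1000, 3)        # stand-in 3D point cloud
hull = ConvexHull(points)
surface_points = points[hull.vertices]  # points lying on the hull
triangles = hull.simplices              # indices of the hull's triangular facets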
Edit: after re-reading the question, it seems your volume may not be convex. Unfortunately, if it isn't, there's no way to tell whether a point is inside the volume or on the surface without additional information about the volume or the point cloud.

Pipeline to create Voronoi Meshes

I would like to implement a Maya plugin (this question is independent from Maya) to create 3D Voronoi patterns, something like this:
I just know that I have to start from point sampling (I implemented the adaptive poisson sampling algorithm described in this paper).
I thought that, from those points, I should create the 3D wireframe of the mesh by applying Voronoi, but the result was something different from what I expected.
Here are a few examples of what I get when handling the result from scipy.spatial.Voronoi like this (as suggested here):
vor = Voronoi(points)
for vpair in vor.ridge_vertices:
for i in range(len(vpair) - 1):
if all(x >= 0 for x in vpair):
v0 = vor.vertices[vpair[i]]
v1 = vor.vertices[vpair[i+1]]
create_line(v0.tolist(), v1.tolist())
The grey vertices are the sampled points (the original shape was a simple sphere):
Here is a more complex shape (an arm)
Am I missing something? Can anyone suggest the proper pipeline and the algorithms I have to implement to create such patterns?
I've seen your question since you posted it but didn't have a real answer for you; however, as I see you still haven't got any response, I'll at least write down some ideas. Unfortunately it's still not a full solution to your problem.
It seems to me you're mixing a few separate problems in this question, so it would help to break it down into pieces:
Voronoi diagram:
The diagram is by definition infinite, so when you draw it directly you should expect a mess similar to the one in your second image; that seems fine. I don't know exactly how SciPy does it, but the implementation I've used flagged some edge ends as 'infinite' and provided the edge direction, so I could clip them at some distance myself. You'll need to check the exact data you get from SciPy (in your snippet above, the all(... >= 0) test is what filters out the infinite ridges).
In the 3D world you’ll almost always want to remove such infinite areas to get any meaningful rendering, or at least remove the area that contains your camera.
Points generation:
The Poisson disc is fine as sample data or for early R&D, but it's also the most boring one :). You'll need more ways to generate input points.
I tried to imagine the input needed for your ball-like example and I came up with something like this:
Create two spheres of points, with the same center but different radius.
When you create a Voronoi diagram out of them and remove the infinite areas, you should end up with something like a football.
If you create both spheres randomly you'll get very irregular boundaries on the 'ball', but if you scale the points of one sphere to use for the second one, you should get a regular mesh similar to a ball. You can also use similar points but add some random offset, to control the level of surface irregularity.
Take your computed diagram and, for each edge, create a few points along that edge - this will give you small cells building up the edges of the bigger cells. Play with random offsets again. Try to ignore edges that don't touch any infinite region, to get a result similar to your image.
Get the points from both stages and compute the diagram once more.
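A minimal sketch of the first of these steps (the point counts, radii, and scale factor are assumptions to play with):

import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)

def sphere_points(n, radius):
    # Uniform directions via normalized Gaussian samples.
    v = rng.normal(size=(n, 3))
    return radius * v / np.linalg.norm(v, axis=1, keepdims=True)

inner = sphere_points(400, 1.0)
outer = 1.2 * inner                     # scaled copy of the same points, as suggested above
vor = Voronoi(np.vstack([inner, outer]))
# keep only ridges with no vertex at infinity (index -1)
finite_ridges = [r for r in vor.ridge_vertices if all(v >= 0 for v in r)]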
Mesh generation:
Up to now it didn’t look like your target images. In fact it may be really hard to do it with production quality (for a Maya plugin) but I see some tricks that may help.
What I would try first is to take all the edges and extrude some circle along them. You may modulate the circle size to make it slightly bigger at the ends. Then do a Boolean 'OR' between all those meshes and apply some Mesh Smooth at the end.
This may give you similar results, but you'll need to be careful at mesh intersections; they can get ugly and need some special treatment.

Spatstat: Given a list of 2-d points, how to connect them into a polygon, and further make it the study region?

Please allow me to start the question with the simplest task: if I have four points which are vertices of a rectangle, stored in a 4x2 matrix, how can I turn this into a rectangular window? (Please do not use any special command specific to drawing rectangles, as the rectangle is raised just to represent a general class of regular geometric objects.)
To make things more complicated, suppose I have an nx2 matrix; how can I connect all of the n points so that they become a polygon? Note the object is not necessarily convex. I think the main difficulty is: how can R know which point should be connected with which?
The reason I am asking is that I was doing some image processing on a fish: I managed to get the body line of the fish by finding the contour with OpenCV in Python, and output it as an nx2 csv file. When I read the csv file into R and tried to use SpatialPolygons in the sp package to turn this into a polygon, some very unexpected behavior happened: there seems to be a break somewhere in the middle, so the polygon got cut in half, i.e. the boundary of the polygon was not connected. Is there any way I can fix this problem?
Thank you.
Edit: someone kindly pointed out that this is possibly a duplicate of another question: drawing polygons in R. However, the solution to that question relies on the shape being convex, so that ordering the vertices by angle makes sense; here the shape is not necessarily convex, so that approach will not work.
Do you want it to be a spatstat study region (of class owin) since you have the spatstat tag on there? In that case you can just use owin(poly=x) where x is your nx2 matrix (after loading the spatstat library of course). The rows in this matrix should contain the vertices of the polygon in the order that you want them connected (that's how R knows which point to connect with which). See help(owin) for more details.
