Curvature of a one-pixel wide curve - python

I have a numpy array depicting a one-pixel wide, discrete, connected curve. This curve is obtained by the Skeletonization operation of image processing. I am trying to find the curvature of the above curve at an arbitrary point, to detect bends/kinks (which will have high curvature value).
I tried to implement the above using the general formula for curvature. However, since this is a pixelated, discrete curve whose generating function is unknown, I resorted to using numpy's gradient instead.
The problem I see with the above is that, since the curve is one-pixel wide, at any point the slope can be only one of 0, 1 or infinity. As a result, the curvature values that I get are mostly meaningless or useless.
I am looking for suggestions on where to start in order to get a smooth curve out of the above, so that I can calculate curvature in a more meaningful way. Can somebody suggest a mathematical operation or convolution that I can apply to achieve this? Below is a representative binary image that I have.
P.S. I am very, very new to image processing, so references to standard algorithms (in math books) or library implementations would be very helpful.

An established way to do this is to fit a low-order parametric curve to each of the skeletonized points using two or more neighbouring points. Then you compute curvature at the point using the fitted curve parameters with an analytic formula. Several curve models can be used. The two main models are:
A circle. The radius of curvature, R, is the reciprocal of the curvature. For a curve, it equals the radius of the circular arc which best approximates the curve at that point. You can fit a circle to a set of 2D data points using various methods; a Python library that implements several of them is linked here.
A quadratic. This can be fitted to the point and its neighbours, and the curvature can then be estimated by differentiating the fitted curve twice. You can use numpy.polyfit to fit this model. A simple strategy is to first estimate the tangent vector at the point by fitting a local line (e.g. with polyfit using an order-1 curve). Then you rotate the points to align the tangent vector with the x axis. Finally, you fit a 1D quadratic f(x) to the rotated points using polyfit.
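As a rough illustration of the quadratic model, here is a minimal sketch assuming the skeleton points have already been traced into an ordered (N, 2) array; the function name, the neighbourhood half-width k and the parameter-based tangent estimate are illustrative choices, not the only way to do it:

import numpy as np

def curvature_at_point(points, idx, k=5):
    # points: (N, 2) array of ordered (x, y) skeleton coordinates
    # idx: index of the point of interest; k: neighbours taken on each side
    lo = max(idx - k, 0)
    nbr = points[lo:idx + k + 1].astype(float)
    ci = idx - lo                      # index of the centre point in the window
    x, y = nbr[:, 0], nbr[:, 1]

    # 1. Estimate the tangent direction with an order-1 fit against a
    #    dummy parameter t (avoids trouble with near-vertical tangents).
    t = np.arange(len(nbr))
    theta = np.arctan2(np.polyfit(t, y, 1)[0], np.polyfit(t, x, 1)[0])

    # 2. Rotate the neighbourhood so the tangent aligns with the x axis,
    #    with the point of interest moved to the origin.
    c, s = np.cos(-theta), np.sin(-theta)
    xr = c * (x - x[ci]) - s * (y - y[ci])
    yr = s * (x - x[ci]) + c * (y - y[ci])

    # 3. Fit y = a*x^2 + b*x + c and evaluate the curvature at x = 0:
    #    kappa = |2a| / (1 + b^2)^(3/2)
    a, b, _ = np.polyfit(xr, yr, 2)
    return abs(2 * a) / (1 + b ** 2) ** 1.5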
The tricky thing with making any curvature estimator is that curvature can be estimated at different scales. For example, do I want my estimator to be sensitive to high-frequency detail, or is this actually noise? This decision manifests in the choice of neighbourhood size. Too small, and errors from noise and discretization lead to unstable estimates. Too large, and there may be large modelling error (the error from approximating the curve as a parametric function). Generally you have to select the best neighbourhood size yourself.
You're also going to have some poor curvature estimates at junction points, but that's largely unavoidable as curvature is not well defined there. A naïve fix could be to segment all paths at junction points, and then estimate curvature on each path individually.

Toby gave an excellent suggestion regarding junction points: detect the junction points and take each line in between those independently.
Detecting junction points (and end points). This is quite simple: all pixels that are set and have more than two neighbors are junction points. All pixels that are set and have exactly one neighbor are end points. Detect all those points and put their coordinates in a list.
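For example (a sketch, assuming an 8-connected skeleton stored as a boolean numpy array; coordinates come back as (row, column) pairs):

import numpy as np
from scipy.ndimage import convolve

def find_junctions_and_ends(skel):
    skel = skel.astype(bool)
    # Count the 8-connected neighbours of every skeleton pixel.
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = convolve(skel.astype(int), kernel, mode='constant')
    end_points = np.argwhere(skel & (neighbours == 1))
    junctions = np.argwhere(skel & (neighbours > 2))
    return end_points, junctions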
Finding the lines in between pairs of points. Starting at each coordinate in your list, look for a line starting there. Note that for the junction points, you'll have at least three lines starting there. If you do this, you'll find each line two times. You can remove duplicates by reversing the lines that end to the left of where they start (and if the two end points are on the same image column, take the one on top as the start). Now they will be directly comparable, so you can delete the duplicates (or not store them in the first place). Note that just comparing start and end point is not sufficient as you can have different lines with the same start and end points.
Tracing each line. The step above requires that you trace each line. See if you can figure it out; it's fun! Here is a description of an algorithm that traces the outline of objects; you can use it as inspiration, as this problem is very similar. Store a vector with x-coordinates and one with y-coordinates for each line.
Smoothing the lines. As you noticed, consecutive steps are in one of 8 directions, so angles are strongly discretized. You can prevent this by smoothing the coordinate vectors. This is a quick-and-dirty trick, but it works. Think of these vectors as 1D images, and apply a smoothing filter (I prefer the Gaussian filter for many reasons). Here you filter the vector with x-coordinates separately from the vector with y-coordinates.
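For instance, with SciPy (sigma is the smoothing scale in samples and has to be tuned; cast to float first so the filter doesn't truncate back to integers):

from scipy.ndimage import gaussian_filter1d

# x, y: 1D arrays with the pixel coordinates of one traced line
sigma = 2.0
xs = gaussian_filter1d(x.astype(float), sigma, mode='nearest')
ys = gaussian_filter1d(y.astype(float), sigma, mode='nearest')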
Computing the curvature. Finally, you can compute the curvature of the curve, as the norm of the derivative of the unit normal to the curve. Don't forget to take the distance between points into account when computing derivatives!
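One way to do this, as a sketch on the smoothed vectors from the previous step: the standard parametric curvature formula is parametrisation-invariant, so the uneven spacing between samples is largely absorbed by its denominator:

import numpy as np

dx, dy = np.gradient(xs), np.gradient(ys)
ddx, ddy = np.gradient(dx), np.gradient(dy)
ds = np.hypot(dx, dy)                            # local step length
# kappa = |x' y'' - y' x''| / (x'^2 + y'^2)^(3/2)
kappa = np.abs(dx * ddy - dy * ddx) / ds ** 3    # ds > 0 after smoothing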

Related

Determining Best Fit Center of Scattered 3D Points

I am attempting to find the center of several sets of 3D points on a sphere. Each set consists of three or more points that fall on the arc of a circle, though not perfectly, as they have been supplied by an object detection algorithm, so there is some inherent error in these points. This is where the difficulty lies for me: I cannot simply solve the equations; I need to minimize the variance in radius to this point across all of the point sets.
Currently, I am calculating a plane of best fit for each set of points. By calculating the radius (perpendicular distance) to this normal for each set and determining the variance I can figure out which plane (normal or center of rotation) fits all three sets the best. I am also doing this for an average of the three planes and for two planes after throwing out the plane that agrees least with the other two. So I am getting a pretty decent approximation currently.
My question is: does anyone know how to implement, in Python, some sort of function that can help me find a normal vector through these points that minimizes the variance in radius across all sets? I suspect it won't be far off my current approximation, but I am looking for the most accurate solution to this problem.
The picture below shows the results of what I am currently doing. The pink points represent the points I am using, labeled 0, 1, 2 for each set of points. The blue dots represent the normal vectors projected to the surface of the sphere. The orange dot is the average of the three blue dots projected to the surface of the sphere. Ignore the green points; they are not relevant here. In terms of minimizing the variance, my code is currently telling me that axis (blue dot) 0 results in the least variance in radius for the data set as a whole, but I highly doubt it is the best-fitting point.
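If it helps as a starting point, here is a rough sketch of directly minimizing the summed per-set radius variance over the axis direction with SciPy; the spherical-angle parametrisation, the assumption that the axis passes through the sphere centre at the origin, and all names are illustrative:

import numpy as np
from scipy.optimize import minimize

def total_radius_variance(angles, point_sets):
    theta, phi = angles
    axis = np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
    total = 0.0
    for pts in point_sets:                    # each pts is an (N, 3) array
        proj = pts @ axis                     # component along the axis
        radial = pts - np.outer(proj, axis)   # perpendicular component
        total += np.var(np.linalg.norm(radial, axis=1))
    return total

# point_sets: list of (N, 3) arrays with coordinates relative to the sphere centre;
# x0: spherical angles of the current best plane-normal estimate.
# res = minimize(total_radius_variance, x0, args=(point_sets,), method='Nelder-Mead')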

Measurement for intersection of 2 irregular shaped 3d object

I am trying to implement an objective function that minimizes the overlap of 2 irregularly shaped 3D objects. While the most accurate measurement of the overlap is the intersection volume, it's too computationally expensive, as I am dealing with complex, non-convex objects with 1000+ faces.
I am wondering if there are other measurements of intersection between 3D objects that are much faster to compute. There are 2 requirements for the measurement: 1. when the measurement is 0, there should be no overlap; 2. the measurement should be a scalar (not a boolean value) indicating the degree of overlap, though this value doesn't need to be very accurate.
Possible measurements I am considering include some sort of 2D surface area of intersection, or a 1D penetration depth. Alternatively, I could estimate the volume with a sample-based method that samples points inside one object and tests the percentage of points that also lie in the other object. But I don't know how computationally expensive it is to sample points inside a complex 3D shape, or to test whether a point is enclosed by such a shape.
I will really appreciate any advice, code, or equations on this matter. Also, if you can suggest any libraries (preferably Python libraries) that accept .obj, .ply, etc. files and perform 3D geometry computations, that would be great! I will also post here if I find a good method.
Update:
I found a good Python library called Trimesh that performs all the computations mentioned by me and others in this post. It computes the exact intersection volume with the Blender backend; it can voxelize meshes and compute the volume of the co-occupied voxels; and it can perform surface and volumetric point sampling within one mesh and test point containment within another mesh. I found surface point sampling with containment testing (a sort of surface intersection) and the grid approach to be the fastest.
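As a rough sketch of the sampling-plus-containment variant with Trimesh (file names and the sample count are placeholders; containment tests need reasonably watertight meshes and are faster with the optional pyembree backend):

import trimesh

a = trimesh.load('part_a.obj')
b = trimesh.load('part_b.obj')

# Sample points on the surface of A and count how many fall inside B.
pts, _ = trimesh.sample.sample_surface(a, 5000)
inside = b.contains(pts)            # boolean array, one entry per sample
overlap_score = inside.mean()       # 0.0 when there is no overlap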
By straight voxelization:
If the faces are of similar size (if needed triangulate the large ones), you can use a gridding approach: define a regular 3D grid with a spacing size larger than the longest edge and store one bit per voxel.
Then, for every vertex of the mesh, set the bit of the cell it falls in (this just takes a truncation of the coordinates). By doing this, you will obtain the boundary of the object as a connected surface. You can then obtain an estimate of the volume by means of a 3D flood-fill algorithm, starting either from an inside or an outside voxel. (Starting outside will be easier, but be sure to leave a one-voxel margin around the object.)
Estimating the volumes of both objects as well as intersection or union is straightforward with this machinery. The cost will depend on the number of faces and the number of voxels.
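A minimal numpy sketch of that idea, assuming the vertices already fit inside the chosen grid and the spacing exceeds the longest edge (scipy's binary_fill_holes stands in for the flood fill; the function name and parameters are illustrative):

import numpy as np
from scipy.ndimage import binary_fill_holes

def voxel_occupancy(vertices, origin, spacing, shape):
    # Rasterise the vertices into a boolean grid, then fill the interior.
    grid = np.zeros(shape, dtype=bool)
    idx = np.floor((vertices - origin) / spacing).astype(int)
    idx = np.clip(idx, 0, np.array(shape) - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return binary_fill_holes(grid)

# vol_a, vol_b: grids built with the same origin/spacing/shape for both meshes
# intersection_volume is roughly np.count_nonzero(vol_a & vol_b) * spacing ** 3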
A sample-based approach is what I'd try first. Generate a bunch of points in the AABB of the union of the two objects, and divide the number of points in both A and B by the number of points in A or B. (You can adapt this measure to your use case; it doesn't work very well when A and B have very different volumes.) To check whether a given point is in a given volume, use a crossing-number test, which you can Google. There are acceleration structures that can help with this test, but my guess is that the number of samples that'll give you reasonable accuracy is lower than the number of samples necessary to benefit overall from building the acceleration structure.
As a variant of this, you can check line intersection instead of point intersection: Generate a random (axis-aligned, for efficiency) line, and measure how much of it is contained in A, in B, and in both A and B. This requires more bookkeeping than point-in-polyhedron, but will give you better per-sample information and thus reduce the number of times you end up iterating through all the faces.
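A sketch of the point-sampling version, with the point-in-volume predicates left abstract (they would be your crossing-number test or a library call such as Trimesh's contains; everything here is illustrative):

import numpy as np

def overlap_estimate(contains_a, contains_b, lo, hi, n=20000, seed=None):
    # lo, hi: corners of the AABB of the union of both objects (length-3 arrays)
    rng = np.random.default_rng(seed)
    pts = rng.uniform(lo, hi, size=(n, 3))
    in_a, in_b = contains_a(pts), contains_b(pts)
    union = np.count_nonzero(in_a | in_b)
    return np.count_nonzero(in_a & in_b) / union if union else 0.0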

Sort points in 2D space to make a spline

I have a sequence of points which are distributed in 2D space. They represent a shape but they are not ordered. So, I can plot them as points to give an idea of the shape, but if I plot the line connecting them, I miss the shape because the order of points is not the right order of connection.
I'm wondering how I can put them in the right order such that, if I connect them one by one in sequence, I get a spline showing the shape they represent. I found and tried the convex hull in Matlab, but with no results. The shape could be complex, for example a star, and with the convex hull I get a shape that is far too simplified (many points are not taken into account).
Thanks for help!
EDIT
The image could be anything. I've randomly created one to show you a possible case, with some parts that cut into the shape; the points can also be at different distances from each other.
I've tried the convex hull function in Matlab; that's what I get. Every time the contour has a "sharp corner", I miss it, and the final shape is not what I'm looking for. Also, the Matlab function has no parameter to change the convex hull result (at least I can't see anything in the help).
hull = convhull(coords(:,1),coords(:,2));
plot(coords(hull,1),coords(hull,2),'.r');
You need to somehow order your points so that they form a sequence; in the case of your drawing example, the points can likely be ordered by repeatedly taking the minimal distance to the next not-yet-used point, starting at one end (you'll probably have to provide the end).
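A naive greedy version of that ordering might look like this (a sketch; it assumes the points form a single open curve and that start indexes one of its ends):

import numpy as np

def order_points(points, start=0):
    # points: (N, 2) array; returns the points reordered along the curve
    remaining = list(range(len(points)))
    order = [remaining.pop(start)]
    while remaining:
        last = points[order[-1]]
        d = np.linalg.norm(points[remaining] - last, axis=1)
        order.append(remaining.pop(int(np.argmin(d))))
    return points[order]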
Then you can draw a spline, maybe using Chaikin's corner-cutting algorithm, which locally approximates a Bézier curve.
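Chaikin's corner cutting is only a few lines once the points are ordered (a sketch for an open polyline; note the curve pulls in slightly at the two end points unless you pin them):

import numpy as np

def chaikin(points, iterations=3):
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        # Replace every segment by two points at 1/4 and 3/4 along it.
        q = 0.75 * pts[:-1] + 0.25 * pts[1:]
        r = 0.25 * pts[:-1] + 0.75 * pts[1:]
        new = np.empty((2 * len(q), pts.shape[1]))
        new[0::2], new[1::2] = q, r
        pts = new
    return pts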
You need to start working on this, and post another question with your code, if you are having difficulties.
Alpha shapes may perform better than convex hulls for this problem. Alpha shapes will touch all the points on the exterior of a point cloud, and can even carve out holes.
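If you want to try this quickly in Python, the third-party alphashape package wraps the construction (a sketch; the alpha value must be tuned for your data, alpha = 0 falls back to the convex hull, and very large values can split the result into several polygons):

import alphashape
import matplotlib.pyplot as plt

points = [(0., 0.), (2., 0.), (2., 2.), (1., 3.), (0., 2.), (1., 1.)]  # your point cloud
shape = alphashape.alphashape(points, 0.5)   # a shapely Polygon for moderate alpha
plt.scatter(*zip(*points))
plt.plot(*shape.exterior.xy)
plt.show()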
But for complicated shape reconstruction, I would recommend trying the beta-skeleton based approach discussed in https://people.eecs.berkeley.edu/~jrs/meshpapers/AmentaBernEppstein.pdf
See more details on β-Skeleton at https://en.wikipedia.org/wiki/Beta_skeleton
Quote from the linked article:
The circle-based β-skeleton may be used in image analysis to reconstruct the shape of a two-dimensional object, given a set of sample points on the boundary of the object (a computational form of the connect the dots puzzle where the sequence in which the dots are to be connected must be deduced by an algorithm rather than being given as part of the puzzle).
it is possible to prove that the choice β = 1.7 will correctly reconstruct the entire boundary of any smooth surface, and not generate any edges that do not belong to the boundary, as long as the samples are generated sufficiently densely relative to the local curvature of the surface
Cheers

detect point on a curve boundary

I have boundaries of semi-circle or ellipse shaped objects. Example image is
The boundary can be slightly jagged (when you zoom in). I am looking to detect a point of interest (location x and y) on these curves, where we see a definite change in the shape, such as
There can be two outputs:
No point of interest: we cannot find specific features
Point of interest with x and y location
Currently, I am using Python and OpenCV. I cannot think of an efficient and effective way to solve this problem. Any help will be really appreciated.
Nothing says that others will agree with my closure vote, so ...
I suggest two steps:
Fit an ellipse to the given points. I'm sure you've already found curve-fitting algorithms (and perhaps software packages) by now -- and asking for those is specifically proscribed on Stack Overflow.
Code a small anomaly detector, which works on the difference between the fitted curve and the actual data points.
Step 2 depends heavily on your definition of "point of interest". What are the criteria? I notice that your second point of interest actually lies very close to the fitted curve; it's the region on either side that deviates inward.
I suggest that you do your fitting in polar coordinates, and then consider the result in terms of theta and radius. Think of "flattening" the two curves as a single unit, so that the central angle (theta) is the new x-coordinate, and the distance from the center is the new y-coordinate.
Now, subtract the two curves and plot the difference (or just store this new curve as an array of points). Look for appropriate anomalies in these differences. This is where you have to decide what you need. Perhaps a sufficient deviation in the "r" value (radius, distance from center); perhaps a change in the gradient (find a peak/valley, but not a gently sloping bulge). Do you want the absolute difference, or an integral of the deviation (the area between the fit and the anomaly)? Do you want it linear or squared ... or some other function? Does the width of the anomaly figure into your criteria?
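To make the idea concrete, here is a rough sketch; a truncated Fourier series in theta stands in for the fitted ellipse as the smooth baseline, and the harmonic count and pixel tolerance are values you would tune:

import numpy as np

def flag_anomalies(contour, n_harmonics=4, tol=3.0):
    # contour: (N, 2) array of boundary points, e.g. squeezed from cv2.findContours
    cx, cy = contour.mean(axis=0)
    dx, dy = contour[:, 0] - cx, contour[:, 1] - cy
    theta, r = np.arctan2(dy, dx), np.hypot(dx, dy)

    # Least-squares fit of r(theta) with a few harmonics: the smooth baseline.
    cols = [np.ones_like(theta)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * theta), np.sin(k * theta)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, r, rcond=None)

    # Indices where the boundary deviates from the baseline by more than tol pixels;
    # an empty result means "no point of interest".
    return np.where(np.abs(r - A @ coef) > tol)[0]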
That's what you need to decide. Does this get you moving?

Rotation invariant curve fitting

I am trying to build a Python application that inspects a part consisting of a straight line followed by a curved arc (as seen in the picture). The goal is to find the curvature and arc length of the curved section. My approach so far has been to threshold and skeletonize to produce a series of points. I then use least-squares fitting to fit a piecewise mathematical function composed of a line and an arc. This works great for level parts.
However, if the parts are placed such that the straight section is not flat (parallel with the horizontal axis), I run into problems. My solution thus far has been to fit a line to the left quarter of the image to detect the slope of the straight section. When doing my least-squares fitting I then add in this factor (i.e. add mx + b to the piecewise function). This produces a result that is close.
The error I found in this approach is actually kind of interesting. Least-squares fitting tries to minimize the error along the y-axis for both flat and rotated parts. This is fine for level parts, but for rotated parts the error should really be defined as the distance between the data and the curve along a line perpendicular to the straight section, so that the error is measured the same way in both circumstances.
Any help with the overall design of this problem would be greatly appreciated. I didn't think template matching was a good solution, as my parts have different parameters (i.e. curvature, arc length).
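A sketch of one possible workaround: rotate the skeleton points by the detected slope of the straight section before fitting, so that ordinary vertical least-squares residuals become (approximately) perpendicular to the straight section; the names and the centroid choice are illustrative:

import numpy as np

def derotate(points, angle):
    # points: (N, 2) skeleton coordinates; angle = np.arctan(m), where m is the
    # slope of the line fitted to the left quarter of the image.
    c, s = np.cos(-angle), np.sin(-angle)
    R = np.array([[c, -s],
                  [s,  c]])
    centred = points - points.mean(axis=0)
    return centred @ R.T   # fit the line + arc model in this frame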
Here is an example where the straight section is a horizontal line:
