For a drone competition, I have:
A- reference 3D trajectory (around 30 points or less)
B- tolerance from that ideal trajectory (say in cm or less)
C- experimental 3D points read from the drone's GPS (about 5000 points)
A bit like in:
RED: reference (A)
BLUE: experimental (C)
GREEN: "tolerance" (A+B, well not really, but you get the idea...)
I'd like to know the best way to check that the "experimental 3D points" (C) all fall within the "reference 3D trajectory" (A) plus or minus the expected "tolerance" (B).
Closest matches I've found here is:
Finding out if a curve is inside or outside a boundary in python. X axis with different resolutions (python)
Approximating data with a multi segment cubic bezier curve and a distance as well as a curvature constraint
Fit Curve-Spline to 3D Point Cloud
How to compare two 3D curves in Python?
https://stackoverflow.com/questions/8980101/what's-the-best-method-to-compare-original-trajectory-with-two-compressed-trajec
From github I've found something close:
https://pypi.org/project/similaritymeasures/
Yet I'd like to be sure I can compare trajectories with different numbers of points ("reference" +/- "tolerance" provides a simplified "tunnel"). That difference in dataset size is the main difficulty, since the drone can stop, and perhaps even go backward a little, before resuming the race.
Maybe displaying the result using Matplotlib and/or Payton as well.
While timing is a factor, it would also be nice to compute some stats, like the min/max deviation from the "reference 3D trajectory" (A).
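Here is the kind of check I have in mind, as a rough sketch (assuming SciPy is available; the densification step exists because a k-d tree measures distance to the nearest stored point, not to a segment):

```python
import numpy as np
from scipy.spatial import cKDTree

def tunnel_check(reference, experimental, tolerance):
    """reference: (N, 3) waypoints; experimental: (M, 3) GPS fixes;
    tolerance: max allowed distance, in the same units as the points."""
    # Densify the reference polyline so that nearest-vertex distance
    # approximates nearest-point-on-segment distance.
    dense = []
    for p, q in zip(reference[:-1], reference[1:]):
        seg_len = np.linalg.norm(q - p)
        n = max(2, int(np.ceil(seg_len / (tolerance / 4.0))))
        t = np.linspace(0.0, 1.0, n, endpoint=False)[:, None]
        dense.append(p + t * (q - p))
    dense.append(reference[-1:])
    dense = np.vstack(dense)
    # Distance from every experimental point to the nearest reference point.
    dists, _ = cKDTree(dense).query(experimental)
    return bool(dists.max() <= tolerance), dists
```

`dists.min()` and `dists.max()` would then give the deviation stats, and the check is insensitive to the drone stopping or backtracking, since each GPS fix is judged on its own.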
Probably out of scope, but:
https://www.researchgate.net/publication/281188521_Model_Predictive_Path-Following_Control_of_an_ARDrone_Quadrotor
https://www.researchgate.net/publication/247935750_UAV_Motion_Estimation_using_Hybrid_Stereoscopic_Vision
Related
I have about 300,000 points defining my 3D surface. I would like to know, if I dropped an infinitely stiff sheet onto my 3D surface, what the equation of that plane would be. I know I need to find the 3 points the sheet would rest on, as those define a plane, but I'm not sure how to find my 3 points out of the ~300,000. You can assume this 3D surface is very bumpy and that the sheet will most likely rest on 3 "hills".
Edit: Some more background. This is point-cloud data from a scan of a 3D surface which is nearly flat. What I would like to know is how this object would rest if I flipped it over and put it on a completely flat surface. I realize the surface may be able to rest on the table in various ways depending on the density and thickness of the object, but you can assume the number of ways is finite, and I would like to know all of them just in case.
Edit: After looking at some point cloud libraries, I'm thinking of computing the curvature using a k-d tree (via SciPy) and only looking at regions that have negative curvature; there should be 3+ such regions, so some combinatorics plus iteration should give the correct 3 points for the plane(s).
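As a sketch of a different (possibly simpler) angle than the curvature idea: the candidate support triples are facets of the point cloud's convex hull, which SciPy can compute directly. This leaves out the stability test (the centre of mass must project inside the facet), and the `down` direction is an assumption:

```python
import numpy as np
from scipy.spatial import ConvexHull

def resting_candidates(points, down=np.array([0.0, 0.0, -1.0])):
    """Candidate resting facets: triangles of the convex hull whose outward
    normal has a positive component along `down` (an assumed table
    direction). A full answer would also test that the centre of mass
    projects inside the facet, which is omitted here."""
    hull = ConvexHull(points)
    candidates = []
    for eq, tri in zip(hull.equations, hull.simplices):
        normal = eq[:3]                   # outward unit normal of the facet
        if np.dot(normal, down) > 1e-12:  # facet faces the table
            candidates.append((points[tri], eq))  # 3 support points + plane
    return candidates
```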
I am trying to implement an objective function that minimizes the overlap of 2 irregularly shaped 3D objects. While the most accurate measurement of the overlap is the intersection volume, it's too computationally expensive, as I am dealing with complex objects that have 1000+ faces and are not convex.
I am wondering if there are other measurements of intersection between 3D objects that are much faster to compute. There are 2 requirements for the measurement: 1. when the measurement is 0, there should be no overlap; 2. the measurement should be a scalar (not a boolean value) indicating the degree of overlap, though this value doesn't need to be very accurate.
Possible measurements I am considering include some sort of 2D surface area of intersection, or 1D penetration depth. Alternatively, I could estimate volume with a sample-based method that samples points inside one object and tests the percentage of points that also lie in the other object. But I don't know how computationally expensive it is to sample points inside a complex 3D shape, or to test whether a point is enclosed by such a shape.
I would really appreciate any advice, code, or equations on this matter. Also, if you can suggest any libraries (preferably Python libraries) that accept .obj, .ply, etc. files and perform 3D geometry computations, that would be great! I will also post here if I find a good method.
Update:
I found a good python library called Trimesh that performs all the computations mentioned by me and others in this post. It computes the exact intersection volume with the Blender backend; it can voxelize meshes and compute the volume of the co-occupied voxels; and it can perform surface and volumetric point sampling within one mesh and test point containment within another. I found surface point sampling with containment testing (a sort of surface intersection) and the grid approach to be the fastest.
By straight voxelization:
If the faces are of similar size (if needed triangulate the large ones), you can use a gridding approach: define a regular 3D grid with a spacing size larger than the longest edge and store one bit per voxel.
Then, for every vertex of the mesh, set the bit of the cell it falls in (this just takes a truncation of the coordinates). By doing this, you obtain the boundary of the object as a connected surface. You can then estimate the volume by means of a 3D flood-filling algorithm, starting either from an inside or an outside voxel. (Outside is easier, but be sure to leave a one-voxel margin around the object.)
Estimating the volumes of both objects as well as intersection or union is straightforward with this machinery. The cost will depend on the number of faces and the number of voxels.
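A sketch of that machinery, using SciPy's `binary_fill_holes` in the role of the flood fill, on boolean grids that already mark the boundary voxels (building those grids from the mesh vertices is as described above):

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def voxel_volumes(shell_a, shell_b, voxel_size):
    """shell_a, shell_b: boolean grids marking the boundary voxels of each
    object. binary_fill_holes plays the role of the flood fill: it fills
    every cavity not connected to the grid border."""
    solid_a = binary_fill_holes(shell_a)
    solid_b = binary_fill_holes(shell_b)
    v = voxel_size ** 3
    return solid_a.sum() * v, solid_b.sum() * v, (solid_a & solid_b).sum() * v
```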
A sample-based approach is what I'd try first. Generate a bunch of points in the union's bounding AABB, and divide the number of points in both A and B by the number of points in A or B. (You can adapt this measure to your use case -- it doesn't work very well when A and B have very different volumes.) To check whether a given point is inside a given volume, use a crossing-number test, which you can Google. There are acceleration structures that can help with this test, but my guess is that the number of samples that'll give you reasonable accuracy is lower than the number of samples needed to benefit overall from building the acceleration structure.
As a variant of this, you can check line intersection instead of point intersection: Generate a random (axis-aligned, for efficiency) line, and measure how much of it is contained in A, in B, and in both A and B. This requires more bookkeeping than point-in-polyhedron, but will give you better per-sample information and thus reduce the number of times you end up iterating through all the faces.
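To illustrate the point-sampling estimator, here is a sketch with axis-aligned boxes standing in for the meshes; for real meshes you would replace the trivial `contains` with a crossing-number test:

```python
import numpy as np

def overlap_fraction(box_a, box_b, n=200_000, seed=0):
    """Monte Carlo estimate of |A intersect B| / |A union B|.
    Each box is an (lo, hi) pair of 3-vectors; boxes stand in for the
    meshes, so containment is trivial here."""
    lo = np.minimum(box_a[0], box_b[0])
    hi = np.maximum(box_a[1], box_b[1])
    rng = np.random.default_rng(seed)
    pts = rng.uniform(lo, hi, size=(n, 3))

    def contains(box, p):
        # For a real mesh, substitute a point-in-mesh (ray casting) test.
        return np.all((p >= box[0]) & (p <= box[1]), axis=1)

    in_a, in_b = contains(box_a, pts), contains(box_b, pts)
    union = in_a | in_b
    return (in_a & in_b).sum() / max(union.sum(), 1)
```

The result is 0 exactly when no sample lands in both shapes, and it grows with the degree of overlap, which matches the two stated requirements.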
I have a numpy array depicting a one-pixel wide, discrete, connected curve. This curve is obtained by the Skeletonization operation of image processing. I am trying to find the curvature of the above curve at an arbitrary point, to detect bends/kinks (which will have high curvature value).
I tried to implement the above using the general formula for curvature. However, since this a pixelated, discrete curve, whose generating function is unknown, I tried to resort to using numpy gradient instead.
The problem I see with the above is that, since the curve is one pixel wide, the slope at any point can only be 0, 1, or infinity. As a result, the curvature values that I get are mostly meaningless or useless.
I am looking for suggestions on where to start in order to get a smooth curve out of the above, so that I can calculate curvature in a more meaningful way. Can somebody suggest a mathematical operation or convolution that I can apply to achieve this? Below is a representative binary image that I have.
P.S. I am very, very new to image processing, so references to standard algorithms (in math books) or library implementations would be very helpful.
An established way to do this is to fit a low-order parametric curve to each of the skeletonized points using two or more neighbouring points. Then you compute curvature at the point using the fitted curve parameters with an analytic formula. Several curve models can be used. The two main models are:
A circle. The radius of curvature, R, is the reciprocal of the curvature. For a curve, it equals the radius of the circular arc which best approximates the curve at that point. You can fit a circle to a set of 2D data points using various methods; a python library that has implemented several is here.
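If you want to avoid an extra dependency, one simple algebraic method is the Kasa fit, which turns circle fitting into a linear least-squares problem:

```python
import numpy as np

def fit_circle(x, y):
    """Kasa algebraic circle fit: rewrite x^2 + y^2 = 2*a*x + 2*b*y + c
    as a linear least-squares problem in (a, b, c)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return a, b, r  # centre (a, b), radius r; curvature = 1 / r
```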
A quadratic. This can be fitted to the point and its neighbours, and curvature can then be estimated by second-order differentiation of the fitted curve. You can use numpy.polyfit to fit this model. A simple strategy is to first estimate the tangent vector at the point, by fitting a local line (e.g. with polyfit using an order-1 curve). Then you rotate the points to align the tangent vector with the x axis. Finally you fit a 1D quadratic f(x) to the rotated points using polyfit.
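A sketch of that strategy (with one substitution: the tangent is estimated from an SVD of the centred neighbourhood rather than an order-1 polyfit, which avoids trouble when the tangent is near vertical):

```python
import numpy as np

def curvature_at(points, idx, k=5):
    """Curvature at points[idx] from up to k neighbours on each side.
    The tangent is the principal axis (SVD) of the centred neighbourhood;
    the points are rotated into that frame and a 1D quadratic is fitted
    with numpy.polyfit."""
    nb = points[max(0, idx - k): idx + k + 1].astype(float)
    c = nb.mean(axis=0)
    q = nb - c
    _, _, vt = np.linalg.svd(q, full_matrices=False)
    t, n = vt[0], vt[1]                 # tangent and normal directions
    u, v = q @ t, q @ n                 # coordinates in the tangent frame
    a2, a1, _ = np.polyfit(u, v, 2)     # v ~ a2*u^2 + a1*u + a0
    u0 = (points[idx].astype(float) - c) @ t
    d1, d2 = 2 * a2 * u0 + a1, 2 * a2   # f'(u0), f''(u0)
    return abs(d2) / (1 + d1 ** 2) ** 1.5
```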
The tricky thing about making any curvature estimator is that curvature can be estimated at different scales. For example, do I want my estimator to be sensitive to high-frequency detail, or is that actually noise? This decision manifests in the choice of neighbourhood size. Too small, and errors from noise and discretization lead to unstable estimates. Too large, and there may be large modelling error (the error from approximating the curve as a parametric function). Generally you have to select the best neighbourhood size yourself.
You're also going to have some poor curvature estimates at junction points, but that's largely unavoidable as curvature is not well defined there. A naïve fix could be to segment all paths at junction points, and then estimate curvature on each path individually.
Toby gave an excellent suggestion regarding junction points: detect the junction points and take each line in between those independently.
Detecting junction points (and end points). This is quite simple: all pixels that are set and have more than two set neighbors are junction points. All pixels that are set and have exactly one set neighbor are end points. Detect all those points and put their coordinates in a list.
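A sketch of that detection with a neighbour-count convolution (note that this naive rule also flags pixels diagonally adjacent to a crossing, so you may want to merge clusters of junction pixels afterwards):

```python
import numpy as np
from scipy.ndimage import convolve

def junctions_and_endpoints(skel):
    """skel: 2D boolean skeleton image (8-connected, one pixel wide).
    Counts the set 8-neighbours of every pixel; >2 means junction,
    ==1 means end point."""
    s = skel.astype(np.uint8)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    nbrs = convolve(s, kernel, mode='constant')
    junctions = np.argwhere((s == 1) & (nbrs > 2))
    endpoints = np.argwhere((s == 1) & (nbrs == 1))
    return junctions, endpoints
```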
Finding the lines in between pairs of points. Starting at each coordinate in your list, look for a line starting there. Note that for the junction points, you'll have at least three lines starting there. If you do this, you'll find each line twice. You can remove duplicates by reversing the lines that end to the left of where they start (and if the two end points are in the same image column, take the one on top as the start). Now they will be directly comparable, so you can delete the duplicates (or not store them in the first place). Note that comparing just the start and end points is not sufficient, as you can have different lines with the same start and end points.
Tracing each line. The step above requires that you trace each line. See if you can figure it out, it's fun! Here is a description of an algorithm that traces the outline of objects, you can use it as inspiration as this problem is very similar. Store a vector with x-coordinates and one with y-coordinates for each line.
Smoothing the lines. As you noticed, consecutive steps are in one of 8 directions, so angles are strongly discretized. You can prevent this by smoothing the coordinate vectors. This is a quick-and-dirty trick, but it works. Think of these vectors as 1D images, and apply a smoothing filter (I prefer the Gaussian filter for many reasons). Here you filter the vector with x-coordinates separately from the vector with y-coordinates.
Computing the curvature. Finally, you can compute the curvature of the curve, as the norm of the derivative of the unit normal to the curve. Don't forget to take the distance between points into account when computing derivatives!
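Putting the last two steps together, a sketch (sigma is a choice you'd tune; the curvature formula below is invariant to the parameterization, which takes care of the spacing between points):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def curve_curvature(x, y, sigma=3.0):
    """Smooth the pixel-chain coordinates as two 1D signals, then compute
    kappa = (x'*y'' - y'*x'') / (x'^2 + y'^2)^(3/2).
    np.gradient with unit spacing is fine because the formula is invariant
    to the parameterization. Pass mode='wrap' to the filter for closed
    curves; the default 'reflect' suits open lines."""
    xs = gaussian_filter1d(np.asarray(x, float), sigma)
    ys = gaussian_filter1d(np.asarray(y, float), sigma)
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5
```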
I have a sensor that continually collects data (shown in blue) every minute and outputs a voltage. I have a reference sensor collecting data (shown in red) that outputs in the units I am interested in. I would like to determine a scaling factor so that I can scale the blue sensor's data to match the red sensor's data.
Normally, I would do a simple linear regression between the values of the two sensors at any given time, which would give me a scaling factor based on the slope of the regression. I have noticed, however, that the red sensor is slower at sensing a change in the environment, and can be anywhere from 6 to 15 minutes behind -- this makes a regression difficult, because at any given time the two sensors may be measuring different things.
I was wondering if there is any sort of curve fitting that can be performed such that I can extract a scaling factor to scale the blue sensor's data to match the red sensor's.
I typically work in Python, so any Python packages (e.g. Numpy/Scipy) that would help with this would be especially helpful.
Thanks for the help. What I ended up doing was finding all the local maxima and minima on the reference curve, then used those peak locations to search for the same maxima or minima on the sample curve. I basically used the reference curve's maxima/minima points as the center of a "window" and I would search for the highest/lowest point on the sample curve within a few minutes of the center point.
Once I had found all the matched maxima/minima on the sample curve, I then could perform a linear regression between these points to determine a scaling factor.
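A sketch of that procedure (the window size and the peak-detection parameters are assumptions to tune for real data):

```python
import numpy as np
from scipy.signal import find_peaks

def scaling_from_extrema(ref, sample, max_lag=15):
    """Find extrema on the reference, search a +/- max_lag window on the
    (lagging) sample for the matching extremum, then regress the matched
    pairs to get the scaling factor."""
    pairs = []
    for idxs, pick in [(find_peaks(ref)[0], np.argmax),
                       (find_peaks(-ref)[0], np.argmin)]:
        for i in idxs:
            lo, hi = max(0, i - max_lag), min(len(sample), i + max_lag + 1)
            j = lo + pick(sample[lo:hi])     # matching extremum on the sample
            pairs.append((ref[i], sample[j]))
    rv, sv = np.array(pairs).T
    slope, intercept = np.polyfit(sv, rv, 1)  # ref ~ slope * sample + intercept
    return slope, intercept
```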
I have boundaries of semi-circle or ellipse shaped objects. An example image is shown below.
The boundary can be slightly jagged (when you zoom in). I am looking to detect a point of interest (location x and y) on these curves, where we see a definite change in the shape, such as
There can be two outputs:
No point of interest: we cannot find specific features
Point of interest with x and y location
Currently, I am using Python and OpenCV. I cannot think of an efficient and effective way to solve this problem. Any help will be really appreciated.
Nothing says that others will agree with my closure vote, so ...
I suggest two steps:
Fit an ellipse to the given points. I'm sure you've already found curve-fitting algorithms (and perhaps software packages) by now -- and asking for those is specifically proscribed on Stack Overflow.
Code a small anomaly detector, which works on the difference between the fitted curve and the actual data points.
Step 2 depends heavily on your definition of "point of interest". What are the criteria? I notice that your second point of interest actually lies very close to the fitted curve; it's the region on either side that deviates inward.
I suggest that you do your fitting in polar coordinates, and then consider the result in terms of theta and radius. Think of "flattening" the two curves as a single unit, so that the central angle (theta) is the new x-coordinate, and the distance from the center is the new y-coordinate.
Now, subtract the two curves and plot the difference (or just store this new curve as an array of points). Look for appropriate anomalies in these differences. This is where you have to decide what you need. Perhaps a sufficient deviation in the "r" value (radius, distance from center); perhaps a change in the gradient (find a peak/valley, but not a gently sloping bulge). Do you want the absolute difference, or an integral of the deviation (the area between the fit and the anomaly)? Do you want it linear or squared, or some other function? Does the width of the anomaly figure into your criteria?
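A sketch of the unrolling-and-residual idea (with two stand-ins: the centroid in place of a properly fitted ellipse centre, and a circular moving average of r in place of the fitted curve):

```python
import numpy as np

def polar_residual(points, center=None, frac=20):
    """Unroll a closed boundary: theta becomes the x-coordinate, radius the
    y-coordinate. Anomalies show up as large residuals of r against a
    smooth baseline."""
    pts = np.asarray(points, float)
    if center is None:
        center = pts.mean(axis=0)          # crude stand-in for an ellipse fit
    d = pts - center
    theta = np.arctan2(d[:, 1], d[:, 0])
    r = np.hypot(d[:, 0], d[:, 1])
    order = np.argsort(theta)
    theta, r = theta[order], r[order]
    k = max(5, len(r) // frac) | 1         # odd window length
    kernel = np.ones(k) / k
    baseline = np.convolve(np.pad(r, k // 2, mode='wrap'), kernel, mode='valid')
    return theta, r - baseline             # anomalies = large residuals
```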
That's what you need to decide. Does this get you moving?