How to calculate the average/best Rotation between two coordinate systems? - python

Through a sensor I get the rotation from points in coordinate system A to points in coordinate system B. The measured rotations between the coordinate systems are not 100% identical due to the noise of the sensor.
How can I determine the average or optimal rotation matrix between the coordinate systems? This is similar to this problem: stackoverflow: Averaging Quaternions, but contrary to that I do not want to use quaternions; I want to try a least-squares approach.
Given: Rba(n), the rotation matrix from a to b, measured at n different points in time.
Wanted: the optimal Rba.
My approach: Minimization of the squared distance.
First I define n random points in space and apply the rotations to these points.
Now I can calculate the rotation by means of the Kabsch algorithm, using singular value decomposition to minimize the squared distance between the input points and the transformed points.
However, what I don't understand is that the calculated rotation matrix seems to be dependent on the input points. That is, I get different rotation matrices as a result for different input points, although the applied rotation matrices Rba(n) remain the same.
Why is that? And what is the right way?
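For concreteness, a minimal sketch of the approach described above, assuming the measured rotations are stacked in a hypothetical array Rba of shape (n, 3, 3):

import numpy as np

def average_rotation(Rba, n_points=100, seed=0):
    # Generate random points in frame a and map them into frame b with
    # every measured rotation, then recover the single best rotation.
    rng = np.random.default_rng(seed)
    P = rng.normal(size=(n_points, 3))
    src = np.tile(P, (len(Rba), 1))               # points in frame a
    dst = np.concatenate([P @ R.T for R in Rba])  # the same points in frame b
    # Kabsch step: SVD of the cross-covariance gives the least-squares rotation.
    U, _, Vt = np.linalg.svd(src.T @ dst)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    return Vt.T @ D @ U.T                         # optimal rotation from a to b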

Related

How to determine if an object is flat or not from depth image?

I have a 2x2 matrix of distances from a depth sensor.
The matrix is cropped so that only the points we are interested in are in the frame (all the points in the cropped image belong to the object).
My question is how can we determine if this object is flat or not?
The depth image is acquired from a RealSense D435. I read the depth image and then multiply it by depth_scale.
The object is recognized by an AI model on the RGB image, which is aligned with the depth image.
And I have 4 points on the object, so all the distances in that rectangle are distances from the sensor to the object.
My first idea was the standard deviation of all the points. But this falls apart if the image is taken from an angle (since the standard deviation won't be 0 then).
From an angle, the distance of a flat object changes uniformly along the y axis. Maybe we can use this information somehow?
The 2x2 matrix is a NumPy array in Python. Maybe there are libraries which do this already.
After reprojecting your four depth measurements to 3D space, this becomes a problem of deciding whether your set of points is coplanar. There are several ways you can go about it.
One way to do it is to reproject the points to 3D and fit a plane to all four of them there. Since you're fitting a plane to four points in three dimensions, you get an over-determined system, and it's very unlikely that all points would lie exactly on the estimated plane. At this stage, you could prescribe some tolerance to determine "goodness of fit". For instance, you could look at the R^2 coefficient.
To fit the plane you can use scipy.linalg.lstsq. Here's a good description of how it can be done: Fit plane to a set of points in 3D.
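A minimal sketch of such a fit, assuming points is an (N, 3) NumPy array of the reprojected 3D points and the plane model is z = a*x + b*y + c:

import numpy as np
from scipy.linalg import lstsq

def plane_fit_rms(points):
    # Solve the over-determined system [x, y, 1] @ [a, b, c] = z in the
    # least-squares sense, then report the RMS residual as a flatness score.
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, _, _, _ = lstsq(A, points[:, 2])
    rms = np.sqrt(np.mean((A @ coeffs - points[:, 2]) ** 2))
    return coeffs, rms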
Another way to approach the problem is by calculating the volume of a tetrahedron spanned by the four points in 3D. If they are coplanar (or close to coplanar), the volume of such a tetrahedron should be equal to (or close to) 0. Assuming your points reprojected to 3D can be described by (x_0, y_0, z_0), ..., (x_3, y_3, z_3), the volume of the tetrahedron is equal to:
volume = abs(np.linalg.det(tetrahedron)) / 6, where
tetrahedron = np.array([[x_0, y_0, z_0, 1], [x_1, y_1, z_1, 1], [x_2, y_2, z_2, 1], [x_3, y_3, z_3, 1]])
To check if your points are on the same plane (equivalently, if the tetrahedron has a small enough volume), it is now sufficient to check whether
volume < TOL
for some defined small tolerance value, which must be determined experimentally.
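Put together, a minimal sketch, assuming points is a (4, 3) NumPy array of the reprojected points and TOL is the experimentally chosen threshold:

import numpy as np

def is_flat(points, TOL=1e-6):
    # Homogeneous coordinates: append a column of ones to the four 3D points.
    tetrahedron = np.hstack([points, np.ones((4, 1))])
    volume = abs(np.linalg.det(tetrahedron)) / 6.0
    return volume < TOL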
You can define a surface by choosing three of the four 3D points.
Evaluate the distance from the remaining point to the surface.
As for how to choose the three points: it is probably best to pick the combination that maximizes the area of the triangle.
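A minimal sketch of the distance computation, assuming p0, p1, p2 span the chosen triangle and q is the remaining point (all NumPy arrays of shape (3,)):

import numpy as np

def point_to_plane_distance(p0, p1, p2, q):
    n = np.cross(p1 - p0, p2 - p0)   # normal of the plane through p0, p1, p2
    n = n / np.linalg.norm(n)
    return abs(np.dot(q - p0, n))    # perpendicular distance from q to the plane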

Laplacian smoothing for inverse problem in python

I am currently working on an inverse problem, scripted in Python, in which observed GPS data is used to infer the displacement below the earth (e.g. an earthquake). The displacement beneath the earth can be discretized on a grid, and the equation that describes this problem can be written as:
G(n x n) * S(n x 1) = U(n x 1)    [1]
where G is our model matrix that describes the problem, S is the vector of displacements below the earth, and U is the vector of surface observations. In this case n is the number of grid points.
To penalize "non-physical" solutions we can regularize the inverse problem by assuming that the displacement at a grid point is similar to that of its neighbors, via a Laplacian smoothing matrix. Mathematically, what we want is to minimize the Laplacian of the displacement, which in matrix form can be written as:
F(n x n) * S(n x 1) = 0(n x 1)    [2]
where F is the Laplacian matrix, S is the displacement column vector and "0" is a zero column vector. n is the number of grid points used in the inversion. Both equations [1] and [2] can be optimized with methods such as scipy.optimize.nnls, which is used in this script.
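For reference, a minimal sketch of stacking [1] and [2] into a single system for scipy.optimize.nnls, assuming G, F and U are already built as NumPy arrays and lam is a hypothetical regularization weight:

import numpy as np
from scipy.optimize import nnls

def solve_regularized(G, F, U, lam=1.0):
    A = np.vstack([G, lam * F])                    # stack equations [1] and [2]
    b = np.concatenate([U, np.zeros(F.shape[0])])  # right-hand side [U; 0]
    S, rnorm = nnls(A, b)                          # non-negative least squares
    return S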
The problem is in building the Laplacian matrix; I am not sure how I should start to build it. Is there a SciPy or other library method that can achieve what I want? I have found some SciPy methods for Laplacian smoothing used in image processing, but I don't know if they work for this particular case. The other option is building the Laplacian with a finite-difference method, but I am unsure how I should start creating it.
Thank you
Maybe the NetworkX library in Python could be an option. Considering that the displacement beneath the earth can be discretized on a grid, you can build a network G(N, E) where N are the nodes of your grid and E the edges connecting neighboring nodes.
To each node and edge you can attach a weight value (e.g. a displacement measure).
Moreover, NetworkX has a function for computing the Laplacian matrix of such a graph (networkx.laplacian_matrix).
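A minimal sketch, assuming a hypothetical rectangular grid of nrows x ncols displacement nodes:

import networkx as nx

nrows, ncols = 10, 10                 # hypothetical grid dimensions
G = nx.grid_2d_graph(nrows, ncols)    # nodes linked to their grid neighbors
F = nx.laplacian_matrix(G).toarray()  # dense Laplacian, shape (nrows*ncols, nrows*ncols)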

Uniform sampling from an ellipsoidal confidence region

I have a 4-dimensional ellipsoid from which I want to draw samples uniformly. I thought of an approach using a hypercube around the ellipsoid: draw a sample from the cube and check whether it is in the ellipsoid. But the volume ratio of the ellipsoid to the hypercube in 4 dimensions is only about 0.3, which means I have only a 30 percent success rate. As my algorithm has speed issues, I don't want to use this approach. I have also been looking at inverse transform sampling. Can you give me an insight on how to do this with a 4-dimensional ellipsoid?
You can transform your hyperellipsoid to a sphere.
So the algorithm below is given for the sphere, but its output can easily be transformed to your ellipsoid.
Draw from a Gaussian distribution N(0,1) for all coordinates x1 to x4, giving x = [x1, x2, x3, x4].
Normalize the vector x. ==> You have obtained vectors uniformly distributed on the surface of the unit sphere.
Now draw a radius parameter u uniformly from [0,1] for the inner point of the unit sphere.
p = u**(1/4) * x is then a vector uniformly distributed within the 4-dimensional unit sphere.
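A minimal sketch of these steps; mapping the result back to the ellipsoid (for instance through the Cholesky factor of its shape matrix plus its center) depends on how your ellipsoid is parameterized:

import numpy as np

def sample_unit_ball_4d(rng=np.random.default_rng()):
    x = rng.normal(size=4)      # N(0,1) draw for each coordinate x1..x4
    x /= np.linalg.norm(x)      # uniform direction on the unit sphere's surface
    u = rng.uniform()           # radius parameter from [0, 1]
    return u ** 0.25 * x        # uniform sample inside the 4D unit ball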

Two dimensional least squares fitting

I have a two-dimensional data set of fixed dimensions (xLen and yLen) which contains a sine curve.
I've already determined the frequency of the sine curve, and I've generated my own sine data with that frequency using
SineData[x][y] = math.sin((2*math.pi*freqX*x)/xLen + (2*math.pi*freqY*y)/yLen)
where freqX and freqY are the oscillation frequencies in the X and Y directions for the curve.
But now I'd like to do a linear least squares fit (or something similar), so that I can fit the right amplitude. As far as I know, a linear least squares is the right way to go, but if there's another way that's fine as well.
The leastsq function in SciPy doesn't do a multidimensional fit. Is there a Python implementation of a 2-/multidimensional least-squares fitting algorithm?
Edit: I found the 2-dimensional frequency of the sine wave from a 2D FFT. The data contains a 2D sine plus noise, so I picked only the largest peak of the 2D FFT and took its inverse. Now I have a sine curve, but with an amplitude that's off. Is there a way to do a 2-dimensional least-squares fit (or similar) and fit the amplitude?
You might also consider a 2D finite/discrete Fourier transform (FFT/DFT) if your data is well served by using trig functions.
NumPy has a DFT solution built in.
There are lots of places to help you get started; Google found this one.
Start with your original data. The transform will tell you if your frequency solution is correct and if there are other frequencies that are also significant.
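A minimal sketch of locating the dominant frequency with NumPy's built-in FFT, assuming data is a 2D NumPy array of the samples:

import numpy as np

def dominant_frequency(data):
    spectrum = np.fft.fft2(data)
    spectrum[0, 0] = 0            # ignore the constant (DC) component
    fy, fx = np.unravel_index(np.argmax(np.abs(spectrum)), spectrum.shape)
    return fx, fy                 # dominant frequency bins along each axis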
In least-squares fitting, one minimizes a residual function, perhaps chi-square. Since this involves summing, over all points, estimates of the squared difference between model and data, the number of dimensions is "forgotten" in forming the residual. Thus all the values in the 2D difference array can be copied to a 1D array and returned from the residual function supplied to, for example, leastsq. An example for complex-to-real rather than 2D-to-1D is given in my answer to this question: Least Squares Minimization Complex Numbers
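A minimal sketch of this flattening trick, assuming data is the 2D array and freqX, freqY come from the FFT step above:

import numpy as np
from scipy.optimize import leastsq

def residuals(params, data, freqX, freqY):
    amp = params[0]
    ny, nx = data.shape
    y, x = np.mgrid[0:ny, 0:nx]
    model = amp * np.sin(2*np.pi*freqX*x/nx + 2*np.pi*freqY*y/ny)
    return (model - data).ravel()   # flatten the 2D difference to 1D

# usage, with data, freqX and freqY defined:
# amp_fit, _ = leastsq(residuals, x0=[1.0], args=(data, freqX, freqY))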

Width of an arbitrary polygon

I need a way to characterize the size of sets of 2-D points, so I can determine whether to render them as individual points in a space or as representative polygons, dependent on the scale of the viewport. I already have an algorithm to calculate the convex hull of the set to produce the representative polygon, but I need a way to characterize its size. One obvious measure is the maximum distance between points on the convex hull, which is the diameter of the set. But I'm really more interested in the size of its cross-section perpendicular to its diameter, to figure out how narrow the bounding polygon is. Is there a simple way to do this, given the sorted list of vertices and the indices of the furthest points (ideally in Python)?
Or alternatively, is there an easy way to calculate the radii of the minimal area bounding ellipse of a set of points? I have seen some approaches to this problem, but nothing that I can readily convert to Python, so I'm really looking for something that's turnkey.
You can compute:
the size of its cross-section perpendicular to its diameter
with the following steps:
Find the convex hull
Find the two points a and b which are furthest apart
Find the direction vector d = (a - b).normalized() between those two
Rotate your axes so that this direction vector lies horizontal, using the matrix:
[ d.x, d.y]
[-d.y, d.x]
Find the minimum and maximum y value of points in this new coordinate system. The difference is your "width"
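A minimal sketch of these steps, assuming points is an (N, 2) NumPy array:

import numpy as np
from scipy.spatial import ConvexHull

def width_perpendicular_to_diameter(points):
    hull = points[ConvexHull(points).vertices]
    # find the two hull points that are furthest apart (the diameter)
    dists = np.linalg.norm(hull[:, None, :] - hull[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(dists), dists.shape)
    d = (hull[i] - hull[j]) / dists[i, j]         # normalized direction vector
    R = np.array([[d[0], d[1]], [-d[1], d[0]]])   # rotates the diameter onto the x-axis
    ys = (hull @ R.T)[:, 1]
    return ys.max() - ys.min()                    # extent perpendicular to the diameter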
Note that this is not a particularly good definition of "width" - a better one is:
The minimal perpendicular distance between two distinct parallel lines each having at least one point in common with the polygon's boundary but none with the polygon's interior
Another useful definition of size might be twice the average distance between points on the hull and the center:
center = convexhullpoints.mean(axis=0)  # assuming an (N, 2) NumPy array of hull points
size = 2 * np.linalg.norm(convexhullpoints - center, axis=1).mean()
