Laplacian smoothing for an inverse problem in Python

I am currently working on an inverse problem, scripted in Python, in which observed GPS data is used to infer the displacement below the earth's surface (e.g. from an earthquake). The displacement beneath the earth can be discretized on a grid, and the equation that describes this problem can be written as:
G(n x n) * S(n x 1) = U(n x 1)   [1]
where G is the model matrix that describes the problem, S is the vector of displacements below the earth, and U is the vector of surface observations. Here n is the number of grid points.
To penalize "non-physical" solutions we can regularize the inverse problem by assuming that the displacement at a grid point is similar to that of its neighbors, via a Laplacian smoothing matrix. Mathematically, we want to minimize the Laplacian of the displacement, which in matrix form can be written as:
F(n x n) * S(n x 1) = 0(n x 1)   [2]
where F is the Laplacian matrix, S is the displacement column vector, and "0" is a zero column vector; n is the number of grid points used in the inversion. Both equations [1] and [2] can be solved together with methods such as scipy.optimize.nnls, which is what this script uses.
The problem is building the Laplacian matrix: I am not sure how to start. Is there a SciPy (or other library) method that can achieve what I want? I have found some SciPy routines for Laplacian smoothing used in image processing, but I don't know whether they work for this particular case. The other option is building the Laplacian with a finite-difference scheme, but I am unsure how to start creating it.
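For reference, a minimal sketch of the finite-difference route, assuming a regular ny x nx grid with unit spacing (the grid sizes below are placeholders):

import scipy.sparse as sp

# Placeholder grid dimensions; F will have shape (ny*nx, ny*nx),
# with grid points ordered row by row (x index varying fastest).
ny, nx = 10, 10

def laplacian_1d(m):
    # 1-D second-difference operator: -2 on the diagonal, 1 on the off-diagonals.
    return sp.diags([1.0, -2.0, 1.0], offsets=[-1, 0, 1], shape=(m, m))

# 2-D finite-difference Laplacian as a Kronecker sum of two 1-D operators.
F = sp.kron(sp.identity(ny), laplacian_1d(nx)) + sp.kron(laplacian_1d(ny), sp.identity(nx))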
Thank you

Maybe the NetworkX library in Python could be an option. Since the displacement beneath the earth can be discretized on a grid, you can build a graph G(N, E) where N is the set of nodes in your grid and E is the set of edges connecting neighboring nodes.
Each node and edge can carry a weight value (e.g. a displacement measure).
Moreover, NetworkX has a function for computing the graph Laplacian matrix.
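For example, a rough sketch of this idea could look like the following; the grid size, the placeholder model matrix G_model, the observations U, and the smoothing weight lam are all assumptions to be replaced with your own:

import numpy as np
import networkx as nx
from scipy.optimize import nnls

# Hypothetical 10 x 10 grid of displacement nodes; adjust to your discretization.
rows, cols = 10, 10
grid = nx.grid_2d_graph(rows, cols)   # nodes are (i, j) tuples, edges join 4-neighbors

# Graph Laplacian F = D - A as an (n x n) matrix, n = rows * cols.
F = nx.laplacian_matrix(grid, nodelist=sorted(grid.nodes())).toarray()

# Stack the data equation G S = U with the smoothing equation lam * F S = 0
# and solve the combined system with non-negative least squares.
G_model = np.eye(rows * cols)     # placeholder for the real model matrix
U = np.random.rand(rows * cols)   # placeholder for the surface observations
lam = 0.1                         # smoothing weight, to be tuned

A = np.vstack([G_model, lam * F])
b = np.concatenate([U, np.zeros(F.shape[0])])
S, residual = nnls(A, b)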

Related

How to calculate the average/best Rotation between two coordinate systems?

From a sensor I get the rotation from points in coordinate system A to points in coordinate system B. The measured rotations between the coordinate systems are not 100% identical due to the noise of the sensor.
How can I determine the average or optimal rotation matrix between the coordinate systems? Similar to this problem: stackoverflow: Averaging Quaternions, but contrary to that I do not want to use quaternions; instead I want to try a least-squares approach.
Given: Rba(n): Rotation matrix from a to b, measured at n different time points
Wanted: Rba optimal
My approach: Minimization of the squared distance.
First I define n random points in space and apply the rotations to these points.
And now I can calculate the rotation by means of the Kabsch algorithm, using singular value decomposition to minimize the squared distance between the input points and the transformed points.
However, what I don't understand is that the calculated rotation matrix seems to be dependent on the input points. That is, I get different rotation matrices as a result for different input points, although the applied rotation matrices Rba(n) remain the same.
Why is that? And what is the right way?
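For reference, here is a minimal sketch of the approach described above (random test points, then an SVD-based least-squares fit); the random rotations are placeholders standing in for the sensor measurements Rba(n):

import numpy as np
from scipy.spatial.transform import Rotation

def kabsch(P, Q):
    # Best-fit rotation R minimizing sum ||R p_i - q_i||^2, with rows of P, Q as points.
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])     # guard against reflections
    return Vt.T @ D @ U.T

# Hypothetical measured rotations Rba(n); replace with the sensor data.
measured = Rotation.random(20, random_state=0).as_matrix()

# Apply every measured rotation to a set of random test points ...
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
transformed = np.vstack([pts @ R.T for R in measured])
stacked_pts = np.vstack([pts] * len(measured))

# ... and fit a single rotation that best explains all of them.
R_avg = kabsch(stacked_pts, transformed)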

Are Gaussian & Mean Curvatures Applicable to Rough Surfaces?

For a project I am working on, I have successfully performed the SFM procedure on road image data, and have been able to generate a .ply file containing point cloud coordinates (X, Y, Z), RGB values, and normals (nx, ny, nz).
Now, I am interested in calculating curvature values for each point from the data I have. I have come across Surface Curvature MATLAB Equivalent in Python, but the implementation is said to work only when X, Y, and Z are 2D arrays.
Are Gaussian and mean curvatures applicable if the road surface is very rough? Given the (X, Y, Z) coordinate data of size N x 3 and the (nx, ny, nz) normal data of size N x 3, how should I find the curvature? Is there a current implementation in Python?
I'm actually writing a Python library for working with point clouds.
Using the raw point cloud, the only "curvature" concept I'm aware of is the one computed using the eigenvalues obtained from the neighbourhood of each point.
If that is what you mean, here is an example code:
from pyntcloud import PyntCloud
cloud = PyntCloud.from_file("Box.ply")
This is the example point cloud inside Box.ply:
The steps to compute the curvature are:
Get k-neighbors of each point:
k_neighbors = cloud.get_neighbors(k=10)
Compute the eigenvalues for each point using its k (10 in this case) neighbours:
ev = cloud.add_scalar_field("eigen_values", k_neighbors=k_neighbors)
Compute the curvature from those eigenvalues:
cloud.add_scalar_field("curvature", ev=ev)
Saving the cloud with the new scalar fields:
cloud.to_file("out.ply")
Here is the point cloud inside out.ply colored according to the curvature value assigned to each point (white indicates higher curvature values):
Here is an example on how to filter the point cloud to only keep the points that have a curvature value above the mean:
curvature = cloud.points["curvature(K(16))"]
cloud.points = cloud.points[curvature > curvature.mean()]
cloud.to_file("out.ply")
And the new content of out.ply:
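If you would rather not depend on pyntcloud, the same eigenvalue-based curvature (often called surface variation) can be sketched with NumPy and SciPy. This is only an illustration of the idea under the assumption of an (N, 3) point array, not the library's exact implementation:

import numpy as np
from scipy.spatial import cKDTree

def eigen_curvature(points, k=10):
    # Surface variation lambda_min / (l1 + l2 + l3) from the k nearest neighbours.
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)   # first neighbour is the point itself
    curv = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)       # 3 x 3 covariance of the neighbourhood
        eigvals = np.sort(np.linalg.eigvalsh(cov))
        curv[i] = eigvals[0] / eigvals.sum()
    return curv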

Estimation of fundamental matrix or essential matrix from feature matching

I am estimating the fundamental matrix and the essential matrix by using the built-in functions in OpenCV. I provide input points to the functions by using ORB and a brute-force matcher. These are the problems I am facing:
1. The essential matrix that I compute with the built-in function does not match the one I find from the mathematical computation using the fundamental matrix as E = K.t() * F * K.
2. As I vary the number of points used to compute F and E, the values of F and E keep changing. The function uses the RANSAC method. How do I know which value is the correct one?
3. I am also using a built-in function to decompose E and find the correct R and T from the 4 possible solutions. The values of R and T also change with the changing E. More concerning is the fact that the translation direction T changes without a pattern: say it was in the X direction for one value of E; if E changes, it switches to Y or Z. Why is this happening? Has anyone else had the same problem?
How do I resolve this problem? My project involves taking measurements of objects from images.
Any suggestions or help would be welcome!
Both F and E are defined up to a scale factor. It may help to normalize the matrices, e.g. by dividing by the last element.
RANSAC is a randomized algorithm, so you will get a different result every time. You can test how much it varies by triangulating the points, or by computing the reprojection errors. If the results vary too much, you may want to increase the number of RANSAC trials or decrease the distance threshold, to make sure that RANSAC converges to the correct solution.
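A small sketch of those two points, assuming pts1 and pts2 are the matched pixel coordinates (N x 2 float arrays) from ORB + brute-force matching and K is the known 3 x 3 intrinsic matrix:

import numpy as np
import cv2

def normalized_f_and_e(pts1, pts2, K):
    # Estimate F with RANSAC and derive a scale-normalized E.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    F = F / F[2, 2]               # remove the arbitrary scale: normalize the last element

    E = K.T @ F @ K               # essential matrix from F and the intrinsics
    E = E / np.linalg.norm(E)     # E is also defined only up to scale
    return F, E, mask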
Yes, computing the fundamental matrix gives a different matrix every time, as it is defined only up to a scale factor.
It is a rank-2 matrix with 7 DOF (3 rotation, 3 translation, 1 scaling).
The fundamental matrix is a 3 x 3 matrix; F33 (3rd row, 3rd column) is the scale factor.
You may ask why we fix the entry F33 to a constant: because x_left^T * F * x_right = 0 is a homogeneous equation with infinitely many solutions, so we add a constraint by making F33 constant.

Generating random values for a given CDF

Based on a sample of values of a random variable, I create a cumulative density function using kernel density estimation:
from scipy.stats import gaussian_kde
cdf = gaussian_kde(sample)
What I need is to generate sample values of a random variable whose density function is equal to the constructed one. I know about the method of inverting the probability distribution function, but since I cannot do it analytically it requires pretty complicated preparation. Is there a built-in solution, or maybe another way to accomplish the task?
If you're using a kernel density estimator (KDE) with Gaussian kernels, your density estimate is a Gaussian mixture model. This means that the density function is a weighted sum of 'mixture components', where each mixture component is a Gaussian distribution. In a typical KDE, there's a mixture component centered over each data point, and each component is a copy of the kernel. This distribution is easy to sample from without using the inverse CDF method. The procedure looks like this:
Setup
Let mu be a vector where mu[i] is the mean of mixture component i. In a KDE, this will just be the locations of the original data points.
Let sigma be a vector where sigma[i] is the standard deviation of mixture component i. In typical KDEs, this will be the kernel bandwidth, which is shared for all points (but variable-bandwidth variants do exist).
Let w be a vector where w[i] contains the weight of mixture component i. The weights must be positive and sum to 1. In a typical, unweighted KDE, all weights will be 1/(number of data points) (but weighted variants do exist).
Choose the number of random points to sample, n_total
Determine how many points will be drawn from each mixture component.
Let n be a vector where n[i] contains the number of points to sample from mixture component i.
Draw n from a multinomial distribution with "number of trials" equal to n_total and "success probabilities" equal to w. This means the number of points to draw from each mixture component will be randomly chosen, proportional to the component weights.
Draw random values
For each mixture component i:
Draw n[i] values from a normal distribution with mean mu[i] and standard deviation sigma[i]
Shuffle the list of random values, so they have random order.
This procedure is relatively straightforward because random number generators (RNGs) for multinomial and normal distributions are widely available. If your kernels aren't Gaussian but some other probability distribution, you can replicate this strategy, replacing the normal RNG in step 4 with a RNG for that distribution (if it's available). You can also use this procedure to sample from mixture models in general, not just KDEs.
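A minimal sketch of this procedure for a standard, unweighted 1-D Gaussian KDE; the sample here is synthetic, and scipy.stats.gaussian_kde's resample method does essentially the same thing:

import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical 1-D sample; replace with your data.
rng = np.random.default_rng(0)
sample = rng.normal(loc=2.0, scale=1.5, size=500)
kde = gaussian_kde(sample)

# Setup: component means are the data points, every component shares the
# kernel bandwidth, and all weights are equal.
mu = sample
sigma = np.sqrt(kde.covariance[0, 0])          # kernel bandwidth used by the KDE
w = np.full(len(sample), 1.0 / len(sample))

# Decide how many draws come from each mixture component.
n_total = 1000
n = rng.multinomial(n_total, w)

# Draw from each component, then shuffle so the values are in random order.
draws = np.concatenate([rng.normal(m, sigma, size=k) for m, k in zip(mu, n) if k > 0])
rng.shuffle(draws)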

Formulation of the U-matrix (unified distance matrix) as a matrix operation?

I am computing the u-matrix in parallel, and am concerned with the computational cost of traversing matrices in Python, as opposed to computing values by means of matrix operations using Numpy.
When computing the u-matrix, you need the distance to the nearest nodes, the neighborhood. It seems hard to find that out without actually picking them out with a for loop.
To make a simple example, in a 2-D grid you want the neighborhood of the node with coordinates [u,v]: [[u-1,v], [u+1,v], [u,v-1], [u,v+1]].
Is there a way to, for example, extract a neighborhood with NumPy? Then I know how I can compute the Euclidean distance for all such points. Or should I otherwise reformulate the problem?
What would happen to the u-matrix if I used the distance from each node to all other nodes in the grid, as opposed to only the nearest ones?
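One vectorized way to get the neighbor distances without explicit Python loops, sketched here under the assumption that the SOM codebook is stored as an (H, W, D) array named weights (all names and sizes are placeholders):

import numpy as np

rng = np.random.default_rng(0)
H, W, D = 20, 30, 3
weights = rng.normal(size=(H, W, D))   # placeholder codebook vectors

# Distances to the right and lower neighbours, via differencing shifted slices.
d_right = np.linalg.norm(weights[:, 1:] - weights[:, :-1], axis=-1)   # shape (H, W-1)
d_down = np.linalg.norm(weights[1:, :] - weights[:-1, :], axis=-1)    # shape (H-1, W)

# Average distance from each node to its available 4-neighbours (a simple u-matrix).
umatrix = np.zeros((H, W))
counts = np.zeros((H, W))
umatrix[:, :-1] += d_right
counts[:, :-1] += 1
umatrix[:, 1:] += d_right
counts[:, 1:] += 1
umatrix[:-1, :] += d_down
counts[:-1, :] += 1
umatrix[1:, :] += d_down
counts[1:, :] += 1
umatrix /= counts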
