Computing the 3D Transformation between Two Sets of Points - python

Using a Microsoft Kinect, I am collecting depth data about an object. From these data, I create a "cloud" of points (point cloud), which, when plotted, allow me to view the object that I scanned using the Kinect.
However, I would like to be able to collect multiple point clouds from different "views" and align them. More specifically, I would like to use an algorithm such as Iterative Closest Point (ICP) to do so, transforming each point in my point cloud by calculating the rotation and translation between each cloud that I collect and the previously-collected cloud.
However, while I understand the process behind ICP, I do not understand how I would implement it in 3D. Perhaps it is my lack of mathematical experience or my lack of experience with frameworks such as OpenCV, but I cannot find a solution. I would like to avoid libraries such as the Point Cloud Library which does this sort of thing for me, since I would like to do it myself.
Any and all suggestions are appreciated (if there is a solution that involves OpenCV/python that I can work on, that would be even better!)

I am currently struggling with ICP myself. Here is what I have gathered so far:
ICP consists of three steps:
Given two point clouds A and B, find pairs of points between A and B that probably represent the same point in space. Often this is done simply by matching each point with its closest neighbor in the other cloud, but you can use additional features such as color, texture or surface normal to improve the matching. Optionally you can then discard the worst matches.
Given this list of correspondence pairs, find the optimal transformation from A to B
Apply this transformation to all points in A
Repeat these three steps until you converge on an acceptable solution.
Step one is easy, although there are many ways to optimize it, both for speed (this is the major performance bottleneck of ICP) and for accuracy (this is the main source of errors). OpenCV can help you there with the FLANN library.
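As a minimal sketch of the matching step, here is one way to find closest-neighbor correspondences using SciPy's KD-tree instead of FLANN (the clouds below are made-up examples, not from the question):

```python
import numpy as np
from scipy.spatial import cKDTree

def match_points(A, B):
    """For each point in A, find its nearest neighbor in B.

    A: (n, 3) array, B: (m, 3) array.
    Returns (distances, indices) so that B[indices[i]] is the
    closest point in B to A[i].
    """
    tree = cKDTree(B)
    distances, indices = tree.query(A)
    return distances, indices

# Hypothetical example clouds
A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
B = np.array([[0.1, 0.0, 0.0], [2.0, 0.0, 0.0]])
d, idx = match_points(A, B)
# Both points of A match B[0] here (distances 0.1 and 0.9)
```

Discarding the worst matches then amounts to thresholding `d` (e.g. keeping pairs below some multiple of the median distance).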
I assume your troubles are with step two, finding the best transformation given a list of correspondences.
One common approach works with Singular Value Decomposition (SVD). Here is a rough sketch of the algorithm. Searching for ICP & SVD will give a lot of further references.
Take the list of corresponding points A1..An and B1..Bn from step 1
Calculate the centroid Ca of all points in A and the centroid Cb of all points in B
Calculate the 3x3 covariance matrix M
M = (A1 - Ca) * (B1 - Cb)^T + ... + (An - Ca) * (Bn - Cb)^T
Use SVD to decompose M as U * S * V^T
(OpenCV has a function to perform SVD)
Calculate R = V * U^T.
This is your desired optimal rotation matrix. (If det(R) is -1 you got a reflection instead of a rotation; flip the sign of the last column of V and recompute.)
Calculate the optimal translation as Cb - R*Ca
The optimal transformation is the combination of R and this translation
Please note that I have not yet implemented this algorithm myself, so I am only paraphrasing what I read.
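The SVD recipe above (the Kabsch algorithm) can be sketched in NumPy as follows; note that with the covariance M = sum_i (Ai - Ca)(Bi - Cb)^T, the rotation mapping A onto B is R = V * U^T, plus a determinant check to rule out reflections:

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform mapping points A onto B (Kabsch/SVD).

    A, B: (n, 3) arrays of corresponding points.
    Returns (R, t) such that R @ A[i] + t is approximately B[i].
    """
    Ca = A.mean(axis=0)
    Cb = B.mean(axis=0)
    # 3x3 covariance matrix M = sum_i (A_i - Ca)(B_i - Cb)^T
    M = (A - Ca).T @ (B - Cb)
    U, S, Vt = np.linalg.svd(M)
    R = Vt.T @ U.T
    # Guard against a reflection (det(R) == -1)
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = Cb - R @ Ca
    return R, t
```

Running this on a cloud and a rotated-plus-translated copy of it recovers the rotation and translation exactly; inside ICP it is fed the correspondence pairs from step one.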

A very good introduction to ICP, including accelerated variants, can be found in Rusinkiewicz's old paper here.

A newer ICP implementation is now in OpenCV contrib (the surface matching module). It also includes several of the accelerated variants (Rusinkiewicz's among them):
http://docs.opencv.org/3.0-beta/modules/surface_matching/doc/surface_matching.html
For MATLAB implementation:
http://www.mathworks.co.jp/matlabcentral/fileexchange/47152-icp-registration-using-efficient-variants-and-multi-resolution-scheme/content/icp_mod_point_plane_pyr.m

#tdirdal:
Ok then I may not be looking at the correct code.
I am talking about this package link:
The code starts with constructing a transformation matrix and then loads a *.ply which contains a mesh (faces and vertices). The rest of the code depends on the mesh that has been loaded.
I have a very simple problem, and I would appreciate it if you could let me know how I can solve it using the ICP method. I have the following two point clouds; P2 is a subset of P39, and I would like to find P2 in P39. Please let me know how I can use your MATLAB package to solve this problem.
P2:
11.2706 -5.3392 1.1903
13.6194 -4.8500 2.6222
8.8809 -3.8407 1.1903
10.7704 -2.1800 2.6222
8.5570 -1.0346 1.1903
13.1808 -2.5632 1.1903
P39:
-1.9977 -4.1434 -1.6750
-4.3982 -3.5743 -3.1069
-6.8065 -3.0071 -1.6751
-9.2169 -2.4386 -3.1070
-11.6285 -1.8696 -1.6751
-16.4505 -0.7305 -1.6751
-14.0401 -1.3001 -3.1070
-18.8577 -0.1608 -3.1070
-25.9398 -0.8647 -3.1070
-30.1972 -4.6857 -3.1069
-28.2349 -2.5200 -3.1069
-29.5843 -0.2496 -1.6751
-31.1688 -2.0974 -3.1070
-21.2580 0.4093 -1.6751
-23.6450 0.9838 -3.1070
-26.0636 1.5073 -1.6751
-28.4357 1.9258 -3.1070

Related

Kalman Filtering in Python

I've been trying to work on designing a Kalman Filter for a few weeks now, but I'm pretty sure I'm making a major error because my results are terrible. My common sense tells me it's because I'm using an already-existing matrix as my predicted state instead of using a transition matrix, but I'm not sure how to solve that if it indeed is the issue. By the way, this is my first time using Kalman Filtering, so I may be missing basic stuff.
Here is a detailed explanation:
I have 2 datasets of 81036 observations each, with each observation including 6 datapoints (i.e., I end up with 2 matrices of shape 81036 x 6). The first dataset is the measured state and the other one is the predicted state. I want to end up with a Python code that filters the data using both states, and I need the final covariance and error estimates. Here's the main part of my code:
import numpy as np

# nb of observations
nn = 81036
# nb of datapoints
ns = 6

# import
ps = np.genfromtxt('.......csv', delimiter=',')
ms = np.genfromtxt('.......csv', delimiter=',')

## kalman filtering with covariance
# initialize data (lazy initialization using means of columns)
xi = np.mean(ms, axis=0)

for i in np.arange(nn):
    # errors
    d = ms[i, :] - xi
    d2 = ps[i, :] - xi
    # covariance matrices
    P = np.zeros((ns, ns))
    R = np.zeros((ns, ns))
    for j in np.arange(ns):
        for s in np.arange(ns):
            P[j, s] = d[j] * d[s]
            R[j, s] = d2[j] * d2[s]
    # Gain
    k = P * (P + R)**-1
    # Update estimate
    xi = xi + np.matmul(k, d2)
    # Uncertainty/error
    I = np.identity(ns)
    mlt = np.matmul((I - k), P)
    mlt = np.matmul(mlt, (I - k).T)
    mlt2 = np.matmul(k, R)
    mlt2 = np.matmul(mlt2, k.T)
    Er = mlt + mlt2
When I run this code, I end up with my filtered state xi going through the roof, so I'm pretty sure this is not the correct code. I've tried to fix it in several ways (e.g., I tried to calculate the covariance matrix in the standard way I'm used to - D'D/n -, I tried to remove my predicted state matrix and simply add random noise to my measured state instead...), but nothing seems to work. I also tried some available libraries for Kalman filtering (as well as libraries in Matlab and R), but they either work in 1D only or need me to specify variables like the transition matrix, which I don't have. I'm at my wits' end here, so I'd appreciate any help.
I've found the solution to this issue. Huge props to Kani for their comment, as it pointed me in the right direction.
It turns out that the issue is simply in the calculation of k. Although the equation is correct, the inverse function was not working properly because of the very small values in some instances of R and P. To solve this, I used the pseudoinverse instead, so the line for calculating k became as follows:
k = P @ np.linalg.pinv(P + R)
Note that this might not be as accurate as the inverse in other cases, but it does the trick here.
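To illustrate why the pseudoinverse helps here: in the question's code P and R are built as outer products d d^T, so each has rank 1 and their 6x6 sum (rank at most 2) is singular; a plain inverse is then ill-defined, while the Moore-Penrose pseudoinverse still yields a finite gain. A small sketch with made-up error vectors:

```python
import numpy as np

# Made-up 6-component error vectors standing in for d and d2
d = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
d2 = np.array([0.5, 1.0, 0.5, 2.0, 1.0, 3.0])

# Rank-1 "covariance" matrices as in the question's loop
P = np.outer(d, d)
R = np.outer(d2, d2)

# P + R has rank <= 2, so np.linalg.inv is unusable here,
# but the pseudoinverse gives a finite gain
k = P @ np.linalg.pinv(P + R)
```

Note that `pinv` zeroes out the near-null directions instead of amplifying them, which is exactly what keeps the filtered state from blowing up.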

How to evaluate a Hamiltonian (Quantum Mechanics) in Python

I'm working on a project where I've been tasked with modeling the band structure of various potentials. I've worked through all of the math on paper and have had success hard-coding some results, but I'd like to be able to produce results from scratch, providing only the eigenvector, Hamiltonian, and potential being explored. I'm working in Python, by the way.
So far I've been able to represent the components of my Hamiltonian that depend only on the kinetic term, (-h_bar^2/2m)(d^2/dx^2). I'm using NumPy arrays to represent my bras, and NumPy's inner product function to evaluate the components of my Hamiltonian that depend only on the second derivative and its respective constants. Issues arise when I try to evaluate the components of the Hamiltonian that depend on the potential (the full Hamiltonian operator being (-h_bar^2/2m)(d^2/dx^2) + V(x)).
I'm not quite sure how to complete this part. I've tried evaluating the inner product in its integral form using SciPy, but I keep running into issues when evaluating integrals that have complex components (SciPy doesn't like that).
To get a more solid idea of what I'm doing, here's the PDF I'm working out of: https://era.library.ualberta.ca/items/4835014b-1dbc-48e1-bbec-c175c87c9b03/view/000b36c4-55ba-471b-aaf8-d9d1c5f5ee06/Pavelich_Robert_L_201609_MSc.pdf (pages 19-22 as written in the actual document)
Any help would be much appreciated, I'm fairly new to Quantum Mechanics as a whole and even greener when it comes to modeling it. Thanks a lot!
Another way to compute the Hamiltonian for a given potential is the finite difference method: discretize the Hamiltonian on a grid and diagonalize it to obtain the eigenvalues and eigenstates, and therefore the spectrum.
This code might be helpful https://github.com/mholtrop/QMPython/blob/master/Finite%20Well%20Bound%20States.ipynb
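A minimal sketch of that finite-difference approach, in units where hbar = m = 1 and with a harmonic well as a stand-in potential (neither choice comes from the question):

```python
import numpy as np

# Discretize H = -(1/2) d^2/dx^2 + V(x) on a grid, then diagonalize.
N = 500
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]

V = 0.5 * x**2                      # example potential: harmonic oscillator

# Second-derivative operator via central differences (Dirichlet boundaries)
main = np.full(N, -2.0)
off = np.full(N - 1, 1.0)
D2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2

H = -0.5 * D2 + np.diag(V)

# Eigenvalues come back ascending; columns of 'states' are the eigenvectors
energies, states = np.linalg.eigh(H)
```

For the harmonic oscillator the exact levels are n + 1/2, so the lowest computed energies should come out close to 0.5, 1.5, 2.5, ..., which is a handy sanity check before swapping in the periodic potentials from the thesis.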

Numerical solution of two nonlinear 3D equations

Hello people on the internet!
I have two 3D equations that I want to solve simultaneously. They are of the form F(x,y,z)=C and G(x,y,z)=0. The solutions of these equations are supposed to describe curves (maybe even areas in some regions, I'm not sure), and I want to obtain a discrete set of numerical solutions that "sample" these lines. I searched for a while, but the solution methods I stumbled upon only aim to find a single solution.
I thought about using a grid on 3D space and just checking the equations; however, that forces me to loosen the conditions a bit. But in case (or in regions where) the solution is a curve, the points should resemble a curve after all.
For better reference, my functions are of the form:
with random parameters c_i, d_i, k_i, phi_i.
For tips I would prefer native python, but I am open to any possible solution. Any ideas appreciated! :)
You're going to want to start by sampling those functions on a 3D grid containing the portion of the solution set you are interested in.
Once you've identified which regions of the grid may contain potential solutions, you'll then use an iterative method to minimize the function (F(x)-C)^2 + (G(x))^2.
The key here is that you will run the iterative algorithm for each grid region you identified as "interesting," each time initializing the method with values lying inside the region of interest.
Note: Sorry for the poor notation.
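A rough sketch of that grid-then-refine idea; since the question's actual F and G were given as an image, a hypothetical pair is used here (a sphere and a plane, whose joint solution set is a circle):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical example: F = x^2 + y^2 + z^2 = C (sphere), G = z - 0.5 = 0 (plane)
C = 1.0
F = lambda p: p[0]**2 + p[1]**2 + p[2]**2
G = lambda p: p[2] - 0.5

def residual(p):
    # Combined objective: zero exactly on the solution set
    return (F(p) - C)**2 + G(p)**2

# Step 1: coarse grid scan to collect promising starting points
grid = np.linspace(-1.5, 1.5, 10)
seeds = [np.array([x, y, z])
         for x in grid for y in grid for z in grid
         if residual(np.array([x, y, z])) < 0.1]

# Step 2: refine each seed with a local minimizer; keep near-zero residuals
solutions = []
for s in seeds:
    res = minimize(residual, s, method='Nelder-Mead')
    if res.fun < 1e-6:
        solutions.append(res.x)

# 'solutions' now samples the solution curve (here: x^2 + y^2 = 0.75, z = 0.5)
```

The grid resolution and the two thresholds (0.1 for "interesting," 1e-6 for "converged") are tuning knobs; a finer grid samples the curve more densely at the cost of more minimizer runs.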

Standard procedure for geometric transformations

I'm trying to build a very basic tool for a problem in mechanical engineering and need to do simple transformations of coordinates and vectors in euclidian space which include translations + rotations.
For example
a component part with a local coordinate system (cs) is moved in respect to a world cs. I need to compute the new position of the (origin of) the local cs and its orientation.
a vector is shifted and rotated in respect to a cs. Its new position has to be computed.
I'm aware of different concepts for doing these computations:
Euler angles
Quaternions
Homogeneous coordinates
From my POV the use of homogeneous coordinates would be the simplest approach because it is possible to compute translations and rotations in one step.
Questions:
What is the most common approach in programming to implement this kind of computations?
Is there a python library which can handle these transformations? I found some smaller libraries like transformations.py but I guess transformations like these are very very common and so I wonder if this isn't part of scipy or something like that.
After all, I assume I'm searching for the wrong terms and would be glad if someone could provide a hint for further reading, code examples, or libraries (especially for Python).
Use NumPy and linear algebra to do the transformations as matrix multiplications.
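Concretely, the homogeneous-coordinates approach the question mentions boils down to packing a rotation and a translation into one 4x4 matrix; a small sketch (the rotation and translation values are arbitrary examples):

```python
import numpy as np

def make_transform(R, t):
    """Pack a 3x3 rotation R and a translation t into a 4x4 homogeneous matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Example: rotate 90 degrees about z, then translate by (1, 0, 0)
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
T = make_transform(Rz, np.array([1.0, 0.0, 0.0]))

p = np.array([1.0, 0.0, 0.0, 1.0])   # point (1, 0, 0) in homogeneous form
p_new = T @ p                         # rotated then translated in one step

# Chaining transforms (e.g. local cs -> parent cs -> world cs) is just
# a matrix product, which is why this representation is so convenient:
T2 = T @ T
```

This is exactly the "translation and rotation in one step" advantage of homogeneous coordinates: composing and inverting placements of local coordinate systems reduces to `@` and `np.linalg.inv` on 4x4 matrices.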

Find two most distant points in a set of points in 3D space

I need to find the diameter of a point cloud (the two points with maximum distance between them) in 3-dimensional space. As a temporary solution, right now I'm just iterating through all possible pairs and comparing the distance between them, which is a very slow O(n^2) solution.
I believe it can be done in O(n log n). It's a fairly easy task in 2D (just find the convex hull and then apply the rotating calipers algorithm), but in 3D I can't imagine how to use rotating calipers, since there is no way to order the points.
Is there any simple way to do it (or ready-to-use implementation in python or C/C++)?
PS: There are similar questions on StackOverflow, but the answers I found only refer to rotating calipers (or similar) algorithms, which work fine in 2D but where it's not clear how to implement them in 3D (or higher dimensions).
While O(n log n) expected-time algorithms exist in 3D, they seem tricky to implement (while staying competitive with brute-force O(n^2) algorithms).
An algorithm is described in Har-Peled 2001, and the authors provide source code that can optionally be used for exact computation. I was not able to download the latest version; the "old" version could be enough for your purpose, or you might want to contact the authors for the code.
An alternative approach is presented in Malandain & Boissonnat 2002, and the authors provide code. Although this algorithm is presented as approximate in higher dimensions, it could fit your purpose. Note that their code provides an implementation of Har-Peled's method for exact computation that you might also check.
In any case, in real-world usage you should always check that your algorithm remains competitive with the naïve O(n^2) approach.
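Not one of the O(n log n) algorithms above, but a common practical compromise: the two farthest points are always vertices of the convex hull, so the quadratic pass can be restricted to the h hull vertices (still O(h^2), but h is usually far smaller than n). A sketch using SciPy's ConvexHull:

```python
import numpy as np
from scipy.spatial import ConvexHull

def diameter(points):
    """Diameter of a 3D point cloud: max pairwise distance.

    Restricts the brute-force pass to convex-hull vertices, since the
    two farthest points must both lie on the hull.
    """
    hull = points[ConvexHull(points).vertices]
    # All pairwise squared distances among hull vertices
    diff = hull[:, None, :] - hull[None, :, :]
    d2 = np.einsum('ijk,ijk->ij', diff, diff)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    return np.sqrt(d2[i, j]), hull[i], hull[j]

# Hypothetical usage on a random cloud
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))
d, p, q = diameter(pts)
```

For clustered or roughly convex data most points are interior and get pruned; for adversarial inputs (all points on a sphere) this degrades back to O(n^2), which is why the benchmark against brute force mentioned above still applies.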
