Estimate Euclidean transformation with Python

I want to do something similar to what in image analysis would be a standard 'image registration' using features.
I want to find the best transformation that maps a set of 2D coordinates A onto another set B.
But I want to add the extra constraint that the transformation is a rigid/Euclidean transformation, meaning there is no scaling, only translation and rotation.
Normally, allowing scaling, I would do:

from numpy import array
from skimage import transform

destination = array([[1.0, 2.0], [1.0, 4.0], [3.0, 3.0], [3.0, 7.0]])
source = array([[1.2, 1.7], [1.1, 3.8], [3.1, 3.4], [2.6, 7.0]])
T = transform.estimate_transform('similarity', source, destination)
I believe estimate_transform under the hood just solves a least-squares problem.
But I want to add the constraint of no scaling.
Is there any function in skimage or another package that solves this?
I probably need to write my own optimization problem with SciPy, CVXOPT, or cvxpy.
Any help phrasing/implementing this optimization problem?
EDIT:
My implementation, thanks to Stefan van der Walt's answer:
import numpy as np
from scipy.optimize import minimize

def obj_fun(pars, x, src):
    theta, tx, ty = pars
    # homogeneous rigid (Euclidean) transform: rotation by theta + translation (tx, ty)
    H = np.array([[np.cos(theta), -np.sin(theta), tx],
                  [np.sin(theta),  np.cos(theta), ty],
                  [0, 0, 1]])
    # append a column of ones to put the points in homogeneous coordinates
    src1 = np.c_[src, np.ones(src.shape[0])]
    return np.sum((x - src1.dot(H.T)[:, :2])**2)

def apply_transform(pars, src):
    theta, tx, ty = pars
    H = np.array([[np.cos(theta), -np.sin(theta), tx],
                  [np.sin(theta),  np.cos(theta), ty],
                  [0, 0, 1]])
    src1 = np.c_[src, np.ones(src.shape[0])]
    return src1.dot(H.T)[:, :2]

res = minimize(obj_fun, [0, 0, 0], args=(destination, source), method='Nelder-Mead')
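For completeness, a hypothetical usage line applying the fitted parameters (res.x holds the optimized [theta, tx, ty]):

# apply the fitted rigid transform to the source points
registered = apply_transform(res.x, source)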

With that extra constraint you are no longer solving a linear least squares problem, so you'll have to use one of SciPy's minimization functions. The inner part of your minimization would set up a matrix H:
H = np.array([[np.cos(theta), -np.sin(theta), tx],
              [np.sin(theta),  np.cos(theta), ty],
              [0, 0, 1]])
Then, you would compute the distance
|x_target - H.dot(x_source)|
for all data-points and sum the errors. Now, you have a cost function that you can send to the minimization function. You probably will also want to make use of RANSAC, which is available as skimage.measure.ransac, to reject outliers.
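A sketch of that RANSAC step, assuming a scikit-image version that provides skimage.transform.EuclideanTransform (the residual threshold is data-dependent and chosen here only for illustration):

from skimage.measure import ransac
from skimage.transform import EuclideanTransform

# fit rotation + translation while rejecting outlier correspondences
model, inliers = ransac((source, destination), EuclideanTransform,
                        min_samples=2,          # two point pairs fix a 2D rigid transform
                        residual_threshold=0.5, # max reprojection error for an inlier
                        max_trials=100)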

skimage now provides native support in the transform module:
http://scikit-image.org/docs/dev/api/skimage.transform.html#skimage.transform.estimate_transform
I find it somewhat easier than OpenCV. There is an extensive set of functions covering the common use cases.
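For the rigid case specifically, a sketch assuming a scikit-image version that supports the 'euclidean' transform type (and that the returned EuclideanTransform exposes rotation and translation attributes):

from numpy import array
from skimage import transform

destination = array([[1.0, 2.0], [1.0, 4.0], [3.0, 3.0], [3.0, 7.0]])
source = array([[1.2, 1.7], [1.1, 3.8], [3.1, 3.4], [2.6, 7.0]])

# 'euclidean' restricts the fit to rotation + translation (no scaling)
T = transform.estimate_transform('euclidean', source, destination)
print(T.rotation, T.translation)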

Related

Solving two coupled second order boundary value problems

I have solved a single second-order differential equation with two boundary conditions using the module solve_bvp. However, now I am trying to solve the system of two second-order differential equations

U'' + a*B' = 0
B'' + b*U' = 0

with the boundary conditions U(±0.5) = ±0.01 and B(±0.5) = 0. I have split this into a system of first-order ordinary differential equations and am trying to use solve_bvp to solve them numerically. However, I am just getting arrays full of zeros for my solution. I believe I am implementing the boundary conditions wrong; it is not clear from the documentation how to handle more than two equations. My attempt is below:
import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt
%matplotlib inline

alpha = 1E-8
zeta = 8E-3
C_k = 0.05
sigma = 0.01

def fun(x, y):
    return np.vstack((y[1], -((alpha)/(C_k*sigma))*y[2], y[2], -(1/(C_k*zeta))*y[1]))

def bc(ya, yb):
    return np.array([ya[0]+0.001, yb[0]-0.001, ya[0]-0, yb[0]-0])

x = np.linspace(-0.5, 0.5, 5000)
y = np.zeros((4, x.size))
print(y)
sol = solve_bvp(fun, bc, x, y)
print(sol)
In my question I have just relabeled a and b, but they're just parameters that I input. I have the analytic solution for this set of equations so I know one exists that is non-trivial. Any help would be greatly appreciated.
It almost always helps if you state at least once, in a comment or by assignment to specifically named variables, how you compose the state vector.
By the form of the derivatives return vector, I would think you intend

U, U', B, B'

which means that U=y[0], U'=y[1], B=y[2], B'=y[3], so your derivatives vector should correctly be

return np.vstack((y[1], -((alpha)/(C_k*sigma))*y[3], y[3], -(1/(C_k*zeta))*y[1]))

and the boundary conditions

return np.array([ya[0]+0.001, yb[0]-0.001, ya[2]-0, yb[2]-0])
Your boundary conditions especially should make the algorithm fail in the first step because of a singular Jacobian; always check the .success field and the .message field of the solution structure.
Note that by default the absolute and relative tolerance of the experimental solve_bvp is 1e-3, and the number of nodes is limited to 500.
Setting the initial node count to 50 (5000 is much too much; the solver refines where necessary) and the tolerance to 1e-6, I get solution plots that visibly satisfy the boundary conditions.
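Putting the answer's fixes together, a sketch of the corrected setup (parameter values follow the question):

import numpy as np
from scipy.integrate import solve_bvp

alpha = 1E-8
zeta = 8E-3
C_k = 0.05
sigma = 0.01

def fun(x, y):
    # state vector: y = [U, U', B, B']
    return np.vstack((y[1], -(alpha/(C_k*sigma))*y[3], y[3], -(1/(C_k*zeta))*y[1]))

def bc(ya, yb):
    # conditions on U = y[0] at both ends and on B = y[2] at both ends
    return np.array([ya[0]+0.001, yb[0]-0.001, ya[2], yb[2]])

x = np.linspace(-0.5, 0.5, 50)  # 50 initial nodes; the solver refines as needed
y = np.zeros((4, x.size))
sol = solve_bvp(fun, bc, x, y, tol=1e-6)
print(sol.success, sol.message)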

Euclidean Homography optimization python

I am working on panoramic stitching. I am trying to refine the Euclidean homography matrix estimate that I get after performing RANSAC on the matches, using the Levenberg-Marquardt algorithm in scipy.optimize.least_squares.
I am doing this to correct for bending in my panoramic image output.
The optimization problem now becomes a nonlinear local optimization, where I minimize the homography error function

E = sum_i ||x'_i - H x_i||^2

where x' is the transformed point and x is the original point.
I am using the scipy.optimize.least_squares function as
ls_lm = least_squares(fun, [theta, tx, ty], args=(dst,src), method='lm')
Here dst and src are my correspondences from the source and destination images after RANSAC. I take theta, tx, ty from the homography estimate H. My fun looks like:
def fun(pars, x, src):
    theta, tx, ty = pars
    H = array([[cos(theta), -sin(theta), tx],
               [sin(theta),  cos(theta), ty],
               [0, 0, 1]])
    src1 = c_[src, ones(src.shape[0])]
    fun = sum((x - src1.dot(H.T)[:, :2])**2)
    ret_val = ones(len(src), float)
    for i in range(len(src)):
        ret_val[i] = fun
    return ret_val
But the least_squares function is not converging; it gives me back the same input [theta, tx, ty] as output. What am I doing wrong? Can I solve the bending problem with some other method or approach? Can bundle adjustment solve this, and if so, how do I implement it?
Also, is the Jacobian matrix input mandatory for my case? If yes, what should it be?
Thanks for your time!
Things I tried:
1) Initialized my parameters from [0, 0, 0] and added noise to H. The results seem a little off from the original H, but don't solve the problem.
2) Used scipy.optimize.minimize; got the same results as the input.
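One thing worth noting (a sketch, not a verified fix): least_squares expects fun to return a vector of per-point residuals, and filling every entry with the same summed scalar gives the optimizer no per-point information. Keeping the question's parametrization, the conventional residual function would look like:

import numpy as np
from scipy.optimize import least_squares

def residuals(pars, dst, src):
    theta, tx, ty = pars
    H = np.array([[np.cos(theta), -np.sin(theta), tx],
                  [np.sin(theta),  np.cos(theta), ty],
                  [0, 0, 1]])
    src1 = np.c_[src, np.ones(src.shape[0])]
    # one residual per coordinate of each correspondence, flattened to a vector
    return (dst - src1.dot(H.T)[:, :2]).ravel()

# ls_lm = least_squares(residuals, [theta, tx, ty], args=(dst, src), method='lm')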

Optimization of two vectors in python

I need to optimize two vectors x and y. The objective function is a function f(x, y) of both vectors, and x and y are also related by a - x/y = 0. Is there a well-known method to solve this in Python?
Well, your question is general; it would be great if you provided more details. But I grabbed a code snippet from here, which you can edit. scipy has an optimize module with a couple of methods to minimize functions.
from scipy.optimize import minimize

def f(v):
    # minimize passes a single parameter vector, so unpack x and y from it
    x, y = v
    return 10 - x/y

# initial values
x0 = 1.3
y0 = 0.5

res = minimize(f, [x0, y0])  # pick an algorithm via method='...' if desired
print(res.x)
If you provide more info like what algorithm you want to use, I can provide more precise code.
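If the constraint a - x/y = 0 from the question needs to be enforced, one sketch (assuming a is a known constant; the objective here is just the placeholder from above, so this is purely an API illustration) uses an equality constraint with the SLSQP method:

from scipy.optimize import minimize

a = 2.0  # hypothetical value for the known constant in a - x/y = 0

def f(v):
    x, y = v
    return 10 - x/y  # placeholder objective from the snippet above

# equality constraint: a - x/y must equal zero at the solution
constraint = {'type': 'eq', 'fun': lambda v: a - v[0]/v[1]}
res = minimize(f, [1.3, 0.5], method='SLSQP', constraints=[constraint])
print(res.x)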

Manually integrating to get inverse Laplace transform

I wanted to compute the inverse Laplace transform manually, without resorting to any library. Specifically, I wanted to compute a bilateral inverse Laplace transform. I wanted to check my understanding and tried the following manually, but was not able to match the answer. Where am I going wrong?
I want to compute the inverse Laplace transform of 1/(s-a). I know the answer is e^(at). My attempt:
a = 2
t = 0.5
f = lambda s: 1/(s-a)

def g(u):
    gammah = 1
    s = complex(real=gammah, imag=u)
    return (f(s)).real*np.cos(s.imag*t) * 2*np.exp(s.real*t)/pi

import spicy as sp
import numpy as np
sp.integrate(g, 0, np.inf, limit=10000)
This gives me -0.9999999,
but I know the answer is e ≈ 2.718...
The main error is mathematical. As Wikipedia says,
integration is done along the vertical line Re(s) = γ in the complex plane such that γ is greater than the real part of all singularities of F(s)
The function F(s) = 1/(s-a) has a singularity at a, which is 2 in your example. So γ needs to be greater than 2. For example, with γ=3 the output of quad is
(2.718278877362764, 2.911191228083254e-06)
as expected. By the way, your import spicy etc. can't possibly work; the correct import syntax would be
from scipy.integrate import quad
# ....
quad(g, 0, np.inf, limit=10000)
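Putting both fixes together, a sketch of the corrected computation (γ=3 as above; variable names follow the question):

import numpy as np
from scipy.integrate import quad

a = 2
t = 0.5
f = lambda s: 1/(s - a)

def g(u):
    gammah = 3  # must be greater than the real part of the singularity at s = a = 2
    s = complex(real=gammah, imag=u)
    return (f(s)).real*np.cos(s.imag*t) * 2*np.exp(s.real*t)/np.pi

print(quad(g, 0, np.inf, limit=10000))
# roughly (2.718278877362764, 2.911191228083254e-06)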

Calculate a plane from point cloud in Python without Numpy

I've seen several posts on this subject, but I need a pure Python (no NumPy or any other imports) solution that accepts a list of points (x, y, z coordinates) and calculates the normal of the plane that best fits those points.
I'm following one of the working NumPy examples from here: Fit points to a plane algorithms, how to interpret results?
import numpy as np

def fitPLaneLTSQ(XYZ):
    # Fits a plane to a point cloud
    # where Z = aX + bY + c      ---- Eqn #1
    # Rearranging Eqn #1: aX + bY - Z + c = 0
    # gives normal (a, b, -1)
    rows, cols = XYZ.shape
    G = np.ones((rows, 3))
    G[:, 0] = XYZ[:, 0]  # X
    G[:, 1] = XYZ[:, 1]  # Y
    Z = XYZ[:, 2]
    (a, b, c), resid, rank, s = np.linalg.lstsq(G, Z)
    normal = (a, b, -1)
    nn = np.linalg.norm(normal)
    normal = normal / nn
    return normal

XYZ = np.array([
    [0, 0, 1],
    [0, 1, 2],
    [0, 2, 3],
    [1, 0, 1],
    [1, 1, 2],
    [1, 2, 3],
    [2, 0, 1],
    [2, 1, 2],
    [2, 2, 3]
])
print(fitPLaneLTSQ(XYZ))
# [ -8.10792259e-17   7.07106781e-01  -7.07106781e-01]
I'm trying to adapt this code: Basic ordinary least squares calculation to replace np.linalg.lstsq
Here is what I have so far, without using NumPy, using the same coords as above:
import math

xvals = [0, 0, 0, 1, 1, 1, 2, 2, 2]
yvals = [0, 1, 2, 0, 1, 2, 0, 1, 2]
zvals = [1, 2, 3, 1, 2, 3, 1, 2, 3]

""" Basic ordinary least squares calculation. """
sumx, sumy = map(sum, [xvals, yvals])
sumxy = sum(map(lambda x, y: x*y, xvals, yvals))
sumxsq = sum(map(lambda x: x**2, xvals))
Nsamp = len(xvals)

# y = a*x + b
# a (slope)
slope = (Nsamp*sumxy - sumx*sumy) / (Nsamp*sumxsq - sumx**2)
# b (intercept)
intercept = (sumy - slope*sumx) / Nsamp

a = slope
b = intercept
normal = (a, b, -1)
mag = lambda x: math.sqrt(sum(i**2 for i in x))
nn = mag(normal)
normal = [i/nn for i in normal]
print(normal)
# [0.0, 0.7071067811865475, -0.7071067811865475]
As you can see, the answers come out the same, but only because of this particular example; in other examples they don't match. If you look closely, you'll see that in the NumPy example the 'z' values are fed into np.linalg.lstsq, but in the non-NumPy version the 'z' values are ignored. How do I work the 'z' values into the least-squares code?
Thanks
I do not think you can get away without implementing some basic matrix operations. As this is a multivariate linear regression problem, you will definitely need dot product, transpose and norm. These are easy. The difficult part is that you also need matrix inverse or QR decomposition or something similar. People usually use BLAS for these for good reasons, implementing them is not easy - but not impossible either.
With QR decomposition
I would start by creating a Matrix class that has the following methods
dot(m1, m2) (or __matmul__(m1, m2) if you have python 3.5): it is just the sum of products, should be straightforward
transpose(self): swapping matrix elements, should be easy
norm(self): square root of sum of squares (should be only used on vectors)
qr_decomp(self): this one is tricky. For an almost pure python implementation see this rosetta code solution (disclaimer: I have not thoroughly checked this code). It uses some numpy functions, but these are basic functions you can implement for your matrix class (shape, eye, dot, copysign, norm).
leastsqr_ut(R, A): solve the equation Rx = A when R is an upper triangular matrix. Not trivial, but easy enough, since you can solve it equation by equation from the bottom; a sketch follows the steps below.
With these, the solution is easy:
Generate the matrix G as detailed in your numpy example
Find the QR decomposition of G
Solve Rb = Q'z for b using that R is an upper triangular matrix
Then the normal vector you are looking for is (b[0], b[1], -1) (or the norm of it if you want a unit length normal vector).
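As a minimal sketch of that back-substitution (assuming R is stored as a list of lists and the right-hand side as a flat list):

def leastsqr_ut(R, b):
    # solve R x = b for x when R is upper triangular,
    # working equation by equation from the bottom row up
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(R[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / R[i][i]
    return x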
With matrix inverse
The inverse of a 3x3 matrix is relatively easy to calculate, but this method is much less numerically stable than QR decomposition. If that is not an important concern, implement the same dot, transpose and norm methods as above, plus:
det(self): the determinant; it is enough if it works on 2x2 and 3x3 matrices, for which simple formulas are available
inv(self): the matrix inverse; it is enough if it works on 3x3 matrices, and there is a simple formula, for example here
Then the formula for b is b = inv(G'G) * (G'z) and your normal vector is again (b[0], b[1], -1).
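As an illustration of that formula, a minimal pure-Python sketch (the helper names are mine, and inv3 hardcodes the 3x3 adjugate/determinant formula):

def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    # dot product of each row of A with each column of B
    return [[sum(a*b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def inv3(M):
    # 3x3 inverse via the adjugate / determinant formula
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj]

# b = inv(G'G) * (G'z), treating z as a column vector:
# Gt = transpose(G)
# coeffs = matmul(inv3(matmul(Gt, G)), matmul(Gt, [[zi] for zi in z]))
# normal = (coeffs[0][0], coeffs[1][0], -1)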
As you can see, none of this is simple, and most of it replicates some numpy functionality while making it a lot slower. So make sure you have absolutely no other choice.
I wrote some code with a similar purpose (see the "tangentplane_3D" function in the linked code).
In my case I had a scattered cloud of points that define a 3D ellipsoid. For each point I wanted to determine the tangent plane to the ellipsoid containing that point --> Goal: determination of a 3D plane.
The problem can be seen in the following way: a plane is defined by its normal, and the normal can be found from the singular vector associated with the smallest singular value of a set of n points.
What I did, and you can check it in the code I posted, is to select the k points closest to the point of interest at which I wanted to calculate the tangent plane. Then I performed a 3D singular value decomposition (SVD) on these k points. Finally, from this SVD I selected the smallest singular value and its associated singular vector, which is, in fact, the normal of the plane best fitting my set of points, and thus, in my case, tangent to the ellipsoid. With the normal vector and the point you can then calculate the complete plane equation.
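For reference, the SVD step described above looks roughly like this with NumPy (only to illustrate the idea, since the question asked for a pure-Python solution):

import numpy as np

def plane_normal_svd(points):
    # points: (k, 3) array of the k points nearest the point of interest
    P = np.asarray(points, dtype=float)
    centered = P - P.mean(axis=0)        # center the cloud on its centroid
    _, _, vt = np.linalg.svd(centered)   # rows of vt are right singular vectors
    return vt[-1]                        # direction of least variance = plane normal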
I hope it helps!!
Best wishes.
