I have an object (XYZ-coordinate system where Z is up) that rotates from t0 to t1 with the corresponding rotation matrices:
import numpy as np
from scipy.spatial.transform import Rotation as R
r_0 = np.array([[-0.02659679, -0.00281247, 0.99964229],
[ 0.76308514, -0.64603356, 0.01848528],
[ 0.64575048, 0.76330382, 0.01932857]])
r_1 = np.array([[ 0.05114056, -0.03815443, 0.99796237],
[-0.30594799, 0.95062582, 0.05202294],
[-0.95067369, -0.30798506, 0.03694226]])
# Calculate the relative rotation matrix from t0 to t1
rot_mat_rel = np.matmul(np.transpose(r_0), r_1)
r = R.from_matrix(rot_mat_rel)
# Obtain angles
print(r.as_euler('xyz', degrees=True))
# Result
array([ -1.52028392, -1.55242217, -148.10677483])
The problem is that the relative angles look wrong to me, but I can't find my mistake. What I wanted to know is how much the object rotated along x, y and z.
Edit: Code for the plots: https://codeshare.io/GA9zK8
You can use matrix_from_euler_xyz from this tutorial to check your results.
(You might need to run pip3 install pytransform3d in your terminal where you are running your python code from, or !pip3 install pytransform3d from Jupyter Notebook if you are using that.)
Preparing the data:
import numpy as np
from scipy.spatial.transform import Rotation as R
r_0 = np.array([[-0.02659679, -0.00281247, 0.99964229],
[ 0.76308514, -0.64603356, 0.01848528],
[ 0.64575048, 0.76330382, 0.01932857]])
r_1 = np.array([[ 0.05114056, -0.03815443, 0.99796237],
[-0.30594799, 0.95062582, 0.05202294],
[-0.95067369, -0.30798506, 0.03694226]])
# Calculate the relative rotation matrix from t0 to t1
rot_mat_rel = np.matmul(np.transpose(r_0), r_1)
r = R.from_matrix(rot_mat_rel)
Let's plot what the rotation r means in practice:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from pytransform3d.rotations import *
ax = plot_basis(R=np.eye(3), ax_s=1)
p = np.array([0, 0, 0])
R = matrix_from_euler_xyz(r.as_euler('xyz'))
plot_basis(ax, R, p, alpha=0.5)
plt.show()
We obtain this plot:
You can check if this is what you expected or not.
Check the rotation matrix which the pytransform3d module calculated from Euler angles r:
matrix_from_euler_xyz(r.as_euler('xyz'))
Giving output:
array([[-0.84872253, -0.52814402, 0.02709157],
[ 0.52754172, -0.84911505, -0.02652111],
[ 0.03701082, -0.00821713, 0.99928108]])
which is exactly the transpose of np.matmul(np.transpose(r_0), r_1):
array([[-0.84872253, 0.52754172, 0.03701082],
[-0.52814402, -0.84911505, -0.00821714],
[ 0.02709157, -0.02652111, 0.99928109]])
which seems like a good sign and may be a good starting point for checking your math.
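If you want to verify this equality programmatically, here is a quick sketch (assuming the arrays and imports from the code blocks above):
from numpy.testing import assert_array_almost_equal
assert_array_almost_equal(matrix_from_euler_xyz(r.as_euler('xyz')), rot_mat_rel.T)
# passes: the two matrices agree to about 7 decimal places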
As I don't see what result you expected to get, I suggest you experiment with plotting your results using the tools outlined here, and check step by step that what you have is what you expected.
I'm probably a bit late and zabop's answer already points in the right direction. I just want to clarify two things.
There are several ambiguities when we work with transformations. The two things that might make the code here a bit confusing are:
active vs. passive rotation
intrinsic vs. extrinsic rotation
I'm starting from your example above:
import numpy as np
r_0 = np.array([[-0.02659679, -0.00281247, 0.99964229],
[ 0.76308514, -0.64603356, 0.01848528],
[ 0.64575048, 0.76330382, 0.01932857]])
r_1 = np.array([[ 0.05114056, -0.03815443, 0.99796237],
[-0.30594799, 0.95062582, 0.05202294],
[-0.95067369, -0.30798506, 0.03694226]])
The way I would calculate a rotation matrix that rotates r_0 to r_1 is the following (different from your code!):
r0_to_r1 = r_1.dot(r_0.T)
r0_to_r1
Result:
array([[ 0.99635252, 0.08212126, 0.0231898 ],
[ 0.05746796, -0.84663889, 0.52905579],
[ 0.06308011, -0.52579339, -0.84827012]])
I use the extrinsic convention for concatenation of rotation matrices, that is, r_1 is applied after r_0.T. (If r_0 and r_1 were real numbers, we would write r_1 - r_0 to obtain a number that transforms r_0 to r_1.)
You can verify that r0_to_r1 rotates from r_0 to r_1:
from numpy.testing import assert_array_almost_equal
# verify correctness: apply r0_to_r1 after r_0
assert_array_almost_equal(r_1, r0_to_r1.dot(r_0))
# would raise an error if test fails
Anyway, the intrinsic convention would also work:
r0_to_r1_intrinsic = r_0.T.dot(r_1)
assert_array_almost_equal(r_1, r_0.dot(r0_to_r1_intrinsic))
Since zabop introduced pytransform3d, I would also like to clarify that scipy uses active rotation matrices, and that the rotation matrix pytransform3d.rotations.euler_xyz_from_matrix expects is a passive rotation matrix! This wasn't documented so clearly in previous versions. You can transform an active rotation matrix to a passive rotation matrix and vice versa with the matrix transpose. Both pytransform3d's function and scipy's Rotation.as_euler("xyz", ...) use the intrinsic concatenation convention.
from scipy.spatial.transform import Rotation as R
r = R.from_matrix(r0_to_r1)
euler_xyz_intrinsic_active_degrees = r.as_euler('xyz', degrees=True)
euler_xyz_intrinsic_active_degrees
Result: array([-148.20762964, -3.6166255 , 3.30106818])
You can obtain the same result with pytransform3d (note that we obtain the passive rotation matrix by .T):
import pytransform3d.rotations as pr
euler_xyz_intrinsic_active_radians = pr.euler_xyz_from_matrix(r0_to_r1.T)
np.rad2deg(euler_xyz_intrinsic_active_radians)
Result: array([-148.20762951, -3.61662542, 3.30106799])
You can also obtain the rotation matrix from euler angles with pytransform3d (note that we obtain the active rotation matrix by .T):
r0_to_r1_from_euler = pr.matrix_from_euler_xyz(euler_xyz_intrinsic_active_radians).T
r0_to_r1_from_euler
Result:
array([[ 0.99635251, 0.08212125, 0.0231898 ],
[ 0.05746796, -0.84663889, 0.52905579],
[ 0.06308011, -0.52579339, -0.84827013]])
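As a final sanity check (a sketch reusing the names defined above), converting the recovered Euler angles back to a rotation matrix with scipy should reproduce r0_to_r1 up to numerical precision:
from numpy.testing import assert_array_almost_equal
assert_array_almost_equal(
    R.from_euler('xyz', euler_xyz_intrinsic_active_degrees, degrees=True).as_matrix(),
    r0_to_r1
)  # passes silently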
I am conducting PCA on a dataset. I am attempting to add a line to my 3D graph which shows the first principal component. I have tried a few methods but have not been able to display the first principal component as a line in my 3D graph. Any help is greatly appreciated. My code is as follows:
import numpy as np
np.set_printoptions(suppress=True, precision=5, linewidth=150)
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import LabelEncoder
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
file_name = 'C:/Users/data'
input_data = pd.read_csv(file_name + '.csv', header=0, index_col=0)
A = input_data.A.values.astype(float)
B = input_data.B.values.astype(float)
C = input_data.C.values.astype(float)
D = input_data.D.values.astype(float)
E = input_data.E.values.astype(float)
F = input_data.F.values.astype(float)
X = np.column_stack((A, B, C, D, E, F))
ncompo = int(input("Number of components to study: "))
print("")
pca = PCA(n_components=ncompo)
pcafit = pca.fit(X)
cov_mat = np.cov(X, rowvar=0)
eig_vals, eig_vecs = np.linalg.eig(cov_mat)
perc = pcafit.explained_variance_ratio_
perc_x = range(1, len(perc)+1)
plt.plot(perc_x, perc)
plt.xlabel('Components')
plt.ylabel('Percentage of Variance Explained')
plt.show()
#3d Graph
plt.clf()
le = LabelEncoder()
le.fit(input_data.Grade)
number = le.transform(input_data.Grade)
colormap = np.array(['green', 'blue', 'red', 'yellow'])
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(D, E, F, c=colormap[number])
ax.set_xlabel('D')
ax.set_ylabel('E')
ax.set_zlabel('F')
plt.title('PCA')
plt.show()
Some remarks to begin with:
You are computing PCA twice! Computing PCA means computing the eigenvalues and eigenvectors of the covariance matrix. So either you use the sklearn function pca.fit, or you do it yourself. But you don't need to do both, unless you want to look inside pca.fit and see for yourself that it does exactly what you expect it to do (if this is what you wanted, fine; it is a good thing to do that kind of checking, and I did it once myself). Of course pca.fit has another advantage: once you have it, it also provides pca.transform to project points into the component space. But that, too, is simply a base change using the eigenvector matrix (that is, the change-of-basis matrix).
The pca object lets you access the eigenvectors (pca.components_) and the eigenvalues (pca.explained_variance_).
pca.fit is an 'in-place' method: it does not create a new PCA object, it just fits (and returns) the one you have. So there is no need to keep pcafit and use it.
This is not a minimal reproducible example, as required on SO. We should be able to copy and paste it and run it, to see exactly your problem, not to guess what kind of secret data you have. And in the meantime, it should be minimal. So it should contain example data generation (it doesn't matter if the data don't make sense; sometimes that is even better, since it allows some testing. In my code below, I generate my own noisy data along an axis, which allows me to verify that I am indeed able to "guess" what that axis was). Plus, since your problem concerns only the 3D plot, there is no need to include the plotting of explained variance here. That part is not part of your question.
Now, to plot the principal component: well, you already did the hard part. Twice. That is, computing it. It is the eigenvector associated with the highest eigenvalue.
With the pca object you don't need to search for it; the components are already sorted. So it is simply pca.components_[0]. And since you want to plot in the D, E, F space, you simply need to draw the vector pca.components_[0][3:].
With correct scaling. You can do that with plot, providing just two points (the first and the last).
Here is my version (which, by the way, also shows what a minimal reproducible example is):
import numpy as np
np.set_printoptions(suppress=True, precision=5, linewidth=150)
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import LabelEncoder
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
# Generation of random data along a given vector
vec=np.array([1, -1, 0.5, -0.5, 0.75, 0.75]).reshape(-1,1)
# 10000 random data, that are U[0,10]×vec + gaussian noise std=1
X=(vec*np.random.rand(10000)*10 + np.random.normal(0,1,(6,10000))).T
(A,B,C,D,E,F)=X.T
input_data = pd.DataFrame({'A':A,'B':B,'C':C,'D':D,'E':E, 'F':F, 'Grade':np.random.randint(1,5, (10000,))})
ncompo=6
pca = PCA(n_components=ncompo)
pca.fit(X)
# Redundant
cov_mat = np.cov(X, rowvar=0)
eig_vals, eig_vecs = np.linalg.eig(cov_mat)
# Compare the two computations
print("Eigen values")
print(eig_vals)
print(pca.explained_variance_)
print("Eigen vec")
print(eig_vecs)
print(pca.components_)
# Note: compare the first component to the generating vector
print("Main component")
print(vec/np.linalg.norm(vec))
print(pca.components_[0])
#3d Graph
le = LabelEncoder()
le.fit(input_data.Grade)
number = le.transform(input_data.Grade)
fig = plt.figure()
colormap = np.array(['green', 'blue', 'red', 'yellow'])
ax = fig.add_subplot(111, projection='3d')
ax.scatter(D, E, F, c=colormap[number])
U=pca.components_[0]
sc1=max(D)/U[3]
sc2=min(D)/U[3]
# Draw the 1st principal component as a blue line
ax.plot([sc1*U[3],sc2*U[3]], [sc1*U[4], sc2*U[4]], [sc1*U[5], sc2*U[5]], linewidth=3)
ax.set_xlabel('D')
ax.set_ylabel('E')
ax.set_zlabel('F')
plt.title('PCA')
plt.show()
My example is not that minimal, because I took advantage of it to illustrate my first remark, and I also computed PCA twice to compare both results.
So, here I print the eigenvalues:
Eigen values
[30.88941 1.01334 0.99512 0.96493 0.97692 0.98101]
[30.88941 1.01334 0.99512 0.98101 0.97692 0.96493]
(the 1st being your computation by diagonalisation of the covariance matrix, the 2nd pca.explained_variance_)
As you can see, they are the same, up to sorting (pca.explained_variance_ is sorted in descending order).
Likewise, the eigenvectors:
Eigen vec
[[-0.52251 -0.27292 0.40863 -0.06321 0.26699 0.6405 ]
[ 0.52521 0.07577 -0.34211 0.27583 -0.04161 0.72357]
[-0.26266 -0.41332 -0.60091 0.38027 0.47573 -0.16779]
[ 0.26354 -0.52548 0.47284 0.59159 -0.24029 -0.15204]
[-0.39493 0.63946 0.07496 0.64966 -0.08619 0.00252]
[-0.3959 -0.25276 -0.35452 -0.0572 -0.79718 0.12217]]
[[ 0.52251 -0.52521 0.26266 -0.26354 0.39493 0.3959 ]
[-0.27292 0.07577 -0.41332 -0.52548 0.63946 -0.25276]
[-0.40863 0.34211 0.60091 -0.47284 -0.07496 0.35452]
[-0.6405 -0.72357 0.16779 0.15204 -0.00252 -0.12217]
[-0.26699 0.04161 -0.47573 0.24029 0.08619 0.79718]
[-0.06321 0.27583 0.38027 0.59159 0.64966 -0.0572 ]]
Also the same, but for sorting and transposition.
Eigenvectors are presented column-wise when you diagonalize a matrix,
whereas for pca.components_ each row is an eigenvector.
But you can see that in the 1st matrix, the eigenvector associated with the biggest eigenvalue (since the biggest eigenvalue was the 1st one, that is the 1st column: -0.52, 0.52, ...)
is also the same as the first row of pca.components_.
Likewise, the 4th biggest eigenvalue in your diagonalisation was the last one.
And if you look at the last column of your eigenvectors (0.64, 0.72, -0.17, ...), it is the same as the 4th row of pca.components_ (up to an irrelevant ×-1 factor).
So, long story short: you already have the eigenvalues in pca.explained_variance_, sorted from biggest to smallest, and the eigenvectors in pca.components_, in the same order.
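A quick sketch to check this correspondence programmatically (reusing eig_vals, eig_vecs and pca from the code above):
order = np.argsort(eig_vals)[::-1]  # eigenvalue indices, biggest first
print(np.allclose(eig_vals[order], pca.explained_variance_))  # True
# each reordered eigenvector column should match a row of pca.components_, up to sign
for i, j in enumerate(order):
    same = np.allclose(eig_vecs[:, j], pca.components_[i], atol=1e-5)
    flipped = np.allclose(eig_vecs[:, j], -pca.components_[i], atol=1e-5)
    print(same or flipped)  # True for every component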
The last thing I print here is a comparison between the first component (pca.components_[0]) and the vector I used to generate the data in the first place (my data are all collinear to a vector vec, plus Gaussian noise).
Main component
[[ 0.52523]
[-0.52523]
[ 0.26261]
[-0.26261]
[ 0.39392]
[ 0.39392]]
[ 0.52251 -0.52521 0.26266 -0.26354 0.39493 0.3959 ]
As expected, PCA correctly found that main axis.
So, those were just side comments.
What you were really looking for is
ax.plot([sc1*U[3],sc2*U[3]], [sc1*U[4], sc2*U[4]], [sc1*U[5], sc2*U[5]], linewidth=3)
sc1 and sc2 are just scaling factors; here I chose them so that the line scales approximately like the data. Another way would have been to set ax.set_xlim, ax.set_ylim, ax.set_zlim from D.min(), D.max(), E.min(), E.max(), etc.,
and then just use big values for sc1 and sc2, like
sc1=1000
sc2=-1000
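Put together, that alternative would look like this (a sketch reusing U, D, E, F and ax from the code above):
ax.set_xlim(D.min(), D.max())
ax.set_ylim(E.min(), E.max())
ax.set_zlim(F.min(), F.max())
sc1, sc2 = 1000, -1000
# the line is made long enough to span the whole data range
ax.plot([sc1*U[3], sc2*U[3]], [sc1*U[4], sc2*U[4]], [sc1*U[5], sc2*U[5]], linewidth=3)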
I have two Numpy (complex) arrays A[t], B[t] defined over a grid of points t. These two arrays are convolved in such a way that I want a third array C[y] = (A*B)(y), where y needs to be exactly the same points as the t grid. The point is that both A and B need to be integrated from -∞ to ∞ according to the standard convolution operation.
I'm using scipy.signal.convolve for this, and I would also like to use fftconvolve, since my arrays are supposed to be big enough. However, when I try the module on a minimal working example, I seem to be doing things very wrong. Here is a piece of the code, where I choose A(t) = exp(-t**2) and B(t) = exp(-t). The convolution of these two functions in Mathematica gives:
C(y) = \int_{-\infty}^{\infty} A(t)\, B(y - t)\, dt = \sqrt{\pi}\, \exp(0.25 - y)
But then I try this in Python and get very wrong results:
import scipy.signal as scp
import numpy as np
import matplotlib.pyplot as plt
delta = 0.001
t = np.arange(1000)*delta
a = np.exp( -t**2 )
b = np.exp( -t )
c = scp.convolve(a, b, mode='same')*delta
d = np.sqrt(np.pi)*np.exp( 0.25 - t )
plt.plot(np.arange(len(c)) * delta, c)
plt.plot(t[::50], d[::50], 'o')
As far as I understood, the "same" mode should evaluate the convolution over the same points as the original grid, but this doesn't seem to be the case... Any help is greatly appreciated!
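To illustrate what mode='same' actually returns, here is a tiny sketch with made-up arrays (my own example, not the data above): it keeps the central len(a) samples of the 'full' output, which is not the same thing as evaluating the integral on the original t grid.
import numpy as np
import scipy.signal as scp

a = np.array([1., 2., 3., 4.])
b = np.array([1., 1., 1.])
print(scp.convolve(a, b, mode='full'))  # [1. 3. 6. 9. 7. 4.], length len(a)+len(b)-1
print(scp.convolve(a, b, mode='same'))  # [3. 6. 9. 7.], the central len(a) samples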
I'm trying to Fourier transform a matrix of 0's with a solid circle (like a pinhole) of 1's using Python. I am trying to get an image of an Airy Function, which should look like concentric circular ripples viewed from above. I'm still a bit of a beginner with Python and coding more generally.
import numpy as np
dimension = 256
list1 = []
listpiece = []
for i in range(dimension):
    for j in range(dimension):
        listpiece.append(0)
    list1.append(listpiece)
    listpiece = []
k=128
for i in range(dimension):
    for j in range(dimension):
        if (i-k)*(i-k) + (j-k)*(j-k) <= 64*2:
            list1[i][j] = 1
import matplotlib.pylab as plt
import scipy.sparse as sparse
plt.spy(list1)
plt.show()
Which gave this image of a black circle on a white background.
I then converted this list to a numpy array.
singledimlist = []
for i in range(dimension):
    for j in range(dimension):
        singledimlist.append(list1[i][j])
prefourierline = np.array( singledimlist )
shape = ( dimension, dimension )
prefourier = prefourierline.reshape( shape )
print(prefourier)
plt.spy(prefourier)
plt.show()
Which gave an identical image:
Using np.fft.fft2 gave a blank image, even though the output had very large changes:
from scipy.fftpack import fft, ifft
fouriered = np.fft.fft2(prefourier)
plt.spy(fouriered)
plt.show()
Output:
[[ 405. +0.00000000e+00j -401.08038516-1.50697234e-16j
389.47420686-2.31615451e-15j ... -370.63201656-5.88988318e-15j
389.47420686+2.35778788e-15j -401.08038516+8.95615360e-15j]
[-401.08038516-2.27306384e-15j 397.18553235-1.77932604e-15j
-385.65292606-1.63119926e-15j ... 366.93100304+7.84568423e-15j
-385.65292606-2.13934425e-15j 397.18553235-1.08069809e-14j]
[ 389.47420686+8.66313300e-15j -385.65292606-1.67296339e-14j
374.33891021+6.30297134e-15j ... -355.97430091-1.40810576e-14j
374.33891021+1.25700186e-14j -385.65292606-1.24588719e-14j]
...
[-370.63201656-4.69963986e-14j 366.93100304+4.87944288e-14j
-355.97430091-4.69561772e-14j ... 338.1937218 +3.81585557e-14j
-355.97430091-4.67444422e-14j 366.93100304+3.64531853e-14j]
[ 389.47420686+3.34933421e-14j -385.65292606-2.70693599e-14j
374.33891021+3.08443590e-14j ... -355.97430091-3.30709228e-14j
374.33891021+2.07603249e-14j -385.65292606-2.63513116e-14j]
[-401.08038516-5.83528175e-14j 397.18553235+7.09535468e-14j
-385.65292606-5.72142574e-14j ... 366.93100304+7.01916155e-14j
-385.65292606-6.12008707e-14j 397.18553235+6.47498390e-14j]]
So I tried using np.fft.fft instead, but fared little better: instead of a blank image, the output was a black horizontal stripe with the same width as the radius of the original circle, bisecting the white background.
from scipy.fftpack import fft, ifft
fouriered = np.fft.fft(prefourier)
plt.spy(fouriered)
plt.show()
I suspect the main problem lies between my computer screen and my chair.
My question is, what am I doing wrong? How does one Fourier transform an array of this sort?
Thanks, I'd be grateful for some help,
ES
There are multiple things going on, so I just provide a working example that uses numpy. The zeroes and ones are not a problem, since those are legitimate floating-point numbers too, so the physics is fine. There are two issues in finding the right answer in the output. One is to zoom in or, alternatively, to make the circle very small. Play with that and calculate the expected ring sizes from the closed-form solution (the Airy function).
The other is contrast. Below I just used a log to visualize better. An alternative would be to take a root. Also note that I didn't square the result (as the physics would indicate, i.e. intensity vs. electric field).
import matplotlib.pyplot as p
import numpy as np
n=1000
aa=np.ones((n,n))
x=np.linspace(-1,1,n)
y=np.linspace(-1,1,n)
X,Y= np.meshgrid(x,y) #this allows us to use vectorized approach, no for loops
R = np.sqrt(X**2+Y**2)
aa[R<0.1]=0
p.figure(figsize=(20,6))
p.subplot(131)
p.imshow(aa)
p.colorbar()
p.subplot(132)
spec= np.fft.fftshift(np.fft.fft2(aa))
p.imshow( np.log(np.abs(spec)))
p.colorbar()
p.title('airy func too fine to see')
p.subplot(133)
p.imshow( np.log(np.abs(spec[450:550,450:550])))
p.colorbar()
p.title('zoomed in');
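As a follow-up to the squaring remark above: the physical intensity is the squared modulus of the field. A minimal sketch, reusing spec from the code above:
intensity = np.abs(spec)**2  # |E|^2, i.e. intensity rather than field amplitude
p.figure()
p.imshow(np.log(intensity[450:550, 450:550]))  # same zoomed-in view as before
p.colorbar()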
I need your help.
I have to rebuild markers in 3D space from a stereo image pair. In my case, I would like to reconstruct the markers using an uncalibrated method.
I took two photos and marked the markers manually for now.
import cv2
import numpy as np
from matplotlib import pyplot as plt
from scipy import linalg
img1 = cv2.imread('3.jpg',0)
img2 = cv2.imread('4.jpg',0)
pts1 = np.array([(1599.6711229946527, 1904.8048128342245), (1562.131016042781, 1734.4304812834225), (1495.7139037433158, 1295.5),
(2373.5748663101604, 1604.4839572192514), (2362.0240641711234, 2031.8636363636363), (2359.136363636364, 2199.3502673796793),
(2656.5695187165775, 1653.5748663101604), (2676.7834224598937, 1506.302139037433), (2740.312834224599, 1026.9438502673797),
(1957.745989304813, 807.4786096256685)],dtype='float64')
pts2 = np.array([(1579.457219251337, 1899.0294117647059), (1539.0294117647059, 1737.3181818181818),
(1472.612299465241, 1307.0508021390374), (2315.8208556149734, 1633.3609625668448),
(2298.4946524064176, 2054.9652406417113), (2301.3823529411766, 2190.687165775401),
(2630.5802139037432, 1670.9010695187167), (2642.131016042781, 1538.066844919786),
(2711.4358288770054, 1076.0347593582887), (1949.0828877005351, 842.1310160427806)],dtype='float64')
Subsequently, I find the fundamental matrix:
F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_7POINT)
and print the result from cv2.computeCorrespondEpilines
link
It seems to work well!
I have the camera matrix, previously calibrated with a chessboard following the tutorial on the OpenCV website:
mtx=np.array([[3.19134206e+03, 0.00000000e+00, 2.01707613e+03],
[0.00000000e+00, 3.18501724e+03, 1.54542273e+03],
[0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
I extract the essential matrix, following what is reported in the book by Hartley and Zisserman:
E = K.t() * F * K
E = mtx.T * F * mtx
I decomposed this matrix to find the rotation and translation matrices
R1, R2, T = cv2.decomposeEssentialMat(E)
kr= np.dot(mtx,R1)
kt= np.dot(mtx,T)
projction2=np.hstack((kr,kt))
projction1 = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
obtaining the projection matrices.
P1 is the first matrix, which, as described in the book above, is P1 = [I | 0]; the second matrix is P2 = K[R | t].
Now I use the following code to triangulate the points:
points4D = cv2.triangulatePoints(projction1, projction2, pts1.T, pts2.T)
I convert the homogeneous coordinates into Cartesian coordinates, and the result is this:
coordinate_eucl= cv2.convertPointsFromHomogeneous(points4D.T)
coordinate_eucl=coordinate_eucl.reshape(-1,3)
x,y,z=coordinate_eucl.T
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, c='r', marker='o')
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
link
What am I doing wrong?
Thanks!
It is good to check each step individually. (You may want to look at the 4th step first.)
1- First, you said you calibrated the camera previously. How much reprojection error did you get? Did you do any checks to validate the success of your calibration? I also assume both your cameras are mostly identical.
2- If the fundamental matrix you found is correct (make sure your point lists are in the same order for both views, by the way), it should satisfy the epipolar constraint p'^T F p = 0, where p' is the point in the right view and p is the point in the left view (homogeneous pixel coordinates). Although the values will not be exactly 0, they should be close to 0. This equation must hold for all point correspondences; see the sketch after this list. If it does not hold, try using cv2.FM_RANSAC or skip to step 3.
3- Check whether you can directly calculate the essential matrix with OpenCV's function (cv2.findEssentialMat). A similar equation must hold for the essential matrix as well.
4- OpenCV's decomposeEssentialMat function returns two possible rotation matrices, and there are two possible translations (t and -t), so 4 possible [R|t] combinations in total. Try testing all of them. If you can get the correct solution with one of the 4 combinations, I will edit my answer to include how to find the correct combination.
If your fundamental / essential matrix calculations are correct and the problem still occurs, please let me know.
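A minimal sketch of the epipolar-constraint check from step 2 (assuming pts1, pts2 and F from the question; epipolar_residuals is a hypothetical helper name):
import numpy as np

def epipolar_residuals(F, pts1, pts2):
    # Homogeneous pixel coordinates, one row per correspondence
    p1 = np.column_stack([pts1, np.ones(len(pts1))])
    p2 = np.column_stack([pts2, np.ones(len(pts2))])
    # p2^T F p1 for each correspondence; all values should be close to 0
    return np.einsum('ij,jk,ik->i', p2, F, p1)

# print(np.abs(epipolar_residuals(F, pts1, pts2)))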
I have a curve defined by a minimum of two points in space, e.g.:
A = np.array([-1452.18133319, 3285.44737438, -7075.49516676])
B = np.array([-1452.20175668, 3285.29632734, -7075.49110863])
I want to find the tangent of the curve at discrete points along it, e.g. at the beginning and end of the curve. I know how to do it in Matlab, but I want to do it in Python. This is the code in Matlab:
A = [-1452.18133319 3285.44737438 -7075.49516676];
B = [-1452.20175668 3285.29632734 -7075.49110863];
points = [A; B];
distance = [0.; 0.1667];
pp = interp1(distance, points,'pchip','pp');
[breaks,coefs,l,k,d] = unmkpp(pp);
dpp = mkpp(breaks,repmat(k-1:-1:1,d*l,1).*coefs(:,1:k-1),d);
ntangent=zeros(length(distance),3);
for j=1:length(distance)
    ntangent(j,:) = ppval(dpp, distance(j));
end
%The solution would be at beginning and end:
%ntangent =
% -0.1225 -0.9061 0.0243
% -0.1225 -0.9061 0.0243
Any ideas? I tried to find the solution using numpy and scipy with multiple methods, e.g.
tck, u = scipy.interpolate.splprep(data)
but none of the methods seem to satisfy what I want.
Give der=1 to splev to get the derivative of the spline:
from scipy import interpolate
import numpy as np
t=np.linspace(0,1,200)
x=np.cos(5*t)
y=np.sin(7*t)
tck, u = interpolate.splprep([x,y])
ti = np.linspace(0, 1, 200)
dxdt, dydt = interpolate.splev(ti,tck,der=1)
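If you only need the tangent at discrete parameter values, such as the beginning and end of the curve, you can pass just those values to splev (a sketch continuing the code above):
# derivative at the first and last parameter values only
dxdt_ends, dydt_ends = interpolate.splev([0.0, 1.0], tck, der=1)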
OK, I found the solution, which is a little modification of pv's answer above (note that splev works only on 1D vectors).
One problem I was originally having with tck, u = scipy.interpolate.splprep(data) is that it requires a minimum of 4 points to work (Matlab works with two points), and I was using only two. After increasing the number of data points, it works as I want.
Here is the solution for completeness:
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate
data = np.array([[-1452.18133319 , 3285.44737438, -7075.49516676],
[-1452.20175668 , 3285.29632734, -7075.49110863],
[-1452.32645025 , 3284.37412457, -7075.46633213],
[-1452.38226151 , 3283.96135828, -7075.45524248]])
distance=np.array([0., 0.15247556, 1.0834, 1.50007])
data = data.T
tck,u = interpolate.splprep(data, u=distance, s=0)
yderv = interpolate.splev(u,tck,der=1)
and the tangents are (they match the Matlab results when the same data is used):
(-0.13394599723751408, -0.99063114953803189, 0.026614957159932656)
(-0.13394598523149195, -0.99063115868512985, 0.026614950816003666)
(-0.13394595055068903, -0.99063117647357712, 0.026614941718878599)
(-0.13394595652952143, -0.9906311632471152, 0.026614954146007865)
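One caveat, as a sketch: the derivative vectors above come out approximately unit-length only because u=distance approximates arc length; with an arbitrary parameterization you would normalize them explicitly:
tangents = np.array(yderv).T  # one (dx, dy, dz) row per data point
unit_tangents = tangents / np.linalg.norm(tangents, axis=1, keepdims=True)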