Difference between Eigen SVD and np.linalg.svd - python

I'm currently trying to port some prototype Python code to C++, and I'm running into an issue where I get different SVD results between the two when using the exact same array.
Python code:
(U, S, V) = np.linalg.svd(H)
print("h\n",H)
print("u\n",U)
print("v\n",V)
print("s\n",S)
rotation_matrix = np.dot(U, V)
prints:
h
[[ 1.19586781e+00 -1.36504900e+00 3.04707238e+00]
[-3.24276981e-01 4.25640964e-01 -6.78455372e-02]
[ 4.58970250e-02 -7.33566042e-02 -2.96605698e-03]]
u
[[-0.99546325 -0.09501679 0.0049729 ]
[ 0.09441242 -0.97994807 0.17546529]
[-0.01179897 0.17513875 0.98447306]]
v
[[-0.34290622 0.39295764 -0.85322893]
[ 0.49311955 -0.6977843 -0.51954806]
[-0.79953014 -0.59890012 0.04549948]]
s
[3.5624894 0.43029207 0.00721429]
C++ code:
std::cout << "H\n" << HTest << std::endl;
Eigen::JacobiSVD<Eigen::MatrixXd> svd;
svd.compute(HTest, Eigen::ComputeThinV | Eigen::ComputeThinU);
std::cout << "h is" << std::endl << HTest << std::endl;
std::cout << "Its singular values are:" << std::endl << svd.singularValues() << std::endl;
std::cout << "Its left singular vectors are the columns of the thin U matrix:" << std::endl << -1*svd.matrixU() << std::endl;
std::cout << "Its right singular vectors are the columns of the thin V matrix:" << std::endl << -1*svd.matrixV() << std::endl;
prints:
h is
1.19587 -1.36505 3.04707
-0.324277 0.425641 -0.0678455
0.045897 -0.0733566 -0.00296606
Its singular values are:
3.56249
0.430292
0.00721429
Its left singular vectors are the columns of the thin U matrix:
-0.995463 -0.0950168 0.0049729
0.0944124 -0.979948 0.175465
-0.011799 0.175139 0.984473
Its right singular vectors are the columns of the thin V matrix:
-0.342906 0.49312 -0.79953
0.392958 -0.697784 -0.5989
-0.853229 -0.519548 0.0454995
So H, U, and S are equivalent between the two, but V is not. What could cause this?

I didn't notice that the V's were just transposes of each other. User chrslg has a good explanation of why, so I'll just copy it here:
"I'd say, "because" :-). I don't think there is a good reason. Just 2 implementations. In maths lessons, you've probably learned SVD decomposition with the formula M = U.S.Vᵀ. So the C++ library probably sticks to this formula and gives U, S, V such that M = U.S.Vᵀ. Whereas the linalg documentation says that it returns U, S, V such that M = (U*S) @ V. So one calls V what the other calls Vᵀ. Hard to say which one is right. As long as they do what their docs say they do."

Related

linalg.svd and JacobiSVD&lt;MatrixXf&gt; svd, the results are different

I'm translating the Python code into a C++ version, but I found that the two functions (linalg.svd and JacobiSVD) produce different results. What should I do?
A = np.array([[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16]])
U, S, V = svd(A,0)
print("U =\n", U)
print("S =\n", S)
print("V =\n", V)
MatrixXf m = MatrixXf::Zero(4,4);
m << 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16;
cout << "Here is the matrix m:" << endl << m << endl;
JacobiSVD<MatrixXf> svd(m, ComputeFullU | ComputeFullV);
cout << "Its singular values are:" << endl << svd.singularValues() << endl;
cout << "Its left singular vectors are the columns of the thin U matrix:" << endl << endl << svd.matrixU() << endl;
cout << "Its right singular vectors are the columns of the thin V matrix:" << endl << endl << svd.matrixV() << endl;
Forgive me for not being clear, but here are the Python and C++ code results:
U =
[[-0.13472212 -0.82574206 0.54255324 0.07507318]
[-0.3407577 -0.4288172 -0.77936056 0.30429774]
[-0.54679327 -0.03189234 -0.06893859 -0.83381501]
[-0.75282884 0.36503251 0.30574592 0.45444409]]
S =
[3.86226568e+01 2.07132307e+00 1.57283823e-15 3.14535571e-16]
V =
[[-0.4284124 -0.47437252 -0.52033264 -0.56629275]
[ 0.71865348 0.27380781 -0.17103786 -0.61588352]
[-0.19891147 -0.11516042 0.82705525 -0.51298336]
[ 0.51032757 -0.82869661 0.12641052 0.19195853]]
C++
Here is the matrix m:
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
Its singular values are:
38.6227
2.07132
2.69062e-16
6.823e-17
Its left singular vectors are the columns of the thin U matrix:
0.134722 0.825742 0.0384608 0.546371
0.340758 0.428817 0.35596 -0.757161
0.546793 0.0318923 -0.827301 -0.12479
0.752829 -0.365033 0.432881 0.33558
Its right singular vectors are the columns of the thin V matrix:
0.428412 -0.718653 -0.124032 0.533494
0.474373 -0.273808 -0.232267 -0.803774
0.520333 0.171038 0.83663 0.00706489
0.566293 0.615884 -0.480331 0.263215
It turns out that there are some small deviations; will this affect my work?
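For what it's worth, the visible differences are the usual SVD ambiguities: NumPy's V is again the transpose of Eigen's matrixV(), individual singular-vector columns can differ by a sign, and the two singular values that are numerically zero have essentially arbitrary vectors. A quick sanity check is the reconstruction error; a minimal sketch reusing the m and svd objects from the C++ snippet above:
// Rebuild A from the decomposition; sign flips and the near-zero singular values
// do not affect this product, so the error should be around single-precision round-off.
MatrixXf reconstructed = svd.matrixU()
                       * svd.singularValues().asDiagonal()
                       * svd.matrixV().transpose();
cout << "max reconstruction error: " << (reconstructed - m).cwiseAbs().maxCoeff() << endl;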

Eigen OLS vs Python statsmodels.api.OLS

I need to calculate the slope and intercept of the line for a regression between 2 vectors of data. So I made a prototype in Python with the code below:
A = [1,2,5,7,14,17,19]
b = [2,14,6,7,13,27,29]
A = sm.add_constant(A)
results = sm.OLS(A, b).fit()
print("results: ", results.params)
output: [0.04841897 0.64278656]
Now I need to replicate this using the Eigen lib in C++, and as I understood it, I need to add a column of 1s to the matrix A. If I do so, I get totally different results for the regression than if I use no second column or a column of 0s. C++ code below:
Eigen::VectorXd A(7);
Eigen::VectorXd b(7);
A << 1,2,5,7,14,17,19;
b << 2,14,6,7,13,27,29;
MatrixXd new_A(A.rows(), 2);
VectorXd d = VectorXd::Constant(A.rows(), 1);
new_A << A, d;
Eigen::MatrixXd res = new_A.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(b);
cout << " slope: " << res.coeff(0, 0) << " intercept: " << res.coeff(1, 0) << endl;
cout << "dbl check: " << (new_A.transpose() * new_A).ldlt().solve(new_A.transpose() * b) << endl;
output with '1' column added to new_A -> slope: 1.21644 intercept: 2.70444
output with '0' or no column added -> slope: 0.642787 intercept: 0
How do I get the same results in C++? Which one is right? I tend to trust the Python one more, since I get the same value when I use the 0 column.
thank you,
Merlin
It seems I had to swap new_A with b, and replace ComputeThin with ComputeFull so that it builds.
Eigen::MatrixXd res = b.bdcSvd(Eigen::ComputeFullU | Eigen::ComputeFullV).solve(new_A);
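A note on why the swap reproduces the Python numbers (assuming statsmodels' OLS(endog, exog) argument order): sm.OLS(A, b) treats the constant-augmented A as the dependent data and b as the single regressor with no separate intercept, so each column of A is regressed on b. The swapped Eigen call computes the same thing:
// Regress each column of new_A on b (no intercept): x = (b'b)^-1 * b' * new_A.
// Expected roughly 0.642787 and 0.048419 -- the same numbers as Python, column order
// swapped because add_constant() puts the constant first while new_A puts it last.
Eigen::MatrixXd params = b.bdcSvd(Eigen::ComputeFullU | Eigen::ComputeFullV).solve(new_A);
cout << params << endl;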

Save data (matrix or ndarray) with Python, then load it in C++ (as OpenCV Mat)

I've created some data in numpy that I would like to use in a separate C++ program. Therefore I need to save the data using python and later load it in C++. What is the best way of doing this?
My numpy ndarray is float32 and of shape [10000 x 18 x 5]. I can save it for example using
numpy.save(filename, data)
Is there an easy way to load such data in C++? Target structure could be an Eigen::Matrix for example.
After searching for hours I found my year-old example files.
Caveats:
solution only covers 2D matrices
not suited for 3-dimensional or generic ndarrays
Write the numpy array to an ASCII file with a header specifying nrows and ncols:
def write_matrix2D_to_ascii(filename, matrix2D):
    nrows, ncols = matrix2D.shape
    with open(filename, "w") as file:
        # write header [rows x cols]
        nrows, ncols = matrix2D.shape
        file.write(f"{nrows} {ncols}")
        file.write("\n")
        # write values
        for row in range(nrows):
            for col in range(ncols):
                value = matrix2D[row, col]
                file.write(str(value))
                file.write(" ")
            file.write("\n")
Example output data-file.txt looks like this (first row is header specifying nrows and ncols):
2 3
1.0 2.0 3.0
4.0 5.0 6.0
C++ function to read the matrix from the ASCII file into an OpenCV matrix:
#include <iostream>
#include <fstream>
#include <iomanip> // set precision of output string
#include <opencv2/core/core.hpp> // OpenCV matrices for storing data
using namespace std;
using namespace cv;
void readMatAsciiWithHeader( const string& filename, Mat& matData)
{
    cout << "Create matrix from file :" << filename << endl;
    ifstream inFileStream(filename.c_str());
    if(!inFileStream){
        cout << "File cannot be found" << endl;
        exit(-1);
    }
    int rows, cols;
    inFileStream >> rows;
    inFileStream >> cols;
    matData.create(rows, cols, CV_32F);
    cout << "numRows: " << rows << "\t numCols: " << cols << endl;
    matData.setTo(0); // init all values to 0
    float *dptr;
    for(int ridx=0; ridx < matData.rows; ++ridx){
        dptr = matData.ptr<float>(ridx);
        for(int cidx=0; cidx < matData.cols; ++cidx, ++dptr){
            inFileStream >> *dptr;
        }
    }
    inFileStream.close();
}
Driver code to use the above function in a C++ program:
Mat myMatrix;
readMatAsciiWithHeader("path/to/data-file.txt", myMatrix);
For completeness, some code to save the data using C++:
int saveMatAsciiWithHeader( const string& filename, Mat& matData)
{
    if (matData.empty()){
        cout << "File could not be saved. MatData is empty" << endl;
        return 0;
    }
    ofstream oStream(filename.c_str());
    // Create header
    oStream << matData.rows << " " << matData.cols << endl;
    // Write data
    for(int ridx=0; ridx < matData.rows; ridx++)
    {
        for(int cidx=0; cidx < matData.cols; cidx++)
        {
            oStream << setprecision(9) << matData.at<float>(ridx, cidx) << " ";
        }
        oStream << endl;
    }
    oStream.close();
    cout << "Saved " << filename.c_str() << endl;
    return 1;
}
Future work:
solution for 3D matrices
conversion to Eigen::Matrix (see the sketch below)
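For the Eigen conversion item, OpenCV already ships an interop header; a minimal sketch, assuming the CV_32F matrix loaded by readMatAsciiWithHeader above:
#include <Eigen/Dense>
#include <opencv2/core/eigen.hpp> // cv::cv2eigen / cv::eigen2cv (include after Eigen)

// Convert the loaded CV_32F cv::Mat into a dynamically sized Eigen matrix.
Eigen::MatrixXf eigenMatrix;
cv::cv2eigen(myMatrix, eigenMatrix);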

LibSVM Predict Not working

I've coded my classifier with libsvm's Python utility, and it has worked quite well so far. Here is an example of how I call the Python API:
print svmutil.svm_predict([2], [f.flatten().tolist()], libsvm_model, '-b 1')
where f is a (1024,1) vector.
I have saved the model, and loaded it using the C++ API. However, when I attempt to load and predict the same vector, it gives me wrong results.
cv::Mat oneCol = fcMat.row(0);
svm_node *x = (struct svm_node *) malloc(1025*sizeof(struct svm_node));
for(int i=0; i<1024; i++){
    x[i].index = i;
    x[i].value = (double)oneCol.at<float>(i);
}
x[1024].index = -1;
double *prob_estimates=NULL;
prob_estimates = (double *) malloc(svmModel->nr_class*sizeof(double));
double retVal = svm_predict_probability(svmModel, x, prob_estimates);
cout << retVal << endl;
for(int j=0; j<svmModel->nr_class; j++)
    cout << prob_estimates[j] << endl;
Here, I attempt to load a vector from an OpenCV object as shown. However, the prediction comes out wrong. Is something wrong here?
for(int i=0; i<1024; i++){
    x[i].index = i+1;
    x[i].value = (double)oneCol.at<float>(i);
}
In LibSVM, indexes start at 1. Who knew :(
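Putting it together, a minimal sketch of the corrected node array (assuming the same 1024-dimensional feature row oneCol and the -1 terminator LibSVM expects):
// LibSVM's sparse format uses 1-based feature indices; the array ends with index = -1.
svm_node *x = (svm_node *) malloc(1025 * sizeof(svm_node));
for (int i = 0; i < 1024; i++) {
    x[i].index = i + 1;                        // 1-based feature index
    x[i].value = (double)oneCol.at<float>(i);  // feature value from the cv::Mat row
}
x[1024].index = -1;                            // terminator node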

Detect gray things with OpenCV

I'd like to detect an object with OpenCV that is distinctly different from the other elements in the scene because it's gray. This is convenient because I can just run an R == G == B test, which is independent of luminosity, but doing it pixel by pixel is slow.
Is there a faster way to detect gray things? Maybe there's an OpenCV method that does the R == G == B test... cv2.inRange does color thresholding, but it's not quite what I'm looking for.
The fastest method I can find in Python is to use slicing to compare each channel. After a few test runs, this method is upwards of 200 times faster than two nested for-loops.
bg = im[:,:,0] == im[:,:,1] # B == G
gr = im[:,:,1] == im[:,:,2] # G == R
slices = np.bitwise_and(bg, gr, dtype= np.uint8) * 255
This will generate a binary image where gray objects are indicated by white pixels. If you do not need a binary image, but only a logical array where grey pixels are indicated by True values, this method gets even faster:
slices = np.bitwise_and(bg, gr)
Omitting the type cast and multiplication yields a method 500 times faster than nested loops.
Running this operation on a test image correctly detects the gray object.
I'm surprised that such a simple check is slow; you are probably not coding it efficiently. Here is a short piece of code that should do it for you. It is optimal neither in speed nor in memory, but it is in number of lines of code :)
std::vector<cv::Mat> planes;
cv::split(image, planes);
cv::Mat mask = planes[0] == planes[1];
mask &= planes[1] == planes[2];
For the sake of it, here is a comparison with what would, in my opinion, be the fastest way to do it (without parallelization):
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <vector>
#include <cassert>
#include <sys/time.h> // gettimeofday

static double P_ellapsedTime(struct timeval t0, struct timeval t1)
{
    // return elapsed time in seconds
    return (t1.tv_sec - t0.tv_sec) * 1.0 + (t1.tv_usec - t0.tv_usec) / 1000000.0;
}

int main(int argc, char* argv[])
{
    struct timeval t0, t1;
    cv::Mat image = cv::imread(argv[1]);
    assert(image.type() == CV_8UC3);
    std::vector<cv::Mat> planes;
    std::cout << "Image resolution=" << image.rows << "x" << image.cols << std::endl;

    gettimeofday(&t0, NULL);
    cv::split(image, planes);
    cv::Mat mask = planes[0] == planes[1];
    mask &= planes[1] == planes[2];
    gettimeofday(&t1, NULL);
    std::cout << "Time using split: " << P_ellapsedTime(t0, t1) << "s" << std::endl;

    cv::Mat mask2 = cv::Mat::zeros(image.size(), CV_8U);
    unsigned char *imgBuf = image.data;
    unsigned char *maskBuf = mask2.data;
    gettimeofday(&t0, NULL);
    for (; imgBuf != image.dataend; imgBuf += 3, maskBuf++)
        *maskBuf = (imgBuf[0] == imgBuf[1] && imgBuf[1] == imgBuf[2]) ? 255 : 0;
    gettimeofday(&t1, NULL);
    std::cout << "Time using loop: " << P_ellapsedTime(t0, t1) << "s" << std::endl;

    cv::namedWindow("orig", 0);
    cv::imshow("orig", image);
    cv::namedWindow("mask", 0);
    cv::imshow("mask", mask);
    cv::namedWindow("mask2", 0);
    cv::imshow("mask2", mask2);
    cv::waitKey(0);
}
Bench on an image:
Image resolution=3171x2179
Time using split: 0.06353s
Time using loop: 0.029044s
