NumPy's eigh and eig yield inconsistent eigenvalues

I'm currently trying to solve the generalized eigenvalue problem in NumPy for two symmetric matrices, and I've run into trouble: I expect all eigenvalues to be positive, but eigh returns several very large numbers that are not all positive, while eig returns the correct, expected values (but is, of course, very slow).
In this case, note that K is symmetric as expected from its construction (here is the code in question):
# Calculate K matrix (<i|pHp|j> in the LGL-nodes basis)
for i in range(Ne):
    idx_s, idx_e = i*(Np-1), i*(Np-1)+Np
    K[idx_s:idx_e, idx_s:idx_e] += dmat.T.dot(diag(w*peq[idx_s:idx_e])).dot(dmat)

# Re-make matrix for efficient vector products
K = sparse.csr_matrix(K)

# Make matrix for <i|p|j> in the LGL basis as efficient diagonal sparse matrix
S = sparse.diags(peq*w_d, 0)

# Solve the generalized eigenvalue problem: Kc = lSc for hermitian matrices K and S
lQ, Q = linalg.eigh(K.todense(), S.todense())
_lQ, _Q = linalg.eig(K.todense(), S.todense())
lQ.sort()
_lQ.sort()
if not allclose(lQ, _lQ):
    print('Literally why')
    print(lQ)
    print(_lQ)
    return
For testing, dmat is defined as
array([[ -896. , 1212.00631086, -484.43454844, 275.06612251,
-179.85209531, 124.26620323, -83.05199285, 32. ],
[ -205.43460499, 0. , 290.78944413, -135.17191772,
82.83085126, -55.64467829, 36.70818656, -14.07728095],
[ 50.7185076 , -179.61445086, 0. , 184.03311398,
-87.85829324, 54.08144362, -34.37053351, 13.01021241],
[ -23.81762789, 69.05246008, -152.20398294, 0. ,
152.89115899, -72.66291308, 42.31407046, -15.57316561],
[ 15.57316561, -42.31407046, 72.66291308, -152.89115899,
0. , 152.20398294, -69.05246008, 23.81762789],
[ -13.01021241, 34.37053351, -54.08144362, 87.85829324,
-184.03311398, 0. , 179.61445086, -50.7185076 ],
[ 14.07728095, -36.70818656, 55.64467829, -82.83085126,
135.17191772, -290.78944413, 0. , 205.43460499],
[ -32. , 83.05199285, -124.26620323, 179.85209531,
-275.06612251, 484.43454844, -1212.00631086, 896. ]])
All of w, w_d, and peq are essentially arbitrary positive-valued arrays; w_d and w are of the same order (~1e-1), and the entries of peq range over roughly 1e-10 to 1e1.
Some of the output I'm getting is
Literally why
[ -6.25540943e+07 -4.82660391e+07 -2.62629052e+07 ..., 1.07960873e+10
1.07967334e+10 4.26007915e+10]
[ -5.25462340e-12+0.j 4.62614812e-01+0.j 1.23357898e+00+0.j ...,
2.17613917e+06+0.j 1.07967334e+10+0.j 4.26007915e+10+0.j]
EDIT:
Here's a self-contained version of the code for easier debugging
import numpy as np
from math import *
from scipy import sparse, linalg

# Variable declarations and such (pre-computed)
Ne, Np = 256, 8
N = Ne*Np - Ne + 1
domain_size = 4/Ne
x = np.array([-0.015625, -0.01362094, -0.00924532, -0.0032703, 0.0032703,
              0.00924532, 0.01362094, 0.015625])
w = np.array([0.00055804, 0.00329225, 0.00533004, 0.00644467, 0.00644467,
              0.00533004, 0.00329225, 0.00055804])
dmat = np.array([[ -896. , 1212.00631086, -484.43454844, 275.06612251,
-179.85209531, 124.26620323, -83.05199285, 32. ],
[ -205.43460499, 0. , 290.78944413, -135.17191772,
82.83085126, -55.64467829, 36.70818656, -14.07728095],
[ 50.7185076 , -179.61445086, 0. , 184.03311398,
-87.85829324, 54.08144362, -34.37053351, 13.01021241],
[ -23.81762789, 69.05246008, -152.20398294, 0. ,
152.89115899, -72.66291308, 42.31407046, -15.57316561],
[ 15.57316561, -42.31407046, 72.66291308, -152.89115899,
0. , 152.20398294, -69.05246008, 23.81762789],
[ -13.01021241, 34.37053351, -54.08144362, 87.85829324,
-184.03311398, 0. , 179.61445086, -50.7185076 ],
[ 14.07728095, -36.70818656, 55.64467829, -82.83085126,
135.17191772, -290.78944413, 0. , 205.43460499],
[ -32. , 83.05199285, -124.26620323, 179.85209531,
-275.06612251, 484.43454844, -1212.00631086, 896. ]])
# More declarations
x_d = np.zeros(N)
w_d = np.zeros(N)
dmat_d = np.zeros((N, N))
for i in range(Ne):
    x_d[i*(Np-1):i*(Np-1)+Np] = x + i*domain_size
    w_d[i*(Np-1):i*(Np-1)+Np] += w
    dmat_d[i*(Np-1):i*(Np-1)+Np, i*(Np-1):i*(Np-1)+Np] += dmat

peq = (np.cos((x_d-2)*pi/4))**2

# Normalization
peq = peq/np.sum(w_d*peq)
p0 = np.maximum(peq, 1e-10)
p0 /= np.sum(p0*w_d)

# Make efficient matrix that can be built
K = sparse.lil_matrix((N, N))

# Calculate K matrix (<i|pHp|j> in the LGL-nodes basis)
for i in range(Ne):
    idx_s, idx_e = i*(Np-1), i*(Np-1)+Np
    K[idx_s:idx_e, idx_s:idx_e] += dmat.T.dot(np.diag(w*p0[idx_s:idx_e])).dot(dmat)

# Re-make matrix for efficient vector products
K = sparse.csr_matrix(K)

# Make matrix for <i|p|j> in the LGL basis as efficient diagonal sparse matrix
S = sparse.diags(p0*w_d, 0)

# Solve the generalized eigenvalue problem: Kc = lSc for hermitian matrices K and S
lQ, Q = linalg.eigh(K.todense(), S.todense())
_lQ, _Q = linalg.eig(K.todense(), S.todense())
lQ.sort()
_lQ.sort()
if not np.allclose(lQ, _lQ):
    print('Literally why')
    print(lQ)
    print(_lQ)
EDIT2: This is really odd. Running all of the NumPy/SciPy tests on my machine, I receive no errors. But even running the simple test (with large enough matrices) as
import numpy as np
from scipy import linalg
M = np.random.random((1000,1000))
M += M.T
np.allclose(sorted(linalg.eigh(M)[0]), sorted(linalg.eig(M)[0]))
fails on my machine, though running the same test with a 50x50 matrix does work, even after rebuilding the SciPy/NumPy stack and passing all unit tests.
EDIT3: Actually, this seems to fail everywhere, after testing it on a cluster computer. I'm not sure why.
The above fails because of the in-place behaviour of += combined with .T being a view rather than a copy: in M += M.T, the right-hand side is a view of the very buffer being written, so entries updated early in the operation are read again later. The result is a matrix that is not exactly symmetric, and eigh (which assumes symmetry and only reads one triangle) then silently disagrees with eig.
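A minimal sketch of the aliasing problem and the safe alternative (recent NumPy releases detect overlapping operands and buffer them automatically, so the asymmetry may only reproduce on older versions):

import numpy as np

M = np.random.random((1000, 1000))

# In-place add against a view of the same buffer: entries updated early
# can be read again later, so the result may come out asymmetric.
M_inplace = M.copy()
M_inplace += M_inplace.T

# Safe form: the addition allocates a fresh array before assignment.
M_sym = M + M.T

print(np.allclose(M_sym, M_sym.T))          # True by construction
print(np.allclose(M_inplace, M_inplace.T))  # False on affected versions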

Related

cv2.perspectiveTransform() not performing the operation

I want to apply a transformation matrix to a set of points. So the set of points:
points = np.array([[0 ,20], [0, 575], [0, 460]])
And I want to use the matrix I calculated with cv2.getPerspectiveTransform() which is a 3x3 matrix.
matrix = np.array([
    [ -4.       , -3.       , 1920. ],
    [ -2.25     , -1.6875   , 1080. ],
    [ -0.0020833, -0.0015625,    1. ]])
Then I pass the array and a matrix to the following function:
def poly_points_transform(poly_points, matrix):
    poly_points_transformed = np.empty_like(poly_points)
    for i in range(len(poly_points)):
        point = np.array([[poly_points[i]]])
        transformed_point = cv2.perspectiveTransform(point, matrix)
        np.append(poly_points_transformed, transformed_point)
    return poly_points_transformed
Now it doesn't throw an error, but it just copies the src array to poly_points_transformed. It might be something really rudimentary; if so, I'm sorry, but could someone give me a hint on what is wrong? Thanks in advance.
We may solve it with one line of code:
transformed_point = cv2.perspectiveTransform(np.array([points], np.float64), matrix)[0]
As Micka commented, cv2.perspectiveTransform takes a list of points (and returns a list of points as output).
np.array([points]) is used because cv2.perspectiveTransform expects a 3D array.
For details see trouble getting cv.transform to work.
np.float64 is used in case the dtype of points is int32 (the method accepts float64 and float32 types).
[0] is used for removing the redundant dimension (convert from 3D to 2D).
To fix the loop, replace np.append(poly_points_transformed, transformed_point) with:
poly_points_transformed[i] = transformed_point[0]
Since poly_points_transformed is preallocated with np.empty_like(poly_points), we can't use np.append() here: it returns a new array instead of modifying its argument in place, so its result was being silently discarded.
Code sample:
import cv2
import numpy as np

points = np.array([[0.0, 20.0], [0.0, 575.0], [0.0, 460.0]])

matrix = np.array([
    [ -4.       , -3.       , 1920. ],
    [ -2.25     , -1.6875   , 1080. ],
    [ -0.0020833, -0.0015625,    1. ]])

# transformed_point = cv2.perspectiveTransform(np.array([points], np.float64), matrix)[0]

def poly_points_transform(poly_points, matrix):
    poly_points_transformed = np.empty_like(poly_points)
    for i in range(len(poly_points)):
        point = np.array([[poly_points[i]]])
        transformed_point = cv2.perspectiveTransform(point, matrix)
        poly_points_transformed[i] = transformed_point[0]  # was: np.append(poly_points_transformed, transformed_point)
    return poly_points_transformed
poly_points_transformed = poly_points_transform(points, matrix)
The result is:
poly_points_transformed =
array([[1920., 1080.],
       [1920., 1080.],
       [1920., 1080.]])
Why are we getting the value [1920.0, 1080.0] for all the transformed points?
Let's transform the middle point mathematically.
Multiply the matrix by the point (with 1 as the third component), i.e. p = matrix @ np.array([[0.0], [575.0], [1.0]]):

[ -4.       , -3.       , 1920. ]   [  0.0]   [1.950000e+02]
[ -2.25     , -1.6875   , 1080. ] * [575.0] = [1.096875e+02]
[ -0.0020833, -0.0015625,    1. ]   [  1.0]   [1.015625e-01]

Now divide the coordinates by the last element (converting homogeneous coordinates to Euclidean coordinates):

           [1.950000e+02 / 1.015625e-01]   [1920]
p / p[2] = [1.096875e+02 / 1.015625e-01] = [1080]
           [1.015625e-01 / 1.015625e-01]   [   1]

The equivalent Euclidean point is [1920, 1080].
The transformation matrix may be wrong, because it maps every input point with x coordinate equal to 0 to the same output point...
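A quick numerical check of the reasoning above, using only the points and matrix from the question:

import numpy as np

matrix = np.array([
    [-4.0      , -3.0      , 1920.0],
    [-2.25     , -1.6875   , 1080.0],
    [-0.0020833, -0.0015625,    1.0]])

# Apply the homogeneous transform to each point and divide by the last
# coordinate: every input with x == 0 collapses to the same Euclidean point.
for pt in [[0.0, 20.0], [0.0, 575.0], [0.0, 460.0]]:
    p = matrix @ np.array([pt[0], pt[1], 1.0])
    print(p[:2] / p[2])  # approximately [1920. 1080.] every time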

Frequencies for Hermitian Fourier Transform (`numpy.fft.hfft()`)? (hypothetical function `numpy.fft.hfftfreq()`)

For a normal FFT, NumPy implements the method fftfreq(n, d), which provides the frequencies of the FFT right away. However, for the Hermitian transform hfft, the companion function hfftfreq is missing. What would the function hfftfreq(n, d) return, if it existed?
Sources:
numpy.fft.fftfreq:
https://numpy.org/doc/stable/reference/generated/numpy.fft.fftfreq.html
Discrete Fourier Transform (numpy.fft):
https://numpy.org/doc/stable/reference/routines.fft.html
The hfft()/ihfft() pair is equivalent to the irfft()/rfft() pair, respectively, except for the normalization.
In particular, np.fft.hfft(arr, n, norm='forward') is identical to np.fft.irfft(arr, n, norm='backward') (the backward norm is the default).
import numpy as np
n = 3
k = 2 * (n - 1)
s = np.arange(n)
print(s.astype(np.complex_))
# [0.+0.j 1.+0.j 2.+0.j]
irft_s = np.fft.irfft(s, k)
print(irft_s)
# [ 1. -0.5 0. -0.5]
print(np.fft.irfft(s, k, norm='forward'))
# [ 4. -2. 0. -2.]
hft_s = np.fft.hfft(s, k)
print(hft_s)
# [ 4. -2. 0. -2.]
print(np.fft.hfft(s, k, norm='forward'))
# [ 1. -0.5 0. -0.5]
print(hft_s / k)
# [ 1. -0.5 0. -0.5]
# Get back to the original signal via the respective inverses
rr_s = np.fft.rfft(irft_s)
hh_s = np.fft.ihfft(hft_s)
print(rr_s, np.allclose(s, rr_s))
# [0.+0.j 1.+0.j 2.+0.j] True
print(hh_s, np.allclose(s, hh_s))
# [0.-0.j 1.-0.j 2.-0.j] True
# Get back to the original signal via the mixed inverses
hr_s = np.fft.rfft(hft_s, norm='forward')
rh_s = np.fft.ihfft(irft_s, norm='forward')
print(hr_s, np.allclose(s, hr_s))
# [0.+0.j 1.+0.j 2.+0.j] True
print(rh_s, np.allclose(s, rh_s))
# [0.-0.j 1.-0.j 2.-0.j] True
Hence, the frequencies associated with the result of hfft() are actually the "frequencies" associated with the result of irfft().
However, the "frequencies" associated with the inverse transform of a signal are the "times" of the samples of the signal, which are the natural numbers from 0 to n - 1 (n being the size of the signal):
np.arange(n)
Note that the samples of ifft() and irfft() (and hfft()) are the same.
Conversely, the samples associated with the result of ihfft() are given by np.fft.rfftfreq().
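Putting this together, a hypothetical hfftfreq() consistent with the reasoning above might look like the sketch below; the function name and the scaling by the sample spacing d (mirroring fftfreq(n, d)) are my own assumptions, not part of NumPy.

import numpy as np

def hfftfreq(n, d=1.0):
    # Hypothetical: since hfft(a, m) matches irfft(a, m) up to normalization,
    # its output lives on the sample ("time") grid 0, 1, ..., n - 1,
    # scaled here by the sample spacing d by analogy with fftfreq(n, d).
    return np.arange(n) * d

# The inverse direction is already covered by NumPy: the frequencies for
# ihfft() output are those of rfft(), i.e. np.fft.rfftfreq.
print(hfftfreq(4, d=0.5))         # [0.  0.5 1.  1.5]
print(np.fft.rfftfreq(4, d=0.5))  # [0.  0.5 1. ]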

Vectorized arange using np.einsum for raycast

I have a D-dimensional point p and vector v, a positive number n, and a resolution.
I want to get all of the points obtained by successively adding v*resolution to p, n/resolution times.
Example
p = np.array([3, 5])
v = np.array([-1.5, 3])
n = 10
resolution = 1.5
result:
[[  3.  ,   5.  ],
 [  0.75,   9.5 ],
 [ -1.5 ,  14.  ],
 [ -3.75,  18.5 ],
 [ -6.  ,  23.  ],
 [ -8.25,  27.5 ],
 [-10.5 ,  32.  ]]
My current approach is to tile the range, given by n and the resolution, by the dimension D, multiply that by v, and add p.
def getPoints(p, v, n, resolution=1.):
    dRange = np.tile(np.arange(0, n, resolution), (v.shape[0], 1))
    return np.multiply(v.reshape(-1, 1), dRange).T + p
Is there a direct way to calculate dRange using np.einsum or another method?
Approach #1
Here's one approach leveraging NumPy broadcasting -
np.arange(0, n, resolution)[:,None] * v + p
Basically, we extend the range array to 2D, keeping the second axis as a singleton, to let it broadcast in an elementwise multiplication against the 1D v, giving us a 2D array. Then, we add p to it.
Approach #2
There isn't any sum-reduction here, so np.einsum or any dot-based function, though it would work, won't lend any help on performance. Let's put it out anyway, as it was mentioned in the question -
np.einsum('i,j->ij',np.arange(0, n, resolution), v) + p
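A quick check, with the example values from the question, that both approaches agree:

import numpy as np

p = np.array([3, 5])
v = np.array([-1.5, 3])
n, resolution = 10, 1.5

steps = np.arange(0, n, resolution)
out1 = steps[:, None] * v + p              # Approach #1: broadcasting
out2 = np.einsum('i,j->ij', steps, v) + p  # Approach #2: einsum
print(np.allclose(out1, out2))             # True
print(out1[:3])
# [[ 3.    5.  ]
#  [ 0.75  9.5 ]
#  [-1.5  14.  ]]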

QR factorisation using modified Gram Schmidt

The question:
For this problem, you are given a list of matrices called As, and your job is to find the QR factorization for each of them.
Implement qr_by_gram_schmidt: This function takes as input a matrix A and computes a QR decomposition, returning two variables, Q and R where A=QR, with Q orthogonal and R zero below the diagonal.
A is an n×m matrix with n≥m (i.e. more rows than columns).
You should implement this function using the modified Gram-Schmidt procedure.
INPUT:
As: List of arrays
OUTPUT:
Qs: List of the Q matrices output by qr_by_gram_schmidt, in the same order as As. For a matrix A of shape n×m, Q should have shape n×m.
Rs: List of the R matrices output by qr_by_gram_schmidt, in the same order as As. For a matrix A of shape n×m, R should have shape m×m
I have written the code for the QR factorization which I believe is correct:
import numpy as np

def qr_by_gram_schmidt(A):
    m = np.shape(A)[0]
    n = np.shape(A)[1]
    Q = np.zeros((m, m))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:,j]
        for i in range(j):
            R[i,j] = Q[:,i].T * A[:,j]
            v = v.squeeze() - (R[i,j] * Q[:,i])
        R[j,j] = np.linalg.norm(v)
        Q[:,j] = (v / R[j,j]).squeeze()
    return Q, R
How do I write the loop to calculate the QR factorization of each of the matrices in As, storing the results in that order?
Edit: The code has some errors too. I would appreciate it if you could help me debug it.
Thanks
I didn't check your Gram-Schmidt code, but I had to make a change (which may not be correct!) to make it run. You just have to set up a list of your matrices; I made two of them, then looped through that list and applied your function.
import numpy as np

def gs(A):
    m = np.shape(A)[0]
    n = np.shape(A)[1]
    Q = np.zeros((m, m))
    R = np.zeros((n, n))
    print(m, n, Q, R)
    for j in range(n):
        v = A[:,j]
        for i in range(j):
            R[i,j] = np.dot(Q[:,i].T, A[:,j])  # I made an arbitrary change here!!!
            v = v.squeeze() - (R[i,j] * Q[:,i])
        R[j,j] = np.linalg.norm(v)
        Q[:,j] = (v / R[j,j]).squeeze()
    return Q, R

As = np.random.rand(2, 3, 3)  # list of 2 (3x3) matrices
print(As)

for A in As:
    print(gs(A))
Output:
[[[ 0.9599614 0.02213113 0.43343881]
[ 0.44202415 0.6816688 0.88321052]
[ 0.93098107 0.80528361 0.88473308]]
[[ 0.41794678 0.10762796 0.42110659]
[ 0.89598082 0.81225543 0.52947205]
[ 0.0621515 0.59826789 0.14021332]]]
(array([[ 0.68158915, -0.67980134, 0.27075149],
[ 0.31384477, 0.60583989, 0.73106736],
[ 0.66101262, 0.41331364, -0.626286 ]]), array([[ 1.40841649, 0.76132516, 1.15743793],
[ 0. , 0.73077208, 0.60610414],
[ 0. , 0. , 0.20894464]]))
(array([[ 0.42190511, -0.39510208, 0.81602109],
[ 0.90446656, 0.121136 , -0.40898205],
[ 0.06274013, 0.91061541, 0.40846452]]), array([[ 0.99061796, 0.81760207, 0.66535379],
[ 0. , 0.6006613 , 0.02543844],
[ 0. , 0. , 0.18435946]]))
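To build the Qs and Rs lists the problem statement asks for, a minimal sketch is to collect the outputs in order:

# Collect the factorizations in the same order as As, as required.
Qs, Rs = [], []
for A in As:
    Q, R = gs(A)
    Qs.append(Q)
    Rs.append(R)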

Difference between scipy pairwise distance and X.X+Y.Y - X.Y^t

Let's imagine we have data as
d1 = np.random.uniform(low=0, high=2, size=(3,2))
d2 = np.random.uniform(low=3, high=5, size=(3,2))
X = np.vstack((d1,d2))
X
array([[ 1.4930674 , 1.64890721],
[ 0.40456265, 0.62262546],
[ 0.86893397, 1.3590808 ],
[ 4.04177045, 4.40938126],
[ 3.01396153, 4.60005842],
[ 3.2144552 , 4.65539323]])
I want to compare two methods for generating the pairwise distances:
assuming that X and Y are the same:
(X-Y)^2 = X.X + Y.Y - 2*X.Y^t
Here is the first method, as it is used in scikit-learn for computing pairwise distances and, later, the kernel matrix.
import numpy as np

def cal_pdist1(X):
    Y = X
    XX = np.einsum('ij,ij->i', X, X)[np.newaxis, :]
    YY = XX.T
    distances = -2*np.dot(X, Y.T)
    distances += XX
    distances += YY
    return distances
cal_pdist1(X)
array([[ 0. , 2.2380968 , 0.47354188, 14.11610424,
11.02241244, 12.00213414],
[ 2.2380968 , 0. , 0.75800718, 27.56880003,
22.62893544, 24.15871196],
[ 0.47354188, 0.75800718, 0. , 19.37122424,
15.1050792 , 16.36714548],
[ 14.11610424, 27.56880003, 19.37122424, 0. ,
1.09274896, 0.74497242],
[ 11.02241244, 22.62893544, 15.1050792 , 1.09274896,
0. , 0.04325965],
[ 12.00213414, 24.15871196, 16.36714548, 0.74497242,
0.04325965, 0. ]])
Now, if I use the scipy pairwise distance function as below, I get
import scipy, scipy.spatial
pd_sparse = scipy.spatial.distance.pdist(X, metric='seuclidean')
scipy.spatial.distance.squareform(pd_sparse)
array([[ 0. , 0.92916653, 0.45646989, 2.29444795, 1.89740167,
2.00059442],
[ 0.92916653, 0. , 0.50798432, 3.22211357, 2.78788236,
2.90062103],
[ 0.45646989, 0.50798432, 0. , 2.72720831, 2.28001564,
2.39338343],
[ 2.29444795, 3.22211357, 2.72720831, 0. , 0.71411943,
0.58399694],
[ 1.89740167, 2.78788236, 2.28001564, 0.71411943, 0. ,
0.14102567],
[ 2.00059442, 2.90062103, 2.39338343, 0.58399694, 0.14102567,
0. ]])
The results are completely different! Shouldn't they be the same?
pdist(..., metric='seuclidean') computes the standardized Euclidean distance, not the squared Euclidean distance (which is what cal_pdist1 returns).
From the docs:
Y = pdist(X, 'seuclidean', V=None)
Computes the standardized Euclidean distance. The standardized Euclidean distance between two n-vectors u and v is
sqrt(sum((u_i - v_i)^2 / V[x_i]))
V is the variance vector; V[i] is the variance computed over all the i’th components of the points. If not passed, it is automatically computed.
Try passing metric='sqeuclidean', and you will see that both functions return the same result to within rounding error.
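A quick check of that claim, reusing X and cal_pdist1 from above:

import numpy as np
import scipy.spatial

# Squared Euclidean distances via scipy, for comparison with cal_pdist1(X).
pd_sq = scipy.spatial.distance.squareform(
    scipy.spatial.distance.pdist(X, metric='sqeuclidean'))
print(np.allclose(cal_pdist1(X), pd_sq))  # True, up to rounding error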
