SVD not working as expected with complex matrix in scipy? - python

I'm trying to find approximate nonzero solutions of M @ x = 0 using SVD in scipy, where M is a complex-valued 4x4 matrix.
First a toy example:
import numpy as np
import scipy.linalg

M = np.array([
    [1, 1, 1, 1],
    [1, 0, 1, 1],
    [1, -1, 0, 0],
    [0, 0, 0, 1e-10]
])
U, s, Vh = scipy.linalg.svd(M)
print(s)                           # [2.57554368e+00 1.49380718e+00 3.67579714e-01 7.07106781e-11]
print(Vh[-1])                      # [ 0.00000000e+00 2.77555756e-16 -7.07106781e-01 7.07106781e-01]
print(np.linalg.norm(M @ Vh[-1]))  # 7.07106781193738e-11
So in this case the smallest (last) value in s is very small, and the corresponding last row Vh[-1] is an approximate solution to M @ x = 0: M @ Vh[-1] is also very small, roughly the same order as s[-1].
Now the real example which doesn't work the same way:
M = np.array([[ 1.68572560e-01-3.98053448e-02j, 5.61165939e-01-1.22638499e-01j,
3.39625823e-02-1.16216469e+00j, 2.65140034e-06-4.10296457e-06j],
[ 4.17991622e-01+1.33504182e-02j, -4.79190633e-01-2.08562169e-01j,
4.87429517e-01+3.68070222e-01j, -3.63710538e-05+6.43912577e-06j],
[-2.18353842e+06-4.20344071e+05j, -2.52806647e+06-2.08794519e+05j,
-2.01808847e+06-1.96246695e+06j, -5.77147300e-01-3.12598394e+00j],
[-3.03044160e+05-6.45842521e+04j, -6.85879183e+05+2.07045473e+05j,
6.14194217e+04-1.28864668e+04j, -7.08794838e+00+9.70230041e+00j]])
U, s, Vh = scipy.linalg.svd(M)
print(s) # [4.42615634e+06 5.70600901e+05 4.68468171e-01 5.21600592e-13]
print(Vh[-1]) # [-5.35883825e-05+0.00000000e+00j 3.74712739e-05-9.89288566e-06j 4.03111556e-06+7.59306578e-06j -8.20834667e-01+5.71165865e-01j]
print(np.linalg.norm(M @ Vh[-1]))  # 35.950705194666476
What's going on here? s[-1] is very small, so M @ x = 0 should have an approximate solution in principle, but Vh[-1] doesn't look like a solution. Is this an issue with M and Vh being complex? A numerical stability/accuracy issue? Something else?
I'd really like to figure out what x would give M @ x roughly the same order of magnitude as s[-1]; please let me know any way to solve this.

You forgot the conjugate transpose
The decomposition given by the SVD satisfies np.allclose(M, U @ np.diag(s) @ Vh). If s[-1] is small, it means the last column of U @ np.diag(s) ~ M @ np.linalg.inv(Vh) ~ M @ Vh.T.conj() is small. So you can use
M @ Vh[-1].T.conj()  # [-7.77136331e-14-3.74441041e-13j,
                     #   4.67810503e-14+3.45797987e-13j,
                     #  -2.84217094e-14-1.06581410e-14j,
                     #   7.10542736e-15+3.10862447e-15j]
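As a quick sanity check (a minimal sketch, reusing M, s and Vh from the question), the residual norm of the conjugated vector drops back to the order of s[-1]:

x = Vh[-1].conj()             # conjugate of the last right-singular vector
print(np.linalg.norm(M @ x))  # should be on the order of s[-1]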

Related

CVXOPT and portfolio optimization: puzzling issue

I am trying to solve the following simple optimization problem: find the global minimum variance portfolio, i.e. the portfolio that minimizes variance subject to a single constraint, namely that the weights must sum to one.
minimize wᵀ Σ w   subject to   1ᵀ w = 1
This problem has a well-known closed-form solution: w = Σ⁻¹ 1 / (1ᵀ Σ⁻¹ 1).
I'm trying to reproduce the results using CVXopt in Python, and I encounter a puzzling issue.
I have a 10x10 sample covariance matrix.
If I solve the problem with the entire 10x10 matrix, the output is incorrect: the "sum-to-one" constraint is not respected, and the weights differ from the closed-form solution.
If I solve the problem with a 7x7 subsample of the same matrix, the output is correct: the "sum-to-one" constraint is respected, and the weights match the closed-form solution.
In fact, it works with any subsample of size 7x7 or smaller, but fails for any subsample of size 8x8 or larger. I can't figure out where the problem is coming from.
Thank you for your help!
Here is some sample code:
%reset -sf #Clear Environment
import numpy as np
import cvxopt
from cvxopt import matrix as dmatrix
from cvxopt.solvers import qp, options
from numpy.linalg import inv
from numpy import ones
from numpy import transpose as t
# Sample 10x10 covariance matrix
sigmafull = np.array([[0.01449082, 0.00846992, 0.00846171, 0.00773097, 0.00878925,
0.00748843, 0.00672341, 0.00665912, 0.0068593 , 0.00827341],
[0.00846992, 0.00952205, 0.00766057, 0.00726647, 0.00781524,
0.00672368, 0.00642426, 0.00609368, 0.00617965, 0.00704281],
[0.00846171, 0.00766057, 0.00842194, 0.00700168, 0.00772423,
0.0061137 , 0.00612574, 0.00601041, 0.00621007, 0.00712152],
[0.00773097, 0.00726647, 0.00700168, 0.00687784, 0.00726901,
0.00573606, 0.00567145, 0.00556391, 0.00575279, 0.00660916],
[0.00878925, 0.00781524, 0.00772423, 0.00726901, 0.00860462,
0.00612804, 0.0061301 , 0.00603605, 0.00630947, 0.0075281 ],
[0.00748843, 0.00672368, 0.0061137 , 0.00573606, 0.00612804,
0.00634431, 0.0054793 , 0.00513665, 0.00511852, 0.00575049],
[0.00672341, 0.00642426, 0.00612574, 0.00567145, 0.0061301 ,
0.0054793 , 0.0055722 , 0.0050824 , 0.00512499, 0.00576934],
[0.00665912, 0.00609368, 0.00601041, 0.00556391, 0.00603605,
0.00513665, 0.0050824 , 0.00521583, 0.00510142, 0.00576414],
[0.0068593 , 0.00617965, 0.00621007, 0.00575279, 0.00630947,
0.00511852, 0.00512499, 0.00510142, 0.00547566, 0.00603528],
[0.00827341, 0.00704281, 0.00712152, 0.00660916, 0.0075281 ,
0.00575049, 0.00576934, 0.00576414, 0.00603528, 0.00756009]])
# sigma = sigmafull[0:8,0:8] #With this subsample, output is incorrect. n=8 (and for all n>=8)
sigma = sigmafull[0:7,0:7] #With this subsample, output is correct. n=7 (and for all n<=7)
n=len(sigma)
sigma = dmatrix(sigma) #Formatting sigma to be a dense matrix for cvxopt
mu = dmatrix(np.zeros(n)) #We just want to minimize variance, hence vector of zeroes
#Format of the equality constraint : Ax = b
#We want the sum of x to be equal to 1
Amatrix = dmatrix(ones(n)).T #Vector of ones
bmatrix = dmatrix(1.0) #Scalar = 1
sol = qp(sigma, mu, None, None, A=Amatrix, b=bmatrix) #No inequality constraint
w_gmv = (inv(sigma)@ones(n))/(t(ones(n))@inv(sigma)@ones(n)) #Analytical solution which indeed does sum to 1
print(t(np.array(sol['x'])) - w_gmv) #If Vector of zeroes -> Weights are equivalent -> OK
print(sum(np.array(sol['x']))) #If equal to 1 -> OK

Matrix left division of stacked arrays using numpy

I'm working on a program to solve the Bloch (or, more precisely, the Bloch-McConnell) equations in Python. The equation to solve is:
M(t) = exp(A*t) @ (M0 + A⁺ @ b) - A⁺ @ b
where A is an NxN matrix, A⁺ its pseudoinverse, and M0 and b are vectors of size N.
The special thing is that I want to solve the equation for several offsets (and thus several matrices A) at the same time. The new dimensions are:
A: MxNxN
b: Nx1
M0: MxNx1
The conventional version of the program (using a loop over the 1st dimension of size M) works fine, but I'm stuck at one point in the 'parallel version'.
At the moment my code looks like this:
def bmcsim(A, b, M0, timestep):
    ex = myexpm(A*timestep)  # returns stacked array of size MxNxN
    M = np.zeros_like(M0)
    for n in range(ex.shape[0]):
        A_tmp = A[n,:,:]
        A_b = np.linalg.lstsq(A_tmp, b, rcond=None)[0]
        M[n,:,:] = np.abs(np.real(np.dot(ex[n,:,:], M0[n,:,:] + A_b) - A_b))
    return M
and I would like to get rid of that for n in range(ex.shape[0]) loop. Unfortunately, np.linalg.lstsq doesn't work for stacked arrays, does it? In myexpm I used np.apply_along_axis for a similar problem:
def myexpm(A):
    vals, vects = np.linalg.eig(A)
    tmp = np.einsum('ijk,ikl->ijl', vects, np.apply_along_axis(np.diag, -1, np.exp(vals)))
    return np.einsum('ijk,ikl->ijl', tmp, np.linalg.inv(vects))
However, that only works for 1D input data. Is there something similar that I can use with np.linalg.lstsq? The np.dot in bmcsim will be replaced with np.einsum as in myexpm, I guess, or are there better ways?
Thanks for your help!
Update:
I just realized that I can replace np.linalg.lstsq(A,b) with np.linalg.solve(A.T.dot(A), A.T.dot(b)) and managed to get rid of the loop this way:
def bmcsim2(A, b, M0, timestep):
    ex = myexpm(A*timestep)
    b_stack = np.repeat(b[np.newaxis, :, :], offsets.size, axis=0)
    tmp_left = np.einsum('kji,ikl->ijl', np.transpose(A), A)
    tmp_right = np.einsum('kji,ikl->ijl', np.transpose(A), b_stack)
    A_b_stack = np.linalg.solve(tmp_left, tmp_right)
    return np.abs(np.real(np.einsum('ijk,ikl->ijl', ex, M0 + A_b_stack) - A_b_stack))
This is about 3 times faster, but still a bit complicated. I hope there is a better (shorter/easier) way that's maybe even faster?!
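For what it's worth, here is a minimal sketch of the same normal-equations idea using the batched @ operator and the fact that np.linalg.solve broadcasts over stacked matrices (assuming A has shape MxNxN, b shape Nx1 and M0 shape MxNx1 as above, and reusing myexpm from the question):

def bmcsim3(A, b, M0, timestep):
    ex = myexpm(A * timestep)              # M x N x N stack of matrix exponentials
    At = A.conj().transpose(0, 2, 1)       # batched conjugate transpose (plain transpose for real A)
    A_b = np.linalg.solve(At @ A, At @ b)  # normal equations; b broadcasts over the stack
    return np.abs(np.real(ex @ (M0 + A_b) - A_b))

This avoids both the explicit loop and the np.repeat of b, with the usual caveat that forming AᴴA squares the condition number.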

nearest intersection point to many lines in python

I need a good algorithm for calculating the point that is closest to a collection of lines in python, preferably by using least squares. I found this post on a python implementation that doesn't work:
Finding the centre of multiple lines using least squares approach in Python
And I found this resource in Matlab that everyone seems to like... but I'm not sure how to convert it to python:
https://www.mathworks.com/matlabcentral/fileexchange/37192-intersection-point-of-lines-in-3d-space
I find it hard to believe that someone hasn't already done this... surely this is part of numpy or a standard package, right? I'm probably just not searching for the right terms - but I haven't been able to find it yet. I'd be fine with defining lines by two points each or by a point and a direction. Any help would be greatly appreciated!
Here's an example set of points that I'm working with:
initial XYZ points for the first set of lines
array([[-7.07107037, 7.07106748, 1. ],
[-7.34818339, 6.78264559, 1. ],
[-7.61352972, 6.48335745, 1. ],
[-7.8667115 , 6.17372055, 1. ],
[-8.1072994 , 5.85420065, 1. ]])
the angles that belong to the first set of lines
[-44.504854, -42.029223, -41.278573, -37.145774, -34.097022]
initial XYZ points for the second set of lines
array([[ 0., -20. , 1. ],
[ 7.99789129e-01, -19.9839984, 1. ],
[ 1.59830153e+00, -19.9360366, 1. ],
[ 2.39423914e+00, -19.8561769, 1. ],
[ 3.18637019e+00, -19.7445510, 1. ]])
the angles that belong to the second set of lines
[89.13244, 92.39087, 94.86425, 98.91849, 99.83488]
The solution should be the origin or very near it (the data is just a little noisy, which is why the lines don't perfectly intersect at a single point).
Here's a numpy solution using the method described in the PDF linked in the docstring below:
def intersect(P0, P1):
    """P0 and P1 are NxD arrays defining N lines.
    D is the dimension of the space. This function
    returns the least squares intersection of the N
    lines from the system given by eq. 13 in
    http://cal.cs.illinois.edu/~johannes/research/LS_line_intersect.pdf.
    """
    # generate all line direction vectors
    n = (P1-P0)/np.linalg.norm(P1-P0, axis=1)[:, np.newaxis]  # normalized
    # generate the array of all projectors
    projs = np.eye(n.shape[1]) - n[:, :, np.newaxis]*n[:, np.newaxis]  # I - n*n.T
    # see fig. 1
    # generate R matrix and q vector
    R = projs.sum(axis=0)
    q = (projs @ P0[:, :, np.newaxis]).sum(axis=0)
    # solve the least squares problem for the
    # intersection point p: Rp = q
    p = np.linalg.lstsq(R, q, rcond=None)[0]
    return p
This works.
Edit: here is a generator for noisy test data
n = 6
P0 = np.stack([np.array([5, 5]) + 3*np.random.random(size=2) for i in range(n)])
a = np.linspace(0, 2*np.pi, n) + np.random.random(size=n)*np.pi/5.0
P1 = np.array([5 + 5*np.sin(a), 5 + 5*np.cos(a)]).T
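A quick usage sketch with that test data (assuming intersect and the arrays above are already defined):

print(intersect(P0, P1))  # least-squares point closest to all n noisy lines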
If this Wikipedia equation (nearest point to a set of lines, from the "Line-line intersection" article) carries any weight:
p = (Σᵢ (I - nᵢnᵢᵀ))⁻¹ Σᵢ (I - nᵢnᵢᵀ) pᵢ   (nᵢ the unit direction of line i, pᵢ a point on it)
then you can use:
def nearest_intersection(points, dirs):
    """
    :param points: (N, 3) array of points on the lines
    :param dirs: (N, 3) array of unit direction vectors
    :returns: (3,) array of intersection point
    """
    dirs_mat = dirs[:, :, np.newaxis] @ dirs[:, np.newaxis, :]
    points_mat = points[:, :, np.newaxis]
    I = np.eye(3)
    return np.linalg.lstsq(
        (I - dirs_mat).sum(axis=0),
        ((I - dirs_mat) @ points_mat).sum(axis=0),
        rcond=None
    )[0]
If you want help deriving / checking that equation from first principles, then math.stackexchange.com would be a better place to ask.
As for "surely this is part of numpy": note that numpy already gives you enough tools to express this very concisely.
Here's the final code that I ended up using. Thanks to kevinkayaks and everyone else who responded! Your help is very much appreciated!!!
The first half of this function simply converts the two collections of points and angles to direction vectors. I believe the rest of it is basically the same as what Eric and Eugene proposed. I just happened to have success first with Kevin's and ran with it until it was an end-to-end solution for me.
import numpy as np

def LS_intersect(p0, a0, p1, a1):
    """
    :param p0 : Nx2 (x,y) position coordinates
    :param p1 : Nx2 (x,y) position coordinates
    :param a0 : angles in degrees for each point in p0
    :param a1 : angles in degrees for each point in p1
    :return: least squares intersection point of N lines from eq. 13 in
             http://cal.cs.illinois.edu/~johannes/research/LS_line_intersect.pdf
    """
    ang = np.concatenate((a0, a1))  # create list of angles
    # create direction vectors with magnitude = 1
    n = []
    for a in ang:
        n.append([np.cos(np.radians(a)), np.sin(np.radians(a))])
    pos = np.concatenate((p0[:, 0:2], p1[:, 0:2]))  # create list of points
    n = np.array(n)
    # generate the array of all projectors
    nnT = np.array([np.outer(nn, nn) for nn in n])
    ImnnT = np.eye(len(pos[0])) - nnT  # orthocomplement projectors to n
    # now generate R matrix and q vector
    R = np.sum(ImnnT, axis=0)
    q = np.sum(np.array([np.dot(m, x) for m, x in zip(ImnnT, pos)]), axis=0)
    # and solve the least squares problem for the intersection point p
    return np.linalg.lstsq(R, q, rcond=None)[0]
#sample data
pa = np.array([[-7.07106638, 7.07106145, 1. ],
[-7.34817263, 6.78264524, 1. ],
[-7.61354115, 6.48336347, 1. ],
[-7.86671133, 6.17371816, 1. ],
[-8.10730426, 5.85419995, 1. ]])
paa = [-44.504854321138524, -42.02922380123842, -41.27857390748773, -37.145774853341386, -34.097022454778674]
pb = np.array([[-8.98220431e-07, -1.99999962e+01, 1.00000000e+00],
[ 7.99789129e-01, -1.99839984e+01, 1.00000000e+00],
[ 1.59830153e+00, -1.99360366e+01, 1.00000000e+00],
[ 2.39423914e+00, -1.98561769e+01, 1.00000000e+00],
[ 3.18637019e+00, -1.97445510e+01, 1.00000000e+00]])
pba = [88.71923357743934, 92.55801427272372, 95.3038321024299, 96.50212060095349, 100.24177145619092]
print("Should return (-0.03211692, 0.14173216)")
solution = LS_intersect(pa,paa,pb,pba)
print(solution)

Modified Gram Schmidt in Python for complex vectors

I wrote some code to implement the modified Gram-Schmidt process. When I tested it on real matrices, it was correct. However, when I tested it on complex matrices, it went wrong.
I believe my code is correct, having done a step-by-step check. Therefore, I wonder whether there are numerical reasons why the modified Gram-Schmidt process fails on complex vectors.
Following is the code:
import numpy as np

def modifiedGramSchmidt(A):
    """
    Gives an orthonormal matrix, using modified Gram Schmidt Procedure
    :param A: a matrix of column vectors
    :return: a matrix of orthonormal column vectors
    """
    # assuming A is a square matrix
    dim = A.shape[0]
    Q = np.zeros(A.shape, dtype=A.dtype)
    for j in range(0, dim):
        q = A[:,j]
        for i in range(0, j):
            rij = np.vdot(q, Q[:,i])
            q = q - rij*Q[:,i]
        rjj = np.linalg.norm(q, ord=2)
        if np.isclose(rjj, 0.0):
            raise ValueError("invalid input matrix")
        else:
            Q[:,j] = q/rjj
    return Q
Following is the test code:
import numpy as np

dim = 3
# If testing on random matrices:
# X = np.random.rand(dim,dim)*10 + np.random.rand(dim,dim)*5 *1j
# If testing on a known good one:
v1 = np.array([1, 0, 1j]).reshape((3,1))
v2 = np.array([-1, 1j, 1]).reshape((3,1))
v3 = np.array([0, -1, 1j+1]).reshape((3,1))
X = np.hstack([v1, v2, v3])
Y = modifiedGramSchmidt(X)
Y3 = np.linalg.qr(X, mode="complete")[0]
if np.isclose(Y3.conj().T.dot(Y3), np.eye(dim, dtype=complex)).all():
    print("The QR-complete gives orthonormal vectors")
if np.isclose(Y.conj().T.dot(Y), np.eye(dim, dtype=complex)).all():
    print("The Gram Schmidt process is tested against a random matrix")
else:
    print("But my modified GS goes wrong!")
    print(Y.conj().T.dot(Y))
Update
The problem is that I implemented an algorithm designed for an inner product that is linear in its first argument, whereas I thought it was linear in its second argument.
Thanks @landogardner
Your problem has to do with how numpy.vdot handles complex numbers: the complex conjugate of the first argument is used for the calculation (see the numpy docs). So you're calculating rij as q*.Q[:,i] instead of q.Q[:,i]*. Just swap the order of the args:
rij = np.vdot(Q[:,i], q)
This got the test code working for me.
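A minimal check of the fix (assuming modifiedGramSchmidt has been updated with the swapped vdot arguments, and reusing the 3x3 complex X from the question):

Q = modifiedGramSchmidt(X)
print(np.allclose(Q.conj().T @ Q, np.eye(3)))  # expected True once the vdot order is fixed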

Different eigenvalues between scipy.sparse.linalg.eigs and numpy/scipy.eig

Context:
My goal is to create a Python 3 program that performs differential operations on a vector V of size N. I did so, tested it for basic operations, and it works (differentiation, gradient, ...).
On that basis I tried to write more complex equations (Navier-Stokes, Orr-Sommerfeld, ...) and to validate my work by calculating the eigenvalues of these operators.
As these eigenvalues were completely unexpected, I simplified my problem, and I am currently trying to calculate the eigenvalues of the differentiation matrix alone (see below). But the results seem wrong...
Thanks in advance for your help, because I do not find any solution to my problem...
Definition of DM:
I use Chebyshev spectral method to operate the differentiation of vectors.
I use the following Chebyshev package (translated from Matlab to Python):
http://dip.sun.ac.za/%7Eweideman/research/differ.html
That package allow me to create a differentiation matrix DM, obtained with:
nodes, DM = chebyshev.chebdiff(N, maximal_order)
To obtain the 1st, 2nd, 3rd... order differentiation, I write for example:
dVdx1 = np.dot(DM[0,:,:], V)
d2Vdx2 = np.dot(DM[1,:,:], V)
d3Vdx3 = np.dot(DM[2,:,:], V)
I tested that and it works.
I've built different operators based on that differentiation process.
I've tried to validate them by finding their eigenvalues. It didn't go well, so right now I am just trying with DM alone.
I cannot manage to find the right eigenvalues of DM.
I've tried with different functions:
numpy.linalg.eigvals(DM)
scipy.linalg.eig(DM)
scipy.sparse.linalg.eigs(DM)
sympy.solve( (DM-x*np.eye).det(), x) [for small sizes only]
Why I use scipy.sparse.linalg.LinearOperator:
I do not want to use the matrix DM directly, so I wrapped it in a function which performs the differentiation (see code below), like this:
dVdx1 = derivative(V)
The reason I do that comes from the overall project itself; this is useful for more complicated equations.
Creating such a function prevents me from using the matrix DM directly to find its eigenvalues (because DM stays inside the function).
For that reason, I use a scipy.sparse.linalg.LinearOperator to wrap my derivative() method and use it as input to scipy.sparse.linalg.eigs().
Code and results:
Here is the code to compute these eigenvalues:
import numpy as np
import scipy
import scipy.linalg
import sympy
from scipy.sparse.linalg import aslinearoperator
from scipy.sparse.linalg import eigs
from scipy.sparse.linalg import LinearOperator
import chebyshev
N = 20 # should be 4, 20, 50, 100, 300
max_order = 4
option = 1
#option 1: building the differentiation matrix DM for a given order
if option == 1:
    if 0:
        # usage of package chebyshev, but I add a file with the matrix inside
        nodes, DM = chebyshev.chebdiff(N, max_order)
        order = 1
        DM = DM[order-1,:,:]
        #outfile = TemporaryFile()
        np.save('DM20', DM)
    if 1:
        # loading the matrix from the file
        # uncomment depending on N
        #DM = np.load('DM4.npy')
        DM = np.load('DM20.npy')
        #DM = np.load('DM50.npy')
        #DM = np.load('DM100.npy')
        #DM = np.load('DM300.npy')
#option 2: building a random matrix
elif option == 2:
    j = complex(0, 1)
    np.random.seed(0)
    Real = np.random.random((N, N)) - 0.5
    Im = np.random.random((N, N)) - 0.5
    # If I want DM symmetric:
    #Real = np.dot(Real, Real.T)
    #Im = np.dot(Im, Im.T)
    DM = Real + j*Im
    # If I want DM singular:
    #DM[0,:] = DM[1,:]
# Test DM symmetric
print('Is DM symmetric ? \n', (DM.transpose() == DM).all() )
# Test DM Hermitian
print('Is DM hermitian ? \n', (DM.transpose().real == DM.real).all() and
                              (DM.transpose().imag == -DM.imag).all() )
# building a linear operator which wraps matrix DM
def derivative(v):
    return np.dot(DM, v)

linop_DM = LinearOperator((N, N), matvec=derivative)

# building a linear operator directly from the matrix DM with aslinearoperator
aslinop_DM = aslinearoperator(DM)

# comparison of LinearOperator and direct dot product
V = np.random.random((N))
diff_lo = linop_DM.matvec(V)
diff_mat = np.dot(DM, V)
# diff_lo and diff_mat are equal
# FINDING EIGENVALUES

# number of eigenvalues to find
k = 1

if 1:
    # SCIPY SPARSE LINALG LINEAR OPERATOR
    vals_sparse, vecs = scipy.sparse.linalg.eigs(linop_DM, k, which='SR',
                                                 maxiter=10000,
                                                 tol=1E-3)
    vals_sparse = np.sort(vals_sparse)
    print('\nEigenvalues (scipy.sparse.linalg Linear Operator) : \n', vals_sparse)

if 1:
    # SCIPY SPARSE ARRAY
    vals_sparse2, vecs2 = scipy.sparse.linalg.eigs(DM, k, which='SR',
                                                   maxiter=10000,
                                                   tol=1E-3)
    vals_sparse2 = np.sort(vals_sparse2)
    print('\nEigenvalues (scipy.sparse.linalg with matrix DM) : \n', vals_sparse2)

if 1:
    # SCIPY SPARSE AS LINEAR OPERATOR
    vals_sparse3, vecs3 = scipy.sparse.linalg.eigs(aslinop_DM, k, which='SR',
                                                   maxiter=10000,
                                                   tol=1E-3)
    vals_sparse3 = np.sort(vals_sparse3)
    print('\nEigenvalues (scipy.sparse.linalg AS linear Operator) : \n', vals_sparse3)

if 0:
    # NUMPY LINALG / SAME RESULT AS SCIPY LINALG
    vals_np = np.linalg.eigvals(DM)
    vals_np = np.sort(vals_np)
    print('\nEigenvalues (numpy.linalg) : \n', vals_np)

if 1:
    # SCIPY LINALG
    vals_sp = scipy.linalg.eig(DM)
    vals_sp = np.sort(vals_sp[0])
    print('\nEigenvalues (scipy.linalg.eig) : \n', vals_sp)

if 0:
    x = sympy.Symbol('x')
    D = sympy.Matrix(DM)
    print('\ndet D (sympy):', D.det() )
    E = D - x*np.eye(DM.shape[0])
    eig_sympy = sympy.solve(E.det(), x)
    print('\nEigenvalues (sympy) : \n', eig_sympy)
Here are my results (for N=20):
Is DM symmetric ?
False
Is DM hermitian ?
False
Eigenvalues (scipy.sparse.linalg Linear Operator) :
[-2.5838015+0.j]
Eigenvalues (scipy.sparse.linalg with matrix DM) :
[-2.58059801+0.j]
Eigenvalues (scipy.sparse.linalg AS linear Operator) :
[-2.36137671+0.j]
Eigenvalues (scipy.linalg.eig) :
[-2.92933791+0.j -2.72062839-1.01741142j -2.72062839+1.01741142j
-2.15314244-1.84770128j -2.15314244+1.84770128j -1.36473659-2.38021351j
-1.36473659+2.38021351j -0.49536645-2.59716913j -0.49536645+2.59716913j
0.38136094-2.53335888j 0.38136094+2.53335888j 0.55256471-1.68108134j
0.55256471+1.68108134j 1.26425751-2.25101241j 1.26425751+2.25101241j
2.03390489-1.74122287j 2.03390489+1.74122287j 2.57770573-0.95982011j
2.57770573+0.95982011j 2.77749810+0.j ]
The values returned by scipy.sparse.linalg.eigs should be included among the ones found by scipy/numpy, which is not the case (same for sympy).
I've tried with different random matrices instead of DM (see option 2): symmetric, non-symmetric, real, complex, etc., of small size N (4, 5, 6, ...) and also bigger ones (100, ...). That worked.
By changing parameters like 'which' (LM, SM, LR, ...), 'tol' (1E-3, 1E-6, ...), 'maxiter' and 'sigma' (0), scipy.sparse.linalg.eigs always worked for random matrices but never for my matrix DM. In the best cases, the eigenvalues found are close to the ones found by scipy.linalg.eig, but they never match.
I really do not know what is so particular about my matrix.
I also don't know why using scipy.sparse.linalg.eigs with a matrix, a LinearOperator or an aslinearoperator wrapper gives different results.
I do not know how I could include my files containing the DM matrices...
For N = 4 :
[[ 3.16666667 -4. 1.33333333 -0.5 ]
[ 1. -0.33333333 -1. 0.33333333]
[-0.33333333 1. 0.33333333 -1. ]
[ 0.5 -1.33333333 4. -3.16666667]]
Every idea is welcome.
Maybe a moderator could tag my question with:
scipy.sparse.linalg.eigs / weideman / eigenvalues / scipy.eig / scipy.sparse.linalg.LinearOperator
Geoffroy.
I spoke with a few colleagues and partly solved my problem.
My conclusion is that my matrix is simply very ill-conditioned...
In my project, I can simplify my matrix by imposing boundary conditions as follows:
DM[0,:] = 0
DM[:,0] = 0
DM[N-1,:] = 0
DM[:,N-1] = 0
which produces a matrix like this one for N=4:
[[ 0 0 0 0]
[ 0 -0.33333333 -1. 0]
[ 0 1. 0.33333333 0]
[ 0 0 0 0]]
By using such conditions, I obtain eigenvalues from scipy.sparse.linalg.eigs that are equal to the ones from scipy.linalg.eig.
I also tried using Matlab, and it returns the same values.
To continue my work, I actually needed to solve the generalized eigenvalue problem in the standard form
λ B x = DM x
It seems that it does not work in my case because of my matrix B (which represents a Laplacian operator matrix).
If you have a similar problem, I advise you to visit that question:
https://scicomp.stackexchange.com/questions/10940/solving-a-generalised-eigenvalue-problem
(I think that) the matrix B needs to be positive definite for scipy.sparse.linalg.eigs to work.
A solution would be to change B, to use scipy.linalg.eig instead, or to use Matlab.
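For reference, a minimal sketch of the dense generalized solve (assuming DM and a matching NxN matrix B are plain numpy arrays; B here is hypothetical, standing in for the Laplacian-operator matrix mentioned above):

import scipy.linalg
# dense generalized eigenvalue problem DM x = lambda B x
# (no positive-definiteness requirement on B, unlike the sparse route)
eigvals, eigvecs = scipy.linalg.eig(DM, B)
print(np.sort(eigvals))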
I will confirm that later.
EDIT:
I wrote an answer to the Stack Exchange question linked above which explains how I solved my problem.
It appears that scipy.sparse.linalg.eigs indeed has a bug if matrix B is not positive definite, and will return bad eigenvalues.
