QR factorisation using modified Gram-Schmidt - Python

The question:
For this problem, you are given a list of matrices called As, and your job is to find the QR factorization for each of them.
Implement qr_by_gram_schmidt: This function takes as input a matrix A and computes a QR decomposition, returning two variables, Q and R where A=QR, with Q orthogonal and R zero below the diagonal.
A is an n×m matrix with n≥m (i.e. at least as many rows as columns).
You should implement this function using the modified Gram-Schmidt procedure.
INPUT:
As: List of arrays
OUTPUT:
Qs: List of the Q matrices output by qr_by_gram_schmidt, in the same order as As. For a matrix A of shape n×m, Q should have shape n×m.
Rs: List of the R matrices output by qr_by_gram_schmidt, in the same order as As. For a matrix A of shape n×m, R should have shape m×m.
I have written the code for the QR factorization which I believe is correct:
import numpy as np

def qr_by_gram_schmidt(A):
    m = np.shape(A)[0]
    n = np.shape(A)[1]
    Q = np.zeros((m, m))
    R = np.zeros((n, n))
    for j in xrange(n):
        v = A[:,j]
        for i in xrange(j):
            R[i,j] = Q[:,i].T * A[:,j]
            v = v.squeeze() - (R[i,j] * Q[:,i])
        R[j,j] = np.linalg.norm(v)
        Q[:,j] = (v / R[j,j]).squeeze()
    return Q, R
How do I write the loop that calculates the QR factorization of each of the matrices in As and stores the results in that order?
Edit: the code also has some errors; I would appreciate help debugging it.
Thanks

I didn't check your GS code, but I had to make one change (which may not be correct!) just to get it to run. Beyond that, you just have to set up a list of your matrices (I made two of them) and then loop through that list, applying your function to each one.
import numpy as np

def gs(A):
    m = np.shape(A)[0]
    n = np.shape(A)[1]
    Q = np.zeros((m, m))
    R = np.zeros((n, n))
    print m, n, Q, R
    for j in xrange(n):
        v = A[:,j]
        for i in xrange(j):
            R[i,j] = np.dot(Q[:,i].T, A[:,j])  # I made an arbitrary change here!!!
            v = v.squeeze() - (R[i,j] * Q[:,i])
        R[j,j] = np.linalg.norm(v)
        Q[:,j] = (v / R[j,j]).squeeze()
    return Q, R
As = np.random.rand(2,3,3)  # list of 2 (3x3) matrices
print As
for A in As:
    print gs(A)
Output:
[[[ 0.9599614 0.02213113 0.43343881]
[ 0.44202415 0.6816688 0.88321052]
[ 0.93098107 0.80528361 0.88473308]]
[[ 0.41794678 0.10762796 0.42110659]
[ 0.89598082 0.81225543 0.52947205]
[ 0.0621515 0.59826789 0.14021332]]]
(array([[ 0.68158915, -0.67980134, 0.27075149],
[ 0.31384477, 0.60583989, 0.73106736],
[ 0.66101262, 0.41331364, -0.626286 ]]), array([[ 1.40841649, 0.76132516, 1.15743793],
[ 0. , 0.73077208, 0.60610414],
[ 0. , 0. , 0.20894464]]))
(array([[ 0.42190511, -0.39510208, 0.81602109],
[ 0.90446656, 0.121136 , -0.40898205],
[ 0.06274013, 0.91061541, 0.40846452]]), array([[ 0.99061796, 0.81760207, 0.66535379],
[ 0. , 0.6006613 , 0.02543844],
[ 0. , 0. , 0.18435946]]))
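To also produce the Qs and Rs lists the assignment asks for, and to address the remaining issues in the posted function (Q was allocated m×m rather than n×m for a non-square A, and R[i,j] was computed from the original column A[:,j] instead of the running vector v, which is the point of the modified procedure), here is a minimal sketch (Python 3, untested against the grader):

import numpy as np

def qr_by_gram_schmidt(A):
    n, m = A.shape                      # n rows >= m columns
    Q = np.zeros((n, m))                # reduced Q is n x m
    R = np.zeros((m, m))                # R is m x m
    for j in range(m):
        v = A[:, j].astype(float)       # running vector, updated in place
        for i in range(j):
            R[i, j] = np.dot(Q[:, i], v)  # project against the updated v (modified GS)
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

Qs, Rs = [], []
for A in As:                            # As is the given list of matrices
    Q, R = qr_by_gram_schmidt(A)
    Qs.append(Q)
    Rs.append(R)

Each factor can then be sanity-checked with np.allclose(A, Q.dot(R)).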

Related

Error when trying to do Multi-Dimensional Scaling in python

I am trying to get 2D coordinates between 0 and 1 from a distance matrix, following the methods specified in the following posts on Multi-Dimensional Scaling:
https://math.stackexchange.com/questions/156161/finding-the-coordinates-of-points-from-distance-matrix
How to implement finding the coordinates of points from distance matrix in python based on gram-matrix?
However, when trying to implement it I get an error and I struggle to find the source of this error. This is what I am doing:
This is the distance matrix (as you can see, the maximum distance is 1, so all the points can be between 0 and 1):
import numpy as np
import math
distance_matrix = np.array(
    [
        [0.0, 0.47659458, 0.22173311, 0.46660708, 0.78423276],
        [0.47659458, 0.0, 0.69805139, 0.01200111, 0.6629441],
        [0.22173311, 0.69805139, 0.0, 0.68249177, 1.0],
        [0.46660708, 0.01200111, 0.68249177, 0.0, 0.6850815],
        [0.78423276, 0.6629441, 1.0, 0.6850815, 0.0],
    ]
)
Here is where I do the Multi-Dimensional Scaling:
def x_coord_of_point(D, j):
    return ( D[0,j]**2 + D[0,1]**2 - D[1,j]**2 ) / ( 2*D[0,1] )

def coords_of_point(D, j):
    x = x_coord_of_point(D, j)
    return np.array([x, math.sqrt( D[0,j]**2 - x**2 )])

def calculate_positions(D):
    (m, n) = D.shape
    P = np.zeros( (n, 2) )
    tr = ( min(min(D[2,0:2]), min(D[2,3:n])) / 2)**2
    P[1,0] = D[0,1]
    P[2,:] = coords_of_point(D, 2)
    for j in range(3,n):
        P[j,:] = coords_of_point(D, j)
        if abs( np.dot(P[j,:] - P[2,:], P[j,:] - P[2,:]) - D[2,j]**2 ) > tr:
            P[j,1] = - P[j,1]
    return P

P = calculate_positions(distance_matrix)
print(P)
Output:
[[ 0.          0.        ]
 [ 0.47659458  0.        ]
 [-0.22132834  0.01339166]
 [ 0.46656063  0.0065838 ]
 [ 0.42244347 -0.66072879]]
I do this to make all the points between 0 and 1:
P = P-P.min(axis=0)
Once I have the set of points that are supposed to satisfy the distance matrix, I compute the distance matrix from the points to see if it is equal to the original one:
def compute_dist_matrix(P):
    dist_matrix = []
    for i in range(len(P)):
        lis_pos = []
        for j in range(len(P)):
            dist = np.linalg.norm(P[i]-P[j])
            lis_pos.append(dist)
        dist_matrix.append(lis_pos)
    return np.array(dist_matrix)

compute_dist_matrix(P)
Output:
array([[0.        , 0.47659458, 0.22173311, 0.46660708, 0.78423276],
       [0.47659458, 0.        , 0.69805139, 0.01200111, 0.6629441 ],
       [0.22173311, 0.69805139, 0.        , 0.68792266, 0.93213762],
       [0.46660708, 0.01200111, 0.68792266, 0.        , 0.66876934],
       [0.78423276, 0.6629441 , 0.93213762, 0.66876934, 0.        ]])
As you can see, if we compare this array with the original distance matrix at the beginning of the post, there is no error in the first entries of the matrix, but the error grows as we get closer to the end. With a distance matrix bigger than the one in this example, the errors become huge.
I do not know whether the source of the error is in the functions that compute P or in the compute_dist_matrix function.
Can you spot the source of the error? Or is there an easier way to compute all of this? Maybe there are functions in some library that already perform this transformation.
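Regarding the last question: scikit-learn ships an MDS estimator that accepts a precomputed dissimilarity matrix and finds the 2D embedding that best approximates it in a least-squares sense. A minimal sketch, assuming scikit-learn is available (and note that no 2D embedding can reproduce a distance matrix exactly unless the points really are realisable in the plane, which may well be the source of the growing error above):

import numpy as np
from sklearn.manifold import MDS

# dissimilarity='precomputed' makes MDS treat the input as a distance matrix
mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
coords = mds.fit_transform(distance_matrix)

# shift into the positive quadrant, as in the post
coords -= coords.min(axis=0)
print(coords)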

Divide by Zero Warning in LU Decomposition - Doolittle Algorithm working

I have implemented the standard equations/algorithm for the LU decomposition of a matrix by following these links: (1) and (2)
It returns the LU decomposition of a square matrix, like the one below, correctly.
My problem, however, is that it also gives a divide-by-zero warning.
Code here:
import numpy as np

def LUDecomposition(A):
    L = np.zeros(np.shape(A), np.float64)
    U = np.zeros(np.shape(A), np.float64)
    acc = 0
    L[0,0] = 1
    for i in np.arange(len(A)):
        for k in range(i, len(A)):
            for j in range(0, i):
                acc += L[i,j]*U[j,k]
            U[i,k] = A[i,k] - acc
            for m in range(k+1, len(A)):
                if m==k:
                    L[m,k] = 1
                else:
                    L[m,k] = (A[m,k] - acc)/U[k,k]
            acc = 0
    return (L, U)

A = np.array([[-4, -1, -2],
              [-4, 12, 3],
              [-4, -2, 18]])
L, U = LUDecomposition(A)
Where am I going wrong?
It seems that you may have made some indentation errors in the first inner-level for loops: U must be evaluated before L; you also didn't correctly compute the summation term acc, and didn't properly set the diagonal terms of L to 1. With some other syntax modifications, you may rewrite your function as follows:
def LUDecomposition(A):
    n = A.shape[0]
    L = np.zeros((n,n), np.float64)
    U = np.zeros((n,n), np.float64)
    for i in range(n):
        # U
        for k in range(i,n):
            s1 = 0  # summation of L(i, j)*U(j, k)
            for j in range(i):
                s1 += L[i,j]*U[j,k]
            U[i,k] = A[i,k] - s1
        # L
        for k in range(i,n):
            if i==k:
                # diagonal terms of L
                L[i,i] = 1
            else:
                s2 = 0  # summation of L(k, j)*U(j, i)
                for j in range(i):
                    s2 += L[k,j]*U[j,i]
                L[k,i] = (A[k,i] - s2)/U[i,i]
    return L, U
which this time gives the correct output for matrix A, using scipy.linalg.lu as a reliable reference:
import numpy as np
from scipy.linalg import lu

A = np.array([[-4, -1, -2],
              [-4, 12, 3],
              [-4, -2, 18]])
L, U = LUDecomposition(A)
P, L_sp, U_sp = lu(A, permute_l=False)

P
>>> [[1., 0., 0.],
     [0., 1., 0.],
     [0., 0., 1.]]

L
>>> [[ 1.          0.          0.        ]
     [ 1.          1.          0.        ]
     [ 1.         -0.07692308  1.        ]]

np.allclose(L_sp, L)
>>> True

U
>>> [[ -4.          -1.          -2.        ]
     [  0.          13.           5.        ]
     [  0.           0.          20.38461538]]

np.allclose(U_sp, U)
>>> True
Note: unlike scipy's LAPACK getrf algorithm, this Doolittle implementation does not include pivoting, so the two comparisons above only hold if the permutation matrix P returned by scipy.linalg.lu is the identity, i.e. if scipy didn't perform any row permutations, which is indeed the case for your matrix A. The permutation chosen by scipy's algorithm is meant to improve the conditioning of the resulting matrices and thus reduce roundoff errors. Finally, you can simply verify that A = LU, which will always hold if the factorization is done right:
A = np.random.rand(10,10)
L, U = LUDecomposition(A)
np.allclose(A, np.dot(L, U))
>>> True
Nevertheless, in terms of numerical efficiency and accuracy, I wouldn't recommend using your own function to compute the LU decomposition. Hope this helps.
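As a point of comparison, here is a minimal sketch of one library route, using scipy.linalg.lu_factor and lu_solve (which wrap LAPACK's getrf/getrs and do include pivoting); the right-hand side b is just an illustrative example:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[-4., -1., -2.],
              [-4., 12., 3.],
              [-4., -2., 18.]])
b = np.array([1., 2., 3.])          # hypothetical right-hand side

lu_piv = lu_factor(A)               # packed LU factors plus pivot indices
x = lu_solve(lu_piv, b)             # solve Ax = b reusing the factorization
print(np.allclose(A.dot(x), b))     # True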

How to speed up the calculation of updating values of each element in an array based on other elements in Python?

I am working on the calculation shown below. I want to update the value of each element based on its adjacent elements. I am currently using two for loops, but the calculation is very slow since there are several outer iterations. Is there any way to speed up this calculation?
for i in range(1, nx+1):
    for j in range(1, ny+1):
        p[i,j] = a*p[i-1,j] + b*p[i+1,j] + c*p[i,j-1] + d*p[i,j+1]
a, b, c, d are constants, and p is a NumPy array.
Sample input:
import numpy as np
p = np.ones((5,5))
for i in range(1,4):
for j in range(1,4):
p[i,j]=p[i-1,j] + p[i+1,j] +2*p[i,j+1]+2*p[i,j-1]
print(p)
The final output should be:
[[   1.    1.    1.    1.    1.]
 [   1.    6.   16.   36.    1.]
 [   1.   11.   41.  121.    1.]
 [   1.   16.   76.  276.    1.]
 [   1.    1.    1.    1.    1.]]
Don't have enough rep to comment, and this doesn't fully answer the question, but if you are using NumPy you should definitely look at array broadcasting. It's hard to tell exactly what your code is doing, but using broadcasting should make it a lot easier to update the full matrix at once instead of value by value.
We can at least get rid of one nested loop using np.cumsum. The key observation is that, within a row, the update is a first-order linear recurrence p[i,j] = c*p[i,j-1] + g[i,j], where g[i,j] collects the terms that do not depend on p[i,j-1]; such a recurrence y_j = c*y_{j-1} + g_j unrolls to y_j = c^j * sum_{k<=j} g_k / c^k, which is a cumulative sum of scaled terms. In favorable conditions (a large number of columns) this gives a roughly 30-fold speedup. Sample run:
results equal True
original  31.644793 ms
optimized  0.861980 ms
Code:
import numpy as np

n, m = 50, 600
a, b, c, d = np.random.random((4,))
P = np.random.random((n, m))

def f_OP(P):
    p = P.copy()
    for i in range(1, n-1):
        for j in range(1, m-1):
            p[i,j] = a*p[i-1,j] + b*p[i+1,j] + c*p[i,j-1] + d*p[i,j+1]
    return p

def f_pp(P):
    p = P.copy()
    pp = d*p[1:-1, 2:] + b*p[2:, 1:-1]
    pp[0] += a*p[0, 1:-1]
    pp[:, 0] += c*p[1:-1, 0]
    x = np.full((m-2,), c)
    x[0] = 1
    x = np.cumprod(x)[::-1]
    pp = np.cumsum(pp * x, axis=1)
    for i in range(1, n-2):
        pp[i] += a * np.cumsum(pp[i-1])
    p[1:-1, 1:-1] = pp / x
    return p

print('results equal', np.allclose(f_OP(P), f_pp(P)))

from timeit import timeit
kwds = dict(globals=globals(), number=10)
print('original  {:10.6f} ms'.format(timeit('f_OP(P)', **kwds)*100))
print('optimized {:10.6f} ms'.format(timeit('f_pp(P)', **kwds)*100))

Vectorized arange using np.einsum for raycast

I have a D dimensional point and vector, p and v, respectively, a positive number n, and a resolution.
I want to get all points after successively adding vector v*resolution to point p n/resolution times.
Example
p = np.array([3, 5])
v = np.array([-1.5, 3])
n = 10
resolution = 1.5
result:
[[ 3. , 5. ],
[ 0.75, 9.5 ],
[ -1.5 , 14. ],
[ -3.75, 18.5 ],
[ -6. , 23. ],
[ -8.25, 27.5 ],
[-10.5 , 32. ]]
My current approach is to tile the range, given by n and the resolution, by the dimension D, multiply that by v, and add p.
def getPoints(p, v, n, resolution=1.):
    dRange = np.tile(np.arange(0, n, resolution), (v.shape[0],1))
    return np.multiply(v.reshape(-1,1), dRange).T + p
Is there a direct way to calculate dRange using np.einsum or another method?
Approach #1
Here's one approach leveraging NumPy broadcasting -
np.arange(0, n, resolution)[:,None] * v + p
Basically, we extend the range array to 2D, keeping the second axis a singleton, so that it broadcasts in an elementwise multiplication against the 1D v, giving us a 2D array. Then we add p to it.
Approach #2
There isn't any sum-reduction here, so np.einsum or any dot-based function, though it would work, won't lend any help on performance. Let's put it out anyway, as it was mentioned in the question -
np.einsum('i,j->ij',np.arange(0, n, resolution), v) + p
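As a quick check, the broadcasting one-liner reproduces the sample result from the question:

import numpy as np

p = np.array([3, 5])
v = np.array([-1.5, 3])
n = 10
resolution = 1.5

out = np.arange(0, n, resolution)[:,None] * v + p
print(out)
# [[  3.     5.  ]
#  [  0.75   9.5 ]
#  [ -1.5   14.  ]
#  [ -3.75  18.5 ]
#  [ -6.    23.  ]
#  [ -8.25  27.5 ]
#  [-10.5   32.  ]]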

Numpy's eigh and eig yield inconsistent eigenvalues

Currently I'm trying to solve the generalized eigenvalue problem in NumPy for two symmetric matrices, and I've been running into massive trouble: I expect all eigenvalues to be positive, but eigh returns several very large numbers that are not all positive, while eig returns the correct, expected values (but is, of course, very, very slow).
In this case, note that K is symmetric as expected from its construction (here is the code in question):
# Calculate K matrix (<i|pHp|j> in the LGL-nodes basis)
for i in range(Ne):
    idx_s, idx_e = i*(Np-1), i*(Np-1)+Np
    K[idx_s:idx_e, idx_s:idx_e] += dmat.T.dot(diag(w*peq[idx_s:idx_e])).dot(dmat)

# Re-make matrix for efficient vector products
K = sparse.csr_matrix(K)

# Make matrix for <i|p|j> in the LGL basis as efficient diagonal sparse matrix
S = sparse.diags(peq*w_d, 0)

# Solve the generalized eigenvalue problem: Kc = lSc for hermitian matrices K and S
lQ, Q = linalg.eigh(K.todense(), S.todense())
_lQ, _Q = linalg.eig(K.todense(), S.todense())
lQ.sort()
_lQ.sort()
if not allclose(lQ, _lQ):
    print('Literally why')
    print(lQ)
    print(_lQ)
    return
For testing, dmat is defined as
array([[ -896. , 1212.00631086, -484.43454844, 275.06612251,
-179.85209531, 124.26620323, -83.05199285, 32. ],
[ -205.43460499, 0. , 290.78944413, -135.17191772,
82.83085126, -55.64467829, 36.70818656, -14.07728095],
[ 50.7185076 , -179.61445086, 0. , 184.03311398,
-87.85829324, 54.08144362, -34.37053351, 13.01021241],
[ -23.81762789, 69.05246008, -152.20398294, 0. ,
152.89115899, -72.66291308, 42.31407046, -15.57316561],
[ 15.57316561, -42.31407046, 72.66291308, -152.89115899,
0. , 152.20398294, -69.05246008, 23.81762789],
[ -13.01021241, 34.37053351, -54.08144362, 87.85829324,
-184.03311398, 0. , 179.61445086, -50.7185076 ],
[ 14.07728095, -36.70818656, 55.64467829, -82.83085126,
135.17191772, -290.78944413, 0. , 205.43460499],
[ -32. , 83.05199285, -124.26620323, 179.85209531,
-275.06612251, 484.43454844, -1212.00631086, 896. ]])
And w, w_d, and peq are all essentially arbitrary positive-valued arrays. w_d and w are of the same order (~1e-1), and the entries of peq range over roughly 1e-10 to 1e1.
Some of the output I'm getting is
Literally why
[ -6.25540943e+07 -4.82660391e+07 -2.62629052e+07 ..., 1.07960873e+10
1.07967334e+10 4.26007915e+10]
[ -5.25462340e-12+0.j 4.62614812e-01+0.j 1.23357898e+00+0.j ...,
2.17613917e+06+0.j 1.07967334e+10+0.j 4.26007915e+10+0.j]
EDIT:
Here's a self-contained version of the code for easier debugging
import numpy as np
from math import *
from scipy import sparse, linalg
# Variable declarations and such (pre-computed)
Ne, Np = 256, 8
N = Ne*Np - Ne + 1
domain_size = 4/Ne
x = np.array([-0.015625 , -0.01362094, -0.00924532, -0.0032703 , 0.0032703 ,
0.00924532, 0.01362094, 0.015625 ])
w = np.array([ 0.00055804, 0.00329225, 0.00533004, 0.00644467, 0.00644467,
0.00533004, 0.00329225, 0.00055804])
dmat = np.array([[ -896. , 1212.00631086, -484.43454844, 275.06612251,
-179.85209531, 124.26620323, -83.05199285, 32. ],
[ -205.43460499, 0. , 290.78944413, -135.17191772,
82.83085126, -55.64467829, 36.70818656, -14.07728095],
[ 50.7185076 , -179.61445086, 0. , 184.03311398,
-87.85829324, 54.08144362, -34.37053351, 13.01021241],
[ -23.81762789, 69.05246008, -152.20398294, 0. ,
152.89115899, -72.66291308, 42.31407046, -15.57316561],
[ 15.57316561, -42.31407046, 72.66291308, -152.89115899,
0. , 152.20398294, -69.05246008, 23.81762789],
[ -13.01021241, 34.37053351, -54.08144362, 87.85829324,
-184.03311398, 0. , 179.61445086, -50.7185076 ],
[ 14.07728095, -36.70818656, 55.64467829, -82.83085126,
135.17191772, -290.78944413, 0. , 205.43460499],
[ -32. , 83.05199285, -124.26620323, 179.85209531,
-275.06612251, 484.43454844, -1212.00631086, 896. ]])
# More declarations
x_d = np.zeros(N)
w_d = np.zeros(N)
dmat_d = np.zeros((N, N))
for i in range(Ne):
    x_d[i*(Np-1):i*(Np-1)+Np] = x+i*domain_size
    w_d[i*(Np-1):i*(Np-1)+Np] += w
    dmat_d[i*(Np-1):i*(Np-1)+Np, i*(Np-1):i*(Np-1)+Np] += dmat
peq = (np.cos((x_d-2)*pi/4))**2
# Normalization
peq = peq/np.sum(w_d*peq)
p0 = np.maximum(peq, 1e-10)
p0 /= np.sum(p0*w_d)
# Make efficient matrix that can be built
K = sparse.lil_matrix((N, N))
# Calculate K matrix (<i|pHp|j> in the LGL-nodes basis)
for i in range(Ne):
    idx_s, idx_e = i*(Np-1), i*(Np-1)+Np
    K[idx_s:idx_e, idx_s:idx_e] += dmat.T.dot(np.diag(w*p0[idx_s:idx_e])).dot(dmat)
# Re-make matrix for efficient vector products
K = sparse.csr_matrix(K)
# Make matrix for <i|p|j> in the LGL basis as efficient diagonal sparse matrix
S = sparse.diags(p0*w_d, 0)
# Solve the generalized eigenvalue problem: Kc = lSc for hermitian matrices K and S
lQ, Q = linalg.eigh(K.todense(), S.todense())
_lQ, _Q = linalg.eig(K.todense(), S.todense())
lQ.sort()
_lQ.sort()
if not np.allclose(lQ, _lQ):
    print('Literally why')
    print(lQ)
    print(_lQ)
EDIT2: This is really odd. Running all of the NumPy/SciPy tests on my machine, I receive no errors. But even running a simple test (with large enough matrices) such as
import numpy as np
from scipy import linalg
M = np.random.random((1000,1000))
M += M.T
np.allclose(sorted(linalg.eigh(M)[0]), sorted(linalg.eig(M)[0]))
fails on my machine. Though running the same test with a 50x50 matrix does work, even after rebuilding the SciPy/NumPy stack and passing all unit tests.
EDIT3: Actually, this seems to fail everywhere, after testing it on a cluster computer. I'm not sure why.
The above fails because of the in-place M += M.T: M.T is a view of the same memory being written, not a copy, so the result is not actually symmetric (at least on older NumPy versions, which do not buffer overlapping operands). eigh then silently uses only one triangle of the matrix, while eig uses all of it, and the two disagree.
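A minimal sketch of the aliasing problem and the out-of-place fix (note that recent NumPy releases detect such overlapping operands and buffer them internally, so this mainly bites on older versions):

import numpy as np

M = np.random.random((1000, 1000))

# Out-of-place: allocates a fresh array, guaranteed symmetric.
S = M + M.T
print(np.allclose(S, S.T))   # True

# In-place: M.T is a view of M, so the addition reads memory that is
# concurrently being overwritten. On old NumPy this silently produced
# an asymmetric result, making eigh (which trusts symmetry and reads
# only one triangle) disagree with eig (which reads the full matrix).
M += M.T
print(np.allclose(M, M.T))   # True on recent NumPy, False on old versions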
