I'm using numpy.linalg.svd() to get the singular value decomposition of matrices, but I can't reconstruct the original matrix from the decomposition when the matrix is non-square.
For example, for a square matrix:
import numpy as np

n = 5
# make a random (n, n) matrix of integers in [0, 9]
A = np.random.randint(0, 10, size=(n, n))
# SVD
U, S, Vh = np.linalg.svd(A)
# reconstruct A from the SVD
A_svd = U @ np.diag(S) @ Vh
# check that it matches the original
print(np.allclose(A, A_svd))
I get:
>>> True
Now for a non-square matrix A of shape (m, n): the shape of U is (m, m), the shape of Vh is (n, n), and S is a 1-D array of singular values of length k, with k = min(m, n). For example:
import numpy as np

n = 5
m = 8
# make a random (m, n) matrix of integers in [0, 9]
A = np.random.randint(0, 10, size=(m, n))
# SVD
U, S, Vh = np.linalg.svd(A)
With the following shapes:
>>> U.shape
(8, 8)
>>> S.shape
(5,)
>>> Vh.shape
(5, 5)
I don't know how to get the matrix A back from this decomposition, though.
I can't do a simple multiplication because the shapes don't match: U @ np.diag(S) @ Vh fails, and so do the equivalent calls with np.matmul or np.dot.
So I tried to reshape S into a matrix and pad it with zeros:
S_m = np.diag(S)
S_m.resize((U.shape[1], Vh.shape[0]))
# check if it is the same
print(np.allclose(A, U @ S_m @ Vh))
>>> False
I found an answer here, using diagsvd from scipy.linalg:
import scipy.linalg as la
A_svd = U @ la.diagsvd(S, *A.shape) @ Vh
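To put the pieces together, here is a minimal sketch of the non-square reconstruction; the hand-built zero-padded sigma matrix, the diagsvd call, and the reduced SVD should all give back A (the shapes and random data are just for illustration):

import numpy as np
import scipy.linalg as la

m, n = 8, 5
A = np.random.randint(0, 10, size=(m, n))

U, S, Vh = np.linalg.svd(A)   # U: (m, m), S: (min(m, n),), Vh: (n, n)

# option 1: build the (m, n) sigma matrix by hand
Sigma = np.zeros((m, n))
Sigma[:len(S), :len(S)] = np.diag(S)
print(np.allclose(A, U @ Sigma @ Vh))                # True

# option 2: let scipy build the padded sigma matrix
print(np.allclose(A, U @ la.diagsvd(S, m, n) @ Vh))  # True

# option 3: the reduced SVD avoids the padding entirely
U_r, S_r, Vh_r = np.linalg.svd(A, full_matrices=False)
print(np.allclose(A, (U_r * S_r) @ Vh_r))            # True

The full_matrices=False form is usually the easiest to work with when m != n, since the shapes of U, S and Vh then multiply directly.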
Related
I'm trying to use the dot product in NumPy between two matrices with different dimensions:
w is (1, 5) and X is (3, 5).
I'm not sure which command I can use to change the dimensions, as I am new to Python.
Thank you.
When I try running my function, it gives me an error saying:
ValueError: shapes (1,5) and (3,5) not aligned: 5 (dim 1) != 3 (dim 0)
import numpy as np

def L(w, X, y):
    """
    Arguments:
    w -- vector of size n representing weights of the n input features
    X -- matrix of size m x n representing input data, m data samples with n features each
    y -- vector of size m (true labels)
    Returns:
    loss -- the value of the loss function defined above
    """
    ### START CODE HERE ### (2-4 lines of code)
    # w needs to match the X matrix
    # w = (1, 5)
    # X = (3, 5)
    yhat = np.dot(w, X)  # this is the line that raises the ValueError
    L1 = y - yhat
    loss = np.dot(L1, L1)
    ### END CODE HERE ###
    return loss
Here is a picture of the directions: [image of directions]
The dot product of two vectors is the sum of the products of elements at corresponding positions: the first element of the first vector is multiplied by the first element of the second vector, and so on. The sum of these products is the dot product, which can be computed with the np.dot() function.
Since we multiply elements at the same positions, the two vectors must have the same length in order to have a dot product. For 2-D arrays, np.dot performs matrix multiplication, where each output entry is the dot product of a row of the first array with a column of the second:
import numpy as np
a = np.array([[1,2],[3,4]])
b = np.array([[11,12],[13,14]])
np.dot(a,b)
It will produce the following output:
[[37 40]
[85 92]]
Note that the dot product is calculated as:
[[1*11+2*13, 1*12+2*14],[3*11+4*13, 3*12+4*14]]
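Applying that rule to the function in the question, the shapes line up once X is transposed. A minimal sketch, assuming the intended loss is the sum of squared residuals and that y is passed as a (1, m) row vector (both assumptions, since the actual directions are in the image):

import numpy as np

def L(w, X, y):
    """Sum of squared residuals, assuming w is (1, n), X is (m, n), y is (1, m)."""
    yhat = np.dot(w, X.T)         # (1, n) @ (n, m) -> (1, m)
    r = y - yhat
    return np.dot(r, r.T).item()  # scalar

w = np.ones((1, 5))
X = np.arange(15).reshape(3, 5)
y = np.array([[10.0, 35.0, 60.0]])
print(L(w, X, y))  # 0.0, since each row of X sums to the matching entry of y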
You get full flexibility with np.tensordot, which implements tensor products with an arbitrary choice of axes.
A nice application is estimating a covariance matrix without messing with transpositions:
import numpy as np
from scipy.stats import multivariate_normal

dist = multivariate_normal(mean=[0, 0], cov=[[1, 1], [1, 2]])
samples = dist.rvs(size=1000, random_state=2)                # (1000, 2) array of samples
np.tensordot(samples, samples, axes=[0, 0]) / len(samples)   # close to [[1, 1], [1, 2]]
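For this two-axis case the tensordot call is just a matrix product in disguise; a quick self-contained check (the stand-in samples array here is only for illustration):

import numpy as np

samples = np.random.randn(1000, 2)  # any (n, k) sample matrix behaves the same
# tensordot over axis 0 of both arguments is exactly samples.T @ samples
print(np.allclose(np.tensordot(samples, samples, axes=[0, 0]),
                  samples.T @ samples))  # True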
I can find eigenvectors of a matrix in Python as follows:
import numpy as np
from numpy import linalg as LA
w, v = LA.eig(np.diag((1, 2, 3)))
But how do I find the largest two eigenvectors for a larger matrix of size 100x200?
Eigenvalue decomposition is not defined for a non-square matrix. The closest operation is singular value decomposition (SVD). For a non-square matrix A, the two are related in that the singular values of A are the square roots of the eigenvalues of A's transpose times A:
B = A' * A
SVD(A) * SVD(A) ~= EIG(B)
So one potential answer to your question is:
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
B = np.matmul(np.transpose(A), A)    # B = A' * A, shape (3, 3)
u, s, v = np.linalg.svd(A)           # s holds the singular values of A
eigvals, eigvecs = np.linalg.eig(B)  # eigenvalues and eigenvectors of B
# compare squared singular values with the eigenvalues (sorted, since eig
# does not guarantee any particular order)
print(f'Compare s*s to eigvals: {np.sort(s*s) - np.sort(eigvals)}')
While s does not directly give the eigenvalues of A, s*s gives the eigenvalues of A'A, so it is closely related.
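To answer the original question about a 100x200 matrix directly, one sketch (assuming "largest two eigenvectors" means the two dominant singular directions) is to take the leading singular vectors, which np.linalg.svd returns sorted by singular value:

import numpy as np

A = np.random.randn(100, 200)
U, s, Vh = np.linalg.svd(A, full_matrices=False)  # s is sorted in descending order

top2_left = U[:, :2]    # leading left singular vectors (eigenvectors of A @ A.T)
top2_right = Vh[:2, :]  # leading right singular vectors (eigenvectors of A.T @ A)

# check: s**2 matches the largest eigenvalues of A.T @ A
eigvals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
print(np.allclose(s[:2]**2, eigvals[:2]))  # True

For genuinely large matrices, scipy.sparse.linalg.svds(A, k=2) computes only the leading singular vectors without forming the full decomposition.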
I would like to get a square matrix B from a linear vector A such that B = A * transpose(A). A is a numpy array and np.shape(A) returns (10,). I would like B to be a (10, 10) array. I tried B = np.matmul(A, A[np.newaxis]) but I get an error:
shapes (10,) and (1,10) not aligned: 10 (dim 0) != 1 (dim 0)
You can do this using np.outer:
import numpy as np
vector = np.arange(10)
np.outer(vector, vector)
The solution is a little ugly, but it does what you need.
import numpy as np
vector = np.array([1,2,3,4,5,6,7,8,9,10],)
matrix = np.dot(vector[:,None],vector[None,:])
print(matrix)
You can also do the following:
import numpy as np
vector = np.array([1,2,3,4,5,6,7,8,9,10],)
matrix = vector*vector[:,None]
print(matrix)
The issue comes from the fact that transposing a one-dimensional array does not have the effect you might expect.
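A quick illustration of that point, using a stand-in 1-D array A of shape (10,):

import numpy as np

A = np.arange(10)
print(A.shape, A.T.shape)               # (10,) (10,) -- .T is a no-op on a 1-D array
print(A[:, None].shape)                 # (10, 1) -- an explicit second axis
print((A[:, None] * A[None, :]).shape)  # (10, 10), same result as np.outer(A, A)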
Variation on the outer product:
a = A.reshape(-1, 1)  # make sure it's a column vector
B = a @ a.T
I have a 3D numpy array vecs. vecs has shape [M,N,3]. That is to say, vecs is an MxN collection of 3-element vectors. I am looking for a pythonic (numpythonic?) way to take the matrix product of each of those vectors with a single 3x3 matrix mat. In other words, I want a clean way to do this:
for k in range(vecs.shape[0]):
    for j in range(vecs.shape[1]):
        vecs[k, j] = np.dot(mat, vecs[k, j])
Any way to do this?
Your dot can be expressed with einsum as:
res[k,j,:] = np.einsum('ab,b->a',mat,vecs[k,j,:])
and generalized to work with the whole array as
res = np.einsum('ab,kjb->kja',mat,vecs)
In this particular case I think you can just do
np.dot(vecs,mat.T)
Here is a short snippet of code demonstrating that they are the same:
In [1]: import numpy as np
In [2]: a = np.random.randn(100,100,3)
In [3]: b = np.random.randn(3,3)
In [4]: expected = np.zeros_like(a)
In [5]: for i in range(a.shape[0]):
...: for j in range(a.shape[1]):
...: expected[i,j] = np.dot(b,a[i,j])
...:
In [6]: np.allclose(expected,np.dot(a,b.T))
Out[6]: True
You can use np.tensordot:
vecs = np.tensordot(mat, vecs.T, axes=1).T
Here you transpose your vecs to get a (3, N, M) array, apply the dot product with mat, and then transpose the resulting (3, N, M) array back into an (M, N, 3) array.
Regarding the axes argument:
If an int N, sum over the last N axes of a and the first N axes of b
in order. The sizes of the corresponding axes must match.
So, in your case you sum along the second axis of mat and the first axis of vecs.T.
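As a quick sanity check, assuming a random mat and vecs with the shapes from the question, the tensordot form should match the einsum form from the other answer:

import numpy as np

mat = np.random.randn(3, 3)
vecs = np.random.randn(4, 5, 3)

via_tensordot = np.tensordot(mat, vecs.T, axes=1).T
via_einsum = np.einsum('ab,kjb->kja', mat, vecs)
print(np.allclose(via_tensordot, via_einsum))  # True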
I am trying to compute a transform given by b = A*x, where A is a (3, 4) matrix. If x is a single (4, 1) vector, the result b is (3, 1).
Instead, for x I have a bunch of vectors concatenated into a matrix, and I am trying to evaluate the transform for each of them. So x is (20, 4). How do I broadcast this in numpy so that I get 20 resulting values for b, i.e. a (20, 3) result?
I could loop over each input and compute the output, but it feels like there must be a better way using broadcasting.
E.g.
A = [[1,0,0,0],
[2,0,0,0],
[3,0,0,0]]
if x is:
x = [[1,1,1,1],
[2,2,2,2]]
b = [[1,2,3],
[2,4,6]]
Each row of x is multiplied by A, and the result is stored as a row in b.
Use np.dot:
import numpy as np
A = np.random.normal(size=(3, 4))
x = np.random.normal(size=(4, 20))
y = np.dot(A, x)
print(y.shape)
Result: (3, 20)
And of course if you want (20, 3) you can use np.transpose().
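If x is kept in the (20, 4) layout from the question, one row per input vector, the same result can be had without the extra transpose by multiplying by A.T; a small sketch with the example values from the question:

import numpy as np

A = np.array([[1, 0, 0, 0],
              [2, 0, 0, 0],
              [3, 0, 0, 0]])
x = np.array([[1, 1, 1, 1],
              [2, 2, 2, 2]])

b = x @ A.T  # (2, 4) @ (4, 3) -> (2, 3); each row of b is A applied to a row of x
print(b)     # [[1 2 3]
             #  [2 4 6]]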