I need to multiply a lot of vectors beta by the same matrix M.
Let's say that the matrix M has shape (150,7), and that the betas are stored in an array of shape (7,128,128).
How would you compute the product M*beta for every element of beta?
Until now I've been doing it like this:
import numpy as np
M=np.ones((150,7))
beta=np.ones((7,128,128))
result = M @ (beta.reshape((7, 128*128)))  # the result has shape (150, 128*128)
result = np.reshape(result, (150, 128, 128))
I'm guessing that np.einsum() could be useful here, but I don't understand how to tell it along which dimensions to do the multiplication/addition.
Here's how you could do this using np.einsum:
np.einsum('ij,jkl->ikl', M, beta)
result = M @ (beta.reshape((7, 128*128)))  # the result has shape (150, 128*128)
result = np.reshape(result, (150, 128, 128))
np.allclose(np.einsum('ij,jkl->ikl', M, beta), result)
# True
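If you prefer, np.tensordot expresses the same contraction; a minimal sketch (contracting M's last axis, of length 7, with beta's first axis):
alt = np.tensordot(M, beta, axes=([1], [0]))  # shape (150, 128, 128)
np.allclose(alt, np.einsum('ij,jkl->ikl', M, beta))
# True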
Imagine having two sparse matrices:
A, A.shape = (n,m)
B, B.shape = (m,n)
I would like to compute the dot product A*B, but only keep the diagonal. The matrices are big, so I don't actually want to compute any values other than those on the diagonal.
This is a variant of the question Is there a numpy/scipy dot product, calculating only the diagonal entries of the result?
Where the most relevant answer seems to be to use np.einsum:
np.einsum('ij,ji->i', A, B)
However this does not work:
ValueError: einstein sum subscripts string contains too many subscripts for operand 0
The solution is to use todense(), but it greatly increases memory usage: np.einsum('ij,ji->i', A.todense(), B.todense())
The other solution, which I currently use, is to iterate over all the rows of A and compute each product in a loop:
for i in range(len_A):
    result = np.float32(A[i].dot(B[:, i])[0, 0])
    ...
None of these solutions seems perfect. Is there an equivalent of np.einsum that works with sparse matrices?
[A[i].multiply(B.T[i]).sum() for i in range(min(A.shape[0], B.shape[1]))]
or, this is faster:
l = min(A.shape[0], B.shape[1])
(A[np.arange(l)].multiply(B.T[np.arange(l)])).sum(axis=1)
In general you shouldn't try to use numpy functions on the scipy.sparse arrays. In your case I'd first make sure both arrays actually have a compatible shape, that is
A, A.shape = (r,m)
B, B.shape = (m,r)
where r = min(n, p) if B more generally has shape (m, p) (with the shapes above, r = n). Then we can compute the diagonal of the matrix product using
d = (A.multiply(B.T)).sum(axis=1)
Here we compute the entry-wise row-column products and manually sum them up. This avoids all the unnecessary computations you'd get using dot/@/*. (Note that unlike in numpy, for scipy.sparse matrices both * and @ perform matrix multiplication.)
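As a quick sanity check, here's a sketch with small made-up random sparse matrices, comparing this against the diagonal of the full product:
import numpy as np
from scipy import sparse

# small made-up sparse matrices, purely illustrative
A = sparse.random(100, 80, density=0.05, format='csr')
B = sparse.random(80, 100, density=0.05, format='csc')

d = np.asarray(A.multiply(B.T).sum(axis=1)).ravel()  # diagonal of A @ B, no full product
ref = (A @ B).diagonal()                             # full product, only for checking
print(np.allclose(d, ref))                           # expect True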
I'm trying to vectorize a loop with NumPy but I'm stuck
I have a matrix A of shape (NN,NN). I define the A-dot product by
def scalA(u,v):
    return v.T @ A @ u
Then I have two matrices B and C (B has shape (N,NN) and C has shape (K,NN)). The loop I'm trying to vectorize is
res = np.zeros((N,K))
for n in range(N):
    for k in range(K):
        res[n,k] = scalA(B[n,:], C[k,:])
During my research I found functions like np.tensordot and np.einsum, but I haven't really understood how they work, and (if I have understood correctly) tensordot will compute the canonical dot product (which would correspond to A = np.eye(NN) in my case).
Thanks!
np.einsum('ni,ji,kj->nk', B,A,C)
I think this works. I wrote it 'by eye' without testing.
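A quick sanity check with small made-up sizes, comparing it against the original loop:
import numpy as np

N, K, NN = 4, 5, 6                        # small made-up sizes
A = np.random.rand(NN, NN)
B = np.random.rand(N, NN)
C = np.random.rand(K, NN)

res = np.zeros((N, K))
for n in range(N):
    for k in range(K):
        res[n, k] = C[k, :].T @ A @ B[n, :]   # scalA(B[n,:], C[k,:]) from the question

print(np.allclose(np.einsum('ni,ji,kj->nk', B, A, C), res))  # expect True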
Probably you're looking for this:
def scalA(u,v):
    return u @ A @ v.T
If the shape of A is (NN,NN), the shape of B is (N,NN), and the shape of C is (K,NN), the result of scalA(B,C) has shape (N,K).
If the shape of A is (NN,NN), the shape of B is (NN,), and the shape of C is (NN,), the result of scalA(B,C) is just a scalar.
However, if you're expecting B and C to have even higher dimensionality (greater than 2), this may need further tweaking. (I could not tell from your question whether that's the case)
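A rough sketch of both cases with made-up sizes, using the scalA from this answer:
import numpy as np

NN, N, K = 6, 4, 5                  # made-up sizes, purely illustrative
A = np.random.rand(NN, NN)

def scalA(u, v):
    return u @ A @ v.T

B = np.random.rand(N, NN)
C = np.random.rand(K, NN)
print(scalA(B, C).shape)            # (4, 5), i.e. (N, K)

b = np.random.rand(NN)
c = np.random.rand(NN)
print(np.ndim(scalA(b, c)))         # 0, i.e. a scalar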
I am doing a benchmarking test in Python on different ways to calculate A'*A, with A being an N x M matrix. One of the fastest ways was to use numpy.dot().
I was curious whether I can obtain the same result using numpy.cov() (which gives the covariance matrix) by somehow varying the weights or by somehow pre-processing the A matrix? But I had no success. Does anyone know if there is any relation between the product A'*A and the covariance of A, where A is a matrix with N rows/observations and M columns/variables?
Have a look at the cov source. Near the end of the function it does this:
c = dot(X, X_T.conj())
Which is basically the dot product you want to perform. However, there are all kinds of other operations: checking inputs, subtracting the mean, normalization, ...
In short, np.cov will never ever be faster than np.dot(A.T, A) because internally it contains exactly that operation.
That said, the covariance matrix is computed as C = (A - mean(A))' * (A - mean(A)) / (N - 1), i.e. the dot product of the mean-centered data with itself, divided by N - 1.
Or in Python:
import numpy as np
a = np.random.rand(10, 3)
m = np.mean(a, axis=0, keepdims=True)
x = np.dot((a - m).T, a - m) / (a.shape[0] - 1)
y = np.cov(a.T)
assert np.allclose(x, y) # check they are equivalent
As you can see, the covariance matrix is equivalent to the raw dot product if you subtract the mean of each variable and divide the result by the number of samples (minus one).
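For completeness, the relation also works the other way round: A'A = (N-1)*cov(A) + N*mean'*mean, so you can rebuild the raw product from the covariance matrix (a sketch, not a speed-up):
import numpy as np

a = np.random.rand(10, 3)                          # dummy data
n = a.shape[0]
m = np.mean(a, axis=0, keepdims=True)              # row of column means, shape (1, 3)

lhs = np.dot(a.T, a)                               # the raw product A'A
rhs = (n - 1) * np.cov(a.T) + n * np.dot(m.T, m)   # rebuilt from the covariance matrix
print(np.allclose(lhs, rhs))                       # expect True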
I have the following line of code in MATLAB which I am trying to convert to Python numpy:
pred = traindata(:,2:257)*beta;
In Python, I have:
pred = traindata[ : , 1:257]*beta
beta is a 256 x 1 array.
In MATLAB,
size(pred) = 1389 x 1
But in Python,
pred.shape = (1389L, 256L)
So, I found out that multiplying by the beta array is what produces the difference between the two results.
How do I write the original Python line, so that the size of pred is 1389 x 1, like it is in MATLAB when I multiply by my beta array?
I suspect that beta is in fact a 1D numpy array. In numpy, 1D arrays are neither row nor column vectors, whereas MATLAB clearly makes this distinction; they are simply 1D arrays, agnostic of any orientation. If you must, you need to manually introduce a new singleton dimension into the beta vector to facilitate the multiplication. On top of this, the * operator actually performs element-wise multiplication. To perform matrix-vector or matrix-matrix multiplication, you must use numpy's dot function.
Therefore, you must do something like this:
import numpy as np # Just in case
pred = np.dot(traindata[:, 1:257], beta[:,None])
beta[:,None] will create a 2D numpy array where the elements from the 1D array are populated along the rows, effectively making a column vector (i.e. 256 x 1). However, if you have already done this on beta, then you don't need to introduce the new singleton dimension. Just use dot normally:
pred = np.dot(traindata[:, 1:257], beta)
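A rough sketch with dummy data (only the shapes matter here) showing the difference:
import numpy as np

traindata = np.random.rand(1389, 257)   # dummy data, illustrative shapes only
beta = np.random.rand(256)              # a 1D array, as suspected above

print((traindata[:, 1:257] * beta).shape)                 # (1389, 256) - element-wise
print(np.dot(traindata[:, 1:257], beta[:, None]).shape)   # (1389, 1)  - matrix-vector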
I would like to solve a sparse linear system of equations A x = b, where A is an (M x M) array, b is an (M x N) array and x is an (M x N) array.
I can solve this in three ways using:
scipy.linalg.solve(A.toarray(), b.toarray()),
scipy.sparse.linalg.spsolve(A, b),
scipy.sparse.linalg.splu(A).solve(b.toarray()) # returns a dense array
I wish to solve the problem using the iterative scipy.sparse.linalg methods:
scipy.sparse.linalg.cg,
scipy.sparse.linalg.bicg,
...
However, these methods support only a right-hand side b with shape (M,) or (M, 1). Any ideas on how to extend these methods to an (M x N) array b?
A key difference between iterative solvers and direct solvers is that direct solvers can more efficiently solve for multiple right-hand values by using a factorization (usually either Cholesky or LU), while iterative solvers can't. This means that for direct solvers there is a computational advantage to solving for multiple columns simultaneously.
For iterative solvers, on the other hand, there's no computational gain to be had in simultaneously solving multiple columns, and this is probably why matrix solutions are not supported natively in the API of cg, bicg, etc.
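To illustrate the factorization point, a sparse direct solver like splu factors A once and reuses the factors for every column of b; a minimal sketch with made-up sizes:
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

A = csc_matrix(np.random.rand(5, 5) + 5 * np.eye(5))  # toy system, illustrative only
b = np.random.rand(5, 4)                               # 4 right-hand sides at once

lu = splu(A)        # factor A once (the expensive part)
x = lu.solve(b)     # reuse the LU factors for all 4 columns of b
print(x.shape)      # (5, 4)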
Because of this, a direct solution like scipy.sparse.linalg.spsolve will probably be optimal for your case. If for some reason you still desire an iterative solution, I'd just create a simple convenience function like this:
import numpy as np
from scipy.sparse.linalg import bicg

def bicg_solve(M, B):
    # solve M @ x = b column-by-column and stack the results
    X, info = zip(*(bicg(M, b) for b in B.T))
    return np.transpose(X), info
Then you can create some data and call it as follows:
import numpy as np
from scipy.sparse import csc_matrix
# create some matrices
M = csc_matrix(np.random.rand(5, 5))
B = np.random.rand(5, 4)
X, info = bicg_solve(M, B)
print(X.shape)
# (5, 4)
Any iterative solver API which accepts a matrix on the right-hand-side will essentially just be a wrapper for something like this.