I'm trying to write a kernel function for a Gaussian process (GP) using only matrix operations (no loops).
Vectors were no problem, taking advantage of broadcasting:
def kernel(A, B):
    return 1 / np.exp(np.linalg.norm(A - B.T))**2
A and B are both [n, 1] vectors, but with [n, m]-shaped matrices it just doesn't work. (I also tried reshaping to [1, n, m].)
I'm interested in computing a matrix X where every ij-th element is defined by Ai - Bj.
For now I'm working in NumPy, but my final objective is to implement this in TensorFlow.
Thanks in Advance.
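A minimal sketch of the pairwise computation with plain broadcasting, assuming A is an [n, 1] and B an [m, 1] column vector:
import numpy as np

A = np.random.rand(5, 1)  # hypothetical [n, 1] column vector
B = np.random.rand(3, 1)  # hypothetical [m, 1] column vector

# (n, 1) - (1, m) broadcasts to the (n, m) matrix of pairwise differences Ai - Bj
diff = A - B.T

# Squared-exponential kernel applied element-wise; note that np.linalg.norm
# collapses the whole array to a single scalar, which is why the version
# above breaks
K = np.exp(-diff**2)
print(K.shape)  # (5, 3)
The same broadcasting rules apply in TensorFlow, so something like tf.exp(-(A - tf.transpose(B))**2) should carry over directly.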
PyTorch does not have a native matrix logarithm. I would like to build an efficient one using scipy.linalg.logm. The problem is that logm can only be applied to square matrices. I would like to compute the log of a tensor of shape (64, 22, 3, 3), in which the log is applied to each (3, 3) matrix component. Is there a fast and efficient way to apply logm to the entire tensor?
I am currently resorting to a list comprehension, which is incredibly slow:
import numpy as np
import torch
from scipy.linalg import logm  # logm returns a NumPy array

x = torch.rand(64, 22, 3, 3)
# Cast to a common dtype, since logm returns complex output for some blocks
Log_x = torch.stack([torch.from_numpy(logm(x[i, j].numpy()).astype(np.complex64))
                     for i in range(x.shape[0]) for j in range(x.shape[1])])
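If the (3, 3) blocks happen to be diagonalizable (an assumption; scipy.linalg.logm itself does not require this), one loop-free alternative is to go through a batched eigendecomposition, using logm(X) = V diag(log w) V^-1:
import torch

x = torch.rand(64, 22, 3, 3)

# Batched eigendecomposition over the trailing (3, 3) blocks;
# w has shape (64, 22, 3), v has shape (64, 22, 3, 3), both complex
w, v = torch.linalg.eig(x)

# logm(X) = V @ diag(log w) @ V^-1, valid only for diagonalizable X
log_x = v @ torch.diag_embed(torch.log(w)) @ torch.linalg.inv(v)

# When a real matrix logarithm exists, the imaginary parts should be negligible
log_x = log_x.real
This preserves the (64, 22, 3, 3) batch structure, unlike the stacked comprehension above, and avoids the Python-level loop entirely.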
I'm trying to implement the idea I have suggested here, for the Cauchy product of multivariate finite power series (i.e. polynomials) represented as NumPy ndarrays. numpy.convolve does the job for 1D arrays, but to the best of my knowledge there is no implementation of convolution for arbitrary-dimensional arrays. In the above link, I have suggested the equation:
omega_[k1,...,kn] = < {Phi}_[k1,...,kn] , ({Psi'}_[k1,...,kn])^F >_F
for the convolution of two n-dimensional arrays Phi of shape P = [p1,...,pn] and Psi of shape Q = [q1,...,qn], where:
the omegas are the elements of the n-dimensional array Omega of shape O = P + Q - 1 (element-wise)
<A, B>_F is the generalization of the Frobenius inner product to arbitrary-dimensional arrays A and B of the same shape
A^F is A flipped in all n directions
{A}_[k1,...,kn] is a slice of A starting from [0,...,0] up to [k1,...,kn]
Psi' is Psi extended with zeros to have the shape O as defined above
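For example, in the 1D case (n = 1) this reduces to the ordinary Cauchy product omega_k = sum_{i=0..k} phi_i * psi_(k-i), which is exactly what numpy.convolve computes.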
I tried implementing the above functions one by one:
import numpy as np

def crop(A, D1, D2):
    return A[tuple(slice(D1[i], D2[i]) for i in range(D1.shape[0]))]
as was suggested here, slices/crops A from D1 to D2,
def sumall(A):
    sum1 = A
    for k in range(A.ndim):
        sum1 = np.sum(sum1, axis=0)
    return sum1
is a generalization of numpy.sum for multidimensional ndarrays,
def flipall(A):
    A1 = A
    for k in range(A.ndim):
        A1 = np.flip(A1, k)
    return A1
flips A in all existing axes, and finally
def conv(A, B, K):
    D0 = np.zeros(K.shape, dtype=K.dtype)
    return sumall(np.multiply(
        crop(A, np.maximum(D0, np.minimum(A.shape, K - B.shape)),
             np.minimum(A.shape, K)),
        flipall(crop(B, np.maximum(D0, np.minimum(B.shape, K - A.shape)),
                     np.minimum(B.shape, K)))))
where K = [k1,...,kn] and 0 <= kj <= oj for all j, is a modified version of the formula above which only calculates the non-zero multiplications, to be more efficient. Now I'm trying to populate the Omega array using fromfunction or meshgrid in combination with vectorize, as suggested here, but I have failed so far. Now my questions, in prioritized order, are:
How can I implement the final step and populate the final array in an efficient and Pythonic way? (A naive loop-based baseline of what I mean is sketched right after these questions.)
Are there more efficient implementations of the functions above? Or how would you implement the formula?
Is my equation correct? Does it represent the multiplication of multivariate finite power series?
Have others really not implemented this before in NumPy, or am I reinventing the wheel here? I would appreciate it if you could point me towards other solutions.
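For concreteness, here is a naive loop-based version of the final step built on the conv helper above (an assumption about how the pieces fit together, not a vetted implementation); the whole point of the first question is replacing the explicit Python loop:
import numpy as np

def cauchy_product(Phi, Psi):
    # Output shape O = P + Q - 1, element-wise over the n dimensions
    O = tuple(p + q - 1 for p, q in zip(Phi.shape, Psi.shape))
    Omega = np.zeros(O, dtype=np.result_type(Phi, Psi))
    # Populate every element; conv crops Phi and Psi to the non-zero
    # overlap for the (1-indexed) bound K
    for K in np.ndindex(*O):
        Omega[K] = conv(Phi, Psi, np.array(K) + 1)
    return Omega
In the 1D case this should reproduce np.convolve(Phi, Psi).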
I would appreciate it if you could help me with these questions. Thanks for your help in advance.
P.S.1 You may find some examples and other information in this GitHub Gist
P.S.2 Here in the AstroPy mailing list I was told that scipy.signal.convolve and/or scipy.ndimage.convolve do the job for higher dimensions as well. There is also a scipy.ndimage.filters.convolve. Here I have explained why they are not what I'm looking for.
I have the following line of code in MATLAB which I am trying to convert to Python numpy:
pred = traindata(:,2:257)*beta;
In Python, I have:
pred = traindata[ : , 1:257]*beta
beta is a 256 x 1 array.
In MATLAB,
size(pred) = 1389 x 1
But in Python,
pred.shape = (1389L, 256L)
So the difference between the two results comes from the multiplication by the beta array.
How do I write the original Python line, so that the size of pred is 1389 x 1, like it is in MATLAB when I multiply by my beta array?
I suspect that beta is in fact a 1D numpy array. In numpy, 1D arrays are neither row nor column vectors, whereas MATLAB clearly makes this distinction; they are simply 1D arrays, agnostic of orientation. You need to manually introduce a new singleton dimension into the beta vector to facilitate the multiplication. On top of this, the * operator performs element-wise multiplication. To perform matrix-vector or matrix-matrix multiplication, you must use numpy's dot function.
Therefore, you must do something like this:
import numpy as np # Just in case
pred = np.dot(traindata[:, 1:257], beta[:,None])
beta[:,None] will create a 2D numpy array where the elements from the 1D array are populated along the rows, effectively making a column vector (i.e. 256 x 1). However, if you have already done this on beta, then you don't need to introduce the new singleton dimension. Just use dot normally:
pred = np.dot(traindata[:, 1:257], beta)
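A quick shape check with hypothetical random stand-ins for the real data:
import numpy as np

traindata = np.random.rand(1389, 257)  # hypothetical stand-in
beta = np.random.rand(256)             # 1D array, as suspected above

pred = np.dot(traindata[:, 1:257], beta[:, None])
print(pred.shape)  # (1389, 1)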
I am using Python 3.23 and I want to multiply a sparse VECTOR with a dense MATRIX. The idea of first unfolding the sparse vector into a dense one and then multiplying is of course wasteful from every standpoint except memory management up to the actual unfolding; multiplying with all the explicit zeros in there is more expensive.
Also, does anyone know of a good way to make SciPy keep one-dimensional matrices in sparse mode? The only one (admittedly) I have used is the classical notation of three vectors (x, y, value), so I have had to use np.ones(len(...)) to get it to work.
Well.. comments welcome!
Store the vector using the Scipy sparse matrix classes:
import numpy as np
from scipy.sparse import csr_matrix

x = csr_matrix(np.random.rand(1000) > 0.99).T
print(x.shape)  # (1000, 1)
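To then multiply against a dense matrix, the sparse object's dot method keeps the computation sparse (M here is a hypothetical dense matrix):
import numpy as np
from scipy.sparse import csr_matrix

M = np.random.rand(1000, 50)                    # hypothetical dense matrix
v = csr_matrix(np.random.rand(1, 1000) > 0.99)  # sparse 1 x 1000 row vector

# The multiplication only touches the non-zero entries of v
result = v.dot(M)
print(result.shape)  # (1, 50)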
I have a matrix B that is square and dense, and a matrix A that is rectangular and sparse.
Is there a way to efficiently compute the product B^-1 * A?
So far, I use (in numpy)
tmp = np.linalg.inv(B)
return tmp * A
which, I believe, makes use of A's sparsity. I was thinking about using the sparse method scipy.sparse.linalg.spsolve, but this requires B, and not A, to be sparse.
Is there another way to speed things up?
Since the matrix to be inverted is dense, spsolve is not the tool you want. In addition, it is bad numerical practice to calculate the inverse of a matrix and multiply it by another - you are much better off using LU decomposition, which is supported by scipy.
Another point is that unless you are using the matrix class (I think that the ndarray class is better, this is something of a question of taste), you need to use dot instead of the multiplication operator. And if you want to efficiently multiply a sparse matrix by a dense matrix, you need to use the dot method of the sparse matrix. Unfortunately this only works if the first matrix is sparse, so you need to use the trick which Anycorn suggested of taking the transpose to swap the order of operations.
Here is a lazy implementation which doesn't use the LU decomposition, but which should otherwise be efficient:
import scipy.linalg

B_inv = scipy.linalg.inv(B)
# (A^T @ B_inv^T)^T == B_inv @ A; the sparse A must be on the left of dot
C = (A.transpose().dot(B_inv.transpose())).transpose()
Doing it properly with the LU decomposition involves finding a way to efficiently multiply a triangular matrix by a sparse matrix, which currently eludes me.
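A sketch of the LU route with scipy.linalg.lu_factor / lu_solve, under the (unsatisfying) assumption that A may be densified for the solve, which gives up exactly the sparsity advantage discussed above:
import numpy as np
import scipy.linalg
import scipy.sparse

B = np.random.rand(500, 500)                                # hypothetical dense square B
A = scipy.sparse.rand(500, 50, density=0.01, format='csc')  # hypothetical sparse A

# Factor B once, then solve B @ C = A for C = B^-1 @ A;
# lu_solve needs a dense right-hand side, so A is densified here
lu, piv = scipy.linalg.lu_factor(B)
C = scipy.linalg.lu_solve((lu, piv), A.toarray())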