Tensorflow: Elementwise inversion of multiple matrices of different shape

I have a set of differently-shaped matrices M = (M_1, M_2, ... M_K). For efficiency, I can store all of M in a single zero-padded tensor of size K x max(M_k.shape[0]) x max(M_k.shape[1]). This works fine for things like batch matrix multiplication and elementwise addition. But what if I want to do elementwise division where the zero elements are ignored?
The best version of this I've come up with is:
import numpy as np
import tensorflow as tf
M = tf.constant(np.array([[1.,2.,0],[3.,4.,5.],[6.,0,0]]), tf.float32)
Minv = tf.select(tf.equal(M, 0), tf.zeros_like(M), tf.inv(M))
Is this the fastest way? Does tf.select still accelerate well on a GPU?
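For what it's worth, in current TensorFlow these ops have been renamed (tf.select became tf.where, tf.inv became tf.math.reciprocal), and tf.math.divide_no_nan performs the masked division in a single op. A sketch of both, assuming TF 2.x:
import tensorflow as tf

M = tf.constant([[1., 2., 0.], [3., 4., 5.], [6., 0., 0.]])

# Same select-based pattern with the renamed ops; note the reciprocal of
# the zero entries is still computed (as inf) before being masked out.
Minv = tf.where(tf.equal(M, 0), tf.zeros_like(M), tf.math.reciprocal(M))

# Single-op alternative: returns 0 wherever the divisor is 0, no inf produced.
Minv2 = tf.math.divide_no_nan(tf.ones_like(M), M)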

Related

Is there an efficient way of solving sparse linear equations in Tensorflow that is compatible with gradient tape?

I need to solve equations in Tensorflow of the form A(y)x = b, where A is a large sparse band matrix that is also a function of some other tensor, say y. Naturally, the solution x will be a function of y too. After solving for x, I want to take the gradient of x with respect to y.
I considered two options:
1. Use a sparse external library to efficiently invert A, such as scipy.sparse. For this I need to convert the tensors to numpy array and then back to tensors. The problem with this approach is that I cannot use gradient tape with external libraries such as scipy.sparse.
2. Use Tensorflow's matrix inversion that works with gradient tape. This is extremely slow for large matrices, since it does not utilize the sparsity of the tensor. I was unable to find a sparse invert implementation in Tensorflow.
A small simplified example of what I need:
y = tf.constant(3.14)
A = my_sparse_tensor(shape=(1000, 1000))  # arbitrary function that returns a sparse tensor
b = tf.ones(shape=(1000, 1))

with tf.GradientTape() as g:
    g.watch(y)
    A = A * y
    x = tf.matmul(sparse_invert(A), b)

dx_dy = g.gradient(x, y)
Of course the dependence of A on y is much more complicated than in this example.
Is there any way to do this in Tensorflow, or do I have to restrict myself to tf.linalg.inv?
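One possible workaround for option 1, sketched here under assumptions rather than tested: wrap the scipy solve in tf.custom_gradient and supply the gradient by hand. For x = A^-1 b with incoming gradient dx, the vector-Jacobian products are dL/db = A^-T dx and dL/dA = -(A^-T dx) x^T, so the backward pass is just a second sparse solve against the transpose. The sketch takes a dense A for simplicity, only works in eager mode (it calls .numpy()), and sparse_solve is a name introduced here:
import numpy as np
import scipy.sparse
import scipy.sparse.linalg
import tensorflow as tf

@tf.custom_gradient
def sparse_solve(a, b):
    # Forward pass: hand A x = b to scipy's sparse solver (eager mode only).
    a_sp = scipy.sparse.csr_matrix(a.numpy())
    x = tf.constant(scipy.sparse.linalg.spsolve(a_sp, b.numpy().ravel()).reshape(-1, 1), dtype=a.dtype)

    def grad(dx):
        # Adjoint solve: lam = A^-T dx, then dL/dA = -lam x^T and dL/db = lam.
        lam = tf.constant(scipy.sparse.linalg.spsolve(a_sp.T.tocsr(), dx.numpy().ravel()).reshape(-1, 1), dtype=a.dtype)
        return -tf.matmul(lam, tf.transpose(x)), lam

    return x, grad
Because the gradient with respect to a comes back as an ordinary tensor, the tape can chain it through A's dependence on y. Note that it is dense here; a production version would restrict it to A's sparsity pattern.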

Row-wise outer product on sparse matrices

Given two sparse scipy matrices A, B I want to compute the row-wise outer product.
I can do this with numpy in a number of ways. The easiest perhaps being
np.einsum('ij,ik->ijk', A, B).reshape(n, -1)
or
(A[:, :, np.newaxis] * B[:, np.newaxis, :]).reshape(n, -1)
where n is the number of rows in A and B.
In my case, however, going through dense matrices eats up way too much RAM.
The only option I have found is thus to use a Python loop:
sp.sparse.vstack((ra.T @ rb).reshape(1, -1) for ra, rb in zip(A, B)).tocsr()
While using less RAM, this is very slow.
My question is thus: is there a sparse (RAM-efficient) way to take the row-wise outer product of two matrices that keeps things vectorized?
(A similar question is numpy elementwise outer product with sparse matrices but all answers there go through dense matrices.)
We can directly calculate the csr representation of the result. It's not superfast (~3 seconds on 100,000x768) but may be ok, depending on your use case:
import numpy as np
import itertools
from scipy import sparse
def spouter(A, B):
    N, L = A.shape
    N, K = B.shape
    # Pair the nonzero values of corresponding rows of A and B and take
    # their outer products.
    drows = zip(*(np.split(x.data, x.indptr[1:-1]) for x in (A, B)))
    data = [np.outer(a, b).ravel() for a, b in drows]
    # Pair the column indices likewise and map each (col_A, col_B) pair to
    # a flat column index in the (L*K)-wide result.
    irows = zip(*(np.split(x.indices, x.indptr[1:-1]) for x in (A, B)))
    indices = [np.ravel_multi_index(np.ix_(a, b), (L, K)).ravel() for a, b in irows]
    indptr = np.fromiter(itertools.chain((0,), map(len, indices)), int).cumsum()
    return sparse.csr_matrix((np.concatenate(data), np.concatenate(indices), indptr), (N, L * K))
A = sparse.random(100,768,0.03).tocsr()
B = sparse.random(100,768,0.03).tocsr()
print(np.all(np.einsum('ij,ik->ijk',A.A,B.A).reshape(100,-1) == spouter(A,B).A))
A = sparse.random(100000,768,0.03).tocsr()
B = sparse.random(100000,768,0.03).tocsr()
from time import time
T = time()
C = spouter(A,B)
print(time()-T)
Sample run:
True
3.1073222160339355

Covariance Matrix from 2D vectors - Tensorflow, Numpy

I'm trying to generate a kernel function for a GP using only matrix operations (no loops).
With vectors there was no problem, taking advantage of broadcasting:
def kernel(A, B):
    return 1 / np.exp(np.linalg.norm(A - B.T)) ** 2
A and B are both [n,1] vectors, but with [n,m]-shaped matrices it just doesn't work. (I also tried reshaping to [1,n,m].)
I'm interested in computing a matrix X where the ij-th element is defined by A_i - B_j.
For now I'm working in NumPy, but my final objective is to implement this in TensorFlow.
Thanks in advance.
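For the stated goal, a matrix X whose ij-th element is built from A_i - B_j, the usual trick is to insert a singleton axis so broadcasting produces all pairwise differences. A minimal NumPy sketch (the squared-exponential form below is an assumption about the intended kernel; in TensorFlow the same pattern works with tf.expand_dims):
import numpy as np

def pairwise_diff(A, B):
    # A: [n, m], B: [p, m] -> diff[i, j, :] = A[i] - B[j], shape [n, p, m]
    return A[:, None, :] - B[None, :, :]

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel: K[i, j] = exp(-||A_i - B_j||^2 / (2 l^2))
    sq_dists = np.sum(pairwise_diff(A, B) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * lengthscale ** 2))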

Matrix multiplication in CUDA running out of memory

I am trying to compute a matrix multiplication using the following script:
import numpy as np
import math
from timeit import default_timer as timer
from numba import cuda
from numba import *

def mult(a, b):
    return a * b

mult_gpu = cuda.jit(restype=float32, argtypes=[float32, float32], device=True)(mult)

@cuda.jit(argtypes=[float32[:, :], float32[:, :], float32[:, :, :]])
def mult_kernel(a, b, c):
    Ni = c.shape[0]
    Nj = c.shape[1]
    Nk = c.shape[2]
    startX, startY, startZ = cuda.grid(3)
    gridX = cuda.gridDim.x * cuda.blockDim.x
    gridY = cuda.gridDim.y * cuda.blockDim.y
    gridZ = cuda.gridDim.z * cuda.blockDim.z
    for i in range(startX, Ni, gridX):
        for j in range(startY, Nj, gridY):
            for k in range(startZ, Nk, gridZ):
                c[i, j, k] = mult_gpu(a[i, k], b[j, k])

def main():
    A = np.ones((20, 50000), dtype=np.float32)
    B = np.ones((3072, 50000), dtype=np.float32)
    C = np.ones((20, 3072, 50000), dtype=np.float32)
    (Ni, Nj, Nk) = C.shape
    my_gpu = cuda.get_current_device()
    thread_ct = my_gpu.WARP_SIZE
    block_ct_x = int(math.ceil(float(Ni) / thread_ct))
    block_ct_y = int(math.ceil(float(Nj) / thread_ct))
    block_ct_z = int(math.ceil(float(Nk) / thread_ct))
    blockdim = thread_ct, thread_ct, thread_ct
    griddim = block_ct_x, block_ct_y, block_ct_z
    print "Threads per block:", blockdim
    print "Blocks per grid:", griddim
    start = timer()
    Cg = cuda.to_device(C)
    mult_kernel[griddim, blockdim](A, B, Cg)
    Cg.to_host()
    dt = timer() - start
    print "Computation done in %f s" % (dt)
    print 'C[:3,1,1] = ', C[:3, 1, 1]
    print 'C[-3:,1,1] = ', C[-3:, 1, 1]

if __name__ == '__main__':
    main()
Executing this yields an error:
numba.cuda.cudadrv.driver.CudaAPIError: [2] Call to cuMemAlloc results in CUDA_ERROR_OUT_OF_MEMORY
How could I fix this memory problem?
However, even with smaller matrices
A=np.ones((20,500),dtype=np.float32)
B=np.ones((372,500),dtype=np.float32)
C=np.ones((20,372,500),dtype=np.float32)
there is still an error:
numba.cuda.cudadrv.driver.CudaAPIError: [1] Call to cuLaunchKernel results in CUDA_ERROR_INVALID_VALUE
I was inspired by the Mandelbrot example when implementing the computation above.
EDIT1
In order to resolve any confusion, this is actually a 3D matrix by 3D matrix multiplication:
A=np.ones((20,1,50000),dtype=np.float32)
B=np.ones((1,3072,50000),dtype=np.float32)
C=np.ones((20,3072,50000),dtype=np.float32)
I skipped one dimension in A and B because it is not necessary for the computation.
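For reference, the product this kernel computes is plain broadcasting in NumPy; a CPU sketch using the smaller sizes from above (the full-size dense result would not fit in memory):
import numpy as np
A = np.ones((20, 500), dtype=np.float32)
B = np.ones((372, 500), dtype=np.float32)
# c[i, j, k] = a[i, k] * b[j, k]: every individual product, no summation
C = A[:, None, :] * B[None, :, :]    # shape (20, 372, 500)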
EDIT2
My GPU is:
In [1]: from numba import cuda
In [2]: gpu=cuda.get_current_device()
In [3]: gpu.name
Out[3]: 'GeForce GT 750M'
EDIT3
To fit within my GPU's memory (2 GB), I halved each dimension:
dimx=10
dimy=1536
dimz=25000
A=np.ones((dimx,dimz),dtype=np.float32)
B=np.ones((dimy,dimz),dtype=np.float32)
C=np.ones((dimx,dimy,dimz),dtype=np.float32)
But I still receive the CUDA_ERROR_OUT_OF_MEMORY error. How could one explain this?
By my calculation, the three matrices total only about 1.7 GB:
(10*1536*25000*4.+10*25000*4+1536*25000*4.)/(10**9)=1.6906
Regarding the first problem, you're running out of memory. A major contributor is that this isn't the way a matrix-matrix multiply is normally organized. Normally, as you multiply row and column elements together, you keep a running sum, then store that sum in the appropriate location in the product (result) matrix. This allows a much smaller size for the c matrix: it need only be 2-dimensional, not 3-dimensional. At your full sizes, the 3D c array alone is 20*3072*50000*4 bytes, roughly 12.3 GB, far beyond a 2 GB GPU. You may wish to study the linear-algebra definition of matrix-matrix multiplication: when you multiply a 2D matrix by a 2D matrix, the result is a 2D matrix, not a 3D matrix.
In a nutshell, something like this:
for i in range(startX, Ni, gridX):
    for j in range(startY, Nj, gridY):
        c[i, j] = 0
        for k in range(startZ, Nk, gridZ):
            c[i, j] = c[i, j] + mult_gpu(a[i, k], b[j, k])
And adjust your c shape accordingly.
If you actually need the individual products in 3D form as you are doing here, then there is not much I can say except that you will need to scale the problem to fit in the GPU memory size for whatever GPU you are using.
Regarding the second problem, you have a problem here:
thread_ct=my_gpu.WARP_SIZE
...
blockdim=thread_ct,thread_ct,thread_ct
WARP_SIZE is 32 (presumably), so you are asking for a 3D block of dimensions 32*32*32 = 32,768 threads. But CUDA threadblocks are limited to a maximum of 1024 threads, a limit that applies to the product of the individual block dimensions.
If you change your thread_ct to 8, for example:
thread_ct=8
You should be able to get past this particular issue.
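Putting both fixes together, a minimal sketch, using the current numba API (which infers signatures; the restype/argtypes form in the question is deprecated) rather than the asker's exact code:
import math
import numpy as np
from numba import cuda

@cuda.jit
def matmul_kernel(a, b, c):
    # c[i, j] = sum over k of a[i, k] * b[j, k], accumulated in a register
    i, j = cuda.grid(2)
    if i < c.shape[0] and j < c.shape[1]:
        acc = 0.0
        for k in range(a.shape[1]):
            acc += a[i, k] * b[j, k]
        c[i, j] = acc

A = np.ones((20, 50000), dtype=np.float32)
B = np.ones((3072, 50000), dtype=np.float32)
C = np.zeros((20, 3072), dtype=np.float32)   # 2D result: ~245 KB instead of ~12 GB

threads = (8, 8)   # 64 threads per block, well under the 1024-thread limit
blocks = (math.ceil(C.shape[0] / threads[0]),
          math.ceil(C.shape[1] / threads[1]))
matmul_kernel[blocks, threads](A, B, C)
With these sizes, A and B together are about 620 MB, which should fit on a 2 GB device.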

Matlab to Python numpy indexing and multiplication issue

I have the following line of code in MATLAB which I am trying to convert to Python numpy:
pred = traindata(:,2:257)*beta;
In Python, I have:
pred = traindata[ : , 1:257]*beta
beta is a 256 x 1 array.
In MATLAB,
size(pred) = 1389 x 1
But in Python,
pred.shape = (1389L, 256L)
So I found that the multiplication by the beta array is what produces the difference between the two results.
How do I write the original Python line, so that the size of pred is 1389 x 1, like it is in MATLAB when I multiply by my beta array?
I suspect that beta is in fact a 1D numpy array. In numpy, 1D arrays are not row or column vectors, whereas MATLAB makes that distinction explicit; they are simply 1D arrays with no orientation. If needed, you can manually introduce a singleton dimension into beta to facilitate the multiplication. On top of this, the * operator performs element-wise multiplication; to do matrix-vector or matrix-matrix multiplication, you must use numpy's dot function.
Therefore, you must do something like this:
import numpy as np # Just in case
pred = np.dot(traindata[:, 1:257], beta[:,None])
beta[:,None] will create a 2D numpy array where the elements from the 1D array are populated along the rows, effectively making a column vector (i.e. 256 x 1). However, if you have already done this on beta, then you don't need to introduce the new singleton dimension. Just use dot normally:
pred = np.dot(traindata[:, 1:257], beta)
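To see the shape behaviour concretely (the data here is a hypothetical stand-in with the question's shapes):
import numpy as np

traindata = np.random.rand(1389, 257)   # stand-in for the real data
beta = np.random.rand(256)              # 1D array: no row/column orientation

print((traindata[:, 1:257] * beta).shape)                # (1389, 256): element-wise broadcast
print(np.dot(traindata[:, 1:257], beta).shape)           # (1389,): matrix-vector product
print(np.dot(traindata[:, 1:257], beta[:, None]).shape)  # (1389, 1): explicit column vector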
