Addressing ranges in a Scipy sparse matrix - python

I have a large matrix, currently in numpy, that I would like to port over to a scipy sparse matrix, because saving the text representation of the (2000, 2000) numpy matrix takes over 100 MB.
(1) There seems to be a surfeit of sparse matrix types available in scipy (for instance, lil_matrix or dok_matrix): which one would be optimal for simple incrementing, and efficient to save to a database?
(2) I'd like to be able to address ranges in the matrix like so:
>>> import numpy as np
>>> a = np.zeros((1000,1000))
>>> a[3:5,4:7] += 1
It seems that this is not possible for the sparse matrices?

I can't say which format is most efficient to store; it's going to depend on your data.
I can say, however, that the += operator works, just that you can't rely on the usual array broadcasting rules:
>>> import numpy as np
>>> from scipy import sparse
>>> m = sparse.lil_matrix((100,100))
>>> m[50:56,50:56] += np.ones((6,6))
>>> m[50,50]
1.0
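On the storage question in (1), a minimal sketch of two common options (assuming a scipy recent enough to have save_npz, i.e. 0.19+): save a compressed binary file, or pull out the coo triplets, which map naturally onto a (row, col, value) database table.
>>> sparse.save_npz('/tmp/m.npz', m.tocsr())       # compact binary file on disk
>>> coo = m.tocoo()
>>> rows, cols, vals = coo.row, coo.col, coo.data  # triplets for a DB table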

Related

Is there a way to have fast boolean operations on scipy.sparse matrices?

I have to compute XOR between very high-dimensional (~30,000) vectors to get Hamming distances. For example, I need to XOR one vector, all False except for 16 sparsely located True values, with each row of a 50,000x30,000 matrix.
As of now, the quickest way I found is to not use scipy.sparse but the simple ^ operation on each row.
This:
l1distances=(self.hashes[index,:]^self.hashes[all_points,:]).sum(axis=1)
Happens to be ten times faster than this:
sparse_hashes = scipy.sparse.csr_matrix(self.hashes).astype('bool')
for i in range(all_points.shape[0]):
    l1distances[0,i] = (sparse_hashes[index] - sparse_hashes[all_points[i]]).sum()
But ten times faster is still quite slow since, in theory, a sparse vector with 16 activations should make the computation cost the same as a 16-dimensional dense one.
Is there any solution? I'm really struggling here; thanks for the help!
If your vector is highly sparse (like 16/30000) I'd probably skip fiddling with a sparse XOR entirely: split the matrix columns by where the vector is True or False, since XOR flips entries in the True columns and leaves the False columns unchanged.
from scipy import sparse
import numpy as np
import numpy.testing as npt
matrix_1 = sparse.random(10000, 100, density=0.1, format='csc')
matrix_1.data = np.ones(matrix_1.data.shape, dtype=bool)
matrix_2 = sparse.random(1, 100, density=0.1, format='csc', dtype=bool)
vec = matrix_2.A.flatten()
# Pull out the part of the sparse matrix that matches the vector and sum it after xor
matrix_xor = (matrix_1[:, vec].A ^ np.ones(vec.sum(), dtype=bool)[np.newaxis, :]).sum(axis=1)
# Sum the part that doesn't match the vector and add it
l1distances = matrix_1[:, ~vec].sum(axis=1).A.flatten() + matrix_xor
# Double check that I can do basic math
npt.assert_array_equal(l1distances, (matrix_1.A ^ vec[np.newaxis, :]).sum(axis=1))

Why do multiplication functions of scipy sparse and numpy arrays give different results?

I have two matrices in Python 2.7: a dense matrix A_dense and a sparse matrix A_sparse. I am interested in computing element-wise multiplication followed by a sum. There are two ways to do it: with numpy's multiplication or with scipy's sparse multiplication. I expect them to give exactly the same result, differing only in execution time. But I find that they give different results for certain matrix sizes.
import numpy as np
from scipy import sparse
L=2000
np.random.seed(2)
rand_x=np.random.rand(L)
A_sparse_init=np.diag(rand_x, -1)+np.diag(rand_x, 1)
A_sparse=sparse.csr_matrix(A_sparse_init)
A_dense=np.random.rand(L+1,L+1)
print np.sum(A_sparse.multiply(A_dense))-np.sum(np.multiply(A_dense[A_sparse.nonzero()], A_sparse.data))
Output:
1.1368683772161603e-13
If I choose L=2001, then output is:
0.0
To check the size dependence of the difference between the two multiplication methods, I wrote:
L=100
np.random.seed(2)
N_loop=100
multiply_diff_arr=np.zeros(N_loop)
for i in xrange(N_loop):
    rand_x=np.random.rand(L)
    A_sparse_init=np.diag(rand_x, -1)+np.diag(rand_x, 1)
    A_sparse=sparse.csr_matrix(A_sparse_init)
    A_dense=np.random.rand(L+1,L+1)
    multiply_diff_arr[i]=np.sum(A_sparse.multiply(A_dense))-np.sum(np.multiply(A_dense[A_sparse.nonzero()], A_sparse.data))
    L+=1
I got a plot of the difference as a function of size (image not included here).
Can anyone help me understand what's happening? Shouldn't we expect the difference between the two methods to be on the order of 1e-18 rather than 1e-13?
I don't have a full answer, but this might help track it down:
Under the hood, scipy.sparse will convert to coo format and do this:
ret = self.tocoo()
if self.shape == other.shape:
    data = np.multiply(ret.data, other[ret.row, ret.col])
The question is then why these two operations give different results:
ret = A_sparse.tocoo()
c = np.multiply(ret.data, A_dense[ret.row, ret.col])
ret.data = c.view(type=np.ndarray)
print c.sum() - ret.sum()
# -1.1368683772161603e-13
Edit:
The difference stems from different defaults for which axis np.add.reduce runs over first.
E.g.:
A_sparse.multiply(A_dense).sum(axis=1).sum()
A_sparse.multiply(A_dense).sum(axis=0).sum()
Numpy sums over axis 0 first, while the sparse sum reduces in the other order; since floating-point addition is not associative, the two orders accumulate slightly different rounding error.
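A minimal demonstration (not from the original answer) that reduction order alone changes the floating-point result:
import numpy as np

np.random.seed(2)
x = np.random.rand(2000, 2000)
# Same numbers, summed in a different order: the two grand totals
# typically differ by a tiny nonzero residue rather than being equal.
print(x.sum(axis=0).sum() - x.sum(axis=1).sum())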

How can I generate a large identity matrix in python without running into "memory full"?

I am using the code below:
import numpy.matlib
n = 40000
numpy.matlib.identity(n)
You can do this with scipy using sparse matrix representation:
import numpy as np
from scipy.sparse import identity
n = 30000
a = np.identity(n)
print(a.nbytes)
b = identity(n)
print(b.data.nbytes)
The difference is huge (dense storage grows quadratically with n, sparse grows linearly): 7200000000 bytes vs 240000 bytes.
You can also decrease the size by providing an appropriate dtype, like a = np.identity(n, dtype='int8'), but this only reduces the size linearly (going from the default float64 to int8 saves a factor of 8).
In the same way you can do b = identity(n, dtype='int8', format='dia'), which reduces the size even further, to 30000 bytes.
But the most important question is what you are planning to do with this matrix (I highly doubt you just want to create it), because some operations do not support sparse matrices. In that case you either have to buy more memory or come up with smart linear-algebra tricks: operate on blocks of the matrices, store intermediate results on disk, and merge them together.
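For the common case where you only ever apply the matrix to vectors, a minimal sketch of using the sparse identity without materializing anything dense:
import numpy as np
from scipy.sparse import identity

n = 30000
b = identity(n, format='dia')  # ~240 KB of data instead of ~7.2 GB dense
v = np.random.rand(n)
w = b.dot(v)                   # sparse matrix-vector product, no dense n x n array
print(np.allclose(v, w))       # True: the identity maps v to itself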

Python - matrix multiplication with sparse result

Suppose I have two dense matrices U (10000x50) and V (50x10000), and one sparse matrix A (10000x10000). Each element in A is either 1 or 0. I want to find A*(UV), where '*' denotes element-wise multiplication. To solve the problem, scipy/numpy will calculate the dense matrix UV first, and since UV is dense and large (10000x10000) it's very slow.
Because I only need a few elements of UV, namely those indicated by A, it should save a lot of time if only the necessary elements were calculated instead of calculating everything and then filtering with A. Is there a way to instruct scipy to do this?
BTW, I used Matlab to solve this problem, and Matlab is smart enough to find what I'm trying to do and works efficiently.
Update:
I found that Matlab calculates UV in full, just as scipy does. My scipy installation is simply too slow...
Here's a test script and possible speedup. The basic idea is to use the nonzero coordinates of A to select rows and columns of U and V, and then use einsum to perform a subset of the possible dot products.
import numpy as np
from scipy import sparse

#M,N,d = 10,5,.1
#M,N,d = 1000,50,.1
M,N,d = 5000,50,.01  # about the limit for my memory
A = sparse.rand(M,M,d)   # coo format, so A.row and A.col are available
A.data[:] = 1            # a sparse 0,1 array
U = (np.arange(M*N)/(M*N)).reshape(M,N)
V = (np.arange(M*N)/(M*N)).reshape(N,M)
A1 = A.multiply(U.dot(V))  # the direct solution
A2 = np.einsum('ij,ik,kj->ij', A.A, U, V)
print(np.allclose(A1.A, A2))

def foo(A,U,V):
    # use A's nonzero coordinates to select rows of U and columns of V
    A3 = A.copy()
    U1 = U[A.row,:]
    V1 = V[:,A.col]
    A3.data[:] = np.einsum('ij,ji->i', U1, V1)
    return A3

A3 = foo(A,U,V)
print(np.allclose(A1.A, A3.A))
The three solutions match. For large arrays, foo is about 2x faster than the direct solution. For small sizes the pure einsum is competitive, but it bogs down for large arrays.
Using dot in foo would have computed too many products ('ij,jk->ik' as opposed to 'ij,ji->i').
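For reference, a usage sketch at the question's original sizes, continuing from the script above (the 0.001 density is a guess; anything comparably sparse behaves the same):
M, N = 10000, 50
U = np.random.rand(M, N)
V = np.random.rand(N, M)
A = sparse.rand(M, M, density=0.001, format='coo')
A.data[:] = 1
A3 = foo(A, U, V)  # evaluates U.dot(V) only at A's nonzero positions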

mmap sparse vector in python

I'm looking for a simple sparse vector implementation that can be mapped into memory, similarly to numpy.memmap.
Unfortunately, the numpy implementation deals only with full (dense) vectors. Example usage:
vec = SparseVector('/tmp/file.dat') # SparseVector is the class I'm looking for
vec[10] = 10
vec[50] = 21
for key in vec:
    print(vec[key]) # 10, 21
I found a scipy class representing a sparse matrix, but its two dimensions are clumsy to use, as I'd need to make a matrix with only one row and then use vec[0,i].
Any suggestions?
Someone else was just asking about 1d sparse vectors, only they wanted to take advantage of the scipy.sparse handling of duplicate indices:
is there something like coo_matrix but for sparse vectors?
As shown there, a coo_matrix actually consists of three numpy arrays: data, row, and col. Other formats rearrange the values in other ways; lil, for example, has two nested lists, one for the data and another for the coordinates, and dok is a regular dictionary with (i,j) tuples as keys.
In theory, then, a sparse vector requires just two arrays. Or, as your example shows, it could be a simple dictionary.
So you could implement a mmap sparse vector by using two mmap arrays, as sketched below. As far as I know there isn't a mmap version of the scipy sparse matrices, though it's not something I've looked for.
But what functionality do you want? What dimension? So large that a dense version would not fit in regular memory? Are you doing math with it? Or just data lookup?
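A minimal sketch of the two-mmap-array idea (the class name and file layout are mine, not an existing library): keep the indices sorted and look entries up with searchsorted. Fixed capacity, no deletion; just enough to show the memmap plumbing.
import numpy as np

class SparseVector:
    """Toy mmap-backed sparse vector: two memmapped arrays (indices, values)."""
    def __init__(self, path, capacity=1024):
        self.idx = np.memmap(path + '.idx', dtype=np.int64, mode='w+', shape=(capacity,))
        self.val = np.memmap(path + '.val', dtype=np.float64, mode='w+', shape=(capacity,))
        self.n = 0  # number of stored entries

    def __setitem__(self, i, v):
        pos = np.searchsorted(self.idx[:self.n], i)
        if pos < self.n and self.idx[pos] == i:
            self.val[pos] = v  # overwrite an existing entry
        else:
            # shift the tail right by one slot to keep indices sorted
            self.idx[pos+1:self.n+1] = self.idx[pos:self.n].copy()
            self.val[pos+1:self.n+1] = self.val[pos:self.n].copy()
            self.idx[pos], self.val[pos] = i, v
            self.n += 1

    def __getitem__(self, i):
        pos = np.searchsorted(self.idx[:self.n], i)
        if pos < self.n and self.idx[pos] == i:
            return self.val[pos]
        return 0.0  # implicit zero for missing entries

    def __iter__(self):
        return iter(self.idx[:self.n])

vec = SparseVector('/tmp/file.dat')
vec[10] = 10
vec[50] = 21
for key in vec:
    print(vec[key])  # 10.0, 21.0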
