I have a sparse matrix stored on disk in coordinate (triplet) format.
I would like to read chunks of the matrix into memory using scipy.sparse. However, when I do this, scipy always assumes the matrix is indexed from (0, 0), regardless of which chunk is loaded.
This means, for example, that scipy interprets the last 'chunk' of the sparse matrix as a huge matrix with only a few values in the bottom-right corner.
How can I handle the chunks correctly, so that calling toarray() to create a dense matrix produces only the subset corresponding to that chunk?
The reason for doing this is that, even sparse, the matrix is too large for memory (approximately 600 million 32-bit floating point values), and to display it on screen (the matrix represents a geospatial raster) I need to convert it to a dense matrix and store it in a geospatial format (e.g. GeoTIFF).
You should be able to tweak the row and col values when building the subset. For example:
In [84]: row=np.arange(10)
In [85]: col=np.random.randint(0,6,row.shape)
In [86]: data=np.ones(row.shape,dtype=int)*2
In [87]: M=sparse.coo_matrix((data,(row,col)),shape=(10,6))
In [88]: M.A
Out[88]:
array([[0, 0, 2, 0, 0, 0],
[0, 0, 0, 0, 0, 2],
[0, 0, 0, 2, 0, 0],
[0, 0, 2, 0, 0, 0],
[0, 0, 2, 0, 0, 0],
[0, 2, 0, 0, 0, 0],
[2, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 2, 0],
[0, 0, 0, 2, 0, 0],
[0, 0, 0, 0, 0, 2]])
To build a matrix with a subset of the rows use:
In [89]: M1=sparse.coo_matrix((data[5:],(row[5:]-5,col[5:])),shape=(5,6))
In [90]: M1.A
Out[90]:
array([[0, 2, 0, 0, 0, 0],
[2, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 2, 0],
[0, 0, 0, 2, 0, 0],
[0, 0, 0, 0, 0, 2]])
You'll have to decide whether you want to specify the shape for M1, or let it deduce it from the range of row and col.
If these coordinates are not sorted, or you also want to take a subrange of col, things could get more complicated. But I think this captures the basic idea.
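As a minimal sketch of how this might look when reading chunks from disk (my own illustration, not part of the original answer; the file name, column layout, and chunk bounds are hypothetical assumptions):
import numpy as np
from scipy import sparse

# Hypothetical chunk: all triplets whose row index falls in [row_start, row_stop)
row_start, row_stop, n_cols = 1000, 2000, 5000

# Assume 'triplets.txt' holds whitespace-separated "row col value" lines (hypothetical file)
rows, cols, vals = np.loadtxt('triplets.txt', unpack=True)
mask = (rows >= row_start) & (rows < row_stop)

# Shift the rows so the chunk is indexed from 0, and give the chunk its own shape
chunk = sparse.coo_matrix(
    (vals[mask], (rows[mask].astype(int) - row_start, cols[mask].astype(int))),
    shape=(row_stop - row_start, n_cols),
)
dense_chunk = chunk.toarray()   # only (row_stop - row_start) x n_cols in memory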
So I have an n*K integer matrix [Note: it's a representation of the number of samples drawn from K distributions (the K columns)]
a =[[0,1,0,0,2,0],
[0,0,1,0,0,0],
[3,0,0,0,0,0],
]
[Note: in the application context this matrix means that for the i-th row (simulation instance) we drew 1 element from distribution 1 (a[0,1] = 1) and 2 elements from distribution 4 (a[0,4] = 2), where the distribution indices run from 0 to K-1.]
What I need is to generate a 0-1 array that represents the same integer matrix, but with ones (1). In this case it is a 3D array of shape n*a.max()*K that has a 1 for each sample drawn from the distributions. [Note: we need this array so we can multiply it by our K-distribution sample matrix.]
Output
b = [[[0,1,0,0,1,0], # we don't care whether the samples are stacked
[0,0,0,0,1,0],
[0,0,0,0,0,0]], # this is the first row representation
[[0,0,1,0,0,0],
[0,0,0,0,0,0],
[0,0,0,0,0,0]], # this is the second row representation
[[1,0,0,0,0,0],
[1,0,0,0,0,0],
[1,0,0,0,0,0]], # this is the third row representation
]
How to do that in NumPy?
Thanks!
From michael-szczesny's comment:
a = np.array([[0,1,0,0,2,0],
[0,0,1,0,0,0],
[3,0,0,0,0,0],
])
b = (np.arange(1, a.max()+1)[:,None] <= a[:,None]).astype('uint8')
b
array([[[0, 1, 0, 0, 1, 0],
[0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0]],
[[0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]],
[[1, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0]]], dtype=uint8)
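As a quick sanity check (my addition, not part of the original answer), summing b over its middle axis should recover a, because each column of each slice holds exactly a[i,k] ones:
import numpy as np

a = np.array([[0,1,0,0,2,0],
              [0,0,1,0,0,0],
              [3,0,0,0,0,0]])
b = (np.arange(1, a.max()+1)[:,None] <= a[:,None]).astype('uint8')

# counts the ones per (row, distribution) pair and compares with the original counts
assert (b.sum(axis=1) == a).all()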
I have created an adjacency matrix using networkx as below:
from networkx.algorithms.bipartite.matrix import biadjacency_matrix as adj
user_node_list = data['user_id'].unique()
item_node_list = data['item_id'].unique()
adj_matrix = adj(B, user_node_list, column_order=item_node_list, dtype=None, weight='rating', format='csr')
I want to visualize this adj_matrix. How can I do this?
You can use Pandas to visualize your adj_matrix as follows:
import pandas as pd
A = pd.DataFrame(adj_matrix.todense())  # densify first; pd.DataFrame does not take a SciPy sparse matrix directly
Much of the time we're working with graphs with sparse adjacency matrices, so networkx returns a SciPy Compressed Sparse Row matrix rather than a numpy.ndarray or numpy.matrix. The former representation uses more efficient data structures and algorithms for representing and processing sparse matrices. In particular the __repr__ representation of the matrix differs from that of a vanilla (dense) NumPy matrix. It will look something like
<11x11 sparse matrix of type '<class 'numpy.int64'>'
with 28 stored elements in Compressed Sparse Row format>
This makes sense because if the representation of a CSR matrix were the same as what we see with a dense matrix, a simple print statement or logging message could have serious performance impacts if the matrix were very large.
Compare the above output with the __repr__ output of a vanilla (dense) NumPy matrix:
matrix([[0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0],
[1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1],
[0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0],
[0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1],
[0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0]])
which allows us to inspect the matrix elements visually (I am guessing that this is what was meant by "visualize the adj_matrix").
To convert a sparse CSR matrix to a dense NumPy matrix, simply do sparse_matrix.todense(). Note that this representation of a sparse matrix will require substantially more memory, so be mindful of that when working with larger graphs.
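Tying the two answers together, a minimal sketch (assuming the user_node_list and item_node_list from the question label the rows and columns of the biadjacency matrix) might look like:
import pandas as pd

dense = adj_matrix.todense()   # numpy.matrix; use adj_matrix.toarray() for a plain ndarray
A = pd.DataFrame(dense, index=user_node_list, columns=item_node_list)
print(A)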
I am trying to make a special diagonal matrix that looks like this:
[[1,1,0,0,0,0],
[0,0,1,1,0,0],
[0,0,0,0,1,1]]
It is slightly different from the question here: Make special diagonal matrix in Numpy
I tried tweaking the solution but couldn't quite get it.
Appreciate any advice on how to achieve this efficiently.
Not as elegant as in the comments, but:
a = 4      # number of rows
b = a*2    # number of columns
np.array((([1]*2+[0]*b)*a)[:-b]).reshape(a,b)
array([[1, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 1]])
works for any a.
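For comparison, a more compact variant (my own sketch, not necessarily the solution the comments refer to) builds the same pattern by duplicating the columns of an identity matrix:
import numpy as np

a = 4                                        # number of rows
np.repeat(np.eye(a, dtype=int), 2, axis=1)   # each column of eye(a) repeated twice
# equivalent: np.kron(np.eye(a, dtype=int), [1, 1])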
I have an optimization problem defined in cvxpy, but I want to work with the result in numpy afterwards. How can I convert it from cvxpy into numpy?
It is of type
<class 'numpy.matrixlib.defmatrix.matrix'>
If I want to plot it to see the result, matplotlib shows only a blue area.
https://docs.scipy.org/doc/numpy/reference/generated/numpy.matrix.view.html seems to have all you need
NumPy's view can (afaik) be used in any way you can use a NumPy array directly.
It makes sense to "convert" to a NumPy array if you want to get a copy of the data:
y
matrix([[1, 0, 0, 0, 2, 0, 0, 0],
[3, 0, 0, 0, 4, 0, 0, 0]], dtype=int16)
type(y)
numpy.matrixlib.defmatrix.matrix
n = numpy.array(y)
n[1,2] = 999
n
array([[ 1, 0, 0, 0, 2, 0, 0, 0],
[ 3, 0, 999, 0, 4, 0, 0, 0]], dtype=int16)
y
matrix([[1, 0, 0, 0, 2, 0, 0, 0],
[3, 0, 0, 0, 4, 0, 0, 0]], dtype=int16)
How to construct a sparse matrix from diagonal vectors like this:
Let's say my matrix is square with dimension N=6 and I have the following vector
vec = np.array([[1], [1,2]], dtype=object)  # ragged, so dtype=object is needed in recent NumPy
and I want to put those parts on diagonals
offset = np.array([2,3])
but vec[0] should start at Mat[0,2] and vec[1] should start at Mat[1,4]
I know about scipy.sparse.diags() but I don't think there is a way to specify just part of a diagonal where non-zero elements are present.
This is just an example to illustrate the problem. In reality I deal with very big arrays, and I don't want to waste memory on useless zeros.
Is this the matrix that you want?
In [200]: sparse.dia_matrix(([[0,0,1,0,0,0],[0,0,0,0,1,2]],[2,3]),(6,6)).A
Out[200]:
array([[0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 2],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]])
Yes, the specification includes zeros, which could be annoying in larger cases.
spdiags just wraps dia_matrix, with the option of converting the result to another format. In your example that conversion turns a sparse matrix with 7 stored elements into one with 3.
sparse.diags accepts a ragged list of values, but they still need to match the diagonals in length. And internally it converts them to the rectangular array that dia_matrix takes.
S3=sparse.diags([[1,0,0,0],[0,1,2]],[2,3],(6,6))
So if you really need to be parsimonious about the zeros you need to go the coo route.
For example:
In [363]: starts = [[0,2],[1,4]]
In [364]: data = np.concatenate(vec)
In [365]: rows=np.concatenate([range(s[0],s[0]+len(v)) for s,v in zip(starts, vec)])
In [366]: cols=np.concatenate([range(s[1],s[1]+len(v)) for s,v in zip(starts, vec)])
In [367]: sparse.coo_matrix((data,(rows,cols)),(6,6)).A
Out[367]:
array([[0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 2],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]])
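If the (start row, offset) pairs come straight from your vec and offset arrays, a small helper along these lines generalizes the same coo construction (my own hypothetical sketch; partial_diags and start_rows are made-up names, not from the original answer):
import numpy as np
from scipy import sparse

def partial_diags(pieces, offsets, start_rows, shape):
    # pieces[i] sits on diagonal offsets[i] (col - row), beginning at row start_rows[i]
    data = np.concatenate([np.asarray(p) for p in pieces])
    rows = np.concatenate([np.arange(r, r + len(p))
                           for p, r in zip(pieces, start_rows)])
    cols = rows + np.repeat(offsets, [len(p) for p in pieces])
    return sparse.coo_matrix((data, (rows, cols)), shape=shape)

# Reproduces the example above: vec[0] starts at (0,2), vec[1] at (1,4)
M = partial_diags([[1], [1, 2]], offsets=[2, 3], start_rows=[0, 1], shape=(6, 6))
print(M.toarray())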