Dot product with sparse matrix and vector - python

I'm having a very hard time trying to program a dot product between a matrix in sparse format and a vector.
My matrix has shape 3 x 3 and is stored in the following format:
Ms=[[0, 0, 0.6153414193508929],[1, 1, 0.9884632853575251],[2, 1, 0.22943483758936845],[2, 2, 0.336180557968783]]
Here the first index is the row number, the second is the column number, and the third is the value.
The vector "b" is:
b = np.array([[0.32599637], [0.31726302], [0.67265016]])
My question is: how do I write the for-loop that iterates over the entries of Ms, multiplies each value by the element of "b" given by its column index, accumulates the products for each row, and then moves on to the next row (i.e. the definition of the dot product)?
Please ask me to clarify if anything is unclear.
Thanks in advance!

You can take advantage of the fact that if A is a matrix of shape (M, N) and b is a vector of shape (N, 1), then A·b is a vector c of shape (M, 1).
Each row x_c of c equals sum(x_A * b), where x_A is the corresponding row of A.
import numpy as np

def dot(sparse_mat, dense_vec, sparse_shape):
    assert sparse_shape[1] == dense_vec.shape[0], "Columns of matrix must be equal to rows of vector."
    output = np.zeros((sparse_shape[0], dense_vec.shape[1]))
    for (row, col, val) in sparse_mat:
        # each (row, col, val) triple contributes val * dense_vec[col] to output[row]
        output[int(row)] += dense_vec[int(col)] * val
    return output
Ms = [[0, 0, 0.6153414193508929],
[1, 1, 0.9884632853575251],
[2, 1, 0.22943483758936845],
[2, 2, 0.336180557968783]]
b = np.array([[0.32599637],
[0.31726302],
[0.67265016]])
print(dot(Ms, b, (3, 3)))
# [[0.20059907]
# [0.31360285]
# [0.2989231 ]]
We should verify the above with scipy's sparse matrices.
from scipy.sparse import csr_matrix
Ms = np.array(Ms)
sparse_M = csr_matrix((Ms[:, 2], (Ms[:, 0].astype(int), Ms[:, 1].astype(int))), (3, 3))
print(sparse_M)
# (0, 0) 0.6153414193508929
# (1, 1) 0.9884632853575251
# (2, 1) 0.22943483758936845
# (2, 2) 0.336180557968783
print(sparse_M @ b)
# [[0.20059907]
# [0.31360285]
# [0.2989231 ]]

Related

Matrix multiplication while subsetting elements from matrices and storing in a new matrix

I am attempting a numpy.matmul call using the following variables:
Matrix A of dimensions (p, t, q)
Matrix B of dimensions (r, t).
A categories vector of shape r with p distinct categories, used to take slices of B and to define which index of A to use.
The multiplications are done iteratively using the indices of each category. For each category p_i, I extract from A a submatrix of shape (t, q). Then, I multiply it with a subset of B of shape (x, t), where x is the number of rows selected by the mask categories == p_i. Finally, the matrix multiplication of (x, t) and (t, q) produces the output (x, q), which is stored at S[mask].
I have not been able to figure out a non-iterative version of this algorithm. The first snippet below shows the iterative solution. The second is an attempt at what I would like to get, where everything is calculated in a single step and would presumably be faster. However, it is incorrect because matrix A has three dimensions instead of two. Maybe there is no way to do this in NumPy with a single call; in general, I am looking for advice/ideas to try out.
Thanks!
import numpy as np
p, q, r, t = 2, 9, 512, 4
# data initialization (random)
np.random.seed(500)
S = np.random.rand(r, q)
A = np.random.randint(0, 3, size=(p, t, q))
B = np.random.rand(r, t)
categories = np.random.randint(0, p, r)
print('iterative')  # iterative
for i in range(p):
    # print(i)
    a = A[i, :, :]
    mask = categories == i
    b = B[mask]
    print(b.shape, a.shape, S[mask].shape,
          np.matmul(b, a).shape)
    S[mask] = np.matmul(b, a)
print(S.shape)
A simple way to write it down:
S = np.random.rand(r, q)
print(A[:p,:,:].shape)
result = np.matmul(B, A[:p,:,:])
# iterative assignment
i = 0
S[categories == i] = result[i, categories == i, :]
i = 1
S[categories == i] = result[i, categories == i, :]
The next snippet will produce an error during the multiplication step.
# attempt to multiply once, indexing all categories only once (not possible)
np.random.seed(500)
S = np.random.rand(r, q)
# attempt to use the categories vector
a = A[categories, :, :]
b = B[categories]
# due to the shapes of the arrays, this multiplication is not possible
print('\nsingle step (error due to shapes of the matrix a')
print(b.shape, a.shape, S[categories].shape)
S[categories] = np.matmul(b, a)
print(S.shape)
iterative
(250, 4) (4, 9) (250, 9) (250, 9)
(262, 4) (4, 9) (262, 9) (262, 9)
(512, 9)
single step (error due to shapes of the 2nd matrix a).
(512, 4) (512, 4, 9) (512, 9)
In [63]: (np.ones((512,4)) @ np.ones((512,4,9))).shape
Out[63]: (512, 512, 9)
This is because the first array is broadcast to (1, 512, 4). I think you want instead to do:
In [64]: (np.ones((512,1,4)) @ np.ones((512,4,9))).shape
Out[64]: (512, 1, 9)
Then remove the middle dimension to get a (512,9).
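Applying that shape fix to the question's variables, a hedged sketch (my reading of the intent, where each row k of S should receive B[k] @ A[categories[k]] as in the iterative loop; note B is used directly instead of b = B[categories]):
a = A[categories, :, :]                 # (512, 4, 9)
result = np.matmul(B[:, None, :], a)    # (512, 1, 4) @ (512, 4, 9) -> (512, 1, 9)
S[:] = result.squeeze(1)                # drop the middle dimension -> (512, 9)
This reproduces the iterative S row for row, since row k pairs B[k] with A[categories[k]].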
Another way:
In [72]: np.einsum('ij,ijk->ik', np.ones((512,4)), np.ones((512,4,9))).shape
Out[72]: (512, 9)
To remove the loop altogether, you can try this:
bigmask = np.arange(p)[:, np.newaxis] == categories
C = np.matmul(B, A)
res = C[np.broadcast_to(bigmask[..., np.newaxis], C.shape)].reshape(r, q)
# `res` has the same rows as the iterative `S` but in the wrong order
# so we need to reorder the rows
sort_index = np.argsort(np.broadcast_to(np.arange(r), bigmask.shape)[bigmask])
assert np.allclose(S, res[sort_index])
Though I'm not sure it's much faster than the iterative version.

Binary mask of top n-th quantile in a batch of 2D tensors, but with individual n for each tensor

I have a tensor A of shape (100, 16, 16) and tensor B of shape (100), where 100 is the batch size. I want to create a binary mask of A that has shape (100, 16, 16), where in each element (element has shape (1, 16, 16)) of the mask, the value is 1 if the element is greater than the computed quantile value, else 0. Each element in tensor B indicates the percentile value for each individual element in A, in sequence. If B is simply a scalar, I can use:
flat_A = torch.reshape(A, (100, -1))
quants = torch.quantile(flat_A, B, dim=1)
quants = torch.reshape(quants, (100, 1, 1))
mask = torch.where(A >= quants, 1, 0)
# quants will have shape (100, 1, 1)
The question is: if B is a 1D tensor of shape (100) like I said above, how can I compute the percentile value for each individual element in A? I tried the following, but the results did not look like what I expected:
>>> torch.quantile(flat_A, B, dim=1).shape
torch.Size([100, 100])
>>> torch.quantile(flat_A, B, dim=0).shape
torch.Size([100, 256])
I think the result's shape should be (100), so I can use mask = torch.where(A >= quants, 1, 0), or maybe I misunderstand it?
For more context, this question is also the extension of the scalar B value question I had previously here.
This is one way using the torch.quantile() function. Note that here I am using tensors of shape (5, 2, 2) instead of (100, 16, 16) for simplicity.
import torch
# Generate some data of shape (5, 2, 2)
A = torch.arange(5 * 2 * 2).reshape(5, 2, 2) + 1.0
B = torch.linspace(0, 1, 5) # 5 quantile values for each element in A
Af = A.reshape(A.shape[0], -1) # flattens A to a 2D tensor
quantiles = torch.quantile(Af, B, dim = 1, keepdim = True)
quants = quantiles[torch.arange(A.shape[0]), torch.arange(A.shape[0]), 0]
mask = (A >= quants[:, None, None]).type(torch.uint8)
Here the tensor quantiles is of shape torch.Size([5, 5, 1]) because it stores the thresholds for each quantile value in B for each element in A (or row in Af). Since we have 5 quantile values, we get 5 thresholds for each element in A.
For instance, quantiles[i, j, 0] holds the threshold for the B[i]-th quantile of A[j] (or Af[j]), and you essentially need the values quantiles[k, k, 0] for k in the range of the batch size (5 here).
Now, to satisfy the requirement that the thresholds pair corresponding quantiles in B with elements in A, simply index out the diagonal elements from quantiles and populate quants, which has shape torch.Size([5]).
Finally to get the mask, compare A with the corresponding thresholds for each element. Note that this uses a broadcasted elementwise comparison with the thresholds. mask has the required shape of torch.Size([5, 2, 2]).
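As a quick sanity check of the small example above (this loop is only for verification, not part of the approach), the batched mask can be compared against quantiles computed one element at a time:
expected = torch.stack([
    (A[k] >= torch.quantile(A[k].flatten(), B[k])).type(torch.uint8)
    for k in range(A.shape[0])
])
assert torch.equal(mask, expected)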

Einsum for high dimensions

Considering the 3 arrays below:
np.random.seed(0)
X = np.random.randint(10, size=(4,5))
W = np.random.randint(10, size=(3,4))
y = np.random.randint(3, size=(5,1))
I want to add each column of the matrix X into the row of W given by the corresponding entry of y. So, for example, if the first element in y is 3, I'll add the first column of X to the fourth row of W (index 3 in Python). I'll do this over and over until every column of X has been added to its specific row of W.
I could do it in different ways:
1 - using a for loop:
for i, j in enumerate(y):
    W[j] += X[:, i]
2 - using the add.at function:
np.add.at(W, y.ravel(), X.T)
3 - but I can't understand how to do it using einsum.
I was given a solution, but I really can't understand it.
N = y.max()+1
W[:N] += np.einsum('ijk,lk->il',(np.arange(N)[:,None,None] == y.ravel()),X)
Could anyone explain this structure to me?
1 - (np.arange(N)[:,None,None] == y.ravel(), X). I imagine this part refers to summing the columns of X into the specific rows of W, according to y. But where is W? And why do we have to introduce extra dimensions in this case?
2 - 'ijk,lk->il' - I didn't understand this either.
i - refers to the rows,
j - columns,
k - each element,
l - what does 'l' refer to?
If anyone can understand this and explain it to me, I would really appreciate it.
Thanks in advance.
Let's simplify the problem by dropping one dimension and using values that are easy to verify manually:
W = np.zeros(3, dtype=int)
y = np.array([0, 1, 1, 2, 2])
X = np.array([1, 2, 3, 4, 5])
Values from X get added into the vector W by looking up the index in y:
for i, j in enumerate(y):
    W[j] += X[i]
W is calculated as [1, 5, 9] (check quickly by hand).
Now, how could this code be vectorized? We can't do a simple W[y] += X, as y has duplicate values in it and the different sums would overwrite each other at indices 1 and 2.
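A tiny demonstration of that overwrite, added here for illustration with the W, y, X defined above:
W_naive = np.zeros(3, dtype=int)
W_naive[y] += X    # duplicate indices: only the last write per index survives
print(W_naive)     # [1 3 5], not the desired [1 5 9]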
What could be done is to broadcast the values into a new dimension of len(y) and then sum up over this newly created dimension.
N = W.shape[0]
select = (np.arange(N) == y[:, None]).astype(int)
This takes the index range of W ([0, 1, 2]) and, in a new dimension, sets the values to 1 where they match y, otherwise 0. select contains this array:
array([[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[0, 0, 1]])
It has len(y) == len(X) rows and len(W) columns and shows for every y/row, what index of W it contributes to.
Let's multiply X with this array, mult = select * X[:, None]:
array([[1, 0, 0],
[0, 2, 0],
[0, 3, 0],
[0, 0, 4],
[0, 0, 5]])
We have effectively spread out X into a new dimension and arranged it so that we can get it into the shape of W by summing over the newly created dimension. The sum over the rows is the vector we want to add to W:
sum_Xy = np.sum(mult, axis=0) # [1, 5, 9]
W += sum_Xy
The computation of select and mult can be combined with np.einsum:
# `select` has shape (len(y)==len(X), len(W)), or `yw`
# `X` has shape len(X)==len(y), or `y`
# we want something `len(W)`, or `w`, and to reduce the other dimension
sum_Xy = np.einsum("yw,y->w", select, X)
And that's it for the one-dimensional example. For the two-dimensional problem posed in the question it is exactly the same approach: introduce an additional dimension, broadcast the y indices, and then reduce the additional dimension with einsum.
If you internalize how every step works for the one-dimensional example, I'm sure you can work out how the code is doing it in two dimensions, as it is just a matter of getting the indices right (W rows, X columns).
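For reference, here is a sketch of the same steps for the two-dimensional case, my own expansion using the X, W, y from the question (the singleton j axis of the original one-liner is dropped):
N = y.max() + 1
# select[i, k] is 1 exactly when column k of X should be added to row i of W
select = (np.arange(N)[:, None] == y.ravel()).astype(int)   # shape (N, len(y))
# sum over k: for each W row i and each X row l, add up the selected columns of X
sum_Xy = np.einsum('ik,lk->il', select, X)                  # shape (N, X.shape[0])
W[:N] += sum_Xy
Here i indexes rows of W, k indexes columns of X (one per entry of y), and l indexes rows of X, which is what the 'l' in the original subscripts refers to.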

Numpy Rowwise Addition with a (Nx1) Matrix and a Vector with Length N

I am trying to update the weights in a neural network with this line:
self.l1weights[0] = self.l1weights[0] + self.learning_rate * l1error
And this results in a value error:
ValueError: could not broadcast input array from shape (7,7) into shape (7)
Printing the learning_rate*error and the weights returns something like this:
[[-0.00657573]
[-0.01430752]
[-0.01739463]
[-0.00038115]
[-0.01563393]
[-0.02060908]
[-0.01559269]]
[ 4.17022005e-01 7.20324493e-01 1.14374817e-04 3.02332573e-01
1.46755891e-01 9.23385948e-02 1.86260211e-01]
It is clear the weights are initialized as a vector of length 7 in this example and the error is initialized as a 7x1 matrix. I would expect addition to return a 7x1 matrix or a vector as well, but instead it generates a 7x7 matrix like this:
[[ 4.10446271e-01 7.13748760e-01 -6.46135890e-03 2.95756839e-01
1.40180157e-01 8.57628611e-02 1.79684478e-01]
[ 4.02714481e-01 7.06016970e-01 -1.41931487e-02 2.88025049e-01
1.32448367e-01 7.80310713e-02 1.71952688e-01]
[ 3.99627379e-01 7.02929868e-01 -1.72802505e-02 2.84937947e-01
1.29361266e-01 7.49439695e-02 1.68865586e-01]
[ 4.16640855e-01 7.19943343e-01 -2.66775370e-04 3.01951422e-01
1.46374741e-01 9.19574446e-02 1.85879061e-01]
[ 4.01388075e-01 7.04690564e-01 -1.55195551e-02 2.86698643e-01
1.31121961e-01 7.67046648e-02 1.70626281e-01]
[ 3.96412924e-01 6.99715412e-01 -2.04947062e-02 2.81723492e-01
1.26146810e-01 7.17295137e-02 1.65651130e-01]
[ 4.01429313e-01 7.04731801e-01 -1.54783174e-02 2.86739880e-01
1.31163199e-01 7.67459026e-02 1.70667519e-01]]
Numpy.sum also returns the same 7x7 matrix. Is there a way to solve this without explicit reshaping? Output size is variable and this is an issue specific to when the output size is one.
When adding a (7,) array (named a) to a (7, 1) array (named b), broadcasting happens and generates a (7, 7) array. If you just want element-by-element addition, keep them in the same shape.
a + b.flatten() gives (7,). flatten collapses all dimensions into one. This keeps the result as a 1D row.
a.reshape(-1, 1) + b gives (7, 1). The -1 in reshape tells numpy to infer that dimension's size from the other dimensions. This keeps the result as a column.
a = np.arange(7) # row
b = a.reshape(-1, 1) # column
print((a + b).shape) # (7, 7)
print((a + b.flatten()).shape) # (7,)
print((a.reshape(-1, 1) + b).shape) # (7, 1)
In your case, a and b would be self.l1weights[0] and self.learning_rate * l1error respectively.
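A minimal standalone sketch of the fix; the names below stand in for the question's attributes, assuming self.l1weights[0] is a (7,) vector and self.learning_rate * l1error is a (7, 1) column:
import numpy as np

weights = np.random.rand(7)           # stands in for self.l1weights[0], shape (7,)
update = np.random.rand(7, 1) * 0.01  # stands in for self.learning_rate * l1error, shape (7, 1)
weights = weights + update.flatten()  # flatten the column so the addition stays element-wise
print(weights.shape)                  # (7,)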

Updating sparse connectivity matrix in scipy

I'm working with a connectivity matrix that is a representation of a graph data structure. The NxM matrix corresponds to N edges with M vertices (it's likely to have more edges than vertices, which is why I am working with scipy's csr_matrix). The "start" point of an edge is represented by "-1" and the end point is represented by "1" in the connectivity matrix. All other values are 0, so each row only has 2 nonzero values.
I need to integrate a "subdivide" method, which will efficiently update the connectivity matrix. Currently I am transforming the connectivity matrix to a dense matrix so I can add the new rows/columns and update the old ones. I am converting to a dense matrix as I haven't found a solution to finding the column index for updating the old edge connectivity (no equivalent scipy.where) and the csr representation does not allow me to update values via indexing.
from numpy import where, array, zeros, hstack, vstack
from scipy.sparse import coo_matrix, csr_matrix
def connectivity_matrix(edges):
    m = len(edges)
    data = array([-1] * m + [1] * m)
    rows = array(list(range(m)) + list(range(m)))
    cols = array([edge[0] for edge in edges] + [edge[1] for edge in edges])
    C = coo_matrix((data, (rows, cols))).asfptype()
    return C.tocsr()

def subdivide_edges(C, edge_indices):
    C = C.todense()
    num_e = C.shape[0]  # number of edges
    num_v = C.shape[1]  # number of vertices
    for edge in edge_indices:
        num_e += 1  # increment row (edge count)
        num_v += 1  # increment column (vertex count)
        _, start = where(C[edge] == -1.0)
        _, end = where(C[edge] == 1.0)
        si = start[0]
        ei = end[0]
        # add row
        r, c = C.shape
        new_r = zeros((1, c))
        C = vstack([C, new_r])
        # add column
        r, c = C.shape
        new_c = zeros((r, 1))
        C = hstack([C, new_c])
        # edit edge start/end points
        C[edge, ei] = 0.0
        C[edge, num_v - 1] = 1.0
        # add new edge start/end points
        C[num_e - 1, ei] = 1.0
        C[num_e - 1, num_v - 1] = -1.0
    return csr_matrix(C)
edges = [(0, 1), (1, 2)] # edge connectivity
C = connectivity_matrix(edges)
C = subdivide_edges(C, [0, 1])
# new edge connectivity: [(0, 3), (1, 4), (3, 1), (4, 2)]
A sparse matrix does have a nonzero method (np.where uses np.nonzero). But look at its code - it returns coo row/cols data.
Using a sparse matrix left over from another question:
In [468]: M
Out[468]:
<5x5 sparse matrix of type '<class 'numpy.float64'>'
with 5 stored elements in COOrdinate format>
In [469]: Mc = M.tocsr()
In [470]: Mc.nonzero()
Out[470]: (array([0, 1, 2, 3, 4], dtype=int32), array([2, 0, 4, 3, 1], dtype=int32))
In [471]: Mc[1,:].nonzero()
Out[471]: (array([0]), array([0]))
In [472]: Mc[3,:].nonzero()
Out[472]: (array([0]), array([3]))
I converted to csr to do the row index.
There is also a sparse vstack.
But iterative work on sparse matrix is slow compared to dense arrays.
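For the specific lookup in the question (finding the start/end columns of a single edge row without going dense), a small sketch along those lines, assuming C stays in the csr format returned by connectivity_matrix:
def edge_endpoints(C, edge):
    row = C[edge, :]                # 1 x num_v csr slice of one edge row
    cols, vals = row.indices, row.data
    start = cols[vals == -1.0][0]   # the stored values are exactly -1.0 / 1.0, so == is safe here
    end = cols[vals == 1.0][0]
    return start, end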
Be wary of float comparisons like C[edge] == -1.0. == tests work much better with integers.
Changing values from zero to nonzero does raise a warning, but does work:
In [473]: Mc[1,1] = 23
/usr/local/lib/python3.5/dist-packages/scipy/sparse/compressed.py:774: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
SparseEfficiencyWarning)
In [474]: (Mc[1,:]==23).nonzero()
Out[474]: (array([0]), array([1]))
Changing nonzeros to zero doesn't produce the warning, but it also doesn't change the underlying sparsity (until the matrix is cleaned up). lil format is better for element by element changes.
In [478]: Ml = M.tolil()
In [479]: Ml.nonzero()
Out[479]: (array([0, 1, 2, 3, 4], dtype=int32), array([2, 0, 4, 3, 1], dtype=int32))
In [480]: Ml[1,:].nonzero()
Out[480]: (array([0], dtype=int32), array([0], dtype=int32))
In [481]: Ml[1,2]=.5
In [482]: Ml[1,:].nonzero()
Out[482]: (array([0, 0], dtype=int32), array([0, 2], dtype=int32))
In [483]: (Ml[1,:]==.5).nonzero()
Out[483]: (array([0], dtype=int32), array([2], dtype=int32))
In [486]: sparse.vstack((Ml,Ml),format='lil')
Out[486]:
<10x5 sparse matrix of type '<class 'numpy.float64'>'
with 12 stored elements in LInked List format>
sparse.vstack works by converting the inputs to coo, and joining their attributes (rows, cols, data), and making a new matrix.
I suspect that your code will work with a lil matrix without too many changes. But it probably will be slower. Sparse gets its best speed when doing things like matrix multiplication on low density matrices. It also helps when the dense equivalents are too large to fit in memory. But for iterative work and growing matrices it is slow.
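To make that last point concrete, here is a rough, untested sketch of what a lil-based subdivide_edges might look like; it mirrors the question's logic, only the container changes, and whether it is actually faster would need measuring:
from scipy.sparse import lil_matrix
import scipy.sparse as sp

def subdivide_edges_lil(C, edge_indices):
    C = C.tolil()
    for edge in edge_indices:
        cols, vals = C.rows[edge], C.data[edge]  # lil stores per-row column/value lists
        ei = cols[vals.index(1.0)]               # column of the edge's current end point
        # grow by one vertex (column) and one edge (row), staying sparse
        C = sp.hstack([C, lil_matrix((C.shape[0], 1))], format='lil')
        C = sp.vstack([C, lil_matrix((1, C.shape[1]))], format='lil')
        new_e, new_v = C.shape[0] - 1, C.shape[1] - 1
        C[edge, ei] = 0.0         # the old edge now ends at the new vertex
        C[edge, new_v] = 1.0
        C[new_e, new_v] = -1.0    # the new edge runs from the new vertex to the old end point
        C[new_e, ei] = 1.0
    return C.tocsr()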
