I have two numpy arrays. When I used numpy's dot function on them I got different results depending on the argument order. I couldn't understand how the dot function worked, together with broadcasting, to produce these outputs.
Can someone explain the difference between these two?
A = np.array([[2,4,6]])
Y = np.array([[1,0,1]])
np.dot(A, Y.T) = array([[8]])
np.dot(Y.T, A) = array([[2, 4, 6],
                        [0, 0, 0],
                        [2, 4, 6]])
The dot function performs matrix multiplication here; there is no broadcasting involved.
np.dot(A, Y.T) is the same as A @ Y.T in Python 3.5+.
Matrix multiplication is not commutative: the order of the arguments matters.
In the first usage, A is a 1x3 row vector and Y.T is a 3x1 column vector, so the product is a single value (the inner product).
In the second usage, Y.T is a 3x1 column vector and A is a 1x3 row vector, so the product is a 3x3 matrix (the outer product).
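A quick way to see this is to check the shapes; a minimal sketch, not part of the original answer:
import numpy as np

A = np.array([[2, 4, 6]])   # shape (1, 3): a row vector
Y = np.array([[1, 0, 1]])   # shape (1, 3), so Y.T has shape (3, 1)

inner = np.dot(A, Y.T)      # (1, 3) @ (3, 1) -> (1, 1): the inner product
outer = np.dot(Y.T, A)      # (3, 1) @ (1, 3) -> (3, 3): the outer product

print(inner.shape, outer.shape)  # (1, 1) (3, 3)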
Related
I was trying to figure out how to calculate the Frobenius norm of a matrix in numpy. This way I can get the 2-norm of each row in the matrix x below:
My question is about the ord parameter in numpy's linalg.norm function and how the relevant part of the numpy documentation describes which norm of a matrix one can calculate. I was able to get the Frobenius norm by setting ord=2; however, the documentation says that only setting ord=None gives the Frobenius norm.
Here is my example:
x = np.array([[0, 3, 4],
              [1, 6, 4]])
I found that I can get the Frobenius norm with the following line of code:
x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)
>>> x_norm
array([[ 5.        ],
       [ 7.28010989]])
My question is whether the documentation here would be considered less helpful than it could be, and whether that warrants a request to change the description of ord=2 in the aforementioned table.
You're not taking a matrix norm. Since you've passed axis=1, you're taking vector norms, and you should be looking at the vector norm column instead of the matrix norm column.
For vector norms, ord=None and ord=2 both produce a 2-norm.
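To see the difference, compare the row-wise vector norms against the two matrix norms; a small sketch, not part of the original answer:
import numpy as np

x = np.array([[0, 3, 4],
              [1, 6, 4]])

# axis=1 -> vector norms, one per row; ord=None and ord=2 agree here
row_norms = np.linalg.norm(x, ord=2, axis=1, keepdims=True)  # [[5.], [7.2801...]]

# no axis -> matrix norms, where the two ord values differ
fro = np.linalg.norm(x)           # ord=None: Frobenius norm, sqrt(78) ≈ 8.83
spec = np.linalg.norm(x, ord=2)   # ord=2: largest singular value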
[python 2.7 and numpy v1.11.1] I am looking at matrix condition numbers and am trying to compute the condition number for a matrix without using the function np.linalg.cond().
Based on numpy's documentation, the definition of a matrix's condition number is, "the norm of x times the norm of the inverse of x."
||X|| * ||X^-1||
for the matrix
a = np.matrix([[1, 1, 1],
               [2, 2, 1],
               [3, 3, 0]])
print np.linalg.cond(a)
1.84814479698e+16
print np.linalg.norm(a) * np.linalg.norm(np.linalg.inv(a))
2.027453660713377e+17
Where is the mistake in my computation?
Thanks!
You are computing the condition number using the Frobenius norm definition. That norm is an optional (non-default) parameter of the condition computation:
print(np.linalg.norm(a)*np.linalg.norm(np.linalg.inv(a)))
print(np.linalg.cond(a, p='fro'))
Produces
2.02745366071e+17
2.02745366071e+17
norm uses the Frobenius norm for matrices by default, whereas cond uses the 2-norm (the outputs below are from a nonsingular example matrix, not the singular a above):
In [347]: np.linalg.cond(a)
Out[347]: 38.198730775206172

In [348]: np.linalg.norm(a, 2) * np.linalg.norm(np.linalg.inv(a), 2)
Out[348]: 38.198730775206243

In [349]: np.linalg.norm(a) * np.linalg.norm(np.linalg.inv(a))
Out[349]: 39.29814570824248
NumPy's cond() is currently buggy. There will come a time when we fix it, but for now, if you are doing this for linear equation solutions, you can use SciPy's linalg.solve: it raises an error for exact singularity, emits a warning if the reciprocal condition number is below a threshold, and stays silent if the array is well-conditioned.
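For instance, a minimal sketch of the singular case (using the matrix a from the question; behavior as described in recent SciPy versions):
import numpy as np
from scipy import linalg

a = np.array([[1, 1, 1],
              [2, 2, 1],
              [3, 3, 0]])   # exactly singular: the first two columns are equal
b = np.ones(3)

try:
    x = linalg.solve(a, b)
except linalg.LinAlgError as exc:   # raised because a is singular
    print('singular:', exc)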
In python with numpy, say I have two matrices:
S, a sparse x*x matrix
M, a dense x*y matrix
Now I want to do np.dot(M, M.T) which will return a dense x*x matrix S_.
However, I only care about the cells that are nonzero in S, which means that it would not make a difference for my application if I did
S_ = S*S_
Obviously, that would be a waste of operations, as I would like to leave out the irrelevant cells given in S altogether. Remember that in this matrix multiplication
S_[i,j] = np.sum(M[i,:] * M[j,:])
(row i of M times row j of M, since the second factor is M.T). So I want to do this operation only for i, j such that S[i,j] = True.
Is this supported somehow by numpy implementations that run in C so that I do not need to implement it with python loops?
EDIT 1 [solved]: I still have this problem; actually, M is now also sparse.
Now, given rows and cols of S, I implemented it like this:
data = np.array([M[rows[i],:].dot(M[cols[i],:].T).data[0] for i in xrange(len(rows))])
S_ = csr((data, (rows, cols)))
... but it is still slow. Any new ideas?
EDIT 2: jdehesa has given a great solution, but I would like to save more memory.
The solution was to do the following:
data = M[rows,:].multiply(M[cols,:]).sum(axis=1)
and then build a new sparse matrix from rows, cols and data.
However, when running the above line, scipy builds a (contiguous) numpy array with as many elements as the nnz of the first submatrix plus the nnz of the second submatrix, which can lead to a MemoryError in my case.
In order to save more memory, I would like to multiply each row with its respective 'partner' column iteratively, then sum over and discard each result vector. Implementing this in plain Python, I am basically back to the extremely slow version.
Is there a fast way of solving this problem?
Here is how you can do it with NumPy/SciPy, both for dense and sparse M matrices:
import numpy as np
import scipy.sparse as sp
# Coordinates where S is True
S = np.array([[0, 1],
              [3, 6],
              [3, 4],
              [9, 1],
              [4, 7]])
# Dense M matrix
# Random big matrix
M = np.random.random(size=(1000, 2000))
# Take relevant rows and compute values
values = np.sum(M[S[:, 0]] * M[S[:, 1]], axis=1)
# Make result matrix from values
result = np.zeros((len(M), len(M)), dtype=values.dtype)
result[S[:, 0], S[:, 1]] = values
# Sparse M matrix
# Construct sparse M as COO matrix or any other way
M = sp.coo_matrix(([10, 20, 30, 40, 50],   # Data
                   ([0, 1, 3, 4, 6],        # Rows
                    [4, 4, 5, 5, 8])),      # Columns
                  shape=(1000, 2000))
# Convert to CSR for fast row slicing
M_csr = M.tocsr()
# Take relevant rows and compute values
values = M_csr[S[:, 0]].multiply(M_csr[S[:, 1]]).sum(axis=1)
values = np.squeeze(np.asarray(values))
# Construct COO sparse matrix from values
result = sp.coo_matrix((values, (S[:, 0], S[:, 1])), shape=(M.shape[0], M.shape[0]))
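To address the memory concern from EDIT 2, one option (a sketch of my own, not part of the original answer) is to compute the values in fixed-size chunks of index pairs, so that only chunk_size row slices are materialized at a time; chunk_size is an arbitrary illustrative parameter. Continuing from the sparse example above:
chunk_size = 2   # tune to the available memory
parts = []
for start in range(0, len(S), chunk_size):
    idx = S[start:start + chunk_size]
    # Only chunk_size rows of each factor exist in memory at once
    part = M_csr[idx[:, 0]].multiply(M_csr[idx[:, 1]]).sum(axis=1)
    parts.append(np.squeeze(np.asarray(part), axis=-1))
values = np.concatenate(parts)
result = sp.coo_matrix((values, (S[:, 0], S[:, 1])), shape=(M.shape[0], M.shape[0]))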
Good afternoon all, relatively simple question here from a mechanical standpoint.
I'm currently performing PCA and have successfully written code that computes the covariance matrix and correlation matrix, and the associated eigenspectrum.
Now, I have created an array that represents the eigenvectors row-wise, and I would like to compute the transformation C * v^T, where C is the observation matrix and v^T is a transposed eigenvector.
Now, since some of these matrices are pretty big, I'd like to be able to tell Python which row of the eigenvector matrix to multiply C by. So far I have tried some of the numpy functions, but to no avail.
(For those of you wondering: I don't want to compute the matrix product of all the eigenvectors; I only need to multiply by a small subset of them, the ones associated with the largest eigenvalues.)
Thanks!
To "slice" a vector of row n out of 2-dimensional array A, you use a syntax like A[n]. If it's slicing columns you wanted instead, the syntax is A[:,n].
For transformations with numpy arrays and vectors, the syntax is with matrix multiplication operator:
>>> A = np.array([[0, -1], [1, 0]])
>>> vs = np.array([[1, 2], [3, 4]])
>>> A @ vs[0]  # this is a rotation of the first row of vs by A
array([-2,  1])
>>> A @ vs[1]  # this is a rotation of the second row of vs by A
array([-4,  3])
Note: if you're on an older Python version (< 3.5), you might not have @ available yet. Then you'll have to use the function np.dot(array, vector) instead of the operator.
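Putting it together for the PCA case, a minimal sketch (the names C, V, and k are illustrative, not from the question) that multiplies C only by the eigenvectors with the largest eigenvalues:
import numpy as np

C = np.random.random((100, 5))            # observation matrix (illustrative)
cov = np.cov(C, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns eigenvalues in ascending order
V = eigvecs.T                             # eigenvectors stored row-wise, as in the question

k = 2                                     # keep only the k largest eigenvalues
projected = C @ V[-k:].T                  # shape (100, k)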
I came across this Python code (which works) and to me it seems amazing. However, I am unable to figure out what it is doing. To replicate it, I wrote a small test:
import numpy as np
# Create a random array which represent the 6 unique coeff.
# of a symmetric 3x3 matrix
x = np.random.rand(10, 10, 6)
So, I have 100 symmetric 3x3 matrices and I am only storing the unique components. Now, I want to generate the full 3x3 matrix and this is where the magic happens.
indices = np.array([[0, 1, 3],
                    [1, 2, 4],
                    [3, 4, 5]])
I see what this is doing. This is how the 0-5 index components should be arranged in the 3x3 matrix to have a symmetric matrix.
mat = x[..., indices]
This line has me lost. It is working on the last dimension of the x array, but it is not at all clear to me how the rearrangement and reshaping are done; yet this indeed returns an array of shape (10, 10, 3, 3). I am amazed and confused!
From the advanced indexing documentation - bi rico's link.
Example
Suppose x.shape is (10,20,30) and ind is a (2,3,4)-shaped indexing intp array, then result = x[...,ind,:] has shape (10,2,3,4,30) because the (20,)-shaped subspace has been replaced with a (2,3,4)-shaped broadcasted indexing subspace. If we let i, j, k loop over the (2,3,4)-shaped subspace then result[...,i,j,k,:] = x[...,ind[i,j,k],:]. This example produces the same result as x.take(ind, axis=-2).
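Applied to the question's setup, a small check (the assertion is mine, not from the documentation):
import numpy as np

x = np.random.rand(10, 10, 6)        # 100 sets of the 6 unique coefficients
indices = np.array([[0, 1, 3],
                    [1, 2, 4],
                    [3, 4, 5]])

# The (6,)-shaped last axis of x is replaced by the (3, 3) shape of indices:
# mat[i, j, r, c] = x[i, j, indices[r, c]]
mat = x[..., indices]                # shape (10, 10, 3, 3)

# Because indices itself is symmetric, every mat[i, j] is symmetric:
assert np.allclose(mat, mat.swapaxes(-1, -2))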