Multiplying an array by a designated row vector of another matrix - python

Good afternoon all, a relatively simple question here from a mechanical standpoint.
I'm currently performing PCA and have successfully written code that computes the covariance matrix, the correlation matrix, and the associated eigenspectrum.
Now I have created an array that represents the eigenvectors row-wise, and I would like to compute the transformation C*v^T, where C is the observation matrix and v^T is a single eigenvector, transposed.
Since some of these matrices are pretty big, I'd like to be able to tell Python which row of the eigenvector matrix to multiply C by. So far I have tried some of the numpy functions, but to no avail.
(For those of you wondering: I don't want to compute the matrix product with all of the eigenvectors, I only need to multiply by a small subset of them, the ones associated with the largest eigenvalues.)
Thanks!

To "slice" a vector of row n out of 2-dimensional array A, you use a syntax like A[n]. If it's slicing columns you wanted instead, the syntax is A[:,n].
For transformations with numpy arrays and vectors, the syntax is with matrix multiplication operator:
>>> A = np.array([[0, -1], [1, 0]])
>>> vs = np.array([[1, 2], [3, 4]])
>>> A @ vs[0]  # this is a rotation of the first row of vs by A
array([-2,  1])
>>> A @ vs[1]  # this is a rotation of the second row of vs by A
array([-4,  3])
Note: if you're on an older Python version (< 3.5), the @ operator is not available yet, and you'll have to use the function np.dot(A, vs[0]) instead.
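For the PCA use case in the question, the same row slicing and @ pattern lets you project the observation matrix onto just the eigenvectors you need. A minimal sketch, assuming C holds observations as rows (samples x features) and V holds eigenvectors as rows sorted by descending eigenvalue (the variable names are illustrative):
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((100, 5))                       # hypothetical observation matrix
evals, evecs = np.linalg.eigh(np.cov(C, rowvar=False))  # eigh sorts eigenvalues in ascending order
V = evecs.T[::-1]                                       # eigenvectors as rows, largest eigenvalue first

k = 0
scores_k = C @ V[k]        # projection onto the k-th eigenvector, shape (100,)
scores_top2 = C @ V[:2].T  # projection onto the top 2 eigenvectors at once, shape (100, 2)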

Related

Singular Value Decomposition (SVD) outputs a 1-D singular value array, instead of 2-D diagonal matrix [Python]

While posting a question on a similar subject, I ran into another, more important question.
When I apply SVD to a matrix 'A' (code below) the output I get is the expected 2-D eigenvector matrices ('U' and 'V') and an unexpected 1-D singular value array 'S'.
U,S,V=np.linalg.svd(A)
For context: The reason for it being unexpected is that Singular Value Decomposition should result in the product of three matrices. The middle matrix (in this case 1-D array) should be a diagonal matrix, holding non-negative singular values in decreasing order of magnitude.
Why does Python 'transform' the matrix into an array? Is there a way around it?
Thanks!
This is made quite clear in the docs, where you'll see that:
s : (…, K) array: Vector(s) with the singular values, within each vector sorted in descending order. The first a.ndim - 2 dimensions have the same size as those of the input a.
So basically S is just the diagonal of the matrix you mention, i.e the singular values. You can construct a diagonal matrix from it with:
np.diag(S)
Use np.diag (https://docs.scipy.org/doc/numpy/reference/generated/numpy.diag.html)
>>> np.diag([0, 4, 8])
array([[0, 0, 0],
       [0, 4, 0],
       [0, 0, 8]])
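As a quick sanity check (a sketch, using a small square A so the shapes line up directly), you can rebuild the original matrix from the three factors by turning S back into the diagonal middle matrix:
import numpy as np

A = np.array([[3., 1.], [1., 3.]])
U, S, Vt = np.linalg.svd(A)

# S is a 1-D array of singular values; np.diag makes it the diagonal middle factor.
A_reconstructed = U @ np.diag(S) @ Vt
print(np.allclose(A, A_reconstructed))  # True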

How numpy dot works with broadcasting

I have two numpy arrays. When I used the numpy dot function with them, I got two very different results depending on the argument order, and I couldn't understand how the dot function works together with broadcasting to produce these outputs.
Can someone explain the difference between these two?
A = np.array([[2,4,6]])
Y = np.array([[1,0,1]])
np.dot(A, Y.T) = array([[8]])
np.dot(Y.T, A) = array([[2, 4, 6],
                        [0, 0, 0],
                        [2, 4, 6]])
The dot function performs matrix multiplication here; there is no broadcasting involved.
np.dot(A, Y.T) is the same as A @ Y.T in Python 3.5+.
Matrix multiplication is not commutative, so the order of the arguments matters.
In the first case, A is a 1x3 row vector and Y.T is a 3x1 column vector, so the product is a single value (a 1x1 matrix, the inner product).
In the second case, Y.T is a 3x1 column vector and A is a 1x3 row vector, so the product is a 3x3 matrix (the outer product).
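A quick shape check makes the difference concrete (a sketch restating the example above):
import numpy as np

A = np.array([[2, 4, 6]])   # shape (1, 3): a row vector
Y = np.array([[1, 0, 1]])   # shape (1, 3)

inner = np.dot(A, Y.T)      # (1, 3) @ (3, 1) -> (1, 1), the inner product
outer = np.dot(Y.T, A)      # (3, 1) @ (1, 3) -> (3, 3), the outer product
print(inner.shape, outer.shape)  # (1, 1) (3, 3)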

Python command np.sum(x, axis=0) and softmax function

I have the following problem: I want to compute the softmax function in Python and get an unexpected result. The code is the following:
import numpy as np
def softmax(x):
"""Compute softmax values for each sets of scores in x."""
return np.exp(x) / np.sum(np.exp(x), axis=0)
It works perfectly, but I don't know why. It works on matrices as follows: if I pass in a 2x2 matrix A, the output is yet another 2x2 matrix. Why is that? Shouldn't it return a differently sized array, since every element x = A[0,0] of the matrix yields two output values (namely exp(x)/(exp(A[0,0])+exp(A[1,0])) and exp(x)/(exp(A[0,1])+exp(A[1,1]))) because of the axis=0 argument? That would lead to an 8-element output array, but the actual result only has 4 elements.
Also, how exactly does the axis=0 argument work? If I type A = np.array([2, 4]), then I would expect np.sum(A, axis=0) to return array([2, 4]), since the columns are summed up, but the result is 6. And np.sum(A, axis=1) strangely raises "'axis' entry is out of bounds", although I would expect array([6]), since the rows are summed up. Maybe my two problems are linked.
Any help will be appreciated!
Thanks,
Leon
I will jump straight into the final problem:
matrix_22 / vector_2
Because that expression does not make sense as pure matrix algebra, numpy applies a broadcasting rule. Just as
matrix_22 * 5
multiplies each element of the matrix by 5, if we consider matrix_22 as a stack of row vectors, then matrix_22 / vector_2 divides each of those rows element-wise by vector_2.
You can easily check that behaviour by executing the following:
np.array([[14, 28], [70, 56]]) / np.array([2, 7])
Notation: matrix_22 is "some variable which contains a numpy array of shape 2x2, so it is a 2x2 matrix", and vector_2 is a numpy array of two elements.
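Tying this back to the softmax question (a sketch using the softmax function defined above): np.sum(np.exp(x), axis=0) collapses the rows and returns one sum per column, and the division then broadcasts that row of column sums across every row. Each column is therefore normalized by its own sum, and the output keeps the input's 2x2 shape:
import numpy as np

def softmax(x):
    """Compute softmax values for each set of scores in x (column-wise)."""
    return np.exp(x) / np.sum(np.exp(x), axis=0)

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

col_sums = np.sum(np.exp(A), axis=0)  # shape (2,): one sum per column
out = softmax(A)                      # shape (2, 2)
print(col_sums.shape, out.shape)      # (2,) (2, 2)
print(out.sum(axis=0))                # each column sums to 1: [1. 1.]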

Python numpy array indexing. How is this working?

I came across this python code (which works) and to me it seems amazing. However, I am unable to figure out what it is doing. To replicate it, I wrote some test code:
import numpy as np
# Create a random array which represent the 6 unique coeff.
# of a symmetric 3x3 matrix
x = np.random.rand(10, 10, 6)
So, I have 100 symmetric 3x3 matrices and I am only storing the unique components. Now, I want to generate the full 3x3 matrix and this is where the magic happens.
indices = np.array([[0, 1, 3],
                    [1, 2, 4],
                    [3, 4, 5]])
I see what this is doing. This is how the 0-5 index components should be arranged in the 3x3 matrix to have a symmetric matrix.
mat = x[..., indices]
This line has me lost. I see that it is working on the last dimension of the x array, but it is not at all clear to me how the rearrangement and reshaping are done; yet it does indeed return an array of shape (10, 10, 3, 3). I am amazed and confused!
From the advanced indexing documentation - bi rico's link.
Example
Suppose x.shape is (10, 20, 30) and ind is a (2, 3, 4)-shaped indexing intp array, then result = x[..., ind, :] has shape (10, 2, 3, 4, 30) because the (20,)-shaped subspace has been replaced with a (2, 3, 4)-shaped broadcasted indexing subspace. If we let i, j, k loop over the (2, 3, 4)-shaped subspace then result[..., i, j, k, :] = x[..., ind[i, j, k], :]. This example produces the same result as x.take(ind, axis=-2).
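Applied to the arrays in the question (a sketch restating the code above), indexing the last axis of the (10, 10, 6) array with the (3, 3) indices array replaces that length-6 axis with a 3x3 block, so mat[i, j] is the symmetric matrix built from the 6 unique coefficients stored in x[i, j]:
import numpy as np

x = np.random.rand(10, 10, 6)
indices = np.array([[0, 1, 3],
                    [1, 2, 4],
                    [3, 4, 5]])

mat = x[..., indices]      # shape (10, 10, 3, 3)
print(mat.shape)

# mat[i, j, a, b] == x[i, j, indices[a, b]], and indices is symmetric, hence:
print(np.allclose(mat, mat.transpose(0, 1, 3, 2)))  # True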

Numpy- weight and sum rows of a matrix

Using Python & Numpy, I would like to:
- Consider each row of an (n columns x m rows) matrix as a vector
- Weight each row (scalar multiplication on each component of the vector)
- Add each row to create a final vector (vector addition)
The weights are given in a regular numpy array, n x 1, so that each vector m in the matrix should be multiplied by weight n.
Here's what I've got (with test data; the actual matrix is huge), which is perhaps very un-Numpy and un-Pythonic. Can anyone do better? Thanks!
import numpy
# test data
mvec1 = numpy.array([1,2,3])
mvec2 = numpy.array([4,5,6])
start_matrix = numpy.matrix([mvec1,mvec2])
weights = numpy.array([0.5,-1])
#computation
wmatrix = [ weights[n]*start_matrix[n] for n in range(len(weights)) ]
vector_answer = [0,0,0]
for x in wmatrix: vector_answer+=x
Even though a 'technically' correct answer has already been given, I'll give my straightforward answer:
from numpy import array, dot
dot(array([0.5, -1]), array([[1, 2, 3], [4, 5, 6]]))
# array([-3.5, -4. , -4.5])
This one is much more in the spirit of linear algebra (and of the three bulleted requirements at the top of the question).
Update:
And this solution is really fast, not marginally, but easily some 10-15x faster than the already proposed one!
It will be more convenient to use a two-dimensional numpy.array than a numpy.matrix in this case.
start_matrix = numpy.array([[1,2,3],[4,5,6]])
weights = numpy.array([0.5,-1])
final_vector = (start_matrix.T * weights).sum(axis=1)
# array([-3.5, -4. , -4.5])
The multiplication operator * does the right thing here due to NumPy's broadcasting rules.
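On Python 3.5+ the same weighted sum can also be written with the @ operator (a sketch using the arrays from this answer), which reads closest to the linear-algebra formulation w^T M:
import numpy as np

start_matrix = np.array([[1, 2, 3],
                         [4, 5, 6]])
weights = np.array([0.5, -1])

# Each row of start_matrix is scaled by its weight, then the rows are summed.
final_vector = weights @ start_matrix
print(final_vector)  # [-3.5 -4.  -4.5]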
