As we know, in linear algebra, to multiply a vector by a matrix or to multiply two matrices, the number of columns of the first operand must equal the number of rows of the second.
But while working with NumPy in Python, I get a different result. Here is my code, and it works:
np.array([1,2]) * np.array([[1],[2],[3]])
So is there any difference between NumPy vector-to-matrix multiplication and linear-algebra vector-to-matrix multiplication?
Use NumPy's np.dot(a, b). Run the following code and you will get the error you expect:
np.dot(np.array([1,2]) , np.array([[1],[2],[3]]))
Because *, +, -, and / work element-wise on arrays.
From the numpy.dot documentation: if either a or b is 0-D (scalar), it is equivalent to multiply, and using numpy.multiply(a, b) or a * b is preferred.
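To make the difference concrete, here is a minimal sketch of what the two operations do with the shapes from the question:

import numpy as np

a = np.array([1, 2])           # shape (2,)
b = np.array([[1], [2], [3]])  # shape (3, 1)

# * broadcasts (2,) against (3, 1) to a common (3, 2) shape, element-wise:
print(a * b)   # [[1 2], [2 4], [3 6]]

# np.dot enforces the linear-algebra rule (inner dimensions must match),
# so the following raises a ValueError about misaligned shapes:
# np.dot(a, b)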
Related
I have two arrays: R3_mod with shape (21,21) containing many zeros, and P2 with shape (21,) containing many zeros. I am getting the inverse of R3_mod using np.linalg.pinv() and eventually multiplying it by P2, as shown below. Is there a more efficient way to invert such arrays and then multiply?
Since the arrays are too big to paste here, you can access them here: https://drive.google.com/drive/u/0/folders/1NjEiNoneMaCbmbmObEs2GCNIb08NFIy3
import numpy as np
X = np.linalg.pinv(R3_mod).dot(P2)
Assuming that the matrix R3_mod is indeed invertible, I think it's best to use np.linalg.inv instead of linalg.pinv.
inv computes the inverse of the matrix directly, whereas pinv (short for pseudo-inverse, see https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse) computes the matrix A' that minimizes |AA'-I|. If the input matrix is invertible, pinv should return the same result as inv.
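If the end goal is only the product of the inverse with P2, a common alternative, assuming R3_mod really is square and invertible as above, is np.linalg.solve, which solves the linear system without forming the inverse explicitly and is typically faster and more numerically stable. A minimal sketch with hypothetical stand-ins for the arrays from the question:

import numpy as np

R3_mod = np.eye(21) * 2.0  # hypothetical invertible (21, 21) matrix
P2 = np.ones(21)           # hypothetical (21,) vector

X = np.linalg.solve(R3_mod, P2)  # solves R3_mod @ X = P2 directly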
I'm trying to generate a kernel function for a GP using only matrix operations (no loops). Vectors were no problem, taking advantage of broadcasting:
def kernel(A, B):
    return 1 / np.exp(np.linalg.norm(A - B.T))**2
A and B are both [n,1] vectors, but with [n,m]-shaped matrices it just doesn't work. (I also tried reshaping to [1,n,m].)
I'm interested in computing an X matrix where every ij-th element is defined by Ai - Bj.
For now I'm working in NumPy, but my final objective is to implement this in TensorFlow.
Thanks in advance.
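For what it's worth, here is a minimal sketch of the broadcasting pattern that generalizes from vectors to matrices, assuming A and B are (n, m) arrays with one point per row and using a standard squared-exponential kernel (the exact kernel form in the question may differ):

import numpy as np

A = np.random.rand(5, 3)  # hypothetical (n, m) inputs, one point per row
B = np.random.rand(5, 3)

diff = A[:, None, :] - B[None, :, :]           # (n, n, m): diff[i, j] = A[i] - B[j]
K = np.exp(-np.linalg.norm(diff, axis=-1)**2)  # (n, n) kernel matrix

The same insert-a-singleton-axis-and-subtract pattern carries over to TensorFlow (e.g. via tf.expand_dims).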
I have an nxn matrix C and use inv from numpy.linalg to take the inverse to get Cinverse. My C matrix has elements of order 10**4, but my Cinverse matrix has elements of order 10**12 and higher (not sure if that's correct). When I do numpy.dot(C, Cinverse), I do not get the identity matrix. Why is this?
I have a vector x which I multiply by itself to get a matrix.
x = np.array([121.41191662, 74.22830468, 73.23156336, 75.48354975, 79.89580817])
c = np.outer(x, x)
This is a 5x5 matrix. Then I get its inverse by:
from numpy.linalg import inv
cinverse=inv(c)
Then I want to see if I can get the identity matrix back:
identity = np.dot(c, cinverse)
However, I do not get the identity matrix. cinverse has very large matrix elements, around 10**13 and higher, while c has matrix elements around 10,000.
The outer product of two vectors (be they the same or not) is not invertible. Since it is just a stack of scaled copies of the same vector, its rank is one. Rank-deficient matrices cannot be inverted.
I'm surprised that numpy is not raising an exception or at least giving a warning.
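You can see the rank deficiency directly with a quick check (a sketch using the vector from the question):

import numpy as np
x = np.array([121.41191662, 74.22830468, 73.23156336, 75.48354975, 79.89580817])
c = np.outer(x, x)               # every row is a scaled copy of x
print(np.linalg.matrix_rank(c))  # 1, not 5, so c is singular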
So here is some code that inverts a random matrix of the same magnitude (which is almost surely full-rank), and I will comment on it afterwards:
import numpy as np
x = np.random.rand(5, 5) * 10000  # makes a 5x5 matrix with elements around 10000
xinv = np.linalg.inv(x)
iden = np.dot(x, xinv)
Now the first row of your iden matrix probably looks something like this:
[ 1.00000000e+00, -2.05382445e-16, -5.61067365e-16, 1.99719718e-15, -2.12322957e-16]
Notice that the first element is exactly 1, as it should be, but the others are not exactly 0. They are, however, essentially zero and should be regarded as zero to machine precision.
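A convenient way to confirm that iden equals the identity up to floating-point tolerance is np.allclose:

print(np.allclose(iden, np.eye(5)))  # True: element-wise equal within tolerance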
I have the following line of code in MATLAB which I am trying to convert to Python numpy:
pred = traindata(:,2:257)*beta;
In Python, I have:
pred = traindata[ : , 1:257]*beta
beta is a 256 x 1 array.
In MATLAB,
size(pred) = 1389 x 1
But in Python,
pred.shape = (1389L, 256L)
So I found out that multiplying by the beta array is what produces the difference between the two results.
How do I write the original Python line, so that the size of pred is 1389 x 1, like it is in MATLAB when I multiply by my beta array?
I suspect that beta is in fact a 1D NumPy array. In NumPy, 1D arrays are neither row nor column vectors, whereas MATLAB clearly makes this distinction; they are simply 1D arrays, agnostic of any shape. If you must, you need to manually introduce a new singleton dimension into the beta vector to facilitate the multiplication. On top of this, the * operator actually performs element-wise multiplication. To perform matrix-vector or matrix-matrix multiplication, you must use NumPy's dot function.
Therefore, you must do something like this:
import numpy as np # Just in case
pred = np.dot(traindata[:, 1:257], beta[:,None])
beta[:,None] will create a 2D NumPy array where the elements of the 1D array populate the rows, effectively making a column vector (i.e. 256 x 1). However, if you have already done this to beta, then you don't need to introduce the new singleton dimension. Just use dot normally:
pred = np.dot(traindata[:, 1:257], beta)
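A quick shape check illustrates the distinction, assuming beta started life as a 1D array:

import numpy as np

beta = np.zeros(256)        # 1D: neither a row nor a column vector
print(beta.shape)           # (256,)
print(beta[:, None].shape)  # (256, 1) -- an explicit column vector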
I have two matrices to multiply. One is the weight matrix W, whose size is 900x2x2. The other is the input matrix I, whose size is 2x2.
I want to perform a summation c = WI, which should be a 900x1 matrix, but when I perform the operation it multiplies them element-wise and gives me a 900x2x2 matrix again.
Question #2 (related): So I reshaped both of them to 2D and multiplied 900x4 * 4x1, but that gives me an error saying:
ValueError: operands could not be broadcast together with shapes (900,4) (4,1)
It seems you are trying to sum-reduce the last two axes of the first array against the two axes of the second array with that matrix multiplication. We could translate that idea into NumPy code with np.tensordot, assuming arr1 and arr2 as the input arrays respectively, like so -
np.tensordot(arr1,arr2,axes=([1,2],[0,1]))
Another simple way to express this in NumPy would be with np.einsum, like so -
np.einsum('ijk,jk',arr1,arr2)
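A quick sanity check of both approaches with the shapes from the question:

import numpy as np

arr1 = np.random.rand(900, 2, 2)  # the weight array W
arr2 = np.random.rand(2, 2)       # the input matrix I

c1 = np.tensordot(arr1, arr2, axes=([1, 2], [0, 1]))
c2 = np.einsum('ijk,jk', arr1, arr2)
print(c1.shape, c2.shape)   # (900,) (900,)
print(np.allclose(c1, c2))  # True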