Multiplying 3D matrix with 2D matrix - python

I have two matrices to multiply. One is the weight matrix W, whose size is 900x2x2. The other is the input matrix I, whose size is 2x2.
I want to compute c = WI as a sum-reduction, which should give a 900x1 matrix, but when I perform the operation it multiplies them element-wise and gives me a 900x2x2 matrix again.
Question #2 (related): So I made both of them 2D and multiplied 900x4 * 4x1, but that gives me an error saying:
ValueError: operands could not be broadcast together with shapes (900,4) (4,1)

It seems you are trying to sum-reduce the last two axes of the first array against the two axes of the second array with that matrix multiplication. We could translate that idea into NumPy code with np.tensordot, assuming arr1 and arr2 are the input arrays respectively, like so -
np.tensordot(arr1,arr2,axes=([1,2],[0,1]))
Another, simpler way to put it into NumPy code would be with np.einsum, like so -
np.einsum('ijk,jk',arr1,arr2)
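For instance, a quick sanity check (a minimal sketch with random data standing in for W and I) shows both forms give the expected length-900 result and agree with an explicit broadcast-and-sum:
import numpy as np

arr1 = np.random.rand(900, 2, 2)   # plays the role of W
arr2 = np.random.rand(2, 2)        # plays the role of I

c1 = np.tensordot(arr1, arr2, axes=([1, 2], [0, 1]))
c2 = np.einsum('ijk,jk', arr1, arr2)
c3 = (arr1 * arr2).sum(axis=(1, 2))          # broadcast multiply, then sum-reduce

print(c1.shape)                              # (900,) -- use c1[:, None] if you need 900x1
print(np.allclose(c1, c2), np.allclose(c1, c3))
As for the second attempt: the error appears because * is element-wise, and shapes (900,4) and (4,1) cannot broadcast against each other; an actual matrix product such as np.dot(arr1.reshape(900, 4), arr2.reshape(4, 1)) would give the 900x1 result there.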


Dynamically broadcast a numpy array

I currently have a 1D numpy array, epsilons, that I need to multiply element-wise with an array x. However, the dimensionality of x is dynamic and changes with each iteration of the following for loop:
for x in grads:
    x = x * epsilons
    print(x)
epsilons always has the shape (M,). However, for the first iteration, x takes the shape (M,4,2) while it takes the shape (M,4) for the second iteration (the shape of x changes as the code iterates over grads). Is there a way I can automatically broadcast epsilons to the shape of x so that I can perform this element-wise multiplication for any shape of x?
You can just reshape epsilons to a compatible shape. NumPy then broadcasts it automatically (as an explicit broadcast_to call would): after the reshape both operands have the same number of dimensions, and each dimension either matches or is 1.
Thanks to @hpaulj for the improved solution.
# Reshape epsilons so that the vector values lie along the first dimension (the least contiguous one)
# The added length-1 axes broadcast automatically, so the result has the same shape as x
reshapedEpsilons = epsilons.reshape((M,) + (1,)*(x.ndim - 1))
# Actual element-wise multiplication
x *= reshapedEpsilons
PS: note that a = a * b creates a new array and is less efficient than a *= b, which modifies the values in-place.
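Put together inside the original loop (a minimal sketch; grads here is just made-up data with a leading dimension of M), the reshape adapts to however many dimensions x has:
import numpy as np

M = 3
epsilons = np.arange(1, M + 1, dtype=float)       # shape (M,)
grads = [np.ones((M, 4, 2)), np.ones((M, 4))]     # shapes change per iteration

for x in grads:
    # append as many length-1 axes as needed so epsilons broadcasts against x
    x *= epsilons.reshape((M,) + (1,) * (x.ndim - 1))
    print(x.shape)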

Why is numpy.dot() throwing a ValueError: shapes not aligned?

I want to write a program that finds the eigenvectors and eigenvalues of a Hermitian matrix by iterating over a guess (Rayleigh quotient iteration). I have a test matrix whose eigenvectors and eigenvalues I know; however, when I run my code I receive
ValueError: shapes (3,1) and (3,1) not aligned: 1 (dim 1) != 3 (dim 0)
By splitting each numerator and denominator into separate variables I've traced the problem to the line:
nm=np.dot(np.conj(b1),np.dot(A,b1))
My code:
import numpy as np
import numpy.linalg as npl

def eigen(A, mu, b, err):
    mu0 = mu
    mu1 = mu + 10*err
    while mu1 - mu > err:
        n = np.dot((npl.inv(A - mu*np.identity(np.shape(A)[0]))), b)
        d = npl.norm(np.dot((npl.inv(A - (mu*np.identity(np.shape(A)[0])))), b))
        b1 = n/d
        b = b1
        nm = np.dot(np.conj(b1), np.dot(A, b1))
        dm = np.dot(np.conj(b1), b1)
        mu1 = nm/dm
        mu = mu1
    return (mu, b)

A = np.array([[1,2,3],[1,2,1],[3,2,1]])
mu = 4
b = np.array([[1],[2],[1]])
err = 0.1
eigen(A, mu, b, err)
I believe the dimensions of the variables being passed into the np.dot() function are wrong, but I cannot find where. Everything is split up and renamed as part of my debugging; I know it looks very difficult to read.
The mathematical issue is with matrix multiplication of shapes (3,1) and (3,1). That's essentially two vectors. Maybe you wanted to use the transposed matrix to do this?
nm = np.dot(np.conj(b1).T, np.dot(A, b1))
dm = np.dot(np.conj(b1).T, b1)
Have a look at the documentation of np.dot to see what arguments are acceptable.
If both a and b are 1-D arrays, it is inner product of vectors (...)
If both a and b are 2-D arrays, it is matrix multiplication (...)
The variables you're using are of shape (3, 1) and therefore 2-D arrays.
Alternatively, instead of transposing the first matrix, you could use flattened views of the arrays. That way each has shape (3,), is a 1-D array, and you get the inner product:
nm = np.dot(np.conj(b1).ravel(), np.dot(A, b1).ravel())
dm = np.dot(np.conj(b1).ravel(), b1.ravel())
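As a quick check (just a sketch using the test matrix from the question and an arbitrary normalised column vector), both corrected forms evaluate without raising:
import numpy as np

A = np.array([[1, 2, 3], [1, 2, 1], [3, 2, 1]])
b1 = np.array([[1], [2], [1]]) / np.sqrt(6)        # a normalised (3,1) column vector

nm = np.dot(np.conj(b1).T, np.dot(A, b1))          # shape (1, 1)
dm = np.dot(np.conj(b1).T, b1)                     # shape (1, 1)
print((nm / dm).shape)

nm = np.dot(np.conj(b1).ravel(), np.dot(A, b1).ravel())   # plain scalar
dm = np.dot(np.conj(b1).ravel(), b1.ravel())
print(nm / dm)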

Broadcasting - 3D field of coefficients to 3D field of matrices given matrix basis

I have a (large) 4D array, consisting of the 5 coefficients in a given basis for a matrix field. Given the 5 basis matrices, I want to efficiently calculate the matrix field.
The coefficient field c[x,y,z,i] being the value of the i-th coefficient at position x,y,z
And the matrix field M[x,y,z,a,b] being the (3,3) matrix at position x,y,z
And the basis matrices T_1,...T_5, being the (3,3) basis matrices
I could loop over each position in space:
M[x,y,z,:,:] = T_1[:,:]*c[x,y,z,0] + T_2[:,:]*c[x,y,z,1]...T_5[:,:]*c[x,y,z,4]
But this is very inefficient. My attempts at using np.multiply and np.sum result in broadcasting errors due to the ambiguity of the desired product being a field of 3x3 matrices.
Keep in mind that to numpy, these 4d and 5d arrays are just that, not 3d arrays containing 2d matrices, etc.
Let's try to write your calculation in a way that clarifies dimensions:
M[x,y,z] = T_1*c[x,y,z,0] + T_2*c[x,y,z,1]...T_5*c[x,y,z,4]
M[x,y,z,:,:] = T_1[:,:]*c[x,y,z,0] + T_2[:,:]*c[x,y,z,1]...T_5[:,:]*c[x,y,z,4]
c[x,y,z,i] is a coefficient, right? So M is a weighted sum of the T_n arrays?
One way of expressing this is:
T = np.stack([T_1, T_2, ...T_5], axis=0) # 3d (nab)
M = np.einsum('nab,xyzn->xyzab', T, c)
We could alternatively stack T_i on a new last axis
T = np.stack([T_1, T_2 ...T_5], axis=2) # (abn)
M = np.einsum('abn,xyzn->xyzab', T, c)
or as broadcasted multiplication plus sum:
M = (T[None,None,None,:,:,:] * c[:,:,:,None,None,:]).sum(axis=-1)
I'm writing this code without testing, so there may be errors, but I think the basic outline is right.
It could also be written as a dot, if I can put the n dimension last in one argument, and 2nd to the last in the other. Or with tensordot. But there's less control over broadcasting of the other dimensions.
For test calculations you could also reshape these arrays so that x,y,z are rolled into one axis and a,b into another, e.g.
M[xyz,:] = T_n[ab]*c[xyz,n] # etc
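As a rough self-test of that outline (a sketch with random data; the grid sizes and names are made up), the einsum result matches an explicit weighted sum over the basis index:
import numpy as np

nx, ny, nz, n = 4, 5, 6, 5
c = np.random.rand(nx, ny, nz, n)          # coefficient field c[x,y,z,i]
T = np.random.rand(n, 3, 3)                # the five basis matrices stacked on axis 0 (nab)

M = np.einsum('nab,xyzn->xyzab', T, c)     # matrix field, shape (nx, ny, nz, 3, 3)

# reference: explicit weighted sum over the basis index
M_ref = sum(c[..., i, None, None] * T[i] for i in range(n))
print(M.shape, np.allclose(M, M_ref))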

Python: Functions of arrays that return arrays of the same shape

Note: I'm using numpy
import numpy as np
Given 4 arrays of the same (but arbitrary) shape, I am trying to write a function that forms 2x2 matrices from each corresponding element of the arrays, finds the eigenvalues, and returns two arrays of the same shape as the original four, with its elements being eigenvalues (i.e. the resulting arrays would have the same shape as the input, with array1 holding all the first eigenvalues and array2 holding all the second eigenvalues).
I tried doing the following, but unsurprisingly, it gives me an error that says the array is not square.
temp = np.linalg.eig([[m1, m2],[m3, m4]])[0]
I suppose I can make an empty temp variable of the same shape,
temp = np.zeros_like(m1)
and go over each element of the original arrays and repeat the process. My problem is that I want this generalised for arrays of any arbitrary shape (need not be one dimensional). I would guess that finding the shape of the arrays and designing loops to go over each element would not be a very good way of doing it. How do I do this efficiently?
Construct a 2x2x... array:
temp = np.array([[m1, m2], [m3, m4]])
Move the first two dimensions to the end for a ...x2x2 array:
for _ in range(2):
    temp = np.rollaxis(temp, 0, temp.ndim)
Call np.linalg.eigvals (which broadcasts) for a ...x2 array of eigenvalues:
eigvals = np.linalg.eigvals(temp)
And split this into an array of first eigenvalues and an array of second eigenvalues:
eigvals1, eigvals2 = eigvals[..., 0], eigvals[..., 1]
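For example (a minimal sketch; the four input arrays here are random and share an arbitrary shape):
import numpy as np

shape = (3, 4)                                # any common shape works
m1, m2, m3, m4 = (np.random.rand(*shape) for _ in range(4))

temp = np.array([[m1, m2], [m3, m4]])         # shape (2, 2) + shape
for _ in range(2):
    temp = np.rollaxis(temp, 0, temp.ndim)    # shape + (2, 2)

eigvals = np.linalg.eigvals(temp)             # shape + (2,)
eigvals1, eigvals2 = eigvals[..., 0], eigvals[..., 1]
print(eigvals1.shape, eigvals2.shape)         # (3, 4) (3, 4)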

how to convert a 2D numpy array to a 2D numpy matrix by changing shape

I have been struggling with converting a 2D numpy array into a 2D numpy matrix. I know that I can use numpy.asmatrix(x) to change an array x into a matrix; however, the size of the resulting matrix is not the size I want. For example, I want a numpy.matrix of shape (2,10). It is easier for me to build each row of the matrix from a separate numpy array, so I used numpy.append to put these two arrays together. However, when I use numpy.asmatrix to turn this 2D array into a 2D matrix, the size is not the one I want (my desired matrix should have a size of 2*10, but when I change the arrays to a matrix, the size is 1*2). Does anybody know how I can change the size of this matrix to my desired size?
Code (a and b are two numpy.matrix objects of size 1x10):
import random
import numpy

m = 10
c = sorted(random.sample(range(m), 2))
n1 = numpy.array([a[0:c[0]], b[c[0]:c[1]], a[c[1]:]])
n2 = numpy.array([b[0:c[0]], a[c[0]:c[1]], b[c[1]:]])
n3 = numpy.append(n1, n2)
n3 = numpy.asmatrix(n3)
n1 and n2 are each arrays with shape 3, and n3 is a matrix with shape 6. I want n3 to be a matrix of size 2x10.
Thanks
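One possible way to get the desired 2x10 result (a hedged sketch, not taken from the thread: it slices columns with a[:, i:j] rather than a[i:j], since row-slicing a 1x10 matrix does not split its ten elements, and then stacks the two rows; a and b here are stand-in matrices):
import random
import numpy

m = 10
a = numpy.asmatrix(numpy.arange(m))            # stand-in 1x10 matrix
b = numpy.asmatrix(numpy.arange(m, 2 * m))     # stand-in 1x10 matrix
c = sorted(random.sample(range(m), 2))

# swap the middle segment between a and b, keeping each piece a 1-row column slice
n1 = numpy.hstack([a[:, :c[0]], b[:, c[0]:c[1]], a[:, c[1]:]])
n2 = numpy.hstack([b[:, :c[0]], a[:, c[0]:c[1]], b[:, c[1]:]])
n3 = numpy.asmatrix(numpy.vstack([n1, n2]))    # 2x10 matrix
print(n3.shape)                                # (2, 10)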
