Square array from linear array in Python

I would like to get a square matrix B from a linear vector A such that B = A * transpose(A). A is a numpy array and np.shape(A) returns (10,). I would like B to be a (10,10) array. I tried B = np.matmul(A, A[np.newaxis]) but I get the error:
shapes (10,) and (1,10) not aligned: 10 (dim 0) != 1 (dim 0)

You can do this using np.outer:
import numpy as np
vector = np.arange(10)
np.outer(vector, vector)

The solution is a little ugly, but it does what you need.
import numpy as np
vector = np.array([1,2,3,4,5,6,7,8,9,10])
matrix = np.dot(vector[:, None], vector[None, :])
print(matrix)
You can also do the following:
import numpy as np
vector = np.array([1,2,3,4,5,6,7,8,9,10])
matrix = vector * vector[:, None]
print(matrix)
The issue comes from the fact that transposing a one-dimensional array does not have the effect you might expect: .T leaves a 1-D array unchanged.
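For illustration, a quick check of what .T does to a 1-D array (using np.arange(10) as a stand-in for the original A):
import numpy as np

A = np.arange(10)                    # shape (10,)
print(A.T.shape)                     # (10,)   -- .T is a no-op on a 1-D array
print(A[:, np.newaxis].shape)        # (10, 1) -- an explicit column vector
print((A[:, np.newaxis] * A).shape)  # (10, 10) -- broadcasting gives the outer product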

Variation on outer product:
a = A.reshape(-1, 1)  # make sure it's a column vector
B = a @ a.T

Related

SVD on a non-square matrix [duplicate]

I'm using numpy.linalg.svd() to get the singular value decomposition of matrices, but I can't reconstruct the original matrix from the decomposition when the matrix is non-square.
For example, for a square matrix :
import numpy as np
n=5
# make a random (n, n) matrix
A = np.random.randint(0, 10, size=(n, n))
# SVD
U, S, Vh = np.linalg.svd(A)
# reconstruct A from the SVD
A_svd = U @ np.diag(S) @ Vh
# check if it's the same
print(np.allclose(A, A_svd))
I get:
>>> True
Now for a non-square matrix, for example A of shape (m, n): the shape of U is (m, m), the shape of Vh is (n, n), and S is a 1-D array of the singular values, of length k = min(m, n). For example:
import numpy as np
n = 5
m = 8
# make a random (m, n) matrix
A = np.random.randint(0, 10, size=(m, n))
# SVD
U, S, Vh = np.linalg.svd(A)
With the following shapes :
>>> U.shape
(8, 8)
>>> S.shape
(5,)
>>> Vh.shape
(5, 5)
I don't know how to get the matrix A back from the SVD in this case, though.
I can't do a simple multiplication because of the shape mismatch: U @ np.diag(S) @ Vh fails, and so do the equivalents with np.matmul or np.dot.
So I tried to reshape S and fill it with zeroes.
S_m = np.diag(S)
S_m.resize((U.shape[1], Vh.shape[0]))
# check if it's the same
print(np.allclose(A, U @ S_m @ Vh))
>>> False
I found an answer here, using diagsvd from scipy.linalg.
import scipy.linalg as la
A_svd = U @ la.diagsvd(S, *A.shape) @ Vh
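If you prefer to stay in NumPy, you can also build the (m, n) sigma matrix yourself by writing the singular values onto the diagonal of a zero matrix; a minimal, self-contained sketch of that approach:
import numpy as np

m, n = 8, 5
A = np.random.randint(0, 10, size=(m, n))
U, S, Vh = np.linalg.svd(A)

# build the (m, n) sigma matrix with the singular values on its diagonal
Sigma = np.zeros((m, n))
Sigma[:S.size, :S.size] = np.diag(S)

print(np.allclose(A, U @ Sigma @ Vh))  # True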

How to make (z,x,y,1)-shape numpy array into (z,x,y,3)-shape numpy array by duplicating the last element 3 times?

I want to make a (z,x,y,1)-shaped numpy array into a (z,x,y,3)-shaped numpy array by duplicating the last element.
For example, given
import numpy as np
# The shape is (1,2,2,1) (that is z=1, x=2, y=2)
a = np.array([[[[1], [2]],[[3], [4]]]])
print(a.shape)
# I want to make it (1,2,2,3) by duplicating the last element 3 times as follows
a = np.array([[[[1,1,1], [2,2,2]],[[3,3,3], [4,4,4]]]])
print(a.shape)
So, given a numpy array a of shape (z,x,y,1), how do I make it a (z,x,y,3) numpy array by duplicating the last element?
Try this:
def repeat_last(a, n=3):
    # repeat along axis 2, then regroup into a new last axis (assumes a.shape[-1] == 1)
    return a.repeat(n, axis=2).reshape(*a.shape[:-1], n)
You can use np.broadcast_to to do explicit broadcasting.
assert a.shape[-1] == 1  # check it really is 1 in the last dimension
new_shape = a.shape[:-1] + (3,)
np.broadcast_to(a, new_shape)
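One caveat: np.broadcast_to returns a read-only view of the original data, so copy it if you need to modify the result. A small sketch, using the (1,2,2,1) array a from the question:
import numpy as np

a = np.array([[[[1], [2]], [[3], [4]]]])

# the broadcast result is read-only; .copy() makes a writable array
b = np.broadcast_to(a, a.shape[:-1] + (3,)).copy()
b[0, 0, 0, 0] = 99   # works after the copy
print(b.shape)       # (1, 2, 2, 3)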
You can concatenate three arrays (which are all a) along the last axis:
np.concatenate([a]*3, axis=-1)
NumPy's tile will do the trick. You just have to indicate the number of repetitions of the array along each axis (parameter reps).
In [39]: import numpy as np
In [40]: a = np.array([[[[1], [2]], [[3], [4]]]])
In [41]: b = np.array([[[[1,1,1], [2,2,2]], [[3,3,3], [4,4,4]]]])
In [42]: c = np.tile(a, (1, 1, 1, 3))
In [43]: np.array_equal(b, c)
Out[43]: True

Scale rows of 3D-tensor

I have an n-by-3-by-3 numpy array A and an n-by-3 numpy array B. I'd now like to multiply every row of every one of the n 3-by-3 matrices with the corresponding scalar in B, i.e.,
import numpy as np
A = np.random.rand(10, 3, 3)
B = np.random.rand(10, 3)
for a, b in zip(A, B):
    a = (a.T * b).T
    print(a)
Can this be done without the loop as well?
You can use NumPy broadcasting to let the elementwise multiplication happen in a vectorized manner, by extending B to 3D with a singleton dimension at the end using np.newaxis or its alias/shorthand None. Thus, the implementation would be A*B[:,:,None] or simply A*B[...,None].
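A quick check of the broadcasting approach against the loop from the question:
import numpy as np

A = np.random.rand(10, 3, 3)
B = np.random.rand(10, 3)

# B[:, :, None] has shape (10, 3, 1), so row i of matrix n is scaled by B[n, i]
C = A * B[:, :, None]

# compare with the explicit per-matrix loop
expected = np.array([(a.T * b).T for a, b in zip(A, B)])
print(np.allclose(C, expected))  # True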

compute matrix product for multiple inputs

I am trying to compute a transform given by b = A*x. A is a (3,4) matrix. If x is one (4,1) vector the result is b (3,1).
Instead, for x I have a bunch of vectors concatenated into a matrix and I am trying to evaluate the transform for each value of x. So x is (20, 4). How do I broadcast this in numpy such that I get 20 resulting values for b (20,3)?
I could loop over each input and compute the output but it feels like there must be a better way using broadcasting.
Eg.
A = [[1,0,0,0],
     [2,0,0,0],
     [3,0,0,0]]
if x is:
x = [[1,1,1,1],
     [2,2,2,2]]
b = [[1,2,3],
     [2,4,6]]
Each row of x is multiplied with A and result is stored as a row in b.
You can use numpy's dot:
import numpy as np
A = np.random.normal(size=(3,4))
x = np.random.normal(size=(4,20))
y = np.dot(A,x)
print(y.shape)
Result: (3, 20)
And of course if you want (20,3) you can use np.transpose()
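Alternatively, if you keep x in the (20, 4) layout from the question (one input vector per row), you can multiply by A.T and skip the final transpose; a minimal sketch:
import numpy as np

A = np.random.normal(size=(3, 4))
x = np.random.normal(size=(20, 4))   # one input vector per row

# (20, 4) @ (4, 3) -> (20, 3); row i of b is A @ x[i]
b = x @ A.T
print(b.shape)  # (20, 3)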

Python 3d array times 1d vector

I'm trying to multiply a [12x256x256] array with a [12] array. The idea is that the first one is a stack of 12 [256x256] arrays and the second one is a 1-D array of 12 scalars. So if the second array is [1,2,3,4,...,12], then I want to multiply the first layer of the 3D one by 1, the second layer by 2, etc.
How can I do this?
You can add new axes and multiply.
import numpy as np
a = np.ones((12,256,256))
b = np.array(range(12))+1
c = a * b[:, np.newaxis, np.newaxis]
In numpy you can do
# let m be 12x256x256, n be 12
m = np.array(m)
n = np.array(n)
(m.swapaxes(0,2) * n).swapaxes(2,0)
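Both approaches give the same result; a quick check with arrays equivalent to the ones above:
import numpy as np

a = np.ones((12, 256, 256))
b = np.arange(1, 13)            # [1, 2, ..., 12]

c1 = a * b[:, np.newaxis, np.newaxis]       # broadcasting over new axes
c2 = (a.swapaxes(0, 2) * b).swapaxes(2, 0)  # swapaxes approach
print(np.allclose(c1, c2))      # True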
