I would like to generate invertible matrices (specifically elements of GL(n), the general linear group of degree n) using Tensorflow and/or Numpy for use with my neural network.
How can this be done and what would be the best way of doing so?
I understand there is a way to generate symmetric invertible matrices by computing (A + A.T)/2 for arbitrary square matrices A; however, I would like mine not to be restricted to symmetric matrices.
I have found one approach which I believe can generate a large variety of random invertible matrices, based on diagonal dominance.
The theorem is that for an n×n matrix, if in every row the absolute value of the diagonal element is strictly larger than the sum of the absolute values of the other (off-diagonal) elements in that row, then the matrix is invertible. (Here is the corresponding Wikipedia article: https://en.wikipedia.org/wiki/Diagonally_dominant_matrix)
Therefore the following code snippet generates a random invertible matrix.
import numpy as np

n = 5  # size of the invertible matrix I wish to generate
m = np.random.rand(n, n)
# Set each diagonal entry to the sum of the absolute values of its row;
# the off-diagonal sum in each row is then strictly smaller, so the matrix
# is strictly diagonally dominant and hence invertible.
mx = np.sum(np.abs(m), axis=1)
np.fill_diagonal(m, mx)
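As a quick sanity check (a rough sketch using only standard numpy calls, not part of the construction itself), the generated matrix can be verified to be nonsingular by inverting it and reconstructing the identity:

# Sanity check: a strictly diagonally dominant matrix should be invertible,
# so m times its inverse should be (numerically) the identity.
m_inv = np.linalg.inv(m)
assert np.allclose(m @ m_inv, np.eye(n))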
I want to make a zero-mean Gaussian matrix, e.g. M of size (n, n), in Python such that
where the four-dimensional matrix A with its entries is given. Is there any way to do that without reshaping M into a vector?
This question is the same as this one, but for a sparse matrix (scipy.sparse). The solution given to the linked question uses indexing schemes that are incompatible with sparse matrices.
For context, I am constructing a Jacobian for a large discretized PDE, so the B matrix in this case contains the various relevant partial derivative terms, while A will be the complete Jacobian I need to invert for a Newton's-method approximation. On a large grid, A will be far too large to fit in memory, so I want to use sparse matrices.
I would like to construct an array with the following structure:
A[i,j,i,j] = B[i,j], with all other entries zero: A[i,j,l,k] = 0 whenever (i,j) != (l,k)
I.e., given that I have the B matrix constructed, how can I create the matrix A, preferably in a vectorized manner?
Explicitly, let B = [[1,2],[3,4]]
Then (using Python's 0-based indexing):
A[0,0,:,:] = [[1,0],[0,0]]
A[0,1,:,:] = [[0,2],[0,0]]
A[1,0,:,:] = [[0,0],[3,0]]
A[1,1,:,:] = [[0,0],[0,4]]
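For illustration, here is a minimal sketch of one way this could be built with scipy.sparse (this is an assumption about a workable layout, not necessarily the only approach): since A[i,j,l,k] is nonzero only when (i,j) == (l,k), flattening the first pair of axes into a row index and the second pair into a column index turns A into a diagonal matrix with the entries of B along the diagonal, which scipy.sparse can represent directly.

import numpy as np
from scipy import sparse

B = np.array([[1, 2], [3, 4]])
# Flattened view of A: row index r = i*m + j, column index s = l*m + k,
# so A[i,j,l,k] = B[i,j] * (r == s) becomes a sparse diagonal matrix.
A_flat = sparse.diags(B.ravel())   # shape (n*m, n*m)

If the dense 4-D array is ever needed for checking, it can be recovered with A_flat.toarray().reshape(B.shape + B.shape).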
I have a (large) 4D array, consisting of the 5 coefficients in a given basis for a matrix field. Given the 5 basis matrices, I want to efficiently calculate the matrix field.
The coefficient field c[x,y,z,i] is the value of the i-th coefficient at position x,y,z,
the matrix field M[x,y,z,a,b] is the (3,3) matrix at position x,y,z,
and T_1, ..., T_5 are the (3,3) basis matrices.
I could loop over each position in space:
M[x,y,z,:,:] = T_1[:,:]*c[x,y,z,0] + T_2[:,:]*c[x,y,z,1] + ... + T_5[:,:]*c[x,y,z,4]
But this is very inefficient. My attempts at using np.multiply and np.sum result in broadcasting errors, because the desired product (a field of 3x3 matrices) is ambiguous to numpy.
Keep in mind that to numpy, these 4d and 5d arrays are just that: they are not treated as 3d arrays containing 2d matrices, etc.
Let's try to write your calculation in a way that clarifies dimensions:
M[x,y,z] = T_1*c[x,y,z,0] + T_2*c[x,y,z,1] + ... + T_5*c[x,y,z,4]
M[x,y,z,:,:] = T_1[:,:]*c[x,y,z,0] + T_2[:,:]*c[x,y,z,1] + ... + T_5[:,:]*c[x,y,z,4]
c[x,y,z,i] is a coefficient, right? So M is a weighted sum of the T_n arrays?
One way of expressing this is:
T = np.stack([T_1, T_2, ...T_5], axis=0) # 3d (nab)
M = np.einsum('nab,xyzn->xyzab', T, c)
We could alternatively stack T_i on a new last axis
T = np.stack([T_1, T_2 ...T_5], axis=2) # (abn)
M = np.einsum('abn,xyzn->xyzab', T, c)
or, using that (a,b,n)-stacked T, as broadcasted multiplication plus a sum over the last axis:
M = (T[None,None,None,:,:,:] * c[:,:,:,None,None,:]).sum(axis=-1)
I'm writing this code without testing, so there may be errors, but I think the basic outline is right.
It could also be written as a dot, if I can put the n dimension last in one argument and second-to-last in the other, or with tensordot. But there's less control over broadcasting of the other dimensions.
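For what it's worth, a possible tensordot version (same untested caveat as above), assuming the (n,a,b) stacking from the first example:

# Contract c's last axis (n) with T's first axis (n).
# The result has shape (x, y, z, a, b), matching the einsum versions above.
M = np.tensordot(c, T, axes=(3, 0))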
For test calculations you could also reshape these arrays so that x,y,z are rolled into one axis and a,b into another, e.g.
M[xyz, ab] = c[xyz, n] * T[n, ab]  # summed over n
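Since the snippets above were written without testing, here is a small self-contained check with made-up shapes (the field dimensions are arbitrary) showing that the einsum form matches the explicit loop:

import numpy as np

rng = np.random.default_rng(0)
c = rng.random((4, 5, 6, 5))            # coefficient field, shape (x, y, z, n)
T = rng.random((5, 3, 3))               # 5 basis matrices stacked on axis 0

M = np.einsum('nab,xyzn->xyzab', T, c)  # vectorized version

# Reference: explicit loop over positions, summing the weighted basis matrices.
M_ref = np.zeros((4, 5, 6, 3, 3))
for x in range(4):
    for y in range(5):
        for z in range(6):
            for n in range(5):
                M_ref[x, y, z] += c[x, y, z, n] * T[n]

assert np.allclose(M, M_ref)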
I have been struggling with changing a 2D numpy array into a 2D numpy matrix. I know that I can use numpy.asmatrix(x) to change an array x into a matrix; however, the resulting matrix does not have the size I want. For example, I want a numpy.matrix of shape (2, 10). It is easier for me to use two separate numpy arrays to form the rows of the matrix, and then use numpy.append to put these two arrays together. However, when I use numpy.asmatrix to turn this 2D array into a 2D matrix, the size is not what I expect (my desired matrix should have a size of 2*10, but when I convert the arrays to a matrix, the size is 1*2). Does anybody know how I can get asmatrix to produce my desired size?
Code (a and b are two numpy.matrix objects of shape (1, 10)):
import random
import numpy

m = 10
# pick two distinct random indices in [0, m) and sort them
c = sorted(random.sample(range(m), 2))
# build each row from alternating pieces of a and b
n1 = numpy.array([a[0:c[0]], b[c[0]:c[1]], a[c[1]:]])
n2 = numpy.array([b[0:c[0]], a[c[0]:c[1]], b[c[1]:]])
n3 = numpy.append(n1, n2)
n3 = numpy.asmatrix(n3)
n1 and n2 are each arrays with shape 3, and n3 is a matrix with shape 6. I want n3 to be a matrix of size 2*10.
Thanks
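A minimal sketch of one way to get the desired (2, 10) shape (this is an assumption about the intent, not necessarily the poster's approach; the a and b below are placeholder 1×10 matrices for illustration): slice flat 1-D views of a and b so the slices pick columns rather than rows, concatenate the pieces of each row, and stack the two rows.

import random
import numpy

a = numpy.asmatrix(numpy.arange(10))        # placeholder 1x10 matrix
b = numpy.asmatrix(numpy.arange(10, 20))    # placeholder 1x10 matrix

m = 10
c = sorted(random.sample(range(m), 2))

# Flat 1-D views, so slicing selects columns instead of rows.
af, bf = numpy.asarray(a).ravel(), numpy.asarray(b).ravel()
row1 = numpy.concatenate([af[0:c[0]], bf[c[0]:c[1]], af[c[1]:]])
row2 = numpy.concatenate([bf[0:c[0]], af[c[0]:c[1]], bf[c[1]:]])

n3 = numpy.asmatrix(numpy.vstack([row1, row2]))   # shape (2, 10)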