I am trying to apply MCMC methods to the inverse problem A(u) x = b, where A is a symmetric positive definite square matrix. I was given that A can be expressed as A = A0 + Σ_i u_i A_i. I want to check whether the ergodic average converges to the initial u, but I need to create some random matrix A satisfying these conditions in order to test my MCMC function.
Is it possible to create such a random matrix A of the form A = A0 + Σ_i u_i A_i, and how can I go about it in Python?
Any help is greatly appreciated, thank you!
One way would be to generate SPD (symmetric positive definite) matrices A[0], A[1], ... and positive numbers u[1], ... and then sum them up:
B = A[0] + Σ_{i>=1} u[i]*A[i]
This will be SPD. B could also be SPD even if the A[i] were not, and the u[i] not all positive, but I think it could be tricky to choose the A[i] and the u[i] so that B is SPD in that case.
One issue is whether you want B to be strictly positive definite -- ie invertible -- or not. B will be invertible if either A[0] is, or at least one of the A[i] with u[i] > 0 is. Again, B could be invertible even if those conditions were not met, and again it might be tricky to ensure that B was invertible in that case.
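A minimal NumPy sketch of that construction (the helper random_spd is my own naming; it builds each SPD term with one of the generators described further below, and the sizes and coefficients are arbitrary choices):

import numpy as np

def random_spd(n, rng):
    # one random SPD matrix via M'M plus a small ridge to guarantee invertibility
    M = rng.standard_normal((n, n))
    return M.T @ M + 1e-6 * np.eye(n)

n, k = 4, 3                                      # matrix size and number of terms in the sum
rng = np.random.default_rng(0)

A = [random_spd(n, rng) for _ in range(k + 1)]   # A[0], A[1], ..., A[k], all SPD
u = rng.uniform(0.1, 1.0, size=k)                # positive coefficients u[1], ..., u[k]

B = A[0] + sum(u[i - 1] * A[i] for i in range(1, k + 1))

assert np.all(np.linalg.eigvalsh(B) > 0)         # B is SPD: all eigenvalues positive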
There are various ways you could generate a single SPD nxn matrix P:
a/ Generate an upper triangular nxn matrix U and compute
P = U'*U
P will be SPD, and invertible iff all the diagonal elements of U are non-zero
b/ Generate a mxn matrix M and compute
P = M'*M
P will be SPD, but not necessarily invertible. It definitely won't be invertible if m<n. To make it invertible, add a positive multiple of the identity matrix.
c/ use sklearn.datasets.make_spd_matrix
From the documentation it's not clear to me whether this will be invertible or not, so if you need an invertible one you might be best to add a multiple of the identity.
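A quick NumPy sketch of options a/ and b/, plus the sklearn helper from c/ (the specific sizes and the small ridge are arbitrary choices of mine):

import numpy as np
from sklearn.datasets import make_spd_matrix

n = 4
rng = np.random.default_rng(0)

# a/ P = U'*U with a random upper triangular U; nonzero diagonal => invertible
U = np.triu(rng.uniform(0.5, 1.5, size=(n, n)))
P_a = U.T @ U

# b/ P = M'*M with a random m x n matrix M, plus a small multiple of I to force invertibility
M = rng.standard_normal((2 * n, n))
P_b = M.T @ M + 1e-6 * np.eye(n)

# c/ sklearn's helper; again add a ridge if you need to be sure it is invertible
P_c = make_spd_matrix(n_dim=n, random_state=0) + 1e-6 * np.eye(n)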
I am trying to use the Normalized Cut algorithm (Shi and Malik, 2000) to cut a matrix into two matrices. In this regard, I need to find the second smallest eigenvector in a generalized eigenvalue system (A x = lambda B x). In my input, B is a positive semidefinite matrix. However, scipy.linalg.eigh requires B to be positive definite and raises an error when I use it. I need to know whether I can have a solution with this input, and how I can find it.
I tried
from scipy.linalg import eigh
eigvals, eigvecs = eigh(A, B, eigvals_only=False, subset_by_index=[0, 1])
But I got:
numpy.linalg.LinAlgError: The leading minor of order 2 of B is not positive definite. The factorization of B could not be completed and no eigenvalues or eigenvectors were computed.
If B is only semidefinite, it has at least one eigenvector associated with eigenvalue 0. You could still have solutions if the null space of B is also a null space of A, i.e. if B @ x = 0 implies A @ x = 0, but in that case the generalized eigenvalue associated with x is undetermined.
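Two practical workarounds are often used in this situation (these are my own suggestions, not part of the answer above): shift B by a tiny multiple of the identity so that eigh accepts it, or use scipy.linalg.eig, whose QZ-based solver tolerates a singular B (the undetermined eigenvalues come back as inf or nan):

import numpy as np
from scipy.linalg import eigh, eig

# toy data: A symmetric, B symmetric positive *semi*definite (rank deficient)
A = np.array([[2., 1., 0.], [1., 2., 1.], [0., 1., 2.]])
B = np.diag([1., 1., 0.])

# workaround 1: regularize B so it becomes positive definite
eps = 1e-8
w1, v1 = eigh(A, B + eps * np.eye(3), subset_by_index=[0, 1])

# workaround 2: the general QZ-based solver, which accepts a singular B
w2, v2 = eig(A, B)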
I would like to generate invertible matrices (specifically those from GL(n), a general linear group of size n) using Tensorflow and/or Numpy for use with my neural network.
How can this be done and what would be the best way of doing so?
I understand there is a way to generate symmetric invertible matrices by computing (A + A.T)/2 for an arbitrary square matrix A; however, I would like mine to not just be symmetric.
I happened to find one way which I believe can generate a large variety of random invertible matrices, using diagonal dominance.
The theorem is that for an nxn matrix, if in every row the absolute value of the diagonal element is strictly larger than the sum of the absolute values of all the other elements in that row, then the matrix is invertible. (Here is the corresponding Wikipedia article: https://en.wikipedia.org/wiki/Diagonally_dominant_matrix)
Therefore the following code snippet generates an arbitrary invertible matrix.
import numpy as np

n = 5                            # size of the invertible matrix I wish to generate
m = np.random.rand(n, n)         # random entries in [0, 1)
mx = np.sum(np.abs(m), axis=1)   # row sums of absolute values
np.fill_diagonal(m, mx)          # diagonal now dominates each row, so m is invertible
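As a quick sanity check of the construction (my own addition), the matrix should come out full rank:

print(np.linalg.matrix_rank(m) == n)   # expect True: strict diagonal dominance implies invertibility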
How can I create the matrix P consisting of three eigenvector columns by using a double nested loop?
from sympy.matrices import Matrix, zeros
from sympy import pprint
A = Matrix([[6,2,6], [2,6,6], [6,6,2]])
ew_A = A.eigenvals()
ev_A = A.eigenvects()
pprint(ew_A)
pprint(ev_A)
# Matrix P
(n,m) = A.shape
P = TODO # Initialising
# "filling Matrix P with ...
for i in TODO:
for j in TODO:
P[:,i+j] = TODO
## Calculating the diagonal matrix
D = P**-1*A*P
Thanks so much in Advance
Finding the eigenvalues of a matrix, or diagonalizing it, is equivalent to finding the zeros of a polynomial with a degree equal to the size of the matrix. So in your case diagonalizing a 3x3 matrix is equivalent to finding the zeros of a 3rd degree polynomial. Maybe there is a simple algorithm for that, but mathematicians always go for the general case.
And in the general case you can show that there is no finite formula in radicals for the roots of a general polynomial of degree 5 or higher (that is the Abel-Ruffini theorem, proved using Galois theory), so there is also no simple "triple loop" algorithm for matrices of size 5x5 and higher. Eigenvalue software works by an iterative approximation algorithm, so that is a "while" loop around some finite loops.
This means that your question has no answer in the general case. In the 3x3 case maybe, but even that is not going to be terribly simple.
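That said, for the concrete 3x3 example in the question, SymPy has already computed the eigenvectors via eigenvects(), so P can be assembled with the double loop the question asks for. A minimal sketch (the counter variable col is my own naming, and it simply reuses the question's A):

from sympy.matrices import Matrix, zeros

A = Matrix([[6, 2, 6], [2, 6, 6], [6, 6, 2]])
ev_A = A.eigenvects()          # list of (eigenvalue, multiplicity, [eigenvectors])

(n, m) = A.shape
P = zeros(n, m)                # initialising

col = 0
for (val, mult, vects) in ev_A:    # outer loop over the distinct eigenvalues
    for v in vects:                # inner loop over the eigenvectors of each eigenvalue
        P[:, col] = v
        col += 1

D = P**-1 * A * P              # diagonal matrix of eigenvalues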
So I would like to generate a 50 X 50 covariance matrix for a random variable X given the following conditions:
one variance is 10 times larger than the others
the parameters of X are only slightly correlated
Is there a way of doing this in Python/R etc? Or is there a covariance matrix that you can think of that might satisfy these requirements?
Thank you for your help!
OK, you only need one matrix and randomness isn't important. Here's a way to construct a matrix according to your description. Start with a 50 by 50 identity matrix. Assign 10 to the first (upper left) diagonal element. Assign a small number (I don't know what's appropriate for your problem, maybe 0.1? 0.01? It's up to you) to all the off-diagonal elements. Now take that matrix and square it (i.e. compute transpose(X) . X where X is your matrix). Presto! You've squared the eigenvalues, so now you have a covariance matrix.
If the small element is small enough, X is already positive definite. But squaring guarantees it (assuming there are no zero eigenvalues, which you can verify by computing the determinant -- if the determinant is nonzero then there are no zero eigenvalues).
I assume you can find Python functions for these operations.
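A minimal NumPy sketch of that recipe (the value 0.01 for the small off-diagonal entries is an arbitrary choice):

import numpy as np

n = 50
X = np.full((n, n), 0.01)      # small number in all the off-diagonal positions ...
np.fill_diagonal(X, 1.0)       # ... starting from the identity on the diagonal
X[0, 0] = 10.0                 # one variance much larger than the others

C = X.T @ X                    # squaring makes all eigenvalues positive

assert np.all(np.linalg.eigvalsh(C) > 0)   # C is a valid covariance matrix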
Short version of my question:
What would be the optimal way of calculating an eigenvector for a matrix A, if we already know the eigenvalue belonging to the eigenvector?
Longer explanation:
I have a large stochastic matrix A which, because it is stochastic, has a non-negative left eigenvector x (such that A^Tx=x).
I'm looking for quick and efficient methods of numerically calculating this vector. (Preferably in MATLAB or numpy/scipy - since both of these wrap around ARPACK/LAPACK, either one would be fine).
I know that 1 is the largest eigenvalue of A, so I know that calling something like this Python code:
from scipy.sparse.linalg import eigs
vals, vecs = eigs(A, k=1)
will result in vals = 1 and vecs equalling the vector I need.
However, the thing that bothers me here is that calculating eigenvalues is, in general, a more difficult operation than solving a linear system. If a matrix M has eigenvalue l, then finding the corresponding eigenvector is a matter of solving the equation (M - l * I) * x = 0, which is, in theory at least, a simpler operation than calculating an eigenvalue, since we are only solving a linear system, or more specifically, finding the nullspace of a matrix.
However, I find that all methods of nullspace calculation in MATLAB rely on svd calculation, a process I cannot afford to perform on a matrix of my size. I also cannot call solvers on the linear equation, because they all only find one solution, and that solution is 0 (which, yes, is a solution, but not the one I need).
Is there any way to avoid calls to eigs-like function to solve my problem more quickly than by calculating the largest eigenvalue and accompanying eigenvector?
Here's one approach using Matlab:
1. Let x denote the (row) left† eigenvector associated with eigenvalue 1. It satisfies the system of linear equations (or matrix equation) xA = x, or x(A−I) = 0.
2. To avoid the all-zeros solution to that system of equations, remove the first equation and arbitrarily set the first entry of x to 1 in the remaining equations.
3. Solve those remaining equations (with x1 = 1) to obtain the other entries of x.
Example using Matlab:
>> A = [.6 .1 .3
.2 .7 .1
.5 .1 .4]; %// example stochastic matrix
>> x = [1, -A(1, 2:end)/(A(2:end, 2:end)-eye(size(A,1)-1))]
x =
1.000000000000000 0.529411764705882 0.588235294117647
>> x*A %// check
ans =
1.000000000000000 0.529411764705882 0.588235294117647
Note that the code -A(1, 2:end)/(A(2:end, 2:end)-eye(size(A,1)-1)) is step 3.
In your formulation you define x to be a (column) right eigenvector of A^T (such that A^T x = x). This is just x.' from the above code:
>> x = x.'
x =
1.000000000000000
0.529411764705882
0.588235294117647
>> A.'*x %// check
ans =
1.000000000000000
0.529411764705882
0.588235294117647
You can of course normalize the eigenvector to sum 1:
>> x = x/sum(x)
x =
0.472222222222222
0.250000000000000
0.277777777777778
>> A.'*x %// check
ans =
0.472222222222222
0.250000000000000
0.277777777777778
† Following the usual convention. Equivalently, this corresponds to a right eigenvector of the transposed matrix.
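Since the question also mentions numpy/scipy, here is a rough NumPy translation of the same three steps (a sketch of my own, not part of the original answer):

import numpy as np

A = np.array([[.6, .1, .3],
              [.2, .7, .1],
              [.5, .1, .4]])                      # example stochastic matrix

n = A.shape[0]
# fix x[0] = 1 and solve the remaining equations of x(A - I) = 0 for the other entries
rest = np.linalg.solve((A[1:, 1:] - np.eye(n - 1)).T, -A[0, 1:])
x = np.concatenate(([1.0], rest))

x = x / x.sum()                                   # normalize the eigenvector to sum 1
print(x)                                          # [0.4722..., 0.25, 0.2777...]
print(x @ A)                                      # check: should equal x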