How can I create a matrix P consisting of the three eigenvector columns by using a double nested loop?
from sympy.matrices import Matrix, zeros
from sympy import pprint
A = Matrix([[6,2,6], [2,6,6], [6,6,2]])
ew_A = A.eigenvals()
ev_A = A.eigenvects()
pprint(ew_A)
pprint(ev_A)
# Matrix P
(n,m) = A.shape
P = TODO  # Initialising
# filling matrix P with the eigenvector columns
for i in TODO:
    for j in TODO:
        P[:, i+j] = TODO
## Calculating the diagonal matrix
D = P**-1 * A * P
Thanks so much in advance!
Finding the eigenvalues of a matrix, or diagonalizing it, is equivalent to finding the zeros of a polynomial whose degree equals the size of the matrix. So in your case diagonalizing a 3x3 matrix is equivalent to finding the zeros of a cubic polynomial. Maybe there is a simple algorithm for that, but mathematicians always go for the general case.
And in the general case you can show that there is no closed-form solution in radicals for the zeros of a polynomial of degree 5 or higher (a result proved with Galois theory), so there is also no simple "triple loop" algorithm for matrices of size 5x5 and higher. Eigenvalue software works by iterative approximation, so it is essentially a "while" loop around some finite loops.
This means that your question has no answer in the general case. In the 3x3 case maybe, but even that is not going to be terribly simple.
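That said, for this concrete 3x3 example the hard part is already done: A.eigenvects() returns a list of (eigenvalue, multiplicity, [basis vectors]) tuples, and building P is just a double loop over that structure. A minimal sketch (variable names are my own):

```python
from sympy import Matrix, zeros, pprint

A = Matrix([[6, 2, 6], [2, 6, 6], [6, 6, 2]])
ev_A = A.eigenvects()  # [(eigenvalue, multiplicity, [basis vectors]), ...]

n, m = A.shape
P = zeros(n, m)  # initialise P

# outer loop: eigenvalues; inner loop: the basis vectors of each eigenspace
col = 0
for val, mult, vects in ev_A:
    for v in vects:
        P[:, col] = v
        col += 1

D = P**-1 * A * P  # diagonal matrix with the eigenvalues on the diagonal
pprint(D)
```

Because sympy works in exact arithmetic, the off-diagonal entries of D come out exactly zero.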
Suppose I have a symmetric matrix A and a vector b and want to find A^(-1) b. Now, this is well known to be doable in time O(N^2) (where N is the dimension of the vector/matrix), and I believe that in MATLAB this can be done as A\b. But all I can find in Python is numpy.linalg.solve(), which will do Gaussian elimination, which is O(N^3). I must not be looking in the right place...
scipy.linalg.solve has an argument to make it assume a symmetric matrix:
x = scipy.linalg.solve(A, b, assume_a="sym")
If you know your matrix is not just symmetric but positive definite, you can pass the stronger assumption assume_a="pos" instead.
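For instance, on a small symmetric positive definite system (the matrix here is just a toy example), both assumptions give the same answer as the general solver:

```python
import numpy as np
from scipy.linalg import solve

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])  # symmetric positive definite
b = np.array([1.0, 2.0])

x_sym = solve(A, b, assume_a="sym")  # symmetric (LDL^T) solver
x_pos = solve(A, b, assume_a="pos")  # Cholesky-based solver
```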
I have a simple problem.
I have two matrices A and B,
and I want to find a transformation X that brings AX as close as possible to B in the least-squares sense,
i.e. find X such that X = argmin ||AX - B|| under the 2-norm.
How can I solve this problem using numpy or scipy?
I tried to search, but the only method I found is lstsq (https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html), and it seems to find the solution of a linear system for a vector x.
How to solve my problem, i.e. find a transformation X for matrix A?
Are there any non-linear methods that do this? How is this problem solved in optimization?
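For what it's worth, numpy.linalg.lstsq does accept a 2-D right-hand side: each column of the returned X solves the least-squares problem for the matching column of B, which is exactly argmin ||AX - B|| in the Frobenius norm. A quick sketch with made-up shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))   # tall matrix
B = rng.standard_normal((10, 4))   # matrix right-hand side

# lstsq solves for all columns of B at once
X, residuals, rank, sv = np.linalg.lstsq(A, B, rcond=None)
```

The solution satisfies the normal equations A^T A X = A^T B.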
I'm sorry if this is a duplicate of some thread. I know there are lots of decompositions of a matrix (like LU or SVD), but now I have an arbitrary non-square matrix and I want to decompose it into a product of two matrices of given shapes. If an exact solution does not exist, I want to find a least-squares one. If more than one solution exists, any of them would be fine.
I was using iterative methods like this:
Y = ...  # the matrix I want to factorise as A @ B
A = np.random.rand(...)
B = np.random.rand(...)
for i in range(999):
    A = np.linalg.lstsq(B.T, Y.T, rcond=None)[0].T
    B = np.linalg.lstsq(A, Y, rcond=None)[0]
This is straightforward, but I found it converges sublinearly (actually logarithmically), which is slow. I also found that it sometimes (or frequently?) "bounces" back to a very high L2 loss. I'm wondering whether improvements to this exist, or whether solving AB = Y should be done in a totally different way?
Thanks a lot!
You can do this with the SVD. See, for example, the Wikipedia article.
Suppose, for example, you had an m x n matrix Y and wanted to find a factorisation
Y = A*B, where A is m x 1 and B is 1 x n,
so that
A*B
is as close as possible (measured by the Frobenius norm) to Y.
The solution is to take the SVD of Y:
Y = U*S*V'
and then take
A = s1*U1 (the first column of U, scaled by the first singular value)
B = V1' (the transpose of the first column of V)
If you want A to be m x 2 and B to be 2 x n, then you take the first two columns (for A, scaling the first column by the first singular value and the second by the second singular value), and so on.
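The recipe above can be sketched with numpy (shapes and seed are arbitrary; k is the target inner dimension):

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal((6, 4))
k = 2  # desired inner dimension of the factorisation

U, s, Vt = np.linalg.svd(Y, full_matrices=False)

# keep the k leading singular triplets, folding the singular
# values into the left factor as described above
A = U[:, :k] * s[:k]   # m x k
B = Vt[:k, :]          # k x n
```

By the Eckart-Young theorem, A @ B is the best rank-k approximation of Y in the Frobenius norm; the residual norm equals the root-sum-square of the discarded singular values.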
How do I get the inverse of a matrix in python? I've implemented it myself, but it's pure python, and I suspect there are faster modules out there to do it.
You should have a look at numpy if you do matrix manipulation. This is a module mainly written in C, which will be much faster than programming in pure python. Here is an example of how to invert a matrix, and do other matrix manipulation.
from numpy import matrix
from numpy import linalg
A = matrix([[1, 2, 3], [11, 12, 13], [21, 22, 23]])  # Creates a matrix.
x = matrix([[1], [2], [3]])  # Creates a matrix (like a column vector).
y = matrix([[1, 2, 3]])      # Creates a matrix (like a row vector).
print(A.T)                 # Transpose of A.
print(A * x)               # Matrix multiplication of A and x.
print(A.I)                 # Inverse of A.
print(linalg.solve(A, x))  # Solve the linear equation system Ax = b.
You can also have a look at the array module, which is a much more efficient implementation of lists when you have to deal with only one data type.
Make sure you really need to invert the matrix. Inversion is often unnecessary and can be numerically unstable. When most people ask how to invert a matrix, they really want to know how to solve Ax = b, where A is a matrix and x and b are vectors. It's more efficient and more accurate to use code that solves Ax = b for x directly than to calculate the inverse of A and then multiply it by b. Even if you need to solve Ax = b for many values of b, it's not a good idea to invert A: save the Cholesky factorization of A and reuse it for each b instead.
See Don't invert that matrix.
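For example, reusing one Cholesky factorisation across several right-hand sides might look like this with scipy (the matrix and vectors are just placeholders):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])  # symmetric positive definite
bs = [np.array([1.0, 0.0]),
      np.array([0.0, 1.0]),
      np.array([1.0, 2.0])]

c = cho_factor(A)                   # factor A = L L^T once
xs = [cho_solve(c, b) for b in bs]  # cheap triangular solves per b
```

The O(N^3) factorisation happens once; each additional right-hand side costs only O(N^2).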
It is a pity that the chosen matrix, repeated here again, is either singular or badly conditioned:
A = matrix( [[1,2,3],[11,12,13],[21,22,23]])
By definition, the inverse of A when multiplied by the matrix A itself must give a unit matrix. The A chosen in the much praised explanation does not do that. In fact just looking at the inverse gives a clue that the inversion did not work correctly. Look at the magnitude of the individual terms - they are very, very big compared with the terms of the original A matrix...
It is remarkable how often humans, when picking an example matrix, manage to pick a singular one!
I did have a problem with the solution, so I looked into it further. On the ubuntu-kubuntu platform, the debian package numpy does not have the matrix and the linalg sub-packages, so in addition to importing numpy, scipy needs to be imported as well.
If the diagonal terms of A are multiplied by a large enough factor, say 2, the matrix will most likely cease to be singular or near singular. So
A = matrix( [[2,2,3],[11,24,13],[21,22,46]])
becomes neither singular nor nearly singular, and the example gives meaningful results... When dealing with floating-point numbers one must be watchful for the effects of unavoidable round-off errors.
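A quick numpy check backs this up: the original example matrix is rank-deficient, while the modified one is comfortably invertible.

```python
import numpy as np

A_old = np.array([[1, 2, 3], [11, 12, 13], [21, 22, 23]], dtype=float)
A_new = np.array([[2, 2, 3], [11, 24, 13], [21, 22, 46]], dtype=float)

print(np.linalg.matrix_rank(A_old))  # 2 -- rank-deficient, no inverse exists
print(np.linalg.cond(A_new))         # a modest condition number
```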
For those like me, who were looking for a pure Python solution without pandas or numpy involved, check out the following GitHub project: https://github.com/ThomIves/MatrixInverse.
It generously provides a very good explanation of what the process looks like "behind the scenes". The author has nicely described the step-by-step approach and presented some practical examples, all easy to follow.
This is just a little code snippet from there to illustrate the approach very briefly (AM is the source matrix, IM is the identity matrix of the same size):
def invert_matrix(AM, IM):
    for fd in range(len(AM)):            # fd: focus diagonal element
        fdScaler = 1.0 / AM[fd][fd]
        # scale the focus row so its diagonal element becomes 1
        for j in range(len(AM)):
            AM[fd][j] *= fdScaler
            IM[fd][j] *= fdScaler
        # eliminate the fd-th column from every other row
        for i in list(range(len(AM)))[0:fd] + list(range(len(AM)))[fd+1:]:
            crScaler = AM[i][fd]
            for j in range(len(AM)):
                AM[i][j] = AM[i][j] - crScaler * AM[fd][j]
                IM[i][j] = IM[i][j] - crScaler * IM[fd][j]
    return IM
But please do follow the entire thing, you'll learn a lot more than just copy-pasting this code! There's a Jupyter notebook as well, btw.
Hope that helps someone, I personally found it extremely useful for my very particular task (Absorbing Markov Chain) where I wasn't able to use any non-standard packages.
You could calculate the determinant of the matrix (which is recursive)
and then form the adjugate matrix.
Here is a short tutorial.
Note that this only works for square matrices (but the inverse is only defined for square matrices anyway).
Another way of computing an inverse involves Gram-Schmidt orthogonalization and then transposing the matrix, since the transpose of an orthogonal matrix is its inverse!
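A minimal sketch of the determinant/adjugate route with numpy (the function name is mine; cofactor expansion is far too slow for large matrices, so this is only for illustration):

```python
import numpy as np

def adjugate_inverse(A):
    """Invert a square matrix via inv(A) = adj(A) / det(A)."""
    n = A.shape[0]
    cof = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # minor: delete row i and column j
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T / np.linalg.det(A)  # adjugate = transpose of cofactor matrix
```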
NumPy will be suitable for most people, but you can also do matrices in SymPy.
Try running these commands at http://live.sympy.org/
from sympy import Matrix
M = Matrix([[1, 3], [-2, 3]])
M
M**-1
For fun, try M**(1/2)
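The same session as a self-contained script, with a sanity check that the inverse is exact rather than floating-point:

```python
from sympy import Matrix, eye

M = Matrix([[1, 3], [-2, 3]])
M_inv = M**-1  # exact inverse with rational entries

# exact arithmetic: M times its inverse is exactly the identity
assert M * M_inv == eye(2)
```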
If you hate numpy, get out RPy and your local copy of R, and use it instead.
(I would also echo the advice to make sure you really need to invert the matrix. NumPy's linalg.solve, for example, like R's solve() function, doesn't actually compute a full inverse, since that is unnecessary.)