Python: Calculating the inverse of a pseudo inverse matrix

I am trying to calculate the pseudo inverse of a matrix, which should not be very difficult. The problem is inverting the matrix.
I am using the following code:
import numpy
from numpy import mat

def pseudoinverse(A):
    helper = A.T * A
    print helper * helper.I   # should be the identity matrix
    PI = helper.I * A.T
    return PI

A = mat(numpy.random.random_sample((4, 5)))
B = pseudoinverse(A)
To test this I included the print line; helper*helper.I should give the identity matrix. The output I get from this is:
[[ 2.    -1.     0.     0.     3.   ]
 [ 0.     2.     0.     0.     3.5  ]
 [ 0.    -0.5    1.125 -1.     2.25 ]
 [ 2.     0.     0.25  -2.     3.   ]
 [ 0.     0.     0.5   -2.     4.   ]]
which is clearly not unity. I don't know what I did wrong and really would like to know.

Your matrix A does not have full column rank: A is 4x5, so A.T * A is 5x5 but has rank at most 4. In consequence, helper is singular and not invertible (if you print helper.I you will see some very large numbers).
The solution is to compute the right inverse instead of the left inverse:
helper = A * A.T
PI = A.T * helper.I
See Wikipedia for more details.
Unless you are doing this for exercise, you could also use numpy's built-in implementation of the pseudoinverse, numpy.linalg.pinv.
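For reference, a minimal sketch using numpy.linalg.pinv, which computes the Moore-Penrose pseudoinverse via the SVD and therefore also handles rank-deficient matrices:
import numpy

A = numpy.random.random_sample((4, 5))
PI = numpy.linalg.pinv(A)                       # works regardless of which rank is full
print(numpy.allclose(A.dot(PI), numpy.eye(4)))  # right inverse for this full-row-rank A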
edit
>>> numpy.random.seed(42)
>>> a = mat(numpy.random.random_sample((3, 4))) # smaller matrix for nicer output
>>> h = a * a.T
>>> h * h.I
matrix([[  1.00000000e+00,   1.33226763e-15,   0.00000000e+00],
        [ -1.77635684e-15,   1.00000000e+00,   0.00000000e+00],
        [  0.00000000e+00,   1.33226763e-15,   1.00000000e+00]])
Up to numeric precision this looks pretty much like an identity matrix to me.
The problem in your code is that A.T * A is not invertible. If you try to invert such a matrix you get wrong results.
In contrast, A * A.T is invertible.
You have two options:
1) change the direction of multiplication (use A * A.T), or
2) call pseudoinverse(A.T).
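For example, a quick check of option 1, assuming the matrix A from the question (a random 4x5 A has full row rank almost surely, so A * A.T is invertible):
helper = A * A.T          # 4x4, invertible when A has full row rank
PI = A.T * helper.I       # right inverse of A
print(numpy.allclose(A * PI, numpy.eye(4)))  # True, up to rounding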


Problems with pySpark columnSimilarities

tl;dr
How do I use pySpark to compare the similarity of rows?
I have a numpy array where I would like to compare the similarities of each row to one another
print(pdArray)
# [[ 0.  1.  0. ...,  0.  0.  0.]
#  [ 0.  0.  3. ...,  0.  0.  0.]
#  [ 0.  0.  0. ...,  0.  0.  7.]
#  ...,
#  [ 5.  0.  0. ...,  0.  1.  0.]
#  [ 0.  6.  0. ...,  0.  0.  3.]
#  [ 0.  0.  0. ...,  2.  0.  0.]]
Using scikit-learn I can compute cosine similarities as follows...
from sklearn.metrics.pairwise import cosine_similarity
similarities = cosine_similarity(pdArray)
similarities.shape
# (475, 475)
print(similarities)
array([[  1.00000000e+00,   1.52204908e-03,   8.71545594e-02, ...,
          3.97681174e-04,   7.02593036e-04,   9.90472253e-04],
       [  1.52204908e-03,   1.00000000e+00,   3.96760121e-04, ...,
          4.04724413e-03,   3.65324300e-03,   5.63519735e-04],
       [  8.71545594e-02,   3.96760121e-04,   1.00000000e+00, ...,
          2.62367141e-04,   1.87878869e-03,   8.63876439e-06],
       ...,
       [  3.97681174e-04,   4.04724413e-03,   2.62367141e-04, ...,
          1.00000000e+00,   8.05217639e-01,   2.69724702e-03],
       [  7.02593036e-04,   3.65324300e-03,   1.87878869e-03, ...,
          8.05217639e-01,   1.00000000e+00,   3.00229809e-03],
       [  9.90472253e-04,   5.63519735e-04,   8.63876439e-06, ...,
          2.69724702e-03,   3.00229809e-03,   1.00000000e+00]])
As I am looking to expand to much larger sets than my original (475 row) matrix, I am looking at using Spark via pySpark:
pyspark.__version__
# '2.2.0'
from pyspark.mllib.linalg.distributed import RowMatrix
#load data into spark
tempSpark = sc.parallelize(pdArray)
mat = RowMatrix(tempSpark)
# Calculate exact similarities
exact = mat.columnSimilarities()
exact.entries.first()
# MatrixEntry(128, 211, 0.004969676943490767)
# Now when I get the data out I do the following...
# Convert to a RowMatrix.
rowMat = exact.toRowMatrix()
t_3 = rowMat.rows.collect()
a_3 = np.array([(x.toArray()) for x in t_3])
a_3.shape
# (488, 749)
As you can see, the shape of the data is a) no longer square (which it should be) and b) has dimensions which do not match the original number of rows. It does match (in part) the number of features in each row (len(pdArray[0]) = 749), but I don't know where the 488 is coming from.
The presence of 749 makes me think I need to transpose my data first. Is that correct?
Finally, if this is the case, why are the dimensions not (749, 749)?
First, the columnSimilarities method only returns the off diagonal entries of the upper triangular portion of the similarity matrix. With the absence of the 1's along the diagonal, you may have 0's for entire rows in the resulting similarity matrix.
Second, a pyspark RowMatrix doesn't have meaningful row indices. So essentially when converting from a CoordinateMatrix to a RowMatrix, the i value in the MatrixEntry is being mapped to whatever is convenient (probably some incrementing index). So what is likely happening is the rows that have all 0's are simply being ignored and the matrix is being squished vertically when you convert it to a RowMatrix.
It probably makes sense to inspect the dimension of the similarity matrix immediately after computation with the columnSimilarities method. You can do this by using the numRows() and the numCols() methods.
print(exact.numRows(),exact.numCols())
Other than that, it does sound like you need to transpose your matrix to get the correct vector similarities. Furthermore, if there is some reason that you need this in a RowMatrix-like form, you could try using an IndexedRowMatrix which does have meaningful row indices and would preserve the row index from the original CoordinateMatrix upon conversion.
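A minimal sketch of that suggestion, assuming the pdArray and SparkContext sc from the question; the transpose makes columnSimilarities() compare the original rows, and toIndexedRowMatrix() keeps meaningful row indices:
from pyspark.mllib.linalg.distributed import RowMatrix

# Transpose first: columnSimilarities() works column-wise
matT = RowMatrix(sc.parallelize(pdArray.T))
sims = matT.columnSimilarities()         # CoordinateMatrix, upper-triangular entries only
print(sims.numRows(), sims.numCols())    # should now be (475, 475)
indexedRows = sims.toIndexedRowMatrix()  # preserves the row indices, unlike toRowMatrix()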

Can numpy diagonalise a skew-symmetric matrix with real arithmetic?

Any skew-symmetric matrix (A^T = -A) can be turned into a Hermitian matrix (iA) and diagonalised with complex numbers. But it is also possible to bring it into block-diagonal form with a special orthogonal transformation and find its eigenvalues using only real arithmetic. Is this implemented anywhere in numpy?
Let's take a look at the dgeev() function of the LAPACK library. This routine computes the eigenvalues of any real double-precision square matrix. Moreover, this routine is right behind the python function numpy.linalg.eigvals() of the numpy library.
The method used by dgeev() is described in the documentation of LAPACK. It requires the reduction of the matrix A to its real Schur form S.
Any real square matrix A can be expressed as:
A=QSQ^t
where:
Q is a real orthogonal matrix: QQ^t=I
S is a real block upper triangular matrix. The blocks on the diagonal of S are of size 1×1 or 2×2.
Indeed, if A is skew-symmetric, this decomposition seems really close to a block diagonal form obtained by a special orthogonal transformation of A. Moreover, it is easy to see that the Schur form S of the skew-symmetric matrix A is ... skew-symmetric!
Indeed, let's compute the transpose of S:
S^t=(Q^tAQ)^t
S^t=Q^t(Q^tA)^t
S^t=Q^tA^tQ
S^t=Q^t(-A)Q
S^t=-Q^tAQ
S^t=-S
Hence, if Q is special orthogonal (det(Q)=1), S is a block diagonal form obtained by a special orthogonal transformation. Otherwise, a special orthogonal matrix P can be computed by permuting the first two columns of Q, and another Schur form Sd of the matrix A is obtained by changing the sign of S_{12} and S_{21}; indeed, A = P Sd P^t. Then Sd is a block diagonal form of A obtained by a special orthogonal transformation.
In the end, even if numpy.linalg.eigvals() applied to a real matrix returns complex numbers, there is little complex computation involved in the process!
If you just want to compute the real Schur form, use the function scipy.linalg.schur() with argument output='real'.
Just a piece of code to check that:
import numpy as np
import scipy.linalg as la
a=np.random.rand(4,4)
a=a-np.transpose(a)
print "a= "
print a
#eigenvalue
w, v =np.linalg.eig(a)
print "eigenvalue "
print w
print "eigenvector "
print v
# Schur decomposition
#import scipy
#print scipy.version.version
t,z=la.schur(a, output='real', lwork=None, overwrite_a=True, sort=None, check_finite=True)
print "schur form "
print t
print "orthogonal matrix "
print z
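Building on this snippet, a hedged sketch of the sign fix described earlier, using the t and z returned by la.schur(): if det(z) is -1, swap the first two columns of z and permute the Schur form accordingly (for a leading 2x2 antisymmetric block this just flips the signs of t[0,1] and t[1,0]):
if np.linalg.det(z) < 0:
    z[:, [0, 1]] = z[:, [1, 0]]   # P: Q with its first two columns permuted
    t[[0, 1], :] = t[[1, 0], :]   # Sd: swap rows 1 and 2 of S ...
    t[:, [0, 1]] = t[:, [1, 0]]   # ... and columns 1 and 2 of S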
Yes, you can do it by sticking a unitary transformation in the middle of the product, hence we get
A = V * U * V^{-1} = (V * T) * (T' * U * T) * (T' * V^{-1}), where T is unitary (T * T' = I) and T' is its conjugate transpose.
Once you get the idea you can optimize the code by tiling things, but let's do it the naive way by forming T explicitly.
If the matrix size is even then all eigenvalues come in complex conjugate pairs. Otherwise we get an extra zero eigenvalue. The eigenvalues are guaranteed to have zero real parts, so the first thing is to clean up the noise and then order the eigenvalues such that the zeros are in the upper left corner (arbitrary choice).
n = 5
a = np.random.rand(n,n)
a=a-np.transpose(a)
[u,v] = np.linalg.eig(a)
perm = np.argsort(np.abs(np.imag(u)))
unew = 1j*np.imag(u[perm])
Obviously, we need to reorder the eigenvector matrix too to keep things equivalent.
vnew = v[:,perm]
Now so far we did nothing other than reordering the middle eigenvalue matrix in the eigenvalue decomposition. Now we switch from complex form to real block diagonal form.
First we have to know how many zero eigenvalues there are
numblocks = np.flatnonzero(unew).size // 2
num_zeros = n - (2 * numblocks)
Then we basically form another unitary transformation (complex this time) and stick it in the same way:
T = sp.linalg.block_diag(*[1.]*num_zeros,
                         np.kron(1/np.sqrt(2)*np.eye(numblocks),
                                 np.array([[1., 1j], [1, -1j]])))
Eigs = np.real(T.conj().T.dot(np.diag(unew).dot(T)))
Evecs = np.real(vnew.dot(T))
This gives you the new real valued decomposition. So the code all in one place
import numpy as np
import scipy as sp
import scipy.linalg  # makes sp.linalg available

n = 5
a = np.random.rand(n, n)
a = a - np.transpose(a)
[u, v] = np.linalg.eig(a)
perm = np.argsort(np.abs(np.imag(u)))
unew = 1j * np.imag(u[perm])
vnew = v[:, perm]  # permute the columns (eigenvectors), not the rows
numblocks = np.flatnonzero(unew).size // 2
num_zeros = n - (2 * numblocks)
T = sp.linalg.block_diag(*[1.]*num_zeros,
                         np.kron(1/np.sqrt(2)*np.eye(numblocks),
                                 np.array([[1., 1j], [1, -1j]])))
Eigs = np.real(T.conj().T.dot(np.diag(unew).dot(T)))
Evecs = np.real(vnew.dot(T))
print(np.allclose(Evecs.dot(Eigs.dot(np.linalg.inv(Evecs))) - a, np.zeros((n, n))))
gives True. Note that this is the naive way of obtaining the real spectral decomposition. There are lots of places where you need to keep track of numerical error accumulation.
Example output
Eigs
Out[379]:
array([[ 0.        ,  0.        ,  0.        ,  0.        ,  0.        ],
       [ 0.        ,  0.        , -0.61882847,  0.        ,  0.        ],
       [ 0.        ,  0.61882847,  0.        ,  0.        ,  0.        ],
       [ 0.        ,  0.        ,  0.        ,  0.        , -1.05097581],
       [ 0.        ,  0.        ,  0.        ,  1.05097581,  0.        ]])
Evecs
Out[380]:
array([[-0.15419078, -0.27710323, -0.39594838,  0.05427001, -0.51566173],
       [-0.22985364,  0.0834649 ,  0.23147553, -0.085043  , -0.74279915],
       [ 0.63465436,  0.49265672,  0.        ,  0.20226271, -0.38686576],
       [-0.02610706,  0.60684296, -0.17832525,  0.23822511,  0.18076858],
       [-0.14115513, -0.23511356,  0.08856671,  0.94454277,  0.        ]])

Efficient way of taking Logarithm function in a sparse matrix

I have a big sparse matrix and I want to take the log base 4 of every element in it.
I tried to use numpy.log(), but it doesn't work on sparse matrices.
I can also take the logarithm row by row, then overwrite the old row with the new one:
# Assume A is a sparse matrix (Linked List format) with float values as data
# It is only for one row
import numpy as np
c = np.log(A.getrow(0)) / np.log(4)
A[0, :] = c
This was not as quick as I'd expected. Is there a faster way to do this?
You can modify the data attribute directly:
>>> import numpy as np
>>> from scipy.sparse import coo_matrix
>>> a = np.array([[5, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 2, 0, 0]])
>>> coo = coo_matrix(a)
>>> coo.data
array([5, 2])
>>> coo.data = np.log(coo.data)
>>> coo.data
array([ 1.60943791,  0.69314718])
>>> coo.todense()
matrix([[ 1.60943791,  0.        ,  0.        ,  0.        ,  0.        ,
          0.        ,  0.        ],
        [ 0.        ,  0.        ,  0.        ,  0.        ,  0.69314718,
          0.        ,  0.        ]])
Note that this doesn't work properly if the sparse format has repeated elements (which is valid in the COO format); it'll take the logs individually, and log(a) + log(b) != log(a + b). You probably want to convert to CSR or CSC first (which is fast) to avoid this problem.
You'll also have to add checks if the sparse matrix is in a different format, of course. And if you don't want to modify the matrix in-place, just construct a new sparse matrix as you did in your answer, but without adding 3 because that's completely unnecessary here.
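A hedged sketch putting those pieces together (log4_nonzero is a hypothetical helper name; converting to CSR sums any duplicate COO entries first, and the copy leaves the original matrix untouched):
import numpy as np
import scipy.sparse as sp

def log4_nonzero(A):
    # take log base 4 of the nonzero entries only (log(0) would be -inf)
    B = A.tocsr().copy()                  # CSR conversion sums duplicate COO entries
    B.data = np.log(B.data) / np.log(4.0)
    return B

A = sp.coo_matrix(np.array([[5., 0., 0.], [0., 0., 2.]]))
print(log4_nonzero(A).todense())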
I think I solved it in a very easy way. It is very strange that no one could answer immediately.
# Let A be a COO_matrix
import numpy as np
from scipy.sparse import coo_matrix
new_data = np.log(A.data + 3) / np.log(4)  # the 3 is not so important; it can be 1 too
A = coo_matrix((new_data, (A.row, A.col)), shape=A.shape)

Python (robot module): converting a homogeneous transformation (rotation) matrix to Euler or RPY angles

I'm using the robotics toolbox (c.q. the robot module) in Python (http://code.google.com/p/robotics-toolbox-python/) and I'm utterly confused about how to interpret the following conversion results (I've done things like this in the past and could always work them out...).
A simple rotation by phi about the x-axis (phi ≈ 0.08836 rad here) produces the following (3x3) rotation matrix:
R =
[[ 1.          0.          0.        ]
 [ 0.          0.99609879 -0.08824514]
 [-0.          0.08824514  0.99609879]]
(where 0.99609879 = cos(phi) and 0.08824514 = sin(phi))
with corresponding (4x4) homogeneous transformation matrix:
T = | R 0 |
    | 0 1 |
[[ 1.          0.          0.          0.        ]
 [ 0.          0.99609879 -0.08824514  0.        ]
 [-0.          0.08824514  0.99609879  0.        ]
 [ 0.          0.          0.          1.        ]]
Conversion of T into angle representation produces the following:
RPY (Roll, pitch, yaw) angles (around z, y and x axes, respectively, ... I presume):
print robot.tr2rpy(T)
[[ 0. 0. 0.08836007]]
Problem: How can the x-rotation be the last element (rather than the first)...?
Further:
Euler angles (around x, y and z axes, respectively, ... I presume):
print robot.tr2eul(T)
[[-1.57079633 0.08836007 1.57079633]]
( = [[ -PI/2, phi, PI/2 ]] )
Problem: My interpretation (sequential rotation around x,y,z axes) tells me the result is dead wrong...?
What am I missing? Thanks.
-- Henk
Solved:
With successive angles, a[0],a[1],a[2],
1) For Euler angles these are successive rotations around Z-Y-Z axes;
2) For RPY angles these are successive Y(aw)-P(itch)-R(oll) rotations (i.e. around the Z, Y and X axes).
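A plain-numpy sanity check of 1), with no toolbox required: composing Z-Y-Z rotations with the angles returned by tr2eul reproduces a pure x-rotation by phi.
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1., 0., 0.], [0., c, -s], [0., s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0., s], [0., 1., 0.], [-s, 0., c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

phi = 0.08836007
# Z-Y-Z Euler angles [-pi/2, phi, pi/2] give exactly an x-rotation by phi:
print(np.allclose(Rz(-np.pi/2).dot(Ry(phi)).dot(Rz(np.pi/2)), Rx(phi)))  # True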

Euclidean Distances between points

I have an array of points in numpy:
points = rand(dim, n_points)
And I want to:
Calculate all the L2 norms (Euclidean distances) between a certain point and all other points.
Calculate all pairwise distances.
And preferably do it all in numpy, with no for loops. How can one do it?
If you're willing to use SciPy, the scipy.spatial.distance module (the functions cdist and/or pdist) does exactly what you want, with all the looping done in C. You can do it with broadcasting too, but there's some extra memory overhead.
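For example, a minimal sketch with scipy; note that cdist/pdist expect one observation per row, so the (dim, n_points) array from the question needs a transpose, and squareform expands pdist's condensed output to the full square matrix:
import numpy as np
from scipy.spatial.distance import cdist, pdist, squareform

points = np.random.rand(3, 5)    # (dim, n_points) layout as in the question
pts = points.T                   # scipy wants one point per row
d0 = cdist(pts[:1], pts)[0]      # distances from point 0 to every point
d_all = squareform(pdist(pts))   # full (5, 5) pairwise distance matrix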
This might help with the second part:
import numpy as np
from numpy import *
p=rand(3,4) # this is column-wise so each vector has length 3
sqrt(sum((p[:, np.newaxis, :] - p[:, :, np.newaxis])**2, axis=0))
which gives
array([[ 0.        ,  0.37355868,  0.64896708,  1.14974483],
       [ 0.37355868,  0.        ,  0.6277216 ,  1.19625254],
       [ 0.64896708,  0.6277216 ,  0.        ,  0.77465192],
       [ 1.14974483,  1.19625254,  0.77465192,  0.        ]])
if p was
array([[ 0.46193242,  0.11934744,  0.3836483 ,  0.84897951],
       [ 0.19102709,  0.33050367,  0.36382587,  0.96880535],
       [ 0.84963349,  0.79740414,  0.22901247,  0.09652746]])
and you can check one of the entries via
sqrt(sum((p[:,0] - p[:,2])**2))
0.64896708223796884
The trick is to put newaxis and then do broadcasting.
Good luck!
