Compute Gradient of overdefined Plane - python

I want to compute the gradient (direction and magnitude) of an overdefined plane (> 3 points), for example four (x, y, z) coordinates:
[0, 0, 1], [1, 0, 0], [0, 1, 1], [1, 1, 0]
My code for computing the gradient looks like this (using singular value decomposition, adapted from this post):
import numpy as np

def regr(X):
    y = np.average(X, axis=0)
    Xm = X - y
    cov = (1. / X.shape[0]) * np.matmul(Xm.T, Xm)  # Covariance matrix
    u, s, v = np.linalg.svd(cov)                   # Singular value decomposition
    return u[:, 1]
However as a result I get:
[0, 1, 0]
which represents neither the gradient nor the normal vector. The result should look like this:
[sqrt(2)/2, 0, -sqrt(2)/2]
Any ideas why I am getting a wrong vector?
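For reference, a sketch of what the columns of u mean for these four sample points may help: np.linalg.svd returns singular values in decreasing order, so u[:, 1] is the middle in-plane direction, which is exactly the [0, 1, 0] obtained above.

```python
import numpy as np

# The four sample points all lie on the plane x + z = 1.
X = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 1], [1, 1, 0]], dtype=float)
Xm = X - X.mean(axis=0)
cov = Xm.T @ Xm / X.shape[0]

# Singular values come back in decreasing order, so:
#   u[:, 0]  -> in-plane direction of largest variance (steepest change of z)
#   u[:, 1]  -> remaining in-plane direction (the [0, 1, 0] seen above)
#   u[:, -1] -> smallest singular value: the plane normal (up to sign)
u, s, vT = np.linalg.svd(cov)
steepest = u[:, 0]   # ~ +/- [sqrt(2)/2, 0, -sqrt(2)/2]
normal = u[:, -1]    # ~ +/- [sqrt(2)/2, 0,  sqrt(2)/2]
```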

You can use the function numpy.gradient for that. Here is some math background on how it works.
In short:
The directional derivative will be the dot product of the gradient with the (unit normalized) vector of the direction.
An easier solution is to use numdifftools, especially the convenience function directionaldiff.
An example can be found here. And if you look at the source code you can see the relation to the math mentioned above.
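As a minimal sketch of that dot-product relation (the function f(x, y) = x^2 + y^2 and the evaluation point are invented for illustration, and its analytic gradient is used directly):

```python
import numpy as np

# f(x, y) = x**2 + y**2 has analytic gradient (2x, 2y).
point = np.array([1.0, 2.0])
grad = 2 * point                                # gradient at (1, 2): [2, 4]
direction = np.array([1.0, 1.0])
d_unit = direction / np.linalg.norm(direction)  # unit-normalize the direction
dir_deriv = grad @ d_unit                       # directional derivative: 6/sqrt(2)
```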

Related

How to apply a transformation matrix to the plane defined by the origin and normal

I have a plane defined by an origin (a point) and a normal. I need to apply a 4-by-4 transformation matrix to it. How do I do this correctly?
This Wikipedia article is a helpful starting point.
If you are dealing with three-dimensional space, a 4-by-4 transformation matrix is most likely the representation of an affine transformation (check this Wikipedia link).
To apply this transformation, you would first represent a point on the plane in a 4x1 homogeneous representation (x, y, z, 1), where x, y, and z are the point's coordinates and the trailing 1 marks it as a position vector.
Next, you would multiply this vector by the transformation matrix to obtain a new 4x1 vector, which represents the new position of the plane after the transformation.
The normal vector, however, should not be affected by the translation part of the transformation matrix: a normal represents the orientation of a surface, not its position. Its homogeneous representation is therefore (x, y, z, 0).
Again, you would multiply this vector by the transformation matrix to obtain a new 4x1 vector, which represents the new orientation of the plane after the transformation.
In both cases, only the top three elements of the resulting vectors describe the new origin and the new normal, i.e. the new plane.
This is an example in Python:
import numpy as np

# Original plane
o = np.array([0, 0, 0, 1])
n = np.array([0, 0, 1])

# Transformation matrix
T = np.array([[1, 0, 0, 2],
              [0, 1, 0, 3],
              [0, 0, 1, 4],
              [0, 0, 0, 1]])

# Apply transformation to the origin
o_new = T @ o

# Apply transformation to the normal
n_new = T[:3, :3] @ n

print("New origin:", o_new[:3])
print("New normal:", n_new)
Output:
New origin: [2 3 4]
New normal: [0 0 1]
Note: n_new = T[:3, :3] @ n is equivalent to giving n a fourth component of 0 and computing n_new = (T @ n)[:3].
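Since the example above uses a pure translation, the normal comes back unchanged. A variant with a rotation included (the 90-degree angle and the translation are made up for illustration) shows the normal ignoring the translation but following the rotation:

```python
import numpy as np

# Rotate 90 degrees about the x-axis, then translate by (2, 3, 4).
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
T = np.array([[1, 0,  0, 2],
              [0, c, -s, 3],
              [0, s,  c, 4],
              [0, 0,  0, 1]])

o = np.array([0, 0, 0, 1])   # homogeneous point (w = 1): translation applies
n = np.array([0, 0, 1, 0])   # homogeneous direction (w = 0): translation drops out

o_new = (T @ o)[:3]          # [2, 3, 4]
n_new = (T @ n)[:3]          # rotated normal, approximately [0, -1, 0]
```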

Rotation by theta in the 2D Plane in Python

Is there a way to implement this in a few lines? I was thinking a for loop would be needed, but I'm not certain. Here is the code and the required definition:
# Construct a matrix representation rotation by theta in the 2D plane.
# The relevant math module functions are already imported at the top of this file.
# Round each entry of the result to 6 decimal places before returning.
# This is a static method that builds the contents and calls the Matrix constructor.
def rotation(theta):
    if theta == 0:                  return Matrix([[1, 0], [0, 1]])
    if theta == 3.141592653589793:  return Matrix([[-1, 0], [0, -1]])  # pi
    if theta == 1.5707963267948966: return Matrix([[0, -1], [1, 0]])   # pi/2
I attached the if statements to show the test cases and which values should return which matrix. Is this possible?
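A sketch of a general implementation (assuming the Matrix class from the question exists; only the raw rows that would be passed to its constructor are built here, so the snippet stays self-contained):

```python
import math

def rotation_rows(theta):
    # Standard 2D rotation matrix entries, rounded to 6 decimal places
    # as the assignment requires; wrap the result in Matrix(...) as needed.
    c = round(math.cos(theta), 6)
    s = round(math.sin(theta), 6)
    return [[c, -s], [s, c]]
```

For example, rotation_rows(math.pi / 2) gives [[0.0, -1.0], [1.0, 0.0]], matching the third test case above.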

How to keep a consistent order to eigenvalues of a matrix using numpy or scipy?

I am trying to plot a set of eigenvalues of a matrix against a detuning factor epsilon. I want the plot to look like this:
(image: Matlab plot using eig())
However, when I use np.linalg.eigvals I get the following:
(image: Python eigvals plot)
I also tried np.linalg.eigvalsh, which gave this:
(image: Python eigvalsh plot)
The problem seems to be how the eigenvalues are ordered when they are returned from the function. Is there any way to produce plot lines like in the first (Matlab) image? I have also tried the equivalent scipy functions, which gave the same results as numpy.
Here is a copy of my code:
import numpy as np
import matplotlib.pyplot as plt

mu = 57.88e-3
eps = np.linspace(-0.26, 0.26, 100)

def Model_g1g2(g1, g2, B, t):
    E = np.empty([len(eps), 5])  # Initialise array for eigenvalues
    for i in range(len(eps)):
        # Matrix A
        A = np.array([[(-eps[i] + (g1 + g2)*mu*B)/2, 0, 0, 0, 0],
                      [0, -eps[i]/2, 0, (g1 - g2)*mu*B, 0],
                      [0, 0, (-eps[i] - (g1 + g2)*mu*B)/2, 0, 0],
                      [0, (g1 - g2)*mu*B/2, 0, -eps[i]/2, t],
                      [0, 0, 0, t, eps[i]/2]])
        E[i, :] = np.linalg.eigvals(A)  # Store eigenvalues
    return E

E = Model_g1g2(1, 4, 0.5, 0.06)

# Produce plot
for i in range(5):
    plt.plot(eps, np.real(E[:, i]))
plt.show()
In Matlab, the eigenvalues are sorted.
Change E[i,:] = np.linalg.eigvals(A) to E[i,:] = sorted(np.linalg.eigvals(A)), then you'll get what you want.
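The effect can be sketched on a small made-up 2x2 example: sorting each row of eigenvalues makes each column of E a continuous branch suitable for plotting (np.sort on the real parts also avoids the TypeError that sorted() raises on complex values):

```python
import numpy as np

eps = np.linspace(-1.0, 1.0, 5)
E = np.empty((len(eps), 2))
for i, e in enumerate(eps):
    A = np.array([[e, 0.3],
                  [0.3, -e]])                    # toy symmetric matrix
    E[i, :] = np.sort(np.linalg.eigvals(A).real)  # ascending within each row
# Column E[:, 0] traces the lower branch -sqrt(e**2 + 0.09),
# column E[:, 1] the upper branch +sqrt(e**2 + 0.09).
```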

SVD for recommendation engine

I'm trying to build a toy recommendation engine to wrap my mind around Singular Value Decomposition (SVD). I've read enough content to understand the motivations and intuition behind the actual decomposition of the matrix A (a user x movie matrix).
I need to know more about what goes on after that.
from numpy.linalg import svd
import numpy as np

A = np.matrix([
    [0, 0, 0, 4, 5],
    [0, 4, 3, 0, 0],
    ...
])
U, S, V = svd(A)
k = 5  # dimension reduction
A_k = U[:, :k] * np.diag(S[:k]) * V[:k, :]
Three Questions:
Do the values of the matrix A_k represent the predicted/approximate ratings?
What role, and which steps, does cosine similarity play in the recommendation?
And finally, I'm using mean absolute error (MAE) to calculate my error, but which values am I comparing? Something like MAE(A, A_k), or something else?
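As a sketch of what the truncated reconstruction does (the ratings below are invented; plain ndarrays and @ are used instead of np.matrix, which is deprecated):

```python
import numpy as np

A = np.array([[0., 0., 0., 4., 5.],
              [0., 4., 3., 0., 0.],
              [1., 0., 0., 5., 4.]])   # made-up user x movie ratings

U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                  # keep only the top-k singular values
A_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]
# A_k is the best rank-k approximation of A; its entries are the
# smoothed/approximate ratings, including for cells that were 0 in A.
```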

Working Example for Mahalanobis Distance Measure

I need to measure the distance between two n-dimensional vectors. It seems that the Mahalanobis distance is a good choice here, so I want to give it a try.
My code looks like this:
import numpy as np
import scipy.spatial.distance

x = [19, 8, 0, 0, 2, 1, 0, 0, 18, 0, 1673, 9, 218]
y = [17, 6, 0, 0, 1, 2, 0, 0, 8, 0, 984, 9, 30]

scipy.spatial.distance.mahalanobis(x, y, np.linalg.inv(np.cov(x, y)))
But I get this error message:
/usr/lib/python2.7/dist-packages/scipy/spatial/distance.pyc in mahalanobis(u, v, VI)
498 v = np.asarray(v, order='c')
499 VI = np.asarray(VI, order='c')
--> 500 return np.sqrt(np.dot(np.dot((u-v),VI),(u-v).T).sum())
501
502 def chebyshev(u, v):
ValueError: matrices are not aligned
The scipy docs say that VI is the inverse of the covariance matrix, and I think np.cov gives the covariance matrix and np.linalg.inv the inverse of a matrix.
But I see what the problem is (matrices are not aligned): the matrix VI has the wrong dimensions (2x2 instead of 13x13).
So a possible solution is to do this:
VI = np.linalg.inv(np.cov(np.vstack((x,y)).T))
but unfortunately the determinant of np.cov(np.vstack((x,y)).T) is 0, which means that an inverse matrix does not exist.
So how can I use the Mahalanobis distance measure when I can't even compute the covariance matrix?
Are you sure that the Mahalanobis distance is right for your application? According to Wikipedia you need a set of points to estimate the covariance matrix, not just two vectors. You can then compute distances of vectors from the set's center.
You don't have a sample set with which to calculate a covariance. You probably just want the Euclidean distance here (np.linalg.norm(x - y)). What is the bigger picture of what you are trying to achieve?
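A sketch of the distinction (the sample set here is randomly generated for illustration): the Mahalanobis distance is defined against a sample set's covariance, while two lone vectors only support a Euclidean distance:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # hypothetical sample set in R^3
mean = X.mean(axis=0)
VI = np.linalg.inv(np.cov(X, rowvar=False))   # inverse covariance, 3x3

u = np.array([1.0, 0.5, -0.2])                # vector to measure
diff = u - mean
d_mahal = np.sqrt(diff @ VI @ diff)           # distance of u from the set's center
d_euclid = np.linalg.norm(diff)               # plain Euclidean, for comparison
```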
