orthogonal projection with numpy - python

I have a list of 3D points for which I calculate a plane with the numpy.linalg.lstsq method. Now I want to do an orthogonal projection of each point onto this plane, but I can't find my mistake:
from numpy.linalg import lstsq

def VecProduct(vek1, vek2):
    return (vek1[0]*vek2[0] + vek1[1]*vek2[1] + vek1[2]*vek2[2])

def CalcPlane(x, y, z):
    # x, y and z are given in lists
    n = len(x)
    sum_x = sum_y = sum_z = sum_xx = sum_yy = sum_xy = sum_xz = sum_yz = 0
    for i in range(n):
        sum_x += x[i]
        sum_y += y[i]
        sum_z += z[i]
        sum_xx += x[i]*x[i]
        sum_yy += y[i]*y[i]
        sum_xy += x[i]*y[i]
        sum_xz += x[i]*z[i]
        sum_yz += y[i]*z[i]
    M = ([sum_xx, sum_xy, sum_x], [sum_xy, sum_yy, sum_y], [sum_x, sum_y, n])
    b = (sum_xz, sum_yz, sum_z)
    a, b, c = lstsq(M, b)[0]
    '''
    z = a*x + b*y + c
    a*x = z - b*y - c
    x = -(b/a)*y + (1/a)*z - c/a
    '''
    r0 = [-c/a, 0, 0]
    u = [-b/a, 1, 0]
    v = [1/a, 0, 1]
    xn = []
    yn = []
    zn = []
    # orthogonalize u and v with Gram-Schmidt to get u and w
    uu = VecProduct(u, u)
    vu = VecProduct(v, u)
    fak0 = vu/uu
    erg0 = [val*fak0 for val in u]
    w = [v[0]-erg0[0],
         v[1]-erg0[1],
         v[2]-erg0[2]]
    ww = VecProduct(w, w)
    # P_new = ((x*u)/(u*u))*u + ((x*w)/(w*w))*w
    for i in range(len(x)):
        xu = VecProduct([x[i], y[i], z[i]], u)
        xw = VecProduct([x[i], y[i], z[i]], w)
        fak1 = xu/uu
        fak2 = xw/ww
        erg1 = [val*fak1 for val in u]
        erg2 = [val*fak2 for val in w]
        erg = [erg1[0]+erg2[0], erg1[1]+erg2[1], erg1[2]+erg2[2]]
        erg[0] += r0[0]
        xn.append(erg[0])
        yn.append(erg[1])
        zn.append(erg[2])
    return (xn, yn, zn)
This returns a list of points that all lie in a plane, but when I display them, they are not at the positions they should be.
I believe there is already a built-in method to solve this problem, but I couldn't find any =(

You are making very poor use of np.linalg.lstsq, since you are feeding it a precomputed 3x3 matrix instead of letting it do the job. I would do it like this:
import numpy as np

def calc_plane(x, y, z):
    a = np.column_stack((x, y, np.ones_like(x)))
    return np.linalg.lstsq(a, z)[0]
>>> x = np.random.rand(1000)
>>> y = np.random.rand(1000)
>>> z = 4*x + 5*y + 7 + np.random.rand(1000)*.1
>>> calc_plane(x, y, z)
array([ 3.99795126, 5.00233364, 7.05007326])
It is actually more convenient to use a formula for your plane that doesn't depend on the coefficient of z not being zero, i.e. use a*x + b*y + c*z = 1. You can similarly compute a, b and c by doing:
def calc_plane_bis(x, y, z):
    a = np.column_stack((x, y, z))
    return np.linalg.lstsq(a, np.ones_like(x))[0]
>>> calc_plane_bis(x, y, z)
array([-0.56732299, -0.70949543, 0.14185393])
To project points onto the plane, using my alternative equation, note that the vector (a, b, c) is perpendicular to the plane. It is easy to check that the point (a, b, c) / (a**2 + b**2 + c**2) lies on the plane, so projection can be done by referencing all points to that point on the plane, projecting the points onto the normal vector, subtracting that projection from the points, and then referencing them back to the origin. You could do that as follows:
def project_points(x, y, z, a, b, c):
    """
    Projects the points with coordinates x, y, z onto the plane
    defined by a*x + b*y + c*z = 1
    """
    vector_norm = a*a + b*b + c*c
    normal_vector = np.array([a, b, c]) / np.sqrt(vector_norm)
    point_in_plane = np.array([a, b, c]) / vector_norm
    points = np.column_stack((x, y, z))
    points_from_point_in_plane = points - point_in_plane
    proj_onto_normal_vector = np.dot(points_from_point_in_plane,
                                     normal_vector)
    proj_onto_plane = (points_from_point_in_plane -
                       proj_onto_normal_vector[:, None]*normal_vector)
    return point_in_plane + proj_onto_plane
So now you can do something like:
>>> project_points(x, y, z, *calc_plane_bis(x, y, z))
array([[ 0.13138012, 0.76009389, 11.37555123],
[ 0.71096929, 0.68711773, 13.32843506],
[ 0.14889398, 0.74404116, 11.36534936],
...,
[ 0.85975642, 0.4827624 , 12.90197969],
[ 0.48364383, 0.2963717 , 10.46636903],
[ 0.81596472, 0.45273681, 12.57679188]])
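As a quick sanity check (my addition, not part of the original answer), the projected points should satisfy the plane equation a*x + b*y + c*z = 1 up to floating-point error, continuing the session above:
a, b, c = calc_plane_bis(x, y, z)
proj = project_points(x, y, z, a, b, c)
residual = proj.dot([a, b, c]) - 1   # should be ~0 for every point
print(np.max(np.abs(residual)))      # expected to be on the order of machine precision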

Doing everything with matrices is one option.
If you stack your points as row vectors in a matrix X, and y is a vector, then the parameter vector beta for the least-squares solution is:
import numpy as np
beta = np.linalg.inv(X.T.dot(X)).dot(X.T.dot(y))
but there's an easier way if we want to do projections: QR decomposition gives us an orthonormal basis, the columns of Q, and Q.T acts as the projection onto that basis. So we can first form QR, then get beta, then use Q.T to project the points.
QR:
Q, R = np.linalg.qr(X)
beta:
# use R to solve for beta
# R is upper triangular, so we can use a triangular solver:
from scipy.linalg import solve_triangular
beta = solve_triangular(R, Q.T.dot(y))
So now we have beta, and we can project the points using Q.T very simply:
X_proj = Q.T.dot(X)
That's it!
If you want more information and graphical piccies and stuff, I made a whole bunch of notes, whilst doing something similar, at: https://github.com/hughperkins/selfstudy-IBP/blob/9dedfbb93f4320ac1bfef60db089ae0dba5e79f6/test_bases.ipynb
(Edit: note that if you want to add a bias term, so the best fit doesn't have to pass through the origin, you can simply add an additional column of all ones to X, which acts as the bias term/feature.)
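Putting the pieces above together, here is a minimal end-to-end sketch (my own illustration, not from the original answer; it assumes X holds the x- and y-coordinates plus a bias column and y holds the z-values, as in the first answer's example):
import numpy as np
from scipy.linalg import solve_triangular

# Illustrative data: points scattered around the plane z = 4*x1 + 5*x2 + 7
rng = np.random.default_rng(0)
x1 = rng.random(1000)
x2 = rng.random(1000)
y = 4*x1 + 5*x2 + 7 + rng.random(1000)*0.1

# Design matrix with a bias column of ones
X = np.column_stack((x1, x2, np.ones_like(x1)))

# QR decomposition, then a triangular solve for the coefficients
Q, R = np.linalg.qr(X)
beta = solve_triangular(R, Q.T.dot(y))
print(beta)                    # should be close to [4, 5, 7]

# Fitted values: projection of y onto the column space of X
y_fit = Q.dot(Q.T.dot(y))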

This web page has a pretty good code example. It implements the theory expounded by Maple in NumPy quite well, as follows:
# import numpy to perform operations on vector
import numpy as np
# vector u
u = np.array([2, 5, 8])
# vector n: n is orthogonal vector to Plane P
n = np.array([1, 1, 7])
# Task: Project vector u on Plane P
# finding norm of the vector n
n_norm = np.sqrt(sum(n**2))
# Apply the formula as mentioned above
# for projecting a vector onto the orthogonal vector n
# find dot product using np.dot()
proj_of_u_on_n = (np.dot(u, n)/n_norm**2)*n
# subtract proj_of_u_on_n from u:
# this is the projection of u on Plane P
print("Projection of Vector u on Plane P is: ", u - proj_of_u_on_n)

Related

Python uniform random number generation to a triangle shape

I have three data points on which I performed a linear fit and obtained the 1 sigma uncertainty lines. Now I would like to generate 100k data points uniformly distributed between the 1 sigma error bars (the big triangle on the left side), but I do not have any idea how to do that. Here is my code:
import matplotlib.pyplot as plt
import numpy as np
import math
from scipy.optimize import curve_fit
x = np.array([339.545772, 339.545781, 339.545803])
y = np.array([-0.430843, -0.43084 , -0.430842])
def line(x, m, c):
    return m*x + c
popt, pcov = curve_fit(line,x,y)
slope = popt[0]
intercept = popt[1]
xx = np.array([326.0,343.0])
fit = line(xx,slope,intercept)
fit_plus1sigma = line(xx, slope + pcov[0,0]**0.5, intercept - pcov[1,1]**0.5)
fit_minus1sigma = line(xx, slope - pcov[0,0]**0.5, intercept + pcov[1,1]**0.5)
plt.plot(xx,fit,"C4",label="Linear fit")
plt.plot(xx,fit_plus1sigma,'g--',label=r'One sigma uncertainty')
plt.plot(xx,fit_minus1sigma,'g--')
plt.fill_between(xx, fit_plus1sigma, fit_minus1sigma, facecolor="gray", alpha=0.15)
NumPy has a numpy.random.triangular function; however, I was not able to apply it to my case, and I am not even sure if that is the right approach. I appreciate any help.
You can use this answer by Severin Pappadeux to sample uniformly in a triangle shape. You need the corner points of your triangle for that.
To find where your lines intersect you can follow this answer by Norbu Tsering. Then you just need the top-left corner and bottom-left corner coordinates of your triangle shape.
Putting all of this together, you can solve your problem like this.
Find the intersection:
# Source: https://stackoverflow.com/a/42727584/5320601
def get_intersect(a1, a2, b1, b2):
    """
    Returns the point of intersection of the lines passing through a2,a1 and b2,b1.
    a1: [x, y] a point on the first line
    a2: [x, y] another point on the first line
    b1: [x, y] a point on the second line
    b2: [x, y] another point on the second line
    """
    s = np.vstack([a1, a2, b1, b2])      # s for stacked
    h = np.hstack((s, np.ones((4, 1))))  # h for homogeneous
    l1 = np.cross(h[0], h[1])            # get first line
    l2 = np.cross(h[2], h[3])            # get second line
    x, y, z = np.cross(l1, l2)           # point of intersection
    if z == 0:                           # lines are parallel
        return (float('inf'), float('inf'))
    return (x / z, y / z)
p1 = ((xx[0], fit_plus1sigma[0]), (xx[1], fit_plus1sigma[1]))
p2 = ((xx[0], fit_minus1sigma[0]), (xx[1], fit_minus1sigma[1]))
cross = get_intersect(p1[0], p1[1], p2[0], p2[1])
This way you get your two points per line that are on it, and the intersection point, which you need to sample from within this triangle shape.
Then you can sample the points you need:
# Source: https://stackoverflow.com/a/47425047/5320601
import math
import random

def trisample(A, B, C):
    """
    Given three vertices A, B, C,
    sample a point uniformly in the triangle
    """
    r1 = random.random()
    r2 = random.random()
    s1 = math.sqrt(r1)
    x = A[0] * (1.0 - s1) + B[0] * (1.0 - r2) * s1 + C[0] * r2 * s1
    y = A[1] * (1.0 - s1) + B[1] * (1.0 - r2) * s1 + C[1] * r2 * s1
    return (x, y)

points = []
for _ in range(100000):
    points.append(trisample(p1[0], p2[0], cross))
Example picture for 1000 points:
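To reproduce such a picture, here is a minimal plotting sketch (my addition, assuming points was filled as above and matplotlib.pyplot is imported as plt as in the question):
px, py = zip(*points)
plt.scatter(px, py, s=1, alpha=0.3, label="Sampled points")
plt.legend()
plt.show()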

Points on sphere

I am new to Python. I have a sphere of radius R centred at (x0, y0, z0). Now I need to find those points which are either on the surface of the sphere or inside it, e.g. points (x1, y1, z1) which satisfy ((x1-x0)**2 + (y1-y0)**2 + (z1-z0)**2)**(1/2) <= R. I would like to print only those points' coordinates in the form of a numpy array. The output would be something like this: [[x11,y11,z11],[x12,y12,z12],...]. I have the following code so far:
import numpy as np
import math

def create_points_around_atom(number, atom_coordinates):
    n = number
    x0 = atom_coordinates[0]
    y0 = atom_coordinates[1]
    z0 = atom_coordinates[2]
    R = 1.2
    for i in range(n):
        phi = np.random.uniform(0, 2*np.pi, size=(n,))
        costheta = np.random.uniform(-1, 1, size=(n,))
        u = np.random.uniform(0, 1, size=(n,))
        theta = np.arccos(costheta)
        r = R * np.cbrt(u)
        x1 = r*np.sin(theta)*np.cos(phi)
        y1 = r*np.sin(theta)*np.sin(phi)
        z1 = r*np.cos(theta)
        dist = np.sqrt((x1-x0)**2 + (y1-y0)**2 + (z1-z0)**2)
        distance = list(dist)
        point_on_inside_sphere = []
        for j in distance:
            if j <= R:
                point_on_inside_sphere.append(j)
                print('j:', j, '\tR:', R)
        print('The list is:', point_on_inside_sphere)
        print(len(point_on_inside_sphere))
        kk = 0
        for kk in range(len(point_on_inside_sphere)):
            for jj in point_on_inside_sphere:
                xx = np.sqrt(jj**2 - y1**2 - z1**2)
                yy = np.sqrt(jj**2 - x1**2 - z1**2)
                zz = np.sqrt(jj**2 - y1**2 - x1**2)
                print("x:", xx, "y:", yy, "z:", zz)
            kk += 1
And I am running it-
create_points_around_atom(n=2,structure[1].coords)
where, structure[1].coords is a numpy array of three coordinates.
To sum up what has been discussed in the comments, and some other points:
There is no need to filter the points because u <= 1, which means np.cbrt(u) <= 1 and hence r = R * np.cbrt(u) <= R, i.e. all points will already be inside or on the surface of the sphere.
Calling np.random.uniform with size=(n,) creates an array of n elements, so there's no need to do this n times in a loop.
You are filtering distances from the atom_coordinate, but the points you are generating are centered on [0, 0, 0], because you are not adding this offset.
Passing R as an argument seems more sensible than hard-coding it.
There's no need to "pre-load" arguments in Python like one would sometimes do in C.
Since sin(theta) is non-negative over the sphere, you can directly calculate it from the costheta array using the identity cos²(x) + sin²(x) = 1.
Sample implementation:
# pass radius as an argument
def create_points_around_atom(number, center, radius):
    # generate the random quantities
    phi = np.random.uniform(0, 2*np.pi, size=(number,))
    theta_cos = np.random.uniform(-1, 1, size=(number,))
    u = np.random.uniform(0, 1, size=(number,))
    # calculate sin(theta) from cos(theta)
    theta_sin = np.sqrt(1 - theta_cos**2)
    r = radius * np.cbrt(u)
    # use list comprehension to generate the coordinate array without a loop
    # don't forget to offset by the atom's position (center)
    return np.array([
        np.array([
            center[0] + r[i] * theta_sin[i] * np.cos(phi[i]),
            center[1] + r[i] * theta_sin[i] * np.sin(phi[i]),
            center[2] + r[i] * theta_cos[i]
        ]) for i in range(number)
    ])
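For what it's worth, the same thing can be done without the per-point list comprehension by letting NumPy broadcast over whole arrays (my own variant of the implementation above, not part of the original answer):
def create_points_around_atom_vec(number, center, radius):
    # same random quantities as above, combined with plain array arithmetic
    phi = np.random.uniform(0, 2*np.pi, size=number)
    theta_cos = np.random.uniform(-1, 1, size=number)
    u = np.random.uniform(0, 1, size=number)
    theta_sin = np.sqrt(1 - theta_cos**2)
    r = radius * np.cbrt(u)
    # stack the three coordinate arrays into a (number, 3) array and add the center offset
    points = np.column_stack((r * theta_sin * np.cos(phi),
                              r * theta_sin * np.sin(phi),
                              r * theta_cos))
    return points + np.asarray(center)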

Solve a nonlinear ODE system for the Frenet frame

I have checked "python non linear ODE with 2 variables", which is not my case. Maybe my case is not called a nonlinear ODE; please correct me if so.
The question is actually about the Frenet frame, in which there are 3 vectors T(s), N(s) and B(s); the parameter is s >= 0. There are also 2 scalar functions with known formulas, t(s) and k(s). I have the initial values T(0), N(0) and B(0).
diff(T(s), s) = k(s)*N(s)
diff(N(s), s) = -k(s)*T(s) + t(s)*B(s)
diff(B(s), s) = -t(s)*N(s)
Then how can I get T(s), N(s) and B(s) numerically or symbolically?
I have checked scipy.integrate.ode, but I don't know how to pass k(s)*N(s) into its first parameter at all:
def model(z, tspan):
    T = z[0]
    N = z[1]
    B = z[2]
    dTds = k(s) * N   # how to express function k(s)?
    dNds = -k(s) * T + t(s) * B
    dBds = -t(s) * N
    return [dTds, dNds, dBds]

z = scipy.integrate.ode(model, [T0, N0, B0])
Here is code using the solve_ivp interface from SciPy (instead of odeint) to obtain a numerical solution:
import numpy as np
from scipy.integrate import solve_ivp
from scipy.integrate import cumtrapz
import matplotlib.pylab as plt

# Define the parameters as regular Python functions:
def k(s):
    return 1

def t(s):
    return 0

# The equations: dz/ds = model(s, z):
def model(s, z):
    T = z[:3]   # z is a (9,) shaped array, the concatenation of T, N and B
    N = z[3:6]
    B = z[6:]
    dTds = k(s) * N
    dNds = -k(s) * T + t(s) * B
    dBds = -t(s) * N
    return np.hstack([dTds, dNds, dBds])

T0, N0, B0 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
z0 = np.hstack([T0, N0, B0])

s_span = (0, 6)                      # start and final "time"
t_eval = np.linspace(*s_span, 100)   # define the number of points wanted in-between;
                                     # not strictly necessary, as the solver automatically
                                     # chooses its own points. It is used here to obtain a
                                     # reasonably accurate integration of the coordinates,
                                     # see the graph

# Solve:
sol = solve_ivp(model, s_span, z0, t_eval=t_eval, method='RK45')
print(sol.message)
# >> The solver successfully reached the end of the integration interval.

# Unpack the solution:
T, N, B = np.split(sol.y, 3)   # another way to unpack the z array
s = sol.t

# Bonus: integration of the tangent vector in order to get the coordinates
# to plot the curve (there is certainly a better way to do this)
coords = cumtrapz(T, x=s)

plt.plot(coords[0, :], coords[1, :])
plt.axis('equal'); plt.xlabel('x'); plt.ylabel('y')
T, N and B are vectors. Therefore, there are 9 equations to solve: z is a (9,) array.
For constant curvature and no torsion, the result is a circle:
Thanks for your example. I thought about it again and found that, since there is a formula for dZ, where Z is the matrix (T, N, B), we can calculate Z[i] = Z[i-1] + dZ[i-1]*deltaS according to the definition of the derivative. I coded this up and found that the idea solves the circle example. So:
Is Z[i] = Z[i-1] + dZ[i-1]*deltaS suitable for other ODEs? Will it fail in some situations, or do scipy.integrate.solve_ivp / scipy.integrate.ode offer an advantage over the direct use of Z[i] = Z[i-1] + dZ[i-1]*deltaS?
In my code I have to normalize Z[i] because ||Z[i]|| is not always 1. Why does that happen? Is it a floating-point error?
My answer to my own question; at least it works for the circle:
import numpy as np
from scipy.integrate import cumtrapz
import matplotlib.pylab as plt

# Define the parameters as regular Python functions:
def k(s):
    return 1

def t(s):
    return 0

def dZ(s, Z):
    return np.array(
        [k(s) * Z[1], -k(s) * Z[0] + t(s) * Z[2], -t(s) * Z[1]]
    )
T0, N0, B0 = np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1])
deltaS = 0.1                        # step to calculate dZ/ds
num = int(2*np.pi*1/deltaS) + 1     # how many points on the curve we have to calculate

T = np.zeros([num, ], dtype=object)
N = np.zeros([num, ], dtype=object)
B = np.zeros([num, ], dtype=object)
T[0] = T0
N[0] = N0
B[0] = B0

for i in range(num-1):
    temp_dZ = dZ(i*deltaS, np.array([T[i], N[i], B[i]]))
    T[i+1] = T[i] + temp_dZ[0]*deltaS
    T[i+1] = T[i+1]/np.linalg.norm(T[i+1])   # have to do this
    N[i+1] = N[i] + temp_dZ[1]*deltaS
    N[i+1] = N[i+1]/np.linalg.norm(N[i+1])
    B[i+1] = B[i] + temp_dZ[2]*deltaS
    B[i+1] = B[i+1]/np.linalg.norm(B[i+1])

coords = cumtrapz(
    [
        [i[0] for i in T], [i[1] for i in T], [i[2] for i in T]
    ],
    x=np.arange(num)*deltaS
)

plt.figure()
plt.plot(coords[0, :], coords[1, :])
plt.axis('equal'); plt.xlabel('x'); plt.ylabel('y')
plt.show()
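Regarding the normalization question above: the explicit Euler step Z[i] = Z[i-1] + dZ[i-1]*deltaS only preserves the unit length of T, N and B up to an error of order deltaS per step, so the norms drift; adaptive solvers like solve_ivp keep this error much smaller, although they do not enforce unit norm exactly either. A small sketch of how one could measure that drift (my own illustration, reusing dZ, T0, N0, B0, num and deltaS defined above, with the normalization removed):
# Hedged sketch: measure how far ||T|| drifts from 1 under a plain Euler step
Z = np.vstack([T0, N0, B0]).astype(float)    # rows: T, N, B
max_drift = 0.0
for i in range(num - 1):
    Z = Z + dZ(i*deltaS, Z)*deltaS           # Euler step, no re-normalization
    max_drift = max(max_drift, abs(np.linalg.norm(Z[0]) - 1))
print("max deviation of ||T|| from 1 without normalization:", max_drift)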
I found that the equations I listed in the first post do not work for my curve. So I read Gray A., Abbena E., Salamon S., Modern Differential Geometry of Curves and Surfaces with Mathematica (2006) and found that, for an arbitrary curve, the Frenet equations should be written as
diff(T(s), s) = ||r'|| * k(s)*N(s)
diff(N(s), s) = ||r'|| * (-k(s)*T(s) + t(s)*B(s))
diff(B(s), s) = ||r'|| * (-t(s)*N(s))
where ||r'|| (or ||r'(s)||) is diff([x(s), y(s), z(s)], s).norm()
Now the problem is somewhat different from the one in the first post, because there is no r'(s) function or discrete data array. So I think this is suitable for a new reply rather than a comment.
I ran into 2 questions while trying to solve the new equations:
How can we handle r'(s) if scipy's solve_ivp is used?
I tried to modify my solution above, but the result is totally wrong.
Thanks again.
import numpy as np
from scipy.integrate import cumtrapz
import matplotlib.pylab as plt

# Define the parameters as regular Python functions:
def k(s):
    return 1

def t(s):
    return 0

def dZ(s, Z, r_norm):
    return np.array([
        r_norm * k(s) * Z[1],
        r_norm * (-k(s) * Z[0] + t(s) * Z[2]),
        r_norm * (-t(s) * Z[1])
    ])
T0, N0, B0 = np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1])
deltaS = 0.1                        # step to calculate dZ/ds
num = int(2*np.pi*1/deltaS) + 1     # how many points on the curve we have to calculate

T = np.zeros([num, ], dtype=object)
N = np.zeros([num, ], dtype=object)
B = np.zeros([num, ], dtype=object)
R0 = N0
T[0] = T0
N[0] = N0
B[0] = B0

for i in range(num-1):
    r_norm = np.linalg.norm(R0)
    temp_dZ = dZ(i*deltaS, np.array([T[i], N[i], B[i]]), r_norm)
    T[i+1] = T[i] + temp_dZ[0]*deltaS
    T[i+1] = T[i+1]/np.linalg.norm(T[i+1])
    N[i+1] = N[i] + temp_dZ[1]*deltaS
    N[i+1] = N[i+1]/np.linalg.norm(N[i+1])
    B[i+1] = B[i] + temp_dZ[2]*deltaS
    B[i+1] = B[i+1]/np.linalg.norm(B[i+1])
    R0 = R0 + T[i]*deltaS

coords = cumtrapz(
    [
        [i[0] for i in T], [i[1] for i in T], [i[2] for i in T]
    ],
    x=np.arange(num)*deltaS
)

plt.figure()
plt.plot(coords[0, :], coords[1, :])
plt.axis('equal'); plt.xlabel('x'); plt.ylabel('y')
plt.show()
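One way to use solve_ivp with the generalized equations is to supply the speed ||r'(s)|| as an explicit function of s (it cannot be recovered from T, N and B alone) and, if desired, integrate the position alongside the frame. A minimal sketch under that assumption (my own illustration; the speed function v(s) below is a placeholder you would replace with your actual ||r'(s)||):
from scipy.integrate import solve_ivp

def v(s):
    # placeholder for ||r'(s)||; replace with the real speed of your curve
    return 1.0

def model_general(s, z):
    # z = [T, N, B, r] flattened into a (12,) array
    T, N, B, r = z[:3], z[3:6], z[6:9], z[9:]
    dT = v(s) * k(s) * N
    dN = v(s) * (-k(s) * T + t(s) * B)
    dB = v(s) * (-t(s) * N)
    dr = v(s) * T                    # the position changes along the tangent, scaled by the speed
    return np.hstack([dT, dN, dB, dr])

z0 = np.hstack([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]])
sol = solve_ivp(model_general, (0, 2*np.pi), z0, max_step=0.01)
T, N, B, r = np.split(sol.y, 4)      # r holds the curve coordinates directly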

How can I use multiple dimensional polynomials with numpy.polynomial?

I'm able to use numpy.polynomial to fit terms to 1D polynomials like f(x) = 1 + x + x^2. How can I fit multidimensional polynomials, like f(x,y) = 1 + x + x^2 + y + y x + y x^2 + y^2 + y^2 x + y^2 x^2? It looks like numpy doesn't support multidimensional polynomials at all: is that the case? In my real application, I have 5 dimensions of input and I am interested in Hermite polynomials. It looks like the polynomials in scipy.special are also only available for one dimension of input.
# One dimension of data can be fit
x = np.random.random(100)
y = np.sin(x)
params = np.polynomial.polynomial.polyfit(x, y, 6)
np.polynomial.polynomial.polyval([0, .2, .5, 1.5], params)
array([ -5.01799432e-08, 1.98669317e-01, 4.79425535e-01,
9.97606096e-01])
# When I try two dimensions, it fails.
x = np.random.random((100, 2))
y = np.sin(5 * x[:,0]) + .4 * np.sin(x[:,1])
params = np.polynomial.polynomial.polyvander2d(x, y, [6, 6])
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-13-5409f9a3e632> in <module>()
----> 1 params = np.polynomial.polynomial.polyvander2d(x, y, [6, 6])
/usr/local/lib/python2.7/site-packages/numpy/polynomial/polynomial.pyc in polyvander2d(x, y, deg)
1201 raise ValueError("degrees must be non-negative integers")
1202 degx, degy = ideg
-> 1203 x, y = np.array((x, y), copy=0) + 0.0
1204
1205 vx = polyvander(x, degx)
ValueError: could not broadcast input array from shape (100,2) into shape (100)
I got annoyed that there is no simple function for a 2D polynomial fit of arbitrary degree, so I made my own. Like the other answers, it uses lstsq to find the best coefficients.
import numpy as np
from scipy.linalg import lstsq
from scipy.special import binom
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D


def _get_coeff_idx(coeff):
    idx = np.indices(coeff.shape)
    idx = idx.T.swapaxes(0, 1).reshape((-1, 2))
    return idx


def _scale(x, y):
    # Normalize x and y to avoid huge numbers
    # Mean 0, Variation 1
    offset_x, offset_y = np.mean(x), np.mean(y)
    norm_x, norm_y = np.std(x), np.std(y)
    x = (x - offset_x) / norm_x
    y = (y - offset_y) / norm_y
    return x, y, (norm_x, norm_y), (offset_x, offset_y)


def _unscale(x, y, norm, offset):
    x = x * norm[0] + offset[0]
    y = y * norm[1] + offset[1]
    return x, y


def polyvander2d(x, y, degree):
    A = np.polynomial.polynomial.polyvander2d(x, y, degree)
    return A


def polyscale2d(coeff, scale_x, scale_y, copy=True):
    if copy:
        coeff = np.copy(coeff)
    idx = _get_coeff_idx(coeff)
    for k, (i, j) in enumerate(idx):
        coeff[i, j] /= scale_x ** i * scale_y ** j
    return coeff


def polyshift2d(coeff, offset_x, offset_y, copy=True):
    if copy:
        coeff = np.copy(coeff)
    idx = _get_coeff_idx(coeff)
    # Copy coeff because it changes during the loop
    coeff2 = np.copy(coeff)
    for k, m in idx:
        not_the_same = ~((idx[:, 0] == k) & (idx[:, 1] == m))
        above = (idx[:, 0] >= k) & (idx[:, 1] >= m) & not_the_same
        for i, j in idx[above]:
            b = binom(i, k) * binom(j, m)
            sign = (-1) ** ((i - k) + (j - m))
            offset = offset_x ** (i - k) * offset_y ** (j - m)
            coeff[k, m] += sign * b * coeff2[i, j] * offset
    return coeff


def plot2d(x, y, z, coeff):
    # regular grid covering the domain of the data
    if x.size > 500:
        choice = np.random.choice(x.size, size=500, replace=False)
    else:
        choice = slice(None, None, None)
    x, y, z = x[choice], y[choice], z[choice]
    X, Y = np.meshgrid(
        np.linspace(np.min(x), np.max(x), 20), np.linspace(np.min(y), np.max(y), 20)
    )
    Z = np.polynomial.polynomial.polyval2d(X, Y, coeff)
    fig = plt.figure()
    ax = fig.gca(projection="3d")
    ax.plot_surface(X, Y, Z, rstride=1, cstride=1, alpha=0.2)
    ax.scatter(x, y, z, c="r", s=50)
    plt.xlabel("X")
    plt.ylabel("Y")
    ax.set_zlabel("Z")
    plt.show()


def polyfit2d(x, y, z, degree=1, max_degree=None, scale=True, plot=False):
    """A simple 2D polynomial fit to data x, y, z
    The polynomial can be evaluated with numpy.polynomial.polynomial.polyval2d

    Parameters
    ----------
    x : array[n]
        x coordinates
    y : array[n]
        y coordinates
    z : array[n]
        data values
    degree : {int, 2-tuple}, optional
        degree of the polynomial fit in x and y direction (default: 1)
    max_degree : {int, None}, optional
        if given, the maximum combined degree of the coefficients is limited to this value
    scale : bool, optional
        Whether to scale the input arrays x and y to mean 0 and variance 1, to avoid numerical overflows.
        Especially useful at higher degrees. (default: True)
    plot : bool, optional
        whether to plot the fitted surface and data (slow) (default: False)

    Returns
    -------
    coeff : array[degree+1, degree+1]
        the polynomial coefficients in numpy 2d format, i.e. coeff[i, j] for x**i * y**j
    """
    # Flatten input
    x = np.asarray(x).ravel()
    y = np.asarray(y).ravel()
    z = np.asarray(z).ravel()

    # Remove masked values
    mask = ~(np.ma.getmask(z) | np.ma.getmask(x) | np.ma.getmask(y))
    x, y, z = x[mask].ravel(), y[mask].ravel(), z[mask].ravel()

    # Scale coordinates to smaller values to avoid numerical problems at larger degrees
    if scale:
        x, y, norm, offset = _scale(x, y)

    if np.isscalar(degree):
        degree = (int(degree), int(degree))
    degree = [int(degree[0]), int(degree[1])]
    coeff = np.zeros((degree[0] + 1, degree[1] + 1))
    idx = _get_coeff_idx(coeff)

    # Calculate elements 1, x, y, x*y, x**2, y**2, ...
    A = polyvander2d(x, y, degree)

    # We only want the combinations with maximum order COMBINED power
    if max_degree is not None:
        mask = idx[:, 0] + idx[:, 1] <= int(max_degree)
        idx = idx[mask]
        A = A[:, mask]

    # Do the actual least squares fit
    C, *_ = lstsq(A, z)

    # Reorder coefficients into numpy compatible 2d array
    for k, (i, j) in enumerate(idx):
        coeff[i, j] = C[k]

    # Reverse the scaling
    if scale:
        coeff = polyscale2d(coeff, *norm, copy=False)
        coeff = polyshift2d(coeff, *offset, copy=False)

    if plot:
        if scale:
            x, y = _unscale(x, y, norm, offset)
        plot2d(x, y, z, coeff)

    return coeff


if __name__ == "__main__":
    n = 100
    x, y = np.meshgrid(np.arange(n), np.arange(n))
    z = x ** 2 + y ** 2
    c = polyfit2d(x, y, z, degree=2, plot=True)
    print(c)
It doesn't look like polyfit supports fitting multivariate polynomials, but you can do it by hand, with linalg.lstsq. The steps are as follows:
Gather the degrees of monomials x**i * y**j you wish to use in the model. Think carefully about it: your current model already has 9 parameters; if you are going to push to 5 variables, then with the current approach you'll end up with 3**5 = 243 parameters, a sure road to overfitting. Maybe limit yourself to the monomials of total degree at most 2 or 3...
Plug the x-points into each monomial; this gives a 1D array. Stack all such arrays as columns of a matrix.
Solve a linear system with aforementioned matrix and with the right-hand side being the target values (I call them z because y is confusing when you also use x, y for two variables).
Here it is:
import numpy as np
x = np.random.random((100, 2))
z = np.sin(5 * x[:,0]) + .4 * np.sin(x[:,1])
degrees = [(i, j) for i in range(3) for j in range(3)] # list of monomials x**i * y**j to use
matrix = np.stack([np.prod(x**d, axis=1) for d in degrees], axis=-1) # stack monomials like columns
coeff = np.linalg.lstsq(matrix, z)[0] # lstsq returns some additional info we ignore
print("Coefficients", coeff) # in the same order as the monomials listed in "degrees"
fit = np.dot(matrix, coeff)
print("Fitted values", fit)
print("Original values", y)
I believe you have misunderstood what polyvander2d does and how it should be used. polyvander2d() returns the pseudo-Vandermonde matrix of degrees deg and sample points (x, y).
Here, y is not the value(s) of the polynomial at point(s) x but rather it is the y-coordinate of the point(s) and x is the x-coordinate. Roughly speaking, the returned array is a set of combinations of (x**i) * (y**j) and x and y are essentially 2D "mesh-grids". Therefore, both x and y must have identical shapes.
Your x and y arrays, however, have different shapes:
>>> x.shape
(100, 2)
>>> y.shape
(100,)
I do not believe numpy has a 5D polyvander of the form polyvander5D(x, y, z, v, w, deg). Notice that all the variables here are coordinates and not the values of the polynomial p = p(x,y,z,v,w). You, however, seem to be using y (in the 2D case) as the function values f.
It appears that numpy does not have 2D or higher equivalents for the polyfit() function. If your intention is to find the coefficients of the best-fitting polynomial in higher-dimensions, I would suggest that you generalize the approach described here: Equivalent of `polyfit` for a 2D polynomial in Python
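For completeness, here is a short sketch of that generalization, using polyvander2d as the design matrix together with lstsq (my own illustration of the linked approach; it assumes x, y and z are flat arrays of sample coordinates and values):
import numpy as np

x = np.random.random(100)
y = np.random.random(100)
z = np.sin(5 * x) + .4 * np.sin(y)

deg = (3, 3)
A = np.polynomial.polynomial.polyvander2d(x, y, deg)      # shape (100, 16)
coeff_flat, *_ = np.linalg.lstsq(A, z, rcond=None)
coeff = coeff_flat.reshape(deg[0] + 1, deg[1] + 1)        # coeff[i, j] multiplies x**i * y**j
z_fit = np.polynomial.polynomial.polyval2d(x, y, coeff)
print(np.max(np.abs(z_fit - z)))                          # residual of the fit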
The option isn't there because nobody wants to do that. Combine the polynomials linearly (f(x,y) = 1 + x + y + x^2 + y^2) and solve the system of equations yourself.

Procrustes Analysis with NumPy?

Is there something like Matlab's procrustes function in NumPy/SciPy or related libraries?
For reference: Procrustes analysis aims to align 2 sets of points (in other words, 2 shapes) so as to minimize the squared distance between them by removing the scale, translation and rotation warp components.
Example in Matlab:
X = [0 1; 2 3; 4 5; 6 7; 8 9]; % first shape
R = [1 2; 2 1]; % rotation matrix
t = [3 5]; % translation vector
Y = X * R + repmat(t, 5, 1); % warped shape, no scale and no distortion
[d Z] = procrustes(X, Y); % Z is Y aligned back to X
Z
Z =
0.0000 1.0000
2.0000 3.0000
4.0000 5.0000
6.0000 7.0000
8.0000 9.0000
Same task in NumPy:
X = arange(10).reshape((5, 2))
R = array([[1, 2], [2, 1]])
t = array([3, 5])
Y = dot(X, R) + t
Z = ???
Note: I'm only interested in the aligned shape, since the squared error (variable d in the Matlab code) is easily computed from the 2 shapes.
I'm not aware of any pre-existing implementation in Python, but it's easy to take a look at the MATLAB code using edit procrustes.m and port it to Numpy:
def procrustes(X, Y, scaling=True, reflection='best'):
    """
    A port of MATLAB's `procrustes` function to Numpy.

    Procrustes analysis determines a linear transformation (translation,
    reflection, orthogonal rotation and scaling) of the points in Y to best
    conform them to the points in matrix X, using the sum of squared errors
    as the goodness of fit criterion.

        d, Z, [tform] = procrustes(X, Y)

    Inputs:
    ------------
    X, Y
        matrices of target and input coordinates. they must have equal
        numbers of points (rows), but Y may have fewer dimensions
        (columns) than X.
    scaling
        if False, the scaling component of the transformation is forced
        to 1
    reflection
        if 'best' (default), the transformation solution may or may not
        include a reflection component, depending on which fits the data
        best. setting reflection to True or False forces a solution with
        reflection or no reflection respectively.

    Outputs
    ------------
    d
        the residual sum of squared errors, normalized according to a
        measure of the scale of X, ((X - X.mean(0))**2).sum()
    Z
        the matrix of transformed Y-values
    tform
        a dict specifying the rotation, translation and scaling that
        maps X --> Y
    """
    n, m = X.shape
    ny, my = Y.shape

    muX = X.mean(0)
    muY = Y.mean(0)

    X0 = X - muX
    Y0 = Y - muY

    ssX = (X0**2.).sum()
    ssY = (Y0**2.).sum()

    # centred Frobenius norm
    normX = np.sqrt(ssX)
    normY = np.sqrt(ssY)

    # scale to equal (unit) norm
    X0 /= normX
    Y0 /= normY

    if my < m:
        Y0 = np.concatenate((Y0, np.zeros((n, m - my))), 0)

    # optimum rotation matrix of Y
    A = np.dot(X0.T, Y0)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    V = Vt.T
    T = np.dot(V, U.T)

    if reflection != 'best':
        # does the current solution use a reflection?
        have_reflection = np.linalg.det(T) < 0
        # if that's not what was specified, force another reflection
        if reflection != have_reflection:
            V[:, -1] *= -1
            s[-1] *= -1
            T = np.dot(V, U.T)

    traceTA = s.sum()

    if scaling:
        # optimum scaling of Y
        b = traceTA * normX / normY
        # standardised distance between X and b*Y*T + c
        d = 1 - traceTA**2
        # transformed coords
        Z = normX*traceTA*np.dot(Y0, T) + muX
    else:
        b = 1
        d = 1 + ssY/ssX - 2 * traceTA * normY / normX
        Z = normY*np.dot(Y0, T) + muX

    # transformation matrix
    if my < m:
        T = T[:my, :]
    c = muX - b*np.dot(muY, T)

    # transformation values
    tform = {'rotation': T, 'scale': b, 'translation': c}

    return d, Z, tform
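With the question's example data, the port can be used like this (a quick check I added, not part of the original answer; Z should come back essentially equal to X here because the example points are collinear, so the warp reduces to scale plus translation):
import numpy as np

X = np.arange(10, dtype=float).reshape((5, 2))
R = np.array([[1, 2], [2, 1]])
t = np.array([3, 5])
Y = np.dot(X, R) + t

d, Z, tform = procrustes(X, Y)
print(np.allclose(Z, X))   # expected: True, Y is aligned back onto X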
There is a Scipy function for it: scipy.spatial.procrustes
I'm just posting its example here:
>>> import numpy as np
>>> from scipy.spatial import procrustes
>>> a = np.array([[1, 3], [1, 2], [1, 1], [2, 1]], 'd')
>>> b = np.array([[4, -2], [4, -4], [4, -6], [2, -6]], 'd')
>>> mtx1, mtx2, disparity = procrustes(a, b)
>>> round(disparity)
0.0
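Note that scipy.spatial.procrustes standardizes both inputs (centred, unit norm), so mtx2 is the aligned version of b in that standardized frame rather than in the original coordinates of a. Applied to the question's example data it would look something like this (my own sketch, continuing the imports above):
X = np.arange(10, dtype=float).reshape((5, 2))
Y = X.dot(np.array([[1, 2], [2, 1]])) + np.array([3, 5])
mtx1, mtx2, disparity = procrustes(X, Y)
print(disparity)           # should be ~0 for this example: mtx1 and mtx2 coincide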
You can have both Ordinary Procrustes Analysis and Generalized Procrustes Analysis in python with something like this:
import numpy as np

def opa(a, b):
    aT = a.mean(0)
    bT = b.mean(0)
    A = a - aT
    B = b - bT
    aS = np.sum(A * A)**.5
    bS = np.sum(B * B)**.5
    A /= aS
    B /= bS
    U, _, V = np.linalg.svd(np.dot(B.T, A))
    aR = np.dot(U, V)
    if np.linalg.det(aR) < 0:
        V[1] *= -1
        aR = np.dot(U, V)
    aS = aS / bS
    aT -= (bT.dot(aR) * aS)
    aD = (np.sum((A - B.dot(aR))**2) / len(a))**.5
    return aR, aS, aT, aD

def gpa(v, n=-1):
    if n < 0:
        p = avg(v)
    else:
        p = v[n]
    l = len(v)
    r, s, t, d = np.ndarray((4, l), object)
    for i in range(l):
        r[i], s[i], t[i], d[i] = opa(p, v[i])
    return r, s, t, d

def avg(v):
    v_ = np.copy(v)
    l = len(v_)
    R, S, T = [list(np.zeros(l)) for _ in range(3)]
    for i, j in np.ndindex(l, l):
        r, s, t, _ = opa(v_[i], v_[j])
        R[j] += np.arccos(min(1, max(-1, np.trace(r[:1])))) * np.sign(r[1][0])
        S[j] += s
        T[j] += t
    for i in range(l):
        a = R[i] / l
        r = [np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]
        v_[i] = v_[i].dot(r) * (S[i] / l) + (T[i] / l)
    return v_.mean(0)
For testing purposes, the output of each algorithm can be visualized as follows:
import matplotlib.pyplot as p; p.rcParams['toolbar'] = 'None'

def plt(o, e, b):
    p.figure(figsize=(10, 10), dpi=72, facecolor='w').add_axes([0.05, 0.05, 0.9, 0.9], aspect='equal')
    p.plot(0, 0, marker='x', mew=1, ms=10, c='g', zorder=2, clip_on=False)
    p.gcf().canvas.set_window_title('%f' % e)
    x = np.ravel(o[0].T[0])
    y = np.ravel(o[0].T[1])
    p.xlim(min(x), max(x))
    p.ylim(min(y), max(y))
    a = []
    for i, j in np.ndindex(len(o), 2):
        a.append(o[i].T[j])
    O = p.plot(*a, marker='x', mew=1, ms=10, lw=.25, c='b', zorder=0, clip_on=False)
    O[0].set(c='r', zorder=1)
    if not b:
        O[2].set_color('b')
        O[2].set_alpha(0.4)
    p.axis('off')
    p.show()
# Fly wings example (Klingenberg, 2015 | https://en.wikipedia.org/wiki/Procrustes_analysis)
arr1 = np.array([[588.0, 443.0], [178.0, 443.0], [56.0, 436.0], [50.0, 376.0], [129.0, 360.0], [15.0, 342.0], [92.0, 293.0], [79.0, 269.0], [276.0, 295.0], [281.0, 331.0], [785.0, 260.0], [754.0, 174.0], [405.0, 233.0], [386.0, 167.0], [466.0, 59.0]])
arr2 = np.array([[477.0, 557.0], [130.129, 374.307], [52.0, 334.0], [67.662, 306.953], [111.916, 323.0], [55.119, 275.854], [107.935, 277.723], [101.899, 259.73], [175.0, 329.0], [171.0, 345.0], [589.0, 527.0], [591.0, 468.0], [299.0, 363.0], [306.0, 317.0], [406.0, 288.0]])
def opa_out(a):
    r, s, t, d = opa(a[0], a[1])
    a[1] = a[1].dot(r) * s + t
    return a, d, False

plt(*opa_out([arr1, arr2, np.matrix.copy(arr2)]))

def gpa_out(a):
    g = gpa(a, -1)
    D = [avg(a)]
    for i in range(len(a)):
        D.append(a[i].dot(g[0][i]) * g[1][i] + g[2][i])
    return D, sum(g[3])/len(a), True

plt(*gpa_out([arr1, arr2]))
You probably want to try this package, which implements various flavors of Procrustes methods: https://github.com/theochem/procrustes.
