I'm looking for a way to analyze two cubic splines and find the point where they come the closest to each other. I've seen a lot of solutions and posts but I've been unable to implement the methods suggested. I know that the closest point will be one of the end-points of the two curves or a point where the first derivative of both curves is equal. Checking the end points is easy. Finding the points where the first derivatives match is hard.
Given:
Curve 0 is B(t) (red)
Curve 1 is C(s) (blue)
A candidate for closest point is where:
B'(t) = C'(s)
The first derivative of each curve takes the following form:

B'(t) = 3(1-t)^2 * a + 6(1-t)t * b + 3t^2 * c

where the a, b, c coefficients are formed from the control points of the curves:
a=P1-P0
b=P2-P1
c=P3-P2
Taking the 4 control points of each cubic spline, I can put each curve's derivative coefficients into matrix form and express them with NumPy in the following Python code:
import numpy as np

def test_closest_points():
    # Control points for the two cubic splines.
    spline_0 = [(1, 28), (58, 93), (113, 95), (239, 32)]
    spline_1 = [(58, 241), (26, 76), (225, 83), (211, 205)]

    first_derivative_matrix = np.array([[3, -6, 3], [-6, 6, 0], [3, 0, 0]])

    spline_0_x_A = spline_0[1][0] - spline_0[0][0]
    spline_0_x_B = spline_0[2][0] - spline_0[1][0]
    spline_0_x_C = spline_0[3][0] - spline_0[2][0]
    spline_0_y_A = spline_0[1][1] - spline_0[0][1]
    spline_0_y_B = spline_0[2][1] - spline_0[1][1]
    spline_0_y_C = spline_0[3][1] - spline_0[2][1]

    spline_1_x_A = spline_1[1][0] - spline_1[0][0]
    spline_1_x_B = spline_1[2][0] - spline_1[1][0]
    spline_1_x_C = spline_1[3][0] - spline_1[2][0]
    spline_1_y_A = spline_1[1][1] - spline_1[0][1]
    spline_1_y_B = spline_1[2][1] - spline_1[1][1]
    spline_1_y_C = spline_1[3][1] - spline_1[2][1]

    spline_0_first_derivative_x_coefficients = np.array([[spline_0_x_A], [spline_0_x_B], [spline_0_x_C]])
    spline_0_first_derivative_y_coefficients = np.array([[spline_0_y_A], [spline_0_y_B], [spline_0_y_C]])
    spline_1_first_derivative_x_coefficients = np.array([[spline_1_x_A], [spline_1_x_B], [spline_1_x_C]])
    spline_1_first_derivative_y_coefficients = np.array([[spline_1_y_A], [spline_1_y_B], [spline_1_y_C]])

    # Show all the matrix values.
    print('first_derivative_matrix:')
    print(first_derivative_matrix)
    print()
    print('spline_0_first_derivative_x_coefficients:')
    print(spline_0_first_derivative_x_coefficients)
    print()
    print('spline_0_first_derivative_y_coefficients:')
    print(spline_0_first_derivative_y_coefficients)
    print()
    print('spline_1_first_derivative_x_coefficients:')
    print(spline_1_first_derivative_x_coefficients)
    print()
    print('spline_1_first_derivative_y_coefficients:')
    print(spline_1_first_derivative_y_coefficients)
    print()

    # Now, taking B(t) as spline_0 and C(s) as spline_1, I need to find the values of t and s where B'(t) = C'(s).
This post has some good high-level advice, but I'm unsure how to implement a solution in Python that can find the correct values of t and s where the first derivatives (slopes) match. The B'(t) - C'(s) = 0 problem looks like a root-finding problem. Any advice on how to do it with Python and NumPy would be greatly appreciated.
Using NumPy suggests that the problem should be solved numerically. Without loss of generality we can assume that 0 < s <= 1 and 0 < t <= 1. You can use the SciPy package to solve the problem numerically, e.g.
from scipy.optimize import minimize
import numpy as np
def B(t):
    """Assumed for simplicity: 0 < t <= 1
    """
    return np.sin(6.28 * t), np.cos(6.28 * t)

def C(s):
    """0 < s <= 1
    """
    return 10 + np.sin(3.14 * s), 10 + np.cos(3.14 * s)

def Q(x):
    """Distance function to be minimized
    """
    b = B(x[0])
    c = C(x[1])
    return (b[0] - c[0]) ** 2 + (b[1] - c[1]) ** 2
res = minimize(Q, (0.5, 0.5))
print("B-Point: ", B(res.x[0]))
print("C-Point: ", C(res.x[1]))
B-Point: (0.7071067518175205, 0.7071068105555733)
C-Point: (9.292893243165555, 9.29289319446135)
This is an example for two circles (one full circle and one arc). The same approach should work with splines.
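For the curves in the question, the same idea can be applied directly to the cubic Bézier point functions. Here is a minimal sketch under that assumption; bezier_point is a helper I am introducing here (not part of the question's code), and the bounds keep both parameters inside [0, 1] so the endpoints are included in the search:

from scipy.optimize import minimize
import numpy as np

def bezier_point(t, points):
    """Evaluate a cubic Bezier curve at parameter t from its 4 control points."""
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in points)
    return ((1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1
            + 3 * (1 - t) * t**2 * p2 + t**3 * p3)

# Control points from the question.
spline_0 = [(1, 28), (58, 93), (113, 95), (239, 32)]
spline_1 = [(58, 241), (26, 76), (225, 83), (211, 205)]

def Q(x):
    """Squared distance between B(t) on spline_0 and C(s) on spline_1."""
    b = bezier_point(x[0], spline_0)
    c = bezier_point(x[1], spline_1)
    return np.sum((b - c)**2)

res = minimize(Q, (0.5, 0.5), bounds=[(0, 1), (0, 1)])
t_min, s_min = res.x
print("closest point on spline_0:", bezier_point(t_min, spline_0))
print("closest point on spline_1:", bezier_point(s_min, spline_1))

Minimizing the squared distance avoids a square root and gives the same minimizer as the distance itself.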
Your assumption of B'(t) = C'(s) is too strong.
Derivatives have direction and magnitude. Directions must coincide in the candidate points, but magnitudes might differ.
To find points with the same derivative slope and the closest distance, you can solve a system of equations (of high degree, unfortunately):

yb'(t) * xc'(u) - yc'(u) * xb'(t) = 0    // the cross product of (anti)collinear vectors is zero
((xb(t) - xc(u))^2 + (yb(t) - yc(u))^2)' = 0    // derivative of the squared distance is zero
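A sketch of how such a system could be set up with scipy.optimize.fsolve for the splines from the question (u plays the role of s here, as in the equations above). The bezier and bezier_d helpers are mine, and the second equation is written as the partial derivative of the squared distance with respect to t, which together with the parallel-tangent condition expresses the same idea; any solution still has to be checked against the 0..1 parameter range and the endpoints:

import numpy as np
from scipy.optimize import fsolve

spline_0 = np.array([(1, 28), (58, 93), (113, 95), (239, 32)], dtype=float)
spline_1 = np.array([(58, 241), (26, 76), (225, 83), (211, 205)], dtype=float)

def bezier(t, P):
    """Point on a cubic Bezier curve with control points P (4x2 array)."""
    return ((1 - t)**3 * P[0] + 3 * (1 - t)**2 * t * P[1]
            + 3 * (1 - t) * t**2 * P[2] + t**3 * P[3])

def bezier_d(t, P):
    """First derivative of a cubic Bezier curve at parameter t."""
    return (3 * (1 - t)**2 * (P[1] - P[0])
            + 6 * (1 - t) * t * (P[2] - P[1])
            + 3 * t**2 * (P[3] - P[2]))

def equations(vars):
    t, u = vars
    db = bezier_d(t, spline_0)   # B'(t)
    dc = bezier_d(u, spline_1)   # C'(u)
    diff = bezier(t, spline_0) - bezier(u, spline_1)
    # 1) tangents are (anti)collinear: their 2D cross product is zero
    # 2) the connecting vector is perpendicular to B'(t)
    #    (one way of writing "the distance derivative is zero")
    return [db[0] * dc[1] - db[1] * dc[0], np.dot(diff, db)]

t_sol, u_sol = fsolve(equations, (0.5, 0.5))
print(t_sol, u_sol)
print(bezier(t_sol, spline_0), bezier(u_sol, spline_1))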
You can also use the fmin function:
import numpy as np
import matplotlib.pylab as plt
from scipy.optimize import fmin
def BCubic(t, P0, P1, P2, P3):
    a = P1 - P0
    b = P2 - P1
    c = P3 - P2
    return a * 3 * (1 - t)**2 + b * 6 * (1 - t) * t + c * 3 * t**2

def B(t):
    return BCubic(t, 4, 2, 3, 1)

def C(t):
    return BCubic(t, 1, 4, 3, 4)

def f(t):
    # L1 or Manhattan distance
    return abs(B(t) - C(t))
init = 0 # 2
tmin = fmin(f,np.array([init]))
#Optimization terminated successfully.
#Current function value: 2.750000
# Iterations: 23
# Function evaluations: 46
print(tmin)
# [0.5833125]
tmin = tmin[0]
t = np.linspace(0, 2, 100)
plt.plot(t, B(t), label='B')
plt.plot(t, C(t), label='C')
plt.plot(t, abs(B(t)-C(t)), label='|B-C|')
plt.plot(tmin, B(tmin), 'r.', markersize=12, label='min')
plt.axvline(x=tmin, linestyle='--', color='k')
plt.legend()
plt.show()
I have written this code to model the motion of a spring pendulum
import numpy as np
from scipy.integrate import odeint
from numpy import sin, cos, pi, array
import matplotlib.pyplot as plt
def deriv(z, t):
    x, y, dxdt, dydt = z
    dx2dt2 = (0.415 + x) * (dydt)**2 - 50 / 1.006 * x + 9.81 * cos(y)
    dy2dt2 = (-9.81 * 1.006 * sin(y) - 2 * (dxdt) * (dydt)) / (0.415 + x)
    return np.array([x, y, dx2dt2, dy2dt2])
init = array([0,pi/18,0,0])
time = np.linspace(0.0,10.0,1000)
sol = odeint(deriv,init,time)
def plot(h, t):
    n, u, x, y = h
    n = (0.4 + x) * sin(y)
    u = (0.4 + x) * cos(y)
    return np.array([n, u, x, y])
init2 = array([0.069459271,0.393923101,0,pi/18])
time2 = np.linspace(0.0,10.0,1000)
sol2 = odeint(plot,init2,time2)
plt.xlabel("x")
plt.ylabel("y")
plt.plot(sol2[:,0], sol2[:, 1], label = 'hi')
plt.legend()
plt.show()
where x and y are two variables, and I'm trying to convert x and y to the polar coordinates n (x-axis) and u (y-axis) and then graph n and u on a graph where n is on the x-axis and u is on the y-axis. However, when I graph the code above it gives me:
Instead, I should be getting an image somewhat similar to this:
The first part of the code - from "def deriv(z, t):" to "sol = odeint(deriv, ...)" - is where the values of x and y are generated, and using those I can then turn them into rectangular coordinates and graph them. How do I change my code to do this? I'm new to Python, so I might not understand some of the terminology. Thank you!
The first part of your code should give you the expected result, but there is a mistake in the implementation of the ODE.
The function you pass to odeint should return an array containing the right-hand side (the first derivatives) of the 1st-order differential equation system.
In your case, what you are solving is

x' = x
y' = y
(dx/dt)' = dx2dt2
(dy/dt)' = dy2dt2

while instead you should be solving

x' = dx/dt
y' = dy/dt
(dx/dt)' = dx2dt2
(dy/dt)' = dy2dt2

In order to do so, change your code to this:
import numpy as np
from scipy.integrate import odeint
from numpy import sin, cos, pi, array
import matplotlib.pyplot as plt
def deriv(z, t):
    x, y, dxdt, dydt = z
    dx2dt2 = (0.415 + x) * (dydt)**2 - 50 / 1.006 * x + 9.81 * cos(y)
    dy2dt2 = (-9.81 * 1.006 * sin(y) - 2 * (dxdt) * (dydt)) / (0.415 + x)
    return np.array([dxdt, dydt, dx2dt2, dy2dt2])
init = array([0, pi / 18, 0, 0])
time = np.linspace(0.0, 10.0, 1000)
sol = odeint(deriv, init, time)
plt.plot(sol[:, 0], sol[:, 1], label='hi')
plt.show()
The second part of the code looks like you are trying to do a change of coordinates.
I'm not sure why you try to solve the ODE again instead of just doing this:
x = sol[:,0]
y = sol[:,1]
def plot(h):
    x, y = h
    n = (0.4 + x) * sin(y)
    u = (0.4 + x) * cos(y)
    return np.array([n, u])

n, u = plot((x, y))
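To get the plot the question asks for (n on the x-axis, u on the y-axis), you can then plot the transformed arrays directly. A minimal continuation of the snippet above:

import matplotlib.pyplot as plt

# Plot the transformed coordinates: n on the x-axis, u on the y-axis.
plt.xlabel("n")
plt.ylabel("u")
plt.plot(n, u, label='pendulum path')
plt.legend()
plt.show()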
As of now, what you are doing there is solving this system:

n' = (0.4 + x) * sin(y)
u' = (0.4 + x) * cos(y)
x' = x
y' = y

which leads to x = e^t and y = e^t, and hence n' = (0.4 + e^t) * sin(e^t) and u' = (0.4 + e^t) * cos(e^t).
Without going too much into the details, with some intuition you can see that this leads to an attractor: the derivatives of n and u start to switch sign faster and with greater magnitude at an exponential rate, so n and u collapse onto an attractor, as shown by your plot.
If you are actually trying to solve another differential equation, I would need to see it in order to help you further.
This is what happens if you do the transformation and set the time to 1000:
I'm currently trying to make a program which will plot a function using matplotlib, graph it, shade the area under the curve between two variables, and use Simpson's 3/8ths rule to calculate the shaded area. However, when trying to print the variable I've assigned to the final value of the integral, it prints a list.
To begin, here's the base of my code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
This defines the function I will be working with here, a simple polynomial.
def func(x):
    return (x - 3) * (x - 5) * (x - 7) + 85
Here is the function which calculates the area under the curve
def simpson(function, a, b, n):
    """Approximates the definite integral of f from a to b by the
    composite Simpson's rule, using n subintervals (with n even)"""
    if n % 2:
        raise ValueError("n must be even (received n=%d)" % n)
    h = (b - a) / n  # The first section of Simpson's 3/8ths rule
    s = function(a) + function(b)  # The addition of functions over an interval
    for i in range(1, n, 2):
        s += 4 * function(a + i * h)
    for i in range(2, n - 1, 2):
        s += 2 * function(a + i * h)
    return s * h / 3
Now that the Simpson's rule definition is done, I define a few variables for simplicity.

a, b = 2, 9  # integral limits
x = np.linspace(0, 10)  # Generates 50 evenly spaced points between 0 and 10 (the linspace default)
y = func(x)  # Just defines y to be f(x) so it's easy later on
fig, ax = plt.subplots()
plt.plot(x, y, 'r', linewidth=2)
plt.ylim(ymin=0)
final_integral = simpson(lambda x: y, a, b, 100000)
At this point something must have broken down, but I'll include the rest of the code in case you can spot the issue further on.
# Make the shaded region
ix = np.linspace(a, b)
iy = func(ix)
verts = [(a, 0)] + list(zip(ix, iy)) + [(b, 0)]
poly = Polygon(verts, facecolor='0.9', edgecolor='0.5')
ax.add_patch(poly)
plt.text(0.5 * (a + b), 30, r"$\int_a^b f(x)\mathrm{d}x$",
horizontalalignment='center', fontsize=20)
ax.text(0.25, 135, r"Using Simpson's 3/8ths rule, the area under the curve is: ", fontsize=20)
Here is where the integral value should be printed:
ax.text(0.25, 114, final_integral , fontsize=20)
Here is the rest of the code necessary to plot the graph:
plt.figtext(0.9, 0.05, '$x$')
plt.figtext(0.1, 0.9, '$y$')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
ax.set_xticks((a, b))
ax.set_xticklabels(('$a$', '$b$'))
ax.set_yticks([])
plt.show()
When running this program, you get this graph, and a series of numbers has been printed where the area under the curve should be
Any help here is appreciated. I'm totally stuck. Also, sorry if this is a tad long, it's my first question on the forum.
Have you tried feeding your simpson() function the func() directly, as opposed to using the lambda setup?
I think this could work:
final_integral = simpson(func, a, b, 100000)
You might also try:
final_integral = simpson(lambda x: func(x), a, b, 100000)
What is happening is that y is an array with values func(x), and when you use the expression lambda x: y you are actually creating a constant function of the form f(x) = y = const. Your final_integral is then a list of integrals, where each integrand was the constant function with a particular value from the y array.
Note that you might want to format this number when you print it on the graph, in case it has a lot of decimal places. How you do this depends on whether you are using Python 2 or 3.
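For example, once final_integral is a single number, something like the following sketch would print it rounded on the plot (the coordinates are just the ones already used in the question, and str.format works the same way in Python 2.7 and Python 3):

# Assumes final_integral is now a scalar, e.g. from simpson(func, a, b, 100000).
label = "{:.3f}".format(final_integral)
ax.text(0.25, 114, label, fontsize=20)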
You assigned x with linspace, which is an array, so y is also an array of the function's values at those x. You can replace this line of code:
#old:
final_integral = simpson(lambda x:y, a, b, 100000)
#new:
final_integral = simpson(lambda t:func(t), a, b, 100000)
Changing the variable from x to t will give you the value of the area under the curve. Hope this helps.
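As a quick sanity check (my addition, not part of either answer), the corrected call can be compared against scipy.integrate.quad using the func, a and b already defined in the question; the two values should agree closely for this polynomial:

from scipy.integrate import quad

final_integral = simpson(func, a, b, 100000)
exact, _ = quad(func, a, b)
print(final_integral, exact)  # should match to many decimal places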
Suppose I have x and y vectors with a weight vector wgt. I can fit a cubic curve (y = a x^3 + b x^2 + c x + d) by using np.polyfit as follows:
y_fit = np.polyfit(x, y, deg=3, w=wgt)
Now, suppose I want to do another fit, but this time, I want the fit to pass through 0 (i.e. y = a x^3 + b x^2 + c x, d = 0), how can I specify a particular coefficient (i.e. d in this case) to be zero?
Thanks
You can try something like the following:
Import curve_fit from scipy, i.e.
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
import numpy as np
Define the curve fitting function. In your case,
def fit_func(x, a, b, c):
    # Curve fitting function
    return a * x**3 + b * x**2 + c * x  # d=0 is implied
Perform the curve fitting,
# Curve fitting
params = curve_fit(fit_func, x, y)
[a, b, c] = params[0]
x_fit = np.linspace(x[0], x[-1], 100)
y_fit = a * x_fit**3 + b * x_fit**2 + c * x_fit
Plot the results if you please,
plt.plot(x, y, '.r') # Data
plt.plot(x_fit, y_fit, 'k') # Fitted curve
This does not answer the question in the sense of making numpy's polyfit itself pass through the origin, but it does solve the problem.
Hope someone finds it useful :)
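A minimal end-to-end sketch of the above with made-up sample data (the coefficients here are arbitrary, chosen only to make the snippet runnable on its own):

from scipy.optimize import curve_fit
import numpy as np

def fit_func(x, a, b, c):
    return a * x**3 + b * x**2 + c * x  # d=0 is implied

# Made-up noisy data from a cubic that passes through the origin.
x = np.linspace(0, 5, 50)
y = 2.0 * x**3 - 1.0 * x**2 + 0.5 * x + np.random.normal(0, 1.0, x.size)

params, cov = curve_fit(fit_func, x, y)
a, b, c = params
print(a, b, c)  # should come out close to 2.0, -1.0, 0.5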
You can use np.linalg.lstsq and construct your coefficient matrix manually. To start, I'll create the example data x and y, and the "exact fit" y0:
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(100)
y0 = 0.07 * x ** 3 + 0.3 * x ** 2 + 1.1 * x
y = y0 + 1000 * np.random.randn(x.shape[0])
Now I'll create a full cubic polynomial 'training' or 'independent variable' matrix that includes the constant d column.
XX = np.vstack((x ** 3, x ** 2, x, np.ones_like(x))).T
Let's see what I get if I compute the fit with this dataset and compare it to polyfit:
p_all = np.linalg.lstsq(XX, y)[0]
pp = np.polyfit(x, y, 3)
print(np.isclose(pp, p_all).all())
# Returns True
Where I've used np.isclose because the two algorithms do produce very small differences.
You're probably thinking 'that's nice, but I still haven't answered the question'. From here, forcing the fit to have a zero offset is the same as dropping the np.ones column from the array:
p_no_offset = np.linalg.lstsq(XX[:, :-1], y)[0] # use [0] to just grab the coefs
Ok, let's see what this fit looks like compared to our data:
y_fit = np.dot(p_no_offset, XX[:, :-1].T)
plt.plot(x, y0, 'k-', linewidth=3)
plt.plot(x, y_fit, 'y--', linewidth=2)
plt.plot(x, y, 'r.', ms=5)
This gives this figure,
WARNING: When using this method on data that does not actually pass through (x,y)=(0,0) you will bias your estimates of your output solution coefficients (p) because lstsq will be trying to compensate for that fact that there is an offset in your data. Sort of a 'square peg round hole' problem.
Furthermore, you could also fit your data to a cubic only by doing:
p_cubic_only = np.linalg.lstsq(XX[:, :1], y)[0]  # keep only the x**3 column
Here again the warning above applies. If your data contains quadratic, linear or constant terms the estimate of the cubic coefficient will be biased. There can be times when - for numerical algorithms - this sort of thing is useful, but for statistical purposes my understanding is that it is important to include all of the lower terms. If tests turn out to show that the lower terms are not statistically different from zero that's fine, but for safety's sake you should probably leave them in when you estimate your cubic.
Best of luck!
I have a set of coordinates (x, y, z(x, y)) which describe intensities (z) at coordinates x, y. For a set number of these intensities at different coordinates, I need to fit a 2D Gaussian that minimizes the mean squared error.
The data is in numpy matrices and for each fitting session I will have either 4, 9, 16 or 25 coordinates. Ultimately I just need to get the central position of the gaussian (x_0, y_0) that has smallest MSE.
All of the examples that I have found use scipy.optimize.curve_fit but the input data they have is over an entire mesh rather than a few coordinates.
Any help would be appreciated.
Introduction
There are multiple ways to approach this. You can use non-linear methods (e.g. scipy.optimize.curve_fit), but they'll be slow and aren't guaranteed to converge. You can linearize the problem (fast, unique solution), but any noise in the "tails" of the distribution will cause issues. There are actually a few tricks you can apply to this particular case to avoid the latter issue. I'll show some examples, but I don't have time to demonstrate all of the "tricks" right now.
Just as a side note, a general 2D gaussian has 6 parameters, so you won't be able to fully fit things with 4 points. However, it sounds like you might be assuming that there's no covariance between x and y and that the variances are the same in each direction (i.e. a perfectly "round" bell curve). If that's the case, then you only need four parameters. If you know the amplitude of the gaussian, you'll only need three. However, I'm going to start with the general solution, and you can simplify it later on, if you want to.
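For reference, the simplified "round" model mentioned above might look like this (a sketch, not used in the rest of the answer; amp, x0, y0 and a single width sigma are the four parameters, and it can be fit with scipy.optimize.curve_fit exactly like the general function below):

import numpy as np

def round_gauss2d(xy, amp, x0, y0, sigma):
    """Isotropic 2D gaussian: same variance in x and y, no covariance."""
    x, y = xy
    r2 = (x - x0)**2 + (y - y0)**2
    return amp * np.exp(-r2 / (2 * sigma**2))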
For the moment, let's focus on solving this problem using non-linear methods (e.g. scipy.optimize.curve_fit).
The general equation for a 2D gaussian is (directly from wikipedia):

f(x, y) = A * exp( -( a*(x - x0)^2 + 2*b*(x - x0)*(y - y0) + c*(y - y0)^2 ) )

where the matrix

[[a, b],
 [b, c]]

is essentially 0.5 over the covariance matrix, A is the amplitude, and (x0, y0) is the center.
Generate simplified sample data
Let's write the equation above out:
import numpy as np
import matplotlib.pyplot as plt
def gauss2d(x, y, amp, x0, y0, a, b, c):
    inner = a * (x - x0)**2
    inner += 2 * b * (x - x0)**2 * (y - y0)**2
    inner += c * (y - y0)**2
    return amp * np.exp(-inner)
And then let's generate some example data. To start with, we'll generate some data that will be easy to fit:
np.random.seed(1977) # For consistency
x, y = np.random.random((2, 10))
x0, y0 = 0.3, 0.7
amp, a, b, c = 1, 2, 3, 4
zobs = gauss2d(x, y, amp, x0, y0, a, b, c)
fig, ax = plt.subplots()
scat = ax.scatter(x, y, c=zobs, s=200)
fig.colorbar(scat)
plt.show()
Note that we haven't added any noise, and the center of the distribution is within the range that we have data (i.e. center at 0.3, 0.7 and a scatter of x,y observations between 0 and 1). For the moment, let's stick with this, and then we'll see what happens when we add noise and shift the center.
Non-linear fitting
To start with, let's use scipy.optimize.curve_fit to perform a non-linear least-squares fit to the gaussian function. (On a side note, you can play around with the exact minimization algorithm by using some of the other functions in scipy.optimize.)
The scipy.optimize functions expect a slightly different function signature than the one we originally wrote above. We could write a wrapper to "translate", but let's just re-write the gauss2d function instead:
def gauss2d(xy, amp, x0, y0, a, b, c):
    x, y = xy
    inner = a * (x - x0)**2
    inner += 2 * b * (x - x0)**2 * (y - y0)**2
    inner += c * (y - y0)**2
    return amp * np.exp(-inner)
All we did was have the function expect the independent variables (x & y) as a single 2xN array.
Now we need to make an initial guess at what the gaussian curve's parameters actually are. This is optional (the default is all ones, if I recall correctly), but you're likely to have problems converging if (1, 1) is not particularly close to the "true" center of the gaussian curve. For that reason, we'll use the x and y values of our largest observed z-value as a starting point for the center. I'll leave the rest of the parameters as 1, but if you know that they're likely to consistently be significantly different, change them to something more reasonable.
Here's the full, stand-alone example:
import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as plt
def main():
    x0, y0 = 0.3, 0.7
    amp, a, b, c = 1, 2, 3, 4
    true_params = [amp, x0, y0, a, b, c]
    xy, zobs = generate_example_data(10, true_params)
    x, y = xy

    i = zobs.argmax()
    guess = [1, x[i], y[i], 1, 1, 1]
    pred_params, uncert_cov = opt.curve_fit(gauss2d, xy, zobs, p0=guess)

    zpred = gauss2d(xy, *pred_params)
    print('True parameters: ', true_params)
    print('Predicted params:', pred_params)
    print('Residual, RMS(obs - pred):', np.sqrt(np.mean((zobs - zpred)**2)))

    plot(xy, zobs, pred_params)
    plt.show()
def gauss2d(xy, amp, x0, y0, a, b, c):
    x, y = xy
    inner = a * (x - x0)**2
    inner += 2 * b * (x - x0)**2 * (y - y0)**2
    inner += c * (y - y0)**2
    return amp * np.exp(-inner)
def generate_example_data(num, params):
    np.random.seed(1977)  # For consistency
    xy = np.random.random((2, num))
    zobs = gauss2d(xy, *params)
    return xy, zobs
def plot(xy, zobs, pred_params):
    x, y = xy
    yi, xi = np.mgrid[:1:30j, -.2:1.2:30j]
    xyi = np.vstack([xi.ravel(), yi.ravel()])

    zpred = gauss2d(xyi, *pred_params)
    zpred.shape = xi.shape

    fig, ax = plt.subplots()
    ax.scatter(x, y, c=zobs, s=200, vmin=zpred.min(), vmax=zpred.max())
    im = ax.imshow(zpred, extent=[xi.min(), xi.max(), yi.max(), yi.min()],
                   aspect='auto')
    fig.colorbar(im)
    ax.invert_yaxis()
    return fig
main()
In this case, we exactly(ish) recover our original "true" parameters.
True parameters: [1, 0.3, 0.7, 2, 3, 4]
Predicted params: [ 1. 0.3 0.7 2. 3. 4. ]
Residual, RMS(obs - pred): 1.01560615193e-16
As we'll see in a second, this won't always be the case...
Adding Noise
Let's add some noise to our observations. All I've done here is change the generate_example_data function:
def generate_example_data(num, params):
    np.random.seed(1977)  # For consistency
    xy = np.random.random((2, num))
    noise = np.random.normal(0, 0.3, num)
    zobs = gauss2d(xy, *params) + noise
    return xy, zobs
However, the result looks quite different:
And as far as the parameters go:
True parameters: [1, 0.3, 0.7, 2, 3, 4]
Predicted params: [ 1.129 0.263 0.750 1.280 32.333 10.103 ]
Residual, RMS(obs - pred): 0.152444640098
The predicted center hasn't changed much, but the b and c parameters have changed quite a bit.
If we change the center of the function to somewhere slightly outside of our scatter of points:
x0, y0 = -0.3, 1.1
We'll wind up with complete nonsense as a result in the presence of noise! (It still works correctly without noise.)
True parameters: [1, -0.3, 1.1, 2, 3, 4]
Predicted params: [ 0.546 -0.939 0.857 -0.488 44.069 -4.136]
Residual, RMS(obs - pred): 0.235664449826
This is a common problem when fitting a function that decays to zero: any noise in the "tails" can result in a very poor fit. There are a number of strategies to deal with this. One of the easiest is to weight the inversion by the observed z-values. Here's an example for the 1D case (focusing on linearizing the problem): How can I perform a least-squares fitting over multiple data sets fast? If I have time later, I'll add an example of this for the 2D case.
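For what it's worth, here is a sketch of what that weighted, linearized approach could look like in 2D. This is my own illustration of the trick, not code from the linked answer: it assumes all observed z values are positive (so the log is defined) and uses the general quadratic form of log(z) described above.

import numpy as np

def linearized_gauss_center(x, y, z):
    """Estimate the center of a 2D gaussian via a weighted linear fit to log(z).

    log(z) is a quadratic surface in x and y, so it can be fit with ordinary
    least squares; weighting each row by z downplays the noisy tail points.
    """
    # Design matrix for a general quadratic surface in (x, y).
    G = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    d = np.log(z)

    # Weight each equation by its observed z value.
    w = z
    coeffs, *_ = np.linalg.lstsq(G * w[:, None], d * w, rcond=None)
    c0, c1, c2, c3, c4, c5 = coeffs

    # The center is the stationary point of the quadratic: grad(log z) = 0.
    A = np.array([[2 * c3, c4],
                  [c4, 2 * c5]])
    b = -np.array([c1, c2])
    x0, y0 = np.linalg.solve(A, b)
    return x0, y0

The point of the weighting is exactly the issue in the last example: observations far out in the tails, where noise dominates, contribute much less to the fit.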