Fitting a Gaussian to an absorption line in Python - python

I am trying to fit a Gaussian to my data, which was taken in a pretty narrow spectral window. We have about 2 points of continuum and then about 10-11 points that are part of the line. It should still be possible to fit it, I think, but curve_fit fails every time, and I am not sure why.
When running I get RuntimeError: Optimal parameters not found: Number of calls to function has reached maxfev = 800.
Code and data:
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
import numpy as np
x = np.arange(13)
xx = np.arange(130)/13.
y = np.array([19699.959 , 21679.445 , 21143.195 , 20602.875 , 16246.769 ,
11635.25 , 8602.465 , 7035.493 , 6697.0337, 6510.092 ,
7717.772 , 12270.446 , 16807.81 ])
# weighted arithmetic mean (corrected - check the section below)
mean = sum(x * y) / sum(y)
sigma = np.sqrt(sum(y * (x - mean)**2) / sum(y))
def Gauss(x, a, x0, sigma):
    return a * np.exp(-(x - x0)**2 / (2 * sigma**2))
popt,pcov = curve_fit(Gauss, x, y, p0=[max(y), mean, sigma])
plt.plot(x, y, 'b+:', label='data')
plt.plot(xx, Gauss(xx, *popt), 'r-', label='fit')
plt.legend()
plt.show()

As the error says, the procedure to find the optimal values doesn't converge. If you really think that what you have can be fitted with a Gaussian curve, this generally means you have a bad starting point.
How you're providing the starting point could have been an issue, particularly the value of sigma, given that at positions 11, 12 and 13 you have what could be the onset of another signal. However, that's not the biggest issue this time; the real problem is that you forgot to add an offset to the Gaussian function:
#            ----> new parameter in signature
#            |
def Gauss(x, y0, a, x0, sigma):
    return y0 + a * np.exp(-(x - x0)**2 / (2 * sigma**2))
#          |
#          -------> adding an offset
Then you can decide how to provide a starting point for the offset; by eye, I set it to 5000:
popt, pcov = curve_fit(Gauss, x, y, p0=[5000, max(y), mean, sigma])
Doing that, I get a fit. But, due to the last three data points, it's not a very nice one.
If you avoid those values, the fit improves significantly.
Edit:
As indicated in the comments, the Gaussian is centered at about 8 and points downwards (silly me, it is an absorption line).
In that case, the offset should sit at about the maximum, ~22000, and the amplitude parameter should be negative, roughly -(max(y) - min(y)) ~ -16000.
As an additional improvement, change xx as follows:
xx = np.linspace(0, 13, 100)
or
xx = np.arange(0, 13, 0.05)
which will give a smooth model curve spanning the full x range of the data.
Checking popt, you get essentially the values estimated above just by looking at the plot, ~(21800, -16000, 8), with a sigma of about 2.7, which was the only parameter I had no immediate feel for how to estimate.
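Putting this together, a minimal sketch of the full absorption-line fit, using the rough by-eye starting values described above, might look like this:
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt

x = np.arange(13)
y = np.array([19699.959, 21679.445, 21143.195, 20602.875, 16246.769,
              11635.25, 8602.465, 7035.493, 6697.0337, 6510.092,
              7717.772, 12270.446, 16807.81])

def Gauss(x, y0, a, x0, sigma):
    # continuum offset plus a (negative-amplitude) Gaussian for an absorption line
    return y0 + a * np.exp(-(x - x0)**2 / (2 * sigma**2))

# starting values estimated by eye: offset near the continuum level,
# negative amplitude, centre near x = 8, rough width guess of 3
p0 = [max(y), -(max(y) - min(y)), 8, 3]
popt, pcov = curve_fit(Gauss, x, y, p0=p0)

xx = np.linspace(0, 13, 100)
plt.plot(x, y, 'b+:', label='data')
plt.plot(xx, Gauss(xx, *popt), 'r-', label='fit')
plt.legend()
plt.show()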
My guess is that you should actually be fitting a mixture of a Gaussian and a Cauchy/Lorentzian lineshape, or better yet a Voigt lineshape, to account for experimental broadening.
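For reference, a rough sketch of such a Voigt fit using scipy.special.voigt_profile (available in recent SciPy versions) could look like the following. Note that voigt_profile is area-normalized, so the amplitude here scales the line area rather than its depth, and the starting values are only loose guesses:
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def voigt_abs(x, y0, a, x0, sigma, gamma):
    # continuum offset plus a (negative) area-normalized Voigt profile
    return y0 + a * voigt_profile(x - x0, sigma, gamma)

# loose guesses: continuum ~22000, negative area ~ depth * width * sqrt(2*pi),
# centre ~8, Gaussian width ~2 and a small Lorentzian width
p0 = [22000, -1e5, 8, 2.0, 0.5]
# popt, pcov = curve_fit(voigt_abs, x, y, p0=p0)   # x, y as above; convergence not guaranteed with so few points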

Related

How to write a function to fit data to a sum of N Gaussian-like peaks without explicitly defining the expression for every possible N?

I am trying to fit a progression of Gaussian peaks to a spectral lineshape.
The progression is a summation of N evenly spaced Gaussian peaks. When coded as a function, the formula for N=1 looks like this:
A * ((e0-i*hf)/e0)**3 * ((S**i)/np.math.factorial(i)) * np.exp(-4*np.log(2)*((x-e0+i*hf)/fwhm)**2)
where A, e0, hf, S and fwhm are to be determined from the fit with some good initial guesses.
Importantly, the parameter i starts at 0 and is incremented by 1 for every additional component.
So, for N = 3 the expression would take the form:
A * ((e0-0*hf)/e0)**3 * ((S**0)/np.math.factorial(0)) * np.exp(-4*np.log(2)*((x-e0+0*hf)/fwhm)**2) +
A * ((e0-1*hf)/e0)**3 * ((S**1)/np.math.factorial(1)) * np.exp(-4*np.log(2)*((x-e0+1*hf)/fwhm)**2) +
A * ((e0-2*hf)/e0)**3 * ((S**2)/np.math.factorial(2)) * np.exp(-4*np.log(2)*((x-e0+2*hf)/fwhm)**2)
All the parameters except i are constant for every component in the summation, and this is intended. i changes in a controlled way depending on the number of components.
I am using curve_fit. One way to code the fitting routine would be to explicitly define the expression for any reasonable N and just use an appropriate one. Here it would be 5 or 6, depending on the spacing, which is determined by hf. I could just define a long function with N components, writing the appropriate i value into each component. I understand how to do that (and did). But I would like to code this more intelligently. My goal is to write a function that will accept any value of N, add the appropriate number of components as described above, compute the expression while incrementing i properly, and return the result.
I have attempted a variety of things. My main hurdle is that I don't know how to tell the program to use a particular N and the corresponding values of i. Finally, after some searching I thought I found a good way to code it with a lambda function.
from scipy.optimize import curve_fit
import numpy as np
def fullfunc(x, p, n):
    def func(x, A, e0, hf, S, fwhm, i):
        return A * ((e0-i*hf)/e0)**3 * ((S**i)/np.math.factorial(i)) * np.exp(-4*np.log(2)*((x-e0+i*hf)/fwhm)**2)
    y_fit = np.zeros_like(x)
    for i in range(n):
        y_fit += func(x, p[0], p[1], p[2], p[3], p[4], i)
    return y_fit
p = [1,26000,1400,1,1000]
x = [27027,25062,23364,21881,20576,19417,18382,17452,16611,15847,15151]
y = [0.01,0.42,0.93,0.97,0.65,0.33,0.14,0.06,0.02,0.01,0.004]
n = 7
fittedParameters, pcov = curve_fit(lambda x,p: fullfunc(x,p,n), x, y, p)
A,e0,hf,S,fwhm = fittedParameters
This gives:
TypeError: <lambda>() takes 2 positional arguments but 7 were given
and I don't understand why. I have a feeling the lambda function can't deal with a list of initial parameters.
I would greatly appreciate any advice on how to make this work without explicitly writing all the equations out, as I find that a bit too rigid.
The x and y ranges provided are samples of real data which give a general idea of what the shape is.
Since you only sum over the range i = 0, 1, ..., n-1, there is no need to resort to complicated lambda constructs that may or may not work in the context of curve_fit. Just define your fit function as the summation of n components:
from matplotlib import pyplot as plt
from scipy.optimize import curve_fit
import numpy as np
def func(x, A, e0, hf, S, fwhm):
    return sum((A * ((e0-i*hf)/e0)**3 * ((S**i)/np.math.factorial(i)) * np.exp(-4*np.log(2)*((x-e0+i*hf)/fwhm)**2)) for i in range(n))
p = [1,26000,1400,1,1000]
x = [27027,25062,23364,21881,20576,19417,18382,17452,16611,15847,15151]
y = [0.01,0.42,0.93,0.97,0.65,0.33,0.14,0.06,0.02,0.01,0.004]
n = 7
fittedParameters, pcov = curve_fit(func, x, y, p0=p)
#A,e0,hf,S,fwhm = fittedParameters
print(fittedParameters)
plt.plot(x, y, "ro", label="data")
x_fit = np.linspace(min(x), max(x), 100)
y_fit = func(x_fit, *fittedParameters)
plt.plot(x_fit, y_fit, label="fit")
plt.legend()
plt.show()
Sample output:
P.S.: By the look of it, these data points are already well fitted with n=1.
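If you would rather not rely on the module-level n, a small factory function can bake the number of components into the fit function instead. This is just a sketch of the same idea, not part of the original answer:
import math
import numpy as np
from scipy.optimize import curve_fit

def make_progression(n):
    """Return a fit function that sums n evenly spaced Gaussian components."""
    def func(x, A, e0, hf, S, fwhm):
        x = np.asarray(x, dtype=float)
        return sum(A * ((e0 - i*hf)/e0)**3 * (S**i / math.factorial(i))
                   * np.exp(-4*np.log(2)*((x - e0 + i*hf)/fwhm)**2)
                   for i in range(n))
    return func

x = [27027, 25062, 23364, 21881, 20576, 19417, 18382, 17452, 16611, 15847, 15151]
y = [0.01, 0.42, 0.93, 0.97, 0.65, 0.33, 0.14, 0.06, 0.02, 0.01, 0.004]
p = [1, 26000, 1400, 1, 1000]

fittedParameters, pcov = curve_fit(make_progression(7), x, y, p0=p)
print(fittedParameters)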

curve_fit fails when using endpoint=False in independent variable

I'm fitting some data (hard-coded here) using a technique described in this question, and it seemed to work fine. However, I realized my xdata was not quite what I wanted it to be, so I used endpoint=False so that my xdata increases from 17 to 27.5 in steps of 0.5. Upon doing this, scipy warned me:
minpack.py:794: OptimizeWarning: Covariance of the parameters could not be estimated
Perhaps this is working as intended and I'm missing some part of how curve_fit, or the Fourier function works, but I would really like to be able to fit this with the correct (albeit only slightly different) x values. My y values do have an offset that the fit removes when it runs successfully, which is fine by me.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
ydata = [48.97266579, 54.97148132, 65.33787537, 69.55269623, 56.5559082, 41.52973366,
28.06554699, 19.01652718, 16.74026489, 19.38094521, 25.63856506, 24.39780998,
18.99308014, 30.67970657, 31.52746582, 45.38796043, 45.3911972, 42.38343811,
41.90969849, 38.00998878, 49.11366463, 70.14483643]
xdata = np.linspace(17, 28, 22, endpoint=False) #, endpoint=False
def make_fourier(na, nb):
    def fourier(x, *a):
        ret = 0.0
        for deg in range(0, na):
            ret += a[deg] * np.cos((deg+1) * 2 * np.pi * x)
        for deg in range(na, na+nb):
            ret += a[deg] * np.sin((deg+1) * 2 * np.pi * x)
        return ret
    return fourier

def obtain_fourier_coef(ydata, harms):
    popt, pcov = curve_fit(make_fourier(harms, harms), xdata, ydata, [0.0]*harms*2)
    plt.plot(xdata, (make_fourier(harms, harms))(xdata, *popt))
    plt.show()
plt.plot(xdata, ydata)
obtain_fourier_coef(ydata, 10)
With endpoint=False:
curve fit results plot
Without endpoint=False:
curve fit results plot
The problem is caused by a combination of
[...] xdata increased from 17 to 27.5 in steps of 0.5.
and
np.cos((deg+1) * 2 * np.pi * x)
If x contains values in steps of 0.5, the values passed to the trigonometric functions are multiples of pi. This makes sin always return 0 and cos return either +1 or -1. Because of this degeneracy the resulting function cannot be fitted.
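A quick numerical check (a sketch, not part of the original answer) makes this degeneracy visible:
import numpy as np

x = np.linspace(17, 28, 22, endpoint=False)   # steps of 0.5, as in the question
for deg in range(3):
    c = np.cos((deg + 1) * 2 * np.pi * x)
    s = np.sin((deg + 1) * 2 * np.pi * x)
    # cos collapses to +/-1 and sin to ~0, so the basis functions carry no information
    print(deg, np.unique(np.round(c, 8)), np.unique(np.round(s, 8)))
With the default endpoint=True the step is 11/21 ≈ 0.524 instead of exactly 0.5, which is presumably why that version fits without complaint.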

SciPy Curve Fit Fails Power Law

So, I'm trying to fit a set of data with a power law of the following kind:
def f(x, N, a):  # Power law fit
    if a > 0:
        return N * x**(-a)
    else:
        return 10.**300

par, cov = scipy.optimize.curve_fit(f, data, time, array([10**(-7), 1.2]))
where the else condition is just to force a to be positive. Using scipy.optimize.curve_fit yields an awful fit (green line), returning values of 1.2e+04 and 1.9e-07 for N and a, respectively, with absolutely no intersection with the data. From fits I've done manually, the values should land around 1e-07 and 1.2 for N and a, respectively, though putting those into curve_fit as initial parameters doesn't change the result. Removing the condition that a be positive results in a worse fit, as it chooses a negative a, which gives a fit with the wrong-sign slope.
I can't figure out how to get a believable, let alone reliable, fit out of this routine, but I can't find any other good Python curve fitting routines. Do I need to write my own least-squares algorithm or is there something I'm doing wrong here?
UPDATE
In the original post, I showed a solution that uses lmfit, which allows you to assign bounds to your parameters. Starting with version 0.17, scipy also allows you to assign bounds to your parameters directly (see the documentation). Please find that solution below, after the EDIT; it can hopefully serve as a minimal example of how to use scipy's curve_fit with parameter bounds.
Original post
As suggested by @Warren Weckesser, you could use lmfit to get this task done, which allows you to assign bounds to your parameters and avoids the 'ugly' if-clause.
Since you do not provide any data, I created some which are shown here:
They follow the law f(x) = 10.5 * x ** (-0.08)
I fit them - as suggested by @roadrunner66 - by transforming the power law into a linear function:
y = N * x ** a
ln(y) = ln(N * x ** a)
ln(y) = a * ln(x) + ln(N)
So I first use np.log on the original data and then do the fit. When I now use lmfit, I get the following output:
[[Variables]]
lN: 2.35450302 +/- 0.019531 (0.83%) (init= 1.704748)
a: -0.08035342 +/- 0.005158 (6.42%) (init=-0.5)
So a is pretty close to the original value and np.exp(2.35450302) gives 10.53 which is also very close to the original value.
The plot then looks as follows; as you can see the fit describes the data very well:
Here is the entire code with a couple of inline comments:
import numpy as np
import matplotlib.pyplot as plt
from lmfit import minimize, Parameters, Parameter, report_fit
# generate some data with noise
xData = np.linspace(0.01, 100., 50)
aOrg = 0.08
Norg = 10.5
yData = Norg * xData ** (-aOrg) + np.random.normal(0, 0.5, len(xData))
plt.plot(xData, yData, 'bo')
plt.show()
# transform data so that we can use a linear fit
lx = np.log(xData)
ly = np.log(yData)
plt.plot(lx, ly, 'bo')
plt.show()
def decay(params, x, data):
    lN = params['lN'].value
    a = params['a'].value
    # our linear model
    model = a * x + lN
    return model - data  # that's what you want to minimize
# create a set of Parameters
params = Parameters()
params.add('lN', value=np.log(5.5), min=0.01, max=100) # value is the initial value
params.add('a', value=-0.5, min=-1, max=-0.001) # min, max define parameter bounds
# do fit, here with leastsq model
result = minimize(decay, params, args=(lx, ly))
# write error report
report_fit(params)
# plot data
xnew = np.linspace(0.01, 100., 5000)
# plot the data
plt.plot(xData, yData, 'bo')
plt.plot(xnew, np.exp(result.values['lN']) * xnew ** (result.values['a']), 'r')
plt.show()
EDIT
Assuming that you have scipy 0.17 installed, you can also do the following using curve_fit. I show it for your original definition of the power law (red line in the plot below) as well as for the logarithmic data (black line in the plot below). The data are generated in the same way as above. The plot then looks as follows:
As you can see, the data are described very well. If you print popt and popt_log, you obtain array([ 10.47463426, 0.07914812]) and array([ 2.35158653, -0.08045776]), respectively (note: for the latter one you have to take the exponential of the first argument - np.exp(popt_log[0]) = 10.502, which is close to the original value).
Here is the entire code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
# generate some data with noise
xData = np.linspace(0.01, 100., 50)
aOrg = 0.08
Norg = 10.5
yData = Norg * xData ** (-aOrg) + np.random.normal(0, 0.5, len(xData))
# get logarithmic data
lx = np.log(xData)
ly = np.log(yData)
def f(x, N, a):
    return N * x ** (-a)

def f_log(x, lN, a):
    return a * x + lN
# optimize using the appropriate bounds
popt, pcov = curve_fit(f, xData, yData, bounds=(0, [30., 20.]))
popt_log, pcov_log = curve_fit(f_log, lx, ly, bounds=([0, -10], [30., 20.]))
xnew = np.linspace(0.01, 100., 5000)
# plot the data
plt.plot(xData, yData, 'bo')
plt.plot(xnew, f(xnew, *popt), 'r')
plt.plot(xnew, f(xnew, np.exp(popt_log[0]), -popt_log[1]), 'k')
plt.show()

numpy polyfit passing through 0

Suppose I have x and y vectors with a weight vector wgt. I can fit a cubic curve (y = a x^3 + b x^2 + c x + d) by using np.polyfit as follows:
y_fit = np.polyfit(x, y, deg=3, w=wgt)
Now, suppose I want to do another fit, but this time, I want the fit to pass through 0 (i.e. y = a x^3 + b x^2 + c x, d = 0), how can I specify a particular coefficient (i.e. d in this case) to be zero?
Thanks
You can try something like the following:
Import curve_fit from scipy, i.e.
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
import numpy as np
Define the curve fitting function. In your case,
def fit_func(x, a, b, c):
    # Curve fitting function
    return a * x**3 + b * x**2 + c * x  # d=0 is implied
Perform the curve fitting,
# Curve fitting
params = curve_fit(fit_func, x, y)
[a, b, c] = params[0]
x_fit = np.linspace(x[0], x[-1], 100)
y_fit = a * x_fit**3 + b * x_fit**2 + c * x_fit
Plot the results if you please,
plt.plot(x, y, '.r') # Data
plt.plot(x_fit, y_fit, 'k') # Fitted curve
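As a side note, the original question also passed a weight vector to np.polyfit. curve_fit does not take weights directly, but it accepts per-point uncertainties through its sigma argument; assuming wgt holds polyfit-style weights (roughly 1/σ), a sketch would be:
# fit_func, x, y as above; wgt as in the question, interpreted as 1/sigma weights
params, _ = curve_fit(fit_func, x, y, sigma=1.0 / np.asarray(wgt))
[a, b, c] = params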
This does not answer the question in the sense of using numpy's polyfit to make the fit pass through the origin, but it does solve the problem.
Hope someone finds it useful :)
You can use np.linalg.lstsq and construct your coefficient matrix manually. To start, I'll create the example data x and y, and the "exact fit" y0:
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(100)
y0 = 0.07 * x ** 3 + 0.3 * x ** 2 + 1.1 * x
y = y0 + 1000 * np.random.randn(x.shape[0])
Now I'll create a full cubic polynomial 'training' or 'independent variable' matrix that includes the constant d column.
XX = np.vstack((x ** 3, x ** 2, x, np.ones_like(x))).T
Let's see what I get if I compute the fit with this dataset and compare it to polyfit:
p_all = np.linalg.lstsq(XX, y)[0]
pp = np.polyfit(x, y, 3)
print np.isclose(pp, p_all).all()
# Returns True
Where I've used np.isclose because the two algorithms do produce very small differences.
You're probably thinking 'that's nice, but I still haven't answered the question'. From here, forcing the fit to have a zero offset is the same as dropping the np.ones column from the array:
p_no_offset = np.linalg.lstsq(XX[:, :-1], y)[0] # use [0] to just grab the coefs
Ok, let's see what this fit looks like compared to our data:
y_fit = np.dot(p_no_offset, XX[:, :-1].T)
plt.plot(x, y0, 'k-', linewidth=3)
plt.plot(x, y_fit, 'y--', linewidth=2)
plt.plot(x, y, 'r.', ms=5)
This gives this figure,
WARNING: When using this method on data that does not actually pass through (x,y)=(0,0) you will bias your estimates of your output solution coefficients (p) because lstsq will be trying to compensate for that fact that there is an offset in your data. Sort of a 'square peg round hole' problem.
Furthermore, you could also fit your data to a cubic only by doing:
p_ = np.linalg.lstsq(XX[:, :1], y)[0]
Here again the warning above applies. If your data contains quadratic, linear or constant terms the estimate of the cubic coefficient will be biased. There can be times when - for numerical algorithms - this sort of thing is useful, but for statistical purposes my understanding is that it is important to include all of the lower terms. If tests turn out to show that the lower terms are not statistically different from zero that's fine, but for safety's sake you should probably leave them in when you estimate your cubic.
Best of luck!

2D Gaussian Fit for intensities at certain coordinates in Python

I have a set of coordinates (x, y, z(x, y)) which describe intensities (z) at coordinates x, y. For a set number of these intensities at different coordinates, I need to fit a 2D Gaussian that minimizes the mean squared error.
The data is in numpy matrices and for each fitting session I will have either 4, 9, 16 or 25 coordinates. Ultimately I just need to get the central position of the gaussian (x_0, y_0) that has smallest MSE.
All of the examples that I have found use scipy.optimize.curve_fit but the input data they have is over an entire mesh rather than a few coordinates.
Any help would be appreciated.
Introduction
There are multiple ways to approach this. You can use non-linear methods (e.g. scipy.optimize.curve_fit), but they'll be slow and aren't guaranteed to converge. You can linearize the problem (fast, unique solution), but any noise in the "tails" of the distribution will cause issues. There are actually a few tricks you can apply to this particular case to avoid the latter issue. I'll show some examples, but I don't have time to demonstrate all of the "tricks" right now.
Just as a side note, a general 2D Gaussian has 6 parameters, so you won't be able to fully fit things with 4 points. However, it sounds like you might be assuming that there's no covariance between x and y and that the variances are the same in each direction (i.e. a perfectly "round" bell curve). If that's the case, then you only need four parameters. If you know the amplitude of the Gaussian, you'll only need three. However, I'm going to start with the general solution, and you can simplify it later on, if you want to.
For the moment, let's focus on solving this problem using non-linear methods (e.g. scipy.optimize.curve_fit).
The general equation for a 2D Gaussian is the one written out in the code below (it comes directly from the Wikipedia article on the Gaussian function): the coefficients a, b and c in the exponent are essentially the entries of 0.5 times the inverse covariance matrix, A is the amplitude, and (x0, y0) is the center.
Generate simplified sample data
Let's write the equation above out:
import numpy as np
import matplotlib.pyplot as plt
def gauss2d(x, y, amp, x0, y0, a, b, c):
    inner = a * (x - x0)**2
    inner += 2 * b * (x - x0)**2 * (y - y0)**2
    inner += c * (y - y0)**2
    return amp * np.exp(-inner)
And then let's generate some example data. To start with, we'll generate some data that will be easy to fit:
np.random.seed(1977) # For consistency
x, y = np.random.random((2, 10))
x0, y0 = 0.3, 0.7
amp, a, b, c = 1, 2, 3, 4
zobs = gauss2d(x, y, amp, x0, y0, a, b, c)
fig, ax = plt.subplots()
scat = ax.scatter(x, y, c=zobs, s=200)
fig.colorbar(scat)
plt.show()
Note that we haven't added any noise, and the center of the distribution is within the range that we have data (i.e. center at 0.3, 0.7 and a scatter of x,y observations between 0 and 1). For the moment, let's stick with this, and then we'll see what happens when we add noise and shift the center.
Non-linear fitting
To start with, let's use scipy.optimize.curve_fit to perform a non-linear least-squares fit to the Gaussian function. (On a side note, you can play around with the exact minimization algorithm by using some of the other functions in scipy.optimize.)
The scipy.optimize functions expect a slightly different function signature than the one we originally wrote above. We could write a wrapper to "translate", but let's just re-write the gauss2d function instead:
def gauss2d(xy, amp, x0, y0, a, b, c):
    x, y = xy
    inner = a * (x - x0)**2
    inner += 2 * b * (x - x0)**2 * (y - y0)**2
    inner += c * (y - y0)**2
    return amp * np.exp(-inner)
All we did was have the function expect the independent variables (x & y) as a single 2xN array.
Now we need to make an initial guess at what the Gaussian curve's parameters actually are. This is optional (the default is all ones, if I recall correctly), but you're likely to have problems converging if 1, 1 is not particularly close to the "true" center of the Gaussian curve. For that reason, we'll use the x and y values of our largest observed z-value as a starting point for the center. I'll leave the rest of the parameters as 1, but if you know that they're likely to consistently be significantly different, change them to something more reasonable.
Here's the full, stand-alone example:
import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as plt
def main():
    x0, y0 = 0.3, 0.7
    amp, a, b, c = 1, 2, 3, 4
    true_params = [amp, x0, y0, a, b, c]
    xy, zobs = generate_example_data(10, true_params)
    x, y = xy

    i = zobs.argmax()
    guess = [1, x[i], y[i], 1, 1, 1]
    pred_params, uncert_cov = opt.curve_fit(gauss2d, xy, zobs, p0=guess)

    zpred = gauss2d(xy, *pred_params)
    print 'True parameters: ', true_params
    print 'Predicted params:', pred_params
    print 'Residual, RMS(obs - pred):', np.sqrt(np.mean((zobs - zpred)**2))

    plot(xy, zobs, pred_params)
    plt.show()

def gauss2d(xy, amp, x0, y0, a, b, c):
    x, y = xy
    inner = a * (x - x0)**2
    inner += 2 * b * (x - x0)**2 * (y - y0)**2
    inner += c * (y - y0)**2
    return amp * np.exp(-inner)

def generate_example_data(num, params):
    np.random.seed(1977) # For consistency
    xy = np.random.random((2, num))
    zobs = gauss2d(xy, *params)
    return xy, zobs

def plot(xy, zobs, pred_params):
    x, y = xy
    yi, xi = np.mgrid[:1:30j, -.2:1.2:30j]
    xyi = np.vstack([xi.ravel(), yi.ravel()])

    zpred = gauss2d(xyi, *pred_params)
    zpred.shape = xi.shape

    fig, ax = plt.subplots()
    ax.scatter(x, y, c=zobs, s=200, vmin=zpred.min(), vmax=zpred.max())
    im = ax.imshow(zpred, extent=[xi.min(), xi.max(), yi.max(), yi.min()],
                   aspect='auto')
    fig.colorbar(im)
    ax.invert_yaxis()
    return fig

main()
In this case, we exactly(ish) recover our original "true" parameters.
True parameters: [1, 0.3, 0.7, 2, 3, 4]
Predicted params: [ 1. 0.3 0.7 2. 3. 4. ]
Residual, RMS(obs - pred): 1.01560615193e-16
As we'll see in a second, this won't always be the case...
Adding Noise
Let's add some noise to our observations. All I've done here is change the generate_example_data function:
def generate_example_data(num, params):
    np.random.seed(1977) # For consistency
    xy = np.random.random((2, num))
    noise = np.random.normal(0, 0.3, num)
    zobs = gauss2d(xy, *params) + noise
    return xy, zobs
However, the result looks quite different:
And as far as the parameters go:
True parameters: [1, 0.3, 0.7, 2, 3, 4]
Predicted params: [ 1.129 0.263 0.750 1.280 32.333 10.103 ]
Residual, RMS(obs - pred): 0.152444640098
The predicted center hasn't changed much, but the b and c parameters have changed quite a bit.
If we change the center of the function to somewhere slightly outside of our scatter of points:
x0, y0 = -0.3, 1.1
We'll wind up with complete nonsense as a result in the presence of noise! (It still works correctly without noise.)
True parameters: [1, -0.3, 1.1, 2, 3, 4]
Predicted params: [ 0.546 -0.939 0.857 -0.488 44.069 -4.136]
Residual, RMS(obs - pred): 0.235664449826
This is a common problem when fitting a function that decays to zero. Any noise in the "tails" can produce a very poor result. There are a number of strategies to deal with this. One of the easiest is to weight the inversion by the observed z-values. Here's an example for the 1D case (focusing on linearizing the problem): How can I perform a least-squares fitting over multiple data sets fast? If I have time later, I'll add an example of this for the 2D case; a rough sketch of the idea follows.
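For completeness, here is a rough sketch (not from the original answer) of what weighting a linearized version of the problem by the observed z-values could look like. It assumes a standard quadratic form in the exponent, so ln(z) is a quadratic polynomial in x and y, and each equation is weighted by its z value so that noisy tail points count less; recovering amp, (x0, y0) and a, b, c from the polynomial coefficients is then a matter of completing the square:
import numpy as np

def fit_gauss2d_linearized(x, y, z):
    """Weighted linear least squares on ln(z) = c0 + c1*x + c2*y + c3*x**2 + c4*x*y + c5*y**2."""
    mask = z > 0                       # ln() only makes sense for positive observations
    x, y, z = x[mask], y[mask], z[mask]
    G = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    w = z                              # weight each row by the observed value itself
    coeffs, *_ = np.linalg.lstsq(G * w[:, None], np.log(z) * w, rcond=None)
    return coeffs

# hypothetical usage with the arrays from the examples above:
# coeffs = fit_gauss2d_linearized(xy[0], xy[1], zobs)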
