Scipy's Optimize Curve Fit Limits - python

Is there any way I can provide limits for the Scipy's Optimize Curve Fit?
My example:
def optimized_formula(x, m_1, m_2, y_1, y_2, ratio_2):
    return (log(x[0]) * m_1 + m_2) * ((1 - x[1]/max_age) * (1 - ratio_2)) \
           + (log(x[1]) * y_1 + y_2) * (x[1]/max_age) * ratio_2

popt, pcov = optimize.curve_fit(optimized_formula, usage_and_age, prices)
x[1] is age and max_age is a constant. With that in mind, as the age approaches its maximum, x[1]/max_age approaches 1.
Is it possible to provide a constraint/limit such that 0.3 < x[1]/max_age < 0.7, together with other constraints such as m_1 < 0, m_2 > 0, and so on?

As suggested in another answer, you could use lmfit for this kind of problem, so I'll add an example of how to use it in case someone is interested in this topic, too.
Let's say you have a dataset as follows:
xdata = np.array([177.,180.,183.,187.,189.,190.,196.,197.,201.,202.,203.,204.,206.,218.,225.,231.,234.,
252.,262.,266.,267.,268.,277.,286.,303.])
ydata = np.array([0.81,0.74,0.78,0.75,0.77,0.81,0.73,0.76,0.71,0.74,0.81,0.71,0.74,0.71,
0.72,0.69,0.75,0.59,0.61,0.63,0.64,0.63,0.35,0.27,0.26])
and you want to fit a model to the data which looks like this:
model = n1 + (n2 * x + n3) * 1./ (1. + np.exp(n4 * (n5 - x)))
with the constraints that
0.2 < n1 < 0.8
-0.3 < n2 < 0
Using lmfit (version 0.8.3) you then obtain the following output:
n1: 0.26564921 +/- 0.024765 (9.32%) (init= 0.2)
n2: -0.00195398 +/- 0.000311 (15.93%) (init=-0.005)
n3: 0.87261892 +/- 0.068601 (7.86%) (init= 1.0766)
n4: -1.43507072 +/- 1.223086 (85.23%) (init=-0.36379)
n5: 277.684530 +/- 3.768676 (1.36%) (init= 274)
As you can see, the fit reproduces the data very well and the parameters are in the requested ranges.
Here is the entire code that reproduces the plot with a few additional comments:
from lmfit import minimize, Parameters, Parameter, report_fit
import numpy as np
xdata = np.array([177.,180.,183.,187.,189.,190.,196.,197.,201.,202.,203.,204.,206.,218.,225.,231.,234.,
252.,262.,266.,267.,268.,277.,286.,303.])
ydata = np.array([0.81,0.74,0.78,0.75,0.77,0.81,0.73,0.76,0.71,0.74,0.81,0.71,0.74,0.71,
0.72,0.69,0.75,0.59,0.61,0.63,0.64,0.63,0.35,0.27,0.26])
def fit_fc(params, x, data):
    n1 = params['n1'].value
    n2 = params['n2'].value
    n3 = params['n3'].value
    n4 = params['n4'].value
    n5 = params['n5'].value
    model = n1 + (n2 * x + n3) * 1. / (1. + np.exp(n4 * (n5 - x)))
    return model - data  # that's what you want to minimize
# create a set of Parameters
# 'value' is the initial condition
# 'min' and 'max' define your boundaries
params = Parameters()
params.add('n1', value= 0.2, min=0.2, max=0.8)
params.add('n2', value= -0.005, min=-0.3, max=10**(-10))
params.add('n3', value= 1.0766, min=-1000., max=1000.)
params.add('n4', value= -0.36379, min=-1000., max=1000.)
params.add('n5', value= 274.0, min=0., max=1000.)
# do fit, here with the default leastsq method
result = minimize(fit_fc, params, args=(xdata, ydata))
# write error report
report_fit(params)
xplot = np.linspace(min(xdata), max(xdata), 1000)
yplot = result.values['n1'] + (result.values['n2'] * xplot + result.values['n3']) * \
1./ (1. + np.exp(result.values['n4'] * (result.values['n5'] - xplot)))
#plot results
try:
    import pylab
    pylab.plot(xdata, ydata, 'k+')
    pylab.plot(xplot, yplot, 'r')
    pylab.show()
except ImportError:
    pass
EDIT:
If you use version 0.9.x you need to adjust the code accordingly; check here which changes have been made from 0.8.3 to 0.9.x.
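For instance, a minimal sketch of the 0.9.x-style call, assuming the same fit_fc and params as above: in 0.9.x, minimize() returns a MinimizerResult object and the fitted parameters are queried from that object instead of from params.

# lmfit 0.9.x sketch: minimize() now returns a MinimizerResult
result = minimize(fit_fc, params, args=(xdata, ydata))
report_fit(result.params)  # the fitted parameters live on the result object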

Note: New in version 0.17 of SciPy
Let's suppose you want to fit a model to the data which looks like this:
y = a * t**alpha + b
with the constraint on alpha
0 < alpha < 2
while the other parameters a and b remain free. Then we should use the bounds option of optimize.curve_fit:
import numpy as np
from scipy.optimize import curve_fit
def func(t, a, alpha, b):
    return a * t**alpha + b

param_bounds = ([-np.inf, 0, -np.inf], [np.inf, 2, np.inf])
popt, pcov = curve_fit(func, xdata, ydata, bounds=param_bounds)
Source is here
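For completeness, here is a self-contained sketch of that call with synthetic data; the exponent, noise level, and seed are made up for illustration:

import numpy as np
from scipy.optimize import curve_fit

def func(t, a, alpha, b):
    return a * t**alpha + b

# synthetic data with a known exponent inside the [0, 2] bound
rng = np.random.default_rng(0)
xdata = np.linspace(1, 10, 50)
ydata = func(xdata, 2.0, 1.5, 1.0) + rng.normal(scale=0.1, size=xdata.size)

param_bounds = ([-np.inf, 0, -np.inf], [np.inf, 2, np.inf])
popt, pcov = curve_fit(func, xdata, ydata, bounds=param_bounds)
print(popt)  # alpha is guaranteed to lie within [0, 2]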

Try the lmfit module (http://lmfit.github.io/lmfit-py/). It adds a way to fix or set bounds on parameters for many of the optimization routines in scipy.optimize, including least-squares, and provides many tools to make fitting easier.

Since curve_fit() uses a least-squares approach, you might want to look at scipy.optimize.fmin_slsqp(), which allows you to perform constrained optimization. Check this tutorial on how to use it.
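A minimal sketch of that approach, with made-up data: fmin_slsqp minimizes the sum of squared residuals while the bounds keyword keeps the slope non-positive.

import numpy as np
from scipy.optimize import fmin_slsqp

# made-up data for a line with negative slope
x = np.linspace(0, 10, 30)
y = -0.5 * x + 2.0 + np.random.normal(scale=0.2, size=x.size)

def sse(p):
    slope, intercept = p
    return np.sum((y - (slope * x + intercept))**2)

# the bounds keyword enforces slope <= 0; the intercept stays free
p_opt = fmin_slsqp(sse, x0=[-1.0, 1.0],
                   bounds=[(-np.inf, 0.0), (-np.inf, np.inf)])
print(p_opt)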

Related

fsolve gives weird answers

I want to use fsolve to numerically find the roots of a nonlinear transcendental equation.
The following code does this job.
import numpy as np
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
kappa = 0.1
tau = 90
def equation(x, *parameters):
    kappa, tau = parameters
    return -x + kappa * np.sin(-tau*x)
x = np.linspace(-0.5, 0.5, 35)
roots = fsolve(equation, x, (kappa, tau))

x_2 = np.linspace(-1.5, 1.5, 1500)
plt.plot(x_2, x_2)
plt.plot(x_2, kappa*np.sin(-x_2*tau))
plt.scatter(x, roots)
plt.show()
I can double-check the solutions graphically by plotting the two graphs f1(x) = x and f2(x) = kappa * sin(-x * tau), which I also included in the code.
fsolve gives me some wrong answers, without throwing any errors or convergence problems.
The problem is that I would like to automate the procedure for varying kappa and tau, without checking by hand which answers are wrong and which are right. But with wrong answers in the output, I can't use this method. Is there any other method or option I can use to be on the safe side?
Thanks for the help.
I couldn't run your code (there were errors), and in any case, according to the documentation on scipy's fsolve, you're supposed to pass an initial guess as the second input argument, not a whole range as you did in fsolve(equation, x0, (kappa, tau)).
You could of course pass each value in a loop, looping over every value in the array np.linspace(-0.5, 0.5, 35). Although I do not understand what you are trying to achieve by varying kappa and tau, if I take it that for those given parameters you are interested in finding the roots, here's how I would do it.
import numpy as np
from scipy.optimize import fsolve
import matplotlib.pyplot as plt

# Take it as it is
kappa = 0.1
tau = 90

def equation(x, kappa, tau):
    return -x + kappa * np.sin(-tau*x)

# Initial guess of x = -0.1
SolutionStack = []
x0 = -kappa
y = fsolve(equation, x0, (kappa, tau))
SolutionStack.append(y[0])
y = fsolve(equation, SolutionStack[-1], (kappa, tau))
SolutionStack.append(y[0])
deltaY = SolutionStack[-1] - SolutionStack[0]

# Define tolerance
tol = 5e-4

while (SolutionStack[-1] <= kappa) and (deltaY <= tol):
    y = fsolve(equation, SolutionStack[-1], (kappa, tau))
    SolutionStack.append(y[0])
    deltaY = SolutionStack[-1] - SolutionStack[-2]
    # Obviously a little guesswork is involved here, as it pertains to 0.07
    if deltaY <= tol:
        SolutionStack[-1] = SolutionStack[-1] + 0.07

# Obtaining the roots
Roots = []
Roots.append(SolutionStack[0])
for i in range(len(SolutionStack) - 1):
    if (SolutionStack[i+1] - SolutionStack[i]) <= tol:
        continue
    else:
        Roots.append(SolutionStack[i+1])
Probably not the smartest way to do it (assuming I even understood you correctly), but perhaps you have an idea now.
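Alternatively, one way to automate the filtering of bad fsolve output is to ask fsolve for its full output, keep only converged solutions whose residual really is (numerically) zero, and then collapse near-duplicates. A sketch; the tolerances are arbitrary choices:

import numpy as np
from scipy.optimize import fsolve

kappa, tau = 0.1, 90

def equation(x, kappa, tau):
    return -x + kappa * np.sin(-tau * x)

guesses = np.linspace(-0.5, 0.5, 200)
roots = []
for x0 in guesses:
    sol, info, ier, msg = fsolve(equation, x0, args=(kappa, tau), full_output=True)
    # keep only converged solutions that actually satisfy the equation
    if ier == 1 and abs(equation(sol[0], kappa, tau)) < 1e-10:
        roots.append(sol[0])

# collapse near-duplicate roots found from different starting points
unique_roots = np.unique(np.round(roots, decimals=6))
print(unique_roots)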

Fitting Tanh curves with python

I need to fit a tanh curve like this one:
import numpy as np
import matplotlib.pyplot as plt
from lmfit import Model
def f(x, a1=0.00010, a2=0.00013, a3=0.00013, teta1=1, teta2=0.00555, teta3=0.00555,
      phi1=-50, phi2=600, phi3=-900, a=0.000000019, b=0):
    formule = a1 * np.tanh(teta1 * (x + phi1)) + a2 * np.tanh(teta2 * (x + phi2)) \
              + a3 * np.tanh(teta3 * (x + phi3)) + a * x + b
    return formule

# generate points used to plot
x_plot = np.linspace(-10000, 10000, 1000)

gmodel = Model(f)
result = gmodel.fit(f(x_plot), x=x_plot, a1=1, a2=1, a3=1,
                    teta1=1, teta2=1, teta3=1, phi1=0, phi2=0, phi3=0)

plt.plot(x_plot, f(x_plot), 'bo')
plt.plot(x_plot, result.best_fit, 'r-')
plt.show()
I tried to do something like that, but I got this result:
Is there another way to fit this curve? What am I doing wrong?
Basically your fit is fine (although not very nice from the coding point of view). As always, non-linear fits strongly rely on the initial parameters; yours are just chosen badly. You could either think about how to determine them manually, or use a pre-made package like differential_evolution from scipy.optimize. I am not using this package, but you can find an example here on SE.
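For the manual route, a crude sketch of one heuristic: each tanh step shows up as a peak in the numerical derivative of the data, so the steepest points can seed the phi values. This assumes the test_X/test_Y arrays posted further down, and adjacent samples may well pick the same step:

import numpy as np

# rough initial guesses for the step positions from the derivative
dy = np.gradient(test_Y, test_X)      # numerical derivative on non-uniform grid
idx = np.argsort(np.abs(dy))[-3:]     # three steepest points (crude heuristic)
phi_guesses = -test_X[idx]            # the model uses tanh(teta * (x + phi))
print(phi_guesses)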
I agree with the answers from mikuszefski and F. Win but would like to add another point.
Your model includes a line + 3 tanh functions. It's not entirely clear that the data support that many different tanh functions. If so (and echoing mikuszefski), you will need to tell the fit that these are not identical. Your example starts them off identical, which will make it very difficult for the fit to find a good solution. Either way, it would probably be helpful to be able to easily test whether there really are 1, 2, 3, or more tanh functions (see the sketch at the end of this answer).
You may also want to give not only initial values for your parameters, but also realistic boundaries on them so that the tanh functions are clearly separated and don't wander too far off from where they should be.
To clean up your code and to better allow you to change the number of tanh functions used and place boundary constraints, I would suggest making individual models and adding them as with:
from lmfit import Model

def f_tanh(x, eta=1, phi=0):
    "tanh function"
    return np.tanh(eta * (x + phi))

def f_line(x, slope=0, intercept=0):
    "line function"
    return slope*x + intercept

# create model as line + 2 tanh functions
gmodel = Model(f_line) + Model(f_tanh, prefix='t1_') + Model(f_tanh, prefix='t2_')
Now you can easily create parameters, with
params = gmodel.make_params(slope=0.003, intercept=0.001,
                            t1_eta=0.021, t1_phi=-2000,
                            t2_eta=0.013, t2_phi=600)
With the fit parameters defined, you can place bounds with:
params['t1_eta'].min = 0
params['t2_eta'].min = 0
params['t1_phi'].min = -3000
params['t1_phi'].max = -1000
params['t2_phi'].min = 0
params['t2_phi'].max = 1000
I think all of these will help you better explore the data and the fits to it. Putting this all together, you might have:
import numpy as np
import matplotlib.pyplot as plt
from lmfit import Model

def f_tanh(x, eta=1, phi=0):
    "tanh function"
    return np.tanh(eta * (x + phi))

def f_line(x, slope=0, intercept=0):
    "line function"
    return slope*x + intercept

# line + 2 tanh functions
gmodel = Model(f_line) + Model(f_tanh, prefix='t1_') + Model(f_tanh, prefix='t2_')

# generate "data"
x = np.linspace(-10000, 10000, 1000)
y = gmodel.eval(x=x, slope=0.0001,
                t1_eta=0.010, t1_phi=-2100,
                t2_eta=0.004, t2_phi=740)
y = y + np.random.normal(size=len(x), scale=0.02)

# make parameters with initial values
params = gmodel.make_params(slope=0.003, intercept=0.001,
                            t1_eta=0.021, t1_phi=-2000,
                            t2_eta=0.013, t2_phi=600)

# place realistic but generous constraints to keep tanhs separate
params['t1_eta'].min = 0
params['t2_eta'].min = 0
params['t1_phi'].min = -3000
params['t1_phi'].max = -1000
params['t2_phi'].min = 0
params['t2_phi'].max = 1000

result = gmodel.fit(y, params, x=x)
print(result.fit_report())

plt.plot(x, y, 'bo')
plt.plot(x, result.best_fit, 'r-')
plt.show()
This will give a good fit and plot and find the expected values, within the noise level. Hope that helps get you pointed in the right direction.
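To make the 1-vs-2-vs-3 tanh comparison mentioned above concrete, lmfit's fit result carries information criteria that can be compared across models. A minimal sketch; for a real comparison you would seed and bound the parameters as above rather than relying on the function defaults:

# compare line + 1 tanh against line + 2 tanh via information criteria
model1 = Model(f_line) + Model(f_tanh, prefix='t1_')
model2 = model1 + Model(f_tanh, prefix='t2_')

for gmod in (model1, model2):
    res = gmod.fit(y, gmod.make_params(), x=x)
    print(gmod.name, 'AIC =', res.aic, 'BIC =', res.bic)  # lower is better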
Your function is a bit confusing and you do not really have measured function values: you are basically fitting your function to itself. Ideally you want to replace f(x_plot) in curve_fit() with real experimental data.
A good way to fit a function is using scipy.optimize.curve_fit
from scipy.optimize import curve_fit

popt, pcov = curve_fit(f, x_plot, f(x_plot),
                       p0=[0.00010, 0.00013, 0.00013, 1, 0.00555, 0.00555,
                           -50, 600, -900, 0.000000019, 0])
plt.plot(x_plot, f(x_plot, *popt))
The resulting fit looks like this
with real data :
test_X = np.array(
[-9.77073e+03, -9.29706e+03, -8.82339e+03, -8.34979e+03, -7.87614e+03, -7.40242e+03, -6.92874e+03, -6.45506e+03,
-5.98143e+03, -5.50771e+03, -5.03404e+03, -4.56012e+03, -4.08674e+03, -3.61304e+03, -3.13937e+03, -2.66578e+03,
-2.19210e+03, -1.71845e+03, -1.24478e+03, -9.78925e+02, -9.29077e+02, -8.79059e+02, -8.29082e+02, -7.79092e+02,
-7.29080e+02, -6.79084e+02, -6.29061e+02, -5.79078e+02, -5.29103e+02, -4.79089e+02, -4.29094e+02, -3.79071e+02,
-3.29074e+02, -2.79062e+02, -2.29079e+02, -1.92907e+02, -1.72931e+02, -1.52930e+02, -1.32937e+02, -1.12946e+02,
-9.29511e+01, -7.29438e+01, -5.29292e+01, -3.29304e+01, -1.29330e+01, 7.04455e+00, 2.70676e+01, 4.70634e+01,
6.70526e+01, 8.70340e+01, 1.07056e+02, 1.27037e+02, 1.47045e+02, 1.67033e+02, 1.87039e+02, 2.20765e+02,
2.70680e+02, 3.20699e+02, 3.70693e+02, 4.20692e+02, 4.70696e+02, 5.20704e+02, 5.70685e+02, 6.20710e+02,
6.70682e+02, 7.20705e+02, 7.70707e+02, 8.20704e+02, 8.70713e+02, 9.20691e+02, 9.70700e+02, 1.23926e+03,
1.73932e+03, 2.23932e+03, 2.73926e+03, 3.23924e+03, 3.73926e+03, 4.23952e+03, 4.73926e+03, 5.23930e+03,
5.71508e+03, 6.21417e+03, 6.71413e+03, 7.21412e+03, 7.71410e+03, 8.21405e+03, 8.71402e+03, 9.21423e+03])
test_Y = np.array(
[-3.17679e-04, -3.27541e-04, -3.51184e-04, -3.60672e-04, -3.75965e-04, -3.86888e-04, -4.03222e-04, -4.23262e-04,
-4.38526e-04, -4.51187e-04, -4.61081e-04, -4.67121e-04, -4.96690e-04, -4.94811e-04, -5.10110e-04, -5.18985e-04,
-5.11754e-04, -4.90964e-04, -4.36904e-04, -3.93638e-04, -3.83336e-04, -3.71110e-04, -3.57207e-04, -3.39643e-04,
-3.24155e-04, -2.97296e-04, -2.74653e-04, -2.43700e-04, -1.95574e-04, -1.60716e-04, -1.43363e-04, -1.33610e-04,
-1.30734e-04, -1.26332e-04, -1.26063e-04, -1.24228e-04, -1.23424e-04, -1.20276e-04, -1.16886e-04, -1.21865e-04,
-1.16605e-04, -1.14148e-04, -1.14728e-04, -1.14660e-04, -1.16927e-04, -1.10380e-04, -1.09836e-04, 4.24232e-05,
8.66095e-05, 8.43905e-05, 9.09867e-05, 8.95580e-05, 9.02585e-05, 8.87033e-05, 8.86536e-05, 8.92236e-05,
9.24438e-05, 9.27929e-05, 9.24961e-05, 9.72166e-05, 1.00432e-04, 1.05457e-04, 1.11278e-04, 1.14716e-04,
1.25818e-04, 1.40721e-04, 1.62968e-04, 1.91776e-04, 2.28125e-04, 2.57918e-04, 2.88941e-04, 3.85003e-04,
4.91916e-04, 5.32483e-04, 5.50929e-04, 5.45350e-04, 5.38903e-04, 5.27765e-04, 5.15592e-04, 4.95717e-04,
4.81722e-04, 4.69538e-04, 4.58643e-04, 4.41407e-04, 4.29820e-04, 4.07784e-04, 3.92236e-04, 3.81761e-04])
I tried this:
import numpy
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import warnings

def function(x, a1, a2, a3, teta1, teta2, teta3, phi1, phi2, phi3, a, b):
    formule = a1 * numpy.tanh(teta1 * (x + phi1)) + a2 * numpy.tanh(teta2 * (x + phi2)) \
              + a3 * numpy.tanh(teta3 * (x + phi3)) + a * x + b
    return formule

# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore")  # do not print warnings by genetic algorithm
    val = function(test_X, *parameterTuple)
    return numpy.sum((test_Y - val) ** 2.0)

def generate_Initial_Parameters():
    parameterBounds = []
    parameterBounds.append([1.4e-04, 1.4e-04])
    parameterBounds.append([2.0e-04, 2.0e-04])
    parameterBounds.append([2.5e-04, 2.5e-04])
    parameterBounds.append([0, 2.0e+01])
    parameterBounds.append([0, 4.0e-03])
    parameterBounds.append([0, 4.0e-03])
    parameterBounds.append([-8.0e+01, 0])
    parameterBounds.append([0, 9.0e+02])
    parameterBounds.append([-2.1e+03, 0])
    parameterBounds.append([-3.4e-08, -2.4e-08])
    parameterBounds.append([-2.2e-05*2, 4.2e-05])
    # seed the random number generator for repeatable results (any fixed seed works)
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x

# generate initial parameter values
geneticParameters = generate_Initial_Parameters()

# curve fit the test data
fittedParameters, pcov = curve_fit(function, test_X, test_Y, geneticParameters)
print('Parameters', fittedParameters)

modelPredictions = function(test_X, *fittedParameters)
absError = modelPredictions - test_Y
SE = numpy.square(absError)   # squared errors
MSE = numpy.mean(SE)          # mean squared errors
RMSE = numpy.sqrt(MSE)        # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(test_Y))
print('RMSE:', RMSE)
print('R-squared:', Rsquared)

# ytry = ftry(test_X)  # ftry is not defined anywhere in the post, so it is left out

##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth / 100.0, graphHeight / 100.0), dpi=100)
    axes = f.add_subplot(111)
    # first the raw data as a scatter plot
    axes.plot(test_X, test_Y, 'D')
    # create data for the fitted equation plot
    yModel = function(test_X, *fittedParameters)
    # now the model as a line plot
    axes.plot(test_X, yModel)
    axes.set_xlabel('X Data')  # X axis data label
    axes.set_ylabel('Y Data')  # Y axis data label
    # axes.plot(test_X, ytry)  # omitted: ytry depends on the undefined ftry
    plt.show()
    plt.close('all')  # clean up after using pyplot

graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)
R-squared: 0.9978, not perfect but not so bad

scipy curve_fit strange result

I am trying to fit a distribution with scipy's curve_fit. I tried to fit a one-component exponential function, which resulted in an almost straight line (see figure). I also tried a two-component exponential fit, which seemed to work nicely. Two components just means that a part of the equation repeats with different input parameters. Anyway, here is the one-component fit function:
def Exponential(Z, w0, z0, Z0):
    z = Z - Z0
    termB = (newsigma**2 + z*z0) / (numpy.sqrt(2.0)*newsigma*z0)
    termA = (newsigma**2 - z*z0) / (numpy.sqrt(2.0)*newsigma*z0)
    return w0/2.0 * numpy.exp(-(z**2 / (2.0*newsigma**2))) * (numpy.exp(termA**2)*erfc(termA) + numpy.exp(termB**2)*erfc(termB))
and the fitting is done with
fitexp = curve_fit(Exponential,newx,y2)
Then I tried something, just to try it out. I took two parameters of the two component fit, but did not use them in the calculation.
def ExponentialNew(Z, w0, z0, w1, z1, Z0):
    z = Z - Z0
    termB = (newsigma**2 + z*z0) / (numpy.sqrt(2.0)*newsigma*z0)
    termA = (newsigma**2 - z*z0) / (numpy.sqrt(2.0)*newsigma*z0)
    return w0/2.0 * numpy.exp(-(z**2 / (2.0*newsigma**2))) * (numpy.exp(termA**2)*erfc(termA) + numpy.exp(termB**2)*erfc(termB))
And suddenly this works.
Now, my question is: why? As you can see, there is absolutely no difference in the calculation of the fit; it just gets two extra variables that are not used. Shouldn't this give the same result?
@Andras_Deak, an actual example:
from scipy.special import erfc
import numpy
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
#setup data
x = [-58.,-54.,-50.,-46.,-42.,-38.,-34.,-30.,-26.,-22.,-18.,-14.,-10.,-6.,-2.,2.,6.,10.,14.,18.,22.,26.,30.,34.,38.,42.,46.,50.,54.,58.]
y = [23.06763817, 16.89802085, 17.83258379, 16.63446237, 13.81878965, 12.97965839, 14.30451789, 16.98288216, 22.26811491, 28.56756908, 33.06990344, 38.59842098, 54.19860393, 86.37381604, 137.47253315, 199.49724512, 238.66047662, 219.89405445, 160.68820199, 103.88901303, 65.92405727, 43.84596266, 31.5395342, 25.9610156, 22.71683709, 18.06740651, 13.85362374, 11.12867065, 10.36502799, 11.31855619]
y_err = [17.9823065, 4.13684885, 1.66490726, 2.4109372, 2.93359141, 1.9701747, 3.19214881, 3.65593012, 2.89089074, 3.58922121, 4.25505348, 4.72728874, 6.77736567, 11.3888196, 21.87771722, 39.0087495, 56.6910311, 51.7592369, 26.39750958, 10.62678862, 7.85893395, 8.11741621, 7.91731416, 7.07739132, 5.41818744, 6.11286843, 8.27070757, 7.85323065, 4.26885499, 0.9047867]
#function to fit
def Exponential2(Z, w0, z0, w1, z1, Z0):
    z = Z - Z0
    s = 3.98098937586
    a = z**2 / (2.0*s**2)
    b = (s**2 + z*z0) / (numpy.sqrt(2.0)*s*z0)
    c = (s**2 - z*z0) / (numpy.sqrt(2.0)*s*z0)
    d = (s**2 + z*z1) / (numpy.sqrt(2.0)*s*z1)
    e = (s**2 - z*z1) / (numpy.sqrt(2.0)*s*z1)
    return w0/2.0 * numpy.exp(-a) * (numpy.exp(c**2)*erfc(c) + numpy.exp(b**2)*erfc(b)) \
         + w1/2.0 * numpy.exp(-a) * (numpy.exp(e**2)*erfc(e) + numpy.exp(d**2)*erfc(d))
# derive and set initial guess
ymaxpos = x[numpy.argmax(y)]
p0_2 = [numpy.max(y), 5, numpy.max(y)/2.0, 20, ymaxpos]
#fit
fitexp2 = curve_fit(Exponential2,x,y,p0=p0_2,sigma=y_err)
#get results
w0err = numpy.sqrt(numpy.diag(fitexp2[1]))[0]
z0err = numpy.sqrt(numpy.diag(fitexp2[1]))[1]
w1err = numpy.sqrt(numpy.diag(fitexp2[1]))[2]
z1err = numpy.sqrt(numpy.diag(fitexp2[1]))[3]
w0 = fitexp2[0][0]
z0 = fitexp2[0][1]
w1 = fitexp2[0][2]
z1 = fitexp2[0][3]
Z0 = fitexp2[0][4]
#new x array for smoother curve
smoothx = numpy.arange(-58,59,0.1)
y2 = Exponential2(smoothx,w0,z0,w1,z1,Z0)
print('Exponential 2: w0: ' + str(w0.round(3)) + ' +/- ' + str(w0err.round(3))
      + ' \t z0: ' + str(z0.round(3)) + ' +/- ' + str(z0err.round(3))
      + ' \t w1: ' + str(w1.round(3)) + ' +/- ' + str(w1err.round(3))
      + ' \t\t z1: ' + str(z1.round(3)) + ' +/- ' + str(z1err.round(3)))
#plot
fig = plt.figure()
ax = fig.add_subplot(111)
ax.errorbar(x,y,y_err,fmt='o',markersize=2,label='data')
ax.plot(smoothx,y2,label='fit',color='red')
ax.grid()
ax.legend()
plt.show()
As you can see, the plot does look good, but the returned value z1 is totally unrealistic.
Exponential 2: w0: 312.608 +/- 36.764 z0: 8.263 +/- 1.158 w1: 12.689 +/- 9.138 z1: 1862257.883 +/- 45201809883.8
In my experience curve_fit can sometimes act up and stick with the initial values for the parameters. I would suspect that in your case adding a few fake parameters changed the heuristics of how the relevant parameters are being initialized (although this contradicts the documentation's statement that with no initial values given, they all default to 1).
It helps a lot in obtaining reliable fits if you specify reasonable bounds and initial values for your fitting parameters (I mean the p0 and bounds keywords). The fact that the default starting values should all be 1 suggests that for most use cases, the default won't cut it.
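Applied to the example in the question, a sketch of what that could look like, reusing x, y, y_err, and Exponential2 from above. The bounds are illustrative caps meant to keep z1 from running away, not tuned values, and the bounds keyword requires SciPy >= 0.17:

# rough, illustrative seeds and bounds for the two-component model
p0 = [numpy.max(y), 5.0, numpy.max(y)/2.0, 20.0, x[numpy.argmax(y)]]
lower = [0.0, 0.0, 0.0, 0.0, -60.0]
upper = [numpy.inf, 100.0, numpy.inf, 100.0, 60.0]  # caps z0/z1 so they cannot blow up
popt2, pcov2 = curve_fit(Exponential2, x, y, p0=p0, sigma=y_err,
                         bounds=(lower, upper))
print(popt2)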

Python gaussian fit on simulated gaussian noisy data

I need to interpolate data coming from an instrument using a gaussian fit. To this end I thought about using the curve_fit function from scipy.
Since I'd like to test this functionality on fake data before trying it on the instrument I wrote the following code to generate noisy gaussian data and to fit it:
from scipy.optimize import curve_fit
import numpy
import pylab

# Create a gaussian function
def gaussian(x, a, b, c):
    val = a * numpy.exp(-(x - b)**2 / (2*c**2))
    return val

# Generate fake data.
zMinEntry = 80.0*1E-06
zMaxEntry = 180.0*1E-06
zStepEntry = 0.2*1E-06

x = numpy.arange(zMinEntry, zMaxEntry, zStepEntry, dtype=numpy.float64)
n = len(x)
meanY = zMinEntry + (zMaxEntry - zMinEntry)/2
sigmaY = 10.0E-06
a = 1.0/(sigmaY*numpy.sqrt(2*numpy.pi))
y = gaussian(x, a, meanY, sigmaY) + a*0.1*numpy.random.normal(0, 1, size=len(x))

# Fit
popt, pcov = curve_fit(gaussian, x, y)

# Print results
print("Scale = %.3f +/- %.3f" % (popt[0], numpy.sqrt(pcov[0, 0])))
print("Offset = %.3f +/- %.3f" % (popt[1], numpy.sqrt(pcov[1, 1])))
print("Sigma = %.3f +/- %.3f" % (popt[2], numpy.sqrt(pcov[2, 2])))

pylab.plot(x, y, 'ro')
pylab.plot(x, gaussian(x, popt[0], popt[1], popt[2]))
pylab.grid(True)
pylab.show()
Unfortunately this does not work properly, the output of the code is the following:
Scale = 6174.816 +/- 7114424813.672
Offset = 429.319 +/- 3919751917.830
Sigma = 1602.869 +/- 17923909301.176
And the plotted result is (blue is the fit function, red dots are the noisy input data):
I also tried to look at this answer, but couldn't figure out where my problem is.
Am I missing something here? Or am I using the curve_fit function in the wrong way? Thanks in advance!
I agree with Olaf insofar as it is a question of scale. The optimal parameters differ by many orders of magnitude. However, scaling the parameters with which you generated your toy data does not seem to solve the problem for your actual application. curve_fit uses leastsq, which numerically approximates the Jacobian, and numerical problems arise because of the differences in scale (try to use the full_output keyword in curve_fit).
In my experience it is often best to use fmin, which does not rely on approximated derivatives but uses only function values. You now have to write your own least-squares function that is to be optimized.
Starting values are still important. In your case you can make sufficiently good guesses by taking the maximum amplitude for a and the corresponding x-value for b and c.
In code, it looks like this:
from scipy.optimize import curve_fit, fmin
import numpy
import pylab

# Create a gaussian function
def gaussian(x, a, b, c):
    val = a * numpy.exp(-(x - b)**2 / (2*c**2))
    return val

# Generate fake data.
zMinEntry = 80.0*1E-06
zMaxEntry = 180.0*1E-06
zStepEntry = 0.2*1E-06

x = numpy.arange(zMinEntry, zMaxEntry, zStepEntry, dtype=numpy.float64)
n = len(x)
meanY = zMinEntry + (zMaxEntry - zMinEntry)/2
sigmaY = 10.0E-06
a = 1.0/(sigmaY*numpy.sqrt(2*numpy.pi))
y = gaussian(x, a, meanY, sigmaY) + a*0.1*numpy.random.normal(0, 1, size=len(x))
print(a, meanY, sigmaY)

# estimate starting values from the data
a = y.max()
b = x[numpy.argmax(y)]  # x-position of the maximum (argmax of y, not of a)
c = b

# define a least squares function to optimize
def minfunc(params):
    return sum((y - gaussian(x, params[0], params[1], params[2]))**2)

# fit
popt = fmin(minfunc, [a, b, c])

# Print results
print("Scale = %.3f" % (popt[0]))
print("Offset = %.3f" % (popt[1]))
print("Sigma = %.3f" % (popt[2]))

pylab.plot(x, y, 'ro')
pylab.plot(x, gaussian(x, popt[0], popt[1], popt[2]), lw=2)
pylab.xlim(x.min(), x.max())
pylab.grid(True)
pylab.show()
Looks like some numerical instabilities are creeping into the optimizer. Try scaling the data. With the following data:
zMinEntry = 80.0*1E-06 * 1000
zMaxEntry = 180.0*1E-06 * 1000
zStepEntry = 0.2*1E-06 * 1000
sigmaY = 10.0E-06 * 1000
I get estimates of
Scale = 39.697 +/- 0.526
Offset = 0.130 +/- 0.000
Sigma = -0.010 +/- 0.000
Compare that to the true values:
Scale = 39.894228
Offset = 0.13
Sigma = 0.01
The minus sign of sigma can of course be ignored.
This gives the following plot
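One follow-up: since the fit now runs on rescaled data, the parameters have to be mapped back to the original units afterwards. A sketch, assuming popt holds (Scale, Offset, Sigma) from the scaled fit and only x-like quantities were multiplied by 1000:

# map the fitted parameters back to the original units
scale_factor = 1000.0
offset_real = popt[1] / scale_factor        # b: x was multiplied by 1000
sigma_real = abs(popt[2]) / scale_factor    # c: likewise; the sign is irrelevant
scale_real = popt[0]                        # a: unchanged, y was never rescaled
print(scale_real, offset_real, sigma_real)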
As I said in a comment, if you provide a reasonable initial guess, the fit succeeds, i.e. call curve_fit like this:
popt, pcov = curve_fit(gaussian, x, y, p0=[50000, 0.00012, 0.00002])
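If you prefer not to hard-code the guess, p0 can also be estimated from the data itself with standard moment formulas. A sketch; the abs() guards against noise making the variance estimate negative:

# moment-based initial guess: amplitude, weighted mean, and weighted std
a0 = y.max()
b0 = numpy.sum(x * y) / numpy.sum(y)
c0 = numpy.sqrt(abs(numpy.sum(y * (x - b0)**2) / numpy.sum(y)))
popt, pcov = curve_fit(gaussian, x, y, p0=[a0, b0, c0])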

Getting spline equation from UnivariateSpline object

I'm using UnivariateSpline to construct piecewise polynomials for some data that I have. I would then like to use these splines in other programs (either in C or FORTRAN) and so I would like to understand the equation behind the generated spline.
Here is my code:
import numpy as np
import scipy as sp
from scipy.interpolate import UnivariateSpline
import matplotlib.pyplot as plt
import bisect

data = np.loadtxt('test_C12H26.dat')
Tmid = 800.0
print("Tmid", Tmid)
nmid = bisect.bisect(data[:, 0], Tmid)

fig = plt.figure()
plt.plot(data[:, 0], data[:, 7], ls='', marker='o', markevery=20)

npts = len(data[:, 0])
# print("npts", npts)
w = np.ones(npts)
w[0] = 100
w[nmid] = 100
w[npts-1] = 100

spline1 = UnivariateSpline(data[:nmid, 0], data[:nmid, 7], s=1, w=w[:nmid])
coeffs = spline1.get_coeffs()
print(coeffs)
print(spline1.get_knots())
print(spline1.get_residual())
print(coeffs[0] + coeffs[1] * (data[0, 0] - data[0, 0])
      + coeffs[2] * (data[0, 0] - data[0, 0])**2
      + coeffs[3] * (data[0, 0] - data[0, 0])**3,
      data[0, 7])
print(coeffs[0] + coeffs[1] * (data[nmid, 0] - data[0, 0])
      + coeffs[2] * (data[nmid, 0] - data[0, 0])**2
      + coeffs[3] * (data[nmid, 0] - data[0, 0])**3,
      data[nmid, 7])
print(Tmid, data[-1, 0])

spline2 = UnivariateSpline(data[nmid-1:, 0], data[nmid-1:, 7], s=1, w=w[nmid-1:])
print(spline2.get_coeffs())
print(spline2.get_knots())
print(spline2.get_residual())

plt.plot(data[:, 0], spline1(data[:, 0]))
plt.plot(data[:, 0], spline2(data[:, 0]))
plt.savefig('test.png')
And here is the resulting plot. I believe I have valid splines for each interval, but it looks like my spline equation is not correct... I can't find any reference to what it is supposed to be in the scipy documentation. Does anybody know? Thanks!
The scipy documentation does not have anything to say about how one can take the coefficients and manually generate the spline curve. However, it is possible to figure out how to do this from the existing literature on B-splines. The following function bspleval shows how to construct the B-spline basis functions (the matrix B in the code), from which one can easily generate the spline curve by multiplying the coefficients with the highest-order basis functions and summing:
import numpy as np
import matplotlib.pyplot as plt

def bspleval(x, knots, coeffs, order, debug=False):
    '''
    Evaluate a B-spline at a set of points.

    Parameters
    ----------
    x : list or ndarray
        The set of points at which to evaluate the spline.
    knots : list or ndarray
        The set of knots used to define the spline.
    coeffs : list or ndarray
        The set of spline coefficients.
    order : int
        The order of the spline.

    Returns
    -------
    y : ndarray
        The value of the spline at each point in x.
    '''
    k = order
    t = knots
    m = len(t)
    npts = len(x)
    B = np.zeros((m-1, k+1, npts))

    if debug:
        print('k=%i, m=%i, npts=%i' % (k, m, npts))
        print('t=', t)
        print('coeffs=', coeffs)

    ## Create the zero-order B-spline basis functions.
    for i in range(m-1):
        B[i, 0, :] = np.float64(np.logical_and(x >= t[i], x < t[i+1]))
    if (k == 0):
        B[m-2, 0, -1] = 1.0

    ## Next iteratively define the higher-order basis functions, working from lower order to higher.
    for j in range(1, k+1):
        for i in range(m-j-1):
            if (t[i+j] - t[i] == 0.0):
                first_term = 0.0
            else:
                first_term = ((x - t[i]) / (t[i+j] - t[i])) * B[i, j-1, :]
            if (t[i+j+1] - t[i+1] == 0.0):
                second_term = 0.0
            else:
                second_term = ((t[i+j+1] - x) / (t[i+j+1] - t[i+1])) * B[i+1, j-1, :]
            B[i, j, :] = first_term + second_term
        B[m-j-2, j, -1] = 1.0

    if debug:
        plt.figure()
        for i in range(m-1):
            plt.plot(x, B[i, k, :])
        plt.title('B-spline basis functions')

    ## Evaluate the spline by multiplying the coefficients with the highest-order basis functions.
    y = np.zeros(npts)
    for i in range(m-k-1):
        y += coeffs[i] * B[i, k, :]

    if debug:
        plt.figure()
        plt.plot(x, y)
        plt.title('spline curve')
        plt.show()

    return(y)
To give an example of how this can be used with Scipy's existing univariate spline functions, the following is an example script. This takes the input data and uses Scipy's functional and also its object-oriented approach to spline fitting. Taking the coefficients and knot points from either of the two and using these as inputs to our manually-calculated routine bspleval, we reproduce the same curve that they do. Note that the difference between the manually evaluated curve and Scipy's evaluation method is so small that it is almost certainly floating-point noise.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import splrep, splev, UnivariateSpline

x = np.array([-273.0, -176.4, -79.8, 16.9, 113.5, 210.1, 306.8, 403.4, 500.0])
y = np.array([2.25927498e-53, 2.56028619e-03, 8.64512988e-01, 6.27456769e+00, 1.73894734e+01,
              3.29052124e+01, 5.14612316e+01, 7.20531200e+01, 9.40718450e+01])
x_nodes = np.array([-273.0, -263.5, -234.8, -187.1, -120.3, -34.4, 70.6, 194.6, 337.8, 500.0])
y_nodes = np.array([2.25927498e-53, 3.83520726e-46, 8.46685318e-11, 6.10568083e-04, 1.82380809e-01,
                    2.66344008e+00, 1.18164677e+01, 3.01811501e+01, 5.78812583e+01, 9.40718450e+01])

## Now get scipy's spline fit.
k = 3
tck = splrep(x_nodes, y_nodes, k=k, s=0)
knots = tck[0]
coeffs = tck[1]
print('knot points=', knots)
print('coefficients=', coeffs)

## Now try scipy's object-oriented version. The result is exactly the same as "tck": the knots are the
## same and the coeffs are the same, they are just queried in a different way.
uspline = UnivariateSpline(x_nodes, y_nodes, s=0)
uspline_knots = uspline.get_knots()
uspline_coeffs = uspline.get_coeffs()

## Here are scipy's native spline evaluation methods. Again, "ytck" and "y_uspline" are exactly equal.
ytck = splev(x, tck)
y_uspline = uspline(x)
y_knots = uspline(knots)

## Now let's try our manually-calculated evaluation function.
y_eval = bspleval(x, knots, coeffs, k, debug=False)

plt.plot(x, ytck, label='tck')
plt.plot(x, y_uspline, label='uspline')
plt.plot(x, y_eval, label='manual')

## Next plot the knots and nodes.
plt.plot(x_nodes, y_nodes, 'ko', markersize=7, label='input nodes')  ## nodes
plt.plot(knots, y_knots, 'mo', markersize=5, label='tck knots')      ## knots
plt.xlim((-300.0, 530.0))
plt.legend(loc='best', prop={'size': 14})

plt.figure()
plt.title('difference')
plt.plot(x, ytck - y_uspline, label='tck-uspl')
plt.plot(x, ytck - y_eval, label='tck-manual')
plt.legend(loc='best', prop={'size': 14})
plt.show()
The coefficients given by get_coeffs are B-spline (Basis spline) coefficients, described here: B-spline (Wikipedia)
Probably whatever other program/language you will be using has an implementation. Supply the knot locations and coefficients, and you should be all set.
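One practical note for porting the spline to C or FORTRAN: scipy.interpolate.PPoly can convert the (t, c, k) triple into plain piecewise-polynomial coefficients, which are trivial to evaluate in any language. A sketch with stand-in data, since the data file from the question is not available here:

import numpy as np
from scipy.interpolate import splrep, PPoly

# stand-in data; replace with the columns loaded from test_C12H26.dat
xs = np.linspace(0.0, 10.0, 50)
ys = np.sin(xs)

tck = splrep(xs, ys, s=1)
pp = PPoly.from_spline(tck)
# pp.x holds the breakpoints; pp.c[m, j] is the coefficient of
# (x - pp.x[j])**(k - m) on interval j -- easy to export and evaluate elsewhere
print(pp.x)
print(pp.c)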
