Fitting Data to a Square-root or Logarithmic Function [duplicate] - python

I have a set of data and I want to compare which line describes it best (polynomials of different orders, exponential or logarithmic).
I use Python and Numpy and for polynomial fitting there is a function polyfit(). But I found no such functions for exponential and logarithmic fitting.
Are there any? Or how else can I solve this?

For fitting y = A + B log x, just fit y against (log x).
>>> x = numpy.array([1, 7, 20, 50, 79])
>>> y = numpy.array([10, 19, 30, 35, 51])
>>> numpy.polyfit(numpy.log(x), y, 1)
array([ 8.46295607, 6.61867463])
# y ≈ 8.46 log(x) + 6.62
For fitting y = Ae^(Bx), taking the logarithm of both sides gives log y = log A + Bx. So fit (log y) against x.
Note that fitting (log y) as if it were linear will emphasize small values of y, causing large deviations for large y. This is because polyfit (linear regression) works by minimizing ∑i (ΔYi)² = ∑i (Yi − Ŷi)². When Yi = log yi, the residuals are ΔYi = Δ(log yi) ≈ Δyi / |yi|. So even if polyfit makes a very bad decision for large y, the "divide-by-|y|" factor will compensate for it, causing polyfit to favor small values.
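A quick numeric illustration of that effect (not in the original answer): an absolute error of 1 in y is a large error in log y when y is small, but a tiny one when y is large.
>>> numpy.log(3) - numpy.log(2)      # Δy = 1 at y = 2
0.405465...
>>> numpy.log(101) - numpy.log(100)  # the same Δy = 1 at y = 100
0.009950...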
This could be alleviated by giving each entry a "weight" proportional to y. polyfit supports weighted-least-squares via the w keyword argument.
>>> x = numpy.array([10, 19, 30, 35, 51])
>>> y = numpy.array([1, 7, 20, 50, 79])
>>> numpy.polyfit(x, numpy.log(y), 1)
array([ 0.10502711, -0.40116352])
# y ≈ exp(-0.401) * exp(0.105 * x) = 0.670 * exp(0.105 * x)
# (^ biased towards small values)
>>> numpy.polyfit(x, numpy.log(y), 1, w=numpy.sqrt(y))
array([ 0.06009446, 1.41648096])
# y ≈ exp(1.42) * exp(0.0601 * x) = 4.12 * exp(0.0601 * x)
# (^ not so biased)
Note that Excel, LibreOffice and most scientific calculators typically use the unweighted (biased) formula for exponential regression / trend lines. If you want your results to be compatible with these platforms, do not include the weights, even if weighting gives better results.
Now, if you can use scipy, you could use scipy.optimize.curve_fit to fit any model without transformations.
For y = A + B log x the result is the same as the transformation method:
>>> x = numpy.array([1, 7, 20, 50, 79])
>>> y = numpy.array([10, 19, 30, 35, 51])
>>> scipy.optimize.curve_fit(lambda t,a,b: a+b*numpy.log(t), x, y)
(array([ 6.61867467, 8.46295606]),
array([[ 28.15948002, -7.89609542],
[ -7.89609542, 2.9857172 ]]))
# y ≈ 6.62 + 8.46 log(x)
For y = Ae^(Bx), however, we can get a better fit since it computes Δy directly, without the log transform. But we need to provide an initial guess so curve_fit can reach the desired local minimum.
>>> x = numpy.array([10, 19, 30, 35, 51])
>>> y = numpy.array([1, 7, 20, 50, 79])
>>> scipy.optimize.curve_fit(lambda t,a,b: a*numpy.exp(b*t), x, y)
(array([ 5.60728326e-21, 9.99993501e-01]),
array([[ 4.14809412e-27, -1.45078961e-08],
[ -1.45078961e-08, 5.07411462e+10]]))
# oops, definitely wrong.
>>> scipy.optimize.curve_fit(lambda t,a,b: a*numpy.exp(b*t), x, y, p0=(4, 0.1))
(array([ 4.88003249, 0.05531256]),
array([[ 1.01261314e+01, -4.31940132e-02],
[ -4.31940132e-02, 1.91188656e-04]]))
# y ≈ 4.88 exp(0.0553 x). much better.
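One common trick (a sketch, not part of the original answer): reuse the biased log-transform fit from above as the initial guess, instead of inventing p0 by hand.
>>> B0, logA0 = numpy.polyfit(x, numpy.log(y), 1)
>>> scipy.optimize.curve_fit(lambda t,a,b: a*numpy.exp(b*t), x, y,
...                          p0=(numpy.exp(logA0), B0))
# should converge to roughly a = 4.88, b = 0.0553, the same minimum as the p0=(4, 0.1) call above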

You can also fit a set of data to whatever function you like using curve_fit from scipy.optimize. For example, if you want to fit an exponential function (from the documentation):
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def func(x, a, b, c):
    return a * np.exp(-b * x) + c
x = np.linspace(0,4,50)
y = func(x, 2.5, 1.3, 0.5)
yn = y + 0.2*np.random.normal(size=len(x))
popt, pcov = curve_fit(func, x, yn)
And then if you want to plot, you could do:
plt.figure()
plt.plot(x, yn, 'ko', label="Original Noised Data")
plt.plot(x, func(x, *popt), 'r-', label="Fitted Curve")
plt.legend()
plt.show()
(Note: the * in front of popt when you plot will expand out the terms into the a, b, and c that func is expecting.)
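Equivalently (a small sketch of the same idea), you can unpack popt into named variables first:
a_fit, b_fit, c_fit = popt
print(a_fit, b_fit, c_fit)   # should land near the 2.5, 1.3, 0.5 used to generate the data
# func(x, *popt) is exactly func(x, a_fit, b_fit, c_fit)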

I was having some trouble with this so let me be very explicit so noobs like me can understand.
Let's say that we have a data file or something like that:
# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
import numpy as np
import sympy as sym
"""
Generate some data, let's imagine that you already have this.
"""
x = np.linspace(0, 3, 50)
y = np.exp(x)
"""
Plot your data
"""
plt.plot(x, y, 'ro',label="Original Data")
"""
brute force to avoid errors
"""
x = np.array(x, dtype=float) #transform your data in a numpy array of floats
y = np.array(y, dtype=float) #so the curve_fit can work
"""
create a function to fit with your data. a, b, c and d are the coefficients
that curve_fit will calculate for you.
In this part you need to guess and/or use mathematical knowledge to find
a function that resembles your data
"""
def func(x, a, b, c, d):
    return a*x**3 + b*x**2 + c*x + d
"""
make the curve_fit
"""
popt, pcov = curve_fit(func, x, y)
"""
The result is:
popt[0] = a , popt[1] = b, popt[2] = c and popt[3] = d of the function,
so f(x) = popt[0]*x**3 + popt[1]*x**2 + popt[2]*x + popt[3].
"""
print "a = %s , b = %s, c = %s, d = %s" % (popt[0], popt[1], popt[2], popt[3])
"""
Use sympy to generate the LaTeX syntax of the function
"""
xs = sym.Symbol(r'\lambda')
tex = sym.latex(func(xs,*popt)).replace('$', '')
plt.title(r'$f(\lambda)= %s$' %(tex),fontsize=16)
"""
Print the coefficients and plot the function.
"""
plt.plot(x, func(x, *popt), label="Fitted Curve") # same as the commented-out line below
#plt.plot(x, popt[0]*x**3 + popt[1]*x**2 + popt[2]*x + popt[3], label="Fitted Curve")
plt.legend(loc='upper left')
plt.show()
the result is:
a = 0.849195983017 , b = -1.18101681765, c = 2.24061176543, d = 0.816643894816

Here's a linearization option on simple data that uses tools from scikit learn.
Given
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import FunctionTransformer
np.random.seed(123)
# General Functions
def func_exp(x, a, b, c):
    """Return values from a general exponential function."""
    return a * np.exp(b * x) + c

def func_log(x, a, b, c):
    """Return values from a general log function."""
    return a * np.log(b * x) + c
# Helper
def generate_data(func, *args, jitter=0):
    """Return a tuple of arrays with random data along a general function."""
    xs = np.linspace(1, 5, 50)
    ys = func(xs, *args)
    noise = jitter * np.random.normal(size=len(xs)) + jitter
    xs = xs.reshape(-1, 1)                  # xs[:, np.newaxis]
    ys = (ys + noise).reshape(-1, 1)
    return xs, ys
transformer = FunctionTransformer(np.log, validate=True)
Code
Fit exponential data
# Data
x_samp, y_samp = generate_data(func_exp, 2.5, 1.2, 0.7, jitter=3)
y_trans = transformer.fit_transform(y_samp) # 1
# Regression
regressor = LinearRegression()
results = regressor.fit(x_samp, y_trans) # 2
model = results.predict
y_fit = model(x_samp)
# Visualization
plt.scatter(x_samp, y_samp)
plt.plot(x_samp, np.exp(y_fit), "k--", label="Fit") # 3
plt.title("Exponential Fit")
Fit log data
# Data
x_samp, y_samp = generate_data(func_log, 2.5, 1.2, 0.7, jitter=0.15)
x_trans = transformer.fit_transform(x_samp) # 1
# Regression
regressor = LinearRegression()
results = regressor.fit(x_trans, y_samp) # 2
model = results.predict
y_fit = model(x_trans)
# Visualization
plt.scatter(x_samp, y_samp)
plt.plot(x_samp, y_fit, "k--", label="Fit") # 3
plt.title("Logarithmic Fit")
Details
General Steps
Apply a log operation to data values (x, y or both)
Regress the data to a linearized model
Plot by "reversing" any log operations (with np.exp()) and fit to original data
Assuming our data follows an exponential trend, a general equation+ may be:
y = A * exp(B * x) + C
We can linearize this equation (e.g. y = intercept + slope * x) by taking the log:
log(y - C) = log(A) + B * x
Given a linearized equation++ and the regression parameters, we could calculate:
A via intercept (ln(A))
B via slope (B)
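For instance, a sketch of recovering A and B from the exponential fit above (assuming results is the LinearRegression fitted on y_trans; the jitter and the C term will bias the estimates somewhat):
B = results.coef_[0][0]             # slope of the linearized fit
A = np.exp(results.intercept_[0])   # the intercept is ln(A), so exponentiate it
print(A, B)                         # roughly comparable to the 2.5 and 1.2 used to generate the data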
Summary of Linearization Techniques
Relationship | Example | General Eqn. | Altered Var. | Linearized Eqn.
-------------|------------|----------------------|----------------|------------------------------------------
Linear | x | y = B * x + C | - | y = C + B * x
Logarithmic | log(x) | y = A * log(B*x) + C | log(x) | y = C + A * (log(B) + log(x))
Exponential | 2**x, e**x | y = A * exp(B*x) + C | log(y) | log(y-C) = log(A) + B * x
Power | x**2 | y = B * x**N + C | log(x), log(y) | log(y-C) = log(B) + N * log(x)
+Note: linearizing exponential functions works best when the noise is small and C=0. Use with caution.
++Note: while altering x data helps linearize exponential data, altering y data helps linearize log data.
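As a small sketch of the Power row above (assuming C = 0): fit log(y) against log(x), then the slope is N and the exponentiated intercept is B.
x_p = np.linspace(1, 5, 50)
y_p = 3.0 * x_p ** 2.0                        # y = B * x**N with B = 3, N = 2, C = 0
N, logB = np.polyfit(np.log(x_p), np.log(y_p), 1)
print(np.exp(logB), N)                        # ~3.0 and ~2.0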

Well I guess you can always use:
np.log --> natural log
np.log10 --> base 10
np.log2 --> base 2
Slightly modifying IanVS's answer:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def func(x, a, b, c):
    #return a * np.exp(-b * x) + c
    return a * np.log(b * x) + c
x = np.linspace(1,5,50) # changed boundary conditions to avoid taking log(0)
y = func(x, 2.5, 1.3, 0.5)
yn = y + 0.2*np.random.normal(size=len(x))
popt, pcov = curve_fit(func, x, yn)
plt.figure()
plt.plot(x, yn, 'ko', label="Original Noised Data")
plt.plot(x, func(x, *popt), 'r-', label="Fitted Curve")
plt.legend()
plt.show()
This results in the following graph:

We demonstrate features of lmfit while solving both problems.
Given
import lmfit
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(123)
# General Functions
def func_log(x, a, b, c):
    """Return values from a general log function."""
    return a * np.log(b * x) + c
# Data
x_samp = np.linspace(1, 5, 50)
_noise = np.random.normal(size=len(x_samp), scale=0.06)
y_samp = 2.5 * np.exp(1.2 * x_samp) + 0.7 + _noise
y_samp2 = 2.5 * np.log(1.2 * x_samp) + 0.7 + _noise
Code
Approach 1 - lmfit Model
Fit exponential data
regressor = lmfit.models.ExponentialModel() # 1
initial_guess = dict(amplitude=1, decay=-1) # 2
results = regressor.fit(y_samp, x=x_samp, **initial_guess)
y_fit = results.best_fit
plt.plot(x_samp, y_samp, "o", label="Data")
plt.plot(x_samp, y_fit, "k--", label="Fit")
plt.legend()
Approach 2 - Custom Model
Fit log data
regressor = lmfit.Model(func_log) # 1
initial_guess = dict(a=1, b=.1, c=.1) # 2
results = regressor.fit(y_samp2, x=x_samp, **initial_guess)
y_fit = results.best_fit
plt.plot(x_samp, y_samp2, "o", label="Data")
plt.plot(x_samp, y_fit, "k--", label="Fit")
plt.legend()
Details
Choose a regression class
Supply named, initial guesses that respect the function's domain
You can determine the inferred parameters from the regressor object. Example:
regressor.param_names
# ['decay', 'amplitude']
To make predictions, use the ModelResult.eval() method.
model = results.eval
y_pred = model(x=np.array([1.5]))
Note: the ExponentialModel() follows a decay function, which accepts two parameters, one of which is negative.
See also ExponentialGaussianModel(), which accepts more parameters.
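For instance, a sketch of recovering A and B in y = A * exp(B * x) from the Approach 1 fit (assuming results is that ModelResult; ExponentialModel is amplitude * exp(-x / decay), hence the sign flip):
A = results.best_values["amplitude"]
B = -1.0 / results.best_values["decay"]
print(A, B)   # should be close to the 2.5 and 1.2 used to generate y_samp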
Install the library via > pip install lmfit.

Wolfram has a closed form solution for fitting an exponential. They also have similar solutions for fitting a logarithmic and power law.
I found this to work better than scipy's curve_fit. Especially when you don't have data "near zero". Here is an example:
import numpy as np
import matplotlib.pyplot as plt
# Fit the function y = A * exp(B * x) to the data
# returns (A, B)
# From: https://mathworld.wolfram.com/LeastSquaresFittingExponential.html
def fit_exp(xs, ys):
    S_x2_y = 0.0
    S_y_lny = 0.0
    S_x_y = 0.0
    S_x_y_lny = 0.0
    S_y = 0.0
    for (x, y) in zip(xs, ys):
        S_x2_y += x * x * y
        S_y_lny += y * np.log(y)
        S_x_y += x * y
        S_x_y_lny += x * y * np.log(y)
        S_y += y
    a = (S_x2_y * S_y_lny - S_x_y * S_x_y_lny) / (S_y * S_x2_y - S_x_y * S_x_y)
    b = (S_y * S_x_y_lny - S_x_y * S_y_lny) / (S_y * S_x2_y - S_x_y * S_x_y)
    return (np.exp(a), b)
xs = [33, 34, 35, 36, 37, 38, 39, 40, 41, 42]
ys = [3187, 3545, 4045, 4447, 4872, 5660, 5983, 6254, 6681, 7206]
(A, B) = fit_exp(xs, ys)
plt.figure()
plt.plot(xs, ys, 'o-', label='Raw Data')
plt.plot(xs, [A * np.exp(B *x) for x in xs], 'o-', label='Fit')
plt.title('Exponential Fit Test')
plt.xlabel('X')
plt.ylabel('Y')
plt.legend(loc='best')
plt.tight_layout()
plt.show()

Related

fitting step function with variation in the step location with scipy optimize curve_fit

I am trying to fit x y data which look something like
x = np.linspace(-2, 2, 1000)
a = 0.5
yl = np.ones_like(x[x < a]) * -0.4 + np.random.normal(0, 0.05, x[x < a].shape[0])
yr = np.ones_like(x[x >= a]) * 0.4 + np.random.normal(0, 0.05, x[x >= a].shape[0])
y = np.concatenate((yl, yr))
plt.scatter(x, y, s=2, color='k')
I'm using a variation of the Heaviside step function
def f(x, a, b): return 0.5 * b * (np.sign(x - a))
and fitting with
popt, pcov = curve_fit(f, x, y, p0=p)
where p is some initial guess.
For any p, curve_fit fits only b and not a.
for example:
popt, pcov = curve_fit(f, x, y, p0=[-1.0, 0])
we get that popt is [-1., 0.20117665]
popt, pcov = curve_fit(f, x, y, p0=[.5, 2])
we get that popt is [.5, 0.79902]
popt, pcov = curve_fit(f, x, y, p0=[1.5, -2])
we get that popt is [1.5, 0.40128229]
Why is curve_fit not fitting a?
As mentioned by others, curve_fit (and all the other solvers in scipy.optimize) work well for optimizing continuous but not discrete variables. They all work by making small (like, at the 1.e-7 level) changes to the parameter values and seeing what (if any) change that makes in the result, and using that change to refine those values until the smallest residual is found. With your model function using np.sign:
def f(x, a, b): return 0.5 * b * (np.sign(x - a))
such a small change in the value of a will not change the model or fit result at all. That is, first the fit will try the starting value of, say, a=-1.0 or a=0.5, and then will try a=-0.999999995 or a=0.500000005. Those will both give the same result for np.sign(x-a). The fit does not know that it would need to change a by 1 to have any effect on the result. It cannot know this. np.sign() and np.sin() differ by one letter, but behave very differently in this respect.
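A tiny sketch of that point: nudging a by the kind of step the solver uses leaves np.sign(x - a) completely unchanged, so the numerical derivative with respect to a is zero.
import numpy as np
x = np.linspace(-2, 2, 1000)
a = 0.5
print(np.array_equal(np.sign(x - a), np.sign(x - (a + 1e-7))))   # True: no element changes sign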
It is pretty common for real data to take a step but to be sampled finely enough so that the step does not happen completely in one step. In that case, you would be able to model the step with a variety of functional forms (linear ramp, error function, arc-tangent, logistic, etc). The thorough answer from @JamesPhilipps gives one approach. I would probably use lmfit (being one of its main authors) and be willing to guess starting values for the parameters from looking at the data, perhaps:
import numpy as np
x = np.linspace(-2, 2, 1000)
a = 0.5
yl = np.ones_like(x[x < a]) * -0.4 + np.random.normal(0, 0.05, x[x < a].shape[0])
yr = np.ones_like(x[x >= a]) * 0.4 + np.random.normal(0, 0.05, x[x >= a].shape[0])
y = np.concatenate((yl, yr))
from lmfit.models import StepModel, ConstantModel
model = StepModel() + ConstantModel()
params = model.make_params(center=0, sigma=1, amplitude=1., c=-0.5)
result = model.fit(y, params, x=x)
print(result.fit_report())
import matplotlib.pyplot as plt
plt.scatter(x, y, label='data')
plt.plot(x, result.best_fit, marker='o', color='r', label='fit')
plt.show()
which would give a good fit and print out results of
[[Model]]
(Model(step, form='linear') + Model(constant))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 50
# data points = 1000
# variables = 4
chi-square = 2.32729556
reduced chi-square = 0.00233664
Akaike info crit = -6055.04839
Bayesian info crit = -6035.41737
## Warning: uncertainties could not be estimated:
[[Variables]]
amplitude: 0.80013762 (init = 1)
center: 0.50083312 (init = 0)
sigma: 4.6009e-04 (init = 1)
c: -0.40006255 (init = -0.5)
Note that it will find the center of the step because it assumed there was some finite width (sigma) to the step, but then found that width to be smaller than the step size in x. But also note that it cannot calculate the uncertainties in the parameters because, as above, a small change in center (your a) near the solution does not change the resulting fit. FWIW the StepModel can use a linear, error-function, arc-tangent, or logistic as the step function.
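For example, a minimal variation of the model above using an error-function step instead of the default linear ramp (the other available forms are 'atan' and 'logistic'):
model = StepModel(form='erf') + ConstantModel()
params = model.make_params(center=0, sigma=1, amplitude=1., c=-0.5)
result = model.fit(y, params, x=x)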
If you had constructed the test data to have a small width to the step, say with something like
from scipy.special import erf
y = 0.638 * erf((x-0.574)/0.005) + np.random.normal(0, 0.05, len(x))
then the fit would have been able to find the best solution and evaluate the uncertainties.
I hope that explains why the fit with your model function could not refine the value of a, and what might be done about it.
Here is a graphical Python fitter using your data and function, with scipy's differential_evolution genetic algorithm module used to provide the initial parameter estimates for curve_fit. That module uses the Latin Hypercube algorithm to ensure a thorough search of parameter space, which requires bounds within which to search. In this example, those bounds are taken from the data max and min values.
import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import warnings
# generate data for testing
x = numpy.linspace(-2, 2, 1000)
a = 0.5
yl = numpy.ones_like(x[x < a]) * -0.4 + numpy.random.normal(0, 0.05, x[x < a].shape[0])
yr = numpy.ones_like(x[x >= a]) * 0.4 + numpy.random.normal(0, 0.05, x[x >= a].shape[0])
y = numpy.concatenate((yl, yr))
# alias data to match previous example
xData = x
yData = y
def func(x, a, b): # variation of the Heaviside step function
    return 0.5 * b * (numpy.sign(x - a))
# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore") # do not print warnings by genetic algorithm
    val = func(xData, *parameterTuple)
    return numpy.sum((yData - val) ** 2.0)
def generate_Initial_Parameters():
    # min and max used for bounds
    maxX = max(xData)
    minX = min(xData)
    parameterBounds = []
    parameterBounds.append([minX, maxX]) # search bounds for a
    parameterBounds.append([minX, maxX]) # search bounds for b
    # "seed" the numpy random number generator for repeatable results
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x
# by default, differential_evolution polishes its best solution with a bounded local minimizer
geneticParameters = generate_Initial_Parameters()
# now call curve_fit without passing bounds from the genetic algorithm,
# just in case the best fit parameters are outside those bounds
fittedParameters, pcov = curve_fit(func, xData, yData, geneticParameters)
print('Fitted parameters:', fittedParameters)
print()
modelPredictions = func(xData, *fittedParameters)
absError = modelPredictions - yData
SE = numpy.square(absError) # squared errors
MSE = numpy.mean(SE) # mean squared errors
RMSE = numpy.sqrt(MSE) # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))
print()
print('RMSE:', RMSE)
print('R-squared:', Rsquared)
print()
##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
    axes = f.add_subplot(111)
    # first the raw data as a scatter plot
    axes.plot(xData, yData, 'D')
    # create data for the fitted equation plot
    xModel = numpy.linspace(min(xData), max(xData))
    yModel = func(xModel, *fittedParameters)
    # now the model as a line plot
    axes.plot(xModel, yModel)
    axes.set_xlabel('X Data') # X axis data label
    axes.set_ylabel('Y Data') # Y axis data label
    plt.show()
    plt.close('all') # clean up after using pyplot
graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)
Or you could say that a Heaviside step can be approximated by a sigmoid function:
H(x) ≈ 1 / (1 + exp(-2*k*x))
or in your case:
0.5 * b * sign(x - a) ≈ b / (1 + exp(-2*k*(x - a))) - b/2
You add a parameter k, but hopefully it will be big enough in the end, and you get rid of it to find the two other parameters.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
x = np.linspace(-2, 2, 1000)
a = 0.5
yl = np.ones_like(x[x < a]) * -0.4 + np.random.normal(0, 0.05, x[x < a].shape[0])
yr = np.ones_like(x[x >= a]) * 0.4 + np.random.normal(0, 0.05, x[x >= a].shape[0])
y = np.concatenate((yl, yr))
plt.scatter(x, y, s=2, color='k')
# def f(x, a, b): return 0.5 * b * (np.sign(x - a))
def g(x, a, b, k): return b / (1 + np.exp(-2 * k * (x - a))) - b / 2
y_sigmoid = g(x, a, 0.8, 10)
plt.scatter(x, y_sigmoid, s=2, color='g')
popt, pcov = curve_fit(g, x, y, p0=[-1.0, 0, 1])
# popt, pcov = curve_fit(f, x, y, p0=[-1.0, 0])
print(popt)
plt.scatter(x, g(x, *popt), s=2, color='r')
which gives, as expected:
[5.02081214e-01 8.03257583e-01 3.33970547e+03]
(green: random soft sigmoid, red: curve_fit result)

How can I do a better curve fitting with a gaussian function like this?

I have data and I am fitting it with a Gaussian curve fit. The blue bullets are my data. The Gaussian starts at zero and looks like the red curve, but I want something that looks more like the green curve. All the Gaussian curve fitting examples I found on the internet start at zero. Maybe there is another function that can change the starting y value, or something like that?
Here's my code so far:
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
import numpy as np
import os
import csv
path = 'Proben Bilder v06 results'
filename = '00_sumListe.csv'
# read csv file with scale data
x = []
y = []
with open(os.path.join(path, filename), 'r') as csvfile:
    sumFile = csv.reader(csvfile, delimiter=',')
    for row in sumFile:
        id = float(row[0])
        sumListe = -float(row[1])
        x = np.append(x, id)
        y = np.append(y, sumListe)
y = y-min(y)
# x = np.arange(10)
# y = np.array([0, 1, 2, 3, 4, 5, 4, 3, 2, 1])
# weighted arithmetic mean (corrected - check the section below)
mean = sum(x * y) / sum(y)
sigma = np.sqrt(sum(y * (x - mean)**2) / sum(y))
def gauss(x, a, x0, sigma): # x0 = mu (the mean)
    return a * np.exp(-(x - x0)**2 / (2 * sigma**2))
popt, pcov = curve_fit(gauss, x, y, p0=[max(y), mean, sigma])
# plt.gca().invert_yaxis()
plt.plot(x, y, 'b+:', label='data')
plt.plot(x, gauss(x, *popt), 'r-', label='fit')
plt.legend()
plt.title('Fig. 3 - Fit for Time Constant')
plt.xlabel('steps')
plt.ylabel('mean value')
plt.show()
My data is a bit too big to write here... I can't upload it, or can I?
Does anyone have a better idea?
You could modify your gauss function so that there is an offset in the y axis to potentially give you a better fit. This requires you to add an extra initial guess in p0:
# your code here
def gauss2(x, b, a, x0, sigma):
    return b + (a * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)))
popt, pcov = curve_fit(gauss2, x, y, p0=[10, max(y), mean, sigma])

How to fit a polynomial with some of the coefficients constrained?

Using NumPy's polyfit (or something similar) is there an easy way to get a solution where one or more of the coefficients are constrained to a specific value?
For example, we could find the ordinary polynomial fitting using:
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])
z = np.polyfit(x, y, 3)
yielding
array([ 0.08703704, -0.81349206, 1.69312169, -0.03968254])
But what if I wanted the best fit polynomial where the third coefficient (in the above case z[2]) was required to be 1? Or will I need to write the fitting from scratch?
In this case, I would use curve_fit or lmfit; I quickly show it for the first one.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def func(x, a, b, c, d):
    return a + b * x + c * x ** 2 + d * x ** 3
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])
print(np.polyfit(x, y, 3))
popt, _ = curve_fit(func, x, y)
print(popt)
popt_cons, _ = curve_fit(func, x, y, bounds=([-np.inf, 2, -np.inf, -np.inf], [np.inf, 2.001, np.inf, np.inf]))
print(popt_cons)
xnew = np.linspace(x[0], x[-1], 1000)
plt.plot(x, y, 'bo')
plt.plot(xnew, func(xnew, *popt), 'k-')
plt.plot(xnew, func(xnew, *popt_cons), 'r-')
plt.show()
This will print:
[ 0.08703704 -0.81349206 1.69312169 -0.03968254]
[-0.03968254 1.69312169 -0.81349206 0.08703704]
[-0.14331349 2. -0.95913556 0.10494372]
So in the unconstrained case, polyfit and curve_fit give identical results (just the order is different); in the constrained case, the fixed parameter is 2, as desired.
The plot then looks as follows:
In lmfit you can also choose whether a parameter should be fitted or not, so you can then also just set it to a desired value (check this answer).
For completeness, with lmfit the solution would look like this:
import numpy as np
import matplotlib.pyplot as plt
from lmfit import Model
def func(x, a, b, c, d):
    return a + b * x + c * x ** 2 + d * x ** 3
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])
pmodel = Model(func)
params = pmodel.make_params(a=1, b=2, c=1, d=1)
params['b'].vary = False
result = pmodel.fit(y, params, x=x)
print(result.fit_report())
xnew = np.linspace(x[0], x[-1], 1000)
ynew = result.eval(x=xnew)
plt.plot(x, y, 'bo')
plt.plot(x, result.best_fit, 'k-')
plt.plot(xnew, ynew, 'r-')
plt.show()
which would print a comprehensive report, including uncertainties, correlations and fit statistics as:
[[Model]]
Model(func)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 10
# data points = 6
# variables = 3
chi-square = 0.066
reduced chi-square = 0.022
Akaike info crit = -21.089
Bayesian info crit = -21.714
[[Variables]]
a: -0.14331348 +/- 0.109441 (76.37%) (init= 1)
b: 2 (fixed)
c: -0.95913555 +/- 0.041516 (4.33%) (init= 1)
d: 0.10494371 +/- 0.008231 (7.84%) (init= 1)
[[Correlations]] (unreported correlations are < 0.100)
C(c, d) = -0.987
C(a, c) = -0.695
C(a, d) = 0.610
and produce a plot of
Note that lmfit.Model has many improvements over curve_fit, including automatically naming parameters based on function arguments, allowing any parameter to have bounds or simply be fixed without requiring nonsense like having upper and lower bounds that are almost equal. The key is that lmfit uses Parameter objects that have attributes instead of plain arrays of fitting variables. lmfit also supports mathematical constraints, composite models (eg, adding or multiplying models), and has superior reports.
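As a small sketch of the mathematical-constraints feature (the constraint here is made up, just to show the mechanism), a parameter can be tied to an algebraic expression of another instead of being fixed:
params = pmodel.make_params(a=1, b=2, c=1, d=1)
params['c'].set(expr='-9*d')   # c is now always -9*d during the fit
result = pmodel.fit(y, params, x=x)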
Sorry for the resurrection, but I felt that this answer was missing.
To fit a polynomial we solve the following system of equations:
a0*x0^n + a1*x0^(n-1) .. + an*x0^0 = y0
a0*x1^n + a1*x1^(n-1) .. + an*x1^0 = y1
...
a0*xm^n + a1*xm^(n-1) .. + an*xm^0 = ym
Which is a problem of the form V @ a = y
where "V" is a Vandermonde matrix:
[[x0^n  x0^(n-1)  ..  1],
 [x1^n  x1^(n-1)  ..  1],
 ...
 [xm^n  xm^(n-1)  ..  1]]
"y" is a column vector holding the y-values:
[[y0],
[y1],
...
[ym]]
..and "a" is the column vector of coefficients that we are solving for:
[[a0],
[a1],
...
[an]]
This problem can be solved using linear least squares as follows:
import numpy as np
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])
deg = 3
V = np.vander(x, deg + 1)
z, *_ = np.linalg.lstsq(V, y, rcond=None)
print(z)
# [ 0.08703704 -0.81349206 1.69312169 -0.03968254]
..which produces the same solution as the polyfit method:
z = np.polyfit(x, y, deg)
print(z)
# [ 0.08703704 -0.81349206 1.69312169 -0.03968254]
Instead, we want a solution where a2 = 1.
Substituting a2 = 1 into the system of equations from the beginning of the answer, and then moving the corresponding term from the LHS to the RHS, we get:
a0*x0^n + a1*x0^(n-1) + 1*x0^(n-2) .. + an*x0^0 = y0
a0*x1^n + a1*x1^(n-1) + 1*x1^(n-2) .. + an*x1^0 = y1
...
a0*xm^n + a1*xm^(n-1) + 1*xm^(n-2) .. + an*xm^0 = ym
=>
a0*x0^n + a1*x0^(n-1) .. + an*x0^0 = y0 - 1*x0^(n-2)
a0*x1^n + a1*x1^(n-1) .. + an*x1^0 = y1 - 1*x1^(n-2)
...
a0*xm^n + a1*xm^(n-1) .. + an*xm^0 = ym - 1*xm^(n-2)
This corresponds to removing column 2 from the Vandermonde matrix and subtracting it from the y-vector as follows:
y_ = y - V[:, 2]
V_ = np.delete(V, 2, axis=1)
z_, *_ = np.linalg.lstsq(V_, y_, rcond=None)
z_ = np.insert(z_, 2, 1)
print(z_)
# [ 0.04659264 -0.48453866 1. 0.19438046]
Notice that I inserted the 1 into the coefficient vector after solving the linear least-squares problem; we are no longer solving for a2, since we set it to 1 and removed it from the problem.
For completeness this is what the solution looks like when plotted:
and the complete code that I used:
import numpy as np
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])
deg = 3
V = np.vander(x, deg + 1)
z, *_ = np.linalg.lstsq(V, y, rcond=None)
print(z)
# [ 0.08703704 -0.81349206 1.69312169 -0.03968254]
z = np.polyfit(x, y, deg)
print(z)
# [ 0.08703704 -0.81349206 1.69312169 -0.03968254]
y_ = y - V[:, 2]
V_ = np.delete(V, 2, axis=1)
z_, *_ = np.linalg.lstsq(V_, y_, rcond=None)
z_ = np.insert(z_, 2, 1)
print(z_)
# [ 0.04659264 -0.48453866 1. 0.19438046]
from matplotlib import pyplot as plt
plt.plot(x, y, 'o', label='data')
plt.plot(x, V @ z, label='polyfit')
plt.plot(x, V @ z_, label='constrained (a2 = 1)')
plt.legend()
plt.show()
Here is a way to do this using scipy.optimize.curve_fit:
First, let's recreate your example (as a sanity check):
import numpy as np
from scipy.optimize import curve_fit

def f(x, x3, x2, x1, x0):
    """this is the polynomial function"""
    return x0 + x1*x + x2*(x*x) + x3*(x*x*x)

popt, pcov = curve_fit(f, x, y)
print(popt)
#array([ 0.08703704, -0.81349206, 1.69312169, -0.03968254])
Which matches the values you get from np.polyfit().
Now adding the constraint for x1:
popt, pcov = curve_fit(
f,
x,
y,
bounds = ([-np.inf, -np.inf, .999999999, -np.inf], [np.inf, np.inf, 1.0, np.inf])
)
print(popt)
#array([ 0.04659264, -0.48453866, 1. , 0.19438046])
I had to use .999999999 because the lower bound must be strictly less than the upper bound.
Alternatively, you could define your function with the constrained coefficient as a constant, and get the values for the other 3:
def f_new(x, x3, x2, x0):
    x1 = 1
    return x0 + x1*x + x2*(x*x) + x3*(x*x*x)
popt, pcov = curve_fit(f_new, x, y)
print(popt)
#array([ 0.04659264, -0.48453866, 0.19438046])
Here is also a way using scipy.optimize.curve_fit, but aiming to fix whichever polynomial coefficients are desired. (The code is not so long after removing the comments.)
The function that does the job:
import numpy as np
from scipy.optimize import curve_fit
def polyfit(x, y, deg, which=-1, to=0):
    """
    An extension of ``np.polyfit`` to fix values of the vector
    of polynomial coefficients. By default, the last coefficient
    (i.e., the constant term) is kept at zero.

    Parameters
    ----------
    x : array_like
        x-coordinates of the sample points.
    y : array_like
        y-coordinates of the sample points.
    deg : int
        Degree of the fitting polynomial.
    which : int or array_like, optional
        Indexes of the coefficients to remain fixed. By default, -1.
    to : float or array_like, optional
        Values of the fixed coefficients. By default, 0.

    Returns
    -------
    np.ndarray
        (deg + 1) polynomial coefficients.
    """
    p0 = np.polyfit(x, y, deg)

    # if which == None it is reduced to np.polyfit
    if which is None:
        return p0

    # indexes of the coeffs being fitted
    which_not = np.delete(np.arange(deg + 1), which)

    # create the array of coeffs
    def _fill_p(p):
        p_ = np.empty(deg + 1)  # empty array
        p_[which] = to          # fill with custom coeffs
        p_[which_not] = p       # fill with `p`
        return p_

    # callback function for fitting
    def _polyfit(x, *p):
        p_ = _fill_p(p)
        return np.polyval(p_, x)

    # get the array of coeffs
    p0 = np.delete(p0, which)                # use `p0` as initial condition
    p, _ = curve_fit(_polyfit, x, y, p0=p0)  # fitting
    p = _fill_p(p)                           # merge fixed and non-fixed coeffs
    return p
Two simple examples on how to use the function above:
import matplotlib.pyplot as plt
# just create some fake data (a parabola)
np.random.seed(0) # seed to reproduce the example
deg = 2 # second order polynomial
p = np.random.randint(1, 5, size=deg+1) # random vector of coefficients
x = np.linspace(0, 10, num=20) # fake data: x-array
y = np.polyval(p, x) + 1.5*np.random.randn(20) # fake data: y-array
print(p) # output:[1, 4, 2]
# fitting
p1 = polyfit(x, y, deg, which=2, to=p[2]) # case 1: last coeff is fixed
p2 = polyfit(x, y, deg, which=[1,2], to=p[1:3]) # case 2: last two coeffs are fixed
y1 = np.polyval(p1, x) # y-array for case 1
y2 = np.polyval(p2, x) # y-array for case 2
print(p1) # output: [1.05, 3.67, 2.]
print(p2) # output: [1.08, 4., 2.]
# plotting
plt.plot(x, y, '.', label='fake data: y = p[0]*x**2 + p[1]*x + p[2]')
plt.plot(x, y1, label='p[2] fixed at 2')
plt.plot(x, y2, label='p[2] and p[1] fixed at [4, 2]')
plt.legend()
plt.show()

fitting a plane into a cloud of points (3d)

I have a data file containing multiple columns of data. I would like to extract 3 columns (that indicate the coordinates) out of this data file and put them in another file; then, using the newly created file, I would like to fit a plane or surface (or whatever you would like to call it) using scipy.optimize.curve_fit. Here is my code:
# -*- coding: utf-8 -*-
from pylab import *
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit
### processing function
def store(var, textfile):
    data = loadtxt(textfile, skiprows=1)
    p0 = []
    p1 = []
    p2 = []
    for i in range(0, len(data)):
        p0.append(float(data[i, 2]))
        p1.append(float(data[i, 3]))
        p2.append(float(data[i, 4]))
    var.append(p0)
    var.append(p1)
    var.append(p2)
#extracting the data from a textfile
datafile1='cracks_0101005_5k_tensionTestCentreCrack_l0.001a0_r0.01.txt'
a1=[]
store(a1, datafile1)
rcParams.update({'legend.numpoints':1,'font.size': 20,'axes.labelsize':25,'xtick.major.pad':10,'ytick.major.pad':10,'legend.fontsize':14})
lw=2
ms=10
#fitting a surface(curve) into the data
def func(data, m, n, o):
    return m*data[:,0] + n*data[:,2] + o
guess=(1,1,1)
params, pcov = curve_fit(func, a1[::2, :2], a1[:,1], guess)
print (params)
And I am getting the following error message:
Traceback (most recent call last):
File "fitcurve.py", line 41, in <module>
params, pcov = curve_fit(func, a1[::2, :2], a1[:,1], guess)
TypeError: list indices must be integers, not tuple
Would you please tell me what I am doing wrong?
Just to make it more clear:
I am trying to have Y as my dependent variable, so it would be a function of X and Z.
Apparently a1 is a list and not an array, right?
But even when I change it to an array with Myarray = np.asarray(a1), I get some other weird messages.
I would appreciate if someone could help me understand the issue here.
Cheers
Here are possible errors I noticed in your code:
You want to fit y as a function of x and z, so the X array you want to send is probably a1[:, ::2]. But that means func already gets an (m, 2) array, so it must be return m*data[:,0] + n*data[:,1] + o.
Still, I think it should be a two-parameter fit, not three. You can calculate a possible m, n, o from the corresponding result.
import matplotlib
matplotlib.use('Qt5Agg')
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from matplotlib import style
from random import random
style.use('ggplot')
from scipy.optimize import curve_fit
""" make some data and save to file """
data = []
a, b, s = 1.233, -2.17, 5.2
for i in range(88):
    x = 10 * (2 * random() - 1)
    y = 10 * (2 * random() - 1)
    z = a * x + b * y + s * (2 * random() - 1) * 0.5
    data += [[x, y, z]]
data = np.array(data)
np.savetxt("data.txt", data)
""" get the data and use unpack to directly write into x,y,z variables"""
xData, yData, zData = np.loadtxt("data.txt", unpack=True)
"""...while I actally need the packed version as well, so I could load again"""
#allData = np.loadtxt("data.txt")
"""...or..."""
allData = np.array(list(zip(xData, yData, zData)))  # list() is needed in Python 3, where zip returns an iterator
def func(data, m, o):
    return m * data[:,0] + o * data[:, 1]
guess = (1, 1)
params, pcov = curve_fit(func, allData[:, ::2], allData[:,1], guess)
""" showing data and fit result"""
x = np.linspace(-10, 10, 10)
y = np.linspace(-10, 10, 10)
X, Y = np.meshgrid(x, y)
Z = -params[0] / params[1] * X + 1 / params[1] * Y
fig1 = plt.figure(1)
ax = fig1.add_subplot( 1, 1, 1, projection='3d')
ax.scatter(xData, yData, zData)
ax.plot_wireframe(X, Y, Z, color='#4060a6', alpha=0.6 )
ax.set_title(
    "({:1.2f},{:1.2f})".format(
        -params[0] / params[1], 1 / params[1]
    )
)
plt.show()
Note that while you fitted y = m * x + o * z, I plot a wireframe of z = a * x + b * y with b = 1 / o and a = -m / o, i.e. n = 1. You can rescale your m, n, o accordingly.
Here is an example of linear multiple regression of a flat surface:
import numpy as np
# the below "columns" of data could be i.e., x, y**2, sin(x), log(y), etc.
# numpy's array transpose can also be handy in formatting the data in this way
# first "column" will regress to an offset parameter (a * 1.0, or just a)
# second "column" will regress the X data (b * X)
# third "column" will regress the Y data (c * Y)
indepData = np.array([
[1.0, 11.0, 0.1], # first data point
[1.0 ,22.0, 0.2], # second data point
[1.0, 33.0, 0.3], # third data point
[1.0, 35.0, 0.5] # fourth data point
])
# Z data
depData = np.array([5.0, 60.0, 70.0, 185.0])
coeffs = np.linalg.lstsq(indepData, depData, rcond=None)[0]
print(coeffs)
X = 25.0
Y = 0.2
a = coeffs[0]
b = coeffs[1]
c = coeffs[2]
regressionPredictedValue = a + b*X + c*Y
print(regressionPredictedValue)

How to fit a two-term exponential in python?

I need to fit some data with a two-term exponential following the equation a*exp(b*x)+c*exp(d*x). In matlab, it's as easy as changing the 1 to a 2 in polyfit to go from a one-term exponential to a two-term. I haven't found a simple solution to do it in python and was wondering if there even was one? I tried using curve_fit but it is giving me lots of trouble and after scouring the internet I still haven't found anything useful. Any help is appreciated!
Yes you can use curve_fit from scipy. Here is an example for your specific fit function.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
x = np.linspace(0,4,50) # Example data
def func(x, a, b, c, d):
    return a * np.exp(b * x) + c * np.exp(d * x)
y = func(x, 2.5, 1.3, 0.5, 0.5) # Example exponential data
# Here you give the initial parameters for a, b, c, d which Python then iterates over
# to find the best fit
popt, pcov = curve_fit(func,x,y,p0=(1.0,1.0,1.0,1.0))
print(popt) # This contains your four best-fit parameters
p1 = popt[0] # This is your a
p2 = popt[1] # This is your b
p3 = popt[2] # This is your c
p4 = popt[3] # This is your d
residuals = y - func(x,p1,p2,p3,p4)
fres = sum( (residuals**2)/func(x,p1,p2,p3,p4) ) # The chi-square of your fit
print(fres)
""" Now if you need to plot, perform the code below """
curvey = func(x,p1,p2,p3,p4) # This is your y axis fit-line
plt.plot(x, curvey, 'red', label='The best-fit line')
plt.scatter(x,y, c='b',label='The data points')
plt.legend(loc='best')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
You can do it with leastsq. Something like:
from numpy import log, exp
from scipy.optimize import leastsq

## regression function
def _exp(a, b, c, d):
    """
    Exponential function y = a * exp(b * x) + c * exp(d * x)
    """
    return lambda x: a * exp(b * x) + c * exp(d * x)

## interpolation
def interpolate(x, df, fun=_exp):
    """
    Interpolate Y from X based on df, a dataframe with columns 'x' and 'y'.
    """
    resid = lambda p, x, y: y - fun(*p)(x)
    ls = leastsq(resid, [1.0, 1.0, 1.0, 1.0], args=(df['x'], df['y']))
    a, b, c, d = ls[0]
    y = fun(a, b, c, d)(x)
    return y
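A hypothetical usage sketch (assuming pandas is available, since interpolate expects a dataframe with 'x' and 'y' columns; whether the default [1, 1, 1, 1] start converges depends on your data):
import numpy as np
import pandas as pd
df = pd.DataFrame({"x": np.linspace(0, 2, 40)})
df["y"] = 1.2 * np.exp(0.8 * df["x"]) + 0.9 * np.exp(0.4 * df["x"])
y_new = interpolate(np.linspace(0, 2, 100), df)   # fitted values on a finer grid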
