Multivariate (polynomial) best fit curve in python?

How do you calculate a best fit line in python, and then plot it on a scatterplot in matplotlib?
Currently, I calculate the linear best-fit line using Ordinary Least Squares regression as follows:
from sklearn import linear_model
clf = linear_model.LinearRegression()
x = [[t.x1,t.x2,t.x3,t.x4,t.x5] for t in self.trainingTexts]
y = [t.human_rating for t in self.trainingTexts]
clf.fit(x,y)
regress_coefs = clf.coef_
regress_intercept = clf.intercept_
This is multivariate (there are many x-values for each case). So, X is a list of lists, and y is a single list.
For example:
x = [[1,2,3,4,5], [2,2,4,4,5], [2,2,4,4,1]]
y = [1,2,3]
But how do I do this with higher-order polynomial functions? For example, not just linear (degree M=1), but quadratic (M=2), cubic (M=3), and so on. For example, how do I get the best-fit curves for the following?
Extracted from Christopher Bishop's "Pattern Recognition and Machine Learning", p.7:

The accepted answer to this question
provides a small multi poly fit library which will do exactly what you need using numpy, and you can plug the result into the plotting as I've outlined below.
You just pass your arrays of x and y points and the degree (order) of fit you require into multipolyfit. This returns the coefficients, which you can then use for plotting with numpy's polyval.
Note: The code below has been amended to do multivariate fitting, but the plot image was part of the earlier, non-multivariate answer.
import numpy
import matplotlib.pyplot as plt
import multipolyfit as mpf
data = [[1,1],[4,3],[8,3],[11,4],[10,7],[15,11],[16,12]]
x, y = zip(*data)
plt.plot(x, y, 'kx')
x = numpy.array(x)  # zip() gives a tuple; convert so x+1 and x-1 work element-wise
deg = 3             # degree of the fit (undefined in the original snippet)
stacked_x = numpy.array([x, x+1, x-1])
coeffs = mpf(stacked_x, y, deg)
x2 = numpy.arange(min(x)-1, max(x)+1, .01) #use more points for a smoother plot
y2 = numpy.polyval(coeffs, x2) #Evaluates the polynomial for each x2 value
plt.plot(x2, y2, label="deg=3")
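If you would rather stay within scikit-learn (which the question already imports), a roughly equivalent multivariate polynomial fit can be sketched with PolynomialFeatures plus LinearRegression. This is an illustrative alternative, not part of the original answer; the tiny data set is taken from the question's example, the y values and the degree are assumptions, and with only three samples the system is underdetermined (it just shows the API):
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# One row per case, as in the question; y values are only for illustration
X = np.array([[1, 2, 3, 4, 5], [2, 2, 4, 4, 5], [2, 2, 4, 4, 1]])
y = np.array([1, 2, 3])

degree = 2  # M = 2; increase for higher-order fits
poly = PolynomialFeatures(degree=degree, include_bias=False)
X_poly = poly.fit_transform(X)  # adds squared terms and cross terms

clf = LinearRegression().fit(X_poly, y)
print(clf.coef_, clf.intercept_)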
Note: This was part of the earlier answer; it is still relevant if you don't have multivariate data. Instead of coeffs = mpf(...), use coeffs = numpy.polyfit(x, y, 3).
For non-multivariate data sets, the easiest way to do this is probably with numpy's polyfit:
numpy.polyfit(x, y, deg, rcond=None, full=False, w=None, cov=False)
Least squares polynomial fit.
Fit a polynomial p(x) = p[0] * x**deg + ... + p[deg] of degree deg to points (x, y). Returns a vector of coefficients p that minimises the squared error.
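For concreteness, here is a minimal sketch of that docstring in use, reusing the non-multivariate data points from the answer above (degree 3 is just the example value):
import numpy as np
import matplotlib.pyplot as plt

x = np.array([1, 4, 8, 11, 10, 15, 16])
y = np.array([1, 3, 3, 4, 7, 11, 12])

coeffs = np.polyfit(x, y, 3)              # cubic least-squares fit
x2 = np.linspace(x.min(), x.max(), 200)   # dense grid for a smooth curve
plt.plot(x, y, 'kx')
plt.plot(x2, np.polyval(coeffs, x2), label="deg=3")
plt.legend()
plt.show()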

Slightly out of context because the resulting function is not a polynomial, but perhaps still interesting. One major problem with polynomial fitting is Runge's phenomenon: the higher the degree, the more dramatic the oscillations. This isn't just a constructed edge case; it will come back to bite you in practice.
As a remedy, I created smoothfit a while ago. It solves an appropriate least-squares problem and gives nice results, e.g.:
import numpy as np
import matplotlib.pyplot as plt
import smoothfit
x = [1, 4, 8, 11, 10, 15, 16]
y = [1, 3, 3, 4, 7, 11, 12]
a = 0.0
b = 17.0
plt.plot(x, y, 'kx')
lmbda = 3.0 # controls the smoothness
n = 100
u = smoothfit.fit1d(x, y, a, b, n, lmbda)
x = np.linspace(a, b, n)
vals = [u(xx) for xx in x]
plt.plot(x, vals, "-")
plt.show()

Related

Is there a method to fit a wave created from two waves?

I need to fit a sine curve created from two sine waves and extract the parameters for the fitted curve (such as frequency, amplitude, etc).
Data example:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
x = np.arange(0, 50, 0.01)
x2 = np.arange(0, 100, 0.02)
x3 = np.arange(0, 150, 0.03)
sin1 = np.sin(x)
sin2 = np.sin(x2)
sin3 = np.sin(x3/2)
sin4 = sin1 + sin2 + sin3
plt.plot(x, sin4)
plt.show()
I used the code provided in this answer.
yy = sin4
tt = x
res = fit_sin(tt, yy)
print(str(i), "Amplitude=%(amp)s, Angular freq.=%(omega)s, phase=%(phase)s, offset=%(offset)s, Max. Cov.=%(maxcov)s" % res )
fit_values=res["fitfunc"](tt)
Frequenc_fit= res['freq']
print(i, Frequenc_fit)
Frequenc_fit=Frequenc_fit
Amp_fit=res['amp']
Omega_fit=res['omega']
Phase_fit=res['phase']
Offset_fit=res['offset']
maxcov_fit=res['maxcov']
plt.plot(tt, yy, "-k", label="y", linewidth=2)
plt.plot(tt,fit_values, "r-", label="y fit curve", linewidth=2)
plt.legend(loc="best")
plt.show()
I got a fitted sine curve with a single frequency and amplitude as follows:
2 Amplitude=1.0149282025860233, Angular freq.=2.01112187048004, phase=-0.2730905030152767, offset=0.003304158823058212, Max. Cov.=0.0015266032307905222
2 0.3200799868471169
Is there a method to obtain a fitted curve that matches the original one?
Supposing that the function to be fitted is
y(x)=a * sin( w * x )+b * sin( W * x )
the principle of the method below is explained in https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales
The graphical representation of the result is:
Blue curve: from data obtained by scanning the graph given in the question.
Black curve: from the above calculation.
The available data is not accurate because it comes from scanning the original figure. The deviation is mainly due to the numerical integrations used in computing the values of SS and SSSS (four successive numerical integrations are not accurate, especially with biased data).
Probably the correct result should be: w=2, W=1, a=1, b=1.
NOTE: The above method is not iterative and thus doesn't require guessed values of the parameters to start an iterative process. The approximate parameter values it produces can be good initial values for an iterative non-linear regression process.
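For example, such a non-linear refinement might be sketched as follows; the synthetic data and starting guesses here are my own (based on the expected w=2, W=1, a=b=1 mentioned above) and are not part of the original answer:
import numpy as np
from scipy.optimize import curve_fit

def two_sines(x, a, w, b, W):
    return a * np.sin(w * x) + b * np.sin(W * x)

# Synthetic data roughly matching the expected result (a=b=1, w=2, W=1)
x = np.arange(0, 50, 0.01)
y = np.sin(2 * x) + np.sin(x)

# Approximate values from the non-iterative method serve as initial guesses
p0 = [1.0, 2.0, 1.0, 1.0]
params, cov = curve_fit(two_sines, x, y, p0=p0)
print(params)  # should be close to a=1, w=2, b=1, W=1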
NOTE: If the values of w and W were known a priori, solving via linear regression would be very simple and much more accurate (only the final 2x2 matrix computation would be needed).
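If w and W are indeed known, a minimal sketch of that linear regression (again with made-up synthetic data, not the 2x2 calculation from the linked document) is:
import numpy as np

w, W = 2.0, 1.0                      # assumed known frequencies
x = np.arange(0, 50, 0.01)
y = np.sin(w * x) + np.sin(W * x)    # synthetic data with a = b = 1

# Design matrix with one column per sine term; lstsq solves for a and b
A = np.column_stack([np.sin(w * x), np.sin(W * x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(a, b)  # both close to 1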

Linear regression forcing one specific value

I want to calculate a simple linear regression where I need to force a particular value for one point. Namely, I have x and y arrays, and I want my regression f(x) to force f(x[-1]) == y[-1] - that is, the prediction over the last element of x should be equal to the last element of y.
Is there a way to do it using Python and scikit-learn?
Here's a slightly roundabout trick that will do it.
Try re-centering your data, i.e. subtract x[-1], y[-1] from all datapoints so that x[-1], y[-1] is now the origin.
Now fit your data using sklearn.linear_model.LinearRegression with fit_intercept set to False. This way, the data is fit so that the line is forced to pass through the origin. Because we've re-centered the data, the origin corresponds to x[-1], y[-1].
When you use the model to make predictions, subtract x[-1] from any datapoint for which you are making a prediction, then add y[-1] to the resulting prediction, and this will give you the same results as forcing your model to pass through x[-1], y[-1].
This is a little roundabout but it's the simplest way that occurs to me to do it using the sklearn linear regression function (without writing your own).
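A minimal sketch of that trick (the data here mirrors the mock data used in the next answer and is only for illustration):
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up data for illustration
x = np.linspace(0, 10, 100)
y = x + np.random.uniform(0, 3, len(x))

x_anchor, y_anchor = x[-1], y[-1]

# Re-center so the anchor point becomes the origin, then fit without intercept
model = LinearRegression(fit_intercept=False)
model.fit((x - x_anchor).reshape(-1, 1), y - y_anchor)

# Predict: shift inputs by the anchor, then shift predictions back
x_new = np.array([0.0, 5.0, x_anchor])
y_pred = model.predict((x_new - x_anchor).reshape(-1, 1)) + y_anchor
print(y_pred[-1], y_anchor)  # identical: the fit passes through (x[-1], y[-1])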
The suggestion from HappyDog is great as a quick way to get a fit; however, I'd like to introduce another method that doesn't require any manipulation of your data. This method uses scipy.optimize.curve_fit to fit your data.
First, we need to realize that a normal linear regression will find A and B such that y=Ax+B provides the best fit to the input data. Your requirements state that the fit must pass through the final point in your sample data set. Essentially we'll be dropping a line that passes through your final point and rotating it around this point until we can minimize the errors.
Take a look at the point-slope equation for a line: y-yi = m*(x-xi), where (xi, yi) is any point on that line. If we make the substitution that this (xi, yi) point is the final point from your data set and solve for y, we get y = m*(x-xf) + yf. This is the model we will fit.
Translating this model to a python-function, we have:
def model(x, m, xf, yf):
    return m*(x-xf)+yf
We create a mock data set for this example, and just for demonstration purposes we significantly shift the final y-value:
x = np.linspace(0, 10, 100)
y = x + np.random.uniform(0, 3, len(x))
y[-1] += 10
We're almost ready to perform the fit. The curve_fit function expects a callable function (model) to fit, the x and y data, and a list of the guesses of each parameter we are trying to fit. Since our model accepts two extra "constant" arguments (xf and yf), we use functools.partial to "set" these arguments based on our data.
partial_model = functools.partial(model, xf=x[-1], yf=y[-1])
p0 = [y[-1]/x[-1]] # Initial guess for m, as long as xf != 0
Now we can fit!
best_fit, covar = curve_fit(partial_model, x, y, p0=p0)
print("Best fit:", best_fit)
y_fit = model(x, best_fit[0], x[-1], y[-1])
intercept = model(0, best_fit[0], x[-1], y[-1]) # The y-intercept
And we look at the results:
plt.plot(x, y, "g*") # Input data will be green stars
plt.plot(x, y_fit, "r-") # Fit will be a red line
plt.legend(["Sample Data", f"y=mx+b ; m={best_fit[0]:.4f}, b={intercept:.4f}"])
plt.show()
Putting all this together in one code block and including imports gives:
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import curve_fit
import functools
def model(x, m, xf, yf):
    return m*(x-xf)+yf
x = np.linspace(0, 10, 100)
y = x + np.random.uniform(0, 3, len(x))
y[-1] += 10
partial_model = functools.partial(model, xf=x[-1], yf=y[-1])
p0 = [y[-1]/x[-1]] # Initial guess for m, as long as xf != 0
best_fit, covar = curve_fit(partial_model, x, y, p0=p0)
print("Best fit:", best_fit)
y_fit = model(x, best_fit[0], x[-1], y[-1])
intercept = model(0, best_fit[0], x[-1], y[-1]) # The y-intercept
plt.plot(x, y, "g*") # Input data will be green stars
plt.plot(x, y_fit, "r-") # Fit will be a red line
plt.legend(["Sample Data", f"y=mx+b ; m={best_fit[0]:.4f}, b={intercept:.4f}"])
plt.show()
We see a line passing through the final point, as required, and have found the best slope to represent this dataset.

Correct fitting with scipy curve_fit including errors in x?

I'm trying to fit a histogram with some data in it using scipy.optimize.curve_fit. If I want to add an error in y, I can simply do so by applying a weight to the fit. But how do I apply the error in x (i.e. the error due to binning in the case of histograms)?
My question also applies to errors in x when making a linear regression with curve_fit or polyfit; I know how to add errors in y, but not in x.
Here an example (partly from the matplotlib documentation):
import numpy as np
import pylab as P
from scipy.optimize import curve_fit
# create the data histogram
mu, sigma = 200, 25
x = mu + sigma*P.randn(10000)
# define fit function
def gauss(x, *p):
    A, mu, sigma = p
    return A*np.exp(-(x-mu)**2/(2*sigma**2))
# the histogram of the data
n, bins, patches = P.hist(x, 50, histtype='step')
sigma_n = np.sqrt(n) # Adding Poisson errors in y
bin_centres = (bins[:-1] + bins[1:])/2
sigma_x = (bins[1] - bins[0])/np.sqrt(12) # Binning error in x
P.setp(patches, 'facecolor', 'g', 'alpha', 0.75)
# fitting and plotting
p0 = [700, 200, 25]
popt, pcov = curve_fit(gauss, bin_centres, n, p0=p0, sigma=sigma_n, absolute_sigma=True)
x = np.arange(100, 300, 0.5)
fit = gauss(x, *popt)
P.plot(x, fit, 'r--')
Now, this fit (when it doesn't fail) does consider the y-errors sigma_n, but I haven't found a way to make it consider sigma_x. I scanned a couple of threads on the scipy mailing list and found out how to use the absolute_sigma value, and a post on Stack Overflow about asymmetrical errors, but nothing about errors in both directions. Is this possible to achieve?
scipy.optimize.curve_fit uses standard non-linear least-squares optimization and therefore only minimizes the deviation in the response variable. If you want errors in the independent variable to be considered, you can try scipy.odr, which uses orthogonal distance regression. As its name suggests, it minimizes the distance in both the independent and dependent variables.
Have a look at the sample below. The fit_type parameter determines whether scipy.odr does full ODR (fit_type=0) or least squares optimization (fit_type=2).
EDIT
Although the example worked, it did not make much sense, since the y data was calculated on the noisy x data, which just resulted in an unequally spaced independent variable. I updated the sample, which now also shows how to use RealData, which allows specifying the standard error of the data instead of the weights.
from scipy.odr import ODR, Model, Data, RealData
import numpy as np
from pylab import *
def func(beta, x):
    y = beta[0]+beta[1]*x+beta[2]*x**3
    return y
#generate data
x = np.linspace(-3,2,100)
y = func([-2.3,7.0,-4.0], x)
# add some noise
x += np.random.normal(scale=0.3, size=100)
y += np.random.normal(scale=0.1, size=100)
data = RealData(x, y, 0.3, 0.1)
model = Model(func)
odr = ODR(data, model, [1,0,0])
odr.set_job(fit_type=2)
output = odr.run()
xn = np.linspace(-3,2,50)
yn = func(output.beta, xn)
# hold(True)  # deprecated and unnecessary: modern matplotlib overlays plots by default
plot(x,y,'ro')
plot(xn,yn,'k-',label='leastsq')
odr.set_job(fit_type=0)
output = odr.run()
yn = func(output.beta, xn)
plot(xn,yn,'g-',label='odr')
legend(loc=0)

How to generate equispaced interpolating values

I have a list of (x,y) values that are not uniformly spaced. Here is the archive used in this question.
I am able to interpolate between the values but what I get are not equispaced interpolating points. Here's what I do:
x_data = [0.613,0.615,0.615,...]
y_data = [5.919,5.349,5.413,...]
# Interpolate values for x and y.
t = np.linspace(0, 1, len(x_data))
t2 = np.linspace(0, 1, 100)
# One-dimensional linear interpolation.
x2 = np.interp(t2, t, x_data)
y2 = np.interp(t2, t, y_data)
# Plot x,y data.
plt.scatter(x_data, y_data, marker='o', color='k', s=40, lw=0.)
# Plot interpolated points.
plt.scatter(x2, y2, marker='o', color='r', s=10, lw=0.5)
Which results in:
As can be seen, the red dots are closer together in sections of the graph where the original points distribution is denser.
I need a way to generate the interpolated points equispaced in x, y according to a given step value (say 0.1)
As askewchan correctly points out, when I mean "equispaced in x, y" I mean that two consecutive interpolated points in the curve should be distanced from each other (euclidean straight line distance) by the same value.
I tried unutbu's answer and it works well for smooth curves, but it seems to break for not-so-smooth ones:
This happens because the code calculates the point distance in a Euclidean way instead of directly along the curve, and I need the distance along the curve to be the same between points. Can this issue be worked around somehow?
Convert your xy-data to a parametrized curve, i.e. calculate all distances between the points and generate the coordinate along the curve by cumulative summation. Then interpolate the x- and y-coordinates independently with respect to the new coordinate.
import numpy as np
from matplotlib import pyplot as plt
data = '''0.615 5.349
0.615 5.413
0.617 6.674
0.617 6.616
0.63 7.418
0.642 7.809
0.648 8.04
0.673 8.789
0.695 9.45
0.712 9.825
0.734 10.265
0.748 10.516
0.764 10.782
0.775 10.979
0.783 11.1
0.808 11.479
0.849 11.951
0.899 12.295
0.951 12.537
0.972 12.675
1.038 12.937
1.098 13.173
1.162 13.464
1.228 13.789
1.294 14.126
1.363 14.518
1.441 14.969
1.545 15.538
1.64 16.071
1.765 16.7
1.904 17.484
2.027 18.36
2.123 19.235
2.149 19.655
2.172 20.096
2.198 20.528
2.221 20.945
2.265 21.352
2.312 21.76
2.365 22.228
2.401 22.836
2.477 23.804'''
data = np.array([line.split() for line in data.split('\n')],dtype=float)
x,y = data.T
xd = np.diff(x)
yd = np.diff(y)
dist = np.sqrt(xd**2+yd**2)
u = np.cumsum(dist)
u = np.hstack([[0],u])
t = np.linspace(0,u.max(),10)
xn = np.interp(t, u, x)
yn = np.interp(t, u, y)
f = plt.figure()
ax = f.add_subplot(111)
ax.set_aspect('equal')
ax.plot(x,y,'o', alpha=0.3)
ax.plot(xn,yn,'ro', markersize=8)
ax.set_xlim(0,5)
Let's first consider a simple case. Suppose your data looked like the blue line,
below.
If you wanted to select equidistant points that were r distance apart,
then there would be some critical value for r where the cusp at (1,2) is the first equidistant point.
If you wanted points that were greater than this critical distance apart, then
the first equidistant point would jump from (1,2) to some place very different --
depicted by the intersection of the green arc with the blue line. The change is not gradual.
This toy case suggests that a tiny change in the parameter r can have a radical, discontinuous effect on the solution.
It also suggests that you must know the location of the ith equidistant point
before you can determine the location of the (i+1)-th equidistant point.
So it appears an iterative solution is required:
import numpy as np
import matplotlib.pyplot as plt
import math
x, y = np.genfromtxt('data', unpack=True, skip_header=1)
# find lots of points on the piecewise linear curve defined by x and y
M = 1000
t = np.linspace(0, len(x), M)
x = np.interp(t, np.arange(len(x)), x)
y = np.interp(t, np.arange(len(y)), y)
tol = 1.5
i, idx = 0, [0]
while i < len(x):
    total_dist = 0
    for j in range(i+1, len(x)):
        total_dist += math.sqrt((x[j]-x[j-1])**2 + (y[j]-y[j-1])**2)
        if total_dist > tol:
            idx.append(j)
            break
    i = j+1
xn = x[idx]
yn = y[idx]
fig, ax = plt.subplots()
ax.plot(x, y, '-')
ax.scatter(xn, yn, s=50)
ax.set_aspect('equal')
plt.show()
Note: I set the aspect ratio to 'equal' to make it more apparent that the points are equidistant.
The following script will interpolate points with an equal step in x of (x_max - x_min) / len(x) = 0.04438:
import numpy as np
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
data = np.loadtxt('data.txt')
x = data[:,0]
y = data[:,1]
f = interp1d(x, y)
x_new = np.linspace(np.min(x), np.max(x), x.shape[0])
y_new = f(x_new)
plt.plot(x,y,'o', x_new, y_new, '*r')
plt.show()
Expanding on the answer by Christian K., here's how to do this for higher-dimensional data with scipy.interpolate.interpn. Let's say we want to resample to 10 equally spaced points:
import numpy as np
import scipy.interpolate
# Assuming that 'data' is rows x dims (where dims is the dimensionality)
diffs = data[1:, :] - data[:-1, :]
dist = np.linalg.norm(diffs, axis=1)
u = np.cumsum(dist)
u = np.hstack([[0], u])
t = np.linspace(0, u[-1], 10)
resampled = scipy.interpolate.interpn((u,), data, t)
It IS possible to generate equidistant points along the curve, but you need to define more precisely what you want for a real answer. The code I've written for this task is in MATLAB, but I can describe the general ideas. There are three possibilities.
First, are the points to be truly equidistant from their neighbors in terms of a simple Euclidean distance? That would involve finding the intersection of the curve with a circle of fixed radius centered at the current point, then stepping along the curve.
Next, if you intend distance to mean distance along the curve itself and the curve is piecewise linear, the problem is again easy: just step along the curve, since distance on a line segment is easy to measure.
Finally, if you intend for the curve to be a cubic spline, again this is not incredibly difficult, but is a bit more work. Here the trick is to:
Compute the piecewise linear arclength from point to point along the curve. Call it t.
Generate a pair of cubic splines, x(t), y(t).
Differentiate x and y as functions of t. Since these are cubic segments, this is easy. The derivative functions will be piecewise quadratic.
Use an ode solver to move along the curve, integrating the differential arclength function. In MATLAB, ODE45 worked nicely.
Thus, one integrates
sqrt((x')^2 + (y')^2)
Again, in MATLAB, ODE45 can be set to identify those locations where the function crosses certain specified points.
If your MATLAB skills are up to the task, you can look at the code in interparc for more explanation. It is reasonably well commented code.
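A rough Python translation of that spline-based recipe, as a sketch only: SciPy's CubicSpline and solve_ivp stand in for MATLAB's splines and ODE45, it assumes no duplicate consecutive points, and it is not the interparc code itself.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import solve_ivp

def equidistant_points(x, y, n_points):
    # 1. Piecewise-linear (chordal) arclength, used as the spline parameter t
    t = np.concatenate(([0.0], np.cumsum(np.hypot(np.diff(x), np.diff(y)))))
    # 2. Cubic splines x(t), y(t), and 3. their derivatives
    sx, sy = CubicSpline(t, x), CubicSpline(t, y)
    dx, dy = sx.derivative(), sy.derivative()

    # 4. Integrate ds/dt = sqrt(x'(t)^2 + y'(t)^2) with an ODE solver
    def speed(ti, s):
        return [np.hypot(dx(ti), dy(ti))]

    sol = solve_ivp(speed, (t[0], t[-1]), [0.0], dense_output=True,
                    max_step=(t[-1] - t[0]) / 100)

    # Invert s(t) numerically to find parameter values at equal arclength
    t_fine = np.linspace(t[0], t[-1], 2000)
    s_fine = sol.sol(t_fine)[0]
    s_targets = np.linspace(0.0, s_fine[-1], n_points)
    t_eq = np.interp(s_targets, s_fine, t_fine)
    return sx(t_eq), sy(t_eq)

# Usage: xn, yn = equidistant_points(x, y, 10)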

Fitting a line in 3D

Are there any algorithms that will return the equation of a straight line from a set of 3D data points? I can find plenty of sources which will give the equation of a line from 2D data sets, but none in 3D.
Thanks.
If you are trying to predict one value from the other two, then you should use lstsq with the a argument as your independent variables (plus a column of 1's to estimate an intercept) and b as your dependent variable.
If, on the other hand, you just want to get the best fitting line to the data, i.e. the line which, if you projected the data onto it, would minimize the squared distance between the real point and its projection, then what you want is the first principal component.
One way to define it is the line whose direction vector is the eigenvector of the covariance matrix corresponding to the largest eigenvalue, that passes through the mean of your data. That said, eig(cov(data)) is a really bad way to calculate it, since it does a lot of needless computation and copying and is potentially less accurate than using svd. See below:
import numpy as np
# Generate some data that lies along a line
x = np.mgrid[-2:5:120j]
y = np.mgrid[1:9:120j]
z = np.mgrid[-5:3:120j]
data = np.concatenate((x[:, np.newaxis],
y[:, np.newaxis],
z[:, np.newaxis]),
axis=1)
# Perturb with some Gaussian noise
data += np.random.normal(size=data.shape) * 0.4
# Calculate the mean of the points, i.e. the 'center' of the cloud
datamean = data.mean(axis=0)
# Do an SVD on the mean-centered data.
uu, dd, vv = np.linalg.svd(data - datamean)
# Now vv[0] contains the first principal component, i.e. the direction
# vector of the 'best fit' line in the least squares sense.
# Now generate some points along this best fit line, for plotting.
# I use -7, 7 since the spread of the data is roughly 14
# and we want it to have mean 0 (like the points we did
# the svd on). Also, it's a straight line, so we only need 2 points.
linepts = vv[0] * np.mgrid[-7:7:2j][:, np.newaxis]
# shift by the mean to get the line in the right place
linepts += datamean
# Verify that everything looks right.
import matplotlib.pyplot as plt
import mpl_toolkits.mplot3d as m3d
ax = m3d.Axes3D(plt.figure())
ax.scatter3D(*data.T)
ax.plot3D(*linepts.T)
plt.show()
Here's what it looks like:
If your data is fairly well behaved, then it should be sufficient to find the least-squares sum of the component distances. Then you can find the linear regression of z on x and, separately, of z on y.
Following the documentation example:
import numpy as np
pts = np.add.accumulate(np.random.random((10,3)))
x,y,z = pts.T
# this will find the slope and x-intercept of a plane
# parallel to the y-axis that best fits the data
A_xz = np.vstack((x, np.ones(len(x)))).T
m_xz, c_xz = np.linalg.lstsq(A_xz, z, rcond=None)[0]
# again for a plane parallel to the x-axis
A_yz = np.vstack((y, np.ones(len(y)))).T
m_yz, c_yz = np.linalg.lstsq(A_yz, z, rcond=None)[0]
# the intersection of those two planes and
# the function for the line would be:
# z = m_yz * y + c_yz
# z = m_xz * x + c_xz
# or:
def lin(z):
    x = (z - c_xz)/m_xz
    y = (z - c_yz)/m_yz
    return x,y
#verifying:
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure()
ax = Axes3D(fig)
zz = np.linspace(0,5)
xx,yy = lin(zz)
ax.scatter(x, y, z)
ax.plot(xx,yy,zz)
plt.savefig('test.png')
plt.show()
If you want to minimize the actual orthogonal distances from the line to the points in 3-space (which I'm not sure is even referred to as linear regression), then I would build a function that computes the RSS and use a scipy.optimize minimization function to solve it.
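A sketch of that idea (my own illustration, not code from the answer): parameterize the line by a point p and a direction d, and minimize the residual sum of squared orthogonal distances with scipy.optimize.minimize.
import numpy as np
from scipy.optimize import minimize

# Made-up noisy 3D points roughly along a line, for illustration only
rng = np.random.default_rng(0)
data = np.linspace(0, 1, 100)[:, None] * np.array([3.0, -2.0, 5.0])
data += rng.normal(scale=0.1, size=data.shape)

def rss(params, pts):
    p, d = params[:3], params[3:]
    d = d / np.linalg.norm(d)                 # unit direction vector
    diff = pts - p
    # squared orthogonal distance = |diff|^2 - (diff . d)^2
    return np.sum(np.sum(diff**2, axis=1) - (diff @ d)**2)

# Initial guess: data mean as the point, first-to-last vector as the direction
x0 = np.concatenate([data.mean(axis=0), data[-1] - data[0]])
res = minimize(rss, x0, args=(data,))
point, direction = res.x[:3], res.x[3:] / np.linalg.norm(res.x[3:])
print(point, direction)
For well-behaved data this should agree (up to the sign of the direction) with the SVD answer above, since the first principal component minimizes the same orthogonal distances.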
