Difference between Levenberg-Marquardt-Algorithm and ODR - python

I was able to fit curves to an x/y dataset using peak-o-mat, as shown below. That's a linear background plus 10 Lorentzian curves.
Since I need to fit many similar curves, I wrote a scripted fitting routine using mpfit.py, which implements the Levenberg-Marquardt algorithm. However, the fit takes longer and, in my opinion, is less accurate than the peak-o-mat result:
Starting values
Fit result with fixed linear background (values for linear background taken from the peak-o-mat result)
Fit result with all variables free
I believe the starting values are already very close, but even with the fixed linear background, the fit of the left Lorentzian is clearly degraded.
The result is even worse for the totally free fit.
Peak-o-mat appears to use scipy.odr.odrpack. Now what is more likely:
Did I make an implementation error?
odrpack is more suitable for this particular problem?
Fitting a simpler problem (linear data with one peak in the middle) shows very good agreement between peak-o-mat and my script. Also, I did not find much about odrpack.
Edit: It seems I could answer the question myself, although the answer is a bit unsettling. Using scipy.odr (which allows fitting with either the odr or the leastsq method), both give the same result as peak-o-mat, even without constraints.
The image below shows again the data, the start values (almost perfect) and then the odr and leastsq fits. The component curves shown are for the odr fit.
I will switch to odr, but this still leaves me uneasy. The methods (mpfit.py, scipy.optimize.leastsq, scipy.odr in leastsq mode) 'should' yield the same results.
And for people stumbling upon this post: to do the odr fit an error must be specified for x and y values. If there is no error, use small values with sx << sy.
from scipy import odr

# f(beta, x) is the model function and beta0 the list of initial parameter guesses
linear = odr.Model(f)
mydata = odr.RealData(x, y, sx=1e-99, sy=0.01)
myodr = odr.ODR(mydata, linear, beta0=beta0, maxit=2000)
myoutput1 = myodr.run()
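For reference, scipy.odr can also be run in plain least-squares mode on the same model; a minimal sketch, assuming the myodr object defined above (in ODRPACK, fit_type=2 selects ordinary least squares):
# switch the job type before running: fit_type=2 means ordinary least squares
myodr.set_job(fit_type=2)
myoutput2 = myodr.run()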

You can use peak-o-mat for scripting as well. The easiest way would be to create a project containing all the data you want to fit via the GUI, clean it, transform it and attach (i.e. choose a model, provide an initial guess and fit) the base model to one of the sets. Then you can (deep)copy that model and attach it to all of the other data sets. Try this:
from peak_o_mat.project import Project
from peak_o_mat.fit import Fit
from copy import deepcopy

p = Project()
p.Read('in.lpj')

base = p[2][0]  # this is the set which has been fit already
for data in p[2][1:]:  # all remaining sets of plot number 2
    mod = deepcopy(base.mod)
    data.mod = mod
    f = Fit(data, data.mod)
    res = f.run()
    pars = res[0]
    err = res[1]
    data.mod._newpars(pars, err)
    print(data.mod.parameters_as_table())
p.Write('out')
Please tell me, if you need more details.

Related

Fitting data with function with several domains whose boundaries are variable

As the title mentions, I am having trouble fitting data points to a function with 3 domains whose boundaries are a parameter of my function. Here is the function I am dealing with:
sigma_m = 2*10**(-12)
sigma_f = 10**3

def Conductivity(phi, phi_c, t, s):
    sigma = [0]*len(phi)
    for i in range(0, len(phi)):
        if phi[i] < phi_c:
            sigma[i] = sigma_m*(phi_c - phi[i])**(-s)
        elif phi[i] == phi_c:
            sigma[i] = sigma_f*(sigma_m/sigma_f)**(t/(t+s))
        else:
            sigma[i] = sigma_f*(phi[i] - phi_c)**t
    return sigma
And my data points are:
phi_data=[0,0.005,0.007,0.008,0.017,0.05,0.085,0.10]
sigma_data=[2.00E-12,2.50E-12,3.00E-12,9.00E-04,1.00E-01,1.00E+00,2.00E+00,3.00E+00]
My constraints are that phi_c, s, and t must be strictly greater than zero (in practice, phi_c is rarely higher than 0.1 but higher than 0.001, s is usually between 0.5 and 1.5, and t is usually anywhere between 1.5 and 6).
My goal is to fit my data points and have my fit give me values of phi_c, s, and t. s and t can be estimated to help the code (in the specific set of data points that I showed, t should be around 2, and s should be around 0.5). phi_c is completely unknown, except for the range of values that I mentioned just above.
I have used both curve_fit from scipy and Model from lmfit, but both give ridiculously small phi_c values (like 10**(-16) or similarly small values that make me believe the program wants phi_c to be negative).
Here is my code for when I used curve_fit:
from scipy.optimize import curve_fit

popt, pcov = curve_fit(Conductivity, phi_data, sigma_data, p0=[0.01, 2, 0.5], bounds=(0, [0.5, 10, 3]))
Here is my code for when I used Model from lmfit:
from lmfit import Model

t_estimate = 0.5
s_estimate = 2
phi_c_estimate = 0.005

condmodel = Model(Conductivity)
params = condmodel.make_params(phi_c=phi_c_estimate, t=t_estimate, s=s_estimate)
result = condmodel.fit(sigma_data, params, phi=phi_data)
params['phi_c'].min = 0
params['phi_c'].max = 0.1
Both options give an okay fit when plotted, but the estimated value of phi_c is nowhere near plausible.
If you have any idea what I could do to have a better fit, please let me know!
PS: I have read a promising post about using the package symfit to fit the data on the different regions separately; unfortunately the package symfit does not work for me. It keeps uninstalling my version of scipy, then reinstalling an older version, and then it tells me it needs a newer version of scipy to function.
EDIT: I managed to make the symfit package work. Here is my entire code:
from symfit import parameters, variables, Fit, Piecewise, exp, Eq
import numpy as np
import matplotlib.pyplot as plt
sigma_m = 2*10**(-12)
sigma_f = 10**3

phi, sigma = variables('phi, sigma')
t, s, phi_c = parameters('t, s, phi_c')
phi_c.min = 0.001
phi_c.max = 0.1
sigma1 = sigma_m*(phi_c-phi)**(-s)
sigma2 = sigma_f*(phi-phi_c)**t
model = {sigma: Piecewise ((sigma1, phi <= phi_c), (sigma2, phi > phi_c))}
constraints = [Eq(sigma1.subs({phi: phi_c}), sigma2.subs({phi: phi_c}))]
phi_data=np.array([0,0.005,0.007,0.008,0.017,0.05,0.085,0.10])
sigma_data=np.array([2.00E-12,2.50E-12,3.00E-12,9.00E-04,1.00E-01,1.00E+00,2.00E+00,3.00E+00])
fit = Fit(model, phi=phi_data, sigma=sigma_data, constraints=constraints)
fit_result = fit.execute()
print(fit_result)
Unfortunately I get the following error:
File "D:\Programs\Anaconda\lib\site-packages\sympy\printing\pycode.py", line 236, in _print_ComplexInfinity
return self._print_NaN(expr)
File "D:\Programs\Anaconda\lib\site-packages\sympy\printing\pycode.py", line 74, in _print_known_const
known = self.known_constants[expr.__class__.__name__]
KeyError: 'ComplexInfinity'
My knowledge of coding is very limited, I have no idea what this means and what I should do to not have this error anymore. Please let me know if you have an idea.
I'm not certain that I have a single answer for you, but this will be too long to fit into a comment.
First, a model that switches functional form is especially challenging. But what's more, your form has
elif phi[i]==phi_c:
For floating point numbers that are variables, this is going to basically never be true. You might not mean "exactly equal" but "pretty close", which might be
elif abs(phi[i] - phi_c) < 1.0e-5:
or something...
But also, converting that from a for loop to using numpy.where() is probably worth looking into.
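A minimal sketch of such a vectorized version, under the assumption that the exact-equality branch can be dropped (conductivity_vec is a hypothetical rewrite, not part of the original code):
import numpy as np

sigma_m = 2*10**(-12)
sigma_f = 10**3

def conductivity_vec(phi, phi_c, t, s):
    # hypothetical vectorized version of the Conductivity() function above
    phi = np.asarray(phi, dtype=float)
    # both branches are evaluated; np.abs only guards the branch that
    # np.where discards, avoiding "negative base to non-integer power" issues
    below = sigma_m * np.abs(phi_c - phi)**(-s)
    above = sigma_f * np.abs(phi - phi_c)**t
    return np.where(phi < phi_c, below, above)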
Second, it is not at all clear that your different forms actually evaluate to the same values at the boundaries to ensure a continuous function. You might want to check that.
Third, models with powers and exponentials are especially challenging to fit as a small change in power can have a huge impact on the resulting value. It's also very easy to get "negative value raised to non-integer value", which is of course, complex.
Fourth, those sigma_m and sigma_f constants look like they could easily cause trouble. You should definitely evaluate your model with your starting parameter values and see if you can sort of reproduce your data with your model and reasonable starting values. I suspect that you'll need to change your starting values.
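For example, a quick visual check of the starting point might look like this (a sketch assuming the Conductivity function from the question is in scope; the log scale is only there because sigma spans many orders of magnitude):
import numpy as np
import matplotlib.pyplot as plt

phi_data = np.array([0, 0.005, 0.007, 0.008, 0.017, 0.05, 0.085, 0.10])
sigma_data = np.array([2.00E-12, 2.50E-12, 3.00E-12, 9.00E-04, 1.00E-01, 1.00E+00, 2.00E+00, 3.00E+00])

# evaluate the model (Conductivity() from the question) at the proposed
# starting values phi_c=0.01, t=2, s=0.5
guess = Conductivity(phi_data, 0.01, 2, 0.5)

plt.semilogy(phi_data, sigma_data, 'o', label='data')
plt.semilogy(phi_data, guess, '-', label='model at starting values')
plt.legend()
plt.show()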

How do I improve a Gaussian/Normal fit in Python 3.X by using a running median?

I have an array of 100x100 data points, where I'm trying to perform a Gaussian fit to each column of 100 values in the array. I then want the parameters found by fitting the first column to be used as the initial parameters for the fit of the next column. Let's say I start with the initial parameters of 1000, 0, and 1, and the fit finds values of 800, 3, and 1.5. I then want the fitter to use these three parameters as initial values for the next column.
My code is:
x = np.linspace(-50, 50, 100)
Gauss_Model = models.Gaussian1D(amplitude=1000., mean=0, stddev=1.)
Fitting_Model = fitting.LevMarLSQFitter()

Fit_Data = []
for i in range(0, Data_Array.shape[0]):
    Fit_Data.append(Fitting_Model(Gauss_Model, x, Data_Array[:, i]))
Right now it uses the same initial values for every fit. Does anyone know how to perform such a running median/mean for a Gaussian fitting method? Would really appreciate any help or being pointed in the right direction, thanks!
I'm not familiar with the specific library you are using, but if you can get your fitted parameters out with something like fit_data[-1].amplitude or fit_data[-1].mean, then you could modify your loop to use something like:
for i in range(0, data_array.shape[0]):
    if fit_data:  # true if not an empty list
        Gauss_Model = models.Gaussian1D(amplitude=fit_data[-1].amplitude,
                                        mean=fit_data[-1].mean,
                                        stddev=fit_data[-1].stddev)
    fit_data.append(Fitting_Model(Gauss_Model, x, Data_Array[:, i]))
basically checking whether you have already fit a model, and if you have, use the most recent fitted amplitude, mean, and standard deviation as the starting point for your next Gauss_Model.
A thought: this might speed up your fitting, but it shouldn't result in a "better" fit to the 100 data points in each fit operation. Your resulting model is probably the best-fit model to the data it was presented with. If you want to estimate the error in the parameters of your model, you can use the fact that, for two normal distributions A ~ N(m_a, v_a) and B ~ N(m_b, v_b), the distribution A + B will have mean m_a + m_b and variance v_a + v_b. Thus, the distribution of your means will be N(sum(means)/n, sum(variances)/n). Basically, you can say that your true mean is centered at the mean of your means, with standard deviation (sum(stddev)/sqrt(n)).
I also cannot tell what library you are using, and the details of how to do this probably depend on the details of how that library stores the fitted values. I can say that for lmfit (https://lmfit.github.io/lmfit-py/) we struggled with this sort of usage and arrived at a design that makes what you are trying to do pretty easy. With lmfit, you might compose this problem as:
import numpy as np
from lmfit import GaussianModel
x = np.linspace(-50,50,100)
# get Data_Array from somewhere....
# create a model for a Gaussian
Gauss_Model = GaussianModel()
# make a set of parameters, setting initial values
params = Gauss_Model.make_params(amplitude=1000, center=0, sigma=1.0)
Fit_Results = []
for i in range(Data_Array.shape[1]):
    result = Gauss_Model.fit(Data_Array[:, i], params, x=x)
    Fit_Results.append(result)
    # update `params` with the current best fit params for the next column
    params = result.params
Note that this works because lmfit is careful that Model.fit() will not alter the input parameters, and will put the resulting best-fit parameters for each fit in result.params.
And, if you decide you do want to have all columns use the original initial values, just comment out that last params = result.params.
Lmfit has a lot more bells and whistles, but I hope that helps you do what you need.

Fit gaussians (or other distributions) on my data using python

I have a database of features, a 2D np.array (2000 samples and each sample contains 100 features, 2000 X 100). I want to fit gaussian distributions to my database using python. My code is the following:
data = load_my_data() # loads a np.array with size 2000x200
clf = mixture.GaussianMixture(n_components= 50, covariance_type='full')
clf.fit(data)
I am not sure about the parameters, for example covariance_type, and how I can investigate whether the fit occurred successfully or not.
EDIT: I debugged the code to investigate what is happening with clf.means_, and apparently it produced a matrix of n_components x size_of_features (50 X 20). Is there a way I can check whether the fitting was successful, or plot the data? What are the alternatives to Gaussian mixtures (mixtures of exponentials, for example; I cannot find any available implementation)?
I think you are using the sklearn package.
Once you have fit, then type
print(clf.means_)
If it produces output, then the data has been fitted; if it raises an error, it has not.
Hope this helps you.
You can do dimensionality reduction using PCA to 3D space (let's say) and then plot means and data.
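A small sketch combining both suggestions above, i.e. checking the fitted attributes and projecting to 3D with PCA (the random array only stands in for load_my_data()):
import numpy as np
from sklearn import mixture
from sklearn.decomposition import PCA

data = np.random.rand(2000, 100)  # placeholder for load_my_data()

clf = mixture.GaussianMixture(n_components=50, covariance_type='full')
clf.fit(data)

print(clf.converged_)    # True if the EM iterations converged
print(clf.means_.shape)  # (n_components, n_features)

# project both the data and the component means to 3D for plotting
pca = PCA(n_components=3)
data_3d = pca.fit_transform(data)
means_3d = pca.transform(clf.means_)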
It is always preferable to choose a reduced set of candidates before trying to identify the distribution (in other words, use Cullen & Frey to reject the unlikely candidates), then run a goodness-of-fit test and select the best result.
You can just create a list of all available distributions in scipy. An example with two distributions and random data:
import numpy as np
import scipy.stats as st

data = np.random.random(10000)

# specify all candidate distributions here
distributions = [st.laplace, st.norm]

mles = []
for distribution in distributions:
    pars = distribution.fit(data)
    mle = distribution.nnlf(pars, data)
    mles.append(mle)

results = [(distribution.name, mle) for distribution, mle in zip(distributions, mles)]

best_fit = sorted(zip(distributions, mles), key=lambda d: d[1])[0]
print('Best fit reached using {}, MLE value: {}'.format(best_fit[0].name, best_fit[1]))
I understand you may want to do a regression between two different distributions, rather than fitting them to an arithmetic curve. If that is the case, you may be interested in plotting one against the other and doing a linear (or polynomial) regression, checking the coefficients.
Such a linear regression between the two distributions may tell you whether they are linearly dependent or not.
Linear Regression using Scipy documentation
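A rough sketch of that idea, regressing the sorted values of one sample against another (both random samples here are only placeholders for your data):
import numpy as np
import scipy.stats as st

# sort both samples so corresponding quantiles line up (a crude Q-Q style comparison)
a = np.sort(np.random.laplace(size=1000))
b = np.sort(np.random.normal(size=1000))

slope, intercept, r_value, p_value, std_err = st.linregress(a, b)
print(slope, intercept, r_value)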

Using scipy optimize for MLE estimate and curve fitting

I randomly generated 1000 data points using the weights I know are true for the normal distribution. Now I am trying to minimize the -log likelihood function to estimate the values of sig^2 and the weights. I sort of get the process conceptually, but when I try to code it I'm just lost.
This is my model:
p(y|x, w, sig^2) = N(y|w0+w1x+...+wnx^n, sig^2)
I've been googling for a while now and I've learned that the scipy.optimize.minimize function is good for this, but I can't get it to work right. Every solution I have tried has worked for the example I got the solution from, but I'm unable to extrapolate it to my problem.
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt

x = np.linspace(0, 1000, num=1000)

data = []
for y in x:
    data.append(np.polyval([.5, 1, 3], y))

# plot to confirm I do have a normal distribution...
data.sort()
pdf = stats.norm.pdf(data, np.mean(data), np.std(data))
plt.plot(data, pdf)
plt.show()

# This is where I am stuck.
logLik = -np.sum(stats.norm.logpdf(data, loc=??, scale=??))
I have found that the equation error(w) = 0.5*sum(poly(x_n, w) - y_n)^2 is relevant for minimizing the error of the weights, which therefore maximizes my likelihood for the weights, but I don't understand how to code this... I have found a similar relationship for sig^2, but have the same problem. Can somebody clarify how to do this to help my curve fitting? Maybe go as far as to post pseudocode I can use?
Yes, implementing likelihood fitting with minimize is tricky; I spent a lot of time on it. That is why I wrapped it. If I may shamelessly plug my own package symfit, your problem can be solved by doing something like this:
from symfit import Parameter, Variable, Likelihood, exp
import numpy as np
# Define the model for an exponential distribution
beta = Parameter()
x = Variable()
model = (1 / beta) * exp(-x / beta)
# Draw 100 samples from an exponential distribution with beta=5.5
data = np.random.exponential(5.5, 100)
# Do the fitting!
fit = Likelihood(model, data)
fit_result = fit.execute()
I have to admit I don't exactly understand your distribution, since I don't understand the role of your w, but perhaps with this code as an example, you'll know how to adapt it.
If not, let me know the full mathematical equation of your model so I can help you further.
For more info check the docs. (For a more technical description of what happens under the hood, read here and here.)
I think there's an issue with your setup. With maximum likelihood, you obtain the parameters that maximize the probability of observing your data (given a certain model). Your model seems to be:
y_i = w0 + w1*x_i + ... + wn*x_i^n + epsilon_i
where epsilon is N(0, sigma).
So you maximize the likelihood:
L(w, sigma) = prod_i f(y_i | x_i, w, sigma)
or equivalently take logs to get:
log L(w, sigma) = sum_i log f(y_i | x_i, w, sigma)
The f in this case is the normal probability density function, whose logarithm you can get with stats.norm.logpdf. You should then use scipy.optimize.minimize on the negative of that sum of stats.norm.logpdf terms, evaluated at each of the i points, from 1 to your sample size.
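A minimal sketch of that approach for the polynomial model above (the degree, the synthetic data and the starting values are purely illustrative):
import numpy as np
from scipy import optimize, stats

# illustrative synthetic data: quadratic mean plus normal noise
x = np.linspace(0, 10, 200)
y = np.polyval([0.5, 1.0, 3.0], x) + np.random.normal(0, 2.0, size=x.size)

def neg_log_likelihood(theta, x, y):
    w = theta[:-1]            # polynomial weights, highest power first
    sigma = abs(theta[-1])    # keep the scale parameter positive
    mu = np.polyval(w, x)     # model mean at each x
    return -np.sum(stats.norm.logpdf(y, loc=mu, scale=sigma))

# starting guess: [w2, w1, w0, sigma]
theta0 = np.array([1.0, 1.0, 1.0, 1.0])
res = optimize.minimize(neg_log_likelihood, theta0, args=(x, y))
print(res.x)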
If I've understood you correctly, your code is missing a y vector as well as an x vector! Show us a sample of those vectors and I can update my answer to include sample code for estimating the MLE with that data.

Fitting Parametric Curves in Python

I have experimental data of the form (X,Y) and a theoretical model of the form (x(t;*params),y(t;*params)) where t is a physical (but unobservable) variable, and *params are the parameters that I want to determine. t is a continuous variable, and there is a 1:1 relationship between x and t and between y and t in the model.
In a perfect world, I would know the value of T (the real-world value of the parameter) and would be able to do an extremely basic least-squares fit to find the values of *params. (Note that I am not trying to "connect" the values of x and y in my plot, like in 31243002 or 31464345.) I cannot guarantee that in my real data, the latent value T is monotonic, as my data is collected across multiple cycles.
I'm not very experienced doing curve fitting manually, and have to use extremely crude methods without easy access to a basic scipy function. My basic approach involves:
Choose some value of *params and apply it to the model
Take an array of t values and put it into the model to create an array of model(*params) = (x(*params),y(*params))
Interpolate X (the data values) into model to get Y_predicted
Run a least-squares (or other) comparison between Y and Y_predicted
Do it again for a new set of *params
Eventually, choose the best values for *params
There are several obvious problems with this approach.
1) I'm not experienced enough with coding to develop a very good "do it again" step, other than "try everything in the solution space," or maybe "try everything in a coarse grid" and then "try everything again in a slightly finer grid in the hotspots of the coarse grid." I tried MCMC methods, but I never found any optimum values, largely because of problem 2.
2) Steps 2-4 are super inefficient in their own right.
I've tried something like the code below (resembling pseudo-code; the actual functions are made up). There are many minor quibbles that could be made about using broadcasting on A and B, but those are less significant than the problem of needing to interpolate for every single step.
People I know have recommended using some sort of Expectation Maximization algorithm, but I don't know enough about that to code one up from scratch. I'm really hoping there's some awesome scipy (or otherwise open-source) algorithm I haven't been able to find that covers my whole problem, but at this point I am not hopeful.
import numpy as np
from scipy import interpolate

X_data = ...  # experimental data, provided elsewhere
Y_data = ...

def x(t, A, B):
    return A**t + B**t

def y(t, A, B):
    return A*t + B

def interp(A, B):
    ts = np.arange(-10, 10, 0.1)
    xs = x(ts, A, B)
    ys = y(ts, A, B)
    f = interpolate.interp1d(xs, ys)
    return f

N = 101
lsqs = np.zeros((N**2, 3))
count = 0
for i in range(0, N):
    A = 0.1*i  # checks A between 0 and 10
    for j in range(0, N):
        B = 10 + 0.1*j  # checks B between 10 and 20
        f = interp(A, B)
        y_fit = f(X_data)
        squares = np.sum((y_fit - Y_data)**2)
        lsqs[count] = (A, B, squares)  # puts the values in place for comparison later
        count += 1  # allows us to move to the next cell

i = np.argmin(lsqs[:, 2])
A_optimal = lsqs[i][0]
B_optimal = lsqs[i][1]
If I understand the question correctly, the params are constants which are the same in every sample, but t varies from sample to sample. So, for example, maybe you have a whole bunch of points which you believe have been sampled from a circle
x = a+r cos(t)
y = b+r sin(t)
at different values of t.
In this case, what I would do is eliminate the variable t to get a relation between x and y -- in this case, (x-a)^2+(y-b)^2 = r^2. If your data fit the model perfectly, you would have (x-a)^2+(y-b)^2 = r^2 at each of your data points. With some error, you could still find (a,b,r) to minimize
sum_i ((x_i-a)^2 + (y_i-b)^2 - r^2)^2.
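A short sketch of that circle example with scipy, minimizing the residual above over (a, b, r); the noisy data is generated only for illustration:
import numpy as np
from scipy.optimize import least_squares

# noisy points on a circle with a=1, b=-2, r=3, generated just for this example
rng = np.random.default_rng(0)
t = rng.uniform(0, 2*np.pi, 50)
X = 1.0 + 3.0*np.cos(t) + rng.normal(0, 0.05, t.size)
Y = -2.0 + 3.0*np.sin(t) + rng.normal(0, 0.05, t.size)

def residuals(p):
    a, b, r = p
    # (x_i-a)^2 + (y_i-b)^2 - r^2 should be ~0 for every point on the circle
    return (X - a)**2 + (Y - b)**2 - r**2

fit = least_squares(residuals, x0=[0.0, 0.0, 1.0])
print(fit.x)  # a, b, r should come out near (1, -2, 3); the sign of r is arbitrary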
Mathematica's Eliminate command can automate the procedure of eliminating t in some cases.
PS You might do better at stats.stackexchange, math.stackexchange or mathoverflow.net . I know the last one has a scary reputation, but we don't bite, really!
