scipy leastsq fit - penalize certain solutions - python

I have implemented an algorithm that is able to fit multiple data sets at the same time. It is based on this solution: multi fit
The target function is too complex to show here (LaFortune scatter model), so I will use the target function from the solution for explanation:
def lor_func(x, c, par):
    a, b, d = par
    return a / ((x - c)**2 + b**2)
How can I penalize the fitting algorithm if it chooses a parameter set par that makes lor_func negative?
A negative value of the target function is valid from a mathematical point of view, so the parameter set par that produces it might be the solution with the least error. But I want to exclude such solutions because they are not physically valid.
A function like:
def lor_func(x, c, par):
    a, b, d = par
    value = a / ((x - c)**2 + b**2)
    return max(0, value)
does not work: the fit then optimizes against the clamped zero values as well and returns wrong parameters, so the result differs from the correct one.

Use the bounds argument of scipy.optimize.least_squares?
res = least_squares(func, x_guess, args=(Gd, K),
                    bounds=([0.0, -100, 0, 0],
                            [1.0, 0.0, 10, 1]),
                    max_nfev=100000, verbose=1)
like I did here:
Suggestions for fitting noisy exponentials with scipy curve_fit?
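For the Lorentzian example in the question, a minimal sketch of that idea (assuming the non-negativity can be pushed onto the parameters themselves: with a >= 0 and b > 0 the denominator is always positive, so lor_func can never go negative; the data below are made up purely for illustration):
import numpy as np
from scipy.optimize import least_squares

def lor_func(x, c, par):
    a, b, d = par
    return a / ((x - c)**2 + b**2)

def residuals(par, x, y, c):
    return lor_func(x, c, par) - y

# synthetic data, for illustration only
x = np.linspace(-5, 5, 200)
c = 0.5
y = lor_func(x, c, (2.0, 1.0, 0.0)) + 0.01 * np.random.randn(x.size)

# a >= 0 and b > 0 keep the model non-negative everywhere,
# so no clamping of the target function is needed
res = least_squares(residuals, x0=[1.0, 0.5, 0.0], args=(x, y, c),
                    bounds=([0.0, 1e-6, -np.inf], [np.inf, np.inf, np.inf]))
print(res.x)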

Related

How to configure the optimization algorithm during maximum likelihood estimation in OpenTURNS?

I have a Sample and I want to fit the parameters of a Beta distribution with maximum likelihood estimation. Moreover, I want to fix its support bounds a and b to the [0, 100] interval. This should be easy with MaximumLikelihoodFactory, but the problem is that the optimization algorithm fails. How can I change the algorithm so that it succeeds?
Here is a simple example, where I generate a sample with size 100 and configure the parameters a and b with the setKnownParameter.
import openturns as ot
# Get sample
beta_true = ot.Beta(3.0, 1.0, 0.0, 100.0)
sample = beta_true.getSample(100)
# Fit
factory = ot.MaximumLikelihoodFactory(ot.Beta())
factory.setKnownParameter([0.0, 100.0], [2, 3])
beta = factory.build(sample)
print(beta)
The previous script produces:
Beta(alpha = 2, beta = 2, a = 0, b = 100)
WRN - Switch to finite difference to compute the gradient at point=[0.130921,-2.18413]
WRN - TNC went to an abnormal point=[nan,nan]
The algorithm clearly fails, since the values of alpha and beta are unchanged from their default values.
I do not know why this fails, perhaps because it uses finite difference derivatives. Anyway, I would like to customize the optimization algorithm and see if that changes anything in the result, but I do not know how to do this.
The ResourceMap has a key which lets you configure the optimization algorithm. The value is a string which is the name of the default algorithm:
"MaximumLikelihoodFactory-DefaultOptimizationAlgorithm": "TNC"
The code uses this algorithm to perform the maximum likelihood estimation (MLE). But it does not say what values can be set. Actually, the code of the MLE uses the OptimizationAlgorithm.Build static method to create the optimization algorithm. According to the doc, this is the "Name of the algorithm or problem to solve. For example TNC, Cobyla or one of the NLopt solver names.". So I can configure, say M. J. D. Powell's "Cobyla" algorithm:
ot.ResourceMap.SetAsString("MaximumLikelihoodFactory-DefaultOptimizationAlgorithm", "Cobyla")
factory = ot.MaximumLikelihoodFactory(ot.Beta())
factory.setKnownParameter([0.0, 100.0], [2, 3])
beta = factory.build(sample)
print(beta)
The previous script produces:
Beta(alpha = 2.495, beta = 0.842196, a = 0, b = 100)
This shows that the algorithm now performs correctly.
I can also use one of NLopt's algorithms, e.g. "LN_AUGLAG" (the "LN" prefix means a Local algorithm that uses No derivatives).
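For example (a sketch assuming the NLopt bindings are available in this OpenTURNS build; everything else is identical to the Cobyla snippet above):
ot.ResourceMap.SetAsString("MaximumLikelihoodFactory-DefaultOptimizationAlgorithm",
                           "LN_AUGLAG")
factory = ot.MaximumLikelihoodFactory(ot.Beta())
factory.setKnownParameter([0.0, 100.0], [2, 3])
beta = factory.build(sample)
print(beta)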

How do I improve a Gaussian/Normal fit in Python 3.X by using a running median?

I have an array of 100x100 data points, where I'm trying to perform a Gaussian fit to each column of 100 values in the array. I then want the parameters found by the fit of the first column to be used as the initial parameters for the next column. Let's say I start with the initial parameters of 1000, 0, and 1, and the fit finds values of 800, 3, and 1.5. I then want the fitter to use these three parameters as initial values for the next column.
My code is:
x = np.linspace(-50,50,100)
Gauss_Model = models.Gaussian1D(amplitude = 1000., mean = 0, stddev = 1.)
Fitting_Model = fitting.LevMarLSQFitter()
Fit_Data = []
for i in range(0, Data_Array.shape[0]):
    Fit_Data.append(Fitting_Model(Gauss_Model, x, Data_Array[:,i]))
Right now it uses the same initial values for every fit. Does anyone know how to perform such a running median/mean for a Gaussian fitting method? Would really appreciate any help or being pointed in the right direction, thanks!
I'm not familiar with the specific library you are using, but if you can get your fitted parameters out with something like fit_data[-1].amplitude or fit_data[-1].mean, then you could modify your loop to use something like:
for i in range(0, data_array.shape[0]):
    if fit_data:  # true if not an empty list
        Gauss_Model = models.Gaussian1D(amplitude=fit_data[-1].amplitude,
                                        mean=fit_data[-1].mean,
                                        stddev=fit_data[-1].stddev)
    fit_data.append(Fitting_Model(Gauss_Model, x, Data_Array[:,i]))
basically checking whether you have already fit a model, and if you have, use the most recent fitted amplitude, mean, and standard deviation as the starting point for your next Gauss_Model.
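For what it's worth, the models/fitting imports in the question look like astropy.modeling (an assumption on my part); there the fitted values come back as Parameter objects on the returned model, so they can also be read out explicitly:
last_fit = fit_data[-1]
print(last_fit.amplitude.value, last_fit.mean.value, last_fit.stddev.value)

# seed the next model from the previous best-fit values
Gauss_Model = models.Gaussian1D(amplitude=last_fit.amplitude.value,
                                mean=last_fit.mean.value,
                                stddev=last_fit.stddev.value)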
A thought: this might speed up your fitting, but it shouldn't result in a "better" fit to the 100 data points in each fit operation. Each resulting model is probably already the best fit to the data it was presented with. If you want to estimate the uncertainty of your model parameters, you can use the fact that, for two independent normal distributions A ~ N(m_a, v_a) and B ~ N(m_b, v_b), the sum A + B is distributed N(m_a + m_b, v_a + v_b). The average of your n fitted means is therefore distributed N(sum(means)/n, sum(variances)/n^2), i.e. your true mean is centered at the mean of your means with standard deviation sqrt(sum(variances))/n (which reduces to the familiar stddev/sqrt(n) when the variances are all equal).
I also cannot tell what library you are using, and the details of how to do this probably depend on the details of how that library stores the fitted values. I can say that for lmfit (https://lmfit.github.io/lmfit-py/) we struggled with this sort of usage and arrived at a design that makes what you are trying to do pretty easy. With lmfit, you might compose this problem as:
import numpy as np
from lmfit.models import GaussianModel
x = np.linspace(-50,50,100)
# get Data_Array from somewhere....
# create a model for a Gaussian
Gauss_Model = GaussianModel()
# make a set of parameters, setting initial values
params = Gauss_Model.make_params(amplitude=1000, center=0, sigma=1.0)
Fit_Results = []
for i in range(Data_Array.shape[1]):
    result = Gauss_Model.fit(Data_Array[:, i], params, x=x)
    Fit_Results.append(result)
    # update `params` with the current best fit params for the next column
    params = result.params
Note that this works because lmfit is careful that Model.fit() will not alter the input parameters, and will put the resulting best-fit parameters for each fit in result.params.
And, if you decide you do want to have all columns use the original initial values, just comment out that last params = result.params.
Lmfit has a lot more bells and whistles, but I hope that helps you do what you need.
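As a small usage note (not part of the original answer): each entry of Fit_Results is an lmfit ModelResult, so the per-column best-fit values and uncertainties can be collected afterwards, for example:
centers = [res.params['center'].value for res in Fit_Results]
center_errs = [res.params['center'].stderr for res in Fit_Results]
amplitudes = [res.params['amplitude'].value for res in Fit_Results]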

How do you fit a polynomial to a data set?

I'm working on two functions. I have two data sets, e.g. [[x(1), y(1)], ..., [x(n), y(n)]], called dataSet and testData.
createMatrix(D, S) which returns a data matrix, where D is the degree and S is a vector of real numbers [s(1), s(2), ..., s(n)].
I know numpy has a function called polyfit, but polyfit takes three arguments; any advice on how I'd create the matrix?
polyFit(D), which takes the degree D and fits a polynomial of that degree to the data sets using linear least squares. I'm trying to return the weight vector and errors. I also know that there is lstsq in numpy.linalg, which I found in this question: Fitting polynomials to data
Is it possible to use that question to recreate what I'm trying?
This is what I have so far, but it isn't working.
def createMatrix(D, S):
    x = []
    y = []
    for i in dataSet:
        x.append(i[0])
        y.append(i[1])
    polyfit(x, y, D)
What I don't get here is what does S, the vector of real numbers, have to do with this?
def polyFit(D):
I'm basing a lot of this on the question posted above. I'm unsure about how to get just w, the weight vector, though. I'll be coding the errors myself, so that's fine; I was just wondering if you have any advice on getting the weight vectors themselves.
It looks like all createMatrix is doing is creating the two vectors required by polyfit. What you have will work, but the more Pythonic way to do it is:
def createMatrix(dataSet, D):
    # D is the degree you're fitting, e.g. 1, 2, or 3
    x, y = zip(*dataSet)
    return polyfit(x, y, D)
(This S/O link provides a detailed explanation of the zip(*dataSet) idiom.)
This will return a vector of coefficients that you can then pass to something like poly1d to generate results. (Further explanation of both polyfit and poly1d can be found here.)
Obviously, you'll need to decide what value you want for D. The simple answer to that is 1, 2, or 3. Polynomials of higher order than cubic tend to be rather unstable and the intrinsic errors make their output rather meaningless.
It sounds like you might be trying to do some sort of correlation analysis (i.e., does y vary with x and, if so, to what extent?) You'll almost certainly want to just use linear (D = 1) regression for this type of analysis. You can try to do a least squares quadratic fit (D = 2) but, again, the error bounds are probably wider than your assumptions (e.g. normality of distribution) will tolerate.
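To address the other half of the question (what S is for, and how to get just the weight vector w), here is one hedged sketch of how createMatrix and polyFit could be written with the numpy.linalg.lstsq route the question mentions. It assumes S is the vector of x-values the design matrix is built from, and reuses the question's global dataSet:
import numpy as np

def createMatrix(D, S):
    # design matrix with one row per s in S and columns [1, s, s**2, ..., s**D]
    return np.vander(S, D + 1, increasing=True)

def polyFit(D):
    x, y = zip(*dataSet)                  # dataSet as in the question
    A = createMatrix(D, np.asarray(x))
    # w is the weight vector of polynomial coefficients (lowest degree first);
    # the lstsq residuals can feed the error calculation
    w, residuals, rank, sv = np.linalg.lstsq(A, np.asarray(y), rcond=None)
    return w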

Calculate maximum likelihood using PyMC3

There are cases when I'm not actually interested in the full posterior of a Bayesian inference, but simply in the maximum likelihood (or maximum a posteriori for suitably chosen priors), and possibly its Hessian. PyMC3 has functions to do that, but find_MAP seems to return the model parameters in transformed form, depending on the prior distribution placed on them. Is there an easy way to get the untransformed values from these? The output of find_hessian is even less clear to me, but it is most likely in the transformed space too.
Maybe the simpler solution is to pass the argument transform=None, to avoid PyMC3 doing the transformation, and then use find_MAP.
Here is an example for a simple model:
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_aproximation:
    p = pm.Uniform('p', 0, 1, transform=None)
    w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
    mean_q = pm.find_MAP()
    std_q = ((1/pm.find_hessian(mean_q))**0.5)[0]
print(mean_q['p'], std_q)
Have you considered using ADVI?
I came across this once more and found a way to get the untransformed values from the transformed ones. Just in case somebody else needs this as well. The gist of it is that the untransformed values are essentially Theano expressions that can be evaluated given the transformed values. PyMC3 helps here a little by providing the Model.fn() function, which creates such an evaluation function accepting values by name. Now you only need to supply the untransformed variables of interest to the outs argument. A complete example:
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_aproximation:
    p = pm.Uniform('p', 0, 1)
    w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
    map_estimate = pm.find_MAP()

# create a function that evaluates p, given the transformed values
evalfun = normal_aproximation.fn(outs=p)
# create name:value mappings for the free variables (i.e. the transformed values)
inp = {v.name: map_estimate[v.name] for v in normal_aproximation.free_RVs}
# now use that input mapping to evaluate p
p_estimate = evalfun(inp)
outs can also receive a list of variables; evalfun will then output the values of the corresponding variables in the same order.
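A tiny illustration of that, building on the example above (with only one free variable here, the list has a single entry):
evalfun = normal_aproximation.fn(outs=[p])   # e.g. outs=[p, q] in a larger model
values = evalfun(inp)                        # values[0] corresponds to p, and so on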

Putting bounds on stochastic variables in PyMC

I have a variable A which is Bernoulli distributed, A = pymc.Bernoulli('A', p_A), but I don't have a hard value for p_A and want to sample for it. I do know that it should be small, so I want to use an exponential distribution p_A = pymc.Exponential('p_A', 10).
However, the exponential distribution can return values higher than 1, which would throw off A. Is there a way of bounding the output of p_A without having to re-implement either the Bernoulli or the Exponential distributions in my own @pymc.stochastic-decorated function?
You can use a deterministic function to truncate the Exponential distribution. Personally I believe it would be better if you used a distribution that is bounded between 0 and 1, but to solve exactly the problem you describe you can do the following:
import pymc as pm

p_A = pm.Exponential('p_A', 10)

@pm.deterministic
def p_B(p=p_A):
    return min(1, p)

A = pm.Bernoulli('A', p_B)

model = dict(p_A=p_A, p_B=p_B, A=A)
S = pm.MCMC(model)
S.sample(1000)
p_B_trace = S.trace('p_B')[:]
PyMC provides bounds. The following should also work:
p_A = pymc.Bound(pymc.Exponential, upper=1)('p_A', lam=10)
For any other lost souls who come across this:
I think the best solution for my purposes (that is, I was only using the exponential distribution because the probabilities I was looking to generate were probably small, rather than out of mathematical convenience) was to use a Beta distribution instead.
For certain parameter values it approximates the shape of an exponential distribution (and can do the same for binomials and normals), but it is bounded to [0, 1]. Probably only useful for doing things numerically, though, as I imagine it's a pain to do any analysis with.
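A minimal sketch of that alternative, staying in the PyMC(2)-style API used above (the particular Beta parameters are just an illustrative choice):
import pymc as pm

# Beta(1, 10) puts most of its mass near zero (mean ~ 0.09) and is
# bounded to [0, 1] by construction, so no truncation is needed
p_A = pm.Beta('p_A', alpha=1, beta=10)
A = pm.Bernoulli('A', p_A)

model = dict(p_A=p_A, A=A)
S = pm.MCMC(model)
S.sample(1000)
p_A_trace = S.trace('p_A')[:]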
