Fitting with functional parameter constraints in Python - python

I have some data {x_i, y_i} and I want to fit a model function y = f(x, a, b, c) to find the best-fitting values of the parameters (a, b, c); however, the three of them are not totally independent but constrained by 1 < b, 0 <= c < 1 and g(a,b,c) > 0, where g is a "good" function. How could I implement this in Python, given that with curve_fit one cannot impose the parametric constraints directly?
I have been reading about lmfit, but I only see numerical constraints like 1 < b and 0 <= c < 1, not the one with g(a,b,c) > 0, which is the most important.

If I understand correctly, you have
from scipy.special import gamma

def g(a, b, c):
    c1 = (1.0 - c)
    cx = 1/c1
    c2 = 2*c1
    g = a*a*b*gamma(2+cx)*gamma(cx)/gamma(1+3/c2) - b*b/(1+b**c2)**(1/c2)
    return g
If so, and if I get the math right, this could be rearranged as
a = sqrt((g+b*b/(1+b**c2)**(1/c2))*gamma(1+3/c2)/(b*gamma(2+cx)*gamma(cx)))
Which is to say that you could think of your problem as having a variable g which is > 0, with the value of a derived from b, c, and g by the above expression.
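If you want to sanity-check that inversion, a quick symbolic test with sympy (my addition, not part of the fit itself) would be:
import sympy as sp

a, b, c, g = sp.symbols('a b c g', positive=True)
c1 = 1 - c
cx = 1/c1
c2 = 2*c1
# g as defined above, and a as recovered from (g, b, c)
g_expr = a**2*b*sp.gamma(2+cx)*sp.gamma(cx)/sp.gamma(1+3/c2) - b**2/(1+b**c2)**(1/c2)
a_expr = sp.sqrt((g + b**2/(1+b**c2)**(1/c2))*sp.gamma(1+3/c2)/(b*sp.gamma(2+cx)*sp.gamma(cx)))
print(sp.simplify(g_expr.subs(a, a_expr) - g))   # prints 0 if the inversion is right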
And that you can do with lmfit and its expression-based constraint mechanism. You would have to make the gamma function available, as with
from lmfit import Parameters
from scipy.special import gamma
params = Parameters()
params._asteval.symtable['gamma'] = gamma
and then set up the parameters with bounds and constraints. I would probably follow the math above to allow better debugging and use something like:
params.add('b', 1.5, min=1)
params.add('c', 0.4, min=0, max=1)
params.add('g', 0.2, min=0)
params.add('c1', expr='1-c')
params.add('cx', expr='1.0/c1')
params.add('c2', expr='2*c1')
params.add('gprod', expr='b*gamma(2+cx)*gamma(cx)/gamma(1+3/c2)')
params.add('bfact', expr='(1+b**c2)**(1/c2)')
params.add('a', expr='sqrt((g + b*b/bfact)/gprod)')
Note that this gives 3 actual variables (now g, b, and c) with plenty of derived values calculated from these, including a. I would certainly check all that math. It looks like you're safe from negative**fractional_power, sqrt(negative), and gamma(-1), but be aware of these possibilities, which will kill the fit.
You could embed all of that into your fitting function, but using constraint expressions gives you the ability to constrain parameter values independently of how the fitting or model function is defined.
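For completeness, a minimal sketch of how these parameters could drive a fit (the model function f and the data arrays xdat and ydat are placeholders for your own):
from lmfit import minimize

def residual(params, x, y):
    # the derived parameter 'a' is recomputed from g, b, and c on every step
    a = params['a'].value
    b = params['b'].value
    c = params['c'].value
    return f(x, a, b, c) - y   # f is your model function (placeholder)

result = minimize(residual, params, args=(xdat, ydat))
print(result.params['a'].value, result.params['g'].value)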
Hope that helps. Again, if this does not get to what you are trying to do, post more details about the constraint you are trying to impose.

Like James Phillips, I was going to suggest SciPy's curve_fit. But the way that you have defined your function, one of the constraints is on the function itself, and SciPy's bounds are defined only in terms of input variables.
What, exactly, are the forms of your functions? Can you transform them so that you can use a standard definition of bounds, and then reverse the transformation to give a function in the original form that you wanted?
I have encountered a related problem when trying to fit exponential regressions with SciPy's curve_fit. The parameter search algorithm varies the parameters in a linear fashion, and it's really easy to fail to establish a gradient. If I instead write a function that fits the logarithm of the function I want, it's much easier to make curve_fit work. Then, for my final result, I take the exponential of the fitted function.
This same strategy could work for you. Predict ln(y), whose value can be unbounded, and for your final result output exp(ln(y)) = y.
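As a rough illustration of that strategy, here is a sketch with a toy exponential model y = a*exp(b*x) and made-up noisy data:
import numpy as np
from scipy.optimize import curve_fit

def log_model(x, ln_a, b):
    # fit ln(y) = ln(a) + b*x, which is linear and easy for curve_fit
    return ln_a + b*x

x = np.linspace(0, 5, 50)
y = 2.0*np.exp(0.7*x)*np.random.lognormal(0, 0.05, x.size)   # synthetic noisy data

popt, pcov = curve_fit(log_model, x, np.log(y))
a_fit, b_fit = np.exp(popt[0]), popt[1]
y_fit = a_fit*np.exp(b_fit*x)   # transform back to the original scale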

Related

Scipy curve_fit does not estimate offset parameter but works great for other needed parameters

I don't know if this is a problem related to my objective function or Scipy's curve_fit library because most of the posts with a similar problem usually have something wrong with their objective function.
Problem statement:
I am trying to fit a gamma function with 4 parameters, namely: time to peak (tmax), max amplitude (ymax), shape (b), and offset. curve_fit estimates reliable results for the first 3 parameters but returns the initial guess for offset.
My initial guess is p0 = [231, 5, 0.006, 60]
and the plot looks like this:
[Plot of initial guess of parameters]
However, when I run it on data, the offset value is always 60, or whatever the initial guess is set to, whereas the other 3 parameters give reliable results.
My objective function is:
import math
import numpy as np

def objective2(t, tmax, ymax, b, offset):
    offInt = math.ceil(offset)
    offArr = np.zeros(offInt).astype(dtype=np.float32)
    offArr = np.append(offArr, ymax*((t[:(len(t)-offInt)]/tmax)*np.exp((b*tmax)))*2.713*np.exp((tmax-t[:(len(t)-offInt)])*b))
    return offArr
where t = np.arange(1,887,1).astype(dtype=np.float32)
Can someone help me understand where things are going south?
Thanks.

Fitting data with function with several domains whose boundaries are variable

As the title mentions, I am having trouble fitting data points to a function with 3 domains whose boundaries are a parameter of my function. Here is the function I am dealing with:
global sigma_m
sigma_m = 2*10**(-12)
global sigma_f
sigma_f = 10**3

def Conductivity(phi, phi_c, t, s):
    sigma = [0]*len(phi)
    for i in range(0, len(phi)):
        if phi[i] < phi_c:
            sigma[i] = sigma_m*(phi_c-phi[i])**(-s)
        elif phi[i] == phi_c:
            sigma[i] = sigma_f*(sigma_m/sigma_f)**(t/(t+s))
        else:
            sigma[i] = sigma_f*(phi[i]-phi_c)**t
    return sigma
And my data points are:
phi_data=[0,0.005,0.007,0.008,0.017,0.05,0.085,0.10]
sigma_data=[2.00E-12,2.50E-12,3.00E-12,9.00E-04,1.00E-01,1.00E+00,2.00E+00,3.00E+00]
My constraints are that phi_c, s, and t must be strictly greater than zero (in practice, phi_c is rarely higher than 0.1 but higher than 0.001, s is usually between 0.5 and 1.5, and t is usually anywhere between 1.5 and 6).
My goal is to fit my data points and have my fit give me values of phi_c, s, and t. s and t can be estimated to help the code (in the specific set of data points that I showed, t should be around 2, and s should be around 0.5). phi_c is completely unknown, except for the range of values that I mentioned just above.
I have used both curve_fit from scipy and Model from lmfit, but both give ridiculously small phi_c values (like 10**(-16) or similarly small values that make me believe the program wants phi_c to be negative).
Here is my code for when I used curve_fit:
popt, pcov = curve_fit(Conductivity, phi_data, sigma_data, p0=[0.01,2,0.5], bounds=(0,[0.5,10,3]))
Here is my code for when I used Model from lmfit:
t_estimate=0.5
s_estimate=2
phi_c_estimate=0.005
condmodel = Model(Conductivity)
params = condmodel.make_params(phi_c=phi_c_estimate,t=t_estimate,s=s_estimate)
result = condmodel.fit(sigma_data, params, phi=phi_data)
params['phi_c'].min = 0
params['phi_c'].max = 0.1
Both options give an okay fit when plotted, but the estimated value of phi_c is nowhere near plausible.
If you have any idea what I could do to have a better fit, please let me know!
PS: I have read a promising post about using the symfit package to fit the data on the different regions separately; unfortunately, the symfit package does not work for me. It keeps uninstalling my version of scipy, reinstalling an older version, and then telling me it needs a newer version of scipy to function.
EDIT: I managed to make the symfit package work. Here is my entire code:
from symfit import parameters, variables, Fit, Piecewise, exp, Eq
import numpy as np
import matplotlib.pyplot as plt
global sigma_m
sigma_m=2*10**(-12)
global sigma_f
sigma_f=10**3
phi, sigma = variables ('phi, sigma')
t, s, phi_c = parameters('t, s, phi_c')
phi_c.min = 0.001
phi_c.max = 0.1
sigma1 = sigma_m*(phi_c-phi)**(-s)
sigma2 = sigma_f*(phi-phi_c)**t
model = {sigma: Piecewise ((sigma1, phi <= phi_c), (sigma2, phi > phi_c))}
constraints = [Eq(sigma1.subs({phi: phi_c}), sigma2.subs({phi: phi_c}))]
phi_data=np.array([0,0.005,0.007,0.008,0.017,0.05,0.085,0.10])
sigma_data=np.array([2.00E-12,2.50E-12,3.00E-12,9.00E-04,1.00E-01,1.00E+00,2.00E+00,3.00E+00])
fit = Fit(model, phi=phi_data, sigma=sigma_data, constraints=constraints)
fit_result = fit.execute()
print(fit_result)
Unfortunately I get the following error:
File "D:\Programs\Anaconda\lib\site-packages\sympy\printing\pycode.py", line 236, in _print_ComplexInfinity
return self._print_NaN(expr)
File "D:\Programs\Anaconda\lib\site-packages\sympy\printing\pycode.py", line 74, in _print_known_const
known = self.known_constants[expr.__class__.__name__]
KeyError: 'ComplexInfinity'
My knowledge of coding is very limited; I have no idea what this means or what I should do to get rid of this error. Please let me know if you have an idea.
I'm not certain that I have a single answer for you, but this will be too long to fit into a comment.
First, a model that switches functional form is especially challenging. But what's more, your form has
elif phi[i]==phi_c:
For floating point numbers that are variables, this is going to basically never be true. You might not mean "exactly equal" but "pretty close", which might be
elif abs(phi[i] - phi_c) < 1.0e-5:
or something...
But also, converting that from a for loop to using numpy.where() is probably worth looking into.
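For example, a vectorized sketch (dropping the exact-equality branch, since it essentially never fires for floats):
import numpy as np

sigma_m = 2e-12
sigma_f = 1e3

def conductivity_vec(phi, phi_c, t, s):
    # np.where evaluates both branches, so clamp each base to avoid
    # raising a negative number to a fractional power
    phi = np.asarray(phi, dtype=float)
    dphi = phi - phi_c
    below = sigma_m*np.where(dphi < 0, -dphi, 1.0)**(-s)
    above = sigma_f*np.where(dphi >= 0, dphi, 1.0)**t
    return np.where(dphi < 0, below, above)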
Second, it is not at all clear that your different forms actually evaluate to the same values at the boundaries to ensure a continuous function. You might want to check that.
Third, models with powers and exponentials are especially challenging to fit, as a small change in a power can have a huge impact on the resulting value. It's also very easy to get "negative value raised to non-integer power", which is, of course, complex.
Fourth, those sigma_m and sigma_f constants look like they could easily cause trouble. You should definitely evaluate your model with your starting parameter values and see whether you can roughly reproduce your data with reasonable starting values. I suspect that you'll need to change your starting values.
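For example, a quick sanity check along those lines, using the data from the question and the starting values it mentions (phi_c = 0.005, t = 2, s = 0.5):
import numpy as np
import matplotlib.pyplot as plt

# overlay the model, evaluated at the starting guesses, on the data
phi_fine = np.linspace(0.0, 0.1, 500)
sigma_model = Conductivity(phi_fine, 0.005, 2, 0.5)

plt.semilogy(phi_data, sigma_data, 'o', label='data')
plt.semilogy(phi_fine, sigma_model, '-', label='model at starting values')
plt.xlabel('phi')
plt.ylabel('sigma')
plt.legend()
plt.show()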

Putting bounds on stochastic variables in PyMC

I have a variable A which is Bernoulli distributed, A = pymc.Bernoulli('A', p_A), but I don't have a hard value for p_A and want to sample for it. I do know that it should be small, so I want to use an exponential distribution p_A = pymc.Exponential('p_A', 10).
However, the exponential distribution can return values higher than 1, which would throw off A. Is there a way of bounding the output of p_A without having to re-implement either the Bernoulli or the Exponential distribution in my own @pymc.stochastic-decorated function?
You can use a deterministic function to truncate the Exponential distribution. Personally, I believe it would be better to use a distribution that is bounded between 0 and 1, but to solve your problem exactly you can do the following:
import pymc as pm

p_A = pm.Exponential('p_A', 10)

@pm.deterministic
def p_B(p=p_A):
    return min(1, p)

A = pm.Bernoulli('A', p_B)
model = dict(p_A=p_A, p_B=p_B, A=A)
S = pm.MCMC(model)
S.sample(1000)
p_B_trace = S.trace('p_B')[:]
PyMC provides bounds. The following should also work:
p_A = pymc.Bound(pymc.Exponential, upper=1)('p_A', lam=10)
For any other lost souls who come across this:
I think the best solution for my purposes (that is, I was only using the exponential distribution because the probabilities I was looking to generate were probably small, rather than out of mathematical convenience) was to use a Beta distribution instead.
For certain parameter values it approximates the shape of an exponential distribution (and can do the same for binomials and normals), but it is bounded to [0, 1]. It's probably only useful for doing things numerically, though, as I imagine it's a pain to do any analysis with.
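For instance, a sketch (Beta(1, 10) is just one choice; its density, 10*(1-p)**9, is concentrated near zero much like Exponential(10), but stays in [0, 1]):
import pymc as pm

# a prior bounded to [0, 1] with most of its mass near zero
p_A = pm.Beta('p_A', alpha=1, beta=10)
A = pm.Bernoulli('A', p_A)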

polyfit refining: setting polynomial to be always positive

I am trying to fit a polynomial to my data, e.g.
import scipy as sp
x = [1,6,9,17,23,28]
y = [6.1, 7.52324, 5.71, 5.86105, 6.3, 5.2]
and say I know the degree of the polynomial (e.g. 3); then I just use scipy's polyfit method to get the polynomial of the given degree:
fittedModelFunction = sp.polyfit(x, y, 3)
func = sp.poly1d(fittedModelFunction)
QUESTIONS:
1) How can I additionally require that the resulting function func be always positive (i.e. f(x) >= 0 for any x)?
2) How can I further define constraints (e.g. the number of local min and max points, etc.) in order to get a better fit?
Is there something like this:
http://mail.scipy.org/pipermail/scipy-user/2007-July/013138.html
but more accurate?
Always Positive
I haven't been able to find a scipy reference that determines whether a function is positive everywhere, but an indirect way would be to find all the roots of the function - Scipy Roots - and inspect the limits near those roots. There are a few cases to consider:
No roots at all
Pick any x and evaluate the function. Since the function does not cross the x-axis (it has no roots), any positive result indicates the function is positive everywhere!
Finite number of roots
This is probably the most likely case. You would have to inspect the limits before and after each root - SymPy Limits. You would have to specify your own minimum acceptable delta for the limit, however. I haven't seen a two-sided limit method provided by SymPy, but it looks simple enough to make your own:
from sympy import limit

# f: function, v: variable to limit, p: point, d: delta
# returns the two one-sided limit values
def twoSidedLimit(f, v, p, d):
    return limit(f, v, p-d), limit(f, v, p+d)
Infinite roots
I don't think that polyfit would generate an oscillating function, but this is something to consider. I don't know how to handle this with the method I have already offered... Um, hope it does not happen?
Constraints
The only built-in form of constraints seems to be limited to SciPy's optimize library. A crude way to enforce constraints with polyfit would be to take the function it returns, generate a vector of values for various x, and look for values in the vector that violate the constraint. If you try to use filter, map, or lambda it may be slow with large vectors, since Python's filter makes a copy of the list/vector being filtered. I can't really help in this regard.
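A rough sketch of that sampling idea with numpy, using the data from the question (note a grid only checks the interval you sample, not all x):
import numpy as np

x = np.array([1, 6, 9, 17, 23, 28], dtype=float)
y = np.array([6.1, 7.52324, 5.71, 5.86105, 6.3, 5.2])

coeffs = np.polyfit(x, y, 3)
func = np.poly1d(coeffs)

# inspect the real roots, then sample a dense grid for sign violations
real_roots = func.roots[np.isreal(func.roots)].real
grid = np.linspace(x.min(), x.max(), 2001)
violations = grid[func(grid) < 0]
print("real roots:", real_roots)
print("positive on the sampled grid:", violations.size == 0)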

scipy integrate over array with variable bounds

I am trying to integrate a function over a list of points and pass the whole array to an integration function in order to vectorize the thing. For starters, calling scipy.integrate.quad is way too slow, since I have something like 10,000,000 points to integrate. Using scipy.integrate.romberg does the trick much faster, almost instantaneously, while quad is slow since you must loop over it or vectorize it.
My function is quite complicated, but for demonstration purposes, let's say I want to integrate x^2 + y^2 over x from a to b, where y is an array of scalars. For example
import numpy as np
from scipy.integrate import quad, romberg
def integrand(x, y):
    return x**2 + y**2

y = np.arange(10)
quad(integrand, 0, 10, args=(y,))      # this fails since y is not a scalar
romberg(integrand, 0, 10, args=(y,))   # y works here, giving the integral over
                                       # the entire range
But this only works for fixed bounds. Is there a way to do something like
z = np.arange(20, 30)
romberg(integrand, 0, z)   # fails since the function doesn't seem to
                           # support variable bounds
The only way I can see is to re-implement the algorithm itself in numpy so I can have variable bounds. Is there any function that supports something like this? There is also romb, where you must supply the values of the integrand directly along with a dx interval, but that will be too imprecise for my complicated function (the Marcum Q function; I couldn't find any implementation of it, which could otherwise be another way to do it).
The best approach when trying to evaluate a special function is to write a function that uses the properties of the function to quickly and accurately evaluate it in all parameter regimes. It is quite unlikely that a single approach will give accurate (or even stable) results for all ranges of parameters. Direct evaluation of an integral, as in this case, will almost certainly break down in many cases.
That being said, the general problem of evaluating an integral over many ranges can be solved by turning the integral into a differential equation and solving that. Roughly, the steps would be:
1. Given an integral I(t), which I will assume is an integral of a function f(x) from 0 to t (this can be generalized to an arbitrary lower limit), write it as the differential equation dI/dt = f(t).
2. Solve this differential equation using scipy.integrate.odeint() for some initial condition (here I(0)) over some range of times from 0 up to the largest limit of interest. How finely this range is sampled depends on the function and how accurately it needs to be evaluated.
3. The result will be the value of the integral from 0 to t for the set of t we input. We can turn this into a "continuous" function using interpolation; for example, using a spline we can define i = scipy.interpolate.InterpolatedUnivariateSpline(t, I).
4. Given a set of upper and lower limits in arrays b and a, respectively, we can then evaluate them all at once as res = i(b) - i(a), as in the sketch below.
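A minimal sketch of those steps, using f(x) = x**2 as a stand-in for the real integrand:
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import InterpolatedUnivariateSpline

def f(x):
    return x**2   # placeholder integrand

# steps 1-2: solve dI/dt = f(t) with I(0) = 0 on a grid covering all limits
t = np.linspace(0, 30, 3001)
I = odeint(lambda y, t: f(t), 0.0, t).ravel()

# step 3: interpolate to get a "continuous" antiderivative
i = InterpolatedUnivariateSpline(t, I)

# step 4: evaluate many integrals at once from arrays of limits
a = np.zeros(10)
b = np.arange(20, 30)
res = i(b) - i(a)   # integral of f from a[k] to b[k] for each k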
Whether this approach will work in your case will require you to study it carefully over your range of parameters. Also note that the Marcum Q function involves a semi-infinite integral. In principle this is not a problem: just transform the integral to one over a finite range, for example with the transformation x -> 1/x. There is no guarantee that this approach will be numerically stable for your problem.
