PyMC throwing Error or Crashing When Trying to Sample

I'm trying to model a process where the number of trials n used in a binomial process is generated by a non-homogeneous Poisson process. I'm currently using PyMC to fit the Poisson part of this (example here, without the whole capping part), but I can't figure out how to integrate this with the Binomial. How can I take what I've generated with the Poisson process and use it in fitting a Binomial process? Or is there a better way to do this using similar methods?
Here's what I've tried:
import pymc as pm
import numpy as np

t = np.arange(5)

a = pm.Uniform(name='a', value=1., lower=0, upper=10)
b = pm.Uniform(name='b', value=1., lower=0, upper=10)

@pm.deterministic
def linear(a=a, b=b):
    return a * t + b

N_A = pm.Poisson(mu=linear, name='N_A')
C = pm.Beta('C', 1, 1)
obs_A = pm.Binomial('obs_A', N_A, C, observed=True, value=np.array([0,1,4,3,7]))

mcmc = pm.MCMC([obs_A, C, N_A, a, b])
mcmc.sample(10000, 5000)
When I try to pull a sample, it throws the error
pymc.Node.ZeroProbability: Stochastic obs_A's value is outside its support, or it forbids its parents' current values.
I'm sure I'm formulating this incorrectly, but I'm not sure how.

The error appears because the automatically generated initial value of N_A (the number of trials) is less than some of the observed values specified in pm.Binomial, i.e. the probability of observing such a number is zero, and the initial value has to be rejected. One solution would be to supply an acceptable initial value for N_A explicitly:
x = np.array([0,1,4,3,7])
N_A = pm.Poisson(mu=linear, name='N_A', value=x)
For reference: https://github.com/pymc-devs/pymc/issues/12:
This is a model mis-specification issue, not a bug. It means that the initial parameter values for the likelihood are incompatible with the data -- you need to specify compatible initial values.
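Putting the suggestion together with the code from the question, a minimal sketch of a model that should get past the ZeroProbability error might look like this; the only substantive change is giving N_A an explicit starting value at least as large as the observed counts:
import pymc as pm
import numpy as np

t = np.arange(5)
x = np.array([0, 1, 4, 3, 7])  # observed successes

a = pm.Uniform('a', lower=0, upper=10, value=1.)
b = pm.Uniform('b', lower=0, upper=10, value=1.)

@pm.deterministic
def linear(a=a, b=b):
    return a * t + b

# start N_A at the observed counts so the Binomial likelihood is not
# immediately zero (n must be >= the observed number of successes)
N_A = pm.Poisson('N_A', mu=linear, value=x)
C = pm.Beta('C', 1, 1)
obs_A = pm.Binomial('obs_A', n=N_A, p=C, value=x, observed=True)

mcmc = pm.MCMC([obs_A, C, N_A, a, b])
mcmc.sample(10000, 5000)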

Related

Solving log-transformed ODE system without overflow error

I have a system of ODEs where my state variables and independent variable span many orders of magnitude (initial values are around 0 at t=0 and are expected to become about 10¹⁰ by t=10¹⁷). I also want to ensure that my state variables remain positive.
According to this Stack Overflow post, one way to enforce positivity is to log-transform the ODEs, solving for the evolution of the logarithm of a variable instead of the variable itself. However, when I try this with my ODEs, I get an overflow error, probably because of the huge dynamic range of my state variables and time variable. Am I doing something wrong or is log-transform just not applicable in my case?
Here is a minimal working example that is successfully solved by scipy.integrate.solve_ivp:
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import solve_ivp
# initialize times at which we are given certain input quantities/parameters
# this is seconds corresponding to the age of the universe in billions of years
times = np.linspace(0.1,10,500) * 3.15e16
# assume we are given the amount of new mass flowing into the system in units of g/sec
# for this toy example we will assume a log-normal distribution and then interpolate it for our integrator function
mdot_grow_array = np.random.lognormal(mean=0,sigma=1,size=len(times))*1.989e33 / 3.15e7
interp_grow = interp1d(times,mdot_grow_array,kind='cubic')
# assume there is also a conversion efficiency for some fraction of mass to be converted to another form
# for this example we'll assume the fractions are drawn from a uniform random distribution and again interpolate
mdot_convert_array = np.random.uniform(0,0.1,len(times)) / 3.15e16 # fraction of M1 per second converted to M2
interp_convert = interp1d(times,mdot_convert_array,kind='cubic')
# set up our integrator function
def integrator(t,y):
    print('Working on t=',t/3.15e16) # to check status of integration in billions of years
    # unpack state variables
    M1, M2 = y
    # get the interpolated value of new mass flowing in at this time
    mdot_grow_now = interp_grow(t)
    mdot_convert_now = interp_convert(t)
    # assume some fraction of the mass gets converted to another form
    mdot_convert = mdot_convert_now * M1
    # return the derivatives
    M1dot = mdot_grow_now - mdot_convert
    M2dot = mdot_convert
    return M1dot, M2dot
# set up initial conditions and run solve_ivp for the whole time range
# should start with M1=M2=0 initially but then solve_ivp does not work at all, so just use [1,1] instead
initial_conditions = [1.0,1.0]
# note how the integrator gets stuck at very small timesteps early on
sol = solve_ivp(integrator,(times[0],times[-1]),initial_conditions,dense_output=True,method='RK23')
And here is the same example but now log-transformed following the Stack Overflow post referenced above (since dlogx/dt = 1/x * dx/dt, we simply replace the LHS with x*dlogx/dt and divide both sides by x to isolate dlogx/dt on the LHS; and we make sure to use np.exp() on the state variables – now logx instead of x – within the integrator function):
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import solve_ivp
# initialize times at which we are given certain input quantities/parameters
# this is seconds corresponding to the age of the universe in billions of years
times = np.linspace(0.1,10,500) * 3.15e16
# assume we are given the amount of new mass flowing into the system in units of g/sec
# for this toy example we will assume a log-normal distribution and then interpolate it for our integrator function
mdot_grow_array = np.random.lognormal(mean=0,sigma=1,size=len(times))*1.989e33 / 3.15e7
interp_grow = interp1d(times,mdot_grow_array,kind='cubic')
# assume there is also a conversion efficiency for some fraction of mass to be converted to another form
# for this example we'll assume the fractions are drawn from a uniform random distribution and again interpolate
mdot_convert_array = np.random.uniform(0,0.1,len(times)) / 3.15e16 # fraction of M1 per second converted to M2
interp_convert = interp1d(times,mdot_convert_array,kind='cubic')
# set up our integrator function
def integrator(t,logy):
    print('Working on t=',t/3.15e16) # to check status of integration in billions of years
    # unpack state variables
    M1, M2 = np.exp(logy)
    # get the interpolated value of new mass flowing in at this time
    mdot_grow_now = interp_grow(t)
    mdot_convert_now = interp_convert(t)
    # assume some fraction of the mass gets converted to another form
    mdot_convert = mdot_convert_now * M1
    # return the derivatives
    M1dot = (mdot_grow_now - mdot_convert) / M1
    M2dot = (mdot_convert) / M2
    return M1dot, M2dot
# set up initial conditions and run solve_ivp for the whole time range
# should start with M1=M2=0 initially but then solve_ivp does not work at all, so just use [1,1] instead
initial_conditions = [1.0,1.0]
# note how the integrator gets stuck at very small timesteps early on
sol = solve_ivp(integrator,(times[0],times[-1]),initial_conditions,dense_output=True,method='RK23')
[…] is log-transform just not applicable in my case?
I don’t know where your transform went wrong, but it will certainly not achieve what you think it does. Log-transforming as a means to avoid negative values makes sense and works if and only if the following two conditions hold:
Condition 1: If the value of a dynamical variable approaches zero (from above), its derivative also approaches zero (from above) in your model.
Condition 2: Due to numerical noise, your derivative may turn negative though it actually isn't.
Conversely, it is not necessary or doesn’t work in the following cases:
If Condition 1 fails because your derivative never approaches zero in your model, but is strictly positive, you have no problem to begin with, as your derivative should not become negative in any reasonable implementation of your model. (You might make it happen by implementing some spectacular numerical annihilation, but that’s quite a difficult feat to achieve and not what I would consider a reasonable implementation.)
If Condition 1 fails because your derivative becomes truly negative in your model, logarithms won't save you, because the dynamics wants to push the dynamical variable below zero and the logarithms cannot represent this. You usually get an overflow error due to the logarithms becoming extremely negative, or the adaptive integration fails.
Even if Condition 1 applies, Condition 2 can usually be handled by avoiding numerical annihilations and similar when implementing your model.
Unless I am mistaken, your model falls into the first category. If M1 goes to zero, mdot_convert goes towards zero and thus M1dot = mdot_grow_now - mdot_convert is strictly positive, because mdot_grow_now is. M2dot is strictly positive anyway. Thus, you gain nothing from log-transforming. In fact, in the vast majority of cases, your dynamical variables will quickly increase.
With all that being said, some things you might want to look into are:
Normalising your variables to be in the order of magnitude of 1 (see the sketch below).
Stochastic differential equations.
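For the first suggestion, here is a rough, untested sketch of what normalising could look like for the toy problem above, reusing times, interp_grow and interp_convert from the question. T0 and M0 are arbitrary reference scales of my choosing (not the poster's), picked so the rescaled time and masses stay near order 1:
import numpy as np
from scipy.integrate import solve_ivp

T0 = 3.15e16    # seconds per billion years: rescaled time tau = t / T0 runs from ~0.1 to 10
M0 = 1.989e43   # arbitrary reference mass in grams (~1e10 solar masses); tune to your problem

def integrator_scaled(tau, y):
    t = tau * T0                          # physical time for the interpolators
    m1, m2 = y                            # dimensionless masses M / M0
    mdot_grow_now = interp_grow(t)        # g / s
    mdot_convert_now = interp_convert(t)  # fraction of M1 per second
    mdot_convert = mdot_convert_now * m1 * M0   # g / s
    # chain rule: d(M/M0)/d(t/T0) = (T0 / M0) * dM/dt
    m1dot = (mdot_grow_now - mdot_convert) * T0 / M0
    m2dot = mdot_convert * T0 / M0
    return m1dot, m2dot

sol_scaled = solve_ivp(integrator_scaled,
                       (times[0] / T0, times[-1] / T0),
                       [1.0 / M0, 1.0 / M0],    # same 1 g starting masses as above
                       dense_output=True, method='RK23')
With these scales the derivatives and final values are of order 1 instead of spanning forty-plus orders of magnitude, which is usually much kinder to the step-size control.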

Fitting data with a function with several domains whose boundaries are variable

As the title mentions, I am having trouble fitting data points to a function with 3 domains whose boundaries are a parameter of my function. Here is the function I am dealing with:
global sigma_m
sigma_m=2*10**(-12)
global sigma_f
sigma_f=10**3
def Conductivity(phi, phi_c, t, s):
    sigma = [0]*(len(phi))
    for i in range(0, len(phi)):
        if phi[i] < phi_c:
            sigma[i] = sigma_m*(phi_c-phi[i])**(-s)
        elif phi[i] == phi_c:
            sigma[i] = sigma_f*(sigma_m/sigma_f)**(t/(t+s))
        else:
            sigma[i] = sigma_f*(phi[i]-phi_c)**t
    return sigma
And my data points are:
phi_data=[0,0.005,0.007,0.008,0.017,0.05,0.085,0.10]
sigma_data=[2.00E-12,2.50E-12,3.00E-12,9.00E-04,1.00E-01,1.00E+00,2.00E+00,3.00E+00]
My constraints are that phi_c, s, and t must be strictly greater than zero (in practice, phi_c is rarely higher than 0.1 but higher than 0.001, s is usually between 0.5 and 1.5, and t is usually anywhere between 1.5 and 6).
My goal is to fit my data points and have my fit give me values of phi_c, s, and t. s and t can be estimated to help the code (in the specific set of data points that I showed, t should be around 2, and s should be around 0.5). phi_c is completely unknown, except for the range of values that I mentioned just above.
I have used both curve_fit from scipy and Model from lmfit but both provide ridiculously small phi_c values (like 10**(-16) or similarly small values that make me believe the programme wants phi_c to be negative).
Here is my code for when I used curve_fit:
popt, pcov = curve_fit(Conductivity, phi_data, sigma_data, p0=[0.01,2,0.5], bounds=(0,[0.5,10,3]))
Here is my code for when I used Model from lmfit:
t_estimate=0.5
s_estimate=2
phi_c_estimate=0.005
condmodel = Model(Conductivity)
params = condmodel.make_params(phi_c=phi_c_estimate,t=t_estimate,s=s_estimate)
result = condmodel.fit(sigma_data, params, phi=phi_data)
params['phi_c'].min = 0
params['phi_c'].max = 0.1
Both options give an okay fit when plotted, but the estimated value of phi_c is nowhere near plausible.
If you have any idea what I could do to have a better fit, please let me know!
PS: I have read a promising post about using the symfit package to fit the data on the different regions separately; unfortunately, symfit does not work for me. It keeps uninstalling my version of scipy and reinstalling an older version, and then it tells me it needs a newer version of scipy to function.
EDIT: I managed to make the symfit package work. Here is my entire code:
from symfit import parameters, variables, Fit, Piecewise, exp, Eq
import numpy as np
import matplotlib.pyplot as plt
global sigma_m
sigma_m=2*10**(-12)
global sigma_f
sigma_f=10**3
phi, sigma = variables('phi, sigma')
t, s, phi_c = parameters('t, s, phi_c')
phi_c.min = 0.001
phi_c.max = 0.1
sigma1 = sigma_m*(phi_c-phi)**(-s)
sigma2 = sigma_f*(phi-phi_c)**t
model = {sigma: Piecewise((sigma1, phi <= phi_c), (sigma2, phi > phi_c))}
constraints = [Eq(sigma1.subs({phi: phi_c}), sigma2.subs({phi: phi_c}))]
phi_data=np.array([0,0.005,0.007,0.008,0.017,0.05,0.085,0.10])
sigma_data=np.array([2.00E-12,2.50E-12,3.00E-12,9.00E-04,1.00E-01,1.00E+00,2.00E+00,3.00E+00])
fit = Fit(model, phi=phi_data, sigma=sigma_data, constraints=constraints)
fit_result = fit.execute()
print(fit_result)
Unfortunately I get the following error:
File "D:\Programs\Anaconda\lib\site-packages\sympy\printing\pycode.py", line 236, in _print_ComplexInfinity
return self._print_NaN(expr)
File "D:\Programs\Anaconda\lib\site-packages\sympy\printing\pycode.py", line 74, in _print_known_const
known = self.known_constants[expr.__class__.__name__]
KeyError: 'ComplexInfinity'
My knowledge of coding is very limited, I have no idea what this means and what I should do to not have this error anymore. Please let me know if you have an idea.
I'm not certain that I have a single answer for you, but this will be too long to fit into a comment.
First, a model that switches functional form is especially challenging. What's more, your form has
elif phi[i]==phi_c:
For floating point numbers that are variables, this is going to basically never be true. You might not mean "exactly equal" but "pretty close", which might be
elif abs(phi[i] - phi_c) < 1.0e-5:
or something...
But also, converting that from a for loop to using numpy.where() is probably worth looking into.
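For example, here is a vectorised sketch using numpy.select (a multi-branch relative of numpy.where); the function name conductivity_vec and the tol argument are mine, with tol standing in for the "pretty close" comparison above:
import numpy as np

sigma_m = 2e-12
sigma_f = 1e3

def conductivity_vec(phi, phi_c, t, s, tol=1e-6):
    phi = np.asarray(phi, dtype=float)
    d = np.maximum(np.abs(phi - phi_c), tol)   # clipped so neither branch blows up at phi == phi_c
    below = sigma_m * d**(-s)                  # used where phi < phi_c
    above = sigma_f * d**t                     # used where phi > phi_c
    at_pc = sigma_f * (sigma_m / sigma_f)**(t / (t + s))
    return np.select([phi < phi_c - tol, phi > phi_c + tol],
                     [below, above], default=at_pc)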
Second, it is not at all clear that your different forms actually evaluate to the same values at the boundaries to ensure a continuous function. You might want to check that.
Third, models with powers and exponentials are especially challenging to fit as a small change in power can have a huge impact on the resulting value. It's also very easy to get "negative value raised to non-integer value", which is of course, complex.
Fourth, those sigma_m and sigma_f constants look like they could easily cause trouble. You should definitely evaluate your model with your starting parameter values and see if you can sort of reproduce your data with your model and reasonable starting values. I suspect that you'll need to change your starting values.
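As a concrete, hedged way of doing that last check: evaluate the model on a fine grid at proposed starting values and overlay the data on a log scale. This reuses the conductivity_vec sketch above, and the starting values are assumptions you should replace with your own:
import numpy as np
import matplotlib.pyplot as plt

phi_data = np.array([0, 0.005, 0.007, 0.008, 0.017, 0.05, 0.085, 0.10])
sigma_data = np.array([2.00e-12, 2.50e-12, 3.00e-12, 9.00e-04, 1.00e-01, 1.00e+00, 2.00e+00, 3.00e+00])

phi_grid = np.linspace(0, 0.1, 500)
start = dict(phi_c=0.008, t=2.0, s=0.5)   # hypothetical starting guesses
sigma_start = conductivity_vec(phi_grid, **start)

plt.semilogy(phi_data, sigma_data, 'o', label='data')
plt.semilogy(phi_grid, sigma_start, '-', label='model at starting values')
plt.xlabel('phi')
plt.ylabel('sigma')
plt.legend()
plt.show()
If the model at the starting values is nowhere near the data on this plot, the fit has little chance of recovering sensible parameters.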

trouble getting started with simple pymc3 example

I am new to using the PyMC3 package and am just trying to implement an example from a course on measurement uncertainty that I’m taking. (Note this is an optional employee education course through work, not a graded class where I shouldn’t find answers online). The course uses R but I find python to be preferable.
The (simple) problem is posed as following:
Say you have an end-gauge of actual (unknown) length at room temperature, and measured length m. The relationship between the two is:
length = m / (1 + alpha*dT)
where alpha is an expansion coefficient, dT is the deviation from room temperature, and m is the measured quantity. The goal is to find the posterior distribution on length in order to determine its expected value and standard deviation (i.e. the measurement uncertainty).
The problem specifies prior distributions on alpha and dT (Gaussians with small standard deviation) and a loose prior on length (Gaussian with large standard deviation). The problem specifies that m was measured 25 times with an average of 50.000215 and standard deviation of 5.8e-6. We assume that the measurements of m are normally distributed with a mean of the true value of m.
One issue I had is that the likelihood doesn’t seem like it can be specified just based on these statistics in PyMC3, so I generated some dummy measurement data (I ended up doing 1000 measurements instead of 25). Again, the question is to get a posterior distribution on length (and in the process, although of less interest, updated posteriors on alpha and dT).
Here’s my code, which is not working and having convergence issues:
from IPython.core.pylabtools import figsize
import numpy as np
from matplotlib import pyplot as plt
import scipy.stats as stats
import pymc3 as pm
import theano.tensor as tt
basic_model = pm.Model()
xdata = np.random.normal(50.000215,5.8e-6*np.sqrt(1000),1000)
with basic_model:
    # prior distributions
    theta = pm.Normal('theta', mu=-.1, sd=.04)
    alpha = pm.Normal('alpha', mu=.0000115, sd=.0000012)
    length = pm.Normal('length', mu=50, sd=1)
    mumeas = length*(1 + alpha*theta)

with basic_model:
    obs = pm.Normal('obs', mu=mumeas, sd=5.8e-6, observed=xdata)
    #yobs = Normal('yobs',)
    start = pm.find_MAP()
    #trace = pm.sample(2000, step=pm.Metropolis, start=start)
    step = pm.Metropolis()
    trace = pm.sample(10000, tune=200000, step=step, start=start, njobs=1)

length_samples = trace['length']
fig, ax = plt.subplots()
plt.hist(length_samples, histtype='stepfilled', bins=30, alpha=0.85,
         label="posterior of $\lambda_1$", color="#A60628", normed=True)
I would really appreciate any help as to why this isn’t working. I've been trying for a while and it never converges to the expected solution given from the R code. I tried the default sampler (NUTS I think) as well as Metropolis but that completely failed with a zero gradient error. The (relevant) course slides are attached as an image. Finally, here is the comparable R code:
library(rjags)
#Data
jags_data <- list(xbar=50.000215)
jags_code <- jags.model(file = "calibration.txt",
                        data = jags_data,
                        n.chains = 1,
                        n.adapt = 30000)
post_samples <- coda.samples(model = jags_code,
                             variable.names = c("l","mu","alpha","theta"),#,"ypred"),
                             n.iter = 30000)
summary(post_samples)
mean(post_samples[[1]][,"l"])
sd(post_samples[[1]][,"l"])
plot(post_samples)
and the calibration.txt model:
model{
    l~dnorm(50,1.0)
    alpha~dnorm(0.0000115,694444444444)
    theta~dnorm(-0.1,625)
    mu<-l*(1+alpha*theta)
    xbar~dnorm(mu,29726516052)
}
(note I think the dnorm distribution takes 1/sigma^2, hence the weird-looking variances)
Any help or insight as to why the PyMC3 sampling isn't converging and what I should do differently would be extremely appreciated. Thanks!
I also had trouble getting anything useful from the generated data and model in the code. It seems to me that the level of noise in the fake data could equally be explained by the different sources of variance in the model. That can lead to a situation of highly correlated posterior parameters. Add to that the extreme scale imbalances, then it makes sense this would have sampling issues.
However, looking at the JAGS model, it seems they really are using just that one input observation. I've never seen this technique(?) before, that is, inputting summary statistics of data instead of the raw data itself. I suppose it worked for them in JAGS, so I decided to try running the exact same MCMC, including using the precision (tau) parameterization of the Gaussian.
Original Model with Metropolis
with pm.Model() as m0:
    # tau === precision parameterization
    dT = pm.Normal('dT', mu=-0.1, tau=625)
    alpha = pm.Normal('alpha', mu=0.0000115, tau=694444444444)
    length = pm.Normal('length', mu=50.0, tau=1.0)
    mu = pm.Deterministic('mu', length*(1+alpha*dT))
    # only one input observation; tau indicates the 5.8 nm sd
    obs = pm.Normal('obs', mu=mu, tau=29726516052, observed=[50.000215])
    trace = pm.sample(30000, tune=30000, chains=4, cores=4, step=pm.Metropolis())
While it's still not that great at sampling length and dT, it at least appears convergent overall:
I think noteworthy here is that despite the relatively weak prior on length (sd=1), the strong priors on all the other parameters appear to propagate a tight uncertainty bound on the length posterior. Ultimately, this is the posterior of interest, so this seems to be consistent with the intent of the exercise. Also, see that mu comes out in the posterior as exactly the distribution described, namely, N(50.000215, 5.8e-6).
(Trace plot, forest plot, and pair plot omitted here.)
Here, however, you can see the core problem is still there: a strong correlation between length and dT, plus a 4-5 order-of-magnitude difference in scale between the standard errors. I'd definitely do a long run before I really trusted the result.
Alternative Model with NUTS
In order to get this running with NUTS, you'd have to address the scaling issue. That is, somehow we need to reparameterize to get all the tau values closer to 1. Then, you'd run the sampler and transform back into the units you're interested in. Unfortunately, I don't have time to play around with this right now (I'd have to figure it out too), but maybe it's something you can start exploring on your own.
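One possible (untested) sketch of such a reparameterization: sample unit-scale standard normals, build the physical quantities deterministically from them, and rescale the single observation by its own standard deviation, so that everything the sampler sees is of order 1. The priors match the model above; the construction itself is just one option among several:
with pm.Model() as m1:
    # unit-scale latent variables
    z_dT = pm.Normal('z_dT', mu=0.0, sd=1.0)
    z_alpha = pm.Normal('z_alpha', mu=0.0, sd=1.0)
    z_length = pm.Normal('z_length', mu=0.0, sd=1.0)

    # transform back to physical units (same priors as the original model)
    dT = pm.Deterministic('dT', -0.1 + 0.04 * z_dT)
    alpha = pm.Deterministic('alpha', 1.15e-5 + 1.2e-6 * z_alpha)
    length = pm.Deterministic('length', 50.0 + 1.0 * z_length)
    mu = pm.Deterministic('mu', length * (1 + alpha * dT))

    # one observation, expressed in units of its 5.8e-6 standard deviation
    obs_scaled = (50.000215 - 50.0) / 5.8e-6
    obs = pm.Normal('obs', mu=(mu - 50.0) / 5.8e-6, sd=1.0, observed=[obs_scaled])

    trace = pm.sample(2000, tune=2000, chains=4, cores=4)
Dividing both the observation and mu by the same constant leaves the inference unchanged, but keeps the geometry the sampler works in close to unit scale.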

How do I improve a Gaussian/Normal fit in Python 3.X by using a running median?

I have an array of 100x100 data points, where I'm trying to perform a Gaussian fit to each column of 100 values in the array. I then want the parameters of the Gaussian found by using the fit of the first column to be the initial parameters of the starting point for the next column to use. Let's say I start with the initial parameters of 1000, 0, and 1, and the fit finds values of 800, 3, and 1.5. I then want the fitter to use these three parameters as initial values for the next column.
My code is:
x = np.linspace(-50, 50, 100)
Gauss_Model = models.Gaussian1D(amplitude=1000., mean=0, stddev=1.)
Fitting_Model = fitting.LevMarLSQFitter()
Fit_Data = []
for i in range(0, Data_Array.shape[0]):
    Fit_Data.append(Fitting_Model(Gauss_Model, x, Data_Array[:,i]))
Right now it uses the same initial values for every fit. Does anyone know how to perform such a running median/mean for a Gaussian fitting method? Would really appreciate any help or being pointed in the right direction, thanks!
I'm not familiar with the specific library you are using, but if you can get your fitted parameters out with something like Fit_Data[-1].amplitude or Fit_Data[-1].mean, then you could modify your loop to use something like:
for i in range(0, Data_Array.shape[0]):
    if Fit_Data:  # true if not an empty list
        Gauss_Model = models.Gaussian1D(amplitude=Fit_Data[-1].amplitude,
                                        mean=Fit_Data[-1].mean,
                                        stddev=Fit_Data[-1].stddev)
    Fit_Data.append(Fitting_Model(Gauss_Model, x, Data_Array[:,i]))
basically checking whether you have already fit a model, and if you have, use the most recent fitted amplitude, mean, and standard deviation as the starting point for your next Gauss_Model.
A thought: this might speed up your fitting, but it shouldn't result in a "better" fit to the 100 data points in each fit operation. Your resulting model is probably the best fit to the data it was presented. If you want to estimate the error in the parameters of your model, you can use the fact that, for two independent normal distributions A ~ N(m_a, v_a) and B ~ N(m_b, v_b), the sum A + B is distributed as N(m_a + m_b, v_a + v_b). The average of your n fitted means is therefore distributed as N(sum(means)/n, sum(variances)/n**2): your overall mean is centered at the mean of the means, with standard deviation sqrt(sum(variances))/n, which reduces to stddev/sqrt(n) when all the fits share the same stddev.
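A small sketch of that combination rule; the attribute access below assumes the astropy-style fitted models stored in Fit_Data, so adjust it to however your library actually exposes the fitted parameters:
import numpy as np

# hypothetical: per-column fitted parameters collected from Fit_Data
means = np.array([m.mean.value for m in Fit_Data])       # assumed attribute access
stddevs = np.array([m.stddev.value for m in Fit_Data])

n = len(means)
mean_of_means = means.sum() / n
# the average of n independent normals has variance sum(variances) / n**2
sd_of_mean = np.sqrt((stddevs**2).sum()) / n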
I also cannot tell what library you are using, and the details of how to do this probably depend on the details of how that library stores the fitted values. I can say that for lmfit (https://lmfit.github.io/lmfit-py/) we struggled with this sort of usage and arrived at a design that makes what you are trying to do pretty easy. With lmfit, you might compose this problem as:
import numpy as np
from lmfit import GaussianModel
x = np.linspace(-50,50,100)
# get Data_Array from somewhere....
# create a model for a Gaussian
Gauss_Model = GaussianModel()
# make a set of parameters, setting initial values
params = Gauss_Model.make_params(amplitude=1000, center=0, sigma=1.0)
Fit_Results = []
for i in range(Data_Array.shape[1]):
    result = Gauss_Model.fit(Data_Array[:, i], params, x=x)
    Fit_Results.append(result)
    # update `params` with the current best fit params for the next column
    params = result.params
Note that this works because lmfit is careful that Model.fit() will not alter the input parameters, and will put the resulting best-fit parameters for each fit in result.params.
And, if you decide you do want to have all columns use the original initial values, just comment out that last params = result.params.
Lmfit has a lot more bells and whistles, but I hope that helps you do what you need.

Non-standard distribution variables for KS testing?

Could you use the kstest in scipy.stats for non-standard distribution functions (i.e. vary the DOF for Student's t, or vary gamma for Cauchy)? My end goal is to find the max p-value and corresponding parameter for my distribution fit, but that isn't the issue.
EDIT:
"
scipy.stat's cauchy pdf is:
cauchy.pdf(x) = 1 / (pi * (1 + x**2))
where it implies x_0 = 0 for the location parameter and for gamma, Y = 1. I actually need it to look like this
cauchy.pdf(x, x_0, Y) = Y**2 / [(Y * pi) * ((x - x_0)**2 + Y**2)]
"
Q1) Could Student's t, at least, be used in a way perhaps like
stuff = []
for dof in xrange(0, 100):
    d, p = scipy.stats.kstest(data, "t", args=(dof,))
    stuff.append(np.hstack((d, p, dof)))
since it seems to have the option to vary the parameter?
Q2) How would you do this if you needed the full normal distribution equation (need to vary sigma) and Cauchy as written above (need to vary gamma)? EDIT: Instead of searching scipy.stats for non-standard distributions, is it actually possible to feed a function I write into kstest so that it will find p-values?
Thanks kindly
It seems that what you really want to do is parameter estimation. Using the KS test in this manner is not really what it is meant for. You should use the .fit method for the corresponding distribution.
>>> import numpy as np, scipy.stats as stats
>>> arr = stats.norm.rvs(loc=10, scale=3, size=10) # generate 10 random samples from a normal distribution
>>> arr
array([ 11.54239861, 15.76348509, 12.65427353, 13.32551871,
10.5756376 , 7.98128118, 14.39058752, 15.08548683,
9.21976924, 13.1020294 ])
>>> stats.norm.fit(arr)
(12.364046769964004, 2.3998164726918607)
>>> stats.cauchy.fit(arr)
(12.921113834451496, 1.5012714431045815)
Now to quickly check the documentation:
>>> help(cauchy.fit)
Help on method fit in module scipy.stats._distn_infrastructure:
fit(data, *args, **kwds) method of scipy.stats._continuous_distns.cauchy_gen instance
Return MLEs for shape, location, and scale parameters from data.
MLE stands for Maximum Likelihood Estimate. Starting estimates for
the fit are given by input arguments; for any arguments not provided
with starting estimates, ``self._fitstart(data)`` is called to generate
such.
One can hold some parameters fixed to specific values by passing in
keyword arguments ``f0``, ``f1``, ..., ``fn`` (for shape parameters)
and ``floc`` and ``fscale`` (for location and scale parameters,
respectively).
...
Returns
-------
shape, loc, scale : tuple of floats
MLEs for any shape statistics, followed by those for location and
scale.
Notes
-----
This fit is computed by maximizing a log-likelihood function, with
penalty applied for samples outside of range of the distribution. The
returned answer is not guaranteed to be the globally optimal MLE, it
may only be locally optimal, or the optimization may fail altogether.
So, if you wanted to hold one of those parameters constant, you could easily do:
>>> stats.cauchy.fit(arr, floc=10)
(10, 2.4905786982353786)
>>> stats.norm.fit(arr, floc=10)
(10, 3.3686549590571668)
