subclassing of scipy.stats.rv_continuous - python

I have three questions regarding subclassing of scipy.stats.rv_continuous.
My goal is to code a statistical mixture model of a truncated normal distribution, a truncated exponential distribution and two uniform distributions.
1) Why is drawing random variates via mm_model.rvs(size=1000) so extremely slow? I read something about performance issues in the documentation, but I was still surprised.
2) After drawing random variates via mm_model.rvs(size=1000) I get this IntegrationWarning:
IntegrationWarning: The maximum number of subdivisions (50) has been achieved.
If increasing the limit yields no improvement it is advised to analyze
the integrand in order to determine the difficulties. If the position of a
local difficulty can be determined (singularity, discontinuity) one will
probably gain from splitting up the interval and calling the integrator
on the subranges. Perhaps a special-purpose integrator should be used.
warnings.warn(msg, IntegrationWarning)
3) I read in the documentation that I can pass parameters to the pdf via the shapes argument. I tried to adjust my pdf and set the shapes parameter, but it did not work. Could someone explain how this works?
Thanks for any help.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

def trunc_norm(z, low_bound, up_bound, mu, sigma):
    a = (low_bound - mu) / sigma
    b = (up_bound - mu) / sigma
    return stats.truncnorm.pdf(z, a, b, loc=mu, scale=sigma)

def trunc_exp(z, up_bound, lam):
    return stats.truncexpon.pdf(z, b=up_bound*lam, scale=1/lam)

def uniform(z, a, b):
    return stats.uniform.pdf(z, loc=a, scale=(b - a))

class Measure_mixture_model(stats.rv_continuous):
    def _pdf(self, z):
        z_true = 8
        z_min = 0
        z_max = 10
        p_hit = 0.7
        p_short = 0.1
        p_max = 0.1
        p_rand = 0.1
        sig_hit = 1
        lambda_short = 0.5

        sum_p = p_hit + p_short + p_max + p_rand
        if sum_p < 0.99 or 1.01 < sum_p:
            misc.eprint("a priori probabilities p_hit, p_short, p_max, p_rand have to sum to 1!")
            return None

        pdf_hit = trunc_norm(z, z_min, z_max, z_true, sig_hit)
        pdf_short = trunc_exp(z, z_true, lambda_short)
        pdf_max = uniform(z, 0.99*z_max, z_max)
        pdf_rand = uniform(z, z_min, z_max)

        pdf = p_hit * pdf_hit + p_short * pdf_short + p_max * pdf_max + p_rand * pdf_rand
        return pdf

#mm_model = Measure_mixture_model(shapes='z_true,z_min,z_max,p_hit,p_short,p_max,p_rand,sig_hit,lambda_short')
mm_model = Measure_mixture_model()

z = np.linspace(-1, 11, 1000)
samples = mm_model.pdf(z)
plt.plot(z, samples)
plt.show()

rand_samples = mm_model.rvs(size=1000)
bins = np.linspace(-1, 11, 100)
plt.hist(rand_samples, bins)
plt.title("measure mixture model")
plt.xlabel("z: measurement")
plt.ylabel("p: relative frequency")
plt.show()

(1) and (2) are probably related. You are asking scipy to generate random samples based only on the density you provide.
I don't know exactly what scipy does internally, but I suspect it numerically integrates the density (pdf) to get the probability function (cdf) and then inverts it to map uniform samples to your distribution. This is numerically expensive and, as you experienced, error-prone.
To speed things up you can help scipy by implementing _rvs directly: draw a uniform variate to decide which sub-model of your mixture to sample from, then invoke the rvs of the selected sub-model. Do the same for other methods you may require.
Here are some tips on how to implement a vectorised rvs:
To batch-select sub-models, np.digitize should be good enough, since your number of sub-models is small. If possible, use rv_frozen instances for the sub-models; they are very convenient, but I seem to remember that you can't pass all the optional parameters to them, so you may have to handle those separately.
self._bins = np.cumsum([p_hit, p_short, p_max])
self._bins /= self._bins[-1] + p_rand

# one uniform draw per sample, mapped to a sub-model index
submodel = np.digitize(stats.uniform.rvs(size=size), self._bins)

result = np.empty(size)
for j, frozen in enumerate((frz_trunc_norm, frz_trunc_exp, frz_unif_1, frz_unif_2)):
    inds = np.where(submodel == j)[0]
    result[inds] = frozen.rvs(size=len(inds))
return result
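Putting the pieces together, here is a minimal hedged sketch of how the _rvs override might look inside the subclass. The frozen sub-models are assumed to be built in __init__ from the same constants used in _pdf, and the sketch assumes size is a plain integer; note that how _rvs receives size and random_state depends on your scipy version (newer versions pass them as arguments, older ones expose self._size and self._random_state).

class Measure_mixture_model(stats.rv_continuous):
    # _pdf as defined in the question would go here as well

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        z_true, z_min, z_max = 8, 0, 10
        sig_hit, lambda_short = 1.0, 0.5
        p_hit, p_short, p_max, p_rand = 0.7, 0.1, 0.1, 0.1
        a = (z_min - z_true) / sig_hit
        b = (z_max - z_true) / sig_hit
        # frozen sub-models, mirroring the pdf helpers above
        self._frozen = (
            stats.truncnorm(a, b, loc=z_true, scale=sig_hit),
            stats.truncexpon(b=z_true * lambda_short, scale=1.0 / lambda_short),
            stats.uniform(loc=0.99 * z_max, scale=0.01 * z_max),
            stats.uniform(loc=z_min, scale=z_max - z_min),
        )
        self._bins = np.cumsum([p_hit, p_short, p_max])
        self._bins /= self._bins[-1] + p_rand

    def _rvs(self, size=None, random_state=None):
        size = 1 if size is None else int(np.prod(size))
        # one uniform draw per sample decides which sub-model it comes from
        submodel = np.digitize(stats.uniform.rvs(size=size, random_state=random_state),
                               self._bins)
        result = np.empty(size)
        for j, frozen in enumerate(self._frozen):
            inds = np.where(submodel == j)[0]
            result[inds] = frozen.rvs(size=len(inds), random_state=random_state)
        return result

With _rvs in place, rvs() no longer has to numerically invert the cdf for every sample, which should remove both the slowness and the IntegrationWarning you see during sampling.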
Re (3) Here is what the scipy docs have to say.
A note on shapes: subclasses need not specify them explicitly. In this case, shapes will be automatically deduced from the signatures of the overridden methods (pdf, cdf etc). If, for some reason, you prefer to avoid relying on introspection, you can specify shapes explicitly as an argument to the instance constructor.
So the normal way is simply to add the shape parameters as extra arguments to your overridden methods (e.g. _pdf); scipy then deduces the shapes from the signature.
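As a hedged sketch of that (using a reduced, two-component version of your mixture for brevity), the extra arguments of _pdf become the distribution's shape parameters, which you then supply at call time:

class Measure_mixture_model(stats.rv_continuous):
    # z_true and sig_hit are deduced as shape parameters from this signature
    def _pdf(self, z, z_true, sig_hit):
        z_min, z_max = 0, 10
        p_hit, p_rand = 0.9, 0.1
        pdf_hit = trunc_norm(z, z_min, z_max, z_true, sig_hit)
        pdf_rand = uniform(z, z_min, z_max)
        return p_hit * pdf_hit + p_rand * pdf_rand

mm_model = Measure_mixture_model(name="mm")
# shape parameters are supplied when calling pdf()/rvs():
vals = mm_model.pdf(np.linspace(-1, 11, 5), 8, 1)
draws = mm_model.rvs(8, 1, size=10)  # still uses the generic (slow) sampler unless _rvs is overridden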

Related

Bayesian fit of cosine wave taking longer than expected

In a recent homework assignment, I was asked to perform a Bayesian fit over a set of data a and b using a Metropolis algorithm. The relationship between a and b is given by:
e(t) = e_0*cos(w*t)
w = 2 * pi
The Metropolis algorithm is (it works fine with other fits):
def metropolis(logP, args, v0, Nsteps, stepSize):
    vCur = v0
    logPcur = logP(vCur, *args)
    v = []
    Nattempts = 0
    for i in range(Nsteps):
        while True:
            # Propose step:
            vNext = vCur + stepSize*np.random.randn(*vCur.shape)
            logPnext = logP(vNext, *args)
            Nattempts += 1
            # Accept/reject step:
            Pratio = (1. if logPnext > logPcur else np.exp(logPnext - logPcur))
            if np.random.rand() < Pratio:
                vCur = vNext
                logPcur = logPnext
                v.append(vCur)
                break
    acceptRatio = Nsteps*(1./Nattempts)
    return np.array(v), acceptRatio
I tried to fit the cosine wave with this Bayesian approach, using the Metropolis algorithm above:
import numpy as np
import pandas as pd
import seaborn as sns

e_0 = -0.00155

def strain_t(e_0, t):
    return e_0*np.cos(2*np.pi*t)

data = pd.read_csv('stressStrain.csv')
t = np.array(data['t'])
e = strain_t(e_0, t)

def logfitstrain_t(params, t, e):
    e_0 = params[0]
    sigmaR = params[1]
    strainModel = strain_t(e_0, t)
    return np.sum(-0.5*((e - strainModel)/sigmaR)**2 - np.log(sigmaR))

params0 = np.array([-0.00155, np.std(t)])
params, accRatio = metropolis(logfitstrain_t, (t, e), params0, 1000, 0.042)
print('Acceptance ratio:', accRatio)

e0 = np.mean(params[0])
print('e0 =', e0)
e_t = e0*np.cos(2*np.pi*t)
sns.jointplot(t, e_t, kind='hex', color='purple')
The data in .csv looks like
There isn't any error message after I hit run, but it takes forever for Python to produce an output. What did I do wrong here?
Why it might "take forever"
Your algorithm is designed to run until it accepts a given number of proposals (1000 in the example). Thus, if it's running for a long time, you're likely rejecting a bunch of proposals. This can happen when the step size is too large, leading new proposals to end up in distant, low probability regions of the likelihood space. Try reducing your step size. This may require you to also increase the number of samples to ensure the posterior space becomes adequately explored.
A more serious concern
Because you only append accepted proposals to the chain v, you haven't actually implemented the Metropolis algorithm, and instead obtain a biased set of samples that will tend to overrepresent less likely regions of the posterior space. A true Metropolis implementation re-appends the previous proposal whenever the new proposal is rejected. You can still enforce a minimum number of accepted proposals, but you really must append something every time.
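A minimal sketch of that fix, assuming the same logP/args/stepSize interface as the function above: the chain records the current state on every iteration, whether or not the proposal is accepted.

def metropolis_fixed(logP, args, v0, Nsteps, stepSize):
    vCur = v0
    logPcur = logP(vCur, *args)
    v = []
    Naccept = 0
    for i in range(Nsteps):
        # Propose step:
        vNext = vCur + stepSize*np.random.randn(*vCur.shape)
        logPnext = logP(vNext, *args)
        # Accept/reject step (log-space equivalent of comparing to exp(logPnext - logPcur)):
        if np.log(np.random.rand()) < logPnext - logPcur:
            vCur, logPcur = vNext, logPnext
            Naccept += 1
        # Record the current state either way (this is the fix):
        v.append(vCur)
    return np.array(v), Naccept / Nsteps

Here the acceptance ratio is accepted proposals over total iterations, and the chain length equals Nsteps regardless of how many proposals were rejected.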

Precision Matlab and Python (numpy)

I'm converting a Matlab script to Python and I am getting different results on the order of 10**-4.
In matlab:
f_mean=f_mean+nanmean(f);
f = f - nanmean(f);
f_t = gradient(f);
f_tt = gradient(f_t);
if n_loop==1
    theta = atan2( sum(f.*f_tt), sum(f.^2) );
end
theta = -2.2011167e+03
In Python:
f_mean = f_mean + np.nanmean(vel)
vel = vel - np.nanmean(vel)
firstDerivative = np.gradient(vel)
secondDerivative = np.gradient(firstDerivative)
if numberLoop == 1:
    theta = np.arctan2(np.sum(vel * secondDerivative),
                       np.sum([vel**2]))
Although firstDerivative and secondDerivative give the same results in Python and Matlab, f_mean is slightly different: -0.0066412 (Matlab) and -0.0066414 (Python); and so is theta: -0.4126186 (M) and -0.4124718 (P). It is a small difference, but in the end it leads to different results in my scripts.
I know some people have asked about this kind of difference before, but always regarding std, which I understand, not regarding mean values. I wonder why it happens.
One possible source of the initial difference you describe (between means) could be numpy's use of pairwise summation which on large arrays will typically be appreciably more accurate than the naive method:
a = np.random.uniform(-1, 1, (10**6,))
a = np.r_[-a, a]
# so the sum should be zero
a.sum()
# 7.815970093361102e-14
# use cumsum to get naive summation:
a.cumsum()[-1]
# -1.3716805469243809e-11
Edit (thanks @sascha): for the last word, and as a "provably exact" reference, you could use math.fsum:
import math
math.fsum(a)
# 0.0
I don't have Matlab, so I can't check what it does internally.
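To see how much summation order alone can move a mean, here is a small hedged experiment on synthetic data (not your actual signal), reusing the cumsum/fsum trick from above:

import math
import numpy as np

f = np.random.uniform(-1, 1, 10**6)

pairwise_mean = f.mean()                 # numpy: pairwise summation
naive_mean = f.cumsum()[-1] / f.size     # naive left-to-right accumulation
exact_mean = math.fsum(f) / f.size       # "provably exact" reference

print(pairwise_mean - exact_mean)
print(naive_mean - exact_mean)

The printed differences show the spread you can expect purely from accumulation order.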

lmfit for exponential data returns linear function

I'm working on fitting muon lifetime data to a curve to extract the mean lifetime using the lmfit module. The general process I'm using is to bin the 13,000 data points into 10 bins using the histogram function, calculate the uncertainty as the square root of the counts in each bin (it's an exponential model), then use lmfit to determine the best fit along with the parameter values and uncertainties. However, graphing the output of the model.fit() method returns this graph, where the red line is the fit (and obviously not the correct fit). [Fit result output graph]
I've looked online and can't find a solution to this; I'd really appreciate some help figuring out what's going on. Here's the code.
import os
import numpy as np
import matplotlib.pyplot as plt
from numpy import sqrt, pi, exp, linspace
from lmfit import Model

class data():
    def __init__(self, file_name):
        times_dirty = sorted(np.genfromtxt(file_name, delimiter=' ', unpack=False)[:,0])
        self.times = []
        for i in range(len(times_dirty)):
            if times_dirty[i] < 40000:
                self.times.append(times_dirty[i])
        self.counts = []
        self.binBounds = []
        self.uncertainties = []
        self.means = []

    def binData(self, k):
        self.counts, self.binBounds = np.histogram(self.times, bins=k)
        self.binBounds = self.binBounds[:-1]

    def calcStats(self):
        if len(self.counts) == 0:
            print('Run binData function first')
        else:
            self.uncertainties = sqrt(self.counts)

    def plotData(self, fit):
        plt.errorbar(self.binBounds, self.counts, yerr=self.uncertainties, fmt='bo')
        plt.plot(self.binBounds, fit.init_fit, 'k--')
        plt.plot(self.binBounds, fit.best_fit, 'r')
        plt.show()

def decay(t, N, lamb, B):
    return N * lamb * exp(-lamb * t) + B

def main():
    muonEvents = data(r'C:\Users\Colt\Downloads\muon.data')
    muonEvents.binData(10)
    muonEvents.calcStats()

    mod = Model(decay)
    result = mod.fit(muonEvents.counts, t=muonEvents.binBounds, N=1, lamb=1, B=1)

    muonEvents.plotData(result)
    print(result.fit_report())
    print(len(muonEvents.times))

if __name__ == "__main__":
    main()
This might be a simple scaling problem. As a quick test, try dividing all raw data by a factor of 1000 (both X and Y) to see if changing the magnitude of the data has any effect.
Just to build on James Phillips' answer: I think the data you show in your graph imply values for N, lamb, and B that are very different from 1, 1, 1. Keep in mind that exp(-lamb*t) is essentially 0 for lamb = 1 and t > 100. So, if the algorithm starts at lamb=1 and varies it by a little bit to find a better value, it won't actually be able to see any difference in how well the model matches the data.
I would suggest starting with values that are more reasonable for the data you have, perhaps N=1.e6, lamb=1.e-4, and B=100.
As James suggested, having the variables take values on the order of 1 and putting in scale factors as necessary often helps in getting numerically stable solutions.
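A hedged sketch of that suggestion, assuming the same decay model and data object as in the question; the starting values and bounds are illustrative guesses, not fitted numbers:

from lmfit import Model

mod = Model(decay)
params = mod.make_params(N=1.e6, lamb=1.e-4, B=100)
# keep the optimizer in a physically sensible region (optional)
params['lamb'].set(min=0)
params['B'].set(min=0)

result = mod.fit(muonEvents.counts, params, t=muonEvents.binBounds)
print(result.fit_report())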

How to write a custom Deterministic or Stochastic in pymc3 with theano.op?

I'm doing some pymc3 and I would like to create custom Stochastics; however, there doesn't seem to be a lot of documentation about how it's done. I know how to use the as_op way, but apparently that makes it impossible to use the NUTS sampler, in which case I don't see the advantage of pymc3 over pymc.
The tutorial mentions that it can be done by inheriting from theano.Op. But can anyone show me how that would work (I'm still getting started with theano)? I have two Stochastics that I want to define.
The first one should be easier: it's an N-dimensional vector F that has only constant parent variables:
with myModel:
    F = DensityDist('F', lambda value: pymc.skew_normal_like(value, F_mu_array, F_std_array, F_a_array), shape=N)
I want a skew normal distribution, which doesn't seem to be implemented in pymc3 yet, so I just imported the pymc2 version. Unfortunately, F_mu_array, F_std_array, F_a_array and F are all N-dimensional vectors, and the lambda does not seem to work with an N-dimensional list value.
Firstly, is there a way to make the lambda input an N-dimensional array? If not, I guess I would need to define the Stochastic F directly, and this is where I presume I need theano.Op to make it work.
The second example is a more complicated function of other Stochastics. Here is how I want to define it (incorrectly at the moment):
with myModel:
    ln2_var = Uniform('ln2_var', lower=-10, upper=4)
    sigma = Deterministic('sigma', exp(0.5*ln2_var))
    A = Uniform('A', lower=-10, upper=10, shape=5)
    C = Uniform('C', lower=0.0, upper=2.0, shape=5)
    sw = Normal('sw', mu=5.5, sd=0.5, shape=5)

    # F from before
    F = DensityDist('F', lambda value: skew_normal_like(value, F_mu_array, F_std_array, F_a_array), shape=N)
    M = Normal('M', mu=M_obs_array, sd=M_stdev, shape=N)

    # Radius forward-model (THIS IS THE STOCHASTIC IN QUESTION)
    R = Normal('R', mu=R_forward(F, M, A, C, sw, N), sd=sigma, shape=N)
Where the function R_forward(F,M,A,C,sw,N) is naively defined as:
from theano.tensor import lt, le, eq, gt, ge

def R_forward(Flux, Mass, A, C, sw, num):
    for i in range(num):
        if lt(Mass[i], 0.2):
            if lt(Flux[i], sw[0]):
                muR = C[0]
            else:
                muR = A[0]*log10(Flux[i]) + C[0] - A[0]*log10(sw[0])
        elif (le(0.2, Mass[i]) or le(Mass[i], 0.5)):
            if lt(Flux[i], sw[1]):
                muR = C[1]
            else:
                muR = A[1]*log10(Flux[i]) + C[1] - A[1]*log10(sw[1])
        elif (le(0.5, Mass[i]) or le(Mass[i], 1.5)):
            if lt(Flux[i], sw[2]):
                muR = C[2]
            else:
                muR = A[2]*log10(Flux[i]) + C[2] - A[2]*log10(sw[2])
        elif (le(1.5, Mass[i]) or le(Mass[i], 3.5)):
            if lt(Flux[i], sw[3]):
                muR = C[3]
            else:
                muR = A[3]*log10(Flux[i]) + C[3] - A[3]*log10(sw[3])
        else:
            if lt(Flux[i], sw[4]):
                muR = C[4]
            else:
                muR = A[4]*log10(Flux[i]) + C[4] - A[4]*log10(sw[4])
    return muR
This presumably won't work of course. I can see how I would use as_op, but I want to preserve the NUTS sampling.
I realize this is a bit late now, but I thought I'd answer the question (rather vaguely) anyways.
If you want to define a stochastic function (e.g. a probability distribution), then you need to do a couple of things:
First, define a subclass of either Discrete (pymc3.distributions.Discrete) or Continuous, which has at least the method logp, which returns the log-likelihood of your stochastic. If you define this as a simple symbolic expression (x+1), I believe you do not need to take care of any gradients (but don't quote me on this; see the documentation). I'll get to more complicated cases below.
In the unfortunate case that you need to do anything more complex, as in your second example (pymc3 now has a skew normal distribution implemented, by the way), you need to define the operations required for it (used in the logp method) as a Theano Op. If you need no derivatives, then as_op does the job, but as you said, gradients are kind of the idea of pymc3.
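For the simple case, a minimal hedged sketch (Theano-era pymc3 API; the toy distribution, its lam parameter, and the synthetic data are made up for illustration): the logp is a plain symbolic expression, so Theano can differentiate it and NUTS works without a custom Op.

import numpy as np
import pymc3 as pm
import theano.tensor as tt

class MyExpon(pm.distributions.Continuous):
    """Toy exponential-like distribution defined only through logp
    (support constraints omitted for brevity)."""
    def __init__(self, lam=1.0, *args, **kwargs):
        super(MyExpon, self).__init__(*args, **kwargs)
        self.lam = tt.as_tensor_variable(lam)

    def logp(self, value):
        # purely symbolic expression, so gradients come for free
        return tt.log(self.lam) - self.lam * value

with pm.Model():
    lam = pm.HalfNormal('lam', sd=1.0)
    obs = MyExpon('obs', lam=lam, observed=np.random.exponential(1.0, size=100))
    trace = pm.sample(1000)  # NUTS is selected automatically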
This is where it gets complicated. If you want to use NUTS (or need gradients for whatever reason), then you need to implement your operation used in logp as a subclass of theano.gof.Op. Your new op class (let's call it just Op from now on) will need two or three methods at least. The first one defines inputs/outputs to the Op (check the Op documentation). The perform() method (or variants you might choose) is the one that does the operation you want (your R_forward function, for example). This can be done in pure python, if you so wish. The third method, grad(), is where you define the gradient of your perform()'s output wrt the inputs. The actual output to grad() is a bit different, but not a big deal.
And it is in grad() that using Theano pays off. If you define your entire perform() in Theano, then it might be that you can easily use automatic differentiation (theano.tensor.grad or theano.tensor.jacobian) to do the work for you (see the example below). However, this is not necessarily going to be easy.
In your second example, it would mean implementing your R_forward function in Theano, which could be complicated.
Here I include a somewhat minimal example of an Op that I created while learning to do these things.
import numpy as np
import theano as th

def my_th_fun():
    """ Some needed auxiliary functions.
    """
    X = th.tensor.vector('X')
    SCALE = th.tensor.scalar('SCALE')
    X.tag.test_value = np.array([1, 2, 3, 4])
    SCALE.tag.test_value = 5.

    Scale, upd_sm_X = th.scan(lambda x, scale: scale*(scale + x),
                              sequences=[X],
                              outputs_info=[SCALE])
    fun_Scale = th.function(inputs=[X, SCALE], outputs=Scale)
    D_out_d_scale = th.tensor.grad(Scale[-1], SCALE)
    fun_d_out_d_scale = th.function([X, SCALE], D_out_d_scale)
    return Scale, fun_Scale, D_out_d_scale, fun_d_out_d_scale

class myOp(th.gof.Op):
    """ Op subclass with a somewhat silly computation. It uses
    th.scan, and th.tensor.grad is used to calculate the gradient
    automagically in the grad() method.
    """
    __props__ = ()
    itypes = [th.tensor.dscalar]
    otypes = [th.tensor.dvector]

    def __init__(self, *args, **kwargs):
        super(myOp, self).__init__(*args, **kwargs)
        self.base_dist = np.arange(1, 5)
        (self.UPD_scale, self.fun_scale,
         self.D_out_d_scale, self.fun_d_out_d_scale) = my_th_fun()

    def perform(self, node, inputs, outputs):
        scale = inputs[0]
        updated_scale = self.fun_scale(self.base_dist, scale)
        out1 = self.base_dist[0:2].sum()
        out2 = self.base_dist[2:4].sum()
        maxout = np.max([out1, out2])
        exp_out1 = np.exp(updated_scale[-1]*(out1 - maxout))
        exp_out2 = np.exp(updated_scale[-1]*(out2 - maxout))
        norm_const = exp_out1 + exp_out2
        outputs[0][0] = np.array([exp_out1/norm_const, exp_out2/norm_const])

    def grad(self, inputs, output_gradients):  # working!
        """ Calculates the gradient of the output of the Op wrt
        the input. As a simple example, the input is scalar.
        Notice how the output is actually the gradient multiplied
        by the output_gradients, which is an input provided by
        theano when calculating gradients.
        """
        scale = inputs[0]
        X = th.tensor.as_tensor(self.base_dist)
        # Do I need to recalculate all this or can I assume that perform() has
        # always been called before grad() and thus take it from there?
        # In any case, this is a small enough example to recalculate quickly:
        all_scale, _ = th.scan(lambda x, scale_1: scale_1*(scale_1 + x),
                               sequences=[X],
                               outputs_info=[scale])
        updated_scale = all_scale[-1]
        out1 = self.base_dist[0:1].sum()
        out2 = self.base_dist[2:3].sum()
        maxout = np.max([out1, out2])
        exp_out1 = th.tensor.exp(updated_scale*(out1 - maxout))
        exp_out2 = th.tensor.exp(updated_scale*(out2 - maxout))
        norm_const = exp_out1 + exp_out2

        d_S_d_scale = th.tensor.grad(all_scale[-1], scale)
        Jac1 = (-(out1 - out2)*d_S_d_scale*
                th.tensor.exp(updated_scale*(out1 + out2 - 2*maxout))/(norm_const**2))
        Jac2 = -Jac1
        return Jac1*output_gradients[0][0] + Jac2*output_gradients[0][1],
This Op can then be used inside the logp() method of a stochastic in pymc3:
import pymc3 as pm

class myDist(pm.distributions.Discrete):
    def __init__(self, invT, *args, **kwargs):
        super(myDist, self).__init__(*args, **kwargs)
        self.invT = invT
        self.myOp = myOp()

    def logp(self, value):
        return self.myOp(self.invT)[value]
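And, for completeness, a hedged sketch of how such a distribution might then sit in a model; the prior on invT and the observed value are placeholders:

with pm.Model():
    invT = pm.HalfNormal('invT', sd=1.0)
    # logp (and its gradient, via myOp.grad) flows through the custom Op,
    # so gradient-based samplers like NUTS remain available for invT
    obs = myDist('obs', invT=invT, observed=0)
    trace = pm.sample(1000)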
I hope it helps any (hopeless) pymc3/theano newbie out there.

How to define General deterministic function in PyMC

In my model, I need to obtain the value of my deterministic variable from a set of parent variables using a complicated python function.
Is it possible to do that?
The following pyMC3 code shows what I am trying to do in a simplified case.
import numpy as np
import pymc as pm

# Predefine values on a two-parameter grid (x, w) for a set of i values (1, 2, 3)
idata = np.array([1, 2, 3])
size = 20
gridlength = size*size
Grid = np.empty((gridlength, 2 + len(idata)))
for x in range(size):
    for w in range(size):
        # A silly version of my real model evaluated on the grid.
        Grid[x*size + w, :] = np.array([x, w] + [(x**i + w**i) for i in idata])

# A function to find the nearest value in Grid and return its product with a third variable z
def FindFromGrid(x, w, z):
    return Grid[int(x)*size + int(w), 2:] * z

# Generate fake Y data with error
yerror = np.random.normal(loc=0.0, scale=9.0, size=len(idata))
ydata = Grid[16*size + 12, 2:]*3.6 + yerror  # i.e. true x=16, w=12 and z=3.6

with pm.Model() as model:
    # Priors
    x = pm.Uniform('x', lower=0, upper=size)
    w = pm.Uniform('w', lower=0, upper=size)
    z = pm.Uniform('z', lower=-5, upper=10)

    # Expected value
    y_hat = pm.Deterministic('y_hat', FindFromGrid(x, w, z))

    # Data likelihood
    ysigmas = np.ones(len(idata))*9.0
    y_like = pm.Normal('y_like', mu=y_hat, sd=ysigmas, observed=ydata)

    # Inference...
    start = pm.find_MAP()        # Find starting value by optimization
    step = pm.NUTS(state=start)  # Instantiate MCMC sampling algorithm
    trace = pm.sample(1000, step, start=start, progressbar=False)  # draw 1000 posterior samples using NUTS

print('The trace plot')
fig = pm.traceplot(trace, lines={'x': 16, 'w': 12, 'z': 3.6})
fig.show()
When I run this code, I get an error at the y_hat stage, because the int() call inside the FindFromGrid(x, w, z) function needs an integer, not a FreeRV.
Finding y_hat from a pre-calculated grid is important because my real model for y_hat does not have an analytical form.
I earlier tried to use OpenBUGS, but found out that this is not possible in OpenBUGS. Is it possible in PyMC?
Update
Based on an example on the pyMC GitHub page, I found that I need to add the following decorator to my FindFromGrid(x, w, z) function:
@pm.theano.compile.ops.as_op(itypes=[t.dscalar, t.dscalar, t.dscalar], otypes=[t.dvector])
This seems to solve the above-mentioned issue, but I cannot use the NUTS sampler anymore since it needs gradients.
Metropolis does not seem to converge.
Which step method should I use in a scenario like this?
You found the correct solution with as_op.
Regarding the convergence: are you using pm.Metropolis() instead of pm.NUTS() by any chance? One reason this might not converge is that Metropolis() by default samples in the joint space, while Gibbs-within-Metropolis is often more effective (and this was the default in pymc2). Having said that, I just merged this: https://github.com/pymc-devs/pymc/pull/587 which changes the default behavior of the Metropolis and Slice samplers to be non-blocked (i.e. Gibbs-style). Other samplers like NUTS that are primarily designed to sample the joint space still default to blocked. You can always set this explicitly with the kwarg blocked=True.
Anyway, update pymc to the most recent master and see if convergence improves. If not, try the Slice sampler.
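A hedged sketch of trying those suggestions with the model above (keyword names follow the pymc3 API of that era and may differ in current releases):

with model:
    # non-blocked Metropolis, i.e. Gibbs-style one-variable-at-a-time updates
    step = pm.Metropolis(blocked=False)
    # or, alternatively, try the Slice sampler:
    # step = pm.Slice()
    trace = pm.sample(5000, step, start=start, progressbar=False)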
