Inverse probability density function - python

What do I have to use to figure out the inverse of the probability density function for a normal distribution? I'm using scipy to evaluate the normal distribution's probability density function:
from scipy.stats import norm
norm.pdf(1000, loc=1040, scale=210)
0.0018655737107410499
How can I go back from the density 0.0018 to the value 1000 in the given normal distribution?

There can be no 1:1 mapping from probability density to quantile.
Because the normal PDF is symmetric and unimodal (its log is quadratic in x), there can be two, one, or zero quantiles that have a particular probability density.
Update
It's actually not that hard to find the roots analytically. The PDF of a normal distribution is given by:
pd = 1 / (sigma * sqrt(2 * pi)) * exp(-(x - mu)**2 / (2 * sigma**2))
With a bit of rearrangement we get:
(x - mu)**2 = -2 * sigma**2 * log(pd * sigma * sqrt(2 * pi))
If the right-hand side (discrim in the code below) is < 0, there are no real roots. If it equals zero, there is a single root (at x = mu), and if it is > 0 there are two roots.
To put it all together into a function:
import numpy as np

def get_quantiles(pd, mu, sigma):
    discrim = -2 * sigma**2 * np.log(pd * sigma * np.sqrt(2 * np.pi))
    # no real roots
    if discrim < 0:
        return None
    # one root, where x == mu
    elif discrim == 0:
        return mu
    # two roots
    else:
        return mu - np.sqrt(discrim), mu + np.sqrt(discrim)
This gives the desired quantile(s), to within rounding error:
from scipy.stats import norm
pd = norm.pdf(1000, loc=1040, scale=210)
print(get_quantiles(pd, 1040, 210))
# (1000.0000000000001, 1079.9999999999998)

import scipy.stats as stats
import scipy.optimize as optimize
norm = stats.norm(loc=1040, scale=210)
y = norm.pdf(1000)
print(y)
# 0.00186557371074
print(optimize.fsolve(lambda x:norm.pdf(x)-y, norm.mean()-norm.std()))
# [ 1000.]
print(optimize.fsolve(lambda x:norm.pdf(x)-y, norm.mean()+norm.std()))
# [ 1080.]
There exist distributions which attain any value an infinite number of times. (For example, the simple function with value 1 on an infinite sequence of intervals with lengths 1/2, 1/4, 1/8, etc. attains the value 1 an infinite number of times. And it is a distribution since 1/2 + 1/4 + 1/8 + ... = 1)
So the use of fsolve above is not guaranteed to find all values of x where pdf(x) equals a certain value, but it may help you find some root.
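If you know brackets that isolate each root (for a normal density there is at most one root on each side of the mean), a bracketing root finder such as scipy.optimize.brentq is a more robust alternative to fsolve. A minimal sketch of that idea, reusing the numbers from the question:
from scipy import optimize, stats

norm = stats.norm(loc=1040, scale=210)
y = norm.pdf(1000)

# brentq needs a sign change over each bracket: the density is below y far out
# in the tails and above y at the mean, so these brackets are safe here.
f = lambda x: norm.pdf(x) - y
left = optimize.brentq(f, norm.mean() - 10 * norm.std(), norm.mean())
right = optimize.brentq(f, norm.mean(), norm.mean() + 10 * norm.std())
print(left, right)  # approximately 1000.0 and 1080.0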

Related

RuntimeWarning in scipy.optimize.minimize when trying to MLE fit Weibull distribution

I have the following function to_minimize which should be equal to the log-likelihood of a dataset for a Weibull distribution, truncated from the left at d.
import numpy as np
from scipy.optimize import minimize

def to_minimize(args, data, d=1):
    theta, tau = args
    n = len(data)
    if tau <= 0 or theta <= 0:
        pass
    term1 = n * (np.log(tau) - tau * np.log(theta) - (-d / theta) ** tau)
    term2 = 0
    for x in data:
        term2 += (tau - 1) * np.log(x) + (-x / theta) ** tau
    return term1 + term2

data = np.random.rand(100)
weibull = minimize(lambda args: -to_minimize(args, data),
                   x0=np.array((1., 1.)), bounds=np.array([(1e-15, 10), (1e-15, 10)]))
As far as I can tell, the only thing that should cause a warning of the form
RuntimeWarning: invalid value encountered in double_scalars
is tau or theta being 0. But the bounds on those parameters are strictly above 0, so why does my optimization routine crash?
After calling np.seterr(all='raise') and debugging some more, I noticed that I had an error in my calculations. The - in the exponential function has to be applied after the power. Otherwise it will try to take the root of a negative number which won't work for obvious reasons.
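Under that fix, a corrected version of the objective might look like the sketch below (my reconstruction, not the original poster's code; shifting the sample above the truncation point d = 1 is purely for illustration):
import numpy as np
from scipy.optimize import minimize

def to_minimize(args, data, d=1):
    # log-likelihood of a Weibull(theta, tau) sample left-truncated at d
    theta, tau = args
    n = len(data)
    # the minus sign is applied after raising to the power tau, not inside the base
    term1 = n * (np.log(tau) - tau * np.log(theta) + (d / theta) ** tau)
    term2 = np.sum((tau - 1) * np.log(data) - (data / theta) ** tau)
    return term1 + term2

data = np.random.rand(100) + 1.0  # illustrative sample lying above the truncation point
weibull = minimize(lambda args: -to_minimize(args, data),
                   x0=np.array((1.0, 1.0)), bounds=[(1e-15, 10), (1e-15, 10)])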

How to create Non-Central Student’s T distribution and what priors to use with the distribution?

I have been working with the following link,
Fitting empirical distribution to theoretical ones with Scipy (Python)?
I have been applying the code from the link to my data and found that the best-fitting distribution for my data is the non-central Student's t distribution. I couldn't find that distribution in the pymc3 package, so I decided to look at scipy to understand how the distribution is formed. I created a custom distribution and I have a few questions:
Is my approach to creating the distribution right?
How can I use the custom distribution in models?
Regarding the prior distributions, do I use the same setup as for normal-distribution priors (mu and sigma), combined with half-normal priors for the degrees of freedom and the noncentrality value?
My custom distribution:
import numpy as np
import theano.tensor as tt
from scipy import stats
from scipy.special import hyp1f1, nctdtr
import warnings
from pymc3.theanof import floatX
from pymc3.distributions.dist_math import bound, gammaln
from pymc3.distributions.continuous import assert_negative_support, get_tau_sigma
from pymc3.distributions.distribution import Continuous, draw_values, generate_samples
class NonCentralStudentT(Continuous):
    """
    Parameters
    ----------
    nu: float
        Degrees of freedom, also known as normality parameter (nu > 0).
    mu: float
        Location parameter.
    sigma: float
        Scale parameter (sigma > 0). Converges to the standard deviation as nu
        increases. (only required if lam is not specified)
    lam: float
        Scale parameter (lam > 0). Converges to the precision as nu increases.
        (only required if sigma is not specified)
    """

    def __init__(self, nu, nc, mu=0, lam=None, sigma=None, sd=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if sd is not None:
            sigma = sd
            warnings.warn("sd is deprecated, use sigma instead", DeprecationWarning)
        self.nu = nu = tt.as_tensor_variable(floatX(nu))
        self.nc = nc = tt.as_tensor_variable(floatX(nc))
        lam, sigma = get_tau_sigma(tau=lam, sigma=sigma)
        self.lam = lam = tt.as_tensor_variable(lam)
        self.sigma = self.sd = sigma = tt.as_tensor_variable(sigma)
        self.mean = self.median = self.mode = self.mu = mu = tt.as_tensor_variable(mu)
        self.variance = tt.switch((nu > 2) * 1, (1 / self.lam) * (nu / (nu - 2)), np.inf)

        assert_negative_support(lam, 'lam (sigma)', 'NonCentralStudentT')
        assert_negative_support(nu, 'nu', 'NonCentralStudentT')
        assert_negative_support(nc, 'nc', 'NonCentralStudentT')

    def random(self, point=None, size=None):
        """
        Draw random values from Non-Central Student's T distribution.

        Parameters
        ----------
        point: dict, optional
            Dict of variable values on which random values are to be
            conditioned (uses default point if not specified).
        size: int, optional
            Desired size of random sample (returns one sample if not
            specified).

        Returns
        -------
        array
        """
        nu, nc, mu, lam = draw_values([self.nu, self.nc, self.mu, self.lam],
                                      point=point, size=size)
        return generate_samples(stats.nct.rvs, nu, nc, loc=mu, scale=lam ** -0.5,
                                dist_shape=self.shape, size=size)

    def logp(self, value):
        """
        Calculate log-probability of Non-Central Student's T distribution at specified value.

        Parameters
        ----------
        value: numeric
            Value(s) for which log-probability is calculated. If the log probabilities
            for multiple values are desired the values must be provided in a numpy
            array or theano tensor.

        Returns
        -------
        TensorVariable
        """
        nu = self.nu
        nc = self.nc
        mu = self.mu
        lam = self.lam

        n = nu * 1.0
        nc = nc * 1.0
        x2 = value * value
        ncx2 = nc * nc * x2
        fac1 = n + x2
        trm1 = n / 2. * tt.log(n) + gammaln(n + 1)
        trm1 -= n * tt.log(2) + nc * nc / 2. + (n / 2.) * tt.log(fac1) + gammaln(n / 2.)
        Px = tt.exp(trm1)
        valF = ncx2 / (2 * fac1)
        trm1 = tt.sqrt(2) * nc * value * hyp1f1(n / 2 + 1, 1.5, valF)
        trm1 /= np.asarray(fac1 * tt.gamma((n + 1) / 2))
        trm2 = hyp1f1((n + 1) / 2, 0.5, valF)
        trm2 /= np.asarray(np.sqrt(fac1) * tt.gamma(n / 2 + 1))
        Px *= trm1 + trm2
        return bound(Px, lam > 0, nu > 0, nc > 0)

    def logcdf(self, value):
        """
        Compute the log of the cumulative distribution function for Non-Central
        Student's T distribution at the specified value.

        Parameters
        ----------
        value: numeric
            Value(s) for which log CDF is calculated. If the log CDF for multiple
            values are desired the values must be provided in a numpy array or theano tensor.

        Returns
        -------
        TensorVariable
        """
        nu = self.nu
        nc = self.nc
        return nctdtr(nu, nc, value)
My Custom model:
with pm.Model() as model:
    # Prior Distributions for unknown model parameters:
    mu = pm.Normal('mu', 0, 10)
    sigma = pm.Normal('sigma', 0, 10)
    nc = pm.HalfNormal('nc', sigma=10)
    nu = pm.HalfNormal('nu', sigma=1)

    # Observed data comes from a likelihood (sampling) distribution
    # => (input custom distribution here):
    observed_data = pm.Beta('observed_data', alpha=alpha, beta=beta, observed=data)

    # draw 5000 posterior samples
    trace = pm.sample(draws=5000, tune=2000, chains=3, cores=1)

    # Obtaining posterior predictive samples:
    post_pred = pm.sample_posterior_predictive(trace, samples=3000)

print(post_pred['observed_data'].shape)
print('\nSummary: ')
print(pm.stats.summary(data=trace))
print(pm.stats.summary(data=post_pred))
Edit 1:
I redesigned the custom model to include the custom distribution. However, I keep getting errors from the equations used to compute the likelihood, or sometimes Theano locks up and the code just freezes. Find my code below:
with pm.Model() as model:
    # Prior Distributions for unknown model parameters:
    mu = pm.Normal('mu', mu=0, sigma=1)
    sd = pm.HalfNormal('sd', sigma=1)
    nc = pm.HalfNormal('nc', sigma=10)
    nu = pm.HalfNormal('nu', sigma=1)

    # Custom distribution:
    # observed_data = pm.DensityDist('observed_data', NonCentralStudentT, observed=data_list)

    # Observed data comes from a likelihood (sampling) distribution:
    observed_data = NonCentralStudentT('observed_data', mu=mu, sd=sd, nc=nc, nu=nu,
                                       observed=data_list)

    # draw 5000 posterior samples
    trace_S = pm.sample(draws=5000, tune=2000, chains=3, cores=1)

    # Obtaining posterior predictive samples:
    post_pred_S = pm.sample_posterior_predictive(trace_S, samples=3000)

print(post_pred_S['observed_data'].shape)
print('\nSummary: ')
print(pm.stats.summary(data=trace_S))
print(pm.stats.summary(data=post_pred_S))
Edit 2:
I am looking online for a way to convert the function to Theano; the only definition I found is in the following GitHub link: hyp1f1 function GitHub.
Will this be enough to convert the function to Theano?
In addition, is it okay to use NumPy arrays with Theano?
Also, I thought of another way, but I am not sure whether it can be implemented. I looked into the nct function in scipy, and the documentation says the following:
If Y is a standard normal random variable and V is an independent chi-square random variable (chi2) with k degrees of freedom, then
X = (Y + c) / sqrt(V / k)
has a non-central Student's t distribution on the real line. The degrees of freedom parameter k (denoted df in the implementation) satisfies k > 0 and the noncentrality parameter c (denoted nc in the implementation) is a real number.
The probability density above is defined in the "standardized" form. To shift and/or scale the distribution use the loc and scale parameters. Specifically, nct.pdf(x, df, nc, loc, scale) is identically equivalent to nct.pdf(y, df, nc) / scale with y = (x - loc) / scale.
So, I thought of using only the normal and chi2 random-variable parts of their code as priors, together with the degrees-of-freedom variable mentioned earlier, plugged into the equation from SciPy. Will that be enough to get the distribution?
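As a rough illustration (this sketch is mine, not from the original post, and it only builds a non-central-t-distributed quantity as a derived latent variable; it is not a drop-in likelihood for observed data), the SciPy construction could be written in pymc3 roughly as:
import pymc3 as pm

with pm.Model() as model:
    nu = pm.HalfNormal('nu', sigma=1)    # degrees of freedom k
    nc = pm.HalfNormal('nc', sigma=10)   # noncentrality c
    y = pm.Normal('y', mu=0, sigma=1)    # standard normal Y
    v = pm.ChiSquared('v', nu=nu)        # independent chi-square V with k dof
    # X = (Y + c) / sqrt(V / k) then follows a non-central Student's t
    x = pm.Deterministic('x', (y + nc) / pm.math.sqrt(v / nu))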
Edit 3:
I managed to run the code from the link about fitting an empirical distribution and found that the second-best fit was the Student's t distribution, so I will use that. Thank you for your help. I just have a side question: I ran my model with the Student's t distribution, but I got these warnings:
There were 52 divergences after tuning. Increase target_accept or reparameterize.
The acceptance probability does not match the target. It is 0.7037574708196309, but should be close to 0.8. Try to increase the number of tuning steps.
The number of effective samples is smaller than 10% for some parameters.
I am confused by these warnings. Do you have any idea what they mean? I know they won't stop my code from running, but can I reduce the divergences? And regarding the effective samples, do I need to increase the number of draws in the sampling call?

pdf estimation with scipy.stats

Say I compute the density of Beta(4,8):
from scipy.stats import beta
rv = beta(4, 8)
x = np.linspace(start=0, stop=1, num=200)
my_pdf = rv.pdf(x)
Why does the integral of the pdf not equal one?
> my_pdf.sum()
199.00000139548044
The integral over the pdf is one. You can see this by using numerical integration from scipy
>>> from scipy.integrate import quad
>>> quad(rv.pdf, 0, 1)
(0.9999999999999999, 1.1102230246251564e-14)
or by writing your own ad-hoc integration (with a trapezoidal rule in this example)
>>> x = numpy.linspace(start=0, stop=1, num=201)
>>> (0.5 * rv.pdf(x[0]) + rv.pdf(x[1:-1]).sum() + 0.5 * rv.pdf(x[-1])) / 200.0
1.0000000068732813
rv.pdf returns the value of the pdf at each value of x. It doesn't sum to one because you aren't actually computing an integral. If you want to do that, you need to divide your sum by the number of intervals, which is len(x) - 1 = 199. That gives a result very close to 1.
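For instance (a small sketch using the same Beta(4, 8) example; np.trapz is just one convenient way to do the numerical integration):
import numpy as np
from scipy.stats import beta

rv = beta(4, 8)
x = np.linspace(start=0, stop=1, num=200)
my_pdf = rv.pdf(x)

print(my_pdf.sum() / (len(x) - 1))  # crude Riemann-style sum, close to 1
print(np.trapz(my_pdf, x))          # trapezoidal rule, also close to 1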

python scipy.stats.powerlaw negative exponent

I want to supply a negative exponent for the scipy.stats.powerlaw routine, e.g. a=-1.5, in order to draw random samples:
"""
powerlaw.pdf(x, a) = a * x**(a-1)
"""
from scipy.stats import powerlaw
R = powerlaw.rvs(a, size=100)
Why is a > 0 required, how can I supply a negative a in order to generate the random samples, and how can I supply a normalization coefficient/transform, i.e.
PDF(x,C,a) = C * x**a
The documentation is here
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.powerlaw.html
Thanks!
EDIT: I should add that I'm trying to replicate IDL's RANDOMP function:
http://idlastro.gsfc.nasa.gov/ftp/pro/math/randomp.pro
A PDF, integrated over its domain, must equal one. In other words, the area under a probability density function's curve must equal one.
In [36]: import scipy.integrate as integrate
In [40]: y, err = integrate.quad(lambda x: 0.5*x**(-0.5), 0, 1)
In [41]: y
Out[41]: 0.9999999999999998 # The integral is close to 1
The powerlaw density function has a domain from 0 <= x <= 1. On this domain, the integral of x**b is finite for any b > -1. When b is smaller, x**b blows up too rapidly near x = 0. So it is not a valid probability density function when b <= -1.
In [38]: integrate.quad(lambda x: x**(-1), 0, 1)
UserWarning: The maximum number of subdivisions (50) has been achieved...
# The integral blows up
Thus for x**(a-1), a must satisfy a-1 > -1 or equivalently, a > 0.
The first constant a in a * x**(a-1) is the normalizing constant which makes the integral of a * x**(a-1) over the domain [0,1] equal to 1. So you don't get to choose this constant independent of a.
Now if you change the domain to be a measurable distance away from 0, then yes, you could define a PDF of the form C * x**a for negative a. But you'd have to state what domain you want, and I don't think there is (yet) a PDF available in scipy.stats for this.
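As an illustration of that last point, here is a sketch of my own (not an existing scipy.stats distribution; the domain [1, 100] and the class/parameter names are made up, and sampling goes through scipy's slow generic numerical machinery):
import numpy as np
from scipy import stats

class BoundedPower(stats.rv_continuous):
    """pdf proportional to x**expo on the finite domain [self.a, self.b]."""
    def _argcheck(self, expo):
        # allow negative exponents; only expo == -1 breaks the normalization below
        return expo != -1.0
    def _pdf(self, x, expo):
        c = (expo + 1.0) / (self.b ** (expo + 1.0) - self.a ** (expo + 1.0))
        return c * x ** expo

dist = BoundedPower(a=1.0, b=100.0, name='bounded_power')
samples = dist.rvs(expo=-1.5, size=5)  # a negative exponent is fine on this domain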
The Python package powerlaw can do this. Consider for a>1 a power law distribution with probability density function
f(x) = c * x^(-a)
for x > x_min and f(x) = 0 otherwise. Here c is a normalization factor and is determined as
c = (a-1) * x_min^(a-1).
In the example below it is a = 1.5 and x_min = 1.0 and comparing the probability density function estimated from the random sample with the PDF from the expression above gives the expected result.
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as pl
import numpy as np
import powerlaw
a, xmin = 1.5, 1.0
N = 10000
# generates random variates of power law distribution
vrs = powerlaw.Power_Law(xmin=xmin, parameters=[a]).generate_random(N)
# plotting the PDF estimated from variates
bin_min, bin_max = np.min(vrs), np.max(vrs)
bins = 10**(np.linspace(np.log10(bin_min), np.log10(bin_max), 100))
counts, edges = np.histogram(vrs, bins, density=True)
centers = (edges[1:] + edges[:-1])/2.
# plotting the expected PDF
xs = np.linspace(bin_min, bin_max, 100000)
pl.plot(xs, [(a-1)*xmin**(a-1)*x**(-a) for x in xs], color='red')
pl.plot(centers, counts, '.')
pl.xscale('log')
pl.yscale('log')
pl.savefig('powerlaw_variates.png')
This saves a log-log plot (powerlaw_variates.png) in which the PDF estimated from the sample lies on top of the analytic PDF, as expected.
If r is a uniform random deviate U(0,1), then x in the following expression is a power-law distributed random deviate:
x = xmin * (1-r) ** (-1/(alpha-1))
where xmin is the smallest (positive) value above which the power-law distribution holds, and alpha is the exponent of the distribution.
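In code, this inverse-transform step is just a couple of lines (a sketch; the exponent alpha = 2.5 and cutoff xmin = 1.0 are arbitrary choices for illustration):
import numpy as np

alpha, xmin = 2.5, 1.0
r = np.random.uniform(0.0, 1.0, size=10000)
x = xmin * (1.0 - r) ** (-1.0 / (alpha - 1.0))  # power-law distributed samples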
If you want to generate a power-law distribution, you can use inverse transform sampling (Wolfram): generate a random number y uniformly on [0, 1] and apply the inverse of the CDF. In this case, the probability density function is:
p(k) = k^(-gamma)
and y is uniform between 0 and 1:
y ~ U(0,1)
import numpy as np

def power_law(k_min, k_max, y, gamma):
    return ((k_max**(-gamma + 1) - k_min**(-gamma + 1)) * y + k_min**(-gamma + 1.0))**(1.0 / (-gamma + 1.0))
Now to generate a distribution, you just have to create an array
nodes = 1000
scale_free_distribution = np.zeros(nodes, float)
k_min = 1.0
k_max = 100 * k_min
gamma = 3.0

for n in range(nodes):
    scale_free_distribution[n] = power_law(k_min, k_max, np.random.uniform(0, 1), gamma)
This will generate a power-law distribution with gamma = 3.0. If you want to fix the average of the distribution, you will have to look into the complex-networks literature, because k_min depends on k_max and the average connectivity.
My answer is almost the same as Virgil's above, with the crucial difference that alpha here is the negative exponent of the power-law distribution.
So, if r is a uniform random deviate U(0,1), then x in the following expression is a power-law distributed random deviate:
x = xmin * (1-r) ** (-1/(alpha-1))
where xmin is the smallest (positive) value above which the power-law distribution holds, and alpha is the negative exponent of the distribution, that is, P(x) = [constant] * x**(-alpha).

Calculating Probability of a Random Variable in a Distribution in Python

Given a mean and standard-deviation defining a normal distribution, how would you calculate the following probabilities in pure-Python (i.e. no Numpy/Scipy or other packages not in the standard library)?
The probability of a random variable r where r < x or r <= x.
The probability of a random variable r where r > x or r >= x.
The probability of a random variable r where x > r > y.
I've found some libraries, like Pgnumerics, that provide functions for calculating these, but the underlying math is unclear to me.
Edit: To show this isn't homework, posted below is my working code for Python<=2.6, albeit I'm not sure if it handles the boundary conditions correctly.
from math import *
import unittest

def erfcc(x):
    """
    Complementary error function.
    """
    z = abs(x)
    t = 1. / (1. + 0.5 * z)
    r = t * exp(-z*z - 1.26551223 + t*(1.00002368 + t*(.37409196 +
        t*(.09678418 + t*(-.18628806 + t*(.27886807 +
        t*(-1.13520398 + t*(1.48851587 + t*(-.82215223 +
        t*.17087277)))))))))
    if x >= 0.:
        return r
    else:
        return 2. - r

def normcdf(x, mu, sigma):
    t = x - mu
    y = 0.5 * erfcc(-t / (sigma * sqrt(2.0)))
    if y > 1.0:
        y = 1.0
    return y

def normpdf(x, mu, sigma):
    u = (x - mu) / abs(sigma)
    y = (1 / (sqrt(2 * pi) * abs(sigma))) * exp(-u * u / 2)
    return y

def normdist(x, mu, sigma, f):
    if f:
        y = normcdf(x, mu, sigma)
    else:
        y = normpdf(x, mu, sigma)
    return y

def normrange(x1, x2, mu, sigma, f=True):
    """
    Calculates probability of random variable falling between two points.
    """
    p1 = normdist(x1, mu, sigma, f)
    p2 = normdist(x2, mu, sigma, f)
    return abs(p1 - p2)
All these are very similar: If you can compute #1 using a function cdf(x), then the solution to #2 is simply 1 - cdf(x), and for #3 it's cdf(x) - cdf(y).
Since Python 2.7 the (Gauss) error function is built into the math module, so you can compute the CDF of the normal distribution using the equation from the article you linked to:
import math
print(0.5 * (1 + math.erf((x - mean) / math.sqrt(2 * standard_dev**2))))
where mean is the mean and standard_dev is the standard deviation.
Some notes, since what you asked is relatively straightforward given the information in the article:
The CDF of a random variable (say X) is the probability that X lies between -infinity and some limit x (lower case). The CDF is the integral of the pdf for continuous distributions. The cdf is exactly what you described for #1: you want a normally distributed RV to be between -infinity and x (<= x).
< and <= (as well as > and >=) are the same for continuous random variables, because the probability that the RV equals any single point is 0. So whether or not x itself is included doesn't matter when calculating probabilities for continuous distributions.
Probabilities sum to 1: if the RV is not < x then it is >= x, so if you have cdf(x), then 1 - cdf(x) is the probability that X >= x. Since >= is equivalent to > for continuous random variables, this is also the probability that X > x.
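Putting those notes together, a small pure-Python helper covers all three cases (my sketch; the example mean and standard deviation are arbitrary):
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    # P(X <= x) for X ~ Normal(mu, sigma), using math.erf (Python >= 2.7)
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

mu, sigma = 1040.0, 210.0
p_below = norm_cdf(1000.0, mu, sigma)                # P(r < x) == P(r <= x)
p_above = 1.0 - norm_cdf(1000.0, mu, sigma)          # P(r > x) == P(r >= x)
p_between = norm_cdf(1100.0, mu, sigma) - norm_cdf(1000.0, mu, sigma)  # P(1000 < r < 1100)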
