Generate Random Number in Range from Single-Tailed Distribution with Python

I want to generate a random float in the range [0, 1) from a one-tailed distribution that looks like this:
[image: a right-skewed density curve; the pictured distribution is the chi-squared distribution]
I can only find resources on drawing from a uniform distribution in a range, however.

You could use a Beta distribution, e.g.
import numpy as np
np.random.seed(2018)
np.random.beta(2, 5, 10)
#array([ 0.18094173, 0.26192478, 0.14055507, 0.07172968, 0.11830031,
# 0.1027738 , 0.20499125, 0.23220654, 0.0251325 , 0.26324832])
Here we draw 10 numbers from a Beta(2, 5) distribution.
The Beta distribution is a very versatile and fundamental distribution in statistics; without going into any details, by changing the parameters alpha and beta you can make the distribution left-skewed, right-skewed, uniform, symmetric etc. The distribution is defined on the interval [0, 1] which is consistent with what you're after.
A more technical comment
While the Kumaraswamy distribution certainly has more benign algebraic properties than the Beta distribution I would argue that the latter is the more fundamental distribution; for example, in Bayesian inference, the Beta distribution often enters as the conjugate prior when dealing with binomial(-like) processes.
Secondly, the mean and variance of the Beta distribution can be expressed quite simply in terms of the parameters alpha, beta; for example, the mean is simply given by alpha / (alpha + beta).
Lastly, from a computational and statistical inference point of view, fitting a Beta distribution to data is usually done in a few lines of code in Python (or R), where most Python libraries like numpy and scipy already include methods to deal with the Beta distribution.
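For instance, here is a minimal sketch of such a fit with scipy.stats; the data array below is hypothetical, and fixing loc and scale keeps the support at [0, 1]:
import numpy as np
from scipy import stats

# hypothetical data on [0, 1]; in practice this would be your own sample
data = np.random.beta(2, 5, size=1000)

# fit a Beta distribution with the support pinned to [0, 1]
alpha_hat, beta_hat, loc, scale = stats.beta.fit(data, floc=0, fscale=1)

# the mean of a Beta(alpha, beta) distribution is alpha / (alpha + beta)
print(alpha_hat / (alpha_hat + beta_hat))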

I would lean toward a distribution which is naturally bounded on the [0...1] interval (or any other [a...b] interval which could be rescaled later), like in @MauritsEvers' answer. The reason is that you know the distribution and can derive (or read up on) some interesting facts about it. If you use chi2 and truncate it, it is unclear how to argue about the properties of what you've got.
Personally I prefer the Kumaraswamy distribution over the Beta distribution; the expressions for mean, mode, variance etc. are a lot simpler.
Just install it
pip install kumaraswamy
and sample
from kumaraswamy import kumaraswamy
d = kumaraswamy(a=2.0, b=5.0)
q = d.rvs(10)
print(q)
will produce 10 numbers following the magenta curve in the Wikipedia article.
If you don't want Beta or Kumaraswamy, there is e.g. the Logit-normal distribution and quite a few others.
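As a quick illustration (an addition, not part of the original answer), logit-normal samples on (0, 1) can be drawn by pushing normal draws through the logistic function; the parameters below are arbitrary:
import numpy as np
from scipy.special import expit  # logistic function

rng = np.random.default_rng(2018)
# if Y ~ Normal(mu, sigma), then expit(Y) follows a logit-normal distribution on (0, 1)
samples = expit(rng.normal(loc=-1.0, scale=0.8, size=10))
print(samples)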

Look at the numpy.random.chisquare method in the numpy library.
numpy.random.chisquare(df, size=None)
>>> np.random.chisquare(2,4)
array([ 1.89920014, 9.00867716, 3.13710533, 5.62318272])

If you want to draw a sample of size N = 5 from a ChiSquare distribution, you can try the OpenTURNS library:
import openturns as ot
# define your distribution. Here, nu = 3. (nu is a float > 0)
distribution = ot.ChiSquare(3)
# draw a sample of size N from `distribution`
N=5
sample = distribution.getSample(N)
A complete list of distributions is available here
sample has an OpenTURNS format but you can manipulate it as a Numpy array:
import numpy as np
s = np.array(sample)
print(s)
>>>array([[1.65299759],
[6.78405097],
[0.88528975],
[0.87900211],
[0.25031129]])
You can also easily plot the distribution PDF just by calling distribution.drawPDF().
Customizations:
from openturns.viewer import View
graph = distribution.drawPDF()
title = str(distribution)[:100].split('\n')[0]
graph.setTitle(title)
View(graph, add_legend=False)

Related

How to generate random numbers with predefined probability distribution?

I would like to implement a function in Python (using numpy) that takes a mathematical function (for example p(x) = e^(-x), as below) as input and generates random numbers distributed according to that function's probability distribution. And I need to plot them, so we can see the distribution.
What I actually need is a random number generator for exactly the following two mathematical functions as input, but if it could take other functions, why not:
1) p(x) = e^(-x)
2) g(x) = (1/sqrt(2*pi)) * e^(-(x^2)/2)
Does anyone have any idea how this is doable in python?
For simple distributions like the ones you need, or if you have a CDF that is easy to invert in closed form, you can find plenty of samplers in NumPy, as correctly pointed out in Olivier's answer.
For arbitrary distributions you could use Markov chain Monte Carlo (MCMC) sampling methods.
The simplest and maybe easiest-to-understand variant of these algorithms is Metropolis sampling.
The basic idea goes like this:
start from a random point x and take a random step xnew = x + delta
evaluate the desired probability distribution in the starting point p(x) and in the new one p(xnew)
if the new point is more probable p(xnew)/p(x) >= 1 accept the move
if the new point is less probable, randomly decide whether to accept or reject depending on how probable [1] the new point is
take a new step from this point and repeat the cycle
It can be shown, see e.g. Sokal [2], that points sampled with this method follow the acceptance probability distribution.
An extensive implementation of Montecarlo methods in Python can be found in the PyMC3 package.
Example implementation
Here's a toy example just to show you the basic idea, not meant in any way as a reference implementation. Please refer to mature packages for any serious work.
import numpy as np

def uniform_proposal(x, delta=2.0):
    return np.random.uniform(x - delta, x + delta)

def metropolis_sampler(p, nsamples, proposal=uniform_proposal):
    x = 1  # start somewhere
    for i in range(nsamples):
        trial = proposal(x)  # random neighbour from the proposal distribution
        acceptance = p(trial) / p(x)
        # accept the move conditionally
        if np.random.uniform() < acceptance:
            x = trial
        yield x
Let's see if it works with some simple distributions
Gaussian mixture
def gaussian(x, mu, sigma):
    return 1. / sigma / np.sqrt(2 * np.pi) * np.exp(-((x - mu)**2) / 2. / sigma / sigma)

p = lambda x: gaussian(x, 1, 0.3) + gaussian(x, -1, 0.1) + gaussian(x, 3, 0.2)
samples = list(metropolis_sampler(p, 100000))
Cauchy
def cauchy(x, mu, gamma):
    return 1. / (np.pi * gamma * (1. + ((x - mu) / gamma)**2))

p = lambda x: cauchy(x, -2, 0.5)
samples = list(metropolis_sampler(p, 100000))
Arbitrary functions
You don't really have to sample from proper probability distributions. You might just have to enforce a limited domain in which to sample your random steps [3]:
p = lambda x: np.sqrt(x)
samples = list(metropolis_sampler(p, 100000, domain=(0, 10)))
p = lambda x: (np.sin(x)/x)**2
samples = list(metropolis_sampler(p, 100000, domain=(-4*np.pi, 4*np.pi)))
Conclusions
There is still way too much to say, about proposal distributions, convergence, correlation, efficiency, applications, Bayesian formalism, other MCMC samplers, etc.
I don't think this is the proper place and there is plenty of much better material than what I could write here available online.
1. The idea here is to favor exploration where the probability is higher, but still look at low-probability regions as they might lead to other peaks. Fundamental is the choice of the proposal distribution, i.e. how you pick new points to explore. Steps that are too small might constrain you to a limited area of your distribution, steps that are too big could lead to a very inefficient exploration.
2. Physics oriented. The Bayesian formalism (Metropolis-Hastings) is preferred these days, but IMHO it's a little harder to grasp for beginners. There are plenty of tutorials available online, see e.g. this one from Duke University.
3. The implementation is not shown so as not to add too much confusion, but it's straightforward: you just have to wrap trial steps at the domain edges or make the desired function go to zero outside the domain.
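For concreteness, here is a minimal sketch (an addition, not the original author's implementation) of the second option, reusing the metropolis_sampler defined above with the sqrt example from the answer:
import numpy as np

def with_domain(p, domain):
    # wrap a target density so it is zero outside `domain`; out-of-domain
    # proposals then have acceptance 0 and are never accepted
    lo, hi = domain
    return lambda x: p(x) if lo <= x <= hi else 0.0

# usage with the sampler defined earlier (its start point x = 1 lies inside the domain)
p = lambda x: np.sqrt(x)
samples = list(metropolis_sampler(with_domain(p, (0.0, 10.0)), 100000))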
NumPy offers a wide range of probability distributions.
The first function is an exponential distribution with parameter 1.
np.random.exponential(1)
The second one is a normal distribution with mean 0 and variance 1.
np.random.normal(0, 1)
Note that in both cases, the arguments are optional as these are the default values for these distributions.
As a sidenote, you can also find those distributions in the random module as random.expovariate and random.gauss respectively.
More general distributions
While NumPy will likely cover all your needs, remember that you can always compute the inverse cumulative distribution function of your distribution and input values from a uniform distribution.
inverse_cdf(np.random.uniform())
For example, if NumPy did not provide the exponential distribution, you could do this:
def exponential():
    # inverse CDF of Exp(1): F^{-1}(u) = -ln(1 - u), with u uniform on [0, 1)
    return -np.log(1 - np.random.uniform())
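Along the same lines (an addition, assuming SciPy is available), the standard normal can be sampled by inversion with scipy.special.ndtri, the inverse of the standard normal CDF:
import numpy as np
from scipy.special import ndtri  # inverse CDF of the standard normal

def standard_normal():
    # u in (0, 1) maps to a finite value; u == 0 would give -inf
    return ndtri(np.random.uniform())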
If you encounter distributions whose CDF is not easy to compute, then consider filippo's great answer.

How to make a sample from the empirical distribution function

I'm trying to implement nonparametric bootstrapping in Python. It requires taking a sample, building an empirical distribution function (EDF) from it, and then generating a bunch of samples from this EDF. How can I do it?
In scipy I only found how to make your own distribution function if you know the exact formula describing it, but I only have an EDF.
You get the EDF by sorting the samples:
N = samples.size
ss = np.sort(samples) # these are the x-values of the edf
# the y-values are 1/(2N), 3/(2N), 5/(2N) etc.
edf = lambda x: np.searchsorted(ss, x) / N
However, if you only want to resample then you simply draw from your sample with equal probability and replacement.
If this is too "steppy" for your liking, you can probably use some kind of interpolation to get a smooth distribution.
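A minimal sketch of that resampling step (assuming samples is the 1-D data array used above):
import numpy as np

rng = np.random.default_rng()
# nonparametric bootstrap: draw from the sample with equal probability and with
# replacement, which is exactly drawing from the EDF
bootstrap_sample = rng.choice(samples, size=samples.size, replace=True)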

How to infer the parameters of a 1D gaussian distribution using PyMC?

I'm pretty new to PyMC and I'm desperately trying to infer the parameters of an underlying Gaussian distribution that best fits a distribution of observed data that I have, not with a pre-built normal distribution, but with a more general method using histograms of the simulated data to build pdfs. But so far I can't get my code to converge, and I don't know why...
So here's a summary of what my code does.
I have a dataset of 5000 points distributed normally (mean=5,sigma=2). I want to retrieve these values (mean, sigma) with a bayesian inference (using MCMC).
I have a data simulator that generates for each iteration of the MCMC process a normal distribution of 5000 points with a random mean and sigma (uniform prior)
From the simulated distribution of points I compute a numpy histogram normed to 1 representing the pdf of the distribution (Nbins=int(sqrt(5000))). I then compute the mean and standard deviation of this distribution.
What I want is the set of parameters that will allow me to build a simulated distribution that best fits the observed data.
I use the most general definition of the log likelihood, that is:
ln L(θ|x)=∑ln(f(xi|θ)) (the likelihood function being defined as the probability distribution of the observed data given the parameters θ)
Then I interpolate linearly the histogram values for every bin center. Therefore I have a continuous pdf for the simulated distribution. So here f is the interpolated function I made from the histogram of the simulation.
I sum the log(f(xi)) contributions for every (real) data point and return the loglikelihood value at the end.
But some (real) data points are so far off the mean of the simulated distribution that f(xi)=0. For these points the code raises a math domain error (Reminder: log(0)=-inf). So I artificially set the pdf to a small epsilon for the points where it's usually set to 0.
But here's the thing. The loglikelihood is not computed for every iteration. And actually it is not computed at all, in the present architecture of my code. So that's why the MCMC process is not converging. But... I don't know why.
It turns out that building custom likelihood functions does not seem to be a very common approach in the PyMC community, where people usually prefer to use pre-built distributions. I'm having trouble finding help on these matters, so ideas and suggestions will be deeply appreciated :)
import numpy as np
import matplotlib.pyplot as plt
import math
import pymc as pm
from scipy.interpolate import InterpolatedUnivariateSpline
# Generate the data
np.random.seed(0)
N=5000
true_mean=5.
true_sigma = 2.
data = np.random.normal(true_mean,true_sigma,N)
#prior
m=pm.Uniform('m', lower=4, upper=6)
s=pm.Uniform('s', lower=1, upper=3)
@pm.deterministic
def data_simulator(mean_input=m, sig_input=s):
    out = np.empty(4, dtype=object)
    datasim = np.random.normal(mean_input, sig_input, N)
    hist, bin_edges = np.histogram(datasim, bins=int(math.sqrt(len(datasim))), density=True)
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    m_sim = np.mean(datasim)
    s_sim = np.std(datasim)
    out[0] = m_sim
    out[1] = s_sim
    out[2] = bin_centers
    out[3] = hist
    return out
@pm.stochastic(observed=True)
def loglikelihood(value=data, mean_output=data_simulator.value[0], sigma_output=data_simulator.value[1], bin_centers_sim=data_simulator.value[2], hist_sim=data_simulator.value[3]):
    interp_sim = InterpolatedUnivariateSpline(bin_centers_sim, hist_sim, k=1, ext=0)  # returns the extrapolated values
    logp = np.sum(np.log(interp_sim(value)))
    print 'logp=', logp
    return logp
model = pm.Model({"mean": m, "sigma": s, "data_simulator": data_simulator, "loglikelihood": loglikelihood})
#Run the MCMC sampler
mcmc = pm.MCMC(model)
mcmc.sample(iter=10000, burn=5000)
#Plot the marginals
pm.Matplot.plot(mcmc)

Is there a method to do arithmetic with SciPy's random variables?

SciPy's stats module has objects of the type "random variable" (they call it rv_frozen). It makes it easy to plot, say, CDFs of random variables of a given distribution. Here's a very simple example:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

n = stats.norm()
x = np.linspace(-3, 3)
y = n.cdf(x)
plt.plot(x, y)
I wondered whether there's a way of doing basic arithmetic manipulations on such random variables. The following example is wishful thinking (it doesn't work):
du_list = [stats.randint(2, 5) for _ in range(100)]
du_avg = sum(du_list) / len(du_list)
x = np.linspace(0, 10)
y = du_avg.cdf(x)
plt.plot(x, y)
This wishful-thinking example should produce the graph of the cumulative distribution function of the random variable which is the average of 100 i.i.d. random variables, each is distributed uniformly on the set {2,3,4}.
I realize this is a bit late, but I figured I'd answer in case anyone else needs this in the future. I needed the same functionality recently and even thought about extending scipy's rv_discrete to implement this, but then I found PaCAL.
PaCAL is a Python software package for doing arithmetic on random variables and it supports quite a few distributions, including continuous distributions. There is even some support for bivariate joint distributions. Available as a package on PyPI. Only for Python 2.x though.
EDIT: The PaCAL repo on Github now supports Python 3.x as well.
The method that matches your description exactly doesn't exist. The CDFs of the different distributions are all defined in the scipy/stats/distributions.py source file. For example:
Boltzmann distribution CDF (Line 7675):
def _cdf(self, x, lambda_, N):
    k = floor(x)
    return (1-exp(-lambda_*(k+1)))/(1-exp(-lambda_*N))
You can, however, estimate the MLE and then call the cdf method; see this sample:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as ss

unknown = np.random.normal(loc=1.1, scale=2.0, size=100)
Loc, Scale = ss.norm.fit_loc_scale(unknown)  # making an MLE fit
unknown_cdf = lambda x: ss.norm.cdf(x, loc=Loc, scale=Scale)  # the cdf of the MLE fit to the data
plt.plot(np.linspace(-10, 10), unknown_cdf(np.linspace(-10, 10)), '-')
You could compute it by hand.
With X the random variable created as the sum of the X_i, each X_i uniformly distributed as U(2, 5): sample your distribution created by X to obtain the pdf, and integrate to obtain the cdf.
Or you could try to find an analytical solution for this problem.
See the Irwin-Hall distribution
and the related discussion on Math-stackexchange.
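As an illustration of the sampling approach (an addition, not from the original answer; the repetition count is arbitrary), one can simulate the average of 100 draws from stats.randint(2, 5) many times and build an empirical CDF:
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

n_rep = 20000
# each row is one realisation of the average of 100 iid draws from {2, 3, 4}
averages = stats.randint(2, 5).rvs(size=(n_rep, 100)).mean(axis=1)

x = np.linspace(0, 10, 500)
ecdf = np.searchsorted(np.sort(averages), x) / n_rep  # empirical CDF on a grid
plt.plot(x, ecdf)
plt.show()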

How to perform a chi-squared goodness of fit test using scientific libraries in Python?

Let's assume I have some data I obtained empirically:
import numpy as np
from scipy import stats

size = 10000
x = 10 * stats.expon.rvs(size=size) + 0.2 * np.random.uniform(size=size)
It is exponentially distributed (with some noise) and I want to verify this using a chi-squared goodness of fit (GoF) test. What is the simplest way of doing this using the standard scientific libraries in Python (e.g. scipy or statsmodels) with the least amount of manual steps and assumptions?
I can fit a model with:
import matplotlib.pyplot as plt

param = stats.expon.fit(x)
grid = np.linspace(0, 100, 10000)
plt.hist(x, density=True, color='white', hatch='/')
plt.plot(grid, stats.expon.pdf(grid, *param))
The Kolmogorov-Smirnov test can be calculated very elegantly:
>>> stats.kstest(x, lambda x : stats.expon.cdf(x, *param))
(0.0061000000000000004, 0.85077099515985011)
However, I can't find a good way of calculating the chi-squared test.
There is a chi-squared GoF function in statsmodels, but it assumes a discrete distribution (and the exponential distribution is continuous).
The official scipy.stats tutorial only covers a case for a custom distribution and probabilities are built by fiddling with many expressions (npoints, npointsh, nbound, normbound), so it's not quite clear to me how to do it for other distributions. The chisquare examples assume the expected values and DoF are already obtained.
Also, I am not looking for a way to "manually" perform the test as was already discussed here, but would like to know how to apply one of the available library functions.
An approximate solution for equal probability bins:
Estimate the parameters of the distribution
Use the inverse cdf, ppf if it's a scipy.stats.distribution, to get the bin edges for a regular probability grid, e.g. distribution.ppf(np.linspace(0, 1, n_bins + 1), *args)
Then, use np.histogram to count the number of observations in each bin
Then use the chisquare test on the frequencies, as sketched below.
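A minimal sketch of these steps (an addition; it assumes the exponential fit from the question, uses searchsorted instead of np.histogram to avoid infinite outer bin edges, and the bin count and ddof choice are illustrative):
import numpy as np
from scipy import stats

size = 10000
x = 10 * stats.expon.rvs(size=size) + 0.2 * np.random.uniform(size=size)
param = stats.expon.fit(x)

n_bins = 20
# inner bin edges on a regular probability grid via the inverse cdf (ppf)
inner_edges = stats.expon.ppf(np.linspace(0, 1, n_bins + 1)[1:-1], *param)
observed = np.bincount(np.searchsorted(inner_edges, x), minlength=n_bins)
expected = np.full(n_bins, size / n_bins)  # equal-probability bins

# ddof = 2 accounts for the two estimated parameters (loc, scale)
statistic, p_value = stats.chisquare(observed, expected, ddof=2)
print(statistic, p_value)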
An alternative would be to find the bin edges from the percentiles of the sorted data, and use the cdf to find the actual probabilities.
This is only approximate, since the theory for the chisquare test assumes that the parameters are estimated by maximum likelihood on the binned data. And I'm not sure whether the selection of binedges based on the data affects the asymptotic distribution.
I haven't looked into this in a long time.
If an approximate solution is not good enough, then I would recommend that you ask the question on stats.stackexchange.
Why do you need to "verify" that it's exponential? Are you sure you need a statistical test? I can pretty much guarantee that it isn't ultimately exponential & the test would be significant if you had enough data, making the logic of using the test rather forced. It may help you to read this CV thread: Is normality testing 'essentially useless'?, or my answer here: Testing for heteroscedasticity with many observations.
It is typically better to use a qq-plot and/or pp-plot (depending on whether you are concerned about the fit in the tails or middle of the distribution, see my answer here: PP-plots vs. QQ-plots). Information on how to make qq-plots in Python SciPy can be found in this SO thread: Quantile-Quantile plot using SciPy
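For reference (an addition, not part of the original answer), scipy.stats.probplot can draw such a qq-plot against the fitted exponential; the fit parameters are passed through sparams:
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

x = 10 * stats.expon.rvs(size=10000) + 0.2 * np.random.uniform(size=10000)
param = stats.expon.fit(x)

# quantiles of the data against quantiles of the fitted exponential
stats.probplot(x, sparams=param, dist=stats.expon, plot=plt)
plt.show()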
I tried your problem with OpenTURNS.
The beginning is the same:
import numpy as np
from scipy import stats
size = 10000
x = 10 * stats.expon.rvs(size=size) + 0.2 * np.random.uniform(size=size)
If you suspect that your sample x is coming from an Exponential distribution, you can use ot.ExponentialFactory() to fit the parameters:
import openturns as ot
sample = ot.Sample([[p] for p in x])
distribution = ot.ExponentialFactory().build(sample)
As the factory needs an ot.Sample() as input, I needed to format x and reshape it as 10,000 points of dimension 1.
Let's now assess this fit using a ChiSquared test:
result = ot.FittingTest.ChiSquared(sample, distribution, 0.01)
print('Exponential?', result.getBinaryQualityMeasure(), ', P-value=', result.getPValue())
>>> Exponential? True , P-value= 0.9275212544642293
Very good!
And of course, print(distribution) will give you the fitted parameters:
>>> Exponential(lambda = 0.0982391, gamma = 0.0274607)
