SciPy's stats module has objects of type "random variable" (it calls them rv_frozen). These make it easy to plot, say, the cdf of a random variable from a given distribution. Here's a very simple example:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

n = stats.norm()
x = np.linspace(-3, 3)
y = n.cdf(x)
plt.plot(x, y)
I wondered whether there's a way of doing basic arithmetic manipulations on such random variables. The following example is wishful thinking (it doesn't work):
du_list = [stats.randint(2, 5) for _ in range(100)]
du_avg = sum(du_list) / len(du_list)
x = np.linspace(0, 10)
y = du_avg.cdf(x)
plt.plot(x, y)
This wishful-thinking example should produce the graph of the cumulative distribution function of the random variable that is the average of 100 i.i.d. random variables, each distributed uniformly on the set {2, 3, 4}.
I realize this is a bit late, but I figured I'd answer in case anyone else needs this in the future. I needed the same functionality recently and even thought about extending scipy's rv_discrete to implement this, but then I found PaCAL.
PaCAL is a Python software package for doing arithmetic on random variables, and it supports quite a few distributions, including continuous ones. There is even some support for bivariate joint distributions. It is available as a package on PyPI, though only for Python 2.x.
EDIT: The PaCAL repo on Github now supports Python 3.x as well.
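For illustration, arithmetic on PaCAL distributions is done with ordinary Python operators. The class and method names below (UniformDistr, cdf, plot) are from my reading of the PaCAL documentation, so treat this as a sketch and check the current API:

from pacal import UniformDistr

# sum of two independent U(0, 1) variables: PaCAL computes the convolution,
# giving the triangular distribution on [0, 2]
X = UniformDistr(0, 1)
Y = UniformDistr(0, 1)
S = X + Y
print(S.cdf(1.0))  # should be 0.5 by symmetry
S.plot()           # plot the pdf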
A method that matches your description exactly doesn't exist. The cdfs of the different distributions are all defined in the scipy/stats/distributions.py source file. For example:
Boltzmann distribution CDF (line 7675):
def _cdf(self, x, lambda_, N):
    k = floor(x)
    return (1 - exp(-lambda_*(k+1))) / (1 - exp(-lambda_*N))
You can, however, estimate the parameters from the data and then call the cdf method; see this example:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as ss

unknown = np.random.normal(loc=1.1, scale=2.0, size=100)
Loc, Scale = ss.norm.fit_loc_scale(unknown)  # estimate loc and scale from the data
unknown_cdf = lambda x: ss.norm.cdf(x, loc=Loc, scale=Scale)  # cdf of the fitted distribution
plt.plot(np.linspace(-10, 10), unknown_cdf(np.linspace(-10, 10)), '-')
You could compute it by hand. With X the random variable defined as the sum of the Xi, where each Xi is uniformly distributed on {2, 3, 4} (i.e. stats.randint(2, 5)), sample the distribution of X to obtain the pdf, and integrate to obtain the cdf.
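A minimal Monte Carlo sketch of that approach for the question's setup; the 50,000 draws are an arbitrary choice, and the empirical cdf is read off directly from the sorted averages:

import numpy as np
import matplotlib.pyplot as plt

# average of 100 i.i.d. variables, each uniform on {2, 3, 4}
n_vars, n_draws = 100, 50000
draws = np.random.randint(2, 5, size=(n_draws, n_vars))
averages = np.sort(draws.mean(axis=1))

# empirical cdf of the averages
x = np.linspace(2, 4, 200)
y = np.searchsorted(averages, x, side='right') / n_draws
plt.plot(x, y)
plt.show()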
Or you could try to find an analytical solution to this problem; see the Irwin-Hall distribution and the related discussion on Math StackExchange.
Related
I want to generate a random float in the range [0, 1) from a one-tailed distribution that looks like this
The above is the chi-squared distribution. I can only find resources on drawing from a uniform distribution in a range, however.
You could use a Beta distribution, e.g.
import numpy as np
np.random.seed(2018)
np.random.beta(2, 5, 10)
#array([ 0.18094173, 0.26192478, 0.14055507, 0.07172968, 0.11830031,
# 0.1027738 , 0.20499125, 0.23220654, 0.0251325 , 0.26324832])
Here we draw 10 numbers from a Beta(2, 5) distribution.
The Beta distribution is a very versatile and fundamental distribution in statistics; without going into any details, by changing the parameters alpha and beta you can make the distribution left-skewed, right-skewed, uniform, symmetric etc. The distribution is defined on the interval [0, 1] which is consistent with what you're after.
A more technical comment
While the Kumaraswamy distribution certainly has more benign algebraic properties than the Beta distribution, I would argue that the latter is the more fundamental distribution; for example, in Bayesian inference, the Beta distribution often enters as the conjugate prior when dealing with binomial(-like) processes.
Secondly, the mean and variance of the Beta distribution can be expressed quite simply in terms of the parameters alpha and beta; for example, the mean is simply given by alpha / (alpha + beta).
Lastly, from a computational and statistical inference point of view, fitting a Beta distribution to data is usually done in a few lines of code in Python (or R), since most Python libraries like numpy and scipy already include methods to deal with the Beta distribution.
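As a rough sketch of that last point (the synthetic data here is just a stand-in for your data on [0, 1]):

import numpy as np
from scipy import stats

data = np.random.beta(2, 5, size=1000)  # stand-in for your data on [0, 1]
# fix loc=0 and scale=1 since the data already lives on [0, 1]
a, b, loc, scale = stats.beta.fit(data, floc=0, fscale=1)
print(a, b)  # estimates should be close to 2 and 5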
I would lean toward a distribution which is naturally bounded on the [0, 1] interval (or any other [a, b] interval which could be rescaled later), like the one in @MauritsEvers' answer. The reason is that you know the distribution and can derive (or read up on) some interesting facts about it. If you use chi2 and truncate it, it is unclear how to argue about the properties of what you've got.
Personally I prefer the Kumaraswamy distribution over the Beta distribution; the expressions for the mean, mode, variance etc. are a lot simpler.
Just install it
pip install kumaraswamy
and sample
from kumaraswamy import kumaraswamy
d = kumaraswamy(a=2.0, b=5.0)
q = d.rvs(10)
print(q)
will produce 10 numbers following the magenta curve in the Wikipedia article.
If you don't want Beta or Kumaraswamy, there is e.g. the logit-normal distribution and quite a few others.
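The logit-normal in particular is easy to sample without any extra package, since it is just the logistic transform of a normal variable; a minimal sketch:

import numpy as np

mu, sigma = 0.0, 1.0  # parameters of the underlying normal
z = np.random.normal(mu, sigma, size=10)
samples = 1.0 / (1.0 + np.exp(-z))  # logit-normal samples in (0, 1)
print(samples)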
Look at the numpy.random.chisquare method in the numpy library.
numpy.random.chisquare(df, size=None)
>>> np.random.chisquare(2,4)
array([ 1.89920014, 9.00867716, 3.13710533, 5.62318272])
If you want to draw a sample of size N = 5 from a ChiSquare distribution, you can try the OpenTURNS library:
import openturns as ot
# define your distribution. Here, nu = 3. (nu is a float > 0)
distribution = ot.ChiSquare(3)
# draw a sample of size N from `distribution`
N=5
sample = distribution.getSample(N)
A complete list of distributions is available here
sample has an OpenTURNS format but you can manipulate it as a Numpy array:
s = np.array(sample)
print(s)
>>>array([[1.65299759],
[6.78405097],
[0.88528975],
[0.87900211],
[0.25031129]])
You can also easily plot the distribution PDF just by calling distribution.drawPDF().
Customizations:
from openturns.viewer import View
graph = distribution.drawPDF()
title = str(distribution)[:100].split('\n')[0]
graph.setTitle(title)
View(graph, add_legend=False)
I would like to implement a function in Python (using numpy) that takes a mathematical function (for example p(x) = e^(-x), as below) as input and generates random numbers distributed according to that function's probability distribution. And I need to plot them, so we can see the distribution.
Actually I need a random number generator for exactly the following two mathematical functions as input, but if it could take other functions, why not:
1) p(x) = e^(-x)
2) g(x) = (1/sqrt(2*pi)) * e^(-(x^2)/2)
Does anyone have any idea how this is doable in python?
For simple distributions like the ones you need, or if you have a CDF that is easy to invert in closed form, you can find plenty of samplers in NumPy, as correctly pointed out in Olivier's answer.
For arbitrary distributions you could use Markov chain Monte Carlo (MCMC) sampling methods.
The simplest and maybe easiest-to-understand variant of these algorithms is Metropolis sampling.
The basic idea goes like this:
start from a random point x and take a random step xnew = x + delta
evaluate the desired probability distribution at the starting point, p(x), and at the new one, p(xnew)
if the new point is more probable, p(xnew)/p(x) >= 1, accept the move
if the new point is less probable, randomly decide whether to accept or reject, depending on how probable¹ the new point is
take a new step from this point and repeat the cycle
It can be shown (see e.g. Sokal²) that points sampled with this method follow the desired probability distribution.
An extensive implementation of Monte Carlo methods in Python can be found in the PyMC3 package.
Example implementation
Here's a toy example just to show you the basic idea, not meant in any way as a reference implementation. Please refer to mature packages for any serious work.
import numpy as np

def uniform_proposal(x, delta=2.0):
    return np.random.uniform(x - delta, x + delta)

def metropolis_sampler(p, nsamples, proposal=uniform_proposal):
    x = 1  # start somewhere
    for i in range(nsamples):
        trial = proposal(x)  # random neighbour from the proposal distribution
        acceptance = p(trial)/p(x)
        # accept the move conditionally
        if np.random.uniform() < acceptance:
            x = trial
        yield x
Let's see if it works with some simple distributions
Gaussian mixture
def gaussian(x, mu, sigma):
    return 1./sigma/np.sqrt(2*np.pi)*np.exp(-((x-mu)**2)/2./sigma/sigma)

p = lambda x: gaussian(x, 1, 0.3) + gaussian(x, -1, 0.1) + gaussian(x, 3, 0.2)
samples = list(metropolis_sampler(p, 100000))
Cauchy
def cauchy(x, mu, gamma):
    return 1./(np.pi*gamma*(1.+((x-mu)/gamma)**2))

p = lambda x: cauchy(x, -2, 0.5)
samples = list(metropolis_sampler(p, 100000))
Arbitrary functions
You don't really have to sample from proper probability distributions. You might just have to enforce a limited domain within which to sample your random steps³:
p = lambda x: np.sqrt(x)
samples = list(metropolis_sampler(p, 100000, domain=(0, 10)))
p = lambda x: (np.sin(x)/x)**2
samples = list(metropolis_sampler(p, 100000, domain=(-4*np.pi, 4*np.pi)))
Conclusions
There is still way too much to say about proposal distributions, convergence, correlation, efficiency, applications, Bayesian formalism, other MCMC samplers, etc.
I don't think this is the proper place, and there is plenty of much better material available online than what I could write here.
¹ The idea here is to favor exploration where the probability is higher, but still look at low-probability regions as they might lead to other peaks. Fundamental is the choice of the proposal distribution, i.e. how you pick new points to explore: steps that are too small might constrain you to a limited area of your distribution, while steps that are too big could lead to a very inefficient exploration.

² Physics oriented. The Bayesian formalism (Metropolis-Hastings) is preferred these days, but IMHO it's a little harder to grasp for beginners. There are plenty of tutorials available online; see e.g. this one from Duke University.

³ Implementation not shown so as not to add too much confusion, but it's straightforward: you just have to wrap trial steps at the domain edges or make the desired function go to zero outside the domain.
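For what it's worth, a minimal sketch of the domain handling mentioned in footnote ³; the function name and the explicit rejection of out-of-domain trials are my own choices, not the answer's reference code:

import numpy as np

def metropolis_sampler_domain(p, nsamples, domain, delta=2.0):
    lo, hi = domain
    x = np.random.uniform(lo, hi)  # start somewhere inside the domain
    for _ in range(nsamples):
        trial = np.random.uniform(x - delta, x + delta)
        # treat p as zero outside the domain, so such trials are always rejected
        if lo <= trial <= hi and np.random.uniform() < p(trial) / p(x):
            x = trial
        yield x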
NumPy offers a wide range of probability distributions.
The first function is an exponential distribution with parameter 1.
np.random.exponential(1)
The second one is a normal distribution with mean 0 and variance 1.
np.random.normal(0, 1)
Note that in both cases, the arguments are optional, as these are the default values for these distributions.
As a sidenote, you can also find those distributions in the random module as random.expovariate and random.gauss respectively.
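For instance, these two standard-library calls mirror the NumPy draws above:

import random

print(random.expovariate(1))  # exponential with rate lambda = 1
print(random.gauss(0, 1))     # normal with mean 0 and standard deviation 1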
More general distributions
While NumPy will likely cover all your needs, remember that you can always compute the inverse cumulative distribution function of your distribution and feed it values drawn from a uniform distribution:
inverse_cdf(np.random.uniform())
For example, if NumPy did not provide the exponential distribution, you could do this; since the exponential cdf is F(x) = 1 - e^(-x), its inverse is F⁻¹(u) = -ln(1 - u):

def exponential():
    return -np.log(1 - np.random.uniform())
If you encounter distributions whose CDF is not easy to compute, then consider filippo's great answer.
I'm trying to implement nonparametric bootstrapping in Python. It requires taking a sample, building an empirical distribution function (edf) from it, and then generating a bunch of samples from this edf. How can I do it?
In scipy I found only how to make your own distribution function if you know the exact formula describing it, but I only have an edf.
You get the edf by sorting the samples:

import numpy as np

N = samples.size
ss = np.sort(samples)  # these are the x-values of the edf
# edf(x) is the fraction of samples <= x
edf = lambda x: np.searchsorted(ss, x, side='right') / N
However, if you only want to resample, then you can simply draw from your sample with equal probability and with replacement:
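With numpy this is a single call; np.random.choice draws with replacement:

import numpy as np

# one bootstrap resample of the same size as the original sample
boot = np.random.choice(samples, size=samples.size, replace=True)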
If the edf is too "steppy" for your liking, you can probably use some kind of interpolation to get a smooth distribution.
I have a power-law distribution of energies and I want to pick n random energies based on the distribution. I tried doing this manually using random numbers, but it is too inefficient for what I want to do. I'm wondering: is there a method in numpy (or another library) that works like numpy.random.normal, except that instead of a normal distribution, the distribution may be specified? So in my mind an example might look like this (similar to numpy.random.normal):
import numpy as np

# Energies from within which I want values drawn
eMin = 50.
eMax = 2500.

# Amount of energies to be drawn
n = 10000

photons = []
for i in range(n):
    # Method that I just made up which would work like random.normal,
    # i.e. return an energy on the distribution based on its probability,
    # but take a distribution other than a normal distribution
    photons.append(np.random.distro(eMin, eMax, lambda e: e**(-1.)))

print(photons)
Printing photons should give me a list of length 10000 populated by energies in this distribution. If I were to histogram it, the lower energies would have much greater bin counts.
I am not sure if such a method exists, but it seems like it should. I hope it is clear what I want to do.
EDIT:
I have seen numpy.random.power but my exponent is -1 so I don't think this will work.
Sampling from arbitrary PDFs well is actually quite hard. There are large and dense books just about how to efficiently and accurately sample from the standard families of distributions.
It looks like you could probably get by with a custom inversion method for the example that you gave.
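For the p(e) ∝ 1/e example this inversion has a closed form: the cdf on [eMin, eMax] is F(e) = ln(e/eMin) / ln(eMax/eMin), which inverts to F⁻¹(u) = eMin * (eMax/eMin)**u. A minimal sketch:

import numpy as np

eMin, eMax, n = 50., 2500., 10000
u = np.random.uniform(size=n)
photons = eMin * (eMax / eMin)**u  # samples with density proportional to 1/e
print(photons[:10])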
If you want to sample from an arbitrary distribution you need the inverse of the cumulative distribution function (not the pdf).
You then sample a probability uniformly from range [0,1] and feed this into the inverse of the cdf to get the corresponding value.
It is often not possible to obtain the cdf from the pdf analytically.
However, if you're happy to approximate the distribution, you could do so by calculating f(x) at regular intervals over its domain, then doing a cumsum over this vector to get an approximation of the cdf and from this approximate the inverse.
Rough code snippet:
import matplotlib.pyplot as plt
import numpy as np
import scipy.interpolate

def f(x):
    """
    substitute this function with your arbitrary distribution
    must be positive over domain
    """
    return 1/float(x)

# you should vary inputVals to cover the domain of f (for better accuracy you
# can be clever about the spacing of values as well). Here I space them
# logarithmically up to 1, then at regular intervals, but you could definitely
# do better
inputVals = np.hstack([10.**np.arange(-6, 0, 0.01), np.arange(1., 10000.)])

# everything else should just work
funcVals = np.array([f(x) for x in inputVals])
cdf = np.zeros(len(funcVals))
diff = np.diff(inputVals)  # widths of the integration intervals
for i in range(1, len(funcVals)):
    cdf[i] = cdf[i-1] + funcVals[i-1]*diff[i-1]
cdf /= cdf[-1]

# you could also improve the approximation by choosing an appropriate interpolator
inverseCdf = scipy.interpolate.interp1d(cdf, inputVals)

# grab 100k samples from the distribution
samples = [inverseCdf(x) for x in np.random.uniform(0, 1, size=100000)]
plt.hist(samples, bins=500)
plt.show()
Why don't you use eval and put the distribution in a string?

>>> import numpy
>>> cmd = "numpy.random.normal(500)"
>>> eval(cmd)

You can manipulate the string as you wish to set the distribution.
Let's assume I have some data I obtained empirically:
import numpy as np
from scipy import stats

size = 10000
x = 10 * stats.expon.rvs(size=size) + 0.2 * np.random.uniform(size=size)
It is exponentially distributed (with some noise) and I want to verify this using a chi-squared goodness of fit (GoF) test. What is the simplest way of doing this using the standard scientific libraries in Python (e.g. scipy or statsmodels) with the least amount of manual steps and assumptions?
I can fit a model with:

param = stats.expon.fit(x)
grid = np.linspace(0, 100, 10000)
plt.hist(x, density=True, color='white', hatch='/')
plt.plot(grid, stats.expon.pdf(grid, *param))
Calculating the Kolmogorov-Smirnov test is very elegant:
>>> stats.kstest(x, lambda x : stats.expon.cdf(x, *param))
(0.0061000000000000004, 0.85077099515985011)
However, I can't find a good way of calculating the chi-squared test.
There is a chi-squared GoF function in statsmodels, but it assumes a discrete distribution (and the exponential distribution is continuous).
The official scipy.stats tutorial only covers a case for a custom distribution and probabilities are built by fiddling with many expressions (npoints, npointsh, nbound, normbound), so it's not quite clear to me how to do it for other distributions. The chisquare examples assume the expected values and DoF are already obtained.
Also, I am not looking for a way to "manually" perform the test as was already discussed here, but would like to know how to apply one of the available library functions.
An approximate solution for equal-probability bins:

Estimate the parameters of the distribution.
Use the inverse cdf, ppf if it's a scipy.stats.distribution, to get the bin edges for a regular probability grid, e.g. distribution.ppf(np.linspace(0, 1, n_bins + 1), *args).
Then use np.histogram to count the number of observations in each bin.
Finally, use the chisquare test on the frequencies; a sketch of these steps follows below.
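A minimal sketch of those steps, reusing x from the question (n_bins = 20 is an arbitrary choice; ddof=2 accounts for the two estimated parameters):

import numpy as np
from scipy import stats

param = stats.expon.fit(x)  # step 1: estimate the parameters
n_bins = 20
edges = stats.expon.ppf(np.linspace(0, 1, n_bins + 1), *param)  # step 2: bin edges
observed, _ = np.histogram(x, bins=edges)  # step 3: observed counts per bin
expected = np.full(n_bins, len(x) / n_bins)  # equal-probability bins
stat, pvalue = stats.chisquare(observed, expected, ddof=2)  # step 4
print(stat, pvalue)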
An alternative would be to find the bin edges from the percentiles of the sorted data, and use the cdf to find the actual probabilities.
This is only approximate, since the theory for the chisquare test assumes that the parameters are estimated by maximum likelihood on the binned data. And I'm not sure whether the selection of bin edges based on the data affects the asymptotic distribution.
I haven't looked into this in a long time.
If an approximate solution is not good enough, then I would recommend that you ask the question on stats.stackexchange.
Why do you need to "verify" that it's exponential? Are you sure you need a statistical test? I can pretty much guarantee that it isn't ultimately exponential, and the test would be significant if you had enough data, making the logic of using the test rather forced. It may help you to read this CV thread: Is normality testing 'essentially useless'?, or my answer here: Testing for heteroscedasticity with many observations.
It is typically better to use a qq-plot and/or pp-plot (depending on whether you are concerned about the fit in the tails or the middle of the distribution; see my answer here: PP-plots vs. QQ-plots). Information on how to make qq-plots in Python SciPy can be found in this SO thread: Quantile-Quantile plot using SciPy.
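In scipy that could look like the sketch below; probplot fits a least-squares line through the points, so comparing the data x from the earlier question against the standard exponential is enough for a quick visual check:

import matplotlib.pyplot as plt
from scipy import stats

# qq-plot of x against exponential quantiles
stats.probplot(x, dist=stats.expon, plot=plt)
plt.show()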
I tried your problem with OpenTURNS.
The beginning is the same:
import numpy as np
from scipy import stats
size = 10000
x = 10 * stats.expon.rvs(size=size) + 0.2 * np.random.uniform(size=size)
If you suspect that your sample x is coming from an Exponential distribution, you can use ot.ExponentialFactory() to fit the parameters:
import openturns as ot
sample = ot.Sample([[p] for p in x])
distribution = ot.ExponentialFactory().build(sample)
As the factory needs an ot.Sample as input, I needed to format x and reshape it as 10,000 points of dimension 1.
Let's now assess this fit using the ChiSquared test:
result = ot.FittingTest.ChiSquared(sample, distribution, 0.01)
print('Exponential?', result.getBinaryQualityMeasure(), ', P-value=', result.getPValue())
>>> Exponential? True , P-value= 0.9275212544642293
Very good!
And of course, print(distribution) will give you the fitted parameters:
>>> Exponential(lambda = 0.0982391, gamma = 0.0274607)