Generating random numbers from the distribution of real data? - python

I have a set of real data and I want to use it to find a probability distribution, and then use its properties to generate some random points according to its pdf. A sample of my data set is as follows:
#Mag Weight
21.9786 3.6782
24.0305 6.1120
21.9544 4.2225
23.9383 5.1375
23.9352 4.6499
23.0261 5.1355
23.8682 5.9932
24.8052 4.1765
22.8976 5.1901
23.9679 4.3190
25.3362 4.1519
24.9079 4.2090
23.9851 5.1951
22.2094 5.1570
22.3452 5.6159
24.0953 6.2697
24.3901 6.9299
24.1789 4.0222
24.2648 4.4997
25.3931 3.3920
25.8406 3.9587
23.1427 6.9398
21.2985 7.7582
25.4807 3.1112
25.1935 5.0913
25.2136 4.0578
24.6990 3.9899
23.5299 4.6788
24.0880 7.0576
24.7931 5.7088
25.1860 3.4825
24.4757 5.8500
24.1398 4.9842
23.4947 4.4730
20.9806 5.2717
25.9470 3.4706
25.0324 3.3879
24.7186 3.8443
24.3350 4.9140
24.6395 5.0757
23.9181 4.9951
24.3599 4.1125
24.1766 5.4360
24.8378 4.9121
24.7362 4.4237
24.4119 6.1648
23.8215 5.9184
21.5394 5.1542
24.0081 4.2308
24.5665 4.6922
23.5827 5.4992
23.3876 6.3692
25.6872 4.5055
23.6629 5.4416
24.4821 4.7922
22.7522 5.9513
24.0640 5.8963
24.0361 5.6406
24.8687 4.5699
24.8795 4.3198
24.3486 4.5305
21.0720 9.5246
25.2960 3.0828
23.8204 5.8605
23.3732 5.1161
25.5097 2.9010
24.9206 4.0999
24.4140 4.9073
22.7495 4.5059
24.3394 3.5061
22.0560 5.5763
25.4404 5.4916
25.4795 4.4089
24.1772 3.8626
23.6042 4.7476
23.3537 6.4804
23.6842 4.3220
24.1895 3.6072
24.0328 4.3273
23.0243 5.6789
25.7042 4.4493
22.1983 6.1868
22.3661 5.9132
20.9426 4.8079
20.3806 10.1128
25.0105 4.4296
23.6648 6.6482
25.2780 4.4933
24.6870 4.4836
25.4565 4.0990
25.0415 3.9384
24.6098 4.6057
24.7796 4.2042
How could I do this? My first attempt was to fit a polynomial to the binned data and find the probability distribution of weights in each magnitude bin, but I reckon there might be a smarter way to do it. For instance, scipy.stats.rv_continuous could be used for sampling data from a given distribution, but I don't know how it works and there are not enough examples.
Update:
As I got a lot of comments suggesting KDE, I used scipy.stats.gaussian_kde and got the following results.
I am wondering whether it is a good probability distribution to represent the properties of my data. First, how could I test it, and second, is there a possibility to fit more than one Gaussian KDE with scipy.stats?

(1) If you have an idea about the distribution from which these data are sampled, then fit that distribution to the data (i.e., adjust parameters via maximum likelihood or whatever) and then sample that.
(2) For a more nearly empirical approach, select one datum at random (with equal probability), pretend it is the center of a little Gaussian bump, and sample from that bump. This is equivalent to constructing a kernel density estimate and sampling from it. You will have to pick a standard deviation for the bumps.
(3) For an entirely empirical approach, select one datum at random (with equal probability). This is equivalent to assuming the empirical distribution is the same as the actual distribution.
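A minimal sketch of options (2) and (3), assuming the Weight column has been loaded into a 1D NumPy array (the file name data.txt is hypothetical, and the KDE bandwidth is left at gaussian_kde's default):
import numpy as np
from scipy.stats import gaussian_kde

weights = np.loadtxt('data.txt', usecols=1)      # hypothetical file with the two columns shown above

# (2) Kernel density estimate: a Gaussian bump on each datum; bandwidth set by Scott's rule unless bw_method is given
kde = gaussian_kde(weights)
kde_samples = kde.resample(1000).ravel()

# (3) Purely empirical: redraw the observed data with replacement
empirical_samples = np.random.choice(weights, size=1000, replace=True)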

What is this data representing?
SciPy won't help you decide what type of distribution to use. That choice is motivated by where your data comes from. Once you do decide on a distribution (or you could try several), you can easily do something like a scipy.optimize.curve_fit on your data to find the optimal parameters to feed into the corresponding pdf in scipy.stats so that it matches your data. Then use a scipy continuous random variable to generate new points from your distribution.
Also, a polynomial is not a probability density function, since it is not normalized (its integral over all x diverges). Polynomial fits are not going to help you, as far as I know.
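A hedged sketch of that route, using a normal distribution purely as an illustrative candidate for the Weight column (the file name data.txt and the bin count are assumptions):
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

weights = np.loadtxt('data.txt', usecols=1)      # hypothetical file with the two columns shown above

# Bin the data and fit the candidate pdf to the bin densities
density, edges = np.histogram(weights, bins=20, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

pdf = lambda x, mu, sigma: stats.norm.pdf(x, mu, sigma)
(mu, sigma), _ = curve_fit(pdf, centers, density, p0=[weights.mean(), weights.std()])

# Generate new points from the fitted distribution
new_points = stats.norm.rvs(mu, sigma, size=1000)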

Did you try creating a histogram of the data? That will give you a sense of the shape of the density function, at which point you can try fitting the data to a known distribution. Once you have a fitted distribution, you can generate pseudo-random variates as a 'sanity check' and perform a nonparametric test such as the Kolmogorov-Smirnov.
So, I would take the following steps (a rough sketch follows the list):
Create a histogram
Determine characteristics of the data (summary stats, etc.).
Try to fit a parametric distribution.
Try to fit a nonparametric distribution.
Conduct hypothesis tests to rate the fit.
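A rough sketch of steps 1, 3 and 5, again using a normal distribution purely as an illustrative parametric candidate and a hypothetical data.txt holding the columns above:
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

weights = np.loadtxt('data.txt', usecols=1)      # hypothetical file name

# 1. Histogram to see the shape of the density
plt.hist(weights, bins=20, density=True)

# 3. Fit a parametric candidate by maximum likelihood
mu, sigma = stats.norm.fit(weights)
grid = np.linspace(weights.min(), weights.max(), 200)
plt.plot(grid, stats.norm.pdf(grid, mu, sigma))
plt.show()

# 5. Kolmogorov-Smirnov test of the fitted distribution
D, p = stats.kstest(weights, 'norm', args=(mu, sigma))
print(D, p)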

Related

Fitting a theoretical distribution to a sampled empirical CDF with scipy stats

I have a plot of the CDF distribution of packet losses. I thus do not have the original data or the CDF model itself, but samples from the CDF curve. (The data is extracted from plots published in the literature.)
I want to find which distribution and with what parameters offers the closest fit to the CDF samples.
I've seen that SciPy stats distributions offer a fit(data) method, but all examples apply to raw data points, with the PDF/CDF subsequently drawn from the fitted parameters. Using fit with my CDF samples does not give sensible results.
Am I right in assuming that fit() cannot be directly applied to data samples from an empirical CDF?
What alternatives could I use to find a matching known distribution?
I'm not sure exactly what you're trying to do. When you say you have a CDF, what does that mean? Do you have some data points, or the function itself? It would be helpful if you could post more information or some sample data.
If you have some data points and know the distribution, it's not hard to do using scipy. If you don't know the distribution, you could just iterate over all distributions until you find one which works reasonably well.
We can define functions of the form required for scipy.optimize.curve_fit. I.e., the first argument should be x, and then the other arguments are parameters.
I use this function to generate some test data based on the CDF of a normal random variable with a bit of added noise.
import numpy as np
import scipy.stats
import scipy.optimize

n = 100
x = np.linspace(-4, 4, n)
f = lambda x, mu, sigma: scipy.stats.norm(mu, sigma).cdf(x)
data = f(x, 0.2, 1) + 0.05*np.random.randn(n)
Now, use curve_fit to find parameters.
mu,sigma = scipy.optimize.curve_fit(f,x,data)[0]
This gives output
>>> mu, sigma
(0.1828320963531838, 0.9452044983927278)
We can plot the original CDF (orange), the noisy data, and the fitted CDF (blue) and observe that it works pretty well.
Note that curve_fit can take some additional parameters, and that the output gives additional information about how good of a fit the function is.
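For reference, one way to produce that comparison plot, continuing from the x, f, data, mu and sigma defined in the snippets above:
import matplotlib.pyplot as plt

plt.plot(x, f(x, 0.2, 1), color='orange', label='original CDF')
plt.scatter(x, data, s=10, label='noisy data')
plt.plot(x, f(x, mu, sigma), color='blue', label='fitted CDF')
plt.legend()
plt.show()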
@tch Thank you for the answer. I read up on the technique and successfully applied it. I wanted to apply the fit to all continuous distributions supported by scipy.stats, so I ended up doing the following:
fitted = []
failed = []
for d in dist_list:
    dist_name = d[0]                       # fetch the distribution name
    dist_object = getattr(ss, dist_name)   # fetch the distribution object
    param_default = d[1]                   # fetch the default distribution parameters
    # For distributions with only location and scale, fall back to the defaults loc=0 and scale=1
    if not param_default:
        param_default = (0, 1)
    # Compute the parameters of the fitted distribution
    try:
        param, cov = curve_fit(dist_object.cdf, data_in, data_out, p0=param_default, method='trf')
        # Only keep distributions which do not result in a zero covariance, as those are not a valid fit
        if np.any(cov):
            fitted.append((dist_name, param))
    # Capture the distributions which cannot be fitted (for a variety of reasons)
    except (NotImplementedError, RuntimeError) as e:
        failed.append((dist_name, e))
In the above, the empirical CDF is captured in data_out, which holds the sampled CDF values for a range of data_in data points. The list dist_list holds, for each distribution in scipy.stats.rv_continuous, the name of the distribution as its first element and a list of the default parameters as its second element. The default parameters I extract from scipy.stats._distr_params.
Some distributions cannot be fitted and raise an error; I keep those in the failed list.
Finally, I end up with a list fitted which holds, for each successfully fitted distribution, the estimated parameters.
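For completeness, one possible way to build such a dist_list from scipy's internal parameter table (scipy.stats._distr_params is a private module, so this may change between versions):
from scipy.stats._distr_params import distcont

dist_list = [(name, params) for name, params in distcont]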

Pareto distribution and whether a chart conforms to it

I have a figure as shown below, and I want to know whether it conforms to the Pareto distribution or not. It's a cumulative plot.
I also want to find the point on the x axis which marks the 80-20 rule, i.e. the x-axis point which bifurcates the plot into the 20 percent holding 80 percent of the wealth.
Also, I'm really confused by the scipy.stats Pareto function; it would be great if someone could give an intuitive explanation of it, since the documentation is pretty confusing.
scipy.stats.pareto provides random draws from the Pareto distribution.
To know whether your data conform to a Pareto distribution, you should perform a Kolmogorov-Smirnov test.
Draw a random sample from the Pareto distribution using pareto.rvs(shape, size=1000), where shape is the estimated shape parameter of your Pareto distribution, and use scipy.stats.kstest to perform the test:
import scipy.stats

pareto_smp = scipy.stats.pareto.rvs(shape, size=1000)
D, p_value = scipy.stats.kstest(pareto_smp, values)
Nobody can simply determine whether an observed dataset follows a particular distribution. Based on your situation, what you need is:
fit an empirical distribution using:
statsmodels.distributions.empirical_distribution.ECDF
then compare this (nonparametrically) with your data to see whether the null hypothesis can be rejected
for the 20/80 rule:
rescale your X to the range [0, 1] and simply pick 0.2 on the x axis (see the sketch below)
source: https://arxiv.org/pdf/1306.0100.pdf
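A rough sketch of the ECDF idea above; the Pareto sample here is only a stand-in for the raw values behind your cumulative plot:
import numpy as np
from statsmodels.distributions.empirical_distribution import ECDF

wealth = np.sort(np.random.pareto(1.16, size=10000))   # hypothetical data

ecdf = ECDF(wealth)                   # empirical CDF of the observations
cutoff = np.quantile(wealth, 0.8)     # x-axis point below which 80% of observations lie
print(cutoff, ecdf(cutoff))           # ecdf(cutoff) is ~0.8 by construction

# Share of total wealth held by the top 20% of observations (the "80" in 80/20)
top_20_share = wealth[int(0.8 * len(wealth)):].sum() / wealth.sum()
print(top_20_share)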

Fit gaussians (or other distributions) to my data using python

I have a database of features, a 2D np.array (2000 samples, each containing 100 features, i.e. 2000 x 100). I want to fit Gaussian distributions to my database using python. My code is the following:
data = load_my_data() # loads a np.array with size 2000x200
clf = mixture.GaussianMixture(n_components= 50, covariance_type='full')
clf.fit(data)
I am not sure about the parameters, for example covariance_type, and how I can investigate whether the fit occurred successfully or not.
EDIT: I debugged the code to investigate what is happening with clf.means_ and apparently it produced a matrix of n_components x size_of_features (50 x 20). Is there a way that I can check that the fitting was successful, or to plot the data? What are the alternatives to Gaussian mixtures (mixtures of exponentials, for example; I cannot find any available implementation)?
I think you are using the sklearn package.
Once you have fit, then type
print(clf.means_)
If it produces output, the data has been fitted; if it raises errors, it has not been fitted.
Hope this helps.
You can do dimensionality reduction using PCA to 3D space (let's say) and then plot means and data.
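A hedged sketch of both checks, assuming the data and clf from the question: sklearn's GaussianMixture exposes a converged_ flag and bic/aic scores, and PCA can project the data and the fitted means into 2D for plotting.
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

print(clf.converged_, clf.n_iter_)     # did EM converge, and in how many iterations
print(clf.bic(data), clf.aic(data))    # lower is better when comparing models

# Project the data and the component means into 2D and plot them together
pca = PCA(n_components=2)
data_2d = pca.fit_transform(data)
means_2d = pca.transform(clf.means_)

plt.scatter(data_2d[:, 0], data_2d[:, 1], s=5, alpha=0.3)
plt.scatter(means_2d[:, 0], means_2d[:, 1], color='red', marker='x')
plt.show()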
It is always preferable to choose a reduced set of candidates before trying to identify the distribution (in other words, use a Cullen & Frey plot to reject the unlikely candidates), and then run goodness-of-fit tests and select the best result.
You can just create a list of all available distributions in scipy. An example with two distributions and random data:
import numpy as np
import scipy.stats as st

data = np.random.random(10000)

# Specify all candidate distributions here
distributions = [st.laplace, st.norm]

mles = []
for distribution in distributions:
    pars = distribution.fit(data)
    mle = distribution.nnlf(pars, data)   # negative log-likelihood at the fitted parameters
    mles.append(mle)

results = [(distribution.name, mle) for distribution, mle in zip(distributions, mles)]
best_fit = sorted(zip(distributions, mles), key=lambda d: d[1])[0]
print('Best fit reached using {}, MLE value: {}'.format(best_fit[0].name, best_fit[1]))
If I understand correctly, you may want to do a regression of two different distributions rather than fit them to an arithmetic curve. In that case, you may be interested in plotting one against the other, performing a linear (or polynomial) regression, and checking the coefficients.
Linear regression of two distributions can tell you whether they are linearly dependent or not.
Linear Regression using Scipy documentation
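One possible reading of this suggestion, with sample_a and sample_b as placeholders for your two data sets: regress the sorted values (quantiles) of one sample against the other and inspect the coefficients.
import numpy as np
from scipy import stats

sample_a = np.random.laplace(size=2000)    # placeholder data
sample_b = np.random.normal(size=2000)     # placeholder data

# Sort both samples so that we compare quantile against quantile
qa, qb = np.sort(sample_a), np.sort(sample_b)
slope, intercept, r_value, p_value, std_err = stats.linregress(qa, qb)
print(slope, intercept, r_value)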

How to infer the parameters of a 1D gaussian distribution using PyMC?

I'm pretty new to PyMC and I'm trying desperately to infer the parameters of an underlying Gaussian distribution that best fits a distribution of observed data that I have, not with a pre-built normal distribution, but with a more general method using histograms of the simulated data to build pdfs. But so far I can't get my code to converge, and I don't know why...
So here's a summary of what my code does.
I have a dataset of 5000 points distributed normally (mean=5,sigma=2). I want to retrieve these values (mean, sigma) with a bayesian inference (using MCMC).
I have a data simulator that generates for each iteration of the MCMC process a normal distribution of 5000 points with a random mean and sigma (uniform prior)
From the simulated distribution of points I compute a numpy histogram normed to 1 representing the pdf of the distribution (Nbins=int(sqrt(5000))). I then compute the mean and standard deviation of this distribution.
What I want is the set of parameters that will allow me to build a simulated distribution that best fits the observed data.
I use the most general definition of the log likelihood, that is:
ln L(θ|x) = Σ_i ln f(x_i|θ) (the likelihood function being defined as the probability distribution of the observed data given the parameters θ)
Then I linearly interpolate the histogram values at every bin center. Therefore I have a continuous pdf for the simulated distribution. So here f is the interpolated function I made from the histogram of the simulation.
I sum the log(f(x_i)) contributions for every (real) data point and return the log-likelihood value at the end.
But some (real) data points are so far off the mean of the simulated distribution that f(x_i) = 0. For these points the code raises a math domain error (reminder: log(0) = -inf). So I artificially set the pdf to a small epsilon at the points where it would otherwise be 0.
But here's the thing: the log-likelihood is not computed for every iteration; in fact, in the present architecture of my code, it is not computed at all, and that's why the MCMC process is not converging. But I don't know why.
It turns out that building custom likelihood functions does not seem to be a very common approach in the PyMC community, where people usually prefer to use pre-built distributions. I'm having trouble finding help on these matters, so ideas and suggestions will be deeply appreciated :)
import numpy as np
import matplotlib.pyplot as plt
import math
import pymc as pm
from scipy.interpolate import InterpolatedUnivariateSpline

# Generate the data
np.random.seed(0)
N = 5000
true_mean = 5.
true_sigma = 2.
data = np.random.normal(true_mean, true_sigma, N)

# Priors
m = pm.Uniform('m', lower=4, upper=6)
s = pm.Uniform('s', lower=1, upper=3)

@pm.deterministic
def data_simulator(mean_input=m, sig_input=s):
    out = np.empty(4, dtype=object)
    datasim = np.random.normal(mean_input, sig_input, N)
    hist, bin_edges = np.histogram(datasim, bins=int(math.sqrt(len(datasim))), density=True)
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    m_sim = np.mean(datasim)
    s_sim = np.std(datasim)
    out[0] = m_sim
    out[1] = s_sim
    out[2] = bin_centers
    out[3] = hist
    return out

@pm.stochastic(observed=True)
def loglikelihood(value=data, mean_output=data_simulator.value[0], sigma_output=data_simulator.value[1],
                  bin_centers_sim=data_simulator.value[2], hist_sim=data_simulator.value[3]):
    # Linear interpolation of the simulated histogram; ext=0 returns extrapolated values
    interp_sim = InterpolatedUnivariateSpline(bin_centers_sim, hist_sim, k=1, ext=0)
    logp = np.sum(np.log(interp_sim(value)))
    print('logp =', logp)
    return logp

model = pm.Model({"mean": m, "sigma": s, "data_simulator": data_simulator, "loglikelihood": loglikelihood})

# Run the MCMC sampler
mcmc = pm.MCMC(model)
mcmc.sample(iter=10000, burn=5000)

# Plot the marginals
pm.Matplot.plot(mcmc)

How to perform a chi-squared goodness of fit test using scientific libraries in Python?

Let's assume I have some data I obtained empirically:
import numpy as np
from scipy import stats

size = 10000
x = 10 * stats.expon.rvs(size=size) + 0.2 * np.random.uniform(size=size)
It is exponentially distributed (with some noise) and I want to verify this using a chi-squared goodness of fit (GoF) test. What is the simplest way of doing this using the standard scientific libraries in Python (e.g. scipy or statsmodels) with the least amount of manual steps and assumptions?
I can fit a model with:
import matplotlib.pyplot as plt

param = stats.expon.fit(x)
grid = np.linspace(0, 100, 10000)
plt.hist(x, density=True, color='white', hatch='/')
plt.plot(grid, stats.expon.pdf(grid, *param))
The Kolmogorov-Smirnov test is very elegant to calculate:
>>> stats.kstest(x, lambda x : stats.expon.cdf(x, *param))
(0.0061000000000000004, 0.85077099515985011)
However, I can't find a good way of calculating the chi-squared test.
There is a chi-squared GoF function in statsmodels, but it assumes a discrete distribution (and the exponential distribution is continuous).
The official scipy.stats tutorial only covers a case for a custom distribution and probabilities are built by fiddling with many expressions (npoints, npointsh, nbound, normbound), so it's not quite clear to me how to do it for other distributions. The chisquare examples assume the expected values and DoF are already obtained.
Also, I am not looking for a way to "manually" perform the test as was already discussed here, but would like to know how to apply one of the available library functions.
An approximate solution for equal probability bins:
Estimate the parameters of the distribution
Use the inverse cdf (ppf, if it's a scipy.stats distribution) to get the bin edges for a regular probability grid, e.g. distribution.ppf(np.linspace(0, 1, n_bins + 1), *args).
Then use np.histogram to count the number of observations in each bin.
Then use the chisquare test on the frequencies (a rough sketch follows this answer).
An alternative would be to find the bin edges from the percentiles of the sorted data, and use the cdf to find the actual probabilities.
This is only approximate, since the theory for the chisquare test assumes that the parameters are estimated by maximum likelihood on the binned data. And I'm not sure whether the selection of binedges based on the data affects the asymptotic distribution.
I haven't looked into this in a long time.
If an approximate solution is not good enough, then I would recommend that you ask the question on stats.stackexchange.
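A minimal sketch of the equal-probability-bin recipe above, reusing the exponential sample from the question; the bin count and the ddof correction for the two estimated parameters are choices, not prescriptions.
import numpy as np
from scipy import stats

size = 10000
x = 10 * stats.expon.rvs(size=size) + 0.2 * np.random.uniform(size=size)

params = stats.expon.fit(x)
n_bins = 20

# Bin edges on a regular probability grid via the inverse CDF (ppf)
edges = stats.expon.ppf(np.linspace(0, 1, n_bins + 1), *params)
edges[-1] = max(edges[-2], x.max()) + 1.0   # replace the infinite upper edge

observed, _ = np.histogram(x, bins=edges)
expected = np.full(n_bins, size / n_bins)   # equal-probability bins

# ddof accounts for the two estimated parameters (loc and scale)
chi2, p = stats.chisquare(observed, expected, ddof=len(params))
print(chi2, p)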
Why do you need to "verify" that it's exponential? Are you sure you need a statistical test? I can pretty much guarantee that it isn't ultimately exponential, and the test would be significant if you had enough data, making the logic of using the test rather forced. It may help you to read this CV thread: Is normality testing 'essentially useless'?, or my answer here: Testing for heteroscedasticity with many observations.
It is typically better to use a qq-plot and/or pp-plot (depending on whether you are concerned about the fit in the tails or in the middle of the distribution; see my answer here: PP-plots vs. QQ-plots). Information on how to make qq-plots in Python with SciPy can be found in this SO thread: Quantile-Quantile plot using SciPy.
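For what it's worth, a small sketch of a Q-Q plot against the fitted exponential, reusing the sample and fitted parameters from the question:
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

size = 10000
x = 10 * stats.expon.rvs(size=size) + 0.2 * np.random.uniform(size=size)
param = stats.expon.fit(x)

# Theoretical quantiles of the fitted exponential vs. ordered sample values
stats.probplot(x, dist=stats.expon, sparams=param, plot=plt)
plt.show()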
I tried your problem with OpenTURNS.
The beginning is the same:
import numpy as np
from scipy import stats
size = 10000
x = 10 * stats.expon.rvs(size=size) + 0.2 * np.random.uniform(size=size)
If you suspect that your sample x is coming from an Exponential distribution, you can use ot.ExponentialFactory() to fit the parameters:
import openturns as ot
sample = ot.Sample([[p] for p in x])
distribution = ot.ExponentialFactory().build(sample)
As the factory needs an ot.Sample() as input, I needed to format x and reshape it as 10,000 points of dimension 1.
Let's now assess this fit using the ChiSquared test:
result = ot.FittingTest.ChiSquared(sample, distribution, 0.01)
print('Exponential?', result.getBinaryQualityMeasure(), ', P-value=', result.getPValue())
>>> Exponential? True , P-value= 0.9275212544642293
Very good!
And of course, print(distribution) will give you the fitted parameters:
>>> Exponential(lambda = 0.0982391, gamma = 0.0274607)
