I have a dataset that I would like to fit to a known probability distribution. The intention is to use the fitted PDF in a data generator, so that I can sample data from the known (fitted) PDF. The data will be used for simulation purposes. At the moment I am simply sampling from a normal distribution, which is inconsistent with the real data, so the simulation results are not accurate.
I first wanted to use the method from this question:
Fitting empirical distribution to theoretical ones with Scipy (Python)?
My first thought was to fit it to a Weibull distribution, but the data is actually multimodal (picture attached). So I guess I need to combine multiple distributions and then fit the data to the resulting mixture, is that right? Maybe combine a Gaussian and a Weibull distribution?
How can I use the SciPy fit() function with a mixture/multimodal distribution?
Also, I would like to do this in Python (i.e. SciPy/NumPy/Matplotlib), as the data generator is written in Python.
Many thanks!
I would suggest kernel density estimation (KDE). It gives you a solution as a mixture of PDFs (one kernel centered on each data point), which handles multimodal data naturally.
SciPy has only a Gaussian kernel (which looks fine for your specific histogram), but you can find other kernels in the statsmodels or scikit-learn packages.
For reference, those are the relevant functions:
from sklearn.neighbors import KernelDensity                          # scikit-learn: several kernels, supports sampling
from scipy.stats import gaussian_kde                                  # SciPy: Gaussian kernel only
from statsmodels.nonparametric.kde import KDEUnivariate               # statsmodels: univariate, FFT-based by default
from statsmodels.nonparametric.kernel_density import KDEMultivariate  # statsmodels: multivariate, mixed variable types
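For instance, a minimal sketch with SciPy's gaussian_kde (the bimodal array below is a placeholder for your data):

import numpy as np
from scipy.stats import gaussian_kde

# Toy bimodal data standing in for the real observations
data = np.concatenate([np.random.normal(0.0, 1.0, 500),
                       np.random.normal(5.0, 0.5, 300)])

kde = gaussian_kde(data)              # bandwidth chosen by Scott's rule by default
new_samples = kde.resample(1000)[0]   # resample() returns shape (d, n); take the 1-D row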
A great resource for KDE in Python is here.
Related
I am doing some work that requires fitting a Gaussian to a cluster of points which is expected to be normally distributed.
I have data that looks like this; you can see the small, tightly grouped cluster of points on the left:
I zoom in around the cluster, and use scikit-learn KDE to get a density distribution (with Gaussian kernel), which looks like this:
Then I fit the Gaussian and it turns out to have far too small sigma:
centroid_x: -36.3204357
centroid_y: -12.8734763
sigma_x: 0.17916588
sigma_y: 0.07428976
From inspection of the density distribution, the x and y sigma should be more on the order of ~1 rather than ~0.1. Does anyone know why this behaviour might be occurring? I don't believe there are significant errors in my code or method; this technique has worked well on other data sets, for example:
Is there a way in Python to generate random data based on the distribution of already existing data?
Here are the statistical parameters of my dataset:
Data
count 209.000000
mean 1.280144
std 0.374602
min 0.880000
25% 1.060000
50% 1.150000
75% 1.400000
max 4.140000
As it is not a normal distribution, it is not possible to do this with np.random.normal. Any ideas?
Thank you.
Edit: Performing KDE:
import seaborn as sns
from sklearn.neighbors import KernelDensity

# Gaussian KDE fitted on the 'y' column (reshaped to a 2-D column vector)
kde = KernelDensity(kernel='gaussian', bandwidth=0.525566).fit(data['y'].to_numpy().reshape(-1, 1))
# Draw 2400 new samples from the fitted density and plot them
sns.distplot(kde.sample(2400))
In general, real-world data doesn't exactly follow a "nice" distribution like the normal or Weibull distributions.
Similarly to machine learning, there are generally two steps to sampling from a distribution of data points:
Fit a data model to the data.
Then, predict a new data point based on that model, with the help of randomness.
There are several ways to estimate the distribution of data and sample from that estimate:
Kernel density estimation.
Gaussian mixture models (a short sketch follows this list).
Histograms.
Regression models.
Other machine learning models.
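For illustration, here is a minimal sketch of the fit-then-sample pattern using a Gaussian mixture model from scikit-learn (the placeholder data and the choice of n_components are assumptions, not values derived from your data):

import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder data; GaussianMixture expects a 2-D array of shape (n_samples, n_features)
X = np.concatenate([np.random.normal(1.0, 0.3, 600),
                    np.random.normal(3.0, 0.8, 400)]).reshape(-1, 1)

# Step 1: fit a data model (match n_components to the number of modes you see)
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Step 2: predict (draw) new data points from the fitted model
new_samples, _ = gmm.sample(500)   # sample() also returns the component labels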
In addition, methods such as maximum likelihood estimation make it possible to fit a known distribution (such as the normal distribution) to data, but the estimated distribution is generally a rougher approximation of the data than what kernel density estimation or other machine learning models provide.
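As a sketch of that parametric route (the Weibull choice and the placeholder data are just illustrations, not a recommendation for your data):

import numpy as np
import scipy.stats as ss

data = np.random.weibull(1.5, 1000) * 2.0   # placeholder observations

# Maximum likelihood fit of a Weibull distribution with the location fixed at 0
shape, loc, scale = ss.weibull_min.fit(data, floc=0)

# Draw new samples from the fitted parametric distribution
new_samples = ss.weibull_min.rvs(shape, loc=loc, scale=scale, size=500)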
See also my section "Random Numbers from a Distribution of Data Points".
Scikit-learn offers a utility make_blobs that generates Gaussian blobs. Is there any advantage to using this over, say, scipy's multivariate_normal?
As the documentation states, scikit-learn's make_blobs generates a number of isotropic Gaussian blobs. It can be viewed as a helper function that saves you a little code, which is nice if you have to demonstrate or test a clustering algorithm and want to avoid too much boilerplate.
If you choose to use SciPy's multivariate_normal, you can also control each cluster's covariance matrix, which could be useful in some cases.
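A minimal sketch of both approaches (the centers, covariance and sample counts are arbitrary illustrations):

from sklearn.datasets import make_blobs
from scipy.stats import multivariate_normal

# make_blobs: isotropic clusters in one call
X, y = make_blobs(n_samples=300, centers=[[0, 0], [5, 5]], cluster_std=1.0, random_state=0)

# multivariate_normal: full control over a cluster's covariance matrix
cov = [[1.0, 0.8],
       [0.8, 1.0]]   # correlated, anisotropic cluster
cluster = multivariate_normal(mean=[5, 5], cov=cov).rvs(size=150, random_state=0)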
Update: Weighted samples are now supported by scipy.stats.gaussian_kde. See here and here for details.
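With a recent SciPy this looks roughly like the following (the values and weights arrays are placeholders):

import numpy as np
from scipy.stats import gaussian_kde

values = np.random.normal(size=200)           # placeholder samples
weights = np.random.uniform(0.5, 2.0, 200)    # placeholder non-negative weights

kde = gaussian_kde(values, weights=weights)   # weighted KDE, available since SciPy 1.2
density_at_zero = kde(0.0)                    # evaluate the estimated density at a point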
It is currently not possible to use scipy.stats.gaussian_kde to estimate the density of a random variable based on weighted samples. What methods are available to estimate densities of continuous random variables based on weighted samples?
Neither sklearn.neighbors.KernelDensity nor statsmodels.nonparametric seem to support weighted samples. I modified scipy.stats.gaussian_kde to allow for heterogeneous sampling weights and thought the results might be useful for others. An example is shown below.
An ipython notebook can be found here: http://nbviewer.ipython.org/gist/tillahoffmann/f844bce2ec264c1c8cb5
Implementation details
The weighted arithmetic mean is

$$\bar{\mathbf{x}} = \frac{\sum_{i=1}^{N} w_i \, \mathbf{x}_i}{\sum_{i=1}^{N} w_i}.$$

The unbiased data covariance matrix is then given by

$$\hat{\Sigma} = \frac{\sum_{i=1}^{N} w_i \, (\mathbf{x}_i - \bar{\mathbf{x}})(\mathbf{x}_i - \bar{\mathbf{x}})^{\top}}{\sum_{i=1}^{N} w_i - \frac{\sum_{i=1}^{N} w_i^2}{\sum_{i=1}^{N} w_i}}.$$
The bandwidth can be chosen by the Scott or Silverman rules as in SciPy. However, the number of samples used to calculate the bandwidth is Kish's approximation of the effective sample size, $n_{\mathrm{eff}} = \left(\sum_i w_i\right)^2 / \sum_i w_i^2$.
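For reference, a small NumPy sketch of these quantities (the sample and weight arrays are placeholders):

import numpy as np

x = np.random.normal(size=(200, 2))         # placeholder samples, shape (n, d)
w = np.random.uniform(0.5, 2.0, size=200)   # placeholder positive weights

# Weighted arithmetic mean
mean = np.average(x, axis=0, weights=w)

# Unbiased weighted covariance matrix
d = x - mean
cov = (w[:, None] * d).T @ d / (w.sum() - (w ** 2).sum() / w.sum())

# Kish's effective sample size
n_eff = w.sum() ** 2 / (w ** 2).sum()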
For univariate distributions you can use KDEUnivariate from statsmodels. It is not well documented, but the fit method accepts a weights argument; in that case you cannot use the FFT. Here is an example:
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.nonparametric.kde import KDEUnivariate

# Unweighted KDE: the value 10. is simply repeated three times
kde1 = KDEUnivariate(np.array([10., 10., 10., 5.]))
kde1.fit(bw=0.5)
plt.plot(kde1.support, [kde1.evaluate(xi) for xi in kde1.support], 'x-')

# Weighted KDE: the same 3:1 ratio expressed through weights; the FFT must be disabled
kde1 = KDEUnivariate(np.array([10., 5.]))
kde1.fit(weights=np.array([3., 1.]), bw=0.5, fft=False)
plt.plot(kde1.support, [kde1.evaluate(xi) for xi in kde1.support], 'o-')
which produces this figure:
Check out the packages PyQT-Fit and statistics for Python. They seem to have kernel density estimation with weighted observations.
I am trying to fit a gamma distribution to my data points, and I can do that using the code below.
import scipy.stats as ss
import numpy as np

dataPoints = np.arange(0, 1000, 0.2)

# Fit a gamma distribution with the location fixed at 0 (equivalent to ss.rv_continuous.fit(ss.gamma, ...))
fit_alpha, fit_loc, fit_beta = ss.gamma.fit(dataPoints, floc=0)
I want to reconstruct a larger distribution using many such small gamma distributions (the larger distribution is irrelevant to the question; it only justifies why I am trying to fit a CDF as opposed to a PDF).
To achieve that, I want to fit a cumulative distribution function (CDF), as opposed to a PDF, to my smaller distribution data. More precisely, I want to fit the data to only a part of the cumulative distribution.
For example, I want to fit the data only until the cumulative probability function (with a certain scale and shape) reaches 0.6.
Any thoughts on using fit() for this purpose?
I understand that you are trying to piecewise reconstruct your CDF with several small gamma distributions, each with a different scale and shape parameter capturing the 'local' regions of your distribution.
That probably makes sense if your empirical distribution is multimodal / difficult to summarize with one 'global' parametric distribution.
I don't know if you have specific reasons for fitting several gamma distributions in particular, but if your goal is to fit a distribution that is relatively smooth and captures your empirical CDF well, perhaps you can take a look at kernel density estimation. It is essentially a non-parametric way to fit a distribution to your data.
http://scikit-learn.org/stable/modules/density.html
http://en.wikipedia.org/wiki/Kernel_density_estimation
For example, you can try a Gaussian kernel and change the bandwidth parameter to control how smooth your fit is. A bandwidth that is too small leads to an unsmooth ("overfitted") result [high variance, low bias]. A bandwidth that is too large results in a very smooth estimate, but with high bias.
from sklearn.neighbors import KernelDensity

# KernelDensity expects a 2-D array of shape (n_samples, n_features)
kde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(dataPoints.reshape(-1, 1))
A good way to select a bandwidth parameter that balances the bias-variance tradeoff is to use cross-validation. Essentially, the high-level idea is that you partition your data, fit on the training set, and 'validate' on the test set; this prevents overfitting the data.
Fortunately, scikit-learn also provides a nice example of choosing the best bandwidth of a Gaussian kernel using cross-validation, which you can borrow some code from:
http://scikit-learn.org/stable/auto_examples/neighbors/plot_digits_kde_sampling.html
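A minimal sketch of that idea (the bandwidth grid and the fold count are arbitrary choices, not tuned values):

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

dataPoints = np.arange(0, 1000, 0.2)   # as in the question
X = dataPoints.reshape(-1, 1)

# Search a log-spaced grid of bandwidths with 5-fold cross-validation
grid = GridSearchCV(KernelDensity(kernel='gaussian'),
                    {'bandwidth': np.logspace(-1, 1, 20)},
                    cv=5)
grid.fit(X)

best_kde = grid.best_estimator_
print("best bandwidth:", best_kde.bandwidth)
new_samples = best_kde.sample(1000)    # draw new points from the cross-validated fit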
Hope this helps!