How can I fit a Gaussian curve in Python?

I'm given an array, and when I plot it I get a Gaussian shape with some noise. I want to fit that Gaussian. This is what I already have, but when I plot it I do not get a fitted Gaussian; instead I just get a straight line. I've tried this many different ways and I just can't figure it out.
random_sample=norm.rvs(h)
parameters = norm.fit(h)
fitted_pdf = norm.pdf(f, loc = parameters[0], scale = parameters[1])
normal_pdf = norm.pdf(f)
plt.plot(f,fitted_pdf,"green")
plt.plot(f, normal_pdf, "red")
plt.plot(f,h)
plt.show()

You can use fit from scipy.stats.norm as follows:
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt
data = np.random.normal(loc=5.0, scale=2.0, size=1000)
mean, std = norm.fit(data)
norm.fit tries to fit the parameters of a normal distribution based on the data. And indeed in the example above mean is approximately 5 and std is approximately 2.
In order to plot it, you can do:
plt.hist(data, bins=30, density=True)
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
y = norm.pdf(x, mean, std)
plt.plot(x, y)
plt.show()
The bars show the histogram of your data, and the curve is the Gaussian PDF evaluated with the fitted parameters.
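If it helps to see what norm.fit is doing under the hood: for a normal distribution, the maximum-likelihood estimates returned by fit are simply the sample mean and the (ddof=0) standard deviation, so you can sanity-check the result (a small sketch reusing data, mean and std from above):
print(mean, np.mean(data))          # both are the sample mean, ~5
print(std, np.std(data, ddof=0))    # both are the population std deviation, ~2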

There are many ways to fit a Gaussian function to a data set. I often use astropy when fitting data, which is why I wanted to add this as an additional answer.
I use a data set that simulates a Gaussian with some noise:
import numpy as np
import matplotlib.pyplot as plt
from astropy import modeling
m = modeling.models.Gaussian1D(amplitude=10, mean=30, stddev=5)
x = np.linspace(0, 100, 2000)
data = m(x)
data = data + np.sqrt(data) * np.random.random(x.size) - 0.5
data -= data.min()
plt.plot(x, data)
Fitting it is then quite simple: you specify the model you want to fit to the data and a fitter:
fitter = modeling.fitting.LevMarLSQFitter()
model = modeling.models.Gaussian1D() # depending on the data you need to give some initial values
fitted_model = fitter(model, x, data)
And plotted:
plt.plot(x, data)
plt.plot(x, fitted_model(x))
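If the default starting values do not converge for your data (see the comment above), one option is to seed the model with rough guesses taken directly from the data. This is just a sketch, reusing x and data from above; the particular guesses are assumptions you may need to adjust:
model = modeling.models.Gaussian1D(
    amplitude=data.max(),               # rough peak height
    mean=x[np.argmax(data)],            # location of the peak
    stddev=(x.max() - x.min()) / 10)    # rough width guess
fitted_model = fitter(model, x, data)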
However, you can also use just SciPy, but then you have to define the model function yourself:
from scipy import optimize
def gaussian(x, amplitude, mean, stddev):
    # standard Gaussian: peak height amplitude, centred at mean, width stddev
    return amplitude * np.exp(-((x - mean) ** 2) / (2 * stddev ** 2))
popt, _ = optimize.curve_fit(gaussian, x, data)
This returns the optimal parameters for the fit, and you can plot it like this:
plt.plot(x, data)
plt.plot(x, gaussian(x, *popt))
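As with the astropy fitter, curve_fit can fail or settle on a poor solution without reasonable starting values; passing rough guesses via p0 usually helps. A small sketch, reusing x, data and gaussian from above (the particular guesses are assumptions):
p0 = [data.max(), x[np.argmax(data)], (x.max() - x.min()) / 10]
popt, _ = optimize.curve_fit(gaussian, x, data, p0=p0)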


How to match a Gaussian normal to a histogram?

I'm wondering if there is a good way to match a Gaussian normal to a histogram in the form of a numpy array np.histogram(array, bins).
How can such a curve be plotted on the same graph, adjusted in height and width to match the histogram?
You can fit your histogram using a Gaussian (i.e. normal) distribution, for example using scipy's curve_fit. I have written a small example below. Note that depending on your data, you may need to find a way to make good guesses for the starting values for the fit (p0). Poor starting values may cause your fit to fail.
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
from scipy.stats import norm
def fit_func(x, a, mu, sigma, c):
    """Gaussian function used for the fit."""
    return a * norm.pdf(x, loc=mu, scale=sigma) + c
#make up some normally distributed data and do a histogram
y = 2 * np.random.normal(loc=1,scale=2,size=1000) + 2
no_bins = 20
hist,left = np.histogram(y,bins=no_bins)
centers = left[:-1] + (left[1] - left[0]) / 2  # bin centres (left edges plus half a bin width)
#fit the histogram
p0 = [2,0,2,2] #starting values for the fit
p1,_ = curve_fit(fit_func,centers,hist,p0,maxfev=10000)
#plot the histogram and fit together
fig,ax = plt.subplots()
ax.hist(y,bins=no_bins)
x = np.linspace(left[0],left[-1],1000)
y_fit = fit_func(x, *p1)
ax.plot(x,y_fit,'r-')
plt.show()
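To avoid hand-tuning p0, you can also derive the starting values from the data and the histogram themselves. A sketch, reusing y, hist, left and centers from above (these particular guesses are assumptions, not the only valid choice):
bin_width = left[1] - left[0]
p0 = [len(y) * bin_width,   # area under a count histogram is roughly N * bin width
      np.mean(y),           # centre of the Gaussian
      np.std(y),            # width of the Gaussian
      0]                    # flat background offset
p1, _ = curve_fit(fit_func, centers, hist, p0=p0, maxfev=10000)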

Use scipy lognormal distribution to fit data with small values, then show in matplotlib

I have a data set which contains values from 0 to 1e-5. I guess the data can be described by a lognormal distribution, so I use scipy.stats.lognorm to fit my data and want to plot the original data and the fitted distribution on the same figure using matplotlib.
First, I plot the sample as a histogram:
Then I add the fitted distribution as a line plot. However, this changes the Y-axis to a very large scale:
So the original data (sample) can no longer be seen on the figure!
I've checked all the variables and found that pdf_fitted is very large (>1e7). I really don't understand why a simple scistats.lognorm.fit to a sample that was generated with the same distribution (scistats.lognorm.pdf) doesn't work. Here is the code that demonstrates my problem:
from matplotlib import pyplot as plt
from scipy import stats as scistats
import numpy as np
# generate a sample for x between 0 and 1e-5
x = np.linspace(0, 1e-5, num=1000)
y = scistats.lognorm.pdf(x, 3, loc=0, scale=np.exp(10))
h = plt.hist(y, bins=40) # plot the sample by histogram
# plt.show()
# fit the sample by using Log Normal distribution
param = scistats.lognorm.fit(y)
print("Log-normal distribution parameters : ", param)
pdf_fitted = scistats.lognorm.pdf(
    x, *param[:-2], loc=param[-2], scale=param[-1])
plt.plot(x, pdf_fitted, label="Fitted Lognormal distribution")
plt.ticklabel_format(style='sci', scilimits=(-3, 4), axis='x')
plt.legend()
plt.show()
The problem
The immediate problem that you're having is that your fit is really, really bad. You can see this if you set the x and y scale on the plot to log, like with plt.xscale('log') and plt.yscale('log'). This lets you see both your histogram and your fitted data on a single plot:
so it's off by many orders of magnitude in both directions.
The fix
Your whole approach to generating a sample from the probability distribution represented by stats.lognorm and then fitting it was wrong. Here's a correct way to do it, using lognorm parameters similar to the ones in your question:
from matplotlib import pyplot as plt
from scipy import stats as scistats
import numpy as np
plt.figure(figsize=(12,7))
realparam = [.1, 0, np.exp(10)]
# generate pdf data around the mean value
m = realparam[2]
x = np.linspace(m*.6, m*1.4, num=10000)
y = scistats.lognorm.pdf(x, *realparam)
# generate a matching random sample
sample = scistats.lognorm.rvs(*realparam, size=100000)
# plot the sample by histogram
h = plt.hist(sample, bins=100, density=True)
# fit the sample by using Log Normal distribution
param = scistats.lognorm.fit(sample)
print("Log-normal distribution parameters : ", param)
pdf_fitted = scistats.lognorm.pdf(x, *param)
plt.plot(x, pdf_fitted, lw=5, label="Fitted Lognormal distribution")
plt.legend()
plt.show()
Output:
Log-normal distribution parameters : (0.09916091013245995, -215.9562383088556, 22245.970148671593)
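For reference, scipy parametrizes lognorm so that the shape parameter s is the standard deviation and log(scale) is the mean of the underlying normal distribution. You can recover them from the fitted parameters like this (a small sketch using param from above):
s_fit, loc_fit, scale_fit = param
mu_fit = np.log(scale_fit)   # mean of the underlying normal, should be close to 10
sigma_fit = s_fit            # std dev of the underlying normal, should be close to 0.1
print(mu_fit, sigma_fit)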

How to random sample lognormal data in Python using the inverse CDF and specify target percentiles?

I'm trying to generate random samples from a lognormal distribution in Python; the application is simulating network traffic. I'd like to generate samples such that:
The modal sample result is 320 (~10^2.5)
80% of the samples lie within the range 100 to 1000 (10^2 to 10^3)
My strategy is to use the inverse CDF (or Smirnov transform I believe):
Use the PDF for a normal distribution centred around 2.5 to calculate the PDF for 10^x where x ~ N(2.5,sigma).
Calculate the CDF for the above distribution.
Generate random uniform data along the interval 0 to 1.
Use the inverse CDF to transform the random uniform data into the required range.
The problem is, when I calculate the 10th and 90th percentiles at the end, I get completely the wrong numbers.
Here is my code:
%matplotlib inline
import matplotlib
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
from scipy.stats import norm
# find value of mu and sigma so that 80% of data lies within range 2 to 3
mu=2.505
sigma = 1/2.505
norm.ppf(0.1, loc=mu,scale=sigma),norm.ppf(0.9, loc=mu,scale=sigma)
# output: (1.9934025, 3.01659743)
# Generate normal distribution PDF
x = np.arange(16,128000, 16) # linearly spaced here, with extra range so that CDF is correctly scaled
x_log = np.log10(x)
mu=2.505
sigma = 1/2.505
y = norm.pdf(x_log,loc=mu,scale=sigma)
fig, ax = plt.subplots()
ax.plot(x_log, y, 'r-', lw=5, alpha=0.6, label='norm pdf')
x2 = (10**x_log) # x2 should be linearly spaced, so that cumsum works (later)
fig, ax = plt.subplots()
ax.plot(x2, y, 'r-', lw=5, alpha=0.6, label='norm pdf')
ax.set_xlim(0,2000)
# Calculate CDF
y_CDF = np.cumsum(y) / np.cumsum(y).max()
fig, ax = plt.subplots()
ax.plot(x2, y_CDF, 'r-', lw=2, alpha=0.6, label='norm pdf')
ax.set_xlim(0,8000)
# Generate random uniform data
input = np.random.uniform(size=10000)
# Use CDF as lookup table
traffic = x2[np.abs(np.subtract.outer(y_CDF, input)).argmin(0)]
# Discard highs and lows
traffic = traffic[(traffic >= 32) & (traffic <= 8000)]
# Check percentiles
np.percentile(traffic,10),np.percentile(traffic,90)
Which produces the output:
(223.99999999999997, 2480.0000000000009)
... and not the (100, 1000) that I would like to see. Any advice appreciated!
First, I'm not sure about "Use the PDF for a normal distribution centred around 2.5". After all, the log-normal distribution is defined in terms of the base-e (natural) logarithm, and 320 = 10^2.5 ≈ e^5.77.
Second, I would approach the problem in a different way. You need m and s to sample from the log-normal distribution.
If you look at the Wikipedia article on the log-normal distribution, you can see that it is a two-parameter distribution, and you have exactly two conditions:
Mode = exp(m - s*s) = 320
80% of the samples lie in [100, 1000], i.e. CDF(1000; m, s) - CDF(100; m, s) = 0.8
where the CDF can be written in terms of the error function, CDF(x; m, s) = 1/2 * (1 + erf((ln(x) - m) / (s*sqrt(2)))), which is available in practically any math library.
So you have two non-linear equations for two parameters. Solve them, find m and s, and plug them into any standard log-normal sampler.
Severin's approach is much leaner than my original attempt using the Smirnov transform. This is the code that worked for me (using fsolve to find s, although it's quite trivial to do it manually):
# Find lognormal distribution, with mode at 320 and 80% of probability mass between 100 and 1000
# Use fsolve to find the roots of the non-linear equation
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve
from scipy.stats import lognorm
import math
target_modal_value = 320
# Define function to find roots of
def equation(s):
    # From Wikipedia: Mode = exp(m - s*s) = 320
    m = math.log(target_modal_value) + s**2
    # The probability mass between 100 and 1000 (CDF difference) should equal 0.8.
    # Rearrange the equation so it equals 0, to find the root (the value of s).
    # np.exp is used here because fsolve passes s in as an array.
    return (lognorm.cdf(1000, s=s, scale=np.exp(m))
            - lognorm.cdf(100, s=s, scale=np.exp(m)) - 0.8)
# Solve the non-linear equation to find s
s_initial_guess = 1
s = fsolve(equation, s_initial_guess)[0]  # fsolve returns an array; keep the scalar
# From s, find m
m = math.log(target_modal_value) + s**2
print('m=' + str(m) + ', s=' + str(s))
# Plot
x = np.arange(0,2000,1)
y = lognorm.pdf(x,s=s, scale=math.exp(m))
fig, ax = plt.subplots()
ax.plot(x, y, 'r-', lw=5, alpha=0.6, label='norm pdf')
plt.plot((100,100), (0,1), 'k--')
plt.plot((320,320), (0,1), 'k-.')
plt.plot((1000,1000), (0,1), 'k--')
plt.ylim(0,0.0014)
plt.savefig('lognormal_100_320_1000.png')
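As a quick check (a sketch, reusing s, m and lognorm from above), you can draw samples with the fitted parameters and verify that roughly 80% fall between 100 and 1000 and that the histogram peaks near 320:
samples = lognorm.rvs(s=s, scale=math.exp(m), size=100000)
print('fraction in [100, 1000]:', np.mean((samples >= 100) & (samples <= 1000)))  # ~0.8
counts, edges = np.histogram(samples, bins=np.arange(0, 2000, 20))
print('histogram peak near:', edges[np.argmax(counts)])  # ~320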

How to correctly use scikit-learn's Gaussian Process for a 2D-inputs, 1D-output regression?

Prior to posting I did a lot of searches and found this question which might be exactly my problem. However, I tried what is proposed in the answer but unfortunately this did not fix it, and I couldn't add a comment to request further explanation, as I am a new member here.
Anyway, I want to use Gaussian processes with scikit-learn in Python on a simple but real case to start (using the examples provided in scikit-learn's documentation). I have a 2D input set (8 pairs of 2 parameters) called X, and 8 corresponding outputs gathered in the 1D array y.
# Inputs: 8 points
X = np.array([[p1, q1],[p2, q2],[p3, q3],[p4, q4],[p5, q5],[p6, q6],[p7, q7],[p8, q8]])
# Observations: 8 couples
y = np.array([r1,r2,r3,r4,r5,r6,r7,r8])
I defined an input test space x:
# Input space
x1 = np.linspace(x1min, x1max) #p
x2 = np.linspace(x2min, x2max) #q
x = (np.array([x1, x2])).T
Then I instantiate the GP model, fit it to my training data (X,y), and make the 1D prediction y_pred on my input space x:
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
kernel = C(1.0, (1e-3, 1e3)) * RBF([5,5], (1e-2, 1e2))
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=15)
gp.fit(X, y)
y_pred, MSE = gp.predict(x, return_std=True)
And then I make a 3D plot:
fig = pl.figure()
ax = fig.add_subplot(111, projection='3d')
Xp, Yp = np.meshgrid(x1, x2)
Zp = np.reshape(y_pred,50)
surf = ax.plot_surface(Xp, Yp, Zp, rstride=1, cstride=1, cmap=cm.jet,
linewidth=0, antialiased=False)
pl.show()
This is what I obtain:
When I modify the kernel parameters I get something like this, similar to what the poster I mentioned above got:
These plots don't even match the observation from the original training points (the lower response is obtained for [65.1,37] and the highest for [92.3,54]).
I am fairly new to GPs in 2D (also started Python not long ago) so I think I'm missing something here... Any answer would be helpful and greatly appreciated, thanks!
You're using two features to predict a third. Rather than a 3D plot like plot_surface, it's usually clearer if you use a 2D plot that's able to show information about a third dimension, like hist2d or pcolormesh. Here's a complete example using data/code similar to that in the question:
from itertools import product
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
X = np.array([[0,0],[2,0],[4,0],[6,0],[8,0],[10,0],[12,0],[14,0],[16,0],[0,2],
[2,2],[4,2],[6,2],[8,2],[10,2],[12,2],[14,2],[16,2]])
y = np.array([-54,-60,-62,-64,-66,-68,-70,-72,-74,-60,-62,-64,-66,
-68,-70,-72,-74,-76])
# Input space
x1 = np.linspace(X[:,0].min(), X[:,0].max()) #p
x2 = np.linspace(X[:,1].min(), X[:,1].max()) #q
x = (np.array([x1, x2])).T
kernel = C(1.0, (1e-3, 1e3)) * RBF([5,5], (1e-2, 1e2))
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=15)
gp.fit(X, y)
x1x2 = np.array(list(product(x1, x2)))
y_pred, MSE = gp.predict(x1x2, return_std=True)
X0p, X1p = x1x2[:,0].reshape(50,50), x1x2[:,1].reshape(50,50)
Zp = np.reshape(y_pred,(50,50))
# alternative way to generate equivalent X0p, X1p, Zp
# X0p, X1p = np.meshgrid(x1, x2)
# Zp = [gp.predict([(X0p[i, j], X1p[i, j]) for i in range(X0p.shape[0])]) for j in range(X0p.shape[1])]
# Zp = np.array(Zp).T
fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(111)
ax.pcolormesh(X0p, X1p, Zp)
plt.show()
Output:
Kinda plain looking, but so was my example data. In general, you shouldn't expect particularly interesting results with this few data points.
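One more thing that may help when judging the fit: predict(..., return_std=True) also returns the predictive standard deviation (stored in the MSE variable in the snippet above), and plotting it the same way shows where the model is uncertain, typically far from the training points. A small sketch, reusing the arrays from above:
Sp = np.reshape(MSE, (50, 50))
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(111)
ax.pcolormesh(X0p, X1p, Sp)
plt.show()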
Also, if you do want the surface plot, you can just replace the pcolormesh line with what you originally had (more or less):
ax = fig.add_subplot(111, projection='3d')
surf = ax.plot_surface(X0p, X1p, Zp, rstride=1, cstride=1, cmap='jet', linewidth=0, antialiased=False)
Output:
I'm also fairly new to scikit-learn's Gaussian process module, but after some effort I managed to implement a 3-D Gaussian process regression successfully. There are a lot of examples of 1-D regression, but little on higher input dimensions.
Perhaps you could show the values that you are using.
I found that sometimes the format in which you send the inputs can produce some issues. Try formatting input X as:
X = np.array([param1, param2]).T
and format the output as:
gp.fit(X, y.reshape(-1,1))
Also, as I understand it, the implementation assumes a zero mean function. If the output you are trying to regress has an average value that differs significantly from 0, you should normalize it (that will probably solve your problem). Standardizing the parameter space will help as well.
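A minimal sketch of what that normalization could look like, using StandardScaler as one convenient option (the X and y below are dummy stand-ins, since the original values were not posted). Newer versions of scikit-learn also offer normalize_y=True on GaussianProcessRegressor, which handles the output centring for you:
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
# Dummy stand-ins for the 8 training points and outputs from the question.
X = np.random.uniform([60, 30], [95, 55], size=(8, 2))
y = 100.0 + 5.0 * X[:, 0] - 2.0 * X[:, 1]
# Standardize the inputs and centre/scale the output before fitting.
x_scaler, y_scaler = StandardScaler(), StandardScaler()
X_s = x_scaler.fit_transform(X)
y_s = y_scaler.fit_transform(y.reshape(-1, 1)).ravel()
gp = GaussianProcessRegressor(
    kernel=C(1.0, (1e-3, 1e3)) * RBF([1.0, 1.0], (1e-2, 1e2)),
    n_restarts_optimizer=15)
gp.fit(X_s, y_s)
# Predictions on new points must be transformed back to the original scale.
x_new = np.array([[70.0, 40.0], [90.0, 50.0]])
y_pred = y_scaler.inverse_transform(
    gp.predict(x_scaler.transform(x_new)).reshape(-1, 1)).ravel()
print(y_pred)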

how does 2d kernel density estimation in python (sklearn) work?

I'm sorry for the probably stupid question, but I've been trying for hours to estimate a density from a set of 2D data. Let's assume my data is given by the array sample = np.random.uniform(0, 1, size=(50, 2)). I just want to use scikit-learn to estimate the density from the sample array (which here is of course a 2D uniform density), and I'm trying the following:
import numpy as np
from sklearn.neighbors.kde import KernelDensity
from matplotlib import pyplot as plt
sp = 0.01
samples = np.random.uniform(0,1,size=(50,2)) # random samples
x = y = np.linspace(0,1,100)
X,Y = np.meshgrid(x,y) # creating grid of data , to evaluate estimated density on
kde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(samples) # creating density from samples
kde.score_samples(X,Y) # I want to evaluate the estimated density on the X,Y grid
But the last step always yields the error: score_samples() takes 2 positional arguments but 3 were given
So probably .score_samples cannot take a grid as input, but there are no tutorials/docs for the 2D case, so I don't know how to fix this issue. It would be really great if someone could help.
Looking at the Kernel Density Estimate of Species Distributions example, you have to package the x,y data together (both the training data and the new sample grid).
Below is a function that simplifies the sklearn API.
import numpy as np
from sklearn.neighbors import KernelDensity

def kde2D(x, y, bandwidth, xbins=100j, ybins=100j, **kwargs):
    """Build 2D kernel density estimate (KDE)."""
    # create grid of sample locations (default: 100x100)
    xx, yy = np.mgrid[x.min():x.max():xbins,
                      y.min():y.max():ybins]
    xy_sample = np.vstack([yy.ravel(), xx.ravel()]).T
    xy_train = np.vstack([y, x]).T
    kde_skl = KernelDensity(bandwidth=bandwidth, **kwargs)
    kde_skl.fit(xy_train)
    # score_samples() returns the log-likelihood of the samples
    z = np.exp(kde_skl.score_samples(xy_sample))
    return xx, yy, np.reshape(z, xx.shape)
This gives you the xx, yy, zz needed for something like a scatter or pcolormesh plot. I've copied the example from the scipy page on the gaussian_kde function.
import numpy as np
import matplotlib.pyplot as plt
m1 = np.random.normal(size=1000)
m2 = np.random.normal(scale=0.5, size=1000)
x, y = m1 + m2, m1 - m2
xx, yy, zz = kde2D(x, y, 1.0)
plt.pcolormesh(xx, yy, zz)
plt.scatter(x, y, s=2, facecolor='white')
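The bandwidth has a big effect on the estimate, and the 1.0 above is just a guess. One common approach (a sketch, not the only option) is to pick it by cross-validated log-likelihood with GridSearchCV, stacking the data the same way kde2D does internally:
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity
xy_train = np.vstack([y, x]).T
grid = GridSearchCV(KernelDensity(kernel='gaussian'),
                    {'bandwidth': np.linspace(0.1, 2.0, 20)},
                    cv=5)
grid.fit(xy_train)
xx, yy, zz = kde2D(x, y, grid.best_params_['bandwidth'])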
