Creating Histogram from Poisson Distributions with weights (matplotlib) - python

I am working on a project studying the Poisson filling of droplets by a contaminant, where the Poisson mean depends on the droplet's volume. There is a volume distribution, and the likelihood of each volume comes from a Gaussian.
I have a loop that generates a Poisson distribution (an array of 2000 numbers) for a different mean at each step. Each distribution has a weight that I compute from a Gaussian. Currently I just concatenate all the Poisson arrays and build one large normalised histogram. I want to weight the frequency of the numbers in each array so that the histogram takes the weights into account. I am unsure how to do this, however, since it is the frequency of the numbers in each array that must be weighted, not the numbers themselves.
import numpy as np
from matplotlib import pyplot as plt

def gaussian(mu, sig, x):  # Gaussian gives the weight
    P_r = 1. / (np.sqrt(2. * np.pi) * sig) * np.exp(-np.power((x - mu) / sig, 2.) / 2)
    return P_r

def poisson(mean):
    P = np.random.poisson(mean, 2000)
    return P
R= np.linspace(45, 75, 2000) #min and max radius and steps taken between them to gen Poisson
Average_Droplet_Radius = 60
Variance = 15
Mean_Droplet_Average_Occupancy = float(input('Enter mean droplet occupation ')) #Poisson Mean
for mu, sig in [(Average_Droplet_Radius, Variance)]:
    prob = gaussian(mu, sig, R)

C = Mean_Droplet_Average_Occupancy / (4/3 * np.pi * Average_Droplet_Radius**3)  # the constant parameter for all distributions

i = 0
a = np.array([])
for cell in R:
    Individual_Mean = C * (4/3 * np.pi * R[i]**3)
    Individual_Weight = prob[i]  # want to weight the frequency in this Poisson array by this
    b = poisson(Individual_Mean)
    a = np.append(a, b)  # unweighted Poissons combined
    i = i + 1

bins_val = np.arange(0, a.max() + 1.5) - 0.5
count, bins, ignored = plt.hist(a, bins_val, density=True)  # creates an unweighted, normalised histogram
plt.show()
I was unsure how to use the weights argument of plt.hist, since here it is whole arrays of numbers that carry a weight.
Currently I get a histogram in which every droplet size is equally likely. How can I get the weights into the final distribution?
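One way to feed the Gaussian weights into the histogram is to give every draw from a given Poisson array the weight of that array, via the weights argument of np.histogram (plt.hist accepts the same argument). A minimal self-contained sketch, using a hypothetical mean occupancy of 3.0 in place of the input() value:

```python
import numpy as np

rng = np.random.default_rng(0)

R = np.linspace(45, 75, 200)   # droplet radii (fewer steps than the question, for brevity)
mu_R, sig_R = 60, 15           # Gaussian over radii (values from the question)
weights_R = np.exp(-0.5 * ((R - mu_R) / sig_R) ** 2) / (np.sqrt(2 * np.pi) * sig_R)

C = 3.0 / (4 / 3 * np.pi * mu_R**3)  # hypothetical mean occupancy of 3.0

samples = []
sample_weights = []
for r, w in zip(R, weights_R):
    draws = rng.poisson(C * (4 / 3 * np.pi * r**3), 2000)
    samples.append(draws)
    # every draw from this radius carries that radius's Gaussian weight
    sample_weights.append(np.full(draws.shape, w))

samples = np.concatenate(samples)
sample_weights = np.concatenate(sample_weights)

bins = np.arange(0, samples.max() + 2) - 0.5
counts, edges = np.histogram(samples, bins=bins, weights=sample_weights, density=True)
```

Since density=True, the weighted counts are normalised at the end, so only the relative weights matter.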

Related

Strange increase of the pi distribution's sigma when increasing the number of repetitions (Python)

I'm trying to solve exercise 3.1 using Python.
The code "works" and is the following:
#NUMERICAL ESTIMATE OF PI
import numpy as np               #library for numerical calculations
import matplotlib.pyplot as plt  #library for plotting purposes
from scipy.stats import norm     #needed for the Gaussian fit
#*******************************************************************************
M = 10**2    #number of times we calculate pi
N = 10**4    #number of points generated
mean_pi = [] #empty list
for i in range(M):  #loop over the repetitions
    x = np.random.uniform(-1, 1, N)  #N random samples from a uniform distribution over [-1,1)
    y = np.random.uniform(-1, 1, N)  #N random samples from a uniform distribution over [-1,1)
    x_sel = x[(x**2 + y**2) <= 1]  #selected x points
    y_sel = y[(x**2 + y**2) <= 1]  #selected y points
    mean_pi += [4 * len(x_sel) / len(x)]  #list of pi estimates
#*******************************************************************************
plt.figure(figsize=(8, 3))  #a unique identifier for the figure
_, bins, _ = plt.hist(mean_pi, bins=int(np.sqrt(N)), density=True, color="skyblue")  #create a histogram of the data
#and store an array specifying the bin edges in the variable bins.
mu, sigma = norm.fit(mean_pi)  #mean and standard deviation of the data
k = sigma * np.sqrt(N)  #k parameter
best_fit_line = norm.pdf(bins, mu, sigma)  #Gaussian line of best fit for the data
print("\nNumber of repetitions:", M, ". The mean of the distribution is:", mu, ". The standard deviation is:", sigma, ". The k parameter is:", k, ".\n")
#*******************************************************************************
plt.plot(bins, best_fit_line, color="red")  #plot the fit line
plt.grid()  #configure the grid lines
plt.xlabel('Bins', fontweight='bold')  #set the label for the x-axis
plt.ylabel('Pi', fontweight='bold')  #set the label for the y-axis
plt.title("Histogram for Pi vs. bins")  #set a title for the plot
plt.show()  #display all open figures
print("\n")
#*******************************************************************************
M = 10**3    #number of times we calculate pi
N = 10**4    #number of points generated
mean_pi = [] #empty list
for i in range(M):  #loop over the repetitions
    x = np.random.uniform(-1, 1, N)  #N random samples from a uniform distribution over [-1,1)
    y = np.random.uniform(-1, 1, N)  #N random samples from a uniform distribution over [-1,1)
    x_sel = x[(x**2 + y**2) <= 1]  #selected x points
    y_sel = y[(x**2 + y**2) <= 1]  #selected y points
    mean_pi += [4 * len(x_sel) / len(x)]  #list of pi estimates
#*******************************************************************************
plt.figure(figsize=(8, 3))  #a unique identifier for the figure
_, bins, _ = plt.hist(mean_pi, bins=int(np.sqrt(N)), density=True, color="skyblue")  #create a histogram of the data
#and store an array specifying the bin edges in the variable bins.
mu, sigma = norm.fit(mean_pi)  #mean and standard deviation of the data
k = sigma * np.sqrt(N)  #k parameter
best_fit_line = norm.pdf(bins, mu, sigma)  #Gaussian line of best fit for the data
print("Number of repetitions:", M, ". The mean of the distribution is:", mu, ". The standard deviation is:", sigma, ". The k parameter is:", k, ".\n")
#*******************************************************************************
plt.plot(bins, best_fit_line, color="red")  #plot the fit line
plt.grid()  #configure the grid lines
plt.xlabel('Bins', fontweight='bold')  #set the label for the x-axis
plt.ylabel('Pi', fontweight='bold')  #set the label for the y-axis
plt.title("Histogram for Pi vs. bins")  #set a title for the plot
plt.show()  #display all open figures
print("\n")
#*******************************************************************************
M = 5*10**3  #number of times we calculate pi
N = 10**4    #number of points generated
mean_pi = [] #empty list
for i in range(M):  #loop over the repetitions
    x = np.random.uniform(-1, 1, N)  #N random samples from a uniform distribution over [-1,1)
    y = np.random.uniform(-1, 1, N)  #N random samples from a uniform distribution over [-1,1)
    x_sel = x[(x**2 + y**2) <= 1]  #selected x points
    y_sel = y[(x**2 + y**2) <= 1]  #selected y points
    mean_pi += [4 * len(x_sel) / len(x)]  #list of pi estimates
#*******************************************************************************
plt.figure(figsize=(8, 3))  #a unique identifier for the figure
_, bins, _ = plt.hist(mean_pi, bins=int(np.sqrt(N)), density=True, color="skyblue")  #create a histogram of the data
#and store an array specifying the bin edges in the variable bins.
mu, sigma = norm.fit(mean_pi)  #mean and standard deviation of the data
k = sigma * np.sqrt(N)  #k parameter
best_fit_line = norm.pdf(bins, mu, sigma)  #Gaussian line of best fit for the data
print("Number of repetitions:", M, ". The mean of the distribution is:", mu, ". The standard deviation is:", sigma, ". The k parameter is:", k, ".\n")
#*******************************************************************************
plt.plot(bins, best_fit_line, color="red")  #plot the fit line
plt.grid()  #configure the grid lines
plt.xlabel('Bins', fontweight='bold')  #set the label for the x-axis
plt.ylabel('Pi', fontweight='bold')  #set the label for the y-axis
plt.title("Histogram for Pi vs. bins")  #set a title for the plot
plt.show()  #display all open figures
#*******************************************************************************
print("\nHow many pairs N do you need to estimate pi to better than 0.0001? The number of pairs N is:", (k**2) * 10**8, ".")
#*******************************************************************************
With the output:
As you can see, sigma increases, whereas I expect it to decrease as the number of repetitions increases... I don't understand where the error is.
I also tried increasing N, but the results are no better...
Can someone help me, please?
I understand the error now. To implement the right code it is necessary to change the line:
_, bins, _ = plt.hist(mean_pi, bins=int(np.sqrt(N)), density=True, color="slateblue")
to:
_, bins, _ = plt.hist(mean_pi, bins=int(np.sqrt(M)), density=True, color="slateblue")
and the output will be:
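For what it's worth, the spread of the estimates is set by N alone; increasing M only gives more samples of the same distribution. A quick check (with smaller numbers than in the question, for speed):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10**3  # points per estimate

def pi_estimates(M):
    x = rng.uniform(-1, 1, (M, N))
    y = rng.uniform(-1, 1, (M, N))
    inside = (x**2 + y**2) <= 1
    return 4 * inside.mean(axis=1)  # one pi estimate per repetition

sigma_small = np.std(pi_estimates(200))
sigma_large = np.std(pi_estimates(2000))
# theory: each point lands inside with p = pi/4, so a single estimate has
# standard deviation sqrt(pi * (4 - pi) / N), independent of M
theory = np.sqrt(np.pi * (4 - np.pi) / N)
```

Both measured standard deviations sit near the theoretical value regardless of M; only the histogram's appearance changed with the bin count.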

How to estimate maximum likelihood with GEV in python

I'm trying to match the generalized extreme value (GEV) distribution's probability density function (pdf) to the pdf of my data. The histogram depends on the bin width; as I adjust the bins, the result of the function fit also changes. curve_fit(func, x, y) handles this properly, but it uses least-squares estimation. What I want is maximum likelihood estimation (MLE). The stats.genextreme.fit(data) function gives good results with MLE, but it does not reflect how the histogram's shape changes with the bins; it just uses the original data.
I succeeded in estimating the parameters of the standard normal distribution using MLE, but that was based on the original data and does not change with the bins. I could not even estimate the parameters of the GEV from the original data that way.
I checked the source code of genextreme_gen, rv_continuous, etc., but it is too complicated for my current Python skills.
I would like to estimate the parameters of the GEV distribution through MLE, with the estimate changing according to the bins.
What should I do?
I am sorry for my poor English, and thank you for your help.
+)
h = 0.5  # bin width
dat = h105[1]  # data
b = np.arange(min(dat) - h/2, max(dat), h)  # bin edges
n, bins = np.histogram(dat, bins=b, density=True)  # histogram
x = 0.5 * (bins[1:] + bins[:-1])  # bin centres

popt, _ = curve_fit(fg, x, n)  # curve_fit(GEV's pdf, bin centres, pdf values)
popt = -popt[0], popt[1], popt[2]  # estimated parameters (least squares estimation, LSE)
x1 = np.linspace((popt[1] - popt[2]) / popt[0], dat.max(), 1000)
a1 = stats.genextreme.pdf(x1, *popt)  # pdf

popt = stats.genextreme.fit(dat)  # estimated parameters (maximum likelihood estimation, MLE)
x2 = np.linspace((popt[1] - popt[2]) / popt[0], dat.max(), 1000)
a2 = stats.genextreme.pdf(x2, *popt)
(Figures: fitted pdfs with bin width = 2 and bin width = 0.5.)
One way to do this is to convert the bins back into data: count the number of data points in each bin, then repeat the centre of each bin that many times.
I also tried sampling uniform values from within each bin, but repeating the bin centre seems to give parameters with a higher likelihood.
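The bins-to-data conversion described above boils down to np.repeat, shown here on toy values:

```python
import numpy as np

bin_counts = np.array([1, 3, 2])         # toy counts per bin
bin_centers = np.array([0.5, 1.5, 2.5])  # toy bin centres

# repeat each bin centre by its count to get a pseudo-sample
pseudo_data = np.repeat(bin_centers, bin_counts)
# → [0.5, 1.5, 1.5, 1.5, 2.5, 2.5]
```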
import scipy.stats as stats
from scipy.optimize import curve_fit
import numpy as np
import matplotlib.pyplot as plt

ground_truth_params = (0.001, 0.5, 0.999)
count = 50

h = 0.2  # bin width
dat = stats.genextreme.rvs(*ground_truth_params, count)  # data
b = np.arange(np.min(dat) - h/2, np.max(dat), h)  # bin edges
n, bins = np.histogram(dat, bins=b, density=True)  # histogram (densities)
bin_counts, _ = np.histogram(dat, bins=b, density=False)  # histogram (counts)
x = 0.5 * (bins[1:] + bins[:-1])  # bin centres

def flatten(l):
    return [item for sublist in l for item in sublist]

popt, _ = curve_fit(stats.genextreme.pdf, x, n, p0=[0, 1, 1])  # curve_fit(GEV's pdf, bin centres, pdf values)
popt_lse = -popt[0], popt[1], popt[2]  # estimated parameters (least squares estimation, LSE)
popt_mle = stats.genextreme.fit(dat)  # estimated parameters (maximum likelihood estimation, MLE)

uniform_dat_from_bins = flatten((np.linspace(x - h/2, x + h/2, n) for n, x in zip(bin_counts, x)))
popt_uniform_mle = stats.genextreme.fit(uniform_dat_from_bins)  # MLE on uniformly resampled bins

centered_dat_from_bins = flatten(([x] * n for n, x in zip(bin_counts, x)))
popt_centered_mle = stats.genextreme.fit(centered_dat_from_bins)  # MLE on repeated bin centres

plot_params = {
    ground_truth_params: 'tab:green',
    popt_lse: 'tab:red',
    popt_mle: 'tab:orange',
    popt_centered_mle: 'tab:blue',
    popt_uniform_mle: 'tab:purple',
}
param_names = ['GT', 'LSE', 'MLE', 'bin centered MLE', 'bin uniform MLE']

plt.figure(figsize=(10, 5))
plt.bar(x, n, width=h, color='lightgrey')
plt.ylim(0, 0.5)
plt.xlim(-2, 10)
for params, color in plot_params.items():
    x_pdf = np.linspace(-2, 10, 1000)
    y_pdf = stats.genextreme.pdf(x_pdf, *params)
    plt.plot(x_pdf, y_pdf, label='pdf', color=color)
plt.legend(param_names)

plt.figure(figsize=(10, 5))
for params, color in plot_params.items():
    plt.plot(np.sum(stats.genextreme.logpdf(dat, *params)), 'o', color=color)
The first plot shows the PDFs estimated by the different methods, along with the ground-truth PDF.
The next plot shows the likelihoods of the estimated parameters given the original data.
The PDF estimated by MLE on the original data has the maximum likelihood, as expected. It is followed by the PDFs estimated from the histogram bins (centred and uniform), then by the ground-truth PDF, and finally by the least-squares PDF, which has the lowest likelihood.

Generating normal distribution in order python, numpy

I am able to generate random samples of normal distribution in numpy like this.
>>> mu, sigma = 0, 0.1 # mean and standard deviation
>>> s = np.random.normal(mu, sigma, 1000)
But they are in random order, obviously. How can I generate numbers in order, that is, values should rise and fall like in a normal distribution.
In other words, I want to create a curve (gaussian) with mu and sigma and n number of points which I can input.
How to do this?
This will do the trick: (1) generate a random sample of n x-coordinates from the normal distribution, (2) evaluate the normal density at those x-values, and (3) sort the x-values by the magnitude of the density at their positions:
import numpy as np

mu, sigma, n = 0., 1., 1000

def normal(x, mu, sigma):
    return (2. * np.pi * sigma**2.)**-.5 * np.exp(-.5 * (x - mu)**2. / sigma**2.)

x = np.random.normal(mu, sigma, n)   # generate a random sample from the normal distribution
y = normal(x, mu, sigma)             # evaluate the probability density at each point
x, y = x[np.argsort(y)], np.sort(y)  # sort according to the probability density
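If the goal is only to draw a smooth Gaussian curve for given mu, sigma and n, a simpler sketch evaluates the density on an ordered grid (no random sampling involved):

```python
import numpy as np

mu, sigma, n = 0.0, 1.0, 1000

# ordered x grid covering +/- 4 standard deviations around the mean
x = np.linspace(mu - 4 * sigma, mu + 4 * sigma, n)
# normal density at each grid point: rises to the peak at mu, then falls
y = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
```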

Calculate the Cumulative Distribution Function (CDF) in Python

How can I calculate in python the Cumulative Distribution Function (CDF)?
I want to calculate it from an array of points I have (discrete distribution), not with the continuous distributions that, for example, scipy has.
(It is possible that my interpretation of the question is wrong. If the question is how to get from a discrete PDF to a discrete CDF, then np.cumsum divided by a suitable constant will do if the samples are equispaced; if they are not, np.cumsum of the array multiplied by the distances between the points will do.)
If you have a discrete array of samples and you would like to know the CDF of the sample, you can just sort the array. Looking at the sorted result, you'll realize that the smallest value represents 0% and the largest value represents 100%. If you want the value at 50% of the distribution, just look at the element in the middle of the sorted array.
Let us have a closer look at this with a simple example:
import matplotlib.pyplot as plt
import numpy as np
# create some randomly distributed data:
data = np.random.randn(10000)
# sort the data:
data_sorted = np.sort(data)
# calculate the proportional values of samples
p = 1. * np.arange(len(data)) / (len(data) - 1)
# plot the sorted data:
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax1.plot(p, data_sorted)
ax1.set_xlabel('$p$')
ax1.set_ylabel('$x$')
ax2 = fig.add_subplot(122)
ax2.plot(data_sorted, p)
ax2.set_xlabel('$x$')
ax2.set_ylabel('$p$')
This gives the following plot, where the right-hand plot is the traditional cumulative distribution function. It should reflect the CDF of the process behind the points, but naturally it cannot do so exactly as long as the number of points is finite.
This function is easy to invert, and which form you need depends on your application.
Assuming you know how your data is distributed (i.e. you know its pdf), scipy does support discrete data when calculating CDFs:
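Inverting it amounts to interpolating p back to x (the quantile function), which can be sketched with np.interp:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.standard_normal(10000)

data_sorted = np.sort(data)
p = np.arange(len(data)) / (len(data) - 1)  # proportional values, as above

# quantile function: interpolate from p back to x
median = np.interp(0.5, p, data_sorted)
q90 = np.interp(0.9, p, data_sorted)
```

For standard normal data the median lands near 0 and the 90th percentile near 1.28.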
import numpy as np
import scipy
import matplotlib.pyplot as plt
import seaborn as sns
x = np.random.randn(10000) # generate samples from normal distribution (discrete data)
norm_cdf = scipy.stats.norm.cdf(x) # calculate the cdf - also discrete
# plot the cdf
sns.lineplot(x=x, y=norm_cdf)
plt.show()
We can even print the first few values of the cdf to show they are discrete
print(norm_cdf[:10])
>>> array([0.39216484, 0.09554546, 0.71268696, 0.5007396 , 0.76484329,
0.37920836, 0.86010018, 0.9191937 , 0.46374527, 0.4576634 ])
The same method to calculate the cdf also works for multiple dimensions: we use 2d data below to illustrate
mu = np.zeros(2) # mean vector
cov = np.array([[1,0.6],[0.6,1]]) # covariance matrix
# generate 2d normally distributed samples using 0 mean and the covariance matrix above
x = np.random.multivariate_normal(mean=mu, cov=cov, size=1000) # 1000 samples
norm_cdf = scipy.stats.norm.cdf(x)
print(norm_cdf.shape)
>>> (1000, 2)
In the above examples, I had prior knowledge that my data was normally distributed, which is why I used scipy.stats.norm() - there are multiple distributions scipy supports. But again, you need to know how your data is distributed beforehand to use such functions. If you don't know how your data is distributed and you just use any distribution to calculate the cdf, you most likely will get incorrect results.
The empirical cumulative distribution function is a CDF that jumps exactly at the values in your data set. It is the CDF for a discrete distribution that places a mass at each of your values, where the mass is proportional to the frequency of the value. Since the sum of the masses must be 1, these constraints determine the location and height of each jump in the empirical CDF.
Given an array a of values, you compute the empirical CDF by first obtaining the frequencies of the values. The numpy function unique() is helpful here because it returns not only the frequencies, but also the values in sorted order. To calculate the cumulative distribution, use the cumsum() function, and divide by the total sum. The following function returns the values in sorted order and the corresponding cumulative distribution:
import numpy as np
def ecdf(a):
    x, counts = np.unique(a, return_counts=True)
    cusum = np.cumsum(counts)
    return x, cusum / cusum[-1]
To plot the empirical CDF you can use matplotlib's plot() function. The option drawstyle='steps-post' ensures that jumps occur at the right place. However, you need to force a jump at the smallest data value, so it's necessary to insert an additional element in front of x and y.
import matplotlib.pyplot as plt
def plot_ecdf(a):
    x, y = ecdf(a)
    x = np.insert(x, 0, x[0])
    y = np.insert(y, 0, 0.)
    plt.plot(x, y, drawstyle='steps-post')
    plt.grid(True)
    plt.savefig('ecdf.png')
Example usages:
xvec = np.array([7, 1, 2, 2, 7, 4, 4, 4, 5.5, 7])
plot_ecdf(xvec)

import pandas as pd
df = pd.DataFrame({'x': [7, 1, 2, 2, 7, 4, 4, 4, 5.5, 7]})
plot_ecdf(df['x'])
with output:
For calculating the CDF for an array of discrete numbers:
import numpy as np

pdf, bin_edges = np.histogram(
    data,          # array of data
    bins=500,      # number of bins for the distribution function
    density=True,  # True to return the probability density function (pdf) instead of counts
)
cdf = np.cumsum(pdf * np.diff(bin_edges))
Note that the returned array pdf has the length of bins (500 here) and bin_edges has the length of bins + 1 (501 here).
So, to calculate the CDF, which is nothing but the area below the PDF curve, we simply take the cumulative sum of the bin widths (np.diff(bin_edges)) times pdf, using NumPy's cumsum function.
Here's an alternative pandas solution for calculating the empirical CDF, using pd.cut to sort the data into evenly spaced bins first, and then cumsum to compute the distribution.
def empirical_cdf(s: pd.Series, n_bins: int = 100):
    # Sort the data into `n_bins` evenly spaced bins:
    discretized = pd.cut(s, n_bins)
    # Count the number of datapoints in each bin:
    bin_counts = discretized.value_counts().sort_index().reset_index()
    # Calculate the location of each bin as the mean of the bin start and end:
    bin_counts["loc"] = (pd.IntervalIndex(bin_counts["index"]).left + pd.IntervalIndex(bin_counts["index"]).right) / 2
    # Compute the CDF with cumsum:
    return bin_counts.set_index("loc").iloc[:, -1].cumsum()
Below is an example use of the function to discretize the distribution of 10000 datapoints into 100 evenly spaced bins:
s = pd.Series(np.random.randn(10000))
cdf = empirical_cdf(s, n_bins=100)
fig, ax = plt.subplots()
ax.scatter(cdf.index, cdf.values)
import numpy as np
import matplotlib.pyplot as plt

def get_discrete_cdf(values):
    values = (values - np.min(values)) / (np.max(values) - np.min(values))
    values_sort = np.sort(values)
    values_sum = np.sum(values)
    values_sums = []
    cur_sum = 0
    for it in values_sort:
        cur_sum += it
        values_sums.append(cur_sum)
    cdf = [values_sums[np.searchsorted(values_sort, it)] / values_sum for it in values]
    return cdf

rand_values = [np.random.normal(loc=0.0) for _ in range(1000)]
_ = plt.hist(rand_values, bins=20)
_ = plt.xlabel("rand_values")
_ = plt.ylabel("nums")

cdf = get_discrete_cdf(rand_values)
x_p = sorted(zip(rand_values, cdf), key=lambda it: it[0])
x = [it[0] for it in x_p]
y = [it[1] for it in x_p]
_ = plt.plot(x, y)
_ = plt.xlabel("rand_values")
_ = plt.ylabel("prob")

Curve_Fit not returning expected values

I have code here that draws from two gaussian distributions with an equal number of points.
Ultimately, I want to simulate noise, but first I'm trying to understand why, when I draw from two Gaussians whose means are very far apart, curve_fit does not return their average mean value as I expected.
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
import gauss
N_tot = 1000
# Draw from the major gaussian. Note the number N. It is
# the main parameter in obtaining your estimators.
mean = 0; sigma = 1; var = sigma**2; N = 100
A = 1/np.sqrt((2*np.pi*var))
points = gauss.draw_1dGauss(mean,var,N)
# Now draw from a minor gaussian. Note Np
meanp = 10; sigmap = 1; varp = sigmap**2; Np = N_tot-N
pointsp = gauss.draw_1dGauss(meanp,varp,Np)
Ap = 1/np.sqrt((2*np.pi*varp))
# Now implement the sum of the draws by concatenating the two arrays.
points_tot = np.array(points.tolist()+pointsp.tolist())
bins_tot = len(points_tot) // 5  # must be an integer number of bins
hist_tot, bin_edges_tot = np.histogram(points_tot,bins_tot,density=True)
bin_centres_tot = (bin_edges_tot[:-1] + bin_edges_tot[1:])/2.0
# Initial guess
p0 = [A, mean, sigma]
# Result of the fit
coeff, var_matrix = curve_fit(gauss.gaussFun, bin_centres_tot, hist_tot, p0=p0)
# Get the fitted curve
hist_fit = gauss.gaussFun(bin_centres, *coeff)
plt.figure(5); plt.title('Gaussian Estimate')
plt.suptitle('Gaussian Parameters: Mu = '+ str(coeff[1]) +' , Sigma = ' + str(coeff[2]) + ', Amplitude = ' + str(coeff[0]))
plt.plot(bin_centres,hist_fit)
plt.draw()
# Error on the estimates
error_parameters = np.sqrt(np.array([var_matrix[0][0],var_matrix[1][1],var_matrix[2][2]]))
The returned parameters are still centered about 0 and I'm not sure why. It should be centered around 10.
Edit: Changed the integer division portions but still not returning good fit value.
I should get a mean of about ~10 since most of my points are being drawn from that distribution (i.e. the minor distribution)
You find that the least-squares optimization converges to the larger of the two peaks.
The least-squares optimum does not find the "average mean value" of the two component distributions; the algorithm merely minimizes the squared error, which usually means fitting the biggest peak.
When the distribution is this lopsided (90% of the samples come from the larger peak), the error terms on the main peak wash out both the local minimum at the smaller peak and the one in the valley between the peaks.
The fit will only converge to a point in the centre when the peaks are nearly equal in size; otherwise you should expect least squares to find the "strongest" peak, if it doesn't get stuck in a local minimum.
With the following pieces, I can run your code:
bin_centres = bin_centres_tot

def draw_1dGauss(mean, var, N):
    from scipy.stats import norm
    from numpy import sqrt
    return norm.rvs(loc=mean, scale=sqrt(var), size=N)

def gaussFun(bin_centres, *coeff):
    from numpy import sqrt, exp, pi
    A, mean, sigma = coeff[0], coeff[1], coeff[2]
    return exp(-(bin_centres - mean)**2 / 2. / sigma**2) / sigma / sqrt(2 * pi)

plt.hist(points_tot, density=True, bins=40)
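Since the data genuinely contains two peaks, another option is to fit a two-component model so that both means are recovered. A self-contained sketch with made-up parameter names, regenerating similar data rather than reusing the question's gauss module:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
# 10% of samples from N(0, 1), 90% from N(10, 1), as in the question
points_tot = np.concatenate([rng.normal(0, 1, 100), rng.normal(10, 1, 900)])

def two_gauss(x, a1, mu1, s1, a2, mu2, s2):
    # sum of two Gaussian bumps with independent amplitude, mean, width
    g = lambda xx, a, m, s: a * np.exp(-0.5 * ((xx - m) / s) ** 2)
    return g(x, a1, mu1, s1) + g(x, a2, mu2, s2)

hist, edges = np.histogram(points_tot, bins=50, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])

# initial guesses near each visible peak
p0 = [0.1, 0.0, 1.0, 0.4, 10.0, 1.0]
coeff, _ = curve_fit(two_gauss, centres, hist, p0=p0)
# coeff[1] should land near 0 and coeff[4] near 10
```

The key point is the initial guess: seeding each component near one of the visible peaks lets the optimizer resolve both means instead of collapsing onto the dominant one.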
