Python equivalent for MATLAB's normplot?

Is there a python equivalent function similar to normplot from MATLAB?
Perhaps in matplotlib?
MATLAB syntax:
x = normrnd(10,1,25,1);
normplot(x)
Gives the following normal probability plot (image not shown).
I have tried using the matplotlib & numpy modules to determine the probability/percentile of the values in the array, but the output plot's y-axis scale is linear, as compared to the plot from MATLAB.
import numpy as np
import matplotlib.pyplot as plt
data =[-11.83,-8.53,-2.86,-6.49,-7.53,-9.74,-9.44,-3.58,-6.68,-13.26,-4.52]
plot_percentiles = range(0, 110, 10)
x = np.percentile(data, plot_percentiles)
plt.plot(x, plot_percentiles, 'ro-')
plt.xlabel('Value')
plt.ylabel('Probability')
plt.show()
Gives (image not shown):
Otherwise, how could the scales be adjusted to match the first plot?
Thanks.

A late answer, but I just came across the same problem and found a solution that is worth sharing, I guess.
As joris pointed out, the probplot function is an equivalent to normplot, but its y-axis is in theoretical quantiles rather than percentile probabilities. scipy.stats also offers functions to convert between the two:
quantile -> cumulative probability (percentile):
stats.<distribution>.cdf(quantile_value)
cumulative probability (percentile) -> quantile:
stats.<distribution>.ppf(percentile_value)
for example:
stats.norm.ppf(percentile)
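For instance, for the standard normal distribution the two functions are inverses of each other (a quick sanity check):
from scipy import stats

p = stats.norm.cdf(1.96)   # quantile -> cumulative probability, ~0.975
x = stats.norm.ppf(p)      # cumulative probability -> quantile, ~1.96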
To get a y-axis equivalent to normplot's, you can replace the quantile ticks with percentile labels:
from scipy import stats
import matplotlib.pyplot as plt
nsample = 500
# create an array of random variates
x = stats.t.rvs(100, size=nsample)
# Calculate quantiles and least-square-fit curve
(quantiles, values), (slope, intercept, r) = stats.probplot(x, dist='norm')
# plot the results
plt.plot(values, quantiles, 'ob')
plt.plot(quantiles * slope + intercept, quantiles, 'r')
# define the percentile ticks we want to show
ticks_perc = [1, 5, 10, 20, 50, 80, 90, 95, 99]
# transform them from percentiles to cumulative density values
ticks_quan = [stats.norm.ppf(i / 100.) for i in ticks_perc]
# assign the new ticks
plt.yticks(ticks_quan, ticks_perc)
# show the plot with a grid
plt.grid()
plt.show()
The result (image not shown):

I'm fairly certain matplotlib doesn't provide anything like this.
It's possible to do, of course, but you'll have to either rescale your data and change your y-axis ticks/labels to match, or, if you plan on doing this often, code a new scale that can be applied to matplotlib axes, as in this example: http://matplotlib.sourceforge.net/examples/api/custom_scale_example.html.
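On newer matplotlib versions (3.1+), a lighter-weight alternative to a full custom scale class is the built-in 'function' scale, which warps an axis through an arbitrary forward/inverse transform pair. A minimal sketch, assuming normally distributed sample data and illustrative tick positions:
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# hypothetical sample and simple plotting positions in (0, 1)
x = np.sort(np.random.normal(10, 1, 25))
p = (np.arange(len(x)) + 0.5) / len(x)

fig, ax = plt.subplots()
ax.plot(x, p, 'o')

# probit-like y-axis: forward transform is the normal quantile function,
# the inverse is the normal CDF
ax.set_yscale('function', functions=(stats.norm.ppf, stats.norm.cdf))
ax.set_ylim(0.01, 0.99)  # keep limits strictly inside (0, 1); ppf(0) and ppf(1) are infinite
ax.set_yticks([0.01, 0.05, 0.25, 0.5, 0.75, 0.95, 0.99])
plt.show()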

Maybe you can use the probplot function of scipy (scipy.stats); this seems to me to be an equivalent of MATLAB's normplot:
Calculate quantiles for a probability plot of sample data against a specified theoretical distribution. probplot optionally calculates a best-fit line for the data and plots the results using Matplotlib or a given plot function.
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.probplot.html
But it does not solve your problem of the different y-axis scale.

Using matplotlib's semilogy will get closer to the MATLAB output.
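A minimal sketch of that idea, reusing the data from the question (the 0th percentile is dropped, since log(0) is undefined):
import numpy as np
import matplotlib.pyplot as plt

data = [-11.83, -8.53, -2.86, -6.49, -7.53, -9.74, -9.44, -3.58, -6.68, -13.26, -4.52]
plot_percentiles = range(10, 110, 10)  # start at 10 to keep the log axis finite
x = np.percentile(data, plot_percentiles)

plt.semilogy(x, plot_percentiles, 'ro-')  # log-scaled y-axis
plt.xlabel('Value')
plt.ylabel('Probability')
plt.show()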

Related

How to make a histogram from 30 CSV files, fit it with a Gaussian function, and find the standard deviation?

I want to make a histogram from 30 CSV files and then fit a Gaussian function to see if my data is optimal. After that, I need to find the mean and standard deviation of those peaks. The files are large, and I am not sure whether I am extracting the individual columns and organizing their value ranges into bins correctly.
I know this is a bit long with too many questions; please answer as much as you want, thank you very much!
> these are the links to the data
Below is what I have done so far (actually not much, as I am a beginner at data visualization).
First, I import the packages; savgol_filter is used to smooth the histogram, which seems to look better.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.signal import savgol_filter
Then I define a unit conversion and set the slice limits.
def cm2inch(value):
    return value / 2.54

width = 9
height = 6.75
sliceMin, sliceMax = 300, 1002
Next I load all the data in a Jupyter notebook by iterating 30 times, using two lists, "times" and "voltages", to store the values.
times, voltages = [], []
for i in range(30):
    time, ch1 = np.loadtxt(f"{i+1}.txt", delimiter=',', skiprows=5, unpack=True)
    times.append(time)
    voltages.append(ch1)
t = (np.array(times[0]) * 1e5)[sliceMin:sliceMax]
voltages = np.array(voltages)[:, sliceMin:sliceMax]
1. I think I need a hist function to plot the graph. I have the plot, but I am not sure if this is the proper way to generate the histogram.
hist, bin_edges = np.histogram(voltages, bins=500, density=True)
hist = savgol_filter(hist, 51, 3)
bin_centres = (bin_edges[:-1] + bin_edges[1:])/2
That is as far as I have got. The amplitude of the 3rd peak is too low, which is not what I expected; please correct me if my expectation is wrong.
This is my histogram plot (image not shown).
I have updated my plot with the following code
labels = "hist"
if showGraph:
    plt.title("Datapoints Distribution over Voltage [mV]")
    plt.xlabel("Voltage [mV]")
    plt.ylabel("Data Points")
    plt.plot(hist, label=labels)
    plt.show()
2. (edited) I am not sure why my label does not display; could you please correct me?
3. (edited) Besides, I want to fit a Gaussian function to the histogram. But there are three peaks, so how should I fit the function to them?
def gauss(x, *p):
    A, mu, sigma = p
    return A * np.exp(-(x - mu)**2 / (2. * sigma**2))
4. (edited) I realize that I have not mentioned the mean value yet.
I suppose that if I can locate the maximum of a peak, then I can find the mean value of that specific peak. Do I need to fit the Gaussian first to find the peak, or can I find it directly by looking for local maxima? If so, how do I proceed?
5. (edited) I know how to find the standard deviation of a single list; if I want to apply similar logic here, how do I implement the code?
sample = [1,2,3,4,5,5,5,5,10]
standard_deviation = np.std(sample, ddof=1)
print(standard_deviation)
Feedback on the suggestions:
I tried to implement the Gaussian fit; below are the packages I import.
from sklearn.mixture import GaussianMixture
import numpy as np
import matplotlib.pyplot as plt
Here is the Gaussian Mixture fit. I pass my 30 datasets of voltages as the parameter of the fit, which prints out lots of values for mu and the variance.
gmm = GaussianMixture(n_components=1)
gmm.fit(voltages)
print(gmm.means_, gmm.covariances_)
mu = gmm.means_[0][0]
variance = gmm.covariances_[0][0][0]
print(mu, variance)
I run the code line by line. There is an error on the second line:
fig, ax = plt.subplots(figsize=(6,6))
Xs = np.arange(min(voltages), max(voltages), 0.05)
The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
From searching the web, I understand this error occurs when a condition is applied to an array with more than one value, e.g. [T,T,F,F,T], so the truth value is ambiguous.
I edit my code to:
Xs = np.arange(min(np.all(voltages)), max(np.all(voltages)), 0.05)
which gives me this:
'numpy.bool_' object is not iterable
I understand it is not a boolean object. At this stage, I do not know how to proceed with the Gaussian curve fit. Can anyone provide me an alternative way to do it?
To plot a histogram, the most vanilla matplotlib function, hist, is my go-to. Basically, if I have a list of samples, then I can plot a histogram of them with 100 bins via:
import matplotlib.pyplot as plt
plt.hist(samples, bins=100)
plt.show()
If you'd like to fit normal distribution(s) to your data, the best model for that is a Gaussian Mixture Model; you can find more info on scikit-learn's GMM page. That said, this is the code I use to fit a single Gaussian distribution to a dataset. If I wanted to fit k normal distributions, I'd use n_components=k. I've also included the resulting plot:
from sklearn.mixture import GaussianMixture
import numpy as np
import matplotlib.pyplot as plt
data = np.random.uniform(-1,1, size=(800,1))
data += np.random.uniform(-1,1, size=(800,1))
gmm = GaussianMixture(n_components=1)
gmm.fit(data)
print(gmm.means_, gmm.covariances_)
mu = gmm.means_[0][0]
variance = gmm.covariances_[0][0][0]
print(mu, variance)
fig, ax = plt.subplots(figsize=(6, 6))
# use NumPy's reductions to get scalar bounds from the (800, 1) array
Xs = np.arange(data.min(), data.max(), 0.05)
ys = 1.0 / np.sqrt(2 * np.pi * variance) * np.exp(-0.5 / variance * (Xs - mu)**2)
ax.hist(data, bins=100, label='data')
px = ax.twinx()
px.plot(Xs, ys, c='r', linestyle='dotted', label='fit')
ax.legend()
px.legend(loc='upper left')
plt.show()
As for the standard-deviation question, I'm not sure which axis you'd like to capture: np.std(data, axis=0) gives the standard deviation of each column, and np.std(data, axis=1) gives it row by row.
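Regarding the "truth value of an array is ambiguous" error mentioned in the question: a likely cause is calling Python's built-in min()/max() on a 2-D array, since they end up comparing whole rows. A minimal sketch of the fix, assuming voltages has shape (30, n_samples) as in the question (the data here is a hypothetical placeholder):
import numpy as np

voltages = np.random.normal(size=(30, 702))  # hypothetical placeholder data

# built-in min(voltages) iterates over rows and tries to compare whole arrays;
# NumPy's reductions return a single scalar instead
Xs = np.arange(voltages.min(), voltages.max(), 0.05)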

Matplotlib Interpolation

I have a Data Frame df with two columns 'Egy' and 'fx' that I plot in this way:
plot_1 = df_data.plot(x="Egy", y="fx", color="red", ax=ax1, linewidth=0.85)
plot_1.set_xscale('log')
plt.show()
But then I want to smooth this curve using spline like this:
from scipy.interpolate import spline
import numpy as np
x_new = np.linspace(df_data['Egy'].min(), df_data['Egy'].max(),500)
f = spline(df_data['Egy'], df_data['fx'],x_new)
plot_1 = ax1.plot(x_new, f, color="black", linewidth=0.85)
ax1.set_xscale('log')  # ax1.plot returns a list of lines, which has no set_xscale
plt.show()
And the plot I get is this (ignore the scattered blue points; image not shown).
There are a lot of "peaks" in the smoothed curve, mainly at lower x. How can I smooth this curve properly?
When I follow busybear's suggestion to use np.logspace instead of np.linspace, I get the following picture, which is not very satisfactory either.
You have your x values linearly spaced with np.linspace, but your plot is log scaled. You could try np.geomspace for your x values, or plot without the log scale.
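A minimal sketch of that suggestion (the data is a hypothetical stand-in for df_data['Egy'] and df_data['fx']; note that scipy.interpolate.spline was removed in later SciPy versions, so make_interp_spline is used instead):
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import make_interp_spline

# stand-ins for df_data['Egy'] and df_data['fx']
x = np.array([0.02, 0.2, 2, 20, 200])
y = np.array([700, 850, 680, 410, 700])

# geometrically spaced sample points are evenly spaced on a log axis
x_new = np.geomspace(x.min(), x.max(), 500)
f = make_interp_spline(x, y, k=3)

plt.plot(x_new, f(x_new), color='black', linewidth=0.85)
plt.xscale('log')
plt.show()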
Using a spline will only work well for functions that are already smooth. What you need is to regularize the data and then interpolate. This will help smooth out the bumps. Regularization is an advanced topic, and it would not be appropriate to discuss it in detail here.
Update: for regularization using machine learning, you might look into the scikit-learn library for Python.


Excel-like Interpolation in Python

Plotting my data in excel as a scatter plot with smooth line and markers produces the type of figure I'm expecting. Image of Excel plots:
However, when trying to plot the data in matplotlib I'm running into some issues with interpolation. I'm using the interpolation package from SciPy; I've tried a range of different interpolation methods, including spline interpolation and BarycentricInterpolator, as suggested previously. These plots are obviously very different from the Excel-produced plots, however:
I've tried different smoothing and k values for spline interpolation; while the curve changes, the root problem still exists.
How would I be able to produce a fitted curve similar to the Excel-produced plots?
Thanks
The problem is that you interpolate the data on a linear scale but expect the outcome to look smooth on a logarithmic scale.
The idea is therefore to perform the interpolation on a log scale already, by transforming the data to its logarithm first. You can then transform the result back to a linear scale so that you can plot it on a log axis again.
from scipy.interpolate import interp1d, Akima1DInterpolator
import numpy as np
import matplotlib.pyplot as plt
x = np.array([0.02,0.2,2,20,200])
y = np.array([700,850,680,410, 700])
# original data points
plt.plot(x, y, marker="o", ls="")

# interpolate against log10(x), then map the sample grid back to linear scale
sx = np.log10(x)
xi_ = np.linspace(sx.min(), sx.max(), num=201)
xi = 10**xi_

f = interp1d(sx, y, kind="cubic")
yi = f(xi_)
plt.plot(xi, yi, label="cubic spline")

f2 = Akima1DInterpolator(sx, y)
yi2 = f2(xi_)
plt.plot(xi, yi2, label="Akima")

plt.gca().set_xscale("log")
plt.legend()
plt.show()

Plot a histogram such that bar heights sum to 1 (probability)

I'd like to plot a normalized histogram from a vector using matplotlib. I tried the following:
plt.hist(myarray, normed=True)
as well as:
plt.hist(myarray, normed=1)
but neither option produces a y-axis from [0, 1] such that the bar heights of the histogram sum to 1.
If you want the sum of all bars to equal unity, weight each bin by the total number of values:
import numpy as np

weights = np.ones_like(myarray) / len(myarray)
plt.hist(myarray, weights=weights)
Note for Python 2.x: cast one of the operands of the division to float(), as otherwise you would end up with zeros due to integer division.
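A quick, self-contained check (with made-up data) that the bar heights now sum to 1:
import numpy as np
import matplotlib.pyplot as plt

myarray = np.random.randn(1000)
weights = np.ones_like(myarray) / len(myarray)

counts, _, _ = plt.hist(myarray, weights=weights)
print(counts.sum())  # ~1.0
plt.show()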
It would be more helpful if you posed a more complete working (or in this case non-working) example.
I tried the following:
import numpy as np
import matplotlib.pyplot as plt
x = np.random.randn(1000)
fig = plt.figure()
ax = fig.add_subplot(111)
n, bins, rectangles = ax.hist(x, 50, density=True)
fig.canvas.draw()
plt.show()
This will indeed produce a bar-chart histogram with a y-axis that goes from [0,1].
Further, as per the hist documentation (i.e. ax.hist? from ipython), I think the sum is fine too:
*normed*:
    If *True*, the first element of the return tuple will
    be the counts normalized to form a probability density, i.e.,
    ``n/(len(x)*dbin)``. In a probability density, the integral of
    the histogram should be 1; you can verify that with a
    trapezoidal integration of the probability density function::

        pdf, bins, patches = ax.hist(...)
        print np.sum(pdf * np.diff(bins))
Giving this a try after the commands above:
np.sum(n * np.diff(bins))
I get a return value of 1.0 as expected. Remember that normed=True doesn't mean that the sum of the values at each bar will be unity, but rather that the integral over the bars is unity. In my case np.sum(n) returned approximately 7.2767.
I know this answer is too late considering the question is dated 2010 but I came across this question as I was facing a similar problem myself. As already stated in the answer, normed=True means that the total area under the histogram is equal to 1 but the sum of heights is not equal to 1. However, I wanted to, for convenience of physical interpretation of a histogram, make one with sum of heights equal to 1.
I found a hint in the following question - Python: Histogram with area normalized to something other than 1
But I was not able to find a way of making the bars mimic the histtype="step" feature of hist(). This diverted me to: Matplotlib - Stepped histogram with already binned data
If the community finds it acceptable I should like to put forth a solution which synthesises ideas from both the above posts.
import numpy as np
import matplotlib.pyplot as plt

# Let X be the array whose histogram needs to be plotted.
nx, xbins, ptchs = plt.hist(X, bins=20)
plt.clf()  # Get rid of this histogram since it is not the one we want.

nx_frac = nx / nx.sum()  # Each bin divided by the total number of objects.
width = xbins[1] - xbins[0]  # Width of each bin.
x = np.ravel(list(zip(xbins[:-1], xbins[:-1] + width)))
y = np.ravel(list(zip(nx_frac, nx_frac)))

plt.plot(x, y, linestyle="dashed", label="MyLabel")
# ... Further formatting.
This has worked wonderfully for me, though in some cases I have noticed that the left-most "bar" or the right-most "bar" of the histogram does not close down by touching the lowest point of the y-axis. In such a case, adding an element 0 at the beginning or the end of y achieved the necessary result.
Just thought I'd share my experience. Thank you.
Here is another simple solution using the np.histogram() method.
myarray = np.random.random(100)
results, edges = np.histogram(myarray, density=True)  # normed=True on older NumPy
binWidth = edges[1] - edges[0]
plt.bar(edges[:-1], results * binWidth, binWidth)
You can indeed check that the total sums up to 1 with:
> print(sum(results * binWidth))
1.0
The easiest solution is to use seaborn.histplot, or seaborn.displot with kind='hist', and specify stat='probability'
probability or proportion: normalize such that bar heights sum to 1
density: normalize such that the total area of the histogram equals 1
data: pandas.DataFrame, numpy.ndarray, mapping, or sequence
seaborn is a high-level API for matplotlib
Tested in python 3.8.12, matplotlib 3.4.3, seaborn 0.11.2
Imports and Data
import seaborn as sns
import matplotlib.pyplot as plt
# load data
df = sns.load_dataset('penguins')
sns.histplot
axes-level plot
# create figure and axes
fig, ax = plt.subplots(figsize=(6, 5))
p = sns.histplot(data=df, x='flipper_length_mm', stat='probability', ax=ax)
sns.displot
figure-level plot
p = sns.displot(data=df, x='flipper_length_mm', stat='probability', height=4, aspect=1.5)
Since matplotlib 3.0.2, normed=True is deprecated. To get the desired output I had to do:
import numpy as np
import matplotlib.pyplot as plt

data = np.random.randn(1000)
bins = np.linspace(-3.0, 3.0, 51)  # np.arange(-3.0, 3.0, 51) would yield a single edge
counts, _ = np.histogram(data, bins=bins)

density = True  # choose the behaviour you want
if density:  # equivalent of normed=True
    counts_weighter = counts.sum()
else:  # equivalent of normed=False
    counts_weighter = 1.0

plt.hist(bins[:-1], bins=bins, weights=counts / counts_weighter)
Trying to specify weights and density simultaneously as arguments to plt.hist() did not work for me. If anyone knows of a way to get that working without access to the normed keyword argument, then please let me know in the comments and I will delete/modify this answer.
If you want bin centres, then don't use bins[:-1], which are the bin edges; you need to choose a suitable scheme for how to calculate the centres (which may or may not be trivially derived).
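For example, one common scheme, assuming uniform bins, is the midpoint of each bin:
centres = 0.5 * (bins[:-1] + bins[1:])  # midpoint of each bin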
