distplot was deprecated in favour of displot.
The previous function had the option to draw a normal curve.
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
ax = sns.distplot(df.extracted, bins=40, kde=False, fit=stats.norm)
The fit=stats.norm option doesn't work with displot anymore. The answer to this question shows an approach that plots the normal curve afterwards, however it is demonstrated on random data centred around 0.
seaborn.displot is a figure-level plot where the kind parameter specifies the approach. When kind='hist', the parameters of seaborn.histplot are available.
For axes-level plots, see How to add a standard normal pdf over a seaborn histogram.
seaborn.axisgrid.FacetGrid.map expects dataframe column names; as such, to map the pdf onto a seaborn.displot, the data needs to be in a dataframe.
An issue is that, within .map, x_pdf is always computed from the limits of one specific axes:
x0, x1 = p1.axes[0][0].get_xlim()
If the axes limits differ across facets (sharex=False), there isn't a straightforward way to get the xlim of each facet from within .map.
References:
seaborn histplot and displot output doesn't match
Building structured multi-plot grids
Tested in python 3.8.11, pandas 1.3.2, matplotlib 3.4.2, seaborn 0.11.2
Single Facet
.map can be used
import pandas as pd
import seaborn as sns
import numpy as np
import scipy.stats  # norm.fit and norm.pdf are used below
import matplotlib.pyplot as plt  # plt.plot is used inside map_pdf
# data
np.random.seed(365)
x1 = np.random.normal(10, 3.4, size=1000) # mean of 10
df = pd.DataFrame({'x1': x1})
# display(df.head(3))
x1
0 10.570932
1 11.779918
2 12.779077
# function for mapping the pdf
def map_pdf(x, **kwargs):
    mu, std = scipy.stats.norm.fit(x)
    x0, x1 = p1.axes[0][0].get_xlim()  # axes for p1 is required to determine x_pdf
    x_pdf = np.linspace(x0, x1, 100)
    y_pdf = scipy.stats.norm.pdf(x_pdf, mu, std)
    plt.plot(x_pdf, y_pdf, c='r')
p1 = sns.displot(data=df, x='x1', kind='hist', bins=40, stat='density')
p1.map(map_pdf, 'x1')
Single or Multiple Facets
It's easier to iterate through each axes and add the pdf
# data
np.random.seed(365)
x1 = np.random.normal(10, 3.4, size=1000) # mean of 10
x2 = np.random.standard_normal(1000) # mean of 0
df = pd.DataFrame({'x1': x1, 'x2': x2}).melt() # create long dataframe
# display(df.head(3))
variable value
0 x1 10.570932
1 x1 11.779918
2 x1 12.779077
p1 = sns.displot(data=df, x='value', col='variable', kind='hist', bins=40, stat='density', common_bins=False,
common_norm=False, facet_kws={'sharey': True, 'sharex': False})
# extract and flatten the axes from the figure
axes = p1.axes.ravel()
# iterate through each axes
for ax in axes:
    # extract the variable name
    var = ax.get_title().split(' = ')[1]
    # select the data for the variable
    data = df[df.variable.eq(var)]
    mu, std = scipy.stats.norm.fit(data['value'])
    x0, x1 = ax.get_xlim()
    x_pdf = np.linspace(x0, x1, 100)
    y_pdf = scipy.stats.norm.pdf(x_pdf, mu, std)
    ax.plot(x_pdf, y_pdf, c='r')
If you want to replicate the same plot as your distplot, I suggest using histplot. Fitting our data to a normal is one line of code.
import numpy as np
from scipy import stats
import seaborn as sns
x = np.random.normal(10, 3.4, size=1000)
ax = sns.histplot(x, bins=40, stat='density')
mu, std = stats.norm.fit(x)
xx = np.linspace(*ax.get_xlim(),100)
ax.plot(xx, stats.norm.pdf(xx, mu, std));
Output
As I go through online tutorials and/or articles in general, when I encounter a plot that uses
the Seaborn distplot function I re-create it using either histplot or displot.
I do this because distplot is deprecated and I want to rewrite the code to newer standards.
I am going through this article: https://www.kite.com/blog/python/data-analysis-visualization-python/
and there is a section using distplot whose output I cannot replicate.
This is the section of code that I am trying to replicate:
col_names = ['StrengthFactor', 'PriceReg', 'ReleaseYear', 'ItemCount', 'LowUserPrice', 'LowNetPrice']
fig, ax = plt.subplots(len(col_names), figsize=(8, 40))
for i, col_val in enumerate(col_names):
    x = sales_data_hist[col_val][:1000]
    sns.distplot(x, ax=ax[i], rug=True, hist=False)
    outliers = x[percentile_based_outlier(x)]
    ax[i].plot(outliers, np.zeros_like(outliers), 'ro', clip_on=False)
    ax[i].set_title('Outlier detection - {}'.format(col_val), fontsize=10)
    ax[i].set_xlabel(col_val, fontsize=8)
plt.show()
Both distplot itself and this way of using the axis variable are deprecated. The code, for now, still runs.
In a nutshell, all I am trying to do is replicate the exact output of the code above (rug plot, the red dots representing the removed values, etc.) without using deprecated code.
I have tried various combinations of displot and histplot but I have been unable to get the exact same output any other way.
The sns.kdeplot() function shows the kde curve available in distplot. (In fact, distplot just calls kdeplot internally). Similarly, there is sns.rugplot() to show the rug.
Here is an example with the easier to replicate iris dataset:
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
def percentile_based_outlier(data, threshold=95):
    diff = (100 - threshold) / 2
    minval, maxval = np.percentile(data, [diff, 100 - diff])
    return (data < minval) | (data > maxval)
iris = sns.load_dataset('iris')
col_names = [col for col in iris.columns if iris[col].dtype == 'float64'] # the numerical columns
fig, axs = plt.subplots(len(col_names), figsize=(5, 12))
for ax, col_val in zip(axs, col_names):
    x = iris[col_val]
    sns.kdeplot(x, ax=ax)
    sns.rugplot(x, ax=ax, color='C0')
    outliers = x[percentile_based_outlier(x)]
    ax.plot(outliers, np.zeros_like(outliers), 'ro', clip_on=False)
    ax.set_title(f'Outlier detection - {col_val}', fontsize=10)
    ax.set_xlabel('')  # ax[i].set_xlabel(col_val, fontsize=8)
plt.tight_layout()
plt.show()
To use displot, the dataframe can be converted to "long form" via pd.melt(). The outliers can be added via a custom function called by g.map_dataframe(...):
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
def percentile_based_outlier(data, threshold=95):
    diff = (100 - threshold) / 2
    minval, maxval = np.percentile(data, [diff, 100 - diff])
    return (data < minval) | (data > maxval)

def show_outliers(data, color):
    col_name = data['variable'].values[0]
    x = data['value'].to_numpy()
    outliers = x[percentile_based_outlier(x)]
    plt.plot(outliers, np.zeros_like(outliers), 'ro', clip_on=False)
    plt.xlabel('')
iris = sns.load_dataset('iris')
col_names = [col for col in iris.columns if iris[col].dtype == 'float64'] # the numerical columns
iris_long = iris.melt(value_vars=col_names)
g = sns.displot(data=iris_long, x='value', kind='kde', rug=True, row='variable',
height=2.2, aspect=3,
facet_kws={'sharey': False, 'sharex': False})
g.map_dataframe(show_outliers)
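To also reproduce the per-facet titles from the loop version, FacetGrid.set_titles can be used; the template below is an assumption on my part and not part of the original answer:
g.set_titles(row_template='Outlier detection - {row_name}')  # hypothetical addition to mimic the loop's titles
plt.show()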
How can the following code be modified to show the mean as well as the different error bars on each bar of the bar plot?
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("white")
a,b,c,d = [],[],[],[]
for i in range(1, 5):
    np.random.seed(i)
    a.append(np.random.uniform(35, 55))
    b.append(np.random.uniform(40, 70))
    c.append(np.random.uniform(63, 85))
    d.append(np.random.uniform(59, 80))
data_df = pd.DataFrame({'stages': [1, 2, 3, 4], 'S1': a, 'S2': b, 'S3': c, 'S4': d})
print("Delay:")
display(data_df)
S1 S2 S3 S4
0 43.340440 61.609735 63.002516 65.348984
1 43.719898 40.777787 75.092575 68.141770
2 46.015958 61.244435 69.399904 69.727380
3 54.340597 56.416967 84.399056 74.011136
meansd_df=data_df.describe().loc[['mean', 'std'],:].drop('stages', axis = 1)
display(meansd_df)
sns.set()
sns.set_style('darkgrid',{"axes.facecolor": ".92"}) # (1)
sns.set_context('notebook')
fig, ax = plt.subplots(figsize = (8,6))
x = meansd_df.columns
y = meansd_df.loc['mean',:]
yerr = meansd_df.loc['std',:]
plt.xlabel("Time", size=14)
plt.ylim(-0.3, 100)
width = 0.45
for i, j, k in zip(x, y, yerr):  # (2)
    ax.bar(i, j, width, yerr=k, edgecolor="black",
           error_kw=dict(lw=1, capsize=8, capthick=1))  # (3)
ax.set(ylabel = 'Delay')
from matplotlib import ticker
ax.yaxis.set_major_locator(ticker.MultipleLocator(10))
plt.savefig("Over.png", dpi=300, bbox_inches='tight')
Given the example data, for a seaborn.barplot with capped error bars, data_df must be converted from a wide format to a tidy (long) format, which can be accomplished with pandas.DataFrame.stack or pandas.DataFrame.melt (a .stack() sketch is shown after the sample data below).
It is also important to keep in mind that a bar plot shows only the mean (or another chosen estimator) value.
Sample Data and DataFrame
.iloc[:, 1:] is used to skip the 'stages' column at column index 0.
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# given data_df from the OP, select the columns except stage and reshape to long format
df = data_df.iloc[:, 1:].melt(var_name='set', value_name='val')
# display(df.head())
set val
0 S1 43.340440
1 S1 43.719898
2 S1 46.015958
3 S1 54.340597
4 S2 61.609735
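As mentioned above, pandas.DataFrame.stack is an alternative to melt; here is a hedged sketch that produces the same long format (the column names are chosen to match the melt version, and the row order differs, which is irrelevant for the bar plot):
df = (data_df.iloc[:, 1:]          # drop the 'stages' column
      .stack()                     # wide -> long; values indexed by (row, column)
      .reset_index(level=1)        # move the column labels into a regular column
      .rename(columns={'level_1': 'set', 0: 'val'}))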
Updated as of matplotlib v3.4.2
Use matplotlib.pyplot.bar_label
See How to add value labels on a bar chart for additional details and examples with .bar_label.
Some formatting can be done with the fmt parameter, but more sophisticated formatting should be done with the labels parameter, as shown in How to add multiple annotations to a barplot.
Tested with seaborn v0.11.1, which uses matplotlib as the plotting engine.
fig, ax = plt.subplots(figsize=(8, 6))
# add the plot
sns.barplot(x='set', y='val', data=df, capsize=0.2, ax=ax)
# add the annotation
ax.bar_label(ax.containers[-1], fmt='Mean:\n%.2f', label_type='center')
ax.set(ylabel='Mean Time')
plt.show()
plot with seaborn.barplot
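If more control is needed than fmt provides (as noted above), the strings can be built manually and passed through the labels parameter; a hedged sketch reusing the ax and container from the snippet above:
# build one label per bar from its height, then pass the list via labels=
labels = [f'Mean:\n{bar.get_height():0.2f}' for bar in ax.containers[-1]]
ax.bar_label(ax.containers[-1], labels=labels, label_type='center')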
Using matplotlib before version 3.4.2
The default for the estimator parameter is mean, so the height of the bar is the mean of the group.
The bar height is extracted from p with .get_height, which can be used to annotate the bar.
fig, ax = plt.subplots(figsize=(8, 6))
sns.barplot(x='set', y='val', data=df, capsize=0.2, ax=ax)
# show the mean
for p in ax.patches:
    h, w, x = p.get_height(), p.get_width(), p.get_x()
    xy = (x + w / 2., h / 2)
    text = f'Mean:\n{h:0.2f}'
    ax.annotate(text=text, xy=xy, ha='center', va='center')
ax.set(xlabel='Delay', ylabel='Time')
plt.show()
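Since the bar height is just the estimator value, switching to another estimator is a one-line change; a hedged sketch using the median instead of the mean (np.median is my substitution, not part of the original answer):
fig, ax = plt.subplots(figsize=(8, 6))
sns.barplot(x='set', y='val', data=df, estimator=np.median, capsize=0.2, ax=ax)  # median bars instead of means
plt.show()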
Seaborn is most powerful with long-form data, so you might want to transform your data, something like this:
sns.barplot(data=data_df.melt('stages', value_name='Delay', var_name='Time'),
x='Time', y='Delay',
capsize=0.1, edgecolor='k')
Output:
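Note, as an assumption about newer releases rather than part of the original answer: from seaborn 0.12 the error bar type is selected with the errorbar parameter, so a standard-deviation bar would look roughly like:
sns.barplot(data=data_df.melt('stages', value_name='Delay', var_name='Time'),
            x='Time', y='Delay', errorbar='sd', capsize=0.1, edgecolor='k')  # errorbar= replaces ci= in seaborn >= 0.12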
I've decided to give seaborn version 0.11.0 a go and have been playing around with the displot function, which, as I understand it, will replace distplot. I'm just trying to figure out how to plot a Gaussian fit onto a histogram. Here's some example code.
import seaborn as sns
import numpy as np
x = np.random.normal(size=500) * 0.1
With distplot I could do:
sns.distplot(x, kde=False, fit=norm)
But how to go about it in displot or histplot?
So far the closest I've come to is:
sns.histplot(x,stat="probability", bins=30, kde=True, kde_kws={"bw_adjust":3})
But I think this just increases the smoothing of the plotted kde, which isn't exactly what I'm going for.
I really miss the fit parameter too. It doesn't appear they replaced that functionality when they deprecated the distplot function. Until they plug that hole, I created a short function to add the normal distribution overlay to my histplot. I just paste the function at the top of a file along with the imports, and then I just have to add one line to add the overlay when I want it.
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats
def normal(mean, std, color="black"):
    x = np.linspace(mean - 4*std, mean + 4*std, 200)
    p = stats.norm.pdf(x, mean, std)
    z = plt.plot(x, p, color, linewidth=2)
data = np.random.normal(size=500) * 0.1
ax = sns.histplot(x=data, stat="density")
normal(data.mean(), data.std())
If you would rather use stat="probability" instead of stat="density", you can normalize the fit curve with something like this:
def normal(mean, std, histmax=False, color="black"):
    x = np.linspace(mean - 4*std, mean + 4*std, 200)
    p = stats.norm.pdf(x, mean, std)
    if histmax:
        p = p * histmax / max(p)
    z = plt.plot(x, p, color, linewidth=2)
data = np.random.normal(size=500) * 0.1
ax = sns.histplot(x=data, stat="probability")
normal(data.mean(), data.std(), histmax=ax.get_ylim()[1])
Sorry I am late to the party. Just check if this will meet your requirement.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
data = np.random.normal(size=500) * 0.1
mu, std = norm.fit(data)
# Plot the histogram.
plt.hist(data, bins=25, density=True, alpha=0.6, color='g')
# Plot the PDF.
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = norm.pdf(x, mu, std)
plt.plot(x, p, 'k', linewidth=2)
plt.show()
What I am trying to do is to play around with some random distribution. I don't want it to be normal. But for the time being normal is easier.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

ws = norm.rvs(4.0, 1.5, size=100)
# normed= was removed from np.histogram; density=True already gives a normalized histogram
density, bins = np.histogram(ws, 50, density=True)
unity_density = density / density.sum()
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharex=True, figsize=(12, 6))
widths = bins[1:] - bins[:-1]  # positive bin widths
ax1.bar(bins[1:], unity_density, width=widths)
ax2.bar(bins[1:], unity_density.cumsum(), width=widths)
fig.tight_layout()
Then what I can do is visualize the CDF in terms of points.
density1=unity_density.cumsum()
x=bins[:-1]
y=density1
plt.plot(x, density1, 'o')
So what I have been trying to do is use the np.interp function on the output of np.histogram in order to obtain a smooth curve representing the CDF and to extract the percent points to plot them. Ideally, I need to try to do it all both manually and using the ppf function from scipy.
I have always struggled with statistics as an undergraduate. I am in grad school now and try to put myself through as many exercises like this as possible in order to get a deeper understanding of what is happening. I've reached a point of desperation with this task.
Thank you!
One possibility to get smoother results is to use more samples; using 10^5 samples and 100 bins, I get the following images:
ws = norm.rvs(loc=4.0, scale=1.5, size=100000)
density, bins = np.histogram(ws, bins=100, density=True)
In general you could use scipy's interpolation module to smooth your CDF.
For 100 samples and a smoothing factor of s=0.01 I get:
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import splev, splrep
density1 = unity_density.cumsum()
x = bins[:-1]
y = density1
# Interpolation
spl = splrep(x, y, s=0.01, per=False)
x2 = np.linspace(x[0], x[-1], 200)
y2 = splev(x2, spl)
# Plotting
fig, ax = plt.subplots()
plt.plot(x, density1, 'o')
plt.plot(x2, y2, 'r-')
The third possibility is to calculate the CDF analytically. If you generate the noise yourself with a numpy / scipy function, most of the time there is already an implementation of the CDF available; otherwise you should find it on Wikipedia. If your samples come from measurements, that is of course a different story.
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
x = np.linspace(-2, 10)
y = norm(loc=4.0, scale=1.5).cdf(x)
ax.plot(x, y, 'bo-')
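To touch on the np.interp and ppf part of the question, here is a minimal hedged sketch: it reads percent points off the empirical CDF by inverting it with np.interp and compares them with scipy's analytic ppf (it assumes the bins and unity_density arrays from the question's code):
import numpy as np
from scipy.stats import norm

cdf_y = unity_density.cumsum()                      # empirical CDF values at the right bin edges
cdf_x = bins[1:]
quantiles = np.array([0.25, 0.5, 0.75])
empirical_pp = np.interp(quantiles, cdf_y, cdf_x)   # invert the CDF by interpolation
analytic_pp = norm(loc=4.0, scale=1.5).ppf(quantiles)
print(empirical_pp, analytic_pp)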
I'm trying to plot a CDF from multiple simulation runs using Seaborn. I created a very simple code to emulate my results:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df1 = pd.DataFrame({'A':np.random.randint(0, 100, 1000)})
df2 = pd.DataFrame({'A':np.random.randint(0, 100, 1000)})
df3 = pd.DataFrame({'A':np.random.randint(0, 100, 1000)})
f, ax = plt.subplots(figsize=(8, 8))
ax = sns.kdeplot(df1['A'], cumulative=True)
ax = sns.kdeplot(df2['A'], cumulative=True)
ax = sns.kdeplot(df3['A'], cumulative=True)
plt.show()
The code above creates the following plot:
But, since the three lines are results from the same simulation with different seeds, I'd like to "merge" the three lines into one and add a shaded area around the line, representing min and max or the std of the three different runs.
How can this be accomplished in Seaborn?
You may use fill_between to fill between two curves. The problem here is that the kde support would be different for the three curves, so obtaining a common kde support requires calculating the cdf manually. This could be done as follows:
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
def cdf(data, limits="auto", npoints=600):
    kde = stats.gaussian_kde(data)
    bw = kde.factor
    if limits == "auto":
        limits = (data.min(), data.max())
    limits = (limits[0] - bw*np.diff(limits)[0],
              limits[1] + bw*np.diff(limits)[0])
    x = np.linspace(limits[0], limits[1], npoints)
    y = [kde.integrate_box(x[0], x[i]) for i in range(len(x))]
    return x, np.array(y)
d1 = np.random.randint(14, 86, 1000)
d2 = np.random.randint(10, 100, 1000)
d3 = np.random.randint(0, 90, 1000)
mini = np.min((d1.min(), d2.min(), d3.min()))
maxi = np.max((d1.max(), d2.max(), d3.max()))
x1,y1 = cdf(d1, limits=(mini, maxi))
x2,y2 = cdf(d2, limits=(mini, maxi))
x3,y3 = cdf(d3, limits=(mini, maxi))
y = np.column_stack((y1, y2, y3))
ymin = np.min(y, axis=1)
ymax = np.max(y, axis=1)
f, ax = plt.subplots()
ax.plot(x1,y1)
ax.plot(x2,y2)
ax.plot(x3,y3)
ax.fill_between(x1, ymin, ymax, color="turquoise", alpha=0.4, zorder=0)
plt.show()
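If a band of mean ± one standard deviation across the runs is preferred to the min/max envelope, here is a hedged variation on the same arrays:
# shade mean +/- one standard deviation across the three runs instead of min/max
ymean = y.mean(axis=1)
ystd = y.std(axis=1)
f2, ax2 = plt.subplots()
ax2.plot(x1, ymean, color="navy")
ax2.fill_between(x1, ymean - ystd, ymean + ystd, color="turquoise", alpha=0.4, zorder=0)
plt.show()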