Python, seaborn, statistical analysis using statannot doesn't look right - python

I used statannot to perform a statistical test on some basic data, but the results from the statistical test don't seem correct. For example, a couple of my comparisons come up with "P_val=0.000e+00 U_stat=0.000e+00", which I think should not be possible. Is there something wrong with my data frame and/or code?
Here is the data frame I am using:
and here is my code:
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
from statannot import add_stat_annotation
import scipy.stats as sp
data = pd.read_excel('Z:/DMF/GROUPS/gr_Veening/Users/Vik/scRNA-seq/FACSAria/Adherence-invasion assays/adherence_invasion_assay_a549-RFP 4-6-21.xlsx',sheet_name="Sheet2", header = 0)
sns.set_theme(style="darkgrid")
ax1 = sns.boxplot(x="Strain", y="adherence_counts", data=data)
x = "Strain"
y = "adherence_counts"
order = ["D39", "D39 Δcps", "19F", "19F ΔcomCDE"]
ax1 = sns.boxplot(data=data, x=x, y=y, order=order)
plt.title("Adherence Assay")
plt.ylabel('CFU/ml')
plt.xlabel('')
ax1.set(xticklabels=["D39", r"D39 Δ$\it{cps}$", "19F", r"19F Δ$\it{comCDE}$"])
add_stat_annotation(ax1, data=data, x=x, y=y, order=order,
                    box_pairs=[("D39", "19F"), ("D39", "D39 Δcps"), ("D39 Δcps", "19F"), ("19F", "19F ΔcomCDE")],
                    test='Mann-Whitney', text_format='star', loc='inside', verbose=2)
Finally, here are the results from this statistical test:
D39 v.s. D39 Δcps: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=0.000e+00
D39 Δcps v.s. 19F: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=1.000e+00 U_stat=2.000e+00
19F v.s. 19F ΔcomCDE: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=7.617e-01 U_stat=8.000e+00
D39 v.s. 19F: Mann-Whitney-Wilcoxon test two-sided with Bonferroni correction, P_val=0.000e+00 U_stat=0.000e+00
C:\Users\Vik\anaconda3\lib\site-packages\scipy\stats\stats.py:7171: RuntimeWarning: divide by zero encountered in double_scalars
z = (bigu - meanrank) / sd
Any help would be greatly appreciated, thanks!

Your problem comes from two parts:
Statistically, in some of your cases (such as "D39" vs "19F"), every value in one group is larger (or smaller) than every value in the other, hence the U statistic of 0 and the extreme p-value. Such results are entirely possible: the test examines only the ranks of the values provided, which has both advantages and limitations (and Mann-Whitney's test is not well suited to such small sample sizes either, especially with scipy assuming equal variances).
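To illustrate the first point, here is a minimal sketch with made-up numbers (not the asker's data): when every value in one group is below every value in the other, scipy's Mann-Whitney test returns a U statistic of 0 and the smallest p-value those sample sizes allow.
from scipy.stats import mannwhitneyu
group_a = [1.0, 2.0, 3.0]    # hypothetical counts, all below group_b
group_b = [10.0, 11.0, 12.0]
u_stat, p_val = mannwhitneyu(group_a, group_b, alternative='two-sided')
print(u_stat, p_val)  # U = 0.0; p is as small as 3-vs-3 samples allow (~0.1), not 0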
Now, that line z = (bigu - meanrank) / sd failing means that sd = np.sqrt(T * n1 * n2 * (n1+n2+1) / 12.0) is 0, so in this case n1 and/or n2 is 0 (these are len(x) and len(y)); see the source in scipy. So:
There is a bug in statannot, because this can happen silently if order and box_pairs refer to a label which does not exist in the dataframe, which I'll correct in statannotations. Thank you, then.
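As a hypothetical illustration of that failure mode (made-up labels and counts, not the asker's file): if a name in order or box_pairs does not match the dataframe exactly, because of a different Unicode character, stray whitespace or a typo, the selection for that group is silently empty, n1 or n2 becomes 0, and sd becomes 0.
import pandas as pd
data = pd.DataFrame({"Strain": ["D39", "D39", "19F", "19F"],
                     "adherence_counts": [1e5, 2e5, 3e5, 4e5]})
subset = data[data["Strain"] == "D39 Δcps"]  # label absent from this frame
print(len(subset))  # 0 -> a Mann-Whitney test on this "group" divides by zero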
However, I cannot reproduce your Warning with a copy of your dataframe.
If this were the only bug, you should see a missing box in your plot at the point you showed us.
If not, is it possible you updated some of the code but did not copy the latest output here? Otherwise, there may be something more to uncover; please let us know.
EDIT: As discovered in the discussion, the second problem can happen in statannot if there is a mismatch between a label in order, box_pairs and in the dataset. This has been patched in statannotations, a fork of statannot.
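For reference, here is a sketch of the equivalent annotation with the Annotator interface from statannotations; the names below follow the statannotations README as I recall it, so treat them as an assumption and check the package's current documentation.
from statannotations.Annotator import Annotator
pairs = [("D39", "19F"), ("D39", "D39 Δcps"),
         ("D39 Δcps", "19F"), ("19F", "19F ΔcomCDE")]
annotator = Annotator(ax1, pairs, data=data, x=x, y=y, order=order)
annotator.configure(test='Mann-Whitney', text_format='star', loc='inside')
annotator.apply_and_annotate()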

Related

Setting different errors for pandas plot bar

I'm trying to plot a probability distribution using a pandas.Series and I'm struggling to set different yerr for each bar. In summary, I'm plotting the following distribution:
It comes from a Series and it is working fine, except for the yerr: the error bars should not go above 1 or below 0. So, I'd like to set different errors for each bar. Therefore, I went to the documentation, which is available here and here.
According to them, I have 3 options for either yerr or xerr:
scalar: Symmetric +/- values for all data points.
shape(N,): Symmetric +/- values for each data point.
shape(2, N): Separate - and + values for each bar. The first row contains the lower errors, the second row contains the upper errors.
The case I need is the last one. In this case, I can use a DataFrame, Series, array-like, dict or str. Thus, I set the arrays for each yerr bar; however, it's not working as expected. Just to replicate what's happening, I prepared the following examples:
First I set a pandas.Series:
import pandas as pd
se = pd.Series(data=[0.1, 0.2, 0.3, 0.4, 0.4, 0.5, 0.2, 0.1, 0.1],
               index=list('abcdefghi'))
Then, I'm replicating each case:
This works as expected:
err1 = [0.2]*9
se.plot(kind="bar", width=1.0, yerr=err1)
This works as expected:
err2 = err1
err2[3] = 0.5
se.plot(kind="bar", width=1.0, yerr=err1)
Now the problem: this doesn't work as expected!
err_up = [0.3]*9
err_low = [0.1]*9
err3 = [err_low, err_up]
se.plot(kind="bar", width=1.0, yerr=err3)
It's not setting different errors for low and up. I found an example here and a similar SO question here; although they use matplotlib instead of pandas, it should work here too.
I'm glad if you have any solution about that.
Thank you.
Strangely, plt.bar works as expected:
import matplotlib.pyplot as plt

err_up = [0.3]*9
err_low = [0.1]*9
err3 = [err_low, err_up]
fig, ax = plt.subplots()
ax.bar(se.index, se, width=1.0, yerr=err3)
plt.show()
Output:
A bug/feature/design-decision of pandas maybe?
Based on @Quanghoang's comment, I started to think it was a bug. So, I tried to change the yerr shape and, surprisingly, the following code worked:
err_up = [0.3]*9
err_low = [0.1]*9
err3 = [[err_low, err_up]]
print (err3)
se.plot(kind="bar", width=1.0, yerr=err3)
Observe that I included a new axis in err3; it is now a (1, 2, N) array. However, the documentation says it should be (2, N).
In addition, a possible workaround that I found was to set ax.set_ylim(0, 1). It doesn't solve the problem, but it plots the graph correctly.
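Putting the two working variants above into one self-contained sketch (the extra leading axis on the pandas call is the empirical workaround described here, not documented behaviour):
import pandas as pd
import matplotlib.pyplot as plt
se = pd.Series(data=[0.1, 0.2, 0.3, 0.4, 0.4, 0.5, 0.2, 0.1, 0.1],
               index=list('abcdefghi'))
err_low = [0.1] * 9  # lower errors
err_up = [0.3] * 9   # upper errors
# matplotlib accepts the documented (2, N) shape directly
fig, ax = plt.subplots()
ax.bar(se.index, se, width=1.0, yerr=[err_low, err_up])
# Series.plot needs an extra leading axis, i.e. shape (1, 2, N)
se.plot(kind="bar", width=1.0, yerr=[[err_low, err_up]])
plt.show()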

Quantile-Quantile Plot using python statsmodels api

I am trying to see whether a normal distribution with specific parameters fits a data set. However, it seems qqplot does not work as it is expected to. The following small example shows this:
import numpy as np
import statsmodels.api as sm
import pylab
test = np.random.normal(20,5, 1000)
sm.qqplot(test, loc = 20, scale = 5 , line='45')
pylab.show()
As one can see, I expect the points to lie around the line with slope = 1, but it gives the following figure:
Can anyone explain me why this happens?
You can use line='45' and it will work well if you have z-normalized data, meaning your distribution has mean = 0 and sd = 1. In other cases you have several options: line='s' fits against a standardized line (the expected order statistics are scaled by the standard deviation of the given sample and have the mean added to them), and line='q' fits a line through the quartiles, which in my opinion is the really meaningful one and lets you see clearly how your data distribution deviates from the normal one. You can also use line='r' to see the fit to a regression line. By default, line is set to None.
Simply use code like this:
import numpy as np
import statsmodels.api as sm
import pylab
test = np.random.normal(20, 5, 1000)
sm.qqplot(test, line='q')
pylab.show()
Please add "fit" as :
sm.qqplot(aaa, line = "45", fit = True)
I noticed that when I omitted the line='45' parameter from your code, the following plot results.
We can see what has happened: in the Q-Q plot that statsmodels makes, the theoretical quantiles are not rescaled back to the dimensions of the original pseudosample, which is why the blue line is confined to the left edge of your plot.
I don't know how to make statsmodels do what you want; however, there is another way — see https://stackoverflow.com/a/47189575/131187.
You can try setting the fit parameter to True
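A minimal sketch combining the suggestions above (line='45' plus fit=True) on the original example; with fit=True the sample is standardized before plotting, so the 45-degree line becomes meaningful again:
import numpy as np
import statsmodels.api as sm
import pylab
test = np.random.normal(20, 5, 1000)
sm.qqplot(test, line='45', fit=True)  # points now scatter around the 45-degree line
pylab.show()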

`ValueError: A value in x_new is above the interpolation range.` - what reasons other than non-ascending values?

I receive this error in scipy interp1d function. Normally, this error would be generated if the x was not monotonically increasing.
import numpy as np
import scipy.interpolate as spi

def refine(coarsex, coarsey, step):
    finex = np.arange(min(coarsex), max(coarsex)+step, step)
    intfunc = spi.interp1d(coarsex, coarsey, axis=0)
    finey = intfunc(finex)
    return finex, finey

for num, tfile in enumerate(files):
    tfile = tfile.dropna(how='any')
    x = np.array(tfile['col1'])
    y = np.array(tfile['col2'])
    finex, finey = refine(x, y, 0.01)
The code is correct, because it successfully worked on 6 data files and threw the error for the 7th. So there must be something wrong with the data. But as far as I can tell, the data increase all the way down.
I am sorry for not providing an example, because I am not able to reproduce the error on an example.
There are two things that could help me:
1. Some brainstorming: if the data are indeed monotonically increasing, what else could produce this error? Another hint, regarding the decimals, could be in this question, but I think my solution (the min and max of x) is robust enough to avoid it. Or isn't it?
2. Is it possible (and how?) to return the value of x_new and its index when throwing the ValueError: A value in x_new is above the interpolation range., so that I could actually see where in the file the problem is?
UPDATE
So the problem is that, for some reason, max(finex) is larger than max(coarsex) (one is .x39 and the other is .x4). I hoped rounding the original values to 2 significant digits would solve the problem, but it didn't: rounding only changes how many digits are displayed, while the computation still uses the undisplayed precision. What can I do about it?
If you are running Scipy v. 0.17.0 or newer, you can pass fill_value='extrapolate' to spi.interp1d, and it will extrapolate to accommodate those values of yours that lie outside the interpolation range. So define your interpolation function like so:
intfunc = spi.interp1d(coarsex, coarsey, axis=0, fill_value="extrapolate")
Be forewarned, however!
Depending on what your data look like and the type of interpolation you are performing, the extrapolated values can be erroneous. This is especially true if you have noisy or non-monotonic data. In your case you might be OK, because your x_new value is only slightly beyond your interpolation range.
Here's a simple demonstration of how this feature can work nicely but also give erroneous results.
import numpy as np
import scipy.interpolate as spi
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 100)
y = x + np.random.randint(-1, 1, 100)/100
x_new = np.linspace(0, 1.1, 100)
intfunc = spi.interp1d(x, y, fill_value="extrapolate")
y_interp = intfunc(x_new)

plt.plot(x_new, y_interp, 'r', label='interp/extrap')
plt.plot(x, y, 'b--', label='data')
plt.legend()
plt.show()
So the interpolated portion (in red) worked well, but the extrapolated portion clearly fails to follow the otherwise linear trend in this data because of the noise. So have some understanding of your data and proceed with caution.
A quick test of your finex calculation shows that it can (always?) get into the extrapolation region.
In [124]: coarsex=np.random.rand(100)
In [125]: max(coarsex)
Out[125]: 0.97393109991816473
In [126]: step=.01;finex=np.arange(min(coarsex), max(coarsex)+step, step);(max(
...: finex),max(coarsex))
Out[126]: (0.98273730602114795, 0.97393109991816473)
In [127]: step=.001;finex=np.arange(min(coarsex), max(coarsex)+step, step);(max
...: (finex),max(coarsex))
Out[127]: (0.97473730602114794, 0.97393109991816473)
Again it is a quick test, and may be missing some critical step or value.
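One way to avoid landing in the extrapolation region in the first place, sketched under the assumption that finex only needs to stay within the measured range, is to clip the np.arange output back to max(coarsex):
import numpy as np
import scipy.interpolate as spi
def refine(coarsex, coarsey, step):
    finex = np.arange(min(coarsex), max(coarsex) + step, step)
    # np.arange can overshoot max(coarsex) by floating-point error, so clip it
    finex = finex[finex <= max(coarsex)]
    intfunc = spi.interp1d(coarsex, coarsey, axis=0)
    return finex, intfunc(finex)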

What's the correct usage of matplotlib.mlab.normpdf()?

I intend for part of a program I'm writing to automatically generate Gaussian distributions of various statistics over multiple raw text sources; however, I'm having some issues generating the graphs as per the guide at:
python pylab plot normal distribution
The general gist of the plot code is as follows.
import numpy as np
import matplotlib.mlab as mlab
import matplotlib.pyplot as pyplot
meanAverage = 222.89219487179491 # typical value calculated beforehand
standardDeviation = 3.8857889432054091 # typical value calculated beforehand
x = np.linspace(-3,3,100)
pyplot.plot(x,mlab.normpdf(x,meanAverage,standardDeviation))
pyplot.show()
All it does is produce a rather flat looking and useless y = 0 line!
Can anyone see what the problem is here?
Cheers.
If you read the documentation of matplotlib.mlab.normpdf, this function is deprecated and you should use scipy.stats.norm.pdf instead.
Deprecated since version 2.2: scipy.stats.norm.pdf
And because your distribution mean is about 222, you should use a range that covers it, e.g. np.linspace(200, 240, 100).
So your code will look like:
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as pyplot
meanAverage = 222.89219487179491 # typical value calculated beforehand
standardDeviation = 3.8857889432054091 # typical value calculated beforehand
x = np.linspace(200, 240, 100)  # a range that covers the mean of ~222.9
pyplot.plot(x, norm.pdf(x, meanAverage, standardDeviation))
pyplot.show()
It looks like you made a few small but significant errors: either you are choosing your x vector wrong, or you swapped your stddev and mean. Since your mean is at 222, you probably want your x vector in this area, maybe something like 150 to 300. That way you get all the good stuff; right now you are looking at -3 to 3, which is far out in the tail of the distribution. Hope that helps.
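A minimal sketch of that suggestion, deriving the x range from the computed mean and standard deviation instead of hard-coding it:
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt
meanAverage = 222.89219487179491        # typical value calculated beforehand
standardDeviation = 3.8857889432054091  # typical value calculated beforehand
# cover roughly +/- 4 standard deviations around the mean
x = np.linspace(meanAverage - 4 * standardDeviation,
                meanAverage + 4 * standardDeviation, 100)
plt.plot(x, norm.pdf(x, meanAverage, standardDeviation))
plt.show()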
Looking at the *args to which meanAverage and standardDeviation are sent, the correct things to pass are:
mu : a numdims array of means of a
sigma : a numdims array of standard deviation of a
Does this help?

Why are LASSO in sklearn (python) and matlab statistical package different?

I am using LassoCV from sklearn to select the best model by cross-validation. I found that the cross-validation gives different results depending on whether I use sklearn or the MATLAB Statistics Toolbox.
I used MATLAB and replicated the example given at
http://www.mathworks.se/help/stats/lasso-and-elastic-net.html
to get a figure like this
Then I saved the MATLAB data and tried to replicate the figure with lasso_path from sklearn, and I got
Although there is some similarity between these two figures, there are also certain differences. As far as I understand, the parameter lambda in MATLAB and alpha in sklearn are the same; however, in this figure it seems there are some differences. Can somebody point out which is the correct one, or am I missing something? Furthermore, the coefficients obtained are also different (which is my main concern).
Matlab Code:
rng(3,'twister') % for reproducibility
X = zeros(200,5);
for ii = 1:5
    X(:,ii) = exprnd(ii,200,1);
end
r = [0;2;0;-3;0];
Y = X*r + randn(200,1)*.1;
save randomData.mat % To be used in python code
[b fitinfo] = lasso(X,Y,'cv',10);
lassoPlot(b,fitinfo,'plottype','lambda','xscale','log');
disp('Lambda with min MSE')
fitinfo.LambdaMinMSE
disp('Lambda with 1SE')
fitinfo.Lambda1SE
disp('Quality of Fit')
lambdaindex = fitinfo.Index1SE;
fitinfo.MSE(lambdaindex)
disp('Number of non zero predictors')
fitinfo.DF(lambdaindex)
disp('Coefficient of fit at that lambda')
b(:,lambdaindex)
Python Code:
import scipy.io
import numpy as np
import pylab as pl
from sklearn.linear_model import lasso_path, LassoCV
data=scipy.io.loadmat('randomData.mat')
X=data['X']
Y=data['Y'].flatten()
model = LassoCV(cv=10,max_iter=1000).fit(X, Y)
print 'alpha', model.alpha_
print 'coef', model.coef_
eps = 1e-2 # the smaller it is the longer is the path
models = lasso_path(X, Y, eps=eps)
alphas_lasso = np.array([model.alpha for model in models])
coefs_lasso = np.array([model.coef_ for model in models])
pl.figure(1)
ax = pl.gca()
ax.set_color_cycle(2 * ['b', 'r', 'g', 'c', 'k'])
l1 = pl.semilogx(alphas_lasso,coefs_lasso)
pl.gca().invert_xaxis()
pl.xlabel('alpha')
pl.show()
I do not have MATLAB, but be careful: the value obtained with the cross-validation can be unstable, because it is influenced by the way you subdivide the samples. Even if you run the cross-validation twice in Python, you can obtain two different results.
Consider this example:
import sklearn.cross_validation  # old sklearn API, as used in this answer
import sklearn.linear_model

kf=sklearn.cross_validation.KFold(len(y),n_folds=10,shuffle=True)
cv=sklearn.linear_model.LassoCV(cv=kf,normalize=True).fit(x,y)
print cv.alpha_
kf=sklearn.cross_validation.KFold(len(y),n_folds=10,shuffle=True)
cv=sklearn.linear_model.LassoCV(cv=kf,normalize=True).fit(x,y)
print cv.alpha_
0.00645093258722
0.00691712356467
It's possible that alpha = lambda / n_samples, where n_samples = X.shape[0] in scikit-learn.
Another remark is that your path is not as piecewise linear as it could/should be. Consider reducing tol and increasing max_iter.
Hope this helps.
I know this is an old thread, but:
I'm actually working on piping over to LassoCV from glmnet (in R), and I found that LassoCV doesn't do too well when it is left to normalize the X matrix itself (even if you specify the parameter normalize=True).
Try normalizing the X matrix first when using LassoCV.
If it is a pandas object,
(X - X.mean())/X.std()
It seems you also need to multiply alpha by 2.
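A hedged sketch of the standardization suggestion above, reusing the data loading from the question (whether the factor-of-2 remark on alpha applies to your setup is something to verify yourself):
import scipy.io
from sklearn.linear_model import LassoCV
data = scipy.io.loadmat('randomData.mat')  # same file as in the question
X = data['X']
Y = data['Y'].flatten()
X_std = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize the columns first
model = LassoCV(cv=10, max_iter=1000).fit(X_std, Y)
print(model.alpha_)
print(model.coef_)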
Though I am unable to figure out what is causing the problem, there is a logical direction in which to continue.
These are the facts:
MathWorks have selected an example and decided to include it in their documentation.
Your MATLAB code produces exactly the same result as the example.
The alternative does not match the result, and has provided inaccurate results in the past.
This is my assumption:
The chance that MathWorks have chosen to put an incorrect example in their documentation is negligible compared to the chance that a reproduction of this example in an alternative way does not give the correct result.
The logical conclusion: Your matlab implementation of this example is reliable and the other is not.
This might be a problem in the code, or maybe in how you use it, but either way the only logical conclusion is that you should continue with MATLAB to select your model.
