Studentized range statistic (q*) in Python Scipy

I am wondering if it is possible to find the Studentized range statistic (q*) in the Python SciPy library as an input into Tukey's HSD calculation, similar to interpolating a table such as this one (http://cse.niaes.affrc.go.jp/miwa/probcalc/s-range/srng_tbl.html#fivepercent) or pulling from a continuous distribution.
I have found some guidance here (http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.tukeylambda.html#scipy.stats.tukeylambda), but I am lost on how to input the df (degrees of freedom) or k (number of sample groups).
I am looking for something like the critical F or critical t statistic, which can be obtained via
scipy.stats.f.isf(alpha, df_between, df_within)
or
scipy.stats.t.isf(alpha, df)

from statsmodels.stats.libqsturng import psturng, qsturng
provides the tail probabilities (survival function) via psturng and the quantile function (its inverse) via qsturng.
It was written by Roger Lew as a package for interpolating the distribution of the studentized range statistic, and was incorporated into statsmodels for use in tukeyhsd.
So far it has only been used internally in statsmodels, and you would have to check the limitations and explanations in libqsturng.
For reference, statsmodels has a pairwise_tukeyhsd function and a MultiComparison class.
http://statsmodels.sourceforge.net/devel/generated/statsmodels.stats.multicomp.pairwise_tukeyhsd.html
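For example, a minimal sketch of pulling a critical value (the k, df, and alpha values here are only illustrative):
from statsmodels.stats.libqsturng import psturng, qsturng

k, df, alpha = 3, 20, 0.05
q_crit = qsturng(1 - alpha, k, df)  # critical value q* for k groups and df degrees of freedom
p = psturng(q_crit, k, df)          # tail probability of that value; should come back near alpha
print(q_crit, p)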

Related

A simple way to compute cumulative distribution function in Python

I'm trying to compute the cumulative distribution function of any of the usual distributions in Python. However, all the methods I've seen involve first drawing N samples from the distribution, ordering them somehow, and then taking a cumulative sum.
In Mathematica, I can just do CDF[ChiSquaredDistribution[df], quantile]. If I want another distribution, I just substitute ChiSquaredDistribution with the name of that other distribution.
Is there a simple way, like in Mathematica, to compute a cumulative distribution function in Python?
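For reference, every scipy.stats distribution object exposes a cdf method directly, so the Mathematica one-liner has a close analogue; a minimal sketch:
from scipy.stats import chi2, norm

df, quantile = 4, 2.5
print(chi2.cdf(quantile, df))  # analogue of CDF[ChiSquaredDistribution[df], quantile]
print(norm.cdf(quantile))      # swap in any other scipy.stats distribution the same way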

Is there a fast alternative to scipy _norm_pdf for correlated distribution sampling?

I have fit a series of SciPy continuous distributions for a Monte-Carlo simulation and am looking to take a large number of samples from these distributions. However, I would like to be able to take correlated samples, such that the i-th sample takes, e.g., the 90th percentile from each of the distributions.
In doing this, I've found a quirk in SciPy performance:
# very fast way to take many uncorrelated samples of length n
for shape, loc, scale in distro_props:
    sp.stats.norm.rvs(*shape, loc=loc, scale=scale, size=n)

# verrrrryyyyy slow way to take correlated samples of length n
correlate = np.random.uniform(size=n)
for shape, loc, scale in distro_props:
    sp.stats.norm.ppf(correlate, *shape, loc=loc, scale=scale)
Most of the results about this claim that the slowness of these SciPy distributions comes from the type-checking etc. wrappers. However, when I profiled the code, the vast bulk of the time was spent in the underlying math function [_continuous_distns.py:179(_norm_pdf)]. Furthermore, it scales with n, implying that it's looping through every element internally.
The SciPy docs on rv_continuous almost seem to suggest that a subclass should override this for performance, but it seems bizarre to monkeypatch into SciPy just to speed up its ppf. I would just compute this for the normal from the ppf formula, but I also use the lognormal and skew normal, which are more of a pain to implement.
So, what is the best way in Python to compute a fast ppf for normal, lognormal, and skewed normal distributions? Or more broadly, to take correlated samples from several such distributions?
If you need just the normal ppf, it is indeed puzzling that it is so slow, but you can use scipy.special.erfinv instead:
import numpy as np
from scipy import special, stats
from timeit import timeit

x = np.random.uniform(0, 1, 100)
np.allclose(special.erfinv(2*x - 1) * np.sqrt(2), stats.norm().ppf(x))
# True
timeit(lambda: stats.norm().ppf(x), number=1000)
# 0.7717257660115138
timeit(lambda: special.erfinv(2*x - 1) * np.sqrt(2), number=1000)
# 0.015020604943856597
EDIT:
the lognormal and triangular distributions are also straightforward:
c = np.random.uniform()
np.allclose(np.exp(c * special.erfinv(2*x - 1) * np.sqrt(2)), stats.lognorm(c).ppf(x))
# True
np.allclose(((1 - np.sqrt(1 - (x - c)/((x > c) - c))) * ((x > c) - c)) + c, stats.triang(c).ppf(x))
# True
Unfortunately, I'm not familiar enough with the skew normal.
Ultimately, this issue was caused by my use of the skew-normal distribution. The ppf of the skew normal has no closed-form analytic definition, so to compute the ppf SciPy falls back to rv_continuous's generic numeric approximation, which iteratively computes the cdf and uses it to zero in on the ppf value. The skew-normal pdf is the product of the normal pdf and the normal cdf, so this numeric approximation called the normal's pdf and cdf many, many times. This is why, when I profiled the code, it looked like the normal distribution was the problem, not the skew normal. The other answer to this question was able to achieve time savings by skipping type-checking, but didn't actually change the run-time growth with n, just the small-n runtimes.
To solve this problem, I replaced the skew-normal distribution with the Johnson SU distribution. It has two more free parameters than a normal distribution, so it can fit different types of skew and kurtosis effectively. It is supported on all real numbers, and it has a closed-form ppf with a fast implementation in SciPy. Below you can see example Johnson SU distributions I've been fitting from the 10th, 50th, and 90th percentiles.
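As a sketch of the resulting approach (the Johnson SU parameters below are hypothetical placeholders, not my fitted values):
import numpy as np
from scipy import stats

# one shared uniform draw makes the samples correlated across distributions:
# the i-th sample sits at the same percentile of every distribution
u = np.random.uniform(size=100000)

# hypothetical (a, b, loc, scale) parameters; johnsonsu.ppf is closed-form and fast
for a, b, loc, scale in [(2.5, 1.5, 0.0, 1.0), (-1.0, 2.0, 3.0, 0.5)]:
    samples = stats.johnsonsu.ppf(u, a, b, loc=loc, scale=scale)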

Adjusted Boxplot in Python

For my thesis, I am trying to identify outliers in my data set. The data set consists of 160,000 measurements of one variable from a real process environment. In this environment, however, there can be measurements that are not actual data from the process itself but simply junk data. I would like to filter them out with a little help from the literature instead of relying only on "expert opinion".
Now, I've read about the IQR method for seeing where possible outliers lie when dealing with a symmetric distribution like the normal distribution. However, my data set is right-skewed, and in distribution fitting the inverse gamma and lognormal were the best fits.
So, during my search for methods for non-symmetric distributions, I found this topic on Cross Validated, where user603's answer is particularly interesting: Is there a boxplot variant for Poisson distributed data?
In user603's answer, he states that an adjusted boxplot helps to identify possible outliers in your dataset, and that both R and Matlab have functions for it: an R implementation in robustbase::adjbox(), as well as a Matlab one in a library called libra.
I was wondering if there is such a function in Python, or a way to calculate the medcouple (see the paper in user603's answer) with Python.
I would really like to see what the adjusted boxplot gives for my data.
In the module statsmodels.stats.stattools there is a function medcouple(), which is the measure of skewness used in the adjusted boxplot.
With this value you can calculate the interval beyond which points are flagged as outliers.
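A minimal sketch of those fences, assuming the constants from Hubert & Vandervieren's adjusted-boxplot paper (worth verifying against the paper for your data):
import numpy as np
from statsmodels.stats.stattools import medcouple

def adjusted_boxplot_fences(x):
    # outlier fences of the adjusted boxplot (Hubert & Vandervieren, 2008)
    x = np.asarray(x)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    mc = medcouple(x)  # robust skewness measure
    if mc >= 0:
        return q1 - 1.5 * np.exp(-4 * mc) * iqr, q3 + 1.5 * np.exp(3 * mc) * iqr
    return q1 - 1.5 * np.exp(-3 * mc) * iqr, q3 + 1.5 * np.exp(4 * mc) * iqr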

Identifying a distribution from a pdf in python

I have a probability density function of an unknown distribution, given as a set of tuples (x, f(x)), where x = numpy.arange(0, 1, size) and f(x) is the corresponding probability density.
What is the best way to identify the corresponding distribution? So far my idea is to draw a large number of samples based on the pdf (by writing the code myself from scratch), then use the obtained data to fit all of the distributions implemented in scipy.stats and take the best fit.
Is there a better way to solve this problem? For example, is there some kind of utility in scipy.stats that I'm missing that would help me solve this problem?
In a fundamental sense, it's not really possible to identify a distribution from empirical samples - see here for a discussion.
It is possible to do something more limited: reject or accept the hypothesis that the data comes from one of a finite set of (parametric) distributions, based on a somewhat arbitrary criterion.
Given the finite set of distributions, you could realistically do the following for each one:
Fit the distribution's parameters to the data. E.g., scipy.stats.beta.fit will fit the best parameters of the Beta distribution (all scipy continuous distributions have this method).
Reject/accept the hypothesis that the data was generated by this distribution. There is more than one way of doing this. A particularly simple one is to use the distribution's rvs() method to generate another sample, then use ks_2samp to perform a two-sample Kolmogorov-Smirnov test.
Note that some specific distributions might have better, ad-hoc algorithms for testing whether a member of the distribution's family generated the data. As usual, the normal distribution in particular has many (see normality tests).
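Putting the two steps together, a minimal sketch (the candidate distributions and the stand-in data are placeholders):
from scipy import stats

# stand-in for samples drawn from the unknown pdf
samples = stats.beta.rvs(2, 5, size=1000)

for dist in (stats.beta, stats.gamma, stats.lognorm):
    params = dist.fit(samples)                    # step 1: fit the parameters
    synthetic = dist.rvs(*params, size=len(samples))
    stat, p = stats.ks_2samp(samples, synthetic)  # step 2: two-sample KS test
    print(dist.name, p)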

Python - calculate normal distribution

I'm quite new to the Python world. Also, I'm not a statistician. I need to implement mathematical models developed by mathematicians in a programming language, and after some research I've chosen Python. I'm comfortable with programming as such (PHP/HTML/JavaScript).
I have a column of values that I've extracted from a MySQL database and need to calculate the following:
Normal distribution of it (I don't have the sigma and mu values; these apparently need to be calculated too).
Mixture of normal distributions
Estimated density of the normal distribution
'Z' scores
The array of values looks similar to the one below (I've populated sample data):
data = [3,3,3,3,3,3,3,9,12,6,3,3,3,3,9,21,3,12,3,6,3,30,12,6,3,3,24,30,3,3,3,12,3,3,3,3,3,3,3,6,9,3,3,3,3,3,3,3,3,3,3,3,3,33,3,3,3,6,3,3,6,6,15,3,3,3,3,6,3,3,3,3,3,3,3,3,12,12,3,3,3,3,3,3,78,9,12,3,6,3,15,6,3,3,3,30,3,6,78,3,9,9,3,78,3,3,3,3,3,12,15,3,3,78,3,3,33,78,15,9,3,3,21,6,3,6,30,6,6,3,3,3,3,12,3,3,3,3,3,12,3,3,3,3,3,3,3,3,3,3,3,3,12,6,3,3,9,3,3,12,3,3,3,3,6,3,3,6,3,3,18,6,3,3,3,3,3,6,3,3,3,3,3,3,3,3,9,21,3,9,3,3,12,12,3,3,15,30,3,12,3,3,6,3,3,3,9,9,6,6,3,3,27,3,6,3,3,3,3,3,3,3,3,3,3,3,3,3,3,6,12,6,3,3,3,3,30,3,3,3,3,6,18,24,6,3,3,42,3,3,6,3,15,3,3,3,3,9,3,60,81,54,3,9,3,3,6,3,6,3,3,3,3,6,3,3,3,33,24,3,3,3,3,3,3,3,3,3,3,3,3,3,93,3,3,21,3,3,3,3,6,6,30,3,3,3,3,6,3,9,3,3,6,3,6,3,3,3,39,9,30,6,45,3,3,3,3,3,24,12,3,6,3,78,3,3,3,3,3,3,3,3,3,3,3,9,6,3,3,3,6,15,3,78,3,3,30,3,3,3,33,24,3,3,6,3,3,3,6,3,3,3,12,15,3,3,3,21,3,3,3,3,9,6,3,6,3,3,3,3,6,6,3,15,6,9,3,3,18,3,3,3,3,3,3,3,3,21,3,3,6,3,3,3,3,3,3,12,3,3,3,3,3,3,6,21,12,3,6,9,3,3,3,3,9,15,3,6,78,6,6,3,9,3,9,3,6,3,3,3,24,3,3,6,3,3,27,3,6,3,3,3,3,3,3,3,3,3,3,3,3,21,3,9,6,6,9,27,30,3,3,9,12,6,3,3,12,9,3,21,3,6,9,9,3,3,3,3,9,6,3,3,6,3,3,3,3,3,6,3,6,3,3,3,24,6,3,3,3,3,3,3,3,3,3,3,18,3,3,3,3,3,9,6,3,3,3,18,3,9,3,3,15,9,12,3,18,3,6,3,3,3,6,3,3,3,3,3,3,3,21,9,15,3,3,3,21,3,3,3,3,3,6,9,3,3,21,6,3,3,15,3,18,3,3,21,3,21,3,9,3,6,21,3,9,15,3,69,21,3,3,3,9,3,3,3,12,3,3,9,3,3,27,3,3,9,3,9,3,3,3,3,3,30,3,12,21,18,27,3,3,12,3,6,3,30,3,21,9,15,6,3,3,3,15,9,12,12,33,3,3,30,3,6,6,21,3,3,12,3,3,6,51,3,3,3,3,12,3,6,3,9,78,21,3,3,21,18,6,12,3,3,3,21,9,6,3,3,3,3,3,3,6,3,6,27,3,3,3,3,3,3,12,3,3,3,3,6,3,18,3,3,15,3,3,18,9,6,3,3,24,3,6,12,30,3,12,24,3,3,3,9,3,12,27,3,3,6,3,9,3,9,3,15,3,6,3,3,9,3,3,3,3,3,3,3,3,3,3,3,3,6,3,3,6,3,3,3,9,15,3,3,3,3,9,3,6,3,3,3,3,27,3,6,3,3,3,3,3,3,3,3,3,3,9,3,3,3,12,3,3,3,27,3,3,3,3,3,3,6,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,6,3,3,3,3,3,3,3,3,9,3,3,3,3,3,3,15,3,3,3,3,3,3,12,3,6,6,3,3,3,3,6,3,3,6,3,3,3,3,3,6,3,3,3,3,6,12,6,3,3,3,3,6,3,3,3,3,3,3,3,3,3,6,3,6,3,3,6,3,3,6,3,3,3,6,6,6,3,3,27,3,3,3,3,3,3,3,27,3,3,3,3,6,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,6,3,6,3,3,3,6,3,54,75,3,57,3,6,27,18,3,3,3,3,27,3,3,3,3,3,9,3,27,3,3,6,6,30,3,3,6,3,3,3,6,15,3,6,3,3,6,3,3,3,3,6,3,3,27,9,3,18,3,3,6,6,3,9,3,3,3,6,3,3,3,3,3,3,3,3,6,3,3,3,6,3,3,6,3,3,3,3,6,6,3,3,3,6,6,3,3,3,3,3,3,3,6,3,3,6,3,3,3,3,3,6,3,18,3,3,6,3,6,3,3,3,3,3,3,3,3,6,15,3,6,15,6,3,3,3,3,3,3,3,3,3,3,3,3,6,3,6,3,3,6,12,3,3,6,3,3,6,3,3,3,3,3,27,3,3,3,3,9,3,27,3,3,27,3,3,3,3,3,3,9,6,3,9,3,6,3,3,6,3,6,3,3,3,6,3,3,6,3,18,3,3,3,9,6,3,3,3,3,3,6,3,6,6,3,18,27,3,3,3,6,3,3,3,3,3,3,3,3,6,3,3,3,3,3,3,3,3,3,3,3,6,3,3,3,3,3,3,3,3,3,21,3,3,3,3,6,9,3,3,3,3,3,3,6,3,6,3,3,3,3,3,6,3,6,3,3,3,3,3,18,3,3,18,3,3,3,3,6,3,3,3,18,6,3,3,3,3,3,3,3,6,3,3,3,6,3,3,3,3,3,3,6,3,3,3,3,3,3,6,3,3,6,3,6,3,3,3,6,3,3,6,3,3,3,3,6,3,3,3,6,3,3,3,3,3,3,3,6,6,3,3,3,3,3,6,3,6,3,54,3,6,3,6,6,6,3,3,3,3,3,3,6,3,3,6,3,3,6,3,3,9,12,3,6,3,3,3,3,3,6,6,3,3,3,3,6,3,6,3,3,3,3,3,3,3,3,6,3,3,3,3,3,6,3,3,3,3,3,12,3,3,6,9,27,21,3,3,3,3,3,21,6,3,3,3,3,3,3,3,3,3,3,3,6,3,3,12,3,3,3,3,3,3,3,3,3,3,3,6,3,3,6,3,6,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,9,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,6,3,6,3,3,6,3,3,3,3,3,3,3,3,3,3,3,3,3,6,3,3,3,3,3,3,3,3,3,3,6,3,3,3,3,6,3,3,3,3,6,3,6,3,3,3,3,3,3,3,3,3,3,3,3,3,3,6,6,3,3,3,3,3,3,6,6,3,3,3,3,3,3,6,3,3,6,3,3,3,6,3,3,3,3,6,6,3,6,3,6,6,3,9,3,3,3,3,3,3,3,3,6,3,3,3,3,3,3,6,3,3,3,9,9,3,3,3,3,3,6,3,3,3,3,6,3,3,3,3,6,3,3,3,3,3,6,3,6,3,3,3,3,6,3,3,3,3,3,3,3,3,3,3,3,3,6,3,3,3,3,3,3,3,3,3,6,3,3,6,3,3,3,3,3,3,3,6,3,3,3,135,3,9,3,3,6,9,3,3,3,6,3,3,3,3,6,3,3,6,6,3,3,3,3,3,3,3,3,3,3,3,3,6,6,3,3,3,6,3,3,3,3,3,3,3,3,3,3,3
,6,3,3,3,3,3,3,3,3,6,3,3,3,135,3,3,3,6,3,3,3,3,6,6,3,3,69,87,57,9,3,3,3,12,3,6,3,3,3,6,3,3,3,3,3,3,3,3,3,3,6,9,12,3,3,3,3,3,3,3,3,6,3,3,9,3,3,3,3,3,3,3,3,3,3,3,3,3,6,3,9,3,3,3,3,12,3,3,33,3,6,3,3,3,3,3,3,6,3,6,3,3,6,3,3,3,6,3,6,3,3,6,3,3,3,6,3,3,6,3,3,3,6,3,3,3,3,9,3,3,6,6,3,3,3,6,6,3,3,3,3,3,3,6,3,3,3,3,6,3,3,3,6,3,18,3,6,3,3,3,3,9,3,3,3,3,3,3,6,3,3,6,3,3,3,3,3,135,3,9,3,3,3,3,3,3,3,3,6,6,3,6,6,3,3,6,3,3,3,6,6,3,3,3,3,6,9,3,3,3,3,3,3,6,6,3,3,3,3,3,3,135,3,3,3,6,3,3,3,3,3,3,6,3,3,3,3,3,3,3,3,3,3,6,6,6,3,3,3,6,3,3,3,3,3,6,3,3,3,3,3,3,3,3,3,3,3,3,3,6,3,3,3,3,3,3,3,3,9,6,3,3,3,9,3,3,3,3,9,3,3,3,3,3,3,3,3,3,9,3,6,6,3,6,3,3,6,3,3,3,3,6,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,6,3,3,9,3,24,3,3,3,3,3,3,3,3,3,3,3,3,3,6,3,3,3,3,6,3,3,3,3,3,3,6,3,135,3,3,3,3,3,3,6,6,3,3,3,3,3,3,3,3,6,3,3,3,3,3,9,6,3,3,3,9,3,3,3,3,3,3,6,3,3,6,3,9,3,3,3,6,3,3,3,6,6,3,3,3,3,3,3,3,3,6,3,3,3,3,3,3,9,3,3,3,3,3,9,6,3,9,3,6,3,3,21,9,3,3,3,6,3,3,3,3,6,3,3,3,3,9,3,3,3,3,3,3,3,135,3,6,6,6,3,6,3,3,9,6,6,3,3,3,3,3,3,9,3,6,3,3,3,3,3,3,3,6,9,6,3,3,6,3,6,6,3,3,3,3,6,3,6,3,3,3,3,3,3,3,6,3,3,3,3,3,3,3,3,6,3,3,3,3,3,3,3,3,3,3,3,6,3,6,3,12,3,24,3,3,3,3,3,3,21,3,3,3,3,3,3,3,6,3,6,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,15,3,3,3,3,3,3,3,6,3,3,6,6,3,3,9,3,3,3,3,6,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,6,3,3,3,3,9,3,3,3,6,3,3,3,6,3,6,3,3,3,3,3,3,3,3,3,12,3,3,3,3,3,3,6,3,6,6,3,3,3,6,3,3,6,3,3,3,3,9,6,3,3,3,6,9,3,3,3,6,9,3,6,3,3,3,3,3,3,6,3,3,3,3,6,6,3,3,3,3,3,3,3,3,3,3,9,15,3,3,3,6,3,3,3,3,3,6,3,3,3,3,3,3,3,3,3,3,3,3,3,3,6,3,3,3,3,12,3,3,3,6,6,6,3,3,3,6,3,3,3,3,3,3,3,3,6,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,12,12,6,3,3,3,3,3,3,3,3,3,9,6,3,3,3,3,3,3,3,3,3,3,3,6,3,3,3,3,3,3,6,3,3,3,3,6,3,3,3,6,3,3,3,3,3,3,3,6,3,3,3,6,3,3,6,3,3,12,3,3,3,6,3,3,3,3,564,84,3,60,6,15,3,3,3,3,3,6,3,3,3,3,3,3,3,9,3,3,3,3,3,3,3,3,3,3,3,6,9,3,3,3,3,3,9,3,3,3,3,3,12,6,3,3,3,3,3,3,3,3,6,3,3,3,3,9,57,3,6,3,6,3,3,6,3,3,6,3,3,3,3,3,3,3,3,3,3,3,3,9,3,3,3,3,6,3,3,3,6,12,3,6,3,3,3,3,3,3,3,3,6,3,6,3,3,3,6,3,3,6,3,3,36,3,3,6,6,6,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,12,3,3,3,3,3,3,3,3,6,3,3,3,3,3,3,3,6,3,3,6,3,6,3,3,3,3,3,6,3,3,3,3,3,3,3,3,3,3,3,3,3,3,12,6,3,3,3,3,3,3,3,12,3,3,3,6,3,3,3,3,3,3,3,6,3,3,3,3,3,3,3,3,9,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,6,3,3,12,3,3,3,3,3,3,3,3,3,3,3,3,6,3,3,3,3,3,3,6,3,3,3,3,3,3,3,3,3,3,9,3,3,3,3,3,3,3,9,3,3,3,3,3,3,3,3,3,6,3,3,3,3,3,3,3,3,3,3,6,3,3,3,27,3,3,6,3,3,3,3,3,6,3,3,3,3,6,3,3,9,3,3,3,12,3,3,3,3,3,6,9,3,6,3,3]
I've looked around and found quite a bit about the cumulative distribution (those examples have the mu & sigma values ready, which isn't the case in my scenario). I'm not too sure if the cumulative normal distribution and the normal distribution are the same. Could I please get some pointers on how to get started with this?
I'd very much appreciate any help.
A distribution and the cumulative distribution are not the same - the latter is the integral of the former. If the normal distribution looks like a "bell", the cumulative normal distribution looks like a gentle "step" function.
E.g., for a set of such "bells" you'd get a matching set of "steps" (plots omitted here).
If you have an array data, the following will fit it to a normal distribution using scipy.stats.norm:
from scipy.stats import norm

# maximum-likelihood fit of the mean and standard deviation
mu, std = norm.fit(data)
This will return the mean and standard deviation, the combination of which defines a normal distribution.
Normal and cumulative distributions are not the same. I'll leave that bit of research to you.
The formula for the normal density is easy if you have the mean and standard deviation:
f(x) = 1 / (sigma * sqrt(2 * pi)) * exp(-(x - mu)**2 / (2 * sigma**2))
What you may want to look at is the normal distribution, not the cumulative normal distribution. You can calculate the frequency of each element in the array and plot it to visualize the distribution.
Then you can use numpy to calculate the mean with mean = numpy.mean(array) and the standard deviation with std = numpy.std(array).
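For instance, a minimal sketch tying these together (assuming data is the array posted above):
import numpy as np
from scipy.stats import norm

arr = np.asarray(data)                        # `data` is the sample array above
mu, sigma = arr.mean(), arr.std()
z_scores = (arr - mu) / sigma                 # Z score of each element
density = norm.pdf(arr, loc=mu, scale=sigma)  # normal density at each value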
Hope this helps.
