I am writing a Python script that needs to fit a distribution to some generated data.
I found that this is possible using SciPy, or other packages that have SciPy as a dependency; however, due to administrative constraints, I am unable to install SciPy's dependencies (such as BLAS) on the machine where the script will run.
Is there a way to perform distribution fitting in Python without using SciPy or packages that depend on it?
EDIT: as asked in a comment, what I want to do is perform an Anderson-Darling test for normality.
The alternatives I found so far (but had to disregard):
statsmodels: has SciPy as a dependency
R and MATLAB Python APIs: require setting up external software, which poses the same problem for me as SciPy
Fitting the normal distribution only requires calculating the mean and standard deviation.
The Anderson-Darling test only requires numpy, or it could alternatively be rewritten using list comprehensions. The critical values for the AD test are tabulated or based on a simple approximation formula. It does not use any of the more involved parts of scipy such as optimize or special.
So I think it should not be too difficult to translate either the scipy.stats or the statsmodels version to pure Python, or to a version with numpy as the only dependency.
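For illustration, here is a minimal pure-Python sketch of that idea (standard library only; the function name and the reference critical values are my own additions, so double-check them against the table or approximation formula you intend to rely on):

import math

def anderson_darling_normal(data):
    # A^2 statistic of the Anderson-Darling test for normality.
    # The standard normal CDF is computed with math.erf, so neither
    # numpy nor scipy.special is needed.
    n = len(data)
    xs = sorted(data)
    mean = sum(xs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))  # sample std (ddof=1)
    # standard normal CDF of the standardized, sorted observations
    cdf = [0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0)))) for x in xs]
    s = sum((2 * i + 1) * (math.log(cdf[i]) + math.log(1.0 - cdf[n - 1 - i]))
            for i in range(n))
    return -n - s / n

# Reference critical values for the "normal, parameters estimated" case at the
# 15%, 10%, 5%, 2.5% and 1% significance levels; scipy.stats.anderson uses these
# same numbers with a small-sample correction applied to them.
CRITICAL_VALUES = {0.15: 0.576, 0.10: 0.656, 0.05: 0.787, 0.025: 0.918, 0.01: 1.092}

Normality is rejected at a given level when the statistic exceeds the corresponding critical value; where SciPy is available, scipy.stats.anderson is a convenient cross-check.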
I am using one of the new MacBooks with the M1 chip. I cannot use SciPy unless I run it through Rosetta, but for other reasons I cannot do that right now.
The ONLY thing I need from SciPy is scipy.linalg.expm. Is there an alternative library that provides an implementation of expm that is equally (or almost equally) fast?
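For what it's worth, if the matrices involved happen to be diagonalizable, one NumPy-only workaround is an eigendecomposition-based matrix exponential; a sketch (the function name is mine, and this is less robust than scipy.linalg.expm's scaling-and-squaring Padé approach):

import numpy as np

def expm_via_eig(a):
    # exp(A) = V diag(exp(w)) V^{-1} when A = V diag(w) V^{-1} is diagonalizable
    w, v = np.linalg.eig(a)
    return (v * np.exp(w)) @ np.linalg.inv(v)

# For real input the result may carry a tiny imaginary part from round-off,
# so taking .real is then reasonable. For symmetric/Hermitian matrices,
# np.linalg.eigh with v.conj().T in place of the inverse is cheaper and more stable.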
I'm trying to wrap my head around the Python ecosystem and parts of it aren't making complete sense to me so far.
I'm coming from the Java world, and when I want to make use of, say, JUnit, I simply add the JUnit jar to my classpath and that's pretty much it. If I want to be nice to my users, I can also easily bundle all my dependencies into a single jar, so that all they need to do is install a Java Runtime and get hold of my jar.
Reading through the SciPy installation guide, I can't find an explanation for why all of this is really necessary. And how is this ever going to work at deployment time? It's as if JUnit asked me to install a new JRE just for it.
SciPy has parts written in C that require compilation for the specific platform it's being deployed to.
How can SciPy be fast if it is written in an interpreted language like Python?
Actually, the time-critical loops are usually implemented in C or Fortran. Much of SciPy is a thin layer of code on top of the scientific routines that are freely available at http://www.netlib.org/. Netlib is a huge repository of incredibly valuable and robust scientific algorithms written in C and Fortran. It would be silly to rewrite these algorithms and would take years to debug them. SciPy uses a variety of methods to generate “wrappers” around these algorithms so that they can be used in Python. Some wrappers were generated by hand coding them in C. The rest were generated using either SWIG or f2py. Some of the newer contributions to SciPy are either written entirely or wrapped with Cython.
Source: http://www.scipy.org/scipylib/faq.html#id12
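As a small illustration of that "thin layer": the wrapped BLAS/LAPACK routines are exposed directly through scipy.linalg.blas and scipy.linalg.lapack, so calling them from Python looks like any other function call. A sketch (shapes chosen arbitrarily):

import numpy as np
from scipy.linalg import blas, lapack

a = np.random.rand(4, 3)
b = np.random.rand(3, 5)

# dgemm is the double-precision matrix-matrix product from BLAS
c = blas.dgemm(alpha=1.0, a=a, b=b)

# dgesdd is the divide-and-conquer SVD from LAPACK, which
# scipy.linalg.svd uses under the hood by default
u, s, vt, info = lapack.dgesdd(a)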
On Linux, SciPy and NumPy libraries’ official releases are source-code only. Installing NumPy and SciPy from source is reasonably easy; however, both packages depend on other software, some of which can be challenging to install or is shipped with incompatibilities by major Linux distributions. Hopefully, you can install NumPy and SciPy without any software outside the necessary tools to build Python extensions, as most dependencies are optional.
Source: http://www.scipy.org/scipylib/building/linux.html
Both SciPy and NumPy have built-in functions for singular value decomposition (SVD). The functions are scipy.linalg.svd and numpy.linalg.svd. What is the difference between these two? Is one of them better than the other?
According to the FAQ page, the scipy.linalg submodule provides a more complete wrapper for the Fortran LAPACK library, whereas numpy.linalg tries to be buildable independently of LAPACK.
I did some benchmarks for the different implementations of the svd functions and found that scipy.linalg.svd is faster than the numpy counterpart.
However, JAX's wrapped numpy, i.e. jax.numpy.linalg.svd, is even faster.
The full notebook for the benchmarks is available here.
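For anyone who wants to reproduce this locally, a minimal timing sketch along those lines (the matrix size and repeat count are arbitrary, and the outcome depends heavily on which BLAS/LAPACK build each library is linked against):

import timeit
import numpy as np
import scipy.linalg

a = np.random.rand(1000, 1000)  # illustrative size only

reps = 5
t_numpy = timeit.timeit(lambda: np.linalg.svd(a), number=reps) / reps
t_scipy = timeit.timeit(lambda: scipy.linalg.svd(a), number=reps) / reps

print("numpy.linalg.svd: %.3f s per call" % t_numpy)
print("scipy.linalg.svd: %.3f s per call" % t_scipy)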
Apart from the error checking, the actual work seems to be done within LAPACK with both numpy and scipy.
Without having done any benchmarking, I would guess the performance should be identical.
I've been using scipy.spatial.cKDTree to calculate distances between points. It has always run very quickly (~1 s) for my typical data sets (finding distances from ~1000 points to an array of ~1e6 points).
I'm running this code in Python 2.7.6 on a computer with Ubuntu 14.10. Up until this morning, I had managed most Python packages with apt-get, including scipy and numpy. I wanted up-to-date versions of a few packages though, so I decided to remove the packages installed in /usr/lib/python2.7/ by apt-get and re-installed all packages with pip install (taking care of scipy dependencies like liblapack-dev with apt-get, as necessary). Everything installed and is importable without a problem.
>>> import scipy
>>> import cython
>>> scipy.__version__
'0.16.0'
>>> cython.__version__
'0.22.1'
Now, running spatial.cKDTree on the same size data sets is going really slowly. I'm seeing run times of ~500 s rather than ~1 s. I'm having trouble figuring out what is going on.
Any suggestions as to what I might have done in installing using pip rather than apt-get that would have caused scipy.spatial.cKDTree to run so slowly?
In 0.16.x I added options to build the cKDTree with median or sliding-midpoint rules, as well as choosing whether to recompute the bounding hyperrectangle at each node in the kd-tree. The defaults are based on experience with the performance of scipy.spatial.cKDTree and sklearn.neighbors.KDTree. In some contrived cases (data that are highly stretched along a dimension) they can have a negative impact, but usually they should be faster. Experiment with building the cKDTree with balanced_tree=False and/or compact_nodes=False. Setting both to False gives you the same behavior as 0.15.x. Unfortunately it is difficult to set defaults that make everyone happy, because the performance depends on the data.
Also note that with balanced_tree=True we compute medians by quickselect when the kd-tree is constructed. If the data for some reason is pre-sorted, it will be very slow. In this case it will help to shuffle the rows of the input data. Or you can set balanced_tree=False to avoid the partial quicksorts.
There is also a new option to multithread the nearest-neighbor query. Try calling query with n_jobs=-1 and see if it helps.
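A minimal sketch of those options (the array shapes mirror the question; note that n_jobs is the keyword in the SciPy versions discussed here, while later releases rename it to workers):

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.RandomState(0)
data = rng.rand(1000000, 3)   # ~1e6 reference points
queries = rng.rand(1000, 3)   # ~1000 query points

# rng.shuffle(data)  # helps with balanced_tree=True if the input happens to be pre-sorted

# balanced_tree=False and compact_nodes=False reproduce the 0.15.x behavior
tree = cKDTree(data, balanced_tree=False, compact_nodes=False)

# multithreaded nearest-neighbour query; n_jobs=-1 uses all cores
# (newer SciPy releases call this parameter workers instead)
dist, idx = tree.query(queries, k=1, n_jobs=-1)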
Update June 2020:
SciPy 1.5.0 will use a new algorithm (an introselect-based partial sort from the C++ STL) which solves the problems reported here.
In the next release of SciPy, balanced kd-trees will be created with introselect instead of quickselect, which is much faster on structured datasets. If you use cKDTree on a structured data set such as an image or a grid, you can look forward to a major boost in performance. It is already available if you build SciPy from its master branch on GitHub.
I'd like to calculate the time complexity of some algorithms that use Python's SciPy library. I found a table for general Python functions at https://wiki.python.org/moin/TimeComplexity that I plan to use. Is there a similar table for the SciPy library?
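If no such table turns up, one rough empirical alternative is to time the routine of interest over a range of input sizes and look at how the runtime grows; a sketch, using scipy.fftpack.fft purely as a stand-in example (sizes and repeat counts are arbitrary):

import timeit
import numpy as np
from scipy import fftpack

# Time one routine over growing inputs; the growth of the timings
# (roughly n log n for an FFT) hints at the asymptotic cost.
for n in (2**16, 2**18, 2**20, 2**22):
    x = np.random.rand(n)
    t = timeit.timeit(lambda: fftpack.fft(x), number=10) / 10
    print("n = %8d: %.4f s per call" % (n, t))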