Alternative to scipy and numpy for linear algebra? - python

Is there a good (small and light) alternative to numpy for python, to do linear algebra?
I only need matrices (multiplication, addition), inverses, transposes and such.
Why?
I am tired of trying to install numpy/scipy - it is such a pita to get it to work. It never seems to install correctly (especially since I have two machines, one linux and one windows), no matter what I do: compile it myself or install from pre-built binaries. How hard is it to make a "normal" installer that just works?

I'm surprised nobody has mentioned SymPy, which is written entirely in Python and, unlike Numpy, does not require compilation.
There is also tinynumpy, which is a pure python alternative to Numpy with limited features.
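For the operations the question mentions, a minimal sketch with SymPy's Matrix class looks like this (the values are just placeholders):
from sympy import Matrix

A = Matrix([[1, 2], [3, 4]])
B = Matrix([[0, 1], [1, 0]])

C = A + B        # addition
D = A * B        # matrix multiplication
At = A.T         # transpose
Ainv = A.inv()   # inverse (raises an error if A is singular)
detA = A.det()   # determinant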

Given your question, I decided to just factor out the matrix code from the project where I was using it, and put it in a publicly accessible place -
So, this is basically a pure Python, ad-hoc implementation of a Matrix class which can perform addition, multiplication, determinant calculation and matrix inversion - it should be of some use -
Since it is pure Python, and not concerned with performance at all, it is unsuitable for any real number crunching - but it is good enough for playing around with matrices interactively, or where matrix algebra is far from being the critical part of the code.
The repository is here,
https://bitbucket.org/jsbueno/toymatrix/
And you can download it straight from here:
https://bitbucket.org/jsbueno/toymatrix/downloads/toymatrix_0.1.tar.gz
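For a flavor of what such a pure-Python approach looks like (this is an illustration only, not the code from the repository above), a matrix can simply be a list of rows:
# illustration only - minimal pure-Python matrix helpers
def mat_add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def mat_mul(a, b):
    cols = list(zip(*b))  # columns of b
    return [[sum(x * y for x, y in zip(row, col)) for col in cols] for row in a]

def transpose(a):
    return [list(col) for col in zip(*a)]

print(mat_mul([[1, 2], [3, 4]], [[0, 1], [1, 0]]))  # [[2, 1], [4, 3]]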

I hear you, I have been there as well. Numpy/scipy are really wonderful libraries and it is a pity that installation problems so often get in the way of their usage.
Also, as far as I understand, there are not many good (easier to install) options either. The only possibly easier solution I know about is the "Yet Another Matrix Module" (see the NumericAndScientific/Libraries listing on python.org). I am not aware of the status of that library (stability, speed, etc.). Chances are that in the long run your needs will outgrow any simple library and you will end up installing numpy anyway.
Another notable downside of using any other library is that your code will potentially be incompatible with numpy, which happens to be the de facto library for linear algebra in python. Note also that numpy has been heavily optimized - speed is not something you are guaranteed to get with other libraries.
I would really just put more effort into solving the installation/setup problems. The alternatives are potentially much worse.

Have you ever tried Anaconda? https://www.anaconda.com/download
It should allow you to install those packages easily:
conda install -c conda-forge scipy
conda install -c conda-forge numpy
Apart from offering an easy way to install them on linux/mac/windows, you get virtual environment management too.
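For example, one way to set up an isolated environment containing both packages (the environment name sci is arbitrary):
conda create -n sci python numpy scipy
conda activate sci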

I sometimes have this problem... not sure if this applies to you, but I often install a package under my own account, then try to run it in an IDE (Komodo in my case) and it doesn't work - like in your case, it says it cannot find it. The way I solve this is to use sudo -i to get a root shell and then install it from there.
If that does not work, can you update your question with a bit more info about the type of system you're using (linux, mac, windows), the version of python/numpy and how you're accessing it, so it'll be easier to help.
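For example, assuming pip is the installer in use:
sudo -i
pip install numpy scipy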

For people who still have the problem: Try python portable:
http://portablepython.com/wiki/Download/

Have a look: tinynumpy, tinyarray and sympy
https://github.com/wadetb/tinynumpy
https://github.com/kwant-project/tinyarray
https://docs.sympy.org/latest/index.html

Related

As a python package maintainer, how can I determine lowest working requirements

While it is possible to simply use pip freeze to capture the current environment, it is not appropriate to require an environment as bleeding edge as the one I am used to.
Moreover, some developer tooling is only available on recent versions of packages (think type annotations), but is not needed by users.
My target users may want to use my package on slowly upgrading machines, and I want to get my requirements as low as possible.
For example, I cannot require better than Python 3.6 (and even then I think some users may be unable to use the package).
Similarly, I want to avoid requiring the last Numpy or Matplotlib versions.
Is there a (semi-)automatic way of determining the oldest compatible version of each dependency?
Alternatively, I can manually try to build a conda environment with old packages, but I would have to try pretty randomly.
Unfortunately, I inherited a medium-sized codebase (~10 kLoC) with no automated tests yet (I plan on adding some, but it takes time and sadly cannot be my priority).
The requirements were not properly defined either, so I don't know what it was run with two years ago.
Because semantic versioning is not always honored (and because it may be difficult, from a developer's standpoint, to determine exactly what counts as a minor or major change for each possible user), and because only a human can parse release notes to understand what has changed, there is no simple solution.
My technical approach would be to create a virtual environment with a known working combination of Python and library versions. From there, downgrade one version at a time, one lib at a time, verifying that everything still works (which may be tedious if the check is manual and/or slow).
My social solution would be to timebox the technical approach to take no more than a few hours. Then settle for what you have reached. Indicate in the README that lib requirements may be overblown and that help is welcome.
Without fast automated tests in which you are confident, there is no way to automate the exploration of the N-dimensional space (each library is a dimension) to find its minimums.
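If you do end up with a test suite you trust, the downgrade loop described above can be scripted. A rough sketch, assuming a Unix-like system, a package definition in the current directory and a pytest suite (all version numbers below are placeholders):
import subprocess, sys

CANDIDATES = ["1.21.0", "1.19.0", "1.17.0"]  # hypothetical numpy versions, newest first

def works_with(version):
    env = ".tmp-env"
    # fresh throwaway virtualenv for each candidate version
    subprocess.run([sys.executable, "-m", "venv", "--clear", env], check=True)
    pip = env + "/bin/pip"  # Unix-style path; use Scripts\pip.exe on Windows
    if subprocess.run([pip, "install", "numpy==" + version, "pytest", "."]).returncode != 0:
        return False  # the old version does not even install
    return subprocess.run([env + "/bin/python", "-m", "pytest", "-q"]).returncode == 0

oldest_ok = None
for v in CANDIDATES:
    if not works_with(v):
        break
    oldest_ok = v
print("oldest working numpy:", oldest_ok)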

Is there a way to use C in a Python program without multiple files?

I am taking part in a tournament that requires the submission of a single python file. The problem requires quite a bit of computation in very little time and any performance gain would be very beneficial.
What I am wondering is, can I utilize C in some way? I know about Cython as well as C extensions but they require more than one file which means they are unusable. Is there a way to execute compiled C from inside python without another file?
You could achieve this in two steps, by first making your own package offering wrappers to your C code, and then publishing it on PyPI:
You first write a Python interface in C; I guess you already know how to do that.
You can then follow this tutorial to publish your package, and install it with pip install your-package-name.
Keep in mind that this process is quite lengthy; if you are in the middle of a competition I am not sure it would be the best solution, but I guess that if you are asking this question you have already done all you can to optimize the algorithmic part. You could also use one of the high-performance libs that exist (numpy, pandas). If you are not even allowed to use pip then there is no way to do it in a single file.
NB: I assume that you are allowed to install packages through the terminal
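For reference, the packaging side can be as small as a single setup.py next to your C source; the module and file names here are placeholders:
# hypothetical minimal setup.py wrapping a C extension module
from setuptools import setup, Extension

setup(
    name="your-package-name",
    version="0.1",
    ext_modules=[Extension("fastmod", sources=["fastmod.c"])],
)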

Real World Blind Source Separation

I am aware that Scipy has a few ICA algorithms, like FastICA, but it can only be used if the mixed signal observations are perfectly in sync.
My application is recording audio (speech) using microphones into mono audio files. So FastICA would not work.
In my research, a few other algorithms I have come across are: JADE, AMUSE and DUET. However, I'm not sure to what extent Python has support for these algorithms. I would prefer to stay in the Python programming language if possible.
Let me just add, I highly value ease of interface, built-in functionality of the Python library, as well as computational efficiency. With that in mind, can someone with experience in using Scipy or other relevant Python libraries suggest a suitable alternative?
I have Anaconda 4.0, and am running Python 3.5 -- just let me know what I should import.
Thank you for reading
You can run the code by installing the pyemma toolkit available for Python.
conda config --add channels conda-forge
conda install pyemma
You can refer to this site for more help: http://emma-project.org/latest/generated/MSM_BPTI.html

Why does SciPy require an installation procedure?

I'm trying to wrap my head around the Python ecosystem and parts of it aren't making complete sense to me so far.
I'm coming from the Java world and when I want to make use of, say, JUnit, I simply add the JUnit jar to my classpath and that's pretty much it. If I want to be nice to my users I can also easily bundle all my dependencies into a single jar, so that all they need to do is install a Java Runtime and get hold of my jar.
Reading through the SciPy installation guide I can't find an explanation for why all this is really necessary. And how is this ever going to work at deployment time? It's as if JUnit asked me to install a new JRE just for it.
SciPy has parts written in C that require compilation for the specific platform it's being deployed to.
How can SciPy be fast if it is written in an interpreted language like Python?
Actually, the time-critical loops are usually implemented in C or
Fortran. Much of SciPy is a thin layer of code on top of the
scientific routines that are freely available at
http://www.netlib.org/. Netlib is a huge repository of incredibly
valuable and robust scientific algorithms written in C and Fortran. It
would be silly to rewrite these algorithms and would take years to
debug them. SciPy uses a variety of methods to generate “wrappers”
around these algorithms so that they can be used in Python. Some
wrappers were generated by hand coding them in C. The rest were
generated using either SWIG or f2py. Some of the newer contributions
to SciPy are either written entirely or wrapped with Cython.
Source: http://www.scipy.org/scipylib/faq.html#id12
On Linux, SciPy and NumPy libraries’ official releases are source-code
only. Installing NumPy and SciPy from source is reasonably easy;
However, both packages depend on other software, some of them which
can be challenging to install, or shipped with incompatibilities by
major Linux distributions. Hopefully, you can install NumPy and SciPy
without any software outside the necessary tools to build python
extensions, as most dependencies are optional
Source: http://www.scipy.org/scipylib/building/linux.html

scipy.spatial.ckdtree running slowly

I've been using spatial.cKDTree in scipy to calculate distances between points. It has always run very quickly (~1 s) for my typical data sets (finding distances for ~1000 points to an array of ~1e6 points).
I'm running this code in python 2.7.6 on a computer with Ubuntu 14.10. Up until this morning, I had managed most python packages with apt-get, including scipy and numpy. I wanted up-to-date versions of a few packages though, so I decided to remove the packages installed in /usr/lib/python2.7/ by apt-get, and re-installed all packages with pip install (taking care of scipy dependencies like liblapack-dev with apt-get, as necessary). Everything installed and is importable without a problem.
>>> import scipy
>>> import cython
>>> scipy.__version__
'0.16.0'
>>> cython.__version__
'0.22.1'
Now, running spatial.cKDTree on the same size data sets is going really slowly. I'm seeing run time of ~500 s rather than ~1 s. I'm having trouble figuring out what is going on.
Any suggestions as to what I might have done in installing using pip rather than apt-get that would have caused scipy.spatial.cKDTree to run so slowly?
In 0.16.x I added options to build the cKDTree with median or sliding midpoint rules, as well as choosing whether to recompute the bounding hyperrectangle at each node in the kd-tree. The defaults are based on experience with the performance of scipy.spatial.cKDTree and sklearn.neighbors.KDTree. In some contrived cases (data that are highly stretched along a dimension) it can have a negative impact, but usually it should be faster. Experiment with building the cKDTree with balanced_tree=False and/or compact_nodes=False. Setting both to False gives you the same behavior as 0.15.x. Unfortunately it is difficult to set defaults that make everyone happy, because the performance depends on the data.
Also note that with balanced_tree=True we compute medians by quickselect when the kd-tree is constructed. If the data for some reason is pre-sorted, it will be very slow. In this case it will help to shuffle the rows of the input data. Or you can set balanced_tree=False to avoid the partial quicksorts.
There is also a new option to multithread the nearest-neighbor query. Try calling query with n_jobs=-1 and see if it helps for you.
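A short sketch of the options discussed above, using the keyword names from those SciPy releases (in newer SciPy the n_jobs argument has been renamed to workers):
import numpy as np
from scipy.spatial import cKDTree

data = np.random.rand(1000000, 3)
queries = np.random.rand(1000, 3)

# balanced_tree=False and compact_nodes=False reproduce the 0.15.x behaviour
tree = cKDTree(data, balanced_tree=False, compact_nodes=False)

# n_jobs=-1 uses all available cores for the query
dist, idx = tree.query(queries, k=1, n_jobs=-1)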
Update June 2020:
SciPy 1.5.0 will use a new algorithm (introselect based partial sort, from C++ STL) which solves the problems reported here.
In the next release of SciPy, balanced kd-trees will be created with introselect instead of quickselect, which is much faster on structured datasets. If you use cKDTree on a structured data set such as an image or a grid, you can look forward to a major boost in performance. It is already available if you build SciPy from its master branch on GitHub.
