Array order in `numpy.dot` - python

In Python's numerical library NumPy, how does the numpy.dot function deal with arrays of different memory order? E.g., numpy.dot(c-order, f-order) vs. numpy.dot(f-order, c-order), etc.
The reason I ask is that a long time ago (numpy 1.0.4?) I made some tests and noticed that numpy.dot performed worse than calling dgemm from scipy.linalg directly with the correct transposition flags, even though both call the same BLAS library internally. (I suspected the reason was copying of the input matrices inside numpy.dot, which is tragic if the input is large.)
Now I tried again and numpy.dot actually performs the same as dgemm, so there is no reason to keep the arrays in a specific order and set the transposition flags manually. Much cleaner code.
So my question is: how does a recent (let's say 1.6.0) numpy.dot work, and what guarantees are there on when the inputs are copied and when not? I'm concerned about 1) memory and 2) performance here. Cheers.

What you were seeing may have been related to a BLAS-optimized dot import error being caught and handled silently (this code snippet is from numeric.py):
# try to import blas optimized dot if available
try:
    # importing this changes the dot function for basic 4 types
    # to blas-optimized versions.
    from _dotblas import dot, vdot, inner, alterdot, restoredot
except ImportError:
    # docstrings are in add_newdocs.py
    inner = multiarray.inner
    dot = multiarray.dot
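
One way to see how your numpy build handles the various order combinations is simply to time them; a minimal sketch (the size, repeat count, and C/F labels are arbitrary choices, and the results depend entirely on your BLAS):

import numpy as np
from timeit import timeit

n = 2000
a_c = np.random.rand(n, n)          # C-ordered
b_c = np.random.rand(n, n)
a_f = np.asfortranarray(a_c)        # Fortran-ordered copies of the same data
b_f = np.asfortranarray(b_c)

for name, x, y in [("C,C", a_c, b_c), ("C,F", a_c, b_f),
                   ("F,C", a_f, b_c), ("F,F", a_f, b_f)]:
    print(name, timeit(lambda: np.dot(x, y), number=3))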

Related

Cython: Effectively using Numpy in Pure Python Mode

I am fairly new to using Cython and I am interested in using the "Pure Python" mode.
The work that I am doing right now uses numpy extensively, and knowing that there is a C API for numpy, I was excited to see what it could do.
As a small test, I put together two small test files, test.py and test.pxd. Their content is as follows:
test.py:
import cython
import numpy as np

@cython.locals(array=np.ndarray)
@cython.returns(np.ndarray)
def test(array):
    return np.cumsum(array)

test_array = np.array([1, 2, 3, 4, 5])
test(test_array)
test.pxd:
# cython: language_level=3
cimport numpy as np
cdef np.ndarray test(np.ndarray array)
I then compiled these files with cython -a test.py, hoping that I would see little to no Python interaction when calling np.cumsum(). However, when I inspected the generated HTML file, the np.cumsum line was still highlighted almost entirely in yellow.
From this, it appears that my call to np.cumsum heavily interacts with Python, which feels counter-intuitive. My expectation, since I should be using the cimported numpy, is that there should be very little Python interaction.
My question is: is my intuition correct? Have I set something up incorrectly with my files that is not allowing the cimported numpy to actually be used for the function call, and that is why I am still seeing so much yellow? Or am I fundamentally misunderstanding something?
Thanks for reading!
Defining the types as np.ndarray mainly improves one thing: it makes indexing them to get single values significantly faster. Almost everything else remains the same speed.
np.cumsum (and any other Numpy function) is called through the standard Python mechanism and runs at exactly the same speed (internally, of course, it is implemented in C and should be quite quick). Mathematical operators (such as +, -, *, etc.) are also called through Python and remain the same speed.
In reality your wrapping probably makes it slower: it adds an unnecessary type check (to make sure that the array is an np.ndarray) and an extra layer of indirection.
There is nothing to be gained through typing here.
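
For contrast, the place where np.ndarray typing can actually help is an explicit element-by-element loop. A rough pure-Python-mode sketch (the function and the running-total computation are illustrative, not taken from the question):

import cython
import numpy as np

@cython.boundscheck(False)
@cython.wraparound(False)
@cython.locals(array=np.ndarray, i=cython.Py_ssize_t, total=cython.double)
def running_total(array):
    # The per-element indexing inside this loop is what benefits from typing;
    # a single whole-array call like np.cumsum(array) gains nothing.
    total = 0.0
    for i in range(array.shape[0]):
        total += array[i]
    return total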

Scipy minimize function seems to be creating multiple threads by itself?

I am using the scipy minimize function. The function that it's calling was compiled with Cython and has an underlying C++ implementation that I wrote, but that shouldn't really matter. For some reason when I run my program, it creates as many threads as it can to fill all my cpus. For example, if I run top I see that 800% of a cpu is being used, or in htop I can see that 8 individual processors are being used, when I only wrote the program to run on one. I didn't think that scipy even had parallel processing functionality and I can't find any documentation related to this. What could possibly be going on, and is there any way to control it?
If a BLAS implementation with threading support is available (the default on Ubuntu, for example), some expressions like np.dot() (only the dense case, as far as I know) will automatically be run in parallel (reference). Another possible example is sparse-matrix factorization with SuperLU.
Of course, different minimizers will behave differently.
Newton-type methods (core: solving a system of sparse linear equations) are probably based on SuperLU (if the code is not one of the common old Fortran/C ones, where the whole code is self-contained). CG-type methods are heavily based on matrix-vector products (np.dot), so the dense case will be parallel.
For some control over this, start with this SO question.
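
In practice the usual knob is the thread-count environment variable of whichever BLAS/OpenMP build you have. A short sketch (the variables must be set before numpy is first imported, and only the one matching your BLAS build actually takes effect):

import os
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"

import numpy as np
from scipy.optimize import minimize

def rosen(x):
    # Rosenbrock test function, just to give the minimizer something to do.
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

res = minimize(rosen, np.zeros(5), method="CG")
print(res.x)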

Using scipy routines outside of the GIL

This is sort of a general question, related to a specific implementation I have in mind, about whether it's safe to use Python routines designed for use inside the GIL in a shared-memory environment. Specifically, what I'd like to do is use scipy.optimize.curve_fit on a large array inside a cython function.
The data can be expressed as a 2d numpy array (say, of floats), with one axis being the one to fit along and the other the serial axis to be parallelized over. Then I'd just like to release the GIL and start looping through the data with a cython.parallel.prange (the idea being that I can then have all my cores working on fits at once).
The main issue I can foresee is that curve_fit does not operate "in place"; it returns the fit values of the parameters (and optionally their covariance matrix) and so has to allocate that memory at some point. (Of course I also have no idea about any intermediate memory allocation the routine performs.) I'm worried about how this will operate outside the GIL with many threads working concurrently.
I realize that the answer could just be "it should work fine, go try it," but I'm hoping to get some idea of what to look out for. I also realize that this question is similar to others about parallelizing scipy/numpy routines, but I think this one is worded differently in that it falls within the Cython scope of a C environment for Python.
Thanks for any help/suggestions.
Not safe. If CPython could safely run that kind of code without the GIL, we wouldn't have the GIL in the first place.
You may find the following discussion on Parallel Programming in SciPy to be of interest.
[I would have posted this as merely a comment, but I lack the requisite reputation.]
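
If the underlying goal is just to fit many independent rows at once, a safer route than threads without the GIL is process-based parallelism, where each worker owns its own GIL. A rough sketch (the exponential model and array sizes are made up for illustration):

import numpy as np
from multiprocessing import Pool
from scipy.optimize import curve_fit

x = np.linspace(0.0, 4.0, 50)

def model(x, a, b):
    return a * np.exp(-b * x)

def fit_row(row):
    # Each worker process fits one row independently.
    popt, _ = curve_fit(model, x, row, p0=(1.0, 1.0))
    return popt

if __name__ == "__main__":
    data = model(x, 2.5, 1.3) + 0.05 * np.random.randn(200, x.size)
    with Pool() as pool:
        params = np.array(pool.map(fit_row, data))
    print(params.shape)   # one (a, b) pair per row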

How to improve Cython performance?

I am taking my first steps with Cython, and I am wondering how to improve performance even more.
So far I have gotten down to half the usual (Python-only) execution time, but I think there must be more to gain!
I know about cython -a and I have already typed my variables. But there is still a lot of yellow in my function. Is this because Cython does not recognise numpy, or is there something else I am missing?
I believe you can benefit from using math functions from libc, since you are calling np.sqrt and np.floor on scalars. This has not only the Python call overhead; there are also different code paths in the numpy ufuncs for scalars and arrays, so it involves at least a type switch.
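
The scalar overhead is easy to see even in plain Python, since math.sqrt skips the ufunc dispatch that np.sqrt goes through for a single value (in Cython itself, the usual next step is cimporting sqrt and floor from libc.math). A quick illustrative check:

import math
import numpy as np
from timeit import timeit

x = 2.0
print("np.sqrt  :", timeit(lambda: np.sqrt(x), number=100000))
print("math.sqrt:", timeit(lambda: math.sqrt(x), number=100000))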
I think it's not a problem: I've tested with the official tutorial, and every np.* line is also reported as yellow there, involving Python just the same as your code does.
Point 3 at the end of that page should explain this:
Calling NumPy/SciPy functions currently has a Python call overhead; it would be possible to take a short-cut from Cython directly to C. (This does however require some isolated and incremental changes to those libraries; mail the Cython mailing list for details).

Automatic CudaMat conversion in Python

I'm looking into speeding up my python code, which is all matrix math, using some form of CUDA. Currently my code is using Python and Numpy, so it seems like it shouldn't be too difficult to rewrite it using something like either PyCUDA or CudaMat.
However, on my first attempt using CudaMat, I realized I had to rearrange a lot of the equations in order to keep the operations all on the GPU. This included the creation of many temporary variables so I could store the results of the operations.
I understand why this is necessary, but it turns what were once easy-to-read equations into somewhat of a mess that is difficult to inspect for correctness. Additionally, I would like to be able to easily modify the equations later on, which isn't easy in their converted form.
The package Theano manages to do this by first creating a symbolic representation of the operations and then compiling them to CUDA. However, after trying Theano out for a bit, I was frustrated by how opaque everything was. For example, just getting the actual value of myvar.shape[0] is made difficult since the tree doesn't get evaluated until much later. I would also much prefer to avoid a framework in which my code must conform to a library that acts invisibly in place of Numpy.
Thus, what I would really like is something much simpler. I don't want automatic differentiation (there are other packages like OpenOpt that can do that if I require it), or optimization of the tree, but just a conversion from standard Numpy notation to CudaMat/PyCUDA/somethingCUDA. In fact, I want to be able to have it evaluate to just Numpy without any CUDA code for testing.
I'm currently considering writing this myself, but before I even consider such a venture, I wanted to see if anyone else knows of similar projects or a good starting place. The only other project I know of that might be close to this is SymPy, but I don't know how easy it would be to adapt to this purpose.
My current idea would be to create an array class that looks like a Numpy array class. Its only function would be to build a tree. At any time, that symbolic array class could be converted to a Numpy array class and evaluated (there would also be one-to-one parity). Alternatively, the array class could be traversed and have CudaMat commands generated from it. If optimizations are required, they could be done at that stage (e.g. re-ordering of operations, creation of temporary variables, etc.) without getting in the way of inspecting what's going on.
Any thoughts/comments/etc. on this would be greatly appreciated!
Update
A usage case may look something like the following (where sym is the theoretical module), where we might be doing something such as calculating a gradient:
W = sym.array(np.random.rand(numVisible, numHidden))
delta_o = -(x - z)
delta_h = sym.dot(delta_o, W) * h * (1.0 - h)
grad_W = sym.dot(X.T, delta_h)
In this case, grad_W would actually just be a tree containing the operations that needed to be done. If you wanted to evaluate the expression normally (i.e. via Numpy) you could do:
npGrad_W = grad_W.asNumpy()
which would just execute the Numpy commands that the tree represents. If on the other hand, you wanted to use CUDA, you would do:
cudaGrad_W = grad_W.asCUDA()
which would convert the tree into expressions that can be executed via CUDA (this could happen in a couple of different ways).
That way it should be trivial to: (1) test grad_W.asNumpy() == grad_W.asCUDA(), and (2) convert your pre-existing code to use CUDA.
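
For what it's worth, the core of such a wrapper can be quite small. Here is a toy sketch of the lazy-tree idea with only the Numpy evaluation path implemented (class and method names are made up, not an existing package):

import numpy as np

class SymArray:
    # Toy lazy array: records operations as a tree instead of executing them.
    def __init__(self, value=None, op=None, args=()):
        self.value, self.op, self.args = value, op, args

    def _wrap(self, op, *args):
        return SymArray(op=op, args=(self,) + args)

    def __add__(self, other):
        return self._wrap(np.add, other)

    def __mul__(self, other):
        return self._wrap(np.multiply, other)

    def dot(self, other):
        return self._wrap(np.dot, other)

    def asNumpy(self):
        # Walk the tree and execute it with plain Numpy.
        if self.op is None:
            return self.value
        args = [a.asNumpy() if isinstance(a, SymArray) else a for a in self.args]
        return self.op(*args)

# usage: nothing is computed until asNumpy() is called
a = SymArray(np.arange(6.0).reshape(2, 3))
b = SymArray(np.ones((3, 2)))
expr = a.dot(b) * 2.0
print(expr.asNumpy())

An asCUDA() method would walk the same tree but emit CudaMat/PyCUDA calls instead.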
Have you looked at the GPUArray portion of PyCUDA?
http://documen.tician.de/pycuda/array.html
While I haven't used it myself, it seems like it would be what you're looking for. In particular, check out the "Single-pass Custom Expression Evaluation" section near the bottom of that page.
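
As a taste of what that looks like, GPUArray lets plain elementwise Numpy-style expressions run on the GPU without writing kernels by hand. A minimal sketch (requires a CUDA-capable GPU and PyCUDA installed):

import numpy as np
import pycuda.autoinit             # sets up a CUDA context
import pycuda.gpuarray as gpuarray

a = gpuarray.to_gpu(np.random.rand(4, 4).astype(np.float32))
b = gpuarray.to_gpu(np.random.rand(4, 4).astype(np.float32))

c = 2.0 * a + b * b                # elementwise work stays on the GPU
print(c.get())                     # .get() copies the result back to a Numpy array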
