Decomposition of matrices for CPLEX and machine learning application - python

I am dealing with big matrices, and from time to time my code ends with a 'Killed: 9' message in my terminal. I'm working on macOS.
A wise programmer tells me the problem in my code is linked to the stored matrix I am dealing with.
nn = 35000
dd = 35
XX = np.random.rand(nn,dd)
XX = XX.dot(XX.T) #it should be faster than np.dot(XX,XX.T)
yy = np.random.rand(nn,1)
XX = np.multiply(XX,yy.T)
I have to store this huge matrix XX. My guess: I split the matrix with
upp = np.triu(XX)
Do I actually save space in terms of stored data?
What if later on I store
low = upp.T
Am I wasting memory and computational time?

It should take up the same total amount of memory (a quick check of this is sketched after the list below). To avoid the error, you are probably looking at a few options:
Process batch-wise
If you create your model through the CPLEX API, the data is handled by CPLEX once you have supplied it, I believe. So you could split the data, load it piece by piece, and add it to the model consecutively.
Allocate memory manually
If you use Cython, you can use malloc to allocate memory for your array manually; the size will then very likely be no issue.
Option 1 would be the preferred option in my opinion.
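Regarding the np.triu idea itself: it does not save anything, because np.triu returns a full dense array that simply contains zeros below the diagonal, while the transpose is only a view. A quick check, with the size shrunk so it runs comfortably (the smaller nn is mine, everything else mirrors your code):
import numpy as np

nn, dd = 1000, 35              # smaller than the original 35000 for a quick test
XX = np.random.rand(nn, dd)
XX = XX.dot(XX.T)

upp = np.triu(XX)              # still a dense (nn, nn) float64 array
low = upp.T                    # a view on upp: no new data is allocated

print(XX.nbytes, upp.nbytes)   # identical: the zeros are stored explicitly
print(low.base is upp)         # True: the transpose shares upp's memory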
EDIT:
I constructed a little example. It actually combines the two options. The array is not stored as a Python object, but as a C array and the values are computed piecewise.
I am allocating the memory for the array using Cython and malloc. To run the code you have to install Cython. Then you can open a Python interpreter in the directory where you saved the file and write:
import pyximport;pyximport.install()
import nameofscript
An example for processing your array:
import numpy as np
from libc.stdlib cimport malloc # Allocate memory manually
from cython.parallel import prange # Parallel processing without GIL
dd = 35
# With cdef we can define C variables in Cython.
cdef double **XXN
cdef double y[35000]
cdef int i, j, nn
nn = 35000
# Allocate memory for the Matrix with 1.225 billion double elements
XXN = <double **>malloc(nn * sizeof(double *))
for i in range(nn):
    XXN[i] = <double *>malloc(nn * sizeof(double))

XX = np.random.rand(nn,dd)
for i in range(nn):
    for j in range(nn):
        # Compute the values for the new matrix element by element
        XXN[i][j] = XX[i].dot(XX[j].T)

# Multiply the new matrix with y column wise
for i in prange(nn, nogil=True, num_threads=4):
    for j in range(nn):
        XXN[i][j] = XXN[i][j] * y[i]
Save this file as nameofscript.pyx and run it as described above. I have briefly tested this script, and it runs for about half an hour on my machine. You can extend this script and use the result array XXN for your further computations.
A little example of parallelization: I did not initialize y or assign it any values. If you declare y as a C array, you can e.g. fill it with values from Python objects. Then you can perform the last multiplication without the GIL, in parallel, as shown in the code sample.
Regarding computational efficiency: this is probably not the fastest way (that may be writing your code entirely for the CPLEX C interface), but it does not throw the memory error and runs in acceptable time if you do not have to repeat this computation too often.
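One thing the script above does not do is release the malloc'd buffer. If you want to reclaim that memory once you are done with XXN, a minimal sketch of the cleanup (not part of the original script; free comes from libc.stdlib just like malloc):
from libc.stdlib cimport free  # counterpart of malloc

# Release the manually allocated matrix after the last use of XXN
for i in range(nn):
    free(XXN[i])   # free each row
free(XXN)          # free the array of row pointers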

Related

Vectorization of recursively defined problem - Python

I was wondering if anyone has an idea of how I can vectorize the following loop:
for i in range(1, (T*n)+1):
    Y = Y + np.diag(mu) @ Y * dt + np.multiply(np.diag(sigma) @ Y, L @ np.random.normal(0, dt, (d, N)))
The following terms are already d x N matrices (I already vectorized a loop for those):
Y (this is the recursive parameter)
np.diag(mu) @ Y * dt
np.diag(sigma) @ Y
L @ np.random.normal(0, dt, (d, N))
Any help would be very appreciated. :)
With best regards!
Unfortunately, this doesn't look like vectorizable code:
Iterations should be independent: vectorization typically means performing several iterations at once. It also usually implies using AVX, SSE or FMA instructions (if we talk about x86 processors) to make iterations run truly in parallel at the hardware level.
Continuing with vector assembly instructions: that level of optimization is typically unreachable from Python code because the interpreter isn't that smart. Each iteration here is also doing too much to be vectorized. It actually contains sub-loops! We don't see them, but matrix multiplications do involve more loops.
So I wouldn't call optimization of this loop a "vectorization". But luckily, there are still things to check:
Profile it. Find out what part of the computation consumes most of the time.
Verify that np.random doesn't slow down the program significantly. If yes, you can rely on pre-generated values instead.
Check if code that can be vectorized is vectorized. That means, verify that your numpy is built with SSE/AVX support and that matrix multiplications use that under the hood. It can be a bit tricky to do but up to x4 speedups* are possible with AVX usage.
If parts of the code are indeed vectorized on the assembly level, switching to storing data in float16 arrays can make it even faster. To my knowledge, AVX does support operations on large blocks of 16-bit floats.
Rewrite it in C/Cython, or try out Numba JIT compilation for the same task (see the Numba sketch at the end of this answer).
If even compilation with Numba does not help, I wonder if Tensorflow can help here. With Tensorflow, Python code doesn't kick off computations immediately but rather constructs a computational graph that is then executed without returning to the interpreter level. Tensorflow does support AVX and SSE (although not without pain), so you may expect more control over low-level details than with numpy. And you can also try to launch it on GPU.
Last thing, I don't quite believe in it, but does loop unrolling help?
for i in range(1, (T * n + 1) // 4):
    Y = Y + ...
    Y = Y + ...
    Y = Y + ...
    Y = Y + ...
* - subject to Amdahl's law
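To make the Numba suggestion concrete, here is a minimal sketch (the function name euler_steps is mine; it assumes Y, mu, sigma, L are float64 NumPy arrays with compatible shapes and mirrors the update rule from the question):
import numpy as np
from numba import njit

@njit
def euler_steps(Y, mu, sigma, L, dt, steps):
    # Same update as the loop in the question, compiled to machine code.
    D_mu = np.diag(mu)        # (d, d)
    D_sigma = np.diag(sigma)  # (d, d)
    d, N = Y.shape
    for _ in range(steps):
        noise = L @ np.random.normal(0.0, dt, (d, N))
        Y = Y + D_mu @ Y * dt + (D_sigma @ Y) * noise
    return Y
Calling euler_steps(Y, mu, sigma, L, dt, T * n) then runs the whole loop without returning to the interpreter at every step; whether that beats plain numpy depends on how large d and N are.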

Anaconda package for cufft keeping arrays in gpu memory between fft / ifft calls

I am using the anaconda suite with ipython 3.6.1 and their accelerate package. There is a cufft sub-package with two functions, fft and ifft. These, as far as I understand, take a numpy array as input and output a numpy array, both in system RAM, i.e. all GPU memory and transfers between system and GPU memory are handled automatically, and GPU memory is released when the function ends. This all seems very nice and it works for me. However, I would like to run multiple fft/ifft calls on the same array and each time extract just one number from the array. It would be nice to keep the array in GPU memory to minimize system <-> GPU transfers. Am I correct that this is not possible using this package? If so, is there another package that would do the same? I have noticed the reikna project, but that doesn't seem to be available in anaconda.
The thing I am doing (and would like to do efficiently on gpu) is in short shown here using numpy.fft
import math as m
import numpy as np
import numpy.fft as dft
nr = 100
nh = 2**16
h = np.random.rand(nh)*1j
H = np.zeros(nh,dtype='complex64')
h[10] = 1
r = np.zeros(nr,dtype='complex64')
fftscale = m.sqrt(nh)
corr = 0.12j
for i in np.arange(nr):
    r[i] = h[10]
    H = dft.fft(h,nh)/fftscale
    h = dft.ifft(h*corr)*fftscale
r[nr-1] = h[10]
print(r)
Thanks in advance!
So I found ArrayFire, which seems rather easy to work with.
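For reference, the keep-everything-on-the-GPU pattern can also be written with CuPy (a different library from the one above, shown only as an illustration; h_numpy stands for the host-side array from the question):
import cupy as cp

h = cp.asarray(h_numpy)                  # one host -> device transfer
r = cp.zeros(nr, dtype='complex64')      # result buffer on the GPU
for i in range(nr):
    r[i] = h[10]                         # stays in GPU memory
    H = cp.fft.fft(h, nh) / fftscale
    h = cp.fft.ifft(h * corr) * fftscale
result = cp.asnumpy(r)                   # one device -> host transfer at the end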

Memory usage doubled when passing matrix to shared object

I have a linear set of equations, where
A x = b and A is a large matrix and b is known as well.
The matrix A is set up with python.
Now I want to invert matrix A to get x.
A and b are passed to a Fortran 90 program via a shared object. I compiled the Fortran program using numpy.f2py:
import numpy.f2py.f2py2e as f2py2e
import sys, os
sys.argv += "-lmkl_rt -c -m MKL_MODULE MKL_WRAPPER.f90".split()
f2py2e.main()
Finally, I call the f90 subroutine:
MKL_MODULE.mkl_wrapper.call_dgelsd(A, b, np.shape(A)[0], np.shape(A)[1])
When calling the Fortran program, the memory usage doubles, apparently due to internal copies of the matrices A and b.
However, once I have the vector x, I'm not interested in A or b anymore.
Is there any way to avoid that internal copy when passing A to the Fortran program?
I already had the idea of saving A and b to disk and reading them from the Fortran program, but this takes a very long time and is not really an option for matrices of the size I'm dealing with.
No internal copy will be made if the arrays are in F-order. See: How to force numpy array order to fortran style?
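A minimal sketch of what that looks like in practice (the sizes m and n are placeholders; MKL_MODULE is the f2py-built module from the question):
import numpy as np
import MKL_MODULE   # the module compiled with f2py above

m, n = 20000, 20000                 # placeholder sizes

# Build A in Fortran (column-major) order from the start, so f2py can hand
# it to the Fortran routine without creating a temporary copy.
A = np.zeros((m, n), order='F')
b = np.zeros(m)
# ... fill A and b ...

assert A.flags['F_CONTIGUOUS']
MKL_MODULE.mkl_wrapper.call_dgelsd(A, b, np.shape(A)[0], np.shape(A)[1])
For an array that already exists in C order, np.asfortranarray(A) achieves the same layout, at the price of one conversion copy.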

Parallelize python loop numpy.searchsorted using cython

I've coded a function using cython containing the following loop. Each row of array A1 is binary searched for all values in array A2. So each loop iteration returns a 2D array of index values. Arrays A1 and A2 enter as function arguments, properly typed.
The array C is pre-allocated at the highest indentation level, as required in Cython.
I simplified things a little for this question.
...
cdef np.ndarray[DTYPEint_t, ndim=3] C = np.zeros([N,M,M], dtype=DTYPEint)
for j in range(0,N):
    C[j,:,:] = np.searchsorted(A1[j,:], A2, side='left' )
All's fine so far, things compile and run as expected. However, to gain even more speed I want to parallelize the j-loop. First attempt was simply writing
for j in prange(0,N, nogil=True):
    C[j,:,:] = np.searchsorted(A1[j,:], A2, side='left' )
I tried many coding variations, such as putting things in a separate nogil function, assigning the result to an intermediate array, and writing a nested loop to avoid the assignment to the sliced part of C.
Errors usually are of the form "Accessing Python attribute not allowed without gil"
I can't get it to work. Any suggestions on how I can do this?
EDIT:
This is my setup.py
try:
    from setuptools import setup
    from setuptools import Extension
except ImportError:
    from distutils.core import setup
    from distutils.extension import Extension

from Cython.Build import cythonize
import numpy

extensions = [Extension("matchOnDistanceVectors",
                        sources=["matchOnDistanceVectors.pyx"],
                        extra_compile_args=["/openmp", "/O2"],
                        extra_link_args=[]
                        )]

setup(
    ext_modules = cythonize(extensions),
    include_dirs=[numpy.get_include()]
)
I'm on Windows 7, compiling with MSVC. I did specify the /openmp flag, and my arrays are of size 200*200. So everything seems in order...
I believe that searchsorted releases the GIL itself (see https://github.com/numpy/numpy/blob/e2805398f9a63b825f4a2aab22e9f169ff65aae9/numpy/core/src/multiarray/item_selection.c, line 1664 "NPY_BEGIN_THREADS_DEF").
Therefore, you can do
for j in prange(0,N, nogil=True):
    with gil:
        C[j,:,:] = np.searchsorted(A1[j,:], A2, side='left' )
That temporarily claims back the GIL to do the necessary work on Python objects (which is hopefully quick), and it should then be released again inside searchsorted, allowing the loop to run largely in parallel.
To update, I did a quick test of this (A1.shape==(105,100), A2.shape==(302,302), numbers chosen pretty arbitrarily). For 10 repeats the serial version took 4.5 seconds, the parallel version 1.4 seconds (the test was run on a 4-core CPU). You don't get the full 4x speed-up, but you get close.
This was compiled as described in the documentation. I suspect if you aren't seeing speed-up then it could be any of: 1) your arrays are small enough that the function-call/numpy checking types and sizes overhead is dominating; 2) You aren't compiling it with OpenMP enabled; or 3) your compiler doesn't support OpenMP.
You have a bit of a catch-22. You need the GIL to call numpy.searchsorted, but the GIL prevents any kind of parallel processing. Your best bet is to write your own nogil version of searchsorted:
cdef int mySearchSorted(double[:] array, double target) nogil:
    # binary search implementation

for j in prange(0, N, nogil=True):
    for k in range(A2.shape[0]):
        for L in range(A2.shape[1]):
            C[j, k, L] = mySearchSorted(A1[j, :], A2[k, L])
numpy.searchsorted also has a non-trivial amount of overhead, so if N is large it makes sense to use your own searchsorted just to reduce that overhead.
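For completeness, a minimal sketch of such a nogil binary search (left-insertion semantics, i.e. the same as np.searchsorted(..., side='left'); illustrative only, not taken from the original answer):
cimport cython

@cython.boundscheck(False)
@cython.wraparound(False)
cdef int mySearchSorted(double[:] array, double target) nogil:
    # Smallest index i such that array[i] >= target, assuming array is sorted.
    cdef int lo = 0
    cdef int hi = array.shape[0]
    cdef int mid
    while lo < hi:
        mid = (lo + hi) // 2
        if array[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo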

Optimising memory usage in numpy

The following program loads two images with PyGame, converts them to Numpy arrays, and then performs some other Numpy operations (such as FFT) to emit a final result (of a few numbers). The inputs can be large, but at any moment only one or two large objects should be live.
A test image is about 10M pixels, which translates to 10MB once it's greyscaled. It gets converted to a Numpy array of dtype uint8, which after some processing (applying Hamming windows), is an array of dtype float64. Two images are loaded into arrays this way; later FFT steps result in an array of dtype complex128. Prior to adding the excessive gc.collect calls, the program memory size tended to increase with each step. Additionally, it seems most Numpy operations will give a result in the highest precision available.
Running the test (sans the gc.collect calls) on my 1GB Linux machine results in prolonged thrashing, which I have not waited for. I don't yet have detailed memory use stats -- I tried some Python modules and the time command to no avail; now I'm looking into valgrind. Watching PS (and dealing with machine unresponsiveness in the later stages of the test) suggests a maximum memory usage of about 800 MB.
A 10 million cell array of complex128 should occupy 160 MB. Having (ideally) at most two of these live at one time, plus the not-insubstantial Python and Numpy libraries and other paraphernalia, probably means allowing for 500 MB.
I can think of two angles from which to attack the problem:
Discarding intermediate arrays as soon as possible. That's what the gc.collect calls are for -- they seem to have improved the situation, as it now completes with only a few minutes of thrashing ;-). I think one can expect that memory-intensive programming in a language like Python will require some manual intervention.
Using less-precise Numpy arrays at each step. Unfortunately the operations that return arrays, like fft2, do not appear to allow the type to be specified.
So my main question is: is there a way of specifying output precision in Numpy array operations?
More generally, are there other common memory-conserving techniques when using Numpy?
Additionally, does Numpy have a more idiomatic way of freeing array memory? (I imagine this would leave the array object live in Python, but in an unusable state.) Explicit deletion followed by immediate GC feels hacky.
import sys
import numpy
import pygame
import gc

def get_image_data(filename):
    im = pygame.image.load(filename)
    im2 = im.convert(8)
    a = pygame.surfarray.array2d(im2)
    hw1 = numpy.hamming(a.shape[0])
    hw2 = numpy.hamming(a.shape[1])
    a = a.transpose()
    a = a*hw1
    a = a.transpose()
    a = a*hw2
    return a

def check():
    gc.collect()
    print 'check'

def main(args):
    pygame.init()
    pygame.sndarray.use_arraytype('numpy')
    filename1 = args[1]
    filename2 = args[2]
    im1 = get_image_data(filename1)
    im2 = get_image_data(filename2)
    check()
    out1 = numpy.fft.fft2(im1)
    del im1
    check()
    out2 = numpy.fft.fft2(im2)
    del im2
    check()
    out3 = out1.conjugate() * out2
    del out1, out2
    check()
    correl = numpy.fft.ifft2(out3)
    del out3
    check()
    maxs = correl.argmax()
    maxpt = maxs % correl.shape[0], maxs / correl.shape[0]
    print correl[maxpt], maxpt, (correl.shape[0] - maxpt[0], correl.shape[1] - maxpt[1])

if __name__ == '__main__':
    args = sys.argv
    exit(main(args))
This answer on SO says "Scipy 0.8 will have single precision support for almost all the fft code", and SciPy 0.8.0 beta 1 is just out. (Haven't tried it myself, cowardly.)
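If that single-precision support is available, the change on the numpy side is small: keep the image arrays in float32 and call SciPy's FFT instead of numpy's. A sketch, reusing im1/im2 from the script above (whether the output really comes back as complex64 depends on the SciPy version providing single-precision FFTs):
import numpy
import scipy.fftpack

im1 = im1.astype(numpy.float32)   # half the memory of float64
im2 = im2.astype(numpy.float32)

out1 = scipy.fftpack.fft2(im1)    # complex64 rather than complex128, if supported
out2 = scipy.fftpack.fft2(im2)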
If I understand correctly, you are calculating a convolution between two images. The Scipy package contains a dedicated module for that (ndimage), which might be more memory-efficient than the "manual" approach via Fourier transforms. It would be good to try it instead of going through Numpy.
