Dictionary of 4-D numpy array: how to optimize? - python

I have a performance problem when initializing a dictionary of 4-D NumPy arrays.
I have a list of coefficients names:
cnames = ['CN', 'CM', 'CA', 'CY', 'CLN' ...];
whose size is not fixed (it depends on upstream code).
For each coefficient I have to generate a 4-D tensor [nalpha x nmach x nbeta x nalt] of zeros (for preallocation purposes), so I do:
#Number of coefficients
numofc = len(cnames);
final_data = {};
#I have to generate <numofc> 4D matrixes
for i in range(numofc):
    final_data[cnames[i]]=n.zeros((nalpha,nmach,nbeta,nalt));
Each dimension size (nalpha, nmach, nbeta, nalt) is an integer between 10 and 30; in some cases it can be between 100 and 200.
This takes like 4 minutes. How can I speed up this? Or am I doing something wrong?

The code you posted should not take 4 minutes to run (unless cnames is extremely large or you have very little RAM and are forced to use swap space).
import numpy as np
cnames = ['CN', 'CM', 'CA', 'CY', 'CLN']*1000
nalpha,nmach,nbeta,nalt = 10,20,30,40
#Number of coefficients
numofc = len(cnames)
final_data = {}
#I have to generate <numofc> 4D matrixes
for i in range(numofc):
    final_data[cnames[i]]=np.zeros((nalpha,nmach,nbeta,nalt))
Even if cnames has 5000 elements, it should still take only on the order of a couple seconds:
% time test.py
real 0m4.559s
user 0m0.856s
sys 0m3.328s
The semicolons at the end of statements suggest you have experience in some other language. Be careful about translating commands from that language line-by-line into NumPy/Python. Coding in NumPy as one would in C is a recipe for slowness.
In particular, try to avoid updating elements in an array element-by-element. This works fine in C, but is very slow with Python. NumPy achieves speed by delegating to functions coded in Fortran or Cython or C or C++. By updating arrays element-by-element you are using Python loops which are not as fast.
Instead, try to rephrase your computation in terms of operations on whole arrays (or at least, slices of arrays).
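For example, here is a minimal sketch (mine, with made-up data) contrasting an element-by-element update with the equivalent whole-array operation:
import numpy as np
a = np.random.rand(1000, 1000)
# Slow: updating elements one at a time runs the loop in interpreted Python.
b = np.empty_like(a)
for i in range(a.shape[0]):
    for j in range(a.shape[1]):
        b[i, j] = 2.0 * a[i, j] + 1.0
# Fast: the same computation expressed on the whole array runs in compiled code.
c = 2.0 * a + 1.0
assert np.allclose(b, c)
The second form is typically orders of magnitude faster for arrays of this size.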
I have probably speculated too much on the cause of the problem. You need to profile your code, and then, if you want more specific help, post the result of the profile plus the problematic code (most helpfully in the form of an SSCCE).

Related

Python - Multiprocess shared memory with python numpy array?

Here I have 2 NumPy arrays, and a function that takes those arrays as input, does some NumPy calculation and returns the result. It works as it is, but it's slow and I think we can use multiprocessing to make it a bit faster.
Anyway, here's my code:
A = #4 dimensions big numpy array
B = #1 dimension numpy array
def function(A, B):
    P = np.einsum("ijkl,ij->kl", A, B)
    return P.astype(np.uint8)
result = function(A,B)
I'm still quite new to this multiprocessing stuff, but I think we should be able to put arrays A and B into shared memory (maybe using multiprocessing.Array()?), and then spawn multiple processes to compute function(A, B). But I still can't quite understand how to put all of that into code.
EDIT:
Alright, so it seems like the approach above doesn't work, but let's try another case. Now, let's say the length of array A is 120, and I want to use only 3/4 of array A, from index 0 to 89, together with all of array B, in process no. 1.
Then I also want to use 3/4 of array A, but from index 30 to 119, together with all of array B, in process no. 2. Will that help? Of course I can make array A even larger so that its parts are computed by even more processes, but the thing is: will this concept work?
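For what it's worth, here is a rough sketch (mine, not from the question; the shapes are made up and Python 3.8+ is assumed) of the shared-memory idea, where each process attaches to the same block holding A and works on its own slice:
import numpy as np
from multiprocessing import Process, shared_memory

def worker(shm_name, shape, dtype, start, stop, B):
    # Attach to the existing shared block and view it as an ndarray (no copy).
    shm = shared_memory.SharedMemory(name=shm_name)
    A = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    partial = np.einsum("ijkl,ij->kl", A[start:stop], B[start:stop])
    print(partial.shape)  # the partial results still have to be combined somehow
    shm.close()

if __name__ == "__main__":
    A = np.random.rand(120, 8, 16, 16)  # stand-in for the big 4-D array
    B = np.random.rand(120, 8)          # stand-in matching A's first two axes
    shm = shared_memory.SharedMemory(create=True, size=A.nbytes)
    shared_A = np.ndarray(A.shape, dtype=A.dtype, buffer=shm.buf)
    shared_A[:] = A                     # copy the data into shared memory once

    p1 = Process(target=worker, args=(shm.name, A.shape, A.dtype, 0, 90, B))
    p2 = Process(target=worker, args=(shm.name, A.shape, A.dtype, 30, 120, B))
    p1.start(); p2.start()
    p1.join(); p2.join()

    shm.close()
    shm.unlink()
Whether this is actually faster than a single np.einsum call depends on the sizes involved; einsum itself already runs in compiled code.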

avoid for loop in python normal(size={}))

My goal is to create an array where each element is the result of normal(size=...) for the corresponding element of it.
I am trying to optimize:
it = 2 ** arange(6, 25)
M = zeros(len(it))
for x in range(len(it)):
    M[x] = (normal(size=it[x]))
So far these attempts have not worked:
N = zeros(len(it))
it = 2 ** arange(6, 25)
N = (normal(size=it))
Further I tried:
N = (normal(size=it[:]))
Given my data, I believe such manual work, or a for loop, is really inefficient, so I am trying to come up with vectorized operations.
I receive:
File "mtrand.pyx", line 1335, in numpy.random.mtrand.RandomState.normal
File "common.pyx", line 557, in numpy.random.common.cont
ValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size.
You've not been very precise about where these functions are coming from, but I'm guessing that by normal(size=it[:]) you mean:
import numpy as np
it = 2 ** np.arange(6, 25)
np.random.normal(size=it)
which would be telling numpy to create a 19-dimensional array (i.e. len(it)) that contains about 6 × 10^85 elements (i.e. np.prod(it.astype(float)), as a float because the number overflows an int64). numpy is saying that it can't do that, which seems like a reasonable thing to do.
Numpy doesn't like the "ragged arrays" you're trying to create, neither do most matrix/numeric libraries, hence support is limited!
I'm unsure why you consider that the "loop is really inefficient". You're creating ~33 million floats from 19 iterations of a simple Python loop. The vast majority of the time will be spent in highly optimised Numpy library code, and some tiny (basically unmeasurable) amount of time will be spent evaluating your Python bytecode.
If you really want a one-liner then you can do:
X = [np.random.normal(size=2**i) for i in range(6, 25)]
which makes the split between Numpy and Python worlds more obvious.
Note that on my laptop, the Python code executes in ~5µs while the Numpy code runs for ~800ms. So you're trying to optimise the 0.0006% part!
Note that it's not always a win to use Numpy's vectorization, it only helps with larger arrays, for example the above loop is "faster" than:
X = [np.random.normal(size=i) for i in 2**np.arange(6, 25)]
4.8 vs 5.1 µs for the Python code, because of the time spent marshalling objects into/out of the Numpy world. Again, none of this matters, just use whichever solution makes your code easier to understand. A few microseconds is nothing compared to seconds.

Why do I keep getting this error 'RuntimeWarning: overflow encountered in int_scalars'

I am trying to multiply all the row values and column values of a 2 dimensional numpy array with an explicit for-loop:
product_0 = 1
product_1 = 1
for x in arr:
    product_0 *= x[0]
    product_1 *= x[1]
I realize the product will blow up to become an extremely large number, but from my previous experience Python has had no problem dealing with extremely large numbers.
So from what I can tell this is a problem with NumPy, except that I am not storing the gigantic product in a NumPy array, or in any NumPy data type for that matter; it's just a normal Python variable.
Any idea how to fix this?
Using non-inplace multiplication hasn't helped either: product_0 = x[0]*product_0
Python ints are represented with arbitrary precision, so they cannot overflow. But NumPy uses C under the hood, where integers have fixed precision; the largest signed 64-bit integer is 2^63 - 1. Your number is far beyond this value, being on average about ((716-1)/2)^86507.
When you extract x[0] in the for loop, it is still a NumPy object. To use the full power of Python integers you need to explicitly convert it to a Python int, like this:
product_0 = 1
product_1 = 1
for x in arr:
    t = int(x[0])
    product_0 = product_0 * t
and it will not overflow.
Following your comment, which makes your question more specific: your original problem is to calculate the geometric mean of the array for each row/column. Here is the solution.
First I generate an array that has the same properties as your array:
arr = np.resize(np.random.randint(1,716,86507*2 ),(86507,2))
Then, calculate the geometric mean for each column/row:
from scipy import stats
gm_0 = stats.mstats.gmean(arr, axis = 0)
gm_1 = stats.mstats.gmean(arr, axis = 1)
gm_0 will be an array that contains the geometric mean of the x and y coordinates. gm_1 instead contains the geometric mean of the rows.
Hope this solves your problem!
You say
So from what I can tell this is a problem with numpy except I am not storing the gigantic product in a numpy array or any numpy data type for that matter its just a normal python variable.
Your product may not be a NumPy array, but it is using a NumPy data type. x[0] and x[1] are NumPy scalars, and multiplying a Python int by a NumPy scalar produces a NumPy scalar. NumPy integers have a finite range.
While you technically could call int on x[0] and x[1] to get a Python int, it'd probably be better to avoid needing such huge ints. You say you're trying to perform this multiplication to compute a geometric mean; in that case, it'd be better to compute the geometric mean by transforming to and from logarithms, or to use scipy.stats.mstats.gmean, which uses logarithms under the hood.
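As an illustration (my sketch, on made-up data shaped like the array described in the question), the logarithm approach looks like this:
import numpy as np
# Made-up data with the same shape and value range as in the question.
arr = np.random.randint(1, 716, size=(86507, 2))
# Geometric mean per column via logarithms: exp(mean(log(x)))
# never forms the huge product, so nothing overflows.
gm_cols = np.exp(np.log(arr).mean(axis=0))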
NumPy may be compiled with 32-bit integers rather than 64-bit ones, so while Python can handle this, NumPy will overflow at even smaller values; if you want to use NumPy then I recommend that you build it from source.
Edit
After some testing with
import numpy as np
x=np.abs(np.random.randn(1000,2)*1000)
np.max(x)
prod1=np.dtype('int32').type(1)
prod2=np.dtype('int32').type(1)
k=0
for i,j in x:
    prod1*=i
    prod2*=j
    k+=1
print(k," ",prod1,prod2)
1.797693134e308 is the max value (to as many digits as my NumPy scalar was able to hold).
If you run this you will see that NumPy can handle quite a large value, but given that you said your max value is around 700, even with just 1000 values my scalar overflowed.
As for how to fix this: rather than doing it manually, the answer using SciPy seems more viable now and is able to get the result, so I suggest that you go forward with that:
from scipy.stats.mstats import gmean
x=np.abs(np.random.randn(1000,2)*1000)
print(gmean(x,axis=0))
You can achieve what you want with the following command in numpy:
import numpy as np
product_0 = np.prod(arr.astype(np.float64))
It can still reach np.inf if your numbers are large enough, but that can happen for any type.
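If you need the two column products separately, as in the original loop, a per-axis variant of the same idea would be (my addition, not part of the answer above):
import numpy as np
# Column-wise products in float64; the two results correspond to
# product_0 and product_1 from the question.
product_0, product_1 = np.prod(arr.astype(np.float64), axis=0)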

NumPy: Compute mode row-wise spanning over multiple arrays from iterator

In my application I receive from an iterator an arbitrary number (let's say 1000 for now) of big 1-dimensional arrays arr1, arr2, arr3, ..., arr1000 (10000 entries each). Each entry is an integer between 0 and n, where in this case n = 9. My ultimate goal is to compute a 1-dimensional array result such that result[i] == the mode of arr1[i], arr2[i], arr3[i], ..., arr1000[i].
However, it is not tractable to concatenate the arrays to one big matrix and then compute the mode row-wise, since this may exceed the RAM on my machine.
An alternative would be to set up an array res2 of shape (10000, 10), then loop through every array, use each entry e as an index and increase the value of res2[i][e] by 1. After looping, I would apply something like argmax. However, this is too slow.
So: is there a way to perform this task quickly, maybe by using NumPy's advanced indexing?
EDIT (due to the comments):
This is basically the code which calculates the modes row-wise while avoiding concatenating the arrays:
def foo(length, n):
    counts = np.zeros((length, n), dtype=np.int_)
    for arr in array_iterator():
        i = 0
        for e in arr:
            counts[i][e] += 1
            i += 1
    return np.argmax(counts, axis=1)
It already takes 60 seconds for 100 arrays of size 10000 (although there is more work done behind the scenes, which contributes to that time; however, this work scales linearly with the number of arrays).
Regarding the real sizes:
The number of different arrays is really arbitrary. It's a parameter of the experiments and I'd like to have the option of setting it even to values like 10^6. The length of each array depends on the data set I'm working with. It could be 10000, or 100000, or even larger. However, splitting it into smaller pieces may be possible, though annoying.
My free RAM for this task is about 4 GB.
EDIT 2:
The running time I gave above leads to a wrong impression. Actually, the running time belonging just to the inner loop (for e in arr) in the scenario mentioned above is only 5 seconds, which is now OK for me, since it's negligible compared to the remaining running time. I will leave this question open anyway for the moment, since there might be an even faster method out there.
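For reference, here is a rough sketch (mine, not from the question; array_iterator and the sizes are placeholders) of how the inner Python loop could be replaced by one vectorized update per array, using the same counts layout:
import numpy as np

def mode_rowwise(array_iterator, length, n):
    counts = np.zeros((length, n), dtype=np.int_)
    rows = np.arange(length)
    for arr in array_iterator:
        # arr has shape (length,) with values in 0..n-1; since each row index
        # appears exactly once per call, fancy indexing bumps all the counters
        # in a single C-level operation.
        counts[rows, arr] += 1
    return np.argmax(counts, axis=1)

# Example with made-up data: 100 arrays of length 10000 with values 0..9.
arrays = (np.random.randint(0, 10, size=10000) for _ in range(100))
result = mode_rowwise(arrays, length=10000, n=10)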

Scipy LinearOperator With Multiple Inputs

I need to invert a large, dense matrix, which I hoped to do with SciPy's gmres. Fortunately, the dense matrix A follows a pattern and I do not need to store the matrix in memory. The LinearOperator class allows us to construct an object which acts as the matrix for GMRES and can compute the matrix-vector product A*v directly. That is, we write a function mv(v) which takes a vector v as input and returns mv(v) = A*v. Then we can use the LinearOperator class to create A_LinOp = LinearOperator(shape = shape, matvec = mv). We can pass the linear operator to SciPy's gmres command to evaluate the matrix-vector products without ever having to fully load A into memory.
The documentation for the LinearOperator is found here: LinearOperator Documentation.
Here is my problem: to write the routine that computes the matrix-vector product mv(v) = A*v, I need another input vector C. The entries of A are of the form A[i,j] = f(C[i] - C[j]). So what I really want is for mv to take two inputs: one fixed vector input C, and one variable input v for which we want to compute A*v.
MATLAB has a similar setup, where one would write x = gmres(@(v) mv(v,C), b), where b is the right-hand side of the problem Ax = b, mv is the function that takes the variable input v for which we want to compute A*v, and C is the fixed, known vector needed for the assembly of A.
My problem is that I can't figure out how to allow the LinearOperator class to accept two inputs, one variable and one "fixed" like I can in MATLAB.
Is there a way to do the analogous operation in SciPy? Alternatively, if anyone knows of a better way of inverting a large, dense matrix (50000, 50000) where the entries follow a pattern, I would greatly appreciate any suggestions.
Thanks!
EDIT: I should have stated this information earlier. The matrix is actually (in block form) [A C; C^T 0], where A is N x N (N large), C is N x 3, the 0 is 3 x 3, and C^T is the transpose of C. This array C is the same array as the one mentioned above. The entries of A follow the pattern A[i,j] = f(C[i] - C[j]).
I wrote mv(v,C) to construct A*v[i] row by row for i = 0..N, by computing sum f(C[i]-C[j])*v[j] (actually, I do numpy.dot(FC,v) where FC[j] = f(C[i]-C[j]), which works well), and then at the end doing the computations for the C^T rows. I was hoping to eventually replace the large for loop with a multiprocessing call to parallelize it, but that's something to consider in the future. I will also look into using Cython to speed up the computations.
This is very late, but if you're still interested...
Your A matrix must be very low rank, since it's a nonlinearly transformed version of a rank-2 matrix. Plus it's symmetric. That means it's trivial to invert: get the truncated eigenvalue decomposition with, say, 5 eigenvalues, A = U*S*U', then invert that: A^-1 = U*S^-1*U'. S is diagonal, so this is inexpensive. You can get the truncated eigenvalue decomposition with eigh.
That takes care of A. Then for the rest: use the block matrix inversion formula. It looks nasty, but I will bet you 100,000,000 Prussian francs that it's 50x faster than the direct method you were using.
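As a rough illustration of the truncated-eigendecomposition idea (my own sketch; f, C and the sizes are made up, and the result is the pseudo-inverse of the rank-k approximation rather than a true inverse):
import numpy as np
from scipy.linalg import eigh

N, k = 2000, 5
C = np.random.rand(N, 3)                     # hypothetical stand-in for the real C
f = lambda d: np.linalg.norm(d, axis=-1)**2  # hypothetical kernel

# A[i, j] = f(C[i] - C[j]); symmetric by construction.
A = f(C[:, None, :] - C[None, :, :])

w, U = eigh(A)                    # all eigenpairs, eigenvalues in ascending order
idx = np.argsort(np.abs(w))[-k:]  # keep the k largest in magnitude
w_k, U_k = w[idx], U[:, idx]
A_pinv = (U_k / w_k) @ U_k.T      # cheap (pseudo-)inverse of the rank-k part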
I faced the same situation (some years later than you) of trying to pass more than one argument to LinearOperator, but for another problem. The solution I found was to use global variables, to avoid passing the extra variables as arguments to the function.
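A minimal sketch (mine; f is a hypothetical kernel and the sizes are made up) of an alternative that binds the fixed C to the matvec via a closure instead of a global variable:
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

N = 1000
C = np.random.rand(N)          # the fixed vector the matrix entries depend on
f = lambda d: np.exp(-d**2)    # hypothetical kernel, so A[i, j] = f(C[i] - C[j])

def make_matvec(C):
    def mv(v):
        # Compute A @ v row by row without ever storing A: row i of A is f(C[i] - C).
        out = np.empty(len(C))
        for i in range(len(C)):
            out[i] = np.dot(f(C[i] - C), v)
        return out
    return mv

A_linop = LinearOperator(shape=(N, N), matvec=make_matvec(C))
b = np.random.rand(N)
x, info = gmres(A_linop, b)    # info == 0 means GMRES converged
Using functools.partial(mv, C=C) on a two-argument mv(v, C) would work just as well.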
