I have two questions, but the first takes precedence.
I was doing some timeit testing of some basic numpy operations that will be relevant to me.
I did the following:
import numpy as np
from collections import defaultdict

n = 5000
j = defaultdict()        # no default factory given, so this behaves like a plain dict
for i in xrange(n):
    print i
    j[i] = np.eye(n)     # each 5000x5000 float64 identity matrix is ~200 MB
What happened is, Python's memory use almost immediately shot up to 6 gigs, which is over 90% of my memory. However, the numbers printed off at a steady pace, about 10-20 per second, and while they did, memory use sporadically bounced down to ~4 gigs, back up to 5, down to 4, up to 6, down to 4.5, and so on.
At 1350 iterations I had a segmentation fault.
So my question is, what was actually occurring during this time? Are these matrices actually created one at a time? Why is memory use spiking up and down?
My second question is, I may actually need to do something like this in a program I am working on. I will be doing basic arithmetic and comparisons between many large matrices, in a loop. These matrices will sometimes, but rarely, be dense. They will often be sparse.
If I actually need 5000 5000x5000 matrices, is that feasible with 6 gigs of memory? I don't know what can be done with all the tools and tricks available... Maybe I would just have to store some of them on disk and pull them out in chunks?
Any advice for if I have to loop through many matrices and do basic arithmetic between them?
Thank you.
If I actually need 5000 5000x5000 matrices, is that feasible with 6 gigs of memory?
If they're dense matrices, and you need them all at the same time, not by a long shot. Consider:
5K * 5K = 25M cells
25M * 8B = 200MB (assuming float64)
5K * 200MB = 1TB
The matrices are being created one at a time. As you get near 6GB, what happens depends on your platform. It might start swapping to disk, slowing your system to a crawl. There might be a fixed-size or max-size swap, so eventually it runs out of memory anyway. It may make assumptions about how you're going to use the memory, guessing that there will always be room to fit your actual working set at any given moment into memory, only to segfault when it discovers it can't. But the one thing it isn't going to do is just work efficiently.
You say that most of your matrices are sparse. In that case, use one of the sparse matrix representations. If you know which of the 5000 will be dense, you can mix and match dense and sparse matrices, but if not, just use the same sparse matrix type for everything. If this means your occasional dense matrices take 210MB instead of 200MB, but all the rest of your matrices take 1MB instead of 200MB, that's more than worthwhile as a tradeoff.
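For illustration, a rough sketch of what that might look like with scipy.sparse; the CSR format and the density used here are assumptions, not taken from the question:

from scipy import sparse

n = 5000
matrices = {}
for i in range(10):                                  # only a handful shown
    matrices[i] = sparse.random(n, n, density=1e-4,  # ~2500 nonzeros each
                                format="csr")        # tens of KB, not 200 MB

# arithmetic and comparisons stay sparse where the operands allow it
s = matrices[0] + matrices[1]
p = matrices[0].multiply(matrices[1])                # elementwise product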
Also, do you actually need to work on all 5000 matrices at once? If you only need, say, the current matrix and the previous one at each step, you can generate them on the fly (or read from disk on the fly), and you only need 400MB instead of 1TB.
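A minimal sketch of that streaming pattern, where np.eye is just a stand-in for however each matrix is really produced:

import numpy as np

def generate(n, count):
    for i in range(count):
        yield np.eye(n)            # stand-in for the real construction of matrix i

prev = None
for cur in generate(5000, 5000):
    if prev is not None:
        diff = cur - prev          # whatever arithmetic/comparison is needed
    prev = cur                     # at most two ~200 MB matrices alive at once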
Worst-case scenario, you can effectively swap things manually, with some kind of caching discipline, like least-recently-used. You can easily keep, say, the last 16 matrices in memory. Keep a dirty flag on each so you know whether you have to save it when flushing it to make room for another matrix. That's about as tricky as it's going to get.
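A hedged sketch of that manual-swapping idea; the .npy files, the capacity of 16, and the MatrixCache name are all assumptions for illustration:

from collections import OrderedDict
import os
import numpy as np

class MatrixCache:
    """Keep the most recently used matrices in memory; spill the rest to .npy files."""

    def __init__(self, capacity=16, directory="matrix_cache"):
        self.capacity = capacity
        self.directory = directory
        self.cache = OrderedDict()          # index -> (matrix, dirty flag)
        os.makedirs(directory, exist_ok=True)

    def _path(self, i):
        return os.path.join(self.directory, "%d.npy" % i)

    def get(self, i):
        if i in self.cache:
            self.cache.move_to_end(i)       # mark as most recently used
            return self.cache[i][0]
        m = np.load(self._path(i))          # assumes the matrix was saved earlier
        self._store(i, m, dirty=False)
        return m

    def put(self, i, m):
        self._store(i, m, dirty=True)

    def _store(self, i, m, dirty):
        self.cache[i] = (m, dirty)
        self.cache.move_to_end(i)
        while len(self.cache) > self.capacity:
            j, (old, was_dirty) = self.cache.popitem(last=False)   # evict LRU entry
            if was_dirty:                   # only write back matrices that changed
                np.save(self._path(j), old)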
I am working in Python and I have encountered a problem: I have to initialize a huge array (a 21 x 2000 x 4000 matrix) so that I can copy a submatrix onto it.
The problem is that I want it to be really quick since it is for a real-time application, but when I run numpy.ones((21,2000,4000)), it takes about one minute to create this matrix.
When I run numpy.zeros((21,2000,4000)), it is instantaneous, but as soon as I copy the submatrix, it takes one minute, while in the first case the copying part was instantaneous.
Is there a faster way to initialize a huge array?
I guess there's not a faster way. The matrix you're building is quite large (8 byte float64 x 21 x 2000 x 4000 = 1.25 GB), and might be using up a large fraction of the physical memory on your system; thus, the one minute that you're waiting might be because the operating system has to page other stuff out to make room. You could check this by watching top or similar (e.g., System Monitor) while you're doing your allocation and watching memory usage and paging.
numpy.zeros seems to be instantaneous when you call it, because memory is allocated lazily by the OS. However, as soon as you try to use it, the OS actually has to fit that data somewhere. See Why the performance difference between numpy.zeros and numpy.zeros_like?
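A small sketch you could run to see this for yourself; the timings are illustrative and depend on your platform and available memory:

import time
import numpy as np

t0 = time.time()
a = np.zeros((21, 2000, 4000))      # pages are reserved but not yet committed
print("zeros:", time.time() - t0)   # typically near-instant

t0 = time.time()
a[:] = 1.0                          # first write forces the OS to commit the ~1.25 GB
print("first touch:", time.time() - t0)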
Can you restructure your code so that you only create the submatrices that you were intending to copy, without making the big matrix?
I am working on a project which basically boils down to solving the matrix equation
A.dot(x) = d
where A is a matrix with dimensions roughly 10 000 000 by 2000 (I would like to increase this in both directions eventually).
A obviously does not fit in memory, so this has to be parallelized. I do that by solving A.T.dot(A).dot(x) = A.T.dot(d) instead. A.T.dot(A) will have dimensions 2000 by 2000. It can be calculated by dividing A and d into chunks A_i and d_i along the rows, calculating A_i.T.dot(A_i) and A_i.T.dot(d_i), and summing these. Perfect for parallelizing. I have been able to implement this with the multiprocessing module, but it is 1) hard to scale any further (increasing A in both dimensions), due to memory use, and 2) not pretty (and therefore not easy to maintain).
Dask seems to be a very promising library for solving both these problems, and I have made some attempts. My A matrix is complicated to calculate: it is based on approximately 15 different arrays (each with size equal to the number of rows in A), and some of them are used in an iterative algorithm to evaluate the associated Legendre functions. When the chunks are small (10000 rows), it takes a very long time to build the task graph, and a lot of memory (the memory increase coincides with the call to the iterative algorithm). When the chunks are larger (50000 rows), the memory consumption prior to the calculations is much smaller, but memory is rapidly exhausted when calculating A.T.dot(A). I have tried cache.Chest, but it significantly slows down the calculations.
The task graph must be very large and complicated - calling A._visualize() crashes. With simpler A matrices it works to do this directly (see the response by @MRocklin). Is there a way for me to simplify it?
Any advice on how to get around this would be highly appreciated.
As a toy example, I tried
A = da.ones((2e3, 1e7), chunks = (2e3, 1e3))
(A.T.dot(A)).compute()
This also failed, using up all the memory with only one core being active. With chunks = (2e3, 1e5), all cores start almost immediately, but MemoryError appears within 1 second (I have 15 GB on my current computer). chunks = (2e3, 1e4) was more promising, but it ended up consuming all memory as well.
Edit:
I struck through the toy example test because its dimensions were wrong, and corrected the dimensions in the rest. As @MRocklin says, it does work with the right dimensions. I added a question which I now think is more relevant to my problem.
Edit2:
This is a much simplified example of what I was trying to do. The problem is, I believe, the recursions involved in defining the columns in A.
import dask.array as da

N = 1000000
M = 500
x = da.random.random((N, 1), chunks=5 * M)
# stand-in for my actual, more complicated recursion over the columns
A_dict = {0: x}
for i in range(1, M):
    A_dict[i] = 2 * A_dict[i - 1]
A = da.hstack(tuple(A_dict.values()))
A = A.rechunk((M * 5, M))
ATA = A.T.dot(A)
This seems to lead to a very complicated task graph, which takes up a lot of memory before the calculations even start.
I have now solved this by placing the recursion in a function, with numpy arrays, and more or less do A = x.map_blocks(...).
As a second note, once I have the A matrix task graph, calculating A.T.dot(A) directly does seem to give some memory issues (memory usage is not very stable). I therefore explicitly calculate it in chunks, and sum the results. Even with these workarounds, dask makes a big difference in speed and readability.
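For what it's worth, a rough sketch of the map_blocks approach I described; build_columns and the doubling recursion are placeholders for my real computation:

import dask.array as da
import numpy as np

N, M = 1000000, 500

def build_columns(block):
    # block has shape (chunk_rows, 1); build all M columns with plain numpy
    cols = [block]
    for _ in range(1, M):
        cols.append(2 * cols[-1])       # placeholder for the real recursion
    return np.hstack(cols)              # shape (chunk_rows, M)

x = da.random.random((N, 1), chunks=(2500, 1))
A = x.map_blocks(build_columns, chunks=(2500, M), dtype=float)
ATA = A.T.dot(A)                        # 500 x 500, computed chunk by chunk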
Your output is very very large.
>>> A.T.dot(A).shape
(10000000, 10000000)
Perhaps you intended to compute this with the transposes in the other direction?
>>> A.dot(A.T).shape
(2000, 2000)
This still takes a while (it's a large computation) but it does complete.
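For reference, a version of the toy example with integer shapes and the product taken in the small direction; the chunk size here is just a plausible choice:

import dask.array as da

A = da.ones((2000, 10000000), chunks=(2000, 10000))
result = A.dot(A.T).compute()      # reduces chunk by chunk to a 2000 x 2000 array
print(result.shape)                # (2000, 2000)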
I'm currently working with many small 6x6 matrices: shape A = (N, N, N, 6, 6), with N about 500. I store these matrices in an HDF5 file with PyTables (http://www.pytables.org).
I want to do some calculations on these matrices: inverting, transposing, multiplication, etc. It's quite easy when N is not very big; for example, numpy.linalg.inv(A) should do the trick without a loop. But in my case it runs very slowly, and sometimes I run into memory problems.
Could you suggest an approach to do this more efficiently?
A 6x6 matrix holds 36 eight-byte double values, or 288 bytes; we'll call it 0.5 KB to allow for overhead and ease of calculation. If you accept that, then 500 of them would only represent 250 KB - not much memory.
If you keep all those inverses in memory it's still not a lot - just 500KB.
Can you calculate the amount of RAM your program is consuming to confirm this?
What are you doing - finite element analysis? Are these stiffness, mass, and damping matrices for 500 elements?
If yes, you should not be inverting element matrices. You have to assemble those into global matrices, which will consume more memory, and then solve that system. The inverse still isn't calculated - an in-place LU decomposition is the usual way to do it.
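As a hedged illustration of solving via a factorization rather than an explicit inverse; the size and the matrices here are made up:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

n = 3000
K = np.random.rand(n, n) + n * np.eye(n)   # stand-in for an assembled global matrix
f = np.random.rand(n)                      # stand-in for a load vector

lu, piv = lu_factor(K)                     # factor once, O(n^3)
x = lu_solve((lu, piv), f)                 # back-substitute per right-hand side, O(n^2)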
I wouldn't consider a 500 element mesh to be a large problem. I saw meshes with thousands and tens of thousands of elements when I stopped doing that kind of work in 1995. I'm sure that hardware makes even larger problems possible today.
You're doing something else wrong or there are details that you aren't providing.
There are times when you have to perform many intermediate operations on one, or more, large Numpy arrays. This can quickly result in MemoryErrors. In my research so far, I have found that Pickling (Pickle, CPickle, Pytables etc.) and gc.collect() are ways to mitigate this. I was wondering if there are any other techniques experienced programmers use when dealing with large quantities of data (other than removing redundancies in your strategy/code, of course).
Also, if there's one thing I'm sure of, it's that nothing is free. With some of these techniques, what are the trade-offs (i.e., speed, robustness, etc.)?
I feel your pain... You sometimes end up storing several times the size of your array in values you will later discard. When processing one item in your array at a time, this is irrelevant, but can kill you when vectorizing.
I'll use an example from work for illustration purposes. I recently coded the algorithm described here using numpy. It is a color map algorithm, which takes an RGB image, and converts it into a CMYK image. The process, which is repeated for every pixel, is as follows:
Use the most significant 4 bits of every RGB value as indices into a three-dimensional look-up table. This determines the CMYK values for the eight vertices of a cube within the LUT.
Use the least significant 4 bits of every RGB value to interpolate within that cube, based on the vertex values from the previous step. The most efficient way of doing this requires computing 16 arrays of uint8s the size of the image being processed. For a 24-bit RGB image, that is equivalent to needing 6x the storage of the image in order to process it.
A couple of things you can do to handle this:
1. Divide and conquer
Maybe you cannot process a 1,000x1,000 array in a single pass. But if you can do it with a python for loop iterating over 10 arrays of 100x1,000, it is still going to beat a python iterator over 1,000,000 items by a very wide margin! It's going to be slower, yes, but not by that much.
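A minimal sketch of that pattern; the actual work is just a stand-in:

import numpy as np

data = np.random.rand(1000, 1000)
out = np.empty_like(data)

rows = 100                                        # process 10 blocks of 100x1,000
for start in range(0, data.shape[0], rows):
    block = data[start:start + rows]              # a view, no copy
    out[start:start + rows] = np.sqrt(block) * 2  # stand-in for the real, vectorized work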
2. Cache expensive computations
This relates directly to my interpolation example above, and is harder to come across, although it is worth keeping an eye open for. Because I am interpolating on a three-dimensional cube with 4 bits in each dimension, there are only 16x16x16 possible outcomes, which can be stored in 16 arrays of 16x16x16 bytes. So I can precompute them and store them using 64KB of memory, and look up the values one by one for the whole image, rather than redoing the same operations for every pixel at huge memory cost. This already pays off for images as small as 64x64 pixels, and basically allows processing images with 6x the number of pixels without having to subdivide the array.
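A much simplified sketch of the idea; expensive below is a stand-in, not the actual interpolation from the color-map algorithm:

import numpy as np

def expensive(r, g, b):
    # stand-in for the real per-pixel computation
    return (r * g * b) % 256

# precompute all 16*16*16 outcomes once (a few KB) ...
r, g, b = np.meshgrid(np.arange(16), np.arange(16), np.arange(16), indexing="ij")
lut = expensive(r, g, b).astype(np.uint8)          # shape (16, 16, 16)

# ... then look them up for a whole image instead of recomputing per pixel
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
low = img & 0x0F                                   # least significant 4 bits
out = lut[low[..., 0], low[..., 1], low[..., 2]]   # shape (64, 64)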
3. Use your dtypes wisely
If your intermediate values can fit in a single uint8, don't use an array of int32s! This can turn into a nightmare of mysterious errors due to silent overflows, but if you are careful, it can provide a big saving of resources.
First most important trick: allocate a few big arrays, and use and recycle portions of them, instead of bringing into life and discarding/garbage collecting lots of temporary arrays. It sounds a little bit old-fashioned, but with careful programming the speed-up can be impressive. (You have better control of alignment and data locality, so numeric code can be made more efficient.)
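A small sketch of what that can look like with numpy's out= arguments; the shapes and the loop body are illustrative:

import numpy as np

n = 4000
a = np.random.rand(n, n)
b = np.random.rand(n, n)
scratch = np.empty_like(a)           # allocated once, reused every iteration

for _ in range(100):
    np.multiply(a, b, out=scratch)   # a*b written into the existing buffer
    np.add(scratch, a, out=scratch)  # (a*b) + a, still no new allocation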
Second: use numpy.memmap and hope that the OS caching of accesses to the disk is efficient enough.
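A brief sketch, with a placeholder filename and shape:

import numpy as np

big = np.memmap("big_array.dat", dtype=np.float64, mode="w+", shape=(20000, 20000))
big[:1000, :1000] = np.random.rand(1000, 1000)   # only the pages you touch hit RAM
big.flush()                                      # push dirty pages back to disk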
Third: as pointed out by @Jaime, work on block sub-matrices if the whole matrix is too big.
EDIT:
Avoid unnecessary list comprehensions, as pointed out in this answer on SE.
The dask.array library provides a numpy interface that uses blocked algorithms to handle larger-than-memory arrays with multiple cores.
You could also look into Spartan, Distarray, and Biggus.
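A tiny sketch of the dask.array interface; the shape and chunk sizes are arbitrary:

import dask.array as da

x = da.random.random((100000, 100000), chunks=(5000, 5000))
result = (x + x.T).mean().compute()   # evaluated block by block, never 80 GB at once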
If it is possible for you, use numexpr. For numeric calculations like a**2 + b**2 + 2*a*b (for a and b being arrays) it
will compile machine code that will execute fast and with minimal memory overhead, taking care of memory locality stuff (and thus cache optimization) if the same array occurs several times in your expression,
uses all cores of your dual or quad core CPU,
is an extension to numpy, not an alternative.
For medium and large sized arrays, it is faster than numpy alone.
Take a look at the web page given above; there are examples that will help you understand whether numexpr is for you.
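A minimal example of the expression mentioned above:

import numpy as np
import numexpr as ne

a = np.random.rand(10000000)
b = np.random.rand(10000000)
result = ne.evaluate("a**2 + b**2 + 2*a*b")   # one multithreaded pass, few temporaries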
On top of everything said in the other answers, numpy's ufunc aggregations are worth knowing: reduce when only the final result matters, and accumulate when we'd like to keep all the intermediate results of the computation:
Aggregates
For binary ufuncs, there are some interesting aggregates that can be computed directly from the object. For example, if we'd like to reduce an array with a particular operation, we can use the reduce method of any ufunc. A reduce repeatedly applies a given operation to the elements of an array until only a single result remains.
For example, calling reduce on the add ufunc returns the sum of all elements in the array:
import numpy as np
x = np.arange(1, 6)
np.add.reduce(x) # Outputs 15
Similarly, calling reduce on the multiply ufunc results in the product of all array elements:
np.multiply.reduce(x) # Outputs 120
Accumulate
If we'd like to store all the intermediate results of the computation, we can instead use accumulate:
np.add.accumulate(x) # Outputs array([ 1, 3, 6, 10, 15], dtype=int32)
np.multiply.accumulate(x) # Outputs array([ 1, 2, 6, 24, 120], dtype=int32)
Using these numpy operations wisely while performing many intermediate operations on one or more large NumPy arrays can give you great results without the use of any additional libraries.
I tried numpy.zeros((100000, 100000)) and it returned "array is too big".
Response to comments:
1) I could create a 10k x 10k matrix, but not 100k x 100k or 1 mil x 1 mil.
2) The matrix is not sparse.
We can do simple maths to find out. A 1 million by 1 million matrix has 1,000,000,000,000 elements. If each element takes up 4 bytes, it would require 4,000,000,000,000 bytes of memory, which is about 3.64 tebibytes.
There are also chances that a given implementation of Python uses more than that for a single number. For instance, just the leap from a float to a double means you'll need 7.28 tebibytes instead. (There are also chances that Python stores the number on the heap and all you get is a pointer to it, approximately doubling the footprint, without even taking metadata into account - but that's slippery ground; I'm always wrong when I talk about Python internals, so let's not dig into it too much.)
I suppose numpy doesn't have a hardcoded limit, but if your system doesn't have that much free memory, there isn't really anything you can do.
Does your matrix have a lot of zero entries? I suspect it does; few people do dense problems that large.
You can easily do that with a sparse matrix. SciPy has a good set built in. http://docs.scipy.org/doc/scipy/reference/sparse.html
The space required by a sparse matrix grows with the number of nonzero elements, not the dimensions.
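A tiny sketch with scipy.sparse; the DOK/CSR combination here is just one reasonable choice:

from scipy.sparse import dok_matrix

m = dok_matrix((1000000, 1000000))   # costs essentially nothing until entries are set
m[0, 0] = 1.0
m[123456, 654321] = 2.5
csr = m.tocsr()                      # convert to CSR for fast arithmetic
print(csr.nnz)                       # 2 stored values, not 10**12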
Your system probably won't have enough memory to store the matrix in memory, but nowadays you might well have enough free disk space. In that case, numpy.memmap would allow you to have the array stored on disk, but appear as if it resides in memory.
However, it's probably best to rethink the problem. Do you really need a matrix this large? Any computations involving it will probably be infeasibly slow, and need to be done blockwise.