Load .npy file with np.load progress bar - python

I have a really large .npy file (previously saved with np.save) and I am loading it with:
np.load(open('file.npy'))
Is there any way to see the progress of the loading process? I know tqdm and some other libraries for monitoring progress, but I don't know how to use them for this problem.
Thank you!

As far as I am aware, np.load does not provide any callbacks or hooks to monitor progress. However, there is a workaround that may work: np.load can open the file as a memory-mapped file, which means the data stays on disk and is loaded into memory only on demand. We can abuse this machinery to manually copy the data from the memory-mapped file into actual memory using a loop whose progress can be monitored.
Here is an example with a crude progress monitor:
import numpy as np
x = np.random.randn(8096, 4096)
np.save('file.npy', x)
blocksize = 1024 # tune this for performance/granularity
try:
    mmap = np.load('file.npy', mmap_mode='r')
    y = np.empty_like(mmap)
    n_blocks = int(np.ceil(mmap.shape[0] / blocksize))
    for b in range(n_blocks):
        print('progress: {}/{}'.format(b, n_blocks))  # use any progress indicator
        y[b*blocksize : (b+1)*blocksize] = mmap[b*blocksize : (b+1)*blocksize]
finally:
    del mmap  # make sure file is closed again

assert np.all(y == x)
Plugging any progress-bar library into the loop should be straightforward.
I was unable to test this with exceptionally large arrays due to memory constraints, so I can't really tell if this approach has any performance issues.
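For example, here is a minimal sketch of the same copy loop wrapped in tqdm (the question mentions tqdm; the file and blocksize come from the snippet above):

from tqdm import tqdm

mmap = np.load('file.npy', mmap_mode='r')
y = np.empty_like(mmap)
n_blocks = int(np.ceil(mmap.shape[0] / blocksize))
try:
    for b in tqdm(range(n_blocks)):  # tqdm draws the progress bar
        y[b*blocksize : (b+1)*blocksize] = mmap[b*blocksize : (b+1)*blocksize]
finally:
    del mmap  # make sure file is closed again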

Related

What's an intelligent way of loading a compressed array completely from disk into memory - also (identically) compressed?

I am experimenting with a 3-dimensional zarr-array, stored on disk:
Name: /data
Type: zarr.core.Array
Data type: int16
Shape: (102174, 1100, 900)
Chunk shape: (12, 220, 180)
Order: C
Read-only: True
Compressor: Blosc(cname='zstd', clevel=3, shuffle=BITSHUFFLE, blocksize=0)
Store type: zarr.storage.DirectoryStore
No. bytes: 202304520000 (188.4G)
No. bytes stored: 12224487305 (11.4G)
Storage ratio: 16.5
Chunks initialized: 212875/212875
As I understand it, zarr-arrays can also reside in memory - compressed, as if they were on disk. So I thought why not try to load the entire thing into RAM on a machine with 32 GByte memory. Compressed, the dataset would require approximately 50% of RAM. Uncompressed, it would require about 6 times more RAM than available.
Preparation:
import os
import zarr
from numcodecs import Blosc
import tqdm
zpath = '...' # path to zarr data folder
disk_array = zarr.open(zpath, mode = 'r')['data']
c = Blosc(cname = 'zstd', clevel=3, shuffle = Blosc.BITSHUFFLE)
memory_array = zarr.zeros(
    disk_array.shape, chunks = disk_array.chunks,
    dtype = disk_array.dtype, compressor = c
)
The following experiment fails almost immediately with an out of memory error:
memory_array[:, :, :] = disk_array[:, :, :]
As I understand it, disk_array[:, :, :] will try to create an uncompressed, full-size numpy array, which will obviously fail.
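For reference, the fully decompressed size follows directly from the shape and dtype reported above:

import numpy as np
# 102174 * 1100 * 900 int16 elements at 2 bytes each
print(102174 * 1100 * 900 * np.dtype('int16').itemsize)  # 202304520000 bytes, i.e. the 188.4G reported by zarr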
Second attempt, which works but is agonizingly slow:
chunk_lines = disk_array.chunks[0]
chunk_number = disk_array.shape[0] // disk_array.chunks[0]
chunk_remain = disk_array.shape[0] % disk_array.chunks[0] # unhandled ...
for chunk in tqdm.trange(chunk_number):
    chunk_slice = slice(chunk * chunk_lines, (chunk + 1) * chunk_lines)
    memory_array[chunk_slice, :, :] = disk_array[chunk_slice, :, :]
Here, I am trying to read a certain number of chunks at a time and put them into my in-memory array. It works, but it is about 6 to 7 times slower than it took to write this thing to disk in the first place. EDIT: Yes, it's still slow, but the factor of 6 to 7 turned out to be due to a disk issue.
What's an intelligent and fast way of achieving this? I'd guess, besides not using the right approach, my chunks might also be too small - but I am not sure.
EDIT: Shape, chunk size and compression are supposed to be identical for the on-disk array and the in-memory array. It should therefore be possible to eliminate the decompress-compress procedure in my example above.
I found zarr.convenience.copy but it is marked as an experimental feature, subject to further change.
Related issue on GitHub
You could conceivably try with fsspec.implementations.memory.MemoryFileSystem, which has a .make_mapper() method, with which you can make the kind of object expected by zarr.
However, this is really just a dict of path:io.BytesIO, which you could make yourself, if you want.
There are a couple of ways one might solve this issue today.
Use LRUStoreCache to cache (some) compressed data in memory.
Coerce your underlying store into a dict and use that as your store.
The first option might be appropriate if you only want some frequently used data in memory. How much you load into memory is something you can configure, so this could be the whole array; data is only cached on demand, which may be useful for you.
The second option just creates a new in-memory copy of the array by pulling all of the compressed data from disk. The one downside is that if you intend to write back to disk, you will need to do so manually, but it is not too difficult. The update method is pretty handy for facilitating this copying of data between different stores.
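A rough sketch of both options (assuming zarr v2; zpath and the 'data' key are taken from the question above, and the cache size is just an example, not a recommendation):

import zarr

zpath = '...'  # path to zarr data folder, as in the question

# Option 1: leave the data on disk, but cache up to ~16 GB of compressed chunks in RAM
store = zarr.DirectoryStore(zpath)
cache = zarr.LRUStoreCache(store, max_size=16 * 2**30)
cached_array = zarr.open(cache, mode='r')['data']

# Option 2: copy all compressed chunks into a plain dict and use that as the store
mem_store = {}
mem_store.update(zarr.DirectoryStore(zpath))  # copies compressed bytes only
mem_array = zarr.open(mem_store, mode='r')['data']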

numpy memmap memory usage - want to iterate once

Let's say I have some big matrix saved on disk. Storing it all in memory is not really feasible, so I use memmap to access it:
A = np.memmap(filename, dtype='float32', mode='r', shape=(3000000,162))
Now let's say I want to iterate over this matrix (not necessarily in an ordered fashion) such that each row will be accessed exactly once.
p = some_permutation_of_0_to_2999999()
I would like to do something like this:
start = 0
end = 3000000
num_rows_to_load_at_once = some_size_that_will_fit_in_memory()
while start < end:
    indices_to_access = p[start:start+num_rows_to_load_at_once]
    do_stuff_with(A[indices_to_access, :])
    start = min(end, start+num_rows_to_load_at_once)
As this process goes on, my computer becomes slower and slower, and my RAM and virtual memory usage explodes.
Is there some way to force np.memmap to use up to a certain amount of memory? (I know I won't need more than the amount of rows I'm planning to read at a time and that caching won't really help me since I'm accessing each row exactly once)
Alternatively, is there some other (generator-like) way to iterate over an np array in a custom order? I could write it manually using file.seek, but that happens to be much slower than the np.memmap implementation.
do_stuff_with() does not keep any reference to the array it receives so no "memory leaks" in that aspect
thanks
This has been an issue that I've been trying to deal with for a while. I work with large image datasets and numpy.memmap offers a convenient solution for working with these large sets.
However, as you've pointed out, if I need to access each frame (or row in your case) to perform some operation, RAM usage will max out eventually.
Fortunately, I recently found a solution that will allow you to iterate through the entire memmap array while capping the RAM usage.
Solution:
import numpy as np
# create a memmap array
input = np.memmap('input', dtype='uint16', shape=(10000,800,800), mode='w+')
# create a memmap array to store the output
output = np.memmap('output', dtype='uint16', shape=(10000,800,800), mode='w+')
def iterate_efficiently(input, output, chunk_size):
    # create an empty array to hold each chunk
    # the size of this array will determine the amount of RAM usage
    holder = np.zeros([chunk_size, 800, 800], dtype='uint16')

    # iterate through the input in chunks, add 5 to each chunk, and write to output
    for i in range(input.shape[0]):
        if i % chunk_size == 0:
            holder[:] = input[i:i+chunk_size]  # read in chunk from input
            holder += 5  # perform some operation
            output[i:i+chunk_size] = holder  # write chunk to output

def iterate_inefficiently(input, output):
    output[:] = input[:] + 5
Timing Results:
In [11]: %timeit iterate_efficiently(input,output,1000)
1 loop, best of 3: 1min 48s per loop
In [12]: %timeit iterate_inefficiently(input,output)
1 loop, best of 3: 2min 22s per loop
The size of the array on disk is ~12GB. Using the iterate_efficiently function keeps the memory usage to 1.28GB whereas the iterate_inefficiently function eventually reaches 12GB in RAM.
This was tested on Mac OS.
I've been experimenting with this problem for a couple of days now, and it appears there are two ways to control memory consumption using np.memmap. The first is reliable, while the second requires some testing and will be OS dependent.
Option 1 - reconstruct the memory map with each read / write:
def MoveMMapNPArray(data, output_filename):
    CHUNK_SIZE = 4096
    for idx in range(0, data.shape[1], CHUNK_SIZE):
        x = np.memmap(data.filename, dtype=data.dtype, mode='r', shape=data.shape, order='F')
        y = np.memmap(output_filename, dtype=data.dtype, mode='r+', shape=data.shape, order='F')
        end = min(idx + CHUNK_SIZE, data.shape[1])
        y[:, idx:end] = x[:, idx:end]
Where data is of type np.memmap. Discarding the memmap object with each read keeps the array from being collected into memory and keeps memory consumption very low if the chunk size is small. It likely introduces some CPU overhead, but this was found to be small on my setup (MacOS).
Option 2 - construct the mmap buffer yourself and provide memory advice
If you look at the np.memmap source code here, you can see that it is relatively simple to create your own memory-mapped numpy array. Specifically, with the snippet:
mm = mmap.mmap(fid.fileno(), bytes, access=acc, offset=start)
mmap_np_array = ndarray.__new__(subtype, shape, dtype=descr, buffer=mm, offset=array_offset, order=order)
Note this python mmap instance is stored as the np.memmap's private _mmap attribute.
With access to the python mmap object, and python 3.8, you can use its madvise method, described here.
This allows you to advise the OS to free memory where available. The various madvise constants are described here for linux, with some generic cross platform options specified.
The MADV_DONTDUMP constant looks promising but I haven't tested memory consumption with it like I have for option 1.
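For illustration, here is an untested sketch of what calling madvise through the private _mmap attribute could look like (my own assumptions: Linux, Python 3.8+, and MADV_DONTNEED rather than MADV_DONTDUMP; the 'input' file and shape are reused from the earlier answer):

import mmap
import numpy as np

A = np.memmap('input', dtype='uint16', shape=(10000, 800, 800), mode='r')
CHUNK = 1000
for i in range(0, A.shape[0], CHUNK):
    block = np.array(A[i:i+CHUNK])       # copy the chunk into regular memory
    block += 5                           # perform some operation on the copy
    A._mmap.madvise(mmap.MADV_DONTNEED)  # advise the OS that cached pages may be dropped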

No space left while using Multiprocessing.Array in shared memory

I am using the multiprocessing functions of Python to run my code in parallel on a machine with roughly 500GB of RAM. To share some arrays between the different workers I am creating an Array object:
N = 150
ndata = 10000
sigma = 3
ddim = 3
shared_data_base = multiprocessing.Array(ctypes.c_double, ndata*N*N*ddim*sigma*sigma)
shared_data = np.ctypeslib.as_array(shared_data_base.get_obj())
shared_data = shared_data.reshape(-1, N, N, ddim*sigma*sigma)
This works perfectly for sigma=1, but for sigma=3 one of the hard drives of the machine slowly fills up until there is no free space left, and then the process fails with this exception:
OSError: [Errno 28] No space left on device
Now I've got 2 questions:
Why does this code even write anything to disk? Why isn't it all stored in memory?
How can I solve this problem? Can I make Python store it entirely in RAM without writing it to the HDD? Or can I change the HDD on which this array is written?
EDIT: I found something online which suggests that the array is stored in "shared memory". But the /dev/shm device has plenty more free space than /dev/sda1, which is the one filled up by the code above.
Here is the (relevant part of the) strace log of this code.
Edit #2: I think that I have found a workaround for this problem. By looking at the source I found that multiprocessing tries to create a temporary file in a directory which is determined by using
process.current_process()._config.get('tempdir')
Setting this value manually at the beginning of the script
from multiprocessing import process
process.current_process()._config['tempdir'] = '/data/tmp/'
seems to solve this issue. But I think that this is not the best way to do it. So: are there any other suggestions on how to handle it?
These data are larger than 500GB. Just shared_data_base would be 826.2GB on my machine by sys.getsizeof() and 1506.6GB by pympler.asizeof.asizeof(). Even if they were only 500GB, your machine needs some of that memory in order to run. This is why the data are going to disk.
import ctypes
from pympler.asizeof import asizeof
import sys
N = 150
ndata = 10000
sigma = 3
ddim = 3
print(sys.getsizeof(ctypes.c_double(1.0)) * ndata*N*N*ddim*sigma*sigma)
print(asizeof(ctypes.c_double(1.0)) * ndata*N*N*ddim*sigma*sigma)
Note that on my machine (Debian 9), /tmp is the location that fills. If you find that you must use disk, be certain that the location used has enough available space; typically /tmp isn't a large partition.
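If you do end up on disk, one possible way (an assumption on my part, not verified on the asker's setup) to steer the backing files to a larger partition is to set TMPDIR before the Array is created, since multiprocessing obtains its temporary directory through tempfile:

import os
import ctypes
import multiprocessing

# assumption: tempfile honours TMPDIR if it is set before the first temporary
# file is created, and multiprocessing uses tempfile to pick the directory
# for the file backing the shared Array
os.environ['TMPDIR'] = '/data/tmp'

shared_data_base = multiprocessing.Array(ctypes.c_double, 10**6)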

Not sure why memory usage seen in `top` is unintuitive

I am working with a simple numpy.dtype array and I am using the numpy.savez and numpy.load methods to save the array to disk and read it back. During both storing and loading of the array, the memory usage shown by 'top' doesn't appear to be what I would expect. Below is a sample code that demonstrates this.
import sys
import numpy as np
import time
RouteEntryNP = np.dtype([('final', 'u1'), ('prefix_len', 'u1'),
                         ('output_idx', '>u4'), ('children', 'O')])

a = np.zeros(1000000, RouteEntryNP)
time.sleep(10)
print(sys.getsizeof(a))

with open('test.np.npz', 'wb+') as f:
    np.savez(f, a=a)

while True:
    time.sleep(10)
The program starts with a memory usage of about 25M, which is reasonably close to intuition: the actual size of the members of RouteEntryNP is 14 bytes, so one million entries should take roughly that order of memory. But as the data is being written to the file, the memory usage shoots up to approximately 250M.
A similar behavior is observed when loading the file; in this case the memory usage shoots up to approximately 160M, and an explicit gc.collect() doesn't seem to help either. The way I am reading the file is as follows.
import numpy as np
np.load('test.np.npz')
import gc
gc.collect()
The memory usage stays at around 160M. Not sure why this is happening. Is there a way to 'reclaim' this memory?

Memory leak in Python-script using VTK

I am currently using a Python script to process information stored in the EnSight Gold format. My Python (2.6) script uses VTK (5.10.0) to process the file, where I use the vtkEnSightGoldReader for reading the data and loop over time steps. In principle this works fine for smaller datasets; however, for large datasets (GBs), I see the memory usage (via top) increase over time while the process is running. This filling of memory is slow, but in some cases problems are inevitable.
The following script is the minimal productive script that I reduced my issue to.
import vtk
reader = vtk.vtkEnSightGoldReader()
reader.SetCaseFileName("case.case")
reader.Update()
# Get time values
timeset=reader.GetTimeSets()
time=timeset.GetItem(0)
timesteps=time.GetSize()
#reader.ReleaseDataFlagOn()
for j in range(timesteps):
    curTime = time.GetTuple(j)[0]
    print curTime
    reader.SetTimeValue(curTime)
    reader.Update()
    #reader.RemoveAllInputs()
My question is, how can I unload/replace the data that is stored in the memory, instead of using more memory continuously?
As you can see in my source code, I tried the member functions "RemoveAllInputs" and "ReleaseDataFlagOn", but they don't work or I used them in the wrong way. Unfortunately, I am not getting any closer to a solution.
Something else I tried is the DeepCopy() approach, which I found on the VTK website. However, it seems that this approach is not useful for me, because I get the memory issues even before calling GetOutput().
There is indeed a (minor) memory leak in the vtkEnSightGoldReader. It is the result of a collection object not being properly cleared, which becomes apparent only when processing very large datasets. Technically it is not a memory leak, since it gets properly cleared after a run.
This can only be solved by applying a patch to the VTK source and recompiling. I received the patch below from people at Kitware, so I would assume it was rolled out in later versions of VTK.
diff --git a/IO/vtkEnSightReader.cxx b/IO/vtkEnSightReader.cxx
index 68a9b8f..7ab8ddd 100644
--- a/IO/vtkEnSightReader.cxx
+++ b/IO/vtkEnSightReader.cxx
@@ -985,6 +985,8 @@ int vtkEnSightReader::ReadCaseFileTime(char* line)
   int timeSet, numTimeSteps, i, filenameNum, increment, lineRead;
   float timeStep;
+  this->TimeSetFileNameNumbers->RemoveAllItems();
+
   // found TIME section
   int firstTimeStep = 1;
