I have a while loop that collects data from a microphone (replaced here by np.random.random() to make the example reproducible). I do some operations on it; let's say I take abs().mean() here, so my output is a one-dimensional array.
This loop is going to run for a LONG time (e.g., once a second for a week) and I am wondering about my options for saving the data. My main concerns are acceptable write performance and portability of the result (e.g., .csv beats .npy).
The simple way: just append values to a .txt file. Could that be replaced by a .csv.gz, maybe using np.savetxt()? Would it be worth it? (A rough sketch of this variant follows this list.)
The hdf5 way: this should be nicer, but reading the whole dataset back just to append to it doesn't seem like good practice, nor faster than dumping into a text file. Is there another way to append to HDF5 files?
The npy way (code not shown): I could save this into a .npy file, but I would rather keep it portable by using a format that can be read from any program.
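A rough, untested sketch of the gzipped-text variant (np.savetxt accepts an open file handle, and gzip.open in append mode adds a new gzip member each time, which standard readers decompress transparently):

import gzip

import numpy as np

amplitudes = [0.123, 0.456, 0.789]  # stand-in for the collected values

# "at" = append, text mode; each write appends a new member to the .gz file
with gzip.open("amplitudes.csv.gz", "at") as f:
    np.savetxt(f, np.asarray(amplitudes), fmt="%.6f")

And here is the full loop I currently have: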
from collections import deque

import numpy as np
import h5py

# Assumed values so the snippet runs on its own; in the real script one
# sample arrives per second, so the two constants coincide.
save_interval_sec = 10
save_interval = save_interval_sec

amplitudes = deque(maxlen=save_interval_sec)
n_iterations = 0  # for the debug stop at the bottom of the loop

# Read from the microphone in a continuous stream
while True:
    data = np.random.random(100)
    amplitude = np.abs(data).mean()
    print(amplitude, end="\r")
    amplitudes.append(amplitude)

    # Option 1: append the amplitudes to a text file every n iterations
    if len(amplitudes) == save_interval:
        with open("amplitudes.txt", "a") as f:
            for amp in amplitudes:
                f.write(str(amp) + "\n")
        amplitudes.clear()

    # Option 2: append the amplitudes to an HDF5 file every n iterations
    # (shown side by side with option 1 here; in practice only one would run)
    if len(amplitudes) == save_interval:
        # Convert the deque to a NumPy array
        amplitudes_array = np.array(amplitudes)
        # Open an HDF5 file
        with h5py.File("amplitudes.h5", "a") as f:
            # Get the existing dataset, or create a new one if it doesn't exist
            dset = f.get("amplitudes")
            if dset is None:
                dset = f.create_dataset("amplitudes", data=amplitudes_array, dtype=np.float32,
                                        maxshape=(None,), chunks=True, compression="gzip")
            else:
                # Get the current size of the dataset
                current_size = dset.shape[0]
                # Resize the dataset to make room for the new data
                dset.resize((current_size + save_interval,))
                # Write the new data to the dataset
                dset[current_size:] = amplitudes_array
        # Clear the deque
        amplitudes.clear()

    # For debug only: stop after a few save cycles
    n_iterations += 1
    if n_iterations >= 3 * save_interval:
        break
Update
I get that the answer might depend a bit on the sampling frequency (once a second might be too slow to matter) and on the data dimensions (a single column might be too little). I guess I asked because anything can work here, and I always just dump to text; I am not sure where the breaking points are that tip the decision toward one method or the other.
Related
I have a script that loops through ~335k filenames, opens the FITS tables behind those filenames, performs a few operations on the tables, and writes the results to a file. In the beginning the loop runs relatively fast, but over time it consumes more and more RAM (and CPU resources, I guess) and also gets slower. I would like to know how I can improve the performance and make the code quicker. For example: is there a better way to write to the output file (open the output file once and do everything inside the with-open block, versus opening the file every time I want to write to it)? Is there a better way to loop? Can I free memory that I don't need anymore?
My script looks like this:
# spectral is a package for manipulating spectral data
from spectral import *
import numpy as np
from astropy.table import Table  # assumed import; Table.read(..., format='fits') matches astropy's Table

# list_of_fnames and df_jpas are assumed to be defined earlier in the script.

# I use this dictionary to cache functions that I don't want to regenerate
# each time I need them; generating them anew every time turned out to be
# more time consuming.
lam_resample_dic = {}

with open("/home/bla/Downloads/output.txt", "ab") as f:
    for ind, fname in enumerate(list_of_fnames):
        data_s = Table.read('/home/nestor/Downloads/all_eBoss_QSO/' + fname, format='fits')
        # lam_str_identifier is just the dict key I need for finding the
        # corresponding BandResampler function from below
        lam_str_identifier = ''.join([str(x) for x in data_s['LOGLAM'].data.astype(str)])
        if lam_str_identifier not in lam_resample_dic:
            # BandResampler is the function I avoid rebuilding every time;
            # I only build it when lam_str_identifier indicates a new, unique set of data
            resample = BandResampler(centers1=10**data_s['LOGLAM'],
                                     centers2=df_jpas["Filter.wavelength"].values,
                                     fwhm2=df_jpas["Filter.width"].values)
            lam_resample_dic[lam_str_identifier] = resample
            photo_spec = np.around(resample(data_s['FLUX']), 4)
        else:
            photo_spec = np.around(lam_resample_dic[lam_str_identifier](data_s['FLUX']), 4)
        np.savetxt(f, [photo_spec], delimiter=',', fmt='%1.4f')
        # this is just to keep track of the progress of the loop
        if ind % 1000 == 0:
            print('num of files processed so far:', ind)
Thanks for any suggestions!
I have thousands of CSV files that I would like to append together into one big numpy array. The problem is that the numpy array would be much bigger than my RAM. Is there a way of writing a bit at a time to disk without having the entire array in RAM?
Also, is there a way of reading only a specific part of the array from disk at a time?
When working with numpy and large arrays, there are several approaches depending on what you need to do with that data.
The simplest answer is to use less data. If your data has lots of repeating elements, it is often possible to use a sparse array from scipy because the two libraries are heavily integrated.
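For instance, a minimal sparse-array sketch (the shapes and values are made up) that builds the matrix directly from its non-zero entries, so the dense version never has to exist in memory:

import numpy as np
from scipy import sparse

# (row, col, value) triplets of the non-zero entries
rows = np.array([0, 2, 5000])
cols = np.array([1, 3, 4999])
vals = np.array([1.5, 2.0, 3.25])

# COO is convenient for construction; CSR is efficient for arithmetic and slicing
mat = sparse.coo_matrix((vals, (rows, cols)), shape=(10000, 10000)).tocsr()

print(mat.nnz, "stored values for a", mat.shape, "matrix")
print(mat[2, 3])  # element access works much like a 2-D array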
Another answer (IMO the correct solution to your problem) is to use a memory-mapped array. This lets numpy load only the necessary parts into RAM when needed and leave the rest on disk. The files containing the data can be simple binary files created using any number of methods, but the built-in Python module that would handle this is struct. Appending more data is as simple as opening the file in append mode and writing more bytes. Make sure that any references to the memory-mapped array are re-created any time more data is written to the file, so the information stays fresh.
Finally, there is compression. Numpy can compress arrays with savez_compressed, which can then be opened with numpy.load. Importantly, compressed numpy files cannot be memory-mapped and must be loaded into memory entirely. Loading one column at a time may get you under the threshold, but this could similarly be applied to the other methods to reduce memory usage. Numpy's built-in compression will only save disk space, not memory. There may be other libraries that perform some sort of streaming compression, but that is beyond the scope of my answer.
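A minimal save/load sketch for the compressed route (file and key names are just examples):

import numpy as np

arr = np.random.rand(1000, 100)

# each keyword becomes a named array inside the .npz archive
np.savez_compressed("columns.npz", col=arr)

# loading decompresses the requested array fully into memory
with np.load("columns.npz") as archive:
    restored = archive["col"]

print(np.allclose(arr, restored))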
Here is an example of putting binary data into a file then opening it as a memory-mapped array:
import numpy as np

# open a file for data of a single column
with open('column_data.dat', 'wb') as f:
    # for 1024 "csv files"
    for _ in range(1024):
        csv_data = np.random.rand(1024).astype(np.float64)  # represents one column of data
        f.write(csv_data.tobytes())

# open the array as a memory-mapped file
column_mmap = np.memmap('column_data.dat', dtype=np.float64)

# read some data
print(np.mean(column_mmap[0:1024]))

# write some data
column_mmap[0:512] = .5

# Deletion closes the memory-mapped file and flushes changes to disk.
# del isn't strictly needed, as Python will garbage-collect objects that are no
# longer accessible. If, for example, you intend to read the entire array,
# you will need to periodically make sure the array gets deleted and re-created,
# or the entire thing will end up in memory again. This could be done with a
# function that loads and operates on part of the array; when the function
# returns and the memory-mapped array local to the function goes out of scope,
# it will be garbage collected. Calling such a function would not cause a
# build-up of memory usage.
del column_mmap

# write some more data to the array (not while the mmap is open)
with open('column_data.dat', 'ab') as f:
    # for 1024 more "csv files"
    for _ in range(1024):
        csv_data = np.random.rand(1024).astype(np.float64)  # represents one column of data
        f.write(csv_data.tobytes())
I am new to using HDF5 files and I am trying to read files with shapes of (20670, 224, 224, 3). Whenever I try to store the results from the HDF5 file in a list or another data structure, it either takes so long that I abort the execution or it crashes my computer. I need to be able to read 3 sets of HDF5 files, use their data, manipulate it, use it to train a CNN model, and make predictions.
Any help for reading and using these large HDF5 files would be greatly appreciated.
Currently this is how I am reading the hdf5 file:
import os
import h5py

db = h5py.File(os.getcwd() + "/Results/Training_Dataset.hdf5", "r")
training_db = list(db['data'])
Crashes probably mean you are running out of memory. As Vignesh Pillay suggested, I would try chunking the data and working on a small piece of it at a time. If you are using the pandas method read_hdf, you can use the iterator and chunksize parameters to control the chunking:
import pandas as pd

data_iter = pd.read_hdf('/tmp/test.hdf', key='test_key', iterator=True, chunksize=100)

for chunk in data_iter:
    # train cnn on chunk here
    print(chunk.shape)
Note that this requires the HDF5 file to have been written in table format.
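For reference, a minimal sketch of writing such a file in table format (file name, key, and data are just examples); the default fixed format cannot be read back in chunks:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(1000, 4), columns=list("abcd"))

# format="table" stores the data as a PyTables table, which is what allows
# read_hdf(..., iterator=True, chunksize=...) to stream it back in pieces
df.to_hdf("/tmp/test.hdf", key="test_key", format="table", mode="w")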
My answer was updated 2020-08-03 to reflect the code you added to your question.
As @Tober noted, you are running out of memory. Reading a dataset of shape (20670, 224, 224, 3) into a list produces roughly 3.1 billion values. If you read 3 image sets, it will require even more RAM.
I assume this is image data (maybe 20670 images of shape (224, 224, 3))?
If so, you can read the data in slices with both h5py and tables (PyTables).
This will return the data as a NumPy array, which you can use directly (no need to manipulate into a different data structure).
Basic process would look like this:
with h5py.File(os.getcwd() + "/Results/Training_Dataset.hdf5", 'r') as db:
    training_db = db['data']
    # loop to get images 1 by 1
    for icnt in range(20670):
        image_arr = training_db[icnt, :, :, :]
        # then do something with the image
You could also read multiple images by setting the first index to a range (say icnt:icnt+100) and then handling the looping appropriately, as sketched below.
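A minimal sketch of that batched variant (the batch size of 100 is only illustrative):

import os
import h5py

batch_size = 100  # illustrative; tune to the available memory

with h5py.File(os.getcwd() + "/Results/Training_Dataset.hdf5", 'r') as db:
    training_db = db['data']
    n_images = training_db.shape[0]
    for start in range(0, n_images, batch_size):
        stop = min(start + batch_size, n_images)
        batch = training_db[start:stop, :, :, :]  # NumPy array of shape (stop-start, 224, 224, 3)
        # feed `batch` to the model here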
Your problem arises because you are running out of memory, so Virtual Datasets come in handy when dealing with large datasets like yours. Virtual datasets allow a number of real datasets to be mapped together into a single, sliceable dataset via an interface layer. You can read more about them here: https://docs.h5py.org/en/stable/vds.html
I would recommend starting with one file at a time. First, create a virtual dataset file from your existing data, like this:
import os
import h5py
import numpy as np

with h5py.File(os.getcwd() + "/Results/Training_Dataset.hdf5", 'r') as db:
    data_shape = db['data'].shape
    layout = h5py.VirtualLayout(shape=data_shape, dtype=np.uint8)
    vsource = h5py.VirtualSource(db['data'])
    layout[...] = vsource  # map the source dataset into the layout
    with h5py.File(os.getcwd() + "/virtual_training_dataset.hdf5", 'w', libver='latest') as file:
        file.create_virtual_dataset('data', layout=layout, fillvalue=0)
This will create a virtual dataset of your existing training data. Now, if you want to manipulate your data, you should open the file in r+ mode, like this:
with h5py.File(os.getcwd() + "/virtual_training_dataset.hdf5", 'r+', libver='latest') as file:
    # Do whatever manipulation you want to do here
    pass
One more piece of advice: make sure the indices you use while slicing are of int dtype, otherwise you will get an error.
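A minimal sketch of slicing the virtual dataset (the dataset name 'data' comes from the code above; the index values are only illustrative):

import os
import h5py
import numpy as np

with h5py.File(os.getcwd() + "/virtual_training_dataset.hdf5", 'r+') as file:
    dset = file['data']
    batch = dset[0:100]            # plain Python ints are fine
    idx = int(np.floor(10.7))      # cast explicitly if an index was computed as a float
    one_item = dset[idx]           # a float index here would raise an error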
I need to read large (15+ GB) NetCDF files into a program. Each file holds a 3D variable (e.g., time as the record dimension, with the data laid out as latitudes by longitudes).
I'm processing the data in a three-level nested loop, checking whether each element of the NetCDF passes a certain criterion. For example:
from netCDF4 import Dataset
import numpy as np

File = Dataset('Somebigfile.nc', 'r')
Data = File.variables['Wind'][:]

Getdimensions = np.shape(Data)
Time = Getdimensions[0]
Latdim = Getdimensions[1]
Longdim = Getdimensions[2]

for t in range(0, Time):
    for i in range(0, Latdim):
        for j in range(0, Longdim):
            if Data[t, i, j] > Somethreshold:
                pass  # Do something
Is there any way I can read the NetCDF file one time record at a time, reducing the memory usage hugely? Any help is greatly appreciated.
I know of the NCO operators but would prefer not to use them to break up the files before running the script.
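For reference, a minimal sketch of what reading one time record at a time might look like, slicing the variable directly instead of loading it with [:] (Somethreshold as above):

from netCDF4 import Dataset

with Dataset('Somebigfile.nc', 'r') as nc_file:
    wind = nc_file.variables['Wind']   # just a handle; no data is read yet
    n_time = wind.shape[0]
    for t in range(n_time):
        slab = wind[t, :, :]           # reads a single (lat, lon) slice into memory
        mask = slab > Somethreshold    # boolean array of the cells that pass
        # do something with `mask` (or with slab[mask]) for this time step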
It sounds like you've already settled on a solution, but I'll throw out a much more elegant and vectorized (likely faster) solution that uses xarray and dask. Your nested for loop is going to be very inefficient. Combining xarray and dask, you can work on the data in your file incrementally in a semi-vectorized manner.
Since your Do something step isn't all that specific, you'll have to extrapolate from my example.
import xarray as xr
# xarray will open your file but doesn't load in any data until you ask for it
# dask handles the chunking and memory management for you
# chunk size can be optimized for your specific dataset.
ds = xr.open_dataset('Somebigfile.nc', chunks={'time': 100})
# mask out values below the threshold
da_thresh = ds['Wind'].where(ds['Wind'] > Somethreshold)
# Now just operate on the values greater than your threshold
do_something(da_thresh)
Xarray/Dask docs: http://xarray.pydata.org/en/stable/dask.html
I have around 4 TB of MERIS time series data, which comes in netCDF format.
So I have a lot of netCDF files, each containing several 'variables'.
The netCDF format is new to me, and although I've read a lot about netCDF processing, I don't have a clear idea of how to do it. The question 'Combining a large amount of netCDF files' deals somewhat with my problem, but I did not get far with it. My approach was to first mosaic, then stack, and finally take the mean of every pixel.
One file contains 32 variables. Additionally, here's the ncdump output of one .nc file for one day:
http://www.filedropper.com/ncdumpoutput
I managed to read the files, extract the variable I want (variable #32), and put the results into a list using the following code:
import netCDF4 as nc

# files_in is assumed to be the list of .nc file paths, defined elsewhere
l = list()
for i in files_in:
    # read netCDF file
    dset = nc.Dataset(i, mode='r')
    # extract the variable
    var = dset.variables['vegetation_index_mean'][:]
    # collect all loop outputs in a list
    l.append(var)
    # close netCDF file
    dset.close()
The list now contains 24 masked arrays for different locations on the same date.
Every time I try to print the contents of the list, Spyder freezes; every command I run afterwards first hangs for about five seconds before starting.
My goal is to do a time series analysis for a specific time frame (every date is stored in a single .nc file). So my plan was to mosaic (is this possible?) the variables in the list (treating them as raster bands), process additional dates, and take the mean for every pixel (1800 x 1800).
Maybe my whole approach is wrong? Can I treat these 'variables' like raster bands?
I'm not sure whether the following answer responds to your needs, as this procedure is designed to process time series, is pretty manual, and furthermore you have 4 TB of data...
So I apologize in advance if this doesn't help.
This is for Python 2.7:
First import all the modules needed:
import tkFileDialog
from netCDF4 import Dataset
import matplotlib.pyplot as plt
Second, parse multiple nc files:
n = []
filename = {}
filename = tkFileDialog.askopenfilenames()
filename = list(filename)
n = len(filename)
Third, read the nc files and organize the data and metadata in dictionaries using a loop:
wtr_tem = {}  # empty dict for the sea water temperature variable
fh = {}       # empty dict for the nc file handles
vars = {}     # empty dict for the variable lists of each nc file

for i in range(n):
    filename[i] = filename[i].decode('unicode_escape').encode('ascii', 'ignore')  # remove unicode in order to execute the following commands
    filename1 = ''.join(filename[i])  # converts list to string
    fh[i] = Dataset(filename1, mode='r')  # create the file handle
    vars[i] = fh[i].variables.keys()  # returns a list with the variables of the file
    wtr_tem[i] = fh[i].variables['WTR_TEM']
    # plot each variable in a separate figure
    plt.plot(wtr_tem[i], 'r-')
    plt.xlabel(fh[i].title)  # add a specific title from each nc file
    plt.show()
I hope this helps somebody.