I ran into a problem!
Recently I ran into an I/O issue. The target and input data are stored in h5py files. Each target file is 2.6 GB and each input file is 10.2 GB. I have 5 input datasets and 5 target datasets in total.
I created a custom Dataset class for each h5py file and then use the data.ConcatDataset class to link all the datasets. The custom Dataset class is:
import os.path as pth

import h5py
import numpy as np
import torch
from torch.utils.data import Dataset

class MydataSet(Dataset):
    def __init__(self, indx=1, root_path='./xxx', tar_size=128, data_aug=True, train=True):
        self.train = train
        if self.train:
            self.in_file = pth.join(root_path, 'train', 'train_noisy_%d.h5' % indx)
            self.tar_file = pth.join(root_path, 'train', 'train_clean_%d.h5' % indx)
        else:
            self.in_file = pth.join(root_path, 'test', 'test_noisy.h5')
            self.tar_file = pth.join(root_path, 'test', 'test_clean.h5')
        # the 'core' driver loads the whole noisy file into memory on open
        self.h5f_n = h5py.File(self.in_file, 'r', driver='core')
        self.h5f_c = h5py.File(self.tar_file, 'r')
        self.keys_n = list(self.h5f_n.keys())
        self.keys_c = list(self.h5f_c.keys())
        # h5f_n.close()
        # h5f_c.close()
        self.tar_size = tar_size
        self.data_aug = data_aug

    def __len__(self):
        return len(self.keys_n)

    def __del__(self):
        self.h5f_n.close()
        self.h5f_c.close()

    def __getitem__(self, index):
        keyn = self.keys_n[index]
        keyc = self.keys_c[index]
        datan = np.array(self.h5f_n[keyn])
        datac = np.array(self.h5f_c[keyc])
        datan_tensor = torch.from_numpy(datan).unsqueeze(0)
        datac_tensor = torch.from_numpy(datac)
        if self.data_aug and np.random.randint(2, size=1)[0] == 1:  # horizontal flip
            datan_tensor = torch.flip(datan_tensor, dims=[2])  # c h w
            datac_tensor = torch.flip(datac_tensor, dims=[2])
        return datan_tensor, datac_tensor
Then I use dataset_train = data.ConcatDataset([MydataSet(indx=index, train=True) for index in range(1, 6)]) for training. When only 2-3 h5py files are used, the I/O speed is normal and everything goes right. However, when 5 files are used, the training speed gradually decreases (from 5 iterations/s to 1 iteration/s). I changed num_workers and the problem still exists.
Could anyone give me a solution? Should I merge several h5py files into a bigger one, or is there another method? Thanks in advance!
Improving performance requires timing benchmarks. To do that you need to identify potential bottlenecks and the associated scenarios. You said "with 2-3 files the I/O speed is normal" and "when 5 files are used, the training speed gradually decreases". So, is your performance issue I/O speed, or training speed? Do you know? If you don't know, you need to isolate and compare I/O performance and training performance separately for the 2 scenarios.
In other words, to measure I/O performance (only) you need to run the following tests (a minimal timing sketch is shown after these lists):
Time to read and concatenate 2-3 files,
Time to read and concatenate 5 files,
Copy the 5 files into 1, and time the read from the merged file,
Or, link the 5 files to 1 file, and time.
And to measure training speed (only) you need to compare performance for the following tests:
Merge 2-3 files, then read and train from the merged file.
Merge all 5 files, then read and train from merged file.
Or, link the 5 files to 1 file, then read and train from linked file.
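For reference, a minimal timing sketch for the read tests above, assuming all datasets sit at the root level of each file (the file names are placeholders):
import time

import h5py
import numpy as np

fnames = ['train_noisy_1.h5', 'train_noisy_2.h5', 'train_noisy_3.h5']  # placeholder names

t0 = time.time()
arrays = []
for fname in fnames:
    with h5py.File(fname, 'r') as h5f:
        for ds in h5f.keys():
            arrays.append(np.array(h5f[ds]))  # force a full read of every dataset
print(f'Read {len(arrays)} datasets from {len(fnames)} files in {time.time() - t0:.1f} s')
Repeat the same loop for the 5-file case and for the merged (or linked) file, and compare the totals.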
As noted in my comment, merging (or linking) multiple HDF5 files into one is easy if all datasets are at the root level and all dataset names are unique. I added the external link method because it might provide the same performance, without duplicating large data files.
Below is the code that shows both methods. Substitute your file names in the fnames list, and it should be ready to run. If your dataset names aren't unique, you will need to create unique names and assign them in h5fr.copy(), like this: h5fr.copy(h5fr[ds], h5fw, 'unique_dataset_name')
Code to merge or link files:
(comment/uncomment lines as appropriate)
import h5py

fnames = ['file_1.h5', 'file_2.h5', 'file_3.h5']
# consider changing the filename to 'linked_' when using links:
with h5py.File(f'merge_{len(fnames)}.h5', 'w') as h5fw:
    for fname in fnames:
        with h5py.File(fname, 'r') as h5fr:
            for ds in h5fr.keys():
                # To copy datasets into 1 file use:
                h5fr.copy(h5fr[ds], h5fw)
                # To link datasets to 1 file use:
                # h5fw[ds] = h5py.ExternalLink(fname, ds)
Related
Following suggestions in an SO post, I also found that PyTables append is exceptionally time efficient. However, in my case the output file (earray.h5) is huge. Is there a way to append the data such that the output file is not so huge? For example, in my case (see link below) a 13 GB input file (dset_1: 2.1E8 x 4 and dset_2: 2.1E8 x 4) gives a 197 GB output file with just one column (2.5E10 x 1). All elements are float64.
I want to reduce the output file size such that the execution speed of the script is not compromised and reading the output file is also efficient for later use. Can saving the data along columns, and not just rows, help? Any suggestions on this? Given below is an MWE.
Output and input files' details here
import h5py
import tables

# no. of chunks from dset-1 and dset-2 in inp.h5
loop_1 = 40
loop_2 = 20

# save to disk after these many rows
app_len = 10**6

# **********************************************
# Grabbing input.h5 file
# **********************************************
filename = 'inp.h5'
f2 = h5py.File(filename, 'r')
chunks1 = f2['dset_1']
chunks2 = f2['dset_2']
shape1, shape2 = chunks1.shape[0], chunks2.shape[0]

f1 = tables.open_file("table.h5", "w")
a = f1.create_earray(f1.root, "dataset_1", atom=tables.Float64Atom(), shape=(0, 4))

size1 = shape1 // loop_1
size2 = shape2 // loop_2

# ***************************************************
# Grabbing chunks to process and append data
# ***************************************************
for c in range(loop_1):
    h = c * size1
    # grab chunks from dset_1 of inp.h5
    chunk1 = chunks1[h:(h + size1)]
    for d in range(loop_2):
        g = d * size2
        chunk2 = chunks2[g:(g + size2)]  # grab chunks from dset_2 of inp.h5
        r1 = chunk1.shape[0]
        r2 = chunk2.shape[0]
        left, right = 0, 0
        for j in range(r1):  # grab col.2 values from dataset-1
            e1 = chunk1[j, 1]
            # ...Algebraic operations here to output a row containing 4 float64
            # ...append to a (earray) when no. of rows reach a million
        del chunk2
    del chunk1
f2.close()
I wrote the answer you are referencing. That is a simple example that "only" writes 1.5e6 rows. I didn't do anything to optimize performance for very large files. You are creating a very large file, but did not say how many rows (obviously way more than 10**6). Here are some suggestions based on comments in another thread.
Areas I recommend (3 related to PyTables code, and 2 based on external utilities):
PyTables code suggestions:
Enable compression when you create the file (add the filters= parameter to the create call). Start with tb.Filters(complevel=1).
Define the expectedrows= parameter in .create_earray() (per the PyTables docs, 'this will optimize the HDF5 B-Tree and amount of memory used'). The default value is set in tables/parameters.py (look for EXPECTED_ROWS_TABLE; it's only 10000 in my installation). I suggest you set this to a larger value if you are creating 10**6 (or more) rows.
There is a side benefit to setting expectedrows=: if you don't define chunkshape, 'a sensible value is calculated based on the expectedrows parameter'. Check the value used. This won't decrease the created file size, but will improve I/O performance. A short sketch combining these settings is shown below.
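For illustration, a minimal sketch combining the first two suggestions, applied to the EArray from the question (the expected row count is a placeholder estimate, not a measured value):
import tables as tb

EXPECTED_ROWS = 2_500_000_000  # placeholder estimate of the total rows to be appended

filters = tb.Filters(complevel=1)  # light zlib compression to start with
f1 = tb.open_file('table.h5', 'w', filters=filters)
a = f1.create_earray(f1.root, 'dataset_1',
                     atom=tb.Float64Atom(), shape=(0, 4),
                     expectedrows=EXPECTED_ROWS)
print(a.chunkshape)  # derived from expectedrows when chunkshape is not given
f1.close()
The print shows the chunkshape PyTables chose, which is worth checking against your append size.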
If you didn't use compression when you created the file, there are 2 methods to compress existing files:
External Utilities:
The PyTables utility ptrepack - run it against an HDF5 file to create a new file (useful to go from uncompressed to compressed, or vice versa). It is delivered with PyTables and runs on the command line.
The HDF5 utility h5repack - works similarly to ptrepack. It is delivered with the HDF5 installer from The HDF Group.
There are trade-offs with file compression: it reduces the file size, but increases access time (reduces I/O performance). I tend to keep files I open frequently uncompressed (for best I/O performance). Then, when done, I convert them to compressed format for long-term archiving. You can continue to work with them in compressed format (the API handles this cleanly).
I have an HDF5 output file from NASTRAN that contains mode shape data. I am trying to read it into Matlab and Python to check various post-processing techniques. The file in question is in the local directory for both of these tests. The file is semi-large at 1.2 GB, but certainly not that large in terms of HDF5 files I have read previously. There are 17567342 rows and 8 columns in the table I want to access. The first and last columns are integers; the middle 6 are floating-point numbers.
Matlab:
file = 'HDF5.h5';
hinfo = hdf5info(file);
% ... Find the dataset I want to extract
t = hdf5read(file, '/NASTRAN/RESULT/NODAL/EIGENVECTOR');
This last operation is extremely slow (can be measured in hours).
Python:
import tables
hfile = tables.open_file("HDF5.h5")
modetable = hfile.root.NASTRAN.RESULT.NODAL.EIGENVECTOR
data = modetable.read()
This last operation is basically instant. I can then access data as if it were a numpy array. I am clearly missing something very basic about what these commands are doing. I'm thinking it might have something to do with data conversion but I'm not sure. If I do type(data) I get back numpy.ndarray and type(data[0]) returns numpy.void.
What is the correct (i.e. speedy) way to read the dataset I want into Matlab?
Matt, are you still working on this problem?
I am not a Matlab guy, but I am familiar with the Nastran HDF5 file format. You are right; 1.2 GB is big, but not that big by today's standards.
You might be able to diagnose the Matlab performance bottleneck by running tests with different numbers of rows in your EIGENVECTOR dataset. To do that (without running a lot of Nastran jobs), I created some simple code to create an HDF5 file with a user-defined number of rows. It mimics the structure of the Nastran Eigenvector Result dataset. See below:
import tables as tb
import numpy as np

hfile = tb.open_file('SO_54300107.h5', 'w')

eigen_dtype = np.dtype([('ID', int), ('X', float), ('Y', float), ('Z', float),
                        ('RX', float), ('RY', float), ('RZ', float), ('DOMAIN_ID', int)])

fsize = 1000.0
isize = int(fsize)

recarr = np.recarray((isize,), dtype=eigen_dtype)

id_arr = np.arange(1, isize + 1)
dom_arr = np.ones((isize,), dtype=int)
arr = np.array(np.arange(fsize)) / fsize

recarr['ID'] = id_arr
recarr['X'] = arr
recarr['Y'] = arr
recarr['Z'] = arr
recarr['RX'] = arr
recarr['RY'] = arr
recarr['RZ'] = arr
recarr['DOMAIN_ID'] = dom_arr

modetable = hfile.create_table('/NASTRAN/RESULT/NODAL', 'EIGENVECTOR',
                               createparents=True, obj=recarr)

hfile.close()
Try running this with different values for fsize (the number of rows), then read the HDF5 file it creates in Matlab. Maybe you can find the point where performance noticeably degrades.
Matlab provides another HDF5 reader called h5read. Using the same basic approach, the amount of time taken to read the data was drastically reduced. In fact, hdf5read is listed for removal in a future version. Here is the same basic code with the preferred functions.
file = 'HDF5.h5';
hinfo = h5info(file);
% ... Find the dataset I want to extract
t = h5read(file, '/NASTRAN/RESULT/NODAL/EIGENVECTOR');
WHAT I WAS ABLE TO DO:
Currently I'm able to generate MFCCs for all files in a given folder and save them as follows:
import os

import librosa
import numpy as np

def gen_features(in_path, out_path):
    src = in_path + '/'
    output_path = out_path + '/'
    sr = 22050
    path_to_audios = [os.path.join(src, f) for f in os.listdir(src)]
    for audio in path_to_audios:
        audio_data = librosa.load(audio, sr=22050)[0]  # getting y
        mfcc_feature_list = librosa.feature.mfcc(y=audio_data, sr=sr)  # create mfcc features
        np.savetxt(blah blahblah, mfcc_feature_list, delimiter="\t")  # output path placeholder

gen_features('/home/data', 'home/data/features')
DIFFICULTY:
My input audio recordings are pretty long; each is at least 3-4 hours.
This program is very inefficient, as the file size after np.savetxt becomes pretty big: ~1.5 MB of text for 1 minute of audio. I plan to combine MFCCs with more features in the future, so the saved text file size will explode. I want to keep smaller 5-minute chunks for ease of processing.
WHAT I WANT TO DO:
Add one more parameter, len, to gen_features; it must specify the length of audio to be processed at a time.
So if the input audio abc.mp3 is 13 minutes long, and I specify len = 5 (meaning 5 minutes),
then MFCCs should be computed for [0.0, 5.0), [5.0, 10.0) and [10.0, 13.0], and they should be saved
as
mfcc_filename_chunk_1.csv
mfcc_filename_chunk_2.csv
mfcc_filename_chunk_3.csv
I want to do this for all files in that directory.
I want to achieve this using librosa.
I am unable to come up with any ideas on how to proceed.
Even better would be to compute this over overlapping intervals; for example, if len = 5 is passed,
then
chunk one should be over [0.0, 5.1]
chunk two should be over [5.0, 10.1]
chunk three should be over [10.0, 13.0]
(A rough sketch of the non-overlapping chunked approach is shown below.)
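For reference, a minimal sketch of the non-overlapping chunking described above, using librosa.load's offset and duration arguments; the helper name, chunk length and output locations are placeholder assumptions, not part of the original code:
import os

import librosa
import numpy as np

def gen_chunked_features(audio_path, out_dir, chunk_min=5, sr=22050):
    # Hypothetical helper: compute MFCCs over consecutive chunk_min-minute chunks.
    total_sec = librosa.get_duration(path=audio_path)  # use filename= on older librosa versions
    chunk_sec = chunk_min * 60
    base = os.path.splitext(os.path.basename(audio_path))[0]
    chunk_id = 1
    offset = 0.0
    while offset < total_sec:
        # librosa.load reads only chunk_sec seconds starting at offset
        y, _ = librosa.load(audio_path, sr=sr, offset=offset, duration=chunk_sec)
        mfcc = librosa.feature.mfcc(y=y, sr=sr)
        np.savetxt(os.path.join(out_dir, f'mfcc_{base}_chunk_{chunk_id}.csv'),
                   mfcc, delimiter='\t')
        offset += chunk_sec
        chunk_id += 1
Overlapping chunks would only change how the window is read (e.g., stepping offset by chunk_sec while loading chunk_sec + 6 seconds).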
The goal is to feed large datasets to Tensorflow. I came up with the following implementation. However, while HDF5 I/O is supposed to be very fast, my implementation is slow. Is this due to not using the chunks option? I do not seem to get the dimensions right for the chunks; should I see this as a third dimension, like (4096, 7, 1000) for a chunk size of 1000?
Please note, I could have simplified my code below by finding a solution for a single generator. However, I think the data/label combination is very common and useful for others.
I use the following function to create two generators, one for the data and one for the corresponding labels.
import io

import numpy as np

def read_chunks(file, dim, batch_size=batch_size):
    chunk = np.empty(dim,)
    current_size = 1
    # read input file line by line
    for line in file:
        current_size += 1
        # build chunk
        chunk = np.vstack((chunk, np.genfromtxt(io.BytesIO(line.encode()))))
        # reaches batch size
        if current_size == batch_size:
            yield chunk
            # reset counters
            current_size = 1
            chunk = np.empty(dim,)
Then I move the data and labels produced by these generators to HDF5.
import os

import h5py

def write_h5(data_gen, label_gen, out_file, batch_size, h5_batch_size, data_dtype, label_dtype):
    # remove existing file
    if os.path.isfile(out_file):
        os.remove(out_file)
    with h5py.File(out_file, 'a') as f:
        # create a dataset and labelset in the same file
        d = f.create_dataset('data', (batch_size, data_dim), maxshape=(None, data_dim), dtype=data_dtype)
        l = f.create_dataset('label', (batch_size, label_dim), maxshape=(None, label_dim), dtype=label_dtype)
        # use generators to fill both sets
        for data in data_gen:
            d.resize(d.shape[0] + batch_size, axis=0)
            d[-batch_size:] = data
            l.resize(l.shape[0] + batch_size, axis=0)
            l[-batch_size:] = next(label_gen)
With the following constants, I combined both functions like so:
batch_size = 4096
h5_batch_size = 1000
data_dim = 7    # [NUM_POINT, 9]
label_dim = 1   # [NUM_POINT]
data_dtype = 'float32'
label_dtype = 'uint8'

for data_file, label_file in data_label_files:
    print(data_file)
    with open(data_file, 'r') as data_f, open(label_file, 'r') as label_f:
        data_gen = read_chunks(data_f, dim=data_dim)
        label_gen = read_chunks(label_f, dim=label_dim)
        out_file = data_file[:-4] + '.h5'
        write_h5(data_gen, label_gen, out_file, batch_size, h5_batch_size, data_dtype, label_dtype)
The problem is not that HDF5 is slow. The problem is that you are reading a single line at a time using a Python loop, calling genfromtxt() once per line! That function is meant to read entire files. And then you use the anti-pattern of array = np.vstack((array, newstuff)) in the same loop.
In short, your performance problem starts here:
chunk = np.vstack((chunk, np.genfromtxt(io.BytesIO(line.encode()))))
You should just read the entire file at once. If you can't do that, read half of it (you can set a max number of lines to read each time, such as 1 million).
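A minimal sketch of that approach, reading each entire text file with np.loadtxt and writing it to HDF5 in one call (the file layout and dtypes follow the question; the function name is a placeholder):
import h5py
import numpy as np

def txt_to_h5(data_file, label_file, out_file, data_dtype='float32', label_dtype='uint8'):
    # read each whole text file in one call instead of line by line
    data = np.loadtxt(data_file, dtype=data_dtype)
    labels = np.loadtxt(label_file, dtype=label_dtype)
    if labels.ndim == 1:
        labels = labels.reshape(-1, 1)  # keep the (N, 1) label shape used above
    with h5py.File(out_file, 'w') as f:
        f.create_dataset('data', data=data)
        f.create_dataset('label', data=labels)
If the files do not fit in memory, recent NumPy versions let you pass max_rows to np.loadtxt and append to resizable datasets in slices, as in the original write_h5.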
From a list of word images with their transcriptions, I am trying to create and read sparse sequence labels (for tf.nn.ctc_loss) using a tf.train.slice_input_producer, avoiding
serializing pre-packaged training data to disk in TFRecord format,
the apparent limitations of tf.py_func,
any unnecessary or premature padding, and
reading the entire data set to RAM.
The main issue seems to be converting a string to the sequence of labels (a SparseTensor) needed for tf.nn.ctc_loss.
For example, with the character set in the (ordered) range [A-Z], I'd want to convert the text label string "BAD" to the sequence label class list [1,0,3].
Each example image I want to read contains the text as part of the filename, so it's straightforward to extract and do the conversion in straight-up Python, as sketched below. (If there's a way to do it within TensorFlow computations, I haven't found it.)
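For concreteness, a plain-Python sketch of that conversion, using the same charset as the code further down (the helper name is mine):
out_charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def text_to_labels(text):
    # "BAD" -> [1, 0, 3]
    return [out_charset.index(c) for c in text]

print(text_to_labels("BAD"))  # [1, 0, 3]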
Several previous questions glance at these issues, but I haven't been able to integrate them successfully. For example,
Tensorflow read images with labels
shows a straightforward framework with discrete, categorical labels,
which I've begun with as a model.
How to load sparse data with TensorFlow?
nicely explains an approach for loading sparse data, but assumes
pre-packaging tf.train.Examples.
Is there a way to integrate these approaches?
Another example (SO question #38012743) shows how I might delay the conversion from string to list until after dequeuing the filename for decoding, but it relies on tf.py_func, which has caveats. (Should I worry about them?)
I recognize that "SparseTensors don't play well with queues" (per the tf docs), so it might be necessary to do some voodoo on the result (serialization?) before batching, or even rework where the computation happens; I'm open to that.
Following MarvMind's outline, here is a basic framework with the computations I want (iterate over lines containing example filenames, extract each label string and convert to sequence), but I have not successfully determined the "Tensorflow" way to do it.
Thank you for the right "tweak", a more appropriate strategy for my goals, or an indication that tf.py_func won't wreck training efficiency or something else downstream (e.g., loading trained models for future use).
EDIT (+7 hours) I found the missing ops to patch things up. While I still need to verify that this connects with ctc_loss downstream, I have checked that the edited version below correctly batches and reads in the images and sparse tensors.
import os

import tensorflow as tf

out_charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def input_pipeline(data_filename):
    filenames, seq_labels = _get_image_filenames_labels(data_filename)
    data_queue = tf.train.slice_input_producer([filenames, seq_labels])
    image, label = _read_data_format(data_queue)
    image, label = tf.train.batch([image, label], batch_size=2, dynamic_pad=True)
    label = tf.deserialize_many_sparse(label, tf.int32)
    return image, label

def _get_image_filenames_labels(data_filename):
    filenames = []
    labels = []
    with open(data_filename) as f:
        for line in f:
            # Carve out the ground truth string and file path from
            # lines formatted like:
            # ./241/7/158_NETWORK_51375.jpg 51375
            filename = line.split(' ', 1)[0][2:]  # split off "./" and number
            # Extract label string embedded within image filename
            # between underscores, e.g. NETWORK
            text = os.path.basename(filename).split('_', 2)[1]
            # Transform string text to sequence of indices using charset, e.g.,
            # NETWORK -> [13, 4, 19, 22, 14, 17, 10]
            indices = [[i] for i in range(0, len(text))]
            values = [out_charset.index(c) for c in list(text)]
            shape = [len(text)]
            label = tf.SparseTensorValue(indices, values, shape)
            label = tf.convert_to_tensor_or_sparse_tensor(label)
            label = tf.serialize_sparse(label)  # needed for batching
            # Add data to lists for conversion
            filenames.append(filename)
            labels.append(label)
    filenames = tf.convert_to_tensor(filenames)
    labels = tf.convert_to_tensor_or_sparse_tensor(labels)
    return filenames, labels

def _read_data_format(data_queue):
    label = data_queue[1]
    raw_image = tf.read_file(data_queue[0])
    image = tf.image.decode_jpeg(raw_image, channels=1)
    return image, label
The key ideas seem to be: create a SparseTensorValue from the data you want, pass it through tf.convert_to_tensor_or_sparse_tensor, and then (if you want to batch the data) serialize it with tf.serialize_sparse. After batching, you can restore the values with tf.deserialize_many_sparse.
Here's the outline. Create the sparse values, convert to tensor, and serialize:
indices = [[i] for i in range(0,len(text))]
values = [out_charset.index(c) for c in list(text)]
shape = [len(text)]
label = tf.SparseTensorValue(indices,values,shape)
label = tf.convert_to_tensor_or_sparse_tensor(label)
label = tf.serialize_sparse(label) # needed for batching
Then, you can do the batching and deserialize:
image, label = tf.train.batch([image, label], batch_size=2, dynamic_pad=True)
label = tf.deserialize_many_sparse(label, tf.int32)