I'm trying to train a basic Graph Neural Network using the StellarGraph library, in particular starting from the example provided in [0].
The example works fine, but now I would like to repeat the same exercise removing the N-fold cross-validation and providing specific training, validation and test sets. I'm trying to do so with the following code:
# One-hot encoding
graph_training_set_labels_encoded = pd.get_dummies(graphs_training_set_labels, drop_first=True)
graph_validation_set_labels_encoded = pd.get_dummies(graphs_validation_set_labels, drop_first=True)
graphs = graphs_training_set + graphs_validation_set
# Graph generator preparation
generator = PaddedGraphGenerator(graphs=graphs)
train_gen = generator.flow([x for x in range(0, len(graphs_training_set))],
                           targets=graph_training_set_labels_encoded,
                           batch_size=batch_size)
valid_gen = generator.flow([x for x in range(len(graphs_training_set),
                                             len(graphs_training_set) + len(graphs_validation_set))],
                           targets=graph_validation_set_labels_encoded,
                           batch_size=batch_size)
# Stopping criterion
es = EarlyStopping(monitor="val_loss",
                   min_delta=0,
                   patience=20,
                   restore_best_weights=True)
# Model definition
gc_model = GCNSupervisedGraphClassification(layer_sizes=[64, 64],
                                            activations=["relu", "relu"],
                                            generator=generator,
                                            dropout=dropout_value)
x_inp, x_out = gc_model.in_out_tensors()
predictions = Dense(units=32, activation="relu")(x_out)
predictions = Dense(units=16, activation="relu")(predictions)
predictions = Dense(units=1, activation="sigmoid")(predictions)
# Creating Keras model and preparing it for training
model = Model(inputs=x_inp, outputs=predictions)
model.compile(optimizer=Adam(adam_value), loss=binary_crossentropy, metrics=["acc"])
# GNN Training
history = model.fit(train_gen, epochs=num_epochs, validation_data=valid_gen, verbose=0, callbacks=[es])
# Calculate performance on the validation data
test_metrics = model.evaluate(valid_gen, verbose=0)
valid_acc = test_metrics[model.metrics_names.index("acc")]
print(f"Test Accuracy model = {valid_acc}")
Where graphs_training_set and graphs_validation_set are lists of StellarDiGraphs.
I am able to run this piece of code, but it returns NaN as a result. What could be the problem?
Since this is the first time I am using StellarGraph, and in particular PaddedGraphGenerator, I think my mistake lies in the usage of that generator, but providing the training set and validation set in different ways didn't produce better results.
Thank you in advance.
UPDATE: Fixed a typo in the code, as pointed out here (thanks to george123).
[0] https://stellargraph.readthedocs.io/en/stable/demos/graph-classification/gcn-supervised-graph-classification.html
I found a solution by digging into the StellarGraph documentation for PaddedGraphGenerator and the GCN neural network class GCNSupervisedGraphClassification. Furthermore, I found a similar question on the StellarGraph issue tracker which also points to the solution.
# Graph generator preparation
generator = PaddedGraphGenerator(graphs=graphs)
train_gen = generator.flow([x for x in range(0, num_graphs_for_training)],
                           targets=training_graphs_labels,
                           batch_size=35)
valid_gen = generator.flow([x for x in range(num_graphs_for_training, num_graphs_for_training + num_graphs_for_validation)],
                           targets=validation_graphs_labels,
                           batch_size=35)
# Stopping criterion
es = EarlyStopping(monitor="val_loss",
                   min_delta=0.001,
                   patience=10,
                   restore_best_weights=True)
# Model definition
gc_model = GCNSupervisedGraphClassification(layer_sizes=[64, 64],
                                            activations=["relu", "relu"],
                                            generator=generator,
                                            dropout=dropout_value)
x_inp, x_out = gc_model.in_out_tensors()
predictions = Dense(units=32, activation="relu")(x_out)
predictions = Dense(units=16, activation="relu")(predictions)
predictions = Dense(units=1, activation="sigmoid")(predictions)
# Let's create the Keras model and prepare it for training
model = Model(inputs=x_inp, outputs=predictions)
model.compile(optimizer=Adam(adam_value), loss=binary_crossentropy, metrics=["acc"])
# GNN Training
history = model.fit(train_gen, epochs=num_epochs, validation_data=valid_gen, verbose=1, callbacks=[es])
# Evaluate performance on the validation data
valid_metrics = model.evaluate(valid_gen, verbose=0)
valid_acc = valid_metrics[model.metrics_names.index("acc")]
# Define test set indices temporary vars
index_begin_test_set = num_graphs_for_training + num_graphs_for_validation
index_end_test_set = index_begin_test_set + num_graphs_for_testing
test_set_indices = [x for x in range(index_begin_test_set, index_end_test_set)]
# Evaluate performance on test set
generator_for_test_set = PaddedGraphGenerator(graphs=graphs)
test_gen = generator_for_test_set.flow(test_set_indices)
result = model.predict(test_gen)
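As a hypothetical follow-up (not part of the original solution), the sigmoid outputs returned by model.predict can be thresholded at 0.5 to obtain binary class labels:

import numpy as np

# result has shape (num_graphs_for_testing, 1) with values in [0, 1]
predicted_labels = (result > 0.5).astype(int).ravel()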
I am currently using the following code to set one column equal to zero and then retrain the model for all 10 NNs in NN1_List. However, as the loop progresses, the training time of each neural network slowly increases (very slowly, but it is still a big deal if I train 1660 NNs). I checked a variety of websites and implemented all the possible solutions I could find, such as tf.keras.backend.clear_session(), tf.compat.v1.reset_default_graph(), del model, and gc.collect().
r2_list = list()
for i in tf.range(0, len(training_x.columns), 1):
    column = training_x.columns[i]
    df = training_x.copy()
    df[column].values[:] = 0
    prediction_list = list()
    for j in tf.range(0, len(NN1_List), 1):
        np.random.seed(int(seed_list[j]))
        random.seed(int(seed_list[j]))
        tf.random.set_seed(int(seed_list[j]))
        model = keras.Sequential()
        model.add(keras.layers.Dense(
            units=64,
            kernel_regularizer=keras.regularizers.L1(l1=0.00001),
            input_shape=(training_x.shape[1],),
            activation='relu')
        )
        model.add(keras.layers.Dense(
            units=1))
        ## Compile Model.
        opt = keras.optimizers.Adam(learning_rate=0.01)
        model.compile(optimizer=opt,
                      loss='mean_squared_error')
        ## Fit Model.
        callback = keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5, restore_best_weights=True)
        model.fit(x=df,
                  y=training_y,
                  validation_data=(validation_x, validation_y),
                  batch_size=10000,
                  epochs=100,
                  callbacks=[callback])
        prediction_testing = model.predict(testing_x)
        del model
        tf.keras.backend.clear_session()
        tf.compat.v1.reset_default_graph()
        gc.collect()
        prediction_list.append(prediction_testing)
    prediction_array = np.mean(prediction_list, axis=0).ravel()
    r2 = kelly_gu_r_squared(testing_y, prediction_array)
    r2_list.append(r2)
I was wondering if you guys could point me in the right direction to fix this problem.
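For illustration only (this is not an answer from the thread): one commonly suggested restructuring is to clear the Keras session before each model is built rather than only after training, and to iterate with plain Python range instead of tf.range. The variable names below are taken from the snippet above and the training body is elided:

import gc
import random
import numpy as np
import tensorflow as tf

for i in range(len(training_x.columns)):      # plain range instead of tf.range
    column = training_x.columns[i]
    df = training_x.copy()
    df[column].values[:] = 0
    prediction_list = list()
    for j in range(len(NN1_List)):
        tf.keras.backend.clear_session()      # drop graph/state left over from the previous model
        gc.collect()
        np.random.seed(int(seed_list[j]))
        random.seed(int(seed_list[j]))
        tf.random.set_seed(int(seed_list[j]))
        # ... build, compile, fit and predict exactly as in the code above ...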
I'm working on a probabilistic forecast model using RNNs and want to log multiple runs with different parameters in TensorBoard to evaluate and compare them. I'm quite new to TensorBoard and couldn't really come up with a good way to organize my runs. I want to be able to sort through them in TensorBoard by parameter values, so currently I'm using this rather clunky approach:
tb = SummaryWriter(log_dir=f'runs/leakyrelu/cuda{cuda_id}/m_epochs{max_epochs}/lr{learning_rate}/'
                           f'bs{batch_size}/h_h{history_horizon}/f_h{forecast_horizon}/'
                           f'core_{core_net}/drop_fc{dropout_fc}/'
                           f'drop_core{dropout_core}')
Is there any smart way or convention on how to do this without creating mile-long filenames or directories kilometres deep?
It seems you are doing hyperparameter tuning with multiple parameters.
The best way to log such runs in TensorBoard is by using its HParams plugin.
Step 1: Importing
import tensorflow as tf
from tensorboard.plugins.hparams import api as hp
After that, you create an HParam object for each parameter you want to try different values for, and create a summary writer.
Step 2: Creating Hparam object and summary writer
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_DROPOUT = hp.HParam('dropout', hp.RealInterval(0.1, 0.2))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))
METRIC_ACCURACY = 'accuracy'
with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
    hp.hparams_config(
        hparams=[HP_NUM_UNITS, HP_DROPOUT, HP_OPTIMIZER],
        metrics=[hp.Metric(METRIC_ACCURACY, display_name='Accuracy')],
    )
Your created object will look something like this:
HP_NUM_UNITS
HParam(name='num_units', domain=Discrete([16, 32]), display_name=None, description=None)
Step 3: Create a function for training and testing
def train_test_model(hparams):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(hparams[HP_NUM_UNITS], activation=tf.nn.relu),
        tf.keras.layers.Dropout(hparams[HP_DROPOUT]),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax),
    ])
    model.compile(
        optimizer=hparams[HP_OPTIMIZER],
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'],
    )
    model.fit(x_train, y_train, epochs=1)  # Run with 1 epoch to speed things up for demo purposes
    _, accuracy = model.evaluate(x_test, y_test)
    return accuracy
In this function, hparams is a dictionary of the form:
{
    HParam Object 1: VALUE-FOR-THE-OBJECT,
    HParam Object 2: VALUE-FOR-THE-OBJECT,
    HParam Object 3: VALUE-FOR-THE-OBJECT,
}
The actual dictionary looks like this:
{HParam(name='num_units', domain=Discrete([16, 32]), display_name=None, description=None): 32,
HParam(name='dropout', domain=RealInterval(0.1, 0.2), display_name=None, description=None): 0.2,
HParam(name='optimizer', domain=Discrete(['adam', 'sgd']), display_name=None, description=None): 'sgd'}
Step 4: Function for logging into TensorBoard
def run(run_dir, hparams):
    with tf.summary.create_file_writer(run_dir).as_default():
        hp.hparams(hparams)  # record the values used in this trial
        accuracy = train_test_model(hparams)
        tf.summary.scalar(METRIC_ACCURACY, accuracy, step=1)
Here, run_dir is a path for each individual run.
Step 5: Trying different parameters
session_num = 0
for num_units in HP_NUM_UNITS.domain.values:
    for dropout_rate in (HP_DROPOUT.domain.min_value, HP_DROPOUT.domain.max_value):
        for optimizer in HP_OPTIMIZER.domain.values:
            hparams = {
                HP_NUM_UNITS: num_units,
                HP_DROPOUT: dropout_rate,
                HP_OPTIMIZER: optimizer,
            }
            run_name = "run-%d" % session_num
            print('--- Starting trial: %s' % run_name)
            print({h.name: hparams[h] for h in hparams})
            run('logs/hparam_tuning/' + run_name, hparams)
            session_num += 1
Note: num_units will take only the two values 16 and 32, not every value between 16 and 32.
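As a small illustration (not part of the original answer) of how a discrete domain differs from an interval domain in the HParams API:

from tensorboard.plugins.hparams import api as hp

# Discrete: only the listed values belong to the domain
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
# IntInterval: every integer between 16 and 32 belongs to the domain
# (you still choose which of those values to actually run)
HP_NUM_UNITS_RANGE = hp.HParam('num_units', hp.IntInterval(16, 32))

print(HP_NUM_UNITS.domain.values)           # [16, 32]
print(HP_NUM_UNITS_RANGE.domain.min_value,
      HP_NUM_UNITS_RANGE.domain.max_value)  # 16 32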
Your TensorBoard will then show the runs in a tabular view and a scatter plot view (screenshots omitted).
You can also combine this with the TensorBoard callback in Keras by setting the path of the callback to run_dir. For example:
def train_test_model(hparams, run_dir):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(hparams[HP_NUM_UNITS], activation=tf.nn.relu),
        tf.keras.layers.Dropout(hparams[HP_DROPOUT]),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)
    ])
    model.compile(
        optimizer=hparams[HP_OPTIMIZER],
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy']
    )
    callbacks = [
        tf.keras.callbacks.TensorBoard(run_dir),
    ]
    model.fit(x_train, y_train, epochs=10, callbacks=callbacks)  # epochs kept small for demo purposes
    _, accuracy = model.evaluate(x_test, y_test)
    return accuracy
The above steps are good if you want to log custom metrics or a variety of metrics other than the accuracy or loss defined in the compile method.
But if you don't need custom metrics and don't want to deal with summary writers, you can use Keras callbacks to simplify the process.
Complete code with callbacks without summary writers
import tensorflow as tf
from tensorboard.plugins.hparams import api as hp

# Creating Hparams
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_DROPOUT = hp.HParam('dropout', hp.RealInterval(0.1, 0.2))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))

# Creating train/test function
def train_test_model(hparams, run_dir):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(hparams[HP_NUM_UNITS], activation=tf.nn.relu),
        tf.keras.layers.Dropout(hparams[HP_DROPOUT]),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)
    ])
    model.compile(
        optimizer=hparams[HP_OPTIMIZER],
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy']
    )
    callbacks = [
        tf.keras.callbacks.TensorBoard(run_dir),  # log metrics
        hp.KerasCallback(run_dir, hparams),       # log hparams
    ]
    model.fit(x_train, y_train, epochs=10, callbacks=callbacks)  # epochs kept small for demo purposes
    _, accuracy = model.evaluate(x_test, y_test)
    return accuracy
# Running different configurations
session_num = 0
for num_units in HP_NUM_UNITS.domain.values:
    for dropout_rate in (HP_DROPOUT.domain.min_value, HP_DROPOUT.domain.max_value):
        for optimizer in HP_OPTIMIZER.domain.values:
            hparams = {
                HP_NUM_UNITS: num_units,
                HP_DROPOUT: dropout_rate,
                HP_OPTIMIZER: optimizer,
            }
            run_name = "run-%d" % session_num
            print('--- Starting trial: %s' % run_name)
            print({h.name: hparams[h] for h in hparams})
            train_test_model(hparams, 'logs/hparam_tuning/' + run_name)
            session_num += 1
Useful Links:
Hyperparameter Tuning with the HParams Dashboard
Hparams demo using all possible Hparam objects - Official Github Repo
I was trying to use the attention model described here in a simple bidirectional LSTM model. However, after adding the attention model, I got this error:
ValueError: Unknown initializer: GlorotUniform
To begin with, my code didn't have any incompatibility issues from using TensorFlow in some parts and Keras in other parts of the code. I also tried every solution addressed in this post, but none of them worked for me. I should mention that my code worked with no issues before adding the attention model. So, I tried removing the attention part of the network structure line by line to see which line is causing this problem:
inputs = tf.keras.layers.Input(shape=(n_timesteps, n_features))
units = 50
activations = tf.keras.layers.Bidirectional(tf.compat.v1.keras.layers.CuDNNLSTM(units, return_sequences=True),
                                            merge_mode='concat')(inputs)
print(np.shape(activations))
# Implementation of attention
x1 = tf.keras.layers.Dense(1, activation='tanh')(activations)
print(np.shape(x1))
x1 = tf.keras.layers.Flatten()(x1)
print(np.shape(x1))
x1 = tf.keras.layers.Activation('softmax')(x1)
print(np.shape(x1))
x1 = tf.keras.layers.RepeatVector(units * 2)(x1)
print(np.shape(x1))
x1 = tf.keras.layers.Permute([2, 1])(x1)
print(np.shape(x1))
sent_representation = tf.keras.layers.Multiply()([activations, x1])
print(np.shape(sent_representation))
sent_representation = tf.keras.layers.Lambda(lambda xin: tf.keras.backend.sum(xin, axis=-2),
                                             output_shape=(units * 2,))(sent_representation)
# softmax for classification
x = tf.keras.layers.Dense(n_outputs, activation='softmax')(sent_representation)
model = tf.keras.models.Model(inputs=inputs, outputs=x)
I realized it is the line with the lambda function and tf.keras.backend.sum that is causing the error. So, after some searching, I decided to replace that line with the following:
sent_representation = tf.math.reduce_sum(sent_representation, axis=-2)
Now, my code works. However, I am not quite sure if this substitution is correct. Am I doing this right?
Edit: Here are the next lines of the code; the problem is caused when I try to load the best model for testing:
optimizer = tf.keras.optimizers.SGD(lr=0.001, decay=1e-6, momentum=0.9)
model.compile(loss=lossFunction, optimizer=optimizer, metrics=['accuracy'])
print(model.summary())
# early stopping
es = tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode='min',
                                      verbose=1, patience=20)
mc = tf.keras.callbacks.ModelCheckpoint('best_model.h5',
                                        monitor='val_accuracy', mode='max', verbose=1,
                                        save_best_only=True)
history = model.fit(trainX, trainy, validation_data=(valX, valy),
                    shuffle=True, epochs=epochs, verbose=0,
                    callbacks=[es, mc])
saved_model = tf.keras.models.load_model('best_model.h5',
                                         custom_objects={"GlorotUniform": tf.keras.initializers.glorot_uniform()})
# evaluate the model
_, train_acc = saved_model.evaluate(trainX, trainy, verbose=0)
_, val_acc = saved_model.evaluate(valX, valy, verbose=0)
_, accuracy = saved_model.evaluate(testX, testy, verbose=0)
print('Train: %.3f, Validation: %.3f, Test: %.3f' % (train_acc, val_acc, accuracy))
y_pred = saved_model.predict(testX, batch_size=64, verbose=1)
Do you see any problem in my code that might be the cause of the error that I get when I use Lambda layer?
The code you provided works for me without problems, both with tf.keras.backend.sum and with tf.math.reduce_sum.
The answer is that your substitution doesn't alter your network or what you are looking for. You can test it on your own and verify that tf.keras.backend.sum is equal to tf.math.reduce_sum:
X = np.random.uniform(0,1, (32,100,10)).astype('float32')
(tf.keras.backend.sum(X, axis=-2) == tf.reduce_sum(X, axis=-2)).numpy().all() # TRUE
I also suggest wrapping the operation in a Lambda layer.
EDIT: Using tf.reduce_sum or tf.keras.backend.sum wrapped in a Lambda layer doesn't raise an error with TF versions >= 2.2.
When building the model, you need to use layers only. If you want to use some TensorFlow ops (like tf.reduce_sum or tf.keras.backend.sum) you need to wrap them in a Keras Lambda layer. Without this the model can still work, but using Lambda is good practice in order to avoid future problems.
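As a minimal sketch of that suggestion (not from the original answer, assuming TF >= 2.2), wrapping the reduction in a Lambda layer looks like this:

import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(100, 10))
# the backend op is wrapped in a Lambda layer instead of being applied directly
summed = tf.keras.layers.Lambda(lambda x: tf.reduce_sum(x, axis=-2))(inputs)
model = tf.keras.models.Model(inputs=inputs, outputs=summed)
print(model.output_shape)  # (None, 10)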
I'm new to machine learning and deep learning, and I'm trying to classify texts from 5 categories using neural networks. For that, I made a dictionary to translate the words to indexes, finally getting an array of lists of indexes. I also changed the labels to integers and did the padding and that stuff. The problem is that when I fit the model, the accuracy stays quite low (~0.20) and does not change across epochs. I have tried changing a lot of parameters, like the size of the vocabulary, the number of neurons, the dropout probability, the optimizer parameters, etc. The key parts of the code are below.
# Arrays with indexes (that works fine)
X_train = tokens_to_indexes(tokenized_tr_mrp, vocab, return_vocab=False)
X_test, vocab_dict = tokens_to_indexes(tokenized_te_mrp, vocab)
# Labels to integers
labels_dict = {}
labels_dict['Alzheimer'] = 0
labels_dict['Bladder Cancer'] = 1
labels_dict['Breast Cancer'] = 2
labels_dict['Cervical Cancer'] = 3
labels_dict['Negative'] = 4
y_train = np.array([labels_dict[i] for i in y_tr])
y_test = np.array([labels_dict[i] for i in y_te])
# One-hot encoding of labels
from keras.utils import to_categorical
encoded_train = to_categorical(y_train)
encoded_test = to_categorical(y_test)
# Padding
max_review_length = 235
X_train_pad = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test_pad = sequence.pad_sequences(X_test, maxlen=max_review_length)
# Model
# Vocab size
top_words = len(list(vocab_dict.keys()))
# Neurone type
rnn = LSTM
# dropout
set_dropout = True
p = 0.2
# embedding size
embedding_vector_length = 64
# regularization strength
L = 0.0005
# Number of neurones
N = 50
# Model
model = Sequential()
# Embedding layer
model.add(Embedding(top_words,
                    embedding_vector_length,
                    embeddings_regularizer=regularizers.l1(l=L),
                    input_length=max_review_length
                    # ,embeddings_constraint=UnitNorm(axis=1)
                    ))
# Dropout layer
if set_dropout:
    model.add(Dropout(p))
# Recurrent layer
model.add(rnn(N))
# Output layer
model.add(Dense(5, activation='softmax'))
# Compilation
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=0.001),
              metrics=['Accuracy'])
# Split training set for validation
X_tr, X_va, y_tr_, y_va = train_test_split(X_train_pad, encoded_train,
                                           test_size=0.3, random_state=2)
# Parameters
batch_size = 50
# N epochs
n_epocas = 20
best_val_acc = 0
best_val_loss = 1e20
best_i = 0
best_weights = []
acum_tr_acc = []
acum_tr_loss = []
acum_val_acc = []
acum_val_loss = []
# Training
for e in range(n_epocas):
    h = model.fit(X_tr, y_tr_,
                  batch_size=batch_size,
                  validation_data=(X_va, y_va),
                  epochs=1, verbose=1)
    acum_tr_acc = acum_tr_acc + h.history['accuracy']
    acum_tr_loss = acum_tr_loss + h.history['loss']
    val_acc = h.history['val_accuracy'][0]
    val_loss = h.history['val_loss'][0]
    acum_val_acc = acum_val_acc + [val_acc]
    acum_val_loss = acum_val_loss + [val_loss]
    # if val_acc > best_val_acc:
    if val_loss < best_val_loss:
        best_i = len(acum_val_acc) - 1
        best_val_acc = val_acc
        best_val_loss = val_loss
        best_weights = model.get_weights().copy()
    if len(acum_tr_acc) > 1 and (len(acum_tr_acc) + 1) % 1 == 0:
        if e > 1:
            clear_output()
The code you posted is really bad practice.
You can either train for n_epocas using your current method and add callbacks to get the best weights (e.g. ModelCheckpoint), or use tf.GradientTape, but calling model.fit() for one epoch at a time can lead to weird results, since your optimizer doesn't know which epoch it is at.
I suggest keeping your current code but training for n_epocas all in one go, and reporting the results here (accuracy + loss).
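A minimal sketch of that suggestion (not from the original answer), reusing the variable names above and keeping the best weights with a ModelCheckpoint callback:

from keras.callbacks import ModelCheckpoint

# save only the weights with the lowest validation loss seen during training
checkpoint = ModelCheckpoint('best_weights.h5',
                             monitor='val_loss',
                             save_best_only=True,
                             save_weights_only=True)
history = model.fit(X_tr, y_tr_,
                    batch_size=batch_size,
                    validation_data=(X_va, y_va),
                    epochs=n_epocas,
                    callbacks=[checkpoint],
                    verbose=1)
model.load_weights('best_weights.h5')  # restore the best weights before evaluating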
Someone gave me the solution. I just had to change this line:
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=0.001),
              metrics=['Accuracy'])
To this:
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=0.001),
              metrics=['acc'])
I also changed the lines in the final loop relating to accuracy. The one-hot encoding was necessary as well.
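Presumably (the answer doesn't show it) the corresponding change in the training loop is to read the history under the new metric names, since metrics=['acc'] produces 'acc'/'val_acc' keys instead of 'accuracy'/'val_accuracy':

acum_tr_acc = acum_tr_acc + h.history['acc']
acum_tr_loss = acum_tr_loss + h.history['loss']
val_acc = h.history['val_acc'][0]
val_loss = h.history['val_loss'][0]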
I'm trying to build a class to quickly initialize and train an autoencoder for rapid prototyping. One thing I'd like to be able to do is quickly adjust the number of epochs I train for. However, it seems like no matter what I do, the model trains each layer for 100 epochs! I'm using the tensorflow backend.
Here is the code from the two offending methods.
def pretrain(self, X_train, nb_epoch=10):
    data = X_train
    for ae in self.pretrains:
        ae.fit(data, data, nb_epoch=nb_epoch)
        ae.layers[0].output_reconstruction = False
        ae.compile(optimizer='sgd', loss='mse')
        data = ae.predict(data)
.........
def fine_train(self, X_train, nb_epoch):
    weights = [ae.layers[0].get_weights() for ae in self.pretrains]
    dims = self.dims
    encoder = containers.Sequential()
    decoder = containers.Sequential()
    ## add special input encoder
    encoder.add(Dense(output_dim=dims[1], input_dim=dims[0],
                      weights=weights[0][0:2], activation='linear'))
    ## add the rest of the encoders
    for i in range(1, len(dims) - 1):
        encoder.add(Dense(output_dim=dims[i + 1],
                          weights=weights[i][0:2], activation=self.act))
    ## add the decoders from the end
    decoder.add(Dense(output_dim=dims[len(dims) - 2], input_dim=dims[len(dims) - 1],
                      weights=weights[len(dims) - 2][2:4], activation=self.act))
    for i in range(len(dims) - 2, 1, -1):
        decoder.add(Dense(output_dim=dims[i - 1],
                          weights=weights[i - 1][2:4], activation=self.act))
    ## add the output layer decoder
    decoder.add(Dense(output_dim=dims[0],
                      weights=weights[0][2:4], activation='linear'))
    masterAE = AutoEncoder(encoder=encoder, decoder=decoder)
    masterModel = models.Sequential()
    masterModel.add(masterAE)
    masterModel.compile(optimizer='sgd', loss='mse')
    masterModel.fit(X_train, X_train, nb_epoch=nb_epoch)
    self.model = masterModel
Any suggestions on how to fix the problem would be appreciated. My original suspicion was that it was something to do with tensorflow, so I tried running with the theano backend but encountered the same problem.
Here is a link to the full program.
Following the Keras doc, the fit method uses a default of 100 training epochs (nb_epoch=100):
fit(X, y, batch_size=128, nb_epoch=100, verbose=1, callbacks=[], validation_split=0.0, validation_data=None, shuffle=True, show_accuracy=False, class_weight=None, sample_weight=None)
I'm not sure how you are running these methods, but following the "Typical usage" from the original code, you should be able to run something like the following (adjusting the variable num_epoch as required):
#Typical usage:
num_epoch = 10
ae = JPAutoEncoder(dims)
ae.pretrain(X_train, nb_epoch = num_epoch)
ae.train(X_train, nb_epoch = num_epoch)
ae.predict(X_val)