Passing a Keras optimizer as a string parameter to the Keras optimizer function - python

I’m tuning hyperparameters of a keras deep learning model with the help of a config.json file containing hyperparameters.
{
  "opt": "Adam",
  "lr": 0.01,
  "grad_clip": 0.5
}
Keras allows you to specify an optimizer in two ways:
1. As a string argument in the call to compile, without additional parameters:
model.compile(loss='categorical_crossentropy',
              optimizer='Adam',
              metrics=['mse'])
2. As the eponymous function with additional parameters:
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.Adam(lr=0.01, clipvalue=0.5),
              metrics=['mse'])
My question is: how do I pass the optimizer (SGD, Adam, etc.) from the config file as an argument, along with its sub-parameters, and use the keras.optimizers function call as in (2)?
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed, Bidirectional
from keras import optimizers

def train(X, y, opt, lr, clip):
    model = Sequential()
    model.add(Bidirectional(LSTM(100, return_sequences=True), input_shape=(500, 300)))
    model.add(TimeDistributed(Dense(5, activation='sigmoid')))
    model.compile(loss='categorical_crossentropy',
                  optimizer=optimizers.opt(lr=lr, clipvalue=clip),
                  metrics=['mse'])
    model.fit(X, y, epochs=100, batch_size=1, verbose=2)
    return model
When I try to pass parameters from my config file to the above train() function, I get the following error:
AttributeError: module 'keras.optimizers' has no attribute 'opt'
How do I parse the optimizer given as a string into the corresponding function call?

You could use a class that constructs an optimizer like so:
class Optimizer:
    def __init__(self, lr, clip):
        self.lr = lr
        self.clip = clip

    def get_opt(self, opt):
        """Dispatch method"""
        method_name = 'opt_' + str(opt)
        # Get the method from 'self'. Default to a lambda.
        method = getattr(self, method_name, lambda: "Invalid optimizer")
        # Call the method as we return it
        return method()

    def opt_Adam(self):
        return optimizers.Adam(lr=self.lr, clipvalue=self.clip)

    def opt_example(self):
        return optimizers.example(lr=self.lr, clipvalue=self.clip)
    # and so on for as many cases as you need
Then you can call it as:
a = Optimizer(lr, clip)
model.compile(loss='categorical_crossentropy',
              optimizer=a.get_opt(opt=opt),
              metrics=['mse'])
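A shorter variant of the same dispatch idea, sketched under the assumption that the string in your config matches a class name in keras.optimizers exactly (e.g. 'Adam', 'SGD') and that the optimizer accepts lr and clipvalue:
from keras import optimizers

def build_optimizer(opt, lr, clip):
    # Look the optimizer class up by its name; raises AttributeError for unknown names.
    opt_class = getattr(optimizers, opt)
    return opt_class(lr=lr, clipvalue=clip)

# e.g. optimizer=build_optimizer(opt, lr, clip) inside model.compile(...)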

You can create a JSON configuration file which contains the initialization of the optimizers, e.g.:
"Adam": {
"lr":0.001,
"beta_1":0.9,
"beta_2":0.999,
"epsilon":None,
"decay":0.0,
"amsgrad":False
}
Then you can parse it from the configuration file using the following lines:
import json

with open('configuration.json') as json_data_file:
    data = json.load(json_data_file)
In the data structure you will find the parameter setup of the optimizer:
optimizer = data["Adam"]
Then you can access all the parameters of the chosen optimizer:
lr = optimizer["lr"]
beta_1 = optimizer["beta_1"]
etc...
Another way is to choose the optimizer using only the configuration file. With Keras you can compile your neural network with a specific optimizer picked from the configuration file via an optimizer dispatcher:
optimizer = {"Adam": keras.optimizers.Adam(**config)}
Keep in mind that the Keras optimizer name should be the same in the config file.
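A minimal end-to-end sketch of this dispatcher idea, assuming a configuration.json with an "Adam" entry like the one above (the file name and key names here are only illustrative):
import json
import keras

with open('configuration.json') as json_data_file:
    data = json.load(json_data_file)

opt_name = "Adam"               # could itself be read from the config
opt_config = data[opt_name]     # e.g. {"lr": 0.001, "beta_1": 0.9, ...}

dispatcher = {
    "Adam": keras.optimizers.Adam,
    "SGD": keras.optimizers.SGD,
}
optimizer = dispatcher[opt_name](**opt_config)
# then: model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['mse'])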

Make sure the keys of your csl object (the config object) actually match the argument names of the optimizer classes. Then the following will create the optimizer object, collect the appropriate arguments from the configuration object, and pass them to it:
csl = { "opt": "Adam",
"lr": 0.01,
"grad_clip": 0.5}
optimizer = eval(f"keras.optimizers.{csl["opt"]}")()
optimizer = optimizer.from_config({k:v for k,v in csl.items() if hasattr(optimizer, k)})

Related

Organizing runs in Tensorboard

I'm working on a probabilistic forecast model using RNNs and want to log multiple runs with different parameters in Tensorboard to evaluate and compare them. I'm quite new to Tensorboard and couldn't really come up with a good way to organize my runs. I want to be able to sort through them in Tensorboard by parameter values, so currently I'm using this rather clunky approach:
tb = SummaryWriter(log_dir=f'runs/leakyrelu/cuda{cuda_id}/m_epochs{max_epochs}/lr{learning_rate}/'
                           f'bs{batch_size}/h_h{history_horizon}/f_h{forecast_horizon}/'
                           f'core_{core_net}/drop_fc{dropout_fc}/'
                           f'drop_core{dropout_core}')
Is there any smart way or convention on how to do this without creating mile-long filenames or directories kilometres deep?
It seems you are doing hyperparameter tuning with multiple parameters.
The best way to log such runs in TensorBoard is by using its HParams plugin.
Step 1: Importing
import tensorflow as tf
from tensorboard.plugins.hparams import api as hp
After that, create an HParam object for each parameter you want to try different values for, and create a summary writer.
Step 2: Creating HParam objects and a summary writer
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_DROPOUT = hp.HParam('dropout', hp.RealInterval(0.1, 0.2))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))
METRIC_ACCURACY = 'accuracy'
with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
    hp.hparams_config(
        hparams=[HP_NUM_UNITS, HP_DROPOUT, HP_OPTIMIZER],
        metrics=[hp.Metric(METRIC_ACCURACY, display_name='Accuracy')],
    )
The created object will look something like this:
HP_NUM_UNITS
HParam(name='num_units', domain=Discrete([16, 32]), display_name=None, description=None)
Step 3: Create a function for training and testing
def train_test_model(hparams):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(hparams[HP_NUM_UNITS], activation=tf.nn.relu),
        tf.keras.layers.Dropout(hparams[HP_DROPOUT]),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax),
    ])
    model.compile(
        optimizer=hparams[HP_OPTIMIZER],
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'],
    )
    model.fit(x_train, y_train, epochs=1)  # Run with 1 epoch to speed things up for demo purposes
    _, accuracy = model.evaluate(x_test, y_test)
    return accuracy
In this function hparams is a dictionary of type:
{
    HParam Object 1: VALUE-FOR-THE-OBJECT,
    HParam Object 2: VALUE-FOR-THE-OBJECT,
    HParam Object 3: VALUE-FOR-THE-OBJECT,
}
The actual dictionary looks like this:
{HParam(name='num_units', domain=Discrete([16, 32]), display_name=None, description=None): 32,
HParam(name='dropout', domain=RealInterval(0.1, 0.2), display_name=None, description=None): 0.2,
HParam(name='optimizer', domain=Discrete(['adam', 'sgd']), display_name=None, description=None): 'sgd'}
Step 4: Function for logging into TensorBoard
def run(run_dir, hparams):
    with tf.summary.create_file_writer(run_dir).as_default():
        hp.hparams(hparams)  # record the values used in this trial
        accuracy = train_test_model(hparams)
        tf.summary.scalar(METRIC_ACCURACY, accuracy, step=1)
Here, run_dir is a path for each individual run.
Step 5: Trying different parameters
session_num = 0

for num_units in HP_NUM_UNITS.domain.values:
    for dropout_rate in (HP_DROPOUT.domain.min_value, HP_DROPOUT.domain.max_value):
        for optimizer in HP_OPTIMIZER.domain.values:
            hparams = {
                HP_NUM_UNITS: num_units,
                HP_DROPOUT: dropout_rate,
                HP_OPTIMIZER: optimizer,
            }
            run_name = "run-%d" % session_num
            print('--- Starting trial: %s' % run_name)
            print({h.name: hparams[h] for h in hparams})
            run('logs/hparam_tuning/' + run_name, hparams)
            session_num += 1
Note: num_units will take the 2 values 16 and 32, not every value between 16 and 32.
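If you want more candidate values for a discrete hyperparameter, simply list them; a small sketch (the extra values here are only illustrative):
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 24, 32, 64]))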
Your TensorBoard will then show the runs in a tabular view and in a scatter plot view.
You can also combine this with the TensorBoard callback in Keras by setting the callback's log directory to run_dir. For example:
def train_test_model(hparams, run_dir):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(hparams[HP_NUM_UNITS], activation=tf.nn.relu),
        tf.keras.layers.Dropout(hparams[HP_DROPOUT]),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)
    ])
    model.compile(
        optimizer=hparams[HP_OPTIMIZER],
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy']
    )
    callbacks = [
        tf.keras.callbacks.TensorBoard(run_dir),
    ]
    model.fit(x_train, y_train, epochs=10, callbacks=callbacks)
    _, accuracy = model.evaluate(x_test, y_test)
    return accuracy
The above-mentioned steps are good if you want to log custom metrics, or a variety of metrics other than the accuracy or loss you have defined in the compile method. But if you don't need custom metrics and don't want to deal with summary writers, you can use Keras callbacks to simplify the process.
Complete code with callbacks without summary writers
# Creating Hparams
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_DROPOUT = hp.HParam('dropout', hp.RealInterval(0.1, 0.2))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))

# Creating train test function
def train_test_model(hparams, run_dir):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(hparams[HP_NUM_UNITS], activation=tf.nn.relu),
        tf.keras.layers.Dropout(hparams[HP_DROPOUT]),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)
    ])
    model.compile(
        optimizer=hparams[HP_OPTIMIZER],
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy']
    )
    callbacks = [
        tf.keras.callbacks.TensorBoard(run_dir),  # log metrics
        hp.KerasCallback(run_dir, hparams),       # log hparams
    ]
    model.fit(x_train, y_train, epochs=10, callbacks=callbacks)
    _, accuracy = model.evaluate(x_test, y_test)
    return accuracy

# Running different configurations
session_num = 0

for num_units in HP_NUM_UNITS.domain.values:
    for dropout_rate in (HP_DROPOUT.domain.min_value, HP_DROPOUT.domain.max_value):
        for optimizer in HP_OPTIMIZER.domain.values:
            hparams = {
                HP_NUM_UNITS: num_units,
                HP_DROPOUT: dropout_rate,
                HP_OPTIMIZER: optimizer,
            }
            run_name = "run-%d" % session_num
            print('--- Starting trial: %s' % run_name)
            print({h.name: hparams[h] for h in hparams})
            train_test_model(hparams, 'logs/hparam_tuning/' + run_name)
            session_num += 1
Useful Links:
Hyperparameter Tuning with the HParams Dashboard
Hparams demo using all possible Hparam objects - Official Github Repo

ValueError: No gradients provided for any variable - Keras Tensorflow 2.0

I'm trying to follow this example on the TensorFlow site, but it's not working.
Here's my code:
import tensorflow as tf

def vectorize(vector_like):
    return tf.convert_to_tensor(vector_like)

def batchify(vector):
    '''Make a batch out of a single example'''
    return vectorize([vector])

data = [(batchify([0]), batchify([0, 0, 0])),
        (batchify([1]), batchify([0, 0, 0])),
        (batchify([2]), batchify([0, 0, 0]))]

num_hidden = 5
num_classes = 3

opt = tf.keras.optimizers.SGD(learning_rate=0.1)

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(num_hidden, activation='relu'))
model.add(tf.keras.layers.Dense(num_classes, activation='sigmoid'))

loss_fn = lambda: tf.keras.backend.cast(tf.keras.losses.mse(model(input), output), tf.float32)
var_list_fn = lambda: model.trainable_weights

for input, output in data:
    opt.minimize(loss_fn, var_list_fn)
For a while, I was getting a warning about the loss function having the wrong datatype (int instead of float), which is why I've added the casting to the loss function.
Instead of the network training, I'm getting the error:
ValueError: No gradients provided for any variable:
['sequential/dense/kernel:0', 'sequential/dense/bias:0',
'sequential/dense_1/kernel:0', 'sequential/dense_1/bias:0'].
Why aren't the gradients getting passed through? What am I doing wrong?
You need to use GradientTape if you want to manipulate gradients in TF2. For example, the following works.
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(num_hidden, activation='relu'))
model.add(tf.keras.layers.Dense(num_classes, activation='sigmoid'))

with tf.GradientTape() as tape:
    loss = tf.keras.backend.mean(tf.keras.losses.mse(model(input), tf.cast(output, tf.float32)))
gradients = tape.gradient(loss, model.trainable_variables)
opt.apply_gradients(zip(gradients, model.trainable_variables))
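To fit this into your setup, the tape block goes inside the loop over data; a minimal sketch reusing the data, model, and opt defined above (casting both inputs and targets to float to avoid dtype issues):
for x, y in data:
    with tf.GradientTape() as tape:
        loss = tf.keras.backend.mean(
            tf.keras.losses.mse(model(tf.cast(x, tf.float32)), tf.cast(y, tf.float32)))
    gradients = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(gradients, model.trainable_variables))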
Edit:
You can actually get your sample to work with the following change: use cast just for the output instead of for the full loss_fn (note I'm also taking a mean, since we optimize w.r.t. the mean of the loss).
By "work", I mean it doesn't complain. But you will need to investigate further to make sure it's working as intended.
loss_fn = lambda: tf.keras.backend.mean(tf.keras.losses.mse(model(input), tf.cast(output, tf.float32)))
var_list_fn = lambda: model.trainable_weights
opt.minimize(loss_fn, var_list_fn)

This model has not yet been built error on model.summary()

I've keras model defined as follow
class ConvLayer(Layer):
    def __init__(self, nf, ks=3, s=2, **kwargs):
        self.nf = nf
        self.grelu = GeneralReLU(leak=0.01)
        self.conv = (Conv2D(filters=nf,
                            kernel_size=ks,
                            strides=s,
                            padding="same",
                            use_bias=False,
                            activation="linear"))
        super(ConvLayer, self).__init__(**kwargs)

    def rsub(self): return -self.grelu.sub
    def set_sub(self, v): self.grelu.sub = -v
    def conv_weights(self): return self.conv.weight[0]

    def build(self, input_shape):
        # No weight to train.
        super(ConvLayer, self).build(input_shape)  # Be sure to call this at the end

    def compute_output_shape(self, input_shape):
        output_shape = (input_shape[0],
                        input_shape[1]/2,
                        input_shape[2]/2,
                        self.nf)
        return output_shape

    def call(self, x):
        return self.grelu(self.conv(x))

    def __repr__(self):
        return f'ConvLayer(nf={self.nf}, activation={self.grelu})'


class ConvModel(tf.keras.Model):
    def __init__(self, nfs, input_shape, output_shape, use_bn=False, use_dp=False):
        super(ConvModel, self).__init__(name='mlp')
        self.use_bn = use_bn
        self.use_dp = use_dp
        self.num_classes = num_classes

        # backbone layers
        self.convs = [ConvLayer(nfs[0], s=1, input_shape=input_shape)]
        self.convs += [ConvLayer(nf) for nf in nfs[1:]]

        # classification layers
        self.convs.append(AveragePooling2D())
        self.convs.append(Dense(output_shape, activation='softmax'))

    def call(self, inputs):
        for layer in self.convs: inputs = layer(inputs)
        return inputs
I'm able to compile this model without any issues
>>> model.compile(optimizer=tf.keras.optimizers.Adam(lr=lr),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
But when I query the summary for this model, I see this error
>>> model = ConvModel(nfs, input_shape=(32, 32, 3), output_shape=num_classes)
>>> model.summary()
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-220-5f15418b3570> in <module>()
----> 1 model.summary()
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py in summary(self, line_length, positions, print_fn)
1575 """
1576 if not self.built:
-> 1577 raise ValueError('This model has not yet been built. '
1578 'Build the model first by calling `build()` or calling '
1579 '`fit()` with some data, or specify '
ValueError: This model has not yet been built. Build the model first by calling `build()` or calling `fit()` with some data, or specify an `input_shape` argument in the first layer(s) for automatic build.
I'm providing input_shape for the first layer of my model, why is throwing this error?
The error says what to do:
This model has not yet been built. Build the model first by calling build()
model.build(input_shape)  # `input_shape` is the shape of the input data,
                          # e.g. input_shape = (None, 32, 32, 3)
model.summary()
There is a very big difference between Keras subclassed models and the other Keras models (Sequential and Functional).
Sequential and Functional models are data structures that represent a DAG of layers. In simple words, a Functional or Sequential model is a static graph of layers built by stacking them on top of each other like LEGO. So when you provide input_shape to the first layer, these models can infer the shapes of all the other layers and build the model. Then you can print the input/output shapes using model.summary().
On the other hand, a subclassed model is defined via the body of a Python call method. For a subclassed model there is no graph of layers: we cannot know how the layers are connected to each other (because that's defined in the body of call, not as an explicit data structure), so we cannot infer the input/output shapes. For a subclassed model the input/output shapes are unknown until it is first run on proper data. The compile() method does a deferred compile and waits for proper data. In order for the model to infer the shapes of the intermediate layers, we need to run it on proper data and then use model.summary(). Without running the model on data, it will throw the error you noticed. Please check the GitHub gist for the complete code.
The following is an example from the TensorFlow website.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class ThreeLayerMLP(keras.Model):
    def __init__(self, name=None):
        super(ThreeLayerMLP, self).__init__(name=name)
        self.dense_1 = layers.Dense(64, activation='relu', name='dense_1')
        self.dense_2 = layers.Dense(64, activation='relu', name='dense_2')
        self.pred_layer = layers.Dense(10, name='predictions')

    def call(self, inputs):
        x = self.dense_1(inputs)
        x = self.dense_2(x)
        return self.pred_layer(x)

def get_model():
    return ThreeLayerMLP(name='3_layer_mlp')

model = get_model()

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255

model.compile(loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer=keras.optimizers.RMSprop())

model.summary()  # This will throw an error as follows
# ValueError: This model has not yet been built. Build the model first by calling `build()` or calling `fit()` with some data, or specify an `input_shape` argument in the first layer(s) for automatic build.

# Need to run with real data to infer shape of different layers
history = model.fit(x_train, y_train,
                    batch_size=64,
                    epochs=1)

model.summary()
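Alternatively, instead of running a full fit, you can build a subclassed model by calling it once on a dummy batch of the right shape; a minimal sketch for the MNIST-shaped model above:
model = get_model()
model(tf.zeros((1, 784)))  # one forward pass builds all the layers
model.summary()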
Thanks!
Another method is to set the input_shape attribute on the first layer and/or build the model with the shape of your data, like this:
model = Sequential()
model.add(Bidirectional(LSTM(n_hidden, return_sequences=False, dropout=0.25,
                             recurrent_dropout=0.1), input_shape=(n_steps, dim_input)))

# X is a train dataset with features excluding a target variable
input_shape = X.shape
model.build(input_shape)
model.summary()
Make sure you create your model properly. A small typo like the following may also cause a problem:
model = Model(some-input, some-output, "model-name")
while the correct code should be:
model = Model(some-input, some-output, name="model-name")
If your TensorFlow/Keras version is 2.5.0, make sure you import Keras through the tensorflow package.
Not this:
from tensorflow import keras
from keras.models import Sequential
import tensorflow as tf
Like this:
from tensorflow import keras
from tensorflow.keras.models import Sequential
import tensorflow as tf
Version mismatches between TensorFlow and Keras can be the reason for this.
I encountered the same problem while training an LSTM model for regression.
Error:
ValueError: This model has not yet been built. Build the model first
by calling build() or by calling the model on a batch of data.
Earlier:
from tensorflow.keras.models import Sequential
from tensorflow.python.keras.models import Sequential
Corrected:
from keras.models import Sequential
I was also facing the same error, so I removed model.summary() and the issue was resolved, since it arises if summary() is called before the model is built.
Here is the link to the documentation, which states:
Raises:
ValueError: if `summary()` is called before the model is built.

How to implement Beholder (Tensorboard plugin) for Keras?

I am trying to implement the Beholder plugin from Tensorboard into a simple CNN code (I am a beginner at Tensorflow), but I am not sure where to put the visualizer.update(session=session).
At the beginning I have:
from tensorboard.plugins.beholder import Beholder
LOG_DIRECTORY='/tmp/tensorflow_logs'
visualizer = Beholder(logdir=LOG_DIRECTORY)
I train my model like this:
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(253,27,3)))
.
.
.
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
Where should I put the visualizer.update(session=session) and what else should I put in my code, as for now it says No Beholder data was found. Thank you!
It would be appropriate to create a custom Keras callback, so that you can call visualizer.update(session=session) at the end of each epoch (or whenever you want). Here is an example showing what such a callback could look like:
from tensorboard.plugins.beholder import Beholder
import tensorflow as tf
import keras.backend as K
import keras

LOG_DIRECTORY = '/tmp/tensorflow_logs'

class BeholderCallback(keras.callbacks.Callback):
    def __init__(self, tensor, logdir=LOG_DIRECTORY, sess=None):
        self.visualizer = Beholder(logdir=logdir)
        self.sess = sess
        if sess is None:
            self.sess = K.get_session()
        self.tensor = tensor

    def on_epoch_end(self, epoch, logs=None):
        frame = self.sess.run(self.tensor)  # depending on the tensor, this might require a feed_dict
        self.visualizer.update(
            session=self.sess,
            frame=frame
        )
Then, after defining your model, instantiate the callback and pass it to model.fit:
# Define your Keras model
# ...

# Prepare callback
sess = K.get_session()
beholder_callback = BeholderCallback(your_tensor, sess=sess)

# Fit data into model and pass callback to model.fit
model.fit(x=x_train,
          y=y_train,
          callbacks=[beholder_callback])
You could also use the arrays argument of visualizer.update in a similar way.
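For example, a variant of the callback above that visualizes the model's weights through the arrays argument might look like this (a sketch; it reuses the imports and LOG_DIRECTORY from the previous snippet, and self.model is filled in by Keras when the callback is passed to fit):
class BeholderWeightsCallback(keras.callbacks.Callback):
    def __init__(self, logdir=LOG_DIRECTORY, sess=None):
        self.visualizer = Beholder(logdir=logdir)
        self.sess = sess if sess is not None else K.get_session()

    def on_epoch_end(self, epoch, logs=None):
        # self.model is set by Keras once the callback is attached to fit
        self.visualizer.update(
            session=self.sess,
            arrays=[K.get_value(w) for w in self.model.weights]  # list of numpy arrays
        )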

How to save the best weights of the encoder part only during auto-encoder training?

I am using Keras with TensorFlow to implement a deep convolutional auto-encoder. The model is similar to:
input_data = Input(shape=(40, 500, 1))

# encoder
x = Conv2D(32, kernel_size=(3, 3), padding="same", activation='linear')(input_data)
encoded = Conv2D(15, kernel_size=(1, 2), strides=(1, 2), padding="same", activation='linear')(x)

# decoder
x = Conv2DTranspose(15, kernel_size=(1, 2), padding="same", activation='linear')(encoded)
x = Conv2DTranspose(32, kernel_size=(3, 3), padding="same", activation='linear')(x)
decoded = Conv2DTranspose(1, (3, 3), activation=activationfuntion, padding="same")(x)

autoencoder = Model(inputs=input_data, outputs=decoded)
encoder = Model(inputs=input_data, outputs=encoded)
In order to save the best model weights during training, I am using ModelCheckpoint:
autoencoder.compile(loss='mean_squared_error', optimizer='rmsprop');
checkpoint=ModelCheckpoint('bestweight.best.hdf5',monitor='val_loss',verbose=1,save_best_only=True,mode='min');
callbacks_list=[checkpoint]
history_info = autoencoder.fit(x_train, x_train,
                               batch_size=batch_size,
                               epochs=50,
                               validation_data=(x_validation, x_validation),
                               callbacks=callbacks_list,
                               shuffle=True)
and then later to test on the testdataset:
autoencoder.load_weights('bestweight.best.hdf5');
autoencoder.predict(test_data);
My question is:
I know how to save the best weights of the whole auto-encoder, but is there a way to save just the best training weights of the encoder part, so I can use it later for testing?
So I can use it in this way:
encoder.load_weights('encoderbestweight.best.hdf5');
encoder.predict(test_data);
Before trying to answer your question, I would like to make a quick remark about your use of the ModelCheckpoint callback. Let's have a look at the default parameters:
keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1)
The save_weights_only parameter's default value is False, which means what you are actually saving is not only the model's weights but the entire architecture! Thus, when loading your model you can either redefine the architecture and use load_weights, or directly load the model from the file using the load_model function.
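In other words, with the checkpoint file from the question you could later restore the auto-encoder in either of these two ways (a small sketch):
from keras.models import load_model

# Option 1: rebuild the architecture in code, then load only the weights
autoencoder.load_weights('bestweight.best.hdf5')

# Option 2: restore the whole model (architecture + weights) from the file
autoencoder = load_model('bestweight.best.hdf5')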
Now, to save only the encoder, I would write a new checkpoint callback, like this:
import numpy as np
from keras.callbacks import Callback

class CustomCheckpoint(Callback):
    def __init__(self, filepath, encoder):
        self.monitor = 'val_loss'
        self.monitor_op = np.less
        self.best = np.Inf
        self.filepath = filepath
        self.encoder = encoder

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get(self.monitor)
        if self.monitor_op(current, self.best):
            self.best = current
            # self.encoder.save_weights(self.filepath, overwrite=True)
            self.encoder.save(self.filepath, overwrite=True)  # Whichever you prefer
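You would then pass it to fit in place of (or alongside) the original ModelCheckpoint, e.g.:
checkpoint = CustomCheckpoint('encoderbestweight.best.hdf5', encoder)
autoencoder.fit(x_train, x_train,
                batch_size=batch_size,
                epochs=50,
                validation_data=(x_validation, x_validation),
                callbacks=[checkpoint],
                shuffle=True)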
As an alternative, since you already have the save file for the entire network, you can separate your encoder from the decoder like this:
from keras.models import load_model
autoencoder = load_model("path_to_file")
encoder = Model(autoencoder.layers[0].input, autoencoder.layers[2].output)  # layers[2] is the 'encoded' Conv2D in the model above
The encoder part consists of the first layers of the network, up to the encoded output. So after autoencoder.fit() try this:
encoder = Model(input_data, autoencoder.layers[2].output)
for more "https://www.kaggle.com/marlesson/autoencoder-embedding-for-food"
