I have this autoencoder:
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

input_dim = Input(shape=(10,))
encoded1 = Dense(30, activation='relu')(input_dim)
encoded2 = Dense(20, activation='relu')(encoded1)
encoded3 = Dense(10, activation='relu')(encoded2)
encoded4 = Dense(6, activation='relu')(encoded3)
decoded1 = Dense(10, activation='relu')(encoded4)
decoded2 = Dense(20, activation='relu')(decoded1)
decoded3 = Dense(30, activation='relu')(decoded2)
decoded4 = Dense(ncol, activation='sigmoid')(decoded3)  # ncol: number of input columns (10 here, to match the input shape)
autoencoder = Model(inputs=input_dim, outputs=decoded4)
autoencoder.compile(...)
autoencoder.fit(...)
Now I would like to print or save the features generated by encoded4.
Basically, starting from a huge dataset, I would like to extract the features generated by the autoencoder after training, in order to obtain a reduced representation of my dataset.
Could you help me?
You can do it by creating the "encoder" model:
encoder = Model(inputs=input_dim, outputs=encoded4)
This will reuse the same layer instances that you trained in the autoencoder, and it will produce the features when you use it in "inference mode" via encoder.predict().
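For example, a minimal sketch of extracting and saving the reduced representation (assuming data is your feature matrix):

import numpy as np

encoded_data = encoder.predict(data)           # shape: (n_samples, 6)
np.save('encoded_features.npy', encoded_data)  # save the 6-dimensional codes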
I hope this helps :)
So, basically, by creating an encoder like this:
encoder = Model(input_dim, encoded4)
encoded_input = Input(shape=(6,))  # only needed if you also build a standalone decoder
and then using:
encoded_data = encoder.predict(data)
where data inside the predict call is the dataset, the output generated by
print(encoded_data)
is the reduced representation of my dataset.
Is that right?
Thanks
I have 5 LSTM layers and 2 MLPs which must be concatenated together into another MLP that produces the final output. Here is the code I wrote using the functional API approach, which works fine:
from tensorflow.keras.layers import Input, LSTM, Dropout, Dense, Concatenate
from tensorflow.keras.models import Model

lstm_input = Input(shape=(X_dynamic_LSTM.shape[1], X_dynamic_LSTM.shape[2]))
x = LSTM(70, activation='tanh', return_sequences=True)(lstm_input)
x = Dropout(0.3)(x)
x = Dense(1, activation='tanh')(x)
mlp_input = Input(shape=(X_static_MLP.shape[1],))
mlp = Dense(30, activation='relu')(mlp_input)
mlp = Dense(10, activation='relu')(mlp)
merge = Concatenate()([x, mlp])
hidden1 = Dense(5, activation='relu')(merge)
mlp_out = Dense(1, activation='relu')(hidden1)
model = Model(inputs=[lstm_input, mlp_input], outputs=mlp_out)
model.compile(loss='mae', optimizer='Adam')
history = model.fit([X_dynamic_LSTM, X_static_MLP], y_train, batch_size=20,
                    epochs=10, validation_split=0.2)
If I want to convert this to the Sequential format, similar to the snippet below:
x = Sequential()
x.add(LSTM(70, return_sequences=True))
x.add(Dropout(0.3))
x.add(Dense(1, activation='tanh'))
Can anyone show me how I should define the MLP, the Concatenate, and the part corresponding to model = Model(inputs=[lstm_input, mlp_input], outputs=mlp_out)?
My main problem is that I want to add an Embedding layer to the LSTM. When I add the following code to the non-API (Sequential) approach, the model works perfectly:
x.add(Embedding(X_dynamic_LSTM.shape[0], 1, mask_zero=True))
But when I instead used
lstm_input = Embedding(X_dynamic_LSTM.shape[0], 1, mask_zero=True)
it gave me the error TypeError: Inputs to a layer should be tensors, so I have had to stick with the non-API approach.
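Note: that error occurs because an Embedding layer instance is not itself a tensor; in the functional API every layer must be called on a tensor. A minimal sketch of a functional-API version (the shapes are reused from the snippets above purely as assumptions):

from tensorflow.keras.layers import Input, Embedding, LSTM, Dropout, Dense

lstm_input = Input(shape=(X_dynamic_LSTM.shape[1],))                   # integer token ids
x = Embedding(X_dynamic_LSTM.shape[0], 1, mask_zero=True)(lstm_input)  # call the layer on the tensor
x = LSTM(70, activation='tanh', return_sequences=True)(x)
x = Dropout(0.3)(x)
x = Dense(1, activation='tanh')(x)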
I am trying to build a standard classification model on a tabular dataset. I came across the Attention layer and would like to use it to improve my model's accuracy.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.losses import binary_crossentropy
from tensorflow.keras.optimizers import Adam

input_features_size = X_train.shape[1]

layers = [
    tf.keras.Input(shape=(input_features_size,)),
    tf.keras.layers.Dense(64, activation='relu', name='first_layer'),
    tf.keras.layers.Dense(128, activation='relu', name='second_layer'),
    tf.keras.layers.BatchNormalization(axis=1),
    tf.keras.layers.Dense(1, activation='sigmoid', name='output_layer')
]

metrics = [
    tf.keras.metrics.BinaryAccuracy(name='accuracy'),
    tf.keras.metrics.Precision(name='precision'),
    tf.keras.metrics.Recall(name='recall')
]

NUM_EPOCHS = 20

deep_learning_model = Sequential(layers=layers, name='DL_Classifier')
deep_learning_model.compile(
    loss=binary_crossentropy,
    optimizer=Adam(learning_rate=1e-4),
    metrics=metrics
)
I tried adding an Attention layer (tf.keras.layers.Attention()) to the layers list, but I am making a mistake somewhere. I am getting this error: Attention layer must be called on a list of inputs, namely [query, value].
How do I add an Attention layer?
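One possible sketch: tf.keras.layers.Attention() must be called on a list [query, value] of 3-D tensors, so it cannot sit in a Sequential layer list. The functional API below and the Reshape into a length-64 "sequence" are my assumptions about how self-attention could be applied to tabular features:

import tensorflow as tf

inputs = tf.keras.Input(shape=(input_features_size,))
x = tf.keras.layers.Dense(64, activation='relu', name='first_layer')(inputs)
x = tf.keras.layers.Reshape((64, 1))(x)        # Attention expects 3-D (batch, steps, features)
x = tf.keras.layers.Attention()([x, x])        # query = value -> self-attention
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(128, activation='relu', name='second_layer')(x)
outputs = tf.keras.layers.Dense(1, activation='sigmoid', name='output_layer')(x)
deep_learning_model = tf.keras.Model(inputs, outputs, name='DL_Classifier')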
I have implemented a CNN with two output layers for the GTSRB dataset problem. One output layer classifies images into their respective classes and the second predicts bounding-box coordinates. In the dataset, the upper-left and lower-right coordinates are provided for training images, and we have to predict the same for the test images. How do I define the loss metric (MSE or another) and the performance metric (R-squared or another) for the regression layer, given that it outputs 4 values (x and y coordinates for the upper-left and lower-right points)? Below is the code of the model.
from tensorflow.keras.layers import Input, Conv2D, MaxPool2D, Flatten, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.activations import relu
from tensorflow.keras.initializers import he_normal, zeros
from tensorflow.keras.regularizers import l2

def get_model():
    # Input layer
    input_layer = Input(shape=(IMG_HEIGHT, IMG_WIDTH, N_CHANNELS), name="input_layer", dtype='float32')
    # Convolution, maxpool and dropout layers
    conv_1 = Conv2D(filters=8, kernel_size=(3,3), activation=relu,
                    kernel_initializer=he_normal(seed=54), bias_initializer=zeros(),
                    name="first_convolutional_layer")(input_layer)
    maxpool_1 = MaxPool2D(pool_size=(2,2), name="first_maxpool_layer")(conv_1)
    # Fully connected layers
    flat = Flatten(name="flatten_layer")(maxpool_1)
    d1 = Dense(units=64, activation=relu, kernel_initializer=he_normal(seed=45),
               bias_initializer=zeros(), name="first_dense_layer", kernel_regularizer=l2(0.001))(flat)
    d2 = Dense(units=32, activation=relu, kernel_initializer=he_normal(seed=47),
               bias_initializer=zeros(), name="second_dense_layer", kernel_regularizer=l2(0.001))(d1)
    classification = Dense(units=43, activation=None, name="classification")(d2)
    regression = Dense(units=4, activation='linear', name="regression")(d2)
    # Model
    model = Model(inputs=input_layer, outputs=[classification, regression])
    model.summary()
    return model
For the classification output, you need to use softmax:
classification = Dense(units = 43, activation='softmax', name="classification")(d2)
You should use categorical_crossentropy loss for the classification output.
For regression, you can use mse loss.
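A minimal sketch of wiring the two losses (and one example metric per output) by output name, matching the layer names in the model above; the optimizer choice is an assumption:

model = get_model()
model.compile(
    optimizer='adam',
    loss={'classification': 'categorical_crossentropy', 'regression': 'mse'},
    metrics={'classification': 'accuracy', 'regression': 'mse'}
)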
I am building a basic seq2seq autoencoder, but I'm not sure if I'm doing it correctly.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

model = Sequential()
# Encoder
model.add(LSTM(32, activation='relu', input_shape=(timesteps, n_features), return_sequences=True))
model.add(LSTM(16, activation='relu', return_sequences=False))
model.add(RepeatVector(timesteps))
# Decoder
model.add(LSTM(16, activation='relu', return_sequences=True))
model.add(LSTM(32, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(n_features)))
The model is then fit using a batch size parameter:
model.fit(data, data,
          epochs=30,
          batch_size=32)
The model is compiled with the mse loss function and seems to learn.
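For reference, that compile step might look like this (the optimizer choice is an assumption):

model.compile(optimizer='adam', loss='mse')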
To get the encoder output for the test data, I am using a K function:
from tensorflow.keras import backend as K

get_encoder_output = K.function([model.layers[0].input],
                                [model.layers[1].output])
encoder_output = get_encoder_output([test_data])[0]
My first question is whether the model is specified correctly; in particular, whether the RepeatVector layer is needed. I'm not sure what it is doing. What if I omit it and instead specify the preceding layer with return_sequences=True?
My second question is whether I need to tell get_encoder_output about the batch_size used in training.
Thanks in advance for any help on either question.
This might prove useful to you:
As a toy problem I created a seq2seq model for predicting the continuation of different sine waves.
This was the model:
from tensorflow.keras.layers import Input, LSTM, Dense, Lambda
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K

def create_seq2seq():
    features_num = 5
    latent_dim = 40
    ##
    encoder_inputs = Input(shape=(None, features_num))
    encoded = LSTM(latent_dim, return_state=False, return_sequences=True)(encoder_inputs)
    encoded = LSTM(latent_dim, return_state=False, return_sequences=True)(encoded)
    encoded = LSTM(latent_dim, return_state=False, return_sequences=True)(encoded)
    encoded = LSTM(latent_dim, return_state=True)(encoded)
    encoder = Model(inputs=encoder_inputs, outputs=encoded)
    ##
    encoder_outputs, state_h, state_c = encoder(encoder_inputs)
    encoder_states = [state_h, state_c]
    decoder_inputs = Input(shape=(1, features_num))
    decoder_lstm_1 = LSTM(latent_dim, return_sequences=True, return_state=True)
    decoder_lstm_2 = LSTM(latent_dim, return_sequences=True, return_state=True)
    decoder_lstm_3 = LSTM(latent_dim, return_sequences=True, return_state=True)
    decoder_lstm_4 = LSTM(latent_dim, return_sequences=True, return_state=True)
    decoder_dense = Dense(features_num)
    all_outputs = []
    inputs = decoder_inputs
    states_1 = encoder_states
    # Placeholder values:
    states_2 = states_1; states_3 = states_1; states_4 = states_1
    ###
    for _ in range(1):
        # Run the decoder on the first timestep
        outputs_1, state_h_1, state_c_1 = decoder_lstm_1(inputs, initial_state=states_1)
        outputs_2, state_h_2, state_c_2 = decoder_lstm_2(outputs_1)
        outputs_3, state_h_3, state_c_3 = decoder_lstm_3(outputs_2)
        outputs_4, state_h_4, state_c_4 = decoder_lstm_4(outputs_3)
        # Store the current prediction (we will concatenate all predictions later)
        outputs = decoder_dense(outputs_4)
        all_outputs.append(outputs)
        # Reinject the outputs as inputs for the next loop iteration
        # as well as update the states
        inputs = outputs
        states_1 = [state_h_1, state_c_1]
        states_2 = [state_h_2, state_c_2]
        states_3 = [state_h_3, state_c_3]
        states_4 = [state_h_4, state_c_4]
    for _ in range(149):
        # Run the decoder on each timestep
        outputs_1, state_h_1, state_c_1 = decoder_lstm_1(inputs, initial_state=states_1)
        outputs_2, state_h_2, state_c_2 = decoder_lstm_2(outputs_1, initial_state=states_2)
        outputs_3, state_h_3, state_c_3 = decoder_lstm_3(outputs_2, initial_state=states_3)
        outputs_4, state_h_4, state_c_4 = decoder_lstm_4(outputs_3, initial_state=states_4)
        # Store the current prediction (we will concatenate all predictions later)
        outputs = decoder_dense(outputs_4)
        all_outputs.append(outputs)
        # Reinject the outputs as inputs for the next loop iteration
        # as well as update the states
        inputs = outputs
        states_1 = [state_h_1, state_c_1]
        states_2 = [state_h_2, state_c_2]
        states_3 = [state_h_3, state_c_3]
        states_4 = [state_h_4, state_c_4]
    # Concatenate all predictions
    decoder_outputs = Lambda(lambda x: K.concatenate(x, axis=1))(all_outputs)
    model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
    #model = load_model('pre_model.h5')
    print(model.summary())
    return model
The best way, in my opinion, to implement a seq2seq LSTM in Keras is by using 2 LSTM models and having the first one transfer its states to the second one.
Your last LSTM layer in the encoder will need return_state=True, return_sequences=False so it will pass on its h and c states.
You will then need to set up an LSTM decoder that receives these as its initial_state.
For the decoder input you will most likely want a "start of sequence" token as the first timestep input, and afterwards use the decoder output of the nth timestep as the input of the decoder at the (n+1)th timestep.
After you have mastered this, have a look at Teacher Forcing.
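A minimal sketch of that encoder-to-decoder state handoff (the dimensions are illustrative assumptions, and the output-reinjection loop is omitted):

from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

features_num, latent_dim = 5, 40
encoder_inputs = Input(shape=(None, features_num))
_, state_h, state_c = LSTM(latent_dim, return_state=True, return_sequences=False)(encoder_inputs)

decoder_inputs = Input(shape=(None, features_num))  # first timestep: a "start of sequence" token
decoder_outputs, _, _ = LSTM(latent_dim, return_sequences=True, return_state=True)(
    decoder_inputs, initial_state=[state_h, state_c])  # decoder starts from the encoder's h and c
decoder_outputs = Dense(features_num)(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)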
I'm using a pre-trained ResNet-50 model and want to feed the outputs of the penultimate layer to an LSTM network. Here is my sample code containing only the CNN (ResNet-50):
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

N = NUMBER_OF_CLASSES
# img_size = (224, 224, 3) ... same as that of ImageNet
base_model = ResNet50(include_top=False, weights='imagenet', pooling=None)
x = base_model.output
x = GlobalAveragePooling2D()(x)
predictions = Dense(1024, activation='relu')(x)
model = Model(inputs=base_model.input, outputs=predictions)
Next, I want to feed it to an LSTM network, as follows:
final_model = Sequential()
final_model.add(model)
final_model.add(LSTM(64, return_sequences=True, stateful=True))
final_model.add(Dense(N, activation='softmax'))
But I'm confused about how to reshape the CNN output for the LSTM input. My original input to the CNN is (224, 224, 3).
Also, should I use TimeDistributed?
Any kind of help is appreciated.
Adding an LSTM after a CNN does not make a lot of sense, as an LSTM is mostly used for temporal/sequence information, whereas your data seems to be only spatial. However, if you would still like to use it, just use:
x = Reshape((1024,1))(x)
This converts it to a sequence of 1024 timesteps with 1 feature each.
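A sketch of how that could slot into the model from the question (dropping stateful for simplicity is my assumption; N is the number of classes, as in the question):

from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Reshape, LSTM
from tensorflow.keras.models import Model

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
x = Reshape((1024, 1))(x)      # sequence of 1024 timesteps, 1 feature each
x = LSTM(64)(x)
predictions = Dense(N, activation='softmax')(x)
final_model = Model(inputs=base_model.input, outputs=predictions)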
If you are dealing with spatio-temporal data, use TimeDistributed on the ResNet layer, and then you can use ConvLSTM2D.
An example of using a pretrained network with an LSTM:
from tensorflow.keras.layers import Input, TimeDistributed, Flatten, LSTM
from tensorflow.keras.applications import VGG16

inputs = Input(shape=(config.N_FRAMES_IN_SEQUENCE, config.IMAGE_H, config.IMAGE_W, config.N_CHANNELS))
cnn = VGG16(include_top=False, weights='imagenet', input_shape=(config.IMAGE_H, config.IMAGE_W, config.N_CHANNELS))
x = TimeDistributed(cnn)(inputs)
x = TimeDistributed(Flatten())(x)
x = LSTM(256)(x)
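To complete the sketch, a hypothetical classification head could follow (N_CLASSES is an assumed name, not from the original snippet):

from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

outputs = Dense(N_CLASSES, activation='softmax')(x)  # N_CLASSES: hypothetical number of classes
model = Model(inputs, outputs)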