Suppose I have a dataset of size 100k x 400. I created this model:
model = Sequential()
model.add(Dense(200, input_dim = 400, init = init_weights))
model.add(BatchNormalization())
model.add(SReLU())
model.add(Dropout(0.5))
model.add(Dense(200, init = init_weights))
model.add(BatchNormalization())
model.add(SReLU())
model.add(Dropout(0.5))
model.add(Dense(1, activation = 'linear', init = init_weights))
Then I call
model.compile(loss = ..
And
model.fit(input_matrix,..
After training I can call model.predict(.. for predictions.
What I would like to get is the prediction matrix from the model without the last linear layer.
So something like:
model.remove_last_layer
pred_matrix = model.predict(input_matrix)
where the output is a 100k x 200 array. How can I do this with Keras? Thanks a lot.
Thanks to the link to the docs, I found this:
layer_name = 'dropout_2'
intermediate_layer_model = Model(input = model.input, output = model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict(matrix_test)
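Note that in Keras 2 the keyword arguments are named inputs/outputs rather than input/output, so the equivalent would be:
intermediate_layer_model = Model(inputs=model.input, outputs=model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict(matrix_test)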
I have a large CSV file that I want to train a TensorFlow model on, so I am creating a PrefetchDataset object to read from the CSV file in epochs.
dataset = tf.data.experimental.make_csv_dataset("test.csv", batch_size = 2, label_name="next_movement", shuffle = False, num_epochs=1)
But when I try to train the model
X_dim = pd.read_csv("test.csv", nrows = 1).shape[1]
optimizer = Adam(learning_rate = 1e-4)
model = Sequential()
model.add(Dense(128, activation="relu", kernel_regularizer=regularizers.l2(.0001),
                input_shape=(X_dim,)))
#model.add(layers.Dropout(.5))
model.add(Dense(64, activation="relu", kernel_regularizer=regularizers.l2(.0001)))
#model.add(layers.Dropout(.5))
model.add(Dense(32, activation="relu",kernel_regularizer=regularizers.l2(.0001)))
#model.add(layers.Dropout(.5))
model.add(Dense(8, activation = "relu", kernel_regularizer=regularizers.l2(.0001)))
#model.add(layers.Dropout(.5))
model.add(Dense(1, activation="sigmoid", kernel_regularizer=regularizers.l2(.0001)))
#model = keras.Model(inputs, model(x))
model.compile(optimizer=optimizer,
              loss="binary_crossentropy",
              metrics = ['accuracy'])
model.fit(dataset)
I get this error:
ValueError: Missing data for input "dense_7_input". You passed a data dictionary with keys ['', 'bid_prc1', 'bid_vol1', 'ask_prc1', 'ask_vol1', 'bid_prc2'... ] Expected the following keys: ['dense_7_input']
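make_csv_dataset yields a dictionary of per-column tensors keyed by column name, while a Sequential model built on Dense layers expects a single feature tensor, which is why Keras complains about the missing key 'dense_7_input'. One possible fix (a sketch, assuming every feature column is numeric) is to pack the columns into one tensor and size the input layer by the number of feature columns rather than the full CSV column count:
def pack_features(features, label):
    # stack the per-column (batch_size,) tensors into one (batch_size, n_features) tensor
    return tf.stack(list(features.values()), axis=1), label

packed_dataset = dataset.map(pack_features)
# the label column is not part of the features, so the input dimension
# should be one less than the X_dim read from the CSV header above
model.fit(packed_dataset)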
For a chess engine I want to use two autoencoder models, which extract key features out of a chess position, concatenate them, and build a model on top to compare two chess positions.
My code looks like this so far:
enc1 = keras.models.load_model("autoencoder.h5")
enc2 = keras.models.load_model("autoencoder.h5")
encoder1 = Model(
    inputs=enc1.input,
    outputs=[enc1.get_layer(index=2).output,
             enc1.get_layer(index=4).output,
             enc1.get_layer(index=6).output,
             enc1.get_layer(index=7).output]
)
encoder1.trainable = False
encoder2 = Model(
    inputs=enc2.input,
    outputs=[enc2.get_layer(index=2).output,
             enc2.get_layer(index=4).output,
             enc2.get_layer(index=6).output,
             enc2.get_layer(index=7).output]
)
encoder2.trainable = False
model = Sequential()
model.add(concatenate([encoder1, encoder2]))
model.add(Dense(400, activation="relu", input_shape=(2,769,)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(200, activation='relu', kernel_regularizer=l2(), bias_regularizer=l2()))
model.add(Dropout(0.2))
model.add(Dense(100, activation='relu', kernel_regularizer=l2(), bias_regularizer=l2()))
model.add(Dropout(0.2))
model.add(Dense(2, activation='softmax'))
metric = tf.keras.metrics.CategoricalAccuracy()
model.compile(optimizer=Adam(learning_rate=0.001), loss="categorical_crossentropy", metrics=metric)
This is giving errors. How do I concatenate the outputs of these two autoencoders?
Thanks so much!
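One way to make this work is to stay entirely in the functional API and concatenate the encoders' output tensors rather than the model objects themselves. Below is a minimal sketch, assuming the encoder input shape is (769,) and that all four selected intermediate outputs are flat 2-D tensors (if some are higher-rank you would flatten them first); with a flat concatenation the extra Flatten layer is no longer needed:
from tensorflow.keras.layers import Input, Concatenate, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.regularizers import l2
from tensorflow.keras.metrics import CategoricalAccuracy

in1 = Input(shape=(769,))   # assumed encoder input shape
in2 = Input(shape=(769,))
feats1 = encoder1(in1)      # list of 4 tensors, one per selected layer
feats2 = encoder2(in2)
merged = Concatenate(axis=-1)(feats1 + feats2)
x = Dense(400, activation="relu")(merged)
x = Dropout(0.2)(x)
x = Dense(200, activation="relu", kernel_regularizer=l2(), bias_regularizer=l2())(x)
x = Dropout(0.2)(x)
x = Dense(100, activation="relu", kernel_regularizer=l2(), bias_regularizer=l2())(x)
x = Dropout(0.2)(x)
out = Dense(2, activation="softmax")(x)
model = Model(inputs=[in1, in2], outputs=out)
model.compile(optimizer=Adam(learning_rate=0.001),
              loss="categorical_crossentropy",
              metrics=[CategoricalAccuracy()])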
I am trying to add an autoencoder layer to an LSTM neural network. The input data is a pandas DataFrame with numerical features.
To do this task, I am using Keras and Python. My current code is given below.
I cannot compile the model because I seem to be mixing Keras and TensorFlow:
TypeError: The added layer must be an instance of class Layer. Found: Tensor("input_2:0", shape=(?, 22), dtype=float32)
I am quite new to both packages, and I'd appreciate it if somebody could tell me how to fix this error.
nb_features = X_train.shape[2]
hidden_neurons = nb_classes*3
timestamps = X_train.shape[1]
NUM_CLASSES = 3
BATCH_SIZE = 32
input_size = len(col_names)
hidden_size = int(input_size/2)
code_size = int(input_size/4)
model = Sequential()
model.add(LSTM(units=hidden_neurons,
               return_sequences=True,
               input_shape=(timestamps, nb_features),
               dropout=0.15,
               recurrent_dropout=0.20))
input_vec = Input(shape=(input_size,))
# Encoder
hidden_1 = Dense(hidden_size, activation='relu')(input_vec)
code = Dense(code_size, activation='relu')(hidden_1)
# Decoder
hidden_2 = Dense(hidden_size, activation='relu')(code)
output_vec = Dense(input_size, activation='relu')(hidden_2)
model.add(input_vec)
model.add(hidden_1)
model.add(code)
model.add(hidden_2)
model.add(output_vec)
model.add(Dense(units=100, kernel_initializer='normal'))
model.add(LeakyReLU(alpha=0.5))
model.add(Dropout(0.20))
model.add(Dense(units=200, kernel_initializer='normal', activation='relu'))
model.add(Flatten())
model.add(Dense(units=200, kernel_initializer='uniform', activation='relu'))
model.add(Dropout(0.10))
model.add(Dense(units=NUM_CLASSES, activation='softmax'))
model.compile(loss="categorical_crossentropy",
              metrics=["accuracy"],
              optimizer='adam')
The issue is that you are mixing Keras' Sequential API with its functional API. To fix it, you need to replace:
input_vec = Input(shape=(input_size,))
# Encoder
hidden_1 = Dense(hidden_size, activation='relu')(input_vec)
code = Dense(code_size, activation='relu')(hidden_1)
# Decoder
hidden_2 = Dense(hidden_size, activation='relu')(code)
output_vec = Dense(input_size, activation='relu')(hidden_2)
With:
# Encoder
model.add(Dense(hidden_size, activation='relu'))
model.add(Dense(code_size, activation='relu'))
# Decoder
model.add(Dense(hidden_size, activation='relu'))
model.add(Dense(input_size, activation='relu'))
Or convert everything to the functional API.
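For the second option, here is a minimal sketch of the dense autoencoder part in the functional API (the LSTM branch is left out, since its 3-D output would need separate handling before it could feed these Dense layers; the mean-squared-error loss is just a typical choice for an autoencoder):
from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(input_size,))
# Encoder
hidden_1 = Dense(hidden_size, activation='relu')(inputs)
code = Dense(code_size, activation='relu')(hidden_1)
# Decoder
hidden_2 = Dense(hidden_size, activation='relu')(code)
outputs = Dense(input_size, activation='relu')(hidden_2)

autoencoder = Model(inputs=inputs, outputs=outputs)
autoencoder.compile(optimizer='adam', loss='mse')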
I am using Python 3.x and TensorFlow 2.0 along with the "tensorflow_model_optimization" package for neural network pruning. The code I have is as follows:
from tensorflow_model_optimization.sparsity import keras as sparsity
l = tf.keras.layers
# Original model without pruning-
model = Sequential()
model.add(l.InputLayer(input_shape = (784, )))
model.add(Flatten())
model.add(Dense(units = 300, activation='relu', kernel_initializer = tf.initializers.GlorotUniform()))
model.add(l.Dropout(0.2))
model.add(Dense(units = 100, activation='relu', kernel_initializer = tf.initializers.GlorotUniform()))
model.add(l.Dropout(0.1))
model.add(Dense(units = num_classes, activation='softmax'))
# Define callbacks-
callbacks = [
    # tf.keras.callbacks.TensorBoard(log_dir=logdir, profile_batch = 0),
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience = 3)
]
# Compile designed Neural Network-
model.compile(
    loss = tf.keras.losses.categorical_crossentropy,
    optimizer = 'adam',
    metrics = ['accuracy'])
# Save untrained and initial weights to disk-
model.save_weights("Initial_non_trained_weights.h5")
epochs = 12
num_train_samples = X_train.shape[0]
end_step = np.ceil(1.0 * num_train_samples / batch_size).astype(np.int32) * epochs
print("end_step parameter for this dataset = {0}".format(end_step))
# end_step = 5628
# Specify the parameters to be used for layer-wise pruning:
pruning_params = {
    'pruning_schedule': sparsity.PolynomialDecay(
        initial_sparsity=0.50, final_sparsity=0.90,
        begin_step=2000, end_step=end_step, frequency=100)
}
# Neural network which is to be pruned-
pruned_model = Sequential()
pruned_model.add(l.InputLayer(input_shape=(784, )))
pruned_model.add(Flatten())
pruned_model.add(sparsity.prune_low_magnitude(
    Dense(units = 300, activation='relu', kernel_initializer=tf.initializers.GlorotUniform()),
    **pruning_params))
pruned_model.add(l.Dropout(0.2))
pruned_model.add(sparsity.prune_low_magnitude(
    Dense(units = 100, activation='relu', kernel_initializer=tf.initializers.GlorotUniform()),
    **pruning_params))
pruned_model.add(l.Dropout(0.1))
pruned_model.add(sparsity.prune_low_magnitude(
    Dense(units = num_classes, activation='softmax'), **pruning_params))
# Compile pruned CNN-
pruned_model.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    optimizer='adam',
    metrics=['accuracy'])
# Load weights from before-
pruned_model.load_weights("Initial_non_trained_weights.h5")
This last line, which loads the initial weights into the pruned model, gives me the following error:
ValueError: Layer #0 (named "prune_low_magnitude_dense_9" in the current model) was found to correspond to layer dense in the save file.
However the new layer prune_low_magnitude_dense_9 expects 5 weights, but the saved weights have 2 elements.
What's going wrong?
Thanks!
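The mismatch arises because prune_low_magnitude wraps each Dense layer and adds pruning variables (mask, threshold, pruning step) on top of the kernel and bias, so the wrapped layers no longer line up with the plain Dense layers stored in the HDF5 file. One possible workaround, sketched here under the assumption that applying the same pruning schedule to the whole model is acceptable, is to load the saved weights into the original, un-pruned model first and then wrap that model:
# Restore the saved weights into the un-pruned model...
model.load_weights("Initial_non_trained_weights.h5")
# ...then wrap the whole model for pruning and compile it again.
pruned_model = sparsity.prune_low_magnitude(model, **pruning_params)
pruned_model.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    optimizer='adam',
    metrics=['accuracy'])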
print("Building model...")
ques1_enc = Sequential()
ques1_enc.add(Embedding(output_dim=64, input_dim=vocab_size, weights=[embedding_weights], mask_zero=True))
ques1_enc.add(LSTM(100, input_shape=(64, seq_maxlen), return_sequences=False))
ques1_enc.add(Dropout(0.3))
ques2_enc = Sequential()
ques2_enc.add(Embedding(output_dim=64, input_dim=vocab_size, weights=[embedding_weights], mask_zero=True))
ques2_enc.add(LSTM(100, input_shape=(64, seq_maxlen), return_sequences=False))
ques2_enc.add(Dropout(0.3))
model = Sequential()
model.add(Merge([ques1_enc, ques2_enc], mode="sum"))
model.add(Dense(2, activation="softmax"))
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
print("Building model costs:", time.time() - start)
print("Training...")
checkpoint = ModelCheckpoint(filepath=os.path.join("C:/Users/", "quora_dul_best_lstm.hdf5"), verbose=1, save_best_only=True)
model.fit([x_ques1train, x_ques2train], ytrain, batch_size=32, epochs=1, validation_split=0.1, verbose=2, callbacks=[checkpoint])
print("Training neural network costs:", time.time() - start)
I want to convert the above code to the functional API in Keras, since the Merge() function is not supported in the Sequential API. I have been trying for a long time but keep getting a few errors. Here are the details of the attributes:
ques_pairs contains the preprocessed data,
word2index contains the word count,
seq_maxlen contains the maximum length of question one or two.
I am trying to implement this model on the Quora Question Pairs dataset: https://www.kaggle.com/c/quora-question-pairs
I will give you a small example, that you can apply to your own model:
from keras.layers import Input, Dense, Add
from keras.models import Model
input1 = Input(shape=(16,))
output1 = Dense(8, activation='relu')(input1)
output1 = Dense(4, activation='relu')(output1) # Add as many layers as you like like this
input2 = Input(shape=(16,))
output2 = Dense(8, activation='relu')(input2)
output2 = Dense(4, activation='relu')(output2) # Add as many layers as you like like this
output_full = Add()([output1, output2])
output_full = Dense(1, activation='sigmoid')(output_full) # Add as many layers as you like like this
model_full = Model(inputs=[input1, input2], outputs=output_full)
You need to define an Input for each of your model parts first, then add layers (as shown in the code) to both branches. Then you can merge them using the Add layer. Finally, you call Model with a list of the input layers and the output layer.
model_full can then be compiled and trained like any other model.
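For instance, with x1_train, x2_train and y_train standing in for your own arrays:
model_full.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model_full.fit([x1_train, x2_train], y_train, batch_size=32, epochs=5)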
Are you trying to achieve something like the following?
import numpy as np
from keras.layers import *
from keras.models import Sequential, Model
vocab_size = 1000
seq_maxlen = 32
embedding_weights = np.zeros((vocab_size, 64))
print("Building model...")
ques1_enc = Sequential()
ques1_enc.add(Embedding(output_dim=64, input_dim=vocab_size, weights=[embedding_weights], mask_zero=True))
ques1_enc.add(LSTM(100, input_shape=(64, seq_maxlen), return_sequences=False))
ques1_enc.add(Dropout(0.3))
ques2_enc = Sequential()
ques2_enc.add(Embedding(output_dim=64, input_dim=vocab_size, weights=[embedding_weights], mask_zero=True))
ques2_enc.add(LSTM(100, input_shape=(64, seq_maxlen), return_sequences=False))
ques2_enc.add(Dropout(0.3))
merge = Concatenate(axis=1)([ques1_enc.output, ques2_enc.output])
output = Dense(2, activation="softmax")(merge)
model = Model([ques1_enc.input, ques2_enc.input], output)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()