I was using GloVe with my model and it worked, but I have now changed to ELMo (using the Keras code available on GitHub: ELMo Keras GitHub, utils.py). However, when I print model.summary() I get 0 parameters in the ELMo embedding layer, unlike when I was using GloVe. Is that normal? If not, can you please tell me what I am doing wrong?
With GloVe I got over 20 million parameters.
## --------> When I was using the GloVe embedding layer
word_embedding_layer = emb.get_keras_embedding(
    # dropout=emb_dropout,
    trainable=True,
    input_length=sent_maxlen,
    name='word_embedding_layer')

## --------> Deep layers
pos_embedding_layer = Embedding(output_dim=pos_tag_embedding_size,  # 5
                                input_dim=len(SPACY_POS_TAGS),
                                input_length=sent_maxlen,  # 20
                                name='pos_embedding_layer')
latent_layers = stack_latent_layers(num_of_latent_layers)
## --------> 6] Dropout
dropout = Dropout(0.1)

## --------> 7] Prediction
predict_layer = predict_classes()

## --------> 8] Prepare input features, and indicate how to embed them
inputs = [Input((sent_maxlen,), dtype='int32', name='word_inputs'),
          Input((sent_maxlen,), dtype='int32', name='predicate_inputs'),
          Input((sent_maxlen,), dtype='int32', name='postags_inputs')]
## --------> 9] ELMo Embedding and Concat all inputs and run on deep network
from elmo import ELMoEmbedding
import utils
idx2word = utils.get_idx2word()
ELmoembedding1 = ELMoEmbedding(idx2word=idx2word, output_mode="elmo", trainable=True)(inputs[0]) # These two are interchangeable
ELmoembedding2 = ELMoEmbedding(idx2word=idx2word, output_mode="elmo", trainable=True)(inputs[1]) # These two are interchangeable
embeddings = [ELmoembedding1,
              ELmoembedding2,
              pos_embedding_layer(inputs[2])]  # inputs has three elements, so the POS input is inputs[2], not inputs[3]
con1 = keras.layers.concatenate(embeddings)
## --------> 10] Build model
outputI = predict_layer(dropout(latent_layers(con1)))
model = Model(inputs, outputI)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])
model.summary()
Trials:
Note: I tried using the TF-Hub ELMo with Keras, but the output was always a 2D tensor (even when I changed it to the 'elmo' setting and used 'LSTM' instead of the default), so I could not concatenate it with pos_embedding_layer. I tried reshaping, but eventually got the same issue: 0 total parameters.
From the TF-Hub description (https://tfhub.dev/google/elmo/2), the embeddings of individual words are not trainable; only the weighted sum of the embedding and LSTM layers is. So you should get 4 trainable parameters at the ELMo level.
I was able to get the trainable parameters using the class defined in StrongIO's example on GitHub. That example only provides a class whose output is the default layer: a single 1024-dimensional vector per input example (essentially a document/sentence encoder). To access the embeddings of each word (the 'elmo' output), a few changes are needed, as suggested in this issue:
import tensorflow as tf
import tensorflow_hub as hub
from keras import backend as K
from keras.engine import Layer

class ElmoEmbeddingLayer(Layer):
    def __init__(self, **kwargs):
        self.dimensions = 1024
        self.trainable = True
        super(ElmoEmbeddingLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Instantiate the TF-Hub module inside build() so its variables are
        # created under this layer's name scope.
        self.elmo = hub.Module('https://tfhub.dev/google/elmo/2',
                               trainable=self.trainable,
                               name="{}_module".format(self.name))
        # Register the module's trainable variables (the 4 scalar mixing
        # weights) with Keras so they appear in model.summary().
        self.trainable_weights += K.tf.trainable_variables(
            scope="^{}_module/.*".format(self.name))
        super(ElmoEmbeddingLayer, self).build(input_shape)

    def call(self, x, mask=None):
        # The 'default' signature takes one untokenized sentence string per
        # example; the 'elmo' key returns one 1024-dim vector per word.
        result = self.elmo(
            K.squeeze(K.cast(x, tf.string), axis=1),
            as_dict=True,
            signature='default',
        )['elmo']
        return result

    def compute_output_shape(self, input_shape):
        return (input_shape[0], None, self.dimensions)
You can stack the ElmoEmbeddingLayer with the POS layer.
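For instance, a minimal sketch of that stacking (hedged: sent_maxlen, pos_tag_embedding_size, and SPACY_POS_TAGS come from the question above, and the padded sentence strings are assumed to tokenize to exactly sent_maxlen words so the two time axes line up):

from keras.layers import Input, Embedding, concatenate

word_inputs = Input(shape=(1,), dtype='string', name='word_inputs')  # one sentence string per example
pos_inputs = Input(shape=(sent_maxlen,), dtype='int32', name='postags_inputs')

elmo_emb = ElmoEmbeddingLayer()(word_inputs)                 # (batch, sent_maxlen, 1024)
pos_emb = Embedding(output_dim=pos_tag_embedding_size,
                    input_dim=len(SPACY_POS_TAGS),
                    input_length=sent_maxlen,
                    name='pos_embedding_layer')(pos_inputs)  # (batch, sent_maxlen, 5)

merged = concatenate([elmo_emb, pos_emb])                    # (batch, sent_maxlen, 1029)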
As a more general example, one can use the ELMo embeddings in a 1D ConvNet model for classification:
elmo_input_layer = Input(shape=(None,), dtype="string")
elmo_output_layer = ElmoEmbeddingLayer()(elmo_input_layer)
conv_layer = Conv1D(filters=100,
                    kernel_size=3,
                    padding='valid',
                    activation='relu',
                    strides=1)(elmo_output_layer)
pool_layer = GlobalMaxPooling1D()(conv_layer)
dense_layer = Dense(32)(pool_layer)
output_layer = Dense(1, activation='sigmoid')(dense_layer)
model = Model(inputs=elmo_input_layer, outputs=output_layer)
model.summary()
The model summary looks like this:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_62 (InputLayer)        (None, None)              0
_________________________________________________________________
elmo_embedding_layer_13 (Elm (None, None, 1024)        4
_________________________________________________________________
conv1d_46 (Conv1D)           (None, None, 100)         307300
_________________________________________________________________
global_max_pooling1d_42 (Glo (None, 100)               0
_________________________________________________________________
dense_53 (Dense)             (None, 32)                3232
_________________________________________________________________
dense_54 (Dense)             (None, 1)                 33
=================================================================
Total params: 310,569
Trainable params: 310,569
Non-trainable params: 0
_________________________________________________________________
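To train such a model, the inputs are plain sentence strings. A hypothetical preparation step (texts and labels are assumed lists; the (n, 1) shape matches the squeeze over axis 1 inside the layer):

import numpy as np

train_x = np.array(texts, dtype=object)[:, np.newaxis]  # shape (n, 1) of strings
train_y = np.array(labels)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(train_x, train_y, batch_size=32, epochs=2)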
Related
I have used code that trains a ResNet model as one functional (nested) layer:
base_model = tf.keras.applications.ResNet50(include_top=False, weights=None, input_shape=(224, 224, 3))
base_model.trainable = True
inputs = Input((224, 224, 3))
h = base_model(inputs, training=True)
model = Model(inputs, projection_3)  # projection_3: a projection head defined elsewhere (not shown)
When you call summary():
Layer (type)                 Output Shape              Param #
=================================================================
input_image (InputLayer)     [(None, 256, 256, 3)]     0
resnet50 (Functional)        (None, 8, 8, 2048)        23587712
=================================================================
Now I need to load those weights into a ResNet built of many layers:
Resmodel = tf.keras.applications.ResNet50(input_tensor=inputs, weights=None, include_top=False)
However, when loading the weights with
Resmodel.load_weights(filename)
I get:
ValueError: Layer count mismatch when loading weights from file. Model expected 106 layers, found 4 saved layers.
It's the same model, only one is functional (the whole model as one layer) and the other is split into many layers. How do I transfer the weights between them?
Try saving the nested ResNet model on its own:
model_n = model.layers[1]      # layers[0] is the InputLayer; layers[1] is the nested ResNet
model_n.save("new_model.h5")
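A hedged follow-up (the filename is just the one used above): new_model.h5 now holds the ResNet saved layer by layer, so its weights can be loaded straight into the many-layer model:

inputs = Input((224, 224, 3))
Resmodel = tf.keras.applications.ResNet50(input_tensor=inputs, weights=None, include_top=False)
Resmodel.load_weights("new_model.h5")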
I'm fairly new to TensorFlow and would appreciate answers a lot.
I'm trying to use a transformer model as an embedding layer and feed the data to a custom model.
import tensorflow as tf
from tensorflow.keras import layers
from transformers import TFAutoModel

# MODEL_NAME, config, and MAX_LEN are defined elsewhere (not shown)
def build_model():
    transformer_model = TFAutoModel.from_pretrained(MODEL_NAME, config=config)

    input_ids_in = layers.Input(shape=(MAX_LEN,), name='input_ids', dtype='int32')
    input_masks_in = layers.Input(shape=(MAX_LEN,), name='attention_mask', dtype='int32')

    embedding_layer = transformer_model(input_ids_in, attention_mask=input_masks_in)[0]
    X = layers.Bidirectional(tf.keras.layers.LSTM(50, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))(embedding_layer)
    X = layers.GlobalMaxPool1D()(X)
    X = layers.Dense(64, activation='relu')(X)
    X = layers.Dropout(0.2)(X)
    X = layers.Dense(30, activation='softmax')(X)
    model = tf.keras.Model(inputs=[input_ids_in, input_masks_in], outputs=X)

    # freeze the transformer so only the head is trained
    for layer in model.layers[:3]:
        layer.trainable = False

    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model

model = build_model()
model.summary()
r = model.fit(
    train_ds,
    steps_per_epoch=train_steps,
    epochs=EPOCHS,
    verbose=3)
I have 30 classes and the labels are not one-hot encoded, so I'm using sparse_categorical_crossentropy as my loss function, but I keep getting the following error:
ValueError: Shape mismatch: The shape of labels (received (1,)) should equal the shape of logits except for the last dimension (received (10, 30)).
How can I solve this? And why is the (10, 30) shape required? I know the 30 comes from the last Dense layer with 30 units, but why the 10? Is it because of MAX_LEN, which is 10?
My model summary:
Model: "model_16"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_ids (InputLayer) [(None, 10)] 0
__________________________________________________________________________________________________
attention_mask (InputLayer) [(None, 10)] 0
__________________________________________________________________________________________________
tf_bert_model_21 (TFBertModel) TFBaseModelOutputWit 162841344 input_ids[0][0]
attention_mask[0][0]
__________________________________________________________________________________________________
bidirectional_17 (Bidirectional (None, 10, 100) 327600 tf_bert_model_21[0][0]
__________________________________________________________________________________________________
global_max_pooling1d_15 (Global (None, 100) 0 bidirectional_17[0][0]
__________________________________________________________________________________________________
dense_32 (Dense) (None, 64) 6464 global_max_pooling1d_15[0][0]
__________________________________________________________________________________________________
dropout_867 (Dropout) (None, 64) 0 dense_32[0][0]
__________________________________________________________________________________________________
dense_33 (Dense) (None, 30) 1950 dropout_867[0][0]
==================================================================================================
Total params: 163,177,358
Trainable params: 336,014
Non-trainable params: 162,841,344
The 10 is the number of sequences in one batch, i.e. the batch size (I suspect it also happens to be the number of sequences in your dataset), not MAX_LEN: the time dimension is pooled away by GlobalMaxPool1D, so the final logits have shape (batch_size, 30).
Your model acts as a sequence classifier, so you should have exactly one integer label for every sequence, batched the same way as the inputs.
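A minimal sketch of a matching tf.data pipeline (hypothetical: input_ids, attention_masks, and labels are assumed NumPy arrays with one integer class label per sequence):

import tensorflow as tf

train_ds = tf.data.Dataset.from_tensor_slices(
    ({'input_ids': input_ids, 'attention_mask': attention_masks}, labels)
).batch(10)
# Each batch yields labels of shape (10,), matching logits of shape (10, 30).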
I have a simple GRU network coded with Keras in Python, as below:
gru1 = GRU(16, activation='tanh', return_sequences=True)(input)
dense = TimeDistributed(Dense(16, activation='tanh'))(gru1)
output = TimeDistributed(Dense(1, activation="sigmoid"))(dense)
I've used a sigmoid activation for the output since my purpose is classification, but I need to use the same model for regression as well, which would require changing the output activation to linear. The rest of the network stays the same, so in this case I would end up with two different networks for two different purposes: the inputs are the same, but the outputs are classes for the sigmoid activation and values for the linear one.
My question is, is there any way to use only one network but get two different outputs at the end? Thanks.
Yes, you can use the functional API to design a multi-output model: keep the shared layers and add two different outputs, one with sigmoid and another with linear activation.
N.B.: Don't use input as a variable name; it shadows a built-in function in Python.
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model

seq_len = 100  # your sequence length
input_ = Input(shape=(seq_len, 1))
gru1 = GRU(16, activation='tanh', return_sequences=True)(input_)
dense = TimeDistributed(Dense(16, activation='tanh'))(gru1)
# Name the TimeDistributed wrappers rather than the inner Dense layers: the
# keys of the loss dictionary below must match the output layers' names.
output1 = TimeDistributed(Dense(1, activation="sigmoid"), name="out1")(dense)
output2 = TimeDistributed(Dense(1, activation="linear"), name="out2")(dense)
model = Model(input_, [output1, output2])
model.summary()
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_3 (InputLayer) [(None, 100, 1)] 0
__________________________________________________________________________________________________
gru_2 (GRU) (None, 100, 16) 912 input_3[0][0]
__________________________________________________________________________________________________
time_distributed_3 (TimeDistrib (None, 100, 16) 272 gru_2[0][0]
__________________________________________________________________________________________________
time_distributed_4 (TimeDistrib (None, 100, 1) 17 time_distributed_3[0][0]
__________________________________________________________________________________________________
time_distributed_5 (TimeDistrib (None, 100, 1) 17 time_distributed_3[0][0]
==================================================================================================
Total params: 1,218
Trainable params: 1,218
Non-trainable params: 0
Compiling with two loss functions:
losses = {
    "out1": "binary_crossentropy",
    "out2": "mse",
}
# initialize the optimizer and compile the model
model.compile(optimizer='adam', loss=losses, metrics=["accuracy", "mae"])
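Training then takes one target per named output. A hypothetical call (x_train, y_class, and y_values are assumed arrays; the targets must have shape (num_samples, seq_len, 1) to match the TimeDistributed outputs):

model.fit(x_train,
          {"out1": y_class, "out2": y_values},
          epochs=10, batch_size=32)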
I am trying to understand attention models and also to build one myself. After many searches I came across this website, which has an attention model coded in Keras and also looks simple. But when I tried to build that same model on my machine, I got a multiple-argument error. The error was due to mismatched argument passing in the Attention class: its __init__ asks for one argument, yet the code instantiates the attention object with two arguments.
import tensorflow as tf

max_len = 200
rnn_cell_size = 128
vocab_size = 250

class Attention(tf.keras.Model):
    def __init__(self, units):
        super(Attention, self).__init__()
        self.W1 = tf.keras.layers.Dense(units)
        self.W2 = tf.keras.layers.Dense(units)
        self.V = tf.keras.layers.Dense(1)

    def call(self, features, hidden):
        hidden_with_time_axis = tf.expand_dims(hidden, 1)
        score = tf.nn.tanh(self.W1(features) + self.W2(hidden_with_time_axis))
        attention_weights = tf.nn.softmax(self.V(score), axis=1)
        context_vector = attention_weights * features
        context_vector = tf.reduce_sum(context_vector, axis=1)
        return context_vector, attention_weights

sequence_input = tf.keras.layers.Input(shape=(max_len,), dtype='int32')
embedded_sequences = tf.keras.layers.Embedding(vocab_size, 128, input_length=max_len)(sequence_input)

lstm = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(
    rnn_cell_size,
    dropout=0.3,
    return_sequences=True,
    return_state=True,
    recurrent_activation='relu',
    recurrent_initializer='glorot_uniform'), name="bi_lstm_0")(embedded_sequences)

lstm, forward_h, forward_c, backward_h, backward_c = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(
        rnn_cell_size,
        dropout=0.2,
        return_sequences=True,
        return_state=True,
        recurrent_activation='relu',
        recurrent_initializer='glorot_uniform'))(lstm)

state_h = tf.keras.layers.Concatenate()([forward_h, backward_h])
state_c = tf.keras.layers.Concatenate()([forward_c, backward_c])

# PROBLEM IN THIS LINE
context_vector, attention_weights = Attention(lstm, state_h)

output = tf.keras.layers.Dense(1, activation='sigmoid')(context_vector)

model = tf.keras.Model(inputs=sequence_input, outputs=output)

# summarize layers
print(model.summary())
How can I make this model work?
There is a problem with the way you initialize the attention layer and pass it parameters. You should specify the number of attention units when constructing the layer, and pass the tensors when calling it:
context_vector, attention_weights = Attention(32)(lstm, state_h)
The result:
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, 200)          0
__________________________________________________________________________________________________
embedding (Embedding)           (None, 200, 128)     32000       input_1[0][0]
__________________________________________________________________________________________________
bi_lstm_0 (Bidirectional)       [(None, 200, 256), ( 263168      embedding[0][0]
__________________________________________________________________________________________________
bidirectional (Bidirectional)   [(None, 200, 256), ( 394240      bi_lstm_0[0][0]
                                                                 bi_lstm_0[0][1]
                                                                 bi_lstm_0[0][2]
                                                                 bi_lstm_0[0][3]
                                                                 bi_lstm_0[0][4]
__________________________________________________________________________________________________
concatenate (Concatenate)       (None, 256)          0           bidirectional[0][1]
                                                                 bidirectional[0][3]
__________________________________________________________________________________________________
attention (Attention)           [(None, 256), (None, 16481       bidirectional[0][0]
                                                                 concatenate[0][0]
__________________________________________________________________________________________________
dense_3 (Dense)                 (None, 1)             257        attention[0][0]
==================================================================================================
Total params: 706,146
Trainable params: 706,146
Non-trainable params: 0
__________________________________________________________________________________________________
None
Attention layers are part of the Keras API of TensorFlow (since 2.1). Note that they output a tensor the same size as your 'query' tensor.
This is how to use Luong-style attention:
query_attention = tf.keras.layers.Attention()([query, value])
And Bahdanau-style attention :
query_attention = tf.keras.layers.AdditiveAttention()([query, value])
The adapted version for the model above (note that state_h, the query, needs a time axis before it can attend over the LSTM outputs, the values):
query = tf.expand_dims(state_h, 1)                             # (batch, 1, 256)
query_attention = tf.keras.layers.Attention()([query, lstm])   # (batch, 1, 256)
context_vector = tf.squeeze(query_attention, axis=1)           # (batch, 256)
Check out the original website for more information: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Attention
https://www.tensorflow.org/api_docs/python/tf/keras/layers/AdditiveAttention
To answer Arman's specific query: these libraries use post-2018 semantics of queries, values, and keys. To map the semantics back to Bahdanau's or Luong's paper, you can consider the 'query' to be the last decoder hidden state, and the 'values' to be the set of encoder outputs, i.e. all the hidden states of the encoder. The 'query' 'attends' to all the 'values'.
Whichever version of the code or library you are using, note that the 'query' is expanded over the time axis to prepare it for the addition that follows. The tensor being expanded will always be the last hidden state of the RNN; the other argument will always be the values to be attended to, i.e. all the hidden states at the encoder end. This simple check of the code determines what 'query' and 'values' map to, irrespective of the library or code you are using.
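As a toy shape check of that convention (random tensors, not taken from the question):

import tensorflow as tf

query = tf.random.normal((4, 1, 256))     # (batch, 1, units): the expanded last hidden state
values = tf.random.normal((4, 200, 256))  # (batch, timesteps, units): all encoder outputs

context = tf.keras.layers.Attention()([query, values])
print(context.shape)  # (4, 1, 256): the same size as the query, as noted above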
You can refer to https://towardsdatascience.com/create-your-own-custom-attention-layer-understand-all-flavours-2201b5e8be9e to write your own custom attention layer in fewer than six lines of code.
I have been through the Keras documentation, but I am still unable to figure out how the input_shape parameter works and why it does not change the number of parameters of my DenseNet model when I pass it my custom input shape. An example:
import keras
from keras import applications
from keras.layers import Conv3D, MaxPool3D, Flatten, Dense
from keras.layers import Dropout, Input, BatchNormalization
from keras import Model
# define model 1
INPUT_SHAPE = (224, 224, 1) # used to define the input size to the model
n_output_units = 2
activation_fn = 'sigmoid'
densenet_121_model = applications.densenet.DenseNet121(include_top=False, weights=None, input_shape=INPUT_SHAPE, pooling='avg')
inputs = Input(shape=INPUT_SHAPE, name='input')
model_base = densenet_121_model(inputs)
output = Dense(units=n_output_units, activation=activation_fn)(model_base)
model = Model(inputs=inputs, outputs=output)
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input (InputLayer)           (None, 224, 224, 1)       0
_________________________________________________________________
densenet121 (Model)          (None, 1024)              7031232
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 2050
=================================================================
Total params: 7,033,282
Trainable params: 6,949,634
Non-trainable params: 83,648
_________________________________________________________________
# define model 2
INPUT_SHAPE = (512, 512, 1) # used to define the input size to the model
n_output_units = 2
activation_fn = 'sigmoid'
densenet_121_model = applications.densenet.DenseNet121(include_top=False, weights=None, input_shape=INPUT_SHAPE, pooling='avg')
inputs = Input(shape=INPUT_SHAPE, name='input')
model_base = densenet_121_model(inputs)
output = Dense(units=n_output_units, activation=activation_fn)(model_base)
model = Model(inputs=inputs, outputs=output)
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input (InputLayer)           (None, 512, 512, 1)       0
_________________________________________________________________
densenet121 (Model)          (None, 1024)              7031232
_________________________________________________________________
dense_2 (Dense)              (None, 2)                 2050
=================================================================
Total params: 7,033,282
Trainable params: 6,949,634
Non-trainable params: 83,648
_________________________________________________________________
Ideally, with an increase in the input shape the number of parameters should increase; however, as you can see, they stay exactly the same. My questions are thus:
1. Why does the number of parameters not change with a change in input_shape?
2. I have only defined one channel in my input_shape; what would happen to my model training in this scenario? The documentation says the following:
input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with 'channels_last' data format) or (3, 224, 224) (with 'channels_first' data format). It should have exactly 3 input channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value.
However, when I run the model with this configuration it runs without any problems. Could there be something I am missing?
Using Keras 2.2.4 with TensorFlow 1.12.0 as backend.
1.
In the convolutional layers the input size does not influence the number of weights, because the number of weights is determined by the kernel dimensions (kernel height x kernel width x input channels x filters, plus biases). A larger input size leads to a larger output size, but not to more weights.
This means that the output size of the convolutional layers of the second model will be larger than for the first model, which would increase the number of weights in a following dense layer. However, if you take a look at the architecture of DenseNet, you notice that there is a global pooling layer after all the convolutional layers (GlobalAveragePooling2D here, because of pooling='avg'), which averages all the values of each output channel. That's why the output of DenseNet is of size 1024, whatever the input shape.
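A quick sanity check of the first point (a toy layer, not DenseNet itself): a Conv2D layer has kernel_h * kernel_w * in_channels * filters + filters parameters, independent of the spatial input size.

from keras.layers import Conv2D, Input
from keras.models import Model

for size in (224, 512):
    inp = Input((size, size, 1))
    out = Conv2D(64, (3, 3))(inp)
    print(size, Model(inp, out).count_params())  # 3*3*1*64 + 64 = 640 in both cases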
2.
Yes, the model will still work. I'm not entirely sure about that, but my guess is that the single channel will be broadcast (duplicated) to fill all three channels. That's at least how these things are usually handled (see for example TensorFlow or NumPy broadcasting).
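For reference, a tiny NumPy illustration of the broadcasting behaviour that guess refers to (illustrative only; whether Keras actually does this internally is the part the answer hedges on):

import numpy as np

single = np.ones((224, 224, 1))
three = np.broadcast_to(single, (224, 224, 3))  # the single channel is repeated three times
print(three.shape)  # (224, 224, 3)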
DenseNet is composed of two parts: the convolution part and the global pooling part.
The number of trainable weights in the convolution part doesn't depend on the input shape.
Usually, a classification network would employ fully connected layers to infer the classification; in DenseNet, however, global pooling is used, which doesn't bring any trainable weights.
Therefore, the input shape doesn't affect the number of weights of the entire network.
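This can be verified directly; a minimal check (weights=None so nothing is downloaded; the count matches the 7,031,232 shown in both summaries above):

from keras import applications

for shape in ((224, 224, 1), (512, 512, 1)):
    m = applications.densenet.DenseNet121(include_top=False, weights=None,
                                          input_shape=shape, pooling='avg')
    print(shape, m.count_params())  # 7,031,232 parameters in both cases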