Adding a Concatenated layer to TensorFlow 2.0 (using Attention) - python

In building a model that uses TensorFlow 2.0 Attention, I followed the example given in the TF docs: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Attention
The last line in the example is
input_layer = tf.keras.layers.Concatenate()(
    [query_encoding, query_value_attention])
Then the example has the comment
# Add DNN layers, and create Model.
# ...
So it seemed logical to do this
model = tf.keras.Sequential()
model.add(input_layer)
This produces the error
TypeError: The added layer must be an instance of class Layer.
Found: Tensor("concatenate/Identity:0", shape=(None, 200), dtype=float32)
UPDATE (after @thushv89's response)
What I am trying to do in the end is add an attention layer to the following model, which works well (or convert it to an attention model).
model = tf.keras.Sequential()
model.add(layers.Embedding(vocab_size, embedding_nodes, input_length=max_length))
model.add(layers.LSTM(20))
#add attention here?
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='mean_squared_error', metrics=['accuracy'])
My data looks like this
4912,5059,5079,0
4663,5145,5146,0
4663,5145,5146,0
4840,5117,5040,0
Where the first three columns are the inputs and the last column is binary and the goal is classification. The data was prepared similarly to this example with a similar purpose, binary classification. https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/

So, the first thing to know is that Keras has three APIs when it comes to creating models.
Sequential - (Which is what you're doing here)
Functional - (Which is what I'm using in the solution)
Subclassing - Creating Python classes to represent custom models/layers
The model created in the tutorial is not meant to be used with the Sequential API; it is built with the Functional API. So you have to do the following. Note that I've taken the liberty of defining the dense layers with arbitrary parameters (e.g. the number of output classes), which you can change as needed.
import tensorflow as tf
# Variable-length int sequences.
query_input = tf.keras.Input(shape=(None,), dtype='int32')
value_input = tf.keras.Input(shape=(None,), dtype='int32')
# ... the code in the middle
# Concatenate query and document encodings to produce a DNN input layer.
input_layer = tf.keras.layers.Concatenate()(
    [query_encoding, query_value_attention])
# Add DNN layers, and create Model.
# ...
dense_out = tf.keras.layers.Dense(50, activation='relu')(input_layer)
pred = tf.keras.layers.Dense(10, activation='softmax')(dense_out)
model = tf.keras.models.Model(inputs=[query_input, value_input], outputs=pred)
model.summary()
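Regarding the update: the same idea carries over to the Embedding/LSTM model. Below is a minimal sketch (not from the tutorial or the answer above) of how that Sequential model could be rewritten with the Functional API so that a tf.keras.layers.Attention step fits between the LSTM and the final Dense layer; vocab_size, embedding_nodes and max_length are assumed to be defined as in the question.
import tensorflow as tf

inputs = tf.keras.Input(shape=(max_length,), dtype='int32')
x = tf.keras.layers.Embedding(vocab_size, embedding_nodes)(inputs)
seq = tf.keras.layers.LSTM(20, return_sequences=True)(x)    # keep per-timestep outputs
attn = tf.keras.layers.Attention()([seq, seq])              # self-attention over the LSTM outputs
pooled = tf.keras.layers.GlobalAveragePooling1D()(attn)     # collapse the time dimension
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(pooled)
model = tf.keras.Model(inputs, outputs)
model.compile(loss='mean_squared_error', metrics=['accuracy'])
This is only one way to wire it; the key point is that the attention layer needs the per-timestep LSTM outputs (return_sequences=True), which the original Sequential stack does not expose.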

Related

How to edit functional model in keras?

I am using the tf.keras.applications.efficientnet_v2.EfficientNetV2L model and I want to edit its last layers so that the model has both a regression and a classification output. However, I am unsure of how to edit this model because it is not a linear sequential model, and thus I cannot do:
for layer in model.layers[:-2]:
    model.add(layer)
as certain layers of the model have multiple inputs. Is there a way of preserving the model except the last layer, so that the model diverges before the last layer?
efficentnet[:-2]
|
|
/ \
/ \
/ \
output1 output2
To enable a functional model to have a classification layer and a regression layer, you can change the model as follows. Note, there are various ways to achieve this, and this is one of them.
import tensorflow as tf
from tensorflow import keras
prev_model = keras.applications.EfficientNetV2B0(
    input_tensor=keras.Input(shape=(224, 224, 3)),
    include_top=False
)
Next, we will write our expected head layers, shown below.
neck_branch = keras.Sequential(
    [
        # we can add more layers, i.e. batch norm, etc.
        keras.layers.GlobalAveragePooling2D()
    ],
    name='neck_head'
)
classification_head = keras.Sequential(
    [
        keras.layers.Dense(10, activation='softmax')
    ],
    name='classification_head'
)
regression_head = keras.Sequential(
    [
        keras.layers.Dense(1, activation=None)
    ],
    name='regression_head'
)
Now, we can build the desired model.
x = neck_branch(prev_model.output)
output_a = classification_head(x)
output_b = regression_head(x)
final_model = keras.Model(prev_model.inputs, [output_a, output_b])
Test
keras.utils.plot_model(final_model, expand_nested=True)
# OK
final_model(tf.ones(shape=(1, 224, 224, 3)))
# OK
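As a possible next step (not part of the original answer), the two heads can be trained with separate losses keyed by the names given to the Sequential heads above; the losses and weights here are placeholders to adjust for your task.
final_model.compile(
    optimizer='adam',
    loss={
        'classification_head': 'sparse_categorical_crossentropy',
        'regression_head': 'mse',
    },
    # weights are arbitrary here; tune them for your task
    loss_weights={'classification_head': 1.0, 'regression_head': 0.5},
)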
Update
Based on your comments,
how you would tackle the problem if the previous model was imported from a h5 file since there I cannot declare the top layer not to be included?
If I understand your query, you have a saved model (in .h5 format) with top layers. In that case, you don't have an include_top parameter to exclude the top branch. So what you can do is remove the top branch of your saved model first. Here is how:
# a saved model with top layers
prev_model = keras.models.load_model('model.h5')
prev_model_with_top_remove = keras.Model(
    prev_model.input,
    prev_model.layers[-4].output
)
prev_model_with_top_remove.summary()
Using prev_model.layers[-4].output removes the top branch. In the end, you get output similar to what include_top=False would give. Check the model summary to inspect it visually.
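Assuming layers[-4].output is the last convolutional feature map of your saved model (this depends on its architecture), the trimmed model can then be extended exactly like the include_top=False case above, for example:
x = neck_branch(prev_model_with_top_remove.output)
output_a = classification_head(x)
output_b = regression_head(x)
final_model = keras.Model(prev_model_with_top_remove.input, [output_a, output_b])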
Keras' functional API works by linking Keras tensors (hereafter called KTensors) and not your everyday TF tensors.
Therefore, the first thing you need to do is feed KTensors (created using tf.keras.Input) of proper shapes to the original model. This will trigger the forward chain, prompting the model's layers to produce their own output KTensors that are properly linked to the input KTensors. After the forward pass,
The layers will store their received/produced KTensors in their input and output attributes.
The model itself will also store the KTensors you fed to it and the corresponding final output KTensors in its inputs and outputs attributes (note the s).
Like so,
>>> from tensorflow.keras import Input
>>> from tensorflow.keras.layers import Dense
>>> from tensorflow.keras.models import Sequential, Model
>>> seq_model = Sequential()
>>> seq_model.add(Dense(1))
>>> seq_model.add(Dense(2))
>>> seq_model.add(Dense(3))
>>> hasattr(seq_model.layers[0], 'output')
False
>>> seq_model.inputs is None
True
>>> _ = seq_model(Input(shape=(10,))) # <--- Feed input KTensor to the model
>>> seq_model.layers[0].output
<KerasTensor: shape=(None, 1) dtype=float32 (created by layer 'dense')>
>>> seq_model.inputs
[<KerasTensor: shape=(None, 10) dtype=float32 (created by layer 'dense_input')>]
Once you've obtained these internal KTensors, everything becomes trivial. To extract the KTensor right before the last two layers and forward it to two different branches to form a new functional model, do
>>> intermediate_ktensor = seq_model.layers[-3].output
>>> branch_1_output = Dense(20)(intermediate_ktensor)
>>> branch_2_output = Dense(30)(intermediate_ktensor)
>>> branched_model = Model(inputs=seq_model.inputs, outputs=[branch_1_output, branch_2_output])
Note that the shapes of the KTensors you fed at the very first step must conform to the shape requirements of the layers that receive them. In my example, the input KTensor would be fed to Dense(1) layer. As Dense requires the input shape to be defined in the last dimension, the input KTensor could be of shapes, e.g., (10,) or (None,10) but not (None,) or (10, None).
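For instance (my own illustration of that last point), a fresh copy of the model above accepts a KTensor whose last dimension is defined, but not one where it is left unspecified:
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

fresh_model = Sequential([Dense(1), Dense(2), Dense(3)])
fresh_model(Input(shape=(10,)))      # fine: the last dimension is 10
# fresh_model(Input(shape=(None,)))  # raises: Dense needs the last dimension defined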

What is the most efficient way to modify a Keras Model?

Is there a way to add nodes to a layer in an existing Keras model? If so, what is the most efficient way to do so?
Also, is it possible to do the same but with layers? i.e. add a new layer to an existing Keras model (for example, right after the input layer).
One way I know of is to use Keras functional API by iterating and cloning each layer of the model in order to create a "copy" of the original model with the desired changes, but is it the most efficient way to accomplish this task?
You can take the output of a layer in a model and build another model starting from it:
import tensorflow as tf
# One simple model
inputs = tf.keras.Input(shape=(3,))
x = tf.keras.layers.Dense(4, activation='relu')(inputs)
outputs = tf.keras.layers.Dense(5, activation='softmax')(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
# Make a second model starting from layer in previous model
x2 = tf.keras.layers.Dense(8, activation='relu')(model.layers[1].output)
outputs2 = tf.keras.layers.Dense(7, activation='softmax')(x2)
model2 = tf.keras.Model(inputs=model.input, outputs=outputs2)
Note that in this case model and model2 share the same input layer and first dense layer objects (model.layers[0] is model2.layers[0] and model.layers[1] is model2.layers[1]).
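For the other part of the question, inserting a new layer right after the input layer, here is a sketch on the same toy model (my own illustration, not from the answer above). Note that the inserted layer must produce the shape the first reused Dense layer was built with, since the reused layers keep their existing weights.
new_inputs = tf.keras.Input(shape=(3,))
h = tf.keras.layers.Dense(3, activation='relu')(new_inputs)  # newly inserted layer; output dim must stay 3
for layer in model.layers[1:]:   # skip the original InputLayer, reuse the remaining layers
    h = layer(h)
model3 = tf.keras.Model(new_inputs, h)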

How to bypass portion of neural network in TensorFlow for some (but not all) features

In my TensorFlow model I have some data that I feed into a stack of CNNs before it goes into a few fully connected layers. I have implemented that with Keras' Sequential model. However, I now have some data that should not go into the CNN but instead be fed directly into the first fully connected layer, because it contains values and labels that are part of the input data but should not undergo convolutions, as it is not image data.
Is such a thing possible with tensorflow.keras, or should I do that with tensorflow.nn instead? As far as I understand Keras' sequential models, the input goes in one end and comes out the other with no special wiring in the middle.
Am I correct that to do this I have to use tensorflow.concat on the data from the last CNN layer and the data that bypasses the CNNs before feeding it into the first fully connected layer?
Here is a simple example in which the operation is to sum the activations from different subnets:
import keras
import numpy as np
import tensorflow as tf
from keras.layers import Input, Dense, Activation
tf.reset_default_graph()  # TF1-style API; in TF2 this would be tf.compat.v1.reset_default_graph()
# this represents your cnn model
def nn_model(input_x):
    feature_maker = Dense(10, activation='relu')(input_x)
    feature_maker = Dense(20, activation='relu')(feature_maker)
    feature_maker = Dense(1, activation='linear')(feature_maker)
    return feature_maker
# a list of input layers, of course the input shapes can be different
input_layers = [Input(shape=(3, )) for _ in range(2)]
coupled_feature = [nn_model(input_x) for input_x in input_layers]
# assume you take the sum of the outputs
coupled_feature = keras.layers.Add()(coupled_feature)
prediction = Dense(1, activation='relu')(coupled_feature)
model = keras.models.Model(inputs=input_layers, outputs=prediction)
model.compile(loss='mse', optimizer='adam')
# example training set
x_1 = np.linspace(1, 90, 270).reshape(90, 3)
x_2 = np.linspace(1, 90, 270).reshape(90, 3)
y = np.random.rand(90)
inputs_x = [x_1, x_2]
model.fit(inputs_x, y, batch_size=32, epochs=10)
You can actually plot the model to gain more intuition
from keras.utils.vis_utils import plot_model
plot_model(model, show_shapes=True)
The plotted model shows the two input branches going through their Dense subnets, being merged by the Add layer, and feeding the final Dense prediction layer.
With a little remodeling and the functional API you can:
#create the CNN - it can also be a sequential
cnn_input = Input(image_shape)
cnn_output = Conv2D(...)(cnn_input)
cnn_output = Conv2D(...)(cnn_output)
cnn_output = MaxPooling2D()(cnn_output)
....
cnn_model = Model(cnn_input, cnn_output)
#create the FC model - can also be a sequential
fc_input = Input(fc_input_shape)
fc_output = Dense(...)(fc_input)
fc_output = Dense(...)(fc_output)
fc_model = Model(fc_input, fc_output)
There is a lot of room for creativity; this is just one of the ways.
#create the full model
full_input = Input(image_shape)
full_output = cnn_model(full_input)
full_output = fc_model(full_output)
full_model = Model(full_input, full_output)
You can use any of the three models in any way you want. They share the layers and the weights, so internally they are the same.
Saving and loading the full model might be quirky. I'd probably save the other two separately and when loading create the full model again.
Notice also that if you save two models that share the same layers, after loading they will probably not share these layers anymore. (Another reason for saving/loading only fc_model and cnn_model, while creating full_model again from code)
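To directly answer the concatenation part of the question, here is a minimal sketch (my own, with assumed shapes, not part of the answers above): the image input goes through the CNN stack, the extra non-image features skip it, and a Concatenate layer (the Keras counterpart of tf.concat here) joins them right before the fully connected layers.
import tensorflow as tf

img_in = tf.keras.Input(shape=(28, 28, 1))            # assumed image shape
aux_in = tf.keras.Input(shape=(5,))                   # assumed non-image features

x = tf.keras.layers.Conv2D(16, 3, activation='relu')(img_in)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Flatten()(x)

merged = tf.keras.layers.Concatenate()([x, aux_in])   # the bypass joins here
out = tf.keras.layers.Dense(64, activation='relu')(merged)
out = tf.keras.layers.Dense(1, activation='sigmoid')(out)

model = tf.keras.Model(inputs=[img_in, aux_in], outputs=out)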

how to get data from within Keras model for visualisation?

I am using TensorFlow 1.12, which has Keras integrated, together with Python 3.6.x.
I wish to use Keras for its simplicity of model building, but I would also like to use data from the intermediate layers for visualization of feature maps and kernels, to better understand how machine learning works (even though this is admittedly not so evident).
I am using the MNIST database and a very basic Keras model to try to do what I want to do.
Here is the code
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow import keras
print(tf.VERSION)
print(tf.keras.__version__)
tf.keras.backend.clear_session()
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train_shaped = np.expand_dims(x_train, axis=3) / 255.0
x_test_shaped = np.expand_dims(x_test, axis=3) / 255.0
def create_model():
    model = tf.keras.models.Sequential([
        keras.layers.Conv2D(32, kernel_size=(4, 4), strides=(1, 1), activation='relu', input_shape=(28, 28, 1)),
        keras.layers.Dropout(0.5),
        keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
        keras.layers.Conv2D(24, kernel_size=(8, 8), strides=(1, 1)),
        keras.layers.Flatten(),
        keras.layers.Dropout(0.5),
        keras.layers.Dense(128, activation=tf.nn.relu),
        keras.layers.Dense(10, activation=tf.nn.softmax)
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss=tf.keras.losses.sparse_categorical_crossentropy,
                  metrics=['accuracy'])
    return model
The above sets up the dataset and the model
Next I define my session for Tensorflow and do the training.
This all works fine but now I want to get my data for the, as example, the first layer out as ideally a numpy array on which I can do the visualization.
My model.layers[0].output gives me a Tensor of shape (?, 25, 25, 32) as expected, and now I try to call eval() and then the .numpy() method to get my result.
The error message is
You must feed a value for placeholder tensor 'conv2d_6_input' with dtype float and shape [?,28,28,1]
I am looking for help on how to get my data (32 feature maps of 25x25 pixels) out as numpy array for visualization.
sess = tf.Session(graph=tf.get_default_graph())
tf.keras.backend.set_session(sess)
with sess.as_default():
    model = create_model()
    model.summary()
    model.fit(x_train_shaped[:10000], y_train[:10000], epochs=2,
              batch_size=64, validation_split=.2)
    model.layers[0].output
    print(model.layers[0].output.shape)
    my_array = model.layers[0].output
    my_array.eval()
tf.keras.backend.clear_session()
sess.close()
First of all, you must note that getting the output of a model or a layer only makes sense when you feed the input layers with some data. You give the model something (i.e. input data), and you get something in return (i.e. an output, feature map or activation map). That's why it produces the following error:
You must feed a value for placeholder tensor 'conv2d_6_input'
You haven't fed the baby, so it would cry :)
Now, you might think that building a new Keras model just for this is counterproductive: when you already have a large model, you would rather plug in some ready-made code that gets the feature maps and visualizes them, so this route may not seem interesting.
I think you are mistakenly thinking that when you construct a new model out of the layers of another model, a whole new model is cloned. That's not the case since the parameters of the layers would be shared.
Concretely, what you are looking for can be achieved like this:
viz_conv = Model(model.input, model.layers[0].output)
conv_active = viz_conv(my_input_data) # my_input_data is a numpy array of shape `(num_samples,28,28,1)`
All the parameters of viz_conv are shared with model and they have not been copied either. Under the hood they are using the same weight Tensors.
Alternatively, you could define a backend function to do this:
from tensorflow.keras import backend as K
viz_func = K.function([model.input], [model.layers[0].output])  # or any layer(s) you would like in the model
output = viz_func([my_input_data])
This has been covered in the Keras documentation, and I highly recommend reading it as well.
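As a possible final step (not part of the original answer), once the activations are available as a NumPy array of shape (num_samples, 25, 25, 32), e.g. via viz_conv.predict(my_input_data) or output[0] from the K.function route above, the 32 feature maps can be plotted with matplotlib:
import matplotlib.pyplot as plt

feature_maps = viz_conv.predict(my_input_data)[0]   # feature maps of the first sample, shape (25, 25, 32)
fig, axes = plt.subplots(4, 8, figsize=(12, 6))
for i, ax in enumerate(axes.flat):
    ax.imshow(feature_maps[..., i], cmap='gray')    # one 25x25 feature map per panel
    ax.axis('off')
plt.show()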

Constructing a keras model

I don't understand what's happening in this code:
def construct_model(use_imagenet=True):
    # line 1: how do we keep all layers of this model?
    model = keras.applications.InceptionV3(include_top=False, input_shape=(IMG_SIZE, IMG_SIZE, 3),
                                           weights='imagenet' if use_imagenet else None)
    new_output = keras.layers.GlobalAveragePooling2D()(model.output)
    new_output = keras.layers.Dense(N_CLASSES, activation='softmax')(new_output)
    model = keras.engine.training.Model(model.inputs, new_output)
    return model
Specifically, my confusion is, when we call the last constructor
model = keras.engine.training.Model(model.inputs, new_output)
we specify input layer and output layer, but how does it know we want all the other layers to stay?
In other words, we append the new_output layer to the pre-trained model we load in line 1, that is the new_output layer, and then in the final constructor (final line), we just create and return a model with a specified input and output layers, but how does it know what other layers we want in between?
Side question 1): What is the difference between keras.engine.training.Model and keras.models.Model?
Side question 2): What exactly happens when we do new_layer = keras.layers.Dense(...)(prev_layer)? Does the () operation return new layer, what does it do exactly?
This model was created using the Functional API Model
Basically it works like this (perhaps if you go to the "side question 2" below before reading this it may get clearer):
You have an input tensor (you can see it as "input data" too)
You create (or reuse) a layer
You pass the input tensor to a layer (you "call" a layer with an input)
You get an output tensor
You keep working with these tensors until you have created the entire graph.
But this hasn't created a "model" yet. (One you can train and use for other things.)
All you have is a graph telling which tensors go where.
To create a model, you define its start and end points.
In the example:
They take an existing model: model = keras.applications.InceptionV3(...)
They want to expand this model, so they get its output tensor: model.output
They pass this tensor as the input of a GlobalAveragePooling2D layer
They get this layer's output tensor as new_output
They pass this as input to yet another layer: Dense(N_CLASSES, ....)
And get its output as new_output (this var was replaced as they are not interested in keeping its old value...)
But, as it works with the functional API, we don't have a model yet, only a graph. In order to create a model, we use Model defining the input tensor and the output tensor:
new_model = Model(old_model.inputs, new_output)
Now you have your model.
If you use it in another var, as I did (new_model), the old model will still exist in model. And these models are sharing the same layers, in a way that whenever you train one of them, the other gets updated as well.
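A quick way to see that sharing (my own illustration, not from the original answer): every layer object of the old model is literally present in the new one, so their weights are the same variables.
# the layer objects are shared, not copied
assert all(layer in new_model.layers for layer in model.layers)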
Question: how does it know what other layers we want in between?
When you do:
outputTensor = SomeLayer(...)(inputTensor)
you have a connection between the input and output. (Keras will use the inner TensorFlow mechanism and add these tensors and nodes to the graph.) The output tensor cannot exist without the input. The entire InceptionV3 model is connected from start to end. Its input tensor goes through all the layers to yield an output tensor. There is only one possible way for the data to follow, and the graph is the way.
When you get the output of this model and use it to get further outputs, all your new outputs are connected to this, and thus to the first input of the model.
Probably the attribute _keras_history that is added to the tensors is closely related to how it tracks the graph.
So, doing Model(old_model.inputs, new_output) will naturally follow the only way possible: the graph.
If you try doing this with tensors that are not connected, you will get an error.
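For example (my own illustration), an Input that never took part in the graph that produced new_output cannot be used as the model's input:
unrelated_input = keras.Input(shape=(32,))
# keras.Model(unrelated_input, new_output)  # raises a "Graph disconnected" ValueError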
Side question 1
Prefer to import from "keras.models". Basically, this module will import from the other module:
https://github.com/keras-team/keras/blob/master/keras/models.py
Notice that the file keras/models.py imports Model from keras.engine.training. So, it's the same thing.
Side question 2
It's not new_layer = keras.layers.Dense(...)(prev_layer).
It is output_tensor = keras.layers.Dense(...)(input_tensor).
You're doing two things in the same line:
Creating a layer - with keras.layers.Dense(...)
Calling the layer with an input tensor to get an output tensor
If you wanted to use the same layer with different inputs:
denseLayer = keras.layers.Dense(...) #creating a layer
output1 = denseLayer(input1) #calling a layer with an input and getting an output
output2 = denseLayer(input2) #calling the same layer on another input
output3 = denseLayer(input3) #again
Bonus - Creating a functional model that is equal to a sequential model
If you create this sequential model:
model = Sequential()
model.add(Layer1(...., input_shape=some_shape))
model.add(Layer2(...))
model.add(Layer3(...))
You're doing exactly the same as:
inputTensor = Input(some_shape)
outputTensor = Layer1(...)(inputTensor)
outputTensor = Layer2(...)(outputTensor)
outputTensor = Layer3(...)(outputTensor)
model = Model(inputTensor,outputTensor)
What is the difference?
Well, functional API models are totally free to be built any way you want. You can create branches:
out1 = Layer1(..)(inputTensor)
out2 = Layer2(..)(inputTensor)
You can join tensors:
joinedOut = Concatenate()([out1,out2])
With this, you can create anything you want with all kinds of fancy stuff: branches, gates, concatenations, additions, etc., which you can't do with a sequential model.
In fact, a Sequential model is also a Model, but created for a quick use in models without branches.
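You can verify that claim directly (my own illustration):
from keras.models import Model, Sequential
print(issubclass(Sequential, Model))  # True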
There's this way of building a model from a pretrained one that you may build upon.
See https://keras.io/applications/#fine-tune-inceptionv3-on-a-new-set-of-classes:
base_model = InceptionV3(weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(200, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)
for layer in base_model.layers:
    layer.trainable = False
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
Each time a layer is added by an operation like x = Dense(...)(x), information about the computational graph is updated. You can type this interactively to see what it contains:
x.graph.__dict__
You can see there are all kinds of attributes, including ones about previous and next layers. These are internal implementation details and may change over time.
