Keras retrieve value of node before activation function - python

Imagine a fully-connected neural network with its last two layers of the following structure:
[Dense]
units = 612
activation = softplus
[Dense]
units = 1
activation = sigmoid
The output value of the net is 1, but I'd like to know what the input x to the sigmoidal function was (must be some high number, since sigm(x) is 1 here).
Following indraforyou's answer I managed to retrieve the outputs and weights of the Keras layers:
outputs = [layer.output for layer in model.layers[-2:]]
functors = [K.function([model.input] + [K.learning_phase()], [out]) for out in outputs]
test_input = np.array(...)
layer_outs = [func([test_input, 0.]) for func in functors]
print(layer_outs[-1][0])  # -> array([[ 1.]])
dense_0_out = layer_outs[-2][0]  # shape (612, 1)
dense_1_weights = model.layers[-1].weights[0].get_value()  # shape (1, 612)
dense_1_bias = model.layers[-1].weights[1].get_value()
x = np.dot(dense_0_out, dense_1_weights) + dense_1_bias
print(x)  # -> -11.7
How can x be a negative number? In that case the last layers output should be a number closer to 0.0 than 1.0. Are dense_0_out or dense_1_weights the wrong outputs or weights?

Since you're using get_value(), I'll assume that you're using Theano backend. To get the value of the node before the sigmoid activation, you can traverse the computation graph.
The graph can be traversed starting from outputs (the result of some computation) down to its inputs using the owner field.
In your case, what you want is the input x of the sigmoid activation op. The output of the sigmoid op is model.output. Putting these together, the variable x is model.output.owner.inputs[0].
If you print out this value, you'll see Elemwise{add,no_inplace}.0, which is an element-wise addition op. It can be verified from the source code of Dense.call():
def call(self, inputs):
    output = K.dot(inputs, self.kernel)
    if self.use_bias:
        output = K.bias_add(output, self.bias)
    if self.activation is not None:
        output = self.activation(output)
    return output
The input to the activation function is the output of K.bias_add().
With a small modification of your code, you can get the value of the node before activation:
x = model.output.owner.inputs[0]
func = K.function([model.input] + [K.learning_phase()], [x])
print(func([test_input, 0.]))
For anyone using TensorFlow backend: use x = model.output.op.inputs[0] instead.
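As a quick sanity check (a hedged sketch, assuming test_input from above and a single-input model), applying the sigmoid to the retrieved value should reproduce the model's output:
import numpy as np
pre_act = func([test_input, 0.])[0]
print(np.allclose(1.0 / (1.0 + np.exp(-pre_act)), model.predict(test_input)))  # expect True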

I can see a simple way: just change the model structure a little. (See at the end how to reuse an existing model, changing only its ending.)
The advantages of this method are:
You don't have to guess if you're doing the right calculations
You don't need to care about the dropout layers and how to implement a dropout calculation
This is a pure Keras solution (it applies to any backend, either Theano or TensorFlow).
There are two possible solutions below:
Option 1 - Create a new model from start with the proposed structure
Option 2 - Reuse an existing model changing only its ending
Model structure
You could just have the last dense separated in two layers at the end:
[Dense]
units = 612
activation = softplus
[Dense]
units = 1
#no activation
[Activation]
activation = sigmoid
Then you simply get the output of the last dense layer.
I'd say you should create two models, one for training, the other for checking this value.
Option 1 - Building the models from the beginning:
from keras.models import Model
#build the initial part of the model the same way you would
#add the Dense layer without an activation:
#if using the functional Model API
denseOut = Dense(1)(outputFromThePreviousLayer)
sigmoidOut = Activation('sigmoid')(denseOut)
#if using the sequential model - will need the functional API
model.add(Dense(1))
sigmoidOut = Activation('sigmoid')(model.output)
Create two models from that, one for training, one for checking the output of dense:
#if using the functional API
checkingModel = Model(yourInputs, denseOut)
#if using the sequential model:
checkingModel = model
trainingModel = Model(checkingModel.inputs, sigmoidOut)
Use trainingModel for training normally. The two models share weights, so training one is training the other.
Use checkingModel just to see the outputs of the Dense layer, using checkingModel.predict(X)
Option 2 - Building this from an existing model:
from keras.models import Model
#find the softplus dense layer and get its output:
softplusOut = oldModel.layers[indexForSoftplusLayer].output
#or should this be the output of the dropout? Whichever comes immediately before the final Dense(1)
#recreate the dense layer
outDense = Dense(1, name='newDense', ...)(softplusOut)
#create the new model
checkingModel = Model(oldModel.inputs,outDense)
It's important, since you created a new Dense layer, to get the weights from the old one:
wgts = oldModel.layers[indexForDense].get_weights()
checkingModel.get_layer('newDense').set_weights(wgts)
In this case, training the old model will not update the last dense layer in the new model, so, let's create a trainingModel:
outSigmoid = Activation('sigmoid')(checkingModel.output)
trainingModel = Model(checkingModel.inputs,outSigmoid)
Use checkingModel for checking the values you want with checkingModel.predict(X). And train the trainingModel.
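As a hedged sanity check (X here stands for any valid input batch, an assumption), the two models should agree once the sigmoid is applied manually:
import numpy as np
pre = checkingModel.predict(X)   # value before activation
post = trainingModel.predict(X)  # value after the sigmoid
print(np.allclose(1.0 / (1.0 + np.exp(-pre)), post, atol=1e-6))  # expect True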

So, for fellow Googlers: the workings of the Keras API have changed significantly since the accepted answer was posted. The working code for extracting a layer's output before activation (for the TensorFlow backend) is:
model = Your_Keras_Model()
the_tensor_you_need = model.output.op.inputs[0] #<- this is indexable, if there are multiple inputs to this node then you can find it with indexing.
In my case, the final layer was a dense layer with activation softmax, so the tensor output I needed was <tf.Tensor 'predictions/BiasAdd:0' shape=(?, 1000) dtype=float32>.
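A minimal sketch of evaluating that tensor (the input batch x and its shape are assumptions for illustration):
from tensorflow.keras import backend as K
import numpy as np
x = np.random.rand(1, 224, 224, 3).astype('float32')  # hypothetical input batch
pre_softmax = model.output.op.inputs[0]
get_logits = K.function([model.input], [pre_softmax])
logits = get_logits([x])[0]  # e.g. shape (1, 1000) for the softmax example above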

(TF backend)
Solution for Conv layers.
I had the same question, and rewriting the model's configuration was not an option.
The simple hack is to perform the call function manually, which gives control over the activation.
Copy-paste from the Keras source, with self changed to layer. You can do the same with any other layer.
def conv_no_activation(layer, inputs, activation=False):
    if layer.rank == 1:
        outputs = K.conv1d(
            inputs,
            layer.kernel,
            strides=layer.strides[0],
            padding=layer.padding,
            data_format=layer.data_format,
            dilation_rate=layer.dilation_rate[0])
    if layer.rank == 2:
        outputs = K.conv2d(
            inputs,
            layer.kernel,
            strides=layer.strides,
            padding=layer.padding,
            data_format=layer.data_format,
            dilation_rate=layer.dilation_rate)
    if layer.rank == 3:
        outputs = K.conv3d(
            inputs,
            layer.kernel,
            strides=layer.strides,
            padding=layer.padding,
            data_format=layer.data_format,
            dilation_rate=layer.dilation_rate)
    if layer.use_bias:
        outputs = K.bias_add(
            outputs,
            layer.bias,
            data_format=layer.data_format)
    if activation and layer.activation is not None:
        outputs = layer.activation(outputs)
    return outputs
Now we need to modify the main function a little. First, identify the layer by its name. Then retrieve the activations from the previous layer. Finally, compute the output of the target layer.
def get_output_activation_control(model, images, layername, activation=False):
    """Get activations for the input from specified layer"""
    inp = model.input
    layer_id, layer = [(n, l) for n, l in enumerate(model.layers) if l.name == layername][0]
    prev_layer = model.layers[layer_id - 1]
    conv_out = conv_no_activation(layer, prev_layer.output, activation=activation)
    functor = K.function([inp] + [K.learning_phase()], [conv_out])
    return functor([images, 0.])  # 0. = test phase, required by K.learning_phase()
Here is a tiny test. I'm using the VGG16 model.
a_relu = get_output_activation_control(vgg_model, img, 'block4_conv1', activation=True)[0]
a_no_relu = get_output_activation_control(vgg_model, img, 'block4_conv1', activation=False)[0]
print(np.sum(a_no_relu < 0))
> 245293
Set all negatives to zero to compare with the results retrieved after the ReLU operation embedded in VGG16.
a_no_relu[a_no_relu < 0] = 0
print(np.allclose(a_relu, a_no_relu))
> True

An easy way to define a new layer with a different activation function:
def change_layer_activation(layer):
    if isinstance(layer, keras.layers.Conv2D):
        config = layer.get_config()
        config["activation"] = "linear"
        new = keras.layers.Conv2D.from_config(config)
    elif isinstance(layer, keras.layers.Dense):
        config = layer.get_config()
        config["activation"] = "linear"
        new = keras.layers.Dense.from_config(config)
    # return the fresh (unbuilt) layer plus the trained weights to copy over
    weights = [x.numpy() for x in layer.weights]
    return new, weights
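A hedged usage sketch (assuming a TF2/eager model named model whose final layer is a Dense; rebuilding and re-wiring like this is one way to use the helper, not the only one):
old_layer = model.layers[-1]
new_layer, weights = change_layer_activation(old_layer)
logits = new_layer(model.layers[-2].output)  # calling the layer builds it
logit_model = keras.Model(model.input, logits)
new_layer.set_weights(weights)  # restore the trained weights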

I had the same problem, but none of the other answers worked for me. I'm using a newer version of Keras with TensorFlow, so some answers don't work now. Also, the structure of the model is given, so I can't change it easily. The general idea is to create a copy of the original model that works exactly like the original one but splits the activation off the output layers. Once this is done, we can easily access the output values before the activation is applied.
First we will create a copy of the original model with no activation on the output layers. This is done using the Keras clone_model function (see Docs).
from tensorflow.keras.models import clone_model
from tensorflow.keras.layers import Activation
original_model = get_model()
def f(layer):
    config = layer.get_config()
    if not isinstance(layer, Activation) and layer.name in original_model.output_names:
        config.pop('activation', None)
    layer_copy = layer.__class__.from_config(config)
    return layer_copy

copy_model = clone_model(original_model, clone_function=f)
This alone will only make a clone with new weights, so we must copy the original_model weights to the new one:
copy_model.build(original_model.input_shape)
copy_model.set_weights(original_model.get_weights())
Now we will add the activation layers:
from tensorflow.keras.models import Model
old_outputs = [ original_model.get_layer(name=name) for name in copy_model.output_names ]
new_outputs = [ Activation(old_output.activation)(output) if old_output.activation else output
                for output, old_output in zip(copy_model.outputs, old_outputs) ]
copy_model = Model(copy_model.inputs, new_outputs)
Finally, we can create a new model whose outputs are the values before the activation is applied:
no_activation_outputs = [ copy_model.get_layer(name=name).output for name in original_model.output_names ]
no_activation_model = Model(copy_model.inputs, no_activation_outputs)
Now we can use copy_model like the original_model, and no_activation_model to access the pre-activation outputs. You could even modify the code to split off a custom set of layers instead of the output layers.
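A hedged check that the split worked (assuming a single sigmoid output and a random sample batch):
import numpy as np
import tensorflow as tf
x = np.random.rand(1, *original_model.input_shape[1:]).astype('float32')
pre = no_activation_model.predict(x)
post = original_model.predict(x)
print(np.allclose(tf.sigmoid(pre).numpy(), post, atol=1e-6))  # expect True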

Related

How to edit functional model in keras?

I am using the tf.keras.applications.efficientnet_v2.EfficientNetV2L model and I want to edit its last layers so that the model has both a regression head and a classification head. However, I am unsure of how to edit this model because it is not a linear sequential model, and thus I cannot do:
for layer in model.layers[:-2]:
    model.add(layer)
as certain layers of the model have multiple inputs. Is there a way of preserving the model except the last layer so the model will diverge before the last layer?
efficentnet[:-2]
|
|
/ \
/ \
/ \
output1 output2
To enable a functional model to have a classification layer and a regression layer, you can change the model as follows. Note, there are various ways to achieve this, and this is one of them.
import tensorflow as tf
from tensorflow import keras
prev_model = keras.applications.EfficientNetV2B0(
    input_tensor=keras.Input(shape=(224, 224, 3)),
    include_top=False
)
Next, we will write our expected head layers, shown below.
neck_branch = keras.Sequential(
    [
        # we can add more layers i.e. batch norm, etc.
        keras.layers.GlobalAveragePooling2D()
    ],
    name='neck_head'
)

classification_head = keras.Sequential(
    [
        keras.layers.Dense(10, activation='softmax')
    ],
    name='classification_head'
)

regression_head = keras.Sequential(
    [
        keras.layers.Dense(1, activation=None)
    ],
    name='regression_head'
)
Now, we can build the desired model.
x = neck_branch(prev_model.output)
output_a = classification_head(x)
output_b = regression_head(x)
final_model = keras.Model(prev_model.inputs, [output_a, output_b])
Test
keras.utils.plot_model(final_model, expand_nested=True)
# OK
final_model(tf.ones(shape=(1, 224, 224, 3)))
# OK
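Since the model now has two heads, compiling it for training takes one loss per output; a hedged sketch (the loss choices are assumptions):
final_model.compile(
    optimizer='adam',
    loss={'classification_head': 'categorical_crossentropy',
          'regression_head': 'mse'}
)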
Update
Based on your comments,
how you would tackle the problem if the previous model was imported from a h5 file since there I cannot declare the top layer not to be included?
If I understand your query, you have a saved model (in .h5 format) with its top layers. In that case, you don't have an include_top parameter to exclude the top branch. So, what you can do is remove the top branch of your saved model first. Here is how:
# a saved model with top layers
prev_model = keras.models.load_model('model.h5')
prev_model_with_top_removed = keras.Model(
    prev_model.input,
    prev_model.layers[-4].output
)
prev_model_with_top_removed.summary()
This prev_model.layers[-4].output will remove the top branch. In the end, it will give output similar to what you would get with include_top=False. Check the model summary to inspect it visually.
Keras' functional API works by linking Keras tensors (hereafter called KTensors) and not your everyday TF tensors.
Therefore, the first thing you need to do is feeding KTensors (created using tf.keras.Input) of proper shapes to the original model. This will trigger the forward chain, prompting the model's layers to produce their own output KTensors that are properly linked to the input KTensors. After the forward pass,
The layers will store their received/produced KTensors in their input and output attributes.
The model itself will also store the KTensors you fed to it and the corresponding final output KTensors in its inputs and outputs attributes (note the s).
Like so,
>>> from tensorflow.keras import Input
>>> from tensorflow.keras.layers import Dense
>>> from tensorflow.keras.models import Sequential, Model
>>> seq_model = Sequential()
>>> seq_model.add(Dense(1))
>>> seq_model.add(Dense(2))
>>> seq_model.add(Dense(3))
>>> hasattr(seq_model.layers[0], 'output')
False
>>> seq_model.inputs is None
True
>>> _ = seq_model(Input(shape=(10,))) # <--- Feed input KTensor to the model
>>> seq_model.layers[0].output
<KerasTensor: shape=(None, 1) dtype=float32 (created by layer 'dense')>
>>> seq_model.inputs
[<KerasTensor: shape=(None, 10) dtype=float32 (created by layer 'dense_input')>]
Once you've obtained these internal KTensors, everything becomes trivial. To extract the KTensor right before the last two layers and forward it to two different branches to form a new functional model, do
>>> intermediate_ktensor = seq_model.layers[-3].output
>>> branch_1_output = Dense(20)(intermediate_ktensor)
>>> branch_2_output = Dense(30)(intermediate_ktensor)
>>> branched_model = Model(inputs=seq_model.inputs, outputs=[branch_1_output, branch_2_output])
Note that the shapes of the KTensors you fed at the very first step must conform to the shape requirements of the layers that receive them. In my example, the input KTensor would be fed to Dense(1) layer. As Dense requires the input shape to be defined in the last dimension, the input KTensor could be of shapes, e.g., (10,) or (None,10) but not (None,) or (10, None).

How to split a Keras model, with a non-sequential architecture like ResNet, into sub-models?

My model is a ResNet-152. I want to cut it into two submodels, and the problem is with the second one: I can't figure out how to build a model from an intermediate layer to the output.
I tried the code from this response, and it doesn't work for me. Here is my code:
def getLayerIndexByName(model, layername):
    for idx, layer in enumerate(model.layers):
        if layer.name == layername:
            return idx

idx = getLayerIndexByName(resnet, 'res3a_branch2a')

input_shape = resnet.layers[idx].get_input_shape_at(0) # which is here in my case (None, 55, 55, 256)
layer_input = Input(shape=input_shape[1:]) # as keras will add the batch shape

# create the new nodes for each layer in the path
x = layer_input
for layer in resnet.layers[idx:]:
    x = layer(x)

# create the model
new_model = Model(layer_input, x)
And I am getting this error:
ValueError: Input 0 is incompatible with layer res3a_branch1: expected axis -1 of input shape to have value 256 but got shape (None, 28, 28, 512).
I also tried this function:
def split(model, start, end):
    confs = model.get_config()
    kept_layers = set()
    for i, l in enumerate(confs['layers']):
        if i == 0:
            confs['layers'][0]['config']['batch_input_shape'] = model.layers[start].input_shape
            if i != start:
                confs['layers'][0]['name'] += str(random.randint(0, 100000000)) # rename the input layer to avoid conflicts on merge
                confs['layers'][0]['config']['name'] = confs['layers'][0]['name']
        elif i < start or i > end:
            continue
        kept_layers.add(l['name'])
    # filter layers
    layers = [l for l in confs['layers'] if l['name'] in kept_layers]
    layers[1]['inbound_nodes'][0][0][0] = layers[0]['name']
    # set conf
    confs['layers'] = layers
    confs['input_layers'][0][0] = layers[0]['name']
    confs['output_layers'][0][0] = layers[-1]['name']
    # create new model
    submodel = Model.from_config(confs)
    for l in submodel.layers:
        orig_l = model.get_layer(l.name)
        if orig_l is not None:
            l.set_weights(orig_l.get_weights())
    return submodel
and I am getting this error:
ValueError: Unknown layer: Scale
as my resnet152 contains a Scale layer.
Here is a working version:
import resnet  # pip install resnet
from keras.models import Model
from keras.layers import Input

def getLayerIndexByName(model, layername):
    for idx, layer in enumerate(model.layers):
        if layer.name == layername:
            return idx

resnet = resnet.ResNet152(weights='imagenet')
idx = getLayerIndexByName(resnet, 'res3a_branch2a')

model1 = Model(inputs=resnet.input, outputs=resnet.get_layer('res3a_branch2a').output)

input_shape = resnet.layers[idx].get_input_shape_at(0) # get the input shape of desired layer
print(input_shape[1:])
layer_input = Input(shape=input_shape[1:]) # a new input tensor to be able to feed the desired layer

# create the new nodes for each layer in the path
x = layer_input
for layer in resnet.layers[idx:]:
    x = layer(x)

# create the model
model2 = Model(layer_input, x)
model2.summary()
Here is the error:
ValueError: Input 0 is incompatible with layer res3a_branch1: expected axis -1 of input shape to have value 256 but got shape (None, 28, 28, 512)
As I mentioned in the comments section, since the ResNet model does not have a linear architecture (i.e. it has skip connections and a layer may be connected to multiple layers), you can't simply go through the layers of the model one after another in a loop and apply each layer on the output of the previous layer (i.e. unlike models with a linear architecture, for which this method works).
So you need to find the connectivity of the layers and traverse that connectivity map to be able to construct a sub-model of the original model. Currently, this solution comes to my mind:
Specify the last layer of your sub-model.
Start from that layer and find all the connected layers to it.
Get the output of those connected layers.
Apply the last layer on the collected output.
Obviously, step #3 implies a recursion: to get the output of the connected layers (i.e. X), we first need to find their connected layers (i.e. Y), get their outputs, and then apply X on those outputs. Further, to find the connected layers you need to know a bit about the internals of Keras, which has been covered in this answer. So we come up with this solution:
from keras.applications.resnet50 import ResNet50
from keras import models
from keras import layers

resnet = ResNet50()

# this is the split point, i.e. the starting layer in our sub-model
starting_layer_name = 'activation_46'

# create a new input layer for our sub-model we want to construct
new_input = layers.Input(batch_shape=resnet.get_layer(starting_layer_name).get_input_shape_at(0))

layer_outputs = {}
def get_output_of_layer(layer):
    # if we have already applied this layer on its input(s) tensors,
    # just return its already computed output
    if layer.name in layer_outputs:
        return layer_outputs[layer.name]

    # if this is the starting layer, then apply it on the input tensor
    if layer.name == starting_layer_name:
        out = layer(new_input)
        layer_outputs[layer.name] = out
        return out

    # find all the connected layers whose output this layer consumes
    prev_layers = []
    for node in layer._inbound_nodes:
        prev_layers.extend(node.inbound_layers)

    # get the output of connected layers
    pl_outs = []
    for pl in prev_layers:
        pl_outs.extend([get_output_of_layer(pl)])

    # apply this layer on the collected outputs
    out = layer(pl_outs[0] if len(pl_outs) == 1 else pl_outs)
    layer_outputs[layer.name] = out
    return out

# note that we start from the last layer of our desired sub-model.
# this layer could be any layer of the original model as long as it is
# reachable from the starting layer
new_output = get_output_of_layer(resnet.layers[-1])

# create the sub-model
model = models.Model(new_input, new_output)
Important notes:
This solution assumes that each layer in the original model has been used only once, i.e. it does not work for Siamese networks where a layer may be shared and therefore might be applied more than once on different input tensors.
If you want a proper split of a model into multiple sub-models, it makes sense to use as split points (e.g. indicated by starting_layer_name in the above code) only those layers which are NOT in a branch (e.g. in ResNet the activation layers after merge layers are a good option, but the res3a_branch2a you selected is not, since it's in a branch). To get a better view of the original architecture of the model, you can always plot its diagram using the plot_model() utility function:
from keras.applications.resnet50 import ResNet50
from keras.utils import plot_model
resnet = ResNet50()
plot_model(resnet, to_file='resnet_model.png')
Since new nodes are created after constructing a sub-model, don't try to construct, in the same run of the code above, another sub-model that overlaps with the previous sub-model (if it does not overlap, it's OK!); otherwise, you may encounter errors.
In the case where there is a layer with index middle that is connected only to the previous layer (middle - 1), and all layers after it are not connected directly to layers before it, we can use the fact that every model is saved as a list of layers and create two partial models this way:
model1 = keras.models.Model(inputs=model.input, outputs=model.layers[middle - 1].output)

input = keras.Input(shape=model.layers[middle - 1].output_shape[1:])
# layers is a dict in the form {name: output}
layers = {}
layers[model.layers[middle - 1].name] = input
for layer in model.layers[middle:]:
    if type(layer.input) == list:
        x = []
        for layer_input in layer.input:
            x.append(layers[layer_input.name.split('/')[0]])
    else:
        x = layers[layer.input.name.split('/')[0]]
    y = layer(x)
    layers[layer.name] = y
model2 = keras.Model(inputs=[input], outputs=[y])
Then it is easy to check that model2.predict(model1.predict(x)) gives the same results as model.predict(x).
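A hedged version of that check (the sample batch x is an assumption):
import numpy as np
x = np.random.rand(1, *model.input_shape[1:]).astype('float32')
print(np.allclose(model.predict(x), model2.predict(model1.predict(x)), atol=1e-5))  # expect True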
I had a similar problem with slicing an Inception CNN for transfer learning, to set only the layers after a certain point to trainable.
def get_layers_above(cutoff_layer, model):
    def get_next_level(layer, model):
        def wrap_list(val):
            if type(val) is list:
                return val
            return [val]
        r = []
        for output_t in wrap_list(layer.output):
            r += [x for x in model.layers if output_t.name in [y.name for y in wrap_list(x.input)]]
        return r

    visited = set()
    to_visit = set([cutoff_layer])
    while to_visit:
        layer = to_visit.pop()
        # skip layers we already expanded, so converging branches don't repeat work
        to_visit.update(set(get_next_level(layer, model)) - visited)
        visited.add(layer)
    return list(visited)
I went with an iterative instead of a recursive solution because a breadth-first traversal with sets seems safer for a network with many converging branches.
It should be used like this (InceptionV3, for example):
model = tf.keras.applications.InceptionV3(include_top=False,weights='imagenet',input_shape=(299,299,3))
layers=get_layers_above(model.get_layer('mixed9'),model)
print([l.name for l in layers])
output
['batch_normalization_89',
'conv2d_93',
'activation_86',
'activation_91',
'mixed10',
'activation_88',
'batch_normalization_85',
'activation_93',
'batch_normalization_90',
'conv2d_87',
'conv2d_86',
'batch_normalization_86',
'activation_85',
'conv2d_91',
'batch_normalization_91',
'batch_normalization_87',
'activation_90',
'mixed9',
'batch_normalization_92',
'batch_normalization_88',
'activation_87',
'concatenate_1',
'activation_89',
'conv2d_88',
'conv2d_92',
'average_pooling2d_8',
'activation_92',
'mixed9_1',
'conv2d_89',
'conv2d_85',
'conv2d_90',
'batch_normalization_93']
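The original goal was transfer learning, so here is a hedged sketch of how the returned list can be used (assuming model and layers from above): freeze everything, then unfreeze only the layers above the cutoff:
for l in model.layers:
    l.trainable = False
for l in layers:
    l.trainable = True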

Keras give input to intermediate layer and get final output

My model is a simple fully connected network like this:
inp=Input(shape=(10,))
d=Dense(64, activation='relu')(inp)
d=Dense(128,activation='relu')(d)
d=Dense(256,activation='relu')(d) #want to give input here, layer3
d=Dense(512,activation='relu')(d)
d=Dense(1024,activation='relu')(d)
d=Dense(128,activation='linear')(d)
So, after saving the model I want to give input to layer 3. What I am doing right now is this:
model=load_model('blah.h5') #above described network
print(temp_input.shape) #(16,256), which is equal to what I want to give
index=3
intermediate_layer_model = Model(inputs=temp_input,
                                 outputs=model.output)
End_output = intermediate_layer_model.predict(temp_input)
But it isn't working, i.e. I am getting errors like "incompatible input", "inputs should be a tuple", etc. The error message is:
raise TypeError('`inputs` should be a list or tuple.')
TypeError: `inputs` should be a list or tuple.
Is there any way I can pass my own inputs in middle of network and get the output instead of giving an input at the start and getting output from the end? Any help will be highly appreciated.
First you must learn that in Keras, when you apply a layer on an input, a new node is created inside this layer which connects the input and output tensors. Each layer may have multiple nodes connecting different input tensors to their corresponding output tensors. To build a model, these nodes are traversed and a new graph of the model is created, which consists of all the nodes needed to reach the output tensors from the input tensors (i.e. the ones you specify when creating a model: model = Model(inputs=[...], outputs=[...])).
Now you would like to feed an intermediate layer of a model and get the output of the model. Since this is a new data-flow path, we need to create new nodes for each layer corresponding to this new computational graph. We can do it like this:
idx = 3  # index of desired layer
input_shape = model.layers[idx].get_input_shape_at(0)  # get the input shape of desired layer
layer_input = Input(shape=input_shape[1:])  # a new input tensor to feed the desired layer; drop the batch dimension

# create the new nodes for each layer in the path
x = layer_input
for layer in model.layers[idx:]:
    x = layer(x)

# create the model
new_model = Model(layer_input, x)
Fortunately, your model consists of one branch, so we could simply use a for loop to construct the new model. However, for more complex models it may not be easy to do so, and you may need to write more code to construct the new model.
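With the new model in place, the intermediate batch from the question can be fed directly (a hedged sketch; temp_input of shape (16, 256) comes from the question):
End_output = new_model.predict(temp_input)  # shape (16, 128) for the network above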
Here is another method for achieving the same result. Initially, create a new input layer and then connect it to the lower layers (with weights).
For this purpose, first re-initialize these layers (with the same names) and reload the corresponding weights from the parent model using
new_model.load_weights("parent_model.hdf5", by_name=True)
This will load the required weights from the parent model. Just make sure you name your layers properly beforehand.
idx = 3
input_shape = model.layers[idx].get_input_shape_at(0)
new_input = Input(shape=input_shape[1:])  # drop the batch dimension
d = Dense(256, activation='relu', name='layer_3')(new_input)
d = Dense(512, activation='relu', name='layer_4')(d)
d = Dense(1024, activation='relu', name='layer_5')(d)
d = Dense(128, activation='linear', name='layer_6')(d)
new_model = Model(new_input, d)
new_model.load_weights("parent_model.hdf5", by_name=True)
This method will work for complex models with multiple inputs or branches. You just need to copy the same code for the required layers, connect the new inputs, and finally load the corresponding weights.
You can easily use keras.backend.function for this purpose:
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K
inp=Input(shape=(10,))
d=Dense(64, activation='relu')(inp)
d=Dense(128,activation='relu')(d)
d=Dense(256,activation='relu')(d) #want to give input here, layer3
d=Dense(512,activation='relu')(d)
d=Dense(1024,activation='relu')(d)
d=Dense(128,activation='linear')(d)
model = Model(inp, d)
foo1 = K.function(
    [inp],
    model.layers[2].output
)
foo2 = K.function(
    [model.layers[2].output],
    model.output
)
X = np.random.rand(1, 10)
X_intermediate = foo1([X])
print(np.allclose(foo2([X_intermediate]), model.predict(X)))
Sorry for the ugly function names; pick better ones yourself.
I was having the same problem and the proposed solutions worked for me but I was looking for something more explicit, so here it is for future reference:
d1 = Dense(64, activation='relu')
d2 = Dense(128,activation='relu')
d3 = Dense(256,activation='relu')
d4 = Dense(512,activation='relu')
d5 = Dense(1024,activation='relu')
d6 = Dense(128,activation='linear')
inp = Input(shape=(10,))
x = d1(inp)
x = d2(x)
x = d3(x)
x = d4(x)
x = d5(x)
x = d6(x)
full_model = tf.keras.Model(inp, x)
full_model.summary()
intermediate_input = Input(shape=d3.get_input_shape_at(0)[1:]) # get shape at node 0, dropping the batch dimension
x = d3(intermediate_input)
x = d4(x)
x = d5(x)
x = d6(x)
partial_model = tf.keras.Model(intermediate_input, x)
partial_model.summary()
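A hedged consistency check between the two models (the batch X is an assumption):
import numpy as np
X = np.random.rand(4, 10).astype('float32')
X_mid = tf.keras.Model(inp, full_model.layers[2].output).predict(X)  # output of d2, which feeds d3
print(np.allclose(full_model.predict(X), partial_model.predict(X_mid), atol=1e-5))  # expect True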
Reference:
https://keras.io/guides/functional_api/#shared-layers

Constructing a keras model

I don't understand what's happening in this code:
def construct_model(use_imagenet=True):
    # line 1: how do we keep all layers of this model?
    model = keras.applications.InceptionV3(include_top=False, input_shape=(IMG_SIZE, IMG_SIZE, 3),
                                           weights='imagenet' if use_imagenet else None)
    new_output = keras.layers.GlobalAveragePooling2D()(model.output)
    new_output = keras.layers.Dense(N_CLASSES, activation='softmax')(new_output)
    model = keras.engine.training.Model(model.inputs, new_output)
    return model
Specifically, my confusion is, when we call the last constructor
model = keras.engine.training.Model(model.inputs, new_output)
we specify input layer and output layer, but how does it know we want all the other layers to stay?
In other words, we append the new_output layer to the pre-trained model we load in line 1, that is the new_output layer, and then in the final constructor (final line), we just create and return a model with a specified input and output layers, but how does it know what other layers we want in between?
Side question 1): What is the difference between keras.engine.training.Model and keras.models.Model?
Side question 2): What exactly happens when we do new_layer = keras.layers.Dense(...)(prev_layer)? Does the () operation return new layer, what does it do exactly?
This model was created using the functional API (Model).
Basically, it works like this (it may be clearer if you read "side question 2" below before this):
You have an input tensor (you can see it as "input data" too)
You create (or reuse) a layer
You pass the input tensor to a layer (you "call" a layer with an input)
You get an output tensor
You keep working with these tensors until you have created the entire graph.
But this hasn't created a "model" yet (one you can train and use for other things).
All you have is a graph telling which tensors go where.
To create a model, you define its start and end points.
In the example:
They take an existing model: model = keras.applications.InceptionV3(...)
They want to expand this model, so they get its output tensor: model.output
They pass this tensor as the input of a GlobalAveragePooling2D layer
They get this layer's output tensor as new_output
They pass this as input to yet another layer: Dense(N_CLASSES, ....)
And get its output as new_output (this var was replaced as they are not interested in keeping its old value...)
But, as it works with the functional API, we don't have a model yet, only a graph. In order to create a model, we use Model defining the input tensor and the output tensor:
new_model = Model(old_model.inputs, new_output)
Now you have your model.
If you use it in another var, as I did (new_model), the old model will still exist in model. And these models are sharing the same layers, in a way that whenever you train one of them, the other gets updated as well.
Question: how does it know what other layers we want in between?
When you do:
outputTensor = SomeLayer(...)(inputTensor)
you have a connection between the input and output. (Keras will use the inner TensorFlow mechanism and add these tensors and nodes to the graph.) The output tensor cannot exist without the input. The entire InceptionV3 model is connected from start to end. Its input tensor goes through all the layers to yield an output tensor. There is only one possible way for the data to follow, and the graph is the way.
When you get the output of this model and use it to get further outputs, all your new outputs are connected to this, and thus to the first input of the model.
Probably the attribute _keras_history that is added to the tensors is closely related to how it tracks the graph.
So, doing Model(old_model.inputs, new_output) will naturally follow the only way possible: the graph.
If you try doing this with tensors that are not connected, you will get an error.
Side question 1
Prefer to import from "keras.models". Basically, this module will import from the other module:
https://github.com/keras-team/keras/blob/master/keras/models.py
Notice that the file keras/models.py imports Model from keras.engine.training. So, it's the same thing.
Side question 2
It's not new_layer = keras.layers.Dense(...)(prev_layer).
It is output_tensor = keras.layers.Dense(...)(input_tensor).
You're doing two things in the same line:
Creating a layer - with keras.layers.Dense(...)
Calling the layer with an input tensor to get an output tensor
If you wanted to use the same layer with different inputs:
denseLayer = keras.layers.Dense(...) #creating a layer
output1 = denseLayer(input1) #calling a layer with an input and getting an output
output2 = denseLayer(input2) #calling the same layer on another input
output3 = denseLayer(input3) #again
Bonus - Creating a functional model that is equal to a sequential model
If you create this sequential model:
model = Sequential()
model.add(Layer1(...., input_shape=some_shape))
model.add(Layer2(...))
model.add(Layer3(...))
You're doing exactly the same as:
inputTensor = Input(some_shape)
outputTensor = Layer1(...)(inputTensor)
outputTensor = Layer2(...)(outputTensor)
outputTensor = Layer3(...)(outputTensor)
model = Model(inputTensor,outputTensor)
What is the difference?
Well, functional API models are totally free to be build anyway you want. You can create branches:
out1 = Layer1(..)(inputTensor)
out2 = Layer2(..)(inputTensor)
You can join tensors:
joinedOut = Concatenate()([out1,out2])
With this, you can create anything you want with all kinds of fancy stuff, branches, gates, concatenations, additions, etc., which you can't do with a sequential model.
In fact, a Sequential model is also a Model, but created for a quick use in models without branches.
There's this way of building a model from a pretrained one that you may build upon.
See https://keras.io/applications/#fine-tune-inceptionv3-on-a-new-set-of-classes:
base_model = InceptionV3(weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(200, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)
for layer in base_model.layers:
    layer.trainable = False
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
Each time a layer is added by an op like "x=Dense(...", information about the computational graph is updated. You can type this interactively to see what it contains:
x.graph.__dict__
You can see there's all kinds of attributes, including about previous and next layers. These are internal implementation details and possibly change over time.

How to verify the structure of a neural network in a Keras model?

I'm new to Keras and neural networks. I'm writing a thesis and trying to create a SimpleRNN in Keras as illustrated below:
As it is shown in the picture, I need to create a model with 4 inputs + 2 outputs and with any number of neurons in the hidden layer.
This is my code:
model = Sequential()
model.add(SimpleRNN(4, input_shape=(1, 4), activation='sigmoid', return_sequences=True))
model.add(Dense(2))
model.compile(loss='mean_absolute_error', optimizer='adam')
model.fit(data, target, epochs=5000, batch_size=1, verbose=2)
predict = model.predict(data)
1) Does my model implement the graph?
2) Is it possible to specify connections between neurons Input and Hidden layers or Output and Input layers?
Explanation:
I am going to use backpropagation to train my network.
I have input and target values
Input is a 10*4 array and target is a 10*2 array which I then reshape:
input = input.reshape((10, 1, 4))
target = target.reshape((10, 1, 2))
It is crucial to be able to specify connections between neurons, as they can differ.
1) Not really. But I'm not sure about what exactly you want in that graph. (Let's see how Keras recurrent layers work below)
2) Yes, it's possible to connect every layer to every layer, but you can't use Sequential for that, you must use Model.
This answer may not be what you're looking for. What exactly do you want to achieve? What kind of data you have, what output you expect, what is the model supposed to do? etc...
1 - How does a recurrent layer work?
Documentation
Recurrent layers in Keras work with an "input sequence" and may output a single result or a sequence of results. Their recurrence is totally contained within them and doesn't interact with other layers.
You should have inputs with shape (NumberOrExamples, TimeStepsInSequence, DimensionOfEachStep). This means input_shape=(TimeSteps,Dimension).
The recurrent layer will work internally with each time step. The cycles happen from step to step and this behavior is totally invisible. The layer seems to work just like any other layer.
This doesn't seem to be what you want, unless you have a "sequence" to input. The only way I know of using recurrent layers in Keras that is similar to your graph is when you have a segment of a sequence and want to predict the next step. If that's the case, see some examples by searching for "predicting the next element" in Google.
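As a hedged illustration of the shape convention above (the numbers are arbitrary):
from keras.layers import Input, SimpleRNN
from keras.models import Model
inp = Input(shape=(10, 4))  # 10 time steps, 4 features each
single = SimpleRNN(8)(inp)  # one result per sequence: shape (None, 8)
sequence = SimpleRNN(8, return_sequences=True)(inp)  # one result per step: shape (None, 10, 8)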
2 - How to connect layers using Model:
Instead of adding layers to a sequential model (which will always follow a straight line), start using the layers independently, starting from an input tensor:
from keras.layers import *
from keras.models import Model
inputTensor = Input(shapeOfYourInput) #it seems the shape is "(2,)", but we must see your data.
#A dense layer with 2 outputs:
myDense = Dense(2, activation=ItsAGoodIdeaToUseAnActivation)
#The output tensor of that layer when you give it the input:
denseOut1 = myDense(inputTensor)
#You can do as many cycles as you want here:
denseOut2 = myDense(denseOut1)
#you can even make a loop:
denseOut = Activation(ItsAGoodIdeaToUseAnActivation)(inputTensor) #you may create a layer and call it with the input tensor in just one line if you're not going to reuse the layer
#I'm applying this activation layer here because since we defined an activation for the dense layer and we're going to cycle it, it's not going to behave very well receiving huge values in the first pass and small values the next passes....
for i in range(n):
    denseOut = myDense(denseOut)
This kind of usage allows you to create any kind of model, with branches, alternative ways, connections from anywhere to anywhere, provided you respect the shape rules. For a cycle like that, inputs and outputs must have the same shape.
At the end, you must define a model from one or many inputs to one or many outputs (you must have training data to match all inputs and outputs you choose):
model = Model(inputTensor,denseOut)
But notice that this model is static. If you want to change the number of cycles, you will have to create a new model.
In this case, it would be as simple as repeating the loop step denseOut = myDense(denseOut) and creating another model2=Model(inputTensor,denseOut).
3 - Trying to create something like the image below:
I am supposing C and F will participate in all iterations.
Since there are four actual inputs, and we are going to treat them all separately, let's create 4 inputs instead, all like (1,).
Your input array should be divided in 4 arrays, all being (10,1).
from keras.models import Model
from keras.layers import *
inputA = Input((1,))
inputB = Input((1,))
inputC = Input((1,))
inputF = Input((1,))
Now the layers N2 and N3, which will be used only once, since C and F are constant:
outN2 = Dense(1)(inputC)
outN3 = Dense(1)(inputF)
Now the recurrent layer N1, without giving it the tensors yet:
layN1 = Dense(1)
For the loop, let's create outA and outB. They start as the actual inputs and will be given to the layer N1, but in the loop they will be replaced.
outA = inputA
outB = inputB
Now in the loop, let's do the "passes":
for i in range(n):
    # unite A and B in one
    inputAB = Concatenate()([outA, outB])
    # pass through N1
    outN1 = layN1(inputAB)
    # sum results of N1 and N2 into A
    outA = Add()([outN1, outN2])
    # this is constant for all the passes except the first
    outB = outN3  # looks like B is never changing in your image...
Now the model:
finalOut = Concatenate()([outA,outB])
model = Model([inputA,inputB,inputC,inputF], finalOut)
