Updating Number of Neurons in a Keras Layer

I'm working on a callback to dynamically "split" the number of neurons in a network at the end of each epoch. However, I'm having some trouble figuring out how to update the layer size. Here is a simplified example:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Model / data parameters
num_classes = 10
num_neurons = 32
input_shape = (28, 28, 1)

model = keras.Sequential(
    [
        keras.Input(shape=input_shape),
        layers.Flatten(),
        layers.Dense(num_neurons, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ]
)
model.summary()
modellayers = model.layers
c = 0
for l in modellayers:
    if c == 1:
        l.output_shape = [None, 64]
    if c == 2:
        l.input_shape = [None, 64]
    print(str(c), l.input)
    print(str(c), l.output)
    c += 1
This gives back the following error:
AttributeError: Can't set the attribute "output_shape", likely because it conflicts with an existing read-only @property of the object. Please choose a different name.
If I print the shapes I get back:
0 Tensor("input_1:0", shape=(None, 28, 28, 1), dtype=float32)
0 Tensor("flatten/Reshape:0", shape=(None, 784), dtype=float32)
1 Tensor("flatten/Reshape:0", shape=(None, 784), dtype=float32)
1 Tensor("dense/Relu:0", shape=(None, 32), dtype=float32)
2 Tensor("dense/Relu:0", shape=(None, 32), dtype=float32)
2 Tensor("dense_1/Softmax:0", shape=(None, 10), dtype=float32)
I see they are Tensors, so I've also tried set_shape, but that didn't work either. Basically, I want to know how to update the number of neurons in a layer object.
P.S. I'm not having difficulty in splitting weights and biases and transferring those to a new working model, as can be seen in the example here: https://jjohnson-777.medium.com/machine-learning-granularity-by-splitting-neurons-fd2f02e07817. But now I would like to develop a callback function to do this on the fly between epochs.
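Keras does not let you resize a built layer in place, so a workable pattern is the same rebuild-and-transfer approach as in the linked post, driven from a manual epoch loop. Below is a minimal sketch (an assumption, not the poster's code; it reuses input_shape and num_classes from the snippet above): the hidden layer is rebuilt with twice the neurons, each neuron is duplicated, and its outgoing weights are halved so the network's function is approximately preserved.
def split_hidden_layer(old_model):
    # Grab the trained weights: hidden Dense (784, n) and output Dense (n, 10).
    w_in, b_in = old_model.layers[1].get_weights()
    w_out, b_out = old_model.layers[2].get_weights()
    n = w_in.shape[1]
    # Rebuild the same architecture with twice the hidden neurons.
    new_model = keras.Sequential(
        [
            keras.Input(shape=input_shape),
            layers.Flatten(),
            layers.Dense(2 * n, activation="relu"),
            layers.Dense(num_classes, activation="softmax"),
        ]
    )
    # Duplicate each neuron; halve its outgoing weights so predictions match.
    new_model.layers[1].set_weights([np.repeat(w_in, 2, axis=1), np.repeat(b_in, 2)])
    new_model.layers[2].set_weights([np.repeat(w_out, 2, axis=0) / 2.0, b_out])
    # The loss/optimizer here are assumptions; use whatever the model trains with.
    new_model.compile(optimizer="adam", loss="categorical_crossentropy",
                      metrics=["accuracy"])
    return new_model

# model.fit cannot swap models mid-run, so split between single-epoch fits:
# for epoch in range(num_epochs):
#     model.fit(x_train, y_train, epochs=1)
#     model = split_hidden_layer(model)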

Related

How to Feed Tensor Dataset to Model

I am new to Tensorflow and trying to figure out how to build a simple text classification model. Taking a basic model from this tutorial, I am trying to adapt it to my own custom dataset.
I have tensors with shape=(32, 2, 500) grouped into training and validation datasets with shape=(None, 2, 500).
def get_model(max_features=20000, embedding_dim=128):
    # An integer input for vocab indices.
    inputs = tf.keras.Input(shape=(None,), dtype="int64")
    # Next, we add a layer to map those vocab indices into a space of
    # dimensionality 'embedding_dim'.
    x = layers.Embedding(max_features, embedding_dim)(inputs)
    x = layers.Dropout(0.5)(x)
    # Conv1D + global max pooling
    x = layers.Conv1D(128, 7, padding="valid", activation="relu", strides=3)(x)
    x = layers.Conv1D(128, 7, padding="valid", activation="relu", strides=3)(x)
    x = layers.GlobalMaxPooling1D()(x)
    # We add a vanilla hidden layer:
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    # We project onto a single unit output layer, and squash it with a sigmoid:
    predictions = layers.Dense(1, activation="sigmoid", name="predictions")(x)
    model = tf.keras.Model(inputs, predictions)
    # Compile the model with binary crossentropy loss and an adam optimizer.
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
    return model
I get the following warning:
WARNING:tensorflow:Model was constructed with shape (None, None) for input KerasTensor(type_spec=TensorSpec(shape=(None, None), dtype=tf.int64, name='input_16'), name='input_16', description="created by layer 'input_16'"), but it was called on an input with incompatible shape (None, 2, 500).
And the following error message:
Input 0 of layer "global_max_pooling1d_6" is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, 2, 53, 128)
Call arguments received by layer "model_7" (type Functional):
• inputs=tf.Tensor(shape=(None, 2, 500), dtype=int64)
• training=True
• mask=None
What do I need to change to get rid of this error and get the model working?
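Not an authoritative answer, but one way to reconcile the shapes: the model expects each example to be a single 1-D sequence of token ids (shape (None,)), while the dataset yields a (2, 500) pair per example. If the two rows really are two independent 500-token sequences, a map over the dataset can split them apart; the name train_ds and the label handling below are assumptions about the pipeline.
def flatten_pairs(features, labels):
    # (batch, 2, 500) -> (batch * 2, 500), duplicating labels to match.
    features = tf.reshape(features, (-1, 500))
    labels = tf.repeat(labels, 2)
    return features, labels

train_ds = train_ds.map(flatten_pairs)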

Remove only the last (dense) layer of an already trained model, keeping all the weights of the model intact, and add a different dense layer

I want to remove only the last dense layer from an already saved model in .h5 file and add a new dense layer.
Information about the saved model:
I used transfer learning on the EfficientNet B0 model and added a dropout with 2 dense layers. The last dense layer had 3 nodes equal to my number of classes, as shown below:
inputs = tf.keras.layers.Input(shape=(IMAGE_HEIGHT, IMAGE_WIDTH, 3))
x = img_augmentation(inputs)
model = tf.keras.applications.EfficientNetB0(include_top=False, input_tensor=x, weights="imagenet")
# Freeze the pretrained weights
model.trainable = False
# Rebuild top
x = tf.keras.layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Dropout(0.3)(x)
x = tf.keras.layers.Dense(5, activation=tf.nn.relu)(x)
outputs = tf.keras.layers.Dense(len(class_names), activation="softmax", name="pred")(x)
After training, I saved my model as my_h5_model.h5
Main Task: I want to use the saved model architecture with its weights and replace only the last dense layer with 4 nodes dense layer.
I tried many things suggested by the StackOverflow community, such as:
Iterate over all the layers except the last layer and add them to a separate already defined sequential model
new_model = Sequential()
for layer in model.layers[:-1]:
    new_model.add(layer)
But it gives an error which states:
ValueError: Exception encountered when calling layer "block1a_se_excite" (type Multiply).
A merge layer should be called on a list of inputs. Received: inputs=Tensor("Placeholder:0", shape=(None, 1, 1, 32), dtype=float32) (not a list of tensors)
Call arguments received:
• inputs=tf.Tensor(shape=(None, 1, 1, 32), dtype=float32)
I also tried the functional approach:
input_layer = model.input
for layer in model.layers[:-1]:
    x = layer(input_layer)
which throws the error shown below:
ValueError: Exception encountered when calling layer "stem_bn" (type BatchNormalization).
Dimensions must be equal, but are 3 and 32 for '{{node stem_bn/FusedBatchNormV3}} = FusedBatchNormV3[T=DT_FLOAT, U=DT_FLOAT, data_format="NHWC", epsilon=0.001, exponential_avg_factor=1, is_training=false](Placeholder, stem_bn/ReadVariableOp, stem_bn/ReadVariableOp_1, stem_bn/FusedBatchNormV3/ReadVariableOp, stem_bn/FusedBatchNormV3/ReadVariableOp_1)' with input shapes: [?,224,224,3], [32], [32], [32], [32].
Call arguments received:
• inputs=tf.Tensor(shape=(None, 224, 224, 3), dtype=float32)
• training=False
Lastly, I tried something that came to mind:
inputs = tf.keras.layers.Input(shape=(IMAGE_HEIGHT, IMAGE_WIDTH, 3))
x = img_augmentation(inputs)
x = model.layers[:-1](x)
x = keras.layers.Dense(5, name="compress_1")(x)
which simply gave the error:
'list' object is not callable
I did some more experiments and was able to remove the last layer and add a new dense layer:
# import the pretrained saved model
from tensorflow import keras
import tensorflow as tf

model = keras.models.load_model('/content/my_h5_model.h5')
# take the output of the second-to-last layer and attach a new head
x = model.layers[-2].output
outputs = tf.keras.layers.Dense(4, activation="softmax", name="predictions")(x)
model = tf.keras.Model(inputs=model.input, outputs=outputs)
model.summary()
In the saved model the last dense layer had 3 nodes, but in the current model I added one with 4 nodes. The last layers of the summary are shown below:
dropout_3 (Dropout) (None, 1280) 0 ['batch_normalization_4[0][0]']
dense_3 (Dense) (None, 5) 6405 ['dropout_3[0][0]']
predictions (Dense) (None, 4) 24 ['dense_3[0][0]']
==================================================================================================
Have you tried switching between import keras and tensorflow.keras in your imports? This has fixed similar issues.
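Separately, one follow-up worth sketching (an assumption, not part of the original answer): since the goal is to keep the transferred weights intact, freeze every reused layer so only the new 4-node head trains once the rebuilt model is compiled.
# Freeze all layers except the new "predictions" head, then re-compile.
# The loss/optimizer choices here are assumptions about the training setup.
for layer in model.layers[:-1]:
    layer.trainable = False
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])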

How to fix error with Keras Flatten layers?

This is my code
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(1, 11)),
    keras.layers.Dense(4, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
My data is 1000 rows with 11 columns (11 inputs for the model), so to build the input layer of the NN I used Flatten. This gives me the error:
WARNING:tensorflow:Model was constructed with shape (None, 1, 11) for input KerasTensor(type_spec=TensorSpec(shape=(None, 1, 11), dtype=tf.float32, name='flatten_1_input'), name='flatten_1_input', description="created by layer 'flatten_1_input'"), but it was called on an input with incompatible shape (None, 11).
It seems like your input shape is (num_inputs, 11) already so you don't need to flatten it. Taking out the Flatten layer should fix this.
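A minimal sketch of the fix, assuming each sample is a flat vector of 11 features:
model = keras.Sequential([
    # No Flatten needed: each sample is already a 1-D vector of 11 features.
    keras.layers.Dense(4, activation='relu', input_shape=(11,)),
    keras.layers.Dense(10, activation='softmax')
])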

Add GlobalAveragePooling2D (before ResNet50)

I'm trying to build a model using ResNet50 for image classification into 6 classes, and I want to reduce the dimensions of the images before using them to train the ResNet50 model. To do this I start by creating a ResNet50 model using the one built into Keras:
ResNet = ResNet50(
    include_top=None, weights='imagenet', input_tensor=None,
    input_shape=(64, 109, 3), pooling=None, classes=6)
And then I create a sequential model that includes ResNet50 but adding some final layers for the classification and also the first layer for dimensionality reduction before using ResNet50:
(About the input shape: the images I'm using have dimensions 128x217, and the 3 is for the channels that ResNet needs.)
model = models.Sequential()
model.add(GlobalAveragePooling2D(input_shape = ([128, 217, 3])))
model.add(ResNet)
model.add(GlobalAveragePooling2D())
model.add(Dense(units=512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=6, activation='softmax'))
But this doesn't work, because the shape after the first global average pooling doesn't match the input shape ResNet expects. The error I get is:
WARNING:tensorflow:Model was constructed with shape (None, 64, 109, 3) for input Tensor("input_6:0", shape=(None, 64, 109, 3), dtype=float32), but it was called on an input with incompatible shape (None, 3).
ValueError: Input 0 of layer conv1_pad is incompatible with the layer: expected ndim=4, found ndim=2. Full shape received: [None, 3]
I think I understand what the problem is, but I don't know how to fix it, since (None, 3) is not a valid input shape for ResNet50. How can I fix this? Thank you! :)
You should first understand what GlobalAveragePooling actually does. This layer cannot be applied right after the input, because it collapses each image to a single average value per channel (in your case 3 values, because you have 3 channels).
You have to use another method to reduce the size of the images (e.g., simply resizing them to something smaller).
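For example, a resizing layer can do the downscaling inside the model. A minimal sketch, assuming TF 2.6+ where tf.keras.layers.Resizing is available (older versions expose it as layers.experimental.preprocessing.Resizing):
model = models.Sequential()
# Downscale the 128x217 images to the 64x109 input ResNet was built for.
model.add(tf.keras.layers.Resizing(64, 109, input_shape=(128, 217, 3)))
model.add(ResNet)
model.add(GlobalAveragePooling2D())
# ... rest of the classification head as before.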

Keras: Input 0 of layer sequential is incompatible with the layer

I am trying to create a neural network model with one hidden layer and then trying to evaluate it, but I am getting an error that I am not able to understand clearly:
ValueError: Input 0 of layer sequential_1 is incompatible with the layer: : expected min_ndim=2, found ndim=1. Full shape received: [30]
It looks like I have an error with the dimensions of my input layer, but I can't quite spot it. I've googled and looked on StackOverflow, but haven't found anything that worked so far. Any help please?
Here's a minimal working example:
import tensorflow as tf

# Define Sequential model with 3 layers
input_dim = 30
num_neurons = 10
output_dim = 12

model = tf.keras.Sequential(
    [
        tf.keras.layers.Dense(input_dim, activation="relu", name="layer1"),
        tf.keras.layers.Dense(num_neurons, activation="relu", name="layer2"),
        tf.keras.layers.Dense(output_dim, name="layer3"),
    ]
)
model(tf.ones(input_dim))
Layers have an input and an output dimension. Layers in the "middle" of the network infer their input dimension from the output dimension of the previous layer. The only exception is the first layer, which has nothing to go by and requires input_dim to be set. Here is how to fix your code; note how the dimensions are passed: the first (hidden) layer is input_dim x num_neurons, the second (output) layer num_neurons x output_dim.
You can stick more layers in between the two; they only require their first argument, the output dimension.
Also note I had to fix your last line as well: tf.ones needs a 2-D shape, num_observations x input_dim.
import tensorflow as tf

# Define Sequential model with 1 hidden layer
input_dim = 30
num_neurons = 10
output_dim = 12

model = tf.keras.Sequential(
    [
        tf.keras.layers.Dense(num_neurons, input_dim=input_dim, activation="relu", name="layer1"),
        tf.keras.layers.Dense(output_dim, name="layer3"),
    ]
)
model(tf.ones((1, input_dim)))
produces (for me; the exact numbers come from random weight initialization):
<tf.Tensor: shape=(1, 12), dtype=float32, numpy=
array([[ 0.06973769, -0.1798143 , -0.2920275 , 0.84811246, 0.44899416,
-0.10300556, 0.00831143, -0.16158538, 0.13395026, 0.4352504 ,
0.19114715, 0.44100884]], dtype=float32)>
