I'm working on a project and I need to make my CNN output the output of the "Flatten" layer.
No classification, just a vector of features for the input photo, and I'm kind of lost... I know everything about CNN structure, but how can I start doing this with Python?
Another alternative is the following.
Imagine you have a tf.keras model (here I take a small one for the sake of simplicity).
>>> model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 128) 100480
_________________________________________________________________
dense_2 (Dense) (None, 64) 8256
_________________________________________________________________
dense_3 (Dense) (None, 32) 2080
_________________________________________________________________
dense_4 (Dense) (None, 1) 33
=================================================================
Total params: 110,849
Trainable params: 110,849
Non-trainable params: 0
_________________________________________________________________
Let's say you want a 32-dimensional feature vector, corresponding to the layer dense_3.
Now you can create another model object:
outputs = model.get_layer('dense_3').output
child_model = tf.keras.Model(inputs=model.inputs, outputs=outputs)
and your child model does what you want. You need to call
features = child_model.predict(image)
Important note: if you print model.layers and child_model.layers, you will not be surprised to see that they share the same layer objects at the same memory addresses. This means that training one model will also update the other's layer weights.
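As a quick, illustrative check of that note (a minimal sketch, assuming the model above):
# The child model reuses the parent's layer objects, so this prints True:
print(model.get_layer('dense_3') is child_model.get_layer('dense_3'))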
Are you using Keras? If so, you can take a look at the feature-extraction examples here: https://keras.io/api/applications/#extract-features-with-vgg16
Basically, you do
features = model.predict(x)
np.save(outfile, features)  # outfile is your desired output filename
You can load the file back with
features = np.load(outfile)
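For completeness, a minimal sketch along the lines of the linked VGG16 example; 'elephant.jpg' and 'features.npy' are placeholder filenames:
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

# include_top=False drops the classifier head, so predict() returns features
model = VGG16(weights='imagenet', include_top=False)

img = image.load_img('elephant.jpg', target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)  # add the batch dimension
x = preprocess_input(x)

features = model.predict(x)
np.save('features.npy', features)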
I am working on a gesture recognition problem, for which I have a training set. The training set consists of multiple folders, and each folder contains a series of 30 images from which the model is trained. I also have a CSV file that contains the class label of each folder. The class labels are: "Left Swipe", "Right Swipe", "Stop", "Thumbs Down" and "Thumbs Up", and they are stored in one np.array variable, train_class. I have created a CNN model and am feeding it into a Sequential model.
The code is available at the GitHub location below:
https://github.com/subhrajyoti-ghosh/ML-and-Deep-Learning/blob/main/Gesture_Recognition.ipynb
But when I try to fit the model, I receive an error. Can you please help me understand the error and how to solve it?
You are trying to use a TimeDistributed layer on a 2D input (batch_size, 256), which will not work, because the layer needs at least a 3D tensor. You should try using tf.keras.layers.RepeatVector:
import tensorflow as tf

# ResNet50 backbone followed by a small convolutional head
resnet = tf.keras.applications.ResNet50(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
cnn = tf.keras.Sequential([resnet])
cnn.add(tf.keras.layers.Conv2D(64, (2, 2), strides=(1, 1)))
cnn.add(tf.keras.layers.Conv2D(16, (3, 3), strides=(1, 1)))
cnn.add(tf.keras.layers.Flatten())

inputs = tf.keras.layers.Input(shape=(224, 224, 3))
x = cnn(inputs)
# Repeat the 2D feature vector 30 times to get the 3D tensor the GRUs need
x = tf.keras.layers.RepeatVector(n=30)(x)
x = tf.keras.layers.GRU(16, return_sequences=True)(x)
x = tf.keras.layers.GRU(8)(x)
outputs = tf.keras.layers.Dense(5, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)

dummy_x = tf.random.normal((1, 224, 224, 3))
print(model.summary())
print(model(dummy_x))
Model: "model_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_14 (InputLayer) [(None, 224, 224, 3)] 0
sequential_6 (Sequential) (None, 256) 24121296
repeat_vector_2 (RepeatVector) (None, 30, 256) 0
gru_5 (GRU) (None, 30, 16) 13152
gru_6 (GRU) (None, 8) 624
dense_7 (Dense) (None, 5) 45
=================================================================
Total params: 24,135,117
Trainable params: 24,081,997
Non-trainable params: 53,120
_________________________________________________________________
None
I'm trying to get some heatmaps from a computer vision model that is already working to classify images, but I'm running into some difficulties.
This is the model summary:
model.summary()
Model: "model_4"
Layer (type) Output Shape Param #
=================================================================
input_9 (InputLayer) [(None, 512, 512, 1)] 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 512, 512, 3) 30
_________________________________________________________________
densenet121 (Functional) (None, 1024) 7037504
_________________________________________________________________
dense_4 (Dense) (None, 100) 102500
_________________________________________________________________
dropout_4 (Dropout) (None, 100) 0
_________________________________________________________________
predictions (Dense) (None, 2) 202
=================================================================
Total params: 7,140,236
Trainable params: 7,056,588
Non-trainable params: 83,648
As part of the standard process to create a heatmap, I know I have to access the last convolutional layer in the model, which in this case I'd say is a layer inside the DenseNet121, but I cannot find a way to access the layers belonging to densenet121.
Right now I've been using the conv2d_4 layer to run some tests, but I feel that is not the right way, because that layer comes before all the transfer learning work from DenseNet.
Also, I looked up "Functional" layers in the official Keras documentation but couldn't find them, so I guess it's not a layer; it's the whole DenseNet model embedded there, but I cannot find a way to access it.
By the way, here I share the model construction because it may help to answer this:
from tensorflow.keras.applications.densenet import DenseNet121
from tensorflow.keras.layers import Input, Conv2D, Dense, Dropout
from tensorflow.keras.models import Model

num_classes = 2
input_tensor = Input(shape=(IMG_SIZE, IMG_SIZE, 1))  # IMG_SIZE is 512, per the summary above
x = Conv2D(3, (3, 3), padding='same')(input_tensor)
x = DenseNet121(include_top=False, classes=2, pooling="avg", weights="imagenet")(x)
x = Dense(100)(x)
x = Dropout(0.45)(x)
predictions = Dense(num_classes, activation='softmax', name="predictions")(x)
model = Model(inputs=input_tensor, outputs=predictions)
I found you can use
.get_layer()
twice to access layers inside the functional DenseNet model embedded in the "main" model.
In this case I can use model.get_layer('densenet121').summary() to inspect all the layers inside the embedded model, and then access them with this code: model.get_layer('densenet121').get_layer('xxxxx')
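A minimal sketch of that nested access pattern; the inner layer name below is only illustrative, so check model.get_layer('densenet121').summary() for the actual names:
import tensorflow as tf

densenet = model.get_layer('densenet121')

# 'conv5_block16_concat' is an example name -- substitute the layer you need
last_conv = densenet.get_layer('conv5_block16_concat')

# A feature extractor over the embedded model, e.g. as a first step for heatmaps
feature_model = tf.keras.Model(inputs=densenet.inputs, outputs=last_conv.output)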
I have a simple multi-layer perceptron for the MNIST classification problem.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
When printing the summary I receive the following output:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten_8 (Flatten) (None, 784) 0
_________________________________________________________________
dense_16 (Dense) (None, 128) 100480
_________________________________________________________________
dense_17 (Dense) (None, 10) 1290
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
_________________________________________________________________
How do I interpret the output shape printed in the summary? Why is there a None term in the output shape tuple? Why is it not just (784) in the first layer?
The "None" value refers to the number of input samples (the batch size). To allow you to train on different sized training sets, this value is None. If it were a number, let's say 50 for example, that means you can only train on exactly 50 samples which is usually not very useful (but does occasionally have applications).
I am working on a text classification problem. My model looks like this:
Model: "sequential_6"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_6 (Embedding) (None, 100, 50) 676050
_________________________________________________________________
lstm_6 (LSTM) (None, 16) 4288
_________________________________________________________________
dropout_1 (Dropout) (None, 16) 0
_________________________________________________________________
dense_6 (Dense) (None, 3) 51
=================================================================
Total params: 680,389
Trainable params: 680,389
Non-trainable params: 0
_________________________________________________________________
None
The dataset contains around 5300 sentences. I am using validation_split=0.33.
The model behaves in an abnormal way: the validation loss keeps increasing while the validation accuracy stays roughly constant. I am attaching the graph.
Please guide me on how to solve this issue.
The model construction code looks like this:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, GlobalMaxPool1D, Dense

model = Sequential()
model.add(Embedding(
    num_words,
    EMBEDDING_DIM,
    input_length=MAX_SEQUENCE_LENGTH
))
model.add(LSTM(32, return_sequences=True))
model.add(Dropout(0.5))
model.add(GlobalMaxPool1D())
model.add(Dense(len(possible_labels), activation="softmax"))
I am also attaching Accuracy Graph.
Increase dropout.
Train for fewer epochs.
Try Conv1D instead of LSTM to see if the overfitting goes away; a sketch combining these ideas is shown below.
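A minimal sketch of the Conv1D variant, reusing num_words, EMBEDDING_DIM, MAX_SEQUENCE_LENGTH and possible_labels from the question; the filter count, kernel size and dropout rate are assumptions to tune:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, Dropout, GlobalMaxPool1D, Dense

model = Sequential([
    Embedding(num_words, EMBEDDING_DIM, input_length=MAX_SEQUENCE_LENGTH),
    Conv1D(32, 5, activation='relu'),  # assumed filters and kernel size
    Dropout(0.6),                      # increased from 0.5
    GlobalMaxPool1D(),
    Dense(len(possible_labels), activation='softmax'),
])
model.compile(loss='categorical_crossentropy',  # or sparse_categorical_crossentropy, depending on the label encoding
              optimizer='adam', metrics=['accuracy'])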
I am using Keras with the TF backend to build a simple Conv1D net. The data has the following shape:
train feature shape: (33960, 3053, 1)
train label shape: (33960, 686, 1)
I build my model with:
from keras.layers import Input, Conv1D, MaxPool1D, Flatten, Dense
from keras.models import Model

def create_conv_model():
    inp = Input(shape=(3053, 1))
    conv = Conv1D(filters=2, kernel_size=2)(inp)
    pool = MaxPool1D(pool_size=2)(conv)
    flat = Flatten()(pool)
    dense = Dense(686)(flat)
    model = Model(inp, dense)
    model.compile(loss='mse', optimizer='adam')
    return model
Model summary:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 3053, 1) 0
_________________________________________________________________
conv1d_1 (Conv1D) (None, 3052, 2) 6
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 1526, 2) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 3052) 0
_________________________________________________________________
dense_1 (Dense) (None, 686) 2094358
=================================================================
Total params: 2,094,364
Trainable params: 2,094,364
Non-trainable params: 0
Upon running
model.fit(x=train_feature,
          y=train_label_categorical,
          epochs=100,
          batch_size=64,
          validation_split=0.2,
          validation_data=(test_feature, test_label_categorical),
          callbacks=[tensorboard, reduce_lr, early_stopping])
I get the following very common error:
ValueError: Error when checking input: expected input_1 to have 3 dimensions, but got array with shape (8491, 3053)
I've checked pretty much all the posts regarding this very common problem, but I've been unable to find a solution. What am I doing wrong? I don't understand what's going on. Where is the shape (8491, 3053) coming from?
Any help would be much appreciated; I have not been able to make this go away.
Change validation_data=(test_feature, test_label_categorical) in the model.fit call to
validation_data=(np.expand_dims(test_feature, -1), test_label_categorical)
The model expects validation features of shape (8491, 3053, 1), but the code above provides them with shape (8491, 3053).
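For clarity, a minimal sketch of what that expansion does (test_feature is the array from the question):
import numpy as np

# test_feature has shape (8491, 3053); the Conv1D input expects a
# trailing channel axis, i.e. (8491, 3053, 1):
test_feature_3d = np.expand_dims(test_feature, -1)
print(test_feature_3d.shape)  # (8491, 3053, 1)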