Extract the output of a CNN - Python

I have trained a CNN model to classify images of dogs and cats, and it is giving 98% accuracy.
But I want to visualize the output of the CNN layers, i.e. the features from which my CNN predicts whether the image is a dog or a cat.
Is there any way to visualize the output of the CNN?

You can divide your model into two models.
Previous model:

input = Input(...)
# ... your layers, producing a tensor h
output = Dense(1)(h)
old_model = Model(inputs=[input], outputs=[output])

New model:

input = Input(...)
# add the first layers and the CNN here
cnn_layer = Conv2D(...)(input)
feature_extraction_model = Model(inputs=[input], outputs=[cnn_layer])

input_cnn = Input(...)  # the shape of your CNN output
# add the classification layers here
output = Dense(1)(input_cnn)
classifier_model = Model(inputs=[input_cnn], outputs=[output])

Now you define the new model as the composition of feature_extraction_model and classifier_model:

new_model = Model(inputs=[input], outputs=classifier_model(cnn_layer))

# train the combined model
new_model.fit(x, y)

After training you have access to the CNN layer's output:

cnn_output = feature_extraction_model.predict(x)
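If the model is already trained, a simpler alternative (not from the original answer, but standard Keras functional API) is to build the feature extractor directly from the trained model's layers, with no retraining. A minimal sketch, where 'conv2d' is a hypothetical layer name:

from tensorflow.keras.models import Model
import matplotlib.pyplot as plt

# 'conv2d' is a placeholder; check old_model.summary() for the real layer name
feature_extractor = Model(inputs=old_model.input,
                          outputs=old_model.get_layer('conv2d').output)

feature_maps = feature_extractor.predict(x)  # shape: (batch, h, w, channels)

# each channel is one learned feature map; plot a few for the first image
for i in range(4):
    plt.subplot(1, 4, i + 1)
    plt.imshow(feature_maps[0, :, :, i], cmap='viridis')
plt.show()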

Related

Get outputs of classifier layers from a '.pt' pretrained AlexNet model

I have a '.pt' file that contains an AlexNet model trained on my dataset. How can I get the "out_features" of the classifier layers (layers 1 & 4) after running the model on a different dataset?
I need this data as input for an SVM.
I have tried:
Model(inputs, outputs=model.classifier[1].out_features)
model.classifier[1].out_features(inputs)
model.classifier[1].parameters(torch.tensor(inputs))
but none of them worked.
First you have to load the model:

import torch
import torch.nn as nn

model = torch.load('model.pt')

After that you have to truncate the classifier. Note that [:-4] removes the last four layers of the standard AlexNet classifier, so the truncated model outputs the activations just after classifier[1] (and its ReLU); use [:-2] instead to stop after classifier[4]:

features = list(model.classifier.children())[:-4]  # drop the last four layers
model.classifier = nn.Sequential(*features)

Then you get that layer's features (its activations, not its weights) by applying the model to your inputs:

out = model(inputs)
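If you need the outputs of both layers (1 and 4) in a single pass without modifying the model, a sketch using PyTorch forward hooks (assuming the standard torchvision AlexNet layout) might look like this:

import torch
from torchvision import models

model = models.alexnet(pretrained=True)  # or your own: torch.load('model.pt')
model.eval()

captured = {}

def save_output(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

# hook the two Linear layers of interest
model.classifier[1].register_forward_hook(save_output('fc1'))
model.classifier[4].register_forward_hook(save_output('fc4'))

with torch.no_grad():
    model(torch.randn(8, 3, 224, 224))  # dummy batch of 8 images

# captured['fc1'] and captured['fc4'] each have shape (8, 4096);
# convert with .numpy() to use as SVM inputs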

How do I interpret the model summary of merged models in Keras?

I want to build a model with many smaller models' outputs merged into one. I want 146 networks, each taking 17 inputs and giving a probability as output. The outputs of all these networks need to be merged and used as a single unit. For this I did something like:
def build(layer_str, actv):
    # take the input layer structure and convert it into a list
    layers = layer_str.split("-")
    # print(layers)
    # convert the strings in the list to integers
    layers = list(map(int, layers))
    # let's build our model
    model = tf.keras.Sequential()
    # we add the first hidden layer and the input layer to our network
    model.add(Dense(layers[1], input_shape=(layers[0],), activation=actv[0]))
    # we add the remaining hidden layers
    for (x, i) in enumerate(layers):
        if x > 1 and x != (len(layers) - 1):
            model.add(Dense(i, activation=actv[x]))
    # then add the final layer
    model.add(Dense(layers[-1], activation=actv[-1]))
    # return the constructed model
    return model
Then I merged the models like this:
def Merge_model(layer, act, data, label, lr, epochs, batch_size):
    model_list = []
    for i in range(146):
        model = nn.build(layer, act)
        model_list.append(model)
    merged_layers = concatenate([model_list[i].output for i in range(146)])
    x = merged_layers
    out = Activation('sigmoid')(x)
    merged_model = Model([model_list[i].input for i in range(146)], [out])
    print(merged_model.summary())
    merged_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    result, predictions = nn.train_eval(data, label, merged_model, lr, epochs, batch_size)

data = np.random.rand(10, 146, 17)
data = [d for d in data]
label = np.random.randint(0, 1, (10, 146, 1))
label = [lb for lb in label]
print(len(label[0]))
lr = 0.01
epochs = 100
batch_size = 16
Merge_model("17-7-1", ["relu", "sigmoid"], data, label, lr, epochs, batch_size)
I get the model summary (screenshot linked below) but do not understand what to make of it. What are my training data and layer shapes supposed to be?
https://drive.google.com/file/d/1juffdLY0i9f9rgldKfHG_MYXCK8wBV09/view?usp=sharing
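For reference, a model merged this way expects a list of 146 separate input arrays (one per subnetwork), each of shape (batch, 17), and, since the 146 sigmoid outputs are concatenated, a single label array of shape (batch, 146). A minimal sketch of that data layout (assuming merged_model is exposed by Merge_model):

data = np.random.rand(10, 146, 17)
inputs = [data[:, i, :] for i in range(146)]  # 146 arrays, each of shape (10, 17)
labels = np.random.randint(0, 2, (10, 146))   # one 0/1 target per subnetwork output
merged_model.fit(inputs, labels, epochs=epochs, batch_size=batch_size)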

Keras model with 2 inputs during training, but only 1 during inference

A similar question has been asked before, but the answer was not satisfactory.
Given a model made using Functional API in keras.
During training, the model has two inputs and one output. One input is an image. The other input is an array of costs needed by a custom loss function.
During inference, however, we only get the image as input and no costs, hence one input and one output.
How can the same model, trained on two inputs, be adapted for inference?
The model during training is somewhat like this :
input1 = Input(shape=(64,64,3))  # RGB image
input2 = Input(shape=(4,))  # costs associated with the image, used by the custom loss function
conv1 = Conv2D(16, 3, padding='same', activation='relu')(input1)
# other layers, producing a tensor x
output = Dense(6)(x)  # last layer gives the classification output
model = Model(inputs=[input1, input2], outputs=output)
model.compile(loss=custom_loss_function(input2), optimizer='adam')
This is the model during training.
What should be done during inference, when only the image input is available and no cost inputs are present?
One option is to feed a "dummy" second input at inference time, since it doesn't affect the forward pass. Better, you can wrap your inference model in a training model.
In pseudo-code:
def make_model():
    input1 = Input(...)
    conv1 = Conv2D(16, 3, padding='same', activation='relu')(input1)
    # other layers, producing a tensor x
    output = Dense(6)(x)  # last layer gives the classification output
    return keras.Model(input1, output)

def make_train_model():
    input1 = Input(...)
    input2 = Input(...)
    m_inner = make_model()
    output = m_inner(input1)
    model = keras.Model([input1, input2], output)
    model.compile(...)
    return model, m_inner
You can train using model and save the inner model for inference.
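A brief usage sketch under the same assumptions (images, costs, labels, and new_images are placeholder names, not from the original answer):

train_model, inference_model = make_train_model()
train_model.fit([images, costs], labels, epochs=10)

# at inference time only the image input exists
preds = inference_model.predict(new_images)
inference_model.save('inference_model.h5')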

Feature extraction from LSTM to Sklearn models

I have an LSTM model and I want to extract features from it to feed into a random forest or a logistic regression in sklearn.
inputs = tf.keras.Input(shape=(t+1, n_features))
x = tf.keras.layers.LSTM(128, dropout=0.1, return_sequences=True)(inputs)
x1 = tf.keras.layers.LSTM(128, dropout=0.1, return_sequences=False)(x)
o = tf.keras.layers.Dense(3, activation='softmax')(x1)
model = tf.keras.Model(inputs=inputs, outputs=o)
So I want to use x1 as the input of my random forest.
Any ideas?
Thanks :)
Just create a model with the desired input/output tensors. For example:
feat_extractor = tf.keras.Model(inputs=inputs, outputs=x1)
# Then, assuming X is a batch of input patterns:
feats = feat_extractor.predict(X)
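For completeness, a minimal sketch of the sklearn side (X_train, y_train, etc. are placeholder names; the Keras model should be trained before extracting features):

from sklearn.ensemble import RandomForestClassifier

feats_train = feat_extractor.predict(X_train)  # shape: (n_samples, 128)
rf = RandomForestClassifier(n_estimators=100)
rf.fit(feats_train, y_train)                   # y_train as integer class labels

feats_test = feat_extractor.predict(X_test)
print(rf.score(feats_test, y_test))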

Combine outputs of two Pre Trained models (trained on different dataset) and use some form of binary classifier to predict images

I have two Pre-Trained models.
Model_1 = Inception model trained on the ImageNet dataset (1000 classes)
My_Model = Inception model trained on a custom dataset (20 classes) via transfer learning and fine-tuning
I would like to combine the outputs of both models (Model_1 and My_Model) in a new layer.
The new layer should use some binary classifier to tell whether to use Model_1 or My_Model for prediction based on the input image.
For Example:
If I try to predict a "Dog" image, the binary classifier combining both models should say that Model_1 must be used to predict the dog image (since My_Model was not trained on dog images, whereas Model_1 was).
Can anyone tell me how to achieve this? Some example implementation or code snippet would be helpful.
Thanks
To do this you need to make a combined model and then train it on another custom dataset; here is an example of what the combined model can look like. To make the dataset, simply take each image and decide which model you'd like to use for it, then train the combined model's output to give a positive value for one model and a negative value for the other. Hope it helps.
import numpy as np
import pandas as pd
import keras
from keras.layers import Dense, Flatten, Concatenate, Input
from keras.models import Model
from tensorflow.python.client import device_lib
# check for my gpu
print(device_lib.list_local_devices())
# making some models like the ones you have
input_shape = (10000, 3)

m1_input = Input(shape=input_shape, name="m1_input")
fc = Flatten()(m1_input)
m1_output = Dense(1000, activation='sigmoid', name="m1_output")(fc)
Model_1 = Model(m1_input, m1_output)

m2_input = Input(shape=input_shape, name="m2_input")
fc = Flatten()(m2_input)
m2_output = Dense(20, activation='sigmoid', name="m2_output")(fc)
My_Model = Model(m2_input, m2_output)
# set the trained models to be untrainable
for layer in Model_1.layers:
    layer.trainable = False
for layer in My_Model.layers:
    layer.trainable = False
#build a combined model
combined_model_input = Input(shape = input_shape, name = "combined_model_input")
m1_predict = Model_1(combined_model_input)
m2_predict = My_Model(combined_model_input)
combined = Concatenate()([m1_predict, m2_predict])
fc = Dense(500, activation='sigmoid',name = "fc1")(combined)
fc = Dense(100, activation='sigmoid',name = "fc2")(fc)
output_layer = Dense(1, activation='tanh', name="fc3")(fc)  # tanh output in [-1, 1]
model = Model(combined_model_input, output_layer)
#check the number of parameters that are trainable
print(model.summary())
# pseudocode to show how to make a training set for the combined model:
combined_model_y = []
for im in images:
    if class_of(im) in list_of_my_model_classes:
        combined_model_y.append(1)
    else:
        combined_model_y.append(-1)
combined_model_y = np.array(combined_model_y)
# then train the combined model:
# note: hinge loss matches the tanh output and the -1/+1 labels above;
# binary_crossentropy would instead need a sigmoid output and 0/1 labels
model.compile('adam', loss='hinge')
model.fit(images, combined_model_y, ....)
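At inference time the sign of the combined model's output can then route each image to the right classifier; a minimal sketch (image is a placeholder for a single preprocessed input):

batch = np.expand_dims(image, 0)
score = model.predict(batch)[0, 0]  # tanh output in [-1, 1]
if score > 0:
    preds = My_Model.predict(batch)  # the 20 custom classes
else:
    preds = Model_1.predict(batch)   # the 1000 ImageNet classes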
