Related
I am using a ResNet50 as a base model to predict multiple labels in an image and sum up the respective values of those labels.
reading the data:
import os
import numpy as np
from PIL import Image
from tensorflow.keras.utils import to_categorical

#read the data
data_path = '/content/drive/MyDrive/Notifyer-dataset/dataset'

def load_dataset(folder):
    X = []  # create an empty list to store the images
    y = []  # create an empty list to store the labels
    # get a list of all the files in the folder
    filenames = os.listdir(folder)
    # iterate over the files
    for filename in filenames:
        # get the label from the filename
        label = filename.split('_')[0]
        # open the image file and convert it to a NumPy array
        image = Image.open(os.path.join(folder, filename))
        image = image.resize((200, 200))  # resize the image to 200x200
        image = image.convert('RGB')      # convert the image to RGB
        image = np.array(image)           # shape (200, 200, 3)
        # append the image and label to the lists
        X.append(image)
        y.append(label)
    # convert the lists to NumPy arrays
    X = np.array(X)
    y = np.array(y)
    # preprocessing
    X = X.reshape(-1, 200, 200, 3)  # (num_images, 200, 200, 3)
    X = X / 255.0  # normalize pixel values
    # one hot encoding (map the label strings to integer indices first)
    classes, y = np.unique(y, return_inverse=True)
    num_classes = len(classes)
    y = to_categorical(y, num_classes)
    return X, y, num_classes
X, y, num_classes = load_dataset(data_path)
building the model:
def build_r_cnn_model(num_classes):
"""
Build a region-based CNN model.
Parameters:
num_classes (int): number of classes to classify
Returns:
Model: the R-CNN model
"""
# load the ResNet50 model pre-trained on ImageNet
base_model = ResNet50(weights='imagenet', include_top=False, input_shape=(200, 200, 3))
# freeze the base model layers
for layer in base_model.layers:
layer.trainable = False
# add a global average pooling layer
x = base_model.output
x = tf.keras.layers.GlobalAveragePooling2D()(x)
# add a fully-connected layer
x = tf.keras.layers.Dense(1024, activation='relu')(x)
# add a dropout layer
x = tf.keras.layers.Dropout(0.5)(x)
# add a classification layer
predictions = tf.keras.layers.Dense(num_classes, activation='softmax')(x)
#build the model
model = Model(inputs=base_model.input, outputs=predictions)
return model
compiling the model:
# build and compile the model
model = build_r_cnn_model(num_classes)
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
training the model:
#train
history = model.fit(X_train, y_train, epochs=10, batch_size=128, validation_data=(X_val, y_val))
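Here X_train, y_train, X_val and y_val are assumed to come from a train/validation split of X and y, for example (a sketch using scikit-learn):
from sklearn.model_selection import train_test_split

# assumed split: hold out 20% of the data for validation
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)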
function to sum up all label values in the image:
#function to calculate total sum of value of predicted labels
def predict_total_sum(model, image):
y_pred = model.predict(image) # classify the image
# define a lookup table to map class indices to values
value_lookup = {
0: 1, # class 0 corresponds to value 1
1: 2, # class 1 corresponds to value 2
}
total_sum = 0
for prediction in y_pred:
# get the class index with the highest predicted probability
class_index = np.argmax(prediction)
print(class_index)
# add the value of the detected denomination to the total sum
total_sum += value_lookup[class_index]
return total_sum
It returns a value of 1 or 2 for every image on each run, which means it is predicting only one label even when the image contains objects of both labels.
My dataset is small and every image in it contains an object of only one of the labels. Do I need to diversify my dataset so the model can identify both labels in an image, or is there something wrong with the model architecture? I have also tried building a CNN model from scratch, but it gives the same result...
I think the output of model.predict has shape [1, num_of_classes] (you can verify this by printing its shape once). Hence, when you loop over y_pred you iterate only once and add only one class's value to total_sum. Even if the shape were [num_of_classes], this is still not how multi-class classification should be handled. I would suggest reading more about how multi-class classification is done.
You can take help from this link: https://www.kaggle.com/code/prateek0x/multiclass-image-classification-using-keras
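If an image can contain objects of both labels at once, one option (beyond the multi-class setup in the link) is a multi-label head: sigmoid outputs trained with binary_crossentropy against multi-hot targets, with each class thresholded independently at prediction time. A minimal sketch, assuming the same ResNet50 backbone and two labels worth 1 and 2:
import numpy as np
import tensorflow as tf

def build_multilabel_model(num_classes, input_shape=(200, 200, 3)):
    base = tf.keras.applications.ResNet50(weights='imagenet', include_top=False,
                                          input_shape=input_shape)
    base.trainable = False                      # freeze the pretrained backbone
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dense(1024, activation='relu')(x)
    # sigmoid instead of softmax: every label is predicted independently
    outputs = tf.keras.layers.Dense(num_classes, activation='sigmoid')(x)
    model = tf.keras.Model(base.input, outputs)
    model.compile(loss='binary_crossentropy', optimizer='adam',
                  metrics=['binary_accuracy'])
    return model

def predict_total_sum(model, image, threshold=0.5):
    value_lookup = {0: 1, 1: 2}                 # class index -> value of that label
    probs = model.predict(image)[0]             # shape (num_classes,)
    detected = np.where(probs >= threshold)[0]  # indices of labels present in the image
    return sum(value_lookup[i] for i in detected)
Note that the training targets would then have to be multi-hot vectors (one column per label present in the image) rather than the one-hot vectors produced by to_categorical.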
I am trying to use the output of a variational autoencoder to aid in classifying images. I have pre-trained the autoencoder and am now trying to load the weights in another script to use the encoder model for prediction. I am getting a strange error when calling the encoder that I cannot make sense of. When I try to call the encoder on a sample, I am told that the shapes are incompatible:
ValueError: Input 0 of layer dense is incompatible with the layer: expected axis -1 of input shape to have value 1048576 but received input with shape (256, 8192). This is confusing because I have pre-trained the model fine and have instantiated the model like I did before (I copy/pasted the code). I have based my model on this YouTube tutorial.
I will also paste in my code:
########## Library Imports ##########
import os, sys
import tensorflow as tf
import numpy as np
from tensorflow.keras.layers import Conv2D, Input, Flatten, Dense, Lambda, Reshape, Conv2DTranspose
import keras
import keras.backend as K
from keras.models import Model
from PIL import Image
print(tf.version.VERSION)
img_height = 256 #chosen
img_width = 256
num_channels = 1 #grayscale
input_shape = (img_height, img_width, num_channels)
########## Load VAE Weights ##########
vae_path = os.path.join(os.getcwd(), 'vae_training')
checkpoint_path = os.path.join(vae_path, 'cp.ckpt')
print('vae_path listdir\n', os.listdir(vae_path))
#load patches
#patch_locs = sys.argv[1] #path to the patch folders
patch_locs = r'C:\Users\Daniel\Documents\GitHub\endo_git_v2\patches\single_wsi_for_local_parent'
patch_folders = os.listdir(patch_locs)
print(patch_folders)
########## INSTANTIATE MODEL AND LOAD WEIGHTS ##########
#REPARAMETERIZATION TRICK
# Define sampling function to sample from the distribution
# Reparameterize sample based on the process defined by Gunderson and Huang
# into the form: mu + exp(sigma / 2) * eps
#This allows gradient descent to estimate the gradients accurately.
def sample_z(args):
z_mu, z_sigma = args
z_mu = tf.cast(z_mu, dtype=tf.float32)
z_sigma = tf.cast(z_sigma, dtype=tf.float32)
eps = K.random_normal(shape=(K.shape(z_mu)[0], K.int_shape(z_mu)[1]))
out = z_mu + K.exp(z_sigma / 2) * eps
return out
#Define custom loss
#VAE is trained using two loss functions reconstruction loss and KL divergence
#Let us add a class to define a custom layer with loss
class CustomLayer(keras.layers.Layer):
def vae_loss(self, x, z_decoded):
x = K.flatten(x)
z_decoded = K.flatten(z_decoded)
# Reconstruction loss (as we used sigmoid activation we can use binarycrossentropy)
recon_loss = keras.metrics.binary_crossentropy(x, z_decoded)
recon_loss = tf.cast(recon_loss, dtype=tf.float32)
# KL divergence
kl_loss = -5e-4 * K.mean(1 + z_sigma - K.square(z_mu) - K.exp(z_sigma), axis=-1)
kl_loss = tf.cast(kl_loss, dtype=tf.float32)
return K.mean(recon_loss + kl_loss)
# add custom loss to the class
def call(self, inputs):
x = inputs[0]
z_decoded = inputs[1]
loss = self.vae_loss(x, z_decoded)
self.add_loss(loss, inputs=inputs)
return x
# # ================= #############
# # Encoder
#Let us define 4 conv2D, flatten and then dense
# # ================= ############
latent_dim = 256 # Number of latent dim parameters
input_img = Input(shape=input_shape, name='encoder_input')
print(input_img.shape)
x = Conv2D(32, 3, padding='same', activation='relu')(input_img)
print(x.shape)
x = Conv2D(64, 3, padding='same', activation='relu',strides=(2, 2))(x)
print(x.shape)
x = Conv2D(64, 3, padding='same', activation='relu')(x)
print(x.shape)
x = Conv2D(64, 3, padding='same', activation='relu')(x)
print(x.shape)
conv_shape = K.int_shape(x) #Shape of conv to be provided to decoder (taken after all the conv layers)
print(conv_shape)
#Flatten
x = Flatten()(x)
print(x.shape)
x = Dense(32, activation='relu')(x)
print(x.shape)
# Two outputs, for latent mean and log variance (std. dev.)
#Use these to sample random variables in latent space to which inputs are mapped.
z_mu = Dense(latent_dim, name='latent_mu')(x) #Mean values of encoded input
z_sigma = Dense(latent_dim, name='latent_sigma')(x) #Std dev. (variance) of encoded
z_mu = tf.cast(z_mu, dtype=tf.float32)
z_sigma = tf.cast(z_sigma, dtype=tf.float32)
print('z_mu.dtype:', z_mu.dtype)
print('z_sigma.dtype:', z_sigma.dtype)
# sample vector from the latent distribution
# z is the lambda custom layer we are adding for gradient descent calculations
# using mu and variance (sigma)
z = Lambda(sample_z, output_shape=(latent_dim, ), name='z')([z_mu, z_sigma])
print('z.dtype:', z.dtype)
#Z (lambda layer) will be the last layer in the encoder.
# Define and summarize encoder model.
encoder = Model(input_img, [z_mu, z_sigma, z], name='encoder')
print(encoder.summary())
# ================= ###########
# Decoder
#
# ================= #################
# decoder takes the latent vector as input
decoder_input = Input(shape=(latent_dim, ), name='decoder_input')
# Need to start with a shape that can be remapped to original image shape as
#we want our final output to be the same shape as the original input.
#So, add dense layer with dimensions that can be reshaped to desired output shape
x = Dense(conv_shape[1]*conv_shape[2]*conv_shape[3], activation='relu')(decoder_input)
# reshape to the shape of the last conv. layer in the encoder, so we can upsample back to the original image shape
x = Reshape((conv_shape[1], conv_shape[2], conv_shape[3]))(x)
# upscale (conv2D transpose) back to original shape
# use Conv2DTranspose to reverse the conv layers defined in the encoder
x = Conv2DTranspose(32, 3, padding='same', activation='relu',strides=(2, 2))(x)
#Can add more conv2DTranspose layers, if desired.
#Using sigmoid activation
x = Conv2DTranspose(num_channels, 3, padding='same', activation='sigmoid', name='decoder_output')(x)
# Define and summarize decoder model
decoder = Model(decoder_input, x, name='decoder')
decoder.summary()
# apply the decoder to the latent sample
z_decoded = decoder(z)
# apply the custom loss to the input images and the decoded latent distribution sample
y = CustomLayer()([input_img, z_decoded])
# y is basically the original image after encoding input img to mu, sigma, z
# and decoding sampled z values.
#This will be used as output for vae
vae = Model(input_img, y, name='vae')
# Compile VAE
vae.compile(optimizer='adam', loss=None, experimental_run_tf_function=False)
vae.summary()
model_weights_dir = r'C:\Users\Daniel\Documents\GitHub\endo_git_v2\vae_training'
checkpoint_path = os.path.join(model_weights_dir, 'cp.ckpt')
print(os.listdir(model_weights_dir))
#vae.load_weights(checkpoint_path)
##################################################################
########## Open all WSI, then Open all Patches ##########
#for wsi in patch_folders: #loops through all the wsi folders
wsi = patch_folders[0]
#start of wsi loop
print('wsi:', wsi)
current_wsi_directory = os.path.join(patch_locs, wsi) #take the current wsi
print('current_wsi_directory:', current_wsi_directory)
patches = os.listdir(current_wsi_directory)
latent_shape = (203, 147, 256)
latent_wsi = np.zeros(latent_shape) #initialized placeholders for latent representations
row = 0
col = 0
for i in range(1):#len(patches)): #should be 29841 every time
#load patch as numpy array
patch_path = os.path.join(current_wsi_directory, '{}_{}.jpeg'.format(wsi, i)) #numerical order not alphabetical
print('patch_path:', patch_path)
image = Image.open(patch_path)
data = np.asarray(image)
#emulate rescale of 1./255
data = data / 255.
data = np.expand_dims(data, axis=-1)
print('data.shape:', data.shape)
encoder(data, training=False)
Any help or tips are very much appreciated
I solved my issue. Long story short: I was passing in a NumPy array of shape (256, 256, 1) (note that the batch dimension was missing). Reshaping it to (1, 256, 256, 1) solved the problem (the leading 1 is the batch dimension).
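In the code above, that means adding the batch dimension before calling the encoder, e.g.:
data = np.asarray(image) / 255.        # grayscale patch, shape (256, 256)
data = data.reshape(1, 256, 256, 1)    # add the batch and channel dimensions
z_mu, z_sigma, z = encoder(data, training=False)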
I have already trained a network and saved it as mynetwork.model. I want to apply Grad-CAM using my own model rather than VGG16 or ResNet, etc.
apply_gradcam.py
# import the necessary packages
from Grad_CAM.gradcam import GradCAM
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.applications import imagenet_utils
from tensorflow.keras.models import load_model
import numpy as np
import argparse
import imutils
import cv2
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
help="path to the input image")
ap.add_argument("-m", "--model", type=str, default="vgg",
#choices=("vgg", "resnet"),
help="model to be used")
args = vars(ap.parse_args())
# initialize the model to be VGG16
Model = VGG16
# check to see if we are using ResNet
if args["model"] == "resnet":
Model = ResNet50
# load the pre-trained CNN from disk
print("[INFO] loading model...")
model = Model(weights="imagenet")
# load the original image from disk (in OpenCV format) and then
# resize the image to its target dimensions
orig = cv2.imread(args["image"])
resized = cv2.resize(orig, (224, 224))
# load the input image from disk (in Keras/TensorFlow format) and
# preprocess it
image = load_img(args["image"], target_size=(224, 224))
image = img_to_array(image)
image = np.expand_dims(image, axis=0)
image = imagenet_utils.preprocess_input(image)
# use the network to make predictions on the input image and find
# the class label index with the largest corresponding probability
preds = model.predict(image)
i = np.argmax(preds[0])
# decode the ImageNet predictions to obtain the human-readable label
decoded = imagenet_utils.decode_predictions(preds)
(imagenetID, label, prob) = decoded[0][0]
label = "{}: {:.2f}%".format(label, prob * 100)
print("[INFO] {}".format(label))
# initialize our gradient class activation map and build the heatmap
cam = GradCAM(model, i)
heatmap = cam.compute_heatmap(image)
# resize the resulting heatmap to the original input image dimensions
# and then overlay heatmap on top of the image
heatmap = cv2.resize(heatmap, (orig.shape[1], orig.shape[0]))
(heatmap, output) = cam.overlay_heatmap(heatmap, orig, alpha=0.5)
cv2.rectangle(output, (0, 0), (340, 40), (0, 0, 0), -1)
cv2.putText(output, label, (10, 25), cv2.FONT_HERSHEY_SIMPLEX,
0.8, (255, 255, 255), 2)
# display the original image and resulting heatmap and output image
# to our screen
output = np.vstack([orig, heatmap, output])
output = imutils.resize(output, height=700)
cv2.imshow("Output", output)
cv2.waitKey(0)
gradcam.py
from tensorflow.keras.models import Model
import tensorflow as tf
import numpy as np
import cv2
class GradCAM:
def __init__(self, model, classIdx, layerName=None):
# store the model, the class index used to measure the class
# activation map, and the layer to be used when visualizing
# the class activation map
self.model = model
self.classIdx = classIdx
self.layerName = layerName
# if the layer name is None, attempt to automatically find
# the target output layer
if self.layerName is None:
self.layerName = self.find_target_layer()
def find_target_layer(self):
# attempt to find the final convolutional layer in the network
# by looping over the layers of the network in reverse order
for layer in reversed(self.model.layers):
# check to see if the layer has a 4D output
if len(layer.output_shape) == 4:
return layer.name
# otherwise, we could not find a 4D layer so the GradCAM
# algorithm cannot be applied
raise ValueError("Could not find 4D layer. Cannot apply GradCAM.")
def compute_heatmap(self, image, eps=1e-8):
# construct our gradient model by supplying (1) the inputs
# to our pre-trained model, (2) the output of the (presumably)
# final 4D layer in the network, and (3) the output of the
# softmax activations from the model
gradModel = Model(
inputs=[self.model.inputs],
outputs=[self.model.get_layer(self.layerName).output,
self.model.output])
# record operations for automatic differentiation
with tf.GradientTape() as tape:
# cast the image tensor to a float-32 data type, pass the
# image through the gradient model, and grab the loss
# associated with the specific class index
inputs = tf.cast(image, tf.float32)
(convOutputs, predictions) = gradModel(inputs)
loss = predictions[:, self.classIdx]
# use automatic differentiation to compute the gradients
grads = tape.gradient(loss, convOutputs)
# compute the guided gradients
castConvOutputs = tf.cast(convOutputs > 0, "float32")
castGrads = tf.cast(grads > 0, "float32")
guidedGrads = castConvOutputs * castGrads * grads
# the convolution and guided gradients have a batch dimension
# (which we don't need) so let's grab the volume itself and
# discard the batch
convOutputs = convOutputs[0]
guidedGrads = guidedGrads[0]
# compute the average of the gradient values, and using them
# as weights, compute the ponderation of the filters with
# respect to the weights
weights = tf.reduce_mean(guidedGrads, axis=(0, 1))
cam = tf.reduce_sum(tf.multiply(weights, convOutputs), axis=-1)
# grab the spatial dimensions of the input image and resize
# the output class activation map to match the input image
# dimensions
(w, h) = (image.shape[2], image.shape[1])
heatmap = cv2.resize(cam.numpy(), (w, h))
# normalize the heatmap such that all values lie in the range
# [0, 1], scale the resulting values to the range [0, 255],
# and then convert to an unsigned 8-bit integer
numer = heatmap - np.min(heatmap)
denom = (heatmap.max() - heatmap.min()) + eps
heatmap = numer / denom
heatmap = (heatmap * 255).astype("uint8")
# return the resulting heatmap to the calling function
return heatmap
def overlay_heatmap(self, heatmap, image, alpha=0.5,
colormap=cv2.COLORMAP_VIRIDIS):
# apply the supplied color map to the heatmap and then
# overlay the heatmap on the input image
heatmap = cv2.applyColorMap(heatmap, colormap)
output = cv2.addWeighted(image, alpha, heatmap, 1 - alpha, 0)
# return a 2-tuple of the color mapped heatmap and the output,
# overlaid image
return (heatmap, output)
As you can see in apply_gradcam.py, the VGG16 or ResNet50 pretrained models are used. I want to perform Grad-CAM using my own trained model, so I commented out these lines:
# initialize the model to be VGG16
Model = VGG16
# check to see if we are using ResNet
if args["model"] == "resnet":
Model = ResNet50
# load the pre-trained CNN from disk
print("[INFO] loading model...")
model = Model(weights="imagenet")
and I used
model = load_model(args["model"])
in order to use my own model. Then I executed:
python apply_gradcam.py --image /home/antonis/IM0001.jpeg --model /home/antonis/mynetwork.model
However, I get the following error:
ValueError: `decode_predictions` expects a batch of predictions (i.e.
a 2D array of shape (samples, 1000)). Found array with shape: (1, 3)
which is expected as the model outputs the ImageNet classes (1000-dimensional) while my model returns predictions over 2 classes.
I wonder how to fix this and apply gradcam using my own model.
One thing I don't get: if you have your own classifier (2 classes), why use imagenet_utils.decode_predictions at all? I'm not sure whether my following answer will satisfy you, but here are some pointers.
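As a first pointer: decode_predictions is specific to the 1000-class ImageNet head. With your own model you would map the predicted index to your own class names instead, for example (a sketch; the class names below are placeholders):
model = load_model(args["model"])        # your own trained model
preds = model.predict(image)
i = np.argmax(preds[0])

# decode_predictions is ImageNet-specific; use your own label list instead
class_names = ["class_0", "class_1"]     # placeholder: replace with your training labels
label = "{}: {:.2f}%".format(class_names[i], preds[0][i] * 100)
print("[INFO] {}".format(label))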
DataSet
import tensorflow as tf
import numpy as np
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
# train set / data
x_train = x_train.astype('float32') / 255
# train set / target
y_train = tf.keras.utils.to_categorical(y_train , num_classes=10)
# validation set / data
x_test = x_test.astype('float32') / 255
# validation set / target
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
# (50000, 32, 32, 3) (50000, 10)
# (10000, 32, 32, 3) (10000, 10)
Model
input = tf.keras.Input(shape=(32,32,3))
efnet = tf.keras.applications.EfficientNetB0(weights='imagenet',
include_top = False,
input_tensor = input)
# Now that we apply global max pooling.
gap = tf.keras.layers.GlobalMaxPooling2D()(efnet.output)
# Finally, we add a classification layer.
output = tf.keras.layers.Dense(10, activation='softmax')(gap)
# bind all
func_model = tf.keras.Model(efnet.input, output)
Compile and Run
func_model.compile(
loss = tf.keras.losses.CategoricalCrossentropy(),
metrics = tf.keras.metrics.CategoricalAccuracy(),
optimizer = tf.keras.optimizers.Adam())
# fit
func_model.fit(x_train, y_train, batch_size=128, epochs=15, verbose = 2)
Epoch 14/15
391/391 - 13s - loss: 0.1479 - categorical_accuracy: 0.9491
Epoch 15/15
391/391 - 13s - loss: 0.1505 - categorical_accuracy: 0.9481
Grad CAM
Same as your setup.
from tensorflow.keras.models import Model
import tensorflow as tf
import numpy as np
import cv2
class GradCAM:
def __init__(self, model, classIdx, layerName=None):
# store the model, the class index used to measure the class
# activation map, and the layer to be used when visualizing
# the class activation map
self.model = model
self.classIdx = classIdx
self.layerName = layerName
# if the layer name is None, attempt to automatically find
# the target output layer
if self.layerName is None:
self.layerName = self.find_target_layer()
def find_target_layer(self):
# attempt to find the final convolutional layer in the network
# by looping over the layers of the network in reverse order
for layer in reversed(self.model.layers):
# check to see if the layer has a 4D output
if len(layer.output_shape) == 4:
return layer.name
# otherwise, we could not find a 4D layer so the GradCAM
# algorithm cannot be applied
raise ValueError("Could not find 4D layer. Cannot apply GradCAM.")
def compute_heatmap(self, image, eps=1e-8):
# construct our gradient model by supplying (1) the inputs
# to our pre-trained model, (2) the output of the (presumably)
# final 4D layer in the network, and (3) the output of the
# softmax activations from the model
gradModel = Model(
inputs=[self.model.inputs],
outputs=[self.model.get_layer(self.layerName).output, self.model.output])
# record operations for automatic differentiation
with tf.GradientTape() as tape:
# cast the image tensor to a float-32 data type, pass the
# image through the gradient model, and grab the loss
# associated with the specific class index
inputs = tf.cast(image, tf.float32)
(convOutputs, predictions) = gradModel(inputs)
loss = predictions[:, tf.argmax(predictions[0])]
# use automatic differentiation to compute the gradients
grads = tape.gradient(loss, convOutputs)
# compute the guided gradients
castConvOutputs = tf.cast(convOutputs > 0, "float32")
castGrads = tf.cast(grads > 0, "float32")
guidedGrads = castConvOutputs * castGrads * grads
# the convolution and guided gradients have a batch dimension
# (which we don't need) so let's grab the volume itself and
# discard the batch
convOutputs = convOutputs[0]
guidedGrads = guidedGrads[0]
# compute the average of the gradient values, and using them
# as weights, compute the ponderation of the filters with
# respect to the weights
weights = tf.reduce_mean(guidedGrads, axis=(0, 1))
cam = tf.reduce_sum(tf.multiply(weights, convOutputs), axis=-1)
# grab the spatial dimensions of the input image and resize
# the output class activation map to match the input image
# dimensions
(w, h) = (image.shape[2], image.shape[1])
heatmap = cv2.resize(cam.numpy(), (w, h))
# normalize the heatmap such that all values lie in the range
# [0, 1], scale the resulting values to the range [0, 255],
# and then convert to an unsigned 8-bit integer
numer = heatmap - np.min(heatmap)
denom = (heatmap.max() - heatmap.min()) + eps
heatmap = numer / denom
heatmap = (heatmap * 255).astype("uint8")
# return the resulting heatmap to the calling function
return heatmap
def overlay_heatmap(self, heatmap, image, alpha=0.5,
colormap=cv2.COLORMAP_VIRIDIS):
# apply the supplied color map to the heatmap and then
# overlay the heatmap on the input image
heatmap = cv2.applyColorMap(heatmap, colormap)
output = cv2.addWeighted(image, alpha, heatmap, 1 - alpha, 0)
# return a 2-tuple of the color mapped heatmap and the output,
# overlaid image
return (heatmap, output)
Prediction
image = cv2.imread('/content/dog.jpg')
image = cv2.resize(image, (32, 32))
image = image.astype('float32') / 255
image = np.expand_dims(image, axis=0)
preds = func_model.predict(image)
i = np.argmax(preds[0])
To get the layer names of the model:
for idx in range(len(func_model.layers)):
print(func_model.get_layer(index = idx).name)
# we picked the `block5c_project_conv` layer
Passing to GradCAM class
icam = GradCAM(func_model, i, 'block5c_project_conv')
heatmap = icam.compute_heatmap(image)
heatmap = cv2.resize(heatmap, (32, 32))
image = cv2.imread('/content/dog.jpg')
image = cv2.resize(image, (32, 32))
print(heatmap.shape, image.shape)
(heatmap, output) = icam.overlay_heatmap(heatmap, image, alpha=0.5)
Visualization
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 3)
ax[0].imshow(heatmap)
ax[1].imshow(image)
ax[2].imshow(output)
Ref. Grad-CAM class activation visualization
I have Keras code for a Generative Adversarial Network (GAN), shown below. My train directory is composed of 512x512x3 images. Why do the print statements return the following, and how can I make the generated images also have shape (374, 512, 512, 3)?
This is the shape of the generated images (374, 32, 32, 3)
This is the shape of the real images (374, 512, 512, 3)
import keras
from keras import layers
import numpy as np
import cv2
import os
from keras.preprocessing import image
latent_dimension = 512
height = 512
width = 512
channels = 3
iterations = 100
batch_size = 20
number_of_images = 374
real_images = []
# paths to the training and results directories
train_directory = '/train'
results_directory = '/results'
# GAN generator
generator_input = keras.Input(shape=(latent_dimension,))
# transform the input into a 16x16 128-channel feature map
x = layers.Dense(128*16*16)(generator_input)
x = layers.LeakyReLU()(x)
x = layers.Reshape((16,16,128))(x)
x = layers.Conv2D(256,5,padding='same')(x)
x = layers.LeakyReLU()(x)
# upsample to 32x32
x = layers.Conv2DTranspose(256,4,strides=2,padding='same')(x)
x = layers.LeakyReLU()(x)
x = layers.Conv2D(256,5,padding='same')(x)
x = layers.LeakyReLU()(x)
x = layers.Conv2D(256,5,padding='same')(x)
x = layers.LeakyReLU()(x)
# a 32x32 1-channel feature map is generated (i.e. shape of image)
x = layers.Conv2D(channels,7,activation='tanh',padding='same')(x)
# instantiate the generator model, which maps the input of shape (latent_dimension,) into an image of shape (32,32,3)
generator = keras.models.Model(generator_input,x)
generator.summary()
# GAN discriminator
discriminator_input = layers.Input(shape=(height,width,channels))
x = layers.Conv2D(128,3)(discriminator_input)
x = layers.LeakyReLU()(x)
x = layers.Conv2D(128,4,strides=2)(x)
x = layers.LeakyReLU()(x)
x = layers.Conv2D(128,4,strides=2)(x)
x = layers.LeakyReLU()(x)
x = layers.Conv2D(128,4,strides=2)(x)
x = layers.LeakyReLU()(x)
x = layers.Flatten()(x)
# dropout layer
x = layers.Dropout(0.4)(x)
# classification layer
x = layers.Dense(1,activation='sigmoid')(x)
# instantiate the discriminator model, which turns a (height, width, channels) input
# into a binary classification decision (fake or real)
discriminator = keras.models.Model(discriminator_input,x)
discriminator.summary()
discriminator_optimizer = keras.optimizers.RMSprop(
lr=0.0008,
clipvalue=1.0,
decay=1e-8)
discriminator.compile(optimizer=discriminator_optimizer, loss='binary_crossentropy')
# adversarial network
discriminator.trainable = False
gan_input = keras.Input(shape=(latent_dimension,))
gan_output = discriminator(generator(gan_input))
gan = keras.models.Model(gan_input,gan_output)
gan_optimizer = keras.optimizers.RMSprop(
lr=0.0004,
clipvalue=1.0,
decay=1e-8)
gan.compile(optimizer=gan_optimizer,loss='binary_crossentropy')
for step in range(iterations):
# sample random points in the latent space
random_latent_vectors = np.random.normal(size=(number_of_images,latent_dimension))
# decode the random latent vectors into fake images
generated_images = generator.predict(random_latent_vectors)
#i = start
for root, dirs, files in os.walk(train_directory):
for i in range(number_of_images):
img = cv2.imread(root + '/' + str(i) + '.jpg')
real_images.append(img)
print('This is the shape of the generated images')
print(np.array(generated_images).shape)
print('This is the shape of the real images')
print(np.array(real_images).shape)
# combine fake images with real images
combined_images = np.concatenate([generated_images,real_images])
# assemble labels to discriminate between real and fake images
labels = np.concatenate([np.ones((number_of_images,1)),np.zeros((number_of_images,1))])
# add random noise to the labels
labels = labels + 0.05 * np.random.random(labels.shape)
# train the discriminator
discriminator_loss = discriminator.train_on_batch(combined_images,labels)
random_latent_vectors = np.random.normal(size=(number_of_images,latent_dimension))
# assemble labels that classify the images as "real", which is not true
misleading_targets = np.zeros((number_of_images,1))
# train the generator via the GAN model, where the discriminator weights are frozen
adversarial_loss = gan.train_on_batch(random_latent_vectors,misleading_targets)
# save the model weights
gan.save_weights('gan.h5')
print('discriminator loss: ')
print(discriminator_loss)
print('adversarial loss: ')
print(adversarial_loss)
img = image.array_to_img(generated_images[0] * 255.)
img.save(os.path.join(results_directory,'generated_melanoma_image' + str(step) + '.png'))
img = image.array_to_img(real_images[0] * 255.)
img.save(os.path.join(results_directory,'real_melanoma_image' + str(step) + '.png'))
Thanks.
I noticed that in order to have the generated images be of size 512x512, one can edit the following statements as follows:
x = layers.Dense(128*256*256)(generator_input)
x = layers.Reshape((256,256,128))(x)
The comments in your code hint at the solution: "# upsample to 32x32" and "# a 32x32 1-channel feature map is generated (i.e. shape of image)". You can upsample to larger image sizes by adding more Conv2DTranspose layers to your generator.
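For example, a generator head that keeps doubling the spatial resolution with strided Conv2DTranspose layers until it reaches 512x512 might look like this (a sketch only; the filter counts are illustrative and memory use grows quickly at these resolutions):
generator_input = keras.Input(shape=(latent_dimension,))

# start from a 16x16 feature map, as in the original code
x = layers.Dense(128 * 16 * 16)(generator_input)
x = layers.LeakyReLU()(x)
x = layers.Reshape((16, 16, 128))(x)

# five stride-2 transposed convolutions: 16 -> 32 -> 64 -> 128 -> 256 -> 512
for filters in (256, 256, 128, 128, 64):
    x = layers.Conv2DTranspose(filters, 4, strides=2, padding='same')(x)
    x = layers.LeakyReLU()(x)

# final 512x512x3 image
x = layers.Conv2D(channels, 7, activation='tanh', padding='same')(x)
generator = keras.models.Model(generator_input, x)
generator.summary()   # last layer output: (None, 512, 512, 3)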
I have undertaken a project in which I must use a convolutional network that outputs an image instead of class logits. For this purpose I've adapted the CNN code I downloaded from https://github.com/aymericdamien/TensorFlow-Examples
My input data are 64x64 images read from a binary file. The binary file is comprised of records of two 64x64 images in sequence. I need to minimize a cost function which is the difference of the second image and the 64x64 output of the network.
This is the module I've written to read the input data:
import tensorflow as tf
# various initialization variables
BATCH_SIZE = 128
N_FEATURES = 9
# This function accepts a tensor of size [batch_size, 2 ,record_size]
# and segments it into two tensors of size [batch_size, record_size] along the second dimension
# IMPORTANT: to be executed within an active session
def segment_batch(batch_p, batch_size, n_input):
batch_xs = tf.slice(batch_p, [0,0,0], [batch_size,1,n_input]) # optical data tensor
batch_ys = tf.slice(batch_p, [0,1,0], [batch_size,1,n_input]) # GT data tensor
optical = tf.reshape([batch_xs], [batch_size, n_input])
gt = tf.reshape([batch_ys], [batch_size, n_input])
return [optical, gt]
def batch_generator(filenames, record_size, batch_size):
""" filenames is the list of files you want to read from.
record_bytes: The size of a record in bytes
batch_size: The size a data batch (examples/batch)
"""
filename_queue = tf.train.string_input_producer(filenames)
reader = tf.FixedLengthRecordReader(record_bytes=2*record_size) # record size is double the value given (optical + ground truth images)
_, value = reader.read(filename_queue)
# read in the data (UINT8)
content = tf.decode_raw(value, out_type=tf.uint8)
# The bytes read represent the image, which we reshape
# from [depth * height * width] to [depth, height, width].
# read optical data slice
depth_major = tf.reshape(
tf.strided_slice(content, [0],
[record_size]),
[1, 64, 64])
# read GT (ground truth) data slice
depth_major1 = tf.reshape(
tf.strided_slice(content, [record_size],
[2*record_size]),
[1, 64, 64])
# Optical data
# Convert from [depth, height, width] to [height, width, depth].
uint8image = tf.transpose(depth_major, [1, 2, 0])
uint8image = tf.reshape(uint8image, [record_size]) # reshape into a single-dimensional vector
uint8image = tf.cast(uint8image, tf.float32) # cast into a float32
uint8image = uint8image/255 # normalize
# Ground Truth data
# Convert from [depth, height, width] to [height, width, depth].
gt_image = tf.transpose(depth_major1, [1, 2, 0])
gt_image = tf.reshape(gt_image, [record_size]) # reshape into a single-dimensional vector
gt_image = tf.cast(gt_image, tf.float32) # cast into a float32
gt_image = gt_image/255 # normalize
# stack them into a single features tensor
features = tf.stack([uint8image, gt_image])
# minimum number elements in the queue after a dequeue, used to ensure
# that the samples are sufficiently mixed
# I think 10 times the BATCH_SIZE is sufficient
min_after_dequeue = 10 * batch_size
# the maximum number of elements in the queue
capacity = 20 * batch_size
# shuffle the data to generate BATCH_SIZE sample pairs
data_batch = tf.train.shuffle_batch([features], batch_size=batch_size,
capacity=capacity, min_after_dequeue=min_after_dequeue)
return data_batch
This is the main code of my implementation:
from __future__ import print_function
# Various initialization variables
DATA_PATH_OPTICAL_TRAIN = 'data/building_ground_truth_for_training.bin'
DATA_PATH_EVAL = 'data/building_ground_truth_for_eval.bin'
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import time
# custom imports
import data_reader2
# Parameters
learning_rate = 0.001
training_iters = 200000
batch_size = 128
epochs = 10
display_step = 10
rows = 64
cols = 64
# Network Parameters
n_input = 4096 # optical image data (img shape: 64*64)
n_classes = 4096 # output is an image of same resolution as initial image
dropout = 0.75 # Dropout, probability to keep units
# input data parameters
record_size = 64**2
total_bytes_of_optical_binary_file = 893329408 # total size of binary file containing training data ([64x64 optical] [64x64 GT])
# create the data batches (queue)
# Accepts two parameters. The tensor containing the binary files and the size of a record
data_batch = data_reader2.batch_generator([DATA_PATH_OPTICAL_TRAIN],record_size, batch_size) # train set
data_batch_eval = data_reader2.batch_generator([DATA_PATH_EVAL],record_size, batch_size) # eval set
##############################################################
######################### FUNCTIONS ##########################
##############################################################
# extract optical array from list
# A helper function. Data returned from segment_batch is a list which contains two arrays.
# The first array contains the optical data while the second contains the ground truth data
def extract_optical_from_list(full_batch):
optical = full_batch[0] # extract array from list
return optical
# extract ground truth array from list
# A helper function. Data returned from segment_batch is a list which contains two arrays.
# The first array contains the optical data while the second contains the ground truth data
def extract_gt_from_list(full_batch):
gt = full_batch[1] # extract array from list
return gt
# This function accepts a tensor of size [batch_size, 2 ,record_size]
# and segments it into two tensors of size [batch_size, record_size] along the second dimension
# IMPORTANT: to be executed within an active session
def segment_batch(batch_p):
batch_xs = tf.slice(batch_p, [0,0,0], [batch_size,1,n_input]) # optical data tensor
batch_ys = tf.slice(batch_p, [0,1,0], [batch_size,1,n_input]) # GT data tensor
optical = tf.reshape([batch_xs], [batch_size, n_input])
gt = tf.reshape([batch_ys], [batch_size, n_input])
return [optical, gt]
# Create some wrappers for simplicity
def conv2d(x, W, b, strides=1):
# Conv2D wrapper, with bias and relu activation
x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
x = tf.nn.bias_add(x, b)
return tf.nn.relu(x)
def maxpool2d(x, k=2):
# MaxPool2D wrapper
return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
padding='SAME')
# Create model
def conv_net(x, weights, biases, dropout):
# Reshape the flattened input into 64x64 images
x1 = tf.reshape(x, shape=[-1, rows, cols, 1]) # the 4-D tensor [batch, rows, cols, channels] that tf.nn.conv2d expects as input
# Convolution Layer
conv1 = conv2d(x1, weights['wc1'], biases['bc1'])
# Max Pooling (down-sampling)
conv1 = maxpool2d(conv1, k=2)
# Convolution Layer
conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
# Max Pooling (down-sampling)
conv2 = maxpool2d(conv2, k=2)
# Fully connected layer
# Reshape conv2 output to fit fully connected layer input
fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
fc1 = tf.nn.relu(fc1)
# Apply Dropout
#fc1 = tf.nn.dropout(fc1, dropout)
# Output image (edge), prediction
out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
# Add print operation
out = tf.Print(out, [out], message="This is out: ")
return [out, x]
# Store layers weight & bias
weights = {
# 5x5 conv, 1 input, 32 outputs
'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
# 5x5 conv, 32 inputs, 64 outputs
'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
# fully connected, 16*16*64 inputs, 1024 outputs
'wd1': tf.Variable(tf.random_normal([16*16*64, 1024])),
# 1024 inputs, n_classes (4096) outputs, i.e. a 64x64 output image
'out': tf.Variable(tf.random_normal([1024, n_classes]))
}
biases = {
'bc1': tf.Variable(tf.random_normal([32])),
'bc2': tf.Variable(tf.random_normal([64])),
'bd1': tf.Variable(tf.random_normal([1024])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
####################################################################
##################### PLACEHOLDERS #################################
####################################################################
# tf Graph input (only pictures)
X = tf.placeholder_with_default(extract_optical_from_list(segment_batch(data_batch)), [batch_size, n_input])
####################################################################
##################### END OF PLACEHOLDERS ##########################
####################################################################
# tf Graph input
keep_prob = tf.Variable(dropout) #dropout (keep probability)
# Construct model
pred = conv_net(extract_optical_from_list(X), weights, biases, keep_prob) # x[0] is the optical data
y_true = extract_gt_from_list(extract_gt_from_list(X)) # y_true is the ground truth data
# Define loss and optimizer
cost = tf.reduce_mean(tf.pow(y_true - pred[0], 2))
optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
step = 1
# Keep training until reach max iterations
while step * batch_size < training_iters:
print("Optimizing")
sess.run(optimizer)
print("Iter " + str(step*batch_size))
step += 1
print("Optimization Finished!")
After a lot of tweaking with the tensor shapes I managed to fix the syntax errors. Unfortunately, it just hangs the moment it starts executing the optimization part of the graph. Since I have no good way to debug this (I found very scarce information on using the TensorFlow debugger), I'm really at a loss as to what has gone wrong. If someone with more TensorFlow experience can point out what is wrong with this code, it would help me a lot.
Thanks in advance.
You need to start the queue runners so the optimizer can pull data from the queue.
....
coord = tf.train.Coordinator()
with tf.Session() as sess:
sess.run(init)
tf.train.start_queue_runners(sess=sess, coord=coord)
....
# also use tf.nn.sparse_softmax_cross_entropy_with_logits for cost
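Put together with the training loop from the question, the session block might look roughly like this (a sketch; the try/finally shutdown of the coordinator is optional but avoids hanging threads when the script exits):
coord = tf.train.Coordinator()
with tf.Session() as sess:
    sess.run(init)
    # start the input-pipeline threads so shuffle_batch can actually fill its queue
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        step = 1
        while step * batch_size < training_iters:
            _, loss_value = sess.run([optimizer, cost])
            if step % display_step == 0:
                print("Iter " + str(step * batch_size) + ", loss = " + str(loss_value))
            step += 1
        print("Optimization Finished!")
    finally:
        # stop the queue-runner threads cleanly
        coord.request_stop()
        coord.join(threads)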