"Invalid argument: indices[0,0,0,0] = 30 is not in [0, 30)" - python

Error:
InvalidArgumentError: indices[0,0,0,0] = 30 is not in [0, 30)
[[{{node GatherV2}}]] [Op:IteratorGetNext]
History:
I have a custom data loader for a tf.keras based U-Net for semantic segmentation, based on this example. It is written as follows:
def parse_image(img_path: str) -> dict:
    # read image
    image = tf.io.read_file(img_path)
    #image = tfio.experimental.image.decode_tiff(image)
    if xf == "png":
        image = tf.image.decode_png(image, channels = 3)
    else:
        image = tf.image.decode_jpeg(image, channels = 3)
    image = tf.image.convert_image_dtype(image, tf.uint8)
    #image = image[:, :, :-1]

    # read mask
    mask_path = tf.strings.regex_replace(img_path, "X", "y")
    mask_path = tf.strings.regex_replace(mask_path, "X." + xf, "y." + yf)
    mask = tf.io.read_file(mask_path)
    #mask = tfio.experimental.image.decode_tiff(mask)
    mask = tf.image.decode_png(mask, channels = 1)
    #mask = mask[:, :, :-1]
    mask = tf.where(mask == 255, np.dtype("uint8").type(NoDataValue), mask)
    return {"image": image, "segmentation_mask": mask}
train_dataset = tf.data.Dataset.list_files(
    dir_tls(myear = year, dset = "X") + "/*." + xf, seed = zeed)
train_dataset = train_dataset.map(parse_image)

val_dataset = tf.data.Dataset.list_files(
    dir_tls(myear = year, dset = "X_val") + "/*." + xf, seed = zeed)
val_dataset = val_dataset.map(parse_image)
## data transformations--------------------------------------------------------
@tf.function
def normalise(input_image: tf.Tensor, input_mask: tf.Tensor) -> tuple:
    input_image = tf.cast(input_image, tf.float32) / 255.0
    return input_image, input_mask

@tf.function
def load_image_train(datapoint: dict) -> tuple:
    input_image = tf.image.resize(datapoint["image"], (imgr, imgc))
    input_mask = tf.image.resize(datapoint["segmentation_mask"], (imgr, imgc))
    if tf.random.uniform(()) > 0.5:
        input_image = tf.image.flip_left_right(input_image)
        input_mask = tf.image.flip_left_right(input_mask)
    input_image, input_mask = normalise(input_image, input_mask)
    return input_image, input_mask

@tf.function
def load_image_test(datapoint: dict) -> tuple:
    input_image = tf.image.resize(datapoint["image"], (imgr, imgc))
    input_mask = tf.image.resize(datapoint["segmentation_mask"], (imgr, imgc))
    input_image, input_mask = normalise(input_image, input_mask)
    return input_image, input_mask
## create datasets-------------------------------------------------------------
buff_size = 1000
dataset = {"train": train_dataset, "val": val_dataset}

#-- Train Dataset --#
dataset["train"] = dataset["train"]\
    .map(load_image_train, num_parallel_calls = tf.data.experimental.AUTOTUNE)
dataset["train"] = dataset["train"].shuffle(buffer_size = buff_size,
                                            seed = zeed)
dataset["train"] = dataset["train"].repeat()
dataset["train"] = dataset["train"].batch(bs)
dataset["train"] = dataset["train"].prefetch(buffer_size = AUTOTUNE)

#-- Validation Dataset --#
dataset["val"] = dataset["val"].map(load_image_test)
dataset["val"] = dataset["val"].repeat()
dataset["val"] = dataset["val"].batch(bs)
dataset["val"] = dataset["val"].prefetch(buffer_size = AUTOTUNE)

print(dataset["train"])
print(dataset["val"])
Now I wanted to use a weighted version of tf.keras.losses.SparseCategoricalCrossentropy for my model and I found this tutorial, which is rather similar to the example above.
In addition, they offer a weighted version of the loss, using:
def add_sample_weights(image, label):
    # The weights for each class, with the constraint that:
    #     sum(class_weights) == 1.0
    class_weights = tf.constant([2.0, 2.0, 1.0])
    class_weights = class_weights / tf.reduce_sum(class_weights)

    # Create an image of `sample_weights` by using the label at each pixel
    # as an index into the `class_weights`.
    sample_weights = tf.gather(class_weights, indices=tf.cast(label, tf.int32))

    return image, label, sample_weights
and
weighted_model.fit(
    train_dataset.map(add_sample_weights),
    epochs=1,
    steps_per_epoch=10)
I combined those approaches, since the latter tutorial uses data that is already loaded, while I want to draw the images from disk (not enough RAM to load them all at once).
The result is the code from the first example (the long code block above), followed by
def add_sample_weights(image, segmentation_mask):
    class_weights = tf.constant(inv_weights, dtype = tf.float32)
    class_weights = class_weights / tf.reduce_sum(class_weights)
    sample_weights = tf.gather(class_weights,
                               indices = tf.cast(segmentation_mask, tf.int32))
    return image, segmentation_mask, sample_weights
(inv_weights are my weights, an array of 30 float64 values) and
model.fit(dataset["train"].map(add_sample_weights),
          epochs = 45, steps_per_epoch = np.ceil(N_img/bs),
          validation_data = dataset["val"],
          validation_steps = np.ceil(N_val/bs),
          callbacks = cllbs)
When I run
dataset["train"].map(add_sample_weights).element_spec
as in the second example, I get an output that looks reasonable to me (similar to the one in the example):
Out[58]:
(TensorSpec(shape=(None, 512, 512, 3), dtype=tf.float32, name=None),
TensorSpec(shape=(None, 512, 512, 1), dtype=tf.float32, name=None),
TensorSpec(shape=(None, 512, 512, 1), dtype=tf.float32, name=None))
However, when I try to fit the model or run something like
a, b, c = dataset["train"].map(add_sample_weights).take(1)
I receive the error mentioned above.
So far, I have found quite a few questions regarding this error (e.g., a, b, c, d); however, they all concern "embedding layers" and things I am not aware of using.
Where does this error come from and how can I solve it?

Picture tf.gather as a fancy way to do indexing. The error you get is akin to the following example in python:
>>> my_list = [1,2,3]
>>> my_list[3]
IndexError: list index out of range
If you want to use tf.gather, then the values of your indices must not exceed the size of the dimension of the Tensor you are indexing into.
In your case, in the call tf.gather(class_weights, indices = tf.cast(segmentation_mask, tf.int32)), with class_weights being a Tensor of shape (30,), the values of segmentation_mask must lie between 0 and 29. As far as I can tell from your data pipeline, segmentation_mask has values between 0 and 255. The fix will be problem dependent.
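For illustration only, here is a sketch of one possible fix, under two assumptions that are not confirmed in the question: that the out-of-range values come from the NoData pixels and from bilinear interpolation during tf.image.resize. You could resize the mask with nearest-neighbour interpolation (so no new, non-integer class values are created) and clamp the indices before gathering:

    # Sketch: keep gather indices inside [0, 30).
    # Resize masks without interpolating new class values:
    input_mask = tf.image.resize(datapoint["segmentation_mask"], (imgr, imgc),
                                 method="nearest")

    def add_sample_weights(image, segmentation_mask):
        class_weights = tf.constant(inv_weights, dtype = tf.float32)
        class_weights = class_weights / tf.reduce_sum(class_weights)
        # Clamp every mask value into the valid index range; ideally,
        # out-of-range pixels (e.g. NoData) would instead be remapped to a
        # class whose weight is set to 0.
        indices = tf.clip_by_value(tf.cast(segmentation_mask, tf.int32),
                                   0, tf.shape(class_weights)[0] - 1)
        sample_weights = tf.gather(class_weights, indices = indices)
        return image, segmentation_mask, sample_weights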

Related

Why is my prediction function giving an error? ValueError: not enough values to unpack (expected 2, got 1)

I'm trying to make predictions using a pre-trained model for binary segmentation with U-Net and PyTorch. Here is my code:
model.eval()   # Set model to evaluate mode

class SimDataset(Dataset):
    def __init__(self, path, transform=None, isMask=False):
        self.m = ("test")
        self.path = path
        self.transform = transform
        self.isMask = isMask

    def __len__(self):
        return len(self.path)

    def __getitem__(self, idx):
        one_image = os.path.join(self.m, self.path[idx])  # preparing image path/location
        img_temp = Image.open(one_image)                  # load RGB input image
        if self.transform:
            image = self.transform(img_temp)
        input_image = np.array(img_temp).astype('float32')  # converting one image to np array
        input_image = np.transpose(input_image, (2, 0, 1))  # converting from hwc to chw [(256,256,3) => (3,256,256)]
        return [input_image]

testlist = list(os.listdir(r"test"))
len(testlist)

image_datasets = {
    'testlist': testlist
}
dataset_sizes = {
    x: len(image_datasets[x]) for x in image_datasets.keys()
}

test_dataset = SimDataset(testlist, transform = trans, isMask=False)
test_loader = DataLoader(test_dataset, batch_size=3, shuffle=False, num_workers=0)

inputs, labels = next(iter(test_loader))
inputs = inputs.to(device)
labels = labels.to(device)

pred = model(inputs)
pred = torch.sigmoid(pred)
pred = pred.data.cpu().numpy()
print(pred.shape)
But it raises the error: ValueError: not enough values to unpack (expected 2, got 1).
Your code expects two outputs from the data loader:
inputs, labels = next(iter(test_loader))
However, the __getitem__ method of your dataset returns only a single output:
return [input_image]
Either return two outputs from __getitem__, both image and label, or expect only a single output from the test_loader.
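A minimal sketch of the second option, reusing the names from the question (since no masks are loaded, labels simply goes away):

    # __getitem__ returns [input_image], so each batch is a one-element list
    (inputs,) = next(iter(test_loader))
    inputs = inputs.to(device)

    pred = torch.sigmoid(model(inputs))
    pred = pred.data.cpu().numpy()
    print(pred.shape)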

U-net training Error: The size of tensor a (16) must match the size of tensor b (6) at non-singleton dimension 1

I’m trying to train a U-Net model on the LandCoverNet dataset, a satellite imagery dataset that contains input images and corresponding land cover type masks.
I have created a custom dataset to get my images and masks:
# Create custom dataset that accepts 4-channel images
import os
import numpy as np
import cv2
import torch
import rasterio as rio
import matplotlib.pyplot as plt
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset as BaseDataset, DataLoader, sampler
from torchvision import transforms, datasets, models

# We have two dirs: inputs (one folder per image) and targets
class LandCoverNetDataset(BaseDataset):

    CLASSES = ['otherland', 'cropland', 'pastureland', 'bare soil', 'openwater', 'forestland']

    def __init__(self, inputs_dir, targets_dir,
                 classes = None,
                 augmentation = None,
                 preprocessing = False,
                 pytorch = True):
        super().__init__()
        self.samples = []
        self.pytorch = pytorch
        self.augmentation = augmentation
        self.preprocessing = preprocessing

        # Convert str names to class values on masks
        self.class_value = [self.CLASSES.index(cls.lower()) for cls in classes]

        # Create a dictionary of image and target paths
        for sub_dir in os.listdir(inputs_dir):
            files = {
                'img_bands': os.path.join(inputs_dir, sub_dir),
                'target': os.path.join(targets_dir, sub_dir[:13] + "_LC_10m.png")
            }
            self.samples.append(files)
    def __len__(self):
        return len(self.samples)

    def normalize(self, band):
        '''Normalize a numpy array to have values between 0 and 1'''
        band_min, band_max = band.min(), band.max()
        np.seterr(divide='ignore', invalid='ignore')
        normalized_band = (band - band_min) / (band_max - band_min)
        # Remove any NaN values and substitute zero
        where_are_NaNs = np.isnan(normalized_band)
        normalized_band[where_are_NaNs] = 0
        return normalized_band

    def open_as_array(self, idx, include_ndvi = False):
        '''
        Merge the 4 bands into one image and normalize the bands
        '''
        # List the individual bands in each image folder and stack them together
        list_bands = []
        for img_file in os.listdir(self.samples[idx]['img_bands']):
            # Get the NDVI band
            if 'NDVI' in img_file:
                ndvi_band = os.path.join(self.samples[idx]['img_bands'], img_file)
            else:
                # Get the RGB bands
                band = rio.open(os.path.join(self.samples[idx]['img_bands'], img_file)).read(1)
                if self.preprocessing:
                    # Preprocess the bands before stacking them (only RGB)
                    band = self.normalize(band)
                list_bands.append(band)
        # Stack the bands
        raw_rgb = np.stack(list_bands, axis=2).astype('float32')
        if include_ndvi:
            # Include the NDVI band in the input images
            ndvi = np.expand_dims(rio.open(ndvi_band).read(1).astype('float32'), 2)
            raw_rgb = np.concatenate([raw_rgb, ndvi], axis=2)
        if self.augmentation:
            transformed = self.augmentation(image = raw_rgb)
            raw_rgb = transformed["image"]
        if self.preprocessing:
            # Transpose to tensor shape
            raw_rgb = raw_rgb.transpose((2, 0, 1)).astype('float32')
        return raw_rgb

    def open_mask(self, idx):
        # Extract certain classes from the mask
        mask = cv2.imread(self.samples[idx]['target'], 0)
        masks = [(mask == v) for v in self.class_value]
        mask = np.stack(masks, axis=-1).astype('long')
        if self.augmentation:
            transformed = self.augmentation(image = mask)
            mask = transformed["image"]
        if self.preprocessing:
            # Preprocess the mask
            mask = self.normalize(mask)
            # Transpose to tensor shape
            mask = mask.transpose((2, 0, 1)).astype('long')
            mask = mask[0, :, :]
        return mask

    def __getitem__(self, idx):
        x = torch.tensor(self.open_as_array(idx, include_ndvi=True), dtype=torch.float)
        y = torch.tensor(self.open_mask(idx), dtype=torch.long)
        return x, y

    def open_as_pil(self, idx):
        arr = 256 * self.open_as_array(idx)
        return Image.fromarray(arr.astype(np.uint8), 'RGB')

    def __repr__(self):
        return 'Dataset class with {} files'.format(self.__len__())
The input here has 4 bands.
These are the shapes of the first batch for the input and target:
torch.Size([16, 4, 224, 224])
torch.Size([16, 224, 224])
I’m using a model from segmentation-models-pytorch library, and here is how I customized it for my case:
ENCODER = 'se_resnext50_32x4d'
ENCODER_WEIGHTS = 'imagenet'
ACTIVATION = 'softmax2d'
DEVICE = 'cuda'

model = smp.FPN(ENCODER, classes=len(CLASSES), activation=ACTIVATION)

# Replace model.conv1 to accept 4 channels
# first: copy the layer's weights
weight = model.encoder.layer0.conv1.weight.clone()
model.encoder.layer0.conv1 = nn.Conv2d(4, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
with torch.no_grad():
    model.encoder.layer0.conv1.weight[:, :3] = weight
    model.encoder.layer0.conv1.weight[:, 3] = model.encoder.layer0.conv1.weight[:, 0]

loss = smp.utils.losses.NLLLoss()
metrics = [
    smp.utils.metrics.IoU(threshold=0.5),
]
optimizer = torch.optim.SGD([
    dict(params=model.parameters(), lr=0.001, weight_decay=1e-8, momentum=0.9),
])

# create epoch runners
# (a simple loop iterating over the dataloader's samples)
train_epoch = smp.utils.train.TrainEpoch(
    model,
    loss=loss,
    metrics=metrics,
    optimizer=optimizer,
    device=DEVICE,
    verbose=True,
)
valid_epoch = smp.utils.train.ValidEpoch(
    model,
    loss=loss,
    metrics=metrics,
    device=DEVICE,
    verbose=True,
)
And here is my training loop
# train model for 40 epochs
max_score = 0

for i in range(0, 40):
    print('\nEpoch: {}'.format(i))
    train_logs = train_epoch.run(train_loader)
    valid_logs = valid_epoch.run(valid_loader)

    # do something (save model, change lr, etc.)
    if max_score < valid_logs['iou_score']:
        max_score = valid_logs['iou_score']
        torch.save(model, './best_model.pth')
        print('Model saved!')

    if i == 25:
        optimizer.param_groups[0]['lr'] = 1e-5
        print('Decrease decoder learning rate to 1e-5!')
At first, the target shape was [16, 6, 224, 224], but I got an error and found a thread saying it should be [batch_size, height, width].
That's why I added the line mask = mask[0, :, :] in the Dataset class, to get rid of the number-of-classes dimension.
And this is where things get confusing for me, because the output of my model is torch.Size([10, 6, 224, 224]).
This is the entire error message:
Epoch: 0
train: 0%| | 0/157 [00:00<?, ?it/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-215-2ae39e205dee> in <module>()
7
8 print('\nEpoch: {}'.format(i))
----> 9 train_logs = train_epoch.run(train_loader)
10 valid_logs = valid_epoch.run(valid_loader)
11
3 frames
/usr/local/lib/python3.6/dist-packages/segmentation_models_pytorch/utils/functional.py in iou(pr, gt, eps, threshold, ignore_channels)
32 pr, gt = _take_channels(pr, gt, ignore_channels=ignore_channels)
33
---> 34 intersection = torch.sum(gt * pr)
35 union = torch.sum(gt) + torch.sum(pr) - intersection + eps
36 return (intersection + eps) / union
RuntimeError: The size of tensor a (16) must match the size of tensor b (6) at non-singleton dimension 1
Thanks!
OK, I changed the loss function to smp.utils.losses.DiceLoss() and was able to start training my model. I also removed mask = mask[0, :, :].
I also had an issue with my normalization. Here is how I did it for the input (4 bands):
    for i in range(raw_rgb.shape[0]):
        raw_rgb[i, :, :] = self.normalize(raw_rgb[i, :, :])
And the same for the masks (3 channels). This was after converting them to tensors.
I would still like to know how to prepare my masks for CrossEntropyLoss.
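For reference (this is general PyTorch usage, not from the original thread): torch.nn.CrossEntropyLoss expects raw logits of shape [batch, num_classes, H, W] and an integer class-index mask of shape [batch, H, W], so a one-hot/per-class-channel mask has to be collapsed back to indices first. A minimal sketch with placeholder tensors:

    import torch

    num_classes = 6
    logits = torch.randn(16, num_classes, 224, 224)   # raw model output, no softmax
    one_hot = torch.randint(0, 2, (16, num_classes, 224, 224)).float()  # placeholder mask

    # Collapse the per-class channels into one class index per pixel
    target = one_hot.argmax(dim=1).long()             # shape [16, 224, 224]

    loss = torch.nn.CrossEntropyLoss()(logits, target)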

How to use flow_from_directory in Keras for multi-class semantic segmentation?

Let's say I have 100 grayscale training images and 100 RGB training masks, each of size 512x512. I was able to one-hot encode the masks using to_categorical in Keras as below:
numclasses = 3
masks_one_hot = to_categorical(maskArr, numclasses)
where maskArr is 100x512x512x1 and masks_one_hot is 100x512x512x3.
However, to use ImageDataGenerator and flow_from_directory with trainGenerator from https://github.com/zhixuhao/unet/blob/master/data.py, I tried to save the one-hot encoded masks and then read them back with trainGenerator. However, I noticed that after writing them with imwrite and reading them with imread, they changed from one-hot encoded 512x512x3 arrays to 512x512x3 RGB images; that is, instead of each channel holding a value of 0 or 1, the values now range from 0 to 255.
As a result, if I do:
myGenerator = trainGeneratorOneHot(20, 'data/membrane/train', 'image', 'label', data_gen_args,
                                   save_to_dir = "data/membrane/train/aug", flag_multi_class = True,
                                   num_class = 3, target_size = (512, 512, 3))
num_batch = 3
for i, batch in enumerate(myGenerator):
    if(i >= num_batch):
        break
where trainGeneratorOneHot is below:
def trainGeneratorOneHot(batch_size, ..., class_mode=None, image_class_mode=None):
    image_datagen = ImageDataGenerator(**aug_dict)
    mask_datagen = ImageDataGenerator(**aug_dict)
    image_generator = image_datagen.flow_from_directory(train_path, classes = [image_folder],
        class_mode = image_class_mode, color_mode = image_color_mode, target_size = target_size, ...)
    mask_generator = mask_datagen.flow_from_directory(train_path, classes = [mask_folder],
        class_mode = class_mode, target_size = target_size, ...)
    train_generator = zip(image_generator, mask_generator)
    for (img, mask) in train_generator:
        img, mask = adjustDataOneHot(img, mask)
        yield (img, mask)

def adjustDataOneHot(img, mask):
    return (img, mask)
Then I get: ValueError: could not broadcast input array from shape (512,512,1) into shape (512,512,3,1)
How can I fix this?
I was dealing with the same issue a few days ago. I found it essential to write my own data generator class to take data from a dataframe, augment it, and one-hot encode it before passing it to my model. I was never able to get the Keras ImageDataGenerator to work for semantic segmentation problems with multiple classes.
Below is that data generator class, in case it helps you out:
import numpy as np
import keras
from skimage import io  # io.imread below is assumed to be skimage's

def one_hot_encoder(mask, num_classes = 8):
    '''Turn a mask with one channel per class into strict 0/1 values'''
    hot_mask = np.zeros(shape = mask.shape, dtype = 'uint8')
    for c in range(num_classes):
        temp = np.zeros(shape = mask.shape[0:2], dtype = 'uint8')
        temp[mask[:, :, c] != 0] = 1
        hot_mask[:, :, c] = temp
    return hot_mask

# Image data generator class
# (augs_for_images, augs_for_masks and resize are resizing/augmentation
# helpers defined elsewhere)
class DataGenerator(keras.utils.Sequence):

    def __init__(self, dataframe, batch_size, n_classes = 8, augment = False):
        self.dataframe = dataframe
        self.batch_size = batch_size
        self.n_classes = n_classes
        self.augment = augment

    # Steps per epoch
    def __len__(self):
        return len(self.dataframe) // self.batch_size

    # Shuffles and resets the index at the end of a training epoch
    def on_epoch_end(self):
        self.dataframe = self.dataframe.sample(frac = 1).reset_index(drop = True)

    # Generates one batch of data and feeds it to training
    def __getitem__(self, index):
        processed_images = []
        processed_masks = []
        for i in range(self.batch_size):
            row = index * self.batch_size + i
            the_image = io.imread(self.dataframe['Images'][row])
            the_mask = io.imread(self.dataframe['Masks'][row]).astype('uint8')
            one_hot_mask = one_hot_encoder(the_mask, self.n_classes)
            if self.augment:
                # Resizing followed by some augmentations
                processed_image = augs_for_images(image = the_image) / 255.0
                processed_mask = augs_for_masks(image = one_hot_mask)
            else:
                # Still resizing, but no augmentations
                processed_image = resize(image = the_image) / 255.0
                processed_mask = resize(image = one_hot_mask)
            processed_images.append(processed_image)
            processed_masks.append(processed_mask)
        batch_x = np.array(processed_images)
        batch_y = np.array(processed_masks)
        return (batch_x, batch_y)
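Usage might then look like this (train_df and model are placeholders here, with 'Images' and 'Masks' path columns as assumed by the class above):

    train_gen = DataGenerator(train_df, batch_size = 8, n_classes = 3, augment = True)
    model.fit(train_gen, epochs = 20)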
Also, here's a link to a repo with some semantic segmentation models that might be of interest to you. The notebook itself shows how the author dealt with multi-class semantic segmentation.

Using Tensorflow Estimator API with Images for SemSeg

I tried to implement my model with the TensorFlow Estimator API.
As the data input function I use:
def input_fn():
    train_in_np = sorted(io_utils.loadDataset(join(basepath, r"leftImg8bit/train/*/*")))
    train_out_np = sorted(io_utils.loadDataset(join(basepath, r"gtFine/train/*/*_ignoreLabel.png")))
    train_in = tf.constant(train_in_np)
    train_out = tf.constant(train_out_np)

    tr_data = tf.data.Dataset.from_tensor_slices((train_in, train_out))
    tr_data = tr_data.shuffle(len(train_in_np))
    tr_data = tr_data.repeat(epoch_cnt+1)
    tr_data = tr_data.apply(tf.contrib.data.map_and_batch(parse_files, batch_size=batchSize))
    tr_data = tr_data.prefetch(buffer_size=batchSize)

    iterator = tr_data.make_initializable_iterator()
    return iterator.get_next()
io_utils.loadDataset just returns a list of filepaths.
The data itself is parsed by
def parse_files(in_file, gt_file):
    image_in = tf.read_file(in_file)
    image_in = tf.image.decode_image(image_in, channels=3)
    image_in.set_shape([None, None, 3])
    image_in = tf.cast(image_in, tf.float32)
    mean, std = tf.nn.moments(image_in, [0, 1])
    image_in = image_in - mean
    image_in = image_in / std

    gt = tf.read_file(gt_file)
    gt = tf.image.decode_image(gt, channels=1)
    gt.set_shape([None, None, 1])
    gt = tf.cast(gt, tf.int32)

    return {'img': image_in}, gt
My estimator model function starts with
def estimator_fcn_model_fn(features, labels, mode, params):
    x = tf.feature_column.input_layer(features, params['feature_columns'])
and the feature columns are defined as
my_feature_columns = []
my_feature_columns.append(tf.feature_column.numeric_column(key='img'))
I have skipped the rest of the code for clarity.
My problem is with the shape of the feature:
ValueError: ('Convolution not supported for input with rank', 2)
The print output of x, features and feature_columns:
Tensor("input_layer/concat:0", shape=(?, 1), dtype=float32)
{'img': <tf.Tensor 'IteratorGetNext:0' shape=(?, ?, ?, 3) dtype=float32>}
[_NumericColumn(key='img', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None)]
Does anyone have an idea how to resolve this? I would guess it has to do with the kind of feature column, but I have no idea how to apply this to images.
As Y.Luo hinted, tf.feature_column.input_layer is not suitable for images. An easier way is to use the feature dictionary directly, keyed by name (the key can be passed via params for more flexibility):
x = features[params['input_name']]
instead of
x = tf.feature_column.input_layer(features, params['feature_columns'])
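A minimal sketch of how the model function might then begin (for the input_fn above, params['input_name'] would be 'img'; the conv layer is a placeholder, not the asker's network):

    def estimator_fcn_model_fn(features, labels, mode, params):
        # Take the image tensor straight out of the features dict;
        # it keeps its rank-4 shape (batch, height, width, 3).
        x = features[params['input_name']]
        # Convolutions now see a rank-4 tensor instead of the flattened
        # rank-2 output of tf.feature_column.input_layer.
        x = tf.layers.conv2d(x, filters=32, kernel_size=3, padding='same',
                             activation=tf.nn.relu)
        # ... rest of the network and the EstimatorSpec construction ...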

TensorFlow model gets zero loss

import tensorflow as tf
import numpy as np
import os
import re
import PIL
def read_image_label_list(img_directory, folder_name):
    # Input:
    #   - Name of folder (test/train)
    # Output:
    #   - List of names of files in folder
    #   - Label associated with each file
    cat_label = 1
    dog_label = 0
    filenames = []
    labels = []

    dir_list = os.listdir(os.path.join(img_directory, folder_name))  # List of all image names in 'folder_name' folder

    # Loop through all images in directory
    for i, d in enumerate(dir_list):
        if re.search("train", folder_name):
            if re.search("cat", d):  # If image filename contains 'cat', then true
                labels.append(cat_label)
            else:
                labels.append(dog_label)
        filenames.append(os.path.join(img_directory, folder_name, d))

    return filenames, labels

# Define convolutional layer
def conv_layer(input, channels_in, channels_out):
    w_1 = tf.get_variable("weight_conv", [5, 5, channels_in, channels_out], initializer=tf.contrib.layers.xavier_initializer())
    b_1 = tf.get_variable("bias_conv", [channels_out], initializer=tf.zeros_initializer())
    conv = tf.nn.conv2d(input, w_1, strides=[1, 1, 1, 1], padding="SAME")
    activation = tf.nn.relu(conv + b_1)
    return activation

# Define fully connected layer
def fc_layer(input, channels_in, channels_out):
    w_2 = tf.get_variable("weight_fc", [channels_in, channels_out], initializer=tf.contrib.layers.xavier_initializer())
    b_2 = tf.get_variable("bias_fc", [channels_out], initializer=tf.zeros_initializer())
    activation = tf.nn.relu(tf.matmul(input, w_2) + b_2)
    return activation

# Define parse function to decode the image input data
def _parse_function(img_path, label):
    img_file = tf.read_file(img_path)
    img_decoded = tf.image.decode_image(img_file, channels=3)
    img_decoded.set_shape([None, None, 3])
    img_decoded = tf.image.resize_images(img_decoded, (28, 28), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
    img_decoded = tf.image.per_image_standardization(img_decoded)
    img_decoded = tf.cast(img_decoded, dtype=tf.float32)
    label = tf.one_hot(label, 1)
    return img_decoded, label
tf.reset_default_graph()
# Define parameters
EPOCHS = 10
BATCH_SIZE_training = 64
learning_rate = 0.001
img_dir = 'C:/Users/tharu/PycharmProjects/cat_vs_dog/data'
batch_size = 128
# Define data
features, labels = read_image_label_list(img_dir, "train")
# Define dataset
dataset = tf.data.Dataset.from_tensor_slices((features, labels)) # Takes slices in 0th dimension
dataset = dataset.map(_parse_function)
dataset = dataset.batch(batch_size)
iterator = dataset.make_initializable_iterator()
# Get next batch of data from iterator
x, y = iterator.get_next()
# Create the network (use different variable scopes for reuse of variables)
with tf.variable_scope("conv1"):
    conv_1 = conv_layer(x, 3, 32)
    pool_1 = tf.nn.max_pool(conv_1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
with tf.variable_scope("conv2"):
    conv_2 = conv_layer(pool_1, 32, 64)
    pool_2 = tf.nn.max_pool(conv_2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")

flattened = tf.contrib.layers.flatten(pool_2)

with tf.variable_scope("fc1"):
    fc_1 = fc_layer(flattened, 7*7*64, 1024)
with tf.variable_scope("fc2"):
    logits = fc_layer(fc_1, 1024, 1)
# Define loss function
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=tf.cast(y, dtype=tf.int32)))
# Define optimizer
train = tf.train.AdamOptimizer(learning_rate).minimize(loss)
with tf.Session() as sess:
    # Initialize all the variables
    sess.run(tf.global_variables_initializer())

    # Train the network
    for i in range(EPOCHS):
        # Initialize iterator so that it starts at the beginning of the training set for each epoch
        sess.run(iterator.initializer)
        print("EPOCH", i)
        while True:
            try:
                _, epoch_loss = sess.run([train, loss])
            except tf.errors.OutOfRangeError:  # Error raised when out of data
                if i % 2 == 0:
                    # [train_accuracy] = sess.run([accuracy])
                    # print("Step ", i, "training accuracy = %{}".format(train_accuracy))
                    print(epoch_loss)
                break
I've spent a few hours trying to figure out systematically why I've been getting zero loss when I run this model.
Features = list of file locations for each image (e.g. ['/data/train/cat.0.jpg', '/data/train/cat.1.jpg'])
Labels = [Batch_size, 1] one-hot vector
Initially I thought it was because there was something wrong with my data, but I've viewed the data after resizing and the images seem fine.
Then I tried a few different loss functions, because I thought maybe I was misunderstanding what the tensorflow function softmax_cross_entropy does, but that didn't fix anything.
I've tried running just the 'logits' section to see what the output is. This is just a small sample and the numbers seem fine to me:
[[0.06388957]
 [0.        ]
 [0.16969752]
 [0.24913025]
 [0.09961276]]
Surely then the softmax_cross_entropy function should be able to compute this loss given that the corresponding labels are 0 or 1? I'm not sure if I'm missing something. Any help would be greatly appreciated.
As documented:
logits and labels must have the same shape, e.g. [batch_size, num_classes], and the same dtype (either float16, float32, or float64).
Since you mentioned your label is a "[Batch_size, 1] one-hot vector", I would assume both your logits and labels have shape [Batch_size, 1]. This will certainly lead to zero loss. Conceptually speaking, you have only one class (num_classes=1), so your prediction cannot be wrong (loss=0).
So at least for your labels, you should transform them with tf.one_hot(indices=labels, depth=num_classes), and your logits should also have shape [batch_size, num_classes].
Alternatively, you can use sparse_softmax_cross_entropy_with_logits, where:
A common use case is to have logits of shape [batch_size, num_classes] and labels of shape [batch_size]. But higher dimensions are supported.
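A minimal sketch of both options for the two-class cat/dog case (num_classes = 2 is an assumption here; the other names come from the question's code):

    num_classes = 2

    # Option 1: one-hot labels with a matching 2-unit logits layer.
    # In _parse_function: label = tf.one_hot(label, num_classes)   # shape [2]
    logits = fc_layer(fc_1, 1024, num_classes)                     # shape [batch, 2]
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))

    # Option 2: sparse integer labels, no one-hot needed.
    # In _parse_function, keep the label as a plain int (drop tf.one_hot),
    # so the batched y has shape [batch].
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            logits=logits, labels=tf.cast(y, tf.int32)))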
