Why is TensorFlow putting all data into system memory?

I keep getting an OOM error for system memory (not GPU memory), but I'm not sure which function is causing TensorFlow to load everything into RAM. A month ago I ran an image classifier on a different dataset, half the current size, and copied the code with some small changes. So there are two changes that could cause OOM compared to the previous run: 1) the image sizes are much larger, but I resize them to 224x224 early on, so I don't think they should have any effect at runtime; 2) the dataset is double the size, but I am not using cache or shuffle this time around, so I'm not sure why it isn't just the batch being loaded into memory.
def read_and_decode(filename, label):
    # Returns a tensor with the byte values of the entire contents of the input file.
    img = tf.io.read_file(filename)
    # Decode the raw JPEG bytes into a 3D (RGB) uint8 pixel-value tensor.
    img = tf.io.decode_jpeg(img, channels=3)
    # Resize with padding to 224x224.
    img = tf.image.resize_with_pad(
        img,
        224,
        224,
        method=tf.image.ResizeMethod.BILINEAR,
        antialias=False
    )
    img = preprocess_input(img)
    return img, label
ds_oh = tf.data.Dataset.from_tensor_slices((img_paths, oh_input))
ds_oh = ds_oh.map(read_and_decode)
The dataset ds_oh now yields 224x224 images with the correct labels. (Note that map() is lazy: images are decoded on the fly as the dataset is iterated, not loaded up front.)
def ds_split(ds, ds_size, shuffle_size, train_split=0.8, val_split=0.2, shuffle=True):
    assert (train_split + val_split) == 1
    if shuffle:
        ds = ds.shuffle(shuffle_size, seed=99)
    train_size = int(train_split * ds_size)
    val_size = int(val_split * ds_size)
    train_ds = ds.take(train_size)
    val_ds = ds.skip(train_size).take(val_size)
    return train_ds, val_ds
train_ds, val_ds = ds_split(ds_oh, len(img_paths), len(img_paths), train_split=0.8, val_split=0.2, shuffle=True)
Split into train and validate datasets, shuffled.
#One hot
#train_ds = train_ds.cache()
#train_ds = train_ds.shuffle(buffer_size=len(img_paths), reshuffle_each_iteration=True)
train_ds = train_ds.batch(BATCH_SIZE)
train_ds = train_ds.prefetch(tf.data.AUTOTUNE)
#val_ds = val_ds.cache()
val_ds = val_ds.batch(BATCH_SIZE)
val_ds = val_ds.prefetch(tf.data.AUTOTUNE)
Batching and prefetching; caching and shuffling removed because of the OOM error.
# input layers
inputs = tf.keras.Input(shape=(224, 224, 3))
base_model = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))(inputs)
# creating our new model head to combine with the ResNet base model
head_model = MaxPool2D(pool_size=(4, 4))(base_model)
head_model = Flatten(name='flatten')(head_model)
head_model = Dense(1024, activation='relu')(head_model)
head_model = Dropout(0.2)(head_model)
head_model = Dense(512, activation='relu')(head_model)
head_model = Dropout(0.2)(head_model)
head_model = Dense(29, activation='softmax')(head_model)
# final configuration
model = Model(inputs, head_model)
model.layers[2].trainable = False
optimizer = SGD(learning_rate=0.01, momentum=0.9)
model.compile(loss="categorical_crossentropy", optimizer=optimizer, metrics=['accuracy'])
Model built
INITIAL_EPOCHS = 35
history = model.fit(train_ds,
                    epochs=INITIAL_EPOCHS,
                    validation_data=val_ds)
Epoch 1/35
Fails before first epoch

For anyone wondering, the problem was in splitting my tf.data dataset (all images are in one combined folder) into train and val. The function I found online (ds_split) caused a memory leak for some reason; even ds.take(1) caused an OOM error. I tried a second, similar function I found online and hit the same issue. (A likely mechanism, in hindsight: ds_split shuffles with buffer_size=len(img_paths) after the decoding map, so the shuffle buffer tries to hold every decoded 224x224x3 float32 image in RAM at once.)
I decided to use train_test_split from scikit-learn on the lists of image file paths and labels and built two tf.data datasets right off the bat. Everything appears to be working fine now.
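A minimal sketch of that approach (variable names taken from the code above; the exact split parameters are illustrative):

from sklearn.model_selection import train_test_split

# Split the lightweight path/label lists first, then build two independent
# pipelines, so no shuffle/take/skip is needed on the decoded-image dataset.
train_paths, val_paths, train_labels, val_labels = train_test_split(
    img_paths, oh_input, test_size=0.2, random_state=99)

train_ds = tf.data.Dataset.from_tensor_slices((train_paths, train_labels))
train_ds = train_ds.map(read_and_decode, num_parallel_calls=tf.data.AUTOTUNE)
train_ds = train_ds.batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)

val_ds = tf.data.Dataset.from_tensor_slices((val_paths, val_labels))
val_ds = val_ds.map(read_and_decode, num_parallel_calls=tf.data.AUTOTUNE)
val_ds = val_ds.batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)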

Related

Long initialization time for model.fit when using tensorflow dataset from generator

This is my first question on stack overflow. I apologise in advance for the poor formatting and indentation due to my troubles with the interface.
Environment specifications:
TensorFlow version - 2.7.0 GPU (tested and working properly)
Python version - 3.9.6
CPU - Intel Core i7 7700HQ
GPU - NVIDIA GTX 1060 3GB
RAM - 16GB DDR4 2400MHz
HDD - 1TB 5400 RPM
Problem Statement:
I wish to train a TensorFlow 2.7.0 model to perform multilabel classification with six classes on CT scans stored as DICOM images. The dataset is from Kaggle, link here. The training labels are stored in a CSV file, and the DICOM image names are of the format ID_<random characters>.dcm. The images have a combined size of 368 GB.
Approach used:
1. The CSV file containing the labels is imported into a pandas DataFrame, and the image filenames are set as the index.
2. A simple data generator is created to read each DICOM image and its labels by iterating over the rows of the DataFrame. This generator is used to create a training dataset via tf.data.Dataset.from_generator. The images are pre-processed using bsb_window().
3. The training dataset is shuffled and split into a training set (90%) and a validation set (10%).
4. The model is created using Keras Sequential, compiled, and fit using the training and validation datasets created earlier.
Code:
def train_generator():
    for row in df.itertuples():
        image = pydicom.dcmread(train_images_dir + row.Index + ".dcm")
        try:
            image = bsb_window(image)
        except:
            image = np.zeros((256,256,3))
        labels = row[1:]
        yield image, labels

train_images = tf.data.Dataset.from_generator(
    train_generator,
    output_signature=(
        tf.TensorSpec(shape=(256,256,3)),
        tf.TensorSpec(shape=(6,))
    )
)
train_images = train_images.batch(4)
TRAIN_NUM_FILES = 752803
train_images = train_images.shuffle(40)
val_size = int(TRAIN_NUM_FILES * 0.1)
val_images = train_images.take(val_size)
train_images = train_images.skip(val_size)
def create_model():
    model = Sequential([
        InceptionV3(include_top=False, input_shape=(256,256,3), weights="imagenet"),
        GlobalAveragePooling2D(name="avg_pool"),
        Dense(6, activation="sigmoid", name="dense_output"),
    ])
    model.compile(loss="binary_crossentropy",
                  optimizer=tf.keras.optimizers.Adam(5e-4),
                  metrics=["accuracy", tf.keras.metrics.SpecificityAtSensitivity(0.8)])
    return model
model = create_model()
history = model.fit(train_images,
                    batch_size=4,
                    epochs=5,
                    verbose=1,
                    validation_data=val_images)
Issue:
When executing this code, there is a delay of a few hours of high disk usage (~30MB/s reads) before training begins. When a DataGenerator is made using tf.keras.utils.Sequence, training commences within seconds of calling model.fit().
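For reference, a minimal sketch of the kind of tf.keras.utils.Sequence data generator being compared against (illustrative only, assuming the same df, train_images_dir, bsb_window, and imports as above):

class DicomSequence(tf.keras.utils.Sequence):
    def __init__(self, df, batch_size=4):
        self.df = df
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch.
        return int(np.ceil(len(self.df) / self.batch_size))

    def __getitem__(self, idx):
        rows = self.df.iloc[idx * self.batch_size:(idx + 1) * self.batch_size]
        images, labels = [], []
        for row in rows.itertuples():
            dcm = pydicom.dcmread(train_images_dir + row.Index + ".dcm")
            try:
                images.append(bsb_window(dcm))
            except Exception:
                images.append(np.zeros((256, 256, 3)))
            labels.append(row[1:])
        return np.array(images), np.array(labels)

Because a Sequence is indexed rather than iterated from the beginning, model.fit() can start fetching batches immediately.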
Potential causes:
1. Iterating over a pandas DataFrame in train_generator(). I am not sure how to avoid this.
2. The use of external functions to pre-process and load the data.
3. The use of the take() and skip() methods to create the training and validation datasets.
How do I optimise this code to run faster? I've heard that splitting the data generator into separate label-creation and image pre-processing functions and parallelising the operations would improve performance, but I'm not sure how to apply those concepts in my case. Any advice would be highly appreciated.
I FOUND THE ANSWER
The problem was in the following code:
TRAIN_NUM_FILES = 752803
train_images = train_images.shuffle(40)
val_size = int(TRAIN_NUM_FILES * 0.1)
val_images = train_images.take(val_size)
train_images = train_images.skip(val_size)
It takes an inordinate amount of time to split the dataset into training and validation sets after loading the images, because take() and skip() must pull every element they pass over through the generator and the expensive DICOM decoding just to position the split. This step should be done early in the process, before loading any images. Hence, I separated the image-path loading from the actual image loading, then parallelised the functions using the recommendations given here. The final optimised code is as follows:
def train_generator():
    for row in df.itertuples():
        image_path = f"{train_images_dir}{row.Index}.dcm"
        labels = np.reshape(row[1:], (1,6))
        yield image_path, labels

def test_generator():
    for row in test_df.itertuples():
        image_path = f"{test_images_dir}{row.Index}.dcm"
        labels = np.reshape(row[1:], (1,6))
        yield image_path, labels

def image_loading(image_path):
    image_path = tf.compat.as_str_any(tf.strings.reduce_join(image_path).numpy())
    dcm = pydicom.dcmread(image_path)
    try:
        image = bsb_window(dcm)
    except:
        image = np.zeros((256,256,3))
    return image

def wrap_img_load(image_path):
    return tf.numpy_function(image_loading, [image_path], [tf.double])

def set_shape(image, labels):
    image = tf.reshape(image, [256,256,3])
    labels = tf.reshape(labels, [1,6])
    labels = tf.squeeze(labels)
    return image, labels
train_images = tf.data.Dataset.from_generator(train_generator, output_signature = (tf.TensorSpec(shape=(), dtype=tf.string), tf.TensorSpec(shape=(None,6)))).prefetch(tf.data.AUTOTUNE)
test_images = tf.data.Dataset.from_generator(test_generator, output_signature = (tf.TensorSpec(shape=(), dtype=tf.string), tf.TensorSpec(shape=(None,6)))).prefetch(tf.data.AUTOTUNE)
TRAIN_NUM_FILES = 752803
train_images = train_images.shuffle(40)
val_size = int(TRAIN_NUM_FILES * 0.1)
val_images = train_images.take(val_size)
train_images = train_images.skip(val_size)
train_images = train_images.map(lambda image_path, labels: (wrap_img_load(image_path),labels), num_parallel_calls = tf.data.AUTOTUNE).prefetch(tf.data.AUTOTUNE)
test_images = test_images.map(lambda image_path, labels: (wrap_img_load(image_path),labels), num_parallel_calls = tf.data.AUTOTUNE).prefetch(tf.data.AUTOTUNE)
val_images = val_images.map(lambda image_path, labels: (wrap_img_load(image_path),labels), num_parallel_calls = tf.data.AUTOTUNE).prefetch(tf.data.AUTOTUNE)
train_images = train_images.map(lambda image, labels: set_shape(image,labels), num_parallel_calls = tf.data.AUTOTUNE).prefetch(tf.data.AUTOTUNE)
test_images = test_images.map(lambda image, labels: set_shape(image,labels), num_parallel_calls = tf.data.AUTOTUNE).prefetch(tf.data.AUTOTUNE)
val_images = val_images.map(lambda image, labels: set_shape(image,labels), num_parallel_calls = tf.data.AUTOTUNE).prefetch(tf.data.AUTOTUNE)
train_images = train_images.batch(4).prefetch(tf.data.AUTOTUNE)
test_images = test_images.batch(4).prefetch(tf.data.AUTOTUNE)
val_images = val_images.batch(4).prefetch(tf.data.AUTOTUNE)
def create_model():
    model = Sequential([
        InceptionV3(include_top=False, input_shape=(256,256,3), weights='imagenet'),
        GlobalAveragePooling2D(name='avg_pool'),
        Dense(6, activation="sigmoid", name='dense_output'),
    ])
    model.compile(loss="binary_crossentropy", optimizer=tf.keras.optimizers.Adam(5e-4), metrics=["accuracy"])
    return model

model = create_model()
history = model.fit(train_images,
                    epochs=5,
                    verbose=1,
                    callbacks=[checkpointer, scheduler],
                    validation_data=val_images)
The CPU, GPU, and HDD are utilised very efficiently, and training is much faster than with a tf.keras.utils.Sequence data generator.

Why is my model giving poor accuracy when the data is loaded using tf.data?

I am new to the tf.data API and am trying to use it to load images from disk for the Dogs vs. Cats Redux: Kernels Edition Kaggle competition. To do this, I first created a pandas DataFrame named train_df with two columns: file_path, containing the relative path of each image, and target, containing the target label 0 (for cat) or 1 (for dog).
Then, I tried loading the images with the following code:
import tensorflow as tf

BATCH_SIZE = 128
IMG_HEIGHT = 224
IMG_WIDTH = 224

def read_images(X, y):
    X = tf.io.read_file(X)
    X = tf.io.decode_image(X, expand_animations=False, dtype=tf.float32, channels=3)
    X = tf.image.resize(X, [IMG_HEIGHT, IMG_WIDTH])
    X = tf.keras.applications.efficientnet.preprocess_input(X, data_format="channels_last")
    return (X, y)

def build_data_pipeline(X, y):
    data = tf.data.Dataset.from_tensor_slices((X, y))
    data = data.map(read_images)
    data = data.batch(BATCH_SIZE)
    data = data.prefetch(tf.data.AUTOTUNE)
    return data

tf_data = build_data_pipeline(train_df["file_path"], train_df["target"])
After this, I tried training my model using the following code
model.fit(tf_data, epochs=10)
but got a training accuracy of only 50%, whereas with ImageDataGenerator I get an accuracy of 99%. Thus the problem lies somewhere in the data-loading part, but I have not been able to find it.
I have used EfficientNetB0 with imagenet-trained weights as the feature extractor and a single-neuron layer at the end as the classifier.
Pretrained EfficientNetB0 model:
pretrained_model = tf.keras.applications.EfficientNetB0(
    input_shape=(IMG_HEIGHT, IMG_WIDTH, 3),
    include_top=False,
    weights="imagenet"
)

for layer in pretrained_model.layers:
    layer.trainable = False
Dense layer with one neuron at the end of the EfficientNetB0:
pretrained_output = pretrained_model.get_layer('top_activation').output
x = tf.keras.layers.GlobalAveragePooling2D()(pretrained_output)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.models.Model(pretrained_model.input, x)
Compiling the model:
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy"]
)
In the above notebook, change the input reading function read_images as follows:
def read_images(X, y):
    X = tf.io.read_file(X)
    X = tf.image.decode_jpeg(X, channels=3)
    X = tf.image.resize(X, [IMG_HEIGHT, IMG_WIDTH])  # no /255.0
    return (X, y)
Also note that the tf.keras.applications.EfficientNetB* models have a built-in normalization layer, so it's better not to normalize the data in the above function (i.e. no /255.0).
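For context, here is why the original pipeline hurt accuracy (my explanation, worth verifying against your TF version): tf.io.decode_image(..., dtype=tf.float32) already rescales pixel values to [0, 1], and the Keras EfficientNet models then apply their own input rescaling/normalization internally, so the network effectively saw doubly-scaled inputs. A quick sketch to confirm the preprocessing layers are built in:

import tensorflow as tf

# weights=None avoids the download; we only want to inspect the architecture.
model = tf.keras.applications.EfficientNetB0(include_top=False, weights=None)
# The first few layers should include Rescaling/Normalization, meaning the
# model expects raw pixel values in the [0, 255] range.
for layer in model.layers[:5]:
    print(layer.name, type(layer).__name__)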

How to train transfer-learning model on custom dataset? ValueError: Shape must be rank 4

I am trying to build a transfer learning model to classify images. The images are gray-scale (2D). Previously I used the image_dataset_from_directory method to read the images and there was no problem. However, I am now trying to use a custom read function to have more control over and access to the data, such as knowing how many images are in each class. When using this custom read function, I get an error (shown below) while trying to train the model. I am not sure what caused this error.
Part 1: reading the dataset
import numpy as np
import os
import tensorflow as tf
import cv2
from tensorflow import keras
# neural network
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers.experimental import preprocessing

IMG_WIDTH = 160
IMG_HEIGHT = 160
DATA_PATH = r"C:\Users\user\Documents\chest_xray"
TRAIN_DIR = os.path.join(DATA_PATH, 'train')

def create_dataset(img_folder):
    img_data_array = []
    class_name = []
    for dir1 in os.listdir(img_folder):
        for file in os.listdir(os.path.join(img_folder, dir1)):
            image_path = os.path.join(img_folder, dir1, file)
            image = cv2.imread(image_path, 0)
            image = cv2.resize(image, (IMG_HEIGHT, IMG_WIDTH), interpolation=cv2.INTER_AREA)
            image = np.array(image)
            image = image.astype('float32')
            image /= 255
            img_data_array.append(image)
            class_name.append(dir1)
    return img_data_array, class_name

# extract the image arrays and class names
img_data, class_name = create_dataset(TRAIN_DIR)

target_dict = {k: v for v, k in enumerate(np.unique(class_name))}
target_val = [target_dict[class_name[i]] for i in range(len(class_name))]
This part produces a list of size 5232; inside the list are numpy arrays of size 160x160 (float32).
Part 2: creating the model
def build_model():
    inputs = tf.keras.Input(shape=(160, 160, 3))
    x = Sequential(
        [
            preprocessing.RandomRotation(factor=0.15),
            preprocessing.RandomTranslation(height_factor=0.1, width_factor=0.1),
            preprocessing.RandomFlip(),
            preprocessing.RandomContrast(factor=0.1),
        ],
        name="img_augmentation",
    )(inputs)
    # x = img_augmentation(inputs)
    model = tf.keras.applications.EfficientNetB7(include_top=False,
                                                 drop_connect_rate=0.4,
                                                 weights='imagenet',
                                                 input_tensor=x)
    # Freeze the pretrained weights
    model.trainable = False
    # Rebuild top
    x = tf.keras.layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
    x = tf.keras.layers.BatchNormalization()(x)
    top_dropout_rate = 0.2
    x = tf.keras.layers.Dropout(top_dropout_rate, name="top_dropout")(x)
    outputs = tf.keras.layers.Dense(1, name="pred")(x)
    # Compile
    model = tf.keras.Model(inputs, outputs, name="EfficientNet")
    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2)
    model.compile(
        optimizer=optimizer,
        loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
        metrics=["accuracy"]
    )
    return model

model = build_model()
Part 3: training the model
history = model.fit(x=np.array(img_data), y=np.array(target_val), epochs=5)
The error I get:
ValueError: Shape must be rank 4 but is rank 3 for '{{node
EfficientNet/img_augmentation/random_rotation_1/transform/ImageProjectiveTransformV3}} =
ImageProjectiveTransformV3[dtype=DT_FLOAT, fill_mode="REFLECT", interpolation="BILINEAR"]
(IteratorGetNext, EfficientNet/img_augmentation/random_rotation_1/rotation_matrix/concat,
EfficientNet/img_augmentation/random_rotation_1/transform/strided_slice,
EfficientNet/img_augmentation/random_rotation_1/transform/fill_value)' with input shapes:
[?,160,160], [?,8], [2], [].
The problem in the code is that OpenCV reads the image in grayscale format, but the grayscale image it returns has shape (160,160), not (160,160,1).
Because of this, the error is thrown.
I managed to replicate your problem by testing it locally.
Say we randomly train on 12 samples.
Possible input formats:
# 1. This one works
history = model.fit(x=np.random.rand(12,160,160,3), y=np.array([1,1,1,1,1,1,0,0,0,0,0,0]), epochs=5, verbose=1)
# 2. This one works
history = model.fit(x=np.random.rand(12,160,160,1), y=np.array([1,1,1,1,1,1,0,0,0,0,0,0]), epochs=5, verbose=1)
# 3. This one fails
history = model.fit(x=np.random.rand(12,160,160), y=np.array([1,1,1,1,1,1,0,0,0,0,0,0]), epochs=5, verbose=1)
(1) and (2) work.
(3) fails, yielding:
ValueError: Shape must be rank 4 but is rank 3 for '{{node
EfficientNet/img_augmentation/random_rotation_4/transform/ImageProjectiveTransformV2}} = ImageProjectiveTransformV2[dtype=DT_FLOAT, fill_mode="REFLECT", interpolation="BILINEAR"](IteratorGetNext,
EfficientNet/img_augmentation/random_rotation_4/rotation_matrix/concat,
EfficientNet/img_augmentation/random_rotation_4/transform/strided_slice)'
with input shapes: [?,160,160], [?,8], [2].
Therefore, ensure that your data is in the shape (160,160,1) or (160,160,3).
Alternatively, after you read the image with OpenCV, you can use
image = np.expand_dims(image, axis=-1)
to programmatically insert the last (grayscale channel) axis.
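In context, a minimal sketch of the read loop with the fix applied (an illustration, not the original poster's code; since the model's Input layer above expects (160, 160, 3), stacking the grayscale channel three times is one way to match it):

image = cv2.imread(image_path, 0)  # grayscale read -> shape (160, 160)
image = cv2.resize(image, (IMG_HEIGHT, IMG_WIDTH), interpolation=cv2.INTER_AREA)
image = image.astype('float32') / 255
image = np.stack([image] * 3, axis=-1)  # -> shape (160, 160, 3)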

TensorFlow batch tensors with different shapes

I am studying Neural Network and I have encountered what is probably a silly problem which I can't figure out. For my first ever network, I have to create a flower image classifier in Keras and TensorFlow using the oxford_flowers102 and the MobileNet pre-trained model from TensorFlow Hub.
The issue seems to be that the images are not resized to (224,224,3); they keep their original shapes, which differ from one another. However, according to my class material my resizing code is correct, so I don't understand what is going on or what I am doing wrong.
Thank you very much for all your help.
# LOADING
dataset, dataset_info = tfds.load('oxford_flowers102', as_supervised=True, with_info=True)
training_set, testing_set, validation_set = dataset['train'], dataset['test'],dataset['validation']
# PROCESSING AND BATCHES
def normalize(img, lbl):
    img = tf.cast(img, tf.float32)
    img = tf.image.resize(img, size=(224,224))
    img /= 255
    return img, lbl
batch_size = 64
training_batches = training_set.cache().shuffle(train_examples//4).batch(batch_size).map(normalize).prefetch(1)
validation_batches = validation_set.cache().batch(batch_size).map(normalize).prefetch(1)
testing_batches = testing_set.cache().batch(batch_size).map(normalize).prefetch(1)
# BUILDING THE NETWORK
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
mobile_net = hub.KerasLayer(URL, input_shape=(224, 224,3))
mobile_net.trainable = False
skynet = tf.keras.Sequential([
    mobile_net,
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(num_classes, activation='softmax')
])
# TRAINING THE NETWORK
skynet.compile(optimizer='adam', loss= 'sparse_categorical_crossentropy', metrics=['accuracy'])
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)
Epochss = 25
history = skynet.fit(training_batches,
                     epochs=Epochss,
                     validation_data=validation_set,
                     callbacks=[early_stopping])
ERROR:
InvalidArgumentError: Cannot batch tensors with different shapes in component 0. First element had shape [590,501,3] and element 1 had shape [500,752,3].
[[node IteratorGetNext (defined at /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]] [Op:__inference_distributed_function_18246]
Function call stack:
distributed_function
The problem was that in your input pipeline you were batching your dataset before making the images equal in size. Your normalize(img, lbl) function is written to handle a single image, not a complete batch, and batch() can only stack elements whose shapes already match.
So in order to make your code run, call the map API before the batch API, as shown below.
batch_size = 64
training_batches = training_set.cache().map(normalize).batch(batch_size).prefetch(1)
validation_batches = validation_set.cache().map(normalize).batch(batch_size).prefetch(1)
testing_batches = testing_set.cache().map(normalize).batch(batch_size).prefetch(1)
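One more fix worth making (not covered in the original answer): the fit call above passes validation_data=validation_set, the raw unbatched dataset, rather than the batched one built here. It should presumably be:

history = skynet.fit(training_batches,
                     epochs=Epochss,
                     validation_data=validation_batches,
                     callbacks=[early_stopping])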

How do I structure a Keras model for a custom image regression problem?

I'm attempting to develop a regression model using TensorFlow 2 and the Keras API with a custom dataset of png images. However, I'm not entirely sure what layers I should be using and how. I put together what I thought was a very simple model as a starting point; however, when I attempt to train it, the loss and accuracy values printed out are consistently 0. This leads me to believe my loss calculation is not working, but I have no idea why. Below is a snippet of my source code; the full project can be found here:
import tensorflow as tf
import os
import random
import pathlib
AUTOTUNE = tf.data.experimental.AUTOTUNE
TRAINING_DATA_DIR = r'specgrams'
def gen_model():
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(256, 128, 3)),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(1)
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model
def fetch_batch(batch_size=1000):
    all_image_paths = []
    all_image_labels = []
    data_root = pathlib.Path(TRAINING_DATA_DIR)
    files = data_root.iterdir()
    for file in files:
        file = str(file)
        all_image_paths.append(os.path.abspath(file))
        label = file[:-4].split('-')[2:3]
        label = float(label[0]) / 200
        all_image_labels.append(label)

    def preprocess_image(path):
        img_raw = tf.io.read_file(path)
        image = tf.image.decode_png(img_raw, channels=3)
        image = tf.image.resize(image, [256, 128])
        image /= 255.0
        return image

    def preprocess(path, label):
        return preprocess_image(path), label

    path_ds = tf.data.Dataset.from_tensor_slices(all_image_paths)
    image_ds = path_ds.map(preprocess_image, num_parallel_calls=AUTOTUNE)
    label_ds = tf.data.Dataset.from_tensor_slices(all_image_labels)
    ds = tf.data.Dataset.zip((image_ds, label_ds))
    ds = ds.shuffle(buffer_size=len(os.listdir(TRAINING_DATA_DIR)))
    ds = ds.repeat()
    ds = ds.batch(batch_size)
    ds = ds.prefetch(buffer_size=AUTOTUNE)
    return ds
ds = fetch_batch()
model = gen_model()
model.fit(ds, epochs=1, steps_per_epoch=10)
The code above is supposed to read in spectrograms stored as 256 x 128 px png files, convert them to tensors, and fit a regression model to predict a value (in this case the BPM of the music used to generate the spectrogram). The image file names contain the BPM, which is divided by 200 to produce a label between 0 and 1.
As stated before, this code does run successfully but after each training step the loss and accuracy values printed out are always exactly 0.00000 and do not change.
It's also worth noting that I actually want my model to predict multiple values, not just a single BPM value but this is a separate issue and as such I have posted a separate question for that here.
Anyway, for the answer: a regression model requires a regression loss function, such as 'mean_squared_error', 'mean_absolute_error', 'mean_absolute_percentage_error', or 'mean_squared_logarithmic_error'.
def gen_model():
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(256, 128, 3)),
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dense(1)
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss='mean_squared_error',
                  metrics=['accuracy'])
    return model
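A side note on the metrics argument (my addition, not part of the original answer): 'accuracy' is a classification metric and will sit near zero for a continuous target, which is why it never moved during training; a metric such as mean absolute error is more informative for regression:

model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss='mean_squared_error',
              metrics=['mae'])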
