Keras Transformer - Test Loss Not Changing - python

I'm trying to create a small transformer model with Keras to model stock prices, based on this tutorial from the Keras docs. The problem is that my test loss is massive and barely changes between epochs, unsurprisingly resulting in severe underfitting, with my outputs all being the same arbitrary value.
My code is below:
def transformer_encoder_block(inputs, head_size, num_heads, filters, dropout=0):
    # Normalization and Attention
    x = layers.LayerNormalization(epsilon=1e-6)(inputs)
    x = layers.MultiHeadAttention(
        key_dim=head_size, num_heads=num_heads, dropout=dropout
    )(x, x)
    x = layers.Dropout(dropout)(x)
    res = x + inputs
    # Feed Forward Part
    x = layers.LayerNormalization(epsilon=1e-6)(res)
    x = layers.Conv1D(filters=filters, kernel_size=1, activation="relu")(x)
    x = layers.Dropout(dropout)(x)
    x = layers.Conv1D(filters=inputs.shape[-1], kernel_size=1)(x)
    return x + res
data = ...
input = np.array(
    keras.preprocessing.sequence.pad_sequences(data["input"], padding="pre", dtype="float32"))
output = np.array(
    keras.preprocessing.sequence.pad_sequences(data["output"], padding="pre", dtype="float32"))
# Input shape: (723, 36, 22)
# Output shape: (723, 36, 1)
# Train data
train_features = input[100:]
train_labels = output[100:]
train_labels = tf.keras.utils.to_categorical(train_labels, num_classes=3)
# Test data
test_features = input[:100]
test_labels = output[:100]
test_labels = tf.keras.utils.to_categorical(test_labels, num_classes=3)
inputs = keras.Input(shape=(None,22), dtype="float32", name="inputs")
# Ignore padding in inputs
x = layers.Masking(mask_value=0)(inputs)
x = transformer_encoder_block(x, head_size=64, num_heads=16, filters=3, dropout=0.2)
# Multiclass = Softmax (decrease, no change, increase)
outputs = layers.TimeDistributed(layers.Dense(3, activation="softmax", name="outputs"))(x)
# Create model
model = keras.Model(inputs=inputs, outputs=outputs)
# Compile model
model.compile(loss="categorical_crossentropy", optimizer=tf.keras.optimizers.Adam(learning_rate=0.005), metrics=['accuracy'])
# Train model
history = model.fit(train_features, train_labels, epochs=10, batch_size=32)
# Evaluate on the test data
test_loss = model.evaluate(test_features, test_labels, verbose=0)
print("Test loss:", test_loss)
out = model.predict(test_features)
After padding, input is of shape (723, 36, 22), and output is of shape (723, 36, 1) (before converting output to one-hot, after which there are 3 output classes).
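As a quick sanity check of the one-hot conversion (a minimal sketch, not from the original post): to_categorical collapses a trailing dimension of size 1 before expanding to num_classes, so the labels line up with the model's (batch, timesteps, 3) output:
import numpy as np
import tensorflow as tf

labels = np.zeros((723, 36, 1))  # integer class ids in {0, 1, 2}; padded steps are 0
one_hot = tf.keras.utils.to_categorical(labels, num_classes=3)
print(one_hot.shape)  # (723, 36, 3) -- note that padded timesteps become class 0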
Here's an example output for ten epochs (trust me, more than ten doesn't make it better):
Epoch 1/10
20/20 [==============================] - 2s 62ms/step - loss: 10.7436 - accuracy: 0.3335
Epoch 2/10
20/20 [==============================] - 1s 62ms/step - loss: 10.7083 - accuracy: 0.3354
Epoch 3/10
20/20 [==============================] - 1s 60ms/step - loss: 10.6555 - accuracy: 0.3392
Epoch 4/10
20/20 [==============================] - 1s 62ms/step - loss: 10.7846 - accuracy: 0.3306
Epoch 5/10
20/20 [==============================] - 1s 60ms/step - loss: 10.7600 - accuracy: 0.3322
Epoch 6/10
20/20 [==============================] - 1s 59ms/step - loss: 10.7074 - accuracy: 0.3358
Epoch 7/10
20/20 [==============================] - 1s 59ms/step - loss: 10.6569 - accuracy: 0.3385
Epoch 8/10
20/20 [==============================] - 1s 60ms/step - loss: 10.7767 - accuracy: 0.3314
Epoch 9/10
20/20 [==============================] - 1s 61ms/step - loss: 10.7346 - accuracy: 0.3341
Epoch 10/10
20/20 [==============================] - 1s 62ms/step - loss: 10.7093 - accuracy: 0.3354
Test loss: [10.073813438415527, 0.375]
4/4 [==============================] - 0s 22ms/step
Using the same data on a simple LSTM model with the same shape yielded reasonable predictions with a steadily decreasing loss.
Tweaking the learning rate appears to have no effect, nor does stacking more transformer_encoder_block()s.
If anyone has any suggestions for how I can solve this, please let me know.

Related

learn a new set of data from existing model for enhancing it (tensorflow - keras - callbacks)

I train a model on a dataset, and everything is fine. Sometimes changes occur and I get some new data. I'd like to "continue" training my existing model with the new set of data, without starting from scratch again.
Here is a simple example to show the problem and where I'm stuck.
import tensorflow as tf
mnist = tf.keras.datasets.mnist
# get the data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# split in 2 parts to get 2 sets of data
x_train1 = x_train[:5000]
x_train2 = x_train[5000:10000]
y_train1 = y_train[:5000]
y_train2 = y_train[5000:10000]
# set the model
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Compile the model
model.compile(optimizer='adam',
              loss=loss_fn,
              metrics=['accuracy'])
# set the callback
checkpoint_path = "CHECKPOINTS/cp.ckpt"
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, monitor='accuracy', mode="max", save_best_only=False, save_weights_only=False, save_freq="epoch", verbose=0)
Then I fit the first set of data:
# fit the 1st dataset
model.fit(x_train1, y_train1, epochs=5, callbacks=[cp_callback], verbose=1)
Output:
Epoch 1/5
157/157 [==============================] - 0s 897us/step - loss: 2.3518 - accuracy: 0.1075
Epoch 2/5
157/157 [==============================] - 0s 752us/step - loss: 2.2833 - accuracy: 0.1438
Epoch 3/5
157/157 [==============================] - 0s 731us/step - loss: 2.2656 - accuracy: 0.1564
Epoch 4/5
157/157 [==============================] - 0s 755us/step - loss: 2.2388 - accuracy: 0.1719
Epoch 5/5
157/157 [==============================] - 0s 759us/step - loss: 2.2117 - accuracy: 0.1901
Then the 2nd one:
# fit the 2nd one
model.fit(x_train2, y_train2, epochs=5, callbacks=[cp_callback], verbose=1)
Output:
Epoch 1/5
157/157 [==============================] - 0s 943us/step - loss: 2.3240 - accuracy: 0.0964
Epoch 2/5
157/157 [==============================] - 0s 778us/step - loss: 2.2881 - accuracy: 0.1238
Epoch 3/5
157/157 [==============================] - 0s 805us/step - loss: 2.2688 - accuracy: 0.1514
Epoch 4/5
157/157 [==============================] - 0s 814us/step - loss: 2.2498 - accuracy: 0.1496
Epoch 5/5
157/157 [==============================] - 0s 1ms/step - loss: 2.2289 - accuracy: 0.1704
As you can see, the second fit starts the training from scratch again, without keeping the training done before.
How can I do that?
Thanks in advance.
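For reference, a minimal sketch of the usual approach (my addition, not part of the original question): reload the checkpointed model and call fit again, so training continues from the saved weights. This assumes the ModelCheckpoint above wrote a full model (save_weights_only=False):
import tensorflow as tf

# Reload architecture, weights and (for full-model saves) optimizer state
model = tf.keras.models.load_model("CHECKPOINTS/cp.ckpt")
# Continue training on the new data, starting from the checkpointed weights
model.fit(x_train2, y_train2, epochs=5, callbacks=[cp_callback], verbose=1)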

Tensorflow tf.data.Dataset.cache seems do not take the expected effect

I am trying to improve my model training performance following the Better performance with the tf.data API guideline. However, I have observed that performance using .cache() is almost the same as, or even worse than, the same settings without .cache().
datafile_list = load_my_files()
RAW_BYTES = 403*4
BATCH_SIZE = 32
raw_dataset = tf.data.FixedLengthRecordDataset(filenames=datafile_list, record_bytes=RAW_BYTES, num_parallel_reads=10, buffer_size=1024*RAW_BYTES)
raw_dataset = raw_dataset.map(tf.autograph.experimental.do_not_convert(decode_and_prepare),
                              num_parallel_calls=tf.data.AUTOTUNE)
raw_dataset = raw_dataset.cache()
raw_dataset = raw_dataset.shuffle(buffer_size=1024)
raw_dataset = raw_dataset.batch(BATCH_SIZE)
raw_dataset = raw_dataset.prefetch(tf.data.AUTOTUNE)
The files in datafile_list hold 9.92 GB, which fits comfortably within the system's total physical RAM (100 GB). System swap is disabled.
Training the model using the dataset:
model = build_model()
model.fit(raw_dataset, epochs=5, verbose=2)
results in:
Epoch 1/5
206247/206247 - 126s - loss: 0.0043 - mae: 0.0494 - mse: 0.0043
Epoch 2/5
206247/206247 - 125s - loss: 0.0029 - mae: 0.0415 - mse: 0.0029
Epoch 3/5
206247/206247 - 129s - loss: 0.0027 - mae: 0.0397 - mse: 0.0027
Epoch 4/5
206247/206247 - 125s - loss: 0.0025 - mae: 0.0386 - mse: 0.0025
Epoch 5/5
206247/206247 - 125s - loss: 0.0024 - mae: 0.0379 - mse: 0.0024
This result is frustrating. From the docs:
The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
And from this guide:
When iterating over this dataset, the second iteration will be much faster than the first one thanks to the caching.
However, the elapsed time of every epoch is almost the same. In addition, during training both CPU and GPU usage are very low (see the attachments below).
By commenting out the line raw_dataset = raw_dataset.cache() the results do not show any notable difference:
Epoch 1/5
206067/206067 - 129s - loss: 0.0042 - mae: 0.0492 - mse: 0.0042
Epoch 2/5
206067/206067 - 127s - loss: 0.0028 - mae: 0.0412 - mse: 0.0028
Epoch 3/5
206067/206067 - 134s - loss: 0.0026 - mae: 0.0393 - mse: 0.0026
Epoch 4/5
206067/206067 - 127s - loss: 0.0024 - mae: 0.0383 - mse: 0.0024
Epoch 5/5
206067/206067 - 126s - loss: 0.0023 - mae: 0.0376 - mse: 0.0023
As pointed out in the docs, my expectation was that using cache would result in a much faster training time. I would like to know what I am doing wrong.
Attachments (screenshots not included here):
GPU usage during training using cache
GPU usage during training WITHOUT cache
System stats (memory, CPU, etc.) during training using cache
System stats (memory, CPU, etc.) during training WITHOUT cache
Just a small observation using Google Colab. According to the docs:
Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
And
Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache.
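In other words (a minimal sketch, not from the original answer), "iterated through in its entirety" just means consuming every element once; the experiment below does the same thing via as_numpy_iterator():
cached_dataset = raw_dataset.cache()
# One throwaway full pass populates and finalizes the in-memory cache;
# subsequent iterations (e.g. training epochs) then read from the cache.
for _ in cached_dataset:
    pass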
I did notice a few differences when using caching and iterating over the dataset beforehand. Here is an example.
Prepare data:
import random
import struct
import tensorflow as tf
import numpy as np
RAW_N = 2 + 20*20 + 1
bytess = random.sample(range(1, 5000), RAW_N*4)
with open('mydata.bin', 'wb') as f:
    f.write(struct.pack('1612i', *bytess))

def decode_and_prepare(register):
    register = tf.io.decode_raw(register, out_type=tf.float32)
    inputs = register[2:402]
    label = tf.random.uniform(()) + register[402:]
    return inputs, label
raw_dataset = tf.data.FixedLengthRecordDataset(filenames=['/content/mydata.bin']*7000, record_bytes=RAW_N*4)
raw_dataset = raw_dataset.map(decode_and_prepare)
Train model without caching and iterating beforehand:
total_data_entries = len(list(raw_dataset.map(lambda x, y: (x, y))))
train_ds = raw_dataset.shuffle(buffer_size=total_data_entries).batch(32).prefetch(tf.data.AUTOTUNE)
inputs = tf.keras.layers.Input((400,))
x = tf.keras.layers.Dense(200, activation='relu', kernel_initializer='normal')(inputs)
x = tf.keras.layers.Dense(100, activation='relu', kernel_initializer='normal')(x)
outputs = tf.keras.layers.Dense(1, kernel_initializer='normal')(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')
model.fit(train_ds, epochs=5)
Epoch 1/5
875/875 [==============================] - 4s 3ms/step - loss: 0.1425
Epoch 2/5
875/875 [==============================] - 4s 3ms/step - loss: 0.0841
Epoch 3/5
875/875 [==============================] - 4s 3ms/step - loss: 0.0840
Epoch 4/5
875/875 [==============================] - 4s 3ms/step - loss: 0.0840
Epoch 5/5
875/875 [==============================] - 4s 3ms/step - loss: 0.0840
<keras.callbacks.History at 0x7fc41be037d0>
Training model with caching but no iterating:
total_data_entries = len(list(raw_dataset.map(lambda x, y: (x, y))))
train_ds = raw_dataset.shuffle(buffer_size=total_data_entries).cache().batch(32).prefetch(tf.data.AUTOTUNE)
inputs = tf.keras.layers.Input((400,))
x = tf.keras.layers.Dense(200, activation='relu', kernel_initializer='normal')(inputs)
x = tf.keras.layers.Dense(100, activation='relu', kernel_initializer='normal')(x)
outputs = tf.keras.layers.Dense(1, kernel_initializer='normal')(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')
model.fit(train_ds, epochs=5)
Epoch 1/5
875/875 [==============================] - 4s 2ms/step - loss: 0.1428
Epoch 2/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0841
Epoch 3/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0840
Epoch 4/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0840
Epoch 5/5
875/875 [==============================] - 2s 3ms/step - loss: 0.0840
<keras.callbacks.History at 0x7fc41fa87810>
Training model with caching and iterating:
total_data_entries = len(list(raw_dataset.map(lambda x, y: (x, y))))
train_ds = raw_dataset.shuffle(buffer_size=total_data_entries).cache().batch(32).prefetch(tf.data.AUTOTUNE)
_ = list(train_ds.as_numpy_iterator()) # iterate dataset beforehand
inputs = tf.keras.layers.Input((400,))
x = tf.keras.layers.Dense(200, activation='relu', kernel_initializer='normal')(inputs)
x = tf.keras.layers.Dense(100, activation='relu', kernel_initializer='normal')(x)
outputs = tf.keras.layers.Dense(1, kernel_initializer='normal')(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')
model.fit(train_ds, epochs=5)
Epoch 1/5
875/875 [==============================] - 3s 3ms/step - loss: 0.1427
Epoch 2/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0841
Epoch 3/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0840
Epoch 4/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0840
Epoch 5/5
875/875 [==============================] - 2s 2ms/step - loss: 0.0840
<keras.callbacks.History at 0x7fc41ac9c850>
Conclusion: caching, and iterating over the dataset beforehand, do seem to have an effect on training time, though in this example only 7000 files were used.
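One side note (my addition, not from the original post): cache() also accepts a filename, which caches to local disk instead of memory; that is the option to reach for when the dataset does not fit in RAM. A minimal sketch:
import tensorflow as tf

ds = tf.data.Dataset.range(10).map(lambda x: x * 2)
ds = ds.cache("/tmp/my_cache")  # hypothetical path; cache files are written on the first full pass
for _ in ds:                    # first epoch: computes elements and writes the cache
    pass
for elem in ds:                 # later epochs: served from /tmp/my_cache* without recomputing
    print(elem.numpy())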

Output shape of Sequential Network is wrong in Keras

I have a sequential network that takes in vectorized sentences of 20 words and aims to classify each sentence based on a label. Each word has 300 dimensions, so each sentence has shape (20, 300). The dataset currently has 11 samples, so the full x_train is of shape (11, 20, 300).
Below is the code for my Network:
nnmodel = keras.Sequential()
nnmodel.add(keras.layers.InputLayer(input_shape=(20, 300)))
nnmodel.add(keras.layers.Dense(units=300, activation="relu"))
nnmodel.add(keras.layers.Dense(units=20, activation="relu"))
nnmodel.add(keras.layers.Dense(units=1, activation="sigmoid"))
nnmodel.compile(optimizer='adam',
                loss='SparseCategoricalCrossentropy',
                metrics=['accuracy'])
nnmodel.fit(x_train, y_train, epochs=10, batch_size=1)
for layer in nnmodel.layers:
    print(layer.output_shape)
This gives:
Epoch 1/10
11/11 [==============================] - 0s 1ms/step - loss: 2.9727 - accuracy: 0.0455
Epoch 2/10
11/11 [==============================] - 0s 1ms/step - loss: 2.7716 - accuracy: 0.0682
Epoch 3/10
11/11 [==============================] - 0s 1ms/step - loss: 2.6279 - accuracy: 0.0682
Epoch 4/10
11/11 [==============================] - 0s 1ms/step - loss: 2.4878 - accuracy: 0.0682
Epoch 5/10
11/11 [==============================] - 0s 1ms/step - loss: 2.3145 - accuracy: 0.0545
Epoch 6/10
11/11 [==============================] - 0s 1ms/step - loss: 2.0505 - accuracy: 0.0545
Epoch 7/10
11/11 [==============================] - 0s 1ms/step - loss: 1.7010 - accuracy: 0.0545
Epoch 8/10
11/11 [==============================] - 0s 992us/step - loss: 1.2874 - accuracy: 0.0545
Epoch 9/10
11/11 [==============================] - 0s 891us/step - loss: 0.9628 - accuracy: 0.0545
Epoch 10/10
11/11 [==============================] - 0s 794us/step - loss: 0.7960 - accuracy: 0.0545
(None, 20, 300)
(None, 20, 20)
(None, 20, 1)
Why is my output layer returning (20, 1)? It needs to be of shape (1), because my label is just an integer. I'm quite confused, and unsure how the loss is even calculated if the shape is wrong.
Any help would be greatly appreciated.
Thanks
With the current code, this is the expected output. Adding a simple Dense layer to a multidimensional input only changes the size of the last dimension. This is why, in CNNs, we generally add a Flatten after the convolution layers: a Flatten layer reshapes the input to remove the extra dimensions, so each sample becomes one-dimensional. The updated code should be:
nnmodel = keras.Sequential()
nnmodel.add(keras.layers.InputLayer(input_shape=(20, 300)))
nnmodel.add(keras.layers.Flatten())  # This is the code change
nnmodel.add(keras.layers.Dense(units=300, activation="relu"))
nnmodel.add(keras.layers.Dense(units=20, activation="relu"))
nnmodel.add(keras.layers.Dense(units=1, activation="sigmoid"))
nnmodel.compile(optimizer='adam',
                loss='SparseCategoricalCrossentropy',
                metrics=['accuracy'])
nnmodel.fit(x_train, y_train, epochs=10, batch_size=1)
for layer in nnmodel.layers:
    print(layer.output_shape)

How to control if input features contribute exclusively to one neuron in subsequent layer of a Tensorflow neural network?

I'm trying to make the most basic of basic neural networks to get familiar with the functional API in Tensorflow 2.x.
Basically, what I'm trying to do with my simplified iris dataset (i.e. setosa or not) is the following:
Use the 4 features as input
Dense layer of 3
Sigmoid activation function
Dense layer of 2 (one for each class)
Softmax activation
Binary cross entropy / log-loss as my loss function
However, I can't figure out how to control one key aspect of the model. That is, how can I ensure that each feature from my input layer contributes to only one neuron in my subsequent dense layer? Also, how can I allow a feature to contribute to more than one neuron?
This isn't clear to me from the documentation.
# Load data
from sklearn.datasets import load_iris
import pandas as pd
iris = load_iris()
X, y = load_iris(return_X_y=True, as_frame=True)
X = X.astype("float32")
X.index = X.index.map(lambda i: "iris_{}".format(i))
X.columns = X.columns.map(lambda j: j.split(" (")[0].replace(" ","_"))
y.index = X.index
y = y.map(lambda i:iris.target_names[i])
y_simplified = y.map(lambda i: {True:1, False:0}[i == "setosa"])
y_simplified = pd.get_dummies(y_simplified, columns=["setosa", "not_setosa"])
# Train test split
from sklearn.model_selection import train_test_split
seed=0
X_train,X_test, y_train,y_test= train_test_split(X,y_simplified, test_size=0.3, random_state=seed)
# Simple neural network
import tensorflow as tf
tf.random.set_seed(seed)
# Input[4 features] -> Dense layer of 3 neurons -> Activation function -> Dense layer of 2 (one per class) -> Softmax
inputs = tf.keras.Input(shape=(4))
x = tf.keras.layers.Dense(3)(inputs)
x = tf.keras.layers.Activation(tf.nn.sigmoid)(x)
x = tf.keras.layers.Dense(2)(x)
outputs = tf.keras.layers.Activation(tf.nn.softmax)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs, name="simple_binary_iris")
model.compile(loss="binary_crossentropy", metrics=["accuracy"] )
model.summary()
history = model.fit(X_train, y_train, batch_size=64, epochs=10, validation_split=0.2)
test_scores = model.evaluate(X_test, y_test)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])
Results:
Model: "simple_binary_iris"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_44 (InputLayer)        [(None, 4)]               0
_________________________________________________________________
dense_96 (Dense)             (None, 3)                 15
_________________________________________________________________
activation_70 (Activation)   (None, 3)                 0
_________________________________________________________________
dense_97 (Dense)             (None, 2)                 8
_________________________________________________________________
activation_71 (Activation)   (None, 2)                 0
=================================================================
Total params: 23
Trainable params: 23
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
2/2 [==============================] - 0s 40ms/step - loss: 0.6344 - accuracy: 0.6667 - val_loss: 0.6107 - val_accuracy: 0.7143
Epoch 2/10
2/2 [==============================] - 0s 6ms/step - loss: 0.6302 - accuracy: 0.6667 - val_loss: 0.6083 - val_accuracy: 0.7143
Epoch 3/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6278 - accuracy: 0.6667 - val_loss: 0.6056 - val_accuracy: 0.7143
Epoch 4/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6257 - accuracy: 0.6667 - val_loss: 0.6038 - val_accuracy: 0.7143
Epoch 5/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6239 - accuracy: 0.6667 - val_loss: 0.6014 - val_accuracy: 0.7143
Epoch 6/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6223 - accuracy: 0.6667 - val_loss: 0.6002 - val_accuracy: 0.7143
Epoch 7/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6209 - accuracy: 0.6667 - val_loss: 0.5989 - val_accuracy: 0.7143
Epoch 8/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6195 - accuracy: 0.6667 - val_loss: 0.5967 - val_accuracy: 0.7143
Epoch 9/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6179 - accuracy: 0.6667 - val_loss: 0.5953 - val_accuracy: 0.7143
Epoch 10/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6166 - accuracy: 0.6667 - val_loss: 0.5935 - val_accuracy: 0.7143
2/2 [==============================] - 0s 607us/step - loss: 0.6261 - accuracy: 0.6444
Test loss: 0.6261375546455383
Test accuracy: 0.644444465637207
how can I ensure that each feature from my input layer contributes to only one neuron in my subsequent dense layer?
Have one input layer per feature and feed each input layer to a separate dense layer. Later you can concatenate the output of all the dense layers and proceed.
NOTE: One neuron can take an input of any size (in this case the input size is 1, as you want one feature to be used by the neuron), and its output size is always 1. A Dense layer with n units will have n neurons, and so will have an output size of n.
Working Sample
import tensorflow as tf
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
# Model architecture
x1 = tf.keras.Input(shape=(1,))
x2 = tf.keras.Input(shape=(1,))
x3 = tf.keras.Input(shape=(1,))
x4 = tf.keras.Input(shape=(1,))
x1_ = tf.keras.layers.Dense(3, activation=tf.nn.relu)(x1)
x2_ = tf.keras.layers.Dense(3, activation=tf.nn.relu)(x2)
x3_ = tf.keras.layers.Dense(3, activation=tf.nn.relu)(x3)
x4_ = tf.keras.layers.Dense(3, activation=tf.nn.relu)(x4)
merged = tf.keras.layers.concatenate([x1_, x2_, x3_, x4_])
merged = tf.keras.layers.Dense(16, activation=tf.nn.relu)(merged)
outputs = tf.keras.layers.Dense(3, activation=tf.nn.softmax)(merged)
model = tf.keras.Model(inputs=[x1,x2,x3,x4], outputs=outputs)
model.compile(loss="sparse_categorical_crossentropy", metrics=["accuracy"] )
# Load and prepare data
iris = load_iris()
X = iris.data
y = iris.target
X_train,X_test, y_train,y_test= train_test_split(X,y, test_size=0.3)
# Fit the model
model.fit([X_train[:,0],X_train[:,1],X_train[:,2],X_train[:,3]], y_train, batch_size=64, epochs=100, validation_split=0.25)
# Evaluate the model
test_scores = model.evaluate([X_test[:,0],X_test[:,1],X_test[:,2],X_test[:,3]], y_test)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])
Output:
Epoch 1/100
2/2 [==============================] - 0s 75ms/step - loss: 1.6446 - accuracy: 0.4359 - val_loss: 1.6809 - val_accuracy: 0.5185
Epoch 2/100
2/2 [==============================] - 0s 10ms/step - loss: 1.4151 - accuracy: 0.6154 - val_loss: 1.4886 - val_accuracy: 0.5556
Epoch 3/100
2/2 [==============================] - 0s 9ms/step - loss: 1.2725 - accuracy: 0.6795 - val_loss: 1.3813 - val_accuracy: 0.5556
Epoch 4/100
2/2 [==============================] - 0s 9ms/step - loss: 1.1829 - accuracy: 0.6795 - val_loss: 1.2779 - val_accuracy: 0.5926
Epoch 5/100
2/2 [==============================] - 0s 10ms/step - loss: 1.0994 - accuracy: 0.6795 - val_loss: 1.1846 - val_accuracy: 0.5926
Epoch 6/100
.................. [ Truncated ]
Epoch 100/100
2/2 [==============================] - 0s 2ms/step - loss: 0.4049 - accuracy: 0.9333
Test loss: 0.40491223335266113
Test accuracy: 0.9333333373069763
Pictorial representation of the above model architecture (image not included here).
Dense layers in Keras/TF are fully connected layers. For example, when you use a Dense layer as follows
inputs = tf.keras.Input(shape=(4))
x = tf.keras.layers.Dense(3)(inputs)
all 4 input neurons are connected to all 3 output neurons.
There isn't any predefined layer in Keras/TF to specify how to connect input and output neurons. However, Keras/TF is very flexible in that it allows you to define your custom layers easily.
Borrowing the idea from this answer, you could define a CustomConnected layer as follows:
class CustomConnected(tf.keras.layers.Dense):
    def __init__(self, units, connections, **kwargs):
        # `connections` is a binary mask of shape (input_dim, units):
        # 1 keeps a connection, 0 removes it
        self.connections = connections
        super(CustomConnected, self).__init__(units, **kwargs)

    def call(self, inputs):
        # Mask the kernel so that only the allowed connections contribute
        self.kernel = self.kernel * self.connections
        return super(CustomConnected, self).call(inputs)
Using this layer, you can then specify the connections between two layers through the connections argument. For example:
import numpy as np

inputs = tf.keras.Input(shape=(4))
connections = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1]])
x = CustomConnected(3, connections)(inputs)
Here, the 1st, 2nd, and 3rd input neurons are connected to the 1st, 2nd, and 3rd output neurons, respectively. Additionally, the 4th input neuron is connected to the 3rd output neuron.
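A quick hypothetical sanity check (my addition, not from the original answer): after one forward pass, the layer's kernel should contain non-zero weights only where connections has ones:
import numpy as np
import tensorflow as tf

connections = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1]], dtype="float32")
layer = CustomConnected(3, connections)
_ = layer(tf.ones((1, 4)))  # builds the layer and applies the mask
print(np.count_nonzero(layer.kernel.numpy()))  # expected: 4, one per allowed connection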
UPDATE: As discussed in the comments section, an adaptive approach (e.g. by using only the maximum weight for each output neuron) is also possible but not recommended. You could implement this via the following layer:
class CustomSparse(tf.keras.layers.Dense):
    def __init__(self, units, **kwargs):
        super(CustomSparse, self).__init__(units, **kwargs)

    def call(self, inputs):
        nb_in, nb_out = self.kernel.shape
        argmax = tf.argmax(self.kernel, axis=0)  # Shape=(nb_out,)
        argmax_onehot = tf.transpose(tf.one_hot(argmax, depth=nb_in))  # Shape=(nb_in, nb_out)
        kernel_max = self.kernel * argmax_onehot
        # tf.print(kernel_max)  # Uncomment this line to print the weights
        out = tf.matmul(inputs, kernel_max)
        if self.bias is not None:
            out += self.bias
        if self.activation is not None:
            out = self.activation(out)
        return out
The main issue of this approach is that you cannot propagate gradients through the argmax operation required to select the maximum weight. As a result, the network will only "switch input neurons" when the selected weight is no longer the maximum weight.

How to take as Input a list of arrays in Keras API

Well, I'm new to machine learning, and so to Keras. I'm trying to create a model that can be passed as input a list of arrays of arrays (a list of 6400 arrays within 2 arrays).
This is the problematic part of my code:
XFIT = np.array([x_train, XX_train])
YFIT = np.array([y_train, yy_train])
Inputs = keras.layers.Input(shape=(6400, 2))
hidden1 = keras.layers.Dense(units=100, activation="sigmoid")(Inputs)
hidden2 = keras.layers.Dense(units=100, activation='relu')(hidden1)
predictions = keras.layers.Dense(units=3, activation='softmax')(hidden2)
model = keras.Model(inputs=Inputs, outputs=predictions)
There's no error; however, the Input layer (Inputs) forces me to pass a (6400, 2) shape, as each array (x_train and XX_train) has 6400 arrays inside. The result, once the epochs are done, is this:
Train on 2 samples
Epoch 1/5
2/2 [==============================] - 1s 353ms/sample - loss: 1.1966 - accuracy: 0.2488
Epoch 2/5
2/2 [==============================] - 0s 9ms/sample - loss: 1.1303 - accuracy: 0.2544
Epoch 3/5
2/2 [==============================] - 0s 9ms/sample - loss: 1.0982 - accuracy: 0.3745
Epoch 4/5
2/2 [==============================] - 0s 9ms/sample - loss: 1.0854 - accuracy: 0.3745
Epoch 5/5
2/2 [==============================] - 0s 9ms/sample - loss: 1.0835 - accuracy: 0.3745
Process finished with exit code 0
I can't train on more than two samples in each epoch because of the input shape. How can I change this input?
I have tried other shapes, but they gave me errors.
x_train and XX_train look like this:
[[[0.505834 0.795461]
[0.843175 0.975741]
[0.22349 0.035036]
...
[0.884796 0.867509]
[0.396942 0.659936]
[0.873194 0.05454 ]]
[[0.95968 0.281957]
[0.137547 0.390005]
[0.635382 0.901555]
...
[0.887062 0.486206]
[0.49827 0.949123]
[0.034411 0.983711]]]
Thank you, and forgive me if I've made any mistakes; it's my first time with Keras and my first time on StackOverflow :D
You are almost there. The problem is with:
XFIT = np.array([x_train, XX_train])
YFIT = np.array([y_train, yy_train])
Let's see with an example:
import numpy as np
x_train = np.random.random((6400, 2))
y_train = np.random.randint(2, size=(6400,1))
xx_train = np.array([x_train, x_train])
yy_train = np.array([y_train, y_train])
print(xx_train.shape)
(2, 6400, 2)
print(yy_train.shape)
(2, 6400, 1)
In the array, we have 2 samples with 6400 entries each. This means that when we call model.fit, it only has 2 samples to train on. Instead, what we can do:
xx_train = np.vstack([x_train, x_train])
yy_train = np.vstack([y_train, y_train])
print(xx_train.shape)
(12800, 2)
print(yy_train.shape)
(12800, 1)
Now we have correctly joined both sets of samples and can train:
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

Inputs = Input(shape=(2,))
hidden1 = Dense(units=100, activation="sigmoid")(Inputs)
hidden2 = Dense(units=100, activation='relu')(hidden1)
predictions = Dense(units=1, activation='sigmoid')(hidden2)
model = Model([Inputs], outputs=predictions)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(xx_train, yy_train, batch_size=10, epochs=5)
Train on 12800 samples
Epoch 1/5
12800/12800 [==============================] - 3s 216us/sample - loss: 0.6978 - acc: 0.5047
Epoch 2/5
12800/12800 [==============================] - 2s 186us/sample - loss: 0.6952 - acc: 0.5018
Epoch 3/5
12800/12800 [==============================] - 3s 196us/sample - loss: 0.6942 - acc: 0.4962
Epoch 4/5
12800/12800 [==============================] - 3s 217us/sample - loss: 0.6938 - acc: 0.4898
Epoch 5/5
12800/12800 [==============================] - 3s 217us/sample - loss: 0.6933 - acc: 0.5002
