I have a sequential network that takes in vectorized sentences of 20 words each and aims to classify each sentence with a label. Each word has 300 dimensions, so each sentence has shape (20, 300). The dataset currently has 11 samples, so the full x_train has shape (11, 20, 300).
Below is the code for my Network:
nnmodel = keras.Sequential()
nnmodel.add(keras.layers.InputLayer(input_shape=(20, 300)))
nnmodel.add(keras.layers.Dense(units=300, activation="relu"))
nnmodel.add(keras.layers.Dense(units=20, activation="relu"))
nnmodel.add(keras.layers.Dense(units=1, activation="sigmoid"))
nnmodel.compile(optimizer='adam',
                loss='SparseCategoricalCrossentropy',
                metrics=['accuracy'])
nnmodel.fit(x_train, y_train, epochs=10, batch_size=1)

for layer in nnmodel.layers:
    print(layer.output_shape)
This gives:
Epoch 1/10
11/11 [==============================] - 0s 1ms/step - loss: 2.9727 - accuracy: 0.0455
Epoch 2/10
11/11 [==============================] - 0s 1ms/step - loss: 2.7716 - accuracy: 0.0682
Epoch 3/10
11/11 [==============================] - 0s 1ms/step - loss: 2.6279 - accuracy: 0.0682
Epoch 4/10
11/11 [==============================] - 0s 1ms/step - loss: 2.4878 - accuracy: 0.0682
Epoch 5/10
11/11 [==============================] - 0s 1ms/step - loss: 2.3145 - accuracy: 0.0545
Epoch 6/10
11/11 [==============================] - 0s 1ms/step - loss: 2.0505 - accuracy: 0.0545
Epoch 7/10
11/11 [==============================] - 0s 1ms/step - loss: 1.7010 - accuracy: 0.0545
Epoch 8/10
11/11 [==============================] - 0s 992us/step - loss: 1.2874 - accuracy: 0.0545
Epoch 9/10
11/11 [==============================] - 0s 891us/step - loss: 0.9628 - accuracy: 0.0545
Epoch 10/10
11/11 [==============================] - 0s 794us/step - loss: 0.7960 - accuracy: 0.0545
(None, 20, 300)
(None, 20, 20)
(None, 20, 1)
Why is my output layer returning shape (20, 1)? It needs to be shape (1), because my label is just an integer. I'm also confused about how the loss is even being calculated if the shape is wrong.
Any help would be greatly appreciated!
Thanks
With the current code, that is the expected output. A Dense layer applied to a multidimensional input only changes the size of the last dimension. You may have noticed that in CNNs we generally add a Flatten layer after the convolution layers for the same reason: Flatten reshapes the input so that each sample becomes one-dimensional, removing the extra dimensions. The updated code should be:
nnmodel = keras.Sequential()
nnmodel.add(keras.layers.InputLayer(input_shape=(20, 300)))
nnmodel.add(keras.layers.Flatten())  # This is the code change
nnmodel.add(keras.layers.Dense(units=300, activation="relu"))
nnmodel.add(keras.layers.Dense(units=20, activation="relu"))
nnmodel.add(keras.layers.Dense(units=1, activation="sigmoid"))
nnmodel.compile(optimizer='adam',
                loss='SparseCategoricalCrossentropy',
                metrics=['accuracy'])
nnmodel.fit(x_train, y_train, epochs=10, batch_size=1)

for layer in nnmodel.layers:
    print(layer.output_shape)
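With the Flatten layer in place, the shape printout at the end should now look something like this (20 x 300 = 6000 flattened features, and a single output per sample):
(None, 6000)
(None, 300)
(None, 20)
(None, 1)
One small side note: 'SparseCategoricalCrossentropy' expects one output unit per class, so with a single sigmoid unit the usual choice would be 'binary_crossentropy' (or a softmax output with one unit per class if you have more than two labels).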
I'm trying to create a small transformer model with Keras to model stock prices, based on this tutorial from the Keras docs. The problem is, my test loss is massive and barely changes between epochs, unsurprisingly resulting in severe underfitting, with my outputs all being the same arbitrary value.
My code is below:
def transformer_encoder_block(inputs, head_size, num_heads, filters, dropout=0):
    # Normalization and attention
    x = layers.LayerNormalization(epsilon=1e-6)(inputs)
    x = layers.MultiHeadAttention(
        key_dim=head_size, num_heads=num_heads, dropout=dropout
    )(x, x)
    x = layers.Dropout(dropout)(x)
    res = x + inputs

    # Feed-forward part
    x = layers.LayerNormalization(epsilon=1e-6)(res)
    x = layers.Conv1D(filters=filters, kernel_size=1, activation="relu")(x)
    x = layers.Dropout(dropout)(x)
    x = layers.Conv1D(filters=inputs.shape[-1], kernel_size=1)(x)
    return x + res
data = ...
input = np.array(
    keras.preprocessing.sequence.pad_sequences(data["input"], padding="pre", dtype="float32"))
output = np.array(
    keras.preprocessing.sequence.pad_sequences(data["output"], padding="pre", dtype="float32"))
# Input shape: (723, 36, 22)
# Output shape: (723, 36, 1)
# Train data
train_features = input[100:]
train_labels = output[100:]
train_labels = tf.keras.utils.to_categorical(train_labels, num_classes=3)
# Test data
test_features = input[:100]
test_labels = output[:100]
test_labels = tf.keras.utils.to_categorical(test_labels, num_classes=3)
inputs = keras.Input(shape=(None,22), dtype="float32", name="inputs")
# Ignore padding in inputs
x = layers.Masking(mask_value=0)(inputs)
x = transformer_encoder_block(x, head_size=64, num_heads=16, filters=3, dropout=0.2)
# Multiclass = Softmax (decrease, no change, increase)
outputs = layers.TimeDistributed(layers.Dense(3, activation="softmax", name="outputs"))(x)
# Create model
model = keras.Model(inputs=inputs, outputs=outputs)
# Compile model
model.compile(loss="categorical_crossentropy", optimizer=(tf.keras.optimizers.Adam(learning_rate=0.005)), metrics=['accuracy'])
# Train model
history = model.fit(train_features, train_labels, epochs=10, batch_size=32)
# Evaluate on the test data
test_loss = model.evaluate(test_features, test_labels, verbose=0)
print("Test loss:", test_loss)
out = model.predict(test_features)
After padding, input is of shape (723, 36, 22) and output is of shape (723, 36, 1) (before converting the output to one-hot, after which there are 3 output classes).
Here's an example output for ten epochs (trust me, more than ten doesn't make it better):
Epoch 1/10
20/20 [==============================] - 2s 62ms/step - loss: 10.7436 - accuracy: 0.3335
Epoch 2/10
20/20 [==============================] - 1s 62ms/step - loss: 10.7083 - accuracy: 0.3354
Epoch 3/10
20/20 [==============================] - 1s 60ms/step - loss: 10.6555 - accuracy: 0.3392
Epoch 4/10
20/20 [==============================] - 1s 62ms/step - loss: 10.7846 - accuracy: 0.3306
Epoch 5/10
20/20 [==============================] - 1s 60ms/step - loss: 10.7600 - accuracy: 0.3322
Epoch 6/10
20/20 [==============================] - 1s 59ms/step - loss: 10.7074 - accuracy: 0.3358
Epoch 7/10
20/20 [==============================] - 1s 59ms/step - loss: 10.6569 - accuracy: 0.3385
Epoch 8/10
20/20 [==============================] - 1s 60ms/step - loss: 10.7767 - accuracy: 0.3314
Epoch 9/10
20/20 [==============================] - 1s 61ms/step - loss: 10.7346 - accuracy: 0.3341
Epoch 10/10
20/20 [==============================] - 1s 62ms/step - loss: 10.7093 - accuracy: 0.3354
Test loss: [10.073813438415527, 0.375]
4/4 [==============================] - 0s 22ms/step
Using the same data on a simple LSTM model with the same shape yielded a desirable prediction with a constantly decreasing loss.
Tweaking the learning rate appears to have no effect, nor does stacking more transformer_encoder_block()s.
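(By "stacking" I just mean repeating the encoder block in a loop, roughly like the sketch below; num_blocks is simply whatever depth I happened to test, e.g. 2 or 4:)
x = layers.Masking(mask_value=0)(inputs)
for _ in range(num_blocks):
    x = transformer_encoder_block(x, head_size=64, num_heads=16, filters=3, dropout=0.2)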
If anyone has any suggestions for how I can solve this, please let me know.
I have this code in which I am trying to fit a neural network model that has just three layers: the input layer, a hidden layer and, at the end, the output layer, which must have just one neuron for the single output. The problem is that when fitting, I always obtain the same values for the accuracy (null) and the loss (it remains constant), and I've tried changing the optimizer from 'sgd' to 'adam' and still nothing works as it should. What would you recommend?
Layer (type)                 Output Shape              Param #
=================================================================
data_in (InputLayer)         [(None, 4, 256)]          0
dense (Dense)                (None, 4, 124)            31868
dense_1 (Dense)              (None, 4, 1)              125
=================================================================
Total params: 31,993
Trainable params: 31,993
Non-trainable params: 0
_________________________________________________________________
Epoch 1/20
20/20 [==============================] - 8s 350ms/step - loss: 0.3170 - accuracy: 1.7361e-05
Epoch 2/20
20/20 [==============================] - 7s 348ms/step - loss: 0.2009 - accuracy: 6.7817e-08
Epoch 3/20
20/20 [==============================] - 7s 348ms/step - loss: 0.0513 - accuracy: 0.0000e+00
Epoch 4/20
20/20 [==============================] - 7s 348ms/step - loss: 0.0437 - accuracy: 0.0000e+00
Epoch 5/20
20/20 [==============================] - 7s 346ms/step - loss: 0.0430 - accuracy: 0.0000e+00
Epoch 6/20
20/20 [==============================] - 7s 346ms/step - loss: 0.0428 - accuracy: 0.0000e+00
Epoch 7/20
20/20 [==============================] - 7s 345ms/step - loss: 0.0428 - accuracy: 0.0000e+00
Epoch 8/20
20/20 [==============================] - 7s 345ms/step - loss: 0.0430 - accuracy: 0.0000e+00
Epoch 9/20
20/20 [==============================] - 7s 348ms/step - loss: 0.0429 - accuracy: 0.0000e+00
Epoch 10/20
20/20 [==============================] - 7s 348ms/step - loss: 0.0429 - accuracy: 0.0000e+00
Epoch 11/20
20/20 [==============================] - 7s 346ms/step - loss: 0.0429 - accuracy: 0.0000e+00
Epoch 12/20
20/20 [==============================] - 7s 344ms/step - loss: 0.0428 - accuracy: 0.0000e+00
Epoch 13/20
20/20 [==============================] - 7s 348ms/step - loss: 0.0428 - accuracy: 0.0000e+00
Epoch 14/20
20/20 [==============================] - 7s 345ms/step - loss: 0.0433 - accuracy: 0.0000e+00
Epoch 15/20
20/20 [==============================] - 7s 345ms/step - loss: 0.0430 - accuracy: 0.0000e+00
Epoch 16/20
20/20 [==============================] - 7s 347ms/step - loss: 0.0432 - accuracy: 0.0000e+00
Epoch 17/20
20/20 [==============================] - 7s 346ms/step - loss: 0.0429 - accuracy: 0.0000e+00
Epoch 18/20
20/20 [==============================] - 7s 347ms/step - loss: 0.0430 - accuracy: 0.0000e+00
Epoch 19/20
20/20 [==============================] - 7s 348ms/step - loss: 0.0428 - accuracy: 0.0000e+00
Epoch 20/20
20/20 [==============================] - 7s 348ms/step - loss: 0.0428 - accuracy: 0.0000e+00
*TEST*
1800/1800 [==============================] - 3s 2ms/step - loss: 0.0449 - accuracy: 0.0000e+00
accuracy: 0%
My input_shape is (4, 256) and my training data array has shape (57600, 4, 256), meaning I have 57600 samples of shape (4, 256). I also have a training labels array (the values I should obtain from the data) of shape (57600,). Finally, the library I am using is TensorFlow.
My code is the following:
import numpy as np

from keras.layers import Input, Dense, concatenate, Conv2D, MaxPooling2D, Flatten
from keras.models import Model, Sequential
from tensorflow import keras
from sklearn.preprocessing import MinMaxScaler
div_n = 240
#DIVIDING THE DATA WE WANT TO CLASSIFY AND ITS LABELS - the scaled data is being used
data = np.array([self_mbyy_scaled,
                 self_mvyy_scaled,
                 self_mtpr_scaled,
                 self_mrho_scaled])
labels = self_iout_scaled
print(np.shape(data))
print(np.shape(labels))
#TRAINING SET AND TEST SET
tr_data = []
tr_labels = []
#Here I'm dividing the whole data in half along the nx, nz dimensions. The first half is the training set and the second half is the test set
for j in range(div_n):
    for k in range(div_n):
        tr_data.append([data[0][j,:,k],
                        data[1][j,:,k],
                        data[2][j,:,k],
                        data[3][j,:,k]]) #It puts the magnetic field, velocity, temperature and density values in one row for 240x240=57600 columns
        tr_labels.append(labels[j,k]) #the values from the column of targets
tr_data = np.array(tr_data)
tr_data = tr_data.reshape(div_n*div_n, len(data), self_ny, 1)
tr_labels = np.array(tr_labels)
print('\n training data shape')
print(np.shape(tr_data))
print('\n training labels shape')
print(np.shape(tr_labels))
te_data = []
te_labels = []
for j in range(div_n):
    for k in range(div_n):
        te_data.append([data[0][div_n+j,:,div_n+k],
                        data[1][div_n+j,:,div_n+k],
                        data[2][div_n+j,:,div_n+k],
                        data[3][div_n+j,:,div_n+k]]) #It puts the magnetic field, velocity, temperature and density values in one row for 240x240=57600 columns
        te_labels.append(labels[div_n+j,div_n+k]) #the values from the column of targets
te_data = np.array(te_data)
te_data = te_data.reshape(div_n*div_n, len(data), self_ny, 1)
te_labels = np.array(te_labels)
print('\n test data shape')
print(np.shape(te_data))
print('\n test labels shape')
print(np.shape(te_labels))
print('\n')
#NEURAL NETWORK MODEL
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(4, 256, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(1))
model.summary()
model.compile(
    optimizer=keras.optimizers.Adam(0.001),
    loss=keras.losses.MeanSquaredError(),
    metrics=['accuracy'],
)
model.fit(
    tr_data, tr_labels,
    epochs=6,
    validation_data=ds_valid,
)
Since your data, with shape (57600, 4, 256) --> (samples, timesteps, features), seems to have a sequence dimension rather than a 2D spatial one, I would recommend using Conv1D layers instead of Conv2D. Here is a simple working example:
import tensorflow as tf

model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv1D(128, 2, activation='relu', input_shape=(4, 256)))
model.add(tf.keras.layers.Conv1D(64, 2, activation='relu'))
model.add(tf.keras.layers.Conv1D(32, 2, activation='relu'))
model.add(tf.keras.layers.GlobalMaxPool1D())
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(1))
model.summary()
model.compile(
    optimizer=tf.keras.optimizers.Adam(0.001),
    loss=tf.keras.losses.MeanSquaredError(),
    metrics=['mse'],
)

samples = 50
x = tf.random.normal((samples, 4, 256))
y = tf.random.normal((samples,))
model.fit(x, y, batch_size=10, epochs=6)
And note that you usually do not use the accuracy metric with the tf.keras.losses.MeanSquaredError loss function; accuracy is a classification metric and is not meaningful for regression.
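If you want an additional human-readable metric for the regression, mean absolute error is a common choice (purely as an illustration, not something your setup requires):
model.compile(
    optimizer=tf.keras.optimizers.Adam(0.001),
    loss=tf.keras.losses.MeanSquaredError(),
    metrics=[tf.keras.metrics.MeanAbsoluteError()],
)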
I'm trying to implement a CNN using Keras on a scikit-learn dataset for handwritten digit recognition (load_digits). I have got the model to run, but it is not improving the accuracy from one epoch to the next. I'm guessing it's because my labels are incorrect; I have tried encoding my y values with 'to_categorical', but it displays the following error:
C:\Users\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\backend.py:4979 binary_crossentropy
return nn.sigmoid_cross_entropy_with_logits(labels=target, logits=output)
C:\Users\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\util\dispatch.py:201 wrapper
return target(*args, **kwargs)
C:\Users\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\ops\nn_impl.py:173 sigmoid_cross_entropy_with_logits
raise ValueError("logits and labels must have the same shape (%s vs %s)" %
ValueError: logits and labels must have the same shape ((None, 1) vs (None, 10))
When I run my code without trying to encode the y values, it does go through the CNN model; however, it isn't accurate and it doesn't improve. This is my code:
import tensorflow as tf
from sklearn import datasets
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
#from keras.utils.np_utils import to_categorical
X,y = datasets.load_digits(return_X_y = True)
X = X/16
#X = X.reshape(1797,8,8,1)
train_x, test_x, train_y, test_y = train_test_split(X, y)
train_x = train_x.reshape(1347,8,8,1)
#test_x = test_x.reshape()
#train_y = to_categorical(train_y, num_classes = 10)
model = Sequential()
model.add(Conv2D(32, (2, 2), input_shape=( 8, 8, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten()) # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(64))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(train_x, train_y, batch_size=32, epochs=6, validation_split=0.3)
print(train_x[0])
And this gives me the following output:
Epoch 1/6
1/30 [>.............................] - ETA: 13s - loss: 1.1026 - accuracy: 0.0938
6/30 [=====>........................] - ETA: 0s - loss: 0.2949 - accuracy: 0.0652
30/30 [==============================] - 1s 33ms/step - loss: -5.4832 - accuracy: 0.0893 - val_loss: -49.9462 - val_accuracy: 0.1012
Epoch 2/6
1/30 [>.............................] - ETA: 0s - loss: -52.2145 - accuracy: 0.0625
30/30 [==============================] - 0s 3ms/step - loss: -120.6972 - accuracy: 0.0961 - val_loss: -513.0211 - val_accuracy: 0.1012
Epoch 3/6
1/30 [>.............................] - ETA: 0s - loss: -638.2873 - accuracy: 0.1250
30/30 [==============================] - 0s 3ms/step - loss: -968.3621 - accuracy: 0.1006 - val_loss: -2804.1062 - val_accuracy: 0.1012
Epoch 4/6
1/30 [>.............................] - ETA: 0s - loss: -3427.3135 - accuracy: 0.0000e+00
30/30 [==============================] - 0s 3ms/step - loss: -4571.7894 - accuracy: 0.0934 - val_loss: -10332.9727 - val_accuracy: 0.1012
Epoch 5/6
1/30 [>.............................] - ETA: 0s - loss: -12963.2559 - accuracy: 0.0625
30/30 [==============================] - 0s 3ms/step - loss: -15268.3010 - accuracy: 0.0887 - val_loss: -29262.1191 - val_accuracy: 0.1012
Epoch 6/6
1/30 [>.............................] - ETA: 0s - loss: -30990.6758 - accuracy: 0.1562
30/30 [==============================] - 0s 3ms/step - loss: -40321.9540 - accuracy: 0.0960 - val_loss: -68548.6094 - val_accuracy: 0.1012
Any guidance is greatly appreciated, thanks!
When you have a CNN classifier, you want the last layer to have as many nodes as there are labels. So if you have 10 digits, you want the last layer to have an output size of 10. It usually uses the "softmax" activation, which turns the outputs into a probability distribution over the classes: all values lie between 0 and 1 and sum to 1.
model.add(Dense(10))
model.add(Activation('softmax'))
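Putting this together with your code, a minimal sketch of the end of the model might look like the following. I'm assuming here that you keep the integer labels from load_digits (no to_categorical), so the matching loss is 'sparse_categorical_crossentropy'; if you one-hot encode the labels instead, use 'categorical_crossentropy':
model.add(Flatten())  # converts the 3D feature maps to 1D feature vectors
model.add(Dense(64))
model.add(Dense(10))              # one node per digit class
model.add(Activation('softmax'))  # probability distribution over the 10 classes
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])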
I'm trying to make the most basic of basic neural networks to get familiar with the functional API in TensorFlow 2.x.
Basically, what I'm trying to do with my simplified iris dataset (i.e. setosa or not) is the following:
Use the 4 features as input
Dense layer of 3
Sigmoid activation function
Dense layer of 2 (one for each class)
Softmax activation
Binary cross entropy / log-loss as my loss function
However, I can't figure out how to control one key aspect of the model. That is, how can I ensure that each feature from my input layer contributes to only one neuron in my subsequent dense layer? Also, how can I allow a feature to contribute to more than one neuron?
This isn't clear to me from the documentation.
# Load data
from sklearn.datasets import load_iris
import pandas as pd
iris = load_iris()
X, y = load_iris(return_X_y=True, as_frame=True)
X = X.astype("float32")
X.index = X.index.map(lambda i: "iris_{}".format(i))
X.columns = X.columns.map(lambda j: j.split(" (")[0].replace(" ","_"))
y.index = X.index
y = y.map(lambda i:iris.target_names[i])
y_simplified = y.map(lambda i: {True:1, False:0}[i == "setosa"])
y_simplified = pd.get_dummies(y_simplified, columns=["setosa", "not_setosa"])
# Train test split
from sklearn.model_selection import train_test_split
seed=0
X_train,X_test, y_train,y_test= train_test_split(X,y_simplified, test_size=0.3, random_state=seed)
# Simple neural network
import tensorflow as tf
tf.random.set_seed(seed)
# Input[4 features] -> Dense layer of 3 neurons -> Activation function -> Dense layer of 2 (one per class) -> Softmax
inputs = tf.keras.Input(shape=(4))
x = tf.keras.layers.Dense(3)(inputs)
x = tf.keras.layers.Activation(tf.nn.sigmoid)(x)
x = tf.keras.layers.Dense(2)(x)
outputs = tf.keras.layers.Activation(tf.nn.softmax)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs, name="simple_binary_iris")
model.compile(loss="binary_crossentropy", metrics=["accuracy"] )
model.summary()
history = model.fit(X_train, y_train, batch_size=64, epochs=10, validation_split=0.2)
test_scores = model.evaluate(X_test, y_test)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])
Results:
Model: "simple_binary_iris"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_44 (InputLayer) [(None, 4)] 0
_________________________________________________________________
dense_96 (Dense) (None, 3) 15
_________________________________________________________________
activation_70 (Activation) (None, 3) 0
_________________________________________________________________
dense_97 (Dense) (None, 2) 8
_________________________________________________________________
activation_71 (Activation) (None, 2) 0
=================================================================
Total params: 23
Trainable params: 23
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
2/2 [==============================] - 0s 40ms/step - loss: 0.6344 - accuracy: 0.6667 - val_loss: 0.6107 - val_accuracy: 0.7143
Epoch 2/10
2/2 [==============================] - 0s 6ms/step - loss: 0.6302 - accuracy: 0.6667 - val_loss: 0.6083 - val_accuracy: 0.7143
Epoch 3/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6278 - accuracy: 0.6667 - val_loss: 0.6056 - val_accuracy: 0.7143
Epoch 4/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6257 - accuracy: 0.6667 - val_loss: 0.6038 - val_accuracy: 0.7143
Epoch 5/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6239 - accuracy: 0.6667 - val_loss: 0.6014 - val_accuracy: 0.7143
Epoch 6/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6223 - accuracy: 0.6667 - val_loss: 0.6002 - val_accuracy: 0.7143
Epoch 7/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6209 - accuracy: 0.6667 - val_loss: 0.5989 - val_accuracy: 0.7143
Epoch 8/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6195 - accuracy: 0.6667 - val_loss: 0.5967 - val_accuracy: 0.7143
Epoch 9/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6179 - accuracy: 0.6667 - val_loss: 0.5953 - val_accuracy: 0.7143
Epoch 10/10
2/2 [==============================] - 0s 7ms/step - loss: 0.6166 - accuracy: 0.6667 - val_loss: 0.5935 - val_accuracy: 0.7143
2/2 [==============================] - 0s 607us/step - loss: 0.6261 - accuracy: 0.6444
Test loss: 0.6261375546455383
Test accuracy: 0.644444465637207
how can I ensure that each feature from my input layer contributes to
only one neuron in my subsequent dense layer?
Have one input layer per feature and feed each input layer to a separate dense layer. Later you can concatenate the outputs of all the dense layers and proceed.
NOTE: One neuron can take an input of any size (in this case the input size is 1, as you want one feature to be used by the neuron), and its output size is always 1. A Dense layer with n units has n neurons and therefore an output size of n.
Working Sample
import tensorflow as tf
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
# Model architecture
x1 = tf.keras.Input(shape=(1,))
x2 = tf.keras.Input(shape=(1,))
x3 = tf.keras.Input(shape=(1,))
x4 = tf.keras.Input(shape=(1,))
x1_ = tf.keras.layers.Dense(3, activation=tf.nn.relu)(x1)
x2_ = tf.keras.layers.Dense(3, activation=tf.nn.relu)(x2)
x3_ = tf.keras.layers.Dense(3, activation=tf.nn.relu)(x3)
x4_ = tf.keras.layers.Dense(3, activation=tf.nn.relu)(x4)
merged = tf.keras.layers.concatenate([x1_, x2_, x3_, x4_])
merged = tf.keras.layers.Dense(16, activation=tf.nn.relu)(merged)
outputs = tf.keras.layers.Dense(3, activation=tf.nn.softmax)(merged)
model = tf.keras.Model(inputs=[x1,x2,x3,x4], outputs=outputs)
model.compile(loss="sparse_categorical_crossentropy", metrics=["accuracy"] )
# Load and prepare data
iris = load_iris()
X = iris.data
y = iris.target
X_train,X_test, y_train,y_test= train_test_split(X,y, test_size=0.3)
# Fit the model
model.fit([X_train[:,0],X_train[:,1],X_train[:,2],X_train[:,3]], y_train, batch_size=64, epochs=100, validation_split=0.25)
# Evaluate the model
test_scores = model.evaluate([X_test[:,0],X_test[:,1],X_test[:,2],X_test[:,3]], y_test)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])
Output:
Epoch 1/100
2/2 [==============================] - 0s 75ms/step - loss: 1.6446 - accuracy: 0.4359 - val_loss: 1.6809 - val_accuracy: 0.5185
Epoch 2/100
2/2 [==============================] - 0s 10ms/step - loss: 1.4151 - accuracy: 0.6154 - val_loss: 1.4886 - val_accuracy: 0.5556
Epoch 3/100
2/2 [==============================] - 0s 9ms/step - loss: 1.2725 - accuracy: 0.6795 - val_loss: 1.3813 - val_accuracy: 0.5556
Epoch 4/100
2/2 [==============================] - 0s 9ms/step - loss: 1.1829 - accuracy: 0.6795 - val_loss: 1.2779 - val_accuracy: 0.5926
Epoch 5/100
2/2 [==============================] - 0s 10ms/step - loss: 1.0994 - accuracy: 0.6795 - val_loss: 1.1846 - val_accuracy: 0.5926
Epoch 6/100
.................. [ Truncated ]
Epoch 100/100
2/2 [==============================] - 0s 2ms/step - loss: 0.4049 - accuracy: 0.9333
Test loss: 0.40491223335266113
Test accuracy: 0.9333333373069763
Pictorial representation of the above model architecture
Dense layers in Keras/TF are fully connected layers. For example, when you use a Dense layer as follows
inputs = tf.keras.Input(shape=(4))
x = tf.keras.layers.Dense(3)(inputs)
all 4 input neurons are connected to all 3 output neurons.
There isn't any predefined layer in Keras/TF to specify how to connect input and output neurons. However, Keras/TF is very flexible in that it allows you to define your custom layers easily.
Borrowing the idea from this answer, you could define a CustomConnected layer as follows:
class CustomConnected(tf.keras.layers.Dense):

    def __init__(self, units, connections, **kwargs):
        self.connections = connections
        super(CustomConnected, self).__init__(units, **kwargs)

    def call(self, inputs):
        self.kernel = self.kernel * self.connections
        return super(CustomConnected, self).call(inputs)
Using this layer, you can then specify the connections between two layers through the connections argument. For example:
inputs = tf.keras.Input(shape=(4))
connections = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1]])
x = CustomConnected(3, connections)(inputs)
Here, the 1st, 2nd, and 3rd input neurons are connected to the 1st, 2nd, and 3rd output neurons, respectively. Additionally, the 4th input neuron is connected to the 3rd output neuron.
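For completeness, here is an untested sketch of how this could slot into your original functional-API model (the same architecture you posted, with the first Dense layer swapped for CustomConnected; numpy is assumed to be imported as np):
inputs = tf.keras.Input(shape=(4,))
connections = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1]])  # (input_dim, units) mask
x = CustomConnected(3, connections)(inputs)
x = tf.keras.layers.Activation(tf.nn.sigmoid)(x)
x = tf.keras.layers.Dense(2)(x)
outputs = tf.keras.layers.Activation(tf.nn.softmax)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs, name="simple_binary_iris")
model.compile(loss="binary_crossentropy", metrics=["accuracy"])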
UPDATE: As discussed in the comments section, an adaptive approach (e.g. by using only the maximum weight for each output neuron) is also possible but not recommended. You could implement this via the following layer:
class CustomSparse(tf.keras.layers.Dense):

    def __init__(self, units, **kwargs):
        super(CustomSparse, self).__init__(units, **kwargs)

    def call(self, inputs):
        nb_in, nb_out = self.kernel.shape
        argmax = tf.argmax(self.kernel, axis=0)  # Shape=(nb_out,)
        argmax_onehot = tf.transpose(tf.one_hot(argmax, depth=nb_in))  # Shape=(nb_in, nb_out)
        kernel_max = self.kernel * argmax_onehot
        # tf.print(kernel_max)  # Uncomment this line to print the weights
        out = tf.matmul(inputs, kernel_max)
        if self.bias is not None:
            out += self.bias
        if self.activation is not None:
            out = self.activation(out)
        return out
The main issue of this approach is that you cannot propagate gradients through the argmax operation required to select the maximum weight. As a result, the network will only "switch input neurons" when the selected weight is no longer the maximum weight.
Well, I'm new to machine learning, and so to Keras. I'm trying to create a model to which I can pass as input a list of arrays of arrays (a list of 6400 arrays within 2 arrays).
This is the problematic part of my code:
XFIT = np.array([x_train, XX_train])
YFIT = np.array([y_train, yy_train])
Inputs = keras.layers.Input(shape=(6400, 2))
hidden1 = keras.layers.Dense(units=100, activation="sigmoid")(Inputs)
hidden2 = keras.layers.Dense(units=100, activation='relu')(hidden1)
predictions = keras.layers.Dense(units=3, activation='softmax')(hidden2)
model = keras.Model(inputs=Inputs, outputs=predictions)
There's no error; however, the Input layer (Inputs) forces me to pass a (6400, 2) shape, since each array (x_train and XX_train) has 6400 arrays inside. The result, after the epochs are done, is this:
Train on 2 samples
Epoch 1/5
2/2 [==============================] - 1s 353ms/sample - loss: 1.1966 - accuracy: 0.2488
Epoch 2/5
2/2 [==============================] - 0s 9ms/sample - loss: 1.1303 - accuracy: 0.2544
Epoch 3/5
2/2 [==============================] - 0s 9ms/sample - loss: 1.0982 - accuracy: 0.3745
Epoch 4/5
2/2 [==============================] - 0s 9ms/sample - loss: 1.0854 - accuracy: 0.3745
Epoch 5/5
2/2 [==============================] - 0s 9ms/sample - loss: 1.0835 - accuracy: 0.3745
Process finished with exit code 0
I can't train on more than two samples in each epoch because of the input shape. How can I change this input?
I have tried other shapes, but they give me errors.
x_train and XX_train look like this:
[[[0.505834 0.795461]
[0.843175 0.975741]
[0.22349 0.035036]
...
[0.884796 0.867509]
[0.396942 0.659936]
[0.873194 0.05454 ]]
[[0.95968 0.281957]
[0.137547 0.390005]
[0.635382 0.901555]
...
[0.887062 0.486206]
[0.49827 0.949123]
[0.034411 0.983711]]]
Thank you, and forgive me if I've committed any fault; first time with Keras and first time on Stack Overflow :D
You are almost there. The problem is with:
XFIT = np.array([x_train, XX_train])
YFIT = np.array([y_train, yy_train])
Let's see with an example:
import numpy as np
x_train = np.random.random((6400, 2))
y_train = np.random.randint(2, size=(6400,1))
xx_train = np.array([x_train, x_train])
yy_train = np.array([y_train, y_train])
print(xx_train.shape)
(2, 6400, 2)
print(yy_train.shape)
(2, 6400, 1)
In this array we have 2 stacked blocks of 6400 samples each, so when we call model.fit, Keras sees only 2 samples to train on. Instead, what we can do is stack them along the sample axis:
xx_train = np.vstack([x_train, x_train])
yy_train = np.vstack([y_train, y_train])
print(xx_train.shape)
(12800, 2)
print(yy_train.shape)
(12800, 1)
Now we have correctly joined both sets of samples and can train:
Inputs = Input(shape=(2, ))
hidden1 = Dense(units=100, activation="sigmoid")(Inputs)
hidden2 = Dense(units=100, activation='relu')(hidden1)
predictions = Dense(units=1, activation='sigmoid')(hidden2)
model = Model([Inputs], outputs=predictions)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(xx_train, yy_train, batch_size=10, epochs=5)
Train on 12800 samples
Epoch 1/5
12800/12800 [==============================] - 3s 216us/sample - loss: 0.6978 - acc: 0.5047
Epoch 2/5
12800/12800 [==============================] - 2s 186us/sample - loss: 0.6952 - acc: 0.5018
Epoch 3/5
12800/12800 [==============================] - 3s 196us/sample - loss: 0.6942 - acc: 0.4962
Epoch 4/5
12800/12800 [==============================] - 3s 217us/sample - loss: 0.6938 - acc: 0.4898
Epoch 5/5
12800/12800 [==============================] - 3s 217us/sample - loss: 0.6933 - acc: 0.5002