I use the neuralfit package to evolve a neural network, but I am not sure how to suppress the printed output entirely. I would simply like to plot the history after training. I currently have:
import neuralfit
import numpy as np
x = np.asarray([[0],[1]])
y = np.asarray([[1],[0]])
model = neuralfit.Model(1,1)
model.compile('alpha', loss='mse')
model.evolve(x,y)
But it prints:
...
Epoch 96/100 - 1/1 [==============================] - 3ms 1ms/step - loss: 0.000000
Epoch 97/100 - 1/1 [==============================] - 3ms 1ms/step - loss: 0.000000
Epoch 98/100 - 1/1 [==============================] - 4ms 2ms/step - loss: 0.000000
Epoch 99/100 - 1/1 [==============================] - 3ms 1ms/step - loss: 0.000000
Epoch 100/100 - 1/1 [==============================] - 4ms 2ms/step - loss: 0.000000
According to the NeuralFit documentation for model.evolve(), you can use the verbose parameter to silence the output:
model.evolve(x,y,verbose=0)
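If your installed version does not support verbose (or a library prints from somewhere a flag does not reach), a generic fallback is to redirect stdout while training runs. This is a minimal standard-library sketch; the print call below merely stands in for the model.evolve(x, y) call:

```python
import contextlib
import io

# Capture anything written to stdout during the wrapped call.
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    # Stand-in for model.evolve(x, y); its progress lines would land in `buffer`.
    print("Epoch 1/100 - 1/1 [====] - loss: 0.000000")

# The console stays quiet; the captured log is still available if needed.
captured = buffer.getvalue()
```

Note this only silences Python-level printing; output written directly by C extensions would need file-descriptor-level redirection instead.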
from sklearn.utils import class_weight
no_classes = 2
def run_experiment(model):
    optimizer = tfa.optimizers.AdamW(
        learning_rate=learning_rate, weight_decay=weight_decay
    )
    model.compile(
        optimizer=optimizer,
        loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=[
            keras.metrics.SparseCategoricalCrossentropy(name="accuracy"),
            keras.metrics.SparseTopKCategoricalAccuracy(5, name="top-5-accuracy"),
        ],
    )
    !mkdir "/content/CV_Checkpoints"
    checkpoint_filepath = "/content/CV_Checkpoints/"
    checkpoint_callback = keras.callbacks.ModelCheckpoint(
        checkpoint_filepath,
        monitor="val_accuracy",
        save_best_only=True,
        save_weights_only=True,
    )
    history = model.fit(
        x=x_train,
        y=y_train,
        batch_size=batch_size,
        epochs=num_epochs,
        validation_split=0.1,
        callbacks=[checkpoint_callback],
    )
    model.load_weights(checkpoint_filepath)
    _, accuracy, top_5_accuracy = model.evaluate(x_test, y_test)
    print(f"Test accuracy: {round(accuracy * 100, 2)}%")
    print(f"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%")
    predicted_label.append(np.argmax(model.predict(x_test), axis=-1))
    return history

vit_classifier = create_vit_classifier()
history = run_experiment(vit_classifier)
Output:
Epoch 1/15
1/1 [==============================] - 12s 12s/step - loss: 1.2682 - accuracy: 3.8050 - top-5-accuracy: 1.0000 - val_loss: 0.0278 - val_accuracy: 1.1921e-07 - val_top-5-accuracy: 1.0000
Epoch 2/15
1/1 [==============================] - 1s 1s/step - loss: 3.5516 - accuracy: 6.5027 - top-5-accuracy: 1.0000 - val_loss: 37.0684 - val_accuracy: 16.1181 - val_top-5-accuracy: 1.0000
Epoch 3/15
1/1 [==============================] - 0s 190ms/step - loss: 11.7986 - accuracy: 4.5956 - top-5-accuracy: 1.0000 - val_loss: 10.2448 - val_accuracy: 16.1181 - val_top-5-accuracy: 1.0000
Epoch 4/15
1/1 [==============================] - 0s 177ms/step - loss: 4.1671 - accuracy: 4.6267 - top-5-accuracy: 1.0000 - val_loss: 0.9128 - val_accuracy: 1.8679 - val_top-5-accuracy: 1.0000
Epoch 5/15
1/1 [==============================] - 0s 170ms/step - loss: 8.1745 - accuracy: 10.4437 - top-5-accuracy: 1.0000 - val_loss: 8.3098 - val_accuracy: 10.8216 - val_top-5-accuracy: 1.0000
Epoch 6/15
1/1 [==============================] - 0s 170ms/step - loss: 3.4625 - accuracy: 3.8120 - top-5-accuracy: 1.0000 - val_loss: 5.8123 - val_accuracy: 12.3951 - val_top-5-accuracy: 1.0000
Epoch 7/15
1/1 [==============================] - 0s 179ms/step - loss: 3.3535 - accuracy: 4.2599 - top-5-accuracy: 1.0000 - val_loss: 1.6153 - val_accuracy: 5.3999 - val_top-5-accuracy: 1.0000
Epoch 8/15
1/1 [==============================] - 0s 164ms/step - loss: 1.6320 - accuracy: 2.2927 - top-5-accuracy: 1.0000 - val_loss: 0.0629 - val_accuracy: 0.0770 - val_top-5-accuracy: 1.0000
Epoch 9/15
1/1 [==============================] - 0s 165ms/step - loss: 1.8593 - accuracy: 3.0573 - top-5-accuracy: 1.0000 - val_loss: 4.1515 - val_accuracy: 12.6133 - val_top-5-accuracy: 1.0000
Epoch 10/15
1/1 [==============================] - 0s 224ms/step - loss: 2.0062 - accuracy: 3.6780 - top-5-accuracy: 1.0000 - val_loss: 2.5899 - val_accuracy: 9.3397 - val_top-5-accuracy: 1.0000
Epoch 11/15
1/1 [==============================] - 0s 250ms/step - loss: 1.9134 - accuracy: 3.9371 - top-5-accuracy: 1.0000 - val_loss: 0.2281 - val_accuracy: 1.8226 - val_top-5-accuracy: 1.0000
Epoch 12/15
1/1 [==============================] - 0s 239ms/step - loss: 1.5485 - accuracy: 2.5761 - top-5-accuracy: 1.0000 - val_loss: 0.0980 - val_accuracy: 0.0584 - val_top-5-accuracy: 1.0000
Epoch 13/15
1/1 [==============================] - 0s 172ms/step - loss: 1.4785 - accuracy: 2.1410 - top-5-accuracy: 1.0000 - val_loss: 1.4698 - val_accuracy: 1.0202 - val_top-5-accuracy: 1.0000
Epoch 14/15
1/1 [==============================] - 0s 167ms/step - loss: 1.3027 - accuracy: 3.4242 - top-5-accuracy: 1.0000 - val_loss: 2.3393 - val_accuracy: 7.6722 - val_top-5-accuracy: 1.0000
Epoch 15/15
1/1 [==============================] - 0s 165ms/step - loss: 1.5553 - accuracy: 3.1357 - top-5-accuracy: 1.0000 - val_loss: 0.6237 - val_accuracy: 0.5839 - val_top-5-accuracy: 1.0000
3/3 [==============================] - 0s 24ms/step - loss: 13.3386 - accuracy: 5.7951 - top-5-accuracy: 1.0000
Test accuracy: 579.51%
Test top 5 accuracy: 100.0%
I am trying to implement an artificial neural network in Python using Keras. The problem I am facing is that my model returns 'loss: nan' for every epoch. I want to mention that the dataset I used from the CSV file has a column with some missing values. Is this 'nan' due to the missing data, and is there any way to get a numerical loss value instead of 'nan'?
Following is my code:
# Importing the Keras libraries and packages
import keras
from keras.models import Sequential
from keras.layers import Dense
# Initialising the ANN
classifier = Sequential()
# Adding the input layer and the first hidden layer
classifier.add(Dense(6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 7))
# Adding the second hidden layer
classifier.add(Dense(6, kernel_initializer = 'uniform', activation = 'relu'))
# Adding the output layer
classifier.add(Dense(1, kernel_initializer = 'uniform', activation = 'sigmoid'))
# Compiling the ANN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Fitting the ANN to the Training set
classifier.fit(X_train, y_train, batch_size = 10, epochs = 100)
# Part 3 - Making the predictions and evaluating the model
# Predicting the Test set results
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
Following is the output I got:
Epoch 1/100
72/72 [==============================] - 1s 1ms/step - loss: nan - accuracy: 0.6299
Epoch 2/100
72/72 [==============================] - 0s 1ms/step - loss: nan - accuracy: 0.6133
Epoch 3/100
72/72 [==============================] - 0s 1ms/step - loss: nan - accuracy: 0.5996
...
Epoch 100/100
72/72 [==============================] - 0s 1ms/step - loss: nan - accuracy: 0.6027
[[110 0]
[ 69 0]]
The values of X_train are as follows:
(screenshot of X_train values, not reproduced here)
As discussed in the comments section, the issue is that your input dataset X_train contains NaNs. Since any mathematical operation involving a NaN value results in NaN (and the loss function directly depends on X_train), your loss also ends up being NaN.
To overcome this issue, you can impute the missing values. For example, replacing NaNs with 0 is a common way to tackle missing values (though not necessarily the best). Another typical choice is to impute NaNs with the mean or median value of the corresponding feature. In any case, you can always see what works best via the validation loss.
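As a minimal sketch of mean imputation with plain NumPy (the array values here are made up for illustration; sklearn.impute.SimpleImputer offers the same strategies behind a fit/transform API):

```python
import numpy as np

# Toy feature matrix with missing entries, standing in for the real X_train.
X_train = np.array([[1.0, np.nan],
                    [3.0, 4.0],
                    [np.nan, 8.0]])

# Column-wise mean imputation: replace each NaN with its feature's mean,
# computed while ignoring the NaNs themselves.
col_means = np.nanmean(X_train, axis=0)
nan_rows, nan_cols = np.where(np.isnan(X_train))
X_train[nan_rows, nan_cols] = col_means[nan_cols]

print(X_train)  # no NaNs remain, so the loss can become a real number
```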
I am running the same lines of code with the same source files on both Google App Engine and a Jupyter notebook:
model = load_model("test.h5")
model.compile(optimizer=Adam(lr=1e-2, decay=0), loss="binary_crossentropy", metrics=['accuracy'])
with open("data.json", 'r') as f:
    data = json.load(f)
X = data[0]
y = data[1]
history = model.fit(X, y, validation_split=0, epochs=50, batch_size=10)
The output of GAE is as follows:
Epoch 1/50
2/2 [==============================] - 1s 316ms/step - loss: 8.0590 - acc: 0.5000
Epoch 2/50
2/2 [==============================] - 0s 50ms/step - loss: 8.0590 - acc: 0.5000
Epoch 3/50
2/2 [==============================] - 0s 40ms/step - loss: 8.0590 - acc: 0.5000
Epoch 4/50
2/2 [==============================] - 0s 37ms/step - loss: 8.0590 - acc: 0.5000
Epoch 5/50
2/2 [==============================] - 0s 34ms/step - loss: 8.0590 - acc: 0.5000
Epoch 6/50
2/2 [==============================] - 0s 40ms/step - loss: 8.0590 - acc: 0.5000
Epoch 7/50
2/2 [==============================] - 0s 44ms/step - loss: 8.0590 - acc: 0.5000
Epoch 8/50
2/2 [==============================] - 0s 40ms/step - loss: 8.0590 - acc: 0.5000
Epoch 9/50
2/2 [==============================] - 0s 31ms/step - loss: 8.0590 - acc: 0.5000
Epoch 10/50
2/2 [==============================] - 0s 40ms/step - loss: 8.0590 - acc: 0.5000
...
Epoch 50/50
2/2 [==============================] - 0s 45ms/step - loss: 8.0590 - acc: 0.5000
Whereas the Jupyter notebook output is:
Epoch 1/50
2/2 [==============================] - 0s 164ms/step - loss: 952036.8125 - accuracy: 0.5000
Epoch 2/50
2/2 [==============================] - 0s 39ms/step - loss: 393826.0000 - accuracy: 0.5000
Epoch 3/50
2/2 [==============================] - 0s 38ms/step - loss: 99708.9375 - accuracy: 0.5000
Epoch 4/50
2/2 [==============================] - 0s 39ms/step - loss: 8989.7822 - accuracy: 0.5000
Epoch 5/50
2/2 [==============================] - 0s 39ms/step - loss: 8760.8223 - accuracy: 0.5000
Epoch 6/50
2/2 [==============================] - 0s 40ms/step - loss: 3034.8613 - accuracy: 0.5000
Epoch 7/50
2/2 [==============================] - 0s 40ms/step - loss: 167.2695 - accuracy: 0.0000e+00
Epoch 8/50
2/2 [==============================] - 0s 39ms/step - loss: 0.6670 - accuracy: 1.0000
Epoch 9/50
2/2 [==============================] - 0s 41ms/step - loss: 0.6619 - accuracy: 1.0000
Epoch 10/50
2/2 [==============================] - 0s 40ms/step - loss: 0.6551 - accuracy: 1.0000
...
Epoch 50/50
2/2 [==============================] - 0s 42ms/step - loss: 0.3493 - accuracy: 1.0000
Why might this be the case? I'm pretty lost at this point. Both machines have keras==2.2.4 and tensorflow==1.14.0 installed.
So I'm trying to build a word embedding model, but I keep getting this error: during training, the accuracy does not change and the val_loss remains "nan".
The raw shape of the data is
x.shape, y.shape
((94556,), (94556, 2557))
Then I reshape it so:
xr= np.asarray(x).astype('float32').reshape((-1,1))
yr= np.asarray(y).astype('float32').reshape((-1,1))
((94556, 1), (241779692, 1))
Then I run it through my model
model = Sequential()
model.add(Embedding(2557, 64, input_length=150, embeddings_initializer='glorot_uniform'))
model.add(Flatten())
model.add(Reshape((64,), input_shape=(94556, 1)))
model.add(Dense(512, activation='sigmoid'))
model.add(Dense(128, activation='sigmoid'))
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='sigmoid'))
model.add(Dense(1, activation='relu'))
# compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# summarize the model
print(model.summary())
plot_model(model, show_shapes = True, show_layer_names=False)
After training, I get constant accuracy and a val_loss of nan for every epoch:
history=model.fit(xr, yr, epochs=20, batch_size=32, validation_split=3/9)
Epoch 1/20
WARNING:tensorflow:Model was constructed with shape (None, 150) for input Tensor("embedding_6_input:0", shape=(None, 150), dtype=float32), but it was called on an input with incompatible shape (None, 1).
WARNING:tensorflow:Model was constructed with shape (None, 150) for input Tensor("embedding_6_input:0", shape=(None, 150), dtype=float32), but it was called on an input with incompatible shape (None, 1).
1960/1970 [============================>.] - ETA: 0s - loss: nan - accuracy: 0.9996WARNING:tensorflow:Model was constructed with shape (None, 150) for input Tensor("embedding_6_input:0", shape=(None, 150), dtype=float32), but it was called on an input with incompatible shape (None, 1).
1970/1970 [==============================] - 7s 4ms/step - loss: nan - accuracy: 0.9996 - val_loss: nan - val_accuracy: 0.9996
Epoch 2/20
1970/1970 [==============================] - 7s 4ms/step - loss: nan - accuracy: 0.9996 - val_loss: nan - val_accuracy: 0.9996
...
Epoch 20/20
1970/1970 [==============================] - 7s 4ms/step - loss: nan - accuracy: 0.9996 - val_loss: nan - val_accuracy: 0.9996
I think it has to do with the input/output shape, but I'm not certain. I tried modifying the model in various ways (adding layers, removing layers, different optimizers, different batch sizes) and nothing has worked so far.
OK, so here is what I understood; correct me if I'm wrong:
x contains 94556 integers, each being the index of one out of 2557 words.
y contains 94556 vectors of 2557 integers; each also encodes the index of one word, but this time as a one-hot encoding instead of a categorical (integer) encoding.
Finally, a corresponding pair of words from x and y represents two words that are close by in the original text.
If I am correct so far, then the following runs correctly:
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import *
from tensorflow.keras.models import *
x = np.random.randint(0,2557,94556)
y = np.eye((2557))[np.random.randint(0,2557,94556)]
xr = x.reshape((-1,1))
print("x.shape: {}\nxr.shape:{}\ny.shape: {}".format(x.shape, xr.shape, y.shape))
model = Sequential()
model.add(Embedding(2557, 64, input_length=1, embeddings_initializer='glorot_uniform'))
model.add(Reshape((64,)))
model.add(Dense(512, activation='sigmoid'))
model.add(Dense(2557, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
history=model.fit(xr, y, epochs=20, batch_size=32, validation_split=3/9)
The most important modifications:
The y reshaping was losing the relationship between elements from x and y.
The input_length in the Embedding layer should correspond to the second dimension of xr.
The output of the last layer from the network should be the same dimension as the second dimension of y.
I am actually surprised the code ran without crashing.
Finally, from my research, it seems that people are not training skipgrams like this in practice, but rather they are trying to predict whether a training example is correct (the two words are close by) or not. Maybe this is the reason you came up with an output of dimension one.
Here is a model inspired from https://github.com/PacktPublishing/Deep-Learning-with-Keras/blob/master/Chapter05/keras_skipgram.py :
word_model = Sequential()
word_model.add(Embedding(2557, 64, embeddings_initializer="glorot_uniform", input_length=1))
word_model.add(Reshape((64,)))
context_model = Sequential()
context_model.add(Embedding(2557, 64, embeddings_initializer="glorot_uniform", input_length=1))
context_model.add(Reshape((64,)))
model = Sequential()
model.add(Merge([word_model, context_model], mode="dot", dot_axes=0))
model.add(Dense(1, kernel_initializer="glorot_uniform", activation="sigmoid"))
In that case, you would have 3 vectors, all of the same size (94556, 1) (or probably even longer than 94556, since you might have to generate additional negative samples):
x containing integers from 0 to 2556
y containing integers from 0 to 2556
output containing 0s and 1s, whether each pair from x and y is a negative or a positive example
and the training would look like:
history = model.fit([x, y], output, epochs=20, batch_size=32, validation_split=3/9)
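A rough, purely illustrative sketch of how such (x, y, output) triples could be generated, with one uniformly drawn negative sample per positive pair (the pair values are invented; real pipelines usually draw negatives from a frequency-smoothed distribution and reject accidental true pairs):

```python
import random

random.seed(0)
vocab_size = 2557
# Hypothetical positive skip-gram pairs (target, context) extracted from a corpus.
positive_pairs = [(12, 45), (45, 12), (7, 300)]

x, y, output = [], [], []
for target, context in positive_pairs:
    # Positive example: the two words actually co-occurred.
    x.append(target); y.append(context); output.append(1)
    # Negative example: pair the target with a random vocabulary word.
    x.append(target); y.append(random.randrange(vocab_size)); output.append(0)

print(len(x), len(y), len(output))  # equal lengths, twice the number of positives
```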
I am trying to compute the Jacobian matrix in TensorFlow for the following network, but it didn't work with my neural network!
I found Jacobian matrix code at https://medium.com/unit8-machine-learning-publication/computing-the-jacobian-matrix-of-a-neural-network-in-python-4f162e5db180
Unfortunately, it doesn't work with my network; the error message is "ValueError: Cannot feed value of shape (1, 51000) for Tensor 'dense_1_input:0', which has shape '(?, 6)'"
I think the problem is in the loop inside the jacobian_tensorflow function?
# Importing some Libraries
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
from tqdm import tqdm
import tensorflow as tf
# Simulation of some data
np.random.seed (245)
nobs =10000
# Definition normalverteilte Features
x1= np.random.normal(size=nobs ,scale=1)
x2= np.random.normal(size=nobs ,scale=1)
x3= np.random.normal(size=nobs ,scale=1)
x4= np.random.normal(size=nobs ,scale=1)
x5= np.random.normal(size=nobs ,scale=1)
# Features
X= np.c_[np.ones((nobs ,1)),x1,x2,x3,x4,x5]
y= np.cos(x1) + np.sin(x2) + 2*x3 + x4 + 0.01*x5 + np.random.normal(size=nobs , scale=0.01)
#Learningrate
LR=0.05
# Number of Neurons
Neuron_Out=1
Neuron_Hidden1=64
Neuron_Hidden2=32
#The Activation function
Activate_output='linear' # für letzte Schicht verwende ich linear
Activate_hidden='relu' # unterschied ist Hidden-Layer-Neuronen werden nicht linear transformiert
#The Optimizer
Optimizer= SGD(lr=LR)
# The loss function
loss='mean_squared_error'
# Splitting Data
from sklearn.model_selection import train_test_split
x_train , x_test , y_train , y_test = train_test_split(X, y, test_size =0.15, random_state =77)
## Neural Network
from tensorflow import set_random_seed
set_random_seed (245)
# As in Medium Essa
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
#Initialize the ANN
model_ANN= Sequential()
# Hidden Layer-> hier wird Hidden Layer definiert-> Anzahl der Neuronen hier sind 64, 32
# input ist 6 (also 1,x1,x2,x3,x4,x5)-> one is the first column in X Matrix
model_ANN.add(Dense(Neuron_Hidden1, activation=Activate_hidden, input_shape=(6,), use_bias=True))
model_ANN.add(Dense(Neuron_Hidden2, activation=Activate_hidden, use_bias=True))
#Output Layer-> hier wird Output-Layer defniniert
model_ANN.add(Dense(Neuron_Out, activation=Activate_output,use_bias=True))
model_ANN.summary()
#Fit the model
history_ANN = model_ANN.fit(
    x_train,  # training data
    y_train,  # training targets
    epochs=125)
def jacobian_tensorflow(x):
    jacobian_matrix = []
    for m in range(Neuron_Out):
        # We iterate over the M elements of the output vector
        grad_func = tf.gradients(model_ANN.output[:, m], model_ANN.input)
        gradients = sess.run(grad_func, feed_dict={model_ANN.input: x.reshape((1, x.size))})
        jacobian_matrix.append(gradients[0][0, :])
    return np.array(jacobian_matrix)

#Jacobian matrix computation
def jacobian_tensorflow(x):
    jacobian_matrix = []
    for m in range(Neuron_Out):
        # We iterate over the M elements of the output vector
        grad_func = tf.gradients(model_ANN.output[:, m], model_ANN.input)
        gradients = sess.run(grad_func, feed_dict={model_ANN.input: x.reshape((1, x.size))})
        jacobian_matrix.append(gradients[0][0, :])
    return np.array(jacobian_matrix)
jacobian_tensorflow(x_train)
How I could use Jacobian Computation Function for my Network?
Thanks in Advance
I have modified your code to fix the error and now it's working. There were a few errors: the compile statement was missing, the function was defined twice, and the input was forcibly reshaped for the Dense layer even though its shape was already correct for the feed_dict in the jacobian_tensorflow function. I have added comments in the code marking the changes.
Fixed code:
%tensorflow_version 1.x
# Importing some Libraries
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
from tqdm import tqdm
import tensorflow as tf
# Simulation of some data
np.random.seed (245)
nobs =10000
# Definition normalverteilte Features
x1= np.random.normal(size=nobs ,scale=1)
x2= np.random.normal(size=nobs ,scale=1)
x3= np.random.normal(size=nobs ,scale=1)
x4= np.random.normal(size=nobs ,scale=1)
x5= np.random.normal(size=nobs ,scale=1)
# Features
X= np.c_[np.ones((nobs ,1)),x1,x2,x3,x4,x5]
y= np.cos(x1) + np.sin(x2) + 2*x3 + x4 + 0.01*x5 + np.random.normal(size=nobs , scale=0.01)
#Learningrate
LR=0.05
# Number of Neurons
Neuron_Out=1
Neuron_Hidden1=64
Neuron_Hidden2=32
#The Activation function
Activate_output='linear' # linear activation for the last layer
Activate_hidden='relu' # hidden-layer neurons, by contrast, are transformed non-linearly
#The Optimizer
Optimizer = SGD(lr=LR)
# The loss function
loss='mean_squared_error'
# Splitting Data
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=77)
## Neural Network
from tensorflow import set_random_seed
set_random_seed(245)
# As in the Medium essay
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
#Initialize the ANN
model_ANN= Sequential()
# Hidden layers are defined here -> number of neurons: 64 and 32
# input size is 6 (i.e. 1, x1, x2, x3, x4, x5) -> the ones are the first column of the X matrix
model_ANN.add(Dense(Neuron_Hidden1, activation=Activate_hidden, input_shape=(6,), use_bias=True))
model_ANN.add(Dense(Neuron_Hidden2, activation=Activate_hidden, use_bias=True))
#Output layer is defined here
model_ANN.add(Dense(Neuron_Out, activation=Activate_output,use_bias=True))
model_ANN.summary()
# Added the compile statement
model_ANN.compile(loss=loss, optimizer=Optimizer, metrics=['accuracy'])
#Fit the model
history_ANN = model_ANN.fit(
    x_train,  # training data
    y_train,  # training targets
    epochs=125)
#Jacobian matrix computation
def jacobian_tensorflow(x):
    jacobian_matrix = []
    for m in range(Neuron_Out):
        # Iterate over the M elements of the output vector
        grad_func = tf.gradients(model_ANN.output[:, m], model_ANN.input)
        # Removed x.reshape((1, x.size)): x already has the shape the Dense layer expects
        gradients = sess.run(grad_func, feed_dict={model_ANN.input: x})
        jacobian_matrix.append(gradients[0][0, :])
    return np.array(jacobian_matrix)
jacobian_tensorflow(x_train)
Output -
/tensorflow-1.15.2/python3.6/tensorflow_core/python/client/session.py:1750: UserWarning: An interactive session is already active. This can cause out-of-memory errors in some cases. You must explicitly call `InteractiveSession.close()` to release resources held by the other session(s).
warnings.warn('An interactive session is already active. This can '
Model: "sequential_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_10 (Dense) (None, 64) 448
_________________________________________________________________
dense_11 (Dense) (None, 32) 2080
_________________________________________________________________
dense_12 (Dense) (None, 1) 33
=================================================================
Total params: 2,561
Trainable params: 2,561
Non-trainable params: 0
_________________________________________________________________
Epoch 1/125
8500/8500 [==============================] - 1s 82us/step - loss: 0.1999 - accuracy: 0.0000e+00
Epoch 2/125
8500/8500 [==============================] - 1s 79us/step - loss: 0.0501 - accuracy: 0.0000e+00
...
Epoch 125/125
8500/8500 [==============================] - 1s 80us/step - loss: 0.0018 - accuracy: 0.0000e+00
array([[ 0.6434634 , -0.09752402, 0.8342059 , 1.6331654 , 0.82901144,
-0.00917255]], dtype=float32)
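For anyone on TensorFlow 2, where sessions and `tf.gradients` are gone, the same per-sample Jacobian can be computed eagerly with `tf.GradientTape.batch_jacobian`. A minimal sketch (the stand-in model below only mirrors the shapes of `model_ANN`; its weights are random and untrained):

```python
import numpy as np
import tensorflow as tf

# Stand-in for model_ANN: same shapes (6 inputs, 64/32 hidden units, 1 output)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(6,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(1, activation='linear'),
])

def jacobian_tf2(model, x):
    """Per-sample Jacobian of outputs w.r.t. inputs, shape (batch, n_outputs, n_inputs)."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)  # x is a plain tensor, not a Variable, so it must be watched explicitly
        y = model(x)
    return tape.batch_jacobian(y, x).numpy()

x = np.random.normal(size=(5, 6)).astype(np.float32)
J = jacobian_tf2(model, x)
print(J.shape)  # (5, 1, 6): one 1x6 Jacobian per input row
```

`batch_jacobian` differentiates sample i's outputs only w.r.t. input row i, which matches what the loop over `Neuron_Out` computes per sample, and avoids the much larger pairwise Jacobian across the whole batch.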
Hope this answers your question. Happy Learning.