I'm building a model (from this dataset: https://www.kaggle.com/karangadiya/fifa19) that predicts a player's "overall" rating from some of his statistics. I've done the required data cleaning, but training reports loss: nan - accuracy: 0.0000e+00.
Below is my data-cleaning code:
%matplotlib notebook
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow import keras
from sklearn.model_selection import train_test_split
data = pd.read_csv("./data.csv")
data.dropna().replace(np.nan, 0)
data.columns = [x.lower() for x in data.columns]
data = data.select_dtypes(include="number").drop(columns=["id", "unnamed: 0"])
y = data["overall"]
X = data[["crossing", "headingaccuracy", "standingtackle", "gkreflexes"]]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=45)
X_train = (X_train - np.max(X_train))/(np.max(X_train) - np.min(X_train))
y_train = np.array(y_train, dtype = 'float32')
And here's the model code:
model = keras.models.Sequential([
    keras.layers.Dense(4, input_shape=(4,)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.relu)
])
model.compile(optimizer="sgd", loss="mean_squared_error", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5)
This was the output during training:
Epoch 1/5
427/427 [==============================] - 1s 2ms/step - loss: nan - accuracy: 0.0000e+00
Epoch 2/5
427/427 [==============================] - 1s 1ms/step - loss: nan - accuracy: 0.0000e+00
Epoch 3/5
427/427 [==============================] - 1s 1ms/step - loss: nan - accuracy: 0.0000e+00
Epoch 4/5
427/427 [==============================] - 1s 2ms/step - loss: nan - accuracy: 0.0000e+00
Epoch 5/5
427/427 [==============================] - 1s 2ms/step - loss: nan - accuracy: 0.0000e+00
<tensorflow.python.keras.callbacks.History at 0x13b16c6e280>
Thank you for your help!
Related
I have made a model that tries to predict the probability of each piano key playing in a time step, given all previous time steps. I built a GRU network with 88 outputs (one for every piano key).
input shape = (600, 88)
desired output / label shape = (88,)
import numpy as np
import midi_processer
from keras import models
from keras import layers
x_train, x_test = np.load("samples.npy", mmap_mode='r'), np.load("test_samples.npy", mmap_mode='r')
y_train, y_test = np.load("labels.npy", mmap_mode='r'), np.load("test_labels.npy", mmap_mode='r')
def build_model():
    model = models.Sequential()
    model.add(layers.Input(shape=(600, 88)))
    model.add(layers.GRU(512, activation='tanh', recurrent_activation='hard_sigmoid'))
    model.add(layers.RepeatVector(600))
    model.add(layers.GRU(512, activation='tanh', recurrent_activation='hard_sigmoid'))
    model.add(layers.Dense(88, activation='sigmoid'))
    return model
x_partial, x_val = x_train[:13000], x_train[13000:]
y_partial, y_val = y_train[:13000], y_train[13000:]
model = build_model()
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
history = model.fit(x_partial, y_partial, batch_size=50, epochs=15, validation_data=(x_val, y_val))
Instead of learning normally, my model stayed at a constant accuracy throughout the epochs, and the loss kept becoming more negative:
Epoch 1/15
260/260 [==============================] - 998s 4s/step - loss: -0.1851 - accuracy: 0.0298 - val_loss: -8.8735 - val_accuracy: 0.0310
Epoch 2/15
260/260 [==============================] - 827s 3s/step - loss: -33.6520 - accuracy: 0.0382 - val_loss: -56.0122 - val_accuracy: 0.0310
Epoch 3/15
260/260 [==============================] - 844s 3s/step - loss: -78.6130 - accuracy: 0.0382 - val_loss: -98.2798 - val_accuracy: 0.0310
Epoch 4/15
260/260 [==============================] - 906s 3s/step - loss: -121.0963 - accuracy: 0.0382 - val_loss: -139.3440 - val_accuracy: 0.0310
Epoch 5/15
I would like to retrain a tensorflow.keras model, so I tried to compare both approaches:
Code 1:
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
optimizer = tf.keras.optimizers.Adam(lr=1e-2)
model = keras.models.Sequential([keras.layers.Dense(1, input_shape=[8])])
model.compile(loss="mse", optimizer=optimizer)
model.fit(X_train_scaled, y_train, epochs=3)
model.fit(X_train_scaled, y_train, epochs=3)
Code 2:
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
optimizer = tf.keras.optimizers.Adam(lr=1e-2)
model = keras.models.Sequential([keras.layers.Dense(1, input_shape=[8])])
model.compile(loss="mse", optimizer=optimizer)
model.fit(X_train_scaled, y_train, epochs=6)
In code 1, I get:
Epoch 1/3
363/363 [==============================] - 0s 330us/step - loss: 1.8888
Epoch 2/3
363/363 [==============================] - 0s 342us/step - loss: 0.5772
Epoch 3/3
363/363 [==============================] - 0s 397us/step - loss: 0.5508
Epoch 1/3
363/363 [==============================] - 0s 470us/step - loss: 0.5428
Epoch 2/3
363/363 [==============================] - 0s 486us/step - loss: 0.5338
Epoch 3/3
363/363 [==============================] - 0s 479us/step - loss: 0.5519
In code 2, I get:
Epoch 1/6
363/363 [==============================] - 0s 332us/step - loss: 1.8888
Epoch 2/6
363/363 [==============================] - 0s 322us/step - loss: 0.5772
Epoch 3/6
363/363 [==============================] - 0s 333us/step - loss: 0.5508
Epoch 4/6
363/363 [==============================] - 0s 331us/step - loss: 0.5413
Epoch 5/6
363/363 [==============================] - 0s 371us/step - loss: 0.5440
Epoch 6/6
363/363 [==============================] - 0s 356us/step - loss: 0.5318
If you want to check on your own computer, I use this dataset:
import tensorflow as tf
from tensorflow import keras
import numpy as np
import os
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
housing = fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(
    housing.data, housing.target.reshape(-1, 1), random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(
    X_train_full, y_train_full, random_state=42)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_valid_scaled = scaler.transform(X_valid)
X_test_scaled = scaler.transform(X_test)
I compared both ways of training a model:
first, running 3 epochs and then another 3 epochs
second, running 6 epochs in a single call
I don't understand why they don't produce the same result, and it's unclear where the problem is. I use TensorFlow 2.2.2 (2.3 gives the same results).
I train a model on a dataset and everything is fine. Sometimes changes occur and I get new data. I'd like to continue the training from my existing model with the new set of data, without beginning from scratch again.
Here is a simple example to show the problem and where I'm stuck.
import tensorflow as tf
mnist = tf.keras.datasets.mnist
# get the data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# split in 2 parts to get 2 sets of data
x_train1 = x_train[:5000]
x_train2 = x_train[5000:10000]
y_test1 = y_test[:5000]
y_test2 = y_test[5000:]
# set the model
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Compile the model
model.compile(optimizer='adam',
              loss=loss_fn,
              metrics=['accuracy'])
# set the callback
checkpoint_path = "CHECKPOINTS/cp.ckpt"
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, monitor='accuracy', mode="max",
                                                 save_best_only=False, save_weights_only=False,
                                                 save_freq="epoch", verbose=0)
Then I fit my first set of data:
# fit the 1st dataset
model.fit(x_train1, y_test1, epochs=5,callbacks=[cp_callback],verbose=1)
Output:
Epoch 1/5
157/157 [==============================] - 0s 897us/step - loss: 2.3518 - accuracy: 0.1075
Epoch 2/5
157/157 [==============================] - 0s 752us/step - loss: 2.2833 - accuracy: 0.1438
Epoch 3/5
157/157 [==============================] - 0s 731us/step - loss: 2.2656 - accuracy: 0.1564
Epoch 4/5
157/157 [==============================] - 0s 755us/step - loss: 2.2388 - accuracy: 0.1719
Epoch 5/5
157/157 [==============================] - 0s 759us/step - loss: 2.2117 - accuracy: 0.1901
Then my 2nd one:
# fit the 2nd one
model.fit(x_train2, y_test2, epochs=5,callbacks=[cp_callback],verbose=1)
Output:
Epoch 1/5
157/157 [==============================] - 0s 943us/step - loss: 2.3240 - accuracy: 0.0964
Epoch 2/5
157/157 [==============================] - 0s 778us/step - loss: 2.2881 - accuracy: 0.1238
Epoch 3/5
157/157 [==============================] - 0s 805us/step - loss: 2.2688 - accuracy: 0.1514
Epoch 4/5
157/157 [==============================] - 0s 814us/step - loss: 2.2498 - accuracy: 0.1496
Epoch 5/5
157/157 [==============================] - 0s 1ms/step - loss: 2.2289 - accuracy: 0.1704
As you can see, training begins from zero again, without keeping the progress from the earlier fit.
How can I do that?
Thanks in advance.
I'm trying to build a simple Keras neural network, but the model doesn't fit:
Train on 562 samples, validate on 188 samples
Epoch 1/20
562/562 [==============================] - 1s 1ms/step - loss: 8.1130 - acc: 0.4911 - val_loss: 7.6320 - val_acc: 0.5213
Epoch 2/20
562/562 [==============================] - 0s 298us/step - loss: 8.1130 - acc: 0.4911 - val_loss: 7.6320 - val_acc: 0.5213
Epoch 3/20
562/562 [==============================] - 0s 295us/step - loss: 8.1130 - acc: 0.4911 - val_loss: 7.6320 - val_acc: 0.5213
Epoch 4/20
562/562 [==============================] - 0s 282us/step - loss: 8.1130 - acc: 0.4911 - val_loss: 7.6320 - val_acc: 0.5213
Epoch 5/20
562/562 [==============================] - 0s 289us/step - loss: 8.1130 - acc: 0.4911 - val_loss: 7.6320 - val_acc: 0.5213
Epoch 6/20
562/562 [==============================] - 0s 265us/step - loss: 8.1130 - acc: 0.4911 - val_loss: 7.6320 - val_acc: 0.5213
The database is structured in a CSV file like this:
doc  venda   img1    img2   v1                   v2                   gt
RG   venda1  img123  img12  [3399, 162675, ...]  [3399, 162675, ...]  1
My intent is to use the difference between the v1 and v2 vectors to tell me whether img1 and img2 belong to the same class.
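For reference, the difference feature can be computed roughly like this (a simplified sketch; diff_feature is a hypothetical helper, and the real CSV parsing is more involved):
import numpy as np

def diff_feature(v1, v2):
    # Element-wise difference of the two descriptor vectors; this
    # 10-dimensional vector is the network input (input_dim=10)
    return np.abs(np.asarray(v1, dtype="float32") - np.asarray(v2, dtype="float32"))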
The code:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import train_test_split
(X_train, X_test, Y_train, Y_test) = train_test_split(train, train_labels, test_size=0.25, random_state=42)
# create the model
model = Sequential()
model.add(Dense(10, activation="relu", input_dim=10, kernel_initializer="uniform"))
model.add(Dense(6, activation="relu", kernel_initializer="uniform"))
model.add(Dense(1, activation='sigmoid'))
print(model.summary())
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
model.fit(
    np.array(X_train),
    np.array(Y_train),
    shuffle=True,
    epochs=20,
    verbose=1,
    batch_size=5,
    validation_data=(np.array(X_test), np.array(Y_test)),
)
What am I doing wrong?
Divide the difference vector by some constant so that the feature vector is in the range 0 to 1 (or -1 to 1). Right now the values are too large, which is why the loss comes out so high; the network learns faster when the data is properly normalized.
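For example, here is a minimal min-max scaling sketch (assuming X_train and X_test hold the difference-vector features; the variable names follow the question):
import numpy as np

# Learn the per-column ranges on the train set and reuse them on the
# test set, so both are scaled identically
X_train = np.asarray(X_train, dtype="float32")
X_test = np.asarray(X_test, dtype="float32")
col_min = X_train.min(axis=0)
col_max = X_train.max(axis=0)
denom = (col_max - col_min) + 1e-10  # guard against constant columns
X_train = (X_train - col_min) / denom
X_test = (X_test - col_min) / denom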
I have had success normalizing features with the function below. I forget exactly why I use the same mu and sigma from the train set on the test and validation sets, but I am pretty sure I learned it during the deep.ai course on Coursera; the point is to avoid data leakage and keep all splits on the same scale, so the statistics must come from the training data only.
import numpy as np

def normalize_features(dataset):
    mu = np.mean(dataset, axis=0)    # per-column mean
    sigma = np.std(dataset, axis=0)  # per-column standard deviation
    norm_parameters = {'mu': mu,
                       'sigma': sigma}
    return (dataset - mu) / (sigma + 1e-10), norm_parameters
# Normalize X data, reusing the train-set mu and sigma on val and test
x_train, norm_parameters = normalize_features(x_train)
x_val = (x_val - norm_parameters['mu']) / (norm_parameters['sigma'] + 1e-10)
x_test = (x_test - norm_parameters['mu']) / (norm_parameters['sigma'] + 1e-10)
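For what it's worth, scikit-learn's StandardScaler does the same job without a hand-rolled function (assuming x_train, x_val, and x_test are 2-D arrays):
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)  # learns mu and sigma from the train set only
x_val = scaler.transform(x_val)          # reuses the train-set mu and sigma
x_test = scaler.transform(x_test)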
If I switch this to Python 2.x, it performs 10 epochs. Why is that?
Training a logistic regression model
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import SGD
from sklearn.model_selection import train_test_split, cross_val_score
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    test_size=0.3,
                                                    random_state=42)
# NOTE: If I run this in Python 3.x, it only performs 1 Epoch
K.clear_session()
model = Sequential()
model.add(Dense(1, input_shape=(4,), activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])
# Saved the result of the fitting, to display the history as a data frame & see how the model does
history = model.fit(X_train, y_train)
result = model.evaluate(X_test, y_test)
Output:
Epoch 1/10
960/960 [==============================] - 0s - loss: 0.7943 - acc: 0.5219
Epoch 2/10
960/960 [==============================] - 0s - loss: 0.7338 - acc: 0.5469
Epoch 3/10
960/960 [==============================] - 0s - loss: 0.6847 - acc: 0.5688
Epoch 4/10
960/960 [==============================] - 0s - loss: 0.6446 - acc: 0.6177
Epoch 5/10
960/960 [==============================] - 0s - loss: 0.6113 - acc: 0.6719
Epoch 6/10
960/960 [==============================] - 0s - loss: 0.5832 - acc: 0.7000
Epoch 7/10
960/960 [==============================] - 0s - loss: 0.5591 - acc: 0.7177
Epoch 8/10
960/960 [==============================] - 0s - loss: 0.5381 - acc: 0.7365
Epoch 9/10
960/960 [==============================] - 0s - loss: 0.5196 - acc: 0.7542
Epoch 10/10
960/960 [==============================] - 0s - loss: 0.5031 - acc: 0.7688
32/412 [=>............................] - ETA: 0s
The fit function has a parameter epochs with a default value of 1:
fit(self, x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None,
    validation_split=0.0, validation_data=None, shuffle=True, class_weight=None,
    sample_weight=None, initial_epoch=0, steps_per_epoch=None,
    validation_steps=None)
However, the default used to be 10. See, for example, the changes to fit in models.py in this commit. You most likely have an older version of Keras with Python 2.
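Whatever your version, the portable fix is to pass epochs explicitly instead of relying on the default:
# Explicit epochs behaves the same on old and new Keras versions
history = model.fit(X_train, y_train, epochs=10)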