Currently studying the 'Deep Learning with Python' book by François Chollet. I am very new to this and I am getting the error below despite following his code verbatim. Can anyone interpret the error message, or tell me what needs to be done to solve it? Any help would be greatly appreciated!
from keras.datasets import imdb
import numpy as np
from keras import models
from keras import layers

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

x_train = vectorize_sequences(train_data)
y_train = vectorize_sequences(test_data)
x_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')

model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
Edit: Here is an image of the error message that I am getting:
I tested your code and found that x_test was not defined (and x_train was being overwritten by the label array). I think you meant to vectorize the data as follows; with this code it worked:
x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
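For anyone puzzled why the original snippet fails: it overwrites x_train (the vectorized review matrix) with the 1-D label array and never defines x_test at all, so the final model.evaluate(x_test, y_test) raises a NameError. With the four corrected lines above, a quick sanity check shows the shapes line up as the book intends (IMDB has 25,000 reviews in each split):

print(x_train.shape, y_train.shape)  # (25000, 10000) (25000,)
print(x_test.shape, y_test.shape)    # (25000, 10000) (25000,)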
I am trying to define a sequential model for a program that analyses text reviews using sentiment analysis. However, I am having trouble defining one of the models I am trying to use.
Below is the section of my code (with the relevant imports) I am having problems with:
from sklearn.model_selection import train_test_split
from tensorflow.keras import Model
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense
from tensorflow.keras import Sequential

X_train, X_test, y_train, y_test = train_test_split(x, y, random_state=42)
n_features = X_train.shape[1]

model = Sequential()
model.add(Dense(20, activation='relu', kernel_initializer='he_normal',
                input_shape=(n_features,)))
model.add(Dense(10, activation='tanh', kernel_initializer='he_normal'))
model.add(Dense(8, activation='sigmoid', kernel_initializer='he_normal'))
model.add(Dense(6, activation='softmax'))

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(X_train, y_train, epochs=150, batch_size=32, verbose=0)
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print('Test Accuracy: %.3f' % (acc * 100))
Below is the warning message that I am receiving:
WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: <class 'scipy.sparse.csr.csr_matrix'>, <class 'NoneType'>
WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: <class 'scipy.sparse.csr.csr_matrix'>, <class 'NoneType'>
Test Accuracy: 78.668
As you can see, I am still getting the accuracy for the model, but also the warning. This is the first time I have created something like this, so I am a little confused; any help would be much appreciated. I am programming in Python and using a Jupyter Notebook.
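No fix is shown for this one, but the warning itself points at the likely cause: X_train is a scipy.sparse.csr_matrix (presumably the output of a sparse text vectorizer; the definition of x is not shown, so this is an assumption), and TensorFlow's v2 training loop has no data adapter for it, hence the fallback. A minimal workaround, if the data fits in memory, is to densify the matrices before fitting:

# Assumes X_train/X_test are scipy.sparse.csr_matrix instances,
# e.g. produced by a text vectorizer; .toarray() turns them into
# dense NumPy arrays that Keras's data adapters handle natively.
X_train = X_train.toarray()
X_test = X_test.toarray()

model.fit(X_train, y_train, epochs=150, batch_size=32, verbose=0)
loss, acc = model.evaluate(X_test, y_test, verbose=0)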
I'm currently trying to train a categorical model in Keras and I'm stumped by this error.
This is my code I have so far:
predictors = ["Body Mass (g)", "Flipper Length (mm)", "Culmen Depth (mm)",
              "Culmen Length (mm)", "Stage", "Island", "Region"]
x_train, x_test, y_train, y_test = train_test_split(db[predictors], db["Species"], test_size=.2)

x_train = x_train.to_numpy()
x_test = x_test.to_numpy()
y_train = y_train.to_numpy()
y_test = y_test.to_numpy()

model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(7,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'],
              run_eagerly=True)

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=20,
                    batch_size=512,
                    validation_data=(x_val, y_val))
Note: I have tried both with and without run_eagerly. I'm not sure what else I could be missing.
Tensorflow error: Unexpected result of `train_function` (Empty logs). Please use `Model.compile(..., run_eagerly=True)
Edit:
Here is how partial_x_train and the validation split are defined:
x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = y_train[:1000]
partial_y_train = y_train[1000:]
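One thing that stands out in the snippet (an assumption, since db is not shown: this looks like the Palmer penguins data, where "Species", "Stage", "Island" and "Region" are strings): to_numpy() then yields object-dtype arrays, and categorical_crossentropy expects float one-hot targets whose width matches the softmax layer. A minimal sketch of encoding both sides before the split:

import pandas as pd
from tensorflow.keras.utils import to_categorical

# Hypothetical preprocessing; column names follow the question.
# One-hot encode the string-valued predictors so x is all-numeric.
x = pd.get_dummies(db[predictors]).to_numpy().astype('float32')

# Integer-encode the species names, then one-hot them.
species_codes = db["Species"].astype('category').cat.codes
y = to_categorical(species_codes)  # shape (n_samples, n_classes)

# The output layer should then match y.shape[1] (3 species, not 46),
# and input_shape should match x.shape[1] rather than the hard-coded 7.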
I believe I have correctly vectorized the train and test data and the labels, and that I have adequate layers and a suitable optimizer, but I cannot work out what is wrong. Why am I getting a ValueError for incompatible shapes?
My code:
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)

def to_one_hot(labels, dimension=46):
    results = np.zeros((len(labels), dimension))
    for i, label in enumerate(labels):
        results[i, label] = 1.
    return results

one_hot_train_labels = to_one_hot(train_labels)
one_hot_test_labels = to_one_hot(test_labels)

from tensorflow.keras.utils import to_categorical
one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels)

from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = one_hot_train_labels[:1000]
partial_y_train = one_hot_train_labels[1000:]

history = model.fit(partial_x_train, partial_y_train, epochs=20, batch_size=512,
                    validation_data=(x_val, y_val))
My error message:
ValueError: Shapes (None, 1) and (None, 46) are incompatible
According to the comment, if your partial_y_train shape is (24000, 1), then it has not been one-hot encoded correctly. You are using the function to_one_hot() in your code, but I don't know what that function is doing internally. Using TensorFlow's one_hot function or scikit-learn's version would be best; then the shapes should match and the error should go away.
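For illustration, a minimal sketch of both options mentioned above, assuming integer class labels in the range 0-45:

import numpy as np
import tensorflow as tf
from sklearn.preprocessing import OneHotEncoder

labels = np.array([3, 0, 45, 12])  # hypothetical integer labels

# Option 1: TensorFlow's one_hot; depth must equal the softmax width.
one_hot_tf = tf.one_hot(labels, depth=46)  # shape (4, 46)

# Option 2: scikit-learn's OneHotEncoder; expects a 2-D column.
enc = OneHotEncoder(categories=[np.arange(46)], sparse=False)
one_hot_sk = enc.fit_transform(labels.reshape(-1, 1))  # shape (4, 46)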
I've been playing around with Tensorflow and Keras and I finally got the following error while trying hyper parameter tuning:
"ValueError: activation is not a legal parameter"
The point is that I want to try different activation functions in my model to see which one works best.
I have the following code:
import pandas as pd
import tensorflow as tf
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
import numpy as np

ds = pd.read_csv(
    "https://storage.googleapis.com/download.tensorflow.org/data/abalone_train.csv",
    names=["Length", "Diameter", "Height", "Whole weight", "Shucked weight",
           "Viscera weight", "Shell weight", "Age"])
print(ds)

x_train = ds.copy()
y_train = x_train.pop('Age')
x_train = np.array(x_train)

def create_model(layers, activations):
    model = tf.keras.Sequential()
    for i, nodes in enumerate(layers):
        if i == 0:
            model.add(tf.keras.layers.Dense(nodes, input_dim=x_train.shape[1]))
            model.add(layers.Activation(activations))
            model.add(Dropout(0.3))
        else:
            model.add(tf.keras.layers.Dense(nodes))
            model.add(layers.Activation(activations))
            model.add(Dropout(0.3))
    model.add(tf.keras.layers.Dense(units=1, kernel_initializer='glorot_uniform'))
    model.add(layers.Activation('sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model
model = KerasClassifier(build_fn=create_model, verbose=0)

layers = [[20], [40, 20], [45, 30, 15]]
activations = ['sigmoid', 'relu']
param_grid = dict(layers=layers, activation=activations, batch_size=[128, 256], epochs=[30])
grid = GridSearchCV(estimator=model, param_grid=param_grid, cv=5)
grid_result = grid.fit(x_train, y_train)
print(grid_result.best_score_, grid_result.best_params_)

pred_y = grid.predict(x_test)
y_pred = (pred_y > 0.5)
cm = confusion_matrix(y_pred, y_test)
score = accuracy_score(y_pred, y_test)

model.fit(x_train, y_train, epochs=30, callbacks=[cp_callback])
# steps_per_epoch
model.evaluate(x_test, y_test, verbose=2)

probability_model = tf.keras.Sequential([
    model,
    tf.keras.layers.Softmax()
])
probability_model(x_test[:100])
If you look at the Keras docs, you must specify activations as:
from tensorflow.keras import activations
layers.Activation(activations.relu)
Right now, you have:
activations = ['sigmoid', 'relu']
That is why you get the ValueError.
You should change your code to something like this:
model.add(tf.keras.layers.Dense(nodes, activation=activations[i], input_dim=x_train.shape[1]))
So remove the Activation layer (model.add(layers.Activation(activations))) and instead pass the activation inside each Dense layer.
Example:
def create_model(layers, activations):
    model = tf.keras.Sequential()
    for i in range(2):
        if i == 0:
            model.add(tf.keras.layers.Dense(2, activation=activations[i], input_dim=x_train.shape[1]))
            model.add(tf.keras.layers.Dropout(0.3))
        else:
            model.add(tf.keras.layers.Dense(2, activation=activations[i]))
            model.add(tf.keras.layers.Dropout(0.3))
    model.add(tf.keras.layers.Dense(units=1, activation='sigmoid', kernel_initializer='glorot_uniform'))
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model
layers.Activation() expects a function or a string, such as 'sigmoid', but you are currently passing the whole array activations to it. Use your index i (or a different index) to access a single activation function, like activations[i].
You can also pass the activation as a string directly to the Dense layer, like so:
model.add(tf.keras.layers.Dense(nodes, activation=activations[i], input_dim=x_train.shape[1]))
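One more thing worth checking, beyond the Activation fix (this is an observation about the question's code, not something the answers above cover): KerasClassifier validates every param_grid key against the signature of build_fn, so the key activation must match the create_model argument name activations, or you get exactly this "not a legal parameter" error. A sketch of a consistent grid, where each candidate is a list with one activation per hidden layer (so activations[i] is well-defined inside create_model):

param_grid = dict(
    layers=[[20], [40, 20], [45, 30, 15]],
    activations=[['sigmoid', 'sigmoid', 'sigmoid'],  # key matches create_model(..., activations)
                 ['relu', 'relu', 'relu']],
    batch_size=[128, 256],
    epochs=[30],
)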
I'm getting this error:
ValueError: Error when checking input: expected dense_27_input to have shape (20,) but got array with shape (3495,)
Here is my code:
import pandas as pd
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Input, Dense
from keras.layers import Flatten
from sklearn.preprocessing import StandardScaler
import numpy as np

df = pd.read_csv('../input/nasa-asteroids-classification/nasa.csv')
df = pd.get_dummies(df)

X = df.loc[:, df.columns != 'Harzardous']
y = df.loc[:, 'Hazardous']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)

model = Sequential()
model.add(Dense(64, input_dim=(20), activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

compilation = model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=100, batch_size=32)

scores = model.evaluate(X_train, y_train, verbose=False)
print("Training Accuracy: %.2f%%\n" % (scores[1]*100))
scores = model.evaluate(X_test, y_test, verbose=False)
print("Testing Accuracy: %.2f%%\n" % (scores[1]*100))
How do I fix this?
The data set has 20 columns after get_dummies was applied to it, and it had 20 rows before it was applied.
Link to data set: https://www.kaggle.com/shrutimehta/nasa-asteroids-classification
Your input dimension is not correct. Print the shapes of X_train and y_train; they should be (N, 20) and (N, 1), where N is the number of samples.
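A minimal sketch of the fix, keeping the rest of the question's pipeline: derive the input dimension from the data instead of hard-coding 20. (Note also that df.columns != 'Harzardous' misspells 'Hazardous', so the target column most likely remains inside X and should be dropped explicitly.)

# X_train is the scaled array from the question; its second dimension
# is the true number of features produced by get_dummies.
n_features = X_train.shape[1]  # 3495 here, not 20

model = Sequential()
model.add(Dense(64, input_dim=n_features, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))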