I have a 3-dimensional dataset of audio files where X.shape is (329, 20, 85). I want to have a simple, bare-bones model running, so please don't nitpick and address only the issue at hand. Here is the code:
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.LSTM(32, return_sequences=True, stateful=False, input_shape = (20,85,1)))
model.add(tf.keras.layers.LSTM(20))
model.add(tf.keras.layers.Dense(nb_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=["accuracy"])
model.summary()
print("Train...")
model.fit(X_train, y_train, batch_size=batch_size, nb_epoch=50, validation_data=(X_test, y_test))
But then I had the error mentioned in the title:
ValueError: Shapes (None, 1) and (None, 3) are incompatible
Here is the model.summary()
Model: "sequential_13"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_21 (LSTM) (None, 20, 32) 15104
_________________________________________________________________
lstm_22 (LSTM) (None, 20) 4240
_________________________________________________________________
dense_8 (Dense) (None, 3) 63
=================================================================
Total params: 19,407
Trainable params: 19,407
Non-trainable params: 0
_________________________________________________________________
Train...
For this, I followed this post and updated TensorFlow to the latest version, but the issue persists. This post is completely unrelated and unreliable. This post, although somewhat related, has been unanswered for a while now.
Update 1.0:
I strongly think the problem has something to do with the final Dense layer where I pass nb_classes as 3, since I am classifying for 3 categories in y.
So I changed the Dense layer's nb_classes to 1, which ran the model and gave me this output, which I am positive is wrong.
Train...
9/9 [==============================] - 2s 177ms/step - loss: 0.0000e+00 - accuracy: 0.1520 - val_loss: 0.0000e+00 - val_accuracy: 0.3418
<tensorflow.python.keras.callbacks.History at 0x7f50f1dcebe0>
Update 2.0:
I one-hot encoded the ys and resolved the shape issue. But now the <tensorflow.python.keras.callbacks.History at 0x7f50f1dcebe0> output above persists. Any help with this? Or should I post a new question for it? Thanks for all the help.
How should I proceed, or what should I be changing?
The first problem is with the LSTM input_shape: input_shape=(20, 85, 1).
From the doc: https://keras.io/layers/recurrent/
An LSTM layer expects a 3D tensor with shape (batch_size, timesteps, input_dim), and input_shape is specified without the batch dimension, so here it should be (20, 85) rather than (20, 85, 1).
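With X of shape (329, 20, 85), that means 20 timesteps and 85 features per step. A minimal corrected first layer would presumably look like this (a sketch, keeping the rest of the posted model unchanged):
model.add(tf.keras.layers.LSTM(32, return_sequences=True, stateful=False,
                               input_shape=(20, 85)))  # (timesteps, features), no trailing 1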
model.add(tf.keras.layers.Dense(nb_classes, activation='softmax')) - this suggests you're doing multi-class classification.
So your y_train and y_test need to be one-hot encoded. That means they must have shape (number_of_samples, 3), where 3 is the number of classes.
You need to apply tensorflow.keras.utils.to_categorical to them:
from tensorflow.keras.utils import to_categorical

y_train = to_categorical(y_train, 3)
y_test = to_categorical(y_test, 3)
ref: https://www.tensorflow.org/api_docs/python/tf/keras/utils/to_categorical
tf.keras.callbacks.History() - this callback is automatically applied to every Keras model. The History object gets returned by the fit method of models.
ref: https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/History
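The <tensorflow.python.keras.callbacks.History at 0x...> line is therefore not an error; it is just the notebook echoing the return value of fit. As a minimal illustration (using the same variables as in the question; the available keys depend on the compiled metrics):
history = model.fit(X_train, y_train, batch_size=batch_size, epochs=50,
                    validation_data=(X_test, y_test))
print(history.history.keys())       # e.g. dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
print(history.history['loss'][-1])  # training loss of the final epoch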
Check whether the last Dense (output) layer has the same number of units as the number of target classes in the training dataset. I made a similar mistake while training on my dataset, and correcting it helped me.
Another thing to check is whether your labels are one-hot encoded or plain integers. See this post: https://www.kaggle.com/general/197993
The error arises because 'categorical_crossentropy' works on one-hot encoded targets, while 'sparse_categorical_crossentropy' works on integer targets.
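In short, the loss must match the label encoding. A small sketch of the two valid combinations, assuming 3 classes:
# Option A: integer labels of shape (n,) with sparse_categorical_crossentropy
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Option B: one-hot labels of shape (n, 3) with categorical_crossentropy
y_train = tf.keras.utils.to_categorical(y_train, 3)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])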
model.add(tf.keras.layers.Dense(nb_classes, activation='softmax'))
nb_classes should be the same as y_train.shape[1], i.e. the number of columns after one-hot encoding.
The issue was that the wrong variables were used after one-hot encoding in a classification problem.
trainY = tf.keras.utils.to_categorical(y_train, num_classes=9)
testY = tf.keras.utils.to_categorical(y_test, num_classes=9)
Modeling was done with y_train and y_test as below:
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=50,
batch_size = 128)
The correction was to use the one-hot encoded variables in the fit call, and it worked as expected:
trainY = tf.keras.utils.to_categorical(y_train, num_classes=9)
testY = tf.keras.utils.to_categorical(y_test, num_classes=9)
model.fit(X_train, trainY, validation_data=(X_test, testY), epochs=50,
          batch_size = 128)
Related
I have just started implementing an LSTM in Python with TensorFlow/Keras to test out an idea I had; however, I am struggling to properly create a model. This post is mainly about a ValueError that I often get (see the code at the bottom), but any and all help with creating a proper LSTM model for the problem below is greatly appreciated.
For each day, I want to predict which of a group of events will occur. The idea is that some events are recurring / always occur after a certain amount of time has passed, whereas other events occur only rarely or without any structure. An LSTM should be able to pick up on these recurring events, in order to predict their occurrences for days in the future.
In order to represent the events, I use a list with values 0 and 1 (non-occurrence and occurrence). So for example, if I have the events ["Going to school", "Going to the gym", "Buying a computer"], I have lists like [1, 0, 1], [1, 1, 0], [1, 0, 1], [1, 1, 0], etc. The idea is then that the LSTM will recognize that I go to school every day, go to the gym every other day, and that buying a computer is very rare. So following the sequence of vectors, for the next day it should predict [1, 0, 0].
So far I have done the following:
Create x_train: a numpy.array with shape (305, 60, 193). Each entry of x_train contains 60 consecutive days, where day is represented by a vector of the same 193 events that can take place like described above.
Create y_train: a numpy.array with shape (305, 1, 193). Similar to x_train, but y_train only contains 1 day per entry.
x_train[0] consists of day 1,2,...,60 and y_train[0] contains day 61. x_train[1] then contains day 2,...,61 and y_train[1] contains day 62, etc. The idea is that the LSTM should learn to use data from the past 60 days, and that it can then iteratively start predicting/generating new vectors of event occurences for future days.
I am really struggling with how to create a simple implementation of a LSTM that can handle this. So far I think I have figured out the following:
I need to start with the below block of code, where N_INPUTS = 60 and N_FEATURES = 193. I am not sure what N_BLOCKS should be, or whether the value it should take is strictly bound by some conditions. EDIT: According to https://zhuanlan.zhihu.com/p/58854907, it can be whatever I want.
model = Sequential()
model.add(LSTM(N_BLOCKS, input_shape=(N_INPUTS, N_FEATURES)))
I should probably add a dense layer. If I want the output of my LSTM to be a vector with the 193 events, this should look as follows:
model.add(layers.Dense(193, activation='linear'))  # or some other activation function
I can also add a dropout layer to prevent overfitting, for example with model.add(layers.Dropout(0.2)), where 0.2 is the rate at which inputs are set to 0.
I need to add a model.compile(loss=..., optimizer=...). I am not sure whether the loss function (e.g. MSE or categorical_crossentropy) and optimizer matter if I just want a working implementation.
I need to train my model, which I can achieve by using model.fit(x_train,y_train)
If all of the above works well, I can start to predict values for the next day using model.predict(the 60 days before the day I want to predict)
One of my attempts can be seen here:
print(x_train.shape)
print(y_train.shape)
model = keras.Sequential()
model.add(layers.LSTM(256, input_shape=(x_train.shape[1], x_train.shape[2])))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(y_train.shape[2], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()
model.fit(x_train,y_train) #<- This line causes the ValueError
Output:
(305, 60, 193)
(305, 1, 193)
Model: "sequential_29"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_27 (LSTM) (None, 256) 460800
dense_9 (Dense) (None, 1) 257
=================================================================
Total params: 461,057
Trainable params: 461,057
Non-trainable params: 0
_________________________________________________________________
ValueError: Shapes (None, 1, 193) and (None, 193) are incompatible
Alternatively, I have tried replacing the line model.add(layers.Dense(y_train.shape[2], activation='softmax')) with model.add(layers.Dense(y_train.shape[1], activation='softmax')). This produces ValueError: Shapes (None, 1, 193) and (None, 1) are incompatible .
Are my ideas somewhat okay? How can I resolve this ValueError? Any help would be greatly appreciated.
EDIT: As suggested in the comments, changing the size of y_train did the trick.
print(x_train.shape)
print(y_train.shape)
model = keras.Sequential()
model.add(layers.LSTM(193, input_shape=(x_train.shape[1], x_train.shape[2])))  # The 193 can be any number; see: https://zhuanlan.zhihu.com/p/58854907
model.add(layers.Dropout(0.2))
model.add(layers.Dense(y_train.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()
model.fit(x_train,y_train)
(305, 60, 193)
(305, 193)
Model: "sequential_40"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_38 (LSTM) (None, 193) 298764
dropout_17 (Dropout) (None, 193) 0
dense_16 (Dense) (None, 193) 37442
=================================================================
Total params: 336,206
Trainable params: 336,206
Non-trainable params: 0
_________________________________________________________________
10/10 [==============================] - 3s 89ms/step - loss: 595.5011
Now I am stuck on the fact that model.predict(x) requires x to be of the same size as x_train, and will output an array with the same size as y_train. I was hoping only one set of 60 days would be required to output the 61st day. Does anyone know how to achieve this?
The solution may be to have y_train of shape (305, 193) instead of (305, 1, 193); since you predict one day, this does not change the data, just its shape. You should then be able to train and predict.
With model.add(layers.Dense(y_train.shape[1], activation='softmax')) of course.
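A rough sketch of both points (reshaping the targets and then predicting from a single 60-day window), assuming x_train is (305, 60, 193) and y_train is (305, 1, 193) as in the question:
import numpy as np

y_train = np.squeeze(y_train, axis=1)        # (305, 1, 193) -> (305, 193)
model.fit(x_train, y_train)

# predict still expects a batch dimension, so one 60-day window of shape
# (60, 193) is passed as a batch of size 1:
last_60_days = x_train[-1]                   # (60, 193), purely illustrative
next_day = model.predict(last_60_days[np.newaxis, ...])  # shape (1, 193)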
I'm currently working on a Keras neural network for fun. I'm just learning the basics, but I can't get past this dimension problem:
So my input data (X) should be a 12x6 matrix, with 12 timestamps and 6 different data values for every timestamp:
X = np.zeros([2867, 12, 6])
Y = np.zeros([2867, 3])
My Output (Y) should be a one-hot encoded 3x1 vector.
Now I want to feed this data through the following LSTM model.
model = Sequential()
model.add(LSTM(30, activation="softsign", return_sequences=True, input_shape=(12, 6)))
model.add(Dense(3))
model.summary()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x=X, y=Y, batch_size=100, epochs=1000, verbose=2, validation_split=0.2)
The Summary looks like this:
Model: "sequential"
Layer (type) Output Shape Param #
=================================================================
lstm (LSTM) (None, 12, 30) 4440
_________________________________________________________________
dense (Dense) (None, 12, 3) 93
=================================================================
Total params: 4,533
Trainable params: 4,533
Non-trainable params: 0
_________________________________________________________________
When I run this program, I get this error:
ValueError: Shapes (None, 3) and (None, 12, 3) are incompatible.
I already tried to reshape my data to a 72x1 vector, but this doesn't work either.
Maybe someone can help me shape my input data correctly :).
You probably need to define your model as follows, since you used the categorical_crossentropy loss function.
model.add(LSTM(30, activation="softsign", return_sequences=False, input_shape=(12, 6)))
model.add(Dense(3, activation='softmax'))
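With return_sequences=False, the LSTM emits only its final timestep, so the model output becomes (None, 3) and matches Y's (2867, 3) one-hot labels, and softmax is the activation categorical_crossentropy expects. A minimal end-to-end sketch, reusing the layer sizes and training arguments from the question:
model = Sequential()
model.add(LSTM(30, activation="softsign", return_sequences=False, input_shape=(12, 6)))
model.add(Dense(3, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x=X, y=Y, batch_size=100, epochs=1000, verbose=2, validation_split=0.2)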
I'm currently facing an issue with my TensorFlow pipeline.
I don't know if it's specific to TensorFlow or Python.
I'm trying to compute a confusion matrix after fitting my VGG16 model.
So I used the model object I got after the fit method and tried to predict on the same features to compute my CM.
But the message "Processus arrêté" ("process stopped" in English) appears and the script stops working.
Here is the output:
Using TensorFlow backend.
Load audio features and labels : 100% Time: 0:00:50 528.41 B/s
VGG16 model with last layer changed
Number of label: 17322
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
vgg16 (Functional) (None, 4, 13, 512) 14713536
_________________________________________________________________
flatten (Flatten) (None, 26624) 0
_________________________________________________________________
dense (Dense) (None, 256) 6816000
_________________________________________________________________
dropout (Dropout) (None, 256) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 257
=================================================================
Total params: 21,529,793
Trainable params: 13,895,681
Non-trainable params: 7,634,112
_________________________________________________________________
2772/2772 [==============================] - 121s 44ms/step - loss: 0.2315 - acc: 0.9407 - val_loss: 0.0829 - val_acc: 0.9948
Processus arrêté
Here is the model:
def launch2(self):
    print("VGG16 model with last layer changed")
    x = np.array(self.getFeatures())[..., np.newaxis]
    print("Number of label: " + str(len(self.getLabels())))
    vgg_conv = VGG16(weights=None, include_top=False, input_shape=(128, 431, 1))
    # Freeze the layers except the last 4 layers
    for layer in vgg_conv.layers[:-4]:
        layer.trainable = False
    # Create the model
    model = tensorflow.keras.Sequential()
    # Add the vgg convolutional base model
    model.add(vgg_conv)
    opt = Adam(lr=1e-4)
    model.add(Flatten())
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1, activation="sigmoid"))
    model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['acc'])
    model.summary()
    model.fit(x=x, y=self.getLabels(), shuffle=True, batch_size=5, epochs=1, validation_split=0.2, verbose=1)
    model.save('vggModelLastLayer.h5')
    self.testModel(model, x)
Here is the function which allows me to compute the CM:
def testModel(self, model, x):
    print("Informations about model still processing. Last step is long")
    y_labels = [int(i) for i in self.getLabels().tolist()]
    classes = model.predict_classes(x)
    predicted_classes = np.argmax(results, axis=1)
    # Call model info (true labels, predicted labels)
    # self.modelInfo(y_labels, predicted_classes)
    from sklearn.metrics import classification_report
    from sklearn.metrics import confusion_matrix
    cm = confusion_matrix(y_labels, predicted_classes)
    target_names = ["Bulls", "No bulls"]
    print(classification_report(y_labels, predicted_classes, target_names=target_names))
    print(cm)
How could I fix this? Is this a memory leak or something?
Thank you in advance
I've found out why this happened.
It was simply a memory issue: my RAM wasn't big enough to handle the total amount of data I had.
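If the prediction step is re-attempted, one way to keep peak memory lower (a sketch, not verified on this exact setup, and the chunk size is an arbitrary assumption) is to run predict over slices of x and threshold the single sigmoid output, instead of calling predict_classes on the whole array at once:
import numpy as np

predicted_classes = []
chunk = 256                                   # arbitrary; adjust to the available RAM
for start in range(0, len(x), chunk):
    probs = model.predict(x[start:start + chunk])        # (chunk, 1) sigmoid probabilities
    predicted_classes.append((probs > 0.5).astype(int).ravel())
predicted_classes = np.concatenate(predicted_classes)    # 0/1 labels for the confusion matrix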
I am new to Keras, so I really appreciate any help here. For my project, I am trying to train a neural network on multiple time series. I got it to work by running a for loop that fits each time series to the model. The code looks like this:
for i in range(len(train)):
history = model.fit(train_X[i], train_Y[i], epochs=20, batch_size=6, verbose=0, shuffle=True)
If I am not wrong, I am doing online training here. Now I'm trying to do batch training to see if the result is better. I tried to fit a list consisting of all time series (each converted into a numpy array), but I get this error:
Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 56 arrays:
Here is the info about the dataset and the model:
model = Sequential()
model.add(LSTM(1, input_shape=(1,16),return_sequences=True))
model.add(Flatten())
model.add(Dense(1, activation='tanh'))
model.compile(loss='mae', optimizer='adam', metrics=['accuracy'])
model.summary()
Layer (type) Output Shape Param #
=================================================================
lstm_2 (LSTM) (None, 1, 1) 72
_________________________________________________________________
flatten_2 (Flatten) (None, 1) 0
_________________________________________________________________
dense_2 (Dense) (None, 1) 2
=================================================================
Total params: 74
Trainable params: 74
Non-trainable params: 0
print(len(train_X), train_X[0].shape, len(train_Y), train_Y[0].shape)
56 (1, 23, 16) 56 (1, 23, 1)
Here is the block of code that gives me the error:
pyplot.figure(figsize=(16, 25))
history = model.fit(train_X, train_Y, epochs=1, verbose=1, shuffle=False, batch_size = len(train_X))
The input shape of an LSTM should be (batch_size, timesteps, features), but you don't include batch_size in input_shape; if you want to specify it, you can use batch_input_shape instead.
model = Sequential()
model.add(LSTM(1, input_shape=(23,16),return_sequences=True))
# model.add(Flatten())
model.add(Dense(1, activation='tanh'))
model.compile(loss='mae', optimizer='adam', metrics=['accuracy'])
model.summary()
X = np.random.random((56,1, 23, 16))
y = np.random.random((56,1, 23, 1))
X=np.squeeze(X,axis =1) #as input shape should be (`batch_size`, `timesteps`, `features`)
y = np.squeeze(y,axis =1)
model.fit(X,y,epochs=1, verbose=1, shuffle=False, batch_size = len(X))
I am not sure if it serves your purpose.
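Applied to the original list of 56 arrays of shape (1, 23, 16), the same idea would presumably look like this (a sketch, assuming train_X and train_Y are Python lists of arrays as printed in the question):
import numpy as np

# Stack the 56 per-series arrays of shape (1, 23, 16) into one array of shape (56, 23, 16)
X = np.concatenate(train_X, axis=0)   # (56, 23, 16)
y = np.concatenate(train_Y, axis=0)   # (56, 23, 1)

history = model.fit(X, y, epochs=1, verbose=1, shuffle=False, batch_size=len(X))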
Here is my code to train an RNN with LSTM nodes:
# LSTM RNN with dropout for sequence classification
from keras.models import Sequential
from keras.layers import Dense, LSTM
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from sklearn.model_selection import train_test_split
import pickle, numpy, pandas as pd
###################################### CONSTANTS #############################################
SEED = 7 # Fixes random seed for reproducibility.
URL = 'ibcData.tsv' # Specified dataset to gather data from.
SEPERATOR = '\t' # Seperator the dataset uses to divide data.
RANDOM_STATE = 1 # Pseudo-random number generator state used for random sampling.
TOP_WORDS = 5000 # Most used words in the dataset.
MAX_REVIEW_LENGTH = 500 # Length of each sentence being sent in (necessary).
EMBEDDING_VECTOR_LENGTH = 32 # The specific Embedded later will have 32-length vectors to
# represent each word.
BATCH_SIZE = 64 # Takes 64 sentences at a time and continually retrains RNN.
NUMBER_OF_EPOCHS = 3 # Fits RNN to more accurately guess the data's political bias.
DROPOUT = 0.2 # Helps slow down overfitting of data (slower convergence rate)
RECURRENT_DROPOUT = 0.2 # Helps slow down overfitting of data when recurrently training
##############################################################################################
# fix random seed for reproducibility
numpy.random.seed(SEED)
readData = pd.read_csv(URL, header=None, names=['label', 'message'], sep=SEPERATOR)
# convert label to a numerical variable
readData['label_num'] = readData.label.map({'Liberal' : 0, 'Neutral': 0.5, 'Conservative' : 1})
X = readData.message # Contains the dataset's actual sentences that were labeled
Y = readData.label_num # Either 0.0, 0.5, or 1.0 depending on label mapped to
# load the dataset into training and testing datasets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=RANDOM_STATE)
# truncate and pad input sequences
for sentence in X_train:
sentence.zfill(MAX_REVIEW_LENGTH)
for sentence in X_test:
sentence.zfill(MAX_REVIEW_LENGTH)
# create the model
model = Sequential()
model.add(Embedding(TOP_WORDS, EMBEDDING_VECTOR_LENGTH, input_length=MAX_REVIEW_LENGTH))
model.add(LSTM(100, recurrent_dropout=RECURRENT_DROPOUT, dropout=DROPOUT))  # Dropouts help prevent overfitting
model.add(Dense(2, activation='sigmoid')) # Layers deal with a 2D tensor, and output a 2D tensor
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=NUMBER_OF_EPOCHS, batch_size=BATCH_SIZE)
# Final evaluation of the model
scores = model.evaluate(X_test, Y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
It is training a .tsv file that has data like this:
"Liberal","Forcing middle-class workers to bear a greater share of the cost of government weakens their support for needed investments and stirs resentment toward those who depend on public services the most ."
"Liberal", "Because it would not be worthwhile to bring a case for $ 30.22 , the arbitration clause would , as a practical matter , deny the Concepcions any relief and , more important , eliminate a class action that might punish AT&T for its pattern of fraudulent behavior ."
I try to run it and I get this from the console, and I have no idea how to fix it, nor do my professors who are trying to help me with this research:
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 500, 32) 160000
_________________________________________________________________
lstm_1 (LSTM) (None, 100) 53200
_________________________________________________________________
dense_1 (Dense) (None, 2) 202
=================================================================
Total params: 213,402
Trainable params: 213,402
Non-trainable params: 0
_________________________________________________________________
None
Traceback (most recent call last):
  File "LSTM-RNN.py", line 55, in <module>
    model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=NUMBER_OF_EPOCHS, batch_size=BATCH_SIZE)
  File "C:\Users\Hydur\Anaconda3\lib\site-packages\keras\keras\models.py", line 871, in fit
    initial_epoch=initial_epoch)
  File "C:\Users\Hydur\Anaconda3\lib\site-packages\keras\keras\engine\training.py", line 1525, in fit
    batch_size=batch_size)
  File "C:\Users\Hydur\Anaconda3\lib\site-packages\keras\keras\engine\training.py", line 1379, in _standardize_user_data
    exception_prefix='input')
  File "C:\Users\Hydur\Anaconda3\lib\site-packages\keras\keras\engine\training.py", line 144, in _standardize_input_data
    str(array.shape))
ValueError: Error when checking input: expected embedding_1_input to have shape (None, 500) but got array with shape (3244, 1)
The main problem seems to be that X contained raw strings, while the Embedding layer expected data already encoded numerically. The Keras text preprocessing utilities will take care of that:
from keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer(num_words=MAX_REVIEW_LENGTH)
tokenizer.fit_on_texts(readData.message)
X = numpy.array(tokenizer.texts_to_matrix(readData.message)) # shape (None, 500)
This will encode each message as 500 integers, with a unique integer assigned to each word.
Once that was fixed, I also got an error on the "dense_1" layer. The last layer in your network was specified to have two output nodes, but the loss function you used (binary_crossentropy) expects a single column coded as 0/1. I edited it so that layer has only one output node so the process would complete, but I doubt that using 0, 0.5, 1 with binary crossentropy will do what you want. I think you'd probably be better off with a 3-level one-hot encoding and categorical_crossentropy, but that's out of scope for this question.
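For reference, that 3-class alternative would be roughly the following (a sketch, not part of the edited script below; the class mapping and layer change are assumptions based on the suggestion above):
from keras.utils import to_categorical

# Map labels to integer classes instead of 0 / 0.5 / 1
readData['label_num'] = readData.label.map({'Liberal': 0, 'Neutral': 1, 'Conservative': 2})
Y = to_categorical(readData.label_num, num_classes=3)   # one-hot labels, shape (n, 3)

# ... and change the last layer and loss accordingly:
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])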
Here is the full edited script that ran for me. I was only able to run it on the two observations you provided but it did complete.
from keras.models import Sequential
from keras.layers import Dense, LSTM
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from sklearn.model_selection import train_test_split
import os, pickle, numpy, pandas as pd
from keras.preprocessing.text import Tokenizer
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
################################### CONSTANTS ################################################
SEED = 7 # Fixes random seed for reproducibility.
URL = 'ibcData.tsv' # Specified dataset to gather data from.
SEPERATOR = '\t' # Seperator the dataset uses to divide data.
RANDOM_STATE = 1 # Pseudo-random number generator state used for random sampling.
TOP_WORDS = 5000 # Most used words in the dataset.
MAX_REVIEW_LENGTH = 500 # Length of each sentence being sent in (necessary).
EMBEDDING_VECTOR_LENGTH = 32 # The specific Embedded later will have 32-length vectors to
# represent each word.
BATCH_SIZE = 64 # Takes 64 sentences at a time and continually retrains RNN.
NUMBER_OF_EPOCHS = 3 # Fits RNN to more accurately guess the data's political bias.
# fix random seed for reproducibility
numpy.random.seed(SEED)
readData = pd.read_csv(URL, header=None, names=['label', 'message'], sep=SEPERATOR)
# convert label to a numerical variable
tokenizer = Tokenizer(num_words=MAX_REVIEW_LENGTH)
tokenizer.fit_on_texts(readData.message)
X = numpy.array(tokenizer.texts_to_matrix(readData.message)) # shape (None, 500)
readData['label_num'] = readData.label.map({'Liberal' : 0, 'Neutral': 0.5, 'Conservative' : 1})
Y = numpy.array(readData.label_num) # Either 0.0, 0.5, or 1.0 depending on label mapped to
# load the dataset into training and testing datasets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=RANDOM_STATE)
# create the model
model = Sequential()
model.add(Embedding(TOP_WORDS, EMBEDDING_VECTOR_LENGTH, input_length=MAX_REVIEW_LENGTH))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid')) # Layers deal with a 2D tensor, and output a 2D tensor
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=NUMBER_OF_EPOCHS, batch_size=BATCH_SIZE)
# Final evaluation of the model
scores = model.evaluate(X_test, Y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
I then received the following output:
Using TensorFlow backend.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 500, 32) 160000
_________________________________________________________________
lstm_1 (LSTM) (None, 100) 53200
_________________________________________________________________
dense_1 (Dense) (None, 1) 101
=================================================================
Total params: 213,301
Trainable params: 213,301
Non-trainable params: 0
_________________________________________________________________
None
Train on 1 samples, validate on 1 samples
Epoch 1/3
1/1 [==============================] - 0s - loss: 0.6953 - acc: 0.0000e+00 - val_loss: 0.6814 - val_acc: 1.0000
Epoch 2/3
1/1 [==============================] - 0s - loss: 0.6814 - acc: 1.0000 - val_loss: 0.6670 - val_acc: 1.0000
Epoch 3/3
1/1 [==============================] - 0s - loss: 0.6670 - acc: 1.0000 - val_loss: 0.6516 - val_acc: 1.0000
Hope that helps.