Tensorflow error when defining sequential model - python

I am trying to define a sequential model for a program that analyses text reviews using sentiment analysis. However, I am having trouble defining one of the models I want to use.
Below is the section of my code (with the relevant imports) I am having problems with:
from sklearn.model_selection import train_test_split
from tensorflow.keras import Model
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense
from tensorflow.keras import Sequential
X_train, X_test, y_train, y_test = train_test_split(x, y, random_state=42)
n_features = X_train.shape[1]
model = Sequential()
model.add(Dense(20, activation='relu', kernel_initializer='he_normal',
                input_shape=(n_features,)))
model.add(Dense(10, activation='tanh', kernel_initializer='he_normal'))
model.add(Dense(8, activation='sigmoid', kernel_initializer='he_normal'))
model.add(Dense(6, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=150, batch_size=32, verbose=0)
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print('Test Accuracy: %.3f' % (acc * 100))
Below is the warning message that I am receiving:
WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: <class 'scipy.sparse.csr.csr_matrix'>, <class 'NoneType'>
WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: <class 'scipy.sparse.csr.csr_matrix'>, <class 'NoneType'>
Test Accuracy: 78.668
As you can see, I still get an accuracy for the model, but also the warning. This is the first time I have created something like this, so I am a little confused and any help would be much appreciated. I am programming in Python, in a Jupyter Notebook.
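The warning itself names the cause: this TensorFlow version's v2 training loop has no data adapter for scipy.sparse.csr_matrix, so Keras falls back to the v1 loop (once for fit and once for evaluate, hence the duplicated message). A minimal sketch of the usual workaround, assuming x is the sparse output of a text vectorizer such as TfidfVectorizer (an assumption on my part, since the vectorization code is not shown), is to densify before splitting:
# x is assumed to be a scipy sparse matrix produced by a vectorizer
x_dense = x.toarray()  # csr_matrix -> ordinary numpy array
X_train, X_test, y_train, y_test = train_test_split(x_dense, y, random_state=42)
Densifying is fine for a modest vocabulary, but .toarray() materializes every zero, so for very large sparse matrices you would instead feed batches through a generator or keep the pipeline sparse.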

Related

Conv1D for classifying non-image dataset shows error ValueError: `logits` and `labels` must have the same shape

I found this paper presenting that a Convolutional Neural Network can achieve the best accuracy for non-image classification, so I want to use a CNN with a non-image dataset. I downloaded the Early Stage Diabetes Risk Prediction Dataset from Kaggle and created a CNN model with this code:
from numpy import loadtxt
from sklearn.model_selection import train_test_split
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, Dense
dataset = loadtxt('diabetes_data_upload.csv', delimiter=',')
# split into input (X) and output (y) variables
X = dataset[:,0:16]
Y = dataset[:,16]
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3)
model = Sequential()
model.add(Conv1D(16,2, activation='relu', input_shape=(16, 1)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=100, batch_size=10)
It shows this error:
ValueError: `logits` and `labels` must have the same shape, received ((None, 15, 1) vs (None,)).
How do I fix it?
You can use tf.keras.layers.Flatten(). The Conv1D output still carries a time dimension (a kernel of size 2 over 16 timesteps yields shape (None, 15, 1)), while your labels are scalars, so the shapes don't match; flattening before the final Dense removes the extra dimensions. Something like the code below can solve your problem.
from sklearn.model_selection import train_test_split
import tensorflow as tf
import numpy as np
X = np.random.rand(100, 16)
Y = np.random.randint(0, 2, size=100)  # <- because there are two labels, generate random 0/1 targets
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv1D(16,2, activation='relu', input_shape=(16, 1)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=1, batch_size=10)
Update (thanks to Ameya): we can also solve this problem by using only tf.keras.layers.GlobalAveragePooling1D().
(Thanks to Djinn for his comment, but consider: these are two different approaches that do different things. Flatten() preserves all the data and just converts the input tensor to 1D, whereas GlobalAveragePooling1D() averages over the time dimension and loses information. Pooling layers with non-image data can significantly affect performance, though I've noticed average pooling does the least "damage".)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv1D(16,2, activation='relu', input_shape=(16, 1)))
model.add(tf.keras.layers.GlobalAveragePooling1D())
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
7/7 [==============================] - 0s 2ms/step - loss: 0.6954 - accuracy: 0.0000e+00
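One caveat, as an aside not in the original answer: Conv1D with input_shape=(16, 1) formally expects 3-D batches of shape (batch, 16, 1), and depending on the TensorFlow version, fitting the 2-D X above can raise a shape error. If it does, adding an explicit channel axis before the split fixes it:
X = X[..., np.newaxis]  # (100, 16) -> (100, 16, 1): each feature becomes a length-1 channel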

How to implement LSTM in tensorflow v1 from pandas dataframe

I've tried following tutorials on implementing this but I keep getting dimension errors on the LSTM layer.
ValueError: Input 0 of layer LSTM is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 2]
import random
import numpy as np
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, DenseFeatures, Reshape
from sklearn.model_selection import train_test_split
def df_to_dataset(features, target, batch_size=32):
    return tf.data.Dataset.from_tensor_slices((dict(features), target)).batch(batch_size)
# Reset randomization seeds
np.random.seed(0)
tf.random.set_random_seed(0)
random.seed(0)
# Assume 'frame' to be a dataframe with 3 columns: 'optimal_long_log_return', 'optimal_short_log_return' (independent variables) and 'equilibrium_log_return' (dependent variable)
X = frame[['optimal_long_log_return', 'optimal_short_log_return']][:-1]
Y = frame['equilibrium_log_return'].shift(-1)[:-1]
X_train, _X, y_train, _y = train_test_split(X, Y, test_size=0.5, shuffle=False, random_state=1)
X_validation, X_test, y_validation, y_test = train_test_split(_X, _y, test_size=0.5, shuffle=False, random_state=1)
train = df_to_dataset(X_train, y_train)
validation = df_to_dataset(X_validation, y_validation)
test = df_to_dataset(X_test, y_test)
feature_columns = [fc.numeric_column('optimal_long_log_return'), fc.numeric_column('optimal_short_log_return')]
model = Sequential()
model.add(DenseFeatures(feature_columns, name='Metadata'))
model.add(LSTM(256, name='LSTM'))
model.add(Dense(1, name='Output'))
model.compile(loss='logcosh', metrics=['mean_absolute_percentage_error'], optimizer='Adam')
model.fit(train, epochs=10, validation_data=validation, verbose=1)
loss, accuracy = model.evaluate(test, verbose=0)
print(f'Target Error: {accuracy}%')
After seeing this issue elsewhere, I've tried setting input_shape=(None, *X_train.shape) and input_shape=X_train.shape; neither works. I also tried inserting a Reshape layer, model.add(Reshape(X_train.shape)), before the LSTM layer. It fixed that issue, but I got another one in its place:
InvalidArgumentError: Input to reshape is a tensor with 64 values, but the requested shape has 8000
...and I'm not even sure adding the Reshape layer is doing what I think it is doing. After all, why would reshaping the data to its own shape fix anything? Something is happening with my data that I just don't understand.
Also, I'm using this for time series analysis (stock returns), so I would think the LSTM model should be stateful and temporal. Would I need to move the timestamp index into its own column in the pandas dataframe before converting to a tensor?
Unfortunately, I'm obligated to use tensorflow v1.15, as this is being developed on the QuantConnect platform and they presumably won't be updating the library any time soon.
EDIT: I've made a bit of progress by using TimeseriesGenerator, but now I'm getting the following error (which returns no results on Google):
KeyError: 'No key found for either mapped or original key. Mapped Key: []; Original Key: []'
Code below (I'm sure I'm using the input_shape arguments incorrectly):
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator
train = TimeseriesGenerator(X_train, y_train, 1, batch_size=batch_size)
validation = TimeseriesGenerator(X_validation, y_validation, 1, batch_size=batch_size)
test = TimeseriesGenerator(X_test, y_test, 1, batch_size=batch_size)
model = Sequential(name='Expected Equilibrium Log Return')
model.add(LSTM(256, name='LSTM', stateful=True, batch_input_shape=(1, batch_size, X_train.shape[1]), input_shape=(1, X_train.shape[1])))
model.add(Dense(1, name='Output'))
model.compile(loss='logcosh', metrics=['mean_absolute_percentage_error'], optimizer='Adam', sample_weight_mode='temporal')
print(model.summary())
model.fit_generator(train, epochs=10, validation_data=validation, verbose=1)
loss, accuracy = model.evaluate_generator(test, verbose=0)
print(f'Model Accuracy: {accuracy}')
Turns out this specific issue relates to a patch that QuantConnect made to pandas dataframes, which interfered with the older version of tensorflow/keras.
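For anyone who hits the original ndim error outside QuantConnect: an LSTM layer expects input of shape (samples, timesteps, features), so a 2-column dataframe has to gain a timesteps axis before it is fed in. A minimal sketch with synthetic data (my own example, not from the platform) that runs on tensorflow 1.15's tf.keras:
import numpy as np
import tensorflow as tf
# 500 samples, 2 features, treated as sequences of a single timestep
X = np.random.rand(500, 2).astype('float32')
y = np.random.rand(500).astype('float32')
X = X.reshape((X.shape[0], 1, X.shape[1]))  # (samples, timesteps=1, features=2)
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(1, 2)),
    tf.keras.layers.Dense(1)
])
model.compile(loss='logcosh', optimizer='adam')
model.fit(X, y, epochs=2, batch_size=32, verbose=1)
For a real temporal model you would use more than one timestep per sample, which is exactly what TimeseriesGenerator builds for you.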

Deep Learning - Keep getting low accuracy

I'm new to Python (although not to programming - I'm usually programming in JavaScript) and I'm very interested in AI development.
Recently I've been trying to develop a deep learning algorithm by following this article.
My goal is to predict a set of 7 numbers, based on a CSV file that contains a large list, with each row having 7 numbers as well. The order of the list matters.
I ended up having the following code:
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import train_test_split
from numpy import loadtxt, random
seed = 7  # fixed seed for reproducibility
random.seed(seed)
dataset = loadtxt("data/Lotto.csv", delimiter=",", skiprows=1)
X = dataset[:, 0:7]
Y = dataset[:, 6]
(X_train, X_test, Y_train, Y_test) = train_test_split(X, Y, test_size=0.33, random_state=4)
model = Sequential()
model.add(Dense(8, input_dim=7, kernel_initializer="uniform", activation="relu"))
model.add(Dense(6, kernel_initializer="uniform", activation="relu"))
model.add(Dense(1, kernel_initializer="uniform", activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=100, batch_size=5, shuffle=False)
scores = model.evaluate(X_test, Y_test)
print("Accuracy: %.2f%%" %(scores[1] * 100))
After running it in Google Colaboratory, I'm not getting any errors, but I noticed that the loss doesn't change from epoch to epoch, and as a result I keep getting low accuracy (~6%).
What am I doing wrong?
Try changing the optimizer to RMSprop with a learning rate of around 0.0001.
RMSprop usually does better than most optimizers here, giving higher accuracy and lower loss. You could alternatively try SGD, which is also a good optimizer.
Also increase the number of parameters: more trainable parameters let the model fit the training data more closely, which can give a more accurate prediction.
You could update the code to tensorflow 2.x and change it to:
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from sklearn.model_selection import train_test_split
from numpy import loadtxt, random
#Rest of the code
.......
.......
.......
model = Sequential()
model.add(Dense(64, input_shape=(7,), activation='relu', kernel_initializer='uniform'))
model.add(Dense(64, kernel_initializer="uniform", activation="relu"))
model.add(Dense(1, kernel_initializer="uniform", activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.0001), metrics=["accuracy"])
model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=100, batch_size=5, shuffle=False)
scores = model.evaluate(X_test, Y_test)
print("Accuracy: %.2f%%" %(scores[1] * 100))
Correct me if I'm wrong, but by the looks of it your input is a list of 7 numbers and you want to output the 7th number in that list. By using a sigmoid activation in your last layer, you're restricting the model output to the interval (0, 1). Are you sure your data is in this interval?
Also, your model is way too complicated for that task. You really only need one dense layer, without an activation or bias, to do this.
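A sketch of that suggestion (my reading of it, assuming the target really is the 7th input column, as the slicing Y = dataset[:, 6] implies): a single linear Dense layer trained with a regression loss can learn to copy that column exactly.
import numpy as np
import tensorflow as tf
X = np.random.rand(1000, 7)  # stand-in for the CSV rows
Y = X[:, 6]                  # the 7th number, mirroring the original slicing
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, use_bias=False, input_shape=(7,))  # pure linear map
])
model.compile(loss='mse', optimizer='adam')
model.fit(X, Y, epochs=50, batch_size=32, verbose=0)
# the learned weight vector converges toward [0, 0, 0, 0, 0, 0, 1]
Note that with binary_crossentropy and an accuracy metric, as in the original code, a continuous target like this is not measured meaningfully, which is consistent with the flat loss the question describes.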

Invalid array shape with neural network using Keras?

Currently studying the 'Deep Learning with Python' book by François Chollet. I am very new to this, and I am getting this error despite following his code verbatim. Can anyone interpret the error message, or tell me what needs to be done to solve it? Any help would be greatly appreciated!
from keras.datasets import imdb
import numpy as np
from keras import models
from keras import layers
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
def vectorize_sequences(sequences, dimension=10000):
    # multi-hot encode each review: set index j to 1.0 if word j occurs in it
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results
x_train = vectorize_sequences(train_data)
y_train = vectorize_sequences(test_data)
x_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
Edit: Here is the error that I am getting: [screenshot omitted]
I tested your code and found that x_test was not defined (and the vectorized x_train was immediately overwritten with the labels). I think you meant to vectorize and assign as follows; with this code it worked:
x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
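As a quick sanity check (my own addition, not from the original answer), printing the shapes catches this kind of mix-up early; the IMDB split has 25,000 reviews on each side:
print(x_train.shape, y_train.shape)  # expect (25000, 10000) and (25000,)
print(x_test.shape, y_test.shape)    # expect (25000, 10000) and (25000,)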

Really bad training/test accuracy with a neural network in keras

I'm building a neural network for the "Default of credit card clients" dataset from http://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients.
But the accuracy of my models is pretty bad, worse than if I predicted all zeros. I have already done some research, oversampled to correct the class imbalance, and changed the optimizer, because adam was not increasing the accuracy.
What else could I do?
import pandas
import numpy
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
import keras
from imblearn.over_sampling import SMOTE
seed = 8
numpy.random.seed(seed)
base = pandas.read_csv('base_nao_trabalhada.csv')
train, test = train_test_split(base, test_size = 0.2)
train=train.values
test=test.values
X_train = train[:,1:23]
Y_train = train[:,24]
X_test = test[:,1:23]
Y_test = test[:,24]
sm = SMOTE(kind='regular')
X_resampled, Y_resampled = sm.fit_sample(X_train, Y_train)
# Model Creation
model = Sequential()
model.add(Dense(40, input_dim=22, init='uniform', activation='relu'))
model.add(Dense(4, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
#activation='relu'
opt = keras.optimizers.SGD(lr=0.000001)
# Compile model
model.compile(loss='binary_crossentropy', optimizer=opt , metrics=['accuracy'])
#loss=binary_crossentropy
#optimizer='adam'
# creating .fit
model.fit(X_resampled, Y_resampled, nb_epoch=10000, batch_size=30)
# evaluate the model
scores = model.evaluate(X_test, Y_test)
print ()
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
