Setting up Keras options for LSTM modelling - Python

I am trying to set up a process for forecasting a value. Currently, I can't understand what the issue is in the code below:
in_neurons = 1
out_neurons = 1
hidden_neurons = 20
nb_features = 9
# retrieve data
y_train = train.pop(target).values
X_train = pd.concat([train[['QTR_HR_START', 'QTR_HR_END', 'HOLIDAY_RANK_', 'SPECIAL_EVENT_RANK_',
                            'IS_AM', 'IS_TOP_RANKED', 'AWARDS_WINS_ANY', 'YEARS_SINCE_RELEASE']],
                     pd.DataFrame({'DATETIME': pd.DatetimeIndex(train['DATETIME']).astype(np.int64)})])
X_train = X_train.values
y_test = test.pop(target).values
X_test = pd.concat([test[['QTR_HR_START', 'QTR_HR_END', 'HOLIDAY_RANK_', 'SPECIAL_EVENT_RANK_',
                          'IS_AM', 'IS_TOP_RANKED', 'AWARDS_WINS_ANY', 'YEARS_SINCE_RELEASE']],
                    pd.DataFrame({'DATETIME': pd.DatetimeIndex(test['DATETIME']).astype(np.int64)})])
X_test = X_test.values
model = Sequential()
model.add(TimeDistributed(Dense(8, input_shape=(X_train.shape[0], 100, nb_features), activation='softmax')))
model.add(LSTM(4, dropout_W=0.2, dropout_U=0.2))
model.add(Dense(1))
model.add(Activation("sigmoid"))
model.compile(loss="mean_squared_error", optimizer="rmsprop", metrics=['accuracy'])
After running the code, I got an exception:
raise Exception('The first layer in a Sequential model must '
Exception: The first layer in a Sequential model must get an input_shape or batch_input_shape argument.
Please advise where I am going wrong.
EDIT 1: I configured the model as described in the official documentation - http://keras.io/layers/recurrent/
model.add(LSTM(32, input_dim=nb_features, input_length=100))
model.compile(loss="mean_squared_error", optimizer="rmsprop", metrics=['accuracy'])
Exception: Error when checking model input: expected lstm_input_1 to have 3 dimensions, but got array with shape (48614, 9)

It's old, but I'm posting for future use. Keras requires 3D input data, as stated in the error: [samples, time steps, features]. Even though your array is (48614, 9), Keras treats it as 2D - [samples, features]. To fix it, do something like this:
import numpy

def reshape_dataset(train):
    # add a time-steps axis of length 1: (samples, features) -> (samples, 1, features)
    trainX = numpy.reshape(train, (train.shape[0], 1, train.shape[1]))
    return numpy.array(trainX)

x = reshape_dataset(your_dataset)  # your_dataset has shape (48614, 9)
Now x has shape (48614, 1, 9), which is [samples, time steps, features] - 3D.
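With the data in that shape, the LSTM from EDIT 1 can be declared to match it. A minimal sketch, assuming one time step and the 9 features from the error message:

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
# input_shape is (time steps, features), matching x of shape (48614, 1, 9)
model.add(LSTM(32, input_shape=(1, 9)))
model.add(Dense(1))
model.compile(loss="mean_squared_error", optimizer="rmsprop")

model.fit(x, y_train) will then receive the 3D input Keras expects.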

Related

What are the problems here in TensorFlow with a Keras CNN?

I'm asking what I should change in the code to prevent the errors.
I use a Keras Conv1D model on a raw dataset where X_train has shape (142315, 23) and Y_train has shape (142315,).
My code is:
n_timesteps = X_train.shape[1]  # 23
input_layer = tensorflow.keras.layers.Input(shape=(n_timesteps, 1))
conv_layer1 = tensorflow.keras.layers.Conv1D(filters=5,
                                             kernel_size=7,
                                             activation="relu")(input_layer)
max_pool1 = tensorflow.keras.layers.MaxPooling1D(pool_size=2, strides=5)(conv_layer1)
conv_layer2 = tensorflow.keras.layers.Conv1D(filters=3,
                                             kernel_size=3,
                                             activation="relu")(max_pool1)
flatten_layer = tensorflow.keras.layers.Flatten()(conv_layer2)
dense_layer = tensorflow.keras.layers.Dense(15, activation="relu")(flatten_layer)
output_layer = tensorflow.keras.layers.Dense(6, activation="softmax")(dense_layer)
model = tensorflow.keras.Model(inputs=input_layer, outputs=output_layer)
# Print a string summary of the network.
model.summary()
After that I use an optimization technique for the hyperparameters, and when returning the details of the best solution it prints this error. Can anyone help me?
The error:
ValueError: Shapes (142315,) and (142315, 2) are incompatible
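One common cause of this mismatch is a loss that expects one-hot targets being fed integer labels. A sketch of the two usual remedies, assuming Y_train holds integer class indices and reusing the model above (the 'adam' optimizer is an illustrative choice, not taken from the question):

import tensorflow

# Option 1: keep the integer labels (shape (142315,)) and use the sparse loss.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Option 2: one-hot encode the labels to match the Dense(6, softmax) output,
# then compile with the non-sparse loss.
Y_train_onehot = tensorflow.keras.utils.to_categorical(Y_train, num_classes=6)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])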

Python Bayesian Optimization - inhomogeneous shape after 1 dimensions

I'm attempting to perform Bayesian optimization on deep learning models to speed up hyperparameter tuning compared with grid search. I found code at https://www.analyticsvidhya.com/blog/2021/05/tuning-the-hyperparameters-and-layers-of-neural-network-deep-learning/ which illustrates a working example, but I cannot seem to apply it to my data. My data contains 33 features, and the hyperparameters I am trying to optimize are 'number of neurons', 'activation function', 'learning_rate' (for optimizer Adam), 'decay' (for optimizer Adam), 'batch_size', and 'number of epochs'. The error shown in my code is: ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (7,) + inhomogeneous part. Looking at Stack Overflow solutions to similar problems, e.g. https://stackoverflow.com/questions/67183501/setting-an-array-element-with-a-sequence-requested-array-has-an-inhomogeneous-sh, it appears there may be a problem with input shapes, or with values as floats. I am still unsure after looking at these pages, which is why I have posted the question here.
Below is the code and the error.
from keras.layers import LeakyReLU
LeakyReLU = LeakyReLU(alpha=0.1)
from bayes_opt import BayesianOptimization
from sklearn.model_selection import StratifiedKFold

def nn_cl_bo(neurons, activation, optimizer, learning_rate, batch_size, epochs):
    optimizerL = ['Adam']
    optimizerD = {'Adam': tf.keras.optimizers.Adam(lr=learning_rate)}
    activationL = ['relu', 'sigmoid', 'softplus', 'softsign', 'tanh', 'selu',
                   'elu', 'exponential', LeakyReLU, 'relu']
    neurons = round(neurons)
    activation = activationL[round(activation)]
    batch_size = round(batch_size)
    epochs = round(epochs)
    def nn_cl_fun():
        opt = tf.keras.optimizers.Adam(lr=learning_rate)
        nn = Sequential()
        nn.add(LSTM(units=neurons, input_shape=(X_train.shape[1], X_train.shape[2]),
                    activation=activation))
        nn.add(Dense(1, activation='sigmoid'))
        nn.compile(loss='mae', optimizer=opt, metrics=['mae'])
        return nn
    es = EarlyStopping(monitor='mae', mode='max', verbose=0, patience=20)
    nn = KerasClassifier(build_fn=nn_cl_fun, epochs=epochs, batch_size=batch_size,
                         verbose=0)
    kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=123)
    score = cross_val_score(nn, X_train, y_train, scoring=score_acc, cv=kfold,
                            fit_params={'callbacks': [es]}).mean()
    return score

params_nn = {
    'neurons': (10, 100),
    'activation': (0, 9),
    'optimizer': (0),
    'learning_rate': (0.01, 1),
    'decay': (0, 0.1),
    'batch_size': (7, 112),
    'epochs': (20, 100)
}
# Run Bayesian Optimization
nn_bo = BayesianOptimization(nn_cl_bo, params_nn, random_state=111)
nn_bo.maximize(init_points=25, n_iter=4)
ValueError                                Traceback (most recent call last)
<ipython-input-67-4f3ad5be1912> in <module>()
     10 }
     11 # Run Bayesian Optimization
---> 12 nn_bo = BayesianOptimization(nn_cl_bo, params_nn, random_state=111)
     13 nn_bo.maximize(init_points=25, n_iter=4)

1 frames
/usr/local/lib/python3.7/dist-packages/bayes_opt/target_space.py in __init__(self, target_func, pbounds, random_state)
     47         self._bounds = np.array(
     48             [item[1] for item in sorted(pbounds.items(), key=lambda x: x[0])],
---> 49             dtype=np.float
     50         )
     51
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (7,) + inhomogeneous part.
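The traceback shows target_space.py building a float array from the pbounds values, so every entry must be a numeric (low, high) pair; 'optimizer': (0) is just the int 0 (parentheses alone do not make a tuple), which is what makes the array inhomogeneous. A sketch of a pbounds dictionary that bayes_opt can consume, keeping all seven hyperparameters:

# Every bound must be a two-element numeric tuple so that
# np.array(..., dtype=float) yields a homogeneous (n, 2) array.
params_nn = {
    'neurons': (10, 100),
    'activation': (0, 9),
    'optimizer': (0, 0),       # degenerate range: only one optimizer choice
    'learning_rate': (0.01, 1),
    'decay': (0, 0.1),
    'batch_size': (7, 112),
    'epochs': (20, 100),
}

# Note: every pbounds key is passed to the target function by name, so
# 'decay' must also appear in its signature, e.g.
# def nn_cl_bo(neurons, activation, optimizer, learning_rate, decay,
#              batch_size, epochs):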

Using fit_generator with multiple inputs gives error at output dense layer

In my case I am using a set of sequential features and also non-sequential features to train the model. Following is the architecture of my model:
Sequential features     -> LSTM -> Dense(1) --\
                                               \
                                                -- Dense -> Dense -> Dense(1) -> output
                                               /
Non-sequential features ----------------------/
I am using a data generator to produce batches of the sequential data. The batch size varies from batch to batch, and within one batch I keep the non-sequential feature fixed. Following is my data generator:
import numpy as np

def training_data_generator(raw_data):
    while True:
        for index, row in raw_data.iterrows():
            x_train, y_train = list(), list()
            feature1 = row['xxx']
            x_current_batch = []
            y_current_batch = []
            for j in range(yyy):
                x_current_batch.append(row['zz1'])
                y_current_batch.append(row['zz2'])
            x_train.append(x_current_batch)
            y_train.append(y_current_batch)
            x_train = np.array(x_train)
            y_train = np.array(y_train)
            yield [x_train, np.reshape(feature1, 1)], y_train
Note: the x_train and y_train sizes vary from batch to batch.
Following is my model implementation.
seq_input = Input(shape=(None, 3))
lstm_layer = LSTM(50)(seq_input)
dense_layer1 = Dense(1)(lstm_layer)
non_seq_input = Input(shape=(1,))
hybrid_model = concatenate([dense_layer1, non_seq_input])
hidden1 = Dense(10, activation = 'relu')(hybrid_model)
hidden2 = Dense(10, activation='relu')(hidden1)
final_output = Dense(1, activation='sigmoid')(hidden2)
model = Model(inputs = [seq_input, non_seq_input], outputs = final_output)
model.compile(loss='mse',optimizer='adam')
model.fit_generator(training_data_generator(flatten), steps_per_epoch=5017,
                    epochs=const.NUMBER_OF_EPOCHS, verbose=1)
I am getting an error at the output dense layer:
ValueError: Error when checking target: expected dense_4 to have shape (1,) but got array with shape (4,)
I think the last layer is receiving the whole output of the generator rather than one sample at a time.
What is the reason for this issue? I appreciate your insights.
Your generator yields a target with 4 values per sample, but since you've declared your output as a Dense layer of size 1, it crashes.
What you can do is change your output Dense layer to size 4 and then manually convert its output to one value.
Hopefully this answers your question.
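A minimal sketch of that change, reusing the layer names from the question and assuming the generator keeps yielding 4 target values per sample:

# Widen the output layer to match the 4-value targets from the generator.
final_output = Dense(4, activation='sigmoid')(hidden2)
model = Model(inputs=[seq_input, non_seq_input], outputs=final_output)
model.compile(loss='mse', optimizer='adam')

# After predicting, the 4 outputs can be collapsed to one value manually,
# for example by averaging them (an illustrative choice, not the only one):
# preds = model.predict([seq_batch, non_seq_batch])  # shape (n, 4)
# single_value = preds.mean(axis=1)                  # shape (n,)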

Tensorflow: bag of words text classification: Cannot feed value of shape

I've nicked this code from https://sourcedexter.com/tensorflow-text-classification-python/ to try to predict whether a given question belongs to one of two categories.
However, I'm getting the following error:
Cannot feed value of shape (1, 1666) for Tensor 'TargetsData/Y:0', which has shape '(?, 2)'
Relevant code below:
# train_x contains the Bag of words and train_y contains the label/ category
train_x = list(training[:,0])
train_y = list(training[:,1])
#reset underlying graph data
tf.reset_default_graph()
#Build neural network
net = tflearn.input_data(shape=[None,len(train_x[0])])
#layer?
net = tflearn.fully_connected(net,8)
#layer?
net = tflearn.fully_connected(net,8)
#output layer
net = tflearn.fully_connected(net, len(train_y[0]),activation='softmax')
net = tflearn.regression(net)
#define model and set up tensorboard
model = tflearn.DNN(net, tensorboard_dir = 'tflearn_logs')
#start training (grad descent algo)
model.fit(train_x, train_x, n_epoch = 1000, batch_size=1, show_metric = True)
model.save('model.tflearn')
How do I fix it?
This is a common shape-mismatch error, and the message is self-explanatory: your target tensor has shape [None, 2], but you are feeding it an array of shape (1, 1666).
Your model.fit() should be:
model.fit(train_x, train_y, n_epoch=1000, batch_size=8, show_metric=True)
Note the second parameter, which is the target: you passed train_x instead of train_y, which says that your inputs are your labels. That is not true, and that is why it is throwing the error.
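A quick sanity check one can run before fitting, assuming the shapes implied by the error message (1666 bag-of-words features, 2 classes):

import numpy as np

# The input width must match input_data(shape=[None, len(train_x[0])]),
# and the target width must match the 2-unit softmax output layer.
print(np.asarray(train_x).shape)  # expected: (num_samples, 1666)
print(np.asarray(train_y).shape)  # expected: (num_samples, 2)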

expected dense_1 to have 2 dimensions, but got array with shape (308, 1, 6)

I'm trying to use Conv1D for the first time for multiclass classification of time series data and my model keeps throwing this error when I use it.
import numpy as np
import os
import keras
from keras.models import Sequential
from keras.layers import Conv1D, Dense, TimeDistributed, MaxPooling1D, Flatten
# fix random seed for reproducibility
np.random.seed(7)
dataset1 = np.genfromtxt(os.path.join('data', 'norm_cellcycle_384_17.txt'), delimiter=',', dtype=None)
data = dataset1[1:]
# extract columns
genes = data[:,0]
y_all = data[:,1].astype(int)
x_all = data[:,2:-1].astype(float)
# deleted this line when using sparse_categorical_crossentropy
# 384x6
y_all = keras.utils.to_categorical(y_all)
# 5
num_classes = np.unique(y_all).shape[0]
# split entire data into train set and test set
validation_split = 0.2
val_idx = np.random.choice(range(x_all.shape[0]), int(validation_split*x_all.shape[0]), replace=False)
train_idx = [x for x in range(x_all.shape[0]) if x not in val_idx]
x_train = x_all[train_idx]
y_train = y_all[train_idx]
# 308x17x1
x_train = x_train[:, :, np.newaxis]
# 308x1
y_train = y_train[:,np.newaxis]
x_test = x_all[val_idx]
y_test = y_all[val_idx]
# deleted this line when using sparse_categorical_crossentropy
y_test = keras.utils.to_categorical(y_test)
# 76x17x1
x_test = x_test[:, :, np.newaxis]
# 76x1
y_test = y_test[:,np.newaxis]
print(x_train.shape[0],'train samples')
print(x_test.shape[0],'test samples')
# Create Model
# number of filters for 1D conv
nb_filter = 4
filter_length = 5
window = x_train.shape[1]
model = Sequential()
model.add(Conv1D(filters=nb_filter,kernel_size=filter_length,activation="relu", input_shape=(window,1)))
model.add(MaxPooling1D())
model.add(Conv1D(nb_filter=nb_filter, filter_length=filter_length, activation='relu'))
model.add(MaxPooling1D())
model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=25, batch_size=2, validation_data=(x_test, y_test))
I don't know why I get this error. When I use binary_crossentropy loss and no one-hot encoding for y_all, my model works, but it fails when I use one-hot encoding for y_all with categorical_crossentropy loss. When I don't use one-hot encoding, Keras throws an error telling me to change y_all to a binary matrix.
I don't even know where the (1, 6) is coming from in the array.
ValueError: Error when checking model target: expected dense_1 to have 2 dimensions, but got array with shape (308, 1, 6)
Please help! I've been stuck on this for many hours! I already went through all the related questions, but it still doesn't make sense.
Update: I now use sparse_categorical_crossentropy because it has integer support. I deleted the to_categorical lines from the above code and I get this new error:
InvalidArgumentError (see above for traceback): Received a label value of 5 which is outside the valid range of [0, 5). Label values: 2 5
[[Node: SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits = SparseSoftmaxCrossEntropyWithLogits[T=DT_FLOAT, Tlabels=DT_INT64, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape_1, Cast)]]
Requested sample of data:
,Main,Gp,c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13,c14,c15,c16,c17
YDL179w,1,-0.75808,-0.90319,-0.98935,-0.73995,-0.67193,-0.12777,-0.95307,-1.01656,0.79730,2.11688,1.98537,0.61591,0.56603,-0.13684,-0.52228,-0.05068,0.78823,
YLR079w,1,-0.48845,-0.70828,-0.47688,-0.65814,-0.45374,-0.47302,-0.71214,-1.02839,0.24048,3.11376,1.28952,0.44874,0.04379,-0.31104,-0.30332,-0.34575,0.82285,
YER111c,1,-0.42218,0.23887,1.84427,-0.02083,-0.61105,-0.65827,-0.79992,-0.39857,-0.09166,2.03314,1.58457,0.68744,0.14443,-0.72910,-1.46097,-0.82353,-0.51662,
YBR200w,1,0.09824,0.55258,-0.89641,-1.19111,-1.11744,-0.76133,0.09824,2.16120,1.46126,1.03148,0.67537,-0.33155,-0.60170,-1.39987,-0.42978,-0.15963,0.81045,
YPL209c,2,-0.65282,-0.32055,2.53702,2.00538,0.60982,0.51014,-0.55314,-1.01832,-0.78573,0.01173,0.07818,-0.05473,-0.22087,0.24432,-0.28732,-1.11801,-0.98510,
YJL074c,2,-0.81087,-0.19448,1.72941,0.59002,-0.53069,-0.25051,-0.92294,-0.92294,-0.53069,0.08570,1.87884,1.97223,0.45927,-0.36258,-0.34390,-1.07237,-0.77351,
YNL233w,2,-0.43997,0.66325,2.85098,0.74739,-0.42127,-0.47736,-0.79524,-0.80459,-0.48671,-0.21558,1.25226,1.01852,-0.10339,-0.56151,-0.96353,-0.46801,-0.79524,
YLR313c,2,-0.46611,0.42952,3.01689,1.13856,0.01902,-0.44123,-0.66514,-0.98856,-0.59050,-0.47855,0.84002,0.39220,0.50416,-0.50342,-0.82685,-0.64026,-0.73977,
YGR041w,2,-0.57187,-0.26687,1.10561,-0.38125,-0.68624,-0.26687,-0.87687,-1.18186,-0.80062,0.60999,2.09686,1.82998,1.14374,0.11437,-0.80062,-0.87687,-0.19062,
So I noticed that even though I know there are 5 classes in this dataset, as seen from the unique values in y_all, for some reason Keras's to_categorical thinks there are 6 classes.
# 384x6
y_all = keras.utils.to_categorical(y_all)
# 5
num_classes = np.unique(y_all).shape[0]
I don't know why that is. Keeping this in mind, I changed this line of code and my model began to run:
model.add(Dense(num_classes, activation='softmax'))
to
model.add(Dense(num_classes+1, activation='softmax'))
I still don't know why to_categorical behaves this way. Does anyone know?
to_categorical(x) in Keras encodes the given labels into n classes, where n = max(x) + 1, i.e. it covers the range [0, max(x)]. Since your labels run from 1 to 5, a column is still allocated for class 0, which is why you get 6 columns.
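A quick demonstration of that behaviour, assuming labels that start at 1 as in the dataset above:

import numpy as np
import keras

# Labels 1..5: to_categorical still allocates a column for class 0,
# so n = max(x) + 1 = 6 columns come back, not 5.
y = np.array([1, 2, 3, 4, 5])
print(keras.utils.to_categorical(y).shape)  # (5, 6)

# Shifting the labels to start at 0 yields the expected 5 columns:
print(keras.utils.to_categorical(y - 1).shape)  # (5, 5)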
