The following code gives me an input error and I cannot figure it out.
import tensorflow as tf
import neural_structured_learning as nsl
.
.
.
b_size = 132
m = tf.keras.Sequential()
m.add(tf.keras.layers.Dense(980, activation = 'relu', input_shape = (2206,2,)))
m.add(tf.keras.layers.Dense(560, activation = 'relu'))
m.add(tf.keras.layers.Dense(10, activation = 'softmax'))
adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.5)
adv_model = nsl.keras.AdversarialRegularization(m, adv_config=adv_config)
adv_model.compile(optimizer = "adam",
loss = "sparse_categorical_crossentropy",
metrics = ['accuracy'])
adv_model.fit({"feature" : x_Train, "label" : y}, epochs = 50, batch_size=b_size)
My x_Train has shape (5002, 2206, 2), i.e., 5002 samples of size (2206, 2). I have tried adding a Flatten() layer at the beginning, but it gives me an object of type 'NoneType' has no len() error, even though this works perfectly with plain tf.keras. I have also tried different shapes for the input, but none of them work. Depending on what I try, it throws one of the following errors:
KeyError: 'dense_115_input'
ValueError: Input 0 of layer sequential_40 is incompatible with the layer: expected axis -1 of input shape to have value 2206 but received input with shape [None, 2206, 2]
TypeError: object of type 'NoneType' has no len()
To train an NSL model with an input dictionary (like your {"feature" : x_Train, "label" : y}), the base model has to know which feature(s) in the dictionary to look at.
One way to specify the feature names is to add an Input layer:
m = tf.keras.Sequential()
m.add(tf.keras.Input(name="feature", shape=(2206, 2)))
Also, as this answer pointed out, the input feature has to be flattened before being passed to dense layers:
m.add(tf.keras.layers.Flatten())
m.add(tf.keras.layers.Dense(...))
If you want to use a dense layer, the input should have shape (5002, 2206*2), i.e., a matrix.
Maybe the simplest solution is to reshape your input x_Train before calling fit.
Alternatively, you can use a TimeDistributed layer (see here), but the usage of this kind of layer depends on the physical meaning behind the input dimensions. Basically, TimeDistributed applies a certain operation many times, in your case twice.
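Putting the named Input layer and the Flatten layer together, a minimal sketch could look like the following; the dummy arrays only stand in for the x_Train and y from the question:
import numpy as np
import tensorflow as tf
import neural_structured_learning as nsl

# Dummy data with the shapes from the question; replace with the real x_Train / y.
x_Train = np.random.rand(5002, 2206, 2).astype("float32")
y = np.random.randint(0, 10, size=(5002,))

# A named Input layer so AdversarialRegularization can find the "feature" key,
# plus Flatten so each (2206, 2) sample becomes a vector before the Dense stack.
m = tf.keras.Sequential([
    tf.keras.Input(name="feature", shape=(2206, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(980, activation='relu'),
    tf.keras.layers.Dense(560, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.5)
adv_model = nsl.keras.AdversarialRegularization(m, adv_config=adv_config)
adv_model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=['accuracy'])
adv_model.fit({"feature": x_Train, "label": y}, epochs=1, batch_size=132)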
Hoping this can help you.
Related
I have 9 features and one output variable to be predicted; the window size is 5.
The code works fine without the TimeDistributed wrapper.
Model input shape: feature_tensor.shape = (1649, 5, 9)
Model output shape: y_train.shape = (1649,)
This is my code:
#Build the network model
act_fn='relu'
modelq = Sequential()
modelq.add(TimeDistributed(Conv1D(filters=105, kernel_size=2, activation=act_fn, input_shape=(None, feature_tensor.shape[1],feature_tensor.shape[2]))))
modelq.add(TimeDistributed(AveragePooling1D(pool_size=1)))
modelq.add(TimeDistributed(Flatten()))
modelq.add(LSTM(50))
modelq.add(Dense(64, activation=act_fn))
modelq.add(Dense(1))
#Compile the model
modelq.compile(optimizer='adam', loss='mean_squared_error')
modelq.fit(feature_tensor, y_train ,batch_size=1, epochs=epoch_count)
The error statement is:
ValueError: Input 0 of layer conv1d is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: (5, 9)
I feel like there is something wrong with the dimensionality of feature_tensor during model fitting, i.e., the last command, but I don't know what's wrong with it :(
Your intuition is right: the problem is the dimensionality of feature_tensor. If you take a look at the documentation of TimeDistributed, you see an example with images and Conv2D layers. There the input has to have the shape (batch_size, time steps, x_dim, y_dim, channels). Since you use time series, you need (batch_size, time steps, 1, features). For example, you can reshape your data with numpy:
feature_tensor = np.reshape(feature_tensor, (-1, 5, 1, 9))
However, I am not sure it is useful to combine Conv1D with TimeDistributed, since in that case you apply the convolution only across the features and not over temporally contiguous values, which is where a 1D convolution should be applied.
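If you do want the convolution to run along the time axis instead, a minimal sketch of that alternative (dropping TimeDistributed and keeping the LSTM) could look like the following; the dummy data only mirrors the shapes reported in the question:
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, AveragePooling1D, LSTM, Dense

# Dummy data with the shapes from the question: 1649 windows of 5 time steps,
# 9 features each, and one regression target per window.
feature_tensor = np.random.rand(1649, 5, 9).astype("float32")
y_train = np.random.rand(1649).astype("float32")

act_fn = 'relu'
modelq = Sequential([
    # The convolution now slides over the 5 time steps (the temporal axis).
    Conv1D(filters=105, kernel_size=2, activation=act_fn, input_shape=(5, 9)),
    AveragePooling1D(pool_size=2),
    LSTM(50),
    Dense(64, activation=act_fn),
    Dense(1),
])
modelq.compile(optimizer='adam', loss='mean_squared_error')
modelq.fit(feature_tensor, y_train, batch_size=32, epochs=2)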
I'm pretty new to NLP and I want to classify words by their language (basically my model should tell me whether a word is French, English, Spanish, and so on).
When I fit the following model I get a dimension error. "dataset" contains the words; it is a padded tensor of size (1550, 19). "y" contains the languages; it is also a padded tensor, of size (1550, 10).
np.random.seed(42)
tf.random.set_seed(42)
from tensorflow.keras.layers import LSTM, GRU, Input, Embedding, Dense
input = Input(shape=[None])
z = Embedding(max_id + 1, 128, input_shape=[None], mask_zero=True)(input)
z = GRU(128)(z)
output = Dense(18, activation='softmax')(z)
model = keras.models.Model(input, output)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
h = model.fit(dataset, y, epochs=5)
ValueError: Shapes (None, 10) and (None, 18) are incompatible
Do you see where the problem is?
Thanks!
The message tells you that the shapes are not compatible; they need to match. I would have put this as a comment, but I can't due to my reputation, so I answered directly. I am not sure if it works, but have you tried:
output = Dense(10, activation='softmax')(z)
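As a quick sanity check, here is a minimal sketch of the model with that change, using dummy data of the shapes given in the question (the value of max_id is an assumption, since it is not shown):
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import GRU, Input, Embedding, Dense

# Dummy data mirroring the question: 1550 padded words of length 19 and
# one-hot language labels with 10 columns. max_id is assumed here.
max_id = 40
dataset = np.random.randint(1, max_id + 1, size=(1550, 19))
y = tf.keras.utils.to_categorical(np.random.randint(0, 10, size=1550), 10)

inp = Input(shape=[None])
z = Embedding(max_id + 1, 128, mask_zero=True)(inp)
z = GRU(128)(z)
# The output width must match the label width: y has 10 columns, so Dense(10).
output = Dense(10, activation='softmax')(z)
model = keras.models.Model(inp, output)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(dataset, y, epochs=1)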
I'm trying to set up a Conv1D layer as the input layer in Keras.
The dataset is 1000 timesteps, and each timestep has 1 feature.
After reading a bunch of answers I reshaped my dataset to be in the following format of (n_samples, timesteps, features), which corresponds to the following in my case:
train_data = (78968, 1000, 1)
test_data = (19742, 1000, 1)
train_target = (78968,)
test_target = (19742,)
I then create and compile the model using the following lines:
model = Sequential()
model.add(Conv1D(64, (4), input_shape = (1000,1) ))
model.add(MaxPooling1D(pool_size=2))
model.add(Dense(1))
optimizer = opt = Adam(decay = 1.000-0.999)
model.compile(optimizer=optimizer,
loss='mean_squared_error',
metrics=['mean_absolute_error','mean_squared_error'])
Then I try to fit. Note that train_target and test_target are pandas Series, so I'm calling .values to convert them to numpy arrays; I suspect there might be an issue there?
training = model.fit(train_data,
train_target.values,
validation_data=(test_data, test_target.values),
epochs=epochs,
verbose=1)
The model compiles, but I get an error when I try to fit:
Error when checking target: expected dense_4 to have 3 dimensions,
but got array with shape (78968, 1)
I've tried every combination of reshaping the data and can't get this to work.
I've previously used Keras with only dense layers for a different project, where input_dim was specified instead of input_shape, so I'm not sure what I'm doing wrong here. I've read almost every Stack Overflow question about data shape issues and I'm afraid the problem is elsewhere. Any help is appreciated, thank you.
Under the line model.add(MaxPooling1D(pool_size=2)), add the line model.add(Flatten()) and your problem will be solved. The Flatten layer converts the pooled output into the correct shape for the final Dense layer; please see this page for more information: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten
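For reference, a minimal sketch of the model with that one extra line (the optimizer arguments from the question are omitted here for brevity):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense
from tensorflow.keras.optimizers import Adam

model = Sequential()
model.add(Conv1D(64, 4, input_shape=(1000, 1)))
model.add(MaxPooling1D(pool_size=2))
# Flatten collapses the (timesteps, channels) output of the pooling layer so
# the final Dense layer produces one value per sample instead of a 3-D tensor.
model.add(Flatten())
model.add(Dense(1))

model.compile(optimizer=Adam(),
              loss='mean_squared_error',
              metrics=['mean_absolute_error', 'mean_squared_error'])
model.summary()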
I'm trying to concatenate a few flatten layers and one input layer:
navigation_flatten = Flatten()(navigator_conv)
# speed is float (0.0-1.0)
speed_input = keras.layers.Input(shape=(1,))
images_output = Concatenate()([dashcam_flatten, navigation_flatten])
image_and_speed = Concatenate()([speed_input, images_output])
Then I build and compile the model and check the output shapes:
model = keras.models.Model([Dashcam_input, RADAR_INPUT], image_and_speed)
model.compile(loss=MSE,
optimizer=keras.optimizers.Adam(lr=0.0001),
metrics=['accuracy'])
print(model.summary())
And get this error:
ValueError: Graph disconnected: cannot obtain value for tensor
Tensor("input_3:0", shape=(?, 1), dtype=float32) at layer "input_3".
The following previous layers were accessed without issue: ['input_2',
'batch_normalization_2', 'input_1', 'conv2d_8',
'batch_normalization_1', 'max_pooling2d_4', 'conv2d_1',
'batch_normalization_3', 'conv2d_2', 'conv2d_9', 'conv2d_3',
'batch_normalization_4', 'max_pooling2d_1', 'conv2d_10', 'conv2d_4',
'batch_normalization_5', 'conv2d_5', 'conv2d_11', 'max_pooling2d_2',
'batch_normalization_6', 'conv2d_6', 'conv2d_12', 'conv2d_7',
'max_pooling2d_5', 'max_pooling2d_3', 'flatten_1', 'flatten_2']
How do I correctly concatenate the flatten layers with the input layer?
The problem is that you haven't included speed_input in the inputs of your model. Adding it will solve the issue:
model = keras.models.Model([Dashcam_input, RADAR_INPUT, speed_input], image_and_speed)
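A self-contained sketch of the idea; the convolutional branches below are hypothetical stand-ins, since the dashcam and radar parts of the network are not shown in the question:
from tensorflow import keras
from tensorflow.keras.layers import Input, Conv2D, Flatten, Concatenate

# Hypothetical stand-ins for the dashcam and radar branches.
Dashcam_input = Input(shape=(64, 64, 3))
RADAR_INPUT = Input(shape=(64, 64, 1))
speed_input = Input(shape=(1,))

dashcam_flatten = Flatten()(Conv2D(8, 3)(Dashcam_input))
navigation_flatten = Flatten()(Conv2D(8, 3)(RADAR_INPUT))

images_output = Concatenate()([dashcam_flatten, navigation_flatten])
image_and_speed = Concatenate()([speed_input, images_output])

# Every Input feeding the graph must appear in the model's inputs; leaving
# speed_input out is what triggers the "Graph disconnected" error.
model = keras.models.Model([Dashcam_input, RADAR_INPUT, speed_input], image_and_speed)
model.summary()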
I'm trying to assign one of two classes (positive/negative) to audio using a CNN with Keras. My model should accept inputs of varying length (numbers of frames), where each frame contains 41 features, but I'm struggling with the input size. Bear in mind that I haven't acquired the full dataset yet, so I just mocked some meaningless data to check whether the network works at all.
According to the documentation (https://keras.io/layers/convolutional/) and my best understanding, Conv1D can handle varying lengths if the first element of the input_shape tuple is None. The shape of the variable containing the input data, X_train.shape, is (4, 497, 41).
data = pd.read_csv('output_file.csv', sep=';')
featureCount = data.values.shape[1]
#mocks because full data is not available yet
Y_train = np.asarray([1, 0, 1, 0])
X_train = np.asarray(
[np.array(data.values, copy=True), np.array(data.values, copy=True), np.array(data.values, copy=True),
np.array(data.values, copy=True)])
# variable length with 41 features
model = keras.models.Sequential()
model.add(keras.layers.Conv1D(100, 5, activation='relu', input_shape=(None, featureCount)))
model.add(keras.layers.GlobalMaxPooling1D())
model.add(keras.layers.Dense(10, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
model.fit(X_train, Y_train, epochs=10, verbose=False, validation_data=(np.array(data.values, copy=True), [1]))
This code produces the error:
ValueError: Error when checking input: expected conv1d_input to have 3 dimensions, but got array with shape (497, 41). So it appears that the first dimension was cut off, since it holds the training samples (that part seems correct to me); what bothered me is the required dimensionality: why is it 3?
After searching for the answer I stumbled onto Dimension of shape in conv1D and followed it by adding a last dimension (using X_train = np.expand_dims(X_train, axis=3)) that contains only a single value, but I ended up with another, similar error:
ValueError: Error when checking input: expected conv1d_input to have 3 dimensions, but got array with shape (4, 497, 41, 1). Now it seems that the first dimension, which was previously treated as the list of samples, is part of the actual data.
I also tried fiddling with the input_shape parameter, to no avail, and using a Reshape layer, but I ended up fighting with sizes there as well.
What should I do to satisfy the required shape? How should I prepare the data for processing?
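For comparison, here is a minimal sketch of dummy data in the (samples, frames, features) layout that a Conv1D layer with input_shape=(None, featureCount) accepts; note that the validation input in the fit call above, data.values, is 2-D with shape (497, 41), which matches the shape in the first error message, so the validation pair presumably also needs a batch dimension:
import numpy as np
from tensorflow import keras

featureCount = 41

# Dummy training data: 4 samples, 497 frames each, 41 features per frame.
X_train = np.random.rand(4, 497, featureCount).astype("float32")
Y_train = np.asarray([1, 0, 1, 0])

# Validation data in the same 3-D layout (a batch axis even for one sample).
X_val = np.random.rand(1, 497, featureCount).astype("float32")
Y_val = np.asarray([1])

model = keras.models.Sequential([
    keras.layers.Conv1D(100, 5, activation='relu', input_shape=(None, featureCount)),
    keras.layers.GlobalMaxPooling1D(),
    keras.layers.Dense(10, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, Y_train, epochs=1, validation_data=(X_val, Y_val))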