I'm trying to train an autoencoder with the following code:
encoder_input = keras.layers.Input(shape=(x_Train.shape[1]), name='img')
encoder_out = keras.layers.Dense(1, activation = "relu")(encoder_input)
encoder = keras.Model(encoder_input, encoder_out, name="encoder")
decoder_input = keras.layers.Dense(602896, activation = "relu")(encoder_out)
decoder_output = keras.layers.Reshape((769, 28, 28))(decoder_input)
opt = keras.optimizers.RMSprop(learning_rate=1e-3)
autoencoder = keras.Model(encoder_input, decoder_output, name = "autoencoder")
autoencoder.summary()
autoencoder.compile(opt, loss='mse')
autoencoder.fit(x_Train, x_Train, epochs=10, batch_size=64, validation_split = 0.1)
However, it returns the error:
"tensorflow:Model was constructed with shape (None, 28) for input KerasTensor(type_spec=TensorSpec(shape=(None, 28), dtype=tf.float32, name='img'), name='img', description="created by layer 'img'"), but it was called on an input with incompatible shape (None, 28, 28)."
I don't know how to deal with that or how to resize my input. My x_Train is an array with shape [769, 28, 28].
Could someone help me handle the error?
Thanks
Your input shape for your autoencoder is a little weird: your training data has a shape of 28x28, with 769 as the batch dimension, so the fix should look like this:
encoder_input = keras.layers.Input(shape=(28, 28), name='img')
encoder_out = keras.layers.Dense(1, activation = "relu")(encoder_input)
# For your decoder, you need to change a bit as well
decoder_input = keras.layers.Dense(784, activation="sigmoid")(encoder_out)  # flattened size: 28 x 28 = 784
decoder_output = keras.layers.Reshape((28, 28))(decoder_input)  # from there, reshape back to 28x28
The problem, apart from the wrong shape in the input layer (it has to be shape=(28, 28)) and in the output layer (it has to be (28, 28)), as in Edwin Cheong's answer, is that you forgot a Flatten layer after your input layer. This leads to the incompatible shape.
Adapting the answer from above:
encoder_input = keras.layers.Input(shape=(28, 28), name='img')
flattened = keras.layers.Flatten()(encoder_input)  # don't overwrite encoder_input; the Model still needs it
encoder_out = keras.layers.Dense(1, activation="relu")(flattened)
decoder_input = keras.layers.Dense(784, activation="sigmoid")(encoder_out)
decoder_output = keras.layers.Reshape((28, 28))(decoder_input)
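From there, a minimal sketch of wiring it up end to end, assuming x_Train has shape (769, 28, 28) and keeping the optimizer and loss from the question:
from tensorflow import keras

autoencoder = keras.Model(encoder_input, decoder_output, name="autoencoder")
autoencoder.compile(keras.optimizers.RMSprop(learning_rate=1e-3), loss='mse')
autoencoder.summary()
autoencoder.fit(x_Train, x_Train, epochs=10, batch_size=64, validation_split=0.1)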
I'm looking to use two pre-trained autoencoder models and build a model on top with binary output. This is my code:
autoencoder = load_model("autoencoder.h5")
model1 = autoencoder
model2 = autoencoder
model1_out = model1.get_layer(index=7).output
model2_out = model2.get_layer(index=7).output
x = tf.keras.layers.concatenate([model1_out, model2_out])
x = Dense(400, activation='softmax')(x)
x = Dense(200, activation='softmax')(x)
x = Dense(100, activation='softmax')(x)
x = Dense(1, activation='sigmoid')(x)
model=tf.keras.Model(inputs=[model1.input,model2.input],outputs=x)
model.compile(optimizer=Adam(learning_rate=0.001), loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit([X_train,X_train], y_train)
I get the following error:
ValueError: The list of inputs passed to the model contains the same input multiple times.
All inputs should only appear once.
Received inputs=
[<KerasTensor: shape=(None, 768) dtype=float32 (created by layer 'input_1')>,
<KerasTensor: shape=(None, 768) dtype=float32 (created by layer 'input_1')>]
Can somebody tell me what I am doing wrong?
Thanks!
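Note that model1 and model2 here are the same object, so model1.input and model2.input are literally the same tensor, which is what the error complains about. A minimal sketch of one way around it, assuming both branches should start from the same saved weights (renaming layers through the private _name attribute is a common workaround, not an official API):
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import load_model
from tensorflow.keras.optimizers import Adam

# Load two independent copies, each with its own Input layer
model1 = load_model("autoencoder.h5")
model2 = load_model("autoencoder.h5")
for layer in model2.layers:
    layer._name = layer.name + "_2"   # avoid duplicate layer names in the combined graph

model1_out = model1.get_layer(index=7).output
model2_out = model2.get_layer(index=7).output
x = tf.keras.layers.concatenate([model1_out, model2_out])
x = Dense(400, activation='softmax')(x)
x = Dense(200, activation='softmax')(x)
x = Dense(100, activation='softmax')(x)
x = Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inputs=[model1.input, model2.input], outputs=x)
model.compile(optimizer=Adam(learning_rate=0.001), loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit([X_train, X_train], y_train)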
What I am trying to do is create a text classification model which combines CNNs and word embeddings. The basic idea is that we have an Embedding layer at the start of the network and then 2 parallel convolutional networks to find 2- and 3-grams.
Each of these convolution layers takes the output of the embedding layer as input.
The outputs of the two CNN layers are then concatenated, flattened, and fed to a Dense layer.
My input is tokenized, numerical sentences of length 27 (shape = (None, 27)), and I have 1244 of these sentences.
I've managed to create a sequential model with a single CNN layer but struggle with the above.
My code so far:
input_shape = Embedding(voc, 100,weights=[embedding_matrix], input_length=X.shape[1])
tower_1 = Conv1D(filters=100, kernel_size=2, activation='relu')(input_shape)
tower_1 = MaxPooling1D(pool_size=2)(tower_1)
tower_2 = Conv1D(filters=100, kernel_size=3, activation='relu')(input_shape)
tower_2 = MaxPooling1D(pool_size=2)(tower_2)
merged = keras.layers.concatenate([tower_1, tower_2,], axis=1)
merged = Flatten()(merged)
out = Dense(3, activation='softmax')(merged)
model = Model(input_shape, out)
This produces this error:
TypeError: Inputs to a layer should be tensors. Got: <keras.layers.embeddings.Embedding object at 0x7fadca016dd0>
I have also tried replacing
input_shape = Embedding(voc, 100,weights=[embedding_matrix], input_length=X.shape[1])
with:
input_tensor = Input(shape=(1244,27))
input_shape = Embedding(voc, 100,weights=[embedding_matrix], input_length=X.shape[1])(input_tensor)
which gives me this error:
ValueError: Input 0 of layer "max_pooling1d_23" is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, 1244, 26, 100)
You should define your Input layer without the number of samples. Just the sentence length:
import tensorflow as tf
inputs = tf.keras.layers.Input((27,))
embedded = tf.keras.layers.Embedding(50, 100, input_length=27)(inputs)
tower_1 = tf.keras.layers.Conv1D(filters=100, kernel_size=2, activation='relu')(embedded)
tower_1 = tf.keras.layers.MaxPooling1D(pool_size=2)(tower_1)
tower_2 = tf.keras.layers.Conv1D(filters=100, kernel_size=3, activation='relu')(embedded)
tower_2 = tf.keras.layers.MaxPooling1D(pool_size=2)(tower_2)
merged = tf.keras.layers.concatenate([tower_1, tower_2,], axis=1)
merged = tf.keras.layers.Flatten()(merged)
out = tf.keras.layers.Dense(3, activation='softmax')(merged)
model = tf.keras.Model(inputs, out)
print(model.summary())
Usage:
samples = 5
random_input = tf.random.uniform((samples, 27), maxval=50, dtype=tf.int32)
print(model(random_input))
tf.Tensor(
[[0.31525075 0.33163014 0.3531191 ]
[0.3266019 0.3295619 0.34383622]
[0.32351935 0.32669052 0.34979013]
[0.32954428 0.33178467 0.33867106]
[0.32966062 0.3283257 0.34201372]], shape=(5, 3), dtype=float32)
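Training then works as usual; for example, with random integer labels for the 3 classes (a sketch just to check that the shapes line up):
labels = tf.random.uniform((samples,), maxval=3, dtype=tf.int32)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(random_input, labels, epochs=1)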
I'm creating an autoencoder using this tutorial. When I define the encoder and decoder models separately, I get the following error:
decoder = tf.keras.Model(encoded_input, decoder_layer(encoded_input))
File ".../site-packages/tensorflow/python/keras/engine/base_layer.py", line 586, in __call__
self.name)
File ".../site-packages/tensorflow/python/keras/engine/input_spec.py", line 159, in assert_input_compatibility
' but received input with shape ' + str(shape))
ValueError: Input 0 of layer dense_3 is incompatible with the layer: expected axis -1 of input shape to have value 128 but received input with shape [None, 16]
I'm thinking that I need to reshape the output of my layer somewhere but I don't fully understand the reason behind this error.
Here is a minimal working example of my code:
def top_k(input, k):
    return tf.nn.top_k(input, k=k, sorted=True).indices
encoding_dim = 16
input_img = tf.keras.layers.Input(shape=(16, 16, 256), name ="input")
encoded = tf.keras.layers.Dense(encoding_dim, activation='relu')(input_img)
encoded2 = tf.keras.layers.Dense(256, activation='sigmoid')(encoded)
# top_k layer
topk = tf.keras.layers.Lambda(lambda x: tf.nn.top_k(x, k=int(int(x.shape[-1])/2),
                                                    sorted=True,
                                                    name="topk").values)(encoded)
decoded = tf.keras.layers.Dense(128, activation='relu')(topk) # one dimensional problem
decoded2 = tf.keras.layers.Dense(256, activation='sigmoid')(decoded)
autoencoder = tf.keras.Model(input_img, decoded2)
encoded_input = tf.keras.layers.Input(shape=(encoding_dim,))
# this is the problem
decoder_layer = autoencoder.layers[-1]
encoder = tf.keras.Model(input_img, encoded)
decoder = tf.keras.Model(encoded_input, decoder_layer(encoded_input))
You have several mistakes in your code. Check out the code snippet below and the comments listing what I changed.
def top_k(input, k):
    return tf.nn.top_k(input, k=k, sorted=True).indices
encoding_dim = 16
input_img = tf.keras.layers.Input(shape=(16, 16, 256), name ="input")
# The MNIST images are flattened in the tutorial you are following, so you have to do the same if you want to proceed in the same way.
flatten = tf.keras.layers.Flatten()(input_img)
encoded = tf.keras.layers.Dense(encoding_dim, activation='relu')(flatten)
encoded2 = tf.keras.layers.Dense(256, activation='sigmoid')(encoded)
# You were using encoded as input, which made encoded2 redundant, so I changed the input to be encoded2
topk = tf.keras.layers.Lambda(lambda x: tf.nn.top_k(x, k=int(int(x.shape[-1])/2),
                                                    sorted=True,
                                                    name="topk").values)(encoded2)
decoded = tf.keras.layers.Dense(128, activation='relu')(topk) # one dimensional problem
decoded2 = tf.keras.layers.Dense(256, activation='sigmoid')(decoded)
autoencoder = tf.keras.Model(input_img, decoded2)
encoder = tf.keras.Model(input_img, encoded2)
# The actual input to the decoder has the shape of topk in the autoencoder model (minus the batch dimension)
encoded_input = tf.keras.layers.Input(shape=topk.shape[1:])
# Your model is more complicated than the one in the tutorial, so if you want to recreate the decoder you have to do it layer by layer. This is the first layer
decoded1 = autoencoder.layers[-2](encoded_input)
# This is the second layer
decoded2 = autoencoder.layers[-1](decoded1)
# Finally, the decoder
decoder = tf.keras.Model(encoded_input, decoded2)
I assume it should be pretty clear to you now.
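As a quick smoke test with random data (a sketch; the top-k step is reproduced outside the model to feed the standalone decoder):
x = tf.random.uniform((2, 16, 16, 256))
z = encoder(x)                                       # (2, 256)
z_topk = tf.nn.top_k(z, k=128, sorted=True).values   # mimics the Lambda layer, (2, 128)
recon = decoder(z_topk)                              # (2, 256)
print(z.shape, z_topk.shape, recon.shape)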
Let's suppose I have specified MobileNet from the Keras models this way:
base_model = MobileNetV2(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(12, activation='softmax')(x)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
model.compile(loss='categorical_crossentropy', optimizer=Adam(),
              metrics=['accuracy'])
But I would like to add a custom layer to preprocess the input image, this way:
def myFunc(x):
    return K.reshape(x / 255, (-1, 224, 224, 3))
new_model = Sequential()
new_model.add(Lambda(myFunc,input_shape =( 224, 224, 3), output_shape=(224, 224, 3)))
new_model.add(model)
new_model.compile(loss='categorical_crossentropy', optimizer=Adam(),
                  metrics=['accuracy'])
new_model.summary()
It works pretty well, but now I need its input shape to be (224, 224, 3) instead of (None, 224, 224, 3). How can I do that?
In order to expand the dimension of your tensor, you can use
import tensorflow.keras.backend as K
# adds a new dimension to a tensor
K.expand_dims(tensor, 0)
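For example, a quick shape check:
import tensorflow as tf
import tensorflow.keras.backend as K

t = tf.zeros((224, 224, 3))
print(K.expand_dims(t, 0).shape)  # (1, 224, 224, 3)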
However, I do not see why you would need it, just like @meonwongac mentioned.
If you still want to use a Lambda layer instead of resizing / applying other operations on images with skimage/OpenCV/another library, one way of using the Lambda layer is the following:
import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda

input_ = Input(shape=(None, None, 3))
next_layer = Lambda(lambda image: tf.image.resize_images(image, (128, 128)))(input_)
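For reference, a self-contained sketch of the same idea under TF 2.x, where resize_images became tf.image.resize:
import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda

input_ = Input(shape=(None, None, 3))
next_layer = Lambda(lambda image: tf.image.resize(image, (128, 128)))(input_)
model = tf.keras.Model(input_, next_layer)

# Images of any size are resized to 128x128
print(model(tf.zeros((2, 64, 64, 3))).shape)  # (2, 128, 128, 3)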
I am trying to apply transfer learning to my ANN for image classification.
I have found an example of it, and I would like to personalize the network.
Here are the main blocks of code:
model = VGG19(weights='imagenet',
              include_top=False,
              input_shape=(224, 224, 3))
batch_size = 16

for layer in model.layers[:5]:
    layer.trainable = False

x = model.output
x = Flatten()(x)
x = Dense(1024, activation="relu")(x)
x = Dense(1024, activation="relu")(x)
predictions = Dense(16, activation="sigmoid")(x)
model_final = Model(inputs=model.input, outputs=predictions)

model_final.fit_generator(
    train_generator,
    samples_per_epoch=nb_train_samples,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples,
    callbacks=[checkpoint, early])
When I run the code above I get this error:
ValueError: Error when checking target: expected dense_3 to have shape (16,) but got array with shape (1,).
I suppose that the problem is the dimensions' order in the dense layer; I have tried transposing it, but I get the same error.
Maybe this simple example can help:
import numpy as np
test = np.array([1,2,3])
print(test.shape) # (3,)
test = test[np.newaxis]
print(test.shape) # (1, 3)
Try applying np.newaxis to your train_generator output.
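Note also that the model's last layer is Dense(16), so it expects a target of shape (16,) per sample, while the generator apparently yields a single integer label of shape (1,). If the labels are class indices, one-hot encoding them is another way to get the expected shape (a sketch, assuming 16 classes):
import numpy as np
from tensorflow.keras.utils import to_categorical

y = np.array([3])                        # integer label, shape (1,)
y_onehot = to_categorical(y, num_classes=16)
print(y_onehot.shape)                    # (1, 16): per-sample target of shape (16,)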