Why can the encoder part of an autoencoder work without fitting? - python

Sample code from https://blog.keras.io/building-autoencoders-in-keras.html
import keras
from keras import layers
# This is our input image
input_img = keras.Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = layers.Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = layers.Dense(784, activation='sigmoid')(encoded)
# This model maps an input to its reconstruction
autoencoder = keras.Model(input_img, decoded)
# This model maps an input to its encoded representation
encoder = keras.Model(input_img, encoded)
# This is our encoded (32-dimensional) input
encoded_input = keras.Input(shape=(encoding_dim,))
# Retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# Create the decoder model
decoder = keras.Model(encoded_input, decoder_layer(encoded_input))
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train,
                epochs=50,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))
...
# Encode and decode some digits
# Note that we take them from the *test* set
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
In the example, only the autoencoder model is compiled and fitted; encoder is not.
I am confused: why can encoder predict on new data directly, without any compiling or fitting?

encoder is a sub-model inside autoencoder (specifically, the part between input_img and encoded); it is not a separate model: you are just referring to a part of autoencoder under an explicit name.
When you train autoencoder, you train the encoder and decoder parts simultaneously. After training, encoder refers to the already-trained sub-model, which you can then use for inference.
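One way to see this for yourself is a minimal check, reusing the variable names from the snippet above, that compares the layer objects and weights of the two models after training:
import numpy as np
# The Dense layer that produces "encoded" is literally the same object in both models
print(autoencoder.layers[1] is encoder.layers[1])   # True
# Consequently, the trained weights are shared, not copied
w_auto = autoencoder.layers[1].get_weights()[0]
w_enc = encoder.layers[1].get_weights()[0]
print(np.array_equal(w_auto, w_enc))                # True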

Related

How to train a Keras Model with L1-norm reconstruction loss function?

I am currently building an autoencoder for the MNIST dataset with Keras; here is my code:
# import all the dependencies
from keras.layers import Dense,Conv2D,MaxPooling2D,UpSampling2D
from keras import Input, Model
from keras.datasets import mnist
import numpy as np
import matplotlib.pyplot as plt
encoding_dim = 15
input_img = Input(shape=(784,))
# encoded representation of input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# decoded representation of code
decoded = Dense(784, activation='sigmoid')(encoded)
# Model which take input image and shows decoded images
autoencoder = Model(input_img, decoded)
# This model shows encoded images
encoder = Model(input_img, encoded)
# Creating a decoder model
encoded_input = Input(shape=(encoding_dim,))
# last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
The last step is the compiling step, but I need to use an L1-norm reconstruction loss function. From the Keras losses documentation, it seems they don't have this function. How can I apply an L1-norm reconstruction loss function in the autoencoder.compile() call? Thank you!
The loss function measures the expected error between the input and its reconstruction. For an L1-norm reconstruction loss you can use MAE (mean absolute error), available under the name mean_absolute_error. So you can rewrite the last line of your code as follows:
autoencoder.compile(optimizer='adam', loss='mean_absolute_error')
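If you prefer to spell the loss out yourself, a minimal sketch of an equivalent custom L1 loss (assuming the Keras backend is imported as K) would be:
from keras import backend as K

def l1_reconstruction_loss(y_true, y_pred):
    # mean absolute difference between input and reconstruction, averaged over pixels
    return K.mean(K.abs(y_true - y_pred), axis=-1)

autoencoder.compile(optimizer='adam', loss=l1_reconstruction_loss)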

split neural network in two nets preserving weights in python

In Keras I would like to reuse only the initial layers of a trained neural network, together with the weights obtained during training.
Going to the case: let's imagine we have a dataset df; after splitting it into train, dev and test sets we train a neural network, in this example an autoencoder.
A real piece of code illustrating this concept, without providing data (I didn't consider it necessary):
from keras.models import Model
from keras.layers import Activation, Dense, Dropout, Input
# Define input layer
input_data = Input(shape=(train.shape[1],), name='Input')
# Define encoding layer
encoded = Dense(encoding_dim, activation='relu')(input_data)
# Define decoding layer
decoded = Dense(train.shape[1], activation='sigmoid')(encoded)
# Create the autoencoder model
autoencoder = Model(input_data, decoded, name='Simple AutoEncoder')
#Compile the autoencoder model
autoencoder.compile(optimizer='rmsprop',
                    loss='binary_crossentropy')
autoencoder.fit(train, train,
                epochs=50,
                batch_size=256,
                shuffle=True,
                validation_data=(dev_x, dev_x), verbose=0)
After compiling and fitting the model, we have a neural network with the weights obtained from the fitting process.
How could I use only the encoder part of this net while preserving the weights I got?
I believe something along this line should do the trick:
#...all the code from above, including training...
# Define the encoder model
encoder = Model(input_data, encoded, name='Encoder')
The encoder model can be treated as a fully-fledged Keras model (you can save/load/fit/evaluate/predict).
After training the autoencoder, the encoder model built from the encoded tensor carries the trained weights of the autoencoder.
# Getting the trained weights of the first layer(dense layer of encoder)
weights_ae = autoencoder.layers[1].get_weights()[0]
# The previous code of the example...
# Creating the encoder model
encoder = Model(input_data, encoded, name='Encoder')
# Getting the weights of the encoder model
weights_e = encoder.layers[1].get_weights()[0]
So, finally, it is confirmed that the encoder model created this way has the weights ("trained experience") from the autoencoder.
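A quick sanity check (assuming numpy is imported as np) makes the comparison explicit:
import numpy as np
# Both arrays come from the same underlying layer object, so they are identical
print(np.array_equal(weights_ae, weights_e))  # True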

Autoencoder and SVM implementation and improving AUC

Currently, I am using an autoencoder to extract features from fMRI images. I have a NumPy array of 9950 values for each candidate, and 4975 features are extracted with the autoencoder and then given to an SVM to predict whether the candidate is suffering from a disease or not.
Here is my autoencoder model:
encoding_dim = 4975
# this is our input placeholder
input_img = Input(shape=(9950,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(9950, activation='sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
print(autoencoder)
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# create a placeholder for an encoded (4975-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
import tensorflow as tf
from keras.optimizers import Adam,SGD
opt = SGD(lr=0.001)
autoencoder.compile(loss='binary_crossentropy', optimizer=opt, metrics=['AUC'])
When I print the AUC, it stays in the 40-50% range. I want to improve the encoded features so that when I give them to the SVM for classification I get good accuracy. Any suggestions to improve the model?
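For reference, the downstream step described above (feeding the encoded features to an SVM and scoring AUC) might look roughly like the following sketch, assuming scikit-learn is available and x_train, x_test, y_train, y_test are hypothetical placeholders for the fMRI feature arrays and disease labels:
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Extract the 4975-dimensional encoded features once the autoencoder has been fitted
features_train = encoder.predict(x_train)
features_test = encoder.predict(x_test)

# Fit an SVM on the encoded features and evaluate AUC on held-out candidates
svm = SVC(probability=True)
svm.fit(features_train, y_train)
probs = svm.predict_proba(features_test)[:, 1]
print(roc_auc_score(y_test, probs))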

If I pass layers to two Keras models and train only one, will both models share weights after the former is trained?

I tried to build a simple autoencoder using Keras. For this I started with a single fully-connected neural layer as the encoder and as the decoder.
input_img = Input(shape=(784,))
encoded = Dense(encoding_dim, activation='relu')(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(input_img, decoded)
I also created a separate encoder model with the help of:
encoder = Model(input_img, encoded)
As well as the decoder model:
encoded_input = Input(shape=(32,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
Then I trained the model
autoencoder.fit(x_train, x_train,
                epochs=50,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))
But even though I never trained encoder and decoder themselves, they share the weights of autoencoder, even though I passed them the layers before training. I trained only the autoencoder, yet both the encoder and the decoder end up trained.
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
I should have been more careful while reading the text.
If two Keras models are sharing some layers, when you train the first model, the weights from the shared layers will be updated automatically in the other model.
https://keras.io/getting-started/functional-api-guide/
This blog illustrates the use of shared layers nicely.
https://blog.keras.io/building-autoencoders-in-keras.html/
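A minimal illustration of the shared-layer behaviour (a hypothetical toy example, not taken from the question) is to build two models around the same Dense layer object, train one, and watch the other's weights change:
import numpy as np
from keras import Input, Model
from keras.layers import Dense

shared = Dense(4, activation='relu')        # a single layer object
inp = Input(shape=(8,))
model_a = Model(inp, Dense(8)(shared(inp))) # uses the shared layer plus one more
model_b = Model(inp, shared(inp))           # reuses the very same layer

model_a.compile(optimizer='adam', loss='mse')
before = model_b.get_weights()[0].copy()
model_a.fit(np.random.rand(32, 8), np.random.rand(32, 8), epochs=1, verbose=0)
after = model_b.get_weights()[0]
print(np.array_equal(before, after))        # False: training model_a also updated model_b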

Dimension reduction using Keras

I have a data set of MNIST handwritten digits with 257 columns.
columns 1-256: pixel values
column 257: target
How do I design an autoencoder using Keras to reduce the input to 2 dimensions?
What I have tried
encoding_dim = 32
input_img = Input(shape=(256,))
encoded = Dense(encoding_dim, activation='relu')(input_img)
decoded = Dense(256, activation='sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input=input_img, output=decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
autoencoder.fit(X, X,
                nb_epoch=50,
                batch_size=256,
                shuffle=True)
Error
KeyError: '[318 327 ...] not in index
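That kind of KeyError usually appears when X is a pandas DataFrame rather than a NumPy array, so that Keras' internal batch shuffling ends up indexing it by column labels. A minimal sketch of the setup with a 2-dimensional bottleneck and an explicit NumPy conversion (assuming X holds the 256 pixel columns; the variable names below are hypothetical) might look like:
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

X_arr = X.values if hasattr(X, 'values') else np.asarray(X)  # ensure a plain NumPy array

encoding_dim = 2                      # 2-dimensional bottleneck for dimension reduction
input_img = Input(shape=(256,))
encoded = Dense(encoding_dim, activation='relu')(input_img)
decoded = Dense(256, activation='sigmoid')(encoded)

autoencoder = Model(inputs=input_img, outputs=decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
autoencoder.fit(X_arr, X_arr, epochs=50, batch_size=256, shuffle=True)

# The 2-dimensional representation comes from the encoder part
encoder = Model(inputs=input_img, outputs=encoded)
X_reduced = encoder.predict(X_arr)    # shape: (n_samples, 2)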
