Multiple loss functions, None for gradient - Python

I am using a custom loss function in addition to the mean squared error loss in my Keras model. The code for the custom loss function is given below:
def grad1(matrix):
    dx = 1.0
    u_x = np.gradient(matrix, dx, axis=0)
    u_xx = np.gradient(u_x, dx, axis=0)
    return u_xx

def artificial_diffusion(y_true, y_pred):
    u_xxt = tf.py_func(grad1, [y_true], tf.float32)
    u_xxp = tf.py_func(grad1, [y_pred], tf.float32)
    lap_mse = tf.losses.mean_squared_error(u_xxt, u_xxp) + K.epsilon()
    return lap_mse
I have the following 1D CNN model:
input_img = Input(shape=(n_states,n_features))
x = Conv1D(32, kernel_size=5, activation='relu', padding='same')(input_img)
x = Conv1D(32, kernel_size=5, activation='relu', padding='same')(x)
x = Conv1D(32, kernel_size=5, activation='relu', padding='same')(x)
decoded1 = Conv1D(n_outputs, kernel_size=3, activation='linear', padding='same',
                  name='regression')(x)
decoded2 = Conv1D(n_outputs, kernel_size=3, activation='linear', padding='same',
                  name='diffusion')(x)
model = Model(inputs=input_img, outputs=[decoded1, decoded2])
model.compile(loss=['mse', artificial_diffusion],
              loss_weights=[1, 1],
              optimizer='adam', metrics=[coeff_determination])
When I compile and run the model, I get the error An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval. If I instead create the model as model = Model(inputs=input_img, outputs=[decoded1, decoded1]), there is no error, but then I can't monitor the two losses separately. Am I making a mistake in how I construct the model?
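As an illustrative aside (not from the original question): tf.py_func wraps a NumPy computation that TensorFlow cannot differentiate through, which is one common source of this error. A minimal sketch of how a comparable second-derivative penalty could be written with built-in, differentiable ops is shown below; the helper name laplacian_1d, the function name artificial_diffusion_tf, and the choice of differencing axis are assumptions for the example.

from tensorflow.keras import backend as K

def laplacian_1d(t):
    # Second-order central difference along axis 1 (assumed to be the
    # "spatial" axis of a (batch, n_states, n_features) tensor); adjust
    # the axis to match your data layout.
    return t[:, 2:, :] - 2.0 * t[:, 1:-1, :] + t[:, :-2, :]

def artificial_diffusion_tf(y_true, y_pred):
    # Differentiable alternative to a py_func-based second-derivative loss.
    u_xxt = laplacian_1d(y_true)
    u_xxp = laplacian_1d(y_pred)
    return K.mean(K.square(u_xxt - u_xxp)) + K.epsilon()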

Related

Get softmax output and raw output of the last layer of a model

When creating a neural network for image classification, I want to get both the classification and the raw output of the last layer, to determine whether the image really contains one of the objects I want to classify. If it does not, the raw output should contain very low values for all classes; but if the image really contains one of the objects, the raw output should have a high value for one of the neurons.
Assuming I have the following code:
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(80, 80, 3)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(4, activation='softmax'))
How would I get the raw output of the last dense layer?
You can use the functional API and implement your model in the following way:
inputs = tf.keras.Input(shape=(80, 80, 3))
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(inputs)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Dropout(0.3)(x)
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Dropout(0.3)(x)
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Flatten()(x)
# here you can get raw output
logits = tf.keras.layers.Dense(4)(x)
model = tf.keras.Model(
    inputs=inputs,
    outputs={
        'logits': logits,
        'predictions': tf.nn.softmax(logits)
    }
)
model.summary()
After that, your model will have two outputs in dictionary format. Beware that you can't simply use a single loss function like categorical_crossentropy, because it would try to minimize the loss for both outputs. You need to use the loss argument of the compile method to specify a loss for each output. For example:
model.compile(
    optimizer='adam',
    loss={
        # ignore the logits output when computing the loss
        'logits': lambda y_true, y_pred: 0.0,
        'predictions': tf.keras.losses.CategoricalCrossentropy()
    })
And your fit would look like this:
model.fit(
    x_train,
    {
        'logits': y_train,
        'predictions': y_train
    },
    epochs=10
)
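As a follow-up (not part of the original answer): at inference time, model.predict should then return a dictionary with one array per named output. A minimal sketch, assuming a test batch x_test shaped like the training data:

outputs = model.predict(x_test)        # dict keyed by output name
raw_logits = outputs['logits']         # un-normalized scores of the last Dense layer
class_probs = outputs['predictions']   # softmax probabilities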

How can I concatenate dynamic values with the flattened vector in a CNN

I'm trying to increase the accuracy of a CNN by computing some dynamic values, such as the Hu moments of the images, during the training phase and then feeding them, together with the flattened vector, to the fully connected layer, as shown in the diagram of my model.
I want to compute the Hu moments for each image in the dataset; then, after the flatten operation, I want to concatenate the Hu moment values with the flattened vector and feed the result to the fully connected layer.
This is the model I'm using (Tensorflow Keras):
layer1 = Conv2D(16, (3, 3),padding="same", activation='relu')(inpx)
layer2 = Conv2D(32, kernel_size=(3, 3),padding="same", activation='relu')(layer1)
layer3 = MaxPooling2D(pool_size=(2, 2))(layer2)
layer4 = Conv2D(64, kernel_size=(5, 5),padding="same", activation='relu')(layer3)
layer5 = Conv2D(128, kernel_size=(5, 5),padding="same", activation='relu')(layer4)
layer6 = MaxPooling2D(pool_size=(2, 2))(layer5)
layer7 = Dropout(0.5)(layer6)
layer8 = Flatten()(layer7)
layer9 = Dense(250, activation='sigmoid')(layer8)
layer10 = Dense(10, activation='softmax')(layer9)
model = Model([inpx], layer10)
model.compile(optimizer=keras.optimizers.Adadelta(),
              loss=keras.losses.categorical_crossentropy,
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, batch_size=500)
score = model.evaluate(x_test, y_test, verbose=0)
The dataset I'm using is MNIST handwritten digits.
Hmm, I don't know what the Hu moments and the Extend and Solidity are, but I'm assuming they're one-dimensional:
# image = tf.Tensor

@tf.function
def calc_hu(image):
    """ calculate hu """
    hu = ...
    return hu

class HuLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        return calc_hu(inputs)

@tf.function
def calc_extend(image):
    """ calculate extend """
    extend = ...
    return extend

class ExtendLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        return calc_extend(inputs)
layer1 = Conv2D(16, (3, 3),padding="same", activation='relu')(inpx)
layer2 = Conv2D(32, kernel_size=(3, 3),padding="same", activation='relu')(layer1)
layer3 = MaxPooling2D(pool_size=(2, 2))(layer2)
layer4 = Conv2D(64, kernel_size=(5, 5),padding="same", activation='relu')(layer3)
layer5 = Conv2D(128, kernel_size=(5, 5),padding="same", activation='relu')(layer4)
layer6 = MaxPooling2D(pool_size=(2, 2))(layer5)
layer7 = Dropout(0.5)(layer6)
layer8 = Flatten()(layer7)
# merge the CNN features with the Hu and extend features computed from the raw input
layer8_ = tf.keras.layers.concatenate([layer8, HuLayer()(inpx), ExtendLayer()(inpx)])
layer9 = Dense(250, activation='sigmoid')(layer8_)
layer10 = Dense(10, activation='softmax')(layer9)
I didn't test this code but it should set you on your way. Hope it helps you enough to get going!
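As an alternative approach (not part of the original answer): if expressing the Hu moment computation as TensorFlow ops is awkward, the moments can be precomputed outside the graph with OpenCV and fed in as a second model input. A minimal sketch, assuming MNIST-shaped 28x28x1 images and the x_train/y_train arrays from the question:

import cv2
import numpy as np
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, concatenate
from tensorflow.keras.models import Model

def hu_moments(img):
    # Seven Hu moments of a single-channel image, computed with OpenCV.
    m = cv2.moments(img.astype(np.float32))
    return cv2.HuMoments(m).flatten()

# Precompute once per image, outside the graph.
hu_train = np.array([hu_moments(img.squeeze()) for img in x_train])

img_in = Input(shape=(28, 28, 1))   # MNIST-shaped input (assumption)
hu_in = Input(shape=(7,))           # the 7 Hu moments per image

x = Conv2D(16, (3, 3), padding="same", activation="relu")(img_in)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)
x = concatenate([x, hu_in])         # merge CNN features with the Hu moments
x = Dense(250, activation="sigmoid")(x)
out = Dense(10, activation="softmax")(x)

model = Model([img_in, hu_in], out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit([x_train, hu_train], y_train, epochs=10, batch_size=500)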

How to remove the FC layer from a fine-tuned model in Keras

So I have fine-tuned a ResNet50 model with the following architecture:
model = models.Sequential()
model.add(resnet)
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(layers.Dense(2048, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(4096, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(736, activation='softmax')) # Output layer
So now I have a saved model (.h5) that I want to use as input to another model, but I don't want the last layer. I would normally do it like this with a base ResNet50 model:
def base_model():
    resnet = resnet50.ResNet50(weights="imagenet", include_top=False)
    x = resnet.output
    x = GlobalAveragePooling2D()(x)
    x = Dense(4096, activation='relu')(x)
    x = Dropout(0.6)(x)
    x = Dense(4096, activation='relu')(x)
    x = Dropout(0.6)(x)
    x = Lambda(lambda x_: K.l2_normalize(x, axis=1))(x)
    return Model(inputs=resnet.input, outputs=x)
but that does not work for this model, as it gives me an error. I am trying it like this right now, but it still does not work.
def base_model():
    resnet = load_model("../Models/fine_tuned_model/fine_tuned_resnet50.h5")
    x = resnet.layers.pop()
    #resnet = resnet50.ResNet50(weights="imagenet", include_top=False)
    #x = resnet.output
    #x = GlobalAveragePooling2D()(x)
    x = Dense(4096, activation='relu')(x)
    x = Dropout(0.6)(x)
    x = Dense(4096, activation='relu')(x)
    x = Dropout(0.6)(x)
    x = Lambda(lambda x_: K.l2_normalize(x, axis=1))(x)
    return Model(inputs=resnet.input, outputs=x)
enhanced_resent = base_model()
This is the error that it gives me.
Layer dense_3 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.core.Dense'>. Full input: [<keras.layers.core.Dense object at 0x000001C61E68E2E8>]. All inputs to the layer should be tensors.
I don't know if I can do this or not.
I have finally figured it out after stepping away for an hour. So this is how you do it.
def base_model():
    resnet = load_model("../Models/fine_tuned_model/42-0.85.h5")
    x = resnet.layers[-2].output
    x = Dense(4096, activation='relu', name="FC1")(x)
    x = Dropout(0.6, name="FCDrop1")(x)
    x = Dense(4096, activation='relu', name="FC2")(x)
    x = Dropout(0.6, name="FCDrop2")(x)
    x = Lambda(lambda x_: K.l2_normalize(x, axis=1))(x)
    return Model(inputs=resnet.input, outputs=x)
enhanced_resent = base_model()
And this works perfectly. I hope this helps out someone else as I have never seen this done in any tutorial before.
x = resnet.layers[-2].output
This will get the layer you want, but you need to know which index the layer you want is at. -2 is the 2nd-to-last FC layer, which I wanted because I wanted the feature extractions, not the final classification. The index can be found by doing a
model.summary()
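A small related sketch (not from the original answer): if the layer has a known name, Keras also lets you fetch it by name instead of by index; the layer name used here is purely illustrative.

# 'flatten_1' is an illustrative name; check model.summary() for the real one
feature_tensor = resnet.get_layer('flatten_1').output
feature_extractor = Model(inputs=resnet.input, outputs=feature_tensor)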

VAE in Keras: how to define the end-to-end model?

I am following the tutorial here. My model part is:
input_img = keras.Input(shape=img_shape)
x = layers.Conv2D(32, (3, 3),
                  padding='same', activation='relu')(input_img)
...
x = layers.Conv2D(64, (3, 3),
                  padding='same', activation='relu')(x)
shape_before_flattening = K.int_shape(x)
x = layers.Flatten()(x)
x = layers.Dense(32, activation='relu')(x)
z_mean = layers.Dense(latent_dim)(x)
z_log_var = layers.Dense(latent_dim)(x)

def sampling(args):
    ...

z = layers.Lambda(sampling)([z_mean, z_log_var])

decoder_input = layers.Input(K.int_shape(z)[1:])
x = layers.Dense(np.prod(shape_before_flattening[1:]),
                 activation='relu')(decoder_input)
x = layers.Reshape(shape_before_flattening[1:])(x)
x = layers.Conv2DTranspose(32, 3,
                           padding='same', activation='relu',
                           strides=(2, 2))(x)
x = layers.Conv2D(1, 3,
                  padding='same', activation='sigmoid')(x)

# This is our decoder model from latent space to reconstructed images
decoder = Model(decoder_input, x)
# We then apply it to `z` to recover the decoded `z`.
z_decoded = decoder(z)

def vae_loss(self, x, z_decoded):
    ...

# Fit the end-to-end model
vae = Model(input_img, z_decoded)  # vae = Model(input_img, x)
vae.compile(optimizer='rmsprop', loss=vae_loss)
vae.summary()
My question is: is the end-to-end model vae = Model(input_img, z_decoded) or vae = Model(input_img, x)? Should we compute the loss between input_img and z_decoded, or between input_img and x? Thanks.
x changes throughout the model; at x = layers.Conv2D(1, 3, padding='same', activation='sigmoid')(x) you set x to be the last layer of your decoder model.
When doing z_decoded = decoder(z) you chain your decoder straight after the encoder; z_decoded is effectively the output layer of your decoder, and thus the same x as earlier. This also creates the link between the actual input and the output.
Computing the loss would yield the same results on both (as they both represent the same layer).
In short: both vae = Model(input_img, z_decoded) and vae = Model(input_img, x) are the end-to-end model; I would suggest using the z_decoded version, for readability.

Runtime Error: Disconnected graph for GANs because input can't be obtained

Here is my discriminator architecture:
def build_discriminator(img_shape, embedding_shape):
    model1 = Sequential()
    model1.add(Conv2D(32, kernel_size=5, strides=2, input_shape=img_shape, padding="same"))
    model1.add(LeakyReLU(alpha=0.2))
    model1.add(Dropout(0.25))
    model1.add(Conv2D(48, kernel_size=5, strides=2, padding="same"))
    #model.add(ZeroPadding2D(padding=((0,1),(0,1))))
    model1.add(BatchNormalization(momentum=0.8))
    model1.add(LeakyReLU(alpha=0.2))
    model1.add(Dropout(0.25))
    model1.add(Conv2D(64, kernel_size=5, strides=2, padding="same"))
    model1.add(BatchNormalization(momentum=0.8))
    model1.add(LeakyReLU(alpha=0.2))
    model1.add(Dropout(0.25))
    model1.add(Conv2D(128, kernel_size=5, strides=2, padding="same"))
    model1.add(BatchNormalization(momentum=0.8))
    model1.add(LeakyReLU(alpha=0.2))
    model1.add(Dropout(0.25))
    model1.add(Conv2D(256, kernel_size=5, strides=2, padding="same"))
    model1.add(BatchNormalization(momentum=0.8))
    model1.add(LeakyReLU(alpha=0.2))
    model1.add(Dropout(0.25))
    model1.add(Flatten())
    model1.add(Dense(200))

    model2 = Sequential()
    model2.add(Dense(50, input_shape=embedding_shape))
    model2.add(Dense(100))
    model2.add(Dense(200))
    model2.add(Flatten())

    merged_model = Sequential()
    merged_model.add(Merge([model1, model2], mode='concat'))
    merged_model.add(Dense(1, activation='sigmoid', name='output_layer'))
    #merged_model.compile(loss='binary_crossentropy', optimizer='adam',
    #                     metrics=['accuracy'])
    #model1.add(Dense(1, activation='sigmoid'))
    merged_model.summary()
    merged_model.input_shape

    img = Input(shape=img_shape)
    emb = Input(shape=embedding_shape)
    validity = merged_model([img, emb])

    return Model([img, emb], validity)
and here is the generator architecture:
def build_generator(latent_dim=484):
    model = Sequential()
    model.add(Dense(624 * 2 * 2, activation="relu", input_dim=latent_dim))
    model.add(Reshape((2, 2, 624)))
    model.add(UpSampling2D())
    model.add(Conv2D(512, kernel_size=5, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    model.add(UpSampling2D())
    # 4x4x512
    model.add(Conv2D(256, kernel_size=5, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    model.add(UpSampling2D())
    # 8x8x256
    model.add(Conv2D(128, kernel_size=5, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    model.add(UpSampling2D())
    # 16x16x128
    model.add(Conv2D(64, kernel_size=5, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    model.add(UpSampling2D())
    # 32x32x64
    model.add(Conv2D(32, kernel_size=5, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    model.add(UpSampling2D())
    # 64x64x32
    model.add(Conv2D(3, kernel_size=5, padding="same"))
    model.add(Activation("tanh"))
    # 128x128x3

    noise = Input(shape=(latent_dim,))
    img = model(noise)

    return Model(noise, img)
and here is how I am making the GAN network:
optimizer = Adam(0.0004, 0.5)
discriminator=build_discriminator((128,128,3),(1,128,3))
discriminator.compile(loss='binary_crossentropy',
                      optimizer=optimizer,
                      metrics=['accuracy'])
# Build the generator
generator = build_generator()
# The generator takes noise as input and generates imgs
z = Input(shape=(100+384,))
img = generator(z)
# For the combined model we will only train the generator
discriminator.trainable = False
temp=Input(shape=(1,128,3))
# The discriminator takes generated images as input and determines validity
valid = discriminator([img,temp])
# The combined model (stacked generator and discriminator)
# Trains the generator to fool the discriminator
combined = Model(z, valid)
combined.compile(loss='binary_crossentropy', optimizer=optimizer)
The discriminator has two sub-models: it takes as input an image of shape 128x128x3 and an embedding of shape 1x128x3, and the two models are then merged. The generator model just takes noise and generates a 128x128x3 image. So at the line combined = Model(z, valid) I am getting the following error:
RuntimeError: Graph disconnected: cannot obtain value for tensor Tensor("input_5:0", shape=(?, 1, 128, 3), dtype=float32) at layer "input_5". The following previous layers were accessed without issue: ['input_4', 'model_2']
which I think is because the discriminator can't find the embedding input, even though I am feeding it a tensor of shape (1, 128, 3), just like the noise is being fed to the generator model. Can anyone please tell me where I am going wrong?
And after everything is set up, here is how I generate images from the noise and embedding vector merged together, and how the discriminator takes the image and vector to identify fakes:
# texts has the embedding vectors
pics = np.array(pics)  # images
noise = np.random.normal(0, 1, (batch_size, 100))
j = 0
latent_code = []
for j in range(len(texts)):  # appending the embedding at the end of the noise
    n = np.append(noise[j], texts[j])
    n = n.tolist()
    latent_code.append(n)
latent_code = np.array(latent_code)
gen_imgs = generator.predict(latent_code)  # generator making fakes
j = 0
vects = []
for im in gen_imgs:
    t = np.array(texts[j])
    t = np.reshape(t, [128, 3])
    t = np.expand_dims(t, axis=0)
    vects.append(t)
    j += 1
vects = np.array(vects)  # array of shape (?, 1, 128, 3)
# discriminator marking fakes and reals
d_loss_real = discriminator.train_on_batch([pics, vects], valid)
d_loss_fake = discriminator.train_on_batch([gen_imgs, vects], fake)
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
g_loss = combined.train_on_batch(latent_code, valid)
You have forgotten to add temp as one of the inputs of the GAN (that's why the error says it cannot obtain a value for the corresponding tensor: it is essentially disconnected):
combined = Model([z, temp], valid)
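As a follow-up note (not in the original answer), the combined model then expects both inputs during training, so the generator update from the question's training loop would presumably become:

# reusing the names from the question's training code
g_loss = combined.train_on_batch([latent_code, vects], valid)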
As a side note, I highly recommend using the Keras functional API for building complicated, multi-branch models like your discriminator. It is much easier to use, more flexible, and less error-prone.
For example, this is the discriminator you wrote, rewritten using the functional API. I personally think it is much easier to follow:
def build_discriminator(img_shape, embedding_shape):
    input_img = Input(shape=img_shape)
    x = Conv2D(32, kernel_size=5, strides=2, padding="same")(input_img)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Conv2D(48, kernel_size=5, strides=2, padding="same")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Conv2D(64, kernel_size=5, strides=2, padding="same")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Conv2D(128, kernel_size=5, strides=2, padding="same")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Conv2D(256, kernel_size=5, strides=2, padding="same")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Flatten()(x)
    output_img = Dense(200)(x)

    input_emb = Input(shape=embedding_shape)
    y = Dense(50)(input_emb)
    y = Dense(100)(y)
    y = Dense(200)(y)
    output_emb = Flatten()(y)

    merged = concatenate([output_img, output_emb])
    output_merge = Dense(1, activation='sigmoid', name='output_layer')(merged)

    return Model([input_img, input_emb], output_merge)
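A quick usage sketch (not in the original answer), mirroring the compile call from the question and assuming the usual Keras imports (Input, Conv2D, LeakyReLU, concatenate, Adam, and so on) are in scope:

discriminator = build_discriminator((128, 128, 3), (1, 128, 3))
discriminator.compile(loss='binary_crossentropy',
                      optimizer=Adam(0.0004, 0.5),
                      metrics=['accuracy'])
discriminator.summary()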
