TensorBoard Callbacks in Keras Backend Function

In order to get training and graph information of a CNN in Keras, I simply define tensorboard = TensorBoard(log_dir="CNN/Logs/{}".format(MODEL_ID)) and then add callbacks=[tensorboard] to the fit() method of my model.
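For reference, that standard setup looks like this (a minimal sketch assuming standalone Keras; MODEL_ID, X_train and y_train are placeholders):

from keras.callbacks import TensorBoard

tensorboard = TensorBoard(log_dir="CNN/Logs/{}".format(MODEL_ID))
# The callback writes scalars and the graph while fit() runs.
model.fit(X_train, y_train, epochs=10, callbacks=[tensorboard])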
I am implementing a Policy Network following this example, and I am now wondering how I can get training and graph information for the PN in TensorBoard.
Possibly there is a callbacks equivalent for Keras Backend's function(), but I have not had any luck with the documentation.
Specifically, my equivalent of fit() looks as follows:
def build_train_fn(self):
    """ Train function for the Policy Network.
    This function replaces model.fit(X, y).
    """
    with self.graph.as_default():
        K.set_session(self.session)
        action_prob_placeholder = self.model.output
        action_onehot_placeholder = K.placeholder(shape=(None, ACTIONS), name="action_onehot")
        discount_reward_placeholder = K.placeholder(shape=(None,), name="discount_reward")
        action_prob = K.sum(action_prob_placeholder * action_onehot_placeholder, axis=1)
        log_action_prob = K.log(action_prob)
        loss = -log_action_prob * discount_reward_placeholder
        loss = K.mean(loss)
        adam = optimizers.Adam()
        updates = adam.get_updates(params=self.model.trainable_weights,
                                   loss=loss)
        self.train_fn = K.function(inputs=[self.model.input,
                                           action_onehot_placeholder,
                                           discount_reward_placeholder],
                                   outputs=[],
                                   updates=updates)
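K.function() has no callbacks argument, but in TF 1.x the same information can be logged by hand: build summary ops next to the loss, fetch them through the function's outputs, and write them with a tf.summary.FileWriter (which can also dump the graph). A minimal sketch along these lines; the step counter and the batch variable names are assumptions, not part of the original code:

import tensorflow as tf

# Inside build_train_fn, after loss is defined:
loss_summary = tf.summary.scalar("policy_loss", loss)
self.writer = tf.summary.FileWriter("PN/Logs/{}".format(MODEL_ID),
                                    self.session.graph)  # also writes the graph
self.train_fn = K.function(inputs=[self.model.input,
                                   action_onehot_placeholder,
                                   discount_reward_placeholder],
                           outputs=[loss_summary],
                           updates=updates)

# In the training loop (step is a counter you maintain yourself):
summary_str = self.train_fn([states, actions_onehot, discount_rewards])[0]
self.writer.add_summary(summary_str, global_step=step)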

Related

Testing Concept Activation Vector (TCAV) for Pytorch

I am trying to compute Testing with Concept Activation Vectors (TCAV, as described here) for different concepts for my classification model. So far I haven't found working code online for PyTorch models, so I have decided to rewrite it myself. The code I am trying to copy is:
def compute_tcav(input_tensor, model, AA, layer_name, filter_indices, optimizer,
                 seed_input=None, wrt_tensor=None, backprop_modifier=None,
                 grad_modifier='absolute'):
    layer_AA = AA[layer_name]
    losses = [
        (ActivationMaximization(layer_AA, filter_indices), -1)
    ]
    opt = optimizer(input_tensor, losses, wrt_tensor=wrt_tensor, norm_grads=False)
    # grads = opt.minimize(seed_input=seed_input, max_iter=1, grad_modifier=grad_modifier, verbose=False)[1]
    return losses  # utils.normalize(grads)[0]
source: https://github.com/maragraziani/iMIMIC-RCVs/blob/master/rcv_utils.py
This is what I have so far:
import torch

def ActivationMaximizationLoss(input_AA):
    loss = torch.mean(input_AA)
    return loss

def compute_tcav_pytorch(model, layer_predictions):
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
    optimizer.zero_grad()
    input_AA = torch.from_numpy(layer_predictions['_blocks.6._project_conv'])  # middle layer
    input_AA.requires_grad = True
    loss = ActivationMaximizationLoss(input_AA)
    loss.backward()
    optimizer.step()
    img = input_AA.grad
    return img[0][0]
I am trying to maximise a layer activation from the model and from that get the TCAV vector.
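As a side note, the layer_predictions dict that compute_tcav_pytorch expects can be filled with a forward hook; a minimal sketch, where model, example_batch and the layer name are assumptions taken from the snippet above:

import torch

layer_predictions = {}

def save_activation(name):
    # Returns a hook that stores the layer's output as a NumPy array.
    def hook(module, inputs, output):
        layer_predictions[name] = output.detach().cpu().numpy()
    return hook

# Attach the hook to the layer of interest by its attribute path.
layer = dict(model.named_modules())['_blocks.6._project_conv']
layer.register_forward_hook(save_activation('_blocks.6._project_conv'))

with torch.no_grad():
    model(example_batch)  # one forward pass populates layer_predictions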

How can I pass Input/Output images to Tensorboard using Keras model.fit() method to train a model?

I recently switched from Tensorflow 1.14 and the Estimator API to Tensorflow 2.0 and the Keras API. I am working on an image segmentation problem, so the inputs/outputs/labels are all images. When I used Estimator, things were pretty straightforward: in model_fn, where the arguments were (features, labels, mode, params), I could just pick the features and labels, do the necessary processing, pass them to tf.summary.image(), and everything worked like a charm. Now, using the Keras API, although it provides greater ease of use, it makes it hard to do simple handling of data during training, which becomes even harder when combined with the Dataset API. Example:
Tensorflow 1.14/Estimator:
def model_fn(features, labels, mode, params):
    loss, train_op = None, None
    eval_metric_ops, training_hooks, evaluation_hooks = None, None, None
    output = model(input=features)
    predictions = tf.argmax(output, axis=-1)
    predictions_dict = {'predicted': predictions}
    dice_score = tf.contrib.metrics.f1_score(labels=labels, predictions=predictions[:, :, :, 1])
    if mode in (estimator.ModeKeys.TRAIN, estimator.ModeKeys.EVAL):
        global_step = tf.train.get_or_create_global_step()
        learning_rate = tf.train.exponential_decay(params['lr'], global_step=global_step,
                                                   decay_steps=params['decay_steps'],
                                                   decay_rate=params['decay_rate'], staircase=False)
        loss = loss_fn(outputs=predictions, labels=labels)
        summary.image('Input_Image', features)
        summary.image('Label', tf.expand_dims(tf.cast(labels, dtype=tf.float32), axis=-1))
        summary.image('Prediction', tf.expand_dims(tf.cast(predictions, dtype=tf.float32), axis=-1))
        if mode == estimator.ModeKeys.TRAIN:
            with tf.name_scope('Metrics'):
                summary.scalar('Dice_Coefficient', dice_score[1])
                summary.scalar('Learning_Rate', learning_rate)
            summary.merge_all()
            train_logs_hook = tf.estimator.LoggingTensorHook({'Dice_Coefficient': dice_score[1]},
                                                             every_n_iter=params['train_log_every_n_steps'])
            training_hooks = [train_logs_hook]
            train_op = Adam(learning_rate=learning_rate, epsilon=params['epsilon']).minimize(loss=loss, global_step=global_step)
        if mode == estimator.ModeKeys.EVAL:
            eval_metric_ops = {'Metrics/Dice_Coefficient': dice_score}
            eval_summary_hook = tf.estimator.SummarySaverHook(output_dir=params['eval_metrics_path'],
                                                              summary_op=summary.merge_all(),
                                                              save_steps=params['eval_steps_per_summary_save'])
            evaluation_hooks = [eval_summary_hook]
    return estimator.EstimatorSpec(mode,
                                   predictions=predictions_dict,
                                   loss=loss,
                                   train_op=train_op,
                                   eval_metric_ops=eval_metric_ops,
                                   training_hooks=training_hooks,
                                   evaluation_hooks=evaluation_hooks)
Using Keras with Tensorflow 2.0, AFAIK I can't get this kind of access to the input/output tensors during training or evaluation (note that even though the Estimator doesn't save the image summaries during evaluation, you can still preview the results by using a tf.estimator.SummarySaverHook). Below is my failed attempt:
def train_data(params):  # eval_data is similar
    def standardization_summaries(image, label, step, writer):
        # Some processing of the images
        with writer.as_default():
            tf.summary.image('Input_dataset', image, step=step, max_outputs=1)
            tf.summary.image('label_dataset', label, step=step, max_outputs=1)
        return image, label
    data_set = tf.data.Dataset.from_generator(generator=lambda: data_generator(params),
                                              output_types=(tf.float32, tf.int64),
                                              output_shapes=(tf.TensorShape([None, None]), tf.TensorShape([None, None])))
    data_set = data_set.map(lambda x, y: standardization_summaries(image=x, label=y, step=params['global_step'], writer=params['writer']))
    data_set = data_set.batch(params['batch_size'])
    data_set = data_set.prefetch(buffer_size=-1)
    return data_set

model = tf.keras.models.load_model(saved_model)
summary_writer = tf.summary.create_file_writer(save_model_path)
step = tf.Variable(0, trainable=False, dtype=tf.int64)
tensorboard = tf.keras.callbacks.TensorBoard(log_dir=save_model_path, histogram_freq=1, write_graph=True,
                                             write_images=False)
early_stop = tf.keras.callbacks.EarlyStopping(patience=args.early_stop)
callbacks = [tensorboard, early_stop]
params = {'batch_size': args.batch_size,
          'global_step': step,
          'writer': summary_writer}
model.fit(x=train_data(params), epochs=args.epochs, initial_epoch=args.initial_epoch,
          validation_data=val_data(params), steps_per_epoch=2, callbacks=callbacks)
Getting the input images from the Dataset API came from here, but this just produces tons of images whenever the dataset fetches data from the generator. Also, with the step variable being constant and never advancing (I can't figure out how to make it increment), everything ends up under step 0, and I can't think of any viable way to connect these outputs with the predicted output, even if I found a way to print them.
So, the question is: is there anything I am still missing about the synergy between the Keras API and TensorBoard image summaries? Is there a way to save image summaries, say, every half epoch during training and once at the end of evaluation, or should I just let the model train, get the training outputs through model.predict() at the end, and then inspect whether something went wrong (which is not efficient)?
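One workable pattern in TF 2.x (a sketch under assumptions, not a definitive answer) is a custom Keras callback that runs the model on a small fixed batch and writes tf.summary.image records at the end of each epoch. Here sample_images and sample_labels are hypothetical 4-D/3-D tensors held aside for visualization:

import tensorflow as tf

class ImageSummaryCallback(tf.keras.callbacks.Callback):
    def __init__(self, log_dir, sample_images, sample_labels):
        super().__init__()
        self.writer = tf.summary.create_file_writer(log_dir)
        self.sample_images = sample_images  # assumed shape [batch, h, w, c]
        self.sample_labels = sample_labels  # assumed shape [batch, h, w]

    def on_epoch_end(self, epoch, logs=None):
        predictions = self.model.predict(self.sample_images)
        predicted_mask = tf.cast(tf.argmax(predictions, axis=-1), tf.float32)[..., tf.newaxis]
        with self.writer.as_default():
            tf.summary.image('Input_Image', self.sample_images, step=epoch, max_outputs=1)
            tf.summary.image('Label', tf.cast(self.sample_labels, tf.float32)[..., tf.newaxis],
                             step=epoch, max_outputs=1)
            tf.summary.image('Prediction', predicted_mask, step=epoch, max_outputs=1)

# callbacks = [tensorboard, early_stop, ImageSummaryCallback(save_model_path, imgs, lbls)]

Because the callback owns the epoch index, the summaries land under increasing steps instead of all under step 0.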

Knowledge Distillation loss with Tensorflow 2 + Keras

I am trying to implement a very simple Keras model that uses Knowledge Distillation [1] from another model.
Roughly, I need to replace the original loss L(y_true, y_pred) by L(y_true, y_pred)+L(y_teacher_pred, y_pred) where y_teacher_pred is the prediction of another model.
I've tried to do
def create_student_model_with_distillation(teacher_model):
    inp = tf.keras.layers.Input(shape=(21,))
    model = tf.keras.models.Sequential()
    model.add(inp)
    model.add(...)
    model.add(tf.keras.layers.Dense(units=1))
    teacher_pred = teacher_model(inp)

    def my_loss(y_true, y_pred):
        loss = tf.keras.losses.mean_squared_error(y_true, y_pred)
        loss += tf.keras.losses.mean_squared_error(teacher_pred, y_pred)
        return loss

    model.compile(loss=my_loss, optimizer='adam')
    return model
However, when I try to call fit on my model, I am getting
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
How can I solve this issue?
Refs
[1] https://arxiv.org/abs/1503.02531
Actually, this blog post answers your question: keras blog
But in short: you should use the new TF2 API and run the teacher's forward pass before the tf.GradientTape() block:
def train_step(self, data):
    # Unpack data
    x, y = data

    # Forward pass of teacher
    teacher_predictions = self.teacher(x, training=False)

    with tf.GradientTape() as tape:
        # Forward pass of student
        student_predictions = self.student(x, training=True)

        # Compute losses
        student_loss = self.student_loss_fn(y, student_predictions)
        distillation_loss = self.distillation_loss_fn(
            tf.nn.softmax(teacher_predictions / self.temperature, axis=1),
            tf.nn.softmax(student_predictions / self.temperature, axis=1),
        )
        loss = self.alpha * student_loss + (1 - self.alpha) * distillation_loss
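For context, a sketch of the surrounding class this train_step would live in, loosely following the Keras blog post (the compile signature and default values below are assumptions for this sketch):

import tensorflow as tf

class Distiller(tf.keras.Model):
    def __init__(self, student, teacher):
        super().__init__()
        self.student = student
        self.teacher = teacher

    def compile(self, optimizer, student_loss_fn, distillation_loss_fn,
                alpha=0.1, temperature=3):
        super().compile(optimizer=optimizer)
        self.student_loss_fn = student_loss_fn
        self.distillation_loss_fn = distillation_loss_fn
        self.alpha = alpha
        self.temperature = temperature

    # train_step(self, data) goes here as above, followed by the usual
    # gradient application on the student's weights:
    #   grads = tape.gradient(loss, self.student.trainable_variables)
    #   self.optimizer.apply_gradients(zip(grads, self.student.trainable_variables))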

Tensorflow: How to add regularization to the model

I want to add regularization to my optimizer like this:
tf.train.AdadeltaOptimizer(learning_rate=1).minimize(loss)
But I don't know how to design the "loss" function in the code below.
The website I saw is:
https://blog.csdn.net/marsjhao/article/details/72630147
The modified code originally came from the Google Machine Learning course:
https://colab.research.google.com/notebooks/mlcc/improving_neural_net_performance.ipynb?utm_source=mlcc&utm_campaign=colab-external&utm_medium=referral&utm_content=improvingneuralnets-colab&hl=zh-tw#scrollTo=P8BLQ7T71JWd
Can someone give me some advice or discuss with me?
def train_nn_classifier_model_new(
        my_optimizer,
        steps,
        batch_size,
        hidden_units,
        training_examples,
        training_targets,
        validation_examples,
        validation_targets):
    periods = 10
    steps_per_period = steps / periods

    # Create a DNNClassifier object.
    my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
    dnn_classifier = tf.estimator.DNNClassifier(
        feature_columns=construct_feature_columns(training_examples),
        hidden_units=hidden_units,
        optimizer=my_optimizer
    )

    # Create input functions.
    training_input_fn = lambda: my_input_fn(training_examples,
                                            training_targets["deal_or_not"],
                                            batch_size=batch_size)
    predict_training_input_fn = lambda: my_input_fn(training_examples,
                                                    training_targets["deal_or_not"],
                                                    num_epochs=1,
                                                    shuffle=False)
    predict_validation_input_fn = lambda: my_input_fn(validation_examples,
                                                      validation_targets["deal_or_not"],
                                                      num_epochs=1,
                                                      shuffle=False)

    # Train the model, but do so inside a loop so that we can periodically assess
    # loss metrics.
    print("Training model...")
    print("LogLoss (on training data):")
    training_log_losses = []
    validation_log_losses = []
    for period in range(0, periods):
        # Train the model, starting from the prior state.
        dnn_classifier.train(
            input_fn=training_input_fn,
            steps=steps_per_period
        )
        # Take a break and compute predictions.
        training_probabilities = dnn_classifier.predict(input_fn=predict_training_input_fn)
        training_probabilities = np.array([item['probabilities'] for item in training_probabilities])
        print(training_probabilities)
        validation_probabilities = dnn_classifier.predict(input_fn=predict_validation_input_fn)
        validation_probabilities = np.array([item['probabilities'] for item in validation_probabilities])
        training_log_loss = metrics.log_loss(training_targets, training_probabilities)
        validation_log_loss = metrics.log_loss(validation_targets, validation_probabilities)
        # Occasionally print the current loss.
        print("  period %02d : %0.2f" % (period, training_log_loss))
        # Add the loss metrics from this period to our list.
        training_log_losses.append(training_log_loss)
        validation_log_losses.append(validation_log_loss)
    print("Model training finished.")

    # Output a graph of loss metrics over periods.
    plt.ylabel("LogLoss")
    plt.xlabel("Periods")
    plt.title("LogLoss vs. Periods")
    plt.tight_layout()
    plt.plot(training_log_losses, label="training")
    plt.plot(validation_log_losses, label="validation")
    plt.legend()
    return dnn_classifier

result = train_nn_classifier_model_new(
    my_optimizer=tf.train.AdadeltaOptimizer(learning_rate=1),
    steps=30000,
    batch_size=250,
    hidden_units=[150, 150, 150, 150],
    training_examples=training_examples,
    training_targets=training_targets,
    validation_examples=validation_examples,
    validation_targets=validation_targets
)
Regularization is added to the loss function. Your optimizer, AdadeltaOptimizer, does not support regularization parameters. If you want the optimizer to handle regularization, you should use tf.train.ProximalAdagradOptimizer, which has l1_regularization_strength and l2_regularization_strength parameters where you can set the values; these parameters were part of the original algorithm.
Otherwise you have to apply regularization in a custom loss function, but DNNClassifier does not allow a custom loss function, so you would have to build your network manually for that.
For how to add regularization, check here.
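For instance, a minimal sketch of swapping in the proximal optimizer mentioned above (the learning rate and strength values are arbitrary placeholders, not tuned recommendations):

my_optimizer = tf.train.ProximalAdagradOptimizer(
    learning_rate=0.1,
    l1_regularization_strength=0.001,  # placeholder value
    l2_regularization_strength=0.001   # placeholder value
)
result = train_nn_classifier_model_new(
    my_optimizer=my_optimizer,
    steps=30000,
    batch_size=250,
    hidden_units=[150, 150, 150, 150],
    training_examples=training_examples,
    training_targets=training_targets,
    validation_examples=validation_examples,
    validation_targets=validation_targets
)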

'This' function is not equivalent (isomorphic) to the function restored from a checkpoint in CNTK

I got the following exception when calling trainer.restore_from_checkpoint in CNTK.
'This' function is not equivalent (isomorphic) to the function restored from a checkpoint.
My restoring code is below. It follows the same structure used to create the trainer and save trainer.dnn via trainer.save_checkpoint("trainer.dnn"), as mentioned in this document.
def evaluate(reader, model):
    criterion = create_criterion_function(model)
    criterion.replace_placeholders({criterion.placeholders[0]: Input(input_dim),
                                    criterion.placeholders[1]: Input(label_dim)})

    # training config
    epoch_size = 34
    minibatch_size = 17

    # LR schedule over epochs
    lr_per_sample = [0.003]*4 + [0.0015]*24 + [0.0003]
    lr_per_minibatch = [x * minibatch_size for x in lr_per_sample]
    lr_schedule = learning_rate_schedule(lr_per_minibatch, UnitType.minibatch, epoch_size)

    # Momentum
    momentum_as_time_constant = momentum_as_time_constant_schedule(70)
    learner = adam_sgd(criterion.parameters,
                       lr=lr_schedule, momentum=momentum_as_time_constant,
                       low_memory=True,
                       gradient_clipping_threshold_per_sample=15, gradient_clipping_with_truncation=True)
    trainer = Trainer(model, criterion.outputs[0], criterion.outputs[1], learner)
    trainer.restore_from_checkpoint("trainer.dnn")

def do_test():
    reader = create_reader('Test.txt', is_training=False)
    model = create_model()
    evaluate(reader, model)

do_test()
There are two ways to checkpoint.
Model checkpointing: checkpoint the model only; when you restore the model, create a new trainer.
Trainer checkpointing: checkpoint the trainer, which saves both the model and the criterion functions, then restore the trainer from the checkpoint.
This error can occur because you pass a criterion function to the trainer and then restore from a previous checkpoint that was saved with a different function.
Some relevant code here
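If the criterion mismatch keeps tripping the trainer restore, the model-only route can be sketched like this (a minimal sketch assuming the CNTK 2.x Python API; the names mirror the snippet above):

import cntk as C

# Training side: save only the model.
model.save("model.dnn")

# Restoring side: load the model, rebuild the criterion, create a fresh trainer.
model = C.load_model("model.dnn")
criterion = create_criterion_function(model)
trainer = Trainer(model, criterion.outputs[0], criterion.outputs[1], learner)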
