I am trying to develop my own deep learning library to deepen my knowledge and gain some experience. I am using TensorFlow. I am done with fully connected layers and am now implementing LSTM, which I will use for character-level joke generation.
My question is: I want to store the previous hidden state in my LSTM class and update the corresponding TensorFlow Variable. Is this possible in TensorFlow? I am not sure it works, since TensorFlow variables are initialized beforehand. I have implemented the LSTM like this:
def feed(self, input_tensor):
    with self.graph.as_default():
        with tf.name_scope(self.name):
            i_t = tf.sigmoid(
                tf.matmul(input_tensor, self.weights['W_i']) +
                tf.matmul(self.hidden_state, self.weights['U_i']) +
                self.biases['b_i']
            )
            C_head_t = tf.tanh(
                tf.matmul(input_tensor, self.weights['W_c']) +
                tf.matmul(self.hidden_state, self.weights['U_c']) +
                self.biases['b_c']
            )
            f_t = tf.sigmoid(
                tf.matmul(input_tensor, self.weights['W_f']) +
                tf.matmul(self.hidden_state, self.weights['U_f']) +
                self.biases['b_f']
            )
            next_cell_state = tf.multiply(i_t, C_head_t) + tf.multiply(f_t, self.cell_state)
            o_t = tf.sigmoid(
                tf.matmul(input_tensor, self.weights['W_o']) +
                tf.matmul(self.hidden_state, self.weights['U_o']) +
                self.biases['b_o']
            )
            next_hidden_state = tf.multiply(o_t, tf.tanh(next_cell_state))
            self.hidden_state = tf.assign(ref=self.hidden_state, value=next_hidden_state)
            self.cell_state = tf.assign(ref=self.cell_state, value=next_cell_state)
            tf.summary.histogram('hidden_state', self.hidden_state)
            tf.summary.histogram('cell_state', self.cell_state)
            for k, v in self.weights.items():
                tf.summary.histogram(k, v)
            for k, v in self.biases.items():
                tf.summary.histogram(k, v)
            return next_hidden_state
And I build the TensorFlow graph like this:
self.feed_forward = self.input_tensor
for l in self.layers:
    self.feed_forward = l.feed(self.feed_forward)
self.loss_opt = self.loss_function(self.feed_forward, self.output_tensor)
self.fit_opt = self.optimizer.minimize(self.loss_opt)
init = tf.global_variables_initializer()
self.sess.run(init)
Then I run the feed_forward op in a session to predict, and the fit_opt op to update the model.
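For reference, the prediction and training calls look roughly like this (a sketch; x_batch, y_batch, and the exact attribute names are assumptions, not the exact repository code):
# Training step: run the optimizer op together with the loss
_, loss_value = self.sess.run(
    [self.fit_opt, self.loss_opt],
    feed_dict={self.input_tensor: x_batch, self.output_tensor: y_batch})

# Prediction: run only the forward pass
prediction = self.sess.run(
    self.feed_forward,
    feed_dict={self.input_tensor: x_batch})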
I connected the LSTM layer to a fully connected layer. I believe the fully connected layer works fine, since I tested it on a basic dataset. The LSTM's hidden state is the input to the fully connected layer. softmax_cross_entropy is used as the loss function and AdamOptimizer is used to update the model.
I get meaningless results when I train the LSTM. I suspect that the hidden state and cell state updates do not work properly. What is the best way to debug my model? I looked at my graph and the tensor histograms in TensorBoard; the graph looks fine and the histograms are updated over time.
I suspect the following part:
self.hidden_state = tf.assign(ref=self.hidden_state, value=next_hidden_state)
self.cell_state = tf.assign(ref=self.cell_state, value=next_cell_state)
PS: I use truncated_normal to initialize the tensors. The cell state and hidden state variables have trainable=False and their initial value is a zero vector.
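For context, the initialization I describe looks roughly like this (a sketch with assumed shape names, not the exact repository code):
# One weight matrix shown as an example; all weights use truncated_normal
self.weights['W_i'] = tf.Variable(tf.truncated_normal([input_size, hidden_size]))
# Hidden and cell state start as zero vectors and are not trainable
self.hidden_state = tf.Variable(tf.zeros([batch_size, hidden_size]), trainable=False)
self.cell_state = tf.Variable(tf.zeros([batch_size, hidden_size]), trainable=False)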
You can find the whole code here: https://github.com/ceteke/MyNN
I am using the algorithm described at http://deeplearning.net/tutorial/lstm.html
Related
I'm building a multi-model neural network for reinforcement learning that includes an action network, a world model network, and a critic. The idea is to train the world model to emulate whatever simulation you are trying to master based on input from the action network and the previous state, to train the critic to maximize the Bellman equation (total reinforcement over time) based on the world model output, and then backpropagate the critic value through the world model to provide gradient targets for training the actions. So, from some state, the action network outputs an action which is fed into the model to generate the next state, and that state feeds into the critic network for evaluation against some goal state.
For all this to work, I must use three separate loss functions, one for each network; they all add something to the gradients in one or more networks, but they can be in conflict. For example, to train the world model I use a target from an environmental simulation, and for the critic I use a target of the current state reward + discount * next state forecast value. However, to train the actor I just use the negative critic value as a loss and backpropagate all the way through all three models to calibrate the best action.
I can make this work without any batching by zeroing out gradients incrementally, but that is inefficient and doesn't let me accumulate gradients for any kind of "time-series batching" optimizer update step. Each model has its own trainable parameters, but the execution graph flows through all three networks. So inside the calibration loop after firing the networks in sequence:
...
if self.actor.calibrating:
    self.actor.optimizer.zero_grad()
    # Pick loss for maximizing the value of all actions
    loss = -self.critic.value
    # Backpropagate through all three networks to train actor output
    # How do I stop the critic and model networks from incrementing their gradient values?
    loss.backward(retain_graph=True)
    self.actor.optimizer.step()
if self.model.calibrating:
    self.model.optimizer.zero_grad()
    # Reduce loss for ambiguous actions
    loss = self.model.get_loss() * self.actor.get_confidence()**2
    # How can I block this from backpropagating through the action network?
    loss.backward(retain_graph=True)
    self.model.optimizer.step()
if self.critic.calibrating:
    self.critic.optimizer.zero_grad()
    # Reduce loss for ambiguous actions
    loss = self.critic.get_loss(self.goal) * self.actor.get_confidence()**2
    # How do I stop this from backpropagating through the model and action networks?
    loss.backward(retain_graph=True)
    self.critic.optimizer.step()
...
Finally - my question is in two parts:
How can I temporarily stop loss.backward() at a given layer without detaching it forever?
How can I block loss.backward() from updating some gradients where I'm just flowing through a model to get gradients for another model?
Got this figured out thanks to a suggestion from a colleague to try the requires_grad setting. (I had assumed that would break the execution graph, but it doesn't)
So - to answer my own two questions:
If you calibrate the chained models in the correct order, you can detach them one at a time so that loss.backward() doesn't run over models that aren't needed. I was thinking that this would break the graph, but this is PyTorch, not TensorFlow 1.x, and the graph is regenerated on every forward pass anyway. Silly me for missing this yesterday.
If you set requires_grad to False for a model (or a layer or an individual weight) then loss.backward() will STILL traverse the entire connected graph but it will leave those individual gradients as they were while still setting any gradients earlier in the graph. Exactly what I wanted.
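(For reference, a common way to toggle this for a whole sub-network is to set the flag on its parameters; a minimal sketch, not the exact code used here:)
def set_requires_grad(module, flag):
    # Turn gradient tracking on or off for every parameter in the module.
    for p in module.parameters():
        p.requires_grad = flag

# e.g. freeze the critic and world model while calibrating the actor:
# set_requires_grad(self.critic, False)
# set_requires_grad(self.model, False)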
This code works to minimize the execution of unnecessary graph traversals and gradient updates. I still need to refactor it for staggered updates over time so that it can accumulate gradients for several cycles before stepping the optimizers, but this definitely works as intended.
# Step through all models in a chain to create gradient paths from the critic, back through the world model, to the actor.
def step(self):
    # Get the current state from the simulation
    state = self.world.state
    # Fire the actor to select a softmax action.
    self.actor(state)
    # Run the world simulation on that action.
    self.world.step(self.actor.action)
    # Combine the action and starting state as input to the world model.
    if self.actor.calibrating:
        action_state = torch.cat([self.actor.value, state], dim=0)
    else:
        # Push softmax action closer to 1.0
        action_state = torch.cat([self.actor.hard_value, state], dim=0)
    # Run the model and then the critic on the action_state
    self.critic(self.model(action_state))
    if self.actor.calibrating:
        self.actor.optimizer.zero_grad()
        self.model.requires_grad = False
        self.critic.requires_grad = False
        # Pick loss for maximizing the value of the action choice
        loss = -self.critic.value * self.actor.get_confidence()
        loss.backward(retain_graph=True)
        self.actor.optimizer.step()
    if self.model.calibrating:
        # Don't need to backpropagate through the actor again
        self.actor.value.detach_()
        self.model.optimizer.zero_grad()
        self.model.requires_grad = True
        # Reduce loss for ambiguous actions
        loss = self.model.get_loss() * self.actor.get_confidence()**2
        loss.backward(retain_graph=True)
        self.model.optimizer.step()
    if self.critic.calibrating:
        # Don't need to backpropagate through the model or actor again
        self.model.value.detach_()
        self.critic.optimizer.zero_grad()
        self.critic.requires_grad = True
        # Reduce loss for ambiguous actions
        loss = self.critic.get_loss(self.goal) * self.actor.get_confidence()**2
        loss.backward(retain_graph=True)
        self.critic.optimizer.step()
Here's a more precise and fuller example.
import torch
import torch.nn as nn
from torch.autograd import Variable

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.layers = nn.ModuleList([
            nn.Linear(10, 10),
            nn.Linear(10, 10),
            nn.Linear(10, 10),
            nn.Linear(10, 10),
        ])

    def forward(self, x):
        self.output = []
        self.input = []
        for layer in self.layers:
            # detach from previous history
            x = Variable(x.data, requires_grad=True)
            # you can add this line after each layer to stop backpropagation at that layer
            self.input.append(x)
            # compute output
            x = layer(x)
            # add to list of outputs
            self.output.append(x)
        return x

    def backward(self, g):
        for i, output in reversed(list(enumerate(self.output))):
            if i == (len(self.output) - 1):
                # for the last node, use g
                output.backward(g)
            else:
                output.backward(self.input[i+1].grad.data)
            # the gradient lands on the detached input of this layer
            print(i, self.input[i].grad.data.sum())

model = Net()
inp = Variable(torch.randn(4, 10))
output = model(inp)
gradients = torch.randn(*output.size())
model.backward(gradients)
Currently I'm trying to train on complex-valued data generated from telecom engineering models. The weights and biases are also complex. I have used the relu activation for the hidden layers as follows, at the l-th layer:
A_l = tf.complex(tf.nn.relu(tf.real(Z_l)), tf.nn.relu(tf.imag(Z_l)))
But how do I do this for the cost and optimizer? I am really confused because I'm a beginner in machine learning. I have gone through some papers about non-analytic functions, but none of them helped me use the TensorFlow API. For example: how do I rewrite the functions below?
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z_out, labels = y))
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
I have seen a recommendation to split the cost as real and imaginary parts as:
cost_R = .., cost_I = ...
but I didn't try it, because I think the optimizer would then be split as well and the optimization would not work. So, again: how should the cost and optimizer be handled?
Any help is much appreciated.
I am trying to modify the projection layer of my NMT (neural machine translation) model. I want to be able to update the number of units without reinitializing all of the weights. I followed the TensorFlow NMT tutorial found here. Here is the code for my decoder:
# Decoder
train_decoder = tf.contrib.seq2seq.BasicDecoder(
    decoder_cell, train_helper, decoder_initial_state)

maximum_iterations = tf.round(tf.reduce_max(encoder_input_lengths) * 2)

# Dynamic decoding
train_outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(train_decoder)

# Projection layer -- THIS IS WHAT I WANT TO MODIFY
projection_layer = layers_core.Dense(
    len(language_base.vocabulary), use_bias=False)
train_logits = projection_layer(train_outputs.rnn_output)

train_crossent = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=decoder_outputs, logits=train_logits)

# Target weights
target_weights = tf.sequence_mask(
    decoder_input_lengths, params.tgt_max_len, dtype=train_logits.dtype)
target_weights = tf.transpose(target_weights)

# Loss function
train_loss = (tf.reduce_sum(train_crossent * target_weights) /
              tf.to_float(params.batch_size))

# Calculate and clip gradients
train_vars = tf.trainable_variables()
gradients = tf.gradients(train_loss, train_vars)
clipped_gradients, _ = tf.clip_by_global_norm(
    gradients, params.max_gradient_norm)

# Optimization
optimizer = tf.train.AdamOptimizer(params.learning_rate)
update_step = optimizer.apply_gradients(
    zip(clipped_gradients, train_vars))
TensorFlow doesn't really let you change the shape of a variable (which is what the number of units corresponds to here) without some effort.
Instead, you're better off preallocating a larger-than-needed number of units, masking the units you don't use to 0 during the early stages of training, and updating your mask as you go (use a variable to store the mask to make updating it easier), as sketched below.
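A rough sketch of that idea, reusing the names from the question's decoder code (max_units and the mask variable are illustrative assumptions, not a definitive implementation):
max_units = 2 * len(language_base.vocabulary)  # preallocate more output units than currently needed
projection_layer = layers_core.Dense(max_units, use_bias=False)
all_logits = projection_layer(train_outputs.rnn_output)

# Non-trainable mask: 1.0 for active units, 0.0 for units held in reserve.
unit_mask = tf.Variable(
    tf.concat([tf.ones([len(language_base.vocabulary)]),
               tf.zeros([max_units - len(language_base.vocabulary)])], axis=0),
    trainable=False)
train_logits = all_logits * unit_mask

# Later, activate more units without reinitializing the trained weights:
# sess.run(unit_mask.assign(new_mask_values))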
Imagine a fully-connected neural network with its last two layers of the following structure:
[Dense]
units = 612
activation = softplus
[Dense]
units = 1
activation = sigmoid
The output value of the net is 1, but I'd like to know what the input x to the sigmoidal function was (must be some high number, since sigm(x) is 1 here).
Following indraforyou's answer I managed to retrieve the output and weights of the Keras layers:
outputs = [layer.output for layer in model.layers[-2:]]
functors = [K.function( [model.input]+[K.learning_phase()], [out] ) for out in outputs]
test_input = np.array(...)
layer_outs = [func([test_input, 0.]) for func in functors]
print(layer_outs[-1][0]) # -> array([[ 1.]])
dense_0_out = layer_outs[-2][0] # shape (612, 1)
dense_1_weights = model.layers[-1].weights[0].get_value() # shape (1, 612)
dense_1_bias = model.layers[-1].weights[1].get_value()
x = np.dot(dense_0_out, dense_1_weights) + dense_1_bias
print(x) # -> -11.7
How can x be a negative number? In that case the last layer's output should be a number closer to 0.0 than to 1.0. Are dense_0_out or dense_1_weights the wrong outputs or weights?
Since you're using get_value(), I'll assume that you're using Theano backend. To get the value of the node before the sigmoid activation, you can traverse the computation graph.
The graph can be traversed starting from outputs (the result of some computation) down to its inputs using the owner field.
In your case, what you want is the input x of the sigmoid activation op. The output of the sigmoid op is model.output. Putting these together, the variable x is model.output.owner.inputs[0].
If you print out this value, you'll see Elemwise{add,no_inplace}.0, which is an element-wise addition op. It can be verified from the source code of Dense.call():
def call(self, inputs):
    output = K.dot(inputs, self.kernel)
    if self.use_bias:
        output = K.bias_add(output, self.bias)
    if self.activation is not None:
        output = self.activation(output)
    return output
The input to the activation function is the output of K.bias_add().
With a small modification of your code, you can get the value of the node before activation:
x = model.output.owner.inputs[0]
func = K.function([model.input] + [K.learning_phase()], [x])
print(func([test_input, 0.]))
For anyone using TensorFlow backend: use x = model.output.op.inputs[0] instead.
I can see a simple way, just changing the model structure a little. (See at the end how to use the existing model and change only the ending.)
The advantages of this method are:
You don't have to guess if you're doing the right calculations
You don't need to care about the dropout layers and how to implement a dropout calculation
This is a pure Keras solution (applies to any backend, either Theano or Tensorflow).
There are two possible solutions below:
Option 1 - Create a new model from start with the proposed structure
Option 2 - Reuse an existing model changing only its ending
Model structure
You could just have the last dense separated in two layers at the end:
[Dense]
units = 612
activation = softplus
[Dense]
units = 1
#no activation
[Activation]
activation = sigmoid
Then you simply get the output of the last dense layer.
I'd say you should create two models, one for training, the other for checking this value.
Option 1 - Building the models from the beginning:
from keras.models import Model
#build the initial part of the model the same way you would
#add the Dense layer without an activation:
#if using the functional Model API
denseOut = Dense(1)(outputFromThePreviousLayer)
sigmoidOut = Activation('sigmoid')(denseOut)
#if using the sequential model - will need the functional API
model.add(Dense(1))
sigmoidOut = Activation('sigmoid')(model.output)
Create two models from that, one for training, one for checking the output of dense:
#if using the functional API
checkingModel = Model(yourInputs, denseOut)
#if using the sequential model:
checkingModel = model
trainingModel = Model(checkingModel.inputs, sigmoidOut)
Use trainingModel for training normally. The two models share weights, so training one trains the other.
Use checkingModel just to see the outputs of the Dense layer, using checkingModel.predict(X)
Option 2 - Building this from an existing model:
from keras.models import Model
#find the softplus dense layer and get its output:
softplusOut = oldModel.layers[indexForSoftplusLayer].output
#or should this be the output from the dropout? Whichever layer comes immediately before the final Dense(1)
#recreate the dense layer
outDense = Dense(1, name='newDense', ...)(softplusOut)
#create the new model
checkingModel = Model(oldModel.inputs,outDense)
It's important, since you created a new Dense layer, to get the weights from the old one:
wgts = oldModel.layers[indexForDense].get_weights()
checkingModel.get_layer('newDense').set_weights(wgts)
In this case, training the old model will not update the last dense layer in the new model, so, let's create a trainingModel:
outSigmoid = Activation('sigmoid')(checkingModel.output)
trainingModel = Model(checkingModel.inputs,outSigmoid)
Use checkingModel for checking the values you want with checkingModel.predict(X). And train the trainingModel.
This is for fellow googlers: the Keras API has changed significantly since the accepted answer was posted. The working code for extracting a layer's output before the activation (with the TensorFlow backend) is:
model = Your_Keras_Model()
the_tensor_you_need = model.output.op.inputs[0] #<- this is indexable, if there are multiple inputs to this node then you can find it with indexing.
In my case, the final layer was a dense layer with activation softmax, so the tensor output I needed was <tf.Tensor 'predictions/BiasAdd:0' shape=(?, 1000) dtype=float32>.
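To actually evaluate that tensor you can wrap it in a backend function, for example (a sketch assuming a single-input model and the TensorFlow backend; some_input_batch is a placeholder name):
from tensorflow.keras import backend as K

# Build a callable that maps the model input to the pre-activation tensor.
pre_activation_fn = K.function([model.input], [the_tensor_you_need])
pre_activation_values = pre_activation_fn([some_input_batch])[0]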
(TF backend)
Solution for Conv layers.
I had the same question, and rewriting the model's configuration was not an option.
A simple hack is to perform the layer's call function manually, which gives you control over the activation.
Copy-paste from the Keras source, with self changed to layer. You can do the same with any other layer.
from keras import backend as K  # needed for the K.conv*/K.bias_add calls below

def conv_no_activation(layer, inputs, activation=False):
    if layer.rank == 1:
        outputs = K.conv1d(
            inputs,
            layer.kernel,
            strides=layer.strides[0],
            padding=layer.padding,
            data_format=layer.data_format,
            dilation_rate=layer.dilation_rate[0])
    if layer.rank == 2:
        outputs = K.conv2d(
            inputs,
            layer.kernel,
            strides=layer.strides,
            padding=layer.padding,
            data_format=layer.data_format,
            dilation_rate=layer.dilation_rate)
    if layer.rank == 3:
        outputs = K.conv3d(
            inputs,
            layer.kernel,
            strides=layer.strides,
            padding=layer.padding,
            data_format=layer.data_format,
            dilation_rate=layer.dilation_rate)

    if layer.use_bias:
        outputs = K.bias_add(
            outputs,
            layer.bias,
            data_format=layer.data_format)

    if activation and layer.activation is not None:
        outputs = layer.activation(outputs)

    return outputs
Now we need to modify the main function a little. First, identify the layer by its name. Then retrieve the activations from the previous layer. Finally, compute the output of the target layer.
def get_output_activation_control(model, images, layername, activation=False):
    """Get activations for the input from specified layer"""
    inp = model.input
    layer_id, layer = [(n, l) for n, l in enumerate(model.layers) if l.name == layername][0]
    prev_layer = model.layers[layer_id - 1]
    conv_out = conv_no_activation(layer, prev_layer.output, activation=activation)
    functor = K.function([inp] + [K.learning_phase()], [conv_out])
    return functor([images])
Here is a tiny test. I'm using VGG16 model.
a_relu = get_output_activation_control(vgg_model, img, 'block4_conv1', activation=True)[0]
a_no_relu = get_output_activation_control(vgg_model, img, 'block4_conv1', activation=False)[0]
print(np.sum(a_no_relu < 0))
> 245293
Set all negative values to zero to compare with the results retrieved after the ReLU operation embedded in VGG16.
a_no_relu[a_no_relu < 0] = 0
print(np.allclose(a_relu, a_no_relu))
> True
An easy way to define a new layer with a new activation function:
def change_layer_activation(layer):
    if isinstance(layer, keras.layers.Conv2D):
        config = layer.get_config()
        config["activation"] = "linear"
        new = keras.layers.Conv2D.from_config(config)
    elif isinstance(layer, keras.layers.Dense):
        config = layer.get_config()
        config["activation"] = "linear"
        new = keras.layers.Dense.from_config(config)

    weights = [x.numpy() for x in layer.weights]

    return new, weights
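A possible way to use it, rebuilding just the final layer of a model (a sketch assuming a single-output functional model; model and some_input_batch are illustrative names):
import keras

# Replace the last layer with a copy that has a linear activation.
new_layer, weights = change_layer_activation(model.layers[-1])
x = new_layer(model.layers[-2].output)
no_activation_model = keras.Model(model.input, x)

# Copy the trained weights into the new layer, then predict pre-activation values.
new_layer.set_weights(weights)
pre_activation = no_activation_model.predict(some_input_batch)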
I had the same problem, but none of the other answers worked for me. I'm using a newer version of Keras with TensorFlow, so some answers don't work anymore. Also, the structure of the model is given, so I can't change it easily. The general idea is to create a copy of the original model that works exactly like the original one but splits the activation from the output layers. Once this is done we can easily access the output values before the activation is applied.
First we create a copy of the original model, but with no activation on the output layers. This is done using the Keras clone_model function (see the docs).
from tensorflow.keras.models import clone_model
from tensorflow.keras.layers import Activation

original_model = get_model()

def f(layer):
    config = layer.get_config()
    if not isinstance(layer, Activation) and layer.name in original_model.output_names:
        config.pop('activation', None)
    layer_copy = layer.__class__.from_config(config)
    return layer_copy

copy_model = clone_model(original_model, clone_function=f)
This alone will only make a clone with new weights so we must copy the original_model weights to the new one:
copy_model.build(original_model.input_shape)
copy_model.set_weights(original_model.get_weights())
Now we will add the activation layers:
from tensorflow.keras.models import Model
old_outputs = [ original_model.get_layer(name=name) for name in copy_model.output_names ]
new_outputs = [ Activation(old_output.activation)(output) if old_output.activation else output
                for output, old_output in zip(copy_model.outputs, old_outputs) ]
copy_model = Model(copy_model.inputs, new_outputs)
Finally we could create a new model whose evaluation will be the outputs with no activation applied:
no_activation_outputs = [ copy_model.get_layer(name=name).output for name in original_model.output_names ]
no_activation_model = Model(copy_model.inputs, no_activation_outputs)
Now we can use copy_model like the original_model, and no_activation_model to access the pre-activation outputs. You could even modify the code to split a custom set of layers instead of the outputs.
I have written an RNN language model using TensorFlow. The model is implemented as an RNN class. The graph structure is built in the constructor, while RNN.train and RNN.test methods run it.
I want to be able to reset the RNN state when I move to a new document in the training set, or when I want to run a validation set during training. I do this by managing the state inside the training loop, passing it into the graph via a feed dictionary.
In the constructor I define the RNN like so
cell = tf.nn.rnn_cell.LSTMCell(hidden_units)
rnn_layers = tf.nn.rnn_cell.MultiRNNCell([cell] * layers)
self.reset_state = rnn_layers.zero_state(batch_size, dtype=tf.float32)
self.state = tf.placeholder(tf.float32, self.reset_state.get_shape(), "state")
self.outputs, self.next_state = tf.nn.dynamic_rnn(rnn_layers, self.embedded_input,
                                                  time_major=True, initial_state=self.state)
The training loop looks like this
for document in documents:
    state = session.run(self.reset_state)
    for x, y in document:
        _, state = session.run([self.train_step, self.next_state],
                               feed_dict={self.x: x, self.y: y, self.state: state})
x and y are batches of training data in a document. The idea is that I pass the latest state along after each batch, except when I start a new document, when I zero out the state by running self.reset_state.
This all works. Now I want to change my RNN to use the recommended state_is_tuple=True. However, I don't know how to pass the more complicated LSTM state object via a feed dictionary. Also I don't know what arguments to pass to the self.state = tf.placeholder(...) line in my constructor.
What is the correct strategy here? There still isn't much example code or documentation for dynamic_rnn available.
TensorFlow issues 2695 and 2838 appear relevant.
A blog post on WILDML addresses these issues but doesn't directly spell out the answer.
See also TensorFlow: Remember LSTM state for next batch (stateful LSTM).
One problem with a TensorFlow placeholder is that you can only feed it with a Python list or NumPy array (I think). So you can't save the state between runs in tuples of LSTMStateTuple.
I solved this by saving the state in a tensor like this
initial_state = np.zeros((num_layers, 2, batch_size, state_size))
You have two components in an LSTM layer, the cell state and the hidden state; that's where the "2" comes from. (This article is great: https://arxiv.org/pdf/1506.00019.pdf)
When building the graph you unpack and create the tuple state like this:
state_placeholder = tf.placeholder(tf.float32, [num_layers, 2, batch_size, state_size])
l = tf.unpack(state_placeholder, axis=0)  # tf.unstack in newer TensorFlow versions
rnn_tuple_state = tuple(
    [tf.nn.rnn_cell.LSTMStateTuple(l[idx][0], l[idx][1])
     for idx in range(num_layers)]
)
Then you get the new state the usual way
cell = tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True)
cell = tf.nn.rnn_cell.MultiRNNCell([cell] * num_layers, state_is_tuple=True)
outputs, state = tf.nn.dynamic_rnn(cell, series_batch_input, initial_state=rnn_tuple_state)
It shouldn't be like this... perhaps they are working on a solution.
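For completeness, a usage sketch for this setup (train_step, targets, batches, and sess stand in for your own ops and data); the state returned by session.run nests as (layer, cell/hidden, batch, state) and can be fed straight back into the placeholder:
import numpy as np

current_state = np.zeros((num_layers, 2, batch_size, state_size))
for x_batch, y_batch in batches:
    _, current_state = sess.run(
        [train_step, state],
        feed_dict={series_batch_input: x_batch,
                   targets: y_batch,
                   state_placeholder: current_state})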
A simple way to feed in an RNN state is to feed both components of the state tuple individually.
# Constructing the graph
self.state = rnn_cell.zero_state(...)
self.output, self.next_state = tf.nn.dynamic_rnn(
    rnn_cell,
    self.input,
    initial_state=self.state)

# Running with initial state
output, state = sess.run([self.output, self.next_state], feed_dict={
    self.input: input
})

# Running with subsequent state:
output, state = sess.run([self.output, self.next_state], feed_dict={
    self.input: input,
    self.state[0]: state[0],
    self.state[1]: state[1]
})