TF Model Sub-Classing Errors with Symbolic Tensors/Graph Execution - python

I'm trying to recreate results from a paper and I'm having a lot of trouble running a TensorFlow sub-classed model without resorting to eager execution. Even with eager execution it doesn't seem to train properly (it reports zero trainable parameters and the loss metric is nearly constant). Whenever an error is thrown, the input arguments received all have 'None' shapes (as shown in the error below), which leads me to believe this is somehow an issue with the symbolic tensors used during graph execution.
This model uses the Spektral package with a dataset of labelled graphs as input. Here 'x' is the Spektral graph's node features, 'a' is the adjacency matrix, and 'e' is the (unused) edge feature matrix. 'C' is just a constant. The 'x' node features contain one-hot encodings and node coordinates, which are split and treated separately throughout the model.
This is the current state of the model.
class egcl(Model):
    def __init__(self):
        super().__init__()
    def call(self, inputs):
        x, a, C = InputParser()(inputs)
        h_feat, x_feat = NodeFeatureSplitter()(x)
        distance = NodeDistance()(x_feat)
        # Message matrices
        m_ij_input = tf.keras.layers.Concatenate()([tf.squeeze(h_feat), distance, tf.squeeze(a)])
        m_ij = tf.keras.layers.Dense(64, activation='swish')(m_ij_input)
        m_ij = tf.keras.layers.Dense(9, activation='swish')(m_ij)
        m_ij = tf.reshape(m_ij, shape=(3, 3, 3))
        m_i = tf.reduce_sum(m_ij, axis=2)
        # Update node feature
        h_i_next_input = tf.keras.layers.concatenate([distance, m_i])
        h_i_next = tf.keras.layers.Dense(64, activation='swish')(h_i_next_input)
        h_i_next = tf.keras.layers.Dense(3, activation='swish')(h_i_next)
        # Update coord feature
        x_i_next = tf.keras.layers.Dense(64, activation='swish')(m_i)
        x_i_next = tf.keras.layers.Dense(3, activation='swish')(x_i_next)
        x_i_next = x_feat + (C * distance * x_i_next)
        x_i_next = tf.squeeze(x_i_next)
        # Fit with graph labels
        out = tf.keras.layers.concatenate([h_i_next, x_i_next])
        out = tf.keras.layers.Dense(64, activation='swish')(out)
        out = tf.keras.layers.Dense(9, activation='swish')(out)
        out = tf.math.reduce_mean(out, axis=0)
        return out
# Model call
epochs = 5
model = egcl()
model.compile(optimizer=Adam(learning_rate=1e-03), loss=MeanSquaredError(), run_eagerly=False)
history = model.fit(loader.load(), steps_per_epoch=loader.steps_per_epoch, epochs=epochs)
And the custom classes called are defined as follows.
class InputParser(Layer):
    def __init__(self):
        super(InputParser, self).__init__()
    def call(self, inputs):
        x, a, e = inputs
        C = tf.cast(1/(len(x)), tf.float32)
        return x, a, C

class NodeFeatureSplitter(Layer):
    def __init__(self):
        super(NodeFeatureSplitter, self).__init__()
    def call(self, x):
        h_feat = x[..., :2]
        x_feat = x[..., -3:]
        return h_feat, x_feat

class NodeDistance(Layer):
    def __init__(self):
        super(NodeDistance, self).__init__()
    def call(self, x_feat):
        norm = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
        for i in range(len(x_feat[0])):
            for j in range(len(x_feat[0][i])):
                norm = norm.write(norm.size(), tf.math.reduce_euclidean_norm([x_feat[0][i] - x_feat[0][j]]))
        norm = norm.stack()
        norm = tf.reshape(norm, shape=(len(x_feat[0]), len(x_feat[0])))
        return norm
The current issue is the first concatenate layer throwing the following error,
ValueError: as_list() is not defined on an unknown TensorShape.
Call arguments received:
• inputs=('tf.Tensor(shape=(None, None, None), dtype=float32)', 'tf.Tensor(shape=(None, None, None), dtype=float32)', 'tf.Tensor(shape=(None, None, None, None), dtype=float32)')
Am I incorrectly sub-classing Model? Is this setup configured to train correctly even in eager execution mode? Please don't hesitate to ask for any clarifications or further info.
Thanks for any and all input <3
Edit: I tried Loris' suggestion of creating my layers in the init and calling them with 'self.'. This gets past the previous error, however there is now another issue:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Can not squeeze dim[1], expected a dimension of 1, got 9
[[node mean_squared_error/remove_squeezable_dimensions/Squeeze
W tensorflow/core/kernels/data/generator_dataset_op.cc:107] Error occurred when finalizing GeneratorDataset iterator: FAILED_PRECONDITION: Python interpreter state is not initialized. The process may be terminated.
How is the loss function and squeeze causing this?
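For reference, the restructuring described in the Edit looks roughly like this - a partial sketch (not the full model) with the layers created once in __init__ and reused in call:
class egcl(Model):
    def __init__(self):
        super().__init__()
        # sub-layers created once so their weights are tracked by the model
        self.parser = InputParser()
        self.splitter = NodeFeatureSplitter()
        self.distance = NodeDistance()
        self.concat = tf.keras.layers.Concatenate()
        self.dense_m1 = tf.keras.layers.Dense(64, activation='swish')
        self.dense_m2 = tf.keras.layers.Dense(9, activation='swish')
        # ... the remaining Dense layers are defined here in the same way ...
    def call(self, inputs):
        x, a, C = self.parser(inputs)
        h_feat, x_feat = self.splitter(x)
        distance = self.distance(x_feat)
        m_ij = self.dense_m1(self.concat([tf.squeeze(h_feat), distance, tf.squeeze(a)]))
        m_ij = self.dense_m2(m_ij)
        # ... the rest of call proceeds as in the original, using the self.* layers ...
        return m_ij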

Related

How to override gradient for the nonlinearity functions in lasagne?

I have a model for which I need to compute the gradients of the output w.r.t. the model's input. But I want to apply custom gradients for some of the nonlinearity functions applied on some of the model's layers. So I tried the idea explained here, which computes the nonlinear rectifier (ReLU) in the forward pass but modifies the gradient of ReLU in the backward pass. I added the following two classes:
The helper class that allows us to replace a nonlinearity with an Op that has the same output, but a custom gradient:
class ModifiedBackprop(object):
    def __init__(self, nonlinearity):
        self.nonlinearity = nonlinearity
        self.ops = {}  # memoizes an OpFromGraph instance per tensor type
    def __call__(self, x):
        # OpFromGraph is oblique to Theano optimizations, so we need to move
        # things to GPU ourselves if needed.
        if theano.sandbox.cuda.cuda_enabled:
            maybe_to_gpu = theano.sandbox.cuda.as_cuda_ndarray_variable
        else:
            maybe_to_gpu = lambda x: x
        # We move the input to GPU if needed.
        x = maybe_to_gpu(x)
        # We note the tensor type of the input variable to the nonlinearity
        # (mainly dimensionality and dtype); we need to create a fitting Op.
        tensor_type = x.type
        # If we did not create a suitable Op yet, this is the time to do so.
        if tensor_type not in self.ops:
            # For the graph, we create an input variable of the correct type:
            inp = tensor_type()
            # We pass it through the nonlinearity (and move to GPU if needed).
            outp = maybe_to_gpu(self.nonlinearity(inp))
            # Then we fix the forward expression...
            op = theano.OpFromGraph([inp], [outp])
            # ...and replace the gradient with our own (defined in a subclass).
            op.grad = self.grad
            # Finally, we memoize the new Op
            self.ops[tensor_type] = op
        # And apply the memoized Op to the input we got.
        return self.ops[tensor_type](x)
The subclass that does guided backpropagation through a nonlinearity:
class GuidedBackprop(ModifiedBackprop):
    def grad(self, inputs, out_grads):
        (inp,) = inputs
        (grd,) = out_grads
        dtype = inp.dtype
        print('It works')
        return (grd * (inp > 0).astype(dtype) * (grd > 0).astype(dtype),)
Then I used them in my code as follows:
import lasagne as nn
model_in = T.tensor3()
# model_in = net['input'].input_var
nn.layers.set_all_param_values(net['l_out'], model['param_values'])
relu = nn.nonlinearities.rectify
relu_layers = [layer for layer in nn.layers.get_all_layers(net['l_out'])
               if getattr(layer, 'nonlinearity', None) is relu]
modded_relu = GuidedBackprop(relu)
for layer in relu_layers:
    layer.nonlinearity = modded_relu
prop = nn.layers.get_output(net['l_out'], model_in, deterministic=True)
for sample in range(ini, batch_len):
    model_out = prop[sample, 'z']  # get prop for label 'z'
    gradients = theano.gradient.jacobian(model_out, wrt=model_in)
    # gradients = theano.grad(model_out, wrt=model_in)
    get_gradients = theano.function(inputs=[model_in], outputs=gradients)
    grads = get_gradients(X_batch)  # gradient dimension: X_batch == model_in(64, 20, 32)
    grads = np.array(grads)
    grads = grads[sample]
Now when I run the code, it works without any error, and the shape of the output is also correct. But that's because it executes the default theano.grad function and not the one that is supposed to override it. In other words, the grad() function in the GuidedBackprop class is never invoked.
I can't understand what the issue is. Is there a solution?
If this is an unresolved issue, is there an implementation of a Theano Op that can achieve such functionality, or some other way to override the gradient for specific nonlinearity functions applied on some of the model's layers?
Are you trying to set the value of the model output back into the model layer's input, for all the gradient calculations?
group_1_ShoryuKen_Left = tf.constant([ 0,0,0,0,0,1,0,0,0,0,0,0, 0,0,0,0,0,1,0,1,0,0,0,0, 0,0,0,0,0,0,0,1,0,0,0,0, 0,0,0,0,0,0,0,0,0,1,0,0 ], shape=(1, 1, 48), dtype=tf.float32)
## layer_2 = tf.keras.layers.Dense(256, kernel_initializer=tf.constant_initializer(1.))
layer_2 = tf.keras.layers.LSTM(32, kernel_initializer=tf.constant_initializer(1.))
b_out = layer_2(group_1_ShoryuKen_Left)
layer_2.set_weights(layer_1.get_weights())

Using RNN Trained Model without pytorch installed

I have trained an RNN model with pytorch. I need to use the model for prediction in an environment where I'm unable to install pytorch because of some strange dependency issue with glibc. However, I can install numpy and scipy and other libraries. So, I want to use the trained model, with the network definition, without pytorch.
I have the weights of the model as I save the model with its state dict and weights in the standard way, but I can also save it using just json/pickle files or similar.
I also have the network definition, which depends on pytorch in a number of ways. This is my RNN network definition.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import random

torch.manual_seed(1)
random.seed(1)
device = torch.device('cpu')

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, num_layers, matching_in_out=False, batch_size=1):
        super(RNN, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.num_layers = num_layers
        self.batch_size = batch_size
        self.matching_in_out = matching_in_out  # length of input vector matches the length of output vector
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers)
        self.hidden2out = nn.Linear(hidden_size, output_size)
        self.hidden = self.init_hidden()
    def forward(self, feature_list):
        feature_list = torch.tensor(feature_list)
        if self.matching_in_out:
            lstm_out, _ = self.lstm(feature_list.view(len(feature_list), 1, -1))
            output_space = self.hidden2out(lstm_out.view(len(feature_list), -1))
            output_scores = torch.sigmoid(output_space)  # we'll need to check if we need this sigmoid
            return output_scores
        else:
            for i in range(len(feature_list)):
                cur_ft_tensor = feature_list[i]  # .view([1,1,self.input_size])
                cur_ft_tensor = cur_ft_tensor.view([1, 1, self.input_size])
                lstm_out, self.hidden = self.lstm(cur_ft_tensor, self.hidden)
                outs = self.hidden2out(lstm_out)
            return outs
    def init_hidden(self):
        # return torch.rand(self.num_layers, self.batch_size, self.hidden_size)
        return (torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device),
                torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device))
I am aware of this question, but I'm willing to go as low level as possible. I can work with numpy array instead of tensors, and reshape instead of view, and I don't need a device setting.
Based on the class definition above, what I can see here is that I only need the following components from torch to get an output from the forward function:
nn.LSTM
nn.Linear
torch.sigmoid
I think I can easily implement the sigmoid function using numpy. However, can I have some implementation for the nn.LSTM and nn.Linear using something not involving pytorch? Also, how will I use the weights from the state dict into the new class?
So, the question is, how can I "translate" this RNN definition into a class that doesn't need pytorch, and how to use the state dict weights for it?
Alternatively, is there a "light" version of pytorch, that I can use just to run the model and yield a result?
EDIT
I think it might be useful to include the numpy/scipy equivalent for both nn.LSTM and nn.Linear. It would help us compare the numpy output to the torch output for the same code, and give us some modular code/functions to use. Specifically, a numpy equivalent of the following would be great:
rnn = nn.LSTM(10, 20, 2)
input = torch.randn(5, 3, 10)
h0 = torch.randn(2, 3, 20)
c0 = torch.randn(2, 3, 20)
output, (hn, cn) = rnn(input, (h0, c0))
and also for linear:
m = nn.Linear(20, 30)
input = torch.randn(128, 20)
output = m(input)
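For the linear layer, a numpy equivalent is just a matrix multiply with the layer's weight and bias. A minimal sketch, using the m and input from the snippet above:
import numpy as np
W = m.weight.detach().numpy()   # shape (30, 20)
b = m.bias.detach().numpy()     # shape (30,)
x = input.detach().numpy()      # shape (128, 20)
output_np = x @ W.T + b         # should match m(input) up to float precision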
You should try to export the model using torch.onnx. Its documentation page gives you an example that you can start with.
An alternative is to use TorchScript, but that requires the torch libraries.
Both of these can be run without Python. You can load TorchScript in a C++ application: https://pytorch.org/tutorials/advanced/cpp_export.html
ONNX is much more portable and you can use it in languages such as C#, Java, or JavaScript:
https://onnxruntime.ai/ (even in the browser)
A running example
Modifying your example a little to get past the errors I found.
Notice that with tracing, any if/elif/else, for, or while will be unrolled.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import random

torch.manual_seed(1)
random.seed(1)
device = torch.device('cpu')

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, num_layers, matching_in_out=False, batch_size=1):
        super(RNN, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.num_layers = num_layers
        self.batch_size = batch_size
        self.matching_in_out = matching_in_out  # length of input vector matches the length of output vector
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers)
        self.hidden2out = nn.Linear(hidden_size, output_size)
    def forward(self, x, h0, c0):
        lstm_out, (hidden_a, hidden_b) = self.lstm(x, (h0, c0))
        outs = self.hidden2out(lstm_out)
        return outs, (hidden_a, hidden_b)
    def init_hidden(self):
        # return torch.rand(self.num_layers, self.batch_size, self.hidden_size)
        return (torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device).detach(),
                torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device).detach())

# convert the arguments passed during the onnx.export call
class MWrapper(nn.Module):
    def __init__(self, model):
        super(MWrapper, self).__init__()
        self.model = model
    def forward(self, kwargs):
        return self.model(**kwargs)
Run an example
rnn = RNN(10, 10, 10, 3)
X = torch.randn(3,1,10)
h0,c0 = rnn.init_hidden()
print(rnn(X, h0, c0)[0])
Use the same input to trace the model and export an onnx file
torch.onnx.export(MWrapper(rnn), {'x': X, 'h0': h0, 'c0': c0}, 'rnn.onnx',
                  dynamic_axes={'x': {1: 'N'},
                                'c0': {1: 'N'},
                                'h0': {1: 'N'}},
                  input_names=['x', 'h0', 'c0'],
                  output_names=['y', 'hn', 'cn'])
Notice that you can use symbolic values for the dimensions of some axes of some inputs. Unspecified dimensions will be fixed with the values from the traced inputs. By default LSTM uses dimension 1 as batch.
Next we load the ONNX model and pass the same inputs
import onnxruntime
ort_model = onnxruntime.InferenceSession('rnn.onnx')
print(ort_model.run(['y'], {'x':X.numpy(), 'c0':c0.numpy(), 'h0':h0.numpy()}))
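To sanity-check the export, you can compare the ONNX Runtime output with the PyTorch output on the same inputs; a small sketch:
import numpy as np
torch_out = rnn(X, h0, c0)[0].detach().numpy()
onnx_out = ort_model.run(['y'], {'x': X.numpy(), 'c0': c0.numpy(), 'h0': h0.numpy()})[0]
print(np.allclose(torch_out, onnx_out, atol=1e-5))  # expected to print True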
Basically, implementing it in numpy and copying the weights from your pytorch model will do the trick. For your use case you only need a forward pass, so we just need to implement that.
# Set parameters for a small LSTM network
input_size = 2   # size of one 'event', or sample, in our batch of data
hidden_dim = 3   # 3 cells in the LSTM layer
output_size = 1  # desired model output
num_layers = 3
torch_lstm = RNN(input_size,
                 hidden_dim,
                 output_size,
                 num_layers,
                 matching_in_out=True)
state = torch_lstm.state_dict()  # state will capture the weights of your model
Now for the LSTM in numpy, these functions will be used.
The code below is from this link: https://towardsdatascience.com/the-lstm-reference-card-6163ca98ae87
### NOT MY CODE
import numpy as np
from scipy.special import expit as sigmoid

def forget_gate(x, h, Weights_hf, Bias_hf, Weights_xf, Bias_xf, prev_cell_state):
    forget_hidden = np.dot(Weights_hf, h) + Bias_hf
    forget_eventx = np.dot(Weights_xf, x) + Bias_xf
    return np.multiply(sigmoid(forget_hidden + forget_eventx), prev_cell_state)

def input_gate(x, h, Weights_hi, Bias_hi, Weights_xi, Bias_xi, Weights_hl, Bias_hl, Weights_xl, Bias_xl):
    ignore_hidden = np.dot(Weights_hi, h) + Bias_hi
    ignore_eventx = np.dot(Weights_xi, x) + Bias_xi
    learn_hidden = np.dot(Weights_hl, h) + Bias_hl
    learn_eventx = np.dot(Weights_xl, x) + Bias_xl
    return np.multiply(sigmoid(ignore_eventx + ignore_hidden), np.tanh(learn_eventx + learn_hidden))

def cell_state(forget_gate_output, input_gate_output):
    return forget_gate_output + input_gate_output

def output_gate(x, h, Weights_ho, Bias_ho, Weights_xo, Bias_xo, cell_state):
    out_hidden = np.dot(Weights_ho, h) + Bias_ho
    out_eventx = np.dot(Weights_xo, x) + Bias_xo
    return np.multiply(sigmoid(out_eventx + out_hidden), np.tanh(cell_state))
We will need the sigmoid function as well, so:
def sigmoid(x):
    return 1/(1 + np.exp(-x))
Because pytorch stores the gate weights in a stacked manner, we need to break them up; for that we will need the function below.
def get_slices(hidden_dim):
    # the stacked weight matrix has 4*hidden_dim rows, one block of rows per gate
    breaker = hidden_dim * 4
    slices = [[i, i + breaker // 4] for i in range(0, breaker, breaker // 4)]
    return slices
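As a quick check, with hidden_dim=3 the stacked weight matrix has 4*3 = 12 rows, and the slices mark one block of rows per gate:
print(get_slices(3))  # [[0, 3], [3, 6], [6, 9], [9, 12]] -> ignore/input, forget, learn/cell, output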
Now that we have the functions ready for the LSTM, we create an lstm class to copy the weights from the pytorch model and get the output from it.
class numpy_lstm:
    def __init__(self, layer_num=0, hidden_dim=1, matching_in_out=False):
        self.matching_in_out = matching_in_out
        self.layer_num = layer_num
        self.hidden_dim = hidden_dim
    def init_weights_from_pytorch(self, state):
        slices = get_slices(self.hidden_dim)
        print(slices)
        # Event (x) weights and biases for all gates
        lstm_weight_ih = 'lstm.weight_ih_l' + str(self.layer_num)
        self.Weights_xi = state[lstm_weight_ih][slices[0][0]:slices[0][1]].numpy()  # shape [h, x]
        self.Weights_xf = state[lstm_weight_ih][slices[1][0]:slices[1][1]].numpy()  # shape [h, x]
        self.Weights_xl = state[lstm_weight_ih][slices[2][0]:slices[2][1]].numpy()  # shape [h, x]
        self.Weights_xo = state[lstm_weight_ih][slices[3][0]:slices[3][1]].numpy()  # shape [h, x]
        lstm_bias_ih = 'lstm.bias_ih_l' + str(self.layer_num)
        self.Bias_xi = state[lstm_bias_ih][slices[0][0]:slices[0][1]].numpy()  # shape is [h, 1]
        self.Bias_xf = state[lstm_bias_ih][slices[1][0]:slices[1][1]].numpy()  # shape is [h, 1]
        self.Bias_xl = state[lstm_bias_ih][slices[2][0]:slices[2][1]].numpy()  # shape is [h, 1]
        self.Bias_xo = state[lstm_bias_ih][slices[3][0]:slices[3][1]].numpy()  # shape is [h, 1]
        lstm_weight_hh = 'lstm.weight_hh_l' + str(self.layer_num)
        # Hidden state (h) weights and biases for all gates
        self.Weights_hi = state[lstm_weight_hh][slices[0][0]:slices[0][1]].numpy()  # shape is [h, h]
        self.Weights_hf = state[lstm_weight_hh][slices[1][0]:slices[1][1]].numpy()  # shape is [h, h]
        self.Weights_hl = state[lstm_weight_hh][slices[2][0]:slices[2][1]].numpy()  # shape is [h, h]
        self.Weights_ho = state[lstm_weight_hh][slices[3][0]:slices[3][1]].numpy()  # shape is [h, h]
        lstm_bias_hh = 'lstm.bias_hh_l' + str(self.layer_num)
        self.Bias_hi = state[lstm_bias_hh][slices[0][0]:slices[0][1]].numpy()  # shape is [h, 1]
        self.Bias_hf = state[lstm_bias_hh][slices[1][0]:slices[1][1]].numpy()  # shape is [h, 1]
        self.Bias_hl = state[lstm_bias_hh][slices[2][0]:slices[2][1]].numpy()  # shape is [h, 1]
        self.Bias_ho = state[lstm_bias_hh][slices[3][0]:slices[3][1]].numpy()  # shape is [h, 1]
    def forward_lstm_pass(self, input_data):
        h = np.zeros(self.hidden_dim)
        c = np.zeros(self.hidden_dim)
        output_list = []
        for eventx in input_data:
            f = forget_gate(eventx, h, self.Weights_hf, self.Bias_hf, self.Weights_xf, self.Bias_xf, c)
            i = input_gate(eventx, h, self.Weights_hi, self.Bias_hi, self.Weights_xi, self.Bias_xi,
                           self.Weights_hl, self.Bias_hl, self.Weights_xl, self.Bias_xl)
            c = cell_state(f, i)
            h = output_gate(eventx, h, self.Weights_ho, self.Bias_ho, self.Weights_xo, self.Bias_xo, c)
            if self.matching_in_out:  # doesn't make sense but it was as it was in main code :(
                output_list.append(h)
        if self.matching_in_out:
            return output_list
        else:
            return h
Similarly for fully connected layer,
class fully_connected_layer:
    def __init__(self, state, dict_name='fc'):
        self.fc_Weight = state[dict_name + '.weight'][0].numpy()
        self.fc_Bias = state[dict_name + '.bias'][0].numpy()  # shape is [,output_size]
    def forward(self, lstm_output, is_sigmoid=True):
        res = np.dot(self.fc_Weight, lstm_output) + self.fc_Bias
        print(res)
        if is_sigmoid:
            return sigmoid(res)
        else:
            return res
Now we need one class to call all of them together and generalise them with respect to multiple layers.
You can modify the class below if you need more fully connected layers or want to set a false condition for the sigmoid, etc.
class RNN_model_Numpy:
    def __init__(self, state, input_size, hidden_dim, output_size, num_layers, matching_in_out=True):
        self.lstm_layers = []
        for i in range(0, num_layers):
            lstm_layer_obj = numpy_lstm(layer_num=i, hidden_dim=hidden_dim, matching_in_out=True)
            lstm_layer_obj.init_weights_from_pytorch(state)
            self.lstm_layers.append(lstm_layer_obj)
        self.hidden2out = fully_connected_layer(state, dict_name='hidden2out')
    def forward(self, feature_list):
        for x in self.lstm_layers:
            lstm_output = x.forward_lstm_pass(feature_list)
            feature_list = lstm_output
        return self.hidden2out.forward(feature_list, is_sigmoid=False)
Sanity check on a numpy variable:
data = np.array([[1, 1],
                 [2, 2],
                 [3, 3]])
check = RNN_model_Numpy(state, input_size, hidden_dim, output_size, num_layers)
check.forward(data)
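For comparison, you can push the same data through the PyTorch model (a rough check; note that RNN_model_Numpy above is called with is_sigmoid=False while the torch forward applies a sigmoid, so compare the pre-sigmoid values):
import torch
with torch.no_grad():
    torch_out = torch_lstm(data.astype(np.float32))
print(torch_out)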
EXPLANATION:
Since we just need the forward pass, we only need the functions an LSTM requires: the forget gate, input gate, cell gate and output gate. They are just operations performed on the input you give.
The get_slices function is used to break down the stacked weight matrix that we get from the pytorch state dictionary (the state dictionary is the dictionary containing the weights of all the layers in our network).
PyTorch stacks the LSTM gates in this order: ignore (input), forget, learn (cell), output, so we need to break the matrix up into one block per gate.
The numpy_lstm class has an init_weights_from_pytorch function which must be called; it extracts the weights from the state dictionary (which we got earlier from the pytorch model object) and populates the numpy weight arrays with the pytorch weights. You can first train your model, save the state dictionary through pickle, and then use it.
The fully connected layer class just implements the hidden2out network.
Finally, the RNN_model_Numpy class is there to ensure that if you have multiple layers, the output of one LSTM layer is passed on to the next.
Lastly, there is a small sanity check on the data variable.
IMPORTANT NOTE: PLEASE NOTE THAT YOU MIGHT GET A DIMENSION ERROR, AS PYTORCH'S WAY OF HANDLING INPUT IS COMPLETELY DIFFERENT, SO PLEASE ENSURE THAT YOUR NUMPY INPUT HAS A SIMILAR SHAPE TO THE DATA VARIABLE.
Important references:
https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html
https://christinakouridi.blog/2019/06/19/backpropagation-lstm/

What is the correct way to update an input variable during training?

I have an input
inp = torch.tensor([1.0])
and a neural network
class Model_updater(nn.Module):
    def __init__(self):
        super(Model_updater, self).__init__()
        self.fc1 = nn.Linear(1, 2)
        self.fc2 = nn.Linear(2, 3)
        self.fc3 = nn.Linear(3, 2)
    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net_updater = Model_updater()
opt_updater = optim.Adam(net_updater.parameters())
I'm trying to update my input using the neural network's output:
inp = torch.tensor([1.0])
epochs = 3
for i in range(epochs):
    opt_updater.zero_grad()
    inp_copy = inp.detach().clone()
    mu, sigma = net_updater(inp_copy)
    dist1 = Normal(mu, torch.abs(sigma))
    a = dist1.rsample()
    inp += a
    loss = torch.tensor(5.0) - inp
    loss.backward(retain_graph=True)
    opt_updater.step()
But getting the error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [3, 2]], which is output 0 of TBackward, is at version 2; expected version 1
I also tried changing the loss calculations with
loss = torch.tensor(5.0) - inp_copy
But got the error
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I also tried without the retain_graph=True but I get
RuntimeError: Trying to backward through the graph a second time,
but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time.
This doesn't really make sense to me, because I don't see where I'm calling backward() twice.
Most likely, this is what you want
inp1 = inp + a # create a separate variable for updated value
inp.data = inp1.data # update the value without touching the graph
loss = torch.tensor(5.0) - inp1 # use updated value which has gradient
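Applied to the training loop from the question, that suggestion looks roughly like this (a sketch; retain_graph is no longer needed because a fresh graph is built each iteration):
inp = torch.tensor([1.0])
epochs = 3
for i in range(epochs):
    opt_updater.zero_grad()
    mu, sigma = net_updater(inp.detach().clone())
    a = Normal(mu, torch.abs(sigma)).rsample()
    inp1 = inp + a            # separate variable that carries the gradient
    inp.data = inp1.data      # update the stored value without touching the graph
    loss = torch.tensor(5.0) - inp1
    loss.backward()
    opt_updater.step()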

Error when defining custom gradients in Keras

I have been trying to define a custom layer in Keras with a custom discrete gradient, as the activation function is discrete.
The layer looks like this:
class DiffLayer(tf.keras.layers.Layer):
    def __init__(self):
        super(DiffLayer, self).__init__()
    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(15, 1),
            initializer="random_normal",
            trainable=True,
        )
        self.b = self.add_weight(
            shape=(1, 1), initializer="random_normal", trainable=True
        )
    def call(self, x):
        z = tf.matmul(Flatten()(x), self.w) + self.b
        a = custom_op(z)
        self.a = a
        if K.greater(a, 0.5):
            return x - 1
        else:
            return x
And the custom_op function:
@tf.custom_gradient
def custom_op(x):
    a = 1. / (1. + K.exp(-x))
    def custom_grad(dy):
        if K.greater(a, 0.5):
            grad = K.exp(x)
        else:
            grad = 0
        return grad
    return a, custom_grad
I have followed the tutorials from this post, but when I try to fit the network I am working with, I get the following warning:
WARNING:tensorflow:Gradients do not exist for variables ['diff_layer_10/Variable:0', 'diff_layer_10/Variable:0'] when minimizing the loss.
My guess is that Keras is not detecting the defined gradient because of the way it is defined, but I cannot think of a different way of defining it.
Is this the case or am I missing something in my code?
EDIT
As suggested by one of the comments, I am going to further explain what I am trying to do. I want a to be a parameter that decides what happens to the input data. If a is greater than 0.5 then I want 1 subtracted from the input data; otherwise the layer should return the input data unchanged.
I do not know if that is possible to do in Keras.
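For reference (not a fix for the code above), this is roughly the structure tf.custom_gradient expects: the inner function receives dy and returns dy-scaled gradients built from tensor ops such as tf.where rather than Python if/else. A hedged sketch:
@tf.custom_gradient
def custom_op(x):
    a = 1. / (1. + tf.exp(-x))
    def custom_grad(dy):
        # gradient w.r.t. x, chosen element-wise with a tensor condition
        return dy * tf.where(a > 0.5, tf.exp(x), tf.zeros_like(x))
    return a, custom_grad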

Keras: How to create a custom layer with weights when the input shape is unknown during compilation?

I want to define a pre-processing layer just after my input layer, i.e. it will use the mean and variance of a scaler that was computed beforehand and apply them to my inputs before passing them to the Dense network.
Lambda layers do not work in my case because I want to save the model; the objective is that when the model is applied to data, there is no need to pre-process the inputs, since that is done in the early stages of the network.
Using K.variable objects for the mean and var works, but I would like to use weights instead and set trainable=False. This way they will be saved with the weights of the network and I don't have to provide them each time.
class PreprocessLayer(Layer):
    """
    Defines a layer that applies the preprocessing from a scaler
    Needed because lambda layers are too fragile to be saved in a model
    """
    def __init__(self, batch_size, mean, var, **kwargs):
        self.b = batch_size
        self.m = mean
        self.v = var
        super(PreprocessLayer, self).__init__(**kwargs)
    def build(self, input_shape):
        self.mean = self.add_weight(name='mean',
                                    shape=(self.b, input_shape[1]),
                                    initializer=tf.constant_initializer(self.m),
                                    trainable=False)
        self.var = self.add_weight(name='var',
                                   shape=(self.b, input_shape[1]),
                                   initializer=tf.constant_initializer(self.v),
                                   trainable=False)
        super(PreprocessLayer, self).build(input_shape)  # Be sure to call this at the end
    def call(self, x):
        return (x - self.mean) / self.var
    def compute_output_shape(self, input_shape):
        return (input_shape[0], input_shape[1])
    def get_config(self):
        config = super(PreprocessLayer, self).get_config()
        config['mean'] = self.m
        config['var'] = self.v
        return config
And I call this layer with
L0 = PreprocessLayer(batch_size=20,mean=scaler.mean_,var=scaler.scale_)(IN)
The problem arises at
shape=(self.b,input_shape[1]),
which gives me the following error (when batch_size is 20):
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [32,15] vs. [20,15]
[[Node: preprocess_layer_1/sub = Sub[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_IN_0_0, preprocess_layer_1/mean/read)]]
From what I understand, since my weights (mean and var) need to have the same shape as the input x, the first axis poses a problem when the batch_size is not a divisor of the training size, because it will take different values during training. That causes the crash, because the shape has to be determined at compile time and I cannot leave it blank.
Is there any way to have a dynamic value for the first value of shape? If not, is there a workaround for this problem?
For anyone having the same issue - a leftover batch smaller than the batch_size at the end of the epoch (because the training and testing sizes are not multiples of the batch size), which results in an InvalidArgumentError: Incompatible shapes - here is my fix.
Since this remainder will always be smaller than the batch_size, what I did in the call function is slice the weights like this:
def call(self, x):
    mean = self.mean[:K.shape(x)[0], :]
    std = self.std[:K.shape(x)[0], :]
    return (x - mean) / std
This works, but it means that if a batch size larger than the one that initialized the layer is used to evaluate the model, the error will pop up again.
This is why I put this in the __init__:
self.b = max(32, batch_size)
because predict() uses batch_size = 32 by default.
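Concretely, the adjusted constructor described above would look roughly like this (same attribute names as in the question):
def __init__(self, batch_size, mean, var, **kwargs):
    self.b = max(32, batch_size)  # predict() defaults to batch_size=32
    self.m = mean
    self.v = var
    super(PreprocessLayer, self).__init__(**kwargs)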
I do not think you need to add mean and var as weights. You can calculate them in your call function. I also do not exactly understand why you want to use this instead of BatchNormalization, but anyway, maybe you can try this code:
class PreprocessLayer(Layer):
    def __init__(self, eps=1e-6, **kwargs):
        self.eps = eps
        super(PreprocessLayer, self).__init__(**kwargs)
    def build(self, input_shape):
        super(PreprocessLayer, self).build(input_shape)
    def call(self, x):
        mean = K.mean(x, axis=-1, keepdims=True)
        std = K.std(x, axis=-1, keepdims=True)
        return (x - mean) / (std + self.eps)
    def compute_output_shape(self, input_shape):
        return input_shape
eps is to avoid division by 0.
I do not guarantee this will work, but maybe give it a try.
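A minimal usage sketch for that layer (the input shape and Dense size here are only illustrative):
from tensorflow.keras import layers, models

IN = layers.Input(shape=(15,))
x = PreprocessLayer()(IN)
OUT = layers.Dense(1)(x)
model = models.Model(IN, OUT)
model.summary()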
