Keras: Constraint linking input and output - python

Is there a way in which we can enforce constraint on the prediction of sequences?
Say, if my modeling is as follows:
model = Sequential()
model.add(LSTM(150, input_shape=(n_timesteps_in, n_features)))
model.add(RepeatVector(n_timesteps_in))
model.add(LSTM(150, return_sequences=True))
model.add(TimeDistributed(Dense(n_features, activation='linear')))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['acc'])
Can I somehow capture the constraint that model.predict(x) <= x?
The docs show that we can add constraints to the network weights. However, they do not mention how to map relationships or constraints between input and output.

Never heard of it.... but there are quite a few ways you can implement that yourself using a functional API model and custom functions.
Below is a possible answer, but first: is this really the best thing to do?
If you're trying to create an autoencoder, you should not care about limiting the outputs; otherwise, your model will not really learn much.
Maybe the best approach is simply to normalize the inputs first (between -1 and +1) and use a tanh activation at the end.
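For reference, a minimal sketch of that normalization (x_train is assumed to stand in for your data array; this is not part of the original answer):
x_min, x_max = x_train.min(), x_train.max()
x_scaled = 2 * (x_train - x_min) / (x_max - x_min) - 1   # values now in [-1, +1]
# ...and at the end of the model, use a tanh activation:
# model.add(TimeDistributed(Dense(n_features, activation='tanh')))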
Functional API model to preserve the input:
inputTensor = Input((n_timesteps_in, n_features))
out = LSTM(150)(inputTensor) #input_shape is not needed in the functional API; the Input layer defines it
out = RepeatVector(n_timesteps_in)(out) #this line sounds funny in your model...
out = LSTM(150, return_sequences=True)(out)
out = TimeDistributed(Dense(n_features))(out)
out = Activation(chooseOneActivation)(out)
out = Lambda(chooseACustomFunction)([out,inputTensor])
model = Model(inputTensor,out)
model.compile(...)
Custom limit options
There are countless ways of doing this; here are some examples that may or may not be what you need, but from them you are free to develop anything similar.
The options below limit the individual outputs to the respective individual inputs. You may prefer to confine all outputs to the maximum input instead.
If so, use this instead: maxInput = K.max(originalInput, axis=1, keepdims=True)
1 - A simple stretched 'tanh':
You can simply define both top and bottom limits by using a tanh (that ranges from -1 to +1) and multiplying it by the inputs.
Use the Activation('tanh') layer, and the following custom function in the Lambda layer:
import keras.backend as K
def stretchedTanh(x):
    originalOutput = x[0]
    originalInput = x[1]
    return K.abs(originalInput) * originalOutput
I'm not totally sure this will be a healthy option. If the idea is to create an autoencoder, this model will easily find a solution of outputting all tanh activations as close to 1 as possible, without really looking at the inputs.
2 - Modified 'relu'
First, you could simply clip your outputs based on the inputs by modifying a relu activation. Use Activation('relu')(out) in your model above, and the following custom function in the Lambda layer:
def modifiedRelu(x):
    negativeOutput = (-1) * x[0] #ranging from -infinite to 0
    originalInput = x[1]

    #ranging from -infinite to originalInput
    return negativeOutput + originalInput #needs the same shape between input and output
This may have a downside when everything goes above the limit and backpropagation cannot recover (a problem that can happen with 'relu').
3 - Half linear, half modified tanh
In this case, you don't need the Activation layer, or you can use it as 'linear'.
import keras.backend as K
def halfTanh(x):
    originalOutput = x[0]
    originalInput = x[1] #assuming all inputs are positive

    #find the positive outputs and get a tensor with 1's at their positions
    positiveOutputs = K.greater(originalOutput, 0)
    positiveOutputs = K.cast(positiveOutputs, K.floatx())

    #now the 1's are at the negative positions
    negativeOutputs = 1 - positiveOutputs

    tanhOutputs = K.tanh(originalOutput) #function limited to -1 or +1
    tanhOutputs = originalInput * tanhOutputs #raises the limit from 1 to originalInput

    #use the conditions above to select between the negative and the positive side
    return positiveOutputs * tanhOutputs + negativeOutputs * originalOutput
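Whichever option you choose, the wiring into the functional skeleton above is the same. A sketch using option 3 (halfTanh is the function defined just above; output_shape is passed explicitly since the Lambda receives a list of two tensors, which some backends require):
out = TimeDistributed(Dense(n_features))(out)   # linear output, as noted above
out = Lambda(halfTanh, output_shape=lambda shapes: shapes[0])([out, inputTensor])
model = Model(inputTensor, out)
model.compile(loss='mean_squared_error', optimizer='adam')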

Keras provides an easy way to handle such trivial constraints. We could write out = Minimum()([out, input_tensor])
Complete example
import keras
from keras.layers.merge import Maximum, Minimum
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
n_timesteps_in = 20
n_features=2
input_tensor = keras.layers.Input(shape = [n_timesteps_in, n_features])
out = keras.layers.LSTM(150)(input_tensor)
out = keras.layers.RepeatVector(n_timesteps_in)(out)
out = keras.layers.LSTM(150, return_sequences=True)(out)
out = keras.layers.TimeDistributed(keras.layers.Dense(n_features))(out)
out = Minimum()([out, input_tensor])
model = keras.Model(input_tensor, out )
SVG(model_to_dot(model, show_shapes=True, show_layer_names=True, rankdir='HB').create(prog='dot', format='svg'))
Here's the network structure of the model. It shows how the input and the output are combined to compute the clamped output.
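For completeness, a hedged usage sketch (X here is assumed to be an array of shape (samples, n_timesteps_in, n_features)):
import numpy as np
model.compile(loss='mean_squared_error', optimizer='adam')
X = np.random.rand(32, n_timesteps_in, n_features).astype('float32')
model.fit(X, X, epochs=2, batch_size=8)          # autoencoder-style training
pred = model.predict(X)
print((pred <= X + 1e-6).all())                   # the Minimum() merge guarantees pred <= X elementwise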

Related

Realization of time dependent neural network in keras using for-loop

I have a recurrent model in Keras that explicitly realizes a for-loop over the sequence; see the model below [edited].
I have a large input matrix of size [samples ~1M x time ~1k] and I need to apply the same model to a segment of time neighbours, roughly (-5, +5), around each time point.
Writing it that way seems:
Unnatural in terms of Keras usage
Possibly inefficient because of the for-loop, and it is hard to tell why.
The question is: should I, and can I, replace the for-loop within the model with something more built into Keras layers?
Will that also help in terms of speed?
If so, how would that be done? Is the TimeDistributed Keras layer the correct approach? I couldn't figure out how to take a running time segment with that model.
Thanks
def my_model(signal_length=1000, pre=5, post=5, num_classes=4):
    # Define the input
    X = Input(shape=(signal_length,))
    # output as a list of size (slightly less) than X
    outputs = []
    # sliding window on X, considering each segment as input
    for t in range(pre, signal_length - post):
        # per time unit, consider the input signal of the ~10 neighboring samples:
        p1 = t - pre
        p2 = t + post
        Xt = Lambda(lambda x: X[:, p1:p2])(X)
        # Connect to network:
        Xt = Dense(20, activation='relu')(Xt)
        Xt = Dense(20, activation='relu')(Xt)
        Xt = Dense(num_classes, activation='softmax')(Xt)
        # output per unit time in a list:
        outputs.append(Xt)
    model = Model(inputs=X, outputs=outputs)
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model
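No answer is attached to this related question here. Purely as a hedged sketch (an assumption, not taken from the thread): a Conv1D over the time axis performs the same dense computation on every sliding window in a single built-in layer, avoiding the Python for-loop entirely:
from keras.layers import Input, Conv1D, Reshape
from keras.models import Model

def my_model_conv(signal_length=1000, pre=5, post=5, num_classes=4):
    X = Input(shape=(signal_length,))
    x = Reshape((signal_length, 1))(X)                       # add a channel axis
    # one window of length pre+post per time step, analogous to the loop slices
    # (note: 'valid' convolution yields signal_length - (pre+post) + 1 windows,
    #  one more than the loop above, which stops at signal_length - post - 1)
    x = Conv1D(20, kernel_size=pre + post, activation='relu')(x)
    x = Conv1D(20, kernel_size=1, activation='relu')(x)      # per-window Dense(20)
    out = Conv1D(num_classes, kernel_size=1, activation='softmax')(x)
    model = Model(inputs=X, outputs=out)
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model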

How to verify the structure of a neural network in a Keras model?

I'm new to Keras and neural networks. I'm writing a thesis and trying to create a SimpleRNN in Keras as illustrated below:
As shown in the picture, I need to create a model with 4 inputs + 2 outputs and with any number of neurons in the hidden layer.
This is my code:
model = Sequential()
model.add(SimpleRNN(4, input_shape=(1, 4), activation='sigmoid', return_sequences=True))
model.add(Dense(2))
model.compile(loss='mean_absolute_error', optimizer='adam')
model.fit(data, target, epochs=5000, batch_size=1, verbose=2)
predict = model.predict(data)
1) Does my model implement the graph?
2) Is it possible to specify connections between neurons Input and Hidden layers or Output and Input layers?
Explanation:
I am going to use backpropagation to train my network.
I have input and target values
Input is a 10*4 array and target is a 10*2 array which I then reshape:
input = input.reshape((10, 1, 4))
target = target.reshape((10, 1, 2))
It is crucial for me to be able to specify connections between neurons, as they can be different. For instance, here is an example:
1) Not really. But I'm not sure about what exactly you want in that graph. (Let's see how Keras recurrent layers work below)
2) Yes, it's possible to connect every layer to every layer, but you can't use Sequential for that, you must use Model.
This answer may not be what you're looking for. What exactly do you want to achieve? What kind of data you have, what output you expect, what is the model supposed to do? etc...
1 - How does a recurrent layer work?
Documentation
Recurrent layers in keras work with an "input sequence" and may output a single result or a sequence result. Their recurrence is totally contained within the layer and doesn't interact with other layers.
You should have inputs with shape (NumberOrExamples, TimeStepsInSequence, DimensionOfEachStep). This means input_shape=(TimeSteps,Dimension).
The recurrent layer will work internally with each time step. The cycles happen from step to step and this behavior is totally invisible. The layer seems to work just like any other layer.
This doesn't seem to be what you want, unless you have a "sequence" to input. The only way I know of using recurrent layers in Keras that is similar to your graph is when you have a segment of a sequence and want to predict the next step. If that's the case, see some examples by searching for "predicting the next element" on Google.
2 - How to connect layers using Model:
Instead of adding layers to a sequential model (which will always follow a straight line), start using the layers independently, starting from an input tensor:
from keras.layers import *
from keras.models import Model
inputTensor = Input(shapeOfYourInput) #it seems the shape is "(2,)", but we must see your data.
#A dense layer with 2 outputs:
myDense = Dense(2, activation=ItsAGoodIdeaToUseAnActivation)
#The output tensor of that layer when you give it the input:
denseOut1 = myDense(inputTensor)
#You can do as many cycles as you want here:
denseOut2 = myDense(denseOut1)
#you can even make a loop:
denseOut = Activation(ItsAGoodIdeaToUseAnActivation)(inputTensor) #you may create a layer and call it with the input tensor in just one line if you're not going to reuse the layer
#I'm applying this activation layer here because since we defined an activation for the dense layer and we're going to cycle it, it's not going to behave very well receiving huge values in the first pass and small values the next passes....
for i in range(n):
    denseOut = myDense(denseOut)
This kind of usage allows you to create any kind of model, with branches, alternative ways, connections from anywhere to anywhere, provided you respect the shape rules. For a cycle like that, inputs and outputs must have the same shape.
At the end, you must define a model from one or many inputs to one or many outputs (you must have training data to match all inputs and outputs you choose):
model = Model(inputTensor,denseOut)
But notice that this model is static. If you want to change the number of cycles, you will have to create a new model.
In this case, it would be as simple as repeating the loop step denseOut = myDense(denseOut) and creating another model2=Model(inputTensor,denseOut).
3 - Trying to create something like the image below:
I am supposing C and F will participate in all iterations; if not, the model below would need to be adapted accordingly.
Since there are four actual inputs, and we are going to treat them all separately, let's create 4 inputs instead, all like (1,).
Your input array should be divided into 4 arrays, each of shape (10,1).
from keras.models import Model
from keras.layers import *
inputA = Input((1,))
inputB = Input((1,))
inputC = Input((1,))
inputF = Input((1,))
Now the layers N2 and N3, that will be used only once, since C and F are constant:
outN2 = Dense(1)(inputC)
outN3 = Dense(1)(inputF)
Now the recurrent layer N1, without giving it the tensors yet:
layN1 = Dense(1)
For the loop, let's create outA and outB. They start as actual inputs and will be given to the layer N1, but in the loop they will be replaced:
outA = inputA
outB = inputB
Now in the loop, let's do the "passes":
for i in range(n):
    #unite A and B in one
    inputAB = Concatenate()([outA, outB])

    #pass through N1
    outN1 = layN1(inputAB)

    #sum results of N1 and N2 into A
    outA = Add()([outN1, outN2])

    #this is constant for all the passes except the first
    outB = outN3 #looks like B is never changing in your image....
Now the model:
finalOut = Concatenate()([outA,outB])
model = Model([inputA,inputB,inputC,inputF], finalOut)
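As a hedged sketch of training this model (not part of the original answer; arrA, arrB, arrC, arrF are assumed to be the four (10,1) slices of the question's (10,4) input, and target the (10,2) array matching the concatenated [outA, outB] output):
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit([arrA, arrB, arrC, arrF], target, epochs=100, batch_size=1)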

Keras retrieve value of node before activation function

Imagine a fully-connected neural network with its last two layers of the following structure:
[Dense]
units = 612
activation = softplus
[Dense]
units = 1
activation = sigmoid
The output value of the net is 1, but I'd like to know what the input x to the sigmoidal function was (must be some high number, since sigm(x) is 1 here).
Following indraforyou's answer I managed to retrieve the output and weights of Keras layers:
outputs = [layer.output for layer in model.layers[-2:]]
functors = [K.function( [model.input]+[K.learning_phase()], [out] ) for out in outputs]
test_input = np.array(...)
layer_outs = [func([test_input, 0.]) for func in functors]
print layer_outs[-1][0] # -> array([[ 1.]])
dense_0_out = layer_outs[-2][0] # shape (612, 1)
dense_1_weights = model.layers[-1].weights[0].get_value() # shape (1, 612)
dense_1_bias = model.layers[-1].weights[1].get_value()
x = np.dot(dense_0_out, dense_1_weights) + dense_1_bias
print x # -> -11.7
How can x be a negative number? In that case, the last layer's output should be a number closer to 0.0 than 1.0. Are dense_0_out or dense_1_weights the wrong outputs or weights?
Since you're using get_value(), I'll assume that you're using Theano backend. To get the value of the node before the sigmoid activation, you can traverse the computation graph.
The graph can be traversed starting from outputs (the result of some computation) down to its inputs using the owner field.
In your case, what you want is the input x of the sigmoid activation op. The output of the sigmoid op is model.output. Putting these together, the variable x is model.output.owner.inputs[0].
If you print out this value, you'll see Elemwise{add,no_inplace}.0, which is an element-wise addition op. It can be verified from the source code of Dense.call():
def call(self, inputs):
    output = K.dot(inputs, self.kernel)
    if self.use_bias:
        output = K.bias_add(output, self.bias)
    if self.activation is not None:
        output = self.activation(output)
    return output
The input to the activation function is the output of K.bias_add().
With a small modification of your code, you can get the value of the node before activation:
x = model.output.owner.inputs[0]
func = K.function([model.input] + [K.learning_phase()], [x])
print func([test_input, 0.])
For anyone using TensorFlow backend: use x = model.output.op.inputs[0] instead.
I can see a simple way by just changing the model structure a little. (See at the end how to use the existing model and change only its ending.)
The advantages of this method are:
You don't have to guess if you're doing the right calculations
You don't need to care about the dropout layers and how to implement a dropout calculation
This is a pure Keras solution (applies to any backend, either Theano or Tensorflow).
There are two possible solutions below:
Option 1 - Create a new model from start with the proposed structure
Option 2 - Reuse an existing model changing only its ending
Model structure
You could just have the last dense separated in two layers at the end:
[Dense]
units = 612
activation = softplus
[Dense]
units = 1
#no activation
[Activation]
activation = sigmoid
Then you simply get the output of the last dense layer.
I'd say you should create two models, one for training, the other for checking this value.
Option 1 - Building the models from the beginning:
from keras.models import Model
#build the initial part of the model the same way you would
#add the Dense layer without an activation:
#if using the functional Model API
denseOut = Dense(1)(outputFromThePreviousLayer)
sigmoidOut = Activation('sigmoid')(denseOut)
#if using the sequential model - will need the functional API
model.add(Dense(1))
sigmoidOut = Activation('sigmoid')(model.output)
Create two models from that, one for training, one for checking the output of dense:
#if using the functional API
checkingModel = Model(yourInputs, denseOut)
#if using the sequential model:
checkingModel = model
trainingModel = Model(checkingModel.inputs, sigmoidOut)
Use trainingModel for training normally. The two models share weights, so training one is training the other.
Use checkingModel just to see the outputs of the Dense layer, using checkingModel.predict(X)
Option 2 - Building this from an existing model:
from keras.models import Model
#find the softplus dense layer and get its output:
softplusOut = oldModel.layers[indexForSoftplusLayer].output
#or should this be the output from the dropout? Whichever tensor comes immediately before the last Dense(1)
#recreate the dense layer
outDense = Dense(1, name='newDense', ...)(softplusOut)
#create the new model
checkingModel = Model(oldModel.inputs,outDense)
It's important, since you created a new Dense layer, to get the weights from the old one:
wgts = oldModel.layers[indexForDense].get_weights()
checkingModel.get_layer('newDense').set_weights(wgts)
In this case, training the old model will not update the last dense layer in the new model, so, let's create a trainingModel:
outSigmoid = Activation('sigmoid')(checkingModel.output)
trainingModel = Model(checkingModel.inputs,outSigmoid)
Use checkingModel for checking the values you want with checkingModel.predict(X). And train the trainingModel.
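A short hedged usage sketch (X and Y stand in for your training data; the loss is just an assumption for a sigmoid output):
trainingModel.compile(optimizer='adam', loss='binary_crossentropy')
trainingModel.fit(X, Y, epochs=10)
pre_sigmoid = checkingModel.predict(X)   # outputs of the last Dense, before the sigmoid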
This is for fellow Googlers: the Keras API has changed significantly since the accepted answer was posted. The working code for extracting a layer's output before the activation (for the TensorFlow backend) is:
model = Your_Keras_Model()
the_tensor_you_need = model.output.op.inputs[0] #<- this is indexable, if there are multiple inputs to this node then you can find it with indexing.
In my case, the final layer was a dense layer with activation softmax, so the tensor output I needed was <tf.Tensor 'predictions/BiasAdd:0' shape=(?, 1000) dtype=float32>.
(TF backend)
Solution for Conv layers.
I had the same question, and rewriting the model's configuration was not an option.
The simple hack is to perform the call function manually; it gives control over the activation.
This is a copy-paste from the Keras source, with self changed to layer. You can do the same with any other layer.
def conv_no_activation(layer, inputs, activation=False):
    if layer.rank == 1:
        outputs = K.conv1d(
            inputs,
            layer.kernel,
            strides=layer.strides[0],
            padding=layer.padding,
            data_format=layer.data_format,
            dilation_rate=layer.dilation_rate[0])
    if layer.rank == 2:
        outputs = K.conv2d(
            inputs,
            layer.kernel,
            strides=layer.strides,
            padding=layer.padding,
            data_format=layer.data_format,
            dilation_rate=layer.dilation_rate)
    if layer.rank == 3:
        outputs = K.conv3d(
            inputs,
            layer.kernel,
            strides=layer.strides,
            padding=layer.padding,
            data_format=layer.data_format,
            dilation_rate=layer.dilation_rate)

    if layer.use_bias:
        outputs = K.bias_add(
            outputs,
            layer.bias,
            data_format=layer.data_format)

    if activation and layer.activation is not None:
        outputs = layer.activation(outputs)

    return outputs
Now we need to modify the main function a little. First, identify the layer by its name. Then retrieve activations from the previous layer. And at last, compute the output from the target layer.
def get_output_activation_control(model, images, layername, activation=False):
    """Get activations for the input from specified layer"""
    inp = model.input
    layer_id, layer = [(n, l) for n, l in enumerate(model.layers) if l.name == layername][0]
    prev_layer = model.layers[layer_id - 1]
    conv_out = conv_no_activation(layer, prev_layer.output, activation=activation)
    functor = K.function([inp] + [K.learning_phase()], [conv_out])
    return functor([images])
Here is a tiny test. I'm using VGG16 model.
a_relu = get_output_activation_control(vgg_model, img, 'block4_conv1', activation=True)[0]
a_no_relu = get_output_activation_control(vgg_model, img, 'block4_conv1', activation=False)[0]
print(np.sum(a_no_relu < 0))
> 245293
Set all negatives to zero to compare with the results retrieved after the ReLU operation embedded in VGG16.
a_no_relu[a_no_relu < 0] = 0
print(np.allclose(a_relu, a_no_relu))
> True
An easy way to define a new layer with a new activation function:
def change_layer_activation(layer):
    if isinstance(layer, keras.layers.Conv2D):
        config = layer.get_config()
        config["activation"] = "linear"
        new = keras.layers.Conv2D.from_config(config)
    elif isinstance(layer, keras.layers.Dense):
        config = layer.get_config()
        config["activation"] = "linear"
        new = keras.layers.Dense.from_config(config)

    weights = [x.numpy() for x in layer.weights]

    return new, weights
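A possible way to use it (my reading, not shown in the answer itself): rebuild the layer, call it on the previous layer's output, then copy the weights over. model here is assumed to be an existing Keras model whose last layer you want to linearize:
old_layer = model.layers[-1]
new_layer, weights = change_layer_activation(old_layer)
new_output = new_layer(model.layers[-2].output)   # calling the layer builds it, fixing its weight shapes
new_layer.set_weights(weights)
linear_model = keras.Model(model.inputs, new_output)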
I had the same problem, but none of the other answers worked for me. I'm using a newer version of Keras with TensorFlow, so some answers no longer work. Also, the structure of the model is given, so I can't change it easily. The general idea is to create a copy of the original model that works exactly like the original one but splits the activation from the output layers. Once this is done, we can easily access the output values before the activation is applied.
First we will create a copy of the original model but with no activation on the output layers. This will be done using the Keras clone_model function (see the docs).
from tensorflow.keras.models import clone_model
from tensorflow.keras.layers import Activation
original_model = get_model()
def f(layer):
    config = layer.get_config()
    if not isinstance(layer, Activation) and layer.name in original_model.output_names:
        config.pop('activation', None)
    layer_copy = layer.__class__.from_config(config)
    return layer_copy

copy_model = clone_model(original_model, clone_function=f)
This alone will only make a clone with new weights so we must copy the original_model weights to the new one:
copy_model.build(original_model.input_shape)
copy_model.set_weights(original_model.get_weights())
Now we will add the activations layers:
from tensorflow.keras.models import Model
old_outputs = [ original_model.get_layer(name=name) for name in copy_model.output_names ]
new_outputs = [ Activation(old_output.activation)(output) if old_output.activation else output
for output, old_output in zip(copy_model.outputs, old_outputs) ]
copy_model = Model(copy_model.inputs, new_outputs)
Finally we could create a new model whose evaluation will be the outputs with no activation applied:
no_activation_outputs = [ copy_model.get_layer(name=name).output for name in original_model.output_names ]
no_activation_model = Model(copy_model.inputs, no_activation_outputs)
Now we can use copy_model just like original_model, and no_activation_model to access the pre-activation outputs. You could even modify the code to split a custom set of layers instead of the outputs.
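For completeness, a hedged usage sketch (x stands in for a valid input batch):
probs = copy_model.predict(x)             # should match original_model.predict(x)
logits = no_activation_model.predict(x)   # the same outputs before the activation is applied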

Confused about tensor dimensions and batch sizes in pytorch

So I'm very new to PyTorch and Neural Networks in general, and I'm having some problems creating a Neural Network that classifies names by gender.
I based this off of the PyTorch tutorial for RNNs that classify names by nationality, but I decided not to go with a recurrent approach... Stop me right here if this was the wrong idea!
However, whenever I try to run an input through the network it tells me:
RuntimeError: matrices expected, got 3D, 2D tensors at /py/conda-bld/pytorch_1493681908901/work/torch/lib/TH/generic/THTensorMath.c:1232
I know this has something to do with how PyTorch always expects there to be a batch size or something, and I have my tensor set up that way, but you can probably tell by this point that I have no idea what I'm talking about.
Here's my code:
from __future__ import unicode_literals, print_function, division
from io import open
import glob
import unicodedata
import string
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
import random
from torch.autograd import Variable
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
"""------GLOBAL VARIABLES------"""
all_letters = string.ascii_letters + " .,;'"
num_letters = len(all_letters)
all_names = {}
genders = ["Female", "Male"]
"""-------DATA EXTRACTION------"""
def findFiles(path):
    return glob.glob(path)

def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
        and c in all_letters
    )

# Read a file and split into lines
def readLines(filename):
    lines = open(filename, encoding='utf-8').read().strip().split('\n')
    return [unicodeToAscii(line) for line in lines]

for file in findFiles("/home/andrew/PyCharm/PycharmProjects/CantStop/data/names/*.txt"):
    gender = file.split("/")[-1].split(".")[0]
    names = readLines(file)
    all_names[gender] = names

"""-----DATA INTERPRETATION-----"""

def nameToTensor(name):
    tensor = torch.zeros(len(name), 1, num_letters)
    for index, letter in enumerate(name):
        tensor[index][0][all_letters.find(letter)] = 1
    return tensor

def outputToGender(output):
    gender, gender_index = output.data.topk(1)
    if gender_index[0][0] == 0:
        return "Female"
    return "Male"

"""------NETWORK SETUP------"""

class Net(nn.Module):
    def __init__(self, input_size, output_size):
        super(Net, self).__init__()
        # Layer 1
        self.Lin1 = nn.Linear(input_size, int(input_size/2))
        self.ReLu1 = nn.ReLU()
        self.Batch1 = nn.BatchNorm1d(int(input_size/2))
        # Layer 2
        self.Lin2 = nn.Linear(int(input_size/2), output_size)
        self.ReLu2 = nn.ReLU()
        self.Batch2 = nn.BatchNorm1d(output_size)
        self.softMax = nn.LogSoftmax()

    def forward(self, input):
        output1 = self.Batch1(self.ReLu1(self.Lin1(input)))
        output2 = self.softMax(self.Batch2(self.ReLu2(self.Lin2(output1))))
        return output2

NN = Net(num_letters, 2)

"""------TRAINING------"""

def getRandomTrainingEx():
    gender = genders[random.randint(0, 1)]
    name = all_names[gender][random.randint(0, len(all_names[gender])-1)]
    gender_tensor = Variable(torch.LongTensor([genders.index(gender)]))
    name_tensor = Variable(nameToTensor(name))
    return gender_tensor, name_tensor, gender

def train(input, target):
    loss_func = nn.NLLLoss()
    optimizer = optim.SGD(NN.parameters(), lr=0.0001, momentum=0.9)
    optimizer.zero_grad()
    output = NN(input)
    loss = loss_func(output, target)
    loss.backward()
    optimizer.step()
    return output, loss

all_losses = []
current_loss = 0

for i in range(100000):
    gender_tensor, name_tensor, gender = getRandomTrainingEx()
    output, loss = train(name_tensor, gender_tensor)
    current_loss += loss

    if i%1000 == 0:
        print("Guess: %s, Correct: %s, Loss: %s" % (outputToGender(output), gender, loss.data[0]))

    if i%100 == 0:
        all_losses.append(current_loss/10)
        current_loss = 0

# plt.figure()
# plt.plot(all_losses)
# plt.show()
Please help a newbie out!
Debugging your bug:
PyCharm is a helpful Python debugger that lets you set breakpoints and view the dimensions of your tensors.
For easier debugging, do not stack the forward pass up like this:
output1 = self.Batch1(self.ReLu1(self.Lin1(input)))
Instead, do:
h1 = self.ReLu1(self.Lin1(input))
h2 = self.Batch1(h1)
As for the stack trace, PyTorch also provides a Pythonic error stack trace. I believe that before
RuntimeError: matrices expected, got 3D, 2D tensors at /py/conda-bld/pytorch_1493681908901/work/torch/lib/TH/generic/THTensorMath.c:1232
there are some Python stack-trace lines that point right into your code. For easier debugging, as I said, don't stack the forward pass.
Use PyCharm to create a breakpoint before the crash point. Then, in the debugger watcher, use Variable(torch.rand(dim1, dim2)) to test the forward pass's input and output dimensions and check whether a dimension is incorrect, comparing with the dimensions of your input. Call input.size() in the debugger watcher.
For example, self.ReLu1(self.Lin1(Variable(torch.rand(10, 20)))).size(). If it shows red text (an error), then the input dimension is incorrect; otherwise, it shows the size of the output.
Read the docs
In the PyTorch docs, the input/output dimensions are specified. There is also an example code snippet:
>>> rnn = nn.RNN(10, 20, 2)
>>> input = Variable(torch.randn(5, 3, 10))
>>> h0 = Variable(torch.randn(2, 3, 20))
>>> output, hn = rnn(input, h0)
You can use the code snippet in the PyCharm debugger to explore the input and output dimensions of the specific layer you're interested in (RNN, Linear, BatchNorm1d).
First, regarding your error: as the other answers and your exception suggest, it is probably because your input parameters are not shaped correctly. You could try debugging to isolate the line that gives the error and then edit your question with it, so we know for sure what is causing the problem and can correct it (without the full stack trace it is harder to tell what the problem is).
Now, you are trying to implement a neural network that classifies names by gender, as you indicated. This task requires somehow inputting a name (names have different lengths) and outputting a gender (a binary variable: male, female). However, neural networks in general are built and trained to classify inputs (vectors) with a fixed number of features, as mentioned in the pytorch docs:
Parameters: input_size – The number of expected features in the input x
...
Looking at the tutorial you mentioned, they do consider this situation, as in their case the input for the network is a single letter transformed to a "one-hot vector", as they indicate:
To run a step of this network we need to pass an input (in our case, the Tensor for the current letter) and a previous hidden state (which we initialize as zeros at first). We’ll get back the output (probability of each language) and a next hidden state (which we keep for the next step).
And even give an example of it (remember tensors are Variables in pytorch):
input = Variable(letterToTensor('A'))
hidden = Variable(torch.zeros(1, n_hidden))
output, next_hidden = rnn(input, hidden)
Note: That being said, there are some other things you can do to adapt your implementation to variable-sized inputs. Based on my experience and also complemented by this and this other great questions, you could:
Preprocess your data to extract new features and transform it to fixed-size inputs. This is usually the most used approach but requires experience and patience to get good features. Some techniques used are PCA (Principal Component Analysis) and LDA (Latent Dirichlet Allocation)
For example, you could extract from your data features like: the length of the name, the number of letter a's in the name (female names tend to have more a's), the number of letter e's in the name (the same but with male names maybe?), and others... so you can generate new features like [name_length, a_found, e_found, ...]. Then you could follow a regular approach with your new fixed-size vectors. Do note that those features have to be meaningful; I just came up with these as examples (although they could work).
Split your input names into fixed-size substrings (or iterate over them with a sliding window), so you can classify them with a network designed for that size and combine the outputs in an ensemble fashion to obtain the final classification.
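As a hedged illustration of turning variable-length names into fixed-size inputs (a sketch, not from the answer; max_len and the alphabet are assumptions), you can pad or truncate every name to a fixed length and one-hot encode it, which yields a vector a plain feed-forward network can consume:
import torch
import string

all_letters = string.ascii_letters + " .,;'"
max_len = 12   # assumed fixed length; shorter names are zero-padded, longer ones truncated

def name_to_fixed_tensor(name):
    tensor = torch.zeros(max_len, len(all_letters))
    for i, letter in enumerate(name[:max_len]):
        idx = all_letters.find(letter)
        if idx >= 0:
            tensor[i][idx] = 1
    return tensor.view(1, -1)   # shape (1, max_len * num_letters), ready for nn.Linear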

Keras: zero division error

I'm trying to get the activation values for each layer in this baseline autoencoder built using Keras, since I want to add a sparsity penalty to the loss function based on the Kullback-Leibler (KL) divergence, as shown here, p. 14.
In this scenario, I'm going to calculate the KL divergence for each layer and then sum all of them with the main loss function, e.g. mse.
I therefore made a script in Jupyter to do that, but every time I try to compile I get ZeroDivisionError: integer division or modulo by zero.
This is the code
import numpy as np
from keras.layers import Conv2D, Activation
from keras.models import Sequential
from keras import backend as K
from keras import losses
x_train = np.random.rand(128,128).astype('float32')
kl = K.placeholder(dtype='float32')
beta = K.constant(value=5e-1)
p = K.constant(value=5e-2)
# encoder
model = Sequential()
model.add(Conv2D(filters=16,kernel_size=(4,4),padding='same',
name='encoder',input_shape=(128,128,1)))
model.add(Activation('relu'))
# get the average activation
A = K.mean(x=model.output)
# calculate the value for the KL divergence
kl = K.concatenate([kl, losses.kullback_leibler_divergence(p, A)],axis=0)
# decoder
model.add(Conv2D(filters=1,kernel_size=(4,4),padding='same', name='encoder'))
model.add(Activation('relu'))
B = K.mean(x=model.output)
kl = K.concatenate([kl, losses.kullback_leibler_divergence(p, B)],axis=0)
Here seems to be the cause:
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in _normalize_axis(axis, ndim)
989 else:
990 if axis is not None and axis < 0:
991 axis %= ndim <----------
992 return axis
993
so there might be something wrong in the mean calculation. If I print the value I get
Tensor("Mean_10:0", shape=(), dtype=float32)
which is quite strange because the weights and the biases are non-zero initialised. Thus, there might also be something wrong in the way I'm getting the activation values.
I really would not know how to fix it; I'm not much of a skilled programmer.
Could anyone help me in understanding where I'm wrong?
First, you shouldn't be doing calculations outside layers. The model must keep track of all calculations.
If you need a specific calculation to be done in the middle of the model, you should use a Lambda layer.
If you need that a specific output be used in the loss function, you should split your model for that output and do calculations inside a custom loss function.
Here, I used a Lambda layer to calculate the mean, and a customLoss to calculate the Kullback-Leibler divergence.
import numpy as np
from keras.layers import *
from keras.models import Model
from keras import backend as K
from keras import losses
x_train = np.random.rand(128,128).astype('float32')
kl = K.placeholder(dtype='float32') #you'll probably not need this anymore, since losses will be treated individually in each output.
beta = K.constant(value=5e-1)
p = K.constant(value=5e-2)
# encoder
inp = Input((128,128,1))
lay = Convolution2D(filters=16,kernel_size=(4,4),padding='same', name='encoder',activation='relu')(inp)
#apply the mean using a lambda layer:
intermediateOut = Lambda(lambda x: K.mean(x),output_shape=(1,))(lay)
# decoder
finalOut = Convolution2D(filters=1,kernel_size=(4,4),padding='same', name='encoder',activation='relu')(lay)
#but from that, let's also calculate a mean output for loss:
meanFinalOut = Lambda(lambda x: K.mean(x),output_shape=(1,))(finalOut)
#Now, you have to create a model taking one input and those three outputs:
splitModel = Model(inp,[intermediateOut,meanFinalOut,finalOut])
And finally, compile your model with your custom loss function (we will define that later). But since I don't know if you're actually using the final output (not mean) for training, I'll suggest creating one model for training and another for predicting:
trainingModel = Model(inp,[intermediateOut,meanFinalOut])
trainingModel.compile(...,loss=customLoss)
predictingModel = Model(inp,finalOut)
#you don't need to compile the predicting model since you're only training the trainingModel
#both will share the same weights, you train one, and predict in the other
Our custom loss function should then deal with the kullback.
def customLoss(p,mean):
    return #your own kullback expression (I don't know how it works, but maybe keras' one can be used with single values?)
Alternatively, if you want a single loss function to be called instead of two:
summedMeans = Add()([intermediateOut, meanFinalOut])
trainingModel = Model(inp, summedMeans)
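For reference, the usual sparse-autoencoder KL penalty for a target sparsity p and an observed mean activation q is p*log(p/q) + (1-p)*log((1-p)/(1-q)). A hedged sketch of customLoss along those lines (note that Keras calls a loss as loss(y_true, y_pred); here y_pred would be one of the mean outputs above and y_true can be fed dummy zeros; the clipping avoids division/log-of-zero issues, and the mean of a relu output is only a valid q while it stays in (0, 1)):
import keras.backend as K

p_target = 5e-2   # target average activation, as in the question

def customLoss(y_true, y_pred):
    q = K.clip(y_pred, K.epsilon(), 1 - K.epsilon())
    return p_target * K.log(p_target / q) + (1 - p_target) * K.log((1 - p_target) / (1 - q))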
