I'm trying to get the activation values for each layer in this baseline autoencoder built using Keras, since I want to add a sparsity penalty to the loss function based on the Kullback-Leibler (KL) divergence, as shown here, page 14.
In this scenario, I'm going to calculate the KL divergence for each layer and then sum all of them with the main loss function, e.g. mse.
I therefore made a script in Jupyter that does this, but every time I try to compile I get ZeroDivisionError: integer division or modulo by zero.
This is the code
import numpy as np
from keras.layers import Conv2D, Activation
from keras.models import Sequential
from keras import backend as K
from keras import losses
x_train = np.random.rand(128,128).astype('float32')
kl = K.placeholder(dtype='float32')
beta = K.constant(value=5e-1)
p = K.constant(value=5e-2)
# encoder
model = Sequential()
model.add(Conv2D(filters=16,kernel_size=(4,4),padding='same',
name='encoder',input_shape=(128,128,1)))
model.add(Activation('relu'))
# get the average activation
A = K.mean(x=model.output)
# calculate the value for the KL divergence
kl = K.concatenate([kl, losses.kullback_leibler_divergence(p, A)],axis=0)
# decoder
model.add(Conv2D(filters=1,kernel_size=(4,4),padding='same', name='decoder'))
model.add(Activation('relu'))
B = K.mean(x=model.output)
kl = K.concatenate([kl, losses.kullback_leibler_divergence(p, B)],axis=0)
This seems to be the cause:
/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in _normalize_axis(axis, ndim)
    989     else:
    990         if axis is not None and axis < 0:
    991             axis %= ndim    <----------
    992     return axis
    993
so there might be something wrong in the mean calculation. If I print the value I get
Tensor("Mean_10:0", shape=(), dtype=float32)
which is quite strange, because the weights and the biases are initialised to non-zero values. So there might also be something wrong in the way I'm getting the activation values.
I really wouldn't know how to fix it; I'm not much of a skilled programmer.
Could anyone help me in understanding where I'm wrong?
First, you shouldn't be doing calculations outside layers. The model must keep track of all calculations.
If you need a specific calculation to be done in the middle of the model, you should use a Lambda layer.
If you need that a specific output be used in the loss function, you should split your model for that output and do calculations inside a custom loss function.
Here, I used a Lambda layer to calculate the mean, and a customLoss to calculate the Kullback-Leibler divergence.
import numpy as np
from keras.layers import *
from keras.models import Model
from keras import backend as K
from keras import losses
x_train = np.random.rand(128,128).astype('float32')
kl = K.placeholder(dtype='float32') #you'll probably not need this anymore, since losses will be treated individually in each output.
beta = K.constant(value=5e-1)
p = K.constant(value=5e-2)
# encoder
inp = Input((128,128,1))
lay = Convolution2D(filters=16,kernel_size=(4,4),padding='same', name='encoder',activation='relu')(inp)
#apply the mean using a lambda layer:
intermediateOut = Lambda(lambda x: K.mean(x),output_shape=(1,))(lay)
# decoder
finalOut = Convolution2D(filters=1,kernel_size=(4,4),padding='same', name='decoder',activation='relu')(lay)
#but from that, let's also calculate a mean output for loss:
meanFinalOut = Lambda(lambda x: K.mean(x),output_shape=(1,))(finalOut)
#Now, you have to create a model taking one input and those three outputs:
splitModel = Model(inp,[intermediateOut,meanFinalOut,finalOut])
And finally, compile your model with your custom loss function (we will define that later). But since I don't know if you're actually using the final output (not mean) for training, I'll suggest creating one model for training and another for predicting:
trainingModel = Model(inp,[intermediateOut,meanFinalOut])
trainingModel.compile(...,loss=customLoss)
predictingModel = Model(inp,finalOut)
#you don't need to compile the predicting model since you're only training the trainingModel
#both will share the same weights, you train one, and predict in the other
Our custom loss function should then deal with the Kullback-Leibler divergence.
def customLoss(p,mean):
    return #your own kullback expression (I don't know how it works, but maybe keras' one can be used with single values?)
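For reference, here is a rough sketch of what that expression could look like, assuming the target sparsity is a plain float, the output fed to this loss is the mean activation (in (0, 1)), and remembering that Keras calls a loss as loss(y_true, y_pred); this is my own guess, not part of the answer above:

import keras.backend as K

def customLoss(y_true, y_pred):
    # y_pred is the mean activation produced by the Lambda layer; y_true is a dummy target
    p = 0.05                                              # target sparsity (assumed value)
    p_hat = K.clip(y_pred, K.epsilon(), 1 - K.epsilon())  # keep the logarithms well-defined
    # KL(p || p_hat) = p*log(p/p_hat) + (1-p)*log((1-p)/(1-p_hat))
    kl = p * K.log(p / p_hat) + (1 - p) * K.log((1 - p) / (1 - p_hat))
    return K.sum(kl)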
Alternatively, if you want a single loss function to be called instead of two:
summedMeans = Add()([intermediateOut,meanFinalOut])
trainingModel = Model(inp, summedMeans)
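If you go that route, you would then compile the single-output model with the custom loss in the usual way (the optimizer here is just an example):

trainingModel.compile(optimizer='adam', loss=customLoss)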
BatchNormalization (BN) operates slightly differently during training and during inference. In training, it uses the mean and variance of the current mini-batch to scale its inputs; this means that the exact result of applying batch normalization depends not only on the current input, but also on all other elements of the mini-batch. This is clearly not desirable in inference mode, where we want a deterministic result. Therefore, in that case, fixed statistics (the global mean and variance over the entire training set) are used.
In TensorFlow, this behavior is controlled by a boolean switch training that needs to be specified when calling the layer; see https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization. How do I deal with this switch when using the Keras high-level API? Am I correct in assuming that it is dealt with automatically, depending on whether we are using model.fit(x, ...) or model.predict(x, ...)?
To test this, I have written this example. We start with a random distribution and we want to classify whether the input is positive or negative. However, we also have a test dataset coming from a different distribution where the inputs are displaced by 2 (and consequently the labels check whether x>2).
import numpy as np
from math import ceil
from tensorflow.python.data import Dataset
from tensorflow.python.keras import Input, Model
from tensorflow.python.keras.layers import Dense, BatchNormalization
np.random.seed(18)
xt = np.random.randn(10_000, 1)
yt = np.array([[int(x > 0)] for x in xt])
train_data = Dataset.from_tensor_slices((xt, yt)).shuffle(10_000).repeat().batch(32).prefetch(2)
xv = np.random.randn(100, 1)
yv = np.array([[int(x > 0)] for x in xv])
valid_data = Dataset.from_tensor_slices((xv, yv)).repeat().batch(32).prefetch(2)
xs = np.random.randn(100, 1) + 2
ys = np.array([[int(x > 2)] for x in xs])
test_data = Dataset.from_tensor_slices((xs, ys)).repeat().batch(32).prefetch(2)
x = Input(shape=(1,))
a = BatchNormalization()(x)
a = Dense(8, activation='sigmoid')(a)
a = BatchNormalization()(a)
y = Dense(1, activation='sigmoid')(a)
model = Model(inputs=x, outputs=y, )
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(train_data, epochs=10, steps_per_epoch=ceil(10_000 / 32), validation_data=valid_data,
validation_steps=ceil(100 / 32))
zs = model.predict(test_data, steps=ceil(100 / 32))
print(sum([int(ys[i]) == int(zs[i] > 0.5) for i in range(100)]) / 100)
Running the code prints the value 0.5, meaning that half the examples are labeled properly. This is what I would expect if the system was using the global statistics on the training set to implement BN.
If we change the BN layers to read
x = Input(shape=(1,))
a = BatchNormalization()(x, training=True)
a = Dense(8, activation='sigmoid')(a)
a = BatchNormalization()(a, training=True)
y = Dense(1, activation='sigmoid')(a)
and run the code again, we find 0.87. By always forcing the training state, the fraction of correct predictions changes. This is consistent with the idea that model.predict(x, ...) is now using the statistics of the mini-batch to implement BN, and is therefore able to slightly "correct" the mismatch in the source distributions between training and test data.
Is that correct?
If I'm understanding your question correctly, then yes, Keras automatically manages training vs. inference behavior based on fit vs. predict/evaluate. The flag is called learning_phase, and it determines the behavior of batch norm, dropout, and potentially other things. The current learning phase can be read with keras.backend.learning_phase() and set with keras.backend.set_learning_phase().
https://keras.io/backend/#learning_phase
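As a quick illustration (a minimal sketch using the standalone Keras backend functions mentioned above; note that in TF 1.x graph mode the learning phase is a placeholder tensor rather than a plain integer):

from keras import backend as K

print(K.learning_phase())  # 0 = inference, 1 = training (may be a placeholder tensor in graph mode)
K.set_learning_phase(1)    # force training-mode behavior (e.g. batch statistics in BN)
# build the model after setting the phase; call K.set_learning_phase(0) to go back to inference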
I'm training a GAN-like model, but not exactly the same. I'm using Keras with the TensorFlow backend.
I have two Keras models, G and D. I want to output the weights of a target layer in G as the input of model D, and use the result of D.predict(G.weights) as part of the loss function for G, i.e. D is not trainable, but the argument G.weights is trainable. In this way I want to further train G.weights.
I tried to use
def custom_loss(ytrue, ypred):
    ### Something to do with ytrue and ypred
    weight = self.G.get_layer('target').get_weights()
    loss += self.D.predict(weight)
    return loss
but apparently it does not work since weight is just a numpy array and is not trainable.
Is there a way to get the weights of a model such that they are still trainable in Keras? I'm new to Keras and know very little about TensorFlow. I would really appreciate it if someone could help!
As you mention, layer.get_weights() returns the current values of the weights as numpy arrays. What you want to feed for prediction is the node in the computation graph representing those weights. You can use layer.trainable_weights instead, which will return two tf.Variables that you can feed to another layer/model.
Note that there is one variable for the unit to unit connections and another one for the bias. If you want to get a flattened tensor from it you could do something like:
from keras import backend as K
...
ww, bias = self.G.get_layer('target').trainable_weights
# adjust the hard-coded (5, 1) reshape to match your layer's dimensions
flattened_weights = Flatten()(K.concatenate([ww, K.reshape(bias, (5, 1))], axis=1))
I am trying to train a simple LSTM to fit a line. My hypothesis is that I should be able to fit a linearly decreasing trend with zero input since the LSTM can decide how much it listens to its input vs. internal state, and can thus learn to just operate on the internal state. Basically a degenerate case for testing whether the LSTM can fit an expected result with zero input.
I create my input and target data:
import numpy as np

seq_len = 1000
x_train = np.zeros((1, seq_len, 1))  # [batch_size, seq_len, num_feat]
target = np.linspace(100, 0, num=seq_len).reshape(1, -1, 1)
I create a pretty simple network:
from keras.models import Model
from keras.layers import LSTM, Dense, Input, TimeDistributed
x_in = Input((seq_len, 1))
seq1 = LSTM(8, return_sequences=True)(x_in)
dense1 = TimeDistributed(Dense(8))(seq1)
seq2 = LSTM(8, return_sequences=True)(dense1)
dense2 = TimeDistributed(Dense(8))(seq2)
out = TimeDistributed(Dense(1))(dense2)
model = Model(inputs=x_in, outputs=out)
model.compile(optimizer='adam', loss='mean_squared_error')
history = model.fit(x_train, target, batch_size=1, epochs=1000,
validation_split=0.)
I also created a custom callback that calls model.predict(x_train) after every epoch and adds the results to an array so I can see how my model's output is evolving over time. Basically the model just learns to predict a constant value which gradually (asymptotically) approaches the mean of my target line (in the plot, the target line is in red; not sure why the legend didn't show).
So basically nothing is driving my response to fit the actual line, I'm just gradually approaching the mean of the line. I suspect I am not getting any gradient with respect to time (data index), just an average gradient over time. But I would have thought LSTM losses would automagically give you gradient through time.
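For reference, a minimal sketch of that kind of per-epoch prediction callback (the class name and attribute names are illustrative, not from the original post):

from keras.callbacks import Callback

class PredictionHistory(Callback):
    def __init__(self, x):
        super(PredictionHistory, self).__init__()
        self.x = x
        self.predictions = []

    def on_epoch_end(self, epoch, logs=None):
        # store the full-sequence prediction after every epoch
        self.predictions.append(self.model.predict(self.x))

# usage: model.fit(x_train, target, batch_size=1, epochs=1000, callbacks=[PredictionHistory(x_train)])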
I've tried:
different activation functions for the LSTM layers (None, 'relu' for both the regular activation and recurrent activation)
different optimizers ('nadam', 'adadelta', 'rmsprop')
the 'mean_absolute_error' loss function, which I didn't expect to improve the results, and it acted about the same
passing sequences of random numbers drawn from a normal distribution as input
replacing LSTM with GRU
Nothing seems to do it.
Anybody have a suggestion as to how I can force this thing to train on the gradient as a function of my sequence index, i.e. g(t)? Or any other suggestions on how I can get this to work?
Note: with the trend as shown, if the LSTM results in a constant value at exactly the mean (50), the minimum mean absolute error will be 25 and the minimum mean squared error will be about 835. So if we don't see anything better than that, we probably aren't fitting the line, just the mean.
Just some reference numbers in case you run this yourself.
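If you want to sanity-check those baseline numbers, a quick sketch:

import numpy as np

target = np.linspace(100, 0, num=1000)
print(np.abs(target - target.mean()).mean())   # ~25.0  -> minimum MAE for a constant prediction
print(((target - target.mean())**2).mean())    # ~835.0 -> minimum MSE for a constant prediction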
Is there a way in which we can enforce a constraint on the prediction of sequences?
Say, if my modeling is as follows:
model = Sequential()
model.add(LSTM(150, input_shape=(n_timesteps_in, n_features)))
model.add(RepeatVector(n_timesteps_in))
model.add(LSTM(150, return_sequences=True))
model.add(TimeDistributed(Dense(n_features, activation='linear')))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['acc'])
Can I somehow capture the constraint that model.predict(x) <= x?
The docs show that we can add constraints to the network weights. However, they do not mention how to express relationships or constraints between the input and the output.
Never heard of it... but there are quite a few ways you can implement that yourself using a functional API model and custom functions.
Below is a possible answer to this, but first: is this really the best thing to do??
If you're trying to create an autoencoder, you should not care about limiting the outputs. Otherwise, your model will not really learn much.
Maybe the best thing to do is simply to normalize the inputs first (between -1 and +1) and use a tanh activation at the end.
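A rough sketch of that normalization idea (the array x and its shape are placeholders here, not part of the question):

import numpy as np

x = np.random.rand(32, 20, 2)                          # placeholder data, shape (samples, n_timesteps_in, n_features)
x_min, x_max = x.min(), x.max()
x_scaled = 2.0 * (x - x_min) / (x_max - x_min) - 1.0   # inputs now lie in [-1, 1]

# ...and end the model with a tanh activation so predictions also stay in [-1, 1]:
# model.add(TimeDistributed(Dense(n_features, activation='tanh')))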
Functional API model to preserve the input:
inputTensor = Input((n_timesteps_in, n_features))
out = LSTM(150)(inputTensor)
out = RepeatVector(n_timesteps_in)(out) #this line sounds funny in your model...
out = LSTM(150, return_sequences=True)(out)
out = TimeDistributed(Dense(n_features))(out)
out = Activation(chooseOneActivation)(out)
out = Lambda(chooseACustomFunction)([out,inputTensor])
model = Model(inputTensor,out)
model.compile(...)
Custom limit options
There are infinite ways of doing this, here are some examples that may or may not be what you need. But from this you are free to develop anything similar.
The options below limit each individual output to its respective individual input. But you may prefer to confine all outputs to the maximum input instead.
If so, use this: maxInput = K.max(originalInput, axis=1, keepdims=True)
1 - A simple stretched 'tanh':
You can simply define both top and bottom limits by using a tanh (that ranges from -1 to +1) and multiplying it by the inputs.
Use the Activation('tanh') layer, and the following custom function in the Lambda layer:
import keras.backend as K

def stretchedTanh(x):
    originalOutput = x[0]
    originalInput = x[1]
    return K.abs(originalInput) * originalOutput
I'm not totally sure this would be a healthy option. If the idea is to create an autoencoder, this model will easily find a solution that outputs all tanh activations as close to 1 as possible, without really looking at the inputs.
2 - Modified 'relu'
First, you could simply clip your outputs based on the inputs, changing a relu activation. Use the Activation('relu')(out) in your model above, and the following custom function in the Lambda layer:
def modifiedRelu(x):
    negativeOutput = (-1) * x[0]  # ranging from -infinite to 0
    originalInput = x[1]

    # ranging from -infinite to originalInput
    return negativeOutput + originalInput  # needs the same shape between input and output
This may have a downside when everything goes above the limit and backpropagation becomes unable to return (a problem that can also happen with 'relu').
3 - Half linear, half modified tanh
In this case, you don't need the Activation layer, or you can use it as 'linear'.
import keras.backend as K

def halfTanh(x):
    originalOutput = x[0]
    originalInput = x[1]  # assuming all inputs are positive

    # find the positive outputs and get a tensor with 1's at their positions
    positiveOutputs = K.greater(originalOutput, 0)
    positiveOutputs = K.cast(positiveOutputs, K.floatx())

    # now the 1's are at the negative positions
    negativeOutputs = 1 - positiveOutputs

    tanhOutputs = K.tanh(originalOutput)       # function limited to -1 or +1
    tanhOutputs = originalInput * tanhOutputs  # raises the limit from 1 to originalInput

    # use the conditions above to select between the negative and the positive side
    return positiveOutputs * tanhOutputs + negativeOutputs * originalOutput
Keras provides an easy way to handle such trivial constraints. We could write out = Minimum()([out, input_tensor])
Complete example
import keras
from keras.layers.merge import Maximum, Minimum
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
n_timesteps_in = 20
n_features=2
input_tensor = keras.layers.Input(shape = [n_timesteps_in, n_features])
out = keras.layers.LSTM(150)(input_tensor)
out = keras.layers.RepeatVector(n_timesteps_in)(out)
out = keras.layers.LSTM(150, return_sequences=True)(out)
out = keras.layers.TimeDistributed(keras.layers.Dense(n_features))(out)
out = Minimum()([out, input_tensor])
model = keras.Model(input_tensor, out )
SVG(model_to_dot(model, show_shapes=True, show_layer_names=True, rankdir='HB').create(prog='dot', format='svg'))
Here's the network structure of the model. It shows how the input and the output are used to compute clamped output.
I'm working on a multi-label classifier. I have many output labels [1, 0, 0, 1...] where 1 indicates that the input belongs to that label and 0 means otherwise.
In my case the loss function that I use is based on MSE. I want to change the loss function so that when the output label is -1, it is replaced by the predicted probability of this label.
The scenario (I attached images illustrating this) is the following: when the output label is -1, I want its contribution to the MSE to be zero.
For example, for a target like [1, -1, 0], the MSE of the second label (the middle output) will be zero; this is a special case where I don't want the classifier to learn about this label.
It feels like this is a needed method, and I don't really believe I'm the first to think about it. So first I wanted to know whether there's a name for this way of training a neural net, and second, how I can do it.
I understand that I need to change some things in the loss function, but I'm really a newbie to Theano and not sure how to look there for a specific value and how to change the content of the tensor.
I believe this is what you're looking for.
import theano
from keras import backend as K
from keras.layers import Dense
from keras.models import Sequential

def customized_loss(y_true, y_pred):
    loss = K.switch(K.equal(y_true, -1), 0, K.square(y_true - y_pred))
    return K.sum(loss)

if __name__ == '__main__':
    model = Sequential([Dense(3, input_shape=(4,))])
    model.compile(loss=customized_loss, optimizer='sgd')

    import numpy as np
    x = np.random.random((1, 4))
    y = np.array([[1, -1, 0]])

    output = model.predict(x)
    print(output)
    # [[ 0.47242549 -0.45106074  0.13912249]]

    print(model.evaluate(x, y))  # keras's loss
    # 0.297689884901

    print((output[0, 0] - 1)**2 + 0 + (output[0, 2] - 0)**2)  # double-check
    # 0.297689929093