AttributeError: 'Tensor' object has no attribute '_keras_history' - python

Aux_input = Input(shape=(wrd_temp.shape[1],1), dtype='float32')#shape (,200)
Main_input = Input(shape=(wrdvec.shape[1],),dtype='float32')#shape(,367)
X = Bidirectional(LSTM(20,return_sequences=True))(Aux_input)
X = Dropout(0.2)(X)
X = Bidirectional(LSTM(28,return_sequences=True))(X)
X = Dropout(0.2)(X)
X = Bidirectional(LSTM(28,return_sequences=False))(X)
Aux_Output = Dense(Opt_train.shape[1], activation= 'softmax' )(X)#total 22 classes
x = keras.layers.concatenate([Main_input,Aux_Output],axis=1)
x = tf.reshape(x,[1,389,1]) # here 389 is the size of the new input, i.e. (Main_input + Aux_Output)
x = Bidirectional(LSTM(20,return_sequences=True))(x)
x = Dropout(0.2)(x)
x = Bidirectional(LSTM(28,return_sequences=True))(x)
x = Dropout(0.2)(x)
x = Bidirectional(LSTM(28,return_sequences=False))(x)
Main_Output = Dense(Opt_train.shape[1], activation= 'softmax' )(x)
model = Model(inputs=[Aux_input,Main_input], outputs= [Aux_Output,Main_Output])
The error occurs on the line declaring the model, i.e. model = Model(...); that is where the attribute error is raised. Also, if there is any other mistake in my implementation, please take note and let me know in the comments.

The problem lies in the fact that every tf operation should be encapsulated by either:
Using keras.backend functions,
Lambda layers,
Designated keras functions with the same behavior.
When you use a raw tf operation you get a tf tensor object, which doesn't have the history field. When you use keras functions you get keras tensors.
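Applied to the code above, that means wrapping the raw tf.reshape call in a Lambda layer so the result keeps its Keras metadata. A minimal, self-contained sketch of just that step (the aux branch is replaced by a stand-in Dense layer, and -1 preserves the batch dimension instead of hard-coding it to 1):
from keras.layers import Input, Dense, Lambda, Reshape, concatenate
from keras.models import Model
import tensorflow as tf

Main_input = Input(shape=(367,), dtype='float32')
Aux_Output = Dense(22, activation='softmax')(Main_input)  # stand-in for the aux branch (22 classes)
x = concatenate([Main_input, Aux_Output], axis=1)          # 367 + 22 = 389
# wrap the tf op in a Lambda layer so the output is a Keras tensor with _keras_history
x = Lambda(lambda t: tf.reshape(t, [-1, 389, 1]))(x)
# equivalently, the built-in Reshape layer avoids tf ops altogether:
# x = Reshape((389, 1))(x)
model = Model(inputs=Main_input, outputs=x)                 # builds without the error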

Related

Performing Differentiation wrt input within a keras model for use in loss

Is there any layer in Keras which calculates the derivative wrt the input? For example, if x is the input and the first layer is say f(x), then the next layer's output should be f'(x). There are multiple questions here about this topic, but all of them involve computing the derivative outside the model. In essence, I want to create a neural network whose loss function involves both the Jacobian and Hessian wrt the inputs.
I've tried the following
import numpy as np
import keras
import keras.backend as K
from keras.layers import Dense

def create_model():
    x = keras.Input(shape=(10,))
    layer = Dense(1, activation="sigmoid")
    output = layer(x)
    jac = K.gradients(output, x)
    model = keras.Model(inputs=x, outputs=jac)
    return model

model = create_model()
X = np.random.uniform(size=(3, 10))
This gives the error tf.gradients is not supported when eager execution is enabled. Use tf.GradientTape instead.
So I tried using that
import tensorflow as tf

def create_model2():
    with tf.GradientTape() as tape:
        x = keras.Input(shape=(10,))
        layer = Dense(1, activation="sigmoid")
        output = layer(x)
        jac = tape.gradient(output, x)
    model = keras.Model(inputs=x, outputs=jac)
    return model

model = create_model2()
X = np.random.uniform(size=(3, 10))
but this tells me 'KerasTensor' object has no attribute '_id'
Both these methods work fine outside the model. My end goal is to use the Jacobian and Hessian in the loss function, so alternative approaches would also be appreciated.
Not sure what exactly you want to do, but maybe try a custom Keras layer with tf.gradients:
import tensorflow as tf

tf.random.set_seed(111)

class GradientLayer(tf.keras.layers.Layer):
    def __init__(self):
        super(GradientLayer, self).__init__()
        self.dense = tf.keras.layers.Dense(1, activation="sigmoid")

    @tf.function
    def call(self, inputs):
        outputs = self.dense(inputs)
        # tf.gradients is allowed here because call is traced in graph mode
        return tf.gradients(outputs, inputs)

def create_model2():
    gradient_layer = GradientLayer()
    inputs = tf.keras.layers.Input(shape=(10,))
    outputs = gradient_layer(inputs)
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    return model

model = create_model2()
X = tf.random.uniform((3, 10))
print(model(X))
tf.Tensor(
[[-0.07935508 -0.12471244 -0.0702782 -0.06729251 0.14465885 -0.0818079
-0.08996294 0.07622238 0.11422144 -0.08126545]
[-0.08666676 -0.13620329 -0.07675356 -0.07349276 0.15798753 -0.08934557
-0.09825202 0.08324542 0.12474566 -0.08875315]
[-0.08661086 -0.13611545 -0.07670406 -0.07344536 0.15788564 -0.08928795
-0.09818865 0.08319173 0.12466521 -0.08869591]], shape=(3, 10), dtype=float32)

Keras model graph is disconnected when trying to use a shared model

I'm trying to train a neural network in Keras, but I'm getting an error that there are no gradients for any variable, which may imply that the graph is disconnected.
I'm copying here a stripped down version of the code with only the bit related to the model definition.
The model accepts two inputs that will be fed, one at time, to the same shared model: the encoder.
The two outputs of the encoder are then concatenated and sent to a dense layer to compute the final output.
I don't get what's wrong; it looks like when instantiating the encoder I'm creating additional trainable variables that are not used anywhere.
For the network layout I was getting inspiration from the official keras docs:
https://keras.io/guides/functional_api/#all-models-are-callable-just-like-layers
def _get_encoder(self, model_input_shape):
    encoder_input = Input(shape=model_input_shape)
    x = encoder_input
    x = Conv2D(32, (3, 3), strides=1, padding="same")(x)
    x = BatchNormalization(axis=-1)(x)
    x = LeakyReLU(alpha=0.1)(x)
    latent_z = Flatten()(x)
    latent_z = Dense(self.latent_dim)(latent_z)
    encoder = Model(
        encoder_input,
        latent_z,
        name='encoder'
    )
    return encoder

def build_model(self):
    model_input_shape = (self.height, self.width, self.depth)
    model_input_1 = Input(shape=model_input_shape)
    model_input_2 = Input(shape=model_input_shape)
    self.encoder = self._get_encoder(model_input_shape)
    z_1 = self.encoder(model_input_1)
    z_2 = self.encoder(model_input_2)
    x = concatenate([z_1, z_2])
    prediction = Dense(1, activation='sigmoid')(x)
    self.network = Model(
        inputs=[model_input_1, model_input_2],
        outputs=[prediction],
        name='network'
    )

network.network.compile(
    optimizer='rmsprop',
    loss='mse',
    metrics=['mae'])

H = network.network.fit(
    x=train_gen,
    validation_data=test_gen,
    epochs=EPOCHS,
    steps_per_epoch=STEPS,
    validation_steps=STEPS)
I found the problem: my custom data generator was returning a list [x, y] instead of a tuple (x, y), where x is the input and y the target. A simple mistake that was causing totally unrelated errors.
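For reference, a minimal sketch of the corrected generator structure (names and shapes are placeholders, not the asker's actual code): the outer pair must be a tuple (inputs, targets), while the inputs themselves may still be a list because the model has two inputs.
import numpy as np

def train_gen(batch_size=8, height=64, width=64, depth=3):
    while True:
        x1 = np.random.rand(batch_size, height, width, depth)
        x2 = np.random.rand(batch_size, height, width, depth)
        y = np.random.randint(0, 2, size=(batch_size, 1))
        yield [x1, x2], y   # tuple of (inputs, targets), not [[x1, x2], y]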

keras model equivalent of tf.depth_to_space

I want to accomplish the equivalent of tf.depth_to_space in a Keras model. Specifically, the data in the Keras model is shaped H x W x 4 (i.e., depth of 4) and I want to permute the data so that the output is sized 2H x 2W x 1, with the mapping done by viewing the 4 input channels as 2x2 blocks; i.e.,
input location is y, x, k
output location is 2*y+(k//2), 2*x+(k%2), 1
I know that I can get the correct shape with:
outputs = keras.layers.Reshape((H*2,W*2,1), input_shape=(H,W,4))(inputs)
But I think that the mapping will be
input location is y, x, k
Linear_address is y*W*4 + x*4 + k
output location is Linear_address // (W*2), Linear_address % (W*2), 1
which is not what I want.
I tried directly using
outputs = tf.depth_to_space(inputs, 2)
but that led to an error:
TypeError: Output tensors to a Model must be Keras tensors. Found Tensor("DepthToSpace:0", shape=(?, 1024, 1024, 1), dtype=float32)
The problem can be seen with this simple function:
import keras

def simple_net(H=512, W=512):
    inputs = keras.layers.Input((H, W, 4))
    # gets the correct shape but not the correct order
    outputs = keras.layers.Reshape((H*2, W*2, 1), input_shape=(H, W, 4))(inputs)
    # raises the TypeError above if uncommented
    # outputs = tf.depth_to_space(inputs, 2)
    model = keras.models.Model(inputs, outputs)
    return model
You should use a Keras Lambda layer:
from keras.layers import Lambda
import tensorflow as tf

# scale is the upscaling factor, e.g. 2
Subpixel_layer = Lambda(lambda x: tf.nn.depth_to_space(x, scale))
x = Subpixel_layer(inputs=x)
MINIMAL MODEL
import tensorflow as tf
from keras.layers import Input, Lambda, Conv2D
from keras.models import Model

inputs = Input(shape=(32, 32, 3))   # 'in' is a Python keyword, so use another name
x = Conv2D(32, (3, 3), activation='relu')(inputs)
x = Conv2D(32, (3, 3), activation='relu')(x)
sub_layer = Lambda(lambda x: tf.nn.depth_to_space(x, 2))
x = sub_layer(inputs=x)
model = Model(inputs=inputs, outputs=x)
# model.compile(optimizer=Adam(), loss='mean_squared_error')
model.summary()
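As a quick check of the mapping (my addition, assuming TF 2.x is available): depth_to_space with block size 2 rearranges each group of 4 channels into a 2x2 spatial block, so height and width double while the channel count is divided by 4, which is exactly the y, x, k mapping asked about above.
import tensorflow as tf

x = tf.random.uniform((1, 512, 512, 4))
print(tf.nn.depth_to_space(x, 2).shape)   # (1, 1024, 1024, 1)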

load_model and Lambda layer in Keras

How do I load a model that has a Lambda layer?
Here is the code to reproduce behaviour:
import numpy as np
import tensorflow as tf
from keras.layers import Input, TimeDistributed, Flatten, LSTM, Dense, Lambda
from keras.applications import VGG16
from keras.models import Model
from keras.optimizers import Adadelta

MEAN_LANDMARKS = np.load('data/mean_shape_68.npy')

def add_mean_landmarks(x):
    mean_landmarks = np.array(MEAN_LANDMARKS, np.float32)
    mean_landmarks = mean_landmarks.flatten()
    mean_landmarks_tf = tf.convert_to_tensor(mean_landmarks)
    x = x + mean_landmarks_tf
    return x

def get_model():
    inputs = Input(shape=(8, 128, 128, 3))
    cnn = VGG16(include_top=False, weights='imagenet', input_shape=(128, 128, 3))
    x = TimeDistributed(cnn)(inputs)
    x = TimeDistributed(Flatten())(x)
    x = LSTM(256)(x)
    x = Dense(68 * 2, activation='linear')(x)
    x = Lambda(add_mean_landmarks)(x)
    model = Model(inputs=inputs, outputs=x)
    optimizer = Adadelta()
    model.compile(optimizer=optimizer, loss='mae')
    return model
The model compiles and I can save it, but when I try to load it with the load_model function I get an error:
in add_mean_landmarks
mean_landmarks = np.array(MEAN_LANDMARKS, np.float32)
NameError: name 'MEAN_LANDMARKS' is not defined
As I understand it, MEAN_LANDMARKS is not incorporated into the graph as a constant tensor. This is also related to this question: How to add constant tensor in Keras?
You need to pass custom_objects argument to load_model function:
model = load_model('model_file_name.h5', custom_objects={'MEAN_LANDMARKS': MEAN_LANDMARKS})
Look for more info in Keras docs: Handling custom layers (or other custom objects) in saved models
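A hedged alternative (not from the original answer): since get_model() rebuilds the architecture in code, where MEAN_LANDMARKS is in scope, you can skip deserializing the Lambda layer entirely and load only the weights from the saved file:
model = get_model()
model.load_weights('model_file_name.h5')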

How to add Dropout in Keras functional model?

Let's say I have an LSTM layer in Keras like this:
x = Input(shape=(input_shape), dtype='int32')
x = LSTM(128,return_sequences=True)(x)
Now I am trying to add Dropout to this layer using:
X = Dropout(0.5)
but this gives an error; I am assuming the above line is redefining X instead of adding Dropout to it.
How to fix this?
Just add x = Dropout(0.5)(x) like this:
x = Input(shape=(input_shape), dtype='int32')
x = LSTM(128,return_sequences=True)(x)
x = Dropout(0.5)(x)
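As a side note (my addition, not part of the original answer), the LSTM layer also exposes dropout arguments of its own: dropout applies to the layer's inputs and recurrent_dropout to the recurrent state at each step, which is related to but not identical to dropping the layer's output as above:
x = Input(shape=(input_shape), dtype='int32')
x = LSTM(128, return_sequences=True, dropout=0.5, recurrent_dropout=0.5)(x)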
