To solve the issue that I've posted here: Adjust the output of a CNN as an input for TimeDistributed tensorflow layer, which is about the input data format of the TimeDistributed TensorFlow layer, I thought of another idea: instead of passing two inputs to a CNN model, what if, before designing the CNN model, I merge the two inputs into one input using pandas or NumPy, pass that single input to the CNN model, and then, after the input layer and before the convolution layer, add a customized layer that separates the features I concatenated? Is this possible? The following picture explains what I am talking about:
Thank you @Marco for the help. Exactly as Marco says, I separated the input using index slicing, done with a Lambda layer. This is the code:
import tensorflow as tf

input_layer1 = tf.keras.Input(shape=input_shape)  # input_shape defined elsewhere
# tf.transpose with permutation [0, 1, 2, 3] is a no-op, so plain slicing on axis 1 is enough:
separate_features1 = tf.keras.layers.Lambda(lambda x: x[:, :-1, :, :])(input_layer1)
separate_features2 = tf.keras.layers.Lambda(lambda x: x[:, -1:, :, :])(input_layer1)
This is the model architecture:
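Since the picture may not be visible here, a minimal runnable sketch of the idea follows. The shapes and the branch layers are illustrative assumptions, not taken from the original post:

```python
import tensorflow as tf

# Illustrative merged input: 4 feature rows plus 1 extra row stacked along axis 1.
input_shape = (5, 10, 1)

input_layer = tf.keras.Input(shape=input_shape)

# Separate the concatenated features right after the input layer by index slicing.
features1 = tf.keras.layers.Lambda(lambda x: x[:, :-1, :, :])(input_layer)  # (None, 4, 10, 1)
features2 = tf.keras.layers.Lambda(lambda x: x[:, -1:, :, :])(input_layer)  # (None, 1, 10, 1)

# Each separated tensor then feeds its own convolution branch.
branch1 = tf.keras.layers.Conv2D(8, (2, 2), activation="relu")(features1)
branch2 = tf.keras.layers.Conv2D(8, (1, 2), activation="relu")(features2)

merged = tf.keras.layers.concatenate(
    [tf.keras.layers.Flatten()(branch1), tf.keras.layers.Flatten()(branch2)]
)
output = tf.keras.layers.Dense(1)(merged)

model = tf.keras.Model(inputs=input_layer, outputs=output)
```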
I am trying to do video classification using Conv2D and LSTM. After getting the features from Conv2D, I pass them to the LSTM. As they are two different models, how can I merge them into one so I get a single .h5 file?
Also, the features from Conv2D are passed to the LSTM as sequences of frames saved as ".npy" array files.
I need to save the model for different purposes.
Assuming you are using tf.keras, you can merge them into a single model with the functional API. Note that inputs and outputs must be tensors (the Input tensor and the output of the last layer), not the layer classes themselves:
model = keras.models.Model(inputs=input_tensor, outputs=out)
And to save your model you can just use .save as explained here
model.save('your_model.h5')
Not exactly sure if this is what you need but hope it helps.
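To make this concrete, here is a minimal sketch of wiring the Conv2D feature extractor and the LSTM into one model and saving it as a single .h5 file. The frame count, image size, and number of classes are assumptions for illustration:

```python
import numpy as np
from tensorflow import keras

# Hypothetical sizes: 10 frames per clip, 32x32 grayscale frames, 3 classes.
frames, h, w, ch, n_classes = 10, 32, 32, 1, 3

inp = keras.Input(shape=(frames, h, w, ch))
# TimeDistributed applies the same Conv2D feature extractor to every frame.
x = keras.layers.TimeDistributed(keras.layers.Conv2D(8, 3, activation="relu"))(inp)
x = keras.layers.TimeDistributed(keras.layers.GlobalAveragePooling2D())(x)
# The per-frame feature vectors form the sequence fed to the LSTM.
x = keras.layers.LSTM(16)(x)
out = keras.layers.Dense(n_classes, activation="softmax")(x)

model = keras.Model(inputs=inp, outputs=out)
model.save("video_classifier.h5")  # one HDF5 file for the whole pipeline
restored = keras.models.load_model("video_classifier.h5")
```

Because the Conv2D and LSTM parts are layers of one functional model, a single save call captures both.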
I want to make a CNN model with multiple inputs, but there aren't any examples of that. Can I make a multiple-input model without Keras?
My model has 3 inputs. After the first input (atom_in_fea [8,9]) passes through the embedding (Linear, or Dense in Keras) layer, the output should merge with the second (nbr_fea_idx [8,12]) and third inputs (nbr_fea [8,12,4]).
atom_in_fea[8,9] -> embedding layer -> atom_in_fea[8,4]
[] is their shape.
Here is my code:
# tf.gather replaces the NumPy/PyTorch-style fancy indexing atom_in_fea[nbr_fea_idx, :]
atom_nbr_fea = tf.gather(atom_in_fea, nbr_fea_idx)
# unsqueeze/expand are PyTorch methods; the TensorFlow equivalents are expand_dims + tile
atom_self_fea = tf.tile(tf.expand_dims(atom_in_fea, 1), [1, 12, 1])
final = tf.concat([atom_self_fea, atom_nbr_fea, nbr_fea], axis=2)
atom_nbr_fea shape is [8, 12, 4]
final shape is [8, 12, 12]
I built a model with Keras but it was limited.
If I can solve this problem, either Keras or TensorFlow is fine. Thank you.
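A multiple-input model is possible with the Keras functional API; each tensor gets its own Input, and the merge logic lives in a Lambda layer. In this sketch the 8 atoms play the role of the batch dimension, so the shapes below drop the leading 8 (an assumption about how the data is fed, not from the original post):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# One Input per tensor; the 8 atoms act as the batch dimension.
inp_atom = keras.Input(shape=(9,))                 # atom_in_fea
inp_idx = keras.Input(shape=(12,), dtype="int32")  # nbr_fea_idx
inp_nbr = keras.Input(shape=(12, 4))               # nbr_fea

atom_emb = keras.layers.Dense(4)(inp_atom)         # "embedding": 9 -> 4 features

def merge(tensors):
    atom, idx, nbr = tensors
    gathered = tf.gather(atom, idx)                          # neighbours' embeddings
    self_fea = tf.tile(tf.expand_dims(atom, 1), [1, 12, 1])  # broadcast own features
    return tf.concat([self_fea, gathered, nbr], axis=2)

out = keras.layers.Lambda(merge)([atom_emb, inp_idx, inp_nbr])
model = keras.Model(inputs=[inp_atom, inp_idx, inp_nbr], outputs=out)
```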
I'd like to apply an LSTM to my speech emotion dataset (a dataset of numeric features with one column of targets).
I've done the train/test split. Do I need to apply any other transformation to the dataset before fitting the model?
I ask this question because when I compile and fit the model I get an error at the last dense layer:
Error when checking model target: expected activation_2 to have shape (8,) but got array with shape (1,).
Thanks.
After my internship I learned how to fix this error and where to look.
Here's what you have to check.
Unexpected input shape
If the reported layer is the first one, the cause is that the training data does not have the same shape as the input declared when creating the model.
If the layer that fails is the last one, then check that the labels are encoded correctly.
Either you used a sigmoid but the labels are not binary, or you used softmax but the labels are not in one-hot format (e.g. [0,1,0] for 3 classes means this element is of class 2). So either the labels are badly encoded, or you picked the wrong activation (sigmoid/softmax) for your output layer.
Hope this helps.
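A concrete illustration of the shape-(8,) vs shape-(1,) mismatch above, assuming 8 emotion classes and a softmax output trained with categorical_crossentropy:

```python
import numpy as np
from tensorflow import keras

n_classes = 8
# Integer labels shaped (samples, 1) trigger "expected ... shape (8,) but got
# array with shape (1,)" when the last layer is Dense(8, activation='softmax').
y = np.array([[0], [3], [7], [1]])

# One-hot encoding produces the (samples, 8) targets the output layer expects.
y_onehot = keras.utils.to_categorical(y, num_classes=n_classes)
```

Alternatively, keep the integer labels as they are and switch the loss to sparse_categorical_crossentropy.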
I have trained a fully convolutional neural network with Keras. I have used the Functional API and have defined the input layer as Input(shape=(128,128,3)), corresponding to the size of the images in my training set.
However, I want to use the trained model on images of variable sizes (which should be ok because the network is fully convolutional). To do this, I need to change my input layer to Input(shape=(None,None,3)). The obvious way to solve the problem would have been to train my model directly with an input shape of (None,None,3) but I use a custom loss function where I need to specify the size of my training images.
I have tried to define a new input layer and assign it to my model like this:
from keras.engine import InputLayer
input_layer = InputLayer(input_shape=(None, None, 3), name="input")
model.layers[0] = input_layer
This actually changes the size of the input layer accordingly, but the following layers still expect (128, 128, filters) inputs.
Is there a way to change all of the input shapes at once?
Create a new model, exactly the same except for the new input shape, and transfer the weights:
newModel.set_weights(oldModel.get_weights())
If anything goes wrong, then it might not be fully convolutional (ex: contains a Flatten layer).
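One way to sketch that rebuild, assuming a simple linear stack of layers (the helper name is made up, and the toy model stands in for the trained one):

```python
import numpy as np
from tensorflow import keras

def rebuild_with_flexible_input(old_model):
    """Recreate a linear stack of layers on a size-agnostic input and copy weights.

    Only valid for fully convolutional models (no Flatten/Dense tied to spatial size).
    """
    inputs = keras.Input(shape=(None, None, 3), name="input")
    x = inputs
    for layer in old_model.layers[1:]:  # skip the original InputLayer
        x = layer.__class__.from_config(layer.get_config())(x)
    new_model = keras.Model(inputs, x)
    new_model.set_weights(old_model.get_weights())
    return new_model

# Stand-in for the model trained on 128x128 images.
inp = keras.Input(shape=(128, 128, 3))
out = keras.layers.Conv2D(4, 3, padding="same")(inp)
old_model = keras.Model(inp, out)

new_model = rebuild_with_flexible_input(old_model)
```

The new model then accepts images of any spatial size while reusing the trained weights.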
I am using Keras with the TensorFlow backend and the functional API. What I want to achieve is the following architecture at some layer.
Tensor of (20,200)----> LSTM----> Split into two Tensors of size (20,100) each
Then use those two tensors as two branches of a further network. (We can think of this as the opposite of a Merge operation.)
I am given to understand that the only way to achieve this currently is the Lambda layer, since there is no "Split" layer in Keras.
However, looking into the documentation for the Lambda layer, it seems the output_shape argument is only relevant when using the Theano backend.
Can anyone offer advice on how to achieve this? This is rough pseudo-code of what I want to achieve.
#Other code before this
lstm_1st_layer = LSTM(nos_hidden_neurons, return_sequences=True)(lstm_0th_layer)
#this layer outputs a tensor of size (20, 200)
#split it into two tensors of size (20, 100) each (call them left and right)
left_lstm = LSTM(200, return_sequences=True)(left)
right_lstm = LSTM(200, return_sequences=True)(right)
#Other code after this
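Filling in the pseudo-code above, the split itself can be done with Lambda layers slicing the feature axis (sizes taken from the question; the surrounding layers are illustrative):

```python
from tensorflow import keras

inp = keras.Input(shape=(20, 200))
lstm_1st_layer = keras.layers.LSTM(200, return_sequences=True)(inp)  # (None, 20, 200)

# Split the 200 features into two tensors of 100 each by slicing the last axis.
left = keras.layers.Lambda(lambda t: t[:, :, :100])(lstm_1st_layer)
right = keras.layers.Lambda(lambda t: t[:, :, 100:])(lstm_1st_layer)

left_lstm = keras.layers.LSTM(200, return_sequences=True)(left)
right_lstm = keras.layers.LSTM(200, return_sequences=True)(right)

model = keras.Model(inp, [left_lstm, right_lstm])
```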
In your place I would simply use two LSTM layers with half the number of units each.
Then you get the two outputs ready to go:
left = LSTM(half_nos_hidden_neurons,.....)(lstm_0th_layer)
right = LSTM(half_nos_hidden_neurons,.....)(lstm_0th_layer)
The effect is the same.