I am trying to do video classification using Conv2D and LSTM: after extracting features with Conv2D, I pass them to an LSTM. Since they are two different models, how can I merge them into one so that I get a single .h5 file?
Also, the features from the Conv2D are passed to the LSTM as sequences of frames saved as ".npy" array files.
I need to save the model for different purposes.
Assuming you are using tf.keras, you can merge the two with the functional API:
model = keras.models.Model(inputs=inputs, outputs=out)
where inputs is the Input tensor your Conv2D part starts from and out is the final output of the LSTM part.
And to save your model you can just use .save, as explained here:
model.save('your_model.h5')
Not exactly sure if this is what you need but hope it helps.
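As a concrete sketch of that pattern (all shapes, layer sizes, and the class count here are my own assumptions, not from the question): wrap the Conv2D feature extractor in TimeDistributed so it runs on every frame, feed the resulting feature sequence to the LSTM, and the whole thing becomes one functional Model that saves to a single .h5 file:

```python
import tensorflow as tf
from tensorflow import keras

# Hypothetical input: sequences of 10 frames, each a 64x64 grayscale image
frames = keras.Input(shape=(10, 64, 64, 1))

# Per-frame CNN feature extractor, applied to each frame via TimeDistributed
x = keras.layers.TimeDistributed(
    keras.layers.Conv2D(16, (3, 3), activation='relu'))(frames)
x = keras.layers.TimeDistributed(
    keras.layers.GlobalAveragePooling2D())(x)

# The LSTM consumes the sequence of per-frame feature vectors
x = keras.layers.LSTM(32)(x)
out = keras.layers.Dense(5, activation='softmax')(x)  # 5 = assumed class count

# One functional Model covering both parts, saved as a single .h5 file
model = keras.Model(inputs=frames, outputs=out)
model.save('your_model.h5')
```

Because the CNN and the LSTM live in the same graph, a single model.save call captures both.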
I have a pre-trained image classification model saved in Caffe; the model expects grayscale (one-channel) images. I want to use this model in a tool that only provides RGB (three-channel) input to the model. It is not possible to change the way this tool provides images, so I thought of adding a layer before the input layer that transforms the input to one channel only. Is that possible in Caffe, and how?
I'm looking for a solution that doesn't require defining new layers in Caffe, if possible.
Note that I have the ".prototxt" and the ".weights" files of the model.
I previously did a similar thing in TensorFlow, but I don't know whether this is possible in Caffe, and I didn't find much material online.
You can add a Python layer to do it for you.
What is a Python layer?
An example of such a layer can be found here.
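As a sketch of what such a layer's forward pass would compute (the blob layout N×3×H×W and the standard luminance weights are my assumptions): inside a Caffe Python layer, bottom[0].data would be the RGB batch below and the result would be written to top[0].data. The channel-collapsing logic itself is plain NumPy:

```python
import numpy as np

# ITU-R BT.601 luminance weights for RGB -> grayscale conversion
RGB_WEIGHTS = np.array([0.299, 0.587, 0.114], dtype=np.float32)

def rgb_to_gray(batch):
    """Collapse an (N, 3, H, W) RGB blob to an (N, 1, H, W) grayscale blob.

    In a Caffe Python layer, forward() would apply exactly this to
    bottom[0].data and store the result in top[0].data.
    """
    # Contract the weight vector against the channel axis -> (N, H, W)
    gray = np.tensordot(RGB_WEIGHTS, batch, axes=([0], [1]))
    # Restore the singleton channel axis -> (N, 1, H, W)
    return gray[:, np.newaxis, :, :]
```

The layer's reshape() would then declare the top blob's shape as (N, 1, H, W) so the rest of the network sees a one-channel input.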
To solve the issue that I've posted here: Adjust the output of a CNN as an input for TimeDistributed tensorflow layer, which is about the input data format of the TimeDistributed TensorFlow layer, I thought of another idea: instead of passing two inputs to a CNN model, what if, before designing the CNN model, I merge the two inputs into one input using pandas or NumPy, pass it to the CNN model, and then, after the input layer and before the convolution layer, add a customized layer that separates the features I concatenated? Is this possible? The following picture explains what I am talking about:
Thank you @Marco for the help. Exactly as Marco says, I separated the input using index slicing, done with a Lambda layer. This is the code:
input_layer1 = tf.keras.Input(shape=input_shape)
# The permutation [0, 1, 2, 3] is an identity transpose, so plain index
# slicing on the second axis is equivalent:
separate_features1 = tf.keras.layers.Lambda(lambda x: x[:, :-1, :, :])(input_layer1)
separate_features2 = tf.keras.layers.Lambda(lambda x: x[:, -1:, :, :])(input_layer1)
This is the model architecture:
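The slicing inside those Lambda layers behaves exactly like NumPy indexing, so the split can be checked outside the model; a quick sketch (the shapes here are my own stand-ins):

```python
import numpy as np

# Stand-in for a merged batch: (batch, rows, cols, channels), where the
# last "row" along axis 1 carries the second input that was concatenated in.
x = np.arange(2 * 4 * 3 * 1).reshape(2, 4, 3, 1)

first_part = x[:, :-1, :, :]   # everything except the last row -> (2, 3, 3, 1)
second_part = x[:, -1:, :, :]  # only the last row, kept 4-D     -> (2, 1, 3, 1)
```

Note the -1: (rather than -1) in the second slice: it keeps the sliced axis, so both pieces stay four-dimensional and concatenating them along axis 1 reproduces the original tensor.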
I really don't have much idea of what I'm doing; I followed this tutorial to process DeepDream images: https://www.youtube.com/watch?v=Wkh72OKmcKI
I'm trying to change the base model to any of the ones listed here, https://keras.io/api/applications/#models-for-image-classification-with-weights-trained-on-imagenet, currently InceptionResNetV2 in particular. InceptionV3 uses layer names "mixed0" up to "mixed10", whereas InceptionResNetV2 apparently uses a different naming scheme.
I would have to change this section:
# Maximize the activations of these layers
names = ['mixed3', 'mixed5']
layers = [base_model.get_layer(name).output for name in names]
# Create the feature extraction model
dream_model = tf.keras.Model(inputs=base_model.input, outputs=layers)
I'm getting an error: "no such layer: mixed3".
So yeah, I'm just trying to figure out how to get the layer names for this model as well as others.
You can simply run the following code to print the model architecture (including layer names).
#Model denotes the Inception model
model.summary()
Or to visualize complex relationships,
tf.keras.utils.plot_model(model, to_file='model.png')
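If you only need the names rather than the full summary, you can also pull them straight from the layer list. A sketch (using weights=None so nothing has to be downloaded; the architecture, and therefore the layer names, is the same either way):

```python
import tensorflow as tf

# Build InceptionResNetV2 without downloading the ImageNet weights;
# only the layer names matter here, and they don't depend on the weights.
base_model = tf.keras.applications.InceptionResNetV2(weights=None, include_top=False)

layer_names = [layer.name for layer in base_model.layers]

# Pick out the block-boundary layers whose names resemble InceptionV3's
# 'mixed0'..'mixed10' (in this model they are named like 'mixed_5b')
mixed_like = [name for name in layer_names if name.startswith('mixed')]
print(mixed_like)
```

Whatever names this prints are the ones you can pass to base_model.get_layer(name) when building the dream model.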
I am trying to get a better understanding of CNNs, so I am using Keras to make a small CNN and want to go through the calculations by hand.
I downloaded the images from the GTSRB database, then used the PIL package to convert the image set to greyscale and resize it to (6 x 6).
The code below shows the CNN I've created.
It includes 1 convolution layer (with 2 filters of size 2x2), 1 max pooling layer (2x2), a flattening layer and a dense layer at the end.
model = keras.models.Sequential()
model.add(keras.layers.Conv2D(2, kernel_size=(2, 2), activation='relu', input_shape=(6, 6, 1)))
model.add(keras.layers.MaxPool2D(pool_size=(2, 2)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(len(sign_label_list), activation='relu'))
I then trained the network and saved the model and weights.
I read online that for checking the weights (h5 file type), I need a tool to view the weights. So I downloaded HDFView tool.
Now I am trying to view the weights for each of the filters, but I can only see the weights of 1 of the filters.
Filter weights
How would I get the weights of both the filters?
Does anyone know if there is a way to view the weights through python?
Originally, I wanted to test with only 1 filter, but I got nan when I viewed the weights.
Looking through the documentation and the Keras FAQ found here, the suggested way to view the weights for a particular layer is:
weights,biases = model.layers[0].get_weights()
I then printed the weights to the console using print(weights) and this displayed the values of all filters.
However, I still had trouble viewing the weights of multiple filters using the HDFView tool.
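Pulling this together, here is a short sketch of inspecting each filter directly in Python, with no HDF5 viewer needed (the class count is my stand-in, since sign_label_list isn't shown; GTSRB has 43 sign classes):

```python
from tensorflow import keras

num_classes = 43  # stand-in for len(sign_label_list)

model = keras.Sequential([
    keras.Input(shape=(6, 6, 1)),
    keras.layers.Conv2D(2, kernel_size=(2, 2), activation='relu'),
    keras.layers.MaxPool2D(pool_size=(2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(num_classes, activation='relu'),
])

weights, biases = model.layers[0].get_weights()
# Conv2D kernels are stored as (kernel_h, kernel_w, in_channels, n_filters)
print(weights.shape)  # (2, 2, 1, 2)

# Slice the last axis to look at each of the two filters on its own
for i in range(weights.shape[-1]):
    print(f"filter {i}:")
    print(weights[:, :, :, i].squeeze())
```

Since the filters are stacked along the last axis of the kernel array, both sets of weights are always there; a viewer that shows only one filter is just displaying one slice of that axis.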
I have a pre-trained CNN model as a .pb file. I can load the model and extract the final vector from the last layer for all images. Now I would like to extract, for my images, the vector coming from a specific layer rather than the final one. I am using the import_graph_def function to load the model, and I don't know the names of the layers because the .pb file is large and I can't open it.
How can I run only part of the model, rather than the whole thing, to get the vectors up to the layer I want?
One approach, other than using tf.Graph.get_operations() as Peter Hawkins mentioned in the comments, is to use TensorBoard to find the name of the layer you would like to extract from.
From there you can just use
graph.get_tensor_by_name("import/layer_name:0")
to extract whichever features you want (the ":0" suffix selects the op's first output tensor).
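A self-contained sketch of the whole flow, using a tiny in-memory graph as a stand-in for the frozen .pb (all op names here are mine; with a real file you would instead read the GraphDef from disk with tf.compat.v1.GraphDef() and ParseFromString):

```python
import numpy as np
import tensorflow as tf

# Stand-in for your frozen model: a tiny graph serialized to a GraphDef
build = tf.Graph()
with build.as_default():
    x = tf.compat.v1.placeholder(tf.float32, [None, 4], name="input")
    hidden = tf.nn.relu(x, name="hidden")   # the "intermediate layer"
    out = tf.identity(hidden, name="output")
graph_def = build.as_graph_def()

# Import the GraphDef and list every operation to discover the layer names
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="import")
op_names = [op.name for op in graph.get_operations()]
print(op_names)

# Fetching an intermediate tensor runs only the ops it depends on,
# so the rest of the graph past that layer is never executed.
feat = graph.get_tensor_by_name("import/hidden:0")
inp = graph.get_tensor_by_name("import/input:0")
with tf.compat.v1.Session(graph=graph) as sess:
    features = sess.run(feat, feed_dict={inp: np.ones((1, 4), np.float32)})
```

Asking the session for "import/hidden:0" instead of the final output is exactly the "run only part of the model" behaviour the question asks for.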