Get the output of intermediate layers using the Keras backend - python

I want to extract the outputs of four intermediate layers of my model. Here is how I set up my function:
K.function([yolo_model.layers[0].input, yolo_model.layers[4].input, K.learning_phase()],
           [yolo_model.layers[1].layers[17].output,
            yolo_model.layers[1].layers[27].output,
            yolo_model.layers[1].layers[43].output,
            yolo_model.layers[1].layers[69].output])
I always get the error: tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'input_3' with dtype float and shape [?,416,416,3]
It seems my input has a dimension or type error, but I used the same input for train_on_batch and predict on this model and it worked. I get the same error even when I pass np.zeros((1,416,416,3)) into it. Another weird thing is that I don't have an input_3, since my model only takes two inputs. I have no idea where the input_3 tensor comes from.
Thanks in advance if anyone can give me some hints.

I found out where the problem is. My model is composed of one inner model plus several layers. When I built the function, my inputs came from the outer model while the outputs came from the inner model, which disconnects the inputs from the outputs. I just changed the original inputs to the input layer of the inner model and it works.
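A minimal sketch of that fix (assuming, as in the question, that the inner model sits at yolo_model.layers[1] and takes a single 416x416x3 image input): the inputs and outputs of K.function must come from the same (inner) model, otherwise the graph is disconnected and Keras asks for a placeholder you never defined yourself.

    import numpy as np
    from keras import backend as K

    inner_model = yolo_model.layers[1]

    # Inputs and outputs both come from the inner model, so the graph is connected.
    get_features = K.function(
        [inner_model.input, K.learning_phase()],
        [inner_model.layers[17].output,
         inner_model.layers[27].output,
         inner_model.layers[43].output,
         inner_model.layers[69].output])

    # 0 = test phase
    f17, f27, f43, f69 = get_features([np.zeros((1, 416, 416, 3)), 0])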

Related

Using a custom Keras model with layer sharing together with dqn_agent.DqnAgent()

I am trying to use a custom neural network with the DqnAgent() from tf-agents. In my model I need layer sharing, so I use the functional API to build it. The model has a dict as input and one layer with n neurons as output; the last layer is a Concatenate layer rather than a Dense layer, though. The type of the model I get from the functional API, keras.Model(inputs=[...], outputs=[...]), is "keras.engine.functional.Functional".
Now I want to use my model with the tf-agents agent like this:
agent = dqn_agent.DqnAgent(
    train_env.time_step_spec(),
    train_env.action_spec(),
    q_network=model,
    optimizer=optimizer,
    td_errors_loss_fn=common.element_wise_squared_loss,
    train_step_counter=train_step_counter)
I get the following error, though:
AttributeError: 'Functional' object has no attribute 'create_variables'
In call to configurable 'DqnAgent' (<class 'tf_agents.agents.dqn.dqn_agent.DqnAgent'>)
The q_network expects a network of type "network.Network". I am not sure how to convert or wrap my model so that the DqnAgent() will accept it. How could I manage to do this? Any support is much appreciated. If you need more information about anything, let me know.
Additional information about my network:
Input: a dict with multiple inputs.
Multiple shared dense layers; the output of the last one has shape (1,).
All of those outputs of shape (1,) are concatenated.
One Multiply layer to eliminate infeasible actions by multiplying the outputs with 0 or 1 respectively. A possible wrapper is sketched below.
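One possible approach (a sketch under assumptions, not a confirmed tf-agents recipe; KerasModelWrapper is a hypothetical name): subclass tf_agents.networks.network.Network, which provides create_variables, and delegate the forward pass to the functional model.

    from tf_agents.networks import network

    class KerasModelWrapper(network.Network):
        """Wraps a functional Keras model so DqnAgent sees a network.Network."""

        def __init__(self, keras_model, input_tensor_spec, name='KerasModelWrapper'):
            super().__init__(input_tensor_spec=input_tensor_spec,
                             state_spec=(), name=name)
            self._keras_model = keras_model

        def call(self, observation, step_type=None, network_state=(), training=False):
            # Delegate to the wrapped model; Q-networks return (values, state).
            q_values = self._keras_model(observation, training=training)
            return q_values, network_state

    # q_net = KerasModelWrapper(model, train_env.observation_spec())
    # ...then pass q_network=q_net to dqn_agent.DqnAgent(...).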

How to cast the Average layer output to push it into a Conv2D layer in Keras?

I am trying to build a custom CNN using the Keras functional API. The gap between my theoretical idea and the practical one appears when I try to Average the outputs of three Conv2D layers and pass the result to another Conv2D: I get an error saying that the output of Average is tf.float, while Conv2D (the first convolution layer of VGG16, because I am doing transfer learning) expects tf.int32.
I am getting the following error: TypeError: Expected int32, got 0.0 of type 'float' instead.
Notes:
I have tried Maximum and Minimum, as a test only, and still get the same error.
Unfortunately I can't share the code because of an NDA.
Here's a code snippet:
self._layer_hs_o = Average(name="heads")(
    [self._layer_hs_s, self._layer_hs_m, self._layer_hs_l])
# Trying to pass the average output to the following Conv2D layer
self._layer_d2c_c = Conv2D(d2c_config["filters"],
                           d2c_config["kernels"][0],
                           padding="same",
                           activation=d2c_config["activation"],
                           name="d2c_c",
                           kernel_initializer=d2c_config["init"],
                           input_shape=self._layer_hs_o.shape,
                           dilation_rate=d2c_config["dilations"][0]
                           )(self._layer_hs_o)
At this point the model can't be compiled because of this step. When I skip it and use plain convolutions only, the model compiles and learning proceeds normally. Yet I still need the Average layer to some extent.
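One suspect (an assumption, since the full code is under the NDA): the input_shape argument. In the functional API a layer's input shape is inferred from the tensor it is called on, and passing a symbolic TensorShape containing None dimensions can trigger exactly this kind of int/float conversion error. A minimal sketch with the argument dropped and hypothetical placeholder values:

    from tensorflow.keras.layers import Average, Conv2D

    # head_s, head_m, head_l are hypothetical stand-ins for the three head tensors
    avg = Average(name="heads")([head_s, head_m, head_l])

    # No input_shape here: the functional API infers it from `avg`.
    out = Conv2D(64,                 # placeholder for d2c_config["filters"]
                 3,                  # placeholder for d2c_config["kernels"][0]
                 padding="same",
                 activation="relu",
                 name="d2c_c",
                 dilation_rate=1)(avg)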

LSTM Inputs Confusion

I've been trying to understand LSTM inputs for a while now, and I think I do, but I keep getting confused about how to implement them.
This is what I think; please correct me if I am wrong.
When specifying an LSTM, you specify the number of cells and the input shape (I've been having issues with the input shape). The number of cells specifies how many cells look at the given data and does not affect the required input shape. The input shape (when stateful) is given as batch size, timesteps per batch, and features per timestep. A stateful LSTM retains its internal states until reset. Is this right?
If so, I'm having a lot of trouble specifying the input shape for my network. This is because I'm trying to upgrade an existing network and I can't figure out how and where to specify the input shape without getting an error.
Here is the upgrade: initially I have a CNN feeding a dense layer. I want to add an LSTM that takes the CNN's flattened 1D output as one batch with one timestep, with as many features as the CNN's output has. The LSTM's output is then concatenated with the CNN's output (the LSTM's input) and fed into the dense layer, so the network behaves like an LSTM with a skip connection. What I can't seem to understand is when and how to specify the LSTM layer's input_shape, since it has no input_shape parameter in the functional API. Or maybe I'm just confused; everyone uses a different API in every example, and it gets confusing what is and isn't specified, and how.
Thank you, even if you just help with one of the two parts.
TLDR:
Do I understand LSTM parameters correctly?
How and when do I specify LSTM input_shapes, if at all?
The LSTM units argument sets the dimensions of the LSTM's weight matrices and its output shape.
With the functional API you specify the input shape only for the very first (Input) layer. If your LSTM layer follows a CNN, its input shape is determined automatically from the CNN's output.
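A minimal sketch of the architecture described in the question (all sizes are hypothetical, not the asker's actual values): the CNN's flattened output is reshaped to a single timestep, fed to the LSTM, and concatenated back with the CNN features as a skip connection. No input_shape is ever passed to the LSTM.

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    inp = tf.keras.Input(shape=(64, 64, 3))          # hypothetical image size
    x = layers.Conv2D(16, 3, activation='relu')(inp)
    x = layers.Flatten()(x)                          # (batch, features)

    # Add a timesteps axis: (batch, 1, features). The LSTM infers its
    # input shape from this tensor; no input_shape argument is needed.
    seq = layers.Reshape((1, -1))(x)
    lstm_out = layers.LSTM(32)(seq)                  # units=32 -> (batch, 32)

    # Skip connection: concatenate the LSTM output with its own input features.
    merged = layers.Concatenate()([lstm_out, x])
    out = layers.Dense(10, activation='softmax')(merged)

    model = Model(inp, out)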

What does input_shape,input_dim and units indicate or mean while adding layers in a Keras?

I'm new to Keras and I was wondering if I could do some work on text classification using neural networks.
So I went ahead and got a spam-or-ham dataset, vectorized the data using TF-IDF, converted the labels to a NumPy array using to_categorical(), and managed to split my data into train and test sets, each of which is a NumPy array with around 7k columns.
This is the code I used:
model.add(Dense(8, input_dim=7082, activation='relu'))
model.add(Dense(8, input_dim=7082))
model.add(Dense(2, activation='softmax'))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=['accuracy'])
I don't know if I'm doing something totally wrong. Could someone point me in the right direction as to what I should change?
The error thrown:
Error when checking input: expected dense_35_input to have 2 dimensions, but got array with shape ()
The Dense layer doesn't seem to have an input_dim parameter according to the documentation.
input_shape is a tuple and must be used in the first layer of your model. It refers to the shape of the input data.
units refers to the dimensionality of the output space, that is, the shape of each output element produced by the dense layer.
In your case, if your input data has dimensionality 7082, this should work:
model.add(Dense(8, input_shape=(7082,), activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=['accuracy'])
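To check the shapes end to end, a minimal sketch with hypothetical random data standing in for the TF-IDF matrix and one-hot labels (the real data comes from the asker's pipeline):

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    # Hypothetical stand-ins: 100 samples, 7082 TF-IDF features, 2 classes
    X_train = np.random.rand(100, 7082).astype("float32")   # (samples, 7082)
    y_train = np.eye(2)[np.random.randint(0, 2, 100)]       # one-hot, (samples, 2)

    model = Sequential()
    model.add(Dense(8, input_shape=(7082,), activation='relu'))  # units=8
    model.add(Dense(2, activation='softmax'))                    # 2 classes
    model.compile(loss="categorical_crossentropy", optimizer="adam",
                  metrics=['accuracy'])
    model.fit(X_train, y_train, epochs=2, batch_size=16)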

How to apply lstm in speech emotion feature

I'd like to apply an LSTM to my speech emotion dataset (a dataset of numeric features with one column of targets).
I've done the train/test split. Do I need to apply some other transformation to the dataset before building the model?
I ask this question because, when I compile and fit the model, I get an error at the last dense layer:
Error when checking model target: expected activation_2 to have shape (8,) but got array with shape (1,).
Thanks.
During my internship I learned how to fix this error and where to look.
Here's what you have to check.
If the reported layer is the first one, the cause is the input data: the training data must have the same shape as the input declared when you created your model.
If the failing layer is the last one, then it is the labels that need checking: either you used a sigmoid but the labels are not binary, or you used a softmax but the labels are not in one-hot format ([0,1,0] means, for example, class 2 out of 3 classes). So either the labels are badly encoded or you picked the wrong function (sigmoid/softmax) for your output layer.
Hope this helps.
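In this specific case the error says the final layer expects targets of shape (8,) while the supplied targets have shape (1,), which suggests integer class labels being fed to a softmax over 8 classes. A minimal sketch (assuming 8 emotion classes and integer labels, which the question doesn't confirm):

    import numpy as np
    from tensorflow.keras.utils import to_categorical

    # Hypothetical integer targets, one label per sample, 8 emotion classes
    y_train = np.array([0, 3, 7, 2])                        # each target has shape (1,)
    y_train_onehot = to_categorical(y_train, num_classes=8)  # now shape (4, 8)

    # Alternative: keep the integer labels and compile the model with
    # loss='sparse_categorical_crossentropy' instead of one-hot encoding.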
