Set bias in CNN - python

I have two massive numpy arrays of weights and biases for a CNN. I can set weights for each layer (using set_weights) but I don't see a way to set the bias for each layer. How do I do this?

You set the bias the same way, using layer.set_weights(weights). From the documentation:
weights: a list of Numpy arrays. The number of arrays and their shape must match the number of the dimensions of the weights of the layer (i.e. it should match the output of `get_weights`).
You don't put just the filter weights in there; you supply one array for each parameter the layer has (for a convolutional layer, that is typically the kernel and the bias). The order in which you have to supply the arrays follows layer.weights. You can look at the code, or print the names of the layer's weights with something like
print([p.name for p in layer.weights])
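For example, a Conv2D layer has two parameters, so layer.weights is [kernel, bias] and you pass both arrays in that order. A minimal sketch, assuming tensorflow.keras (the layer configuration and array values are illustrative):
import numpy as np
from tensorflow.keras import layers

# an illustrative Conv2D layer: 8 filters of size 3x3 on single-channel input
layer = layers.Conv2D(8, 3)
layer.build((None, 28, 28, 1))  # creates the kernel and bias variables

kernel = np.random.randn(3, 3, 1, 8).astype('float32')  # must match get_weights()[0].shape
bias = np.zeros(8, dtype='float32')                     # must match get_weights()[1].shape

# set_weights takes the full parameter list, in the order given by layer.weights
layer.set_weights([kernel, bias])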

Related

How to create multi-head convolutional layers merging as a temporal dimension in Keras

I want to implement a time-series prediction model that takes a window of non-image matrices as input. Each matrix should be processed by its own Conv2D layer in the first layer, and the outputs of these conv layers should then be merged along the time dimension and passed to a recurrent layer such as an LSTM.
One way is the time-distribution technique, but a TimeDistributed layer applies the same layer to several inputs and produces one output per input to build the time sequence. In other words, it shares the same weights among all convolutional heads, which is not what I want: if you inject 5 matrices, the weights are not tweaked 5 times but only once, and are shared by every block defined in the TimeDistributed layer. How can I avoid this and have independent convolutional heads whose outputs are merged along the time dimension for the next layer?
I have tried to implement it as follows:
Matrix_Dimention = 20
Input_Window = 4
Input_Matrixes = []
ConvLayers = []
for i in range(0, Input_Window):
    Inp_Matrix = layers.Input(shape=(Matrix_Dimention, Matrix_Dimention, 1))
    Input_Matrixes.append(Inp_Matrix)
    conv = layers.Conv2D(64, 5, activation='relu', input_shape=(Matrix_Dimention, Matrix_Dimention, 1))(Inp_Matrix)
    ConvLayers.append(conv)

# Temporal Concatenation
Spatial_Layers_Concate = layers.Concatenate(ConvLayers)  # this causes error: Inputs to a layer should be tensors
# Temporal Component
LSTM_Layer = layers.LSTM(activation='relu', return_sequences=False)(Spatial_Layers_Concate)
Model = keras.Model(Input_Matrixes, LSTM_Layer)
Model.compile(optimizer='adam', loss=keras.losses.MeanSquaredError)
It would be great if you could correct my implementation in your answer, or provide your own if there is a better way to realize this idea. Thanks.
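For reference, here is a minimal sketch of one way to wire this up (assuming tf.keras; the 32 LSTM units and the Flatten/Reshape step that creates the time axis are illustrative choices). Concatenate has to be instantiated and then called on the list of tensors, and each head's output needs an explicit time axis of length 1 so the heads can be concatenated along it:
from tensorflow import keras
from tensorflow.keras import layers

Matrix_Dimention = 20
Input_Window = 4

Input_Matrixes = []
Head_Steps = []
for i in range(Input_Window):
    inp = layers.Input(shape=(Matrix_Dimention, Matrix_Dimention, 1))
    Input_Matrixes.append(inp)
    conv = layers.Conv2D(64, 5, activation='relu')(inp)  # a separate layer per head -> independent weights
    flat = layers.Flatten()(conv)
    Head_Steps.append(layers.Reshape((1, -1))(flat))     # add a time axis of length 1

# concatenate along the time axis -> shape (batch, Input_Window, features)
sequence = layers.Concatenate(axis=1)(Head_Steps)
lstm_out = layers.LSTM(32, activation='relu', return_sequences=False)(sequence)

model = keras.Model(Input_Matrixes, lstm_out)
model.compile(optimizer='adam', loss=keras.losses.MeanSquaredError())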

Unflattening Layer in Keras

I would like to create a simple Keras neural network that accepts an input matrix of dimension (rows, columns) = (n, m), flattens the matrix to a dimension (n*m, 1), sends the flattened matrix through a number of arbitrary layers, and in the final layer, once more unflattens the matrix to a dimension of (n, m) before releasing this final matrix as an output.
The issue I'm having is that I haven't found any documentation for an Unflatten layer on the keras.io page, and I'm wondering whether there is a reason such a seemingly standard, commonly used layer doesn't exist. Is there a more natural and easy way to do what I'm proposing?
You can use the Reshape layer for this purpose. It accepts the desired output shape as its argument and reshapes the input tensor to that shape. For example:
from keras.layers import Reshape
rsh_inp = Reshape((n*m, 1))(inp) # if you don't want the last axis with dimension 1, you can also use Flatten layer
# rsh_inp goes through a number of arbitrary layers ...
# reshape back the output
out = Reshape((n,m))(out_rsh_inp)
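Putting it together, a self-contained sketch (n, m, and the intermediate Dense size are illustrative):
from keras.layers import Input, Dense, Reshape
from keras.models import Model

n, m = 4, 5  # illustrative matrix dimensions

inp = Input(shape=(n, m))
x = Reshape((n * m,))(inp)              # flatten the matrix to a vector
x = Dense(n * m, activation='relu')(x)  # ... arbitrary intermediate layers ...
out = Reshape((n, m))(x)                # unflatten back to the original shape

model = Model(inp, out)
model.summary()  # final output shape: (None, 4, 5)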

How to implement a weighted mean squared error function in Keras

I am defining a weighted mean squared error in Keras as follows:
import numpy as np
import tensorflow as tf
from keras import backend as K

def weighted_mse(yTrue, yPred):
    data_weights = [w0, w1, w2, w3]  # per-output weights, defined elsewhere
    data_weights_np = np.asarray(data_weights, np.float32)
    weights = tf.convert_to_tensor(data_weights_np, np.float32)
    return K.mean(weights * K.square(yTrue - yPred))
I have a list of weights, one for each prediction dimension. The predictions have shape, for example, (25, 4); they are generated via a final dense layer with dimension 4. I wish to weight these predictions in the mean squared error, so I generate a tensor of weights and multiply it with the squared error. Is this the correct way to do so?
When I print the shapes using tf.shape, yTrue and yPred show:
Tensor("loss_19/dense_20_loss/Shape:0", shape=(3,), dtype=int32)
and for weights:
Tensor("loss_19/dense_20_loss/Shape_2:0", shape=(1,), dtype=int32)
The Keras API already provides a mechanism for weighting, for example through the model.fit function. From the documentation:
class_weight: Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to "pay more attention" to samples from an under-represented class.
sample_weight: Optional Numpy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. In this case you should make sure to specify sample_weight_mode="temporal" in compile().
If you have a weight for each sample, you can pass the NumPy array as sample_weight to achieve the same effect without writing your own loss function.
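For instance, a minimal sketch of per-sample weighting through fit() (the model and data shapes are illustrative, loosely matching the (25, 4) predictions from the question):
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(4, input_shape=(10,))])  # final dense layer with dimension 4
model.compile(optimizer='adam', loss='mse')

x_train = np.random.rand(25, 10).astype('float32')
y_train = np.random.rand(25, 4).astype('float32')
sample_weights = np.random.rand(25).astype('float32')  # one weight per sample

# each sample's squared error is scaled by its weight during training
model.fit(x_train, y_train, sample_weight=sample_weights, epochs=2)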

How to set size of hidden state vector in LSTM, keras?

I am currently setting the vector size using model.add(LSTM(50)), i.e. setting the value of the units argument, but I highly doubt its correctness (in the Keras documentation, units is explained as the dimensionality of the output space). Can anyone help me here?
If by vector size you mean the number of units in the layer, then yes, you are doing it right: the output dimensionality (and the size of the hidden state vector) of an LSTM layer is the same as the number of units. The same applies to convolutional layers, where the number of filters equals the output dimensionality along the last axis, i.e. the number of channels.
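A quick way to see this (the input shape is illustrative):
from keras.models import Sequential
from keras.layers import LSTM

# units=50 -> the hidden state (and output) vector has 50 dimensions
model = Sequential([LSTM(50, input_shape=(10, 8))])  # 10 timesteps, 8 features per step
model.summary()  # output shape: (None, 50)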

Multidimensional Input to Keras LSTM - (for Classification)

I am trying to classify a bunch of spectrograms into C classes using Keras' LSTM (with a Dense layer at the end). To clarify, each spectrogram belongs to a single one of those C classes. Each spectrogram is basically a matrix, constructed by taking (let's say) K measurements every second for about 1000 seconds. So the matrix has K rows and 1000 columns.
Considering this, how may I specify the shape of this input for the LSTM layer ?
Thank you!
It doesn't seem to be in the current documentation for LSTM layers, but input_shape can be provided as (timesteps, input_dim).
If each spectrogram to be classified has 1000 time steps and K measurements at each time step, an LSTM layer can be constructed like this:
LSTM(num_units, input_shape=(1000, K))
Then the input array containing all of the spectrograms should have the shape (num_spectrograms, 1000, K).
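As an end-to-end sketch (K, C, and num_units are placeholders for the question's actual values):
from keras.models import Sequential
from keras.layers import LSTM, Dense

K, C, num_units = 64, 10, 128  # placeholders: measurements per step, classes, LSTM units

model = Sequential([
    LSTM(num_units, input_shape=(1000, K)),  # 1000 timesteps, K measurements each
    Dense(C, activation='softmax'),          # one probability per class
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# x: (num_spectrograms, 1000, K), y: one-hot labels of shape (num_spectrograms, C)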
