Modifying TensorFlow Neural Network connections - python

I am using Python 3.x along with TensorFlow 2.0 to create a toy neural network model, which is as follows:
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(
    Dense(
        units = 2, activation = 'relu',
        kernel_initializer = tf.keras.initializers.GlorotNormal(),
        input_shape = (2,)
    )
)
model.add(
    Dense(
        units = 2, activation = 'relu',
        kernel_initializer = tf.keras.initializers.GlorotNormal()
    )
)
model.add(
    Dense(
        units = 1, activation = 'sigmoid'
    )
)
I now want to modify the weights/biases of the model in a layer-wise manner. The rule I want to apply to the randomly initialized weights/biases is that connections with magnitude less than 0.5 should become zero, while the others should remain the same. This is the code I came up with:
for layer in model.trainable_weights:
    layer = tf.where(tf.less(layer, 0.5), 0, layer)
However, this code does not change the connections as I want. What should I do?
Thanks!

Your code simply creates new tensors that have the desired values and binds them to the Python variable layer; it does not change the underlying TensorFlow variables. You need to use the assign method of the Variable class:
for layer in model.trainable_weights:
    layer.assign(tf.where(tf.less(layer, 0.5), 0, layer))
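Note that tf.less(layer, 0.5) compares the signed values, so large negative weights would also be zeroed. Since the question asks about magnitude, here is a hedged variant using tf.abs, with a quick sanity check (the tf.abs adjustment is mine, not part of the original answer):
import numpy as np

for layer in model.trainable_weights:
    # zero out connections whose magnitude is below 0.5
    layer.assign(tf.where(tf.less(tf.abs(layer), 0.5), tf.zeros_like(layer), layer))

# sanity check: every weight is now either exactly 0 or has magnitude >= 0.5
for layer in model.trainable_weights:
    w = layer.numpy()
    assert np.all((w == 0) | (np.abs(w) >= 0.5))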

Related

How to train only the last convolutional layer?

Could you help me with the code so that, along with the dense layers, the last convolutional layer of EfficientNet is trained as well?
import tensorflow as tf
import tensorflow_hub as hub

features_url = "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b3/feature_vector/2"
img_shape = (299, 299, 3)
features_layer = hub.KerasLayer(features_url,
                                input_shape=img_shape)
# the commented line below keeps all the CNN layers frozen, so it does not work for me at the moment
# features_layer.trainable = False
model = tf.keras.Sequential([
    features_layer,
    tf.keras.layers.Dense(256, activation = 'relu'),
    tf.keras.layers.Dense(64, activation = 'relu'),
    tf.keras.layers.Dense(4, activation = 'softmax')
])
In addition, how can I store the name of the last convolutional layer in a variable?
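A minimal sketch of one possible approach: hub.KerasLayer wraps the whole module as a single opaque layer, so per-layer unfreezing is easier with the tf.keras.applications version of EfficientNet (the swap to EfficientNetB3 and every name below are assumptions of this sketch, not part of the question):
import tensorflow as tf

img_shape = (299, 299, 3)
base = tf.keras.applications.EfficientNetB3(include_top=False, pooling='avg',
                                            input_shape=img_shape)
# freeze everything, then unfreeze only the last convolutional layer;
# base.trainable itself stays True so the unfrozen weights are still collected
for l in base.layers:
    l.trainable = False
last_conv = next(l for l in reversed(base.layers)
                 if isinstance(l, tf.keras.layers.Conv2D))
last_conv.trainable = True
last_conv_name = last_conv.name  # the layer's name, kept in a variable

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(4, activation='softmax')
])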

Training simple CNN-LSTM model

I have a task for my project paper and I do not understand how to train the model. This model is supposed to take an image and segment it into different classes. The hard part is that the different structures to segment look the same, but I would like to differentiate between them. When I try to build a model with convolutional layers and an LSTM, the model only predicts the class of the background.
Here is my model:
import tensorflow as tf
from tensorflow.keras.layers import Input, TimeDistributed, Conv2D, ConvLSTM2D
from tensorflow.keras.models import Model

def LSTMconv10x9(input_size = (200, 9, 10, 1)):
    input = Input(input_size)
    conv1 = TimeDistributed(Conv2D(32, 3, padding = "same", activation = 'relu'))(input)
    conv2 = TimeDistributed(Conv2D(64, 3, padding = "same", activation = 'relu'))(conv1)
    lstm = ConvLSTM2D(32, 3, return_sequences = True, padding = "same", activation = "softmax")(conv2)
    conv4 = TimeDistributed(Conv2D(64, 3, padding = 'same', activation = 'relu'))(lstm)
    conv5 = TimeDistributed(Conv2D(32, 3, padding = 'same', activation = 'relu'))(conv4)
    output = ConvLSTM2D(11, 1, return_sequences = True, padding = "same", activation = None)(conv5)
    model = Model(inputs = input, outputs = output)
    model.compile(loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits = True),
                  optimizer = tf.keras.optimizers.Adam(),
                  metrics = ["accuracy"], sample_weight_mode = 'temporal')
    return model
And the way I train the model:
import numpy as np

weights = np.where(train_y == 0, 0.1, 0.9)
model1 = LSTMconv10x9()
model1.fit(train_x, train_y, epochs = 20, batch_size = 32,
           validation_data = (test_x, test_y), sample_weight = weights)
The training set has shape (2000, 200, 9, 10, 1) and the validation set (1000, 200, 9, 10, 1): 2000 training videos of 200 frames each. The videos show 10 structures that look the same, but I would like to number them as different structures. This is a segmentation problem.
The data is very unbalanced: there are objects in each video that I want to separate, but the background makes up about 90% of each video. I have tried weighting samples with sample_weight_mode='temporal' in TensorFlow, but it did not seem to work. The most important thing is for the model to find the structures.
Does anyone have any solutions to my problems?
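If sample_weight does not bite, one hedged alternative is to fold the class weighting into a custom per-pixel loss instead (a sketch only; the function name is mine and the 0.1/0.9 weights mirror those above):
import tensorflow as tf

def weighted_scce(y_true, y_pred):
    y_true = tf.squeeze(y_true, axis=-1)  # drop the trailing channel dimension
    # per-pixel sparse categorical cross-entropy on the logits
    loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred, from_logits=True)
    # down-weight background pixels (class 0), mirroring the 0.1/0.9 split above
    w = tf.where(tf.equal(y_true, 0), 0.1, 0.9)
    return loss * tf.cast(w, loss.dtype)

# model.compile(loss=weighted_scce, optimizer='adam', metrics=['accuracy'])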

Custom memory layer for A3C

I'm trying to create an A3C to play a game using frames as an input.
I think my A3C could benefit from having a form of memory layer, like an LSTM layer.
From what I understand of how an LSTM works, you have to feed it data in batches, and the memory only operates on what is given in the batch.
Unfortunately, it is not possible for me to give the whole replay in the batch, as the batch size would be way too big. So I wanted to know if it is possible to create a memory layer that would work similarly to an LSTM layer. What I have in mind would generate some values based on the output of a layer of the neural network and decide whether it is worth saving those values or keeping the previous ones; this memory layer would then feed the next layer of the neural network.
from tensorflow.keras.layers import Input, Convolution2D, Flatten, Dense, Dropout

S = Input(shape = (self.IMAGE_ROWS, self.IMAGE_COLS, self.IMAGE_CHANNELS, ), name = 'Input')
h0 = Convolution2D(1, kernel_size = (8,8), strides = (4,4), activation = 'relu', kernel_initializer = 'random_uniform', bias_initializer = 'random_uniform')(S)
h1 = Convolution2D(1, kernel_size = (4,4), strides = (2,2), activation = 'relu', kernel_initializer = 'random_uniform', bias_initializer = 'random_uniform')(h0)
h2 = Flatten()(h1)
h3 = Dense(256, activation = 'relu', kernel_initializer = 'random_uniform', bias_initializer = 'random_uniform') (h2)
h3 = Dropout(0.5)(h3)
# I was thinking of adding the memory layer here
# It would take the values of h3 and the output would feed h4_k with the values of h3
h4_k = Dense(256, activation = 'relu', kernel_initializer = 'random_uniform', bias_initializer = 'random_uniform') (h3)
h4_k = Dropout(0.5)(h4_k)
h5_k = Dense(256, activation = 'relu', kernel_initializer = 'random_uniform', bias_initializer = 'random_uniform') (h4_k)
h5_k = Dropout(0.5)(h5_k)
probs_k = Dense(self.n_actions_k, activation = 'softmax')(h5_k)
values_k = Dense(1, activation = 'linear')(h5_k)
Does this kind of layer already exist? If not, how can I create a custom layer in TensorFlow with the capacity to choose whether it should update its values or not?
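As far as I know, no stock Keras layer does exactly this, but here is a minimal sketch of the idea described above: a custom layer that keeps its previous activations in a non-trainable variable and uses a learned sigmoid gate to decide, per feature, whether to overwrite them (the class name, the gating scheme, and the batch-averaged update are all assumptions of this sketch):
import tensorflow as tf

class GatedMemory(tf.keras.layers.Layer):
    """Mixes incoming activations with remembered ones; a learned
    sigmoid gate decides, per feature, how much of the new value to keep."""

    def build(self, input_shape):
        dim = int(input_shape[-1])
        self.gate = tf.keras.layers.Dense(dim, activation='sigmoid')
        # non-trainable slot holding the last remembered activations
        self.memory = self.add_weight(name='memory', shape=(1, dim),
                                      initializer='zeros', trainable=False)

    def call(self, x):
        g = self.gate(x)                          # in (0, 1): 1 = take the new value
        mixed = g * x + (1.0 - g) * self.memory   # memory broadcasts over the batch
        # remember the batch average of the mixed state for the next call
        self.memory.assign(tf.reduce_mean(mixed, axis=0, keepdims=True))
        return mixed

# usage, at the spot marked in the question:
# h3 = GatedMemory()(h3)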

Convert Convnet.js neural network model to Keras Tensorflow

I have a neural network model that was created in convnet.js and that I have to define using Keras. Does anyone have an idea how I can do that?
neural = {
    net : new convnetjs.Net(),
    layer_defs : [
        {type:'input', out_sx:4, out_sy:4, out_depth:1},
        {type:'fc', num_neurons:25, activation:"regression"},
        {type:'regression', num_neurons:5}
    ],
    neuralDepth: 1
}
This is what I could do so far; I cannot be sure if it is correct.
#--- Build Model ---
model = models.Sequential()
# Input layer
model.add(layers.Dense(4, activation = "relu", input_shape=(4,)))
# Hidden layers
model.add(layers.Dense(25, activation = "relu"))
model.add(layers.Dense(5, activation = "relu"))
# Output layer
model.add(layers.Dense(1, activation = "linear"))
model.summary()
# Compile model
model.compile(loss = "mean_squared_error", optimizer = "adam", metrics = ["mean_squared_error"])
From the Convnet.js doc : "your last layer must be a loss layer ('softmax' or 'svm' for classification, or 'regression' for regression)."
Also : "Create a regression layer which takes a list of targets (arbitrary numbers, not necessarily a single discrete class label as in softmax/svm) and backprops the L2 Loss."
It's unclear. I suspect the 'regression' layer is just another layer of Dense (fully connected) neurons. The word 'regression' probably refers to a linear activation. So, no 'relu' this time?
Anyway, it would probably look something like this (functional API rather than Sequential):
from keras.layers import Input, Dense
from keras.models import Model
from keras.optimizers import Adam

my_input = Input(shape = (16, ))  # the convnet.js input is 4x4x1 = 16 values
x = Dense(25, activation='relu')(my_input)
x = Dense(5)(x)                   # linear output, matching num_neurons:5
my_model = Model(inputs=my_input, outputs=x)
my_model.compile(optimizer=Adam(LEARNING_RATE), loss='mse', metrics=['mse'])  # 'mse' matches the L2 loss; LEARNING_RATE defined elsewhere
After reading a bit of the docs, convnet.js seems like a nice project. It would be much better with somebody with neural network knowledge on board.

Setting up a CNN network with multi-label classification

I have a set of 100x100 images, and an output array corresponding to the size of the input (i.e. length of 10000), where each element can be an 1 or 0.
I am trying to write a Python program using TensorFlow/Keras to train a CNN on this data; however, I am not sure how to set up the layers to handle it, or what type of network to use.
Currently, I am doing the following (based off the TensorFlow tutorials):
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(100, 100)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(10000, activation=tf.nn.softmax)
])
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
However, I can't seem to find what type of activation I should use for the output layer to get multiple output values. How would I set that up?
I am not sure how to set up the layers to handle it.
Your code is one way to handle that, but as you might read in the literature, it is not the best one. State-of-the-art models usually use 2D convolutional neural networks. E.g.:
img_shape = (100, 100, 1)  # assuming single-channel 100x100 images, with an explicit channel axis
img_input = keras.layers.Input(shape=img_shape)
conv1 = keras.layers.Conv2D(16, 3, activation='relu', padding='same')(img_input)
pol1 = keras.layers.MaxPooling2D(2)(conv1)
conv2 = keras.layers.Conv2D(32, 3, activation='relu', padding='same')(pol1)
pol2 = keras.layers.MaxPooling2D(2)(conv2)
conv3 = keras.layers.Conv2D(64, 3, activation='relu', padding='same')(pol2)
pol3 = keras.layers.MaxPooling2D(2)(conv3)
flatten = keras.layers.Flatten()(pol3)
dens1 = keras.layers.Dense(512, activation='relu')(flatten)
dens2 = keras.layers.Dense(512, activation='relu')(dens1)
drop1 = keras.layers.Dropout(0.2)(dens2)
output = keras.layers.Dense(10000, activation='softmax')(drop1)
I can't seem to find what type of activation I should be using for the output layer to enable me to have multiple output values
Softmax is a good choice. It squashes a K-dimensional vector of arbitrary real values to a K-dimensional vector of real values in the range (0, 1) that sum to 1.
You can pass the output of your softmax to the top_k function to extract the top k predictions:
softmax_out = tf.nn.softmax(logit)
tf.nn.top_k(softmax_out, k=5, sorted=True)
If you need multi-label classification, you should change the above network: the last activation function changes to sigmoid, and the loss to binary_crossentropy, so that each of the 10000 outputs becomes an independent yes/no decision:
output = keras.layers.Dense(10000, activation='sigmoid')(drop1)
Then use tf.round and tf.where to extract the labels:
indices = tf.where(tf.round(output) > 0.5)  # positions of the positive predictions
final_output = tf.gather(x, indices)        # x: the tensor of candidate labels, defined elsewhere
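For completeness, a minimal sketch of the matching compile call for that multi-label setup (the optimizer choice here is an assumption):
model.compile(optimizer='adam',
              loss='binary_crossentropy',  # one independent binary decision per label
              metrics=['accuracy'])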
