How to change data during training (Python)

I want to multiply and divide the data values during training. How should I do this?
model = models.Sequential()
model.add(layers.Conv2D(32,(3,3), activation='relu', padding = 'same', input_shape=(28,28,1)))
model.add(layers.MaxPooling2D((2,2)))
# I want to multiply and divide the data at this point.
model.add(layers.Conv2D(64,(3,3), activation='relu', padding = 'same'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64,(3,3), activation='relu', padding = 'same'))

You can use the Multiply layer (and other such operation layers), as per this documentation. More options are available in tf.math, including divide. Just keep in mind that mixing tensorflow.keras with pure keras creates problems, so if you use tf.math, be sure to from tensorflow import keras rather than just import keras.
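For example, here is a minimal sketch that wraps tf.math ops in a Lambda layer at the point you marked; the scale factors 2.0 and 3.0 are placeholders for whatever values you actually need:
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', padding='same',
                        input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
# placeholder factors: multiply the feature maps by 2.0, then divide by 3.0
model.add(layers.Lambda(lambda x: tf.math.divide(tf.math.multiply(x, 2.0), 3.0)))
model.add(layers.Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu', padding='same'))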

Related

Making a Convolutional Neural Network from a flow diagram

I am trying to make a neural network from a flow diagram. It is necessary for my analysis to translate this network into code. Could you tell me if I'm doing anything wrong? Here is the diagram. The author used binary classification, but I'm doing multi-class classification, so ignore that part. I'm kind of new to building CNNs, and this is all I could come up with from different sources on the internet.
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Concatenate,Dense,Flatten
from tensorflow.keras.models import Sequential
from keras.layers import BatchNormalization
model_1=Sequential()
#First Stacked
model_1.add(Conv2D(filters=64,kernel_size=7,stride=(2,2),activation='relu',input_shape=(128,128,1)))
model_1.add(BatchNormalization())
model_1.add(LeakyReLU(alpha=0.1))
layer_1=Conv2D(filters=32,kernel_size=3,stride=(1,1),activation='relu')(model_1)
layer_2=Conv2D(filters=64,kernel_size=5,stride=(1,1),activation='relu')(model_1)
layer_3=Conv2D(filters=128,kernel_size=5,stride=(1,1),activation='relu')(model_1)
concatenate_1 = keras.layers.concatenate([layer_1, layer_2,layer_3], axis=1)
#Second Stacked
concatenate_1.add(Conv2D(filters=64,kernel_size=1,stride=(1,1),activation='relu')
concatenate_1.add(BatchNormalization())
concatenate_1.add(LeakyReLU(alpha=0.1))
concatenate_1.add(MaxPooling2D((2, 2), strides=(2, 2), padding='same'))
layer_1=Conv2D(filters=32,kernel_size=1,stride=(1,1),activation='relu')(concatenate_1)
layer_2=Conv2D(filters=64,kernel_size=3,stride=(1,1),activation='relu')(concatenate_1)
layer_3=Conv2D(filters=128,kernel_size=5,stride=(1,1),activation='relu')(concatenate_1)
concatenate_2 = keras.layers.concatenate([layer_1, layer_2,layer_3], axis=1)
#Third Stacked
concatenate_2.add(Conv2D(filters=64,kernel_size=1,stride=(1,1),activation='relu')
concatenate_2.add(BatchNormalization())
concatenate_2.add(LeakyReLU(alpha=0.1))
concatenate_2.add(MaxPooling2D((2, 2), strides=(2, 2), padding='same'))
layer_1=Conv2D(filters=32,kernel_size=1,stride=(1,1),activation='relu')(concatenate_2)
layer_2=Conv2D(filters=64,kernel_size=3,stride=(1,1),activation='relu')(concatenate_2)
layer_3=Conv2D(filters=128,kernel_size=5,stride=(1,1),activation='relu')(concatenate_2)
concatenate_3 = keras.layers.concatenate([layer_1, layer_2,layer_3], axis=1)
#Final
concatenate_3.add(Conv2D(filters=64,kernel_size=1,stride=(1,1),activation='relu')
concatenate_3.add(BatchNormalization())
concatenate_3.add(LeakyReLU(alpha=0.1))
concatenate_3.add(MaxPooling2D((2, 2), strides=(2, 2), padding='same'))
concatenate_3=Flatten()(concatenate_3)
model_dfu_spnet=Dense(200, activation='relu')(concatenate_3)
mode_dfu_spnet.add(Dropout(0.3,activation='softmax'))

Concatenate() is done by doing Concatenate(**args)([layers]):
keras.layers.concatenate([layer_1, layer_2,layer_3], axis=1)
should be (note the capitalization)
keras.layers.Concatenate(axis=1)([layer_1, layer_2,layer_3])
# note: the default is actually axis=-1 (the channel axis for
# channels-last data), which is usually what you want:
# keras.layers.Concatenate()([layer_1, layer_2, layer_3])
Then do the same for the other Concatenate().
I'm not sure what you want to do with this:
model_dfu_spnet=Dense(200, activation='relu')(concatenate_3)
But following the picture, that layer should have 32 neurons (which seems kind of small for that, but I'm not sure):
model_dfu_spnet=Dense(32, activation='relu')(concatenate_3)
You don't put an activation function on Dropout:
mode_dfu_spnet.add(Dropout(0.3,activation='softmax'))
but you probably want it on another Dense layer afterwards, with the number of classes as the number of neurons:
mode_dfu_spnet.add(Dropout(0.3))
mode_dfu_spnet.add(Dense(num_of_classes, activation="softmax", name="visualized_layer"))
I'm not used to building models with Concatenate in the Sequential API (I usually use the Functional API), but it shouldn't be any different.
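For reference, a minimal Functional-API sketch of the first stacked block; the input shape comes from your code, but the class count (5) and the padding='same' choice (needed so the branches can be concatenated on the channel axis) are my assumptions:
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(128, 128, 1))
x = layers.Conv2D(64, 7, strides=2, padding='same')(inputs)
x = layers.BatchNormalization()(x)
x = layers.LeakyReLU(alpha=0.1)(x)
# parallel branches; padding='same' keeps spatial sizes equal,
# which is required for concatenation along the channel axis
b1 = layers.Conv2D(32, 3, padding='same', activation='relu')(x)
b2 = layers.Conv2D(64, 5, padding='same', activation='relu')(x)
b3 = layers.Conv2D(128, 5, padding='same', activation='relu')(x)
x = layers.Concatenate()([b1, b2, b3])
x = layers.Flatten()(x)
x = layers.Dense(32, activation='relu')(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(5, activation='softmax')(x)  # 5 = hypothetical class count
model = Model(inputs, outputs)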

Keras Negative dimension [duplicate]

I got this error message when declaring the input layer in Keras.
ValueError: Negative dimension size caused by subtracting 3 from 1 for
'conv2d_2/convolution' (op: 'Conv2D') with input shapes: [?,1,28,28],
[3,3,28,32].
My code is like this
model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1,28,28)))
Sample application: https://github.com/IntellijSys/tensorflow/blob/master/Keras.ipynb
By default, Convolution2D (https://keras.io/layers/convolutional/) expects the input to be in the format (samples, rows, cols, channels), which is "channels-last". Your data seems to be in the format (samples, channels, rows, cols). You should be able to fix this using the optional keyword data_format = 'channels_first' when declaring the Convolution2D layer.
model.add(Convolution2D(32, (3, 3), activation='relu', input_shape=(1,28,28), data_format='channels_first'))
I had the same problem, however the solution provided in this thread did not help me.
In my case it was a different problem that caused this error:
Code
imageSize=32
classifier=Sequential()
classifier.add(Conv2D(64, (3, 3), input_shape = (imageSize, imageSize, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(64, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(64, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(64, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(64, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Flatten())
Error
The image size is 32 by 32. After the first convolutional layer, it is reduced to 30 by 30 (if I understood convolution correctly).
Then the pooling layer halves it, so 15 by 15.
Then another convolutional layer reduces it to 13 by 13...
I hope you can see where this is going:
in the end, my feature map is so small that my pooling layer (or convolution layer) is too big to go over it, and that causes the error.
Solution
The easy solution to this error is to either make the input image bigger or use fewer convolutional or pooling layers.
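To see this concretely, here's a small sketch (my addition, not part of the original answer) that traces the spatial size through the stack, using out = in - kernel + 1 for a 'valid' convolution and halving for each 2x2 pooling:
size = 32
for block in range(1, 6):
    size = size - 3 + 1       # 3x3 'valid' convolution
    if size <= 0:
        print(f"block {block}: a 3x3 convolution no longer fits")
        break
    size //= 2                # 2x2 max pooling with stride 2
    print(f"after conv+pool block {block}: {size}x{size}")
# prints 15x15, 6x6, 2x2, then fails at block 4 -- hence the error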
Keras is available with the following backend compatibility:
TensorFlow: by Google,
Theano: developed by the LISA lab,
CNTK: by Microsoft.
Whenever you see an error with shapes like [?,X,X,X], [X,Y,Z,X], it's a channel-ordering issue. To fix it, set the image dimension ordering in Keras:
Import
from keras import backend as K
K.set_image_dim_ordering('th')
The 'th' (Theano) ordering is channels-first; in the 'tf' ordering, convolutional kernels have the shape (rows, cols, input_depth, depth).
With the ordering matched to your data, this should work.
You can instead preserve the spatial dimensions of the volume, so that the output volume size matches the input volume size, by setting the padding to "same":
use padding='same'
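As a rough sketch of how that applies to the classifier above (my rewrite, not the original poster's code): with 'same' padding only the pooling layers shrink the feature map, so the size goes 32 -> 16 -> 8 -> 4 -> 2 -> 1 and never becomes negative:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten

imageSize = 32
classifier = Sequential()
classifier.add(Conv2D(64, (3, 3), padding='same', activation='relu',
                      input_shape=(imageSize, imageSize, 3)))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
for _ in range(4):
    # 'same' padding preserves the spatial size; only pooling halves it
    classifier.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
    classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Flatten())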
Use the following:
from keras import backend
backend.set_image_data_format('channels_last')
Depending on your preference, you can use 'channels_first' or 'channels_last' to set the image data format. (Source)
If this does not work and your image size is small, try using a smaller CNN architecture, as previous posters mentioned.
Hope it helps!
# define the model as a class
# (assumes these imports; the original snippet did not show them)
from tensorflow.keras import backend, layers, models

class LeNet:
    '''
    In a sequential model, we stack layers sequentially.
    So, each layer has a unique input and output, and those inputs and
    outputs then also come with a unique input shape and output shape.
    '''
    @staticmethod  # can be called without instantiating the class
    def init(numChannels, imgRows, imgCols, numClasses, weightsPath=None):
        # if we are using channels-first, we have to update the input shape
        if backend.image_data_format() == "channels_first":
            inputShape = (numChannels, imgRows, imgCols)
        else:
            inputShape = (imgRows, imgCols, numChannels)
        # initialize the model
        model = models.Sequential()
        # define the first set of CONV => ACTIVATION => POOL layers
        model.add(layers.Conv2D(filters=6, kernel_size=(5, 5), strides=(1, 1),
                                padding="valid", activation='relu',
                                kernel_initializer='he_uniform',
                                input_shape=inputShape))
        model.add(layers.AveragePooling2D(pool_size=(2, 2), strides=(2, 2)))
        return model
I hope this helps :)
See code : Fashion_Mnist_Using_LeNet_CNN

ValueError: Unknown activation function: LeakyReLU [duplicate]

I am trying to produce a CNN using Keras, and wrote the following code:
batch_size = 64
epochs = 20
num_classes = 5
cnn_model = Sequential()
cnn_model.add(Conv2D(32, kernel_size=(3, 3), activation='linear',
input_shape=(380, 380, 1), padding='same'))
cnn_model.add(Activation('relu'))
cnn_model.add(MaxPooling2D((2, 2), padding='same'))
cnn_model.add(Conv2D(64, (3, 3), activation='linear', padding='same'))
cnn_model.add(Activation('relu'))
cnn_model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))
cnn_model.add(Conv2D(128, (3, 3), activation='linear', padding='same'))
cnn_model.add(Activation('relu'))
cnn_model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))
cnn_model.add(Flatten())
cnn_model.add(Dense(128, activation='linear'))
cnn_model.add(Activation('relu'))
cnn_model.add(Dense(num_classes, activation='softmax'))
cnn_model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
I want to use Keras's LeakyReLU activation layer instead of Activation('relu'). However, when I tried using LeakyReLU(alpha=0.1) in its place, I got an error, because this is an activation layer in Keras, not an activation function.
How can I use LeakyReLU in this example?
All advanced activations in Keras, including LeakyReLU, are available as layers, and not as activations; therefore, you should use it as such:
from keras.layers import LeakyReLU
# instead of cnn_model.add(Activation('relu'))
# use
cnn_model.add(LeakyReLU(alpha=0.1))
Sometimes you just want a drop-in replacement for a built-in activation layer, and not having to add extra activation layers just for this purpose.
For that, you can use the fact that the activation argument can be a callable object.
lrelu = lambda x: tf.keras.activations.relu(x, alpha=0.1)
model.add(Conv2D(..., activation=lrelu, ...))
Since a Layer is also a callable object, you could also simply use
model.add(Conv2D(..., activation=tf.keras.layers.LeakyReLU(alpha=0.1), ...))
which now works in TF2. This is a better solution, as it avoids the need to pass custom_objects when loading the model, as #ChristophorusReyhan mentioned.
You can import the class to make the code cleaner and then use it like any other activation. If you choose not to set alpha, don't forget the parentheses: LeakyReLU().
import tensorflow as tf
from tensorflow.keras.layers import LeakyReLU

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(512, activation=LeakyReLU()))
model.add(tf.keras.layers.Dense(512, activation=LeakyReLU(alpha=0.1)))

How to output a 3D tensor from a neural network?

My main input is a 60x256x256 numpy array that is meant to generate a 60x256x256 binary mask (also a numpy array). The binary mask functions as the label, but I do not know how to produce a 3D numpy array or tensor as the output of my neural network. This is my current code:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(32, kernel_size=(5, 5), strides=(1, 1),
activation='relu',
input_shape=(60, 256, 256)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(tf.keras.layers.Conv2D(64, (5, 5), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(1000, activation='relu'))
model.add(tf.keras.layers.Dense(256, activation='softmax'))
model.compile(
optimizer=tf.keras.optimizers.Adam(0.001),
loss=tf.keras.losses.CosineSimilarity(),
metrics=[tf.keras.metrics.CosineSimilarity()],
)
model.fit(
train,
epochs=6,
validation_data=ds_valid,
)
In short, I want the output of the last layer to match the input layer so that it can work with the CosineSimilarity loss function. Any suggestions other than this CNN-based approach will also be very helpful, as it seems CNNs are mostly used for classification.
At the most basic level you can use tf.keras.layers.Reshape. See https://www.tensorflow.org/tutorials/generative/autoencoder
So your last two layers could be:
model.add(tf.keras.layers.Dense(60*256*256))
model.add(tf.keras.layers.Reshape((60, 256, 256)))  # target shape excludes the batch dimension
However, I think what you're looking for is an autoencoder-type network using tf.keras.layers.Conv2DTranspose layers.
The above link is an intro to Autoencoders and should be a good starting point I think.
Not sure about your use case, but I think it's very likely you do want a convolution-based approach: when you flatten the convolutional features, you force the network to forget the spatial structure of the problem (i.e. that it is a picture in 2D space). I don't think the fact that your problem is a regression problem changes this.
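To make that concrete, here is a minimal encoder/decoder sketch of my own; it assumes the 60 slices can be treated as channels (input shape (256, 256, 60)), which may or may not fit your data:
import tensorflow as tf

model = tf.keras.Sequential([
    # encoder: downsample 256 -> 128 -> 64
    tf.keras.layers.Conv2D(32, 3, strides=2, padding='same', activation='relu',
                           input_shape=(256, 256, 60)),
    tf.keras.layers.Conv2D(64, 3, strides=2, padding='same', activation='relu'),
    # decoder: upsample 64 -> 128 -> 256
    tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding='same', activation='relu'),
    tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu'),
    # one sigmoid unit per slice: a (256, 256, 60) mask with values in [0, 1]
    tf.keras.layers.Conv2D(60, 1, activation='sigmoid'),
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
              loss='binary_crossentropy')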

Error "Negative dimension size caused by subtracting 3 from 2" in CNN for mnist dataset [duplicate]

I got this error message when declaring the input layer in Keras.
ValueError: Negative dimension size caused by subtracting 3 from 1 for
'conv2d_2/convolution' (op: 'Conv2D') with input shapes: [?,1,28,28],
[3,3,28,32].
My code is like this
model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1,28,28)))
Sample application: https://github.com/IntellijSys/tensorflow/blob/master/Keras.ipynb
By default, Convolution2D (https://keras.io/layers/convolutional/) expects the input to be in the format (samples, rows, cols, channels), which is "channels-last". Your data seems to be in the format (samples, channels, rows, cols). You should be able to fix this using the optional keyword data_format = 'channels_first' when declaring the Convolution2D layer.
model.add(Convolution2D(32, (3, 3), activation='relu', input_shape=(1,28,28), data_format='channels_first'))
I had the same problem, however the solution provided in this thread did not help me.
In my case it was a different problem that caused this error:
Code
imageSize=32
classifier=Sequential()
classifier.add(Conv2D(64, (3, 3), input_shape = (imageSize, imageSize, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(64, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(64, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(64, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(64, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Flatten())
Error
The image size is 32 by 32. After the first convolutional layer, we reduced it to 30 by 30. (If I understood convolution correctly)
Then the pooling layer divides it, so 15 by 15.
Then another convolutional layer reduces it to 13 by 13...
I hope you can see where this is going:
In the end, my feature map is so small that my pooling layer (or convolution layer) is too big to go over it - and that causes the error
Solution
The easy solution to this error is to either make the image size bigger or use less convolutional or pooling layers.
Keras is available with following backend compatibility:
TensorFlow : By google,
Theano : Developed by LISA lab,
CNTK : By Microsoft
Whenever you see a error with [?,X,X,X], [X,Y,Z,X], its a channel issue to fix this use auto mode of Keras:
Import
from keras import backend as K
K.set_image_dim_ordering('th')
"tf" format means that the convolutional kernels will have the shape (rows, cols, input_depth, depth)
This will always work ...
You can instead preserve spatial dimensions of the volume such that the output volume size matches the input volume size, by setting the value to “same”.
use padding='same'
Use the following:
from keras import backend
backend.set_image_data_format('channels_last')
Depending on your preference, you can use 'channels_first' or 'channels_last' to set the image data format. (Source)
If this does not work and your image size is small, try reducing the architecture of your CNN, as previous posters mentioned.
Hope it helps!
# define the model as a class
class LeNet:
'''
In a sequential model, we stack layers sequentially.
So, each layer has unique input and output, and those inputs and outputs
then also come with a unique input shape and output shape.
'''
#staticmethod ## class can instantiated only once
def init(numChannels, imgRows, imgCols , numClasses, weightsPath=None):
# if we are using channel first we have update the input size
if backend.image_data_format() == "channels_first":
inputShape = (numChannels , imgRows , imgCols)
else:
inputShape = (imgRows , imgCols , numChannels)
# initilize the model
model = models.Sequential()
# Define the first set of CONV => ACTIVATION => POOL LAYERS
model.add(layers.Conv2D( filters=6,kernel_size=(5,5),strides=(1,1),
padding="valid",activation='relu',kernel_initializer='he_uniform',input_shape=inputShape))
model.add(layers.AveragePooling2D(pool_size=(2,2),strides=(2,2)))
I hope it would help :)
See code : Fashion_Mnist_Using_LeNet_CNN

Categories