For part of an embedded project, I trained a network in TensorFlow, and now I'm reloading the variables in a NumPy/SciPy-based model script. However, I'm unclear on how to redo the conv2d steps with the weights I have.
I've looked at this link: Difference between Tensorflow convolution and numpy convolution, but I haven't been able to carry that over to the case where the weights are four-dimensional.
This is my Tensorflow code:
# input shape: (1, 224, 224, 1)
weight1 = tf.Variable(tf.truncated_normal([3, 3, 1, 16], stddev=stddev))
conv1 = tf.nn.conv2d(input, weight1, strides=[1, 1, 1, 1], padding='SAME')
# conv1 shape: (1, 224, 224, 16)
weight2 = tf.Variable(tf.truncated_normal([3, 3, 16, 32], stddev=stddev))
conv2 = tf.nn.conv2d(conv1, weight2, strides=[1, 1, 1, 1], padding='SAME')
# conv2 shape: (1, 224, 224, 32)
And when I try to use the convolve functions from the SciPy or NumPy libraries, the output dimensions are incorrect:
from scipy.ndimage.filters import convolve
conv1 = convolve(input, weight1[::-1])
# conv1 shape: (1, 224, 224, 1)
conv2 = convolve(conv1, weight2[::-1])
# conv2 shape: (1, 224, 224, 16)
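From the linked answer, my understanding is that tf.nn.conv2d is a cross-correlation and that each output channel should be a sum over the input channels. Here is a sketch of what I think the equivalent NumPy/SciPy computation looks like for stride 1 and 'SAME' padding (untested, using scipy.signal.correlate2d rather than scipy.ndimage):

import numpy as np
from scipy.signal import correlate2d

def conv2d_nhwc(x, w):
    """Sketch of tf.nn.conv2d with stride 1 and 'SAME' padding.

    x: (1, H, W, C_in) input, w: (kh, kw, C_in, C_out) TF-layout weights.
    tf.nn.conv2d is a cross-correlation, so the kernel is not flipped.
    """
    _, h, width, c_in = x.shape
    c_out = w.shape[-1]
    out = np.zeros((1, h, width, c_out), dtype=x.dtype)
    for o in range(c_out):
        for i in range(c_in):
            # Each output channel is the sum over input channels of a 2D
            # cross-correlation with the corresponding kernel slice.
            out[0, :, :, o] += correlate2d(x[0, :, :, i], w[:, :, i, o], mode='same')
    return out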
TensorFlow 2.7, Keras 2.7
I am trying to use an existing TFHub model as a layer in my model. I wrapped it with a Keras layer, but I have probably missed something around the batch size. I wrote a simplified version of it below.
The model below receives [BATCH_SIZE, 224, 224, 3], uses a TFHub model to generate one representation and another simple layer to generate a second representation. It then concatenates both, passes the result through a Dense layer, and outputs [BATCH_SIZE, 10].
It works fine with batch size = 1, but with batch size > 1, predict and evaluate work while fit fails with an error about the last Dense layer receiving an input of the wrong size.
Simple model and call code:
import numpy as np
import tensorflow as tf
import tensorflow_hub as tfhub
from tensorflow.keras.layers import Input, Conv2D, Dense
from tensorflow.keras.models import Model

def simple_model():
    input = Input(shape=(224, 224, 3))
    representation = Conv2D(1, (224, 224))(input)

    # Prepare input for the depth estimation model
    resized_for_midas = tf.image.resize(input, (384, 384))
    transposed = tf.transpose(resized_for_midas, [0, 3, 1, 2])
    depth_estimation = tfhub.KerasLayer('https://tfhub.dev/intel/midas/v2/2',
                                        signature='serving_default',
                                        tags=['serve'])(transposed)
    # Add a trailing channel so the shape becomes [batch_size, 384, 384, 1]
    depth_estimation_reshaped = tf.expand_dims(depth_estimation, axis=-1)
    # Repeat the single depth channel 3 times to match the ResNet input,
    # using a convolution layer
    depth_estimation_repmat = Conv2D(3, (3, 3), padding='same')(depth_estimation_reshaped)
    depth_estimation_resized = tf.image.resize(depth_estimation_repmat, (224, 224))
    depth_estimation_representation = Conv2D(1, (224, 224))(depth_estimation_resized)

    # Concatenate representations
    representation_full = tf.concat([representation, depth_estimation_representation], axis=-1)
    flat = tf.reshape(representation_full, (-1, representation_full.shape[-1]))

    # Outputs
    output = Dense(10, input_shape=(-1, flat.shape[-1]), activation='linear')(flat)

    model = Model(inputs=input, outputs=output)
    return model
model = simple_model()
model.compile(loss='mae')
batch_size = 1
model(np.random.rand(batch_size, 224, 224, 3)) # Works
model.evaluate(np.random.rand(batch_size, 224, 224, 3), np.random.rand(batch_size, 10)) # Works
model.fit(np.random.rand(batch_size, 224, 224, 3), np.random.rand(batch_size, 10)) # Works
batch_size = 2
model(np.random.rand(batch_size, 224, 224, 3)) # Works
model.evaluate(np.random.rand(batch_size, 224, 224, 3), np.random.rand(batch_size, 10)) # Works
model.fit(np.random.rand(batch_size, 224, 224, 3), np.random.rand(batch_size, 10)) # Fails
Error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/dani/projects/venv/lib/python3.8/site-packages/wandb/integration/keras/keras.py", line 150, in new_v2
    return old_v2(*args, **kwargs)
  File "/home/dani/projects/venv/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/home/dani/projects/venv/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 58, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Matrix size-incompatible: In[0]: [2,2], In[1]: [1,10]
  [[node gradient_tape/model_1/dense_1/MatMul/MatMul_1
  (defined at /home/dani/projects/venv/lib/python3.8/site-packages/keras/optimizer_v2/optimizer_v2.py:464)]]
  [Op:__inference_train_function_30163]

Errors may have originated from an input operation.
Input Source operations connected to node gradient_tape/model_1/dense_1/MatMul/MatMul_1:
  In[0] model_1/tf.reshape_1/Reshape (defined at /home/dani/projects/venv/lib/python3.8/site-packages/keras/layers/core/tf_op_layer.py:261)
  In[1] gradient_tape/mean_absolute_error/sub/Reshape:
@Dani This is because the tfhub.KerasLayer you are using in your code is built for a single image.
It is clearly mentioned on the TFHub page https://tfhub.dev/intel/midas/v2/2:
Convolutional neural network for monocular depth estimation from a
single RGB image.
Also, check the model.summary() output, which shows the discrepancy:
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 224, 224, 3 0 []
)]
tf.image.resize (TFOpLambda) (None, 384, 384, 3) 0 ['input_1[0][0]']
tf.compat.v1.transpose (TFOpLa (None, 3, 384, 384) 0 ['tf.image.resize[0][0]']
mbda)
keras_layer (KerasLayer) (1, 384, 384) 0 ['tf.compat.v1.transpose[0][0]']
tf.expand_dims (TFOpLambda) (1, 384, 384, 1) 0 ['keras_layer[0][0]']
conv2d_1 (Conv2D) (1, 384, 384, 3) 30 ['tf.expand_dims[0][0]']
tf.image.resize_1 (TFOpLambda) (1, 224, 224, 3) 0 ['conv2d_1[0][0]']
conv2d (Conv2D) (None, 1, 1, 1) 150529 ['input_1[0][0]']
conv2d_2 (Conv2D) (1, 1, 1, 1) 150529 ['tf.image.resize_1[0][0]']
tf.concat (TFOpLambda) (1, 1, 1, 2) 0 ['conv2d[0][0]',
'conv2d_2[0][0]']
tf.reshape (TFOpLambda) (1, 2) 0 ['tf.concat[0][0]']
dense (Dense) (1, 10) 30 ['tf.reshape[0][0]']
==================================================================================================
Total params: 301,118
Trainable params: 301,118
Non-trainable params: 0
__________________________________________________________________________________________________
Please check the gist here
I guess you need to find a layer that supports multiple images as a batch. Thanks!
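If you do want to keep this particular TFHub model, one possible workaround (just a sketch, untested, assuming the layer accepts one transposed image of shape (1, 3, 384, 384) at a time, as in your summary) is to run it per image with tf.map_fn and stack the results back into a batch:

import tensorflow as tf
import tensorflow_hub as tfhub

midas = tfhub.KerasLayer('https://tfhub.dev/intel/midas/v2/2',
                         signature='serving_default', tags=['serve'])

def batched_depth_estimation(transposed):
    # transposed: [batch_size, 3, 384, 384]; call the single-image model on
    # each element and stack the [384, 384] outputs back into a batch.
    return tf.map_fn(
        lambda img: tf.squeeze(midas(tf.expand_dims(img, 0)), axis=0),
        transposed,
        fn_output_signature=tf.float32)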
I am having trouble understanding how 2D conv calculations are done on 4D inputs. Basically, this is the situation: I have an image with height, width, channels = 128, 128, 103. I want each of these 103 channels to be processed individually, as if I were feeding them into the network one by one. Would the following line work?
import tensorflow.keras
from tensorflow.keras.layers import Conv2D
model1 = tensorflow.keras.models.Sequential()
model1.add(Conv2D(1, kernel_size=(3,3), input_shape = (128, 128,103,1), padding='same'))
I want to avoid splitting the image and feeding it into the network as 103 batches of (128, 128, 1).
As explained in the documentation: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D?version=nightly
4+D tensor with shape: batch_shape + (channels, rows, cols) if data_format='channels_first' or
4+D tensor with shape: batch_shape + (rows, cols, channels) if data_format='channels_last'.
(by default: data_format='channels_last'.)
You are passing a 5D tensor of shape (batch_shape, 128, 128, 103, 1).
I suggest you reshape your tensor so that it has a shape like (None, 128, 128, 103).
Also, please change input_shape=(128, 128, 103, 1) to input_shape=(128, 128, 103).
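For example, a minimal sketch of the suggested change (assuming your data is stored as an array of shape (num_samples, 128, 128, 103)):

import tensorflow as tf
from tensorflow.keras.layers import Conv2D

model1 = tf.keras.models.Sequential()
# The 103 bands become the input channels of a single 4D tensor
# (batch, 128, 128, 103) instead of a 5D one.
model1.add(Conv2D(1, kernel_size=(3, 3), input_shape=(128, 128, 103), padding='same'))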
I am building a keras UNET model for 3D image segmentation.
Image shape 240, 240, 150
The input shape is 240, 240, 150, 4, 335 >> training data
The output shape should be 240, 240, 150, 335 >> training labels
I am using Conv3D, MaxPooling3D, Conv3DTranspose, and concatenate layers to build the model
I am facing this error during model building, where I am doing the upsampling:
ValueError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling3d_3/MaxPool3D' (op: 'MaxPool3D') with input shapes: [?,1,60,60,128].
I searched for some solutions and found suggestions to set padding='same' on the layers and K.set_image_data_format('channels_last').
With this, I faced a new error when doing the concatenation after the upsampling:
ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 30, 30, 18, 256), (None, 30, 30, 19, 256)]
I am currently looping between those two errors and can't figure out what the exact issue is, nor how to solve it.
Here is the code where I am building the model:
def build_unet_model(input_shape):
    inputs = Input(input_shape)

    conv1 = create_shared_convolution(inputs, 32, config.KERNEL_SIZE)
    block1 = down_convolution(conv1, config.POOL_SIZE)
    conv2 = create_shared_convolution(block1, 64, config.KERNEL_SIZE)
    block2 = down_convolution(conv2, config.POOL_SIZE)
    conv3 = create_shared_convolution(block2, 128, config.KERNEL_SIZE)
    block3 = down_convolution(conv3, config.POOL_SIZE)
    conv4 = create_shared_convolution(block3, 256, config.KERNEL_SIZE)
    block4 = down_convolution(conv4, config.POOL_SIZE)
    conv5 = create_shared_convolution(block4, 512, config.KERNEL_SIZE)  # mid_con

    up1 = concatenate_layers(create_up_convolution(conv5, 256, config.STRIDE_SIZE), conv4)
    conv6 = create_shared_convolution(up1, 256, config.KERNEL_SIZE)
    up2 = concatenate_layers(create_up_convolution(conv6, 128, config.STRIDE_SIZE), conv3)
    conv7 = create_shared_convolution(up2, 128, config.KERNEL_SIZE)
    up3 = concatenate_layers(create_up_convolution(conv7, 64, config.STRIDE_SIZE), conv2)
    conv8 = create_shared_convolution(up3, 64, config.KERNEL_SIZE)
    up4 = concatenate_layers(create_up_convolution(conv8, 32, config.STRIDE_SIZE), conv1)
    conv9 = create_shared_convolution(up4, 32, config.KERNEL_SIZE)

    outputs = create_output_layer(conv9, 4, (1, 1, 1))
    model = Model(inputs=[inputs], outputs=[outputs])
    print(model.summary())
    return model.compile(optimizer=AdaBound(lr=1e-5, final_lr=1), loss=utils.ce_dl_loss, metrics=['accuracy'])
And these are the 5 functions used in the model building:
def create_shared_convolution(input_layer, number_of_nets, kernel_size,
                              activation='relu', padding='same',
                              kernel_initializer=initializers.random_normal(stddev=0.01)):
    conv = Conv3D(number_of_nets, kernel_size, activation=activation, padding=padding,
                  kernel_initializer=kernel_initializer)(input_layer)
    conv = Conv3D(number_of_nets, kernel_size, activation=activation, padding=padding,
                  kernel_initializer=kernel_initializer)(conv)
    return conv

def down_convolution(input_layer, pool_size):
    return MaxPooling3D(pool_size=pool_size)(input_layer)

def create_up_convolution(input_layer, number_of_nets, stride_size, padding='same',
                          kernel_initializer=initializers.random_normal(stddev=0.01)):
    return Conv3DTranspose(number_of_nets, stride_size, strides=stride_size, padding=padding,
                           kernel_initializer=kernel_initializer)(input_layer)

def concatenate_layers(layer1, layer2):
    return merge.concatenate([layer1, layer2])

def create_output_layer(input_layer, number_of_nets, kernel_size, activation='relu',
                        kernel_initializer=initializers.random_normal(stddev=0.01)):
    conv = Conv3D(number_of_nets, kernel_size, activation=activation,
                  kernel_initializer=kernel_initializer)(input_layer)
    return Activation('softmax')(conv)
Here are some explanations on both errors.
The first one is due to your feature maps becoming too small in your network. I don't have the details of your network architecture, but if you apply a lot of max-pooling layers to your input (of shape 240, 240, 150), you might end up with only one value along a dimension (probably something like (N, N, 1)). Adding another max-pooling on top of this is impossible, because you don't have enough values along that dimension to perform it. That's why it raises the negative dimension error.
The second one is probably also due to the max-pooling layers. When you apply your first max-pooling, there isn't any issue: the output shape is (120, 120, 75), so upsampling it would give you back (240, 240, 150). But the next max-pooling (applied to (120, 120, 75)) produces an output of shape (60, 60, 37), because the last dimension is odd, and upsampling that gives (120, 120, 74). Hence the mismatch. A solution to this is to add ZeroPadding layers before concatenating, wherever a dimension is odd, for example as sketched below.
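A rough sketch of that idea, using tf.keras layers (assuming channels_last and that, as in your error message, only the third spatial dimension ends up short by a known, static amount):

from tensorflow.keras.layers import ZeroPadding3D, concatenate

def concatenate_layers(upsampled, skip):
    # e.g. upsampled: (None, 30, 30, 18, 256), skip: (None, 30, 30, 19, 256);
    # pad the spatial dimension that was floored by an earlier MaxPooling3D.
    diff = skip.shape[3] - upsampled.shape[3]
    if diff > 0:
        upsampled = ZeroPadding3D(padding=((0, 0), (0, 0), (0, diff)))(upsampled)
    return concatenate([upsampled, skip])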
I'm trying to implement a special type of neural network with the Keras functional API.
But I'm having a problem with the concatenate layer:
ValueError: A "Concatenate" layer requires inputs with matching shapes
except for the concat axis. Got inputs shapes: [(None, 160, 160, 384),
(None, 160, 160, 48)]
Note: from my research, I assume that this question is not a duplicate. I've seen this question and this post (translated with Google), but the suggestions there don't seem to work (instead, they make the problem even slightly "worse").
Here's the code of the neural network up to the concat layer:
from keras.layers import Input, Dense, Conv2D, ZeroPadding2D, MaxPooling2D, BatchNormalization, concatenate
from keras.activations import relu
from keras.initializers import RandomUniform, Constant, TruncatedNormal
# Network 1, Layer 1
screenshot = Input(shape=(1280, 1280, 0), dtype='float32', name='screenshot')
# padded1 = ZeroPadding2D(padding=5, data_format=None)(screenshot)
conv1 = Conv2D(filters=96, kernel_size=11, strides=(4, 4), activation=relu, padding='same')(screenshot)
# conv1 = Conv2D(filters=96, kernel_size=11, strides=(4, 4), activation=relu, padding='same')(padded1)
pooling1 = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same')(conv1)
normalized1 = BatchNormalization()(pooling1) # https://stats.stackexchange.com/questions/145768/importance-of-local-response-normalization-in-cnn
# Network 1, Layer 2
# padded2 = ZeroPadding2D(padding=2, data_format=None)(normalized1)
conv2 = Conv2D(filters=256, kernel_size=5, activation=relu, padding='same')(normalized1)
# conv2 = Conv2D(filters=256, kernel_size=5, activation=relu, padding='same')(padded2)
normalized2 = BatchNormalization()(conv2)
# padded3 = ZeroPadding2D(padding=1, data_format=None)(normalized2)
conv3 = Conv2D(filters=384, kernel_size=3, activation=relu, padding='same',
kernel_initializer=TruncatedNormal(stddev=0.01),
bias_initializer=Constant(value=0.1))(normalized2)
# conv3 = Conv2D(filters=384, kernel_size=3, activation=relu, padding='same',
# kernel_initializer=RandomUniform(stddev=0.1),
# bias_initializer=Constant(value=0.1))(padded3)
# Network 2, Layer 1
textmaps = Input(shape=(160, 160, 128), dtype='float32', name='textmaps')
txt_conv1 = Conv2D(filters=48, kernel_size=1, activation=relu, padding='same',
kernel_initializer=TruncatedNormal(stddev=0.01), bias_initializer=Constant(value=0.1))(textmaps)
# (Network 1 + Network 2), Layer 1
merged = concatenate([conv3, txt_conv1], axis=1)
This is how the interpreter evaluates the variables conv3 and txt_conv1:
>>> conv3
<tf.Tensor 'conv2d_3/Relu:0' shape=(?, 160, 160, 384) dtype=float32>
>>> txt_conv1
<tf.Tensor 'conv2d_4/Relu:0' shape=(?, 160, 160, 48) dtype=float32>
This is how the interpreter evaluates txt_conv1 and conv3 variables after setting image_data_format to channels_first:
>>> conv3
<tf.Tensor 'conv2d_3/Relu:0' shape=(?, 384, 160, 0) dtype=float32>
>>> txt_conv1
<tf.Tensor 'conv2d_4/Relu:0' shape=(?, 48, 160, 128) dtype=float32>
Both of the layers have shapes which are not actually described in the architecture.
Is there any way to solve this problem? Maybe I didn't write the appropriate code (I'm new to Keras).
P.S. I know that the code above is not organized; I'm just testing.
Thank you!
You should change the axis to -1 in the concatenate layer since the shapes of the two tensors that you want to concatenate only differ in their last dimension. The resulting tensor will then be of shape (?, 160, 160, 384 + 48).
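Concretely, only the last line of your snippet needs to change:

# Concatenate along the channel (last) axis, where the two shapes differ
merged = concatenate([conv3, txt_conv1], axis=-1)
# merged shape: (?, 160, 160, 432)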
I'm building a model in TensorFlow using tf.layers objects. When I run the following code using tf.layers.MaxPooling2D, the output of my model does not reduce in size. I've only recently switched from Keras to using TensorFlow directly, so I presume I'm misunderstanding the usage.
import tensorflow as tf
import numpy as np
features = tf.constant(np.random.random((20,128,128,3)), dtype=tf.float32)
y_true = tf.constant(np.random.random((20,1)), dtype=tf.float32)
print('features = %s' % features)
conv = tf.layers.Conv2D(32,(2,2),padding='same')(features)
print('conv = %s' % conv)
pool = tf.layers.MaxPooling2D((2,2),(1,1),padding='same')(conv)
print('pool = %s' % pool)
# and so on ...
I see this output:
features = Tensor("Const:0", shape=(20, 128, 128, 3), dtype=float32)
conv = Tensor("conv2d/BiasAdd:0", shape=(20, 128, 128, 32), dtype=float32)
pool = Tensor("max_pooling2d/MaxPool:0", shape=(20, 128, 128, 32), dtype=float32)
I was expecting to see the output from the MaxPool layer to have a shape of (20,64,64,32).
Am I using this correctly?
If you want to downsample your feature map by a factor of 2, you should use a stride of 2.
In [1]: tf.layers.MaxPooling2D(2, 2, padding='same')(conv)
Out[1]: <tf.Tensor 'max_pooling2d/MaxPool:0' shape=(20, 64, 64, 32) dtype=float32>
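Applied to the pooling line in your snippet, that would be something like:

# A 2x2 window with a stride of 2 halves the spatial dimensions
pool = tf.layers.MaxPooling2D((2, 2), (2, 2), padding='same')(conv)
# pool = Tensor("max_pooling2d/MaxPool:0", shape=(20, 64, 64, 32), dtype=float32)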