I'm new to pytorch. Here's an architecture of a tensorflow model and I'd like to convert it into a pytorch model.
I have done most of the code, but I'm confused about a few places.
1) In tensorflow, the Conv2D function takes filters as an argument. However, in pytorch, the function takes the number of input channels and output channels as arguments. So how do I find the equivalent numbers of input and output channels, given the filter size?
2) In tensorflow, the dense layer has a parameter for the number of nodes. However, in pytorch, the same layer takes 2 different arguments (the size of the input and the size of the output), so how do I determine them based on the number of nodes?
Here's the tensorflow code.
from keras.utils import to_categorical
from keras.models import Sequential, load_model
from keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu', input_shape=X_train.shape[1:]))
model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(rate=0.25))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(rate=0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(rate=0.5))
model.add(Dense(43, activation='softmax'))
Here's my code:
import torch
import torch.nn as nn
import torch.nn.functional as F

# The network should inherit from nn.Module
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Define 2D convolution layers
        # 3: input channels, 32: output channels, 5: kernel size, 1: stride
        self.conv1 = nn.Conv2d(3, 32, 5, 1)  # 3 input channels because all images are coloured
        self.conv2 = nn.Conv2d(32, 64, 5, 1)
        self.conv3 = nn.Conv2d(64, 128, 3, 1)
        self.conv4 = nn.Conv2d(128, 256, 3, 1)
        # Dropout zeroes out part of its input with the given probability
        self.dropout1 = nn.Dropout2d(0.25)
        self.dropout2 = nn.Dropout2d(0.5)
        # Fully connected layers: input size, output size
        self.fc1 = nn.Linear(36864, 128)
        self.fc2 = nn.Linear(128, 10)

    # forward() links all the layers together
    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = F.max_pool2d(x, 2)
        x = self.dropout1(x)
        x = self.conv3(x)
        x = F.relu(x)
        x = self.conv4(x)
        x = F.relu(x)
        x = F.max_pool2d(x, 2)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = F.log_softmax(x, dim=1)
        return output
Thanks in advance!
1) In pytorch, you specify the input channels and output channels as arguments. In your first layer, the input channels will be the number of color channels in your image. After that it's always the same as the output channels of your previous layer (output channels are specified by the filters parameter in Tensorflow).
2) Pytorch is slightly annoying in that when flattening your conv outputs you'll have to calculate the shape yourself. You can either use an equation to calculate this per spatial dimension, Out = (W - F + 2P)/S + 1, or write a shape-calculating function that passes a dummy image through the conv part of the network and reads off the resulting shape. That number will be your size-of-input argument; the size-of-output argument will just be the number of nodes you want in your next fully connected layer.
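For example, here is a minimal sketch of the dummy-forward trick (assuming 3-channel 32x32 inputs; substitute the real spatial size from X_train.shape, and note the helper name is just illustrative):
import torch
import torch.nn as nn

# Pass one fake image through the conv/pool stack to find how many
# features reach the first fully connected layer.
def conv_output_size(conv_layers, input_shape=(3, 32, 32)):
    with torch.no_grad():
        dummy = torch.zeros(1, *input_shape)
        for layer in conv_layers:
            dummy = layer(dummy)
        return dummy.flatten(1).shape[1]

features = conv_output_size([
    nn.Conv2d(3, 32, 5), nn.ReLU(),
    nn.Conv2d(32, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3), nn.ReLU(),
    nn.Conv2d(64, 64, 3), nn.ReLU(), nn.MaxPool2d(2),
])
fc1 = nn.Linear(features, 256)  # 256 = the number of nodes in the Keras Dense(256) layer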
I have this class that inherits from tf.keras.Model:
import tensorflow as tf
from tensorflow.keras.layers import Dense
class Actor(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.linear1 = Dense(128, activation='relu')
        self.linear2 = Dense(256, activation='relu')
        self.linear3 = Dense(3, activation='softmax')

    # model override method
    def call(self, state):
        x = tf.convert_to_tensor(state)
        x = self.linear1(x)
        x = self.linear2(x)
        x = self.linear3(x)
        return x
it is used like this:
prob = self.actor(np.array([state]))
It works with a (5,) input and returns a (1, 3) tensor, which is what I expect:
state: (5,) data: [0.50267935 0.50267582 0.50267935 0.50268406 0.5026817 ]
prob: (1, 3) data: tf.Tensor([[0.29540768 0.3525798 0.35201252]], shape=(1, 3), dtype=float32)
However, if I pass a higher-dimensional input, it returns a higher-dimensional tensor:
state: (5, 3) data: [[0.50789109 0.49648439 0.49651666]
[0.5078905 0.49648391 0.49648928]
[0.50788815 0.49648356 0.49643452]
[0.50788677 0.4964834 0.49640713]
[0.50788716 0.49648329 0.49635237]]
prob: (1, 5, 3) data: tf.Tensor(
[[[0.34579638 0.342928 0.3112757 ]
[0.34579614 0.34292707 0.31127676]
[0.34579575 0.34292522 0.31127906]
[0.3457955 0.3429243 0.31128016]
[0.34579512 0.34292242 0.3112824 ]]], shape=(1, 5, 3), dtype=float32)
But I need it to still be (1, 3). I have never used raw Keras models implemented like this. What can I do to fix it?
Tensorflow 2.9.1 with keras 2.9.0
Looks like you are working on a reinforcement learning problem. Try adding a Flatten layer to the beginning of your model (or a Reshape layer):
class Actor(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.flatten = tf.keras.layers.Flatten()
        self.linear1 = Dense(128, activation='relu')
        self.linear2 = Dense(256, activation='relu')
        self.linear3 = Dense(3, activation='softmax')

    # model override method
    def call(self, state):
        x = tf.convert_to_tensor(state)
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.linear2(x)
        x = self.linear3(x)
        return x
Also check the design of the Dense layer:
Note: If the input to the layer has a rank greater than 2, then Dense computes the dot product between the inputs and the kernel along the last axis of the inputs and axis 0 of the kernel (using tf.tensordot). For example, if input has dimensions (batch_size, d0, d1), then we create a kernel with shape (d1, units), and the kernel operates along axis 2 of the input, on every sub-tensor of shape (1, 1, d1) (there are batch_size * d0 such sub-tensors). The output in this case will have shape (batch_size, d0, units).
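As a quick sanity check (a sketch, not part of the original answer, assuming TF 2.x eager mode), you can see the difference directly:
import numpy as np
import tensorflow as tf

state = np.random.random((1, 5, 3)).astype('float32')  # batch of one (5, 3) state

# Dense alone operates on the last axis only, so the extra dimension survives:
print(tf.keras.layers.Dense(3)(state).shape)   # (1, 5, 3)

# Flattening first collapses the state to (1, 15), giving the expected (1, 3):
flat = tf.keras.layers.Flatten()(state)
print(tf.keras.layers.Dense(3)(flat).shape)    # (1, 3)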
I have written the following multi-input Keras TensorFlow model:
import tensorflow as tf
from tensorflow.keras.layers import Input, Embedding, Conv1D, GlobalMaxPooling1D, Dense, Dropout
from tensorflow.keras.models import Model

CHARPROTLEN = 25    # size of the protein vocab
CHARCANSMILEN = 62  # size of the SMILES vocab
protein_input = Input(shape=(train_protein.shape[1:]))
compound_input = Input(shape=(train_smile.shape[1:]))
#protein layers
x = Embedding(input_dim=CHARPROTLEN+1,output_dim=128, input_length=maximum_amino_acid_sequence_length) (protein_input)
x = Conv1D(filters=32, padding="valid", activation="relu", strides=1, kernel_size=4)(x)
x = Conv1D(filters=64, padding="valid", activation="relu", strides=1, kernel_size=8)(x)
x = Conv1D(filters=96, padding="valid", activation="relu", strides=1, kernel_size=12)(x)
final_protein = GlobalMaxPooling1D()(x)
#compound layers
y = Embedding(input_dim=CHARCANSMILEN+1, output_dim=128, input_length=maximum_SMILES_length) (compound_input)
y = Conv1D(filters=32, padding="valid", activation="relu", strides=1, kernel_size=4)(y)
y = Conv1D(filters=64, padding="valid", activation="relu", strides=1, kernel_size=6)(y)
y = Conv1D(filters=96, padding="valid", activation="relu", strides=1, kernel_size=8)(y)
final_compound = GlobalMaxPooling1D()(y)
join = tf.keras.layers.concatenate([final_protein, final_compound], axis=-1)
x = Dense(1024, activation="relu")(join)
x = Dropout(0.1)(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(512, activation='relu')(x)
predictions = Dense(1,kernel_initializer='normal')(x)
model = Model(inputs=[protein_input, compound_input], outputs=[predictions])
The inputs have the following shapes:
train_protein.shape
TensorShape([5411, 1500, 1])
train_smile.shape
TensorShape([5411, 100, 1])
I get the following error message:
ValueError: One of the dimensions in the output is <= 0 due to downsampling in conv1d. Consider increasing the input size. Received input shape [None, 1500, 1, 128] which would produce output shape with a zero or negative value in a dimension.
Is this due to the Embedding layer having the incorrect output_dim? How do I correct this? Thanks.
A Conv1D layer requires the input shape (batch_size, timesteps, features), which train_protein and train_smile already have. For example, train_protein consists of 5411 samples, where each sample has 1500 timesteps, and each timestep one feature. Applying an Embedding layer to them results in adding an additional dimension, which Conv1D layers cannot work with.
You have two options. You either leave out the Embedding layer altogether and feed your inputs directly to the Conv1D layers, or you reshape your data to be (5411, 1500) for train_protein and (5411, 100) for train_smile. You can use tf.reshape, tf.squeeze, or tf.keras.layers.Reshape to reshape the data. Afterwards you can use the Embedding layer as planned. And note that output_dim determines the n-dimensional vector to which each timestep will be mapped. See also this and this.
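For example, a minimal sketch of the second option, assuming the tensors and lengths from the question (the Input shapes then drop the trailing 1 as well):
import tensorflow as tf

# Drop the trailing singleton dimension so each sample is a plain sequence of token ids.
train_protein = tf.squeeze(train_protein, axis=-1)  # (5411, 1500, 1) -> (5411, 1500)
train_smile = tf.squeeze(train_smile, axis=-1)      # (5411, 100, 1)  -> (5411, 100)

protein_input = tf.keras.layers.Input(shape=(1500,))
compound_input = tf.keras.layers.Input(shape=(100,))
# Embedding(...) now outputs (batch, timesteps, 128), which is exactly
# the (batch_size, timesteps, features) layout that Conv1D expects.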
I created this model
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPooling2D, Input, Dense
from tensorflow.keras.layers import Reshape, Flatten
from tensorflow.keras import Model
def create_DeepCAPCHA(input_shape=(28,28,1), n_prediction=1, n_class=10, optimizer='adam',
                      show_summary=True):
    inputs = Input(input_shape)
    x = Conv2D(filters=32, kernel_size=3, activation='relu', padding='same')(inputs)
    x = MaxPooling2D(pool_size=2)(x)
    x = Conv2D(filters=48, kernel_size=3, activation='relu', padding='same')(x)
    x = MaxPooling2D(pool_size=2)(x)
    x = Conv2D(filters=64, kernel_size=3, activation='relu', padding='same')(x)
    x = MaxPooling2D(pool_size=2)(x)
    x = Flatten()(x)
    x = Dense(512, activation='relu')(x)
    x = Dense(units=n_prediction*n_class, activation='softmax')(x)
    outputs = Reshape((n_prediction, n_class))(x)
    model = Model(inputs, outputs)
    model.compile(optimizer=optimizer,
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    if show_summary:
        model.summary()
    return model
I tried the model on the MNIST dataset:
import tensorflow as tf
import numpy as np
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
inputs = x_train
outputs = tf.keras.utils.to_categorical(y_train, num_classes=10)
outputs = np.expand_dims(outputs,1)
model = create_DeepCAPCHA(input_shape=(28,28,1),n_prediction=1,n_class=10)
model.fit(inputs, outputs, epochs=10, validation_split=0.1)
but it failed to converge (stuck at 10% accuracy, i.e. the same as random guessing). Yet when I remove the padding='same' argument from the Conv2D layers, it works flawlessly:
def working_DeepCAPCHA(input_shape=(28,28,1), n_prediction=1, n_class=10, optimizer='adam',
                       show_summary=True):
    inputs = Input(input_shape)
    x = Conv2D(filters=32, kernel_size=3, activation='relu')(inputs)
    x = MaxPooling2D(pool_size=2)(x)
    x = Conv2D(filters=48, kernel_size=3, activation='relu')(x)
    x = MaxPooling2D(pool_size=2)(x)
    x = Conv2D(filters=64, kernel_size=3, activation='relu')(x)
    x = MaxPooling2D(pool_size=2)(x)
    x = Flatten()(x)
    x = Dense(512, activation='relu')(x)
    x = Dense(units=n_prediction*n_class, activation='softmax')(x)
    outputs = Reshape((n_prediction, n_class))(x)
    model = Model(inputs, outputs)
    model.compile(optimizer=optimizer,
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    if show_summary:
        model.summary()
    return model
Does anyone have any idea what the problem is?
Thank you for sharing, it was really interesting to me. So I wrote the code and tested several scenarios. Note that what I'm going to say is just my guess and I'm not sure about it.
My conclusion from those tests is that no padding (valid padding) works because it produces a (1, 1, 64) output shape for the last conv layer. But if you set the padding to same, it produces (3, 3, 64), and because the next layer is a big Dense layer, this multiplies the number of the network's parameters by 9 (I expected it to somehow result in overfitting) and it seems to make it much harder for the network to find good values for the parameters. So I tried some different ways to reduce the output of the last conv layer to (1, 1, 64), listed below:
using one more conv layer + maxpooling
change the last maxpooling to pool_size of 4
using stride of 2 for one of conv layers
change the filters of last conv layer to 20
and they all worked well. Even changing the Dense units from 512 to 64 helps as well (note that even then you may occasionally get poor results, because of bad initialization I guess).
Then I changed the shape of the last conv layer's output to (2, 2, 64) and the chance of getting a good result (more than 90% accuracy) dropped (a lot of the time I got 10% accuracy).
So it seems that a lot of parameters can confuse the model. But if you want to know why the network does not overfit, I have no answer for you.
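As a quick sketch (not from the original answer), you can reproduce the shape difference described above by comparing the feature map that reaches Flatten under the two padding settings:
from tensorflow.keras import Model
from tensorflow.keras.layers import Conv2D, Input, MaxPooling2D

def conv_stack(padding):
    # Same conv/pool stack as create_DeepCAPCHA, up to the Flatten layer.
    inputs = Input((28, 28, 1))
    x = Conv2D(32, 3, activation='relu', padding=padding)(inputs)
    x = MaxPooling2D(2)(x)
    x = Conv2D(48, 3, activation='relu', padding=padding)(x)
    x = MaxPooling2D(2)(x)
    x = Conv2D(64, 3, activation='relu', padding=padding)(x)
    x = MaxPooling2D(2)(x)
    return Model(inputs, x)

print(conv_stack('valid').output_shape)  # (None, 1, 1, 64): 64 features feed Dense(512)
print(conv_stack('same').output_shape)   # (None, 3, 3, 64): 576 features, roughly 9x the weights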
I'm programming in python 3.7.5 using keras and TensorFlow 1.13.1
I want to remove the batch normalization layers from the model coded below:
from keras import backend as K
from keras.callbacks import *
from keras.layers import *
from keras.models import *
from keras.utils import *
from keras.optimizers import Adadelta, RMSprop, Adam, SGD
from keras.callbacks import ModelCheckpoint
from keras.callbacks import TensorBoard
from config import *
def ctc_lambda_func(args):
    iy_pred, ilabels, iinput_length, ilabel_length = args
    # the 2 is critical here since the first couple outputs of the RNN
    # tend to be garbage:
    iy_pred = iy_pred[:, 2:, :]  # no such influence
    return K.ctc_batch_cost(ilabels, iy_pred, iinput_length, ilabel_length)

def CRNN_model(is_training=True):
    inputShape = Input((width, height, 1), name='input')  # base on Tensorflow backend
    conv_1 = Conv2D(64, (3, 3), activation='relu', padding='same')(inputShape)
    conv_2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv_1)
    #batchnorm_2 = BatchNormalization()(conv_2)
    pool_2 = MaxPooling2D(pool_size=(2, 2))(conv_2)
    conv_3 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool_2)
    conv_4 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv_3)
    #batchnorm_4 = BatchNormalization()(conv_4)
    pool_4 = MaxPooling2D(pool_size=(2, 2))(conv_4)
    conv_5 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool_4)
    conv_6 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv_5)
    pool_5 = MaxPool2D(pool_size=(2, 2))(conv_6)
    #batchnorm_6 = BatchNormalization()(conv_6)
    #bn_shape = batchnorm_6.get_shape()
    #print(bn_shape)
    #x_reshape = Reshape(target_shape=(int(bn_shape[1]), int(bn_shape[2] * bn_shape[3])))(batchnorm_6)
    #drop_reshape = Dropout(0.25, name='d1')(x_reshape)
    fl_1 = Flatten()(pool_5)
    fc_1 = Dense(256, activation='relu')(fl_1)
    #print(x_reshape.get_shape())
    #print(fc_1.get_shape())
    bi_LSTM_1 = Bidirectional(LSTM(256, return_sequences=True, kernel_initializer='he_normal'), merge_mode='sum')(fc_1)
    bi_LSTM_2 = Bidirectional(LSTM(128, return_sequences=True, kernel_initializer='he_normal'), merge_mode='concat')(bi_LSTM_1)
    #drop_rnn = Dropout(0.3, name='d2')(bi_LSTM_2)
    fc_2 = Dense(label_classes, kernel_initializer='he_normal', activation='softmax')(bi_LSTM_2)
    base_model = Model(inputs=[inputShape], outputs=fc_2)
    labels = Input(name='the_labels', shape=[label_len], dtype='float32')
    input_length = Input(name='input_length', shape=[1], dtype='int64')
    label_length = Input(name='label_length', shape=[1], dtype='int64')
    loss_out = Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')([fc_2, labels, input_length, label_length])
    if is_training:
        return Model(inputs=[inputShape, labels, input_length, label_length], outputs=[loss_out]), base_model
    else:
        return base_model
but I get this error:
Traceback (most recent call last):
File "C:/Users/Babak/PycharmProjects/CRNN-OCR/captcha-recognition-master1/captcha-recognition-master/training.py", line 79, in <module>
model, base_model = CRNN_model(is_training=True)
File "C:\Users\Babak\PycharmProjects\CRNN-OCR\captcha-recognition-master1\captcha-recognition-master\model.py", line 51, in CRNN_model
bi_LSTM_1 = Bidirectional(LSTM(256, return_sequences=True, kernel_initializer='he_normal'), merge_mode='sum')(fc_1)
File "C:\Program Files\Python37\lib\site-packages\keras\layers\wrappers.py", line 437, in __call__
return super(Bidirectional, self).__call__(inputs, **kwargs)
File "C:\Program Files\Python37\lib\site-packages\keras\engine\base_layer.py", line 446, in __call__
self.assert_input_compatibility(inputs)
File "C:\Program Files\Python37\lib\site-packages\keras\engine\base_layer.py", line 342, in assert_input_compatibility
str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer bidirectional_1: expected ndim=3, found ndim=2
Process finished with exit code 1
How can I remove the batch norm layers, which are commented out above? Note that I already removed the dropout layers manually without any problem, so assume dropout is removed. But I have a problem removing the batch normalization layers.
As the error message says, LSTM layers expect 3D input tensors, but Dense outputs only 2D. Many possible fixes exist, but not all will work equally well:
Conv2D outputs 4D tensors, shaped (samples, height, width, channels)
LSTM expects input shaped (samples, timesteps, channels)
Thus, you need to somehow transform the (height, width) dimensions into timesteps
In existing research, image data is flattened and treated sequentially - however, channels remain untouched. Thus, a viable approach is to use Reshape to yield a 3D tensor shaped (samples, height*width, channels). Finally, as Dense cannot work with 3D data, you'll need the TimeDistributed wrapper that'll apply the same Dense weights to dim 1 of input - i.e. to timesteps:
pool_shapes = K.int_shape(pool_5)
fl_1 = Reshape((pool_shapes[1] * pool_shapes[2], pool_shapes[3]))(pool_5)
fc_1 = TimeDistributed(Dense(256, activation='relu'))(fl_1)
Lastly, return_sequences=True outputs a 3D tensor, which your output Dense cannot handle - so either use return_sequences=False to output 2D, or insert a Flatten before the Dense.
I have an issue with Recurrentshop and Keras. I am trying to use Concatenate and multidimensional tensors in a Recurrent Model, and I get a dimension issue regardless of how I arrange the Input, shape and batch_shape.
Minimal code:
import numpy as np
from keras.layers import *
from keras.models import *
from recurrentshop import *
from keras.layers import Concatenate
input_shape=(128,128,3)
x_t = Input(shape=(128,128,3,))
h_tm1 = Input(shape=(128,128,3, ))
h_t1 = Concatenate()([x_t, h_tm1])
last = Conv2D(3, kernel_size=(3,3), strides=(1,1), padding='same', name='conv2')(h_t1)
# Build the RNN
rnn = RecurrentModel(input=x_t, initial_states=[h_tm1], output=last, final_states=[last], state_initializer=['zeros'])
x = Input(shape=(128,128,3, ))
y = rnn(x)
model = Model(x, y)
model.predict(np.random.random((1, 128, 128, 3)))
Error:
ValueError: Shape must be rank 3 but it is rank 4 for 'recurrent_model_1/concatenate_1/concat' (op:ConcatV2) with input shapes: [?,128,3], [?,128,128,3], [].
Please help.
Try this (the changed lines are commented):
from recurrentshop import *
from keras.layers import Concatenate
x_t = Input(shape=(128, 128, 3,))
h_tm1 = Input(shape=(128, 128, 3,))
h_t1 = Concatenate()([x_t, h_tm1])
last = Conv2D(3, kernel_size=(3, 3), strides=(1, 1), padding='same', name='conv2')(h_t1)
rnn = RecurrentModel(input=x_t,
                     initial_states=[h_tm1],
                     output=last,
                     final_states=[last],
                     state_initializer=['zeros'])
x = Input(shape=(1, 128, 128, 3,)) # a series of 3D tensors -> 4D
y = rnn(x)
model = Model(x, y)
model.predict(np.random.random((1, 1, 128, 128, 3))) # a batch of x -> 5D