Transfer an LSTM model from CPU to GPU - python

I have a very simple LSTM model which I've built in TensorFlow, and it works on CPU. However, I want to run this model on GPU. In PyTorch I would define the device and move the model to it, but for TensorFlow I have no idea why it doesn't work. Do you have any suggestions for me? Thanks
model = Sequential()
model.add(LSTM(64, activation='relu', input_shape=(X_train.shape[1], X_train.shape[2]), return_sequences=True))
model.add(LSTM(32, activation='relu', return_sequences=False))
model.add(Dropout(0.1))
model.add(Dense(Y_train.shape[1], kernel_regularizer='l2'))
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=50)
opt = keras.optimizers.Adam(learning_rate=0.0008)
model.compile(optimizer=opt, loss='mse')
#model.summary()
# This is the line that fails: `device` / `.to(device)` are PyTorch idioms, not Keras fit() arguments
history = model.fit(X_train, Y_train, epochs=2, batch_size=100, validation_data=(X_val, Y_val), callbacks=[callback], verbose=1, device).to(device)

In TensorFlow, models run their computations on the GPU by default when one is available; this is stated in the official documentation.
Is there some kind of error that shows up when you run your model? This should work just fine on a GPU as well as on a CPU.
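If you want to control placement explicitly (the TensorFlow counterpart of PyTorch's .to(device)), a minimal sketch, reusing the model, callback and arrays defined above, could look like this:
import tensorflow as tf
# Confirm that TensorFlow actually sees a GPU; an empty list means training falls back to CPU
print(tf.config.list_physical_devices('GPU'))
# Optionally pin building/training to the first GPU; note that fit() takes no device argument
with tf.device('/GPU:0'):
    history = model.fit(X_train, Y_train, epochs=2, batch_size=100,
                        validation_data=(X_val, Y_val), callbacks=[callback], verbose=1)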

Related

Keras classification model with pure numpy classification layer

I have a multiclass (108 classes) classification model to which I want to apply transfer learning at the classification layer. I want to deploy this model on a device with low computing resources (a Raspberry Pi), so I thought of implementing the classification layer in pure NumPy instead of using Keras or TF.
Below is my original model.
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import LSTM, GRU, Embedding, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping
model = Sequential()
model.add(Embedding(108, 50, input_length=10))
model.add((LSTM(32, return_sequences=False)))
model.add(Dense(108, activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer=Adam(lr=0.001), metrics=['accuracy'])
model.summary()
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=3)
history = model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.5, callbacks=[es]).history
I split this model into two parts, encoder and decoder, as follows. The decoder is the classification layer which I want to convert into a NumPy model and then use for on-device transfer learning later.
encoder = Sequential([
    Embedding(108, 50, input_length=10),
    GRU(32, return_sequences=False)
])
decoder = Sequential([
    Dense(108, activation="softmax")
])
model = Model(inputs=encoder.input, outputs=decoder(encoder.output))
model.compile(loss="categorical_crossentropy", optimizer=Adam(lr=0.001), metrics=['accuracy'])
model.summary()
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=3)
history = model.fit(X_train, y_train, epochs=50, batch_size=32, validation_split=0.5, callbacks=[es]).history
I have a few questions about this approach.
The only way I know to train this model is to first train the encoder and decoder together, and then train the NumPy classification layer on the trained encoder's outputs.
1. Is there any way to train the NumPy model at the same time as the encoder (without using the Keras decoder part and Model above)? I can't use Model because I can't use Keras or TF on the Raspberry Pi during the transfer learning.
2. If there is no way to train the encoder and the NumPy model at the same time, how can I use the learned decoder weights as the starting weights of the NumPy model instead of starting from random weights?
3. What is the most efficient code (or way) to implement the NumPy classification layer (decoder)? It needs to be highly efficient, as I do the transfer learning on the Raspberry Pi on incoming streaming data. (See the sketch after this question for one possible starting point.)
Once the model is trained on a reasonable amount of data, I plan to convert the encoder to TFLite and run inference with it.
I'd highly appreciate any help or guidance, as I'm new to NumPy-based NN implementations.
Thanks in advance
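A minimal sketch (not from the thread) for question 3, showing how the trained Dense weights could be copied out of the Keras decoder and its softmax forward pass re-implemented in pure NumPy; decoder and encoder refer to the objects above, everything else is illustrative:
import numpy as np
# Copy the learned weights out of the Keras Dense layer (question 2)
W, b = decoder.layers[0].get_weights()   # W: (32, 108), b: (108,)

def numpy_softmax(z):
    # Numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def numpy_decoder(h, W, b):
    # Same computation as Dense(108, activation="softmax") applied to encoder outputs h
    return numpy_softmax(h @ W + b)

# probs = numpy_decoder(encoder.predict(X_train), W, b)   # shape (n_samples, 108)
For the on-device updates, the gradient of the categorical cross-entropy with respect to W for a batch of encoder outputs h is h.T @ (probs - y_true), up to averaging over the batch (and the column sums of probs - y_true for b), so a plain NumPy SGD loop is enough for the transfer-learning step.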

Tensorflow: Train Keras model using GPU

I know there are a lot of related questions, but they are outdated; most of them even deal with TensorFlow 1.
I have 1 GPU (GeForce 960) which is recognized by TensorFlow, so the installation was successful.
I'm not sure if this is the right way to do it, but this is how I train a Keras-model:
def create_model():
    model = Sequential()
    model.add(Conv2D(128, (3,3), padding="valid"))
    model.add(layers.BatchNormalization())
    model.add(layers.Activation(activations.relu))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(layers.Dense(10, activation="softmax"))
    return model

strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0"])
with strategy.scope():
    model = create_model()
    model.compile(loss="categorical_crossentropy",
                  optimizer="adam",
                  metrics=["acc"])

train_dataset, test_dataset = get_dataset()
model.fit(train_dataset,
          epochs=20,
          verbose=1,
          validation_data=test_dataset)
But I get a lot of problems:
1. The exact same code is equally fast when I turn off the strategy part.
2. I always get this "warning": BaseCollectiveExecutor::StartAbort Out of range: End of sequence
3. I found out that when I run this code with the strategy part turned off, in a different Anaconda environment which does not have GPU support (CUDA etc.), it is way slower. So is the GPU automatically used when you are in a GPU-supporting environment (because, as stated in 1., it is equally fast without the strategy part)?
Is this the right way to use my GPU? If not, what is the right way?
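Not an answer from the thread, but one way to check what is actually happening is to ask TensorFlow to log device placement; with a single GPU, MirroredStrategy adds nothing, and a plain compile/fit already runs on the visible GPU:
import tensorflow as tf
# Log the device every op is placed on (call this before building the model)
tf.debugging.set_log_device_placement(True)

model = create_model()
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["acc"])
train_dataset, test_dataset = get_dataset()
model.fit(train_dataset, epochs=20, verbose=1, validation_data=test_dataset)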

Text summarization

I built a model for text summarization. I created a small document (text file) with its summary and trained the model on it, then created the same type of document for testing; the training and test documents are very similar but contain different data.
For example, the training document contains:
name : train
family name : train
The test document:
name : test
family name : test
I was hoping that after training, the model would remember the structure of the important sentences, and indeed after testing I got an accuracy of 100%.
The problem is that when I train the model on another document, the previous test gives a lower accuracy; it's as if the model forgets the previous training.
here is my model:
model = Sequential()
model.add(Embedding(200,64, input_length=max_sent_length))
model.add(Conv1D(filters=64, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
for i in range(0, len(xtrains)):
    model.fit(xtrains[i], ytrains[i], epochs=200, batch_size=64, shuffle=False)
I've searched about this, and the answers I found say that refitting the model doesn't reset its weights, so I'm wondering why, whenever I train the model on new documents, I get a lower accuracy on the previous tests, whereas at the beginning I got an accuracy of 100%.
How can I solve this problem?
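One common mitigation (my suggestion, not from the question) is to train on all documents in a single fit call instead of 200 epochs per document, so no single document is the last thing the weights saw; this sketch assumes all xtrains[i] are padded to the same max_sent_length:
import numpy as np
# Combine every document into one training set so each epoch sees all of them
X_all = np.concatenate(xtrains, axis=0)
y_all = np.concatenate(ytrains, axis=0)
# Shuffling across documents keeps the model from overfitting to whichever document came last
model.fit(X_all, y_all, epochs=200, batch_size=64, shuffle=True)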

Using Tensorflow 2.0 and eager execution without Keras

So this question might stem from a lack of knowledge about TensorFlow, but I am trying to build a multilayer perceptron with TensorFlow 2.0 without Keras.
The reason is that my machine learning course requires that we do not use Keras. Why, you might ask? I am not sure.
I have already implemented our model in TensorFlow 2.0 with Keras' ease, and now I want to do the exact same thing without Keras.
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=784))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(5, activation='softmax'))
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(),
              metrics=['accuracy'])
X_train = X[:7000]
y_train = tf.keras.utils.to_categorical(y[:7000], num_classes=5)
X_dev = X[7000:]
y_dev = tf.keras.utils.to_categorical(y[7000:], num_classes=5)
model.fit(X_train, y_train,
          epochs=100,
          batch_size=128)
score = model.evaluate(X_dev, y_dev, batch_size=128)
print(score)
Here is my problem. Whenever I look at the documentation for TensorFlow 2.0, even the guides on custom training use Keras.
As placeholders and sessions are a thing of the past in TensorFlow 2.0 (as I understand it), I am a bit unsure of how to structure this.
I can make tensor objects. I have the impression that I need to use eager execution and use gradient tape. But I still am unsure of how to put these things together.
Now my question is. Where should I look to get a better understanding? Which direction has the greatest descent?
Please do tell me if I am doing this stack overflow post wrong. It is my first time here.
As @Daniel Möller stated, there are tutorials for custom training and custom layers on the official TensorFlow page. As stated on the custom training page:
This tutorial used tf.Variable to build and train a simple linear model.
There is also this blog that creates custom layers and training without the Keras API. You can check this code on Google Colab, which uses CIFAR-10 with custom layers and training in the same manner.
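A minimal sketch (mine, not code from the linked tutorials) of the same kind of MLP written with plain tf.Variable and tf.GradientTape and no Keras layers; the shapes match the 784-input, 5-class model above:
import tensorflow as tf

# Weights of a 784 -> 64 -> 5 MLP held as plain variables
W1 = tf.Variable(tf.random.normal([784, 64], stddev=0.05))
b1 = tf.Variable(tf.zeros([64]))
W2 = tf.Variable(tf.random.normal([64, 5], stddev=0.05))
b2 = tf.Variable(tf.zeros([5]))
params = [W1, b1, W2, b2]

def forward(x):
    h = tf.nn.relu(tf.matmul(x, W1) + b1)
    return tf.matmul(h, W2) + b2            # logits

def train_step(x, y, lr=0.01):
    # Record the forward pass so gradients can be taken w.r.t. the variables
    with tf.GradientTape() as tape:
        logits = forward(x)
        loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
    grads = tape.gradient(loss, params)
    for p, g in zip(params, grads):
        p.assign_sub(lr * g)                # manual SGD update, no Keras optimizer
    return loss

# for epoch in range(100):
#     loss = train_step(tf.constant(X_train, dtype=tf.float32),
#                       tf.constant(y_train, dtype=tf.float32))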

Prediction differences between keras and tensorflow lite model

I've created a Keras model to recognize human activity, based on data from a mobile accelerometer:
model = Sequential()
model.add(Reshape((const.PERIOD, const.N_FEATURES), input_shape=(240,)))
model.add(Conv1D(100, 10, activation='relu', input_shape=(const.PERIOD, const.N_FEATURES)))
model.add(Conv1D(100, 10, activation='relu'))
model.add(MaxPooling1D(const.N_FEATURES))
model.add(Conv1D(160, 10, activation='relu'))
model.add(Conv1D(160, 10, activation='relu'))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(7, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
I've tested the model, and the accuracy after ten epochs is around 85-90%. However, when I convert my model to TF Lite and run the interpreter in my Android app, the predictions are horrible. What can be the reason for such bad results? Is there some incompatibility along the Keras -> TensorFlow -> TensorFlow Lite line? Should I run it another way, using something like a servlet + the Keras model?
A few suggestions:
1. Try to visualize your tflite graph with https://lutzroeder.github.io/netron/ and see if there's anything unexpected.
2. Try to debug with TensorFlow Lite's Python API first: feed the same input to the Keras model and the tflite model and compare the output tensors. (A minimal sketch follows below.)
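A minimal sketch of the second suggestion, assuming the converted model was saved as model.tflite and sample is one input window shaped (1, 240) as the Keras model expects:
import numpy as np
import tensorflow as tf

sample = np.random.rand(1, 240).astype(np.float32)   # placeholder input window

# Keras prediction
keras_out = model.predict(sample)

# TFLite prediction through the Python interpreter
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.set_tensor(input_details[0]['index'], sample)
interpreter.invoke()
tflite_out = interpreter.get_tensor(output_details[0]['index'])

# The two outputs should be nearly identical; a large gap points at the conversion,
# otherwise the problem is more likely preprocessing on the Android side
print(np.max(np.abs(keras_out - tflite_out)))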
