How to enable GPU for SetFit?

I am following this tutorial for SetFit: https://www.philschmid.de/getting-started-setfit
When the training is running, it is using my CPU instead of my GPU. Is there a way I can enable it?
Here is the main part of the code:
from setfit import SetFitModel, SetFitTrainer
from sentence_transformers.losses import CosineSimilarityLoss

# Load a SetFit model from the Hub
model_id = "sentence-transformers/all-mpnet-base-v2"
model = SetFitModel.from_pretrained(model_id)

# Create trainer
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    loss_class=CosineSimilarityLoss,
    metric="accuracy",
    batch_size=64,
    num_iterations=20,  # The number of text pairs to generate for contrastive learning
    num_epochs=1,       # The number of epochs to use for contrastive learning
)

# Train and evaluate
trainer.train()
metrics = trainer.evaluate()

If your training is running on the CPU rather than the GPU, it is because either:
you installed the CPU-only build of PyTorch, or
the installed CUDA/cuDNN version is not compatible with your PyTorch build, and training falls back to the CPU instead of the GPU.
In essence, it has nothing to do with the SetFit model itself.
A working example for me in recent projects is:
(1) pip/pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
(2) pip install transformers==4.22.0
Note that you may have to uninstall PyTorch first before reinstalling it: pip uninstall torch (the package is named torch, not pytorch).
In order to make sure your GPU is visible, a short check would suffice:
training_device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
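For a fuller diagnostic, the short sketch below prints the installed PyTorch version, the CUDA version it was built against, and the name of the visible GPU, if any:

import torch

# Which PyTorch build is installed, and against which CUDA version was it compiled?
print("PyTorch version:", torch.__version__)
print("Built with CUDA:", torch.version.cuda)  # None means a CPU-only build

# Is a CUDA device actually visible at runtime?
if torch.cuda.is_available():
    print("Using GPU:", torch.cuda.get_device_name(0))
else:
    print("No GPU visible; training will run on the CPU.")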

Related

How to use integrated GPU while training XGBoost model?

First of all, I want to say that I am new to this field and don't know much.
I have the following laptop: a Dell Vostro 15 5510 with an "Intel(R) Iris(R) Xe Graphics" GPU.
I have installed xgboost with the following command:
pip install xgboost
Now I am trying to train a model on the GPU:
param = {'objective': 'multi:softmax', 'num_class':22}
param['tree_method'] = 'gpu_hist'
bst = xgb.train(param, dtrain, 50, verbose_eval=True, evals=eval_set)
but it throws the following error:
XGBoostError: [11:16:53] C:/buildkite-agent/builds/buildkite-windows-cpu-autoscaling-group-i-0ac76685cf763591d-1/xgboost/xgboost-ci-windows/src/gbm/gbtree.cc:611: Check failed: common::AllVisibleGPUs() >= 1 (0 vs. 1) : No visible GPU is found for XGBoost.
I have tried to execute the same code on Google Colab and it worked perfectly well. That's why I think my laptop may need a dedicated GPU instead of an integrated one. I don't think it is an installation problem, because https://xgboost.readthedocs.io/en/stable/install.html#python says that pip install xgboost has GPU support on Windows.
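For context, XGBoost's gpu_hist tree method requires a CUDA-capable NVIDIA GPU, which the integrated Intel Iris Xe is not, hence the "No visible GPU is found" check failure. A minimal CPU fallback sketch, reusing the question's parameters and assuming dtrain and eval_set are defined as before:

import xgboost as xgb

# 'gpu_hist' needs a CUDA-capable (NVIDIA) GPU; on an integrated Intel GPU the
# CPU histogram method 'hist' is the closest equivalent.
param = {'objective': 'multi:softmax', 'num_class': 22}
param['tree_method'] = 'hist'  # CPU fallback for 'gpu_hist'
bst = xgb.train(param, dtrain, 50, verbose_eval=True, evals=eval_set)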

How to convert a TensorFlow 2.x model trained with Keras to .onnx format?

I use Python 3.7.4 with TensorFlow 2.0 and Keras 2.2.4-tf to train my own CNN model. Everything goes fine: I can use e.g. model.save(my_model) and then use it in other Python scripts. The problem appears when I want to use the trained model in OpenCV with its DNN module in C++. cv::dnn::readNetFromTensorflow(model.pb, model.pbtxt) takes, as you can see, two arguments, and I can't get the second .pbtxt file. So I decided to use the .onnx format because of its flexibility. The problem is that the existing keras2onnx library only takes models from TensorFlow 1.x, and I want to avoid working with that. An example of the conversion code is presented below:
import tensorflow as tf
import onnx
import keras2onnx
model = tf.keras.models.load_model(my_model_folder_path)
onnx_model = keras2onnx.convert_keras(model, model.name)
onnx.save_model(onnx_model, model_name_onnx)
Are there some other ways to convert such a model to the onnx format?
The latest version of keras2onnx (in github master) supports TensorFlow 2.
You can install it like this:
pip install git+https://github.com/microsoft/onnxconverter-common
pip install git+https://github.com/onnx/keras-onnx
You need to create a file which can hold the ONNX object; see https://github.com/onnx/tutorials/blob/master/tutorials/OnnxTensorflowExport.ipynb for a related tutorial.
import tensorflow as tf
import onnx
import keras2onnx

# Load the trained Keras model and convert it to an ONNX model in memory
model = tf.keras.models.load_model('Model.h5')
onnx_model = keras2onnx.convert_keras(model, model.name)

# Serialize the ONNX model to a file
with open("Sample_model.onnx", "wb") as file:
    file.write(onnx_model.SerializeToString())
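To verify that the export succeeded, the exported file can be loaded back and run through the ONNX checker (file name taken from the snippet above):

import onnx

# Load the exported model and validate its graph structure
onnx_model = onnx.load("Sample_model.onnx")
onnx.checker.check_model(onnx_model)
print("Sample_model.onnx is a valid ONNX model.")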

Keras create dataset from CSV without TensorFlow

In every question and tutorial I have found, tf.data.Dataset is used for CSV files, but I am not using TensorFlow; I am using PlaidML because my AMD GPU is not supported by ROCm. I have tried using the same code by doing
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"
from tensorflow import keras
but that still does not use the PlaidML backend. How do I load this dataset (https://www.kaggle.com/keplersmachines/kepler-labelled-time-series-data) into Keras without TensorFlow? Thank you. I check whether the PlaidML backend is used by looking at the output: if it says "Using plaidml.keras.backend backend", then it is using PlaidML. Also, only PlaidML recognizes my GPU, so TensorFlow would use my CPU.
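For what it's worth, a minimal sketch of one way to do this, assuming the standalone keras package (which respects KERAS_BACKEND, unlike tensorflow.keras) and pandas are installed; the local filename and the column layout (a LABEL column followed by flux features, label 2 = exoplanet) are assumptions about the Kaggle CSV:

import os
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"  # must be set before importing keras

import pandas as pd
import keras  # standalone Keras picks up the PlaidML backend; tensorflow.keras does not
from keras.models import Sequential
from keras.layers import Dense

# Load the CSV with pandas instead of tf.data
df = pd.read_csv("exoTrain.csv")  # hypothetical local filename for the Kaggle CSV
y = (df.iloc[:, 0].to_numpy() == 2).astype("float32")  # assumed LABEL column: 2 = exoplanet, 1 = not
X = df.iloc[:, 1:].to_numpy(dtype="float32")

# A small fully connected model as a placeholder architecture
model = Sequential([
    Dense(64, activation="relu", input_shape=(X.shape[1],)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32)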

Fail to find the dnn implementation for LSTM

I'm trying to run a simple LSTM model with the following code:
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.LSTM(32, input_shape=x_train_single.shape[-2:]))
model.add(tf.keras.layers.Dense(1))
model.compile(optimizer=tf.keras.optimizers.RMSprop(), loss='mae')

single_step_history = model.fit(train_data_single, epochs=EPOCHS,
                                steps_per_epoch=EVALUATION_INTERVAL)
The error happens when it tries to fit the model:
tensorflow.python.framework.errors_impl.UnknownError: [_Derived_] Fail to find the dnn implementation.
[[{{node CudnnRNN}}]]
[[sequential/lstm/StatefulPartitionedCall]] [Op:__inference_distributed_function_3107]
There is also another warning like this:
2020-02-22 19:08:06.478567: W tensorflow/core/kernels/data/cache_dataset_ops.cc:820] The calling
iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the
dataset, the partially cached contents of the dataset will be discarded. This can happen if you have
an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use
`dataset.take(k).cache().repeat()` instead.
I tried all the methods suggested in this question, but none of them work for me.
My environment is:
tensorflow-gpu 2.0
CUDA v10
CuDNN 7.6.5
Solution
OK, I found that I didn't have the latest Nvidia driver, so I upgraded it, and now it works.
Answering here for the benefit of the community, even though the user has provided the solution.
Upgrading the Nvidia driver to the latest version resolved the issue.
You can update the NVIDIA driver manually from here by selecting the product details and OS; you will have to download the most recent drivers from their website. You'll then have to run the installer and overwrite the old driver.
Try the code below, which enables memory growth on the GPU:
import tensorflow as tf

# List the GPUs TensorFlow can see; an empty list means no GPU is visible
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], enable=True)

Changing keras backend from tensorflow cpu to gpu

Is there a difference (in code) between the Keras tensorflow-cpu backend and the tensorflow-gpu backend? If I want to change TensorFlow from CPU to GPU, what code do I need to add, or what environment variables do I need to set?
From the Keras link I know that I can use tf.device, something like the code below. But what if I want the whole code, not just some part of it, to run on the GPU?
with tf.device('/gpu:0'):
    x = tf.placeholder(tf.float32, shape=(None, 20, 64))
    y = LSTM(32)(x)  # all ops in the LSTM layer will live on GPU:0

with tf.device('/cpu:0'):
    x = tf.placeholder(tf.float32, shape=(None, 20, 64))
    y = LSTM(32)(x)  # all ops in the LSTM layer will live on CPU:0
Just uninstall tensorflow-cpu (pip uninstall tensorflow) and install tensorflow-gpu (pip install tensorflow-gpu). Now tensorflow will always use your gpu(s).
If you only want to use the CPU with tensorflow-gpu, set the environment variable CUDA_VISIBLE_DEVICES so that the GPUs are invisible. Before loading tensorflow, do this in your script:
import os
os.environ["CUDA_VISIBLE_DEVICES"]=""
import tensorflow
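To confirm which devices the TensorFlow backend actually sees, a short sketch for the TF 1.x era that the question's placeholder snippet comes from:

from tensorflow.python.client import device_lib

# Lists every device TensorFlow can use; GPU entries only appear with a working
# tensorflow-gpu install and compatible CUDA drivers.
print(device_lib.list_local_devices())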
