How do I train models in TensorFlow using the GPU? The following code shows that I have 0 GPUs, even though I have a video card:
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
I just started learning TensorFlow, so I apologize in advance if I ask a stupid question.
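One thing worth checking first is whether the installed TensorFlow wheel was even built with CUDA support: a CPU-only build will never list any GPUs, no matter what hardware is present. A minimal diagnostic sketch:

```python
import tensorflow as tf

# GPUs visible to TensorFlow; an empty list usually means either a
# CPU-only TensorFlow build or a missing/incompatible CUDA + cuDNN setup.
gpus = tf.config.list_physical_devices('GPU')
print("GPUs found:", gpus)

# False here means the installed wheel was built without CUDA support,
# so no NVIDIA GPU can ever be detected by this installation.
print("Built with CUDA:", tf.test.is_built_with_cuda())
```

If the build reports no CUDA support, installing a GPU-enabled TensorFlow package with matching CUDA/cuDNN versions is usually the fix; also note that the standard CUDA builds only support NVIDIA GPUs, so a non-NVIDIA video card will not show up here.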
I am trying to build a simple multilabel text classification pipeline using BERT; the goal is to classify the content of social media posts, and any post can have more than one label (e.g., a post can be labeled both "Medications" and "Physical and Mental Health"). I am very new to BERT and was trying to follow this example I found: https://towardsdatascience.com/building-a-multi-label-text-classifier-using-bert-and-tensorflow-f188e0ecdc5d
I have some questions on how to set it up for this task.
In my Anaconda environment I previously installed TensorFlow 2.0. I ran the command "pip install bert-tensorflow" and then ran the following:
import tensorflow as tf
import tensorflow_hub as hub
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
from bert import modeling
And got this error at the step "from bert import run_classifier":
ModuleNotFoundError: No module named 'tensorflow.contrib'
I did some research and found that TensorFlow 2.0 indeed does not have the contrib module, though earlier versions do. I would appreciate some ideas on how to resolve this issue; I do not want to downgrade my TensorFlow version. Also, if anyone can point me to other examples of multilabel text classification with BERT, I would greatly appreciate it (so far the ones I've seen are not very easy to follow).
Thank you!
You can avoid this error by selecting TensorFlow 1.x with the Colab magic command before your code:
%tensorflow_version 1.x
import tensorflow as tf
tf.__version__ # to check the tensorflow version
Output:
TensorFlow 1.x selected.
'1.15.2'
This line switches your kernel runtime to TensorFlow 1.15, and you can then import the libraries and run your code without the error:
import tensorflow as tf
import tensorflow_hub as hub
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
from bert import modeling
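Whichever TensorFlow version you settle on, the multilabel part of the task mostly comes down to encoding each post's labels as a multi-hot vector and using a sigmoid output with a binary cross-entropy loss (rather than softmax with categorical cross-entropy). A minimal sketch of the target encoding, independent of BERT — the label set here is just illustrative:

```python
# Multi-hot encoding for multilabel targets: each post can carry
# several labels at once, so each target is a 0/1 vector, not a class id.
LABELS = ["Medications", "Physical and Mental Health", "Other"]  # example label set
LABEL_INDEX = {name: i for i, name in enumerate(LABELS)}

def multi_hot(post_labels):
    """Turn a list of label names into a fixed-length 0/1 vector."""
    vec = [0.0] * len(LABELS)
    for name in post_labels:
        vec[LABEL_INDEX[name]] = 1.0
    return vec

# A post tagged with two labels gets two 1s in its target vector.
print(multi_hot(["Medications", "Physical and Mental Health"]))  # → [1.0, 1.0, 0.0]
```

The matching model head would then be a Dense layer with len(LABELS) units and sigmoid activation, trained with binary_crossentropy, so that each label is predicted independently of the others.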
I have used "from tensorflow_addons.layers import CRF" for the CRF implementation in TensorFlow. I am familiar with the workaround using the keras-contrib library, but I was wondering: why is there no implementation of the CRF loss in TF 2 itself?
In every question and tutorial I have found, tf.data.Dataset is used for CSV files, but I am not using TensorFlow; I am using PlaidML because my AMD GPU is not supported by ROCm. I have tried using the same code by doing
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"
from tensorflow import keras
but that still does not use the PlaidML backend. How do I load this dataset: https://www.kaggle.com/keplersmachines/kepler-labelled-time-series-data into Keras without TensorFlow? Thank you. I check whether the PlaidML backend is in use by looking at the output: if it says "Using plaidml.keras.backend backend", then it is using PlaidML. Also, only PlaidML recognizes my GPU, so TensorFlow would use my CPU.
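Since tf.data is only needed when TensorFlow is the backend, a CSV like this can be loaded with plain NumPy and passed to Keras as arrays — model.fit accepts NumPy arrays regardless of backend. A sketch, assuming the layout of that Kaggle dataset (a header row, the label in the first column, flux values in the remaining columns; the filename below is an assumption based on the dataset's archive):

```python
import numpy as np

def load_kepler_csv(path):
    """Load a Kepler-style CSV: header row, label in column 0, features after.

    Returns (X, y) as NumPy arrays that can be passed straight to
    Keras model.fit(), with no tf.data involved.
    """
    data = np.loadtxt(path, delimiter=",", skiprows=1)
    y = data[:, 0]   # LABEL column
    X = data[:, 1:]  # FLUX.* columns
    return X, y

# Usage (filename assumed from the Kaggle archive):
# X_train, y_train = load_kepler_csv("exoTrain.csv")
# model.fit(X_train, y_train, epochs=5)
```

For very large CSVs, np.loadtxt can be slow; pandas.read_csv followed by .to_numpy() is a common faster alternative, but the idea is the same — Keras only ever sees plain arrays.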
I have a set of weights from a model developed in keras 1.10 with theano backend.
Right now I would like to use those weights in a more recent version of keras (2.2.4).
Simply loading them in the more recent version of Keras does not give the same results (in 2.2.4 the results are not accurate). I could retrain the model, but that would take some time. Is there a way to reuse the weights?
I tried loading the weights from 1.10 into 2.2.4 and it did not work. I also found a gist for updating weights from Keras 1.x to 2.x:
https://gist.github.com/davecg/e33b9b29d218b5966fb8e2f617e90399
but that did not work either.
Thanks for the answers.
We are working with Keras on top of TensorFlow 1.x. Now that TF 2.0 is coming, we are thinking of switching to it and using the Keras API implementation built into TF 2.0.
But before we do so, I would like to ask you guys: Do you know whether the Keras implementation in TF 2.0 does support everything native Keras does with TF 1.0, or are there any features missing?
Moreover, will I be able to use my Keras code 1:1 with the new TF 2.0 implementation of the Keras API, or do we need to re-write parts of our existing Keras code?
If you want to use TensorFlow, then I highly recommend that you switch to the TensorFlow implementation of Keras (i.e. tf.keras), because it supports more TF features and is more efficient and better optimized than native Keras.
Actually, the Keras maintainers released a new version (2.2.5) of Keras a few days ago (after more than 10 months with no new release!), and they also recommend using tf.keras. Here are the release notes:
Keras 2.2.5 is the last release of Keras that implements the 2.2.* API. It is the last release to only support TensorFlow 1 (as well as Theano and CNTK).
The next release will be 2.3.0, which makes significant API changes and adds support for TensorFlow 2.0. The 2.3.0 release will be the last major release of multi-backend Keras. Multi-backend Keras is superseded by tf.keras.
At this time, we recommend that Keras users who use multi-backend Keras with the TensorFlow backend switch to tf.keras in TensorFlow 2.0. tf.keras is better maintained and has better integration with TensorFlow features.
The line "Multi-backend Keras is superseded by tf.keras" is a strong indicator that it is better to switch to tf.keras, especially if you are still at the beginning of your project.
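As for reusing your code 1:1: in most cases the model-building code stays the same and only the imports change. A minimal sketch of the import migration (assuming a plain Sequential model; edge cases such as direct keras.backend calls or third-party Keras extensions may need more work):

```python
# Before (multi-backend Keras):
#   from keras.models import Sequential
#   from keras.layers import Dense

# After (tf.keras, bundled with TensorFlow 2.0):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# The model definition itself is unchanged.
model = Sequential([
    Dense(16, activation="relu", input_shape=(8,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```

Running both side by side in one program is what tends to break things, so it is best to migrate all imports in a codebase at once rather than mixing keras and tensorflow.keras objects.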