Message after installing tensorflow - python

I have just installed TensorFlow on Ubuntu 16.04 for Python 3.5, since that is the preinstalled Python 3 version.
I installed it via pip3 install tensorflow-cpu. I used the CPU package because Ubuntu 16.04 does not recognize the GPU in my fairly new laptop, but that is another issue.
After trying a simple hello-world program with TensorFlow, I got the following message:
2020-11-01 09:36:51.577315: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
I don't really understand what this means. Do I have to build TensorFlow again, or can I use it for ML tasks as it is? If I do need to rebuild, how do I do that correctly with the appropriate compiler flags?
Best regards :)

You can just ignore that informational message and carry on with your ML work.
It is only telling you that you could rebuild TensorFlow so that more operations use the advanced instructions (AVX2, FMA) your CPU supports.
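If you just want to keep the console output clean, one common approach (this only hides the log line, it does not change how TensorFlow runs) is to raise TF's C++ log level before the first import:
import os
# 0 = all logs, 1 = filter out INFO, 2 = also filter WARNING, 3 = also filter ERROR
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '1'   # must be set before tensorflow is first imported
import tensorflow as tf
print(tf.constant('Hello, TensorFlow!'))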

Related

My TensorFlow doesn't detect my GPU and uses my CPU (machine learning)

I am trying to train a machine learning model with my RTX 3080 10G, and I am using Ubuntu 20.04 on my computer.
When I launch my application, it says:
2021-07-02 10:29:53.581609: I tensorflow/core/platform/profile_utils/cpu_utils.cc:114] CPU Frequency: 2496000000 Hz
After that, I tried to print len(tf.config.list_physical_devices('GPU')), but it gives me "0".
I installed Conda, but I'm not sure it's working (same for cuDNN).
To be honest, I'm quite lost now.
I had the same issue. Did you install the CUDA Toolkit, and is it the right version for your GPU?
Also, did you do
pip install tensorflow-gpu
Try this thread: Tensorflow not running on GPU
It's probably a compatibility issue. Make sure your TF version is compatible with the Python, CUDA and cuDNN versions installed on your PC, and make sure that Microsoft Visual Studio is installed.
I'm currently using TF 2.4, Python 3.8, CUDA 11.0 and cuDNN 8.0. Give this combination a try and see if it works.
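A quick way to sanity-check what your installed binary was built against (a rough sketch, assuming a recent TF 2.x release where tf.sysconfig.get_build_info() is available) is:
import tensorflow as tf
print(tf.__version__)                                      # installed TF version
print(tf.config.list_physical_devices('GPU'))              # an empty list means no GPU is visible
print(tf.sysconfig.get_build_info().get('cuda_version'))   # CUDA version the binary was built for
print(tf.sysconfig.get_build_info().get('cudnn_version'))  # cuDNN version the binary was built for
If the printed CUDA/cuDNN versions differ from what you installed system-wide, that mismatch is often the reason the GPU is not detected.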

Update Tensorflow binary in virtual environment in PyCharm to use AVX2

My question is related to this one here, but I am using PyCharm and I set up my virtual environment with a Python interpreter according to this guide, page 5.
When I run my TensorFlow code, I get the warning:
Your CPU supports instructions that this TensorFlow binary was not
compiled to use: AVX2
I could ignore it, but since my model fitting is quite slow, I would like to take advantage of AVX2. However, I do not know how to update the TensorFlow binary in this PyCharm virtual environment setup to make use of it.
Anaconda/conda as the package management tool:
Assuming that you have anaconda/conda installed on your machine; if not, follow this: https://docs.anaconda.com/anaconda/install/windows/
conda create --name tensorflow_optimized python=3.7
conda activate tensorflow_optimized
# you need intel's tensorflow version that's optimized to use SSE4.1 SSE4.2 AVX AVX2 FMA
conda install tensorflow-mkl -c anaconda
# run this to check if the installed version is using MKL,
# which in turn uses all the optimizations that your system provides.
python -c "import tensorflow as tf; tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)"
# you should see something like this as the output.
2020-07-14 19:19:43.059486: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
pip3 as the package management tool:
py -m venv tensorflow_optimized
.\tensorflow_optimized\Scripts\activate
#once the env is activated, you need intel's tensorflow version
#that's optimized to use SSE4.1 SSE4.2 AVX AVX2 FMA
pip install intel-tensorflow
# run this to check if the installed version is using MKL,
# which in turn uses all the optimizations that your system provides.
py -c "import tensorflow as tf; tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)"
# you should see something like this as the output.
2020-07-14 19:19:43.059486: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
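Note that tf.test.is_gpu_available is deprecated in newer TensorFlow releases; on TF 2.x, something like the following (shown with the same py launcher) should trigger the same cpu_feature_guard log line while listing the detected devices:
py -c "import tensorflow as tf; print(tf.config.list_physical_devices())"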
Once you have this, you can use this env in PyCharm.
Before that, run
where python on Windows, or which python on Linux and Mac, while the env is activated; this should give you the path to the interpreter. In PyCharm,
Go to Preferences -> Project: your project name -> Project Interpreter -> click on the settings symbol -> click on Add.
Select System Interpreter -> click on ... -> this opens a popup window asking for the location of the python interpreter.
In the location path, paste the path reported by where python -> click OK.
Now you should see all the packages installed in that env.
The next time you want to select that interpreter for your project, click on the lower right corner where it says python3/python2 (your interpreter name) and select the one you need.
I'd suggest installing Anaconda as your default package manager, as it makes your dev life easier with respect to Python on a Windows machine, but you can make do with pip as well.
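If you are unsure which interpreter PyCharm actually picked up, a quick sanity check from the project's Python console is:
import sys
print(sys.executable)   # should point inside the tensorflow_optimized env you created above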
If your CPU utilization during training stays under 100% most of the time, you should not even bother getting a different TF binary.
You might not see much, if any, benefit from using AVX2 (or AVX-512, for that matter), depending on the workload you are running.
AVX2 is a set of 256-bit CPU vector instructions. Chances are, you can get at most a 2x benefit compared to 128-bit SSE instructions. Deep learning models are very much memory-bandwidth bound and would not see much benefit, if any, from switching to larger register sizes. An easy way to check: see how long your CPU utilization stays at 100% during training. If it is under 100% most of the time, then you are probably already memory-bound (or bound elsewhere). If your training runs on the GPU and the CPU is used only for data preprocessing and occasional operations, the benefit will be even less noticeable.
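As a rough way to measure this, you can sample CPU utilization from a separate process while training runs; this sketch assumes the third-party psutil package is installed:
import psutil
# sample overall CPU utilization once per second for 60 seconds while training runs in another process
samples = [psutil.cpu_percent(interval=1) for _ in range(60)]
print("mean: %.1f%%  max: %.1f%%" % (sum(samples) / len(samples), max(samples)))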
Back to answering your question. The best way to update the TF binary to get the most out of the latest CPU architecture, CUDA version, Python version, etc. is to build TensorFlow from source, which might take a few hours of your time. That is the official and most robust way of solving your issue.
If you would be satisfied with just using better CPU instructions, you can try installing third-party binaries from wherever you can find them. Installing conda and pointing the PyCharm interpreter to the conda installation would be one of the options.

Getting the .whl for tensorflow to support all CPU instruction sets

When I start jupyter-notebook I see:
I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
When I go to the TensorFlow GitHub project page intending to raise an issue about supporting all CPU instruction sets, it tells me to ask on SO first (so here I am).
Why wouldn't TensorFlow be compiled to support all CPU instruction sets?
How would I contact the .whl packager to ask them to include all CPU instruction sets? (I signed up to pypi.org, but I don't seem to be able to message people through there.)

Tensorflow CPU warning for tensorflow-gpu-nightly package

I'm receiving the following warning when I start my TensorFlow session:
Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
I have installed the GPU nightly version for Windows and have the CUDA 9.0 GPU toolkit installed. This is a CPU warning and shouldn't appear, since I have a GPU and am using it to run TensorFlow models.
Following is my GPU usage (Task Manager) while training models: [screenshot: GPU usage in Task Manager]
A TensorFlow binary always contains CPU code, whether or not it supports GPU. This warning will show up on any reasonably new CPU with pre-built TensorFlow binaries.
A GPU-enabled binary contains GPU kernels for TensorFlow OPs, so that many computation-heavy OPs can be offloaded to the GPU. But there are always some OPs that do not have GPU kernels, and above all, there is always code that runs on the CPU just to start the program.
Pre-built TensorFlow binaries are not built with instructions that only newer CPUs support, so that they can run (almost) everywhere.
The only way to get a binary that leverages all the capabilities your CPU has to offer is to build from source, either natively or cross-compiling with the proper target. Only then will these warnings be gone.
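If you are curious which OPs end up on the CPU versus the GPU in your own model, TF 2.x can log device placement; a minimal sketch:
import tensorflow as tf
tf.debugging.set_log_device_placement(True)   # log the device chosen for each op
a = tf.random.uniform((1000, 1000))
b = tf.random.uniform((1000, 1000))
c = tf.matmul(a, b)   # should be placed on /GPU:0 when a working GPU build and driver are present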

How to check whether my computer can run TensorFlow on the GPU, and how to install the TensorFlow GPU version

First, I am not sure my computer (MacBook Pro) is capable of running TensorFlow on the GPU. I have checked the System/Hardware/Graphics-Display settings; the related info is below.
Graphics: Intel Iris Pro 1536 MB
So, is it possible to run TensorFlow on the GPU with this computer's graphics capabilities?
Second question: I am using a conda environment to install TensorFlow. On the TensorFlow installation page I could only find a pip package for the CPU. How can I install the TensorFlow GPU package? tensorflow installation page
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.3.0-py3-none-any.whl
The newest version of TensorFlow does not have GPU support for macOS, as seen here.
I haven't found any way to run my application (which uses TensorFlow) with GPU support on my Mac. It has an "Intel Iris Graphics 6100" graphics card.
This report (Can I run CUDA or OpenCL on Intel Iris?) says that only NVIDIA graphics cards have CUDA support, so I likely won't be able to.
But I have installed tensorflow-gpu without problems following this guide:
https://wangpei.ink/2019/03/29/Install-TensorFlow-GPU-by-Anaconda(conda-install-tensorflow-gpu)/
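Even if a package called tensorflow-gpu installs without errors, that does not mean the GPU is actually usable; a quick check (TF 2.x API shown) is:
import tensorflow as tf
print(tf.test.is_built_with_cuda())            # was this binary built with CUDA support at all?
print(tf.config.list_physical_devices('GPU'))  # an empty list is expected on an Intel Iris Mac, since CUDA requires an NVIDIA GPU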
