Tensorflow CPU warning for tensorflow-gpu-nightly package - python

I'm receiving the following warning when I start my TensorFlow session:
Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
I have installed the GPU nightly version for Windows and have the CUDA 9.0 GPU toolkit installed. This is a CPU warning and shouldn't appear, since I have a GPU and am using it to run TensorFlow models.
Here is my GPU usage (Task Manager) while training models: [screenshot: GPU usage in Task Manager]

A TensorFlow binary always contains CPU code, whether or not it supports GPU. This warning will show up on any reasonably new CPU with pre-built TensorFlow binaries.
A GPU-enabled binary contains GPU kernels for TensorFlow ops, so many computation-heavy ops can be offloaded to the GPU. But some ops have no GPU kernel, and above all, there is always code that runs on the CPU just to start the program.
Pre-built TensorFlow binaries are deliberately not built with instructions that only newer CPUs support, so that they can run (almost) everywhere.
The only way to get a binary that leverages all the capabilities your CPU has to offer is to build from source, either natively or by cross-compiling for the proper target. Only then will these warnings be gone.
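To see which of the instruction sets named in the warning your own CPU actually advertises, you can inspect the kernel's CPU flags. A minimal sketch (Linux-only; /proc/cpuinfo does not exist on Windows or macOS, where tools like coreinfo or sysctl serve the same purpose):

```python
import pathlib

# SIMD-related flag names as they appear both in /proc/cpuinfo and in
# TensorFlow's cpu_feature_guard warning.
interesting = {"sse4_1", "sse4_2", "avx", "avx2", "avx512f", "fma"}

cpuinfo = pathlib.Path("/proc/cpuinfo")
if cpuinfo.exists():
    flags = set()
    for line in cpuinfo.read_text().splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    # The intersection is what a from-source build could take advantage of.
    print(sorted(interesting & flags))
else:
    print("no /proc/cpuinfo on this OS")
```

If the printed set contains instructions the warning lists, a from-source build could use them; if not, the warning is moot for your machine.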

Related

Message after installing tensorflow

I have just installed TensorFlow on Ubuntu 16.04 for Python 3.5, as that is the preinstalled Python 3 version.
I installed it via pip3 install tensorflow-cpu. I used the CPU build because Ubuntu 16.04 does not recognize the GPU in my fairly new laptop, but that is another issue.
After trying a simple hello-world program with TensorFlow, I got the following message:
2020-11-01 09:36:51.577315: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
I don't really understand what this means. Do I have to build TensorFlow again, or can I use it for ML tasks as it is? If I do need to rebuild, how can I do it correctly with the appropriate compiler flags?
Best regards :)
You can just ignore that informational message and carry on with your ML work.
It's just telling you that you could rebuild TensorFlow to perhaps use more of the advanced instructions your CPU supports.
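If the informational line bothers you, you can hide it instead of rebuilding. A sketch using the TF_CPP_MIN_LOG_LEVEL environment variable, which must be set before TensorFlow is imported:

```python
import os

# TF_CPP_MIN_LOG_LEVEL controls TensorFlow's C++ logging:
#   0 = all messages, 1 = filter out INFO, 2 = also WARNING, 3 = errors only.
# It must be set BEFORE `import tensorflow` to take effect.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"

# import tensorflow as tf  # imported after this point, the INFO line is gone
print(os.environ["TF_CPP_MIN_LOG_LEVEL"])
```

This only silences the log line; it does not change which instructions the binary uses.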

Update Tensorflow binary in virtual environment in PyCharm to use AVX2

My question is related to this one, but I am using PyCharm and set up my virtual environment with a Python interpreter according to this guide, page 5.
When I run my tensorflow code, I get the warning:
Your CPU supports instructions that this TensorFlow binary was not
compiled to use: AVX2
I could ignore it, but since my model fitting is quite slow, I would like to take advantage of AVX2. However, I do not know how to update the TensorFlow binary in this PyCharm virtual environment setup to make use of it.
Anaconda/conda as package management tool:
Assuming that you have installed anaconda/conda on your machine, if not follow this - https://docs.anaconda.com/anaconda/install/windows/
conda create --name tensorflow_optimized python=3.7
conda activate tensorflow_optimized
# you need intel's tensorflow version that's optimized to use SSE4.1 SSE4.2 AVX AVX2 FMA
conda install tensorflow-mkl -c anaconda
#run this to check whether the installed version uses MKL,
#which in turn uses all the optimizations that your system provides.
python -c "import tensorflow as tf; tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)"
# you should see something like this as the output.
2020-07-14 19:19:43.059486: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
pip3 as package management tool:
py -m venv tensorflow_optimized
.\tensorflow_optimized\Scripts\activate
#once the env is activated, you need intel's tensorflow version
#that's optimized to use SSE4.1 SSE4.2 AVX AVX2 FMA
pip install intel-tensorflow
#run this to check whether the installed version uses MKL,
#which in turn uses all the optimizations that your system provides.
py -c "import tensorflow as tf; tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)"
# you should see something like this as the output.
2020-07-14 19:19:43.059486: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
Once you have this, you can use this env in PyCharm.
Before that, run
where python on Windows, or which python on Linux and macOS, while the env is activated; it should give you the path to the interpreter. In PyCharm:
Go to Preferences -> Project: your project name -> Project Interpreter -> click on the settings symbol -> click on Add.
Select System Interpreter -> click on ... -> this opens a popup window asking for the location of the Python interpreter.
In the location path, paste the path printed by where python -> click OK.
Now you should see all the packages installed in that env.
The next time you want to select that interpreter for your project, click in the lower right corner where it says python3/python2 (your interpreter name) and select the one you need.
I'd suggest installing Anaconda as your default package manager, as it makes your dev life easier with respect to Python on a Windows machine, but you can make do with pip as well.
If your CPU utilization during training stays under 100% most of the time, you should not even bother getting a different TF binary.
You might not see much benefit, if any, from using AVX2 (or AVX-512, for that matter), depending on the workload you are running.
AVX2 is a set of 256-bit CPU vector instructions. Chances are you can get at most a 2x benefit compared to 128-bit streaming (SSE) instructions. Deep learning models are very much memory-bandwidth bound and would see little benefit, if any, from switching to larger register sizes. An easy way to check: see how long your CPU utilization stays at 100% during training. If it is under 100% most of the time, you are probably already memory-bound (or bound by something else). If your training runs on the GPU and the CPU is used only for data preprocessing and occasional operations, the benefit will be even less noticeable.
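The memory-bandwidth argument can be made concrete with a back-of-the-envelope arithmetic-intensity estimate (a sketch; the byte counts ignore caching and are a simplification):

```python
# FLOPs per byte moved for a dense matmul C = A @ B with float32 operands.
# Low intensity -> memory-bandwidth bound, wider SIMD (AVX2) helps little;
# high intensity -> compute bound, wider SIMD can pay off.
def arithmetic_intensity(m, k, n, dtype_bytes=4):
    flops = 2 * m * k * n                                # multiply-adds
    bytes_moved = dtype_bytes * (m * k + k * n + m * n)  # read A, B; write C
    return flops / bytes_moved

print(round(arithmetic_intensity(1, 1024, 1024), 2))     # batch of 1
print(round(arithmetic_intensity(1024, 1024, 1024), 2))  # large batch
```

A batch-of-1 inference step moves almost as many bytes as it does arithmetic, so faster vector units sit idle waiting on memory; only at large batch sizes does the compute dominate.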
Back to answering your question. The best way to update the TF binary to get the most out of the latest CPU architecture, CUDA version, Python version, etc. is to build TensorFlow from source, which might take a few hours of your time. That is the official and most robust way of solving your issue.
If you would be satisfied with better CPU instruction support alone, you can try installing third-party binaries from wherever you can find them. Installing conda and pointing the PyCharm interpreter to the conda installation is one option.
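For reference, the from-source route mentioned above looks roughly like this (a sketch following the official source-install docs; exact configure answers and flags depend on your TF version and toolchain):

```shell
# Build TensorFlow from source with optimizations for the local machine.
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure        # interactive: choose Python, CUDA support, compute capability
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl
```

--config=opt lets the compiler target the host CPU, which is what makes the AVX/AVX2 warnings disappear.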

Getting the .whl for tensorflow to support all CPU instruction sets

When I start jupyter-notebook I see:
I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
When I go to the tensorflow GitHub project page intending to raise an issue about supporting all CPU instruction sets, it tells me to ask on SO first (so here I am).
Why wouldn't tensorflow be compiled to support all CPU instruction sets?
How would I contact the .whl packager to ask them to include all CPU instruction sets? (I signed up to pypi.org but I don't seem to be able to message people through there.)

Ignoring visible gpu device with compute capability 3.0. The minimum required Cuda capability is 3.5

I am running TensorFlow 1.5.0 in a Docker container because I need a version that doesn't use the AVX instructions; the hardware I am running on is too old to support them.
I finally got tensorflow-gpu to import correctly (after downgrading the Docker image to TF 1.5.0), but now when I run any code to detect the GPU, it says the GPU is not there.
I looked at the Docker log and Jupyter is printing this message:
Ignoring visible gpu device (device: 0, name: GeForce GTX 760, pci bus id: 0000:01:00.0, compute capability: 3.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
The TensorFlow website says that GPUs with compute capability 3.0 are supported, so why does it say it needs compute capability 3.5?
Is there any way to get a Docker image for TensorFlow and Jupyter that uses TF 1.5.0 but supports GPUs with compute capability 3.0?
You need to build TensorFlow from source. The typical wheels you install using pip were built requiring compute capability 3.5, but TensorFlow does indeed support compute capability 3.0:
https://www.tensorflow.org/install/install_sources
GPU card with CUDA Compute Capability 3.0 or higher. See NVIDIA
documentation for a list of supported GPU cards.
You can build the latest TF version; the build will also auto-detect the capabilities of your CPU and should not use AVX.
The Tensorflow 1.5 docs say
The following NVIDIA hardware must be installed on your system:
GPU card with CUDA Compute Capability 3.5 or higher. See NVIDIA documentation for a list of supported GPU cards.
Other TensorFlow versions support GPUs with compute capability 3.0, including both older and later versions, but specifically not TensorFlow 1.5. Upgrade your hardware, or pick a different TensorFlow version.
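The check the wheel performs amounts to a tuple comparison on the compute capability; a hypothetical helper to illustrate (names are mine, not TensorFlow's):

```python
# Compare a GPU's CUDA compute capability against a binary's minimum,
# e.g. the 3.5 floor compiled into the stock TF 1.5 pip wheels.
def meets_minimum(compute_cap: str, minimum: str = "3.5") -> bool:
    have = tuple(int(x) for x in compute_cap.split("."))
    need = tuple(int(x) for x in minimum.split("."))
    return have >= need

print(meets_minimum("3.0"))  # the GTX 760 from the question
print(meets_minimum("3.5"))
print(meets_minimum("6.1"))
```

The GTX 760's 3.0 fails the 3.5 floor, which is exactly the "Ignoring visible gpu device" message from the question.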
I just spent a day trying to build this thing from source, and what finally worked for me is quite surprising: the pre-built wheel for TF 1.5.0 no longer complains about this, while the pre-built wheel for TF 1.14.0 does. It seems you used the same version, so this is quite interesting, but I thought I would share it; if anyone struggles with this, there seems to be an easy way out.
Configs:
Visual Studio version: 2017
CUDA compute capability: 3.0
GPU: two GeForce 755M
OS: Windows 10
Python: 3.6.8
CUDA Toolkit: 9.0
cuDNN: 7.0 (the earliest available version is needed, but it will complain anyway)

How to check whether computer capable to run tensorflow gpu and how to install tensorflow gpu version

First, I am not sure my computer (a MacBook Pro) is capable of running TensorFlow on the GPU. I have checked the System/Hardware/Graphics-Display settings; the relevant info is below.
Graphics: Intel Iris Pro 1536 MB
So, is it possible to run TensorFlow on the GPU with these graphics capabilities?
Second question: I am using a conda environment to install TensorFlow. On the TensorFlow installation page I could only find a pip package for CPU. How can I install the GPU package? tensorflow installation page
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.3.0-py3-none-any.whl
The newest version of TensorFlow does not have GPU support for macOS, as seen here.
I haven't found any way to run my application (which uses TensorFlow) with GPU support on my Mac. It has an "Intel Iris Graphics 6100" graphics card.
This report (Can I run CUDA or OpenCL on Intel Iris?) says that only NVIDIA graphics cards have CUDA support, so I likely won't be able to.
But I did install tensorflow-gpu without problems by following this guide:
https://wangpei.ink/2019/03/29/Install-TensorFlow-GPU-by-Anaconda(conda-install-tensorflow-gpu)/
