I've tried TensorFlow with both CUDA 7.5 and 8.0, without cuDNN (my GPU is old; cuDNN doesn't support it).
When I execute device_lib.list_local_devices(), there is no GPU in the output. Theano sees my GPU and works fine with it, and the examples in /usr/share/cuda/samples run fine as well.
I installed TensorFlow through pip install. Is my GPU (a GTX 460) too old for TF to support it?
I came across this same issue in jupyter notebooks. This could be an easy fix.
$ pip uninstall tensorflow
$ pip install tensorflow-gpu
You can check if it worked with:
import tensorflow as tf
print(tf.test.gpu_device_name())
Update 2020
It seems like TensorFlow 2.0+ comes with GPU support built in, so
pip install tensorflow
should be enough.
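You can verify the GPU shows up with a quick check (a minimal sketch for TF 2.1+; on TF 2.0 the function lives under tf.config.experimental):
import tensorflow as tf
print("TF version:", tf.__version__)
print("GPUs:", tf.config.list_physical_devices('GPU'))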
Summary:
check if tensorflow sees your GPU (optional)
check if your videocard can work with tensorflow (optional)
find versions of CUDA Toolkit and cuDNN SDK, compatible with your tf version
install CUDA Toolkit
install cuDNN SDK
pip uninstall tensorflow; pip install tensorflow-gpu
check if tensorflow sees your GPU
* source - https://www.tensorflow.org/install/gpu
Detailed instruction:
check if tensorflow sees your GPU (optional)
from tensorflow.python.client import device_lib

def get_available_devices():
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos]

print(get_available_devices())
# my output was => ['/device:CPU:0']
# good output must be => ['/device:CPU:0', '/device:GPU:0']
check if your card can work with tensorflow (optional)
my PC: GeForce GTX 1060 notebook (driver version - 419.35), windows 10, jupyter notebook
tensorflow needs Compute Capability 3.5 or higher. (https://www.tensorflow.org/install/gpu#hardware_requirements)
https://developer.nvidia.com/cuda-gpus
select "CUDA-Enabled GeForce Products"
result - "GeForce GTX 1060 Compute Capability = 6.1"
my card can work with tf!
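(Optional) If you already have a recent TensorFlow installed, you can also read the compute capability programmatically instead of looking it up on the site. A small sketch, assuming TF 2.4+ where tf.config.experimental.get_device_details is available:
import tensorflow as tf
for gpu in tf.config.list_physical_devices('GPU'):
    details = tf.config.experimental.get_device_details(gpu)
    # e.g. {'device_name': 'GeForce GTX 1060', 'compute_capability': (6, 1)}
    print(gpu.name, details.get('compute_capability'))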
find versions of CUDA Toolkit and cuDNN SDK, that you need
a) find your tf version
import sys
print (sys.version)
# 3.6.4 |Anaconda custom (64-bit)| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)]
import tensorflow as tf
print(tf.__version__)
# my output was => 1.13.1
b) find right versions of CUDA Toolkit and cuDNN SDK for your tf version
https://www.tensorflow.org/install/source#linux
* it is written for linux, but worked in my case
see that tensorflow_gpu-1.13.1 needs: CUDA Toolkit v10.0, cuDNN SDK v7.4
install CUDA Toolkit
a) install CUDA Toolkit 10.0
https://developer.nvidia.com/cuda-toolkit-archive
select: CUDA Toolkit 10.0 and download base installer (2 GB)
installation settings: select only CUDA
(my installation path was: D:\Programs\x64\Nvidia\Cuda_v_10_0\Development)
b) add environment variables:
system variables / path must have:
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\bin
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\libnvvp
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\extras\CUPTI\libx64
D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\include
install cuDNN SDK
a) download cuDNN SDK v7.4
https://developer.nvidia.com/rdp/cudnn-archive (needs registration, but it is simple)
select "Download cuDNN v7.4.2 (Dec 14, 2018), for CUDA 10.0"
b) add path to 'bin' folder into "environment variables / system variables / path":
D:\Programs\x64\Nvidia\cudnn_for_cuda_10_0\bin
pip uninstall tensorflow
pip install tensorflow-gpu
check if tensorflow sees your GPU
- restart your PC
- print(get_available_devices())
- # now this code should return => ['/device:CPU:0', '/device:GPU:0']
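If the GPU still does not show up after these steps, check whether Windows can actually load the CUDA and cuDNN DLLs. A minimal sketch, assuming the CUDA 10.0 / cuDNN 7.x DLL names from this setup (adjust the names for other versions):
import ctypes
for dll in ("cudart64_100.dll", "cublas64_100.dll", "cudnn64_7.dll"):
    try:
        ctypes.WinDLL(dll)
        print(dll, "loaded OK")
    except OSError as err:
        print(dll, "NOT found - check your PATH entries:", err)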
If you are using conda, you might have installed the CPU version of TensorFlow. Check the environment's package list (conda list) to see if this is the case. If so, remove the package with conda remove tensorflow and install keras-gpu instead (conda install -c anaconda keras-gpu). This will install everything you need to run your machine-learning code on the GPU. Cheers!
P.S. You should first check that you have installed the drivers correctly using nvidia-smi. By default this is not on your PATH, so you may also need to add its folder to your PATH. The .exe file can be found at C:\Program Files\NVIDIA Corporation\NVSMI
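If you don't want to touch your PATH, you can call it with the full path instead. A quick sketch, assuming the default install location mentioned above:
import subprocess
# default driver utility location on Windows; adjust if yours differs
subprocess.run(r"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe", check=True)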
When I look up your GPU, I see that it only supports CUDA Compute Capability 2.1. (Can be checked through https://developer.nvidia.com/cuda-gpus) Unfortunately, TensorFlow needs a GPU with minimum CUDA Compute Capability 3.0.
https://www.tensorflow.org/get_started/os_setup#optional_install_cuda_gpus_on_linux
You might see some logs from TensorFlow checking your GPU, but ultimately the library will avoid using an unsupported GPU.
The following worked for me on an HP laptop with an Nvidia card of CUDA Compute Capability 3.0, running Windows 7.
pip3.6.exe uninstall tensorflow-gpu
pip3.6.exe uninstall tensorflow-gpu
pip3.6.exe install tensorflow-gpu
I had a problem because I didn't specify the TensorFlow version, so I got version 2.11. After many hours I found that my problem is described in the install guide:
Caution: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin
Before that, I read most of the answers to this and similar questions. I followed @AndrewPt's answer: I already had CUDA installed, but I updated the version just in case, installed cuDNN, and restarted the computer.
The easiest solution for me was to downgrade to 2.10 (you can try the different options mentioned in the install guide). I first uninstalled all of these packages (it's probably not necessary, but I didn't want to see how pip messed up versions at 2 am):
pip uninstall keras
pip uninstall tensorflow-io-gcs-filesystem
pip uninstall tensorflow-estimator
pip uninstall tensorflow
pip uninstall Keras-Preprocessing
pip uninstall tensorflow-intel
because I wanted only packages required for the old version, and I didn't do it for all required packages for 2.11 version. After that I installed tensorflow 2.10:
pip install "tensorflow<2.11"
and it worked.
I used this code to check if GPU is visible:
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
So as of 2022-04, the conda tensorflow package contains both CPU and GPU builds. To install a GPU build, search to see what's available:
λ conda search tensorflow
Loading channels: done
# Name Version Build Channel
tensorflow 0.12.1 py35_1 conda-forge
tensorflow 0.12.1 py35_2 conda-forge
tensorflow 1.0.0 py35_0 conda-forge
…
tensorflow 2.5.0 mkl_py39h1fa1df6_0 pkgs/main
tensorflow 2.6.0 eigen_py37h37bbdb1_0 pkgs/main
tensorflow 2.6.0 eigen_py38h63d3545_0 pkgs/main
tensorflow 2.6.0 eigen_py39h855417c_0 pkgs/main
tensorflow 2.6.0 gpu_py37h3e8f0e3_0 pkgs/main
tensorflow 2.6.0 gpu_py38hc0e8100_0 pkgs/main
tensorflow 2.6.0 gpu_py39he88c5ba_0 pkgs/main
tensorflow 2.6.0 mkl_py37h9623b36_0 pkgs/main
tensorflow 2.6.0 mkl_py38hdc16138_0 pkgs/main
tensorflow 2.6.0 mkl_py39h31650da_0 pkgs/main
You can see that there are builds of TF 2.6.0 that support Python 3.7, 3.8 and 3.9, and that are built for MKL (Intel CPU), Eigen, or GPU.
To narrow it down, you can use wildcards in the search. This will find any Tensorflow 2.x version that is built for GPU, for instance:
λ conda search tensorflow=2*=gpu*
Loading channels: done
# Name Version Build Channel
tensorflow 2.0.0 gpu_py36hfdd5754_0 pkgs/main
tensorflow 2.0.0 gpu_py37h57d29ca_0 pkgs/main
tensorflow 2.1.0 gpu_py36h3346743_0 pkgs/main
tensorflow 2.1.0 gpu_py37h7db9008_0 pkgs/main
tensorflow 2.5.0 gpu_py37h23de114_0 pkgs/main
tensorflow 2.5.0 gpu_py38h8e8c102_0 pkgs/main
tensorflow 2.5.0 gpu_py39h7dc34a2_0 pkgs/main
tensorflow 2.6.0 gpu_py37h3e8f0e3_0 pkgs/main
tensorflow 2.6.0 gpu_py38hc0e8100_0 pkgs/main
tensorflow 2.6.0 gpu_py39he88c5ba_0 pkgs/main
To install a specific version in an otherwise empty environment, you can use a command like:
λ conda activate tf
(tf) λ conda install tensorflow=2.6.0=gpu_py39he88c5ba_0
…
The following NEW packages will be INSTALLED:
_tflow_select pkgs/main/win-64::_tflow_select-2.1.0-gpu
…
cudatoolkit pkgs/main/win-64::cudatoolkit-11.3.1-h59b6b97_2
cudnn pkgs/main/win-64::cudnn-8.2.1-cuda11.3_0
…
tensorflow pkgs/main/win-64::tensorflow-2.6.0-gpu_py39he88c5ba_0
tensorflow-base pkgs/main/win-64::tensorflow-base-2.6.0-gpu_py39hb3da07e_0
…
As you can see, if you install a GPU build, it will automatically also install compatible cudatoolkit and cudnn packages. You don't need to manually check versions for compatibility, or manually download several gigabytes from Nvidia's website, or register as a developer, as it says in other answers or on the official website.
After installation, confirm that it worked and it sees the GPU by running:
λ python
Python 3.9.12 (main, Apr 4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'2.6.0'
>>> tf.config.list_physical_devices()
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
Getting conda to install a GPU build together with the other packages you want to use is another story, however, because I ran into a lot of package incompatibilities. I think the best you can do is specify the installation criteria using wildcards and cross your fingers.
This tries to install any TF 2.x version that's built for GPU and whose dependencies are compatible with Spyder and matplotlib, for instance:
λ conda install tensorflow=2*=gpu* spyder matplotlib
For me, this ended up installing a two year old GPU version of tensorflow:
matplotlib pkgs/main/win-64::matplotlib-3.5.1-py37haa95532_1
spyder pkgs/main/win-64::spyder-5.1.5-py37haa95532_1
tensorflow pkgs/main/win-64::tensorflow-2.1.0-gpu_py37h7db9008_0
I had previously been using the tensorflow-gpu package, but that doesn't work anymore. conda typically grinds forever trying to find compatible packages to install, and even when it's installed, it doesn't actually install a gpu build of tensorflow or the CUDA dependencies:
λ conda list
…
cookiecutter 1.7.2 pyhd3eb1b0_0
cryptography 3.4.8 py38h71e12ea_0
cycler 0.11.0 pyhd3eb1b0_0
dataclasses 0.8 pyh6d0b6a4_7
…
tensorflow 2.3.0 mkl_py38h8557ec7_0
tensorflow-base 2.3.0 eigen_py38h75a453f_0
tensorflow-estimator 2.6.0 pyh7b7c402_0
tensorflow-gpu 2.3.0 he13fc11_0
I have had an issue where I needed the latest TensorFlow (2.8.0 at the time of writing) with GPU support running in a conda environment. The problem was that it was not available via conda. What I did was
conda install cudatoolkit==11.2
pip install tensorflow-gpu==2.8.0
Although I had checked that the CUDA toolkit version was compatible with the TensorFlow version, it was still returning an error saying libcudart.so.11.0 could not be found. As a result, GPUs were not visible. The remedy was to set the environment variable LD_LIBRARY_PATH to point to your anaconda3/envs/<your_tensorflow_environment>/lib with this command:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/<user>/anaconda3/envs/<your_tensorflow_environment>/lib
Unless you make it permanent, you will need to set this variable every time you start a terminal before launching a session (e.g. a Jupyter notebook). It can be conveniently automated by following this procedure from conda's official website.
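For example, with conda 4.8+ you can attach the variable to the environment itself so it is set automatically on activation. A sketch using the env name and lib path placeholders from above (note this sets the variable to just the env's lib directory rather than appending to an existing value):
conda activate <your_tensorflow_environment>
conda env config vars set LD_LIBRARY_PATH=/home/<user>/anaconda3/envs/<your_tensorflow_environment>/lib
conda deactivate
conda activate <your_tensorflow_environment>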
In my case, I had a working tensorflow-gpu version 1.14 but suddenly it stopped working. I fixed the problem using:
pip uninstall tensorflow-gpu==1.14
pip install tensorflow-gpu==1.14
I experienced the same problem on my Windows OS. I followed tensorflow's instructions on installing CUDA, cudnn, etc., and tried the suggestions in the answers above - with no success.
What solved my issue was to update my GPU drivers. You can update them via:
Pressing windows-button + r
Entering devmgmt.msc
Right-Clicking on "Display adapters" and clicking on the "Properties" option
Going to the "Driver" tab and selecting "Update Driver".
Finally, click on "Search automatically for updated driver software"
Restart your machine and run the following check again:
from tensorflow.python.client import device_lib
local_device_protos = device_lib.list_local_devices()
[x.name for x in local_device_protos]
Sample output:
2022-01-17 13:41:10.557751: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce 940MX major: 5 minor: 0 memoryClockRate(GHz): 1.189
pciBusID: 0000:01:00.0
2022-01-17 13:41:10.558125: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2022-01-17 13:41:10.562095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2022-01-17 13:45:11.392814: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-01-17 13:45:11.393617: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2022-01-17 13:45:11.393739: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2022-01-17 13:45:11.401271: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/device:GPU:0 with 1391 MB memory) -> physical GPU (device: 0, name: GeForce 940MX, pci bus id: 0000:01:00.0, compute capability: 5.0)
>>> [x.name for x in local_device_protos]
['/device:CPU:0', '/device:GPU:0']
Related
I recently had to completely reinstall my Python distribution, and for some reason I can no longer run Keras on the GPU.
I followed the instructions from Can I run Keras model on gpu? but for some reason, I do not see my GPU when trying to list the devices.
My versions are:
tensorflow & tensorflow-gpu : 2.3.0
keras : 2.3.1
cudatoolkit : 11.3.1
I have not installed cuDNN yet, as the instructions are a bit unclear to me: do I have to install it in the cudatoolkit directory? Is it required to run on the GPU?
Thanks
OK, so here is the solution:
In my case, Python 3.7.13 required tensorflow-gpu 2.1.0 to detect the GPU.
So the correct set of versions is:
Python 3.7.13
tensorflow 2.1.0
keras 2.3.0
cudnn 7.6.5
cudatoolkit 10.1.243
With this it works. Be careful: for some reason, in my case TensorFlow pulled in tensorflow-estimator 2.6.0, and I had to downgrade it to 2.1.0 to get Spyder running.
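If you want to reproduce this combination in one step, a conda command along these lines should pull a consistent set (a sketch only; the package names are the ones listed above, but the exact builds may differ on your channel):
conda create -n tf_gpu python=3.7 tensorflow-gpu=2.1.0 keras=2.3 cudatoolkit=10.1 cudnn=7.6.5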
When I create a python 3.8 environment using tensorflow-gpu 2.5.0 package using conda, I get the error "Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found". However, I have an existing python 3.7 environment that also has tensorflow-gpu 2.5.0, and it is able to find the library OK.
Interestingly enough, if I clone the python 3.7 environment where I'm able to load the library, it also loads in the cloned environment, but if I create a new python 3.7 environment from scratch with tensorflow-gpu 2.5.0, I get the error in that new environment.
I'm not sure why I'm able to load the library in the one environment, but not the others, since the library is in the same location in each of the environments, and it should be a link back to the same file in the package cache, anyhow.
In the python 3.7 environment where I am able to load the cudart64_110.dll, the following relevant packages are installed:
# Name Version Build Channel
cudatoolkit 11.3.1 h280eb24_9 conda-forge
python 3.7.12 h7840368_100_cpython conda-forge
tensorflow 2.5.0 gpu_py37h23de114_0
tensorflow-base 2.5.0 gpu_py37hb3da07e_0
tensorflow-gpu 2.5.0 h17022bd_0
In the python 3.8 environment where I'm not able to load cudart64_110.dll, the following relevant packages are installed:
# Name Version Build Channel
cudatoolkit 11.3.1 h280eb24_9 conda-forge
python 3.8.12 h7840368_2_cpython conda-forge
tensorflow 2.5.0 gpu_py38h8e8c102_0
tensorflow-base 2.5.0 gpu_py38hb3da07e_0
tensorflow-gpu 2.5.0 h17022bd_0
Note that both environments include the same cudatoolkit version.
Also, I do realize that I'm mixing channels. However, (a) tensorflow 2.x is not available from conda-forge, and (b) that shouldn't matter in this case since I am able to load tensorflow with CUDA in one environment, but not the other.
For tensorflow_gpu==2.5.0, you need to install CUDA 11.2.
Please check the tested build configurations below and install the matching cuDNN and CUDA to use TF-GPU 2.5.
Version Python version Compiler Build tools cuDNN CUDA
tensorflow_gpu-2.7.0 3.7-3.9 MSVC 2019 Bazel 3.7.2 8.1 11.2
tensorflow_gpu-2.6.0 3.6-3.9 MSVC 2019 Bazel 3.7.2 8.1 11.2
tensorflow_gpu-2.5.0 3.6-3.9 MSVC 2019 Bazel 3.7.2 8.1 11.2
Follow this link to install the specified CUDA and cuDNN versions on your system.
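After installing, you can confirm which CUDA and cuDNN versions your TensorFlow build was compiled against, and whether the GPU is visible. A small check sketch (tf.sysconfig.get_build_info is available from TF 2.3 onward):
import tensorflow as tf
info = tf.sysconfig.get_build_info()
print("built with CUDA:", info.get("cuda_version"))    # expect 11.2 for TF-GPU 2.5-2.7
print("built with cuDNN:", info.get("cudnn_version"))  # expect 8.1
print("GPUs visible:", tf.config.list_physical_devices('GPU'))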
I have both the CPU and GPU versions of tensorflow installed on Windows 10.
conda list t.*flow
# packages in environment at C:\Users\Dell\anaconda4:
#
# Name Version Build Channel
tensorflow 2.3.1 pypi_0 pypi
tensorflow-estimator 2.3.0 pypi_0 pypi
tensorflow-gpu 2.3.1 pypi_0 pypi
tensorflow-gpu-estimator 2.3.0 pypi_0 pypi
Also, I have already installed CUDA and cuDNN by following the steps at this link: https://towardsdatascience.com/installing-tensorflow-with-cuda-cudnn-and-gpu-support-on-windows-10-60693e46e781. The only difference is that I downloaded the latest versions of CUDA and cuDNN to match the requirements of tensorflow 2.3.1. Still, I cannot access my GPU, which is an NVIDIA GeForce MX150.
import tensorflow as tf
tf.test.is_built_with_cuda()
returns True.
tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)
output:
WARNING:tensorflow:From :1: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.config.list_physical_devices('GPU') instead.
False
Any thoughts as to why tensorflow 2.3.1 cannot access/find the GPU? Please help me solve this problem.
I believe tensorflow-gpu does not require tensorflow in order to work, and by having both installed you may be importing the CPU version instead.
First uninstall the standard tensorflow package and see if that fixes it.
The NVIDIA GeForce MX150 does support CUDA, but there may still be compatibility issues with the most recent versions of tensorflow, CUDA and CUDNN.
The discussion here claims a working combination with CUDA 9.1 and cuDNN 7.0.5. My advice would be to remove your installed versions and try these, though this will probably require a downgrade of tensorflow-gpu to stay compatible.
Your warning shows that tf.test.is_gpu_available is deprecated. The TensorFlow docs at
https://www.tensorflow.org/api_docs/python/tf/test/is_gpu_available confirm that this way of checking GPU access is deprecated.
You should use tf.config.experimental.list_physical_devices('GPU').
To be more precise, use the following:
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
Your expected output should be as below if you have one GPU:
# Num GPUs Available: 1
This is an issue many of us must have come across: it is one of the error messages that pops up for many users while installing TensorFlow. I could not install TensorFlow 1.10.0 due to the following error, which I posted about a few days back:
ImportError: Could not find 'cudnn64_7.dll'
I am using Windows 10 and was trying to run
import tensorflow as tf
through a conda environment.
What can I do to resolve this issue?
1) Go to the cuDNN Archive
2) Click on Download cuDNN v7.6.1 (June 24, 2019), for CUDA 10.0
(you need CUDA 10.0 installed, NOT 10.1. If you installed the wrong version, uninstall it and install 10.0, which works with tensorflow-gpu)
3) Click on the link for your operating system.
4) Unzip it. It should unzip to a folder called CUDA.
5) Go into the CUDA folder and copy the contents
6) Open the installed CUDA 10 location. On Windows 10 this is typically C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0
7) Paste the contents from your clipboard to the folder.
8) have a coffee. You are done!
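To double-check that the copied files landed where TensorFlow will look for them, a small sketch, assuming the default CUDA 10.0 install path (adjust if you installed elsewhere):
import os
cuda_bin = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin"
print("cudnn64_7.dll present:", os.path.exists(os.path.join(cuda_bin, "cudnn64_7.dll")))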
Jeremy Demers' answer worked for me, and I was able to repeat his process. However, I used the cuDNN build for CUDA 10.1 instead of 10.0, and I installed TensorFlow version 2.4.0-dev20200705 by first running pip install tensorflow-gpu and then pip install tensorflow-nightly to get the latest build. Hardware: 2060 Super, 8 GB.
Edit:
The recommended way to get tensorflow nightly via pip is:
pip install tf-nightly
Here is what I did.
Step 1) Installed 'NVIDIA GEFORCE EXPERIENCE' on my computer to check my driver version.
Step 2) The driver version was old and an update was available, so I updated my graphics driver.
My GPU properties now are:-
NVIDIA GEFORCE EXPERIENCE Version 3.14.1.48
GeForce 940MX
Driver Version 398.82
Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
7.9 GB RAM
Now, in a conda environment (I created an environment named 'tensorflow'), when I executed the statement
(tensorflow) C:\Users\Arnab Sinha>pip install --ignore-installed --upgrade tensorflow-gpu
I encountered the following message :-
pandas 0.23.4 requires python-dateutil>=2.5.0, which is not installed.
pandas 0.23.4 requires pytz>=2011k, which is not installed.
I then installed the required packages by executing the following commands one after the other
pip install python-dateutil
and
pip install pytz
after which I ran the command in Python 3.6.6
import tensorflow as tf
and then
print(tf.__version__)
which gave the output
1.10.0
That is how I installed TensorFlow 1.10.0 on my computer. Anaconda Navigator, however, does not yet have the TensorFlow 1.10.0 update. Please let me know if you have found it.
I'm getting started with TensorFlow, but I cannot make it use GPU instead of CPU with TensorFlow 1.2.1.
I've got a laptop equipped with an NVIDIA GTX 850M, which has CUDA compute capability 5.0.
The CUDA Toolkit is installed with the latest version available.
cuDNN is installed with the latest version available.
I've set up the environment variables just as is shown here : https://nitishmutha.github.io/tensorflow/2017/01/22/TensorFlow-with-gpu-for-windows.html
If I install the latest version of TensorFlow via pip: "pip install tensorflow-gpu" in the cmd prompt, then TensorFlow does not recognize my GPU and acts like I've got none: 'Device mapping: no known device'.
If instead I install tensorflow via 'pip install --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-0.12.1-cp35-cp35m-win_amd64.whl' then everything works fine.
Does anyone have an idea why the latest version of TF does that?
In recent versions of TensorFlow, you can check GPU availability as follows:
gpu_available = tf.test.is_gpu_available()
is_cuda_gpu_available = tf.test.is_gpu_available(cuda_only=True)
is_cuda_gpu_min_3 = tf.test.is_gpu_available(True, (3,0))
tf.test.is_gpu_available will be removed in a future version. Instructions for updating: Use tf.config.list_physical_devices('GPU') instead
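A non-deprecated equivalent of the three checks above could look like this (a sketch for TF 2.1+; the compute-capability lookup needs TF 2.4+ for tf.config.experimental.get_device_details):
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
print("GPU available:", len(gpus) > 0)
print("Built with CUDA:", tf.test.is_built_with_cuda())
# rough equivalent of the minimum-compute-capability check
for gpu in gpus:
    cc = tf.config.experimental.get_device_details(gpu).get('compute_capability')
    print(gpu.name, "compute capability:", cc, "- meets (3, 0):", cc is not None and cc >= (3, 0))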