raise AssertionError("Torch not compiled with CUDA enabled") - python

I'm trying to install PyTorch on my Windows 10 system.
I want to use an Anaconda environment.
I followed the instructions on https://pytorch.org/ (Stable 1.12.1, Conda, Python, CUDA 11.6):
(conda install pytorch torchvision torchaudio cudatoolkit=11.6 -c pytorch -c conda-forge)
Before that, I installed CUDA 11.6; when I enter nvcc --version in the console I get this output:
NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_Mar__8_18:36:24_Pacific_Standard_Time_2022
Cuda compilation tools, release 11.6, V11.6.124
Build cuda_11.6.r11.6/compiler.31057947_0
I also set up conda-forge following this guide: https://conda-forge.org/docs/user/introduction.html
But now, if I run print(torch.cuda.is_available()), False is printed.
If I run conda list I get this (excerpt):
pytorch 1.10.2 py3.9_cpu_0 pytorch
torchaudio 0.10.2 py39_cpu [cpuonly] pytorch
torchvision 0.11.3 py39_cpu [cpuonly] pytorch
My GPU is an RTX 2070 Super. Can anyone help me?

In my case I changed the environment: I created a new environment with conda, downloaded PyTorch again from pytorch.org in the version compatible with my GPU, and then ran the training command again, and it worked. Hope it helps you.
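For reference, a quick way to confirm the new environment got a CUDA-enabled build (a minimal sketch; the exact device name depends on your GPU):
import torch

# Quick sanity check after reinstalling: a CUDA-enabled build should
# report True here and see the RTX 2070 Super.
print(torch.__version__)
print(torch.cuda.is_available())     # expected: True
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))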

Related

Pytorch CUDA not available with correct versions

I really need help setting up CUDA for development with PyTorch. I have an NVIDIA graphics card and am using Python 3.8. To install PyTorch with the correct CUDA integration I ran conda install pytorch torchvision cudatoolkit=10.1 -c python. The problem is that torch.cuda.is_available() always returns False.
Can anyone help me here?
The following are my versions:
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
nvidia-smi
NVIDIA-SMI 510.47.03 Driver Version: 510.47.03 CUDA Version: 11.6
As shown on the PyTorch website, install with conda install pytorch torchvision cudatoolkit=11.3 -c pytorch or conda install pytorch torchvision cudatoolkit=11.6 -c pytorch.

pytorch CUDA version vs. Nvidia CUDA version

As of April 26th, 2022, CUDA has been updated to version 11.6, which can be installed following NVIDIA's instructions:
wget https://developer.download.nvidia.com/compute/cuda/11.6.2/local_installers/cuda_11.6.2_510.47.03_linux.run
sudo sh cuda_11.6.2_510.47.03_linux.run
I guess the version of cudatoolkit will also be 11.6.
However, there is no PyTorch release that matches CUDA 11.6.
On the PyTorch website, the newest supported CUDA version is 11.3, with PyTorch 1.11.0 (stable):
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
So if I use CUDA 11.6 and PyTorch 1.11.0 with cudatoolkit=11.3, will it work normally?
And is there any difference between NVIDIA's instructions and the conda method below?
conda install cuda -c nvidia
Best regards!
It should be fine. Otherwise, I saw here that you can build it from source (I have python=3.8.13); see the build instructions. There is also a nightly wheel built against CUDA 11.6:
pip install torch --pre --extra-index-url https://download.pytorch.org/whl/nightly/cu116
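Whichever route you take, a small check of which toolkit your PyTorch build actually bundles (a sketch; the printed values depend on the wheel you installed):
import torch

# PyTorch ships its own CUDA runtime, so what matters at run time is
# torch.version.cuda, not the system-wide toolkit reported by nvcc --version.
print(torch.__version__)           # e.g. 1.11.0, or a nightly build
print(torch.version.cuda)          # e.g. '11.3' or '11.6'
print(torch.cuda.is_available())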

"1 Physical GPUs, 0 Logical GPU " when i train the model the gpu is not working

Ubuntu version 18.04
NVIDIA-SMI 440.1.0
CUDA 10.2
GTX 960
tensorboard 2.3.0
tensorboard-plugin-wit 1.7.0
tensorflow-estimator 2.3.0
tensorflow-gpu 2.3.0
My GPU is not working; or rather, it is installed, but when I run the model nothing is allocated to the GPU.
Here is the image (screenshot not reproduced here).
Run the code below to see if TensorFlow is detecting your GPU. If the number of GPUs is listed as 0, it is not detecting it. You need to have CUDA 10.1 on your system and cuDNN v7.6.5. In that case, if you are using Anaconda, open the conda prompt and run conda install cudnn=7.6.5. You may also have to install CUDA Toolkit 10.1. If you installed TensorFlow with pip, then you have to download and install CUDA Toolkit 10.1 and modify your environment variables, etc. I found the easiest solution is to install TensorFlow using conda, because it installs both the toolkit and cuDNN automatically. If you are using Anaconda, open the conda prompt and run conda install --upgrade tensorflow.
import sys
import tensorflow as tf
from tensorflow.python.client import device_lib

print(device_lib.list_local_devices())   # every device TensorFlow can see
print(tf.__version__)
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
print(tf.test.is_gpu_available())        # True if a CUDA-capable GPU is usable
print(sys.version)                       # Python version (replaces the notebook-only !python --version)
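If the GPU is listed but your model still seems to run on the CPU, a placement test along these lines can help (a minimal sketch; '/GPU:0' assumes the GTX 960 is the first visible device):
import tensorflow as tf

# Log where each op runs and force a small matmul onto the first GPU.
# If this raises an error or logs CPU placement, TensorFlow cannot use the GPU.
tf.debugging.set_log_device_placement(True)
with tf.device('/GPU:0'):
    a = tf.random.uniform((1000, 1000))
    b = tf.random.uniform((1000, 1000))
    c = tf.matmul(a, b)
print(c.device)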

How do I install Pytorch 1.3.1 with CUDA enabled

I have a conda environment on my Ubuntu 16.04 system.
When I install Pytorch using:
conda install pytorch
and I try and run the script I need, I get the error message:
raise AssertionError("Torch not compiled with CUDA enabled")
From looking at forums, I see that this is because I have installed Pytorch without CUDA support.
I then tried:
conda install -c pytorch torchvision cudatoolkit=10.1 pytorch
but now I get the error:
from torch.utils.cpp_extension import BuildExtension, CUDAExtension
File "/home/username/miniconda3/envs/super_resolution/lib/python3.6/site-packages/torch/__init__.py", line 81, in <module>
from torch._C import *
ImportError: /lib64/libc.so.6: version `GLIBC_2.14' not found
So it seems that these two installs are installing different versions of Pytorch(?). The first one that seemed to work was Pytorch 1.3.1.
My question: How do I install Pytorch with CUDA enabled, but ensure it is version 1.3.1 so that it works with my system?
Given that your system is running Ubuntu 16.04, it comes with glibc installed. You can check your version by typing ldd --version.
Keep in mind that PyTorch is compiled on CentOS which runs glibc version 2.17.
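If you prefer to check from inside Python rather than with ldd --version, a minimal sketch (Linux/glibc only; it asks the loaded C library for its own version):
import ctypes

# Query the glibc that the running interpreter is linked against.
libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p
print(libc.gnu_get_libc_version().decode())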
Then check the CUDA version installed on your system: nvcc --version
Then install PyTorch as follows, e.g. if your CUDA version is 9.2:
conda install pytorch torchvision cudatoolkit=9.2 -c pytorch
If you get the glibc version error, try installing an earlier version of PyTorch.
If neither of the above options work, then try installing PyTorch from sources.
If you would like to set a specific PyTorch version to install, please set it as <version_nr> in the below command:
conda install pytorch=<version_nr> torchvision cudatoolkit=9.2 -c pytorch
For CUDA 10.1:
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
For CUDA 9.2:
conda install pytorch torchvision cudatoolkit=9.2 -c pytorch
For no CUDA:
conda install pytorch torchvision cpuonly -c pytorch
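After any of these commands, a quick check that the pinned version and CUDA support both came through (a sketch, assuming the install succeeded):
import torch

print(torch.__version__)           # expected: 1.3.1 if you pinned that version
print(torch.version.cuda)          # should match the cudatoolkit you chose
print(torch.cuda.is_available())   # expected: True on a working setup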
Not sure whether you have solved your problem or not, but I had this exact same problem before, because I was trying to install PyTorch on a cluster where I don't have root access. You need to download glibc into your own directory and set the environment variable LD_LIBRARY_PATH to your local glibc: https://stackoverflow.com/a/48650638/5662642.
To install glibc locally, I will point you to the thread that I read to solve my problem:
https://stackoverflow.com/a/38317265/5662642 (instead of setting --prefix=/opt/glibc-2.14 when installing, you might want to set it to another directory that you have access to). Hope it works for you.

Why is Anaconda installing cuDNN 7.6.5 when the supported version for TF 2.0 (with GPU) is 7.4?

Related to this post
I have installed TensorFlow (GPU) with Anaconda using the installation instructions here:
conda create -n tf-gpu tensorflow-gpu
conda activate tf-gpu
But I realized Anaconda is installing cuDNN version 7.6.5.
According to the TensorFlow page, shouldn't it be installing cuDNN 7.4?
Can this cause any problem?
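To see whether the conda-installed stack works despite the newer cuDNN, a minimal check (standard TensorFlow calls; the output depends on your machine):
import tensorflow as tf

# Both of these should be positive if the GPU build loaded its CUDA and
# cuDNN libraries, whatever the exact cuDNN patch version.
print(tf.test.is_built_with_cuda())
print(len(tf.config.experimental.list_physical_devices('GPU')))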
