Warning with cuDF/Python: "UserWarning: No NVIDIA GPU detected"

I am having some difficulty running code with the cudf and dask_cudf modules in python.
I am working in JupyterLab through Anaconda. I have been able to correctly install my NVIDIA GPU driver, CUDA, and cuDF (through rapidsai). But when I import cudf in Python (import cudf), I get a warning: home/lib/python3.7/site-packages/cudf/utils/gpu_utils.py:120: UserWarning: No NVIDIA GPU detected warnings.warn("No NVIDIA GPU detected")
My environment:
Linux: RHEL8
Python: 3.7.7
Cuda: 10.2
Nvidia Driver: 390.138
CUDF/Dask_CUDF: 0.13 through rapidsai
I am trying to load and manipulate datasets with data in the hundreds of thousands to millions of items, so I really need the cudf/dask_cudf utility to maximize my time.
When I run nvidia-smi in the terminal, everything looks fine and the persistence mode is on. I have searched all over the internet for a solution with no great ideas. Any help would be appreciated.

Based on your conversation with Robert, it seems the issue is that your GPU's architecture is a few generations older than what RAPIDS supports. Thanks, Robert, for working with Maggie to figure that out!
I wouldn't try to force RAPIDS to work on Kepler when there are so many alternative ways to provision a GPU - even free options for trial purposes!
If you are still interested in trying out RAPIDS and only need a single GPU, please look at our Google Colab notebooks and setup script, or at app.blazingsql.com. They are shared instances, with Colab allowing you more customization of your workspace if you need to install more packages, and BlazingSQL having the fastest "get up and running" time.
If you feel you need more than one GPU, you move into the paid realm: provision an instance with any major cloud provider and install the RHEL version of your choice (though we only officially support RHEL 7).
Does that help you?
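For reference, a quick way to confirm where a card falls is to check its compute capability: Kepler is 3.x, while RAPIDS requires Pascal (6.0) or newer, per the requirements cited below. This is a minimal sketch, assuming Numba is available (it is installed alongside cuDF):
from numba import cuda

# Minimal check: RAPIDS/cuDF needs Pascal (compute capability 6.0) or newer.
dev = cuda.get_current_device()
major, minor = dev.compute_capability
print(f"{dev.name.decode()}: compute capability {major}.{minor}")
if (major, minor) < (6, 0):
    print("This GPU is older than Pascal, so cuDF is not supported on it.")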

Related

How to run GPU on macOS Big Sur/Jupyter notebook

I am trying to create a GPU environment in Jupyter notebook to run CNN models but have had trouble. I am on MacOS (Big Sur) and was following the instructions from: https://www.techentice.com/how-to-make-jupyter-notebook-to-run-on-gpu/
First, I understand that to create a separate GPU environment in Jupyter I need the CUDA toolkit. However, I found out that the CUDA toolkit no longer supports Mac.
Second, I understand that I have to download the TensorFlow GPU build, which apparently doesn't support Mac/Python 3.7.
I would be grateful for any help or advice. Essentially I just want to be able to run my code on a GPU, as the CPU is way too slow for machine learning models. Is there any way around this?

How to check if an NVIDIA GPU is available on my system?

Is there a simple way to check if an NVIDIA GPU is available on my system using only standard libraries? I've already seen other answers where they recommend using PyTorch or TensorFlow but that's not what I'm looking for. I'd like to know how to do this on both Windows and Linux. Thanks!
When you have Nvidia drivers installed, the command nvidia-smi outputs a neat table giving you information about your GPU, CUDA, and driver setup.
By checking whether or not this command is present, one can know whether or not an Nvidia GPU is present.
Do note that this code will only work if both an Nvidia GPU and appropriate drivers are installed.
This code should work on both Linux and Windows, and the only library it uses is subprocess, which is a standard library.
import subprocess

try:
    subprocess.check_output('nvidia-smi')
    print('Nvidia GPU detected!')
except Exception:
    # This command not being found can raise quite a few different errors depending on the configuration.
    print('No Nvidia GPU in system!')
The following code shows whether CUDA is available (CUDA is how PyTorch talks to the GPU); it requires PyTorch:
import torch

print(torch.cuda.is_available())
print(torch.backends.cudnn.enabled)

GPU processing - cuDF install problem (O/S or hardware issue?)

My aim is to explore GPU acceleration for tabular data with 10,000 to 10M+ records. I am most familiar with Pandas, so cuDF seems like a good place to start.
I'm finding mixed results re: whether cuDF will run on my system (Windows 7 Pro 64-bit, i7-6820HQ, 32GB RAM, NVidia Quadro M2000M 4GB). There is also an onboard graphics card.
Per the GitHub page (https://github.com/rapidsai/cudf):
CUDA/GPU Requirements
CUDA 10.0+ (YES - I have v10.1.120)
NVIDIA driver 410.48+ (YES - I have 432.06)
Pascal architecture or better (NO - Maxwell)
I have heard that Pascal architecture is preferred/optimal as opposed to a requirement, but maybe that was for older versions of cuDF? Just this morning I heard it will run on Win 64, though performance benefits may also be reduced. Nonetheless, I'm interested in giving it a shot.
When I install from the conda prompt (python 3.6 env) using the recommended command for my CUDA version:
conda install -c rapidsai -c nvidia -c numba -c conda-forge cudf=0.13 python=3.6 cudatoolkit=10.1
I get:
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
  - cudf=0.13
Current channels:
https://conda.anaconda.org/rapidsai/win-64
https://conda.anaconda.org/rapidsai/noarch
https://conda.anaconda.org/nvidia/win-64
https://conda.anaconda.org/nvidia/noarch
https://conda.anaconda.org/numba/win-64
https://conda.anaconda.org/numba/noarch
https://conda.anaconda.org/conda-forge/win-64
https://conda.anaconda.org/conda-forge/noarch
https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
To search for alternate channels that may provide the conda package
you're looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
When I go to anaconda.org and search for cuDF (or RAPIDS), all I find are Linux installs.
I attended an Anaconda-sponsored webinar earlier today where the speaker said it'll run on Win-64, though this older post suggests maybe I need to build from source:
Package not found error while installing CuSpatial or CuDf library
I'm not ready to attempt a build from source. Am I just wasting my time? Recommendations appreciated (for either resolving cuDF with my system or alternative packages).
cuDF maintainer here.
Currently, neither cuDF nor any other RAPIDS library is supported in a native Windows environment. There's an issue tracking Windows support here: https://github.com/rapidsai/cudf/issues/28.
In general, native Windows support is not a priority for us, especially given the push towards GPU support in WSL2 that is currently in open beta.
Apparently there is some news regarding this. Here one can find the guide for using NVIDIA CUDA on Windows Subsystem for Linux.
Getting started with running CUDA on WSL requires you to complete
these steps in order:
1. Installing the latest builds from the Microsoft Windows Insider Program
2. Installing the NVIDIA preview driver for WSL 2
3. Installing WSL 2
Important note regarding the installation of the latest builds from the Microsoft Windows Insider Program
Ensure that you install Build version 20145 or higher.
You can check your build version number by running winver via the Windows Run command. (Source)
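If you'd rather check the build number from Python than by running winver, a small standard-library sketch (Windows only) could look like this; the 20145 threshold is the one quoted in the note above:
import sys

# Windows-only check: CUDA on WSL 2 needs build 20145 or higher (see the note above).
build = sys.getwindowsversion().build
if build >= 20145:
    print(f"Build {build}: new enough for the NVIDIA preview driver and WSL 2.")
else:
    print(f"Build {build}: join the Windows Insider Program and update first.")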
Hopefully a version of Windows that meets the build 20145 or higher requirement will be released next year, so one won't need to run an "Insider Program" build.
Source for Windows 10 release information.
Here one can follow all the updates regarding support for Windows.

How to use GPU in kaggle_python docker image

I installed the kaggle_python Docker image from this tutorial:
http://blog.kaggle.com/2016/02/05/how-to-get-started-with-data-science-in-containers/
This image is perfect, but I don't know how to use the GPU in it. Does anyone have any idea?
Nvidia has released a docker runtime that allows docker containers to access their host GPU. Assuming the image you're running has the CUDA libraries built in, you ought to be able to install nvidia-docker as per their instructions, then just launch a container using docker run --runtime=nvidia ...
There's an FAQ for using nvidia-dockers if you run into other roadblocks. I haven't done this myself, but lots of issues are probably going to be specific to how you installed the drivers and cuda libraries on your particular machine. You may also have to modify the image to include any necessary CUDA libraries if they aren't already installed.
Did you download the CUDA branch (link: https://github.com/Kaggle/docker-python/tree/cuda)? If so, all the infrastructure for the GPUs should already be set up and ready to go. Otherwise, you're going to have to do the setup yourself. :)

How to implement tensorflow with gpu in Windows 10?

There are many sources on configuring TensorFlow in Windows but none of them clearly states the steps and paths we should follow.
I have configured TensorFlow halfway, but I've still missed a few steps. Can anyone help with the whole configuration process?
I've used TensorFlow with GPU:
NVIDIA 950M, Windows 10, Python 3.5.2, CUDA 8 with cuDNN v5.1
I'd be glad even if someone could send me a link to the complete process :)
The documentation in tensorflow.org has the complete process.
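Once CUDA, cuDNN, and the GPU build of TensorFlow are installed, a quick sanity check confirms whether TensorFlow can actually see the GPU. This sketch uses the TensorFlow 2.x API (tf.config.list_physical_devices); older 1.x installs expose tf.test.is_gpu_available() instead:
import tensorflow as tf

# List the GPUs TensorFlow can see (TF 2.x API).
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    print("GPUs visible to TensorFlow:", gpus)
else:
    print("No GPU visible; check the CUDA/cuDNN installation and your PATH.")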
