How to run GPU on macOS Big Sur / Jupyter notebook - Python

I am trying to create a GPU environment in Jupyter notebook to run CNN models but have had trouble. I am on MacOS (Big Sur) and was following the instructions from: https://www.techentice.com/how-to-make-jupyter-notebook-to-run-on-gpu/
First, I understand that to create a separate GPU environment in Jupyter I need the CUDA toolkit. However, I found out that the CUDA toolkit no longer supports Mac.
Second, I understand that I have to download TensorFlow GPU, which apparently doesn't support Mac/Python 3.7.
I would be grateful for any help or advice. Essentially, I just want to be able to run my code on a GPU, as the CPU is way too slow for machine learning models. Is there any way around this?

Related

Keras stopped training despite using GPU memory

Similarly to the topic below, Keras stopped working.
tf.keras - Training on first epoch not progressing despite using GPU memory
I have a Python 3.7 Anaconda installation on Windows
CUDA 10.2 and cuDNN installed
3080 GPU
Keras 2.3.1
TF 1.4
A few days ago everything was running perfectly. Then, after installing PyTorch, Keras stopped working. The same script I was training before now gets stuck on the first epoch. No errors are displayed when running model.fit (verbose=2). The whole GPU memory simply fills up (even with a very small dataset) and training does not advance.
As additional information, PyTorch displayed an error about not being able to use CUDA.
I've tried to format the whole PC (factory reset) and the issue is still happening.
I'm out of ideas. Any suggestion would be more than welcome.
Thanks!
I really think that a factory reset of the whole PC was not necessary. I would suggest creating two conda virtual environments, one with TensorFlow and the other with PyTorch. Conda virtual environments are really useful; they keep things separated, and this might be exactly what your application needs. Here is the official Anaconda reference explaining how to manage environments.
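As a quick sanity check, here is a minimal sketch (assuming the relevant framework is installed in whichever environment you activate) that you can run inside each environment to confirm it can see the GPU:
# Hypothetical sanity check; run it inside each conda environment.
import importlib.util
if importlib.util.find_spec("tensorflow") is not None:
    import tensorflow as tf
    # tf.test.is_gpu_available() works on TF 1.x; TF 2.x prefers tf.config.list_physical_devices("GPU")
    print("TensorFlow sees a GPU:", tf.test.is_gpu_available())
if importlib.util.find_spec("torch") is not None:
    import torch
    print("PyTorch sees a GPU:", torch.cuda.is_available())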

Run PyTorch on GPU with Visual Studio Code

I am trying to execute PyTorch code in Visual Studio Code, but at the moment I can only run it on the CPU. My idea is to use the GPU for certain deep learning projects and not for others. How can I switch from CPU to GPU?
When I run
import torch
torch.cuda.is_available()
the output is "False".
I have CUDA already installed. I'm using Ubuntu 20.04.2. It is important for me to do this in Visual Studio Code.
Several issues could prevent you from using a GPU.
The GPU is not supported by CUDA, or it does not meet the minimum required CUDA version.
You installed the CPU build of PyTorch instead of the GPU variant. You will need to reinstall it from the PyTorch website.
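To switch between CPU and GPU per project, a common pattern (a minimal sketch; the model and tensor below are placeholders, not from your code) is to select a torch.device once and move everything to it:
import torch
use_gpu = True  # flip this per project
device = torch.device("cuda" if use_gpu and torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(10, 2).to(device)  # move the model to the chosen device
x = torch.randn(4, 10, device=device)      # create inputs on the same device
print(device, model(x).shape)
Once torch.cuda.is_available() returns True in your environment, setting use_gpu = True is all the switching you need; VS Code simply runs whichever Python interpreter you select.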

Warning with CUDF/Python: "User Warning: No NVIDIA GPU detected"

I am having some difficulty running code with the cudf and dask_cudf modules in python.
I am working in JupyterLab through Anaconda. I have been able to correctly install my NVIDIA GPU driver, cudf (through rapidsai), and CUDA. However, when I go to import cudf in Python using import cudf, I get a warning reading: "home/lib/python3.7/site-packages/cudf/utils/gpu_utils.py:120: UserWarning: No NVIDIA GPU detected. warnings.warn("No NVIDIA GPU detected")"
My environment:
Linux: RHEL8
Python: 3.7.7
Cuda: 10.2
Nvidia Driver: 390.138
CUDF/Dask_CUDF: 0.13 through rapidsai
I am trying to load and manipulate datasets with data in the hundreds of thousands to millions of items, so I really need the cudf/dask_cudf utility to maximize my time.
When I run nvidia-smi in the terminal, everything looks fine and the persistence mode is on. I have searched all over the internet for a solution with no great ideas. Any help would be appreciated.
Based on the conversations you're having with Robert, it seems the issue is that your GPU's architecture is a few generations older than what RAPIDS supports. Thanks, Robert, for working with Maggie to figure that out!
I wouldn't try to force RAPIDS to work on Kepler when there are so many alternative ways to provision a GPU - even free options for trial purposes!
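If you want to confirm the card's architecture from Python, here is a minimal sketch (assuming numba, which ships as a cuDF dependency, can still see the device):
# Hedged check of the GPU's compute capability; RAPIDS 0.13 generally
# needs Pascal or newer (compute capability 6.0+), so a Kepler card
# will report something like (3, 5) here.
from numba import cuda
dev = cuda.get_current_device()
print("GPU:", dev.name)
print("Compute capability:", dev.compute_capability)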
If you are still interested in trying out RAPIDS and only need a single GPU, please look at our Google Colab notebooks and setup script, or app.blazingsql.com. They are shared instances, with Colab allowing you more customization of your workspace if you need to install more packages, and BlazingSQL having the fastest "get up and running" time.
If you feel that you need more than one GPU, you move into the paid realm and can provision one with any major cloud provider and install the RHEL version of your choice (we only officially support RHEL 7, though).
Does that help you?

Exporting my Tensorflow model and code to a different PC

I have referred to a number of tutorials and built an object detection model using Faster-RCNN in an Anaconda virtual environment. Now I want to showcase this model, but I run into problems when I run it on a different system without Anaconda; I tried running it from CMD. In fact, it doesn't run at all.
I have done my research on exporting the model but hit a deadend each time.
I use Anaconda Prompt + Windows 10 + NVIDIA GPU + tensorflow-gpu==1.5 to run the model on my dedicated system.
I would like to know how I can export this to a different PC which doesn't have a GPU or Anaconda installed. Or is my approach completely wrong, and do I need all the dependencies used when I run it on my system?
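For the TensorFlow side, here is a minimal sketch (not the Object Detection API's own export script; export_dir and the variable below are placeholders) of saving a TF 1.x graph as a SavedModel that can be copied to another machine and loaded with the plain CPU build of TensorFlow:
import tensorflow as tf
export_dir = "exported_model"
with tf.Session(graph=tf.Graph()) as sess:
    v = tf.Variable([1.0], name="v")  # placeholder standing in for your trained weights
    sess.run(tf.global_variables_initializer())
    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING])
    builder.save()
# On the target PC, install the CPU build of TensorFlow and load it back:
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], export_dir)
You would still need to capture the Python dependencies (for example with pip freeze > requirements.txt) so the target PC can recreate the environment without Anaconda.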

How to check whether a computer is capable of running TensorFlow GPU, and how to install the TensorFlow GPU version

First, I am not sure my computer (MacBook Pro) is capable of running TensorFlow on GPU. I have checked the System/Hardware/Graphics-Display settings. You can find the related info below.
Graphics: Intel Iris Pro 1536 MB
So, is it possible to run TensorFlow on GPU with these graphics capabilities?
Second question: I am using a conda environment to install TensorFlow. On the TensorFlow installation page I could only find a pip package for CPU. How can I install the TensorFlow GPU package? TensorFlow installation page:
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.3.0-py3-none-any.whl
The newest version of TensorFlow does not have GPU support for macOS, as seen here.
I haven't found any way to run my application (which uses TensorFlow) with GPU support on my Mac. It has an "Intel Iris Graphics 6100" graphics card.
This report (Can I run CUDA or OpenCL on Intel Iris?) says that only NVIDIA graphics cards have CUDA support, so I likely won't be able to.
But I have installed tensorflow-gpu without problems by following this guide:
https://wangpei.ink/2019/03/29/Install-TensorFlow-GPU-by-Anaconda(conda-install-tensorflow-gpu)/
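To check from Python whether the installed TensorFlow build can actually see a GPU, a minimal sketch that works on TF 1.x is:
from tensorflow.python.client import device_lib
# On a Mac with Intel Iris graphics only CPU devices will be listed,
# since CUDA requires an NVIDIA card.
for dev in device_lib.list_local_devices():
    print(dev.name, dev.device_type)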
