Importing theano with GPU device on Windows conda - python

I'm having some trouble with a workstation running Conda for Windows. I'm not too familiar with the OS, and this is the first time I've tried GPU support for Theano there, so far to no avail.
The thing is, when I use an Anaconda shell, I can run this:
set "MKL_THREADING_LAYER=GNU"
set THEANO_FLAGS=device=cuda
python
import theano
This works fine, with GPU support. However, I need the script to switch between devices (GPUs and CPU) during execution. I read somewhere that this can be done by setting the environment variables directly in the code, but I tried the following to no avail:
import os
os.environ["THEANO_FLAGS"] = "device=cuda"
import theano
The MKL_THREADING_LAYER environment variable is already set system-wide, so I don't think the error is there. In any case, the code fails with:
RuntimeError: 'path' must be None or a list, not <class '_frozen_importlib_external._NamespacePath'>
Any ideas? Thanks.
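For reference, the in-code equivalent of the shell commands above is usually written as below. This is only a sketch of the flag-setting part; it does not by itself explain the NamespacePath RuntimeError, which looks more like a version incompatibility between Theano and the installed Python/setuptools than a flag problem.

```python
import os

# Both variables must be in os.environ before the first `import theano`:
# Theano reads THEANO_FLAGS once at import time, so changing the variable
# after the import has no effect on the already-loaded module.
os.environ["MKL_THREADING_LAYER"] = "GNU"
os.environ["THEANO_FLAGS"] = "device=cuda"

# import theano  # left commented out: requires a machine with a configured GPU
```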

Related

CUDA_HOME environment variable is not set

I have a working environment for PyTorch deep learning with GPU, and I ran into a problem when I tried using mmcv.ops.point_sample, which returned:
ModuleNotFoundError: No module named 'mmcv._ext'
I have read that you should actually use mmcv-full to solve it, but I got another error when I tried to install it:
pip install mmcv-full
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
That seems logical enough, since I never installed CUDA on my Ubuntu machine (I am not the administrator), yet it still ran deep learning training fine on models I built myself; I'm guessing the package ships with the minimal code required for running CUDA tensor operations.
So my main question is: where is CUDA installed when it is used through the PyTorch package, and can I use that path as the CUDA_HOME environment variable?
Additionally, if anyone knows good sources for gaining insight into the internals of CUDA with PyTorch/TensorFlow, I'd like to take a look (I have been reading the CUDA Toolkit documentation, which is nice, but it seems targeted more at C++ CUDA developers than at the interaction between Python and the library).
You can check the installation and its paths with these commands:
which nvidia-smi
which nvcc
cat /usr/local/cuda/version.txt
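The same lookup can be done from Python. The sketch below mimics the search order build tools typically use (it is an illustration, not mmcv's actual logic). Note that a conda-provided cudatoolkit, which is what PyTorch bundles, usually ships the runtime libraries but not nvcc, which is why compiling mmcv-full still needs a full CUDA install.

```python
import os
import shutil


def guess_cuda_home():
    """Best-effort guess at the CUDA install root (a sketch).
    Order: CUDA_HOME env var, then CUDA_PATH, then the directory
    containing nvcc, then the conventional /usr/local/cuda."""
    env = os.environ.get("CUDA_HOME") or os.environ.get("CUDA_PATH")
    if env:
        return env
    nvcc = shutil.which("nvcc")  # same idea as `which nvcc`
    if nvcc:
        # nvcc lives in <cuda_root>/bin/nvcc, so strip two components
        return os.path.dirname(os.path.dirname(nvcc))
    default = "/usr/local/cuda"
    return default if os.path.isdir(default) else None
```

If PyTorch is importable, `from torch.utils.cpp_extension import CUDA_HOME` reports the path PyTorch's own extension builder would use, which may also answer the question directly.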

Jupyter notebook : Import error: DLL load failed (but works on .py) without Anaconda

I was trying to use CuPy inside a Jupyter Notebook on Windows 10 and got this error:
---> from cupy_backends.cuda.libs import nvrtc
ImportError: DLL load failed while importing nvrtc: The specified procedure could not be found.
This is triggered by import cupy.
I know there are a bunch of threads about similar issues (DLLs not found by Jupyter under Windows), but every one of them relies on conda, which I'm not using anymore.
I checked os.environ['CUDA_PATH'], which is set to C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.6 and is the right path.
Also, os.environ['PATH'] contains C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.6\\bin which is where the DLL is located.
I fixed it once by running pip install -U notebook, but it started failing again after restarting Jupyter. Running the same command again (even with --force-reinstall) did not help.
I have no problems with CuPy when using a shell or a regular Python IDE. I could use workarounds like executing CuPy based commands outside Jupyter for myself, but that would go against using notebooks for pedagogy and examples, which is my major use of notebooks.
Does anyone have a fix for this that doesn't rely on conda?
The error was showing up because I was importing PyTorch before CuPy.
The solution was to import cupy before torch.
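When debugging this kind of "DLL load failed" error, it can help to verify which PATH entries actually contain the file you expect, and to compare the result between the working shell and the failing Jupyter kernel. A generic sketch (the exact nvrtc DLL file name depends on your CUDA version, so treat any name you pass in as an example):

```python
import os


def path_dirs_containing(filename, path=None):
    """Return the PATH entries that contain `filename`.
    Useful for checking whether the directory holding a DLL is
    actually visible to the process that launched Jupyter."""
    search = path if path is not None else os.environ.get("PATH", "")
    entries = search.split(os.pathsep)
    return [d for d in entries if d and os.path.isfile(os.path.join(d, filename))]
```

Running this in both environments shows whether Jupyter inherited a different PATH than the shell did, which is a common cause of imports working in one and not the other.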

Importing PyTorch before using pandas.DataFrame.plot() causes Jupyter kernel to crash

I'm running windows 10 with the following library versions:
matplotlib 3.3.3
torch 1.7.0
pandas 1.1.4
When I load a dataframe from CSV and plot its data BEFORE importing torch, I get no issues. However, if I put all of my import statements at the top of the notebook, as is tradition, I instead get a crashed kernel with the following pop-up message:
The kernel for anom_detect_nn.ipynb appears to have died. It will
restart automatically.
When I look at my shell, I see two error messages:
OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll
already initialized.
OMP: Hint This means that multiple copies of the
OpenMP runtime have been linked into the program. That is dangerous,
since it can degrade performance or cause incorrect results. The best
thing to do is to ensure that only a single OpenMP runtime is linked
into the process, e.g. by avoiding static linking of the OpenMP
runtime in any library. As an unsafe, unsupported, undocumented
workaround you can set the environment variable
KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute,
but that may cause crashes or silently produce incorrect results. For
more information, please see
http://www.intel.com/software/products/support/.
It seems like other users on SO have experienced this before, but all of the solutions apply to MacOS users. I've tried them anyway:
conda install nomkl
pip uninstall everything, pip install everything
I did not use the dangerous workaround KMP_DUPLICATE_LIB_OK=TRUE
Thanks to the above steps, my entire setup is now a tangled mess where I can't pip install anything successfully, and modules that do install successfully can no longer be imported without ModuleNotFoundError.
This is a real pain, and I'm at my wits' end. I'm currently uninstalling everything Python on my system and starting over. Not a happy camper. Any solutions that are for Windows and not Mac, should this problem persist when I start over?
Same issue. This worked for me:
import matplotlib.pyplot as plt before importing torch
use imshow immediately
import torch
use imshow again (and it doesn't crash)
I don't know why, but it works...
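For completeness, the unsafe workaround described in the OMP error message itself (and deliberately avoided by the question author) amounts to one assignment before the conflicting import; a minimal sketch:

```python
import os

# Unsafe per the OMP message: may crash or silently produce wrong results.
# Must be set before the second OpenMP runtime is loaded, i.e. before the
# later of the two conflicting imports (torch vs. matplotlib/MKL).
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
```

The import-order trick above works for the same underlying reason: it changes which copy of the OpenMP runtime gets initialized first.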

Setting of backend for keras

I need to change the keras backend from default tensorflow to theano. But my default python version is 3.7, which does not seem to work with keras (the import line crashes). So, I first had to create a specific environment.
After creating a specific Python environment with Anaconda, as suggested by 47263006, I did the following:
vi ~/.keras/keras.json (and change the backend name in it)
But with a virtualenv, editing the keras.json file had no effect. So, I resorted to the following solution in the python code:
import os
os.environ['KERAS_BACKEND'] = 'theano'
So I thought that maybe the latter is the more generic solution, and I tried to use it with my Anaconda env, but to my surprise, that did not work there.
So my current solution is: for Anaconda, edit the keras.json file; for virtualenv, use os.environ.
Is there a more generic solution for setting keras backend which will work for both conda and virtualenv?
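It may help to check what backend Keras would actually pick up in a given environment. The sketch below mirrors (but does not call) the lookup order Keras documents: the KERAS_BACKEND environment variable overrides the keras.json file, and the file's location can itself be moved with KERAS_HOME. In both cases the setting only matters if it is in place before the first `import keras`.

```python
import json
import os


def effective_keras_backend():
    """Report the backend Keras would select (a sketch of its lookup
    order, not a call into Keras itself). The KERAS_BACKEND environment
    variable wins over ~/.keras/keras.json; KERAS_HOME relocates the
    config directory."""
    env = os.environ.get("KERAS_BACKEND")
    if env:
        return env
    keras_home = os.environ.get("KERAS_HOME", os.path.expanduser("~/.keras"))
    cfg = os.path.join(keras_home, "keras.json")
    if os.path.isfile(cfg):
        with open(cfg) as f:
            return json.load(f).get("backend", "tensorflow")
    return "tensorflow"  # Keras's documented default
```

Because the environment variable takes precedence in both conda and virtualenv environments, setting os.environ['KERAS_BACKEND'] before the first import is the closest thing to a generic solution; if it appears not to work, check that nothing imports keras earlier in the process.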

How to use tensorflow in pycharm?

I have installed TensorFlow on my Ubuntu machine; I can import it from a shell, but I can't from PyCharm. Here is my screen:
How can I solve this?
You should name your project differently: right now you are trying to import the package tensorflow from inside a project that is itself named tensorflow, so Python finds your project instead of the installed package.
Try renaming your working directory to tensorflow_test or something other than tensorflow.
You have to change the Python interpreter that PyCharm is using.
Go to Options -> Project -> Project Interpreter and select the correct one (most likely a virtualenv you created where you installed TensorFlow) instead of the default.
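Both answers can be checked with a quick diagnostic: ask the interpreter where an import would resolve from. A small sketch:

```python
import importlib.util


def module_location(name):
    """Where would `import name` load from?
    If the path points into your own project directory, a local package
    is shadowing the installed one (the project-name clash described
    above); if it returns None, the current interpreter simply does not
    have the package installed (the wrong-interpreter case)."""
    spec = importlib.util.find_spec(name)
    return getattr(spec, "origin", None) if spec else None
```

Run `module_location("tensorflow")` inside PyCharm's Python console: a path into your project means rename the project; None means fix the interpreter.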
