I have created a Python 3.7 conda virtual environment and installed the following packages using this command:
conda install pytorch torchvision torchaudio cudatoolkit=11.3 matplotlib scipy opencv -c pytorch
The packages install fine, but when I come to run my program I get the following error, which suggests that a CUDA-enabled device is not found:
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
I have an NVIDIA RTX 3060 Ti GPU, which as far as I am aware is CUDA enabled, but whenever I go into the Python interactive shell within my conda environment, torch.cuda.is_available() evaluates to False, suggesting that perhaps CUDA is not installed properly or cannot be found.
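For reference, the CPU-only fallback that the error message suggests would look roughly like this (a minimal sketch; checkpoint.pth is just a placeholder path), although what I actually want is to use the GPU:
```
import torch

# Placeholder checkpoint path; map_location forces all tensors onto the CPU
# when no CUDA device is visible to PyTorch.
state = torch.load("checkpoint.pth", map_location=torch.device("cpu"))
```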
When I then run conda list to view my installed packages:
# packages in environment at /home/user/anaconda3/envs/FGVC:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 4.5 1_gnu
blas 1.0 mkl
brotli 1.0.9 he6710b0_2
bzip2 1.0.8 h7b6447c_0
ca-certificates 2021.10.26 h06a4308_2
cairo 1.16.0 hf32fb01_1
certifi 2021.10.8 py37h06a4308_2
cpuonly 1.0 0 pytorch
cudatoolkit 11.3.1 h2bc3f7f_2
cycler 0.11.0 pyhd3eb1b0_0
dbus 1.13.18 hb2f20db_0
expat 2.4.4 h295c915_0
ffmpeg 4.0 hcdf2ecd_0
fontconfig 2.13.1 h6c09931_0
fonttools 4.25.0 pyhd3eb1b0_0
freeglut 3.0.0 hf484d3e_5
freetype 2.11.0 h70c0345_0
giflib 5.2.1 h7b6447c_0
glib 2.69.1 h4ff587b_1
graphite2 1.3.14 h23475e2_0
gst-plugins-base 1.14.0 h8213a91_2
gstreamer 1.14.0 h28cd5cc_2
harfbuzz 1.8.8 hffaf4a1_0
hdf5 1.10.2 hba1933b_1
icu 58.2 he6710b0_3
imageio 2.16.0 pypi_0 pypi
imageio-ffmpeg 0.4.5 pypi_0 pypi
imutils 0.5.4 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561
jasper 2.0.14 hd8c5072_2
jpeg 9d h7f8727e_0
kiwisolver 1.3.2 py37h295c915_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.35.1 h7274673_9
libffi 3.3 he6710b0_2
libgcc-ng 9.3.0 h5101ec6_17
libgfortran-ng 7.5.0 ha8ba4b0_17
libgfortran4 7.5.0 ha8ba4b0_17
libglu 9.0.0 hf484d3e_1
libgomp 9.3.0 h5101ec6_17
libopencv 3.4.2 hb342d67_1
libopus 1.3.1 h7b6447c_0
libpng 1.6.37 hbc83047_0
libstdcxx-ng 9.3.0 hd4cf53a_17
libtiff 4.2.0 h85742a9_0
libuuid 1.0.3 h7f8727e_2
libuv 1.40.0 h7b6447c_0
libvpx 1.7.0 h439df22_0
libwebp 1.2.0 h89dd481_0
libwebp-base 1.2.0 h27cfd23_0
libxcb 1.14 h7b6447c_0
libxml2 2.9.12 h03d6c58_0
lz4-c 1.9.3 h295c915_1
matplotlib 3.5.0 py37h06a4308_0
matplotlib-base 3.5.0 py37h3ed280b_0
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py37h7f8727e_0
mkl_fft 1.3.1 py37hd3c417c_0
mkl_random 1.2.2 py37h51133e4_0
munkres 1.1.4 py_0
ncurses 6.3 h7f8727e_2
networkx 2.6.3 pypi_0 pypi
ninja 1.10.2 py37hd09550d_3
numpy 1.21.2 py37h20f2e39_0
numpy-base 1.21.2 py37h79a1101_0
olefile 0.46 py37_0
opencv 3.4.2 py37h6fd60c2_1
openssl 1.1.1m h7f8727e_0
packaging 21.3 pyhd3eb1b0_0
pcre 8.45 h295c915_0
pillow 8.4.0 py37h5aabda8_0
pip 21.2.2 py37h06a4308_0
pixman 0.40.0 h7f8727e_1
py-opencv 3.4.2 py37hb342d67_1
pyparsing 3.0.4 pyhd3eb1b0_0
pyqt 5.9.2 py37h05f1152_2
python 3.7.11 h12debd9_0
python-dateutil 2.8.2 pyhd3eb1b0_0
pytorch 1.7.0 py3.7_cpu_0 [cpuonly] pytorch
pywavelets 1.2.0 pypi_0 pypi
qt 5.9.7 h5867ecd_1
readline 8.1.2 h7f8727e_1
scikit-image 0.19.1 pypi_0 pypi
scipy 1.7.3 py37hc147768_0
setuptools 58.0.4 py37h06a4308_0
sip 4.19.8 py37hf484d3e_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.37.2 hc218d9a_0
tifffile 2021.11.2 pypi_0 pypi
tk 8.6.11 h1ccaba5_0
torchaudio 0.7.0 py37 pytorch
torchvision 0.8.1 py37_cpu [cpuonly] pytorch
tornado 6.1 py37h27cfd23_0
typing_extensions 3.10.0.2 pyh06a4308_0
wheel 0.37.1 pyhd3eb1b0_0
xz 5.2.5 h7b6447c_0
zlib 1.2.11 h7f8727e_4
zstd 1.4.9 haebb681_0
There seem to be a lot of entries marked cpuonly, but I am not sure how they got there, since I did not install them.
I am running Ubuntu 20.04.4 LTS.
I ran into a similar problem when I tried to install PyTorch with CUDA 11.1. Although the Anaconda site explicitly lists a pre-built version of PyTorch with CUDA 11.1 as available, conda still tried to install the CPU-only version. After a lot of trial and error, I realized that the torchvision and torchaudio packages were the root cause of the problem. So installing just PyTorch fixes this:
conda install pytorch cudatoolkit=11.1 -c pytorch -c nvidia
You can ask conda to install a specific build of your required package. PyTorch builds that support CUDA have the phrase cuda somewhere in their build string, so you can ask conda to match that spec. For more information, have a look at conda's package match spec.
$ conda install pytorch=*=*cuda* cudatoolkit -c pytorch
I believe the following things were wrong and prevented me from using CUDA. Despite CUDA being installed, the nvcc --version command indicated that it was not installed, so I added it to my PATH using this answer.
Despite doing that, deleting my original conda environment, and running the conda install pytorch torchvision torchaudio cudatoolkit=11.3 matplotlib scipy opencv -c pytorch command again, I still got False when evaluating torch.cuda.is_available().
I then used the command conda install pytorch torchvision torchaudio cudatoolkit=10.2 matplotlib scipy opencv -c pytorch, changing cudatoolkit from version 11.3 to version 10.2, and then it worked!
Now torch.cuda.is_available() evaluates to True.
Unfortunately, CUDA version 10.2 was incompatible with my RTX 3060 GPU (and I'm assuming it is not compatible with any RTX 3000-series card). CUDA version 11.0 was giving me errors, and CUDA version 11.3 only installed the CPU-only versions for some reason. CUDA version 11.1 worked perfectly, though!
This is the command I used to get it to work in the end:
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
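A quick sanity check after that install (a minimal sketch using standard PyTorch attributes; the device index 0 assumes a single GPU):
```
import torch

print(torch.__version__)          # should report 1.9.0+cu111 for this install
print(torch.cuda.is_available())  # expected: True
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. the RTX 3060 Ti
```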
If there is nothing wrong with your NVIDIA driver setup, maybe you are missing the nvidia channel in your installation arguments. The PyTorch documentation helped me generate this command, which eventually solved my problem:
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
Installing Jupyter inside conda's virtual environment solved my issue. I was having the same problem: PyTorch with CUDA was installed and !nvidia-smi showed the GPU, but inside a Jupyter notebook only the CPU was available.
From the command line torch could find CUDA, but from Jupyter it could not. So I just ran pip install jupyter inside the conda virtual environment, and after that the problem was solved.
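To confirm which interpreter a notebook is actually using, a minimal check you can run in a cell (standard library only):
```
import sys

# The path should point inside the conda environment,
# e.g. .../envs/<env-name>/bin/python, not the base installation.
print(sys.executable)
```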
Using the exact command from the PyTorch website works for me:
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=10.2 -c pytorch
But if I use
conda install pytorch==1.12.1 torchvision==0.13.1 cudatoolkit=10.2 -c pytorch
without installing torchaudio, it installs the CPU versions of pytorch and torchvision. I found this interesting and don't know why.
I'm working on applying a clustering algorithm (sklearn.cluster.AgglomerativeClustering) to a dataset. I've tried running the same block of code in the Spyder IDE and in VS Code; each time the cell runs for about 35-45 s and then returns a message that the kernel crashed unexpectedly and a new kernel was created. I'm using Python 3.10 with the Anaconda package manager.
Spyder had no information about the kernel crash, but in VS Code I was pointed to this GitHub post: Kernel-crashes. I figured my numpy installation was somehow affecting kernel performance or execution, so I created a new virtual environment using conda and re-installed numpy, pandas, scikit-learn, etc. The same kernel crash occurred with the same message.
I am using a new MacBook with the M1 chip. I'm unsure whether that has an influence or is a hint at how to solve this issue. I did see that the scikit-learn installation docs have a separate section about installing on the M1 chip, but to be honest I wasn't exactly sure how to interpret and use that information.
Below is the code I'm trying to run that causes the kernel to crash. pca_features is the numpy array returned after running the .fit_transform method on my raw data. PCA executed fine, for what it's worth, and showed no issues. I tried using a subset of rows (the first 5000) to see if that helped the cell run at all, but still no luck.
Error Message:
The Kernel crashed while executing code in the the current cell or a previous cell. Please review the code in the cell(s) to identify a possible cause of the failure. Click here for more info. View Jupyter log for further details.
Canceled future for execute_request message before replies were done
I'm open to other clustering algorithms, but the fact that I can't get this to execute makes me less confident that any others (e.g. KMeans) would run successfully. But I could be wrong.
My goal is to fit an unsupervised clustering algorithm to the data so I can get labels for each observation and compare them against the principal components from PCA.
```
from sklearn.cluster import AgglomerativeClustering

# Ward linkage with Euclidean distances on the PCA-transformed features
aggclus = AgglomerativeClustering(n_clusters=4, affinity="euclidean", linkage="ward")
# Smaller test case: first 5000 rows, all PCA components
subset_pcafeatures = pca_features[:5000, :]
# Crashes the kernel on both the full array and the subset
cluster_labels = aggclus.fit_predict(pca_features)
```
Any tips/advice/help/assistance would be much appreciated. Thank you
Updated 29-01-23
output from $ conda list
# packages in environment at ../anaconda3/envs/work-env:
#
# Name Version Build Channel
appnope 0.1.3 pyhd8ed1ab_0 conda-forge
asttokens 2.2.1 pyhd8ed1ab_0 conda-forge
backcall 0.2.0 pyh9f0ad1d_0 conda-forge
backports 1.0 pyhd8ed1ab_3 conda-forge
backports.functools_lru_cache 1.6.4 pyhd8ed1ab_0 conda-forge
blas 2.116 openblas conda-forge
blas-devel 3.9.0 16_osxarm64_openblas conda-forge
bottleneck 1.3.5 py310h96f19d2_0
brotli 1.0.9 h1a28f6b_7
brotli-bin 1.0.9 h1a28f6b_7
bzip2 1.0.8 h620ffc9_4
ca-certificates 2023.01.10 hca03da5_0
certifi 2022.12.7 py310hca03da5_0
comm 0.1.2 pyhd8ed1ab_0 conda-forge
contourpy 1.0.5 py310h525c30c_0
cycler 0.11.0 pyhd3eb1b0_0
debugpy 1.5.1 py310hc377ac9_0
decorator 5.1.1 pyhd8ed1ab_0 conda-forge
entrypoints 0.4 pyhd8ed1ab_0 conda-forge
executing 1.2.0 pyhd8ed1ab_0 conda-forge
fftw 3.3.9 h1a28f6b_1
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.12.1 h1192e45_0
giflib 5.2.1 h80987f9_1
ipykernel 6.20.2 pyh736e0ef_0 conda-forge
ipython 8.8.0 pyhd1c38e8_0 conda-forge
jedi 0.18.2 pyhd8ed1ab_0 conda-forge
joblib 1.1.1 py310hca03da5_0
jpeg 9e h1a28f6b_0
jupyter_client 7.4.9 pyhd8ed1ab_0 conda-forge
jupyter_core 5.1.1 py310hca03da5_0
kiwisolver 1.4.4 py310h313beb8_0
lcms2 2.12 hba8e193_0
lerc 3.0 hc377ac9_0
libblas 3.9.0 16_osxarm64_openblas conda-forge
libbrotlicommon 1.0.9 h1a28f6b_7
libbrotlidec 1.0.9 h1a28f6b_7
libbrotlienc 1.0.9 h1a28f6b_7
libcblas 3.9.0 16_osxarm64_openblas conda-forge
libcxx 14.0.6 h848a8c0_0
libdeflate 1.8 h1a28f6b_5
libffi 3.4.2 hca03da5_6
libgfortran 5.0.0 11_3_0_hca03da5_28
libgfortran5 11.3.0 h009349e_28
liblapack 3.9.0 16_osxarm64_openblas conda-forge
liblapacke 3.9.0 16_osxarm64_openblas conda-forge
libopenblas 0.3.21 openmp_hc731615_3 conda-forge
libpng 1.6.37 hb8d0fd4_0
libsodium 1.0.18 h27ca646_1 conda-forge
libtiff 4.5.0 h2fd578a_0
libwebp 1.2.4 h68602c7_0
libwebp-base 1.2.4 h1a28f6b_0
llvm-openmp 14.0.6 hc6e5704_0
lz4-c 1.9.4 h313beb8_0
matplotlib 3.6.2 py310hca03da5_0
matplotlib-base 3.6.2 py310h8bbb115_0
matplotlib-inline 0.1.6 pyhd8ed1ab_0 conda-forge
missingno 0.4.2 pyhd3eb1b0_1
munkres 1.1.4 py_0
ncurses 6.4 h313beb8_0
nest-asyncio 1.5.6 pyhd8ed1ab_0 conda-forge
numexpr 2.8.4 py310hecc3335_0
numpy 1.23.5 py310hb93e574_0
numpy-base 1.23.5 py310haf87e8b_0
openblas 0.3.21 openmp_hf78f355_3 conda-forge
openssl 1.1.1s h1a28f6b_0
packaging 23.0 pyhd8ed1ab_0 conda-forge
pandas 1.5.2 py310h46d7db6_0
parso 0.8.3 pyhd8ed1ab_0 conda-forge
pexpect 4.8.0 pyh1a96a4e_2 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pillow 9.3.0 py310hf4a492f_1
pip 22.3.1 py310hca03da5_0
platformdirs 2.6.2 pyhd8ed1ab_0 conda-forge
prompt-toolkit 3.0.36 pyha770c72_0 conda-forge
psutil 5.9.0 py310h1a28f6b_0
ptyprocess 0.7.0 pyhd3deb0d_0 conda-forge
pure_eval 0.2.2 pyhd8ed1ab_0 conda-forge
pygments 2.14.0 pyhd8ed1ab_0 conda-forge
pyparsing 3.0.9 py310hca03da5_0
python 3.10.9 hc0d8a6c_0
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
pytz 2022.7 py310hca03da5_0
pyzmq 23.2.0 py310hc377ac9_0
readline 8.2 h1a28f6b_0
scikit-learn 1.2.0 py310h313beb8_0
scipy 1.9.3 py310h20cbe94_0
seaborn 0.12.2 py310hca03da5_0
setuptools 65.6.3 py310hca03da5_0
six 1.16.0 pyh6c4a22f_0 conda-forge
sqlite 3.40.1 h7a7dc30_0
stack_data 0.6.2 pyhd8ed1ab_0 conda-forge
threadpoolctl 2.2.0 pyh0d69192_0
tk 8.6.12 hb8d0fd4_0
tornado 6.2 py310h1a28f6b_0
traitlets 5.8.1 pyhd8ed1ab_0 conda-forge
typing-extensions 4.4.0 hd8ed1ab_0 conda-forge
typing_extensions 4.4.0 pyha770c72_0 conda-forge
tzdata 2022g h04d1e81_0
wcwidth 0.2.6 pyhd8ed1ab_0 conda-forge
wheel 0.37.1 pyhd3eb1b0_0
xz 5.2.10 h80987f9_1
zeromq 4.3.4 hbdafb3b_1 conda-forge
zlib 1.2.13 h5a0b063_0
zstd 1.5.2 h8574219_0
I am trying to install pytorch-geometric for a deep-learning project. Torch-sparse is throwing segmentation faults when I attempt to import it (see below). Initially I tried different versions of each required library, as I thought it might be a GPU issue, but I've since tried to simplify by installing cpu-only versions.
Python 3.9.12 (main, Apr 5 2022, 06:56:58)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> import torch_scatter
>>> import torch_cluster
>>> import torch_sparse
Segmentation fault (core dumped)
And the same issue, presumably due to torch_sparse, when importing torch_geometric:
Python 3.9.12 (main, Apr 5 2022, 06:56:58)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch_geometric
Segmentation fault (core dumped)
I'm on an Ubuntu distribution:
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
Here are my (lightweight for DL) conda installs:
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
blas 1.0 mkl
brotlipy 0.7.0 py310h7f8727e_1002
bzip2 1.0.8 h7b6447c_0
ca-certificates 2022.07.19 h06a4308_0
certifi 2022.9.24 py310h06a4308_0
cffi 1.15.1 py310h74dc2b5_0
charset-normalizer 2.0.4 pyhd3eb1b0_0
cpuonly 2.0 0 pytorch
cryptography 37.0.1 py310h9ce1e76_0
fftw 3.3.9 h27cfd23_1
idna 3.4 py310h06a4308_0
intel-openmp 2021.4.0 h06a4308_3561
jinja2 3.0.3 pyhd3eb1b0_0
joblib 1.1.1 py310h06a4308_0
ld_impl_linux-64 2.38 h1181459_1
libffi 3.3 he6710b0_2
libgcc-ng 11.2.0 h1234567_1
libgfortran-ng 11.2.0 h00389a5_1
libgfortran5 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libstdcxx-ng 11.2.0 h1234567_1
libuuid 1.0.3 h7f8727e_2
markupsafe 2.1.1 py310h7f8727e_0
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py310h7f8727e_0
mkl_fft 1.3.1 py310hd6ae3a3_0
mkl_random 1.2.2 py310h00e6091_0
ncurses 6.3 h5eee18b_3
numpy 1.23.3 py310hd5efca6_0
numpy-base 1.23.3 py310h8e6c178_0
openssl 1.1.1q h7f8727e_0
pip 22.2.2 py310h06a4308_0
pycparser 2.21 pyhd3eb1b0_0
pyg 2.1.0 py310_torch_1.12.0_cpu pyg
pyopenssl 22.0.0 pyhd3eb1b0_0
pyparsing 3.0.9 py310h06a4308_0
pysocks 1.7.1 py310h06a4308_0
python 3.10.6 haa1d7c7_0
pytorch 1.12.1 py3.10_cpu_0 pytorch
pytorch-cluster 1.6.0 py310_torch_1.12.0_cpu pyg
pytorch-mutex 1.0 cpu pytorch
pytorch-scatter 2.0.9 py310_torch_1.12.0_cpu pyg
pytorch-sparse 0.6.15 py310_torch_1.12.0_cpu pyg
readline 8.1.2 h7f8727e_1
requests 2.28.1 py310h06a4308_0
scikit-learn 1.1.2 py310h6a678d5_0
scipy 1.9.1 py310hd5efca6_0
setuptools 63.4.1 py310h06a4308_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.39.3 h5082296_0
threadpoolctl 2.2.0 pyh0d69192_0
tk 8.6.12 h1ccaba5_0
tqdm 4.64.1 py310h06a4308_0
typing_extensions 4.3.0 py310h06a4308_0
tzdata 2022e h04d1e81_0
urllib3 1.26.12 py310h06a4308_0
wheel 0.37.1 pyhd3eb1b0_0
xz 5.2.6 h5eee18b_0
zlib 1.2.13 h5eee18b_0
Any help would be greatly appreciated!
I've found a combination of packages that works for me - hopefully someone else will have this issue at some point and be able to reproduce the steps from me talking to myself here. The full process for getting stuff working was:
Fresh conda environment with forced Python=3.9 (conda create -n ENVNAME python=3.9)
Activate that environment
Install basic python packages (conda install numpy pandas matplotlib scikit-learn)
Check CUDA version if working with a GPU (nvidia-smi in terminal prints these details for NVIDIA cards)
Install Pytorch using their suggested conda command (conda install pytorch torchvision torchaudio cudatoolkit=CUDA_VERSION -c pytorch -c conda-forge). This had to go through the env solving process on my machine.
Install pytorch geometric (or just torch sparse if that's all you need) with conda install pyg -c pyg. Again this had a solving process.
Check that torch_sparse imports without fault (a quick check is sketched below)
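For that last step, a minimal import check (only the packages already named above; torch.version.cuda prints None on a CPU-only build):
```
# Run inside the new environment; a clean exit means torch_sparse loads correctly.
import torch
import torch_scatter
import torch_cluster
import torch_sparse

print(torch.__version__, torch.version.cuda)
```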
Here's the conda list for this working combination of packages:
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
blas 1.0 mkl
bottleneck 1.3.5 py39h7deecbd_0
brotli 1.0.9 h5eee18b_7
brotli-bin 1.0.9 h5eee18b_7
brotlipy 0.7.0 py39hb9d737c_1004 conda-forge
bzip2 1.0.8 h7f98852_4 conda-forge
ca-certificates 2022.9.24 ha878542_0 conda-forge
certifi 2022.9.24 py39h06a4308_0
cffi 1.14.6 py39he32792d_0 conda-forge
charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge
cryptography 37.0.2 py39hd97740a_0 conda-forge
cudatoolkit 11.6.0 hecad31d_10 conda-forge
cycler 0.11.0 pyhd3eb1b0_0
dbus 1.13.18 hb2f20db_0
expat 2.4.9 h6a678d5_0
ffmpeg 4.3 hf484d3e_0 pytorch
fftw 3.3.9 h27cfd23_1
fontconfig 2.13.1 h6c09931_0
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.11.0 h70c0345_0
giflib 5.2.1 h7b6447c_0
glib 2.69.1 h4ff587b_1
gmp 6.2.1 h58526e2_0 conda-forge
gnutls 3.6.13 h85f3911_1 conda-forge
gst-plugins-base 1.14.0 h8213a91_2
gstreamer 1.14.0 h28cd5cc_2
icu 58.2 he6710b0_3
idna 3.4 pyhd8ed1ab_0 conda-forge
intel-openmp 2021.4.0 h06a4308_3561
jinja2 3.0.3 pyhd3eb1b0_0
joblib 1.1.1 py39h06a4308_0
jpeg 9e h7f8727e_0
kiwisolver 1.4.2 py39h295c915_0
krb5 1.19.2 hac12032_0
lame 3.100 h7f98852_1001 conda-forge
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
lerc 3.0 h295c915_0
libbrotlicommon 1.0.9 h5eee18b_7
libbrotlidec 1.0.9 h5eee18b_7
libbrotlienc 1.0.9 h5eee18b_7
libclang 10.0.1 default_hb85057a_2
libdeflate 1.8 h7f8727e_5
libedit 3.1.20210910 h7f8727e_0
libevent 2.1.12 h8f2d780_0
libffi 3.3 he6710b0_2
libgcc-ng 11.2.0 h1234567_1
libgfortran-ng 11.2.0 h00389a5_1
libgfortran5 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libiconv 1.17 h166bdaf_0 conda-forge
libllvm10 10.0.1 hbcb73fb_5
libpng 1.6.37 hbc83047_0
libpq 12.9 h16c4e8d_3
libstdcxx-ng 11.2.0 h1234567_1
libtiff 4.4.0 hecacb30_0
libuuid 1.0.3 h7f8727e_2
libwebp 1.2.4 h11a3e52_0
libwebp-base 1.2.4 h5eee18b_0
libxcb 1.15 h7f8727e_0
libxkbcommon 1.0.1 hfa300c1_0
libxml2 2.9.14 h74e7548_0
libxslt 1.1.35 h4e12654_0
lz4-c 1.9.3 h295c915_1
markupsafe 2.1.1 py39h7f8727e_0
matplotlib 3.5.2 py39h06a4308_0
matplotlib-base 3.5.2 py39hf590b9c_0
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py39h7f8727e_0
mkl_fft 1.3.1 py39hd3c417c_0
mkl_random 1.2.2 py39h51133e4_0
munkres 1.1.4 py_0
ncurses 6.3 h5eee18b_3
nettle 3.6 he412f7d_0 conda-forge
nspr 4.33 h295c915_0
nss 3.74 h0370c37_0
numexpr 2.8.3 py39h807cd23_0
numpy 1.23.3 py39h14f4228_0
numpy-base 1.23.3 py39h31eccc5_0
openh264 2.1.1 h780b84a_0 conda-forge
openssl 1.1.1q h7f8727e_0
packaging 21.3 pyhd3eb1b0_0
pandas 1.4.4 py39h6a678d5_0
pcre 8.45 h295c915_0
pillow 9.2.0 py39hace64e9_1
pip 22.2.2 py39h06a4308_0
ply 3.11 py39h06a4308_0
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pyg 2.1.0 py39_torch_1.12.0_cu116 pyg
pyopenssl 22.0.0 pyhd8ed1ab_1 conda-forge
pyparsing 3.0.9 py39h06a4308_0
pyqt 5.15.7 py39h6a678d5_1
pyqt5-sip 12.11.0 py39h6a678d5_1
pysocks 1.7.1 pyha2e5f31_6 conda-forge
python 3.9.13 haa1d7c7_2
python-dateutil 2.8.2 pyhd3eb1b0_0
python_abi 3.9 2_cp39 conda-forge
pytorch 1.12.1 py3.9_cuda11.6_cudnn8.3.2_0 pytorch
pytorch-cluster 1.6.0 py39_torch_1.12.0_cu116 pyg
pytorch-mutex 1.0 cuda pytorch
pytorch-scatter 2.0.9 py39_torch_1.12.0_cu116 pyg
pytorch-sparse 0.6.15 py39_torch_1.12.0_cu116 pyg
pytz 2022.1 py39h06a4308_0
qt-main 5.15.2 h327a75a_7
qt-webengine 5.15.9 hd2b0992_4
qtwebkit 5.212 h4eab89a_4
readline 8.2 h5eee18b_0
requests 2.28.1 pyhd8ed1ab_1 conda-forge
scikit-learn 1.1.2 py39h6a678d5_0
scipy 1.9.1 py39h14f4228_0
setuptools 63.4.1 py39h06a4308_0
sip 6.6.2 py39h6a678d5_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.39.3 h5082296_0
threadpoolctl 2.2.0 pyh0d69192_0
tk 8.6.12 h1ccaba5_0
toml 0.10.2 pyhd3eb1b0_0
torchaudio 0.12.1 py39_cu116 pytorch
torchvision 0.13.1 py39_cu116 pytorch
tornado 6.2 py39h5eee18b_0
tqdm 4.64.1 py39h06a4308_0
typing_extensions 4.4.0 pyha770c72_0 conda-forge
tzdata 2022e h04d1e81_0
urllib3 1.26.11 pyhd8ed1ab_0 conda-forge
wheel 0.37.1 pyhd3eb1b0_0
xz 5.2.6 h5eee18b_0
zlib 1.2.13 h5eee18b_0
zstd 1.5.2 ha4553b6_0
I'm trying to import some packages with Spyder (64-bit OS), Anaconda, and Python 3.x.
The error is pretty well known on the internet. The proposed solution is to match the library version (1.10.5) with the HDF5 headers (mine are 1.10.4).
The problem is that I can't find HDF5 version 1.10.5,
and, on the other hand, I cannot work out what I could downgrade instead.
At this link, https://anaconda.org/conda-forge/hdf5, version 1.10.5 seems to exist, but when I type conda install -c conda-forge hdf5 in the Anaconda prompt,
the version remains 1.10.4.
Here is the warning:
Warning! ***HDF5 library version mismatched error***
The HDF5 header files used to compile this application do not match
the version used by the HDF5 library to which this application is linked.
Data corruption or segmentation faults may occur if the application continues.
This can happen when an application was compiled by one version of HDF5 but
linked with a different version of static or shared HDF5 library.
You should recompile the application or check your shared library related
settings such as 'LD_LIBRARY_PATH'.
You can, at your own risk, disable this warning by setting the environment
variable 'HDF5_DISABLE_VERSION_CHECK' to a value of '1'.
Setting it to 2 or higher will suppress the warning messages totally.
Headers are 1.10.4, library is 1.10.5
SUMMARY OF THE HDF5 CONFIGURATION
=================================
General Information:
‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑
HDF5 Version: 1.10.5
Configured on: 2019
Configured by: Visual Studio 15 2017 Win64
Host system: Windows.0.17763
Uname information: Windows
Byte sex: little‑endian
Installation point: C:/Program Files/HDF5
Compiling Options:
‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑
Build Mode:
Debugging Symbols:
Asserts:
Profiling:
Optimization Level:
Linking Options:
‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑
Libraries:
Statically Linked Executables: OFF
LDFLAGS: /machine:x64
H5_LDFLAGS:
AM_LDFLAGS:
Extra libraries:
Archiver:
Ranlib:
Languages:
‑‑‑‑‑‑‑‑‑‑
C: yes
C Compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/Hostx86/x64/cl.exe 19.16.27027.1
CPPFLAGS:
H5_CPPFLAGS:
AM_CPPFLAGS:
CFLAGS: /DWIN32 /D_WINDOWS /W3
H5_CFLAGS:
AM_CFLAGS:
Shared C Library: YES
Static C Library: YES
Fortran: OFF
Fortran Compiler:
Fortran Flags:
H5 Fortran Flags:
AM Fortran Flags:
Shared Fortran Library: YES
Static Fortran Library: YES
C++: ON
C++ Compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/Hostx86/x64/cl.exe 19.16.27027.1
C++ Flags: /DWIN32 /D_WINDOWS /W3 /GR /EHsc
H5 C++ Flags:
AM C++ Flags:
Shared C++ Library: YES
Static C++ Library: YES
JAVA: OFF
JAVA Compiler:
Features:
‑‑‑‑‑‑‑‑‑
Parallel HDF5: OFF
Parallel Filtered Dataset Writes:
Large Parallel I/O:
High‑level library: ON
Threadsafety: OFF
Default API mapping: v110
With deprecated public symbols: ON
I/O filters (external): DEFLATE DECODE ENCODE
MPE:
Direct VFD:
dmalloc:
Packages w/ extra debug output:
API Tracing: OFF
Using memory checker: OFF
Memory allocation sanity checks: OFF
Function Stack Tracing: OFF
Strict File Format Checks: OFF
Optimization Instrumentation:
Bye...
Here are all the packages installed:
# packages in environment at C:\Users\Megaport\Anaconda3\envs\venv:
#
# Name Version Build Channel
_py-xgboost-mutex 2.0 cpu_0
_tflow_select 2.3.0 mkl
absl-py 0.8.0 pypi_0 pypi
alabaster 0.7.12 py37_0
asn1crypto 0.24.0 py37_0
astor 0.8.0 pypi_0 pypi
astroid 2.2.5 py37_0
atomicwrites 1.3.0 py37_1
attrs 19.1.0 py37_1
babel 2.7.0 py_0
backcall 0.1.0 py37_0
blas 1.0 mkl
bleach 3.1.0 py37_0
ca-certificates 2019.5.15 1
certifi 2019.6.16 py37_1
cffi 1.12.3 py37h7a1dbc1_0
chardet 3.0.4 py37_1003
cloudpickle 1.2.1 py_0
colorama 0.4.1 py37_0
cryptography 2.7 py37h7a1dbc1_0
cycler 0.10.0 py37_0
decorator 4.4.0 py37_1
defusedxml 0.6.0 py_0
docutils 0.15.2 py37_0
entrypoints 0.3 py37_0
fastcache 1.1.0 py37he774522_0
freetype 2.9.1 ha9979f8_1
gast 0.2.2 pypi_0 pypi
google-pasta 0.1.7 pypi_0 pypi
grpcio 1.23.0 pypi_0 pypi
h5py 2.10.0 pypi_0 pypi
hdf5 1.10.4 h7ebc959_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha66f8fd_1
idna 2.8 py37_0
imagesize 1.1.0 py37_0
importlib_metadata 0.19 py37_0
intel-openmp 2019.4 245
ipykernel 5.1.2 py37h39e3cac_0
ipython 7.8.0 py37h39e3cac_0
ipython_genutils 0.2.0 py37_0
isort 4.3.21 py37_0
jedi 0.15.1 py37_0
jinja2 2.10.1 py37_0
joblib 0.13.2 py37_0
jpeg 9b hb83a4c4_2
jsonschema 3.0.2 py37_0
jupyter_client 5.3.1 py_0
jupyter_core 4.5.0 py_0
keras 2.2.4 0
keras-applications 1.0.8 py_0
keras-base 2.2.4 py37_0
keras-preprocessing 1.1.0 py_1
keyring 18.0.0 py37_0
kiwisolver 1.1.0 py37ha925a31_0
lazy-object-proxy 1.4.2 py37he774522_0
libmklml 2019.0.5 0
libpng 1.6.37 h2a8f88b_0
libprotobuf 3.8.0 h7bd577a_0
libsodium 1.0.16 h9d3ae62_0
libxgboost 0.90 0
m2w64-gcc-libgfortran 5.3.0 6
m2w64-gcc-libs 5.3.0 7
m2w64-gcc-libs-core 5.3.0 7
m2w64-gmp 6.1.0 2
m2w64-libwinpthread-git 5.0.0.4634.697f757 2
markdown 3.1.1 py37_0
markupsafe 1.1.1 py37he774522_0
matplotlib 3.1.1 py37hc8f65d3_0
mccabe 0.6.1 py37_1
mistune 0.8.4 py37he774522_0
mkl 2019.4 245
mkl-service 2.0.2 py37he774522_0
mkl_fft 1.0.14 py37h14836fe_0
mkl_random 1.0.2 py37h343c172_0
more-itertools 7.2.0 py37_0
mpmath 1.1.0 py37_0
msys2-conda-epoch 20160418 1
nbconvert 5.5.0 py_0
nbformat 4.4.0 py37_0
numpy 1.17.2 pypi_0 pypi
numpy-base 1.16.4 py37hc3f5095_0
numpydoc 0.9.1 py_0
openssl 1.1.1c he774522_1
opt-einsum 3.0.1 pypi_0 pypi
packaging 19.1 py37_0
pandas 0.25.1 py37ha925a31_0
pandoc 2.2.3.2 0
pandocfilters 1.4.2 py37_1
parso 0.5.1 py_0
pickleshare 0.7.5 py37_0
pip 19.2.2 py37_0
pluggy 0.12.0 py_0
prompt_toolkit 2.0.9 py37_0
protobuf 3.9.1 pypi_0 pypi
psutil 5.6.3 py37he774522_0
py 1.8.0 py37_0
py-xgboost 0.90 py37_0
py-xgboost-cpu 0.90 py37_0
pycodestyle 2.5.0 py37_0
pycparser 2.19 py37_0
pyflakes 2.1.1 py37_0
pygments 2.4.2 py_0
pylint 2.3.1 py37_0
pyopenssl 19.0.0 py37_0
pyparsing 2.4.2 py_0
pyqt 5.9.2 py37h6538335_2
pyreadline 2.1 py37_1
pyrsistent 0.14.11 py37he774522_0
pysocks 1.7.0 py37_0
pytest 5.0.1 py37_0
python 3.7.4 h5263a28_0
python-dateutil 2.8.0 py37_0
pytz 2019.2 py_0
pywin32 223 py37hfa6e2cd_1
pyyaml 5.1.2 py37he774522_0
pyzmq 18.1.0 py37ha925a31_0
qt 5.9.7 vc14h73c81de_0
qtawesome 0.5.7 py37_1
qtconsole 4.5.4 py_0
qtpy 1.9.0 py_0
requests 2.22.0 py37_0
rope 0.14.0 py_0
scikit-learn 0.21.2 py37h6288b17_0
scipy 1.3.1 py37h29ff71c_0
setuptools 41.2.0 pypi_0 pypi
sip 4.19.8 py37h6538335_0
six 1.12.0 pypi_0 pypi
snowballstemmer 1.9.0 py_0
sphinx 2.1.2 py_0
sphinxcontrib-applehelp 1.0.1 py_0
sphinxcontrib-devhelp 1.0.1 py_0
sphinxcontrib-htmlhelp 1.0.2 py_0
sphinxcontrib-jsmath 1.0.1 py_0
sphinxcontrib-qthelp 1.0.2 py_0
sphinxcontrib-serializinghtml 1.1.3 py_0
spyder 3.3.6 py37_0
spyder-kernels 0.5.1 py37_0
sqlite 3.29.0 he774522_0
sympy 1.4 py37_0
tb-nightly 1.15.0a20190806 pypi_0 pypi
tensorboard 1.14.0 py37he3c9ec2_0
tensorflow 1.14.0 mkl_py37h7908ca0_0
tensorflow-base 1.14.0 mkl_py37ha978198_0
tensorflow-estimator 1.14.0 py_0
termcolor 1.1.0 pypi_0 pypi
testpath 0.4.2 py37_0
tornado 6.0.3 py37he774522_0
traitlets 4.3.2 py37_0
urllib3 1.24.2 py37_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.16.27012 hf0eaf9b_0
wcwidth 0.1.7 py37_0
webencodings 0.5.1 py37_1
werkzeug 0.15.6 pypi_0 pypi
wheel 0.33.6 pypi_0 pypi
win_inet_pton 1.1.0 py37_0
wincertstore 0.2 py37_0
wrapt 1.11.2 py37he774522_0
yaml 0.1.7 hc54c509_2
zeromq 4.3.1 h33f27b4_3
zipp 0.5.2 py_0
zlib 1.2.11 h62dcd97_3
Anyway, I don't understand why conda reports HDF5 version 1.10.4 while the warning says the HDF5 library is version 1.10.5.
Maybe I am late, but I resolved this problem by upgrading hdf5 to 1.10.5.
On Windows 10, with anaconda you can do this:
conda install -c conda-forge hdf5=1.10.5
I'll leave this here, since this is a top search result for me without a clear answer.
pip uninstall h5py
pip install h5py
If you use conda to install TensorFlow, it installs h5py built against HDF5 1.10.5 but on top installs hdf5 1.10.4, creating a conflict that the pip "juggling" above resolves, since 1.10.4 satisfies the latest h5py.
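To see which HDF5 version your h5py build expects, a minimal check (standard h5py version attributes):
```
import h5py

# h5py package version and the HDF5 library version it was built against.
print(h5py.version.version)
print(h5py.version.hdf5_version)
```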
I had the same problem on Windows 10. Here is what I did:
Install some requirements for TensorFlow > 2.0
https://www.tensorflow.org/install/pip?lang
Create conda virtual environment:
conda create -n ai python==3.7.6
conda activate ai
conda install pandas matplotlib scikit-learn scrapy seaborn
conda install -c anaconda tensorflow
I had the same issue:
Warning! HDF5 library version mismatched error
Headers are 1.10.4, library is 1.10.6
My solution was to make another conda environment and do every conda install with the conda-forge channel.
hdf5 1.10.4 had been installed by the following command on my Windows 10 PC with no GPU (Python 3.7.10):
conda install tensorflow
The above command is what brought in 1.10.4.
So I should have done the following instead:
conda install -c conda-forge tensorflow
Then 1.10.6 was installed.
Using conda-forge with conda install is highly recommended.
I had the same problem as you. It came about because TensorFlow had been installed by conda, and the error disappears when using the anaconda channel:
conda install -c anaconda tensorflow
I actually solved this problem when I realized (on macOS Mojave) that I had used Homebrew to install Octave, which was built to work with HDF5 1.10.5. I first ran into this issue trying to install and run TensorFlow from IPython. I'm not actively using Octave, so I uninstalled Octave as well as HDF5 with
brew uninstall --force octave
brew uninstall hdf5
Then upon re-running
conda install h5py
and subsequently importing TensorFlow from IPython, everything seems to be working.
I have this simple TensorFlow script, sum.py:
import tensorflow as tf
a = tf.Variable(1, name="a")
b = tf.Variable(2, name="b")
f = a + b
tf.print("The sum of a and b is", f)
I am a Windows 10 user with Anaconda 3, TensorFlow 2.0, Jupyter, and Python 3.
I had similar issues and resolved them as follows.
Error:
UserWarning: h5py is running against HDF5 1.10.5 when it was built against 1.10.4, this may cause problems
My environment was messed up with lots of pip installs.
The following video resolved my problem:
https://www.youtube.com/watch?v=RgO8BBNGB8w&t=376s
It uses the tensorflow.yml file, which defines a clean environment:
https://github.com/jeffheaton/t81_558_deep_learning
In the Windows command prompt:
conda env create -v -f tensorflow.yml
Then open the Anaconda prompt:
conda activate tensorflow
python sum.py
or run it in a Jupyter notebook; it runs OK.
This happened to me when I installed tensorflow via
conda install -c conda-forge tensorflow
I resolved it as follows:
I uninstalled h5py and tensorflow by:
pip uninstall h5py
conda uninstall h5py
conda uninstall tensorflow
and reinstalled tensorflow by:
conda install -c anaconda tensorflow
Damn, I had the same error shown in the Anaconda prompt, and the reason, believe me, is really silly.
I was multi-tasking and forgot to activate the environment, which resulted in two different versions of HDF5 being picked up.
Please make sure to run conda activate environment_name before launching the Jupyter notebook.
I cannot import PyTorch in my gpu conda env:
C:\Users\Jeffy\Desktop
$ python
Python 3.7.2 (default, Feb 11 2019, 14:11:50) [MSC v.1915 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\ProgramData\Anaconda3\envs\gpu\lib\site-packages\torch\__init__.py", line 84, in <module>
from torch._C import *
ImportError: DLL load failed: The specified module could not be found.
I have two conda environments: one is gpu, with an external GTX 1050 GPU, and the other is base.
In my base env, I have installed the pytorch-cpu version and it works well.
However, I cannot get the PyTorch GPU version to work in my gpu env.
In my gpu env, I have the following packages installed (including cudnn, intel-openmp, cmake, and so on):
$ conda list
# packages in environment at C:\ProgramData\Anaconda3\envs\gpu:
#
# Name Version Build Channel
absl-py 0.7.0 pypi_0 pypi
astor 0.7.1 pypi_0 pypi
blas 1.0 mkl
ca-certificates 2019.1.23 0
certifi 2018.11.29 py37_0
cffi 1.11.5 py37h74b6da3_1
cmake 3.12.2 he025d50_0
cudatoolkit 10.0.130 0
cudnn 7.3.1 cuda10.0_0
freetype 2.9.1 ha9979f8_1
gast 0.2.2 pypi_0 pypi
grpcio 1.18.0 pypi_0 pypi
h5py 2.9.0 pypi_0 pypi
icc_rt 2019.0.0 h0cc432a_1
intel-openmp 2019.0 pypi_0 pypi
jpeg 9b hb83a4c4_2
keras-applications 1.0.7 pypi_0 pypi
keras-preprocessing 1.0.9 pypi_0 pypi
libpng 1.6.36 h2a8f88b_0
libtiff 4.0.10 hb898794_2
markdown 3.0.1 pypi_0 pypi
mkl 2019.1 144
mkl-include 2019.1 144
mkl_fft 1.0.10 py37h14836fe_0
mkl_random 1.0.2 py37h343c172_0
mock 2.0.0 pypi_0 pypi
ninja 1.8.2.post2 pypi_0 pypi
numpy 1.15.4 py37h19fb1c0_0
numpy-base 1.15.4 py37hc3f5095_0
olefile 0.46 py37_0
openssl 1.1.1a he774522_0
pbr 5.1.2 pypi_0 pypi
pillow 5.4.1 py37hdc69c19_0
pip 19.0.1 py37_0
protobuf 3.6.1 pypi_0 pypi
pycparser 2.19 py37_0
python 3.7.2 h8c8aaf0_2
pytorch 1.0.1 py3.7_cuda100_cudnn7_1 pytorch
pyyaml 3.13 py37hfa6e2cd_0
setuptools 40.7.3 py37_0
six 1.12.0 py37_0
sqlite 3.26.0 he774522_0
tensorboard 1.12.2 pypi_0 pypi
tensorflow-estimator 1.13.0rc0 pypi_0 pypi
tensorflow-gpu 1.13.0rc1 pypi_0 pypi
termcolor 1.1.0 pypi_0 pypi
tk 8.6.8 hfa6e2cd_0
torchvision 0.2.1 py_2 pytorch
typing 3.6.4 py37_0
vc 14.1 h21ff451_1 peterjc123
vs2015_runtime 14.15.26706 h3a45250_0
vs2017_runtime 15.4.27004.2010 1 peterjc123
werkzeug 0.14.1 pypi_0 pypi
wheel 0.32.3 py37_0
wincertstore 0.2 py37_0
xz 5.2.4 h2fa13f4_4
yaml 0.1.7 hc54c509_2
zlib 1.2.11 h62dcd97_3
zstd 1.3.7 h508b16e_0
Assuming CUDA and cuDNN are already installed and the environment variables have been updated, try installing PyTorch using the command
conda install pytorch -c pytorch
or
conda install pytorch torchvision cudatoolkit=10.0.130 -c pytorch
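If the reinstall succeeds and import torch works, a minimal check that a CUDA build was actually picked up (standard torch attributes):
```
import torch

print(torch.__version__)
print(torch.version.cuda)              # None on a CPU-only build
print(torch.backends.cudnn.version())  # cuDNN version bundled with the build
```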
My goal is to use autograd in a Jupyter Notebook on my Windows 7 machine.
Here is what I have done:
I activated a conda environment, in git bash, using source activate myenv
I installed autograd using conda install -c omnia autograd
I started Jupyter notebook with jupyter notebook
Now when I try to import autograd in Jupyter notebook, I have the following error:
No module named 'autograd'
So I stopped the Jupyter notebook and tried to use pip to install it again, but I got this output:
$ pip install autograd
Requirement already satisfied: autograd in c:\users\******\appdata\local\conda\conda\envs\myenv\lib\site-packages (1.1.2)
Requirement already satisfied: numpy>=1.9 in c:\users\******\appdata\local\conda\conda\envs\myenv\lib\site-packages (from autograd) (1.14.5)
Requirement already satisfied: future in c:\users\******\appdata\local\conda\conda\envs\myenv\lib\site-packages (from autograd) (0.16.0)
Apparently, pip thinks autograd is already installed in this environment.
So I thought I might have two versions of conda installed? Here are the results of my conda env list:
# conda environments:
#
base C:\ProgramData\Anaconda3
myenv * C:\Users\******\AppData\Local\conda\conda\envs\myenv
And in both conda installations there is a 'pkg' folder, with different packages installed.
My speculation is that the Jupyter notebook is connected to the 'base' Anaconda3 installation, which does not have autograd installed.
My question is simply: how can I use autograd in a Jupyter notebook, and possibly clean everything up so I do not have two condas installed on my machine?
Here are the results after activating myenv and running conda list:
# packages in environment at C:\Users\******\AppData\Local\conda\conda\envs\myenv:
#
_py-xgboost-mutex 2.0 cpu_0
autograd 1.1.2 np112py36_0 omnia
blas 1.0 mkl
certifi 2018.4.16 py36_0
chardet 3.0.4 <pip>
Cython 0.28.4 <pip>
django 2.0.5 py36hd476221_0 anaconda
future 0.16.0 py36_1
icc_rt 2017.0.4 h97af966_0
idna 2.7 <pip>
intel-openmp 2018.0.3 0
kaggle 1.3.12 <pip>
libxgboost 0.72 0
m2w64-gcc-libgfortran 5.3.0 6
m2w64-gcc-libs 5.3.0 7
m2w64-gcc-libs-core 5.3.0 7
m2w64-gmp 6.1.0 2
m2w64-libwinpthread-git 5.0.0.4634.697f757 2
mkl 2018.0.3 1
mkl_fft 1.0.1 py36h452e1ab_0
mkl_random 1.0.1 py36h9258bd6_0
msys2-conda-epoch 20160418 1
numpy 1.12.1 py36hf30b8aa_1
numpy-base 1.14.5 py36h5c71026_0
pandas 0.23.1 py36h830ac7b_0
pip 10.0.1 py36_0
py-xgboost 0.72 py36h6538335_0
pyodbc 4.0.23 <pip>
python 3.6.5 h0c2934d_0
python-dateutil 2.7.3 py36_0
pytz 2018.4 py36_0 anaconda
requests 2.19.1 <pip>
scikit-learn 0.19.1 py36h53aea1b_0
scipy 1.1.0 py36h672f292_0
setuptools 39.2.0 py36_0
six 1.11.0 py36h4db2310_1
tqdm 4.23.4 <pip>
urllib3 1.22 <pip>
vc 14 h0510ff6_3
vs2015_runtime 14.0.25123 3
wheel 0.31.1 py36_0
wincertstore 0.2 py36h7fe50ca_0
xgboost 0.72 <pip>
There are a few things you can check. First, guarantee that your package exists inside the environment by running:
> source activate myenv
(myenv) > conda list
There will be a list of packages that conda can find for that environment. Make sure you see autograd there!
Next, in your Jupyter notebook, run the following:
import sys
print(sys.executable)
This shows the full path of the python executable running the notebook. You should see something similar to: ~/anaconda3/envs/myenv/bin/python. If you don't see myenv in the path, then Jupyter is running in a different environment. It's likely that your system path finds a different Jupyter first. Check your environment variables to see if another Jupyter comes first.
You can force Jupyter to run from a specific environment by starting it with the full path: ~/anaconda3/envs/myenv/bin/jupyter
You can use an exclamation mark in an IPython cell to install autograd, like so:
!pip install autograd
This way the installation should correspond to the IPython kernel.
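On recent IPython versions there is also a %pip magic (%pip install autograd), which installs into the environment of the running kernel; I believe it requires IPython 7.3 or newer.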