I am new to virtual environments and an advanced beginner in Python.
I am trying to run a Jupyter notebook, but it seems that when I create a virtual environment, the Jupyter kernel used is my system's one and not the one of the virtual environment I created.
For this reason, I am trying to understand how to create a clean virtual environment.
What I do is the following:
- Create a virtual environment named testenv1:
virtualenv -p python3 testenv1
- Activate testenv1:
source testenv1/bin/activate
Here something happens that I don't understand: if I list all the installed packages, there are already a lot of them. Is there a way to force a completely clean virtualenv?
Thanks.
pip list
Package Version
----------------------------- -------
actionlib 1.11.13
angles 1.9.12
bondpy 1.8.3
camera-calibration 1.12.23
camera-calibration-parsers 1.11.13
catkin 0.7.20
cv-bridge 1.12.8
diagnostic-analysis 1.9.3
diagnostic-common-diagnostics 1.9.3
diagnostic-updater 1.9.3
dynamic-reconfigure 1.5.50
gazebo-plugins 2.5.19
gazebo-ros 2.5.19
gencpp 0.6.0
geneus 2.2.6
genlisp 0.4.16
genmsg 0.5.11
gennodejs 2.0.1
genpy 0.6.7
image-geometry 1.12.8
interactive-markers 1.11.5
joint-state-publisher 1.12.15
laser-geometry 1.6.5
message-filters 1.12.14
pip 20.0.2
pluginlib 1.11.3
python-qt-binding 0.3.7
qt-dotgraph 0.3.17
qt-gui 0.3.17
qt-gui-cpp 0.3.17
qt-gui-py-common 0.3.17
resource-retriever 1.12.6
rosbag 1.12.14
rosboost-cfg 1.14.6
rosclean 1.14.6
roscreate 1.14.6
rosgraph 1.12.14
roslaunch 1.12.14
roslib 1.14.6
roslint 0.11.0
roslz4 1.12.14
rosmake 1.14.6
rosmaster 1.12.14
rosmsg 1.12.14
rosnode 1.12.14
rosparam 1.12.14
rospy 1.12.14
rosservice 1.12.14
rostest 1.12.14
rostopic 1.12.14
rosunit 1.14.6
roswtf 1.12.14
rqt-action 0.4.9
rqt-bag 0.4.12
rqt-bag-plugins 0.4.12
rqt-console 0.4.9
rqt-dep 0.4.9
rqt-graph 0.4.11
rqt-gui 0.5.0
rqt-gui-py 0.5.0
rqt-image-view 0.4.14
rqt-launch 0.4.8
rqt-logger-level 0.4.8
rqt-moveit 0.5.7
rqt-msg 0.4.8
rqt-nav-view 0.5.7
rqt-plot 0.4.8
rqt-pose-view 0.5.8
rqt-publisher 0.4.8
rqt-py-common 0.5.0
rqt-py-console 0.4.8
rqt-reconfigure 0.5.1
rqt-robot-dashboard 0.5.7
rqt-robot-monitor 0.5.8
rqt-robot-steering 0.5.9
rqt-runtime-monitor 0.5.7
rqt-rviz 0.5.10
rqt-service-caller 0.4.8
rqt-shell 0.4.9
rqt-srv 0.4.8
rqt-tf-tree 0.6.0
rqt-top 0.4.8
rqt-topic 0.4.11
rqt-web 0.4.8
rviz 1.12.17
sensor-msgs 1.12.7
setuptools 46.1.3
smach 2.0.1
smach-ros 2.0.1
smclib 1.8.3
tf 1.11.9
tf-conversions 1.11.9
tf2-geometry-msgs 0.5.20
tf2-kdl 0.5.20
tf2-py 0.5.20
tf2-ros 0.5.20
topic-tools 1.12.14
wheel 0.34.2
xacro 1.11.3
WARNING: You are using pip version 20.0.2; however, version 20.1 is available.
You should consider upgrading via the '/home/schiano/virtualenvs/testenv1/bin/python -m pip install --upgrade pip' command.
The main Python installed on your system already has these packages in it, and when you create a virtual environment on your system, it creates a copy of that main Python environment.
You can uninstall packages from the main Python environment by running:
pip uninstall <package name>
It will ask for your permission for the removal; press y.
Or, if you want to do it in one go:
pip freeze > any_path_on_your_system/requirements.txt
pip uninstall -r any_path_on_your_system/requirements.txt
It will ask for your permission for each uninstall; press y.
Then install the virtualenv package:
pip install virtualenv
This way, all the packages will be removed from the main Python, and you can then try your method of creating a virtual environment.
Reference: https://docs.python-guide.org/dev/virtualenvs/
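As a quick sanity check that an environment really starts clean, here is a sketch using the standard-library venv module instead of the virtualenv tool above (an assumption for illustration, not the poster's exact setup); a freshly created environment should list only pip, and possibly setuptools:

```python
import os
import subprocess
import sys
import tempfile

# Create a fresh environment in a temporary directory using the stdlib
# venv module; by default it does not inherit the system site-packages.
env_dir = os.path.join(tempfile.mkdtemp(), "testenv_clean")
subprocess.check_call([sys.executable, "-m", "venv", env_dir])

# Ask the environment's own pip what is installed; a clean environment
# should show only pip (and possibly setuptools).
pip_path = os.path.join(env_dir, "bin", "pip")  # "Scripts\\pip.exe" on Windows
out = subprocess.check_output([pip_path, "list"], text=True)
print(out)
```

If a freshly created environment still shows dozens of packages, the environment is being created with access to the system site-packages rather than isolated from it.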
Related
I have created a Python 3.7 conda virtual environment and installed the following packages using this command:
conda install pytorch torchvision torchaudio cudatoolkit=11.3 matplotlib scipy opencv -c pytorch
They install fine, but then when I come to run my program I get the following error which suggests that a CUDA enabled device is not found:
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
I have an NVIDIA RTX 3060 Ti GPU, which as far as I am aware is CUDA-enabled, but whenever I go into the Python interactive shell within my conda environment I get False when evaluating torch.cuda.is_available(), suggesting that perhaps CUDA is not installed properly or cannot be found.
When I then perform a conda list to view my installed packages:
# packages in environment at /home/user/anaconda3/envs/FGVC:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 4.5 1_gnu
blas 1.0 mkl
brotli 1.0.9 he6710b0_2
bzip2 1.0.8 h7b6447c_0
ca-certificates 2021.10.26 h06a4308_2
cairo 1.16.0 hf32fb01_1
certifi 2021.10.8 py37h06a4308_2
cpuonly 1.0 0 pytorch
cudatoolkit 11.3.1 h2bc3f7f_2
cycler 0.11.0 pyhd3eb1b0_0
dbus 1.13.18 hb2f20db_0
expat 2.4.4 h295c915_0
ffmpeg 4.0 hcdf2ecd_0
fontconfig 2.13.1 h6c09931_0
fonttools 4.25.0 pyhd3eb1b0_0
freeglut 3.0.0 hf484d3e_5
freetype 2.11.0 h70c0345_0
giflib 5.2.1 h7b6447c_0
glib 2.69.1 h4ff587b_1
graphite2 1.3.14 h23475e2_0
gst-plugins-base 1.14.0 h8213a91_2
gstreamer 1.14.0 h28cd5cc_2
harfbuzz 1.8.8 hffaf4a1_0
hdf5 1.10.2 hba1933b_1
icu 58.2 he6710b0_3
imageio 2.16.0 pypi_0 pypi
imageio-ffmpeg 0.4.5 pypi_0 pypi
imutils 0.5.4 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561
jasper 2.0.14 hd8c5072_2
jpeg 9d h7f8727e_0
kiwisolver 1.3.2 py37h295c915_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.35.1 h7274673_9
libffi 3.3 he6710b0_2
libgcc-ng 9.3.0 h5101ec6_17
libgfortran-ng 7.5.0 ha8ba4b0_17
libgfortran4 7.5.0 ha8ba4b0_17
libglu 9.0.0 hf484d3e_1
libgomp 9.3.0 h5101ec6_17
libopencv 3.4.2 hb342d67_1
libopus 1.3.1 h7b6447c_0
libpng 1.6.37 hbc83047_0
libstdcxx-ng 9.3.0 hd4cf53a_17
libtiff 4.2.0 h85742a9_0
libuuid 1.0.3 h7f8727e_2
libuv 1.40.0 h7b6447c_0
libvpx 1.7.0 h439df22_0
libwebp 1.2.0 h89dd481_0
libwebp-base 1.2.0 h27cfd23_0
libxcb 1.14 h7b6447c_0
libxml2 2.9.12 h03d6c58_0
lz4-c 1.9.3 h295c915_1
matplotlib 3.5.0 py37h06a4308_0
matplotlib-base 3.5.0 py37h3ed280b_0
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py37h7f8727e_0
mkl_fft 1.3.1 py37hd3c417c_0
mkl_random 1.2.2 py37h51133e4_0
munkres 1.1.4 py_0
ncurses 6.3 h7f8727e_2
networkx 2.6.3 pypi_0 pypi
ninja 1.10.2 py37hd09550d_3
numpy 1.21.2 py37h20f2e39_0
numpy-base 1.21.2 py37h79a1101_0
olefile 0.46 py37_0
opencv 3.4.2 py37h6fd60c2_1
openssl 1.1.1m h7f8727e_0
packaging 21.3 pyhd3eb1b0_0
pcre 8.45 h295c915_0
pillow 8.4.0 py37h5aabda8_0
pip 21.2.2 py37h06a4308_0
pixman 0.40.0 h7f8727e_1
py-opencv 3.4.2 py37hb342d67_1
pyparsing 3.0.4 pyhd3eb1b0_0
pyqt 5.9.2 py37h05f1152_2
python 3.7.11 h12debd9_0
python-dateutil 2.8.2 pyhd3eb1b0_0
pytorch 1.7.0 py3.7_cpu_0 [cpuonly] pytorch
pywavelets 1.2.0 pypi_0 pypi
qt 5.9.7 h5867ecd_1
readline 8.1.2 h7f8727e_1
scikit-image 0.19.1 pypi_0 pypi
scipy 1.7.3 py37hc147768_0
setuptools 58.0.4 py37h06a4308_0
sip 4.19.8 py37hf484d3e_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.37.2 hc218d9a_0
tifffile 2021.11.2 pypi_0 pypi
tk 8.6.11 h1ccaba5_0
torchaudio 0.7.0 py37 pytorch
torchvision 0.8.1 py37_cpu [cpuonly] pytorch
tornado 6.1 py37h27cfd23_0
typing_extensions 3.10.0.2 pyh06a4308_0
wheel 0.37.1 pyhd3eb1b0_0
xz 5.2.5 h7b6447c_0
zlib 1.2.11 h7f8727e_4
zstd 1.4.9 haebb681_0
There seem to be a lot of entries saying cpuonly, but I am not sure how they came about, since I did not install them.
I am running Ubuntu version 20.04.4 LTS
I ran into a similar problem when I tried to install PyTorch with CUDA 11.1. Although the Anaconda site explicitly lists that a pre-built version of PyTorch with CUDA 11.1 is available, conda still tries to install the CPU-only version. After a lot of trial and error, I realized that the torchvision and torchaudio packages were the root cause of the problem. So installing just PyTorch fixes it:
conda install pytorch cudatoolkit=11.1 -c pytorch -c nvidia
You can ask conda to install a specific build of your required package. PyTorch builds supporting CUDA have the phrase cuda somewhere in their build string, so you can ask conda to match that spec. For more information, have a look at conda's package match spec.
$ conda install pytorch=*=*cuda* cudatoolkit -c pytorch
I believe I had the following things wrong that prevented me from using CUDA. Despite having CUDA installed, the nvcc --version command indicated that it was not installed, so I added it to the PATH using this answer.
Despite doing that, deleting my original conda environment, and running the conda install pytorch torchvision torchaudio cudatoolkit=11.3 matplotlib scipy opencv -c pytorch command again, I still got False when evaluating torch.cuda.is_available().
I then used the command conda install pytorch torchvision torchaudio cudatoolkit=10.2 matplotlib scipy opencv -c pytorch, changing cudatoolkit from version 11.3 to version 10.2, and then it worked!
Now torch.cuda.is_available() evaluates to True.
Unfortunately, CUDA version 10.2 was incompatible with my RTX 3060 GPU (and I'm assuming it is not compatible with the other RTX 3000 cards either). CUDA version 11.0 was giving me errors, and CUDA version 11.3 only installs the CPU-only versions for some reason. CUDA version 11.1 worked perfectly, though!
This is the command I used to get it to work in the end:
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
If there is nothing wrong with your NVIDIA driver setup, maybe you are missing the nvidia channel in the installation arguments. The PyTorch documentation helped me generate this command, which eventually solved my problem:
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
Installing jupyter inside conda's virtual environment solved my issue. I was having the same problem: PyTorch with CUDA was installed and !nvidia-smi showed the GPU, but when accessing a Jupyter notebook, only the CPU was visible.
From the command line, torch was finding CUDA, but from Jupyter it was not. So I just ran pip install jupyter inside the conda virtual environment, and after that the problem was solved.
Using the exact command from the PyTorch website works for me:
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=10.2 -c pytorch
But if I use
conda install pytorch==1.12.1 torchvision==0.13.1 cudatoolkit=10.2 -c pytorch
without installing torchaudio, it installs the CPU versions of pytorch and torchvision. I found that interesting and don't know why.
I am starting to learn Python, and I've been following courses using either the command prompt or PyCharm.
I've been downloading packages through the command prompt. Where can I find the installation directory and point PyCharm to it so that I don't have to download everything twice?
To give an example, I just downloaded the matplotlib library using pip on the command line as pip install matplotlib. It got downloaded. But when I then go to PyCharm - Settings - Python Interpreter, the matplotlib package does not show up. How can I make it appear?
You need to choose the right Python interpreter in PyCharm:
Settings > Project > Python interpreter.
It will show you every installed package.
Use pip list to see all installed packages:
bibo@esi09:~$ pip list
Package Version
---------------------- --------------------
attrs 19.3.0
Automat 0.8.0
blinker 1.4
certifi 2019.11.28
chardet 3.0.4
Click 7.0
cloud-init 21.1
colorama 0.4.3
command-not-found 0.3
configobj 5.0.6
constantly 15.1.0
cryptography 2.8
dbus-python 1.2.16
distro 1.4.0
distro-info 0.23ubuntu1
entrypoints 0.3
httplib2 0.14.0
hyperlink 19.0.0
idna 2.8
importlib-metadata 1.5.0
incremental 16.10.1
Jinja2 2.10.1
jsonpatch 1.22
jsonpointer 2.0
jsonschema 3.2.0
keyring 18.0.1
language-selector 0.1
launchpadlib 1.10.13
lazr.restfulclient 0.14.2
lazr.uri 1.0.3
MarkupSafe 1.1.0
meson 0.53.2
more-itertools 4.2.0
netifaces 0.10.4
oauthlib 3.1.0
pexpect 4.6.0
pip 20.3.3
pyasn1 0.4.2
pyasn1-modules 0.2.1
Pygments 2.3.1
PyGObject 3.36.0
PyHamcrest 1.9.0
PyJWT 1.7.1
pymacaroons 0.13.0
PyNaCl 1.3.0
pyOpenSSL 19.0.0
pyrsistent 0.15.5
pyserial 3.4
python-apt 2.0.0+ubuntu0.20.4.4
python-debian 0.1.36ubuntu1
PyYAML 5.3.1
requests 2.22.0
requests-unixsocket 0.2.0
SecretStorage 2.3.1
service-identity 18.1.0
setuptools 45.2.0
simplejson 3.16.0
simplelzo1x 1.1
six 1.14.0
sos 4.1
ssh-import-id 5.10
systemd-python 234
Twisted 18.9.0
ubuntu-advantage-tools 20.3
ufw 0.36
unattended-upgrades 0.1
urllib3 1.25.8
vtk 9.0.1
wadllib 1.3.3
wheel 0.34.2
zipp 1.0.0
zope.interface 4.7.1
WARNING: You are using pip version 20.3.3; however, version 21.1 is available.
You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.
bibo@esi09:~$
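To see where pip actually put those packages for a given interpreter (so you can point PyCharm's interpreter setting at the same one), you can ask Python itself; a small sketch using only the standard library:

```python
import sys
import sysconfig

# The interpreter that ran 'pip install' owns the packages; PyCharm must be
# configured to use this same interpreter in order to see them.
print(sys.executable)                    # which Python binary this is
print(sysconfig.get_paths()["purelib"])  # where pip installs pure-Python packages
```

Run this with the same python you used on the command line; the second path is the site-packages directory where matplotlib ended up.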
I've tried uninstalling and reinstalling Anaconda, upgrading pandas, and most of the other suggestions from questions asking about this. I am working with a brand-new installation of Anaconda with Python 3.7. Why is pandas not importing normally?
I have tried manually running pip install pytz --upgrade and pip install python-dateutil --upgrade to no avail. However, after running these two commands I can now import pandas in the terminal, but not in the script where I need it.
The script where I am trying to import pandas is inside a git repo that perhaps needs to be reconfigured. I suspect that might be the issue, but I'm not sure how to change how Python interacts with pandas from within git.
Here is the stacktrace...
File "C:\Users\jgreaves\Anaconda3\lib\site-packages\pandas\__init__.py", line 37, in <module>
f"C extension: {module} not built. If you want to import "
ImportError: C extension: No module named 'dateutil.tz'; 'dateutil' is not a package not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace --force' to build the C extensions first.
Here is the config of my virtual environment:
# Name Version Build Channel
ca-certificates 2020.1.1 0 anaconda
certifi 2020.4.5.2 py38_0 anaconda
numpy 1.18.5 pypi_0 pypi
openssl 1.1.1g he774522_0 anaconda
pandas 1.0.4 pypi_0 pypi
pip 20.0.2 py38_3 anaconda
pyodbc 4.0.30 pypi_0 pypi
python 3.8.3 he1778fa_0 anaconda
python-dateutil 2.8.1 pypi_0 pypi
pytz 2020.1 pypi_0 pypi
regex 2020.6.8 pypi_0 pypi
setuptools 47.1.1 py38_0 anaconda
six 1.15.0 pypi_0 pypi
sqlite 3.31.1 he774522_0 anaconda
vc 14.1 h0510ff6_4 anaconda
vs2015_runtime 14.16.27012 hf0eaf9b_2 anaconda
wheel 0.34.2 py38_0 anaconda
wincertstore 0.2 py38_0 anaconda
So after having a friend of mine take a look at my directory. We discovered that I had a file in my working directory called "dateutil.py" which was supposed to be a module for my code that I very uncleverly named. This was what was causing the issue. I have since renamed the file and everything is working fine now.
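The shadowing behaviour is easy to reproduce: the script's directory comes first on sys.path, so a local file wins over an installed package of the same name. A minimal sketch, using a made-up module name (shadow_demo) in place of dateutil:

```python
import os
import sys
import tempfile

# Create a directory containing a file that will shadow any installed
# module of the same name (shadow_demo is a made-up name for this demo).
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "shadow_demo.py"), "w") as f:
    f.write("WHERE = 'local file'\n")

# Python searches sys.path in order, and the script's own directory is
# normally first; that is why a local dateutil.py hides the real package.
sys.path.insert(0, tmp)
import shadow_demo

print(shadow_demo.WHERE)  # -> local file
```

Renaming the local file (as the poster did) removes it from the search path, and the real package becomes importable again.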
Last week I installed awscli with pip3, and today I decided to uninstall it. The uninstall was successful, but pip3 list gives me the following output:
~
❯ pip3 list
Package Version
----------------- ----------
- scli
-wscli 1.16.137
astroid 2.0.4
botocore 1.12.127
certifi 2018.10.15
colorama 0.3.9
docutils 0.14
isort 4.3.4
jmespath 0.9.4
lazy-object-proxy 1.3.1
mccabe 0.6.1
pip 19.0.3
pyasn1 0.4.5
pylint 2.1.1
python-dateutil 2.8.0
PyYAML 3.13
rsa 3.4.2
s3transfer 0.2.0
setuptools 40.8.0
six 1.11.0
urllib3 1.24.1
virtualenv 16.1.0
virtualenv-clone 0.4.0
wheel 0.33.1
wrapt 1.10.11
The top two entries appear to be related to awscli. Even the version number (1.16.137) is the same as awscli's. Anyone know how to resolve this issue?
EDIT:
Found this:
/usr/local/lib/python3.7/site-packages
❯ ls
__pycache__ mccabe-0.6.1.dist-info virtualenv.py
astroid mccabe.py virtualenv_clone-0.4.0.dist-info
astroid-2.0.4.dist-info pip virtualenv_support
botocore pip-19.0.3-py3.7.egg-info wheel
botocore-1.12.130.dist-info pkg_resources wheel-0.32.2-py3.7.egg-info
certifi pylint wheel-0.33.0-py3.7.egg-info
certifi-2018.10.15.dist-info pylint-2.1.1.dist-info wheel-0.33.1-py3.7.egg-info
clonevirtualenv.py setuptools wrapt
easy_install.py setuptools-40.8.0-py3.7.egg-info wrapt-1.10.11.dist-info
isort sitecustomize.py ~-scli-1.16.137.dist-info
isort-4.3.4.dist-info six-1.11.0.dist-info ~wscli-1.16.137.dist-info
lazy_object_proxy six.py
lazy_object_proxy-1.3.1.dist-info virtualenv-16.1.0.dist-info
Safe to delete the two offending directories?
pip list takes this information from .dist-info entries in your path. You appear to have some extra names there, given your listing. Note the two entries at the end:
~-scli-1.16.137.dist-info
~wscli-1.16.137.dist-info
Simply delete these two directory entries.
Note that awscli did not create these directories, especially because pip would have used the universal wheel file to install awscli, so no setup script needed to be run when it was installed. They remind me of Windows hidden lock files (which start with ~$), so perhaps another tool I'm not familiar with created them and accidentally left them lying around.
It doesn't really matter whether those .dist-info entries are directories, symlinks, or files; all pip list does is take every name ending in .dist-info and split out the name and version at the first -. You can create any phantom entry just by creating empty files:
$ mkdir demo && cd demo && virtualenv-3.8 .
# ....
$ bin/pip list # new, empty virtualenv
Package Version
---------- -------
pip 19.0.3
setuptools 41.0.0
wheel 0.33.1
$ touch lib/python3.8/site-packages/foobar-version.dist-info
$ bin/pip list # surprise package listed
Package Version
---------- -------
foobar version
pip 19.0.3
setuptools 41.0.0
wheel 0.33.1
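The name/version parsing described above can be sketched in a few lines. This is a simplified model of the behaviour, not pip's actual implementation:

```python
def dist_info_to_listing(entry):
    # Strip the '.dist-info' suffix, then split the name from the version
    # at the first '-' (a simplified model of how pip list reads entries).
    stem = entry[: -len(".dist-info")]
    name, _, version = stem.partition("-")
    return name, version

# Matches the phantom entry from the demo above:
print(dist_info_to_listing("foobar-version.dist-info"))  # -> ('foobar', 'version')
```

This also shows why the leftover ~-scli/~wscli directories produce such garbled rows: the leading junk characters end up being treated as part of the package name.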
The goal is I'm trying to use autograd in Jupyter Notebook on my Windows 7 machine.
Here is what I have done:
I activated a conda environment, in git bash, using source activate myenv
I installed autograd using conda install -c omnia autograd
I started Jupyter notebook with jupyter notebook
Now when I try to import autograd in Jupyter notebook, I have the following error:
No module named 'autograd'
So I stopped the Jupyter notebook and tried to use pip to install again. But I have this error:
$ pip install autograd
Requirement already satisfied: autograd in c:\users\******\appdata\local\conda\conda\envs\myenv\lib\site-packages (1.1.2)
Requirement already satisfied: numpy>=1.9 in c:\users\******\appdata\local\conda\conda\envs\myenv\lib\site-packages (from autograd) (1.14.5)
Requirement already satisfied: future in c:\users\******\appdata\local\conda\conda\envs\myenv\lib\site-packages (from autograd) (0.16.0)
Apparently, pip thinks autograd is already installed in this environment.
So I thought I might have two versions of conda installed? Here are the results of my conda env list:
# conda environments:
#
base C:\ProgramData\Anaconda3
myenv * C:\Users\******\AppData\Local\conda\conda\envs\myenv
And in both conda installations there is a 'pkg' folder, with different packages installed.
My speculation is that the Jupyter notebook is connected to the 'base' Anaconda3 installation, which does not have autograd installed.
My question is simply how can I use autograd in Jupyter notebook, and possibly clean everything up so I do not have two condas installed on my machine?
Here are the results of activating myenv and running conda list:
# packages in environment at C:\Users\******\AppData\Local\conda\conda\envs\myenv:
#
_py-xgboost-mutex 2.0 cpu_0
autograd 1.1.2 np112py36_0 omnia
blas 1.0 mkl
certifi 2018.4.16 py36_0
chardet 3.0.4 <pip>
Cython 0.28.4 <pip>
django 2.0.5 py36hd476221_0 anaconda
future 0.16.0 py36_1
icc_rt 2017.0.4 h97af966_0
idna 2.7 <pip>
intel-openmp 2018.0.3 0
kaggle 1.3.12 <pip>
libxgboost 0.72 0
m2w64-gcc-libgfortran 5.3.0 6
m2w64-gcc-libs 5.3.0 7
m2w64-gcc-libs-core 5.3.0 7
m2w64-gmp 6.1.0 2
m2w64-libwinpthread-git 5.0.0.4634.697f757 2
mkl 2018.0.3 1
mkl_fft 1.0.1 py36h452e1ab_0
mkl_random 1.0.1 py36h9258bd6_0
msys2-conda-epoch 20160418 1
numpy 1.12.1 py36hf30b8aa_1
numpy-base 1.14.5 py36h5c71026_0
pandas 0.23.1 py36h830ac7b_0
pip 10.0.1 py36_0
py-xgboost 0.72 py36h6538335_0
pyodbc 4.0.23 <pip>
python 3.6.5 h0c2934d_0
python-dateutil 2.7.3 py36_0
pytz 2018.4 py36_0 anaconda
requests 2.19.1 <pip>
scikit-learn 0.19.1 py36h53aea1b_0
scipy 1.1.0 py36h672f292_0
setuptools 39.2.0 py36_0
six 1.11.0 py36h4db2310_1
tqdm 4.23.4 <pip>
urllib3 1.22 <pip>
vc 14 h0510ff6_3
vs2015_runtime 14.0.25123 3
wheel 0.31.1 py36_0
wincertstore 0.2 py36h7fe50ca_0
xgboost 0.72 <pip>
There are a few things you can check. First, guarantee that your package exists inside the environment by running:
> source activate myenv
(myenv) > conda list
There will be a list of packages that conda can find for that environment. Make sure you see autograd there!
Next, in your Jupyter notebook, run the following:
import sys
print(sys.executable)
This shows the full path of the python executable running the notebook. You should see something similar to: ~/anaconda3/envs/myenv/bin/python. If you don't see myenv in the path, then Jupyter is running in a different environment. It's likely that your system path finds a different Jupyter first. Check your environment variables to see if another Jupyter comes first.
You can force Jupyter to run from a specific environment by starting it with the full path: ~/anaconda3/envs/myenv/bin/jupyter
You can use an exclamation mark (!) in an IPython cell to install autograd, as such:
!pip install autograd
This way the installation is guaranteed to correspond to the IPython kernel.
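A variant that avoids any ambiguity about which pip runs is to invoke pip through the kernel's own interpreter: sys.executable is the Python running the notebook, so the package lands in that exact environment regardless of what pip on PATH points to. A sketch that builds the command (printing it here rather than running it):

```python
import sys

# Build a pip command bound to the interpreter running this kernel.
# Executing it (e.g. with subprocess.check_call(cmd)) installs into the
# same environment the notebook uses.
cmd = [sys.executable, "-m", "pip", "install", "autograd"]
print(" ".join(cmd))
```

This is handy when several Pythons (base conda, an env, a system install) are on the machine, as in the question above.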