I want to be able to run both Python 3.8 (current version) and Python 3.7 in my Jupyter Notebook. I understand that creating different IPython kernels from virtual environments is the way to do this.
So I downloaded Python 3.7 and installed it locally in my home directory, then used this Python binary to create a virtual environment:
> virtualenv -p ~/Python3.7/bin/python3 py37
> source py37/bin/activate
This works perfectly: checking with python --version and sys.version correctly reports Python 3.7.
Then, to create the IPython kernel:
(py37) > ipython kernel install --user --name py37 --display-name "Python 3.7"
(py37) > jupyter notebook
This also runs without error, and the kernel can be confirmed to be added in the Notebook. However, it runs Python 3.8 like the default kernel, not Python 3.7 like the virtual environment (confirmed with sys.version).
I checked ~/.local/share/jupyter/kernels/py37/kernel.json and saw its contents as
{
"argv": [
"/usr/bin/python3",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "Python 3.7",
"language": "python"
So naturally I tried editing /usr/bin/python3 in that file to point to my Python 3.7 binary at ~/Python3.7/bin/python3, but then the kernel doesn't even work properly in the notebook.
What can I possibly do?
NB: I use Arch Linux, so I installed jupyter, virtualenv, ... through pacman rather than pip, as is recommended on Arch.
Found it myself, the hard way. Let me share anyway, in case this helps anyone.
I guess the problem was that jupyter notebook installed through pacman searches for the Python binary in the PATH variable and not in the path specified by the virtual environment. Since I installed Python 3.7 locally in my home directory, Jupyter couldn't find it and fell back to the default Python version.
So the possible solutions are:
Install Jupyter Notebook through pip (instead of pacman) within the virtual environment based on Python 3.7. (This is not at all recommended for Arch Linux users, as installing packages through pip may cause issues later.)
> wget https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tgz
> tar -xvf Python-3.7.4.tgz
> cd Python-3.7.4/
> ./configure --prefix=$HOME/Python37
> make
> make install
> virtualenv -p ~/Python37/bin/python3 py37
> source py37/bin/activate
(py37) > pip install notebook
(py37) > python -m notebook
Install Python 3.7 in the default location (instead of specifying a custom prefix). Create a new IPython kernel using the suitable virtual environment and use the jupyter-notebook installed through pacman. (Recommended for Arch Linux users)
Note 1: > python still points to the updated global Python 3.8, while > python3 or > python3.7 points to the newly installed Python 3.7
Note 2: Once the required kernel is created, you might even be able to use that python version outside the virtual environment.
> wget https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tgz
> tar -xvf Python-3.7.4.tgz
> cd Python-3.7.4/
> ./configure
> make
> sudo make install
> virtualenv -p $(which python3.7) py37
> source py37/bin/activate
(py37) > ipython kernel install --user --name py37 --display-name "Python 3.7"
(py37) > jupyter notebook
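To confirm that the new kernel really points at Python 3.7, you can inspect the registered kernel spec (a quick check, assuming the default user kernel directory from the question):
> jupyter kernelspec list
> cat ~/.local/share/jupyter/kernels/py37/kernel.json
The "argv" entry should now begin with the Python 3.7 binary instead of /usr/bin/python3.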
Add the bin directory of the locally installed Python version to the $PATH variable, create an IPython kernel, and run Jupyter Notebook within the suitable virtual environment. (I haven't tried this one personally; it just seems like it should work, so no guarantee. I also don't think it is a good solution.)
> wget https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tgz
> tar -xvf Python-3.7.4.tgz
> cd Python-3.7.4/
> ./configure --prefix=$HOME/Python37
> make
> make install
> export PATH="$HOME/Python37/bin:$PATH"
> virtualenv -p python3.7 py37
> source py37/bin/activate
(py37) > ipython kernel install --user --name py37 --display-name "Python 3.7"
(py37) > jupyter notebook
Another approach is to just run the notebook application directly using the version of Python you require - provided the notebook package is installed for that version of Python (e.g. on a Mac with a brew-installed Python 3.8):
/usr/local/opt/python@3.8/bin/python3 -m notebook
Also if you want to install packages for that version:
/usr/local/opt/python@3.8/bin/pip3 install that_package
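If you would rather expose that interpreter as an extra kernel inside an existing Jupyter installation instead of launching the notebook server with it, a sketch (the kernel name and display name here are made up):
/usr/local/opt/python@3.8/bin/python3 -m pip install ipykernel
/usr/local/opt/python@3.8/bin/python3 -m ipykernel install --user --name py38-brew --display-name "Python 3.8 (brew)"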
In the official Jupyter documentation there is now a description of how to do it when customizing the Jupyter Docker Stacks. I think the mamba command will solve the problem. The commands starting with RUN can also just be executed directly on a Linux system.
# Choose your desired base image
FROM jupyter/minimal-notebook:latest
# name your environment and choose the python version
ARG conda_env=python37
ARG py_ver=3.7
# you can add additional libraries you want mamba to install by listing them below the first line and ending with "&& \"
RUN mamba create --quiet --yes -p "${CONDA_DIR}/envs/${conda_env}" python=${py_ver} ipython ipykernel && \
mamba clean --all -f -y
# alternatively, you can comment out the lines above and uncomment those below
# if you'd prefer to use a YAML file present in the docker build context
# COPY --chown=${NB_UID}:${NB_GID} environment.yml "/home/${NB_USER}/tmp/"
# RUN cd "/home/${NB_USER}/tmp/" && \
# mamba env create -p "${CONDA_DIR}/envs/${conda_env}" -f environment.yml && \
# mamba clean --all -f -y
# create Python kernel and link it to jupyter
RUN "${CONDA_DIR}/envs/${conda_env}/bin/python" -m ipykernel install --user --name="${conda_env}" && \
fix-permissions "${CONDA_DIR}" && \
fix-permissions "/home/${NB_USER}"
# any additional pip installs can be added by uncommenting the following line
# RUN "${CONDA_DIR}/envs/${conda_env}/bin/pip" install --quiet --no-cache-dir
# if you want this environment to be the default one, uncomment the following line:
# RUN echo "conda activate ${conda_env}" >> "${HOME}/.bashrc"
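To try the Dockerfile above, a minimal build-and-run sketch (the image tag is arbitrary):
docker build -t my-custom-notebook .
docker run -p 8888:8888 my-custom-notebook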
Hope this helps
I have installed miniconda on my AWS SageMaker persistent EBS instance. Here is my starting script:
#!/bin/bash
set -e
# OVERVIEW
# This script installs a custom, persistent installation of conda on the Notebook Instance's EBS volume, and ensures
# that these custom environments are available as kernels in Jupyter.
#
# The on-start script uses the custom conda environment created in the on-create script and uses the ipykernel package
# to add that as a kernel in Jupyter.
#
# For another example, see:
# https://docs.aws.amazon.com/sagemaker/latest/dg/nbi-add-external.html#nbi-isolated-environment
sudo -u ec2-user -i <<'EOF'
unset SUDO_UID
WORKING_DIR=/home/ec2-user/SageMaker/
for env in $WORKING_DIR/miniconda/envs/*; do
BASENAME=$(basename "$env")
source "$WORKING_DIR/miniconda/bin/activate"
source activate "$BASENAME"
pip install ipykernel boto3
python -m ipykernel install --user --name "$BASENAME" --display-name "Custom ($BASENAME)"
done
# Optionally, uncomment these lines to disable SageMaker-provided Conda functionality.
# echo "c.EnvironmentKernelSpecManager.use_conda_directly = False" >> /home/ec2-user/.jupyter/jupyter_notebook_config.py
# rm /home/ec2-user/.condarc
EOF
echo "Restarting the Jupyter server.."
restart jupyter-server
I use this in order to load my custom envs. However, when I access the JupyterLab interface, even though I see that the activated kernel is the Custom one, the only version of Python running in my notebook kernel is /home/ec2-user/anaconda3/envs/JupyterSystemEnv/bin/python.
I also inspected the CloudWatch logs, and I see this error log: Could not find conda environment: [custom_env].
But, when I run the commands of the starting script within the JupyterLab terminal, conda succeeds in finding those envs. So the question is: what am I missing?
Thanks a lot.
Using !which python in a jupyter cell will always use the default system python.
But if you selected your custom kernel in Jupyter, the Python used behind the scenes is the right one; you can verify it by comparing:
!python --version
!/home/ec2-user/SageMaker/miniconda/envs/<YOUR_CUSTOM_ENV_NAME>/bin/python --version
Create a custom SageMaker Image with your kernel preloaded https://docs.aws.amazon.com/sagemaker/latest/dg/studio-byoi.html
I faced the same issue and couldn't find a way out. I found a simple and straightforward workaround (which I tried, with very minor modifications) where the kernel is not registered with the python -m ipykernel install --user --name "$BASENAME" --display-name "Custom ($BASENAME)" command; instead, the conda kernel is made persistent through symlinks created in the already existing anaconda3 environment.
Please refer to this - https://medium.com/decathlontechnology/making-jupyter-kernels-remanent-in-aws-sagemaker-a130bc47eab7 - and try it for yourself. Thanks.
I faced the same issue with the custom kernel not working in the JupyterLab interface.
I found this solution:
First, in the SageMaker terminal, create a custom conda environment (you can specify the Python version as well) and install dependencies with these commands:
conda create -n custom_kernel_name python=3.6
source activate custom_kernel_name
pip install ipykernel
Install your kernel with ipykernel:
python -m ipykernel install --user --name custom_kernel_name --display-name "custom_kernel_display_name"
Then I found out that something is wrong with kernel.json (wrong paths and launch commands), so you need to change it. Go to its location:
cd /home/ec2-user/.local/share/jupyter/kernels/custom_kernel_name
Open the kernel.json file, for example with nano:
nano kernel.json
Change the content of kernel.json to this:
{
"argv": [
"bash",
"-c",
"source \"/home/ec2-user/anaconda3/bin/activate\" \"/home/ec2-user/anaconda3/envs/custom_kernel_name\" && exec /home/ec2-user/anaconda3/envs/custom_kernel_name/bin/python -m ipykernel_launcher -f '{connection_file}' "
],
"display_name": "custom_kernel_display_name",
"language": "python",
"metadata": {
"debugger": true
}
}
After this, you will be able to open Jupyter Notebook through Launcher (or File - New - Notebook) with your custom kernel.
Use !python --version and !which python in this Notebook to be sure of using your custom kernel settings.
This is @Anastasiia Khil's answer, with a bit more abstraction and inline comments. The key part you were missing was updating kernel.json so that the !jupyter CLI runs from your env.
#Set the internal & display names, pick your python
CKN=custom_kernel_name
CKNAME=$CKN
PYV=3.8
#create and activate the env
conda create -y -n $CKN python=$PYV
source activate $CKN
# Install ipykernel
pip install ipykernel
python -m ipykernel install --user --name $CKN --display-name $CKNAME
# Update kernel.json to match the others from SageMaker, which activate the env.
cat >/home/ec2-user/.local/share/jupyter/kernels/$CKN/kernel.json <<EOL
{
"argv": [
"bash",
"-c",
"source \"/home/ec2-user/anaconda3/bin/activate\" \"/home/ec2-user/anaconda3/envs/$CKN\" && exec /home/ec2-user/anaconda3/envs/$CKN/bin/python -m ipykernel_launcher -f '{connection_file}' "
],
"display_name": "$CKNAME",
"language": "python",
"metadata": {
"debugger": true
}
}
EOL
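A quick check afterwards (in the same shell, so $CKN is still set) that the kernel spec was written with the activation wrapper:
jupyter kernelspec list
cat /home/ec2-user/.local/share/jupyter/kernels/$CKN/kernel.json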
I've just rebuilt my mac environment using the tutorials here:
https://hackercodex.com/guide/mac-development-configuration/ & here: https://hackercodex.com/guide/python-development-environment-on-mac-osx/
I want to require a virtualenv for pip, and have set that by opening:
vim ~/Library/Application\ Support/pip/pip.conf
and adding:
[install]
require-virtualenv = true
[uninstall]
require-virtualenv = true
Then, I followed a guide to set up Jupyter notebooks with TensorFlow, because I am trying to follow a Udemy course on machine learning that requires both: https://medium.com/@margaretmz/anaconda-jupyter-notebook-tensorflow-and-keras-b91f381405f8
During this tutorial, it mentions that you should use pip install instead of conda install for tensorflow, because the conda package isn't officially supported.
I can install pip on conda just fine by running:
conda install pip
But when I try to run:
pip3 install tensorflow
I get the error:
"Could not find an activated virtualenv (required)."
I know why I'm getting this error, I just don't know how to change my code to ALSO accept use of pip & pip3 inside anaconda venvs.
My anaconda3 folder is inside my Virtualenvs folder, along with all of my other virtual environments.
I've tried temporarily turning off the restriction by defining a new function in ~/.bashrc:
cpip(){
PIP_REQUIRE_VIRTUALENV="0" pip3 "$@"
}
and using that instead, with no luck, not surprisingly.
I think the problem may be here, inside my bash_profile:
# How to Set Up Mac For Dev:
# https://hackercodex.com/guide/mac-development-configuration/
# Ensure user-installed binaries take precedence
export PATH=/usr/local/bin:$PATH
# Load .bashrc if it exists
test -f ~/.bashrc && source ~/.bashrc
# Activate Bash Completion:
if [ -f $(brew --prefix)/etc/bash_completion ]; then
source $(brew --prefix)/etc/bash_completion
fi
# Toggle for installing global packages:
gpip(){
PIP_REQUIRE_VIRTUALENV="0" pip3 "$@"
}
# Toggle for installing conda packages:
cpip(){
PIP_REQUIRE_VIRTUALENV="0" pip3 "$@"
}
# Be sure to run "source ~/.bash_profile" after toggling for changes to take effect.
# Run "gpip install" (i.e. "gpip install --upgrade pip setuptools wheel virtualenv")
# added by Anaconda3 2018.12 installer
# >>> conda init >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$(CONDA_REPORT_ERRORS=false '/Users/erikhayton/Virtualenvs/anaconda3/bin/conda' shell.bash hook 2> /dev/null)"
if [ $? -eq 0 ]; then
    \eval "$__conda_setup"
else
    if [ -f "/Users/erikhayton/Virtualenvs/anaconda3/etc/profile.d/conda.sh" ]; then
        . "/Users/erikhayton/Virtualenvs/anaconda3/etc/profile.d/conda.sh"
        CONDA_CHANGEPS1=false conda activate base
    else
        \export PATH="/Users/erikhayton/Virtualenvs/anaconda3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda init <<<
I want to be able to use pip (& pip3, pip2) in both (& only in) anaconda3's activated envs and virtualenvs.
When you conda install pip, a new pip is placed inside your anaconda virtualenv's bin/ directory. Each pip knows which virtualenv (if any) it's inside of, and each pip only installs packages inside its own virtualenv. You can run it like /Users/erikhayton/Virtualenvs/anaconda3/bin/pip install tensorflow
You can know where pip3 is by running which pip3.
When you activate a virtualenv, environment variables in your shell are modified. The virtualenv's bin/ directory is placed in your PATH. If you run source /Users/erikhayton/Virtualenvs/anaconda3/bin/activate and then which pip3, you'll see a different path.
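For example, a quick check (paths taken from the question; the exact pre-activation path depends on your setup):
which pip3
# e.g. /usr/local/bin/pip3
source /Users/erikhayton/Virtualenvs/anaconda3/bin/activate
which pip3
# now /Users/erikhayton/Virtualenvs/anaconda3/bin/pip3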
See also Using Pip to install packages to Anaconda Environment
Usually when you use virtual environments, you need to activate them first before you can use them. Somewhere along the line, you would have needed to run a command to create your virtual environment:
virtualenv awesome_virtualenv
Then to make it active:
cd ~/Virtualenvs/awesome_virtualenv
source bin/activate
pip3 install tensorflow # this will install TensorFlow into your awesome_virtualenv
You can create as many virtual environments as you want and install different sets of libraries in each.
The problem is that pip doesn't recognise the conda environment as anything other than the global environment. It doesn't look like the pip authors intend to fix this (for good reasons, I think, btw). On the conda side there seems to be no movement either (considering this GitHub issue that has not seen any movement over the past year). So basically, we'll have to do our own scripting :).
This means that whenever we activate a conda environment, we either need to make it look like we're also in a virtual environment, or we switch off PIP_REQUIRE_VIRTUALENV. The solution below uses the latter option (but I can imagine the former working just as well). There is unfortunately no global activate hook in conda, but there are per environment hooks. So all we need to do is run the following 2 commands in our environment:
echo "export PIP_REQUIRE_VIRTUALENV=false" > "$CONDA_PREFIX/etc/conda/activate.d/dont-require-venv-for-pip.sh"
echo "export PIP_REQUIRE_VIRTUALENV=true" > "$CONDA_PREFIX/etc/conda/deactivate.d/require-venv-for-pip.sh"
Now whenever we activate this conda environment, PIP_REQUIRE_VIRTUALENV will be set to false, and it will be reset to true as soon as we deactivate the environment.
Since we want to (easily) install this into all our environments, I made a function which I placed in my .zshrc (it should work just as well in your .bashrc/bash_profile).
function allow_pip_in_conda_environment() {
# abort if we're not in a conda env (or in the base environment)
if [[ -z "$CONDA_DEFAULT_ENV" || "$CONDA_DEFAULT_ENV" == "base" ]]; then
echo "Should be run from within a conda environment (not base)"
return
fi
ACTIVATE="$CONDA_PREFIX/etc/conda/activate.d/dont-require-venv-for-pip.sh"
DEACTIVATE="$CONDA_PREFIX/etc/conda/deactivate.d/require-venv-for-pip.sh"
# abort if either the activate or the deactivate hook already exists in this env
if [[ -f "$ACTIVATE" || -f "$DEACTIVATE" ]]; then
echo "This hook is already installed in this conda environment"
return
fi
# write the hooks (create dirs if they don't exist)
mkdir -p "$(dirname "$ACTIVATE")"
mkdir -p "$(dirname "$DEACTIVATE")"
echo "export PIP_REQUIRE_VIRTUALENV=false" > "$ACTIVATE"
echo "export PIP_REQUIRE_VIRTUALENV=true" > "$DEACTIVATE"
# switch off PIP_REQUIRE_VIRTUALENV in the current session as well
export PIP_REQUIRE_VIRTUALENV=false
}
Now every time I run into a dreaded Could not find an activated virtualenv (required)., all I need to do is run allow_pip_in_conda_environment, and it fixes it in my current session, and forever after in this conda environment.
(PS: same code also works with mamba)
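A minimal usage sketch, assuming an existing (non-base) conda environment named myenv (a made-up name):
conda activate myenv
allow_pip_in_conda_environment
pip install tensorflow   # no longer blocked by PIP_REQUIRE_VIRTUALENV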
TL;DR
I would like to run Jupyter notebooks with different Python setups. Python packages always install globally, and I don't understand why.
I would like to run Jupyter notebooks with different python setups using venv. See here for the official documentation.
Python 3.6 is already installed on my system
$ python --version
Python 3.6.1 :: Continuum Analytics, Inc.
Using the following commands I have created two virtual environments:
$ python3 -m venv --without-pip Documents/venv/test01
$ python3 -m venv --without-pip Documents/venv/test02
Following this guide I tried setting up different kernels for each notebook with
$ source activate test01
(Documents/venv/test02) $ python -m ipykernel install --user --name test01 --display-name test01
However, the second command failed with
/Users/dominik/Documents/venv/test02/bin/python: No module named ipykernel
So, I deactivated my venv and ran the same command outside the venv which succeeded
$ source deactivate test01
$ python -m ipykernel install --user --name test01 --display-name test01
$ python -m ipykernel install --user --name test02 --display-name test02
Inside my Jupyter notebook I can see the different kernels now:
new kernels available
Now I'm creating a new notebook using the test01 kernel. Inside the notebook, I try to import a module which is not available in Python by default:
$ import mord
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-2-a00d777a7e47> in <module>()
----> 1 import mord
ModuleNotFoundError: No module named 'mord'
This is as expected. I then go about installing that package into my test01 environment using pip:
$ source activate Documents/venv/test01
(Documents/venv/test01) $ pip install mord
After restarting my test01 kernel the import error disappears - as expected. However - and now this is my question - when I import the mord package in a test02 notebook, there is no import error either. Why is that?
I would expect that mord package was only installed for test01. However, it seems to be installed globally.
Looking at the venv folders, it seems that nothing was added specifically to those environments:
venv folders
The pyvenv.cfg file seems also unchanged.
home = /Users/dominik/anaconda/bin
include-system-site-packages = false
version = 3.6.1
Can anyone give me some pointers on what I am doing wrong?
Because you created the virtualenv with the --without-pip flag, there is no pip executable in the virtual environment; you used the system pip to install the package.
Basically, the virtual environment is not involved in your setup at all: even though there are two kernel specs, they were both created with the virtualenv deactivated.
Recommended setup steps: create the virtualenv without the --without-pip option; install ipykernel in each of your virtual environments, which means installing while the virtualenv is activated; then create the kernel spec from the corresponding virtualenv, as sketched below.
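A sketch of that recommended flow, using the paths from the question:
$ python3 -m venv Documents/venv/test01
$ source Documents/venv/test01/bin/activate
(test01) $ pip install ipykernel
(test01) $ python -m ipykernel install --user --name test01 --display-name test01
(test01) $ deactivate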
After georgexsh had pointed me in the right direction regarding the --without-pip flag, this is how I solved it in my case:
Create two virtual environments
$ python -m venv --without-pip Documents/venv/test01
$ python -m venv --without-pip Documents/venv/test02
I am still using the --without-pip flag, because the command otherwise produces an error message
The following steps are executed for each of the environments test01, test02:
Activate virtual environment and install pip manually
$ source Documents/venv/test01/bin/activate
(test01) $ curl https://bootstrap.pypa.io/get-pip.py | python
Install and run Jupyter inside the virtual environment
(test01) $ pip install jupyter
(test01) $ jupyter notebook
After having completed the above steps for both environments, I tested installing a package only in one of the environments
Activate the environment you want to install to first (if not activated already)
$ source Documents/venv/test01/bin/activate
(test01) $ pip install numpy
numpy package is now only available to the Jupyter version installed in test01 environment.
I found that installing a dedicated kernel for each environment was optional (for my purpose). Anyway, here are the steps to run in an activated environment:
(test01) $ pip install ipykernel
(test01) $ python -m ipykernel install --user --name test01 --display-name test01
Now within Jupyter you can also create notebooks with different kernel types.
I use IPython notebooks and would like to be able to select to create a 2.x or 3.x python notebook in IPython.
I initially had Anaconda. With Anaconda, a global environment variable had to be changed to select which version of Python you want before IPython could be started. This is not what I was looking for, so I uninstalled Anaconda and have now set up my own installation using MacPorts and pip. It seems that I still have to use
port select --set python <python version>
to toggle between Python 2.x and 3.x, which is no better than the Anaconda solution.
Is there a way to select what version of python you want to use after you start an IPython notebook, preferably with my current MacPorts build?
The idea here is to install multiple ipython kernels. Here are instructions for anaconda. If you are not using anaconda, I recently added instructions using pure virtualenvs.
Anaconda >= 4.1.0
Since version 4.1.0, anaconda includes a special package nb_conda_kernels that detects conda environments with notebook kernels and automatically registers them. This makes using a new python version as easy as creating new conda environments:
conda create -n py27 python=2.7 ipykernel
conda create -n py36 python=3.6 ipykernel
After a restart of jupyter notebook, the new kernels are available over the graphical interface. Please note that new packages have to be explicitly installed into the new environments. The Managing environments section in conda's docs provides further information.
Manually registering kernels
Users who do not want to use nb_conda_kernels or still use older versions of anaconda can use the following steps to manually register ipython kernels.
configure the python2.7 environment:
conda create -n py27 python=2.7
conda activate py27
conda install notebook ipykernel
ipython kernel install --user
configure the python3.6 environment:
conda create -n py36 python=3.6
conda activate py36
conda install notebook ipykernel
ipython kernel install --user
After that you should be able to choose between python2
and python3 when creating a new notebook in the interface.
Additionally, you can pass the --name and --display-name options to ipython kernel install if you want to change the names of your kernels. See ipython kernel install --help for more information.
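For example (the kernel name and display name here are just an assumed convention):
conda activate py36
ipython kernel install --user --name py36 --display-name "Python 3.6 (py36)"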
If you’re running Jupyter on Python 3, you can set up a Python 2 kernel like this:
python2 -m pip install ipykernel
python2 -m ipykernel install --user
http://ipython.readthedocs.io/en/stable/install/kernel_install.html
These instructions explain how to install a python2 and python3 kernel in separate virtual environments for non-anaconda users. If you are using anaconda, please find my other answer for a solution directly tailored to anaconda.
I assume that you already have jupyter notebook installed.
First make sure that you have a python2 and a python3 interpreter with pip available.
On ubuntu you would install these by:
sudo apt-get install python-dev python3-dev python-pip python3-pip
Next prepare and register the kernel environments
python -m pip install virtualenv --user
# configure python2 kernel
python -m virtualenv -p python2 ~/py2_kernel
source ~/py2_kernel/bin/activate
python -m pip install ipykernel
ipython kernel install --name py2 --user
deactivate
# configure python3 kernel
python -m virtualenv -p python3 ~/py3_kernel
source ~/py3_kernel/bin/activate
python -m pip install ipykernel
ipython kernel install --name py3 --user
deactivate
To make things easier, you may want to add shell aliases for the activation command to your shell config file. Depending on the system and shell you use, this can be e.g. ~/.bashrc, ~/.bash_profile or ~/.zshrc
alias kernel2='source ~/py2_kernel/bin/activate'
alias kernel3='source ~/py3_kernel/bin/activate'
After restarting your shell, you can now install new packages after activating the environment you want to use.
kernel2
python -m pip install <pkg-name>
deactivate
or
kernel3
python -m pip install <pkg-name>
deactivate
With a current version of the Notebook/Jupyter, you can create a Python3 kernel. After starting a new notebook application from the command line with Python 2 you should see an entry „Python 3“ in the dropdown menu „New“. This gives you a notebook that uses Python 3. So you can have two notebooks side-by-side with different Python versions.
The Details
Create this directory: mkdir -p ~/.ipython/kernels/python3
Create this file ~/.ipython/kernels/python3/kernel.json with this content:
{
"display_name": "IPython (Python 3)",
"language": "python",
"argv": [
"python3",
"-c", "from IPython.kernel.zmq.kernelapp import main; main()",
"-f", "{connection_file}"
],
"codemirror_mode": {
"version": 2,
"name": "ipython"
}
}
Restart the notebook server.
Select „Python 3“ from the dropdown menu „New“
Work with a Python 3 Notebook
Select „Python 2“ from the dropdown menu „New“
Work with a Python 2 Notebook
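For reference, on newer IPython/Jupyter versions you can have an equivalent kernel.json generated for you instead of writing it by hand (a sketch, assuming python3 has pip available):
python3 -m pip install ipykernel
python3 -m ipykernel install --user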
A solution is available that allows me to keep my MacPorts installation by configuring the Ipython kernelspec.
Requirements:
MacPorts is installed in the usual /opt directory
python 2.7 is installed through macports
python 3.4 is installed through macports
Ipython is installed for python 2.7
Ipython is installed for python 3.4
For python 2.x:
$ cd /opt/local/Library/Frameworks/Python.framework/Versions/2.7/bin
$ sudo ./ipython kernelspec install-self
For python 3.x:
$ cd /opt/local/Library/Frameworks/Python.framework/Versions/3.4/bin
$ sudo ./ipython kernelspec install-self
Now you can open an Ipython notebook and then choose a python 2.x or a python 3.x notebook.
From my Linux installation I did:
sudo ipython2 kernelspec install-self
And now my python 2 is back on the list.
Reference:
http://ipython.readthedocs.org/en/latest/install/kernel_install.html
UPDATE:
The method above is now deprecated and will be dropped in the future. The new method should be:
sudo ipython2 kernel install
Following are the steps to add the python2 kernel to jupyter notebook:
open a terminal and create a new python 2 environment: conda create -n py27 python=2.7
activate the environment: on Linux source activate py27, or on Windows activate py27
install the kernel in the env: conda install notebook ipykernel
install the kernel for outside the env: ipython kernel install --user
close the env: source deactivate
Although this is a late answer, I hope someone finds it useful :p
Use sudo pip3 install jupyter to install Jupyter for Python 3, and sudo pip install jupyter to install Jupyter Notebook for Python 2. Then you can call the ipython kernel install command to make both types of kernel available to choose from in Jupyter Notebook, as sketched below.
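A sketch of those steps, using the python -m ipykernel form so that each registration unambiguously targets one interpreter:
sudo pip3 install jupyter
sudo pip install jupyter
python3 -m ipykernel install --user
python2 -m ipykernel install --user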
I looked at this excellent info and then wondered, since
I have python2, python3 and IPython all installed,
I have PyCharm installed,
PyCharm uses IPython for its Python Console,
whether PyCharm would use
IPython-py2 when Menu>File>Settings>Project>Project Interpreter == py2 AND
IPython-py3 when Menu>File>Settings>Project>Project Interpreter == py3
ANSWER: Yes!
P.S. I have Python Launcher for Windows installed as well.
Under Windows 7 I had anaconda and anaconda3 installed.
I went into \Users\me\anaconda\Scripts and executed
sudo .\ipython kernelspec install-self
then I went into \Users\me\anaconda3\Scripts and executed
sudo .\ipython kernel install
(I got jupyter kernelspec install-self is DEPRECATED as of 4.0. You probably want 'ipython kernel install' to install the IPython kernelspec.)
After starting jupyter notebook (in anaconda3) I got a neat dropdown menu in the upper right corner under "New" letting me choose between Python 2 or Python 3 kernels.
If you are running Anaconda in a virtual environment, and when you create a new notebook the virtual environment kernel does not show up for selection, then you have to register it with ipykernel using the following commands:
$ pip install --user ipykernel
$ python -m ipykernel install --user --name=test2
I would like to execute a long running Python script from within a Jupyter notebook so that I can hack on the data structures generated mid-run.
The script has many dependencies and command line arguments and is executed with a specific virtualenv. Is it possible to interactively run a Python script inside a notebook from a specified virtualenv (different to that of the Jupyter installation)?
Here's what worked for me (non conda python):
(macOS, brew version of Python. If you are working with the system Python, you may (will) need to prepend each command with sudo.)
First activate virtualenv. If starting afresh then, e.g., you could use virtualenvwrapper:
$ pip install virtualenvwrapper
$ mkvirtualenv -p python2 py2env
$ workon py2env
# This will activate virtualenv
(py2env)$
# Then install jupyter within the active virtualenv
(py2env)$ pip install jupyter
# jupyter comes with ipykernel, but if you somehow get an error due to ipykernel, the ipykernel package can be installed with:
(py2env)$ pip install ipykernel
Next, set up the kernel
(py2env)$ python -m ipykernel install --user --name py2env --display-name "Python2 (py2env)"
then start jupyter notebook (the venv need not be activated for this step)
(py2env)$ jupyter notebook
# or
#$ jupyter notebook
In the jupyter notebook dropdown menu: Kernel >> Change Kernel >> <list of kernels> you should see Python2 (py2env) kernel.
This also makes it easy to identify the Python version of each kernel and maintain them side by side.
Here is the link to detailed docs:
http://ipython.readthedocs.io/en/stable/install/kernel_install.html
A somewhat simpler solution for making notebook kernels from different environments available, no matter which environment you start Jupyter from.
I'm using Linux + virtualenv + virtualenvwrapper. If you are using a different setup, change the commands to the appropriate ones, but you should get the idea.
mkvirtualenv jupyter2
workon jupyter2
(jupyter2) pip install jupyter
(jupyter2) ipython kernel install --name "jupyter2_Python_2" --user
The last command creates the ~/.local/share/jupyter/kernels/jupyter2_python_2/ directory.
Same stuff for Python 3:
mkvirtualenv -p /usr/bin/python3 jupyter3
# this uses python3 as the default python in the virtualenv
workon jupyter3
(jupyter3) pip install jupyter
(jupyter3) ipython kernel install --name "jupyter3_Python_3" --user
When done, you should see both kernels, no matter what env you are using to start jupyter.
You can delete links to kernels directly in ~/.local/share/jupyter/kernels/.
To specify a location, provide options to ipython kernel install (see --help), or just copy directories from ~/.local/share/jupyter/kernels/ to ~/envs/jupyter3/share/jupyter if you want to run multiple kernels from one notebook installation only.
I found this link to be very useful:
https://ocefpaf.github.io/python4oceanographers/blog/2014/09/01/ipython_kernel/
Make sure that you pip install jupyter into your virtualenv. In case the link goes away later, here's the gist:
You need to create a new kernel. You specify your kernel with a JSON file. Your kernels are usually located at ~/.ipython/kernels. Create a directory with the name of your virtualenv and create your kernel.json file in it. For instance, one of my paths looks like ~/.ipython/kernels/datamanip/kernel.json
Here's what my kernel.json file looks like:
{
"display_name": "Data Manipulation (Python2)",
"language": "python",
"codemirror_mode": {
"version": 3,
"name":"ipython"
},
"argv": [
"/Users/ed/.virtualenvs/datamanip/bin/python",
"-c",
"from IPython.kernel.zmq.kernelapp import main; main()",
"-f",
"{connection_file}"
]
}
I am not certain exactly what the codemirror_mode object is doing, but it doesn't seem to do any harm.
It is really simple, based on the documentation
You can use a virtualenv for your IPython notebook. Follow these steps; actually there is no need for step one, just make sure your virtualenv is activated via source ~/path-to-your-virtualenv/bin/activate
Install the ipython kernel module into your virtualenv
workon my-virtualenv-name # activate your virtualenv, if you haven't already
pip install ipykernel
(The most important step) Now run the kernel "self-install" script:
python -m ipykernel install --user --name=my-virtualenv-name
Replacing the --name parameter as appropriate.
You should now be able to see your kernel in the IPython notebook menu: Kernel -> Change kernel and be able to switch to it (you may need to refresh the page before it appears in the list). IPython will remember which kernel to use for that notebook from then on.
@singer's solution didn't work for me. Here's what worked:
. /path/to/virtualenv/.venv/bin/activate
python -m ipykernel install --user --name .venv --display-name .venv
Reference: Kernels for different environments (official docs)
The nb_conda package is useful:
conda install nb_conda
With it, you can create and select your own Python kernel backed by a conda virtual environment, and manage the packages in that env, as sketched below.
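For example (the environment name and Python version are just placeholders):
conda install nb_conda
conda create -n myenv python=3.7 ipykernel
jupyter notebook   # the new environment now shows up as a kernel and in the Conda tab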
Screenshots
Conda environment manager: the Conda tab in Jupyter Notebook allows you to manage your environments right from within your notebook.
Change Kernel
You can also select which kernel to run a notebook in by using the Change kernel option in the Kernel menu.