Post commands not working properly in GitHub Codespaces - Python

I want to create a codespace for Python development with some post commands that:
create a conda environment
activate it
install ipykernel and create a kernel
install requirements.txt
However, when I rebuild the container I don't get any errors, yet when I open the codespace terminal and type conda env list, the only thing I see is the base environment.
I tried both ways:
Put all the commands in the same postCreateCommand:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/python
{
  "name": "Python 3",
  // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
  "image": "mcr.microsoft.com/devcontainers/python:0-3.11",
  // Features to add to the dev container. More info: https://containers.dev/features.
  "features": {
    "ghcr.io/devcontainers/features/anaconda:1": {}
  },
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  "postCreateCommand": "conda create --name ForecastingSarimax && conda activate ForecastingSarimax",
  // Configure tool-specific properties.
  "customizations": {
    // Configure properties specific to VS Code.
    "vscode": {
      // Add the IDs of extensions you want installed when the container is created.
      "extensions": [
        "streetsidesoftware.code-spell-checker"
      ]
    }
  }
  // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
  // "remoteUser": "root"
}
Or create a .sh script with the commands and execute it:
#!/usr/bin/env bash
conda create --name ForecastingSarimax
conda activate ForecastingSarimax
conda install pip
conda install ipykernel
python -m ipykernel install --user --name ForecastingSarimaxKernel311 --display-name "ForecastingSarimaxKernel311"
pip3 install --user -r requirements.txt
What am I missing here to have my requirements met: a custom environment with my pip packages and a custom kernel?
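For reference, a sketch of a post-create script that sidesteps the two usual pitfalls (an assumption about the failure mode, not a confirmed fix): conda create waits on a confirmation prompt unless you pass -y, and conda activate has no effect in the non-interactive shell that runs postCreateCommand, so later installs land in base. conda run -n <env> avoids activation entirely:
#!/usr/bin/env bash
set -e
# Create the env non-interactively; -y skips the confirmation prompt
conda create -y --name ForecastingSarimax python=3.11 pip ipykernel
# 'conda run' executes a command inside the env without needing 'conda activate'
conda run -n ForecastingSarimax python -m ipykernel install --user --name ForecastingSarimaxKernel311 --display-name "ForecastingSarimaxKernel311"
conda run -n ForecastingSarimax pip install -r requirements.txt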

Related

Loading custom conda envs not working in SageMaker

I have installed Miniconda on my AWS SageMaker persistent EBS instance. Here is my on-start script:
#!/bin/bash
set -e
# OVERVIEW
# This script installs a custom, persistent installation of conda on the Notebook Instance's EBS volume, and ensures
# that these custom environments are available as kernels in Jupyter.
#
# The on-start script uses the custom conda environment created in the on-create script and uses the ipykernel package
# to add that as a kernel in Jupyter.
#
# For another example, see:
# https://docs.aws.amazon.com/sagemaker/latest/dg/nbi-add-external.html#nbi-isolated-environment
sudo -u ec2-user -i <<'EOF'
unset SUDO_UID
WORKING_DIR=/home/ec2-user/SageMaker/
for env in $WORKING_DIR/miniconda/envs/*; do
    BASENAME=$(basename "$env")
    source "$WORKING_DIR/miniconda/bin/activate"
    source activate "$BASENAME"
    pip install ipykernel boto3
    python -m ipykernel install --user --name "$BASENAME" --display-name "Custom ($BASENAME)"
done
# Optionally, uncomment these lines to disable SageMaker-provided Conda functionality.
# echo "c.EnvironmentKernelSpecManager.use_conda_directly = False" >> /home/ec2-user/.jupyter/jupyter_notebook_config.py
# rm /home/ec2-user/.condarc
EOF
echo "Restarting the Jupyter server.."
restart jupyter-server
I use this in order to load my custom envs. However, when I access the JupyterLab interface, even though the selected kernel is the custom one, the only version of Python running on my notebook kernel is /home/ec2-user/anaconda3/envs/JupyterSystemEnv/bin/python.
I also inspected the CloudWatch logs, and I see this error log: Could not find conda environment: [custom_env].
But, when I run the commands of the starting script within the JupyterLab terminal, conda succeeds in finding those envs. So the question is: what am I missing?
Thanks a lot.
Using !which python in a Jupyter cell will always show the default system Python.
But if you selected your custom kernel in Jupyter, the Python used behind the scenes is the right one. You can verify this by comparing:
!python --version
!/home/ec2-user/SageMaker/miniconda/envs/<YOUR_CUSTOM_ENV_NAME>/bin/python --version
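Another quick check that does not depend on PATH at all (a sketch; run it in a notebook cell) is to print the interpreter path directly:
!python -c "import sys; print(sys.executable)"
This should print a path under .../miniconda/envs/<YOUR_CUSTOM_ENV_NAME>/bin/ when the custom kernel is active.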
Create a custom SageMaker image with your kernel preloaded: https://docs.aws.amazon.com/sagemaker/latest/dg/studio-byoi.html
I faced the same issue as this and couldn't find a way out. I found a simple and straightforward workaround, which I tried (with very minor modifications): the kernel is not registered using the python -m ipykernel install --user --name "$BASENAME" --display-name "Custom ($BASENAME)" command; instead, the conda kernel is made to persist through symlinks created in the already existing anaconda3 environment.
Please refer to https://medium.com/decathlontechnology/making-jupyter-kernels-remanent-in-aws-sagemaker-a130bc47eab7 and try it for yourself. Thanks.
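Roughly, the idea is something like this (my sketch of the approach, not the article's exact code; the paths are hypothetical):
# Make a persistent env visible to SageMaker's kernel manager by symlinking it
# into the stock anaconda3 envs directory
ln -s /home/ec2-user/SageMaker/miniconda/envs/my_env /home/ec2-user/anaconda3/envs/my_env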
I faced the same issue with a custom kernel not working in the JupyterLab interface.
And I found this solution:
First, in the SageMaker terminal, create a custom conda environment (you can specify the Python version as well) and install dependencies with these commands:
conda create -n custom_kernel_name python=3.6
source activate custom_kernel_name
pip install ipykernel
Install your kernel with ipykernel:
python -m ipykernel install --user --name custom_kernel_name --display-name "custom_kernel_display_name"
Then I found out that something was wrong with kernel.json (wrong paths and launch commands), so you need to change it. Go to its location:
cd /home/ec2-user/.local/share/jupyter/kernels/custom_kernel_name
open the kernel.json file, for example with nano:
nano kernel.json
and change its content to this:
{
  "argv": [
    "bash",
    "-c",
    "source \"/home/ec2-user/anaconda3/bin/activate\" \"/home/ec2-user/anaconda3/envs/custom_kernel_name\" && exec /home/ec2-user/anaconda3/envs/custom_kernel_name/bin/python -m ipykernel_launcher -f '{connection_file}' "
  ],
  "display_name": "custom_kernel_display_name",
  "language": "python",
  "metadata": {
    "debugger": true
  }
}
After this, you will be able to open a Jupyter notebook through the Launcher (or File - New - Notebook) with your custom kernel.
Use !python --version and !which python in this notebook to be sure it is using your custom kernel settings.
This is @Anastasiia Khil's answer, with a bit more abstraction and inline comments. The key part you were missing was updating kernel.json so the kernel Jupyter launches actually runs in your env.
#Set the internal & display names, pick your python
CKN=custom_kernel_name
CKNAME=$CKN
PYV=3.8
#create and activate the env
conda create -y -n $CKN python=$PYV
source activate $CKN
# Install ipykernel
pip install ipykernel
python -m ipykernel install --user --name $CKN --display-name $CKNAME
# Update kernel.json to match the others from SageMaker, which activate the env.
cat >/home/ec2-user/.local/share/jupyter/kernels/$CKN/kernel.json <<EOL
{
  "argv": [
    "bash",
    "-c",
    "source \"/home/ec2-user/anaconda3/bin/activate\" \"/home/ec2-user/anaconda3/envs/$CKN\" && exec /home/ec2-user/anaconda3/envs/$CKN/bin/python -m ipykernel_launcher -f '{connection_file}' "
  ],
  "display_name": "$CKNAME",
  "language": "python",
  "metadata": {
    "debugger": true
  }
}
EOL

Replicate Python environment on another computer

How can I replicate the Python environment setup of a Windows machine onto another computer and be able to run very specific scripts successfully?
We have scripts that were written and run with Python 3.6.5 in an Anaconda environment, and we want to be able to run these scripts on a new Windows 10 computer.
The scripts also connect to a local database on the computer (Postgres).
Since you are using an Anaconda environment, I assume you have been using a virtualenv for the project you mentioned. It is actually easy to replicate with the following commands:
# list all virtualenvs in your anaconda folder
$ conda info --envs # this will list all virtualenvs created by you; choose the specific virtualenv here
# to activate the virtualenv of your interest
$ conda activate [virtualenv_name]
# export all packages used in the specific virtualenv (conda activated)
$ pip freeze > requirements.txt # save the output file as requirements.txt
# set up a new conda virtualenv in current or separate machine and install with the requirements.txt
$ conda create --name <env_name> python=3.6.5 --file requirements.txt
# Please note that occasionally you may need to check requirements.txt for any abnormal entries. The format should be either [package==version] or [package].
OR you can create the entire virtualenv directly.
# copy exactly same virtualenv on separate machine
# export all packages used in the specific virtualenv (conda activated), including current python version and virtualenv name
$ conda env export > environment.yml # save the output file as environment.yml
# set up a new conda virtualenv in current or separate machine and install with the requirements.txt
$ conda env create -f environment.yml # using conda; modify "name" in the environment.yml file if setting it up on the same anaconda/machine

How do I allow pip inside anaconda3 venv when pip set to require virtualenv?

I've just rebuilt my mac environment using the tutorials here:
https://hackercodex.com/guide/mac-development-configuration/ & here: https://hackercodex.com/guide/python-development-environment-on-mac-osx/
I want to require a virtualenv for pip, and have set that by opening:
vim ~/Library/Application\ Support/pip/pip.conf
and adding:
[install]
require-virtualenv = true
[uninstall]
require-virtualenv = true
Then, I followed a guide to set up Jupyter notebooks with TensorFlow, because I am trying to follow a Udemy course on machine learning that requires both: https://medium.com/@margaretmz/anaconda-jupyter-notebook-tensorflow-and-keras-b91f381405f8
During this tutorial, it mentions that you should use pip install instead of conda install for tensorflow, because the conda package isn't officially supported.
I can install pip on conda just fine by running:
conda install pip
But when I try to run:
pip3 install tensorflow
I get the error:
"Could not find an activated virtualenv (required)."
I know why I'm getting this error, I just don't know how to change my code to ALSO accept use of pip & pip3 inside anaconda venvs.
My anaconda3 folder is inside my Virtualenvs folder, along with all of my other virtual environments.
I've tried temporarily turning off the restriction by defining a new function in ~/.bashrc:
cpip(){
    PIP_REQUIRE_VIRTUALENV="0" pip3 "$@"
}
and using that instead, with no luck, not surprisingly.
I think the problem may be here, inside my bash_profile:
# How to Set Up Mac For Dev:
# https://hackercodex.com/guide/mac-development-configuration/
# Ensure user-installed binaries take precedence
export PATH=/usr/local/bin:$PATH
# Load .bashrc if it exists
test -f ~/.bashrc && source ~/.bashrc
# Activate Bash Completion:
if [ -f $(brew --prefix)/etc/bash_completion ]; then
    source $(brew --prefix)/etc/bash_completion
fi
# Toggle for installing global packages:
gpip(){
    PIP_REQUIRE_VIRTUALENV="0" pip3 "$@"
}
# Toggle for installing conda packages:
cpip(){
    PIP_REQUIRE_VIRTUALENV="0" pip3 "$@"
}
# Be sure to run "source ~/.bash_profile" after toggle for changes to take effect.
# Run "gpip install" (i.e. "gpip install --upgrade pip setuptools wheel virtualenv")
# added by Anaconda3 2018.12 installer
# >>> conda init >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$(CONDA_REPORT_ERRORS=false '/Users/erikhayton/Virtualenvs/anaconda3/bin/conda' shell.bash hook 2> /dev/null)"
if [ $? -eq 0 ]; then
    \eval "$__conda_setup"
else
    if [ -f "/Users/erikhayton/Virtualenvs/anaconda3/etc/profile.d/conda.sh" ]; then
        . "/Users/erikhayton/Virtualenvs/anaconda3/etc/profile.d/conda.sh"
        CONDA_CHANGEPS1=false conda activate base
    else
        \export PATH="/Users/erikhayton/Virtualenvs/anaconda3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda init <<<
I want to be able to use pip (and pip3, pip2) both in, and only in, anaconda3's activated envs and virtualenvs.
When you conda install pip, a new pip is placed inside your anaconda virtualenv's bin/ directory. Each pip knows whether/which virtualenv it's inside of, and each pip only installs packages inside its own virtualenv. You can run it like /Users/erikhayton/Virtualenvs/anaconda3/bin/pip install tensorflow
You can find out where pip3 is by running which pip3.
When you activate a virtualenv, environment variables in your shell are modified: the virtualenv's bin/ directory is placed in your PATH. If you source /Users/erikhayton/Virtualenvs/anaconda3/bin/activate and then run which pip3, you'll see a different path.
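For example, something like this (illustrative output, assuming the paths above):
$ which pip3
/usr/local/bin/pip3
$ source /Users/erikhayton/Virtualenvs/anaconda3/bin/activate
$ which pip3
/Users/erikhayton/Virtualenvs/anaconda3/bin/pip3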
See also Using Pip to install packages to Anaconda Environment
Usually when you use virtual environments, you need to activate them first before you can use them. Somewhere along the line, you would have needed to run a command to create your virtual environment:
virtualenv awesome_virtualenv
Then to make it active:
cd ~/Virtualenvs/awesome_virtualenv
source bin/activate
pip3 install tensorflow # this will install TensorFlow into your awesome_virtualenv
You can create as many virtual environments as you want and install different sets of libraries in each.
The problem is that pip doesn't recognise the conda environment as being anything other than the global environment. It doesn't look like the pip authors intend to fix this (for good reasons, I think, btw). On the conda side there seems to be no movement either (considering this GitHub issue, which has not seen any movement over the past year). So basically, we'll have to do our own scripting :).
This means that whenever we activate a conda environment, we either need to make it look like we're also in a virtual environment, or we switch off PIP_REQUIRE_VIRTUALENV. The solution below uses the latter option (but I can imagine the former working just as well). There is unfortunately no global activate hook in conda, but there are per environment hooks. So all we need to do is run the following 2 commands in our environment:
echo "export PIP_REQUIRE_VIRTUALENV=false" > "$CONDA_PREFIX/etc/conda/activate.d/dont-require-venv-for-pip.sh"
echo "export PIP_REQUIRE_VIRTUALENV=true" > "$CONDA_PREFIX/etc/conda/deactivate.d/require-venv-for-pip.sh"
Now whenever we activate this conda environment, PIP_REQUIRE_VIRTUALENV will be set to false, and it will be reset to true as soon as we deactivate the environment.
Since we want to (easily) install this into all our environments, I made a function which I placed in my .zshrc (it should work just as well in your .bashrc/bash_profile).
function allow_pip_in_conda_environment() {
    # abort if we're not in a conda env (or in the base environment)
    if [[ -z "$CONDA_DEFAULT_ENV" || "$CONDA_DEFAULT_ENV" == "base" ]]; then
        echo "Should be run from within a conda environment (not base)"
        return
    fi
    ACTIVATE="$CONDA_PREFIX/etc/conda/activate.d/dont-require-venv-for-pip.sh"
    DEACTIVATE="$CONDA_PREFIX/etc/conda/deactivate.d/require-venv-for-pip.sh"
    # abort if either the activate or the deactivate hook already exists in this env
    if [[ -f "$ACTIVATE" || -f "$DEACTIVATE" ]]; then
        echo "This hook is already installed in this conda environment"
        return
    fi
    # write the hooks (create dirs if they don't exist)
    mkdir -p "$(dirname "$ACTIVATE")"
    mkdir -p "$(dirname "$DEACTIVATE")"
    echo "export PIP_REQUIRE_VIRTUALENV=false" > "$ACTIVATE"
    echo "export PIP_REQUIRE_VIRTUALENV=true" > "$DEACTIVATE"
    # switch off PIP_REQUIRE_VIRTUALENV in the current session as well
    export PIP_REQUIRE_VIRTUALENV=false
}
Now every time I run into a dreaded Could not find an activated virtualenv (required)., all I need to do is run allow_pip_in_conda_environment, and it fixes it in my current session, and forever after in this conda environment.
(PS: same code also works with mamba)
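Usage looks like this (a sketch; the env and package names are just examples):
conda activate myenv
allow_pip_in_conda_environment
pip install tensorflow   # no longer blocked by PIP_REQUIRE_VIRTUALENV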

How to transfer Anaconda env installed on one machine to another? [Both with Ubuntu installed]

I have been using Anaconda (4.3.23) on my guest OS, Ubuntu 14.04, which is installed in VMware on a Windows 8.1 host. I have set up an environment in Anaconda and installed many libraries, some of which were very hectic to install (not straightforward pip installs). A few libraries had inner dependencies and had to be built together and from their git sources.
Problem
I am going to use a cloud-based VM (Azure GPU instance) to use the GPU, but I don't want to get into the hectic installation again, as I don't want to waste money on the time it will take to install all the packages and libraries again.
Is there any way to transfer/copy my existing env (which has everything already installed) to the cloud VM?
From the very end of this documentation page:
Save packages for future use:
conda list --export > package-list.txt
Reinstall packages from an export file:
conda create -n myenv --file package-list.txt
If conda list --export fails like this ...
Executing conda list --export > package-list.txt creates a file which looks like this:
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: win-64
_tflow_1100_select=0.0.1=gpu
absl-py=0.5.0=py_0
astor=0.7.1=py_0
...
But creating a new environment by executing conda create -n myenv --file package-list.txt gives me this error:
Solving environment: ...working... failed
PackagesNotFoundError: The following packages are not available from current channels:
- markdown==2.6.11=py_0
...
... then try to use conda env export
According to this discussion, execute the following commands on your source machine:
source activate yourEnvironment
conda env export --no-builds > file.txt
On the target machine execute:
conda env create --file /path/to/file.txt
The file generated by conda env export looks a bit different, but it contains pip packages as well:
name: yourEnvironment
channels:
  - conda-forge
  - defaults
dependencies:
  - absl-py=0.5.0
  ...
  - pip:
    - astroid==2.0.4
    ...
## You can try the approach below to move all the packages from one machine to another
## Note: the machines should be of the same platform and the Python version should also be the same
$ pip install conda-pack
# To package an environment:
## Pack environment my_env into my_env.tar.gz
$ conda pack -n my_env
## Pack environment my_env into out_name.tar.gz
$ conda pack -n my_env -o out_name.tar.gz
## Pack environment located at an explicit path into my_env.tar.gz
$ conda pack -p /explicit/path/to/my_env
# After following above approach, you will end up with a tar.gz file. Now to install package from this zip file follow below approach.
## To install the environment:
## Unpack environment into directory `my_env`
$ mkdir -p my_env
$ tar -xzf my_env.tar.gz -C my_env
## Use Python without activating or fixing the prefixes. Most Python
## libraries will work fine, but things that require prefix cleanups
## will fail.
$ ./my_env/bin/python
## Activate the environment. This adds `my_env/bin` to your path
$ source my_env/bin/activate
## Run Python from in the environment
(my_env) $ python
## Cleanup prefixes from in the active environment.
## Note that this command can also be run without activating the environment
## as long as some version of Python is already installed on the machine.
(my_env) $ conda-unpack
You can probably get away with copying the whole Anaconda installation to your cloud instance.
According to this GitHub thread (https://github.com/conda/conda/issues/3847), execute the following commands on your source machine:
source activate yourEnvironment
conda env export --no-builds > environment.yml
On the target machine execute:
conda env create -f environment.yml
The file generated by conda env export looks a bit different, but it contains pip packages as well:
name: yourEnvironment
channels:
  - conda-forge
  - defaults
dependencies:
  - absl-py=0.5.0
  ...
  - pip:
    - astroid==2.0.4
    ...
I found the answer from this: you can export your Anaconda environment using:
conda env export > environment.yml
Then recreate it on another machine using:
conda env create -f environment.yml
You may need to modify environment.yml, because some of the Python libraries may be obsolete or hit version conflicts in future releases.
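The part you would typically edit looks like this (a sketch; the env name is hypothetical):
name: my_new_env    # rename to avoid clashing with an existing env
# prefix: ...       # if present, this line is machine-specific and can be deleted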

How can you "clone" a conda environment into the root environment?

I'd like the root environment of conda to copy all of the packages in another environment. How can this be done?
There are options to copy dependency names/urls/versions to files.
Recommendation
Normally it is safer to work from a new environment rather than changing root. However, consider backing up your existing environments before attempting changes. Verify the desired outcome by testing these commands in a demo environment. To back up your root env, for example:
λ conda activate root
λ conda env export > environment_root.yml
λ conda list --explicit > spec_file_root.txt
Options
Option 1 - YAML file
Within the second environment (e.g. myenv), export names+ to a yaml file:
λ activate myenv
λ conda env export > environment.yml
then update the first environment+ (e.g. root) with the yaml file:
λ conda env update --name root --file environment.yml
Option 2 - Cloning an environment
Use the --clone flag to clone environments (see @DevC's post):
λ conda create --name myclone --clone root
This basically creates a direct copy of an environment.
Option 3 - Spec file
Make a spec-file++ to append dependencies from an env (see @Ormetrom's post):
λ activate myenv
λ conda list --explicit > spec_file.txt
λ conda install --name root --file spec_file.txt
Alternatively, replicate a new environment (recommended):
λ conda create --name myenv2 --file spec_file.txt
See Also
conda env for more details on the env sub-commands.
Anaconda Navigator desktop program for a more graphical experience.
Docs on updated commands. With older conda versions use activate (Windows) and source activate (Linux/Mac OS). Newer versions of conda can use conda activate (this may require some setup with your shell configuration via conda init).
Discussion on keeping conda env
Extras
There appears to be an undocumented conda run option to help execute commands in specific environments.
# New command
λ conda run --name myenv conda list --explicit > spec_file.txt
This command is effective at running commands in environments without the activation/deactivation steps. See the equivalent below:
# Equivalent
λ activate myenv
λ conda list --explicit > spec_file.txt
λ deactivate
Note, this is likely an experimental feature, so this may not be appropriate in production until official adoption into the public API.
+Conda docs have changed since the original post; links updated.
++Spec-files only work with environments created on the same OS. Unlike the first two options, spec-files only capture links to conda dependencies; pip dependencies are not included.
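If the pip dependencies matter, one workaround (a sketch; the file names are arbitrary) is to capture them separately and layer them back on top:
# Capture conda packages and pip packages in separate files
λ conda list --explicit > spec_file.txt
λ pip freeze > pip_requirements.txt
# Recreate elsewhere (same OS), then reinstall the pip packages
λ conda create --name myenv2 --file spec_file.txt
λ conda activate myenv2
λ pip install -r pip_requirements.txt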
To make a copy of your root environment (named base), you can use the following command; it worked for me with Anaconda3-5.0.1:
conda create --name <env_name> --clone base
You can list all the packages installed in a conda environment with the following command:
conda list -n <env_name>
When setting up a new environment where I need the packages from the base environment (which is often the case), I build an identical conda environment from a spec file:
conda list --explicit > spec-file.txt
The spec file includes the packages of, for example, the base environment.
Then, using the prompt, I install the packages into the new environment:
conda create --name myenv --file spec-file.txt
The packages from base are then available in the new environment.
The whole process is described in the docs:
https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#building-identical-conda-environments
I also ran into trouble cloning an environment onto another machine and wanted to provide an answer. The key issue I had was addressing errors when the current environment contains development packages which cannot be obtained directly from conda install or pip install. For these cases I highly recommend conda-pack (see this answer):
pip install conda-pack
or,
conda install conda-pack
then back up the environment (to use the current environment, just omit the my_env name):
# Pack environment my_env into my_env.tar.gz
$ conda pack -n my_env
# Pack environment my_env into out_name.tar.gz
$ conda pack -n my_env -o out_name.tar.gz
# Pack environment located at an explicit path into my_env.tar.gz
$ conda pack -p /explicit/path/to/my_env
and restoring,
# Unpack environment into directory `my_env`
$ mkdir -p my_env
$ tar -xzf my_env.tar.gz -C my_env
# Use Python without activating or fixing the prefixes. Most Python
# libraries will work fine, but things that require prefix cleanups
# will fail.
$ ./my_env/bin/python
# Activate the environment. This adds `my_env/bin` to your path
$ source my_env/bin/activate
# Run Python from in the environment
(my_env) $ python
# Cleanup prefixes from in the active environment.
# Note that this command can also be run without activating the environment
# as long as some version of Python is already installed on the machine.
(my_env) $ conda-unpack
