"ImportError: No module named site" with Jupyter notebooks - python

Hi Stack Overflow community, I've been encountering the titular error when trying to run the command jupyter notebook.
I'm running a fresh install of Scientific Linux 7. When I open a new terminal, I can run the Jupyter notebook in my browser with no problem. However, I installed a package that includes its own Python distribution and requires me to run a setup script before use. After running the setup script (which does various things to my environment variables), Jupyter no longer works and gives me the "No module named site" error.
Googling told me to try unsetting both PYTHONPATH and PYTHONHOME, but that didn't work. Could someone explain how these environment variables change where Python looks for packages? Please let me know if I can clarify anything to make my question easier to answer.
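For reference, what I tried (in the same terminal where Jupyter fails) was roughly:
unset PYTHONPATH
unset PYTHONHOME
jupyter notebook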
Thanks!
Edit: the setup script is not very illuminating as far as I can tell. For reference, the package I'm looking to use is the Fermi Science Tools (http://fermi.gsfc.nasa.gov/ssc/data/analysis/software/). Here's the code for the setup script (some of the indentation may be off a bit because I'm new to this, but rest assured the script runs without a hitch):
# Filename:    fermi-init.sh
# Description: Bourne-shell flavor initialization for all FERMI software.
#              Runs fermi-setup to generate a sh script tailored
#              specifically to this user and FERMI software
#              installation, then source that.
# Author/Date: James Peachey, HEASARC/GSFC/NASA, May 3, 1999
#              Modified for HEADAS December 2001
#              Adapted for FERMI September 2008
#
#if [ "x$HEADAS" = x ]; then
#  echo "fermi-init.sh: WARNING -- set HEADAS and source headas-init.sh before sourcing fermi-init.sh!"
#  echo "Do you wish to proceed with sourcing?"
#  select yn in "Yes" "No"; do
#    case $yn in
#      Yes )
#        if [ "x$FERMI_DIR" = x ]; then
#          echo "fermi-init.sh: ERROR -- set FERMI_DIR before sourcing fermi-init.sh"
#        elif [ -x "$FERMI_DIR/BUILD_DIR/fermi-setup" ]; then
#          export FERMI_INST_DIR=${FERMI_DIR}
#          fermi_init=`$FERMI_INST_DIR/BUILD_DIR/fermi-setup sh`
#          if [ $? -eq 0 -a "x$fermi_init" != x ]; then
#            if [ -f "$fermi_init" ]; then
#              . $fermi_init
#            fi
#            rm -f $fermi_init
#          fi
#          unset fermi_init
#        else
#          echo "fermi-init.sh: ERROR -- cannot execute $FERMI_DIR/BUILD_DIR/fermi-setup"
#        fi
#        break;;
#      No )
#        break;;
#    esac
#  done
#else
if [ "x$FERMI_DIR" = x ]; then
  echo "fermi-init.sh: ERROR -- set FERMI_DIR before sourcing fermi-init.sh"
elif [ -x "$FERMI_DIR/BUILD_DIR/fermi-setup" ]; then
  export FERMI_INST_DIR=${FERMI_DIR}
  fermi_init=`$FERMI_INST_DIR/BUILD_DIR/fermi-setup sh`
  if [ $? -eq 0 -a "x$fermi_init" != x ]; then
    if [ -f "$fermi_init" ]; then
      . $fermi_init
    fi
    rm -f $fermi_init
  fi
  unset fermi_init
else
  echo "fermi-init.sh: ERROR -- cannot execute $FERMI_DIR/BUILD_DIR/fermi-setup"
fi
#fi
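As the script itself shows, it expects FERMI_DIR to be set before sourcing, so the way I invoke it is something like this (the path below is just a placeholder for my install location):
export FERMI_DIR=/path/to/ScienceTools/x86_64-unknown-linux-gnu-libc2.17   # placeholder install path
source "$FERMI_DIR/fermi-init.sh"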

Disclaimer: I work at Datmo, where we’re building a new tool to improve quantitative workflows.
I would highly suggest using Docker in this case. You can set up your environment using a container instead. For example, you might be running Scientific Linux 7 so that you can install the Fermi Science Tools directly, but instead you could use a standard Linux distribution (CentOS, Debian, or Ubuntu) and install Docker from this link.
Once you have that installed you can pull the image as is written on this page using the command:
docker pull sfegan/fermitools_ubuntu
You can then run any code you want by running docker run (docs here)
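For example, a minimal interactive session inside the container might look like this (the bind mount and shell entrypoint here are illustrative; adjust them to your setup):
docker run -it --rm -v "$(pwd)":/data sfegan/fermitools_ubuntu /bin/bash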
If you're working on a particularly difficult quantitative workflow (e.g. working with large scientific data sets etc), there are free tools like Datmo which use Docker to help you set this up.

Related

Can't load same name readonly virtual environment variable to python virtual environment

I'm trying to understand this behavior:
In a bash script I set a variable as readonly as I use it in multiple sections:
readonly PROJECT_PATH="project/dir/path"
In a different section, I add it to a .env file that different submodules will make use of:
setup_submodule() {
  local proj_path="$PROJECT_PATH"
  local env_path=".env"

  if [[ "$CLEAN" == "true" ]]; then
    echo "Remove ${env_path}"
    rm -rf "$env_path"
  fi

  if [[ -e "${env_path}" ]]; then
    echo "File ${env_path} exists, nothing to do"
    return 0
  fi

  cat <<EOF > "$env_path"
# common
export PROJECT_PATH=${proj_path} # Same name as the variable defined in the script!!!
export RUN_DIRECTORY=.
EOF
}
This part works fine.
In a different section I create a virtual environment, and I also want to load it with different environment variables (towards the end):
setup_python_venv() {
  local env_path=".env" # RELEVANT

  if [[ ! -d "$PYTHON_VENV" ]]; then
    echo "Create python venv with '${PYTHON_BIN}' to '${PYTHON_VENV}' and update pip to latest version"
    "$PYTHON_BIN" -m venv "$PYTHON_VENV"
    (
      source "${PYTHON_VENV}/bin/activate"
      pip install -U pip
    )
  else
    echo "Python venv '${PYTHON_VENV}' already exists, nothing to do"
  fi

  echo "Install python dependencies"
  (
    source "${PYTHON_VENV}/bin/activate"
    pip install -r requirements.txt
  )

  echo "source $env_path" >> "${PYTHON_VENV}/bin/activate" # RELEVANT
  return 0
}
But in this case I get PROJECT_PATH: readonly variable.
The code works if I change the PROJECT_PATH name so that it's not the same as the variable that is exported into .env. Still, I'm not entirely sure why it works when I first load it into the .env file, but then fails when loading it into the virtual environment.
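To illustrate, here is a minimal standalone repro of the (to me) confusing behavior: writing the assignment as text to a file is fine, but sourcing that file back into the same shell performs a real assignment, which trips the readonly guard:
readonly FOO="bar"
echo 'export FOO=baz' > env_file   # only writes text; no assignment happens
source env_file                    # fails: "FOO: readonly variable"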

python virtual env successfully activated via WSL but not working

On my Windows system I've successfully installed a virtual environment (Python version 3.9) using the Windows command prompt:
python -m venv C:\my_path\my_venv
Still using the Windows command prompt, I'm able to activate the created venv via:
C:\my_path\my_venv\Scripts\activate.bat
I am sure the venv is correctly activated, since:
on the Windows terminal, I see the command line is preceded by (my_venv)
if I start Python from the terminal (python) and run import sys ; sys.path, I can see the desired path in the list of paths: [..., 'C:\\my_path\\my_venv\\lib\\site-packages\\win32\\lib', ...]
if I do stuff in the activated venv (like installing packages), everything works and is done inside the venv
To sum up, everything is fine so far.
I also have WSL2 (Ubuntu) and I'd like to activate the same venv using the Ubuntu terminal.
If, from the Ubuntu terminal, I activate the venv
source /mnt/c/my_path/my_venv/Scripts/activate
it seems to work, since the command line is preceded by (my_venv), but when I run Python (the python3 command) and then import sys ; sys.path, I see that the system is targeting the base Ubuntu Python installation (version 3.8) and not the venv installation:
['', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages']
The venv is not really activated. Any suggestions to solve the issue?
If it can help, here are a couple more pieces of information.
If I try to create a venv directly using the Ubuntu terminal
python3 -m venv /mnt/c/my_path/my_venv_unix
and activate it via the Ubuntu terminal (source /mnt/c/my_path/my_venv_unix/bin/activate), everything works fine, but that's not what I want: I'd like to use WSL to activate a virtual environment created using the Windows command prompt, since on my machine I have a lot of venvs created with Windows and I don't want to replicate them.
Here is the script C:\my_path\my_venv\Scripts\activate (/mnt/c/my_path/my_venv/Scripts/activate using WSL folder naming). I had to change the EOL from Windows to Ubuntu, otherwise the command source /mnt/c/my_path/my_venv/Scripts/activate would not have worked:
# This file must be used with "source bin/activate" *from bash*
# you cannot run it directly

deactivate () {
    # reset old environment variables
    if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then
        PATH="${_OLD_VIRTUAL_PATH:-}"
        export PATH
        unset _OLD_VIRTUAL_PATH
    fi
    if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then
        PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}"
        export PYTHONHOME
        unset _OLD_VIRTUAL_PYTHONHOME
    fi

    # This should detect bash and zsh, which have a hash command that must
    # be called to get it to forget past commands. Without forgetting
    # past commands the $PATH changes we made may not be respected
    if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then
        hash -r 2> /dev/null
    fi

    if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then
        PS1="${_OLD_VIRTUAL_PS1:-}"
        export PS1
        unset _OLD_VIRTUAL_PS1
    fi

    unset VIRTUAL_ENV
    if [ ! "${1:-}" = "nondestructive" ] ; then
        # Self destruct!
        unset -f deactivate
    fi
}

# unset irrelevant variables
deactivate nondestructive

VIRTUAL_ENV="C:\my_path\my_venv"
export VIRTUAL_ENV

_OLD_VIRTUAL_PATH="$PATH"
PATH="$VIRTUAL_ENV/Scripts:$PATH"
export PATH

# unset PYTHONHOME if set
# this will fail if PYTHONHOME is set to the empty string (which is bad anyway)
# could use `if (set -u; : $PYTHONHOME) ;` in bash
if [ -n "${PYTHONHOME:-}" ] ; then
    _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}"
    unset PYTHONHOME
fi

if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then
    _OLD_VIRTUAL_PS1="${PS1:-}"
    PS1="(.venv_ml_dl_gen_purpose) ${PS1:-}"
    export PS1
fi

# This should detect bash and zsh, which have a hash command that must
# be called to get it to forget past commands. Without forgetting
# past commands the $PATH changes we made may not be respected
if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then
    hash -r 2> /dev/null
fi
Finally, here is also the script /mnt/c/my_path/my_venv_unix/bin/activate:
# This file must be used with "source bin/activate" *from bash*
# you cannot run it directly

deactivate () {
    # reset old environment variables
    if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then
        PATH="${_OLD_VIRTUAL_PATH:-}"
        export PATH
        unset _OLD_VIRTUAL_PATH
    fi
    if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then
        PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}"
        export PYTHONHOME
        unset _OLD_VIRTUAL_PYTHONHOME
    fi

    # This should detect bash and zsh, which have a hash command that must
    # be called to get it to forget past commands. Without forgetting
    # past commands the $PATH changes we made may not be respected
    if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then
        hash -r
    fi

    if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then
        PS1="${_OLD_VIRTUAL_PS1:-}"
        export PS1
        unset _OLD_VIRTUAL_PS1
    fi

    unset VIRTUAL_ENV
    if [ ! "${1:-}" = "nondestructive" ] ; then
        # Self destruct!
        unset -f deactivate
    fi
}

# unset irrelevant variables
deactivate nondestructive

VIRTUAL_ENV="/mnt/c/my_path/my_venv_unix"
export VIRTUAL_ENV

_OLD_VIRTUAL_PATH="$PATH"
PATH="$VIRTUAL_ENV/bin:$PATH"
export PATH

# unset PYTHONHOME if set
# this will fail if PYTHONHOME is set to the empty string (which is bad anyway)
# could use `if (set -u; : $PYTHONHOME) ;` in bash
if [ -n "${PYTHONHOME:-}" ] ; then
    _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}"
    unset PYTHONHOME
fi

if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then
    _OLD_VIRTUAL_PS1="${PS1:-}"
    if [ "x(venv_unix) " != x ] ; then
        PS1="(venv_unix) ${PS1:-}"
    else
        if [ "`basename \"$VIRTUAL_ENV\"`" = "__" ] ; then
            # special case for Aspen magic directories
            # see https://aspen.io/
            PS1="[`basename \`dirname \"$VIRTUAL_ENV\"\``] $PS1"
        else
            PS1="(`basename \"$VIRTUAL_ENV\"`)$PS1"
        fi
    fi
    export PS1
fi

# This should detect bash and zsh, which have a hash command that must
# be called to get it to forget past commands. Without forgetting
# past commands the $PATH changes we made may not be respected
if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then
    hash -r
fi
Thanks to anyone who wants to answer!
Short answer: It's highly recommended to use the Linux version of Python and tools when in WSL. You'll find a number of posts here on Stack Overflow related to this, but your question is different enough (regarding venv) that it deserves its own answer.
More Detail:
It's also worth reading this question. There, the question was about a dual-boot system and whether the same venv could be shared between Windows and Linux.
I know it seems like things might be better on WSL, where you can run Windows executables under Linux, but it really isn't for this particular case.
You've solved the first problem, the difference in line endings, but the next problem you're facing is the difference in path format. After sourcing activate, do an echo $PATH and you'll see that the Windows-style C:\path\to\the\venv path has been prepended to your PATH. For WSL, that would need to be /mnt/c/path/to/the/venv.
That's not going to work.
Once you fix that (again, by editing activate), you are still trying to run python3. The venv executable is actually python.exe, and under WSL you do have to specify the extension.
So if you:
Change the line-endings from CRLF to LF
Change the path style in activate from Windows to WSL2 format
Use the python.exe executable
Then you can at least launch the Windows Python version. Your import sys; sys.path will show the Windows paths.
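For concreteness, the edited section of activate might then look like this (paths taken from the question; otherwise a sketch):
# WSL-style path instead of "C:\my_path\my_venv"
VIRTUAL_ENV="/mnt/c/my_path/my_venv"
export VIRTUAL_ENV

_OLD_VIRTUAL_PATH="$PATH"
PATH="$VIRTUAL_ENV/Scripts:$PATH"
export PATH
After which you would source the script and call the interpreter with its extension:
source /mnt/c/my_path/my_venv/Scripts/activate
python.exe -c 'import sys; print(sys.path)'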
That said, you are almost certainly going to run into additional problems that make this not worth doing. For instance, if a script assumes python, python3, or even pip, those calls are going to fail because the venv provides, e.g., pip.exe.
Line endings and native code will also be a problem.
For these reasons (and likely more), it's highly recommended to use the Linux version of Python when in WSL.
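If the real blocker is re-creating many existing venvs by hand, one workaround (my suggestion, not part of the original answer) is to export each Windows venv's package list and rebuild it natively in WSL:
# On Windows, from the activated venv: pip freeze > requirements.txt
# Then, in WSL (paths are placeholders):
python3 -m venv ~/venvs/my_venv
source ~/venvs/my_venv/bin/activate
pip install -r /mnt/c/my_path/requirements.txt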

Conda init polluting environment?

I have a project set up in PyCharm, with an existing conda environment. My scripts work when run from within the console.
I would like to be able to run python -m path_to_my_script/script.py from any location, but I need conda activated first. Conda recommends I run conda init, but I'm worried it may change settings somewhere and break things.
What does conda init do?
Strategy for Answering
Exactly what the conda init command does, and what its consequences are, is shell-specific. Instead of trying to cover all cases, let's walk through one, noting along the way that you can replicate this analysis by substituting your shell of interest.
Case Study: conda init zsh
Let's look at zsh as the shell. This is a common shell (default for macOS 10.15+) and very close to bash. Plus, I don't already have it configured.
Probing the Command: Dry Run
Many Conda commands include some form of dry-run functionality via a --dry-run, -d flag, which, combined with verbosity flags, lets us see what a command would do without actually doing it. For the init command, a dry run alone only tells us which files it would modify:
$ conda init -d zsh
no change /Users/mfansler/miniconda3/condabin/conda
no change /Users/mfansler/miniconda3/bin/conda
no change /Users/mfansler/miniconda3/bin/conda-env
no change /Users/mfansler/miniconda3/bin/activate
no change /Users/mfansler/miniconda3/bin/deactivate
no change /Users/mfansler/miniconda3/etc/profile.d/conda.sh
no change /Users/mfansler/miniconda3/etc/fish/conf.d/conda.fish
no change /Users/mfansler/miniconda3/shell/condabin/Conda.psm1
no change /Users/mfansler/miniconda3/shell/condabin/conda-hook.ps1
no change /Users/mfansler/miniconda3/lib/python3.7/site-packages/xontrib/conda.xsh
no change /Users/mfansler/miniconda3/etc/profile.d/conda.csh
modified /Users/mfansler/.zshrc
==> For changes to take effect, close and re-open your current shell. <==
Here we can see that it plans to target the user-level resource file for zsh, /Users/mfansler/.zshrc, but it doesn't tell us how it will modify it. Also, OMG! The UX here is awful, because the output in no way reflects the fact that I used the -d flag. But don't worry: as long as the -d flag is there, it won't actually change anything.
Patch Preview
To see exactly what it will do, add a single verbosity flag (-v) to the command. This gives everything from the previous output, but now also shows us the diff it will use to patch (update) the .zshrc file.
$ conda init -dv zsh
/Users/mfansler/.zshrc
---
+++
@@ -0,0 +1,16 @@
+
+# >>> conda initialize >>>
+# !! Contents within this block are managed by 'conda init' !!
+__conda_setup="$('/Users/mfansler/miniconda3/bin/conda' 'shell.zsh' 'hook' 2> /dev/null)"
+if [ $? -eq 0 ]; then
+    eval "$__conda_setup"
+else
+    if [ -f "/Users/mfansler/miniconda3/etc/profile.d/conda.sh" ]; then
+        . "/Users/mfansler/miniconda3/etc/profile.d/conda.sh"
+    else
+        export PATH="/Users/mfansler/miniconda3/bin:$PATH"
+    fi
+fi
+unset __conda_setup
+# <<< conda initialize <<<
+
# ...the rest is exactly as above
That is, the plan of action is to add these 16 lines to the .zshrc file. In this case, I don't have an existing .zshrc file, so it plans to insert them at line 1. If the file already existed, it would append these lines.
Interpreting the Shell Code
Let's get an overview of this code before focusing on the details. Essentially, it is a redundant sequence of attempts to set up some shell functionality, ordered from most to least functional.
What Conda Hopes To Do
The code
__conda_setup="$('/Users/mfansler/miniconda3/bin/conda' 'shell.zsh' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
gets something from Conda itself, stores the result in a string, and then evaluates that string if the command exited cleanly ($? -eq 0). The neat engineering here is that the subprocess (technically python -m conda) passes back a result that can be run within the current process (zsh), allowing it to define shell functions.
I'll dig deeper into what is going on here in a second.
Fallback 1: Hardcoded Shell Functions
If that strange internal command fails, the devs included a hardcoded version of some essential shell functions (specifically conda activate). This is in:
miniconda3/etc/profile.d/conda.sh
and the code simply checks that the file exists and sources it. Let's hit that last option, then we'll swing back to look at the functionality.
Fallback 2: The Last Resort
The absolute last resort is to literally violate the standing recommendation since Conda v4.4, which is to simply put the base environment's bin directory on PATH. In this case there is no conda activate functionality; it only ensures that Conda is on your PATH.
Details: Shell Functionality
Coming back to the intended case, we can inspect exactly what it would evaluate by simply getting that string result:
$ conda shell.zsh hook
__add_sys_prefix_to_path() {
    # In dev-mode CONDA_EXE is python.exe and on Windows
    # it is in a different relative location to condabin.
    if [ -n "${_CE_CONDA}" ] && [ -n "${WINDIR+x}" ]; then
        SYSP=$(\dirname "${CONDA_EXE}")
    else
        SYSP=$(\dirname "${CONDA_EXE}")
        SYSP=$(\dirname "${SYSP}")
    fi

    if [ -n "${WINDIR+x}" ]; then
        PATH="${SYSP}/bin:${PATH}"
        PATH="${SYSP}/Scripts:${PATH}"
        PATH="${SYSP}/Library/bin:${PATH}"
        PATH="${SYSP}/Library/usr/bin:${PATH}"
        PATH="${SYSP}/Library/mingw-w64/bin:${PATH}"
        PATH="${SYSP}:${PATH}"
    else
        PATH="${SYSP}/bin:${PATH}"
    fi
    \export PATH
}

__conda_exe() (
    __add_sys_prefix_to_path
    "$CONDA_EXE" $_CE_M $_CE_CONDA "$@"
)

__conda_hashr() {
    if [ -n "${ZSH_VERSION:+x}" ]; then
        \rehash
    elif [ -n "${POSH_VERSION:+x}" ]; then
        :  # pass
    else
        \hash -r
    fi
}

__conda_activate() {
    if [ -n "${CONDA_PS1_BACKUP:+x}" ]; then
        # Handle transition from shell activated with conda <= 4.3 to a subsequent activation
        # after conda updated to >= 4.4. See issue #6173.
        PS1="$CONDA_PS1_BACKUP"
        \unset CONDA_PS1_BACKUP
    fi
    \local ask_conda
    ask_conda="$(PS1="${PS1:-}" __conda_exe shell.posix "$@")" || \return
    \eval "$ask_conda"
    __conda_hashr
}

__conda_reactivate() {
    \local ask_conda
    ask_conda="$(PS1="${PS1:-}" __conda_exe shell.posix reactivate)" || \return
    \eval "$ask_conda"
    __conda_hashr
}

conda() {
    \local cmd="${1-__missing__}"
    case "$cmd" in
        activate|deactivate)
            __conda_activate "$@"
            ;;
        install|update|upgrade|remove|uninstall)
            __conda_exe "$@" || \return
            __conda_reactivate
            ;;
        *)
            __conda_exe "$@"
            ;;
    esac
}

if [ -z "${CONDA_SHLVL+x}" ]; then
    \export CONDA_SHLVL=0
    # In dev-mode CONDA_EXE is python.exe and on Windows
    # it is in a different relative location to condabin.
    if [ -n "${_CE_CONDA:+x}" ] && [ -n "${WINDIR+x}" ]; then
        PATH="$(\dirname "$CONDA_EXE")/condabin${PATH:+":${PATH}"}"
    else
        PATH="$(\dirname "$(\dirname "$CONDA_EXE")")/condabin${PATH:+":${PATH}"}"
    fi
    \export PATH

    # We're not allowing PS1 to be unbound. It must at least be set.
    # However, we're not exporting it, which can cause problems when starting a second shell
    # via a first shell (i.e. starting zsh from bash).
    if [ -z "${PS1+x}" ]; then
        PS1=
    fi
fi

conda activate base
I'm not going to walk through all of this, but the main part is that instead of directly putting bin on PATH, it defines a shell function called conda that serves as a wrapper for the condabin/conda entry point. This also defines a new piece of functionality, conda activate, which uses the shell function __conda_activate() behind the scenes. As the final step, it activates the base environment.
Why do it this way?
It is engineered this way in order to be responsive to configuration settings. Options like auto_activate_base and change_ps1 affect how Conda manipulates the shell, and so they change what functionality Conda includes in its shell functions.
Does Conda "Pollute the Environment"?
Not really. The main behavioral things, like auto-activation and prompt modification, can be disabled through configuration settings, so that conda init ultimately just adds the conda activate function to the shell, enabling clean switching between environments without ever having to manually manipulate PATH.
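For instance, both behaviors can be turned off via configuration (the prompt key is spelled changeps1 in recent Conda versions; check conda config --describe for yours):
conda config --set auto_activate_base false
conda config --set changeps1 false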
Your caution may be warranted. The conda init command adds Anaconda to the path on Linux and Mac (this is not recommended on Windows); see the Anaconda FAQ.
You also need to prep for conda init by first running source <path to conda>/bin/activate.
You can run your script in your desired conda environment by specifying that environment's Python interpreter in the shebang line at the top of your code.
You can get that value by activating your desired environment and executing which python:
(base) -> conda activate py39
(py39) -> which python
/home/user/anaconda3/envs/py39/bin/python
The shebang would be:
#!/home/user/anaconda3/envs/py39/bin/python
On Mac and Linux, once you add the shebang you can chmod the file to be executable (e.g., chmod 700 myscript.py) and run it from the command line directly. (I'm not a Windows user, so ymmv.)
(base) -> <path-to-script>/myscript.py
(Which now runs in the shebang-specified environment instead of base.)

Updating .bashrc and environment variables during Vagrant provisioning

I'm using Vagrant to set up a box with python, pip, virtualenv, virtualenvwrapper and some requirements. A provisioning shell script adds the required lines for virtualenvwrapper to .bashrc. It does a very basic check that they're not already there, so that it doesn't duplicate them with every provision:
if ! grep -Fq "WORKON_HOME" /home/vagrant/.bashrc; then
    echo 'export WORKON_HOME=/home/vagrant/.virtualenvs' >> /home/vagrant/.bashrc
    echo 'export PROJECT_HOME=/home/vagrant/Devel' >> /home/vagrant/.bashrc
    echo 'source /usr/local/bin/virtualenvwrapper.sh' >> /home/vagrant/.bashrc
    source /home/vagrant/.bashrc
fi
That seems to work fine; after provisioning is finished, the lines are in .bashrc, and I can ssh to the box and use virtualenvwrapper.
However, virtualenvwrapper doesn't work during provisioning. After the section above, the next part checks for a pip requirements file and tries to install from it with virtualenvwrapper:
if [[ -f /vagrant/requirements.txt ]]; then
    mkvirtualenv 'myvirtualenv' -r /vagrant/requirements.txt
fi
But that generates:
==> default: /tmp/vagrant-shell: line 50: mkvirtualenv: command not found
If I try to echo $WORKON_HOME from that shell script, nothing appears.
What am I missing to have those environment variables available, so virtualenvwrapper will run?
UPDATE: Further attempts... it seems that doing source /home/vagrant/.bashrc has no effect in my shell script. I can put echo "hello" in the .bashrc file, and it isn't output during provisioning (but it is if I run source /home/vagrant/.bashrc when logged in).
I've also tried su -c "source /home/vagrant/.bashrc" vagrant in the shell script but that is no different.
UPDATE 2: Removed the $BASHRC_PATH variable, which was confusing the issue.
UPDATE 3: In another question I got the answer as to why source /home/vagrant/.bashrc wasn't working: the first part of the .bashrc file prevents it from doing anything when run "not interactively" in that way.
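For reference, the guard at the top of Ubuntu's stock ~/.bashrc (standard default content, not specific to this box) looks like the following, which is why sourcing it non-interactively returns immediately:
# If not running interactively, don't do anything
case $- in
    *i*) ;;
      *) return;;
esac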
The Vagrant shell provisioner runs as root, so its home dir (~) will be /root. In your script, if you define BASHRC_PATH=/home/vagrant, then I believe your steps will work: appending to, then sourcing from, /home/vagrant/.bashrc.
Update:
Scratching my earlier idea ^^ because BASHRC_PATH is already set correctly.
As an alternative, we could use .profile or .bash_profile instead. Here's a simplified example that sets the environment variable FOO, making it available both during provisioning and after ssh login:
Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "hashicorp/precise32"

  $prov_script = <<SCRIPT
if ! grep -q "export FOO" /home/vagrant/.profile; then
  sudo echo "export FOO=bar" >> /home/vagrant/.profile
  echo "before source, FOO=$FOO"
  source /home/vagrant/.profile
  echo "after source, FOO=$FOO"
fi
SCRIPT

  config.vm.provision "shell", inline: $prov_script
end
Results
$ vagrant up
...
==> default: Running provisioner: shell...
default: Running: inline script
==> default: before source, FOO=
==> default: after source, FOO=bar
$ vagrant ssh -c 'echo $FOO'
bar
$ vagrant ssh -c 'tail -n 1 ~/.profile'
export FOO=bar
I found a solution, but I don't know if it's the best. It feels slightly wrong as it's repeating things, but...
I still append those lines to .bashrc, so that virtualenvwrapper will work if I ssh into the machine. But, because source /home/vagrant/.bashrc appears to have no effect during the running of the script, I have to explicitly repeat those three commands:
if ! grep -Fq "WORKON_HOME" $BASHRC_PATH; then
    echo 'export WORKON_HOME=$HOME/.virtualenvs' >> $BASHRC_PATH
    echo 'export PROJECT_HOME=$HOME/Devel' >> $BASHRC_PATH
    echo 'source /usr/local/bin/virtualenvwrapper.sh' >> $BASHRC_PATH
fi

WORKON_HOME=/home/vagrant/.virtualenvs
PROJECT_HOME=/home/vagrant/Devel
source /usr/local/bin/virtualenvwrapper.sh
(As an aside, I also realised that during vagrant provisioning $HOME is /root, not the /home/vagrant I was assuming.)
The .bashrc in the Ubuntu box does not work here. You have to create .bash_profile and add:

if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
As mentioned in your other question, Vagrant prohibits interactive shells during provisioning - apparently only for some boxes (I still need a reference for this). For me, this affects the official Ubuntu Trusty and Xenial boxes.
However, you can simulate an interactive bash shell using sudo -H -u USER_HERE bash -i -c 'YOUR COMMAND HERE'
Answer taken from: https://stackoverflow.com/a/30106828/4186199
This has worked for me installing Ruby via rbenv and Node via nvm when provisioning the Ubuntu/trusty64 and xenial64 boxes.
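Applied to the provisioning step from the question, that might look something like this (a sketch; it assumes virtualenvwrapper is already wired into the vagrant user's .bashrc):
sudo -H -u vagrant bash -i -c 'mkvirtualenv myvirtualenv -r /vagrant/requirements.txt'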

How can a shell function know if it is running within a virtualenv?

How should a bash function test whether it is running inside a Python virtualenv?
The two approaches that come to mind are:
[[ "$(type -t deactivate)" != function ]]; INVENV=$?
or
[[ "x$(which python)" != "x$VIRTUAL_ENV/bin/python" ]]; INVENV=$?
(Note: wanting $INVENV to be 1 if we're inside a virtualenv, and 0 otherwise, is what forces the backward-looking tests above.)
Is there something less hacky?
if [[ "$VIRTUAL_ENV" != "" ]]
then
    INVENV=1
else
    INVENV=0
fi

# or shorter, if you like:
[[ "$VIRTUAL_ENV" == "" ]]; INVENV=$?
EDIT: as @ThiefMaster mentions in the comments, in certain conditions (for instance, when starting a new shell, perhaps in tmux or screen, from within an active virtualenv) this check may fail (however, starting new shells from within a virtualenv may cause other issues as well; I wouldn't recommend it).
Actually, I just found a similar question, from which one can easily derive an answer to this one:
Python: Determine if running inside virtualenv
E.g., a shell script can use something like
python -c 'import sys; print (sys.real_prefix)' 2>/dev/null && INVENV=1 || INVENV=0
(Thanks to Christian Long for showing how to make this solution work with Python 3 also.)
EDIT: Here's a more direct (hence clearer and cleaner) solution (taking a cue from JuanPablo's comment):
INVENV=$(python -c 'import sys; print ("1" if hasattr(sys, "real_prefix") else "0")')
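For reference, sys.real_prefix is set by the classic virtualenv tool, while Python 3's built-in venv module sets sys.base_prefix instead. A combined check that handles both (my adaptation, in the spirit of the Python 3 fix credited above) could be:
INVENV=$(python -c 'import sys; print("1" if hasattr(sys, "real_prefix") or sys.prefix != getattr(sys, "base_prefix", sys.prefix) else "0")')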
If you use virtualenvwrapper, there are pre/post activation scripts that run and could set INVENV for you.
Or do what I do: put the following in your .bashrc, and create a file called .venv in your working directory (for Django), so that the virtualenv is automatically activated when you cd into the directory:
export PREVPWD=`pwd`
export PREVENV_PATH=

handle_virtualenv(){
    if [ "$PWD" != "$PREVPWD" ]; then
        PREVPWD="$PWD";
        if [ -n "$PREVENV_PATH" ]; then
            if [ "`echo "$PWD" | grep -c $PREVENV_PATH`" = "0" ]; then
                deactivate
                unalias python 2> /dev/null
                PREVENV_PATH=
            fi
        fi

        # activate virtualenv dynamically
        if [ -e "$PWD/.venv" ] && [ "$PWD" != "$PREVENV_PATH" ]; then
            PREVENV_PATH="$PWD"
            workon `basename $PWD`
            if [ -e "manage.py" ]; then
                alias python='python manage.py shell_plus'
            fi
        fi
    fi
}

export PROMPT_COMMAND=handle_virtualenv
