PATH environment variables different in activated virtual environment - python

I've noticed when I get traceback errors in any project, the file path in the trace shows one specific project I no longer use. For example:
File "/Users/MyComputer/Developer/SampleApp/venv/lib/python2.7/site-packages/flask/app.py"...
I'm assuming the site-packages are not supposed to come from SampleApp, correct? I do use virtual environments in all my projects. When I run echo $PATH in an activated virtual environment I get:
/Users/MyComputer/Developer/SampleApp/venv/bin: #<----- Shouldn't point to a specific project!
/Library/Frameworks/Python.framework/Versions/3.5/bin:
/Users/MyComputer/bin/:
/usr/local/bin:
/usr/bin:
/bin:
/usr/sbin:
/sbin:
/usr/local/git/bin:
/Users/MyComputer/.rvm/bin:
/Applications/Postgres.app/Contents/Versions/9.4/bin
When I run echo $PATH in a deactivated virtual environment I get the above without /Users/MyComputer/Developer/SampleApp/venv/bin.
Here is what is in my .bashrc:
export PATH=$PATH:$HOME/usr/local/bin
export PATH=$PATH:$HOME/usr/bin/env
Here's what is in my .bash_profile:
[[ -s "$HOME/.profile" ]] && source "$HOME/.profile" # Load the default .profile
# Postgres.app
export PATH=$PATH:/Applications/Postgres.app/Contents/Versions/latest/bin
# Setting PATH for Python 3.5
# The orginal version is saved in .bash_profile.pysave
PATH="/Library/Frameworks/Python.framework/Versions/3.5/bin:${PATH}"
export PATH
Why is the activated venv path different from a deactivated env? Thank you in advance! :)

Related

python virtual env successfully activated via WSL but not working

On my Windows system I've successfully created a virtual environment (Python version is 3.9) using the Windows Command Prompt:
python -m venv C:\my_path\my_venv
Still using the Windows Command Prompt, I'm able to activate the created venv via
C:\my_path\my_venv\Scripts\activate.bat
I am sure the venv is correctly activated since:
on the windows terminal, I see the command line is preceded by (my_venv)
if I start Python from the terminal (python) and run the following commands: import sys ; sys.path, I can see the desired path in the list of paths: [..., 'C:\\my_path\\my_venv\\lib\\site-packages\\win32\\lib', ...]
if I do stuff in the activated venv (like installing packages) everything works and is done inside the venv
To sum up, everything is fine so far.
I also have WSL2 (Ubuntu) and I'd like to activate the same venv using the Ubuntu terminal.
If, from the Ubuntu terminal, I activate the venv
source /mnt/c/my_path/my_venv/Scripts/activate
it seems to work since the command line is preceded by (my_venv), but when I run Python (the python3 command) and then run import sys ; sys.path I see that the system is targeting the base Ubuntu Python installation (version 3.8) and not the venv installation:
['', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages']
The venv is not really activated. Any suggestions to solve the issue?
In case it helps, I'll add a couple of details.
If I try to create a venv directly using the Ubuntu terminal
python3 -m venv /mnt/c/my_path/my_venv_unix
and activate it via the Ubuntu terminal (source /mnt/c/my_path/my_venv_unix/bin/activate), everything works fine. But that's not what I want: I'd like to use WSL to activate a virtual environment created using the Windows Command Prompt, since on my machine I have a lot of venvs created with Windows and I don't want to replicate them.
Here is the script C:\my_path\my_venv\Scripts\activate (/mnt/c/my_path/my_venv/Scripts/activate using WSL folder naming). I had to change the EOL from Windows to Ubuntu, otherwise the command source /mnt/c/my_path/my_venv/Scripts/activate would not have worked:
# This file must be used with "source bin/activate" *from bash*
# you cannot run it directly
deactivate () {
# reset old environment variables
if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then
PATH="${_OLD_VIRTUAL_PATH:-}"
export PATH
unset _OLD_VIRTUAL_PATH
fi
if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then
PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}"
export PYTHONHOME
unset _OLD_VIRTUAL_PYTHONHOME
fi
# This should detect bash and zsh, which have a hash command that must
# be called to get it to forget past commands. Without forgetting
# past commands the $PATH changes we made may not be respected
if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then
hash -r 2> /dev/null
fi
if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then
PS1="${_OLD_VIRTUAL_PS1:-}"
export PS1
unset _OLD_VIRTUAL_PS1
fi
unset VIRTUAL_ENV
if [ ! "${1:-}" = "nondestructive" ] ; then
# Self destruct!
unset -f deactivate
fi
}
# unset irrelevant variables
deactivate nondestructive
VIRTUAL_ENV="C:\my_path\my_venv"
export VIRTUAL_ENV
_OLD_VIRTUAL_PATH="$PATH"
PATH="$VIRTUAL_ENV/Scripts:$PATH"
export PATH
# unset PYTHONHOME if set
# this will fail if PYTHONHOME is set to the empty string (which is bad anyway)
# could use `if (set -u; : $PYTHONHOME) ;` in bash
if [ -n "${PYTHONHOME:-}" ] ; then
_OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}"
unset PYTHONHOME
fi
if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then
_OLD_VIRTUAL_PS1="${PS1:-}"
PS1="(.venv_ml_dl_gen_purpose) ${PS1:-}"
export PS1
fi
# This should detect bash and zsh, which have a hash command that must
# be called to get it to forget past commands. Without forgetting
# past commands the $PATH changes we made may not be respected
if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then
hash -r 2> /dev/null
fi
Finally, here is also the script /mnt/c/my_path/my_venv_unix/bin/activate:
# This file must be used with "source bin/activate" *from bash*
# you cannot run it directly
deactivate () {
# reset old environment variables
if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then
PATH="${_OLD_VIRTUAL_PATH:-}"
export PATH
unset _OLD_VIRTUAL_PATH
fi
if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then
PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}"
export PYTHONHOME
unset _OLD_VIRTUAL_PYTHONHOME
fi
# This should detect bash and zsh, which have a hash command that must
# be called to get it to forget past commands. Without forgetting
# past commands the $PATH changes we made may not be respected
if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then
hash -r
fi
if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then
PS1="${_OLD_VIRTUAL_PS1:-}"
export PS1
unset _OLD_VIRTUAL_PS1
fi
unset VIRTUAL_ENV
if [ ! "${1:-}" = "nondestructive" ] ; then
# Self destruct!
unset -f deactivate
fi
}
# unset irrelevant variables
deactivate nondestructive
VIRTUAL_ENV="/mnt/c/my_path/my_venv_unix"
export VIRTUAL_ENV
_OLD_VIRTUAL_PATH="$PATH"
PATH="$VIRTUAL_ENV/bin:$PATH"
export PATH
# unset PYTHONHOME if set
# this will fail if PYTHONHOME is set to the empty string (which is bad anyway)
# could use `if (set -u; : $PYTHONHOME) ;` in bash
if [ -n "${PYTHONHOME:-}" ] ; then
_OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}"
unset PYTHONHOME
fi
if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then
_OLD_VIRTUAL_PS1="${PS1:-}"
if [ "x(venv_unix) " != x ] ; then
PS1="(venv_unix) ${PS1:-}"
else
if [ "`basename \"$VIRTUAL_ENV\"`" = "__" ] ; then
# special case for Aspen magic directories
# see https://aspen.io/
PS1="[`basename \`dirname \"$VIRTUAL_ENV\"\``] $PS1"
else
PS1="(`basename \"$VIRTUAL_ENV\"`)$PS1"
fi
fi
export PS1
fi
# This should detect bash and zsh, which have a hash command that must
# be called to get it to forget past commands. Without forgetting
# past commands the $PATH changes we made may not be respected
if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then
hash -r
fi
Thanks to anyone who wants to answer!
Short answer: It's highly recommended to use the Linux version of Python and tools when in WSL. You'll find a number of posts here on Stack Overflow related to this, but your question is different enough (regarding venv) that it deserves its own answer.
More Detail:
Also worth reading this question. In that case, the question was around a dual-boot system and whether or not the same venv could be shared between Windows and Linux.
I know it seems like things might be better on WSL, where you can run Windows .exe executables under Linux, but it really isn't for this particular case.
You've solved the first problem, in the difference in line endings, but the next problem that you are facing is the difference in the directory format. After sourcing activate, do an echo $PATH and you'll see that the Windows style C:\path\to\the\venv path has been prepended to your PATH. For WSL, that would need to be /mnt/c/path/to/the/venv.
That's not going to work.
Once you fix that (again, by editing activate), you are still trying to run python3. The venv executable is actually python.exe. Under WSL, you do have to specify the extension.
So if you:
Change the line-endings from CRLF to LF
Change the path style in activate from Windows to WSL2 format
Use the python.exe executable
Then you can at least launch the Windows Python version. Your import sys; sys.path will show the Windows paths.
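For illustration, here is a minimal sketch of what the edited lines in activate would need to look like; the paths are the ones from the example above, so treat this as an assumption rather than a tested recipe:
# inside /mnt/c/my_path/my_venv/Scripts/activate, after converting CRLF to LF
VIRTUAL_ENV="/mnt/c/my_path/my_venv"   # was: C:\my_path\my_venv
export VIRTUAL_ENV
_OLD_VIRTUAL_PATH="$PATH"
PATH="$VIRTUAL_ENV/Scripts:$PATH"      # Scripts/ is where a Windows venv keeps python.exe
export PATH
After sourcing that from the Ubuntu terminal, the interpreter has to be called with its extension:
source /mnt/c/my_path/my_venv/Scripts/activate
python.exe -c "import sys; print(sys.path)"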
That said, you are almost certainly going to run into additional problems that make this not worth doing. For instance, if a script assumes python or python3, or even pip, those calls are going to fail because it would need to invoke, e.g., pip.exe.
Line endings and native code will also be a problem.
For these reasons (and likely more), it's highly recommended to use the Linux version of Python when in WSL.
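As a rough sketch of that recommendation, recreating the environment on the Linux side might look like this; the paths and the requirements.txt file are assumptions, not taken from the question:
sudo apt install python3-venv                     # only if the venv module is missing
python3 -m venv ~/venvs/my_venv_linux             # keep the env on the Linux filesystem, not under /mnt/c
source ~/venvs/my_venv_linux/bin/activate
pip install -r /mnt/c/my_path/requirements.txt    # reuse the project's dependency list, if you keep one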

Why am I stuck with the system default Python 2 interpreter after activating a Python 3 Conda environment?

I created a new Conda environment on an x86-64 Linux mainframe, using the command
conda create --name myenv --file somefile.txt --python=3.8.
I double checked my Python version in this environment using conda list, which returns
...
python 3.8.3 hcff3b4d_0
...
However, after activating this environment, Python 3 scripts don't run, and running which python reveals that the environment falls back to the system's default Python 2 interpreter:
$ which python
/usr/bin/python
My Efforts So Far
First, I added the line export PATH=$PATH:/home/miniconda3/envs/myenv/bin to my ~/.bashrc and ~/.profile files, to no effect. which python still returns /usr/bin/python.
I then checked my alias file to see if python is aliased to Python 2. But there is no entry in the alias file about python.
For your reference
my ~/.bashrc looks like this:
$ cat ~/.bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/home/miniconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/home/miniconda3/etc/profile.d/conda.sh" ]; then
. "/home/miniconda3/etc/profile.d/conda.sh"
else
export PATH="/home/miniconda3/bin:$PATH"
fi
fi
unset __conda_setup
# <<< conda initialize <<<
alias julia="/home/julia-1.4.0/bin/julia"
export PATH=$PATH:/home/miniconda3/envs/myenv/bin
$ cat ~/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
$ cat ~/.profile
export PATH=$PATH:/home/miniconda3/envs/myenv/bin
I think you need to add your env path to the front of the PATH variable so the shell finds python there first. The OS searches the directories in the PATH list in order and uses the first match it finds.
Change
export PATH=$PATH:/home/miniconda3/envs/myenv/bin
to this:
export PATH=/home/miniconda3/envs/myenv/bin:$PATH
As an example, here are the settings on my conda environment:
(ds_tensorflow) C:\Users\me>which python
/c/Users/me/miniconda3/envs/ds_tensorflow/python
(ds_tensorflow) C:\Users\me>env | grep PATH
...
PATH=/c/Users/me/miniconda3/envs/ds_tensorflow:/mingw-w64/bin:/usr/bin:/bin:...
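A quick way to confirm the new ordering after reloading the shell; the expected locations below follow the question's paths and are assumptions:
source ~/.bashrc                          # or open a new terminal
echo "$PATH" | tr ':' '\n' | head -n 3    # the env's bin directory should now be listed first
which python                              # expected: /home/miniconda3/envs/myenv/bin/python
python --version                          # expected: Python 3.8.3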

Required to run "source .bash_profile" in every new terminal for the correct python path

I have a strange problem with ipython path. When I open a terminal and type
which ipython, I get
"/Users/test/anaconda3/bin/ipython"
which messes up some of my definitions. Regardless, when I run source .bash_profile it changes the path to "/usr/local/bin/ipython".
Basically, every time I open a new tab, I have to run source .bash_profile to get the correct path.
These are the contents of my bash file and I have not seen a problem like this before.
export PATH="/opt/local/bin:/opt/local/sbin:$PATH"
# Finished adapting your PATH environment variable for use with MacPorts.
export PATH="/usr/local/bin:$PATH"
# added by Anaconda3 2019.07 installer
# >>> conda init >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$(CONDA_REPORT_ERRORS=false '/Users/test/anaconda3/bin/conda' shell.bash hook 2> /dev/null)"
if [ $? -eq 0 ]; then
\eval "$__conda_setup"
else
if [ -f "/Users/test/anaconda3/etc/profile.d/conda.sh" ]; then
. "/Users/test/anaconda3/etc/profile.d/conda.sh"
CONDA_CHANGEPS1=false conda activate base
else
\export PATH="/Users/test/anaconda3/bin:$PATH"
fi
fi
unset __conda_setup
# <<< conda init <<<
PYTHONPATH="/Users/test/Documents/heasoft-6.25/x86_64-apple-darwin18.2.0/lib/python"
PYTHONPATH="/Users/test/anaconda3/lib/python3.7:$PYTHONPATH"
This is not a bug or a problem. It's expected behavior that /Users/test/anaconda3/bin/ipython takes priority over /usr/local/bin/ipython.
This is because the "base" Python environment from Anaconda/Miniconda, /Users/test/anaconda3/bin, is prepended in the PATH.
The following content in your .bash_profile is the conda initialization script, which makes sure the conda command works properly.
# added by Anaconda3 2019.07 installer
# >>> conda init >>>
...
# <<< conda init <<<
And one of the default behaviors of conda's initialization is to activate its "base" Python environment.
All you need to do is disable the activation of the "base" Python environment.
Solution
Remove the above conda initialization script in .bash_profile. Replace it with
# only expose the `conda` command but not the "base" environment
export PATH="/Users/test/anaconda3/condabin:$PATH"
Disable activation of the "base" Python env by running the following commands in a new interactive shell.
# re-generate init script
conda init
# disable activation of "base" environment
conda config --set auto_activate_base false
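Once that's done, a new shell should fall back to the non-conda ipython. A quick sanity check, using the locations from the question, would be something like:
which conda                                      # should now live under /Users/test/anaconda3/condabin
which ipython                                    # should resolve to /usr/local/bin/ipython again
conda config --show | grep auto_activate_base    # should report: auto_activate_base: False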
References
How do I prevent Conda from activating the base environment by default?
Activation script initialization: conda init

autoenv executes even in subfolder

I use autoenv for automatic virtualenv activation. My Python project's top folder has a .env file with the following contents:
source venv/bin/activate
This command is executed whenever I cd into any subfolder of the project, and it then throws
-bash: venv/bin/activate: No such file or directory
It fails because it tries to source activate relative to the subfolder. Why does it execute even in a subfolder, and how can I resolve the issue?
Had this issue today. The current answer doesn't address the fact that the environment is activated every time you cd into a subfolder or back to the root folder. Solved it with the following .env script:
venv=venv
currentvenv=""
if [[ $VIRTUAL_ENV != "" ]]
then
    # Strip out the path and just leave the env name
    currentvenv="${VIRTUAL_ENV##*/}"
fi
if [[ "$currentvenv" != "$venv" ]]
then
    echo "Switching to environment: $venv"
    workon $venv
#else
#    echo "Already on environment $venv"
fi
Replace venv with the name of your environment. You can uncomment the else block to see that it doesn't try to activate the environment every time, given that the desired environment is already activated.
Note: If you're not using virtualenvwrapper then you should replace the workon command with whatever command you're using to activate your virtual environment. I do recommend using virtualenvwrapper though.
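If you use a plain virtualenv instead of virtualenvwrapper, the same pattern might look like the sketch below; /path/to/project is a placeholder, and using an absolute path is what keeps the activation working from subfolders:
venv_path=/path/to/project/venv
if [[ "$VIRTUAL_ENV" != "$venv_path" ]]
then
    echo "Switching to environment: $venv_path"
    source "$venv_path/bin/activate"
fi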
In your workspace root, a .env containing:
test -n "$(command -v deactivate)" && deactivate  # deactivate any currently active env first
and in each of your relevant project folders:
workon venv_of_project
As this person points out, it means that cding around in a project will turn the workspace on and off, but at least it is simple and very clear what is going on.

How to activate virtualenv in Linux?

I have been searching and tried various alternatives without success and spent several days on it now - driving me mad.
Running on Red Hat Linux with Python 2.5.2
I began with the most recent virtualenv but could not activate it. I found a suggestion somewhere that I needed an earlier version, so I have used virtualenv 1.6.4, as that should work with Python 2.6.
It seems to install the virtual environment ok
[necrailk@server6 ~]$ python virtualenv-1.6.4/virtualenv.py virtual
New python executable in virtual/bin/python
Installing setuptools............done.
Installing pip...............done.
Environment looks ok
[necrailk@server6 ~]$ cd virtual
[necrailk@server6 ~/virtual]$ dir
bin include lib
Trying to activate
[necrailk@server6 ~/virtual]$ . bin/activate
/bin/.: Permission denied.
Checked chmod
[necrailk@server6 ~/virtual]$ cd bin
[necrailk@server6 bin]$ ls -l
total 3160
-rw-r--r-- 1 necrailk biz12 2130 Jan 30 11:38 activate
-rw-r--r-- 1 necrailk biz12 1050 Jan 30 11:38 activate.csh
-rw-r--r-- 1 necrailk biz12 2869 Jan 30 11:38 activate.fish
-rw-r--r-
That's a problem, so I changed it:
[necrailk@server6 bin]$ ls -l
total 3160
-rwxr--r-- 1 necrailk biz12 2130 Jan 30 11:38 activate
-rw-r--r-- 1 necrailk biz12 1050 Jan 30 11:38 activate.csh
-rw-r--r-- 1 necrailk biz12 2869 Jan 30 11:38 activate.fish
-rw-r--r-- 1 necrailk biz12 1005 Jan 30 11:38 activate_this.py
-rwxr-xr-x 1 necrailk biz
Try activate again
[necrailk@server6 ~/virtual]$ . bin/activate
/bin/.: Permission denied.
Still no joy...
Here is my workflow after creating a folder and cd'ing into it:
$ virtualenv venv --distribute
New python executable in venv/bin/python
Installing distribute.........done.
Installing pip................done.
$ source venv/bin/activate
(venv)$ python
You forgot to do source bin/activate, where source is a shell command.
This struck me the first few times as well; it's easy to think the manual is telling you to "execute this from the root of the environment folder".
No need to make activate executable via chmod.
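To make the distinction concrete (the virtual directory name is the one from the question):
cd ~/virtual
source bin/activate     # or the POSIX spelling: . bin/activate
# ./bin/activate would run in a child process and leave your current shell's PATH untouched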
You can do
source ./python_env/bin/activate
or just go to the directory
cd /python_env/bin/
and then
source ./activate
Good Luck.
Go to the project directory. In my case microblog is the Flask project directory, and under the microblog directory there should be app and venv folders. Then run the command below; this is the one that worked for me on Ubuntu.
source venv/bin/activate
cd to the environment path and go into the bin folder.
At this point, when you run the ls command, you should see the "activate" file.
Now type
source activate
$ mkdir <YOURPROJECT>                   # Create a new project
$ cd <YOURPROJECT>                      # Change directory to that project
$ virtualenv <NEWVIRTUALENV>            # Create the new virtualenv
$ source <NEWVIRTUALENV>/bin/activate   # Activate the new virtualenv
Run this command and it will get activated if you are on a Windows machine:
source venv/Scripts/activate
Run this command and it will get activated if you are on a Linux/Mac machine:
. venv/bin/activate
The problem there is the /bin/. command. That's really weird, since . should always be a link to the directory it's in. (Honestly, unless . is a strange alias or function, I don't even see how it's possible.) It's also a little unusual that your shell doesn't have a . builtin for source.
One quick fix would be to just run the virtualenv in a different shell. (An obvious second advantage being that instead of having to deactivate you can just exit.)
/bin/bash --rcfile bin/activate
If your shell supports it, you may also have the nonstandard source command, which should do the same thing as ., but may not exist. (All said, you should try to figure out why your environment is strange or it will cause you pain again in the future.)
By the way, you didn't need to chmod +x those files. Files only need to be executable if you want to execute them directly. In this case you're trying to launch them from ., so they don't need it.
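A short illustration of that alternative, using the directory from the question:
cd ~/virtual
/bin/bash --rcfile bin/activate    # starts a new bash session with the venv already active
# ...work inside the environment...
exit                               # leaving that shell takes the place of deactivate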
Instead of ./activate, use source activate.
For Windows you can do the following:
To create the virtual env: virtualenv envName --python=python.exe (needed if you have not created the environment variable)
To activate the virtual env: > \path\to\envName\Scripts\activate
To deactivate the virtual env: > \path\to\env\Scripts\deactivate
It works fine on newer Python versions.
Windows 10
On Windows these directories are created:
To activate the virtual environment in Windows 10:
down\scripts\activate
The \scripts directory contains the activate file.
Linux Ubuntu
On Ubuntu these directories are created:
To activate the virtual environment in Linux Ubuntu:
source ./bin/activate
The /bin directory contains the activate file.
Virtual environment copied from Windows to Linux Ubuntu (or vice versa)
If the virtual environment folder was copied from Windows to Linux Ubuntu, then, following its directory layout:
source ./down/Scripts/activate
I would recommend virtualenvwrapper as well. It works wonders for me, given how I always have problems with activating. http://virtualenvwrapper.readthedocs.org/en/latest/
Create your own Python virtual environment called <Your Env_name>.
I have named mine VE.
git clone https://github.com/pypa/virtualenv.git
python virtualenv.py VE
To activate your new virtual environment, run (notice it's not ./ here):
. VE/bin/activate
Sample output (note prompt changed):
(VE)c34299@a200dblr$
Once your virtual environment is set, you can remove the Virtualenv repo.
On Mac, change the shell to bash (keep in mind that this activate script works only in a bash shell).
[user@host tools]$. venv/bin/activate
.: Command not found.
[user@host tools]$source venv/bin/activate
Badly placed ()'s.
[user@host tools]$bash
bash-3.2$ source venv/bin/activate
(venv) bash-3.2$
Bingo, it worked. See, the prompt changed.
On Ubuntu:
user@local_host:~/tools$ source toolsenv/bin/activate
(toolsenv) user@local_host:~/tools$
Note: the prompt changed
I had trouble running source ./bin/activate; then I realized I was using tcsh as my terminal shell instead of bash. Once I switched, I was able to activate the venv.
Probably a little late to post my answer here, but I'll post it anyway; it might benefit someone.
I had faced the same problem.
The main reason was that I created the virtualenv as the "root" user
but later was trying to activate it as another user.
chmod won't work as you're not the owner of the file, hence the alternative is to use chown (to change the ownership)
For example:
If you have your virtualenv created at /home/abc/ENV
Then cd to /home/abc
and run the command: chown -Rv [user to whom you want to change ownership] [folder/file whose ownership needs to be changed]
In this example the command would be: chown -Rv abc ENV
After the ownership is successfully changed, you can simply run source ENV/bin/activate and you should be able to activate the virtualenv correctly.
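Putting it together, a minimal sketch of the fix; the user abc and the path /home/abc/ENV are the ones from the example, and sudo may be needed if you are no longer root:
cd /home/abc
sudo chown -Rv abc ENV        # hand ownership of the whole environment to user abc
source ENV/bin/activate       # sourcing now works for user abc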
1- Open PowerShell and navigate to your application folder
2- Enter your virtualenv folder, e.g.: cd .\venv\Scripts\
3- Activate the virtualenv by typing .\activate
