I'm a novice at shell scripting, but I want to make a bash script to activate/deactivate a virtual environment created with virtualenv.
I then want to use this script as a service in Ubuntu by copying it into the /etc/init.d folder.
In my script, I have a variable like this:
VENV=/opt/odoo/odoo_server/venv_oddo/bin
This variable represents the bin path of my virtual environment.
Inside the script, I can activate the virtual environment with this statement:
. ${VENV}/activate
This is possible because activate is a file inside the bin directory of the virtual environment.
But I don't know what statement to use in my script to deactivate the virtual environment.
I can't do this: . ${VENV}/deactivate
The problem is that no file named deactivate exists; deactivate is a function defined inside the bin/activate file of the virtual environment.
Just deactivate. It will work in the script as well as on the command line, as long as you're using bash.
Edit: also, in most cases it is a better idea to spell out the full Python path in your scripts and services. It is stateless, more portable, and works pretty much everywhere. So instead of doing
. $VENV/bin/activate
/path/to/my/script.py --parameters
it is usually preferable to do
$VENV/bin/python /path/to/my/script --parameters
Trust me, it will save you debugging time.
It'll be hard to make a service like that useful.
. ${VENV}/activate # note the dot
or
source ${VENV}/activate
will source the activate script, i.e. run its contents as if they were part of the shell or script where you source them. virtualenv's activate is designed for this usage. In contrast, just executing the script normally with
${VENV}/activate # note: NO dot and NO 'source' command
will run its content in a subshell and won't have any useful effect.
However, your service script will already run in a subshell of its own. So except for any python commands you run as part of the service start process, it won't have any effect.
On the plus side, you won't even have to care about de-activating the environment, unless you want to run even more python stuff in the service start process, but outside of your virtualenv.
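The reason is easy to demonstrate: a child process gets a copy of the environment, and nothing it changes flows back to the parent. A minimal sketch (MY_VENV_MARKER is a made-up variable standing in for what activate sets):

```shell
#!/bin/sh
# A child shell sets a variable and exits; the parent never sees the change.
sh -c 'MY_VENV_MARKER=active'
echo "parent sees: ${MY_VENV_MARKER:-unset}"   # prints "parent sees: unset"
```

The same applies to an init.d script: whatever activate changes inside it disappears when the script's shell exits.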
The deactivate "command" provided by virtualenvwrapper is actually a shell function, and likewise for workon. If you have a virtual env active, you can list the names of these functions with typeset -F.
In order to use them in a script, they need to be defined there, because shell functions do not propagate to child shells.
To define these functions, source the virtualenvwrapper.sh script in the shell script where you intend to invoke these functions, e.g.:
source $(which virtualenvwrapper.sh)
That allows you to invoke these functions in your shell script like you would do in the shell:
deactivate
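The underlying caveat, that shell functions don't propagate to child shells, can be checked with a minimal bash sketch; greet is a made-up stand-in for functions such as workon:

```shell
#!/bin/bash
# A shell function is invisible to a child shell unless exported with
# bash's 'export -f'.
greet() { echo "hello from greet"; }
bash -c 'greet' 2>/dev/null || echo "child: greet not found"
export -f greet
bash -c 'greet'   # now the child shell can call it
```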
Update: What I described works for the other functions provided by virtualenvwrapper (e.g. workon). I incorrectly assumed it would work also for deactivate, but that one is a more complicated case, because it is a function that will be defined only in the shell where workon or activate was run.
Copy the deactivate code from ${VENV}/activate and paste it into your ~/.bashrc:
deactivate() {
    # reset old environment variables
    if [ -n "$_OLD_VIRTUAL_PATH" ] ; then
        PATH="$_OLD_VIRTUAL_PATH"
        export PATH
        unset _OLD_VIRTUAL_PATH
    fi
    if [ -n "$_OLD_VIRTUAL_PYTHONHOME" ] ; then
        PYTHONHOME="$_OLD_VIRTUAL_PYTHONHOME"
        export PYTHONHOME
        unset _OLD_VIRTUAL_PYTHONHOME
    fi
    # This should detect bash and zsh, which have a hash command that must
    # be called to get it to forget past commands. Without forgetting
    # past commands the $PATH changes we made may not be respected
    if [ -n "$BASH" -o -n "$ZSH_VERSION" ] ; then
        hash -r
    fi
    if [ -n "$_OLD_VIRTUAL_PS1" ] ; then
        PS1="$_OLD_VIRTUAL_PS1"
        export PS1
        unset _OLD_VIRTUAL_PS1
    fi
    unset VIRTUAL_ENV
    if [ ! "$1" = "nondestructive" ] ; then
        # Self destruct!
        unset -f deactivate
    fi
}
Then run:
$ . $VENV/activate
$ deactivate
I have selectively used Python 2.7 and Python 3.5 in this way without problems.
I'd like to know the reason for the downvote.
Somehow deactivate also can't be found in my case (I usually work under far2l inside bash). I use this solution:
unset VIRTUAL_ENV && deactivate
After that, pip -V shows a path in .local.
If you only need to programmatically disable/change a virtualenv, you can use a shell function instead of a shell script. For example, put this at the end of your ~/.bashrc or ~/.bash_aliases (if you have it set up) or ~/.zshrc or ~/.zsh_aliases (if you use zsh):
function ch() {
# change this to your directory
cd ~/git-things/my_other_py_project
# this works, as a shell function won't spawn a subshell as the script would
deactivate
# re-source to change the virtualenv (my use case; change to fit yours)
source .venv-myotherpyproject/bin/activate
}
Restart the shell or re-source the file you changed with source ~/.zsh_aliases and use the command ch to execute the function.
Related
I created a Python file that uses some packages I installed in a virtual environment. Now, when I try to run that file, it runs on the default interpreter, and I have to activate the virtual environment every time I want to run it. Is there any way around that?
In short: I want the file itself to select where to look for its packages.
You can add the path to the virtual environment's interpreter directly to the shebang at the top of the script. For example, if your virtual environment is stored at /home/ishant/venv, the shebang would be
#!/home/ishant/venv/bin/python
Then, if you execute your script directly (after making it executable with chmod +x or the like), your virtual environment will be used.
(Activating a virtual environment simply updates your PATH variable so that python resolves to the virtual environment, rather than your "regular" environment. You can always access the tools in the virtual environment directly instead.)
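That PATH mechanism is easy to see in isolation; a sketch using the hypothetical /home/ishant/venv path from above:

```shell
#!/bin/sh
# Activation prepends the venv's bin directory to PATH, so command lookup
# finds the venv's python before the system one.
VENV=/home/ishant/venv
PATH="$VENV/bin:$PATH"
echo "first PATH entry: $(printf '%s' "$PATH" | cut -d: -f1)"
# prints "first PATH entry: /home/ishant/venv/bin"
```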
Command line only (Linux)
Put this in your ~/.bashrc and create the virtual env with name 'venv' inside the project root:
function cd() {
    if [[ -d ./venv ]] ; then
        deactivate
    fi
    builtin cd "$1"
    if [[ -d ./venv ]] ; then
        . ./venv/bin/activate
    fi
}
When you cd into a directory, it looks for a virtualenv named venv, activates it, and deactivates it when you leave.
Alternative with absolute paths (Linux and Windows)
In cases where you want to run the script without bash, you can run it with the absolute path to the Python interpreter inside the virtualenv.
This is from inside the project dir:
# Posix:
/path/to/virtualenvname/bin/python run.py
# Windows:
C:\path\to\virtualenvname\Scripts\python.exe run.py
Or if you want to execute it from outside of the project dir:
# Posix:
/path/to/virtualenvname/bin/python /path/to/projectdir/run.py
# Windows:
C:\path\to\virtualenvname\Scripts\python.exe C:\path\to\projectdir\run.py
OK, so I get your point. What I think is: why not add some code so that whenever you execute the file, it automatically uses the virtual environment?
--code for linux
import os
os.system("source <virtualenv_name>/bin/activate")
--code for windows
import os
os.system("<virtualenv_name>\\Scripts\\activate")
And at last line add
os.system("deactivate")
Add these lines to the beginning of your program and see if it works.
Hope this helps you solve your problem.
Thanks!!
At the top of my makefile I have this line:
SHELL := /bin/sh
which is needed for most of the commands. However, I would like to also have a make command to activate my virtual env, which is on a different path.
Here is the code that I wrote for it:
activate:
	source ~/.envs/$(APP)/bin/activate
The problem with this is that it just prints what is written here, and it doesn't get executed. I read that it might have something to do with only bash knowing about source, but I can't figure out how to temporarily switch shells within the activate target.
How would I have to write this method, so that it activates my virtualenv?
It does get executed.
Virtualenv works by modifying your current process's environment (that's why you have to "source" it). However, one process cannot modify the environment of another process. So, to run your recipe, make invokes a shell and passes it your virtualenv command; it works, then the shell exits, and your virtualenv is gone.
In short, there's no easy way to do this in a makefile. The simplest thing to do is create a script that first sources the virtualenv then runs make, and run that instead of running make.
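The per-line behavior can be reproduced without make: each recipe line is roughly an independent sh -c call, so state set by one line is gone by the next. A minimal sketch (MYVAR stands in for everything activate sets):

```shell
#!/bin/sh
# "Recipe line 1": sources the environment in its own shell, which then exits.
sh -c 'MYVAR=set_by_activate'
# "Recipe line 2": a brand-new shell; the variable is gone.
sh -c 'echo "next line sees: ${MYVAR:-nothing}"'   # prints "next line sees: nothing"
```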
Create a file called "make-venv" like this:
#!/bin/bash
source ./.venv/bin/activate
$2
Then add this to the first line of your Makefile
SHELL=./make-venv
Now, make-venv activates virtualenv before every command runs. Probably inefficient, but functional.
You can do it by using set, which allows you to set or unset values of shell options and positional parameters:
set -a && . venv/bin/activate && set +a
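Why this works: set -a marks every variable assigned while it is on for export, so the (normally unexported) variables that activate sets become visible to child processes. A sketch with a made-up variable instead of a real venv:

```shell
#!/bin/sh
# FOO_FROM_ACTIVATE stands in for a variable assigned by bin/activate.
set -a
FOO_FROM_ACTIVATE=hello
set +a
sh -c 'echo "child sees: $FOO_FROM_ACTIVATE"'   # prints "child sees: hello"
```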
I have a bash script that needs to install some Python packages on the system rather than in a virtual environment, which may or may not be active when the script is executed.
This script is called by people who may already have a Python virtual environment activated, and I want to be sure that a few commands do not use it.
I tried to use the deactivate command, but it seems it is not available, even though bash detects the virtual environment (the VIRTUAL_ENV variable is present).
As a side note, I don't want to permanently disable the virtual environment; I just want to run a few commands outside it. How can I do this?
If activating before starting the script
If you did the activate step in a parent shell, not in the shell instance running the script itself, then non-exported variables and functions are unavailable during its runtime.
To be entirely clear about definitions:
source my-virtualenv/bin/activate # this runs in the parent shell
./my-shell-script # the shell script itself is run in a child process created by
# fork()+execve(); does not inherit shell variables / functions, so
# deactivate WILL NOT WORK here.
(source my-shell-script) # creates a subshell with fork(), then directly invokes
# my-shell-script inside that subshell; this DOES inherit shell
# variables / functions, and deactivate WILL WORK here.
You have three options:
Export the deactivate function and its dependencies from the parent shell, before starting the script.
This is as given below, and looks something like:
source my-virtualenv/bin/activate
export VIRTUAL_ENV ${!_OLD_VIRTUAL_@}
export -f deactivate
./my-script-that-needs-to-be-able-to-deactivate
You could optionally define an activation function that does this for you, like so:
# put this in your .bashrc
activate() {
source "$1"/bin/activate && {
export VIRTUAL_ENV ${!_OLD_VIRTUAL_@}
export -f deactivate
}
}
# ...and then activate virtualenvs like so:
activate my-virtualenv
Make some guesses, within the script, about what the prior Python environment looked like.
This is less reliable, for obvious reasons; however, as virtualenv does not export the shell variables containing the original PYTHONHOME, that information is simply unavailable to child-process shells; a guess is thus the best option available:
best_guess_deactivate() {
if [[ $VIRTUAL_ENV && $PATH =~ (^|:)"$VIRTUAL_ENV/bin"($|:) ]]; then
PATH=${PATH%":$VIRTUAL_ENV/bin"}
PATH=${PATH#"$VIRTUAL_ENV/bin:"}
PATH=${PATH//":$VIRTUAL_ENV/bin:"/}
unset PYTHONHOME VIRTUAL_ENV
fi
}
...used within a limited scope as:
run_python_code_in_virtualenv_here
(best_guess_deactivate; run_python_code_outside_virtualenv_here)
run_python_code_in_virtualenv_here
Run the script in a forked child of the shell that first sourced activate with no intervening exec() call
That is, instead of invoking your script as a regular subprocess, with:
# New shell instance, does not inherit non-exported (aka regular shell) variables
./my-shell-script
...source it into a forked copy of the current shell, as
# Forked copy of existing shell instance, does inherit variables
(source ./my-shell-script)
...or, if you trust it to hand back control to your interactive shell after execution without messing up state too much (which I don't advise), simply:
# Probably a bad idea
source ./my-shell-script
All of these approaches have some risk: Because they don't use an execve call, they don't honor any shebang line on the script, so if it's written specifically for ksh93, zsh, or another shell that differs from the one you're using interactively, they're likely to misbehave.
If activating within the script
The most likely scenario is that the shell where you're running deactivate isn't a direct fork()ed child (with no intervening exec-family call) of the one where activate was sourced, and thus has inherited neither the functions nor the (non-exported) shell variables created by that script.
One means to avoid this is to export the deactivate function in the shell that sources the activate script, like so:
printf 'Pre-existing interpreter: '; type python
. venv-dir/bin/activate
printf 'Virtualenv interpreter: '; type python
# deactivate can be run in a subshell without issue, scoped to same
printf 'Deactivated-in-subshell interpreter: '
( deactivate && type python ) # this succeeds
# however, it CANNOT be run in a child shell not forked from the parent...
printf 'Deactivated-in-child-shell (w/o export): '
bash -c 'deactivate && type python' # this fails
# ...unless the function is exported with the variables it depends on!
export -f deactivate
export _OLD_VIRTUAL_PATH _OLD_VIRTUAL_PYTHONHOME _OLD_VIRTUAL_PS1 VIRTUAL_ENV
# ...after which it then succeeds in the child.
printf 'Deactivated-in-child-shell (w/ export): '
bash -c 'deactivate && type python'
My output from the above follows:
Pre-existing interpreter: python is /usr/bin/python
Virtualenv interpreter: python is /Users/chaduffy/test.venv/bin/python
Deactivated-in-subshell interpreter: python is /usr/bin/python
Deactivated-in-child-shell (w/o export): bash: deactivate: command not found
Deactivated-in-child-shell (w/ export): python is /usr/bin/python
Assuming you've fixed that, let's run once more through using a subshell to scope deactivation to make it temporary:
. venv-dir/bin/activate
this-runs-in-venv
# minor performance optimization: exec the last item in the subshell to balance out
# ...the performance cost of creating that subshell in the first place.
(deactivate; exec this-runs-without-venv)
this-runs-in-venv
You could always reference the global Python directly:
/usr/bin/python2.7 -E my_python_command
If you're concerned about such a path being unreliable, you could either:
Configure Python in a unique, safe, static location upon installation, and reference that
Invoke a subshell that is not inside the virtual environment
Use an alternate name for your virtualenv Python executable so that it doesn't conflict on the path, i.e. the virtualenv Python would be python-virtual, and python would still lead to the global installation.
Then it would be something like:
python -E my_command # global
python-virtual my_command # virtual
I put the following in my python_go.py
import os
os.system("cd some_dir") # This is the directory storing an existing virtual environment
os.system(". activate") #because I want to activate the virtual environment
os.system("cd another_dir") #this the directory I can start my work
I hoped that running python_go.py would do the work mentioned above.
But when I run it, only the first step seems to happen; the rest, e.g. . activate, does not seem to work.
Can someone tell me how to do it? Thank you!!
Most probably you don't have to change into some_dir to source activate, so saving these lines
. some_dir/activate
cd another_dir
as, let's say go.sh and doing
. go.sh
has the same effect
If you're relying on os.system(". activate") to work when activate is in the directory some_dir, that won't work, because the current directory won't persist across calls to os.system().
You're going to be better off calling a shell script that aggregates all three of the commands you want to do and execute that once from the python script.
Otherwise you would want to set up the environment for the parent Python process using os.chdir() before calling os.system() for the activate call. Also, the os.system(". activate") call won't do what you want, because the "dot space" notation loads information into a shell that goes away when the os.system call finishes.
Edited (to your followup comment):
Your shell script should look like this (do_activate.sh):
cd some_dir
. activate
cd another_dir
and the python code like this:
os.system("./do_activate.sh")
Keep in mind that whatever environment variables were saved by ". activate" won't persist after the os.system call.
Your code does nothing: os.system() starts a new shell for each command, so all the os.system() calls have no lasting effect. cd and . activate can affect only the current shell (and possibly its children).
If all you want is to activate a virtualenv in the current shell then you should use a shell command:
$ . some_dir/activate && cd another_dir
Note: the command has effect only on the current (running) shell (and its descendants).
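The point about os.system() can be verified from the shell side: each call is equivalent to an independent sh -c, so a cd in one call cannot affect the next. A minimal sketch:

```shell
#!/bin/sh
# What os.system("cd some_dir") amounts to: a child shell changes directory
# and exits; the parent's working directory is untouched.
before=$(pwd)
sh -c 'cd /tmp'
after=$(pwd)
[ "$before" = "$after" ] && echo "working directory unchanged"
```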
virtualenvwrapper provides several hooks that allow you to execute commands before/after activating a virtualenv; e.g., you could put cd another_dir into $VIRTUAL_ENV/bin/postactivate, and then it is enough to run:
$ workon <virtualenv-name>
to activate virtualenv-name virtualenv and run all the hooks (cd another_dir in this case).
You probably want to install virtualenvwrapper, which does what you want:
workon envname will source the file and activate the virtualenv.
You can then run setvirtualenvproject in the desired directory, and you'll automatically go to the directory where the project is located. You only need to run this command once; it will then happen automatically from then on.
I recently installed the Anaconda version of Python. Now when I type python into the terminal it opens the Anaconda distribution rather than the default distribution. How do I get it to use the default version for the command python on Linux (Ubuntu 12.04 (Precise Pangolin))?
Anaconda adds the path to your .bashrc, so it is found first. You can add the path to your default Python instance to .bashrc or remove the path to Anaconda if you don't want to use it.
You can also use the full path /usr/bin/python in Bash to use the default Python interpreter.
If you leave your .bashrc file as is, any command you run using python will use the Anaconda interpreter. If you want, you could also use an alias for each interpreter.
You will see something like export PATH=$HOME/anaconda/bin:$PATH in your .bashrc file.
So basically, if you want to use Anaconda as your main everyday interpreter, use the full path to your default Python or create an alias. If you want it the other way around, remove the export PATH=.... from bashrc and use full path to Anaconda Python interpreter.
Having tried all the suggestions so far, I think modifying the export statement in file ~/.bashrc, as Piotr Dobrogost seems to suggest, is the best option considering the following:
If you remove the whole statement, you have to use full paths for Conda binaries.
Using Conda 4.4.10 links in the directory anaconda/bin/ point to binaries in the same directory, not the system ones in /usr/bin.
Using this approach you get the system programs for everything previously included in $PATH, and also the programs specific to Anaconda, without using full paths.
So in file ~/.bashrc instead of
# Added by the Anaconda3 4.3.0 installer
export PATH="/home/user/anaconda3/bin:$PATH"
one would use
export PATH="$PATH:/home/user/anaconda3/bin"
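The difference is purely about search order; a sketch with the hypothetical /home/user path from above:

```shell
#!/bin/sh
# PATH is searched left to right, so whichever directory comes first wins
# when both contain a 'python'.
prepended="/home/user/anaconda3/bin:/usr/bin"
appended="/usr/bin:/home/user/anaconda3/bin"
echo "prepended searches first: $(printf '%s' "$prepended" | cut -d: -f1)"
echo "appended searches first: $(printf '%s' "$appended" | cut -d: -f1)"
```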
I faced the same issue and you can do the following.
Go into your .bashrc file and you will find a similar sort of line:
export PATH=~/anaconda3/bin:$PATH
Comment it out and instead add:
alias pyconda='~/anaconda3/bin/python3'
or whatever your path is. This worked for me.
In the year 2020, Conda adds in a more complicated block of code at the bottom of your .bash_profile file that looks something like this:
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/Users/spacetyper/opt/miniconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/Users/spacetyper/opt/miniconda3/etc/profile.d/conda.sh" ]; then
. "/Users/spacetyper/opt/miniconda3/etc/profile.d/conda.sh"
else
export PATH="/Users/spacetyper/opt/miniconda3/bin:$PATH"
fi
fi
unset __conda_setup
# <<< conda initialize <<<
To use the default Python install by default: Simply move this section of code to the very top of your .bash_profile file.
To give yourself the option of using the Conda installed Python: Add this line below the Conda code block above.
alias pyconda="/Users/spacetyper/opt/miniconda3/bin/python3"
Now you should be able to call the system Python install with python and the Conda install with pyconda.
In 2020, as spacetyper mentioned, it acts differently. I found a handy solution in this question: How do I prevent Conda from activating the base environment by default?
To disable automatic base activation:
conda config --set auto_activate_base false
It'll create a .condarc in your home directory after running the first time.
I found that, though I removed export=.../anaconda3/bin:$PATH, there was still .../anaconda3/envs/py36/bin (my virtual environment in Anaconda) in PATH, and the shell still used the Anaconda Python.
So I ran export PATH=/usr/bin:$PATH (/usr/bin is where the system Python resides). Though /usr/bin was already in PATH, this makes it searched before the Anaconda path, so the shell uses the system Python when you type python, python3.6, pip, pip3, and so on.
You can get back to Anaconda by using an alias as mentioned above, or default to Anaconda again by commenting out export PATH=/usr/bin:$PATH.
There are python, python2, and python2.7 shortcuts in both the /home/username/anaconda/bin/ and /usr/bin/ directories, so you can delete one of them from one folder and use that name for the other.
That is, if you delete the python2 shortcut from the Anaconda directory, python will give you the Anaconda version and python2 the default version in the terminal.
I use Anaconda sparingly to build cross-platform packages, but I don't want it as my daily driver for Python. For Anaconda, Ruby, and Node.js projects I've adopted environment sandboxing, which essentially hides functionality behind a function, away from your path, until you specifically need it. I first learned about it from these two GitHub repositories:
https://github.com/benvan/sandboxd
https://github.com/maximbaz/dotfiles
I have a file of sandboxing functions that looks like this:
.zsh/sandboxd.zsh:
#!/bin/zsh
# Based on
# https://github.com/maximbaz/dotfiles/.zsh/sandboxd.zsh
# which was originally adapted from:
# https://github.com/benvan/sandboxd
# Start with an empty list of all sandbox cmd:hook pairs
sandbox_hooks=()
# deletes all hooks associated with cmd
function sandbox_delete_hooks() {
local cmd=$1
for i in "${sandbox_hooks[@]}";
do
if [[ $i == "${cmd}:"* ]]; then
local hook=$(echo $i | sed "s/.*://")
unset -f "$hook"
fi
done
}
# Prepares the environment and removes hooks
function sandbox() {
local cmd=$1
# NOTE: Use original grep, because aliased grep is using color
if [[ "$(type $cmd | \grep -o function)" = "function" ]]; then
(>&2 echo "Lazy-loading '$cmd' for the first time...")
sandbox_delete_hooks $cmd
sandbox_init_$cmd
else
(>&2 echo "sandbox '$cmd' not found.\nIs 'sandbox_init_$cmd() { ... }' defined and 'sandbox_hook $cmd $cmd' called?")
return 1
fi
}
function sandbox_hook() {
local cmd=$1
local hook=$2
#echo "Creating hook ($2) for cmd ($1)"
sandbox_hooks+=("${cmd}:${hook}")
eval "$hook(){ sandbox $cmd; $hook \"\$@\"; }"
}
.zshrc
In my .zshrc I create my sandbox'd function(s):
sandbox_hook conda conda
This command turns the normal conda executable into:
conda () {
    sandbox conda
    conda "$@"
}
An added bonus of using this technique is that it speeds up shell loading times because sourcing a number of wrapper scripts (e.g. nvm, rvm, etc.) can slow your shell startup time.
It also bugged me that Anaconda installed its Python 3 executable as python by default, which breaks a lot of legacy Python scripts, but that's a separate issue. Using sandboxing like this makes me explicitly aware that I'm using Anaconda's Python instead of the system default.
Anaconda 3 adds more than a simple line in my .bashrc file.
However, it also backs up the original .bashrc file into a .bashrc-anaconda3.bak file.
So my solution was to swap the two.
In my case, when I had
alias python='/usr/bin/python3.6'
in my ~/.bashrc, it always called python3.6, both inside and outside of an Anaconda virtual environment.
With this setup, you can still select the Python version via python3 in each virtual environment.