Move the virtualenvs to another host folder - python

By mistake, I forgot to specify the WORKON_HOME variable before creating my virtual environments, so they were created in the /root/.virtualenvs directory. They worked fine, and I did some testing by activating a certain environment and then running (env)$ pip freeze to see which modules were installed there.
So, when I discovered the WORKON_HOME path error, I needed to change the host directory to /usr/local/pythonenv. I created it, moved all the contents of /root/.virtualenvs to /usr/local/pythonenv, and changed the value of the WORKON_HOME variable. Now, activating an environment with the workon command seems to work fine (i.e., the prompt changes to (env)$). However, if I run (env)$ pip freeze, I get a much longer list of modules than before, and it does not include the ones that were installed in that particular env before the move.
I guess that just moving the files and pointing the WORKON_HOME variable at another directory was not enough. Is there some config where I should specify the new location of the host directory, or some config file for the particular environment?

Virtualenvs are not relocatable by default. You can use virtualenv --relocatable <virtualenv> to turn an existing virtualenv into a relocatable one and see if that works, but that option is experimental and not really recommended for use.
The most reliable way is to create new virtualenvs. Run pip freeze -l > requirements.txt in each old one to get a list of its installed packages, create the new virtualenv, and use pip install -r requirements.txt to install the packages into the new one.
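For example, a minimal sketch of that workflow with virtualenvwrapper (the env names are illustrative):
workon old-env                      # activate the old environment
pip freeze -l > requirements.txt    # record its locally installed packages
deactivate
mkvirtualenv new-env                # created under the new WORKON_HOME, and activated
pip install -r requirements.txt     # reinstall the packages into the new env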

I used the virtualenv --relocatable feature. It seemed to work, but then I found that a different Python version was installed on the destination host:
$ . VirtualEnvs/moslog/bin/activate
(moslog)$ ~/VirtualEnvs/moslog/bin/mosloganalisys.py
python: error while loading shared libraries: libpython2.7.so.1.0: cannot open shared object file: No such file or directory
Remember to recreate the same virtualenv tree on the destination host.

Related

Cannot execute pip after changing virtualenv folder name

I previously named my virtual environment "test". After that, I renamed it to "testt", and now I can't access pip commands anymore; I get the following error:
Fatal error in launcher: Unable to create process using '"C:\coding\test\test\Scripts\python.exe" "C:\coding\test\testt\Scripts\pip.exe" ': The system cannot find the file specified.
How can I fix this?
You should never rename a virtual Python environment. When the virtual environment is created, its path is hardcoded in several places (see the Scripts/activate* scripts, for example).
You could try to replace the hardcoded paths in all of the files, but I'm not sure how well this works or whether it changes between (Python/venv) versions.
Best thing to do is just remove the old virtual environment and create a new one.
If you're using a requirements.txt file, this is as easy as:
py -3.10 -m venv new_env
new_env\Scripts\python.exe -m pip install -r requirements.txt
(The example commands are for Windows and Python 3.10, but are similar for Linux and/or other Python versions.)
If you didn't use a requirements.txt file, you could run pip freeze before removing the old virtual environment to see which modules you had installed.
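For example (a sketch, assuming the renamed environment still lives at C:\coding\test\testt as in the error message; the pip.exe launcher is broken, but invoking pip through the interpreter usually still works):
C:\coding\test\testt\Scripts\python.exe -m pip freeze > requirements.txt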
Side note: this method has the advantage that you're actually verifying that your environment is well documented (i.e., the requirements.txt is up-to-date) and that you can reproduce it. This will make it easier to repeat this process in the future (e.g. on another computer) as well.

Is there a python package that allows teams to share venvs through a git-like interface?

I'm working with a team. We each have our own Windows system. We have shared drives and a shared git repository. We want to have a shared virtual environment (in Python).
My understanding (from previous questions from myself and others) is that virtual environments do not include all the files necessary for running Python; in particular, the shared VE does not include the Python interpreter.
I can see how we can create a shared VE, and it seems we could just copy it around, put it on the shared drive, or put it in a git repository. But my understanding is that this does not eliminate the need for individuals to install their own local versions of Python. Is that correct?
One of my colleagues has heard (or read) that "there is a package that allows teams to share their virtual environment configuration through a git-like interface. That way you can “pull” the updated configuration and it will install the new packages automatically. This allows each person to change the configuration and test it before releasing it to the team."
So is there a special package to enable this? Or is it just a regular venv that is included in the git repository with the other files? If we do this, then we must all put the venvs in the same place on our file systems, OR we have to go in and manually change the VIRTUAL_ENV variable in activate.bat. Is that correct?
In any case, we do all have to install our own local versions of python anyway. Is that correct?
If the virtual environment is on a shared drive (group readable), then your team members should be able to access it. A virtual environment is just a directory.
But my understanding is that this does not eliminate the need for individuals to install their own local versions of Python. Is that correct?
Virtual environments have their own python binaries, which you can see when you run which python inside the virtual environment after it is activated.
So is there a special package to enable this? Or is it just a regular venv that is included in the git repository with the other files? If we do this, then we must all put the venvs in the same place on our file systems, OR we have to go in and manually change the VIRTUAL_ENV variable in activate.bat. Is that correct?
I would advise against uploading a virtual environment directory to version control, since it contains binaries and configuration files that don't belong there. It's also unnecessary, because the dependencies are tracked in a requirements.txt file, which lists the pip dependencies and is committed to version control. Additionally, when the virtual environment is activated, the VIRTUAL_ENV environment variable is exported automatically, so there is no need to modify it.
Conclusion
For simplicity, it's probably best to have each user create their own virtual environment and install the dependencies from requirements.txt on their local machine. This also ensures users don't make a change to the virtual environment that affects other users, which is a drawback of the shared-drive approach above.
If they want to pull the latest requirements, then pulling the latest change with git pull and reinstalling the dependencies with pip install -r requirements.txt is good enough. You just have to ensure the virtual environment is activated, otherwise the dependencies will get installed system-wide. This is where the pipenv package also comes in handy.
Usually in my team projects, the README contains instructions for each team member to get this set up.
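For example, such README instructions often boil down to something like this (a sketch for Windows, matching the question's setup; the venv name is illustrative):
git pull
py -3 -m venv venv
venv\Scripts\activate
pip install -r requirements.txt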
Additionally, as Daniel Farrell helpfully mentioned in the comments, pip won't be able to manage packages like libffi, openssl, python-devel, etc. inside a virtual environment. This is where Docker containers become useful, since you can install dependencies inside an isolated environment built on top of the host operating system. This ensures the dependencies don't mess with the system-wide packages, which is good practice in any case.
An example Dockerfile I have used in the past:
FROM python:3.8-slim-buster
# Set environment variables:
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# Create virtual environment:
RUN python3 -m venv $VIRTUAL_ENV
# Install dependencies:
COPY requirements.txt .
RUN pip install -r requirements.txt
# Run the application:
COPY app.py .
CMD ["python", "app.py"]
I modified this from the Elegantly activating a virtualenv in a Dockerfile article.
Containerization aims to solve the "where does Python come from?" problem. My developers' teams usually use a Dockerfile that installs their requirements, within a docker-compose setup that spins up a development environment for their applications. Unlike a virtual environment, containers offer a complete userspace solution that works pretty well on Windows and macOS.

How can I install a conda environment when offline?

I would like to create a conda environment on a machine that has no network connection. What I've done so far is:
On a machine that is connected to the internet:
conda create -n python3 python=3.4 anaconda
Conda archived all of the relevant packages into \Anaconda\pkgs. I put these into a separate folder and moved it to the machine with no network connection. The folder has the path PATHTO\Anaconda_py3\win-64
I tried
conda create -n python=3.4 anaconda --offline --channel PATHTO\Anaconda_py3
This gives the error message
Fetching package metadata:
Error: No packages found in current win-64 channels matching: anaconda
You can search for this package on Binstar with
binstar search -t conda anaconda
What am I doing wrong? How do I tell conda to create an environment based on the packages in this directory?
You could try cloning root, which is the base env.
conda create -n yourenvname --clone root
Short answer: copy the whole environment from another machine with the same OS.
Why
Dependency. A package depends on other packages. When you install a package online, the package manager conda analyzes the package's dependencies and installs all the required packages for you.
The dependency problem is especially heavy with anaconda, because anaconda is a meta-package that depends on 160+ other packages.
Meta-packages are packages that do not contain actual software and simply depend on other packages being installed.
It would be absurd to download all these dependencies one by one and install them on the offline machine.
Detailed Solution
Get conda installed on another machine with same OS. Install the packages you need in an isolated virtual environment.
# create a env named "myvenv", name it whatever you want
# and install the package into this env
conda create -n myvenv --copy anaconda
--copy is used to "Install all packages using copies instead of hard- or soft-linking."
Find where the environments are stored with
conda info
The first value of the "envs directories" key is the location. Go there and package the whole sub-folder named "myvenv" (the env name from the previous step) into an archive.
Copy the archive to your offline machine. Check "envs directories" from conda info there, and extract the environment from the archive into that env directory on the offline machine.
Done.
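As a concrete sketch of the packaging and copying steps on Linux (the env name and the ~/anaconda3 install location are illustrative; both machines must run the same OS):
# on the online machine: archive the environment found via `conda info`
tar -czf myvenv.tar.gz -C ~/anaconda3/envs myvenv
# copy the archive over, then on the offline machine:
tar -xzf myvenv.tar.gz -C ~/anaconda3/envs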
In addition to copying the pkgs folder, you need to index it so that conda knows how to find the dependencies. See this ticket for more details and this script for an example of indexing the pkgs folder.
Using --unknown as @asmeurer suggests will only work if the package you're trying to install has no dependencies; otherwise you will get a "Could not find some dependencies" error.
Cloning is another option, but it will give you all of the root packages, which may not be what you want.
A lot of the answers here are not 100% related to the "when offline" part; they address the rest of the OP's question, which is not reflected in the question title.
If you came here because you need offline env creation on top of an existing Anaconda install, you can try:
conda create --offline --name $NAME
You can find the --offline flag documented here.
Have you tried without the --offline?
conda create -n anaconda python=3.4 --channel PATHTO\Anaconda_py3
This works for me when I am not connected to the Internet, provided I already have anaconda on the machine but in another location. If you are connected to the Internet when you run this command, you will probably get an error associated with not finding something on Binstar.
I'm not sure whether this contradicts the other answers or is the same, but I followed the instructions in the conda documentation and set up a channel on the local file system.
Then it's a simple matter of moving new package files to the local directory and running conda index on the channel sub-folder (which should have a name like linux-64).
I also set the Anaconda config setting offline to True, as described here, but I'm not sure whether that was essential.
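A sketch of that local-channel setup (paths are illustrative; the conda index command is provided by the conda-build package):
mkdir -p /local/channel/linux-64
cp *.tar.bz2 /local/channel/linux-64/        # the package files you carried over
conda index /local/channel/linux-64          # (re)build the channel metadata
conda create -n myenv --channel file:///local/channel --offline python=3.4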
Hope that helps.
The pkgs directory is not a channel. The flag you are looking for is --unknown, which causes conda to include files in the pkgs directory even if they aren't found in one of the channels.
Here's what worked for me on Linux:
(a) Create a blank environment: just create an empty directory under $CONDA_HOME/envs. Verify with conda info --envs.
(b) Activate the new env: source activate <env_name>
(c) Download the appropriate package (*.bz2) from https://anaconda.org/anaconda/repo on a machine with an internet connection and move it to the isolated host.
(d) Install using the local package: conda install <package_file>. For example, conda install python-3.6.4-hc3d631a_1.tar.bz2, where python-3.6.4-hc3d631a_1.tar.bz2 exists in the current dir.
That's it. You can verify by the usual means (python -V, conda list -n <env_name>). All related packages can be installed in the same manner.
I found the simplest method to be as follows:
Run 'conda create --name name package' with no special switches
Copy the URL of the first package it tried (unsuccessfully) to download
Use the URL on a connected machine to fetch the tar.bz2
Copy the tar.bz2 to the offline machine's /home/user/anaconda3/pkgs
Deploy the tar.bz2 in place
Delete the now unneeded tar.bz2
Repeat until the 'conda create' command succeeds
Here's a solution that may help. It's not very pretty, but it gets the job done. Suppose you have a machine with a conda environment in which you've installed all the packages you need; I will refer to this as ENV1. You will have to go to this environment's directory and locate it. It is usually found in \Anaconda3\envs. I suggest compressing the folder, but you could just use it as is. Copy the desired environment folder into your offline machine's directory for anaconda environments. This first step should get your new environment to respond to commands like conda activate.
You will notice, though, that software like spyder and jupyter doesn't work anymore (probably because of path differences). My solution to this was to clone the base environment on the offline machine into a new environment that I will refer to as ENV2. What you need to do then is copy the contents of ENV2 into ENV1 and replace the files.
This should overwrite the files related to spyder, jupyter, etc. and keep your imported packages intact.
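A rough sketch of that procedure on Windows (hypothetical paths; ENV1 is the copied-over environment, ENV2 is the fresh clone):
:: after copying ENV1 into %USERPROFILE%\Anaconda3\envs on the offline machine
conda create -n ENV2 --clone base
:: overwrite ENV1's files with ENV2's to fix the path-dependent tools
robocopy %USERPROFILE%\Anaconda3\envs\ENV2 %USERPROFILE%\Anaconda3\envs\ENV1 /E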

setting up Django Virtual Env error "The executable /var/bin/python (from --python=/var/bin/python) does not exist"

I was given a project to work on and am now trying to run that project in a virtual environment.
I am new to Python, but in the past I was comfortable with the manage.py runserver concept. I'm having trouble learning virtual environments.
I know that I have virtualenv installed.
The first direction I was given to run the virtual environment for this project was to run virtualenv --python=/var/bin/python --clear --no-site-packages --unzip-setuptools --setuptools ~/virtualenvs/project_name
That results in this error:
The executable /var/bin/python (from --python=/var/bin/python) does not exist
I already have Python installed, so what does this even mean? I am also confused about the syntax --python=/var/bin/python: is that a relative path that I should have switched out for something else? What does the "=/" actually represent?
Am I running the command in the wrong folder? I have tried running it both in the outer project_name folder, which contains a subfolder of the same name, and inside that subfolder (which contains manage.py).
However, I can't find the var/bin/... paths anywhere in either folder. Where should the bin paths be located?
Any help or insights would be much appreciated, thanks!
If you are new to virtual environments, these are the steps I would take to install a virtual environment. I hope this helps.
Setuptools
First to check if you already have it installed type the following:
python
>>> import setuptools
If you get another >>> prompt, then you have it installed; otherwise you'll get an error. If you happen to blow up setuptools, here's how you reinstall it:
http://pypi.python.org/pypi/setuptools
1. Download the Python 2.7 egg
2. Change directory into the newly unzipped folder
3. Run the following command:
sudo sh ~/folder/you/downloaded/to/setuptools-0.6c11-py2.7.egg
Virtualenvwrapper
sudo pip install virtualenvwrapper
Setup
1. Create your directories
sudo mkdir /project_name
sudo chown -R yourusername:admin /project_name
2. Find virtualenvwrapper.sh to use in step 3 below; check the following paths:
/Library/Frameworks/Python.framework/Versions/2.6/bin/virtualenvwrapper.sh
/usr/local/bin/virtualenvwrapper.sh
3. Update your profile script (~/.bash_profile or ~/.profile) in a text editor, adding the lines below at the bottom of the file. If you don't have either of these files in your home directory, create a file named .bash_profile in your home directory.
export WORKON_HOME=$HOME/.virtualenvs
source /insert/your/path/to/virtualenvwrapper.sh
4. Quit your Terminal app and restart it. You should see a bunch of folders get created when you restart it. This will only happen once.
5. Make your environment
mkvirtualenv django
(django)$ <- now you are in your new virtualenv
6. To leave your environment:
(django)$ deactivate
7. To enter your environment again, quit Terminal to reset paths so we can test our setup, then move into your working directory to check out a project:
workon django
(django)$ <- you are back in your environment
It seems that Python is not installed at /var/bin/python on your machine. The path seems a bit odd; a more common path is /usr/bin/python.
One way to check where Python is installed is to run which python. Try replacing /var/bin/python in the command you use when creating the virtualenv with the result of the which command.
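For example (the output and target path are illustrative):
$ which python
/usr/bin/python
$ virtualenv --python=/usr/bin/python ~/virtualenvs/project_name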
# Try this step-by-step procedure
1. Open a terminal and install virtualenv:
pip install virtualenv
2. Create the environment on your desktop:
cd desktop
desktop> virtualenv env
# A folder named env will appear on your desktop
3. Now activate the virtualenv:
desktop> cd env
desktop\env> .\Scripts\activate
# Now you will see:
(env) c:\...\desktop\env>

How to get virtualenv to use dist-packages on Ubuntu?

I know that virtualenv, if not passed the --no-site-packages argument when creating a new virtual environment, will link the packages in /usr/local/lib/python2.7/site-packages (for Python 2.7) into the newly created virtual environment. On Ubuntu 12.04 LTS, I have three locations where Python 2.7 packages can be installed (using the default, Ubuntu-supplied Python 2.7 installation):
1. /usr/lib/python2.7/dist-packages: this has my global installation of ipython, scipy, numpy, matplotlib – packages that I would find difficult and time-consuming to install individually (along with all their dependencies) if they were not available via the SciPy stack.
2. /usr/local/lib/python2.7/site-packages: this is empty, and I think it will stay that way on Ubuntu unless I install a package from source.
3. /usr/local/lib/python2.7/dist-packages: this has very important local packages for astronomy, notably those related to PyRAF, STScI, etc., and they are extremely difficult and time-consuming to install individually.
Note that a global directory such as /usr/lib/python2.7/site-packages does not exist on my system. Note also that my global installation of ipython, scipy, etc. lets me use those packages on-the-fly without having to source/activate a virtual environment every time.
Naturally, I now want to use virtualenv to create one virtual environment in my user home directory which I will source/activate for my future projects. However, I would like this virtual environment, while being created, to link/copy all of my packages in locations (1) and (3) in the list above. The main reason for this is that I don't want to go through the pip install process (if it is even possible) to re-install ipython, scipy, the astro-packages, etc. for this (and maybe other) virtual environments.
Here are my questions:
Is there a way for me to specify to virtualenv that I would like it to link/copy packages in these two dist-packages directories for virtual environments that are created in the future?
When I eventually update my global installation of scipy, ipython, etc. in the two dist-packages directories, will this also update/change the packages that my virtual environment uses (and which it originally got during virtualenv creation)?
If I ever install a package from source on Ubuntu, will it go in /usr/local/lib/python2.7/dist-packages, or /usr/local/lib/python2.7/site-packages?
Thanks in advance for your help!
This might be a legitimate use of PYTHONPATH, an environment variable that virtualenv doesn't touch and which uses the same syntax as the PATH environment variable: in bash, PYTHONPATH=/usr/lib/python2.7/dist-packages:/usr/local/lib/python2.7/dist-packages in a .bashrc or similar. If you followed this path:
1. You don't have to tell your virtual environment about this at all; it won't try to change it.
2. No relinking will be required.
3. Packages installed from source will still go wherever they would have gone (pip install always uses /usr/local/lib/python2.7/dist-packages/ on my Ubuntu) if you install them outside of your virtual environment. If you install them from within your virtual environment (while it's activated), then of course they'll be put in the virtual environment.
I'm just getting my head around virtualenv, but there seems to be an easier way than mentioned so far.
Since virtualenv 1.7, --no-site-packages has been the default behavior.
Therefore, using the --system-site-packages flag with virtualenv is all that is needed to get dist-packages on your path - if you use the tweaked virtualenv shipped by Ubuntu. (This answer and this one give some useful history.) I've tested this and it does work.
$ virtualenv --system-site-packages .
I agree with Thomas here - I can't see any action required in virtualenv to see the effect of updates in dist-packages.
Having tested that with python setup.py install, it does (again as Thomas said) still go to dist-packages. You could change that by building your own python, but that's a bit extreme.
PYTHONPATH works for me.
vim ~/.bashrc
add this line below:
export PYTHONPATH=$PYTHONPATH:/usr/lib/python2.7/dist-packages:/usr/local/lib/python2.7/dist-packages
source ~/.bashrc
In your virtualenv's site-packages directory, create a file named dist.pth.
In the file dist.pth, put the following:
../dist-packages
Now deactivate and reactivate your virtualenv. You should be set.
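For instance, with the virtualenv activated, this one-liner creates that file (assuming a Python 2.7 virtualenv layout):
echo '../dist-packages' > "$VIRTUAL_ENV/lib/python2.7/site-packages/dist.pth"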
What you want to achieve here is essentially to add a specific folder (dist-packages) to the Python search path. You have a number of options for this:
Use a path configuration (.pth) file; entries will be appended to the system path.
Modify PYTHONPATH (entries from it go to the beginning of the system path).
Modify sys.path directly from your Python script, i.e., append the required folders to it.
I think that for this particular case (enabling the global dist-packages folder) the third option is better, because with the first option you have to create a .pth file for every virtualenv you'll be working in (with some external shell script?). It's easy to forget when you distribute your package. The second option requires run-time setup (adding an environment variable), which, again, is easy to miss.
Only the third option requires no prerequisites at configure or run time and can be distributed without issues (on the same type of system, of course).
You can use a function like this:
def enable_global_distpackages():
    import sys
    sys.path.append('/usr/lib/python2.7/dist-packages')
    sys.path.append('/usr/local/lib/python2.7/dist-packages')
And then in the __init__.py file of your package:
enable_global_distpackages()
