Pipenv: Activate virtual environment on new computer after git clone - python

I copied a repo containing a Python Django project from my old computer to my new one via git clone. I manage the dependencies in the project via pipenv.
After successfully cloning my repo I wanted to start working on the new computer and tried to select the relevant Python interpreter for my project in VS Code. However, the path was not on the list.
So first I tried the command pipenv --venv, which gave me the feedback: No virtualenv has been created for this project
So I thought I might need to activate the virtual environment first, before being able to select it in VS Code. So I ran pipenv shell in the root directory of my project.
However, this seems to have created a new virtual environment: Creating a virtualenv for this project… Pipfile: C:\path\to\my\cloned\project\Pipfile
My questions:
1.) Is this the correct way to activate a pipenv virtual environment on a new computer after copying the project via git clone? And if not,...
2.1) Does the way I did it cause any problems which I should be aware of?
2.2) What would be the correct procedure to activate my virtual environment on a new computer?

In general, a virtual environment probably shouldn't be copied to GitHub. You'll get a bunch of unneeded files that clog your repo.
Instead, you should create a requirements.txt from your existing environment with pip freeze > requirements.txt and commit that file.
Then, when someone else clones your repo, they can set up a new virtual environment using any tool they want and run python -m pip install -r requirements.txt.
That is, requirements.txt is like a recipe for how to create your environment. By providing the recipe, users can apply it any way they want.
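For example, the full round trip might look like this (a minimal sketch assuming a Unix-like shell; on Windows the activation script is .venv\Scripts\activate):
# on the old machine, inside the working environment
pip freeze > requirements.txt
# commit requirements.txt, then on the new machine after cloning:
python -m venv .venv
source .venv/bin/activate
python -m pip install -r requirements.txt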

Use:
pipenv install
It worked on Ubuntu and should also work on a Mac.
I tried it on Windows and it triggered some errors.
"If you download a source repository for a project that uses Pipenv for package management, all you need to do is unpack the contents of the repository into a directory and run pipenv install (no package names needed). Pipenv will read the Pipfile and Pipfile.lock files for the project, create the virtual environment, and install all of the dependencies as needed."
https://www.infoworld.com/article/3561758/how-to-manage-python-projects-with-pipenv.html
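So, for the scenario in the question, a minimal sequence on the new computer might be (the repository URL and folder name are placeholders; this assumes Pipfile and Pipfile.lock are committed):
git clone https://github.com/you/your-project.git
cd your-project
pipenv install      # reads Pipfile/Pipfile.lock and creates the virtualenv
pipenv --venv       # now prints the environment path you can select in VS Code
pipenv shell        # optional: spawn a shell inside the environment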

Related

Unable to clone Python venv to another PC

I want to clone my existing venv to another PC, but simply copying and pasting it does not work. When I copy the venv to the second machine and run
pip list
It only lists pip and setuptools as the installed packages.
I tried another way to clone the packages.
I created a new venv on the second machine and copied all the files of the first venv into that new venv, skipping existing files with the same name. Now, when I run
pip list
It shows all the dependencies, but when I try to launch Jupyter Notebook with
jupyter notebook
It gives the following error.
Fatal error in launcher: Unable to create process using '"f:\path\to\first_venv\on_first_machine\scripts\python.exe"
"C:\path\to\new_venv\on_the_second_machine\Scripts\jupyter.exe" notebook': The system cannot find the file specified.
I don't know how to make things work. Please help!
Edit
The problem is I don't have an internet connection on the second machine. Actually, it's a remote machine with some security protocols applied, and having no internet connection is part of the security! My bad :'(
You can't copy-paste venvs from one machine to another since scripts in them may refer to system locations. (The same stands for attempting to move venvs within a machine.)
Instead, recreate the environment on the new machine:
On the old machine, run pip freeze -l > packages.txt in the virtualenv.
Move packages.txt over to the new machine.
Create a new virtualenv on the new machine and enter it.
Install the packages from the txt file: pip install -r packages.txt.
EDIT: If you don't have internet access on the second machine, you can continue from step 2 with:
Run pip wheel -w wheels -r packages.txt in the venv on the first machine. This will download and build *.whl packages for all the packages you require. Note that this assumes both machines are similar in OS and architecture!
Copy the wheel files over to the new machine.
Create a new virtualenv on the new machine and enter it.
Install the packages from wheels in the new virtualenv: pip install *.whl.
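Putting the offline variant together, a rough sketch (assuming both machines share the same OS, architecture and Python version; paths are illustrative):
# on the first (online) machine, inside the old venv
pip freeze -l > packages.txt
pip wheel -w wheels -r packages.txt
# copy packages.txt and the wheels/ directory to the second machine, then:
python -m venv new_venv
source new_venv/bin/activate      # on Windows: new_venv\Scripts\activate
pip install --no-index --find-links=wheels -r packages.txt
Using --no-index with --find-links is an alternative to pip install *.whl that keeps the pinned versions from packages.txt and never touches the network.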
You should never copy a virtual environment between machines. The correct way is to export the dependencies installed in the environment using pip freeze and create a new virtual environment on the other machine.
# On the first machine
pip freeze > requirements.txt
# Copy requirements.txt to the other machine, or store in a source repository
# Then install the requirements in the new virtual environment
pip install -r requirements.txt

How to move installed packages to a newly created virtual environment?

I've downloaded a lot of packages into the global environment (let's say so). Now, I want to create a new virtual environment and move some of the packages to that environment. How would I do that?
While you could copy files/directories from the site-packages directory of your global installation into the site-packages of your virtual env, you may experience problems (missing files, binary mismatch, or others). Don't do this if you're new to python packaging mechanisms.
I would advise that you run pip freeze from your global installation to get a list of what you installed, and then store that output as a requirements.txt with your source, and put it under source management. Then run pip install -r requirements.txt after activating your virtualenv, and you'll replicate the dependencies (with the same versions) into your virtualenv.
If you try to copy or rename a virtual environment, you will discover that the copied environment does not work. This is because a virtual environment is closely tied to both the Python it was created with, and the location it was created in. (The “relocatable” option does not work.)
However, this is very easy to fix. Instead of moving/copying, just create a new environment in the new location. To create the virtual environment (this worked for me; you can also see the link below):
pip install virtualenv
virtualenv NameOfYourVirtualEnvironment
source NameOfYourVirtualEnvironment/bin/activate
Then, run pip freeze > requirements.txt in the old environment to create a list of the packages installed in it, which in your case is the global environment. With that, you can just run pip install -r requirements.txt in the new environment to install packages from the saved list. Of course, you can copy requirements.txt between machines. In many cases it will just work; sometimes you might need a few modifications to requirements.txt to remove OS-specific stuff.
Source: https://chriswarrick.com/blog/2018/09/04/python-virtual-environments/
This may also work for you:
How to import a globally installed package to virtualenv folder
https://gist.github.com/k4ml/4080461

Do we need to upload virtual env on github too?

This is my GitHub repo
https://github.com/imsaiful/backmyitem
I push from my local machine and pull the changes in Amazon EC2.
Earlier I had not added the virtual env folder to my repo, but now I have changed some files in the admin directory, which is contained in the virtual env. So should I add the virtual env to my GitHub repo too, or should I instead change the same thing on my remote server manually?
As was mentioned in a comment it is standard to do this through a requirements.txt file instead of including the virtualenv itself.
You can easily generate this file with the following:
pip freeze > requirements.txt
You can then install the virtualenv packages on the target machine with:
pip install -r requirements.txt
It is important to note that including the virtualenv will often not work at all as it may contain full paths for your local system. It is much better to use a requirements.txt file.
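You can see those hard-coded paths yourself: on a Linux/macOS venv, the entry-point scripts start with a shebang pointing at the interpreter in the directory where the environment was originally created (the paths below are illustrative):
$ head -1 venv/bin/pip
#!/home/olduser/myproject/venv/bin/python
If that path doesn't exist on the machine that cloned the repo, every script in the environment breaks.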
No. Although the environment is 100% there, if someone else were to pull it down, the environment's paths wouldn't match their system, not to mention that Python version discrepancies would likely crop up.
The best thing to do is to create what is known as a requirements.txt file.
When you have created your environment, you can pip install this and pip install that. You'll start to build up a number of project-specific dependencies.
Once you start to build up a number of project dependencies I would then freeze your local Python environment (analogous to a package.json for Node.js package dependency management). I would recommend doing the following in your terminal:
(local_python_environment) $ pip install django && pip freeze > requirements.txt
(local_python_environment) $ pip install requests && pip freeze > requirements.txt
That is to say, freeze your environment to a requirements.txt file every time a new dependency is installed.
Once a collaborator pulls down your project, they can then create a fresh Python environment:
$ python3 -m venv local_python_environment
(* Please use Python 3 and not Python 2!)
And then activate that environment and install from your requirements.txt which you have included in your version control:
$ source local_python_environment/bin/activate
(local_python_environment) $ pip install -r requirements.txt
Excluding your virtual environment is probably analogous to ignoring node_modules! :)
No, it's not necessary to upload the virtualenv folder to GitHub. When you push your code, Git will only ignore files (such as the virtualenv) if you add them to your .gitignore.
Virtual Environment
Basically, a virtual environment is a tool that helps keep the dependencies required by different projects separate by creating an isolated Python environment for each of them. It is one of the most important tools that most Python developers use. Apart from that, you can add a requirements.txt file to your project.
requirements.txt
It is a file that lists which libraries are needed to run the application. You can create a requirements.txt file with this simple command.
pip freeze > requirements.txt
After running this command, all installed libraries are added to that file. If you work on your project without activating a virtualenv, pip automatically uses the system environment, and the file will also include packages that are not necessary for your project.
You should add the virtualenv to your .gitignore. In fact, GitHub has a recommended .gitignore template for Python that covers which files should be added and which shouldn't:
Github recommendation for gitignore
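A minimal .gitignore for a Django project with a local virtual environment might contain something like this (the directory names are just common conventions; GitHub's Python template is more thorough):
# virtual environments
venv/
.venv/
env/
# Python bytecode and local artifacts
__pycache__/
*.pyc
db.sqlite3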

pipenv: deployment workflow

I am thinking about switching from pip & virtualenv to pipenv. But after studying the documentation I am still at a loss on how the creators of pipenv structured the deployment workflow.
For example, in development I have a Pipfile & a Pipfile.lock that define the environment. Using a deployment script I want to deploy
git pull via Github to production server
pipenv install creates/refreshes the environment in the home directory of the deployment user
But I need a venv in a specific directory which is already configured in systemd or supervisor. E.g.: command=/home/ubuntu/production/application_xy/env/bin/gunicorn module:app
pipenv creates the env in some location such as
/home/ultimo/.local/share/virtualenvs/application_xy-jvrv1OSi
What is the intended workflow to deploy an application with pipenv?
You have a few options there.
You can run your gunicorn via pipenv run:
pipenv run gunicorn module:app
This creates a slight overhead, but has the advantage of also loading environment from $PROJECT_DIR/.env (or other $PIPENV_DOTENV_LOCATION).
You can set the PIPENV_VENV_IN_PROJECT environment variable. This will keep pipenv's virtualenv in $PROJECT_DIR/.venv instead of the global location (see the sketch after this list).
You can use an existing virtualenv and run pipenv from it. Pipenv will not attempt to create its own virtualenv if it's run from one.
You can just use the weird pipenv-created virtualenv path.
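For example, with option 2, a deploy step could create the environment inside the project directory so that the supervisor command= line from the question points at a predictable path (the paths here are illustrative):
cd /home/ubuntu/production/application_xy
PIPENV_VENV_IN_PROJECT=true pipenv install --deploy
# supervisor can then use:
# command=/home/ubuntu/production/application_xy/.venv/bin/gunicorn module:app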
I've just switched to pipenv for deployment and my workflow is roughly as follows (managed with ansible). For an imaginary project called "project", assuming that a working Pipfile.lock is checked into source control:
Clone the git repository:
git clone https://github.com/namespace/project.git /opt/project
Change into that directory
cd /opt/project
Check out the target reference (branch, tag, ...):
git checkout $git_ref
Create a virtualenv somewhere, with the target Python version (3.6, 2.7, etc):
virtualenv -p"python$pyver" /usr/local/project/$git_ref
Call pipenv in the context of that virtualenv, so it won't install its own:
VIRTUAL_ENV="/usr/local/project/$git_ref" pipenv --python="/usr/local/project/$git_ref/bin/python" install --deploy
The --deploy flag will throw an error when the Pipfile.lock does not match the Pipfile.
Install the project itself using the virtualenv's pip (only necessary if it isn't already in the Pipfile):
/usr/local/project/$git_ref/bin/pip install /opt/project
Set a symlink to the new installation directory:
ln -s /usr/local/project/$git_ref /usr/local/project/current
My application is then callable e.g. with /usr/local/project/current/bin/project_exec --foo --bar, which is what's configured in supervisor, for instance.
All of this is triggered when a tag is pushed to the remote.
As the virtualenvs of earlier versions remain intact, a rollback is simply done by setting the current-symlink back to an earlier version. I.e. if tag 1.5 is broken, and I want to go back to 1.4, all I have to do is ln -s /usr/local/project/1.4 /usr/local/project/current and restart the application with supervisorctl.
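Condensed into a single, simplified deploy script, the steps above might look roughly like this ($git_ref, $pyver and the supervisor program name are assumptions supplied by the caller; error handling and idempotency checks are omitted):
#!/bin/sh
set -e
git clone https://github.com/namespace/project.git /opt/project 2>/dev/null || true
cd /opt/project
git fetch --all --tags
git checkout "$git_ref"
virtualenv -p "python$pyver" "/usr/local/project/$git_ref"
VIRTUAL_ENV="/usr/local/project/$git_ref" pipenv --python="/usr/local/project/$git_ref/bin/python" install --deploy
"/usr/local/project/$git_ref/bin/pip" install /opt/project
ln -sfn "/usr/local/project/$git_ref" /usr/local/project/current
supervisorctl restart project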
I think pipenv is very good for managing dependencies, but it is too slow, cumbersome, and still a bit unstable to use for automatic deployments.
Instead I use virtualenv (or virtualenvwrapper) and pip on the target machine.
On my build/development machine I create a requirements.txt compatible text file using pipenv lock -r:
$ pipenv lock -r > deploy-requirements.txt
While deploying, inside a virtualenv I run:
$ pip install -r deploy-requirements.txt
Just do this:
mkdir .venv
pipenv install
Explanation:
pipenv checks your project directory for a subdirectory named .venv. If it finds one, pipenv creates the virtual environment there (because it then automatically behaves as if PIPENV_VENV_IN_PROJECT=true were set).
So now you can either activate the virtual environment with:
source .venv/bin/activate
Or configure your app.conf for gunicorn with something like this:
exec /path/to/.venv/bin/gunicorn myapp:app
To create the virtual environment in the same directory as the project, set the following environment variable (see the docs):
PIPENV_VENV_IN_PROJECT=true
This installs the dependencies to the .venv directory inside the project. Available from Pipenv v2.8.7.

Confused with virtual environments in python django

I have my Django project folder and inside that I have my virtualenv folder.
I have a few questions:
I have packages already installed in the main installation as well as in the virtual env. Don't those packages mix with each other? I mean, if I have an old version in the main installation and a new version in the virtual env, how does the system know which one to choose?
Suppose I move my project folder to a new computer. Can I use the same virtual env folder, because it was in the same app directory, or do I have to start all over again?
How will I know whether pip installs a package to the virtual env or to the main installation?
Unless you created the virtualenv with --system-site-packages, the packages don't mix at all. If they do mix, the virtualenv has priority.
If the path doesn't change, there is a chance you can reuse it. You could make the virtualenv --relocatable if the path changes. But you should really create a requirements file and be able to regenerate a fresh virtualenv with a single pip install -r req.txt command.
If a virtualenv is activated, pip will install in the virtualenv, it has priority.
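A quick way to check where pip will install is to look at which interpreter and pip are currently active (the output paths and versions below are illustrative):
$ which python      # on Windows: where python
/home/me/myproject/venv/bin/python
$ pip -V
pip 23.0.1 from /home/me/myproject/venv/lib/python3.10/site-packages/pip (python 3.10)
If the paths point inside your virtualenv directory, that is where pip will install packages.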
