Set up virtualenv using a requirements.txt generated by conda - python

I'm setting up a python project, using an Anaconda virtual environment. I'm generating a requirements.txt so other people can easily set up their own virtual environment for the project.
I was wondering though, when other developers want to contribute to the project, but want to use virtualenv instead of Anaconda, can they do that?
I tried the following:
I set up an empty project in an Anaconda environment and installed the aiohttp module. Then conda list --export > requirements.txt generates the following:
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: win-64
aiohttp=2.3.9=py36_0
async-timeout=2.0.0=py36hc3e01a3_0
certifi=2018.1.18=py36_0
chardet=3.0.4=py36h420ce6e_1
multidict=3.3.2=py36h72bac45_0
pip=9.0.1=py36h226ae91_4
python=3.6.4=h6538335_1
setuptools=38.4.0=py36_0
vc=14=h0510ff6_3
vs2015_runtime=14.0.25123=3
wheel=0.30.0=py36h6c3ec14_1
wincertstore=0.2=py36h7fe50ca_0
yarl=0.14.2=py36h27d1bf2_0
I set up an empty project in a virtualenv environment and installed the aiohttp module there too. pip freeze > requirements.txt then generates:
aiohttp==3.0.1
async-timeout==2.0.0
attrs==17.4.0
chardet==3.0.4
idna==2.6
idna-ssl==1.0.0
multidict==4.1.0
yarl==1.1.0
So apparently both outputs are different, and my theory is: once I generate my requirements.txt with conda on my project, other developers can't choose virtualenv instead - at least not if they're not prepared to install a long list of requirements by hand (it will be more than just the aiohttp module, of course).
At first sight, importing the conda-generated requirements.txt into a virtualenv project (pip install -r requirements-conda.txt) throws this error:
Invalid requirement: 'aiohttp=2.3.9=py36_0'
= is not a valid operator. Did you mean == ?
Am I right in thinking that if developers would like to do this, they would need to programmatically convert the package list to the format that virtualenv understands, or import all packages manually? Meaning that I'd force them to choose conda as their virtual environment as well if they want to save themselves some extra work?
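For what it's worth, a rough conversion looks possible by dropping the build string and doubling the = sign; a minimal sketch (untested, and conda-only packages such as python, vc and vs2015_runtime would still need removing by hand):
# turn name=version=build lines into name==version
sed -E 's/^([A-Za-z0-9_.-]+)=([^=]+)=.*$/\1==\2/' requirements-conda.txt > requirements-pip.txt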

I'm setting up a python project, using an Anaconda virtual environment. I was wondering though, when other developers want to contribute to the project, but want to use virtualenv instead of Anaconda, can they do that?
Yes - in fact this is how many of my projects are structured. To accomplish what you're looking for, here is a simple directory layout that we'll use as a reference:
.
├── README.md
├── app
│   ├── __init__.py
│   ├── static
│   ├── templates
├── migrations
├── app.py
├── environment.yml
├── requirements.txt
In the project directory above, we have both environment.yml (for Conda users) and requirements.txt (for pip).
environment.yml
So apparently both outputs are different, and my theory is: once I generate my requirements.txt with conda on my project, other developers can't choose virtualenv instead - at least not if they're not prepared to install a long list of requirements by hand (it will be more than just the aiohttp module, of course).
To overcome this, we use two different environment files, each in its own distinct format, allowing other contributors to pick the one they prefer. If Adam uses Conda to manage his environments, all he needs to do is create his Conda environment from the environment.yml file:
conda env create -f environment.yml
The first line of the yml file sets the new environment's name. To create the Conda-flavored environment file, activate your environment (source activate or conda activate) and then run:
conda env export > environment.yml
In fact, because the environment file created by the conda env export command captures both the environment's pip packages and its conda packages, we don't even need two distinct processes to create this file. conda env export exports all packages within your environment, regardless of the channel they're installed from, and records them in environment.yml:
name: awesomeflask
channels:
- anaconda
- conda-forge
- defaults
dependencies:
- appnope=0.1.0=py36hf537a9a_0
- backcall=0.1.0=py36_0
- cffi=1.11.5=py36h6174b99_1
- decorator=4.3.0=py36_0
- ...
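If any packages were installed with pip inside the environment, they show up under a nested pip: entry in the same file, along these lines (package name purely illustrative):
- pip:
  - some-pip-only-package==1.0.0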
requirements.txt
Am I right in thinking that if developers would like to do this, they would need to programmatically convert the package list to the format that virtualenv understands, or import all packages manually? Meaning that I'd force them to choose conda as their virtual environment as well if they want to save themselves some extra work?
The recommended (and conventional) way to convert to the format that pip understands is through requirements.txt. Activate your environment (source activate or conda activate) and then run:
pip freeze > requirements.txt
Say Eve, one of the project contributors, wants to create an identical virtual environment from requirements.txt. She can either:
# using pip
pip install -r requirements.txt
# using Conda
conda create --name <env_name> --file requirements.txt
A full discussion is beyond the scope of this answer, but similar questions are worth a read.
An example of requirements.txt:
alembic==0.9.9
altair==2.2.2
appnope==0.1.0
arrow==0.12.1
asn1crypto==0.24.0
astroid==2.0.2
backcall==0.1.0
...
Tips: always create requirements.txt
In general, even on a project where all members are Conda users, I still recommend creating both the environment.yml (for the contributors) and the requirements.txt environment file. I also recommend having both these environment files tracked by version control early on (ideally from the beginning). This has many benefits, among them the fact that it simplifies your debugging process and your deployment process later on.
A specific example that springs to mind is that of Azure App Service. Say you have a Django / Flask app and want to deploy it to a Linux server using Azure App Service with git deployment enabled:
az group create --name myResourceGroup --location "Southeast Asia"
az webapp create --resource-group myResourceGroup --plan myServicePlan --name <app-name>
The service will look for two files, one being application.py and the other being requirements.txt. You absolutely need both of these files (even if they're blank) for the automation to work. This varies by cloud infrastructure / provider of course, but I find that having requirements.txt in our project generally saves us a lot of trouble down the road and is worth the initial set-up overhead.
If your code is on GitHub, having requirements.txt also gives you extra peace of mind: GitHub's vulnerability detection picks up on any issue and alerts you / the repo owner. That's a lot of great value for free, on the merit of having this extra dependency file (a small price to pay).
Keeping the files in sync is as easy as making sure that every time a new dependency is installed, we run both the conda env export and the pip freeze > requirements.txt commands.
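For example, a one-liner you could run (or alias) after each install, assuming the environment is activated:
conda env export > environment.yml && pip freeze > requirements.txt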

It took me quite a while to find all of these commands, so I'm putting them below. I hope these commands help you; if you have any questions, just reply. (For Windows users.)
For conda environment users, follow these (but first you need to install conda on your PC):
Create conda virtual environment:
conda create --name <env_name>
List all your conda environments:
conda env list
Create a conda environment and import packages into it from an environment.yml file (change the environment name by editing the first line of the file; you can use any file name, but it is usually called environment.yml):
conda env create -f environment.yml
Create a conda environment and import packages into it from a pip-style requirements.txt:
conda create --name <env_name> --file requirements.txt
See the installed packages:
conda list
Activate the environment to export:
conda activate <env_name> or activate <env_name>
Deactivate your conda environment:
conda deactivate
Export your active environment to a new file:
conda env export > environment.yml
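Putting the conda workflow together (environment and package names are just examples):
conda create --name myenv
conda activate myenv
conda install aiohttp
conda env export > environment.yml
conda deactivate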
This is for pip environment users
Create virtual environment:
python -m venv (<env_name> or path)
You can use the python inside the virtual environment directly like this (but this is annoying, so normally we activate the environment first and then use its python, using the activation commands below):
path\to\venv\Scripts\python.exe <file_name_path>
Activate virtual environment (for cmd)
path\to\venv\Scripts\activate.bat
Activate virtual environment (for powershell)
path\to\venv\Scripts\Activate.ps1
Deactivate your pip environment:
deactivate.bat
Import packages into the new virtual env:
pip install -r requirements.txt
Export your active environment to a new text file:
pip freeze > requirements.txt
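Putting the pip workflow together (Windows cmd; folder and package names are just examples):
python -m venv myvenv
myvenv\Scripts\activate.bat
pip install aiohttp
pip freeze > requirements.txt
deactivate.bat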

Related

conda export environment and package

I want to export an environment AND its packages, using conda 4.10.
The conda docs suggest exporting environments using conda env export > environment.yml. However, I am not sure if it is a problem on my end (and if so, what the solution is), but there is no package information:
name: guest
channels:
- defaults
prefix: C:\Anaconda3\envs\guest
After some googling, I learnt to export packages using conda list --export > requirements.txt. This time, there is no environment information.
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: win-64
ca-certificates=2021.5.30=h5b45459_0
certifi=2021.5.30=py38haa244fe_0
...
How do I export both into one file and use it? Or should I just export two files, create an environment first, and install the packages?
On a side note, how do I make my packages match requirements.txt, that is, remove extra packages, install missing ones, and update/downgrade to the specific versions? Is there a command for this, or should I delete the whole environment and start from scratch?
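(For what it's worth, on the conda side there is a command meant to reconcile an existing environment with an environment file; the --prune flag is supposed to also remove packages not listed in the file, though its behavior has varied across conda versions:)
conda env update --name guest --file environment.yml --prune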

Replicate Python environment on another computer

How can I replicate the Python environment setup of a Windows machine onto another computer and be able to run very specific scripts successfully?
We have scripts that were written and run with Python 3.6.5 in an Anaconda environment, and we want to be able to run these scripts on a new Windows 10 computer.
The scripts also connect to a local database on the computer (Postgres).
Since you are using an Anaconda environment, I assume that you have been using a virtualenv for the project you mentioned. It is actually easy to replicate with the following commands:
# list all virtualenvs in your anaconda folder
$ conda info --envs # this will list all virtualenvs created by you; you can then choose the specific virtualenv here.
# to activate the virtualenv of your interest
$ conda activate [virtualenv_name]
# export all packages used in the specific virtualenv (conda activated)
$ pip freeze > requirements.txt # save the output file as requirements.txt
# set up a new conda virtualenv on the current or a separate machine and install from requirements.txt
$ conda create --name <env_name> python=3.6.5 --file requirements.txt
# Please note that occasionally you may need to check requirements.txt for abnormal entries. The format of each line should be either [package==version] or [package].
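# For example, pip freeze can emit local-path entries for packages that were installed by conda;
# these usually need rewriting or removal before reuse (path and version purely illustrative):
# abnormal: mkl-fft @ file:///C:/ci/mkl_fft_1234567/work
# rewrite as: mkl-fft==1.3.0 (or simply delete the line)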
OR you can create the entire virtualenv directly.
# copy exactly the same virtualenv onto a separate machine
# export all packages used in the specific virtualenv (conda activated), including the current python version and the virtualenv name
$ conda env export > environment.yml # save the output file as environment.yml
# set up a new conda virtualenv on the current or a separate machine from environment.yml
$ conda env create -f environment.yml # using Conda; modify "name" in environment.yml if setting it up on the same anaconda installation/machine

How can you specify where to create a virtual environment using pipenv?

I'm learning django and just installed pipenv via pip install pipenv, then ran pipenv shell, and I noticed that the virtual environment files are created in some random default directory. I have two questions regarding this:
1) How can I customize the installation/creation directory for the virtual environment? Do I have to use a different command than pipenv shell?
2) Can you have multiple folders with different virtual environments inside each folder/project?
According to the pipenv advanced readme (https://github.com/pypa/pipenv/blob/master/docs/advanced.rst#-custom-virtual-environment-location):
You can set the environment variable WORKON_HOME to whichever directory you want,
e.g.: by setting export WORKON_HOME=~/.venvs in your .bashrc file (if you are using bash).
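If I remember correctly, pipenv also honors PIPENV_VENV_IN_PROJECT, which keeps each environment inside its own project folder as .venv, so a .bashrc could contain (values illustrative):
export WORKON_HOME=~/.venvs
export PIPENV_VENV_IN_PROJECT=1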
According to this https://github.com/pypa/pipenv/issues/1071#issuecomment-370561179 comment (from the pipenv github repo), you can use a workaround for achieving this:
To be super clear, you can still get your own custom environments set
up just by sourcing virtualenvs.
virtualenv 35 --python=python3.5
virtualenv 36 --python=python3.6
source 35/bin/activate && pipenv install
source 36/bin/activate && pipenv install
source 35/bin/activate && pipenv run <whatever>
It adds a tiny bit of visual clutter to the commands but is pretty straightforward.
You would execute the virtualenv x commands inside the project folder.

How to move installed packages to a newly created virtual environment?

I've downloaded a lot of packages into the global environment (let's call it that). Now, I want to create a new virtual environment and move some of the packages to that environment. How would I do that?
While you could copy files/directories from the site-packages directory of your global installation into the site-packages of your virtual env, you may experience problems (missing files, binary mismatch, or others). Don't do this if you're new to python packaging mechanisms.
I would advise that you run pip freeze from your global installation to get a list of what you installed, and then store that output as a requirements.txt with your source, and put it under source management. Then run pip install -r requirements.txt after activating your virtualenv, and you'll replicate the dependencies (with the same versions) into your virtualenv.
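A minimal sketch of that workflow (environment folder name is up to you):
# in the global installation
pip freeze > requirements.txt
# create and activate a fresh virtualenv, then replicate the dependencies
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt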
If you try to copy or rename a virtual environment, you will discover that the copied environment does not work. This is because a virtual environment is closely tied to both the Python it was created with and the location it was created in. (The "relocatable" option does not work.)
However, this is very easy to fix. Instead of moving/copying, just create a new environment in the new location. This worked for me, or you can see the link below:
pip install virtualenv
virtualenv NameOfYourVirtualEnvironment
source NameOfYourVirtualEnvironment/bin/activate
Then, run pip freeze > requirements.txt in the old environment to create a list of packages installed in it which is in your case the global environment. With that, you can just run pip install -r requirements.txt in the new environment to install packages from the saved list. Of course, you can copy requirements.txt between machines. In many cases, it will just work; sometimes, you might need a few modifications to requirements.txt to remove OS-specific stuff.
Source:https://chriswarrick.com/blog/2018/09/04/python-virtual-environments/
And this may also work for you:
How to import a globally installed package to virtualenv folder
https://gist.github.com/k4ml/4080461

Do we need to upload virtual env on github too?

This is my GitHub repo
https://github.com/imsaiful/backmyitem
I push from my local machine and pull the changes in Amazon EC2.
Earlier I had not added the virtual env files to my repo, but now I have changed some files in the admin directory, which is contained in the virtual env. So should I add the virtual env to my GitHub too, or should I instead change the same thing on my remote server manually?
As was mentioned in a comment it is standard to do this through a requirements.txt file instead of including the virtualenv itself.
You can easily generate this file with the following:
pip freeze > requirements.txt
You can then install the virtualenv packages on the target machine with:
pip install -r requirements.txt
It is important to note that including the virtualenv will often not work at all as it may contain full paths for your local system. It is much better to use a requirements.txt file.
No - although the environment is 100% there, if someone else were to pull it down, the environment paths wouldn't carry over, not to mention that Python version discrepancies would likely crop up.
The best thing to do is to create what is known as a requirements.txt file.
When you have created your environment, you can pip install this and pip install that. You'll start to build up a number of project-specific dependencies.
Once you start to build up a number of project dependencies, I would freeze your local Python environment (analogous to a package.json for Node.js package dependency management). I would recommend doing the following in your terminal:
(local_python_environment) $ pip install django && pip freeze > requirements.txt
(local_python_environment) $ pip install requests && pip freeze > requirements.txt
That is to say, freeze your environment to a requirements.txt file every time a new dependency is installed.
Once a collaborator pulls down your project - they can then install a fresh python environment:
$ python3 -m venv local_python_environment
(* Please use Python 3 and not Python 2!)
And then activate that environment and install from your requirements.txt which you have included in your version control:
$ source local_python_environment/bin/activate
(local_python_environment) $ pip install -r requirements.txt
Excluding your virtual environment is probably analogous to ignoring node_modules! :)
No, it's not necessary to upload the virtualenv files to GitHub. When you push your code to GitHub, the virtualenv files will only be ignored if you add them to your .gitignore.
Virtual Environment
Basically, a virtual environment is nothing but a tool that helps keep the dependencies required by different projects separate, by creating isolated Python virtual environments for them. It is one of the most important tools that most Python developers use. Apart from that, you can add a requirements.txt file to your project.
requirements.txt
It is a file that tells us which libraries and applications are needed to run the application. You can create a requirements.txt file with this simple command:
pip freeze > requirements.txt
After running this command, all the applications and libraries are added to this file. Note that if you work on your project without activating a virtualenv, Python automatically uses the system environment, and the file will then also include entries that are not necessary for your project.
You should add the virtualenv to your .gitignore. In fact, GitHub has a recommended .gitignore template for Python that covers which files should be committed and which shouldn't:
Github recommendation for gitignore
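Typical entries look like this (assuming your environment folder is named venv or .venv):
venv/
.venv/
__pycache__/
*.pyc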
