How can I replicate the Python environment setup of a Windows machine onto another computer so that specific scripts run successfully?
We have scripts that were written and run under Python 3.6.5 in an Anaconda environment, and we want to run them on a new Windows 10 computer.
The scripts also connect to a local Postgres database on the machine.
Since you are using Anaconda, I assume you have been using a conda virtualenv for the project you mentioned. It is actually easy to replicate with the following commands:
# list all virtualenvs in your anaconda folder
$ conda info --envs # this will list all virtualenvs created by you; choose the specific virtualenv here
# to activate the virtualenv of your interest
$ conda activate [virtualenv_name]
# export all packages used in the specific virtualenv (conda activated)
$ pip freeze > requirements.txt # save the output file as requirements.txt
# set up a new conda virtualenv on the current or a separate machine and install from requirements.txt
$ conda create --name <env_name> python=3.6.5 --file requirements.txt
# Please note that occasionally you may need to check requirements.txt for abnormal entries. Each line should be in the form [package==version] or [package].
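If you want to sanity-check the file programmatically, a small script along these lines can flag abnormal entries (a sketch: the regex only accepts the two line formats mentioned above, and the example lines are hypothetical):

```python
import re

# Matches either "package" or "package==version" (the two formats
# mentioned above); anything else is flagged for manual review.
VALID_LINE = re.compile(r"^[A-Za-z0-9._-]+(==[A-Za-z0-9._+!-]+)?$")

def abnormal_entries(lines):
    """Return requirement lines not in [package] or [package==version] form."""
    bad = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        if not VALID_LINE.match(line):
            bad.append(line)
    return bad

print(abnormal_entries(["requests==2.28.1", "numpy", "pkg @ file:///local/path"]))
```

Anything the script flags (editable installs, local file references, VCS URLs) is worth reviewing by hand before reusing the file on another machine.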
OR you can recreate the entire virtualenv directly.
# copy exactly same virtualenv on separate machine
# export all packages used in the specific virtualenv (conda activated), including current python version and virtualenv name
$ conda env export > environment.yml # save the output file as environment.yml
# set up a new conda virtualenv on the current or a separate machine from environment.yml
$ conda env create -f environment.yml # edit the "name" field in environment.yml if you are recreating the environment on the same machine
Related
I am using JupyterHub and I hope to get students onto it.
My problem is that I have set up a conda environment with about 10 GB of default packages to make use of a GPU server.
If a user wants to add a package that is not installed, I (as the admin) do not want to install it in the main conda environment. I thought cloning would work, but it seems to just copy the whole lot?
For example on a (smaller example) system env:
$ conda create --prefix /opt/conda/envs/python310 python=3.10 ipykernel matplotlib numpy
$ du -hs /opt/conda/envs/python310/
2.0G /opt/conda/envs/python310/
So as a normal user I did
$ conda create --name MyEnv --clone python310
$ du -hs .conda/envs/MyEnv
1.99G .conda/envs/MyEnv
$ conda install scipy
$ du -hs .conda/envs/MyEnv
2.12G .conda/envs/MyEnv
It is essentially the same size, so with a 10 GB system environment I would soon run out of storage if every user did this.
I thought I read that clone just copies links to the original? Is there a way users can add a package to their own environment so that everything works when they run a Jupyter notebook?
Thanks
I have an environment.yml file, which contains the packages required to build a Docker container. However, one of the packages I am using is installed locally (the --offline flag is passed).
The shell file that installs the necessary packages currently looks like this:
conda env update -n base -f ./environment.yml
conda install --offline <LOCAL_PACKAGE.tar.bz2>
Is it possible to somehow add this package inside my environment.yml file?
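One possible approach (a sketch, not verified against your setup: it assumes you can drop LOCAL_PACKAGE.tar.bz2 into a local channel directory and index it with conda index, and local_package is a placeholder for the actual package name) is to reference a local channel from environment.yml so that the single conda env update call suffices:

```yaml
name: base
channels:
  # hypothetical local channel directory; it must contain the .tar.bz2
  # under the right platform subdirectory (e.g. linux-64/ or noarch/)
  # and be indexed beforehand with: conda index ./local-channel
  - ./local-channel
  - defaults
dependencies:
  - local_package   # placeholder: the package name inside LOCAL_PACKAGE.tar.bz2
```

In a Docker build, the channel directory would need to be copied into the image before the conda env update step.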
I want to export an environment AND its packages, using conda 4.10.
The conda docs suggest exporting environments using conda env export > environment.yml. However (I am not sure whether the problem is on my end, and if so, what the solution is), the resulting file contains no package information:
name: guest
channels:
- defaults
prefix: C:\Anaconda3\envs\guest
After some googling, I learnt to export packages using conda list --export > requirements.txt. This time, there is no environment information:
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: win-64
ca-certificates=2021.5.30=h5b45459_0
certifi=2021.5.30=py38haa244fe_0
...
How do I export both into one file and use it? Or should I just export two files, create an environment first, and install the packages?
On a side note, how do I make my installed packages match requirements.txt, that is, remove extra packages, install missing ones, and update/downgrade to the specified versions? Is there a command for this, or should I delete the whole environment and start from scratch?
So I am working on a group project; we are using Python and the code is on GitHub. My question is: how do I activate the virtual environment? Do I make one on my own using python -m venv env, or use one that's in the repo, if there is such a thing? Thanks
A virtual env keeps your original environment clean. You can pip install virtualenv and then create a virtual env with virtualenv /path/to/folder, then use source /path/to/folder/bin/activate to activate it. Then you can run pip install -r requirements.txt to install the dependencies into the env; everything will be installed into /path/to/folder/lib.
Alternatively, you can use /path/to/folder/bin/pip install or /path/to/folder/bin/python without activating the env.
Yes, you'll want to create your own with something like: python -m venv venv. The final argument specifies where your environment will live; you could put it anywhere you like. I often have a venv folder in Python projects, and just .gitignore it.
After you have the environment, you can activate it. On Linux: source venv/bin/activate. Once activated, any packages you install will go into it; you can run pip install -r requirements.txt for instance.
I'm setting up a python project, using an Anaconda virtual environment. I'm generating a requirements.txt so other people can easily set up their own virtual environment for the project.
I was wondering though, when other developers want to contribute to the project, but want to use virtualenv instead of Anaconda, can they do that?
I tried the following:
I set up an empty project in an Anaconda environment and installed the aiohttp module. Then conda list --export > requirements.txt generates the following:
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: win-64
aiohttp=2.3.9=py36_0
async-timeout=2.0.0=py36hc3e01a3_0
certifi=2018.1.18=py36_0
chardet=3.0.4=py36h420ce6e_1
multidict=3.3.2=py36h72bac45_0
pip=9.0.1=py36h226ae91_4
python=3.6.4=h6538335_1
setuptools=38.4.0=py36_0
vc=14=h0510ff6_3
vs2015_runtime=14.0.25123=3
wheel=0.30.0=py36h6c3ec14_1
wincertstore=0.2=py36h7fe50ca_0
yarl=0.14.2=py36h27d1bf2_0
I set up an empty project in a virtualenv environment and installed the aiohttp module there too. pip freeze > requirements.txt then generates:
aiohttp==3.0.1
async-timeout==2.0.0
attrs==17.4.0
chardet==3.0.4
idna==2.6
idna-ssl==1.0.0
multidict==4.1.0
yarl==1.1.0
So apparently both outputs are different, and my theory is: once I generate my requirements.txt with conda on my project, other developers can't choose virtualenv instead - at least not if they're not prepared to install a long list of requirements by hand (it will be more than just the aiohttp module, of course).
At first sight, importing the conda-generated requirements.txt in a project on virtualenv (pip install -r requirements-conda.txt) throws this error:
Invalid requirement: 'aiohttp=2.3.9=py36_0'
= is not a valid operator. Did you mean == ?
Am I right in thinking that if developers would like to do this, they would need to programmatically convert the package list to the format that virtualenv understands, or import all packages manually? Meaning that I impose conda on them as the virtual environment as well, if they want to save themselves some extra work?
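For what it's worth, the "programmatic change" is largely mechanical: the conda export format is package=version=build, and dropping the build string yields pip's package==version. A minimal sketch (it does not handle packages whose conda and PyPI names differ, which still need manual attention):

```python
def conda_export_to_pip(lines):
    """Convert `conda list --export` lines (pkg=ver=build) to pip style (pkg==ver)."""
    out = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # drop comment lines such as "# platform: win-64"
        name, version = line.split("=")[:2]  # conda format: name=version=build
        out.append(f"{name}=={version}")
    return out

print(conda_export_to_pip(["# platform: win-64", "aiohttp=2.3.9=py36_0"]))
# prints ['aiohttp==2.3.9']
```

pip install -r on the converted file will still fail for packages that exist only on conda channels (e.g. vc, vs2015_runtime), so treat the output as a starting point rather than a drop-in replacement.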
I'm setting up a python project, using an Anaconda virtual environment. I was wondering though, when other developers want to contribute to the project, but want to use virtualenv instead of Anaconda, can they do that?
Yes - in fact this is how many of my projects are structured. To accomplish what you're looking for, here is a simple directory that we'll use as reference:
.
├── README.md
├── app
│ ├── __init__.py
│ ├── static
│ ├── templates
├── migrations
├── app.py
├── environment.yml
├── requirements.txt
In the project directory above, we have both environment.yml (for Conda users) and requirements.txt (for pip).
environment.yml
So apparently both outputs are different, and my theory is: once I generate my requirements.txt with conda on my project, other developers can't choose virtualenv instead - at least not if they're not prepared to install a long list requirements by hand (it will be more than just the aiohttp module of course).
To overcome this, we use two different environment files, each in its own distinct format, allowing other contributors to pick the one they prefer. If Adam uses Conda to manage his environments, all he needs to do is create his Conda environment from the environment.yml file:
conda env create -f environment.yml
The first line of the yml file sets the new environment's name. To create the Conda-flavored environment file, activate your environment (source activate or conda activate), then:
conda env export > environment.yml
In fact, because the environment file created by the conda env export command handles both the environment's pip packages and conda packages, we don't even need to have two distinct processes to create this file. conda env export will export all packages within your environment regardless of the channel they're installed from. It will have a record of this in environment.yml as well:
name: awesomeflask
channels:
- anaconda
- conda-forge
- defaults
dependencies:
- appnope=0.1.0=py36hf537a9a_0
- backcall=0.1.0=py36_0
- cffi=1.11.5=py36h6174b99_1
- decorator=4.3.0=py36_0
- ...
requirements.txt
Am I right when I think that if developers would like to do this, they would need to programmatically change the package list to the format that virtualenv understands, or they would have to import all packages manually? Meaning that I impose them to choose conda as virtual environment as well if they want to save themselves some extra work?
The recommended (and conventional) way to convert to the format that pip understands is through requirements.txt. Activate your environment (source activate or conda activate), then:
pip freeze > requirements.txt
Say Eve, one of the project contributors, wants to create an identical virtual environment from requirements.txt. She can either:
# using pip
pip install -r requirements.txt
# using Conda
conda create --name <env_name> --file requirements.txt
A full discussion is beyond the scope of this answer, but similar questions are worth a read.
An example of requirements.txt:
alembic==0.9.9
altair==2.2.2
appnope==0.1.0
arrow==0.12.1
asn1crypto==0.24.0
astroid==2.0.2
backcall==0.1.0
...
Tips: always create requirements.txt
In general, even on a project where all members are Conda users, I still recommend creating both the environment.yml (for the contributors) as well as the requirements.txt environment file. I also recommend having both these environment files tracked by version control early on (ideally from the beginning). This has many benefits, among them being the fact that it simplifies your debugging process and your deployment process later on.
A specific example that springs to mind is Azure App Service. Say you have a Django / Flask app and want to deploy it to a Linux server using Azure App Service with git deployment enabled:
az group create --name myResourceGroup --location "Southeast Asia"
az webapp create --resource-group myResourceGroup --plan myServicePlan
The service will look for two files, one being application.py and the other being requirements.txt. You absolutely need both of these files (even if they're blank) for the automation to work. This varies by cloud infrastructure / provider of course, but I find that having requirements.txt in our project generally saves us a lot of trouble down the road and is worth the initial set-up overhead.
If your code is on GitHub, having requirements.txt also gives you extra peace of mind: GitHub's vulnerability detection can pick up on any issue and alert you / the repo owner. That's a lot of great value for free, on the merit of having this extra dependency file (a small price to pay).
This is as easy as making sure that every time a new dependency is installed, we run both the conda env export and pip freeze > requirements.txt commands.
It took me a long time to find all of these commands, so I am putting them below. I hope they help you; if you have any questions, just reply. (For Windows users.)
For conda environment users, follow these (but first you need to install conda on your PC):
Create conda virtual environment:
conda create --name <env_name>
List all conda environments:
conda env list
Create a conda environment and import packages from an environment file
(change the environment name by editing the first line of the
environment.yml file; the file can have any name, but it is usually called environment.yml):
conda env create -f environment.yml
Create a conda environment and import packages from a pip-style
requirements file:
conda create --name <env_name> --file requirements.txt
List installed packages:
conda list
Activate the environment to export:
conda activate <env_name> or activate <env_name>
Deactivate your conda environment:
conda deactivate
Export your active environment to a new file:
conda env export > environment.yml
This is for pip environment users
Create virtual environment:
python -m venv (<env_name> or path)
Use Python inside the virtual environment without activating it (this is
awkward, so normally we activate the environment with the activation
commands below and then use its python directly):
path\to\venv\Scripts\python.exe <file_name_path>
Activate virtual environment (for cmd)
path\to\venv\Scripts\activate.bat
Activate virtual environment (for powershell)
path\to\venv\Scripts\Activate.ps1
Deactivate your pip environment:
deactivate.bat
Import packages in new virtual env
pip install -r requirements.txt
Export your active environment to a new text file
pip freeze > requirements.txt