Conda: why create a new environment for an install? - python

It was suggested to me that I conda create a new environment for installing TensorFlow.
First question, in general:
Why do environments exist in conda or in Python? (Why) is it preferable to install a new library in a new environment?
Here, in practice:
After the install, the conda shell says $ conda activate test will activate the test environment. Does that mean I can't access the library in Spyder unless I activate test in the conda shell? Do I need to restart the Python shell to see the library? I can't access the library (No module named tensorflow) and I assume it has to do with Python not finding the path.

Have you installed TF within the environment?
I haven't used Spyder in a while, but what usually happens is that you can start a program (like Spyder or Jupyter) from an environment if you have installed the application within it and the environment is active. (Some editors/IDEs, like VS Code, let you choose the environment for a specific project once they are able to discover all the environments.)
Also, usually (though perhaps not always) you will not need to restart the shell to import a library after installing it. It's best to refer to the specific library's installation instructions for details like this.
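For example, a minimal sketch assuming the environment is named test, as in the question: install TensorFlow and Spyder inside the environment, then launch Spyder from it.
conda activate test
conda install tensorflow
conda install spyder
spyder
Started this way, Spyder uses the environment's Python interpreter, so import tensorflow should resolve without needing anything in your base environment.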

A virtual environment is used to manage Python packages for different projects. Using virtual environments allows you to avoid installing Python packages globally, which could break system tools or other projects. You can install the virtualenv tool using pip (or use the built-in venv module).
For example, say you have two projects, and each requires a different version of TensorFlow. This is a real problem for Python, since it can't differentiate between versions in the "site-packages" directory: both, say, v1.1 and v2.1 would reside in the same directory with the same name.
This also allows easy clean-up: once you are done with a project, just delete its virtual environment.
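As an illustrative sketch (the project names are made up, and the versions echo the example above), each project gets its own environment with its own TensorFlow:
python -m venv project-a-env
source project-a-env/bin/activate   (on Windows: project-a-env\Scripts\activate)
pip install tensorflow==1.1
deactivate
python -m venv project-b-env
source project-b-env/bin/activate
pip install tensorflow==2.1
deactivate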
Read more at https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/

Related

Is it possible to have two distinct installs of Python 3 of the same revision on a Windows system?

I know it is possible to have two installs of Python of different versions on a Windows system. But I cannot manage to get two installs of the same revision (in my case 3.8.10) to coexist.
I'm designing an application that creates a Python process. That process needs to run from a specific version of Python, with packages of specific versions installed on it. In order to fully control the Python install, the decision was made to install it inside the application distribution directory, segregating it from any other Python installed on the system. No environment variable refers to it.
As part of the deployment/install process for the application, a PowerShell script downloads the Python installer and installs Python and the necessary packages into the application distribution directory. The Python installer is invoked as follows:
.\python-3.8.10-amd64.exe /quiet InstallAllUsers=1 PrependPath=1 Include_test=0 TargetDir="$curDir\Python" Include_exe=1 Include_lib=1 Include_pip=1 Include_tcltk=1 | Out-Null
It works well unless the system already has a Python install of the same version on it. In that case, running the installer will break the existing install and not fully install the new one.
I tried to run the installer manually and I noticed that it is able, somehow, to detect that an install of the same revision exists on the system. In that case, it does not allow a new install. I would have to uninstall Python at its current location to be able to install it somewhere else.
Is there a way to have two distinct installs of Python 3 of the same revision on a Windows system? And if yes, how can it be done?
A better approach than installing Python again would be to use virtual environments.
To create a new Python env, open the command line (PowerShell) on Windows and navigate to the directory where you want your Python env to be.
Type python -m venv tutorial-env (or py -m venv tutorial-env). This will create a new Python virtual env named tutorial-env.
To activate that env in Windows PowerShell, type tutorial-env\Scripts\Activate.ps1 (from cmd.exe, use tutorial-env\Scripts\activate.bat).
To deactivate the env, type deactivate.
If you are wondering what Python virtual environments do: they basically do what you are trying to do, but without installing Python globally again. When you create a new Python env, a new python executable is placed in your env directory (in this case, the tutorial-env directory), and when you activate the environment, the env's Scripts directory takes precedence over the global Python path. When you are in this virtual env and install new Python packages, they will only be available when that env is activated.
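To see this in action, a quick check in PowerShell (a sketch; the output paths will depend on your system):
where.exe python                      (before activation: the global python.exe is found first)
tutorial-env\Scripts\Activate.ps1
where.exe python                      (after activation: tutorial-env\Scripts\python.exe is listed first)
deactivate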
For more information about virtual environments please refer to Python official docs.

Why do I already have installed modules in my virtualenv?

I have globally installed modules on my PC, but when I create a virtualenv, some of the modules seem to be preinstalled in it, even though pip freeze in the virtualenv shows no installed modules. Commands like django-admin and cookiecutter already work in my virtualenv, though I have never installed them in it. But others, like numpy or pandas, do not work, though I have installed them globally on my machine just like django or cookiecutter. How do I fix this? I am using Python version 3.9.6.
TL;DR: The django-admin and cookiecutter commands are accessible from your virtual environment because they are on PATH. This isn't related to the Python virtual environment, but rather to your system as a whole. If you want to make global packages accessible in your virtual environment, see this answer.
django-admin and cookiecutter are executables. They're located in some folder on your system (most likely the Scripts folder of your Python installation), and that folder is in PATH. Therefore, the shell can access them, no matter if you are in a virtual environment.
To contrast with that, numpy and pandas are only libraries. Therefore, when you try to import them in your code which is run in the virtual environment, they cannot be accessed. This can be changed by either installing them in the virtual environment, or including system site packages, which you can see how to do in this answer.
If you tried to import django or cookiecutter, that wouldn't work either (in your virtual environment), just like numpy or pandas. There's no way to "fix this", because it isn't broken. I wouldn't suggest removing Scripts from PATH, because that would mean those commands would never be accessible.
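Two quick illustrations of the points above (sketches, not exact output): you can check where an executable comes from, and you can deliberately create a venv that also sees globally installed libraries.
where django-admin                          (on Windows; use which django-admin on Linux/macOS - shows it is found on PATH, outside the venv)
python -m venv --system-site-packages venv  (creates a venv that can also import globally installed libraries such as numpy and pandas)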

Create an anaconda environment with a specific installed Python interpreter

I'm using Anaconda to manage my Python environments and I want to create an env with an existing python executable file as its interpreter. I didn't find a similar question in another topic.
To be more precise, I don't want to create an env like this :
conda create --name my_env python=3.6.9
I want to create an env with a pre-installed Python interpreter, i.e. tell Anaconda where to find the python executable file on my machine (in my case /usr/bin/python3) and use it as the interpreter for the env. Is it possible?
The only way I can imagine accomplishing this would be to make your own build of the python package. Perhaps having a look at how Conda Forge does this might be informative, though you can likely do a very trimmed down version depending on where you expect to run it.
Otherwise, no, not possible, AFAIK.

Source Activate Conda

I am a new user of conda environments and was setting up to use TensorFlow on Windows.
I came across a command -
source activate IntroToTensorFlow.
I understood that IntroToTensorFlow is an environment we are creating, but does it mean we need to create this environment every time? I am using a Jupyter notebook, so if I shut down the kernel, will the environment get deactivated?
And if I restart my PC, should I activate the environment every time?
Conda is a package manager that installs and manages (usually) Python libraries and (sometimes) non-Python packages. A conda environment is a sort of virtualenv-style virtual environment; its typical use case is to have a Python interpreter (any version) along with your choice of compatible Python libraries (any versions).
The following example might well pertain to you. Suppose you have downloaded the implementation of a very nice paper written in TF and you want to try it out, but the authors wrote it when TensorFlow was still young. The APIs have changed since, and so has the required CUDA version, while you ideally want to work on the latest TF. What do you do? An easy way to just try out the implementation is to create a separate conda environment with the libraries that implementation needs, run it in that environment, and, if you like it, perhaps consider upgrading it to the current TF APIs and using it in your own code.
Conda environments are also pretty simple in their construction. If you installed conda using Anaconda with the default options, you will have your environments in ~/anaconda3/envs. The environments are nothing but directories there, each holding a particular configuration of Python interpreter and libraries of your choice. (So when you shut down your PC or Jupyter, the environments will of course persist.) At the time of usage, you just switch between the environments to suit your needs: when you source activate an environment, you get the Python interpreter and installed libraries from that environment. Note that if you source deactivate or start a new terminal session, you will still be using the root environment.
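For instance, you can list the environments that exist on your machine (and see where they live on disk) with:
conda env list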
Besides, a Jupyter notebook, if set up with this plugin, will give you nice integration with conda environments, and you won't even need to source activate every time you want to switch. The various conda environments are exposed as different kernels in the notebook, so it is as simple as choosing an environment from a drop-down.
source activate IntroToTensorFlow does not create an environment, it simply activates an environment that has already been created. To create that environment (with tensorflow installed), use conda create -n IntroToTensorFlow tensorflow.
You do not need to create the environment every time, but you do need to activate it every time in order to use the packages installed in it. This is done using source activate IntroToTensorFlow
If you shutdown a kernel, the environment does not get deactivated automatically. To do so, you have to explicitly say source deactivate, or activate a separate environment using source activate xxx, replacing xxx with whatever environment name you want (that you have created previously).
When restarting your PC (or starting a new session at the command line), you have to manually activate your desired environment to use it. Otherwise, by default, you will be running in your root environment. So, if you've only installed tensorflow in the IntroToTensorFlow environment, you have to use source activate IntroToTensorFlow every time in order to use it.
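Putting that together, a typical workflow might look like this (a sketch; on newer conda versions and on Windows, conda activate / conda deactivate replace the source variants):
conda create -n IntroToTensorFlow tensorflow     (one-time: create the environment with tensorflow installed)
source activate IntroToTensorFlow               (each new shell session: activate before use)
python -c "import tensorflow as tf; print(tf.__version__)"
source deactivate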
Take a look here for more info

How to move django project from ubuntu env to virtual env?

I have an existing Django project which I have developed using Python libraries installed in the system, adding missing ones to the system as needed. But now a conflict has come up for python-requests: the system has version 2.2 but I need >2.5. I don't want to uninstall it and put a newer one in its place, as that may break the OS. So now I want to use a virtual env and install packages there, in complete isolation from the OS.
I think the solution you're looking for is to download a different version of Python without uninstalling your original, then create a virtual environment while passing in the path to the new Python executable, like this: virtualenv -p <path-to-executable-here> venv. Then just do source venv/bin/activate, as usual. This starts the virtual environment using the Python executable you passed to virtualenv through the terminal.
Also, this might not be the only way, but there's something called ModuleFinder which lets you get a list of all the modules your script is importing, in case you don't want to type them out manually and you have extra modules installed. (Otherwise pip freeze > requirements.txt would do the job, and in your new virtual environment you would install all the packages listed in requirements.txt.)
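A minimal sketch of that workflow on Ubuntu (the interpreter path and the requests version are illustrative):
pip freeze > requirements.txt           (capture what the project currently uses, run with the system Python)
virtualenv -p /usr/bin/python3 venv     (the -p flag is optional if the default interpreter is fine)
source venv/bin/activate
pip install -r requirements.txt
pip install "requests>=2.5"             (the newer requests now lives only inside the env)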
