Multiple tensorflow-gpu Versions with Conda - python

I am using Windows 10 with the latest pip and Conda versions.
I am trying to set up two different Conda environments with different versions of tensorflow-gpu, CUDA and cuDNN. But I am not sure if it's even possible. Any reply is greatly appreciated.
I currently have tf-gpu=2.1 running perfectly with python=3.7, cuda=10.1 and cudnn=7.6.5, but I would like to create a new environment with tf-gpu=1.13.1, python=3.6, cuda=10.0 and cudnn=7.4.2. I am having trouble with it and wondering if it's doable. For the second environment, the CUDA and cuDNN versions are matched from a post I saw a few days ago. Thank you.
p.s. if you're wondering, the second environment is for stable-baselines which is only compatible with 1.8.0 < tf < 1.14.0.

Yes, this is a normal use case: virtual environments exist precisely for this (if you are doing it this way there is no problem), and each environment will work independently, exactly as you configure it. Either way, you can check the information in the official documentation at https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html
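As a concrete sketch, each setup from the question can live in its own environment file (the package names and versions below just mirror the question; whether these exact cudatoolkit/cudnn builds are available depends on your channels):

```yaml
# env-tf2.yml -- the existing, working environment
name: tf2-gpu
dependencies:
  - python=3.7
  - tensorflow-gpu=2.1
  - cudatoolkit=10.1
  - cudnn=7.6.5

# ---- separate file: env-tf1.yml -- the environment for stable-baselines ----
# name: tf1-gpu
# dependencies:
#   - python=3.6
#   - tensorflow-gpu=1.13.1
#   - cudatoolkit=10.0
#   - cudnn=7.4
```

Create each with conda env create -f <file>; because the environments are fully isolated, the differing CUDA/cuDNN versions cannot conflict with each other.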

Related

Anaconda3 infinite solving environment problem

I downloaded a completely new version of Anaconda3 from the official site and tried to create an environment, but 'Solving environment' keeps running indefinitely.
I tried turning off Windows Defender but it didn't work. Could someone please help? I am using Windows 11 Pro, and I downloaded and installed Anaconda3-2022.10-Windows-x86_64 on 23/12/2022.
The actively available Anaconda builds for Python 3.6 all use 3.6.10, so you would have a possibly easier solve with:
conda create -n ssd_env python=3.6 anaconda
However, there is the custom build, anaconda=custom=py36_1 that should be generally compatible with python>=3.6,<3.7. That is what I get when simulating the solve from OP:
CONDA_SUBDIR=win-64 mamba create -n ssd_env --override-channels -c anaconda python=3.6.8 anaconda
which solves almost instantly with Mamba.
Using conda also takes unreasonably long for me, even when explicitly identifying the anaconda=custom=py36_1. The reason this takes so long is that this anaconda package has no version constraints on the 100+ packages it specifies, which means a huge search space. This is aggravated by the fact that the solvers work from latest to oldest package versions, and the versions that are expected to be identified are ~3 years down the stack.
I recommend:
Use Mamba as the solver.
Don't use the anaconda package unless absolutely needed. Most users do not ever need all those packages - instead just specify the packages actually required.
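On a sufficiently recent conda (22.11 or later), the libmamba solver can also be enabled without installing Mamba separately; a minimal sketch, assuming that conda version:

```yaml
# .condarc -- make libmamba the default solver (conda >= 22.11)
solver: libmamba
```

The same setting can be made with conda config --set solver libmamba, or applied to a single command with --solver=libmamba.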

Tensorflow in pycharm

I'm trying to use TensorFlow in PyCharm. I have selected the Anaconda Python interpreter in the settings and added the TensorFlow package, but it doesn't seem to be working. I also did the installation from the Anaconda prompt with pip install tensorflow, but it still doesn't work and I get this error:
No module named 'tensorflow'
Could someone help me? Thank you so much.
TensorFlow can be a bit of a pain to install; the process is completely different if you are doing it outside Anaconda, so I won't go into that.
This documentation is particularly helpful and is what I used to get TensorFlow working on my own PC:
https://docs.anaconda.com/anaconda/user-guide/tasks/tensorflow/
If you are only doing CPU work in TensorFlow, then running the following in an Anaconda command prompt will create an env for you to work with tf:
conda create -n tf tensorflow
conda activate tf
If you want to use your GPU with TensorFlow, then you need to check various requirements; for example, Windows and Linux only support CUDA 10.0 for TensorFlow 2.0. That being said, you can use the following to set up a GPU env:
conda create -n tf-gpu tensorflow-gpu
conda activate tf-gpu
Be aware that this may not result in a working env depending on your GPU etc., so I would recommend that you refer to this page: https://www.tensorflow.org/guide/gpu
As a personal side note: I would highly recommend using JupyterLab when organising and running machine learning tasks, as you can split code into cells with markdown descriptions of what occurs in them, which I find really helpful for readability and organisation.

Huge number of package changes when using conda to upgrade from tensorflow 1.14.0 to 2.2.0

I'm trying to update tensorflow within one of my conda environments. But, each time I get set to update, the preview shows far more packages being upgraded/added/removed than I believe is anywhere near reasonable. I want to figure out whether I have a faulty understanding of:
Package interdependency, so should just let everything proceed, because it is fine.
What I'm doing, so need to proceed cautiously in order to understand how I'm changing my environment.
When I examine my current environment (i.e. dnn_py3), I see:
(dnn_py3) me@Home:~$ conda list
# packages in environment at /home/me/anaconda3/envs/dnn_py3:
#
# Name Version Build Channel
...
tensorboard 1.14.0 py37hf484d3e_0 anaconda
tensorflow 1.14.0 gpu_py37h4491b45_0 anaconda
tensorflow-base 1.14.0 gpu_py37h8d69cac_0 anaconda
tensorflow-estimator 1.14.0 py_0 anaconda
tensorflow-gpu 1.14.0 h0d30ee6_0 anaconda
I want to upgrade to tensorflow 2.2.0
When I enter
(dnn_py3) me@Home:~$ conda update tensorflow-gpu
or
(dnn_py3) me@Home:~$ conda update -n dnn_py3 tensorflow-gpu
I get an extremely excessive (I think) list of downloads, new installs, removals and superseded packages, which I've summarized here:
Summary of Changes:
122 - download packages
5 - installs (none actually tensorflow !?)
24 - removals
263 - upgrades
140 - updates
I currently use tensorflow 1.14 in my environment, so I don't understand why this upgrade requires so much to be done. I expected to see only 1 upgrade (for tensorflow-gpu) and, possibly, a small number of other changes - nothing like the avalanche of proposed changes that I do see.
Since what I see differs so greatly from what I expect, I'd like to understand what's going on, before proceeding.
There are multiple reasons why this might be occurring.
1: Dependency Updates
Conda has a two-stage solving strategy. First, it attempts to satisfy the user specification by only installing/updating the specified packages. If this is possible without any other changes, then it considers the solve complete and will propose these (minimal) changes. Otherwise, it will move on to the second stage of solving, which allows all dependencies that need to be changed to be updated. This is recursively true, i.e., dependencies of dependencies may also update. Hence, this could lead to many updates.
Additionally, there is a configuration option, update_dependencies, that allows all packages in a dependency chain to update, not just the ones that conflict with the user specification. The default for this option is False, but it may be worth checking that it is off (conda config --show update_dependencies).
2: Changes in Channel Configuration
Many users eventually require a package from non-default channels, Conda Forge being the most common. Since Conda Forge recommends prioritizing the conda-forge channel, this often leads users to alter their global channel configuration. Whenever a user makes such a change, it effectively changes the context in which all future solving occurs. In particular, whenever a package is subject to change, Conda will try to switch it to a build from the higher-priority channel.
OP only shows builds from anaconda channel in the original environment, so a change in channel priorities is a definite possibility. However, without seeing the actual proposed changes, I can’t say for sure this is the driving cause.
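For reference, the kind of global configuration change described above typically looks like the .condarc fragment below (illustrative; conda config --show channels channel_priority reveals what is actually set):

```yaml
# .condarc -- conda-forge prioritized with strict channel priority
channels:
  - conda-forge
  - defaults
channel_priority: strict
```

With strict priority, any package touched by a solve will be pulled from conda-forge if a build exists there, which alone can account for a large wave of superseded packages.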
3: Auto-Updates
The conda package and the packages listed in aggressive_updates_packages will attempt to update whenever a mutating operation is requested for an environment, so these could show up as unrequested installations. However, this is likely not relevant to OP, since such packages mainly pertain to the base environment, and OP clearly shows the issue is not with base. Nevertheless, I enumerate it here mostly for completeness, since it could be the issue for other users.
I would use conda to install tensorflow. It will install TensorFlow 2.1.0, which works with or without a GPU, and it also installs CUDA toolkit 10.1.243 and cuDNN 7.6.5. Conda can only install TensorFlow up to version 2.1.0; after that, you can install a newer 2.x version using pip. TensorFlow 2.2 is compatible with toolkit version 10.1.243 and cuDNN 7.6.5, which conda already installed.
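A sketch of that conda-plus-pip approach as a single environment file (the exact pins below are illustrative assumptions, not tested version combinations):

```yaml
# env-tf22.yml -- CUDA libraries from conda, newer TensorFlow from pip
name: tf22
dependencies:
  - python=3.7
  - cudatoolkit=10.1
  - cudnn=7.6.5
  - pip
  - pip:
      - tensorflow==2.2.0
```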

Pip installed packages don't show up in Anaconda Navigator

Many Python packages don't have pre-built conda packages, so Anaconda users are frequently forced to use pip to install packages. I have to do this routinely, since so many packages don't have conda packages, not even in the most common alternate channel(s) like conda-forge or bio-conda.
This open issue was already reported in the Anaconda github support repo https://github.com/ContinuumIO/anaconda-issues/issues/10634. However, no answers have been forthcoming in almost 1 year. I am asking here because responses are typically faster and shared more widely than in support forums for individual products.
I hate the productivity loss of re-installing Anaconda, particularly a long-standing installation, because it can take 3-4 hours to backup and export existing environment build files as requirements.txt, remove an existing Anaconda installation, clean out the Windows Registry, search the Windows file system for leftover detritus, and then rebuild all of my environments one at a time.
Does anyone know a trick, or have a Python script or some other workaround, to refresh the Package Index within a conda environment, or for ALL environments created and managed by the Anaconda Navigator GUI? It would be awesome if there were an updater widget within the Anaconda Navigator GUI that let users select which virtual environment(s) should have their package index updated.
What I have tried
In the Anaconda Navigator GUI "Environments" tab, clicking the "Update Index..." button does not update the list with packages installed "behind the scenes" from an Anaconda Command Prompt.
The result I get
pip-installed packages are NOT included in the Anaconda Package Index update process. It does not find packages in environments installed inside and outside of the Anaconda3 root directory. It doesn't even find all packages underneath the \envs folder. This makes me think packages are not installed into the currently selected environments, so it takes time to verify their location in C:\ProgramData\Anaconda3\envs, C:\Users\username\AppData\Local, or elsewhere.
What else I have tried: after having a corrupted Anaconda and Spyder installation that would not start at all, I posted requests for help on various support forums. I got answers that were not much help, like "Just don't mix pip and conda packages, use one or the other". That is not practical since so many Python packages are not available in the conda package format. I have gotten that impractical advice from Anaconda and Spyder developers in the past.
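As a partial workaround for auditing what is actually installed in the active environment, regardless of what Navigator's index shows, the standard library can report which tool recorded each package (a sketch; the INSTALLER record is written by pip, and conda-installed packages may omit it or record something else):

```python
from importlib import metadata

def list_installers():
    """Map each installed distribution to the tool that recorded it."""
    result = {}
    for dist in metadata.distributions():
        name = dist.metadata["Name"]
        if not name:
            continue
        # INSTALLER is a metadata file written by pip; other tools may omit it
        installer = (dist.read_text("INSTALLER") or "unknown").strip()
        result[name] = installer
    return result

if __name__ == "__main__":
    for name, installer in sorted(list_installers().items()):
        print(f"{name}: {installer}")
```

Running this inside each environment gives a quick pip-vs-other breakdown without touching Navigator at all.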
Severity and impact
This is an important issue, since it is possible to use / misuse conda and pip and inadvertently corrupt Anaconda so badly that it requires a painful and time-consuming removal and re-installation of the entire Anaconda distribution.
A possible solution, beyond my current abilities: if someone could build an intelligent and transparent converter into PyPI, Anaconda Cloud, Conda-forge, and the other channels that made this conversion automatic and validated, then this conversation might not be needed.

Pinning a specific python version in conda recipe

I'm looking for a way to build a conda package which would use a specific python version.
According to an issue submitted a while ago, conda treats python in a special way, so simply putting
run:
- python ==2.7.13 <build>
into meta.yaml won't work: conda will create a dependency on the latest release of 2.7 (>=2.7,<2.8), ignoring the minor version and build.
Why?
The aforementioned issue suggests that pinning the python version is the wrong thing to do, but I really like the idea of my builds and deployments being reproducible. Running conda create -n prod_env my_application=1.0.0 today should produce exactly the same environment as it did yesterday, including all dependencies and python version.
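One reproducibility workaround that sidesteps the recipe-pinning issue: capture a known-good environment as an explicit spec file with conda list --explicit > spec.txt and recreate it with conda create -n prod_env --file spec.txt. The file pins every package, python included, to an exact build URL (the URLs below are illustrative, not real):

```text
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: linux-64
@EXPLICIT
https://repo.anaconda.com/pkgs/main/linux-64/python-2.7.13-0.tar.bz2
https://repo.anaconda.com/pkgs/main/linux-64/my_application-1.0.0-py27_0.tar.bz2
```

This pins the environment itself rather than the recipe, so running the create command today and tomorrow yields byte-identical package sets.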
