Setting up custom conda channel for offline environment installation - python

I am trying to set up a custom channel to install and manage a Python environment on several offline computers. I am following the instructions here. I have completed steps 1 (install conda-build) and 2 (organize packages). However, attempting step 3 on the offline machine with a Miniconda Python 3.10 build results in several problems.
First, conda-build is missing dependencies. I addressed two of these by installing pyyaml and python-libarchive-c. However, python-libarchive-c needs an archive.dll, and I cannot find a precompiled 64-bit DLL.
My questions are twofold. 1) Is there an easier way to set up and maintain offline environments? 2) Has anyone set up an offline custom conda channel and, if so, do you have instructions or guidance?
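For reference, here is a rough sketch of what the local-channel workflow can look like once the packages are organized (the channel path and package name below are hypothetical examples, not from my setup):
On the online machine, place the downloaded package files into platform subdirectories (e.g. win-64 and noarch) and generate the channel index:
conda index C:\local-channel
Copy the whole channel directory to the offline machine, then point conda at it when installing:
conda install --offline --channel file:///C:/local-channel numpy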

Related

Tensorflow path-performance installation

I've noticed that when running my TensorFlow code, there is a performance spike during the first few runs after start-up.
Upon searching the internet, I came across this blog,
but the replies and official documents left me even more confused about which installation is best for performance.
Is it
Conda
pip
docker
some others that are not listed?
My current setup is a Windows 10 laptop with a GTX 965M, with TensorFlow installed according to this guide.
Another weird thing: my conda installation is only able to install TensorFlow 1.8 for some reason. Upon running conda update tensorflow, it reports that the latest version is already installed, but I can install TensorFlow 2 by stating conda install tensorflow=2. Is this normal? If not, what could be the issue? (I updated all packages before updating/installing TensorFlow; it doesn't help.)
In my experience, creating a virtual environment in Anaconda and installing TensorFlow inside that environment has advantages.
Please refer to this SO answer for the advantages and the steps to create a virtual environment.
Upon running the code conda update tensorflow, it returns latest version has been installed. But I can install tensorflow 2 by stating conda install tensorflow=2. Is this normal?
Yes, this is normal behavior: conda update only moves to the newest version that still satisfies the dependency constraints of your existing environment and channels, whereas conda install tensorflow=2 asks the solver for that version explicitly.
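If you want to double-check which TensorFlow builds your configured channels actually offer (a quick sanity check, assuming the default channels), you can run:
conda search tensorflow
and then request the series you want explicitly:
conda install tensorflow=2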
Google Colab is an easy way to learn and use TensorFlow. It's a Jupyter notebook environment that requires no setup to use and runs entirely in the cloud.

Can we have multiple TensorFlow versions on a Mac?

I am using a Mac. I am wondering whether it is possible to have two versions of TensorFlow co-existing on my computer. I pip installed tensorflow 1.13 and tensorflow 1.8 into two separate Python virtual environments. However, there seem to be some problems ...
How do I find the corresponding C++ TensorFlow libraries on my Mac? Where are they installed? Thanks!
Yes, you can do this with virtual environments: each virtual environment will contain a different version of TensorFlow, and you can switch from one to the other easily. There are many solutions to create virtual environments, but some of the most popular are:
conda
virtualenv
pipenv
Conda is a general-purpose, cross-platform package manager, mostly used with Python, but it can also install many other software packages. A conda environment includes everything, including Python itself, and the system binaries for the libraries you use. So you can have different conda environments with different versions of Python, and different versions of every package you want, including TensorFlow, and any C++ library your code relies on. You can install Anaconda, which is a bundle that includes Conda + Python + many scientific libraries. Or you can install miniconda which includes the bare minimum to run conda.
Virtualenv is a python library which allows you to create virtual environments strictly for Python.
pipenv is also a python library that seems to be gaining a lot of momentum right now, and includes a lot of the functionality of virtualenv.
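For comparison, roughly the same workflow with the other two tools would look like this (the environment name tf-env is just an example):
python3 -m venv tf-env
source tf-env/bin/activate
pip install tensorflow
or, with pipenv, from inside your project directory:
pipenv install tensorflow
pipenv shell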
If you are a beginner, I would recommend going with conda. You will usually run into fewer issues.
First, download and install either Anaconda or Miniconda.
Next, create a virtual environment:
conda create --name myenv
Then activate this virtual environment:
conda activate myenv
Now you can install all the libraries you need:
conda install whatever-library-you-need
However, not all libraries are available in conda. For example, TensorFlow 2.0 is not there yet (as of May 13th 2019). But that's okay, you can also use pip!
pip install --pre tensorflow
This will install TF 2.0 alpha.
You can then create another environment and install a different version of TF.
You can read more about the interaction between conda and pip on the web, but the short story is that they work well together as long as you use pip last: install everything you can with conda, and finish with pip.
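For example, to keep two TensorFlow versions side by side you could create two environments (the names and version numbers below are only illustrative):
conda create --name tf1 python=3.6
conda activate tf1
pip install tensorflow==1.13.1
conda create --name tf2 python=3.6
conda activate tf2
pip install --pre tensorflow
Switching between them is then just a matter of conda activate tf1 or conda activate tf2.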

Installing TensorFlow-GPU (Windows version) on a computer with no internet access

First off, I'm a total newbie when it comes to working with Anaconda, Python, and other conda packages. I'm trying to get TensorFlow-GPU installed on a Windows 10 system without internet access. I was able to successfully install Anaconda 5.1 (Python 3.6.4) on the system and it works great. I followed the instructions for installing the NVIDIA CUDA drivers and updated them. I'm running into an issue when trying to run pip install tensorflow_gpu-1.9.0-cp36-cp36m-win_amd64.whl: it tries to find additional dependencies in order to complete the install. This is where my knowledge of configuring Anaconda in an offline environment fails. How do you download all the dependencies of a package on an online system and transfer them to the offline system? I've searched for documentation on how to perform this task but haven't come across anything that streamlines the process. Would someone happen to know of a guide that steps through this process?
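A rough sketch of the usual pip-based approach, assuming the online machine runs the same Python version and platform (64-bit Windows, Python 3.6) as the offline target, and using a hypothetical tf_packages folder:
pip download tensorflow_gpu==1.9.0 -d tf_packages
Copy the tf_packages folder to the offline machine, then install without touching the network:
pip install --no-index --find-links=tf_packages tensorflow_gpu==1.9.0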

Why manually install a pre-built python package in Anaconda virtualenv?

The Anaconda website mentions that the installer comes with hundreds of pre-built packages. Even the installer size of 500 MB hints that there should be some pre-built packages.
Yet when we want to use any of the packages, we have to install them with a command such as conda install nltk,
which downloads the package from the internet and then installs it. This seems counterintuitive, since the website already states that nltk is present in the installer.
Can anybody shed some light on this?
There are two parts:
Conda - the package and environment management system. This gives you the conda command and serves a similar function as pip and virtualenv.
Anaconda - a Python package distribution containing hundreds of scientific packages that are tested and verified to work together.
If you install Miniconda, you will just get conda without the full Anaconda distribution. If you install Anaconda, you will get both the conda management system and the Python distribution. You can also get Anaconda after only having installed conda by running conda install anaconda.
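As a quick sanity check (assuming you installed the full Anaconda distribution rather than Miniconda), you can confirm that a package such as nltk is already present before reinstalling anything:
conda list nltk
If it shows up in the output, import nltk will work right away; conda install nltk mainly matters when you are on Miniconda or inside a fresh environment.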

upgrade to dev version of scikit-learn on Anaconda?

I'm using python through Anaconda, and would like to use a new feature (http://scikit-learn.org/dev/modules/neural_networks_supervised.html) in scikit-learn that's currently only available in the development version 0.18.dev0.
However, doing the classical conda update doesn't seem to work, as conda doesn't list any dev packages. What would be the simplest way to install a development version into my Anaconda? (For what it's worth, I'm using 64-bit windows 7.)
You can only use conda to install a package if someone has built and made available binaries for the package. Some packages publish nightly builds that would allow this, but scikit-learn is not one of them.
To install the bleeding-edge version in one command, you could use pip; e.g.:
$ conda install pip
$ pip install git+https://github.com/scikit-learn/scikit-learn.git
but keep in mind that this requires compiling all the C extensions within the library, and so it will fail if your system is not set up for that.
I had scikit-learn 0.17 which did not have MLPClassifier. I just did a conda update like below:
conda update scikit-learn
conda takes care of updating all dependent packages and after the update it works!
You should build your own scikit-learn package on Anaconda. I did it in about 10 minutes (repo)(package). The conda tutorial on how to build packages was helpful. There is probably more than one way to do this, but I just downloaded the scikit-learn GitHub repo, dropped it into a new repo, added a directory that housed my conda recipe, and then built the package from the recipe, which pointed to the source code I had just downloaded.
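A rough sketch of that workflow, assuming a recipe directory of your own (the recipe/ path below is illustrative, not the exact layout of the linked repo):
git clone https://github.com/scikit-learn/scikit-learn.git
conda install conda-build
conda build recipe/
conda install --use-local scikit-learn
Here recipe/ would hold a meta.yaml (plus a build script) pointing at the cloned source; conda build drops the result into your local channel, and --use-local installs from it.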
