The Anaconda website mentions that the installer comes with hundreds of pre-built packages, and even the installer size of roughly 500 MB hints that pre-built packages are included.
Yet when we want to use any of these packages, we have to install it with a command, e.g. conda install nltk,
which downloads the package from the internet and then installs it. That seems counterintuitive, since the website already says that nltk is present in the installer.
Can anybody throw some light on this?
There are two parts:
Conda - Package & environment management system. This gives you the conda command and serves a similar function to pip and virtualenv.
Anaconda - Python package distribution containing hundreds of scientific packages that are tested and verified to work together.
If you install Miniconda, you will get just conda without the full Anaconda distribution. If you install Anaconda, you will get both the conda management system and the Python distribution. You can also get Anaconda after having installed only Miniconda by running conda install anaconda.
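A quick way to tell which one you have (both commands are part of conda's standard CLI):
conda --version        # present with either Miniconda or Anaconda
conda list anaconda    # lists the 'anaconda' metapackage only if the full distribution is installed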
I followed the instructions here: Can't find package on Anaconda Navigator. What to do next?
I clicked "Open terminal from environment" in Anaconda Navigator, and then ran pip3 install lmfit in the terminal. But after installing the lmfit package with pip3, I still cannot find it in the conda list. What should I do?
The Problem
At the time of this question, Conda builds of pip had only just started including a pip3 entrypoint,[1] so pip3 very likely refers to a non-Conda version of Python, and that is where the package was installed. Try checking which pip3 to find out where it went.
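A minimal check in a Unix-like shell (the example path in the comment is hypothetical):
which pip3    # e.g. /usr/bin/pip3 would indicate the system Python, not the Conda environment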
Recommendation
Conda First
Generally, it is preferable to use Conda to install packages in Conda environments, and in this case the package is available via the Conda Forge channel:
conda install -c conda-forge lmfit
Contrary to M. Newville's answer, this recommendation to prefer Conda packages is not about benefiting Conda developers, but is instead a rule of thumb to help users avoid creating unstable or unreproducible environments. More info about the risks of mixing pip install and conda install can be found in the post "Using Pip in a Conda Environment".
Nevertheless, the point that not all packages (lmfit among them) are found in the default repository, and that this complicates installation by requiring third-party channels, is a fair one. In fact, because third parties are free to use different build stacks, there are known problems with mixing packages built by Anaconda and those from Conda Forge. However, these issues tend to be rare and limited to compiled code. Additionally, adding trusted channels to one's configuration and setting channel priorities largely resolves the issue.
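For example, trusting Conda Forge and preferring it consistently could look like this (strict priority is one reasonable choice, not the only one):
conda config --add channels conda-forge
conda config --set channel_priority strict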
As for the risks of using third-party channels: arbitrary Anaconda Cloud user channels are risky, and one should only source packages from channels one trusts (just like anything else one installs). Conda Forge in particular is well reputed, and all feedstocks are freely available on GitHub. Moreover, many Python package builds on Conda Forge are simply wrappers around the PyPI build of the package.
PyPI Last
Sometimes it isn't possible to avoid using PyPI. When one must resort to installing from PyPI, it is better practice to use the pip entrypoint from an activated environment, rather than pip3, since only some Conda builds of pip include pip3. For example,
conda activate my_env
pip install lmfit
Again, following the recommendations in "Using Pip in a Conda Environment", one should operate under the assumption that any subsequent calls to conda (install|upgrade|remove) in the environment could have undefined behavior.
PyPI Only
For the sake of completeness, I will note that a stable way of using Conda that is consistent with the recommendations is to limit Conda to the role of environment creation and use pip for all package installation.
This strategy is perhaps the least burden on the Python-only user, who doesn't want to deal with things like finding the Conda-equivalent package name or searching non-default channels. However, its applicability seems limited to Python-only environments, since environments that need non-Python libraries may still have to resort to conda install.
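A minimal sketch of this pattern (environment name and Python version are illustrative; including pip ensures the environment has its own pip):
conda create -n my_env python=3.10 pip   # Conda only creates the environment
conda activate my_env
pip install -r requirements.txt          # every package comes from PyPI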
[1]: Conda Forge and Anaconda started consistently including pip3 entrypoints for the pip module after version 20.2.
Installing a pure Python package such as lmfit with the correct version of pip, i.e. pip install lmfit, should be fine.
"Conda first" is recommended to make life easier for the conda maintainers and packagers, not for the user. FWIW, I maintain both kinds of packages, and there is no reason to recommend conda install lmfit over pip install lmfit.
In fact, lmfit is not in the default Anaconda repository, so installing it requires going to a third-party conda channel such as conda-forge. That adds complexity and risk to your conda environment.
Really, pip install lmfit should be fine.
On my Ubuntu 18.04, I have Python 3 and R installed. I am about to study some data science, and just found https://www.anaconda.com/distribution/.
Anaconda comes with Python and R and some packages. Will installing Anaconda conflict with my existing installation of Python 3 and R?
Shall I install Anaconda, or shall I install the packages manually and individually on demand?
How do people in data science install the tools?
Thanks.
I have a system Python and R as well as Anaconda, and they don't seem to conflict; I have the same OS as you. Conda, the package and environment manager that comes with Anaconda, supposedly does not mix well with pip, meaning that within a given virtual environment you should use one or the other. However, I have mixed them without any difficulties.
I prefer creating virtual environments the more standard way, but there are some scientific packages that are much easier to install using conda.
I currently use Conda to capture the dependencies of a Python project in an environment.yml.
When I build a Docker service from the project, I need to reinstall these dependencies, and I would like to avoid having to add (mini)conda to my Docker image.
Is it possible to parse environment.yml with pip/pipenv or transform this into a corresponding requirements.txt?
(I don't want to leave conda just yet, as this is what MLflow captures when I log models.)
Nope.
conda automatically installs dependencies of conda packages. These are resolved differently by pip, so you'd have to resolve the Anaconda dependency tree in your transformation script.
Many conda packages are non-Python. You couldn't install those dependencies with pip at all.
Some conda packages contain binaries that were compiled with the Anaconda compiler toolchain. Even if the corresponding pip package can compile such binaries on installation, it wouldn't be using the Anaconda toolchain. What you'd get would be fundamentally different from the corresponding conda package.
Some conda packages have fixes applied, which are missing from corresponding pip packages.
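To illustrate, a hypothetical environment.yml often mixes Python and non-Python dependencies (all names below are just examples):
name: ml_service
dependencies:
  - python=3.8
  - graphviz        # conda installs the actual binaries; pip's graphviz is only bindings
  - libgcc-ng       # compiler runtime, conda-only
  - pip
  - pip:
      - mlflow
Only the pip: section maps cleanly onto a requirements.txt; the rest has no faithful pip translation.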
I hope this is enough to convince you that your idea won't fly.
Installing Miniconda isn't really a big deal. Just do it :-)
I am using a Mac. I am wondering: is it possible to have two versions of TensorFlow coexisting on my computer? I pip-installed tensorflow-1.13 and tensorflow-1.8 in two Python virtual environments. However, there seem to be some problems ...
How do I find the corresponding C++ TensorFlow libraries on my Mac? Where are they installed? Thanks!
Yes, you can do this with virtual environments: each virtual environment will contain a different version of TensorFlow, and you can switch from one to the other easily. There are many solutions to create virtual environments, but some of the most popular are:
conda
virtualenv
pipenv
Conda is a general-purpose, cross-platform package manager, mostly used with Python, but it can also install many other software packages. A conda environment includes everything, including Python itself and the system binaries for the libraries you use. So you can have different conda environments with different versions of Python and different versions of every package you want, including TensorFlow and any C++ library your code relies on. You can install Anaconda, which is a bundle that includes Conda + Python + many scientific libraries, or you can install Miniconda, which includes the bare minimum needed to run conda.
Virtualenv is a Python library which allows you to create virtual environments strictly for Python.
Pipenv is also a Python library; it seems to be gaining a lot of momentum right now and includes much of the functionality of virtualenv.
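For instance, keeping two TensorFlow versions side by side with virtualenv might look like this (directory names and versions are illustrative):
virtualenv tf1.8-env
source tf1.8-env/bin/activate      # on macOS/Linux
pip install tensorflow==1.8.0
deactivate
virtualenv tf1.13-env
source tf1.13-env/bin/activate
pip install tensorflow==1.13.1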
If you are a beginner, I would recommend going with conda. You will usually run into fewer issues.
First, download and install either Anaconda or Miniconda.
Next, create a virtual environment:
conda create --name myenv
Then activate this virtual environment:
conda activate myenv
Now you can install all the libraries you need:
conda install whatever-library-you-need
However, not all libraries are available in conda. For example, TensorFlow 2.0 is not there yet (as of May 13th 2019). But that's okay, you can also use pip!
pip install --pre tensorflow
This will install TF 2.0 alpha.
You can then create another environment and install a different version of TF.
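For example (environment name and version are illustrative; including pip gives the environment its own pip):
conda create --name tf1-env python pip
conda activate tf1-env
pip install tensorflow==1.13.1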
You can read more about the interaction between Conda and Pip on the web, but the short story is that they work well together as long as you use pip last: install everything you can with conda, and finish with pip.
I'm using python through Anaconda, and would like to use a new feature (http://scikit-learn.org/dev/modules/neural_networks_supervised.html) in scikit-learn that's currently only available in the development version 0.18.dev0.
However, doing the classical conda update doesn't seem to work, as conda doesn't list any dev packages. What would be the simplest way to install a development version into my Anaconda setup? (For what it's worth, I'm using 64-bit Windows 7.)
You can only use conda to install a package if someone has built and made available binaries for the package. Some packages publish nightly builds that would allow this, but scikit-learn is not one of them.
To install the bleeding-edge version in one command, you could use pip; e.g.:
$ conda install pip
$ pip install git+https://github.com/scikit-learn/scikit-learn.git
but keep in mind that this requires compiling all the C extensions within the library, and so it will fail if your system is not set up for that.
I had scikit-learn 0.17 which did not have MLPClassifier. I just did a conda update like below:
conda update scikit-learn
conda takes care of updating all dependent packages and after the update it works!
You should build your own scikit-learn package on Anaconda. I did it in about 10 minutes (repo) (package). The conda tutorial on how to build packages was helpful. There are probably more ways than one to do this, but I just downloaded the scikit-learn GitHub repo, dropped it into a new repo, added a directory that housed my conda recipe, and then built the package from the recipe, which pointed to the source code I had just downloaded.
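A rough outline of those steps with conda-build (the recipe path is illustrative; the recipe directory must contain a meta.yaml pointing at the source):
conda install conda-build
conda build my-recipe/                  # builds the package from the recipe
conda install --use-local scikit-learn  # installs the locally built package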