I am attempting to follow this tutorial https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-14-04
and see that I run into a problem similar to this one (urllib3 not importing) https://github.com/certbot/certbot/issues/2108
I had installed and actively used conda in this Ubuntu environment, and I suspect that this is causing the virtualenv to look in the wrong places for urllib3.
Basically:
How can I remove all of conda's interference? Should I delete conda from the environment completely? Remove it from PATH? Which path variable should I change?
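For reference, a minimal sketch of the kind of PATH check and cleanup being asked about, assuming conda was added to PATH through ~/.bashrc (the install path shown is illustrative):
# see which python/pip the shell finds first; if they point into miniconda/anaconda, conda is shadowing the system tools
which python
which pip
# in ~/.bashrc, comment out the conda PATH line (or the "conda initialize" block), e.g.:
#   export PATH="$HOME/miniconda3/bin:$PATH"
# then open a new shell and re-run the letsencrypt steps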
Related
I'm having a problem with entering a virtual environment through conda.
It used to work with no problem before, but after installing miniforge the hierarchy seems to have changed (shown in the screenshot below).
I used to run conda activate practice, but it doesn't work anymore...
Would there be any way to fix this?
I've tried a solution
conda config --append envs_dirs Users/minseong/opt/miniconda3/envs/practice
conda activate practice
from: https://github.com/conda/conda/issues/7831:
But, it shows this result.
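For reference, the fix in that issue expects the envs directory itself rather than a specific environment; a sketch assuming the same miniconda3 location as above:
conda config --append envs_dirs ~/opt/miniconda3/envs
conda env list          # "practice" should now be listed
conda activate practice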
I am following the tutorial on MLFlow website. I was able to run the train.py and mlflow ui worked fine. Packaging the project tries to use env variable MLFLOW_CONDA_HOME but can't find conda.
I have tried setting the variable to the path of anaconda3/condabin but it doesn't seem to find my executable. This is the error I get:
ERROR mlflow.cli: === Could not find Conda executable at /anaconda3/condabin\bin/conda. Ensure Conda is installed as per the instructions at https://conda.io/docs/user-guide/install/index.html. You can also configure MLflow to look for a specific Conda executable by setting the MLFLOW_CONDA_HOME environment variable to the path of the Conda executable ===
Adding \bin/conda at the end of my path seems to be the problem, and I am not sure why MLflow is doing it. I even tried setting the variable to my python.exe in my conda env, but no luck. I can't find a bin/conda folder anywhere in my Anaconda folder.
I resolved this by running it from Anaconda Prompt. Make sure mlflow is installed in Anaconda first as well, nothing else. The problem then is that it's not very compatible with Windows; you need to split it into two steps, activate the conda environment and then run with --no-conda, as mentioned here https://github.com/mlflow/mlflow/issues/2674
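A sketch of that two-step workaround (the environment name myenv is a placeholder):
conda activate myenv
mlflow run <enter your local directory name> --no-conda -P alpha=0.5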
MLflow 1.5 was just released today.
It doesn't specifically mention it in the GitHub release notes, but I had the same issue where it appended \bin/conda, and now it doesn't do that anymore.
If you don't have a conda environment, you can execute the following command from your terminal:
mlflow run <enter your local directory name> --no-conda -P alpha=0.5
This should solve the issues with the environment variable.
I solved the issue by removing the MLFLOW_CONDA_HOME environment variable altogether. Make sure you have added the path to the conda executable to your PATH variable.
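For example, on Windows (a sketch; this clears the variable for the current cmd session only, and the Anaconda path is illustrative):
set MLFLOW_CONDA_HOME=
set PATH=%PATH%;C:\Users\<username>\anaconda3\condabin
where conda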
Here is one possible solution (the fastest one, in my opinion).
Key points:
The project virtual environment should be created with conda.
Use pip to install MLFlow.
Follow the steps for Windows:
Install miniconda (in my case, version 3)
Add the conda batch file (installation path + condabin dir + conda.bat) to PATH
Create your project without a virtual environment, at least not in the project directory (in my case, I selected conda instead of venv in PyCharm and it did not create any virtual environment, it just added some external libraries).
Create the conda virtual environment manually in the project directory: execute conda create -n venv there and follow the prompts (I accepted the defaults for all the questions).
Open a terminal and activate the conda virtual environment. If you use PyCharm, you will already be in the right directory; otherwise, change into the project directory yourself. Execute conda activate venv, where venv is the virtual environment created in step 4.
Execute python -m pip install mlflow
If you want to test it, you can try one of the tests from MLFlow. E.g., you can use mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=5.0
In my case, it worked.
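Putting the commands from those steps together (a sketch using the same names as above):
conda create -n venv
conda activate venv
python -m pip install mlflow
mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=5.0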
If you're using mlflow.pyfunc.spark_udf and get an error saying Could not find Conda executable conda, try defining the environment variable MLFLOW_CONDA_HOME in spark-env.sh, since Spark doesn't recognize variables defined elsewhere. Also make sure to use the absolute path for the Conda executable.
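A minimal sketch of the line to add to $SPARK_HOME/conf/spark-env.sh, assuming conda lives under /opt/conda:
export MLFLOW_CONDA_HOME=/opt/conda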
I faced this issue within a kubernetes deployment with miniconda3 as the base image. Fixed this by setting the MLFLOW_CONDA_HOME env variable to "/opt/conda/"
I am using Windows 10 (all commands run as administrator). I created an environment called myenv. Then I used
conda env remove -n myenv
Now, if I try
conda info --envs
I only see the base environment. However, if I try
conda activate myenv
I'm still able to activate it! I think this is because, under the envs folder, there is still a folder named myenv that doesn't get deleted.
How do I delete the environment for good?
Command-line options can only go so far, unless you get very specific; perhaps the simplest approach is to delete things manually:
Locate Anaconda folder; I'll use "D:\Anaconda\"
In envs, delete environment of interest: "D:\Anaconda\envs\myenv"
Are you done? Not quite; even while in myenv, conda still keeps packages in the shared package cache, "D:\Anaconda\pkgs\"; thus, to clean up traces of myenv,
Delete packages installed to myenv that ended up in "D:\Anaconda\pkgs\"
(If the above doesn't suffice) Anaconda Navigator -> Environments -> myenv -> Remove
(If the above doesn't suffice) Anaconda is likely corrupted; make a note of your installed packages, completely uninstall Anaconda, and reinstall.
Note: step 3 is redundant for the goal of simply removing myenv, but it's recommended to minimize future package conflicts.
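A rough command-line equivalent of steps 1-3 (Windows cmd; paths assume the same D:\Anaconda install):
conda env remove -n myenv
rmdir /s /q D:\Anaconda\envs\myenv
conda clean --packages --tarballs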
In addition to the first command in the question posted, I had to complete one additional step to completely remove the environment. I had to go to the folder where the environment was stored (e.g. C:\Users\<username>\.conda\envs\ on a Windows machine) and remove the folder with the same name as the environment I deleted. After this second step, I was able to reuse the environment name without any errors.
The interpreter I use is
and it works in the virtual environment. I have both the Anaconda and the system Python interpreter installed on my system.
But if I want to install something using pip, for instance "Flask", then this happens:
I am using Linux Mint 18.1 "Serena".
And the way I tried to create the virtual environment is
Lastly, there is no space in the directories where I tried to create the virtual environment.
Then I tried this link
Specifically the following commands
None of those things solved my problem, and lastly I ended up with the following errors each time I open my shell:
Then I changed the source lines of my bashrc & bashrc-org to:
virtualenv
export WORKON_HOME=~/virtualenvs
source /home/cryptosilicon/anaconda3/bin/python
Now I get the following error:
How do I correct the error and make pip work inside the virtual environment?
I just solved (or at least found a work-around) for a similar problem.
I am using Linux Mint 18 and python 3.
I was trying to install a dependency inside a Python virtual environment using pip and it would fail (and actually pretty much mess up my whole virtual env).
The message was : "bad interpreter: No such file or directory".
But I noticed that the referenced path was actually truncated at the first space.
So I tried a virtual env in a folder whose path contains no spaces, and it worked.
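A sketch of that workaround, recreating the environment under a space-free path with the standard venv module (the path and package are illustrative):
python3 -m venv /home/user/envs/flaskenv
source /home/user/envs/flaskenv/bin/activate
pip install Flask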
I would like to create a conda environment on a machine that has no network connection. What I've done so far is:
On a machine that is connected to the internet:
conda create -n python3 python=3.4 anaconda
Conda archived all of the relevant packages into \Anaconda\pkgs. I put these into a separate folder and moved it to the machine with no network connection. The folder has the path PATHTO\Anaconda_py3\win-64
I tried
conda create -n python=3.4 anaconda --offline --channel PATHTO\Anaconda_py3
This gives the error message
Fetching package metadata:
Error: No packages found in current win-64 channels matching: anaconda
You can search for this package on Binstar with
binstar search -t conda anaconda
What am I doing wrong? How do I tell conda to create an environment based on the packages in this directory?
You could try cloning root which is the base env.
conda create -n yourenvname --clone root
Short answer: copy the whole environment from another machine with the same OS.
Why
Dependencies. A package depends on other packages. When you install a package online, the package manager conda analyzes the package dependencies and installs all the required packages for you.
The dependencies are especially heavy for anaconda, because anaconda is a meta package that depends on 160+ other packages.
Meta packages are packages that do not contain actual software and simply depend on other packages being installed.
It's totally absurd to download all these dependencies one by one and install them on the offline machine.
Detail Solution
Get conda installed on another machine with same OS. Install the packages you need in an isolated virtual environment.
# create a env named "myvenv", name it whatever you want
# and install the package into this env
conda create -n myvenv --copy anaconda
--copy is used to "Install all packages using copies instead of hard- or soft-linking."
Find where the environments are stored with
conda info
The 1st value of key "envs directories" is the location. Go there and package the whole sub-folder named "myvenv" (the env name in previous step) into an archive.
Copy the archive to your offline machine. Check "envs directories" from conda info. And extract the environment from the archive into the env directory on the offline machine.
Done.
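The packaging and extraction as commands, a sketch assuming environments live under ~/anaconda3/envs on both machines:
# on the online machine
tar -czf myvenv.tar.gz -C ~/anaconda3/envs myvenv
# copy the archive over, then on the offline machine
tar -xzf myvenv.tar.gz -C ~/anaconda3/envs
conda activate myvenv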
In addition to copying the pkgs folder, you need to index it, so that conda knows how to find the dependencies. See this ticket for more details and this script for an example of indexing the pkgs folder.
Using --unknown as @asmeurer suggests will only work if the package you're trying to install has no dependencies, otherwise you will get a "Could not find some dependencies" error.
Cloning is another option, but this will give you all root packages, which may not be what you want.
A lot of the answers here are not 100% related to the "when offline" part. They address the rest of the OP's question, which isn't reflected in the question title.
If you came here because you need offline env creation on top of an existing Anaconda install you can try:
conda create --offline --name $NAME
You can find the --offline flag documented here
Have you tried without the --offline?
conda create -n anaconda python=3.4 --channel PATHTO\Anaconda_py3
This works for me when I am not connected to the Internet, provided anaconda is already on the machine but in another location. If you are connected to the Internet when you run this command, you will probably get an error about not finding something on Binstar.
I'm not sure whether this contradicts the other answers or is the same but I followed the instructions in the conda documentation and set up a channel on the local file system.
Then it's a simple matter of moving new package files to the local directory and running conda index on the channel sub-folder (which should have a name like linux-64).
I also set the Anaconda config setting offline to True as described here but not sure if that was essential.
Hope that helps.
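A sketch of that local-channel setup, assuming the channel root is /opt/local-channel on a Linux machine (the package name is a placeholder):
mkdir -p /opt/local-channel/linux-64
cp *.tar.bz2 /opt/local-channel/linux-64/
conda index /opt/local-channel/linux-64
conda config --set offline True                       # optional, as noted above
conda install -c file:///opt/local-channel <package>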
The pkgs directory is not a channel. The flag you are looking for is --unknown, which causes conda to include files in the pkgs directory even if they aren't found in one of the channels.
Here's what worked for me in Linux -
(a) Create a blank environment - Just create an empty directory under $CONDA_HOME/envs. Verify with - conda info --envs.
(b) Activate the new env - source activate <env name>
(c) Download the appropriate package (*.bz2) from https://anaconda.org/anaconda/repo on a machine with internet connection and move it to the isolated host.
(d) Install using a local package - conda install <package file>. For example - conda install python-3.6.4-hc3d631a_1.tar.bz2, where python-3.6.4-hc3d631a_1.tar.bz2 exists in the current dir.
That's it. You can verify by the usual means (python -V, conda list -n <env name>). All related packages can be installed in the same manner.
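The whole flow as commands (a sketch; the env name is a placeholder and the package file is the example above):
mkdir -p $CONDA_HOME/envs/<env name>
conda info --envs
source activate <env name>
conda install python-3.6.4-hc3d631a_1.tar.bz2
python -V
conda list -n <env name>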
I found the simplest method to be as follows:
Run 'conda create --name name package' with no special switches
Copy the URL of the first package it tried (unsuccessfully) to download
Use the URL on a connected machine to fetch the tar.bz2
Copy the tar.bz2 to the offline machine's /home/user/anaconda3/pkgs
Deploy the tar.bz2 in place
Delete the now unneeded tar.bz2
Repeat until the 'conda create' command succeeds
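A condensed sketch of that loop (the package name and paths are placeholders; the unpack/cleanup of steps 5-6 is omitted):
conda create --name myenv <package>      # fails; copy the URL of the tar.bz2 it tried to fetch
# download that tar.bz2 on a connected machine, then on the offline machine:
cp <package>.tar.bz2 /home/user/anaconda3/pkgs/
conda create --name myenv <package>      # re-run; repeat for each missing package until it succeeds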
Here's a solution that may help. It's not very pretty, but it gets the job done. So I suppose you have a machine with a conda environment in which you've installed all the packages you need; I will refer to this as ENV1. You will have to go to this environment's directory and locate it; it is usually found in \Anaconda3\envs. I suggest compressing the folder, but you could just use it as is. Copy the desired environment folder into your offline machine's directory for Anaconda environments. This first step should get your new environment to respond to commands like conda activate.
You will notice, though, that software like Spyder and Jupyter don't work anymore (probably because of path differences). My solution to this was to clone the base environment on the offline machine into a new environment that I will refer to as ENV2. What you need to do then is copy the contents of ENV2 into ENV1, replacing the files.
This should overwrite the files related to Spyder, Jupyter, etc., and keep your installed packages intact.
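A rough sketch of the whole procedure on the offline machine (Linux-style paths; ENV1 and ENV2 are the names used above):
# step 1: ENV1's folder has already been copied from the online machine into ~/anaconda3/envs/ENV1
# step 2: clone the offline machine's base environment
conda create -n ENV2 --clone base
# step 3: copy ENV2's contents over ENV1, replacing files
cp -rf ~/anaconda3/envs/ENV2/* ~/anaconda3/envs/ENV1/
conda activate ENV1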