import existing conda environment from network location - python

Brief description of the situation:
We have some image analysis workstations running Windows which I can book on an hourly basis. When I log in, my user account is loaded from the domain and my network drives are automatically mounted.
I'm now looking for a way to install Python on these workstations in a manageable way, meaning:
each user would want to use their own set of packages and dependencies
users should have access to their packages and dependencies regardless of which specific workstation they booked
for safety/maintainability, users are not allowed to install anything on the systems themselves
users' packages cannot be maintained centrally
After a bit of googling, I came up with the following workflow:
Install a vanilla Anaconda on each of the workstations, for all users. This will be updated/maintained as the need arises. To give users flexibility, they would then create one or more conda environments on their network drives.
I started testing:
Everything seems to work fine on the PC on which I created the environment, using
conda create -p Z:\path\to\env\my-env python=3.7 anaconda
conda activate Z:\path\to\env\my-env
pip install somepackages
conda install somemorepackages
I can run code from IPython, Jupyter Notebook, etc.
On a different PC however, I run into issues:
I added the network path using conda config --add envs_dirs Z:\path\to\env. I am able to activate the environment using conda activate Z:\path\to\env\my-env. I can also import packages that are installed in the environment but not in the base Anaconda installation (I tested with napari).
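For reference, the setup on the second PC boils down to the following sketch (same paths as above; once the parent directory is registered in envs_dirs, the environment should also be addressable by name):
:: register the parent directory so conda can find environments inside it
conda config --add envs_dirs Z:\path\to\env
:: the network environment should now show up here
conda env list
:: equivalent to activating by the full path
conda activate my-env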
However, with some other packages (dask_image.imread) I get an error message:
WARNING: This application failed to start because it could not find or load the Qt platform plugin "windows" in "".
Reinstalling the application may fix this problem.
My question is now two-fold:
am I choosing the most feasible way? If not, what should be done differently?
if it is the best way, how do I fix the error message? I am guessing that the created environment is missing some dependency or path because I'm executing it on a different system than the one it was created on. When I google for solutions, the results are mostly about users trying to install into folders with non-Unicode characters in their names, so not helpful.

Windows ...
The PCs were set up by IT such that the assignment of drive letters is somewhat arbitrary.
So activating the environments works, but the absolute paths written into the environment then point to a non-existent location. (Now the error message could not find or load ... in "" also makes much more sense: Python is trying to load a dependency from a path that does not exist.)
Workaround (for now): I'm making sure that the drive letters are consistent; then everything works.
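A minimal batch-style sketch of that workaround, run before activating the environment (the share path \\fileserver\imagedata is hypothetical):
:: pin the share to the drive letter the environment was created under (assuming Z: is free)
net use Z: \\fileserver\imagedata /persistent:yes
:: the absolute paths baked into the environment now resolve again
conda activate Z:\path\to\env\my-env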
I'm going to test whether it's possible to install environments with relative paths, because this should solve the issue altogether.

Related

Cannot install pip in a conda environment created from a .txt file [duplicate]


pip install multiple users on shared server

At work we have a powerful Windows machine. It runs various programs and software, but as a Python user I would also like to be able to run scripts and write code on that machine to take advantage of its power.
As of now, we have Python installed. The issue arises when I log onto the server with my account: when I do e.g. pip install numpy, the package is installed into my account/user folder. So basically every person logging in needs to install every package from scratch before they can use it, which is not what we want.
So my question is: how do we enable global installation for all users via pip?
Maybe install Python for the whole machine instead of for a specific user?
When installing Python, select "Install for all users" (under "Customize installation" -> "Next").
Create a public package installation folder, like "C:\Users\Public\site-packages".
Add or set the public installation folder in the PYTHONPATH and PYTHONUSERBASE environment variables.
Execute pip config set global.target [YOUR PUBLIC FOLDER] on the command line.
pip will then install packages into the public folder, and Python will find them via PYTHONPATH and PYTHONUSERBASE.
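A minimal batch-style sketch of these steps, assuming C:\Users\Public\site-packages as the shared folder (setx /M writes machine-wide environment variables, needs an elevated prompt, and only affects newly started shells):
:: make pip install into the shared folder by default
pip config set global.target C:\Users\Public\site-packages
:: let Python find packages installed there, for every user
setx /M PYTHONPATH "C:\Users\Public\site-packages"
setx /M PYTHONUSERBASE "C:\Users\Public\site-packages"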

Transferring Conda environments across platforms

I downloaded Anaconda and started using it on my Mac but now I am switching laptops. I will be using a Windows laptop now and I need to transfer my environments to my new laptop. How best can I do this?
I am using Python version 3.8 and was using Jupyter notebooks to run my code. But if I simply try to run the notebook on my Windows laptop I am getting one error after another (because I don't have the packages installed). Installing them one by one will take time and I don't even remember most of what I installed.
If you are working across platforms (osx-64 -> win-64) you'll need to be minimal about what packages you export from the existing environment. While Conda does have a recommended intra-platform procedure for exactly recreating environments, it does not directly translate to the cross-platform situation. Instead, try using:
conda env export --from-history > environment.yml
and then, on the new computer,
conda env create -f environment.yml
This will only export the packages that you have explicitly specified to be in the environment at some point (e.g., using conda install foo). Dependencies will be resolved automatically on the new system. This does not guarantee that every exported package is available on Windows, but such problems should be less frequent and easier to resolve manually (typically by removing the package from the YAML or adjusting versions).
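For illustration, the exported environment.yml might look something like this (package names and versions hypothetical); anything that turns out not to exist on win-64 can simply be deleted from the list before running conda env create:
name: my-env
channels:
  - defaults
dependencies:
  - python=3.8
  - numpy
  - pandas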

How do Python web developers in general include the required Python modules?

I am writing a code in python that uses numpy, matplotlib etc.
How to make sure that even a remote web server with python installed but no extra modules, can run the code without errors?
I usually work in a Linux environment, so from source code I can install the libraries into a prefix directory and keep that alongside my code, then extend PYTHONPATH locally in my Python code to use that directory.
But I have started to realize this is not the correct way: it can't work cross-platform, since the libraries differ, and the code in my script that extends PYTHONPATH may not work due to the use of "/" in paths. Also, I am not sure the compiled code will work in different environments on the same Linux platform.
So I think I need to create directories like unix, windows, osx, etc. and put my code there? I believe this is what I find when I download code online. Is that what developers generally do to avoid these issues?
A popular convention is to list requirements in a text file (requirements.txt) and install them when deploying the project. Depending on your deployment configuration, the libraries can be installed into a virtual environment (Google keyword: virtualenv), into a local user folder (pip install --user -r requirements.txt, if this is the only project under this account), or globally (pip install -r requirements.txt, e.g. in a Docker container).
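As a sketch, a hypothetical requirements.txt checked into the repository:
numpy>=1.21
matplotlib>=3.4
and the virtual-environment variant of the deployment (POSIX shell; directory names hypothetical):
python -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt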

Relocating virtual environment project from local to server (flask project), have to install dependencies again?

I have created a flask application in a virtual environment on my local machine and I could run it locally (at http://localhost:5000).
I then put this project in a repo, went to my server, and git cloned it.
All files are identical on my local machine and in my server.
I then wanted to test this virtual environment on the server by sourcing .venv/bin/activate.
However, I ran into an error. It says I do not have Flask:
Traceback (most recent call last):
  File "__init__.py", line 1, in <module>
    from flask import Flask
ImportError: No module named flask
I am assuming that I have to initialize something in the virtual environment first, like installing all of the dependencies. Or do I have to pip install flask again? (It would be kind of funny to do that...)
As a general rule, Python environments are not portable across machines, so you cannot reliably expect to move a virtual environment from one machine to another. This is especially true between different operating systems: a virtual environment created on Windows will not work on Linux.
Similarly, a virtual environment created on OSX will not work on Linux. Sometimes you can get Linux-to-Linux compatibility, but this is by chance and not to be relied upon.
The reasons are numerous: some libraries need to be built as native extensions, others require compatible system libraries in place to work, etc.
So, the most reliable workflow is the following:
You can (but I would recommend against this) put your virtual environment in the same directory as your project. If you do so, make sure you don't add the virtual environment's root directory to your source control system. It is best to keep your virtual environments separate from your source code (see the virtualenvwrapper project for a great way to manage them separately).
You should create a requirements file by running pip freeze > requirements.txt. Keep this file updated and add it to your source control system. On the target system, simply create an empty virtual environment and then pip install -r requirements.txt to make sure all requirements are installed correctly; this also ensures that any native extensions are built and installed for that system.
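A minimal sketch of that round trip (POSIX shell; directory names hypothetical):
# on the development machine, inside the activated virtualenv
pip freeze > requirements.txt
# on the server, inside the cloned repository
python -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt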
A few possible issues:
When you created your original virtual environment, did you specify --no-site-packages? If not, your package may be using elements from the system Python.
Some packages rely on system-installed libraries that may be missing on your target system.
Is your server running on similar hardware to your development system, with the same OS? If not, your virtualenv is likely not to work without re-installing packages, as any C/C++ extensions will have been built for the wrong hardware/OS.
The thing is that virtualenv is not a package builder (look at PyInstaller for that), but rather a development and test environment. When you distribute your code to a new platform, then, provided you started off with --no-site-packages, you can easily find out what you need to install on the new target.
So basically: yes, you, or more likely the system admin, will need to run pip install flask and probably several other things!
