I need to install Python on a server to run scripts, but the server has no access to the internet.
The server has access to a local network that has access to the internet*. I would like to use pip to manage the packages through a local network directory as specified here.
How can I install pip, Python, and their dependencies on a Windows machine, offline, so that I can use pip as specified in the link above to manage the packages I require?
*For clarity: I have no ability to mirror, hack, or otherwise get information to pass through the local network directly from the internet.
The official Python installer for Windows has no other dependencies. It runs completely offline.
For other packages that may have dependencies (that are difficult to install on Windows): Christoph Gohlke maintains a list of Windows installers for common Python packages. These are msi installers (or whl files) that are self-contained.
They are designed to work with the official Python installer for Windows - as they use its registry entries to identify the install location.
You can download these and move them to your Windows machine.
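A wheel downloaded from there can then be installed straight from the local file, with no network access at all (the path and file name below are only illustrative):
pip install C:\downloads\numpy-1.21.0-cp39-cp39-win_amd64.whl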
Beyond those two - if you have further requirements you can use tools like basket to download packages and then provide the location as a source for offline pip installs; or create your own pip repository.
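A rough sketch of that workflow using plain pip instead of basket (the directory name and package name are placeholders): download the package and its dependencies on the internet-connected machine, move the folder across, and install from it offline:
pip download requests -d C:\pip-packages
pip install --no-index --find-links=C:\pip-packages requests
The first command runs on the machine with internet access; the second runs on the offline server.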
If you do decide to create a local pip repository, it is better to create a pip proxy (see pypicache, for example); that way you are only requesting the packages that are required, rather than trying to mirror the entire cheeseshop.
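If you do end up running a local index or proxy on your network, pip can be pointed at it instead of PyPI; the URL below is just a placeholder for wherever your proxy is listening:
pip install --index-url http://pypi.local:8080/simple/ somepackage
The same URL can also go under index-url in pip.conf (pip.ini on Windows) so that it applies to every install.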
I currently work on a server of my university, where I am using Python with Tensorflow and Cuda. I built some machine learning models on my local machine and copied them onto the server, where they run fine with the basic Python installation.
However, I want to install a few additional packages and I also want to make sure that I do not have to be worried about updates of the software installed on the server that could cause problems with my code.
I do not have and cannot get admin permissions to install any programs or modules outside my personal directory, since I am only a regular user and a student above that.
Now I wondered if there is an easy way to just clone the main Python installation to a virtual environment in my home folder. I could basically create a new virtual environment and install Tensorflow etc., but I found it very frustrating when I tried to set it up with Cuda on my private computer. Moreover, I find it rather complicated to access the server (log-in credentials, VPN, two-factor authentication), so I want to minimize the trial-and-error time that I often need when I try custom installations of modules.
Approach 1:
use "pip freeze > requirement.txt" to generate the requirements file and install all requirements using requirements file using "pip install -r requirement.txt" command.
Approach 2:
use the virtualenv-clone package to make a clone of the existing environment.
https://pypi.org/project/virtualenv-clone/
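If I recall its interface correctly, it installs a small command-line tool, and usage is roughly as follows (the paths are placeholders):
pip install virtualenv-clone
virtualenv-clone /path/to/existing/env ~/cloned-env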
First, my reasons to do this - I know it's a bad idea but I am out of ideas.
I want to install a package which requires an ld version higher than the one in the repos of my CentOS 6.5. So I should either go for setting everything up in Docker and running it in production - something I lack experience with and don't feel comfortable doing for a serious project - or upgrade ld manually, building it from an external source, which I have read could result in the devastation of my CentOS. So the last option I am left with is to install the package on another machine and manually copy it to site-packages.
I have successfully installed the package on my home laptop under Debian.
I encountered advice everywhere to copy the whole site-packages directory, which is something I don't want to do, as I have different packages on the two machines and I want to avoid messing up other stuff.
I copied the .so build and the .egg-info of the package. Then, on the target machine, pip freeze indeed showed me the transferred package. However, Python can't find it when I try to import and use it.
Am I missing something else?
Not any of that.
Don't mess with the system Python's site-packages dir; it belongs to the system Python env only. You should only add/remove code in there by using the package manager of your OS (that's yum for CentOS). This is especially true on Linux, where many OS services rely on the system Python.
So what to do instead? Use a virtualenv and/or pipx to isolate any other dependencies of the package you want to install from the system versions.
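A minimal sketch of that, assuming the package you built elsewhere is available as a wheel or sdist file you can copy over (the environment path and file name are only examples):
python -m virtualenv ~/envs/mypkg
source ~/envs/mypkg/bin/activate
pip install /path/to/mypkg-1.0.0-py3-none-any.whl
This keeps the package and its dependencies entirely inside ~/envs/mypkg, away from the system site-packages.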
How to package Python itself into virtualenv? Is this even possible?
I'm trying to run python on a machine which it is not installed on, and I thought virtualenv made this possible. It activates, but can't run any Python.
When setting up the virtualenv (this can also be done if it is already set up), simply do:
python -m virtualenv -p python env
And Python will be added to the virtualenv, and will become the default python of it.
A specific version of Python can also be passed; otherwise, python uses the first version found in the PATH.
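For example, to pin the environment to a particular interpreter (the version and path below are illustrative):
python -m virtualenv -p python3.8 env
python -m virtualenv -p /usr/bin/python3.8 env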
virtualenv makes it convenient to use multiple python versions in different projects on the same machine, and isolate the pip install libraries installed by each project. It doesn’t install or manage the overall python environment. Python must be installed on the machine before you can install or configure the virtualenv tool itself or switch into a virtual environment.
Side note, consider using virtualenvwrapper — great helper for virtualenv.
You haven't specified the Operating System you are using.
In case you're using Windows, you don't use virtualenv for this. Instead you:
Download the Python embeddable package
Unpack it
Uncomment import site in the python37._pth file (only if you want to add additional packages)
Manually copy your additional packages (the ones you usually install with pip) to Lib\site-packages (you need to create that directory first, of course); a short sketch of this step follows below.
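As an illustration of that last step, on a Windows machine that does have pip and internet access you can install packages directly into the embeddable distribution's folder and then carry the whole folder to the offline machine (the path, the Python version, and the package name are only examples; this assumes you unpacked the embeddable package to C:\py-embed and already uncommented import site in python37._pth):
pip install --target=C:\py-embed\Lib\site-packages requests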
A Python installation like this is configured so that it can be moved to and run from any location.
You only have to ensure the Microsoft C Runtime is installed on the system (but it almost always already is). See the documentation note:
Note The embedded distribution does not include the Microsoft C Runtime and it is the responsibility of the application installer to provide this. The runtime may have already been installed on a user’s system previously or automatically via Windows Update, and can be detected by finding ucrtbase.dll in the system directory.
You might need to install python in some location where you have the permissions to do so.
I have a computer that is not connected to the internet.
I downloaded the Anaconda installation wizard, but it does not contain all the packages that Anaconda shows on their website, and this is inconvenient.
How can I download all the packages that are displayed in Anaconda for offline installation (the size of the installation is not a problem)?
According to the Anaconda page here: obtain a copy of the appropriate installer for your system, copy it to the air-gapped machine, then follow the detailed instructions for your operating system (see the links under "Detailed installation information").
Check the environment tab and see if the package you want is part of the environment, but not installed - many are already there. If it is part of the environment, then use the interactive installer to get it installed.
You can also try (from the command line/console) conda install <package-file-name>.tar.bz2 if you have a particular downloaded package that you want to install. Obviously in this case it needs to be a tar.bz2 file, but you should be able to modify that for other compression types.
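For example, with a package file that you have already copied onto the machine (the file name below is only an illustration):
conda install --offline ./numpy-1.19.2-py38h54aff64_0.tar.bz2
The --offline flag tells conda not to try to reach any remote channels while installing the local file.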
Other environments, such as R are also available in the downloaded, but not installed category.
I think (not sure on this) that things such as Rstudio need to be downloaded separately to install.
I am currently working on a system that requires a Python installation without an internet connection available (or at least I can't assume that there is an internet connection available).
I am wondering what the overhead of maintaining a pip repository would be, and also whether it would be possible for such a repo to also hold the system requirements (i.e. python-dev packages and any other Ubuntu packages I may require).
There's no such thing as a "pip" repository; pip is just a tool to manage packages and their dependencies. The underlying packages used are (for now) "egg" packages; PyPI is a repository of these. This Q/A should answer how to set up your own: How do you host your own egg repository?
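As a very rough sketch of the simplest possible self-hosted setup (the host name, port, and package name are placeholders): drop your downloaded package files into a directory, serve that directory over the local network, and point pip at it:
python -m http.server 8000
pip install --no-index --find-links=http://pkg-host:8000/ somepackage
The first command runs inside the directory that holds the package files; pip then picks up the links from the generated directory listing.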