How to package Python itself into virtualenv? Is this even possible?
I'm trying to run Python on a machine on which it is not installed, and I thought virtualenv made this possible. The environment activates, but it can't run any Python.
When setting up the virtualenv (this can also be done if it is already set up), simply do:
python -m virtualenv -p python env
Python will be added to the virtualenv and become its default interpreter.
A specific version of Python can also be passed to -p; passing just python uses the first interpreter found on the PATH.
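For example (a minimal sketch; python3.9 is just an assumed name for an interpreter already installed on the machine):
$ python -m virtualenv -p python3.9 env
$ source env/bin/activate
(env) $ python --version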
virtualenv makes it convenient to use multiple Python versions in different projects on the same machine and to isolate the libraries each project installs with pip. It doesn't install or manage the overall Python environment, though: Python must already be installed on the machine before you can install or configure the virtualenv tool itself or switch into a virtual environment.
Side note: consider using virtualenvwrapper, a great helper for virtualenv.
You haven't specified the Operating System you are using.
In case you're using Windows, you don't use virtualenv for this. Instead you:
Download the Python embeddable package
Unpack it
Uncomment import site in the python37._pth file (only if you want to add additional packages)
Manually copy your additional packages (the ones you usually install with pip) to Lib\site-packages (you need to create that directory first, of course)
Such a Python installation is self-contained, so it can be moved to and run from any location.
You only have to ensure the Microsoft C Runtime is installed on the system (but it almost always already is). See the documentation note:
Note The embedded distribution does not include the Microsoft C Runtime and it is the responsibility of the application installer to provide this. The runtime may have already been installed on a user’s system previously or automatically via Windows Update, and can be detected by finding ucrtbase.dll in the system directory.
You might need to install Python in a location where you have the permissions to do so.
Folks,
I plan to use Python and various Python packages like Robot Framework, Appium, Selenium, etc. for test automation. But as we all know, Python and all of these packages keep releasing new versions.
If we pick a version of each of these to start with, what is the recommended process for keeping the development environment up to date as those packages release new versions?
Appreciate some guidance on this.
Thanks.
If you wrote the code with a given version of a library, updating that library in the future is more likely to break your code than make it run better unless you intend to make use of the new features. Most of the time, you are better off sticking with the version you used when you wrote the code unless you want to change the code to use a new toy.
In order to ensure that the proper versions of every library are installed when the program is loaded on a new machine, you need a requirements.txt file. Making one is easy. Build your program inside a virtual environment (e.g. conda create -n newenv, then conda activate newenv) and install only the libraries you need for your program. Then, once all of your dependencies are installed, type pip freeze > requirements.txt in your terminal. This puts all your dependencies and their version information in the text file. When you want to use the program on a new machine, simply incorporate pip install -r requirements.txt into the loading process for the program.
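As a rough sketch of that workflow (using a plain virtualenv here instead of conda; the environment name and package names are examples taken from the question):
$ virtualenv env
$ source env/bin/activate
(env) $ pip install robotframework selenium    # install only what the project needs
(env) $ pip freeze > requirements.txt          # record the exact versions
# later, on a new machine, inside a fresh environment:
(env) $ pip install -r requirements.txt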
If you containerize it using something like docker, your requirements.txt dependencies can be installed automatically whenever the container is created. If you want to use a new library or library version, simply update it in your requirements.txt and boom, you are up to date.
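A minimal Dockerfile sketch of that idea (the base image and the main.py entry point are assumptions, not something from the original answer):
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt    # dependencies installed when the image is built
COPY . .
CMD ["python", "main.py"]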
In this case you would want to isolate your package (and the external packages/versions it depends on) using a virtual environment. A virtual environment can be thought of as a self-contained directory that tracks the specific package versions your project imports. Thus you can have the latest package installed on your system, but your project will still only import the version in your virtual environment.
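For example (a sketch; the pinned version number is made up for illustration):
(env) $ pip install robotframework==3.2.2    # pinned inside the environment
(env) $ pip show robotframework              # reports the environment's version, not the system-wide one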
What is the difference between venv, pyvenv, pyenv, virtualenv, virtualenvwrapper, pipenv, etc?
https://virtualenv.pypa.io/en/stable/
https://docs.python-guide.org/dev/virtualenvs/
Will Anaconda Python config scripts clash with Homebrew's? Note that I do not use these config scripts in any of my workflows, I'm just wondering if any of these config scripts may get called "behind the scenes". Sample output below (with username replaced by '..'):
$ brew doctor
...
Having additional scripts in your path can confuse software installed via
Homebrew if the config script overrides a system or Homebrew provided
script of the same name. We found the following "config" scripts:
/Users/../anaconda/bin/curl-config
/Users/../anaconda/bin/freetype-config
/Users/../anaconda/bin/libdynd-config
/Users/../anaconda/bin/libpng-config
/Users/../anaconda/bin/libpng15-config
/Users/../anaconda/bin/llvm-config
/Users/../anaconda/bin/python-config
/Users/../anaconda/bin/python2-config
/Users/../anaconda/bin/python2.7-config
/Users/../anaconda/bin/xml2-config
/Users/../anaconda/bin/xslt-config
Clearly some of these clash with some Homebrew-installed packages.
$ ls /usr/local/bin/*-config
/usr/local/bin/Magick++-config /usr/local/bin/libpng-config
/usr/local/bin/Magick-config /usr/local/bin/libpng16-config
/usr/local/bin/MagickCore-config /usr/local/bin/pcre-config
/usr/local/bin/MagickWand-config /usr/local/bin/pkg-config
/usr/local/bin/Wand-config /usr/local/bin/python-config
/usr/local/bin/freetype-config /usr/local/bin/python2-config
/usr/local/bin/gdlib-config /usr/local/bin/python2.7-config
Such clashes are entirely possible. When you install software that depends on Python using Homebrew, you want it to see the Python packages and libraries installed via Homebrew, not those installed by Anaconda.
My solution to this is not putting
export PATH=$HOME/anaconda/bin:$PATH
into .bashrc. Normally, you'll just use the Python and pip installed via Homebrew and the packages installed by that pip. Sometimes, when you are developing a Python project for which Anaconda's environment management mechanism (conda create -n my-env) is convenient, you can temporarily run export PATH=$HOME/anaconda/bin:$PATH to turn it on. From what I gathered, one important benefit of Anaconda over regular Python is that conda create -n my-env anaconda will not duplicate package installations unnecessarily the way virtualenv my-env will when you have a large number of virtual environments. If you do not mind some degree of duplication, you could avoid installing Anaconda altogether and just use virtualenv.
It's entirely possible you won't notice any problems. On the other hand, you may have some pretty frustrating ones. It all depends on what you use and how your $PATH is ordered. Homebrew will take whatever file has precedence in your $PATH; if another Homebrew package needs to use Homebrew-installed config files and it sees the Anaconda versions first, it doesn't know any better than to use the wrong ones. In a sense, that's what you told it to do.
My recommendation is to keep things simple and clean. Unless you have a particular reason to keep Anaconda on your $PATH, you should probably pop it out and alias anything you need. Alternatively you could just install the things you require (e.g., numpy) via Homebrew and eliminate Anaconda altogether. (Actually, that's really what I would do. Anaconda comes with way more stuff than I have any reason to be dumping onto my machine.)
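For instance, a sketch of the alias approach (the install path and alias names are assumptions; adjust them to wherever Anaconda actually lives on your machine):
# keep Anaconda off $PATH, but still reachable when you want it
alias conda="$HOME/anaconda/bin/conda"
alias anapython="$HOME/anaconda/bin/python"    # hypothetical alias for Anaconda's python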
I don't know what your $PATH looks like, but in my experience, keeping it short and systematic has a lot of advantages.
I have a couple of projects that require similar dependencies, and I don't want to have pip going out and downloading the dependencies from the web every time. For instance, I am using the norel-django package, which would conflict with my standard Django (RDBMS version) if I installed it system-wide.
Is there a way for me to "reuse" the downloaded dependencies using pip? Do I need to download the source tar.bz2 files and make a folder structure similar to that of a pip archive or something? Any assistance would be appreciated.
Thanks
Add the following to $HOME/.pip/pip.conf:
[global]
download_cache = ~/.pip/cache
This tells pip to cache downloads in ~/.pip/cache so it won't need to go out and download them again next time.
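If you prefer not to edit pip.conf, older pip releases accepted the same setting per invocation or via an environment variable (a sketch; the package name is taken from the question, and newer pip versions cache downloads by default and have dropped these options):
$ mkdir -p ~/.pip/cache
$ pip install --download-cache ~/.pip/cache norel-django
$ export PIP_DOWNLOAD_CACHE=~/.pip/cache    # alternative to the config file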
It looks like virtualenv has a virtualenv-clone command, or perhaps virtualenvwrapper does?
Regardless, it looks to be a little more involved than just copying and pasting virtual environment directories:
https://github.com/edwardgeorge/virtualenv-clone
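If that tool does what its name suggests, usage would look roughly like this (the paths are placeholders, not from the original answer):
$ pip install virtualenv-clone
$ virtualenv-clone /path/to/existing_env /path/to/new_env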
Additionally, it appears virtualenv has a flag that will help with moving your virtualenv.
http://www.virtualenv.org/en/latest/#making-environments-relocatable
$ virtualenv --relocatable ENV
From the virtualenv docs:
This will make some of the files created by setuptools or distribute
use relative paths, and will change all the scripts to use
activate_this.py instead of using the location of the Python
interpreter to select the environment.
Note: you must run this after you’ve installed any packages into the
environment. If you make an environment relocatable, then install a
new package, you must run virtualenv --relocatable again.
Also, this does not make your packages cross-platform. You can move
the directory around, but it can only be used on other similar
computers. Some known environmental differences that can cause
incompatibilities: a different version of Python, when one platform
uses UCS2 for its internal unicode representation and another uses
UCS4 (a compile-time option), obvious platform changes like Windows
vs. Linux, or Intel vs. ARM, and if you have libraries that bind to C
libraries on the system, if those C libraries are located somewhere
different (either different versions, or a different filesystem
layout).
If you use this flag to create an environment, currently, the
--system-site-packages option will be implied.
I'm setting up a new system for a group of Python rookies to do a specific kind of scientific work using Python. It's got 2 different pythons on it (32 and 64 bit), and I want to install a set of common modules that users on the system will use.
(a) Some modules work out of the box for both pythons,
(b) some compile code and install differently depending on the python, and
(c) some don't work at all on certain pythons.
I've been told that virtualenv (+ wrapper) is good for this type of situation, but it's not clear to me how.
Can I use virtualenv to set up sandboxed modules across multiple user accounts without having to install each module for each user?
Can I use virtualenv to save me some time for case (a), i.e. install a module, but have all pythons see it?
I like the idea of isolating environments, and then having them just type "workon science32", "workon science64", depending on the issues with case (c).
Any advice is appreciated.
With virtualenv, you can allow each environment to use globally installed system packages simply by omitting the --no-site-packages option. This is the default behavior.
If you want to make each environment install all of their own packages, then use --no-site-packages and you will get a bare python installation to install your own modules. This is useful when you do not want packages to conflict with system packages. I normally do this just to keep system upgrades from interfering with working code.
I would be careful about thinking about these as sandboxes, because they are only partially isolated. The paths to python binaries and libraries are modified to use the environment, but really that is all that is going on. Virtualenv does nothing to prevent code running from doing destructive things to the system. Best way to sandbox is set Linux/Unix permissions properly, and give them their own user accounts.
EDIT For Version 1.7+
The default for 1.7 is to not include system packages, so if you want the behavior of using system packages, use the --system-site-packages option. Check the docs for more info.
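To tie this back to the question, a rough sketch with virtualenvwrapper (the interpreter paths are placeholders; the environment names come from the question):
$ mkvirtualenv --system-site-packages -p /path/to/python32/python science32
$ mkvirtualenv --system-site-packages -p /path/to/python64/python science64
$ workon science32    # switch environments as needed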
I'm newish to the python ecosystem, and have a question about module editing.
I use a bunch of third-party modules, distributed on PyPi. Coming from a C and Java background, I love the ease of easy_install <whatever>. This is a new, wonderful world, but the model breaks down when I want to edit the newly installed module for two reasons:
The egg files may be stored in a folder or archive somewhere crazy on the file system.
Using an egg seems to preclude using the version control system of the originating project, just as using a debian package precludes development from an originating VCS repository.
What is the best practice for installing modules from an arbitrary VCS repository? I want to be able to continue to import foomodule in other scripts. And if I modify the module's source code, will I need to perform any additional commands?
Pip lets you install packages given a URL to a Subversion, Git, Mercurial, or Bazaar repository.
pip install -e svn+http://path_to_some_svn/repo#egg=package_name
For example, if I wanted to download the latest version of cmdutils (a random package I decided to pull):
pip install -e hg+https://rwilcox@bitbucket.org/ianb/cmdutils#egg=cmdutils
I installed this into a virtualenv (using the -E parameter), and pip installed cmdutils into a src folder at the top level of my virtualenv folder.
pip install -E thisIsATest -e hg+https://rwilcox@bitbucket.org/ianb/cmdutils#egg=cmdutils
$ ls thisIsATest/src
cmdutils
Do you want to do development but have the developed version handled as an egg by the system (for instance, to get entry points)? If so, then you should check out the source and use development mode by running:
python setup.py develop
If the project happens to not be a setuptools based project, which is required for the above, a quick work-around is this command:
python -c "import setuptools; execfile('setup.py')" develop
Almost everything you ever wanted to know about setuptools (the basis of easy_install) is available from the setuptools docs. Also there are docs for easy_install.
Development mode adds the project to your import path in the same way that easy_install does. Any changes you make will be available to your apps the next time they import the module.
As others mentioned, you can also directly use version control URLs if you just want to get the latest version as it is now without the ability to edit, but that will only take a snapshot, and indeed creates a normal egg as part of the process. I know for sure it does Subversion and I thought it did others but I can't find the docs on that.
You can use the PYTHONPATH environment variable or symlink your code to somewhere in site-packages.
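For example (a sketch; the checkout path is a placeholder, and foomodule is the module name from the question):
$ export PYTHONPATH=$HOME/src/foomodule:$PYTHONPATH
# or symlink the package into site-packages instead:
$ ln -s $HOME/src/foomodule/foomodule /usr/local/lib/python2.7/site-packages/foomodule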
Packages installed by easy_install tend to come from snapshots of the developer's version control, generally made when the developer releases an official version. You're therefore going to have to choose between convenient automatic downloads via easy_install and up-to-the-minute code updates via version control. If you pick the latter, you can build and install most packages seen in the python package index directly from a version control checkout by running python setup.py install.
If you don't like the default installation directory, you can install to a custom location instead, and export a PYTHONPATH environment variable whose value is the path of the installed package's parent folder.
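A sketch of that approach (the prefix is a placeholder; the exact site-packages path depends on your Python version and platform):
$ python setup.py install --prefix=$HOME/local
$ export PYTHONPATH=$HOME/local/lib/python2.7/site-packages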