I am building Python 3.10 from source on Ubuntu 18.04, following instructions from several web links, primarily the Python website (https://devguide.python.org/setup) and RealPython (https://realpython.com/installing-python/#how-to-build-python-from-source-code). I extracted Python-3.10.0.tgz into /opt/Python3.10. I have three questions.
First, the Python website says to use ./configure --with-pydebug and RealPython says to use ./configure --enable-optimizations --with-ensurepip=install. Another source says to include --enable-shared and --enable-unicode=ucs4. Which of these is best? Should I use all of those flags?
Second, I currently have Python 3.6 and Python 3.8 installed. They are installed in several directories under /usr. Following the directions I have seen on the web I am building in /opt/Python3.10. I assume that make altinstall (the final build step) will take care of installing the build in the usual folders under /usr, but that's not clear. Should I use ./configure --prefix=directory although none of the web sources mention doing that?
Finally, how much does --enable-optimizations slow down the install process?
This is my first time building Python from source, and it will help to clear these things up. Thanks for any help.
Welcome to the world of Python build configuration! I'll go through the command line options to ./configure one by one.
--with-pydebug is for core Python developers, not developers (like you and me) who just use Python. It builds the interpreter with extra runtime checks and debugging hooks, which slows down execution. You don't need it.
--enable-optimizations is good for performance in the long run: it enables profile-guided optimization, which lengthens the compile, possibly three-fold (or more) depending on your system. However, it results in a faster interpreter, so I would use it in your situation.
--with-ensurepip=install is good. You want the most up-to-date version of pip.
--enable-shared is maybe not a good idea in your case, so I'd recommend not using it here. Read Difference between static and shared libraries? to understand the difference. Basically, since you'll possibly be installing to a non-system path (/opt/local, see below) that almost certainly isn't on your system's search path for shared libraries, you'll very likely run into problems down the road. A static build has all the pieces in one place, so you can install and run it from wherever. This is at the expense of size - the python binary will be rather large - but is great for non-sys admins. Even if you end up installing to /usr/local, I would argue that static is better/easier than shared.
--enable-unicode=ucs4 is obsolete: since Python 3.3 (PEP 393) the interpreter chooses its internal Unicode representation automatically, and the flag has been removed. It is left over from build instructions that are quite a few versions out of date. You don't need it.
--prefix I would suggest you use --prefix=/opt/local if that directory already exists and is in your $PATH, or if you know how to edit your $PATH in ~/.bashrc. Otherwise, use /usr/local or $HOME. /usr/local is the designated system-wide location for local software installs (i.e., stuff that doesn't come with Ubuntu), and is likely already on your $PATH. $HOME is always an option that doesn't require the use of sudo, which is great from a security perspective. You'll need to add /home/your_username/bin to your $PATH if it isn't already present.
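Putting those recommendations together, a minimal sketch of the whole sequence might look like this (assuming you settle on /usr/local as the prefix; adjust it to whichever location you pick):

    cd /opt/Python3.10
    ./configure --prefix=/usr/local --enable-optimizations --with-ensurepip=install
    make -j"$(nproc)"
    sudo make altinstall    # altinstall installs python3.10 without touching the existing python3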
Related
I wish to place a Python program on GitHub and have other people download and run it on their computers with assorted operating systems. I am relatively new to Python, but have used it enough to notice that getting the assorted versions of all the included modules to work together can be problematic. I just discovered requirements.txt (generated with pipreqs and deployed with pip install -r /path/to/requirements.txt), but was very surprised to notice that requirements.txt does not actually state which version of Python is being used, so it is obviously not the complete solution on its own. So my question is: what set of specifications/files/something else is needed to ensure that someone downloading my project will actually be able to run it with the fewest possible problems?
EDIT: My plan was to be guided by whichever answer got the most upvotes. But so far, after 4 answers and 127 views, not a single answer has even one upvote. If some of the answers are no good, it would be useful to see some comments as to why they are no good.
Have you considered setting up a setup.py file? It's a handy way of bundling all of your... well, setup into a single location. Then all your user has to do is A) clone your repo and B) run pip install . to execute the setup.py.
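In other words, the end-user workflow boils down to something like this (yourname/yourproject is a hypothetical repository, of course):

    git clone https://github.com/yourname/yourproject.git
    cd yourproject
    pip install .    # reads setup.py and pulls in the declared dependencies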
There's a great Stack Overflow discussion about this.
As well as a handy example written by the requests guy.
This should cover most use cases. Now, if you want to make it truly distributable, then you'll want to look into setting it up on PyPI, the official distribution hub.
Beyond that, if you're asking how to make a program "OS-independent", there isn't a one-size-fits-all answer; it depends on what you are doing with your code and requires researching how your particular code interacts with each OS.
There are many, many ways to do this. I'll skate over the principles behind each, and its use case.
1. A python environment
There are many ways to do this: pipenv, conda, requirements.txt, etc., etc.
With some of these, you can specify Python versions. With others, just specify a range of Python versions you know it works with. For example, if you're using Python 3.7, it's unlikely not to support 3.6, since there are only one or two minor changes; 3.8 should work as well.
Another similar method is setup.py. These are generally used to distribute libraries - like PyInstaller (another solution I'll mention below), or numpy, or wxPython, or PyQt5, etc. - for import/command-line use. The Python packaging guide is quite useful, and there are loads of tutorials out there (google "python setup.py tutorial"). You can also specify requirements in these files.
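As a rough sketch (assuming your project already has a setup.py at its root; the wheel filename below is hypothetical), building and installing distributable archives looks something like:

    pip install wheel                       # needed for bdist_wheel
    python setup.py sdist bdist_wheel       # writes .tar.gz and .whl archives into dist/
    pip install dist/yourproject-0.1.0-py3-none-any.whl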
2. A container
Docker is the big one. If you haven't heard of it, I'll be surprised. A quick google of a summary comes up with this, which I'll quote part of:
So why does everyone love containers and Docker? James Bottomley, formerly Parallels' CTO of server virtualization and a leading Linux kernel developer, explained VM hypervisors, such as Hyper-V, KVM, and Xen, all are "based on emulating virtual hardware. That means they're fat in terms of system requirements."
Containers, however, use shared operating systems. This means they are much more efficient than hypervisors in system resource terms. Instead of virtualizing hardware, containers rest on top of a single Linux instance. This means you can "leave behind the useless 99.9 percent VM junk, leaving you with a small, neat capsule containing your application,"
That should summarise it for you. (Note you don't need a specific OS for containers.)
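A minimal sketch of the workflow, assuming your repository contains a Dockerfile describing the image ("yourproject" is just a hypothetical tag):

    docker build -t yourproject .    # build the image from the Dockerfile in the current directory
    docker run --rm yourproject      # run the containerized application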
3. An executable file
There are 2 main tools that do this at the time of writing. PyInstaller, and cx_Freeze. Both are actively developed. Both are open source.
You take your script, and the tool compiles it to bytecode, finds the imports, copies those, and creates a portable python environment that runs your script on the target system without the end user needing python.
Personally, I prefer PyInstaller - I'm one of the developers. PyInstaller provides all of its functionality through a command line script, and supports most libraries that you can think of - and is extendable to support more. cx_Freeze requires a setup script.
Both tools support Windows, Linux, macOS, and more. PyInstaller can create single-file executables or a one-folder bundle, whereas cx_Freeze only supports one-folder bundles. PyInstaller 3.6 supports Python 2.7 and 3.5-3.7, but 4.0 won't support Python 2. cx_Freeze dropped Python 2 support as of its last major release (6.0, I think).
Anyway, enough about the tools' features; you can look into those yourself. (See https://pyinstaller.org and https://cx-freeze.readthedocs.io for more info.)
When using this distribution method, you usually provide source code on the GitHub repo, a couple of exes (one for each platform) ready for download, and instructions on how to build the code into an executable file.
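For example, a minimal PyInstaller run might look like this (a sketch, assuming a hypothetical entry-point script called main.py):

    pip install pyinstaller
    pyinstaller --onefile main.py    # the bundled executable ends up under dist/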
The best tool I have used so far for this is Pipenv. Not only does it unify and simplify the whole pip+virtualenv workflow for you, the developer, but it also guarantees that the exact versions of all dependencies (including Python itself) are met when other people run your project with it.
The project website does a pretty good job of explaining how to use the tool but, for completeness' sake, I'll give a short explanation here.
Once you have Pipenv installed (for instance, by running pip install --user pipenv), you can go to the directory of your project and run pipenv --python 3.7, and Pipenv will create a new virtualenv for your project along with a Pipfile and a Pipfile.lock (more on them later). If you then run pipenv install -r requirements.txt, it will install all your packages. Now you can do pipenv shell to activate your new virtualenv, or pipenv run python your_main_file.py to simply run your project.
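Condensed into commands, that workflow is roughly this (a sketch, assuming Python 3.7 and an existing requirements.txt):

    pip install --user pipenv
    pipenv --python 3.7                  # create the project's virtualenv
    pipenv install -r requirements.txt   # import your existing requirements into the Pipfile
    pipenv shell                         # or: pipenv run python your_main_file.py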
Now let's take a look at the contents of your Pipfile. It should be something resembling this:
[packages]
Django = "*"
djangorestframework = "*"
iso8601 = "*"
graypy = "*"
whitenoise = "*"
[requires]
python_version = "3.7"
This file has the human-readable specifications for the dependencies of your project (note that it specifies the Python version too). If your requirements.txt had pinned versions, your Pipfile could have them too, but you can safely wildcard them, because the exact versions are stored in the Pipfile.lock. Now you can run things like pipenv update to update your dependencies; just don't forget to commit Pipfile and Pipfile.lock to your VCS.
Once people clone your project, all they have to do is run pipenv install and Pipenv will take care of the rest (if pyenv is available, it may even install the correct version of Python for them).
I hope this was useful. I'm not affiliated in any way with Pipenv, just wanted to share this awesome tool.
If your program doesn't revolve around a desktop GUI, or has a web GUI, then you can share the code using Google Colaboratory.
https://colab.research.google.com/
Everyone can run it with the same environment. No need for installation.
If converting all your Python scripts into one executable would help you, then my answer below may be useful...
I have been developing a large desktop application purely in Python for 3 years. It is a GUI-based tool built on top of the PyQt library (Python bindings for the Qt C++ framework).
I am currently using the "py2exe" packaging library: it is a distutils extension which lets you build standalone Windows executable programs (32-bit and 64-bit) from Python scripts. All you have to do is:
install py2exe: 'pip install py2exe'
Create a setup.py script: It is used to specify the content of the final EXE (name, icon, author, data files, shared libraries, etc ..)
Execute: python setup.py py2exe
I am also using the "Inno Setup" software to create the installer: creating shortcuts, setting environment variables, icons, etc.
I'll give you a very brief summary of some of the available solutions you may choose from when it comes to Python packaging (knowledge is power):
Follow the guidelines provided at Structuring Your Project; these conventions are widely accepted by the Python community and are usually a good starting point for newcomers coding in Python. By following these guidelines, Pythonistas looking at your project/source on GitHub or other similar places will know straight away how to install it. Also, uploading your project to PyPI, as well as adding CI, will be painless if you follow those rules.
Once your project is structured properly according to standard conventions, the next step might be using one of the available freezers, in case you'd like to ship your end users a package they can install without forcing them to have Python installed on their machines. Be aware, though, that these tools won't provide you any code protection... in other words, extracting the original Python code from the final artifacts would be trivial in all cases.
If you still want to ship your project to your users without forcing them to install any dev dependencies, and you do also care about code protection (so you don't want to consider any of the existing freezers), you might use tools such as Nuitka, Shed Skin, Cython or similar ones. Usually, reversing code from the artifacts produced by these tools isn't trivial at all... Cracking the protection, on the other hand, is a different matter, and unless you avoid providing a physical binary to your end user altogether, you can't do much about it other than slowing them down :)
Also, in case you'd need to use external languages in your Python project, another classic link that comes to mind is https://wiki.python.org/moin/IntegratingPythonWithOtherLanguages; adding the build systems of such tools to CI by following the rules of point 1 would be pretty easy.
That said, I'd suggest sticking to bullet point 1, as I know it will be more than good enough to get you started; that particular point should also cover many of the existing use cases for "standard" Python projects.
While this is not intended to be a full guide, by following those points you'll be able to publish your Python project to the masses in no time.
I think you can use Docker for your Python project; see https://github.com/celery/celery/tree/master/docker for an example.
Kindly look through those files, and I think you can figure out how to write a Dockerfile for your own Python scripts!
Because it is missing from the other answers, I would like to add one completely different aspect:
Unit testing. Or testing in general.
Usually, it is good to have one known good configuration. Depending on what the dependencies of the program are, you might have to test different combinations of packages. You can do that in an automated fashion with e.g. tox or as part of a CI/CD pipeline.
There is no general rule for which combinations of packages should be tested, but usually Python 2/3 compatibility is a major issue. If you have strong dependencies on packages with major version differences, you might want to consider testing against these different versions.
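As a sketch, a tox-based setup (assuming a tox.ini that lists the interpreters and package combinations you care about) boils down to:

    pip install tox
    tox    # creates each listed environment and runs the test suite inside it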
I just downloaded Python sources, unpacked them to /usr/local/src/Python-3.5.1/, run ./configure and make there. Now, according to documentation, I should run make install.
But I don't want to install it somewhere in common system folders, create any links, change or add environment variables, doing anything outside this folder. In other words, I want it to be portable. How do I do it? Will /usr/local/src/Python-3.5.1/python get-pip.py install Pip to /usr/local/src/Python-3.5.1/Lib/site-packages/? Will /usr/local/src/Python-3.5.1/python work properly?
make altinstall, as I understand it, still creates links, which is not desired. Is it correct that it creates symbolic links as well, but simply doesn't touch /usr/bin/python and the man pages?
Probably I should do ./configure --prefix=some/private/path and just make and make install, but I still wonder whether it's possible to use Python without running make install.
If you don't want to copy the binaries you built into a shared location for system-wide use, you should not make install at all. If the build was successful, it will have produced binaries you can run. You may need to set up an environment for making them use local run-time files instead of the system-wide ones, but this is a common enough requirement for developers that it will often be documented in a README or similar (though as always when dealing with development sources, be prepared that it might not be as meticulously kept up to date as end-user documentation in a released version).
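For CPython specifically, a quick sanity check along those lines might be (just a sketch; it doesn't settle the get-pip.py question, but it shows the in-place binary is usable):

    cd /usr/local/src/Python-3.5.1
    ./python -V       # the freshly built interpreter sitting in the source tree
    ./python -m this  # confirms the Lib/ directory in the source tree is found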
I'm perplexed about how best to use pip in the face of security concerns about malicious packages or install scripts. I'm not much of a security expert, so I may just be confused (bear with me), but it seems that there are 4, possibly overlapping, approaches:
(1) Use sudo pip for everything
This is how I do things now. I generally do not need virtualenvs and like the convenience of having all my packages work for all my tools. I also don't install a lot of experimental packages, sticking pretty much to the well-known and widely used ones (matplotlib, six, etc).
I gather this can be a risky approach though because the installation process has su privileges, and could potentially do anything; however it has the advantage of protecting the site-packages directory from subsequent mischief by anything (not just packages) running as non-su after an install.
This approach also can't be completely avoided, as some packages (pip itself) need it to bootstrap any Python installation.
(2) Create a pip user and give it ownership of site-packages
This would seem to have the advantage of restricting what pip can do: all it can do is install to site-packages. But I'm not sure about side effects, or if it would even work (when, for example pip needs to put things in other locations). A more realistic variant of this is to set things up this way, and use pip as "pip-user" when it works, and as su when it doesn't.
(3) Give myself ownership of site-packages
I gather this is a very bad idea, but I'm not sure quite why. It would mean that any code I run would be able to tamper with site-packages; but it would mean that malicious install scripts could only damage things I can damage myself anyway.
(4) "Use a virtualenv"
This suggestion comes up a lot, but I don't see how it helps. It seems no different from 3 to me since it creates a site-packages that I own.
Which, if any of these approaches, or combinations of approaches, is best for ensuring that pip does not result in exposing my system? My concern is mostly with my system as a whole, and only secondarily with my Python installation in site-packages (which I can always rebuild if need be).
Part of the problem I have is that I don't know how to weigh the risks. An example approach that seems to make sense to my limited understanding is simply to do (1) for the most part, and use a virtualenv (4) for any package that I worry might damage my site-packages. Anything I've installed will still be able to damage anything I have access to, but that seems unavoidable, and at least things I don't have access to will be safe (except during the installation process itself). But I have trouble evaluating whether the protection this affords is worth the risk it creates.
You probably want to look at using a virtualenv. To quote the docs:
Virtualenv is a tool to create isolated Python environments. The basic problem being addressed is one of dependencies and versions, and indirectly permissions.
Virtualenv will create a folder with an isolated copy of python, an isolated pip and an isolated site-packages. You're thinking that this is the same as option 3 because you're taking that advice you linked at face value and not reading into it:
If you give yourself write privilege to the system site-packages, you're risking that any program that runs under you (not necessarily python program) can inject malicious code into the system site-packages and obtain root privilege.
The problem is not with having access to site-packages (you have to have privileges for a site-packages directory to be able to do anything). The problem is with having access to the system site-packages. A virtual environment's site-packages does not expose root privileges to malicious code the way the one your entire system is using does.
However, I see nothing wrong with using sudo pip for well known and familiar packages. At the end of the day, it's like installing any other program, even non-python. If you go to its website and it looks honest and you trust it, there's no reason not to sudo.
Moreover, pip is pretty safe - it uses HTTPS for PyPI, and if you pass --allow-external it will download packages from third parties but will still keep checksums on PyPI and compare them. For third-party packages with no checksum, you need to explicitly pass --allow-unverified, which is the only option considered unsafe.
As a personal note, I can add that I use sudo pip most of the time, but as a web developer virtualenv is kind of a day-to-day thing, and I can recommend using it as well (especially if you see anything sketchy but still want to try it out).
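For reference, the virtualenv route sketched as commands (the environment path is just a hypothetical choice; matplotlib and six are the examples from your question):

    pip install --user virtualenv
    virtualenv ~/envs/sandbox            # the new site-packages lives under your home directory
    . ~/envs/sandbox/bin/activate
    pip install matplotlib six           # no sudo needed; installs only into the virtualenv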
I'm currently doing some embedded systems programming. This was set up by somebody else a few years ago. So now I'm looking to upgrade to Python 2.7.2 to make things simpler because I have already run into two cases where what I coded wasn't supported.
What is currently running:
: uname -a
Linux host1 2.6.18-6-486 #1 Sun Feb 10 22:06:33 UTC 2008 i586 GNU/Linux
: python -v
Python 2.4.4
: pyversions -i
python2.4
So right now only 2.4 is installed.
I untarred Python 2.7.2, and when I go to that directory and run python setup.py install --home=/home/jhemilian, it seems like Python 2.4 doesn't know the with...as statement syntax:
host1:/home/jhemilian/src/Python-2.7.2: python setup.py install --home=/home/jhemilian
File "setup.py", line 361
with open(tmpfile) as fp:
^
SyntaxError: invalid syntax
Before I go figuring this out, I have one question first: Python itself is being used to install Python? What if I didn't have the first version of Python installed? I know it's shipped with most Linux distributions, but hypothetically -- how does a seeming catch-22 like that work?
What I am looking to do is install Python 2.7 in a benign location, keeping the python command pointing at Python 2.4 just in case the "legacy" software I'm running depends on it, and running python2.7 myscript.py et cetera when I want to run one of my newer scripts. Feel free to comment if there is a cleaner or more practical (or even safer!) way to do this.
I don't think it would make much sense to go replacing all the with statements with compatible try blocks. I've looked through the READMEs and online documentation, but I can't seem to find a way to install Python without already having Python. Note that I DO NOT have an internet connection, although if desirable or necessary I could get one. It would be great if somebody could point me in the right direction. Thanks!!
It's all right in the README...
You don't need to use Python to install it; in fact, you shouldn't... just:
./configure
make
make install
If you want to install in a specific dir, just follow what the README says:
Installing
To install the Python binary, library modules, shared library modules
(see below), include files, configuration files, and the manual page,
just type
make install
This will install all platform-independent files in subdirectories of
the directory given with the --prefix option to configure or to the
`prefix' Make variable (default /usr/local). All binary and other
platform-specific files will be installed in subdirectories if the
directory given by --exec-prefix or the `exec_prefix' Make variable
(defaults to the --prefix directory) is given.
If DESTDIR is set, it will be taken as the root directory of the
installation, and files will be installed into $(DESTDIR)$(prefix),
$(DESTDIR)$(exec_prefix), etc.
All subdirectories created will have Python's version number in their
name, e.g. the library modules are installed in
"/usr/local/lib/python<version>/" by default, where <version> is the
<major>.<minor> release number (e.g. "2.1"). The Python binary is
installed as "python<version>" and a hard link named "python" is
created. The only file not installed with a version number in its
name is the manual page, installed as "/usr/local/man/man1/python.1"
by default.
If you want to install multiple versions of Python see the section
below entitled "Installing multiple versions".
The only thing you may have to install manually is the Python mode for
Emacs found in Misc/python-mode.el. (But then again, more recent
versions of Emacs may already have it.) Follow the instructions that
came with Emacs for installation of site-specific files.
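Putting that together for your situation, a minimal sketch might be (assuming you want 2.7 under your home directory, as in your --home example, while leaving python pointing at 2.4):

    cd /home/jhemilian/src/Python-2.7.2
    ./configure --prefix=/home/jhemilian/python27
    make
    make install                                   # touches nothing outside the chosen prefix
    /home/jhemilian/python27/bin/python2.7 myscript.py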
EDIT: virtualenv is apparently for already-installed Python versions. Disregard this recommendation.
I think what you want is virtualenv.
I haven't used it myself, but I understand this is what it's meant for.
From the website:
virtualenv is a tool to create isolated Python environments.
The basic problem being addressed is one of dependencies and versions, and indirectly permissions. Imagine you have an application that needs version 1 of LibFoo, but another application requires version 2. How can you use both these applications? If you install everything into /usr/lib/python2.7/site-packages (or whatever your platform's standard location is), it's easy to end up in a situation where you unintentionally upgrade an application that shouldn't be upgraded.
EDIT: Upon review, I think you want Alberto's answer, so I voted him up for visibility.
I am building python2.6 from source on Debian Lenny.
(./configure, make, make altinstall)
I don't want it to conflict with anything existing, but I want it to be in the default search path for bash.
Suggestions?
(ps, I'm using a vm, so I can trash it and rebuild.)
That's the purpose of /usr/local according to the FHS.
The /usr/local hierarchy is for use by the system administrator when installing software locally.
I think configure typically defaults to /usr/local unless told otherwise, but to be sure you could run ./configure --prefix=/usr/local ....
I strongly recommend you do one of these two options.
Build a .deb package, and then install the .deb package; the installed files then go in the usual places (/usr/bin/python26 for the main interpreter).
Build from source, and install from source into /usr/local/bin.
I think it is a very bad idea to start putting files in the usual places, but not known or understood by the package manager. If you built it by hand and installed it by hand, it should be confined in the /usr/local tree.
I would recommend fetching the source package from testing or unstable and rebuilding it locally so that you get a .deb instead. Doesn't backports.org have it?
Edit: Debian has python2.6 only in experimental, see here. You could also take the source package from Ubuntu.
Your safest bet is to put Python 2.6 in /opt (./configure --prefix=/opt), and modify /etc/profile so that /opt/bin is searched first.
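A minimal sketch of that approach (assuming you're comfortable editing /etc/profile):

    ./configure --prefix=/opt
    make
    sudo make altinstall                 # installs python2.6 under /opt, leaving /usr alone
    # then add to /etc/profile:  export PATH=/opt/bin:$PATH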