What I'm trying to do is ship my code to a remote server that may have a different Python version installed and/or may be missing the packages my app requires.
Right now, to achieve that portability, I build a relocatable virtualenv containing the interpreter and my code. That approach has some issues (for example, you have to manually copy a bunch of libraries into your virtualenv, since --always-copy doesn't work as expected) and is generally slow.
There's (in theory) a way to build Python itself statically.
I wonder if I could pack the interpreter together with my code into one binary and run my application as a module. Something like this: ./mypython -m myapp run or ./mypython -m gunicorn -c ./gunicorn.conf myapp.wsgi:application.
There are two ways you could go about solving your problem:
1. Use a static builder, like freeze, pyinstaller, or py2exe
2. Compile using Cython
This answer explains how to do it using the second approach, since the first method is not portable across platforms and Python versions, and has been covered in other answers. Also, tools like pyinstaller typically produce huge files, while Cython produces a much smaller one.
First, install cython.
sudo -H pip3 install cython
Then, you can use cython to generate a C file out of the Python .py file
(in reference to https://stackoverflow.com/a/22040484/5714445)
cython example_file.py --embed
Use GCC to compile it after getting your current Python version (note: the commands below assume you are compiling for Python 3):
PYTHONLIBVER=python$(python3 -c 'import sys; print(".".join(map(str, sys.version_info[:2])))')$(python3-config --abiflags)
gcc -Os $(python3-config --includes) example_file.c -o output_bin_file $(python3-config --ldflags) -l$PYTHONLIBVER
You will now have a binary file output_bin_file, which is what you are looking for
Other things to note:
Change example_file.py to whatever file you are actually trying to compile.
Cython is normally used to add C-type variable definitions for static memory allocation, which speeds up Python programs. In your case, however, you will still be using traditional Python definitions.
If you are using additional libraries (like opencv, for example), you might have to provide their directory using -L and then specify the library name using -l in the GCC flags (see the sketch after these notes). For more information, please refer to the GCC documentation on linker flags.
The above method might not work for anaconda python, as you will likely have to install a version of gcc that is compatible with your conda-python.
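As a concrete sketch of those extra linker flags (the library name and path here are hypothetical, purely to illustrate -L and -l):

gcc -Os $(python3-config --includes) example_file.c -o output_bin_file $(python3-config --ldflags) -l$PYTHONLIBVER -L/opt/foo/lib -lfoo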
You might wish to investigate Nuitka. It takes Python source code and converts it into C++ API calls. Then it compiles into an executable binary (ELF on Linux). It has been around for a few years now and supports a wide range of Python versions.
You will probably also get a performance improvement if you use it. Recommended.
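A typical invocation looks something like this (the script name is just a placeholder; --standalone makes the result independent of the local Python installation):

nuitka --follow-imports myapp.py
nuitka --standalone myapp.py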
You're probably looking for something like Freeze, which is able to compile your Python application with all its libraries into a static binary:
PyPi page of Freeze
Python Wiki page of Freeze
Sourceforge page of Freeze
If you are on a Mac you can use py2app to create a .app bundle, which starts your Django app when you double-click on it.
I described how to package Django and CherryPy into such a bundle at https://moosystems.com/articles/14-distribute-django-app-as-native-desktop-app-01.html
In the article I use pywebview to display your Django site in a local application window.
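If you want to try it, the basic py2app workflow is roughly this (myapp.py is a placeholder for your entry script):

pip install py2app
py2applet --make-setup myapp.py
python setup.py py2app

py2applet generates a setup.py for you, and the finished .app ends up in the dist/ directory.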
Freeze options:
https://pypi.python.org/pypi/bbfreeze/1.1.3
http://cx-freeze.sourceforge.net/
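With cx_Freeze, for example, a minimal invocation looks roughly like this (the script name is a placeholder):

cxfreeze myscript.py --target-dir dist

The dist directory then contains the executable together with the shared libraries it needs.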
However, your target server should have the environment you want, i.e. you should be able to 'create' it. If it doesn't, you should build your software to match the environment.
I found this handy guide on how to install custom version of python to a virtualenv, assuming you have ssh access: https://stackoverflow.com/a/5507373/5616110
In virtualenv, you should be able to pip install anything and you shouldn't need to worry about sudo privileges. Of course, having those and access to package manager like apt makes everything a lot easier.
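For example, once a suitable interpreter is installed somewhere you can reach (the paths below are just examples), building and using the environment looks like this:

virtualenv -p ~/opt/python-2.7/bin/python ~/venvs/myapp
source ~/venvs/myapp/bin/activate
pip install -r requirements.txt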
I have created a docker image that relies on Nuitka and a custom statically linked python3.10 to create a static binary.
I did not test it extensively; if you have the chance, please let me know whether it works for your use case.
You can check it at:
https://github.com/joaompinto/docker-build-python-static-bin
I want to package a python module containing python source and a native c++ library. Cppyy is used to dynamically generate the bindings so the library is really just a normal library. The build system for the library is meson and should not be replaced. The whole thing is in a git repository. I only care about Linux.
My question is how to get from this to “pip install url_to_package builds/installs everything.” in the least complicated way possible.
What I’ve tried:
Extending setuptools with a custom build command:
…that executes meson compile and copies the result in the right place. But pip install will perform its work in some random split-off temporary directory and I can’t find my C++ sources from there.
The Meson python module:
…can build my library and install files directly into some python env. Does not work with pip and has very limited functionality.
Wheels:
…are incredibly confusing and overkill for me. I will likely be the only user of this module. Actually, all I want is to easily use the module in projects that live in different directories…
Along the way, I also came across different CMake solutions, but those are disqualified because of my build system choice. What should I do?
I'm using Nuitka to compile my Python code. I use the --module option when I need to import the compiled code from other Python files:
nuitka --module --recurse-none file.py
Output: file.so
If I don't need to import the code and just need to run it from the terminal, I follow the regular compilation process:
nuitka --recurse-none file.py
Output: file.exe
I'm compiling these files under Debian and they work without a problem there. When I move them to an Ubuntu system, I sometimes get segmentation fault errors. Is it because code compiled under Debian is not compatible with Ubuntu, or am I making a mistake myself (like a missing library, etc.)?
As answered by abarnert, if you want to make your executable independent of the specific Python installation on your device, you need to use the --standalone option.
You can check that info in the Nuitka Manual
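A rough sketch of that, using the file name from your question:

nuitka --standalone file.py

This produces a file.dist directory containing the executable plus the libraries it needs; copy the whole directory to the Ubuntu machine and run the executable from inside it.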
Dynamic Linking
From the docs,
It translates the Python into a C level program that then uses "libpython" to execute in the same way as CPython does.
Do you have libpython installed and pointing to the same version as the one you are compiling from? For example, on Arch:
$ whereis libpython
libpython: /usr/lib/libpython3.so
This shows I have libpython installed and belonging to Python 3.x (notice the 3 at the end of the path).
Static Linking
The other way, as others have suggested, is to use the --standalone option. This should avoid the need for libpython.
I'm kind of suspicious that you have your hint right in your question. *.exe is generally a Windows executable, while *.so is a UNIX/Linux loadable module. Without delving into the manual very far, I notice that in one example you pass --module and you get, sure enough, a Linux module. In the other case, you don't pass it, and you don't get one.
My understanding was that as long as a non-Apple-default Python is used to build, the end-user need not install Python him/herself to run a py2app-built app. In developing and testing the app in my own environment, I obviously have Python installed. Specifically, I built in a pyenv with a python.org install, not Apple's own. Yet when I give the app to an end-user who doesn't have Python installed, she gets:
A Python runtime could not be located. You may need to install a
framework build of Python, or edit the PyRuntimeLocations array in
this application's Info.plist file.
The second line is concerning; if what it states is true, then a separate app instance would need to be built for every possible location of an end-user's install, e.g. /usr/bin, /Library/Frameworks, etc.
UPDATE: Info.plist defines:
<key>PythonExecutable</key>
<string>/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/MacOS/Python</string>
Yet the end-user in question only has a system install in /usr/bin.
Does this mean that every end-user needs to have an externally-installed Python, and that it must live in /Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/MacOS/Python?
What if they don't have a non-Apple Python? What if they have a non-Apple Python but it's not 2.6? How can this somewhat hardcoded dependency be avoided?
py2app automatically defaults to --semi-standalone mode if it thinks you are using the system interpreter. Your interpreter from Python.org shouldn't count as a "system" interpreter, but you could see what py2app thinks using this command:
$ python -c "import py2app.build_app; print py2app.build_app.is_system()"
False
One issue to watch out for: After I installed a Python.org interpreter today, bash didn't update its hash cache, causing strange incompatibilities when I launched python. I had to type hash -r python to reset the cache and make sure the correct version of python was getting used. (Another way to fix this is to log out and log in again.) I suppose it's possible that the same issue could have caused py2app to be confused about whether or not you were using the system python.
If that doesn't do the trick, then try installing your python interpreter to a weird location, like ~/mypython or something like that, just to make sure there's no way it can be confused for a system python.
As a last resort, I suppose you could just hack the py2app source code so that is_system() always returns False. Not sure if that would have any adverse consequences, though.
PS -- Here's a little tutorial on using py2app with a conda-packaged application:
https://github.com/stuarteberg/helloworld Not exactly relevant to your problem here, but you could compare it with your own setup and look for any conspicuous differences.
I currently have Xcode (along with the command line tools) and gfortran from HPC installed on my Yosemite system, and would like to replace HPC's gfortran with Homebrew's (because I'm having trouble building Python packages with the HPC gfortran).
What are the steps to accomplish this?
I want to be sure that
1. HPC's gfortran is gone (I just want one Fortran),
2. Apple's tools still work (for Xcode, Swift, OS X and iOS development, etc.),
and of course
3. I have a working version of gfortran that reliably builds Python packages.
Can this be done? I see, for example, that Homebrew's gfortran is now packaged as part of gcc, so it looks like I'll end up with two versions of gcc (which I'd like to avoid) or one that doesn't play well with Xcode.
What are the steps to accomplish 1-3?
I recently worked through this very same problem. This is how I got/have it working on my Yosemite system.
Some things about home-brew and the command line tools compilers:
The compiler you get through home-brew will overwrite an existing one if it is installed in the same place. A Fortran compiler that you obtained from the command line tools will not be affected. (Home-brew will warn you about this before doing anything.)
The binaries for the compilers you get through the command line tools are in /usr/bin.
The compilers (and anything else) you obtain using home-brew are stored in the Cellar (/usr/local/Cellar), and the executables are linked into the directory /usr/local/bin.
What to do?
Modify the path:
I only needed to use the compilers I got through home-brew, so I moved /usr/local/bin to the top of my $path; this ensures that /usr/local/bin is searched before /usr/bin (see the example after these notes).
Use aliases:
You can create an alias for each compiler so that you can use them interchangeably as you wish. To create an alias you will have to modify your shell-rc file. On my system I use the tcsh, and to create an alias for the home-brew compiler I would add something similar to this to my ~/.cshrc file.
alias brewgfortran '/usr/local/bin/gfortran'
This now executes the home-brew gfortran-4.9 executable that is stored in my Cellar and linked to /usr/local/bin/gfortran.
If you absolutely want to be rid of the Apple compilers, you can of course remove them from the /usr/bin directory completely. I don't think this is the best idea, though. I have shown you a few ways to avoid using them, and if you ever needed them for some reason you would be out of luck. I can't say whether the tools you need will work without them, as I have never used any of them; however, I know this setup will build Python packages for you (sorry, I hope this still helps).
NOTE: If you are not using the same shell as me, the syntax for the alias is a little different (I think); just google it or see man alias.
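As an example of the path change I mentioned above (assuming the default home-brew prefix /usr/local):

# tcsh: add to ~/.cshrc so /usr/local/bin is searched first
set path = (/usr/local/bin $path)

# bash: the equivalent line for ~/.bash_profile
export PATH=/usr/local/bin:$PATH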
I'm currently doing some embedded systems programming. This was set up by somebody else a few years ago. So now I'm looking to upgrade to Python 2.7.2 to make things simpler because I have already run into two cases where what I coded wasn't supported.
What is currently running:
: uname -a
Linux host1 2.6.18-6-486 #1 Sun Feb 10 22:06:33 UTC 2008 i586 GNU/Linux
: python -V
Python 2.4.4
: pyversions -i
python2.4
So right now only 2.4 is installed.
I untarred Python 2.7.2, and when I go to that directory and run python setup.py install --home=/home/jhemilian, it seems like Python 2.4 doesn't know the with...as statement syntax:
host1:/home/jhemilian/src/Python-2.7.2: python setup.py install --home=/home/jhemilian
  File "setup.py", line 361
    with open(tmpfile) as fp:
                       ^
SyntaxError: invalid syntax
Before I go figuring this out, I have a question first: Python itself is being used to install Python? What if I didn't have any version of Python installed already? I know it's shipped with most Linux distributions, but hypothetically, how does a seeming catch-22 like that work?
What I am looking to do is install Python 2.7 in a benign location, keeping the python command pointing to Python 2.4 just in case the "legacy" software I'm running depends on it, and then run python2.7 myscript.py et cetera when I want to run one of my newer scripts. Feel free to comment if there is a cleaner or more practical (or even safer!) way to do this.
I don't think it would make much sense to go replacing all the with statements with compatible try blocks. I've looked through the READMEs and online documentation but I can't seem to find a way to install Python without already having Python. Note that I DO NOT have an internet connection, although if desirable or necessary I could get one. It would be great if somebody could point me in the right direction. Thanks!!
It's all right in the README...
You don't need to use Python to install it; in fact, you shouldn't. Just:
./configure
make
make install
If you want to install in a specific dir, just follow what the README says:
Installing
To install the Python binary, library modules, shared library modules
(see below), include files, configuration files, and the manual page,
just type
make install
This will install all platform-independent files in subdirectories of
the directory given with the --prefix option to configure or to the
`prefix' Make variable (default /usr/local). All binary and other
platform-specific files will be installed in subdirectories of the
directory given by --exec-prefix or the `exec_prefix' Make variable
(which defaults to the --prefix directory), if it is given.
If DESTDIR is set, it will be taken as the root directory of the
installation, and files will be installed into $(DESTDIR)$(prefix),
$(DESTDIR)$(exec_prefix), etc.
All subdirectories created will have Python's version number in their
name, e.g. the library modules are installed in
"/usr/local/lib/python<version>/" by default, where <version> is the
<major>.<minor> release number (e.g. "2.1"). The Python binary is
installed as "python<version>" and a hard link named "python" is
created. The only file not installed with a version number in its
name is the manual page, installed as "/usr/local/man/man1/python.1"
by default.
If you want to install multiple versions of Python see the section
below entitled "Installing multiple versions".
The only thing you may have to install manually is the Python mode for
Emacs found in Misc/python-mode.el. (But then again, more recent
versions of Emacs may already have it.) Follow the instructions that
came with Emacs for installation of site-specific files.
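For your use case (keeping python pointing at 2.4 and adding 2.7 alongside it), something along these lines should work; the prefix here is just an example, and make altinstall avoids overwriting the python command:

./configure --prefix=$HOME/python2.7
make
make altinstall

Afterwards you can run your newer scripts with $HOME/python2.7/bin/python2.7 myscript.py.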
EDIT: virtualenv is apparently for already-installed Python versions. Disregard this recommendation.
I think what you want is virtualenv.
I haven't used it myself, but I understand this is what it's meant for.
From the website:
virtualenv is a tool to create isolated Python environments.
The basic problem being addressed is one of dependencies and versions, and indirectly permissions. Imagine you have an application that needs version 1 of LibFoo, but another application requires version 2. How can you use both these applications? If you install everything into /usr/lib/python2.7/site-packages (or whatever your platform's standard location is), it's easy to end up in a situation where you unintentionally upgrade an application that shouldn't be upgraded.
EDIT: Upon review, I think you want Alberto's answer, so I voted him up for visibility.