I developed my first webserver app in Python.
It's a bit unusual, because it depends not only on Python modules (like tornado) but also on some proprietary C++ libs wrapped using SWIG.
And now it's time to deliver it (to a Linux platform).
Due to the dependency on the C++ libs, just sending the sources with a requirements.txt does not seem enough. The only workaround would be to require an identical Linux installation to ensure binary compatibility of the libs. But even then there would be problems with LD_LIBRARY_PATH etc.
Another option is to write a setup.py, create an sdist, and then deploy it with pip install.
Unfortunately that would mean I have to kill all instances of the server before installing my package. A workaround would be to use a separate virtualenv for each instance, though.
But maybe I'm missing something much simpler?
If you need the package to be installed by some user, the easiest way is to write the setup.py - but not just with the simple setup() call most installers use. If you look at some packages, they have quite complicated setup.py scripts that build many things, including C extensions, with installation hooks for many external dependencies.
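For illustration, a minimal setup.py for a SWIG-wrapped C++ extension might look like the sketch below. All module, source, and library names here are hypothetical; this is a build-configuration sketch, not a drop-in script, and it assumes SWIG and a C++ compiler are available at build time.

```python
from setuptools import setup, Extension

# SWIG convention: the compiled extension is named with a leading
# underscore, next to the generated pure-Python wrapper module.
mylib_ext = Extension(
    "_mylib",
    sources=["mylib.i", "mylib_impl.cpp"],
    swig_opts=["-c++"],
    libraries=["proprietary"],       # the wrapped C++ library (hypothetical)
    library_dirs=["/opt/vendor/lib"],
)

setup(
    name="myserver",
    version="1.0.0",
    py_modules=["mylib"],            # the SWIG-generated wrapper
    ext_modules=[mylib_ext],
    install_requires=["tornado"],
)
```

Running `pip install .` against a setup.py like this compiles the extension as part of the installation, so the target machine needs the build toolchain (or you distribute pre-built wheels).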
You can solve the LD_LIBRARY_PATH problem like this: if your application has an entry point - a script installed into Python's bin directory (or the system /usr/bin) - you can override LD_LIBRARY_PATH there, e.g. export LD_LIBRARY_PATH="/my/path:$LD_LIBRARY_PATH".
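A minimal sketch of such an entry-point wrapper in Python (the library directory /opt/myapp/lib and the MYAPP_RELAUNCHED guard variable are both hypothetical): since the dynamic linker only reads LD_LIBRARY_PATH at process start, the launcher patches the environment and would re-exec the interpreter once.

```python
import os
import sys

LIB_DIR = "/opt/myapp/lib"  # hypothetical location of the wrapped C++ libs

def patched_env(environ, lib_dir=LIB_DIR):
    """Return a copy of the environment with lib_dir prepended to LD_LIBRARY_PATH."""
    env = dict(environ)
    current = env.get("LD_LIBRARY_PATH", "")
    if lib_dir not in current.split(":"):
        env["LD_LIBRARY_PATH"] = lib_dir + (":" + current if current else "")
    return env

if __name__ == "__main__" and os.environ.get("MYAPP_RELAUNCHED") != "1":
    env = patched_env(os.environ)
    env["MYAPP_RELAUNCHED"] = "1"  # guard against an exec loop
    # In the real launcher, re-exec with the patched environment:
    # os.execve(sys.executable, [sys.executable] + sys.argv, env)
```

The re-exec is necessary because setting os.environ["LD_LIBRARY_PATH"] inside an already-running process has no effect on libraries loaded by that process.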
If your package is a system service, like a server or daemon, you can build a system package, for example a Debian package or an RPM. Debian has a lot of scripts and mechanisms for declaring dependencies between packages.
So, if you need some system libraries, you list them in the package source and Debian will install them when your package is installed. For example, if your package declares build dependencies on SWIG and the relevant -dev packages, your C extension will be built properly.
Related
We develop a system that includes a lot of different Python scripts and modules. Some are deployed to a self-hosted PyPI and some are simply packaged into a zip file (stored in a self-hosted Artifactory). Finally, there is another application (not developed by us) that uses some of our Python scripts as plugins. So the dependency graph is rather complex for a Python environment, I guess. The following snippet should explain the graph:
Script (own, zip package)
Module (own, pypi)
Module (external, pypi)
Module (own, pypi)
Module (external, pypi)
This is just an example; in reality there are many more dependencies. But in the end, it is a mix of zip-packaged and PyPI-packaged Python scripts and modules. The dependencies of the PyPI modules are managed via the setuptools install_requires parameter in setup.py, but the dependencies of the zip-packaged scripts are managed via a self-implemented configuration and install script.
In the end, we have our install script that creates one virtual environment and installs all dependencies into it, either via pip or by simply downloading the zip files and putting them in the right directory. But honestly, that feels a little bit weird and we are not sure if this is the right (Pythonic) way to go.
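As an illustration only (the package names, the environment path, and the index URL below are made up), the pip half of such an install script reduces to building two commands - one to create the virtualenv, one to install the PyPI dependencies into it:

```python
import sys

def install_commands(env_dir, pypi_packages, index_url=None):
    """Build (but do not run) the commands for a one-virtualenv install."""
    commands = [[sys.executable, "-m", "venv", env_dir]]
    pip_cmd = [env_dir + "/bin/pip", "install"]
    if index_url:  # e.g. the self-hosted PyPI
        pip_cmd += ["--index-url", index_url]
    commands.append(pip_cmd + list(pypi_packages))
    return commands
```

Each command would then be run with subprocess.check_call(), and the zip-packaged scripts unpacked into the environment afterwards - which is essentially the approach the question describes.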
We have already searched the internet for a couple of days but found no answer. It also seems to be very uncommon to have such a complex system implemented in Python. So the final question is: is our approach the right one, is there no real "right way", or is our approach completely wrong?
I have a python app with its setup.py that's working just fine to install it through setuptools. I am then packaging it up in DEB and PKGNG using the excellent Effing package management. I've also made some quick tests with setuptools-pkg and that seems to work too.
Now I have a need to distribute the packages including init scripts to start/stop/manage the service. I have my init scripts in the source repo and, according to what seems to be best practice, I'm not doing anything with them in setuptools and I'm handling them in the os-specific packaging: for debian-based systems I use the --deb-init, --deb-upstart and --deb-systemd FPM options as needed.
How can I build a FreeBSD package that includes the correct rc.d script, using FPM or through any other means?
All the examples I've seen are adding the rc.d script when building a package through the ports collection but this is an internal app and is not going to be published to the Ports or on PyPi. I want to be able to check out the repository on a FreeBSD system, launch a command that gives me a package, distribute it to other FreeBSD systems, install it using pkg and have my init script correctly deployed to /usr/local/etc/rc.d/<myappname>. There's no need to keep using FPM for that, anything works as long as it gives me a well-formed package.
I would highly suggest creating your package as if it were any other port, whether or not it is going to be published.
One of the advantages you inherit by doing this is that you can also include all your tests and automate the deployment, giving you the basis for a continuous integration/delivery setup out of the box.
Check out poudriere. You could indeed maintain a set of custom ports with your very own settings and distribute them across your environments without any hassle:
pkg install -r your-poudriere yourpkg
In case this is too much, or doesn't adapt well to your use case, you can always fall back to Ansible, where you could deploy a custom rc.d script from a template in an Ansible role.
If you just want to build and deploy something, let's say a microservice, then pkg is probably not the best tool; maybe you just need a supervisor that works on all your platforms (sysutils/immortal) so that you can simply distribute your code and have a single recipe for starting/stopping the service.
nbari's answer is probably the Right Way™ to do this and I'd probably create my own "port" and use that to build the package on a central host.
At the time of my original question I had taken a different approach that I'm reporting here for the sake of completeness.
I am still building the application package (i.e. myapp-1.0.0.txz) with fpm -s python -t freebsd, which basically uses Python's setuptools infrastructure to get the necessary information, and I don't include any rc.d file in it.
I also build a second package, which I will call myapp-init-1.0.0.txz, with the dir source type (i.e. fpm -s dir -t freebsd), and I only include the init script in that package.
Both packages get distributed to hosts and installed, thus solving my distribution issue.
I need to create an executable python app package containing all dependencies, including those that cannot be installed using pip (like gtk or other native modules). I must create Linux and Windows versions of this package. The problem is, I cannot find any way to create such package/executable. There are tools like py2exe or cx_Freeze, but they fail when including pygtk and other system dependent modules.
I just cannot believe there is no production ready solution, which allows python developers to do such obvious and trivial tasks.
I am writing a CLI python application that has dependencies on a few libraries (Paramiko etc.).
If I download their source and just place them under my main application source, I can import them and everything works just fine.
Why would I ever need to run their setup.py installers or deal with python package managers?
I understand that when deploying server-side applications it is OK for an admin to run easy_install/pip commands etc. to install the prerequisites, but for CLI apps that have to be distributed as self-contained apps that only depend on a Python binary, what is the recommended approach?
Several reasons:
Not all packages are pure-python packages. It's easy to include C-extensions in your package and have setup.py automate the compilation process.
Automated dependency management; dependencies are declared and installed for you by the installer tools (pip, easy_install, zc.buildout). Dependencies can be declared dynamically too (try to import json, if that fails, declare a dependency on simplejson, etc.).
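The dynamic declaration mentioned above can be sketched like this, in the style of older setup.py files (with modern setuptools you would express the same thing with PEP 508 environment markers instead):

```python
# Probe for the stdlib module and only depend on the PyPI backport
# when it is missing (json has been in the stdlib since Python 2.6).
install_requires = []
try:
    import json
except ImportError:
    install_requires.append("simplejson")

# install_requires would then be passed to setup(install_requires=...).
```

On any current Python the try succeeds and the list stays empty; on an ancient interpreter the installer would pull in simplejson automatically.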
Custom resource installation setups. The installation process is highly configurable and dynamic. The same goes for dependency detection; the cx_Oracle package has to jump through quite a few hoops to make installation straightforward across all the various platforms and quirks of the Oracle library distribution options it needs to support, for example.
Why would you still want to do this for CLI scripts? That depends on how crucial the CLI is to you; will you be maintaining this over the coming years? Then I'd still use a setup.py, because it documents what the dependencies are, including minimal version needs. You can add tests (python setup.py test), and deploy to new locations or upgrade dependencies with ease.
I'm new to python and I'm writing my first program. I would like after I finish to be able to run the program from the source code on a windows or mac machine. My program has dependencies on 3rd party modules.
I read about virtualenv but I don't think it helps me because it says it's not relocatable and it's not cross-platform (see Making Environments Relocatable http://pypi.python.org/pypi/virtualenv).
The best scenario is to install the 3rd party modules locally in my project, aka xcopy installation.
I will be really surprised if python doesn't support this easily especially since it promotes simplicity and frictionless programming.
You can do what you want, you just have to make sure that the directory containing your third-party modules is on the python path.
There's no requirement to install modules system-wide.
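A minimal sketch of that approach, assuming the third-party modules are copied into a vendor/ directory next to the application (the directory name is hypothetical):

```python
import os
import sys

# Locate a "vendor" directory (hypothetical name) shipped alongside the
# application and put it at the front of the import path.
BASE = os.path.dirname(os.path.abspath(__file__)) if "__file__" in globals() else os.getcwd()
VENDOR = os.path.join(BASE, "vendor")
if VENDOR not in sys.path:
    sys.path.insert(0, VENDOR)

# After this, e.g. `import paramiko` resolves to vendor/paramiko/ if present.
```

Because the path is computed relative to the application itself, the whole directory tree stays relocatable - the xcopy-style deployment the question asks about.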
Note that while packaging your whole app with py2exe may not be an option, you can use it to make a simple launcher environment. You make a script that imports your module/package/whatever and launches the main() entry point. Package this with py2exe but keep your application code outside it, as Python code or an egg. I do something similar, where I read a .pth text file to learn which paths to add to sys.path in order to import my application code.
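The .pth trick described above can be sketched as follows; the helper names and the exact file format (one directory per line, '#' comments allowed) are a reconstruction, not the author's exact code:

```python
import sys

def paths_from_pth(text):
    """Parse .pth-style text: one directory per line; blank lines and
    '#' comment lines are skipped."""
    paths = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            paths.append(line)
    return paths

def extend_sys_path(text):
    # Launcher-side helper: add each listed directory to sys.path so the
    # application code kept outside the py2exe bundle can be imported.
    for path in paths_from_pth(text):
        if path not in sys.path:
            sys.path.append(path)
```

The frozen launcher reads the .pth file at startup, calls extend_sys_path(), and only then imports and runs the real application.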
Simply, that's generally not how python works. Modules are installed site-wide and used that way. Are you familiar with pip and/or easy_install? Those + pypi let you automatically install dependencies no matter what you need.
If you want to create a standalone executable typically you'd use py2exe, py2app or something like that. Then you would have no dependencies on python at all.
I also found out about zc.buildout, which can be used to include dependencies in an automatic way.