Easy, straightforward way to package a Python program for Debian?

I'm having trouble navigating the maze of distribution tools for Python and Debian: CDBS, debhelper, python-support, python-central, blah blah blah.
My application is fairly straightforward: a single Python package (a directory of modules plus an __init__.py), a script for running the program (script.py), some icons (.png), and menu entries (.desktop files).
Given these files, how can I construct a simple, clean .deb from scratch without using the nonsensical tools listed above?
I'm mainly targeting Ubuntu, but I'd like the package to work on plain Debian as well.

python-stdeb should work for you. It's in Debian testing/unstable and in Ubuntu (Lucid onwards): apt-get install python-stdeb
It is less a shortcut than a tool that tries to generate as much of the source package for you as possible. It can actually build a package that both works properly and is almost standards-compliant. If you want your package to meet the quality standards for inclusion in Debian, Ubuntu, etc., you will still need to fill out files like debian/copyright by hand.
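As a hedged illustration (the package name, file names, and install paths below are assumptions, not anything stdeb requires), a minimal setup.py for the layout described in the question might look like this:

from distutils.core import setup

setup(
    name='myapp',                 # hypothetical application name
    version='0.1',
    packages=['myapp'],           # the directory of modules with __init__.py
    scripts=['script.py'],        # the launcher script
    data_files=[
        # Paths are relative to the install prefix, e.g. /usr/share/applications
        ('share/applications', ['myapp.desktop']),
        ('share/pixmaps', ['icons/myapp.png']),
    ],
)

With that in place, stdeb can produce a .deb in one step: python setup.py --command-packages=stdeb.command bdist_deb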
As much as people claim CDBS is really easy, I'd like to point out that the rules file Nick mentioned could just as easily have been done with debhelper 7. What's more, dh7 can be customized far more easily than CDBS can.
#!/usr/bin/make -f
%:
	dh $@
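Besides debian/rules, a dh source package needs at least debian/control, debian/changelog, and debian/compat. As a hedged sketch only (the names, versions, and description below are placeholders, not policy-vetted values), a minimal debian/control might look like:

Source: myapp
Section: python
Priority: optional
Maintainer: Your Name <you@example.com>
Build-Depends: debhelper (>= 7.0.50), python
Standards-Version: 3.9.1

Package: myapp
Architecture: all
Depends: ${misc:Depends}, ${python:Depends}
Description: short one-line summary of myapp
 Longer description, with continuation lines indented by one
 space, as the control file format requires.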
Note: you should check whether your package meets the Debian Policy, Debian Python Policy, etc. before you submit it to Debian. You will actually need to read the documents for that; there is no shortcut.

First, the answer is that there is no truly straightforward way to build a Debian package, and the documentation is parceled out in a million tiny morsels from as many places. However, the Ubuntu Python Packaging Guide is pretty useful.
For simple packages (ones easy to describe to setuptools), the steps are pretty simple once you have the debian/ control structure set up:
Run python setup.py sdist --prune, and set --dist-dir to something reasonable
Invoke dpkg-buildpackage with the proper options for your package (probably -b at least)
You will need a debian/rules file for dpkg-buildpackage to work from. Luckily, if you use CDBS the work is mostly done for you; you'll want something very similar to:
#!/usr/bin/make -f
DEB_PYTHON_SYSTEM := pysupport
include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/class/python-distutils.mk
If you're not using distutils, you might want to take a look at the DebianPython/Policy page on the Debian wiki (under "CDBS + the hard way"). There is also a pycentral option for DEB_PYTHON_SYSTEM, which you can search for if you want more information.

Related

Proper Chef way to use Poise-installed Python/Ruby

We are trying to use Poise to manage runtimes for Python and Ruby on our CentOS 7 servers. From my understanding this works with other recipes, but I can't figure out the "right" way to link the binaries to the standard bin locations (/usr/bin/, etc.). So far I have been unable to find a way to do this as part of the standard process; I can only dig around to figure out where they were installed and then add those links as a separate step later in the recipe, which seems like a major hack.
In other words, adding the following in a recipe that has some scripts that get copied to the server that require Python 3 looks like it installs Python 3:
python_runtime '3'
But the scripts (which cannot be changed) will never know that Python 3 exists.
Everything obviously works fine if I just install Python 3 using yum, which Poise actually appears to do as well on CentOS.
I am relatively new to Chef, but I have checked with our other devops team members and done a lot of searching, and we couldn't figure out how this is officially supposed to be done. We aren't looking for more hacks, as we can obviously do that ourselves; what is the "Chef" way to do this?
Thanks in advance.
Unfortunately, just linking the binaries wouldn't really help you much, since by default on CentOS Poise will use the SCL packages, which require some special environment variables to operate. If you want it to use the "normal" system packages, you can do this:
python_runtime '3' do
  provider :system
end
However, that will probably fail, because there is no EL7 distro package for Python 3. If you want to continue using SCL packages but have them look like normal binaries, maybe try something like this:
file '/usr/local/bin/python' do # or .../python3 if you prefer
  owner 'root'
  group 'root'
  mode '755'
  # Wrapper script that activates the SCL environment before running python3.
  content "#!/bin/sh\nexec scl enable rh-python35 -- python3 \"$@\"\n"
end
Or something like that. That still hardwires the fact that it is SCL under the hood and which SCL package is being used, which is not lovely, but the fully generic form (while doable) is a lot more complex.

Managing 3rd-party library installation in C++

I am wondering how to intelligently manage the building and installation of some of our 3rd-party C++ dependencies on Linux (Ubuntu). The way I currently have it set up is a git-lfs repository with all the necessary compressed 3rd-party sources. I then use a shell script I wrote to install all the necessary system dependencies and then unzip and build the desired library. This shell script also takes care of setting up all the paths so that our source code can easily link to the 3rd-party libraries.
Example commands for our script are ./install opencv or ./install everything
However, over the months the script has grown quite large, and it sometimes breaks when certain libraries are already installed, among other minor issues. I would therefore like to replace it with something a bit more intelligent and useful. I have been looking into writing some kind of Python script, but just changing the language from shell to Python is not much of an advantage on its own, so I am looking for specific Python libraries that can help me manage these dependencies.
I have looked into things like Chef and other build-automation tools, but they are overkill for the small project I am working on.
I was wondering what other people use for this kind of 3rd-party dependency management, as sadly C++ does not have anything like pip.
I use jhbuild for this kind of thing (if I understand what you are doing correctly). It came out of the GNOME project (they use it to build the whole desktop from source), but it's easy to customize for any set of projects. The jhbuild packaged in recent Ubuntus works fine.
You write a little XML to describe each project: where to download the sources, what patches to apply, what configure flags to use, what projects it depends on, and so on; then when you enter jhbuild build mything it works out what to build and in what order and gets on with it. It's reasonably smart about changes, so if you edit a source file in one of the projects that makes up your stack, it'll only rebuild the affected parts.
For example, I have this for fftw3, the excellent fast Fourier transform library:
<autotools id="fftw3"
autogen-sh="configure"
autogenargs="--disable-static --enable-shared --disable-threads"
>
<branch
repo="fftw"
module="fftw-3.3.4.tar.gz"
/>
<dependencies>
<dep package="libiconv"/>
</dependencies>
</autotools>
With probably obvious meanings. That builds from a release tarball, but jhbuild can build from git as well, and it's happy with CMake projects. jhbuild is written in Python, so it's simple to customize. Thanks to GNOME, many common libraries are already included.
I actually use it to build Windows binaries. You can set it up to build everything with a cross-compiler, then put it inside Docker. It makes it a one-liner for anyone to be able to build a large and complex application on (almost) any platform. You can use it to do automated nightly builds as well, of course.
There are probably better things around, but it has worked well for me.

Install a CMake macro script from within a python package install script (using setup.py)

So I have a Python package; it's all set up on PyPI and on GitHub, no problem. This is something I'm relatively familiar with.
What is unknown to me is the notion of installing a CMake script as part of the Python package install process. The Python package in question is a development tool: you use it to preprocess some of your C/C++/Obj-C/Obj-C++ source files and generate some predefined macros in a header. It works well when it's wrapped in a CMake macro (for example like so) and executed as part of a proper chain of dependencies.
For one, I am not sure how to approach this, as there seem to be significant differences between setuptools' sandboxed stance and distutils' willingness to integrate with the system-level installer. And even if I did know how to set things up correctly in setup.py, I can't find a good precedent for where a CMake script pertaining to a Python package should live.
All thoughts and insights on the matter are welcome.
It took me a while to understand your question. If I understand correctly, what you are trying to do is provide IodSymbolize.cmake in the standard installation location of CMake, so that other users/projects who rely on your software (symbolizer) can use it in their build process. I think you are thinking in a good direction, trying to provide services for end users of your package. Good question!
Here is my understanding of how things work in the CMake world.
Say I am an end user who wants to use the "symbolizer" executable. What I would do is call
find_package(symbolizer). This would try to figure out the location of the executable and would set certain variables which can be used in the build process.
For that to work, you need to provide a Findsymbolizer.cmake file.
Please take a look at: http://www.cmake.org/Wiki/CMake:How_To_Find_Libraries
Also look at the Find*.cmake files provided in /usr/share/cmake/Modules if you are on a Unix/Linux platform.
Once the Findsymbolizer.cmake file is working properly, send it to the CMake mailing list for review. Once accepted, it can be shipped with the next release of CMake, and your module will be usable with CMake out of the box. Hope I answered your question. Please update if you need more info.
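If the other half of the question is how to ship the .cmake file from setup.py itself, a hedged sketch using distutils data_files might look like the following (the destination path and file names are assumptions, not an established convention):

from distutils.core import setup

setup(
    name='symbolizer',
    version='1.0',
    packages=['symbolizer'],
    data_files=[
        # Installed relative to sys.prefix, e.g. /usr/local/share/cmake/Modules;
        # check your platform's conventions before relying on this path.
        ('share/cmake/Modules', ['cmake/FindSymbolizer.cmake']),
    ],
)

Whether CMake actually searches that directory depends on the CMAKE_MODULE_PATH the consuming project sets, so document the location for your users.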

Preparing a complex python project for submission to launchpad

I'm trying to wrap my head around the whole PPA thing, and it seems to be every bit as unnecessarily difficult as everybody makes it out to be. Let's take a project like http://docs.bokeh.org/, which has a Node.js dependency, and make a .deb out of it. Following this guide, and various posts here, I tried to use stdeb to do it:
pypi-download bokeh
tar xfz bokeh-0.7.0.tar.gz
cd bokeh-0.7.0/bokehjs/
npm install
grunt build
cd ..
python3 setup.py --command-packages=stdeb.command sdist_dsc
The end of the output is:
dh clean --with python3 --buildsystem=python_distutils
dh_testdir -O--buildsystem=python_distutils
debian/rules override_dh_auto_clean
make[1]: Entering directory `/home/emre/Desktop/bokeh-0.7.0/deb_dist/bokeh-0.7.0'
python3 setup.py clean -a
/home/emre/Desktop/bokeh-0.7.0/deb_dist/bokeh-0.7.0/bokehjs
ERROR: Cannot install BokehJS: files missing in `./bokehjs/build`.
Please build BokehJS by running setup.py with the `--build_js` option.
Dev Guide: http://docs.bokeh.org/docs/dev_guide.html.
I just did that! Am I missing something? Is this build step even necessary for something that's straight off PyPI? The guides gloss over these things.
Making good debs can be complicated, yes, especially when you are not the upstream author and aren't sure exactly what their intentions were for installations of their software. The complication is necessary because well-behaved debs must conform to a fairly long list of policies and requirements, so that users know what to expect from them in many different situations and cases.
Source packages must contain enough information that they can be built by automated systems (including installing any necessary build dependencies). Binary (built) debs must put their files in the right places on the system, must not break any other packages, and must be able to clean up after themselves fully on uninstall. Debs should be installable without a user watching an interactive terminal. Debs must declare all of their dependencies, and the necessary versions of those dependencies, except for a few packages considered "required". Debs should not download anything from the internet during build or install. And so on, and so on.
This strictness, and the degree to which the community adheres to it, is actually one of the most important benefits of running a Debian-based distribution.
Python source distributions such as those you find on PyPI, on the other hand, can pretty much do whatever they want. There are emerging best practices for build and install commands with setup.py, but they're not always followed, and even when they are, there is still a lot of room for interpretation and variance. Some, such as the one you reference here, might arbitrarily require the user to call setup.py with a nonstandard option before building normally. Some go ahead and download their own dependencies and put them wherever they want. Most packages beyond the trivial don't know how to uninstall themselves.
Both approaches are fine and are better in different contexts. But hopefully you can now see why it's not possible, in the general case, to turn arbitrary Python source distributions into working debs automatically: there is just too much that the computer would have to assume about how the Python code will behave.
Having said all that, if you don't care about conforming to Ubuntu/Debian policy and you just want to be able to put something in a personal repository, the easiest path for you might be to change the Python source so that it does its --build_js thing automatically as necessary, rather than complaining and asking the user to do it.
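As a hedged sketch of that last suggestion (the command layout and the grunt invocation are assumptions based on the build steps shown above, not Bokeh's actual code), you could override the build command in setup.py so the JS bundle is produced on demand:

import os
import subprocess
from distutils.command.build import build as _build
from distutils.core import setup

class build(_build):
    def run(self):
        # Build the JS bundle first if it is missing (assumes npm and grunt
        # are available, as in the manual steps from the question).
        if not os.path.isdir(os.path.join('bokehjs', 'build')):
            subprocess.check_call(['npm', 'install'], cwd='bokehjs')
            subprocess.check_call(['grunt', 'build'], cwd='bokehjs')
        _build.run(self)

setup(
    name='bokeh',
    version='0.7.0',
    # ... the rest of the original metadata unchanged ...
    cmdclass={'build': build},
)

Note that a deb built this way would still violate the no-downloads-during-build rule mentioned above (npm install fetches packages from the network), so this is only suitable for a personal repository.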

Installing a Python program on Linux

I wrote a Python program, and I would like to add an installation script to it that will set up everything necessary: a desktop icon, a menu entry, files in the home directory, and so on.
I'm working on Linux (Ubuntu). When a Python program is installed, what needs to happen in general? I know that it probably depends on the nature of the program.
Can you give me some general ideas, or point me in the right direction? I have no idea what to search for on Google.
Thanks
If it's a Python program you're trying to package, you should consider using its "standard" distribution framework, distutils. I can't replicate the entire document here, but I'd recommend that you read it. Once you're done with that, check out the Hitchhiker's Guide to Packaging, which contains details on distribute, the set of extensions to distutils that lets you package and distribute more effectively.
You could create an RPM easily using checkinstall. Search for checkinstall on Google and download it. It will allow you to create an RPM (or a deb) and set the packaging options.
For Ubuntu, if you want it to be easily distributable to other Ubuntu users, it'll have to be packaged properly, which is no simple task. You might want to consult their Packaging Guide for more information.
Otherwise, generally speaking there are a few standard packaging options for Python. Setuptools is popular, but becoming reviled lately. Read James Bennett's blog post "On Packaging" for a decent in-depth look into the ups and downs of the Python packaging world.
How a program is launched and placed in the menu is determined by a .desktop file (you can read the specification or just look at some examples in /usr/share/applications). Properly installing a program (placing all files in the right directories and so on) requires either building a package such as a deb or rpm, or using something like distutils or setuptools.
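.desktop files are plain text. A minimal hypothetical example (the Name, Exec, and Icon values are placeholders for whatever your program actually uses):

[Desktop Entry]
Type=Application
Name=MyApp
Comment=A short description shown in the menu
Exec=myapp
Icon=myapp
Categories=Utility;

Dropping a file like this into /usr/share/applications (system-wide) or ~/.local/share/applications (per-user) is what creates the menu entry.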
It may also help to just look at some (open source) examples of Python programs for Linux.
