Why does yocto scipy recipe require python3 explicitly set? How? - python

I have a recipe to build scipy which parses fine, and bitbake starts building, but the Python 3 version requirement is not met. It exits with:
| DEBUG: Executing shell function do_configure
| Traceback (most recent call last):
| File "setup.py", line 31, in <module>
| raise RuntimeError("Python version >= 3.5 required.")
| RuntimeError: Python version >= 3.5 required.
| WARNING: exit code 1 from a shell command.
| ERROR: Function failed: do_configure (log file is located at /home/marius/mender-qemu/build/tmp/work/core2-64-poky-linux/python3-scipy/1.4.1-r0/temp/log.do_configure.30478)
I have successfully built other python3 packages, which can be imported in the running image. You can also see from the path that python3 is used, and the image runs Python 3.5. I'm using thud.
For the sake of completeness, here is the recipe. I also tried explicitly adding dependencies (numpy), but that had no effect.
SUMMARY = "Scipy summary to be filled"
DESCRIPTION = "Scientific computing"
PYPI_PACKAGE = "scipy"
LICENSE = "BSD"
LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=011ccf01b7e0590d9435a864fc6a4d2b"
SRC_URI[md5sum] = "3a97689656f33f67614000459ec08585"
SRC_URI[sha256sum] = "dee1bbf3a6c8f73b6b218cb28eed8dd13347ea2f87d572ce19b289d6fd3fbc59"
require python-scipy.inc
This is the python-scipy.inc:
inherit setuptools3 distutils
Also, I tried to add inherit python3native without effect.
My question is: how can I explicitly set python3 to build this recipe?

The simple and obvious solution was to inherit distutils3 instead of inherit distutils: the distutils class builds against the host's Python 2, while distutils3 uses Python 3.
NOTE: A python-scipy recipe that worked out of the box for me can be found here: https://github.com/gpanders/oe-scipy
Nice job gpanders!

My guess is that you run a Python script, setup.py, as part of your build that requires python3 on your host (the system that builds Yocto).
You can install it like this:
sudo apt-get install python3

Related

Sphinx not using the right Python version inside virtual environments

I've created a virtual environment using both virtualenv and pipenv, and in both cases it seems that Sphinx is not able to figure out the correct Python version. I have installed Python 2.7 and Python 3.8 in my global environment.
The error shows up when I try to use sphinx-apidoc + make html. I'm on a Windows 10 machine. Because I'm using type annotations, I get this error:
(venv) C:\Users\eug\Documents\learning\learning-pdoc\docs>make html
Running Sphinx v1.8.5
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 3 source files that are out of date
updating environment: 3 added, 0 changed, 0 removed
reading sources... [100%] sample_package
WARNING: autodoc: failed to import module u'core' from module u'sample_package'; the following exception was raised:
Traceback (most recent call last):
File "c:\python27\lib\site-packages\sphinx\ext\autodoc\importer.py", line 154, in import_module
__import__(modname)
File "C:\Users\eug\Documents\learning\learning-pdoc\sample_package\core.py", line 4
def sample_function2(a : Number, b : Number)->Number:
^
SyntaxError: invalid syntax
As you can see, I'm currently in the virtual environment (venv). Calling python by itself correctly invokes the right version. In order to execute what I want I need to call:
python -m sphinx.cmd.build -M html . .
Which is not ideal. Is there a way to fix this?
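One way to confirm the mismatch is to check which interpreter each entry point actually runs under; a minimal diagnostic sketch:

```python
import sys

# Run this with the same `python` the venv provides: sys.executable should
# point into the venv (e.g. ...\venv\Scripts\python.exe). If sphinx-build
# reports c:\python27 instead, its launcher script was installed against
# the global interpreter, which explains the Python 2 SyntaxError.
print(sys.executable)
print("%d.%d" % sys.version_info[:2])
```

Running sphinx via `python -m sphinx.cmd.build` works precisely because it bypasses the shebang/launcher baked into the sphinx-build script and uses the venv's interpreter directly.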

In NixOS, how can I install an environment with the Python packages SpaCy, pandas, and jenks-natural-breaks?

I'm very new to NixOS, so please forgive my ignorance. I'm just trying to set up a Python environment---any kind of environment---for developing with SpaCy, the SpaCy data, pandas, and jenks-natural-breaks. Here's what I've tried so far:
pypi2nix -V "3.6" -E gcc -E libffi -e spacy -e pandas -e numpy --default-overrides, followed by nix-build -r requirements.nix -A packages. I've managed to get the first command to work, but the second fails with Could not find a version that satisfies the requirement python-dateutil>=2.5.0 (from pandas==0.23.4)
Writing a default.nix that looks like this: with import <nixpkgs> {};
python36.withPackages (ps: with ps; [ spacy pandas scikitlearn ]). This fails with collision between /nix/store/9szpqlby9kvgif3mfm7fsw4y119an2kb-python3.6-msgpack-0.5.6/lib/python3.6/site-packages/msgpack/_packer.cpython-36m-x86_64-linux-gnu.so and /nix/store/d08bgskfbrp6dh70h3agv16s212zdn6w-python3.6-msgpack-python-0.5.6/lib/python3.6/site-packages/msgpack/_packer.cpython-36m-x86_64-linux-gnu.so
Making a new virtualenv, and then running pip install on all these packages. Scikit-learn fails to install, with fish: Unknown command 'ar rc build/temp.linux-x86_64-3.6/liblibsvm-skl.a build/temp.linux-x86_64-3.6/sklearn/svm/src/libsvm/libsvm_template.o'
I guess ideally I'd like to install this environment with nix, so that I could enter it with nix-shell, and so other environments could reuse the same python packages. How would I go about doing that? Especially since some of these packages exist in nixpkgs, and others are only on Pypi.
Caveat
I had trouble with jenks-natural-breaks to the tune of
nix-shell ❯ poetry run python -c 'import jenks_natural_breaks'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/matt/2022/12/28-2/.venv/lib/python3.10/site-packages/jenks_natural_breaks/__init__.py", line 5, in <module>
from ._jenks_matrices import ffi as _ffi
ModuleNotFoundError: No module named 'jenks_natural_breaks._jenks_matrices'
So I'm going to use jenkspy which appears to be a bit livelier. If that doesn't scratch your itch, I'd contact the maintainer of jenks-natural-breaks for guidance
Flakes
you said:
so other environments could reuse the same python packages
Which makes me think that a flake.nix is what you need. What's cool about flakes is that you can define an environment that has spacy, pandas, and jenkspy with one flake. And then you (or somebody else) might say:
I want an env like Jonathan's, except I also want sympy
and rather than copying your env and making tweaks, they can declare your env as a build input and write a flake.nix with their modifications--which can be further modified by others.
One could imagine a sort of family-tree of environments, so you just need to pick the one that suits your task. The python community has not yet converged on this vision.
Poetry
Poetry will treat you like you're trying to publish a library when all you asked for is an environment, but a library's dependencies are pretty much an environment, so there's nothing wrong with having an empty package and just using poetry as an environment factory.
Bonus: if you decide to publish a library after all, you're ready.
The Setup
nix flakes think in terms of git repos, so we'll start with one:
$ git init
Then create a file called flake.nix. Usually I end up with poetry handling 90% of the python stuff, but both pandas and spacy are in that 10% that has dependencies which link to system libraries. So we ask nix to install them so that when poetry tries to install them in the nix develop shell, it has what it needs.
{
  description = "Jonathan's awesome env";
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs";
    flake-utils.url = "github:numtide/flake-utils";
  };
  outputs = { self, nixpkgs, flake-utils }: (flake-utils.lib.eachSystem [
    "x86_64-linux"
    "x86_64-darwin"
    "aarch64-linux"
    "aarch64-darwin"
  ] (system:
    let
      pkgs = nixpkgs.legacyPackages.${system};
    in
    rec {
      packages.jonathansenv = pkgs.poetry2nix.mkPoetryApplication {
        projectDir = ./.;
      };
      defaultPackage = packages.jonathansenv;
      devShell = pkgs.mkShell {
        buildInputs = [
          pkgs.poetry
          pkgs.python310Packages.pandas
          pkgs.python310Packages.spacy
        ];
      };
    }));
}
Now we let git know about the flake and enter the environment:
❯ git add flake.nix
❯ nix develop
$
Then we initialize the poetry project. I've found that poetry, installed by nix, is kind of odd about which python it uses by default, so we'll set it explicitly:
$ poetry init # follow prompts
$ poetry env use $(which python)
$ poetry run python --version
Python 3.10.9 # declared in the flake.nix
At this point, we should have a pyproject.toml:
[tool.poetry]
name = "jonathansenv"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]
readme = "README.md"

[tool.poetry.dependencies]
python = "^3.10"
jenkspy = "^0.3.2"
spacy = "^3.4.4"
pandas = "^1.5.2"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
Usage
Now we create the venv that poetry will use, and run a command that depends on these.
$ poetry install
$ poetry run python -c 'import jenkspy, spacy, pandas'
You can also have poetry put you in a shell:
$ poetry shell
(venv)$ python -c 'import jenkspy, spacy, pandas'
It's kind of awkward to do so, though, because we're two subshells deep and any shell customizations that we have in the grandparent shell are not available. So I recommend using direnv to enter the dev shell automatically whenever you navigate to that directory, and then just using poetry run ... to run commands in the environment.
Publishing the env
In addition to running nix develop with the flake.nix in your current dir, you can also run nix develop /local/path/to/repo or nix develop github:/githubuser/githubproject to achieve the same result.
To demonstrate the github example, I have pushed the files referenced above here. So you ought to be able to run this from any linux shell with nix installed:
❯ nix develop github:/MatrixManAtYrService/nix-flake-pandas-spacy
$ poetry install
$ poetry run python -c 'import jenkspy, spacy, pandas'
I say "ought" because if I run that command on a mac it complains about linux-headers-5.19.16 being unsupported on x86_64-darwin.
Presumably there's a way to write the flake (or fix a package) so that it doesn't insist on building linux stuff on a mac, but until I figure it out I'm afraid that this is only a partial answer.

Error Compiling Tensorflow From Source - No module named 'keras_applications'

I am attempting to build tensorflow from source with MKL optimizations on an Intel CPU setup. I have followed the official instructions here up until the command bazel build --config=mkl --config=opt //tensorflow/tools/pip_package:build_pip_package.
Unfortunately, the compilation runs for some period of time and then fails. I'd appreciate any help with this matter.
Updated Output log (using bazel --verbose_failures):
ERROR: /home/jok/build/tensorflow/tensorflow/BUILD:584:1: Executing genrule //tensorflow:tensorflow_python_api_gen failed (Exit 1): bash failed: error executing command
(cd /home/jok/.cache/bazel/_bazel_jok120/737f8d6dbadde71050b1e0783c31ea62/execroot/org_tensorflow && \
exec env - \
LD_LIBRARY_PATH=LD_LIBRARY_PATH:/usr/local/cuda-9.0/lib64/:/usr/local/cuda-9.0/extras/CUPTI/lib64 \
PATH=/home/jok/.conda/envs/tf_mkl/bin:/home/jok/bin:/opt/anaconda3/bin:/usr/local/bin:/bin:/usr/bin:/snap/bin:/home/jok/bin \
/bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; bazel-out/host/bin/tensorflow/create_tensorflow.python_api --root_init_template=tensorflow/api_template.__init__.py --apidir=bazel-out/host/genfiles/tensorflow --apiname=tensorflow --apiversion=1 --package=tensorflow.python --output_package=tensorflow bazel-out/host/genfiles/tensorflow/__init__.py bazel-out/host/genfiles/tensorflow/app/__init__.py bazel-out/host/genfiles/tensorflow/bitwise/__init__.py bazel-out/host/genfiles/tensorflow/compat/__init__.py bazel-out/host/genfiles/tensorflow/data/__init__.py bazel-out/host/genfiles/tensorflow/debugging/__init__.py bazel-out/host/genfiles/tensorflow/distributions/__init__.py bazel-out/host/genfiles/tensorflow/dtypes/__init__.py bazel-out/host/genfiles/tensorflow/errors/__init__.py bazel-out/host/genfiles/tensorflow/feature_column/__init__.py bazel-out/host/genfiles/tensorflow/gfile/__init__.py bazel-out/host/genfiles/tensorflow/graph_util/__init__.py bazel-out/host/genfiles/tensorflow/image/__init__.py bazel-out/host/genfiles/tensorflow/io/__init__.py bazel-out/host/genfiles/tensorflow/initializers/__init__.py bazel-out/host/genfiles/tensorflow/keras/__init__.py bazel-out/host/genfiles/tensorflow/keras/activations/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/densenet/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/inception_resnet_v2/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/inception_v3/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/mobilenet/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/mobilenet_v2/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/nasnet/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/resnet50/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/vgg16/__init__.py 
bazel-out/host/genfiles/tensorflow/keras/applications/vgg19/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/xception/__init__.py bazel-out/host/genfiles/tensorflow/keras/backend/__init__.py bazel-out/host/genfiles/tensorflow/keras/callbacks/__init__.py bazel-out/host/genfiles/tensorflow/keras/constraints/__init__.py bazel-out/host/genfiles/tensorflow/keras/datasets/__init__.py bazel-out/host/genfiles/tensorflow/keras/datasets/boston_housing/__init__.py bazel-out/host/genfiles/tensorflow/keras/datasets/cifar10/__init__.py bazel-out/host/genfiles/tensorflow/keras/datasets/cifar100/__init__.py bazel-out/host/genfiles/tensorflow/keras/datasets/fashion_mnist/__init__.py bazel-out/host/genfiles/tensorflow/keras/datasets/imdb/__init__.py bazel-out/host/genfiles/tensorflow/keras/datasets/mnist/__init__.py bazel-out/host/genfiles/tensorflow/keras/datasets/reuters/__init__.py bazel-out/host/genfiles/tensorflow/keras/estimator/__init__.py bazel-out/host/genfiles/tensorflow/keras/initializers/__init__.py bazel-out/host/genfiles/tensorflow/keras/layers/__init__.py bazel-out/host/genfiles/tensorflow/keras/losses/__init__.py bazel-out/host/genfiles/tensorflow/keras/metrics/__init__.py bazel-out/host/genfiles/tensorflow/keras/models/__init__.py bazel-out/host/genfiles/tensorflow/keras/optimizers/__init__.py bazel-out/host/genfiles/tensorflow/keras/preprocessing/__init__.py bazel-out/host/genfiles/tensorflow/keras/preprocessing/image/__init__.py bazel-out/host/genfiles/tensorflow/keras/preprocessing/sequence/__init__.py bazel-out/host/genfiles/tensorflow/keras/preprocessing/text/__init__.py bazel-out/host/genfiles/tensorflow/keras/regularizers/__init__.py bazel-out/host/genfiles/tensorflow/keras/utils/__init__.py bazel-out/host/genfiles/tensorflow/keras/wrappers/__init__.py bazel-out/host/genfiles/tensorflow/keras/wrappers/scikit_learn/__init__.py bazel-out/host/genfiles/tensorflow/layers/__init__.py bazel-out/host/genfiles/tensorflow/linalg/__init__.py 
bazel-out/host/genfiles/tensorflow/logging/__init__.py bazel-out/host/genfiles/tensorflow/losses/__init__.py bazel-out/host/genfiles/tensorflow/manip/__init__.py bazel-out/host/genfiles/tensorflow/math/__init__.py bazel-out/host/genfiles/tensorflow/metrics/__init__.py bazel-out/host/genfiles/tensorflow/nn/__init__.py bazel-out/host/genfiles/tensorflow/nn/rnn_cell/__init__.py bazel-out/host/genfiles/tensorflow/profiler/__init__.py bazel-out/host/genfiles/tensorflow/python_io/__init__.py bazel-out/host/genfiles/tensorflow/quantization/__init__.py bazel-out/host/genfiles/tensorflow/resource_loader/__init__.py bazel-out/host/genfiles/tensorflow/strings/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/builder/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/constants/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/loader/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/main_op/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/signature_constants/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/signature_def_utils/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/tag_constants/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/utils/__init__.py bazel-out/host/genfiles/tensorflow/sets/__init__.py bazel-out/host/genfiles/tensorflow/sparse/__init__.py bazel-out/host/genfiles/tensorflow/spectral/__init__.py bazel-out/host/genfiles/tensorflow/summary/__init__.py bazel-out/host/genfiles/tensorflow/sysconfig/__init__.py bazel-out/host/genfiles/tensorflow/test/__init__.py bazel-out/host/genfiles/tensorflow/train/__init__.py bazel-out/host/genfiles/tensorflow/train/queue_runner/__init__.py bazel-out/host/genfiles/tensorflow/user_ops/__init__.py')
Traceback (most recent call last):
File "/home/jok/.cache/bazel/_bazel_jok120/737f8d6dbadde71050b1e0783c31ea62/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api.runfiles/org_tensorflow/tensorflow/python/tools/api/generator/create_python_api.py", line 27, in <module>
from tensorflow.python.tools.api.generator import doc_srcs
File "/home/jok/.cache/bazel/_bazel_jok120/737f8d6dbadde71050b1e0783c31ea62/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api.runfiles/org_tensorflow/tensorflow/python/__init__.py", line 81, in <module>
from tensorflow.python import keras
File "/home/jok/.cache/bazel/_bazel_jok120/737f8d6dbadde71050b1e0783c31ea62/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api.runfiles/org_tensorflow/tensorflow/python/keras/__init__.py", line 25, in <module>
from tensorflow.python.keras import applications
File "/home/jok/.cache/bazel/_bazel_jok120/737f8d6dbadde71050b1e0783c31ea62/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api.runfiles/org_tensorflow/tensorflow/python/keras/applications/__init__.py", line 21, in <module>
import keras_applications
ModuleNotFoundError: No module named 'keras_applications'
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 695.098s, Critical Path: 152.03s
INFO: 7029 processes: 7029 local.
FAILED: Build did NOT complete successfully
This appears to be a problem with the TensorFlow 1.10 build. I recommend you check out the r1.9 branch, as it builds totally fine. Either the dependency list needs to be updated or TensorFlow will fix this. If you are determined to build the r1.10 API, then run the following in a terminal:
pip install keras_applications==1.0.4 --no-deps
pip install keras_preprocessing==1.0.2 --no-deps
pip install h5py==2.8.0
If you're just interested in the release version (git tag will show you all available releases), run git checkout v1.10.1 before the ./configure step. Then you can follow the official instructions without installing additional dependencies.
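Before re-running bazel, it may help to confirm that the interpreter doing the build can actually import these packages; a small hedged check (module names taken from the error and the pip commands above):

```python
import importlib.util

# List which of the build-time Python deps this interpreter cannot import;
# 'keras_applications' showing up here is exactly the failure above.
missing = [m for m in ("keras_applications", "keras_preprocessing", "h5py")
           if importlib.util.find_spec(m) is None]
print("missing:", missing)
```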
Currently, a master branch build will give me the following error in Keras code that worked previously (this is after calling model.fit_generator() from the standalone version of Keras):
`steps_per_epoch=None` is only valid for a generator based on the `keras.utils.Sequence` class. Please specify `steps_per_epoch` or use the `keras.utils.Sequence` class.
Builds based on the 1.10.1 release version of TensorFlow don't cause this error.

CMake seems not to use the Python interpreter it confirms it uses

I have the following CMakeLists.txt file, which is instructed to use Python 3.4
cmake_minimum_required(VERSION 3.2 FATAL_ERROR)
set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_SOURCE_DIR}/../cmake/")
project(aConfigd VERSION 1.0)
string(TOLOWER aConfigd project_id)
find_package(PythonInterp 3.4 REQUIRED)
include(FindPythonInterp)
set(PYTHON ${PYTHON_EXECUTABLE})
message(STATUS "\${PYTHON_EXECUTABLE} == ${PYTHON_EXECUTABLE}")
set(pkgdatadir /usr/share/configd)
set(configdir /etc/amy)
set(SONARCONFIGID_SOURCE_DIR etc/configd)
set(SRC_DIR configd/src/)
include(common)
install(FILES
# "${SRC_DIR}/systemd_client.py"
# "${SRC_DIR}/amyconfig_service.py"
"${SRC_DIR}/__init__.py"
"${SRC_DIR}/main.py"
"${SRC_DIR}/application.py"
DESTINATION ${pkgdatadir}/configd/
)
#general
set(CPACK_PACKAGE_NAME "a-config")
set(CPACK_PACKAGE_DESCRIPTION_SUMMARY "a-config-manager")
set(CPACK_PACKAGE_DESCRIPTION "a-config-manager")
# redhat
set(CPACK_RPM_EXCLUDE_FROM_AUTO_FILELIST_ADDITION
/etc/amy
)
include(cpack)
Indeed, it confirms that ${PYTHON_EXECUTABLE} == /usr/bin/python3.4 (see 4th line below):
$ make clean ; cmake -DCMAKE_BUILD_TYPE=Release -DSHORT_VERSION=NO -DCUSTOMER=NO .. ; make -j12 ; make package
-- Found PythonInterp: /usr/bin/python3.4 (found suitable version "3.4.5", minimum required is "3.4")
-- Found PythonInterp: /usr/bin/python3.4 (found version "3.4.5")
-- ${PYTHON_EXECUTABLE} == /usr/bin/python3.4
-- Build Type: Release
-- Detected distribution: rhel fedora
-- Detected aConfigd version: 2.3.0-3030-gf7733cf659
-- Detected distribution: rhel fedora
-- Configuring done
-- Generating done
-- Build files have been written to: /local/raid0/git/amy/aConfig/build
Run CPack packaging tool...
CPack: Create package using RPM
CPack: Install projects
CPack: - Run preinstall target for: aConfigd
CPack: - Install project: aConfigd
CPack: Create package
CPackRPM:Warning: CPACK_SET_DESTDIR is set (=ON) while requesting a relocatable package (CPACK_RPM_PACKAGE_RELOCATABLE is set): this is not supported, the package won't be relocatable.
CPackRPM: Will use GENERATED spec file: /local/raid0/git/my/aConfig/build/_CPack_Packages/Linux/RPM/SPECS/a-config.spec
CPack: - package: /local/raid0/git/my/aConfig/build/a-config-2.3.0-3030-gf7733cf659.el7.my.x86_64.rpm generated.
$
However, if I uncomment the "${SRC_DIR}/systemd_client.py" line, I get the error:
Compiling /local/raid0/git/my/aConfig/build/_CPack_Packages/Linux/RPM/a-config-2.3.0-3030-gf7733cf659.el7.my.x86_64/usr/share/configd/configd/systemd_client.py ...
File "/usr/share/configd/configd/systemd_client.py", line 21
def __init__(self, systemd_proxy:Gio.DBusProxy):
^
SyntaxError: invalid syntax
Isn't def __init__(self, systemd_proxy:Gio.DBusProxy): valid Python 3.4 syntax?
If yes, why does CMake complain?
The root cause occurs in the rpmbuild step.
RPM is trying to be extra-helpful, and tries to (byte-code) compile .py files it encounters.
Alas, it wrongly uses the python2 interpreter to create a file's byte-code (even though find_package(PythonInterp 3.4 REQUIRED) is declared in the CMakeLists.txt file).
The fix that worked for me was:
set(CPACK_RPM_BUILDREQUIRES python34-devel)
set(CPACK_RPM_SPEC_MORE_DEFINE "%define __python ${PYTHON_EXECUTABLE}")
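The failure is easy to reproduce outside rpm: function annotations are Python 3-only syntax, so a Python 2 byte-compiler chokes on them. A minimal sketch (object stands in for Gio.DBusProxy, which would require PyGObject):

```python
# Function annotations (PEP 3107) are Python 3 syntax; a Python 2
# byte-compiler rejects them, which is the rpmbuild failure above.
src = "def __init__(self, systemd_proxy: object): pass"
code = compile(src, "systemd_client.py", "exec")  # succeeds under Python 3
```

The same compile() call under a Python 2 interpreter raises SyntaxError at the annotation's colon, exactly as in the build log.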
When you just run "${SRC_DIR}/systemd_client.py", you're telling it to run that script the same way it would be run by the shell: by looking at the #! line and running it with whatever interpreter is specified there. Which is probably something like #! /usr/bin/python or #! /usr/bin/env python.
If you want to run your script with a particular interpreter, you have to run that interpreter and pass it the script, just as you would at the shell. I'm pretty rusty with CMake, but I'd assume you do that like this:
"${PYTHON_EXECUTABLE}" "${SRC_DIR}/amyconfig_service.py"
Alternatively, since this is your code, maybe you want to use setuptools to programmatically generate scripts for your entry-points, which means it would create a #! line for them that runs whichever Python version was used to run setup.py.
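To sketch that setuptools suggestion: console-script entry points are generated with a shebang pointing at whichever interpreter ran setup.py, so the installed scripts always match the build-time Python. A minimal hypothetical setup.py (the package name, the module path configd.main:main, and the script name are all made-up stand-ins for this project):

```python
# hypothetical setup.py sketch; adjust names to the real project layout
from setuptools import setup

setup(
    name="aconfigd",
    version="1.0",
    packages=["configd"],
    entry_points={
        "console_scripts": [
            # installs an amyconfig-service script whose #! line points at
            # the Python interpreter that executed this setup.py
            "amyconfig-service=configd.main:main",
        ]
    },
)
```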

Installing dependencies of debian/control file

I am in the process of porting a Ruby file used in our build system to Python. The file looks for Depends lines in a debian/control file in our repository, checks every dependency, and apt-get installs everything that isn't installed. I am trying to reproduce this functionality.
As part of porting this to Python, I looked at the deb_pkg_tools module. I pip installed it and created a simple script, install-dep2.py.
#!/usr/bin/python
import deb_pkg_tools
controlDict = deb_pkg_tools.control.load_control_file('debian/control')
However, when I run this script, I get the following error:
$ build/bin/install-dep2.py
Traceback (most recent call last):
File "build/bin/install-dep2.py", line 4, in <module>
controlDict = deb_pkg_tools.control.load_control_file('debian/control')
AttributeError: 'module' object has no attribute 'control'
The debian/control file exists:
$ ls -l debian/control
-rw-rw-r-- 1 stephen stephen 2532 Jul 13 14:28 debian/control
How can I process this debian/control file? I don't need to use deb_pkg_tools if there is a better way.
The problem is not that Python thinks debian/control does not exist, but that deb_pkg_tools.control was never imported: importing a package does not automatically import its submodules, so you would need import deb_pkg_tools.control.
I would use the python-debian package from Debian to parse the control file if I were you. Here is code that parses the control file to get the dependencies. It should work even for sources that build multiple binary packages.
from debian import deb822

with open('debian/control') as f:
    for paragraph in deb822.Deb822.iter_paragraphs(f):
        for key, value in paragraph.items():
            if key == 'Depends':
                print(value)
Each pair yielded by paragraph.items() maps a field name (the key, e.g. Depends) to its raw value string.
Obviously the above sample just prints out the dependencies as they are in the control file, so the dependencies aren't in a format that is suitable to directly plug into apt-get install. Also, by parsing the control file, I got stuff like ${python:Depends} in addition to actual package names, so that is something you will have to consider. Here is an example of the output I got from the above example:
joseph@crunchbang:~$ python test.py
bittornado,
${python:Depends},
python-psutil,
python-qt4,
python-qt4reactor,
python-twisted,
xdg-utils,
${misc:Depends},
${shlibs:Depends}
I found this bug report and the python-debian source code to be quite useful resources when answering your question.
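Independent of deb_pkg_tools, here is a minimal sketch of the extra step mentioned above: reducing a raw Depends value to bare package names by dropping ${...} substitution variables and version constraints, so the result can be handed to apt-get install. It deliberately simplifies alternatives by keeping only the first package before any |:

```python
import re

def parse_depends(depends_value):
    """Reduce a raw Depends field to bare package names."""
    names = []
    for dep in depends_value.split(","):
        dep = dep.strip()
        if not dep or dep.startswith("${"):
            continue  # skip ${misc:Depends}-style substitution variables
        # keep only the leading package name; drop versions and alternatives
        names.append(re.split(r"[\s(|]", dep, maxsplit=1)[0])
    return names

print(parse_depends("bittornado, ${python:Depends}, python-psutil (>= 1.0), xdg-utils"))
# → ['bittornado', 'python-psutil', 'xdg-utils']
```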
You might want to have a look at mk-build-deps (from the devscripts package), a standard tool that already does what you want to achieve:
$ mk-build-deps -i -s sudo
