scipy ImportError on travis-ci

I'm setting up Travis-CI for the first time. I install scipy in what I believe is the standard way:
language: python
python:
- "2.7"
# command to install dependencies
before_install:
- sudo apt-get -qq update
- sudo apt-get -qq install python-numpy python-scipy python-opencv
- sudo apt-get -qq install libhdf5-serial-dev hdf5-tools
install:
- "pip install numexpr"
- "pip install cython"
- "pip install -r requirements.txt --use-mirrors"
# command to run tests
script: nosetests
Everything builds. But when the nosetests begin, I get
ImportError: No module named scipy.ndimage
Update: Here is a more direct demonstration of the problem.
$ sudo apt-get install python-numpy python-scipy python-opencv
$ python -c 'import scipy'
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named scipy
The command "python -c 'import scipy'" failed and exited with 1 during install.
I tried installing scipy using pip also. I tried installing gfortran first. Here is one example of a failed build. Any suggestions?
Another Update: Travis has since added official documentation on using conda with Travis. See ostrokach's answer.

I found two ways around this difficulty:
As @unutbu suggested, build your own virtual environment and install everything using pip inside that environment (a minimal sketch follows after this list). I got the build to pass, but installing scipy from source this way is very slow.
Following the approach used by the pandas project in this .travis.yml file and the shell scripts that it calls, force Travis to use the system-wide site-packages, and install numpy and scipy using apt-get. This is much faster. The key lines are
virtualenv:
  system_site_packages: true
in .travis.yml before the before_install group, followed by these shell commands
SITE_PKG_DIR=$VIRTUAL_ENV/lib/python$TRAVIS_PYTHON_VERSION/site-packages
rm -f $VIRTUAL_ENV/lib/python$TRAVIS_PYTHON_VERSION/no-global-site-packages.txt
and then finally
apt-get install python-numpy
apt-get install python-scipy
which will be found when nosetests tries to import them.
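For the first option, a minimal untested sketch of the install section (package list illustrative; numpy goes first because scipy builds against it):
install:
  - pip install numpy
  - pip install scipy
  - pip install -r requirements.txt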
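For the second option, the pieces above combine into something like this untested sketch (the rm line mirrors the pandas install scripts):
language: python
python:
  - "2.7"
virtualenv:
  system_site_packages: true
before_install:
  - sudo apt-get -qq update
  - sudo apt-get -qq install python-numpy python-scipy
  - rm -f $VIRTUAL_ENV/lib/python$TRAVIS_PYTHON_VERSION/no-global-site-packages.txt
script: nosetests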
Update
I now prefer a conda-based build, which is faster than either of the strategies above. Here is one example on a project I maintain.

This is covered in the official conda documentation: Using conda with Travis CI.
The .travis.yml file
The following shows how to modify the .travis.yml file to use Miniconda for a project that supports Python 2.6, 2.7, 3.3, and 3.4.
NOTE: Please see the Travis CI website for information about the basic configuration for Travis.
language: python
python:
# We don't actually use the Travis Python, but this keeps it organized.
- "2.6"
- "2.7"
- "3.3"
- "3.4"
install:
- sudo apt-get update
# We do this conditionally because it saves us some downloading if the
# version is the same.
- if [[ "$TRAVIS_PYTHON_VERSION" == "2.7" ]]; then
wget https://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh -O miniconda.sh;
else
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh;
fi
- bash miniconda.sh -b -p $HOME/miniconda
- export PATH="$HOME/miniconda/bin:$PATH"
- hash -r
- conda config --set always_yes yes --set changeps1 no
- conda update -q conda
# Useful for debugging any issues with conda
- conda info -a
# Replace dep1 dep2 ... with your dependencies
- conda create -q -n test-environment python=$TRAVIS_PYTHON_VERSION dep1 dep2 ...
- source activate test-environment
- python setup.py install
script:
# Your test script goes here

I found this approach to work:
http://danielnouri.org/notes/2012/11/23/use-apt-get-to-install-python-dependencies-for-travis-ci/
Add these lines to your Travis configuration to use a virtualenv with --system-site-packages:
virtualenv:
  system_site_packages: true
You can thus install Python packages via apt-get in the before_install section, and use them in your virtualenv:
before_install:
- sudo apt-get install -qq python-numpy python-scipy
A real-world use of this approach can be found in nolearn.

As Dan Allan pointed out in his update, he now prefers a conda-based build. Here is a gist, courtesy of Dan Blanchard, giving a full .travis.yml example that will pre-install scipy on the test machine:
language: python
python:
- 2.7
- 3.3
notifications:
  email: false
# Setup anaconda
before_install:
- wget http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh -O miniconda.sh
- chmod +x miniconda.sh
- ./miniconda.sh -b
- export PATH=/home/travis/miniconda/bin:$PATH
- conda update --yes conda
# The next couple lines fix a crash with multiprocessing on Travis and are not specific to using Miniconda
- sudo rm -rf /dev/shm
- sudo ln -s /run/shm /dev/shm
# Install packages
install:
- conda install --yes python=$TRAVIS_PYTHON_VERSION atlas numpy scipy matplotlib nose dateutil pandas statsmodels
# Coverage packages are on my binstar channel
- conda install --yes -c dan_blanchard python-coveralls nose-cov
- python setup.py install
# Run test
script:
- nosetests --with-cov --cov YOUR_PACKAGE_NAME_HERE --cov-config .coveragerc --logging-level=INFO
# Calculate coverage
after_success:
- coveralls --config_file .coveragerc

Related

How do I install a specific python version without pyenv within nvidia docker so that it interacts well with poetry?

I am trying to build a docker image that contains cuda, cudnn and python, each with specific versions that are templatable as a base for downstream users.
(In this example I have replaced all the irrelevant templating with hard-coded versions; this is just FYI as motivation.)
Please note that the following questions are not duplicates:
How to install python in a docker image? does not involve poetry
Integrating Python Poetry with Docker Does not concern itself with installing dependencies
How do I integrate pyenv, poetry, and docker? This works for me already, I am looking for a different solution
I have achieved what I want using pyenv to install the specific python version within docker inside the nvidia image.
However, this solution is not optimal since the resulting image is about 1.5GB larger than what I think should be possible. Sidenote: I know that there are other ways to reduce the image size further that I have not done in this example. This is not the question here.
I have prepared a dummy pyproject.toml and poetry.lock to demonstrate the issue that I am currently facing:
pyproject.toml
[tool.poetry]
name = "example_project"
version = "1.0.0"
description = ""
authors = ["RunOrVeith"]
[tool.poetry.dependencies]
python = ">=3.8,<3.11"
scipy = "^1.9.3"
[build-system]
requires = ["poetry-core>=1.1.0"]
build-backend = "poetry.core.masonry.api"
Working Dockerfile.pyenv
FROM nvidia/cuda:11.0.3-cudnn8-runtime-ubuntu20.04 as base
ARG PYTHON_VERSION=3.8
ENV DEBIAN_FRONTEND=noninteractive
# Set-up necessary Env vars for PyEnv
ENV PYENV_ROOT /root/.pyenv
ENV PATH $PYENV_ROOT/shims:$PYENV_ROOT/bin:$PATH
ENV PATH="/root/.local/bin/:$PATH"
# Install essentials for pyenv https://github.com/pyenv/pyenv/wiki
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget ca-certificates \
curl llvm libncurses5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev mecab-ipadic-utf8 git \
&& rm -rf /var/lib/apt/lists/*
# Install pyenv
RUN set -ex \
&& curl https://pyenv.run | bash \
&& pyenv update \
&& pyenv install $PYTHON_VERSION \
&& pyenv global $PYTHON_VERSION \
&& pyenv rehash \
&& pip install --upgrade pip
# Install poetry
RUN curl -sSL https://install.python-poetry.org | python - \
&& poetry --version && poetry config virtualenvs.create false
# The template that I want to provide ends here; the rest is just for demoing the issue
FROM base as example
WORKDIR /app
COPY pyproject.toml .
COPY poetry.lock .
RUN poetry install --no-interaction --no-ansi
The version that doesn't work: Dockerfile.plain
FROM nvidia/cuda:11.0.3-cudnn8-runtime-ubuntu20.04 as base
ENV DEBIAN_FRONTEND=noninteractive
ENV PYTHON_VERSION=3.8
ENV PATH="/root/.local/bin/:$PATH"
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys A4B469963BF863CC \
&& apt update \
&& apt install -y git curl \
&& apt install -y --no-install-recommends make build-essential
# Don't be confused, distutils-3.9 also installs python 3.8 https://github.com/deadsnakes/issues/issues/150
RUN apt install -y --no-install-recommends python${PYTHON_VERSION} python${PYTHON_VERSION}-dev python${PYTHON_VERSION}-distutils python${PYTHON_VERSION}-venv \
&& update-alternatives --install /usr/bin/python python /usr/bin/python${PYTHON_VERSION} 10 \
&& update-alternatives --install /usr/bin/python3 python3 /usr/bin/python${PYTHON_VERSION} 10 \
&& apt-get install -y --no-install-recommends python3-pip python3-setuptools \
&& update-alternatives --install /usr/local/bin/pip pip /usr/bin/pip 10 \
&& update-alternatives --install /usr/local/bin/pip3 pip3 /usr/bin/pip 10 \
&& apt-get clean
WORKDIR /virtualenvs
RUN curl -sSL https://install.python-poetry.org | python${PYTHON_VERSION} - \
&& poetry --version && poetry config virtualenvs.create false
FROM base as example
WORKDIR /app
COPY pyproject.toml .
COPY poetry.lock .
RUN poetry install --no-interaction --no-ansi
You can build this using
DOCKER_BUILDKIT=1 docker build -t github:example-plain --target example -f Dockerfile.plain .
and then run using
docker run -it github:example-plain bash
Here is the issue:
All the following commands are run from within the docker image.
According to poetry, everything is installed:
root@5e1ffb1f971c:/app# poetry show
Skipping virtualenv creation, as specified in config file.
numpy 1.23.4 NumPy is the fundamental package for array computing with Python.
scipy 1.9.3 Fundamental algorithms for scientific computing in Python
root@5e1ffb1f971c:/app# poetry run pip --version
Skipping virtualenv creation, as specified in config file.
pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8)
However using regular pip, there is nothing, and imports also fail.
If I use poetry to import something, it also does not work.
root@5e1ffb1f971c:/app# pip --version
pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8)
root@5e1ffb1f971c:/app# pip freeze
root@5e1ffb1f971c:/app# python -c "import scipy"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'scipy'
root@5e1ffb1f971c:/app# poetry run python -c "import scipy"
Skipping virtualenv creation, as specified in config file.
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'scipy'
What is also interesting is that if I upgrade pip with poetry, it tells me it can't uninstall pip. I am assuming this is due to this Ubuntu patch that tries to prevent me from breaking the system (even though I am just installing pip).
Afterwards, the poetry pip executable also points somewhere else.
root@5e1ffb1f971c:/app# poetry run pip install --upgrade pip
Skipping virtualenv creation, as specified in config file.
Collecting pip
Using cached pip-22.3.1-py3-none-any.whl (2.1 MB)
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 20.0.2
Not uninstalling pip at /usr/lib/python3/dist-packages, outside environment /usr
Can't uninstall 'pip'. No files were found to uninstall.
Successfully installed pip-22.3.1
root@5e1ffb1f971c:/app# poetry run pip --version
Skipping virtualenv creation, as specified in config file.
pip 22.3.1 from /usr/local/lib/python3.8/dist-packages/pip (python 3.8)
So how do I set this up so that I get a fresh python install of whichever version I configure, and it works with poetry? It is also required that the python and python3 aliases point to whatever poetry is using.
Reference with working version:
If I do the same commands with the working version using pyenv, it looks like this:
root@c0a9af7f05b4:/app# pip freeze
numpy==1.23.4
scipy==1.9.3
root@c0a9af7f05b4:/app# poetry show
Skipping virtualenv creation, as specified in config file.
numpy 1.23.4 NumPy is the fundamental package for array computing with Python.
scipy 1.9.3 Fundamental algorithms for scientific computing in Python
root@c0a9af7f05b4:/app# poetry run pip --version
Skipping virtualenv creation, as specified in config file.
pip 22.3.1 from /root/.pyenv/versions/3.8.15/lib/python3.8/site-packages/pip (python 3.8)
root@c0a9af7f05b4:/app# pip --version
pip 22.3.1 from /root/.pyenv/versions/3.8.15/lib/python3.8/site-packages/pip (python 3.8)
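For anyone debugging a similar split, the stdlib site module prints the directories an interpreter actually searches, which makes divergences like the ones above easy to spot (a generic diagnostic, not specific to poetry):
root@c0a9af7f05b4:/app# python -m site   # prints sys.path and the active site-packages directories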

Trino RPM installation throws python dependency error

I'm trying to install Trino using RPM on Red Hat Enterprise Linux distribution. I install the Trino dependencies using the following commands:
$ sudo yum update -y
$ sudo yum install -y java-11-openjdk.x86_64 python3
$ sudo alternatives --set python /usr/bin/python3
Then I try to install Trino from archive in single-node mode. This however gives a dependency error:
$ sudo rpm -i trino-server-rpm-368.rpm
error: Failed dependencies:
python >= 2.4 is needed by trino-server-rpm-0:368-1.noarch
This error doesn't make sense to me given that this dependency is actually satisfied when checking my python version:
$ python -V
Python 3.6.8
An answer has been provided by @hashhar on this GitHub issue: if you actually have the correct dependencies installed, skip the RPM dependency check:
$ sudo rpm -i --nodeps trino-server-rpm-368.rpm
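The check fails despite python3 being installed because rpm resolves dependencies against its own package database, not against binaries on PATH. A hedged way to confirm that no installed RPM claims to provide python:
$ rpm -q --whatprovides python   # reports that nothing provides "python" in the RPM database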

Accessing clipboard on Travis-CI

I am trying to run an (integration?) test on my application, to verify that it actually copies the expected string to the clipboard with pyperclip.
This part is working on my development machine (Windows 10); but fails on travis-ci, where I get the following in my travis job log.
self = <pyperclip.init_no_clipboard.<locals>.ClipboardUnavailable object at 0x7ff0cd743588>
args = ('7 809823 102890 string 291',), kwargs = {}
def __call__(self, *args, **kwargs):
> raise PyperclipException(EXCEPT_MSG)
E pyperclip.PyperclipException:
E Pyperclip could not find a copy/paste mechanism for your system.
E For more information, please visit https://pyperclip.readthedocs.io/en/latest/introduction.html#not-implemented-error
../../../virtualenv/python3.7.1/lib/python3.7/site-packages/pyperclip/__init__.py:301: PyperclipException
According to the pyperclip documentation, this occurs on Linux when there is no copy/paste mechanism. The solution is to install one of the following (quoting the pyperclip docs):
sudo apt-get install xsel to install the xsel utility.
sudo apt-get install xclip to install the xclip utility.
pip install gtk to install the gtk Python module.
pip install PyQt4 to install the PyQt4 Python module.
So in my .travis.yml file, I have
before_install:
- sudo apt-get install xclip
I've also tried xsel, with the same results.
Since the system on travis is Ubuntu 16.04.6, I tried adding sudo apt-get install python3-pyperclip to the before_install key, with the same result.
I was not able to install either gtk or PyQt4 by adding them to the install key in .travis.yml.
install:
- pip install -r requirements_dev.txt
- pip install PyQt4
# or
- pip install gtk
Both result in the following error:
Could not find a version that satisfies the requirement gtk (from versions: )
No matching distribution found for gtk
The command "pip install gtk" failed and exited with 1 during .
By this point, my before_install looks like this:
- sudo apt-get install xclip
- sudo apt-get install xsel
- sudo apt-get install python3-pyperclip
- sudo apt-get install gtk2.0
This seems like overkill (and still does not work); but I am currently out of ideas on how to make that test pass. Any pointers would be greatly appreciated.
Thanks
xclip requires an X server to be running, while Travis machines run in headless mode. You need to run the command in a virtual framebuffer; install xvfb in addition to xclip and use xvfb-run pytest instead of pytest. Full config example:
language: python
addons:
  apt:
    packages:
      - xclip
      - xvfb
python:
- "3.7"
# setup installation
install:
- pip install -r requirements_dev.txt
script: xvfb-run pytest
Here's an example build on Travis. Notice that I used addons to declare dependencies that should be installed with APT; your solution of installing them explicitly in a before_install section is perfectly valid too, just a matter of taste.

Cannot load CLoader with pyyaml

I'm working on a python project using pyyaml. I need to run it in a Docker container based on bitnami/minideb:jessie. Python version is 2.7.9.
The original code is using CLoader and I cannot change it currently.
Any reason CLoader fails to load but Loader is fine?
>>> import yaml
>>> yaml.__version__
'3.12'
>>> from yaml import Loader
>>> from yaml import CLoader
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name CLoader
>>>
I cannot figure out what I'm missing here. Any ideas?
Running it from the Docker image python:2.7.9 does not raise any error then:
$ docker run -ti python:2.7.9 bash
# python
>>> from yaml import CLoader
>>> from yaml import Loader
>>>
By default, the setup.py script checks whether LibYAML is installed
and if so, builds and installs LibYAML bindings.
This is the minimum to get CLoader compiled and installed.
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y \
python3 python3-dev python3-pip gcc libyaml-dev
RUN pip3 install pyyaml
# verify
RUN python3 -c "import yaml; yaml.CLoader"
I ran into the same problem. You need to install the libyaml-dev package, then install libyaml and pyyaml from source. Here's the complete Dockerfile for minideb:jessie:
FROM bitnami/minideb:jessie
RUN apt-get update
RUN apt-get install -y \
automake \
autoconf \
build-essential \
git-core \
libtool \
libyaml-dev \
make \
python \
python-dev \
python-pip
RUN pip install --upgrade pip
RUN pip install Cython==0.29.10
RUN mkdir /libyaml
WORKDIR /libyaml
RUN git clone https://github.com/yaml/libyaml.git . && \
git checkout dist-0.2.2 && \
autoreconf -f -i && \
./configure && \
make && \
make install
RUN mkdir /pyyaml
WORKDIR /pyyaml
RUN git clone https://github.com/yaml/pyyaml.git . && \
git checkout 5.1.1 && \
python setup.py install
RUN python -c "import yaml; from yaml import CLoader; print 'Loaded CLoader!'"
A couple of additions to others' solutions:
If you want the install command to hard-fail if the libyaml C extension won't build (instead of silently falling back to a pure-Python only install), you can pass the --with-libyaml global option, eg: python setup.py --with-libyaml install.
If you're doing this with something that might ever need to be upgraded (eg implicitly via another package's requirement for a higher pyyaml version), it's better to use pip instead of directly calling setup.py, as that (currently) uses a pure distutils installation, which pip will fail to uninstall later. You'll see an error like "ERROR: Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall."
Doing the required extension build with pip looks something like pip install --global-option='--with-libyaml' pyyaml.
I'm just copying the developer's answer from the issue linked above, but this happens because pyyaml only installs the libyaml bindings (CLoader & co.) if it finds the libyaml-dev package (that's the debian package, anyway) at install time. If it doesn't find it, it prints a warning and skips the libyaml bindings.
So, install libyaml-dev before installing pyyaml.
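A minimal sketch of that order on a Debian-based image (hedged: the reinstall flags force pip to rebuild pyyaml now that the headers are present, rather than reusing a cached pure-Python build):
apt-get update && apt-get install -y libyaml-dev gcc
pip install --no-cache-dir --force-reinstall pyyaml
# verify the C loader is available
python -c "import yaml; yaml.CLoader"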
I tried all the steps mentioned, and the following fixed my issue.
Install
apt-get install -y gcc libyaml-dev
pip install --ignore-installed --global-option='--with-libyaml' pyyaml
Test
python -c "import yaml; yaml.CLoader"

How do I use python-openbabel in Travis CI?

I use Travis CI as part of a Toxicology mapping project. For this project I require python-openbabel as a dependency. As such, I have added the apt-get installer to the .travis.yml file, shown below (comments removed).
language: python
python:
- "2.7"
before_install:
- sudo apt-get update -qq
- sudo apt-get install python-openbabel
install: "pip install -r requirements.txt"
script: nosetests tox.py
However, all these attempts failed with the error message Error: SWIG failed. Is Open Babel installed? I have tried adding SWIG to the list of applications to be installed, to no avail.
Additionally, I have attempted to add the entire build process as proposed by Openbabel itself, which yields the following .travis.yml:
language: python
python:
- "2.7"
before_install:
- sudo apt-get update -qq
- sudo apt-get install python-openbabel
- wget http://downloads.sourceforge.net/project/openbabel/openbabel/2.3.1/openbabel-2.3.1.tar.gz?r=http://%3A%2F%2Fsourceforge.net%2Fprojects%2Fopenbabel%2Fopenbabel%2F2.3.1%2Fts=1393727248&use_mirror=switch
- tar zxf openbabel-2.3.1.tar.gz
- mkdir build
- cd build
- cmake ../openbabel-2.3.1 -DPYTHON_BINDINGS=ON
- make
- make install
- export PYTHONPATH=/usr/local/lib:$PYTHONPATH
install: "pip install -r requirements.txt"
script: nosetests tox.py
This fails when trying to untar the downloaded file (likely because the unquoted & in the wget URL splits the shell command, so nothing named openbabel-2.3.1.tar.gz is ever saved).
All the failed builds can be seen on Travis-CI: https://travis-ci.org/ToxProject/ToxProject
The Github repo is here: https://github.com/ToxProject/ToxProject
In short, how do I get python-openbabel working with Travis-CI?
The version of openbabel installed via apt-get is 1.7, while the version specified in requirements.txt is openbabel>=1.8.
This makes the package installed by apt-get fail to satisfy requirements.txt, so pip tries to install openbabel regardless of the old version already present. Moreover, the virtualenv doesn't use the already-installed system packages.
When installing openbabel via pip, it also needs the libopenbabel header files, which are not included in libopenbabel4 (the package automatically installed alongside python-openbabel). The version of libopenbabel-dev in the Ubuntu 12.04 image used by Travis CI doesn't satisfy the needs of openbabel>=1.8.
Solution:
Install newer versions of libopenbabel-dev and libopenbabel4 manually:
before_install:
- sudo apt-get install -qq -y swig python-dev
- wget http://mirrors.kernel.org/ubuntu/pool/universe/o/openbabel/libopenbabel4_2.3.2+dfsg-1.1_amd64.deb
- sudo dpkg -i libopenbabel4_2.3.2+dfsg-1.1_amd64.deb
- wget http://mirrors.kernel.org/ubuntu/pool/universe/o/openbabel/libopenbabel-dev_2.3.2+dfsg-1.1_amd64.deb
- sudo dpkg -i libopenbabel-dev_2.3.2+dfsg-1.1_amd64.deb
I see that now the build fails at the pip install requirements stage. Travis creates a virtual environment for running python. By default, python packages installed on the system (i.e. via apt-get) will not be available, unless you add this to your .travis.yml:
virtualenv:
  system_site_packages: true
I had the same problem with python-qt4 and python-qgis, here is a travis.yml file I used recently: https://github.com/anitagraser/TimeManager/blob/master/.travis.yml
