Apologies for the newbie question. I've tried all of the answers from other questions here, but none of them work either.
Dockerizing my Python/Django app with Postgres is proving... daunting. I consistently get the error "Error: pg_config executable not found." when pip starts working through my requirements.txt.
Here's the Dockerfile:
FROM python:3.8.3-slim-buster
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD ./ /code/
...and my requirements.txt...
asgiref==3.3.1
Django==3.1.4
psycopg2-binary==2.7.4
pytz==2020.5
sqlparse==0.4.1
django-compressor>=2.2
django-libsass>=0.7
and docker-compose.yml
version: "3.9"
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  pgdata:
When I run docker-compose up --build I'm getting this error over and over:
Step 6/7 : RUN pip install -r requirements.txt
---> Running in e0fd67d2d935
Collecting asgiref==3.3.1
Downloading asgiref-3.3.1-py3-none-any.whl (19 kB)
Collecting Django==3.1.4
Downloading Django-3.1.4-py3-none-any.whl (7.8 MB)
Collecting psycopg2-binary==2.7.4
Downloading psycopg2-binary-2.7.4.tar.gz (426 kB)
ERROR: Command errored out with exit status 1:
command: /usr/local/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-fha1c65p/psycopg2-binary/setup.py'"'"'; __file__='"'"'/tmp/pip-install-fha1c65p/psycopg2-binary/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-ewxlxmh6
cwd: /tmp/pip-install-fha1c65p/psycopg2-binary/
Complete output (23 lines):
running egg_info
creating /tmp/pip-pip-egg-info-ewxlxmh6/psycopg2_binary.egg-info
writing /tmp/pip-pip-egg-info-ewxlxmh6/psycopg2_binary.egg-info/PKG-INFO
writing dependency_links to /tmp/pip-pip-egg-info-ewxlxmh6/psycopg2_binary.egg-info/dependency_links.txt
writing top-level names to /tmp/pip-pip-egg-info-ewxlxmh6/psycopg2_binary.egg-info/top_level.txt
writing manifest file '/tmp/pip-pip-egg-info-ewxlxmh6/psycopg2_binary.egg-info/SOURCES.txt'
Error: pg_config executable not found.
Ultimately, the answer was a couple of changes to my Dockerfile, but the key one was downgrading from Python 3.8 to 3.7, which unlocked everything.
FROM python:3.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN apt-get update
RUN apt-get install -y postgresql
RUN apt-get install -y libpq-dev gcc
# RUN export would not persist across layers; ENV keeps the PATH change
ENV PATH=/usr/lib/postgresql/X.Y/bin/:$PATH
RUN apt-get install -y python3-dev
RUN apt-get install -y python3-psycopg2
RUN pip3 install -r requirements.txt
ADD ./ /code/
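For what it's worth, a lighter alternative that stays on Python 3.8 (a sketch based on my reading of the error, not tested against this exact project): psycopg2-binary 2.7.4 predates Python 3.8, so pip finds no prebuilt wheel and falls back to compiling the source tarball, which is what needs pg_config. Installing libpq-dev and gcc satisfies that build, and bumping the pin to a release with Python 3.8 wheels (e.g. psycopg2-binary>=2.8.6) should make even those packages unnecessary:
FROM python:3.8.3-slim-buster
ENV PYTHONUNBUFFERED 1
WORKDIR /code
COPY requirements.txt /code/
# libpq-dev provides pg_config and the client headers; gcc compiles the extension.
# With psycopg2-binary>=2.8.6 a prebuilt wheel exists and this layer can be dropped.
RUN apt-get update \
    && apt-get install -y --no-install-recommends libpq-dev gcc \
    && rm -rf /var/lib/apt/lists/*
RUN pip install -r requirements.txt
COPY . /code/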
Related
I would greatly appreciate help in tackling a problem that is driving me crazy (I do need Ubuntu 18.04 and Python 3). I tried different scenarios, but everything fails when installing the PyPI package isal on Ubuntu 18.04:
FROM ubuntu:18.04
RUN apt update -y && apt upgrade -y
RUN apt install -y python3 python3-pip
RUN pip3 install isal
docker build . fails with:
Step 3/3 : RUN pip3 install isal
---> Running in 71a47c31d97c
Collecting isal
Downloading https://files.pythonhosted.org/packages/d6/72/b997fd8ba95a0820edcd5da268505705a5518fd860d64bf28a7c1c343a3a/isal-0.11.0.tar.gz (680kB)
Building wheels for collected packages: isal
Running setup.py bdist_wheel for isal: started
Running setup.py bdist_wheel for isal: finished with status 'error'
....
running build_ext
/tmp/tmpk3o08f96/autogen.sh: 3: /tmp/tmpk3o08f96/autogen.sh: autoreconf: not found
error: [Errno 2] No such file or directory: '/tmp/tmpk3o08f96/configure': '/tmp/tmpk3o08f96/configure'
----------------------------------------
Failed building wheel for isal
Running setup.py clean for isal
Failed to build isal
Installing collected packages: isal
Running setup.py install for isal: started
Running setup.py install for isal: finished with status 'error'
Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-qa68yevk/isal/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-8qioc11y-record/install-record.txt --single-version-externally-managed --compile:
/usr/lib/python3.6/distutils/dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type'
warnings.warn(msg)
....
running build_ext
/tmp/tmpnfxsy9ug/autogen.sh: 3: /tmp/tmpnfxsy9ug/autogen.sh: autoreconf: not found
error: [Errno 2] No such file or directory: '/tmp/tmpnfxsy9ug/configure': '/tmp/tmpnfxsy9ug/configure'
----------------------------------------
Command "/usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-qa68yevk/isal/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-8qioc11y-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-qa68yevk/isal/
The command '/bin/sh -c pip3 install isal' returned a non-zero code: 1
while with FROM ubuntu:20.04 everything works fine.
pip (the Python 2 one) also fails:
FROM ubuntu:18.04
RUN apt update -y && apt upgrade -y \
&& apt install -y python python-pip
RUN pip install isal
Step 3/3 : RUN pip install isal
---> Running in 6e157d7d965a
Collecting isal
Could not find a version that satisfies the requirement isal (from versions: )
No matching distribution found for isal
The command '/bin/sh -c pip install isal' returned a non-zero code: 1
You can try apt install autoconf to resolve the autoreconf: not found error.
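A sketch of how that might look (untested; automake and libtool are defensive additions on my part, since autogen.sh scripts usually need the full autotools chain). The Python 2 failure is a separate issue: as far as I can tell, isal publishes no Python 2 releases, so that pip simply finds no candidate versions.
FROM ubuntu:18.04
RUN apt update -y && apt upgrade -y
# autoconf provides the autoreconf that isal's autogen.sh calls;
# automake and libtool round out the autotools chain (assumption)
RUN apt install -y python3 python3-pip autoconf automake libtool
RUN pip3 install isal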
I am trying to install apache-airflow inside a Docker image based on Alpine; the Dockerfile is the following:
FROM python:3.9.5-alpine3.13
WORKDIR /usr/app
RUN pip install pipenv
COPY Pipfile* ./
RUN apk add --no-cache libressl-dev musl-dev libffi-dev gcc build-base
RUN apk add gcc musl-dev libffi-dev openssl-dev python3-dev
RUN apk update && apk add libressl-dev postgresql-dev libffi-dev gcc musl-dev python3-dev
RUN pipenv install --system --deploy --ignore-pipfile
RUN airflow db init
RUN airflow users create --username admin --password admin --firstname Anonymous --lastname Admin --role Admin --email admin@example.org
RUN cp dags ~/airflow/dags/
RUN airflow webserver
RUN airflow scheduler
COPY ./src ./src
COPY ./src/.env.docker ./src/.env
CMD ["python3", "src/main.py"]
But when I execute that I got the following error:
An error occurred while installing cryptography==3.4.7; python_version >= '3.6' --hash=sha256:3d10de8116d25649631977cb37da6c
I tried to solve it by installing a lot of additional libraries, but I still cannot install it. Any ideas?
EDIT
It is necessary to install Rust in Alpine, but that Alpine release only ships an old version of it.
EDIT 2
After updating the Dockerfile like this:
FROM python:3.9.6-alpine3.14
WORKDIR /usr/app
RUN pip install pipenv
COPY Pipfile* ./
RUN apk add --no-cache gcc musl-dev python3-dev libffi-dev openssl-dev cargo
RUN pipenv install --system --deploy --ignore-pipfile
RUN airflow db init
RUN airflow users create --username admin --password admin --firstname Anonymous --lastname Admin --role Admin --email admin@example.org
RUN cp dags ~/airflow/dags/
RUN airflow webserver
RUN airflow scheduler
COPY ./src ./src
COPY ./src/.env.docker ./src/.env
CMD ["python3", "src/main.py"]
I got the following error:
[pipenv.exceptions.InstallError]: Collecting pandas==1.3.0
[pipenv.exceptions.InstallError]:   Using cached pandas-1.3.0.tar.gz (4.7 MB)
[pipenv.exceptions.InstallError]: ERROR: Command errored out with exit status 1:
[pipenv.exceptions.InstallError]:   command: /usr/local/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-lfncf2u8/pandas_6f9f84af90264ff59c98fca45e89ed74/setup.py'"'"'; __file__='"'"'/tmp/pip-install-lfncf2u8/pandas_6f9f84af90264ff59c98fca45e89ed74/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-s8ljky3j
[pipenv.exceptions.InstallError]:   cwd: /tmp/pip-install-lfncf2u8/pandas_6f9f84af90264ff59c98fca45e89ed74/
[pipenv.exceptions.InstallError]:   Complete output (7 lines):
[pipenv.exceptions.InstallError]:   Traceback (most recent call last):
[pipenv.exceptions.InstallError]:     File "<string>", line 1, in <module>
[pipenv.exceptions.InstallError]:     File "/tmp/pip-install-lfncf2u8/pandas_6f9f84af90264ff59c98fca45e89ed74/setup.py", line 650, in <module>
[pipenv.exceptions.InstallError]:       ext_modules=maybe_cythonize(extensions, compiler_directives=directives),
[pipenv.exceptions.InstallError]:     File "/tmp/pip-install-lfncf2u8/pandas_6f9f84af90264ff59c98fca45e89ed74/setup.py", line 414, in maybe_cythonize
[pipenv.exceptions.InstallError]:       raise RuntimeError("Cannot cythonize without Cython installed.")
[pipenv.exceptions.InstallError]:   RuntimeError: Cannot cythonize without Cython installed.
Thanks
Answer for main question
Did you try python3 -m pip install cryptography==3.4.7?
Answer for EDIT 2
You probably need to install Cython.
See this page about the Cython package in Alpine:
https://pkgs.alpinelinux.org/package/v3.3/main/x86/cython
You can also try to run python3 -m pip install Cython.
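Putting both hints together, a minimal sketch of the EDIT 2 Dockerfile (my assumptions: cargo covers the Rust requirement for cryptography, installing Cython up front lets the pandas 1.3.0 sdist cythonize, and g++ is added defensively for pandas' extensions):
FROM python:3.9.6-alpine3.14
WORKDIR /usr/app
RUN pip install pipenv
COPY Pipfile* ./
# cargo pulls in the Rust toolchain that cryptography needs to build;
# g++ is a defensive addition for pandas' C/C++ extensions (assumption)
RUN apk add --no-cache gcc g++ musl-dev python3-dev libffi-dev openssl-dev cargo
# make Cython visible to the pandas sdist build before pipenv runs
RUN pip install Cython
RUN pipenv install --system --deploy --ignore-pipfile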
I am trying to build a docker image for a python script that I would like to deploy.
This is the first time I am using docker so I'm probably doing something wrong but I have no clue what.
My System:
OS: Ubuntu 20.04
docker version: 19.03.8
I am using this Dockerfile:
# Dockerfile
FROM nvidia/cuda:11.0-base
COPY . /SingleModelTest
WORKDIR /SingleModelTest
RUN nvidia-smi
# these steps just make sure pip and git are installed so the requirements can be installed
RUN set -xe \
    && apt-get update \
    && apt-get install python3-pip -y \
    && apt-get install git -y
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements/requirements1.txt
RUN pip3 install -r requirements/requirements2.txt #this is where it fails
ENTRYPOINT ["python"]
CMD ["TabNetAPI.py"]
The output from nvidia-smi is as expected:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce GTX 1050 Off | 00000000:01:00.0 On | N/A |
| 0% 54C P0 N/A / 90W | 1983MiB / 1995MiB | 18% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
So cuda does work, but when I try to install the required packages from the requirements files this happens:
command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/SingleModelTest/src/mmdet/setup.py'"'"'; __file__='"'"'/SingleModelTest/src/mmdet/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps
cwd: /SingleModelTest/src/mmdet/
Complete output (24 lines):
running develop
running egg_info
creating mmdet.egg-info
writing mmdet.egg-info/PKG-INFO
writing dependency_links to mmdet.egg-info/dependency_links.txt
writing requirements to mmdet.egg-info/requires.txt
writing top-level names to mmdet.egg-info/top_level.txt
writing manifest file 'mmdet.egg-info/SOURCES.txt'
reading manifest file 'mmdet.egg-info/SOURCES.txt'
writing manifest file 'mmdet.egg-info/SOURCES.txt'
running build_ext
building 'mmdet.ops.utils.compiling_info' extension
creating build
creating build/temp.linux-x86_64-3.8
creating build/temp.linux-x86_64-3.8/mmdet
creating build/temp.linux-x86_64-3.8/mmdet/ops
creating build/temp.linux-x86_64-3.8/mmdet/ops/utils
creating build/temp.linux-x86_64-3.8/mmdet/ops/utils/src
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/usr/local/lib/python3.8/dist-packages/torch/include -I/usr/local/lib/python3.8/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.8/dist-packages/torch/include/TH -I/usr/local/lib/python3.8/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.8 -c mmdet/ops/utils/src/compiling_info.cpp -o build/temp.linux-x86_64-3.8/mmdet/ops/utils/src/compiling_info.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=compiling_info -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
mmdet/ops/utils/src/compiling_info.cpp:3:10: fatal error: cuda_runtime_api.h: No such file or directory
3 | #include <cuda_runtime_api.h>
| ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/SingleModelTest/src/mmdet/setup.py'"'"'; __file__='"'"'/SingleModelTest/src/mmdet/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps Check the logs for full command output.
The package that fails is mmdetection.
I am using 2 separate requirements files to make sure some packages are installed before others, to prevent a dependency failure.
requirements1.txt:
torch==1.4.0+cu100
-f https://download.pytorch.org/whl/torch_stable.html
torchvision==0.5.0+cu100
-f https://download.pytorch.org/whl/torch_stable.html
numpy==1.19.2
requirements2.txt:
addict==2.3.0
albumentations==0.5.0
appdirs==1.4.4
asynctest==0.13.0
attrs==20.2.0
certifi==2020.6.20
chardet==3.0.4
cityscapesScripts==2.1.7
click==7.1.2
codecov==2.1.10
coloredlogs==14.0
coverage==5.3
cycler==0.10.0
Cython==0.29.21
decorator==4.4.2
flake8==3.8.4
Flask==1.1.2
humanfriendly==8.2
idna==2.10
imagecorruptions==1.1.0
imageio==2.9.0
imgaug==0.4.0
iniconfig==1.1.1
isort==5.6.4
itsdangerous==1.1.0
Jinja2==2.11.2
kiwisolver==1.2.0
kwarray==0.5.9
MarkupSafe==1.1.1
matplotlib==3.3.2
mccabe==0.6.1
mmcv==0.4.3
-e git+https://github.com/open-mmlab/mmdetection.git@0f33c08d8d46eba8165715a0995841a975badfd4#egg=mmdet
networkx==2.5
opencv-python==4.4.0.44
opencv-python-headless==4.4.0.44
ordered-set==4.0.2
packaging==20.4
pandas==1.1.3
Pillow==6.2.2
pluggy==0.13.1
py==1.9.0
pycocotools==2.0.2
pycodestyle==2.6.0
pyflakes==2.2.0
pyparsing==2.4.7
pyquaternion==0.9.9
pytesseract==0.3.6
pytest==6.1.1
pytest-cov==2.10.1
pytest-runner==5.2
python-dateutil==2.8.1
pytz==2020.1
PyWavelets==1.1.1
PyYAML==5.3.1
requests==2.24.0
scikit-image==0.17.2
scipy==1.5.3
Shapely==1.7.1
six==1.15.0
terminaltables==3.1.0
tifffile==2020.9.3
toml==0.10.1
tqdm==4.50.2
typing==3.7.4.3
ubelt==0.9.2
urllib3==1.25.11
Werkzeug==1.0.1
xdoctest==0.15.0
yapf==0.30.0
The command I use to (try to) build the image:
nvidia-docker build -t firstdockertestsinglemodel:latest .
Things I have tried:
setting the CUDA environment variables like CUDA_HOME, LIBRARY_PATH, and LD_LIBRARY_PATH, but I am not sure I did it correctly, since I can't check the paths I set; I can't see them in the Ubuntu Files app
I'll be very grateful for any help that anyone could offer.
If I need to supply more information I'll be happy to.
Thanks to @Robert Crovella I solved my problem.
It turned out I just needed to use nvidia/cuda:10.0-devel as the base image instead of nvidia/cuda:10.0-base,
so my Dockerfile is now:
# Dockerfile
FROM nvidia/cuda:10.0-devel
RUN nvidia-smi
RUN set -xe \
&& apt-get update \
&& apt-get install python3-pip -y \
&& apt-get install git -y
RUN pip3 install --upgrade pip
WORKDIR /SingleModelTest
COPY requirements /SingleModelTest/requirements
# RUN export would not persist into later layers, so use ENV instead
ENV LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64
RUN pip3 install -r requirements/requirements1.txt
RUN pip3 install -r requirements/requirements2.txt
COPY . /SingleModelTest
ENTRYPOINT ["python"]
CMD ["TabNetAPI.py"]
EDIT: this answer just tells you how to verify what's happening in your docker image. Unfortunately I'm unable to figure out why it is happening.
How to check it?
At each step of the docker build, you can see the various layers being generated. You can use that ID to create a temporary image to check what's happening. e.g.
docker build -t my_bonk_example .
[...]
Removing intermediate container xxxxxxxxxxxxx
---> 57778e7c9788
Step 19/31 : RUN mkdir -p /tmp/spark-events
---> Running in afd21d853bcb
Removing intermediate container xxxxxxxxxxxxx
---> 33b26e1a2286 <-- let's use this ID
[ failure happens ]
docker run -it --rm --name bonk_container_before_failure 33b26e1a2286 bash
# now you're in the container
echo $LD_LIBRARY_PATH
ls /usr/local/cuda
side notes about your Dockerfile:
you can improve the build time of future builds if you change the order of the instructions in your Dockerfile. Docker uses a cache that gets invalidated the moment it finds something different from the previous build. I'd expect you to change your code more often than the requirements of your Docker image, so it makes sense to move the COPY after the apt instructions, e.g.
# Dockerfile
FROM nvidia/cuda:10.2-base
RUN set -xe \
&& apt-get update \
&& apt-get install python3-pip -y \
&& apt-get install git -y
RUN pip3 install --upgrade pip
WORKDIR /SingleModelTest
COPY requirements /SingleModelTest/requirements
RUN pip3 install -r requirements/requirements1.txt
RUN pip3 install -r requirements/requirements2.txt
COPY . /SingleModelTest
RUN nvidia-smi
ENTRYPOINT ["python"]
CMD ["TabNetAPI.py"]
NOTE: this is just an example.
Concerning why the image doesn't build: I found that PyTorch 1.4 does not support CUDA 11.0 (https://discuss.pytorch.org/t/pytorch-with-cuda-11-compatibility/89254), but using a previous version of CUDA also does not fix the issue.
Currently building the image:
FROM python:3.7-slim-stretch
WORKDIR /root/forstack-host
COPY requirements.txt /root/requirements.txt
RUN apt-get update && apt-get install -y libgomp1 gcc
RUN apt-get install -y libpq-dev
RUN apt-get install -y net-tools
RUN apt-get install -y libhdf5-serial-dev hdf5-tools
RUN python3 -m pip install --no-cache-dir -U pip && \
python3 -m pip install --no-cache-dir -r /root/requirements.txt
When installing tables==3.4.4 I get the error:
ERROR: Command errored out with exit status 1:
command: /usr/local/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-91revz2s/tables/setup.py'"'"'; __file__='"'"'/tmp/pip-install-91revz2s/tables/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-91revz2s/tables/pip-egg-info
cwd: /tmp/pip-install-91revz2s/tables/
Complete output (12 lines):
/tmp/H5close9pyniq85.c: In function ‘main’:
/tmp/H5close9pyniq85.c:2:5: warning: implicit declaration of function ‘H5close’ [-Wimplicit-function-declaration]
H5close();
^~~~~~~
/usr/bin/ld: cannot find -lhdf5
collect2: error: ld returned 1 exit status
* Using Python 3.7.5 (default, Oct 19 2019, 00:03:48)
* USE_PKGCONFIG: True
.. ERROR:: Could not find a local HDF5 installation.
You may need to explicitly state where your local HDF5 headers and
library can be found by setting the ``HDF5_DIR`` environment
variable or by using the ``--hdf5`` command-line option.
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
Using RUN apt-get install -y libhdf5-serial-dev hdf5-tools doesn't fix this error, and I can't seem to set the HDF5_DIR environment variable. Is there a working 3.7 image with a fix for this HDF5 issue?
You may try adding this ARG to your Dockerfile before installing:
ARG HDF5_DIR=PATH_TO_YOUR_HDF5 # normally under /usr/local or /opt/local
You may also need to install the HDF5 build dependencies.
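For a concrete starting point, here is a sketch against the Dockerfile above (heavily hedged: the HDF5 path is my assumption from where Debian's libhdf5-serial-dev puts things, so verify it inside the image with dpkg -L libhdf5-serial-dev; pkg-config is added because the build log shows USE_PKGCONFIG: True):
FROM python:3.7-slim-stretch
WORKDIR /root/forstack-host
COPY requirements.txt /root/requirements.txt
RUN apt-get update && apt-get install -y libgomp1 gcc libpq-dev net-tools \
    libhdf5-serial-dev hdf5-tools pkg-config
# point the tables build at Debian's serial HDF5 install (path assumed)
ENV HDF5_DIR=/usr/lib/x86_64-linux-gnu/hdf5/serial
RUN python3 -m pip install --no-cache-dir -U pip && \
    python3 -m pip install --no-cache-dir -r /root/requirements.txt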
I am trying to create a Docker container for my Python application (this is the first time I am trying to use Docker). I looked at the online tutorials and created a Dockerfile as follows:
FROM python:3.6-alpine
COPY . /app
WORKDIR /app
RUN apk --update add --no-cache \
lapack-dev \
gcc \
freetype-dev
# Install dependencies
RUN apk add --no-cache --virtual .build-deps \
gfortran \
musl-dev \
g++
RUN pip3 install -r requirements.txt
RUN python3 setup.py install
RUN apk del .build-deps
ENTRYPOINT python3 testapp.py
My project requirements are:
numpy==1.13.3
Cython==0.28.2
nibabel==2.2.1
scipy==1.0.0
I build the image with: docker build -t myimg .
So the Docker build progresses, but scipy fails to build with the following error:
Collecting numpy==1.13.3 (from -r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/bf/2d/005e45738ab07a26e621c9c12dc97381f372e06678adf7dc3356a69b5960/numpy-1.13.3.zip (5.0MB)
Collecting Cython==0.28.2 (from -r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/79/9d/dea8c5181cdb77d32e20a44dd5346b0e4bac23c4858f2f66ad64bbcf4de8/Cython-0.28.2.tar.gz (1.9MB)
Collecting nibabel==2.2.1 (from -r requirements.txt (line 3))
Downloading https://files.pythonhosted.org/packages/d7/de/1d96fd0b118c9047bf35f02090db8ef8fd3927dfce635f09a6f7d5b572e6/nibabel-2.2.1.zip (4.2MB)
Collecting scipy==1.0.0 (from -r requirements.txt (line 4))
Downloading https://files.pythonhosted.org/packages/d0/73/76fc6ea21818eed0de8dd38e1e9586725578864169a2b31acdeffb9131c8/scipy-1.0.0.tar.gz (15.2MB)
Collecting six>=1.3 (from nibabel==2.2.1->-r requirements.txt (line 3))
Downloading https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl
Building wheels for collected packages: numpy, Cython, nibabel, scipy
Running setup.py bdist_wheel for numpy: started
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/b6/10/65/189b772e73b4505109d5a1e6671b07e65797023718777295e0
Running setup.py bdist_wheel for Cython: started
Running setup.py bdist_wheel for Cython: still running...
Running setup.py bdist_wheel for Cython: still running...
Running setup.py bdist_wheel for Cython: still running...
Running setup.py bdist_wheel for Cython: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/6f/24/5d/def09ad0aed8ba26186f2a38070906f70ab4b2287bf64d4414
Running setup.py bdist_wheel for nibabel: started
Running setup.py bdist_wheel for nibabel: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/46/50/8d/bcb0b8f7c030da5bac1752fbe9cc375cbf5725fa93ba79ad84
Running setup.py bdist_wheel for scipy: started
Running setup.py bdist_wheel for scipy: finished with status 'error'
Complete output from command /usr/local/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-boosbyfg/scipy/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/pip-wheel-cczhwdqj --python-tag cp36:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-boosbyfg/scipy/setup.py", line 418, in <module>
setup_package()
File "/tmp/pip-install-boosbyfg/scipy/setup.py", line 398, in setup_package
from numpy.distutils.core import setup
ModuleNotFoundError: No module named 'numpy'
Not sure why it is having trouble finding numpy, as it was installed as part of the requirements?
Because building the scipy wheel requires numpy to already be installed; by the time pip attempts to build the scipy wheel, the numpy wheel has only been built, not installed.
You will have to install the dependencies first. There are multiple ways to do this:
1) Use a shell script like the one below; copy it into the image and RUN it instead of RUN pip install -r requirements.txt:
#!/bin/sh
# install each requirement on its own, so every package is fully
# installed before the next one starts building
while read module; do
    pip install "$module"
done < requirements.txt
2) Install scipy in a separate RUN command, after numpy has been installed, as sketched after this list.
3) apk add py-numpy@community, as discussed in this answer.
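Option 2 might look like this in the Dockerfile (a sketch; the pins are taken from the requirements above):
# build-time dependencies first, so the scipy sdist can import numpy
RUN pip3 install numpy==1.13.3 Cython==0.28.2
RUN pip3 install -r requirements.txt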