I am trying to get mayavi working inside a Docker container. Originally I started my Dockerfile from continuumio/anaconda3 and ran "conda install mayavi"; it appeared to install, but as soon as I tried to import it (or vtk, for that matter) I would get:
"ModuleNotFoundError: No module named 'vtkRenderingOpenGL2Python'"
When I try installing it with pip3, the install fails with "ModuleNotFoundError: No module named 'vtkOpenGLKitPython'"
I have tried starting from centos:7 and get the same issues. It is probably worth mentioning that a conda search or pip search for these modules comes up blank. However, I can install it outside of Docker and everything works fine.
If it helps, my current Dockerfile looks like:
FROM centos:7
RUN yum install vim -y
RUN yum install python3 -y
RUN yum install python3-pip -y
RUN yum install python3-devel -y
RUN yum install gcc -y
#RUN pip3 install mayavi
#RUN pip3 install PyQt5
RUN mkdir /home/working
WORKDIR /home/working
I have been at this for some time now and any help would be appreciated.
You can take a look at my Binder repo fork, in which you can load inline Mayavi in Jupyter notebooks.
Pasting the Dockerfile here for posterity:
FROM jupyter/minimal-notebook:65761486d5d3
MAINTAINER Jean-Remi King <jeanremi.king@gmail.com>
# Install core debian packages
USER root
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get -yq dist-upgrade \
&& apt-get install -yq --no-install-recommends \
openssh-client \
vim \
curl \
gcc \
&& apt-get clean
# Xvfb
RUN apt-get install -yq --no-install-recommends \
xvfb \
x11-utils \
libx11-dev \
qt5-default \
&& apt-get clean
ENV DISPLAY=:99
# Switch to notebook user
USER $NB_UID
# Upgrade the package managers
RUN pip install --upgrade pip
RUN npm i npm@latest -g
# Install Python packages
RUN pip install vtk && \
pip install boto && \
pip install h5py && \
pip install nose && \
pip install ipyevents && \
pip install ipywidgets && \
pip install mayavi && \
pip install nibabel && \
pip install numpy && \
pip install pillow && \
pip install pyqt5 && \
pip install scikit-learn && \
pip install scipy && \
pip install xvfbwrapper && \
pip install https://github.com/nipy/PySurfer/archive/master.zip
# Install Jupyter notebook extensions
RUN pip install RISE && \
jupyter nbextension install rise --py --sys-prefix && \
jupyter nbextension enable rise --py --sys-prefix && \
jupyter nbextension install mayavi --py --sys-prefix && \
jupyter nbextension enable mayavi --py --sys-prefix && \
npm cache clean --force
# Try to decrease initial IPython kernel load times
RUN ipython -c "import matplotlib.pyplot as plt; print(plt)"
# Add an x-server to the entrypoint. This is needed by Mayavi
ENTRYPOINT ["tini", "-g", "--", "xvfb-run"]
I am building a Docker image in which I try to install a number of Python packages within one RUN. All packages in that command install correctly except PyInstaller, even though the build log suggests it should have been installed: Successfully installed PyInstaller
The minimal Dockerfile to reproduce the issue:
FROM debian:buster
RUN apt-get update && \
apt-get install -y \
python3 \
python3-pip \
unixodbc-dev
RUN python3 -m pip install --no-cache-dir pyodbc==4.0.30 && \
python3 -m pip install --no-cache-dir Cython==0.29.19 && \
python3 -m pip install --no-cache-dir PyInstaller==3.5 && \
python3 -m pip install --no-cache-dir selenium==3.141.0 && \
python3 -m pip install --no-cache-dir bs4==0.0.1
RUN python3 -m PyInstaller
The last RUN command fails with /usr/bin/python3: No module named PyInstaller; all other packages can be imported as expected.
The issue is also reproducible with this Dockerfile:
FROM debian:buster
RUN apt-get update && \
apt-get install -y \
python3 \
python3-pip
RUN python3 -m pip install --no-cache-dir PyInstaller==3.5
RUN python3 -m PyInstaller
What is the reason for this issue and what is the fix?
EDIT:
When I run a container from the layer before the last RUN, I can see that PyInstaller is not installed, but if I then run python3 -m pip install --no-cache-dir PyInstaller==3.5 it works without changing anything else.
Although I do not fully understand the reason behind it, it seems the --no-cache-dir option was causing the issue. The Dockerfile below builds without a problem:
FROM debian:buster
RUN apt-get update && \
apt-get install -y \
python3 \
python3-pip
RUN python3 -m pip install PyInstaller==3.5
RUN python3 -m PyInstaller --help
Edit: This seems to be an issue not with PyInstaller itself but with the specific version of pip; see https://github.com/pyinstaller/pyinstaller/issues/6963 for details.
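If you want to keep --no-cache-dir, one workaround consistent with the pip-version explanation above would be to upgrade pip itself before installing PyInstaller. This is an untested sketch, not a verified fix:
FROM debian:buster
RUN apt-get update && \
    apt-get install -y \
    python3 \
    python3-pip
# Assumption: a newer pip no longer mishandles --no-cache-dir for PyInstaller
RUN python3 -m pip install --upgrade pip && \
    python3 -m pip install --no-cache-dir PyInstaller==3.5
RUN python3 -m PyInstaller --help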
I'm not familiar with PyInstaller, but their requirements page says:
If the pip setup fails to build a bootloader, or if you do not use pip
to install, you must compile a bootloader manually. The process is
described under Building the Bootloader.
Have you tried that in your Dockerfile?
(And you're totally right, it should fail... )
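For reference, a hedged sketch of what building the bootloader manually inside the image might look like, following the process described in the PyInstaller documentation (the v3.5 tag name and package list are assumptions, not verified against this exact setup):
FROM debian:buster
RUN apt-get update && \
    apt-get install -y python3 python3-pip git build-essential zlib1g-dev
# Build the bootloader from source, then install PyInstaller from the same tree
RUN git clone --branch v3.5 https://github.com/pyinstaller/pyinstaller.git /tmp/pyinstaller && \
    cd /tmp/pyinstaller/bootloader && \
    python3 ./waf distclean all && \
    cd /tmp/pyinstaller && \
    python3 -m pip install .
RUN python3 -m PyInstaller --help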
I am trying to install Python, pip, and Ansible using a Dockerfile, but I get this error:
/bin/sh: 1: python: not found
The command '/bin/sh -c curl -O https://bootstrap.pypa.io/pip/2.7/get-pip.py && python get-pip.py && python -m pip install --upgrade "pip < 21.0" && pip install ansible --upgrade' returned a non-zero code: 127
ERROR: Service 'jenkins' failed to build : Build failed
Here is my Dockerfile:
FROM jenkins/jenkins
USER root
RUN curl -O https://bootstrap.pypa.io/pip/2.7/get-pip.py && \
python get-pip.py && \
python -m pip install --upgrade "pip < 21.0" && \
pip install ansible --upgrade
USER jenkins
Note: I used the same instructions in another Dockerfile and it built without errors. Here is that Dockerfile, based on the CentOS image:
FROM centos:7
RUN yum update -y && \
yum -y install openssh-server && \
yum install -y passwd
RUN useradd remote_user && \
echo "password" | passwd remote_user --stdin && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user && \
chmod 600 /home/remote_user/.ssh/authorized_keys
RUN /usr/sbin/sshd-keygen
RUN yum -y install mysql
RUN curl -O https://bootstrap.pypa.io/pip/2.7/get-pip.py && \
python get-pip.py && \
python -m pip install --upgrade "pip < 21.0" && \
pip install awscli --upgrade
CMD /usr/sbin/sshd -D
Since I'm not entirely sure my comments were fully understandable, here is how I would install Ansible in your current base image, jenkins/jenkins.
Notes:
I pinned the tag to lts, since building from latest is a bit risky. You can change that to whatever tag suits your needs.
That base image is itself based on Ubuntu, not CentOS as stated in your title (hence the use of apt rather than yum/dnf).
I used two RUN directives (one to install Python, the other Ansible), but you can merge them into a single instruction if you want to further limit the number of layers.
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && \
apt-get install -y python3-pip && \
rm -rf /var/lib/apt/lists/*
RUN pip install --upgrade pip && \
pip install ansible && \
pip cache purge
USER jenkins
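To sanity-check the result, something like this should work (the image tag is made up; the base image's entrypoint is overridden so the command runs directly):
docker build -t jenkins-ansible .
docker run --rm --entrypoint ansible jenkins-ansible --version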
I deleted the RUN instructions and replaced them with:
RUN apt-get update
RUN apt-get install -y ansible
Worked like a charm.
I have the following Dockerfile:
FROM ubuntu:latest
RUN apt-get update \
&& apt-get install -y python3-pip python3-dev \
&& cd /usr/local/bin \
&& ln -s /usr/bin/python3 python \
&& pip3 install --upgrade pip
# Setup the Python's configs
RUN pip install --upgrade pip && \
pip install --no-cache-dir matplotlib==3.0.2 pandas==0.23.4 numpy==1.16.3 && \
pip install --no-cache-dir pybase64 && \
pip install --no-cache-dir scipy && \
pip install --no-cache-dir dask[complete] && \
pip install --no-cache-dir dash==1.6.1 dash-core-components==1.5.1 dash-bootstrap-components==0.7.1 dash-html-components==1.0.2 dash-table==4.5.1 dash-daq==0.2.2 && \
pip install --no-cache-dir plotly && \
pip install --no-cache-dir adjustText && \
pip install --no-cache-dir networkx && \
pip install --no-cache-dir scikit-learn && \
pip install --no-cache-dir tzlocal
# Setup the R configs
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E298A3A825C0D65DFD57CBB651716619E084DAB9
RUN add-apt-repository 'deb https://cloud.r-project.org/bin/linux/ubuntu bionic-cran35/'
RUN apt update
ENV DEBIAN_FRONTEND=noninteractive
RUN apt install -y r-base
RUN pip install rpy2==2.9.4
RUN apt-get -y install libxml2 libxml2-dev libcurl4-gnutls-dev libssl-dev
RUN echo "r <- getOption('repos'); r['CRAN'] <- 'https://cran.r-project.org'; options(repos = r);" > ~/.Rprofile
RUN Rscript -e "install.packages('BiocManager')"
RUN Rscript -e "BiocManager::install('ggplot2')"
RUN Rscript -e "BiocManager::install('DESeq2')"
RUN Rscript -e "BiocManager::install('RColorBrewer')"
RUN Rscript -e "BiocManager::install('ggrepel')"
RUN Rscript -e "BiocManager::install('factoextra')"
RUN Rscript -e "BiocManager::install('FactoMineR')"
RUN Rscript -e "BiocManager::install('apeglm')"
WORKDIR /
# Copy all the necessary files of the app to the container
COPY ./ ./
# Install the slider-input component
WORKDIR /slider_input
RUN pip install --no-cache-dir slider_input-0.0.1.tar.gz
WORKDIR /
EXPOSE 8050
# Launch the app
CMD ["python", "./app.py"]
It's used to run a Dash app that uses R commands, and it works fine.
The problem is the size of the image.
I want to make the image as small as possible, but everything I have tried has been unsuccessful because of the combination of Python and R.
Do you have any idea how I can minimize this image while keeping the same functionality?
Use docker-slim to minimize and secure your Docker images. docker-slim profiles your image and throws away what you don't need.
It has been used with Node.js, Python, Ruby, Java, Golang, Rust, Elixir and PHP (some app types) running on Ubuntu, Debian, CentOS, Alpine and even Distroless.
docker-slim is production-ready, but test your container before deploying it. It can minify Docker images by up to 30x while also making them more secure.
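As a rough illustration, the basic invocation looks like this (the image tag is a placeholder; check the docker-slim documentation for probe options relevant to a web app like Dash):
docker-slim build my-dash-app:latest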
A multi-stage build will allow you to omit the compiler toolchain, headers, etc. from the final image, including only the resulting code.
A three-part tutorial for Python specifically starts here: https://pythonspeed.com/articles/smaller-python-docker-images/
And the generic Docker docs: https://docs.docker.com/develop/develop-images/multistage-build/
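As a rough illustration of the Python side only (the R toolchain makes this harder), here is a minimal multi-stage sketch; the image tags and package list are placeholders rather than your exact requirements:
FROM python:3.8-slim AS builder
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# Install only what the app needs into an isolated virtual environment
RUN pip install --no-cache-dir dash pandas numpy

FROM python:3.8-slim
# Copy just the installed environment, leaving pip's build artifacts behind
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY ./ ./
EXPOSE 8050
CMD ["python", "./app.py"]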
I'm new to Docker (Community Edition) and am currently trying to create a Dockerfile to run my Python 3 script, but I'm encountering a problem when I try to build the image.
Here's my Dockerfile:
FROM python:3
COPY . /
RUN \
apt-get update \
apt-get install python3-pip \
pip3 install bs4 \
pip3 install requests \
apt-get install python3-lxml -y \
pip3 install Pillow \
apt-get install libopenjp2-7 -y \
apt-get install libtiff5 -y
CMD [ "python3","./Manga-Alert.py" ]
But I'm getting an error: it doesn't find the package python3-pip.
The build then fails completely.
I'm probably writing my Dockerfile incorrectly, but I don't know how to resolve the problem.
Those backslashes just mean a new line in the Dockerfile; it isn't the same as running the commands in a terminal. Because of this, you need to separate each command with && if you want them all to execute under one RUN directive.
FROM python:3
COPY . /
RUN \
apt-get update -y && \
apt-get install python3-pip -y && \
pip3 install bs4 && \
pip3 install requests && \
apt-get install python3-lxml -y && \
pip3 install Pillow && \
apt-get install libopenjp2-7 -y && \
apt-get install libtiff5 -y
CMD [ "python3","./Manga-Alert.py" ]
I'm trying to install awscli using pip (as per Amazon's recommendations) in a custom Docker image that comes FROM library/node:6.11.2. Here's a repro:
FROM library/node:6.11.2
RUN apt-get update && \
apt-get install -y \
python \
python-pip \
python-setuptools \
groff \
less \
&& pip --no-cache-dir install --upgrade awscli \
&& apt-get clean
CMD ["/bin/bash"]
However, with the above I'm met with:
no such option: --no-cache-dir
Presumably because I've got incorrect versions of Python and/or Pip?
I'm installing Python, Pip, and awscli in a similar way with FROM maven:3.5.0-jdk-8 and there it works just fine. I'm unsure what the relevant differences between the two images are.
Removing said option from my Dockerfile doesn't do me much good either, because then I'm met with a big pile of different errors, an excerpt here:
Installing collected packages: awscli, PyYAML, docutils, rsa, colorama, botocore, s3transfer, pyasn1, jmespath, python-dateutil, futures, six
Running setup.py install for PyYAML
checking if libyaml is compilable
### ABBREVIATED ###
ext/_yaml.c:4:20: fatal error: Python.h: No such file or directory
#include "Python.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
### ABBREVIATED ###
Bottom line: how do you properly install awscli in library/node:6.x based images?
Adding python-dev as per this other answer works, but throws an alarming number of compiler warnings (errors?), so I went with a variation of @SergeyKoralev's answer, which needed some tweaking before it worked.
Here are the changes I needed to make this work:
Change to python3 and pip3 everywhere.
Add a statement to upgrade pip itself.
Move the awscli install into a separate RUN command.
Here's a full repro that does seem to work:
FROM library/node:6.11.2
RUN apt-get update && \
apt-get install -y \
python3 \
python3-pip \
python3-setuptools \
groff \
less \
&& pip3 install --upgrade pip \
&& apt-get clean
RUN pip3 --no-cache-dir install --upgrade awscli
CMD ["/bin/bash"]
You can probably also keep the aws install in the same RUN layer if you add a shell command before the install that refreshes things after upgrading pip. Not sure how though.
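One way that might work in a single layer is to invoke pip through the interpreter after the upgrade, so the shell never relies on a stale pip3 wrapper; a sketch of that idea:
RUN apt-get update && \
    apt-get install -y python3 python3-pip python3-setuptools groff less \
    && python3 -m pip install --upgrade pip \
    && python3 -m pip --no-cache-dir install --upgrade awscli \
    && apt-get clean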
All the answers are about AWS CLI version 1. If you want version 2, try the Dockerfile below:
FROM node:lts-stretch-slim
RUN apt-get update && \
apt-get install -y \
unzip \
curl \
&& apt-get clean \
&& curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
&& unzip awscliv2.zip \
&& ./aws/install \
&& rm -rf \
awscliv2.zip \
&& apt-get -y purge curl \
&& apt-get -y purge unzip
CMD ["/bin/bash"]
As you have correctly stated, the pip installed on the Docker image you are using is an older one that does not support --no-cache-dir. You can try updating pip, or you can fix the second problem, which is about missing Python source headers, by installing the python-dev package. Just add it to the list of packages installed in the Dockerfile:
FROM library/node:6.11.2
RUN apt-get update && \
apt-get install -y \
python \
python-dev \
python-pip \
python-setuptools \
groff \
less \
&& pip install --upgrade awscli \
&& apt-get clean
CMD ["/bin/bash"]
You can then run aws, which should be on your PATH.
Your image is based on Debian Jessie, so you are installing Python 2.7. Try using Python 3.x:
apt-get install -y python3-pip
pip3 install awscli
Install the AWS CLI in a Docker container using the command below:
apt upgrade -y;apt update;apt install python3 python3-pip python3-setuptools -y; python3 -m pip --no-cache-dir install --upgrade awscli
To check the assumed role or AWS identity, run the command below:
aws sts get-caller-identity
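Inside a container that command only succeeds if credentials are available; one common way is to pass them through from the host environment (the image name is a placeholder):
docker run --rm \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_DEFAULT_REGION \
  my-aws-image aws sts get-caller-identity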