I'm trying to install awscli using pip (as per Amazon's recommendations) in a custom Docker image that comes FROM library/node:6.11.2. Here's a repro:
FROM library/node:6.11.2
RUN apt-get update && \
    apt-get install -y \
        python \
        python-pip \
        python-setuptools \
        groff \
        less \
    && pip --no-cache-dir install --upgrade awscli \
    && apt-get clean
CMD ["/bin/bash"]
However, with the above I'm met with:
no such option: --no-cache-dir
Presumably because I've got incorrect versions of Python and/or Pip?
I'm installing Python, Pip, and awscli in a similar way with FROM maven:3.5.0-jdk-8 and there it works just fine. I'm unsure what the relevant differences between the two images are.
Removing said option from my Dockerfile doesn't do me much good either, because then I'm met with a big pile of different errors; here's an excerpt:
Installing collected packages: awscli, PyYAML, docutils, rsa, colorama, botocore, s3transfer, pyasn1, jmespath, python-dateutil, futures, six
Running setup.py install for PyYAML
checking if libyaml is compilable
### ABBREVIATED ###
ext/_yaml.c:4:20: fatal error: Python.h: No such file or directory
#include "Python.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
### ABBREVIATED ###
Bottom line: how do you properly install awscli in library/node:6.x based images?
Adding python-dev as per this other answer works, but it throws an alarming number of compiler warnings (errors?), so I went with a variation of @SergeyKoralev's answer, which needed some tweaking before it worked.
Here are the changes I needed to make this work:
Change to python3 and pip3 everywhere.
Add a statement to upgrade pip itself.
Move the awscli install into a separate RUN command.
Here's a full repro that does seem to work:
FROM library/node:6.11.2
RUN apt-get update && \
    apt-get install -y \
        python3 \
        python3-pip \
        python3-setuptools \
        groff \
        less \
    && pip3 install --upgrade pip \
    && apt-get clean
RUN pip3 --no-cache-dir install --upgrade awscli
CMD ["/bin/bash"]
You can probably also keep the aws install in the same RUN layer if you add a shell command before the install that refreshes things after upgrading pip. Not sure how though.
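For what it's worth, a minimal (untested) sketch of that idea: clear the shell's command cache with hash -r right after upgrading pip, so the upgraded pip in /usr/local/bin is picked up within the same layer; invoking pip as python3 -m pip instead would sidestep the lookup entirely.
FROM library/node:6.11.2
# Single layer: install packages, upgrade pip, refresh the command cache, then install awscli
RUN apt-get update && \
    apt-get install -y python3 python3-pip python3-setuptools groff less \
    && pip3 install --upgrade pip \
    && hash -r \
    && pip3 --no-cache-dir install --upgrade awscli \
    && apt-get clean
CMD ["/bin/bash"]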
All the answers are about aws-cli version 1. If you want version 2, try the below:
FROM node:lts-stretch-slim
RUN apt-get update && \
    apt-get install -y \
        unzip \
        curl \
    && apt-get clean \
    && curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
    && unzip awscliv2.zip \
    && ./aws/install \
    && rm -rf \
        awscliv2.zip \
    && apt-get -y purge curl \
    && apt-get -y purge unzip
CMD ["/bin/bash"]
As you have correctly stated, the pip installed in the Docker image you are using is an older one that does not support --no-cache-dir. You can try updating it, or you can fix the second problem instead, which is about missing Python source headers. That is solved by installing the python-dev package. Just add it to the list of packages installed in the Dockerfile:
FROM library/node:6.11.2
RUN apt-get update && \
    apt-get install -y \
        python \
        python-dev \
        python-pip \
        python-setuptools \
        groff \
        less \
    && pip install --upgrade awscli \
    && apt-get clean
CMD ["/bin/bash"]
You can then run aws which should be on your path.
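For example, assuming you tag the image node-awscli (an arbitrary name), a quick check would be:
docker build -t node-awscli .
docker run --rm node-awscli aws --version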
Your image is based on Debian Jessie, so you are installing Python 2.7. Try using Python 3.x:
apt-get install -y python3-pip
pip3 install awscli
Install the AWS CLI in a Docker container using the command below:
apt update; apt upgrade -y; apt install -y python3 python3-pip python3-setuptools; python3 -m pip --no-cache-dir install --upgrade awscli
To check the assumed role or AWS identity, run the command below:
aws sts get-caller-identity
I am trying to install Python, pip, and Ansible using a Dockerfile, but I get this error:
/bin/sh: 1: python: not found
The command '/bin/sh -c curl -O https://bootstrap.pypa.io/pip/2.7/get-pip.py && python get-pip.py && python -m pip install --upgrade "pip < 21.0" && pip install ansible --upgrade' returned a non-zero code: 127
ERROR: Service 'jenkins' failed to build : Build failed
Here is my Dockerfile:
FROM jenkins/jenkins
USER root
RUN curl -O https://bootstrap.pypa.io/pip/2.7/get-pip.py && \
    python get-pip.py && \
    python -m pip install --upgrade "pip < 21.0" && \
    pip install ansible --upgrade
USER jenkins
Note: I used the same instructions in another Dockerfile and it built without errors. Here is the Dockerfile based on the CentOS image:
FROM centos:7
RUN yum update -y && \
    yum -y install openssh-server && \
    yum install -y passwd
RUN useradd remote_user && \
    echo "password" | passwd remote_user --stdin && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user && \
    chmod 600 /home/remote_user/.ssh/authorized_keys
RUN /usr/sbin/sshd-keygen
RUN yum -y install mysql
RUN curl -O https://bootstrap.pypa.io/pip/2.7/get-pip.py && \
    python get-pip.py && \
    python -m pip install --upgrade "pip < 21.0" && \
    pip install awscli --upgrade
CMD /usr/sbin/sshd -D
Since I'm not entirely sure my comments were fully understandable, here is how I would install ansible in your current base image jenkins/jenkins.
Notes:
I fixed the tag to lts since building from latest is a bit on the edge. You can change that to whatever tag suits your needs.
That base image is itself based on Ubuntu and not CentOS as reported in your title (hence apt and not yum/dnf).
I used two RUN directives (one for installing Python, the other for Ansible), but you can merge them into a single instruction if you want to further limit the number of layers (a merged variant is sketched after the Dockerfile below).
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && \
    apt-get install -y python3-pip && \
    rm -rf /var/lib/apt/lists/*
RUN pip install --upgrade pip && \
    pip install ansible && \
    pip cache purge
USER jenkins
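If you do prefer a single layer, a merged variant of the same commands could look like this:
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && \
    apt-get install -y python3-pip && \
    rm -rf /var/lib/apt/lists/* && \
    pip install --upgrade pip && \
    pip install ansible && \
    pip cache purge
USER jenkins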
I deleted the RUN instructions and replaced them with:
RUN apt-get update
RUN apt-get install -y ansible
Worked like a charm.
I am building an Ubuntu Docker image that is going to run my Python application, and I have some libraries that require Python <= 3.6 to work; otherwise they throw errors.
My problem is that when I install pip, it always automatically uses Python 3.8, and I'm not sure how to make pip use an older version of Python. This is the installation step in my Dockerfile:
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y software-properties-common && \
    add-apt-repository ppa:deadsnakes/ppa && \
    apt-add-repository universe && \
    apt-get update && \
    apt-get install -y \
        libmysqlclient-dev \
        netcat \
        python3 \
        python-dev \
        build-essential \
        python3-setuptools \
        python3-pip \
        supervisor && \
    pip install -U pip setuptools && \
    rm -rf /var/lib/apt/lists/*
I tried changing python3-pip to just python-pip, but when I run it I get the following error:
E: Unable to locate package python-pip
I've tried a lot of solutions, but I always end up with the same problem.
Outside of Docker, if python3.6 is the python you need, you can do:
python3.6 -m pip install
In Docker, right now python3 obviously points to Python 3.8, so you must first install Python 3.6 and find out how to call it (python3.6 or python3). You might need to compile it from source and probably create a symbolic link. This can get very ugly to do inside a Dockerfile, but you can try writing a shell script with all the commands and running that script during the build. Or, if you are lucky, you may find a ready-made Python 3.6 package that works for you and apt-get install it instead of python3, the same way you do now.
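Since your Dockerfile already adds the deadsnakes PPA, one hedged sketch along those lines (the package names and the versioned get-pip.py URL are assumptions, not verified against your exact base image) is to install Python 3.6 alongside the system Python and always invoke pip through that interpreter:
# Install Python 3.6 from the deadsnakes PPA alongside the system Python
RUN apt-get update && \
    apt-get install -y software-properties-common curl && \
    add-apt-repository -y ppa:deadsnakes/ppa && \
    apt-get update && \
    apt-get install -y python3.6 python3.6-dev python3.6-distutils
# Bootstrap a pip tied to Python 3.6 and use it as "python3.6 -m pip"
RUN curl -sS https://bootstrap.pypa.io/pip/3.6/get-pip.py | python3.6 && \
    python3.6 -m pip install -U pip setuptools
Your project dependencies would then be installed with python3.6 -m pip install ... so they land under the 3.6 interpreter rather than 3.8.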
Running into an unexpected issue trying to prepare an Ubuntu 20.04 based image with Python and pyodbc.
FROM ubuntu:20.04
# install mssql odbc driver
RUN apt-get update && apt-get upgrade -y && apt-get install -y curl gnupg build-essential
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
    && curl https://packages.microsoft.com/config/ubuntu/20.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update && ACCEPT_EULA=Y apt-get install -y msodbcsql17 unixodbc-dev
# install python 3.7.9 from source
RUN apt-get install -y python3 python3-pip
# clean up
# this does not work
RUN apt-get remove -y perl curl gnupg && apt-get autoremove -y
# this works
# RUN apt-get remove -y curl gnupg && apt-get autoremove -y
RUN pip3 install pyodbc
If perl is not removed, the installation of pyodbc is uneventful, but if perl is removed, the following error is displayed:
src/pyodbc.h:56:10: fatal error: sql.h: No such file or directory
As if unixodbc-dev is also removed for some reason. Has anyone run into this before? If perl is required, wouldn't apt-get prevent it from being deleted? Or do I need to install a different set of C bindings to make this work?
Running apt-get install -f after installing msodbcsql17 doesn't help either.
Thanks.
unixodbc-dev was installed as a transitive dependency and was automatically removed when no longer needed, i.e. after perl was removed. You need to install it explicitly:
RUN apt-get install -y unixodbc-dev
See the following bug report for details: https://github.com/mkleehammer/pyodbc/issues/441
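In the Dockerfile above, one way to apply that (a sketch; apt-mark manual is the alternative if you'd rather protect the package before the removal step) is to re-install the headers after the autoremove and before the pip step:
RUN apt-get remove -y perl curl gnupg && apt-get autoremove -y
# Re-install the ODBC headers that autoremove took away
RUN apt-get update && apt-get install -y unixodbc-dev
# (or, before the removal step: RUN apt-mark manual unixodbc-dev)
RUN pip3 install pyodbc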
I'm new to Docker (Community Edition) and currently trying to create a Dockerfile to run my python3 script, but I'm encountering a problem when I try to build the image.
Here's my Dockerfile:
FROM python:3
COPY . /
RUN \
    apt-get update \
    apt-get install python3-pip \
    pip3 install bs4 \
    pip3 install requests \
    apt-get install python3-lxml -y \
    pip3 install Pillow \
    apt-get install libopenjp2-7 -y \
    apt-get install libtiff5 -y
CMD [ "python3","./Manga-Alert.py" ]
But I'm getting an error: it can't find the package python3-pip.
And then the build fails completely.
I'm probably writing my Dockerfile incorrectly, but I don't know how to resolve the problem.
Those backslashes just mean a line continuation in the Dockerfile; it isn't the same as running the commands one by one in a terminal. Because of this, you need to separate each command with && if you want them all to execute under one RUN directive.
FROM python:3
COPY . /
RUN \
    apt-get update -y && \
    apt-get install python3-pip -y && \
    pip3 install bs4 && \
    pip3 install requests && \
    apt-get install python3-lxml -y && \
    pip3 install Pillow && \
    apt-get install libopenjp2-7 -y && \
    apt-get install libtiff5 -y
CMD [ "python3","./Manga-Alert.py" ]
I want to install some packages with pip in a container. The trivial way to do this is the following:
FROM ubuntu:trusty
RUN apt-get update && \
apt-get install python-pip <lots-of-dependencies-needed-only-for-pip-install>
RUN pip install <some-packages>
However, this way I install a lot of unneeded dependencies, which increases the size of the container unnecessarily.
My first idea was to do this:
FROM ubuntu:trusty AS pip_install
RUN apt-get update && \
apt-get install python-pip <lots-of-dependencies-needed-only-for-pip-install>
RUN pip install <some-packages>
FROM ubuntu:trusty
RUN apt-get update && \
apt-get install python-pip <runtime-dependencies>
COPY --from=pip_install /usr/local/bin /usr/local/bin
COPY --from=pip_install /usr/local/lib/python2.7 /usr/local/lib/python2.7
This works, but feels like a workaround. Is there any more elegant way of doing this? I thought of something like this:
FROM ubuntu:trusty AS pip_install
RUN apt-get update && \
apt-get install python-pip <lots-of-dependencies-needed-only-for-pip-install>
RUN pip install <some-packages>
VOLUME /usr/local
FROM ubuntu:trusty
<somehow mount /usr/local from pip_install to /tmp/pip>
RUN apt-get update && \
apt-get install python-pip <runtime-dependencies>
RUN pip install <from /tmp/pip> <some-packages>
Is this even possible?
I could have used one of the python images, but in my real application I derive from another image that itself derives from ubuntu:trusty, so for this question that's beside the point.