I am building an Ubuntu Docker image to run my Python application, and some of my libraries require Python <= 3.6 to work; otherwise they throw errors.
My problem is that when I install pip, it always uses Python 3.8, and I'm not sure how to make pip use an older version of Python. This is the installation step in my Dockerfile:
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y software-properties-common && \
add-apt-repository ppa:deadsnakes/ppa && \
apt-add-repository universe && \
apt-get update && \
apt-get install -y \
libmysqlclient-dev \
netcat \
python3 \
python-dev \
build-essential \
python3-setuptools \
python3-pip \
supervisor && \
pip install -U pip setuptools && \
rm -rf /var/lib/apt/lists/*
I tried changing python3-pip to just python-pip, but when I run it I get the following error:
E: Unable to locate package python-pip
I've tried a lot of solutions, but I always run into the same problem.
Outside of Docker, if python3.6 is the python you need, you can do:
python3.6 -m pip install
In your Docker image, python3 obviously points to Python 3.8, so you must first install Python 3.6 and find out how to call it (python3.6 or python3). You might need to compile it from source and probably create some symbolic links. This can get ugly to do inside a Dockerfile, so you could write a shell script with all the commands and run that script inside the container. Or, if you are lucky, you may find a ready-made Python 3.6 package and apt-get install it instead of python3, the same way you do now.
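For example, a minimal (untested) sketch of that last approach, assuming the deadsnakes PPA provides python3.6 builds (and the python3.6-dev / python3.6-distutils packages) for your Ubuntu release:
FROM ubuntu:20.04
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y software-properties-common curl && \
add-apt-repository -y ppa:deadsnakes/ppa && \
apt-get update && \
apt-get install -y python3.6 python3.6-dev python3.6-distutils && \
rm -rf /var/lib/apt/lists/*
# bootstrap a pip that belongs to 3.6 (pypa keeps a versioned get-pip.py for old Pythons)
RUN curl -sS https://bootstrap.pypa.io/pip/3.6/get-pip.py | python3.6
RUN python3.6 -m pip install -U pip setuptools
Calling pip as python3.6 -m pip (rather than the bare pip) keeps it tied to the interpreter you actually want.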
Running into an unexpected issue while trying to prepare an Ubuntu 20.04-based image with Python and pyodbc.
FROM ubuntu:20.04
# install mssql odbc driver
RUN apt-get update && apt-get upgrade -y && apt-get install -y curl gnupg build-essential
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
&& curl https://packages.microsoft.com/config/ubuntu/20.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update && ACCEPT_EULA=Y apt-get install -y msodbcsql17 unixodbc-dev
# install python 3.7.9 from source
RUN apt-get install -y python3 python3-pip
# clean up
# this does not work
RUN apt-get remove -y perl curl gnupg && apt-get autoremove -y
# this works
# RUN apt-get remove -y curl gnupg && apt-get autoremove -y
RUN pip3 install pyodbc
If perl is not removed, the installation of pyodbc is uneventful, but if perl is removed, the following error is displayed:
src/pyodbc.h:56:10: fatal error: sql.h: No such file or directory
It's as if unixodbc-dev is also removed for some reason. Has anyone run into this before? If perl were required, wouldn't apt-get prevent it from being deleted? Or do I need to install a different set of C bindings to make this work?
Running apt-get install -f after installing msodbcsql17 doesn't help either.
Thanks.
unixodbc-dev was installed as a transitive dependency and was automatically removed when no longer needed, i.e. after perl was removed. You need to install it explicitly:
RUN apt-get install -y unixodbc-dev
See the following bug report for details: https://github.com/mkleehammer/pyodbc/issues/441
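For example (a sketch based on the Dockerfile in the question, not taken from the linked report), you can either pin the package so autoremove keeps it, or install it again after the cleanup:
# keep unixodbc-dev from being autoremoved along with perl
RUN apt-mark manual unixodbc-dev
RUN apt-get remove -y perl curl gnupg && apt-get autoremove -y
# (alternatively, just run apt-get install -y unixodbc-dev again after the autoremove)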
I'm trying to update an existing Dockerfile to switch from python3.5 to python3.8. Previously it created symlinks for python3.5 and pip3 like this:
RUN ln -s /usr/bin/pip3 /usr/bin/pip
RUN ln -s /usr/bin/python3 /usr/bin/python
I've updated the Dockerfile to install python3.8 from the deadsnakes PPA:
apt-get install python3-pip python3.8-dev python3.8-distutils python3.8-venv
If I remove python3-pip, it complains about gcc:
C compiler or Python headers are not installed on this system. Try to run: sudo apt-get install gcc python3-dev
With these installations in place, I'm trying to update the existing symlink creation to something like this:
RUN ln -s /usr/bin/pip3 /usr/local/lib/python3.8/dist-packages/pip
RUN ln -s /usr/bin/pip /usr/local/lib/python3.8/dist-packages/pip
RUN ln -s /usr/bin/python3.8 /usr/bin/python3
It fails, saying:
ln: failed to create symbolic link '/usr/bin/python3': File exists
which I assume is because python3 already points to python3.6.
If I try RUN ln -s /usr/bin/python3.8 /usr/bin/python, it doesn't complain about the symlink and the image builds successfully, but it fails later while installing requirements (we use Makefile targets to install dependencies inside the container using pip and pip-sync):
ERROR: Cannot uninstall 'python-apt'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
which I assume is because python-apt gets installed as part of the default python3.6 installation and python3.8's pip can't uninstall it.
PS: my image is based on Ubuntu 18.04, which comes with python3.6 by default.
How can I properly switch the Dockerfile / image from python3.5 to python3.8, so that I can later use pip directly and have it point to python3.8's pip?
Replacing the system python in this way is usually not a good idea (as it can break operating-system-level programs which depend on those executables) -- I go over that a little bit in this video I made "why not global pip / virtualenv?"
A better way is to create a prefix and put it earlier on the PATH (this allows system executables to continue to work, while bare python / python3 / etc. will use your other executable).
In the case of deadsnakes, which it seems you're using, something like this should work:
FROM ubuntu:bionic
RUN : \
&& apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
software-properties-common \
&& add-apt-repository -y ppa:deadsnakes \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
python3.8-venv \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& :
RUN python3.8 -m venv /venv
ENV PATH=/venv/bin:$PATH
The ENV line is the key here: it puts the virtualenv at the beginning of the PATH.
$ docker build -t test .
...
$ docker run --rm -ti test bash -c 'which python && python --version && which pip && pip --version'
/venv/bin/python
Python 3.8.5
/venv/bin/pip
pip 20.1.1 from /venv/lib/python3.8/site-packages/pip (python 3.8)
disclaimer: I'm the maintainer of deadsnakes
Why not just build a new image from ubuntu:18.04 with the config you need?
Like this:
FROM ubuntu:18.04
RUN apt update && apt install software-properties-common -y
RUN add-apt-repository -y ppa:deadsnakes/ppa && apt install python3.8 python3-pip -y
RUN ln -s /usr/bin/pip3 /usr/bin/pip && \
ln -s /usr/bin/python3.8 /usr/bin/python
You can install and enable your python version.
# Python 3.8 and pip3
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa -y
RUN apt-get install -y python3.8
RUN ln -s /usr/bin/python3.8 /usr/bin/python
RUN apt-get install -y python3-pip
Sometimes replacing the base OS (e.g. starting over from a clean Ubuntu image) is not an option, because the current image is too complicated. For example, my base image is FROM ufoym/deepo:all-cu101.
So, to change the existing Python (3.6) to Python 3.8, I added these two lines:
RUN apt-get update -qq && apt-get install -y -qq python3.8
RUN rm /usr/bin/python && rm /usr/bin/python3 && ln -s /usr/bin/python3.8 /usr/bin/python && ln -s /usr/bin/python3.8 /usr/bin/python3 \
&& rm /usr/local/bin/python && rm /usr/local/bin/python3 && ln -s /usr/bin/python3.8 /usr/local/bin/python && ln -s /usr/bin/python3.8 /usr/local/bin/python3 \
&& apt-get install -y python3-pip python-dev python3.8-dev && python3 -m pip install pip --upgrade
The first step installs python3.8.
The second step changes the symlinks for python and python3 so they point to python3.8.
After that, it installs python3-pip and upgrades it to make sure pip is using the new Python 3.8 environment.
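As a quick sanity check (my addition, not part of the original snippet), you can confirm that everything now resolves to 3.8:
RUN python --version && python3 --version && python3 -m pip --version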
I have LightGBM installed on my Mac and tested it earlier for a different project.
Now I am inside a Docker container with Python 3.6 on my Mac. As soon as I add import lightgbm as lgbm to my Flask application, I get this error:
OSError: libgomp.so.1: cannot open shared object file: No such file or directory
What is going on? Can anyone please suggest?
This worked for me; include it in your Dockerfile:
RUN apt-get update && apt-get install -y --no-install-recommends apt-utils
RUN apt-get -y install curl
RUN apt-get install -y libgomp1
Source: https://github.com/microsoft/LightGBM/issues/2223#issuecomment-499788066
Depending on the image you use, you may need a C++ compiler together with libgomp1. The issue is that LightGBM is indeed written in C++, and the base image of your Dockerfile may not have all the required dependencies installed by default (while your Mac does).
Following these links:
https://raw.githubusercontent.com/Microsoft/LightGBM/master/docker/dockerfile-cli
https://github.com/microsoft/LightGBM/issues/2223
the solution would be to add the following to the Dockerfile:
RUN apt-get update && \
apt-get install -y --no-install-recommends \
ca-certificates \
cmake \
build-essential \
gcc \
g++ \
git && \
apt-get install -y libgomp1 && \
rm -rf /var/lib/apt/lists/*
I'm trying to install awscli using pip (as per Amazon's recommendations) in a custom Docker image that comes FROM library/node:6.11.2. Here's a repro:
FROM library/node:6.11.2
RUN apt-get update && \
apt-get install -y \
python \
python-pip \
python-setuptools \
groff \
less \
&& pip --no-cache-dir install --upgrade awscli \
&& apt-get clean
CMD ["/bin/bash"]
However, with the above I'm met with:
no such option: --no-cache-dir
Presumably because I've got incorrect versions of Python and/or Pip?
I'm installing Python, Pip, and awscli in a similar way with FROM maven:3.5.0-jdk-8 and there it works just fine. I'm unsure what the relevant differences between the two images are.
Removing said option from my Dockerfile doesn't do me much good either, because then I'm met with a big pile of different errors; an excerpt is below:
Installing collected packages: awscli, PyYAML, docutils, rsa, colorama, botocore, s3transfer, pyasn1, jmespath, python-dateutil, futures, six
Running setup.py install for PyYAML
checking if libyaml is compilable
### ABBREVIATED ###
ext/_yaml.c:4:20: fatal error: Python.h: No such file or directory
#include "Python.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
### ABBREVIATED ###
Bottom line: how do you properly install awscli in library/node:6.x based images?
Adding python-dev as per this other answer works, but it throws an alarming number of compiler warnings (errors?), so I went with a variation of SergeyKoralev's answer, which needed some tweaking before it worked.
Here are the changes I needed to make this work:
Change to python3 and pip3 everywhere.
Add a statement to upgrade pip itself.
Move the awscli install into a separate RUN command.
Here's a full repro that does seem to work:
FROM library/node:6.11.2
RUN apt-get update && \
apt-get install -y \
python3 \
python3-pip \
python3-setuptools \
groff \
less \
&& pip3 install --upgrade pip \
&& apt-get clean
RUN pip3 --no-cache-dir install --upgrade awscli
CMD ["/bin/bash"]
You can probably also keep the aws install in the same RUN layer if you add a shell command before the install that refreshes things after upgrading pip. Not sure how though.
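One possible way (a sketch I haven't verified on this image) is to clear the shell's command-location cache with hash -r after upgrading pip, or to sidestep the issue entirely by calling pip through the interpreter:
FROM library/node:6.11.2
RUN apt-get update && \
apt-get install -y \
python3 \
python3-pip \
python3-setuptools \
groff \
less \
&& pip3 install --upgrade pip \
&& hash -r \
&& python3 -m pip --no-cache-dir install --upgrade awscli \
&& apt-get clean
CMD ["/bin/bash"]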
All the other answers are about AWS CLI version 1. If you want version 2, try the below:
FROM node:lts-stretch-slim
RUN apt-get update && \
apt-get install -y \
unzip \
curl \
&& apt-get clean \
&& curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
&& unzip awscliv2.zip \
&& ./aws/install \
&& rm -rf awscliv2.zip \
&& apt-get -y purge curl \
&& apt-get -y purge unzip
CMD ["/bin/bash"]
As you have correctly stated, the pip in the Docker image you are using is an older one that does not support --no-cache-dir. You can try updating it, or you can fix the second problem, which is about missing Python source headers. That can be fixed by installing the python-dev package. Just add it to the list of packages installed in the Dockerfile:
FROM library/node:6.11.2
RUN apt-get update && \
apt-get install -y \
python \
python-dev \
python-pip \
python-setuptools \
groff \
less \
&& pip install --upgrade awscli \
&& apt-get clean
CMD ["/bin/bash"]
You can then run aws, which should be on your PATH.
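For example (the image tag here is just a placeholder):
$ docker build -t node-awscli .
$ docker run --rm node-awscli aws --version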
Your image is based on Debian Jessie, so you are installing Python 2.7. Try using Python 3.x:
apt-get install -y python3-pip
pip3 install awscli
Install the AWS CLI in a Docker container using the command below:
apt update; apt upgrade -y; apt install -y python3 python3-pip python3-setuptools; python3 -m pip --no-cache-dir install --upgrade awscli
To check the assumed role or AWS identity, run the command below:
aws sts get-caller-identity
I want to install some packages with pip in a container. The trivial way to do this is the following:
FROM ubuntu:trusty
RUN apt-get update && \
apt-get install python-pip <lots-of-dependencies-needed-only-for-pip-install>
RUN pip install <some-packages>
However, this way I install a lot of unneeded dependencies, which increases the size of the container unnecessarily.
My first idea was to do this:
FROM ubuntu:trusty AS pip_install
RUN apt-get update && \
apt-get install python-pip <lots-of-dependencies-needed-only-for-pip-install>
RUN pip install <some-packages>
FROM ubuntu:trusty
RUN apt-get update && \
apt-get install python-pip <runtime-dependencies>
COPY --from=pip_install /usr/local/bin /usr/local/bin
COPY --from=pip_install /usr/local/lib/python2.7 /usr/local/lib/python2.7
This works, but feels like a workaround. Is there any more elegant way of doing this? I thought of something like this:
FROM ubuntu:trusty AS pip_install
RUN apt-get update && \
apt-get install python-pip <lots-of-dependencies-needed-only-for-pip-install>
RUN pip install <some-packages>
VOLUME /usr/local
FROM ubuntu:trusty
<somehow mount /usr/local from pip_install to /tmp/pip>
RUN apt-get update && \
apt-get install python-pip <runtime-dependencies>
RUN pip install <from /tmp/pip> <some-packages>
Is this even possible?
I could have used one of the official Python images, but in my real application I derive from another image that itself derives from ubuntu:trusty. For the purposes of this question, that's beside the point.