Docker image taking long to build - python

Here are the contents of the Dockerfile; I changed from an Alpine image to the slim-buster image. I'm really struggling to see why it's taking so long. I think it's got to do with all the packages I'm updating and installing via apt-get. I might be reinstalling packages I don't need, or doing something unnecessary. Is there a way I can speed this up?
# pull official base image
FROM python:3.8-slim-buster
# set work directory
WORKDIR /opt/workspace
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Need this crap for wkhtml old install because any version after this doesn't work with charts and javascript
RUN echo "deb http://security.debian.org/debian-security jessie/updates main" >> /etc/apt/sources.list
# Pillow and Psycopg Dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
wget \
libpq-dev \
libpng-dev \
libjpeg-dev \
python-dev \
postgresql-client \
python3-pip \
python3-setuptools \
python3-wheel \
python3-cffi \
libssl1.0.0 \
libpng12-0 \
xfonts-base \
xfonts-75dpi \
libcairo2 \
libpango-1.0-0 \
libpangocairo-1.0-0 \
libgdk-pixbuf2.0-0 \
libffi-dev \
shared-mime-info \
gcc \
musl-dev \
python3-dev \
tk-dev \
uuid-dev \
&& rm -rf /var/lib/apt/lists/*
# fetch wait for it script
RUN wget -q -O /usr/bin/wait-for-it https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh && \
chmod +x /usr/bin/wait-for-it
RUN pip install psycopg2
# bunch of wkhtmltopdf shit only works with charts on this image.
RUN wget http://archive.ubuntu.com/ubuntu/pool/main/libj/libjpeg-turbo/libjpeg-turbo8_2.0.3-0ubuntu1_amd64.deb
RUN dpkg -i libjpeg-turbo8_2.0.3-0ubuntu1_amd64.deb
RUN wget https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.2.1/wkhtmltox-0.12.2.1_linux-trusty-amd64.deb
RUN dpkg -i wkhtmltox-0.12.2.1_linux-trusty-amd64.deb
# Install Dependencies
COPY requirements.txt /opt/workspace/requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh /opt/workspace/entrypoint.sh
# copy project
COPY . /opt/workspace
# run entrypoint.sh
ENTRYPOINT ["/opt/workspace/entrypoint.sh"]
EDIT:
After receiving an answer I have updated my Dockerfile to the following. Bear in mind that the only packages I need to install with apt-get are the ones wkhtmltopdf requires to run! The build is so much faster now and everything is working a lot better.
FROM python:3.8-slim-buster
# set work directory
WORKDIR /opt/workspace
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV DEBIAN_FRONTEND=noninteractive
# Need this crap for wkhtml old install
RUN echo "deb http://security.debian.org/debian-security jessie/updates main" >> /etc/apt/sources.list
RUN apt-get update && apt-get install -y --no-install-recommends \
wget \
fontconfig \
xfonts-base \
xfonts-75dpi \
libssl1.0.0 \
libpq-dev \
libpng-dev \
libjpeg-dev \
libffi-dev \
libpng12-0 \
libxext6 \
libx11-6 \
libxrender1 \
gcc \
&& rm -rf /var/lib/apt/lists/*
# fetch wait for it script
RUN wget -q -O /usr/bin/wait-for-it https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh && \
chmod +x /usr/bin/wait-for-it
# bunch of wkhtmltopdf shit only works with charts on this image.
RUN wget http://archive.ubuntu.com/ubuntu/pool/main/libj/libjpeg-turbo/libjpeg-turbo8_2.0.3-0ubuntu1_amd64.deb
RUN dpkg -i libjpeg-turbo8_2.0.3-0ubuntu1_amd64.deb
RUN wget https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.2.1/wkhtmltox-0.12.2.1_linux-trusty-amd64.deb
RUN dpkg -i wkhtmltox-0.12.2.1_linux-trusty-amd64.deb
# Install Dependencies
COPY requirements.txt /opt/workspace/requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh /opt/workspace/entrypoint.sh
# copy project
COPY . /opt/workspace
# run entrypoint.sh
ENTRYPOINT ["/opt/workspace/entrypoint.sh"]

There are a bunch of issues here. First, there are packages you don't need:
You're installing Python twice. python-dev and python3-dev pull in the Debian-packaged Python, but the Docker python image already ships its own Python (in /usr/local), with the dev headers included. You don't need the Debian one, and it can lead to confusion because you end up with two versions of Python (https://pythonspeed.com/articles/importerror-docker/).
musl-dev is unnecessary. Debian uses glibc, not musl. I suspect this is a holdover from Alpine.
You are installing a compiler and a whole bunch of C headers, all those *-dev packages. It's quite possible you don't need them at all! On Alpine you have to compile everything, because Alpine can't use normal binary wheels (https://pythonspeed.com/articles/alpine-docker-python/). Since you're on Debian, quite possibly all your dependencies have binary wheels. You would still need a compiler if your own code has C extensions, but if it's pure Python, quite possibly not.
So my suggestion: just drop the whole apt-get line. Pretty good chance it'll Just Work without it.
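If you want to test that assumption quickly, pip can be told to refuse source builds; if the install below succeeds, every pinned dependency ships a binary wheel and you can likely drop gcc, build-essential and the *-dev headers entirely (a sketch against your existing requirements.txt):
RUN pip install --no-cache-dir --only-binary=:all: -r requirements.txt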

Related

AWS lambda stuck importing library

I am trying to deploy my ECR image to AWS Lambda. The image works fine locally, but on AWS it gets stuck importing this library: https://github.com/jianfch/stable-ts.
import json
import boto3
import requests
import numpy
print("All imports ok 1 ...")
from stable_whisper import load_model
print("All imports ok 2 ...")
The first print statement runs, but the import hangs and the second statement is never printed before the function times out.
Docker File:
# Build FFmpeg
FROM public.ecr.aws/lambda/python:3.8 as lambda-base
COPY requirements.txt ./
COPY myfunction.py ./
RUN pip3 install -r requirements.txt
WORKDIR /ffmpeg_sources
RUN yum install autoconf automake bzip2 bzip2-devel cmake libxcb libxcb-devel \
freetype-devel gcc gcc-c++ git libtool make pkgconfig zlib-devel -y -q
# Compile NASM assembler
RUN curl -OL https://www.nasm.us/pub/nasm/releasebuilds/2.15.05/nasm-2.15.05.tar.bz2
RUN tar xjvf nasm-2.15.05.tar.bz2
RUN cd nasm-2.15.05 && sh autogen.sh && \
./configure --prefix="/ffmpeg_sources/ffmpeg_build" \
--bindir="/ffmpeg_sources/bin" && \
make && make install
# Compile Yasm assembler
RUN curl -OL https://www.tortall.net/projects/yasm/releases/yasm-1.3.0.tar.gz
RUN tar xzvf yasm-1.3.0.tar.gz
RUN cd yasm-1.3.0 && \
./configure --prefix="/ffmpeg_sources/ffmpeg_build" \
--bindir="/ffmpeg_sources/bin" && \
make && make install
# Compile FFmpeg
RUN curl -OL https://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2
RUN tar xjvf ffmpeg-snapshot.tar.bz2
RUN cd ffmpeg && \
export PATH="/ffmpeg_sources/bin:$PATH" && \
export PKG_CONFIG_PATH="/ffmpeg_sources/ffmpeg_build/lib/pkgconfig" && \
./configure \
--prefix="/ffmpeg_sources/ffmpeg_build" \
--pkg-config-flags="--static" \
--extra-cflags="-I/ffmpeg_sources/ffmpeg_build/include" \
--extra-ldflags="-L/ffmpeg_sources/ffmpeg_build/lib" \
--extra-libs=-lpthread \
--extra-libs=-lm \
--enable-libxcb \
--bindir="/ffmpeg_sources/bin" && \
make && \
make install
# Final image with code and dependencies
FROM lambda-base
COPY myfunction.py /var/task/
CMD ["myfunction.lambda_handler"]
inside the requirements.txt, I tried both stable-ts and git+https://github.com/jianfch/stable-ts.git
I appreciate any help.
stable_whisper has a lot of dependencies, and some of them involve compiled code (including ffmpeg).
Python packages that contain compiled code aren't always compatible with the Lambda runtime by default. I don't know how to build ffmpeg myself, but I can point you to a useful AWS sample that builds it for Lambda and uses Python packages that depend on it. Maybe it will contribute to solving your problem, or maybe others will be able to help you further.
Sample Dockerfile:
# Install dependencies
…
# Build FFmpeg
FROM public.ecr.aws/lambda/python:3.8 as ffmpeg
WORKDIR /ffmpeg_sources
RUN yum install autoconf automake bzip2 bzip2-devel cmake libxcb libxcb-devel \
freetype-devel gcc gcc-c++ git libtool make pkgconfig zlib-devel -y -q
# Compile NASM assembler
RUN curl -OL https://www.nasm.us/pub/nasm/releasebuilds/2.15.05/nasm-2.15.05.tar.bz2
RUN tar xjvf nasm-2.15.05.tar.bz2
RUN cd nasm-2.15.05 && sh autogen.sh && \
./configure --prefix="/ffmpeg_sources/ffmpeg_build" \
--bindir="/ffmpeg_sources/bin" && \
make && make install
# Compile Yasm assembler
RUN curl -OL https://www.tortall.net/projects/yasm/releases/yasm-1.3.0.tar.gz
RUN tar xzvf yasm-1.3.0.tar.gz
RUN cd yasm-1.3.0 && \
./configure --prefix="/ffmpeg_sources/ffmpeg_build" \
--bindir="/ffmpeg_sources/bin" && \
make && make install
# Compile FFmpeg
RUN curl -OL https://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2
RUN tar xjvf ffmpeg-snapshot.tar.bz2
RUN cd ffmpeg && \
export PATH="/ffmpeg_sources/bin:$PATH" && \
export PKG_CONFIG_PATH="/ffmpeg_sources/ffmpeg_build/lib/pkgconfig" && \
./configure \
--prefix="/ffmpeg_sources/ffmpeg_build" \
--pkg-config-flags="--static" \
--extra-cflags="-I/ffmpeg_sources/ffmpeg_build/include" \
--extra-ldflags="-L/ffmpeg_sources/ffmpeg_build/lib" \
--extra-libs=-lpthread \
--extra-libs=-lm \
--enable-libxcb \
--bindir="/ffmpeg_sources/bin" && \
make && \
make install
# Final image with code and dependencies
FROM lambda-base
# Copy FFMpeg binary
COPY --from=ffmpeg /ffmpeg_sources/bin/ffmpeg /usr/bin/
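Since the image reportedly works locally, a quick sanity check before pushing to ECR is to override the entrypoint and confirm that the copied ffmpeg binary actually runs (the image tag below is hypothetical):
docker run --rm --entrypoint ffmpeg my-lambda-image:latest -version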

How to set default python3 to py3.8 in the Dockerfile?

I tried to alias python3 to python3.8 in the Dockerfile, but it doesn't work for me. I am using ubuntu:18.04.
Step 25/41 : RUN apt-get update && apt-get install -y python3.8
---> Using cache
---> 9fa81ca14a53
Step 26/41 : RUN alias python3="python3.8" && python3 --version
---> Running in d7232d3c8b8f
Python 3.6.9
As you can see the python3 is still 3.6.9. How can I fix this issue?
Thanks.
EDIT
Just attach my Dockerfile:
##################################################################################################################
# Build
#################################################################################################################
#FROM openjdk:8
FROM ubuntu:18.04
############## Linux and perl packages ###############
RUN apt-get update && \
apt-get install -y openjdk-8-jdk && \
apt-get install -y ant && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* && \
rm -rf /var/cache/oracle-jdk8-installer && \
apt-get update -y && \
apt-get install curl groff python-gdbm -y;
# Fix certificate issues, found as of
# https://bugs.launchpad.net/ubuntu/+source/ca-certificates-java/+bug/983302
RUN apt-get update && \
apt-get install -y ca-certificates-java && \
apt-get clean && \
update-ca-certificates -f && \
rm -rf /var/lib/apt/lists/* && \
rm -rf /var/cache/oracle-jdk8-installer;
# Setup JAVA_HOME, this is useful for docker commandline
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/
RUN export JAVA_HOME
# install git
RUN apt-get update && \
apt-get install -y mysql-server && \
apt-get install -y uuid-runtime git jq python python-dev python-pip python-virtualenv libdbd-mysql-perl && \
rm -rf /var/lib/apt/lists/* && \
apt-get install perl && \
perl -MCPAN -e 'CPAN::Shell->install("Inline")' && \
perl -MCPAN -e 'CPAN::Shell->install("DBI")' && \
perl -MCPAN -e 'CPAN::Shell->install("List::MoreUtils")' && \
perl -MCPAN -e 'CPAN::Shell->install("Inline::Python")' && \
perl -MCPAN -e 'CPAN::Shell->install("LWP::Simple")' && \
perl -MCPAN -e 'CPAN::Shell->install("JSON")' && \
perl -MCPAN -e 'CPAN::Shell->install("LWP::Protocol::https")';
RUN apt-get update && \
apt-get install --yes cpanminus
RUN cpanm \
CPAN::Meta \
YAML \
DBI \
Digest::SHA \
Module::Build \
Test::Most \
Test::Weaken \
Test::Memory::Cycle \
Clone
# Install perl modules for network and SSL (and their dependencies)
RUN apt-get install --yes \
openssl \
libssl-dev \
liblwp-protocol-https-perl
RUN cpanm \
LWP \
LWP::Protocol::https
# New module for v1.2 annotation
RUN perl -MCPAN -e 'CPAN::Shell->install("Text::NSP::Measures::2D::Fisher::twotailed")'
#############################################
############## python packages ###############
# python packages
RUN pip install pymysql==0.10.1 awscli boto3 pandas docopt fastnumbers tqdm pygr
############## python3 packages ###############
# python3 packages
RUN apt-get update && \
apt-get install -y python3-pip && \
python3 -m pip install numpy && \
python3 -m pip install pandas && \
python3 -m pip install sqlalchemy && \
python3 -m pip install boto3 && \
python3 -m pip install pymysql && \
python3 -m pip install pymongo;
RUN python3 -m pip install pyfaidx
#############################################
#############################################
############# expose tcp ports
EXPOSE 3306/tcp
EXPOSE 80/tcp
EXPOSE 8080
############# RUN entrypoint.sh
# commented out for testing
ENTRYPOINT ["./entrypoint.sh"]
When I install the package pyfaidx with the default Python 3.6, it raises an error. I found that Python 3.8 can install it, so I want to switch to Python 3.8 to install all the Python 3 packages.
A Bash alias that you define in a RUN statement is available only in that shell session. When the RUN statement finishes executing, the session exits, and any aliases you set up there are forgotten.
See also: How can I set Bash aliases for docker containers in Dockerfile?
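To illustrate: each RUN statement starts a fresh shell, so an alias set in one step is gone by the next (a minimal sketch):
RUN alias python3="python3.8"   # the alias only exists within this RUN's shell
RUN python3 --version           # a fresh shell: the alias is gone, so this prints Python 3.6.9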
Another option is to use update-alternatives, e.g.,
# update-alternatives --install `which python3` python3 `which python3.8` 20
update-alternatives: using /usr/bin/python3.8 to provide /usr/bin/python3 (python3) in auto mode
# python3 --version
Python 3.8.0
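In a Dockerfile that would look roughly like this (a sketch, assuming python3.8 has already been installed via apt and lives at /usr/bin/python3.8):
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 20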
This may interfere with other packages in the container that do require 3.6, which was the default on Ubuntu 18.04. Furthermore, pip's authors do not recommend using pip to install system-wide packages like that; in fact, newer pip versions emit a warning when pip is used globally the way your Dockerfile does.
Therefore, a better course of action is to use a virtualenv:
# apt install -y python3-venv python3.8-venv
...
# python3.8 -m venv /usr/local/venv
# /usr/local/venv/bin/pip install -U pip setuptools wheel
# /usr/local/venv/bin/pip install -U pyfaidx
... (etc)
You can also "enter" your virtualenv by activating it:
root@a1d0210118a8:/# source /usr/local/venv/bin/activate
(venv) root@a1d0210118a8:/# python -V
Python 3.8.0
See also: Use different Python version with virtualenv.
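Put together as a Dockerfile fragment, the virtualenv route might look roughly like this (a sketch; package names per Ubuntu 18.04, adjust the package list to whatever else your image needs):
RUN apt-get update && apt-get install -y python3.8 python3-venv python3.8-venv \
 && rm -rf /var/lib/apt/lists/*
RUN python3.8 -m venv /usr/local/venv
ENV PATH="/usr/local/venv/bin:$PATH"
RUN pip install -U pip setuptools wheel && pip install pyfaidx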

Docker build process and persistence of files to final image for use in container

I'm working on containerizing a somewhat complex application, and I'm running into some issues that are probably down to a lack of understanding of how Docker works. I've done a large amount of googling and reading but still haven't found a solution.
I'm currently using docker-compose to launch the containers, building with docker-compose up --build.
I've got a .dockerignore file to limit my build context as much as possible; otherwise build times can get very long.
my docker-compose.yml looks something like
services:
base_image:
container_name: base_image_generator
build:
context: .
My dockerfile looks something like
FROM ubuntu:20.04 as dds_install
WORKDIR /app
COPY ./rti/rti_connext_dds-6.1.0-pro-host-x64Linux.run \
./rti/rti_connext_dds-6.1.0-pro-target-x64Linux4gcc7.3.0.rtipkg ./
RUN chmod +x rti_connext_dds-6.1.0-pro-host-x64Linux.run \
&& ./rti_connext_dds-6.1.0-pro-host-x64Linux.run --mode unattended
RUN /opt/rti_connext_dds-6.1.0/bin/rtipkginstall \
-u rti_connext_dds-6.1.0-pro-target-x64Linux4gcc7.3.0.rtipkg
FROM ubuntu:20.04
ENV TZ="America/Los_Angeles"
ENV NODE_OPTIONS=--max_old_space_size=16384
ARG DEBIAN_FRONTEND=noninteractive
# Copy the data from the dds build
COPY --from=dds_install \
/opt/rti_connext_dds-6.1.0 /opt/rti_connext_dds-6.1.0
# Copy in the license file
COPY ./rti/rti_license.dat /opt/rti_connext_dds-6.0.1
# Add the NDDSHOME to our path
ENV NDDSHOME /opt/rti_connext_dds-6.1.0
ENV PATH $PATH:$NDDSHOME/bin
RUN apt-get update && apt-get install -y software-properties-common \
&& add-apt-repository ppa:deadsnakes/ppa \
&& apt-get update \
&& apt-get install -y \
python3.6 \
python3.6-venv \
python3.6-dev
RUN apt-get update \
&& apt-get -y upgrade \
&& apt-get install -y \
build-essential \
cmake \
cmake-gui \
curl \
ffmpeg \
git \
gnupg2 \
gnome-keyring \
libboost-chrono-dev \
libboost-filesystem-dev \
libboost-program-options-dev \
libboost-system-dev \
libboost-thread-dev \
libboost-timer-dev \
libcurl4-openssl-dev \
libssl-dev \
pass \
python3-pkg-resources \
python3-pip \
python3-venv \
python-is-python3 \
tzdata \
vim \
wget \
nano
RUN python3.6 -m venv /root/venv
ENV PATH="/root/venv/bin:$PATH"
# Get node 14, this is the LTS version of node and install it
RUN curl -fsSL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get update && apt-get -y install nodejs
RUN npm install -g @angular/cli
RUN pip install --upgrade pip && pip install \
cython \
jinja2 \
pip-login \
pycurl \
setuptools \
vcstool \
wheel \
xmltodict \
gdal==2.2.3
I then tag the generated image and move on to another docker-compose.yml that uses it; its Dockerfile copies source code into the build context, runs a build process, logs in to a private pip repository, and pulls down some Python packages with pip.
The issue I'm running into is that my Python venv isn't persisting. If I add a RUN pip list before the docker build finishes, the printed list is as expected. But when I issue docker-compose up to bring a container up, my app doesn't function properly because it's missing all of the Python packages I installed, even though docker exec -it <container_name> bash shows that my virtualenv is being used (verified with which pip).

How to install Python2.7.5 in Ubuntu docker image?

I have a specific requirement to install Python 2.7.5 on Ubuntu. I can install 2.7.18 without any issues.
Below is my Dockerfile:
ARG UBUNTU_VERSION=18.04
FROM ubuntu:$UBUNTU_VERSION
RUN apt-get update -y \
&& apt-get install -y python2.7.x \
&& rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["python"]
However, if I set it to python2.7.5:
ARG UBUNTU_VERSION=18.04
FROM ubuntu:$UBUNTU_VERSION
RUN apt-get update -y \
&& apt-get install -y python2.7.5 \
&& rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["python"]
it is throwing the following error
E: Couldn't find any package by regex 'python2.7.5'
I want to install Python 2.7.5 along with the relevant pip. What should I do?
This version is no longer available in Canonical's mirrors.
It was released in 2013.
As a result, getting that Python and a matching pip working together today is challenging.
Python 2.7.5 + PIP on centos7
This may be the simplest way if Ubuntu is not a hard requirement.
ARG CENTOS_VERSION=7
FROM centos:$CENTOS_VERSION
# Python 2.7.5 is installed with centos7 image
# Add repository for PIP
RUN yum install -y epel-release
# Install pip
RUN yum install -y python-pip
RUN python --version
ENTRYPOINT [ "python" ]
Python 2.7.5 on ubuntu
I was able to build and install it from source.
Installing pip, however, was not a success:
https://bootstrap.pypa.io/pip/2.7/get-pip.py
ARG UBUNTU_VERSION=18.04
FROM ubuntu:$UBUNTU_VERSION
ARG PYTHON_VERSION=2.7.5
# Install dependencies
# PIP - openssl version > 1.1 may be an issue (try older ubuntu images)
RUN apt-get update \
&& apt-get install -y wget gcc make openssl libffi-dev libgdbm-dev libsqlite3-dev libssl-dev zlib1g-dev \
&& apt-get clean
WORKDIR /tmp/
# Build Python from source
RUN wget https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tgz \
&& tar --extract -f Python-$PYTHON_VERSION.tgz \
&& cd ./Python-$PYTHON_VERSION/ \
&& ./configure --enable-optimizations --prefix=/usr/local \
&& make && make install \
&& cd ../ \
&& rm -r ./Python-$PYTHON_VERSION*
RUN python --version
ENTRYPOINT [ "python" ]
Python 2.7.6 + pip on ubuntu
Ubuntu 14.04 still has working mirrors (for how long?).
Its Python packages are really close to what you need.
You could try running your scripts with that version.
ARG UBUNTU_VERSION=14.04
FROM ubuntu:$UBUNTU_VERSION
RUN apt-get update \
&& apt-get install -y python python-pip \
&& apt-get clean
RUN python --version
ENTRYPOINT [ "python" ]
Python 2.7.5 + pip on ubuntu, not working yet but might be made to work
Here is what I tried, with no success.
ARG UBUNTU_VERSION=16.04
FROM ubuntu:$UBUNTU_VERSION
ARG PYTHON_VERSION=2.7.5
# Install dependencies
RUN apt-get update \
&& apt-get install -y wget gcc make openssl libffi-dev libgdbm-dev libsqlite3-dev libssl-dev zlib1g-dev \
&& apt-get clean
WORKDIR /tmp/
# Build python from source
RUN wget https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tgz \
&& tar --extract -f Python-$PYTHON_VERSION.tgz \
&& cd ./Python-$PYTHON_VERSION/ \
&& ./configure --enable-optimizations --prefix=/usr/local \
&& make && make install \
&& cd ../ \
&& rm -r ./Python-$PYTHON_VERSION*
# Build pip from source
RUN wget https://bootstrap.pypa.io/pip/2.7/get-pip.py \
&& python get-pip.py
RUN python --version
ENTRYPOINT [ "python" ]
Python 2.7.9 with pip - as requested in comment
You can use this Dockerfile; building Python with --with-ensurepip=install also installs pip.
ARG UBUNTU_VERSION=16.04
FROM ubuntu:$UBUNTU_VERSION
ARG PYTHON_VERSION=2.7.9
# Install dependencies
RUN apt-get update \
&& apt-get install -y wget gcc make openssl libffi-dev libgdbm-dev libsqlite3-dev libssl-dev zlib1g-dev \
&& apt-get clean
WORKDIR /tmp/
# Build Python from source
RUN wget https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tgz \
&& tar --extract -f Python-$PYTHON_VERSION.tgz \
&& cd ./Python-$PYTHON_VERSION/ \
&& ./configure --with-ensurepip=install --enable-optimizations --prefix=/usr/local \
&& make && make install \
&& cd ../ \
&& rm -r ./Python-$PYTHON_VERSION*
RUN python --version \
&& pip --version
ENTRYPOINT [ "python" ]
The simplest possible solution:
sudo apt-get install libssl-dev openssl
wget https://www.python.org/ftp/python/2.7.5/Python-2.7.5.tgz
tar xzvf Python-2.7.5.tgz
cd Python-2.7.5
./configure
make
sudo make install
After the installation completes, set the installed Python as the default one.
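With the default prefix, make install puts the interpreter under /usr/local/bin; assuming that layout, one way to make it the default python is update-alternatives:
sudo update-alternatives --install /usr/bin/python python /usr/local/bin/python2.7 10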

Couldn't find any package by regex in python:3.8.3 docker image

I'm new to Docker. I created a Docker image, and this is what my Dockerfile looks like.
FROM python:3.8.3
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
postgresql-client \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get install -y gcc libtool-ltdl-devel xmlsec1-1.2.20 xmlsec1-devel-1.2.20 xmlsec1 openssl-1.2.20 xmlsec1-openssl-devel-1.2.20 \
&& apt-get -y install curl gnupg \
&& curl -sL https://deb.nodesource.com/setup_14.x | bash - \
&& apt-get -y install nodejs
WORKDIR /app/
COPY . /app
RUN pip install -r production_requirements.txt \
&& front_end/noa-frontend/npm install
This image is used in docker-compose.yml's app service. When I run docker-compose build, I get the error below saying it couldn't find the package. Those are a few dependencies I want to install in order to install a Python package.
At the beginning, I run apt-get update to refresh the package lists.
Can anyone please help me with this issue?
Updated Dockerfile
FROM python:3.8.3
RUN apt-get update
RUN apt-get install -y postgresql-client \
&& apt-get install -y gcc libtool-ltdl-devel xmlsec1-1.2.20 xmlsec1-devel-1.2.20 xmlsec1 openssl-1.2.20 xmlsec1-openssl-devel-1.2.20 \
&& apt-get -y install curl gnupg \
&& curl -sL https://deb.nodesource.com/setup_14.x | bash - \
&& apt-get -y install nodejs
WORKDIR /app/
COPY . /app
RUN pip install -r production_requirements.txt \
&& front_end/noa-frontend/npm install
You are trying to use apt-get install after doing rm -rf /var/lib/apt/lists/*. That is guaranteed not to end well. Try removing the rm command first to see if that helps. If you really need to keep the image size down, put the rm command as the very last command in the RUN statement.
If you really want to reduce your image size, try switching to python:3.8-slim or python:3.8-alpine. Alpine is a different OS from the Debian base of the default python image, but its package manager can be told not to cache files locally, e.g.:
FROM python:3.8-alpine
RUN apk add --no-cache postgresql-client
RUN apk add --no-cache gcc libtool-ltdl-devel xmlsec1-1.2.20 xmlsec1-devel-1.2.20 xmlsec1 \
openssl-1.2.20 xmlsec1-openssl-devel-1.2.20
RUN apk add --no-cache curl gnupg
RUN apk add --no-cache nodejs
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
WORKDIR /app/
COPY . /app
RUN pip install -r production_requirements.txt \
&& front_end/noa-frontend/npm install
Certain bits of software might be available under different package names, so you'll have to check that out.
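One way to check what Alpine calls a given package is to search from inside a throwaway container (a sketch; the index has to be fetched first):
docker run --rm python:3.8-alpine sh -c "apk update && apk search xmlsec"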
The instruction rm -rf /var/lib/apt/lists/* more or less negates apt-get update: APT can no longer access the list of available packages after it runs. Move the rm to the end of the RUN statement (and perhaps consider the safer apt-get clean).
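A rough sketch of that ordering for the Dockerfile above (the xmlsec/openssl entries are omitted here because, as written, they are RPM-style names that apt will not find anyway):
RUN apt-get update \
 && apt-get install -y --no-install-recommends postgresql-client gcc curl gnupg \
 && curl -sL https://deb.nodesource.com/setup_14.x | bash - \
 && apt-get install -y --no-install-recommends nodejs \
 && rm -rf /var/lib/apt/lists/*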
