I created a slim Dockerfile for my app:
FROM python:3.7-slim-stretch AS build
RUN python3 -m venv /venv
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y git && \
    apt-get install -y build-essential && \
    rm -rf /var/cache/apt/* /var/lib/apt/lists/*
ADD ./requirements.txt /project/
RUN /venv/bin/pip install -r /project/requirements.txt
ADD . /project
RUN /venv/bin/pip install /project
WORKDIR /project
FROM python:3.7-slim-stretch AS production
COPY --from=build /venv /venv
CMD ["/venv/bin/python3","-m", "myapp"]
The image builds and runs. The running Python executable is the one copied from the build stage (verified: if I remove "/venv/bin" from the CMD, it won't run).
However, to save some space I want to change my production base image to:
FROM debian:stretch-slim
But then I'm getting an error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/venv/bin/python3\": stat /venv/bin/python3: no such file or directory": unknown.
Now, I don't understand this error. I can see the python executable is there, so why won't it run? What is in the base python image that allows it to run?
Go into the venv in your container and run ls -l on the bin directory:
lrwxrwxrwx 1 root root 21 Dec 4 17:28 python -> /usr/local/bin/python
Yes, python is there, but it is a symlink to a file that does not exist.
You can get around this first problem by using RUN python3 -m venv --copies /venv in your Dockerfile.
But you will then hit the following error message:
error while loading shared libraries: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory
So you will ultimately need to install the exact same version of Python in your production image as the one that was available at build time.
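For illustration, here is a sketch of what that can look like with a plain Debian base. It is an assumption-laden example, not from the original post: debian:stretch-slim ships Python 3.5, not 3.7, so the venv must be built against the distribution's own interpreter. Otherwise the simplest fix is to keep python:3.7-slim-stretch as the production base, as the original Dockerfile does.
# Sketch: build and run the venv against the same distribution Python (3.5 on stretch).
# Assumes the app is compatible with Python 3.5, which is not from the question.
FROM debian:stretch-slim AS build
RUN apt-get update && \
    apt-get install -y python3-venv build-essential git && \
    rm -rf /var/lib/apt/lists/*
RUN python3 -m venv /venv
COPY requirements.txt /project/
RUN /venv/bin/pip install -r /project/requirements.txt

FROM debian:stretch-slim AS production
# The runtime interpreter and libpython must exist here for the venv to work
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 && \
    rm -rf /var/lib/apt/lists/*
COPY --from=build /venv /venv
CMD ["/venv/bin/python3", "-m", "myapp"]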
Related
I am trying to build this Docker image with Docker Compose:
FROM python:3.7-slim
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
RUN apt-get update && apt-get install -y \
    build-essential \
    make \
    gcc \
    python3-dev \
    mongodb
# Create working directory and copy all files
COPY . /app
WORKDIR /app
# Pip install requirements
RUN pip install --user -r requirements.txt
# Port to expose
EXPOSE 8000
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "main.py", "runserver"]
but I get this error:
Package 'mongodb' has no installation candidate
When I build the exact same Dockerfile with python:3.4-slim it works. Why?
That's because python:3.4-slim uses Debian stretch (9) as its base, and the mongodb package is available in stretch's repos. For python:3.7-slim, however, the base is Debian bullseye (11), and mongodb is no longer in its repos.
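You can check which Debian release an image is based on with a one-liner (a quick verification, not part of the original answer):
docker run --rm python:3.4-slim cat /etc/os-release
docker run --rm python:3.7-slim cat /etc/os-release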
I'd recommend not installing mongodb in the image you're building above, but rather using a separate mongodb container.
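A minimal sketch of that setup with Docker Compose (the service names and the MONGO_URI variable are illustrative; your app would need to read its connection string from somewhere like this):
services:
  app:
    build: .
    ports:
      - "8000:8000"
    environment:
      MONGO_URI: mongodb://mongo:27017/mydb   # hypothetical variable your app reads
    depends_on:
      - mongo
  mongo:
    image: mongo:4.4
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data:
You would then also drop mongodb from the apt-get install list in the Dockerfile.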
I am building a Docker-based Flask API that needs to connect to a remote Oracle database. I can get it to work on my machine outside of Docker, but when I containerize it I get an error. I have tried every article I can find on Stack Overflow and I still get:
load by OS failure: libclntsh.so: cannot open shared object file: No such file or directory
I have tried 3 different ways:
FROM python:3.9-buster
ENV DPI_DEBUG_LEVEL=64
# Installing Oracle instant client
# INSTALL TOOLS
RUN apt-get update \
    && apt-get -y install unzip \
    && apt-get -y install libaio1 libaio-dev \
    && mkdir -p /opt/data/api
ADD ./oracle-instantclient/ /opt/data
ADD ./install-instantclient.sh /opt/data
ADD ./requirements.txt /opt/data
WORKDIR /opt/data
ENV ORACLE_HOME=/opt/oracle/instantclient
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME
ENV OCI_HOME=/opt/oracle/instantclient
ENV OCI_LIB_DIR=/opt/oracle/instantclient
ENV OCI_INCLUDE_DIR=/opt/oracle/instantclient/sdk/include
RUN ./install-instantclient.sh
# Python set up
# set working directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# add and install requirements
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# add app
COPY . .
# add entrypoint.sh
COPY ./entrypoint.sh .
RUN chmod +x /usr/src/app/entrypoint.sh
This was the first way I tried for instant client:
# Install system dependencies and clean up rpms afterwards
RUN apt-get update \
    && apt-get -y install alien unzip libaio1 \
    && apt-get clean
# ZIP Install
ENV DPI_DEBUG_LEVEL=64
ENV INSTANT_CLIENT_FILE=instantclient-basic-linux.x64-19.13.0.0.0dbru.zip
RUN mkdir -p /opt/oracle
ADD ./resources/${INSTANT_CLIENT_FILE} /opt/oracle
RUN apt-get -y install unzip
RUN unzip /opt/oracle/${INSTANT_CLIENT_FILE} -d /opt/oracle
RUN ln -s /opt/oracle/instantclient_19_13/libclntsh.so.19.13 /usr/lib/libclntsh.so
RUN rm -rf /opt/oracle/${INSTANT_CLIENT_FILE}
# This needs to be set to the path that was created when the unzip occurred
# I figured out what the directory name after /opt/oracle was going to be by
# unzipping the file on my computer
ENV ORACLE_HOME=/opt/oracle/instantclient_19_13
ENV LD_LIBRARY_PATH=${ORACLE_HOME}
ENV ORACLE_BASE=${ORACLE_HOME}
ENV PATH="${ORACLE_HOME}:${PATH}"
RUN sh -c "echo ${ORACLE_HOME} > /etc/ld.so.conf.d/oracle-instantclient.conf"
RUN ldconfig
Then I tried an RPM install:
# RPM Install
ENV INSTANT_CLIENT_FILE_NAME=oracle-instantclient-basic-21.4.0.0.0-1.el8.x86_64
RUN mkdir /resources
COPY ./resources/${INSTANT_CLIENT_FILE_NAME}.rpm /resources
RUN alien -ct --scripts /resources/${INSTANT_CLIENT_FILE_NAME}.rpm
#RUN alien --scripts --to-deb /resources/${INSTANT_CLIENT_FILE_NAME}.tgz
RUN apt-get -y install ./resources/${INSTANT_CLIENT_FILE_NAME}.deb
RUN rm -rf ./resources/${INSTANT_CLIENT_FILE_NAME}.rpm
RUN rm -rf ./resources/${INSTANT_CLIENT_FILE_NAME}.deb
Each time I get the error, even though I have either set LD_LIBRARY_PATH directly or run:
RUN sh -c "echo ${ORACLE_HOME} > /etc/ld.so.conf.d/oracle-instantclient.conf"
RUN ldconfig
If I run ldconfig -p in the container, I can see my entries, and if I look at the environment variables in the container, everything is set as expected. But I still get the error about the files not being found. Any other suggestions would be greatly appreciated.
So I have been working on this for two days, and it turned out a part of my setup that I was overlooking was causing the issue. I'm on an M1 Mac Mini, and that is why nothing I tried worked: I was missing an important part in my Dockerfile. I needed to add --platform=linux/amd64. I didn't know this because I had switched to the Mac Mini only two days earlier and this wasn't something I needed to do before. Hopefully someone who runs into the same issue will find this and it will help them.
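For example, the first line of the Dockerfile above becomes (only the flag is new; the image tag is from the question):
FROM --platform=linux/amd64 python:3.9-buster
The same thing can be done at build time with docker build --platform linux/amd64 . instead of hard-coding it in the Dockerfile.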
Too long for a comment, and you have a few scenarios, so here are some thoughts.
Use Oracle's container which already has cx_Oracle? Look for the *-oracledb container on https://github.com/oracle/docker-images/pkgs/container/oraclelinux7-python
Never set ORACLE_HOME with Instant Client.
What are those OCI_HOME, OCI_LIB_DIR and OCI_INCLUDE_DIR variables for? They are not used by cx_Oracle install or runtime.
With RPMs on Ubuntu I do:
alien -i --scripts oracle-instantclient19.13-basic-19.13.0.0.0-1.x86_64.rpm
alien -i --scripts oracle-instantclient19.13-sqlplus-19.13.0.0.0-1.x86_64.rpm
apt-get install libaio1
Then I don't need to create symlinks or run ldconfig; it should 'just work'.
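As a sketch of those steps in Dockerfile form (the RPM filenames are the ones above; the base image and COPY path are assumptions):
FROM python:3.9-buster
RUN apt-get update && apt-get -y install alien libaio1
# The RPM is assumed to be in the build context
COPY oracle-instantclient19.13-basic-19.13.0.0.0-1.x86_64.rpm /tmp/
RUN alien -i --scripts /tmp/oracle-instantclient19.13-basic-19.13.0.0.0-1.x86_64.rpm && \
    rm -f /tmp/oracle-instantclient19.13-basic-19.13.0.0.0-1.x86_64.rpm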
Perhaps check my blog post series Docker for Oracle Database Applications in Node.js and Python, which has some Dockerfile examples for Python cx_Oracle.
I have a Flask API that connects to an Azure SQL database, deployed on Azure App Service in a Docker image.
It works fine, but I am trying to keep consistency between my development, staging, and production environments by using Alembic/Flask-Migrate to apply database upgrades.
I saw in Miguel Grinberg's Docker Deployment Tutorial that this can be achieved by adding the flask db upgrade command to a boot.sh script, like so:
#!/bin/sh
flask db upgrade
exec gunicorn -w 4 -b :5000 --access-logfile - --error-logfile - app:app
My problem is that, when running the boot.sh script, I receive the error:
Usage: flask db [OPTIONS] COMMAND [ARGS]...
Try 'flask db --help' for help.
'.ror: No such command 'upgrade
This indicates the script cannot find the Flask-Migrate library. The same thing happens with other site-packages, for example when just trying to run flask commands.
The weird thing is:
gunicorn works just fine
The API works just fine
I can run flask db upgrade with no problem if I fire up the container and open a terminal session with docker exec -i -t api /bin/sh
Obviously there's a problem with my Dockerfile. I would massively appreciate any help here; I'm relatively new to Docker and Linux, so I'm sure I'm missing something obvious.
EDIT: It also works just fine if I add the following line to my Dockerfile, just before the ENTRYPOINT:
RUN flask db upgrade
Dockerfile
FROM python:3.8-alpine
# Dependencies for pyodbc on Linux
RUN apk update
RUN apk add curl sudo build-base unixodbc-dev unixodbc freetds-dev
RUN apk add gcc musl-dev libffi-dev openssl-dev
RUN apk add --no-cache tzdata
RUN rm -rf /var/cache/apk/*
RUN curl -O https://download.microsoft.com/download/e/4/e/e4e67866-dffd-428c-aac7-8d28ddafb39b/msodbcsql17_17.5.2.2-1_amd64.apk
RUN sudo apk add --allow-untrusted msodbcsql17_17.5.2.2-1_amd64.apk
RUN mkdir /code
WORKDIR /code
COPY requirements.txt requirements.txt
RUN python -m pip install --default-timeout=100 -r requirements.txt
RUN python -m pip install gunicorn
ADD . /code/
COPY boot.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/boot.sh
EXPOSE 5000
ENTRYPOINT ["sh", "boot.sh"]
I ended up making some major changes to my Dockerfile and boot.sh script. I'll share these as best I can below:
Problem 1: Entrypoint script cannot access directories
My main issue was an inconsistent folder structure in my project directory. There were two boot.sh scripts, and the one being run on entrypoint either had the wrong permissions or was in the wrong place to find my site-packages.
I simplified the copying of files from my local machine to the Docker image like so:
RUN mkdir /code
WORKDIR /code
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install --default-timeout=100 -r requirements.txt
RUN venv/bin/pip install gunicorn
COPY app app
COPY migrations migrations
COPY api.py config.py boot.sh ./
RUN chmod u+x boot.sh
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
The changes involved:
Setting up a virtualenv and installing all site packages in there
Making sure the config.py, boot.sh, and api.py files were in the root directory of the application folder (./)
Changing the entrypoint command from ["bin/sh", "boot.sh"] to just ["./boot.sh"]
Moving migrations files into the relevant folder for the upgrade script
I was then able to activate the virtual environment in the entrypoint file and run the flask upgrade commands. (NB: I had a problem with line endings being CRLF instead of LF in boot.sh, so make sure to change them if you're on Windows; see the note after the script.)
#!/bin/bash
source venv/bin/activate
flask db upgrade
exec gunicorn -w 4 -b :5000 --access-logfile - --error-logfile - api:app
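On the CRLF point: one way to normalize the script before building (dos2unix if you have it, or sed with GNU sed's -i):
dos2unix boot.sh
# or, without dos2unix:
sed -i 's/\r$//' boot.sh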
Problem 2: Alpine Linux Too Slow
My other issue was that my image was taking forever to build (upwards of 45 minutes) on Alpine Linux. It turns out this is a pretty well-established issue when using some of the libraries in my API (Pandas, NumPy), since they have no prebuilt wheels for Alpine's musl libc and must be compiled from source.
I switched to a Debian build so that I could make changes to my Docker image more quickly.
Including the installation of pyodbc to connect to Azure SQL Server, the first half of my Dockerfile now looks like:
FROM python:3.8-slim-buster
RUN apt-get update
RUN apt-get install -y apt-utils curl sudo gcc g++ gnupg2
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get install -y libffi-dev libgssapi-krb5-2 unixodbc-dev unixodbc freetds-dev
RUN sudo apt-get update
RUN sudo ACCEPT_EULA=Y apt-get install msodbcsql17
RUN apt-get clean -y
The curl commands and everything below them come from the official MS docs on installing pyodbc on Debian.
Full Dockerfile:
FROM python:3.8-slim-buster
RUN apt-get update
RUN apt-get install -y apt-utils curl sudo gcc g++ gnupg2
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get install -y libffi-dev libgssapi-krb5-2 unixodbc-dev unixodbc freetds-dev
RUN sudo apt-get update
RUN sudo ACCEPT_EULA=Y apt-get install msodbcsql17
RUN apt-get clean -y
RUN mkdir /code
WORKDIR /code
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install --default-timeout=100 -r requirements.txt
RUN venv/bin/pip install gunicorn
COPY app app
COPY migrations migrations
COPY api.py config.py boot.sh ./
RUN chmod u+x boot.sh
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
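With that Dockerfile, a typical local build-and-run cycle looks like this (the image name is arbitrary):
docker build -t flask-api .
docker run -p 5000:5000 flask-api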
I think this is the key information.
This indicates the script cannot find the Flask-Migrate library. The same thing happens with other site-packages, for example when just trying to run flask commands.
To me this indicates that the problem is not specific to Flask-Migrate but applies to all packages, as you write. This may mean one of the following two things.
First, it could mean that the packages are not correctly installed. However, this is unlikely, as you write that everything works when you manually open a session in the container.
Second, something may be wrong with how you execute your boot.sh script. For example, try changing
ENTRYPOINT ["sh", "boot.sh"]
to
ENTRYPOINT ["/bin/sh", "boot.sh"]
HTH!
I'm deploying a Node.js web application on a custom runtime with the flex environment. I'm using child_process in Node.js to spawn python3, like so:
const spawn = require("child_process").spawn;
const pythonProcess = spawn('python3');
This runs fine locally, but when deployed to GAE it gives me this error:
Error: spawn python3 ENOENT
at Process.ChildProcess._handle.onexit (child_process.js:240)
at onErrorNT (internal/child_process.js:415)
at process._tickCallback (next_tick.js:63)
However, when I run python2, it works fine.
After doing some research and digging, I came across this question on Stack Overflow:
How to install Python3 in Google Cloud Platform for a Node app
It seems I have to build a custom runtime from a Dockerfile to allow multiple runtimes (or something like that).
I've tried countless things in the Dockerfile, such as:
# Trying to install python3
FROM ubuntu as stage0
WORKDIR /python/
RUN apt-get update || : && apt-get install --yes python3;
RUN apt-get install python3-pip -y
# My main node.js docker stuff
FROM gcr.io/google_appengine/nodejs
COPY . /app/
... etc
and
# From google app engine python runtime docker repo
FROM gcr.io/google-appengine/python
RUN virtualenv /env
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
ADD . /app
# My main node.js docker stuff
FROM gcr.io/google_appengine/nodejs
COPY . /app/
... etc
none of which worked.
What is the correct approach, and how can I do it?
Thank you.
Google's image is based on Ubuntu but only has Python 2 (2.7). This answer showed how to use Python 3.6, but here we're going to install 3.5 via software-properties-common. Putting things together, you get:
FROM launcher.gcr.io/google/nodejs
# same as
# FROM gcr.io/google-appengine/nodejs
RUN apt-get update && apt-get install software-properties-common -y
# RUN unlink /usr/bin/python
# RUN ln -sv /usr/bin/python3.5 /usr/bin/python
# RUN python -V
RUN python3 -V
# Copy application code.
COPY . /app/
# Install dependencies.
RUN npm --unsafe-perm install
If you're just going to call python3 from your spawn, you don't need the unlink steps (the commented lines); I included them so that you could call plain python instead.
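If the base image's python3 turns out to be missing or too old for your needs, an explicit install is a reasonable fallback (a sketch, assuming standard Ubuntu package names):
FROM gcr.io/google-appengine/nodejs
# Install Python 3 from the distribution's repositories
RUN apt-get update && apt-get install -y python3 python3-pip
RUN python3 -V
COPY . /app/
RUN npm --unsafe-perm install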
I have a working service running on a python:3.6-jessie image.
I am trying to reduce the size of it to speed up serverless cold starts.
I have tried the images python:3.6-alpine, python:3.6-slim-buster and python:3.6-slim-jessie.
With all of them I end up having to install many additional packages, and I still hit the following error, which I cannot fix with more packages:
ImportError: libmysqlclient.so.18: cannot open shared object file: No such file or directory
My current Dockerfile is
FROM python:3.6-jessie as build
ENV PYTHONUNBUFFERED 0
ENV FLASK_APP "api/app.py"
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
WORKDIR /opt/venv
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
FROM python:3.6-slim-jessie
COPY --from=build /opt/venv /opt/venv
WORKDIR /opt/venv
RUN apt-get update
RUN apt-get --assume-yes install gcc
RUN apt-get --assume-yes install python-mysqldb
ENV PATH="/opt/venv/bin:$PATH"
RUN rm -rf configs tests draw_results env .idea .git .pytest_cache
EXPOSE 8000
CMD ["/opt/venv/run.sh"]
The relevant lines from requirements.txt:
mysqlclient==1.4.2.post1
PyMySQL==0.9.3
Flask-SQLAlchemy==2.3.2
SQLAlchemy==1.3.0
The run.sh is just my gunicorn start command.
Is there any package I can use to fix this last issue? Is there some other MySQL library I should be using, or some other way to fix this? Or should I just stick to the full python:3.6 image when I want a MySQL client?
I'm using python:3.7-slim with the following command:
RUN apt-get -y install default-libmysqlclient-dev
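Note that on slim images the apt package lists start out empty, so an update is needed first; combined:
RUN apt-get update && apt-get -y install default-libmysqlclient-dev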
Try adding this line to the Dockerfile:
RUN apt-get install -y libmysqlclient-dev
For python slim-buster (Debian-based) you can run this command in the Dockerfile:
RUN apt-get update && apt-get install -y default-mysql-client
This worked for me; I used python:3.10.6-slim-buster.
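Putting the suggestions together, a minimal sketch for building mysqlclient on a slim Debian base (the version pin is from the question's requirements; the package set is an assumption based on the answers above):
FROM python:3.6-slim-buster
# On Debian, the libmysqlclient headers and runtime come from the default-* compat packages;
# gcc and libc headers are needed to compile the mysqlclient C extension
RUN apt-get update && apt-get install -y --no-install-recommends \
        default-libmysqlclient-dev gcc libc6-dev \
    && rm -rf /var/lib/apt/lists/*
RUN pip install mysqlclient==1.4.2.post1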