I have a working service running on a python:3.6-jessie image.
I am trying to reduce the size of it to speed up serverless cold starts.
I have tried the images python:3.6-alpine, python:3.6-slim-buster and python:3.6-slim-jessie.
With all of them I end up having to install many additional packages, and I still end up with the following error that I cannot fix with more packages:
ImportError: libmysqlclient.so.18: cannot open shared object file: No such file or directory
My current Dockerfile is:
FROM python:3.6-jessie as build
ENV PYTHONUNBUFFERED 0
ENV FLASK_APP "api/app.py"
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
WORKDIR /opt/venv
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
FROM python:3.6-slim-jessie
COPY --from=build /opt/venv /opt/venv
WORKDIR /opt/venv
RUN apt-get update
RUN apt-get --assume-yes install gcc
RUN apt-get --assume-yes install python-mysqldb
ENV PATH="/opt/venv/bin:$PATH"
RUN rm -rf configs tests draw_results env .idea .git .pytest_cache
EXPOSE 8000
CMD ["/opt/venv/run.sh"]
The relevant lines from requirements.txt:
mysqlclient==1.4.2.post1
PyMySQL==0.9.3
Flask-SQLAlchemy==2.3.2
SQLAlchemy==1.3.0
The run.sh is just my gunicorn start command.
Is there any package I can use to fix this last issue? Is there some other MySQL library I should be using, or some other way for me to fix this? Or should I just stick to the full python:3.6 images when I need a MySQL client?
I'm using python:3.7-slim with the following command:
RUN apt-get -y install default-libmysqlclient-dev
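For context, a minimal sketch of how that fits into a slim-based Dockerfile (the gcc package and the cleanup step are my additions; mysqlclient needs a compiler to build from source):

FROM python:3.7-slim
# default-libmysqlclient-dev provides the headers and shared library that
# the mysqlclient package links against; update and install in one layer
# and remove the apt lists to keep the image small
RUN apt-get update \
    && apt-get -y install default-libmysqlclient-dev gcc \
    && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install -r requirements.txt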
Try adding this line to the Dockerfile:
RUN apt-get install -y libmysqlclient-dev
For python slim-buster (Debian OS) you can run this command in your Dockerfile:
RUN apt-get update && apt-get install -y default-mysql-client
This worked for me.
I used python:3.10.6-slim-buster.
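Note that default-mysql-client installs the command-line client tools, whereas compiling the mysqlclient Python package needs the development headers. A hedged one-layer variant that covers both (package names as in the Debian repositories):

RUN apt-get update \
    && apt-get install -y default-mysql-client default-libmysqlclient-dev \
    && rm -rf /var/lib/apt/lists/*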
Related
When building a Dockerfile, I get the error
"/bin/sh: 1: apt-get: not found"
Dockerfile:
FROM python:3.8
FROM ubuntu:20.04
ENV PATH="/env/bin/activate"
RUN apt-get update -y && apt-get upgrade -y
WORKDIR /var/www/html/
COPY . .
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["python", "manage.py"]
You are setting PATH to /env/bin/activate, and that is then the only place where apt-get is searched for. There is no need to activate a virtual environment inside the container; just get rid of that line. pip can install the packages in requirements.txt to the "system" Python without issues.
You cannot layer 2 images like you are attempting to do, with multiple FROM statements. Just use FROM python:3.8 and drop the ubuntu. Multiple FROM statements are used in multi-stage builds where you have intermediate images which produce artifacts that are copied to the final image.
So just do:
FROM python:3.8
RUN apt-get update -y && apt-get upgrade -y
WORKDIR /var/www/html/
COPY . .
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["python", "manage.py"]
...although why you would put Python code in /var/www/html beats me. Probably you don't need to.
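For illustration, a minimal multi-stage sketch of the pattern mentioned above (the stage name and the /install path are made up for the example):

# build stage: produces the installed packages as the artifact
FROM python:3.8 AS build
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# final stage: copy only the artifact, leaving compilers behind
FROM python:3.8-slim
COPY --from=build /install /usr/local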
I have a Flask API that connects to an Azure SQL database, deployed on Azure App Service in a Docker Image.
It works fine but I am trying to keep consistency between my development, staging and production environments using Alembic/Flask-Migrate to apply database upgrades.
I saw on Miguel Grinberg's Docker Deployment Tutorial, that this can be achieved by adding the flask db upgrade command to a boot.sh script, like so:
#!/bin/sh
flask db upgrade
exec gunicorn -w 4 -b :5000 --access-logfile - --error-logfile - app:app
My problem is that, when running the boot.sh script, I receive the error:
Usage: flask db [OPTIONS] COMMAND [ARGS]...
Try 'flask db --help' for help.
Error: No such command 'upgrade'.
Which indicates the script cannot find the Flask-Migrate library. This actually happens if I try other site-packages, such as just trying to run flask commands.
The weird thing is:
gunicorn works just fine
The API works just fine
I can run flask db upgrade with no problem if I fire up the container and open a terminal session with docker exec -i -t api /bin/sh
Obviously, there's a problem with my Dockerfile. I would massively appreciate any help here as I'm relatively new to Docker and Linux so I'm sure I'm missing something obvious:
EDIT: It also works just fine if I add the following line to my Dockerfile, just before the entrypoint CMD:
RUN flask db upgrade
Dockerfile
FROM python:3.8-alpine
# Dependencies for pyodbc on Linux
RUN apk update
RUN apk add curl sudo build-base unixodbc-dev unixodbc freetds-dev
RUN apk add gcc musl-dev libffi-dev openssl-dev
RUN apk add --no-cache tzdata
RUN rm -rf /var/cache/apk/*
RUN curl -O https://download.microsoft.com/download/e/4/e/e4e67866-dffd-428c-aac7-8d28ddafb39b/msodbcsql17_17.5.2.2-1_amd64.apk
RUN sudo apk add --allow-untrusted msodbcsql17_17.5.2.2-1_amd64.apk
RUN mkdir /code
WORKDIR /code
COPY requirements.txt requirements.txt
RUN python -m pip install --default-timeout=100 -r requirements.txt
RUN python -m pip install gunicorn
ADD . /code/
COPY boot.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/boot.sh
EXPOSE 5000
ENTRYPOINT ["sh", "boot.sh"]
I ended up making some major changes to my Dockerfile and boot.sh script. I'll share these as best I can below:
Problem 1: Entrypoint script cannot access directories
My main issue was that I had an inconsistent folder structure in my directory. There were 2 boot.sh scripts and the one being run on entrypoint either had the wrong permissions or was in the wrong place to find my site packages.
I simplified the copying of files from my local machine to the Docker image like so:
RUN mkdir /code
WORKDIR /code
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install --default-timeout=100 -r requirements.txt
RUN venv/bin/pip install gunicorn
COPY app app
COPY migrations migrations
COPY api.py config.py boot.sh ./
RUN chmod u+x boot.sh
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
The changes involved:
Setting up a virtualenv and installing all site packages in there
Making sure the config.py, boot.sh, and api.py files were in the root directory of the application folder (./)
Changing the entrypoint command from ["bin/sh", "boot.sh"] to just ["./boot.sh"]
Moving migrations files into the relevant folder for the upgrade script
I was then able to activate the virtual environment in the entrypoint file and run the flask upgrade commands (NB: I had a problem with line endings being CRLF instead of LF in boot.sh, so make sure to change those if you are on Windows; see the note after the script):
#!/bin/bash
source venv/bin/activate
flask db upgrade
exec gunicorn -w 4 -b :5000 --access-logfile - --error-logfile - api:app
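On the line-endings note above: one way to guard against CRLF endings sneaking in from Windows is to normalize the script during the build, e.g. with a sed step after copying boot.sh (a sketch, assuming boot.sh sits in the image's working directory):

# strip carriage returns so /bin/bash does not choke on CRLF line endings
RUN sed -i 's/\r$//' boot.sh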
Problem 2: Alpine Linux Too Slow
My other issue was that my image was taking forever to build (upwards of 45 mins) on Alpine Linux. Turns out this is a pretty well-established issue when using some of the libraries in my API (Pandas, Numpy).
I switched to a Debian build so that I could make changes to my Docker image more quickly.
Including the installation of pyodbc to connect to Azure SQL Server, the first half of my Dockerfile now looks like:
FROM python:3.8-slim-buster
RUN apt-get update
RUN apt-get install -y apt-utils curl sudo gcc g++ gnupg2
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get install -y libffi-dev libgssapi-krb5-2 unixodbc-dev unixodbc freetds-dev
RUN sudo apt-get update
RUN sudo ACCEPT_EULA=Y apt-get install -y msodbcsql17
RUN apt-get clean -y
The curl commands and everything below them come from the official MS docs on installing pyodbc on Debian.
Full Dockerfile:
FROM python:3.8-slim-buster
RUN apt-get update
RUN apt-get install -y apt-utils curl sudo gcc g++ gnupg2
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get install -y libffi-dev libgssapi-krb5-2 unixodbc-dev unixodbc freetds-dev
RUN sudo apt-get update
RUN sudo ACCEPT_EULA=Y apt-get install -y msodbcsql17
RUN apt-get clean -y
RUN mkdir /code
WORKDIR /code
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install --default-timeout=100 -r requirements.txt
RUN venv/bin/pip install gunicorn
COPY app app
COPY migrations migrations
COPY api.py config.py boot.sh ./
RUN chmod u+x boot.sh
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
I think this is the key information.
Which indicates the script cannot find the Flask-Migrate library. This actually happens if I try other site-packages, such as just trying to run flask commands.
To me this may indicate that the problem is not specific to Flask-Migrate but applies to all packages, as you write. This may mean one of the following two things.
First, it can mean that the packages are not correctly installed. However, this is unlikely as you write that it works when you manually start the container.
Second, something is wrong with how you execute your boot.sh script. For example, try changing
ENTRYPOINT ["sh", "boot.sh"]
to
ENTRYPOINT ["/bin/sh", "boot.sh"]
HTH!
My Dockerfile is:
FROM ubuntu:18.04
RUN apt-get -y update
RUN apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get update -y
RUN apt-get install -y python3.7 build-essential python3-pip
ENV LC_ALL C.UTF-8
ENV LANG C.UTF-8
RUN pip3 install pipenv
COPY . /app
WORKDIR /app
RUN pipenv install
EXPOSE 5000
CMD ["pipenv", "run", "python3", "application.py"]
When I do docker build -t flask-sample:latest ., it builds fine (I think).
I run it with docker run -d -p 5000:5000 flask-sample and it looks okay
But when I go to http://localhost:5000, nothing loads. What am I doing wrong?
Why do you need a virtual environment? And why use Ubuntu as the base layer?
A simpler approach would be:
Dockerfile:
FROM python:3
WORKDIR /usr/src/
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
ENTRYPOINT FLASK_APP=/usr/src/app.py flask run --host=0.0.0.0
Put the desired packages (e.g. flask) in your requirements.txt.
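For example, a minimal requirements.txt for this setup (the version pin is purely illustrative):

Flask==1.1.2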
Build image:
docker build -t dejdej/flasky:latest .
Start container:
docker run -it -p 5000:5000 dejdej/flasky
If it is mandatory to use a virtual environment, you can try it with virtualenv:
FROM python:2.7
# virtualenv is not preinstalled in the official python images
RUN pip install virtualenv
RUN virtualenv /YOURENV
RUN /YOURENV/bin/pip install flask
COPY application.py .
CMD ["/YOURENV/bin/python", "application.py"]
Short answer:
Your container is running pipenv, not your application. You need to fix the last line.
CMD ["pipenv", "run", "python3", "application.py"] should be only CMD ["python3", "application.py"]
Right answer:
I completely agree that there isn't any reason to use pipenv here. A better solution is to replace your Dockerfile with one based on a Python image and forget pipenv. You are already in a container; there is no reason to add a virtual environment on top.
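A minimal sketch of that approach, assuming the dependencies are first exported to a requirements.txt (e.g. with pipenv lock -r > requirements.txt):

FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python3", "application.py"]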
I made the image from ubuntu:18.04 and installed Python.
However, when I did this in docker-compose:
command: python manage.py runserver
it shows a path error.
Maybe I didn't set the path? But how do I set the path for the Docker user?
ERROR: for django Cannot start service django: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"python\": executable file not found in $PATH": unknown
ERROR: Encountered errors while bringing up the project.
FROM ubuntu:18.04
ENV PYTHONUNBUFFERED 1
RUN apt-get -y update
RUN apt-get -y install emacs wget
RUN apt-get -y install apache2-dev mysql-client
RUN apt-get -y install mysql-server libmysqlclient-dev
RUN apt-get install -y software-properties-common
RUN add-apt-repository -y ppa:deadsnakes/ppa
RUN apt-get install -y python3.7
RUN apt-get install -y python-pip
RUN pip install uwsgi django mysqlclient tensorflow_hub django-mysql django-extensions djangorestframework django-filter requests_oauthlib mecab-python3 neologdn gensim janome --no-input
RUN pip install keras tensorflow==1.14.0 --no-cache-dir --no-input
RUN mkdir /code
WORKDIR /code
ADD ./src /code/
You can solve this in two ways (works for me):
in docker-compose add:
command: bash -c 'python manage.py runserver'
or you can add a CMD instruction in your Dockerfile:
CMD python manage.py runserver
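In context, the docker-compose side might look like this (the service name is taken from the error output; the 0.0.0.0:8000 binding and the port mapping are my additions so the published port is reachable from the host):

version: "3"
services:
  django:
    build: .
    command: bash -c 'python manage.py runserver 0.0.0.0:8000'
    ports:
      - "8000:8000"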
I have a Dockerfile
FROM ubuntu:xenial
LABEL maintainer="info@martin-thoma.com"
# Settings for the local user to create
ENV APP_USER docker
ENV APP_USER_UID 9999
ENV APP_USER_GROUP docker
ENV APP_USER_GROUP_GID 4711
ENV PYTHONIOENCODING utf-8
# Install and update software
RUN apt-get update -y && apt-get install -y --fix-missing git python-pip python-dev build-essential poppler-utils libmysqlclient-dev
RUN pip install pip --upgrade
# Copy projects code
COPY . /opt/app
WORKDIR /opt/app
RUN pip install -r requirements.txt
# Create user
RUN groupadd --gid ${APP_USER_GROUP_GID} ${APP_USER_GROUP} \
&& useradd --uid ${APP_USER_UID} --create-home -g ${APP_USER_GROUP} ${APP_USER} \
&& chown -R $APP_USER:$APP_USER_GROUP /opt/app
# Start app
USER docker
RUN mkdir -p /opt/app/filestorage
ENTRYPOINT ["python"]
CMD ["app.py"]
and a requirements.txt
-e git+https://github.com/ecederstrand/exchangelib.git@85eada6d59d0e2c757ef17c6ce143f3c976d2a90#egg=exchangelib
Flask==0.12.2
fuzzywuzzy==0.15.1
When I change the exchangelib line to just exchangelib (hence not using git, but the version on PyPI), it works (but my code doesn't, as I need some of the recent changes).
When I have this, I get:
web_1 | ImportError: No module named exchangelib
What is the problem? Why can't my container find a pip installed package (via git)? How do I fix it?
My intuition is that the problem is that I install it as the root user, but the application runs as another user. The PyPI packages seem to get installed for all users while the editable is only local. But I still don't know how to fix it.
Simply using
git+git://github.com/ecederstrand/exchangelib.git@85eada6d59d0e2c757ef17c6ce143f3c976d2a90#egg=exchangelib
as a line in the requirements.txt worked. No change in the Dockerfile was necessary.
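Worth noting for readers today: GitHub has since disabled the unauthenticated git:// protocol, so the equivalent pinned line now uses https:

git+https://github.com/ecederstrand/exchangelib.git@85eada6d59d0e2c757ef17c6ce143f3c976d2a90#egg=exchangelib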