Default pip package PATH in python:3.8-slim-buster Docker image

I'm installing the gunicorn pip package in my python:3.8-slim-buster Docker image, and when I use CMD gunicorn I get /bin/sh: 1: gunicorn: not found.
So I'm considering changing the path, but I have a few questions about how to do so.
Should I use (in my Dockerfile):
pip --target=path_already_in_PATH install gunicorn
ENV PYTHONPATH "${PYTHONPATH}:good_path"
ENV PATH="/default_pip_path:${PATH}"
I don't know which option is better, or what to put in good_path, path_already_in_PATH and default_pip_path.
This is my Dockerfile:
FROM python:3.8-slim-buster
RUN apt-get update \
# dependencies for building Python packages
&& apt-get install -y build-essential netcat
# cleaning up unused files
# && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
# && rm -rf /var/lib/apt/lists/*
RUN addgroup --system kr1p \
&& adduser --system --ingroup kr1p kr1p
WORKDIR /app
COPY app .
RUN chown -R kr1p:kr1p /app
USER kr1p
RUN pip install -r requirements.txt
ENV PYTHONUNBUFFERED 1
CMD gunicorn
I've also tried python -m gunicorn, but it's the same, and also CMD ["gunicorn"].
And the docker-compose.yml
---
version: '3.7'

services:
  app:
    container_name: app
    build:
      context: .
      dockerfile: ./app/Dockerfile
    volumes:
      - app:/app
    ports:
      - 5000:5000

volumes:
  app:
    name: app
I noticed pip says "Defaulting to user installation because normal site-packages is not writeable" at the beginning of the installation, probably because I've created a new user.
It's another issue, but pip also tells me at the end: #10 385.5 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
What is the proper way to set a virtualenv to avoid issues?

Ah, so the problem shows up in the docker build output:
Step 8/10 : RUN pip install gunicorn
---> Running in 5ec725d1c957
Defaulting to user installation because normal site-packages is not writeable
Collecting gunicorn
Downloading gunicorn-20.1.0-py3-none-any.whl (79 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 79.5/79.5 KB 2.3 MB/s eta 0:00:00
Requirement already satisfied: setuptools>=3.0 in /usr/local/lib/python3.8/site-packages (from gunicorn) (57.5.0)
Installing collected packages: gunicorn
WARNING: The script gunicorn is installed in '/home/kr1p/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed gunicorn-20.1.0
WARNING: You are using pip version 22.0.4; however, version 22.1.2 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
Removing intermediate container 5ec725d1c957
---> c42800562d88
Step 9/10 : ENV PYTHONUNBUFFERED 1
---> Running in 8d9342ec2288
Namely: " WARNING: The script gunicorn is installed in '/home/kr1p/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location."
This is because it's running as your non-root kr1p user, so it's actually ending up in $HOME/.local/bin/gunicorn instead.
I would either:
add that dir to the PATH statically in the Dockerfile, like:
ENV PATH=/home/kr1p/.local/bin:$PATH
or install dependencies as root, prior to switching down to the unprivileged user for copying source files and other setup:
USER root
COPY requirements.txt /reqs.txt
RUN pip install --root-user-action=ignore -r /reqs.txt
USER kr1p
COPY --chown=kr1p app/ ./
The --root-user-action flag just suppresses the message about using virtualenvs, which doesn't necessarily apply when things are walled off inside a container anyway. It requires a newer pip than the one that ships in the debian-buster image, though, so I ended up removing it (and you're just stuck with that warning if you use the install-while-root approach).
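If you would rather follow the virtualenv suggestion from that warning, a minimal sketch could look like the following; /opt/venv is just an arbitrary location I picked, and putting its bin directory first on PATH means every later RUN and the CMD pick up the venv's pip and gunicorn without any activate step:
FROM python:3.8-slim-buster
RUN adduser --home /app --system --group kr1p
# create the virtualenv once, as root, in a fixed location
RUN python -m venv /opt/venv
# the venv's bin dir goes first on PATH, so "pip" and "gunicorn" resolve to the venv copies
ENV PATH=/opt/venv/bin:$PATH
WORKDIR /app
COPY app/requirements.txt .
RUN pip install -r requirements.txt
COPY --chown=kr1p:kr1p app .
USER kr1p
CMD ["gunicorn", "--help"]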
As a full working example for the PATH modifying approach, see:
FROM python:3.8-slim-buster
RUN apt-get update \
# dependencies for building Python packages
&& apt-get install -y build-essential netcat
# cleaning up unused files
# && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
# && rm -rf /var/lib/apt/lists/*
# sets kr1p home dir to /app
RUN adduser --home /app --system --group kr1p
ENV PATH=/app/.local/bin:$PATH \
PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1
WORKDIR /app
COPY app/requirements.txt .
USER kr1p
RUN pip install -r /app/requirements.txt
COPY --chown=kr1p:kr1p app .
# otherwise a shell runs gunicorn, and signals don't get passed down properly
CMD ["gunicorn", "--help"]
(There were a few other things wrong like a missing = in your ENV statement, etc.)
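As a side note on the CMD comment in that example: the shell and exec forms really do behave differently under docker stop. A quick illustration (the app:app module path is just a placeholder for whatever your gunicorn entry point is):
# shell form: docker runs /bin/sh -c "gunicorn ...", so sh is PID 1 and
# the SIGTERM from "docker stop" may never reach gunicorn itself
CMD gunicorn app:app --bind 0.0.0.0:5000
# exec form: gunicorn is PID 1 and receives signals directly
CMD ["gunicorn", "app:app", "--bind", "0.0.0.0:5000"]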

Related

Dockerfile: /bin/sh: 1: apt-get: not found

When building my Docker image, I get the error
"/bin/sh: 1: apt-get: not found"
Dockerfile:
FROM python:3.8
FROM ubuntu:20.04
ENV PATH="/env/bin/activate"
RUN apt-get update -y && apt-get upgrade -y
WORKDIR /var/www/html/
COPY . .
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["python", "manage.py"]
You are setting PATH to /env/bin/activate, so that is then the only place where apt-get is searched for. There is no need to activate a virtual env inside the container; just get rid of that line. pip can install the packages in requirements.txt to the "system" Python without issues.
You cannot layer 2 images like you are attempting to do, with multiple FROM statements. Just use FROM python:3.8 and drop the ubuntu. Multiple FROM statements are used in multi-stage builds where you have intermediate images which produce artifacts that are copied to the final image.
So just do:
FROM python:3.8
RUN apt-get update -y && apt-get upgrade -y
WORKDIR /var/www/html/
COPY . .
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["python", "manage.py"]
...although why you would put Python code in /var/www/html beats me. Probably you don't.
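To make the multi-stage point concrete, here is a rough sketch of what multiple FROM statements are actually for; the stage name, the --prefix trick and the paths are just illustrative choices, not something your project needs:
# builder stage: has the compilers and headers needed to build wheels
FROM python:3.8 AS builder
WORKDIR /src
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# final stage: only the installed packages are copied across
FROM python:3.8-slim
COPY --from=builder /install /usr/local
WORKDIR /var/www/html/
COPY . .
EXPOSE 8000
CMD ["python", "manage.py"]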

Run python mysql client on slim python 3.6 docker image

I have a working service running on a python:3.6-jessie image.
I am trying to reduce the size of it to speed up serverless cold starts.
I have tried the images python:3.6-alpine, python:3.6-slim-buster and python:3.6-slim-jessie.
With all of them I end up having to install many additional packages, and I end up with the following error that I cannot fix with more packages:
ImportError: libmysqlclient.so.18: cannot open shared object file: No such file or directory
My current Dockerfile is
FROM python:3.6-jessie as build
ENV PYTHONUNBUFFERED 0
ENV FLASK_APP "api/app.py"
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
WORKDIR /opt/venv
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
FROM python:3.6-slim-jessie
COPY --from=build /opt/venv /opt/venv
WORKDIR /opt/venv
RUN apt-get update
RUN apt-get --assume-yes install gcc
RUN apt-get --assume-yes install python-mysqldb
ENV PATH="/opt/venv/bin:$PATH"
RUN rm -rf configs tests draw_results env .idea .git .pytest_cache
EXPOSE 8000
CMD ["/opt/venv/run.sh"]
The relevant lines from requirements.txt:
mysqlclient==1.4.2.post1
PyMySQL==0.9.3
Flask-SQLAlchemy==2.3.2
SQLAlchemy==1.3.0
The run.sh is just my gunicorn start command.
Is there any package I can use to fix this last issue, is there some other MySQL library I should be using, or some other way for me to fix this? Or should I just stick to the full python:3.6 image when I want a MySQL client?
I'm using python:3.7-slim with the following command:
RUN apt-get -y install default-libmysqlclient-dev
Try adding this line to the Dockerfile:
RUN apt-get install -y libmysqlclient-dev
For Python slim-buster (Debian-based) images, you can run this command in your Dockerfile:
RUN apt-get update && apt-get install -y default-mysql-client
This worked for me; I used python:3.10.6-slim-buster.
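Pulling those answers together, a minimal sketch for a buster-based slim image might look like this; it assumes requirements.txt sits at the build context root, the package names are Debian buster's, and on jessie the runtime library was shipped as libmysqlclient18, which is likely why a venv built on full jessie complains about libmysqlclient.so.18 on the slim variant:
FROM python:3.6-slim-buster
# mysql_config plus the client headers are needed to compile mysqlclient,
# and the same package pulls in the shared library it links against at runtime
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential default-libmysqlclient-dev \
    && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install -r requirements.txt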

Create Docker container with Django, npm and gulp

I switched my local development setup to Docker. I use the Django framework. For the frontend I use the gulp build command to “create” my files. I've tried a lot and looked into the Cookiecutter and Saleor projects, but I'm still having issues installing npm in a way that lets me call the gulp build command in my Docker container.
I already tried to add:
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
RUN apt-get update && apt-get install -y \
    nodejs
COPY ./package.json /app/
RUN npm install
While npm is installed, I still can't run the gulp build command in my container. It just says gulp is an unknown command, so it seems npm doesn't install the packages defined in my package.json file. Has anyone here already solved this and can give me some tips?
Dockerfile
# Pull base image
FROM python:3.7
# Define environment variable
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y \
# Language dependencies
gettext \
# In addition, when you clean up the apt cache by removing /var/lib/apt/lists
# it reduces the image size, since the apt cache is not stored in a layer.
&& rm -rf /var/lib/apt/lists/*
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install Python dependencies
RUN pip install pipenv
RUN pipenv install --system --deploy --dev
docker-compose.yml
version: '3'

services:
  web:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    env_file: .env
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
    entrypoint: ./compose/local/django/entrypoint.sh
    container_name: myproject

  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      # Password will be required if connecting from a different host
      - POSTGRES_PASSWORD=password
Update 1:
# Pull base image
FROM combos/python_node:3_10
# Define environment variable
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y \
# Language dependencies
gettext \
# In addition, when you clean up the apt cache by removing /var/lib/apt/lists
# it reduces the image size, since the apt cache is not stored in a layer.
&& rm -rf /var/lib/apt/lists/*
# COPY webpack.config.js app.json package.json package-lock.json /app/
# WORKDIR /app
# RUN npm install
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
RUN npm install
# Install Python dependencies
RUN pip install pipenv
RUN pipenv install --system --deploy --dev
Are you sure npm install is being called in the correct directory? You copy package.json to /app but run npm install from an unknown folder. You can try:
COPY ./package.json /app/
RUN cd /app/ && npm install
But I think you'd want to install gulp globally anyway, so you can skip package.json and just use:
RUN npm install -g gulp-cli
This way whatever calls gulp should have it in PATH and not just that specific directory.
Also, if you want to get a Docker image with both Python 3.7 and Node.js 10 already installed, you can use combos/python_node:3.7_10. It's rebuilt daily to contain the latest versions of both images.
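Putting that together, a sketch of the relevant Dockerfile portion could look like this; it assumes package.json (with gulp listed in it) and the gulpfile sit at the root of the build context, and uses the combos image mentioned above:
FROM combos/python_node:3.7_10
WORKDIR /app

# install JS dependencies first so this layer is cached independently of source changes
COPY package.json /app/
RUN npm install && npm install -g gulp-cli

# now copy the project and build the frontend assets
COPY . /app
RUN gulp build

# Python dependencies
RUN pip install pipenv && pipenv install --system --deploy --dev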

Why can't my container find a pip installed package (via git)?

I have a Dockerfile
FROM ubuntu:xenial
LABEL maintainer="info#martin-thoma.com"
# Settings for the local user to create
ENV APP_USER docker
ENV APP_USER_UID 9999
ENV APP_USER_GROUP docker
ENV APP_USER_GROUP_GID 4711
ENV PYTHONIOENCODING utf-8
# Install and update software
RUN apt-get update -y && apt-get install -y --fix-missing git python-pip python-dev build-essential poppler-utils libmysqlclient-dev
RUN pip install pip --upgrade
# Copy projects code
COPY . /opt/app
WORKDIR /opt/app
RUN pip install -r requirements.txt
# Create user
RUN groupadd --gid ${APP_USER_GROUP_GID} ${APP_USER_GROUP} \
&& useradd --uid ${APP_USER_UID} --create-home -g ${APP_USER_GROUP} ${APP_USER} \
&& chown -R $APP_USER:$APP_USER_GROUP /opt/app
# Start app
USER docker
RUN mkdir -p /opt/app/filestorage
ENTRYPOINT ["python"]
CMD ["app.py"]
and a requirements.txt
-e git+https://github.com/ecederstrand/exchangelib.git#85eada6d59d0e2c757ef17c6ce143f3c976d2a90#egg=exchangelib
Flask==0.12.2
fuzzywuzzy==0.15.1
When I change the exchangelib line to just exchangelib (hence not using git, but the version on PyPI), the install works (but my code doesn't, as I need some of the recent changes).
When I have this, I get:
web_1 | ImportError: No module named exchangelib
What is the problem? Why can't my container find a pip installed package (via git)? How do I fix it?
My intuition is that the problem is that I install it as the root user, but the application runs as another user. The PyPI packages seem to get installed for all users, while the editable install is only local. But I still don't know how to fix it.
Simply using
git+git://github.com/ecederstrand/exchangelib.git#85eada6d59d0e2c757ef17c6ce143f3c976d2a90#egg=exchangelib
as a line in the requirements.txt worked. No change in the Dockerfile was necessary.
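For reference, the usual non-editable VCS pin puts the commit after an @ rather than a #, and works over https as well; the hash below is just the one from the question:
git+https://github.com/ecederstrand/exchangelib.git@85eada6d59d0e2c757ef17c6ce143f3c976d2a90#egg=exchangelib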

Docker build seems to hang when installing non-pip Python packages

I've got some non-pip packages, which I've written into my requirements.txt as:
git+https://github.com/manahl/arctic.git
This seems to work OK on my localhost, but when I do docker build I get this:
Collecting git+https://github.com/manahl/arctic.git (from -r scripts/requirements.txt (line 11))
Cloning https://github.com/manahl/arctic.git to /tmp/pip-1gw7spz2-build
And it just seems to hang. It moves on silently after several minutes, but it doesn't look like it has worked at all. It seems to do this for every git-based dependency.
What am I doing wrong?
Dockerfile:
FROM python:3.6.1
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
RUN apt-get update && apt-get install -y \
git\
build-essential
# Install any needed packages specified in requirements.txt
RUN pip install -r scripts/requirements.txt
# Run app.py when the container launches
CMD ["python", "scheduler.py"]
If the scripts folder exists in the current directory, try RUN pip install -r /scripts/requirements.txt
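If you'd rather see what pip is doing during those clones than guess, two standard knobs help (nothing project-specific here):
# in the Dockerfile: verbose pip output shows the git clone and build progress
RUN pip install -v -r scripts/requirements.txt
# on the host, with BuildKit: keep the full step-by-step output visible
# DOCKER_BUILDKIT=1 docker build --progress=plain .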
