Unable to install Django through Dockerfile - python

When I run the 'docker build .' command, this error shows up:
"ERROR: Invalid requirement: 'Django=>4.0.4' (from line 1 of /requirements.txt)
WARNING: You are using pip version 22.0.4; however, version 22.1 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command."
I have upgraded pip to the latest version; when I check the pip version, it shows 22.1.
But when I run the docker build command again, nothing changes.
I upgraded from the /usr/local/bin/python location mentioned in the warning, but still nothing changed.
I am using Ubuntu 20.04, python version is 3.8.
My Dockerfile:
FROM python:3.8-alpine
MAINTAINER Kanan App Developer
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
RUN adduser -D user
USER user
requirements.txt file:
Django=>4.0.4
djangorestframework=>3.13.1

Just use == or >= instead of => in your requirements.txt, like this:
Django==4.0.4
djangorestframework==3.13.1

=> is not a valid relational operator for "greater than or equal to".
The valid operator is >=, so your requirements.txt file should be:
Django>=4.0.4
djangorestframework>=3.13.1
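A quick way to see the difference is to check a requirement line against the comparison operators PEP 440 actually defines. This is a rough sketch of that check, not a full PEP 508 parser:

```python
import re

# Valid PEP 440 version comparison operators; "=>" is not among them.
OPERATORS = ("===", "==", "!=", "<=", ">=", "~=", "<", ">")

def check_requirement(line: str) -> bool:
    """Return True if the line uses a recognised operator (rough check only)."""
    name = re.match(r"[A-Za-z0-9._-]+", line)
    rest = line[name.end():] if name else line
    # A bare package name (no version pin) is also valid.
    return rest == "" or rest.startswith(OPERATORS)

print(check_requirement("Django=>4.0.4"))   # False - invalid operator
print(check_requirement("Django>=4.0.4"))   # True
print(check_requirement("Django==4.0.4"))   # True
```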


Error with requirements file when creating docker image

Note, I'm brand new to Docker.
I'm trying to create a Docker image of my Flask app, but when I run sudo docker image build -t flask_docker . it keeps throwing version errors at the step that installs the requirements.txt file.
Here is my Dockerfile:
FROM python:3.8-alpine
COPY requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY . /app
ENTRYPOINT ["python3"]
CMD ["app.py"]
And here is the error.
ERROR: Could not find a version that satisfies the requirement Brlapi==0.8.2 (from versions: none)
ERROR: No matching distribution found for Brlapi==0.8.2
WARNING: You are using pip version 22.0.4; however, version 22.2.2 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
What is the proper way to fix this error? Should I just manually go through and find which packages don't install properly and just remove them?
I would recommend using a virtual environment to install packages, but for your situation try reinstalling Brlapi with
pip install --upgrade --force-reinstall Brlapi
and then run pip freeze to regenerate your requirements.txt.
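If several pins fail like this, rather than deleting entries by hand you can script the split between portable and desktop-only packages. A minimal sketch, assuming a small hand-maintained blocklist (the set below is an example, with Brlapi among the known system-specific names, not an exhaustive list):

```python
# Hypothetical helper: drop known system-specific packages from a
# requirements list generated on a desktop machine.
SYSTEM_SPECIFIC = {"brlapi", "pywin32", "pycairo"}  # example blocklist

def portable_requirements(lines):
    keep = []
    for line in lines:
        name = line.split("==")[0].strip().lower()
        if name and name not in SYSTEM_SPECIFIC:
            keep.append(line)
    return keep

reqs = ["Flask==2.0.2", "Brlapi==0.8.2", "requests==2.27.1"]
print(portable_requirements(reqs))  # ['Flask==2.0.2', 'requests==2.27.1']
```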

Default pip package PATH in python:3.8-slim-buster docker image

I'm installing the gunicorn pip package in my docker python:3.8-slim-buster image, and when I use CMD gunicorn I'm told /bin/sh: 1: gunicorn: not found.
So I'm considering changing the path, but I have a few questions on how to do so.
Should I use (in my Dockerfile):
pip --target=path_already_in_PATH install gunicorn
ENV PYTHONPATH "${PYTHONPATH}:good_path"
ENV PATH="/default_pip_path:${PATH}"
I don't know which option is better, or what to put in good_path, path_already_in_PATH and default_pip_path.
This is my Dockerfile :
FROM python:3.8-slim-buster
RUN apt-get update \
# dependencies for building Python packages
&& apt-get install -y build-essential netcat
# cleaning up unused files
# && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
# && rm -rf /var/lib/apt/lists/*
RUN addgroup --system kr1p \
&& adduser --system --ingroup kr1p kr1p
WORKDIR /app
COPY app .
RUN chown -R kr1p:kr1p /app
USER kr1p
RUN pip install -r requirements.txt
ENV PYTHONUNBUFFERED 1
CMD gunicorn
I've also tried python -m gunicorn, but it's the same, and also CMD ["gunicorn"].
And the docker-compose.yml
---
version: '3.7'
services:
  app:
    container_name: app
    build:
      context: .
      dockerfile: ./app/Dockerfile
    volumes:
      - app:/app
    ports:
      - 5000:5000
volumes:
  app:
    name: app
I noticed pip says "Defaulting to user installation because normal site-packages is not writeable" at the beginning of the installation, probably because I've created a new user.
It's another issue, but pip also tells me at the end: #10 385.5 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
What is the proper way to set a virtualenv to avoid issues?
Ah, so the problem shows up in the docker build output:
Step 8/10 : RUN pip install gunicorn
---> Running in 5ec725d1c957
Defaulting to user installation because normal site-packages is not writeable
Collecting gunicorn
Downloading gunicorn-20.1.0-py3-none-any.whl (79 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 79.5/79.5 KB 2.3 MB/s eta 0:00:00
Requirement already satisfied: setuptools>=3.0 in /usr/local/lib/python3.8/site-packages (from gunicorn) (57.5.0)
Installing collected packages: gunicorn
WARNING: The script gunicorn is installed in '/home/kr1p/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed gunicorn-20.1.0
WARNING: You are using pip version 22.0.4; however, version 22.1.2 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
Removing intermediate container 5ec725d1c957
---> c42800562d88
Step 9/10 : ENV PYTHONUNBUFFERED 1
---> Running in 8d9342ec2288
Namely: " WARNING: The script gunicorn is installed in '/home/kr1p/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location."
This is because it's running as your non-root kr1p user, so it's actually ending up in $HOME/.local/bin/gunicorn instead.
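You can confirm where user installs land by asking Python itself; under the posix_user scheme, console scripts go to <userbase>/bin, i.e. ~/.local/bin on Linux:

```python
import site
import sysconfig

# Base directory used by "pip install --user" (typically ~/.local on Linux)
print(site.getuserbase())
# Where console scripts such as gunicorn end up under a user install
print(sysconfig.get_path("scripts", "posix_user"))
```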
I would either:
add that dir to the PATH statically in the Dockerfile, like:
ENV PATH=/home/kr1p/.local/bin:$PATH
or, install dependencies as root, prior to switching down to the unpriv user for copying source files and other setup.
USER root
COPY requirements.txt /reqs.txt
RUN pip install --root-user-action=ignore -r /reqs.txt
USER kr1p
COPY --chown=kr1p app/ ./
The root-user-action is just to suppress a message about how you should be using virtualenvs, which doesn't necessarily apply when walling things off inside a container instead. This requires a newer pip than that which comes with debian-buster though, so I ended up removing it (and you're just stuck with that warning if you use the install while root approach).
As a full working example for the PATH modifying approach, see:
FROM python:3.8-slim-buster
RUN apt-get update \
# dependencies for building Python packages
&& apt-get install -y build-essential netcat
# cleaning up unused files
# && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
# && rm -rf /var/lib/apt/lists/*
# sets kr1p home dir to /app
RUN adduser --home /app --system --group kr1p
ENV PATH=/app/.local/bin:$PATH \
PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1
WORKDIR /app
COPY app/requirements.txt .
USER kr1p
RUN pip install -r /app/requirements.txt
COPY --chown=kr1p:kr1p app .
# otherwise a shell runs gunicorn, and signals don't get passed down properly
CMD ["gunicorn", "--help"]
(There were a few other things wrong like a missing = in your ENV statement, etc.)

WARNING: Running pip as the 'root' user

I am making a simple Docker image of my Python Django app, but at the end of building the container it throws the following warning (I am building it on Ubuntu 20.04):
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead
Why does it throw this warning if I am installing the Python requirements inside my image? I am building my image using:
sudo docker build -t my_app:1 .
Should I be worried about this warning, given that I know pip can break a system?
Here is my Dockerfile:
FROM python:3.8-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
The way your container is built doesn't add a user, so everything is done as root.
You could create a user and install to that user's home directory by doing something like this:
FROM python:3.8.3-alpine
RUN pip install --upgrade pip
RUN adduser -D myuser
USER myuser
WORKDIR /home/myuser
COPY --chown=myuser:myuser requirements.txt requirements.txt
RUN pip install --user -r requirements.txt
ENV PATH="/home/myuser/.local/bin:${PATH}"
COPY --chown=myuser:myuser . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
This behavior was introduced in pip 21.1 as a "bug fix".
As of pip 22.1, you can now opt out of the warning using a parameter:
pip install --root-user-action=ignore
You can ignore this in your container by using the environment:
ENV PIP_ROOT_USER_ACTION=ignore
#11035
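The environment variable works because pip maps any PIP_<OPTION> variable to the corresponding --<option> flag: the option name is uppercased and dashes become underscores. A small sketch of that naming convention:

```python
# pip's convention: environment variable PIP_<OPTION> configures the
# command-line option --<option> (uppercase, dashes -> underscores).
def env_var_for(option: str) -> str:
    return "PIP_" + option.lstrip("-").replace("-", "_").upper()

print(env_var_for("--root-user-action"))  # PIP_ROOT_USER_ACTION
print(env_var_for("--no-cache-dir"))      # PIP_NO_CACHE_DIR
```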
UPDATE 220930
The good news in this answer is that you can ignore the warning, but ignoring it is no longer best practice for pip versions >=22.1. The newer trick for pip >=22.1 was not known to me when I first wrote this answer.
pip version >=22.1
Follow the answer of Maximilian Burszley. It was not known to me at the time of writing, and it lets you avoid the warning with a tiny parameter.
pip version >=21.1 and <22.1
You can ignore this warning, since you are creating the image for an isolated purpose; it is therefore organizationally as isolated as a virtual environment (not technically, but that does not matter here).
It usually does not pay off to create a virtual environment in the image, or to add a user as in the other answer, only to silence the warning; you should not run into issues without doing so. The warning might cloud your view during debugging, but it does not stop the code from working.
Just check pip -V and pip3 -V to make sure you do not mistakenly use pip for Python 2 when you want pip for Python 3. But that should be it, and if you only install pip for Python 3, you will not have that problem anyway.
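As a minimal check (assuming python3 is on PATH; the "(python 3.x)" suffix in the output is what you compare against the plain pip -V output):

```shell
# "python3 -m pip" is unambiguous: it always runs the pip that belongs to
# the python3 interpreter, which is what "pip -V" should also report.
python3 -m pip -V
```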
pip version <21.1
In these older versions the warning does not appear at all (see the other answer); the age of this question alone shows it did not exist back then.
I don't like ignoring warnings, as one day you will overlook an important one.
Here is a good explanation of best Docker practices with Python. Search for "Example with virtualenv" and you'll find this:
# temp stage
FROM python:3.9-slim as builder
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && \
apt-get install -y --no-install-recommends gcc
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY requirements.txt .
RUN pip install -r requirements.txt
# final stage
FROM python:3.9-slim
COPY --from=builder /opt/venv /opt/venv
WORKDIR /app
ENV PATH="/opt/venv/bin:$PATH"
Works like a charm; no warnings or the like. By the way, they also recommend creating a non-root user for security reasons.
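The reason the multi-stage copy works is nothing more than PATH ordering: a venv is an ordinary directory tree, and putting its bin/ first makes its executables win the lookup. A self-contained sketch of that lookup rule (directories are temporary stand-ins for /opt/venv/bin and the system bin):

```python
import os
import shutil
import tempfile

# which() returns the first matching executable on the given search path,
# so whichever directory comes first "owns" the python/pip commands.
with tempfile.TemporaryDirectory() as venv_bin, \
     tempfile.TemporaryDirectory() as sys_bin:
    for d in (venv_bin, sys_bin):
        exe = os.path.join(d, "python")
        with open(exe, "w") as f:
            f.write("#!/bin/sh\n")
        os.chmod(exe, 0o755)
    search = os.pathsep.join([venv_bin, sys_bin])
    found = shutil.which("python", path=search)
    print(found == os.path.join(venv_bin, "python"))  # True
```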
EDIT: to get rid of all warnings you may also want to add the following entries to the builder part of your Dockerfile (applies for Debian 8.3.x):
ARG DEBIAN_FRONTEND=noninteractive
ARG DEBCONF_NOWARNINGS="yes"
RUN python -m pip install --upgrade pip && \
...

How to invalidate Dockerfile cache when pip installing from repo

I have a Dockerfile that needs to install the latest package code from a private git repo. However, because the Dockerfile/URL/commit doesn't change (I just follow the latest master), Docker caches this step and won't pull the latest code.
I can disable build caching entirely, which fixes the issue, but this results in a slow build.
How can I force Docker not to use the cache for just the one command?
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
COPY ./requirements.txt /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# This needs to be separate to trigger to invalidate the build cache
RUN pip install -e git+https://TOKEN#github.com/user/private-package.git#egg=private_package
COPY ./main.py /app
COPY ./app /app/app
Add
ARG foo=bar
before the RUN pip install -e ... line in your Dockerfile.
Then, in the script where you call docker build ..., add the parameter
--build-arg foo="$(date +%s)"
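A minimal sketch of the idea (the docker build command is only echoed here, since the point is the changing value; the ARG name foo matches the snippet above):

```shell
# date +%s prints seconds since the epoch, so the ARG value differs on
# every build, invalidating the cache from that layer onward.
CACHEBUST="$(date +%s)"
echo docker build --build-arg foo="$CACHEBUST" .
```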

Python package not installable in docker container

I have basic python docker container file like this:
FROM python:3.8
RUN pip install --upgrade pip
EXPOSE 8000
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
RUN useradd appuser && chown -R appuser /app
USER appuser
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
I want to run my flask application in a docker container by using this definition file. Locally I can start a new virtual env, install everything via pip install -r requirements.txt on python 3.8 and it does not fail.
When building the docker image it fails to install all packages from the requirements.txt. For example this package fails:
ERROR: Could not find a version that satisfies the requirement cvxopt==1.2.5.post1
ERROR: No matching distribution found for cvxopt==1.2.5.post1
When I comment out the package in the requirements.txt everything seems to work. The package itself claims to be compatible with python >2.7. Same behavior for the package pywin32==228 here.
Looking at the wheel files in the package, cvxopt 1.2.5.post1 only contains a build for Windows. For Linux (such as the docker container), you should use cvxopt 1.2.5.
You should replace the version with 1.2.5 (pip install cvxopt==1.2.5)
The latest version cvxopt 1.2.5.post1 is not compatible with all architectures: https://pypi.org/project/cvxopt/1.2.5.post1/#files
The previous one is compatible with a lot more hardware and should be able to run on your Docker image: https://pypi.org/project/cvxopt/1.2.5/#files
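You can see why such a pin fails by looking at the platform tag at the end of each wheel filename. A rough sketch of reading that tag (the filenames below illustrate the naming pattern; check the PyPI file listings for the real ones):

```python
# Wheel filenames follow name-version(-build)?-python-abi-platform.whl;
# the last dash-separated field is the platform tag.
def platform_tag(wheel_name: str) -> str:
    return wheel_name[:-len(".whl")].split("-")[-1]

print(platform_tag("cvxopt-1.2.5.post1-cp38-cp38-win_amd64.whl"))
# win_amd64 - Windows only
print(platform_tag("cvxopt-1.2.5-cp38-cp38-manylinux1_x86_64.whl"))
# manylinux1_x86_64 - works in a Linux container
```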
