I have a basic Python Dockerfile like this:
FROM python:3.8
RUN pip install --upgrade pip
EXPOSE 8000
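# Keeps Python from generating .pyc files in the container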
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
RUN useradd appuser && chown -R appuser /app
USER appuser
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
I want to run my Flask application in a Docker container using this definition file. Locally I can create a new virtual environment on Python 3.8, install everything via pip install -r requirements.txt, and it does not fail.
When building the Docker image, however, it fails to install some of the packages from requirements.txt. For example, this package fails:
ERROR: Could not find a version that satisfies the requirement cvxopt==1.2.5.post1
ERROR: No matching distribution found for cvxopt==1.2.5.post1
When I comment out the package in requirements.txt, everything seems to work. The package itself claims to be compatible with Python >2.7. The same happens with the package pywin32==228.
Looking at the wheel files in the package, cvxopt 1.2.5.post1 only contains builds for Windows. For Linux (such as the Docker container), you should use cvxopt 1.2.5.
You should replace the version with 1.2.5 (pip install cvxopt==1.2.5)
The latest version, cvxopt 1.2.5.post1, does not ship wheels for every platform: https://pypi.org/project/cvxopt/1.2.5.post1/#files
The previous one covers far more platforms and should run in your Docker image: https://pypi.org/project/cvxopt/1.2.5/#files
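The same reasoning explains pywin32==228: it only publishes Windows wheels, so it can never install on Linux. One option (my suggestion, not from the answer above) is to guard Windows-only packages with a PEP 508 environment marker in requirements.txt, so pip simply skips them on Linux:
cvxopt==1.2.5
pywin32==228; sys_platform == "win32"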
I'm trying to use this tutorial to upload a Docker container to AWS ECR for Lambda. My problem is that my Python script uses psycopg2, and I couldn't figure out how to install psycopg2 inside the Docker image. I know that I need postgresql-devel for the libpq library and gcc for compiling, but it still doesn't work.
My requirements.txt:
pandas==1.3.0
requests==2.25.1
psycopg2==2.9.1
pgcopy==1.5.0
Dockerfile:
FROM public.ecr.aws/lambda/python:3.8
WORKDIR /app
COPY my_script.py .
COPY some_file.csv .
COPY requirements.txt .
RUN yum install -y postgresql-devel gcc*
RUN pip install -r requirements.txt
CMD ["/app/my_script.handler"]
After building, running the image, and testing the lambda function locally, I get this error message:
psycopg2.OperationalError: SCRAM authentication requires libpq version 10 or above
So I think the container has the wrong version of postgresql(-devel), but I'm not sure how to install the proper one. Any tips for deploying a psycopg2 script to Docker for Lambda usage?
This might be a little old and too late to answer, but I figured I'd post what worked for me.
FROM public.ecr.aws/lambda/python:3.8
COPY . ${LAMBDA_TASK_ROOT}
RUN yum install -y gcc python27 python27-devel postgresql-devel
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
CMD [ "app.handler" ]
I have a Dockerfile that needs to install the latest package code from a private git repo. However, because the Dockerfile/URL/commit doesn't change (I just follow the latest master), Docker caches this step and won't pull the latest code.
I can disable build caching entirely, which fixes the issue, but that results in a slow build.
How can I force Docker not to use the cache for just that one command?
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
COPY ./requirements.txt /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# This needs to be separate to trigger to invalidate the build cache
RUN pip install -e git+https://TOKEN#github.com/user/private-package.git#egg=private_package
COPY ./main.py /app
COPY ./app /app/app
Add
ARG foo=bar
before the RUN pip install -e ... line in your Dockerfile.
Then, in the script that runs docker build ..., add the parameter
--build-arg foo="$(date +%s)"
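Putting it together, a minimal sketch (the image tag myimage is a placeholder; the placement of ARG is the point, since changing a build arg invalidates the cache for every RUN instruction after its declaration):
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
COPY ./requirements.txt /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# A changing value here busts the cache for the instructions below
ARG foo=bar
RUN pip install -e git+https://TOKEN#github.com/user/private-package.git#egg=private_package
COPY ./main.py /app
COPY ./app /app/app
and build with:
docker build --build-arg foo="$(date +%s)" -t myimage .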
I've created a Flask app and am trying to dockerize it. It uses machine learning libraries; I had some problems downloading them, so my Dockerfile is a little bit messy, but the image was successfully created.
from alpine:latest
RUN apk add --no-cache python3-dev \
&& pip3 install --upgrade pip
WORKDIR /app
COPY . /app
FROM python:3.5
RUN pip3 install gensim
RUN pip3 freeze > requirements.txt
RUN pip3 --no-cache-dir install -r requirements.txt
EXPOSE 5000
ENV PATH=/venv/bin:$PATH
ENV FLASK_APP /sentiment-service/__init__.py
CMD ["python","-m","flask", "run", "--host", "0.0.0.0", "--port", "5000"]
and when I try:
docker run my_app:latest
I get
/usr/local/bin/python: No module named flask
Of course I have Flask==1.1.1 in my requirements.txt file.
Thanks for any help!
The problem is here:
RUN pip3 freeze > requirements.txt
The > operator in bash overwrites the content of the file, so this line replaces your hand-written requirements.txt with a list of whatever is already installed in the image, which at that point is only gensim and its dependencies, so Flask never gets installed. Note also that the second FROM python:3.5 starts a fresh build stage, discarding everything done in the alpine stage above it, including the COPY . /app. If you want to append to your requirements.txt, use the >> operator instead:
RUN pip3 freeze >> requirements.txt
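For reference, a minimal single-stage sketch (assuming requirements.txt, including Flask==1.1.1, sits at the project root; I've kept the FLASK_APP path from the question, relocated under /app, so adjust it to your actual layout):
FROM python:3.7
WORKDIR /app
# Install from the hand-written requirements file, not from pip3 freeze
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . /app
EXPOSE 5000
ENV FLASK_APP=/app/sentiment-service/__init__.py
CMD ["python", "-m", "flask", "run", "--host", "0.0.0.0", "--port", "5000"]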
Thank you all. I finally rebuilt my app, simplified the requirements, dropped alpine, and used Python 3.7 in my Dockerfile.
I could run the app locally, but in Docker it probably could not find some file on the path, or hit some other error from the app, which is why it stopped just after starting.
This is my Dockerfile:
FROM docker_with_pre_installed_packages:1.0
ADD requirements.txt .
RUN pip install -r requirements.txt
ADD app app
WORKDIR /
docker_with_pre_installed_packages has:
/usr/local/lib/python2.7/site-packages/my_packages/db
/usr/local/lib/python2.7/site-packages/my_packages/config
/usr/local/lib/python2.7/site-packages/my_packages/logging
requirements.txt:
my_package.redis-db
my_package.common.utils
after running
docker build -t test:1.0 .
docker run -it test:1.0 bash
cd /usr/local/lib/python2.7/site-packages/my_packages/
ls
__init__.py redis_db common
pip freeze still shows the old packages
and I can still see the dist-info directory
but when I try to run python and import something from the pre-installed packages I get:
ImportError: No module named my_package.config
Thanks!
Did you install Python in your docker_with_pre_installed_packages image, or just copy some files in? It looks like Python was not properly installed.
By the way, Python 2.7 has been unsupported since January 2020; I highly recommend moving to Python 3.
Try the official python Docker image, install the dependencies there, and compare:
FROM python:3
ADD requirements.txt /
ADD app app
RUN pip install -r requirements.txt
CMD [ "python", "./my_script.py" ]
I suggest checking what exactly changed in your Docker container as a result of executing
pip install -r requirements.txt
I would:
1) Build a Docker image from the lines before the pip install:
FROM docker_with_pre_installed_packages:1.0
ADD requirements.txt .
CMD /bin/bash
2) Run the container and execute pip install -r requirements.txt manually.
Then, outside Docker (but with the container still running), I would inspect what difference that command made in the container:
3) Run docker ps to find the container_id (e.g. 397cd1c9939f).
4) Run docker diff 397cd1c9939f to see the difference.
Hope it helps.
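For reference, docker diff prefixes each path with A (added), C (changed), or D (deleted), so a hypothetical excerpt after the manual install might look like:
C /usr/local/lib/python2.7/site-packages/my_packages
A /usr/local/lib/python2.7/site-packages/my_packages/redis_db/__init__.py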
The problem was with the jenkins slave running this build.
I tried to run it on a new one and all got fixed
I've got some packages that aren't on PyPI,
which I've written into my requirements.txt as:
git+https://github.com/manahl/arctic.git
This seems to work OK on my localhost, but when I do docker build I get this:
Collecting git+https://github.com/manahl/arctic.git (from -r scripts/requirements.txt (line 11))
Cloning https://github.com/manahl/arctic.git to /tmp/pip-1gw7spz2-build
And it just seems to hang. It moves on silently after several minutes, but it doesn't look like it has worked at all. It seems to do this for every git-based dependency.
What am I doing wrong?
Dockerfile:
FROM python:3.6.1
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
RUN apt-get update && apt-get install -y \
git \
build-essential
# Install any needed packages specified in requirements.txt
RUN pip install -r scripts/requirements.txt
# Run app.py when the container launches
CMD ["python", "scheduler.py"]
If the scripts folder exists in the current directory, try RUN pip install -r /scripts/requirements.txt instead.
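If the install genuinely stalls rather than just running quietly, a more verbose build can show what pip and git are actually doing. A sketch, assuming the layout from the question is unchanged (myapp is a placeholder tag, and --progress=plain requires BuildKit):
# In the Dockerfile: -v makes pip print its output, including clone progress
RUN pip install -v -r scripts/requirements.txt
# On the host: keep the build output uncollapsed
docker build --progress=plain -t myapp .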