I am using WSL (Linux on Windows 10) to build a Docker image, but I always encounter this error:
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
I have already included a COPY instruction in the Dockerfile that copies everything (including requirements.txt) into the /app directory. The error always happens when I run docker build directly against the repository on the Windows 10 host (using a /mnt path to point at the directory containing the Dockerfile) instead of first copying the repository folder into WSL.
However, if I copy the repository folder into WSL first, it works without a problem. I have attached the Dockerfile below:
# Get Python
FROM python:3.7
# Install the unixODBC packages
RUN apt-get update && apt-get install -y --no-install-recommends \
    unixodbc-dev \
    unixodbc \
    libpq-dev
# Set working directory
WORKDIR /app
# Copy the rest of the working directory contents into the container at /app
COPY . .
# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt
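For context, here is roughly how the two invocations differ (the paths are hypothetical placeholders, not my real ones):

# Fails: the build context lives on the Windows host, reached through /mnt
docker build -t myapp /mnt/c/Users/me/projects/myapp

# Works: the repository is copied into the WSL filesystem first
cp -r /mnt/c/Users/me/projects/myapp ~/myapp
docker build -t myapp ~/myapp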
Related
I'm trying to install ffmpeg in a Docker image for an AWS Lambda function.
My Dockerfile is:
FROM public.ecr.aws/lambda/python:3.8
# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT}
# Install the function's dependencies using file requirements.txt
# from your project folder.
COPY requirements.txt .
RUN yum install gcc -y
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
RUN yum install -y ffmpeg
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ]
I am getting an error:
> [6/6] RUN yum install -y ffmpeg:
#9 0.538 Loaded plugins: ovl
#9 1.814 No package ffmpeg available.
#9 1.843 Error: Nothing to do
Since the ffmpeg package is not available via the yum package manager, I manually installed ffmpeg and made it part of the container. Here are the steps:
Downloaded the static build from here (the build for the public.ecr.aws/lambda/python:3.8 image is ffmpeg-release-amd64-static.tar.xz).
Manually unarchived it into the root folder of my project (where my Dockerfile and app.py are). I use a CodeCommit repo, but that is of course not mandatory. The download-and-unpack step might look like the sketch below.
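A minimal sketch of that step, assuming the static build comes from the johnvansickle.com mirror (the original link is not preserved here, so treat the URL as an assumption):

# Download and unpack the static ffmpeg build into the project root (URL assumed)
curl -LO https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz
tar -xf ffmpeg-release-amd64-static.tar.xz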
Added the following line to my Dockerfile:
COPY ffmpeg-5.1.1-amd64-static /usr/local/bin/ffmpeg
In requirements.txt I added the following line (so that pip installs the ffmpeg-python package):
ffmpeg-python
And here is how I use it in my Python code:
import ffmpeg
...
process1 = (
    ffmpeg
    .input(sourceFilePath)
    .output("pipe:", format="s16le", acodec="pcm_s16le", ac=1, ar="16k", loglevel="quiet")
    .run_async(pipe_stdout=True, cmd=r"/usr/local/bin/ffmpeg/ffmpeg")
)
Note that in order for this to work, in the run method (or run_async in my case) I needed to pass the cmd argument with the location of the ffmpeg executable.
I was able to build the container, and ffmpeg works properly for me. The complete Dockerfile:
FROM public.ecr.aws/lambda/python:3.8
# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT}
COPY input_files ./input_files
COPY ffmpeg-5.1.1-amd64-static /usr/local/bin/ffmpeg
RUN chmod -R 777 /usr/local/bin/ffmpeg
# requirements.txt must be copied in before pip can read it
COPY requirements.txt .
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.lambda_handler" ]
I am trying to build this Docker image with Docker Compose:
FROM python:3.7-slim
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
RUN apt-get update && apt-get install -y \
    build-essential \
    make \
    gcc \
    python3-dev \
    mongodb
# Create working directory and copy all files
COPY . /app
WORKDIR /app
# Pip install requirements
RUN pip install --user -r requirements.txt
# Port to expose
EXPOSE 8000
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "main.py", "runserver"]
but I get this error:
Package 'mongodb' has no installation candidate
When I build the exact same Dockerfile with python:3.4-slim, it works. Why?
That's because python:3.4-slim uses Debian stretch (9) as its base, and the mongodb package is available in its repos. For python:3.7-slim, however, the base is bullseye (11), and mongodb is no longer in its repos.
I'd recommend not installing mongodb in the image you're building above, but rather running MongoDB in a separate container; a sketch follows.
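A minimal docker-compose.yml sketch of that setup (the service names and mongo tag are assumptions, not from the original):

version: "3.8"
services:
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - mongo
  mongo:
    image: mongo:4.4

The app then reaches MongoDB over the Compose network using the hostname mongo instead of localhost.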
I have a basic Python Dockerfile like this:
FROM python:3.8
RUN pip install --upgrade pip
EXPOSE 8000
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
RUN useradd appuser && chown -R appuser /app
USER appuser
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
I want to run my Flask application in a Docker container using this definition file. Locally I can create a new virtualenv, install everything via pip install -r requirements.txt on Python 3.8, and nothing fails.
When building the docker image it fails to install all packages from the requirements.txt. For example this package fails:
ERROR: Could not find a version that satisfies the requirement cvxopt==1.2.5.post1
ERROR: No matching distribution found for cvxopt==1.2.5.post1
When I comment out the package in requirements.txt, everything seems to work. The package itself claims to be compatible with Python >2.7. Same behavior for the package pywin32==228.
Looking at the wheel files in the package, cvxopt 1.2.5.post1 only contains builds for Windows. For Linux (such as the Docker container), you should use cvxopt 1.2.5.
Replace the pinned version with 1.2.5 (pip install cvxopt==1.2.5). The same reasoning applies to pywin32, which is Windows-only and cannot be installed in a Linux container at all.
The latest version, cvxopt 1.2.5.post1, is not compatible with all architectures: https://pypi.org/project/cvxopt/1.2.5.post1/#files
The previous one is compatible with far more platforms and should run on your Docker image: https://pypi.org/project/cvxopt/1.2.5/#files
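One way to keep a single requirements.txt working on both Windows and Linux is a PEP 508 environment marker; a sketch (version pins copied from the question):

cvxopt==1.2.5
pywin32==228; sys_platform == "win32"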
I created a slim Dockerfile for my app:
FROM python:3.7-slim-stretch AS build
RUN python3 -m venv /venv
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y git && \
    apt-get install -y build-essential && \
    rm -rf /var/cache/apt/* /var/lib/apt/lists/*
ADD ./requirements.txt /project/
RUN /venv/bin/pip install -r /project/requirements.txt
ADD . /project
RUN /venv/bin/pip install /project
WORKDIR /project
FROM python:3.7-slim-stretch AS production
COPY --from=build /venv /venv
CMD ["/venv/bin/python3","-m", "myapp"]
The image builds and works. The running Python executable is the one copied from the build image (verified: if I remove /venv/bin, it won't run).
However, to save some space I want to change my production base docker to:
FROM debian:stretch-slim
But then I'm getting an error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/venv/bin/python3\": stat /venv/bin/python3: no such file or directory": unknown.
Now, I don't understand this error. I can see the Python executable is there, so why won't it run? What is in the base Python Docker image that allows it to run?
Go into your venv in the container and run ls -l on the bin directory:
lrwxrwxrwx 1 root root 21 Dec 4 17:28 python -> /usr/local/bin/python
Yes, python is there, but it is a symlink to a file that does not exist.
You can get around this first problem by using RUN python3 -m venv --copies /venv in your Dockerfile.
But you will then hit the following error message:
error while loading shared libraries: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory
So you will finally need to have the exact same version of Python in your production image as the one available at build time; one way to do that is sketched below.
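Since Debian stretch does not ship Python 3.7 in its repos, a minimal sketch of one workaround is to copy the interpreter itself from the build stage (with the caveat that any system libraries CPython links against must also exist in the production image):

FROM debian:stretch-slim AS production
# Copy the CPython installation (including libpython3.7m.so.1.0) from the build stage
COPY --from=build /usr/local /usr/local
# Refresh the dynamic linker cache so the shared library is found
RUN ldconfig
COPY --from=build /venv /venv
CMD ["/venv/bin/python3", "-m", "myapp"]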
I've got some packages that aren't on PyPI, which I've written into my requirements.txt as:
git+https://github.com/manahl/arctic.git
This seems to work OK on my local machine, but when I do docker build I get this:
Collecting git+https://github.com/manahl/arctic.git (from -r scripts/requirements.txt (line 11))
  Cloning https://github.com/manahl/arctic.git to /tmp/pip-1gw7spz2-build
And it just seems to hang. It moves on silently after several minutes, but it doesn't look like it has actually worked. It seems to do this for every git-based dependency.
What am I doing wrong?
Dockerfile:
FROM python:3.6.1
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
RUN apt-get update && apt-get install -y \
    git \
    build-essential
# Install any needed packages specified in requirements.txt
RUN pip install -r scripts/requirements.txt
# Run app.py when the container launches
CMD ["python", "scheduler.py"]
If the scripts folder exists in the build context, try an absolute path: RUN pip install -r /app/scripts/requirements.txt (the Dockerfile above copies everything into /app).
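If the clone genuinely hangs rather than fails, it can also help to rebuild with full output so pip's progress is visible; a hypothetical diagnostic run (the flags assume a BuildKit-era Docker, and the tag is a placeholder):

docker build --progress=plain --no-cache -t myapp .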