This is my Dockerfile:
FROM docker_with_pre_installed_packages:1.0
ADD requirements.txt .
RUN pip install -r requirements.txt
ADD app app
WORKDIR /
docker_with_pre_installed_packages has:
/usr/local/lib/python2.7/site-packages/my_packages/db
/usr/local/lib/python2.7/site-packages/my_packages/config
/usr/local/lib/python2.7/site-packages/my_packages/logging
requirements.txt:
my_package.redis-db
my_package.common.utils
after running
docker build -t test:1.0 .
docker run -it test:1.0 bash
cd /usr/local/lib/python2.7/site-packages/my_packages/
ls
__init__.py redis_db common
pip freeze still shows the old packages
and I can still see the dist-info directory
but when I try to run python and import something from the pre-installed packages, I get:
ImportError: No module named my_package.config
Thanks!
Did you try to install Python in your docker_with_pre_installed_packages image, or did you just copy some files? It looks like Python was not properly installed.
By the way, Python 2.7 is no longer supported; I highly recommend using Python 3.
Try using the official Python Docker image, install the dependencies, and compare:
FROM python:3
COPY requirements.txt /
RUN pip install -r /requirements.txt
COPY app app
CMD [ "python", "./my_script.py" ]
I suggest checking what exactly changed in your Docker container as a result of executing
pip install -r requirements.txt
I would:
1) Build a Docker image from the lines before the pip install:
FROM docker_with_pre_installed_packages:1.0
ADD requirements.txt .
CMD /bin/bash
2) Run the container and execute pip install -r requirements.txt manually.
Then, outside Docker (with the container still running), I would see what difference the above command made in the container by executing:
3) docker ps to find the container ID (e.g. 397cd1c9939f)
4) docker diff 397cd1c9939f to see the difference
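For illustration, docker diff prefixes each path with A (added), C (changed), or D (deleted); the paths below are hypothetical:
docker diff 397cd1c9939f
C /usr/local/lib/python2.7/site-packages/my_packages
A /usr/local/lib/python2.7/site-packages/my_packages/redis_db
D /usr/local/lib/python2.7/site-packages/my_packages/config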
Hope it helps.
The problem was with the Jenkins agent running this build.
I ran it on a new one and everything was fixed.
I am running docker compose up, which brings up multiple containers, one of which is Python 3.*, and all the containers have volumes attached to them.
I have also already created a requirements.txt file.
I entered the Python container, installed package x, and then ran
pip freeze > requirements.txt
I then stopped the containers and restarted them, but the Python container didn't start, and the log said module x was not found. So what I did was delete the container and create a new one, and it worked.
My question is: is there any way to avoid deleting the container (I think that's overkill)
but somehow still be able to manage installing packages in the container?
Dockerfile
FROM python:3.6
RUN apt-get update
RUN apt-get install -y gettext
RUN mkdir -p /var/www/server
COPY src/requirements.txt /var/www/server/
WORKDIR /var/www/server
RUN pip install -r ./requirements.txt
EXPOSE 8100
ENTRYPOINT sleep 3 && python manage.py migrate && python manage.py runserver 0.0.0.0:8100
You should copy your project source files into the container during the build and, within it, run pip install -r requirements.txt.
Below is an example to give you an idea:
# ... other build commands above ...
WORKDIR /usr/src/service
COPY ./service .  # copy everything in the service folder/module into WORKDIR inside the image
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
# ... other build commands below ...
Finally, use docker-compose build service to build the service defined in docker-compose.yml, pointing at the Dockerfile in the build context:
...
build:
  context: .
  dockerfile: service/Dockerfile
...
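For context, a fuller sketch of the surrounding docker-compose.yml, assuming the service is simply named service (names are illustrative):
version: "3"
services:
  service:
    build:
      context: .
      dockerfile: service/Dockerfile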
Broadly, set up your Dockerfile so that the least-frequently-changing and most time-costly work happens first; a concrete sketch follows the outline below:
FROM foo
RUN ...        # get OS-level and build dependencies
COPY ...       # only exactly the files needed to identify dependencies
RUN ...        # install dependencies that take a long time
RUN ...        # install more frequently-changing dependencies
COPY ...       # the rest of your wanted content
ENTRYPOINT ... # define me
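For instance, a minimal sketch of this layering for a hypothetical Python service (file and image names are illustrative):
FROM python:3.10-slim
# OS-level build dependencies change rarely, so they come first
RUN apt-get update && apt-get install -y --no-install-recommends build-essential
# copy only the file that identifies dependencies, so this layer's cache
# survives source-code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# the frequently-changing application source comes last
COPY . /app
WORKDIR /app
ENTRYPOINT ["python", "app.py"]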
As @coldly says in their answer, write your dependencies into a requirements file and install them during the image build!
I have a basic Python Dockerfile like this:
FROM python:3.8
RUN pip install --upgrade pip
EXPOSE 8000
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
RUN useradd appuser && chown -R appuser /app
USER appuser
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
I want to run my Flask application in a Docker container using this definition file. Locally I can create a new virtualenv, install everything via pip install -r requirements.txt on Python 3.8, and it does not fail.
When building the Docker image, it fails to install all the packages from requirements.txt. For example, this package fails:
ERROR: Could not find a version that satisfies the requirement cvxopt==1.2.5.post1
ERROR: No matching distribution found for cvxopt==1.2.5.post1
When I comment out the package in requirements.txt, everything seems to work. The package itself claims to be compatible with Python > 2.7. The package pywin32==228 behaves the same way.
Looking at the wheel files in the package, cvxopt 1.2.5.post1 only contains builds for Windows. For Linux (such as the Docker container), you should use cvxopt 1.2.5.
You should replace the version with 1.2.5 (pip install cvxopt==1.2.5)
The latest version, cvxopt 1.2.5.post1, is not compatible with all platforms: https://pypi.org/project/cvxopt/1.2.5.post1/#files
The previous one is compatible with many more platforms and should be able to run in your Docker image: https://pypi.org/project/cvxopt/1.2.5/#files
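The same reasoning applies to pywin32, which is Windows-only. If you need to keep it listed for Windows development, a PEP 508 environment marker makes the Linux build skip it; a sketch of the relevant requirements.txt lines:
cvxopt==1.2.5
pywin32==228; sys_platform == "win32"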
I've created a Flask app and am trying to dockerize it. It uses machine learning libraries; I had some problems downloading them, so my Dockerfile is a little messy, but the image was created successfully.
from alpine:latest
RUN apk add --no-cache python3-dev \
&& pip3 install --upgrade pip
WORKDIR /app
COPY . /app
FROM python:3.5
RUN pip3 install gensim
RUN pip3 freeze > requirements.txt
RUN pip3 --no-cache-dir install -r requirements.txt
EXPOSE 5000
ENV PATH=/venv/bin:$PATH
ENV FLASK_APP /sentiment-service/__init__.py
CMD ["python","-m","flask", "run", "--host", "0.0.0.0", "--port", "5000"]
and when I try:
docker run my_app:latest
I get
/usr/local/bin/python: No module named flask
Of course I have Flask==1.1.1 in my requirements.txt file.
Thanks for any help!
The problem is here:
RUN pip3 freeze > requirements.txt
The > operator in bash overwrites the content of the file. If you want to append to your requirements.txt instead, use the >> operator:
RUN pip3 freeze >> requirements.txt
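Note that appending only helps if your own requirements.txt (the one containing Flask) has been copied into the final stage of the image before the freeze; a minimal sketch of that ordering:
COPY requirements.txt /requirements.txt
RUN pip3 freeze >> /requirements.txt
RUN pip3 --no-cache-dir install -r /requirements.txt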
Thank you all. In the end I rebuilt my app, simplified the requirements, dropped Alpine, and used Python 3.7 in my Dockerfile.
I could run the app locally, but Docker probably could not find some file on the path, or hit some other error from the app; that is why it stopped right after starting.
I want to be able to add some extra requirements to a Docker image I created myself. My strategy is to build the image from a Dockerfile with a CMD that executes a pip install -r command against a mounted volume at runtime.
This is my dockerfile:
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y python-pip python-dev build-essential
RUN pip install --upgrade pip
WORKDIR /root
CMD ["pip install -r /root/sourceCode/requirements.txt"]
Having that Dockerfile, I build the image:
sudo docker build -t test .
And finally I try to attach my new requirements using this command:
sudo docker run -v $(pwd)/sourceCode:/root/sourceCode -it test /bin/bash
My local folder "sourceCode" contains a valid requirements.txt file (it contains only one line with the value "gunicorn").
When I get the prompt I can see that the requirements file is there, but if I execute a pip freeze command, the gunicorn package is not listed.
Why is the requirements.txt file attached correctly while the pip command is not working properly?
TL;DR
The pip command isn't running because you are telling Docker to run /bin/bash instead.
docker run -v $(pwd)/sourceCode:/root/sourceCode -it test /bin/bash
                                                          ^
                                                          here
Longer explanation
An image has no ENTRYPOINT by default, and you don't set one in your Dockerfile, so that remains. The default CMD is nothing; you do override that in your Dockerfile. When you run (ignoring the volume for brevity)
docker run -it test
what Docker tries to execute inside the container is
pip install -r /root/sourceCode/requirements.txt
Pretty straightforward: it looks like it will run pip when you start the container.
Now let's take a look at the command you used to start the container (again, ignoring the volume):
docker run -it test /bin/bash
What actually executes inside the container is
/bin/bash
The CMD arguments you specified in your Dockerfile get overridden by the COMMAND you specify on the command line. Recall that the docker run command takes this form:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
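To see both behaviors side by side, using the image tag test from the question:
docker run -it test             # runs the CMD from the Dockerfile
docker run -it test /bin/bash   # /bin/bash replaces the CMD entirely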
Further reading
This answer has a really to-the-point explanation of what the CMD and ENTRYPOINT instructions do:
The ENTRYPOINT specifies a command that will always be executed when the container starts.
The CMD specifies arguments that will be fed to the ENTRYPOINT.
This blog post on the difference between the ENTRYPOINT and CMD instructions is also worth reading.
You may change the last statement, i.e. the CMD, to the one below, specifying the absolute path of pip:
CMD ["/usr/bin/pip", "install", "-r", "/root/sourceCode/requirements.txt"]
UPDATE: adding an additional answer based on the comments.
One thing must be noted: if a customized image with additional requirements is needed, those requirements should be part of the image rather than installed at run time.
Using the image below as a base to test:
docker pull colstrom/python:legacy
So, packages should be installed using the RUN command in the Dockerfile,
and CMD should be whatever app you actually want to run as the process inside the container.
Checking whether the base image has any pip packages by running the command below returns nothing:
docker run --rm --name=testpy colstrom/python:legacy /usr/bin/pip freeze
Here is a simple example to demonstrate:
Dockerfile
FROM colstrom/python:legacy
COPY requirements.txt /requirements.txt
RUN ["/usr/bin/pip", "install", "-r", "/requirements.txt"]
CMD ["/usr/bin/pip", "freeze"]
requirements.txt
selenium
Build the image with the pip packages (place the Dockerfile and requirements.txt in a fresh directory):
D:\dockers\py1>docker build -t pypiptest .
Sending build context to Docker daemon 3.072 kB
Step 1 : FROM colstrom/python:legacy
---> 640409fadf3d
Step 2 : COPY requirements.txt /requirements.txt
---> abbe03846376
Removing intermediate container c883642f06fb
Step 3 : RUN /usr/bin/pip install -r /requirements.txt
---> Running in 1987b5d47171
Collecting selenium (from -r /requirements.txt (line 1))
Downloading selenium-3.0.1-py2.py3-none-any.whl (913kB)
Installing collected packages: selenium
Successfully installed selenium-3.0.1
---> f0bc90e6ac94
Removing intermediate container 1987b5d47171
Step 4 : CMD /usr/bin/pip freeze
---> Running in 6c3435177a37
---> dc1925a4f36d
Removing intermediate container 6c3435177a37
Successfully built dc1925a4f36d
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Now run the image.
If you do not pass any external command, the container takes its command from CMD, which just shows the list of pip packages; in this case, selenium.
D:\dockers\py1>docker run -itd --name testreq pypiptest
039972151eedbe388b50b2b4cd16af37b94e6d70febbcb5897ee58ef545b1435
D:\dockers\py1>docker logs testreq
selenium==3.0.1
So, the above shows that the package was installed successfully.
Hope this is helpful.
Using the concepts that @Rao and @ROMANARMY explained in their answers, I finally found a way of doing what I wanted: adding extra Python requirements to a self-created Docker image.
My new Dockerfile is as follows:
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y python-pip python-dev build-essential
RUN pip install --upgrade pip
WORKDIR /root
COPY install_req.sh .
CMD ["/bin/bash" , "install_req.sh"]
As the container's command, I now execute a shell script with the following content:
#!/bin/bash
pip install -r /root/sourceCode/requirements.txt
pip freeze > /root/sourceCode/freeze.txt
And finally I build and run the image using these commands:
docker build --tag test .
docker run -itd --name container_test -v $(pwd)/sourceCode:/root/sourceCode test (note: no extra command at the end)
As I explained at the beginning of the post, I have a local folder named sourceCode that contains a valid requirements.txt file with only one line, "gunicorn".
So I finally have the ability to add some extra requirements (the gunicorn package in this example) to a given Docker image.
After building and running my experiment, if I check the logs (docker logs container_test) I see something like this:
Downloading gunicorn-19.6.0-py2.py3-none-any.whl (114kB)
100% |################################| 122kB 1.1MB/s
Installing collected packages: gunicorn
Furthermore, the container has created a freeze.txt file inside the mounted volume that contains all the installed pip packages, including the desired gunicorn:
chardet==2.0.1
colorama==0.2.5
gunicorn==19.6.0
html5lib==0.999
requests==2.2.1
six==1.5.2
urllib3==1.7.1
Now I have other problems with the permissions of the newly created file, but that will probably be a new post.
Thank you!
I'm trying to get into Docker magic and have a question:
I want to run a container in which all the Python packages are installed, and after this "source" container is up, run my Python script in a second container that uses the packages installed in the first one.
I have one Dockerfile in which I install python3 and pip, and in the docker-compose.yml file I build all the requirements into the first container.
How do I update the docker-compose.yml file so that the second container can use the packages installed in the other container?
Use pip freeze to get a list of all installed pip packages along with their specific versions. Standard practice is to keep a requirements.txt text file for pip installations.
$ pip freeze > requirements.txt
$ pip install -r requirements.txt
Then add this line to your Dockerfile:
ONBUILD RUN pip install -r /app/requirements.txt
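Note that ONBUILD instructions fire only when another image is built FROM this one, which fits the two-container setup here; a sketch, with image names illustrative:
# base image, built as e.g. my-python-base
FROM python:3
ONBUILD COPY requirements.txt /app/requirements.txt
ONBUILD RUN pip install -r /app/requirements.txt

# downstream image for the script
FROM my-python-base
COPY my_script.py /app/my_script.py
CMD ["python3", "/app/my_script.py"]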
Another option is to use a virtualenv and Docker volumes.
In one image, build a virtualenv and make it a volume:
RUN virtualenv /venv
RUN /venv/bin/pip install ...
VOLUME /venv
And for the second container, use volumes_from: ['the-first-container'].
You might not need the virtualenv at all if you just make the default Python site-packages path a VOLUME.
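A sketch of the Compose wiring under these assumptions (service names illustrative; volumes_from exists in Compose file format v1/v2 but was removed in v3):
packages:
  build: ./packages
app:
  build: ./app
  volumes_from:
    - packages
  command: /venv/bin/python my_script.py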