I have a Dockerfile like this:
FROM python:3.9
WORKDIR /app
RUN apt-get update && apt-get upgrade -y
RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/install-poetry.py | python -
ENV PATH /root/.local/bin:$PATH
COPY pyproject.toml poetry.lock Makefile ./
COPY src ./src
COPY tests ./tests
RUN poetry install && poetry run pytest && make clean
CMD ["bash"]
As you can see, the tests run during the build. This slows the build down a little, but ensures that my code actually runs inside the Docker container.
Tests passing on my local machine does not mean they will also pass in the Docker container.
Suppose I add a feature that uses the chromedriver or ffmpeg binaries, which are present on my system, so the tests pass locally.
But if I forget to install those dependencies in the Dockerfile, the docker build will fail (since the tests run during the build).
What is the standard way of doing what I am trying to do?
Is my Dockerfile good, or should I do something differently?
Running pytest during image construction makes no sense to me. What you can do instead is run the tests after the image is built. Your pipeline should look something like this:
Test your python package locally
Build wheel with poetry
Build docker image with your python package
Run your docker image to test if it works (running pytests for example)
Publish your tested image to container registry
If you use multiple stages, you can run docker build without running the tests, and if you want to run the tests afterwards, docker build --target test will run them against what was previously built. This approach is explained in the official Docker documentation.
This way we not only avoid running the build twice (thanks to Docker's caching mechanism), we also avoid shipping the test code in the final image.
A possible use of this in CI/CD is to run both commands when developing locally and in CI, and to skip the test command in CD, because the code being deployed has already been tested.
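Applied to the Dockerfile from the question, the multi-stage idea could look roughly like this (the stage names are my own, and it assumes BuildKit, where a plain docker build skips stages the final target does not depend on):

```dockerfile
# Shared base: dependencies and application code, no tests
FROM python:3.9 AS base
WORKDIR /app
RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/install-poetry.py | python -
ENV PATH /root/.local/bin:$PATH
COPY pyproject.toml poetry.lock ./
COPY src ./src
RUN poetry install

# Test stage: only built when explicitly targeted
FROM base AS test
COPY tests ./tests
RUN poetry run pytest

# Final image: built from base, ships no test code
FROM base AS final
CMD ["bash"]
```

Then `docker build .` produces the final image without running the tests, while `docker build --target test .` runs them against the same cached base layers.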
My Docker knowledge is very poor; I have Docker installed only because I want to use freqtrade, so I followed this simple HOWTO:
https://www.freqtrade.io/en/stable/docker_quickstart/
Now all freqtrade commands run using Docker, for example this one:
D:\ft_userdata\user_data>docker-compose run --rm freqtrade backtesting --config user_data/cryptofrog.config.json --datadir user_data/data/binance --export trades --stake-amount 70 --strategy CryptoFrog -i 5m
Well, I started to have problems when I tried this strategy
https://github.com/froggleston/cryptofrog-strategies
for freqtrade. This strategy requires the Python module finta.
I understood that the finta module should be installed in my Docker container
and NOT on my Windows system (there it would have been an easy "pip install finta" from the console!).
Even though I searched Stack Overflow and Google for a solution, I do not understand how to do this step (install the finta Python module in the freqtrade container).
After several hours I am really lost.
Can someone explain to me in easy steps how to do this?
The freqtrade mount point is
D:\ft_userdata\user_data
You can get bash from your container with this command:
docker-compose exec freqtrade bash
and then:
pip install finta
Or run it as a single command:
docker-compose exec freqtrade pip install finta
If the above solutions don't work, you can run the docker ps command and get the container ID of your container. Then:
docker exec -it CONTAINER_ID bash
pip install finta
You need to make your own docker image that has finta installed. Luckily you can build on top of the standard freqtrade docker image.
First, make a Dockerfile with these two lines in it:
FROM freqtradeorg/freqtrade:stable
RUN pip install finta
Then build the image (calling the new image myfreqtrade) by running the command
docker build -t myfreqtrade .
Finally change the docker-compose.yml file to run your image by changing the line
image: freqtradeorg/freqtrade:stable
to
image: myfreqtrade
And that should be that.
The way to get your Python code running in a container is to package it as a Docker image and then run a container based on that image.
To generate a Docker image we need to create a Dockerfile that contains the instructions needed to build the image. The Dockerfile is then processed by the Docker builder, which generates the Docker image. Then, with a simple docker run command, we create and run a container with the Python service.
An example of a Dockerfile for assembling a Docker image for a Python service that installs finta is the following:
# set base image (host OS)
FROM python:3.8
# install dependencies
RUN pip install finta
# command to run on container start
CMD [ "python", "-V" ]
For each instruction in the Dockerfile, the Docker builder generates an image layer and stacks it on top of the previous ones. The resulting Docker image is therefore simply a read-only stack of these layers. We build the image with:
docker build -t myimage .
Then we can check that the image is in the local image store:
docker images
Please refer to the freqtrade Dockerfile: https://github.com/freqtrade/freqtrade/blob/develop/Dockerfile
I have created a chat-bot using Python 3.6 and TensorFlow 1.15, and I created a command-line utility for testing it in the local environment.
The command-line utility works fine without Docker, as shown in the image below.
The problem appeared when I dockerized (containerized) the application with its dependencies.
The command-line utility closes automatically after running the Docker image.
The Dockerfile for the application is below.
FROM python:3.6-buster
WORKDIR /usr/app
COPY ./req.txt ./
RUN pip install -r req.txt
COPY ./ ./
RUN python -m nltk.downloader punkt
CMD ["python","botui.py"]
After running the Docker image, the command-line utility shuts down automatically.
Please help me find the solution.
Should I add something to the Dockerfile?
Your Dockerfile seems fine.
For an interactive chatbot conversation you need to add the "-i" flag to your docker run command, which keeps STDIN open:
docker run -i <image_name>
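A minimal sketch of why the container exits (the helper function name is mine, not from the question): without -i, the container's STDIN is closed, so the first input() call raises EOFError and a typical chat loop ends immediately.

```python
import io
import sys

def read_user_line(prompt="you> "):
    """Return the next line from stdin, or None when stdin is closed."""
    try:
        return input(prompt)
    except EOFError:
        # This is what happens inside `docker run` without -i:
        # stdin is closed, so the chat loop terminates at once.
        return None

# Simulate `docker run` without -i: no stdin available.
sys.stdin = io.StringIO("")
print(read_user_line(prompt=""))   # None -> the bot exits immediately

# Simulate `docker run -i`: stdin stays open and delivers input.
sys.stdin = io.StringIO("hello\n")
print(read_user_line(prompt=""))   # hello
```

If you also want a proper terminal (e.g. for line editing), use `docker run -it <image_name>`.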
My laptop (a MacBook) came with an old version of Python (2.7) pre-installed.
I have a couple of Python scripts, task1.py and task2.py, that require Python 3.7 and a pip install some_handy_python_package.
Several online sources say updating the system-wide version of Python on a Macbook could break some (unspecified) legacy apps.
Seems like a perfect use case for Docker, to run some local scripts with a custom Python setup, but I cannot find any online examples for this simple use case:
Laptop hosts folder mystuff has two scripts task1.py and task2.py (plus a Dockerfile and docker-compose.yml file)
Create a docker image with python 3.7 and whatever required packages, eg pip install some_handy_python_package
Can run any local-hosted python scripts from inside the docker container
perhaps something like docker run -it --rm some-container-name, then at a bash prompt "inside" Docker run the script(s) via python task1.py
or perhaps something like docker-compose run --rm console python task1.py
I assume the Dockerfile starts off something like this:
FROM python:3.7
RUN pip install some_handy_python_package
but I'm not sure what to add to either the Dockerfile or a docker-compose.yml file so I can either a) run a bash prompt in Docker that lets me run python task1.py, or b) define a "console" service that can invoke python task1.py from the command line.
In case it helps someone else, here's a basic example of how to run local-folder Python scripts inside a Dockerized Python environment. (A better example would set up the volume share within the Dockerfile.)
cd sc2
pwd  # /Users/thisisme/sc2 -- you use this path later, when running docker, to set the volume share
Create Dockerfile
# Dockerfile
FROM python:3.7
RUN pip install some_package
Build the image, named rp in this example:
docker build -t rp .
In the local folder, create some python scripts, for example: task1.py
# task1.py
from some_package import SomePackage
# do some stuff
In the local folder, run the script in the container by mounting the folder at /app:
docker run --rm -v YOUR_PATH_TO_FOLDER:/app rp python /app/task1.py
Specifically:
docker run --rm -v /Users/thisisme/sc2:/app rp python /app/task1.py
And sometimes it is handy to run the python interpreter in the container while developing code:
docker run -it --rm rp
>>> 2 + 2
4
>>>
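To cover the docker-compose variant from the question as well, a minimal sketch of a compose file with a "console" service (the service name and mount path are assumptions), assuming the Dockerfile above sits in the same folder:

```yaml
# docker-compose.yml -- hypothetical "console" service
services:
  console:
    build: .
    working_dir: /app
    volumes:
      - .:/app
```

Then `docker-compose run --rm console python task1.py` runs the script with the host folder mounted at /app.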
I am trying to deploy a Python web service (with Flask) that uses CNTK in a Docker container. I use an Ubuntu base image from Microsoft that is supposed to contain all the necessary and correct programs and libraries to run CNTK.
The script works locally (on Windows) and also when I run the container and start a bash session from the command line with
docker exec -it <container_id> bash
and start the script from "within the container".
An important detail is that the Python script uses two precompiled modules that are *.pyd files on Windows and *.so files on Linux. So for the Docker image I replaced the former with the latter to run the script from within the container.
The problems start when I launch the script with a CMD in the Dockerfile. Building the image shows no problems. But when I start the container with
docker run -p 1234:80 app
I get the following error:
...
ImportError: libpython3.5m.so.1.0: cannot open shared object file: No such file or directory
It seems like the library is missing. But (I repeat) when I run the script from a bash session inside the container (which, as far as I can see, should only have the container's libraries), everything works fine. I can even look up the library with
ldd $(which python)
And the file is definitely in the folder. So the question is why python can't find its dependency when running the docker container.
It gets even weirder when I try to give the path to the library explicitly by putting it in an environment variable:
ENV LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/root/anaconda3/pkgs/python-3.5.2-0/lib/"
Then it seems the library is found, but it is not accepted as correct:
ImportError: dynamic module does not define init function (initcython_bbox)
"cython_bbox" is the name of one of the *.pyd / *.so files/libraries to be imported. This is apparently a typical error for these kinds of file types, but I don't have any experience with them.
I am also not yet at the point (in my personal development) where I can compile my own files from foreign sources or create the Docker image itself on my own. I rely on the parts I got from Microsoft, but I would be open to suggestions.
I also already tried to install the library anew inside my Dockerfile after importing the base image with
RUN apt-get install -y libpython3.5
But it provoked the same error as when I put the path in the environment variable.
I am really eager to know what goes wrong here. Why does everything run smoothly "inside the container" but not when CMD starts it automatically at container initialization?
For additional info I add the Dockerfile:
# Use an official Python runtime as a parent image
FROM microsoft/cntk:2.5.1-cpu-python3.5
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
RUN apt-get update && apt-get install -y python-pip
RUN pip install --upgrade pip
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Run app.py when the container launches
CMD ["python", "test.py"]
The rest of the project is a pretty straightforward flask-webapp that runs without problems when I comment out all import of the actual CNTK-project. It is the CNTK Object Detection with Faster-RCNN by the way, as it can be found in the cntk-git-repository.
EDIT:
I found out what the actual problem is, yet I still have no way to solve it. When I start bash with "docker exec", it runs a startup script that activates an anaconda environment with Python 3.5 and all the needed libraries. But when CMD just starts python, this is done by the standard Bourne shell sh, which (as I found out) runs with Python 2.7.
So I need a way to either start my container with bash (including its startup scripts) or somehow activate the environment at startup in another way.
I looked up the script and it basically checks if bash is the current shell, sets some environment variables and activates the environment.
if [ -z "$BASH_VERSION" ]; then
echo Error: only Bash is supported.
elif [ "$(basename "$0" 2> /dev/null)" == "activate-cntk" ]; then
echo Error: this script is meant to be sourced. Run 'source activate-cntk'
else
export PATH="/cntk/cntk/bin:$PATH"
export LD_LIBRARY_PATH="/cntk/cntk/lib:/cntk/cntk/dependencies/lib:$LD_LIBRARY_PATH"
source "/root/anaconda3/bin/activate" "/root/anaconda3/envs/cntk-py35"
cat <<MESSAGE
************************************************************
CNTK is activated.
Please checkout tutorials and examples here:
/cntk/Tutorials
/cntk/Examples
To deactivate the environment run
source /root/anaconda3/bin/deactivate
************************************************************
MESSAGE
fi
I tried dozens of things, like linking sh to bash
RUN ln -fs /bin/bash /bin/sh
or using bash as ENTRYPOINT.
I have found a workaround that works for now.
First I manually link python to python3 in my environment:
RUN ln -fs /root/anaconda3/envs/cntk-py35/bin/python3.5 /usr/bin/python
Then I add the environment libraries to the Library-Path:
ENV LD_LIBRARY_PATH "/cntk/cntk/lib:/cntk/cntk/dependencies/lib:$LD_LIBRARY_PATH"
And to be sure I add all important folders to PATH:
ENV PATH "/cntk/cntk/bin:$PATH"
ENV PATH "/root/anaconda3/envs/cntk-py35/bin:$PATH"
Then I have to install my Python packages again:
RUN pip install flask
And can finally just start my script with:
CMD ["python", "app.py"]
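Putting the workaround together, the relevant part of the Dockerfile looks roughly like this (a sketch assembled from the steps above, not a verified build):

```dockerfile
FROM microsoft/cntk:2.5.1-cpu-python3.5
WORKDIR /app
ADD . /app
# point `python` at the anaconda environment's python3.5
RUN ln -fs /root/anaconda3/envs/cntk-py35/bin/python3.5 /usr/bin/python
# make the CNTK libraries and environment binaries visible
ENV LD_LIBRARY_PATH "/cntk/cntk/lib:/cntk/cntk/dependencies/lib:$LD_LIBRARY_PATH"
ENV PATH "/root/anaconda3/envs/cntk-py35/bin:/cntk/cntk/bin:$PATH"
# reinstall packages into the environment's python
RUN pip install flask
CMD ["python", "app.py"]
```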
I have also found this Git repository doing pretty much the same thing I did, and they also need to activate their environment. I realize I need to learn how to write better Dockerfiles. I think this is the correct way to do it, i.e. using a shell script as ENTRYPOINT
ENTRYPOINT ["/app/run.sh"]
which activates the environment, installs additional Python packages (this could be a problem), and starts the actual app:
#!/bin/bash
source /root/anaconda3/bin/activate root
pip install easydict
pip install azure-ml-api-sdk==0.1.0a9
pip install sanic
python3 /app/app.py
I am a beginner in using Docker.
I'm using Docker Toolbox on Windows 7. I built an image for my Python web app and everything works fine.
However, this app uses the nltk module, which also needs Java and JAVA_HOME pointing to the Java installation.
When running on my computer I can set JAVA_HOME manually, but how do I do it in the Dockerfile so that it won't fail when running on another machine?
Here is my error :
P.S.: answer below.
When you are running a container you have the option of passing in environment variables that will be set in your container using the -e flag. This answer explains environment variables nicely: How do I pass environment variables to Docker containers?
docker container run -e JAVA_HOME='/path/to/java' <your image>
Make sure your image actually contains Java as well. You might want to look at something like the openjdk:8 image on docker hub.
It sounds like you need a Dockerfile to build your image. Have a look at the ENV instruction documented here to set the JAVA_HOME variable: https://docs.docker.com/engine/reference/builder/#env and then build your image with docker build.
I see you've already tried that and didn't have much luck. Run the container, and instead of your application process, run a bash script along the lines of echo $JAVA_HOME, so you can at least verify that part is working.
Also make sure you copy in whatever files/binaries needed to the appropriate directories within the image in the docker file as noted below.
I finally found the way to install Java for the Dockerfile: use the Java install command lines from the Ubuntu image.
Below is the Dockerfile. Thanks for reading.
RUN apt-get update
RUN apt-get install -y python-software-properties software-properties-common
RUN add-apt-repository -y ppa:openjdk-r/ppa
RUN apt-get update
RUN apt-get install -y openjdk-8-jdk
# ENV persists into every container started from the image,
# so a separate `RUN export JAVA_HOME` is not needed (and would have no effect).
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/