How to run multiple Python scripts and an executable file using Docker? - python

I want to create a container that contains two Python packages as well as a package consisting of an executable file.
Here's my main package (dockerized_project) tree:
dockerized_project
├── docker-compose.yml
├── Dockerfile
├── exec_project
│   ├── config
│   │   └── config.json
│   ├── config.json
│   └── gowebapp
├── pythonic_project1
│   ├── __main__.py
│   ├── requirements.txt
│   ├── start.sh
│   └── utility
│       └── utility.py
└── pythonic_project2
    ├── collect
    │   └── collector.py
    ├── __main__.py
    ├── requirements.txt
    └── start.sh
Dockerfile content:
FROM ubuntu:18.04
RUN apt update
RUN apt-get install -y python3.6 python3-pip python3-dev build-essential gcc \
libsnmp-dev snmp-mibs-downloader
RUN pip3 install --upgrade pip
RUN mkdir /app
WORKDIR /app
COPY . /app
WORKDIR /app/snmp_collector
RUN pip3 install -r requirements.txt
WORKDIR /app/proto_conversion
RUN pip3 install -r requirements.txt
WORKDIR /app/pythonic_project1
CMD python3 __main__.py
WORKDIR /app/pythonic_project2
CMD python3 __main__.py
WORKDIR /app/exec_project
CMD ["./gowebapp"]
docker-compose content:
version: '3'
services:
  proto_conversion:
    build: .
    image: pc:2.0.0
    container_name: proto_conversion
    # command:
    #   - "bash snmp_collector/start.sh"
    #   - "bash proto_conversion/start.sh"
    restart: unless-stopped
    ports:
      - 8008:8008
    tty: true
Problem:
When I run this project with docker-compose up --build, only the last CMD command runs. I therefore suspect the earlier CMD instructions in the Dockerfile are discarded, because when I remove the last two CMDs, the first one works fine.
Is there any approach to run multiple Python scripts and an executable file in the background?
I've also tried using the bash files, without any success either.

As mentioned in the documentation, there can be only one CMD in a Dockerfile; if there are more, the last one overrides the others and takes effect.
A key point of using Docker is to isolate your programs, so at first glance you might want to move them to separate containers and have them talk to each other through a shared volume or a Docker network. But if you really need them to run in the same container, putting them in a bash script and replacing the last CMD with CMD run.sh will run them alongside each other:
#!/bin/bash
python3 /path/to/script1.py &
exec python3 /path/to/script2.py
Add COPY run.sh . to the Dockerfile and RUN chmod a+x run.sh to make it executable; the CMD should then be CMD ["./run.sh"].
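Put together, a minimal sketch of the corresponding Dockerfile lines (assuming run.sh sits next to the Dockerfile) looks like this:
COPY run.sh .
RUN chmod a+x run.sh
CMD ["./run.sh"]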

Try it via an entrypoint script:
ENTRYPOINT ["/docker_entrypoint.sh"]
docker_entrypoint.sh
#!/bin/bash
set -e
python3 not__main__.py &
exec python3 __main__.py
The & symbol means the service is run as a daemon in the background.

Best practice is to launch these as three separate containers. That's doubly true since you're taking three separate applications, bundling them into a single container, and then trying to launch three separate things from them.
Create a separate Dockerfile in each of your project subdirectories. These can be simpler, especially for the one that just contains a compiled binary:
# execproject/Dockerfile
FROM ubuntu:18.04
WORKDIR /app
COPY . ./
CMD ["./gowebapp"]
Then in your docker-compose.yml file have three separate stanzas to launch the containers:
version: '3'
services:
  pythonic_project1:
    build: ./pythonic_project1
    ports:
      - 8008:8008
    environment:
      PY2_URL: 'http://pythonic_project2:8009'
      GO_URL: 'http://execproject:8010'
  pythonic_project2:
    build: ./pythonic_project2
  execproject:
    build: ./execproject
If you really can't rearrange your Dockerfiles, you can at least launch three containers from the same image in the docker-compose.yml file:
services:
  pythonic_project1:
    build: .
    working_dir: /app/pythonic_project1
    command: python3 __main__.py
  pythonic_project2:
    build: .
    working_dir: /app/pythonic_project2
    command: python3 __main__.py
  execproject:
    build: .
    working_dir: /app/exec_project
    command: ./gowebapp
There are several good reasons to structure your project with multiple containers and images:
If you roll your own shell script and use background processes (as other answers do), the script just won't notice if one of the processes dies; here you can use Docker's restart mechanism to restart individual containers.
If you have an update to one of the programs, you can update and restart only that single container and leave the rest intact.
If you ever use a more complex container orchestrator (Docker Swarm, Nomad, Kubernetes) the different components can run on different hosts and require a smaller block of CPU/memory resource on a single node.
If you ever use a more complex container orchestrator, you can individually scale up components that are using more CPU.

Related

Auto-updating the host's files and the files passed to the container synchronously (Dockerfile)

This is a concept question relating to images and containers. I have a Python 3.9 image on Ubuntu, and a main.py which will be altered and rewritten. However, I see that when I change the contents of main.py from my host computer, the main.py within the container does not stay in sync: its contents will not change unless I change it within the container itself.
Is there a way to have it so that when there is an update on the host computer, the files within the container also pick up the latest changes, so the files stay one to one? I obviously want to alter main.py from my host computer; if I did it the other way around, the changes made in the container wouldn't be visible outside of it.
Host Computer:
.
├── files/
│ ├── main.py
│ └── text.csv
└── Dockerfile
Container Directory:
test-poetry/
├─tests/
│ └─__init__.py
├─README.md
├─pyproject.toml
├─test_poetry/
│ ├─__init__.py
│ ├─main.py
│ └─text.csv
└─poetry.lock
Dockerfile contents:
#Python - 3.9 - ubuntu
FROM python:3.9-slim
ENTRYPOINT [ "/bin/bash" ]
WORKDIR /src/test-poetry/test_poetry
COPY files .
You need to mount files as a container volume.
Dockerfile:
FROM python:3.9-slim
# makes no sense to have this as a long path
WORKDIR /project
# keep this towards the end of the file for clarity
ENTRYPOINT [ "/bin/bash" ]
Then build the image and run your container with:
docker build -t test/poetry:0.1 .
docker container run --rm -ti -v $(pwd)/files:/project test/poetry:0.1
NOTE:
For your purposes you can even skip the build completely and run a simple container like:
docker run \
--rm -ti \
-v $(pwd)/files:/project \
--workdir /project \
python:3.9-slim bash
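If you prefer to keep this in docker-compose, a rough equivalent would be the following sketch (the service name is an assumption):
services:
  poetry:
    image: python:3.9-slim
    working_dir: /project
    volumes:
      - ./files:/project
    entrypoint: /bin/bash
    stdin_open: true
    tty: true
which you could then start with docker-compose run --rm poetry.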

Python script in Docker container runs fine when called manually, but not via Docker CLI

Sorry if the title is unclear. I have a custom image that runs a Python script on a loop. The script continues to run as expected when I call it directly with python3 <script> locally or even in the Docker container that spins up from the image I've built.
However, when I run the containers via docker-compose, the script does nothing... and when I run it as a Docker service or standalone with docker run, the script runs correctly but only for a short time and then stops.
Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.8-slim-buster
WORKDIR /mqtt_client
COPY . .
RUN pip3 install -r requirements.txt
CMD ["python3", "/mqtt_client/mqtt_client.py"]
Any ideas?
docker-compose snippet:
py_publisher:
  image: python-mqtt-client
  deploy:
    mode: replicated
    replicas: 3
  depends_on:
    - broker
  entrypoint: "python3 /path/to/mqtt_client.py"
image directory structure:
/ <-- root
.
├── Dockerfile
├── mqtt_client.py
└── requirements.txt
Remove the entrypoint from your docker-compose file; the container will then start with the CMD from the Dockerfile.
You can also set the restart option to always.
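Applied to your snippet, that would look roughly like this (the broker service is assumed to be defined elsewhere in the file; note that under swarm mode deploy.restart_policy is used instead of restart):
py_publisher:
  image: python-mqtt-client
  restart: always
  deploy:
    mode: replicated
    replicas: 3
  depends_on:
    - broker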

Best practice: 10+ projects to run using ONE parameterised container

The idea is to have a single container which will contain all small projects and will run based on parameters.
What is the current situation:
I have the project folders laid out this way:
├── MAIN_PROJECT_FOLDER
│   ├── PROJECT_SUB_CATEGORY1
│   │   ├── PROJECT_NAME_FOLDER1
│   │   │   ├── run.sh
│   │   │   ├── main.py
│   │   │   └── config.py
│   │   └── PROJECT_NAME_FOLDER2
│   │       ├── run.sh
│   │       ├── main.py
│   │       └── config.py
│   └── PROJECT_SUB_CATEGORY2
│       ├── PROJECT_NAME_FOLDER1
│       │   ├── run.sh
│       │   ├── main.py
│       │   └── config.py
│       └── PROJECT_NAME_FOLDER2
│           ├── run.sh
│           ├── main.py
│           └── config.py
Each run.sh file has prod/dev parameters which can be executed like this:
sudo ./run.sh prod = prod
sudo ./run.sh dev = dev
sudo ./run.sh = dev
What is the way to create another .sh file or Dockerfile which in the end can be executed like this:
sudo docker run CONTAINER_NAME PROJECT_NAME PROD/DEV
sudo docker run test_contaner test_project1 prod
sudo docker run test_contaner test_project1 dev
sudo docker run test_contaner test_project2 prod
... and so on
Basically, each project is the parameter and prod/dev will be part of run.sh execution somehow.
Looking for the best practice to make this happen.
The best practice is generally to have an image that does only one thing. In your example that would imply four separate Docker images; each directory would have its own Dockerfile.
It also tends to be easier to configure settings like this using environment variables than command-line parameters. Sites like https://12factor.net/ describe this and some other practices for building services. (In YAML specifications like Docker Compose or Kubernetes, it is easier to add another key/value environment pair than to build up a correct command line from multiple disparate parts, in my experience.)
This leads you to a sequence like
sudo docker build -t me/cat1proj1 CATEGORY_1/PROJECT_1
sudo docker run -e ENVIRONMENT=prod me/cat1proj1
Architecturally, a Docker container runs a single process, and absolutely nothing stops you from writing the wrapper script you describe. That single command is specified as a combination of an "entrypoint" and a "command"; if you specify both, the command is passed as arguments to the entrypoint. The "command" part can be specified in the Dockerfile CMD, but it can also be overridden on the docker run command line.
If you write no special scripts at all, you can run (assuming you've COPYed the projects to the right directories)
sudo docker run test_image ./test_project1/run.sh prod
(I have a couple of projects that are the same application with different scripts to start them in different ways – a Web server vs. an async job runner with the same code, for instance – and just launch them with alternate startup scripts this way.)
There is a pattern of making some other script the ENTRYPOINT, and interpreting the "command" as just arguments to that script. The command gets passed to the script as arguments $1, $2, "$@". The problem with doing this is that it breaks some routine debugging paths.
# "test_project1" "prod" passed as arguments to entrypoint script
sudo docker run test_image test_project1 prod
# But that breaks getting a debug shell
sudo docker run --rm -it test_image bash
# More complex commands get awkward
sudo docker run --rm --entrypoint=/bin/ls test_image -l /app
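If you do go that route anyway, here is a minimal sketch of such an entrypoint script (the /app layout, the argument order, and the assumption that each run.sh is executable are mine, not part of the question):
#!/bin/sh
# Hypothetical wrapper: $1 = project folder under /app, $2 = prod/dev (defaults to dev)
set -e
PROJECT="$1"
ENVIRONMENT="${2:-dev}"
exec "/app/${PROJECT}/run.sh" "${ENVIRONMENT}"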
I would personally use a tool like Supervisor, which can run inside one Docker container.
Installing supervisor on Ubuntu and Debian based distros:
sudo apt install supervisor
Starting supervisor daemon:
sudo service supervisor start
In /etc/supervisor/supervisord.conf you will find the place to put the configs for your projects:
[include]
files = /etc/supervisor/conf.d/*.conf
Now you can create a supervisor configuration and copy it to /etc/supervisor/conf.d/. Example supervisor config for project 1:
project_1_supervisor.conf:
[program:project_1_app]
command=/usr/bin/bash /project_1_path/run.sh prod
directory=/project_1_path/
autostart=true
autorestart=true
startretries=3
stderr_logfile=/var/log/project_1.err.log
stdout_logfile=/var/log/project_1.out.log
After this, make supervisor pick up the new config:
sudo supervisorctl reread
sudo supervisorctl update
After this you can check if your project program runs:
$ supervisorctl
project_1_app RUNNING pid 590, uptime 0:02:45
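Note that inside a container you would normally run supervisord in the foreground as the container's main process rather than via service; a sketch of the corresponding Dockerfile line (paths as installed by the Debian package):
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]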
I think the best way to handle this is with ENV. Here is a complete example of what you are looking for.
The directory structure is the one from the demo repository cloned below.
Here is the Dockerfile that clones the demo app and wires everything up. It takes four ENVs; by default it will run project A.
ENV BASE_PATH="/opt/project"
This ENV sets the base path the project is cloned into.
ENV PROJECT_PATH="/main/sub_folder_a/project_a"
This ENV selects the project path (for example, project A or project B).
ENV SCRIPT_NAME="hello.py"
This ENV names the actual file to run; it can be run.sh or main.py in your case.
ENV SYSTEM_ENV=dev
This ENV is passed to run.sh and can be either dev or prod.
FROM python:3.7.4-alpine3.10
WORKDIR /opt/project
# Required Tools
RUN apk add --no-cache supervisor git tree && \
mkdir -p /etc/supervisord.d/
# clone remote project or copy your own one
RUN echo "Starting remote clonning...."
RUN git clone https://github.com/Adiii717/python-demo-app.git /opt/project
RUN tree /opt/project
# ENVs for starting different projects; can be overridden at run time
ENV BASE_PATH="/opt/project"
ENV PROJECT_PATH="/main/sub_folder_a/project_a"
ENV SCRIPT_NAME="hello.py"
# possible dev or prod
ENV SYSTEM_ENV=dev
RUN chmod +x /opt/project/main/*/*/run.sh
# general config
RUN echo $'[supervisord] \n\
[unix_http_server] \n\
file = /tmp/supervisor.sock \n\
chmod = 0777 \n\
chown= nobody:nogroup \n\
[supervisord] \n\
logfile = /tmp/supervisord.log \n\
logfile_maxbytes = 50MB \n\
logfile_backups=10 \n\
loglevel = info \n\
pidfile = /tmp/supervisord.pid \n\
nodaemon = true \n\
umask = 022 \n\
identifier = supervisor \n\
[supervisorctl] \n\
serverurl = unix:///tmp/supervisor.sock \n\
[rpcinterface:supervisor] \n\
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface \n\
[include] \n\
files = /etc/supervisord.d/*.conf' >> /etc/supervisord.conf
# script supervisord Config
RUN echo $'[supervisord] \n\
nodaemon=true \n\
[program:run_project ] \n\
command= /run_project.sh \n\
stdout_logfile=/dev/fd/1 \n\
stdout_logfile_maxbytes=0MB \n\
stderr_logfile_maxbytes = 0 \n\
stderr_logfile=/dev/fd/2 \n\
redirect_stderr=true \n\
autorestart=false \n\
startretries=0 \n\
exitcodes=0 ' >> /etc/supervisord.d/run_project.conf
RUN echo $'#!/bin/ash \n\
echo -e "\x1B[31m starting project having name ${BASE_PATH}${PROJECT_PATH}/${SCRIPT_NAME} \x1B[0m" \n\
fullfilename=${BASE_PATH}${PROJECT_PATH}/${SCRIPT_NAME} \n\
filename=$(basename "$fullfilename") \n\
extension="${filename##*.}" \n\
if [[ ${extension} == "sh" ]];then \n\
sh ${BASE_PATH}${PROJECT_PATH}/${SCRIPT_NAME} ${SYSTEM_ENV} \n\
else \n\
python ${BASE_PATH}${PROJECT_PATH}/${SCRIPT_NAME} \n\
fi ' >> /run_project.sh
RUN chmod +x /run_project.sh
EXPOSE 9080 8000 9088 80
ENTRYPOINT ["supervisord", "--nodaemon", "--configuration", "/etc/supervisord.conf"]
Build the docker image
docker build -t multipy .
Run the docker container
docker run --rm -it multipy
This will run project A by default.
To run project B, your command will be
docker run --rm -it --env PROJECT_PATH=/main/sub_folder_b/project_b --env SCRIPT_NAME=hello.py multipy
To run your run.sh bash file, the command will be
docker run --rm -it --env SCRIPT_NAME=run.sh multipy

Reuse Docker Images for multiple python applications

I am quite new to the whole world of Docker. I am trying to set up an environment for different Python machine learning applications, each of which should run independently of the others in its own Docker container. Since I don't really understand how to use base images and extend them, I am using a Dockerfile for each new application that defines the different packages it uses. They all have one thing in common: they use FROM python:3.6-slim as the base.
I am looking for a starting point or a way to easily extend this base image to form new images which contain the individual packages each application needs, in order to save disk space. Right now, each of the images is approximately 1 GB, and hopefully this could be a way to reduce that.
Without going into details about the different storage backend solutions for Docker (check Docker - About Storage Drivers for reference), Docker reuses all the shared intermediate layers of an image.
Having said that, even though you see [1.17 GB, 1.17 GB, 1.17 GB, 138 MB, 918 MB] in the docker images output, it does not mean that the sum of those sizes is used in your storage. We can say the following:
sum(`docker images`) <= space on disk
Each of the steps in the Dockerfile creates a layer.
Let's take the following project structure:
├── common-requirements.txt
├── Dockerfile.1
├── Dockerfile.2
├── project1
│   ├── requirements.txt
│   └── setup.py
└── project2
    ├── requirements.txt
    └── setup.py
With Dockerfile.1:
FROM python:3.6-slim
# - here we have a layer from python:3.6-slim -
# 1. Copy requirements and install dependencies
# we do this first because we assume that requirements.txt changes
# less than the code
COPY ./common-requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
# - here we have a layer from python:3.6-slim + your own requirements-
# 2. Install your python package in project1
COPY ./project1 /code
RUN pip install -e /code
# - here we have a layer from python:3.6-slim + your own requirements
# + the code install
CMD ["my-app-exec-1"]
With Dockerfile.2:
FROM python:3.6-slim
# - here we have a layer from python:3.6-slim -
# 1. Copy requirements and install dependencies
# we do this first because we assume that requirements.txt changes
# less than the code
COPY ./common-requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
# == here we have a layer from python:3.6-slim + your own requirements ==
# == both containers are going to share the layers until here ==
# 2. Install your python package in project1
COPY ./project2 /code
RUN pip install -e /code
# == here we have a layer from python:3.6-slim + your own requirements
# + the code install ==
CMD ["my-app-exec-2"]
The two Docker images are going to share the layers with Python and common-requirements.txt. This is extremely useful when building applications with a lot of heavy libraries.
To build them, I will do:
docker build -t app-1 -f Dockerfile.1 .
docker build -t app-2 -f Dockerfile.2 .
So keep in mind that the order in which you write the steps in a Dockerfile does matter.
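If you want to confirm the sharing, a quick check using the image tags built above:
docker history app-1
docker history app-2
docker system df -v
docker system df -v reports, among other things, how much of each image's size is shared with other images.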

`docker run -v` doesn't work as expected

I'm experimenting with a Docker image repository cloned from https://github.com/amouat/example_app.git (which is based on another repository: https://github.com/mrmrcoleman/python_webapp).
The structure of this repository is:
├── Dockerfile
├── example_app
│ ├── app
│ │ ├── __init__.py
│ │ └── views.py
│ └── __init__.py
├── example_app.wsgi
After building this repository with the tag example_app, I try to mount a directory from the host into the container:
$ pwd
/Users/satoru/Projects/example_app
$ docker run -v $(pwd):/opt -i -t example_app bash
root@3a12236a1471:/# ls /opt/example_app/
root@3a12236a1471:/# exit
$ ls example_app
__init__.py app run.py
Note that when I tried to list files in /opt/example_app in the container it turned out to be empty.
What's wrong in my configuration?
Your Dockerfile looks like this:
FROM python_webapp
MAINTAINER amouat
ADD example_app.wsgi /var/www/flaskapp/flaskapp.wsgi
CMD service apache2 start && tail -F /var/log/apache2/error.log
So you won't find the files you mentioned, since they were not ADD-ed in the Dockerfile. Also, this is not going to work unless python_webapp installs Apache, creates /var/www/flaskapp, and /var/log/apache2 exists. Without knowing what these other custom parts do, it is hard to know what to expect.
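As a quick sanity check of the bind mount itself, independent of this image, you could run something like (a sketch using a stock image):
docker run --rm -v $(pwd):/opt -it ubuntu ls /opt/example_app
If that also comes up empty, the problem is with how the host directory is shared into the Docker host rather than with the image itself.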
