I'm experimenting with a Docker image repository cloned from https://github.com/amouat/example_app.git (which is based on another repository: https://github.com/mrmrcoleman/python_webapp).
The structure of this repository is:
├── Dockerfile
├── example_app
│ ├── app
│ │ ├── __init__.py
│ │ └── views.py
│ └── __init__.py
├── example_app.wsgi
After building this repository with the tag example_app, I try to mount a directory from the host into the container:
$ pwd
/Users/satoru/Projects/example_app
$ docker run -v $(pwd):/opt -i -t example_app bash
root@3a12236a1471:/# ls /opt/example_app/
root@3a12236a1471:/# exit
$ ls example_app
__init__.py app run.py
Note that when I tried to list files in /opt/example_app in the container it turned out to be empty.
What's wrong in my configuration?
Your Dockerfile looks like this:
FROM python_webapp
MAINTAINER amouat
ADD example_app.wsgi /var/www/flaskapp/flaskapp.wsgi
CMD service apache2 start && tail -F /var/log/apache2/error.log
So you won't find the files you mentioned, since they were never ADD-ed in the Dockerfile. Also, this is not going to work unless python_webapp installs Apache, creates /var/www/flaskapp, and /var/log/apache2 exists. Without knowing what these other custom parts do, it is hard to know what to expect.
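If the goal is to bake the application code into the image rather than rely on a bind mount, a minimal sketch would also ADD the package directory (the target path here is an assumption, chosen to sit next to the WSGI file):
FROM python_webapp
MAINTAINER amouat
# assumption: serve the package from the same directory as flaskapp.wsgi
ADD example_app /var/www/flaskapp/example_app
ADD example_app.wsgi /var/www/flaskapp/flaskapp.wsgi
CMD service apache2 start && tail -F /var/log/apache2/error.log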
I have xml files that reside in a directory like this:
src
|---lib
| |---folder
| | |---XML files
| |---script.py
|---app.py
The app.py file runs the code in script.py, and script.py requires the XML files. When I run the server locally (Windows) I can just use the relative path "lib\folder\'xml files'". But when I deploy my server to Cloud Run, it says the files don't exist.
I've tried to specify the absolute path by doing this in script.py
package_directory = os.path.dirname(os.path.abspath(__file__))
path = os.path.join(package_directory, "folder\'xml files")
and tried changing all the backslashes to forward slashes, but the error still occurs.
In the dockerfile, I had this:
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
which I believe copies everything in the src folder except things specified in .dockerignore, which contains:
Dockerfile
README.md
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache
Because Cloud Run requires a container, a good test for you would be to create the container and run it locally. I suspect that it's your container that's incorrect rather than Cloud Run.
I created the following repro of your code:
.
├── app.py
├── Dockerfile
└── lib
├── folder
│ └── XML files
│ └── test
├── __init__.py
└── script.py
app.py:
from lib import script
script.foo()
script.py:
import os

def foo():
    package_directory = os.path.dirname(os.path.abspath(__file__))
    path = os.path.join(package_directory, "folder/XML files")
    for f in os.listdir(path):
        if os.path.isfile(os.path.join(path, f)):
            print(f)
Dockerfile:
FROM docker.io/python:3.9.9
ENV APP_HOME /app
WORKDIR ${APP_HOME}
COPY . ./
ENTRYPOINT ["python","app.py"]
And, when I build and run the container, it correctly reports test:
Q="70734734"
podman build \
--tag=${Q} \
--file=./Dockerfile \
.
podman run \
--interactive --tty \
localhost/${Q}
I'm confident that, if I were to push it to Cloud Run, it would work correctly there too.
NOTE
Try to avoid spaces in directory names; os.path.join accommodates spaces, but they are easy to get wrong (see the sketch after these notes)
You describe XML files but your code references xml files
You don't include a full repro of your issue making it more difficult to help you
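For reference, a slightly more defensive way to build that path is to pass the components separately and let os.path.join choose the separator (a sketch reusing the names from the repro above):
import os

package_directory = os.path.dirname(os.path.abspath(__file__))
# each component is passed separately; os.path.join inserts the right separator
# note: Linux filesystems are case-sensitive, so "XML files" must match exactly
path = os.path.join(package_directory, "folder", "XML files")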
I'm trying to use GNS3 to practice Ansible scripts. There is a Docker instance called "Network Automation" with built-in Ansible. However, it still uses Python 2.7 as the interpreter:
root@Network-Automation:~# ansible --version
ansible 2.7.11
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
I understand I can use the "ansible-playbook --version -e 'ansible_python_interpreter=/usr/bin/python3'" command to run a playbook with Python version 3, or I can specify the var within the playbook:
- name: Common package
  hosts: all
  gather_facts: no
  vars:
    ansible_python_interpreter: /usr/bin/python3
  roles:
    - { role: python, tags: [ init, python, common, addusers ] }
...
...
However, I would like a permanent way to force Ansible to use Python 3. How can I achieve this? Thanks.
Why not use the vars directory in your role...
├── defaults
│ └── main.yml
├── files
├── handlers
│ └── main.yml
├── meta
│ └── main.yml
├── README.md
├── tasks
│ └── main.yml
├── templates
├── tests
│ ├── inventory
│ └── test.yml
└── vars
└── main.yml
in vars/main.yml
just add....
---
# vars file for XXXX
ansible_python_interpreter: /usr/bin/python3
Per https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html you could simply set it in the inventory for that host, or in your configuration file for ansible (which can also be shipped in the same directory as the playbooks and/or inventory):
To control the discovery behavior:
for individual hosts and groups, use the ansible_python_interpreter inventory variable
globally, use the interpreter_python key in the [defaults] section of ansible.cfg
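For example, a minimal sketch of both options (the host and group names are assumptions):
# inventory
[gns3]
network-automation ansible_python_interpreter=/usr/bin/python3

# ansible.cfg
[defaults]
interpreter_python = /usr/bin/python3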
Adding some points that you might overlook based on comments above:
In the original post, Ansible was installed under the root account, whereas in many other environments you won't use root. In that case, you need to sudo su and then install Ansible with pip3; otherwise, it will end up installed for your account only, under ~/.local/bin.
With newer pip versions, it's recommended to use python3 -m pip install xxx rather than executing pip3 install xxx directly.
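For example (a sketch, assuming a system-wide install as root is acceptable on that machine):
sudo su -
python3 -m pip install ansible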
The idea is to have a single container which will contain all small projects and will run based on parameters.
What is the current situation:
I have folders with the project this way:
├── MAIN_PROJECT_FOLDER
│   ├── PROJECT_SUB_CATEGORY1
│   │   ├── PROJECT_NAME_FOLDER1
│   │   │   ├── run.sh
│   │   │   ├── main.py
│   │   │   └── config.py
│   │   └── PROJECT_NAME_FOLDER2
│   │       ├── run.sh
│   │       ├── main.py
│   │       └── config.py
│   └── PROJECT_SUB_CATEGORY2
│       ├── PROJECT_NAME_FOLDER1
│       │   ├── run.sh
│       │   ├── main.py
│       │   └── config.py
│       └── PROJECT_NAME_FOLDER2
│           ├── run.sh
│           ├── main.py
│           └── config.py
Each run.sh file has prod/dev parameters which can be executed like this:
sudo ./run.sh prod = prod
sudo ./run.sh dev = dev
sudo ./run.sh = dev
What is the way to create another .sh file or Dockerfile which in the end can be executed like this:
sudo docker run CONTAINER_NAME PROJECT_NAME PROD/DEV
sudo docker run test_container test_project1 prod
sudo docker run test_container test_project1 dev
sudo docker run test_container test_project2 prod
... and so on
Basically, each project is the parameter and prod/dev will be part of run.sh execution somehow.
Looking for the best practice to make this happen.
The best practice is generally to have an image that does only one thing. In your example that would imply four separate Docker images; each directory would have its own Dockerfile.
It also tends to be easier to configure settings like this using environment variables than command-line parameters. Sites like https://12factor.net/ describe this and some other practices for building services. (In YAML specifications like Docker Compose or Kubernetes, it is easier to add another key/value environment pair than to build up a correct command line from multiple disparate parts, in my experience.)
This leads you to a sequence like
sudo docker build -t me/cat1proj1 CATEGORY_1/PROJECT_1
sudo docker run -e ENVIRONMENT=prod me/cat1proj1
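In a Compose file, the same setting is just one more key/value pair (a sketch; the service and image names are assumptions):
version: '3'
services:
  cat1proj1:
    image: me/cat1proj1
    environment:
      ENVIRONMENT: prod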
Architecturally, a Docker container runs a single process, and absolutely nothing stops you from writing the wrapper script you describe. That single command is specified as a combination of an "entrypoint" and a "command"; if you specify both, then the command is passed as arguments to the entrypoint. The "command" part can be specified in the Dockerfile CMD, but it can also be overridden at the docker run command line.
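As a concrete illustration, a Dockerfile can declare a default command that the docker run command line then replaces (a sketch using the layout above):
# Dockerfile fragment: default command baked into the image
CMD ["./test_project1/run.sh", "dev"]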
If you write no special scripts at all, you can run (assuming you've COPYd the projects to the right directories)
sudo docker run test_image ./test_project1/run.sh prod
(I have a couple of projects that are the same application with different scripts to start them in different ways – a Web server vs. an async job runner with the same code, for instance – and just launch them with alternate startup scripts this way.)
There is a pattern of making some other script the ENTRYPOINT and interpreting the "command" as just arguments to that script; the command is simply passed to it as arguments $1, $2, "$@".
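A dispatcher along those lines might look like this (a sketch; the in-container layout is an assumption):
#!/bin/sh
# entrypoint.sh: the first argument picks the project, the rest go to its run.sh
PROJECT="$1"
shift
exec "/app/${PROJECT}/run.sh" "$@"
The problem with doing this is that it breaks some routine debugging paths: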
# "test_project1" "prod" passed as arguments to entrypoint script
sudo docker run test_image test_project1 prod
# But that breaks getting a debug shell
sudo docker run --rm -it test_image bash
# More complex commands get awkward
sudo docker run --rm --entrypoint=/bin/ls test_image -l /app
I would personally use a tool like Supervisor, which can be run inside one Docker container.
Installing supervisor on Ubuntu and Debian based distros:
sudo apt install supervisor
Starting supervisor daemon:
sudo service supervisor start
In /etc/supervisor/supervisord.conf you will find the include section that points to where your project configs go:
[include]
files = /etc/supervisor/conf.d/*.conf
Now you can create a configuration for supervisor and copy it to /etc/supervisor/conf.d/. Example supervisor config for project PROJECT_1:
project_1_supervisor.conf:
[program:project_1_app]
command=/usr/bin/bash /project_1_path/run.sh prod
directory=/project_1_path/
autostart=true
autorestart=true
startretries=3
stderr_logfile=/var/log/project_1.err.log
stdout_logfile=/var/log/project_1.out.log
After this, reload your supervisor configuration:
sudo supervisorctl reread
sudo supervisorctl update
After this you can check whether your project's program is running:
$ supervisorctl
project_1_app RUNNING pid 590, uptime 0:02:45
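To run this inside a container rather than via the host's service manager, a minimal sketch of a Dockerfile could look like this (paths mirror the config above):
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y supervisor && rm -rf /var/lib/apt/lists/*
# copy the project and its supervisor program config into the image
COPY . /project_1_path/
COPY project_1_supervisor.conf /etc/supervisor/conf.d/
# -n keeps supervisord in the foreground as the container's main process
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]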
I think the best way to handle this is with ENV variables. Here is a complete example of what you are looking for.
Here is the directory structure (shown as a screenshot in the original answer).
Here is a Dockerfile that clones the demo app above and does the heavy lifting ;) It takes four ENV variables; by default it runs project A.
ENV BASE_PATH="/opt/project"
This ENV sets the base path the project is cloned to.
ENV PROJECT_PATH="/main/sub_folder_a/project_a"
This ENV sets the project path; change it to point at, for example, project B.
ENV SCRIPT_NAME="hello.py"
This ENV names the actual file to run; it can be run.sh or main.py in your case.
ENV SYSTEM_ENV=dev
This ENV is passed to run.sh; it can be either dev or prod.
FROM python:3.7.4-alpine3.10
WORKDIR /opt/project
# Required Tools
RUN apk add --no-cache supervisor git tree && \
mkdir -p /etc/supervisord.d/
# clone remote project or copy your own one
RUN echo "Starting remote clonning...."
RUN git clone https://github.com/Adiii717/python-demo-app.git /opt/project
RUN tree /opt/project
# ENV to select a different project; can be overridden at run time
ENV BASE_PATH="/opt/project"
ENV PROJECT_PATH="/main/sub_folder_a/project_a"
ENV SCRIPT_NAME="hello.py"
# possible dev or prod
ENV SYSTEM_ENV=dev
RUN chmod +x /opt/project/main/*/*/run.sh
# general config
RUN echo $'[supervisord] \n\
[unix_http_server] \n\
file = /tmp/supervisor.sock \n\
chmod = 0777 \n\
chown= nobody:nogroup \n\
[supervisord] \n\
logfile = /tmp/supervisord.log \n\
logfile_maxbytes = 50MB \n\
logfile_backups=10 \n\
loglevel = info \n\
pidfile = /tmp/supervisord.pid \n\
nodaemon = true \n\
umask = 022 \n\
identifier = supervisor \n\
[supervisorctl] \n\
serverurl = unix:///tmp/supervisor.sock \n\
[rpcinterface:supervisor] \n\
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface \n\
[include] \n\
files = /etc/supervisord.d/*.conf' >> /etc/supervisord.conf
# script supervisord Config
RUN echo $'[supervisord] \n\
nodaemon=true \n\
[program:run_project ] \n\
command= /run_project.sh \n\
stdout_logfile=/dev/fd/1 \n\
stdout_logfile_maxbytes=0MB \n\
stderr_logfile_maxbytes = 0 \n\
stderr_logfile=/dev/fd/2 \n\
redirect_stderr=true \n\
autorestart=false \n\
startretries=0 \n\
exitcodes=0 ' >> /etc/supervisord.d/run_project.conf
RUN echo $'#!/bin/ash \n\
echo -e "\x1B[31m starting project having name ${BASE_PATH}${PROJECT_PATH}/${SCRIPT_NAME} \x1B[0m" \n\
fullfilename=${BASE_PATH}${PROJECT_PATH}/${SCRIPT_NAME} \n\
filename=$(basename "$fullfilename") \n\
extension="${filename##*.}" \n\
if [[ ${extension} == "sh" ]];then \n\
sh ${BASE_PATH}${PROJECT_PATH}/${SCRIPT_NAME} ${SYSTEM_ENV} \n\
else \n\
python ${BASE_PATH}${PROJECT_PATH}/${SCRIPT_NAME} \n\
fi ' >> /run_project.sh
RUN chmod +x /run_project.sh
EXPOSE 9080 8000 9088 80
ENTRYPOINT ["supervisord", "--nodaemon", "--configuration", "/etc/supervisord.conf"]
Build the docker image
docker build -t multipy .
Run the docker container
docker run --rm -it multipy
This will run project A by default.
To run project B, your command will be:
docker run --rm -it --env PROJECT_PATH=/main/sub_folder_b/project_b --env SCRIPT_NAME=hello.py multipy
To run your run.sh bash file, the command will be:
docker run --rm -it --env SCRIPT_NAME=run.sh multipy
Here are some logs from a sample run (shown as a screenshot in the original answer).
I want to create a container that contains two Python packages as well as a package consisting of an executable file.
Here's my main project (dockerized_project) tree:
dockerized_project
├── docker-compose.yml
├── Dockerfile
├── exec_project
│ ├── config
│ │ └── config.json
│ ├── config.json
│ ├── gowebapp
├── pythonic_project1
│ ├── __main__.py
│ ├── requirements.txt
│ ├── start.sh
│ └── utility
│ └── utility.py
└── pythonic_project2
├── collect
│ ├── collector.py
├── __main__.py
├── requirements.txt
└── start.sh
Dockerfile content:
FROM ubuntu:18.04
RUN apt update
RUN apt-get install -y python3.6 python3-pip python3-dev build-essential gcc \
libsnmp-dev snmp-mibs-downloader
RUN pip3 install --upgrade pip
RUN mkdir /app
WORKDIR /app
COPY . /app
WORKDIR /app/snmp_collector
RUN pip3 install -r requirements.txt
WORKDIR /app/proto_conversion
RUN pip3 install -r requirements.txt
WORKDIR /app/pythonic_project1
CMD python3 __main__.py
WORKDIR /app/pythonic_project2
CMD python3 __main__.py
WORKDIR /app/exec_project
CMD ["./gowebapp"]
docker-compose content:
version: '3'
services:
  proto_conversion:
    build: .
    image: pc:2.0.0
    container_name: proto_conversion
    # command:
    #   - "bash snmp_collector/start.sh"
    #   - "bash proto_conversion/start.sh"
    restart: unless-stopped
    ports:
      - 8008:8008
    tty: true
Problem:
When I run this project with docker-compose up --build, only the last CMD command runs. Hence, I think the previous CMD commands are overridden in the Dockerfile, because when I remove the last two CMDs, the first CMD works well.
Is there any approach to run multiple Python scripts and an executable file in the background?
I've also tried using bash files, without any success either.
As mentioned in the documentation, there can be only one CMD in the Dockerfile, and if there are more, the last one overrides the others and takes effect.
A key point of using Docker is to isolate your programs, so at first glance you might want to move them to separate containers and have them talk to each other using a shared volume or a Docker network. But if you really need them to run in the same container, including them in a bash script and replacing the last CMD with CMD run.sh will run them alongside each other:
#!/bin/bash
# run the first script in the background, keep the second in the foreground
python3 /path/to/script1.py &
exec python3 /path/to/script2.py
Add COPY run.sh to the Dockerfile and use RUN chmod a+x run.sh to make it executable. CMD should be CMD ["./run.sh"]
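Put together, the tail of the Dockerfile might look like this (a sketch; it assumes run.sh sits at the root of the build context):
COPY run.sh ./
RUN chmod a+x run.sh
CMD ["./run.sh"]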
Try it via an entrypoint.sh:
ENTRYPOINT ["/docker_entrypoint.sh"]
docker_entrypoint.sh
#!/bin/bash
set -e
python3 not__main__.py &
exec python3 __main__.py
The & symbol means the first script runs in the background (as a daemon) while the second stays in the foreground.
Best practice is to launch these as three separate containers. That's doubly true since you're taking three separate applications, bundling them into a single container, and then trying to launch three separate things from them.
Create a separate Dockerfile in each of your project subdirectories. These can be simpler, especially for the one that just contains a compiled binary:
# execproject/Dockerfile
FROM ubuntu:18.04
WORKDIR /app
COPY . ./
CMD ["./gowebapp"]
Then, in your docker-compose.yml file, have three separate stanzas to launch the containers:
version: '3'
services:
  pythonic_project1:
    build: ./pythonic_project1
    ports:
      - 8008:8008
    environment:
      PY2_URL: 'http://pythonic_project2:8009'
      GO_URL: 'http://execproject:8010'
  pythonic_project2:
    build: ./pythonic_project2
  execproject:
    build: ./execproject
If you really can't rearrange your Dockerfiles, you can at least launch three containers from the same image in the docker-compose.yml file:
services:
  pythonic_project1:
    build: .
    working_dir: /app/pythonic_project1
    command: python3 __main__.py
  pythonic_project2:
    build: .
    working_dir: /app/pythonic_project2
    command: python3 __main__.py
There are several good reasons to structure your project with multiple containers and images:
If you roll your own shell script and use background processes (as other answers have), the wrapper just won't notice if one of the processes dies; here you can use Docker's restart mechanism to restart individual containers.
If you have an update to one of the programs, you can update and restart only that single container and leave the rest intact.
If you ever use a more complex container orchestrator (Docker Swarm, Nomad, Kubernetes) the different components can run on different hosts and require a smaller block of CPU/memory resource on a single node.
If you ever use a more complex container orchestrator, you can individually scale up components that are using more CPU.
I'm trying to use celery and a redis queue to perform a task for my Django app. Supervisord is installed on the host via apt-get, whereas celery resides in a specific virtualenv on my system, installed via pip.
As a result, I can't seem to get the celery command to run via supervisord. If I run it from inside the virtualenv, it works fine, outside of it, it doesn't. How do I get it to run under my current set up? Is the solution simply to install celery via apt-get, instead of inside the virtualenv? Please advise.
My celery.conf inside /etc/supervisor/conf.d is:
[program:celery]
command=/home/mhb11/.virtualenvs/myenv/local/lib/python2.7/site-packages/celery/bin/celery -A /etc/supervisor/conf.d/celery.conf -l info
directory = /home/mhb11/somefolder/myproject
environment=PATH="/home/mhb11/.virtualenvs/myenv/bin",VIRTUAL_ENV="/home/mhb11/.virtualenvs/myenv",PYTHONPATH="/home/mhb11/.virtualenvs/myenv/lib/python2.7:/home/mhb11/.virtualenvs/myenv/lib/python2.7/site-packages"
user=mhb11
numprocs=1
stdout_logfile = /etc/supervisor/logs/celery-worker.log
stderr_logfile = /etc/supervisor/logs/celery-worker.log
autostart = true
autorestart = true
startsecs=10
stopwaitsecs = 600
killasgroup = true
priority = 998
And the folder structure for my Django project is:
/home/mhb11/somefolder/myproject
├── myproject
│ ├── celery.py # The Celery app file
│ ├── __init__.py # The project module file (modified)
│ ├── settings.py # Including Celery settings
│ ├── urls.py
│ └── wsgi.py
├── manage.py
├── celerybeat-schedule
└── myapp
├── __init__.py
├── models.py
├── tasks.py # File containing tasks for this app
├── tests.py
└── views.py
If I do a status check via supervisorctl, I get a FATAL error on the command I'm trying to run in celery.conf. Help!
p.s. note that user mhb11 does not have root privileges, in case it matters. Moreover, /etc/supervisor/logs/celery-worker.log is empty. And inside supervisord.log the relevant error I see is INFO spawnerr: can't find command '/home/mhb11/.virtualenvs/redditpk/local/lib/python2.7/site-packages/celery/bin/celery'.
Path to celery binary is myenv/bin/celery whereas you are using myenv/local/lib/python2.7/site-packages/celery/bin/celery.
So if you try on your terminal the command you are passing to supervisor (command=xxx), you should get the same error.
You need to replace your command=xxx in your celery.conf with
command=/home/mhb11/.virtualenvs/myenv/bin/celery -A myproject.celery -l info
Note that I have also replaced the -A parameter with the Celery app, instead of the supervisor configuration file. This Celery app is resolved relative to your project directory, set in celery.conf with:
directory = /home/mhb11/somefolder/myproject
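Putting it together, the program block might look like this (a sketch; the explicit worker subcommand is an assumption about what you intend to run):
[program:celery]
command=/home/mhb11/.virtualenvs/myenv/bin/celery -A myproject.celery worker -l info
directory=/home/mhb11/somefolder/myproject
user=mhb11
autostart=true
autorestart=true
stdout_logfile=/etc/supervisor/logs/celery-worker.log
stderr_logfile=/etc/supervisor/logs/celery-worker.log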
On a side note, if you are using Celery with Django, you can manage celery with Django's manage.py; there is no need to invoke celery directly:
python manage.py celery worker
python manage.py celery beat
For details, please read the intro to Django Celery here.