Calling a Docker container through Python subprocess

I am a novice to Docker and containers.
I am running two containers: the first runs a FastAPI application, the second runs a tool written in Go.
From an endpoint, I want to invoke the Go container and run the tool.
I have this docker-compose.yml:
version: '3'
services:
  fastapi:
    build: ./
    image: myimage
    command: uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
    ports:
      - 8000:8000
    networks:
      - test_network
  amass_git_worker:
    build: https://github.com/OWASP/Amass.git
    stdin_open: true
    tty: true
    entrypoint: ['/bin/sh']
    networks:
      - test_network
networks:
  test_network:
    driver: bridge
Main fastapi app Dockerfile:
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
The endpoint calls this function:
def amass_wrapper(search_key: str):
    try:
        subprocess.run(['docker', 'run', '-v', 'OUTPUT_DIR_PATH:/.config/amass/', 'integrate_scanning_modules-amass_git_worker/bin/sh', 'enum', '-d', 'owasp.org'])
When I call this endpoint, I get this error:
Process failed because the executable could not be found.
No such file or directory: 'docker'
Does this mean that I need to install Docker in the FastAPI container?
Is there any other advice on how I can invoke the Go container through a Python subprocess?

You should install the Go binary in the Python application's image, and then call it normally using the subprocess module. Do not do anything Docker-specific here, and especially do not try to run a docker command.
Most Go programs compile down to a single binary, so it's simple enough to put this binary in $PATH somewhere. For example, your Dockerfile might say
FROM python:3.10-slim
# Install OS-level dependencies
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --assume-yes --no-install-recommends \
curl \
unzip
# Download and unpack the Amass zip file, saving only the binary
RUN cd /usr/local \
&& curl -LO https://github.com/OWASP/Amass/releases/download/v3.20.0/amass_linux_amd64.zip \
&& unzip amass_linux_amd64.zip \
&& mv amass_linux_amd64/amass bin \
&& rm -rf amass_linux_amd64 amass_linux_amd64.zip
# Install your application the same way you have it already
WORKDIR /app
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
Now since your image contains a /usr/local/bin/amass binary, you can just run it.
subprocess.run(['amass', 'enum', '-d', 'owasp.org'])
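In practice you will probably also want to capture the tool's output and raise an error if it fails; a minimal sketch of how the wrapper from the question could look once amass is on $PATH:
import subprocess

def amass_wrapper(search_key: str) -> str:
    # Runs the amass binary that the Dockerfile above installed into /usr/local/bin.
    result = subprocess.run(
        ['amass', 'enum', '-d', search_key],
        capture_output=True,  # collect stdout/stderr instead of inheriting them
        text=True,            # decode bytes to str
        check=True,           # raise CalledProcessError on a non-zero exit code
    )
    return result.stdout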
And you no longer need the "do-nothing" container in the Compose setup:
version: '3.8'
services:
  fastapi:
    build: .
    ports:
      - '8000:8000'
It's difficult to programmatically run a command in an existing container. Running a new temporary container to launch the program is no easier but is at least somewhat better style. In both cases you'd need to install either the docker binary or the Docker SDK, and give your container access to the host's Docker socket; this access comes with unrestricted root access to the entire host, should you choose to take advantage of it. So this setup is both tricky to test and also comes with some significant security implications, and I'd generally avoid it if possible.
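If you do decide to accept those tradeoffs, the usual pattern is the Docker SDK for Python rather than shelling out to a docker CLI you would also have to install. A rough sketch, assuming the docker package is installed, the host's /var/run/docker.sock is bind-mounted into the FastAPI container, and with a placeholder tag for the Go tool's image:
import docker  # Docker SDK for Python ("docker" on PyPI)

def run_amass_in_temp_container(domain: str) -> str:
    # Talks to the Docker daemon over the socket mounted from the host;
    # remember that this effectively grants root access to that host.
    client = docker.from_env()
    output = client.containers.run(
        'amass-worker-image',   # placeholder: whatever tag you built the Go tool as
        ['enum', '-d', domain],
        remove=True,            # delete the temporary container when it exits
    )
    return output.decode('utf-8')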

Related

Docker container keeps on restarting again and again in docker-compose but not when run in isolation

I'm trying to run a Python program that uses MongoDB, and I want to deploy it on a server, which is why I wrote a docker-compose file. My problem is that when I run the Python project in isolation with the docker build -t PROJECT_NAME . and docker run image commands, everything works properly; however, when I execute docker-compose up -d, the Python container restarts over and over again. What am I doing wrong?
I tried to check its logs, but nothing shows up.
Here is the Dockerfile
FROM python:3.7
WORKDIR /app
# install google chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
# install chromedriver
RUN apt-get install -yqq unzip
RUN wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip
RUN unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
# set display port to avoid crash
ENV DISPLAY=:99
ENV PROD=true
COPY . .
# Installing requirements
RUN pip install -r requirements.txt
RUN export PYTHONPATH=$PATHONPATH:`pwd`
CMD ["python3", "foo.py"]
And the docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: app
    restart: unless-stopped
    environment:
      APP_ENV: "prod"
      APP_DEBUG: "False"
      MONGODB_DATABASE: db
      MONGODB_USERNAME: appuser
      MONGODB_PASSWORD: mongopassword
      MONGODB_HOSTNAME: mongodb
    depends_on:
      - mongodb
    networks:
      - internal
  mongodb:
    image: mongo
    container_name: mongodb
    restart: unless-stopped
    command: mongod --auth
    environment:
      MONGO_INITDB_ROOT_USERNAME: mongodbuser
      MONGO_INITDB_ROOT_PASSWORD: mongodbrootpassword
      MONGO_INITDB_DATABASE: db
      MONGODB_DATA_DIR: /data/db
      MONDODB_LOG_DIR: /dev/null
    volumes:
      - mongodbdata:/data/db
    networks:
      - internal
networks:
  internal:
    driver: bridge
volumes:
  mongodbdata:
    driver: local

Docker compose up giving error no such file or dir. for .sh files in ubuntu

I am trying to run the Docker container for web, which has a start.sh script to start it, but docker-compose up web gives an error.
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"/home/app/start.sh\": stat /home/app/start.sh: no such file or directory": unknown
The following command shows that start.sh is present in the Docker image:
docker run -it web bin/bash
docker-compose.yaml
web:
  container_name: "web"
  environment:
    - LANG=C.UTF-8
  env_file:
    - .env
  build: .
  volumes:
    - ../app:/home/app
    - ../media:/home/media
    - ../misc:/home/downloads
  command: ["/home/app/start.sh"]
dockerfile
# Fetch the base image
FROM ubuntu:18.04
# Install python3 and pip3
RUN apt-get -y update && apt-get install -y python3 python3-pip git libsasl2-dev python-dev libldap2-dev libssl-dev openjdk-8-jdk libwebkitgtk-1.0-0 curl nano wget unzip
# Install pip3 lib
COPY pip3-requirements.txt /pip3-requirements.txt
RUN pip3 install -r pip3-requirements.txt
# Copy Code
ADD . /home/app/
Details:
docker version:
Docker version 19.03.6, build 369ce74a3c
docker-compose version:
docker-compose version 1.17.1, build unknown
base image of dockerfile:
ubuntu:18.04
I'm attempting this again via an example, as I think it really does come down to the project structure; sorry if my prior attempt was confusing.
I have a Dockerfile:
FROM ubuntu:18.04
ADD . /home/app
And a docker-compose.yml file:
web:
  container_name: "web"
  build: .
  volumes:
    - ../test:/home/app
  command: ["/home/app/start.sh"]
With a directory structure of:
./test
  docker-compose.yml
  Dockerfile
  start.sh
Then I can run:
chmod +x start.sh
docker-compose build
docker-compose up
What can be useful to troubleshoot this is to run:
docker run -it web bash
> ls -al / /home /home/app
I hope that helps. I suspect the script isn't being placed into /home/app, as the error you are getting states.

Docker containers network: Redis and Custom Image

I am struggling with connecting two container services. Specifically, I would like to use a Redis server (https://hub.docker.com/_/redis/) running in one container, started as:
docker run -d --name my_redis_server redis
and a custom image run as:
docker run -p 8888:8888 --mount type=bind,source=<my_folder>,target=/data/ my_container
built with the following Dockerfile and docker-compose.yml:
Dockerfile
FROM ubuntu
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"
# Updates and tools
RUN apt-get update && \
apt-get install -y gcc make apt-transport-https ca-certificates build-essential git redis-server
RUN apt-get install -y wget && rm -rf /var/lib/apt/lists/*
RUN wget \
https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
RUN conda --version
# Create conda environment
RUN conda create python=3.6 --name my_env
# Run in a new shell
RUN /bin/bash -c "activate my_env"
RUN <Install some packages>
RUN conda install -c conda-forge jupyterlab -y
RUN conda install -c anaconda redis
# The code to run when the container is started:
# Entrypoint
WORKDIR /data/
ENTRYPOINT ["jupyter", "notebook", "--ip=0.0.0.0", "--no-browser", "--allow-root"]
docker-compose.yml
version: '2.3'
services:
  my_container:
    container_name: my_container_env
    build: ./
    restart: always
    ports:
      - '8888:8888'
According to my understanding, I should be able to connect from my_container (and specifically Jupyter) to my_redis_server by using either the internal bridge IP (i.e. 172.17.0.X) or the Docker DNS name (i.e. my_redis_server), in both cases using the standard Redis image port 6379.
Unfortunately, this does not work for me... what am I missing?
Thank you all!
System: Windows 10 - Docker 2.3.0.2
Additional notes:
I did try (as a workaround) to change approach and connect from my_container to the local host's Redis server (the compiled Windows version) by running my_container as: docker run -p 8888:8888 -p 6379:6379 --mount type=bind,source=<my_folder>,target=/data/ my_container and connecting from the Jupyter inside the container to the local host at 127.0.0.1:6379, but this did not work either.
You haven't specified exactly which method you are following. In both cases, the issue arises because the network is not defined. With the docker run method described at the beginning, you need to specify the network using --network=<network_name>. This network can be the default bridge network, a user-defined bridge network, the host network, or none. Be sure about which to use, as each of them has its own purpose and disadvantages.
With the docker-compose approach, I believe you are still running Redis using docker run and my_container using docker-compose, which results in the two containers being connected to different networks. So here you need to run Redis from the same Compose file as well.
Updated docker-compose:
version: '2.3'
services:
  my_container:
    container_name: my_container_env
    build: ./
    restart: always
    ports:
      - '8888:8888'
  redis:
    image: redis
    container_name: my_redis_server
    restart: always
    ports:
      - 6379:6379
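With both services defined in the same Compose file (and therefore attached to the same default network), the notebook inside my_container can reach Redis by its service name. A minimal sketch using the redis-py client, assuming it is installed in the image:
import redis  # redis-py client

# "redis" is the Compose service name; Docker's embedded DNS resolves it
# to the Redis container on the shared network.
r = redis.Redis(host='redis', port=6379)
r.ping()  # raises redis.exceptions.ConnectionError if the server is unreachable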
References:
Networking overview
Use bridge networks
Networking in Compose

How to create a common base docker image for flask and celery applications

My project uses the Flask and Celery libraries. I have deployed my application on AWS ECS Fargate. Here are the two Dockerfiles, for Flask and for Celery.
# Flask Docker File
FROM python:3.6
RUN apt-get update -y
RUN pip3 install pipenv
ENV USER dockeruser
RUN useradd -ms /bin/bash $USER
ENV APP_PATH /home/$USER/my_project
RUN mkdir -p $APP_PATH
COPY . $APP_PATH
WORKDIR $APP_PATH
RUN chown -R $USER:$USER $APP_PATH
RUN pipenv install --system --deploy
USER $USER
EXPOSE 5000
CMD gunicorn run:my_app -b 0.0.0.0:5000 -w 4
# Celery Docker File
FROM python:3.6
RUN apt-get update -y
RUN pip3 install pipenv
ENV USER dockeruser
RUN useradd -ms /bin/bash $USER
ENV APP_PATH /home/$USER/my_project
RUN mkdir -p $APP_PATH
COPY . $APP_PATH
WORKDIR $APP_PATH
RUN chown -R $USER:$USER $APP_PATH
RUN pipenv install --system --deploy
USER $USER
CMD celery -A celery_tasks.celery worker -l INFO -E --autoscale=2,1 -Q apple,ball,cat
Both Dockerfiles are the same for the Celery and Flask applications. Is there a way to create a common base image for both Dockerfiles? I am using AWS ECR to store the Docker images.
You can start a Dockerfile FROM any image you want, including one you built yourself. If you built the Flask image as
docker build -t me/flaskapp .
then you can build a derived image that just overrides its CMD as
FROM me/flaskapp
CMD celery -A celery_tasks.celery worker -l INFO -E --autoscale=2,1 -Q apple,ball,cat
If you prefer you can have an image that includes the source code but no default CMD. Since you can't un-EXPOSE a port, this has the minor advantage that it doesn't look like your Celery worker has a network listener. ("Expose" as a verb means almost nothing in modern Docker, though.)
FROM me/code-base
EXPOSE 5000
CMD gunicorn run:my_app -b 0.0.0.0:5000 -w 4
@Frank's answer suggests a Docker Compose path. If you're routinely using Compose you might prefer that path, though there's not an easy way to make it build multiple images in the correct dependency order. All of the ways to run a container have a way to specify an alternate command (from extra docker run options through a Kubernetes pod command: setting), so this isn't an especially limiting approach. Conversely, in a CI environment, you generally can specify multiple things to build in sequence, but you'll probably want to use an ARG to specify the image tag.
I think you can use docker-compose (https://docs.docker.com/compose/).
You can specify more than one service inside the docker-compose YAML config file and run them based on the same Docker image.
One Example:
test.yaml:
version: '2.0'
services:
  web:
    image: sameimage
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
    command: ["gunicorn", "run:my_app", "-b", "0.0.0.0:5000", "-w", "4"]
  celery:
    image: sameimage
    command: ["celery", "-A", "celery_tasks.celery"]
volumes:
  logvolume01: {}
You can run it with:
docker-compose -f test.yaml -p sameimage up --no-deps

Executing shell script using docker file

I have some tar files that I want to copy into a Docker image and extract there, and then run a Python web app script. I have a shell script that extracts those files.
If I run the script using the RUN command, the files get extracted, but they are not present in the final container.
I also used ENTRYPOINT, but it executes and then the container closes without executing the main Python script.
Is there a way to execute this install script and then continue running the main script without the container closing?
Dockerfile
# Use an official Python runtime as a parent image
FROM python:2.7-slim
RUN apt-get update \
&& apt-get install -y curl \
&& curl -sL https://deb.nodesource.com/setup_4.x | bash \
&& apt-get install -y nodejs \
&& apt-get install -y git \
&& npm install -g bower \
&& npm install -g gulp@^3.9.1
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
RUN pip install -r requirements.txt \
&& npm install \
&& bower install --allow-root \
&& gulp default
# Define environment variable
ENV PATH "$PATH:/app/tree-tagger/cmd/"
ENV PATH "$PATH:/app/tree-tagger/bin/"
ENV TREETAGGER "/app/tree-tagger/cmd/"
ENV TREETAGGER_HOME "/app/tree-tagger/cmd/"
CMD python app.py
ENTRYPOINT sh tree-tagger/install-tagger.sh
Here is the docker-compose file on top of that:
web:
  build: .
  ports:
    - "8080:8080"
  volumes:
    - .:/app
  links:
    - db
db:
  image: mongo:3.0.2
You can just use RUN for that. RUN executes while the image is being built, so anything new that it creates will be included in the image (and the containers started from it) as well.
Your issue most likely comes from the fact that your docker-compose file is mounting a volume over a path that you previously wrote to.
Basically, what you're trying to do is:
1. During the image build, copy the current directory to /app and do something with it.
2. After the image is built, run it while mounting the current directory onto /app again.
So anything newly created in /app/ ends up being hidden by the mount. If you skip the volume part in docker-compose, it will all work as expected. But if you prefer to mount the current app code to /app anyway (so you don't have to rebuild the image during development every time your code changes), there is a way. Just change your docker-compose to:
web:
  build: .
  ports:
    - "8080:8080"
  volumes:
    - .:/app
    - /app/tree-tagger/
  links:
    - db
db:
  image: mongo:3.0.2
Now you can just change your Dockerfile back to what you had before (with RUN instead of ENTRYPOINT), and /app/tree-tagger/ won't be replaced this time. So:
...
RUN sh tree-tagger/install-tagger.sh
CMD python app.py
