Docker containers network: Redis and Custom Image - python

I am struggling with connecting two containerized services. Specifically, I would like to use a Redis server (https://hub.docker.com/_/redis/) running in one container, started as docker run -d --name my_redis_server redis, and a custom image run as docker run -p 8888:8888 --mount type=bind,source=<my_folder>,target=/data/ my_container, built with the following Dockerfile and docker-compose.yml:
Dockerfile
FROM ubuntu
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"
# Updates and tools
RUN apt-get update && \
    apt-get install -y gcc make apt-transport-https ca-certificates build-essential git redis-server
RUN apt-get install -y wget && rm -rf /var/lib/apt/lists/*
RUN wget \
    https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
    && mkdir /root/.conda \
    && bash Miniconda3-latest-Linux-x86_64.sh -b \
    && rm -f Miniconda3-latest-Linux-x86_64.sh
RUN conda --version
# Create conda environment
RUN conda create python=3.6 --name my_env
# Run in a new shell
RUN /bin/bash -c "activate my_env"
RUN <Install some packages>
RUN conda install -c conda-forge jupyterlab -y
RUN conda install -c anaconda redis
# The code to run when the container is started:
# Entrypoint
WORKDIR /data/
ENTRYPOINT ["jupyter", "notebook", "--ip=0.0.0.0", "--no-browser", "--allow-root"]
docker-compose.yml
version: '2.3'
services:
  my_container:
    container_name: my_container_env
    build: ./
    restart: always
    ports:
      - '8888:8888'
According to my understanding, I should be able to connect from my_container (and specifically from jupyter) to my_redis_server by using either the internal bridge IP (i.e. 172.17.0.X) or the Docker DNS name (i.e. my_redis_server), in both cases on the standard Redis image port 6379.
Unfortunately, this does not work for me... what am I missing?
Thank you all!
System: Windows 10 - Docker 2.3.0.2
Additional notes:
I did try (as a workaround) to change approach and connect from my_container to the Redis server on the local host (the compiled Windows version) by running my_container as: docker run -p 8888:8888 -p 6379:6379 --mount type=bind,source=<my_folder>,target=/data/ my_container and connecting from the jupyter inside the container to the local host as 127.0.0.1:6379; this did not work either.

You haven't specified which method you are actually following. In both cases, the issue arises because no network has been defined. With the docker run method described at the beginning, you need to specify the network using --network=<network_name>. This network can be the default bridge network, a user-defined bridge network, the host network, or none. Be sure about which one to use, as each has its own purpose and drawbacks.
With the docker-compose approach, I believe you are still running Redis via docker run while my_container is started by docker-compose, which results in the two containers being attached to different networks. So you need to run Redis from the same compose file as well.
Updated docker-compose:
version: '2.3'
services:
  my_container:
    container_name: my_container_env
    build: ./
    restart: always
    ports:
      - '8888:8888'
  redis:
    image: redis
    container_name: my_redis_server
    restart: always
    ports:
      - 6379:6379
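Once both services are defined in the same compose file they share the default network that Compose creates, so the notebook can reach Redis by service name. A minimal sketch from inside Jupyter, assuming the redis Python client is installed in your environment:
import redis

# Inside the Compose network, the Redis service is reachable by its
# service name ("redis") or container name ("my_redis_server") on port 6379.
r = redis.Redis(host="redis", port=6379, db=0)
r.set("hello", "world")
print(r.get("hello"))  # b'world'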
References:
Networking overview
Use bridge networks
Networking in Compose

Related

Calling a Docker container through Python subprocess

I am a novice to Docker and containers.
I am running two containers: the first runs FastAPI and the second runs a tool written in Go.
From an endpoint, I want to invoke the Go container and run the tool.
I have docker-compose:
version: '3'
services:
  fastapi:
    build: ./
    image: myimage
    command: uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
    ports:
      - 8000:8000
    networks:
      - test_network
  amass_git_worker:
    build: https://github.com/OWASP/Amass.git
    stdin_open: true
    tty: true
    entrypoint: ['/bin/sh']
    networks:
      - test_network
networks:
  test_network:
    driver: bridge
Main fastapi app Dockerfile:
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
The endpoint calls this function:
def amass_wrapper(search_key: str):
    try:
        subprocess.run(['docker', 'run', '-v', 'OUTPUT_DIR_PATH:/.config/amass/', 'integrate_scanning_modules-amass_git_worker/bin/sh', 'enum', '-d', 'owasp.org'])
When I call this endpoint, I get this error:
Process failed because the executable could not be found.
No such file or directory: 'docker'
Does this mean that I need to install Docker in the fastapi container?
Any other advice on how I can invoke the Go container through a Python subprocess?
You should install the Go binary in the Python application's image, and then call it normally using the subprocess module. Do not do anything Docker-specific here, and especially do not try to run a docker command.
Most Go programs compile down to a single binary, so it's simple enough to put this binary in $PATH somewhere. For example, your Dockerfile might say
FROM python:3.10-slim
# Install OS-level dependencies
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --assume-yes --no-install-recommends \
      curl \
      unzip
# Download and unpack the Amass zip file, saving only the binary
RUN cd /usr/local \
 && curl -LO https://github.com/OWASP/Amass/releases/download/v3.20.0/amass_linux_amd64.zip \
 && unzip amass_linux_amd64.zip \
 && mv amass_linux_amd64/amass bin \
 && rm -rf amass_linux_amd64 amass_linux_amd64.zip
# Install your application the same way you have it already
WORKDIR /app
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
Now since your image contains a /usr/local/bin/amass binary, you can just run it.
subprocess.run(['amass', 'enum', '-d', 'owasp.org'])
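If you want failures from the Go tool to surface in your FastAPI endpoint, a slightly fuller call (standard subprocess options; nothing here is Amass-specific) might look like:
import subprocess

# check=True raises CalledProcessError on a non-zero exit status;
# capture_output/text collect stdout and stderr as strings for logging.
result = subprocess.run(
    ["amass", "enum", "-d", "owasp.org"],
    check=True,
    capture_output=True,
    text=True,
)
print(result.stdout)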
And you do not need the "do-nothing" container in the Compose setup:
version: '3.8'
services:
  fastapi:
    build: .
    ports:
      - '8000:8000'
It's difficult to programmatically run a command in an existing container. Running a new temporary container to launch the program is no easier but is at least somewhat better style. In both cases you'd need to install either the docker binary or the Docker SDK, and give your container access to the host's Docker socket; this access comes with unrestricted root access to the entire host, should you choose to take advantage of it. So this setup is both tricky to test and also comes with some significant security implications, and I'd generally avoid it if possible.
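For completeness, if you did accept those tradeoffs, the Docker SDK route would look roughly like the sketch below. It assumes the docker Python package is installed and the host's /var/run/docker.sock is mounted into the fastapi container; the image name and volume paths are illustrative, not taken from your setup.
import docker

# Talks to the daemon through the mounted socket
# (container started with -v /var/run/docker.sock:/var/run/docker.sock).
client = docker.from_env()

# Run a throwaway container and capture its output; image name and
# volume mapping below are placeholders.
output = client.containers.run(
    "caffix/amass",
    ["enum", "-d", "owasp.org"],
    volumes={"/host/output": {"bind": "/.config/amass", "mode": "rw"}},
    remove=True,
)
print(output.decode())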

Docker container keeps on restarting again and again in docker-compose but not when runs isolated

I'm trying to run a Python program that uses MongoDB, and I want to deploy it on a server; that's why I wrote a docker-compose file. My problem is that when I run the Python project on its own with the docker build -t PROJECT_NAME . and docker run commands, everything works properly; however, when executing docker-compose up -d, the Python container restarts over and over again. What am I doing wrong?
I tried checking the container's logs, but nothing shows up.
Here is the Dockerfile
FROM python:3.7
WORKDIR /app
# install google chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
# install chromedriver
RUN apt-get install -yqq unzip
RUN wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip
RUN unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
# set display port to avoid crash
ENV DISPLAY=:99
ENV PROD=true
COPY . .
# Installing requirements
RUN pip install -r requirements.txt
RUN export PYTHONPATH=$PATHONPATH:`pwd`
CMD ["python3", "foo.py"]
And the docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: app
    restart: unless-stopped
    environment:
      APP_ENV: "prod"
      APP_DEBUG: "False"
      MONGODB_DATABASE: db
      MONGODB_USERNAME: appuser
      MONGODB_PASSWORD: mongopassword
      MONGODB_HOSTNAME: mongodb
    depends_on:
      - mongodb
    networks:
      - internal
  mongodb:
    image: mongo
    container_name: mongodb
    restart: unless-stopped
    command: mongod --auth
    environment:
      MONGO_INITDB_ROOT_USERNAME: mongodbuser
      MONGO_INITDB_ROOT_PASSWORD: mongodbrootpassword
      MONGO_INITDB_DATABASE: db
      MONGODB_DATA_DIR: /data/db
      MONDODB_LOG_DIR: /dev/null
    volumes:
      - mongodbdata:/data/db
    networks:
      - internal
networks:
  internal:
    driver: bridge
volumes:
  mongodbdata:
    driver: local
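For context, a rough sketch of how foo.py might read the MongoDB settings from the environment variables above (illustrative only; it assumes pymongo is listed in requirements.txt):
import os
from pymongo import MongoClient

# Connection details come from the environment block in docker-compose.yml.
host = os.environ.get("MONGODB_HOSTNAME", "localhost")
user = os.environ.get("MONGODB_USERNAME")
password = os.environ.get("MONGODB_PASSWORD")
database = os.environ.get("MONGODB_DATABASE")

# Inside the compose network the hostname must be the service name "mongodb";
# a hard-coded localhost here is a common cause of a container restart loop.
client = MongoClient(host=host, username=user, password=password, authSource="admin")
db = client[database]
print(db.list_collection_names())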

Docker compose up giving error no such file or dir. for .sh files in ubuntu

I am trying to run the docker container for the web service, which uses a start.sh script to start it, but docker-compose up web gives an error.
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"/home/app/start.sh\": stat /home/app/start.sh: no such file or directory": unknown
The following command shows that start.sh is present in the docker image:
docker run -it web bin/bash
docker-compose.yaml
web:
  container_name: "web"
  environment:
    - LANG=C.UTF-8
  env_file:
    - .env
  build: .
  volumes:
    - ../app:/home/app
    - ../media:/home/media
    - ../misc:/home/downloads
  command: ["/home/app/start.sh"]
Dockerfile
# Fetch the base image
FROM ubuntu:18.04
# Install python3 and pip3
RUN apt-get -y update && apt-get install -y python3 python3-pip git libsasl2-dev python-dev libldap2-dev libssl-dev openjdk-8-jdk libwebkitgtk-1.0-0 curl nano wget unzip
# Install pip3 lib
COPY pip3-requirements.txt /pip3-requirements.txt
RUN pip3 install -r pip3-requirements.txt
# Copy Code
ADD . /home/app/
Details:
docker version:
Docker version 19.03.6, build 369ce74a3c
docker-compose version:
docker-compose version 1.17.1, build unknown
base image of dockerfile:
ubuntu:18.04
I'm attempting this again via an example, as I think it really does come down to the project structure; sorry if my prior attempt was confusing.
I have a Dockerfile:
FROM ubuntu:18.04
ADD . /home/app
And a docker-compose.yml file:
web:
  container_name: "web"
  build: .
  volumes:
    - ../test:/home/app
  command: ["/home/app/start.sh"]
With a directory structure of:
./test
    docker-compose.yml
    Dockerfile
    start.sh
Then I can run:
chmod +x start.sh
docker-compose build
docker-compose up
What can be useful to troubleshoot this is to run:
docker run -it web bash
> ls -al / /home /home/app
I hope that helps. I suspect the script isn't being placed into /home/app, which is what the error you are getting is saying.

How to create a common base docker image for flask and celery applications

My project uses the Flask and Celery libraries. I have deployed my application on AWS ECS Fargate. Here are the two Dockerfiles, for Flask and Celery.
# Flask Docker File
FROM python:3.6
RUN apt-get update -y
RUN pip3 install pipenv
ENV USER dockeruser
RUN useradd -ms /bin/bash $USER
ENV APP_PATH /home/$USER/my_project
RUN mkdir -p $APP_PATH
COPY . $APP_PATH
WORKDIR $APP_PATH
RUN chown -R $USER:$USER $APP_PATH
RUN pipenv install --system --deploy
USER $USER
EXPOSE 5000
CMD gunicorn run:my_app -b 0.0.0.0:5000 -w 4
# Celery Docker File
FROM python:3.6
RUN apt-get update -y
RUN pip3 install pipenv
ENV USER dockeruser
RUN useradd -ms /bin/bash $USER
ENV APP_PATH /home/$USER/my_project
RUN mkdir -p $APP_PATH
COPY . $APP_PATH
WORKDIR $APP_PATH
RUN chown -R $USER:$USER $APP_PATH
RUN pipenv install --system --deploy
USER $USER
CMD celery -A celery_tasks.celery worker -l INFO -E --autoscale=2,1 -Q apple,ball,cat
Both Dockerfiles are essentially the same for the Celery and Flask applications. Is there a way to create a common base image for both Dockerfiles? I am using AWS ECR to store the Docker images.
You can start a Dockerfile FROM any image you want, including one you built yourself. If you built the Flask image as
docker build -t me/flaskapp .
then you can build a derived image that just overrides its CMD as
FROM me/flaskapp
CMD celery -A celery_tasks.celery worker -l INFO -E --autoscale=2,1 -Q apple,ball,cat
If you prefer you can have an image that includes the source code but no default CMD. Since you can't un-EXPOSE a port, this has the minor advantage that it doesn't look like your Celery worker has a network listener. ("Expose" as a verb means almost nothing in modern Docker, though.)
FROM me/code-base
EXPOSE 5000
CMD gunicorn run:my_app -b 0.0.0.0:5000 -w 4
@Frank's answer suggests a Docker Compose path. If you're routinely using Compose you might prefer that path, since there's not an easy way to make it build multiple images in correct dependency order. All of the ways to run a container have a way to specify an alternate command (from extra docker run options through a Kubernetes pod command: setting) so this isn't an especially limiting approach. Conversely, in a CI environment, you generally can specify multiple things to build in sequence, but you'll probably want to use an ARG to specify the image tag.
I think you can use docker-compose (https://docs.docker.com/compose/).
You can specify more than one service inside the docker-compose YAML config file and run them based on the same docker image.
One Example:
test.yaml:
version: '2.0'
services:
  web:
    image: sameimage
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
    command: ["gunicorn", "run:my_app", "-b", "0.0.0.0:5000", "-w", "4"]
  celery:
    image: sameimage
    command: ["celery", "-A", "celery_tasks.celery"]
volumes:
  logvolume01: {}
You can run it by:
docker-compose -f test.yaml -p sameimage up --no-deps

Run Python Console via docker-compose on Pycharm

I'm having some problems running PyCharm with a remote Python interpreter via docker-compose. Everything works just great except the Python console; when I press the run button it just shows the following message:
"Error: Unable to locate container name for service "web" from docker-compose output"
I really can't understand why it keeps showing that, since my docker-compose.yml provides a web service.
Any help?
EDIT:
docker-compose.yml
version: '2'
volumes:
  dados:
    driver: local
  media:
    driver: local
  static:
    driver: local
services:
  beat:
    build: Docker/beat
    depends_on:
      - web
      - worker
    restart: always
    volumes:
      - ./src:/app/src
  db:
    build: Docker/postgres
    ports:
      - 5433:5432
    restart: always
    volumes:
      - dados:/var/lib/postgresql/data
  jupyter:
    build: Docker/jupyter
    command: jupyter notebook
    depends_on:
      - web
    ports:
      - 8888:8888
    volumes:
      - ./src:/app/src
  python:
    build:
      context: Docker/python
      args:
        REQUIREMENTS_ENV: 'dev'
    image: helpdesk/python:3.6
  redis:
    image: redis:3.2.6
    ports:
      - 6379:6379
    restart: always
  web:
    build:
      context: .
      dockerfile: Docker/web/Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    depends_on:
      - python
      - db
    ports:
      - 8001:8000
    restart: always
    volumes:
      - ./src:/app/src
  worker:
    build: Docker/worker
    depends_on:
      - web
      - redis
    restart: always
    volumes:
      - ./src:/app/src
Dockerfile
FROM python:3.6
# Set requirements environment
ARG REQUIREMENTS_ENV
ENV REQUIREMENTS_ENV ${REQUIREMENTS_ENV:-prod}
# Set PYTHONUNBUFFERED so the output is displayed in the Docker log
ENV PYTHONUNBUFFERED=1
# Install apt-transport-https
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
    apt-transport-https
# Configure yarn repo
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
# Install APT dependencies
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
    locales \
    openssl \
    yarn
# Set locale
RUN locale-gen pt_BR.UTF-8 && \
    localedef -i pt_BR -c -f UTF-8 -A /usr/share/locale/locale.alias pt_BR.UTF-8
ENV LANG pt_BR.UTF-8
ENV LANGUAGE pt_BR.UTF-8
ENV LC_ALL pt_BR.UTF-8
# Copy requirements files to the container
RUN mkdir -p /tmp/requirements
COPY requirements/requirements-common.txt \
    requirements/requirements-$REQUIREMENTS_ENV.txt \
    /tmp/requirements/
# Install requirements
RUN pip install \
    -i http://root:test@pypi.defensoria.to.gov.br:4040/root/pypi/+simple/ \
    --trusted-host pypi.defensoria.to.gov.br \
    -r /tmp/requirements/requirements-$REQUIREMENTS_ENV.txt
# Remove requirements temp folder
RUN rm -rf /tmp/requirements
This is the python image's Dockerfile; the web Dockerfile just builds FROM this image and copies the source folder into the container.
I think this is a dependency chain problem: web depends on python, so when the python container comes up, the web one does not exist yet. That may be causing the error.
Cheers
Installing the required libraries via the command line and running the Python interpreter from the PATH should suffice.
You can also refer to the JetBrains manual for how to configure remote interpreters for their IDEs.
