I have some tar files that I want to copy into a Docker image, extract there, and then run a Python web app script. I have a shell script that extracts those files.
If I run the script with a RUN command, the files get extracted, but they are not present in the final container.
I also tried ENTRYPOINT, but it executes the script and then the container exits without ever running the main Python script.
Is there a way to execute this install script and then continue running the main script without the container closing?
Dockerfile
# Use an official Python runtime as a parent image
FROM python:2.7-slim
RUN apt-get update \
&& apt-get install -y curl \
&& curl -sL https://deb.nodesource.com/setup_4.x | bash \
&& apt-get install -y nodejs \
&& apt-get install -y git \
&& npm install -g bower \
&& npm install -g gulp@^3.9.1
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
RUN pip install -r requirements.txt \
&& npm install \
&& bower install --allow-root \
&& gulp default
# Define environment variable
ENV PATH "$PATH:/app/tree-tagger/cmd/"
ENV PATH "$PATH:/app/tree-tagger/bin/"
ENV TREETAGGER "/app/tree-tagger/cmd/"
ENV TREETAGGER_HOME "/app/tree-tagger/cmd/"
CMD python app.py
ENTRYPOINT sh tree-tagger/install-tagger.sh
Here is the docker-compose file on top of that:
web:
build: .
ports:
- "8080:8080"
volumes:
- .:/app
links:
- db
db:
image: mongo:3.0.2
You can just use RUN for that. RUN executes at image-build time, so anything it creates is baked into the image and will be present in every container started from it.
Your issue most likely comes from the fact that your docker-compose file mounts a volume over the directory you previously wrote to.
Basically, what you're doing is:
1. During the image build, copy the current directory to /app and run the install script there.
2. When the container starts, mount the current directory over /app again.
So anything newly created in /app/ during the build ends up hidden by the mount. If you skip the volume in docker-compose, it will all work as expected. But if you prefer to mount the current app code into /app anyway (so you don't have to rebuild the image every time your code changes during development), there is a way. Just change your docker-compose to:
web:
build: .
ports:
- "8080:8080"
volumes:
- .:/app
- /app/tree-tagger/
links:
- db
db:
image: mongo:3.0.2
Now you can just change your dockerfile to what you had before (with RUN instead of ENTRYPOINT) and /app/tree-tagger/ won't be replaced this time. So:
...
RUN sh tree-tagger/install-tagger.sh
CMD python app.py
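To sanity-check the fix, rebuild and list the directory from a throwaway container; a minimal sketch, assuming the install script unpacks everything into /app/tree-tagger/:
docker-compose build web
docker-compose run --rm web ls /app/tree-tagger/
If the extracted files show up there, the anonymous volume is shielding them from the bind mount and they'll be present when the app starts.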
Related
I am a novice with Docker and containers.
I am running two containers: the first runs FastAPI and the second runs a tool written in Go.
From an endpoint, I want to invoke the Go container and run the tool.
I have this docker-compose file:
version: '3'
services:
fastapi:
build: ./
image: myimage
command: uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
ports:
- 8000:8000
networks:
- test_network
amass_git_worker:
build: https://github.com/OWASP/Amass.git
stdin_open: true
tty: true
entrypoint: ['/bin/sh']
networks:
- test_network
networks:
test_network:
driver: bridge
Main fastapi app Dockerfile:
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
The endpoint calls this function:
def amass_wrapper(search_key:str):
try:
subprocess.run(['docker', 'run', '-v', 'OUTPUT_DIR_PATH:/.config/amass/', 'integrate_scanning_modules-amass_git_worker/bin/sh', 'enum' ,'-d', 'owasp.org'])
When I call this endpoint, I get this error:
Process failed because the executable could not be found.
No such file or directory: 'docker'
Does this mean that I need to install Docker in the FastAPI container?
Is there any other advice on how I can invoke the Go container through a Python subprocess?
You should install the Go binary in the Python application's image, and then call it normally using the subprocess module. Do not do anything Docker-specific here, and especially do not try to run a docker command.
Most Go programs compile down to a single binary, so it's simple enough to put this binary in $PATH somewhere. For example, your Dockerfile might say
FROM python:3.10-slim
# Install OS-level dependencies
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --assume-yes --no-install-recommends \
curl \
unzip
# Download and unpack the Amass zip file, saving only the binary
RUN cd /usr/local \
&& curl -LO https://github.com/OWASP/Amass/releases/download/v3.20.0/amass_linux_amd64.zip \
&& unzip amass_linux_amd64.zip \
&& mv amass_linux_amd64/amass bin \
&& rm -rf amass_linux_amd64 amass_linux_amd64.zip
# Install your application the same way you have it already
WORKDIR /app
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
Now since your image contains a /usr/local/bin/amass binary, you can just run it.
subprocess.run(['amass', 'enum', '-d', 'owasp.org'])
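If you want to capture the tool's output and surface failures through the API, here is a slightly fuller sketch (the search_key parameter and the error handling are illustrative assumptions, not part of the original code):
import subprocess

def amass_wrapper(search_key: str) -> str:
    # Runs the amass binary installed in this image and raises
    # CalledProcessError on a non-zero exit status.
    result = subprocess.run(
        ['amass', 'enum', '-d', search_key],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout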
And you do not need the "do-nothing" container in the Compose setup:
version: '3.8'
services:
fastapi:
build: .
ports:
- '8000:8000'
It's difficult to programmatically run a command in an existing container. Running a new temporary container to launch the program is no easier but is at least somewhat better style. In both cases you'd need to install either the docker binary or the Docker SDK, and give your container access to the host's Docker socket; this access comes with unrestricted root access to the entire host, should you choose to take advantage of it. So this setup is both tricky to test and also comes with some significant security implications, and I'd generally avoid it if possible.
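For completeness, if you did accept those trade-offs, the usual shape is the Docker SDK for Python plus a bind mount of /var/run/docker.sock into the container; a rough sketch, where the image name and the helper name are my assumptions:
import docker

def run_amass_container(search_key: str) -> bytes:
    # Needs `pip install docker` and the host's /var/run/docker.sock
    # mounted into this container (which grants root on the host).
    client = docker.from_env()
    # Starts a temporary container, waits for it, and returns its output.
    return client.containers.run(
        'caffix/amass',              # assumed image name for Amass
        ['enum', '-d', search_key],
        remove=True,
    )
But again, prefer installing the binary into the image as shown above.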
I'm trying to run DjangoRQ workers inside a Docker container - a simple 'worker' container which I will run on a DigitalOcean droplet. I'm using supervisord to run multiple workers.
supervisord runs fine if I set the container command to sleep 3600 (so I can bash in before it crashes), bash into the container, and run supervisord -c supervisord.conf by hand. However, if I set the container's command to that very same invocation, command: supervisord -c supervisord.conf, then the container exits after printing Unlinking stale socket /tmp/supervisor.sock.
worker:
build:
context: ./
dockerfile: DockerfileWorker
env_file:
- .env
environment:
- DJANGO_CONFIG
volumes:
- .:/dask
depends_on:
- postgres
- redis
command: sleep 3600
# `python-base` sets up all our shared environment variables
FROM ubuntu:latest
# python
ENV PYTHONUNBUFFERED=1 \
# prevents python creating .pyc files
PYTHONDONTWRITEBYTECODE=1 \
\
# pip
PIP_NO_CACHE_DIR=off \
PIP_DISABLE_PIP_VERSION_CHECK=on \
PIP_DEFAULT_TIMEOUT=100 \
\
# poetry
# https://python-poetry.org/docs/configuration/#using-environment-variables
POETRY_VERSION=1.1.5 \
# make poetry install to this location
POETRY_HOME="/opt/poetry" \
# make poetry create the virtual environment in the project's root
# it gets named `.venv`
POETRY_VIRTUALENVS_IN_PROJECT=true \
# do not ask any interactive question
POETRY_NO_INTERACTION=1 \
\
# paths
# this is where our requirements + virtual environment will live
PYSETUP_PATH="/opt/pysetup" \
VENV_PATH="/opt/pysetup/.venv" \
LOG_LEVEL=DEBUG
# prepend poetry and venv to path
ENV PATH="$POETRY_HOME/bin:$VENV_PATH/bin:$PATH"
RUN apt-get update \
&& apt-get install --no-install-recommends -y \
# deps for installing poetry
curl \
# deps for building python deps
build-essential \
python3-pip
# install poetry - respects $POETRY_VERSION & $POETRY_HOME
RUN pip3 install poetry
# copy project requirement files here to ensure they will be cached.
WORKDIR $PYSETUP_PATH
COPY poetry.lock pyproject.toml ./
# install runtime deps - uses $POETRY_VIRTUALENVS_IN_PROJECT internally
RUN poetry install --no-dev
# # `production` image used for runtime
# FROM python-base as production
COPY . /dask
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
WORKDIR /dask
For anyone who hits this problem - the solution is frustratingly simple. You must run supervisord in the foreground: by default it daemonizes, so the container's main process exits immediately and Docker shuts the container down. Add the -n flag to supervisord and you'll be good to go.
worker:
build:
context: ./
dockerfile: DockerfileWorker
env_file:
- .env
environment:
- DJANGO_CONFIG
volumes:
- .:/dask
depends_on:
- postgres
- redis
command: supervisord -n -c supervisord.conf
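Equivalently, you can keep the plain supervisord -c supervisord.conf command and set the foreground option in the config file instead; a minimal sketch of the relevant supervisord.conf section:
[supervisord]
nodaemon=true    ; stay in the foreground so the container's main process keeps running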
I am trying to run the Docker container for web, which has a start.sh script to start it, but docker-compose up web gives an error.
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"/home/app/start.sh\": stat /home/app/start.sh: no such file or directory": unknown
The following command shows that start.sh is present in the Docker image:
docker run -it web /bin/bash
docker-compose.yaml
web:
container_name: "web"
environment:
- LANG=C.UTF-8
env_file:
- .env
build: .
volumes:
- ../app:/home/app
- ../media:/home/media
- ../misc:/home/downloads
command: ["/home/app/start.sh"]
dockerfile
# Fetch the base image
FROM ubuntu:18.04
# Install python3 and pip3
RUN apt-get -y update && apt-get install -y python3 python3-pip git libsasl2-dev python-dev libldap2-dev libssl-dev openjdk-8-jdk libwebkitgtk-1.0-0 curl nano wget unzip
# Install pip3 lib
COPY pip3-requirements.txt /pip3-requirements.txt
RUN pip3 install -r pip3-requirements.txt
# Copy Code
ADD . /home/app/
Details:
docker version:
Docker version 19.03.6, build 369ce74a3c
docker-compose version:
docker-compose version 1.17.1, build unknown
base image of dockerfile:
ubuntu:18.04
I'm attempting this again via example, as I think it really does come down to the project structure, sorry if my prior attempt was confusing.
I have a Dockerfile:
FROM ubuntu:18.04
ADD . /home/app
And a docker-compose.yml file:
web:
container_name: "web"
build: .
volumes:
- ../test:/home/app
command: ["/home/app/start.sh"]
With a directory structure of:
./test
    docker-compose.yml
    Dockerfile
    start.sh
Then I can run:
chmod +x start.sh
docker-compose build
docker-compose up
What can be useful to troubleshoot this is to run:
docker run -it web bash
> ls -al / /home /home/app
I hope that helps; I suspect the script isn't being placed into /home/app, which is exactly what the error you're getting says.
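One more thing worth checking: since the compose file bind-mounts ../app over /home/app, the copy baked into the image is hidden at runtime, so start.sh must exist (and be executable) in the host directory. A quick sanity check on the host, assuming the paths from your compose file:
ls -l ../app/start.sh      # the file must exist on the host side of the mount
chmod +x ../app/start.sh   # and it must carry the execute bit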
I am trying to build a Docker image on my local machine in an effort to deploy it to Google Cloud.
The app is simple Python and Flask, built on top of the GPT-2 repo. So I tried the following command in the terminal:
$docker build -t text-gen:v1 .
The image builds and the container is created fine, but the container exits immediately with code [1]. It doesn't appear under the running containers when I run $docker ps; however, it does show up among the stopped containers when I run $docker ps -a.
I removed the created container using the command $docker rm text-gen -f.
I then stopped Docker, restarted my Mac, started Docker again, wrote a docker-compose file, and this time tried the following command to create the container:
$docker-compose up --build flask
assuming flask as the service; however, the same thing happened again: the container exits upon creation with code [1].
Considering this is my first time using any of these tools, it took me a while to find the command
$docker logs text-gen
and it appears there is a syntax error in my main app.py file, which doesn't make any sense:
(venv) myyapproot $docker logs af6175cxxxx
File "app.py", line 89
return f'<div>{html}</div>'
^
SyntaxError: invalid syntax
This is the code the logs refer to in app.py:
87 html = ''
88 html = add_content(html, box(seed, text))
89 return f'<div>{html}</div>'
I expected the function to return a div element with some HTML content, which works fine when I run it in my local Python environment, but not in Docker.
The code seems fine to me; can anyone please point me in the right direction?
I feel I'm missing a dependency or something.
Things I've done:
- reinstalled Docker over a wired connection
- retested my app outside of Docker several times
- used a different build method
My setup:
macOS 10.14.1
Dependencies that produce a working app in Python; this is how I had my environment.yml:
dependencies:
- python==3.7
- pip:
- Flask==1.0.2
- torch==1.0.1
- regex==2017.4.5
- requests==2.21.0
- numpy==1.16.2
- wtforms==2.2.1
- tqdm==4.31.1
- gunicorn==19.9.0
- firebase-admin==2.13.0
- google-cloud-firestore==0.29.0
This is my docker-compose.yml file:
version: '3.3'
services:
flask:
image: text-gen
build: .
command: /opt/conda/envs/ml-flask/bin/python app.py
ports:
- "5000:5000"
and this is my Dockerfile:
FROM ubuntu:18.04
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
COPY ["environment.yml", "/root/environment.yml"]
RUN apt-get update --fix-missing && apt-get install -y wget bzip2 ca-certificates \
libglib2.0-0 libxext6 libsm6 libxrender1 \
git mercurial subversion python-dev gcc
# install miniconda and python 3.7
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-4.5.11-Linux-x86_64.sh -O ~/miniconda.sh && \
/bin/bash ~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh && \
ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc
RUN /opt/conda/bin/conda env create -f=/root/environment.yml -n ml-flask
RUN echo "conda activate ml-flask" >> ~/.bashrc
SHELL ["/bin/bash", "-c", "source ~/.bashrc"]
RUN conda activate ml-flask
COPY ["deployment", "/usr/src/app/deployment"]
COPY ["models", "/usr/src/app/models"]
WORKDIR /usr/src/app/deployment
CMD [ "/bin/bash" ]
I switched my local development setup to Docker. I use the Django framework. For the frontend I use the gulp build command to "create" my files. I've tried a lot, and looked into the Cookiecutter and Saleor projects, but I'm still having issues installing npm in a way that lets me call the gulp build command in my Docker container.
I already tried to add:
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
RUN apt-get update && apt-get install -y \
nodejs \
COPY ./package.json /app/
RUN npm install
While npm is installed, I still can't run the gulp build command in my container. It just says gulp is an unknown command. So it seems npm isn't installing the packages defined in my package.json file. Has anyone here already solved this and can give me some tips?
Dockerfile
# Pull base image
FROM python:3.7
# Define environment variable
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y \
# Language dependencies
gettext \
# In addition, when you clean up the apt cache by removing /var/lib/apt/lists
# it reduces the image size, since the apt cache is not stored in a layer.
&& rm -rf /var/lib/apt/lists/*
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install Python dependencies
RUN pip install pipenv
RUN pipenv install --system --deploy --dev
docker-compose.yml
version: '3'
services:
web:
build:
context: .
dockerfile: ./compose/local/django/Dockerfile
env_file: .env
volumes:
- .:/app
ports:
- "8000:8000"
depends_on:
- db
entrypoint: ./compose/local/django/entrypoint.sh
container_name: myproject
db:
image: postgres
ports:
- "5432:5432"
environment:
# Password will be required if connecting from a different host
- POSTGRES_PASSWORD=password
Update 1:
# Pull base image
FROM combos/python_node:3_10
# Define environment variable
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y \
# Language dependencies
gettext \
# In addition, when you clean up the apt cache by removing /var/lib/apt/lists
# it reduces the image size, since the apt cache is not stored in a layer.
&& rm -rf /var/lib/apt/lists/*
# COPY webpack.config.js app.json package.json package-lock.json /app/
# WORKDIR /app
# RUN npm install
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
RUN npm install
# Install Python dependencies
RUN pip install pipenv
RUN pipenv install --system --deploy --dev
Are you sure npm install is being called in the correct directory? You copy package.json to /app but run npm install from an unknown folder. You can try:
COPY ./package.json /app/
RUN cd /app/ && npm install
But I think you'd want to install gulp globally anyway, so you can skip package.json and just use:
RUN npm install -g gulp-cli
This way, whatever calls gulp will have it on the PATH, and not just in that one directory.
Also, if you want to get a Docker image with both Python 3.7 and Node.js 10 already installed, you can use combos/python_node:3.7_10. It's rebuilt daily to contain the latest versions of both images.
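Putting those pieces together, a minimal sketch of the Node-related part of the Dockerfile (this assumes gulp itself is listed in package.json, which is what lets the gulp-cli launcher find the local gulp):
FROM combos/python_node:3.7_10
WORKDIR /app
# Copy the manifests first so the npm install layer is cached across code changes
COPY package.json package-lock.json /app/
RUN npm install \
 && npm install -g gulp-cli
# Now bring in the rest of the project and build the frontend assets
COPY . /app
RUN gulp build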