How to stop a docker database container - python

I'm trying to run the following docker-compose file:
version: '3'
services:
  database:
    image: postgres
    container_name: pg_container
    environment:
      POSTGRES_USER: partman
      POSTGRES_PASSWORD: partman
      POSTGRES_DB: partman
  app:
    build: .
    container_name: partman_container
    links:
      - database
    environment:
      - DB_NAME=partman
      - DB_USER=partman
      - DB_PASSWORD=partman
      - DB_HOST=database
      - DB_PORT=5432
      - SECRET_KEY='=321t+92_)#%_4b+f-&0ym(fs2p5-0-_nz5mhb_cak9zlo!bv#'
    depends_on:
      - database
    expose:
      - "8000"
      - "8020"
    ports:
      - "127.0.0.1:8020:8020"
volumes:
  pgdata: {}
when running docker-compose up --build with the following Dockerfile:
# Dockerfile
# FROM directive instructing base image to build upon
FROM python:3.7-buster
RUN apt-get update && apt-get install nginx vim -y --no-install-recommends
COPY nginx.default /etc/nginx/sites-available/default
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
RUN mkdir .pip_cache \
    && mkdir -p /opt/app \
    && mkdir -p /opt/app/pip_cache \
    && mkdir -p /opt/app/py-partman
COPY start-server.sh /opt/app/
COPY requirements.txt start-server.sh /opt/app/
COPY .pip_cache /opt/app/pip_cache/
COPY partman /opt/app/py-partman/
WORKDIR /opt/app
RUN pip install -r requirements.txt --cache-dir /opt/app/pip_cache
RUN chown -R www-data:www-data /opt/app
RUN /bin/bash -c 'ls -la; chmod +x /opt/app/start-server.sh; ls -la'
EXPOSE 8020
STOPSIGNAL SIGTERM
CMD ["/opt/app/start-server.sh"]
/opt/app/start-server.sh:
#!/usr/bin/env bash
# start-server.sh
ls
pwd
cd py-partman
ls
pwd
python manage.py createsuperuser --no-input
python manage.py makemigrations
python manage.py migrate
python manage.py initialize_entities
the database container keeps on running; I want to stop it, because otherwise the Jenkins job will keep on waiting for the container to terminate.
Any good ideas / better ideas how to do so?

Maybe with -> docker stop <container id or container name>
If it can't be stopped gracefully, use docker kill (note that docker stop has no -f flag; docker rm -f force-removes a running container).
Try it.

Docker Compose is generally oriented around long-running server-type processes, and since database containers can frequently take 30-60 seconds to start up, it's usually beneficial not to tear them down and recreate them repeatedly. (In fact, the artifacts you show look a little odd for not including a python manage.py runserver command.)
It looks like there is a docker-compose up option for what you're looking for:
docker-compose up --build --abort-on-container-exit
For a Jenkins job, docker-compose up --build --exit-code-from app is also handy: it implies --abort-on-container-exit and makes docker-compose exit with the app container's exit status.
If you wanted to do this more manually, and especially if your app container's normal behavior is to actually start a server, you can docker-compose run the initialization command. This will start up the container and its dependencies, but it also expects its command to return, and then you can clean up yourself.
docker-compose build
docker-compose run app /opt/app/initialize-only.sh
docker-compose down -v

Related

Running Django's collectstatic in Dockerfile produces empty directory

I'm trying to run Django from a Docker container on Heroku, but to make that work, I need to run python manage.py collectstatic during my build phase. To achieve that, I wrote the following Dockerfile:
# Set up image
FROM python:3.10
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Install poetry and identify Python dependencies
RUN pip install poetry
COPY pyproject.toml /usr/src/app/
# Install Python dependencies
RUN set -x \
    && apt update -y \
    && apt install -y \
        libpq-dev \
        gcc \
    && poetry config virtualenvs.create false \
    && poetry install --no-ansi
# Copy source into image
COPY . /usr/src/app/
# Collect static files
RUN python -m manage collectstatic -v 3 --no-input
And here's the docker-compose.yml file I used to run the image:
services:
  db:
    image: postgres
    env_file:
      - .env.docker.db
    volumes:
      - db:/var/lib/postgresql/data
    networks:
      - backend
    ports:
      - "5433:5432"
  web:
    build: .
    restart: always
    env_file:
      - .env.docker.web
    ports:
      - "8001:$PORT"
    volumes:
      - .:/usr/src/app
    depends_on:
      - db
    networks:
      - backend
    command: gunicorn --bind 0.0.0.0:$PORT myapp.wsgi
volumes:
  db:
networks:
  backend:
    driver: bridge
The Dockerfile builds just fine, and I can even see that collectstatic is running and collecting the appropriate files during the build. However, when the build is finished, the only evidence that collectstatic ran is an empty directory called staticfiles. If I run collectstatic again inside of my container, collectstatic works just fine, but since Heroku doesn't persist files created after the build stage, they disappear when my app restarts.
I found a few SO answers discussing how to get collectstatic to run inside a Dockerfile, but that's not my problem; my problem is that it does run, but the collected files don't show up in the container. Anyone have a clue what's going on?
UPDATE: This answer did the trick. My docker-compose.yml was overriding the changes made by collectstatic with this line:
volumes:
  - .:/usr/src/app
If, like me, you want to keep the bind mount for ease of local development (so that you don't need to re-build each time), you can edit the command for the web service as follows:
command: bash -c "python -m manage collectstatic && gunicorn --bind 0.0.0.0:$PORT myapp.wsgi"
Note that the image would have run just fine as-is had I pushed it to Heroku (since Heroku doesn't use the docker-compose.yml file), so this was just a problem affecting containers I created on my local machine.
You are overriding the content of /usr/src/app in your container when you add the
volumes:
  - .:/usr/src/app
to your docker-compose file.
Remove it, since you already copied everything during the build.
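A bind mount completely shadows whatever the image placed at that path, including the build-time collectstatic output. A rough pure-Python filesystem analogy (directory names are illustrative, not Docker's actual mechanism):

```python
import os
import tempfile

# Build time: COPY . /usr/src/app plus collectstatic populate the image layer.
image_layer = tempfile.mkdtemp()
os.makedirs(os.path.join(image_layer, "staticfiles"))
open(os.path.join(image_layer, "staticfiles", "admin.css"), "w").close()

# Run time: `- .:/usr/src/app` mounts the host checkout over that same path,
# so lookups resolve against the host directory, never the image layer.
host_checkout = tempfile.mkdtemp()  # collectstatic never ran here
os.makedirs(os.path.join(host_checkout, "staticfiles"))

# What the container actually sees at /usr/src/app/staticfiles:
visible = os.listdir(os.path.join(host_checkout, "staticfiles"))
print(visible)  # [] -- the files collected into the image layer are hidden
```

The files are still in the image; they are just unreachable while the mount is in place, which is why removing the volume (or re-running collectstatic at container start) fixes it.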

How to Build a ubuntu docker container with postgres installed in?

Here is the challenge: I've written a program in Python with PyQt5 and some other libraries, and it uses PostgreSQL. Now the question is, how could I build an Ubuntu Docker container with PostgreSQL installed in it? I also have to set up the postgres user as postgres and the password as 1234 in order to make everything work well.
I'm lost on how to write the Dockerfile properly while respecting all of these requirements.
Thanks in advance for a solution, and if something isn't clear, ask me and I'll clarify it in a few minutes.
I have put together a sample configuration.
docker-compose.yml
version: '3'

volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}

services:
  web:
    build:
      context: .
      dockerfile: ./compose/python/Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      - postgres
    env_file:
      - ./.envs/.postgres
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/postgres/Dockerfile
    image: app_production_postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data
      - local_postgres_data_backups:/backups
    env_file:
      - ./.envs/.postgres
    ports:
      - "5432:5432"
compose/postgres/Dockerfile
FROM postgres:11.3
compose/python/Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY ./compose/python/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start
COPY ./compose/python/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
ENTRYPOINT ["/entrypoint"]
compose/python/entrypoint
#!/bin/sh

set -o errexit
set -o nounset

if [ -z "${POSTGRES_USER}" ]; then
    base_postgres_image_default_user='postgres'
    export POSTGRES_USER="${base_postgres_image_default_user}"
fi
export DATABASE_URL="postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}"

postgres_ready() {
python << END
import sys

import psycopg2

try:
    psycopg2.connect(
        dbname="${POSTGRES_DB}",
        user="${POSTGRES_USER}",
        password="${POSTGRES_PASSWORD}",
        host="${POSTGRES_HOST}",
        port="${POSTGRES_PORT}",
    )
except psycopg2.OperationalError:
    sys.exit(-1)
sys.exit(0)
END
}

until postgres_ready; do
    >&2 echo 'Waiting for PostgreSQL to become available...'
    sleep 1
done
>&2 echo 'PostgreSQL is available'

exec "$@"
compose/python/start
#!/bin/sh
set -o errexit
set -o nounset
python -m http.server
requirements.txt
psycopg2>=2.7,<3.0
.envs/.postgres
# PostgreSQL
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB=your_app
POSTGRES_USER=debug
POSTGRES_PASSWORD=debug
This configuration is a cut-down version of the Docker project generated by django cookiecutter.
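For reference, the DATABASE_URL the entrypoint exports can be reproduced in plain Python from the variables in .envs/.postgres (a sketch; the fallback mirrors the POSTGRES_USER default handling in the entrypoint):

```python
def database_url(env):
    """Build the same URL the entrypoint exports."""
    user = env.get("POSTGRES_USER") or "postgres"  # base image default user
    return "postgres://{}:{}@{}:{}/{}".format(
        user,
        env["POSTGRES_PASSWORD"],
        env["POSTGRES_HOST"],
        env["POSTGRES_PORT"],
        env["POSTGRES_DB"],
    )

env = {
    "POSTGRES_HOST": "postgres",
    "POSTGRES_PORT": "5432",
    "POSTGRES_DB": "your_app",
    "POSTGRES_USER": "debug",
    "POSTGRES_PASSWORD": "debug",
}
print(database_url(env))  # postgres://debug:debug@postgres:5432/your_app
```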

docker entrypoint behaviour with django

I'm trying to make my first django container with uwsgi. It works as follows:
FROM python:3.5
RUN apt-get update && \
    apt-get install -y && \
    pip3 install uwsgi
COPY ./projects.thux.it/requirements.txt /opt/app/requirements.txt
RUN pip3 install -r /opt/app/requirements.txt
COPY ./projects.thux.it /opt/app
COPY ./uwsgi.ini /opt/app
COPY ./entrypoint /usr/local/bin/entrypoint
ENV PYTHONPATH=/opt/app:/opt/app/apps
WORKDIR /opt/app
ENTRYPOINT ["entrypoint"]
EXPOSE 8000
#CMD ["--ini", "/opt/app/uwsgi.ini"]
entrypoint here is a script that detects whether to call uwsgi (in case there are no args) or python manage in all other cases.
I'd like to use this container both as an executable (dj migrate, dj shell, ... - dj here is python manage.py the handler for django interaction) and as a long-term container (uwsgi --ini uwsgi.ini). I use docker-compose as follows:
web:
  image: thux-projects:3.5
  build: .
  ports:
    - "8001:8000"
  volumes:
    - ./projects.thux.it/web/settings:/opt/app/web/settings
    - ./manage.py:/opt/app/manage.py
    - ./uwsgi.ini:/opt/app/uwsgi.ini
    - ./logs:/var/log/django
And I do in fact manage to serve the project correctly, but to interact with Django (to run check, for example) I need to issue:
docker-compose exec web entrypoint check
while reading the docs I would have imagined I just needed the arguments (without entrypoint):
Command line arguments to docker run <image> will be appended after all elements in an exec form ENTRYPOINT, and will override all elements specified using CMD. This allows arguments to be passed to the entry point, i.e., docker run <image> -d will pass the -d argument to the entry point.
The working situation with "repeated" entrypoint:
$ docker-compose exec web entrypoint check
System check identified no issues (0 silenced).
The failing one if I avoid 'entrypoint':
$ docker-compose exec web check
OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "exec: \"check\": executable file not found in $PATH": unknown
docker exec never uses a container's entrypoint; it just directly runs the command you give it.
When you docker run a container, the entrypoint and command you give to start it are combined to produce a single command line, and that command becomes the main container process. On the other hand, when you docker exec a command in a running container, it's interpreted literally; there aren't two parts of the command line to assemble, and the container's entrypoint isn't considered at all.
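The rule described above can be modeled in a few lines of Python (a toy sketch, not Docker's actual code):

```python
def main_command(entrypoint, cmd, run_args):
    """How `docker run` assembles the main process command line:
    an exec-form ENTRYPOINT is always kept, and arguments given to
    `docker run` replace CMD when present. `docker exec` bypasses
    all of this and runs its argument verbatim."""
    return list(entrypoint) + (list(run_args) if run_args else list(cmd))

# docker run myimage           -> ENTRYPOINT + CMD
print(main_command(["entrypoint"], ["--ini", "/opt/app/uwsgi.ini"], []))
# docker run myimage check     -> ENTRYPOINT + the override
print(main_command(["entrypoint"], ["--ini", "/opt/app/uwsgi.ini"], ["check"]))
```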
For the use case you describe, you don't need an entrypoint script to process the command in an unusual way. You can create a symlink to the manage.py script to give a shorter alias to run it, but make the default command be the uwsgi runner.
RUN chmod +x manage.py
RUN ln -s /opt/app/manage.py /usr/local/bin/dj
CMD ["uwsgi", "--ini", "/opt/app/uwsgi.ini"]
# Runs uwsgi:
docker run -p 8000:8000 myimage
# Manually trigger database migrations:
docker run --rm myimage dj migrate

Compose up container exited with code 0 and logs it with empty

I need to containerize a Django web project with Docker. I divided the project into a dashboard, an api-server and a database. When I type docker-compose up, it prints api-server exited with code 0 and the api-server container shows Exited (0), and when I type docker logs api-server, it returns nothing, but the other containers are normal. I don't know how to track down the problem.
api-server directory structure is as follows
api-server
    server/
    Dockerfile
    requirements.txt
    start.sh
    ...
Some of the docker-compose.yml content is as follows:
dashboard:
  image: nginx:latest
  container_name: nginx-dashboard
  volumes:
    - /nginx/nginx/default:/etc/nginx/conf.d/default.conf:ro
    - /nginx/dist:/var/www/html:ro
  ports:
    - "80:80"
  depends_on:
    - api-server
api-server:
  build: /api-server
  container_name: api-server
  volumes:
    - /api-server:/webapps
  ports:
    - "8000:8000"
  depends_on:
    - db
db:
  image: postgres
  container_name: Postgres
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_DB=postgres
  ports:
    - "5432:5432"
Some of the api-server Dockerfile content is as follows:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /webapps
WORKDIR /webapps
RUN apt-get clean && apt-get update && apt-get upgrade -y && apt-get install -y python3-pip libpq-dev apt-utils
COPY ./requirements.txt /webapps/
RUN pip3 install -r /webapps/requirements.txt
COPY . /webapps/
CMD ["bash","-c","./start.sh"]
start.sh is as follows
#!/usr/bin/env bash
cd server/
python manage.py runserver 0.0.0.0:8000
Typing docker-compose up gives the following result:
root@VM:/home/test/Documents/ComposeTest# docker-compose up
Creating network "composetest_default" with the default driver
Creating Postgres ... done
Creating api-server ... done
Creating dashboard ... done
Attaching to Postgres, api-server, dashboard
Postgres | The files belonging to this database system will be owned by user "postgres".
Postgres | This user must also own the server process.
...
...
api-server exited with code 0
api-server exited with code 0
docker logs api-server is empty.
I would very much appreciate it if you could tell me how to diagnose this problem; it would be even better to provide a solution.
You are already copying api-server into the image at build time, which should work fine, but in your docker-compose file the volume then overrides all of the pip packages and code:
volumes:
  - /api-server:/webapps
Remove the volume from your docker-compose file and it should work.
Second thing: set execute permission on the bash script.
COPY . /webapps/
RUN chmod +x ./start.sh
Third thing: you don't need to run the script through bash, as there is nothing in it that CMD cannot do directly, so why not run the server as the CMD (with WORKDIR /webapps/server, since manage.py lives in server/)?
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Adding docker to django project: no such file or directory

I am trying to add Docker support to an already existing Django project. I have a Dockerfile, a docker-compose file, and a gunicorn.sh which I use as a script to launch the whole thing. That script works fine when I run it from my shell.
When I run:
docker-compose -f docker-compose.yml up
I get this error:
ERROR: for intranet_django_1 Cannot start service django: oci runtime error: container_linux.go:247: starting container process caused "exec: \"/srv/gunicorn.sh\": stat /srv/gunicorn.sh: no such file or directory"
What the hell am I doing wrong?
I am very much a docker n00b so any explanation would be most welcome.
The Dockerfile looks like so:
FROM python:3
ENV PYTHONUNBUFFERED 1
ENV DB_NAME unstable_intranet_django
ENV DB_USER django
ENV DB_PASSWORD ookookEEK
ENV DB_HOST db
ENV DB_PORT 3306
RUN groupadd -r django
RUN useradd -r -g django django
COPY ./requirements/requierments.txt /srv/
RUN pip install -U pip
RUN pip install -r /srv/requierments.txt
COPY ./intranet_site/ /srv
RUN chmod a+rx /srv/gunicorn.sh
RUN chown -R django:django /srv/
USER django
WORKDIR /srv
I am well aware that the passwords should not be set here and that a permanent volume with a file containing them is probably the best way to deal with it. However, I kinda want something working instead of spending hours fiddling with things and not being able to see anything run…
The docker-compose.yml looks like:
version: '3'
services:
db:
image: mariadb
environment:
- MYSQL_ROOT_PASSWORD=fubar
- MYSQL_USER=django
- MYSQL_PASSWORD=ookookEEK
- MYSQL_DATABASE=unstable_intranet_django
django:
build: .
command: /srv/gunicorn.sh
volumes:
- .:/srv
ports:
- "8000:8000"
depends_on:
- db
Finally, the gunicorn.sh file is:
#!/bin/bash
# -*- coding: utf-8 -*-
# Check if the database is alive or not.
python << END
from MySQLdb import Error
from MySQLdb import connect
from sys import exit
from time import sleep

retry = 0
while True:
    try:
        conn = connect(db="$DB_NAME",
                       user="$DB_USER",
                       password="$DB_PASSWORD",
                       host="$DB_HOST",
                       port=$DB_PORT)
        print("✔ DB $DB_NAME on $DB_HOST:$DB_PORT is up.")
        break
    except Error as err:
        snooze = retry / 10.0
        print("✖ DB $DB_NAME on $DB_HOST:$DB_PORT is unavailable "
              "→ sleeping for {}…".format(snooze))
        sleep(snooze)
        retry += 1
exit(0)
END
# Set up log file.
log="./gunicorn.log"
date > ${log}
# Collectstatic
echo "Collecting static files." | tee -a ${log}
python manage.py collectstatic -v 3 --noinput >> ${log}
# Migrate database
echo "Doing database migration." | tee -a ${log}
python manage.py migrate -v 3 >> ${log}
# New shiny modern hip way:
echo "Running Gunicorn on ${HOSTNAME} …" | tee -a ${log}
gunicorn -b ${HOSTNAME}:8000 -w 2 intranet_site.wsgi | tee -a ${log}
To make things stranger:
$ docker run -it intranet_web /bin/bash
django@ce7f641cc1c7:/srv$ ls -l gunicorn.sh
-rwxrwxr-x. 1 django django 1677 Jun  2 07:51 gunicorn.sh
django@ce7f641cc1c7:/srv$ ./gunicorn.sh
✖ DB unstable_intranet_django on 127.0.0.1:3306 is unavailable → sleeping for 0.0…
So running the script from the containers seems to work just fine…
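As an aside, the wait loop in gunicorn.sh retries forever and its first sleep is zero seconds. A bounded variant with linear backoff might look like this (the probe is injected so the sketch runs without a database; names are illustrative):

```python
import time

def wait_for(probe, max_retries=10, step=0.0):
    """Call probe() until it succeeds or max_retries attempts are used.
    Sleeps retry * step seconds between attempts (linear backoff)."""
    for retry in range(max_retries):
        try:
            probe()
            return True
        except Exception:
            time.sleep(retry * step)
    return False

attempts = []
def flaky_connect():
    """Stand-in for MySQLdb.connect: fails twice, then succeeds."""
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("DB not up yet")

print(wait_for(flaky_connect))  # True (succeeds on the third attempt)
print(len(attempts))            # 3
```

Returning False instead of looping forever lets the caller decide whether to abort, which is friendlier to CI jobs than a container that hangs on a dead database.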
I think you should have ADD . /srv/ instead of COPY ./intranet_site/ /srv, because ADD . /srv/ adds all the content of the directory that contains the Dockerfile to the /srv folder in the container. The COPY/ADD source paths are resolved relative to the build context, i.e. the folder that contains the Dockerfile, and I suppose your Dockerfile is in the root directory of the project (alongside docker-compose.yml and gunicorn.sh). You could also use COPY . /srv/ with the same effect.
Suspect the path shouldn't have a leading ./:
command: /srv/gunicorn.sh
