How to build an Ubuntu Docker container with PostgreSQL installed? - python

Here is the challenge: I've written a program in Python with PyQt5 and some other libraries, and it uses PostgreSQL. Now the question is, how could I build an Ubuntu Docker container with PostgreSQL installed in it? I also have to set the PostgreSQL user to postgres and the password to 1234 in order to make everything work.
I'm lost on how to write the Dockerfile properly and meet all the requirements.
Thanks in advance for the solution; if something isn't clear, ask me a question and I will clarify within a few minutes.

I have put together a sample configuration.
docker-compose.yml
version: '3'

volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}

services:
  web:
    build:
      context: .
      dockerfile: ./compose/python/Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      - postgres
    env_file:
      - ./.envs/.postgres
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/postgres/Dockerfile
    image: app_production_postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data
      - local_postgres_data_backups:/backups
    env_file:
      - ./.envs/.postgres
    ports:
      - "5432:5432"
compose/postgres/Dockerfile
FROM postgres:11.3
compose/python/Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY ./compose/python/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start
COPY ./compose/python/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
ENTRYPOINT ["/entrypoint"]
compose/python/entrypoint
#!/bin/sh
set -o errexit
set -o nounset

if [ -z "${POSTGRES_USER}" ]; then
    base_postgres_image_default_user='postgres'
    export POSTGRES_USER="${base_postgres_image_default_user}"
fi
export DATABASE_URL="postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}"

postgres_ready() {
python << END
import sys
import psycopg2
try:
    psycopg2.connect(
        dbname="${POSTGRES_DB}",
        user="${POSTGRES_USER}",
        password="${POSTGRES_PASSWORD}",
        host="${POSTGRES_HOST}",
        port="${POSTGRES_PORT}",
    )
except psycopg2.OperationalError:
    sys.exit(-1)
sys.exit(0)
END
}

until postgres_ready; do
    >&2 echo 'Waiting for PostgreSQL to become available...'
    sleep 1
done
>&2 echo 'PostgreSQL is available'

exec "$@"
compose/python/start
#!/bin/sh
set -o errexit
set -o nounset
python -m http.server
requirements.txt
psycopg2>=2.7,<3.0
.envs/.postgres
# PostgreSQL
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB=your_app
POSTGRES_USER=debug
POSTGRES_PASSWORD=debug
This configuration is a cut-down version of the Docker project generated by Django cookiecutter.
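The official postgres image reads POSTGRES_USER, POSTGRES_PASSWORD and POSTGRES_DB from the environment the first time it initializes its data directory, so matching the credentials from the question is just a matter of editing .envs/.postgres before the first start. A minimal sketch, substituting the question's values for the debug ones above:
.envs/.postgres (edited)
# PostgreSQL
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB=your_app
POSTGRES_USER=postgres
POSTGRES_PASSWORD=1234
Then build and start the stack:
docker-compose build
docker-compose up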

Related

Dockerized Django app and MySQL with docker-compose using .env

I would like to run my Django project in a Docker container with its database in another Docker container, inside a Debian host.
When I run my Docker container, I get some errors, like: Lost connection to MySQL server during query ([Errno 104] Connection reset by peer).
The command mysql > SET GLOBAL log_bin_trust_function_creators = 1 is very important because the database's Django user creates triggers.
Moreover, I use the same .env file to create the DB image and to store the DB user and password. Its path is settings/.env.
My code:
docker-compose.yml:
version: '3.3'

services:
  db:
    image: mysql:8.0.29
    container_name: db_mysql_container
    environment:
      MYSQL_DATABASE: $DB_NAME
      MYSQL_USER: $DB_USER
      MYSQL_PASSWORD: $DB_PASSWORD
      MYSQL_ROOT_PASSWORD: $DB_ROOT_PASSWORD
    command: ["--log_bin_trust_function_creators=1"]
    ports:
      - '3306:3306'
    expose:
      - '3306'
  api:
    build: .
    container_name: django_container
    command: bash -c "pip install -q -r requirements.txt &&
                      python manage.py migrate &&
                      python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/app
    ports:
      - '8000:8000'
    depends_on:
      - db
Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.9.14-buster
ENV PYTHONUNBUFFERED=1
RUN mkdir /app
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
How do I start my Django project? Is it possible to start only the DB container?
What commands do I need to execute and what changes do I need to make? I'm a novice with Docker! So if you help me, please explain your commands and actions!
You can find this project on my GitHub.
Thanks!
To run the dockerized Django project, you can simply run the command below:
docker-compose run projectname bash -c "python manage.py createsuperuser"
The above command is used to create a superuser.
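As for starting only the DB container: docker-compose up accepts a service name and starts just that service plus anything it depends_on, so, using the service names from the compose file above:
# Start only the MySQL container, in the background
docker-compose up -d db
# Start the whole stack (db, then api)
docker-compose up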

Running Django's collectstatic in Dockerfile produces empty directory

I'm trying to run Django from a Docker container on Heroku, but to make that work, I need to run python manage.py collectstatic during my build phase. To achieve that, I wrote the following Dockerfile:
# Set up image
FROM python:3.10
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Install poetry and identify Python dependencies
RUN pip install poetry
COPY pyproject.toml /usr/src/app/
# Install Python dependencies
RUN set -x \
    && apt update -y \
    && apt install -y \
        libpq-dev \
        gcc \
    && poetry config virtualenvs.create false \
    && poetry install --no-ansi
# Copy source into image
COPY . /usr/src/app/
# Collect static files
RUN python -m manage collectstatic -v 3 --no-input
And here's the docker-compose.yml file I used to run the image:
services:
  db:
    image: postgres
    env_file:
      - .env.docker.db
    volumes:
      - db:/var/lib/postgresql/data
    networks:
      - backend
    ports:
      - "5433:5432"
  web:
    build: .
    restart: always
    env_file:
      - .env.docker.web
    ports:
      - "8001:$PORT"
    volumes:
      - .:/usr/src/app
    depends_on:
      - db
    networks:
      - backend
    command: gunicorn --bind 0.0.0.0:$PORT myapp.wsgi

volumes:
  db:

networks:
  backend:
    driver: bridge
The Dockerfile builds just fine, and I can even see that collectstatic is running and collecting the appropriate files during the build. However, when the build is finished, the only evidence that collectstatic ran is an empty directory called staticfiles. If I run collectstatic again inside of my container, collectstatic works just fine, but since Heroku doesn't persist files created after the build stage, they disappear when my app restarts.
I found a few SO answers discussing how to get collectstatic to run inside a Dockerfile, but that's not my problem; my problem is that it does run, but the collected files don't show up in the container. Anyone have a clue what's going on?
UPDATE: This answer did the trick. My docker-compose.yml was overriding the changes made by collectstatic with this volume mount:
volumes:
  - .:/usr/src/app
If, like me, you want to keep the bind mount for ease of local development (so that you don't need to re-build each time), you can edit the command for the web service as follows:
command: bash -c "python -m manage collectstatic && gunicorn --bind 0.0.0.0:$PORT myapp.wsgi"
Note that the image would have run just fine as-is had I pushed it to Heroku (since Heroku doesn't use the docker-compose.yml file), so this was just a problem affecting containers I created on my local machine.
You are overriding the content of /usr/src/app in your container when you added the
volumes:
  - .:/usr/src/app
to your docker compose file.
Remove it since you already copied everything during the build.
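If you would rather keep the bind mount for live development, another option is to layer a named volume over just the static directory so the bind mount no longer hides it. A sketch, assuming STATIC_ROOT points at /usr/src/app/staticfiles:
web:
  volumes:
    - .:/usr/src/app
    - staticfiles:/usr/src/app/staticfiles

volumes:
  db:
  staticfiles:
Note that a named volume keeps its contents across image rebuilds, so the command-based approach above remains the simplest way to pick up freshly collected files.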

Docker-compose Django Supervisord Configuration

I would like to run some extra programs while my Django application is running. That's why I chose supervisord. I configured my docker-compose and Dockerfile like this:
Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
# some of project settings here
ADD supervisord.conf /etc/supervisord.conf
ADD supervisor-worker.conf /etc/supervisor/conf.d/
CMD ["/usr/local/bin/supervisord", "-c", "/etc/supervisord.conf"]
docker-compose:
api:
  build: .
  command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
  restart: unless-stopped
  container_name: project
  volumes:
    - .:/project
  ports:
    - "8000:8000"
  network_mode: "host"
supervisord.conf
[supervisord]
nodaemon=true
[include]
files = /etc/supervisor/conf.d/*.conf
[supervisorctl]
[inet_http_server]
port=*:9001
username=root
password=root
So my problem is: when I bring the docker-compose project up, the other dependencies (postgresql, redis) work fine, but supervisord doesn't start. When I run the supervisord command inside the container, it works. But on startup, it doesn't run.
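One thing worth checking (an observation about the config shown, not an answer from the original thread): the command: line in docker-compose overrides the Dockerfile's CMD, so this setup starts runserver directly and supervisord never runs. A sketch of dropping the override and letting supervisord manage Django itself, where the [program:django] section name is made up for illustration:
docker-compose (no command:, so the Dockerfile CMD starts supervisord):
api:
  build: .
  restart: unless-stopped
  container_name: project
  volumes:
    - .:/project
  ports:
    - "8000:8000"
  network_mode: "host"
supervisor-worker.conf:
[program:django]
directory=/project
command=python manage.py runserver 0.0.0.0:8000
autostart=true
autorestart=true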

How to stop a docker database container

Trying to run the following docker compose file
version: '3'

services:
  database:
    image: postgres
    container_name: pg_container
    environment:
      POSTGRES_USER: partman
      POSTGRES_PASSWORD: partman
      POSTGRES_DB: partman
  app:
    build: .
    container_name: partman_container
    links:
      - database
    environment:
      - DB_NAME=partman
      - DB_USER=partman
      - DB_PASSWORD=partman
      - DB_HOST=database
      - DB_PORT=5432
      - SECRET_KEY='=321t+92_)#%_4b+f-&0ym(fs2p5-0-_nz5mhb_cak9zlo!bv#'
    depends_on:
      - database
    expose:
      - "8000"
      - "8020"
    ports:
      - "127.0.0.1:8020:8020"

volumes:
  pgdata: {}
when running docker-compose up --build with the following Dockerfile
# Dockerfile
# FROM directive instructing base image to build upon
FROM python:3.7-buster
RUN apt-get update && apt-get install nginx vim -y --no-install-recommends
COPY nginx.default /etc/nginx/sites-available/default
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
RUN mkdir .pip_cache \
    && mkdir -p /opt/app \
    && mkdir -p /opt/app/pip_cache \
    && mkdir -p /opt/app/py-partman
COPY start-server.sh /opt/app/
COPY requirements.txt start-server.sh /opt/app/
COPY .pip_cache /opt/app/pip_cache/
COPY partman /opt/app/py-partman/
WORKDIR /opt/app
RUN pip install -r requirements.txt --cache-dir /opt/app/pip_cache
RUN chown -R www-data:www-data /opt/app
RUN /bin/bash -c 'ls -la; chmod +x /opt/app/start-server.sh; ls -la'
EXPOSE 8020
STOPSIGNAL SIGTERM
CMD ["/opt/app/start-server.sh"]
/opt/app/start-server.sh:
#!/usr/bin/env bash
# start-server.sh
ls
pwd
cd py-partman
ls
pwd
python manage.py createsuperuser --no-input
python manage.py makemigrations
python manage.py migrate
python manage.py initialize_entities
the database container keeps on running, and I want to stop it, because otherwise the Jenkins job will keep waiting for the container to terminate.
Any good ideas / better ideas on how to do so?
Maybe with docker stop <container id or container name>.
If it can't be stopped that way, docker kill <container> (or docker rm -f <container>) will force it.
Try it.
Docker Compose is generally oriented around long-running server-type processes, and since database containers can frequently take 30-60 seconds to start up, it's usually beneficial not to tear them down and recreate them repeatedly. (In fact, the artifacts you show look a little odd for not including a python manage.py runserver command.)
It looks like there is a docker-compose up option for what you're looking for:
docker-compose up --build --abort-on-container-exit
If you wanted to do this more manually, and especially if your app container's normal behavior is to actually start a server, you can docker-compose run the initialization command. This will start up the container and its dependencies, but it also expects its command to return, and then you can clean up yourself.
docker-compose build
docker-compose run app /opt/app/initialize-only.sh
docker-compose down -v
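The answer leaves the contents of initialize-only.sh unspecified; based on the start-server.sh from the question, it might contain just the one-shot steps, so the command returns as soon as they finish and docker-compose down can run:
#!/usr/bin/env bash
set -e
cd py-partman
python manage.py createsuperuser --no-input
python manage.py makemigrations
python manage.py migrate
python manage.py initialize_entities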

Compose up: container exited with code 0 and its logs are empty

I need to containerize a Django web project with Docker. I divided the project into a dashboard, an api-server and a database. When I type docker-compose up, it prints api-server exited with code 0 and the api-server container shows Exited (0); when I type docker logs api-server, it returns nothing, but the other containers are normal. I don't know how to track down the problem.
The api-server directory structure is as follows:
api-server/
    server/
        ...
    Dockerfile
    requirements.txt
    start.sh
    ...
Some compose yml content is as follows
dashboard:
  image: nginx:latest
  container_name: nginx-dashboard
  volumes:
    - /nginx/nginx/default:/etc/nginx/conf.d/default.conf:ro
    - /nginx/dist:/var/www/html:ro
  ports:
    - "80:80"
  depends_on:
    - api-server
api-server:
  build: /api-server
  container_name: api-server
  volumes:
    - /api-server:/webapps
  ports:
    - "8000:8000"
  depends_on:
    - db
db:
  image: postgres
  container_name: Postgres
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_DB=postgres
  ports:
    - "5432:5432"
Some Dockerfile content of api-server is as follows
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /webapps
WORKDIR /webapps
RUN apt-get clean && apt-get update && apt-get upgrade -y && apt-get install -y python3-pip libpq-dev apt-utils
COPY ./requirements.txt /webapps/
RUN pip3 install -r /webapps/requirements.txt
COPY . /webapps/
CMD ["bash","-c","./start.sh"]
start.sh is as follows
#!/usr/bin/env bash
cd server/
python manage.py runserver 0.0.0.0:8000
Typing docker-compose up gives the following result:
root@VM:/home/test/Documents/ComposeTest# docker-compose up
Creating network "composetest_default" with the default driver
Creating Postgres ... done
Creating api-server ... done
Creating dashboard ... done
Attaching to Postgres, api-server, dashboard
Postgres | The files belonging to this database system will be owned by user "postgres".
Postgres | This user must also own the server process.
...
...
api-server exited with code 0
api-server exited with code 0
docker logs api-server is empty.
I would really appreciate it if you could tell me how to investigate this problem; it would be even better to provide a solution.
You are already copying api-server into the image at build time, which should work fine, but the volume in your Docker Compose file overrides it all, pip packages and code included, with the host directory:
volumes:
  - /api-server:/webapps
Remove the volume from your Docker compose and it should work.
Second, set execute permission on the bash script:
COPY . /webapps/
RUN chmod +x ./start.sh
Third, you do not need to run Python via bash; there is nothing in the bash script that CMD cannot do itself, so why not run it directly as the CMD?
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
