I used Docker and Django for this project with a GitLab CI/CD pipeline, and the tests won't even start; the job exits with the error below.
The tests were running fine until I added some tests to the Django app; after that, the pipeline failed:
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
Here is my Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
My docker-compose.yml:
version: "3.9"
services:
db:
image: postgres
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
web:
build: .
command: bash -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
And my .gitlab-ci.yml:
image: python:latest
services:
  - mysql:latest
  - postgres:latest
variables:
  POSTGRES_DB: postgres
cache:
  paths:
    - ~/.cache/pip/
test:
  variables:
    DATABASE_URL: "postgresql://postgres:postgres@postgres:5432/$POSTGRES_DB"
  script:
    - pip install -r requirements.txt
    - python manage.py test
build:
  image: docker:19.03.12
  stage: build
  services:
    - docker:19.03.12-dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
Create a network for the containers in your docker-compose file and share it between your app and db, something like this:
db:
  networks:
    - network_name
  # your other db setup follows
web:
  networks:
    - network_name
  # your other web setup follows
networks:
  network_name:
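For the CI failure specifically: inside a GitLab CI job, the services from .gitlab-ci.yml are reachable under hostnames derived from their image names (postgres here), while db only exists as a hostname on the docker-compose network. Since the traceback mentions db, the settings presumably hardcode the compose service name. One way to make the same settings work in both environments is to read the host from an environment variable; a minimal sketch, assuming a standard settings.py (the variable names are illustrative and must match what your compose file and CI variables actually set):

# settings.py -- sketch only; the defaults fit the docker-compose setup,
# and the CI job would set POSTGRES_HOST=postgres instead.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "postgres"),
        "USER": os.environ.get("POSTGRES_USER", "postgres"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", "postgres"),
        "HOST": os.environ.get("POSTGRES_HOST", "db"),  # "postgres" in CI
        "PORT": int(os.environ.get("POSTGRES_PORT", "5432")),
    }
}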
I am working on a CLI app in Python for AWS SQS (running on LocalStack) in Docker. Here's my docker-compose.yml:
version: "3.8"
networks:
localstack-net:
name: localstack-net
driver: bridge
services:
localstack:
image: localstack/localstack
privileged: true
networks:
- localstack-net
ports:
- "4576:4576"
environment:
- DEBUG=1
- EDGE_PORT=4576
- DATA_DIR=/tmp/localstack/data
- SERVICES=sqs:4567
volumes:
- ./.temp/localstack:/tmp/localstack
- ./localstack_setup:/docker-entrypoint-initaws.d/
cli_app:
build:
dockerfile: Dockerfile
container_name: my_app
And here's my Dockerfile:
FROM python:3.8-slim
RUN useradd --create-home --shell /bin/bash app_user
WORKDIR /home/app_user
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
USER app_user
COPY . .
CMD ["bash"]
The problem is that the cli_app service exits as soon as I run docker-compose up.
What can I do to rectify this?
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "db" (172.28.0.2) and accepting TCP/IP connections on port 5432?
docker-compose.yml:
version: '3.9'
services:
  backend:
    build: ./backend
    command: sh -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - ./backend:/app/backend
    ports:
      - "8000:8000"
    env_file:
      - backend/.env.dev
    depends_on:
      - db
  db:
    image: postgres:14-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - "5432:5432"
    env_file:
      - backend/.env.dev
volumes:
  postgres_data:
Dockerfile:
FROM python:3.9.10-alpine
ENV PYTHONUNBUFFERED 1
WORKDIR /app/backend
COPY requirements.txt /app/backend
RUN pip install --upgrade pip
RUN apk add --update --no-cache postgresql-client
RUN apk add --update --no-cache --virtual .tmp-build-deps \
    gcc libc-dev linux-headers postgresql-dev
RUN pip install -r requirements.txt
RUN apk del .tmp-build-deps
EXPOSE 8000
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
Database settings:
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": os.environ.get("POSTGRES_DB"),
        "USER": os.environ.get("POSTGRES_USER"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD"),
        "HOST": os.environ.get("POSTGRES_HOST"),
        "PORT": 5432,
    }
}
.env:
POSTGRES_USER=user
POSTGRES_PASSWORD=password
POSTGRES_DB=my_db
POSTGRES_HOST=db
Use docker-compose networks
docker-compose.yml:
version: "3.9"
services:
  backend:
    build: ./backend
    command: sh -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - ./backend:/app/backend
    networks:
      - backend
    ports:
      - "8000:8000"
    env_file:
      - backend/.env.dev
    depends_on:
      - db
  db:
    image: postgres:14-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - "5432:5432"
    networks:
      - backend
    env_file:
      - backend/.env.dev
volumes:
  postgres_data:
networks:
  backend:
    driver: bridge
I added a network; see this link.
I think this should work.
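For what it's worth, Compose already puts all services of one file on a shared default network, so db should normally resolve even without an explicit networks: block; making it explicit mainly documents the intent. Either way, you can check name resolution from inside the backend container; a small sketch (the file name resolve_db.py is just illustrative):

# resolve_db.py -- run it inside the backend container, e.g.
#   docker-compose exec backend python resolve_db.py
import socket

# prints the db container's address (such as 172.28.0.2) if DNS works
print(socket.gethostbyname("db"))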
I also faced this issue. I was using Docker 20.10.12 and then upgraded docker-compose to v2.4.1. After restarting my system, the build worked fine.
Are you running Django on your local machine or inside Docker?
If you're trying to run it on your local machine, just use localhost instead of the container name db for your POSTGRES_HOST env variable.
But I'm not sure about the exact cause. Can you share the logs via docker logs backend and docker logs db?
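One more common cause of this exact error: depends_on only waits for the db container to be started, not for Postgres inside it to accept connections, so the very first migrate can hit a refused connection while the database is still initializing. A minimal retry sketch you could run before migrating; wait_for_db.py is an illustrative name, and it assumes psycopg2 is installed and the POSTGRES_* variables from .env.dev are set:

# wait_for_db.py -- sketch of a readiness probe; run it in the backend
# service before "python manage.py migrate".
import os
import time

import psycopg2

for attempt in range(30):
    try:
        psycopg2.connect(
            host=os.environ.get("POSTGRES_HOST", "db"),
            port=int(os.environ.get("POSTGRES_PORT", "5432")),
            dbname=os.environ["POSTGRES_DB"],
            user=os.environ["POSTGRES_USER"],
            password=os.environ["POSTGRES_PASSWORD"],
        ).close()
        print("database is ready")
        break
    except psycopg2.OperationalError:
        print(f"database not ready (attempt {attempt + 1}), retrying...")
        time.sleep(1)
else:
    raise SystemExit("gave up waiting for the database")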
I have been trying to set up a GitLab CI/CD config for a Django project which will be deployed as a container.
This is what I have tried:
CI/CD:
image: creatiwww/docker-compose:latest
services:
  - docker:dind
stages:
  - lint
  - build
  - deploy
variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA
lint:
  stage: lint
  image: python:3.8
  before_script:
    - pip install pipenv
    - pipenv install --dev
  script:
    - pipenv run python -m flake8 --exclude=migrations,settings.py backend
  allow_failure: false
build:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY_IMAGE
    - echo "IMAGE_APP_TAG=$TAG_LATEST" >> .env
    - docker-compose build
    - docker-compose push
  only:
    - master
deploy-to-prod:
  stage: deploy
  script:
    - eval $(ssh-agent -s)
    - echo "${ID_RSA}" | tr -d '\r' | ssh-add - > /dev/null
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY_IMAGE
    - echo "IMAGE_APP_TAG=$TAG_LATEST" >> .env
    - echo "SECRET_KEY=$SECRET_KEY" >> .env
    - docker-compose -H "ssh://$SERVER_USER@$SERVER_IP" down --remove-orphans
    - docker-compose -H "ssh://$SERVER_USER@$SERVER_IP" pull
    - docker-compose -H "ssh://$SERVER_USER@$SERVER_IP" up -d
  only:
    - master
  when: manual
The pipeline succeeds, but when I check the container's log I get the following output:
python: can't open file 'manage.py': [Errno 2] No such file or directory
Also, the image field in docker ps is empty.
Please help.
Put this code in your docker-compose.yml:
version: '3.7'
services:
  backend:
    build: ./project_name
    command: sh -c "cd project && python3 manage.py migrate && python3 manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    depends_on:
      - db
    network_mode: host
  db:
    image: postgres:12.0-alpine
    network_mode: host
    ports:
      - 5432:5432
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=db_user
      - POSTGRES_PASSWORD=db_password
      - POSTGRES_DB=db_name
volumes:
  postgres_data:
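One caveat with this answer: network_mode: host (which only behaves as expected on Linux) makes the containers share the host's network stack, so the ports: mappings are ignored and service names like db no longer resolve; Django then has to reach Postgres on localhost. A sketch of settings matching that layout (the credential values simply mirror the illustrative ones above):

# settings.py -- sketch for the host-networking variant above
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "db_name"),
        "USER": os.environ.get("POSTGRES_USER", "db_user"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", "db_password"),
        "HOST": "localhost",  # host networking: no "db" service hostname
        "PORT": 5432,
    }
}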
I'm making a Python (Django) and MySQL project.
After creating some files, I get an error when I run this command:
docker-compose up -d
ERROR: Cannot locate specified Dockerfile: Dockerfile
Why am I getting an error when I have a Dockerfile in the current directory?
MYAPP
-django
--__init__.py
--asgi.py
--settings.py
--urls.py
--wsgi.py
-docker-compose.yml
-Dockerfile
-manage.py
-requirement.txt
-wait-for-in.sh
docker-compose.yml
version: '3'
services:
  db:
    image: mysql:5.7
    command: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    volumes:
      - .:/var/www/django
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: django
      MYSQL_DATABASE: django
      MYSQL_USER: django
      MYSQL_PASSWORD: django
  web:
    build: django
    command: sh -c "./wait-for-it.sh db:3306; python3 manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/var/www/django
    ports:
      - "8000:8000"
    depends_on:
      - db
Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /var/www/django
WORKDIR /var/www/django
ADD requirements.txt /var/www/django/
RUN pip install -r requirements.txt
ADD . /var/www/django/
web:
  build: django

is a shortcut for

web:
  build:
    context: django

This means the Docker image is built with the django directory as its build context, so the Dockerfile must be placed there. The same goes for manage.py, requirement.txt and wait-for-in.sh, since everything the Dockerfile copies has to live inside that context.
You could try the following:

web:
  build:
    context: ./django/
    dockerfile: ./Dockerfile
You could try this:

web:
  build:
    context: ./django/
    dockerfile: ./Dockerfile
  command: >
    sh -c "./wait-for-it.sh db:3306; python3 manage.py runserver 0.0.0.0:8000"
  depends_on:
    - db
  ports:
    - "8000:8000"
I am getting this error when trying to run migrations in my container. I cannot seem to figure out why.
Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"alembic\": executable file not found in $PATH": unknown
Dockerfile:
FROM python:3.8.2
WORKDIR /workspace/
COPY . .
RUN pip install pipenv
RUN pipenv install --deploy --ignore-pipfile
#EXPOSE 8000
#CMD ["pipenv", "run", "python", "/workspace/bin/web.py"]
docker-compose.yml:
version: '3'
services:
  db:
    image: postgres:12
    ports:
      - "5432:5432"
    env_file:
      - .env.database.local
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=pgadmin4@pgadmin.org
      - PGADMIN_DEFAULT_PASSWORD=admin
    ports:
      - "5050:80"
    depends_on:
      - db
  redis:
    image: "redis:alpine"
  web:
    build: .
    environment:
      - PYTHONPATH=/workspace
    env_file:
      - .env.local
    ports:
      - "8000:8000"
    volumes:
      - .:/workspace
    depends_on:
      - db
      - redis
    command: "alembic upgrade head && pipenv run python /workspace/bin/web.py"
The command I run when I encounter this problem:
docker-compose run web alembic revision --autogenerate -m "First migration"
In my Dockerfile I defined WORKDIR /workspace/, so my whole program should run from that directory.
Yes, the issue was that I did not add it to my $PATH.
This is what I added inside my docker-compose environment:
- PATH=/directory/bin:$PATH
docker-compose run web pipenv run alembic revision --autogenerate -m "First migration"
or
change this in the Dockerfile:
RUN pipenv install --deploy --ignore-pipfile --system
and run
docker-compose run web alembic revision --autogenerate -m "First migration"
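The underlying mechanics, for anyone hitting this: pipenv install (without --system) puts console scripts like alembic into a virtualenv's bin directory, which is not on the container's default PATH; pipenv run prepends that directory. You can see this from inside the container with a short check (check_path.py is an illustrative name):

# check_path.py -- shows where (or whether) "alembic" is resolvable
import shutil

print(shutil.which("alembic"))
# run plainly: prints None, because the virtualenv bin dir is not on PATH
# run via "pipenv run python check_path.py": prints a path like
# /root/.local/share/virtualenvs/<project>/bin/alembic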