DB connection stopped working in docker-compose file - python

This docker-compose file was working fine six months ago. But recently I tried to use it to test my app and received this error:
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not translate host name "db-test" to address: Name or service not known
I read through some other Stack Overflow answers and tried adding 'restart: always' to the web service. I also tried adding a local network to the compose file, but nothing has worked. Any ideas what I am doing wrong? Here is my compose file:
version: '3'
services:
  # postgres database
  db-test:
    image: postgres:10.9
    environment:
      - POSTGRES_PASSWORD=example
    volumes:
      - pg-test-data:/var/lib/postgresql/data
  # main redis instance, used to store available years for each organization
  redis-test:
    image: redis:5.0.4
    volumes:
      - redis-test-data:/data
  # redis cache used for caching agency pages like /agencies/salaries/
  redis_cache-test:
    image: redis:5.0.4
  # search engine
  elasticsearch-test:
    image: elasticsearch:5.6.10
    volumes:
      - elasticsearch-test-data:/usr/share/elasticsearch/data
  # web app
  web-test:
    build: .
    environment:
      - DATABASE_URL=postgresql://postgres:example@db-test/postgres
      - ENVIRONMENT=development
      - REDIS_URL=redis://redis-test:6379
      - REDIS_CACHE_URL=redis://redis_cache-test:6379
      - ELASTIC_ENDPOINT=elasticsearch-test:9200
    env_file: docker.env
    depends_on:
      - db-test
      - redis-test
      - redis_cache-test
      - elasticsearch-test
    volumes:
      - .:/code
  # worker instance for processing large files in background
  worker-test:
    build: .
    command: python run-worker.py
    environment:
      - DATABASE_URL=postgresql://postgres:example@db-test/postgres
      - ENVIRONMENT=development
      - REDIS_URL=redis://redis-test:6379
      - REDIS_CACHE_URL=redis://redis_cache-test:6379
      - ELASTIC_ENDPOINT=elasticsearch-test:9200
    env_file: docker.env
    depends_on:
      - db-test
      - redis-test
    volumes:
      - .:/code
volumes:
  pg-test-data: {}
  redis-test-data: {}
  elasticsearch-test-data: {}
Here is my Dockerfile:
FROM python:2.7.17
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD . /code/
ADD requirements /requirements
RUN pip install -r /requirements/local.txt

I was able to fix this by adding:
links:
  - db-test:db-test
to the web-test service.
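For anyone debugging the same symptom: before reaching for links:, it can help to check whether the service name resolves from inside the web container at all, since "could not translate host name" is a DNS failure rather than a refused connection. A minimal sketch of such a check (a hypothetical helper script, not part of the original setup; kept Python 2.7 compatible to match the python:2.7.17 base image above):

# check_dns.py - hypothetical helper; run it inside the container, e.g.
#   docker-compose run --rm web-test python check_dns.py
import socket

for host in ("db-test", "redis-test", "redis_cache-test", "elasticsearch-test"):
    try:
        # getaddrinfo exercises the same resolver psycopg2 uses for the DSN host
        print("{0} -> {1}".format(host, socket.getaddrinfo(host, None)[0][4][0]))
    except socket.gaierror as exc:
        print("{0}: cannot resolve ({1})".format(host, exc))

If the names resolve here but the app still fails intermittently, the problem is startup ordering rather than DNS.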

Configure Docker-compose to connect to my local database [duplicate]

This question already has answers here:
From inside of a Docker container, how do I connect to the localhost of the machine?
I am setting up an application with the Flask framework, using MySQL as the database. The database lives locally on my machine. I managed to use the official MySQL image without problems, but I would rather use the local database that is on my computer.
Here is an extract of my setup, please help me.
Dockerfile
FROM python:3.9-slim
RUN apt-get -y update
RUN apt install python3-pip -y
WORKDIR /flask_docker_test
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 80
CMD gunicorn --bind 0.0.0.0:5000 app:app
Docker-compose file
version: "3"
services:
  app:
    build: .
    container_name: app
    links:
      - db
    ports:
      - "5000:5000"
    depends_on:
      - db
    networks:
      - myapp
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    container_name: mysql_db
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: flask_test_db
      MYSQL_USER: eric
      MYSQL_PASSWORD: 1234
    ports:
      - "3306:3306"
    networks:
      - myapp
  phpmyadmin:
    image: phpmyadmin
    restart: always
    ports:
      - 8080:80
    depends_on:
      - db
    environment:
      PMA_ARBITRARY: 1
      PMA_USER: serge
      PMA_HOST: db
      PMA_PASSWORD: 1234
    networks:
      - myapp
networks:
  myapp:
I would like to establish a connection with my local database rather than with the database provided by the MySQL container.
In order to connect to your local database, you should:
remove the db service from your docker-compose.yaml
remove the network myapp
use network_mode: host
But IMO you should keep your db in the docker-compose file, otherwise other developers won't be able to start the project on their machines.
EDIT: code snippet for network_mode
services:
  app:
    ...
    network_mode: host
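With network_mode: host the app container shares the host's network stack, so the Flask app reaches the local MySQL server exactly like any other process on the machine (note this mode only behaves that way on Linux hosts). A minimal connectivity sketch under that assumption, reusing the credentials from the compose file above (mysql-connector-python is an assumed driver; the question does not name one):

# Hypothetical check, assuming network_mode: host and MySQL running locally.
import mysql.connector

conn = mysql.connector.connect(
    host="127.0.0.1",    # host network: localhost is the machine itself
    port=3306,
    user="eric",         # credentials taken from the compose file above
    password="1234",
    database="flask_test_db",
)
print(conn.is_connected())
conn.close()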

Unknown mysql server host on docker and python

I'm building an API that fetches data from a MySQL database using Docker. I've tried everything and I always get this error: 2005 (HY000): Unknown MySQL server host 'db' (-3). Here is my docker compose file:
version: '3'
services:
  web:
    container_name: nginx
    image: nginx
    volumes:
      - ./nginx/nginx.conf:/tmp/nginx.conf
    environment:
      - FLASK_SERVER_ADDR=backend:9091
      - DB_PASSWORD=password
      - DB_USER=user
      - DB_HOST=db
    command: /bin/bash -c "envsubst < /tmp/nginx.conf > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
    ports:
      - 80:80
    networks:
      - local
    depends_on:
      - backend
  backend:
    container_name: app
    build: flask
    environment:
      - FLASK_SERVER_PORT=9091
      - DB_PASSWORD=password
    volumes:
      - flask:/tmp/app_data
    restart: unless-stopped
    networks:
      - local
    depends_on:
      - db
    links:
      - db
  db:
    container_name: db
    image: mysql
    restart: unless-stopped
    volumes:
      - ./mysql:/docker-entrypoint-initdb.d
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=database
      - MYSQL_USER=user
      - MYSQL_PASSWORD=password
    ports:
      - 3306:3306
networks:
  local:
volumes:
  flask:
    driver: local
  db:
    driver: local
Inside the flask directory I have my Dockerfile like so:
FROM ubuntu:latest
WORKDIR /src
RUN apt -y update
RUN apt -y upgrade
RUN apt install -y python3
RUN apt install -y python3-pip
COPY . .
RUN chmod +x -R .
RUN pip install -r requirements.txt --no-cache-dir
CMD ["python3","app.py"]
Finally, in my app.py file I try to connect to the database using the name of the Docker container. I have tried using localhost and it still gives me the same error. This is the part of the code I use to access it:
import mysql.connector

conn = mysql.connector.connect(
    host="db",
    port=3306,
    user="user",
    password="password",
    database="database")
What is it that I'm doing wrong?
The containers aren't on the same networks:, which could be why you're having trouble.
I'd recommend deleting all of the networks: blocks in the file, both the blocks at the top level and the blocks in the web and backend containers. Compose will create a network named default for you and attach all of the containers to that network. Networking in Compose in the Docker documentation has more details on this setup.
The links: block is related to an obsolete Docker networking mode, and I've seen it implicated in problems in other SO questions. You should remove it as well.
You also do not need to manually specify container_name: in most cases. For the Nginx container, the Docker Hub nginx image already knows how to do the envsubst processing, so you do not need to override its command:.
This should leave you with:
version: '3.8'
services:
  web:
    image: nginx
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/templates/default.conf.template
    environment: { ... }
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    build: flask
    environment: { ... }
    volumes:
      - flask:/tmp/app_data
    restart: unless-stopped
    depends_on:
      - db
  db:
    image: mysql
    restart: unless-stopped
    volumes:
      - ./mysql:/docker-entrypoint-initdb.d
      - db:/var/lib/mysql
    environment: { ... }
    ports:
      - 3306:3306
volumes:
  flask:
  db:
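One related caveat: depends_on only controls start order; it does not wait until MySQL is actually ready to accept connections, so the first connect from app.py can still fail right after startup even once the networking is fixed. A minimal retry sketch around the question's connection code (the attempt count and delay are arbitrary):

import time

import mysql.connector
from mysql.connector import Error


def connect_with_retry(attempts=30, delay=1.0):
    # Retry because depends_on only orders container startup; MySQL may
    # not yet accept connections when app.py first runs.
    last_error = None
    for _ in range(attempts):
        try:
            return mysql.connector.connect(
                host="db",
                port=3306,
                user="user",
                password="password",
                database="database")
        except Error as exc:
            last_error = exc
            time.sleep(delay)
    raise RuntimeError("MySQL at db:3306 never became reachable: %s" % last_error)


conn = connect_with_retry()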

Connecting psycopg2 to dockerized PostgreSQL

I am trying to connect to a PostgreSQL database initialized within a Dockerized Django project. I am currently using the Python package psycopg2 inside a Jupyter notebook to connect and add/manipulate data inside the db.
With the code:
import psycopg2

connector = psycopg2.connect(
    database="postgres",
    user="postgres",
    password="postgres",
    host="postgres",
    port="5432")
It raises the following error:
OperationalError: could not translate host name "postgres" to address:
Unknown host
Meanwhile, it connects correctly to the local db named postgres with host as localhost or 127.0.0.1, but that is not the db I want to access. How can I connect from Python to the db? Should I change something in the project setup?
You can find the GitHub repository here. Many thanks!
docker-compose.yml:
version: '3.8'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - postgres:postgres
      - redis:redis
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "80:80"
    volumes:
      - web-static:/www/static
    links:
      - web:web
  postgres:
    restart: always
    image: postgres:latest
    hostname: postgres
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
  pgadmin:
    image: dpage/pgadmin4
    depends_on:
      - postgres
    ports:
      - "5050:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin4@pgadmin.org
      PGADMIN_DEFAULT_PASSWORD: admin
    restart: unless-stopped
  redis:
    restart: always
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data
volumes:
  web-django:
  web-static:
  pgdata:
  redisdata:
Dockerfile:
FROM python:3.7-slim
RUN python -m pip install --upgrade pip
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . .
Edit
To verify that localhost is not the correct hostname, I tried to visualize the tables inside pgAdmin (which connects to the correct host) and with psycopg2:
The (correct) tables shown by pgAdmin: [screenshot]
The (incorrect) tables shown by psycopg2: [screenshot]
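The service name postgres only resolves inside the Compose network; a Jupyter kernel running on the host sits outside that network, which is why psycopg2 cannot translate the hostname. From the host, the dockerized database has to be reached through the published port instead. A minimal sketch of that (assuming host port 5432 really belongs to the container; if a host-installed Postgres is already listening on 5432, the "5432:5432" mapping cannot bind and localhost will keep reaching the wrong server, in which case remapping to something like "5433:5432" would disambiguate):

# Hypothetical: connect from the host (e.g. the Jupyter kernel) through the
# published port instead of the Compose-internal hostname "postgres".
import psycopg2

connector = psycopg2.connect(
    database="postgres",
    user="postgres",
    password="postgres",
    host="127.0.0.1",   # published by "5432:5432" in docker-compose.yml
    port="5432")
print(connector.get_dsn_parameters())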

Django on Docker is starting up but browser gives empty response

For a simple app with Django, Python 3, and Docker on macOS:
Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED=1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN python3 -m pip install -r requirements.txt
CMD python3 manage.py runserver
COPY . /code/
docker-compose.yml
version: "3.9"
services:
  # DB
  db:
    image: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: '****'
      MYSQL_USER: '****'
      MYSQL_PASSWORD: '****'
      MYSQL_DATABASE: 'mydb'
    ports:
      - "3307:3306"
    expose:
      # Opens port 3306 on the container
      - '3307'
    volumes:
      - $HOME/proj/sql/mydbdata.sql:/mydbdata.sql
  # Web app
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Also, what I want is to execute the SQL the first time the container is created; after that, the database should be mounted.
volumes:
  - $HOME/proj/sql/mydbdata.sql:/mydbdata.sql
Docker looks like it is starting, but from my browser I get this response:
localhost didn't send any data.
ERR_EMPTY_RESPONSE
What is it that I am missing? Please help.
Looks like your Django project is already running when you create the image. Since you use the command option in your docker-compose.yml file, you don't need the CMD instruction in the Dockerfile in this case.
I would rewrite Dockerfile and docker-compose.yml as follows:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN python3 -m pip install -r requirements.txt
COPY . /code/
version: "3.9"
services:
  db:
    image: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: '****'
      MYSQL_USER: '****'
      MYSQL_PASSWORD: '****'
      MYSQL_DATABASE: 'mydb'
    ports:
      - "3307:3306" # make sure django project connects to 3306 port
    volumes:
      - $HOME/proj/sql:/docker-entrypoint-initdb.d
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
A few things to point out.
When you run docker-compose up, you will probably see an error, because your Django project will already be running even before the db is initialised.
That's natural. So you need a customized command or shell script to force the Django project to wait before trying to connect to the db.
In my case I would use a custom command.
version: "3.9"
services:
  db:
    image: mysql:8
    env_file:
      - .env
    command:
      - --default-authentication-plugin=mysql_native_password
    restart: always
    ports:
      - "3308:3306"
  web:
    build: .
    command: >
      sh -c "python manage.py wait_for_db &&
             python manage.py makemigrations &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8001:8000"
    depends_on:
      - db
    env_file:
      - .env
Next, wait_for_db.py. This file is what I created at myapp/management/commands/wait_for_db.py. With this you postpone the db connection until the db is ready. This SO post has helped me a lot.
See Writing custom django-admin commands for details.
import time

from django.db import connection
from django.db.utils import OperationalError
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    """Wait to connect to db until db is initialised"""

    def handle(self, *args, **options):
        start = time.time()
        self.stdout.write('Waiting for database...')
        while True:
            try:
                connection.ensure_connection()
                break
            except OperationalError:
                time.sleep(1)
        end = time.time()
        self.stdout.write(self.style.SUCCESS(
            f'Database available! Time taken: {end - start:.4f} second(s)'))
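With this file in place, the sh -c command in the web service above runs python manage.py wait_for_db first, so makemigrations, migrate, and runserver only execute once the database actually accepts connections.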
Looks like you want to populate your database with an sql file when your db container starts running. The MySQL page on Docker Hub says:
Initializing a fresh instance
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. You can easily populate your mysql services by mounting a SQL dump into that directory and provide custom images with contributed data. SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable.
So your .sql file should be located in /docker-entrypoint-initdb.d in your mysql container. See this post for more.
Last but not least, your db is lost when you run docker-compose down, since you don't have volumes other than the sql file. If that's not what you want, you might want to consider the following:
version: "3.9"
services:
  db:
    ...
    volumes:
      - data:/var/lib/mysql
    ...
volumes:
  data:

Committing new changes to a Gunicorn + Nginx + Django dockerized application on a server

Docker novice here.
I have committed new changes inside the application. These changes were copied from my local machine to the host machine, and then into the docker container.
So I created a new image: sudo docker commit old_container_id new_image_name (djangotango-on-docker_web)
Then I spun up a docker container using the newly created image:
sudo docker run --name djangotango-web -d --expose 8000 djangotango-on-docker_web gunicorn djangotango.wsgi:application --bind 0.0.0.0:8000
Here djangotango-on-docker_web is my newly created image.
But my application gives a 502 error after this. My new container is not synced properly.
docker-compose.yml
version: '3.8'

# networks:
#   public_network:
#     name: public_network
#     driver: bridge

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    # image: <aws-account-id>.dkr.ecr.<aws-region>.amazonaws.com/django-ec2:web
    command: gunicorn djangotango.wsgi:application --bind 0.0.0.0:8000
    volumes:
      # - .:/home/app/web/
      - static_volume:/home/app/web/static
      - media_volume:/home/app/web/media
    expose:
      - 8000
    env_file:
      - ./.env.staging
    networks:
      service_network:
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.staging.db
    networks:
      service_network:
    # depends_on:
    #   - web
  # pgadmin:
  #   image: dpage/pgadmin4
  #   env_file:
  #     - ./.env.staging.db
  #   ports:
  #     - "8080:80"
  #   volumes:
  #     - pgadmin-data:/var/lib/pgadmin
  #   depends_on:
  #     - db
  #   links:
  #     - "db:pgsql-server"
  #   environment:
  #     - PGADMIN_DEFAULT_EMAIL=4652173624824872
  #     - PGADMIN_DEFAULT_PASSWORD=exampleeee
  #     - PGADMIN_LISTEN_PORT=80
  #   networks:
  #     service_network:
  nginx-proxy:
    build: nginx
    # image: <aws-account-id>.dkr.ecr.<aws-region>.amazonaws.com/django-ec2:nginx-proxy
    restart: always
    ports:
      - 443:443
      - 80:80
    networks:
      service_network:
    volumes:
      - static_volume:/home/app/web/static
      - media_volume:/home/app/web/media
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"
    depends_on:
      - web
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - .env.staging.proxy-companion
    networks:
      service_network:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
    depends_on:
      - nginx-proxy

networks:
  service_network:

volumes:
  postgres_data:
  pgadmin-data:
  static_volume:
  media_volume:
  certs:
  html:
  vhost:
How do I do this the correct way? I'm running my production application on my domain name.
What I can understand from the logs is that my web container is not on the same network as the other containers now.
I don't want to rebuild my docker-compose stack, which would solve the problem but would increase the image size; plus it's not recommended, I guess.
The correct approach here is to use only docker-compose commands, and to go ahead and rebuild your image:
docker-compose up --build --force-recreate web
Many of the options you'd need to recreate this with a plain docker run command are listed in the docker-compose.yml file, but some are generated implicitly. The docker run command you show doesn't have a --net option to attach to the Compose network (which could result in the error you're getting), and it doesn't have the -v options to overwrite the image's static files with content from a volume, or the settings from the .env.staging file.
You should almost never use docker commit either. What's the code change you made in your image, and how would your colleagues get and test that change? Especially with the mentions of "prod" here, running code in production that you haven't built from source and tested through your usual CI process is usually discouraged.
(In terms of image size, a committed image will always be larger than the original image; an image rebuilt with docker build will start from the base image and generally be smaller. Committing images also tends to lose options like the default command to run.)
