Unknown MySQL server host on Docker and Python

I'm building an API that fetches data from a MySQL database using Docker. I've tried everything and I always get this error: 2005 (HY000): Unknown MySQL server host 'db' (-3). Here is my docker compose file:
version: '3'
services:
  web:
    container_name: nginx
    image: nginx
    volumes:
      - ./nginx/nginx.conf:/tmp/nginx.conf
    environment:
      - FLASK_SERVER_ADDR=backend:9091
      - DB_PASSWORD=password
      - DB_USER=user
      - DB_HOST=db
    command: /bin/bash -c "envsubst < /tmp/nginx.conf > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
    ports:
      - 80:80
    networks:
      - local
    depends_on:
      - backend
  backend:
    container_name: app
    build: flask
    environment:
      - FLASK_SERVER_PORT=9091
      - DB_PASSWORD=password
    volumes:
      - flask:/tmp/app_data
    restart: unless-stopped
    networks:
      - local
    depends_on:
      - db
    links:
      - db
  db:
    container_name: db
    image: mysql
    restart: unless-stopped
    volumes:
      - ./mysql:/docker-entrypoint-initdb.d
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=database
      - MYSQL_USER=user
      - MYSQL_PASSWORD=password
    ports:
      - 3306:3306
networks:
  local:
volumes:
  flask:
    driver: local
  db:
    driver: local
Inside the flask directory I have my Dockerfile like so:
FROM ubuntu:latest
WORKDIR /src
RUN apt -y update
RUN apt -y upgrade
RUN apt install -y python3
RUN apt install -y python3-pip
COPY . .
RUN chmod +x -R .
RUN pip install -r requirements.txt --no-cache-dir
CMD ["python3","app.py"]
Finally, in my app.py file I try to connect to the database using the name of the Docker container. I have tried using localhost and it still gives me the same error. This is the part of the code I use to access it:
conn = mysql.connector.connect(
    host="db",
    port=3306,
    user="user",
    password="password",
    database="database")
What is it that I'm doing wrong?

The containers aren't all on the same networks:; web and backend are attached to local, but the db service isn't attached to any network at all, which is most likely why you're having trouble.
I'd recommend deleting all of the networks: blocks in the file, both the blocks at the top level and the blocks in the web and backend containers. Compose will create a network named default for you and attach all of the containers to that network. Networking in Compose in the Docker documentation has more details on this setup.
The links: block is related to an obsolete Docker networking mode, and I've seen it implicated in problems in other SO questions. You should remove it as well.
You also do not need to manually specify container_name: in most cases. For the Nginx container, the Docker Hub nginx image already knows how to do the envsubst processing, so you do not need to override its command:.
This should leave you with:
version: '3.8'
services:
  web:
    image: nginx
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/templates/default.conf.template
    environment: { ... }
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    build: flask
    environment: { ... }
    volumes:
      - flask:/tmp/app_data
    restart: unless-stopped
    depends_on:
      - db
  db:
    image: mysql
    restart: unless-stopped
    volumes:
      - ./mysql:/docker-entrypoint-initdb.d
      - db:/var/lib/mysql
    environment: { ... }
    ports:
      - 3306:3306
volumes:
  flask:
  db:
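One further caveat: depends_on only controls start order; it does not wait for MySQL inside the db container to be ready to accept connections, so the backend's first connection attempt can still fail. A minimal retry sketch, assuming the same credentials as the compose file above and the mysql-connector-python package:

import time

import mysql.connector
from mysql.connector import Error

def connect_with_retry(retries=10, delay=3):
    # Retry because the MySQL container may still be initialising when
    # the backend starts; depends_on does not wait for readiness.
    for attempt in range(1, retries + 1):
        try:
            return mysql.connector.connect(
                host="db",  # the Compose service name
                port=3306,
                user="user",
                password="password",
                database="database")
        except Error as exc:
            print(f"attempt {attempt}/{retries} failed: {exc}")
            time.sleep(delay)
    raise RuntimeError("could not connect to MySQL")

conn = connect_with_retry()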

Related

Configure Docker-compose to connect to my local database [duplicate]

This question already has answers here:
From inside of a Docker container, how do I connect to the localhost of the machine?
(40 answers)
Closed 2 months ago.
I am setting up an application with the Flask framework using MySQL as the database. This database is located locally on my machine. I managed to use the official MySQL image without problems, but I would rather use the local database that is on my computer.
Here is my code excerpt; please help me.
Dockerfile
FROM python:3.9-slim
RUN apt-get -y update
RUN apt install python3-pip -y
WORKDIR /flask_docker_test
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 80
CMD gunicorn --bind 0.0.0.0:5000 app:app
Docker-compose file
version: "3"
services:
app:
build: .
container_name: app
links:
- db
ports:
- "5000:5000"
depends_on:
- db
networks:
- myapp
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
container_name: mysql_db
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: flask_test_db
MYSQL_USER: eric
MYSQL_PASSWORD: 1234
ports:
- "3306:3306"
networks:
- myapp
phpmyadmin:
image: phpmyadmin
restart: always
ports:
- 8080:80
depends_on:
- db
environment:
PMA_ARBITRARY: 1
PMA_USER: serge
PMA_HOST: db
PMA_PASSWORD: 1234
networks:
- myapp
networks:
myapp:
I would like to establish a connection with my local database rather than with the database provided by the MySQL container
In order to connect to your local database, you should:
remove the db service from your docker-compose.yaml
remove the myapp network
use network_mode: host
But IMO you should keep your db in the docker-compose file; otherwise other developers won't be able to start the project on their machine.
EDIT: code snippet for network_mode:
services:
  app:
    ...
    network_mode: host
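Note that network_mode: host is only fully supported on Linux hosts. With that mode the container shares the host's network stack, so the machine's local MySQL server is reachable at localhost:3306. A minimal, hedged connection sketch; the credentials are placeholders borrowed from the compose file above, so substitute those of your actual local database:

import mysql.connector  # pip install mysql-connector-python

# With network_mode: host there is no Compose DNS name to use;
# the host machine's MySQL listens on localhost as usual.
conn = mysql.connector.connect(
    host="localhost",
    port=3306,
    user="eric",               # placeholder: your local MySQL user
    password="1234",           # placeholder: your local MySQL password
    database="flask_test_db")  # placeholder: your local database name
print(conn.is_connected())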

Problem with converting docker command to compose file

I am trying to run my Flask app in two ways: with docker run commands and with a compose file. When I use the following commands, everything works fine:
docker container run --name flask-database -d --network flask_network \
  -e POSTGRES_USER=admin -e POSTGRES_PASSWORD=admin -e POSTGRES_DB=flask_db \
  -v postgres_data:/var/lib/postgresql/data -p 5432:5432 postgres:13

docker container run -p 5000:5000 --network flask_network flask_app
But when I try to use my compose file (with docker compose up), I see this error:
File "/app/main_python_files/routes.py", line 11, in home
web | cur.execute('SELECT * FROM books;')
web | psycopg2.errors.UndefinedTable: relation "books" does not exist
web | LINE 1: SELECT * FROM books;
What do I have to change in my compose file? I will be very grateful for a response! Here is my compose file:
version: '3.7'
services:
  flask-database:
    container_name: flask-database
    image: postgres:13
    restart: always
    ports:
      - 5432:5432
    networks:
      - flask_network
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=admin
      - POSTGRES_DB=flask_db
    volumes:
      - postgres_data:/var/lib/postgresql/data
  web:
    container_name: web
    #build: web
    image: flask_app
    restart: always
    ports:
      - 5000:5000
    networks:
      - flask_network
    depends_on:
      - flask-database
    links:
      - flask-database
networks:
  flask_network: {}
volumes:
  postgres_data: {}

Connecting psycopg2 to dockerized postgreSQL

I am trying to connect to a PostgreSQL database initialized within a Dockerized Django project. I am currently using the Python package psycopg2 inside a Jupyter notebook to connect and add/manipulate data inside the db.
With the code:
connector = psycopg2.connect(
    database="postgres",
    user="postgres",
    password="postgres",
    host="postgres",
    port="5432")
It raises the following error:
OperationalError: could not translate host name "postgres" to address:
Unknown host
Meanwhile, it connects correctly to a local db named postgres when the host is localhost or 127.0.0.1, but that is not the db I want to access. How can I connect to the db from Python? Should I change something in the project setup?
You can find the GitHub repository here. Many thanks!
docker-compose.yml:
version: '3.8'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - postgres:postgres
      - redis:redis
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "80:80"
    volumes:
      - web-static:/www/static
    links:
      - web:web
  postgres:
    restart: always
    image: postgres:latest
    hostname: postgres
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
  pgadmin:
    image: dpage/pgadmin4
    depends_on:
      - postgres
    ports:
      - "5050:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin4@pgadmin.org
      PGADMIN_DEFAULT_PASSWORD: admin
    restart: unless-stopped
  redis:
    restart: always
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data
volumes:
  web-django:
  web-static:
  pgdata:
  redisdata:
Dockerfile:
FROM python:3.7-slim
RUN python -m pip install --upgrade pip
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . .
Edit
To verify that localhost is not the correct hostname, I visualized the tables inside pgAdmin (which connects to the correct host) and through psycopg2:
[screenshot: the (correct) tables in pgAdmin]
[screenshot: the (incorrect) tables via psycopg2]

Django on Docker is starting up but browser gives empty response

For a simple app with Django, Python 3, and Docker on Mac:
Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED=1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN python3 -m pip install -r requirements.txt
CMD python3 manage.py runserver
COPY . /code/
docker-compose.yml
version: "3.9"
services:
# DB
db:
image: mysql
restart: always
environment:
MYSQL_ROOT_PASSWORD: '****'
MYSQL_USER: '****'
MYSQL_PASSWORD: '****'
MYSQL_DATABASE: 'mydb'
ports:
- "3307:3306"
expose:
# Opens port 3306 on the container
- '3307'
volumes:
- $HOME/proj/sql/mydbdata.sql:/mydbdata.sql
# Web app
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
Also, what I wanted is to execute the SQL the first time the container is created; after that, the database should be mounted.
volumes:
  - $HOME/proj/sql/mydbdata.sql:/mydbdata.sql
Docker looks like it is starting, but from my browser I get this response:
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
What is it that I am missing? Please help.
It looks like your Django project is already being started when you create the image. Since you use the command: option in the docker-compose.yml file, you don't need the CMD instruction in the Dockerfile in this case.
I would rewrite Dockerfile and docker-compose.yml as follows:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN python3 -m pip install -r requirements.txt
COPY . /code/
version: "3.9"
services:
db:
image: mysql
restart: always
environment:
MYSQL_ROOT_PASSWORD: '****'
MYSQL_USER: '****'
MYSQL_PASSWORD: '****'
MYSQL_DATABASE: 'mydb'
ports:
- "3307:3306" # make sure django project connects to 3306 port
volumes:
- $HOME/proj/sql:/docker-entrypoint-initdb.d
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
A few things to point out.
When you run docker-compose up, you will probably see an error, because your Django project will already be running even before the db is initialised.
That's natural, so you need a customized command or shell script to force the Django project to wait before trying to connect to the db.
In my case I would use a custom command.
version: "3.9"
services:
db:
image: mysql:8
env_file:
- .env
command:
- --default-authentication-plugin=mysql_native_password
restart: always
ports:
- "3308:3306"
web:
build: .
command: >
sh -c "python manage.py wait_for_db &&
python manage.py makemigrations &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/code
ports:
- "8001:8000"
depends_on:
- db
env_file:
- .env
Next, wait_for_db.py. I created this file at myapp/management/commands/wait_for_db.py. With it, you postpone the db connection until the db is ready. This SO post helped me a lot.
See Writing custom django-admin commands for details.
import time

from django.db import connection
from django.db.utils import OperationalError
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    """Wait to connect to db until db is initialised"""

    def handle(self, *args, **options):
        start = time.time()
        self.stdout.write('Waiting for database...')
        while True:
            try:
                connection.ensure_connection()
                break
            except OperationalError:
                time.sleep(1)
        end = time.time()
        self.stdout.write(self.style.SUCCESS(
            f'Database available! Time taken: {end-start:.4f} second(s)'))
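Once that file is in place, Django registers wait_for_db as a management command, which is what lets the compose file above run python manage.py wait_for_db. As a small hedged aside, the same command can also be invoked from Python code, for example in tests, via Django's call_command helper:

from django.core.management import call_command

# Blocks until the database accepts connections, then returns.
call_command('wait_for_db')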
It looks like you want to populate your database from a SQL file when your db container starts running. The MySQL Docker Hub page says:
Initializing a fresh instance
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. You can easily populate your mysql services by mounting a SQL dump into that directory and provide custom images with contributed data. SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable.
So your .sql file should be located in /docker-entrypoint-initdb.d in your mysql container. See this post for more.
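To confirm the dump was actually imported on first initialisation, here is a quick, hedged check you can run from the host. The '****' credentials are masked in the question, so substitute your real MYSQL_USER and MYSQL_PASSWORD; 3307 is the host port published by the db service above:

import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost",
    port=3307,        # host port published by the db service
    user="****",      # placeholder: your MYSQL_USER value
    password="****",  # placeholder: your MYSQL_PASSWORD value
    database="mydb")
cur = conn.cursor()
cur.execute("SHOW TABLES")  # tables created by mydbdata.sql should be listed
print(cur.fetchall())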
Last but not least, your db is lost when you run docker-compose down, since you don't have any volumes other than the SQL file. If that's not what you want, you might want to consider the following:
version: "3.9"
services:
db:
...
volumes:
- data:/var/lib/mysql
...
volumes:
data:

DB connection stopped working in docker-compose file

This docker-compose file was working fine six months ago. But recently I tried to use it to test my app and received this error:
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not translate host name "db-test" to address: Name or service not known
I read through some other Stack Overflow answers and tried adding 'restart: always' to the web service. I also tried adding a local network to the compose file, and nothing has worked. Any ideas what I am doing wrong? Here is my compose file:
version: '3'
services:
  # postgres database
  db-test:
    image: postgres:10.9
    environment:
      - POSTGRES_PASSWORD=example
    volumes:
      - pg-test-data:/var/lib/postgresql/data
  # main redis instance, used to store available years for each organization
  redis-test:
    image: redis:5.0.4
    volumes:
      - redis-test-data:/data
  # redis cache used for caching agency pages like /agencies/salaries/
  redis_cache-test:
    image: redis:5.0.4
  # search engine
  elasticsearch-test:
    image: elasticsearch:5.6.10
    volumes:
      - elasticsearch-test-data:/usr/share/elasticsearch/data
  # web app
  web-test:
    build: .
    environment:
      - DATABASE_URL=postgresql://postgres:example@db-test/postgres
      - ENVIRONMENT=development
      - REDIS_URL=redis://redis-test:6379
      - REDIS_CACHE_URL=redis://redis_cache-test:6379
      - ELASTIC_ENDPOINT=elasticsearch-test:9200
    env_file: docker.env
    depends_on:
      - db-test
      - redis-test
      - redis_cache-test
      - elasticsearch-test
    volumes:
      - .:/code
  # worker instance for processing large files in background
  worker-test:
    build: .
    command: python run-worker.py
    environment:
      - DATABASE_URL=postgresql://postgres:example@db-test/postgres
      - ENVIRONMENT=development
      - REDIS_URL=redis://redis-test:6379
      - REDIS_CACHE_URL=redis://redis_cache-test:6379
      - ELASTIC_ENDPOINT=elasticsearch-test:9200
    env_file: docker.env
    depends_on:
      - db-test
      - redis-test
    volumes:
      - .:/code
volumes:
  pg-test-data: {}
  redis-test-data: {}
  elasticsearch-test-data: {}
Here is my Dockerfile:
FROM python:2.7.17
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD . /code/
ADD requirements /requirements
RUN pip install -r /requirements/local.txt
I was able to fix this by adding:
links:
  - db-test:db-test
to the web-test service.
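For anyone hitting the same could not translate host name error, a quick hedged diagnostic is to test DNS from inside the failing container, since Compose service names are resolved by Docker's embedded DNS on the project network. Run something like this inside the web-test container (e.g. via docker-compose run web-test python):

import socket

try:
    # Service names resolve via Docker's embedded DNS on the Compose network.
    print("db-test resolves to", socket.gethostbyname("db-test"))
except socket.gaierror as exc:
    print("db-test does not resolve:", exc)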
