How to access env variables from a docker-compose file - Python

I have two services in docker-compose: 1. an application and 2. a db service (which can be MySQL or Postgres). Depending on the environment variables set in the compose file for the db service, I need to create the DATABASE_URI for the SQLAlchemy engine. How do I access these environment variables in the app's Docker container?
I am trying to access environment variables set inside the docker-compose file, not the Dockerfile.
Below is how my docker-compose file looks:
version: "3.7"
services:
myapp:
image: ${TAG:-myapp}
build:
context: .
ports:
- "5000:5000"
docker_postgres:
image: "postgres:latest"
environment:
- POSTGRES_PASSWORD=postgres
ports:
- "5432:5432"

You need to set environment variables for your postgres database if you want to build your own database_uri.
environment:
  POSTGRES_DB: dev
  POSTGRES_USER: username
  POSTGRES_PASSWORD: pw

@jonrsharpe You mean to say I can do something like this?
services:
  myapp:
    image: ${TAG:-myapp}
    build:
      context: .
    environment:
      - DB_USER=postgres
      - DB_PASSWORD=postgres
      - DB_HOST=docker_postgres
    ports:
      - "5000:5000"
  docker_postgres:
    image: "postgres:latest"
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
    ports:
      - "5432:5432"

Related

Unknown MySQL server host on Docker and Python

I'm building an API that fetches data from a MySQL database using Docker. I've tried everything and I always get this error: 2005 (HY000): Unknown MySQL server host 'db' (-3). Here is my docker compose file:
version: '3'
services:
  web:
    container_name: nginx
    image: nginx
    volumes:
      - ./nginx/nginx.conf:/tmp/nginx.conf
    environment:
      - FLASK_SERVER_ADDR=backend:9091
      - DB_PASSWORD=password
      - DB_USER=user
      - DB_HOST=db
    command: /bin/bash -c "envsubst < /tmp/nginx.conf > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
    ports:
      - 80:80
    networks:
      - local
    depends_on:
      - backend
  backend:
    container_name: app
    build: flask
    environment:
      - FLASK_SERVER_PORT=9091
      - DB_PASSWORD=password
    volumes:
      - flask:/tmp/app_data
    restart: unless-stopped
    networks:
      - local
    depends_on:
      - db
    links:
      - db
  db:
    container_name: db
    image: mysql
    restart: unless-stopped
    volumes:
      - ./mysql:/docker-entrypoint-initdb.d
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=database
      - MYSQL_USER=user
      - MYSQL_PASSWORD=password
    ports:
      - 3306:3306
networks:
  local:
volumes:
  flask:
    driver: local
  db:
    driver: local
Inside the flask directory I have my Dockerfile like so:
FROM ubuntu:latest
WORKDIR /src
RUN apt -y update
RUN apt -y upgrade
RUN apt install -y python3
RUN apt install -y python3-pip
COPY . .
RUN chmod +x -R .
RUN pip install -r requirements.txt --no-cache-dir
CMD ["python3","app.py"]
Finally, in my app.py file I try to connect to the database using the name of the Docker container. I have tried using localhost and it still gives me the same error. This is the part of the code I use to access it:
conn = mysql.connector.connect(
    host="db",
    port=3306,
    user="user",
    password="password",
    database="database")
What is it that I'm doing wrong?
The containers aren't on the same networks:, which could be why you're having trouble.
I'd recommend deleting all of the networks: blocks in the file, both the blocks at the top level and the blocks in the web and backend containers. Compose will create a network named default for you and attach all of the containers to that network. Networking in Compose in the Docker documentation has more details on this setup.
The links: block is related to an obsolete Docker networking mode, and I've seen it implicated in problems in other SO questions. You should remove it as well.
You also do not need to manually specify container_name: in most cases. For the Nginx container, the Docker Hub nginx image already knows how to do the envsubst processing so you do not need to override its command:.
This should leave you with:
version: '3.8'
services:
  web:
    image: nginx
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/templates/default.conf.template
    environment: { ... }
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    build: flask
    environment: { ... }
    volumes:
      - flask:/tmp/app_data
    restart: unless-stopped
    depends_on:
      - db
  db:
    image: mysql
    restart: unless-stopped
    volumes:
      - ./mysql:/docker-entrypoint-initdb.d
      - db:/var/lib/mysql
    environment: { ... }
    ports:
      - 3306:3306
volumes:
  flask:
  db:
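One caveat worth adding: even with the networking fixed, the backend container can start before MySQL is ready to accept connections. A sketch of a small retry loop around the connect call (credentials reused from the question; the retry counts are arbitrary choices):

import time

import mysql.connector
from mysql.connector import Error

def connect_with_retry(retries=10, delay=3):
    # `db` resolves through Compose's built-in DNS once both services
    # share the default network; credentials match the compose file.
    for _ in range(retries):
        try:
            return mysql.connector.connect(
                host="db",
                port=3306,
                user="user",
                password="password",
                database="database")
        except Error:
            time.sleep(delay)
    raise RuntimeError("MySQL never became reachable")

conn = connect_with_retry()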

Connecting psycopg2 to dockerized PostgreSQL

I am trying to connect to a PostgreSQL database initialized within a Dockerized Django project. I am currently using the Python package psycopg2 inside a Jupyter Notebook to connect and add/manipulate data inside the db.
With the code:
connector = psycopg2.connect(
    database="postgres",
    user="postgres",
    password="postgres",
    host="postgres",
    port="5432")
It raises the following error:
OperationalError: could not translate host name "postgres" to address: Unknown host
Meanwhile, it connects correctly to the local db named postgres with host as localhost or 127.0.0.1, but that is not the db I want to access. How can I connect from Python to the db? Should I change something in the project setup?
You can find the Github repository here. Many thanks!
docker-compose.yml:
version: '3.8'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - postgres:postgres
      - redis:redis
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "80:80"
    volumes:
      - web-static:/www/static
    links:
      - web:web
  postgres:
    restart: always
    image: postgres:latest
    hostname: postgres
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
  pgadmin:
    image: dpage/pgadmin4
    depends_on:
      - postgres
    ports:
      - "5050:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin4@pgadmin.org
      PGADMIN_DEFAULT_PASSWORD: admin
    restart: unless-stopped
  redis:
    restart: always
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data
volumes:
  web-django:
  web-static:
  pgdata:
  redisdata:
Dockerfile:
FROM python:3.7-slim
RUN python -m pip install --upgrade pip
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . .
Edit
To verify that localhost is not the correct hostname, I compared the tables visible in PgAdmin (which connects to the correct host) with those returned through psycopg2: PgAdmin shows the correct tables, psycopg2 the incorrect ones. (Screenshots omitted.)
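The error message itself points at the cause: the service name postgres only resolves through Docker's embedded DNS inside the Compose network, so a Jupyter Notebook started on the host cannot use it. A sketch of the two cases (assuming the notebook runs on the host, and assuming nothing else on the host is already bound to port 5432, which would shadow the published port):

import psycopg2

# Inside another container on the same Compose network, the service
# name resolves via Docker's embedded DNS:
in_network = dict(host="postgres", port="5432")

# From the host (e.g. a Jupyter Notebook started outside Docker), the
# service name does not resolve; the port published as "5432:5432"
# makes the dockerized instance reachable on localhost instead:
from_host = dict(host="localhost", port="5432")

connector = psycopg2.connect(
    database="postgres",
    user="postgres",
    password="postgres",
    **from_host)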

Docker Network Host on Ubuntu

I have a Django REST service and another Flask service that work as a broker for the application. Both are different projects that run with their own Docker container.
I'm able to POST a product on the Django service that is consumed by the Flask service; however, I cannot reach the Django service via Flask.
These containers are running on the same network, and I already tried Thomasleveil's suggestions, including docker-host by qoomon.
The error received by the request is the same as before I tried to forward the traffic. The difference is that now when I do the request it keeps hanging for a while until it fails.
The error is as follows:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.17.0.1', port=8000): Max retries exceeded with url: /api/user (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f0039388340>: Failed to establish a new connection: [Errno 110] Connection timed out'))
The request I'm trying to make is a POST at /api/products/1/like. At the moment, no body is required.
Here is how I'm doing the POST with Flask, where the IP is the Docker IP:
@app.route("/api/products/<int:id>/like", methods=["POST"])
def like(id):
    req = requests.get("http://172.17.0.1:8000/api/user")
    json = req.json()
    try:
        product_user = ProductUser(user_id=json["id"], product=id)
        db.session.add(product_user)
        db.session.commit()
        publish("product_liked", id)
    except:
        abort(400, "You already liked this product")
    return jsonify({
        "message": "success"
    })
Django's docker compose file (please ignore the service tcp_message_emitter):
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
  dockerhost:
    image: qoomon/docker-host
    cap_add:
      - NET_ADMIN
      - NET_RAW
    restart: on-failure
    networks:
      - backend
  tcp_message_emitter:
    image: alpine
    depends_on:
      - dockerhost
    command: [ "sh", "-c", "while :; do date; sleep 1; done | nc 'dockerhost' 2323 -v" ]
    networks:
      - backend
networks:
  backend:
    driver: bridge
Flask's docker compose file:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python main.py"
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
At this point, I know that I am missing some detail or that I have a misconfiguration.
You can have a look at the repo here: https://github.com/lfbatista/ms-ecommerce
Any help would be appreciated.
These containers are not actually on the same network. To put two containers from different docker-compose projects into one network you need to 'import' an existing network in one of the files. Here's how you can do it:
# first project
networks:
  internal:
  shared:
---
# second project
networks:
  internal:
  shared:
    # This is where all the magic happens:
    external: true  # Means do not create a network, import existing.
    name: admin_shared  # Name of the existing network. It's usually made of <folder_name>_<network_name>.
Do not forget to put all services into the same internal network or they will not be able to communicate with each other. If you forget to do that, Docker will create a <folder_name>_default network and put any container with no explicitly assigned network there. You can assign networks like this:
services:
  backend:
    ...
    networks:
      internal:
      # Since this service needs access to the service in another project,
      # you put two networks here.
      shared:
        # This part is relevant for this specific question because both
        # projects have services with identical names. To avoid a mess
        # with DNS names you can add an additional name to the service
        # using 'aliases'. This particular service will be available in
        # the shared network as 'flask-backend'.
        aliases:
          - flask-backend
  db:
    ...
    # You can also assign networks as an array if you need no extra configuration:
    networks:
      - internal
And here are the files from your repository. Instead of an IP address, one service can reach the other via flask-backend or django-backend respectively. Note that I cut out those strange 'host network containers'.
admin/docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
    networks:
      internal:
      shared:
        aliases:
          - django-backend
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
    networks:
      - internal
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
    networks:
      - internal
networks:
  internal:
  shared:
main/docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python main.py"
    networks:
      internal:
      shared:
        aliases:
          - flask-backend
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    networks:
      - internal
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    networks:
      - internal
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
networks:
  internal:
  shared:
    external: true
    name: admin_shared
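With the alias in place, the hard-coded 172.17.0.1 in the Flask route can be replaced by the alias, which Compose's DNS resolves from inside the shared network. A sketch (the endpoint path comes from the question; the timeout is an added assumption):

import requests

# `django-backend` is the alias assigned on the shared network; 8000
# is the port the Django container listens on internally.
req = requests.get("http://django-backend:8000/api/user", timeout=5)
user = req.json()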

Creating shared directory between containers with docker-compose

My Flask app (webapp) has two directories inside (uploads and images). I want my second container (rq-worker) to have access to them, so it can take something from uploads and save back to images. How can I organize this inside my docker-compose.yml?
version: '3.5'
services:
  web:
    build: ./webapp
    image: webapp
    container_name: webapp
    ports:
      - "5000:5000"
    depends_on:
      - redis-server
      - mongodb
  redis-server:
    image: redis:alpine
    container_name: redis-server
    ports:
      - 6379:6379
  mongodb:
    image: mongo:4.2-bionic
    container_name: mongodb
    ports:
      - "27017:27017"
  rq-worker:
    image: jaredv/rq-docker:0.0.2
    container_name: rq-worker
    command: rq worker -u redis://redis-server:6379 high normal low
    deploy:
      replicas: 1
    depends_on:
      - redis-server
  dashboard:
    build: ./dashboard
    image: dashboard
    container_name: dashboard
    ports:
      - "9181:9181"
    command: rq-dashboard -H redis-server
    depends_on:
      - redis-server
You'll need to specify a volume like this:
volumes:
  data-volume:
And then attach it to your services, e.g. for web:
web:
  build: ./webapp
  image: webapp
  container_name: webapp
  ports:
    - "5000:5000"
  depends_on:
    - redis-server
    - mongodb
  volumes:
    - data-volume:/my/mnt/point
The documentation has more info, including how to configure a volume driver, e.g. if you want the volume on NFS. There is also a list of available volume plugins that enable Docker volumes to persist across multiple Docker hosts.
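Once both web and rq-worker mount data-volume at the same path, they see the same files. A sketch of a worker job that reads from uploads and writes to images (the /shared mount point and the copy logic are assumptions for illustration):

import shutil
from pathlib import Path

# Hypothetical mount point: both `web` and `rq-worker` would declare
# `- data-volume:/shared` in their volumes: lists.
UPLOADS = Path("/shared/uploads")
IMAGES = Path("/shared/images")

def process_upload(filename: str) -> None:
    # Take a file dropped by the web container and publish the result
    # where the web container serves images from.
    src = UPLOADS / filename
    IMAGES.mkdir(parents=True, exist_ok=True)
    shutil.copy(src, IMAGES / filename)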

Named volume not being created in docker

I'm trying to create a django/nginx/gunicorn/postgres docker-compose configuration.
Every time I call docker-compose down, my postgres db gets wiped. I did a little digging, and when I call docker-compose up, my named volume is not being created like I've seen in other tutorials.
What am I doing wrong?
Here is my yml file (if it helps, I'm using macOS to run my project):
version: "3"
volumes:
postgres:
driver: local
services:
database:
image: "postgres:latest" # use latest postgres
container_name: database
environment:
- POSTGRES_USER=REDACTED
- POSTGRES_PASSWORD=REDACTED
- POSTGRES_DB=REDACTED
volumes:
- postgres:/postgres
ports:
- 5432:5432
nginx:
image: nginx:latest
container_name: nginx
ports:
- "8000:8000"
volumes:
- ./src:/src
- ./config/nginx:/etc/nginx/conf.d
- ./src/static:/static
depends_on:
- web
migrate:
build: .
container_name: migrate
depends_on:
- database
command: bash -c "python manage.py makemigrations && python manage.py migrate"
volumes:
- ./src:/src
web:
build: .
container_name: django
command: gunicorn Project.wsgi:application --bind 0.0.0.0:8000
depends_on:
- migrate
- database
volumes:
- ./src:/src
- ./src/static:/static
expose:
- "8000"
You need to mount the data directory at /var/lib/postgresql/data, which is where the postgres image actually keeps its data. Mounting the volume at /postgres leaves the real data directory in the container's ephemeral layer, so it is wiped every time you run docker-compose down. (Note that docker-compose down -v would still remove named volumes.)
volumes:
  - postgres:/var/lib/postgresql/data
