I have a Django REST service and a Flask service that acts as a broker for the application. They are separate projects, each running in its own Docker container.
I can POST a product to the Django service and have it consumed by the Flask service; however, I cannot reach the Django service from Flask.
These containers are running on the same network, and I have already tried Thomasleveil's suggestions, including docker-host by qoomon.
The error from the request is the same as before I tried to forward the traffic; the difference is that the request now hangs for a while before it fails.
The error is as follows:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.17.0.1', port=8000): Max retries exceeded with url: /api/user (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f0039388340>: Failed to establish a new connection: [Errno 110] Connection timed out'))
The request I'm trying to make is a POST to /api/products/1/like. At the moment, no body is required.
Here is how I'm doing the POST with Flask, where the IP is the Docker bridge IP:
@app.route("/api/products/<int:id>/like", methods=["POST"])
def like(id):
    req = requests.get("http://172.17.0.1:8000/api/user")
    json = req.json()
    try:
        product_user = ProductUser(user_id=json["id"], product=id)
        db.session.add(product_user)
        db.session.commit()
        publish("product_liked", id)
    except:
        abort(400, "You already liked this product")
    return jsonify({
        "message": "success"
    })
Django's docker compose file (please ignore the tcp_message_emitter service):
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
  dockerhost:
    image: qoomon/docker-host
    cap_add:
      - NET_ADMIN
      - NET_RAW
    restart: on-failure
    networks:
      - backend
  tcp_message_emitter:
    image: alpine
    depends_on:
      - dockerhost
    command: [ "sh", "-c", "while :; do date; sleep 1; done | nc 'dockerhost' 2323 -v" ]
    networks:
      - backend
networks:
  backend:
    driver: bridge
Flask's docker compose file:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python main.py"
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
At this point, I know that I am missing some detail or that I have a misconfiguration somewhere.
You can have a look at the repo here: https://github.com/lfbatista/ms-ecommerce
Any help would be appreciated.
These containers are not actually on the same network. To put two containers from different docker-compose projects onto one network, you need to 'import' an existing network in one of the files. Here's how you can do it:
# first project
networks:
  internal:
  shared:
---
# second project
networks:
  internal:
  shared:
    # This is where all the magic happens:
    external: true  # Do not create a network; import an existing one.
    name: admin_shared  # Name of the existing network, usually <folder_name>_<network_name>.
Do not forget to put all services into the same internal network, or they will not be able to communicate with each other. If you forget to do that, Docker will create a <folder_name>_default network and put any container with no explicitly assigned network there. You can assign networks like this:
services:
  backend:
    ...
    networks:
      internal:
      # Since this service needs access to a service in the other
      # project, it is attached to two networks.
      shared:
        # This part is relevant for this specific question because
        # both projects have services with identical names. To avoid
        # a mess with DNS names you can add an additional name to the
        # service using 'aliases'. This particular service will be
        # available on the shared network as 'flask-backend'.
        aliases:
          - flask-backend
  db:
    ...
    # You can also assign networks as an array if you need no extra configuration:
    networks:
      - internal
And here are the files from your repository. Instead of an IP address, one service can reach the other via flask-backend or django-backend, respectively. Note that I cut out those strange 'host network containers'.
admin/docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
    networks:
      internal:
      shared:
        aliases:
          - django-backend
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
    networks:
      - internal
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
    networks:
      - internal
networks:
  internal:
  shared:
main/docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python main.py"
    networks:
      internal:
      shared:
        aliases:
          - flask-backend
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    networks:
      - internal
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    networks:
      - internal
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
networks:
  internal:
  shared:
    external: true
    name: admin_shared
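With both projects attached to the shared network, the Flask view no longer needs the hard-coded bridge IP: Docker's embedded DNS resolves the alias from inside the containers. A minimal sketch of the call (the timeout value is an arbitrary choice of mine, not something your code requires):

import requests

# 'django-backend' is the alias given to the Django service on the
# shared network; Docker's embedded DNS resolves it between containers.
DJANGO_API = "http://django-backend:8000"

def fetch_current_user():
    # A short timeout avoids the long hang you saw with the
    # unreachable bridge-gateway IP.
    resp = requests.get(f"{DJANGO_API}/api/user", timeout=5)
    resp.raise_for_status()  # surface HTTP errors instead of decoding bad JSON
    return resp.json()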
I am running a dockerized Django app using the following docker-compose file:
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    command: gunicorn PriceOptimization.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
    networks:
      - dbnet
    ports:
      - "8000:8000"
    environment:
      aws_access_key_id: ${aws_access_key_id}
  redis:
    restart: always
    image: redis:latest
    networks:
      - dbnet
    ports:
      - "6379:6379"
  celery:
    restart: always
    build:
      context: .
    command: celery -A PriceOptimization worker -l info
    volumes:
      - ./PriceOptimization:/PriceOptimization
    depends_on:
      - web
      - redis
    networks:
      - dbnet
    environment:
      access_key_id: ${access_key_id}
  nginx:
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - static_volume:/home/app/web/staticfiles
    depends_on:
      - web
    networks:
      - dbnet
  database:
    image: "postgres"  # use latest official postgres version
    restart: unless-stopped
    env_file:
      - ./database.env  # configure postgres
    networks:
      - dbnet
    ports:
      - "5432:5432"
    volumes:
      - database-data:/var/lib/postgresql/data/  # persist data even if container shuts down
volumes:
  database-data:
  static_volume:
  media_volume:
I have added celery.py to my app, and I am building / running the docker container as follows:
docker-compose -f $HOME/PriceOpt/PriceOptimization/docker-compose.prod.yml up -d --build
Running the application in my development environment lets me check at the command line that the Celery app is connected correctly. Is there a way to test whether my Celery app is initiated properly at the end of the build process?
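One possible smoke test, sketched under the assumption that your Celery app object is importable from PriceOptimization.celery (the conventional layout): ping the workers over the broker once the stack is up, e.g. with docker-compose exec web python healthcheck.py.

# healthcheck.py - a hypothetical smoke test; assumes the Celery app
# object is defined as 'app' in PriceOptimization/celery.py.
import sys

from PriceOptimization.celery import app

def celery_is_up(timeout=2.0):
    # ping() broadcasts over the broker and returns one reply per
    # worker that answers within the timeout.
    replies = app.control.ping(timeout=timeout)
    return bool(replies)

if __name__ == "__main__":
    sys.exit(0 if celery_is_up() else 1)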
I have several micro-services running as docker containers. All web services work fine and route correctly.
The only issue is the websocket service.
The websocket service itself uses Python websockets and has its own TLS certificates.
Trying to access the websocket at wss://websocket.localhost fails; in the setup below it doesn't find the page at all. In my previous configurations, it resulted in a Bad Gateway error.
Apparently Traefik works with websockets out of the box, with no additional configuration. That doesn't seem to be the case. Any pointers?
The websocket connection works without Docker or Traefik involved, so I have ruled that out.
Any help on this would be extremely appreciated.
docker-compose.yml
version: "3.7"
networks:
web:
external: true
internal:
external: false
volumes:
mysql_data:
services:
traefik:
image: traefik:v2.2.1
container_name: traefik
restart: always
ports:
- "80:80"
- "443:443"
expose:
- 8080
environment:
- /var/run/docker.sock:/var/run/docker.sock
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./config/:/config
- ./traefik.yml:/traefik.yml
networks:
- web
- internal
labels:
- traefik.http.routers.traefik.tls=true
- traefik.http.routers.traefik.entrypoints=secure
- traefik.http.routers.traefik.rule=Host(`traefik.localhost`)
- traefik.http.routers.traefik.service=api#internal
dozzle:
image: amir20/dozzle:latest
container_name: dozzle
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
expose:
- 8080
labels:
- traefik.http.routers.dozzle.tls=true
- traefik.http.routers.dozzle.entrypoints=secure
- traefik.http.routers.dozzle.rule=Host(`dozzle.localhost`) || Host(`logs.localhost`)
networks:
- internal
db:
image: mysql:latest
container_name: db
environment:
MYSQL_ROOT_PASSWORD: ########
restart: always
healthcheck:
test: "exit 0"
command: --default-authentication-plugin=mysql_native_password
ports:
- '3306:3306'
volumes:
- mysql_data:/var/lib/mysql
networks:
- internal
websocket:
image: local-websocket-image
container_name: websocket-stage
restart: on-failure
command: python server.py
depends_on:
db:
condition: service_healthy
expose:
- 8080
networks:
- web
- internal
environment:
- PATH_TO_CONFIG=/src/setup.cfg
volumes:
- ${PWD}/docker-config:/src
- ${PWD}/config/certs/socket:/var
labels:
- traefik.http.routers.core-socket-stage-router.tls=true
- traefik.http.routers.core-socket-stage-router.entrypoints=secure
- traefik.http.routers.core-socket-stage-router.rule=Host(`websocket.localhost`)
traefik.yml
entryPoints:
  insecure:
    address: :80
    http:
      redirections:
        entryPoint:
          to: secure
          scheme: https
  secure:
    address: :443
log:
  level: INFO
accessLog:
  filePath: "traefik-access.log"
  bufferingSize: 100
api:
  dashboard: true
  insecure: true
ping: {}
providers:
  file:
    filename: /config/dynamic.yml  # traefik dynamic configuration
    watch: true  # every time it changes, it will be reloaded
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: true
config/dynamic.yml
tls:
  stores:
    default:
      defaultCertificate:
        certFile: cert.crt
        keyFile: key.key
  certificates:
    - certFile: crt.crt
      keyFile: key.key
      stores:
        - default
      domains:
        - main: "localhost"
While looking at your configuration, the following doesn't fit:
The docker-compose project name will be part of the domain names. The default is to use the parent folder name of your docker-compose.yaml. You didn't specify it here, so I assume it to be traefik. You can set it explicitly in the docker-compose call with docker-compose -p traefik up or by setting the environment variable COMPOSE_PROJECT_NAME.
You are using the domain name '.localhost', but you don't define the domain name explicitly. That means the default name is used, which is derived from the service name, the project name (the folder where the docker-compose file is stored), and the name of the docker network you attach to, following this pattern: servicename.projectname_networkname.
Use the attributes hostname and domainname to explicitly define a name (this only works for networks with internal=false).
With two network connections and, additionally, a domainname definition, you get the following domain names:
db.traefik_internal (only intern, db.localhost will not work)
dozzle.traefik_internal (only intern, dozzle.localhost will not work)
traefik.localhost
traefik.traefik_web
traefik.traefik_internal
websocket.localhost
websocket.traefik_web
websocket.traefik_internal
external: true just means that the network is created externally, by docker network create or by another docker-compose project. The main effect is that it is not deleted when doing docker-compose down. It has nothing to do with the connection to the outside world.
To get an isolated internal network you have to use the option internal: true.
The option condition: service_healthy is no longer supported with version: "3.7", so either remove that option (it doesn't work the way you expect anyway) or change the version to 2.4.
Here is my current version of the docker-compose.yaml:
version: "2.4"
networks:
web:
internal:
internal: true
volumes:
mysql_data:
services:
traefik:
image: traefik:v2.2.1
container_name: traefik
hostname: traefik
domainname: localhost
restart: always
ports:
- "80:80"
- "443:443"
expose:
- 8080
environment:
- /var/run/docker.sock:/var/run/docker.sock
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./config/:/config
- ./traefik.yml:/traefik.yml
networks:
- web
- internal
labels:
- traefik.http.routers.traefik.tls=true
- traefik.http.routers.traefik.entrypoints=secure
- traefik.http.routers.traefik.rule=Host(`traefik.localhost`)
- traefik.http.routers.traefik.service=api#internal
dozzle:
image: amir20/dozzle:latest
container_name: dozzle
hostname: dozzle
domainname: localhost
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
expose:
- 8080
labels:
- traefik.http.routers.dozzle.tls=true
- traefik.http.routers.dozzle.entrypoints=secure
- traefik.http.routers.dozzle.rule=Host(`dozzle.traefik_internal`) || Host(`logs.localhost`)
networks:
- internal
db:
image: mysql:latest
container_name: db
hostname: db
domainname: localhost
environment:
MYSQL_ROOT_PASSWORD: ########
restart: always
healthcheck:
test: "exit 0"
command: --default-authentication-plugin=mysql_native_password
ports:
- '3306:3306'
volumes:
- mysql_data:/var/lib/mysql
networks:
- internal
websocket:
image: local-websocket-image
container_name: websocket-stage
hostname: websocket
domainname: localhost
restart: on-failure
command: python server.py
depends_on:
db:
condition: service_healthy
expose:
- 8080
networks:
- web
- internal
environment:
- PATH_TO_CONFIG=/src/setup.cfg
volumes:
- ${PWD}/docker-config:/src
- ${PWD}/config/certs/socket:/var
labels:
- traefik.http.routers.core-socket-stage-router.tls=true
- traefik.http.routers.core-socket-stage-router.entrypoints=secure
- traefik.http.routers.core-socket-stage-router.rule=Host(`websocket.localhost`)
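To check the route end to end, a small client can attempt the handshake through Traefik. A sketch using the third-party websockets package (an assumption on my side; certificate verification is disabled because your stack uses its own certificates for localhost):

import asyncio
import ssl

import websockets  # assumed client library: pip install websockets

async def probe():
    # The stack serves a self-signed localhost certificate, so skip
    # verification for this connectivity test only.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    async with websockets.connect("wss://websocket.localhost", ssl=ctx) as ws:
        await ws.send("ping")   # arbitrary payload
        print(await ws.recv())  # any reply proves routing and TLS work

asyncio.run(probe())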
My Flask app (webapp) has two directories inside (uploads and images). I want my second container (rq-worker) to have access to them, so it can take something from uploads and save back to images. How can I organize this inside my docker-compose.yml?
version: '3.5'
services:
  web:
    build: ./webapp
    image: webapp
    container_name: webapp
    ports:
      - "5000:5000"
    depends_on:
      - redis-server
      - mongodb
  redis-server:
    image: redis:alpine
    container_name: redis-server
    ports:
      - 6379:6379
  mongodb:
    image: mongo:4.2-bionic
    container_name: mongodb
    ports:
      - "27017:27017"
  rq-worker:
    image: jaredv/rq-docker:0.0.2
    container_name: rq-worker
    command: rq worker -u redis://redis-server:6379 high normal low
    deploy:
      replicas: 1
    depends_on:
      - redis-server
  dashboard:
    build: ./dashboard
    image: dashboard
    container_name: dashboard
    ports:
      - "9181:9181"
    command: rq-dashboard -H redis-server
    depends_on:
      - redis-server
You'll need to specify a volume like this:
volumes:
  data-volume:
And then attach it to your services, e.g. for web:
web:
  build: ./webapp
  image: webapp
  container_name: webapp
  ports:
    - "5000:5000"
  depends_on:
    - redis-server
    - mongodb
  volumes:
    - data-volume:/my/mnt/point
The documentation has more info, including how to configure a volume driver (e.g. if you want the volume on NFS), and it lists the available volume plugins that enable Docker volumes to persist across multiple Docker hosts.
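With the same volume also attached to rq-worker, a job can read from uploads and write back to images. A hypothetical job sketch, assuming both containers mount data-volume at /my/mnt/point (the mount point above):

import os
import shutil

# Both 'web' and 'rq-worker' mount data-volume here, so files written
# by one container are visible to the other.
DATA_DIR = "/my/mnt/point"

def process_upload(filename):
    src = os.path.join(DATA_DIR, "uploads", filename)
    dst = os.path.join(DATA_DIR, "images", filename)
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.copy(src, dst)  # stand-in for the real image processing
    return dst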
I have two services in docker-compose: 1. an application and 2. a db service (it can be MySQL or Postgres). Depending on environment variables set in the compose file for the db service, I need to create a DATABASE_URI for the SQLAlchemy engine. How do I access these env vars inside the app's Docker container?
I am trying to access env vars set in the docker-compose file, not the Dockerfile.
Below is how my docker-compose file looks:
version: "3.7"
services:
myapp:
image: ${TAG:-myapp}
build:
context: .
ports:
- "5000:5000"
docker_postgres:
image: "postgres:latest"
environment:
- POSTGRES_PASSWORD=postgres
ports:
- "5432:5432"
You need to set environment variables for your postgres database if you want to build your own database_uri.
environment:
  POSTGRES_DB: dev
  POSTGRES_USER: username
  POSTGRES_PASSWORD: pw
@jonrsharpe You mean to say I can do something like this?
services:
  myapp:
    image: ${TAG:-myapp}
    build:
      context: .
    environment:
      - DB_USER=postgres
      - DB_PASSWORD=postgres
      - DB_HOST=docker_postgres
    ports:
      - "5000:5000"
  docker_postgres:
    image: "postgres:latest"
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
    ports:
      - "5432:5432"
I need your help. I'm trying to use memcached with docker-compose, but I'm getting None. I forwarded the port between the web and memcached containers; the cache is configured on port 11211.
What am I doing wrong?
views.py example:
from django.core.cache import cache
from django.shortcuts import render

from .models import CategoryNews  # assumed import path for the model

def show_category(requests):
    categorys_name = CategoryNews.objects.all()
    cache_key = 'category_names'
    cache_time = 1800
    result = cache.get(cache_key)
    print(result)
    if result is None:
        result = categorys_name
        cache.set(cache_key, result, cache_time)
        return render(requests, 'home_app/category.html', {'categorys_name': categorys_name})
    return print('No none')
settings
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '0.0.0.0:11211',
    }
}
Docker-compose
version: '3'
services:
  db:
    restart: always
    image: postgres
  web:
    restart: always
    working_dir: /var/app
    build: ./testsite
    entrypoint: ./docker-entrypoint.sh
    volumes:
      - ./testsite:/var/app
    expose:
      - "80"
      - "11211"
    depends_on:
      - db
  ngnix:
    restart: always
    build: ./ngnix
    ports:
      - "80:80"
    volumes:
      - ./testsite/static:/staticimage
      - ./testsite/media:/mediafilesh
    depends_on:
      - web
  memcached:
    image: memcached:latest
    entrypoint:
      - memcached
      - -m 64
    ports:
      - "11211:11211"
    depends_on:
      - web
You need to expose the port on your memcached container and use memcached as the LOCATION in your cache config. I think you have a misconception about expose and ports:
EXPOSE: Exposes ports without publishing them to the host machine (your computer); they'll only be accessible to linked services (between containers). Only the internal port can be specified.
PORTS: Publishes ports. Either specify both ports (HOST:CONTAINER) or just the container port (a random host port will be chosen), so you redirect a host port (your computer) to a container port.
So, in your particular example this should help:
Docker-compose.yml
version: '3'
services:
  db:
    restart: always
    image: postgres
  web:
    restart: always
    working_dir: /var/app
    build: ./testsite
    entrypoint: ./docker-entrypoint.sh
    volumes:
      - ./testsite:/var/app
    expose:
      - "80"
    depends_on:
      - db
  ngnix:
    restart: always
    build: ./ngnix
    ports:
      - "80:80"
    volumes:
      - ./testsite/static:/staticimage
      - ./testsite/media:/mediafilesh
    depends_on:
      - web
  memcached:
    image: memcached:latest
    entrypoint:
      - memcached
      - -m 64
    ports:
      - "11211:11211"  # This is only needed if you want to connect from your host to the container
    expose:
      - "11211"
    depends_on:
      - web
Your cache settings:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'memcached:11211',
    }
}
Note that memcached is a reference to the memcached container in your docker-compose.yml file. For instance, if you named your memcached container my_project_memcached, you would need to use that name in your settings file:
my_project_memcached:
  image: memcached:latest
  entrypoint:
    - memcached
    - -m 64
  ports:
    - "11211:11211"
  expose:
    - "11211"
  depends_on:
    - web
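To confirm the wiring from inside the web container, a quick round trip through the cache is enough, e.g. from python manage.py shell:

from django.core.cache import cache

# MemcachedCache fails silently on connection errors, so a set/get
# round trip is the simplest way to tell whether LOCATION is reachable.
cache.set('connectivity-check', 'ok', 30)
print(cache.get('connectivity-check'))  # 'ok' if memcached is reachable, None otherwise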