I have several micro-services running as docker containers. All web services work fine and route correctly.
The only issue is the websocket service.
The websocket service itself uses python websockets and has its own TLS certificates.
Trying to access the websocket with wss://websocket.localhost fails; with the setup below, the page isn't found at all.
In my previous configurations, it resulted in a Bad Gateway error.
Apparently Traefik works with websockets out of the box, with no additional configuration.
That doesn't seem to be the case here. Any pointers?
The websocket connection works without docker or traefik involved, so I ruled that issue out.
Any help on this would be extremely appreciated.
docker-compose.yml
version: "3.7"

networks:
  web:
    external: true
  internal:
    external: false

volumes:
  mysql_data:

services:
  traefik:
    image: traefik:v2.2.1
    container_name: traefik
    restart: always
    ports:
      - "80:80"
      - "443:443"
    expose:
      - 8080
    environment:
      - /var/run/docker.sock:/var/run/docker.sock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./config/:/config
      - ./traefik.yml:/traefik.yml
    networks:
      - web
      - internal
    labels:
      - traefik.http.routers.traefik.tls=true
      - traefik.http.routers.traefik.entrypoints=secure
      - traefik.http.routers.traefik.rule=Host(`traefik.localhost`)
      - traefik.http.routers.traefik.service=api@internal
  dozzle:
    image: amir20/dozzle:latest
    container_name: dozzle
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    expose:
      - 8080
    labels:
      - traefik.http.routers.dozzle.tls=true
      - traefik.http.routers.dozzle.entrypoints=secure
      - traefik.http.routers.dozzle.rule=Host(`dozzle.localhost`) || Host(`logs.localhost`)
    networks:
      - internal
  db:
    image: mysql:latest
    container_name: db
    environment:
      MYSQL_ROOT_PASSWORD: ########
    restart: always
    healthcheck:
      test: "exit 0"
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - '3306:3306'
    volumes:
      - mysql_data:/var/lib/mysql
    networks:
      - internal
  websocket:
    image: local-websocket-image
    container_name: websocket-stage
    restart: on-failure
    command: python server.py
    depends_on:
      db:
        condition: service_healthy
    expose:
      - 8080
    networks:
      - web
      - internal
    environment:
      - PATH_TO_CONFIG=/src/setup.cfg
    volumes:
      - ${PWD}/docker-config:/src
      - ${PWD}/config/certs/socket:/var
    labels:
      - traefik.http.routers.core-socket-stage-router.tls=true
      - traefik.http.routers.core-socket-stage-router.entrypoints=secure
      - traefik.http.routers.core-socket-stage-router.rule=Host(`websocket.localhost`)
traefik.yml
entryPoints:
  insecure:
    address: :80
    http:
      redirections:
        entryPoint:
          to: secure
          scheme: https
  secure:
    address: :443

log:
  level: INFO

accessLog:
  filePath: "traefik-access.log"
  bufferingSize: 100

api:
  dashboard: true
  insecure: true

ping: {}

providers:
  file:
    filename: /config/dynamic.yml # traefik dynamic configuration
    watch: true # every time it changes, it will be reloaded
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: true
config/dynamic.yml
tls:
  stores:
    default:
      defaultCertificate:
        certFile: cert.crt
        keyFile: key.key
  certificates:
    - certFile: crt.crt
      keyFile: key.key
      stores:
        - default
      domains:
        - main: "localhost"
While looking at your configuration, the following doesn't fit:
The docker-compose project name becomes part of the domain names. The default is the parent folder name of your docker-compose.yaml. You didn't specify it here, so I assume it to be traefik. You can set it explicitly in the docker-compose call with docker-compose -p traefik up or via the environment variable COMPOSE_PROJECT_NAME.
You are using the domain name '.localhost', but you don't define the domain name explicitly. That means the default name is used, derived from the service name, the project name (the folder where the docker-compose file is stored), and the docker network name you attach to, following this pattern: servicename.projectname_networkname.
Use the attributes hostname and domainname to explicitly define a name (this only works for networks with internal=false).
With two network connections plus a domainname definition, you get the following domain names:
db.traefik_internal (internal only; db.localhost will not work)
dozzle.traefik_internal (internal only; dozzle.localhost will not work)
traefik.localhost
traefik.traefik_web
traefik.traefik_internal
websocket.localhost
websocket.traefik_web
websocket.traefik_internal
external=true just means that the network is created externally, by docker network create or by another docker-compose project. The main effect is that it is not deleted when doing docker-compose down. It has nothing to do with connectivity to the outside world.
To get an isolated internal network you have to use the option internal: true.
The option condition: service_healthy is no longer supported in version: "3.7", so either remove it (it doesn't work the way you expect anyway) or change the version to 2.4.
Here is my current version of the docker-compose.yaml:
version: "2.4"

networks:
  web:
  internal:
    internal: true

volumes:
  mysql_data:

services:
  traefik:
    image: traefik:v2.2.1
    container_name: traefik
    hostname: traefik
    domainname: localhost
    restart: always
    ports:
      - "80:80"
      - "443:443"
    expose:
      - 8080
    environment:
      - /var/run/docker.sock:/var/run/docker.sock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./config/:/config
      - ./traefik.yml:/traefik.yml
    networks:
      - web
      - internal
    labels:
      - traefik.http.routers.traefik.tls=true
      - traefik.http.routers.traefik.entrypoints=secure
      - traefik.http.routers.traefik.rule=Host(`traefik.localhost`)
      - traefik.http.routers.traefik.service=api@internal
  dozzle:
    image: amir20/dozzle:latest
    container_name: dozzle
    hostname: dozzle
    domainname: localhost
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    expose:
      - 8080
    labels:
      - traefik.http.routers.dozzle.tls=true
      - traefik.http.routers.dozzle.entrypoints=secure
      - traefik.http.routers.dozzle.rule=Host(`dozzle.traefik_internal`) || Host(`logs.localhost`)
    networks:
      - internal
  db:
    image: mysql:latest
    container_name: db
    hostname: db
    domainname: localhost
    environment:
      MYSQL_ROOT_PASSWORD: ########
    restart: always
    healthcheck:
      test: "exit 0"
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - '3306:3306'
    volumes:
      - mysql_data:/var/lib/mysql
    networks:
      - internal
  websocket:
    image: local-websocket-image
    container_name: websocket-stage
    hostname: websocket
    domainname: localhost
    restart: on-failure
    command: python server.py
    depends_on:
      db:
        condition: service_healthy
    expose:
      - 8080
    networks:
      - web
      - internal
    environment:
      - PATH_TO_CONFIG=/src/setup.cfg
    volumes:
      - ${PWD}/docker-config:/src
      - ${PWD}/config/certs/socket:/var
    labels:
      - traefik.http.routers.core-socket-stage-router.tls=true
      - traefik.http.routers.core-socket-stage-router.entrypoints=secure
      - traefik.http.routers.core-socket-stage-router.rule=Host(`websocket.localhost`)
Related
I have a Django REST service and a Flask service that acts as a broker for the application. They are separate projects, each running in its own Docker container.
I'm able to POST a product on the Django service that is consumed by the Flask service; however, I cannot reach the Django service from Flask.
These containers are running on the same network, and I already tried Thomasleveil's suggestions, including docker-host by qoomon.
The error received by the request is the same as before I tried to forward the traffic; the difference is that now the request hangs for a while before failing.
The error is as follows:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.17.0.1', port=8000): Max retries exceeded with url: /api/user (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f0039388340>: Failed to establish a new connection: [Errno 110] Connection timed out'))
The request I'm trying to make is a POST at /api/products/1/like. At the moment, no body is required.
Here is how I'm doing the POST with Flask, where the IP is the Docker IP:
@app.route("/api/products/<int:id>/like", methods=["POST"])
def like(id):
    req = requests.get("http://172.17.0.1:8000/api/user")
    json = req.json()
    try:
        product_user = ProductUser(user_id=json["id"], product=id)
        db.session.add(product_user)
        db.session.commit()
        publish("product_liked", id)
    except:
        abort(400, "You already liked this product")
    return jsonify({
        "message": "success"
    })
Django's docker compose file (please ignore the service tcp_message_emitter):
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
  dockerhost:
    image: qoomon/docker-host
    cap_add:
      - NET_ADMIN
      - NET_RAW
    restart: on-failure
    networks:
      - backend
  tcp_message_emitter:
    image: alpine
    depends_on:
      - dockerhost
    command: [ "sh", "-c", "while :; do date; sleep 1; done | nc 'dockerhost' 2323 -v"]
    networks:
      - backend
networks:
  backend:
    driver: bridge
Flask's docker compose file:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python main.py"
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
At this point, I know that I am missing some detail or that I have a misconfiguration.
You can have a look at the repo here: https://github.com/lfbatista/ms-ecommerce
Any help would be appreciated.
These containers are not actually on the same network. To put two containers from different docker-compose projects into one network you need to 'import' an existing network in one of the files. Here's how you can do it:
# first project
networks:
  internal:
  shared:
---
# second project
networks:
  internal:
  shared:
    # This is where all the magic happens:
    external: true # Means do not create a network, import existing.
    name: admin_shared # Name of the existing network. It's usually made of <folder_name>_<network_name>.
Do not forget to put all services into the same internal network, or they will not be able to communicate with each other. If you forget, Docker will create a <folder_name>_default network and put any container with no explicitly assigned network there. You can assign networks like this:
services:
  backend:
    ...
    networks:
      internal:
      # Since this service needs access to the service in another project,
      # you put two networks here.
      shared:
        # This part is relevant for this specific question because
        # both projects have services with identical names. To avoid
        # a mess with DNS names you can add an additional name to the
        # service using 'aliases'. This particular service will be
        # available in the shared network as 'flask-backend'.
        aliases:
          - flask-backend
  db:
    ...
    # You can also assign networks as an array if you need no extra configuration:
    networks:
      - internal
And here are the files from your repository. Instead of an IP address, one service can reach the other via flask-backend or django-backend respectively. Note that I cut out those strange 'host network containers'.
admin/docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
    networks:
      internal:
      shared:
        aliases:
          - django-backend
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
    networks:
      - internal
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
    networks:
      - internal
networks:
  internal:
  shared:
main/docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python main.py"
    networks:
      internal:
      shared:
        aliases:
          - flask-backend
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    networks:
      - internal
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    networks:
      - internal
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
networks:
  internal:
  shared:
    external: true
    name: admin_shared
I am running Jupyter in Docker. From my Jupyter notebook I want to connect to an API that is accessible via the URL http://localhost:9000/api/v1/data.
If I execute the lines below in my local Jupyter notebook (i.e. not in Docker), I connect successfully.
import requests
r = requests.get('http://localhost:9000/api/v1/data')
r.status_code
However, the same lines return an error when executed in the Jupyter notebook running in Docker.
The resulting error reads:
ConnectionError: HTTPConnectionPool(host='localhost', port=9000): Max retries exceeded with url: /api/v1/data (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fca3b20bfd0>: Failed to establish a new connection: [Errno 111] Connection refused',))
I tried to map ports 9000 to 9000 in the YML file that is used to run the container.
# Copyright 2019 QuantRocket LLC - All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Features
# - for local deployment
# - includes all services
# - pinned to the current production versions
# - Sends anonymous crash reports. To disable, edit flightlog
# env to: SEND_CRASH_REPORTS: 'false'
x-quantrocket-version: '1.9.0'
x-quantrocket-deploy-target: 'local'
version: '2.4' # Docker Compose file version
volumes:
  codeload:
  db:
  flightlog:
  settings:
  zipline:
services:
  account:
    image: 'quantrocket/account:1.9.0'
    volumes:
      - 'db:/var/lib/quantrocket'
    depends_on:
      - db
    restart: always
  blotter:
    image: 'quantrocket/blotter:1.9.0'
    volumes:
      - 'db:/var/lib/quantrocket'
    depends_on:
      - db
    restart: always
  codeload:
    image: 'quantrocket/codeload:1.9.0'
    environment:
      GIT_URL: 'https://github.com/quantrocket-codeload/quickstart.git'
      GIT_BRANCH: 1.9
    volumes:
      - 'codeload:/codeload'
    restart: always
  countdown:
    image: 'quantrocket/countdown:1.9.0'
    volumes:
      - 'settings:/etc/opt/quantrocket'
      - 'codeload:/codeload'
    restart: always
  db:
    image: 'quantrocket/db:1.9.0'
    volumes:
      - 'db:/var/lib/quantrocket'
      - 'settings:/etc/opt/quantrocket'
    depends_on:
      - postgres
    restart: always
  flightlog:
    image: 'quantrocket/flightlog:1.9.0'
    volumes:
      - 'flightlog:/var/log/flightlog'
      - 'settings:/etc/opt/quantrocket'
    restart: always
    environment:
      SEND_CRASH_REPORTS: 'true'
  fundamental:
    image: 'quantrocket/fundamental:1.9.0'
    volumes:
      - 'db:/var/lib/quantrocket'
    depends_on:
      - db
    restart: always
  history:
    image: 'quantrocket/history:1.9.0'
    volumes:
      - 'db:/var/lib/quantrocket'
    depends_on:
      - db
    restart: always
  houston:
    image: 'quantrocket/houston:1.9.0'
    ports:
      - '1969:80'
    restart: always
  ibg1:
    image: 'quantrocket/ibg:1.9.972.0'
    environment:
      INI_SETTINGS: '--ExistingSessionDetectedAction=primary'
      API_SETTINGS: '--readOnlyApi=false --masterClientID=6000 --exposeEntireTradingSchedule=true'
    volumes:
      - 'settings:/etc/opt/quantrocket'
    restart: always
  jupyter:
    image: 'quantrocket/jupyter:1.9.0'
    ports:
      - '9000:9000'
    volumes:
      - 'codeload:/codeload'
      - 'db:/var/lib/quantrocket'
    restart: always
  launchpad:
    image: 'quantrocket/launchpad:1.9.0'
    volumes:
      - 'codeload:/codeload'
    restart: always
  license-service:
    image: 'quantrocket/license-service:1.9.0'
    volumes:
      - 'settings:/etc/opt/quantrocket'
    restart: always
  logspout:
    image: 'gliderlabs/logspout:latest'
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock'
    depends_on:
      - houston
      - flightlog
    command: 'syslog+udp://flightlog:9021,syslog://logs5.papertrailapp.com:47405?filter.name=*houston*'
    restart: always
  master:
    image: 'quantrocket/master:1.9.0'
    volumes:
      - 'db:/var/lib/quantrocket'
      - 'codeload:/codeload'
    depends_on:
      - db
    restart: always
  moonshot:
    image: 'quantrocket/moonshot:1.9.0'
    volumes:
      - 'codeload:/codeload'
    restart: always
  postgres:
    image: 'quantrocket/postgres:1.9.0'
    volumes:
      - 'db:/var/lib/quantrocket'
    environment:
      PGDATA: '/var/lib/quantrocket/postgresql/data/pg_data'
    restart: always
  realtime:
    image: 'quantrocket/realtime:1.9.0'
    volumes:
      - 'db:/var/lib/quantrocket'
    depends_on:
      - db
    restart: always
  satellite:
    image: 'quantrocket/satellite:1.9.0'
    volumes:
      - 'codeload:/codeload'
    depends_on:
      - codeload
    restart: always
  theia:
    image: 'quantrocket/theia:1.9.0'
    volumes:
      - 'codeload:/codeload'
    depends_on:
      - codeload
    restart: always
  zipline:
    image: 'quantrocket/zipline:1.9.0'
    volumes:
      - 'codeload:/codeload'
      - 'zipline:/root/.zipline'
    restart: always
But the error remains.
When communicating between Docker containers within the same compose project, you can reference another container by using its service name as the hostname.
So in your case:
import requests
r = requests.get('http://jupyter:9000/api/v1/data')
r.status_code
I am trying to run a docker-compose file that will give me a Zookeeper ensemble managing my SolrCloud. Everything runs, and every way I've checked inside the containers, my Zookeeper ensemble appears to be up and running. Yet every time I try to connect, I get an error that the name or service could not be found.
I've tried different docker-compose.ymls, changing the names of my containers in Docker, changing the ports in the connection string, changing the hostname in the connection string, and using localhost for the connection string.
solr1:
  container_name: solr1
  image: solr:5-slim
  ports:
    - "8981:8983"
  environment:
    - ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181
  networks:
    - solr
  depends_on:
    - zoo1
    - zoo2
    - zoo3
  volumes:
    - data:/var/solr
  command: >
    sh -c "solr-precreate users"
solr2:
  image: solr:5-slim
  container_name: solr2
  ports:
    - "8982:8983"
  environment:
    - ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181
  networks:
    - solr
  depends_on:
    - zoo1
    - zoo2
    - zoo3
solr3:
  image: solr:5-slim
  container_name: solr3
  ports:
    - "8983:8983"
  environment:
    - ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181
  networks:
    - solr
  depends_on:
    - zoo1
    - zoo2
    - zoo3
zoo1:
  image: zookeeper:3.4
  container_name: zoo1
  restart: always
  hostname: zoo1
  ports:
    - 2181:2181
  environment:
    ZOO_MY_ID: 1
    ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  networks:
    - solr
zoo2:
  image: zookeeper:3.4
  container_name: zoo2
  restart: always
  hostname: zoo2
  ports:
    - 2182:2181
  environment:
    ZOO_MY_ID: 2
    ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  networks:
    - solr
zoo3:
  image: zookeeper:3.4
  container_name: zoo3
  restart: always
  hostname: zoo3
  ports:
    - 2183:2181
  environment:
    ZOO_MY_ID: 3
    ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  networks:
    - solr
And then my Python code is:
import pysolr

def connect_solrcloud():
    zookeeper = pysolr.ZooKeeper("zoo1:2181,zoo2:2181,zoo3:2181")
    solr = pysolr.SolrCloud(zookeeper, "users")
    solr.ping()

connect_solrcloud()
I would expect the Zookeeper object to connect, letting me access the "users" core I created in my Docker container. Instead I get an error saying:
WARNING:: Cannot resolve zoo1: [Errno -2] Name or service not known
WARNING:: Cannot resolve zoo2: [Errno -2] Name or service not known
WARNING:: Cannot resolve zoo3: [Errno -2] Name or service not known
I don't know if this is a docker-compose issue or the way I set Zookeeper up. No one else online seems to have this exact problem; they either have trouble standing Zookeeper up, or issues after it's connected.
Found my issue. My web container did not include
networks:
  - solr
so it wasn't able to access zookeeper.
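For completeness, the fix looks like this in compose terms. The `web` service shape below is a hypothetical sketch (the question doesn't show that service); only the `networks` key is the actual fix:

```yaml
web:
  build: .            # hypothetical build context for the web container
  depends_on:
    - zoo1
    - zoo2
    - zoo3
  networks:
    - solr            # joining this network lets the container resolve zoo1/zoo2/zoo3
```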
I need your help. I'm trying to use memcached with docker-compose, but I'm getting None. I set up port forwarding between web and memcached; the cache is given the default port 11211.
What am I doing wrong?
View.py example
from django.core.cache import cache

def show_category(requests):
    categorys_name = CategoryNews.objects.all()
    cache_key = 'category_names'
    cache_time = 1800
    result = cache.get(cache_key)
    print(result)
    if result is None:
        result = categorys_name
        cache.set(cache_key, result, cache_time)
        return render(requests, 'home_app/category.html', {'categorys_name': categorys_name})
    return print('No none')
settings
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '0.0.0.0:11211',
    }
}
Docker-compose
version: '3'
services:
  db:
    restart: always
    image: postgres
  web:
    restart: always
    working_dir: /var/app
    build: ./testsite
    entrypoint: ./docker-entrypoint.sh
    volumes:
      - ./testsite:/var/app
    expose:
      - "80"
      - "11211"
    depends_on:
      - db
  ngnix:
    restart: always
    build: ./ngnix
    ports:
      - "80:80"
    volumes:
      - ./testsite/static:/staticimage
      - ./testsite/media:/mediafilesh
    depends_on:
      - web
  memcached:
    image: memcached:latest
    entrypoint:
      - memcached
      - -m 64
    ports:
      - "11211:11211"
    depends_on:
      - web
You need to expose the port on your memcached container and use memcached as the LOCATION in your cache config. I think you have a misconception about expose and ports:
expose: Exposes ports without publishing them to the host machine (your computer); they'll only be accessible to linked services (between containers). Only the internal port can be specified.
ports: Publishes ports. Either specify both ports (HOST:CONTAINER), or just the container port (a random host port will be chosen). In other words, it redirects a host port (your computer) to a container port.
So, in your particular example this should help:
Docker-compose.yml
version: '3'
services:
  db:
    restart: always
    image: postgres
  web:
    restart: always
    working_dir: /var/app
    build: ./testsite
    entrypoint: ./docker-entrypoint.sh
    volumes:
      - ./testsite:/var/app
    expose:
      - "80"
    depends_on:
      - db
  ngnix:
    restart: always
    build: ./ngnix
    ports:
      - "80:80"
    volumes:
      - ./testsite/static:/staticimage
      - ./testsite/media:/mediafilesh
    depends_on:
      - web
  memcached:
    image: memcached:latest
    entrypoint:
      - memcached
      - -m 64
    ports:
      - "11211:11211" # This is only needed if you want to connect from your host to the container
    expose:
      - "11211"
    depends_on:
      - web
Your cache settings:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'memcached:11211',
    }
}
Note that memcached here is a reference to the memcached service in your docker-compose.yml file. If you named your memcached service something like my_project_memcached, you would need to use that name in your settings file:
my_project_memcached:
  image: memcached:latest
  entrypoint:
    - memcached
    - -m 64
  ports:
    - "11211:11211"
  expose:
    - "11211"
  depends_on:
    - web
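To keep the Django settings and the compose file from drifting apart, one option is to read the service name from an environment variable. This is a sketch of that idea; MEMCACHED_HOST is my own hypothetical variable (set via the compose `environment:` section), not a Django or memcached built-in:

```python
import os

# Hypothetical convention: docker-compose sets MEMCACHED_HOST to the
# memcached service name; fall back to "memcached" for the setup above.
MEMCACHED_HOST = os.environ.get("MEMCACHED_HOST", "memcached")

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        "LOCATION": f"{MEMCACHED_HOST}:11211",
    }
}
```

If you later rename the service, only the environment variable has to change, not the settings module.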
What I am trying to do: run Airflow in Docker with Celery.
My issue: my Celery workers are in containers and I don't know how to scale them.
My docker-compose file:
version: '2'
services:
  mysql:
    image: mysql:latest
    restart: always
    ports:
      - "3306:3306"
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=true
      - MYSQL_USER=airflow
      - MYSQL_PASSWORD=airflow
      - MYSQL_DATABASE=airflow
    volumes:
      - mysql:/var/lib/mysql
  rabbitmq:
    image: rabbitmq:3-management
    restart: always
    ports:
      - "15672:15672"
      - "5672:5672"
      - "15671:15671"
    environment:
      - RABBITMQ_DEFAULT_USER=airflow
      - RABBITMQ_DEFAULT_PASS=airflow
      - RABBITMQ_DEFAULT_VHOST=airflow
    volumes:
      - rabbitmq:/var/lib/rabbitmq
  webserver:
    image: airflow:ver5
    restart: always
    volumes:
      - ~/airflow/dags:/usr/local/airflow/dags
      - /opt/scripts:/opt/scripts
    environment:
      - AIRFLOW_HOME=/usr/local/airflow
    ports:
      - "8080:8080"
    links:
      - mysql:mysql
      - rabbitmq:rabbitmq
      - worker:worker
      - scheduler:scheduler
    depends_on:
      - mysql
      - rabbitmq
      - worker
      - scheduler
    command: webserver
    env_file: ./airflow.env
  scheduler:
    image: airflow:ver5
    restart: always
    volumes:
      - ~/airflow/dags:/usr/local/airflow/dags
      - /opt/scripts:/opt/scripts
    environment:
      - AIRFLOW_HOME=/usr/local/airflow
    links:
      - mysql:mysql
      - rabbitmq:rabbitmq
    depends_on:
      - mysql
      - rabbitmq
    command: scheduler
    env_file: ./airflow.env
  worker:
    image: airflow:ver5
    restart: always
    volumes:
      - ~/airflow/dags:/usr/local/airflow/dags
      - /opt/scripts:/opt/scripts
    environment:
      - AIRFLOW_HOME=/usr/local/airflow
    ports:
      - "8793:8793"
    links:
      - mysql:mysql
      - rabbitmq:rabbitmq
    depends_on:
      - mysql
      - rabbitmq
    command: worker
    env_file: ./airflow.env
So I run the docker-compose command using the above file, and it starts one worker instance on port 8793 on localhost, since I am mapping the Docker port to localhost. Now I want to scale the number of workers, and to do that I use the following command:
docker-compose -f docker-compose.yml scale worker=5
but that errors out because an instance of worker is already bound to 8793. Is there a way to dynamically allocate ports to new worker containers as I scale up?
You could allow your worker nodes to expose the worker port to the host machine on a random port number:
worker:
  image: airflow:ver5
  restart: always
  volumes:
    - ~/airflow/dags:/usr/local/airflow/dags
    - /opt/scripts:/opt/scripts
  environment:
    - AIRFLOW_HOME=/usr/local/airflow
  ports:
    - "8793"
  links:
    - mysql:mysql
    - rabbitmq:rabbitmq
  depends_on:
    - mysql
    - rabbitmq
  command: worker
  env_file: ./airflow.env
Setting a ports: entry to a single value like - 80 exposes container port 80 on a random port of the host.
And because Docker Compose uses networks, you can actually omit the publish step altogether and it will still work: simply remove ports: from the worker service.
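Putting both points together, a worker definition that scales cleanly could be sketched like this (same service as above, abbreviated, with the host port binding dropped entirely):

```yaml
worker:
  image: airflow:ver5
  restart: always
  environment:
    - AIRFLOW_HOME=/usr/local/airflow
  # No "ports:" mapping: other services reach workers over the compose
  # network by service name, so scaled replicas cannot collide on a host port.
  command: worker
  env_file: ./airflow.env
```

With that in place, `docker-compose up -d --scale worker=5` (or `docker-compose scale worker=5` on older Compose versions) starts five workers without port conflicts.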