I'm trying to use HAProxy as a load balancer for my Python web app, which uses Redis. I'm working on transitioning the docker run commands to docker-compose using a docker-compose.yml, but I'm running into issues.
Below are my current docker run commands, which work perfectly fine:
docker run --name sdnapi-redis -v /opt/redis:/data -p 6379:6379 -d redis redis-server --appendonly yes
docker run -d --name sdnapi1 --link sdnapi-redis:redis mycomp/sdnapi
docker run -d --name sdnapi2 --link sdnapi-redis:redis mycomp/sdnapi
docker run -d --name sdnapilb -p 80:80 -p 443:443 -p 1936:1936 -e DEFAULT_SSL_CERT="$(awk 1 ORS='\\n' ./certs/cert.pem)" -v /certs/:/certs/ --link sdnapi1:sdnapi1 --link sdnapi2:sdnapi2 dockercloud/haproxy
Here is my docker-compose.yml, which should replicate the same functionality:
version: '2'
services:
  sdnapi:
    image: mycomp/sdnapi
    links:
      - sdnapi-redis:redis
  sdnapilb:
    image: dockercloud/haproxy:1.2.1
    environment:
      - DEFAULT_SSL_CERT
    volumes:
      - /certs/:/certs/
    ports:
      - "80:80"
      - "443:443"
      - "1936:1936"
    links:
      - sdnapi:sdnapi
  sdnapi-redis:
    image: redis
    volumes:
      - /opt/redis:/data
    ports:
      - "6379:6379"
    command: redis-server --appendonly yes
When I run the docker run commands, these are the sdnapilb logs:
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
log-send-hostname
maxconn 4096
pidfile /var/run/haproxy.pid
user haproxy
group haproxy
daemon
stats socket /var/run/haproxy.stats level admin
ssl-default-bind-options no-sslv3
ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:DHE-DSS-AES128-SHA:DES-CBC3-SHA
defaults
balance roundrobin
log global
mode http
option redispatch
option httplog
option dontlognull
option forwardfor
timeout connect 5000
timeout client 50000
timeout server 50000
listen stats
bind :1936
mode http
stats enable
timeout connect 10s
timeout client 1m
timeout server 1m
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
stats auth stats:stats
frontend default_frontend
bind :80
bind :443 ssl crt /certs/
reqadd X-Forwarded-Proto:\ https
maxconn 4096
defcon 1
default_backend default_service
When I run the docker-compose.yml with "docker-compose up -d", I lose the frontend section:
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
log-send-hostname
maxconn 4096
pidfile /var/run/haproxy.pid
user haproxy
group haproxy
daemon
stats socket /var/run/haproxy.stats level admin
ssl-default-bind-options no-sslv3
ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:DHE-DSS-AES128-SHA:DES-CBC3-SHA
defaults
balance roundrobin
log global
mode http
option redispatch
option httplog
option dontlognull
option forwardfor
timeout connect 5000
timeout client 50000
timeout server 50000
listen stats
bind :1936
mode http
stats enable
timeout connect 10s
timeout client 1m
timeout server 1m
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
stats auth stats:stats
Can you see any issues with either setup? I want to use docker-compose for its ability to scale.
I figured out the issue...
The docker-compose.yml has issues with the links: the format for links is service_name:alias.
The issue with mine is that even though my service names were correct, the aliases were incorrect, which caused docker-compose to fail without an actual error. Since the alias doesn't exist, it just doesn't link the container, and thus there is no frontend.
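To make the alias's role concrete: the alias becomes the hostname that Docker injects into the linking container, so it has to match whatever hostname the application (or the haproxy image) actually resolves. As a rough sketch, assuming the sdnapi app uses redis-py (the connection code below is hypothetical, not the real mycomp/sdnapi source), the sdnapi-redis:redis link is what makes the host name "redis" resolvable:

# Hypothetical sdnapi code: "redis" resolves only because the compose file
# links the sdnapi-redis service under the alias "redis".
import redis

r = redis.StrictRedis(host="redis", port=6379, db=0)
r.set("healthcheck", "ok")
print(r.get("healthcheck"))

If the alias in the compose file were, say, sdnapi-redis instead of redis, this lookup would fail even though the Redis container itself is running.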
I am trying to run my Django/Postgres application with Docker Compose. When I run docker compose up -d, I get the following logs on my Postgres container running on port 5432:
2023-02-18 00:10:25.049 UTC [1] LOG: starting PostgreSQL 13.8 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 11.2.1_git20220219) 11.2.1 20220219, 64-bit
2023-02-18 00:10:25.049 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2023-02-18 00:10:25.049 UTC [1] LOG: listening on IPv6 address "::", port 5432
2023-02-18 00:10:25.052 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2023-02-18 00:10:25.054 UTC [22] LOG: database system was shut down at 2023-02-18 00:10:06 UTC
2023-02-18 00:10:25.056 UTC [1] LOG: database system is ready to accept connections
It appears my Postgres container is working properly. However, my Python/Django container has the following logs:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    command: sh -c "python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    env_file:
      - ./xi/.env
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_DB=***
      - POSTGRES_USER=***
      - POSTGRES_PASSWORD=***
    volumes:
      - dev-db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
volumes:
  dev-db-data:
Dockerfile:
FROM python:3.8
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /code
# install psycopg2 dependencies
RUN apt-get update
RUN apt-get install nano python3-dev libpq-dev -y
COPY requirements-prod.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
I must be missing something small that would allow the Python container to communicate with the Postgres container.
Also, a few additional questions:
What does it mean that the container is "listening on IPv4 address '0.0.0.0', port 5432"? To my understanding, 0.0.0.0 encapsulates all IP addresses, including 127.0.0.1, so in this case that shouldn't be an issue (correct me if I'm wrong).
I have been struggling with this for a few days. I have followed the getting-started docs as well as the Python usage guides in the Docker docs, and I feel that I understand everything, but I am unable to debug my containers efficiently. What additional supplemental knowledge can I acquire to help me debug a container with the same level of comfort as I would a Python script?
I tried a few things:
swapping env_file with the credentials hard-coded in
changing python to python3
removing sh -c
I tried building my database first with docker-compose up -d --build db and then building my web app with docker-compose up -d --build web, and the issue persisted.
I tried everything with the environment variables, and it appears improper credentials are not the issue. Running python manage.py runserver without Docker, it successfully connects to the database. There are some similar Stack Overflow questions, but I have tried their solutions and they do not work.
Part of my issue is that I don't know what to try or how to efficiently debug Docker containers yet (hence the question above).
What have you set as your HOST variable in DATABASES['default'] in settings.py? If it's '127.0.0.1', try changing it to 'db' to match the compose service name.
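For reference, a minimal settings.py sketch of that change, assuming the credentials come from the env_file (the environment variable names here are placeholders, not taken from the question):

import os

# settings.py sketch: point Django at the compose service name "db".
# 127.0.0.1 inside the web container is the web container's own loopback,
# not the Postgres container.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB"),
        "USER": os.environ.get("POSTGRES_USER"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD"),
        "HOST": "db",   # the service name from docker-compose.yml
        "PORT": 5432,
    }
}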
I have created a simple Django application that has one endpoint, /health/live, which returns a success message upon receiving a GET request.
I run the application locally with python manage.py runserver on port 8000.
I also have a docker-compose file and a Dockerfile, shown below:
FROM python
ENV PYTHONUNBUFFERED 1
RUN mkdir /inventory
WORKDIR /inventory
COPY . /inventory
WORKDIR /inventory
RUN pip install -r requirements.txt
and
version: '3'
networks:
  kong-net:
    name: kong-net
    driver: bridge
    ipam:
      config:
        - subnet: 172.1.1.0/24
services:
  inventory:
    container_name: inventory
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      kong-net:
        ipv4_address: 172.1.1.11
    ports:
      - "8000:8000"
    environment:
      DEBUG: 'true'
    command: python manage.py runserver 0.0.0.0:8000
I then run docker-compose up (I don't detach it, so I can see the logs).
They both work. I send a GET request to http://127.0.0.1:8000/health/live:
Based on the logs I see, the request goes through the service running directly on the system and not through the Docker container.
If I stop the service running directly without Docker and send the request, the request goes through the one deployed in Docker.
Is there a reason this is happening? Why does the first one take priority?
And shouldn't I see an error when trying to run the Docker container or start the application locally? Because they are both listening on port 8000!
I am having issues getting data back from a docker-selenium container via a Flask application (also dockerized).
When I have the Flask application running in one container, I get the following error on http://localhost:5000, which goes to the Selenium driver using a Remote driver running on http://localhost:4444/wd/hub.
The error that is generated is:
urllib.error.URLError: <urlopen error [Errno 99] Cannot assign requested address>
I have created a github repo with my code to test, see here.
My docker-compose file below seems ok:
version: '3.5'
services:
  web:
    volumes:
      - ./app:/app
    ports:
      - "5000:80"
    environment:
      - FLASK_APP=main.py
      - FLASK_DEBUG=1
      - 'RUN=flask run --host=0.0.0.0 --port=80'
    command: flask run --host=0.0.0.0 --port=80
    # Infinite loop, to keep it alive, for debugging
    # command: bash -c "while true; do echo 'sleeping...' && sleep 10; done"
  selenium-hub:
    image: selenium/hub:3.141
    container_name: selenium-hub
    ports:
      - 4444:4444
  chrome:
    shm_size: 2g
    volumes:
      - /dev/shm:/dev/shm
    image: selenium/node-chrome:3.141
    # image: selenium/standalone-chrome:3.141.59-copernicium
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
What is strange is that when I run the Flask application in PyCharm, with the Selenium grid up in Docker, I am able to get the data back through http://localhost:5000. The issue only happens when the Flask app is running inside Docker.
Thanks for the help in advance; let me know if you require further information.
Edit
So I amended my docker-compose.yml file to include a network (updated the code on GitHub). As I have the Flask app code running in debug mode and in a volume, any update to the code results in a refresh of the debugger.
I ran docker network inspect on the created network and found the internal Docker IP address of selenium-hub. I updated the app/utils.py code in get_driver() to use that IP address in command_executor rather than localhost. Saving and re-running from my browser results in a successful return of data.
But I don't understand why http://localhost:4444/wd/hub would not work; the Docker containers should see each other on the network as localhost, right?
the docker containers should see each other in the network as localhost, right?
No, this is only true when they use host networking and expose ports through the host.
When you have services interacting with each other in docker-compose (or a stack), the services should refer to each other by service name. E.g., you would reach the hub container at http://selenium-hub:4444/wd/hub, and your Flask application could be reached by another container on the same network at http://web.
You may be confused because, if you normally run containers with host networking, selenium-hub is also exposed on the host on the same port 4444, so a container started with host networking could use http://localhost:4444 just fine there as well.
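In the Flask container, that would look roughly like the sketch below (a minimal sketch for Selenium 3.x; the exact structure of get_driver() in the linked repo is an assumption):

# Sketch: build the Remote driver against the hub's compose service name,
# not localhost, which points at the Flask container itself.
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

def get_driver():
    return webdriver.Remote(
        command_executor="http://selenium-hub:4444/wd/hub",
        desired_capabilities=DesiredCapabilities.CHROME,
    )

driver = get_driver()
driver.get("https://example.com")
print(driver.title)
driver.quit()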
Could it potentially be a port-in-use issue related to the execution?
See:
Python urllib2: Cannot assign requested address
I'd like to have two Docker containers, which are defined in the same docker-compose.yaml file, share a network and interact with each other's exposed ports. I'm running all of this on Docker for Mac.
In order to do so, I've set up a couple of Docker containers running a tiny Flask server that can either return a "Hello" or make a request to another server (see below for details). So far, I've been unable to get the two apps to communicate with each other.
What I've tried so far:
exposing the relevant ports
publishing the ports and mapping them 1:1 with the host
For Flask, using both localhost and 0.0.0.0 as the --host arg
curl from one container to another (using both localhost:<other_container_port> and 0.0.0.0:<other_container_port>)
Using the implicit network as per the docs
Explicit network definition
All of the above examples give me a Connection Refused error, so I feel like I'm missing something basic about Docker networking.
The Networking in Compose doc mentions the following:
When you run docker-compose up, the following happens:
...
A container is created using db’s configuration. It joins the network myapp_default under the name db.
And their example appears to have all the separate services be able to communicate without any network definitions, which leads me to believe that I probably should not need to define a network either.
Below is my docker-compose.yaml file - all the files can be found at this gist:
version: '3'
services:
  receiver:
    build: ./app
    # Tried with/without expose
    expose:
      - 3000
    # Tried with/without ports
    ports:
      - 3000:3000
    # Tried with/without 0.0.0.0
    command: "--host 0.0.0.0 --port 3000"
    # Tried with/without explicit network
    networks:
      - mine
  requester:
    build: ./app
    expose:
      - 4000
    ports:
      - 4000:4000
    # This one's ip is 0.0.0.0, so we can access from host
    command: "--host 0.0.0.0 --port 4000"
    networks:
      - mine
networks:
  mine: {}
The app.py file:
from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from {}".format(request.host)

@app.route("/request/<int:port>")
def doPing(port):
    location = "http://localhost:{}/".format(port)
    return requests.get(location, timeout=5).content
In docker-compose, services that are on the same network can access each other by name; you don't even have to expose the ports to the host. So your docker-compose.yaml can be simplified to:
version: '3'
services:
  receiver:
    build: ./app
    command: "--host 0.0.0.0 --port 3000"
  requester:
    build: ./app
    command: "--host 0.0.0.0 --port 4000"
and inside the requester container you can access the other one with
ping receiver
which resolves the name. You can also verify the port is open, for example with netcat:
nc -z receiver 3000 -v
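Applied to the app.py from the question, the request endpoint would then target the other service by name instead of localhost; a small sketch (keeping the original route shape, with only the hostname swapped to the receiver service):

# Sketch of the adjusted endpoint: call the other service by its compose
# service name ("receiver") rather than localhost.
import requests
from flask import Flask

app = Flask(__name__)

@app.route("/request/<int:port>")
def doPing(port):
    location = "http://receiver:{}/".format(port)
    return requests.get(location, timeout=5).content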
I have a dockerized setup running a Django app within which I use Celery tasks. Celery uses Redis as the broker.
Versioning:
Docker version 17.09.0-ce, build afdb6d4
docker-compose version 1.15.0, build e12f3b9
Django==1.9.6
django-celery-beat==1.0.1
celery==4.1.0
celery[redis]
redis==2.10.5
Problem:
My celery workers appear to be unable to connect to the redis container located at localhost:6379. I am able to telnet into the redis server on the specified port. I am able to verify redis-server is running on the container.
When I manually connect to the Celery Docker instance and attempt to create a worker using the command celery -A backend worker -l info, I get the notice:
[2017-11-13 18:07:50,937: ERROR/MainProcess] consumer: Cannot connect to redis://localhost:6379/0: Error 99 connecting to localhost:6379. Cannot assign requested address..
Trying again in 4.00 seconds...
Notes:
I am able to telnet into the redis container on port 6379. On the redis container, redis-server is running.
Is there anything else that I'm missing? I've gone pretty far down the rabbit hole, but feel like I'm missing something really simple.
DOCKER CONFIG FILES:
docker-compose.common.yml here
docker-compose.dev.yml here
When you use docker-compose, you aren't going to be using localhost for inter-container communication; you would be using the compose-assigned hostname of the container. In this case, the hostname of your redis container is redis. The top-level elements under services: are your default host names.
So for celery to connect to redis, you should try redis://redis:6379/0. Since the protocol and the service name are the same, I'll elaborate a little more: if you named your redis service "butter-pecan-redis" in your docker-compose, you would instead use redis://butter-pecan-redis:6379/0.
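As a concrete sketch (the project name "backend" is taken from the worker command; everything else here is an assumption about your setup), the Celery app would point its broker at that service name:

# Sketch: use the "redis" service name from docker-compose as the broker
# host instead of localhost.
from celery import Celery

app = Celery(
    "backend",
    broker="redis://redis:6379/0",
    backend="redis://redis:6379/0",
)

@app.task
def ping():
    return "pong"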
Also, docker-compose.dev.yml doesn't appear to have celery and redis on a common network, which might cause them not to be able to see each other. I believe they need to share at least one network in common to be able to resolve their respective host names.
Networking in docker-compose has an example in the first handful of paragraphs, with a docker-compose.yml to look at.
You may need to add link and depends_on sections to your docker-compose file, and then reference the containers by their hostnames.
Updated docker-compose.yml:
version: '2.1'
services:
  db:
    image: postgres
  memcached:
    image: memcached
  redis:
    image: redis
    ports:
      - '6379:6379'
  backend-base:
    build:
      context: .
      dockerfile: backend/Dockerfile-base
    image: "/backend:base"
  backend:
    build:
      context: .
      dockerfile: backend/Dockerfile
    image: "/backend:${ENV:-local}"
    command: ./wait-for-it.sh db:5432 -- gunicorn backend.wsgi:application -b 0.0.0.0:8000 -k gevent -w 3
    ports:
      - 8000
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  celery:
    image: "/backend:${ENV:-local}"
    command: ./wait-for-it.sh db:5432 -- celery worker -E -B --loglevel=INFO --concurrency=1
    environment:
      C_FORCE_ROOT: "yes"
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  frontend-base:
    build:
      context: .
      dockerfile: frontend/Dockerfile-base
      args:
        NPM_REGISTRY: http://.view.build
        PACKAGE_INSTALLER: yarn
    image: "/frontend:base"
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  frontend:
    build:
      context: .
      dockerfile: frontend/Dockerfile
    image: "/frontend:${ENV:-local}"
    command: 'bash -c ''gulp'''
    working_dir: /app/user
    environment:
      PORT: 3000
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
Then configure the URLs to Redis, Postgres, Memcached, etc. with:
redis://redis:6379/0
postgres://user:pass@db:5432/database
The issue for me was that all of the containers, including celery, had a networks argument specified. If that is the case, the redis container must also have the same argument; otherwise you will get this error. See below, the fix was adding networks:
redis:
  image: redis:alpine
  ports:
    - '6379:6379'
  networks:
    - server