I'm trying to connect to a single-node Kafka server running in Docker, but I get the following error:
%3|1529395526.480|FAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv4#127.0.0.1:9092 failed: Connection refused
%3|1529395526.480|ERROR|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv4#127.0.0.1:9092 failed: Connection refused
%3|1529395526.480|ERROR|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: 1/1 brokers are down
The docker-compose.yml file contents are as follows:
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    network_mode: host
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    extra_hosts:
      - "moby:127.0.0.1"
  kafka:
    image: confluentinc/cp-kafka:latest
    network_mode: host
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: localhost:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_ADVERTISED_HOSTNAME: kafka
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    extra_hosts:
      - "moby:127.0.0.1"
  schema_registry:
    image: confluentinc/cp-schema-registry
    hostname: schema_registry
    depends_on:
      - zookeeper
      - kafka
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema_registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: '127.0.0.1:2181'
The Dockerfile contents are the following:
FROM python:2
WORKDIR /kafkaproducerapp
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./BackOffice_Producer.py" ]
What am I doing wrong?
You need this:
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
Otherwise the Kafka broker will tell anyone connecting that it can be found on localhost:9092, which is not going to work from the other containers. From your other containers, use kafka:29092 as the broker host and port, and zookeeper:2181 for ZooKeeper.
From your local host machine, you can access your broker on 9092 (assuming you expose the port).
Check out this docker-compose for a full example (from this repo)
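On the client side, the bootstrap address then depends on where the producer runs. Here's a minimal sketch using the confluent-kafka Python package (which wraps the librdkafka library from the error output); the topic name is a made-up placeholder:

from confluent_kafka import Producer

# From another container on the same Docker network, use the internal
# listener kafka:29092; from the host machine, use localhost:9092.
producer = Producer({"bootstrap.servers": "kafka:29092"})

# "backoffice-events" is a hypothetical topic name for illustration.
producer.produce("backoffice-events", value=b"hello")
producer.flush()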
Related
So I tried to connect my Docker app (python-1) to another Docker container (postgres), but it gives me this error:
psycopg.OperationalError: connection failed: Connection refused
python-1 | Is the server running on host "localhost" (127.0.0.1) and accepting
python-1 | TCP/IP connections on port 25432?
I've tried using condition: service_healthy, but it doesn't work. In fact, I already make sure the database is running before python-1 tries to connect, so the problem doesn't seem to be that the database hasn't started yet. I've also tried 0.0.0.0 and the postgres container's IP as the host, and that doesn't work either.
Here is my docker-compose.yml
version: "3.8"
services:
postgres:
image: postgres:14.6
ports:
- 25432:5432
healthcheck:
test: ["CMD-SHELL", "PGPASSWORD=${DB_PASSWORD}", "pg_isready", "-U", "${DB_USERNAME}", "-d", "${DB_NAME}"]
interval: 30s
timeout: 60s
retries: 5
start_period: 80s
environment:
POSTGRES_USER: ${DB_USERNAME}
POSTGRES_PASSWORD: ${DB_PASSWORD}
POSTGRES_DB: ${DB_NAME}
python:
build:
context: .
dockerfile: Dockerfile
depends_on:
postgres:
condition: service_healthy
command: flask --app app init-db && flask --app app run -h 0.0.0.0 -p ${PORT}
ports:
- ${PORT}:${PORT}
environment:
DB_HOST: localhost
DB_PORT: 25432
DB_NAME: ${DB_NAME}
DB_USERNAME: ${DB_USERNAME}
DB_PASSWORD: ${DB_PASSWORD}
And this is my Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.10
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
In a container, localhost means the container itself.
Different containers on a Docker network can communicate using their service names as host names. Also, on the Docker network, you use the container ports, not the published (mapped) port numbers.
So change your environment variables to
environment:
  DB_HOST: postgres
  DB_PORT: 5432
and you should be able to connect.
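For completeness, here's a minimal connection sketch with psycopg (the library from the error output), reading the same variables the compose file already injects; treat it as a sketch, since your actual connection code isn't shown:

import os

import psycopg

# DB_HOST is the service name "postgres" and DB_PORT is the container
# port 5432, not the published 25432.
conn = psycopg.connect(
    host=os.environ.get("DB_HOST", "postgres"),
    port=int(os.environ.get("DB_PORT", "5432")),
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USERNAME"],
    password=os.environ["DB_PASSWORD"],
)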
I have a Docker app that uses the compose.yml file below:
version: "3.9"
services:
flask:
build:
context: consumelogs/
dockerfile: Dockerfile.web
env_file:
- ./consumelogs/.env
ports:
- "5000:5000"
redis:
image: "redis:alpine"
ports:
- "6379:6379"
worker:
build:
context: consumelogs/
dockerfile: Dockerfile.worker
env_file:
- ./consumelogs/.env
depends_on:
- redis
And my SQL Server (which also runs as a Docker container) is started with the command
docker run -d --name sql_server -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=123456' --net slackbot-net -p 1433:1433 mcr.microsoft.com/mssql/server:2019-latest
where the default username is sa and my bridge network is slackbot-net. My Python app containers are started using the command
docker-compose up --build --force-recreate
I know that if I use localhost as the server name, it points to the container itself, but I've tried my machine's IP address, 0.0.0.0, 127.0.0.1, and even the IP address from the query
SELECT
CONNECTIONPROPERTY('net_transport') AS net_transport,
CONNECTIONPROPERTY('protocol_type') AS protocol_type,
CONNECTIONPROPERTY('auth_scheme') AS auth_scheme,
CONNECTIONPROPERTY('local_net_address') AS local_net_address,
CONNECTIONPROPERTY('local_tcp_port') AS local_tcp_port,
CONNECTIONPROPERTY('client_net_address') AS client_net_address
and I still get Cannot assign requested address. Is there something obvious that I'm missing?
Azure Data Studio can connect to the SQL Server container just fine using localhost and the username/password combo, and so can my Python tool using pytds, but not from within Docker.
Any help is greatly appreciated!
I have a python socketio application that works just fine when run locally. However, when I move it into Docker, external clients are unable to connect and throw this error:
socketio.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
The WebSocket server is really basic and listens on 0.0.0.0:8080. My client apps connect to localhost:8080. In my Docker container, I've exposed port 8080. I'm guessing that I'm setting up my container incorrectly.
Dockerfile:
FROM python:latest
WORKDIR /path/to/project
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
docker-compose.yml
version: "3.9"
services:
app:
build: .
working_dir: /path/to/project
stdin_open: true
tty: true
ports:
- "8080:8080"
volumes:
- type: bind
source: .
target: /path/to/project
I start up my container with docker-compose up, and I'm using Docker Desktop (if that helps). I use this environment for development, so I start my server inside the container with python my_server.py. The server starts successfully. What else am I missing?
I've tried the following based on what others have said about this problem online:

- Explicitly setting 0.0.0.0 to be the host
- Using EXPOSE 8080 in my Dockerfile
- Setting the network mode to host
Your container connectivity appears to be fine.
You have not specified any command to run in your container. To ensure your server is actually running on port 8080, you can add a CMD instruction in your Dockerfile or specify command in your docker-compose.yaml. This can be done with:
CMD ["python", "my_server.py"]
Or in the docker-compose.yaml file:
version: "3.9"
services:
app:
build: .
working_dir: /path/to/project
stdin_open: true
tty: true
command: python my_server.py #<== this overrides the Dockerfile CMD instruction
ports:
- "8080:8080"
volumes:
- type: bind
source: .
target: /path/to/project
You can learn more about command in docker-compose.yaml here.
That's also the recommended way to develop with Docker. Running commands manually inside running containers is not the right approach for dev work; instead, you rerun docker compose every time you test a change. That avoids a lot of problems like these where connectivity doesn't work.
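For reference, here's a minimal my_server.py sketch that binds to 0.0.0.0:8080, assuming the eventlet-based python-socketio server (your actual server code isn't shown):

import eventlet
import socketio

sio = socketio.Server()
app = socketio.WSGIApp(sio)

@sio.event
def connect(sid, environ):
    print("client connected:", sid)

# Bind to 0.0.0.0, not 127.0.0.1, so the published port 8080 is
# reachable from outside the container.
eventlet.wsgi.server(eventlet.listen(("0.0.0.0", 8080)), app)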
I'm building an API that fetches data from a MySQL database using Docker. I've tried everything and I always get this error: 2005 (HY000): Unknown MySQL server host 'db' (-3). Here is my docker compose file:
version: '3'
services:
  web:
    container_name: nginx
    image: nginx
    volumes:
      - ./nginx/nginx.conf:/tmp/nginx.conf
    environment:
      - FLASK_SERVER_ADDR=backend:9091
      - DB_PASSWORD=password
      - DB_USER=user
      - DB_HOST=db
    command: /bin/bash -c "envsubst < /tmp/nginx.conf > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
    ports:
      - 80:80
    networks:
      - local
    depends_on:
      - backend
  backend:
    container_name: app
    build: flask
    environment:
      - FLASK_SERVER_PORT=9091
      - DB_PASSWORD=password
    volumes:
      - flask:/tmp/app_data
    restart: unless-stopped
    networks:
      - local
    depends_on:
      - db
    links:
      - db
  db:
    container_name: db
    image: mysql
    restart: unless-stopped
    volumes:
      - ./mysql:/docker-entrypoint-initdb.d
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=database
      - MYSQL_USER=user
      - MYSQL_PASSWORD=password
    ports:
      - 3306:3306
networks:
  local:
volumes:
  flask:
    driver: local
  db:
    driver: local
Inside the flask directory I have my Dockerfile like so:
FROM ubuntu:latest
WORKDIR /src
RUN apt -y update
RUN apt -y upgrade
RUN apt install -y python3
RUN apt install -y python3-pip
COPY . .
RUN chmod +x -R .
RUN pip install -r requirements.txt --no-cache-dir
CMD ["python3","app.py"]
Finally, in my app.py file I try to connect to the database using the name of the Docker container. I have tried using localhost and it still gives me the same error. This is the part of the code I use to access it:
conn = mysql.connector.connect(
    host="db",
    port=3306,
    user="user",
    password="password",
    database="database")
What is it that I'm doing wrong?
The containers aren't all on the same networks: (the db service has no networks: block), which could be why you're having trouble.
I'd recommend deleting all of the networks: blocks in the file, both the blocks at the top level and the blocks in the web and backend containers. Compose will create a network named default for you and attach all of the containers to that network. Networking in Compose in the Docker documentation has more details on this setup.
The links: block is related to an obsolete Docker networking mode, and I've seen it implicated in problems in other SO questions. You should remove it as well.
You also do not need to manually specify container_name: in most cases. For the Nginx container, the Docker Hub nginx image already knows how to do the envsubst processing, so you do not need to override its command:.
This should leave you with:
version: '3.8'
services:
  web:
    image: nginx
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/templates/default.conf.template
    environment: { ... }
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    build: flask
    environment: { ... }
    volumes:
      - flask:/tmp/app_data
    restart: unless-stopped
    depends_on:
      - db
  db:
    image: mysql
    restart: unless-stopped
    volumes:
      - ./mysql:/docker-entrypoint-initdb.d
      - db:/var/lib/mysql
    environment: { ... }
    ports:
      - 3306:3306
volumes:
  flask:
  db:
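As a side note, the backend container already receives DB_PASSWORD from the compose file, so the connection code can read its settings from the environment instead of hardcoding them. A sketch, where the extra variable names and defaults are assumptions mirroring the compose file:

import os

import mysql.connector

# DB_PASSWORD is injected by docker-compose; DB_HOST/DB_USER/DB_NAME
# are assumed here, with defaults matching the values in app.py above.
conn = mysql.connector.connect(
    host=os.environ.get("DB_HOST", "db"),
    port=3306,
    user=os.environ.get("DB_USER", "user"),
    password=os.environ["DB_PASSWORD"],
    database=os.environ.get("DB_NAME", "database"),
)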
docker build -f project/Dockerfile.prod -t registry.heroku.com/mighty-savannah-85236/web ./project
Successfully built 3df1e0c4eea4
Successfully tagged registry.heroku.com/mighty-savannah-85236/web:latest
docker run --name fastapi-tdd -e PORT=8765 -e DATABASE_URL=sqlite://sqlite.db -p 5003:8765 registry.heroku.com/mighty-savannah-85236/web:latest
nc: getaddrinfo for host "web-db" port 5432: Name or service not known
My docker-compose file:
services:
  web:
    build: ./project
    command: uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
    volumes:
      - ./project:/usr/src/app
    ports:
      - 8004:8000
    environment:
      - ENVIRONMENT=dev
      - TESTING=0
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
    depends_on:
      - web-db
  web-db:
    build:
      context: ./project/db
      dockerfile: Dockerfile
    expose:
      - 5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
It seems your container is trying to connect to web-db:5432, which, given the port, is likely a Postgres database server. Since web-db is not a real domain name, there is most likely supposed to be another container called web-db, presumably the Postgres database your container wants to connect to.
This connection will only work, though, if both containers (the one you are starting and the Postgres database container) are on the same user-defined Docker network, because only then does Docker's service discovery work. You might want to have a look at the Docker documentation on this.
But essentially you need to create a Docker network using
docker network create my-network
and then attach both containers (again, your container and the Postgres database) to that network using the --network option.
Additionally, your Postgres container must be called web-db so that service discovery will work.
So the skeleton of the command to start the DB would be the following:
docker run --name web-db --network my-network -p 5432:5432 your-database-image
The command to start your application would be
docker run --name fastapi-tdd --network my-network -e PORT=8765 -e DATABASE_URL=sqlite://sqlite.db -p 5003:8765 registry.heroku.com/mighty-savannah-85236/web:latest
It might also be worth exploring Docker Compose to simplify this whole process.
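Side note: the nc error in your output comes from an entrypoint script that waits for web-db:5432 before starting the app. If you'd rather do that wait in Python, a rough equivalent might look like this (the helper name is made up):

import socket
import time

def wait_for(host: str, port: int, timeout: float = 60.0) -> None:
    # Retry until the host resolves and accepts TCP connections,
    # or give up after `timeout` seconds.
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=2):
                return
        except OSError:
            if time.monotonic() >= deadline:
                raise
            time.sleep(1)

wait_for("web-db", 5432)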
Edit
Now, with your docker-compose.yaml file, the same rule applies: both containers need to be on the same user-defined bridge network, which can be declared using a top-level networks: key (be aware: don't put it inside services:).
services:
  web:
    build: ./project
    command: uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
    volumes:
      - ./project:/usr/src/app
    # attach this container to the network
    networks:
      - my-network
    ports:
      - 8004:8000
    environment:
      - ENVIRONMENT=dev
      - TESTING=0
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
    depends_on:
      - web-db
  web-db:
    # attach this container to the network
    networks:
      - my-network
    # name this container web-db
    container_name: web-db
    build:
      context: ./project/db
      dockerfile: Dockerfile
    expose:
      - 5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres

# declare the network resource
networks:
  my-network:
Now a connection should be possible. Be aware that you also need to configure PostgreSQL to accept remote connections by setting listen_addresses = '*' in postgresql.conf.