psycopg.OperationalError: connection failed: Connection refused in Docker - python

So I tried to connect my Docker app (python-1) to another Docker app (postgres), but it's giving me this error:
psycopg.OperationalError: connection failed: Connection refused
python-1 | Is the server running on host "localhost" (127.0.0.1) and accepting
python-1 | TCP/IP connections on port 25432?
I've tried using condition: service_healthy, but it doesn't work. In fact, I already made sure my database is running before python-1 tries to connect, so the problem doesn't seem to be that the database hasn't started yet. I've also tried 0.0.0.0 and the postgres container's IP address as the host, and neither works.
Here is my docker-compose.yml
version: "3.8"
services:
postgres:
image: postgres:14.6
ports:
- 25432:5432
healthcheck:
test: ["CMD-SHELL", "PGPASSWORD=${DB_PASSWORD}", "pg_isready", "-U", "${DB_USERNAME}", "-d", "${DB_NAME}"]
interval: 30s
timeout: 60s
retries: 5
start_period: 80s
environment:
POSTGRES_USER: ${DB_USERNAME}
POSTGRES_PASSWORD: ${DB_PASSWORD}
POSTGRES_DB: ${DB_NAME}
python:
build:
context: .
dockerfile: Dockerfile
depends_on:
postgres:
condition: service_healthy
command: flask --app app init-db && flask --app app run -h 0.0.0.0 -p ${PORT}
ports:
- ${PORT}:${PORT}
environment:
DB_HOST: localhost
DB_PORT: 25432
DB_NAME: ${DB_NAME}
DB_USERNAME: ${DB_USERNAME}
DB_PASSWORD: ${DB_PASSWORD}
And this is my Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.10
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .

In a container, localhost means the container itself.
Different containers on a Docker network can communicate using the service names as host names. Also, on the Docker network, you use the unmapped (container-side) port numbers.
So change your environment variables to
environment:
  DB_HOST: postgres
  DB_PORT: 5432
and you should be able to connect.
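For reference, a minimal psycopg 3 sketch of the application side, reading the same environment variables the compose file injects (the variable names come from the files above; the query is just a connectivity check):
import os
import psycopg

# DB_HOST is the compose service name ("postgres") and DB_PORT is the
# unmapped container port (5432), per the environment block above.
conn = psycopg.connect(
    host=os.environ["DB_HOST"],
    port=os.environ["DB_PORT"],
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USERNAME"],
    password=os.environ["DB_PASSWORD"],
)
print(conn.execute("SELECT version()").fetchone()[0])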

Configure Docker-compose to connect to my local database [duplicate]

I am setting up an application with the Flask framework using MySQL as the database. This database is located locally on my machine. I managed to use the official MySQL image without any problem, but I would rather use the local database that is on my computer.
Here is my extract; please help me.
Dockerfile
FROM python:3.9-slim
RUN apt-get -y update
RUN apt install python3-pip -y
WORKDIR /flask_docker_test
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 80
CMD gunicorn --bind 0.0.0.0:5000 app:app
Docker-compose file
version: "3"
services:
app:
build: .
container_name: app
links:
- db
ports:
- "5000:5000"
depends_on:
- db
networks:
- myapp
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
container_name: mysql_db
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: flask_test_db
MYSQL_USER: eric
MYSQL_PASSWORD: 1234
ports:
- "3306:3306"
networks:
- myapp
phpmyadmin:
image: phpmyadmin
restart: always
ports:
- 8080:80
depends_on:
- db
environment:
PMA_ARBITRARY: 1
PMA_USER: serge
PMA_HOST: db
PMA_PASSWORD: 1234
networks:
- myapp
networks:
myapp:
I would like to establish a connection with my local database rather than with the database provided by the MySQL container
in order to connect to your local database, you should:
remove the db service from your docker-compose.yaml
remove the myapp network
use network_mode: host
But IMO you should keep your db in the docker-compose file, otherwise other developers won't be able to start the project on their machine
EDIT: code snippet for network_mode
services:
  app:
    ...
    network_mode: host
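With network_mode: host the app container shares the host's network stack, so the locally installed MySQL server is reachable on localhost:3306. A minimal sketch, assuming PyMySQL and the credentials from the question's compose file:
import pymysql

# Under network_mode: host, "localhost" really is the host machine,
# so this reaches the MySQL server installed locally.
conn = pymysql.connect(
    host="localhost",
    port=3306,
    user="eric",
    password="1234",
    database="flask_test_db",
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone()[0])
Note that network_mode: host behaves this way on Linux; on Docker Desktop (Mac/Windows) the usual alternative is to keep the default network and point the app at host.docker.internal instead.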

Docker + SQL Server Cannot assign requested address

I have a Docker image using the compose.yml file below:
version: "3.9"
services:
flask:
build:
context: consumelogs/
dockerfile: Dockerfile.web
env_file:
- ./consumelogs/.env
ports:
- "5000:5000"
redis:
image: "redis:alpine"
ports:
- "6379:6379"
worker:
build:
context: consumelogs/
dockerfile: Dockerfile.worker
env_file:
- ./consumelogs/.env
depends_on:
- redis
And my SQL Server (which runs as a Docker container) is started with the command
docker run -d --name sql_server -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=123456' --net slackbot-net -p 1433:1433 mcr.microsoft.com/mssql/server:2019-latest
where the default userName = sa and my bridge network = slackbot-net. My Python app's containers are started using the command
docker-compose up --build --force-recreate
I know that if I use localhost as the ServerName, it points to the container itself, but I've tried my machine's IP address, 0.0.0.0, 127.0.0.1, and even the IP address from the query
SELECT
    CONNECTIONPROPERTY('net_transport') AS net_transport,
    CONNECTIONPROPERTY('protocol_type') AS protocol_type,
    CONNECTIONPROPERTY('auth_scheme') AS auth_scheme,
    CONNECTIONPROPERTY('local_net_address') AS local_net_address,
    CONNECTIONPROPERTY('local_tcp_port') AS local_tcp_port,
    CONNECTIONPROPERTY('client_net_address') AS client_net_address
and I still get Cannot assign requested address. Is there something obvious that I'm missing?
Azure Data Studio connects to the SQL Server container using localhost and the username/password combo just fine, and so does my Python tool using pytds, but not from within Docker.
Any help is greatly appreciated!
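The same service-name rule from the first answer applies here: from inside a compose service, localhost is that container itself, so the SQL Server container has to be reached by its container name (sql_server) over a shared network, e.g. by declaring slackbot-net as an external network in the compose file and attaching the flask and worker services to it. A hedged sketch with pytds, which the question already uses, under that assumption:
import pytds

# "sql_server" is the container name on the slackbot-net bridge network;
# the credentials are the ones from the docker run command above.
conn = pytds.connect(
    dsn="sql_server",   # container name, not localhost or a host IP
    port=1433,
    user="sa",
    password="123456",
    database="master",
)
cur = conn.cursor()
cur.execute("SELECT @@VERSION")
print(cur.fetchone()[0])
conn.close()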

How do I fix "Name or service not known" during docker run?

docker build -f project/Dockerfile.prod -t registry.heroku.com/mighty-savannah-85236/web ./project
Successfully built 3df1e0c4eea4
Successfully tagged registry.heroku.com/mighty-savannah-85236/web:latest
docker run --name fastapi-tdd -e PORT=8765 -e DATABASE_URL=sqlite://sqlite.db -p 5003:8765 registry.heroku.com/mighty-savannah-85236/web:latest
nc: getaddrinfo for host "web-db" port 5432: Name or service not known
docker-compose file
services:
  web:
    build: ./project
    command: uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
    volumes:
      - ./project:/usr/src/app
    ports:
      - 8004:8000
    environment:
      - ENVIRONMENT=dev
      - TESTING=0
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
    depends_on:
      - web-db
  web-db:
    build:
      context: ./project/db
      dockerfile: Dockerfile
    expose:
      - 5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
It seems your container is trying to connect to web-db:5432, which, given the port, is likely a Postgres database server. And since web-db is not a real domain, most likely there should be another container called web-db, a Postgres database, which your container wants to connect to.
This connection will only work, though, if both containers - the one you are starting and the Postgres database container - are on the same user-defined Docker network, as only then does Docker's service discovery work. You might want to have a look at the Docker documentation on this.
But essentially you need to create a Docker network using
docker network create my-network
and then attach both containers - again, your container and the Postgres database - to that network using the --network option.
Additionally your Postgres container must be called web-db so that the service discovery will work.
So the skeleton of the command to start the DB would be the following:
docker run --name web-db --network my-network -p 5432:5432 your-database-image
The command to start your application would be
docker run --name fastapi-tdd --network my-network -e PORT=8765 -e DATABASE_URL=sqlite://sqlite.db -p 5003:8765 registry.heroku.com/mighty-savannah-85236/web:latest
It might also be worth exploring Docker Compose to simplify this whole process.
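A quick way to verify from inside the application container that service discovery works is a plain name lookup (a minimal sketch; web-db and 5432 are the values from the question):
import socket

# Resolving "web-db" only succeeds when this container and the database
# container share a user-defined Docker network; otherwise you get the
# same "Name or service not known" error that nc reported.
try:
    addr = socket.getaddrinfo("web-db", 5432)
    print("resolved:", addr[0][4])
except socket.gaierror as exc:
    print("name resolution failed:", exc)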
Edit
Now with your docker-compose.yaml file the same rule applies. Both containers need to be on the same user-defined bridge network, which can be declared using networks: (be aware: don't put it inside services:).
services:
  web:
    build: ./project
    command: uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
    volumes:
      - ./project:/usr/src/app
    # attach this container to the network
    networks:
      - my-network
    ports:
      - 8004:8000
    environment:
      - ENVIRONMENT=dev
      - TESTING=0
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
    depends_on:
      - web-db
  web-db:
    # attach this container to the network
    networks:
      - my-network
    # name this container web-db
    container_name: web-db
    build:
      context: ./project/db
      dockerfile: Dockerfile
    expose:
      - 5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres

# declare the network resource
networks:
  my-network:
Now a connection should be possible. Be aware that you also need to configure PostgreSQL to accept remote connections by setting listen_addresses = '*' in postgresql.conf.

Connection refused error (111) when using Locust for Performance Testing against docker-compose app

I am using Locust, a performance testing tool, to load test an application that is set up to run within docker-compose. I get the following error (connection refused, error 111) for every request:
Error report
# occurrences Error
--------------------------------------------------------------------------------------------------------------------------------------------
5 GET /parties/: 'ConnectionError(MaxRetryError("HTTPConnectionPool(host=\'localhost\', port=8080): Max retries exceeded with url: /parties/ (Caused by NewConnectionError(\'<urllib3.connection.HTTPConnection object at 0x7fa6f294df28>: Failed to establish a new connection: [Errno 111] Connection refused\',))",),)'
I am running Locust from a docker container as follows:
docker run --volume /mydir/locustfile:/mnt/locust -e LOCUSTFILE_PATH=/mnt/locust/locustfile.py -e TARGET_URL=https://localhost:8080/parties -e LOCUST_OPTS="--clients=1 --no-web --run-time=600" locustio/locust
The weird thing is that when I use curl to hit the exact same URL it works properly.
curl http://localhost:8080/parties
Any help is appreciated!
localhost is probably the wrong hostname to use in your context. when you use containers, the hostname must match the container's name. so use your container's name instead of localhost.
references:
https://github.com/elastic/elasticsearch-py/issues/715
https://docs.locust.io/en/latest/running-locust-docker.html
https://docs.locust.io/en/stable/configuration.html
general assumption:
locustfile.py exists in the current working directory
a distributed example with docker compose:
for example, the following docker-compose.yml file config will work:
services:
  web:
    build: .
    command: python manage.py runserver 0:8000
    volumes:
      - .:/code/
    ports:
      - "8000:8000"
  master:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --master -H http://web:8000
  worker:
    image: locustio/locust
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --worker --master-host master
the -H flag (alias for --host) on the following line does the trick:
command: -f /mnt/locust/locustfile.py --master -H http://web:8000
a non-distributed example with docker compose:
services:
  web:
    build: .
    command: python manage.py runserver 0:8000
    volumes:
      - .:/code/
    ports:
      - "8000:8000"
  locust:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --host=http://web:8000
you can also specify a config file, instead of defining the flags on the command line. suppose the file locust.conf exists in the current working directory:
locust.conf:
host: http://web:8000
so in your docker-compose.yml you do:
command: -f /mnt/locust/locustfile.py --config=/mnt/locust/locust.conf
instead of:
command: -f /mnt/locust/locustfile.py --host=http://web:8000
a non-distributed example without docker compose:
the containers must live on the same network, so they can communicate with each other. to do so, let's create a common bridge network called locustnw:
docker network create --driver bridge locustnw
now run your app container within this network. Suppose it's listening on port 8000 and named web:
docker run -p 8000:8000 --network=locustnw --name web <my_image>
now run your locust container, within the same network too. Suppose it's listening on port 8089:
docker run -p 8089:8089 --network=locustnw -v $PWD:/mnt/locust locustio/locust -f /mnt/locust/locustfile.py --host=http://web:8000
the --network, --name and --host flags are the keys!
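for completeness, a minimal locustfile.py that would exercise the /parties endpoint from the question (a sketch; the class and task names are made up):
from locust import HttpUser, task

class PartiesUser(HttpUser):
    # The base host comes from --host / -H (e.g. http://web:8000 inside
    # the compose network), so only the path is given here.
    @task
    def get_parties(self):
        self.client.get("/parties/")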

Cannot connect to single-node Kafka server through Docker

I'm trying to connect to a single-node Kafka server through Docker, but I am getting the following error:
%3|1529395526.480|FAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv4#127.0.0.1:9092 failed: Connection refused
%3|1529395526.480|ERROR|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv4#127.0.0.1:9092 failed: Connection refused
%3|1529395526.480|ERROR|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: 1/1 brokers are down
The docker-compose.yml file contents are as follows:
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    network_mode: host
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    extra_hosts:
      - "moby:127.0.0.1"
  kafka:
    image: confluentinc/cp-kafka:latest
    network_mode: host
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: localhost:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_ADVERTISED_HOSTNAME: kafka
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    extra_hosts:
      - "moby:127.0.0.1"
  schema_registry:
    image: confluentinc/cp-schema-registry
    hostname: schema_registry
    depends_on:
      - zookeeper
      - kafka
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema_registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: '127.0.0.1:2181'
The Dockerfile contents are the following:
FROM python:2
WORKDIR /kafkaproducerapp
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./BackOffice_Producer.py" ]
What am I doing wrong?
You need this:
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
Otherwise the Kafka broker will tell anyone connecting that it can be found on localhost:9092, which is not going to work from the other containers. From your other containers, use kafka:29092 as the broker host & port, as well as zookeeper:2181 for ZooKeeper.
From your local host machine, you can access your broker on 9092 (assuming you expose the port).
Check out this docker-compose for a full example (from this repo)
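From another container on the same Docker network (rather than network_mode: host), a producer would then bootstrap against the internal listener. A minimal sketch using the confluent-kafka Python client, which wraps the librdkafka shown in the question's error output (the topic name is made up):
from confluent_kafka import Producer

# Inside the Docker network, use the internal listener kafka:29092;
# localhost:9092 only works from the host machine.
producer = Producer({"bootstrap.servers": "kafka:29092"})
producer.produce("test-topic", b"hello from inside the network")
producer.flush()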
