docker-compose: redis connection refused between containers - python

I am trying to set up a docker-compose file that is intended to replace a single-container Docker solution that runs several processes (an RQ worker, RQ dashboard and a Flask application) under Supervisor.
The host system is Debian 8 Linux, and my docker-compose.yml looks like this (I deleted all other entries to reduce error sources):
version: '2'
services:
  redis:
    image: redis:latest
  rq-worker1:
    build: .
    command: /usr/local/bin/rqworker boo-uploads
    depends_on:
      - redis
"rq-worker1" is a Python RQ worker, trying to connect to redis via localhost and port 6379, but it fails to establish a connection:
redis_1 | 1:M 23 Dec 13:06:26.285 * The server is now ready to accept connections on port 6379
rq-worker1_1 | [2016-12-23 13:06] DEBUG: worker: Registering birth of worker d5cb16062fc0.1
rq-worker1_1 | Error 111 connecting to localhost:6379. Connection refused.
galileoqueue_rq-worker1_1 exited with code 1
The output of docker ps looks like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
36cac91670d2 redis:latest "docker-entrypoint.sh" 14 minutes ago Up About a minute 6379/tcp galileoqueue_redis_1
I tried everything from running the RQ worker against the local IPs 0.0.0.0 and 127.0.0.1 to using localhost. Other solutions posted on Stack Overflow didn't work for me either (e.g. docker-compose: connection refused between containers, but service accessible from host).
And this is my docker info output:
Containers: 25
Running: 1
Paused: 0
Stopped: 24
Images: 485
Server Version: 1.12.5
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 436
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null bridge host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options:
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 13.61 GiB
Name: gp-pc-201
ID: LBGV:K26G:UXXI:BWRH:OYVE:OQTA:N7LQ:I4DV:BTNH:FZEW:7XDD:WOCU
Does anyone have an idea why the connection between the two containers doesn't work?

In your setup, localhost from rq-worker1 is rq-worker1 itself, not redis, so you can't reach redis:6379 by connecting to localhost from rq-worker1. But by default redis and rq-worker1 are on the same Compose network, and within that network you can use a service name as a hostname.
That means you can connect to the redis service from rq-worker1 using redis as the hostname, for instance: client.connect(("redis", 6379))
You should replace localhost with redis in the config of rq-worker1.
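Concretely, a minimal sketch of the worker startup, assuming the redis and rq packages are installed in the worker image (the queue name is taken from the question's command):

import redis
from rq import Queue, Worker

# "redis" is the compose service name; it resolves to the redis
# container's address on the default compose network.
conn = redis.Redis(host='redis', port=6379)
worker = Worker([Queue('boo-uploads', connection=conn)], connection=conn)
worker.work()

Equivalently, the CLI worker accepts a Redis URL, so the compose command could be /usr/local/bin/rqworker --url redis://redis:6379 boo-uploads.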

Related

how to connect to an application running on localhost from a docker container

I am trying to connect to an application (which is not running in Docker) from a container.
I am running the Docker image with the help of docker compose, using host network mode, and connecting to the external service at host.docker.internal on port 7497.
The call is made from Python code inside the container; the container has no port config:
services:
  ibkr-bot-eminisp500:
    container_name: ibkr-bot-eminisp500
    image: |my-image|
    network_mode: host
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      - IBKR_CLIENT_URL_KEY= "host.docker.internal"
      - IBKR_PORT_KEY=7497
But I am getting the following error. What am I missing?
| API connection failed: gaierror(-2, 'Name or service not known')
ibkr-bot-eminisp500 | Traceback (most recent call last):
ibkr-bot-eminisp500 | File "/usr/bin/src/app/main.py", line 8, in <module>
ibkr-bot-eminisp500 | ibkrBot = IBKRBot()
Combining host.docker.internal with network_mode: host doesn't make any sense.
If you're running under Linux, then with network_mode: host your container runs in your host's network environment. Drop the extra_hosts section from your config because it isn't doing you any good; you can connect to a service on your host using any IP address from any host interface, including 127.0.0.1.
If you are running on anything other than Linux, then network_mode: host is probably never useful (because the Docker "host" is actually a virtual machine running on top of your primary operating system). In this case, drop network_mode: host from your config and connect using host.docker.internal.
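For the Linux case, a minimal sketch of the corrected compose file, assuming the bot reads its target host from IBKR_CLIENT_URL_KEY (note the value carries no quotes or stray spaces):

services:
  ibkr-bot-eminisp500:
    container_name: ibkr-bot-eminisp500
    image: |my-image|
    network_mode: host        # container shares the host's network stack
    environment:
      - IBKR_CLIENT_URL_KEY=127.0.0.1   # the host's own loopback
      - IBKR_PORT_KEY=7497

On macOS or Windows, drop network_mode: host instead and set IBKR_CLIENT_URL_KEY=host.docker.internal.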

Failed to connect to Postgres database inside docker container

docker-compose.yaml
version: '3.9'
services:
  web:
    env_file: .env
    build: .
    command: sh -c "alembic upgrade head && uvicorn main:app --host 0.0.0.0 --port 8000"
    volumes:
      - .:/app
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
  db:
    image: postgres:11
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASS}
      - POSTGRES_DB=${DB_NAME}
  redis:
    image: redis:6-alpine
volumes:
  postgres_data:
.env
DB_USER='wplay'
DB_PASS='wplay'
DB_HOST=db
DB_NAME='wplay'
DB_PORT=5432
When I run the container I get:
web_1 | could not connect to server: Cannot assign requested address
web_1 | Is the server running on host "localhost" (::1) and accepting
web_1 | TCP/IP connections on port 5432?
I tried changing DB_HOST to 'localhost' in .env and adding
    ports:
      - '5432:5432'
to the db configuration in the yaml, but nothing changed.
Update: logs
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2023-01-04 12:44:55.386 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2023-01-04 12:44:55.386 UTC [1] LOG: listening on IPv6 address "::", port 5432
Connection to DB
db.py
import os
from decouple import config
import databases
import sqlalchemy

DEFAULT_DATABASE_URL = f"postgresql://{config('DB_USER')}:{config('DB_PASS')}" \
                       f"@{config('DB_HOST')}:5432/{config('DB_NAME')}"
DATABASE_URL = os.getenv('DATABASE_URL', DEFAULT_DATABASE_URL)

database = databases.Database(DATABASE_URL)
metadata = sqlalchemy.MetaData()
engine = sqlalchemy.create_engine(DATABASE_URL)
metadata.create_all(engine)
You are spinning up two individual containers, a "web" and a "db", but trying to reach one from the other via localhost. localhost only resolves within its own container.
With Docker Run
Use --network="host" in your docker run command; localhost and 127.0.0.1 in your docker container will then point to your docker host.
With Docker Compose
Each container can look up the hostname web or db and get back the appropriate container's IP address. For example, web's application code could connect to the URL postgres://db:5432 and start using the Postgres database.
So within the web container, your connection string to db would look like postgres://db:5432.
See the Docker networking documentation or the Docker Compose networking documentation.
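Concretely, with the .env above (DB_HOST=db, and assuming decouple strips the quotes around the values), the default URL built in db.py should expand inside the web container to:

# What DEFAULT_DATABASE_URL evaluates to with the question's .env values.
# The host part is "db", the compose service name, not localhost.
DATABASE_URL = "postgresql://wplay:wplay@db:5432/wplay"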

docker-compose and connection to Mongo container

I am trying to create 2 containers as per the following docker-compose.yml file. The issue is that if I start up the mongo database container and then run my code locally (hitting 127.0.0.1), everything is fine, but if I try to run my api container and hit that (see yml file) then I get connection refused, i.e.
172.29.0.12:27117: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id:
60437a460a3e0fa904650e35, topology_type: Single, servers: [<ServerDescription ('172.29.0.12', 27117) server_type:
Unknown, rtt: None, error=AutoReconnect('172.29.0.12:27117: [Errno 111] Connection refused')>]>
Please note: I have set mongo to use port 27117 rather than 27017
My app is a Python Flask app and I am using PyMongo in the following manner:
try:
    myclient = pymongo.MongoClient('mongodb://%s:%s@%s:%s/%s' % (username, password, hostName, port, database))
    mydb = myclient[database]
    cursor = mydb["temperatures"]
    app.logger.info('Database connected to: ' + database)
except:
    app.logger.error('Error connecting to database')
What's driving me mad is it runs locally and successfully accesses mongo via the container, but as soon as I try the app in a container it fails.
docker-compose.yml as follows:
version: '3.7'
services:
  hotbin-db:
    image: mongo
    container_name: hotbin-db
    restart: always
    ports:
      # <Port exposed> : < MySQL Port running inside container>
      - '27117:27017'
    expose:
      # Opens port 3306 on the container
      - '27117'
    command: [--auth]
    environment:
      MONGO_INITDB_ROOT_USERNAME: ***
      MONGO_INITDB_ROOT_PASSWORD: ***
      MONGO_INITDB_DATABASE: ***
      MONGODB_DATA_DIR: /data/db
      MONDODB_LOG_DIR: /dev/null
    # Where our data will be persisted
    volumes:
      - /home/simon/mongodb/database/hotbin-db/:/data/db
      #- my-db:/var/lib/mysql
    # env_file:
    #   - .env
    networks:
      hotbin-net:
        ipv4_address: 172.29.0.12
  hotbin-api:
    image: scsherlock/compost-api:latest
    container_name: hotbin-api
    environment:
      MONGODB_DATABASE: ***
      MONGODB_USERNAME: ***
      MONGODB_PASSWORD: ***
      MONGODB_HOSTNAME: 172.29.0.12
      MONGODB_PORT: '27117'
    depends_on:
      - hotbin-db
    restart: always
    ports:
      # <Port exposed> : < MySQL Port running inside container>
      - '5050:5050'
    expose:
      - '5050'
    networks:
      hotbin-net:
        ipv4_address: 172.29.0.13

# Names our volume
volumes:
  my-db:

networks:
  hotbin-net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.29.0.0/16
Using the service name of the mongo container and the standard port of 27017 instead of 27117 (even though that's what is defined in the docker-compose file) works. I'd like to understand why, though.
Your docker compose file does NOT configure MongoDB to run on port 27117. If you want to get it to run on 27117 you would have to change this line in the docker compose:
command: mongod --auth --port 27117
As you haven't specified a port, MongoDB will run on the default port 27017.
Your expose section exposes the container port 27117 to the host, but Mongo isn't running on that port, so that line is effectively doing nothing.
Your ports section maps a host port 27117 to a container port 27017. This means if you're connecting from the host, you can connect on port 27117, but that is connecting to port 27017 on the container.
Now to your Python program: as it runs on the container network, it must reference other compose services by their service name.
Putting this together, your connection string will be: mongodb://hotbin-db:27017/yourdb?<options>
As others have mentioned, you really don't need to assign specific IP addresses unless you have a very good reason to. You don't even need to define a network, as docker-compose creates its own internal network.
Reference: https://docs.docker.com/compose/networking/
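Putting this into the question's code, a minimal sketch, assuming the credentials still come from the MONGODB_* environment variables in the compose file (with MONGODB_HOSTNAME changed to hotbin-db):

import os
import pymongo

# "hotbin-db" is the compose service name; 27017 is the port Mongo
# actually listens on inside the container.
myclient = pymongo.MongoClient('mongodb://%s:%s@%s:%s/%s' % (
    os.environ['MONGODB_USERNAME'],
    os.environ['MONGODB_PASSWORD'],
    os.environ.get('MONGODB_HOSTNAME', 'hotbin-db'),
    int(os.environ.get('MONGODB_PORT', 27017)),
    os.environ['MONGODB_DATABASE'],
))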
Are you using Windows to run the container?
If yes, localhost is identified as the localhost of the container and not the localhost of your host machine.
Hence, instead of providing the IP address of your host, try modifying your MongoDB connection string this way when running inside the docker container:
Try this:
mongodb://host.docker.internal:27017/
instead of:
mongodb://localhost:27017/

how to connect python app in docker container with running docker container with url

I have an app in Python that I want to run in a docker container, and it has this line:
h2o.connect(ip='127.0.0.1', port='54321')
The h2o server is running in a docker container and it always gets a different IP. One time it started on 172.19.0.5, the next time 172.19.0.3, sometimes 172.17.0.3.
So it is always random, and I can't connect from the Python app.
I tried to publish the h2o server's port to localhost and then connect the Python app (the code above), but it is not working.
You don't connect two docker containers through IP addresses. Instead, you want to use docker's internal network aliases: on a compose network, each service is reachable under its service name.
version: '3'
services:
  server:
    ...
    depends_on:
      - database
  database:
    ...
    expose:
      - "54321"
then you can define your connection in server as:
h2o.connect(ip='database', port='54321')
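If the Python app stays on the host instead of in a container, the relevant fix is publishing the port rather than only exposing it, e.g.:

services:
  database:
    ...
    ports:
      - "54321:54321"   # host:container; makes 127.0.0.1:54321 reachable from the host

with the original h2o.connect(ip='127.0.0.1', port='54321') left unchanged.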

Docker-Compose HAProxy missing frontend

I'm trying to use HAProxy as a load balancer for my python webapp which uses redis. I'm working on transitioning the docker run commands to docker-compose using the docker-compose.yml below, but I'm running into issues.
Below are my current "docker run" commands, which work perfectly fine!
docker run --name sdnapi-redis -v /opt/redis:/data -p 6379:6379 -d redis redis-server --appendonly yes
docker run -d --name sdnapi1 --link sdnapi-redis:redis mycomp/sdnapi
docker run -d --name sdnapi2 --link sdnapi-redis:redis mycomp/sdnapi
docker run -d --name sdnapilb -p 80:80 -p 443:443 -p 1936:1936 -e DEFAULT_SSL_CERT="$(awk 1 ORS='\\n' ./certs/cert.pem)" -v /certs/:/certs/ --link sdnapi1:sdnapi1 --link sdnapi2:sdnapi2 dockercloud/haproxy
Here is my docker-compose.yml that should replicate the same functionality
version: '2'
services:
  sdnapi:
    image: mycomp/sdnapi
    links:
      - sdnapi-redis:redis
  sdnapilb:
    image: dockercloud/haproxy:1.2.1
    environment:
      - DEFAULT_SSL_CERT
    volumes:
      - /certs/:/certs/
    ports:
      - "80:80"
      - "443:443"
      - "1936:1936"
    links:
      - sdnapi:sdnapi
  sdnapi-redis:
    image: redis
    volumes:
      - /opt/redis:/data
    ports:
      - "6379:6379"
    command: redis-server --appendonly yes
When I run the docker run commands, this is the sdnapilb log output:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    log-send-hostname
    maxconn 4096
    pidfile /var/run/haproxy.pid
    user haproxy
    group haproxy
    daemon
    stats socket /var/run/haproxy.stats level admin
    ssl-default-bind-options no-sslv3
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:DHE-DSS-AES128-SHA:DES-CBC3-SHA
defaults
    balance roundrobin
    log global
    mode http
    option redispatch
    option httplog
    option dontlognull
    option forwardfor
    timeout connect 5000
    timeout client 50000
    timeout server 50000
listen stats
    bind :1936
    mode http
    stats enable
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
    stats auth stats:stats
frontend default_frontend
    bind :80
    bind :443 ssl crt /certs/
    reqadd X-Forwarded-Proto:\ https
    maxconn 4096
    defcon 1
    default_backend default_service
When I run the docker-compose.yml with "docker-compose up -d", I lose the frontend section.
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    log-send-hostname
    maxconn 4096
    pidfile /var/run/haproxy.pid
    user haproxy
    group haproxy
    daemon
    stats socket /var/run/haproxy.stats level admin
    ssl-default-bind-options no-sslv3
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:DHE-DSS-AES128-SHA:DES-CBC3-SHA
defaults
    balance roundrobin
    log global
    mode http
    option redispatch
    option httplog
    option dontlognull
    option forwardfor
    timeout connect 5000
    timeout client 50000
    timeout server 50000
listen stats
    bind :1936
    mode http
    stats enable
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
    stats auth stats:stats
Can you see any issues with either setup? I want to use docker-compose for its ability to scale.
I figured out the issue...
The docker-compose.yml has issues with the links: the format for links is service name:alias.
The issue with mine is that even though my service names were correct, the aliases were incorrect, causing docker-compose to fail without an actual error. Since the alias doesn't exist, it just doesn't link the container, and thus there is no frontend.
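For reference, a minimal sketch of the links format the answer describes (service names here match the compose file above):

services:
  sdnapilb:
    image: dockercloud/haproxy:1.2.1
    links:
      # <compose service name>:<alias visible inside this container>
      - sdnapi:sdnapi
  sdnapi:
    image: mycomp/sdnapi
    links:
      - sdnapi-redis:redis   # "sdnapi" reaches redis at hostname "redis"
  sdnapi-redis:
    image: redis

dockercloud/haproxy discovers its backends through these link aliases, so a mismatched alias silently leaves the frontend and backend sections empty.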
