I have the following docker-compose file:
version: '2.3'
networks:
  default: { external: true, name: $NETWORK_NAME } # NETWORK_NAME in .env file is `uv_atp_network`.
services:
  car_parts_segmentor:
    # container_name: uv-car-parts-segmentation
    image: "uv-car-parts-segmentation:latest"
    ports:
      - "8080:8080"
    volumes:
      - ../../../../uv-car-parts-segmentation/configs:/uveye/configs
      - /isilon/:/isilon/
      # - local_data_folder:local_data_folder
    command: "--run_service rabbit"
    runtime: nvidia
    depends_on:
      rabbitmq_local:
        condition: service_started
    links:
      - rabbitmq_local
    restart: always
  rabbitmq_local:
    image: 'rabbitmq:3.6-management-alpine'
    container_name: "rabbitmq"
    ports:
      - ${RABBIT_PORT:?unspecified_rabbit_port}:5672
      - ${RABBIT_MANAGEMENT_PORT:?unspecified_rabbit_management_port}:15672
When this runs, docker ps shows
21400efd6493 uv-car-parts-segmentation:latest "python /uveye/app/m…" 5 seconds ago Up 1 second 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp joint_car_parts_segmentor_1
bf4ab8581f1f rabbitmq:3.6-management-alpine "docker-entrypoint.s…" 5 seconds ago Up 4 seconds 4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, :::5672->5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp, :::15672->15672/tcp rabbitmq
I want to create a connection to that rabbitmq. The user:pass is guest:guest.
I was unable to do it; in all cases I got the very uninformative AMQPConnectionError:
The code below runs in another, unrelated container.
connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@rabbitmq/"))
connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@localhost/"))
Also tried with
$ docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' rabbitmq
172.27.0.2
and
connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@172.27.0.2/"))
Also tried with
credentials = pika.credentials.PlainCredentials(
    username="guest",
    password="guest"
)
parameters = pika.ConnectionParameters(
    host=ip_address,  # tried all above options
    port=5672,
    credentials=credentials,
    heartbeat=10,
)
Note that the container car_parts_segmentor is able to see the container rabbitmq. Both are started by docker-compose.
My assumption is that this has to do with the uv_atp_network both containers live in, and that I am trying to access a container inside that network from outside it.
Is this really the problem?
If so, how can this be achieved?
For the future: how can I get more informative errors out of pika?
As I suspected, the problem was that the name rabbitmq existed only inside the network uv_atp_network.
The code attempting to connect to it runs in a container of its own, which was not attached to that network.
Solution: connect the current container to the network:
import socket

import docker

client = docker.from_env()
network_name = "uv_atp_network"
# The container's hostname is its ID, so it can look itself up and join the network.
atp_container = client.containers.get(socket.gethostname())
client.networks.get(network_name).connect(container=atp_container.id)
After this, the above code in the question does work, because rabbitmq can be resolved.
connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@rabbitmq/"))
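As for getting more informative errors from pika in the future: pika reports every connection step through Python's standard logging module, so enabling it (a minimal sketch, not part of the original fix) shows whether the failure is DNS resolution, a refused connection, or bad credentials:
import logging

import pika

logging.basicConfig(level=logging.DEBUG)  # pika logs each connection attempt at DEBUG

try:
    connection = pika.BlockingConnection(
        pika.URLParameters("amqp://guest:guest@rabbitmq/")
    )
except pika.exceptions.AMQPConnectionError:
    logging.exception("Could not connect to rabbitmq")
    raise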
I'm using Docker with InfluxDB and Python in a framework. I want to write to InfluxDB from inside the framework, but I always get the error "Name or service not known" and have no idea what the problem is.
I link the InfluxDB container to the framework container in the docker compose file like so:
version: '3'
services:
  influxdb:
    image: influxdb
    container_name: influxdb
    restart: always
    ports:
      - 8086:8086
    volumes:
      - influxdb_data:/var/lib/influxdb
  framework:
    image: framework
    build: framework
    volumes:
      - framework:/tmp/framework_data
    links:
      - influxdb
    depends_on:
      - influxdb
volumes:
  framework:
    driver: local
  influxdb_data:
Inside the framework I have a script that only focuses on writing to the database. Because I don't want to access the database with the URL "localhost:8086", I am using links so I can connect to it with the URL "influxdb:8086" instead. This is my code in that script:
import datetime

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS, WritePrecision

bucket = "bucket"
org = "org"  # placeholder, like bucket and token
token = "token"

def insert_data(message):
    client = InfluxDBClient(url="http://influxdb:8086", token=token, org=org)
    write_api = client.write_api(write_options=SYNCHRONOUS)
    point = Point("mem") \
        .tag("sensor", message["sensor"]) \
        .tag("metric", message["type"]) \
        .field("true_value", float(message["true_value"])) \
        .field("value", float(message["value"])) \
        .field("failure", message["failure"]) \
        .field("failure_type", message["failure_type"]) \
        .time(datetime.datetime.now(), WritePrecision.NS)
    write_api.write(bucket, org, point)  # the error seems to happen here
Every time I use the function insert_data I get the error urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fac547d9d00>: Failed to establish a new connection: [Errno -2] Name or service not known.
Why can't I write into the database?
I think the problem resides in your docker-compose file. First of all, links is a legacy feature, so I'd recommend using user-defined networks instead. More on that here: https://docs.docker.com/compose/compose-file/compose-file-v3/#links
I've created a minimalistic example to demonstrate the approach:
version: '3'
services:
  influxdb:
    image: influxdb
    container_name: influxdb
    restart: always
    environment: # manage the secrets the best way you can!!! the below are only for demonstration purposes...
      - DOCKER_INFLUXDB_INIT_MODE=setup # required so the INIT_* variables below actually bootstrap the instance
      - DOCKER_INFLUXDB_INIT_USERNAME=admin
      - DOCKER_INFLUXDB_INIT_PASSWORD=secret
      - DOCKER_INFLUXDB_INIT_ORG=my-org
      - DOCKER_INFLUXDB_INIT_BUCKET=my-bucket
      - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=secret-token
    networks:
      - local
  framework:
    image: python:3.10.2
    depends_on:
      - influxdb
    networks:
      - local
networks:
  local:
Notice the additional top-level networks definition with the local network, which is then referenced from both services.
Also make sure to initialize your influxdb with the right environment variables according to the docker image's documentation: https://hub.docker.com/_/influxdb
Then to test it just run a shell in your framework container via docker-compose:
docker-compose run --entrypoint sh framework
and then in the container install the client:
pip install influxdb_client['ciso']
Then in a python shell - still inside the container - you can verify the connection:
from influxdb_client import InfluxDBClient
client = InfluxDBClient(url="http://influxdb:8086", token="secret-token", org="my-org") # the token and the org values are coming from the container's docker-compose environment definitions
client.health()
# {'checks': [],
# 'commit': '657e1839de',
# 'message': 'ready for queries and writes',
# 'name': 'influxdb',
# 'status': 'pass',
# 'version': '2.1.1'}
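From there, a write analogous to the one in the question can be verified the same way (a small sketch reusing the my-bucket / my-org / secret-token values from the compose file above; the measurement, tag, and field values are just placeholders):
import datetime

from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://influxdb:8086", token="secret-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# Build a point the same way the question's insert_data() does, then write it.
point = (
    Point("mem")
    .tag("sensor", "demo-sensor")
    .field("value", 42.0)
    .time(datetime.datetime.now(), WritePrecision.NS)
)
write_api.write(bucket="my-bucket", org="my-org", record=point)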
Last but not least to clean up the test resources do:
docker-compose down
I want to create a local Kafka setup using docker-compose that replicates as closely as possible the secured Kafka setup in Confluent Cloud.
The cluster I have in Confluent Cloud can be connected to using
c = Consumer(
    {
        "bootstrap.servers": "broker_url",
        "sasl.mechanism": "PLAIN",
        "security.protocol": "SASL_SSL",
        "sasl.username": "key",
        "sasl.password": "secret",
        "group.id": "consumer-name",
    }
)
But I am unable to create a docker-compose.yml locally that has the same config and can be connected to using the same code.
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.0
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:6.2.0
    depends_on:
      - zookeeper
    ports:
      - '9092:9092'
      - '19092:19092'
    expose:
      - '29092'
    environment:
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENERS: INSIDE-DOCKER-NETWORK://0.0.0.0:29092,OTHER-DOCKER-NETWORK://0.0.0.0:19092,HOST://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: INSIDE-DOCKER-NETWORK://kafka:29092,OTHER-DOCKER-NETWORK://host.docker.internal:19092,HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE-DOCKER-NETWORK:PLAINTEXT,OTHER-DOCKER-NETWORK:PLAINTEXT,HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE-DOCKER-NETWORK
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      # Allow to swiftly purge the topics using retention.ms
      KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS: 100
      # Security Stuff
      KAFKA_LISTENER_NAME_EXTERNAL_PLAIN_SASL_JAAS_CONFIG: |
        org.apache.kafka.common.security.plain.PlainLoginModule required \
        username="broker" \
        password="broker" \
        user_alice="alice-secret";
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: SASL_SSL
That is what I have so far for the local docker-compose file, but it's not working.
This is the error I get when I try to connect using the same code:
%3|1628019607.757|FAIL|rdkafka#consumer-1| [thrd:sasl_ssl://localhost:9092/bootstrap]: sasl_ssl://localhost:9092/bootstrap: SSL handshake failed: Disconnected: connecting to a PLAINTEXT broker listener? (after 9ms in state SSL_HANDSHAKE)
Here's your hint: Disconnected: connecting to a PLAINTEXT broker listener?
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP only has PLAINTEXT mappings, so there is no SASL_SSL listener that your client can use.
As for the SASL_SSL settings you did configure: you only have one broker, so KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL doesn't really do anything.
In this demo repo you can find brokers that use all possible protocol mappings
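Until the listener map actually exposes a SASL_SSL (or SASL_PLAINTEXT) listener, the local broker can only be reached as plaintext. A quick way to confirm the broker itself is healthy (a sketch assuming the confluent-kafka client and the HOST listener on localhost:9092 from the compose file above; the topic name is hypothetical):
from confluent_kafka import Consumer

# Talk to the HOST listener, which the compose file maps as PLAINTEXT.
# Dropping the SASL/SSL settings is exactly what makes this work against that listener.
c = Consumer(
    {
        "bootstrap.servers": "localhost:9092",
        "security.protocol": "PLAINTEXT",
        "group.id": "local-test-consumer",
        "auto.offset.reset": "earliest",
    }
)
c.subscribe(["test-topic"])  # hypothetical topic name
msg = c.poll(timeout=5.0)
print(msg.value() if msg else "no message within 5 seconds")
c.close()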
I have a Docker swarm mode setup with one HAProxy container and 3 Python web apps. The HAProxy container exposes port 80 and should load balance the 3 containers of my app (by leastconn).
Here is my docker-compose.yml file:
version: '3'
services:
  scraper-node:
    image: scraper
    ports:
      - 5000
    volumes:
      - /profiles:/profiles
    command: >
      bash -c "
      cd src;
      gunicorn src.interface:app \
        --bind=0.0.0.0:5000 \
        --workers=1 \
        --threads=1 \
        --timeout 500 \
        --log-level=debug \
      "
    environment:
      - SERVICE_PORTS=5000
    deploy:
      replicas: 3
      update_config:
        parallelism: 5
        delay: 10s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 120s
    networks:
      - web
  proxy:
    image: dockercloud/haproxy
    depends_on:
      - scraper-node
    environment:
      - BALANCE=leastconn
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
    networks:
      - web
networks:
  web:
    driver: overlay
When I deploy this swarm (docker stack deploy --compose-file=docker-compose.yml scraper) I get all of my containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
245f4bfd1299 scraper:latest "/docker-entrypoin..." 21 hours ago Up 19 minutes 80/tcp, 5000/tcp, 8000/tcp scraper_scraper-node.3.iyi33hv9tikmf6m2wna0cypgp
995aefdb9346 scraper:latest "/docker-entrypoin..." 21 hours ago Up 19 minutes 80/tcp, 5000/tcp, 8000/tcp scraper_scraper-node.2.wem9v2nug8wqos7d97zknuvqb
a51474322583 scraper:latest "/docker-entrypoin..." 21 hours ago Up 19 minutes 80/tcp, 5000/tcp, 8000/tcp scraper_scraper-node.1.0u8q4zn432n7p5gl93ohqio8e
3f97f34678d1 dockercloud/haproxy "/sbin/tini -- doc..." 21 hours ago Up 19 minutes 80/tcp, 443/tcp, 1936/tcp scraper_proxy.1.rng5ysn8v48cs4nxb1atkrz73
And when I display the haproxy container log, it looks like it recognizes the 3 Python containers:
INFO:haproxy:dockercloud/haproxy 1.6.6 is running outside Docker Cloud
INFO:haproxy:Haproxy is running in SwarmMode, loading HAProxy definition through docker api
INFO:haproxy:dockercloud/haproxy PID: 6
INFO:haproxy:=> Add task: Initial start - Swarm Mode
INFO:haproxy:=> Executing task: Initial start - Swarm Mode
INFO:haproxy:==========BEGIN==========
INFO:haproxy:Linked service: scraper_scraper-node
INFO:haproxy:Linked container: scraper_scraper-node.1.0u8q4zn432n7p5gl93ohqio8e, scraper_scraper-node.2.wem9v2nug8wqos7d97zknuvqb, scraper_scraper-node.3.iyi33hv9tikmf6m2wna0cypgp
INFO:haproxy:HAProxy configuration:
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
log-send-hostname
maxconn 4096
pidfile /var/run/haproxy.pid
user haproxy
group haproxy
daemon
stats socket /var/run/haproxy.stats level admin
ssl-default-bind-options no-sslv3
ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:DHE-DSS-AES128-SHA:DES-CBC3-SHA
defaults
balance leastconn
log global
mode http
option redispatch
option httplog
option dontlognull
option forwardfor
timeout connect 5000
timeout client 50000
timeout server 50000
listen stats
bind :1936
mode http
stats enable
timeout connect 10s
timeout client 1m
timeout server 1m
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
stats auth stats:stats
frontend default_port_80
bind :80
reqadd X-Forwarded-Proto:\ http
maxconn 4096
default_backend default_service
backend default_service
server scraper_scraper-node.1.0u8q4zn432n7p5gl93ohqio8e 10.0.0.5:5000 check inter 2000 rise 2 fall 3
server scraper_scraper-node.2.wem9v2nug8wqos7d97zknuvqb 10.0.0.6:5000 check inter 2000 rise 2 fall 3
server scraper_scraper-node.3.iyi33hv9tikmf6m2wna0cypgp 10.0.0.7:5000 check inter 2000 rise 2 fall 3
INFO:haproxy:Launching HAProxy
INFO:haproxy:HAProxy has been launched(PID: 12)
INFO:haproxy:===========END===========
But when I try to GET to http://localhost I get an error message:
<html>
<body>
<h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body>
</html>
There were two problems:
The command in the docker-compose.yml file should be one line.
The scraper image should expose port 5000 (in its Dockerfile).
Once I fixed those, I deployed the swarm the same way (with stack), and the proxy container recognized the Python containers and was able to load balance between them.
A 503 error usually means a failed health check to the backend server.
Your stats page might be helpful here: if you mouse over the LastChk column of one of your DOWN backend servers, HAProxy will give you a vague summary of why that server is DOWN.
It does not look like you configured the health check (option httpchk) for your default_service backend: can you reach any of your backend servers directly (e.g. curl --head 10.0.0.5:5000)? From the HAProxy documentation:
[R]esponses 2xx and 3xx are considered valid, while all other ones indicate a server failure, including the lack of any response.
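To see what a health probe would get back from a backend, a quick check from any host on the overlay network (a standard-library sketch; the 10.0.0.5:5000 address is taken from the generated backend config above):
import http.client

# Send the same kind of HEAD request that curl --head would, and print the status.
conn = http.client.HTTPConnection("10.0.0.5", 5000, timeout=5)
conn.request("HEAD", "/")
response = conn.getresponse()
print(response.status, response.reason)  # 2xx/3xx counts as healthy for HAProxy
conn.close()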
I have a Python script that runs the following:
import mongoengine

client = mongoengine.connect('ppo-image-server-db', host="db", port=27017)
db = client.test_db
test_data = {
    'name': 'test'
}
db.test_data.insert_one(test_data)
print("DONE")
And I have a docker-compose.yml that looks like the following
version: '2'
networks:
  micronet:
services:
  user-info-service:
    restart: always
    build: .
    container_name: test-user-info-service
    working_dir: /usr/local/app/test
    entrypoint: ""
    command: ["manage", "run-user-info-service", "--host=0.0.0.0", "--port=5000"]
    volumes:
      - ./:/usr/local/app/test/
    ports:
      - "5000:5000"
    networks:
      - micronet
    links:
      - db
  db:
    image: mongo:3.0.2
    container_name: test-mongodb
    volumes:
      - ./data/db:/data/db
    ports:
      - "27017:27017"
However, every time I run docker-compose build and docker-compose up, the Python script is not able to find the host (in this case 'db'). Do I need any special linking or any environment variables to pass in the mongo server's IP address?
I could still access the dockerized MongoDB using Robomongo.
Please note that I'm not creating any docker-machine for this test case yet.
Could you help me pinpoint what's missing in my configuration?
Yes, what you need is to tell Docker that one application depends on the other. Here is how I built my docker-compose:
version: '2'
services:
  mongo-server:
    image: mongo
    volumes:
      - .data/mdata:/data/db # mongodb persistence
  myStuff:
    build: ./myStuff
    depends_on:
      - mongo-server
Also, in the connection URL you need to use the hostname "mongo-server". Docker will take care of connecting your code to the mongo container.
Example:
private val mongoClient: MongoClient = MongoClient("mongodb://mongo-server:27017")
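For the question's mongoengine-based script, the equivalent change is simply pointing host at the service name (a sketch reusing the names from the compose file above):
import mongoengine

# "mongo-server" is the compose service name; Docker's embedded DNS resolves it
# to the MongoDB container on the shared network.
client = mongoengine.connect('ppo-image-server-db', host="mongo-server", port=27017)
db = client.test_db
db.test_data.insert_one({'name': 'test'})
print("DONE")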
That should solve your problem.