I have run into a very strange problem when building my Docker service. My stack is Docker + Gunicorn + Flask + Redis + Vue. I can build the image and bring the service up normally, but when I log in, the following error comes out.
dw-backend-v2 | 2023-01-12 11:04:30,546 exception_handler.py [line: 28] ERROR: Traceback (most recent call last):
dw-backend-v2 | File "/opt/bitnami/python/lib/python3.9/site-packages/redis/connection.py", line 611, in connect
dw-backend-v2 | sock = self.retry.call_with_retry(
dw-backend-v2 | File "/opt/bitnami/python/lib/python3.9/site-packages/redis/retry.py", line 46, in call_with_retry
dw-backend-v2 | return do()
dw-backend-v2 | File "/opt/bitnami/python/lib/python3.9/site-packages/redis/connection.py", line 612, in <lambda>
dw-backend-v2 | lambda: self._connect(), lambda error: self.disconnect(error)
dw-backend-v2 | File "/opt/bitnami/python/lib/python3.9/site-packages/redis/connection.py", line 677, in _connect
dw-backend-v2 | raise err
dw-backend-v2 | File "/opt/bitnami/python/lib/python3.9/site-packages/redis/connection.py", line 665, in _connect
dw-backend-v2 | sock.connect(socket_address)
dw-backend-v2 | OSError: [Errno 99] Cannot assign requested address
docker-compose:
version: '2'
services:
  backend:
    build:
      context: ./dw_backend
      dockerfile: Dockerfile
    image: dw-backend:2.0.0
    container_name: dw-backend-v2
    restart: always
    ports:
      - "9797:9797"
    volumes:
      - "./dw_backend:/home/data_warehouse/app"
    # privileged: true
    environment:
      TZ: "Asia/Shanghai"
      FLASK_ENV: DEBUG
      RD_HOST: redis
      RD_PORT: 6379
      RD_DB: 2
      RD_PWD:
      RD_POOL_SIZE: 10
      RD_KEY_EXPIRE: 43200
  frontend:
    hostname: dw-frontend-v2
    container_name: dw-frontend-v2
    restart: always
    build:
      context: ./dw_frontend
      dockerfile: Dockerfile
    ports:
      - "8150:80"
      - "2443:443"
    volumes:
      - ./dw_frontend/dist:/usr/share/nginx/html
    links:
      - backend
    depends_on:
      - backend
    environment:
      TZ: "Asia/Shanghai"
networks:
  default:
    external:
      name: common_net
I even deleted all of the code related to Redis, but the same error still happened! I couldn't find any solution to this problem.
My other service works normally, and both services connect to the same Redis instance. The only difference between service I and service II is that service II is a RESTful API and service I is not.
Does anyone know the reason for this problem?
There are also no ports stuck in "TIME_WAIT" status!
I have tried deleting all the code related to Redis and even writing a new method to connect to Redis, but no miracle has happened!
I hope someone can help me solve it.
OK, I solved this problem: I added an extra_hosts entry mapping localhost to redis in my docker-compose.yml file, and the error went away!
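For anyone hitting the same error, a minimal sketch of what that entry looks like under the backend service (the placement is my assumption based on the description above; whether a service name rather than an IP address is accepted on the right-hand side may depend on your Docker version):

backend:
  # ... rest of the service definition as above ...
  extra_hosts:
    - "localhost:redis"   # reported fix: point localhost at the redis host

This suggests the app was still trying to reach Redis on localhost inside the container rather than via the RD_HOST value, which would explain the Errno 99 error.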
I've been trying to connect Flask with MongoDB over Docker but constantly get a timeout error. Here's my code and the error below. Please let me know where I've gone wrong. Thanks.
Also, I've intentionally chosen port 27018 instead of 27017
app.py code:
from pymongo import MongoClient

client = MongoClient(host="test_mongodb",
                     port=27018,
                     username="root",
                     password="rootpassword",
                     authSource="admin")

# db is same as directory created to identify database
# default port is 27017
db = client.aNewDB          # db is a new database
UserNum = db["UserNum"]     # UserNum is a new collection
UserNum.insert_one({'num_of_users': 0})
docker-compose.yml
version: '3'
services:
  web:
    build: ./Web
    ports:
      - "5000:5000"
    links:
      - db   # Web is dependent on db
  db:
    image: mongo:latest
    hostname: test_mongodb
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
    ports:
      - 27018:27018
Error during docker-compose up:
web_1 | Traceback (most recent call last):
web_1 | File "app.py", line 21, in <module>
web_1 | UserNum.insert_one({'num_of_users':0})
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/collection.py", line 628, in insert_one
web_1 | comment=comment,
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/collection.py", line 562, in _insert_one
web_1 | self.__database.client._retryable_write(acknowledged, _insert_command, session)
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1447, in _retryable_write
web_1 | with self._tmp_session(session) as s:
web_1 | File "/usr/local/lib/python3.7/contextlib.py", line 112, in __enter__
web_1 | return next(self.gen)
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1729, in _tmp_session
web_1 | s = self._ensure_session(session)
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1712, in _ensure_session
web_1 | return self.__start_session(True, causal_consistency=False)
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1657, in __start_session
web_1 | self._topology._check_implicit_session_support()
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/topology.py", line 538, in _check_implicit_session_support
web_1 | self._check_session_support()
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/topology.py", line 555, in _check_session_support
web_1 | readable_server_selector, self.get_server_selection_timeout(), None
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/topology.py", line 240, in _select_servers_loop
web_1 | % (self._error_message(selector), timeout, self.description)
web_1 | pymongo.errors.ServerSelectionTimeoutError: test_mongodb:27018: timed out, Timeout: 30s, Topology Description: <TopologyDescription id: 62fa0685c58c2b61f79ea52e, topology_type: Unknown, servers: [<ServerDescription ('test_mongodb', 27018) server_type: Unknown, rtt: None, error=NetworkTimeout('test_mongodb:27018: timed out')>]>
flask_project_web_1 exited with code 1
In a docker-compose file, ports only publishes the specified container ports to the host network; it does not tell the MongoDB container to serve on port 27018. MongoDB will therefore still listen on 27017 even though you specified the ports option, so you have to tell the MongoDB container which port to use via the command option.
Add the line command: mongod --port 27018 to the db service, and it should work.
Like this:
db:
  image: mongo:latest
  hostname: test_mongodb
  command: mongod --port 27018
  environment:
    - MONGO_INITDB_ROOT_USERNAME=admin
    - MONGO_INITDB_ROOT_PASSWORD=password
  ports:
    - 27018:27018
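An alternative, not from the answer above but a common pattern, is to leave mongod on its default port and only change the host-side mapping. In that case the web container connects to test_mongodb:27017, and only clients on the host use 27018:

db:
  image: mongo:latest
  hostname: test_mongodb
  environment:
    - MONGO_INITDB_ROOT_USERNAME=admin
    - MONGO_INITDB_ROOT_PASSWORD=password
  ports:
    - 27018:27017   # host port 27018 -> container port 27017

With this variant, the MongoClient call in app.py would use port=27017 instead of 27018.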
I have dockerized my project, which uses Kafka for communication. The consumer side is written in Python. Because of the NoBrokersAvailable error I have tried two different Kafka images and, I think, every combination of settings. I want to consume messages from Kafka, but my consumer cannot connect to my Kafka brokers.
Console Logs
Traceback (most recent call last):
File "main.py", line 15, in <module>
kafka_consumer = kafka_client.prepare_kafka_consumer()
File "/-----/client/kafka_client.py", line 17, in prepare_kafka_consumer
consumer = KafkaConsumer(self.topic,
File "/usr/local/lib/python3.8/site-packages/kafka/consumer/group.py", line 356, in __init__
self._client = KafkaClient(metrics=self._metrics, **self.config)
File "/usr/local/lib/python3.8/site-packages/kafka/client_async.py", line 244, in __init__
self.config['api_version'] = self.check_version(timeout=check_timeout)
File "/usr/local/lib/python3.8/site-packages/kafka/client_async.py", line 900, in check_version
raise Errors.NoBrokersAvailable()
kafka.errors.NoBrokersAvailable: NoBrokersAvailable
Docker Compose
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:3.2.0
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    restart: always
  kafka:
    image: confluentinc/cp-kafka:3.2.0
    hostname: kafka
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - '9092:9092'
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka:19091,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    restart: always
  app:
    build:
      context: .
      dockerfile: DockerFile
    ports:
      - "8080:8080"
    depends_on:
      - kafka
Config.py
class Settings:
    KAFKA_BROKERS = ['kafka:19091']
    KAFKA_TOPIC = "topic"
    MAIL_USERNAME = "mail"
    MAIL_PASSWORD = "pass"

settings = Settings()
I am not able to get my app and Kafka working together properly in the same Compose setup.
I have encountered the error "Unrecognized field: schemaType (HTTP status code 422, SR code 422)" when I execute the json_producer.py example from the Confluent GitHub repository.
This is my docker-compose:
version: '3.3'
services:
  postgres:
    container_name: postgres
    ports:
      - '5432:5432'
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=shipment_db
      - PGPASSWORD=password
    image: 'debezium/postgres:13'
  zookeeper:
    container_name: zookeeper
    ports:
      - '2181:2181'
      - '2888:2888'
      - '3888:3888'
    image: 'debezium/zookeeper:1.7'
  kafka:
    container_name: kafka
    ports:
      - '9092:9092'
    links:
      - 'zookeeper:zookeeper'
    image: 'debezium/kafka:1.7'
    environment:
      ZOOKEEPER_CONNECT: zookeeper:2181
    #volumes:
    #  - ./kafka:/kafka/config:rw
  connect:
    image: debezium/connect:1.7
    hostname: connect
    container_name: connect
    ports:
      - 8083:8083
    environment:
      BOOTSTRAP_SERVERS: kafka:9092
      GROUP_ID: 1
      CONFIG_STORAGE_TOPIC: my_connect_configs
      OFFSET_STORAGE_TOPIC: my_connect_offsets
      STATUS_STORAGE_TOPIC: my_connect_statuses
      CONNECT_BOOTSTRAP_SERVERS: kafka:9092
      CONNECT_GROUP_ID: connect-cluster-A
      CONNECT_PLUGIN_PATH: /kafka/data, /kafka/connect
      #EXTERNAL_LIBS_DIR: /kafka/external_libs,/kafka/data
      CLASSPATH: /kafka/data/*
      KAFKA_CONNECT_PLUGINS_DIR: /kafka/data, /kafka/connect
      #CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect=DEBUG,org.apache.plc4x.kafka.Plc4xSinkConnector=DEBUG"
    volumes:
      - type: bind
        source: ./plugins
        target: /kafka/data
    depends_on:
      - zookeeper
      - kafka
      - postgres
    links:
      - zookeeper
      - kafka
      - postgres
  schema-registry:
    image: confluentinc/cp-schema-registry:5.4.6
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - kafka
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
      SCHEMA_REGISTRY_LISTENERS: "http://0.0.0.0:8081"
  ksqldb-server:
    image: confluentinc/ksqldb-server:0.23.1
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - kafka
      - zookeeper
      - schema-registry
    ports:
      - "8088:8088"
    volumes:
      - "./confluent-hub-components/:/usr/share/kafka/plugins/"
    environment:
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_BOOTSTRAP_SERVERS: "kafka:9092"
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
      KSQL_KSQL_CONNECT_URL: http://connect:8083
The code of json_producer.py is here in the repository.
The error appears when I execute this command:
$ python3 json_producer.py -b 0.0.0.0:9092 -s http://0.0.0.0:8081 -t test
and the stacktrace is the following:
Traceback (most recent call last):
File "/home/alessio/fm_v2/python/confluent-kafka-python-master/examples/venv_examples/lib/python3.8/site-packages/confluent_kafka/serializing_producer.py", line 172, in produce
value = self._value_serializer(value, ctx)
File "/home/alessio/fm_v2/python/confluent-kafka-python-master/examples/venv_examples/lib/python3.8/site-packages/confluent_kafka/schema_registry/json_schema.py", line 190, in __call__
self._schema_id = self._registry.register_schema(subject,
File "/home/alessio/fm_v2/python/confluent-kafka-python-master/examples/venv_examples/lib/python3.8/site-packages/confluent_kafka/schema_registry/schema_registry_client.py", line 336, in register_schema
response = self._rest_client.post(
File "/home/alessio/fm_v2/python/confluent-kafka-python-master/examples/venv_examples/lib/python3.8/site-packages/confluent_kafka/schema_registry/schema_registry_client.py", line 127, in post
return self.send_request(url, method='POST', body=body)
File "/home/alessio/fm_v2/python/confluent-kafka-python-master/examples/venv_examples/lib/python3.8/site-packages/confluent_kafka/schema_registry/schema_registry_client.py", line 174, in send_request
raise SchemaRegistryError(response.status_code,
confluent_kafka.schema_registry.error.SchemaRegistryError: Unrecognized field: schemaType (HTTP status code 422, SR code 422)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "json_producer.py", line 172, in <module>
main(parser.parse_args())
File "json_producer.py", line 151, in main
producer.produce(topic=topic, key=str(uuid4()), value=user,
File "/home/alessio/fm_v2/python/confluent-kafka-python-master/examples/venv_examples/lib/python3.8/site-packages/confluent_kafka/serializing_producer.py", line 174, in produce
raise ValueSerializationError(se)
confluent_kafka.error.ValueSerializationError: KafkaError{code=_VALUE_SERIALIZATION,val=-161,str="Unrecognized field: schemaType (HTTP status code 422, SR code 422)"}
Where is the problem? Thanks for any reply.
JSON Schema support was not added to the Confluent Schema Registry until version 5.5, which is why the error complains about the schemaType field: older versions of the Registry's request/response payload do not know about that field.
Upgrading to at least that version, or using the latest version of the image, will resolve the error.
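For example, a minimal sketch of the change in the compose file (any tag at or above the version mentioned should work):

schema-registry:
  image: confluentinc/cp-schema-registry:latest   # or any tag >= 5.5
  hostname: schema-registry
  container_name: schema-registry
  # remaining settings unchanged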
If you just want to produce JSON, then you don't need the Registry. More details at https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/ .
You can use the regular producer.py example and provide JSON objects as strings on the CLI
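For instance, a rough sketch of producing plain JSON without the Schema Registry, using the basic confluent_kafka Producer instead of the SerializingProducer (broker address and topic are placeholders):

import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

record = {"name": "alice", "favourite_number": 42}
# Serialize to a JSON string ourselves; nothing is registered in the Schema Registry.
producer.produce("test", value=json.dumps(record))
producer.flush()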
Here is my docker-compose.yml used to create the database container.
version: '3.7'
services:
  application:
    build:
      context: ./app
      dockerfile: dockerfile  # dockerfile-prod
    depends_on:
      - database_mongo
      - database_neo4j
      - etl_pipeline
    environment:
      - flask_env=dev  # flask_env=prod
    volumes:
      - ./app:/app
    ports:
      - "8080:8080"
  database_mongo:
    image: "mongo:4.2"
    expose:
      - 27017
    volumes:
      - ./data/database/mongo:/data/db
  database_neo4j:
    image: neo4j:latest
    expose:
      - 27018
    volumes:
      - ./data/database/neo4j:/data
    ports:
      - "7474:7474"  # web client
      - "7687:7687"  # DB default port
    environment:
      - NEO4J_AUTH=none
  etl_pipeline:
    depends_on:
      - database_mongo
      - database_neo4j
    build:
      context: ./data/etl
      dockerfile: dockerfile  # dockerfile-prod
    volumes:
      - ./data/:/data/
      - ./data/etl:/app/
I'm trying to connect to my Neo4j database with the Python driver. I have already been able to connect to MongoDB with this line:
mongo_client = MongoClient(host="database_mongo")
I'm trying to do something similar to connect to Neo4j, using GraphDatabase from the neo4j package like this:
url = "{scheme}://{host_name}:{port}".format(scheme = "bolt", host_name="database_neo4j", port = 7687)
baseNeo4j = GraphDatabase.driver(url, encrypted=False)
or with py2neo like this:
neo_client = Graph(host="database_neo4j")
However, none of this has worked yet, so I'm not sure whether I'm using the right syntax to use Neo4j with Docker. I've tried many things and looked around, but couldn't find the answer...
The whole error message is:
etl_pipeline_1 | MongoClient(host=['database_mongo:27017'], document_class=dict, tz_aware=False, connect=True)
etl_pipeline_1 | Traceback (most recent call last):
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 929, in _connect
etl_pipeline_1 | s.connect(resolved_address)
etl_pipeline_1 | ConnectionRefusedError: [Errno 111] Connection refused
etl_pipeline_1 |
etl_pipeline_1 | During handling of the above exception, another exception occurred:
etl_pipeline_1 |
etl_pipeline_1 | Traceback (most recent call last):
etl_pipeline_1 | File "main.py", line 26, in <module>
etl_pipeline_1 | baseNeo4j = GraphDatabase.driver(url, encrypted=False)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 183, in driver
etl_pipeline_1 | return cls.bolt_driver(parsed.netloc, auth=auth, **config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 196, in bolt_driver
etl_pipeline_1 | return BoltDriver.open(target, auth=auth, **config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 359, in open
etl_pipeline_1 | pool = BoltPool.open(address, auth=auth, pool_config=pool_config, workspace_config=default_workspace_config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 531, in open
etl_pipeline_1 | seeds = [pool.acquire() for _ in range(pool_config.init_size)]
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 531, in <listcomp>
etl_pipeline_1 | seeds = [pool.acquire() for _ in range(pool_config.init_size)]
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 545, in acquire
etl_pipeline_1 | return self._acquire(self.address, timeout)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 409, in _acquire
etl_pipeline_1 | connection = self.opener(address, timeout)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 528, in opener
etl_pipeline_1 | return Bolt.open(addr, auth=auth, timeout=timeout, routing_context=routing_context, **pool_config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 198, in open
etl_pipeline_1 | keep_alive=pool_config.keep_alive,
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 1049, in connect
etl_pipeline_1 | raise last_error
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 1039, in connect
etl_pipeline_1 | s = _connect(resolved_address, timeout, keep_alive)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 943, in _connect
etl_pipeline_1 | raise ServiceUnavailable("Failed to establish connection to {!r} (reason {})".format(resolved_address, error))
etl_pipeline_1 | neo4j.exceptions.ServiceUnavailable: Failed to establish connection to IPv4Address(('172.29.0.2', 7687)) (reason [Errno 111] Connection refused)
OK, so it may not be the best answer, but for anyone else who has this problem: I was able to solve it by adding a sleep(30) at the beginning of main.
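A minimal sketch of that workaround, reusing the driver setup already shown in the question (the neo4j Python driver and the database_neo4j service name are taken from there); the sleep simply gives the Neo4j container time to finish starting before the first connection attempt:

from time import sleep
from neo4j import GraphDatabase

sleep(30)  # crude wait for the neo4j container to become ready

url = "bolt://database_neo4j:7687"
baseNeo4j = GraphDatabase.driver(url, encrypted=False)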
You can try to create a network and use it across your services. Something like this:
networks:
  neo4j_network:
    driver: bridge

services:
  neo4j:
    image: neo4j:latest
    expose:
      - 27018
    volumes:
      - ./data/database/neo4j:/data
    ports:
      - "7474:7474"  # web client
      - "7687:7687"  # DB default port
    environment:
      - NEO4J_AUTH=none
    networks:
      - neo4j_network
  application:
    build:
      context: ./app
      dockerfile: dockerfile  # dockerfile-prod
    depends_on:
      - database_mongo
      - database_neo4j
      - etl_pipeline
    environment:
      - flask_env=dev  # flask_env=prod
    volumes:
      - ./app:/app
    ports:
      - "8080:8080"
    networks:
      - neo4j_network
Then, for your Neo4j driver URL (in your code), make sure to use bolt://host.docker.internal:7687
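A minimal sketch of what that looks like in the Python code, reusing the driver call from the question with the URL this answer suggests:

from neo4j import GraphDatabase

url = "bolt://host.docker.internal:7687"
baseNeo4j = GraphDatabase.driver(url, encrypted=False)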
I know this may seem like a duplicate of docker-compose with multiple databases, but I still can't manage to get it working after going through the answers.
Here is my docker-compose.yml:
version: '3'
services:
  backend:
    image: backend:1.0
    build: ./backend
    ports:
      - "9090:9090"
    depends_on:
      - db
      - ppt
    environment:
      - DATABASE_HOST=db
  db:
    image: main_db:26.03.18
    restart: always
    build: ./db
    ports:
      - "5432:5432"
  ppt:
    image: ppt_generator:1.0
    build: ./ppt
    ports:
      - "6060:6060"
  login:
    image: login:1.0
    build: ./login
    ports:
      - "7070:7070"
    depends_on:
      - login_db
  login_db:
    image: login_db:27.04.2018
    restart: always
    build: ./login_db
    ports:
      - "5433:5433"
Notice that I have one db on port 5433 and the other on 5432. However, when I run docker ps after starting the containers I get the following. I don't fully understand what is going on with the ports.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
997f816ddff3 backend:1.0 "/bin/sh -c 'pytho..." About a minute ago Up About a minute 0.0.0.0:9090->9090/tcp backendservices_backend_1
759546109a66 login:1.0 "/bin/sh -c 'pytho..." About a minute ago Up About a minute 0.0.0.0:7070->7070/tcp, 9090/tcp backendservices_login_1
a2a26b72dd0c login_db:27.04.2018 "docker-entrypoint..." About a minute ago Up About a minute 5432/tcp, 0.0.0.0:5433->5433/tcp backendservices_login_db_1
3f97de7fc41e main_db:26.03.18 "docker-entrypoint..." About a minute ago Up About a minute 0.0.0.0:5432->5432/tcp backendservices_db_1
1a61e741ccba ppt_generator:1.0 "/bin/sh -c 'pytho..." About a minute ago Up About a minute 0.0.0.0:6060->6060/tcp backendservices_ppt_1
Both my db dockerfiles are essentially identical except for the port number I expose:
FROM postgres:9.6.3
ENV POSTGRES_USER ludo
ENV POSTGRES_PASSWORD password
ENV POSTGRES_DB login
EXPOSE 5433
ADD db_dump.sql /docker-entrypoint-initdb.d
This is the error I get:
backend_1 | Traceback (most recent call last):
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2147, in _wrap_pool_connect
backend_1 | return fn()
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 387, in connect
backend_1 | return _ConnectionFairy._checkout(self)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 766, in _checkout
backend_1 | fairy = _ConnectionRecord.checkout(pool)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 516, in checkout
backend_1 | rec = pool._do_get()
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 1138, in _do_get
backend_1 | self._dec_overflow()
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
backend_1 | compat.reraise(exc_type, exc_value, exc_tb)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 187, in reraise
backend_1 | raise value
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 1135, in _do_get
backend_1 | return self._create_connection()
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 333, in _create_connection
backend_1 | return _ConnectionRecord(self)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 461, in __init__
backend_1 | self.__connect(first_connect_check=True)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 651, in __connect
backend_1 | connection = pool._invoke_creator(self)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 105, in connect
backend_1 | return dialect.connect(*cargs, **cparams)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 393, in connect
backend_1 | return self.dbapi.connect(*cargs, **cparams)
backend_1 | File "/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py", line 130, in connect
backend_1 | conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
backend_1 | psycopg2.OperationalError: could not connect to server: Connection refused
backend_1 | Is the server running on host "localhost" (127.0.0.1) and accepting
backend_1 | TCP/IP connections on port 5432?
backend_1 | could not connect to server: Cannot assign requested address
backend_1 | Is the server running on host "localhost" (::1) and accepting
backend_1 | TCP/IP connections on port 5432?
backend_1 |
Why is the db not running on port 5432? It used to work when I only had one database and now with two it seems to be confused...?
UPDATE
I can access the databases locally on ports 5432 and 5433 respectively. However, from my backend container I can't. My backend container doesn't seem to see anything running on port 5432. How do I make the db container's port 5432 visible to the backend container?
UPDATE
After a lot of fiddling around I got it to work. As @Iarwa1n suggested, you map one db as "5432:5432" and the other as "5433:5432". The error I encountered was due to how I was calling Postgres from the application itself. It is important to realize the Postgres host is not localhost anymore, but whatever name you gave your database service in docker-compose.yml: in my case, db for the backend and login_db for the login service. Additionally, I had to change my driver from postgresql to postgres (not sure why that is).
As such, my db_url ended up looking like this from within my Python backend app:
postgres://ludo:password@db:5432/main_db
And defined in this way:
DATABASE_CONFIG = {
    'driver': 'postgres',
    'host': 'db',
    'user': 'ludo',
    'password': 'password',
    'port': 5432,
    'dbname': 'main_db'
}
db_url = '{driver}://{user}:{password}@{host}:{port}/{dbname}'.format(**DATABASE_CONFIG)
Two things to note:
1) Regardless of how you mapped your ports, you always have to connect to Postgres's default port 5432 from within your app.
2) If you're using the requests Python library (as I was), then make sure to change the URL appropriately as well. For example, I had a ppt service I was calling via requests, and I had to change the URL to:
'http://ppt:6060/api/getPpt' instead of 'http://localhost:6060/api/getPpt'
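A minimal sketch of that call from inside the backend container (the path is the one quoted above; everything else is plain requests usage):

import requests

# Use the Compose service name, not localhost, when calling another container.
response = requests.get("http://ppt:6060/api/getPpt")
print(response.status_code)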
The Postgres process in both of your containers listens on 5432, as this is the default behavior. EXPOSE alone does not change that; it just means the container exposes port 5433 instead of 5432, but no process is listening on that port.
Try to change the following:
FROM postgres:9.6.3
ENV POSTGRES_USER ludo
ENV POSTGRES_PASSWORD password
ENV POSTGRES_DB login
EXPOSE 5432
ADD db_dump.sql /docker-entrypoint-initdb.d
and then change the docker-compose like this:
login_db:
  image: login_db:27.04.2018
  restart: always
  build: ./login_db
  ports:
    - "5433:5432"
Now you can access the "db" at 5432 (from the host) and "login_db" at 5433 from the host. Note, that you still need to use 5432 to access one of the dbs from another container.