I've been trying to connect Flask to MongoDB with Docker Compose, but I constantly get a timeout error. My code and the error are below; please let me know where I've gone wrong. Thanks.
Note that I've intentionally chosen port 27018 instead of the default 27017.
app.py code:
from pymongo import MongoClient
client = MongoClient(
    host="test_mongodb",
    port=27018,
    username="root",
    password="rootpassword",
    authSource="admin",
)
#db is same as directory created to identify database
#default port is 27017
db = client.aNewDB
#db is a new database
UserNum = db["UserNum"]
#UserNum is a new Collection
UserNum.insert_one({'num_of_users':0})
docker-compose.yml
version: '3'
services:
  web:
    build: ./Web
    ports:
      - "5000:5000"
    links:
      - db  # Web is dependent on db
  db:
    image: mongo:latest
    hostname: test_mongodb
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
    ports:
      - 27018:27018
Error during docker-compose up:
web_1 | Traceback (most recent call last):
web_1 | File "app.py", line 21, in <module>
web_1 | UserNum.insert_one({'num_of_users':0})
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/collection.py", line 628, in insert_one
web_1 | comment=comment,
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/collection.py", line 562, in _insert_one
web_1 | self.__database.client._retryable_write(acknowledged, _insert_command, session)
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1447, in _retryable_write
web_1 | with self._tmp_session(session) as s:
web_1 | File "/usr/local/lib/python3.7/contextlib.py", line 112, in __enter__
web_1 | return next(self.gen)
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1729, in _tmp_session
web_1 | s = self._ensure_session(session)
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1712, in _ensure_session
web_1 | return self.__start_session(True, causal_consistency=False)
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1657, in __start_session
web_1 | self._topology._check_implicit_session_support()
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/topology.py", line 538, in _check_implicit_session_support
web_1 | self._check_session_support()
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/topology.py", line 555, in _check_session_support
web_1 | readable_server_selector, self.get_server_selection_timeout(), None
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/topology.py", line 240, in _select_servers_loop
web_1 | % (self._error_message(selector), timeout, self.description)
web_1 | pymongo.errors.ServerSelectionTimeoutError: test_mongodb:27018: timed out, Timeout: 30s, Topology Description: <TopologyDescription id: 62fa0685c58c2b61f79ea52e, topology_type: Unknown, servers: [<ServerDescription ('test_mongodb', 27018) server_type: Unknown, rtt: None, error=NetworkTimeout('test_mongodb:27018: timed out')>]>
flask_project_web_1 exited with code 1
In a docker-compose file, ports only publishes the specified container ports to the host network; it does not tell the mongod process inside the container to serve on port 27018. MongoDB will therefore still listen on 27017 even though you specified the ports option, so you have to tell the container itself by using the command option.
Add the line command: mongod --port 27018 to the db service, and it should work, like this:
db:
  image: mongo:latest
  hostname: test_mongodb
  command: mongod --port 27018
  environment:
    - MONGO_INITDB_ROOT_USERNAME=admin
    - MONGO_INITDB_ROOT_PASSWORD=password
  ports:
    - 27018:27018
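Note, separately, that app.py authenticates with root / rootpassword while the compose file creates the root user as admin / password, so once the port issue is fixed you may hit an authentication failure next. A minimal sketch of a client call that matches the compose environment above (assuming those really are the intended credentials):

from pymongo import MongoClient

# username/password must match MONGO_INITDB_ROOT_USERNAME and
# MONGO_INITDB_ROOT_PASSWORD from docker-compose.yml (admin / password here)
client = MongoClient(
    host="test_mongodb",
    port=27018,
    username="admin",
    password="password",
    authSource="admin",
)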
I ran into a very strange problem when building my Docker service. The stack is docker + gunicorn + flask + redis + vue. I can build the image and bring the service up normally, but when I log in, the following error comes out.
dw-backend-v2 | 2023-01-12 11:04:30,546 exception_handler.py [line: 28] ERROR: Traceback (most recent call last):
dw-backend-v2 | File "/opt/bitnami/python/lib/python3.9/site-packages/redis/connection.py", line 611, in connect
dw-backend-v2 | sock = self.retry.call_with_retry(
dw-backend-v2 | File "/opt/bitnami/python/lib/python3.9/site-packages/redis/retry.py", line 46, in call_with_retry
dw-backend-v2 | return do()
dw-backend-v2 | File "/opt/bitnami/python/lib/python3.9/site-packages/redis/connection.py", line 612, in <lambda>
dw-backend-v2 | lambda: self._connect(), lambda error: self.disconnect(error)
dw-backend-v2 | File "/opt/bitnami/python/lib/python3.9/site-packages/redis/connection.py", line 677, in _connect
dw-backend-v2 | raise err
dw-backend-v2 | File "/opt/bitnami/python/lib/python3.9/site-packages/redis/connection.py", line 665, in _connect
dw-backend-v2 | sock.connect(socket_address)
dw-backend-v2 | OSError: [Errno 99] Cannot assign requested address
docker-compose:
version: '2'
services:
  backend:
    build:
      context: ./dw_backend
      dockerfile: Dockerfile
    image: dw-backend:2.0.0
    container_name: dw-backend-v2
    restart: always
    ports:
      - "9797:9797"
    volumes:
      - "./dw_backend:/home/data_warehouse/app"
    # privileged: true
    environment:
      TZ: "Asia/Shanghai"
      FLASK_ENV: DEBUG
      RD_HOST: redis
      RD_PORT: 6379
      RD_DB: 2
      RD_PWD:
      RD_POOL_SIZE: 10
      RD_KEY_EXPIRE: 43200
  frontend:
    hostname: dw-frontend-v2
    container_name: dw-frontend-v2
    restart: always
    build:
      context: ./dw_frontend
      dockerfile: Dockerfile
    ports:
      - "8150:80"
      - "2443:443"
    volumes:
      - ./dw_frontend/dist:/usr/share/nginx/html
    links:
      - backend
    depends_on:
      - backend
    environment:
      TZ: "Asia/Shanghai"
networks:
  default:
    external:
      name: common_net
I even deleted all the code related to redis, but the same error still happened! I couldn't find any solution to this problem.
Another of my services works normally, and all of the services connect to the same redis instance. The difference between service I and service II is that service II is a REST API while service I is not.
Does anyone know the reason for this problem?
There are also no ports stuck in "TIME_WAIT" status!
I have tried deleting all the code related to redis and even writing a new method to connect to redis, but no miracle has happened!
I hope someone can help me solve it.
OK, I solved this problem: I added the parameter "extra_hosts: localhost: redis" to my docker-compose.yml file, and it's solved!
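For anyone else reading: as I understand that fix, the parameter goes under the backend service, roughly like this (extra_hosts normally adds hostname entries to the container's /etc/hosts, so treat the exact value as the one quoted above rather than a verified recipe):

services:
  backend:
    # ... rest of the service definition unchanged ...
    extra_hosts:
      - "localhost:redis"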
I'm trying to connect Python with MySQL; each runs in its own Docker container. I can access MySQL from my Ubuntu terminal, but when I try to access it with the URL I used in Python, it doesn't work.
Docker-compose
version: "3.9" # optional since v1.27.0
services:
mysql:
image: 'mysql:latest'
restart: always
volumes:
- './my-vol/mysql_data:/var/lib/mysql'
ports:
- '3306:3306'
web:
build: .
ports:
- "5000:5000"
volumes:
- .:/my-vol
Python file
from flask import Flask
app = Flask(__name__)
import sqlalchemy as db
import mysql.connector
from mysql.connector import Error

@app.route('/db')
def python():
    connection = mysql.connector.connect(host="mysql", user="root", password="root", database="test")
    cursor = connection.cursor()
    with connection.cursor() as cursor:
        cursor.execute("Select * from test_table")
        for (userId, firstName, lastName) in cursor:
            return print("{}, {}, {}".format(userId, firstName, lastName))
Finally, this is the complete error that appears when I try to access the /db URL.
[2021-09-13 08:34:19,119] ERROR in app: Exception on /db [GET]
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2070, in wsgi_app
web_1 | response = self.full_dispatch_request()
web_1 | File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1516, in full_dispatch_request
web_1 | return self.finalize_request(rv)
web_1 | File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1535, in finalize_request
web_1 | response = self.make_response(rv)
web_1 | File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1698, in make_response
web_1 | raise TypeError(
web_1 | TypeError: The view function for 'python' did not return a valid response. The function either returned None or ended without a return statement.
web_1 | 172.24.0.1 - - [13/Sep/2021 08:34:19] "GET /db HTTP/1.1" 500 -
Your view should return a response object. Instead, your view returns the result of the print function, which is always None.
Also, your for loop will exit on the first iteration because of the return statement; that is probably not what you wanted to achieve.
Your code successfully connects to MySQL, otherwise you'd get a different exception.
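A minimal sketch of how the view could be rewritten, collecting every row and returning a string (one valid kind of Flask response); the connection parameters are the ones from the question:

@app.route('/db')
def python():
    connection = mysql.connector.connect(host="mysql", user="root",
                                         password="root", database="test")
    rows = []
    with connection.cursor() as cursor:
        cursor.execute("SELECT * FROM test_table")
        for (userId, firstName, lastName) in cursor:
            # collect each row instead of returning on the first iteration
            rows.append("{}, {}, {}".format(userId, firstName, lastName))
    connection.close()
    return "<br>".join(rows)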
Here is my docker-compose.yml used to create the database container.
version: '3.7'
services:
  application:
    build:
      context: ./app
      dockerfile: dockerfile  # dockerfile-prod
    depends_on:
      - database_mongo
      - database_neo4j
      - etl_pipeline
    environment:
      - flask_env=dev  # flask_env=prod
    volumes:
      - ./app:/app
    ports:
      - "8080:8080"  # - 8080:8080
  database_mongo:
    image: "mongo:4.2"
    expose:
      - 27017
    volumes:
      - ./data/database/mongo:/data/db
  database_neo4j:
    image: neo4j:latest
    expose:
      - 27018
    volumes:
      - ./data/database/neo4j:/data
    ports:
      - "7474:7474"  # web client
      - "7687:7687"  # DB default port
    environment:
      - NEO4J_AUTH=none
  etl_pipeline:
    depends_on:
      - database_mongo
      - database_neo4j
    build:
      context: ./data/etl
      dockerfile: dockerfile  # dockerfile-prod
    volumes:
      - ./data/:/data/
      - ./data/etl:/app/
I'm trying to connect to my neo4j database with the Python driver. I have already been able to connect to MongoDB with this line:
mongo_client = MongoClient(host="database_mongo")
I'm trying to do something similar to the MongoDB call above to connect to my neo4j with GraphDatabase from the neo4j driver, like this:
url = "{scheme}://{host_name}:{port}".format(scheme = "bolt", host_name="database_neo4j", port = 7687)
baseNeo4j = GraphDatabase.driver(url, encrypted=False)
or with py2neo like this:
neo_client = Graph(host="database_neo4j")
However, none of this has worked yet, so I'm not sure whether I'm using the right syntax to use neo4j with Docker. I've tried many things and looked around, but couldn't find the answer...
The whole error message is:
etl_pipeline_1 | MongoClient(host=['database_mongo:27017'], document_class=dict, tz_aware=False, connect=True)
etl_pipeline_1 | Traceback (most recent call last):
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 929, in _connect
etl_pipeline_1 | s.connect(resolved_address)
etl_pipeline_1 | ConnectionRefusedError: [Errno 111] Connection refused
etl_pipeline_1 |
etl_pipeline_1 | During handling of the above exception, another exception occurred:
etl_pipeline_1 |
etl_pipeline_1 | Traceback (most recent call last):
etl_pipeline_1 | File "main.py", line 26, in <module>
etl_pipeline_1 | baseNeo4j = GraphDatabase.driver(url, encrypted=False)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 183, in driver
etl_pipeline_1 | return cls.bolt_driver(parsed.netloc, auth=auth, **config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 196, in bolt_driver
etl_pipeline_1 | return BoltDriver.open(target, auth=auth, **config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 359, in open
etl_pipeline_1 | pool = BoltPool.open(address, auth=auth, pool_config=pool_config, workspace_config=default_workspace_config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 531, in open
etl_pipeline_1 | seeds = [pool.acquire() for _ in range(pool_config.init_size)]
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 531, in <listcomp>
etl_pipeline_1 | seeds = [pool.acquire() for _ in range(pool_config.init_size)]
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 545, in acquire
etl_pipeline_1 | return self._acquire(self.address, timeout)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 409, in _acquire
etl_pipeline_1 | connection = self.opener(address, timeout)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 528, in opener
etl_pipeline_1 | return Bolt.open(addr, auth=auth, timeout=timeout, routing_context=routing_context, **pool_config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 198, in open
etl_pipeline_1 | keep_alive=pool_config.keep_alive,
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 1049, in connect
etl_pipeline_1 | raise last_error
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 1039, in connect
etl_pipeline_1 | s = _connect(resolved_address, timeout, keep_alive)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 943, in _connect
etl_pipeline_1 | raise ServiceUnavailable("Failed to establish connection to {!r} (reason {})".format(resolved_address, error))
etl_pipeline_1 | neo4j.exceptions.ServiceUnavailable: Failed to establish connection to IPv4Address(('172.29.0.2', 7687)) (reason [Errno 111] Connection refused)
OK, so it may not be the best answer, but for anyone else who has this problem: I was able to solve it by adding a sleep(30) at the beginning of main, as sketched below.
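Concretely, that amounts to something like this at the top of main.py (a crude sketch using the connection code from the question; the fixed wait simply gives the neo4j container time to finish starting before the driver connects):

from time import sleep
from neo4j import GraphDatabase

sleep(30)  # wait for the database_neo4j container to accept connections

url = "bolt://database_neo4j:7687"
baseNeo4j = GraphDatabase.driver(url, encrypted=False)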
You can try to create a network and use it across your services. Something like this:
networks:
  neo4j_network:
    driver: bridge
services:
  neo4j:
    image: neo4j:latest
    expose:
      - 27018
    volumes:
      - ./data/database/neo4j:/data
    ports:
      - "7474:7474"  # web client
      - "7687:7687"  # DB default port
    environment:
      - NEO4J_AUTH=none
    networks:
      - neo4j_network
  application:
    build:
      context: ./app
      dockerfile: dockerfile  # dockerfile-prod
    depends_on:
      - database_mongo
      - database_neo4j
      - etl_pipeline
    environment:
      - flask_env=dev  # flask_env=prod
    volumes:
      - ./app:/app
    ports:
      - "8080:8080"
    networks:
      - neo4j_network
Then, for your neo4j driver URL (in your code), make sure to use bolt://host.docker.internal:7687
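A minimal sketch of that driver call (host.docker.internal resolves to the host machine from inside a container on Docker Desktop, so this assumes the bolt port 7687 is published to the host as in the compose file above):

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://host.docker.internal:7687", encrypted=False)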
I am not able to connect to Elasticsearch in Kubernetes from inside a Docker container. My Elasticsearch is accessed via Kubernetes, and I have an index called 'radius_ml_posts'. I am using Elasticsearch's Python library to connect. When I run the whole process in my Python IDE (Spyder), it works just fine. However, when I try to run it inside a Docker container, I get connection issues. What am I missing? Below are my configs and code.
The response at localhost:9200:
{
  "name" : "elasticsearch-dev-client-6858c5f9dc-zbz8p",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "lJJbPJpJRaC1j7k5IGhj7g",
  "version" : {
    "number" : "6.7.0",
    "build_flavor" : "oss",
    "build_type" : "docker",
    "build_hash" : "8453f77",
    "build_date" : "2019-03-21T15:32:29.844721Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
My python code to connect to elasticsearch host:
def get_data_es(question):
    es = Elasticsearch(hosts=[{"host": "elastic", "port": 9200}], connection_class=RequestsHttpConnection,
                       max_retries=30, retry_on_timeout=True, request_timeout=30)
    # es = Elasticsearch(hosts='http://host.docker.internal:5000', connection_class=RequestsHttpConnection, max_retries=30, timeout=30)
    doc = {'author': 'gunner', 'text': 'event', "timestamp": datetime.now()}
    es.indices.refresh(index="radius_ml_posts")
    res = es.index(index="radius_ml_posts", id=1, body=doc)
    res = es.search(index="radius_ml_posts", size=30, body={
        "query": {
            "query_string": {
                "default_field": "search_text",
                "query": question
            }
        }
    })
    return res
My docker-compose.yml file:
version: '2.2'
services:
  elastic:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.0
    container_name: elastic
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9300:9300
      - 9200:9200
    networks:
      - elastic
  myimage:
    image: myimage:myversion
    ports:
      - 5000:5000
    expose:
      - 5000
    networks:
      - elastic
volumes:
  data01:
    driver: local
networks:
  elastic:
    driver: bridge
My Dockerfile:
FROM python:3.7.4
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip3 install -U nltk
RUN python3 -m nltk.downloader all
RUN pip --default-timeout=100 install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python"]
CMD ["main.py"]
The docker commands I am running stepwise:
docker build -t myimage:myversion .
docker-compose up
The error I am getting:
myimage_1 | Traceback (most recent call last):
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app
myimage_1 | response = self.full_dispatch_request()
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request
myimage_1 | rv = self.handle_user_exception(e)
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1820, in handle_user_exception
myimage_1 | reraise(exc_type, exc_value, tb)
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
myimage_1 | raise value
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
myimage_1 | rv = self.dispatch_request()
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
myimage_1 | return self.view_functions[rule.endpoint](**req.view_args)
myimage_1 | File "main.py", line 41, in launch_app
myimage_1 | ques = get_data_es(ques1)
myimage_1 | File "/app/Text_Cleaning.py", line 32, in get_data_es
myimage_1 | es.indices.refresh(index="radius_ml_posts")
myimage_1 | File "/usr/local/lib/python3.7/site-packages/elasticsearch/client/utils.py", line 92, in _wrapped
myimage_1 | return func(*args, params=params, headers=headers, **kwargs)
myimage_1 | File "/usr/local/lib/python3.7/site-packages/elasticsearch/client/indices.py", line 42, in refresh
myimage_1 | "POST", _make_path(index, "_refresh"), params=params, headers=headers
myimage_1 | File "/usr/local/lib/python3.7/site-packages/elasticsearch/transport.py", line 362, in perform_request
myimage_1 | timeout=timeout,
myimage_1 | File "/usr/local/lib/python3.7/site-packages/elasticsearch/connection/http_requests.py", line 157, in perform_request
myimage_1 | raise ConnectionError("N/A", str(e), e)
myimage_1 | elasticsearch.exceptions.ConnectionError: ConnectionError(HTTPConnectionPool(host='elastic', port=9200): Max retries exceeded with url: /radius_ml_posts/_refresh (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f967a9b1710>: Failed to establish a new connection: [Errno -2] Name or service not known'))) caused by: ConnectionError(HTTPConnectionPool(host='elastic', port=9200): Max retries exceeded with url: /radius_ml_posts/_refresh (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f967a9b1710>: Failed to establish a new connection: [Errno -2] Name or service not known')))
Please help in fixing the issue.
Thanks in advance.
I fixed it by using "host.docker.internal" as the host. Code change:
es = Elasticsearch(hosts=[{"host": "host.docker.internal", "port": 9200}], connection_class=RequestsHttpConnection, max_retries=30,
retry_on_timeout=True, request_timeout=30)
You can try to set an ELASTICSEARCH_NODES variable in your application's environment section and then consume that variable in your Python code (see the sketch after the compose file):
version: '2.2'
services:
  elastic:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.0
    container_name: elastic
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9300:9300
      - 9200:9200
    networks:
      - elastic
  myimage:
    image: myimage:myversion
    depends_on:
      - elastic
    environment:
      - ELASTICSEARCH_NODES=http://elastic:9200
    ports:
      - 5000:5000
    expose:
      - 5000
    networks:
      - elastic
volumes:
  data01:
    driver: local
networks:
  elastic:
    driver: bridge
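A minimal sketch of consuming that variable on the Python side (the variable name and value are the ones from the compose file above; the fallback default is only a convenience for running outside compose):

import os
from elasticsearch import Elasticsearch, RequestsHttpConnection

# read the node URL injected by docker-compose instead of hard-coding a host
nodes = os.environ.get("ELASTICSEARCH_NODES", "http://elastic:9200")
es = Elasticsearch(hosts=[nodes], connection_class=RequestsHttpConnection,
                   max_retries=30, retry_on_timeout=True, request_timeout=30)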
I know this may seem like a duplicate of docker-compose with multiple databases, but I still can't manage to get it working after going through the answers.
Here is my docker-compose.yml:
version: '3'
services:
  backend:
    image: backend:1.0
    build: ./backend
    ports:
      - "9090:9090"
    depends_on:
      - db
      - ppt
    environment:
      - DATABASE_HOST=db
  db:
    image: main_db:26.03.18
    restart: always
    build: ./db
    ports:
      - "5432:5432"
  ppt:
    image: ppt_generator:1.0
    build: ./ppt
    ports:
      - "6060:6060"
  login:
    image: login:1.0
    build: ./login
    ports:
      - "7070:7070"
    depends_on:
      - login_db
  login_db:
    image: login_db:27.04.2018
    restart: always
    build: ./login_db
    ports:
      - "5433:5433"
Notice that I have one db on port 5433 and the other on 5432. However, when I run docker ps after starting the containers, I get the following, and I don't fully understand what is going on with the ports.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
997f816ddff3 backend:1.0 "/bin/sh -c 'pytho..." About a minute ago Up About a minute 0.0.0.0:9090->9090/tcp backendservices_backend_1
759546109a66 login:1.0 "/bin/sh -c 'pytho..." About a minute ago Up About a minute 0.0.0.0:7070->7070/tcp, 9090/tcp backendservices_login_1
a2a26b72dd0c login_db:27.04.2018 "docker-entrypoint..." About a minute ago Up About a minute 5432/tcp, 0.0.0.0:5433->5433/tcp backendservices_login_db_1
3f97de7fc41e main_db:26.03.18 "docker-entrypoint..." About a minute ago Up About a minute 0.0.0.0:5432->5432/tcp backendservices_db_1
1a61e741ccba ppt_generator:1.0 "/bin/sh -c 'pytho..." About a minute ago Up About a minute 0.0.0.0:6060->6060/tcp backendservices_ppt_1
Both my db dockerfiles are essentially identical except for the port number I expose:
FROM postgres:9.6.3
ENV POSTGRES_USER ludo
ENV POSTGRES_PASSWORD password
ENV POSTGRES_DB login
EXPOSE 5433
ADD db_dump.sql /docker-entrypoint-initdb.d
This is the error I get:
backend_1 | Traceback (most recent call last):
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2147, in _wrap_pool_connect
backend_1 | return fn()
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 387, in connect
backend_1 | return _ConnectionFairy._checkout(self)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 766, in _checkout
backend_1 | fairy = _ConnectionRecord.checkout(pool)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 516, in checkout
backend_1 | rec = pool._do_get()
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 1138, in _do_get
backend_1 | self._dec_overflow()
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
backend_1 | compat.reraise(exc_type, exc_value, exc_tb)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 187, in reraise
backend_1 | raise value
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 1135, in _do_get
backend_1 | return self._create_connection()
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 333, in _create_connection
backend_1 | return _ConnectionRecord(self)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 461, in __init__
backend_1 | self.__connect(first_connect_check=True)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 651, in __connect
backend_1 | connection = pool._invoke_creator(self)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 105, in connect
backend_1 | return dialect.connect(*cargs, **cparams)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 393, in connect
backend_1 | return self.dbapi.connect(*cargs, **cparams)
backend_1 | File "/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py", line 130, in connect
backend_1 | conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
backend_1 | psycopg2.OperationalError: could not connect to server: Connection refused
backend_1 | Is the server running on host "localhost" (127.0.0.1) and accepting
backend_1 | TCP/IP connections on port 5432?
backend_1 | could not connect to server: Cannot assign requested address
backend_1 | Is the server running on host "localhost" (::1) and accepting
backend_1 | TCP/IP connections on port 5432?
backend_1 |
Why is the db not running on port 5432? It used to work when I only had one database; now, with two, it seems to be confused...?
UPDATE
I can access the databases locally on ports 5432 and 5433 respectively. However, from my backend container I can't: the backend container doesn't seem to see anything running on port 5432. How do I make the db container's port 5432 visible to the backend container?
UPDATE
After a lot of fiddling around, I got it to work. As @Iarwa1n suggested, you map one db as "5432:5432" and the other as "5433:5432". The error I encountered was due to how I was calling postgres from the application itself. It is important to realize the postgres host is not localhost anymore, but whatever name you gave your database service in docker-compose.yml; in my case, db for the backend and login_db for the login service. Additionally, I had to change my driver from postgresql to postgres (not sure why this is...).
As such, my db_url ended up looking like this from within my Python backend app:
postgres://ludo:password@db:5432/main_db
And defined in this way:
DATABASE_CONFIG = {
    'driver': 'postgres',
    'host': 'db',
    'user': 'ludo',
    'password': 'password',
    'port': 5432,
    'dbname': 'main_db'
}
db_url = '{driver}://{user}:{password}@{host}:{port}/{dbname}'.format(**DATABASE_CONFIG)
Two things to note:
1) Regardless of how you mapped your ports, you always have to connect to postgres's default port 5432 from within your app.
2) If you're using the requests Python library (as I was), make sure to change those URLs appropriately as well. For example, I had a ppt service I was calling via requests, and I had to change the URL to 'http://ppt:6060/api/getPpt' instead of 'http://localhost:6060/api/getPpt', as in the snippet below.
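For illustration, the requests call from inside the compose network would look like this ('ppt' being the compose service name; localhost would point back at the calling container itself):

import requests

resp = requests.get('http://ppt:6060/api/getPpt')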
The postgres process in both your containers listens on 5432, as this is the default behavior. EXPOSE alone does not change that; it just means that the container exposes port 5433 instead of 5432, but no process is listening on that port.
Try to change the following:
FROM postgres:9.6.3
ENV POSTGRES_USER ludo
ENV POSTGRES_PASSWORD password
ENV POSTGRES_DB login
EXPOSE 5432
ADD db_dump.sql /docker-entrypoint-initdb.d
and then change the docker-compose like this:
login_db:
image: login_db:27.04.2018
restart: always
build: ./login_db
ports:
- "5433:5432"
Now you can access db at 5432 and login_db at 5433 from the host. Note that you still need to use 5432 to access either db from another container.
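To make that last point concrete, here is a minimal sketch of connecting to login_db from another container (user, password, and database taken from the Dockerfile above; 5432 is the in-network port regardless of the 5433 host mapping):

import psycopg2

# connect by compose service name on postgres's default internal port
conn = psycopg2.connect(host="login_db", port=5432,
                        user="ludo", password="password", dbname="login")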