docker-compose with multiple postgres databases from sql dumps - python

I know this may seem like a duplicate of docker-compose with multiple databases, but I still can't manage to get it working after going through the answers.
Here is my docker-compose.yml:
version: '3'
services:
  backend:
    image: backend:1.0
    build: ./backend
    ports:
      - "9090:9090"
    depends_on:
      - db
      - ppt
    environment:
      - DATABASE_HOST=db
  db:
    image: main_db:26.03.18
    restart: always
    build: ./db
    ports:
      - "5432:5432"
  ppt:
    image: ppt_generator:1.0
    build: ./ppt
    ports:
      - "6060:6060"
  login:
    image: login:1.0
    build: ./login
    ports:
      - "7070:7070"
    depends_on:
      - login_db
  login_db:
    image: login_db:27.04.2018
    restart: always
    build: ./login_db
    ports:
      - "5433:5433"
Notice that I have one db on port 5433 and the other on 5432. However, when I run docker ps after starting the containers I get the following. I don't fully understand what is going on with the ports.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
997f816ddff3 backend:1.0 "/bin/sh -c 'pytho..." About a minute ago Up About a minute 0.0.0.0:9090->9090/tcp backendservices_backend_1
759546109a66 login:1.0 "/bin/sh -c 'pytho..." About a minute ago Up About a minute 0.0.0.0:7070->7070/tcp, 9090/tcp backendservices_login_1
a2a26b72dd0c login_db:27.04.2018 "docker-entrypoint..." About a minute ago Up About a minute 5432/tcp, 0.0.0.0:5433->5433/tcp backendservices_login_db_1
3f97de7fc41e main_db:26.03.18 "docker-entrypoint..." About a minute ago Up About a minute 0.0.0.0:5432->5432/tcp backendservices_db_1
1a61e741ccba ppt_generator:1.0 "/bin/sh -c 'pytho..." About a minute ago Up About a minute 0.0.0.0:6060->6060/tcp backendservices_ppt_1
Both my db dockerfiles are essentially identical except for the port number I expose:
FROM postgres:9.6.3
ENV POSTGRES_USER ludo
ENV POSTGRES_PASSWORD password
ENV POSTGRES_DB login
EXPOSE 5433
ADD db_dump.sql /docker-entrypoint-initdb.d
This is the error I get:
backend_1 | Traceback (most recent call last):
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2147, in _wrap_pool_connect
backend_1 | return fn()
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 387, in connect
backend_1 | return _ConnectionFairy._checkout(self)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 766, in _checkout
backend_1 | fairy = _ConnectionRecord.checkout(pool)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 516, in checkout
backend_1 | rec = pool._do_get()
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 1138, in _do_get
backend_1 | self._dec_overflow()
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
backend_1 | compat.reraise(exc_type, exc_value, exc_tb)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 187, in reraise
backend_1 | raise value
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 1135, in _do_get
backend_1 | return self._create_connection()
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 333, in _create_connection
backend_1 | return _ConnectionRecord(self)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 461, in __init__
backend_1 | self.__connect(first_connect_check=True)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool.py", line 651, in __connect
backend_1 | connection = pool._invoke_creator(self)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 105, in connect
backend_1 | return dialect.connect(*cargs, **cparams)
backend_1 | File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 393, in connect
backend_1 | return self.dbapi.connect(*cargs, **cparams)
backend_1 | File "/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py", line 130, in connect
backend_1 | conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
backend_1 | psycopg2.OperationalError: could not connect to server: Connection refused
backend_1 | Is the server running on host "localhost" (127.0.0.1) and accepting
backend_1 | TCP/IP connections on port 5432?
backend_1 | could not connect to server: Cannot assign requested address
backend_1 | Is the server running on host "localhost" (::1) and accepting
backend_1 | TCP/IP connections on port 5432?
backend_1 |
Why is the db not running on port 5432? It used to work when I only had one database and now with two it seems to be confused...?
UPDATE
I can access the databases locally on ports 5432 and 5433 respectively. However, from my backend container I can't. My backend container does not seem to see anything on port 5432. How do I make the db container's port 5432 visible to the backend container?

UPDATE
After a lot of fiddling around I got it to work. As @Iarwa1n suggested, you map one db as "5432:5432" and the other as "5433:5432". The error I encountered was due to how I was calling postgres from the application itself. It is important to realize the postgres host is not localhost anymore, but whatever name you gave your database service in docker-compose.yml. In my case: db for the backend and login_db for the login service. Additionally, I had to change my driver from postgresql to postgres – not sure why this is...
As such, my db_url ended up looking like this from within my python backend app:
postgres://ludo:password@db:5432/main_db
And defined in this way:
DATABASE_CONFIG = {
    'driver': 'postgres',
    'host': 'db',
    'user': 'ludo',
    'password': 'password',
    'port': 5432,
    'dbname': 'main_db'
}
db_url = '{driver}://{user}:{password}@{host}:{port}/{dbname}'.format(**DATABASE_CONFIG)
Two things to note:
1) Regardless of how you mapped your ports, you always have to connect to postgres default port 5432 from within your app
2) If you're using the requests Python library (as I was), then make sure to change the URLs appropriately as well. For example, I had a ppt service I was calling via requests, and I had to change the URL to 'http://ppt:6060/api/getPpt' instead of 'http://localhost:6060/api/getPpt'

The postgres process in both your containers listens on 5432, as this is the default behavior. EXPOSE alone does not change that; it merely declares that the container exposes port 5433 instead of 5432, but no process is listening on that port.
Try to change the following:
FROM postgres:9.6.3
ENV POSTGRES_USER ludo
ENV POSTGRES_PASSWORD password
ENV POSTGRES_DB login
EXPOSE 5432
ADD db_dump.sql /docker-entrypoint-initdb.d
and then change the docker-compose like this:
login_db:
  image: login_db:27.04.2018
  restart: always
  build: ./login_db
  ports:
    - "5433:5432"
Now you can access "db" at 5432 and "login_db" at 5433 from the host. Note that you still need to use 5432 to access either db from another container.
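To make that concrete, here is a minimal sketch of the resulting connection strings, assuming SQLAlchemy (which the question's traceback shows) and the credentials from the question's Dockerfiles:

from sqlalchemy import create_engine

# Inside the compose network: use the service name as host and the
# container port, which is 5432 for both databases after the remap above.
main_engine = create_engine('postgresql://ludo:password@db:5432/main_db')
login_engine = create_engine('postgresql://ludo:password@login_db:5432/login')

# From the host, use the published ports instead:
#   localhost:5432 -> db, localhost:5433 -> login_db

Newer SQLAlchemy releases only accept the postgresql:// dialect name; the question's update used postgres://, which some libraries and older versions also accept.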

Related

docker+flask+gunicorn+redis OSError

I have had a very strange problem when I build my docker service. My stack is docker + gunicorn + flask + redis + vue. I can build the image and bring the service up normally, but when I log in, the following error comes out.
dw-backend-v2 | 2023-01-12 11:04:30,546 exception_handler.py [line: 28] ERROR: Traceback (most recent call last):
dw-backend-v2 | File "/opt/bitnami/python/lib/python3.9/site-packages/redis/connection.py", line 611, in connect
dw-backend-v2 | sock = self.retry.call_with_retry(
dw-backend-v2 | File "/opt/bitnami/python/lib/python3.9/site-packages/redis/retry.py", line 46, in call_with_retry
dw-backend-v2 | return do()
dw-backend-v2 | File "/opt/bitnami/python/lib/python3.9/site-packages/redis/connection.py", line 612, in <lambda>
dw-backend-v2 | lambda: self._connect(), lambda error: self.disconnect(error)
dw-backend-v2 | File "/opt/bitnami/python/lib/python3.9/site-packages/redis/connection.py", line 677, in _connect
dw-backend-v2 | raise err
dw-backend-v2 | File "/opt/bitnami/python/lib/python3.9/site-packages/redis/connection.py", line 665, in _connect
dw-backend-v2 | sock.connect(socket_address)
dw-backend-v2 | OSError: [Errno 99] Cannot assign requested address
docker-compose:
version: '2'
services:
  backend:
    build:
      context: ./dw_backend
      dockerfile: Dockerfile
    image: dw-backend:2.0.0
    container_name: dw-backend-v2
    restart: always
    ports:
      - "9797:9797"
    volumes:
      - "./dw_backend:/home/data_warehouse/app"
    # privileged: true
    environment:
      TZ: "Asia/Shanghai"
      FLASK_ENV: DEBUG
      RD_HOST: redis
      RD_PORT: 6379
      RD_DB: 2
      RD_PWD:
      RD_POOL_SIZE: 10
      RD_KEY_EXPIRE: 43200
  frontend:
    hostname: dw-frontend-v2
    container_name: dw-frontend-v2
    restart: always
    build:
      context: ./dw_frontend
      dockerfile: Dockerfile
    ports:
      - "8150:80"
      - "2443:443"
    volumes:
      - ./dw_frontend/dist:/usr/share/nginx/html
    links:
      - backend
    depends_on:
      - backend
    environment:
      TZ: "Asia/Shanghai"
networks:
  default:
    external:
      name: common_net
I even deleted all the code related to Redis, but the same error still happened! I couldn't find any solution to this problem.
My other service works normally, and both services connect to the same Redis instance; the difference between service I and service II is that service II is a RESTful API and service I is not.
Does anyone know the reason for this problem?
There are also no ports stuck in "TIME_WAIT" status!
I have tried deleting all the code related to Redis and even writing a new method to connect to Redis, but no miracle has happened!
I hope someone can help me solve it.
OK, I solved this problem: I added the parameter "extra_hosts: localhost: redis" to my docker-compose.yml file, and it's solved!
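For reference, that one-liner expanded into compose list syntax would look roughly like this (a reconstruction of the reported fix, not verified; extra_hosts normally maps a hostname to an IP address, so whether a service name works here may depend on the Docker version):

backend:
  # ... rest of the service definition as above ...
  extra_hosts:
    - "localhost:redis"  # the reported mapping, verbatim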

MongoDB shows timeout error when connected with flask over docker

I've been trying to connect flask with mongodb over docker but constantly get the timeout error. Here's my code and error below. Please let me know where I've gone wrong? Thanks.
Also, I've intentionally chosen port 27018 instead of 27017
app.py code:
from pymongo import MongoClient

client = MongoClient(host="test_mongodb",
                     port=27018,
                     username="root",
                     password="rootpassword",
                     authSource="admin")

# db is same as directory created to identify database
# default port is 27017
db = client.aNewDB
# db is a new database
UserNum = db["UserNum"]
# UserNum is a new Collection
UserNum.insert_one({'num_of_users': 0})
docker-compose.yml
version: '3'
services:
  web:
    build: ./Web
    ports:
      - "5000:5000"
    links:
      - db  # Web is dependent on db
  db:
    image: mongo:latest
    hostname: test_mongodb
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
    ports:
      - "27018:27018"
Error during docker-compose up:
web_1 | Traceback (most recent call last):
web_1 | File "app.py", line 21, in <module>
web_1 | UserNum.insert_one({'num_of_users':0})
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/collection.py", line 628, in insert_one
web_1 | comment=comment,
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/collection.py", line 562, in _insert_one
web_1 | self.__database.client._retryable_write(acknowledged, _insert_command, session)
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1447, in _retryable_write
web_1 | with self._tmp_session(session) as s:
web_1 | File "/usr/local/lib/python3.7/contextlib.py", line 112, in __enter__
web_1 | return next(self.gen)
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1729, in _tmp_session
web_1 | s = self._ensure_session(session)
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1712, in _ensure_session
web_1 | return self.__start_session(True, causal_consistency=False)
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1657, in __start_session
web_1 | self._topology._check_implicit_session_support()
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/topology.py", line 538, in _check_implicit_session_support
web_1 | self._check_session_support()
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/topology.py", line 555, in _check_session_support
web_1 | readable_server_selector, self.get_server_selection_timeout(), None
web_1 | File "/usr/local/lib/python3.7/site-packages/pymongo/topology.py", line 240, in _select_servers_loop
web_1 | % (self._error_message(selector), timeout, self.description)
web_1 | pymongo.errors.ServerSelectionTimeoutError: test_mongodb:27018: timed out, Timeout: 30s, Topology Description: <TopologyDescription id: 62fa0685c58c2b61f79ea52e, topology_type: Unknown, servers: [<ServerDescription ('test_mongodb', 27018) server_type: Unknown, rtt: None, error=NetworkTimeout('test_mongodb:27018: timed out')>]>
flask_project_web_1 exited with code 1
In a docker-compose config file, ports only exposes the specified ports from the container network to the host network. It does not tell the mongodb container to serve at port 27018; mongodb will still open its port at 27017 even though you specified the ports option. Therefore, you should tell the mongodb container by using the command option.
Add the line command: mongod --port 27018 to the db service, then it should be working.
like:
db:
  image: mongo:latest
  hostname: test_mongodb
  command: mongod --port 27018
  environment:
    - MONGO_INITDB_ROOT_USERNAME=admin
    - MONGO_INITDB_ROOT_PASSWORD=password
  ports:
    - "27018:27018"
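Alternatively (my note, not part of the original answer), you can leave mongod on its default port and remap only on the host side; clients inside the compose network, like the question's app.py, would then have to connect to test_mongodb:27017 instead:

db:
  image: mongo:latest
  hostname: test_mongodb
  environment:
    - MONGO_INITDB_ROOT_USERNAME=admin
    - MONGO_INITDB_ROOT_PASSWORD=password
  ports:
    - "27018:27017"  # host port 27018 -> container port 27017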

The function either returned None or ended without a return statement connecting python with mysql using docker

I'm trying to connect Python with MySQL; each runs in a different Docker container. I can access MySQL from my Ubuntu terminal, but when I try to access it with the URL I used in Python, it doesn't work.
Docker-compose
version: "3.9" # optional since v1.27.0
services:
mysql:
image: 'mysql:latest'
restart: always
volumes:
- './my-vol/mysql_data:/var/lib/mysql'
ports:
- '3306:3306'
web:
build: .
ports:
- "5000:5000"
volumes:
- .:/my-vol
Python file
from flask import Flask
app = Flask(__name__)
import sqlalchemy as db
import mysql.connector
from mysql.connector import Error

@app.route('/db')
def python():
    connection = mysql.connector.connect(host="mysql", user="root", password="root", database="test")
    cursor = connection.cursor()
    with connection.cursor() as cursor:
        cursor.execute("Select * from test_table")
        for (userId, firstName, lastName) in cursor:
            return print("{}, {}, {}".format(userId, firstName, lastName))
Finally, this is the complete error that appears when I try to access the /db URL.
[2021-09-13 08:34:19,119] ERROR in app: Exception on /db [GET]
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2070, in wsgi_app
web_1 | response = self.full_dispatch_request()
web_1 | File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1516, in full_dispatch_request
web_1 | return self.finalize_request(rv)
web_1 | File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1535, in finalize_request
web_1 | response = self.make_response(rv)
web_1 | File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1698, in make_response
web_1 | raise TypeError(
web_1 | TypeError: The view function for 'python' did not return a valid response. The function either returned None or ended without a return statement.
web_1 | 172.24.0.1 - - [13/Sep/2021 08:34:19] "GET /db HTTP/1.1" 500 -
Your view should return a response object. Instead, your view returns the result of the print function, which is always None.
Also, your for loop will exit on the first iteration due to the return statement. It looks like that is not what you wanted to achieve.
Your code successfully connects to mysql; otherwise you'd get a different exception.
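For illustration, a minimal sketch of a view that addresses both points (my reconstruction, assuming the column names from the question's loop):

from flask import Flask, jsonify
import mysql.connector

app = Flask(__name__)

@app.route('/db')
def python():
    connection = mysql.connector.connect(
        host="mysql", user="root", password="root", database="test")
    rows = []
    with connection.cursor() as cursor:
        cursor.execute("SELECT * FROM test_table")
        for (userId, firstName, lastName) in cursor:
            # collect every row instead of returning on the first one
            rows.append({"userId": userId,
                         "firstName": firstName,
                         "lastName": lastName})
    connection.close()
    return jsonify(rows)  # a valid Flask response instead of None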

Unable to connect to neo4j database on docker container with docker compose

Here is my docker-compose.yml used to create the database container.
version: '3.7'
services:
  application:
    build:
      context: ./app
      dockerfile: dockerfile  # dockerfile-prod
    depends_on:
      - database_mongo
      - database_neo4j
      - etl_pipeline
    environment:
      - flask_env=dev  # flask_env=prod
    volumes:
      - ./app:/app
    ports:
      - "8080:8080"
  database_mongo:
    image: "mongo:4.2"
    expose:
      - 27017
    volumes:
      - ./data/database/mongo:/data/db
  database_neo4j:
    image: neo4j:latest
    expose:
      - 27018
    volumes:
      - ./data/database/neo4j:/data
    ports:
      - "7474:7474"  # web client
      - "7687:7687"  # DB default port
    environment:
      - NEO4J_AUTH=none
  etl_pipeline:
    depends_on:
      - database_mongo
      - database_neo4j
    build:
      context: ./data/etl
      dockerfile: dockerfile  # dockerfile-prod
    volumes:
      - ./data/:/data/
      - ./data/etl:/app/
I'm trying to connect to my neo4j database with python driver. I have already been able to connect to mongoDb with this line:
mongo_client = MongoClient(host="database_mongo")
I'm trying to do something similar to connect to my neo4j, using neo4j's GraphDatabase like this:
url = "{scheme}://{host_name}:{port}".format(scheme = "bolt", host_name="database_neo4j", port = 7687)
baseNeo4j = GraphDatabase.driver(url, encrypted=False)
or with py2neo like this
neo_client = Graph(host="database_neo4j")
However, none of this has worked yet, so I'm not sure if I'm using the right syntax to use neo4j with Docker. I've tried many things and looked around, but couldn't find the answer...
The whole error message is:
etl_pipeline_1 | MongoClient(host=['database_mongo:27017'], document_class=dict, tz_aware=False, connect=True)
etl_pipeline_1 | Traceback (most recent call last):
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 929, in _connect
etl_pipeline_1 | s.connect(resolved_address)
etl_pipeline_1 | ConnectionRefusedError: [Errno 111] Connection refused
etl_pipeline_1 |
etl_pipeline_1 | During handling of the above exception, another exception occurred:
etl_pipeline_1 |
etl_pipeline_1 | Traceback (most recent call last):
etl_pipeline_1 | File "main.py", line 26, in <module>
etl_pipeline_1 | baseNeo4j = GraphDatabase.driver(url, encrypted=False)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 183, in driver
etl_pipeline_1 | return cls.bolt_driver(parsed.netloc, auth=auth, **config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 196, in bolt_driver
etl_pipeline_1 | return BoltDriver.open(target, auth=auth, **config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/__init__.py", line 359, in open
etl_pipeline_1 | pool = BoltPool.open(address, auth=auth, pool_config=pool_config, workspace_config=default_workspace_config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 531, in open
etl_pipeline_1 | seeds = [pool.acquire() for _ in range(pool_config.init_size)]
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 531, in <listcomp>
etl_pipeline_1 | seeds = [pool.acquire() for _ in range(pool_config.init_size)]
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 545, in acquire
etl_pipeline_1 | return self._acquire(self.address, timeout)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 409, in _acquire
etl_pipeline_1 | connection = self.opener(address, timeout)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 528, in opener
etl_pipeline_1 | return Bolt.open(addr, auth=auth, timeout=timeout, routing_context=routing_context, **pool_config)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 198, in open
etl_pipeline_1 | keep_alive=pool_config.keep_alive,
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 1049, in connect
etl_pipeline_1 | raise last_error
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 1039, in connect
etl_pipeline_1 | s = _connect(resolved_address, timeout, keep_alive)
etl_pipeline_1 | File "/usr/local/lib/python3.7/site-packages/neo4j/io/__init__.py", line 943, in _connect
etl_pipeline_1 | raise ServiceUnavailable("Failed to establish connection to {!r} (reason {})".format(resolved_address, error))
etl_pipeline_1 | neo4j.exceptions.ServiceUnavailable: Failed to establish connection to IPv4Address(('172.29.0.2', 7687)) (reason [Errno 111] Connection refused)
OK, so it may not be the best answer, but for anyone else who has this problem: I was able to solve it by adding a sleep(30) at the beginning of main.
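A slightly more robust variant of the same idea (my sketch, not part of the original answer) is to poll until the driver can actually connect instead of sleeping for a fixed time:

import time

from neo4j import GraphDatabase
from neo4j.exceptions import ServiceUnavailable

url = "bolt://database_neo4j:7687"
driver = None
for attempt in range(30):
    try:
        driver = GraphDatabase.driver(url, encrypted=False)
        driver.verify_connectivity()  # available in recent 4.x/5.x drivers
        break  # Neo4j is up and reachable
    except ServiceUnavailable:
        time.sleep(1)  # Neo4j is still starting; try again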
You can try to create a network and use it across your services. Something like this:
networks:
  neo4j_network:
    driver: bridge

services:
  neo4j:
    image: neo4j:latest
    expose:
      - 27018
    volumes:
      - ./data/database/neo4j:/data
    ports:
      - "7474:7474"  # web client
      - "7687:7687"  # DB default port
    environment:
      - NEO4J_AUTH=none
    networks:
      - neo4j_network
  application:
    build:
      context: ./app
      dockerfile: dockerfile  # dockerfile-prod
    depends_on:
      - database_mongo
      - database_neo4j
      - etl_pipeline
    environment:
      - flask_env=dev  # flask_env=prod
    volumes:
      - ./app:/app
    ports:
      - "8080:8080"
    networks:
      - neo4j_network
Then, for your neo4j driver url (in your code), make sure to use bolt://host.docker.internal:7687

Python requests in Docker Compose containers

Problem
I have a 2-container docker-compose.yml file.
One of the containers is a small FastAPI app.
The other is just trying to hit the API using Python's requests package.
From the host I can access the API container with the exact same code that the other container runs, and it works, but it does not work from within that container.
docker-compose.yml
version: "3.8"
services:
read-api:
build:
context: ./read-api
depends_on:
- "toy-api"
networks:
- ds-net
toy-api:
build:
context: ./api
networks:
- ds-net
ports:
- "80:80"
networks:
ds-net:
Relevant requests code
from requests import Session

def post_to_api(session, raw_input, path):
    print(f"The script is sending: {raw_input}")
    print(f"The script is sending it to: {path}")
    response = session.post(path, json={"payload": raw_input})
    print(f"The script received: {response.text}")

def get_from_api(session, path):
    print(f"The datalake script is trying to GET from: {path}")
    response = session.get(path)
    print(f"The datalake script received: {response.text}")

session = Session()
session.trust_env = False  ### I got that from here: https://stackoverflow.com/a/50326101/534238

get_from_api(session, path="http://localhost/test")
post_to_api(session, "this is a test", path="http://localhost/raw")
Running It REPL-Style
If I create an interactive session and run those exact commands above in the requests code portion, it works:
>>> get_from_api(session, path="http://localhost/test")
The script is trying to GET from: http://localhost/test
The script received: {"payload":"Yes, you reached here..."}
>>> post_to_api(session, "this is a test", path="http://localhost/raw")
The script is sending: this is a test
The script is sending it to: http://localhost/raw
The script received: {"payload":"received `raw_input`: this is a test"}
To be clear: the API code is still being run as a container, and that container was still created with the docker-compose.yml file. (In other words, the API container is working properly, when accessed from the host.)
Running Within Container
Doing the same thing within the container, I get the following (fairly long) errors:
read-api_1 | The script is trying to GET from: http://localhost/test
read-api_1 | Traceback (most recent call last):
read-api_1 | File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 159, in _new_conn
read-api_1 | conn = connection.create_connection(
read-api_1 | File "/usr/local/lib/python3.8/site-packages/urllib3/util/connection.py", line 84, in create_connection
read-api_1 | raise err
read-api_1 | File "/usr/local/lib/python3.8/site-packages/urllib3/util/connection.py", line 74, in create_connection
read-api_1 | sock.connect(sa)
read-api_1 | ConnectionRefusedError: [Errno 111] Connection refused
read-api_1 |
read-api_1 | During handling of the above exception, another exception occurred:
.
.
.
read-api_1 | Traceback (most recent call last):
read-api_1 | File "access_api.py", line 99, in <module>
read-api_1 | get_from_api(session, path="http://localhost/test")
read-api_1 | File "access_datalake.py", line 86, in get_from_api
read-api_1 | response = session.get(path)
read-api_1 | File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 543, in get
read-api_1 | return self.request('GET', url, **kwargs)
read-api_1 | File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 530, in request
read-api_1 | resp = self.send(prep, **send_kwargs)
read-api_1 | File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 643, in send
read-api_1 | r = adapter.send(request, **kwargs)
read-api_1 | File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 516, in send
read-api_1 | raise ConnectionError(e, request=request)
read-api_1 | requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /test (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffa9c69b3a0>: Failed to establish a new connection: [Errno 111] Connection refused'))
ai_poc_work_read-api_1 exited with code 1
Attempts to Solve
I thought the problem was with how the host identifies itself within the container group, or whether that origin could be accessed, so I have already tried the following changes, with no success:
Instead of using localhost as the host, I used read-api.
Actually, I started with read-api, and had no luck, but once using localhost, I could at least use REPL on the host machine, as shown above.
I also tried 0.0.0.0, no luck. (I did not expect that to fix it.)
I have changed what CORS ORIGINS are allowed in the API, including all of the possible paths for the container that is trying to read, and just using "*" to flag all CORS origins. No luck.
What am I doing wrong? It seems the problem must be with the containers, or maybe how requests interacts with containers, but I cannot figure out what.
Here are some relevant GitHub issues or SO answers I found, but none solved it:
GitHub issue: Docker Compose problems with requests
GitHub issue: Solving high latency requests in Docker containers
SO problem: containers communicating with requests
Within the Docker network, applications must be accessed with the service names defined in the docker-compose.yml.
If you're trying to access the toy-api service, use
get_from_api(session, path="http://toy-api/test")
You can access the application via http://localhost/test on your host machine because Docker publishes the application's port to the host. However, loosely speaking, within the Docker network, localhost does not refer to the host's localhost but only to the container's own localhost. And in the case of the read-api service, there is no application listening on http://localhost/test.
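Applied to the script in the question, both calls would then target the service name (same helper functions as above):

get_from_api(session, path="http://toy-api/test")
post_to_api(session, "this is a test", path="http://toy-api/raw")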
