I have a Flask app that needs to make a request to a gRPC server when a request is made to the Flask endpoint.
@main.route("/someroute", methods=["POST"])
def some_function():
    # Do something here
    make_grpc_request(somedata)
    return create_response(data=None, message="Something happened")

def make_grpc_request(somedata):
    channel = grpc.insecure_channel('localhost:30001')
    stub = some_proto_pb2_grpc.SomeServiceStub(channel)
    request = some_proto_pb2.SomeRequest(id=1)
    response = stub.SomeFunction(request)
    logger.info(response)
But I keep getting the error InactiveRpcError of RPC that terminated with: StatusCode.UNAVAILABLE, "failed to connect to all addresses".
Just putting the client code inside a normal .py file works fine, and making the request inside BloomRPC works fine too, so it couldn't be a server issue.
Is this something to do with how Flask works, and I'm just missing something?
I have also tried using https://github.com/public/sonora without any success like this:
with sonora.client.insecure_web_channel("localhost:30001") as channel:
    stub = some_proto_pb2_grpc.SomeServiceStub(channel)
    request = some_proto_pb2.SomeRequest(id=1)
    response = stub.SomeFunction(request)
docker-compose.yml
version: "3.7"
services:
core-profile: #This is where the grpc requests are sent to
container_name: core-profile
build:
context: ./app/profile/
target: local
volumes:
- ./app/profile/:/usr/src/app/
env_file:
- ./app/profile/database.env
- ./app/profile/jwt.env
- ./app/profile/oauth2-dev.env
environment:
- APP_PORT=50051
- PYTHONUNBUFFERED=1
- POSTGRES_HOST=core-profile-db
ports:
- 30001:50051
expose:
- 50051
depends_on:
- core-profile-db
core-profile-db:
image: postgres:10-alpine
expose:
- 5432
ports:
- 54321:5432
env_file:
- ./app/profile/database.env
app-flask-server-db:
image: postgres:10-alpine
expose:
- 5433
ports:
- 54333:5433
env_file:
- ./app/flask-server/.env
flask-server:
build:
context: ./app/flask-server/
dockerfile: Dockerfile-dev
volumes:
- ./app/flask-server:/usr/src/app/
env_file:
- ./app/flask-server/.env
environment:
- FLASK_ENV=docker
ports:
- 5000:5000
depends_on:
- app-flask-server-db
volumes:
app-flask-server-db:
name: app-flask-server-db
Your Python app (service) should reference the gRPC service as core-profile:50051.
The host name is the Compose service name core-profile and, because the Flask service is also inside the Compose network, it must use the container port 50051 rather than the published port.
localhost:30001 is how you'd access it from the Compose host.
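A minimal sketch of the corrected client call (reusing the stub and message names from the question):

import grpc
import some_proto_pb2
import some_proto_pb2_grpc

def make_grpc_request(somedata):
    # From inside the Compose network, dial the service name and the
    # container port; localhost:30001 only resolves from the host.
    with grpc.insecure_channel("core-profile:50051") as channel:
        stub = some_proto_pb2_grpc.SomeServiceStub(channel)
        request = some_proto_pb2.SomeRequest(id=1)
        return stub.SomeFunction(request)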
I have a Docker project with 2 containers (a Flask RESTful API and MongoDB).
On first start I need MongoDB to be populated with data stored locally.
To achieve this I usually use a Python script with pymongo. To execute the script on the MongoDB container's first start, I tried running it (mounted as a volume) through init-db.sh, but it fails because the python command is not recognized. Is the only way to reproduce all the steps of populate.py in an init-db.js?
docker-compose.yml
version: "3.7"
services:
mongodb:
container_name: mongodb
image: mongo:latest
ports:
- "27017:27017"
volumes:
- ./init-db.sh:/docker-entrypoint-initdb.d/init-db.sh:ro
web:
container_name: web
build: ./web
ports:
- "5000:5000"
init-db.sh
#!/bin/bash
# shellcheck disable=SC2164
python -m init-db.py
init-db.py
import os
import glob
import json
from pymongo import MongoClient

if __name__ == "__main__":
    basepath = os.path.join(os.path.expanduser(r'~\Documents'), r"DataHub")
    client = MongoClient('mongodb://mongodb:27017/')
    db = client["database"]
    collection = db["documents"]
    # Populate MongoDB
    for filepath in glob.glob(basepath + "/*/*/*.json"):
        file = json.load(open(filepath, "r"))
        # data pre-processing...
        collection.insert_one(file)
    # Create Indexes
    collection.create_index("f1")
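For context, the mongo image's docker-entrypoint-initdb.d hook only runs .sh and .js files, and the image ships without a Python interpreter, which is why the python command is not found there. One alternative to rewriting everything in init-db.js (a sketch, with illustrative names and paths) is to run the populate script as a one-shot Compose service on the same network:

# Hypothetical seeding service; adjust the mounted paths and the script's
# basepath so the JSON files are visible inside the container.
seed:
  image: python:3.10-slim
  depends_on:
    - mongodb
  volumes:
    - ./init-db.py:/seed/init-db.py:ro
    - ./DataHub:/seed/DataHub:ro
  command: sh -c "pip install pymongo && python /seed/init-db.py"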
I have the following setup:
A simple Flask web app with a Mongo DB, each in a Docker container, and the containers running inside a free EC2 instance.
My question is the following:
I bought a domain from Gandi (domain.com, for example).
How can I "map" my domain to the IP address of the EC2 instance that the app is running inside of?
If I go to domain.com I want to be "redirected" to my Flask app, and the URL should still be domain.com, not the IP of the instance.
Can this be achieved without Route53? Maybe with nginx?
This is the docker-compose.yml file:
version: '3.5'

services:
  web-app-flask:
    image: web-app
    ports:
      - 5000:5000

  mongodb_test:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password

  mongo-express_test:
    image: mongo-express
    restart: always
    ports:
      - 8088:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongodb
This is the main.py of the app:
from website import create_app

app = create_app()

if __name__ == '__main__':
    from waitress import serve
    serve(app)
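This can be done without Route53: point an A record for domain.com at the instance's public (ideally Elastic) IP in Gandi's DNS, and let nginx on the instance proxy incoming traffic to the Flask container. A minimal sketch of such a server block (the domain and the upstream port are the ones from the question):

server {
    listen 80;
    server_name domain.com;

    location / {
        # Forward to the Flask container's published port on the host
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}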
I'm using Docker with InfluxDB and Python in a framework. I want to write to InfluxDB from inside the framework, but I always get the error "Name or service not known" and have no idea what the problem is.
I link the InfluxDB container to the framework container in the docker compose file like so:
version: '3'

services:
  influxdb:
    image: influxdb
    container_name: influxdb
    restart: always
    ports:
      - 8086:8086
    volumes:
      - influxdb_data:/var/lib/influxdb

  framework:
    image: framework
    build: framework
    volumes:
      - framework:/tmp/framework_data
    links:
      - influxdb
    depends_on:
      - influxdb

volumes:
  framework:
    driver: local
  influxdb_data:
Inside the framework I have a script that only focuses on writing to the database. Because I don't want to access the database with the URL localhost:8086, I am using links to make it easier and connect to the database with the URL influxdb:8086. This is my code in that script:
import datetime

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS, WritePrecision

bucket = "bucket"
org = "org"
token = "token"

def insert_data(message):
    client = InfluxDBClient(url="http://influxdb:8086", token=token, org=org)
    write_api = client.write_api(write_options=SYNCHRONOUS)
    point = Point("mem") \
        .tag("sensor", message["sensor"]) \
        .tag("metric", message["type"]) \
        .field("true_value", float(message["true_value"])) \
        .field("value", float(message["value"])) \
        .field("failure", message["failure"]) \
        .field("failure_type", message["failure_type"]) \
        .time(datetime.datetime.now(), WritePrecision.NS)
    write_api.write(bucket, org, point)  # the error seems to happen here
Every time I use the function insert_data I get the error urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fac547d9d00>: Failed to establish a new connection: [Errno -2] Name or service not known.
Why can't I write into the database?
I think the problem resides in your docker-compose file. First of all, links is a legacy feature, so I'd recommend using user-defined networks instead. More on that here: https://docs.docker.com/compose/compose-file/compose-file-v3/#links
I've created a minimalistic example to demonstrate the approach:
version: '3'

services:
  influxdb:
    image: influxdb
    container_name: influxdb
    restart: always
    environment: # manage the secrets the best way you can!!! the below are only for demonstration purposes...
      - DOCKER_INFLUXDB_INIT_MODE=setup # required so the image runs its automated setup with the values below
      - DOCKER_INFLUXDB_INIT_USERNAME=admin
      - DOCKER_INFLUXDB_INIT_PASSWORD=secret
      - DOCKER_INFLUXDB_INIT_ORG=my-org
      - DOCKER_INFLUXDB_INIT_BUCKET=my-bucket
      - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=secret-token
    networks:
      - local

  framework:
    image: python:3.10.2
    depends_on:
      - influxdb
    networks:
      - local

networks:
  local:
Notice the additional top-level networks definition with the local network, which is then referenced from both services.
Also make sure to initialize your influxdb with the right environment variables according to the docker image's documentation: https://hub.docker.com/_/influxdb
Then to test it just run a shell in your framework container via docker-compose:
docker-compose run --entrypoint sh framework
and then in the container install the client:
pip install 'influxdb-client[ciso]'
Then in a python shell - still inside the container - you can verify the connection:
from influxdb_client import InfluxDBClient
client = InfluxDBClient(url="http://influxdb:8086", token="secret-token", org="my-org") # the token and the org values are coming from the container's docker-compose environment definitions
client.health()
# {'checks': [],
# 'commit': '657e1839de',
# 'message': 'ready for queries and writes',
# 'name': 'influxdb',
# 'status': 'pass',
# 'version': '2.1.1'}
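Once the health check passes, a minimal write along the lines of the question's insert_data should work over the same network too (a sketch reusing the demo bucket/org/token from the Compose file above):

import datetime

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Credentials and names come from the demo Compose environment above.
client = InfluxDBClient(url="http://influxdb:8086", token="secret-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

point = Point("mem").tag("sensor", "s1").field("value", 1.0).time(datetime.datetime.utcnow())
write_api.write(bucket="my-bucket", org="my-org", record=point)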
Last but not least to clean up the test resources do:
docker-compose down
I have the following docker-compose file:
version: '2.3'

networks:
  default: { external: true, name: $NETWORK_NAME } # NETWORK_NAME in .env file is `uv_atp_network`.

services:
  car_parts_segmentor:
    # container_name: uv-car-parts-segmentation
    image: "uv-car-parts-segmentation:latest"
    ports:
      - "8080:8080"
    volumes:
      - ../../../../uv-car-parts-segmentation/configs:/uveye/configs
      - /isilon/:/isilon/
      # - local_data_folder:local_data_folder
    command: "--run_service rabbit"
    runtime: nvidia
    depends_on:
      rabbitmq_local:
        condition: service_started
    links:
      - rabbitmq_local
    restart: always

  rabbitmq_local:
    image: 'rabbitmq:3.6-management-alpine'
    container_name: "rabbitmq"
    ports:
      - ${RABBIT_PORT:?unspecified_rabbit_port}:5672
      - ${RABBIT_MANAGEMENT_PORT:?unspecified_rabbit_management_port}:15672
When this runs, docker ps shows
21400efd6493 uv-car-parts-segmentation:latest "python /uveye/app/m…" 5 seconds ago Up 1 second 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp joint_car_parts_segmentor_1
bf4ab8581f1f rabbitmq:3.6-management-alpine "docker-entrypoint.s…" 5 seconds ago Up 4 seconds 4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, :::5672->5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp, :::15672->15672/tcp rabbitmq
I want to create a connection to that rabbitmq. The user:pass is guest:guest.
I was unable to do it, with the very uninformative AMQPConnectionError in all cases:
The code below runs in another, unrelated container.
connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@rabbitmq/"))
connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@localhost/"))
Also tried with
$ docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' rabbitmq
172.27.0.2
and
connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@172.27.0.2/"))
Also tried with
credentials = pika.credentials.PlainCredentials(
    username="guest",
    password="guest"
)
parameters = pika.ConnectionParameters(
    host=ip_address,  # tried all above options
    port=5672,
    credentials=credentials,
    heartbeat=10,
)
Note that the container car_parts_segmentor is able to see the container rabbitmq. Both are started by docker-compose.
My assumption is that this has to do with the uv_atp_network both containers live in, and that I am trying to access a container inside that network from outside the network.
Is this really the problem?
If so, how can this be achieved?
For the future: how can I get more informative errors out of pika?
As I suspected, the problem was that the name rabbitmq existed only in the network uv_atp_network.
The code attempting to connect to that name runs inside a container of its own, which was not present in the network.
Solution: connect the current container to the network:
import socket

import docker

client = docker.from_env()
network_name = "uv_atp_network"
atp_container = client.containers.get(socket.gethostname())
client.networks.get(network_name).connect(container=atp_container.id)
After this, the above code in the question does work, because rabbitmq can be resolved.
connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@rabbitmq/"))
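A declarative alternative (a sketch, not from the original answer) is to attach the client's own Compose service to the existing external network instead of connecting it at runtime via the Docker SDK:

# Hypothetical compose file for the container running the pika code;
# the service name and image are placeholders.
services:
  amqp-client:
    image: amqp-client:latest
    networks:
      - uv_atp_network

networks:
  uv_atp_network:
    external: true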
I created a simple library project with microservices to study and implement FastAPI.
Docker starts 5 main services:
books
db-book
author
db-author
nginx
Everything works as expected; making requests with Postman I have no problems.
Structure
Problem description
I added a test directory where I test endpoints.
Example of (incomplete) author test
from starlette.testclient import TestClient
from app.main import app
from app.api.author import authors
import logging
log = logging.getLogger('__name__')
import requests

client = TestClient(app)

def test_get_authors():
    response = client.get("/")
    assert response.status_code == 200

def test_get_author():
    response = client.get("/1")
    assert response.status_code == 200
$> docker-compose exec author_service pytest .
returns this
============================================================================================================= test session starts =============================================================================================================
platform linux -- Python 3.8.3, pytest-5.3.2, py-1.9.0, pluggy-0.13.1
rootdir: /app
collected 2 items
tests/test_author.py FF [100%]
================================================================================================================== FAILURES ===================================================================================================================
______________________________________________________________________________________________________________ test_get_authors _______________________________________________________________________________________________________________
def test_get_authors():
response = client.get("/")
> assert response.status_code == 200
E assert 404 == 200
E + where 404 = <Response [404]>.status_code
tests/test_author.py:12: AssertionError
_______________________________________________________________________________________________________________ test_get_author _______________________________________________________________________________________________________________
def test_get_author():
response = client.get("/1")
> assert response.status_code == 200
E assert 404 == 200
E + where 404 = <Response [404]>.status_code
tests/test_author.py:16: AssertionError
============================================================================================================== 2 failed in 0.35s ==============================================================================================================
I tried running the tests directly from the container shell, but the result was the same.
This problem occurs only with tests written following the documentation (using starlette/fastapi's TestClient) and with requests.
You can find the complete project here:
Library Microservices example
Environment
OS: Linux Fedora 32
FastAPI version: 0.55.1
Python: 3.8.3
docker-compose file
version: '3.7'

services:
  book_service:
    build: ./book-service
    command: uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
    volumes:
      - ./book-service/:/app/
    ports:
      - 8001:8000
    environment:
      - DATABASE_URI=postgresql://book_db_username:book_db_password@book_db/book_db_dev
      - AUTHOR_SERVICE_HOST_URL=http://author_service:8000/api/v1/authors/
    depends_on:
      - book_db

  book_db:
    image: postgres:12.1-alpine
    volumes:
      - postgres_data_book:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=book_db_username
      - POSTGRES_PASSWORD=book_db_password
      - POSTGRES_DB=book_db_dev

  author_service:
    build: ./author-service
    command: uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
    volumes:
      - ./author-service/:/app/
    ports:
      - 8002:8000
    environment:
      - DATABASE_URI=postgresql://author_db_username:author_db_password@author_db/author_db_dev
    depends_on:
      - author_db

  author_db:
    image: postgres:12.1-alpine
    volumes:
      - postgres_data_author:/var/lib/postgres/data
    environment:
      - POSTGRES_USER=author_db_username
      - POSTGRES_PASSWORD=author_db_password
      - POSTGRES_DB=author_db_dev

  nginx:
    image: nginx:latest
    ports:
      - "8080:8080"
    volumes:
      - ./nginx_config.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - author_service
      - book_service

volumes:
  postgres_data_book:
  postgres_data_author:
Fixed using the docker0 network's IP address and requests. The tests can now be run against 172.13.0.1 on port 8080.
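A sketch of what such a test could look like with plain requests (the IP and port are the ones reported above; the path is taken from AUTHOR_SERVICE_HOST_URL in the Compose file):

import requests

BASE_URL = "http://172.13.0.1:8080"  # docker0 gateway and nginx port reported above

def test_get_authors_via_nginx():
    # Goes through nginx and the running service instead of TestClient.
    response = requests.get(f"{BASE_URL}/api/v1/authors/")
    assert response.status_code == 200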
The main problem here is the endpoints in your test file.
Fixed test example:
from starlette.testclient import TestClient
from app.main import app
from app.api.author import authors
import logging
log = logging.getLogger('__name__')
import requests

client = TestClient(app)

def test_get_authors():
    response = client.get("/authors")  # this must be your API endpoint to test
    assert response.status_code == 200

def test_get_author():
    response = client.get("/authors/1")  # this must be your API endpoint to test
    assert response.status_code == 200