Flask App inside EC2, map Domain Name from Gandi to EC2 IP - python

I have the following setup:
A simple Flask web app with a MongoDB database, each in its own Docker container, with both containers running inside a free-tier EC2 instance.
My question is the following:
I bought a domain from Gandi (domain.com, for example).
How can I "map" my domain to the IP address of the EC2 instance that the app is running inside of?
If I go to domain.com I want to be "redirected" to my Flask app, and the URL should still be domain.com, not the IP of the instance.
Can this be achieved without Route 53? Maybe with nginx?
This is the docker-compose.yml file:
version: '3.5'
services:
  web-app-flask:
    image: web-app
    ports:
      - 5000:5000
  mongodb_test:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
  mongo-express_test:
    image: mongo-express
    restart: always
    ports:
      - 8088:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongodb
This is the main.py of the app:
from website import create_app

app = create_app()

if __name__ == '__main__':
    from waitress import serve
    serve(app)
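Note: this does not require Route 53. An A record at Gandi pointing domain.com at the instance's public IP (ideally an Elastic IP, so it survives instance restarts) handles the DNS side; an nginx reverse proxy on the instance then keeps domain.com in the address bar while forwarding traffic to the Flask container. A minimal sketch, where the proxied port 5000 comes from the compose file above and everything else is illustrative:

server {
    listen 80;
    server_name domain.com;

    location / {
        # proxy (not redirect) to the Flask container published on port 5000,
        # so the browser keeps showing domain.com
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}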

Related

Redirect localhost:5000/some_path to localhost:5000

I have a dockerized Flask API app that runs on localhost:5000. The API runs with no problem. But another app that uses it, which I cannot change, calls localhost:5000/some_path.
I'd like to redirect from localhost:5000/some_path to localhost:5000.
I have read that I can use a prefix in my Flask API app, but I'd prefer another approach; I don't want to mess with the code.
Is there a redirect/middleware or another way to redirect this traffic?
docker-compose.yml:
# Use root/example as user/password credentials
version: "3.1"
services:
  my-db:
    image: mariadb
    restart: always
    environment:
      MARIADB_ROOT_PASSWORD: example
    ports:
      - 3306:3306
    volumes:
      - ./0_schema.sql:/docker-entrypoint-initdb.d/0_schema.sql
      - ./1_data.sql:/docker-entrypoint-initdb.d/1_data.sql
  adminer:
    image: adminer
    restart: always
    environment:
      ADMINER_DEFAULT_SERVER: my-db
    ports:
      - 8080:8080
  my-api:
    build: ../my-awesome-api/
    ports:
      - 5000:5000
If you use a web server in front of your application, you can handle the redirect there. For example, with nginx you could do:
location = /some_path {
    return 301 /;
}
Or you can use a middleware:
class PrefixMiddleware(object):
    def __init__(self, app, prefix=""):
        self.app = app
        self.prefix = prefix

    def __call__(self, environ, start_response):
        if environ["PATH_INFO"].startswith(self.prefix):
            environ["PATH_INFO"] = environ["PATH_INFO"][len(self.prefix):]
            environ["SCRIPT_NAME"] = self.prefix
            return self.app(environ, start_response)
        else:
            # handle not found (left unimplemented in the original); one option:
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"Not Found"]
Then register the middleware on your app, passing the prefix to strip:
from flask import Flask

app = Flask(__name__)
app.wsgi_app = PrefixMiddleware(app.wsgi_app, prefix="/some_path")
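As a quick sanity check (a minimal sketch reusing the app from above, with an illustrative index route), Flask's test client shows the prefix being stripped before routing:

@app.route("/")
def index():
    return "root"

with app.test_client() as client:
    resp = client.get("/some_path/")    # middleware rewrites this to "/"
    print(resp.status_code, resp.data)  # 200 b'root'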

Service not known when writing to InfluxDB in Python

I'm using Docker with InfluxDB and Python in a framework. I want to write to InfluxDB from inside the framework, but I always get the error "Name or service not known" and have no idea what the problem is.
I link the InfluxDB container to the framework container in the docker-compose file like so:
version: '3'
services:
  influxdb:
    image: influxdb
    container_name: influxdb
    restart: always
    ports:
      - 8086:8086
    volumes:
      - influxdb_data:/var/lib/influxdb
  framework:
    image: framework
    build: framework
    volumes:
      - framework:/tmp/framework_data
    links:
      - influxdb
    depends_on:
      - influxdb
volumes:
  framework:
    driver: local
  influxdb_data:
Inside the framework I have a script that focuses solely on writing to the database. Because I don't want to access the database with the URL "localhost:8086", I am using links to make it easier and connect to the database with the URL "influxdb:8086". This is my code in that script:
import datetime

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS, WritePrecision

bucket = "bucket"
org = "org"  # org was not defined in the original snippet
token = "token"

def insert_data(message):
    client = InfluxDBClient(url="http://influxdb:8086", token=token, org=org)
    write_api = client.write_api(write_options=SYNCHRONOUS)
    point = Point("mem") \
        .tag("sensor", message["sensor"]) \
        .tag("metric", message["type"]) \
        .field("true_value", float(message["true_value"])) \
        .field("value", float(message["value"])) \
        .field("failure", message["failure"]) \
        .field("failure_type", message["failure_type"]) \
        .time(datetime.datetime.now(), WritePrecision.NS)
    write_api.write(bucket, org, point)  # the error seems to happen here
Every time I use the function insert_data I get the error urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fac547d9d00>: Failed to establish a new connection: [Errno -2] Name or service not known.
Why can't I write into the database?
I think the problem resides in your docker-compose file. First of all, links is a legacy feature, so I'd recommend using user-defined networks instead. More on that here: https://docs.docker.com/compose/compose-file/compose-file-v3/#links
I've created a minimalistic example to demonstrate the approach:
version: '3'
services:
  influxdb:
    image: influxdb
    container_name: influxdb
    restart: always
    environment: # manage the secrets the best way you can!!! the below are only for demonstration purposes...
      - DOCKER_INFLUXDB_INIT_USERNAME=admin
      - DOCKER_INFLUXDB_INIT_PASSWORD=secret
      - DOCKER_INFLUXDB_INIT_ORG=my-org
      - DOCKER_INFLUXDB_INIT_BUCKET=my-bucket
      - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=secret-token
    networks:
      - local
  framework:
    image: python:3.10.2
    depends_on:
      - influxdb
    networks:
      - local
networks:
  local:
Notice the additional networks definition and the local network, which is referenced from both containers.
Also make sure to initialize your InfluxDB with the right environment variables according to the Docker image's documentation: https://hub.docker.com/_/influxdb
Then to test it just run a shell in your framework container via docker-compose:
docker-compose run --entrypoint sh framework
and then in the container install the client:
pip install 'influxdb-client[ciso]'
Then in a python shell - still inside the container - you can verify the connection:
from influxdb_client import InfluxDBClient
client = InfluxDBClient(url="http://influxdb:8086", token="secret-token", org="my-org") # the token and the org values are coming from the container's docker-compose environment definitions
client.health()
# {'checks': [],
# 'commit': '657e1839de',
# 'message': 'ready for queries and writes',
# 'name': 'influxdb',
# 'status': 'pass',
# 'version': '2.1.1'}
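With the connection verified, a write through the same hostname should succeed as well; a minimal sketch using the demo credentials from the compose file above:

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://influxdb:8086", token="secret-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)
# write a single point into the bucket created by the init variables
write_api.write("my-bucket", "my-org", Point("mem").field("value", 1.0))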
Last but not least, to clean up the test resources do:
docker-compose down

Grpc requests failing when python client used inside flask

I have a flask app that needs to make a request to a grpc server when a request is made to the flask endpoint.
@main.route("/someroute", methods=["POST"])
def some_function():
    # Do something here
    make_grpc_request(somedata)
    return create_response(data=None, message="Something happened")

def make_grpc_request(somedata):
    channel = grpc.insecure_channel('localhost:30001')
    stub = some_proto_pb2_grpc.SomeServiceStub(channel)
    request = some_proto_pb2.SomeRequest(id=1)
    response = stub.SomeFunction(request)
    logger.info(response)
But I keep getting the error: InactiveRpcError of RPC that terminated with: StatusCode.UNAVAILABLE failed to connect to all addresses.
Just putting the client code inside a normal .py file works fine, and making the request in BloomRPC works fine too, so it couldn't be a server issue.
Is this something to do with how Flask works, and am I just missing something?
I have also tried using https://github.com/public/sonora without any success like this:
with sonora.client.insecure_web_channel("localhost:30001") as channel:
    stub = some_proto_pb2_grpc.SomeServiceStub(channel)
    request = some_proto_pb2.SomeRequest(id=1)
    response = stub.SomeFunction(request)
docker-compose.yml
version: "3.7"
services:
core-profile: #This is where the grpc requests are sent to
container_name: core-profile
build:
context: ./app/profile/
target: local
volumes:
- ./app/profile/:/usr/src/app/
env_file:
- ./app/profile/database.env
- ./app/profile/jwt.env
- ./app/profile/oauth2-dev.env
environment:
- APP_PORT=50051
- PYTHONUNBUFFERED=1
- POSTGRES_HOST=core-profile-db
ports:
- 30001:50051
expose:
- 50051
depends_on:
- core-profile-db
core-profile-db:
image: postgres:10-alpine
expose:
- 5432
ports:
- 54321:5432
env_file:
- ./app/profile/database.env
app-flask-server-db:
image: postgres:10-alpine
expose:
- 5433
ports:
- 54333:5433
env_file:
- ./app/flask-server/.env
flask-server:
build:
context: ./app/flask-server/
dockerfile: Dockerfile-dev
volumes:
- ./app/flask-server:/usr/src/app/
env_file:
- ./app/flask-server/.env
environment:
- FLASK_ENV=docker
ports:
- 5000:5000
depends_on:
- app-flask-server-db
volumes:
app-flask-server-db:
name: app-flask-server-db
Your Python app (service) should reference the gRPC service as core-profile:50051.
The hostname is the Compose service name core-profile and, because the Python service is also inside the Compose network, it must use the container port 50051.
localhost:30001 is how you'd access it from the Compose host.
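Applied to the client code above, that means changing only the channel target; a minimal sketch (the generated modules are the ones from the question):

import grpc

def make_grpc_request(somedata):
    # Service name + container port: reachable from other containers
    # on the same Compose network, unlike localhost:30001.
    channel = grpc.insecure_channel('core-profile:50051')
    stub = some_proto_pb2_grpc.SomeServiceStub(channel)
    return stub.SomeFunction(some_proto_pb2.SomeRequest(id=1))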

How to make Minio-client (from host) talk with Minio-server (docker container)?

I am running a minio-server in a docker-compose container. I am trying to upload a file to the minio-server in the container from the host machine (Ubuntu), instead of from a container, using the minio-client (Python SDK).
I could not make it work as expected.
I am not clear whether it is because of my endpoint (URL) or due to a connection issue between the container and the host.
The endpoints I tried:
url_1 = 'http://minio:9000' # from my default setup for minio link;
url_2 = 'http://localhost:9000/minio/test' # from Minio browser.
For url_1, what I got is: "botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: http://minio:9000/test".
The line raising the error: s3.create_bucket(Bucket='test')
For url_2, what I got is: "All access to this bucket has been disabled.".
The line raising the error: s3.create_bucket(Bucket='test')
I tried a similar thing: running my minio-server and minio-client both on my host machine, then uploading files from the minio-client to the minio-server. I can see those uploaded files in the Minio browser on localhost.
######### python script uploading files
import boto3
from botocore.client import Config
import os
import getpass

my_url1 = 'http://minio:9000'  # this is from os.environ['S3_URL']
my_url2 = 'http://localhost:9000/minio/test'  # this is from browser

s3 = boto3.resource('s3',
                    endpoint_url=my_url2,
                    aws_access_key_id=os.environ['USER'],
                    aws_secret_access_key=getpass.getpass('Password:'),
                    config=Config(signature_version='s3v4'),
                    region_name='us-east-1')
print('********', s3)

s3.create_bucket(Bucket='test')
uploadfile = os.getcwd() + '/' + 'test.txt'
s3.Bucket('testBucket').upload_file(uploadfile, 'txt')
######### docker-compose yml file for Minio
minio:
  image: minio/minio
  entrypoint:
    - minio
    - server
    - /data
  ports:
    - "9000:9000"
  environment:
    MINIO_ACCESS_KEY: username
    MINIO_SECRET_KEY: password
mc:
  image: minio/mc
  environment:
    MINIO_ACCESS_KEY: username
    MINIO_SECRET_KEY: password
  entrypoint: /bin/sh -c
  depends_on:
    - minio
I expected to see the uploaded files in the Minio browser (http://localhost:9000/minio/test), just like when I ran the minio-server and minio-client both on the host.
With default Docker networking, you would access minio at http://localhost:9000 on your host, so you can just use this URL in your Python script. The http://minio:9000 URL will work from containers on the same Docker network as your minio server.
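Staying with boto3, only the endpoint needs to change; a sketch reusing the credential scheme from the question:

import os
import getpass

import boto3
from botocore.client import Config

s3 = boto3.resource('s3',
                    endpoint_url='http://localhost:9000',  # host side of the 9000:9000 mapping
                    aws_access_key_id=os.environ['USER'],
                    aws_secret_access_key=getpass.getpass('Password:'),
                    config=Config(signature_version='s3v4'),
                    region_name='us-east-1')
s3.create_bucket(Bucket='test')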
Try the Pyminio client instead of boto3.
import os
import getpass

from pyminio import Pyminio

pyminio_client = Pyminio.from_credentials(
    endpoint='http://localhost:9000/',
    access_key=os.environ['USER'],
    secret_key=getpass.getpass('Password:')
)

pyminio_client.mkdirs('/test/')
pyminio_client.put_file(
    to_path='/test/',
    file_path=os.path.join(os.getcwd(), 'test.txt')
)
Use this configuration in your compose.yml file:
version: "3"
services:
minio:
image: "minio/minio"
container_name: mi
ports:
- "9000:9000"
environment:
- "MINIO_ACCESS_KEY=ACCRESS"
- "MINIO_SECRET_KEY=SECRET"
restart: always
command: server /data
mc:
image: minio/mc
container_name: mc
network_mode: host
entrypoint: >
/bin/sh -c "
/usr/bin/mc config host add minio http://127.0.0.1:9000 ACCESS SECRET;
/usr/bin/mc rm -r --force minio/psb-new;
/usr/bin/mc mb minio/psb-new;
/usr/bin/mc policy set public minio/psb-new;
exit 0;
"
networks:
elastic:
driver: bridge

Can't connect to mongo from flask in docker containers

I have a Python script that runs the following:
import mongoengine

client = mongoengine.connect('ppo-image-server-db', host="db", port=27017)
db = client.test_db

test_data = {
    'name': 'test'
}
db.test_data.insert_one(test_data)
print("DONE")
And I have a docker-compose.yml that looks like the following
version: '2'
networks:
  micronet:
services:
  user-info-service:
    restart: always
    build: .
    container_name: test-user-info-service
    working_dir: /usr/local/app/test
    entrypoint: ""
    command: ["manage", "run-user-info-service", "--host=0.0.0.0", "--port=5000"]
    volumes:
      - ./:/usr/local/app/test/
    ports:
      - "5000:5000"
    networks:
      - micronet
    links:
      - db
  db:
    image: mongo:3.0.2
    container_name: test-mongodb
    volumes:
      - ./data/db:/data/db
    ports:
      - "27017:27017"
However, every time I run docker-compose build and docker-compose up, the Python script is not able to find the host (in this case 'db'). Do I need any special linking or any environment variables to pass in the Mongo server's IP address?
I can still access the dockerized mongo-db using Robomongo.
Please note that I'm not creating any docker-machine for this test case yet.
Could you help me point out what's missing in my configuration?
Yes. What you need is to tell Docker that one application depends on the other. Here is how I built my docker-compose:
version: '2'
services:
  mongo-server:
    image: mongo
    volumes:
      - .data/mdata:/data/db # mongodb persistence
  myStuff:
    build: ./myStuff
    depends_on:
      - mongo-server
Also, in the connection URL, you need to use the hostname "mongo-server". Docker will take care of connecting your code to the mongo container.
Example:
private val mongoClient: MongoClient = MongoClient("mongodb://mongo-server:27017")
That should solve your problem.
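The same idea in Python, mapped back to the question: with the compose file there, the mongo service is named db, so (a sketch, assuming the script runs in a container on the same micronet network):

import mongoengine

# "db" is the Compose service name of the mongo container in the question
client = mongoengine.connect('ppo-image-server-db', host="db", port=27017)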
