How to connect Flask app to SQLite DB running in Docker? - python

docker-compose.yml
version: "3"
services:
sqlite3:
image: nouchka/sqlite3:latest
stdin_open: true
tty: true
volumes:
- ./db/:/root/db/
app:
build:
context: ../
dockerfile: build/Dockerfile
ports:
- "5000:5000"
volumes:
- ../:/app
command: pipenv run gunicorn --bind=0.0.0.0:5000 --reload app:app
Now how do I get my Flask app to connect to my dockerized db?
SQLALCHEMY_DATABASE_URI = 'sqlite:///db.sqlite' (I'm not sure what to put here to connect to the Docker image db)

You can share the volumes between containers.
version: "3"
services:
sqlite3:
image: nouchka/sqlite3:latest
stdin_open: true
tty: true
volumes:
- ./db/:/root/db/
app:
build:
context: ../
dockerfile: build/Dockerfile
ports:
- "5000:5000"
volumes:
- ../:/app
- ./db/:/my/sqlite/path/ # Here is the change
command: pipenv run gunicorn --bind=0.0.0.0:5000 --reload app:app
Now the files inside the ./db/ directory are accessible from your Python container too, so you can point the URI at the database file inside that mount (note the four slashes for an absolute path):
SQLALCHEMY_DATABASE_URI = 'sqlite:////my/sqlite/path/db.sqlite'
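For context, a minimal Flask configuration sketch, assuming Flask-SQLAlchemy is in use and the database file is named db.sqlite inside the mounted directory (the file name is an example, not from the original question):
# Minimal sketch: Flask + Flask-SQLAlchemy pointing at the SQLite file on the
# shared volume. Assumes the bind mount ./db/:/my/sqlite/path/ from the
# compose file above; the file name db.sqlite is only an example.
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Four slashes after "sqlite:" mean an absolute path inside the container.
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:////my/sqlite/path/db.sqlite"
db = SQLAlchemy(app)
Since SQLite is just a file on the shared volume, the app container is the only one that strictly needs the mount; the sqlite3 service is mainly useful as an interactive shell for inspecting the database.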

Related

Unknown mysql server host on docker and python

I'm building an API that fetches data from a MySQL database using Docker. I've tried everything and I always get this error: 2005 (HY000): Unknown MySQL server host 'db' (-3). Here is my docker compose file:
version: '3'
services:
  web:
    container_name: nginx
    image: nginx
    volumes:
      - ./nginx/nginx.conf:/tmp/nginx.conf
    environment:
      - FLASK_SERVER_ADDR=backend:9091
      - DB_PASSWORD=password
      - DB_USER=user
      - DB_HOST=db
    command: /bin/bash -c "envsubst < /tmp/nginx.conf > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
    ports:
      - 80:80
    networks:
      - local
    depends_on:
      - backend
  backend:
    container_name: app
    build: flask
    environment:
      - FLASK_SERVER_PORT=9091
      - DB_PASSWORD=password
    volumes:
      - flask:/tmp/app_data
    restart: unless-stopped
    networks:
      - local
    depends_on:
      - db
    links:
      - db
  db:
    container_name: db
    image: mysql
    restart: unless-stopped
    volumes:
      - ./mysql:/docker-entrypoint-initdb.d
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=database
      - MYSQL_USER=user
      - MYSQL_PASSWORD=password
    ports:
      - 3306:3306
networks:
  local:
volumes:
  flask:
    driver: local
  db:
    driver: local
Inside the flask directory I have my Dockerfile like so:
FROM ubuntu:latest
WORKDIR /src
RUN apt -y update
RUN apt -y upgrade
RUN apt install -y python3
RUN apt install -y python3-pip
COPY . .
RUN chmod +x -R .
RUN pip install -r requirements.txt --no-cache-dir
CMD ["python3","app.py"]
Finally, in my app.py file I try to connect to the database using the name of the Docker container. I have tried using localhost and it still gives me the same error. This is the part of the code I use to access it:
conn = mysql.connector.connect(
    host="db",
    port=3306,
    user="user",
    password="password",
    database="database")
What is it that I'm doing wrong?
The containers aren't on the same networks:, which could be why you're having trouble.
I'd recommend deleting all of the networks: blocks in the file, both the blocks at the top level and the blocks in the web and backend containers. Compose will create a network named default for you and attach all of the containers to that network. Networking in Compose in the Docker documentation has more details on this setup.
The links: block is related to an obsolete Docker networking mode, and I've seen it implicated in problems in other SO questions. You should remove it as well.
You also do not need to manually specify container_name: in most cases. For the Nginx container, the Docker Hub nginx image already knows how to do the envsubst processing so you do not need to override its command:.
This should leave you with:
version: '3.8'
services:
  web:
    image: nginx
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/templates/default.conf.template
    environment: { ... }
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    build: flask
    environment: { ... }
    volumes:
      - flask:/tmp/app_data
    restart: unless-stopped
    depends_on:
      - db
  db:
    image: mysql
    restart: unless-stopped
    volumes:
      - ./mysql:/docker-entrypoint-initdb.d
      - db:/var/lib/mysql
    environment: { ... }
    ports:
      - 3306:3306
volumes:
  flask:
  db:
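On the application side, the backend then reaches MySQL through the Compose service name db. A minimal connection sketch, assuming mysql-connector-python is installed and the credentials match the original compose file; the retry loop is not from the answer above, but helps because depends_on only waits for the container to start, not for MySQL to finish initializing:
# Sketch only: connect to the "db" service by name, with a simple retry.
import os
import time

import mysql.connector


def connect_with_retry(attempts=10, delay=3):
    # depends_on does not wait for MySQL to be ready, so retry a few times.
    for _ in range(attempts):
        try:
            return mysql.connector.connect(
                host=os.environ.get("DB_HOST", "db"),  # Compose service name
                port=3306,
                user="user",
                password=os.environ.get("DB_PASSWORD", "password"),
                database="database",
            )
        except mysql.connector.Error:
            time.sleep(delay)
    raise RuntimeError("could not connect to MySQL")


conn = connect_with_retry()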

How can I check the health status of my dockerized celery / django app?

I am running a dockerized django app using the following docker-compose file:
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    command: gunicorn PriceOptimization.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
    networks:
      - dbnet
    ports:
      - "8000:8000"
    environment:
      aws_access_key_id: ${aws_access_key_id}
  redis:
    restart: always
    image: redis:latest
    networks:
      - dbnet
    ports:
      - "6379:6379"
  celery:
    restart: always
    build:
      context: .
    command: celery -A PriceOptimization worker -l info
    volumes:
      - ./PriceOptimization:/PriceOptimization
    depends_on:
      - web
      - redis
    networks:
      - dbnet
    environment:
      access_key_id: ${access_key_id}
  nginx:
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - static_volume:/home/app/web/staticfiles
    depends_on:
      - web
    networks:
      - dbnet
  database:
    image: "postgres" # use latest official postgres version
    restart: unless-stopped
    env_file:
      - ./database.env # configure postgres
    networks:
      - dbnet
    ports:
      - "5432:5432"
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
volumes:
  database-data:
  static_volume:
  media_volume:
I have added celery.py to my app, and I am building / running the docker container as follows:
docker-compose -f $HOME/PriceOpt/PriceOptimization/docker-compose.prod.yml up -d --build
Running the application in my development environment lets me check at the command line that the celery app is correctly connected, etc. Is there a way that I can test to see if my celery app is initiated properly at the end of the build process?
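One common check (a sketch, not from the original thread): ask the running workers to respond to a ping from inside the celery container, e.g. via docker-compose exec. In Python this can use Celery's control API; the import path assumes the conventional PriceOptimization/celery.py module exposing the Celery instance as app:
# check_celery.py -- minimal worker health-check sketch. Run it inside the
# celery container, e.g.: docker-compose exec celery python check_celery.py
# Assumption: PriceOptimization/celery.py defines the Celery instance "app".
import sys

from PriceOptimization.celery import app

# Ask all running workers to reply to a ping within 2 seconds.
replies = app.control.ping(timeout=2.0)
if replies:
    print(f"celery OK, {len(replies)} worker(s) responded: {replies}")
    sys.exit(0)
print("no celery workers responded")
sys.exit(1)
The same idea can back a Docker healthcheck for the celery service if you want Compose to report the status.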

Celery Flower with Multiple Workers in Different Docker Containers

I've been up and down StackOverflow and Google, but I can't seem to come close to an answer.
tl;dr How do I register a dockerized Celery worker in a dockerized Flower dashboard? How do I point the worker to the Flower dashboard so that the dashboard "knows" about it?
I have 2 FastAPI apps, both deployed with docker-compose.yml files. The first app's compose file looks like this:
version: '3.8'
services:
  web:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: app_web
    # '/start' is the shell script used to run the service
    command: /start
    volumes:
      - .:/app
    ports:
      - 8010:8000
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
  redis:
    image: redis:6-alpine
  celery_worker:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: app_celery_worker
    command: /start-celeryworker
    volumes:
      - .:/app
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
  flower:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: app_celery_flower
    command: /start-flower
    volumes:
      - .:/app
    env_file:
      - .env/.dev-sample
    ports:
      - 5557:5555
    depends_on:
      - redis
So this app is responsible for creating the Celery Flower dashboard.
The second app's compose file looks like:
version: '3.8'
services:
  web:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: app_two_web
    # '/start' is the shell script used to run the service
    command: /start
    volumes:
      - .:/app
    ports:
      - 8011:8000
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
  redis:
    image: redis:6-alpine
  celery_worker:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: app_two_celery_worker
    command: /start-celeryworker
    volumes:
      - .:/app
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
I can't get this second app's worker to register in the Celery Flower dashboard running on port 5557. Everything works fine, and I can even launch a second Flower dashboard with the second app if on a different port, but I can't seem to connect the second worker to the first app's Flower dashboard.
This is what main.py looks like, for both apps.
from project import create_app

app = create_app()
celery = app.celery_app


def celery_worker():
    from watchgod import run_process
    import subprocess

    def run_worker():
        subprocess.call(
            ["celery", "-A", "main.celery", "worker", "-l", "info"]
        )

    run_process("./project", run_worker)


if __name__ == "__main__":
    celery_worker()
Thanks for any ideas that I can throw at this.
First, enable event monitoring by adding "-E" to the worker container's command:.
Second, set the C_FORCE_ROOT environment variable for every worker service in your docker-compose configuration.
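Applied to the run_worker helper from main.py above, the first suggestion would look roughly like this (a sketch, not the answerer's exact code):
# Sketch: the first suggestion applied to main.py's run_worker.
# "-E" (--task-events) makes the worker emit the events Flower listens for.
import subprocess


def run_worker():
    subprocess.call(
        ["celery", "-A", "main.celery", "worker", "-l", "info", "-E"]
    )
The C_FORCE_ROOT variable from the second suggestion goes under environment: in each celery_worker service of the compose files.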

Docker Pytest containers stay up after completing testing process

I've set up my django project and now I'm trying to test it with pytest. The issue is that running pytest within my containers doesn't kill them at the end of the process. So at the end of the day I'm stuck with multiple running containers from pytest and, often, PostgreSQL connection problems.
My docker-compose file:
version: '3'
services:
  license_server:
    build: .
    command: bash -c "python manage.py migrate && gunicorn LicenseServer.wsgi --reload --bind 0.0.0.0:8000"
    depends_on:
      - postgres
    volumes:
      - .:/code
    environment:
      DATABASE_NAME: "${DATABASE_NAME}"
      DATABASE_USER: "${DATABASE_USER}"
      DATABASE_PASSWORD: "${DATABASE_PASSWORD}"
      DATABASE_PORT: "${DATABASE_PORT}"
      DATABASE_HOST: "${DATABASE_HOST}"
    env_file: .env
    ports:
      - "8000:8000"
    restart: always
  postgres:
    build: ./postgres
    volumes:
      - ./postgres/postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_PASSWORD: postgres
      DATABASE_NAME: "${DATABASE_NAME}"
      DATABASE_USER: "${DATABASE_USER}"
      DATABASE_PASSWORD: "${DATABASE_PASSWORD}"
      DATABASE_PORT: "${DATABASE_PORT}"
      DATABASE_HOST: "${DATABASE_HOST}"
    command: "-p 8005"
    env_file: .env
    ports:
      - "127.0.0.1:8005:8005"
    restart: always
  nginx:
    image: nginx:latest
    container_name: nginx1
    ports:
      - "8001:80"
    volumes:
      - .:/code
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - license_server
What I want to achieve is automatically closing containers after the testing process is finished.
When you have restart: always, the containers will just keep restarting once all the processes spawned by the command have exited. Even when you try to kill the running containers yourself they will tend to restart (which can be a nuisance). Try removing restart: always from your service descriptions.
For more info, check the docker-compose.yml reference

Named volume not being created in docker

I'm trying to create a django/nginx/gunicorn/postgres docker-compose configuration.
Every time I call docker-compose down, my postgres db gets wiped. I did a little digging, and when I call docker-compose up, my named volume is not being created like I've seen in other tutorials.
What am I doing wrong?
Here is my yml file (if it helps, I'm using macOS to run my project)
version: "3"
volumes:
postgres:
driver: local
services:
database:
image: "postgres:latest" # use latest postgres
container_name: database
environment:
- POSTGRES_USER=REDACTED
- POSTGRES_PASSWORD=REDACTED
- POSTGRES_DB=REDACTED
volumes:
- postgres:/postgres
ports:
- 5432:5432
nginx:
image: nginx:latest
container_name: nginx
ports:
- "8000:8000"
volumes:
- ./src:/src
- ./config/nginx:/etc/nginx/conf.d
- ./src/static:/static
depends_on:
- web
migrate:
build: .
container_name: migrate
depends_on:
- database
command: bash -c "python manage.py makemigrations && python manage.py migrate"
volumes:
- ./src:/src
web:
build: .
container_name: django
command: gunicorn Project.wsgi:application --bind 0.0.0.0:8000
depends_on:
- migrate
- database
volumes:
- ./src:/src
- ./src/static:/static
expose:
- "8000"
Postgres stores its data under /var/lib/postgresql/data, so that is where the named volume needs to be mounted; mounting it at /postgres leaves the real data directory on the container's writable layer, which is discarded on docker-compose down:
volumes:
  - postgres:/var/lib/postgresql/data
