Creating a shared directory between containers with docker-compose - python

My Flask app (webapp) has two directories inside (uploads and images). I want my second container (rq-worker) to have access to them, so it can take something from uploads and save back to images. How can I organize this inside my docker-compose.yml?
version: '3.5'
services:
  web:
    build: ./webapp
    image: webapp
    container_name: webapp
    ports:
      - "5000:5000"
    depends_on:
      - redis-server
      - mongodb
  redis-server:
    image: redis:alpine
    container_name: redis-server
    ports:
      - 6379:6379
  mongodb:
    image: mongo:4.2-bionic
    container_name: mongodb
    ports:
      - "27017:27017"
  rq-worker:
    image: jaredv/rq-docker:0.0.2
    container_name: rq-worker
    command: rq worker -u redis://redis-server:6379 high normal low
    deploy:
      replicas: 1
    depends_on:
      - redis-server
  dashboard:
    build: ./dashboard
    image: dashboard
    container_name: dashboard
    ports:
      - "9181:9181"
    command: rq-dashboard -H redis-server
    depends_on:
      - redis-server

You'll need to specify a volume like this:
volumes:
  data-volume:
And then attach it to your services, e.g. for web:
web:
  build: ./webapp
  image: webapp
  container_name: webapp
  ports:
    - "5000:5000"
  depends_on:
    - redis-server
    - mongodb
  volumes:
    - data-volume:/my/mnt/point
The documentation has more info, including how to configure a volume driver, e.g. if you want the volume to live on NFS. There is also a list of available volume plugins which enable Docker volumes to persist across multiple Docker hosts.
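Applied to the question above, a minimal sketch could attach two named volumes to both webapp and rq-worker (the container-side paths /app/uploads and /app/images are assumptions; the other services are omitted for brevity):

version: '3.5'
services:
  web:
    build: ./webapp
    image: webapp
    volumes:
      # assumed container-side paths; use whatever your Flask app actually writes to
      - uploads:/app/uploads
      - images:/app/images
  rq-worker:
    image: jaredv/rq-docker:0.0.2
    command: rq worker -u redis://redis-server:6379 high normal low
    volumes:
      # same named volumes, so the worker sees what the web app wrote
      - uploads:/app/uploads
      - images:/app/images
volumes:
  uploads:
  images:

Whatever the web container writes under /app/uploads is then visible to the worker, and anything the worker saves to /app/images is visible to the web app.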

Related

How can I check the health status of my dockerized celery / django app?

I am running a dockerized Django app using the following docker-compose file:
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    command: gunicorn PriceOptimization.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
    networks:
      - dbnet
    ports:
      - "8000:8000"
    environment:
      aws_access_key_id: ${aws_access_key_id}
  redis:
    restart: always
    image: redis:latest
    networks:
      - dbnet
    ports:
      - "6379:6379"
  celery:
    restart: always
    build:
      context: .
    command: celery -A PriceOptimization worker -l info
    volumes:
      - ./PriceOptimization:/PriceOptimization
    depends_on:
      - web
      - redis
    networks:
      - dbnet
    environment:
      access_key_id: ${access_key_id}
  nginx:
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - static_volume:/home/app/web/staticfiles
    depends_on:
      - web
    networks:
      - dbnet
  database:
    image: "postgres" # use latest official postgres version
    restart: unless-stopped
    env_file:
      - ./database.env # configure postgres
    networks:
      - dbnet
    ports:
      - "5432:5432"
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
volumes:
  database-data:
  static_volume:
  media_volume:
I have added celery.py to my app, and I am building / running the docker container as follows:
docker-compose -f $HOME/PriceOpt/PriceOptimization/docker-compose.prod.yml up -d --build
Running the application in my development environment lets me check at the command line that the celery app is correctly connected, etc. Is there a way that I can test to see if my celery app is initiated properly at the end of the build process?
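One hedged way to check this, assuming the PriceOptimization project name from the compose file above, is to give the celery service a Docker healthcheck that pings the worker with Celery's inspect command; this is a sketch, not a verified answer to the original question:

celery:
  restart: always
  build:
    context: .
  command: celery -A PriceOptimization worker -l info
  healthcheck:
    # 'celery inspect ping' exits non-zero if no worker replies
    test: ["CMD-SHELL", "celery -A PriceOptimization inspect ping"]
    interval: 30s
    timeout: 10s
    retries: 3

Once the container is up, docker ps (or docker inspect --format '{{.State.Health.Status}}' <container>) reports healthy or unhealthy accordingly.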

Celery Flower with Multiple Workers in Different Docker Containers

I've been up and down StackOverflow and Google, but I can't seem to come close to an answer.
tl;dr How do I register a dockerized Celery worker in a dockerized Flower dashboard? How do I point the worker to the Flower dashboard so that the dashboard "knows" about it?
I have 2 FastAPI apps, both deployed with docker-compose.yml files. The first app's compose file looks like this:
version: '3.8'
services:
  web:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: app_web
    # '/start' is the shell script used to run the service
    command: /start
    volumes:
      - .:/app
    ports:
      - 8010:8000
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
  redis:
    image: redis:6-alpine
  celery_worker:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: app_celery_worker
    command: /start-celeryworker
    volumes:
      - .:/app
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
  flower:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: app_celery_flower
    command: /start-flower
    volumes:
      - .:/app
    env_file:
      - .env/.dev-sample
    ports:
      - 5557:5555
    depends_on:
      - redis
So this app is responsible for creating the Celery Flower dashboard.
The second app's compose file looks like:
version: '3.8'
services:
  web:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: app_two_web
    # '/start' is the shell script used to run the service
    command: /start
    volumes:
      - .:/app
    ports:
      - 8011:8000
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
  redis:
    image: redis:6-alpine
  celery_worker:
    build:
      context: .
      dockerfile: ./compose/local/fastapi/Dockerfile
    image: app_two_celery_worker
    command: /start-celeryworker
    volumes:
      - .:/app
    env_file:
      - .env/.dev-sample
    depends_on:
      - redis
I can't get this second app's worker to register in the Celery Flower dashboard running on port 5557. Everything else works fine, and I can even launch a second Flower dashboard with the second app on a different port, but I can't seem to connect the second worker to the first app's Flower dashboard.
This is what main.py looks like, for both apps.
from project import create_app

app = create_app()
celery = app.celery_app


def celery_worker():
    from watchgod import run_process
    import subprocess

    def run_worker():
        subprocess.call(
            ["celery", "-A", "main.celery", "worker", "-l", "info"]
        )

    run_process("./project", run_worker)


if __name__ == "__main__":
    celery_worker()
Thanks for any ideas that I can throw at this.
First, enable event monitoring by adding the "-E" flag to the worker invocation in your worker container's "command:".
Second, set the environment variable C_FORCE_ROOT in every worker service in your docker-compose configuration.
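Applied to the second app's celery_worker service, that might look like the sketch below. The direct celery command (taken from main.py above) replaces the /start-celeryworker wrapper only to make the flag visible; if you keep the wrapper, add -E inside that script instead:

celery_worker:
  build:
    context: .
    dockerfile: ./compose/local/fastapi/Dockerfile
  image: app_two_celery_worker
  # direct invocation so the -E (task events) flag is visible
  command: celery -A main.celery worker -l info -E
  environment:
    - C_FORCE_ROOT=1
  volumes:
    - .:/app
  env_file:
    - .env/.dev-sample
  depends_on:
    - redis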

Docker Network Host on Ubuntu

I have a Django REST service and another Flask service that works as a broker for the application. Both are different projects that run with their own Docker container.
I'm able to POST a product on the Django service that is consumed by the Flask service; however, I cannot reach the Django service via Flask.
These containers are running on the same network, and I already tried Thomasleveil's suggestions, including docker-host by qoomon.
The error from the request is the same as before I tried to forward the traffic. The difference is that now, when I make the request, it hangs for a while before failing.
The error is as follows:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.17.0.1', port=8000): Max retries exceeded with url: /api/user (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f0039388340>: Failed to establish a new connection: [Errno 110] Connection timed out'))
The request I'm trying to make is a POST at /api/products/1/like. At the moment, no body is required.
Here is how I'm doing the POST with Flask, where the IP is the Docker IP:
@app.route("/api/products/<int:id>/like", methods=["POST"])
def like(id):
    req = requests.get("http://172.17.0.1:8000/api/user")
    json = req.json()
    try:
        product_user = ProductUser(user_id=json["id"], product=id)
        db.session.add(product_user)
        db.session.commit()
        publish("product_liked", id)
    except:
        abort(400, "You already liked this product")
    return jsonify({
        "message": "success"
    })
Django's docker compose file (please ignore the service tcp_message_emitter):
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
  dockerhost:
    image: qoomon/docker-host
    cap_add:
      - NET_ADMIN
      - NET_RAW
    restart: on-failure
    networks:
      - backend
  tcp_message_emitter:
    image: alpine
    depends_on:
      - dockerhost
    command: [ "sh", "-c", "while :; do date; sleep 1; done | nc 'dockerhost' 2323 -v" ]
    networks:
      - backend
networks:
  backend:
    driver: bridge
Flask's docker compose file:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python main.py"
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
At this point, I know that I am missing some detail or that I have a misconfiguration.
You can have a look at the repo here: https://github.com/lfbatista/ms-ecommerce
Any help would be appreciated.
These containers are not actually on the same network. To put two containers from different docker-compose projects into one network you need to 'import' an existing network in one of the files. Here's how you can do it:
# first project
networks:
  internal:
  shared:
---
# second project
networks:
  internal:
  shared:
    # This is where all the magic happens:
    external: true     # Means do not create a network, import existing.
    name: admin_shared # Name of the existing network. It's usually made of <folder_name>_<network_name>.
Do not forget to put all services into the same internal network or they will not be able to communicate with each other. If you forget to do that, Docker will create a <folder_name>_default network and put any container with no explicitly assigned network there. You can assign networks like this:
services:
  backend:
    ...
    networks:
      internal:
      # Since this service needs access to the service in another project,
      # you put two networks here.
      shared:
        # This part is relevant for this specific question because
        # both projects have services with identical names. To avoid
        # a mess with DNS names you can add an additional name to the
        # service using 'aliases'. This particular service will be
        # available in the shared network as 'flask-backend'.
        aliases:
          - flask-backend
  db:
    ...
    # You can also assign networks as an array if you need no extra configuration:
    networks:
      - internal
And here are the files from your repository. Instead of an IP address, one service can reach the other via flask-backend or django-backend, respectively. Note that I cut out those strange 'host network containers'.
admin/docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
    networks:
      internal:
      shared:
        aliases:
          - django-backend
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
    networks:
      - internal
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
    networks:
      - internal
networks:
  internal:
  shared:
main/docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python main.py"
    networks:
      internal:
      shared:
        aliases:
          - flask-backend
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    networks:
      - internal
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    networks:
      - internal
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
networks:
  internal:
  shared:
    external: true
    name: admin_shared
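With the shared network and aliases in place, the Flask view can drop the hard-coded bridge IP and reach Django by its alias instead; a one-line sketch of the change in like() (8000 is the Django container's internal port from the compose file above):

# instead of the hard-coded 172.17.0.1 gateway address:
req = requests.get("http://django-backend:8000/api/user")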

Named volume not being created in docker

I'm trying to create a django/nginx/gunicorn/postgres docker-compose configuration.
I noticed that my postgres db was getting wiped every time I called docker-compose down. I did a little digging, and when I call docker-compose up, my named volume is not being created like I've seen in other tutorials.
What am I doing wrong?
Here is my yml file (if it helps, I'm using macOS to run my project)
version: "3"
volumes:
  postgres:
    driver: local
services:
  database:
    image: "postgres:latest" # use latest postgres
    container_name: database
    environment:
      - POSTGRES_USER=REDACTED
      - POSTGRES_PASSWORD=REDACTED
      - POSTGRES_DB=REDACTED
    volumes:
      - postgres:/postgres
    ports:
      - 5432:5432
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "8000:8000"
    volumes:
      - ./src:/src
      - ./config/nginx:/etc/nginx/conf.d
      - ./src/static:/static
    depends_on:
      - web
  migrate:
    build: .
    container_name: migrate
    depends_on:
      - database
    command: bash -c "python manage.py makemigrations && python manage.py migrate"
    volumes:
      - ./src:/src
  web:
    build: .
    container_name: django
    command: gunicorn Project.wsgi:application --bind 0.0.0.0:8000
    depends_on:
      - migrate
      - database
    volumes:
      - ./src:/src
      - ./src/static:/static
    expose:
      - "8000"
You need to mount the data directory at /var/lib/postgresql/data
volumes:
  - postgres:/var/lib/postgresql/data
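To confirm the named volume is actually created and reused, the standard Docker commands are enough (the project prefix "myproject" is a placeholder; Compose prefixes volume names with the project directory by default):

docker volume ls                            # the named volume shows up as <project>_postgres
docker volume inspect myproject_postgres    # shows where Docker stores it on the host
docker-compose down                         # keeps named volumes; only 'down -v' removes them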

How to run Python Django and Celery using docker-compose?

I have a Python application using Django and Celery, and I am trying to run it with Docker and docker-compose because I also use Redis and DynamoDB.
The problem is the following:
I'm not able to run both services, WSGI and Celery, because only the first command in the container works as expected.
version: '3.3'
services:
  redis:
    image: redis:3.2-alpine
    volumes:
      - redis_data:/data
    ports:
      - "6379:6379"
  dynamodb:
    image: dwmkerr/dynamodb
    ports:
      - "3000:8000"
    volumes:
      - dynamodb_data:/data
  jobs:
    build:
      context: nubo-async-cfe-seces
      dockerfile: Dockerfile
    environment:
      - REDIS_HOST=redisrvi
      - PYTHONUNBUFFERED=0
      - CC_DYNAMODB_NAMESPACE=None
      - CC_DYNAMODB_ACCESS_KEY_ID=anything
      - CC_DYNAMODB_SECRET_ACCESS_KEY=anything
      - CC_DYNAMODB_HOST=dynamodb
      - CC_DYNAMODB_PORT=8000
      - CC_DYNAMODB_IS_SECURE=False
    command: >
      bash -c "celery worker -A tasks.async_service -Q dynamo-queue -E --loglevel=ERROR &&
      uwsgi --socket 0.0.0.0:8080 --protocol=http --wsgi-file nubo_async/wsgi.py"
    depends_on:
      - redis
      - dynamodb
    volumes:
      - .:/jobs
    ports:
      - "9090:8080"
volumes:
  redis_data:
  dynamodb_data:
Has anyone had the same problem?
You may refer to the docker-compose setup of the Saleor project. I would suggest letting Celery run in its own service, depending only on redis as the broker. See the configuration in its docker-compose.yml file:
services:
  web:
    build:
      context: .
      dockerfile: ./Dockerfile
      args:
        STATIC_URL: '/static/'
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    env_file: common.env
    depends_on:
      - db
      - redis
  celery:
    build:
      context: .
      dockerfile: ./Dockerfile
      args:
        STATIC_URL: '/static/'
    command: celery -A saleor worker --app=saleor.celeryconf:app --loglevel=info
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    env_file: common.env
    depends_on:
      - redis
Note also that the connections from both services to redis are configured separately through environment variables, as shown in the common.env file:
CACHE_URL=redis://redis:6379/0
CELERY_BROKER_URL=redis://redis:6379/1
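A hedged sketch of how the broker variable would typically be consumed on the application side (the module name and default value here are assumptions, not Saleor's actual code):

# celeryconf.py (sketch) -- reads the broker URL that common.env provides
import os

from celery import Celery

broker_url = os.environ.get("CELERY_BROKER_URL", "redis://redis:6379/1")
app = Celery("myproject", broker=broker_url)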
Here's the docker-compose file as suggested by #Satevg, running the Django and Celery applications in separate containers. Works fine!
version: '3.3'
services:
  redis:
    image: redis:3.2-alpine
    volumes:
      - redis_data:/data
    ports:
      - "6379:6379"
  dynamodb:
    image: dwmkerr/dynamodb
    ports:
      - "3000:8000"
    volumes:
      - dynamodb_data:/data
  jobs:
    build:
      context: nubo-async-cfe-services
      dockerfile: Dockerfile
    environment:
      - REDIS_HOST=redis
      - PYTHONUNBUFFERED=0
      - CC_DYNAMODB_NAMESPACE=None
      - CC_DYNAMODB_ACCESS_KEY_ID=anything
      - CC_DYNAMODB_SECRET_ACCESS_KEY=anything
      - CC_DYNAMODB_HOST=dynamodb
      - CC_DYNAMODB_PORT=8000
      - CC_DYNAMODB_IS_SECURE=False
    command: bash -c "uwsgi --socket 0.0.0.0:8080 --protocol=http --wsgi-file nubo_async/wsgi.py"
    depends_on:
      - redis
      - dynamodb
    volumes:
      - .:/jobs
    ports:
      - "9090:8080"
  celery:
    build:
      context: nubo-async-cfe-services
      dockerfile: Dockerfile
    environment:
      - REDIS_HOST=redis
      - PYTHONUNBUFFERED=0
      - CC_DYNAMODB_NAMESPACE=None
      - CC_DYNAMODB_ACCESS_KEY_ID=anything
      - CC_DYNAMODB_SECRET_ACCESS_KEY=anything
      - CC_DYNAMODB_HOST=dynamodb
      - CC_DYNAMODB_PORT=8000
      - CC_DYNAMODB_IS_SECURE=False
    command: celery worker -A tasks.async_service -Q dynamo-queue -E --loglevel=ERROR
    depends_on:
      - redis
      - dynamodb
    volumes:
      - .:/jobs
volumes:
  redis_data:
  dynamodb_data:
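To bring everything up and verify that each container is running its own process, the usual commands apply:

docker-compose up -d --build
docker-compose ps              # both 'jobs' and 'celery' should show as Up
docker-compose logs celery     # check that the worker started and connected to the broker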
