docker compose for multiple commands - python

I am unable to run multiple commands for the Python scripts in the rulsmalldata service. Can you please suggest solutions? Here is my compose file:
version: "3"
networks:
mlflow:
external: true
services:
redis:
restart: always
image: redis:latest
command:
- --loglevel warning
container_name: "redis"
rulsmalldata:
image: rulsmalldata:mlflow-project-latest
command: bash -c "python mlflow_model_run.py && python mlflow_model_serve.py && python mlflow_model_output.py"
networks:
- mlflow
ports:
- "80:80"
environment:
MLFLOW_TRACKING_URI: <<TRACKING_URI>>
REDIS_ADDRESS: redis
AZURE_STORAGE_CONNECTION_STRING: 'DefaultEndpointsProtocol=https;AccountName=<<Name>>;AccountKey=<<KEY>>EndpointSuffix=core.windows.net'
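No accepted answer is recorded for this question here, but the usual reason such a chain stops partway is that one of the scripts (typically the serve step) starts a server and never exits, so everything after it in the && chain never runs. A minimal sketch of one workaround is below, splitting the long-running server into its own service; the service name rulsmalldata-serve is made up for illustration, and it assumes mlflow_model_output.py only needs the run step to have completed, not the live server:

services:
  rulsmalldata:
    image: rulsmalldata:mlflow-project-latest
    # one-shot steps: run/log the model, then write outputs, then exit
    command: bash -c "python mlflow_model_run.py && python mlflow_model_output.py"
    networks:
      - mlflow
  rulsmalldata-serve:
    image: rulsmalldata:mlflow-project-latest
    # long-running model server kept in its own container
    command: python mlflow_model_serve.py
    ports:
      - "80:80"
    networks:
      - mlflow
    # depends_on only orders startup; it does not wait for the run step to finish
    depends_on:
      - rulsmalldata

The environment block from the original service would be repeated on both services (omitted here for brevity).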

Related

Problem with converting docker command to compose file

I am trying to run my Flask app in two ways: with the docker run command and with a Compose file. When I use the following commands everything works fine:
docker container run --name flask-database -d --network flask_network \
  -e POSTGRES_USER=admin -e POSTGRES_PASSWORD=admin -e POSTGRES_DB=flask_db \
  -v postgres_data:/var/lib/postgresql/data -p 5432:5432 postgres:13

docker container run -p 5000:5000 --network flask_network flask_app
But when I try to use my compose file (with docker compose up) I see this error:
File "/app/main_python_files/routes.py", line 11, in home
web | cur.execute('SELECT * FROM books;')
web | psycopg2.errors.UndefinedTable: relation "books" does not exist
web | LINE 1: SELECT * FROM books;
What do I have to change in my compose file? I will be very grateful for a response! Here is my compose file:
version: '3.7'
services:
  flask-database:
    container_name: flask-database
    image: postgres:13
    restart: always
    ports:
      - 5432:5432
    networks:
      - flask_network
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=admin
      - POSTGRES_DB=flask_db
    volumes:
      - postgres_data:/var/lib/postgresql/data
  web:
    container_name: web
    #build: web
    image: flask_app
    restart: always
    ports:
      - 5000:5000
    networks:
      - flask_network
    depends_on:
      - flask-database
    links:
      - flask-database
networks:
  flask_network: {}
volumes:
  postgres_data: {}
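No answer is shown for this one in the thread, but the error usually means the books table was never created in the fresh postgres_data volume that Compose provisions (for example, it may have been created by hand in the docker run setup). One way to create it automatically, sketched below under the assumption that a schema.sql file with the CREATE TABLE books statement sits next to the compose file, is to mount it into the postgres image's init-script directory; scripts there run only when the data volume is empty on first start:

  flask-database:
    image: postgres:13
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=admin
      - POSTGRES_DB=flask_db
    volumes:
      - postgres_data:/var/lib/postgresql/data
      # schema.sql is hypothetical; it would hold CREATE TABLE books (...) and any seed rows
      - ./schema.sql:/docker-entrypoint-initdb.d/schema.sql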

Unknown mysql server host on docker and python

I'm building an API that fetches data from a MySQL database using Docker. I've tried everything and I always get this error: 2005 (HY000): Unknown MySQL server host 'db' (-3). Here is my docker compose file:
version: '3'
services:
  web:
    container_name: nginx
    image: nginx
    volumes:
      - ./nginx/nginx.conf:/tmp/nginx.conf
    environment:
      - FLASK_SERVER_ADDR=backend:9091
      - DB_PASSWORD=password
      - DB_USER=user
      - DB_HOST=db
    command: /bin/bash -c "envsubst < /tmp/nginx.conf > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
    ports:
      - 80:80
    networks:
      - local
    depends_on:
      - backend
  backend:
    container_name: app
    build: flask
    environment:
      - FLASK_SERVER_PORT=9091
      - DB_PASSWORD=password
    volumes:
      - flask:/tmp/app_data
    restart: unless-stopped
    networks:
      - local
    depends_on:
      - db
    links:
      - db
  db:
    container_name: db
    image: mysql
    restart: unless-stopped
    volumes:
      - ./mysql:/docker-entrypoint-initdb.d
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=database
      - MYSQL_USER=user
      - MYSQL_PASSWORD=password
    ports:
      - 3306:3306
networks:
  local:
volumes:
  flask:
    driver: local
  db:
    driver: local
Inside the flask directory I have my Dockerfile like so:
FROM ubuntu:latest
WORKDIR /src
RUN apt -y update
RUN apt -y upgrade
RUN apt install -y python3
RUN apt install -y python3-pip
COPY . .
RUN chmod +x -R .
RUN pip install -r requirements.txt --no-cache-dir
CMD ["python3","app.py"]
Finally, in my app.py file I try to connect to the database using the name of the Docker container. I have also tried using localhost and it still gives me the same error. This is the part of the code I use to access it:
conn = mysql.connector.connect(
    host="db",
    port=3306,
    user="user",
    password="password",
    database="database")
What is it that I'm doing wrong?
The containers aren't on the same networks:, which could be why you're having trouble.
I'd recommend deleting all of the networks: blocks in the file, both the blocks at the top level and the blocks in the web and backend containers. Compose will create a network named default for you and attach all of the containers to that network. Networking in Compose in the Docker documentation has more details on this setup.
The links: block is related to an obsolete Docker networking mode, and I've seen it implicated in problems in other SO questions. You should remove it as well.
You also do not need to manually specify container_name: in most cases. For the Nginx container, the Docker Hub nginx image already knows how to do the envsubst processing so you do not need to override its command:.
This should leave you with:
version: '3.8'
services:
  web:
    image: nginx
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/templates/default.conf.template
    environment: { ... }
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    build: flask
    environment: { ... }
    volumes:
      - flask:/tmp/app_data
    restart: unless-stopped
    depends_on:
      - db
  db:
    image: mysql
    restart: unless-stopped
    volumes:
      - ./mysql:/docker-entrypoint-initdb.d
      - db:/var/lib/mysql
    environment: { ... }
    ports:
      - 3306:3306
volumes:
  flask:
  db:

Docker Pytest containers stay up after completing testing process

I've set up my Django project and now I'm trying to test it with pytest. The issue is that running pytest within my containers doesn't kill them at the end of the process. So at the end of the day I'm stuck with multiple running containers left over from pytest, and often PostgreSQL connection problems.
My docker-compose file:
version: '3'
services:
  license_server:
    build: .
    command: bash -c "python manage.py migrate && gunicorn LicenseServer.wsgi --reload --bind 0.0.0.0:8000"
    depends_on:
      - postgres
    volumes:
      - .:/code
    environment:
      DATABASE_NAME: "${DATABASE_NAME}"
      DATABASE_USER: "${DATABASE_USER}"
      DATABASE_PASSWORD: "${DATABASE_PASSWORD}"
      DATABASE_PORT: "${DATABASE_PORT}"
      DATABASE_HOST: "${DATABASE_HOST}"
    env_file: .env
    ports:
      - "8000:8000"
    restart: always
  postgres:
    build: ./postgres
    volumes:
      - ./postgres/postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_PASSWORD: postgres
      DATABASE_NAME: "${DATABASE_NAME}"
      DATABASE_USER: "${DATABASE_USER}"
      DATABASE_PASSWORD: "${DATABASE_PASSWORD}"
      DATABASE_PORT: "${DATABASE_PORT}"
      DATABASE_HOST: "${DATABASE_HOST}"
    command: "-p 8005"
    env_file: .env
    ports:
      - "127.0.0.1:8005:8005"
    restart: always
  nginx:
    image: nginx:latest
    container_name: nginx1
    ports:
      - "8001:80"
    volumes:
      - .:/code
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - license_server
What I want to achieve is that the containers are automatically shut down after the testing process finishes.
When you have restart: always they will just keep restarting when all the processes spawned by the command have exited. Even when you try to kill the running containers yourself they will tend to restart (which can be a nuisance). Try removing restart: always from your service descriptions.
For more info, check the docker-compose.yml reference
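A sketch of how that can look in practice, assuming a separate override file is acceptable for test runs (the file name docker-compose.test.yml is just an example):

# docker-compose.test.yml: hypothetical override applied only while testing
version: '3'
services:
  license_server:
    restart: "no"
  postgres:
    restart: "no"
  nginx:
    restart: "no"

Running docker-compose -f docker-compose.yml -f docker-compose.test.yml up merges the override over the base file, and docker-compose down afterwards removes whatever containers remain.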

Python Celery trying to occupy a port number in docker-compose and creating problems

docker-compose.yml:
python-api: &python-api
  build:
    context: /Users/AjayB/Desktop/python-api/
  ports:
    - "8000:8000"
  networks:
    - app-tier
  expose:
    - "8000"
  depends_on:
    - python-model
  volumes:
    - .:/python_api/
  environment:
    - PYTHON_API_ENV=development
  command: >
    sh -c "ls /python-api/ &&
           python_api_setup.sh development
           python manage.py migrate &&
           python manage.py runserver 0.0.0.0:8000"
python-model: &python-model
  build:
    context: /Users/AjayB/Desktop/Python/python/
  ports:
    - "8001:8001"
  networks:
    - app-tier
  environment:
    - PYTHON_API_ENV=development
  expose:
    - "8001"
  volumes:
    - .:/python_model/
  command: >
    sh -c "ls /python-model/
           python_setup.sh development
           cd /server/ &&
           python manage.py migrate &&
           python manage.py runserver 0.0.0.0:8001"
python-celery:
  <<: *python-api
  environment:
    - PYTHON_API_ENV=development
  networks:
    - app-tier
  links:
    - redis:redis
  depends_on:
    - redis
  command: >
    sh -c "celery -A server worker -l info"
redis:
  image: redis:5.0.8-alpine
  hostname: redis
  networks:
    - app-tier
  expose:
    - "6379"
  ports:
    - "6379:6379"
  command: ["redis-server"]
python-celery reuses the python-api config but should run as a separate container. However, it is trying to occupy the same port as python-api, which should never be the case.
The error that I'm getting is:
AjayB$ docker-compose up
Creating integrated_redis_1         ... done
Creating integrated_python-model_1  ... done
Creating integrated_python-api_1    ...
Creating integrated_python-celery_1 ... error
Creating integrated_python-api_1    ... done
ERROR: for python-celery  Cannot start service python-celery: driver failed programming external connectivity on endpoint integrated_python-celery_1 (ab5e079dbc3a30223e16052f21744c2b5dfc56adbe1d1055165b1f85f179f69c): Bind for 0.0.0.0:8000 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
on doing docker ps -a, I get this:
AjayB$ docker ps -a
CONTAINER ID   IMAGE                      COMMAND                  CREATED          STATUS                      PORTS                    NAMES
2ff1277fb7a7   integrated_python-celery   "sh -c 'celery -A se…"   10 seconds ago   Created                                              integrated_python-celery_1
5b60221b42a4   integrated_python-api      "sh -c 'ls /crackd-a…"   11 seconds ago   Up 9 seconds                0.0.0.0:8000->8000/tcp   integrated_python-api_1
bacd8aa3268f   integrated_python-model    "sh -c 'ls /crackd-m…"   12 seconds ago   Exited (2) 10 seconds ago                            integrated_python-model_1
9fdab833b436   redis:5.0.8-alpine         "docker-entrypoint.s…"   12 seconds ago   Up 10 seconds               0.0.0.0:6379->6379/tcp   integrated_redis_1
I tried force-removing the containers and ran docker-compose up again, but I get the same error. :/ Where am I making a mistake?
I'm also doubtful about the volumes: section. Can anyone please tell me if volumes is correct?
And please help me with this error. PS: this is my first try with Docker.
Thanks!
This is because you re-use the full config of python-api including the ports section which will expose port 8000 (by the way, expose is redundant since your ports section already exposes the port).
I would create a common section that can be reused by any of your services. In your case, it would be something like this:
version: '3.7'
x-common-python-api:
  &default-python-api
  build:
    context: /Users/AjayB/Desktop/python-api/
  networks:
    - app-tier
  environment:
    - PYTHON_API_ENV=development
  volumes:
    - .:/python_api/
services:
  python-api:
    <<: *default-python-api
    ports:
      - "8000:8000"
    depends_on:
      - python-model
    command: >
      sh -c "ls /python-api/ &&
             python_api_setup.sh development
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
  python-model: &python-model
    .
    .
    .
  python-celery:
    <<: *default-python-api
    links:
      - redis:redis
    depends_on:
      - redis
    command: >
      sh -c "celery -A server worker -l info"
  redis:
    .
    .
    .
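To make the effect of the merge key explicit: once Compose expands <<: *default-python-api, the python-celery service above is equivalent to the sketch below. It inherits build:, networks:, environment:, and volumes: from the shared block, but no ports:, which is what removes the port 8000 conflict:

  python-celery:
    build:
      context: /Users/AjayB/Desktop/python-api/
    networks:
      - app-tier
    environment:
      - PYTHON_API_ENV=development
    volumes:
      - .:/python_api/
    links:
      - redis:redis
    depends_on:
      - redis
    command: >
      sh -c "celery -A server worker -l info"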
There is a lot in that docker-compose.yml file, but much of it is unnecessary. expose: in a Dockerfile does almost nothing; links: aren't needed with the current networking system; Compose provides a default network for you; your volumes: try to inject code into the container that should already be present in the image. If you clean all of this up, the only part that you'd really want to reuse from one container to the other is its build: (or image:), at which point the YAML anchor syntax is unnecessary.
This docker-compose.yml should be functionally equivalent to what you show in the question:
version: '3'
services:
  python-api:
    build:
      context: /Users/AjayB/Desktop/python-api/
    ports:
      - "8000:8000"
    # No networks:, use `default`
    # No expose:, use what's in the Dockerfile (or nothing)
    depends_on:
      - python-model
    # No volumes:, use what's in the Dockerfile
    # No environment:, this seems to be a required setting in the Dockerfile
    # No command:, use what's in the Dockerfile
  python-model:
    build:
      context: /Users/AjayB/Desktop/Python/python/
    ports:
      - "8001:8001"
  python-celery:
    build: # copied from python-api
      context: /Users/AjayB/Desktop/python-api/
    depends_on:
      - redis
    command: celery -A server worker -l info  # one line, no sh -c wrapper
  redis:
    image: redis:5.0.8-alpine
    # No hostname:, it doesn't do anything
    ports:
      - "6379:6379"
    # No command:, use what's in the image
Again, notice that the only thing we've actually copied from the python-api container to the python-celery container is the build: block; all of the other settings that would be shared across the two containers (code, exposed ports) are included in the Dockerfile that describes how to build the image.
The flip side of this is that you need to make sure all of these settings are in fact included in your Dockerfile:
# Copy the application code in
COPY . .
# Set the "development" environment variable
ENV PYTHON_API_ENV=development
# Document which port you'll use by default
EXPOSE 8000
# Specify the default command to run
# (Consider writing a shell script with this content instead)
CMD python_api_setup.sh development && \
python manage.py migrate && \
python manage.py runserver 0.0.0.0:8000

How to run Python Django and Celery using docker-compose?

I have a Python application using Django and Celery, and I'm trying to run it with Docker and docker-compose because I'm also using Redis and DynamoDB.
The problem is the following:
I'm not able to execute both services, WSGI and Celery, because only the first instruction in the command runs:
version: '3.3'
services:
  redis:
    image: redis:3.2-alpine
    volumes:
      - redis_data:/data
    ports:
      - "6379:6379"
  dynamodb:
    image: dwmkerr/dynamodb
    ports:
      - "3000:8000"
    volumes:
      - dynamodb_data:/data
  jobs:
    build:
      context: nubo-async-cfe-seces
      dockerfile: Dockerfile
    environment:
      - REDIS_HOST=redisrvi
      - PYTHONUNBUFFERED=0
      - CC_DYNAMODB_NAMESPACE=None
      - CC_DYNAMODB_ACCESS_KEY_ID=anything
      - CC_DYNAMODB_SECRET_ACCESS_KEY=anything
      - CC_DYNAMODB_HOST=dynamodb
      - CC_DYNAMODB_PORT=8000
      - CC_DYNAMODB_IS_SECURE=False
    command: >
      bash -c "celery worker -A tasks.async_service -Q dynamo-queue -E --loglevel=ERROR &&
               uwsgi --socket 0.0.0.0:8080 --protocol=http --wsgi-file nubo_async/wsgi.py"
    depends_on:
      - redis
      - dynamodb
    volumes:
      - .:/jobs
    ports:
      - "9090:8080"
volumes:
  redis_data:
  dynamodb_data:
Has anyone had the same problem?
You may refer to the docker-compose setup of the Saleor project. I would suggest letting Celery run its daemon in its own service, depending only on redis as the broker. See the configuration in its docker-compose.yml file:
services:
  web:
    build:
      context: .
      dockerfile: ./Dockerfile
      args:
        STATIC_URL: '/static/'
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    env_file: common.env
    depends_on:
      - db
      - redis
  celery:
    build:
      context: .
      dockerfile: ./Dockerfile
      args:
        STATIC_URL: '/static/'
    command: celery -A saleor worker --app=saleor.celeryconf:app --loglevel=info
    restart: unless-stopped
    networks:
      - saleor-backend-tier
    env_file: common.env
    depends_on:
      - redis
Note also that the connections from both services to redis are configured separately through environment variables, as shown in the common.env file:
CACHE_URL=redis://redis:6379/0
CELERY_BROKER_URL=redis://redis:6379/1
Here's the docker-compose as suggested by @Satevg, running the Django and Celery applications in separate containers. Works fine!
version: '3.3'
services:
  redis:
    image: redis:3.2-alpine
    volumes:
      - redis_data:/data
    ports:
      - "6379:6379"
  dynamodb:
    image: dwmkerr/dynamodb
    ports:
      - "3000:8000"
    volumes:
      - dynamodb_data:/data
  jobs:
    build:
      context: nubo-async-cfe-services
      dockerfile: Dockerfile
    environment:
      - REDIS_HOST=redis
      - PYTHONUNBUFFERED=0
      - CC_DYNAMODB_NAMESPACE=None
      - CC_DYNAMODB_ACCESS_KEY_ID=anything
      - CC_DYNAMODB_SECRET_ACCESS_KEY=anything
      - CC_DYNAMODB_HOST=dynamodb
      - CC_DYNAMODB_PORT=8000
      - CC_DYNAMODB_IS_SECURE=False
    command: bash -c "uwsgi --socket 0.0.0.0:8080 --protocol=http --wsgi-file nubo_async/wsgi.py"
    depends_on:
      - redis
      - dynamodb
    volumes:
      - .:/jobs
    ports:
      - "9090:8080"
  celery:
    build:
      context: nubo-async-cfe-services
      dockerfile: Dockerfile
    environment:
      - REDIS_HOST=redis
      - PYTHONUNBUFFERED=0
      - CC_DYNAMODB_NAMESPACE=None
      - CC_DYNAMODB_ACCESS_KEY_ID=anything
      - CC_DYNAMODB_SECRET_ACCESS_KEY=anything
      - CC_DYNAMODB_HOST=dynamodb
      - CC_DYNAMODB_PORT=8000
      - CC_DYNAMODB_IS_SECURE=False
    command: celery worker -A tasks.async_service -Q dynamo-queue -E --loglevel=ERROR
    depends_on:
      - redis
      - dynamodb
    volumes:
      - .:/jobs
volumes:
  redis_data:
  dynamodb_data:
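As a small follow-up, the environment block duplicated between jobs and celery could be shared with a plain YAML anchor, in the same spirit as the anchors in the python-celery question above; a sketch (the anchor name nubo-env is made up):

  jobs:
    environment: &nubo-env
      - REDIS_HOST=redis
      - PYTHONUNBUFFERED=0
      - CC_DYNAMODB_NAMESPACE=None
      - CC_DYNAMODB_ACCESS_KEY_ID=anything
      - CC_DYNAMODB_SECRET_ACCESS_KEY=anything
      - CC_DYNAMODB_HOST=dynamodb
      - CC_DYNAMODB_PORT=8000
      - CC_DYNAMODB_IS_SECURE=False
  celery:
    # reuses the exact same list of variables defined on jobs
    environment: *nubo-env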
