I am working on a CLI app in Python for AWS SQS (running on LocalStack) in Docker. Here's my docker-compose.yml:
version: "3.8"

networks:
  localstack-net:
    name: localstack-net
    driver: bridge

services:
  localstack:
    image: localstack/localstack
    privileged: true
    networks:
      - localstack-net
    ports:
      - "4576:4576"
    environment:
      - DEBUG=1
      - EDGE_PORT=4576
      - DATA_DIR=/tmp/localstack/data
      - SERVICES=sqs:4567
    volumes:
      - ./.temp/localstack:/tmp/localstack
      - ./localstack_setup:/docker-entrypoint-initaws.d/
  cli_app:
    build:
      dockerfile: Dockerfile
    container_name: my_app
And here's my Dockerfile:
FROM python:3.8-slim
RUN useradd --create-home --shell /bin/bash app_user
WORKDIR /home/app_user
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
USER app_user
COPY . .
CMD ["bash"]
The problem is that the cli_app service exits as soon as I run docker-compose up.
What can I do to rectify this?
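For reference, a container whose main process is an interactive shell (CMD ["bash"]) exits immediately under docker-compose up, because no terminal is attached. A minimal sketch of keeping it alive so you can docker-compose exec into it later (the context: . line is an assumption that the Dockerfile sits next to docker-compose.yml):

```yaml
cli_app:
  build:
    context: .            # assumed: Dockerfile next to docker-compose.yml
    dockerfile: Dockerfile
  container_name: my_app
  stdin_open: true        # equivalent of docker run -i
  tty: true               # equivalent of docker run -t; keeps bash from exiting
```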
I'm trying to debug a Django app inside a Docker container; the app is launched under uWSGI. Unfortunately, the PyCharm debugger can't connect to the container and stops with a timeout.
My run configuration:
I've added up --build to run all containers in debug mode.
docker-compose.yml:
version: "2.4"

services:
  rabbitmq:
    image: rabbitmq:3.10.7-management-alpine
    container_name: bo-rabbitmq
  rsyslog:
    build:
      context: .
      dockerfile: docker/rsyslog/Dockerfile
    image: bo/rsyslog:latest
    container_name: bo-rsyslog
    platform: linux/amd64
    env_file:
      - .env
    volumes:
      - shared:/app/mnt
  api:
    build:
      context: .
      dockerfile: docker/api/Dockerfile
    image: bo/api:latest
    container_name: bo-api
    platform: linux/amd64
    ports:
      - "8081:8081"
      - "8082:8082"
    env_file:
      - .env
    volumes:
      - shared:/app/mnt
  apigw:
    build:
      context: .
      dockerfile: docker/apigw/Dockerfile
    image: bo/apigw:latest
    container_name: bo-apigw
    platform: linux/amd64
    ports:
      - "8080:8080"
    env_file:
      - .env
    volumes:
      - shared:/app/mnt
    depends_on:
      - api

volumes:
  shared:
Dockerfile (for api):
FROM nexus.custom.ru/base/python27:2.7.17 # CentOS 7 with Python 2.7
# Environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PYTHONPATH /app/
ENV PATH /app/:$PATH
ENV PIP_DEFAULT_TIMEOUT=100 \
    PIP_DISABLE_PIP_VERSION_CHECK=1 \
    PIP_NO_CACHE_DIR=1
# Install required software
RUN yum -y install enchant
# Working directory
WORKDIR /app
# Install and configure Poetry
RUN pip install --no-cache-dir poetry \
    && poetry config virtualenvs.create false
# Install project dependencies
COPY pyproject.toml .
COPY poetry.lock .
RUN poetry install --no-root --no-interaction
# Copy project files
COPY . .
COPY docker/api/manage.py ./
COPY docker/api/settings.py ./apps/adm/
COPY docker/api/config.py ./apps/adm/
COPY docker/api/config/development.yml ./config/
COPY docker/api/config/uwsgi/uwsgi.yml ./config/uwsgi/
COPY docker/api/entrypoint.sh ./
# Allow execution
RUN chmod +x /app/entrypoint.sh
# Entrypoint
ENTRYPOINT /app/entrypoint.sh
entrypoint.sh:
#!/bin/sh
# Create required directories
mkdir -p /app/mnt/spooler
mkdir -p /app/mnt/logs
mkdir -p /app/mnt/run
mkdir -p /app/mnt/shared/static
mkdir -p /app/mnt/protected_media
mkdir -p /app/mnt/htdocs
# Copy static
cp -r -n /app/static /app/mnt/shared/static
# Run uWSGI
uwsgi --yml=/app/config/uwsgi/uwsgi.yml
uwsgi.yml:
uwsgi:
  chdir: /app
  master: true
  procname-master: b::master
  procname: b::worker
  processes: 2
  threads: 4
  listen: 128
  max-requests: 1024
  buffer-size: 16384
  reload-on-exception: false
  master-fifo: /app/mnt/run/running.fifo
  vacuum: false
  lazy-apps: true
  enable-threads: true
  pythonpath: /app
  http: :8081
  env: DJANGO_SETTINGS_MODULE=apps.adm.settings
  module: apps.adm.wsgi
  stats: :8082
  stats-http: true
  memory-report: 1
  disable-logging: 0
  log-5xx: true
  log-4xx: true
  log-slow: 500
What am I doing wrong? Is it possible to connect PyCharm to a Django app running under uWSGI inside Docker?
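For context, a common pattern with uWSGI in a container is to attach to a PyCharm "Python Debug Server" from inside the worker process via pydevd-pycharm. This is only a sketch: the host name, port, and the PYCHARM_DEBUG switch are illustrative assumptions, and pydevd-pycharm must be installed in the image at a version matching PyCharm.

```python
import os

def attach_debugger():
    """Attach to a PyCharm debug server if PYCHARM_DEBUG is set.

    The host, port, and PYCHARM_DEBUG variable are assumptions for
    illustration; nothing happens unless debugging is requested.
    """
    if not os.environ.get("PYCHARM_DEBUG"):
        return False  # no-op in normal runs
    import pydevd_pycharm  # assumed to be installed in the image
    pydevd_pycharm.settrace(
        "host.docker.internal",  # the host machine as seen from the container
        port=5678,
        stdoutToServer=True,
        stderrToServer=True,
        suspend=False,  # don't pause the workers on attach
    )
    return True
```

Calling this early in the WSGI module lets each worker attach; lazy-apps: true (already set above) helps here, since every worker then imports the app in its own process.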
I used Docker and Django for this project, with a GitLab CI/CD pipeline, and the tests won't even start; they exit with the error below.
The tests were running until I added some tests to the Django app, and after that they failed:
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
Here is my Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
my docker-compose.yml:
version: "3.9"

services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: bash -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
and my gitlab-ci.yml:
image: python:latest

services:
  - mysql:latest
  - postgres:latest

variables:
  POSTGRES_DB: postgres

cache:
  paths:
    - ~/.cache/pip/

test:
  variables:
    DATABASE_URL: "postgresql://postgres:postgres@postgres:5432/$POSTGRES_DB"
  script:
    - pip install -r requirements.txt
    - python manage.py test

build:
  image: docker:19.03.12
  stage: build
  services:
    - docker:19.03.12-dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
Create a network for the containers in your docker-compose file and share that network between your app and db.
Something like this:
db:
  networks:
    - network_name
  # your other db setup follows
web:
  networks:
    - network_name
  # your other web setup follows
networks:
  network_name:
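A complementary detail for the CI failure above: inside the GitLab job, the database is reachable under the service name postgres, not the compose service name db, which is why the job sets DATABASE_URL. A sketch of consuming that variable in Django settings with only the standard library (the fallback URL pointing at db is an assumption for local compose runs, not necessarily the OP's settings module):

```python
import os
from urllib.parse import urlparse

# Parse DATABASE_URL (set in .gitlab-ci.yml) into Django's DATABASES dict.
# The fallback uses the compose service name "db" for local development.
url = urlparse(
    os.environ.get("DATABASE_URL", "postgresql://postgres:postgres@db:5432/postgres")
)

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": url.path.lstrip("/"),
        "USER": url.username,
        "PASSWORD": url.password,
        "HOST": url.hostname,  # "postgres" in CI, "db" under docker-compose
        "PORT": url.port or 5432,
    }
}
```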
I'm making a Python (Django) and MySQL project.
After creating some files, I get an error when I run this command:
docker-compose up -d
ERROR: Cannot locate specified Dockerfile: Dockerfile
Why am I getting an error when I have a Dockerfile in the current directory?
MYAPP
-django
--__init__.py
--asgi.py
--settings.py
--urls.py
--wsgi.py
-docker-compose.yml
-Dockerfile
-manage.py
-requirement.txt
-wait-for-in.sh
docker-compose.yml
version: '3'

services:
  db:
    image: mysql:5.7
    command: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    volumes:
      - .:/var/www/django
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: django
      MYSQL_DATABASE: django
      MYSQL_USER: django
      MYSQL_PASSWORD: django
  web:
    build: django
    command: sh -c "./wait-for-it.sh db:3306; python3 manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/var/www/django
    ports:
      - "8000:8000"
    depends_on:
      - db
Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /var/www/django
WORKDIR /var/www/django
ADD requirements.txt /var/www/django/
RUN pip install -r requirements.txt
ADD . /var/www/django/
web:
  build: django

is a shortcut for

web:
  build:
    context: django

and this means the Docker image is built in the django directory, so the Dockerfile should be placed there. The same goes for manage.py, requirement.txt and wait-for-in.sh.
You could try the following:
web:
  build:
    context: ./django/
    dockerfile: ./Dockerfile
You could try this:
web:
  build:
    context: ./django/
    dockerfile: ./Dockerfile
  command: >
    sh -c "./wait-for-it.sh db:3306; python3 manage.py runserver 0.0.0.0:8000"
  depends_on:
    - db
  ports:
    - "8000:8000"
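Alternatively, if manage.py, requirement.txt and wait-for-in.sh are meant to stay at the project root (as in the tree above), the build context can remain the root while only the Dockerfile location is given. A sketch of the same web service under that assumption:

```yaml
web:
  build:
    context: .              # project root, so manage.py etc. stay in the context
    dockerfile: Dockerfile  # resolved relative to the context
```

With this layout the COPY/ADD paths in the Dockerfile keep working unchanged, since they already assume root-level files.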
I am getting this error when trying to run migrations in my container. I cannot seem to figure out why.
Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"alembic\": executable file not found in $PATH": unknown
Dockerfile:
FROM python:3.8.2
WORKDIR /workspace/
COPY . .
RUN pip install pipenv
RUN pipenv install --deploy --ignore-pipfile
#EXPOSE 8000
#CMD ["pipenv", "run", "python", "/workspace/bin/web.py"]
Docker-Compose:
version: '3'

services:
  db:
    image: postgres:12
    ports:
      - "5432:5432"
    env_file:
      - .env.database.local
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=pgadmin4@pgadmin.org
      - PGADMIN_DEFAULT_PASSWORD=admin
    ports:
      - "5050:80"
    depends_on:
      - db
  redis:
    image: "redis:alpine"
  web:
    build: .
    environment:
      - PYTHONPATH=/workspace
    env_file:
      - .env.local
    ports:
      - "8000:8000"
    volumes:
      - .:/workspace
    depends_on:
      - db
      - redis
    command: "alembic upgrade head && pipenv run python /workspace/bin/web.py"
The command I run when I encounter this problem:
docker-compose run web alembic revision --autogenerate -m "First migration"
In my Dockerfile I set /workspace/ as the working directory for the whole program, so the command should resolve from there.
Yes, the issue was that the executable was not on my $PATH.
This is what I added inside my docker-compose:
- PATH=/directory/bin:$PATH
docker-compose run web pipenv run alembic revision --autogenerate -m "First migration"
or change the Dockerfile to install the dependencies into the system Python:
RUN pipenv install --deploy --ignore-pipfile --system
and run
docker-compose run web alembic revision --autogenerate -m "First migration"
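To see why the bare alembic call fails while pipenv run alembic works: the shell resolves executables only through $PATH, and pipenv installs console scripts into its virtualenv's bin directory, which is not on the container's PATH. A self-contained sketch, with a throwaway directory standing in for that virtualenv:

```python
import os
import shutil
import tempfile

def resolves(name, search_path):
    """Mirror the shell's lookup: search only the given PATH string."""
    return shutil.which(name, path=search_path) is not None

# Simulate the virtualenv's bin directory with a throwaway one containing
# a fake "alembic" console script.
venv_bin = tempfile.mkdtemp()
exe = os.path.join(venv_bin, "alembic")
with open(exe, "w") as f:
    f.write("#!/bin/sh\necho alembic\n")
os.chmod(exe, 0o755)

# The container's situation: the virtualenv bin dir is not searched.
print(resolves("alembic", "/usr/bin"))
# What `pipenv run` effectively does: prepend the virtualenv's bin dir.
print(resolves("alembic", venv_bin + os.pathsep + "/usr/bin"))
```

This is also why the two fixes above are equivalent in effect: pipenv run prepends the virtualenv's bin directory, while --system skips the virtualenv entirely.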
Dockerfile
FROM python:3.7
FROM registry.gitlab.com/datadrivendiscovery/images/primitives:ubuntu-bionic-python36-v2020.1.9
ENV PYTHONUNBUFFERED 1
RUN mkdir /bbml
WORKDIR /bbml
RUN pip install -r requirements.txt
RUN pip install --upgrade pip
ADD . /bbml/
CMD [ "python", "./manage.py runserver 0.0.0.0:8800" ]
docker-compose.yml
version: '3'
services:
web:
build: .
command: "python3 manage.py runserver 0.0.0.0:8800"
container_name: bbml
volumes:
- .:/bbml
ports:
- "8800:8800"
So I managed to get this to run properly by doing docker-compose run web, and I got the standard "Starting development server" message at the bottom, but when I go to localhost:8800 it says "site can't be reached". What's going on?
Dockerfile
FROM python:3
FROM registry.gitlab.com/datadrivendiscovery/images/primitives:ubuntu-bionic-python36-v2020.1.9
ENV PYTHONUNBUFFERED 1
RUN mkdir /bbml
WORKDIR /bbml
COPY requirements.txt /bbml/
RUN pip install -r requirements.txt
COPY . /bbml
docker-compose.yml
version: '3'
services:
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
container_name: bbml
volumes:
- .:/app
ports:
- "8000:8000"
Check your folder permissions and run:
sudo docker-compose up
I found the issue. Docker serves the app on the Docker host's address, so you actually need to go to that host's port 8800 rather than localhost:8800.