When running docker-compose run server python manage.py makemigrations (making migrations), I get this error:
django.template.library.InvalidTemplateLibrary: Invalid template library specified. ImportError raised when trying to load 'rest_framework.templatetags.rest_framework': No module named 'pytz'
My docker-compose.yml:
version: '3'
services:
  db:
    build: ./etc/docker/db
    restart: always
    volumes:
      - ./var/volumes/dbdata:/var/lib/mysql
    env_file:
      - ./etc/docker/db/env
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u root --password=example
      interval: 1s
      timeout: 5s
      retries: 10
  server: &web
    build:
      context: .
      dockerfile: ./etc/docker/web/Dockerfile
    volumes:
      - ./server:/home/web/server
    # depends_on:
    #   db: {condition: service_healthy}
    ports:
      - "8080:8080"
    command: ["python", "manage.py", "runserver", "0.0.0.0:8080"]
I tried installing pytz with pip install pytz, but I still get the same error. What could the problem be?
You need to install all dependencies inside the Dockerfile.
Your Dockerfile should contain something like
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
where your requirements.txt lists all the libraries you need to run the server.
Or you can install the libraries you need directly in the Dockerfile:
RUN pip install pytz django ...
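A minimal Dockerfile along these lines (the base image, paths, and port here are assumptions; adjust them to your project) might look like:

```dockerfile
# Assumed layout: manage.py and requirements.txt at the build context root
FROM python:3.10-slim
WORKDIR /app
# Copy and install dependencies first so this layer is cached
# between code-only rebuilds
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt
COPY . /app/
CMD ["python", "manage.py", "runserver", "0.0.0.0:8080"]
```

Note that running pip install on the host never affects the image; after editing requirements.txt you need to rebuild, e.g. docker-compose build server.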
I'm having a problem starting my Django project in a Docker container. I have tried this in multiple ways, for example:
docker-compose run web python manage.py runserver
or just
docker-compose up
The first time I tried this I got no error, but I couldn't open the app in a browser, so I tried again, and then it stopped working completely. I'm getting this error:
manage.py runserver: error: unrecognized arguments: python manage.py runserver
My docker-compose.yml
version: "3.9"
services:
  db:
    image: postgres:alpine
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=gitlabdumptables
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    environment:
      - POSTGRES_NAME=gitlabdumptables
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    depends_on:
      - db
My Dockerfile
# syntax=docker/dockerfile:1
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip3 install -r requirements.txt
CMD python manage.py makemigrations
CMD python manage.py migrate
COPY . /code/
ENTRYPOINT [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
The main process a Docker container runs is made up of two parts. You're providing an ENTRYPOINT in the Dockerfile, and then the equivalent of a CMD via the Compose command: (or docker run ... arguments). When you have both an ENTRYPOINT and a CMD, the CMD is passed as additional arguments to the ENTRYPOINT; so in this case you're effectively running python manage.py runserver 0.0.0.0:8000 python manage.py runserver 0.0.0.0:8000, and runserver rejects the duplicated arguments.
For this setup I'd suggest:
If you want to be able to override the command when you launch the container (which is useful), prefer CMD to ENTRYPOINT in your Dockerfile. Then these command overrides will replace the command.
If you have a useful default command (which you do), put it as the Dockerfile CMD. You do not need to specify a Compose command:.
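For example (illustrative one-off commands, assuming the service is named web as above), with the server in CMD rather than ENTRYPOINT, any startup-time command simply replaces it for that run:

```shell
# The Dockerfile CMD is replaced for this one run only
docker-compose run web python manage.py migrate
docker-compose run web python manage.py createsuperuser
```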
You're also trying to run migrations in a way that won't work; again, a container only runs a single process, and the last CMD or the startup-time override wins. I'd use an entrypoint wrapper script to run migrations:
#!/bin/sh
# entrypoint.sh
# Run migrations
python manage.py migrate
# Then run the main container command (passed to us as arguments)
exec "$@"
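The "$@" forwarding is the whole trick: it expands to the container's arguments, each kept as its own word. A standalone sketch of that behavior (no Docker required; entrypoint.sh additionally uses exec so the command replaces the wrapper shell):

```shell
#!/bin/sh
# forward() stands in for entrypoint.sh's final line: it runs its
# arguments as a command, preserving word boundaries.
forward() {
  "$@"
}
forward echo "migrations done, starting server"
```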
In your Dockerfile make sure this is COPYed in (the existing COPY command you have will do it) and make this script be the ENTRYPOINT (with JSON-array syntax).
FROM python:3.10
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY ./ ./
# ENTRYPOINT must use JSON-array form so the CMD is appended as arguments
ENTRYPOINT ["./entrypoint.sh"]
# The default command goes in CMD, not ENTRYPOINT, so it can be overridden
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
In your Compose file, you don't need to inject the code with volumes: or replace the command:; both are already part of your image.
version: '3.8'
services:
  db: { ... }
  web:
    build: .
    ports:
      - '8000:8000'
    environment: { ... }
    depends_on:
      - db
    # but no volumes: or command:
I am working on a CLI app in Python for AWS SQS (running on localstack) in Docker. Here's my docker-compose.yml:
version: "3.8"
networks:
  localstack-net:
    name: localstack-net
    driver: bridge
services:
  localstack:
    image: localstack/localstack
    privileged: true
    networks:
      - localstack-net
    ports:
      - "4576:4576"
    environment:
      - DEBUG=1
      - EDGE_PORT=4576
      - DATA_DIR=/tmp/localstack/data
      - SERVICES=sqs:4567
    volumes:
      - ./.temp/localstack:/tmp/localstack
      - ./localstack_setup:/docker-entrypoint-initaws.d/
  cli_app:
    build:
      dockerfile: Dockerfile
    container_name: my_app
And here's my Dockerfile:
FROM python:3.8-slim
RUN useradd --create-home --shell /bin/bash app_user
WORKDIR /home/app_user
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
USER app_user
COPY . .
CMD ["bash"]
The problem that occurs is that the service cli_app exits when I run the command docker-compose up.
What can I do to rectify this problem?
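One common cause (an assumption here, since no logs are shown): the image's default command is an interactive bash, and under docker-compose up no TTY or stdin is attached, so bash exits immediately and the service stops. Keeping stdin open and allocating a TTY lets the container stay up:

```yaml
# Hypothetical tweak to the cli_app service
cli_app:
  build:
    context: .
    dockerfile: Dockerfile
  container_name: my_app
  stdin_open: true   # like docker run -i
  tty: true          # like docker run -t
```

You could then attach with docker attach my_app, or run one-off commands with docker-compose run cli_app.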
I used Docker and Django for this project, with a GitLab CI/CD pipeline, and the tests won't even start, exiting with the error below. The tests were running until I added some tests to the Django app; after that, they failed.
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
Here is my Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
my docker-compose.yml:
version: "3.9"
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: bash -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
and my gitlab-ci.yml:
image: python:latest
services:
  - mysql:latest
  - postgres:latest
variables:
  POSTGRES_DB: postgres
cache:
  paths:
    - ~/.cache/pip/
test:
  variables:
    DATABASE_URL: "postgresql://postgres:postgres@postgres:5432/$POSTGRES_DB"
  script:
    - pip install -r requirements.txt
    - python manage.py test
build:
  image: docker:19.03.12
  stage: build
  services:
    - docker:19.03.12-dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
Create a network for the containers in your docker-compose file and share it between your app and db, something like this:
services:
  db:
    networks:
      - network_name
    # your other db setup follows
  web:
    networks:
      - network_name
    # your other web setup follows
networks:
  network_name:
I am getting this error when trying to run migrations in my container. I cannot seem to figure out why.
Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"alembic\": executable file not found in $PATH": unknown
Dockerfile:
FROM python:3.8.2
WORKDIR /workspace/
COPY . .
RUN pip install pipenv
RUN pipenv install --deploy --ignore-pipfile
#EXPOSE 8000
#CMD ["pipenv", "run", "python", "/workspace/bin/web.py"]
Docker-Compose:
version: '3'
services:
  db:
    image: postgres:12
    ports:
      - "5432:5432"
    env_file:
      - .env.database.local
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=pgadmin4@pgadmin.org
      - PGADMIN_DEFAULT_PASSWORD=admin
    ports:
      - "5050:80"
    depends_on:
      - db
  redis:
    image: "redis:alpine"
  web:
    build: .
    environment:
      - PYTHONPATH=/workspace
    env_file:
      - .env.local
    ports:
      - "8000:8000"
    volumes:
      - .:/workspace
    depends_on:
      - db
      - redis
    command: "alembic upgrade head && pipenv run python /workspace/bin/web.py"
The command I run when I encounter this problem:
docker-compose run web alembic revision --autogenerate -m "First migration"
I specified in my Dockerfile that my program runs in the /workspace directory, so alembic should be found there.
Yes, the issue was that I did not add it to my $PATH.
This is what I added inside my docker-compose file:
- PATH=/directory/bin:$PATH
docker-compose run web pipenv run alembic revision --autogenerate -m "First migration"
Or change the Dockerfile to install into the system interpreter:
RUN pipenv install --deploy --ignore-pipfile --system
and run:
docker-compose run web alembic revision --autogenerate -m "First migration"
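The mechanics can be sketched in plain shell (the paths and the alembic-demo name are made up for the demo): a binary that exists on disk but whose directory is not on $PATH cannot be invoked by bare name, which is exactly what the OCI "executable file not found in $PATH" error reports.

```shell
#!/bin/sh
# Create a fake binary in a directory that is NOT on PATH
mkdir -p /tmp/venv-demo/bin
printf '#!/bin/sh\necho alembic ok\n' > /tmp/venv-demo/bin/alembic-demo
chmod +x /tmp/venv-demo/bin/alembic-demo

# Lookup by bare name fails: only $PATH directories are searched
command -v alembic-demo >/dev/null 2>&1 || echo "not found in PATH"

# Prepending the directory to PATH makes the bare name resolve
PATH=/tmp/venv-demo/bin:$PATH alembic-demo
```

pipenv install --system sidesteps this by installing packages and their scripts into the interpreter's regular bin directory, which is already on PATH.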
Dockerfile
FROM python:3.7
FROM registry.gitlab.com/datadrivendiscovery/images/primitives:ubuntu-bionic-python36-v2020.1.9
ENV PYTHONUNBUFFERED 1
RUN mkdir /bbml
WORKDIR /bbml
RUN pip install -r requirements.txt
RUN pip install --upgrade pip
ADD . /bbml/
CMD [ "python", "./manage.py runserver 0.0.0.0:8800" ]
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: "python3 manage.py runserver 0.0.0.0:8800"
    container_name: bbml
    volumes:
      - .:/bbml
    ports:
      - "8800:8800"
So I managed to get this to run properly by doing docker-compose run web, and I got the standard "Starting development server" message at the bottom, but when I go to localhost:8800 it says "site can't be reached". What's going on?
Dockerfile
FROM python:3
FROM registry.gitlab.com/datadrivendiscovery/images/primitives:ubuntu-bionic-python36-v2020.1.9
ENV PYTHONUNBUFFERED 1
RUN mkdir /bbml
WORKDIR /bbml
COPY requirements.txt /bbml/
RUN pip install -r requirements.txt
COPY . /bbml
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    container_name: bbml
    volumes:
      - .:/app
    ports:
      - "8000:8000"
Check your folder permissions and run:
sudo docker-compose up
I found the issue. Docker is serving on its own host address, so you actually need to go to :8800 on that address rather than localhost:8800.