Running Django's collectstatic in Dockerfile produces empty directory

I'm trying to run Django from a Docker container on Heroku, but to make that work, I need to run python manage.py collectstatic during my build phase. To achieve that, I wrote the following Dockerfile:
# Set up image
FROM python:3.10
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Install poetry and identify Python dependencies
RUN pip install poetry
COPY pyproject.toml /usr/src/app/
# Install Python dependencies
RUN set -x \
    && apt update -y \
    && apt install -y \
        libpq-dev \
        gcc \
    && poetry config virtualenvs.create false \
    && poetry install --no-ansi
# Copy source into image
COPY . /usr/src/app/
# Collect static files
RUN python -m manage collectstatic -v 3 --no-input
And here's the docker-compose.yml file I used to run the image:
services:
  db:
    image: postgres
    env_file:
      - .env.docker.db
    volumes:
      - db:/var/lib/postgresql/data
    networks:
      - backend
    ports:
      - "5433:5432"
  web:
    build: .
    restart: always
    env_file:
      - .env.docker.web
    ports:
      - "8001:$PORT"
    volumes:
      - .:/usr/src/app
    depends_on:
      - db
    networks:
      - backend
    command: gunicorn --bind 0.0.0.0:$PORT myapp.wsgi
volumes:
  db:
networks:
  backend:
    driver: bridge
The Dockerfile builds just fine, and I can even see that collectstatic is running and collecting the appropriate files during the build. However, when the build is finished, the only evidence that collectstatic ran is an empty directory called staticfiles. If I run collectstatic again inside of my container, collectstatic works just fine, but since Heroku doesn't persist files created after the build stage, they disappear when my app restarts.
I found a few SO answers discussing how to get collectstatic to run inside a Dockerfile, but that's not my problem; my problem is that it does run, but the collected files don't show up in the container. Anyone have a clue what's going on?
UPDATE: The answer below did the trick. My docker-compose.yml was overriding the changes made by collectstatic with this line:
volumes:
  - .:/usr/src/app
If, like me, you want to keep the bind mount for ease of local development (so that you don't need to re-build each time), you can edit the command for the web service as follows:
command: bash -c "python -m manage collectstatic && gunicorn --bind 0.0.0.0:$PORT myapp.wsgi"
Note that the image would have run just fine as-is had I pushed it to Heroku (since Heroku doesn't use the docker-compose.yml file), so this was just a problem affecting containers I created on my local machine.
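For reference, here is a rough sketch of what the amended web service could look like with that approach. It reuses the service definition from the compose file above; adding --no-input (so collectstatic doesn't prompt on later runs) is my own addition rather than part of the original command:
web:
  build: .
  restart: always
  env_file:
    - .env.docker.web
  ports:
    - "8001:$PORT"
  volumes:
    # bind mount kept for local development; it hides the files collected at build time,
    # so collectstatic is re-run when the container starts
    - .:/usr/src/app
  depends_on:
    - db
  networks:
    - backend
  command: bash -c "python -m manage collectstatic --no-input && gunicorn --bind 0.0.0.0:$PORT myapp.wsgi"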

You are overriding the content of /usr/src/app in your container when you added the
volumes:
  - .:/usr/src/app
to your docker compose file.
Remove it since you already copied everything during the build.
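If you go that route, the web service simply drops the bind mount; here is a sketch based on the compose file above, with everything else unchanged:
web:
  build: .
  restart: always
  env_file:
    - .env.docker.web
  ports:
    - "8001:$PORT"
  # no ".:/usr/src/app" bind mount, so the staticfiles collected during the build stay visible
  depends_on:
    - db
  networks:
    - backend
  command: gunicorn --bind 0.0.0.0:$PORT myapp.wsgi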

Related

Compose up: container exited with code 0 and its logs are empty

I need to containerize a Django web project with Docker. I divided the project into dashboard, api-server, and database. When I type docker-compose up, it prints api-server exited with code 0 and the api-server container shows Exited (0); when I type docker logs api-server, it returns nothing, but the other containers are normal. I don't know how to track down the problem.
The api-server directory structure is as follows:
api-server
    server/
    Dockerfile
    requirements.txt
    start.sh
    ...
...
Some of the docker-compose.yml content is as follows:
dashboard:
  image: nginx:latest
  container_name: nginx-dashboard
  volumes:
    - /nginx/nginx/default:/etc/nginx/conf.d/default.conf:ro
    - /nginx/dist:/var/www/html:ro
  ports:
    - "80:80"
  depends_on:
    - api-server
api-server:
  build: /api-server
  container_name: api-server
  volumes:
    - /api-server:/webapps
  ports:
    - "8000:8000"
  depends_on:
    - db
db:
  image: postgres
  container_name: Postgres
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_DB=postgres
  ports:
    - "5432:5432"
Some of the api-server Dockerfile content is as follows:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /webapps
WORKDIR /webapps
RUN apt-get clean && apt-get update && apt-get upgrade -y && apt-get install -y python3-pip libpq-dev apt-utils
COPY ./requirements.txt /webapps/
RUN pip3 install -r /webapps/requirements.txt
COPY . /webapps/
CMD ["bash","-c","./start.sh"]
start.sh is as follows
#!/usr/bin/env bash
cd server/
python manage.py runserver 0.0.0.0:8000
The result of typing docker-compose up is as follows:
root@VM:/home/test/Documents/ComposeTest# docker-compose up
Creating network "composetest_default" with the default driver
Creating Postgres ... done
Creating api-server ... done
Creating dashboard ... done
Attaching to Postgres, api-server, dashboard
Postgres | The files belonging to this database system will be owned by user "postgres".
Postgres | This user must also own the server process.
...
...
api-server exited with code 0
api-server exited with code 0
docker logs api-server is empty
I would very much appreciate it if you could tell me how to debug this problem; it would be even better if you could provide a solution.
You are already copying api-server into the image at build time, which should work fine, but the volume in your Docker Compose file overrides it all, the pip packages and the code.
volumes:
  - /api-server:/webapps
Remove the volume from your Docker compose and it should work.
Second, set execute permission on the bash script:
COPY . /webapps/
RUN chmod +x ./start.sh
Third, you do not need to run Python through bash; there is nothing in the bash script that CMD cannot do directly, so why not run it as the CMD?
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

docker-compose up --build gets stuck while installing pip packages in an Alpine container

Installing packages in Alpine gets stuck. It stops at either
(6/12) Installing ncurses-terminfo (6.1_p20190105-r0)
or
(10/12) Installing python2 (2.7.16-r1)
Sometimes it works properly.
Command: sudo docker-compose build
I tried a proxy but it didn't work:
# Docker Upstart and SysVinit configuration file
#
# THIS FILE DOES NOT APPLY TO SYSTEMD
#
# Please see the documentation for "systemd drop-ins":
# https://docs.docker.com/engine/admin/systemd/
#
# Customize location of Docker binary (especially for development testing).
#DOCKERD="/usr/local/bin/dockerd"
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"
# If you need Docker to use an HTTP proxy, it can also be specified here.
export http_proxy="http://127.0.0.1:3128/"
# This is also a handy place to tweak where Docker's temporary files go.
#export DOCKER_TMPDIR="/mnt/bigdrive/docker-tmp"
I also tried increasing the MTU.
docker-compose.yml
version: '3.7'
services:
  admin-api:
    container_name: admin-api
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - HOME=/home
      - NODE_ENV=dev
      - DB_1=mongodb://mongo:27017/DB_1
      - DB_2=mongodb://mongo:27017/DB_2
    volumes:
      - '.:/app'
      - '/app/node_modules'
      - '$HOME/.aws:/home/.aws'
    ports:
      - '4004:4004'
    networks:
      - backend
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo:4.2.0-bionic
    ports:
      - "27018:27017"
    networks:
      - backend
networks:
  backend:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1500
Dockerfile
# base image
FROM node:8.16.1-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
RUN apk add --update-cache py-pip && \
    pip install awscli && \
    apk --purge -v del py-pip && \
    rm -rf /var/cache/apk/*
RUN npm install --silent
RUN npm install -g nodemon
# start app
CMD nodemon
EXPOSE 4004
My work depends on AWS and requires AWS credentials. I installed the AWS CLI using pip and mounted my local /home/.aws into /home/.aws in the container, but when I create or build the container it gets stuck and doesn't show any error. While building the container I also watched the network monitor; it shows 0 bytes/s of received packets.
I tried --verbose but it didn't give any useful information.

Unable to import flask in python

I have a React app which communicates with a Flask API and displays its data. I had both of these projects in separate folders and everything worked fine.
Then I wanted to containerize the Flask + React app with docker-compose for practice, so I created a folder in which I have my middleware (flask) and frontend (react) folders. Then I created a virtual environment and installed Flask. Now when I import flask inside a Python file I get an error.
I do not understand why simply putting the folders inside another folder would affect my project. You can see the project structure and the error in the picture below.
Dockerfile for the React app:
FROM node:latest
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
CMD [ "npm", "start" ]
Dockerfile for the Flask API:
FROM python:3.7.2
# set working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# add requirements (to leverage Docker cache)
ADD ./requirements.txt /usr/src/app/requirements.txt
# install requirements
RUN pip install -r requirements.txt
# add app
ADD . /usr/src/app
# run server
CMD python app.py runserver -h 0.0.0.0
docker-compose.yml
version: '3'
services:
  middleware:
    build: ./middleware
    expose:
      - 5000
    ports:
      - 5000:5000
    volumes:
      - ./middleware:/usr/src/app
    environment:
      - FLASK_ENV=development
      - FLASK_APP=app.py
      - FLASK_DEBUG=1
  frontend:
    build: ./frontend
    expose:
      - 3000
    ports:
      - 3000:3000
    volumes:
      - ./frontend/src:/usr/src/app/src
      - ./frontend/public:/usr/src/app/public
    links:
      - "middleware:middleware"
When moving folders around, you should update the Python interpreter path in your VS Code settings (.vscode/settings.json). Otherwise you'll be using the wrong Python interpreter, one without Flask installed.

Auto reloading Django server on Docker

I am learning to use Docker and I have had a problem since yesterday (before resorting to asking, I tried to investigate it but could not solve it). I have a Django project on my local machine, and I also have the same project running with Docker, but when I change my local project the change is not reflected in the container where the project is running. I would be very grateful if you could help me with this. Thank you.
Dockerfile
FROM python:3.7-alpine
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    && pip install psycopg2 \
    && apk del build-deps
RUN pip install --upgrade pip
RUN pip install pipenv
COPY ./Pipfile /code
RUN pipenv install --skip-lock --system --dev
COPY ./entrypoint.sh /code
COPY . /code
ENTRYPOINT [ "/code/entrypoint.sh" ]
docker-compose.yml
# docker-compose version we will work with
version: '3'
# defining the services that will run in our containers
services:
  web:
    restart: always
    build: .
    command: gunicorn app.wsgi:application --bind 0.0.0.0:8000 #python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
      - static_volume:/code/staticfiles
      - media_volume:/code/mediafiles
    expose:
      - 8000
    environment:
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=postgres
      - SQL_USER=postgres
      - SQL_PASSWORD=postgres
      - SQL_HOST=db
      - SQL_PORT=5432
      - DATABASE=postgres
    depends_on:
      - db
    env_file: .env
  db:
    restart: always
    image: postgres:10.5-alpine
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
  nginx:
    restart: always
    build: ./nginx
    volumes:
      - static_volume:/code/staticfiles
      - media_volume:/code/mediafiles
    ports:
      - 1337:80
    depends_on:
      - web
volumes:
  static_volume:
  media_volume:
And a little doubt here: is it good practice to store the environment variables in the Dockerfile or in docker-compose? I use a .env file, but I have seen in many places that people store the variables in docker-compose, as shown in the code above.
I hope you can help me; any recommendation about my project is very welcome. As I said, I'm new to Docker but I really like it and would like to learn more about it.
How people usually approach this is to have separate docker-compose configurations for the development and production environments, e.g. local.yml and production.yml. That way you can use runserver while developing (which you'll probably find more suitable, since you'll get a lot of debug information) and gunicorn in production.
I'd recommend looking into the https://github.com/pydanny/cookiecutter-django project, which has a lot of Django good practices baked in as well as a good out-of-the-box Docker configuration. You can create a test project using the cookiecutter and then inspect how they do the Docker setup, including environment variables.
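To illustrate that split, here is a minimal sketch of what a development override file might look like for this project. The file name (local.yml) and the runserver command are my assumptions; the service details come from the compose file above:
# local.yml - development-only compose file (hypothetical name)
version: '3'
services:
  web:
    build: .
    # bind-mount the source so edits on the host show up immediately
    volumes:
      - .:/code
    # use Django's autoreloading dev server instead of gunicorn
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
    env_file: .env
    depends_on:
      - db
  db:
    restart: always
    image: postgres:10.5-alpine
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
You would then run docker-compose -f local.yml up while developing and keep the gunicorn/nginx setup in production.yml.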

Applying changes in django/docker files

I'm new to development with Django and Docker and I have a problem when I change a file in the project. My problem is as follows:
I make changes to the content of a file in the Django project (template, view, urls) but the change does not show up in my currently running app. Whenever I want to see my changes I have to restart the server (I'm using nginx) by running docker-compose up again.
Is there a package I should install, or a change I should make, so that it picks up changes at runtime?
This is my Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /opt/services/djangoapp/src
COPY Pipfile Pipfile.lock /opt/services/djangoapp/src/
WORKDIR /opt/services/djangoapp/src
RUN pip install pipenv && pipenv install --system
RUN pip install pipenv && pipenv install --system
RUN pip install django-livereload
COPY . /opt/services/djangoapp/src
RUN cd hello && python manage.py collectstatic --no-input
EXPOSE 8000
CMD ["gunicorn", "-c", "config/gunicorn/conf.py", "--bind", ":8000", "--chdir", "hello", "hello.wsgi:application"]
Let me know if there is any other information I can provide to give a better picture of the problem (if it is not clear enough). This is my docker-compose.yml:
version: '3'
services:
  # database containers, one for each db
  database1:
    image: postgres:10
    volumes:
      - database1_volume:/var/lib/postgresql/data
    env_file:
      - config/db/database1_env
    networks:
      - database1_network
  # web container, with django + gunicorn
  djangoapp:
    build: .
    environment:
      - DJANGO_SETTINGS_MODULE
    volumes:
      - .:/opt/services/djangoapp/src
      - static:/opt/services/djangoapp/static
      - media:/opt/services/djangoapp/media
      - .:/code
    networks:
      - database1_network
      - nginx_network
    depends_on:
      - database1
  # reverse proxy container (nginx)
  nginx:
    image: nginx:1.13
    ports:
      - 8000:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static:/opt/services/djangoapp/static
      - media:/opt/services/djangoapp/media
    networks:
      - nginx_network
    depends_on:
      - djangoapp
networks:
  database1_network:
    driver: bridge
  database2_network:
    driver: bridge
  nginx_network:
    driver: bridge
volumes:
  database1_volume:
  static:
  media:
This is pretty simple. Here is what happens now: your Dockerfile COPYs your current folder (at the time you build the image) into the container, so while the container is running it DOES NOT sync with your host (the current working folder) if you change something on the host after the container is created.
If you want to sync your host with the container, you have to mount it as a volume, either with -v for a single container or with volumes in docker-compose:
docker run -v /host/directory:/container/directory
docker run -v ./:/opt/services/djangoapp/src
or using docker-compose if you have multiple containers
version: '3'
services:
  web-service:
    build: .            # path to Dockerfile
    image: your-image
    volumes:
      - /host/directory:/container/directory
      # - ./:/opt/services/djangoapp/src
