Applying changes in Django/Docker files - Python

I'm new to development with Django and Docker, and I have a problem when I change a file in the project. My problem is as follows:
I make changes to the content of a file in the Django project (a template, view, or urls) but the change does not show up in my running app. Whenever I want to see my changes I have to restart the server (I'm using nginx) by running docker-compose up again.
Is there a package I should install, or a change I should make, so that the app picks up changes at runtime?
This is my Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /opt/services/djangoapp/src
COPY Pipfile Pipfile.lock /opt/services/djangoapp/src/
WORKDIR /opt/services/djangoapp/src
RUN pip install pipenv && pipenv install --system
RUN pip install django-livereload
COPY . /opt/services/djangoapp/src
RUN cd hello && python manage.py collectstatic --no-input
EXPOSE 8000
CMD ["gunicorn", "-c", "config/gunicorn/conf.py", "--bind", ":8000", "--chdir", "hello", "hello.wsgi:application"]
Let me know if there is any other information I can provide to give a better picture of the problem (in case it is not clear enough). This is my docker-compose.yml:
version: '3'

services:
  # database containers, one for each db
  database1:
    image: postgres:10
    volumes:
      - database1_volume:/var/lib/postgresql/data
    env_file:
      - config/db/database1_env
    networks:
      - database1_network

  # web container, with django + gunicorn
  djangoapp:
    build: .
    environment:
      - DJANGO_SETTINGS_MODULE
    volumes:
      - .:/opt/services/djangoapp/src
      - static:/opt/services/djangoapp/static
      - media:/opt/services/djangoapp/media
      - .:/code
    networks:
      - database1_network
      - nginx_network
    depends_on:
      - database1

  # reverse proxy container (nginx)
  nginx:
    image: nginx:1.13
    ports:
      - 8000:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static:/opt/services/djangoapp/static
      - media:/opt/services/djangoapp/media
    networks:
      - nginx_network
    depends_on:
      - djangoapp

networks:
  database1_network:
    driver: bridge
  database2_network:
    driver: bridge
  nginx_network:
    driver: bridge

volumes:
  database1_volume:
  static:
  media:

This is pretty simple. Here is what happens:
In your Dockerfile you COPY your current folder (at the time you build your image) into the container. So while the container is running it DOES NOT sync with your host (the current working folder): if you change something on the host after creating the container, the container does not see it.
If you want to sync your host with the container, you have to mount the host directory as a volume, either with -v for a single container or with volumes in docker-compose.
docker run -v /host/directory:/container/directory your-image
docker run -v $(pwd):/opt/services/djangoapp/src your-image
(note that docker run -v needs an absolute host path, hence $(pwd) rather than ./)
or, using docker-compose if you have multiple containers:
version: '3'
services:
  web-service:
    build: .          # path to Dockerfile
    image: your-image
    volumes:
      - /host/directory:/container/directory
      #- ./:/opt/services/djangoapp/src
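In the compose file from the question the bind mount (- .:/opt/services/djangoapp/src) is already present; the remaining catch is that gunicorn does not watch source files by itself, so the container keeps serving the old code until its workers restart. As a minimal development sketch (untested, reusing the paths and config from the question), you could mount the source and pass gunicorn's --reload flag:

services:
  djangoapp:
    build: .
    volumes:
      - .:/opt/services/djangoapp/src   # keep the code in the container in sync with the host
    # --reload makes gunicorn restart its workers when a source file changes
    command: gunicorn --reload -c config/gunicorn/conf.py --bind :8000 --chdir hello hello.wsgi:application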

Related

Docker image ran for Django but cannot access dev server url

Working on containerizing my server. I believe the build runs successfully; when I run docker-compose my development server appears to run, but when I try to visit the associated dev server URL:
http://0.0.0.0:8000/
I get a page with the error:
This site can’t be reached. The webpage at http://0.0.0.0:8000/ might be temporarily down or it may have moved permanently to a new web address.
This is my Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
WORKDIR C:/Users/15512/Desktop/django-project/peerplatform
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . ./
EXPOSE 8000
CMD ["python", "./manage.py", "runserver", "0.0.0.0:8000", "--settings=signup.settings"]
This is my docker-compose.yml file:
version: "3.8"
services:
redis:
restart: always
image: redis:latest
ports:
- "49153:6379"
pairprogramming_be:
restart: always
depends_on:
- redis
command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
env_file:
- ./signup/.env
- ./payments/.env
- ./.env
build:
context: ./
dockerfile: Dockerfile
ports:
- "8000:8001"
container_name: "pairprogramming_be"
volumes:
- "C:/Users/15512/Desktop/django-project/peerplatform://pairprogramming_be"
working_dir:
"/C:/Users/15512/Desktop/django-project/peerplatform"
This is the .env file:
DEBUG=1
DJANGO_ALLOWED_HOSTS=0.0.0.0
FYI: the redis image runs successfully. This is what I have tried:
I tried changing the allowed hosts to localhost and 127.0.0.1
I tried running the command python manage.py runserver and eventually added 0.0.0.0:8000
When I run docker inspect --format '{{ .NetworkSettings.IPAddress }}' pairprogramming_be I get a blank response; my docker container doesn't appear to have an IP address
Where is the 8001 port taken from? That is the internal (container-side) listening port. Since you set your application (inside Docker) to listen on 8000, you should map container port 8000 to whatever host port you like.
Just change the compose file to:
ports:
  - "8000:8000"

Docker-Compose Output File To Local Host

I have the below docker-compose.yaml file that sets up a database and runs a Python script:
version: '3.3'
services:
  db:
    image: mysql:8.0
    cap_add:
      - SYS_NICE
    restart: always
    environment:
      - MYSQL_DATABASE=test_db
      - MYSQL_ROOT_PASSWORD=xxx
    ports:
      - '3310:3310'
    volumes:
      - db:/var/lib/mysql
  py_service:
    container_name: test_py
    build: .
    command: ./main.py -r compute_init
    depends_on:
      - db
    ports:
      - 80:80
    environment:
      DB_HOST: db
      DB_PORT: 3306
      DB_USER: root
      DB_PASSWORD: xxx
      DB_NAME: test_db
    links:
      - db
    volumes:
      - py_output:/app/output
volumes:
  db:
    driver: local
  py_output:
To run it I perform the following:
docker-compose build
docker-compose up
docker-compose run -v /home/ubuntu/docker_directory/output:/app/output/* py_service
Here is the Dockerfile:
FROM python:3.7
RUN mkdir /app
WORKDIR /app
COPY env/requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["python3","main.py","-r","compute_init"]
Now this works fine; I can see the data has been properly populated in the generated MySQL database.
At the end of the script, the Python file should dump a CSV file to /app/output/output.csv (via the pandas call df.to_csv("output/output.csv")).
My question is: how do I recover that CSV from the container into the local directory?
The script seems to finish without any errors, but I can't find the output file at the end.
It seems using
docker-compose run -v $(pwd)/output:/app/output py_service
did the job; note that, unlike the original attempt, the host side is a plain directory bind mount with no /* glob on the container path.
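If you want this to work with a plain docker-compose up as well, one option (a sketch, assuming the script keeps writing to /app/output) is to swap the named volume on py_service for a bind mount directly in the compose file:

py_service:
  container_name: test_py
  build: .
  volumes:
    - ./output:/app/output   # bind mount: the CSV written inside the container lands in ./output on the host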

This site can’t be reached 127.0.0.1 refused to connect flask

I am trying to dockerize a Flask project with Redis and SQLite. I keep getting this error when I run the project using Docker. The project works just fine when I run it normally using python manage.py run.
Dockerfile
FROM python:3.7.2-slim
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python","manage.py run", "--host=0.0.0.0"]
docker-compose.yml
version: '3'
services:
  sqlite3:
    image: nouchka/sqlite3:latest
    stdin_open: true
    tty: true
    volumes:
      - ./db/:/root/db/
  api:
    container_name: flask-container
    build: .
    entrypoint: python manage.py run
    env_file:
      - app/main/.env
    ports:
      - '5000:5000'
    volumes:
      - ./db/:/root/db/
      - ./app/main/:/app/main/
  redis:
    image: redis
    container_name: redis-container
    ports:
      - "6379:6379"
Please, what could be the problem?
Your docker-compose.yml file has several overrides that fundamentally change the way the image works. In particular, the entrypoint: line suppresses the CMD from the Dockerfile, which loses the key --host option. You also should not need volumes: to inject the application code (it's already in the image), nor should you need to manually specify container_name:.
services:
  api:
    build: .
    env_file:
      - app/main/.env
    ports:
      - '5000:5000'
    # and no other settings
In the Dockerfile, your CMD has two shell words combined together. You need to split those up into separate words in the JSON-array syntax.
CMD ["python","manage.py", "run", "--host=0.0.0.0"]
# ^^^^ two words
With these two fixes, you'll be running the CMD from the image, with the code built into the image, and with the critical --host=0.0.0.0 option.
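If for some reason you did want to keep a compose-level override, it would have to carry the --host option itself, since it replaces what the image would otherwise run. A sketch, using command: (the usual override for this) rather than entrypoint::

services:
  api:
    build: .
    command: python manage.py run --host=0.0.0.0   # replaces the image CMD, so --host must be repeated here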

Docker & Python, permission denied on Linux, but works when runnning on Windows

I'm trying to prepare a development container with Python + Flask and Postgres.
Since it is a development container, it is meant to be productive: I don't want to run a build each time I change a file, so I can't COPY the files in the build phase. Instead I mount a volume with all the source files, so when I change a Python file on the host machine, the Flask server automatically detects the change and restarts itself, even though it is running in the container.
So far so good: with docker-compose up these containers run fine on Windows, but when I tried to run on Linux, I got:
/bin/sh: 1: ./start.sh: Permission denied
Everywhere I searched tells me to RUN chmod +x start.sh, which doesn't work here because the file doesn't exist at the build phase, so I tried changing RUN to CMD... but I still get the same error.
Any ideas why? Aren't containers supposed to solve the 'works on my machine' problem? These files work on a Windows host, but not on a Linux host.
Is this the right approach to make file changes on the host machine show up in the container (without a rebuild)?
Thanks in advance!!
Below are my files:
docker-compose.yml:
version: '3'
services:
  postgres-docker:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: "Postgres2019!"
    ports:
      - "9091:5432"
    expose:
      - "5432"
    volumes:
      - volpostgre:/var/lib/postgresql/data
    networks:
      - app-network
  rest-server:
    build:
      context: ./projeto
    ports:
      - "9092:5000"
    depends_on:
      - postgres-docker
    volumes:
      - ./projeto:/app
    networks:
      - app-network
volumes:
  volpostgre:
networks:
  app-network:
    driver: bridge
and inside the projeto folder I have the following Dockerfile:
FROM python:3.8.5
WORKDIR /app
CMD ./start.sh
And in start.sh:
#!/bin/bash
pip install -r requirements.txt
python setupdatabase.py
python run.py
One option you can try is to override the CMD in docker-compose.yml: first set the permission on the file, then execute the script.
By doing this you do not need to build a Docker image at all, as the only thing the image adds is the CMD ./start.sh.
webapp:
  image: python:3.8.5
  volumes:
    - $PWD/:/app
  working_dir: /app
  command: bash -c 'chmod +x start.sh && ./start.sh'
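As an aside, since start.sh is bind-mounted from the host rather than copied into the image, you can also just mark it executable once on the Linux host (chmod +x start.sh in the project folder): a Linux bind mount preserves the host's permission bits. This is likely also why the same setup worked on Windows, where Docker Desktop's shared folders typically present files as executable.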

Compose up container exited with code 0 and logs it with empty

I need to containerize a Django web project with Docker. I divided the project into a dashboard, an api-server and a database. When I type docker-compose up, it prints api-server exited with code 0 and the api-server container shows Exited (0); when I type docker logs api-server, it returns nothing, but the other containers are normal. I don't know how to diagnose the problem.
The api-server directory structure is as follows:
api-server
    server/
    Dockerfile
    requirements.txt
    start.sh
    ...
Some of the compose YAML content is as follows:
dashboard:
  image: nginx:latest
  container_name: nginx-dashboard
  volumes:
    - /nginx/nginx/default:/etc/nginx/conf.d/default.conf:ro
    - /nginx/dist:/var/www/html:ro
  ports:
    - "80:80"
  depends_on:
    - api-server
api-server:
  build: /api-server
  container_name: api-server
  volumes:
    - /api-server:/webapps
  ports:
    - "8000:8000"
  depends_on:
    - db
db:
  image: postgres
  container_name: Postgres
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_DB=postgres
  ports:
    - "5432:5432"
Some of the api-server Dockerfile content is as follows:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /webapps
WORKDIR /webapps
RUN apt-get clean && apt-get update && apt-get upgrade -y && apt-get install -y python3-pip libpq-dev apt-utils
COPY ./requirements.txt /webapps/
RUN pip3 install -r /webapps/requirements.txt
COPY . /webapps/
CMD ["bash","-c","./start.sh"]
start.sh is as follows:
#!/usr/bin/env bash
cd server/
python manage.py runserver 0.0.0.0:8000
Typing docker-compose up gives the following result:
root@VM:/home/test/Documents/ComposeTest# docker-compose up
Creating network "composetest_default" with the default driver
Creating Postgres ... done
Creating api-server ... done
Creating dashboard ... done
Attaching to Postgres, api-server, dashboard
Postgres | The files belonging to this database system will be owned by user "postgres".
Postgres | This user must also own the server process.
...
...
api-server exited with code 0
docker logs api-server gives empty output.
I would really appreciate it if you could tell me how to debug this problem; a solution would be even better.
You are already copying api-server into the image at build time, which should work fine, but in Docker Compose this bind mount hides everything that was copied into /webapps during the build:
volumes:
  - /api-server:/webapps
Remove the volume from your Docker compose and it should work.
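With the volume removed, the api-server service would look roughly like this (a sketch based on the compose excerpt above):

api-server:
  build: /api-server
  container_name: api-server
  # no volumes entry: the code and start.sh copied in at build time are used
  ports:
    - "8000:8000"
  depends_on:
    - db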
Second, set the execute permission on the bash script:
COPY . /webapps/
RUN chmod +x ./start.sh
Third, you do not need to run the script through bash, as there is nothing in it that a plain CMD cannot do, so why not run the server directly as the CMD?
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
