Running a Django application on both the host machine and a Docker container at the same time - python

I have created a simple Django application with one endpoint, /health/live, which returns a success message upon receiving a GET request.
I run the application locally with python manage.py runserver on port 8000.
I also have a docker-compose.yml and a Dockerfile, shown below:
FROM python
ENV PYTHONUNBUFFERED 1
RUN mkdir /inventory
WORKDIR /inventory
COPY . /inventory
WORKDIR /inventory
RUN pip install -r requirements.txt
and
version: '3'
networks:
  kong-net:
    name: kong-net
    driver: bridge
    ipam:
      config:
        - subnet: 172.1.1.0/24
services:
  inventory:
    container_name: inventory
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      kong-net:
        ipv4_address: 172.1.1.11
    ports:
      - "8000:8000"
    environment:
      DEBUG: 'true'
    command: python manage.py runserver 0.0.0.0:8000
I then run docker-compose up (I don't detach it, so I can see the logs).
They both work. When I send a GET request to http://127.0.0.1:8000/health/live, the logs show that the request is handled by the service running directly on the host, not by the Docker container.
If I stop the host service and send the request again, it is handled by the one deployed in Docker.
Is there a reason this is happening? Why does the host service take priority?
And shouldn't I see an error when starting the Docker container or the local application, since they are both listening on port 8000?
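For now I can work around the ambiguity by publishing the container under a different host port (just a sketch, assuming any free host port is acceptable), e.g.:
services:
  inventory:
    ports:
      - "8001:8000"   # host port 8001 -> container port 8000
but I'd still like to understand why both can bind to port 8000 at the same time and why the host process wins.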

Related

Docker - Build a service after the dependent service is up and running

I have a docker-compose file for a Django application.
Below is the structure of my docker-compose.yml
version: '3.8'
volumes:
  pypi-server:
services:
  backend:
    command: "bash ./install-ppr_an_run_dphi.sh"
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    volumes:
      - ./backend:/usr/src/app
    expose:
      - 8000:8000
    depends_on:
      - db
  pypi-server:
    image: pypiserver/pypiserver:latest
    ports:
      - 8080:8080
    volumes:
      - type: volume
        source: pypi-server
        target: /data/packages
    command: -P . -a . /data/packages
    restart: always
  db:
    image: mysql:8
    ports:
      - 3306:3306
    volumes:
      - ~/apps/mysql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=gary
      - MYSQL_PASSWORD=tempgary
      - MYSQL_USER=gary_user
      - MYSQL_DATABASE=gary_db
  nginx:
    build: ./nginx
    ports:
      - 80:80
    depends_on:
      - backend
The Django app depends on a couple of private packages hosted on the private PyPI server, without which the app won't run.
I created a separate Dockerfile for the Django backend alone, which installs the packages from requirements.txt as well as the packages from the private PyPI server. But the Dockerfile of the backend service is built even before the private PyPI server is running.
If I move the installation of the private packages into the command of the backend service in docker-compose.yml, then it works fine. The issue then is that if the backend is running and I want to run some commands in the backend container (e.g. ./manage.py migrate), it says that the private packages are not installed.
I'm not sure how to proceed with this; it would be really helpful if I could get all these services running at once just by running docker-compose up --build -d.
I created a separate docker-compose file for the pypi-server, which is up and running even before I build/start the other services.
Have you tried adding the pypi-server service to depends_on of the backend app?
backend:
  command: "bash ./install-ppr_an_run_dphi.sh"
  build:
    context: ./backend
    dockerfile: ./Dockerfile
  volumes:
    - ./backend:/usr/src/app
  expose:
    - 8000:8000
  depends_on:
    - db
    - pypi-server
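Note that depends_on by itself only waits for the pypi-server container to be started, not for the server inside it to be ready to serve packages. If that turns out to matter, a minimal sketch (depending on your Compose version, and assuming the pypiserver image listens on 8080 and has Python available for the probe) would be:
pypi-server:
  image: pypiserver/pypiserver:latest
  healthcheck:
    # assumed probe: adjust to whatever tool exists inside the image
    test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8080')"]
    interval: 5s
    timeout: 3s
    retries: 10
backend:
  depends_on:
    db:
      condition: service_started
    pypi-server:
      condition: service_healthy
The long depends_on form with condition requires a Compose implementation that supports the Compose Specification (or file version 2.1 with the classic docker-compose).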
Your docker-compose file raises a few questions though.
Why install custom packages into the backend service at run time? I can see many problems which might arise from this, such as latency during service restarts, possibly different environments between runs of the same version of the backend service, and any problem with the installation surfacing during deployment and bringing it down. Installation should be done during the build of the Docker image. Could you maybe provide your Dockerfile?
Is there any reason why the PyPI server has to share a docker-compose file with the application? I'd suggest having it in a separate deployment, especially if it is to be shared among other projects.
Is the PyPI server supposed to be used for anything other than a source of the custom packages for the backend service? If not, then I'd consider getting rid of it / using it for builds only.
Is there any good reason why you want to have all the ports exposed? This creates a significant attack surface. E.g. an attacker could bypass the reverse proxy and talk directly to the backend service using port 8000, or they'd be able to connect to the db on port 3306. N.B. docker-compose creates subnetworks among the containers, so they can access each other's ports even if those ports are not forwarded to the host machine (see the sketch below).
Consider using Docker secrets to store the db credentials.
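Regarding the exposed ports (assuming only nginx needs to be reachable from the host), the host mappings for backend and db could simply be dropped; containers on the same Compose network still reach each other directly:
services:
  backend:
    expose:
      - "8000"        # visible to other containers (e.g. nginx) only, not published on the host
  db:
    image: mysql:8
    # no "ports:" section - other containers can still connect to db:3306
  nginx:
    build: ./nginx
    ports:
      - "80:80"       # the only port published on the host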

Docker image runs for Django but cannot access dev server URL

I'm working on containerizing my server. I believe the build succeeds; when I run docker-compose, my development server appears to run, but when I try to visit the associated dev server URL:
http://0.0.0.0:8000/
I get a page with the error:
This site can't be reached. The webpage at http://0.0.0.0:8000/ might be temporarily down or it may have moved permanently to a new web address.
This is my Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
WORKDIR C:/Users/15512/Desktop/django-project/peerplatform
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . ./
EXPOSE 8000
CMD ["python", "./manage.py", "runserver", "0.0.0.0:8000", "--settings=signup.settings"]
This is my docker-compose.yml file:
version: "3.8"
services:
redis:
restart: always
image: redis:latest
ports:
- "49153:6379"
pairprogramming_be:
restart: always
depends_on:
- redis
command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
env_file:
- ./signup/.env
- ./payments/.env
- ./.env
build:
context: ./
dockerfile: Dockerfile
ports:
- "8000:8001"
container_name: "pairprogramming_be"
volumes:
- "C:/Users/15512/Desktop/django-project/peerplatform://pairprogramming_be"
working_dir:
"/C:/Users/15512/Desktop/django-project/peerplatform"
This is the .env file:
DEBUG=1
DJANGO_ALLOWED_HOSTS=0.0.0.0
FYI: the redis image runs successfully. This is what I have tried:
I tried changing the allowed hosts to localhost and 127.0.0.1
I tried running the command python manage.py runserver and eventually added 0.0.0.0:8000
When I run docker inspect --format '{{ .NetworkSettings.IPAddress }}' pairprogramming_be I get a blank response / my Docker container doesn't appear to have an IP address
Where does port 8001 come from? The second number is the internal (container) listening port. Since you set your application (inside Docker) to listen on 8000, you should map container port 8000 to whatever host port you like.
Just change the compose file to:
ports:
  - "8000:8000"

Python WebSocket connection refused in Docker

I have a Python socketio application that works just fine when run locally. However, when I move it into Docker, external clients are unable to connect and throw this error:
socketio.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
The WebSocket server is really basic and listens on 0.0.0.0:8080. My client apps connect to localhost:8080. In my Docker container, I've exposed port 8080. I'm guessing that I'm setting up my container incorrectly:
Dockerfile:
FROM python:latest
WORKDIR /path/to/project
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
docker-compose.yml
version: "3.9"
services:
app:
build: .
working_dir: /path/to/project
stdin_open: true
tty: true
ports:
- "8080:8080"
volumes:
- type: bind
source: .
target: /path/to/project
I start up my container with docker-compose up, and I'm using Docker Desktop (if that helps). I use this environment for development, so I start my server with python my_server.py. The server starts successfully. What else am I missing?
I've tried the following, based on what others have said about this problem online:
Explicitly setting 0.0.0.0 to be the host
Using EXPOSE 8080 in my Dockerfile
Setting the network mode to host
Your container connectivity appears to be fine.
You have not specified any command to run in your container. To ensure your server is actually running on port 8080, you can add a CMD instruction in your Dockerfile or specify command in your docker-compose.yaml. This can be done with:
CMD ["python", "my_server.py"]
Or in the docker-compose.yaml file:
version: "3.9"
services:
app:
build: .
working_dir: /path/to/project
stdin_open: true
tty: true
command: python my_server.py #<== this overrides the Dockerfile CMD instruction
ports:
- "8080:8080"
volumes:
- type: bind
source: .
target: /path/to/project
You can learn more about command in docker-compose.yaml here.
That's also the recommended way to develop with Docker. Running commands manually inside running containers is not the right way to do dev work; instead, you re-run docker compose every time you test a change. That avoids a lot of problems like this one, where connectivity doesn't work.

Docker & Python, permission denied on Linux, but works when runnning on Windows

I'm trying to prepare a development container with Python + Flask and Postgres.
Since it is a development container, it is meant to be productive: I don't want to run a build each time I change a file, so I don't COPY the files in the build phase. Instead, I mount a volume with all the source files, so that when I change a Python file on the host machine, the Flask server automatically detects the change and restarts itself, even though it runs in the container.
So far so good: with docker-compose up these containers run fine on Windows, but when I tried to run them on Linux, I got:
/bin/sh: 1: ./start.sh: Permission denied
Everywhere I searched tells me to RUN chmod +x start.sh, which doesn't work because the file doesn't exist at the build phase, so I tried changing RUN to CMD... but I still get the same error.
Any idea why? Aren't containers supposed to help with 'works on my machine'? These files work on a Windows host but not on a Linux host.
Is what I'm doing the right approach to make file changes on the host machine show up in the container (without a rebuild)?
Thanks in advance!
Below are my files:
docker-compose.yml:
version: '3'
services:
  postgres-docker:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: "Postgres2019!"
    ports:
      - "9091:5432"
    expose:
      - "5432"
    volumes:
      - volpostgre:/var/lib/postgresql/data
    networks:
      - app-network
  rest-server:
    build:
      context: ./projeto
    ports:
      - "9092:5000"
    depends_on:
      - postgres-docker
    volumes:
      - ./projeto:/app
    networks:
      - app-network
volumes:
  volpostgre:
networks:
  app-network:
    driver: bridge
and inside the projeto folder I have the following Dockerfile:
FROM python:3.8.5
WORKDIR /app
CMD ./start.sh
And in start.sh:
#!/bin/bash
pip install -r requirements.txt
python setupdatabase.py
python run.py
One option you can try is to override CMD in docker-compose.yml so that it first sets the permission on the file and then executes the script.
By doing this you don't need to build a Docker image at all, since the only thing the image adds is the CMD ./start.sh.
webapp:
  image: python:3.8.5
  volumes:
    - $PWD/:/app
  working_dir: /app
  command: bash -c 'chmod +x start.sh && ./start.sh'
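Alternatively (same idea, just a sketch), you can invoke the script through the interpreter so no execute bit is needed at all, because bash only has to read the file:
webapp:
  image: python:3.8.5
  volumes:
    - $PWD/:/app
  working_dir: /app
  command: bash start.sh   # no execute permission needed when the shell reads the script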

Applying changes in Django/Docker files

I'm new to development with Django and Docker and I have a problem when I change a file in the project. My problem is as follows:
I make changes to any file in the Django project (template, view, urls), but they do not show up in my currently running app. Whenever I want to see my changes, I need to restart the server (I'm using nginx) by running docker-compose up again.
Is there a package I should install, or a change I should make, so that the app picks up changes at run time?
This is my Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /opt/services/djangoapp/src
COPY Pipfile Pipfile.lock /opt/services/djangoapp/src/
WORKDIR /opt/services/djangoapp/src
RUN pip install pipenv && pipenv install --system
RUN pip install django-livereload
COPY . /opt/services/djangoapp/src
RUN cd hello && python manage.py collectstatic --no-input
EXPOSE 8000
CMD ["gunicorn", "-c", "config/gunicorn/conf.py", "--bind", ":8000", "--chdir", "hello", "hello.wsgi:application"]
Let me know of any other information I can provide to give a better picture of the problem (if it is not clear enough). This is my docker-compose.yml:
version: '3'
services:
  # database containers, one for each db
  database1:
    image: postgres:10
    volumes:
      - database1_volume:/var/lib/postgresql/data
    env_file:
      - config/db/database1_env
    networks:
      - database1_network
  # web container, with django + gunicorn
  djangoapp:
    build: .
    environment:
      - DJANGO_SETTINGS_MODULE
    volumes:
      - .:/opt/services/djangoapp/src
      - static:/opt/services/djangoapp/static
      - media:/opt/services/djangoapp/media
      - .:/code
    networks:
      - database1_network
      - nginx_network
    depends_on:
      - database1
  # reverse proxy container (nginx)
  nginx:
    image: nginx:1.13
    ports:
      - 8000:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static:/opt/services/djangoapp/static
      - media:/opt/services/djangoapp/media
    networks:
      - nginx_network
    depends_on:
      - djangoapp
networks:
  database1_network:
    driver: bridge
  database2_network:
    driver: bridge
  nginx_network:
    driver: bridge
volumes:
  database1_volume:
  static:
  media:
This is pretty simple. Here is what happens:
In your Dockerfile you COPY your current folder (at the time you build the image) into the container. So while the container is running it DOES NOT sync with your host (current working folder) if you change something on the host after creating the container.
If you want to sync your host with the container, you have to mount it as a volume, either with -v for a single container or with volumes in docker-compose.
docker run -v /host/directory:/container/directory
docker run -v ./:/opt/services/djangoapp/src
or, using docker-compose if you have multiple containers:
version: '3'
services:
  web-service:
    build: .    # path to Dockerfile
    image: your-image
    volumes:
      - /host/directory:/container/directory
      # - ./:/opt/services/djangoapp/src
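One caveat for this particular setup (an assumption, since the image's CMD starts gunicorn): mounting the source only makes the changed files visible inside the container; gunicorn will not restart by itself when they change. For development you could additionally override the command with gunicorn's --reload flag, for example:
djangoapp:
  build: .
  volumes:
    - .:/opt/services/djangoapp/src
  command: gunicorn -c config/gunicorn/conf.py --bind :8000 --chdir hello --reload hello.wsgi:application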
