I have a django application with some model. I have a manage.py command that creates n model instances and saves them to the db. It runs at a decent speed on my host machine.
But if I run it in docker it runs very slowly: one instance is created and saved in 40-50 seconds. I think I am missing something about how Docker works; can somebody point out why performance is low and what I can do about it?
docker-compose.yml:
version: '2'
services:
  db:
    restart: always
    image: "postgres:9.6"
    ports:
      - "5432:5432"
    volumes:
      - /usr/local/var/postgres:/var/lib/postgresql
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=my_db
      - POSTGRES_USER=postgres
  web:
    build: .
    command: bash -c "./wait-for-it.sh db:5432 --timeout=15; python manage.py migrate; python manage.py runserver 0.0.0.0:8000; python manage.py mock 5"
    ports:
      - "8000:8000"
    expose:
      - "8000"
    depends_on:
      - db
Dockerfile for the web service:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ADD . .
WORKDIR .
RUN pip install -r requirements.txt
RUN chmod +x wait-for-it.sh
The problem here is most likely the volume /usr/local/var/postgres:/var/lib/postgresql, since you are running on a Mac. As I understand the Docker for Mac solution, it uses file sharing to implement host volumes, which is a lot slower than native filesystem access.
A possible workaround is to use a docker volume instead of a host volume. Here is an example:
version: '2'
volumes:
  postgres_data:
services:
  db:
    restart: always
    image: "postgres:9.6"
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=my_db
      - POSTGRES_USER=postgres
  web:
    build: .
    command: bash -c "./wait-for-it.sh db:5432 --timeout=15; python manage.py migrate; python manage.py runserver 0.0.0.0:8000; python manage.py mock 5"
    ports:
      - "8000:8000"
    expose:
      - "8000"
    depends_on:
      - db
Please note that this may complicate management of the Postgres data, as you can't simply access the data from your Mac. You can only use the docker CLI or containers to access, modify and back up this data. Also, I'm not sure what happens if you uninstall Docker from your Mac; it may be that you lose this data.
Two things can be a probable cause:
Starting a docker container takes some time, so if you start a new container for each instance this can add up.
What storage driver do you use? Docker (often) defaults to the device mapper loopback storage driver, which is slow. Here is some context. This will be painful especially if you start this container often.
Other than that your config looks sensible, and there are no obvious problems there. So if the above two points don't apply to you, please add some extra details, like how you actually create these model instances.
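On the last point: if the mock command saves each instance with an individual .save(), every object costs a full transaction and round trip to Postgres; batching with Django's bulk_create usually helps a lot, wherever the container runs. A sketch (the Item model and the batch size of 500 are assumptions, not from the question; the chunking helper is plain Python):

```python
# Plain-Python helper: split any iterable into lists of at most `size` items.
def chunked(items, size):
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

# Inside the management command (requires Django, shown for illustration only):
# objs = (Item(name=f"item-{i}") for i in range(n))
# for batch in chunked(objs, 500):
#     Item.objects.bulk_create(batch)  # one INSERT per batch instead of per object
```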
Related
I have a docker-compose file for a Django application.
Below is the structure of my docker-compose.yml
version: '3.8'
volumes:
  pypi-server:
services:
  backend:
    command: "bash ./install-ppr_an_run_dphi.sh"
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    volumes:
      - ./backend:/usr/src/app
    expose:
      - 8000:8000
    depends_on:
      - db
  pypi-server:
    image: pypiserver/pypiserver:latest
    ports:
      - 8080:8080
    volumes:
      - type: volume
        source: pypi-server
        target: /data/packages
    command: -P . -a . /data/packages
    restart: always
  db:
    image: mysql:8
    ports:
      - 3306:3306
    volumes:
      - ~/apps/mysql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=gary
      - MYSQL_PASSWORD=tempgary
      - MYSQL_USER=gary_user
      - MYSQL_DATABASE=gary_db
  nginx:
    build: ./nginx
    ports:
      - 80:80
    depends_on:
      - backend
The Django app depends on a couple of private packages hosted on the private pypi-server, without which the app won't run.
I created a separate Dockerfile for the django-backend alone, which installs the packages from requirements.txt and the packages from the private pypi-server. But the Dockerfile of the django-backend service runs even before the private pypi-server is up.
If I move the installation of the private packages to the command of the django-backend service in docker-compose.yml, then it works fine. The issue then is that, if the backend is running and I want to run some commands in the django-backend container (./manage.py migrate), it says that the private packages are not installed.
I'm not sure how to proceed with this; it would be really helpful if I could get all these services running at once by just running docker-compose up --build -d.
I created a separate docker-compose file for the pypi-server, so it is up and running even before I build/start the other services.
Have you tried adding the pypi service to depends_on of the backend app?
backend:
  command: "bash ./install-ppr_an_run_dphi.sh"
  build:
    context: ./backend
    dockerfile: ./Dockerfile
  volumes:
    - ./backend:/usr/src/app
  expose:
    - 8000:8000
  depends_on:
    - db
    - pypi-server
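One caveat: depends_on only controls start order; it does not wait for the pypi server to be ready to accept requests. If readiness matters, the Compose specification supports pairing a healthcheck with condition: service_healthy. A sketch, assuming a Compose version that implements the spec and that the image can run wget (otherwise substitute curl or a python one-liner):

```yaml
pypi-server:
  image: pypiserver/pypiserver:latest
  healthcheck:
    test: ["CMD-SHELL", "wget -q -O /dev/null http://localhost:8080/ || exit 1"]
    interval: 5s
    timeout: 3s
    retries: 10

backend:
  depends_on:
    db:
      condition: service_started
    pypi-server:
      condition: service_healthy
```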
Your docker-compose file begs a few questions though.
Why install custom packages into the backend service at run time? I can see many problems that might arise from this, such as latency during service restarts, possibly different environments between runs of the same version of the backend service, any installation failure coming up during deployment and bringing it down, etc. Installation should be done during the build of the docker image. Could you provide your Dockerfile, maybe?
Is there any reason why the pypi server has to share docker-compose with the application? I'd suggest having it in a separate deployment especially if it is to be shared among other projects.
Is the pypi server supposed to be used for anything else than a source of the custom packages for the backend service? If not then I'd consider getting rid of it / using it for the builds only.
Is there any good reason why you want to have all the ports exposed? This creates a significant attack surface. E.g. an attacker could bypass the reverse proxy and talk directly to the backend service using port 8000 or they'd be able to connect to the db on the port 3306. Nb docker-compose creates subnetworks among the containers so they can access each other's ports even if those ports are not forwarded to the host machine.
Consider using docker secrets to store db credentials.
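To illustrate the build-time installation suggested above, a sketch only: the base image, index URL and port are assumptions (pip's --extra-index-url and --trusted-host flags are real options):

```dockerfile
FROM python:3.9
WORKDIR /usr/src/app
COPY requirements.txt .
# Pull public packages from PyPI and private ones from the pypi server.
# Note: at build time the compose network does not exist yet, so the index
# must be reachable from the build environment (e.g. a pypi server that is
# already running and reachable by host name or IP).
RUN pip install -r requirements.txt \
    --extra-index-url http://pypi-server:8080/simple/ \
    --trusted-host pypi-server
COPY . .
```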
docker-compose.yml:
python-api: &python-api
  build:
    context: /Users/AjayB/Desktop/python-api/
  ports:
    - "8000:8000"
  networks:
    - app-tier
  expose:
    - "8000"
  depends_on:
    - python-model
  volumes:
    - .:/python_api/
  environment:
    - PYTHON_API_ENV=development
  command: >
    sh -c "ls /python-api/ &&
    python_api_setup.sh development
    python manage.py migrate &&
    python manage.py runserver 0.0.0.0:8000"
python-model: &python-model
  build:
    context: /Users/AjayB/Desktop/Python/python/
  ports:
    - "8001:8001"
  networks:
    - app-tier
  environment:
    - PYTHON_API_ENV=development
  expose:
    - "8001"
  volumes:
    - .:/python_model/
  command: >
    sh -c "ls /python-model/
    python_setup.sh development
    cd /server/ &&
    python manage.py migrate &&
    python manage.py runserver 0.0.0.0:8001"
python-celery:
  <<: *python-api
  environment:
    - PYTHON_API_ENV=development
  networks:
    - app-tier
  links:
    - redis:redis
  depends_on:
    - redis
  command: >
    sh -c "celery -A server worker -l info"
redis:
  image: redis:5.0.8-alpine
  hostname: redis
  networks:
    - app-tier
  expose:
    - "6379"
  ports:
    - "6379:6379"
  command: ["redis-server"]
python-celery is defined by reusing python-api's config, but it should run as a separate container. Yet it is trying to occupy the same port as python-api, which should never be the case.
The error that I'm getting is:
AjayB$ docker-compose up
Creating integrated_redis_1 ... done
Creating integrated_python-model_1 ... done
Creating integrated_python-api_1 ...
Creating integrated_python-celery_1 ... error
Creating integrated_python-api_1 ... done
ERROR: for integrated_python-celery_1  Cannot start service python-celery: driver failed programming external connectivity on endpoint integrated_python-celery_1 (ab5e079dbc3a30223e16052f21744c2b5dfc56adbe1d1055165b1f85f179f69c): Bind for 0.0.0.0:8000 failed: port is already allocated
ERROR: for python-celery Cannot start service python-celery: driver failed programming external connectivity on endpoint integrated_python-celery_1 (ab5e079dbc3a30223e16052f21744c2b5dfc56adbe1d1055165b1f85f179f69c): Bind for 0.0.0.0:8000 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
on doing docker ps -a, I get this:
AjayB$ docker ps -a
CONTAINER ID   IMAGE                      COMMAND                  CREATED          STATUS                      PORTS                    NAMES
2ff1277fb7a7   integrated_python-celery   "sh -c 'celery -A se…"   10 seconds ago   Created                                              integrated_python-celery_1
5b60221b42a4   integrated_python-api      "sh -c 'ls /crackd-a…"   11 seconds ago   Up 9 seconds                0.0.0.0:8000->8000/tcp   integrated_python-api_1
bacd8aa3268f   integrated_python-model    "sh -c 'ls /crackd-m…"   12 seconds ago   Exited (2) 10 seconds ago                            integrated_python-model_1
9fdab833b436   redis:5.0.8-alpine         "docker-entrypoint.s…"   12 seconds ago   Up 10 seconds               0.0.0.0:6379->6379/tcp   integrated_redis_1
I tried force removing the containers and ran docker-compose up again, getting the same error. :/ Where am I making a mistake?
I'm just doubtful about the volumes: section. Can anyone please tell me if volumes is correct?
And please help me with this error. PS, first try on docker.
Thanks!
This is because you re-use the full config of python-api, including the ports section, which publishes port 8000 (by the way, expose is redundant since your ports section already publishes the port).
I would create a common section that could be used by any services. In your case, it would be something like that:
version: '3.7'

x-common-python-api:
  &default-python-api
  build:
    context: /Users/AjayB/Desktop/python-api/
  networks:
    - app-tier
  environment:
    - PYTHON_API_ENV=development
  volumes:
    - .:/python_api/

services:
  python-api:
    <<: *default-python-api
    ports:
      - "8000:8000"
    depends_on:
      - python-model
    command: >
      sh -c "ls /python-api/ &&
      python_api_setup.sh development
      python manage.py migrate &&
      python manage.py runserver 0.0.0.0:8000"
  python-model: &python-model
    .
    .
    .
  python-celery:
    <<: *default-python-api
    links:
      - redis:redis
    depends_on:
      - redis
    command: >
      sh -c "celery -A server worker -l info"
  redis:
    .
    .
    .
There is a lot in that docker-compose.yml file, but much of it is unnecessary. expose: in a Dockerfile does almost nothing; links: aren't needed with the current networking system; Compose provides a default network for you; your volumes: try to inject code into the container that should already be present in the image. If you clean all of this up, the only part that you'd really want to reuse from one container to the other is its build: (or image:), at which point the YAML anchor syntax is unnecessary.
This docker-compose.yml should be functionally equivalent to what you show in the question:
version: '3'
services:
  python-api:
    build:
      context: /Users/AjayB/Desktop/python-api/
    ports:
      - "8000:8000"
    # No networks:, use `default`
    # No expose:, use what's in the Dockerfile (or nothing)
    depends_on:
      - python-model
    # No volumes:, use what's in the Dockerfile
    # No environment:, this seems to be a required setting in the Dockerfile
    # No command:, use what's in the Dockerfile
  python-model:
    build:
      context: /Users/AjayB/Desktop/Python/python/
    ports:
      - "8001:8001"
  python-celery:
    build: # copied from python-api
      context: /Users/AjayB/Desktop/python-api/
    depends_on:
      - redis
    command: celery -A server worker -l info # one line, no sh -c wrapper
  redis:
    image: redis:5.0.8-alpine
    # No hostname:, it doesn't do anything
    ports:
      - "6379:6379"
    # No command:, use what's in the image
Again, notice that the only thing we've actually copied from the python-api container to the python-celery container is the build: block; all of the other settings that would be shared across the two containers (code, exposed ports) are included in the Dockerfile that describes how to build the image.
The flip side of this is that you need to make sure all of these settings are in fact included in your Dockerfile:
# Copy the application code in
COPY . .
# Set the "development" environment variable
ENV PYTHON_API_ENV=development
# Document which port you'll use by default
EXPOSE 8000
# Specify the default command to run
# (Consider writing a shell script with this content instead)
CMD python_api_setup.sh development && \
python manage.py migrate && \
python manage.py runserver 0.0.0.0:8000
I'm new to development with django and docker and I have a problem when I change a file in the project. My problem is as follows:
I make changes to the content of any file in the django project (template, view, urls) but the change does not show up in my currently running app. Whenever I want to see my changes I need to restart the server (I'm using nginx) by doing docker-compose up again.
Is there a package I should install, or a change I should make, so that file changes are picked up at runtime?
This is my Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /opt/services/djangoapp/src
COPY Pipfile Pipfile.lock /opt/services/djangoapp/src/
WORKDIR /opt/services/djangoapp/src
RUN pip install pipenv && pipenv install --system
RUN pip install django-livereload
COPY . /opt/services/djangoapp/src
RUN cd hello && python manage.py collectstatic --no-input
EXPOSE 8000
CMD ["gunicorn", "-c", "config/gunicorn/conf.py", "--bind", ":8000", "--chdir", "hello", "hello.wsgi:application"]
Let me know any other information that I might provide to give a better glimpse of the problem (if it is not clear enough).
version: '3'

services:
  # database containers, one for each db
  database1:
    image: postgres:10
    volumes:
      - database1_volume:/var/lib/postgresql/data
    env_file:
      - config/db/database1_env
    networks:
      - database1_network
  # web container, with django + gunicorn
  djangoapp:
    build: .
    environment:
      - DJANGO_SETTINGS_MODULE
    volumes:
      - .:/opt/services/djangoapp/src
      - static:/opt/services/djangoapp/static
      - media:/opt/services/djangoapp/media
      - .:/code
    networks:
      - database1_network
      - nginx_network
    depends_on:
      - database1
  # reverse proxy container (nginx)
  nginx:
    image: nginx:1.13
    ports:
      - 8000:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static:/opt/services/djangoapp/static
      - media:/opt/services/djangoapp/media
    networks:
      - nginx_network
    depends_on:
      - djangoapp

networks:
  database1_network:
    driver: bridge
  database2_network:
    driver: bridge
  nginx_network:
    driver: bridge

volumes:
  database1_volume:
  static:
  media:
This is pretty simple. What happens here now:
You have the Dockerfile and you COPY your current folder (at the time you build your image) into the container. So while the container is running it DOES NOT sync with your host (current working folder) if you change something on the host after creating the container.
If you want to sync your host with the container you have to mount it as a volume, either with -v for a single container or with volumes in docker-compose.
docker run -v /host/directory:/container/directory <image>
docker run -v "$(pwd)":/opt/services/djangoapp/src <image>
or using docker-compose if you have multiple containers
version: '3'
services:
  web-service:
    build: . # path to Dockerfile
    image: your-image
    volumes:
      - /host/directory:/container/directory
      #- ./:/opt/services/djangoapp/src
My current docker-compose.yml file:
version: '2'
services:
  app:
    restart: always
    build: ./web
    ports:
      - "8000:8000"
    volumes:
      - ./web:/app/web
    command: /usr/local/bin/gunicorn -w 3 -b :8000 project:create_app()
    environment:
      FLASK_APP: project/__init__.py
    depends_on:
      - db
    working_dir: /app/web
  db:
    image: postgres:9.6-alpine
    restart: always
    volumes:
      - dbvolume:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
volumes:
  dbvolume:
I'm now trying to create a docker-compose-test.yml file that overrides the previous file for testing. What came to my mind was to use this:
version: '2'
services:
  app:
    command: pytest
  db:
    volumes:
      - dbtestvolume:/var/lib/postgresql/data
volumes:
  dbtestvolume:
And then run the tests with the command:
docker-compose -f docker-compose.yml -f docker-compose-test.yml run --rm app
which, as far as I understand, should override only the aspects that differ from the compose file used for development, that is, the command and the data volume where the data is stored.
The command is successfully overridden, but unfortunately the data volume stays the same, so the data of my application gets overwritten when I run my tests.
Is this the correct way to set up a docker configuration for the tests? Any suggestion about what is going wrong?
If this is not the correct way, what is the proper way to setup a docker-compose configuration for testing?
Alternative test
I tried to change my docker-compose-test.yml file to use a different service (db-test) for testing:
version: '2'
services:
  app:
    command: pytest
    depends_on:
      - db-test
  db-test:
    image: postgres:9.6-alpine
    restart: always
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
What happens now is that the data is not overwritten (so, in a way, it works, hurray!) when I run my tests, but if I try to run the command:
docker-compose down
I get this output:
Stopping app_app_1 ... done
Stopping app_db_1 ... done
Found orphan containers (app_db-test_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
and then docker-compose down fails. So something is not configured properly.
Any idea?
If you don't want to persist the DB data, don't use volumes; that way you will have a fresh database every time you start the container.
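For a throwaway test database there is also the tmpfs option: keep Postgres's data directory in memory so nothing survives the container. A sketch of a standalone test compose file (used on its own rather than layered over the base file, to sidestep the volume-merging problem; service names and credentials mirror the question):

```yaml
# docker-compose-test.yml -- standalone, not an override file
version: '2'
services:
  app:
    build: ./web
    command: pytest
    depends_on:
      - db
  db:
    image: postgres:9.6-alpine
    tmpfs:
      - /var/lib/postgresql/data   # in-memory; discarded when the container stops
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
```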
I guess you need some prepopulated data in your tables, so just build a new DB image copying in the data you need. The Dockerfile could be something like:
FROM postgres:9.6-alpine
COPY db-data/ /var/lib/postgresql/data
In case you need to update the data, mount the db-data/ using -v, change it and rebuild the image.
BTW, it would be better to use an automated pipeline to test your builds, using Jenkins, GitLab CI, Travis or whatever solution that suits you. Anyway, you can use docker-compose in your pipeline as well to keep it consistent with your local development environment.
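As a sketch of what such a pipeline stage could look like (GitLab CI syntax; the job name and image tags are assumptions, and the runner image must ship the compose plugin):

```yaml
# .gitlab-ci.yml -- hypothetical test job reusing the compose files
test:
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker compose -f docker-compose.yml -f docker-compose-test.yml run --rm app
    - docker compose down -v   # tear down containers and test volumes
```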
I'm trying to find a good way to populate a database with initial data for a simple application. I'm using a tutorial from realpython.com as a starting point. I then run a simple python script after the database is created to add a single entry, but when I do this the data is added multiple times even though I only call the script once.
population script (test.py):
from app import db
from models import *
t = Post("Hello 3")
db.session.add(t)
db.session.commit()
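Whatever causes the repeated runs, one defence is to make the seed script idempotent: look the row up before inserting (with the Django ORM that is get_or_create; with SQLAlchemy, a filtered query before session.add). The same idea with plain stdlib sqlite3, so the sketch is runnable anywhere (table and column names are made up for illustration):

```python
import sqlite3

def seed_post(conn, text):
    """Insert `text` into posts only if an identical row is not already there."""
    row = conn.execute("SELECT 1 FROM posts WHERE body = ?", (text,)).fetchone()
    if row is None:
        conn.execute("INSERT INTO posts (body) VALUES (?)", (text,))
        conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")
seed_post(conn, "Hello 3")
seed_post(conn, "Hello 3")  # second call is a no-op
count = conn.execute("SELECT COUNT(*) FROM posts").fetchone()[0]
```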
edit:
Here is the docker-compose file which i use to build the project:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn -w 2 -b :8000 app:app
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
data:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  command: "true"
postgres:
  restart: always
  image: postgres:latest
  volumes_from:
    - data
  ports:
    - "5432:5432"
It references two different Dockerfiles:
Dockerfile #1, which builds the app container, is one line:
FROM python:3.4-onbuild
Dockerfile #2 is used to build the nginx container:
FROM tutum/nginx
RUN rm /etc/nginx/sites-enabled/default
ADD sites-enabled/ /etc/nginx/sites-enabled
edit2:
Some people have suggested that the data was persisting over several runs, and that was my initial thought as well. This is not the case, as I remove all active docker containers via docker rm before testing. Also, the number of "extra" rows is not consistent, ranging randomly from 3 to 6 in the few tests that I have run so far.
It turns out this is a bug related to using the run command on containers with the "restart: always" instruction in the docker-compose file. In order to resolve this issue without a bug fix, I removed "restart: always" from the web container.
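In compose terms the fix is just deleting one line from the web service (a sketch of the relevant fragment):

```yaml
web:
  # restart: always   <- removed; with it, Docker kept restarting the one-off
  #                      container created by `docker-compose run`, re-running
  #                      the seed script a random number of times
  build: ./web
  env_file: .env
  command: /usr/local/bin/gunicorn -w 2 -b :8000 app:app
```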
related issue: https://github.com/docker/compose/issues/1013