This is quite a basic question, but I haven't been able to get an answer from researching on Google, although I think that's more due to my lack of understanding than the answer not being out there.
I am getting to grips with Docker and have a Python Flask-Admin script and a Postgres db in two separate containers, but under one docker-compose file. I would like another Python script, which will be scraping a website, to run at the same time. I have the file all set up, but how do I include it in the same docker-compose.yml or Dockerfile?
version: '2'
services:
db:
image: postgres
environment:
      - POSTGRES_PASSWORD=XXXXX
dev:
build: .
volumes:
- ./app:/code/app
- ./run.sh:/code/run.sh
ports:
- "5000:5000"
depends_on:
- db
Exactly what to write depends on your directory layout, but you basically want:
version: '2'
services:
db:
image: postgres
environment:
      - POSTGRES_PASSWORD=XXXXX
dev:
build: <path-to-dev>
volumes:
- ./app:/code/app
- ./run.sh:/code/run.sh
ports:
- "5000:5000"
depends_on:
- db
scraper:
build: <path-to-scraper>
depends_on:
- db
The two paths might be the same. You might push an image to a registry and then reference that instead of building it on the fly, as sketched below. You could also mount the code directory into the scraper container instead of building it into the image (but don't do that for an actual deployment).
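For example, a minimal sketch of the scraper service referencing a pre-built image instead of a build context (the image name myregistry/scraper:latest is hypothetical):
  scraper:
    image: myregistry/scraper:latest  # hypothetical image pushed to a registry
    depends_on:
      - db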
Related
Docker novice here.
I have committed new changes inside the application. These changes were copied from my local machine to the host machine, and then to the docker container.
So I created a new image: sudo docker commit old_container_id djangotango-on-docker_web
Then I spin up a docker container using the newly created image:
sudo docker run --name djangotango-web -d --expose 8000 djangotango-on-docker_web gunicorn djangotango.wsgi:application --bind 0.0.0.0:8000
Here djangotango-on-docker_web is my newly created image.
But my application gives a 502 error after this. My new container is not synced properly.
docker-compose.yml:
version: '3.8'
# networks:
# public_network:
# name: public_network
# driver: bridge
services:
web:
build:
context: .
dockerfile: Dockerfile.prod
# image: <aws-account-id>.dkr.ecr.<aws-region>.amazonaws.com/django-ec2:web
command: gunicorn djangotango.wsgi:application --bind 0.0.0.0:8000
volumes:
# - .:/home/app/web/
- static_volume:/home/app/web/static
- media_volume:/home/app/web/media
expose:
- 8000
env_file:
- ./.env.staging
networks:
service_network:
db:
image: postgres:12.0-alpine
volumes:
- postgres_data:/var/lib/postgresql/data/
env_file:
- ./.env.staging.db
networks:
service_network:
# depends_on:
# - web
# pgadmin:
# image: dpage/pgadmin4
# env_file:
# - ./.env.staging.db
# ports:
# - "8080:80"
# volumes:
# - pgadmin-data:/var/lib/pgadmin
# depends_on:
# - db
# links:
# - "db:pgsql-server"
# environment:
# - PGADMIN_DEFAULT_EMAIL=4652173624824872
# - PGADMIN_DEFAULT_PASSWORD=exampleeee
# - PGADMIN_LISTEN_PORT=80
# networks:
# service_network:
nginx-proxy:
build: nginx
# image: <aws-account-id>.dkr.ecr.<aws-region>.amazonaws.com/django-ec2:nginx-proxy
restart: always
ports:
- 443:443
- 80:80
networks:
service_network:
volumes:
- static_volume:/home/app/web/static
- media_volume:/home/app/web/media
- certs:/etc/nginx/certs
- html:/usr/share/nginx/html
- vhost:/etc/nginx/vhost.d
- /var/run/docker.sock:/tmp/docker.sock:ro
labels:
- "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"
depends_on:
- web
nginx-proxy-letsencrypt:
image: jrcs/letsencrypt-nginx-proxy-companion
env_file:
- .env.staging.proxy-companion
networks:
service_network:
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- certs:/etc/nginx/certs
- html:/usr/share/nginx/html
- vhost:/etc/nginx/vhost.d
depends_on:
- nginx-proxy
networks:
service_network:
volumes:
postgres_data:
pgadmin-data:
static_volume:
media_volume:
certs:
html:
vhost:
How do I do this the correct way? I'm running my production application on my own domain name.
What I can understand from the logs is that my web container is no longer on the same network as the other containers.
I don't want to rebuild via docker-compose; that would solve the problem, but I guess it would increase the image size, plus it's not recommended.
The correct approach here is to use only docker-compose commands, and to go ahead and rebuild your image:
docker-compose up --build --force-recreate web
Many of the options you'd need to recreate this with a plain docker run command are listed in the docker-compose.yml file, but some are generated implicitly. The docker run command you show doesn't have a --net option to attach to the Compose network (which could result in the error you're getting), it doesn't have the -v options to overlay the image's static files with content from the volumes, and it doesn't pass in the settings from the .env.staging file.
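For illustration, a rough sketch of what an equivalent plain docker run would have to look like; the djangotango_-prefixed network and volume names are assumptions based on Compose's usual project-directory prefixing:
# network and volume names assume Compose's default <project>_ prefix
docker run --name djangotango-web -d \
  --net djangotango_service_network \
  --env-file ./.env.staging \
  -v djangotango_static_volume:/home/app/web/static \
  -v djangotango_media_volume:/home/app/web/media \
  --expose 8000 \
  djangotango-on-docker_web \
  gunicorn djangotango.wsgi:application --bind 0.0.0.0:8000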
You should almost never use docker commit either. What's the code change you made in your image, and how would your colleagues get and test that change? Especially with the mentions of "prod" here, running code in production that you haven't built from source and tested through your usual CI process is usually discouraged.
(In terms of image size, a committed image will always be larger than the original image; a new image built with docker build starts from the base image and is generally smaller. Committing images also tends to lose settings like the default command to run.)
My current docker-compose.yml file:
version: '2'
services:
app:
restart: always
build: ./web
ports:
- "8000:8000"
volumes:
- ./web:/app/web
command: /usr/local/bin/gunicorn -w 3 -b :8000 project:create_app()
environment:
FLASK_APP: project/__init__.py
depends_on:
- db
working_dir: /app/web
db:
image: postgres:9.6-alpine
restart: always
volumes:
- dbvolume:/var/lib/postgresql/data
environment:
POSTGRES_DB: app
POSTGRES_USER: app
POSTGRES_PASSWORD: app
volumes:
dbvolume:
I'm now trying to create a docker-compose-test.yml file that overrides the previous file for testing. What came to my mind was to use this:
version: '2'
services:
app:
command: pytest
db:
volumes:
- dbtestvolume:/var/lib/postgresql/data
volumes:
dbtestvolume:
And then run the tests with the command:
docker-compose -f docker-compose.yml -f docker-compose-test.yml run --rm app
which, as far as I understand, should override only the aspects that differ from the compose file used for development, that is, the command and the data volume where the data is stored.
The command is successfully overridden, but unfortunately the data volume stays the same, so my application's data gets overwritten when I run my tests.
Is this the correct way to set up a docker configuration for the tests? Any suggestion about what is going wrong?
If this is not the correct way, what is the proper way to setup a docker-compose configuration for testing?
Alternative test
I tried to change my docker-compose-test.yml file to use a different service (db-test) for testing:
version: '2'
services:
app:
command: pytest
depends_on:
- db-test
db-test:
image: postgres:9.6-alpine
restart: always
environment:
POSTGRES_DB: app
POSTGRES_USER: app
POSTGRES_PASSWORD: app
What happens now is that the data is not overwritten (so, in a way, it works, hurray!) when I run my tests, but if I try to run the command:
docker-compose down
I get this output:
Stopping app_app_1 ... done
Stopping app_db_1 ... done
Found orphan containers (app_db-test_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
and then the docker-compose down fails. So something is not configured properly.
Any idea?
If you don't want to persist the DB data, don't use volumes; that way you will have a fresh database every time you start the container. A self-contained test file along the lines of the sketch below would do it.
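A minimal sketch of a standalone docker-compose-test.yml with no db volume (run it by itself, e.g. docker-compose -f docker-compose-test.yml run --rm app, so nothing is merged in from the development file; the service and image names follow the question):
version: '2'
services:
  app:
    build: ./web
    command: pytest
    working_dir: /app/web
    depends_on:
      - db
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
    # no volumes: the database starts fresh on every run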
I guess you need some prepopulated data in your tables, so just build a new DB image that copies in the data you need. The Dockerfile could be something like:
FROM postgres:9.6-alpine
COPY db-data/ /var/lib/postgresql/data
In case you need to update the data, mount db-data/ using -v (as sketched below), change it, and rebuild the image.
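For example, a sketch of editing the seed data through a throwaway container (db-data/ is the directory copied in the Dockerfile above):
# run a temporary postgres against the host directory, change the data, then rebuild
docker run --rm -v $(pwd)/db-data:/var/lib/postgresql/data postgres:9.6-alpine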
BTW, it would be better to use an automated pipeline to test your builds, using Jenkins, GitLab CI, Travis, or whatever solution suits you. You can use docker-compose in your pipeline as well, to keep it consistent with your local development environment.
I have a Django application with a model. I have a manage.py command that creates n model instances and saves them to the db. It runs with decent speed on my host machine.
But if I run it in Docker it runs very slowly: one instance is created and saved in 40-50 seconds. I think I am missing something about how Docker works; can somebody point out why performance is low and what I can do about it?
docker-compose.yml:
version: '2'
services:
db:
restart: always
image: "postgres:9.6"
ports:
- "5432:5432"
volumes:
- /usr/local/var/postgres:/var/lib/postgresql
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=my_db
- POSTGRES_USER=postgres
web:
build: .
command: bash -c "./wait-for-it.sh db:5432 --timeout=15; python manage.py migrate; python manage.py runserver 0.0.0.0:8000; python manage.py mock 5"
ports:
- "8000:8000"
expose:
- "8000"
depends_on:
- db
Dockerfile for the web service:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ADD . .
WORKDIR .
RUN pip install -r requirements.txt
RUN chmod +x wait-for-it.sh
The problem here is most likely the volume /usr/local/var/postgres:/var/lib/postgresql, as you are using it on a Mac. As I understand the Docker for Mac solution, it uses file sharing to implement host volumes, which is a lot slower than native filesystem access.
A possible workaround is to use a docker volume instead of a host volume. Here is an example:
version: '2'
volumes:
postgres_data:
services:
db:
restart: always
image: "postgres:9.6"
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=my_db
- POSTGRES_USER=postgres
web:
build: .
command: bash -c "./wait-for-it.sh db:5432 --timeout=15; python manage.py migrate; python manage.py runserver 0.0.0.0:8000; python manage.py mock 5"
ports:
- "8000:8000"
expose:
- "8000"
depends_on:
- db
Please note that this may complicate management of the postgres data, as you can't simply access the data from your Mac. You can only use the docker CLI or containers to access, modify, and back up this data; a backup sketch follows below. Also, I'm not sure what happens if you uninstall Docker from your Mac; it may be that you lose this data.
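For example, a sketch of backing up the named volume with a throwaway container (Compose usually prefixes the volume name with the project directory name, so it may really be called something like myproject_postgres_data):
docker run --rm \
  -v postgres_data:/data:ro \
  -v $(pwd):/backup \
  alpine tar czf /backup/postgres_data.tar.gz -C /data .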
Two things can be probable causes:
Starting a docker container takes some time, so if you start a new container for each instance, this can add up.
What storage driver do you use? Docker (often) defaults to the devicemapper loopback storage driver, which is slow. Here is some context. This will be painful, especially if you start this container often. You can check the driver as shown below.
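A quick way to check which storage driver the daemon is using (this just formats the docker info output):
docker info --format '{{.Driver}}'    # prints e.g. overlay2, aufs, or devicemapper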
Other than that, your config looks sensible and there are no obvious problems there. So if the above two points don't apply to you, please add some extra details, like how you actually add these model instances.
I'm trying to find a good way to populate a database with initial data for a simple application. I'm using a tutorial from realpython.com as a starting point. I then run a simple Python script after the database is created to add a single entry, but when I do this the data is added multiple times, even though I only call the script once.
population script (test.py):
from app import db
from models import *
t = Post("Hello 3")
db.session.add(t)
db.session.commit()
edit:
Here is the docker-compose file which I use to build the project:
web:
restart: always
build: ./web
expose:
- "8000"
links:
- postgres:postgres
volumes:
- /usr/src/app/static
env_file: .env
command: /usr/local/bin/gunicorn -w 2 -b :8000 app:app
nginx:
restart: always
build: ./nginx/
ports:
- "80:80"
volumes:
- /www/static
volumes_from:
- web
links:
- web:web
data:
restart: always
image: postgres:latest
volumes:
- /var/lib/postgresql
command: "true"
postgres:
restart: always
image: postgres:latest
volumes_from:
- data
ports:
- "5432:5432"
It references two different Dockerfiles:
Dockerfile #1 builds the app container and is one line:
FROM python:3.4-onbuild
Dockerfile #2 is used to build the nginx container:
FROM tutum/nginx
RUN rm /etc/nginx/sites-enabled/default
ADD sites-enabled/ /etc/nginx/sites-enabled
edit2:
Some people have suggested that the data was persisting over several runs, and that was my initial thought as well. This is not the case, as I remove all active docker containers via docker rm before testing. Also, the number of "extra" entries is not consistent, ranging randomly from 3 to 6 in the few tests that I have run so far.
It turns out this is a bug related to using the run command on containers with the "restart: always" policy in the docker-compose file. To resolve this issue without a bug fix, I removed "restart: always" from the web container.
related issue: https://github.com/docker/compose/issues/1013
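With that policy removed, a sketch of running the population script as a one-off command that no restart policy will re-run (this assumes test.py sits in the image's working directory, as in the question):
docker-compose run --rm web python test.py   # runs once, then removes the container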
I'm trying to use docker for odoo module development. I have the following docker-compose.yml file:
db:
image: postgres
environment:
POSTGRES_USER: odoo
POSTGRES_PASSWORD: odoo
volumes:
- data:/var/lib/postgresql/data/
odoo:
image: odoo
links:
- db:db
ports:
- "127.0.0.1:8069:8069"
volumes:
- extra-addons:/mnt/extra-addons
command: -- --update=tutorial
The module contains only an __openerp__.py file, but odoo doesn't show the changes I make to it, even with the --update=tutorial option:
{
'name': "tutorial",
'summary': """Hello world!!""",
'description': """
This is the new description
""",
'author': "ybouhjira",
'website': "ybouhjira.com",
'category': 'Technical Settings',
'version': '0.1',
'depends': ["base"],
}
This file is in a folder named tutorial located in extra-addons, and I tried stopping and starting the containers, even removing and recreating them.
As shodowsjedi already said, you need to create an __init__.py file (see the module structure: https://www.odoo.com/documentation/8.0/howtos/backend.html#module-structure).
Also, check permissions in your odoo containers: the files in the odoo volume keep the uid and gid they have on your system (the host), which can map to a different user inside the container. To check this you can use docker exec:
docker exec docker_odoo_1 ls -la /mnt/extra-addons
If you don't know the docker name of your container, you can retrieve it with:
docker-compose ps
Last, and probably most important, check the odoo logs with:
docker-compose logs
and update your module in the configuration page of Odoo (or at server startup).
You have to add your own config file. First, in docker-compose.yml, mount /etc/odoo:
odoo:
image: odoo
links:
- db:db
ports:
- "127.0.0.1:8069:8069"
volumes:
- extra-addons:/mnt/extra-addons
- ./config:/etc/odoo
Then create "odoo.conf" in ./config and add configuration options like below.
[options]
addons_path = /mnt/extra-addons,/usr/lib/python2.7/dist-packages/odoo/addons
data_dir = /var/lib/odoo
auto_reload = True
Restart odoo, enable debug mode, then go to Apps -> Update Apps List.
If it still doesn't work, check the access rights on the addons directories and make sure group and others can read them.
To create a new module you need more than the Odoo manifest file __openerp__.py; you also need the Python descriptor file __init__.py as the minimal structure. Of course a real module needs more than two files, but that is the minimum for a module to exist; a sketch of the layout follows below. Once you create a module on an existing database, you need to call Update Apps List under Settings to load your module correctly, and then you will be able to install it.
Here is the quick guide on module creation.
Here is the detailed guide on the API and framework.
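For reference, a minimal sketch of that on-disk layout (the tutorial name matches the question; __init__.py can be empty when the module has no Python code):
extra-addons/
  tutorial/
    __init__.py       # Python descriptor file; an empty file is enough here
    __openerp__.py    # the manifest shown in the question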
The --update option requires -d to specify the database name; see the example below.
Odoo CLI doc
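For example, in the compose file's command (the database name odoo_db is an assumption; use your actual database name):
command: -- -d odoo_db --update=tutorial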
Take into account that the description, icon, and version inside the manifest do not always change immediately. Try Shift+F5 in your browser, but this is not so relevant while you are developing.
Besides having, as a minimum, the manifest and the __init__.py file, if you are using docker-compose I recommend having a script to bring down, remove, and recreate your containers:
./doeall
cat doeall
#!/bin/sh
docker-compose down
docker-compose rm
docker-compose up -d
docker-compose logs -f
For development purposes, it is also convenient to keep the db in a separate docker-compose.yml, so that you can reuse the same db container for several odoo installations.
Take a look at my docker-compose for multiple instances here:
https://github.com/bmya/odoo-docker-compose/tree/multi
Anyway, if you still want to have Postgres as the db in the same docker-compose file, you have it in this other branch:
https://github.com/bmya/odoo-docker-compose/blob/uni/docker-compose.yml
Again, regarding your module, the important things when you are writing code are:
When you change something in the methods in the Python code, just restart the server.
When you change something in the model inside the Python code, restart the server and reinstall the module.
When you change data files (views, data, etc.), just reinstall the module in order to update the data files.
The corresponding commands are sketched below.
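A sketch of those steps with docker-compose (the service name odoo matches the compose files above; the database name odoo_db is an assumption, and --stop-after-init makes the update run exit when it finishes):
# Python method changes: a restart is enough
docker-compose restart odoo
# model or data file changes: also update (reinstall) the module
docker-compose run --rm odoo -- -d odoo_db -u tutorial --stop-after-init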
This fixed my problem. We need to create "odoo.conf" in ./config:
[options]
addons_path = /mnt/extra-addons,/usr/lib/python2.7/dist-packages/odoo/addons
data_dir = /var/lib/odoo
auto_reload = True
First of all, create a directory containing the docker-compose.yml file and these directories:
/addons
/volumes/odoo/sessions
/volumes/odoo/filestore
/docker-compose.yml
Put this code in your docker-compose.yml file:
version: '3'
services:
web:
image: odoo:12.0
depends_on:
- db
ports:
- "8069:8069"
volumes:
- odoo-web-data:/var/lib/odoo
- ./volumes/odoo/filestore:/opt/odoo/data/filestore
- ./volumes/odoo/sessions:/opt/odoo/data/sessions
- ./addons:/mnt/extra-addons
db:
image: postgres:10
environment:
- POSTGRES_DB=postgres
- POSTGRES_PASSWORD=odoo
- POSTGRES_USER=odoo
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- odoo-db-data:/var/lib/postgresql/data/pgdata
volumes:
odoo-web-data:
odoo-db-data:
Then, in a terminal, run the following to build your environment:
docker-compose up
docker-compose start or docker-compose stop
If you want to add a custom module, just put it in the addons directory, click Update Apps List in the Apps module, and restart Docker. After this, disable all filters in the search bar; normally, if you type the module name in the search bar, your custom module will show up below.
My docker-compose file supports running Odoo 15 on Docker:
version: '3'
services:
postgres:
image: postgres:13
container_name: postgres
restart: always
ports:
- "5432:5432"
environment:
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGRES_DB}
PGDATA: /var/lib/postgresql/data
volumes:
- ./data/postgres:/var/lib/postgresql/data
odoo:
image: odoo:15
container_name: odoo
restart: always
depends_on:
- postgres
ports:
- "8069:8069"
- "8072:8072"
environment:
HOST: postgres
USER: ${POSTGRES_USER}
PASSWORD: ${POSTGRES_PASSWORD}
volumes:
- ./etc/odoo:/etc/odoo
- ./data/addons:/mnt/extra-addons
- ./data/odoo:/var/lib/odoo
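These ${...} variables are read from a .env file placed next to docker-compose.yml; a sketch with hypothetical values:
POSTGRES_USER=odoo
POSTGRES_PASSWORD=odoo        # hypothetical; use a real secret
POSTGRES_DB=postgres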